Article

WIRE: A Weighted Item Removal Method for Unsupervised Rank Aggregation

by
Leonidas Akritidis
and
Panayiotis Bozanis
*,†
School of Science and Technology, International Hellenic University, 57001 Thessaloniki, Greece
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Algorithms 2025, 18(6), 362; https://doi.org/10.3390/a18060362
Submission received: 29 April 2025 / Revised: 10 June 2025 / Accepted: 11 June 2025 / Published: 12 June 2025
(This article belongs to the Section Randomized, Online, and Approximation Algorithms)

Abstract
Rank aggregation deals with the problem of fusing multiple ranked lists of elements into a single aggregate list with improved element ordering. Such cases are frequently encountered in numerous applications across a variety of areas, including bioinformatics, machine learning, statistics, information retrieval, and so on. The weighted rank aggregation methods consider a more advanced version of the problem by assuming that the input lists are not of equal importance. In this context, they first apply ad hoc techniques to assign weights to the input lists, and then, they study how to integrate these weights into the scores of the individual list elements. In this paper, we adopt the idea of exploiting the list weights not only during the computation of the element scores, but also to determine which elements will be included in the consensus aggregate list. More specifically, we introduce and analyze a novel refinement mechanism, called WIRE, that effectively removes the weakest elements from the less important input lists, thus improving the quality of the output ranking. We experimentally demonstrate the effectiveness of our method in multiple datasets by comparing it with a collection of state-of-the-art weighted and non-weighted techniques.

1. Introduction

Rank aggregation algorithms accept as input a set of ordered element lists submitted by a group of rankers. Then, they process these lists and return a consensus ranking with enhanced element ordering. Such problems are quite common in numerous research fields, such as bioinformatics [1,2], sports [3], recommendation systems [4,5], information retrieval [6,7,8], ballots [9,10], and so forth.
Most existing methods consider that input lists are of equal importance. Hence, they do not investigate issues where, for example, a list is submitted by an expert or a non-expert ranker, or a spammer submits a preference list in order to promote or downgrade specific elements. Although this non-weighted approach aligns with several application areas (e.g., fair elections), in other cases, such as collaborative filtering systems, it usually leads to output lists that suffer from bias, manipulation, and low-quality element ranking.
In contrast, the weighted rank aggregation methods take the aforementioned issues into account and attempt to assess the input list quality before they fuse them into a single ranking [11,12,13]. By applying unsupervised exploratory analysis techniques, they assign weights to the input lists with the aim of evaluating their significance. In the sequel, they exploit these weights during the computation of the scores of the respective list elements. More specifically, the list weights are embodied in the individual element scores, thus affecting their ranking in the output list.
In the vast majority of cases, this is how the weighted methods exploit the learned list weights. However, the distance-based method of Akritidis et al. [13] introduced a more elaborate approach. First, an iterative algorithm was applied to estimate the importance of the input lists by measuring their distances from the generated output list. Then, the distance-based ranker weights were used in two ways: (i) to compute the element scores in a manner that promotes those submitted by expert rankers, and (ii) to prevent low-quality elements submitted by non-expert rankers from entering the final aggregate list. In this context, the learned weights were used to determine the number of elements that each input list contributed to the final ranking.
The experiments of [13] demonstrated that weight-based list pruning can yield promising results, especially when the input lists are long. However, for input lists that consist of only a few items, the gains are considerably limited, or even reversed in some cases. These findings indicate that a more sophisticated approach is required to fully exploit the learned ranker weights.
To alleviate this issue, this paper introduces a novel, unsupervised method, called WIRE, that regulates the contribution of each input list in the formulation of the aggregate list. In contrast to the suggestions of [13], our strategy does not simply remove elements from the bottom of the input lists, but it carefully removes the weakest elements, with respect to their overall score, from the less important lists, with respect to their weights. In this way, it constructs aggregate lists of improved quality, even when the input preference lists are short. In addition, note that this refinement mechanism depends only on the input lists, the consensus ranking, and the weights of the involved rankers. Since this information is directly accessible either from the input or from the base aggregator, two major advantages of WIRE are derived. First, it fits into any weighted rank aggregation method and can be applied as a post-processing step. Second, its independence from individual element or list attributes (e.g., ranks, scores, lengths, etc.) renders it capable of handling partial rankings. These design choices increase our method’s flexibility and applicability.
The remainder of this paper is organized as follows. Section 2 provides a brief overview of the relevant literature on weighted rank aggregation. Several preliminary elements are discussed in Section 3, whereas the proposed weighted item selection algorithm is presented and analyzed in Section 4. Section 5 describes the experimental evaluation and discusses the acquired results. The paper is concluded in Section 6 with the most significant findings and points for future research.

2. Related Work

The need for designing fair elections is quite old and attracted the attention of researchers many decades ago. The Borda Count [14] and the Condorcet criterion [15] are among the oldest rank aggregation methods that were introduced for this purpose. Nowadays, rank aggregation has numerous applications in a wide variety of research areas, including information retrieval, bioinformatics, ballots, etc. [16,17,18].
In [19], Dwork et al. introduced four rank aggregation methods based on Markov chains to combine rankings coming from different search engines. Originally proposed to combat spam entries, the four methods are derived by considering different forms of the chain’s transition matrix. Similarly, Ref. [20] adopted a Markov chain framework to compare the results of multiple microarray experiments expressed as ranked lists of genes.
The robust rank aggregation method of [21] proposed an unbiased probabilistic technique to process the results of various genomic analysis applications. More specifically, it compared the actual ranking of an element with the ranking provided by a null model that dictated random ordering. This strategy renders it robust to outliers, noise, and errors.
On the other hand, the order-based methods examine the relevant rankings of the items in the input lists, assigning them scores based on their pairwise wins, ties, and losses. The Condorcet method was among the first approaches to adopt this logic [15]. Similarly, the well-known Kemeny–Young method is based on a matrix that also counts the pairwise preferences of the rankers [22,23]. Then, it uses this matrix to assign scores to each possible permutation of the input elements. Its high computational complexity—the problem is NP-Hard even for four rankers—rendered it inappropriate for datasets having many input lists. More recently, the outranking approach of [24] introduced four threshold values to quantify the concordance or discordance between the input lists.
The aforementioned methods have been proven to be quite effective in a variety of tasks. However, they do not take into consideration the level of expertise of the rankers who submit their preference lists. Consequently, they are prone to cases where a ranker may attempt to manipulate the output ranking by submitting spam entries, or simply, to rankers with different importance degrees.
The work of Pihur et al. proposed a solution that optimized the weighted distances among the input lists [25]. Based on the traditional Footrule distance, the authors established a function that combined the list weights with the individual scores that were assigned to each element by its respective rankers. However, in many cases, these scores are unknown (that is, they are not included in the input lists); therefore, the distance computation is rendered intractable.
The weighted method of Desarkar et al. modeled the problem by constructing preference relation graphs [26]. The list weights were subsequently computed by verifying the validity of several custom majoritarian rules. Furthermore, Chatterjee et al. introduced another weighted technique for crowd opinion analysis applications by adopting the principles of agglomerative clustering [27].
More recently, an iterative distance-based method called DIBRA was published in [13]. At each iteration, DIBRA computes the distances of the input lists from an aggregate list that is derived by applying a simple baseline method like the Borda Count. The lists that are proximal to the aggregate list are assigned higher weights, compared to the most distant ones. This process is repeated until all weights converge and the aggregate list is stabilized. Interestingly, Ref. [13] also introduced the idea of exploiting the learned weights to limit the contribution of the weakest rankers in the formulation of the aggregate list. In particular, the number of elements that participate in the aggregation process is determined by the weight of the respective preference list.
This simple pruning mechanism has been proven to be beneficial in several experiments that involved long input lists. However, for shorter lists, the gains were either minimized or reversed. This study introduces an item selection method that performs significantly better in all cases.

3. Weighted Rank Aggregation

A typical rank aggregation scenario involves a group of n rankers U = {u_1, u_2, ..., u_n} who submit n preference lists l(u_1), ..., l(u_n), and an aggregation algorithm A that combines these preference lists with the aim of finding a consensus ranking L. In this paper, we consider the most generic version of the problem, where the preference lists (i) can be of variable length |l(u)| and (ii) may include only a subset of the entire universe S of the elements; such lists are called partial. In contrast, full lists contain permutations of all elements of S, so they are of equal length.
A typical non-weighted aggregator A receives the preference lists as input and assigns a score value s_i to each element i, according to its ranking in the input lists, the number of pairwise wins and losses, or other criteria. Then, it outputs the unique elements of the lists sorted in score order, thus producing a consensus ranked list L as follows:
L = A(l(u_1), ..., l(u_n)).        (1)
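For concreteness, a minimal Python sketch of such a non-weighted aggregator is given below. It implements the classic Borda Count (used later as a baseline in Section 5), where each element collects points according to its position in every preference list and the consensus ranking sorts the elements by their accumulated scores. It is our own illustration, not the FLAGR implementation.

from collections import defaultdict

def borda_aggregate(preference_lists):
    # Minimal non-weighted aggregator A: fuse ranked lists with the Borda Count.
    scores = defaultdict(float)
    for ranked in preference_lists:
        m = len(ranked)
        for position, element in enumerate(ranked):
            # Higher-ranked elements receive more points; absent elements receive none.
            scores[element] += m - position
    # The consensus list L contains the unique elements sorted by decreasing score.
    return sorted(scores, key=scores.get, reverse=True)

L = borda_aggregate([["a", "b", "c"], ["b", "a", "d"], ["a", "d", "c"]])
# L == ['a', 'b', 'd', 'c']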
In contrast, a weighted aggregator A_w employs an unsupervised mechanism E that evaluates the importance w(u) of each ranker u, by taking into account its preferences and several other properties. In the sequel, it integrates these weights in the computation of the element scores as follows:
(w(u_1), ..., w(u_n)) = E(l(u_1), ..., l(u_n)),        (2)
L = A_w(w(u_1), ..., w(u_n), l(u_1), ..., l(u_n)).        (3)
In [13], the authors also introduced a post-processing technique P that removes the low-ranked elements from each input list based on the weights of their respective rankers as follows:
L = P(L, w(u_1), ..., w(u_n)).        (4)
In this study, we propose a new list pruning mechanism with the aim of improving the quality of the aggregate list L . The block diagram of Figure 1 illustrates the sequential flow of the aforementioned procedure.

4. WIRE: Algorithm Design and Analysis

Algorithm description. Most weighted rank aggregation methods integrate the learned weights into the scores of the individual elements, with the sole objective of improving the quality of their produced ranking. As mentioned earlier, a representative exception is the list pruning policy of [13], which uses the learned ranker weights to remove the weakest elements from the input lists. In the sequel, we introduce a new method called WIRE (Weighted Item Removal) that utilizes the learned list weights to effectively determine the contribution of each input preference list in the formulation of the output list L .
WIRE introduces a novel mechanism that identifies appropriate list elements for removal. Notably, this mechanism depends only on the input preference lists, the ranker weights, and the consensus ranking. This increases the flexibility of our algorithm, since its independence from other factors (e.g., the scores of the input list elements) enables its attachment to any weighted rank aggregation method A w as a post-processing step.
On the other hand, if A w cannot estimate the ranker weights in a robust manner, then the ability of WIRE to identify the weakest elements will inevitably decrease. For this reason, we present WIRE in the context of DIBRA, the distance-based weighted method that was introduced in [13]. The experiments of [13] have shown that DIBRA is superior to other weighted algorithms (e.g., [26,27]), and this was also verified in the present experiments. Therefore, even though our method can be applied in combination with any weighted rank aggregation method, the capability of DIBRA in distinguishing the expert rankers from the non-experts renders it a suitable basis for applying WIRE.
In short, DIBRA capitalizes on the concept that the importance of a ranker is determined by the distance of its preference list from the consensus ranking L. In this spirit, it iteratively adjusts the weight w(u) of a ranker u by using the following equation:
w_i(u) = w_{i-1}(u) + exp(−i · d(l(u), L_{i-1})),        (5)
where w_i(u) and L_i are the weight of ranker u and the aggregate list after i iterations, respectively. Moreover, d represents a function that quantifies the distance between the preference list l(u) and the aggregate list L. The form of Equation (5) guarantees that the weights converge after several iterations, usually between 5 and 20.
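A compact sketch of this iterative scheme is shown below. It is our own illustration rather than the reference DIBRA implementation: the base aggregator and the distance function d are passed in as callables, the initial weights are assumed to be uniform, and the loop stops when the largest weight change drops below a tolerance, reflecting the convergence behavior described above.

import math

def dibra_weights(preference_lists, aggregate, distance, max_iter=20, tol=1e-4):
    # Illustrative sketch of the iterative weight update of Equation (5).
    # `aggregate(lists, weights)` and `distance(list, L)` are assumed callables
    # standing in for the base aggregator A_w and the list distance d; they are
    # not part of any existing library.
    n = len(preference_lists)
    weights = [1.0] * n                                 # assumed uniform start
    L = aggregate(preference_lists, weights)            # initial consensus ranking
    for i in range(1, max_iter + 1):
        new_weights = [w + math.exp(-i * distance(l, L))
                       for w, l in zip(weights, preference_lists)]
        delta = max(abs(a - b) for a, b in zip(new_weights, weights))
        weights = new_weights
        L = aggregate(preference_lists, weights)        # refresh the aggregate list
        if delta < tol:                                 # weights have converged
            break
    return weights, L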
After the voter weights have been calculated and the final aggregate list L has been constructed, WIRE is deployed to further improve the quality of L . The proposed method refines the input preference lists by removing their weakest elements and operates in five phases, according to Algorithm 1.
Algorithm 1 WIRE: Weighted Item Removal
Input: Rankers u_1, ..., u_n; ranker weights w(u_1), ..., w(u_n); preference lists l(u_1), ..., l(u_n); number of buckets B < n; hyper-parameter δ_1
Output: Aggregate list L.
1:  W ← sort(w(u_1), ..., w(u_n))                      ▹ Sort the ranker weights
2:  b, i ← 1
3:  while b ≤ B do                                     ▹ Compute the confidence scores of all buckets
4:      C_b ← δ_1 + (1 − δ_1) · exp(−(b − 1) · B/n)    ▹ Equation (6)
5:      b ← b + 1
6:  BucketCount ← 1
7:  while i ≤ n do                                     ▹ Group the rankers into the buckets
8:      if i > n · BucketCount / B then
9:          BucketCount ← BucketCount + 1
10:     b(u_i) ← BucketCount                           ▹ Place the ranker u_i in bucket b(u_i)
11:     C_{b(u_i)} ← C_{BucketCount}                   ▹ u_i inherits the confidence score of its bucket b(u_i)
12:     i ← i + 1
13: for each element e ∈ L do                          ▹ Compute the preservation scores of all elements of L
14:     P_e ← 0, i ← 1
15:     while i ≤ n do                                 ▹ Sum the confidence scores of all rankers who preferred e
16:         if e ∈ l(u_i) then
17:             P_e ← P_e + C_{b(u_i)}                 ▹ Preservation score, Equation (7)
18:         i ← i + 1
19: i ← 1
20: while i ≤ n do                                     ▹ Remove the elements with the lowest preservation scores from each list
21:     R_{b(u_i)} ← ⌈|l(u_i)| · C_{b(u_i)}⌉           ▹ Protection score, Equation (8)
22:     H ← BuildMinHeap(l(u_i))                       ▹ Build a min-heap from the elements of the list
23:     RemovedElements ← 0
24:     while RemovedElements < |l(u_i)| − R_{b(u_i)} do       ▹ Remove |l(u_i)| − R_{b(u_i)} elements
25:         e ← H.pop()                                ▹ Pop the heap's head e
26:         l(u_i) ← l(u_i) \ {e}                      ▹ Remove e
27:         RemovedElements ← RemovedElements + 1
28:     i ← i + 1
29: L ← A_w(l(u_1), ..., l(u_n))                       ▹ Rerun the weighted aggregator A_w on the new lists
30: return L
Initially, the rankers are sorted in decreasing weight order. Then, they are distributed into a set of B < n equally sized buckets, in such a manner that each bucket contains n/B rankers. Since the ranker weights are sorted in descending order, the first bucket will contain the expert rankers, that is, those who received the n/B highest weights; the second bucket will include the rankers with lower weights, and so on. We now introduce an exponentially decaying confidence score for each bucket b = 1, ..., B as follows:
C_b = δ_1 + (1 − δ_1) · exp(−(b − 1) · B/n),        (6)
where δ_1 ∈ [0, 1] is a hyper-parameter that specifies the minimum confidence score of a bucket. The confidence scores of Equation (6) are inherited by all rankers belonging to a particular bucket, as shown in step 11 of Algorithm 1. Therefore, the rankers of the first bucket (b = 1) receive a confidence score C_1 = 1, regardless of the total number of buckets B. Similarly, all rankers in the second bucket (b = 2) are assigned a confidence score C_2 = δ_1 + (1 − δ_1) · exp(−B/n), and so on. Notice that C_b ∈ [0, 1].
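The bucketing and confidence-score computation of steps 1–12 can be sketched as follows (an illustrative rendition of ours, not the FLAGR code); the toy call reproduces the setting of Figure 2 with n = 6 rankers, B = 3 buckets, and δ_1 = 0.5.

import math

def confidence_scores(weights, B, delta1):
    # Group the rankers into B equally sized buckets by decreasing weight and
    # assign each ranker the confidence score of its bucket, Equation (6).
    n = len(weights)
    order = sorted(range(n), key=lambda u: weights[u], reverse=True)
    bucket_scores = [delta1 + (1 - delta1) * math.exp(-b * B / n) for b in range(B)]
    conf = {}
    for rank, u in enumerate(order):
        b = min(rank * B // n, B - 1)        # bucket index of the (rank+1)-th best ranker
        conf[u] = bucket_scores[b]
    return conf

# Toy setting of Figure 2: n = 6 rankers, B = 3 buckets, delta_1 = 0.5.
conf = confidence_scores([0.9, 0.8, 0.7, 0.6, 0.5, 0.4], B=3, delta1=0.5)
# Two rankers per bucket; C_1 = 1.0, C_2 ≈ 0.803, C_3 ≈ 0.684.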
Subsequently, for each element e ∈ L, we introduce the following preservation score:
P_e = Σ_{u: e ∈ l(u)} C_{b(u)} = Σ_{u: e ∈ l(u)} [δ_1 + (1 − δ_1) · exp(−(b(u) − 1) · B/n)],        (7)
where b(u) denotes the bucket in which the ranker u has been placed. Equation (7) indicates that the preservation score of an item e ∈ L depends on the sum of the confidence scores of the rankers who included it in their preference lists (steps 15–17). Thus, if the item was preferred by numerous experts, then its preservation score will be high. In contrast, the elements submitted by a few non-experts will be assigned lower preservation scores. Moreover, notice the independence of P_e from other parameters, e.g., the individual rankings or the number of pairwise wins and losses. This design choice renders the preservation scores robust to manipulation from spammers or low-quality preferences originating from non-expert users. The left diagram in Figure 2 illustrates an example of how confidence and preservation scores are computed for n = 6 rankers and B = 3 buckets.
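Continuing the previous sketch, the preservation scores of Equation (7) reduce to a sum of confidence scores over the rankers whose lists contain each element; the helper below is our own illustration.

def preservation_scores(aggregate_list, preference_lists, conf):
    # Preservation score of Equation (7): the sum of the confidence scores of
    # the rankers whose preference lists contain the element.
    return {e: sum(conf[u] for u, ranked in enumerate(preference_lists) if e in ranked)
            for e in aggregate_list}

# An element preferred by two rankers of bucket 1 and one ranker of bucket 2
# receives P_e = 1.0 + 1.0 + 0.803 = 2.803, as in the example of Figure 2.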
The preservation score is a measure of the importance of an element e and determines whether it will be included in the aggregate list or discarded. Formally, the higher the preservation score of e, the higher the probability that e will be preserved in L. For this reason, the protection score is introduced to determine the number of elements that will be taken into account during the aggregation. Similarly to the confidence score, the protection score R_{b(u)} is defined for each ranker u according to the following equation:
R_{b(u)} = ⌈|l(u)| · C_{b(u)}⌉.        (8)
Equation (8) defines the protection score as an integer threshold that denotes the number of elements of l(u) that will contribute during the aggregation process. In other words, it tells us that the |l(u)| − R_{b(u)} weakest elements must be removed from l(u). The item removal process is shown in steps 20–28 of Algorithm 1. For each input list, we build a min-heap structure H to efficiently identify the items to be removed without having to sort l(u). H is built by inserting all elements of l(u) and keeps at its head the element with the minimum preservation score. Then, the |l(u)| − R_{b(u)} items with the lowest preservation scores are identified by performing an equal number of pop operations on H (steps 24–28). The diagram on the right in Figure 2 shows the removal of the two weakest elements from an exemplary input list, based on the preservation scores of its elements.
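The pruning of steps 20–28 can be sketched with Python's heapq module playing the role of the min-heap H; the ceiling in Equation (8) and the use of plain string element identifiers are our assumptions in this illustration.

import heapq
import math

def prune_list(ranked, conf_u, P):
    # Keep the R = ceil(|l(u)| * C_b(u)) elements with the highest preservation
    # scores (Equation (8)); the remaining ones are popped from a min-heap.
    R = math.ceil(len(ranked) * conf_u)               # protection score
    to_remove = max(0, len(ranked) - R)
    heap = [(P[e], e) for e in ranked]                # min-heap keyed on P_e
    heapq.heapify(heap)
    removed = {heapq.heappop(heap)[1] for _ in range(to_remove)}
    return [e for e in ranked if e not in removed]    # preserve the original order

Ties in the preservation scores are broken arbitrarily in this sketch, whereas the worked example below keeps the higher-ranked element among tied candidates.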
After the removal of the weakest elements from each input list, the aggregator A_w is executed once more on the new lists to obtain the enhanced aggregate list L (step 29).
Illustrative Examples. Figure 2 illustrates two examples of how the stages of Algorithm 1 operate in practice. The left part (a) shows the distribution of n = 6 rankers in B = 3 equally sized buckets. Initially, each ranker u submits a preference list l(u), and a weighted aggregator A_w constructs a consensus ranking L while simultaneously assigning weights w(u_1), ..., w(u_6) to all rankers. Setting δ_1 = 0.5, the confidence scores for each bucket b = 1, 2, 3 are computed using Equation (6). Subsequently, the preservation scores of the elements of the aggregate list are computed with Equation (7). For example, the element e_1 ∈ L appeared in the input lists of u_1, u_2, and u_3, which have been previously grouped into buckets 1, 1, and 2, respectively. Therefore, its preservation score is computed by summing the confidence scores of the corresponding buckets: P_{e_1} = C_{b(u_1)} + C_{b(u_2)} + C_{b(u_3)} = 1 + 1 + 0.803 = 2.803.
Having computed the preservation scores, we can now return to the input lists and remove the weakest elements. First, we need to compute the protection scores for each ranker. The right part of Figure 2 displays an exemplary preference list l(u_6). Since u_6 has been grouped into bucket b = 3, it follows that C_{b(u_6)} = 0.684. Therefore, Equation (8) indicates that R_{b(u_6)} = ⌈|l(u_6)| · C_{b(u_6)}⌉ = ⌈8 · 0.684⌉ = 6 elements of l(u_6) should be preserved, and 2 elements should be removed. According to the figure, the three elements with the lowest preservation scores are σ_2, σ_4, and σ_7. Of these, σ_2 is preserved because it was ranked higher than the other two; σ_4 and σ_7 are eventually removed from the list.
Complexity Analysis. In the sequel, we will prove the following lemma:
Lemma 1. 
The time complexity of WIRE is upper-bounded by O(n · |L| · log|L|), given that n = O(2^|L|), while it needs Θ(n + |L|) space.
Proof. 
The cost of sorting the ranker weights (step 1) is O(n log n). The confidence score calculation (steps 3–5) takes O(B) time. The cost of grouping the n rankers into buckets (steps 6–12) is O(n). Calculating the preservation scores of all elements e of L has worst-case complexity O(Σ_{e∈L} |{u : e ∈ l(u)}|) = O(n · |L|), since each element may appear in all lists.
The last block of Algorithm 1, between steps 20 and 28, includes the construction of n min-heaps H, with a worst-case cost of O(n · max_u |l(u)|), and at most n · max_u(|l(u)| − R_{b(u)}) element removals, each requiring O(log max_u |l(u)|) time. Hence, the total worst-case cost of the entire block is O(n · max_u(|l(u)| − R_{b(u)}) · log max_u |l(u)|).
Consequently, the overall cost of WIRE is expressed as follows:
O(n log n + B + n + n · |L| + n · max_u |l(u)| + n · max_u(|l(u)| − R_{b(u)}) · log max_u |l(u)|),
which can be upper-bounded by O(n · |L| · log|L|), since B = o(n), max_u |l(u)| = O(|L|), and n = O(2^|L|). Please note that this is a quite pessimistic upper bound, since it assumes that every l(u) will be excluded from L in its entirety.
The space complexity is linear in the number of rankers and the size of the initial aggregate list L, since we need to keep all C_{b(u_i)} values and all P_e scores, while the maximum size of the auxiliary heap is linear in the length of the longest l(u), which is O(|L|). □

5. Experiments

This section presents the experimental evaluation of the proposed method. Initially, we describe the datasets, the comparison framework, and the measures that were used during this evaluation. In the sequel, we present and discuss the performance of WIRE against a variety of well-established rank aggregation methods in terms of both effectiveness and running times. We also present a study of how the two hyper-parameters of WIRE, B and δ_1, affect its performance. Finally, the statistical significance of the presented measurements is verified in the last part of the section.
The implementation code of the proposed method has been embodied in the FLAGR 1.0.20 and PyFLAGR 1.0.20 (https://flagr.site (accessed on 10 June 2025)) libraries [28]. Both libraries are available for download from GitHub (https://github.com/lakritidis/FLAGR), whereas PyFLAGR can be installed from the Python Package Index (https://pypi.org/project/pyflagr/).

5.1. Datasets

The list pruning strategy of [13] was shown to be effective in scenarios that included long preference lists. However, the experiments showed that these benefits were diminished, and in some cases reversed, in applications where the preference lists were short. For this reason, in this study, we aimed to examine the performance of WIRE by using multiple datasets with diverse characteristics.
In this context, we synthesized six case studies with different numbers of rankers and variable list lengths. We used RASDaGen, an open-source dataset generation tool (https://github.com/lakritidis/RASDaGen), to create six synthetic datasets that simulate multiple real-world applications. In particular, we created datasets with long lists to simulate applications related to Bioinformatics (e.g., gene rankings) or Information Retrieval (e.g., metasearch engines). In contrast, the datasets with shorter lists can be considered representatives of collaborative filtering systems.
In general, finding high-quality datasets with objective relevance judgments is a challenging task. The Text Retrieval Conference (TREC) (https://trec.nist.gov/) satisfies these quality requirements, since it annually organizes multiple diverse tracks, accepts ranked lists from the participating groups, and employs specialists to judge the relevance of the submitted items. We used the following two real-world datasets originating from TREC: the Clinical Trials Track of 2022 (CTT22) and the NeuCLIR Technical Docs Track of 2023 (NTDT23). The first one includes 50 topics and 41 rankers, whereas the second one includes 41 topics and 51 rankers. In both datasets, the maximum length of a preference list is limited to 1000 elements.
Table 1 illustrates the attributes of our benchmark datasets. The third column denotes the number of topics for which the rankers submitted their preference lists. The fourth and fifth columns indicate the number of rankers and the length of their submitted lists, respectively. All synthetic datasets have been made publicly available on Kaggle (https://www.kaggle.com/datasets/lakritidis/rankaggregation).

5.2. Comparison Framework and Evaluation Measures

The effectiveness of WIRE was compared against a wide variety of non-weighted and weighted rank aggregation methods. The first set included Borda Count and CombMNZ as they were formalized in [29], the first and the fourth Markov chain-based methods (MC1, MC4) of [19], the Markov chain framework (MCT) of [20], the Robust Rank Aggregation (RRA) method of [21], the Outranking Approach (OA) of [24], the traditional Condorcet [15], and Copeland Winners [30].
The second set included the following four state-of-the-art weighted methods: DIBRA with and without list pruning (termed DIBRA-P and DIBRA, respectively) [13], the weighted approach of [26] based on Preference Relation Graphs (PRGs), and the Agglomerative Aggregation Method (AAM) of [27]. In all methods, we used the same hyper-parameter values as those suggested in the respective studies. Our item removal method was tested in combination with DIBRA, and we refer to it as DIBRA-WIRE in the discussion that follows. In all experiments, we kept the values of the two hyper-parameters of WIRE constant. Therefore, the number of buckets was fixed to B = 5, and we set δ_1 = 0.5.
The quality of the output lists was measured by employing the following three widespread IR measures: Precision, Normalized Discounted Cumulative Gain (nDCG), and Mean Average Precision (MAP). The first two are computed at specific cut-off points of the aggregate list L, and we refer to them as P@k and N@k, respectively. Precision@k is defined as follows:
P@k = (1/k) · Σ_{i=1}^{k} rel(L_i),
where rel(L_i) is a binary indicator of the relevance of the i-th element of L (1/0 for relevant/non-relevant).
On the other hand, nDCG is defined as the Discounted Cumulative Gain (DCG) of L divided by the Discounted Cumulative Gain of an imaginary ideal list (iDCG) that has all relevant elements ranked at its highest positions, as follows:
DCG@k = Σ_{i=1}^{k} (2^{rel(L_i)} − 1) / log_2(i + 1),
N@k = DCG@k / iDCG@k.
Finally, Mean Average Precision (MAP) is defined as the mean of the average precision scores over a set of topics T as follows:
MAP = (1/|T|) · Σ_{t=1}^{|T|} P̄(t) = (1/|T|) · Σ_{t=1}^{|T|} [(1/R_t) · Σ_{k=1}^{|L|} rel(L_k) · P@k],
where L_k is the k-th element of the aggregate list L, P@k denotes the Precision measured at L_k, and R_t is the total number of relevant entries for the topic t ∈ T.
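For reference, the three measures can be computed over binary relevance vectors with the short, self-contained sketch below (our own helper functions, not part of any existing evaluation toolkit).

import math

def precision_at_k(rels, k):
    # P@k over a binary relevance vector aligned with the aggregate list.
    return sum(rels[:k]) / k

def ndcg_at_k(rels, k):
    # nDCG@k: DCG of the list divided by the DCG of the ideal ordering.
    dcg = sum((2 ** r - 1) / math.log2(i + 2) for i, r in enumerate(rels[:k]))
    ideal = sorted(rels, reverse=True)[:k]
    idcg = sum((2 ** r - 1) / math.log2(i + 2) for i, r in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

def mean_average_precision(topics):
    # MAP over a set of topics, each given as a binary relevance vector.
    averages = []
    for rels in topics:
        R = sum(rels)
        if R == 0:
            continue
        averages.append(sum(precision_at_k(rels, k + 1)
                            for k, r in enumerate(rels) if r) / R)
    return sum(averages) / len(averages) if averages else 0.0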

5.3. Results and Discussion

In this subsection, we present the results of the experimental evaluation of WIRE. In Table 2, Table 3 and Table 4, we report the values of the aforementioned evaluation measures for the 8 datasets of Table 1. More specifically, the second column illustrates the MAP values achieved by the examined methods, the next 5 columns hold the Precision values for the top-5 elements of the produced aggregate list L , whereas the last 5 columns denote the nDCG values also for the top-5 elements of L .
Now, let us discuss these numbers. At first, the results indicate that DIBRA was the most effective weighted rank aggregation method, achieving higher MAP scores than the other two weighted methods, PRG and AAM. Interestingly, in all eight cases, our proposed WIRE method managed to further improve the performance of DIBRA by a margin between 1% and 20% (in the FESO dataset). In fact, in most of the examined benchmark datasets, the combination of DIBRA and WIRE outperformed all the competitive aggregation methods, weighted or not.
In the MOLO dataset with the long lists of 100 items, the MAP gain of DIBRA-WIRE compared to DIBRA was roughly 2% (0.250 vs. 0.244). In contrast, the simple list pruning method of [13] offered only infinitesimal improvements in terms of MAP scores. In addition, its effect on the quality of the top-5 elements of the aggregate list was negative in terms of both Precision and nDCG. MC1 and MC4 were also quite effective in this dataset.
The next three datasets, MASO, FESO, and MOSO, were of particular interest, because they involved short lists, and the simple pruning method of [13] is known to perform poorly in such cases. In this experiment, we examined three different scenarios, where variable populations of rankers (i.e., 100, 10, and 50, respectively) submitted short preference lists comprising 30, 10, and 30 items, respectively.
The results were particularly satisfactory in all these scenarios. Compared to DIBRA, our proposed item selection method achieved superior performance in terms of MAP, Precision, and nDCG. For example, the MAP improvements over DIBRA were roughly 1% for MASO and MOSO, and 20% for FESO. Improvements of similar magnitudes were also observed for Precision and nDCG, especially for P@1 and N@1. As expected, the Mean Average Precision of DIBRA-P was slightly worse compared to that of DIBRA for MASO (0.323 vs. 0.324) and significantly worse in FESO (0.272 vs. 0.332). DIBRA and DIBRA-WIRE were also superior to all the other aggregation methods in all three datasets. The only exception to this observation was MC1 in the FESO dataset, which was superior to DIBRA but inferior to the combination of DIBRA-WIRE.
The fifth case study, abbreviated MAVSO, introduced a scenario that resembles a recommender system, namely, numerous rankers submitting very short preference lists of 5 elements. The Outranking Approach of [24] exhibited the best performance in this test, achieving the highest MAP (0.416), P@1 = 0.400, and N@1 = 0.400. Our proposed DIBRA-WIRE method was the second best, achieving a slightly worse MAP (0.414), P@1 = 0.350, and N@1 = 0.350. The original DIBRA without list pruning outperformed the other two weighted methods, PRG and AAM, but the application of the simple list pruning method of [13] had a strong negative impact on its performance.
The sixth test resembled that of a metasearch engine, with few rankers submitting very long ranked lists of 200 items. Once again, DIBRA-WIRE achieved top performance in terms of MAP (0.789), P@1 = 0.800, and N@1 = 0.800. The method of Copeland Winners and MC4 scored the second highest MAP (i.e., 0.786), but in terms of Precision and nDCG, the aggregate lists created by MCT, RRA, and DIBRA-P were of higher quality. Regarding the weighted methods, DIBRA was more effective than PRG and AAM in terms of Precision and nDCG.
In general, in the six synthetic datasets, the Agglomerative AAM method achieved very low MAP values, but it managed to output decent top-5 rankings. This behavior indicates its weakness in generating high-quality rankings in their entirety. The Preference Relations Graph method was superior to AAM but inferior to DIBRA. On the other hand, the quite old Markov chain-based algorithms were quite strong opponents on datasets with long preference lists (namely, MOLO and FEVLO), but their performance degraded when they were applied to short lists. On average, the Outranking Approach achieved higher MAP values than the Markov Chain methods, but their top-5 rankings were of inferior quality. As mentioned earlier, DIBRA was the most effective rank aggregation method, and our WIRE post-processing step further improved its performance.
Regarding the two real-world datasets, namely, CTT22 and NTDT23, the proposed method again proved to be beneficial, since it managed to improve the retrieval effectiveness of the baseline DIBRA, even if by small margins. On the other hand, the list pruning algorithm of [13] had almost no visible impact in either case. The order-based methods (i.e., Condorcet, Copeland, and the Outranking Approach) were quite strong opponents to DIBRA and DIBRA-WIRE, producing top-5 aggregate lists of very high quality. In contrast, the three Markov chain-based methods (which performed quite well in the six synthetic datasets) were both ineffective and slow in these two experiments.
The other two weighted aggregation methods had significant difficulties in completing these tests. More specifically, AAM failed to generate consensus rankings in reasonable time, since it required more than 1 h to process a single topic from each dataset. In general, AAM was by far the slowest method, due to its computationally expensive hierarchical nature. For this reason, and since the TREC datasets were significantly larger than the synthetic ones, Table 4 does not contain results from AAM. On the other hand, PRG is a graph-based, memory-intensive algorithm, and this became a major bottleneck in the two large TREC datasets. Therefore, a memory starvation error prevented its execution in CTT22. Nevertheless, it managed to complete the aggregation task in all 41 topics of NTDT23, achieving a MAP value of 0.307, 17% lower than that of DIBRA.

5.4. Execution Times

In this subsection, we examine how WIRE affects the execution time of a weighted rank aggregation method and, in particular, of DIBRA. Table 5 displays the running times of the involved rank aggregation methods in the benchmark datasets of Table 1. The presented values reveal the running times of each method per topic in milliseconds (recall that each dataset comprises multiple topics).
These measurements indicate that WIRE imposes only small (and in some cases, infinitesimal) delays on the aggregation process. More specifically, in the six synthetic datasets, Algorithm 1 added on average only 0.5–2.2 milliseconds to the processing of each query, demonstrating its high efficiency. In fact, DIBRA-WIRE was much faster than (i) the other weighted algorithms, PRG and AAM; (ii) all the order-based techniques (Condorcet and Copeland methods, Outranking Approach); and (iii) all the Markov chain-based methods (MC1, MC4, MCT). It was slightly slower only than simple linear combination methods like Borda Count and CombMNZ.
As mentioned earlier, in the large TREC datasets, AAM failed to construct aggregate lists in reasonable time (more than one hour per topic). Moreover, PRG consumed all the available memory of our 32 GB workstation in CTT22, also failing to complete the aggregation task. In NTDT23, it was much slower than DIBRA, putting its usefulness into question. This conclusion applies to all the order-based and Markov chain-based methods: despite their effectiveness in some experiments, their average topic processing times can be 3–5 orders of magnitude larger than those of Borda Count, DIBRA, DIBRA-WIRE, etc.

5.5. Statistical Significance Tests

The reliability of the aforementioned measurements was checked by applying two statistical significance tests. In particular, we executed the Friedman non-parametric test on the MAP and P@1 measurements of all methods. The resulting p-values were equal to 6.9 × 10^−4 and 1.5 × 10^−4, respectively, indicating that the measurement distributions exhibit statistically significant discrepancies and that we can reject the null hypothesis.
We also applied the Wilcoxon signed-rank test to examine the pairwise statistical significance of the performances of DIBRA, DIBRA-P, and DIBRA-WIRE in terms of Mean Average Precision. The results are presented in Table 6 and indicate that the MAP differences between DIBRA-WIRE and DIBRA are more significant than those between DIBRA-P and DIBRA.
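Both tests are readily available in SciPy; the sketch below shows how such an omnibus and a pairwise comparison over per-topic MAP values can be run. The arrays are placeholders for illustration only, not our actual measurements.

from scipy.stats import friedmanchisquare, wilcoxon

# Per-topic MAP values of each method (placeholder numbers, one value per topic).
map_dibra      = [0.31, 0.28, 0.35, 0.30, 0.33]
map_dibra_p    = [0.30, 0.27, 0.35, 0.29, 0.32]
map_dibra_wire = [0.33, 0.30, 0.36, 0.31, 0.35]

# Omnibus test: do the per-topic distributions of the methods differ?
_, p_friedman = friedmanchisquare(map_dibra, map_dibra_p, map_dibra_wire)

# Pairwise test: is DIBRA-WIRE significantly different from plain DIBRA?
_, p_wilcoxon = wilcoxon(map_dibra_wire, map_dibra)

print(f"Friedman p = {p_friedman:.4f}, Wilcoxon p = {p_wilcoxon:.4f}")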

5.6. Hyper-Parameter Study

The proposed item removal method introduces the following two hyper-parameters: the number of buckets B in which the rankers are grouped, and the parameter δ_1 of Equation (6). In this subsection, we study how these parameters affect the performance of DIBRA-WIRE, and why the setting of B = 5 and δ_1 = 0.5 that was applied in the previous experiments is a choice that, in general, leads to good results.
Our methodology for this study dictated the parallel variation of the values of B and δ_1 and the independent measurement of MAP in each case. More specifically, for each of the eight benchmark datasets, we modified B in the range [2, 12] in integer steps of 1 (namely, we tested 11 values). Simultaneously, we modified δ_1 in the range [0, 1] in steps of 0.1, also considering 11 values. In other words, we conducted 121 measurements of MAP for each dataset.
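The grid behind the following heatmaps amounts to a double loop over B and δ_1; a schematic sketch is given below, where evaluate_map is a hypothetical callback that runs DIBRA-WIRE with the given setting on one dataset and returns the resulting MAP.

import numpy as np

def hyperparameter_grid(evaluate_map):
    # Measure MAP for every (B, delta_1) pair on the 11 x 11 grid of this study.
    # `evaluate_map(B, delta1)` is a hypothetical callback that runs DIBRA-WIRE
    # with the given hyper-parameters on one dataset and returns the MAP.
    B_values = list(range(2, 13))                            # 2, 3, ..., 12
    delta_values = [round(0.1 * i, 1) for i in range(11)]    # 0.0, 0.1, ..., 1.0
    grid = np.zeros((len(delta_values), len(B_values)))
    for col, B in enumerate(B_values):
        for row, d1 in enumerate(delta_values):
            grid[row, col] = evaluate_map(B, d1)
    return grid      # 121 MAP values, ready to be drawn as a heatmap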
The results are illustrated in the heatmaps of Figure 3 and Figure 4. Each heatmap represents the MAP measurements for a different benchmark dataset. The horizontal and vertical axes depict the values of B ∈ [2, 12] and δ_1 ∈ [0, 1], respectively. The darker background colors denote higher MAP values, with the black rectangles revealing top performance.
A careful observation of these heatmaps reveals that our choice of B = 5 and δ_1 = 0.5 yields satisfactory results in all cases. Of course, there are combinations that “optimize” the performance, but this is not consistent. For example, setting B = 4 and δ_1 = 0.3 yielded the best performance in terms of MAP in the MASO dataset. However, this setting does not work well in the FESO dataset. Similarly, B = 11 and δ_1 = 0 was the best combination for MOSO, but it was also among the worst in the MOLO, MASO, and FESO datasets. For FESO, the best setting was B = 12 and δ_1 = 0.6, a rather disappointing choice for MOLO and MASO.
We also constructed similar heatmaps for all Precision and nDCG values at the top-5 elements of each aggregate list L. Due to space restrictions, we cannot illustrate 80 such diagrams here. Instead, we indicatively choose to depict the fluctuation of N@5 against B and δ_1 in Figure 5 and Figure 6. The results of this exhaustive study verify the conclusions of our previous experiment. Namely, there are multiple combinations of B and δ_1 that yield decent results, but, similarly to the MAP case, there is no “golden rule” achieving top performance in all scenarios. In contrast, one that consistently performs well is B = 5 and δ_1 = 0.5.

6. Conclusions and Future Work

In this paper, we introduced WIRE, a novel item selection approach for weighted rank aggregation applications. WIRE functions as a post-processing step to any weighted rank aggregation method and aims to further improve the quality of the output aggregate list. More specifically, our approach initially groups the input preference lists into a pre-defined number of buckets, according to the weights that have been assigned to their respective rankers by the original weighted aggregator. Based on that bucket, each preference list (that is, its respective ranker) is assigned a confidence score that quantifies its importance in a discretized manner.
Then, for each element of the aggregate list, WIRE computes a preservation score that derives from the sum of the confidence scores of the rankers who included it in their preference lists. In the sequel, the aggregate list is sorted in decreasing preservation score order. Consequently, WIRE favors the elements that have been selected by multiple expert rankers. The preservation score is designed to be resistant to manipulation, since it does not depend on external factors like the individual rankings, pairwise wins and losses, or other score values that may have been assigned by the original aggregator. WIRE also employs the aforementioned confidence scores to compute a threshold value that determines the number of elements to be removed from the (sorted) aggregate list.
Our proposed method was theoretically analyzed and experimentally tested against a collection of 13 state-of-the-art rank aggregation methods in 8 benchmark datasets. The collection included both weighted and non-weighted aggregators, whereas the datasets were carefully selected in order to cover multiple diverse scenarios. The experiments highlighted the high retrieval effectiveness of WIRE in terms of Precision, normalized Discounted Cumulative Gain, and Mean Average Precision. Regarding future work, we intend to enhance WIRE by introducing more sophisticated discretization methods to determine the intervals (and ideally the number) of buckets where the rankers will be grouped. Examples of unsupervised binning methods include buckets of equal widths instead of equally sized buckets, or buckets that derive from the application of a clustering algorithm. We also plan to study alternative mathematical definitions for the confidence, preservation, and protection scores with the aim of improving the effectiveness of WIRE.

Author Contributions

Conceptualization, methodology, software, validation, formal analysis, resources, data curation, writing—original draft preparation, writing—review and editing: L.A. and P.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The datasets used in this study are publicly available on Kaggle: https://www.kaggle.com/datasets/lakritidis/rankaggregation. The implementation code of WIRE can be found in the open-source libraries FLAGR 1.0.20 & PyFLAGR 1.0.20: https://flagr.site, https://github.com/lakritidis/FLAGR, https://pypi.org/project/pyflagr/.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
WIRE      Weighted Item Removal
MAP       Mean Average Precision
nDCG      normalized Discounted Cumulative Gain

References

  1. Chen, J.; Long, R.; Wang, X.l.; Liu, B.; Chou, K.C. PdRHP-PseRA: Detecting remote homology proteins using profile-based pseudo protein sequence and rank aggregation. Sci. Rep. 2016, 6, 32333. [Google Scholar]
  2. Li, X.; Wang, X.; Xiao, G. A comparative study of rank aggregation methods for partial and top ranked lists in genomic applications. Briefings Bioinform. 2019, 20, 178–189. [Google Scholar] [CrossRef] [PubMed]
  3. Gyarmati, L.; Orbán-Mihálykó, É.; Mihálykó, C.; Vathy-Fogarassy, Á. Aggregated Rankings of Top Leagues’ Football Teams: Application and Comparison of Different Ranking Methods. Appl. Sci. 2023, 13, 4556. [Google Scholar] [CrossRef]
  4. Oliveira, S.E.; Diniz, V.; Lacerda, A.; Merschmanm, L.; Pappa, G.L. Is rank aggregation effective in recommender systems? An experimental analysis. ACM Trans. Intell. Syst. Technol. (TIST) 2020, 11, 1–26. [Google Scholar] [CrossRef]
  5. Bałchanowski, M.; Boryczka, U. A comparative study of rank aggregation methods in recommendation systems. Entropy 2023, 25, 132. [Google Scholar] [CrossRef] [PubMed]
  6. Akritidis, L.; Katsaros, D.; Bozanis, P. Effective ranking fusion methods for personalized metasearch engines. In Proceedings of the 12th Panhellenic Conference on Informatics, Samos Island, Greece, 28–30 August 2008; pp. 39–43. [Google Scholar]
  7. Wang, M.; Li, Q.; Lin, Y.; Zhou, B. A personalized result merging method for metasearch engine. In Proceedings of the 6th International Conference on Software and Computer Applications, Bangkok, Thailand, 26–28 February 2017; pp. 203–207. [Google Scholar]
  8. Akritidis, L.; Katsaros, D.; Bozanis, P. Effective rank aggregation for metasearching. J. Syst. Softw. 2011, 84, 130–143. [Google Scholar] [CrossRef]
  9. Bartholdi, J.; Tovey, C.A.; Trick, M.A. Voting schemes for which it can be difficult to tell who won the election. Soc. Choice Welf. 1989, 6, 157–165. [Google Scholar] [CrossRef]
  10. Kilgour, D.M. Approval balloting for multi-winner elections. In Handbook on Approval Voting; Springer: Berlin/Heidelberg, Germany, 2010; pp. 105–124. [Google Scholar]
  11. Chen, D.; Xiao, Y.; Wu, J.; Pérez, I.J.; Herrera-Viedma, E. A Robust Rank Aggregation Framework for Collusive Disturbance Based on Community Detection. Inf. Process. Manag. 2025, 62, 104096. [Google Scholar] [CrossRef]
  12. Ma, K.; Xu, Q.; Zeng, J.; Liu, W.; Cao, X.; Sun, Y.; Huang, Q. Sequential Manipulation Against Rank Aggregation: Theory and Algorithm. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 9353–9370. [Google Scholar] [CrossRef] [PubMed]
  13. Akritidis, L.; Fevgas, A.; Bozanis, P.; Manolopoulos, Y. An unsupervised distance-based model for weighted rank aggregation with list pruning. Expert Syst. Appl. 2022, 202, 117435. [Google Scholar] [CrossRef]
  14. de Borda, J.C. Mémoire sur les élections au scrutin. In Histoire de l’Academie Royale des Sciences; Imprimerie Royale: Paris, France, 1781; pp. 657–665. [Google Scholar]
  15. De Condorcet, N. Essai sur l’Application de l’Analyse à la Probabilité des Décisions Rendues à la Pluralité des Voix; Imprimerie Royale: Paris, France, 1785. [Google Scholar]
  16. Emerson, P. The original Borda Count and partial voting. Soc. Choice Welf. 2013, 40, 353–358. [Google Scholar] [CrossRef]
  17. Montague, M.; Aslam, J.A. Condorcet fusion for improved retrieval. In Proceedings of the 11th ACM International Conference on Information and Knowledge Management, McLean, VA, USA, 4–9 November 2002; pp. 538–548. [Google Scholar]
  18. Li, G.; Xiao, Y.; Wu, J. Rank Aggregation with Limited Information Based on Link Prediction. Inf. Process. Manag. 2024, 61, 103860. [Google Scholar] [CrossRef]
  19. Dwork, C.; Kumar, R.; Naor, M.; Sivakumar, D. Rank aggregation methods for the Web. In Proceedings of the 10th International Conference on World Wide Web, Hong Kong, China, 1–5 May 2001; pp. 613–622. [Google Scholar]
  20. DeConde, R.P.; Hawley, S.; Falcon, S.; Clegg, N.; Knudsen, B.; Etzioni, R. Combining results of microarray experiments: A rank aggregation approach. Stat. Appl. Genet. Mol. Biol. 2006, 5, 15. [Google Scholar] [CrossRef] [PubMed]
  21. Kolde, R.; Laur, S.; Adler, P.; Vilo, J. Robust rank aggregation for gene list integration and meta-analysis. Bioinformatics 2012, 28, 573–580. [Google Scholar] [CrossRef] [PubMed]
  22. Kemeny, J.G. Mathematics without numbers. Daedalus 1959, 88, 577–591. [Google Scholar]
  23. Young, H.P.; Levenglick, A. A consistent extension of Condorcet’s election principle. SIAM J. Appl. Math. 1978, 35, 285–300. [Google Scholar] [CrossRef]
  24. Farah, M.; Vanderpooten, D. An outranking approach for rank aggregation in information retrieval. In Proceedings of the 30th ACM Conference on Research and Development in Information Retrieval, Amsterdam, The Netherlands, 23–27 July 2007; pp. 591–598. [Google Scholar]
  25. Pihur, V.; Datta, S.; Datta, S. Weighted rank aggregation of cluster validation measures: A Monte Carlo cross-entropy approach. Bioinformatics 2007, 23, 1607–1615. [Google Scholar] [CrossRef] [PubMed]
  26. Desarkar, M.S.; Sarkar, S.; Mitra, P. Preference relations based unsupervised rank aggregation for metasearch. Expert Syst. Appl. 2016, 49, 86–98. [Google Scholar] [CrossRef]
  27. Chatterjee, S.; Mukhopadhyay, A.; Bhattacharyya, M. A weighted rank aggregation approach towards crowd opinion analysis. Knowl.-Based Syst. 2018, 149, 47–60. [Google Scholar] [CrossRef]
  28. Akritidis, L.; Alamaniotis, M.; Bozanis, P. FLAGR: A flexible high-performance library for rank aggregation. SoftwareX 2023, 21, 101319. [Google Scholar] [CrossRef]
  29. Renda, M.E.; Straccia, U. Web metasearch: Rank vs. Score based rank aggregation methods. In Proceedings of the 2003 ACM Symposium on Applied Computing, Melbourne, FL, USA, 9–12 March 2003; pp. 841–846. [Google Scholar]
  30. Copeland, A.H. A Reasonable Social Welfare Function; Technical Report; University of Michigan: Ann Arbor, MI, USA, 1951. [Google Scholar]
Figure 1. Schematic diagram of weighted rank aggregation with list pruning.
Figure 2. Indicative examples of the execution of WIRE. The left part (a) demonstrates how the rankers are grouped into buckets and how the confidence and preservation scores are computed. The right part (b) illustrates the removal of the weakest elements from an input preference list.
Figure 3. Hyper-parameter study of DIBRA-WIRE for the MOLO, MASO, FESO, and MOSO datasets. The heatmaps illustrate the fluctuation of Mean Average Precision for a variable number of buckets B (horizontal axis) and variable δ_1 values (in the range [0, 1]).
Figure 4. Hyper-parameter study of DIBRA-WIRE for the MAVSO, FEVLO, CTT22, and NTDT23 datasets. The heatmaps illustrate the fluctuation of Mean Average Precision for a variable number of buckets B (horizontal axis) and variable δ_1 values (in the range [0, 1]).
Figure 5. Hyper-parameter study of DIBRA-WIRE for the MOLO, MASO, FESO, and MOSO datasets. The heatmaps illustrate the fluctuation of N@5 for a variable number of buckets B (horizontal axis) and variable δ_1 values (in the range [0, 1]).
Figure 6. Hyper-parameter study of DIBRA-WIRE for the MAVSO, FEVLO, CTT22, and NTDT23 datasets. The heatmaps illustrate the fluctuation of N@5 for a variable number of buckets B (horizontal axis) and variable δ_1 values (in the range [0, 1]).
Table 1. Attributes of the benchmark datasets.

Dataset   Description                                     Topics |T|   n     k
MOLO      Synthetic, modest number of long lists          20           50    100
MASO      Synthetic, many short lists                     20           100   30
FESO      Synthetic, few short lists                      20           10    10
MOSO      Synthetic, modest number of short lists         20           50    30
MAVSO     Synthetic, many very short lists                20           500   5
FEVLO     Synthetic, few very long lists                  20           10    200
CTT22     Real, 2022 TREC Clinical Trials Track           50           41    1000
NTDT23    Real, 2023 TREC NeuCLIR Technical Docs Track    41           51    1000
Table 2. Performance evaluation of various rank aggregation methods in the MOLO, MASO, and FESO datasets.

MOLO            MAP     P@1     P@2     P@3     P@4     P@5     N@1     N@2     N@3     N@4     N@5
Borda Count     0.240   0.250   0.150   0.167   0.175   0.170   0.250   0.173   0.179   0.183   0.178
CombMNZ         0.238   0.200   0.225   0.200   0.188   0.170   0.200   0.219   0.203   0.194   0.182
Condorcet       0.239   0.300   0.175   0.167   0.188   0.170   0.300   0.203   0.191   0.201   0.188
Copeland        0.238   0.200   0.175   0.150   0.188   0.160   0.200   0.181   0.162   0.185   0.167
Outranking      0.238   0.300   0.200   0.167   0.163   0.190   0.300   0.223   0.194   0.186   0.201
MC1             0.249   0.250   0.250   0.217   0.263   0.250   0.250   0.250   0.227   0.256   0.248
MC4             0.249   0.250   0.250   0.217   0.263   0.250   0.250   0.250   0.227   0.256   0.248
MCT             0.238   0.000   0.100   0.167   0.212   0.210   0.000   0.077   0.130   0.167   0.171
RRA             0.240   0.100   0.100   0.167   0.188   0.190   0.100   0.100   0.147   0.164   0.169
PRG             0.239   0.250   0.200   0.167   0.175   0.170   0.250   0.211   0.185   0.188   0.183
AAM             0.136   0.300   0.275   0.233   0.275   0.260   0.300   0.281   0.250   0.275   0.265
DIBRA           0.244   0.300   0.225   0.200   0.237   0.220   0.300   0.242   0.220   0.242   0.230
DIBRA-P         0.245   0.200   0.225   0.217   0.237   0.220   0.200   0.219   0.215   0.229   0.219
DIBRA-WIRE      0.250   0.350   0.325   0.233   0.263   0.230   0.350   0.331   0.265   0.279   0.256

MASO            MAP     P@1     P@2     P@3     P@4     P@5     N@1     N@2     N@3     N@4     N@5
Borda Count     0.314   0.300   0.275   0.317   0.287   0.290   0.300   0.281   0.309   0.290   0.292
CombMNZ         0.317   0.350   0.275   0.300   0.312   0.320   0.350   0.292   0.306   0.313   0.318
Condorcet       0.311   0.350   0.275   0.333   0.300   0.300   0.350   0.292   0.329   0.307   0.306
Copeland        0.312   0.300   0.250   0.300   0.287   0.310   0.300   0.261   0.294   0.286   0.301
Outranking      0.317   0.300   0.250   0.317   0.325   0.300   0.300   0.261   0.306   0.313   0.298
MC1             0.319   0.350   0.325   0.317   0.325   0.330   0.350   0.331   0.323   0.328   0.331
MC4             0.319   0.350   0.325   0.317   0.325   0.330   0.350   0.331   0.323   0.328   0.331
MCT             0.320   0.150   0.275   0.317   0.300   0.290   0.150   0.247   0.283   0.277   0.274
RRA             0.318   0.300   0.325   0.300   0.287   0.310   0.300   0.319   0.303   0.294   0.308
PRG             0.316   0.350   0.225   0.317   0.338   0.330   0.350   0.253   0.311   0.326   0.323
AAM             0.182   0.150   0.300   0.350   0.338   0.300   0.150   0.266   0.309   0.308   0.287
DIBRA           0.324   0.400   0.300   0.317   0.312   0.300   0.400   0.323   0.329   0.324   0.314
DIBRA-P         0.323   0.350   0.325   0.333   0.287   0.270   0.350   0.331   0.335   0.304   0.290
DIBRA-WIRE      0.327   0.400   0.250   0.333   0.300   0.320   0.400   0.284   0.335   0.312   0.324

FESO            MAP     P@1     P@2     P@3     P@4     P@5     N@1     N@2     N@3     N@4     N@5
Borda Count     0.288   0.150   0.100   0.117   0.125   0.130   0.150   0.150   0.190   0.222   0.255
CombMNZ         0.298   0.200   0.125   0.117   0.125   0.130   0.200   0.181   0.202   0.232   0.264
Condorcet       0.228   0.100   0.125   0.117   0.138   0.140   0.100   0.151   0.175   0.228   0.268
Copeland        0.263   0.100   0.075   0.133   0.138   0.130   0.100   0.100   0.177   0.207   0.239
Outranking      0.289   0.200   0.125   0.117   0.113   0.120   0.200   0.181   0.202   0.210   0.243
MC1             0.377   0.250   0.200   0.167   0.163   0.160   0.250   0.282   0.293   0.326   0.344
MC4             0.268   0.100   0.100   0.117   0.138   0.130   0.100   0.132   0.159   0.221   0.243
MCT             0.291   0.100   0.100   0.100   0.125   0.150   0.100   0.151   0.170   0.212   0.278
RRA             0.296   0.150   0.125   0.133   0.138   0.140   0.150   0.162   0.216   0.248   0.266
PRG             0.291   0.150   0.125   0.117   0.125   0.130   0.150   0.169   0.193   0.225   0.257
AAM             0.259   0.250   0.175   0.150   0.138   0.120   0.250   0.262   0.278   0.322   0.343
DIBRA           0.332   0.150   0.175   0.150   0.150   0.140   0.150   0.232   0.247   0.270   0.304
DIBRA-P         0.272   0.100   0.200   0.150   0.150   0.140   0.100   0.226   0.219   0.258   0.272
DIBRA-WIRE      0.399   0.200   0.250   0.200   0.175   0.170   0.200   0.333   0.343   0.361   0.334
Table 3. Performance evaluation of various rank aggregation methods in the MOSO, MAVSO, and FEVLO datasets. The columns report MAP, Precision (P@1–P@5), and NDCG (N@1–N@5).

MOSO          MAP    P@1    P@2    P@3    P@4    P@5    N@1    N@2    N@3    N@4    N@5
Borda Count   0.507  0.350  0.450  0.400  0.425  0.440  0.350  0.427  0.397  0.415  0.426
CombMNZ       0.508  0.350  0.450  0.400  0.425  0.440  0.350  0.427  0.397  0.415  0.426
Condorcet     0.512  0.450  0.500  0.483  0.463  0.490  0.450  0.489  0.480  0.466  0.484
Copeland      0.516  0.450  0.475  0.500  0.500  0.520  0.450  0.469  0.488  0.490  0.505
Outranking    0.507  0.350  0.450  0.400  0.425  0.440  0.350  0.427  0.397  0.415  0.426
MC1           0.512  0.400  0.475  0.467  0.500  0.500  0.400  0.458  0.456  0.480  0.483
MC4           0.516  0.450  0.450  0.500  0.487  0.470  0.450  0.450  0.485  0.479  0.469
MCT           0.494  0.450  0.425  0.417  0.450  0.440  0.450  0.431  0.423  0.445  0.439
RRA           0.494  0.300  0.350  0.400  0.438  0.450  0.300  0.339  0.377  0.406  0.418
PRG           0.507  0.350  0.450  0.400  0.425  0.440  0.350  0.427  0.397  0.415  0.426
AAM           0.306  0.600  0.500  0.467  0.463  0.450  0.600  0.523  0.494  0.486  0.475
DIBRA         0.517  0.400  0.400  0.450  0.500  0.490  0.400  0.400  0.435  0.471  0.469
DIBRA-P       0.499  0.450  0.400  0.417  0.425  0.470  0.450  0.411  0.420  0.425  0.455
DIBRA-WIRE    0.521  0.450  0.425  0.450  0.487  0.490  0.450  0.431  0.447  0.473  0.476

MAVSO         MAP    P@1    P@2    P@3    P@4    P@5    N@1    N@2    N@3    N@4    N@5
Borda Count   0.400  0.400  0.275  0.233  0.250  0.230  0.400  0.323  0.294  0.322  0.328
CombMNZ       0.403  0.400  0.275  0.250  0.225  0.220  0.400  0.323  0.306  0.306  0.320
Condorcet     0.365  0.350  0.275  0.250  0.250  0.230  0.350  0.304  0.287  0.306  0.311
Copeland      0.362  0.350  0.275  0.250  0.237  0.230  0.350  0.304  0.287  0.296  0.310
Outranking    0.416  0.400  0.350  0.283  0.275  0.260  0.400  0.381  0.338  0.352  0.362
MC1           0.379  0.300  0.350  0.283  0.263  0.240  0.300  0.351  0.318  0.325  0.325
MC4           0.379  0.300  0.350  0.283  0.263  0.240  0.300  0.351  0.318  0.325  0.325
MCT           0.281  0.150  0.200  0.200  0.200  0.190  0.150  0.189  0.191  0.195  0.196
RRA           0.400  0.400  0.275  0.250  0.225  0.210  0.400  0.323  0.306  0.306  0.311
PRG           0.403  0.400  0.275  0.250  0.225  0.230  0.400  0.323  0.306  0.306  0.328
AAM           0.188  0.250  0.250  0.250  0.200  0.210  0.250  0.262  0.267  0.240  0.258
DIBRA         0.405  0.350  0.275  0.300  0.263  0.270  0.350  0.292  0.332  0.325  0.351
DIBRA-P       0.296  0.200  0.200  0.183  0.175  0.210  0.200  0.200  0.188  0.182  0.219
DIBRA-WIRE    0.414  0.350  0.350  0.317  0.287  0.280  0.350  0.362  0.357  0.355  0.371

FEVLO         MAP    P@1    P@2    P@3    P@4    P@5    N@1    N@2    N@3    N@4    N@5
Borda Count   0.785  0.700  0.800  0.767  0.787  0.780  0.700  0.777  0.759  0.774  0.771
CombMNZ       0.785  0.700  0.775  0.767  0.787  0.780  0.700  0.758  0.756  0.772  0.769
Condorcet     0.785  0.650  0.750  0.800  0.800  0.800  0.650  0.727  0.768  0.773  0.777
Copeland      0.786  0.700  0.800  0.800  0.800  0.800  0.700  0.777  0.783  0.786  0.787
Outranking    0.785  0.700  0.800  0.767  0.787  0.780  0.700  0.777  0.759  0.774  0.771
MC1           0.785  0.800  0.850  0.817  0.787  0.780  0.800  0.839  0.818  0.798  0.792
MC4           0.786  0.700  0.750  0.800  0.800  0.790  0.700  0.739  0.777  0.780  0.776
MCT           0.785  0.800  0.825  0.750  0.738  0.760  0.800  0.842  0.785  0.771  0.781
RRA           0.783  0.800  0.825  0.800  0.738  0.760  0.800  0.819  0.803  0.761  0.772
PRG           0.785  0.700  0.800  0.767  0.787  0.780  0.700  0.777  0.759  0.774  0.771
AAM           0.380  0.750  0.700  0.750  0.762  0.780  0.750  0.711  0.744  0.753  0.766
DIBRA         0.785  0.750  0.775  0.783  0.787  0.790  0.750  0.769  0.777  0.780  0.783
DIBRA-P       0.783  0.800  0.800  0.867  0.850  0.850  0.800  0.900  0.877  0.864  0.862
DIBRA-WIRE    0.789  0.800  0.850  0.800  0.762  0.730  0.800  0.761  0.794  0.770  0.747
Table 4. Performance evaluation of various rank aggregation methods in the CTT22 and NTDT23 datasets. The columns report MAP, Precision (P@1–P@5), and NDCG (N@1–N@5). Dashes indicate that no measurements were reported for the corresponding method.

CTT22         MAP    P@1    P@2    P@3    P@4    P@5    N@1    N@2    N@3    N@4    N@5
Borda Count   0.380  0.760  0.710  0.707  0.690  0.684  0.560  0.557  0.566  0.558  0.556
CombMNZ       0.378  0.760  0.710  0.700  0.685  0.680  0.560  0.557  0.561  0.554  0.552
Condorcet     0.418  0.700  0.690  0.733  0.730  0.720  0.567  0.559  0.561  0.563  0.552
Copeland      0.418  0.720  0.780  0.780  0.795  0.764  0.567  0.559  0.561  0.563  0.552
Outranking    0.401  0.720  0.710  0.720  0.725  0.716  0.587  0.589  0.578  0.578  0.572
MC1           0.267  0.700  0.710  0.653  0.645  0.640  0.567  0.569  0.537  0.527  0.518
MC4           0.133  0.240  0.220  0.207  0.195  0.180  0.227  0.206  0.184  0.173  0.159
MCT           0.294  0.720  0.710  0.653  0.685  0.680  0.315  0.569  0.564  0.527  0.552
RRA           0.411  0.740  0.760  0.740  0.730  0.728  0.647  0.652  0.632  0.612  0.605
PRG           -      -      -      -      -      -      -      -      -      -      -
AAM           -      -      -      -      -      -      -      -      -      -      -
DIBRA         0.422  0.700  0.690  0.733  0.725  0.708  0.567  0.559  0.564  0.564  0.546
DIBRA-P       0.422  0.720  0.700  0.733  0.725  0.708  0.560  0.565  0.564  0.560  0.542
DIBRA-WIRE    0.424  0.760  0.800  0.780  0.785  0.768  0.653  0.664  0.644  0.634  0.617

NTDT23        MAP    P@1    P@2    P@3    P@4    P@5    N@1    N@2    N@3    N@4    N@5
Borda Count   0.306  0.512  0.463  0.422  0.402  0.400  0.366  0.376  0.360  0.355  0.361
CombMNZ       0.300  0.512  0.463  0.423  0.390  0.390  0.366  0.376  0.360  0.350  0.357
Condorcet     0.355  0.659  0.537  0.520  0.470  0.434  0.596  0.491  0.467  0.455  0.436
Copeland      0.355  0.659  0.537  0.512  0.470  0.434  0.596  0.491  0.467  0.455  0.436
Outranking    0.348  0.585  0.524  0.480  0.476  0.444  0.460  0.453  0.425  0.426  0.415
MC1           0.222  0.439  0.378  0.358  0.323  0.307  0.355  0.339  0.318  0.300  0.296
MC4           0.304  0.610  0.561  0.504  0.445  0.410  0.547  0.513  0.467  0.436  0.419
MCT           0.253  0.439  0.444  0.388  0.373  0.307  0.375  0.359  0.324  0.311  0.296
RRA           0.355  0.634  0.585  0.504  0.470  0.434  0.592  0.529  0.466  0.447  0.432
PRG           0.308  0.512  0.488  0.439  0.402  0.385  0.366  0.395  0.374  0.363  0.358
AAM           -      -      -      -      -      -      -      -      -      -      -
DIBRA         0.356  0.659  0.537  0.504  0.457  0.434  0.575  0.472  0.451  0.439  0.431
DIBRA-P       0.355  0.659  0.549  0.504  0.457  0.434  0.575  0.482  0.453  0.441  0.434
DIBRA-WIRE    0.360  0.659  0.549  0.504  0.470  0.434  0.596  0.491  0.454  0.442  0.427
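Tables 2–4 report MAP, Precision at k (P@k), and NDCG at k (N@k). As a point of reference, the following sketch shows a standard way of computing these metrics for a single ranked list against binary relevance judgments; it is an illustrative implementation, not the evaluation code used for the experiments reported above.

```python
import math

def precision_at_k(ranking, relevant, k):
    """Fraction of the top-k items that are relevant."""
    return sum(1 for item in ranking[:k] if item in relevant) / k

def average_precision(ranking, relevant):
    """Average of P@i over the ranks i at which relevant items appear."""
    hits, total = 0, 0.0
    for i, item in enumerate(ranking, start=1):
        if item in relevant:
            hits += 1
            total += hits / i
    return total / len(relevant) if relevant else 0.0

def ndcg_at_k(ranking, relevant, k):
    """NDCG@k with binary gains: DCG of the ranking divided by the ideal DCG."""
    dcg = sum(1.0 / math.log2(i + 1)
              for i, item in enumerate(ranking[:k], start=1) if item in relevant)
    ideal = sum(1.0 / math.log2(i + 1) for i in range(1, min(len(relevant), k) + 1))
    return dcg / ideal if ideal > 0 else 0.0

# MAP is the mean of average_precision over all topics/queries of a dataset.
```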
Table 5. Execution times (in milliseconds per query) of various rank aggregation methods on the benchmark datasets of Table 1. Dashes indicate missing measurements.

Method        MOLO     MASO     FESO  MOSO    MAVSO     FEVLO  CTT22      NTDT23
Borda Count   5.07     2.91     0.01  1.63    2.97      0.05   41.2       22.7
CombMNZ       5.04     2.93     0.01  1.62    2.85      0.05   41.0       25.4
Condorcet     18.88    7.40     0.01  1.93    4.05      0.10   10 × 10³   8195
Copeland      17.64    6.89     0.01  1.88    3.80      0.10   10 × 10³   8219
Outranking    31.28    10.26    0.01  2.22    4.57      0.16   26 × 10³   23 × 10³
MC1           28.81    10.29    0.01  2.09    4.94      0.13   23 × 10³   19 × 10³
MC4           29.06    10.24    0.01  2.07    4.84      0.15   105 × 10³  68 × 10³
MCT           29.69    10.33    0.01  2.04    5.00      0.15   165 × 10³  109 × 10³
RRA           6.11     3.70     0.01  1.83    4.84      0.05   65.67      32.90
PRG           53.51    17.02    0.01  2.67    6.68      0.33   98 × 10³   -
AAM           2096.71  1940.83  0.11  141.86  14 × 10³  0.79   >1 h       >1 h
DIBRA         5.98     3.87     0.01  2.05    4.17      0.05   82.33      53.94
DIBRA-P       6.45     4.24     0.01  2.06    4.59      0.06   87.26      46.69
DIBRA-WIRE    8.11     5.05     0.01  2.67    5.89      0.07   112.40     78.51
Table 6. Statistical significance of the MAP measurements of DIBRA-WIRE against DIBRA and DIBRA-P. The reported p-values were obtained with the post hoc Wilcoxon signed-rank test.

Comparison                p-value (MAP)
DIBRA-WIRE vs. DIBRA      0.0078125
DIBRA-WIRE vs. DIBRA-P    0.0078125
DIBRA vs. DIBRA-P         0.0141051
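A p-value of 0.0078125 is what an exact two-sided Wilcoxon signed-rank test yields for eight paired observations whose differences all point in the same direction (2/2^8). As an illustration only, the DIBRA-WIRE vs. DIBRA comparison can be recomputed from the per-dataset MAP scores of Tables 2–4 with SciPy; depending on the installed SciPy version and its handling of tied differences, the printed value may be the exact probability or a close approximation.

```python
from scipy.stats import wilcoxon

# Per-dataset MAP scores taken from Tables 2-4
# (order: MOLO, MASO, FESO, MOSO, MAVSO, FEVLO, CTT22, NTDT23).
dibra_wire = [0.250, 0.327, 0.399, 0.521, 0.414, 0.789, 0.424, 0.360]
dibra      = [0.244, 0.324, 0.332, 0.517, 0.405, 0.785, 0.422, 0.356]

# Two-sided paired test on the MAP differences; with n = 8 pairs and every
# difference favoring DIBRA-WIRE, the exact p-value is 2 / 2**8 = 0.0078125.
statistic, p_value = wilcoxon(dibra_wire, dibra, alternative="two-sided")
print(f"W = {statistic}, p = {p_value:.7f}")
```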