Article

Identification of Players Ranking in E-Sport

by Karol Urbaniak 1, Jarosław Wątróbski 2,* and Wojciech Sałabun 1,*
1 Research Team on Intelligent Decision Support Systems, Department of Artificial Intelligence Methods and Applied Mathematics, Faculty of Computer Science and Information Technology, West Pomeranian University of Technology in Szczecin, ul. Żołnierska 49, 71-210 Szczecin, Poland
2 Department of Information Systems Engineering in the Faculty of Economics, Finance and Management of the University of Szczecin, Mickiewicza 64, 71-101 Szczecin, Poland
* Authors to whom correspondence should be addressed.
Appl. Sci. 2020, 10(19), 6768; https://doi.org/10.3390/app10196768
Submission received: 20 August 2020 / Revised: 19 September 2020 / Accepted: 22 September 2020 / Published: 27 September 2020
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract:
Human activity is moving steadily into virtual reality. People from all over the world show a growing fascination with e-sport. In practice, e-sport is a type of sport in which players compete using computer games. The competitions in games like FIFA, Dota 2, League of Legends, and Counter-Strike are prestigious tournaments with a global reach and budgets of millions of dollars. At the same time, reliable player ranking is a critical issue in both classic sport and e-sport. For example, the “Golden Ball” is the most valuable individual prize in the history of football, and the entire football world wants to know who the best player is. The position of each player in the ranking depends on the assessment of his skills and predispositions. In this paper, we study the identification of a player evaluation and ranking model obtained using the multiple-criteria decision-making method called the Characteristic Objects METhod (COMET), on the example of the popular game Counter-Strike: Global Offensive (CS: GO). We present a range of advantages of the player evaluation model created using the COMET method and thereby demonstrate the practicality of using multi-criteria decision analysis (MCDA) methods to build multi-criteria assessment models in the emerging area of eSports. Thus, we provide a methodical and practical background for building a decision support system engine for the evaluation of players in several eSports.

1. Introduction

Sport has always played an essential role in every culture, both in the past and in current times. Everybody knows conventional sports, such as football, volleyball, or basketball, but new sports are appearing that are steadily expanding in popularity. One of them is Electronic Sports, also known as eSports or e-sports [1]. The history of e-sport began at the beginning of the 1990s. During that decade, it became more and more popular, and the number of players increased significantly [2,3,4,5]. E-sport is a type of sport in which players compete in computer games [6,7]. The players’ activities differ from classic sport only in that they take place in a virtual environment [3]. E-sport is exciting entertainment for many fans, but it is also a source of income for professional players and entire e-sport organizations. Professional players usually belong to e-sport organizations and represent their teams in omnifarious tournaments, events, and international championships [2,3,4,8]. The competition takes place online or over so-called local area networks (LANs). Most encounters take place on a LAN, where smaller or larger numbers of computers are connected in one building, allowing for lower in-game latency between gamers [2,6,8,9,10]. In e-sports, viewership is crucial: the gameplay should be designed to attract and emotionally engage as many observers as possible.
E-sport is a lifestyle for computer gamers. It has become a real career path on which one can start, develop, and build a future. Many people still view e-sport very conservatively, thinking of it as something trivial and frivolous. Yet, while some people do not take it seriously, spectator count records, as well as prize pool records, are regularly broken during major tournaments, with millions watching Counter-Strike: Global Offensive (CS: GO) [11]. E-sport is full of opportunities, awards, and travel, but it also requires great sacrifice, and it is incredibly demanding to reach a world-class level [1]. In practice, it resembles a full-time job: players usually train 8 hours a day or more, using the computer as a tool to achieve success in a new field. To become a professional, people have to work hard without any excuses. A player is considered professional when he is hired by an organization that pays for his work representing that entity by appearing at events, mostly official tournaments at a national or international level [8]. E-sport has become an area that requires so much precision that even milliseconds determine a win or a loss. Specialized skills, such as hand-eye coordination, muscle memory, or reaction time, as well as strategic and tactical in-game knowledge, increase the chance of success in this area [12]. Hand-eye coordination is the ability of the vision system to coordinate the information received through the eyes to control, guide, and direct the hands in the accomplishment of a given task, such as handwriting or catching a ball [13]. The aim of e-sports is to defeat other players. This can be done by neutralizing them or, just as in sports games, by racing as fast as possible to cross the finish line before the opponents. In addition, a win may be achieved by scoring the most points [2,3].
One of the most popular genres of eSports games is the First-Person Shooter (FPS) [2,6,8,14]. The virtual environment of the game is viewed from the perspective of the avatar; the only parts of the avatar visible on the screen are the hands and the weapon they handle [2]. Counter-Strike is an FPS multiplayer game created and released by Valve Corporation and Hidden Path Entertainment [5,6]. There were many other versions of the game, which did not achieve much success. Valve realized how popular e-sport had become and created the new Counter-Strike game we play today, wholly tailored for competition and known as CS: GO. The rules in CS: GO are uncomplicated. There are two teams in the game: terrorists (T) and counter-terrorists (CT). Each team aims to eliminate the opposing team or to perform a specific task. The first team’s target is to plant the bomb and let it explode, while the second’s is to prevent the bomb from being planted and/or exploding. The game consists of 30 rounds, each lasting about 2 min. After 15 rounds, players switch sides. The team that first wins 16 rounds is the winner. When the game is not decided within 30 rounds, it goes to overtime, which consists of a best of six rounds, three on each side; the team that first reaches 4 overtime rounds wins. If there is another draw, the same rule applies until a winner is found [4,8]. The team’s economy is the amount of money that the team members have pooled cooperatively in order to buy new weapons and equipment. Winning a round by eliminating the entire enemy team provides the winners with USD 3250 per player, plus USD 300 if the bomb was planted by a terrorist. Winning by time on the counter-terrorists’ side rewards players with USD 3250, and winning the round with a defusal (CT) or detonation of the bomb (T) rewards USD 3500. However, if the terrorists run out of time before killing all the opponents or planting the bomb, they do not receive any cash prize.
If a round is lost on the T-side but the team still managed to plant the bomb, the team is awarded USD 800 in addition to the current round-loss-streak value. The money limit for each individual player in competitive matches is USD 16,000 [15].
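The reward rules quoted above can be sketched as a small function. This is an illustrative simplification with hypothetical names (the `side` and `outcome` labels are ours), not the full in-game economy, which has more cases such as side-dependent loss bonuses:

```python
def round_reward(side, outcome, bomb_planted=False, loss_streak_bonus=0):
    # Per-player round-end cash, using only the figures quoted in the text.
    # side: "T" or "CT"; outcome labels are our own names, not game API terms.
    if outcome == "win_elimination":
        # USD 3250, plus USD 300 if the terrorists had planted the bomb
        return 3250 + (300 if side == "T" and bomb_planted else 0)
    if outcome == "win_time" and side == "CT":
        return 3250
    if outcome in ("win_defuse", "win_detonation"):
        return 3500
    if side == "T" and outcome == "loss":
        # a lost T round with a planted bomb still pays USD 800 on top of the
        # loss-streak value; without a plant, the text grants no award
        return (800 + loss_streak_bonus) if bomb_planted else 0
    return loss_streak_bonus

def clamp_money(balance):
    # competitive money cap of USD 16,000 per player
    return min(balance, 16000)
```

For example, a terrorist who wins by eliminating the enemy team after planting receives 3250 + 300 = USD 3550, subject to the USD 16,000 cap.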
For gamers, the foundation of e-sports is the glory of winning, the ability to evoke excitement in people, and the privilege of being perceived as one of the best players in the world [2,8]. In the past, players had to bring their own equipment to LAN events, having fun in a hermetically sealed community, and could at best win small cash prizes or gadgets. Now, these players win prize pools of over USD 500 thousand, performing on big stages in front of cameras and audiences [1]. The increase in the popularity of e-sports has not only been impressive but has also led many business people, large corporations, and television companies to become interested in this dynamically developing market [8]. E-sport teams are often headed by traditional sports organizations and covered by traditional sports media, and tournaments are organized by conventional sports leagues, highlighting the growing connections between classical sport and e-sport [16].
In recent years, e-sport has become one of the fastest-growing forms of new media, driven by the growth of game broadcasting technologies [7,17]. E-sport and computer gaming have entered the mainstream, transforming into a convenient form of entertainment. In 2019, 453.8 million people watched e-sport worldwide, an increase of about 15% compared to 2018; this consisted of 201 million regular and 252 million occasional viewers. Between 2019 and 2023, total e-sport viewership is expected to increase by 9% per year, from 454 million in 2019 to 646 million in 2023. In six years, the number of watchers will thus almost double compared to the 335 million of 2017. In the current economic situation, global revenues from e-sport may reach USD 1.8 billion by 2022, or even an optimistic USD 3.2 billion. Hamari, in Reference [3], claims that with the development of e-sport, classic sport is becoming a computer-based form of media and information technology. Therefore, e-sport is a fascinating subject of research in the field of information technology.
Accurate player ranking is a crucial issue in both classic [18] and e-sport [19,20]. The result of a classification, calculated on the basis of wins and losses in a competitive game, is often considered an indicator of a player’s skills [20]. Each player’s position in the ranking is strictly determined by their abilities, predispositions, and talent in the represented discipline [16]. However, there is more than statistics to prove a player’s value and ability. Many professional players play a supporting role in their teams, where winning even a single round is a priority. What matters first and foremost is the team’s victory, not the ambitions of individual players. The team members have to work collectively, like one organism, and everyone has to cooperate to achieve the team’s success and the best possible results [21]. That is why the creation of an accurate player ranking is a problematic issue.
In this paper, we identify a model for generating a ranking of players in a popular e-sport game, i.e., Counter-Strike: Global Offensive (CS: GO), using the Characteristic Objects METhod (COMET). The obtained ranking is compared to Rating 2.0, which is the most popular rating for the CS: GO game [22,23]. This case study demonstrates the application of COMET in a new field. COMET is a novel method of identifying a multi-criteria expert decision model to solve decision problems based on a rule set, using elements of fuzzy set theory [23,24]. Unlike most available multi-criteria decision analysis (MCDA) methods, COMET is completely free of the rank reversal problem. The advantages of this technique are both an intuitive dialogue with the decision-maker and the identification of a complete model of the modeled area, which is a vital element in applying the proposed approach as a methodological and algorithmic engine in the area of computer games and, more specifically, e-sport.
The most important methodological contribution is the analysis of the significance of individual inputs and outputs, which enables analyzing the dependence of the results on the individual input data. Similarly to the Analytic Hierarchy Process (AHP) method, this serves to enable extended decision analysis in order to explain what influence particular aspects had on the final result. The Spearman correlation coefficient is used to measure the input-output dependencies, which extends the COMET technique with new interpretative possibilities. This is significant because the COMET method itself does not use any significance weights; the proposed approach makes it possible to estimate them.
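Since this extension rests on the Spearman rank correlation, a minimal sketch may help fix ideas. It uses the classic no-ties formula ρ = 1 − 6Σd² / (n(n² − 1)); the function name is ours, and a production analysis would also handle tied ranks (e.g., with average ranks):

```python
def spearman_rho(x, y):
    # Spearman rank correlation via the classic no-ties formula
    # rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)); illustrative sketch only.
    n = len(x)
    rank = lambda v: [sorted(v).index(e) + 1 for e in v]  # 1-based ranks
    d2 = sum((a - b) ** 2 for a, b in zip(rank(x), rank(y)))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

Applied to a criterion column and the final ranking scores, a value near 1 suggests that criterion strongly drives the ranking, while a value near 0 suggests little influence.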
The justification of the undertaken research has both theoretical and practical dimensions. MCDA methods have proved to be powerful tools for solving different practical problems [25,26]. In particular, the construction of assessment models and rankings using MCDA methods is extensively discussed in the literature [27,28,29,30]. Examples of decision-making problems successfully solved with different multi-criteria methods include the assessment of environmental effects of marine transportation [31], innovation [32,33], sustainability [34,35], the evaluation of renewable energy sources (RES) investments [36,37], broad environmental investment assessment [38], industrial [39] and personnel assessment [40], the assessment of preventive health effects [41], and even the evaluation of medical therapy effects [42,43]. It is also worth noting, as additional motivation for this research, that MCDA methods have already shown their utility in building assessment models in traditional sports. For instance, an MCDA-based evaluation of soccer players was conducted in Reference [44], and the Choquet method was used to evaluate the performance of sailboats in Reference [45]. A Preference Ranking Organization METHod for Enrichment Evaluation II (PROMETHEE II)-based evaluation model of football clubs was proposed in Reference [46], while an application of an AHP/SWOT model in sport marketing was presented in Reference [47]. An MCDA-based, multi-stakeholder perspective was adopted in the evaluation of national-level sport organizations in Reference [48]. Both the examples provided and the state of the art presented in Reference [49] clearly show the critical role of MCDA methods in building assessment models and rankings in the field of sport.
When we analyze the area of e-sport, in addition to the dominant trends, including economic [50], sociological [3,51], psychological [52], conversion-oriented, and user experience (UX) [53] research, we observe attempts to use quantitative methods in the search for algorithmic engines of digital products and games. For example, ex-post surveys and a statistics-based approach were used to manage the health of eSport athletes [54]. The personal branding of eSports athletes was evaluated in Reference [55]. In Reference [56], streaming-based performance indicators were developed, and players’ behavior and performance were assessed. Research focused on win/loss prediction in multi-player games was conducted in Reference [57]. A study aimed at identifying the biometric features contributing to the classification of particular player skills was presented in Reference [58]. So far, only one example of the usage of an MCDA-based method in e-sport player selection and evaluation has been proposed [58]. The authors indicate the appropriateness of fuzzy MCDA in the domain of e-sport player selection and assessment. The above literature studies show a distinct research gap, namely the limited application of MCDA in the e-sport domain. Besides, the paper addresses the following essential theoretical and practical research gaps:
  • extension of the COMET method by a stage of analyzing the significance of individual input data and decision-making sub-models for the final form of a ranking of decision-making options;
  • transfer of the methodological experience of using MCDA methods to the important and promising ground of building decision support systems in the area of eSports;
  • identification of a domain-specific model properly reflecting the modeling domain (e-sport player evaluation), the form of which (both in the family of evaluation criteria and in the alternatives) is significantly different from that of classical sports; and
  • analysis and study of the adaptation of MCDA methods as an algorithmic, methodological engine of a decision support system (potentially providing additional functionality to a range of available digital products and games).
The rest of the paper is organized as follows: MCDA foundations and a simple comparison of MCDA techniques are presented in Section 2. Section 3 contains preliminaries of fuzzy set theory. The definitions and algorithm of the multi-criteria decision-making method named COMET are explained in Section 4. Section 5 introduces the results of the study, and Section 6 discusses the differences between both rankings. In Section 7, we present the conclusions and future directions.

2. MCDA Foundations

Multi-criteria decision support aims to find a solution that is most satisfactory to the decision-maker while meeting a sufficient number of often conflicting goals [59]. The search for such solutions requires considering many alternatives and evaluating them against many criteria, as well as transferring the subjectivity of the evaluation (e.g., the relevance of the criteria to the decision-maker) into a target model. Multi-Criteria Decision Analysis (MCDA) methods are dedicated to solving this class of decision problems. Over many years of research, two schools of MCDA methods have developed. The first, the American MCDA school, is based on the assumption that the decision-maker’s preferences are expressed using two basic binary relations: when comparing decision options, indifference and preference relations may occur. The European MCDA school has significantly extended this set by introducing the so-called outranking (superiority) relation. The outranking relation, apart from the two basic relations mentioned above, introduces the relation of weak preference of one variant over another and the relation of incomparability of decision options.
In the case of the American school methods, the result of the comparison of variants is determined for each criterion separately, and the effect of aggregating the grades is a single, synthesized criterion, with a full order of variants. The methods of the American school of decision support in the vast majority use value or utility functions. The best-known methods of the American school are Multi-Attribute Utility Theory (MAUT), AHP, the Analytic Network Process (ANP), the Simple Multi-Attribute Rating Technique (SMART), UTA, Measuring Attractiveness by a Categorical Based Evaluation Technique (MACBETH), and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS).
In contrast to the American school (a contrast emphasized by “European school”-oriented researchers), the algorithms of the European school methods are strongly oriented toward a faithful reflection of the decision-maker’s preferences (including their imprecision). The aggregation of the assessments is done using the outranking relation, and the effect of aggregation in the vast majority of methods is a partial order of variants (an effect of using the incomparability relation). The primary methods of the European school are ELimination Et Choix Traduisant la REalité (ELECTRE) and PROMETHEE [60]. Importantly, among them, only the PROMETHEE II method provides a full order of decision options as a result of the aggregation of assessments. Other methods belonging to the European MCDA school are, for example, ORESTE, REGIME, ARGUS, TACTIC, MELCHIOR, and PAMSSEM. An important additional difference between the two schools is that compensation between criteria occurs in the methods using synthesis to one criterion, whereas the methods of the European school are considered non-compensatory [61].
The third group of MCDA methods is based on decision rules. The formal basis of these methods is fuzzy set theory and rough set theory. The algorithms of this group of methods consist in building decision rules with their consequences; using these rules, the variants are compared and evaluated, and the final ranking is generated. Examples of rule-based MCDA methods are the Dominance-based Rough Set Approach (DRSA) and the Characteristic Objects Method (COMET) [24].
COMET uses triangular fuzzy numbers to build the criteria functions. A set of characteristic objects is created using the core values of particular fuzzy numbers, so it is a method based on fuzzy logic mechanisms. Additionally, it can also support problems with uncertain data. Table 1 shows a comparison of the COMET method with other MCDA methods. Most importantly, the COMET technique works without knowing the criteria weights. The decision-maker’s task is to compare pairs of characteristic objects. Based on these comparisons, a model ranking is generated, and the model variants are the base for building a fuzzy rule database. When the considered alternatives are entered into the decision-making system, the appropriate rules are activated, and the aggregated evaluation of a variant is determined as the sum of the products of the degrees to which the variant activates the individual rules [62].
Additionally, the literature also indicates groups of so-called basic methods (e.g., the lexicographic method, the maximin method, or the additive weighting method) and mixed methods, e.g., EVAMIX [63] or QUALIFLEX, as well as the Pairwise Criterion Comparison Approach (PCCA). Examples of the latter are the methods MAPPAC, PRAGMA, PACMAN, and IDRA [64].

3. Fuzzy Set Theory: Preliminaries

Fuzzy set theory is a valuable tool for control and modeling in several scientific fields. Modeling using fuzzy sets has proven to be an effective way of formulating multi-criteria decision problems. The necessary concepts of fuzzy set theory can be presented using the following eight definitions [13]:
Definition 1.
The fuzzy set and the membership function—the characteristic function $\mu_A$ of a crisp set $A \subseteq X$ assigns a value of either 0 or 1 to each member of $X$; crisp sets only allow full membership $\mu_A(x) = 1$ or no membership at all $\mu_A(x) = 0$. This function can be generalized to a function $\mu_{\tilde{A}}$ so that the value assigned to an element of the universal set $X$ falls within a specified range, i.e., $\mu_{\tilde{A}}: X \rightarrow [0,1]$. The assigned value indicates the degree of membership of the element in the set $A$. The function $\mu_{\tilde{A}}$ is called a membership function, and the set $\tilde{A} = \{(x, \mu_{\tilde{A}}(x))\}$, where $x \in X$, defined by $\mu_{\tilde{A}}(x)$ for each $x \in X$, is called a fuzzy set.
Definition 2.
Triangular fuzzy number (TFN)—a fuzzy set $\tilde{A}$, defined on the universal set of real numbers $\Re$, is said to be a TFN $\tilde{A}(a, m, b)$ if its membership function has the following form:
$$\mu_{\tilde{A}}(x, a, m, b) = \begin{cases} 0, & x \leq a \\ \frac{x-a}{m-a}, & a \leq x \leq m \\ 1, & x = m \\ \frac{b-x}{b-m}, & m \leq x \leq b \\ 0, & x \geq b. \end{cases}$$
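Definition 2 translates directly into code. The following is a minimal sketch (the function name is ours):

```python
def tfn_membership(x, a, m, b):
    # Membership value of x in the triangular fuzzy number (a, m, b):
    # 0 outside [a, b], rising linearly to 1 at the core m, then falling.
    if x <= a or x >= b:
        return 0.0
    if x <= m:
        return (x - a) / (m - a)
    return (b - x) / (b - m)
```

For the TFN (0, 5, 10), for instance, membership is 1 at the core x = 5, 0.5 halfway up either slope, and 0 outside the support.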
Definition 3.
The support of a TFN $\tilde{A}$—this is the crisp subset of the set $\tilde{A}$ in which all elements have nonzero membership values in the set $\tilde{A}$:
$$S(\tilde{A}) = \{x : \mu_{\tilde{A}}(x) > 0\} = [a, b].$$
Definition 4.
The core of a TFN $\tilde{A}$—this is the singleton (one-element fuzzy set) with the membership value equal to one:
$$C(\tilde{A}) = \{x : \mu_{\tilde{A}}(x) = 1\} = \{m\}.$$
Definition 5.
The fuzzy rule—a single fuzzy rule can be based on the modus ponens tautology. The reasoning process uses the logical connectives IF-THEN, OR, and AND.
Definition 6.
The rule base—The rule base consists of logical rules determining causal relationships existing in the system between Fuzzy sets of its inputs and output.
Definition 7.
The T-norm operator—the T-norm operator (intersection) is a function $T$ modeling the AND intersection operation on two or more fuzzy numbers, e.g., $\tilde{A}$ and $\tilde{B}$:
$$\mu_{\tilde{A}}(x) \; \mathrm{AND} \; \mu_{\tilde{B}}(y) = \mu_{\tilde{A}}(x) \cdot \mu_{\tilde{B}}(y).$$
Definition 8.
The S-norm operator—the S-norm operator (union), or T-conorm, is a function $S$ modeling the OR union operation on two or more fuzzy numbers, e.g., $\tilde{A}$ and $\tilde{B}$:
$$\mu_{\tilde{A}}(x) \; \mathrm{OR} \; \mu_{\tilde{B}}(y) = \mu_{\tilde{A}}(x) + \mu_{\tilde{B}}(y) - \mu_{\tilde{A}}(x) \cdot \mu_{\tilde{B}}(y).$$
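Definitions 7 and 8 can be sketched as one-liners. This assumes the product t-norm given above and its dual probabilistic-sum s-norm (the exact s-norm form is our reconstruction of the garbled source formula):

```python
def t_norm(mu_a, mu_b):
    # product t-norm: fuzzy AND of two membership degrees
    return mu_a * mu_b

def s_norm(mu_a, mu_b):
    # probabilistic sum (dual of the product t-norm): fuzzy OR
    return mu_a + mu_b - mu_a * mu_b
```

Both operators reduce to classical conjunction and disjunction on crisp values 0 and 1, as a quick sanity check confirms.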

4. The Characteristic Objects Method

COMET (the Characteristic Objects Method) is a simple approach, most commonly used in the fields of sustainable transport [34,35,62], interactive marketing [65,66], sport [67], medicine [68], handling uncertain data in decision-making [69,70], and banking [71]. Carnero, in Reference [72], suggests using the COMET method as future work to improve her waste segregation model. COMET is an innovative method of identifying a multi-criteria expert decision model to solve decision problems based on a rule set, using elements of fuzzy set theory [24,68]. The COMET method distinguishes itself from other multiple-criteria decision-making methods by its resistance to the rank reversal paradox [73]. Contrary to other methods, the assessed alternatives are not compared with each other here, and the result of the assessment is obtained only on the basis of the model [24].
The whole decision-making process by using the COMET method is presented in Figure 1. The formal notation of this method can be presented using the following five steps [34]:
Step 1. Define the space of the problem—an expert determines the dimensionality of the problem by selecting the number $r$ of criteria, $C_1, C_2, \ldots, C_r$. Subsequently, the set of fuzzy numbers for each criterion $C_i$ is selected, i.e., $\tilde{C}_{i1}, \tilde{C}_{i2}, \ldots, \tilde{C}_{ic_i}$. In this way, the following result is obtained:
$$C_1 = \{\tilde{C}_{11}, \tilde{C}_{12}, \ldots, \tilde{C}_{1c_1}\}$$
$$C_2 = \{\tilde{C}_{21}, \tilde{C}_{22}, \ldots, \tilde{C}_{2c_2}\}$$
$$\cdots$$
$$C_r = \{\tilde{C}_{r1}, \tilde{C}_{r2}, \ldots, \tilde{C}_{rc_r}\},$$
where $c_1, c_2, \ldots, c_r$ are the numbers of fuzzy numbers for all criteria.
Step 2. Generate the characteristic objects—the characteristic objects ($CO$) are obtained by using the Cartesian product of the fuzzy number cores for each criterion, as follows:
$$CO = C(C_1) \times C(C_2) \times \cdots \times C(C_r).$$
As a result, an ordered set of all $CO$ is obtained:
$$CO_1 = \langle C(\tilde{C}_{11}), C(\tilde{C}_{21}), \ldots, C(\tilde{C}_{r1}) \rangle$$
$$CO_2 = \langle C(\tilde{C}_{11}), C(\tilde{C}_{21}), \ldots, C(\tilde{C}_{r2}) \rangle$$
$$\cdots$$
$$CO_t = \langle C(\tilde{C}_{1c_1}), C(\tilde{C}_{2c_2}), \ldots, C(\tilde{C}_{rc_r}) \rangle,$$
where $t$ is the number of $CO$:
$$t = \prod_{i=1}^{r} c_i.$$
Step 3. Rank the characteristic objects—the expert determines the Matrix of Expert Judgment ($MEJ$). It is the result of a pairwise comparison of the characteristic objects based on the expert’s knowledge. The $MEJ$ structure is as follows:
$$MEJ = \begin{pmatrix} \alpha_{11} & \alpha_{12} & \cdots & \alpha_{1t} \\ \alpha_{21} & \alpha_{22} & \cdots & \alpha_{2t} \\ \vdots & \vdots & \ddots & \vdots \\ \alpha_{t1} & \alpha_{t2} & \cdots & \alpha_{tt} \end{pmatrix},$$
where $\alpha_{ij}$ is the result of comparing $CO_i$ and $CO_j$ by the expert. The more preferred characteristic object gets one point, and the less preferred object gets zero points. If the preferences are balanced, both objects get half a point. This depends solely on the knowledge of the expert and can be presented as:
$$\alpha_{ij} = \begin{cases} 0.0, & f_{exp}(CO_i) < f_{exp}(CO_j) \\ 0.5, & f_{exp}(CO_i) = f_{exp}(CO_j) \\ 1.0, & f_{exp}(CO_i) > f_{exp}(CO_j), \end{cases}$$
where $f_{exp}$ is the expert mental judgment function. Afterwards, the vertical vector of the Summed Judgments ($SJ$) is obtained as follows:
$$SJ_i = \sum_{j=1}^{t} \alpha_{ij}.$$
Finally, an approximate value of preference is assigned to each characteristic object. As a result, the vector $P$ is obtained, where the $i$-th row contains the approximate value of preference for $CO_i$.
Step 4. The rule base—each characteristic object and its value of preference is converted to a fuzzy rule of the following detailed form:
$$\mathrm{IF} \; C(\tilde{C}_{1i}) \; \mathrm{AND} \; C(\tilde{C}_{2i}) \; \mathrm{AND} \; \ldots \; \mathrm{THEN} \; P_i.$$
In this way, a complete fuzzy rule base is obtained that approximates the expert mental judgment function $f_{exp}(CO_i)$.
Step 5. Inference and final ranking—each alternative is a set of crisp numbers corresponding to the criteria $C_1, C_2, \ldots, C_r$. It can be presented as follows:
$$A_i = \{a_{1i}, a_{2i}, \ldots, a_{ri}\}.$$
Each alternative activates the appropriate rules of the fuzzy rule base, and its aggregated evaluation is determined as the sum of the products of the rule preference values and the degrees to which the alternative activates the individual rules. The final ranking is obtained by ordering the alternatives according to these preference values.
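Under simplifying assumptions, Steps 2-5 can be sketched end to end: an explicit scoring function stands in for the expert's pairwise judgments, triangular fuzzy numbers span neighbouring core values, the SJ vector is min-max scaled to preferences P, and rule activation uses the product t-norm. All function names and the normalisation choice are ours, so this is an illustrative sketch rather than the authors' implementation:

```python
import itertools

def tri_mu(x, cores, j):
    # Membership of x in the j-th TFN whose core is cores[j]; neighbouring
    # cores act as support ends, degenerate at the domain boundaries.
    a = cores[j - 1] if j > 0 else cores[j]
    m = cores[j]
    b = cores[j + 1] if j < len(cores) - 1 else cores[j]
    if x == m:
        return 1.0
    if a < x < m:
        return (x - a) / (m - a)
    if m < x < b:
        return (b - x) / (b - m)
    return 0.0

def comet(criteria_cores, f_exp, alternatives):
    # Step 2: characteristic objects as the Cartesian product of core values.
    co_idx = list(itertools.product(*[range(len(c)) for c in criteria_cores]))
    cos = [tuple(criteria_cores[i][j] for i, j in enumerate(idx)) for idx in co_idx]
    # Step 3: pairwise comparison (the MEJ entries) summed directly into SJ,
    # then min-max scaled to [0, 1] preference values P.
    sj = [sum(0.5 if f_exp(ci) == f_exp(cj) else float(f_exp(ci) > f_exp(cj))
              for cj in cos) for ci in cos]
    p = [(s - min(sj)) / (max(sj) - min(sj)) for s in sj]
    # Steps 4-5: each alternative activates every rule via the product t-norm;
    # its score is the sum of activation degrees times rule preferences.
    scores = []
    for alt in alternatives:
        score = 0.0
        for idx, pref in zip(co_idx, p):
            act = 1.0
            for i, j in enumerate(idx):
                act *= tri_mu(alt[i], criteria_cores[i], j)
            score += act * pref
        scores.append(score)
    return scores
```

With two criteria whose cores are {0, 1} and an expert function that simply sums the core values, the alternative (1, 1) scores 1, (0, 0) scores 0, and (0.5, 0.5) falls exactly in between, matching the intuition behind the rule-base interpolation.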

5. Results

The detailed steps of the research carried out to identify the players’ ranking according to the methodical framework are presented in Figure 2. It is worth mentioning, once again, that the algorithmic background and methodical approach are provided by the COMET method.
The identified model creates a ranking, which is compared with Rating 2.0, proposed by Half-Life Television (HLTV), a news website that covers professional CS: GO news, tournaments, statistics, and rankings [23]. The obtained ranking is more natural to interpret, and each player assessment has three additional parameters.
Many parameters influence the player’s performance, including the assessment of his skills and predispositions. For instance, with a player’s age, the drop-off in reaction time makes it harder for them to compete and harder to aim at the head of a moving target. A high percentage of headshots reflects shooting skills and is a kind of prestige [74]. Therefore, the following six criteria have been selected [22,23]:
  • $C_1$—Average kills per round, the average number of kills scored by the player during one round;
  • $C_2$—Average damage per round, the mean damage inflicted by a player during one round;
  • $C_3$—Total kills, the total number of kills gained by the player;
  • $C_4$—K/D Ratio, the number of kills divided by the number of deaths;
  • $C_5$—Average assists per round, the mean number of assists gained by the player during one round; and
  • $C_6$—Average deaths per round, the average number of deaths of a player during one round.
There are plenty of criteria that could be used to create an evaluation model: for instance, the damage per round given by grenades, the total number of rounds played by a player (which could inform us about the player’s experience), or the high percentage of headshots mentioned earlier. However, the above set of six criteria has been chosen because of their greater impact on the assessment of the individual skills of each player. The collected data for all applied criteria and the Rating 2.0 assessment are derived from the official HLTV website and dated June 2019. Especially important are the $C_1$ and $C_4$ criteria. They inform us whether the chance of eliminating the player is smaller than the chance that he will kill his enemies.
The economy of a player depends on how much he has spent on weapons and armor, the kill awards received per elimination (based on the weapon type), the status of bomb planting or defusing, and, finally, who won the round [15]. Average kills per round ($C_1$) is always an important criterion because, by fragging (killing an enemy), you first of all eliminate the threat from your opponent. For each elimination, you receive, depending on the weapon used, the money needed to buy ammunition, equipment, grenades, and other utilities at a later stage of the game. For instance, an elimination with a sniper rifle (AWP) is the least economically profitable and gives the player only USD 100, while almost any pistol gives a USD 300 reward, and shotguns, which are the most cost-effective, give up to USD 900 in cash. Additionally, killed enemies lose the weapons they acquired, and thus all their equipment, such as kevlar with a helmet or a defuse kit (CT). Criterion $C_1$ is a profit-type criterion, where an increase in value means an increase in preference. Based on the player statistics from the HLTV database for the best 40 professional players, for $C_1$, the lowest obtained value is 0.72, the highest 0.88, and the average value is 0.78.
As the Average damage per round increases, the probability of killing an enemy increases, as well. Moreover, a player is more valuable and useful for a team when he deprives the enemy team of precious health points and makes gaining frags much easier for his teammates. There was a situation during the PGL Major Kraków 2017 event when the professional player Mikhail "Dosia" Stolyarov from Gambit Esports performed a remarkable play in the grand final against the Immortals team. His team (on the CT side) was going to lose the round because there was not enough time to defuse the bomb against three opponents. Dosia knew it was impossible to win, but he came up with an idea and threw a grenade to deal extra damage to the players who were saving their weapons, a few seconds before the detonation of the bomb, which takes many health points (HP) from players located in the area of the explosion. In doing so, he contributed to the death of two players, who lost their precious weapons and equipment, forcing them to spend extra money in the next round. That was an example of the validity of this criterion in professional CS: GO. Criterion C 2 is characterized by a positive correlation to player value. For criterion C 2 , the lowest result is 75.60, the highest 88.20, and the mean value is equal to 82.70.
Criterion C 3 determines the total number of kills scored by the player, which could signify that the player plays a lot and has a long background in Counter-Strike, like the legendary players Christopher “GeT RiGhT” Alesund from Sweden or Filip “Neo” Kubski from Poland. When the C 3 value increases, the player's evaluation also improves. As the total number of kills grows, the player's skill level and overall experience develop, as well, since he later plays against much better enemies. For criterion C 3 , the lowest result is 1516.00, the highest 4151.00, and the mean value is equal to 2514.90. Frankly, it is not the most critical parameter because players with far fewer frags can play as well or even better; it depends on the individual predispositions and innate potential of the gamer.
Criterion C 4 is probably the most prominent measure of a player's abilities in CS: GO. It is a profit type criterion, like the previous three criteria. It indicates that the chance of the player being eliminated is smaller than the chance that he will eliminate his enemies. If the total number of kills is greater than the overall number of deaths, the player's skill level is superior, and the gamer improves every time he plays. For professional gamers, criterion C 4 obtained the lowest result equal to 1.15, the highest 1.51, and the mean value equal to 1.25. Even the worst K/D Ratio value in this set of players is a great result.
Obtaining assists in team games is proof of successful and productive team play. In CS: GO, assists are received when there is evident proof that the player was close to eliminating an opponent; something went wrong, and, in the end, he only deprived the opponent of most of his health points without gaining a single frag. He thereby gives his teammates the opportunity for an easy kill, but gets only an assist instead of a full frag on his account. Often, players in a supporting role accumulate a significant number of assists because they contribute to eliminations of rivals by, for example, blinding them with a flash grenade, helping their colleagues. For criterion C 5 , the lowest result is 0.09, the highest 0.18, and the average value is equal to 0.13.
As is known in FPS games, the most important thing is to eliminate your opponents instead of being killed. By analyzing the Average number of deaths per round, we can determine which player loses the most shooting duels and has to watch the actions of his teammates merely as a spectator. It can show us the weaknesses of a player and the skill shortages that allow the best players to be distinguished. It is a cost-type criterion, which means a value increase indicates a preference decrease. For criterion C 6 , the lowest result is 0.52, the highest 0.68, and the mean value is equal to 0.63.
The values of the selected criteria C 1 – C 6 , and the positions and names of the alternatives, are presented in Table 2. In this study case, the considered problem is simplified to a structure, which is presented in Figure 3.
In this way, we have to identify three related models, each of which requires a much smaller number of queries to the expert. The final decision model consists of the three following models, where, for each one, nine characteristic objects and 36 pairwise comparisons are needed:
  • P 1 —Effectiveness per round assessment model with two inputs;
  • P 2 —Frag gaining assessment model with two inputs;
  • P 3 —Failures per round assessment model with two inputs.
In the Effectiveness per round assessment model ( P 1 ), we aggregate two essential criteria, Average kills per round ( C 1 ) and Average damage per round ( C 2 ), as input values. The output value is our player evaluation for model P 1 ; the lowest result is 0.23, the highest 0.88, and the mean value is equal to 0.45 for the top 40 professional players in CS: GO. The input values of the Frag gaining assessment model ( P 2 ) are two significant criteria, Total kills ( C 3 ) and K/D Ratio ( C 4 ). The outcome value is our player assessment for model P 2 ; the lowest result is 0.00, the highest 0.84, and the mean value is equal to 0.45. In the Failures per round assessment model ( P 3 ), we combine two crucial criteria, Average assists per round ( C 5 ) and Average deaths per round ( C 6 ). The output value is our player evaluation for model P 3 ; the lowest result is 0.25, the highest 0.78, and the mean value is equal to 0.44.
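The counts above (nine characteristic objects and 36 pairwise comparisons per two-input model) follow directly from the Cartesian product of three fuzzy-number cores per criterion. A minimal sketch in Python, using the C 1 and C 2 core values reported later for the P 1 model (the other two models are analogous):

```python
from itertools import product

# Cores of the three linguistic terms per input (values for the P1
# model as described in Section 5.1; the other models are analogous).
cores_c1 = [0.7, 0.8, 0.9]   # Average kills per round
cores_c2 = [70, 80, 90]      # Average damage per round

# Characteristic objects: Cartesian product of the cores.
characteristic_objects = list(product(cores_c1, cores_c2))

# Each model needs n*(n-1)/2 pairwise comparisons of its objects.
n = len(characteristic_objects)
pairwise_comparisons = n * (n - 1) // 2
```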
The model will be validated based on the results obtained from the official HLTV website for the top 10 professional CS: GO players for June 2019, which are presented in Table 2. To identify the final model for players assessment, we have to determine the three following assessment models, i.e., Effectiveness per round, Frag gaining, and Failures per round.

5.1. Effectiveness per Round Assessment Model

This model evaluates efficiency in eliminating and injuring enemies, which is one of the essential elements of CS: GO. The expert identified two significant criteria for the Effectiveness per round assessment model: Average kills per round, which is the mean number of frags scored by the player during one round, and Average damage per round, which is the mean damage dealt by the player during one round. Both are profit type criteria, where a value increase means a preference increase. In such complex problems, the relationship is rarely linear. Table 3 presents the values of the criteria C 1 and C 2 and the P 1 assessment model. Based on the presented data, it can be determined that the best value of criterion C 1 was achieved by ‘Simple’, and is equal to 0.88, while the worst result was obtained by ‘dexter’, with a value equal to 0.72. In the case of the second criterion, the best score was given to ‘huNter’ with 88.2, and the lowest score was received by ‘xsepower’, with a value equal to 75.6. Analyzing the results of the Effectiveness per round assessment model ( P 1 ), we can conclude that the highest score P 1 was obtained by ‘Simple’, and is equal to 0.8825. The triangular fuzzy numbers of criterion C 1 are presented in Figure 4, while C 2 is presented in Figure 5.
In the considered set of parameters, there were players with: Average kills per round ( C 1 ) with the values of the support of the triangular fuzzy number from 0.7 ( C 11 ) to 0.9 ( C 13 ) and the core valued 0.8 ( C 12 ); Average damage per round ( C 2 ) with the values of the support of the triangular fuzzy number from 70 ( C 21 ) to 90 ( C 23 ) and the core valued 80 ( C 22 ) health points. Based on the data presented in Table 4, it turned out that the output P 1 takes values from 0.1 to 0.9. Therefore, the variable P 1 will take two values, both of which will also be determined as triangular fuzzy numbers; they are displayed in Figure 6. The 36 pairwise comparisons of the 9 characteristic objects were carried out. Consequently, the Matrix of Expert Judgment ( M E J ) was defined as (15), where each α i j value was calculated using Equation (11).
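Each linguistic term above is a triangular fuzzy number. A sketch of the membership function follows, with the three C 1 terms modelled as the usual COMET partition over the reported support; the boundary terms having their cores at the support limits is an assumption consistent with Figure 4, not taken verbatim from the paper:

```python
def tfn_membership(x, a, m, b):
    """Membership of x in the triangular fuzzy number (a, m, b):
    0 outside the support [a, b], 1 at the core m, linear between.
    Degenerate shoulders (a == m or m == b) are handled."""
    if x < a or x > b:
        return 0.0
    if x == m:
        return 1.0
    return (x - a) / (m - a) if x < m else (b - x) / (b - m)

# Assumed shapes of the three terms of C1 (support 0.7-0.9, cores
# 0.7 / 0.8 / 0.9), forming a fuzzy partition:
C11 = (0.7, 0.7, 0.8)
C12 = (0.7, 0.8, 0.9)
C13 = (0.8, 0.9, 0.9)
```

For any crisp value inside the support, the memberships of adjacent terms sum to one, e.g., 0.75 belongs to C 11 and C 12 with degree 0.5 each.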
M E J =
| 0.5  0    0    0    0    0    0    0    0   |
| 1    0.5  0    0    0    0    0    0    0   |
| 1    1    0.5  0    0    0    0    0    0   |
| 1    1    1    0.5  0    0    0    0    0   |
| 1    1    1    1    0.5  0    0    0    0   |
| 1    1    1    1    1    0.5  0    0    0   |
| 1    1    1    1    1    1    0.5  0    0   |
| 1    1    1    1    1    1    1    0.5  0   |
| 1    1    1    1    1    1    1    1    0.5 |      (15)
As a result, the vector of the Summed Judgements ( S J ) was calculated using Equation (12), and it was employed to determine the values of preference ( P 1 ), which are presented in Table 3. The characteristic objects C O 1 – C O 9 presented in Table 3 are generated using the Cartesian product of the fuzzy numbers’ cores of criteria C 1 and C 2 . The highest value of preference P 1 was received by C O 9 , with a triangular fuzzy number of criterion C 1 valued 0.9 ( C 13 ) and a triangular fuzzy number of criterion C 2 valued 90 ( C 23 ). The lowest value of preference P 1 fell to C O 1 , with a triangular fuzzy number of criterion C 1 valued 0.7 ( C 11 ) and a triangular fuzzy number of criterion C 2 valued 70 ( C 21 ). With an increase in the value of criterion C 1 , the preference increases more significantly than with an increase in the value of criterion C 2 . This means that C 1 has a greater impact on the assessment of the P 1 model than C 2 .
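The SJ step can be sketched as follows: each SJ value is a row sum of the MEJ, and the distinct SJ values are mapped by rank onto a uniform preference scale. Note that this uses a common COMET convention of spreading the k distinct values over [0, 1], so the exact preference numbers may be scaled differently from those reported in Table 3:

```python
def comet_preferences(mej):
    """Row sums of MEJ give the Summed Judgements vector SJ; the k
    distinct SJ values are then mapped by rank onto {0, 1/(k-1), ..., 1}
    to obtain the preference of each characteristic object."""
    sj = [sum(row) for row in mej]
    distinct = sorted(set(sj))
    k = len(distinct)
    return [distinct.index(s) / (k - 1) for s in sj]

# The lower-triangular MEJ of the P1 model (Equation (15)):
mej_p1 = [[0.5 if i == j else (1.0 if i > j else 0.0)
           for j in range(9)] for i in range(9)]
prefs = comet_preferences(mej_p1)  # CO1 lowest, CO9 highest
```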
For a better demonstration of the relevance of the criteria to the P 1 assessment model, Spearman’s rank correlation coefficient ρ was calculated. Spearman’s ρ between the criteria C 1 , C 2 and the reference ranking obtained by the P 1 assessment model for the top 10 players is equal to 0.9273 and 0.2970, respectively. The correlation for the first criterion is strong, while for the second it is weak. The visualization of the relation diagram of Average kills per round ( C 1 ) and the P 1 assessment model, as well as the relation diagram of Average damage per round ( C 2 ) and the P 1 assessment model, is presented in Figure 7.
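The correlation measure used throughout this section is Spearman’s ρ; for rankings without ties it can be computed from the rank differences, as in this sketch:

```python
def spearman_rho(x, y):
    """Spearman's rank correlation for two samples without ties:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), where d_i is the
    difference between the ranks of x_i and y_i."""
    def ranks(values):
        order = sorted(range(len(values)), key=values.__getitem__)
        r = [0] * len(values)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

A perfectly monotone relationship gives ρ = 1 even when it is non-linear, which is why the rank correlation suits the non-linear models considered here.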

5.2. Frag Gaining Assessment Model

This model expresses a player’s ability to secure eliminations, based on the number of kills he has obtained in official CS: GO matches and a specific factor which shows that the player is superior. The expert identified two significant criteria for the Frag gaining assessment model: Total kills, which is the total number of frags delivered by the player, and K/D Ratio, which is the number of frags divided by the number of deaths. Both are profit type criteria, where, as mentioned earlier, preference increases with increasing values. Table 5 shows the values of the criteria C 3 and C 4 and the P 2 assessment model. Based on the presented data, it can be determined that the best value of criterion C 3 was achieved by ‘ZywOo’, and is equal to 1.000, while the worst result was obtained by ‘BnTeT’, with a value equal to 0. In the case of the second criterion, the best score was given to ‘Jame’ with 1.51, and the lowest score was received by ‘Texta’, with a value equal to 1.15. Analyzing the results of the Frag gaining assessment model ( P 2 ), we can conclude that the highest score was obtained by ‘Jame’, and is equal to 0.8423. The triangular fuzzy numbers of criterion C 3 are presented in Figure 8 and C 4 in Figure 9.
In the considered set of parameters, there were players with: Total kills ( C 3 ) with the values of the support of the triangular fuzzy number from 0 ( C 31 ) to 1 ( C 33 ) and the core valued 0.5 ( C 32 ); K/D Ratio ( C 4 ) with the values of the support of the triangular fuzzy number from 1 ( C 41 ) to 1.6 ( C 43 ) and the core valued 1.25 ( C 42 ). Based on the data presented in Table 6, it turned out that the output P 2 takes values from 0.2 to 0.9. Therefore, the variable P 2 will take two values, both of which will also be saved as triangular fuzzy numbers; they are displayed in Figure 10. The 36 pairwise comparisons of the 9 characteristic objects were carried out. Consequently, the Matrix of Expert Judgment ( M E J ) was defined as (16), where each α i j value was calculated using Equation (11).
M E J =
| 0.5  0    0    0    0    0    0    0    0   |
| 1    0.5  0    1    0    0    1    0    0   |
| 1    1    0.5  1    1    0    1    1    0   |
| 1    0    0    0.5  0    0    0    0    0   |
| 1    1    0    1    0.5  0    1    0    0   |
| 1    1    1    1    1    0.5  1    1    0   |
| 1    0    0    1    0    0    0.5  0    0   |
| 1    1    0    1    1    0    1    0.5  0   |
| 1    1    1    1    1    1    1    1    0.5 |      (16)
As a result, the vector of the Summed Judgements ( S J ) was calculated using Equation (12), and it was used to determine the values of preference ( P 2 ), which are presented in Table 5. The characteristic objects C O 1 – C O 9 presented in Table 5 are generated using the Cartesian product of the fuzzy numbers’ cores of criteria C 3 and C 4 . The highest value of preference P 2 was received by C O 9 , with a triangular fuzzy number of criterion C 3 valued 1 ( C 33 ) and a triangular fuzzy number of criterion C 4 valued 1.6 ( C 43 ). The lowest value of preference P 2 fell to C O 1 , with a triangular fuzzy number of criterion C 3 valued 0 ( C 31 ) and a triangular fuzzy number of criterion C 4 valued 1 ( C 41 ). With an increase in the value of criterion C 4 , the preference increases more significantly than with an increase in the value of criterion C 3 . This means that C 4 has a greater impact on the assessment of the P 2 model than C 3 .
For a better demonstration of the relevance of the criteria to the P 2 assessment model, Spearman’s rank correlation coefficient ρ was calculated. Spearman’s ρ between the criteria C 3 , C 4 and the reference ranking obtained by the P 2 assessment model is equal to 0.5636 and 0.4910, respectively. The correlation for the first criterion is moderately strong, while for the second it is weak. The visualization of the relation diagram of Total kills ( C 3 ) and the P 2 assessment model, as well as the relation diagram of K/D Ratio ( C 4 ) and the P 2 assessment model, is shown in Figure 11.

5.3. Failures per Round Assessment Model

This model evaluates the weaker side of the player by showing how often he has a decline in form and skill deficiencies, which are vital to maintaining a position at the top of the global e-sport scene. The expert identified two crucial criteria for the Failures per round assessment model: Average assists per round, which is the average number of assists scored by the player during one round, and Average deaths per round, which is the average number of deaths of the player during one round. The first is a profit type criterion, which means that a value increase indicates a preference increase; the second, however, is a cost-type criterion, which means a value increase indicates a preference decrease. Table 7 shows the values of the criteria C 5 and C 6 and the P 3 assessment model. Based on the presented data, it can be determined that the best value of criterion C 5 was achieved by ‘INS’, and is equal to 0.18, while the worst result was obtained by ‘kNgV-’, with a value equal to 0.09. In the case of the second criterion, the best score was given to ‘Jame’ with 0.52, and the worst score was received by ‘roeJ’, with a value equal to 0.68. Analyzing the results of the Failures per round assessment model ( P 3 ), we can conclude that the highest score was obtained by ‘Jame’, and is equal to 0.7750. The triangular fuzzy numbers of criterion C 5 are presented in Figure 12 and C 6 in Figure 13.
In the considered set of parameters, there were players with: Average assists per round ( C 5 ) with the values of the support of the triangular fuzzy number from 0.05 ( C 51 ) to 0.2 ( C 53 ) and the core valued 0.1 ( C 52 ); Average deaths per round ( C 6 ) with the values of the support of the triangular fuzzy number from 0.5 ( C 61 ) to 0.7 ( C 63 ) and the core valued 0.6 ( C 62 ). Based on the data presented in Table 8, it turned out that the output P 3 takes values from 0.2 to 0.8. Therefore, the variable P 3 will take two values, both of which will also be saved as triangular fuzzy numbers; they are displayed in Figure 14. The 36 pairwise comparisons of the 9 characteristic objects were carried out. Consequently, the Matrix of Expert Judgment ( M E J ) was defined as (17), where each α i j value was calculated using Equation (11).
M E J =
| 0.5  0    0    1    1    1    1    1    1   |
| 1    0.5  0    1    1    1    1    1    1   |
| 1    1    0.5  1    1    1    1    1    1   |
| 0    0    0    0.5  0    0    1    1    1   |
| 0    0    0    1    0.5  0    1    1    1   |
| 0    0    0    1    1    0.5  1    1    1   |
| 0    0    0    0    0    0    0.5  0    0   |
| 0    0    0    0    0    0    1    0.5  0   |
| 0    0    0    0    0    1    1    0    0.5 |      (17)
As a result, the vector of the Summed Judgements ( S J ) was calculated using Equation (12), and it was employed to determine the values of preference ( P 3 ), which are presented in Table 7. The characteristic objects C O 1 – C O 9 presented in Table 7 are generated using the Cartesian product of the fuzzy numbers’ cores of criteria C 5 and C 6 . The highest value of preference P 3 was received by C O 3 , with a triangular fuzzy number of criterion C 5 valued 0.2 ( C 53 ) and a triangular fuzzy number of criterion C 6 valued 0.5 ( C 61 ). The lowest value of preference P 3 fell to C O 7 , with a triangular fuzzy number of criterion C 5 valued 0.05 ( C 51 ) and a triangular fuzzy number of criterion C 6 valued 0.7 ( C 63 ). With a decrease in the value of criterion C 6 , the preference increases more significantly than with an increase in the value of criterion C 5 . This means that C 6 has a greater impact on the assessment of the P 3 model than C 5 .
For a better demonstration of the relevance of the criteria to the P 3 assessment model, Spearman’s rank correlation coefficient ρ was calculated. Spearman’s ρ between the criteria C 5 , C 6 and the reference ranking obtained by the P 3 assessment model is equal to 0.5273 and 0.1636, respectively. The correlation for the first criterion is moderately strong, while for the second it is weak. The visualization of the relation diagram of Average assists per round ( C 5 ) and the P 3 assessment model, as well as the relation diagram of Average deaths per round ( C 6 ) and the P 3 assessment model, is shown in Figure 15.

5.4. Final Model

The CS: GO players assessment model finally determines the uniqueness of a Counter-Strike: Global Offensive player by placing him in the final ranking, based on the previous partial assessments. The final model for the players’ assessment has three aggregated input variables: the output variables from the Effectiveness per round assessment, the Frag gaining assessment, and the Failures per round assessment. The aggregated variables P 1 and P 2 are both profit type, whereas P 3 is cost type. The triangular fuzzy numbers of parameter P 1 are presented in Figure 16, P 2 in Figure 17, and P 3 in Figure 18.
In the considered set of parameters, there were players with: Effectiveness per round ( P 1 ) with the values of the support of the triangular fuzzy number from 0.2 ( P 11 ) to 1 ( P 12 ); Frag gaining ( P 2 ) with the values of the support of the triangular fuzzy number from 0.1 ( P 21 ) to 0.9 ( P 22 ); and Failures per round ( P 3 ) with the values of the support of the triangular fuzzy number from 0.2 ( P 31 ) to 0.9 ( P 32 ). The 28 pairwise comparisons of the 8 characteristic objects were carried out. Consequently, the Matrix of Expert Judgment ( M E J ) was defined as (18), where each α i j value was calculated using Equation (11).
M E J =
| 0.5  0    0    0    1    0    0    0   |
| 1    0.5  1    0    1    1    1    0   |
| 1    0    0.5  0    1    0    1    0   |
| 1    1    1    0.5  1    1    1    1   |
| 0    0    0    0    0.5  0    0    0   |
| 1    0    1    0    1    0.5  1    0   |
| 1    0    0    0    1    0    0.5  0   |
| 1    1    1    0    1    1    1    0.5 |      (18)
As a result, the vector of the Summed Judgements ( S J ) was calculated using Equation (12), and it was employed to determine the final values of preference (P), which are presented in Table 9. The characteristic objects C O 1 – C O 8 presented in Table 9 are generated using the Cartesian product of the fuzzy numbers’ cores of the related models P 1 , P 2 , and P 3 . The highest value of preference P was received by C O 4 , with a triangular fuzzy number of parameter P 1 valued 0.9 ( P 12 ), a triangular fuzzy number of parameter P 2 valued 0.9 ( P 22 ), and a triangular fuzzy number of parameter P 3 valued 0.2 ( P 32 ). The lowest value of preference P fell to C O 5 , with a triangular fuzzy number of parameter P 1 valued 0.1 ( P 11 ), a triangular fuzzy number of parameter P 2 valued 0.2 ( P 21 ), and a triangular fuzzy number of parameter P 3 valued 0.8 ( P 31 ). With an increase in the value of parameter P 2 , the preference increases more significantly than with an increase in the values of parameters P 1 and P 3 . This means that P 2 has the greatest impact on the assessment of the P model compared to the other two parameters.
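Once the characteristic-object preferences are fixed, the final score of a player with crisp inputs ( P 1 , P 2 , P 3 ) follows by fuzzy inference: each object's preference is weighted by the product of the input membership degrees. In this sketch, the shoulder shapes of the two terms per input are assumptions consistent with the reported supports (the exact shapes appear in Figures 16-18), and the ordering of co_prefs must match the Cartesian-product order of the terms:

```python
from itertools import product

def tfn(x, a, m, b):
    # Triangular membership: 0 outside [a, b], 1 at core m.
    if x < a or x > b:
        return 0.0
    if x == m:
        return 1.0
    return (x - a) / (m - a) if x < m else (b - x) / (b - m)

# Two shoulder terms per input over the supports reported in the text
# (exact shapes are an assumption, not taken from the paper's figures).
TERMS_P1 = [(0.2, 0.2, 1.0), (0.2, 1.0, 1.0)]
TERMS_P2 = [(0.1, 0.1, 0.9), (0.1, 0.9, 0.9)]
TERMS_P3 = [(0.2, 0.2, 0.9), (0.2, 0.9, 0.9)]

def comet_infer(x1, x2, x3, co_prefs):
    """Final preference for a crisp input: sum over the 8 characteristic
    objects of (CO preference) * (product of membership degrees)."""
    score = 0.0
    for (t1, t2, t3), pref in zip(
            product(TERMS_P1, TERMS_P2, TERMS_P3), co_prefs):
        score += tfn(x1, *t1) * tfn(x2, *t2) * tfn(x3, *t3) * pref
    return score
```

Because the term memberships form a partition of unity on each support, the weights sum to one, so the final score interpolates the characteristic-object preferences.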
For a better demonstration of the relevance of the obtained parameters to the final assessment model P, Spearman’s rank correlation coefficient ρ was calculated. Spearman’s ρ between the P 1 , P 2 , and P 3 models and the final ranking P is equal, respectively, to 0.5122, 0.6679, and 0.3182. The correlation for the first two models is moderately strong, and, in the case of the third model, the correlation is weak. The visualization of the relation diagram of Effectiveness per round ( P 1 ) and the final assessment P is shown in Figure 19, the relation diagram of Frag gaining ( P 2 ) and the final assessment P is presented in Figure 20, and the relation diagram of Failures per round ( P 3 ) and the final assessment P is presented in Figure 21.
The sample data for the top 10 players are shown in Table 10. The final decision assessment model identified ‘Simple’ as the best player overall, while the worst rating was given to ’Hatz’. Analyzing the results of the three related models, we can conclude that the highest score in the first model ( P 1 ) was obtained by ‘Simple’ again, and is equal to 0.8825. In the P 2 and P 3 models, the best outcome was acquired by ‘Jame’, with the value 0.8423 as P 2 and 0.7750 as P 3 . An interesting fact is that ‘ZywOo’, who took second place, was still better than ‘Jame’, even though he did not have the best score in any of the three models. ‘ZywOo’ received a much better result in the first model and had a score comparable to ‘Jame’ in the second model. Furthermore, ‘huNter’, with the fourth result, came close to beating ‘Jame’ and taking over his position. In comparison with ‘Jame’, ‘huNter’ had a much higher assessment in P 1 , getting average results in the rest of the models. It follows from this that the most critical models are P 1 and P 2 .
Spearman’s rank correlation coefficient ρ between the P 1 , P 2 , and P 3 models and the reference ranking is equal, respectively, to 0.7818, 0.7091, and 0.0061. The correlation for the first two models is moderately strong, and, in the case of the third model, there is no correlation. However, Spearman’s ρ between the final model and the reference ranking is equal to 0.9636, which means that both rankings are strongly correlated, and the proposed structure of the assessment model defines the investigated relationships well.

6. Practical Exploitation of the Identified Model

This section applies the proposed players’ assessment model, using a hierarchical structure with the application of the COMET method. It describes every related assessment model and shows the final summary and obtained results for the top 40 professional players in the CS: GO game. The performance table is presented in Table 11.
The performance table of the selected criteria C 1 , C 2 and the assessment model P 1 is presented in Table 12. For a better demonstration of the relevance of the criteria to the P 1 assessment model, Spearman’s rank correlation coefficient ρ was calculated. Spearman’s ρ between the criteria C 1 , C 2 and the reference ranking obtained by the P 1 assessment model is equal to 0.6670 and 0.2420, respectively. The correlation for the first criterion is moderately strong, while for the second it is weak. The visualization of the relation diagram of Average kills per round ( C 1 ) and the P 1 assessment model, as well as the relation diagram of Average damage per round ( C 2 ) and the P 1 assessment model, is presented in Figure 22. The P 2 and P 3 models were analyzed, as well, and their results are presented in an analogous way. The whole process can be found in Appendix A.
The sample data for the top 40 players are shown in Table 13. The final decision assessment model identified ‘Simple’ as the best player overall, with an excellent value equal to 0.7437, while the worst rating was given to ‘BnTeT’, who received a value equal to 0.1021. Analyzing the results of the three related models, we can conclude that the highest score in the first model P 1 was obtained by ‘Simple’ again, and is equal to 0.8825, while the lowest score, received by ’NAF’, was only 0.2275. In the P 2 and P 3 models, the best outcome was acquired by ‘Jame’, with the value 0.8423 as P 2 and 0.7750 as P 3 . The worst assessment in P 2 was given to ‘BnTeT’, with the value 0, and in the P 3 model the lowest evaluation was given to ‘roeJ’, with the value 0.2500. An interesting fact is that ‘ZywOo’, who took second place, was still better than ‘Jame’, even though he did not have the best score in any of the three models. ‘ZywOo’ received a much better result in the first model and had a score comparable to ‘Jame’ in the second model. Furthermore, ‘huNter’, with the fourth result, came close to beating ‘Jame’ and taking over his position. In comparison with ‘Jame’, ‘huNter’ had a much higher assessment in P 1 , getting average results in the rest of the models. It follows from this that the most critical models are P 1 and P 2 .
To show the relation of the obtained parameters to the final assessment model P, Spearman’s rank correlation coefficient ρ was calculated. Spearman’s ρ between the P 1 , P 2 , and P 3 models and the final ranking P is equal, respectively, to 0.5122, 0.6679, and 0.3182. The correlation for the first two models is moderately strong, and, in the case of the third model, the correlation is weak. The visualization of the relation diagram of Effectiveness per round ( P 1 ) and the final assessment P is shown in Figure 23, the relation diagram of Frag gaining ( P 2 ) and the final assessment P is presented in Figure 24, and the relation diagram of Failures per round ( P 3 ) and the final assessment P is presented in Figure 25.
Spearman’s coefficient ρ between the final model and the reference ranking is equal to 0.5304, which means that both rankings are moderately correlated. The visualization of the relation diagram of the final ranking P and Rating 2.0 is presented in Figure 26.

7. Conclusions

The objective of this work was the identification of a model to create a ranking of players in the popular e-sport game Counter-Strike: Global Offensive (CS: GO), using an appropriate multi-criteria decision-making method. For verification purposes, the obtained ranking of players was compared to the existing ranking created by HLTV, called Rating 2.0, which is the most prestigious ranking for this game. It was decided to determine a ranking for the top 10 and, later, for the top 40 players. An additional purpose of this paper is to familiarize people with the term e-sport and to convince them that it is worthwhile and future-proof.
The main contribution of this work is the proposal of a CS: GO players assessment model with three related evaluation models. It was necessary to choose the right method, build the associated models for the players, and then calculate the players’ assessments of their performances. A significant feature of COMET is its resistance to the rank reversal paradox: it does not matter which set of players is evaluated, because each of them will always receive the same value. The comparison of characteristic objects is also easier than the direct comparison of players, because the distance between characteristic objects is bigger than the distance between the compared players. The identification of the CS: GO players assessment model additionally allows evaluating any set of players in the considered numerical space without involving the expert in the evaluation process again, because the model is defined for a certain space of player characteristics. Another original feature of the COMET method is that it resigns from arbitrary global criterion weights, which determine the average significance of a criterion for the final assessment, since the linear weighting of non-linear problems reduces the accuracy of the results. Therefore, the COMET method was chosen as the best approach for the identification of the players’ assessment model.
The results demonstrate that the model can be utilized to evaluate the players, and it helps to generate the ranking and select the best CS: GO player. The positions of incorrectly classified players are quite close to each other. Spearman’s coefficient ρ between the final model and the reference ranking is equal to 0.5304, which means that both rankings are moderately correlated. Despite this rather average result, the ranking might be considered sufficient, because the top positions of the classification fit the reference ranking more closely. The proposed structure of the assessment model satisfactorily defines the investigated relationships.
Further work should concentrate on the improvement of model effectiveness. Perhaps adding a greater number of input criteria, and thus increasing the number of related assessment models, could make the final ranking more reliable and better able to reflect players’ real talent. Moreover, future work should focus on prospective empirical investigations not only for the CS: GO game but also for other e-sport games.

Author Contributions

Conceptualization, K.U. and W.S.; methodology, K.U. and W.S.; software, K.U.; validation, J.W. and W.S.; formal analysis, J.W. and W.S.; investigation, K.U.; resources, K.U.; data curation, K.U.; writing—original draft preparation, K.U.; writing—review and editing, W.S. and J.W.; visualization, W.S.; supervision, W.S. and J.W.; project administration, W.S.; funding acquisition, J.W. All authors have read and agreed to the published version of the manuscript.

Funding

The work was supported by the National Science Centre, Decision number UMO-2018/29/B/HS4/02725 (W.S.), and by the project financed within the framework of the program of the Minister of Science and Higher Education under the name “Regional Excellence Initiative” in the years 2019–2022, Project Number 001/RID/2018/19; the amount of financing: PLN 10,684,000.00 (J.W.).

Acknowledgments

The authors would like to thank the editor and the anonymous reviewers, whose insightful comments and constructive suggestions helped us to significantly improve the quality of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CS: GO            Counter-Strike: Global Offensive
COMET             The Characteristic Objects Method
MEJ               Matrix of Expert Judgment
SJ                vector of the Summed Judgments
e-sport/eSports   electronic sports

Appendix A

Spearman’s rank correlation coefficient ρ between the criteria C 3 , C 4 and the reference ranking obtained by the P 2 assessment model is equal to 0.1283 and 0.7555, respectively. There is no correlation for the first criterion, while the second is moderately correlated with the reference ranking. The visualization of the relation diagram of Total kills ( C 3 ) and the P 2 assessment model, as well as the relation diagram of K/D ratio ( C 4 ) and the P 2 assessment model, is shown in Figure A1.
Table A1. The performance table of the selected criteria C 3 , C 4 and assessment model P 2 .
| Pos. | Name | C3 | C4 | P2 |
|---|---|---|---|---|
| 1 | s1mple | 0.168 | 1.50 | 0.6849 |
| 2 | ZywOo | 1.000 | 1.40 | 0.7857 |
| 3 | Jame | 0.755 | 1.51 | 0.8423 |
| 4 | Jamppi | 0.507 | 1.30 | 0.5553 |
| 5 | huNter | 0.981 | 1.22 | 0.5753 |
| 6 | vsm | 0.343 | 1.22 | 0.4158 |
| 7 | meyern | 0.080 | 1.28 | 0.4271 |
| 8 | Kaze | 0.089 | 1.32 | 0.4723 |
| 9 | Hatz | 0.190 | 1.28 | 0.4546 |
| 10 | Sico | 0.137 | 1.36 | 0.5271 |
| 11 | yuurih | 0.879 | 1.24 | 0.5798 |
| 12 | aliStair | 0.194 | 1.32 | 0.4985 |
| 13 | TenZ | 0.338 | 1.22 | 0.4145 |
| 14 | xsepower | 0.888 | 1.31 | 0.6613 |
| 15 | roeJ | 0.312 | 1.18 | 0.3480 |
| 16 | floppy | 0.658 | 1.32 | 0.6145 |
| 17 | Brehze | 0.370 | 1.22 | 0.4225 |
| 18 | KSCERATO | 0.805 | 1.35 | 0.6834 |
| 19 | electronic | 0.044 | 1.21 | 0.3260 |
| 20 | EliGE | 0.525 | 1.19 | 0.4162 |
| 21 | woxic | 0.069 | 1.25 | 0.3923 |
| 22 | kNgV- | 0.170 | 1.27 | 0.4389 |
| 23 | INS | 0.169 | 1.19 | 0.3273 |
| 24 | BnTeT | 0.000 | 1.23 | 0.0000 |
| 25 | erkaSt | 0.213 | 1.22 | 0.3833 |
| 26 | somedieyoung | 0.335 | 1.19 | 0.3688 |
| 27 | NAF | 0.456 | 1.18 | 0.3840 |
| 28 | NiKo | 0.209 | 1.18 | 0.3223 |
| 29 | dexter | 0.202 | 1.18 | 0.3205 |
| 30 | kennyS | 0.538 | 1.24 | 0.4945 |
| 31 | jks | 0.085 | 1.19 | 0.3063 |
| 32 | blameF | 0.364 | 1.18 | 0.3610 |
| 33 | shz | 0.187 | 1.24 | 0.4067 |
| 34 | nexa | 0.874 | 1.24 | 0.5785 |
| 35 | Bubzkji | 0.634 | 1.16 | 0.3985 |
| 36 | Texta | 0.220 | 1.15 | 0.2800 |
| 37 | coldzera | 0.095 | 1.22 | 0.3538 |
| 38 | frozen | 0.483 | 1.19 | 0.4058 |
| 39 | MarKE | 0.127 | 1.17 | 0.2868 |
| 40 | mantuu | 0.470 | 1.16 | 0.3575 |
Figure A1. The relation diagram of Total kills ( C 3 ) for assessment P 2 (left side) and K/D Ratio ( C 4 ) for assessment P 2 (right side).
The Spearman’s rank correlation coefficients ρ between the criteria C5 and C6 and the reference ranking obtained by the P3 assessment model are equal to −0.1225 and −0.2722, respectively; both criteria are weakly negatively correlated with the reference ranking. The relation diagrams of Average assists per round (C5) against the P3 assessment model and of Average deaths per round (C6) against the P3 assessment model are shown in Figure A2.
Table A2. The performance table of the selected criteria C 5 , C 6 and assessment model P 3 .
| Pos. | Name | C5 | C6 | P3 |
|---|---|---|---|---|
| 1 | s1mple | 0.09 | 0.59 | 0.5125 |
| 2 | ZywOo | 0.12 | 0.59 | 0.5625 |
| 3 | Jame | 0.09 | 0.52 | 0.7750 |
| 4 | Jamppi | 0.10 | 0.64 | 0.3500 |
| 5 | huNter | 0.15 | 0.66 | 0.3375 |
| 6 | vsm | 0.13 | 0.65 | 0.3500 |
| 7 | meyern | 0.12 | 0.64 | 0.3750 |
| 8 | Kaze | 0.10 | 0.60 | 0.5000 |
| 9 | Hatz | 0.15 | 0.60 | 0.5625 |
| 10 | Sico | 0.13 | 0.56 | 0.6875 |
| 11 | yuurih | 0.15 | 0.63 | 0.4500 |
| 12 | aliStair | 0.12 | 0.58 | 0.6000 |
| 13 | TenZ | 0.13 | 0.65 | 0.3500 |
| 14 | xsepower | 0.09 | 0.58 | 0.5500 |
| 15 | roeJ | 0.14 | 0.68 | 0.2500 |
| 16 | floppy | 0.14 | 0.65 | 0.3625 |
| 17 | Brehze | 0.12 | 0.65 | 0.3375 |
| 18 | KSCERATO | 0.12 | 0.56 | 0.6750 |
| 19 | electronic | 0.14 | 0.62 | 0.4750 |
| 20 | EliGE | 0.14 | 0.65 | 0.3625 |
| 21 | woxic | 0.11 | 0.61 | 0.4750 |
| 22 | kNgV- | 0.09 | 0.62 | 0.4000 |
| 23 | INS | 0.18 | 0.62 | 0.5250 |
| 24 | BnTeT | 0.15 | 0.59 | 0.6000 |
| 25 | erkaSt | 0.14 | 0.65 | 0.3625 |
| 26 | somedieyoung | 0.14 | 0.65 | 0.3625 |
| 27 | NAF | 0.16 | 0.61 | 0.5375 |
| 28 | NiKo | 0.12 | 0.67 | 0.2625 |
| 29 | dexter | 0.14 | 0.65 | 0.3625 |
| 30 | kennyS | 0.10 | 0.62 | 0.4250 |
| 31 | jks | 0.12 | 0.64 | 0.3750 |
| 32 | blameF | 0.16 | 0.64 | 0.4250 |
| 33 | shz | 0.13 | 0.65 | 0.3500 |
| 34 | nexa | 0.13 | 0.60 | 0.5375 |
| 35 | Bubzkji | 0.13 | 0.66 | 0.3125 |
| 36 | Texta | 0.16 | 0.66 | 0.3500 |
| 37 | coldzera | 0.11 | 0.64 | 0.3625 |
| 38 | frozen | 0.14 | 0.63 | 0.4375 |
| 39 | MarKE | 0.13 | 0.63 | 0.4250 |
| 40 | mantuu | 0.12 | 0.67 | 0.2625 |
Figure A2. The relation diagram of Average assists per round ( C 5 ) for assessment P 3 (left side) and Average deaths per round ( C 6 ) for assessment P 3 (right side).

References

1. Xen, C. The Road to Professionalism: A Qualitative Study on the Institutionalization of eSports. 2017. Available online: http://gupea.ub.gu.se/bitstream/2077/52951/1/gupea_2077_52951_1.pdf (accessed on 11 June 2020).
2. Jonasson, K.; Thiborg, J. Electronic sport and its impact on future sport. Sport Soc. 2010, 13, 287–299.
3. Hamari, J.; Sjöblom, M. What is eSports and why do people watch it? Internet Res. 2017, 27, 211–232.
4. Lux, M.; Halvorsen, P.; Dang-Nguyen, D.T.; Stensland, H.; Kesavulu, M.; Potthast, M.; Riegler, M. Summarizing E-sports matches and tournaments: The example of counter-strike: Global offensive. In Proceedings of the 11th ACM Workshop on Immersive Mixed and Virtual Environment Systems, Amherst, MA, USA, 18 June 2019; pp. 13–18.
5. Rizani, M.N.; Iida, H. Analysis of Counter-Strike: Global Offensive. In Proceedings of the 2018 International Conference on Electrical Engineering and Computer Science (ICECOS), Pangkal Pinang, Indonesia, 2–4 October 2018; pp. 373–378.
6. Bornemark, O. Success factors for e-sport games. In Proceedings of the Umeå’s 16th Student Conference in Computing Science, Umeå, Sweden, June 2013; pp. 1–12.
7. Egliston, B. E-sport, phenomenality and affect. Transformations 2018, 31, 156–176.
8. Menasce, R.M. From Casual to Professional: How Brazilians Achieved Esports Success in Counter-Strike: Global Offensive. Ph.D. Thesis, Northeastern University, Boston, MA, USA, 2017.
9. Ma, H.; Wu, Y.; Wu, X. Research on essential difference of e-sport and online game. In Informatics and Management Science V; Springer: Berlin/Heidelberg, Germany, 2013; pp. 615–621.
10. Martončik, M. e-Sports: Playing just for fun or playing to satisfy life goals? Comput. Hum. Behav. 2015, 48, 208–211.
11. Makarov, I.; Savostyanov, D.; Litvyakov, B.; Ignatov, D.I. Predicting winning team and probabilistic ratings in “Dota 2” and “Counter-Strike: Global Offensive” video games. In International Conference on Analysis of Images, Social Networks and Texts; Springer: Berlin/Heidelberg, Germany, 2017; pp. 183–196.
12. Adamus, T. Playing computer games as electronic sport: In search of a theoretical framework for a new research field. In Computer Games And New Media Cultures; Springer: Berlin/Heidelberg, Germany, 2012; pp. 477–490.
13. Laberge, M. Hand Eye Coordination, Encyclopedia of Children’s Health. 2019. Available online: http://www.healthofchildren.com/G-H/Hand-Eye-Coordination.htm (accessed on 16 June 2020).
14. Rambusch, J.; Jakobsson, P.; Pargman, D. Exploring E-sports: A case study of game play in Counter-strike. In Proceedings of the 3rd Digital Games Research Association International Conference: “Situated Play”, DiGRA 2007, Digital Games Research Association (DiGRA), Tokyo, Japan, 24–28 September 2007; Volume 4, pp. 157–164.
15. Vaz, C. CS: GO Economy Guide. 2019. Available online: https://www.metabomb.net/csgo/gameplay-guides/csgo-economy-guide-2 (accessed on 8 June 2020).
16. Pizzo, A.D.; Na, S.; Baker, B.J.; Lee, M.A.; Kim, D.; Funk, D.C. eSport vs. Sport: A Comparison of Spectator Motives. Sport Mark. Q. 2018, 27, 108–123.
17. Drenthe, R. Informal Roles Within eSport Teams: A Content Analysis of the Game “Counter-Strike: Global Offensive”, 2016. Available online: http://urn.fi/URN:NBN:fi:jyu-201606062893 (accessed on 7 June 2020).
18. Mertz, J.; Hoover, L.D.; Burke, J.M.; Bellar, D.; Jones, M.L.; Leitzelar, B.; Judge, W.L. Ranking the greatest NBA players: A sport metrics analysis. Int. J. Perform. Anal. Sport 2016, 16, 737–759.
19. Funk, D.C.; Pizzo, A.D.; Baker, B.J. eSport management: Embracing eSport education and research opportunities. Sport Manag. Rev. 2018, 21, 7–13.
20. Kou, Y.; Gui, X.; Kow, Y.M. Ranking practices and distinction in league of legends. In Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play, Austin, TX, USA, 13–19 October 2016; pp. 4–9.
21. DiFrancisco-Donoghue, J.; Balentine, J.R. Collegiate eSport: Where do we fit in? Curr. Sports Med. Rep. 2018, 17, 117–118.
22. HLTV. CS:GO World Ranking. 2019. Available online: https://www.hltv.org/ranking/teams/2019/december/16 (accessed on 1 June 2020).
23. HLTV. CS:GO News & Coverage. 2019. Available online: https://www.hltv.org (accessed on 2 June 2020).
24. Sałabun, W. The Characteristic Objects Method: A New Distance-based Approach to Multicriteria Decision-making Problems. J. Multi-Criteria Decis. Anal. 2015, 22, 37–50.
25. Wątróbski, J.; Jankowski, J.; Ziemba, P.; Karczmarczyk, A.; Zioło, M. Generalised framework for multi-criteria method selection: Rule set database and exemplary decision support system implementation blueprints. Data Brief 2019, 22, 639.
26. Wątróbski, J.; Jankowski, J.; Ziemba, P.; Karczmarczyk, A.; Zioło, M. Generalised framework for multi-criteria method selection. Omega 2019, 86, 107–124.
27. Pamučar, D.; Janković, A. The application of the hybrid interval rough weighted Power-Heronian operator in multi-criteria decision-making. Oper. Res. Eng. Sci. Theory Appl. 2020, 3, 54–73.
28. Zavadskas, E.K.; Pamučar, D.; Stević, Ž.; Mardani, A. Multi-Criteria Decision-Making Techniques for Improvement Sustainability Engineering Processes. Symmetry 2020, 12, 986.
29. Matić, B.; Jovanović, S.; Das, D.K.; Zavadskas, E.K.; Stević, Ž.; Sremac, S.; Marinković, M. A new hybrid MCDM model: Sustainable supplier selection in a construction company. Symmetry 2019, 11, 353.
30. Wątróbski, J.; Ziemba, E.; Karczmarczyk, A.; Jankowski, J. An index to measure the sustainable information society: The Polish households case. Sustainability 2018, 10, 3223.
31. Walker, T.R.; Adebambo, O.; Feijoo, M.C.D.A.; Elhaimer, E.; Hossain, T.; Edwards, S.J.; Morrison, C.E.; Romo, J.; Sharma, N.; Taylor, S.; et al. Environmental effects of marine transportation. In World Seas: An Environmental Evaluation; Elsevier: Amsterdam, The Netherlands, 2019; pp. 505–530.
32. Nalmpantis, D.; Roukouni, A.; Genitsaris, E.; Stamelou, A.; Naniopoulos, A. Evaluation of innovative ideas for Public Transport proposed by citizens using Multi-Criteria Decision Analysis (MCDA). Eur. Transp. Res. Rev. 2019, 11, 22.
33. Silva, A.R.D.; Ferreira, F.A.; Carayannis, E.G.; Ferreira, J.J. Measuring SMEs’ propensity for open innovation using cognitive mapping and MCDA. IEEE Trans. Eng. Manag. 2019.
34. Sałabun, W.; Palczewski, K.; Wątróbski, J. Multicriteria approach to sustainable transport evaluation under incomplete knowledge: Electric bikes case study. Sustainability 2019, 11, 3314.
35. Wątróbski, J.; Sałabun, W.; Karczmarczyk, A.; Wolski, W. Sustainable decision-making using the COMET method: An empirical study of the ammonium nitrate transport management. In Proceedings of the 2017 Federated Conference on Computer Science and Information Systems (FedCSIS), Prague, Czech Republic, 3–6 September 2017; pp. 949–958.
36. Baumann, M.; Weil, M.; Peters, J.F.; Chibeles-Martins, N.; Moniz, A.B. A review of multi-criteria decision-making approaches for evaluating energy storage systems for grid applications. Renew. Sustain. Energy Rev. 2019, 107, 516–534.
37. Wątróbski, J.; Ziemba, P.; Wolski, W. Methodological aspects of decision support system for the location of renewable energy sources. In Proceedings of the 2015 Federated Conference on Computer Science and Information Systems (FedCSIS), Lodz, Poland, 13–16 September 2015; pp. 1451–1459.
38. Ortiz-Urbina, E.; González-Pachón, J.; Diaz-Balteiro, L. Decision-Making in Forestry: A Review of the Hybridisation of Multiple Criteria and Group Decision-Making Methods. Forests 2019, 10, 375.
39. Longaray, A.A.; Ensslin, L.; Dutra, A.; Ensslin, S.; Brasil, R.; Munhoz, P. Using MCDA-C to assess the organizational performance of industries operating at Brazilian maritime port terminals. Oper. Res. Perspect. 2019, 6, 100109.
40. Maghsoodi, A.I.; Riahi, D.; Herrera-Viedma, E.; Zavadskas, E.K. An integrated parallel big data decision support tool using the W-CLUS-MCDA: A multi-scenario personnel assessment. Knowl. Based Syst. 2020, 105749.
41. Lloyd-Williams, H. The role of multi-criteria decision analysis (MCDA) in public health economic evaluation. In Applied Health Economics for Public Health Practice and Research; Oxford University Press: Oxford, UK, 2019; p. 301.
42. Hansen, P.; Devlin, N. Multi-criteria decision analysis (MCDA) in healthcare decision-making. In Oxford Research Encyclopedia of Economics and Finance; Oxford University Press: Oxford, UK, 2019.
43. Moreno-Calderón, A.; Tong, T.S.; Thokala, P. Multi-criteria Decision Analysis Software in Healthcare Priority Setting: A Systematic Review. Pharmacoeconomics 2020, 38, 269–283.
44. Gavião, L.O.; Sant’Anna, A.P.; Alves Lima, G.B.; de Almada Garcia, P.A. Evaluation of soccer players under the Moneyball concept. J. Sports Sci. 2020, 38, 1221–1247.
45. Angilella, S.; Arcidiacono, S.G.; Corrente, S.; Greco, S.; Matarazzo, B. An application of the SMAA–Choquet method to evaluate the performance of sailboats in offshore regattas. Oper. Res. 2020, 20, 771–793.
46. Chelmis, E.; Niklis, D.; Baourakis, G.; Zopounidis, C. Multiciteria evaluation of football clubs: The Greek Superleague. Oper. Res. 2019, 19, 585–614.
47. Lee, S.; Walsh, P. Does your left hand know what your right hand is doing? Impacts of athletes’ pre-transgression philanthropic behavior on consumer post-transgression evaluation. Sport Manag. Rev. 2019, 22, 553–565.
48. Thompson, A.; Parent, M.M. Understanding the impact of radical change on the effectiveness of national-level sport organizations: A multi-stakeholder perspective. Sport Manag. Rev. 2020.
49. Thomson, A.; Cuskelly, G.; Toohey, K.; Kennelly, M.; Burton, P.; Fredline, L. Sport event legacy: A systematic quantitative review of literature. Sport Manag. Rev. 2019, 22, 295–321.
50. Rascher, D.A.; Maxcy, J.G.; Schwarz, A. The Unique Economic Aspects of Sports. J. Glob. Sport Manag. 2019, 1–28.
51. Hallmann, K.; Giel, T. eSports–Competitive sports or recreational activity? Sport Manag. Rev. 2018, 21, 14–20.
52. Bányai, F.; Griffiths, M.D.; Király, O.; Demetrovics, Z. The psychology of esports: A systematic literature review. J. Gambl. Stud. 2019, 35, 351–365.
53. Jankowski, J.; Hamari, J.; Wątróbski, J. A gradual approach for maximising user conversion without compromising experience with high visual intensity website elements. Internet Res. 2019, 29, 194–217.
54. DiFrancisco-Donoghue, J.; Balentine, J.; Schmidt, G.; Zwibel, H. Managing the health of the eSport athlete: An integrated health management model. BMJ Open Sport Exerc. Med. 2019, 5.
55. Musabirov, I.; Bulygin, D.; Marchenko, E. Personal Brands of ESports Athletes: An Exploration of Evaluation Mechanisms. High. Sch. Econ. Res. Pap. 2019, 90.
56. Matsui, A.; Sapienza, A.; Ferrara, E. Does Streaming Esports Affect Players’ Behavior and Performance? Games Cult. 2020, 15, 9–31.
57. Hodge, V.J.; Devlin, S.M.; Sephton, N.J.; Block, F.O.; Cowling, P.I.; Drachen, A. Win Prediction in Multi-Player Esports: Live Professional Match Prediction. IEEE Trans. Games 2019.
58. Khromov, N.; Korotin, A.; Lange, A.; Stepanov, A.; Burnaev, E.; Somov, A. Esports Athletes and Players: A Comparative Study. IEEE Pervasive Comput. 2019, 18, 31–39.
59. Kodikara, P.N. Multi-Objective Optimal Operation of Urban Water Supply Systems. Ph.D. Thesis, Victoria University, Melbourne, Australia, 2008.
60. Greco, S.; Figueira, J.; Ehrgott, M. Multiple Criteria Decision Analysis; Springer: Berlin, Germany, 2016.
61. e Costa, C.A.B.; Vincke, P. Multiple criteria decision aid: An overview. In Readings In Multiple Criteria Decision Aid; Springer: Berlin, Germany, 1990; pp. 3–14.
62. Sałabun, W.; Karczmarczyk, A. Using the COMET method in the sustainable city transport problem: An empirical study of the electric powered cars. Procedia Comput. Sci. 2018, 126, 2248–2260.
63. Martel, J.M.; Matarazzo, B. Other outranking approaches. In Multiple Criteria Decision Analysis: State of the Art Surveys; Springer: Berlin, Germany, 2005; pp. 197–259.
64. Greco, S. A new PCCA method: IDRA. Eur. J. Oper. Res. 1997, 98, 587–601.
65. Lewandowska, A.; Jankowski, J.; Sałabun, W.; Wątróbski, J. Multicriteria Selection of Online Advertising Content for the Habituation Effect Reduction. In Asian Conference on Intelligent Information and Database Systems; Springer: Berlin, Germany, 2019; pp. 499–509.
66. Jankowski, J.; Sałabun, W.; Wątróbski, J. Identification of a multi-criteria assessment model of relation between editorial and commercial content in web systems. In Multimedia and Network Information Systems; Springer: Berlin, Germany, 2017; pp. 295–305.
67. Palczewski, K.; Sałabun, W. Identification of the football teams assessment model using the COMET method. Procedia Comput. Sci. 2019, 159, 2491–2501.
68. Sałabun, W.; Piegat, A. Comparative analysis of MCDM methods for the assessment of mortality in patients with acute coronary syndrome. Artif. Intell. Rev. 2017, 48, 557–571.
69. Sałabun, W.; Karczmarczyk, A.; Wątróbski, J.; Jankowski, J. Handling data uncertainty in decision-making with COMET. In Proceedings of the 2018 IEEE Symposium Series on Computational Intelligence (SSCI), Bangalore, India, 18–21 November 2018; pp. 1478–1484.
70. Faizi, S.; Sałabun, W.; Ullah, S.; Rashid, T.; Więckowski, J. A New Method to Support Decision-Making in an Uncertain Environment Based on Normalized Interval-Valued Triangular Fuzzy Numbers and COMET Technique. Symmetry 2020, 12, 516.
71. Chmielarz, W.; Zborowski, M. On Analysis of e-Banking Websites Quality–Comet Application. Procedia Comput. Sci. 2018, 126, 2137–2152.
72. Carnero, M. Waste Segregation FMEA Model Integrating Intuitionistic Fuzzy Set and the PAPRIKA Method. Mathematics 2020, 8, 1375.
73. Sałabun, W.; Ziemba, P.; Wątróbski, J. The rank reversals paradox in management decisions: The comparison of the AHP and COMET methods. In International Conference on Intelligent Decision Technologies; Springer: Berlin, Germany, 2016; pp. 181–191.
74. Urbaniak, K. Identification of Players Ranking in E-Sport: CS:GO Study Case. In Polskie Porozumienie na Rzecz Rozwoju Sztucznej Inteligencji (PP-RAI’2019); Department of Systems and Computer Networks, Faculty of Electronics, Wroclaw University of Science and Technology: Wrocław, Poland, 2019; pp. 45–48.
Figure 1. The procedure of the Characteristic Objects Method (COMET) to identify decision-making model.
Figure 2. Research procedure.
Figure 3. The hierarchical structure of the players ranking assessment problem.
Figure 4. Visualization of Average kills per round ( C 1 ) and triangular fuzzy numbers 0.70 ( C 11 ), 0.80 ( C 12 ), and 0.90 ( C 13 ).
Figure 5. Visualization of Average damage per round ( C 2 ) and triangular fuzzy numbers 70 ( C 21 ), 80 ( C 22 ), and 90 ( C 23 ).
Figure 6. Visualization of triangular fuzzy numbers for Effectiveness per round assessment model ( P 1 ).
Figure 7. The relation diagram of Average kills per round ( C 1 ) for assessment P 1 (left side) and Average damage per round ( C 2 ) for assessment P 1 (right side).
Figure 8. Visualization of Total kills ( C 3 ) and triangular fuzzy numbers 0.0 ( C 31 ), 0.5 ( C 32 ), and 1.0 ( C 33 ).
Figure 9. Visualization of the number of kills divided by number of deaths (K/D) Ratio ( C 4 ) and triangular fuzzy numbers 1.00 ( C 41 ), 1.25 ( C 42 ), and 1.60 ( C 43 ).
Figure 10. Visualization of triangular fuzzy numbers for Frag gaining assessment model ( P 2 ).
Figure 11. The relation diagram of Total kills ( C 3 ) for assessment P 2 (left side) and K/D Ratio ( C 4 ) for assessment P 2 (right side).
Figure 12. Visualization of Assists per round ( C 5 ) and triangular fuzzy numbers 0.05 ( C 51 ), 0.10 ( C 52 ), and 0.20 ( C 53 ).
Figure 13. Visualization of Average deaths per round ( C 6 ) and triangular fuzzy numbers 0.5 ( C 61 ), 0.6 ( C 62 ), and 0.7 ( C 63 ).
Figure 14. Visualization of triangular fuzzy numbers for Failures per round assessment model ( P 3 ).
Figure 15. The relation diagram of Average assists per round ( C 5 ) for assessment P 3 (left side) and Average deaths per round ( C 6 ) for assessment P 3 (right side).
Figure 16. Visualization of Effectiveness per round assessment model ( P 1 ) and triangular fuzzy numbers 0.1 ( P 11 ) and 0.9 ( P 12 ).
Figure 17. Visualization of Frag gaining assessment model ( P 2 ) and triangular fuzzy numbers 0.2 ( P 21 ) and 0.9 ( P 22 ).
Figure 18. Visualization of Failures per round assessment model ( P 3 ) and triangular fuzzy numbers 0.2 ( P 31 ) and 0.8 ( P 32 ).
Figure 19. The relation diagram of the Effectiveness per round assessment ( P 1 ) for final assessment P.
Figure 20. The relation diagram of the Frag gaining assessment ( P 2 ) for final assessment P.
Figure 21. The relation diagram of the Failures per round assessment ( P 3 ) for final assessment P.
Figure 22. The relation diagram of Average kills per round ( C 1 ) for assessment P 1 (left side) and Average damage per round ( C 2 ) for assessment P 1 (right side).
Figure 23. The relation diagram of the Effectiveness per round assessment ( P 1 ) for final assessment P.
Figure 24. The relation diagram of the Frag gaining assessment ( P 2 ) for final assessment P.
Figure 25. The relation diagram of the Failures per round assessment ( P 3 ) for final assessment P.
Figure 26. The relation diagram of the final ranking (P) for reference ranking Rating 2.0.
Table 1. Comparison of the Characteristic Objects Method (COMET) with other multi-criteria decision analysis (MCDA) methods.
| Method Name | Weights Usage | Weights Type | Performance of the Variants Measurement | Uncertainty Handling | Type of Uncertainty |
|---|---|---|---|---|---|
| AHP | Yes | relative | relative | No | - |
| COMET | No | - | quantitative | Yes | input data |
| ELECTRE I | Yes | quantitative | qualitative | No | - |
| ELECTRE IS | Yes | quantitative | quantitative | Yes | DM preferences |
| ELECTRE TRI | Yes | quantitative | quantitative | Yes | DM preferences |
| Fuzzy AHP | Yes | relative | relative | Yes | input data |
| Fuzzy TOPSIS | Yes | quantitative | quantitative | Yes | input data |
| Fuzzy VIKOR | Yes | quantitative | quantitative | Yes | input data |
| IDRA | Yes | quantitative | quantitative | No | - |
| PROMETHEE I | Yes | quantitative | quantitative | Yes | DM preferences |
| PROMETHEE II | Yes | quantitative | quantitative | Yes | DM preferences |
| TOPSIS | Yes | quantitative | quantitative | No | - |
| VIKOR | Yes | quantitative | quantitative | No | - |
Table 2. The performance table of the alternatives and selected criteria.
| Pos. | Name | C1 | C2 | C3 | C4 | C5 | C6 |
|---|---|---|---|---|---|---|---|
| 1 | s1mple | 0.88 | 86.6 | 1958 | 1.50 | 0.09 | 0.59 |
| 2 | ZywOo | 0.83 | 85.3 | 4151 | 1.40 | 0.12 | 0.59 |
| 3 | Jame | 0.78 | 79.3 | 3505 | 1.51 | 0.09 | 0.52 |
| 4 | Jamppi | 0.83 | 83.1 | 2851 | 1.30 | 0.1 | 0.64 |
| 5 | huNter | 0.80 | 88.2 | 4100 | 1.22 | 0.15 | 0.66 |
| 6 | vsm | 0.80 | 86.6 | 2420 | 1.22 | 0.13 | 0.65 |
| 7 | meyern | 0.82 | 83.8 | 1728 | 1.28 | 0.12 | 0.64 |
| 8 | Kaze | 0.78 | 80.7 | 1750 | 1.32 | 0.1 | 0.60 |
| 9 | Hatz | 0.76 | 81.8 | 2017 | 1.28 | 0.15 | 0.60 |
| 10 | Sico | 0.76 | 78.4 | 1876 | 1.36 | 0.13 | 0.56 |
Table 3. Overview of the characteristic objects CO and vector P values for the Effectiveness per round assessment model.
| CO_i | C1 | C2 | P1 |
|---|---|---|---|
| CO_1 | 0.7 | 70 | 0.0000 |
| CO_2 | 0.7 | 80 | 0.1250 |
| CO_3 | 0.7 | 90 | 0.2500 |
| CO_4 | 0.8 | 70 | 0.3750 |
| CO_5 | 0.8 | 80 | 0.5000 |
| CO_6 | 0.8 | 90 | 0.6250 |
| CO_7 | 0.9 | 70 | 0.7500 |
| CO_8 | 0.9 | 80 | 0.8750 |
| CO_9 | 0.9 | 90 | 1.0000 |
Table 4. The performance table of the selected criteria C 1 , C 2 and assessment model P 1 .
| Pos. | Name | C1 | C2 | P1 |
|---|---|---|---|---|
| 1 | s1mple | 0.88 | 86.6 | 0.8825 |
| 2 | ZywOo | 0.83 | 85.3 | 0.6788 |
| 3 | Jame | 0.78 | 79.3 | 0.4163 |
| 4 | Jamppi | 0.83 | 83.1 | 0.6513 |
| 5 | huNter | 0.80 | 88.2 | 0.6025 |
| 6 | vsm | 0.80 | 86.6 | 0.5825 |
| 7 | meyern | 0.82 | 83.8 | 0.6225 |
| 8 | Kaze | 0.78 | 80.7 | 0.4338 |
| 9 | Hatz | 0.76 | 81.8 | 0.3725 |
| 10 | Sico | 0.76 | 78.4 | 0.3300 |
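Because the characteristic objects of Table 3 lie on a regular grid of triangular fuzzy numbers, the COMET inference step for a player reduces, for such grids, to piecewise-bilinear interpolation of the P1 values between the four surrounding characteristic objects. A minimal sketch that reproduces two Table 4 scores from Table 3 (function names are ours; this illustrates only the inference step, not the identification of the MEJ and SJ):

```python
# COMET inference for the Effectiveness per round model (P1): with triangular
# fuzzy numbers on a regular grid, the aggregated score is a piecewise-bilinear
# interpolation of the characteristic objects' P1 values.

def tri_weights(grid, x):
    """Memberships of x in the triangular fuzzy numbers anchored at grid points."""
    x = min(max(x, grid[0]), grid[-1])  # clamp to the model's domain
    for i in range(len(grid) - 1):
        lo, hi = grid[i], grid[i + 1]
        if lo <= x <= hi:
            t = (x - lo) / (hi - lo)
            w = [0.0] * len(grid)
            w[i], w[i + 1] = 1 - t, t
            return w
    raise ValueError("x outside grid")

def comet_score(grid1, grid2, p_values, x1, x2):
    """p_values[i][j] is the score of the characteristic object (grid1[i], grid2[j])."""
    w1, w2 = tri_weights(grid1, x1), tri_weights(grid2, x2)
    return sum(w1[i] * w2[j] * p_values[i][j]
               for i in range(len(grid1)) for j in range(len(grid2)))

# Table 3: C1 anchored at {0.7, 0.8, 0.9}, C2 at {70, 80, 90}
grid_c1, grid_c2 = [0.7, 0.8, 0.9], [70.0, 80.0, 90.0]
p1_grid = [[0.0000, 0.1250, 0.2500],
           [0.3750, 0.5000, 0.6250],
           [0.7500, 0.8750, 1.0000]]

# Reproduces Table 4: s1mple (C1=0.88, C2=86.6) and ZywOo (C1=0.83, C2=85.3)
s1mple = comet_score(grid_c1, grid_c2, p1_grid, 0.88, 86.6)
zywoo = comet_score(grid_c1, grid_c2, p1_grid, 0.83, 85.3)
```

The same procedure applied to the grids of Table 5 and Table 7 reproduces the P2 scores of Table 6 and the P3 scores of Table 8, respectively.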
Table 5. Overview of the characteristic objects CO and vector P values for the Frag gaining assessment model.
| CO_i | C3 | C4 | P2 |
|---|---|---|---|
| CO_1 | 0 | 1.0 | 0.0000 |
| CO_2 | 0 | 1.25 | 0.3750 |
| CO_3 | 0 | 1.6 | 0.7500 |
| CO_4 | 0.5 | 1.0 | 0.1250 |
| CO_5 | 0.5 | 1.25 | 0.5000 |
| CO_6 | 0.5 | 1.6 | 0.8750 |
| CO_7 | 1.0 | 1.0 | 0.2500 |
| CO_8 | 1.0 | 1.25 | 0.6250 |
| CO_9 | 1.0 | 1.6 | 1.0000 |
Table 6. The performance table of the selected criteria C 3 , C 4 and assessment model P 2 .
| Pos. | Name | C3 | C4 | P2 |
|---|---|---|---|---|
| 1 | s1mple | 0.168 | 1.50 | 0.6849 |
| 2 | ZywOo | 1.000 | 1.40 | 0.7857 |
| 3 | Jame | 0.755 | 1.51 | 0.8423 |
| 4 | Jamppi | 0.507 | 1.30 | 0.5553 |
| 5 | huNter | 0.981 | 1.22 | 0.5753 |
| 6 | vsm | 0.343 | 1.22 | 0.4158 |
| 7 | meyern | 0.080 | 1.28 | 0.4271 |
| 8 | Kaze | 0.089 | 1.32 | 0.4723 |
| 9 | Hatz | 0.190 | 1.28 | 0.4546 |
| 10 | Sico | 0.137 | 1.36 | 0.5271 |
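The C3 column here is not the raw kill count of Table 2: comparing the two tables (and Table 11), the values match a min-max normalization of Total kills over the 40-player pool, with BnTeT's 1516 kills mapping to 0.000 and ZywOo's 4151 to 1.000. A sketch of that scaling (assuming min-max normalization, which agrees with the tabulated values):

```python
# Min-max normalization of Total kills (C3): raw counts from Table 11 are mapped
# onto [0, 1] using the pool minimum (BnTeT, 1516) and maximum (ZywOo, 4151).

def min_max(value, lo, hi):
    return (value - lo) / (hi - lo)

lo, hi = 1516, 4151  # extremes of the 40-player pool

# Reproduces the normalized C3 column: s1mple (1958 kills) and Jame (3505 kills)
s1mple = round(min_max(1958, lo, hi), 3)
jame = round(min_max(3505, lo, hi), 3)
```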
Table 7. Overview of the characteristic objects CO and vector P values for the Failures per round assessment model.
| CO_i | C5 | C6 | P3 |
|---|---|---|---|
| CO_1 | 0.05 | 0.5 | 0.7500 |
| CO_2 | 0.1 | 0.5 | 0.8750 |
| CO_3 | 0.2 | 0.5 | 1.0000 |
| CO_4 | 0.05 | 0.6 | 0.3750 |
| CO_5 | 0.1 | 0.6 | 0.5000 |
| CO_6 | 0.2 | 0.6 | 0.6250 |
| CO_7 | 0.05 | 0.7 | 0.0000 |
| CO_8 | 0.1 | 0.7 | 0.1250 |
| CO_9 | 0.2 | 0.7 | 0.2500 |
Table 8. The performance table of the selected criteria C 5 , C 6 and assessment model P 3 .
| Pos. | Name | C5 | C6 | P3 |
|---|---|---|---|---|
| 1 | s1mple | 0.09 | 0.59 | 0.5125 |
| 2 | ZywOo | 0.12 | 0.59 | 0.5625 |
| 3 | Jame | 0.09 | 0.52 | 0.7750 |
| 4 | Jamppi | 0.10 | 0.64 | 0.3500 |
| 5 | huNter | 0.15 | 0.66 | 0.3375 |
| 6 | vsm | 0.13 | 0.65 | 0.3500 |
| 7 | meyern | 0.12 | 0.64 | 0.3750 |
| 8 | Kaze | 0.10 | 0.60 | 0.5000 |
| 9 | Hatz | 0.15 | 0.60 | 0.5625 |
| 10 | Sico | 0.13 | 0.56 | 0.6875 |
Table 9. Overview of the characteristic objects CO and vector P values for the Counter-Strike: Global Offensive (CS: GO) Players assessment model.
| CO_i | P1 | P2 | P3 | P |
|---|---|---|---|---|
| CO_1 | 0.1 | 0.2 | 0.2 | 0.1429 |
| CO_2 | 0.1 | 0.9 | 0.2 | 0.7143 |
| CO_3 | 0.9 | 0.2 | 0.2 | 0.4286 |
| CO_4 | 0.9 | 0.9 | 0.2 | 1.0000 |
| CO_5 | 0.1 | 0.2 | 0.8 | 0.0000 |
| CO_6 | 0.1 | 0.9 | 0.8 | 0.5714 |
| CO_7 | 0.9 | 0.2 | 0.8 | 0.2857 |
| CO_8 | 0.9 | 0.9 | 0.8 | 0.8571 |
Table 10. The performance table of the assessment models P 1 P 3 , final assessment P with their rankings.
| Pos. | Name | P1 | P2 | P3 | P | Rank P1 | Rank P2 | Rank P3 | Rank P |
|---|---|---|---|---|---|---|---|---|---|
| 1 | s1mple | 0.8825 | 0.6849 | 0.5125 | 0.7437 | 1 | 3 | 3 | 1 |
| 2 | ZywOo | 0.6788 | 0.7857 | 0.5625 | 0.7414 | 2 | 2 | 10 | 2 |
| 3 | Jame | 0.4163 | 0.8423 | 0.7750 | 0.6432 | 4 | 1 | 2 | 3 |
| 4 | Jamppi | 0.6513 | 0.5553 | 0.3500 | 0.5941 | 7 | 5 | 9 | 5 |
| 5 | huNter | 0.6025 | 0.5753 | 0.3375 | 0.5959 | 5 | 4 | 1 | 4 |
| 6 | vsm | 0.5825 | 0.4158 | 0.3500 | 0.4556 | 6 | 10 | 8 | 7 |
| 7 | meyern | 0.6225 | 0.4271 | 0.3750 | 0.4732 | 8 | 8 | 7 | 6 |
| 8 | Kaze | 0.4338 | 0.4723 | 0.5000 | 0.4129 | 3 | 9 | 4 | 8 |
| 9 | Hatz | 0.3725 | 0.4546 | 0.5625 | 0.3617 | 9 | 7 | 6 | 10 |
| 10 | Sico | 0.3300 | 0.5271 | 0.6875 | 0.3759 | 10 | 6 | 5 | 9 |
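The final score P follows from the eight characteristic objects of Table 9 by the same interpolation principle, here over three inputs (trilinear, since each of P1, P2, P3 is covered by two triangular fuzzy numbers). A minimal sketch that reproduces s1mple's final score from Table 10 (function names are ours):

```python
# Final COMET aggregation: P1, P2, P3 are combined over the 2x2x2 grid of
# characteristic objects from Table 9 by trilinear interpolation.

def lerp(a, b, t):
    return a + t * (b - a)

def final_score(p1, p2, p3):
    # Grid anchors from Table 9: P1 in {0.1, 0.9}, P2 in {0.2, 0.9}, P3 in {0.2, 0.8}
    t1 = (p1 - 0.1) / 0.8
    t2 = (p2 - 0.2) / 0.7
    t3 = (p3 - 0.2) / 0.6
    # P values of the characteristic objects, indexed (P1 level, P2 level, P3 level)
    co = {(0, 0, 0): 0.1429, (0, 1, 0): 0.7143, (1, 0, 0): 0.4286, (1, 1, 0): 1.0000,
          (0, 0, 1): 0.0000, (0, 1, 1): 0.5714, (1, 0, 1): 0.2857, (1, 1, 1): 0.8571}
    # interpolate along P2, then P1, at each P3 level; finally along P3
    low = lerp(lerp(co[0, 0, 0], co[0, 1, 0], t2), lerp(co[1, 0, 0], co[1, 1, 0], t2), t1)
    high = lerp(lerp(co[0, 0, 1], co[0, 1, 1], t2), lerp(co[1, 0, 1], co[1, 1, 1], t2), t1)
    return lerp(low, high, t3)

# Reproduces Table 10: s1mple (P1=0.8825, P2=0.6849, P3=0.5125)
p_s1mple = final_score(0.8825, 0.6849, 0.5125)
```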
Table 11. The performance table of the alternatives, selected criteria, and reference ranking.
| Pos. | Name | C1 | C2 | C3 | C4 | C5 | C6 | Rating 2.0 |
|---|---|---|---|---|---|---|---|---|
| 1 | s1mple | 0.88 | 86.6 | 1958 | 1.50 | 0.09 | 0.59 | 1.34 |
| 2 | ZywOo | 0.83 | 85.3 | 4151 | 1.40 | 0.12 | 0.59 | 1.32 |
| 3 | Jame | 0.78 | 79.3 | 3505 | 1.51 | 0.09 | 0.52 | 1.29 |
| 4 | Jamppi | 0.83 | 83.1 | 2851 | 1.30 | 0.1 | 0.64 | 1.25 |
| 5 | huNter | 0.80 | 88.2 | 4100 | 1.22 | 0.15 | 0.66 | 1.24 |
| 6 | vsm | 0.80 | 86.6 | 2420 | 1.22 | 0.13 | 0.65 | 1.24 |
| 7 | meyern | 0.82 | 83.8 | 1728 | 1.28 | 0.12 | 0.64 | 1.24 |
| 8 | Kaze | 0.78 | 80.7 | 1750 | 1.32 | 0.1 | 0.60 | 1.24 |
| 9 | Hatz | 0.76 | 81.8 | 2017 | 1.28 | 0.15 | 0.60 | 1.23 |
| 10 | Sico | 0.76 | 78.4 | 1876 | 1.36 | 0.13 | 0.56 | 1.23 |
| 11 | yuurih | 0.79 | 87.1 | 3833 | 1.24 | 0.15 | 0.63 | 1.22 |
| 12 | aliStair | 0.77 | 78.7 | 2028 | 1.32 | 0.12 | 0.58 | 1.22 |
| 13 | TenZ | 0.80 | 85.5 | 2406 | 1.22 | 0.13 | 0.65 | 1.22 |
| 14 | xsepower | 0.76 | 75.6 | 3856 | 1.31 | 0.09 | 0.58 | 1.21 |
| 15 | roeJ | 0.80 | 87.7 | 2338 | 1.18 | 0.14 | 0.68 | 1.21 |
| 16 | floppy | 0.79 | 84.3 | 3251 | 1.32 | 0.14 | 0.65 | 1.21 |
| 17 | Brehze | 0.80 | 83.3 | 2492 | 1.22 | 0.12 | 0.65 | 1.21 |
| 18 | KSCERATO | 0.75 | 79.4 | 3637 | 1.35 | 0.12 | 0.56 | 1.21 |
| 19 | electronic | 0.76 | 83.9 | 1631 | 1.21 | 0.14 | 0.62 | 1.21 |
| 20 | EliGE | 0.77 | 84.8 | 2900 | 1.19 | 0.14 | 0.65 | 1.20 |
| 21 | woxic | 0.77 | 80.5 | 1699 | 1.25 | 0.11 | 0.61 | 1.20 |
| 22 | kNgV- | 0.78 | 79.8 | 1964 | 1.27 | 0.09 | 0.62 | 1.20 |
| 23 | INS | 0.74 | 83.8 | 1961 | 1.19 | 0.18 | 0.62 | 1.19 |
| 24 | BnTeT | 0.73 | 81.2 | 1516 | 1.23 | 0.15 | 0.59 | 1.19 |
| 25 | erkaSt | 0.79 | 83.3 | 2076 | 1.22 | 0.14 | 0.65 | 1.19 |
| 26 | somedieyoung | 0.78 | 84.6 | 2398 | 1.19 | 0.14 | 0.65 | 1.19 |
| 27 | NAF | 0.72 | 82.2 | 2717 | 1.18 | 0.16 | 0.61 | 1.19 |
| 28 | NiKo | 0.78 | 83.9 | 2066 | 1.18 | 0.12 | 0.67 | 1.18 |
| 29 | dexter | 0.76 | 82.2 | 2047 | 1.18 | 0.14 | 0.65 | 1.18 |
| 30 | kennyS | 0.77 | 78.6 | 2934 | 1.24 | 0.1 | 0.62 | 1.18 |
| 31 | jks | 0.76 | 82.7 | 1741 | 1.19 | 0.12 | 0.64 | 1.18 |
| 32 | blameF | 0.75 | 83.3 | 2474 | 1.18 | 0.16 | 0.64 | 1.18 |
| 33 | shz | 0.80 | 83.0 | 2009 | 1.24 | 0.13 | 0.65 | 1.18 |
| 34 | nexa | 0.75 | 80.3 | 3819 | 1.24 | 0.13 | 0.60 | 1.18 |
| 35 | Bubzkji | 0.76 | 85.2 | 3186 | 1.16 | 0.13 | 0.66 | 1.18 |
| 36 | Texta | 0.76 | 83.1 | 2097 | 1.15 | 0.16 | 0.66 | 1.18 |
| 37 | coldzera | 0.78 | 82.2 | 1767 | 1.22 | 0.11 | 0.64 | 1.18 |
| 38 | frozen | 0.75 | 81.2 | 2789 | 1.19 | 0.14 | 0.63 | 1.17 |
| 39 | MarKE | 0.74 | 80.9 | 1851 | 1.17 | 0.13 | 0.63 | 1.17 |
| 40 | mantuu | 0.77 | 82.0 | 2755 | 1.16 | 0.12 | 0.67 | 1.17 |
Table 12. The performance table of the selected criteria C1, C2 and assessment model P1.

| Pos. | Name | C1 | C2 | P1 |
|---|---|---|---|---|
| 1 | s1mple | 0.88 | 86.6 | 0.8825 |
| 2 | ZywOo | 0.83 | 85.3 | 0.6788 |
| 3 | Jame | 0.78 | 79.3 | 0.4163 |
| 4 | Jamppi | 0.83 | 83.1 | 0.6513 |
| 5 | huNter | 0.80 | 88.2 | 0.6025 |
| 6 | vsm | 0.80 | 86.6 | 0.5825 |
| 7 | meyern | 0.82 | 83.8 | 0.6225 |
| 8 | Kaze | 0.78 | 80.7 | 0.4338 |
| 9 | Hatz | 0.76 | 81.8 | 0.3725 |
| 10 | Sico | 0.76 | 78.4 | 0.3300 |
| 11 | yuurih | 0.79 | 87.1 | 0.5513 |
| 12 | aliStair | 0.77 | 78.7 | 0.3712 |
| 13 | TenZ | 0.80 | 85.5 | 0.5688 |
| 14 | xsepower | 0.76 | 75.6 | 0.2950 |
| 15 | roeJ | 0.80 | 87.7 | 0.5963 |
| 16 | floppy | 0.79 | 84.3 | 0.5163 |
| 17 | Brehze | 0.80 | 83.3 | 0.5413 |
| 18 | KSCERATO | 0.75 | 79.4 | 0.3050 |
| 19 | electronic | 0.76 | 83.9 | 0.3988 |
| 20 | EliGE | 0.77 | 84.8 | 0.4475 |
| 21 | woxic | 0.77 | 80.5 | 0.3937 |
| 22 | kNgV- | 0.78 | 79.8 | 0.4225 |
| 23 | INS | 0.74 | 83.8 | 0.3225 |
| 24 | BnTeT | 0.73 | 81.2 | 0.2525 |
| 25 | erkaSt | 0.79 | 83.3 | 0.5038 |
| 26 | somedieyoung | 0.78 | 84.6 | 0.4825 |
| 27 | NAF | 0.72 | 82.2 | 0.2275 |
| 28 | NiKo | 0.78 | 83.9 | 0.4738 |
| 29 | dexter | 0.76 | 82.2 | 0.3775 |
| 30 | kennyS | 0.77 | 78.6 | 0.3700 |
| 31 | jks | 0.76 | 82.7 | 0.3838 |
| 32 | blameF | 0.75 | 83.3 | 0.3538 |
| 33 | shz | 0.80 | 83.0 | 0.5375 |
| 34 | nexa | 0.75 | 80.3 | 0.3163 |
| 35 | Bubzkji | 0.76 | 85.2 | 0.4150 |
| 36 | Texta | 0.76 | 83.1 | 0.3887 |
| 37 | coldzera | 0.78 | 82.2 | 0.4525 |
| 38 | frozen | 0.75 | 81.2 | 0.3275 |
| 39 | MarKE | 0.74 | 80.9 | 0.2863 |
| 40 | mantuu | 0.77 | 82.0 | 0.4125 |
Table 13. The performance table of the related assessment models and final ranking.

| Pos. | Name | P1 | P2 | P3 | P | Ranking P |
|---|---|---|---|---|---|---|
| 1 | s1mple | 0.8825 | 0.6849 | 0.5125 | 0.7437 | 1 |
| 2 | ZywOo | 0.6788 | 0.7857 | 0.5625 | 0.7414 | 2 |
| 3 | Jame | 0.4163 | 0.8423 | 0.7750 | 0.6432 | 3 |
| 4 | Jamppi | 0.6513 | 0.5553 | 0.3500 | 0.5941 | 5 |
| 5 | huNter | 0.6025 | 0.5753 | 0.3375 | 0.5959 | 4 |
| 6 | vsm | 0.5825 | 0.4158 | 0.3500 | 0.4556 | 16 |
| 7 | meyern | 0.6225 | 0.4271 | 0.3750 | 0.4732 | 11 |
| 8 | Kaze | 0.4338 | 0.4723 | 0.5000 | 0.4129 | 14 |
| 9 | Hatz | 0.3725 | 0.4546 | 0.5625 | 0.3617 | 18 |
| 10 | Sico | 0.3300 | 0.5271 | 0.6875 | 0.3759 | 7 |
| 11 | yuurih | 0.5513 | 0.5798 | 0.4500 | 0.5545 | 6 |
| 12 | aliStair | 0.3712 | 0.4985 | 0.6000 | 0.3882 | 13 |
| 13 | TenZ | 0.5688 | 0.4145 | 0.3500 | 0.4496 | 17 |
| 14 | xsepower | 0.2950 | 0.6613 | 0.5500 | 0.5057 | 34 |
| 15 | roeJ | 0.5963 | 0.3480 | 0.2500 | 0.4290 | 33 |
| 16 | floppy | 0.5163 | 0.6145 | 0.3625 | 0.5912 | 15 |
| 17 | Brehze | 0.5413 | 0.4225 | 0.3375 | 0.4494 | 30 |
| 18 | KSCERATO | 0.3050 | 0.6834 | 0.6750 | 0.4975 | 8 |
| 19 | electronic | 0.3988 | 0.3260 | 0.4750 | 0.2869 | 22 |
| 20 | EliGE | 0.4475 | 0.4162 | 0.3625 | 0.4048 | 20 |
| 21 | woxic | 0.3937 | 0.3923 | 0.4750 | 0.3392 | 25 |
| 22 | kNgV- | 0.4225 | 0.4389 | 0.4000 | 0.4055 | 35 |
| 23 | INS | 0.3225 | 0.3273 | 0.5250 | 0.2488 | 12 |
| 24 | BnTeT | 0.2525 | 0.0000 | 0.6000 | 0.1021 | 26 |
| 25 | erkaSt | 0.5038 | 0.3833 | 0.3625 | 0.3980 | 10 |
| 26 | somedieyoung | 0.4825 | 0.3688 | 0.3625 | 0.3786 | 40 |
| 27 | NAF | 0.2275 | 0.3840 | 0.5375 | 0.2583 | 9 |
| 28 | NiKo | 0.4738 | 0.3223 | 0.2625 | 0.3613 | 28 |
| 29 | dexter | 0.3775 | 0.3205 | 0.3625 | 0.3016 | 37 |
| 30 | kennyS | 0.3700 | 0.4945 | 0.4250 | 0.4261 | 21 |
| 31 | jks | 0.3838 | 0.3063 | 0.3750 | 0.2893 | 38 |
| 32 | blameF | 0.3538 | 0.3610 | 0.4250 | 0.3114 | 32 |
| 33 | shz | 0.5375 | 0.4067 | 0.3500 | 0.4322 | 29 |
| 34 | nexa | 0.3163 | 0.5785 | 0.5375 | 0.4487 | 31 |
| 35 | Bubzkji | 0.4150 | 0.3985 | 0.3125 | 0.3906 | 19 |
| 36 | Texta | 0.3887 | 0.2800 | 0.3500 | 0.2756 | 36 |
| 37 | coldzera | 0.4525 | 0.3538 | 0.3625 | 0.3556 | 27 |
| 38 | frozen | 0.3275 | 0.4058 | 0.4375 | 0.3355 | 23 |
| 39 | MarKE | 0.2863 | 0.2868 | 0.4250 | 0.2266 | 39 |
| 40 | mantuu | 0.4125 | 0.3575 | 0.2625 | 0.3682 | 24 |

Urbaniak, K.; Wątróbski, J.; Sałabun, W. Identification of Players Ranking in E-Sport. Appl. Sci. 2020, 10, 6768. https://doi.org/10.3390/app10196768