Article

A Linguistic q-Rung Orthopair ELECTRE II Algorithm for Fuzzy Multi-Criteria Ontology Ranking

by
Ameeth Sooklall
* and
Jean Vincent Fonou-Dombeu
School of Mathematics, Statistics & Computer Science, University of KwaZulu-Natal, Pietermaritzburg 3201, South Africa
*
Author to whom correspondence should be addressed.
Big Data Cogn. Comput. 2025, 9(11), 277; https://doi.org/10.3390/bdcc9110277
Submission received: 28 August 2025 / Revised: 24 October 2025 / Accepted: 28 October 2025 / Published: 3 November 2025

Abstract

In recent years, interest in the application of ontologies in various domains of knowledge has grown significantly. Ontologies are widely used in a myriad of areas, such as artificial intelligence, data integration, knowledge management, and the semantic web, to name but a few. However, despite their widespread adoption, there exists a range of problems associated with ontologies, such as the complexity and cognitive challenges associated with ontology engineering, design, and development. One of the solutions to these challenges is to reuse existing ontologies rather than developing new ontologies afresh for new applications. The reuse of ontologies that describe a knowledge domain is a complex task consisting of many aspects. One of the key aspects involves ranking ontologies to aid in their selection. Various techniques have been proposed for this task, but many of them fall short in their expressiveness and ability to capture the cognitive aspects of human-like decision-making processes. Furthermore, much of the existing research focuses on an objective approach to ontology ranking, yet a wide range of aspects pertaining to the quality of an ontology simply cannot be captured in a quantitative manner. Existing ranking models fail to provide a robust and flexible canvas for facilitating qualitative ontology ranking and selection for reuse. To address the aforementioned shortcomings of existing ontology ranking approaches, this study proposes a novel algorithm for ranking ontologies that extends the Elimination and Choice Translating Reality (ELECTRE) multi-criteria decision-making method with the Linguistic q-Rung Orthopair Fuzzy Set (Lq-ROFS-ELECTRE II), allowing the expression of uncertainty in a more robust and precise manner. The new Lq-ROFS-ELECTRE II algorithm was applied to rank a set of 19 ontologies of the machine learning (ML) domain. The ML ontologies were evaluated using a set of seven qualitative criteria extracted from the Ontometric framework. The proposed Lq-ROFS-ELECTRE II algorithm was then applied to rank the 19 ontologies in light of the seven criteria. The rankings obtained were compared against the quantitative rankings of the same 19 ontologies produced by the traditional ELECTRE II algorithm, confirming the validity of the ranking performed by the proposed Lq-ROFS-ELECTRE II algorithm and its effectiveness in the task of ontology ranking. Furthermore, a comparative analysis of the proposed Lq-ROFS-ELECTRE II against existing MCDM methods and other fuzzy ELECTRE II methods demonstrated its superior modeling capabilities, which allow for more natural decision evaluation from subject experts in real-world applications and give the decision-maker considerable flexibility in expressing their preferences. These capabilities make the Lq-ROFS-ELECTRE II algorithm applicable not only to ontology ranking, but to any decision-making scenario that comprises multiple conflicting criteria under uncertainty.

1. Introduction

An ontology is a structured representation of knowledge within a specific domain that defines a set of concepts and the relationships between them. Ontologies play a crucial role in representing and managing domain knowledge to exploit big data and artificial intelligence. In a world of diverse and heterogeneous data sources, it becomes a major challenge to make sense of information that uses different terms or formats. Ontologies solve this by providing a common language and shared understanding in order to bridge the gap between heterogeneous and disparate data sources. Ontologies are widely used in a myriad of areas, such as artificial intelligence, data integration, knowledge management, and the semantic web, where they help to enhance machine understanding, support reasoning, and ensure interoperability between the different data sources. The core components of an ontology include classes or concepts, properties, individuals or instances, and axioms. Furthermore, there has been a proliferation of tools and languages to aid the ontology development process, such as Protégé and the Web Ontology Language (OWL). In recent years, ontologies have seen widespread adoption in various applications in diverse fields, such as finance [1], mathematics [2], robotics [3], healthcare [4], and education [5]. However, despite the successful adoption and benefits, there exists a range of problems associated with ontologies. Some of the main challenges to the adoption and integration of ontology include the following [6,7]:
  • The complexity and cognitive challenges associated with ontology engineering, design, and development.
  • The steepness of entry for ontology adoption and the usability of their development tools.
  • The lack of standardization and ambiguity within the domain terminology.
  • Limited/lack of resources and resistance from stakeholders.
  • Unfriendly user interfaces (UI) and UI interactions with ontology tools.
One of the solutions to address the aforementioned challenges is to reuse existing ontologies, possibly with some modifications, rather than developing new ontologies de novo [8,9]. This allows developers to leverage established domain vocabularies and accelerate the design process, ensuring improved consistency and interoperability. Reusing existing ontologies also promotes standardization and facilitates data integration within the same domain. However, ontology reuse in itself can be a complex and unintuitive process, requiring many stages. Oftentimes there may exist a vast number of ontologies applicable for reuse pertaining to a use-case. The task of evaluating and selecting the best ontology then becomes increasingly complex [10].
In order to reduce the complexity associated with selecting appropriate ontologies for reuse, many authors have developed strategies to rank ontologies based on their qualities, features, and characteristics [11,12,13,14,15]. In recent years, there has been interest in the application of decision modeling and computational intelligence methods to aid in the selection of ontologies. This is considered a more viable technique because of the large amounts of data inherent in ontologies, along with the complexity and nuances associated with modeling the human decision-making process. Authors have made use of advanced techniques such as multi-criteria decision-making (MCDM) [9,10,16,17,18] for the task of ontology ranking and selection. These methods include the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), ELECTRE, the Weighted Sum Model, and the Weighted Product Model. However, much of this work has the following shortcomings.
  • The existing studies rank ontologies according to their quantitative metrics (numerical) as opposed to their qualitative metrics (in natural language). This is due to the inability of the models to process, capture, and exploit qualitative information. Very little work has been conducted on expressive decision-making that is able to capture qualitative criteria effectively.
  • Almost all of the existing work conducted on ontology ranking is based on a singular perspective. However, in most real-world scenarios, there will be a group of stakeholders involved in the decision-making process. There is a lack of research focused on group decision-making for ontology selection.
  • Most of the existing work does not take into account the users’ perspective. Accordingly, most techniques and current systems are not very user-friendly, and users may often find it complex and tedious to express their opinions and judgments.
To address the aforementioned shortcomings, this research proposes a multi-criteria group decision-making algorithm based on the ELECTRE II method and the Linguistic q-Rung Orthopair Fuzzy Set (Lq-ROFS-ELECTRE II). The ELECTRE methods are a family of robust and widely applied MCDM methods. There are five variants of ELECTRE, which are ELECTRE I, II, III, IV, and Tri. Particularly, ELECTRE has been applied for the task of ontology ranking previously [10,16,18]. To further extend their modeling capabilities, authors have integrated the ELECTRE methods with fuzzy sets [19,20,21,22]. Notably, ELECTRE II is a robust and scalable method that has proven its viability for various tasks, especially ontology selection. On the other hand, the Linguistic q-Rung Orthopair Fuzzy Set (Lq-ROFS) is a powerful tool for enabling flexible and expressive decision-making, relaxing the constraints of other fuzzy sets such as Pythagorean fuzzy sets, hesitant fuzzy sets, and q-Rung Orthopair Fuzzy Sets. Furthermore, Lq-ROFS allows decision-makers to provide their evaluations using a set of linguistic variables. It provides an extremely flexible structure for capturing the cognitive aspect of human-like decision-making. This enables decision-makers, such as stakeholders and domain experts, to express their judgments and perspectives in a manner that feels natural to them in an ontology selection process. The combination of ELECTRE II and Lq-ROFS results in an extremely powerful algorithm capable of efficiently and effectively solving the challenges of ontology selection explained above.
The contributions of this study are as follows.
  • A novel Lq-ROFS-ELECTRE II algorithm is proposed that enables multiple decision-makers to evaluate decision-making problems in an effective manner, with the use of linguistic variables.
  • A robust solution is proposed for the ontology ranking and selection problem by making use of multi-criteria group decision-making and fuzzy sets.
  • A comprehensive survey and analysis is performed to extract the existing machine learning ontologies to promote their selection for reuse.
The rest of this paper is structured as follows. Section 2 reviews and discusses related literature. ELECTRE II is presented, and the various concepts relevant to the specification of the proposed Lq-ROFS-ELECTRE II algorithm are defined in Section 3. In Section 4, the detailed specification of the proposed Lq-ROFS-ELECTRE II algorithm is presented. The experimental results of the application of the Lq-ROFS-ELECTRE II algorithm to rank a set of ontologies are described in Section 5. Section 6 discusses the application of the proposed Lq-ROFS-ELECTRE II algorithm in real-world scenarios in other knowledge domains, and the paper is concluded in Section 7.

2. Literature Review

Many ranking methodologies have been proposed for ontology selection. A study by Lozano-Tello and Gomez-Perez [23] proposed the Ontometric framework to compare ontologies. This comprised a hierarchical taxonomy of features based upon five dimensions, namely content, language, methodology, tools, and costs. In Ref. [24] a content-based ranking system was proposed whereby the ontologies were evaluated based on their coverage of a particular domain of interest, using a calculated ontology score. Park et al. [25] developed a ranking model based on selection standards and metrics, enhancing semantic matching abilities. The model determines the specific metrics that should be used for ranking and assigns an appropriate weight to each metric. Another study by Butt et al. [26] introduced DWRank, a bi-directional graph walk ranking algorithm, that uses Learning-to-Rank methodologies to assign weights to the different ranking strategies it employs. Subhashini and Akilandeswari [14] developed the Onto-DSB method to rank ontologies according to their semantic web link as well as their internal structure.
Despite some success in the above-mentioned studies, there are various aspects to consider in an ontology ranking problem, and oftentimes these aspects conflict with each other. Some of these aspects include (i) the identification of suitable ontologies, (ii) the semantic compatibility, (iii) the licensing and legal considerations, and (iv) the documentation and provenance [8]. Accordingly, there has been an interest in applying multi-criteria decision-making algorithms to compare and select ontologies. A study by Esposito et al. [18] applied the ELECTRE I and III algorithms for the task of ontology ranking. Sooklall and Fonou-Dombeu [10] applied the ELECTRE I, II, III, and IV methods to rank a set of 200 biomedical ontologies according to 13 of their complexity metrics. Another study by Fonou-Dombeu and Viriri [17] proposed a framework known as C-Rank, which uses the Weighted Linear Combination Ranking Technique (WLCRT) to rank ontologies. In Ref. [16] a preference-disaggregation framework was applied to classify ontologies according to their complexity metric levels. A genetic algorithm was developed to determine the optimal thresholds in order to build an ELECTRE Tri method, which was then applied for classification of the ontologies. There have also been other MCDM methods applied to rank ontologies, such as the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), the Weighted Sum Model (WSM), and the Weighted Product Model (WPM) [9]. However, despite the success of the aforementioned MCDM models in ranking ontologies, they all make use of quantitative metrics only. Oftentimes it is equally important to evaluate ontologies according to the judgments of subject domain experts. This is achievable through the use of fuzzy-based MCDM. Therefore, this paper proposes a novel fuzzy-based MCDM algorithm, namely Lq-ROFS-ELECTRE II, that enables the evaluation of ontologies according to the judgments of subject domain experts.
The ELECTRE methods are a family of outranking MCDM methods built on the concepts of concordance and discordance. There are five variants of ELECTRE, which are ELECTRE I [27], II [28], III [29], IV [30], and Tri [31]. The ELECTRE I method was the initial method, and subsequent methods were developed to introduce additional modeling capabilities. ELECTRE versions I to IV perform ranking, and ELECTRE Tri is an ordinal classification method based on predefined thresholds. The ELECTRE methods have been successfully applied for the task of ranking ontologies in the past and have proven to be viable, robust, and scalable [10,16,32]. To further increase the expressiveness of the methods, authors have proposed various extensions using fuzzy set theory. The Intuitionistic Fuzzy ELECTRE II method was proposed by Devadoss and Rekha [33], allowing for membership and non-membership degrees satisfying u + v ≤ 1, where u and v are the membership and non-membership degrees, respectively. This was extended with hesitant fuzzy sets by Chen and Xu [34] to allow membership degrees to comprise multiple values to capture decision-makers’ hesitance. The Pythagorean Fuzzy ELECTRE II method [19] was proposed in order to relax the constraint of the Intuitionistic Fuzzy ELECTRE II to u^2 + v^2 ≤ 1. Pinar and Boran [35] proposed the q-Rung Orthopair Fuzzy ELECTRE method, which further generalizes the membership and non-membership constraint to u^q + v^q ≤ 1. This study contributes to the efforts of existing work to develop improved ELECTRE II versions by proposing a Linguistic q-Rung Orthopair Fuzzy variation that gives decision-makers enhanced modeling capabilities to capture both qualitative and uncertain information.
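The progressive relaxation of the membership constraints described above can be illustrated with a small numeric check; the values below are purely illustrative.

```python
# Illustrative values: (u, v) = (0.8, 0.7) violates the intuitionistic
# condition u + v <= 1 and the Pythagorean condition u^2 + v^2 <= 1,
# but satisfies the q-rung condition u^q + v^q <= 1 for q = 3.
u, v = 0.8, 0.7
checks = {q: u**q + v**q <= 1 for q in (1, 2, 3)}
# checks -> {1: False, 2: False, 3: True}
```

This is why larger values of q give decision-makers a wider admissible space of membership/non-membership pairs.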

3. Preliminaries

In this section, the traditional ELECTRE II algorithm is presented, followed by the concepts of q-ROFS, Lq-ROFS, and Linguistic q-Rung Orthopair Fuzzy Numbers (Lq-ROFNs).

3.1. Traditional ELECTRE II

The ELECTRE II technique was developed by Roy and Bertier [28] in 1971 to handle ranking problems. The steps of ELECTRE II are as follows [28,36]:
  • Step 1: Comparative Categories I^+, I^=, and I^-
When considering alternatives A_h and A_k, the criteria C_j are sorted into sets according to the relations between those criterion pairs for A_h and A_k, as in Equations (1)–(3). I^+ contains those criteria where the h-th alternative outperforms the k-th alternative, I^= contains those criteria where the two alternatives perform the same, and I^- contains those criteria where A_h is outperformed by A_k. g_j(A_h) represents the performance of the h-th alternative in light of the j-th criterion.

I^+ = \{ C_j \mid g_j(A_h) > g_j(A_k) \}   (1)

I^= = \{ C_j \mid g_j(A_h) = g_j(A_k) \}   (2)

I^- = \{ C_j \mid g_j(A_h) < g_j(A_k) \}   (3)

The weights of the criteria in the sets I^+, I^=, and I^- are summed to form the sets W^+, W^=, and W^- in Equations (4), (5) and (6), respectively, where w_j is the importance weighting for the j-th criterion.

W^+ = \sum_{j \in I^+} w_j   (4)

W^= = \sum_{j \in I^=} w_j   (5)

W^- = \sum_{j \in I^-} w_j   (6)
  • Step 2: Index of Concordance
The index of concordance between every alternative pair is formulated. The degree of concordance C(h,k) between A_h and A_k is calculated as in Equation (7).

C(h,k) = \frac{W^+ + W^=}{W^+ + W^= + W^-}, \quad h \neq k   (7)
  • Step 3: Discordance Index
The discordance index D(h,k) for all alternative pairs A_h and A_k is given in Equation (8).

D(h,k) = \frac{\max_{j \in I^-} \left| g_j(A_h) - g_j(A_k) \right|}{\max_{j} \left| g_j(A_h) - g_j(A_k) \right|}   (8)
  • Step 4: Concordance and Discordance Parameters
The decision-maker must define concordance parameters \{p^-, p^0, p^*\} such that the condition in Equation (9) is satisfied.

0 \leq p^- \leq p^0 \leq p^* \leq 1   (9)

They must also define discordance parameters \{q^0, q^*\} such that the condition in Equation (10) holds.

0 < q^0 < q^* < 1   (10)
  • Step 5: Weak and Strong Outranking Relations
The strong outranking relation A_h S^F A_k is defined in Equation (11).

A_h S^F A_k \iff C(h,k) \geq C(k,h) \text{ and } \big[ \big( C(h,k) \geq p^* \text{ and } D(h,k) \leq q^* \big) \text{ or } \big( C(h,k) \geq p^0 \text{ and } D(h,k) \leq q^0 \big) \big]   (11)

The weak outranking relation A_h S^f A_k is defined as in Equation (12).

A_h S^f A_k \iff C(h,k) \geq C(k,h), \; C(h,k) \geq p^- \text{ and } D(h,k) \leq q^0   (12)
  • Step 6: Exploit Outranking Relations and Determine Final Rank
Finally, the strong and weak outranking relations are exploited by applying a forward and backward distillation process [36], each of which results in a respective ranking of alternatives. These rankings are then combined in an appropriate manner to obtain the final ranking.
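Steps 1–3 above can be sketched for a small numeric example. This is a minimal illustrative Python sketch, not the authors' implementation; the names `perf` (the matrix of g_j(A_h) values) and `w` (criterion weights) are assumptions for demonstration.

```python
# Hedged sketch of ELECTRE II Steps 1-3: concordance C(h,k) and
# discordance D(h,k) for two alternatives over three criteria.

def concordance(perf, w, h, k):
    """C(h,k) = (W+ + W=) / (W+ + W= + W-)."""
    w_plus = sum(w[j] for j in range(len(w)) if perf[h][j] > perf[k][j])
    w_eq = sum(w[j] for j in range(len(w)) if perf[h][j] == perf[k][j])
    w_minus = sum(w[j] for j in range(len(w)) if perf[h][j] < perf[k][j])
    return (w_plus + w_eq) / (w_plus + w_eq + w_minus)

def discordance(perf, h, k):
    """D(h,k): largest loss of h against k, normalised by the largest gap."""
    diffs = [abs(perf[h][j] - perf[k][j]) for j in range(len(perf[h]))]
    losses = [perf[k][j] - perf[h][j]
              for j in range(len(perf[h])) if perf[k][j] > perf[h][j]]
    if not losses:          # h loses on no criterion
        return 0.0
    return max(losses) / max(diffs)

perf = [[0.8, 0.6, 0.9], [0.7, 0.7, 0.5]]  # g_j(A_0), g_j(A_1)
w = [0.5, 0.3, 0.2]
c = concordance(perf, w, 0, 1)  # criteria 1 and 3 favour A_0
d = discordance(perf, 0, 1)    # A_0 loses only on criterion 2
```

Here A_0 beats A_1 on criteria 1 and 3 (W^+ = 0.7), so C(0,1) = 0.7, while its single loss of 0.1 against the largest gap 0.4 gives D(0,1) = 0.25.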

3.2. q-Rung Orthopair Fuzzy Set

Definition 1.
Let the set X be a non-empty finite set. A q-Rung Orthopair Fuzzy Set (q-ROFS) Q can be defined by Equation (13).

Q = \{ \langle x, u_Q(x), v_Q(x) \rangle \mid x \in X \}   (13)

where u_Q and v_Q are mapping functions such that u_Q : X \to [0,1] and v_Q : X \to [0,1]. These represent the membership and non-membership degrees of an element x with respect to X, respectively. The degrees must adhere to the constraint 0 \leq u_Q(x)^q + v_Q(x)^q \leq 1 for q \geq 1. The indeterminacy degree of a q-ROFS can be expressed by Equation (14).

\pi_Q(x) = \left( 1 - u_Q(x)^q - v_Q(x)^q \right)^{1/q}   (14)

where \pi_Q(x) \in [0,1] is an expression of the uncertainty that x belongs to Q. A q-Rung Orthopair Fuzzy Number (q-ROFN) can be used to express elements of a q-ROFS in the form (u_Q, v_Q), where u_Q^q + v_Q^q \leq 1 and u_Q, v_Q \in [0,1].
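Definition 1 can be exercised with a small helper; `indeterminacy` is a hypothetical name and the values are chosen only to show how raising q admits pairs that the q = 1 (intuitionistic) constraint rejects.

```python
# Illustrative check of the q-ROFN constraint and the indeterminacy
# degree pi = (1 - u^q - v^q)^(1/q) of Equation (14).

def indeterminacy(u, v, q):
    """Indeterminacy of the q-ROFN (u, v); valid only when u^q + v^q <= 1."""
    assert 0 <= u <= 1 and 0 <= v <= 1 and q >= 1
    assert u**q + v**q <= 1, "not a valid q-ROFN for this q"
    return (1 - u**q - v**q) ** (1 / q)

# (0.9, 0.6) is invalid for q = 1 since 0.9 + 0.6 > 1, but is a valid
# q-ROFN for q = 3 because 0.9**3 + 0.6**3 = 0.945 <= 1.
pi = indeterminacy(0.9, 0.6, 3)
```

By construction, pi**q + u**q + v**q recovers 1, mirroring how Equation (14) partitions the unit q-rung simplex.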

3.3. Linguistic q-Rung Orthopair Fuzzy Set

Definition 2.
Let X = \{x_1, x_2, \ldots, x_n\} be the universe of discourse. A Linguistic q-Rung Orthopair Fuzzy Set (Lq-ROFS) can be defined by Equation (15).

\hat{\gamma} = \{ ( x, s_a(x), s_b(x) ) \mid x \in X \}   (15)

where s_a(x) and s_b(x) express the degrees of linguistic membership and non-membership, respectively. For x \in X, a^q + b^q \leq l^q must hold for q \geq 1, where l is the index of the maximum linguistic term. The Linguistic q-Rung Orthopair Fuzzy Number (Lq-ROFN) is given as the pair (s_a, s_b). The linguistic indeterminacy degree, which represents the expression of hesitation in each evaluation, is then expressed as \pi(x) = s_{(l^q - a^q - b^q)^{1/q}}.

3.3.1. Score of Lq-ROFN

In order to evaluate Lq-ROFNs, a score can be determined using Equation (16). Let \hat{\gamma} = (s_a, s_b), with s_a, s_b \in S_{[0,k]}, be an Lq-ROFN. The score of \hat{\gamma}, denoted by L_s(\hat{\gamma}), is given in Equation (16).

L_s(\hat{\gamma}) = s_{\left( \frac{k^q + a^q - b^q}{2} \right)^{1/q}}   (16)

3.3.2. Accuracy of Lq-ROFN

Furthermore, to compare Lq-ROFNs, their accuracy values can be calculated using Equation (17). Let \hat{\gamma} = (s_a, s_b), with s_a, s_b \in S_{[0,k]}, be an Lq-ROFN. The accuracy of \hat{\gamma}, denoted by L_h(\hat{\gamma}), is given by Equation (17).

L_h(\hat{\gamma}) = s_{\left( a^q + b^q \right)^{1/q}}   (17)

3.3.3. Comparing Lq-ROFNs

In order to compare different Lq-ROFNs with each other, their score and accuracy values can be used. Let \hat{\gamma}_1 = (s_{a_1}, s_{b_1}) and \hat{\gamma}_2 = (s_{a_2}, s_{b_2}) be two Lq-ROFNs, where \hat{\gamma}_1, \hat{\gamma}_2 \in S_{[0,k]}. The following relations can be defined using their score and accuracy values, L_s(\hat{\gamma}) and L_h(\hat{\gamma}).
  • If L_s(\hat{\gamma}_1) > L_s(\hat{\gamma}_2) then \hat{\gamma}_1 > \hat{\gamma}_2
  • If L_s(\hat{\gamma}_1) < L_s(\hat{\gamma}_2) then \hat{\gamma}_1 < \hat{\gamma}_2
  • If L_s(\hat{\gamma}_1) = L_s(\hat{\gamma}_2) then the accuracy values are compared:
    (a) If L_h(\hat{\gamma}_1) > L_h(\hat{\gamma}_2) then \hat{\gamma}_1 > \hat{\gamma}_2
    (b) If L_h(\hat{\gamma}_1) < L_h(\hat{\gamma}_2) then \hat{\gamma}_1 < \hat{\gamma}_2
    (c) If L_h(\hat{\gamma}_1) = L_h(\hat{\gamma}_2) then \hat{\gamma}_1 = \hat{\gamma}_2
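The score, accuracy, and comparison rules above can be sketched in a few lines of Python. This is a minimal illustrative sketch: each Lq-ROFN (s_a, s_b) is represented by its subscript pair (a, b), and the names `score`, `accuracy`, and `compare` are assumptions, not the authors' code.

```python
# Score (Eq. (16)), accuracy (Eq. (17)), and lexicographic comparison of
# Lq-ROFNs; k is the upper index of the linguistic term set s_0..s_k.

def score(a, b, q, k):
    """Subscript of the score term: ((k^q + a^q - b^q) / 2)^(1/q)."""
    return ((k**q + a**q - b**q) / 2) ** (1 / q)

def accuracy(a, b, q):
    """Subscript of the accuracy term: (a^q + b^q)^(1/q)."""
    return (a**q + b**q) ** (1 / q)

def compare(n1, n2, q, k):
    """Return 1, -1, or 0 as n1 >, <, or = n2 (score first, then accuracy)."""
    s1, s2 = score(*n1, q, k), score(*n2, q, k)
    if s1 != s2:
        return 1 if s1 > s2 else -1
    h1, h2 = accuracy(*n1, q), accuracy(*n2, q)
    if h1 != h2:
        return 1 if h1 > h2 else -1
    return 0

# (s_5, s_2) vs (s_4, s_3) on the term set s_0..s_8 with q = 2:
r = compare((5, 2), (4, 3), q=2, k=8)
```

Note how (s_2, s_2) and (s_0, s_0) tie on score (both subscripts reduce to k^q/2) and are separated only by accuracy, which is exactly case (a) above.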

3.3.4. Distance Measure

Definition 3.
Let \hat{\gamma}_1 = (s_{a_1}, s_{b_1}) and \hat{\gamma}_2 = (s_{a_2}, s_{b_2}) be two Lq-ROFNs, where \hat{\gamma}_1, \hat{\gamma}_2 \in S_{[0,k]}, and let \pi_1 and \pi_2 be their corresponding degrees of indeterminacy, respectively. The Hamming distance between the Lq-ROFNs, d_{Hd}(\hat{\gamma}_1, \hat{\gamma}_2), is given by Equation (18).

d_{Hd}(\hat{\gamma}_1, \hat{\gamma}_2) = \frac{ \left| a_1^q - a_2^q \right| + \left| b_1^q - b_2^q \right| + \left| \pi_1^q - \pi_2^q \right| }{ 2 k^q }   (18)
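Equation (18) can be sketched directly; this is an illustrative sketch under the assumption that the indeterminacy subscript follows the linguistic form (l^q − a^q − b^q)^(1/q) with l = k, and `hamming` is a hypothetical name.

```python
# Hamming distance between two Lq-ROFNs given as subscript pairs (a, b).

def hamming(n1, n2, q, k):
    (a1, b1), (a2, b2) = n1, n2
    # Indeterminacy subscripts pi_i = (k^q - a_i^q - b_i^q)^(1/q).
    p1 = (k**q - a1**q - b1**q) ** (1 / q)
    p2 = (k**q - a2**q - b2**q) ** (1 / q)
    num = abs(a1**q - a2**q) + abs(b1**q - b2**q) + abs(p1**q - p2**q)
    return num / (2 * k**q)

d = hamming((5, 2), (4, 3), q=2, k=8)
```

The distance is zero for identical numbers and symmetric in its arguments, as a distance measure must be.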

3.3.5. Aggregation Operators

Let A be a set of n Lq-ROFNs such that A = \{\hat{\gamma}_1, \hat{\gamma}_2, \ldots, \hat{\gamma}_n\}, where \hat{\gamma}_i = (s_{a_i}, s_{b_i}) and s_{a_i}, s_{b_i} \in S_{[0,k]} for i = 1, 2, \ldots, n. The set A can be aggregated using the Linguistic q-Rung Orthopair Fuzzy Weighted Averaging operator (Lq-ROFWA), with weights \omega_i, as in Equation (19).

Lq\text{-}ROFWA(\hat{\gamma}_1, \hat{\gamma}_2, \ldots, \hat{\gamma}_n) = \left( s_{ k \left( 1 - \prod_{i=1}^{n} \left( 1 - \frac{a_i^q}{k^q} \right)^{\omega_i} \right)^{1/q} }, \; s_{ k \prod_{i=1}^{n} \left( \frac{b_i}{k} \right)^{\omega_i} } \right)   (19)
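Equation (19) translates into a short function. This is a hedged sketch (the name `lq_rofwa` is illustrative) that represents each Lq-ROFN by its subscript pair (a_i, b_i) and assumes the weights ω_i sum to 1.

```python
# Sketch of the Lq-ROFWA aggregation operator of Equation (19).
from math import prod

def lq_rofwa(numbers, weights, q, k):
    """Aggregate Lq-ROFNs (a_i, b_i) on the term set s_0..s_k."""
    a_part = k * (1 - prod((1 - (a**q) / (k**q)) ** w
                           for (a, _), w in zip(numbers, weights))) ** (1 / q)
    b_part = k * prod((b / k) ** w for (_, b), w in zip(numbers, weights))
    return (a_part, b_part)

# Aggregating two equally weighted evaluations (s_5, s_2) and (s_3, s_4):
agg = lq_rofwa([(5, 2), (3, 4)], [0.5, 0.5], q=2, k=8)
```

As a sanity check, aggregating identical evaluations returns that evaluation unchanged, and the aggregate of distinct evaluations lies between them componentwise.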

4. Proposed Lq-ROFS-ELECTRE II Algorithm

In this section, the novel Lq-ROFS-ELECTRE II algorithm is presented in the form of steps as follows.
Step 1: Let there be R decision-makers (r = 1, 2, \ldots, R) providing their judgments over n criteria for m alternatives. Define a decision matrix for each decision-maker, as in Equation (20).

D_r = \left[ \hat{\gamma}_{ij}^{r} \right]_{m \times n} = \begin{pmatrix} \hat{\gamma}_{11}^{r} & \cdots & \hat{\gamma}_{1n}^{r} \\ \vdots & \ddots & \vdots \\ \hat{\gamma}_{m1}^{r} & \cdots & \hat{\gamma}_{mn}^{r} \end{pmatrix}   (20)
Step 2: Aggregate the R decision-makers’ matrices to form the final decision matrix D by applying Equation (21).

D = \left[ Lq\text{-}ROFWA(\hat{\gamma}_{ij}^{1}, \hat{\gamma}_{ij}^{2}, \ldots, \hat{\gamma}_{ij}^{R}) \right]_{m \times n} = \begin{pmatrix} \hat{\gamma}_{11} & \cdots & \hat{\gamma}_{1n} \\ \vdots & \ddots & \vdots \\ \hat{\gamma}_{m1} & \cdots & \hat{\gamma}_{mn} \end{pmatrix}   (21)
Step 3: Define a set of criteria importance weights, \bar{\omega}^{*}, one for each criterion, as in Equation (22).

\bar{\omega}^{*} = \{ \bar{\omega}_1, \bar{\omega}_2, \ldots, \bar{\omega}_n \}   (22)
Step 4: Determine strong (C_{xy}^{s}), medium (C_{xy}^{m}), and weak (C_{xy}^{w}) concordance sets, as per Equations (23), (24) and (25), respectively.

C_{xy}^{s} = \{ j \mid L_s(\hat{\gamma}_{xj}) > L_s(\hat{\gamma}_{yj}), \; L_h(\hat{\gamma}_{xj}) > L_h(\hat{\gamma}_{yj}) \}   (23)

C_{xy}^{m} = \{ j \mid L_s(\hat{\gamma}_{xj}) > L_s(\hat{\gamma}_{yj}), \; L_h(\hat{\gamma}_{xj}) \leq L_h(\hat{\gamma}_{yj}) \}   (24)

C_{xy}^{w} = \{ j \mid L_s(\hat{\gamma}_{xj}) = L_s(\hat{\gamma}_{yj}), \; L_h(\hat{\gamma}_{xj}) > L_h(\hat{\gamma}_{yj}) \}   (25)
Step 5: Determine strong (Q_{xy}^{s}), medium (Q_{xy}^{m}), and weak (Q_{xy}^{w}) discordance sets, as per Equations (26)–(28).

Q_{xy}^{s} = \{ j \mid L_s(\hat{\gamma}_{xj}) < L_s(\hat{\gamma}_{yj}), \; L_h(\hat{\gamma}_{xj}) < L_h(\hat{\gamma}_{yj}) \}   (26)

Q_{xy}^{m} = \{ j \mid L_s(\hat{\gamma}_{xj}) < L_s(\hat{\gamma}_{yj}), \; L_h(\hat{\gamma}_{xj}) \geq L_h(\hat{\gamma}_{yj}) \}   (27)

Q_{xy}^{w} = \{ j \mid L_s(\hat{\gamma}_{xj}) = L_s(\hat{\gamma}_{yj}), \; L_h(\hat{\gamma}_{xj}) < L_h(\hat{\gamma}_{yj}) \}   (28)
Step 6: Determine the indifference set (I_{xy}) by applying Equation (29).

I_{xy} = \{ j \mid L_s(\hat{\gamma}_{xj}) = L_s(\hat{\gamma}_{yj}), \; L_h(\hat{\gamma}_{xj}) = L_h(\hat{\gamma}_{yj}) \}   (29)
Step 7: Define the weights for the strong, medium, and weak concordance and discordance relations, and a weight for the indifference relation, as \omega_{C^s}, \omega_{C^m}, \omega_{C^w}, \omega_{Q^s}, \omega_{Q^m}, \omega_{Q^w}, and \omega_{I}, where \omega_{C^x}, \omega_{Q^x}, \omega_{I} \leq 1 for x \in \{s, m, w\}.
Step 8: Determine the concordance matrix by applying Equation (30).

C = \left[ c_{xy} \right]_{m \times m} = \begin{pmatrix} c_{11} & \cdots & c_{1m} \\ \vdots & \ddots & \vdots \\ c_{m1} & \cdots & c_{mm} \end{pmatrix}   (30)

where c_{xy} is given by Equation (31).

c_{xy} = \frac{ \omega_{C^s} \sum_{j \in C_{xy}^{s}} \bar{\omega}_j + \omega_{C^m} \sum_{j \in C_{xy}^{m}} \bar{\omega}_j + \omega_{C^w} \sum_{j \in C_{xy}^{w}} \bar{\omega}_j + \omega_{I} \sum_{j \in I_{xy}} \bar{\omega}_j }{ \sum_{j=1}^{n} \bar{\omega}_j }   (31)
Step 9: Determine the discordance matrix by applying Equation (32).

Q = \left[ q_{xy} \right]_{m \times m} = \begin{pmatrix} q_{11} & \cdots & q_{1m} \\ \vdots & \ddots & \vdots \\ q_{m1} & \cdots & q_{mm} \end{pmatrix}   (32)

where q_{xy} is given by Equation (33).

q_{xy} = \frac{ \max \left( \omega_{Q^s} \max_{j \in Q_{xy}^{s}} \bar{\omega}_j \, d_{Hd}(\hat{\gamma}_{xj}, \hat{\gamma}_{yj}), \; \omega_{Q^m} \max_{j \in Q_{xy}^{m}} \bar{\omega}_j \, d_{Hd}(\hat{\gamma}_{xj}, \hat{\gamma}_{yj}), \; \omega_{Q^w} \max_{j \in Q_{xy}^{w}} \bar{\omega}_j \, d_{Hd}(\hat{\gamma}_{xj}, \hat{\gamma}_{yj}) \right) }{ \max_{j} d_{Hd}(\hat{\gamma}_{xj}, \hat{\gamma}_{yj}) }   (33)
Step 10: Determine the strong and weak outranking relationships as in Equations (34)–(36).

x S^F y \iff c_{xy} \geq \alpha^{+}, \; q_{xy} \leq \beta^{+}, \; c_{xy} \geq c_{yx}   (34)

x S^F y \iff c_{xy} \geq \alpha^{0}, \; q_{xy} \leq \beta^{0}, \; c_{xy} \geq c_{yx}   (35)

x S^f y \iff c_{xy} \geq \alpha^{-}, \; q_{xy} \leq \beta^{+}, \; c_{xy} \geq c_{yx}   (36)

where \alpha^{-}, \alpha^{0}, \alpha^{+} are the concordance thresholds and \beta^{+}, \beta^{0} are the discordance thresholds defined by the decision-maker, such that 0 < \alpha^{-} < \alpha^{0} < \alpha^{+} < 1 and 0 < \beta^{+} < \beta^{0} < 1.
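Step 10 can be sketched as a small classification routine over the concordance and discordance matrices. This is an illustrative sketch: the function name `outranking`, the threshold argument names, and the toy matrices `conc` and `disc` are all assumptions for demonstration.

```python
# Classify an ordered pair (x, y) as strongly outranking, weakly
# outranking, or neither, per the threshold conditions of Step 10.

def outranking(c, q, x, y, a_minus, a0, a_plus, b_plus, b0):
    """Return 'strong', 'weak', or None for the ordered pair (x, y)."""
    if c[x][y] < c[y][x]:
        return None
    if (c[x][y] >= a_plus and q[x][y] <= b_plus) or \
       (c[x][y] >= a0 and q[x][y] <= b0):
        return "strong"
    if c[x][y] >= a_minus and q[x][y] <= b_plus:
        return "weak"
    return None

conc = [[0.0, 0.8], [0.4, 0.0]]  # toy concordance matrix
disc = [[0.0, 0.1], [0.5, 0.0]]  # toy discordance matrix
rel = outranking(conc, disc, 0, 1,
                 a_minus=0.55, a0=0.65, a_plus=0.75, b_plus=0.2, b0=0.4)
```

With these toy values, alternative 0 strongly outranks alternative 1 (c = 0.8 ≥ α⁺ and q = 0.1 ≤ β⁺), while the reverse pair fails the c_{xy} ≥ c_{yx} test.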
Step 11: Exploit Outranking Relations.
In order to generate the strong outranking graph, if alternative x strongly outranks alternative y (x S^F y), then a directed arc is drawn from x to y. Similarly, if alternative x weakly outranks alternative y (x S^f y), then a directed arc is drawn in the weak outranking graph. Thereafter, the forward and backward orders are performed, and finally, the results of the two orders are combined to produce a final ranking. The forward order is performed as follows:
  • According to the strong outranking graph G_1^F, the set N_1^F is generated as the set of non-dominated alternatives. This set contains all alternatives that are not outranked by any other alternative, and thus have no incoming arcs. The set N_1^f is generated in the same manner from the weak outranking graph G_1^f.
  • The elements that lie in the intersection of the sets N_1^F and N_1^f, that is, N_1 = N_1^F \cap N_1^f, are the alternatives not outranked in either the strong or the weak outranking graph. All alternatives in N_1 are assigned a forward rank of 1, \psi_1(x) = 1 for all x \in N_1, where \psi_1(x) represents the forward ranking of alternative x.
  • All nodes and edges corresponding to alternatives in N_1 are removed from both the strong and weak outranking graphs. After removing these nodes and edges, the resulting graphs are G_2^F and G_2^f.
  • The preceding steps are repeated iteratively, with each iteration producing a new pair of strong and weak graphs G_v^F and G_v^f, until every alternative is assigned a forward rank.
Thereafter, the reverse ordering is performed as follows:
  • All of the directed edges in the strong and weak outranking graphs, G_1^F and G_1^f, are reversed to form the mirror-image graphs.
  • Each alternative is assigned a rank \psi_2(x) using the same procedure as in the forward ranking, where \psi_2(x) represents the reverse ranking of alternative x.
  • Due to the graph reversals, each rank is transformed using Equation (37), where the transformed rank is denoted by \psi_3(x).

    \psi_3(A_i) = 1 + \max_{A_v \in A} \psi_2(A_v) - \psi_2(A_i)   (37)
The final ranking for alternative A_i, denoted as \bar{\psi}(A_i), is obtained by combining the forward and reverse rankings, taking the midpoint of the two, as in Equation (38).

\bar{\psi}(A_i) = \frac{ \psi_1(A_i) + \psi_3(A_i) }{2}   (38)
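The forward distillation above can be sketched with plain dictionaries. This is an illustrative reading of the procedure (not the authors' implementation), assuming largely acyclic outranking graphs; `forward_ranks` and the toy graphs are hypothetical names and data.

```python
# Forward distillation: repeatedly peel off the alternatives that have
# no incoming arcs in both the strong and weak outranking graphs.
# Graphs map each node to the set of nodes it outranks (outgoing arcs).

def forward_ranks(strong, weak):
    nodes = set(strong) | set(weak)
    rank, v = {}, 1
    while nodes:
        dominated_s = {y for x in nodes for y in strong.get(x, set()) & nodes}
        dominated_w = {y for x in nodes for y in weak.get(x, set()) & nodes}
        # Non-dominated in both graphs: N_v = N_v^F ∩ N_v^f.
        layer = (nodes - dominated_s) & (nodes - dominated_w)
        if not layer:  # fallback for cycles, kept simple in this sketch
            layer = (nodes - dominated_s) or nodes
        for x in layer:
            rank[x] = v
        nodes -= layer  # remove ranked nodes and their arcs implicitly
        v += 1
    return rank

strong = {"A": {"B"}, "B": {"C"}, "C": set()}
weak = {"A": {"B", "C"}, "B": set(), "C": set()}
ranks = forward_ranks(strong, weak)
```

The reverse ordering would run the same routine on the edge-reversed graphs and then apply the transformation of Equation (37) before averaging, as in Equation (38).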
The pseudocode for the Lq-ROFS-ELECTRE II algorithm is provided in Algorithm 1.
Algorithm 1 Lq-ROFS-ELECTRE II
Require: R decision-makers, m alternatives, n criteria
Ensure: Final ranking of alternatives
 1: for r = 1 to R do
 2:     Construct decision matrix D_r = [\hat{\gamma}_{ij}^{r}]_{m \times n}
 3: end for
 4: Aggregate all D_r into the final decision matrix D = [\hat{\gamma}_{ij}] using the Lq-ROFWA operator
 5: Define criteria importance weights \bar{\omega}^{*} = \{\bar{\omega}_1, \ldots, \bar{\omega}_n\}
 6: for all x, y \in \{1, \ldots, m\}, x \neq y do
 7:     Determine weak, medium, and strong concordance sets C_{xy}^{w}, C_{xy}^{m}, C_{xy}^{s}
 8:     Determine indifference set I_{xy}
 9:     Determine weak, medium, and strong discordance sets Q_{xy}^{w}, Q_{xy}^{m}, Q_{xy}^{s}
10:     Compute concordance value c_{xy}
11:     Compute discordance value q_{xy}
12: end for
13: Define thresholds \alpha^{-}, \alpha^{0}, \alpha^{+}, \beta^{+}, \beta^{0}
14: for all x, y \in \{1, \ldots, m\}, x \neq y do
15:     if c_{xy} \geq \alpha^{+} and q_{xy} \leq \beta^{+} and c_{xy} \geq c_{yx} then
16:         x S^F y (strong outranking)
17:     else if c_{xy} \geq \alpha^{0} and q_{xy} \leq \beta^{0} and c_{xy} \geq c_{yx} then
18:         x S^F y (strong outranking)
19:     else if c_{xy} \geq \alpha^{-} and q_{xy} \leq \beta^{+} and c_{xy} \geq c_{yx} then
20:         x S^f y (weak outranking)
21:     end if
22: end for
23: Build strong outranking graph G^F and weak outranking graph G^f
24: Perform forward and backward ranking from G^F and G^f
25: Combine rankings to obtain final ranking of alternatives

5. Application of Lq-ROFS-ELECTRE II for Ontology Selection

5.1. Experimental Design

5.1.1. Dataset

The dataset used in this study was obtained by surveying the existing machine learning ontologies. A thorough survey was conducted on both research articles and ontology repositories. Ten research databases were surveyed, viz. IEEE Xplore, ACM, Taylor & Francis, ScienceDirect, Springer, JSTOR, Emerald, MDPI, Wiley, and ArXiv. Furthermore, six popular ontology repositories were also surveyed, viz. BioPortal, AgroPortal, Ontology Lookup Service, Ontobee, OBO Foundry, and LOV. The first search, of the 10 research databases using the search terms “Machine Learning” and “Ontology”, resulted in 19 applicable papers in which an ML-related ontology was developed. The second search, within the ontology repositories, resulted in 8 ontologies. Thereafter, following a secondary analysis of the 27 candidate ontologies, 8 were eliminated, for reasons including duplicated ontologies, unavailability of source files, and dated or unmaintained source files, and the remaining 19 were studied further. The final 19 ML-related ontologies, along with a brief summary of each, are provided in Table 1.

5.1.2. Ontology Evaluation Criteria

The criteria used to evaluate the ontologies are extracted from the Ontometric framework [23]. Ontometric provides five categories of characteristics that can assist in the choice of the right ontology, namely tools, language, content, methodology, and costs. Based on these categories, we constructed seven criteria in this study to run the proposed Lq-ROFS-ELECTRE II algorithm. The tools and costs categories were omitted because the ontologies in the dataset (listed in Table 1) are not specific to a project or system implementation; rather, a comparative analysis across the entire dataset was desired. In other settings, it may be appropriate to include criteria from these two categories as well. The seven criteria are presented in the Venn diagram in Figure 1 and are elaborated on as follows.
  • Expressiveness of the knowledge domain by the ontology. How well does the ontology express ML concepts, relations, attributes, and their dependencies? This criterion aligns with the Language category of characteristics in Ontometric [23].
  • Axiom design and logical expressiveness for reasoning support. Does the ontology include well-defined axioms that adequately formalize the logical relationships between concepts and instances, enabling potential reasoning tasks such as inference, consistency checking, and query answering? This criterion represents the Language category of metrics in Ontometric [23].
  • Ontology evolution and maintenance practices. Does the ontology show evidence of systematic maintenance and evolution? Are there clear versioning practices, change documentation, and evidence of iterative improvement? This criterion corresponds to the Methodology category of characteristics in Ontometric [23].
  • Documentation quality and usability support. How clear and comprehensive is the ontology documentation? Does it include complete usage examples, methodology explanations, and sufficient detail to enable effective use by domain experts and developers? This criterion covers the Methodology category of characteristics in Ontometric [23].
  • Natural language alignment and concept clarity. How closely does the ontology’s formal representation align with natural language expressions of ML concepts? Are the concepts, labels, and definitions intuitive and clear to domain experts? This criterion aligns with the Language and Content categories of characteristics in Ontometric [23].
  • Modularity and proper taxonomy use. How well structured is the taxonomy of the ontology? Are its taxonomy, modules and partitions in line with the ML domain requirements? This criterion addresses the Content category of characteristics in Ontometric [23].
  • Alignment between formal ontology definitions and natural language descriptions. Does the ontology provide clear and accurate natural language documentation (e.g., rdfs:comment) for its formally defined classes, properties, and axioms? This criterion evaluates how well the ontology bridges the gap between formal specification and human-readable domain understanding. It aligns with three Ontometric categories [23] of Content, Language, and Methodology.

5.1.3. Computer and Software Environment

The experiments in this study were carried out on a 64-bit Microsoft® Windows® 10 device with 8 GB of RAM and a 512 GB SSD. The device had an Intel® Core™ i3 processor running at 2.30 GHz. The Lq-ROFS-ELECTRE II algorithm was implemented in the Python 3 programming language using the PyCharm 2025.2.4 development environment.

5.2. Experimental Results and Discussion

5.2.1. Decision Matrices

The first step is to obtain the expert opinions in the form of a decision matrix. In light of the 7 criteria defined above, 4 experts in the fields of artificial intelligence and ontology engineering judged the 19 ontologies. These judgments, displayed in Table 2, Table 3, Table 4 and Table 5, were expressed using a linguistic term set defined as L T S = { s 0 = very poor, s 1 = poor, s 2 = average, s 3 = good, s 4 = very good}. The 4 decision matrices were aggregated using Equation (21), resulting in the aggregated decision matrix in Table 6. The q parameter was set to 2.
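Equation (21) is not reproduced in this section, so the following is a purely illustrative sketch of the aggregation step: it assumes each linguistic term s_k in the LTS maps to a membership grade k/4 (with non-membership (4−k)/4) and aggregates the experts' judgments for one matrix cell by a simple arithmetic mean. All function and variable names are hypothetical.

```python
# Illustrative sketch only: Equation (21) from the paper is not reproduced
# here. We assume each linguistic term s_k in LTS = {s_0, ..., s_4} maps to
# a membership grade k/4 (with non-membership (4-k)/4), and aggregate the
# experts' matrices cell-wise by an arithmetic mean.

LTS = {"very poor": 0, "poor": 1, "average": 2, "good": 3, "very good": 4}
T = 4  # index of the largest term, s_4

def to_grades(term):
    """Map a linguistic term to (membership, non-membership) grades."""
    k = LTS[term]
    return k / T, (T - k) / T

def aggregate(judgments):
    """Mean-aggregate one decision-matrix cell judged by R experts."""
    grades = [to_grades(t) for t in judgments]
    mu = sum(g[0] for g in grades) / len(grades)
    nu = sum(g[1] for g in grades) / len(grades)
    return mu, nu

# One cell of the decision matrix, judged by 4 experts:
mu, nu = aggregate(["good", "average", "good", "very good"])
```

Under these assumptions, the four judgments above aggregate to a membership grade of 0.75 and a non-membership grade of 0.25.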

5.2.2. Importance Weightings

In this study, the mean weighting method was applied to obtain the criteria importance weights, giving each of the 7 criteria an equal weight of 1/7. The weights for the strong, medium, and weak concordance and discordance sets, and the indifference set, were defined as ( ω C s , ω C m , ω C w , ω Q s , ω Q m , ω Q w , ω I ) = ( 1 , 0.85 , 0.70 , 1 , 0.85 , 0.70 , 0.55 ) .

5.2.3. Concordance and Discordance Matrices

In order to determine the concordance and discordance values, the score and accuracy values were determined using Equations (16) and (17). These are displayed in Table 7 and Table 8. Using the score and accuracy values, the weak, medium, and strong concordance sets were determined. These were used to generate the global concordance matrix in Table 9. After determining the global concordance, the sets of strong, medium, and weak discordance were determined. Using these sets, the global discordance matrix was determined and is presented in Table 10.
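Equations (16) and (17) are not reproduced in this section. As a hedged sketch, the snippet below uses the score and accuracy functions commonly defined for q-rung orthopair values, S = μ^q − ν^q and H = μ^q + ν^q, with q = 2 as in the experiments; the paper's linguistic variants may differ in scaling. The `prefers` helper is a hypothetical name for the pairwise comparison used to build the concordance sets.

```python
# Hedged sketch: Equations (16) and (17) are not reproduced here. For a
# q-rung orthopair value (mu, nu), commonly used definitions are
# score S = mu**q - nu**q and accuracy H = mu**q + nu**q; q = 2 as in
# the experiments. The linguistic variants may differ in scaling.

Q = 2

def score(mu, nu, q=Q):
    return mu**q - nu**q

def accuracy(mu, nu, q=Q):
    return mu**q + nu**q

def prefers(a, b, q=Q):
    """True if value a = (mu, nu) is ranked above b: the higher score
    wins, with accuracy used to break score ties."""
    if score(*a, q) != score(*b, q):
        return score(*a, q) > score(*b, q)
    return accuracy(*a, q) > accuracy(*b, q)
```

Pairwise comparisons of this kind are what determine, for each criterion, whether an alternative pair falls into a concordance, discordance, or indifference set.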

5.2.4. Weak and Strong Outranking Relations

The global concordance and discordance matrices were used to determine the strong and weak outranking relations by comparing their values to the corresponding thresholds ( α , α 0 , α + , β + , β 0 ) = ( 0.6 , 0.75 , 0.9 , 0.05 , 0.12 ) . Table 11 shows the outranking relations, where x S F y denotes that alternative x strongly outranks alternative y, and x S f y denotes that x weakly outranks y.
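The exact outranking rule is defined by the paper's concordance and discordance conditions; as a minimal sketch, the snippet below shows one common ELECTRE II formulation of the strong and weak tests, instantiated with the threshold values from the study. The rule structure (which threshold combinations trigger each relation) is an assumption here.

```python
# Minimal sketch of the outranking test. One common ELECTRE II
# formulation is shown, instantiated with the study's thresholds
# (alpha, alpha_0, alpha_plus, beta_plus, beta_0); the exact rule used
# by Lq-ROFS-ELECTRE II is defined in the paper and may differ.

ALPHA, ALPHA_0, ALPHA_PLUS = 0.60, 0.75, 0.90
BETA_PLUS, BETA_0 = 0.05, 0.12

def strong_outranks(c, d):
    """x S_F y: high global concordance c with low global discordance d."""
    return (c >= ALPHA_PLUS and d <= BETA_0) or (c >= ALPHA_0 and d <= BETA_PLUS)

def weak_outranks(c, d):
    """x S_f y: a weaker concordance/discordance requirement."""
    return c >= ALPHA and d <= BETA_0
```

Applying these tests to every ordered pair of alternatives yields the relation tables such as Table 11.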
Figure 2 and Figure 3 display the outranking relations S F and S f graphically. The ontologies are represented as nodes, and the directed edges show which ontologies strongly or weakly outrank others. Each node is colored according to its in-degree, where green represents 0 and red represents the maximum number of incoming edges. In the strong outranking graph in Figure 2, ontologies 8 and 4 have the lowest in-degree, which means they are the strongest, whereas ontology 19 has the highest in-degree, making it the weakest. In the weak outranking graph in Figure 3, ontologies 1, 3, 11, and 18 have the lowest in-degree, and ontologies 2 and 6 have the highest.
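The node colouring in Figures 2 and 3 is driven by a simple in-degree count: each directed edge (x, y) means "x outranks y", so a node's in-degree counts how many ontologies outrank it, and lower is better. A small sketch with a hypothetical edge list (not taken from the paper's results):

```python
# In-degree computation behind the node colouring: edge (x, y) means
# "x outranks y", so in-degree counts how often a node is outranked.
from collections import Counter

def in_degrees(nodes, edges):
    deg = Counter({v: 0 for v in nodes})  # start every node at 0
    for _, target in edges:
        deg[target] += 1
    return dict(deg)

nodes = [1, 2, 3, 4]
edges = [(1, 2), (3, 2), (4, 2), (1, 3)]  # hypothetical outrankings
deg = in_degrees(nodes, edges)  # node 2 is outranked three times
```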

5.2.5. Exploitation of Outranking Relations

The final step is to calculate the forward and backward orders and then apply their midpoints to obtain a final ranking. Table 12 shows the ranks obtained by the forward order ( ψ 1 ), the backward order ( ψ 2 ), the transformed backward order ( ψ 3 ), and the midpoint ( ψ ¯ ).
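Assuming the usual ELECTRE II convention, the transformed backward order is ψ3 = 1 + max(ψ2) − ψ2 and the final rank is the midpoint (ψ1 + ψ3)/2, with alternatives sharing a midpoint being tied; a minimal sketch under that assumption, with hypothetical orders for three alternatives:

```python
# Sketch of the exploitation step, assuming the standard ELECTRE II
# convention: psi_3 = 1 + max(psi_2) - psi_2, final rank = midpoint of
# psi_1 and psi_3. Alternatives with equal midpoints tie in the ranking.

def final_ranking(psi1, psi2):
    top = 1 + max(psi2.values())
    psi3 = {a: top - r for a, r in psi2.items()}   # reverse backward order
    return {a: (psi1[a] + psi3[a]) / 2 for a in psi1}

# Hypothetical forward and backward orders for three alternatives:
psi1 = {"A": 1, "B": 2, "C": 3}
psi2 = {"A": 3, "B": 2, "C": 1}
ranks = final_ranking(psi1, psi2)
```

When the two orders agree, as in this toy example, the midpoints simply reproduce the common order.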

5.2.6. Final Ranking of Ontologies

The midpoint ( ψ ¯ ) values in the rightmost column of Table 12 represent the final ranking of the ontologies. Some ontologies occupy the same position in the ranking. For example, the first position is occupied by ontologies 4 (EDAM ontology in Table 1) and 8 (ITO ontology in Table 1), ontologies 1 and 16 share the midpoint position of 2.5, ontologies 2 and 10 share the 10th position, etc. The complete final ranking of the ontologies, from best to worst, is as follows: 4, 8 ≻ 18 ≻ 1, 16 ≻ 14 ≻ 17 ≻ 5 ≻ 6 ≻ 3 ≻ 9 ≻ 13 ≻ 10 ≻ 12 ≻ 7 ≻ 11 ≻ 2 ≻ 15 ≻ 19. That is, the EDAM and ITO ontologies in Table 1 are both ranked in first place. The second position is occupied by the SWeLMS ontology (see Table 1). In third position are the MLOnto and OntoDM ontologies. The MLSchema ontology in Table 1 is ranked last. The second- and third-last-ranked ontologies are the pairs <Expose, DSEO> and <RAInS, MEX-Core>, ranked at positions 9 and 10, respectively.
The heatmap in Figure 4 expresses the extent to which each ontology in the dataset in Table 1 covers AI concepts. Certain topics have very poor coverage across almost all ontologies in the dataset. Only 3 of the 19 ontologies in Table 1 contain concepts regarding autoencoders: the MLOnto, ITO, and AIO ontologies. Explainable AI is also represented very sparsely, with just 6 of the 19 ontologies containing related content. A mere 2 of the 19 ontologies, MLOnto and ITO, contain concepts related to quantum computing. Transfer learning appears in only 1 of the 19 ontologies (the ITO ontology), and Transformers appear in just 4 of the 19. These concepts are thus present in only a few ontologies, and more work is required to develop ontologies covering them.
The stacked bar graph in Figure 5 depicts the different quality metrics for each of the ontologies. The dataset in Table 1 contains both large and small ontologies. The largest ontology is the ITO ontology, with over 1 million axioms, 16,000 classes, and 93,000 individuals. The smallest ontologies are MEX-Perf, MLSchema, and Expose, with fewer than 500 axioms each. Regarding the density of the ontologies, SWeLMS has a very high average population density as well as high relationship richness, suggesting a well-linked and densely populated ontology. Ontologies like EDAM and MLLO have many axioms and classes but lack individuals. Meanwhile, SWeLMS, VAIR, and FAIRnets have a more balanced profile, showing both high class richness and large populations. The ranking achieved by the proposed Lq-ROFS-ELECTRE II (Table 12), along with the above analysis of the characteristics of the ML ontologies in the dataset in Table 1, can assist in the selection and reuse of these ontologies in the ML domain.

5.3. Comparative Analysis

This section compares the proposed Lq-ROFS-ELECTRE II algorithm to the traditional ELECTRE II method and other existing MCDM methods.

5.3.1. Comparison with Traditional ELECTRE II

In order to ascertain the validity of the ranking obtained with the proposed Lq-ROFS-ELECTRE II, the traditional ELECTRE II method was applied to the same dataset of ML ontologies in Table 1. However, traditional ELECTRE II cannot handle qualitative/linguistic variables and therefore cannot use the 7 criteria defined for the proposed Lq-ROFS-ELECTRE II algorithm. Instead, the 7 quantitative metrics displayed in Figure 5 were calculated and used as the criteria for the traditional ELECTRE II method. Using concordance thresholds of p = 0.5 , p 0 = 0.6 , and p * = 0.7 , and discordance thresholds of q 0 = 0.3 and q * = 0.4 , the traditional ELECTRE II method generated a ranking of 8 ≻ 18 ≻ 3, 14, 16 ≻ 1, 4, 5 ≻ 6, 17 ≻ 7 ≻ 13 ≻ 9 ≻ 19 ≻ 10 ≻ 2 ≻ 11 ≻ 15 ≻ 12.
It can be observed that the rankings from the proposed Lq-ROFS-ELECTRE II and the traditional ELECTRE II are quite similar, with some ontologies being assigned the same rank by both methods. Ontology 8 (ITO) was placed first by both methods. The graph in Figure 6 shows the ranks assigned to each ontology by each method. The blue plot shows the ranks from the Lq-ROFS-ELECTRE II, the purple plot shows the ranks from the traditional ELECTRE II, and the red bars depict the rank difference for each ontology, that is, the difference between the ranks given by the two methods. A rank difference of 0 means the ontology was ranked identically by both methods. Both plots follow the same trend. In 42% of instances, specifically ontologies 8, 9, 10, 11, 15, 16, 17, and 18, the same rank is assigned by both methods. The minimum non-zero rank difference is 1, the maximum is 6, and the average difference in ranks between the methods is 2.9.
To further evaluate and compare the rankings, two statistical metrics were calculated: Spearman’s Rho and Kendall’s Tau correlation coefficients. The Spearman Rho value was 0.84 and the Kendall Tau value was 0.67. Both values are relatively high, implying a strong positive monotonic relationship between the rankings from the proposed Lq-ROFS-ELECTRE II and the traditional ELECTRE II: when the rank from one method increases, the rank from the other tends to increase as well, and the two methods largely agree on the order of the ontologies. These findings attest to the validity of the ranking performed by the proposed Lq-ROFS-ELECTRE II algorithm and its effectiveness in the task of ontology ranking.
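Both coefficients can be computed directly from the two rank vectors. A self-contained sketch in pure Python (assuming no tied ranks): Spearman's rho via 1 − 6Σd²/(n(n² − 1)), and Kendall's tau as concordant-minus-discordant pairs over n(n − 1)/2. The rank vectors below are hypothetical, not the study's 19-ontology rankings.

```python
# Pure-Python rank correlations (no ties assumed).
from itertools import combinations

def spearman_rho(r1, r2):
    n = len(r1)
    d2 = sum((a - b) ** 2 for a, b in zip(r1, r2))
    return 1 - 6 * d2 / (n * (n * n - 1))

def kendall_tau(r1, r2):
    pairs = list(combinations(range(len(r1)), 2))
    # +1 for each concordant pair, -1 for each discordant pair
    s = sum(1 if (r1[i] - r1[j]) * (r2[i] - r2[j]) > 0 else -1
            for i, j in pairs)
    return s / len(pairs)

# Hypothetical rankings of five alternatives by two methods:
rho = spearman_rho([1, 2, 3, 4, 5], [1, 3, 2, 4, 5])
tau = kendall_tau([1, 2, 3, 4, 5], [1, 3, 2, 4, 5])
```

With tied ranks, as occur in Table 12, the tie-corrected variants of both coefficients would be needed instead.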

5.3.2. Comparison with Existing MCDM Methods

The proposed Lq-ROFS-ELECTRE II algorithm is compared with other MCDM methods that were applied to ontology selection in Table 13. It can be seen in Table 13 that only the proposed Lq-ROFS-ELECTRE II algorithm has been tested on ML ontologies, while most of the other MCDM methods used datasets of biomedical ontologies. The bottom of Table 13 also shows that the proposed Lq-ROFS-ELECTRE II algorithm has been tested on a larger dataset than the other fuzzy method implemented in an earlier study. Furthermore, the proposed Lq-ROFS-ELECTRE II algorithm implements more qualitative criteria than any previous study. These qualitative criteria make it superior to the other existing MCDM methods in Table 13 in that (1) it can handle linguistic criteria that allow for more natural evaluations from subject experts in real-world applications of ontology selection, and (2) the individual concordance, discordance, and indifference weights, as well as the outranking relation thresholds, give the decision-maker considerable flexibility in expressing their preferences.
The proposed Lq-ROFS-ELECTRE II algorithm is further compared with other fuzzy variations of ELECTRE II in Table 14, specifically the Intuitionistic Fuzzy Set (IFS), Pythagorean Fuzzy Set (PFS), Probabilistic Linguistic Term Set (PLTS), Z-Probabilistic Linguistic Term Set (Z-PLTS), and q-ROFS versions of ELECTRE II. Table 14 shows that, unlike other ELECTRE II-based algorithms that rely on fixed threshold values, the proposed Lq-ROFS-ELECTRE II algorithm offers threshold adaptability through the integration of a tunable q-parameter and linguistic evaluations, allowing context-sensitive outranking that aligns with the uncertainty and semantics of decision-maker inputs. It can be observed from Table 14 that the proposed Lq-ROFS-ELECTRE II algorithm is the only one that allows membership, non-membership, and indeterminacy degrees, provides linguistic support, and allows for threshold adaptability. The combination of these capabilities yields a powerful and expressive framework for decision-makers, whilst maintaining a balance between flexibility and accuracy.

5.3.3. Time Complexity of the Lq-ROFS-ELECTRE II Algorithm

The time complexity of the proposed Lq-ROFS-ELECTRE II algorithm was analyzed to further assess its efficiency. The 3 main parameters of the algorithm outlined in Algorithm 1 are the number of decision-makers, denoted R, the number of alternatives, denoted m, and the number of criteria, denoted n. The time complexity of each step of the Lq-ROFS-ELECTRE II algorithm is shown in Table 15; the total time complexity is O(Rmn + m²n + m³). Which term dominates depends on the values of R, m, and n, and there are 3 cases in which the complexity simplifies. The first case is when there are many alternatives and relatively few criteria and decision-makers, i.e., m ≫ n and m ≫ R; here the m³ term dominates and the complexity becomes O(m³). The second case is when there are many criteria relative to alternatives, i.e., n ≫ m, so that m²n ≫ m³; here the complexity becomes O(m²n). The third case is when there are many decision-makers, i.e., R ≫ m, so that Rn ≫ m²; here the complexity becomes O(Rmn). The surface plots representing the different time complexity cases are shown in Figure 7, Figure 8, Figure 9 and Figure 10. In the general case (Figure 7), there is nonlinear growth combining contributions from all terms. When there are many alternatives (Figure 8), the surface is relatively flat along the number of criteria, indicating the minimal impact of n. The sharp increase along n in the third surface plot (Figure 9) reflects the dominance of the criteria. And when the number of decision-makers R dominates (Figure 10), growth is proportional, with minimal curvature compared to the other cases.
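The three regimes can be checked with a simple operation-count model that evaluates each term of Rmn + m²n + m³ and reports the dominant one; the parameter values below are hypothetical problem sizes, not the study's.

```python
# Illustrative operation-count model for the O(Rmn + m^2 n + m^3) total:
# evaluate each term to see which regime dominates a given problem size.

def terms(R, m, n):
    return {"Rmn": R * m * n, "m2n": m * m * n, "m3": m ** 3}

def dominant(R, m, n):
    t = terms(R, m, n)
    return max(t, key=t.get)

case_many_alternatives = dominant(R=4, m=100, n=7)   # m >> n, m >> R
case_many_criteria = dominant(R=4, m=10, n=500)      # n >> m
case_many_experts = dominant(R=10_000, m=10, n=7)    # R >> m
```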

5.3.4. Limitations of the Proposed Lq-ROFS-ELECTRE II

Despite its superior capabilities over the existing MCDM methods and fuzzy ELECTRE II variants as presented above, the proposed Lq-ROFS-ELECTRE II algorithm has some limitations.
  • The large number of weights that must be defined may be difficult for the user to select, especially when the user does not wish to specify importance weightings.
  • Defining appropriate concordance and discordance thresholds requires expert knowledge.

6. Applicability of L q -ROFS-ELECTRE II in Other Domains

Although the proposed Lq-ROFS-ELECTRE II algorithm has been implemented to rank ontologies, it can be applied in any domain where there are decision-making scenarios that comprise multiple conflicting criteria under uncertainty. The concept of Lq-ROFS enhances the algorithm’s ability to handle uncertainty, hesitation, and imprecise information, which are common challenges in real-world decision environments. The integration of Lq-ROFS with the ELECTRE II framework further strengthens the Lq-ROFS-ELECTRE II algorithm by incorporating both concordance and discordance analyses, which allow for robust ranking even in the presence of conflicting evaluations. Some critical areas where the proposed Lq-ROFS-ELECTRE II algorithm may be particularly useful are as follows:
  • Medical and healthcare diagnosis tasks, such as selecting optimal treatment plans based on criteria such as effectiveness, side effects, cost, and patient preferences.
  • Smart cities and urban planning tasks, such as evaluating and prioritizing urban development projects using criteria like cost, environmental impact, public acceptance, and scalability.
  • Education and e-learning systems evaluation, such as the evaluation of different e-learning platforms or curricula according to factors like accessibility, pedagogical effectiveness, user experience, and flexibility.
  • Finance and investment tasks, such as portfolio selection and optimization and investment decision-making under risk and uncertainty.

7. Conclusions and Future Work

In this paper, a novel decision-making algorithm, namely Lq-ROFS-ELECTRE II, was developed and applied to rank 19 ontologies of the ML domain in light of seven qualitative criteria. The criteria were extracted from the Ontometric framework and expressed in terms of its Language, Content, and Methodology categories. Four experts evaluated the ontologies against the criteria using the defined linguistic term set, and their judgments served as the decision matrices for the ranking problem. The ranking results show that the Lq-ROFS-ELECTRE II algorithm successfully ranked all 19 ML ontologies. In order to ascertain the validity of the ranking, the traditional ELECTRE II method was applied to rank the 19 ontologies according to seven quantitative metrics. Spearman’s Rho and Kendall’s Tau correlation coefficients were calculated to compare the rankings obtained from the Lq-ROFS-ELECTRE II and ELECTRE II methods. Despite the use of different criteria, the two rankings show a fairly high level of correlation, indicating the validity of the ranking by the proposed Lq-ROFS-ELECTRE II. The proposed Lq-ROFS-ELECTRE II algorithm provides a more flexible and robust structure for experts to provide their judgments, making it better suited to the real-world application of ontology selection than existing MCDM methods. Furthermore, a comparative analysis of the proposed Lq-ROFS-ELECTRE II with other existing fuzzy ELECTRE II methods showed its superior modeling capabilities.
Future work would focus on applying the proposed Lq-ROFS-ELECTRE II algorithm to ontology selection and ranking on a larger dataset of ontologies, as well as on ontologies from other domains. Another future direction is the development of more expressive criteria for evaluating and ranking ontologies. Beyond ontology ranking, the proposed Lq-ROFS-ELECTRE II algorithm may be applied to other decision-making problems in both academic and real-world environments. To further enhance its modeling capabilities, the Lq-ROFS-ELECTRE II algorithm could also be integrated with other intelligent methods such as machine learning.

Author Contributions

Conceptualization, A.S. and J.V.F.-D.; methodology, A.S. and J.V.F.-D.; software, A.S.; validation, A.S. and J.V.F.-D.; formal analysis, A.S. and J.V.F.-D.; investigation, A.S. and J.V.F.-D.; resources, A.S. and J.V.F.-D.; data curation, A.S. and J.V.F.-D.; writing—original draft preparation, A.S.; writing—review and editing, A.S. and J.V.F.-D.; visualization, A.S.; supervision, J.V.F.-D.; project administration, A.S. and J.V.F.-D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The dataset of ontologies used in this study can be accessed from https://bit.ly/4psEeFh (accessed on 27 October 2025).

Acknowledgments

The authors would like to express their appreciation to the reviewers for their valuable efforts.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Xu, Z.; Ichise, R. FinCaKG-Onto: The financial expertise depiction via causality. Appl. Intell. 2025, 65, 789–805. [Google Scholar]
  2. Kirillovich, A.; Nevzorova, O.; Lipachev, E. OntoMath PRO: An ontology of mathematical knowledge. Doklady Math. 2022, 105, 45–50. [Google Scholar]
  3. Adamik, M.; Pernisch, R.; Tiddi, I.; Schlobach, S. ORKA: An ontology for robotic knowledge acquisition. In Proceedings of the Conference on Knowledge Engineering and Knowledge Management, EKAW 2024, Amsterdam, The Netherlands, 26–28 November 2024; Springer: Cham, Switzerland, 2025; Volume 15370, pp. 305–321. [Google Scholar]
  4. Sharma, S.; Jain, S. CovidO: An ontology for COVID-19 metadata. J. Supercomput. 2024, 80, 1238–1267. [Google Scholar] [CrossRef]
  5. Tkachenko, K.; Tkachenko, O.; Tkachenko, O.; Mazur, N.; Mashkina, I. Ontological approach in modern educational processes. In Proceedings of the CPITS-2024: Cybersecurity Providing in Information and Telecommunication Systems, Kyiv, Ukraine, 28 February 2024. CEUR Workshop Proceedings. [Google Scholar]
  6. Abtew, A. Ontology development for public policy implementation: Challenges, opportunities, and applications. J. Math. Techniques Comput. Math. 2023, 2, 363–367. [Google Scholar]
  7. Tudorache, T. Ontology engineering: Current state, challenges, and future directions. Semant. Web 2019, 11, 125–138. [Google Scholar] [CrossRef]
  8. Carriero, V.A.; Daquino, M.; Gangemi, A.; Nuzzolese, A.G.; Peroni, S.; Presutti, V.; Tomasi, F. The Landscape of Ontology Reuse Approaches. In Applications and Practices in Ontology Design, Extraction, and Reasoning; Studies on the Semantic Web; IOS Press: Amsterdam, The Netherlands, 2020; Volume 49, pp. 21–38. [Google Scholar] [CrossRef]
  9. Fonou-Dombeu, J.V. A comparative application of multi-criteria decision making in ontology ranking. In Business Information Systems; Springer: Cham, Switzerland, 2019; pp. 55–69. [Google Scholar]
  10. Sooklall, A.; Dombeu, J.V.F. Comparative ranking of ontologies with ELECTRE family of multi-criteria decision-making algorithms. In Proceedings of the International Conference on Advanced Information Systems Engineering, Bari, Italy, 28–30 November 2022; Springer: Cham, Switzerland, 2022; pp. 265–275. [Google Scholar]
  11. Alani, H.; Brewster, C.; Shadbolt, N. Ranking ontologies with AKTiveRank. In Proceedings of the 5th International Conference on The Semantic Web, Athens, Greece, 1–15 November 2006. [Google Scholar]
  12. Yu, W.; Chen, J.; Cao, J. A novel approach for ranking ontologies on the semantic web. In Proceedings of the First International Symposium on Pervasive Computing and Applications, Urumqi, China, 3–5 August 2006; pp. 608–612. [Google Scholar]
  13. Alipanah, N.; Srivastava, P.; Parveen, P.; Thuraisingham, B. Ranking ontologies using verified entities to facilitate federated queries. In Proceedings of the 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, Washington, DC, USA, 31 August–3 September 2010. [Google Scholar]
  14. Subhashini, R.; Akilandeswari, J. A novel approach for ranking ontologies based on the structure and semantics. J. Theor. Appl. Inf. Technol. 2014, 65, 147–153. [Google Scholar]
  15. Alani, H.; Brewster, C. Ontology ranking based on the analysis of concept structures. In Proceedings of the 3rd International Conference on Knowledge Capture (K-Cap), Banff, AB, Canada, 2–5 October 2005. [Google Scholar]
  16. Sooklall, A.; Dombeu, J.V.F. Application of genetic algorithm for complexity metrics-based classification of ontologies with ELECTRE Tri. In Proceedings of the Pan-African Artificial Intelligence and Smart Systems Conference, Dakar, Senegal, 2–4 November 2022. [Google Scholar] [CrossRef]
  17. Fonou-Dombeu, J.V.; Viriri, S. CRank: A novel framework for ranking semantic web ontologies. In Model and Data Engineering; Springer: Cham, Switzerland, 2018; pp. 107–121. [Google Scholar]
  18. Esposito, A.; Tarricone, L.; Zappatore, M. Applying multi-criteria approaches to ontology ranking: A comparison with AKTiveRank. Int. J. Metadata Semant. Ontol. 2012, 7, 197. [Google Scholar] [CrossRef]
  19. Akram, M.; Ilyas, F.; Garg, H. ELECTRE-II method for group decision-making in Pythagorean fuzzy environment. Appl. Intell. 2021, 51, 8701–8719. [Google Scholar] [CrossRef]
  20. Akram, M.; Noreen, U.; Deveci, M. Enhanced ELECTRE II method with 2-tuple linguistic m-polar fuzzy sets for multi-criteria group decision making. Expert Syst. Appl. 2023, 213, 119237. [Google Scholar] [CrossRef]
  21. Akram, M.; Zahid, K.; Kahraman, C. A new ELECTRE-based decision-making framework with spherical fuzzy information for the implementation of autonomous vehicles project in Istanbul. Knowl.-Based Syst. 2024, 283, 111207. [Google Scholar] [CrossRef]
  22. Al-Quran, A.; Jamil, N.; Tehrim, S.T.; Riaz, M. Cubic bipolar fuzzy VIKOR and ELECTRE-II algorithms for efficient freight transportation in Industry 4.0. AIMS Math. 2023, 8, 24484–24514. [Google Scholar] [CrossRef]
  23. Lozano-Tello, A.; Gomez-Perez, A. ONTOMETRIC: A method to choose the appropriate ontology. J. Database Manag. 2004, 15, 1–18. [Google Scholar] [CrossRef]
  24. Jones, M.; Alani, H. Content-based ontology ranking. In Proceedings of the 9th International Protégé Conference, Stanford, CA, USA, 23–26 July 2006. [Google Scholar]
  25. Park, J.; Oh, S.; Ahn, J. Ontology selection ranking model for knowledge reuse. Expert Syst. Appl. 2011, 38, 5133–5144. [Google Scholar] [CrossRef]
  26. Butt, A.; Haller, A.; Xie, L. DWRank: Learning concept ranking for ontology search. Semant. Web 2016, 7, 447–461. [Google Scholar] [CrossRef]
  27. Roy, B. Classement et choix en présence de points de vue multiples. Rev. Française D’Informatique Rech. Opérationnelle 1968, 2, 57–75. [Google Scholar] [CrossRef]
  28. Roy, B.; Bertier, P. La methode ELECTRE II: Une Methode de Classement en Presence de Critteres Multiples; Note de Travail No. 142; SEMA (Metra International), Direction Scientifique: Paris, France, 1971. [Google Scholar]
  29. Roy, B. ELECTRE III: Un algorithme de classements fondé sur une representation floue des preferences en presence de criteres multiples. Cahiers CERO 1978, 20, 3–24. [Google Scholar]
  30. Roy, B.; Hugonnard, J. Classement des prolongements de lignes de metro en banlieue parisienne. Cahiers CERO 1982, 24, 153–171. [Google Scholar]
  31. Yu, W. ELECTRE TRI—Aspects Méthodologiques et Manuel d’Utilisation; Université de Paris-Dauphine, LAMSADE: Paris, France, 1992. [Google Scholar]
  32. Sooklall, A.; Dombeu, J.V.F. An Enhanced ELECTRE II Method for Multi-Attribute Ontology Ranking with Z-Numbers and Probabilistic Linguistic Term Set. Future Internet 2022, 14, 271. [Google Scholar] [CrossRef]
  33. Devadoss, V.A.; Rekha, M. A New Intuitionistic Fuzzy ELECTRE II approach to study the Inequality of women in the society. Glob. J. Pure Appl. Math. 2017, 13, 6583–6594. [Google Scholar]
  34. Chen, N.; Xu, Z. Hesitant fuzzy ELECTRE II approach: A new way to handle multi-criteria decision making problems. Inf. Sci. 2015, 292, 175–197. [Google Scholar] [CrossRef]
  35. Pinar, A.; Boran, E. A q-rung orthopair fuzzy multi-criteria group decision making method for supplier selection based on a novel distance measure. Int. J. Mach. Learn. Cybern. 2020, 11, 1–15. [Google Scholar] [CrossRef]
  36. Rogers, M.; Bruen, M.; Maystre, L. ELECTRE and Decision Support; Springer Science+Business Media, LLC: New York, NY, USA, 2000. [Google Scholar]
  37. Braga, J.; Dias, J.L.R.; Regateiro, F. A machine learning ontology. Frenxiv Pap. 2023; in press. [Google Scholar]
  38. Vanschoren, J.; Soldatova, L. Exposé: An ontology for data mining experiments. In Proceedings of the International Workshop on Third Generation Data Mining: Towards Service-Oriented Knowledge Discovery (SoKD-2010), Barcelona, Spain, 24 September 2010. [Google Scholar]
  39. Liao, C.; Lin, P.H.; Verma, G.; Vanderbruggen, T.; Emani, M.; Nan, Z.; Shen, X. HPC ontology: Towards a unified ontology for managing training datasets and AI models for high-performance computing. In Proceedings of the 2021 IEEE/ACM Workshop on Machine Learning in High Performance Computing Environments (MLHPC), St. Louis, MO, USA, 15 November 2021; pp. 69–80. [Google Scholar]
  40. Ison, J.; Kalaš, M.; Jonassen, I.; Bolser, D.; Uludag, M.; McWilliam, H.; Malone, J.; Lopez, R.; Pettifer, S.; Rice, P. EDAM: An ontology of bioinformatics operations, types of data and identifiers, topics and formats. Bioinformatics 2013, 29, 1325–1332. [Google Scholar] [CrossRef]
  41. vair Project Website [Online]. Available online: https://delaramglp.github.io/vair/ (accessed on 30 October 2025).
  42. Franklin, J.S.; Bhanot, K.; Ghalwash, M.; Bennett, K.P.; McCusker, J.; McGuinness, D.L. An ontology for fairness metrics. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, Oxford, UK, 19–21 May 2022; pp. 265–275. [Google Scholar]
  43. RAInS Ontology Documentation [Online]. Available online: https://rains-uoa.github.io/RAInS-Ontology/v2.0/index-en.html (accessed on 30 October 2025).
  44. Blagec, K.; Barbosa-Silva, A.; Ott, S.; Samwald, M. A curated, ontology-based, large-scale knowledge graph of artificial intelligence tasks and benchmarks. Sci. Data 2022, 9, 322. [Google Scholar] [CrossRef]
  45. Dasoulas, I.; Yang, D.; Dimou, A. MLSea: A semantic layer for discoverable machine learning. In Proceedings of the European Semantic Web Conference ESWC 2024, Hersonissos, Greece, 26–30 May 2024; Springer: Cham, Switzerland, 2024; Volume 14665, pp. 186–202. [Google Scholar]
  46. Esteves, D.; Moussallem, D.; Neto, C.B.; Soru, T.; Usbeck, R.; Ackermann, M.; Lehmann, J. MEX vocabulary: A lightweight interchange format for machine learning experiments. In Proceedings of the 11th International Conference on Semantic Systems, Vienna, Austria, 15–17 September 2015. [Google Scholar]
  47. Nguyen, A.; Weller, T.; Faerber, M.; Sure-Vetter, Y. Making neural networks FAIR. In Proceedings of the Semantic Web: ESWC 2020 Satellite Events, Heraklion, Greece, 31 May–4 June 2020; Springer: Cham, Switzerland, 2020; pp. 31–36. [Google Scholar]
  48. Artificial Intelligence Ontology (AIO) [Online]. BioPortal. Available online: https://bioportal.bioontology.org/ontologies/AIO (accessed on 30 October 2025).
  49. Rashid, S.; McGuinness, D. Creating and Using an Education Standards Ontology to Improve Education; Rensselaer Polytechnic Institute: Troy, NY, USA, 2018. [Google Scholar]
  50. Panov, P.; Džeroski, S.; Soldatova, L. OntoDM: An ontology of data mining. In Proceedings of the 2008 IEEE International Conference on Data Mining Workshops, Washington, DC, USA, 15–19 December 2008; pp. 752–760. [Google Scholar]
  51. Drobnjakovic, M.; Charoenwut, P.; Nikolov, A.; Oh, H.; Kulvatunyou, B. An introduction to machine learning lifecycle ontology and its applications. In IFIP Advances in Information and Communication Technology; Springer: Cham, Switzerland, 2024. [Google Scholar]
  52. Ekaputra, F.J.; Waltersdorfer, L.; Breit, A.; Sabou, M. Towards a standardized description of semantic web machine learning systems. In Proceedings of the SemAI 2022: First Workshop on Semantic AI, Co-Located with SEMANTiCS Conference 2022, Vienna, Austria, 13–15 September 2022. [Google Scholar]
  53. Publio, G.C.; Esteves, D.; Panov, P.; Soldatova, L.; Soru, T.; Vanschoren, J.; Zafar, H. ML schema: Exposing the semantics of machine learning with schemas and ontologies. In Proceedings of the ICML 2018 Workshop on Reproducibility in Machine Learning, Stockholm, Sweden, 15 July 2018; Available online: https://openreview.net/pdf?id=B1e8MrXVxQ (accessed on 30 October 2025).
  54. Dombeu, J.V.F. Ranking Semantic Web Ontologies with ELECTRE. In Proceedings of the 2019 International Conference on Advances in Big Data, Computing and Data Communication Systems (ICABCD), KwaZulu-Natal, South Africa, 5–6 August 2019; pp. 1–6. [Google Scholar]
  55. Shen, F.; Liang, C.; Yang, Z. Combined probabilistic linguistic term set and ELECTRE II method for solving a venture capital project evaluation problem. Econ. Res. Ekon. Istraživanja 2021, 35, 60–82. [Google Scholar] [CrossRef]
  56. Ye, J.; Chen, T.-Y. Development of an outranking-oriented multiple-criteria decision model within q-rung orthopair fuzzy contexts. Ain Shams Eng. J. 2025, 16, 103508. [Google Scholar] [CrossRef]
Figure 1. Venn diagram showing the 7 evaluation criteria.
Figure 2. Strong outranking graph with node in-degree values.
Figure 3. Weak outranking graph with node in-degree values.
Figure 4. A heatmap depicting the concept coverage per ontology.
Figure 5. A stacked bar graph depicting different ontology metrics for each ontology.
Figure 6. Ranks obtained from Lq-ROFS-ELECTRE II and traditional ELECTRE II.
Figure 7. Surface plot showing time complexity of Lq-ROFS-ELECTRE II for the general case.
Figure 8. Surface plot showing time complexity of Lq-ROFS-ELECTRE II with many alternatives.
Figure 9. Surface plot showing time complexity of Lq-ROFS-ELECTRE II with many criteria.
Figure 10. Surface plot showing time complexity of Lq-ROFS-ELECTRE II with many decision-makers.
Table 1. Machine learning ontologies in the dataset.
| i | Ontology | Class Count | Axioms | Thematic Area | Description |
|---|----------|-------------|--------|---------------|-------------|
| 1 | MLOnto [37] | 433 | 1877 | Machine Learning | Core ML concepts and workflows |
| 2 | Expose [38] | 135 | 364 | Data Mining | Tracks exposure events and context |
| 3 | HPC [39] | 373 | 1931 | High-Perf. Comp. | HPC resource and task modeling |
| 4 | EDAM [40] | 3511 | 36,519 | Bioinformatics | Bioinformatics operations and formats |
| 5 | VAIR [41] | 424 | 5084 | AI Risks | Models AI risk properties |
| 6 | FMO [42] | 244 | 1860 | Fairness in ML | Fairness in ML concepts and topologies |
| 7 | RAInS [43] | 155 | 1471 | AI Accountability | Responsible AI and computing |
| 8 | ITO [44] | 16,283 | 1,048,656 | General ML | Comprehensive IT process taxonomy |
| 9 | MLSO [45] | 36 | 403 | General ML | Light ML software pipeline ontology |
| 10 | Mex-Algo [46] | 157 | 492 | ML Algorithms | Metadata for ML algorithm components |
| 11 | Mex-Core [46] | 65 | 218 | Experiment Metadata | Core experiment metadata |
| 12 | Mex-Perf [46] | 13 | 188 | Performance Metrics | Performance indicators for experiments |
| 13 | FAIRnets [47] | 77 | 516 | Neural Networks | FAIR principles applied to Neural Nets |
| 14 | AIO [48] | 443 | 3529 | Deep Learning | Generic AI-related concepts |
| 15 | DSEO [49] | 132 | 1072 | Data Science | Data science education |
| 16 | OntoDM [50] | 473 | 2350 | Data Mining | Formalization of data mining tasks |
| 17 | MLLO [51] | 345 | 2884 | ML Lifecycle | Full ML model lifecycle ontology |
| 18 | SWeMLS [52] | 65 | 3995 | Semantic Web | Linking ML to semantic web resources |
| 19 | MLSchema [53] | 25 | 308 | General ML | Schema for describing ML elements |
Table 2. Evaluation given by expert 1.
| Ontology | C1 | C2 | C3 | C4 | C5 | C6 | C7 |
|----------|----|----|----|----|----|----|----|
| MLOnto | (s3, s3) | (s3, s3) | (s3, s3) | (s5, s1) | (s3, s3) | (s5, s1) | (s5, s1) |
| Expose | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) |
| HPC | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) |
| EDAM | (s5, s1) | (s5, s1) | (s5, s1) | (s5, s1) | (s5, s1) | (s5, s1) | (s5, s1) |
| VAIR | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) |
| FMO | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) |
| RAInS | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) |
| ITO | (s5, s1) | (s5, s1) | (s5, s1) | (s5, s1) | (s5, s1) | (s5, s1) | (s5, s1) |
| MLSO | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) |
| Mex-Algo | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) |
| Mex-Core | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) |
| Mex-Perf | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) |
| FAIRnets | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) |
| AIO | (s4, s2) | (s4, s2) | (s4, s2) | (s4, s2) | (s4, s2) | (s4, s2) | (s3, s3) |
| DSEO | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) |
| OntoDM | (s4, s2) | (s4, s2) | (s4, s2) | (s4, s2) | (s4, s2) | (s4, s2) | (s3, s3) |
| MLLO | (s4, s2) | (s4, s2) | (s4, s2) | (s4, s2) | (s4, s2) | (s4, s2) | (s3, s3) |
| SWeMLS | (s4, s2) | (s4, s2) | (s4, s2) | (s4, s2) | (s4, s2) | (s4, s2) | (s3, s3) |
| MLSchema | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) |
Table 3. Evaluation given by expert 2.
| Ontology | C1 | C2 | C3 | C4 | C5 | C6 | C7 |
|----------|----|----|----|----|----|----|----|
| MLOnto | (s4, s2) | (s4, s2) | (s4, s2) | (s5, s1) | (s4, s2) | (s4, s2) | (s3, s3) |
| Expose | (s2, s4) | (s2, s4) | (s2, s4) | (s2, s4) | (s2, s4) | (s2, s4) | (s2, s4) |
| HPC | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) |
| EDAM | (s5, s1) | (s5, s1) | (s5, s1) | (s5, s1) | (s5, s1) | (s5, s1) | (s4, s2) |
| VAIR | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) |
| FMO | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) |
| RAInS | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) |
| ITO | (s5, s1) | (s5, s1) | (s5, s1) | (s5, s1) | (s5, s1) | (s5, s1) | (s4, s2) |
| MLSO | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) |
| Mex-Algo | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) |
| Mex-Core | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) |
| Mex-Perf | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) |
| FAIRnets | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) |
| AIO | (s4, s2) | (s4, s2) | (s4, s2) | (s4, s2) | (s4, s2) | (s4, s2) | (s3, s3) |
| DSEO | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) |
| OntoDM | (s4, s2) | (s4, s2) | (s4, s2) | (s4, s2) | (s4, s2) | (s4, s2) | (s3, s3) |
| MLLO | (s4, s2) | (s4, s2) | (s4, s2) | (s4, s2) | (s4, s2) | (s4, s2) | (s3, s3) |
| SWeMLS | (s4, s2) | (s4, s2) | (s4, s2) | (s4, s2) | (s4, s2) | (s4, s2) | (s3, s3) |
| MLSchema | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) | (s3, s3) |
Table 4. Evaluation given by expert 3.
| Ontology | C1 | C2 | C3 | C4 | C5 | C6 | C7 |
|----------|----|----|----|----|----|----|----|
| MLOnto | (s4, s1) | (s3, s2) | (s4, s1) | (s4, s1) | (s3, s2) | (s3, s2) | (s2, s3) |
| Expose | (s2, s3) | (s2, s3) | (s2, s3) | (s3, s2) | (s2, s3) | (s2, s3) | (s1, s4) |
| HPC | (s4, s1) | (s3, s2) | (s3, s2) | (s3, s2) | (s3, s2) | (s3, s2) | (s2, s3) |
| EDAM | (s5, s0) | (s5, s0) | (s5, s0) | (s5, s1) | (s4, s1) | (s5, s0) | (s5, s0) |
| VAIR | (s3, s2) | (s3, s2) | (s3, s2) | (s4, s1) | (s4, s1) | (s3, s2) | (s3, s2) |
| FMO | (s4, s1) | (s4, s1) | (s3, s2) | (s4, s1) | (s4, s1) | (s4, s1) | (s2, s3) |
| RAInS | (s2, s3) | (s2, s3) | (s2, s3) | (s3, s2) | (s2, s3) | (s2, s3) | (s1, s4) |
| ITO | (s5, s0) | (s5, s0) | (s5, s0) | (s5, s0) | (s5, s0) | (s5, s0) | (s5, s0) |
| MLSO | (s3, s2) | (s3, s2) | (s3, s2) | (s3, s2) | (s3, s2) | (s2, s3) | (s2, s3) |
| Mex-Algo | (s3, s2) | (s3, s2) | (s3, s2) | (s3, s2) | (s3, s2) | (s2, s3) | (s2, s3) |
| Mex-Core | (s3, s2) | (s3, s2) | (s3, s2) | (s3, s2) | (s3, s2) | (s2, s3) | (s1, s4) |
| Mex-Perf | (s4, s1) | (s3, s2) | (s3, s2) | (s3, s2) | (s3, s2) | (s3, s2) | (s2, s3) |
| FAIRnets | (s4, s1) | (s3, s2) | (s3, s2) | (s3, s2) | (s3, s2) | (s3, s2) | (s3, s2) |
| AIO | (s3, s2) | (s3, s2) | (s3, s2) | (s4, s1) | (s3, s2) | (s3, s2) | (s2, s3) |
| DSEO | (s2, s3) | (s2, s3) | (s2, s3) | (s3, s2) | (s3, s2) | (s2, s3) | (s1, s4) |
| OntoDM | (s4, s1) | (s4, s1) | (s4, s1) | (s4, s1) | (s4, s1) | (s4, s1) | (s3, s2) |
| MLLO | (s3, s2) | (s3, s2) | (s3, s2) | (s3, s2) | (s3, s2) | (s3, s2) | (s2, s3) |
| SWeMLS | (s4, s1) | (s4, s1) | (s4, s1) | (s4, s1) | (s4, s1) | (s4, s1) | (s3, s2) |
| MLSchema | (s2, s3) | (s2, s3) | (s2, s3) | (s2, s3) | (s2, s3) | (s2, s3) | (s1, s4) |
Table 5. Evaluation given by expert 4.
| Ontology | C1 | C2 | C3 | C4 | C5 | C6 | C7 |
|----------|----|----|----|----|----|----|----|
| MLOnto | (s3, s2) | (s3, s2) | (s3, s2) | (s4, s1) | (s3, s2) | (s3, s2) | (s2, s3) |
| Expose | (s2, s3) | (s2, s3) | (s2, s3) | (s3, s2) | (s2, s3) | (s1, s4) | (s1, s4) |
| HPC | (s4, s1) | (s3, s2) | (s3, s2) | (s3, s2) | (s4, s1) | (s3, s2) | (s3, s2) |
| EDAM | (s4, s1) | (s5, s0) | (s5, s0) | (s5, s1) | (s4, s1) | (s5, s0) | (s4, s1) |
| VAIR | (s3, s2) | (s2, s3) | (s3, s2) | (s4, s1) | (s3, s2) | (s3, s2) | (s2, s3) |
| FMO | (s3, s2) | (s3, s2) | (s3, s2) | (s4, s1) | (s4, s1) | (s3, s2) | (s2, s3) |
| RAInS | (s2, s3) | (s3, s2) | (s2, s3) | (s3, s2) | (s2, s3) | (s1, s4) | (s1, s4) |
| ITO | (s5, s0) | (s5, s0) | (s5, s0) | (s5, s0) | (s5, s0) | (s5, s0) | (s4, s1) |
| MLSO | (s3, s2) | (s2, s3) | (s2, s3) | (s2, s3) | (s2, s3) | (s1, s4) | (s1, s4) |
| Mex-Algo | (s3, s2) | (s2, s3) | (s2, s3) | (s3, s2) | (s3, s2) | (s2, s3) | (s2, s3) |
| Mex-Core | (s3, s2) | (s2, s3) | (s2, s3) | (s2, s3) | (s2, s3) | (s1, s4) | (s1, s4) |
| Mex-Perf | (s4, s1) | (s1, s4) | (s1, s4) | (s1, s4) | (s1, s4) | (s1, s4) | (s1, s4) |
| FAIRnets | (s3, s2) | (s2, s3) | (s3, s2) | (s2, s3) | (s3, s2) | (s2, s3) | (s2, s3) |
| AIO | (s3, s2) | (s3, s2) | (s3, s2) | (s4, s1) | (s3, s2) | (s3, s2) | (s3, s2) |
| DSEO | (s2, s3) | (s1, s4) | (s1, s4) | (s3, s2) | (s2, s3) | (s2, s3) | (s1, s4) |
| OntoDM | (s4, s1) | (s4, s1) | (s4, s1) | (s4, s1) | (s4, s1) | (s4, s1) | (s3, s2) |
| MLLO | (s3, s2) | (s3, s2) | (s3, s2) | (s4, s1) | (s3, s2) | (s3, s2) | (s2, s3) |
| SWeMLS | (s4, s1) | (s4, s1) | (s4, s1) | (s3, s2) | (s4, s1) | (s3, s2) | (s3, s2) |
| MLSchema | (s2, s3) | (s2, s3) | (s1, s4) | (s1, s4) | (s2, s3) | (s1, s4) | (s1, s4) |
Table 6. Aggregated evaluation matrix.
| Ontology | C1 | C2 | C3 | C4 | C5 | C6 | C7 |
|----------|----|----|----|----|----|----|----|
| MLOnto | (s2.61, s0) | (s2.35, s1.19) | (s2.61, s0) | (s4, s0) | (s2.35, s1.19) | (s4, s0) | (s4, s0) |
| Expose | (s1.35, s2.21) | (s1.35, s2.21) | (s1.35, s2.21) | (s1.82, s1.57) | (s1.35, s2.21) | (s1.26, s2.45) | (s1.16, s2.71) |
| HPC | (s2.61, s0) | (s2, s1.41) | (s2, s1.41) | (s2, s1.41) | (s2.35, s0) | (s2, s1.41) | (s1.82, s1.68) |
| EDAM | (s4, s0) | (s4, s0) | (s4, s0) | (s4, s0) | (s4, s0) | (s4, s0) | (s4, s0) |
| VAIR | (s4, s0) | (s4, s0) | (s4, s0) | (s2.61, s0) | (s2.35, s0) | (s2, s1.41) | (s1.82, s1.68) |
| FMO | (s2.35, s0) | (s2.35, s0) | (s2, s1.41) | (s2.61, s0) | (s2.61, s0) | (s2.35, s0) | (s1.61, s2) |
| RAInS | (s1.61, s2) | (s1.82, s1.68) | (s1.61, s2) | (s2, s1.41) | (s1.61, s2) | (s1.54, s2.21) | (s1.46, s2.45) |
| ITO | (s4, s0) | (s4, s0) | (s4, s0) | (s4, s0) | (s4, s0) | (s4, s0) | (s4, s0) |
| MLSO | (s2.35, s1.19) | (s2.22, s1.41) | (s2.22, s1.41) | (s2.22, s1.41) | (s2.22, s1.41) | (s2.02, s1.86) | (s2.02, s1.86) |
| Mex-Algo | (s2, s1.41) | (s1.82, s1.68) | (s1.82, s1.68) | (s2, s1.41) | (s2, s1.41) | (s1.61, s2) | (s1.61, s2) |
| Mex-Core | (s1.82, s1.57) | (s1.61, s1.86) | (s1.61, s1.86) | (s1.61, s1.86) | (s1.61, s1.86) | (s1.26, s2.45) | (s1.16, s2.71) |
| Mex-Perf | (s2.51, s0) | (s1.54, s2.06) | (s1.54, s2.06) | (s1.54, s2.06) | (s1.54, s2.06) | (s1.54, s2.06) | (s1.26, s2.45) |
| FAIRnets | (s2.35, s0) | (s1.82, s1.68) | (s2, s1.41) | (s1.82, s1.68) | (s2, s1.41) | (s1.82, s1.68) | (s1.82, s1.68) |
| AIO | (s4, s0) | (s4, s0) | (s4, s0) | (s2.83, s0) | (s2.35, s1.19) | (s2.35, s1.19) | (s1.82, s1.68) |
| DSEO | (s1.35, s2.21) | (s1.26, s2.45) | (s1.26, s2.45) | (s1.82, s1.57) | (s1.61, s1.86) | (s1.35, s2.21) | (s1.16, s2.71) |
| OntoDM | (s3, s0) | (s3, s0) | (s3, s0) | (s3, s0) | (s3, s0) | (s3, s0) | (s2.35, s1.19) |
| MLLO | (s2.61, s1) | (s2.61, s1) | (s2.61, s1) | (s2.83, s0) | (s2.61, s1) | (s2.61, s1) | (s2.07, s1.68) |
| SWeMLS | (s4, s0) | (s4, s0) | (s4, s0) | (s2.83, s0) | (s3, s0) | (s2.83, s0) | (s2.35, s1.19) |
| MLSchema | (s1.35, s2.21) | (s1.35, s2.21) | (s1.26, s2.45) | (s1.26, s2.45) | (s1.35, s2.21) | (s1.26, s2.45) | (s1.16, s2.71) |
Table 7. Scores for aggregated decision matrix.
| Ontology | C1 | C2 | C3 | C4 | C5 | C6 | C7 |
|----------|----|----|----|----|----|----|----|
| MLOnto | 3.38 | 3.17 | 3.38 | 4.00 | 3.17 | 4.00 | 4.00 |
| Expose | 2.54 | 2.54 | 2.54 | 2.90 | 2.54 | 2.41 | 2.24 |
| HPC | 3.38 | 3.00 | 3.00 | 3.00 | 3.28 | 3.00 | 2.87 |
| EDAM | 4.00 | 4.00 | 4.00 | 4.00 | 4.00 | 4.00 | 4.00 |
| VAIR | 4.00 | 4.00 | 4.00 | 3.38 | 3.28 | 3.00 | 2.87 |
| FMO | 3.28 | 3.28 | 3.00 | 3.38 | 3.38 | 3.28 | 2.70 |
| RAInS | 2.70 | 2.87 | 2.70 | 3.00 | 2.70 | 2.60 | 2.46 |
| ITO | 4.00 | 4.00 | 4.00 | 4.00 | 4.00 | 4.00 | 4.00 |
| MLSO | 3.17 | 3.08 | 3.08 | 3.08 | 3.08 | 2.88 | 2.88 |
| Mex-Algo | 3.00 | 2.87 | 2.87 | 3.00 | 3.00 | 2.70 | 2.70 |
| Mex-Core | 2.90 | 2.75 | 2.75 | 2.75 | 2.75 | 2.41 | 2.24 |
| Mex-Perf | 3.34 | 2.66 | 2.66 | 2.66 | 2.66 | 2.66 | 2.41 |
| FAIRnets | 3.28 | 2.87 | 3.00 | 2.87 | 3.00 | 2.87 | 2.87 |
| AIO | 4.00 | 4.00 | 4.00 | 3.46 | 3.17 | 3.17 | 2.87 |
| DSEO | 2.54 | 2.41 | 2.41 | 2.90 | 2.75 | 2.54 | 2.24 |
| OntoDM | 3.54 | 3.54 | 3.54 | 3.54 | 3.54 | 3.54 | 3.17 |
| MLLO | 3.30 | 3.30 | 3.30 | 3.46 | 3.30 | 3.30 | 2.95 |
| SWeMLS | 4.00 | 4.00 | 4.00 | 3.46 | 3.54 | 3.46 | 3.17 |
| MLSchema | 2.54 | 2.54 | 2.41 | 2.41 | 2.54 | 2.41 | 2.24 |
Table 8. Accuracy for aggregated decision matrix.
| Ontology | C1 | C2 | C3 | C4 | C5 | C6 | C7 |
|----------|----|----|----|----|----|----|----|
| MLOnto | 2.61 | 2.63 | 2.61 | 4.00 | 2.63 | 4.00 | 4.00 |
| Expose | 2.59 | 2.59 | 2.59 | 2.40 | 2.59 | 2.76 | 2.95 |
| HPC | 2.61 | 2.45 | 2.45 | 2.45 | 2.35 | 2.45 | 2.48 |
| EDAM | 4.00 | 4.00 | 4.00 | 4.00 | 4.00 | 4.00 | 4.00 |
| VAIR | 4.00 | 4.00 | 4.00 | 2.61 | 2.35 | 2.45 | 2.48 |
| FMO | 2.35 | 2.35 | 2.45 | 2.61 | 2.61 | 2.35 | 2.57 |
| RAInS | 2.57 | 2.48 | 2.57 | 2.45 | 2.57 | 2.69 | 2.85 |
| ITO | 4.00 | 4.00 | 4.00 | 4.00 | 4.00 | 4.00 | 4.00 |
| MLSO | 2.63 | 2.63 | 2.63 | 2.63 | 2.63 | 2.75 | 2.75 |
| Mex-Algo | 2.45 | 2.48 | 2.48 | 2.45 | 2.45 | 2.57 | 2.57 |
| Mex-Core | 2.40 | 2.46 | 2.46 | 2.46 | 2.46 | 2.76 | 2.95 |
| Mex-Perf | 2.51 | 2.57 | 2.57 | 2.57 | 2.57 | 2.57 | 2.76 |
| FAIRnets | 2.35 | 2.48 | 2.45 | 2.48 | 2.45 | 2.48 | 2.48 |
| AIO | 4.00 | 4.00 | 4.00 | 2.83 | 2.63 | 2.63 | 2.48 |
| DSEO | 2.59 | 2.76 | 2.76 | 2.40 | 2.46 | 2.59 | 2.95 |
| OntoDM | 3.00 | 3.00 | 3.00 | 3.00 | 3.00 | 3.00 | 2.63 |
| MLLO | 2.80 | 2.80 | 2.80 | 2.83 | 2.80 | 2.80 | 2.67 |
| SWeMLS | 4.00 | 4.00 | 4.00 | 2.83 | 3.00 | 2.83 | 2.63 |
| MLSchema | 2.59 | 2.59 | 2.76 | 2.76 | 2.59 | 2.76 | 2.95 |
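The score (Table 7) and accuracy (Table 8) values follow directly from the aggregated pairs in Table 6. A minimal sketch, assuming the common q-rung orthopair score and accuracy functions with rung q = 2 on the linguistic scale s0, …, s4 — both parameter values are inferred from the tabulated numbers, not stated in this excerpt:

```python
# Score and accuracy of a linguistic q-rung orthopair pair (s_alpha, s_beta).
# TAU is the upper index of the linguistic scale and Q the rung; both values
# are assumptions reverse-checked against the tables, not quoted from the paper.
TAU, Q = 4, 2

def score(alpha: float, beta: float, tau: int = TAU, q: int = Q) -> float:
    # Higher membership raises the score; higher non-membership lowers it.
    return tau * ((tau**q + alpha**q - beta**q) / (2 * tau**q)) ** (1 / q)

def accuracy(alpha: float, beta: float, tau: int = TAU, q: int = Q) -> float:
    # Total commitment of the pair, ignoring its direction.
    return tau * ((alpha**q + beta**q) / tau**q) ** (1 / q)

# MLOnto under C2 aggregates to (s_2.35, s_1.19) in Table 6:
print(round(score(2.35, 1.19), 2), round(accuracy(2.35, 1.19), 2))  # 3.17 2.63
```

Under these assumptions the two functions reproduce the tabulated cells, e.g. HPC under C2, (s2, s1.41), yields a score of 3.00 and an accuracy of 2.45.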
Table 9. Global concordance matrix.
| | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.00 | 1.00 | 0.79 | 0.24 | 0.43 | 0.71 | 1.00 | 0.24 | 0.96 | 1.00 | 1.00 | 1.00 | 1.00 | 0.51 | 0.96 | 0.43 | 0.67 | 0.43 | 0.98 |
| 2 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.28 | 0.12 | 0.12 | 0.00 | 0.48 | 0.00 | 0.00 | 0.00 | 0.64 |
| 3 | 0.20 | 0.89 | 0.00 | 0.00 | 0.24 | 0.34 | 0.83 | 0.00 | 0.36 | 0.83 | 0.87 | 0.87 | 0.79 | 0.20 | 0.89 | 0.00 | 0.12 | 0.00 | 0.87 |
| 4 | 0.81 | 1.00 | 1.00 | 0.00 | 0.81 | 1.00 | 1.00 | 0.55 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.81 | 1.00 | 1.00 | 1.00 | 0.81 | 1.00 |
| 5 | 0.55 | 0.94 | 0.81 | 0.24 | 0.00 | 0.63 | 0.94 | 0.24 | 0.79 | 0.94 | 0.94 | 0.94 | 0.89 | 0.44 | 0.94 | 0.43 | 0.43 | 0.24 | 0.91 |
| 6 | 0.24 | 0.89 | 0.61 | 0.00 | 0.34 | 0.00 | 0.89 | 0.00 | 0.61 | 0.85 | 0.89 | 0.77 | 0.69 | 0.24 | 0.89 | 0.00 | 0.12 | 0.00 | 0.87 |
| 7 | 0.00 | 0.87 | 0.08 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.16 | 0.51 | 0.63 | 0.20 | 0.00 | 0.77 | 0.00 | 0.00 | 0.00 | 0.85 |
| 8 | 0.81 | 1.00 | 1.00 | 0.55 | 0.81 | 1.00 | 1.00 | 0.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.81 | 1.00 | 1.00 | 1.00 | 0.81 | 1.00 |
| 9 | 0.00 | 0.96 | 0.57 | 0.00 | 0.14 | 0.29 | 0.98 | 0.00 | 0.00 | 1.00 | 0.96 | 0.84 | 0.86 | 0.14 | 0.94 | 0.00 | 0.00 | 0.00 | 0.91 |
| 10 | 0.00 | 0.87 | 0.08 | 0.00 | 0.00 | 0.08 | 0.76 | 0.00 | 0.00 | 0.00 | 0.91 | 0.73 | 0.28 | 0.00 | 0.87 | 0.00 | 0.00 | 0.00 | 0.85 |
| 11 | 0.00 | 0.64 | 0.00 | 0.00 | 0.00 | 0.00 | 0.36 | 0.00 | 0.00 | 0.00 | 0.00 | 0.49 | 0.00 | 0.00 | 0.52 | 0.00 | 0.00 | 0.00 | 0.76 |
| 12 | 0.00 | 0.73 | 0.00 | 0.00 | 0.00 | 0.14 | 0.24 | 0.00 | 0.12 | 0.14 | 0.39 | 0.00 | 0.14 | 0.00 | 0.61 | 0.00 | 0.12 | 0.00 | 0.85 |
| 13 | 0.00 | 0.73 | 0.16 | 0.00 | 0.08 | 0.28 | 0.69 | 0.00 | 0.12 | 0.64 | 0.89 | 0.73 | 0.00 | 0.08 | 0.73 | 0.00 | 0.00 | 0.00 | 0.85 |
| 14 | 0.51 | 0.96 | 0.79 | 0.24 | 0.60 | 0.69 | 0.96 | 0.24 | 0.84 | 0.98 | 0.96 | 0.98 | 0.94 | 0.00 | 0.98 | 0.43 | 0.51 | 0.31 | 0.96 |
| 15 | 0.00 | 0.48 | 0.00 | 0.00 | 0.00 | 0.00 | 0.12 | 0.00 | 0.00 | 0.00 | 0.40 | 0.24 | 0.12 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.60 |
| 16 | 0.57 | 0.98 | 1.00 | 0.00 | 0.57 | 1.00 | 0.98 | 0.00 | 0.98 | 1.00 | 0.98 | 0.98 | 1.00 | 0.57 | 0.98 | 0.00 | 0.98 | 0.44 | 0.98 |
| 17 | 0.29 | 0.98 | 0.86 | 0.00 | 0.57 | 0.86 | 0.98 | 0.00 | 0.98 | 1.00 | 0.98 | 0.84 | 1.00 | 0.51 | 0.98 | 0.00 | 0.00 | 0.08 | 0.98 |
| 18 | 0.57 | 0.98 | 1.00 | 0.24 | 0.81 | 1.00 | 0.98 | 0.24 | 0.98 | 1.00 | 0.98 | 0.98 | 1.00 | 0.74 | 0.98 | 0.59 | 0.91 | 0.00 | 0.98 |
| 19 | 0.00 | 0.39 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.16 | 0.00 | 0.00 | 0.00 | 0.36 | 0.00 | 0.00 | 0.00 | 0.00 |
Table 10. Global discordance matrix.
| | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.00 | 0.00 | 0.01 | 0.14 | 0.12 | 0.01 | 0.00 | 0.14 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.12 | 0.00 | 0.04 | 0.01 | 0.14 | 0.00 |
| 2 | 0.14 | 0.00 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.00 |
| 3 | 0.14 | 0.00 | 0.00 | 0.14 | 0.14 | 0.14 | 0.00 | 0.14 | 0.14 | 0.00 | 0.00 | 0.00 | 0.00 | 0.14 | 0.00 | 0.14 | 0.14 | 0.14 | 0.00 |
| 4 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 5 | 0.14 | 0.00 | 0.00 | 0.14 | 0.00 | 0.02 | 0.00 | 0.14 | 0.02 | 0.00 | 0.00 | 0.00 | 0.00 | 0.14 | 0.00 | 0.08 | 0.04 | 0.14 | 0.00 |
| 6 | 0.14 | 0.00 | 0.10 | 0.14 | 0.14 | 0.00 | 0.00 | 0.14 | 0.10 | 0.00 | 0.00 | 0.05 | 0.04 | 0.14 | 0.00 | 0.14 | 0.14 | 0.14 | 0.00 |
| 7 | 0.14 | 0.00 | 0.14 | 0.14 | 0.14 | 0.14 | 0.00 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.02 | 0.14 | 0.14 | 0.14 | 0.00 |
| 8 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 9 | 0.14 | 0.00 | 0.13 | 0.14 | 0.14 | 0.14 | 0.00 | 0.14 | 0.00 | 0.00 | 0.00 | 0.07 | 0.08 | 0.14 | 0.00 | 0.14 | 0.14 | 0.14 | 0.00 |
| 10 | 0.14 | 0.00 | 0.14 | 0.14 | 0.14 | 0.14 | 0.00 | 0.14 | 0.14 | 0.00 | 0.00 | 0.14 | 0.14 | 0.14 | 0.00 | 0.14 | 0.14 | 0.14 | 0.00 |
| 11 | 0.14 | 0.06 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.00 | 0.14 | 0.14 | 0.14 | 0.06 | 0.14 | 0.14 | 0.14 | 0.00 |
| 12 | 0.14 | 0.05 | 0.14 | 0.14 | 0.14 | 0.14 | 0.08 | 0.14 | 0.14 | 0.12 | 0.03 | 0.00 | 0.14 | 0.14 | 0.05 | 0.14 | 0.14 | 0.14 | 0.00 |
| 13 | 0.14 | 0.01 | 0.14 | 0.14 | 0.14 | 0.14 | 0.03 | 0.14 | 0.14 | 0.06 | 0.00 | 0.07 | 0.00 | 0.14 | 0.01 | 0.14 | 0.14 | 0.14 | 0.00 |
| 14 | 0.14 | 0.00 | 0.01 | 0.14 | 0.08 | 0.01 | 0.00 | 0.14 | 0.02 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.06 | 0.02 | 0.14 | 0.00 |
| 15 | 0.14 | 0.11 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.00 | 0.14 | 0.14 | 0.14 | 0.05 |
| 16 | 0.14 | 0.00 | 0.00 | 0.14 | 0.14 | 0.00 | 0.00 | 0.14 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.14 | 0.00 | 0.00 | 0.00 | 0.14 | 0.00 |
| 17 | 0.14 | 0.00 | 0.02 | 0.14 | 0.14 | 0.03 | 0.00 | 0.14 | 0.00 | 0.00 | 0.00 | 0.04 | 0.00 | 0.14 | 0.00 | 0.14 | 0.00 | 0.14 | 0.00 |
| 18 | 0.14 | 0.00 | 0.00 | 0.14 | 0.00 | 0.00 | 0.00 | 0.14 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.02 | 0.00 | 0.00 | 0.00 |
| 19 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.00 |
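The global concordance and discordance matrices are built pairwise over all alternatives and criteria. A generic crisp sketch in the spirit of ELECTRE II follows; the actual Lq-ROFS-ELECTRE II algorithm computes these indices from linguistic q-rung orthopair comparisons with the study's own criterion weights, so the equal-style weights and toy score matrix below are purely hypothetical:

```python
import numpy as np

def concordance(scores: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """C[a, b]: total weight of the criteria on which alternative a
    performs at least as well as alternative b."""
    m = scores.shape[0]
    C = np.zeros((m, m))
    for a in range(m):
        for b in range(m):
            if a != b:
                C[a, b] = weights[scores[a] >= scores[b]].sum()
    return C

def discordance(scores: np.ndarray) -> np.ndarray:
    """D[a, b]: largest margin by which b beats a on any criterion,
    normalised by the overall score range."""
    m = scores.shape[0]
    span = scores.max() - scores.min()
    D = np.zeros((m, m))
    for a in range(m):
        for b in range(m):
            if a != b:
                D[a, b] = max(0.0, float((scores[b] - scores[a]).max())) / span
    return D

# Toy data: three alternatives, three criteria, hypothetical weights.
scores = np.array([[3.38, 3.17, 4.00],
                   [2.54, 2.54, 2.90],
                   [3.38, 3.00, 3.00]])
C = concordance(scores, np.array([0.3, 0.3, 0.4]))
print(round(float(C[0, 1]), 2))  # 1.0 -- row 0 is at least as good as row 1 everywhere
```

Outranking assertions (Table 11) are then obtained by testing each pair against concordance and discordance thresholds, with stricter thresholds for the strong relation than for the weak one.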
Table 11. Strong and weak outranking relations.
| | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | - | S^F | S^F | - | - | S^f | S^F | - | S^F | S^F | S^F | S^F | S^F | - | S^F | - | S^f | - | S^F |
| 2 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | S^f |
| 3 | - | S^F | - | - | - | - | S^F | - | - | S^F | S^F | S^F | S^F | - | S^F | - | - | - | S^F |
| 4 | S^F | S^F | S^F | - | S^F | S^F | S^F | - | S^F | S^F | S^F | S^F | S^F | S^F | S^F | S^F | S^F | S^F | S^F |
| 5 | - | S^F | S^F | - | - | S^f | S^F | - | S^F | S^F | S^F | S^F | S^F | - | S^F | - | - | - | S^F |
| 6 | - | S^F | S^f | - | - | - | S^F | - | S^f | S^F | S^F | S^F | S^f | - | S^F | - | - | - | S^F |
| 7 | - | S^F | - | - | - | - | - | - | - | - | - | - | - | - | S^F | - | - | - | S^F |
| 8 | S^F | S^F | S^F | - | S^F | S^F | S^F | - | S^F | S^F | S^F | S^F | S^F | S^F | S^F | S^F | S^F | S^F | S^F |
| 9 | - | S^F | - | - | - | - | S^F | - | - | S^F | S^F | S^f | S^f | - | S^F | - | - | - | S^F |
| 10 | - | S^F | - | - | - | - | S^F | - | - | - | S^F | - | - | - | S^F | - | - | - | S^F |
| 11 | - | S^f | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | S^F |
| 12 | - | S^f | - | - | - | - | - | - | - | - | - | - | - | - | S^f | - | - | - | S^F |
| 13 | - | S^f | - | - | - | - | S^f | - | - | S^f | S^F | S^f | - | - | S^f | - | - | - | S^F |
| 14 | - | S^F | S^F | - | S^f | S^f | S^F | - | S^F | S^F | S^F | S^F | S^F | - | S^F | - | - | - | S^F |
| 15 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | S^f |
| 16 | - | S^F | S^F | - | - | S^F | S^F | - | S^F | S^F | S^F | S^F | S^F | - | S^F | - | S^F | - | S^F |
| 17 | - | S^F | S^F | - | - | S^F | S^F | - | S^F | S^F | S^F | S^F | S^F | - | S^F | - | - | - | S^F |
| 18 | - | S^F | S^F | - | S^F | S^F | S^F | - | S^F | S^F | S^F | S^F | S^F | S^f | S^F | - | S^F | - | S^F |
| 19 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
Table 12. Rankings obtained by forward and backward orders, and their midpoint.
| Ontology | ψ1(x) | ψ2(x) | ψ3(x) | ψ̄(x) |
|----------|-------|-------|-------|-------|
| 1 | 2 | 9 | 3 | 2.5 |
| 2 | 10 | 2 | 10 | 10 |
| 3 | 6 | 6 | 6 | 6 |
| 4 | 1 | 11 | 1 | 1 |
| 5 | 4 | 8 | 4 | 4 |
| 6 | 5 | 7 | 5 | 5 |
| 7 | 9 | 3 | 9 | 9 |
| 8 | 1 | 11 | 1 | 1 |
| 9 | 6 | 6 | 6 | 6 |
| 10 | 8 | 4 | 8 | 8 |
| 11 | 9 | 3 | 9 | 9 |
| 12 | 8 | 3 | 9 | 8.5 |
| 13 | 7 | 5 | 7 | 7 |
| 14 | 3 | 9 | 3 | 3 |
| 15 | 10 | 2 | 10 | 10 |
| 16 | 2 | 9 | 3 | 2.5 |
| 17 | 3 | 8 | 4 | 3.5 |
| 18 | 2 | 10 | 2 | 2 |
| 19 | 11 | 1 | 11 | 11 |
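As the caption of Table 12 states, the final ordering combines the forward order ψ1 and the backward order ψ3 through their midpoint ψ̄. A minimal sketch using three rows of the table (ontologies 1, 12, and 17):

```python
# Final ordering step of ELECTRE II: average the forward rank (psi_1) and the
# backward rank (psi_3) to obtain the midpoint rank (psi_bar). The values
# below are copied from Table 12 for ontologies 1, 12, and 17.
forward  = {1: 2, 12: 8, 17: 3}   # psi_1(x)
backward = {1: 3, 12: 9, 17: 4}   # psi_3(x)

midpoint = {x: (forward[x] + backward[x]) / 2 for x in forward}
print(midpoint)  # {1: 2.5, 12: 8.5, 17: 3.5}
```

Sorting the alternatives by ψ̄ then yields the final ranking; half-integer midpoints arise exactly when the forward and backward orders disagree by one position.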
Table 13. Comparison of Lq-ROFS-ELECTRE II with other MCDM methods for ontology ranking.
| MCDM Method | Domain | No. of Ontologies | Quantitative Criteria | Qualitative Criteria |
|-------------|--------|-------------------|-----------------------|----------------------|
| **Traditional Methods** | | | | |
| ELECTRE I [54] | Biomedical | 70 | 8 | No |
| ELECTRE I/III [18] | Academia | 12 | 5 | No |
| ELECTRE I/II/III/IV [10] | Biomedical | 200 | 13 | No |
| WLCRT [17] | Biomedical | 100 | 8 | No |
| TOPSIS/WSM/WPM [9] | Biomedical | 70 | 8 | No |
| **Fuzzy Methods** | | | | |
| ZPLTS-ELECTRE II [32] | Mental Health | 9 | 5 | 5 |
| Lq-ROFS-ELECTRE II | Machine Learning | 19 | 0 | 7 |
Table 14. Comparison of proposed Lq-ROFS-ELECTRE II with ELECTRE II fuzzy variants based on key modeling capabilities.
| ELECTRE II Variant | Structure | Membership | Non-Membership | Indeterminacy | Linguistic Support | Threshold Adaptability |
|--------------------|-----------|------------|----------------|---------------|--------------------|------------------------|
| Traditional ELECTRE II [28] | Crisp | × | × | × | × | × |
| IFS-ELECTRE II [33] | IFS | ✓ | ✓ | ✓ | × | × |
| PF ELECTRE II [19] | PFS | ✓ | ✓ | ✓ | × | × |
| PLTS-ELECTRE II [55] | PLTS | ✓ | × | ✓ | ✓ | ✓ |
| ZPLTS-ELECTRE II [32] | Z-PLTS | ✓ | × | ✓ | ✓ | ✓ |
| q-ROFS-ELECTRE II [56] | q-ROFS | ✓ | ✓ | ✓ | × | ✓ |
| Lq-ROFS-ELECTRE II | Lq-ROFS | ✓ | ✓ | ✓ | ✓ | ✓ |
Table 15. Time complexity analysis of Lq-ROFS-ELECTRE II algorithm steps.
Table 15. Time complexity analysis of Lq-ROFS-ELECTRE II algorithm steps.
StepTime Complexity
1 O ( R × m × n )
2 O ( R × m × n )
3 O ( n )
4 O ( m 2 × n )
5 O ( m 2 × n )
6 O ( m 2 × n )
7 O ( 1 )
8 O ( m 2 × n )
9 O ( m 2 × n )
10 O ( m 2 )
11 O ( m 3 )
Overall O ( R m n + m 2 n + m 3 )
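The three terms of the overall bound can be compared directly at the case-study dimensions (R = 4 experts, m = 19 ontologies, n = 7 criteria). A quick sketch:

```python
# Evaluate the terms of the overall bound O(R*m*n + m^2*n + m^3) at the
# case-study scale: R = 4 experts, m = 19 ontologies, n = 7 criteria.
R, m, n = 4, 19, 7

aggregation = R * m * n   # expert evaluation aggregation work
pairwise = m**2 * n       # concordance/discordance over all ordered pairs
graph = m**3              # exploitation of the outranking graphs

print(aggregation, pairwise, graph)  # 532 2527 6859
```

Even at this modest scale the cubic graph-exploitation term dominates, and it grows fastest as the number of alternatives m increases.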
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
