Article

A Novel p-Norm-Based Ranking Algorithm for Multiple-Attribute Decision Making Using Interval-Valued Intuitionistic Fuzzy Sets and Its Applications

1
Department of Mathematics, Ch. Charan Singh University, Meerut 250004, Uttar Pradesh, India
2
Department of Mathematics and Statistics, College of Science, King Faisal University, P.O. Box 400, Al-Ahsa 31982, Saudi Arabia
*
Author to whom correspondence should be addressed.
Axioms 2025, 14(10), 722; https://doi.org/10.3390/axioms14100722
Submission received: 31 July 2025 / Revised: 4 September 2025 / Accepted: 19 September 2025 / Published: 24 September 2025
(This article belongs to the Special Issue Recent Advances in Fuzzy Theory Applications)

Abstract

The main focus of this paper is to introduce an algorithm that enhances the outcomes of multiple-attribute decision making by harnessing the adaptability of interval-valued intuitionistic fuzzy ( I V I F ) sets ( I V I F S s). This algorithm utilizes I V I F numbers ( I V I F N s) to represent attribute values and attribute weights, enabling the decision maker to account for the intricate nuances and uncertainties that are inherent in the decision-making process. We introduce a novel generalized score function ( G S F ) designed to overcome the limitations of previous functions. This function incorporates two parameters, denoted as γ 1 and γ 2 ( γ 1 + γ 2 = 1 ), with γ 1 ∈ ( 0 , 0.5 ) . The core concept of this algorithm centers around the computation of the p-distance for each alternative relative to the positive ideal alternative. The p-distance is derived from the p-norm associated with each alternative’s score matrix, providing the decision maker ( D M ) with a tool to rank the available alternatives. Various examples are given to demonstrate the practicality and effectiveness of the proposed algorithm. Additionally, we apply the algorithm to a real event-based multiple-attribute decision-making ( M A D M ) problem—the investment company problem—to identify the optimal alternatives through a comparative analysis.
Keywords:
MADM; IVIFS; IVIFN; GSF; p-norm
MSC:
03E72; 90B50

1. Introduction

With the continuous advancement of science and technology and the development of society, the advantages of fuzzy set theory [1] and its various extensions in managing complexity and uncertainty in research problems have become increasingly apparent. Nevertheless, D M s often encounter difficulties when attempting to express their opinions solely through membership degrees. This challenge arises due to the presence of indeterminacy in some decision-related information, which requires a more comprehensive means of accurately conveying their perspectives. To address this difficulty, Atanassov [2] introduced “intuitionistic fuzzy sets” ( I F S s), which offer a more flexible and robust framework for computing such information. This framework was further developed by Atanassov and Gargov [3] into an “ I V I F S ” theory. In this theory, an element’s representation incorporates interval numbers for three different degrees—membership, non-membership, and indeterminacy—rather than relying on crisp numbers. Consequently, this theory has found extensive practical applications in the development of decision-making techniques for solving “ M A D M ” problems, as indicated in references [4,5,6,7] and the literature cited therein. This underscores the notion that the I V I F S theory provides a more nuanced and advanced framework for handling decision-making scenarios characterized by high levels of uncertainty and imprecision.

1.1. Review of Existing Studies

The initial section of the literature review is dedicated to the examination of commonly employed score functions ( S F s) and accuracy functions ( A F s) [4,7,8,9,10,11,12,13,14,15] utilized for assessing the performance of interval-valued intuitionistic fuzzy numbers. Subsequently, the review delves into various modern approaches for multiple-attribute group decision making ( M A G D M ) within the context of I V I F N s. Garg [4] developed an entropy-based M A D M algorithm by introducing a generalized, improved score function for I V I F N s. Kumar and Kumar [6] designed an M A G D M algorithm by using a game model and a new order function for I V I F N s. Kumar and Chen [16] presented a new M A D M procedure in the present framework with the help of set pair analysis ( S P A ). Chen and Tsai [7] constructed a novel S F of I V I F values. Thereafter, they developed solution steps for M A D M problems by using the mean and the variance of the computed score matrix. Wang et al. [17] offered a new I V I F Jenson–Shannon divergence measure operator for constructing a novel M A D M algorithm. Ohlan [18] developed a new M A G D M method in the I V I F domain by utilizing the entropy measure. Senapati et al. [19] proposed an Aczel–Alsina-operation-based M A D M method, in which they created various I V I F Aczel–Alsina geometric operators. Shen et al. [20] used the concept of the partial connection number of S P A for decision making with I V I F N s. Patra [21] introduced a new M A D M technique, which is based on the probability density function of I V I F N s. This technique overcomes many shortcomings of the existing method [22]. Shi et al. [23] extended the power aggregation operators with the Aczel–Alsina t-norm and t-conorm for “interval-valued Atanassov–intuitionistic fuzzy sets” in order to propose an M A D M method. Zhong et al. [24] developed various operators for I V I F N s in the context of Dempster–Shafer theory to introduce a new decision-making method.
Chen and Hsu [25] developed a five-step weight-determining algorithm for handling I V I F M A D M problems based on a new S F and a nonlinear optimization model.

1.2. Motivation and a Brief Overview of the Present Research

After carefully reviewing the above literature, we have identified the following as the main issues in the M A D M theory:
(1)
Numerous M A D M methods ([7,16,26,27]; see Section 5 for more details) discussed in the existing literature exhibit several notable shortcomings. The shortcomings present in the methods [7,16,26,27] include (i) division by zero and (ii) disregard of the comparison rule in Definition 4. As a result, these methodologies fall short of delivering an equitable and rational ranking order for the diverse array of alternatives under consideration.
(2)
There is an unequivocal necessity for an efficient and appropriate score function tailored to I V I F N s, which can consistently and reliably manage comparable and incomparable I V I F decision data.
This makes it difficult to accurately order and prioritize these types of data. So, the present study has two main objectives. Firstly, we construct a new G S F for I V I F N s and establish its constructive properties. This new function has several advantages over previous ones, as it not only allows for reasonable comparisons of two or more I V I F N s, but also overcomes the limitations of past S F s. This function involves the unknown portion of the I V I F N s and two regulator parameters, γ 1 and γ 2 ( γ 1 + γ 2 = 1 ), with γ 1 ∈ ( 0 , 0.5 ) . Furthermore, we demonstrate the superiority of this function through various numerical examples. Secondly, an algorithm is developed for M A D M using I V I F S s. The essence of this algorithm is to determine the p-distance of each alternative from the positive ideal alternative. To identify the best alternative(s), the p-distance is calculated for each alternative score matrix (a row matrix that corresponds to an alternative in which the entries are score values of attribute values). The p-distances then provide a preference order ( P r O ) of the alternatives. The Python code for this algorithm is also given. Several examples are provided to illustrate the practicality and advantage of the proposed work. Finally, by applying the proposed algorithm, we find the most suitable investment company from a given set of companies.

1.3. Key Contributions and Organizing Structure

This paper presents the following noteworthy contributions:
(𝒞1)
We introduce a G S F for I V I F N s with robust properties. This function incorporates the concept of the unknown portion of an I V I F N and two flexible parameters, γ 1 and γ 2 , where γ 1 ∈ ( 0 , 0.5 ) and γ 1 + γ 2 = 1 . The advantage of these parameters is that varying their values can aid in ordering alternatives.
(𝒞2)
Several numerical examples are provided to demonstrate the usefulness, effectiveness, and credibility of the G S F that was formulated.
(𝒞3)
We develop an algorithm to provide a comprehensive solution to I V I F decision-making problems by incorporating the p-distance and the proposed G S F .
(𝒞4)
We showcase the effectiveness of the proposed algorithm by presenting multiple numerical examples. Additionally, our algorithm is used to identify the best investment company.
The remaining format of this paper is as follows:
  • Section 2: provides an overview of essential concepts concerning I V I F S , the p-norm, and commonly utilized S F s for the considered fuzzy numbers.
  • Section 3: presents the shortcomings of the previous S F s; constructs a G S F for I V I F N s with its properties; champions the significance of the G S F .
  • Section 4: models the present M A D M problem and constructs new results based on the p-distance; gives an algorithm for the present M A D M with its benefits and limitations; provides numerical examples.
  • Section 5: comparisons and advantages.
  • Section 6: solution of the investment company problem by the proposed algorithm; comparisons with the existing works and advantages.
  • Section 7: key outcomes of this research; management insights and computer software; future directions and limitations of the present research.

2. Preliminaries

In this section, we delve into the intricacies of I V I F S s and I V I F N s together with their associated operations. Furthermore, we discuss the various S F s and A F s that have been developed in this context. The abbreviations used throughout the paper are listed in Table 1.
Definition 1 
([2]). An I F S I on a fixed set X is given as
I = { x , ψ I ( x ) , σ I ( x ) ∣ x ∈ X } ,
where ψ I : X → [ 0 , 1 ] , σ I : X → [ 0 , 1 ] are, respectively, the functions of membership and non-membership with ψ I ( x ) + σ I ( x ) ≤ 1 . For each x ∈ X , the degree of hesitation π I ( x ) is defined by
π I ( x ) = 1 − ψ I ( x ) − σ I ( x ) .
Definition 2 
([3]). An I V I F S   I ˜ on a fixed set X is stated as
I ˜ = { x , ψ I ˜ ( x ) , σ I ˜ ( x ) ∣ x ∈ X } ,
where ψ I ˜ : X → ρ [ 0 , 1 ] , σ I ˜ : X → ρ [ 0 , 1 ] are, respectively, the functions of membership and non-membership with sup ψ I ˜ ( x ) + sup σ I ˜ ( x ) ≤ 1 . The set ρ [ 0 , 1 ] is the set of all closed and bounded subintervals of [ 0 , 1 ] . Thus, ψ I ˜ ( x ) and σ I ˜ ( x ) are represented by interval numbers for each x ∈ X .
The elements of the set I ˜ are called I V I F N s. For ease of use, here, an I V I F N is denoted by b ˙ = [ ε , ϑ ] , [ ϱ , φ ] , where [ ε , ϑ ] ∈ ρ [ 0 , 1 ] , [ ϱ , φ ] ∈ ρ [ 0 , 1 ] and ϑ + φ ≤ 1 .
The interval of hesitation ( π b ˙ ) relative to an I V I F N b ˙ = [ ε , ϑ ] , [ ϱ , φ ] is given as
π b ˙ = [ 1 − ϑ − φ , 1 − ε − ϱ ] .
Remark 1. 
The following cases are obtained from Definition 2.
(i)
If ε = ϑ and ϱ = φ in b ˙ , then I V I F N b ˙ reduces to an intuitionistic fuzzy number ( I F N ).
(ii)
If ε = ϑ and ϱ = φ = 1 − ε , then I V I F N b ˙ reduces to a fuzzy number ( F N ).
(iii)
If ϱ = 1 − ϑ and φ = 1 − ε in b ˙ , then I V I F N   b ˙ reduces to an interval-valued fuzzy number ( I V F N ).
Definition 3 
([3]). For b ˙ 1 = [ ε 1 , ϑ 1 ] , [ ϱ 1 , φ 1 ] and b ˙ 2 = [ ε 2 , ϑ 2 ] , [ ϱ 2 , φ 2 ] , the following arithmetic operations are given as
(i)
b ˙ 1 ⊕ b ˙ 2 = [ ε 1 + ε 2 − ε 1 ε 2 , ϑ 1 + ϑ 2 − ϑ 1 ϑ 2 ] , [ ϱ 1 ϱ 2 , φ 1 φ 2 ] ,
(ii)
b ˙ 1 ⊗ b ˙ 2 = [ ε 1 ε 2 , ϑ 1 ϑ 2 ] , [ ϱ 1 + ϱ 2 − ϱ 1 ϱ 2 , φ 1 + φ 2 − φ 1 φ 2 ] ,
(iii)
γ b ˙ = [ 1 − ( 1 − ε )^γ , 1 − ( 1 − ϑ )^γ ] , [ ϱ^γ , φ^γ ] ; γ > 0 ,
(iv)
b ˙^γ = [ ε^γ , ϑ^γ ] , [ 1 − ( 1 − ϱ )^γ , 1 − ( 1 − φ )^γ ] ; γ > 0 ,
(v)
c ( b ˙ ) = [ ϱ , φ ] , [ ε , ϑ ] , where c is the complement operator.
Definition 4 
([8]). For b ˙ 1 = [ ε 1 , ϑ 1 ] , [ ϱ 1 , φ 1 ] and b ˙ 2 = [ ε 2 , ϑ 2 ] , [ ϱ 2 , φ 2 ] , b ˙ 1 ≻ b ˙ 2 if the relations ε 1 > ε 2 , ϑ 1 > ϑ 2 and ϱ 1 < ϱ 2 , φ 1 < φ 2 hold. Here, the symbol “≻” represents the relation “prefers to”.
Definition 5 
([28]). Let R n be the n-dimensional Euclidean space. Then, for any x = ( x 1 , x 2 , … , x n ) ∈ R n , the p-norm of x, denoted by ‖ x ‖ p , is defined as
‖ x ‖ p = ( ∑ j = 1 n | x j |^p )^( 1 / p ) ; p ≥ 1 .
By using the p-norm, the p-distance between x and y, denoted by δ p ( x , y ) , is obtained as
δ p ( x , y ) = ‖ x − y ‖ p = ( ∑ j = 1 n | x j − y j |^p )^( 1 / p ) ; p ≥ 1 ,
where y = ( y 1 , y 2 , … , y n ) ∈ R n .
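Definition 5 can be sketched in a few lines of Python; the function names are ours, and the sketch assumes plain sequences of floats.

```python
# Minimal sketch of Definition 5: the p-norm and the induced
# p-distance on R^n (pure Python, no external dependencies).

def p_norm(x, p):
    """||x||_p = (sum_j |x_j|^p)^(1/p), defined for p >= 1."""
    if p < 1:
        raise ValueError("the p-norm requires p >= 1")
    return sum(abs(xj) ** p for xj in x) ** (1.0 / p)

def p_distance(x, y, p):
    """delta_p(x, y) = ||x - y||_p."""
    return p_norm([xj - yj for xj, yj in zip(x, y)], p)
```

For p = 1 this is the Manhattan distance and for p = 2 the Euclidean distance; larger p puts more weight on the single worst coordinate.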
Definition 6. 
For an I V I F N   b ˙ = [ ε , ϑ ] , [ ϱ , φ ] , the mathematical expressions for certain existing score and accuracy functions are given as follows:
  • The score and accuracy functions, proposed by Xu [8], are
    S 1 ( b ˙ ) = ( ε − ϱ + ϑ − φ ) / 2 ; S 1 ( b ˙ ) ∈ [ − 1 , 1 ] ,
    and
    A 1 ( b ˙ ) = ( ε + ϱ + ϑ + φ ) / 2 ; A 1 ( b ˙ ) ∈ [ 0 , 1 ] .
  • The accuracy function, proposed by Ye [9], is
    A 2 ( b ˙ ) = [ ε − ( 1 − ε − ϱ ) + ϑ − ( 1 − ϑ − φ ) ] / 2 ; A 2 ( b ˙ ) ∈ [ − 1 , 1 ] .
  • The accuracy function, proposed by Nayagam et al. [10], is
    A 3 ( b ˙ ) = [ ε + ϑ − φ ( 1 − ϑ ) − ϱ ( 1 − ε ) ] / 2 ; A 3 ( b ˙ ) ∈ [ − 1 , 1 ] .
  • The score function, proposed by Bai [11], is
    S 2 ( b ˙ ) = [ ε + ε ( 1 − ε − ϱ ) + ϑ + ϑ ( 1 − ϑ − φ ) ] / 2 ; S 2 ( b ˙ ) ∈ [ 0 , 1 ] .
  • The score function, proposed by Garg [4], is
    S 3 ( b ˙ ) = ( ε + ϑ ) / 2 + k 1 ε ( 1 − ε − ϱ ) + k 2 ϑ ( 1 − ϑ − φ ) ; S 3 ( b ˙ ) ∈ [ 0 , 1 ] ,
    where k 1 + k 2 = 1 ; k 1 , k 2 ≥ 0 .
  • The score function, proposed by Nayagam et al. [12], is
    S 4 ( b ˙ ) = ( ε + ϑ + ϱ − φ + ε ϑ + ϱ φ ) / 3 ; S 4 ( b ˙ ) ∈ [ − 1 , 1 ] .
  • The score function, proposed by Selvaraj and Majumdar [13], is
    S 5 ( b ˙ ) = ( ε − ϑ + ϱ + φ + ε ϑ + ϱ φ ) / 3 ; S 5 ( b ˙ ) ∈ [ − 1 , 1 ] .
  • The score function proposed by Chen and Tsai [14] is
    S 6 ( b ˙ ) = [ ε + ϑ + ( 1 − ϱ ) + ( 1 − φ ) ] / 2 ; S 6 ( b ˙ ) ∈ [ 0 , 2 ] .
  • The score function, proposed by Chen and Yu [15], is
    S 7 ( b ˙ ) = ε + ϑ + ( 1 − ϱ ) + ( 1 − φ ) + ( ε × 3 / 5 + ϑ × 2 / 5 ) × ( 1 − ( ϱ × 3 / 5 + φ × 2 / 5 ) ) + ( ( ε + 1 − ϱ ) / 2 − ε × ( 1 − ϱ ) ) + ( ( ϑ + 1 − φ ) / 2 − ϑ × ( 1 − φ ) ) + 1 ; S 7 ( b ˙ ) ∈ [ 1 , 8 ] .
A review of these existing ranking functions shows that D M s require a new score function for I V I F N s, one that not only ranks two or more considered fuzzy numbers effectively but also overcomes the limitations of the existing ones. The formulation of this new score function is presented in the next section.

3. A Novel SF for IVIFNs with Robust Properties: A Generalized Approach

This section is partitioned into three subsections, each focusing on a different aspect related to comparing I V I F N s. In the first subsection, we highlight some of the limitations of existing S F s and A F s commonly used for this purpose. Moving on to the second subsection, we introduce a novel G S F that overcomes these limitations and provides an effective method for comparing two or more I V I F N s. This function possesses certain properties that make it most desirable for this type of analysis. In the third subsection, we demonstrate the primacy of the new G S F by showcasing its many advantages over the existing ones.

3.1. Shortcomings of Many Existing Score and Accuracy Functions

The aim of this subsection is to present the limitations of the existing score and accuracy functions, as defined in Definition 6, through the following numerical examples. These examples show that they often fail to effectively distinguish or compare two or more I V I F N s. This underscores the necessity of developing more reliable measures to enhance decision-making accuracy under the I V I F environment.
Example 1.  
Consider two I V I F N s b ˙ 1 = [ 0 , 0.50 ] , [ 0.10 , 0.20 ] and b ˙ 2 = [ 0.20 , 0.30 ] , [ 0.15 , 0.15 ] . The resultant S F values are given as
  • The functions S 1 and A 1 give the values S 1 ( b ˙ 1 ) = S 1 ( b ˙ 2 ) = 0.10 and A 1 ( b ˙ 1 ) = A 1 ( b ˙ 2 ) = 0.40 .
  • The function A 2 produces A 2 ( b ˙ 1 ) = A 2 ( b ˙ 2 ) = − 0.35 .
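The ties in Example 1 can be verified numerically. Below is a quick sketch using Xu's S 1 and A 1 from Definition 6 (the function names and the tuple representation of an IVIFN are ours):

```python
# Verifying the ties of Example 1 for Xu's score and accuracy
# functions. An IVIFN <[e, t], [r, f]> is a nested tuple.

def s1(b):
    """Xu's score S1 = (e - r + t - f) / 2."""
    (e, t), (r, f) = b
    return (e - r + t - f) / 2

def a1(b):
    """Xu's accuracy A1 = (e + r + t + f) / 2."""
    (e, t), (r, f) = b
    return (e + r + t + f) / 2

b1 = ((0.0, 0.50), (0.10, 0.20))
b2 = ((0.20, 0.30), (0.15, 0.15))
# Both functions tie on this pair, so neither can rank b1 against b2.
```

Since S 1 and A 1 agree on both numbers, the pair ( S 1 , A 1 ) supplies no preference order here, which is exactly the shortcoming the example illustrates.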
Example 2.  
Consider the I F S information in terms of two I V I F N s, b ˙ 1 = [ 0 , 0.50 ] , [ 0 , 0.30 ] and b ˙ 2 = [ 0 , 0.40 ] , [ 0.10 , 0.10 ] .
  • The function S 1 gives the values S 1 ( b ˙ 1 ) = S 1 ( b ˙ 2 ) = 0.10 .
  • The computed S 2 values are S 2 ( b ˙ 1 ) = S 2 ( b ˙ 2 ) = 0.30 .
Example 3.  
Suppose that two I V I F decision information sets are given as b ˙ 1 = [ 0 , 0.60 ] , [ 0 , 0 ] and b ˙ 2 = [ 0 , 0.70 ] , [ 0 , 0.10 ] .
  • The function S 1 gives S 1 ( b ˙ 1 ) = S 1 ( b ˙ 2 ) = 0.30 .
  • The function S 2 presents S 2 ( b ˙ 1 ) = S 2 ( b ˙ 2 ) = 0.42 .
  • The function S 5 provides S 5 ( b ˙ 1 ) = S 5 ( b ˙ 2 ) = − 0.20 .
Example 4.  
Consider two I V I F N s b ˙ 1 = [ 0 , 0.50 ] , [ 0.10 , 0.40 ] and b ˙ 2 = [ 0.20 , 0.30 ] , [ 0.20 , 0.20 ] . For b ˙ 1 and b ˙ 2 , the computed values of function A 3 are A 3 ( b ˙ 1 ) = A 3 ( b ˙ 2 ) = 0.10 .
Example 5.  
Consider the I V I F N s b ˙ 1 = [ 0 , 0 ] , [ 0.20 , 0.50 ] and b ˙ 2 = [ 0 , 0 ] , [ 0.30 , 0.40 ] . Different A F s and S F s are applied to these numbers to obtain various values.
  • The S 1 and A 1 functions result in S 1 ( b ˙ 1 ) = S 1 ( b ˙ 2 ) = − 0.35 and A 1 ( b ˙ 1 ) = A 1 ( b ˙ 2 ) = 0.35 , respectively.
  • The function A 2 produces A 2 ( b ˙ 1 ) = A 2 ( b ˙ 2 ) = − 0.65 .
  • The function A 3 yields A 3 ( b ˙ 1 ) = A 3 ( b ˙ 2 ) = − 0.35 .
  • The function S 2 gives S 2 ( b ˙ 1 ) = S 2 ( b ˙ 2 ) = 0 . The function S 3 yields S 3 ( b ˙ 1 ) = S 3 ( b ˙ 2 ) = 0 .
  • The function S 6 produces S 6 ( b ˙ 1 ) = S 6 ( b ˙ 2 ) = 0.65 .
Example 6.  
Consider two alternatives under the I V I F S domain— b ˙ 1 = [ 0.10 , 0.40 ] , [ 0.10 , 0.20 ] and b ˙ 2 = [ 0.20 , 0.25 ] , [ 0.20 , 0.30 ] . The function S 4 values are S 4 ( b ˙ 1 ) = S 4 ( b ˙ 2 ) = 0.1533 .
Example 7.  
Let b ˙ 1 = [ 0 , 0.04 ] , [ 0 , 0.75 ] and b ˙ 2 = [ 0 , 0.09 ] , [ 0 , 0.84 ] be two numbers in the I V I F context. The function S 6 values are S 6 ( b ˙ 1 ) = S 6 ( b ˙ 2 ) = 0.85 .
Example 8.  
Consider two I V I F N s b ˙ 1 = [ 0 , 0 ] , [ 0 , 1 ] and b ˙ 2 = [ 0 , 0 ] , [ 0.5101 , 0.7123 ] . The function S 7 values are S 7 ( b ˙ 1 ) = S 7 ( b ˙ 2 ) = 2.50 .
Example 9.  
In a decision process, two I V I F S alternatives are given as b ˙ 1 = [ 0 , 1 ] , [ 0 , 0 ] and b ˙ 2 = [ 0.5 , 0.5 ] , [ 0 , 0 ] .
  • The functions S 1 and A 1 yield S 1 ( b ˙ 1 ) = S 1 ( b ˙ 2 ) = 0.50 and A 1 ( b ˙ 1 ) = A 1 ( b ˙ 2 ) = 0.50 , respectively.
  • The function A 2 yields A 2 ( b ˙ 1 ) = A 2 ( b ˙ 2 ) = 0 .
  • The function A 3 gives A 3 ( b ˙ 1 ) = A 3 ( b ˙ 2 ) = 0.50 .
  • The function S 6 produces S 6 ( b ˙ 1 ) = S 6 ( b ˙ 2 ) = 2.00 .
Hence, the above examples indicate that the existing S F s are sometimes quite unable to provide a ranking order between b ˙ 1 and b ˙ 2 .

3.2. Novel G S F for I V I F N s

Keeping in view the above irregularities in the existing score functions, the D M requires a robust score function for measuring I V I F N s, one that works smoothly and effectively.
Definition 7.  
Suppose b ˙ = [ ε , ϑ ] , [ ϱ , φ ] is an I V I F N . Then, by using the unknown degree, a new G S F is defined by
V ( b ˙ ) = [ ε^γ1 ( 1 − ϱ ) + ϑ^γ1 ( 1 − φ ) − ϱ^γ2 ( 1 − ε ) − φ^γ2 ( 1 − ϑ ) ] / 2 , (1)
where γ 1 + γ 2 = 1 and γ 1 ∈ ( 0 , 0.5 ) .
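A direct Python sketch of Definition 7 follows; the function name, the tuple representation, and the default γ 1 = 0.4 are our illustrative choices, not the paper's.

```python
# Sketch of the proposed GSF V (Definition 7) for an IVIFN
# <[e, t], [r, f]>, with gamma1 in (0, 0.5) and gamma2 = 1 - gamma1.

def gsf(b, gamma1=0.4):
    if not 0 < gamma1 < 0.5:
        raise ValueError("gamma1 must lie in (0, 0.5)")
    gamma2 = 1.0 - gamma1
    (e, t), (r, f) = b
    return (e ** gamma1 * (1 - r) + t ** gamma1 * (1 - f)
            - r ** gamma2 * (1 - e) - f ** gamma2 * (1 - t)) / 2
```

On the pair from Example 1, which Xu's S 1 and A 1 could not separate, this function does produce a strict preference (e.g., at γ 1 = 0.4 it scores [ 0.20 , 0.30 ] , [ 0.15 , 0.15 ] above [ 0 , 0.50 ] , [ 0.10 , 0.20 ] ).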
Definition 8.  
For I V I F N s b ˙ 1 and b ˙ 2 , b ˙ 1 ≻ b ˙ 2 if V ( b ˙ 1 ) > V ( b ˙ 2 ) for all γ 1 ∈ ( 0 , 0.5 ) .
Proposition 1.  
If an I V I F N b ˙ = [ ε , ϑ ] , [ ϱ , φ ] reduces to an I F N , i.e., ε = ϑ = ψ and ϱ = φ = σ , then V ( b ˙ ) = ψ^γ1 ( 1 − σ ) − σ^γ2 ( 1 − ψ ) .
Proposition 2.  
If an I V I F N b ˙ = [ ε , ϑ ] , [ ϱ , φ ] reduces to an F N , i.e., ε = ϑ = ψ and ϱ = φ = 1 − ψ , then V ( b ˙ ) = ψ^( γ1 + 1 ) − ( 1 − ψ )^( γ2 + 1 ) .
Proposition 3.  
If an I V I F N b ˙ = [ ε , ϑ ] , [ ϱ , φ ] reduces to an I V F N , i.e., ϱ = 1 − ϑ and φ = 1 − ε , then V ( b ˙ ) = [ ε^γ1 ϑ + ϑ^γ1 ε − ( 1 − ϑ )^γ2 ( 1 − ε ) − ( 1 − ε )^γ2 ( 1 − ϑ ) ] / 2 .
Proposition 4.  
If b ˙ = [ 1 , 1 ] , [ 0 , 0 ] is the largest I V I F N , then V ( b ˙ ) = 1 .
Proposition 5.  
For the smallest I V I F N b ˙ = [ 0 , 0 ] , [ 1 , 1 ] , the value of V ( b ˙ ) is − 1 .
Theorem 1.  
Let b ˙ 1 = [ ε 1 , ϑ 1 ] , [ ϱ 1 , φ 1 ] and b ˙ 2 = [ ε 2 , ϑ 2 ] , [ ϱ 2 , φ 2 ] be two I V I F N s. If b ˙ 2 ≻ b ˙ 1 , i.e., ε 1 < ε 2 , ϑ 1 < ϑ 2 , ϱ 1 > ϱ 2 and φ 1 > φ 2 , then V ( b ˙ 1 ) < V ( b ˙ 2 ) .
Proof. 
For the given b ˙ 1 and b ˙ 2 , Formula (1) takes the following form:
V ( b ˙ 1 ) = [ ε 1^γ1 ( 1 − ϱ 1 ) + ϑ 1^γ1 ( 1 − φ 1 ) − ϱ 1^γ2 ( 1 − ε 1 ) − φ 1^γ2 ( 1 − ϑ 1 ) ] / 2 ,
and
V ( b ˙ 2 ) = [ ε 2^γ1 ( 1 − ϱ 2 ) + ϑ 2^γ1 ( 1 − φ 2 ) − ϱ 2^γ2 ( 1 − ε 2 ) − φ 2^γ2 ( 1 − ϑ 2 ) ] / 2 ,
respectively. Now, we calculate the following difference:
V ( b ˙ 1 ) − V ( b ˙ 2 ) = [ ε 1^γ1 ( 1 − ϱ 1 ) + ϑ 1^γ1 ( 1 − φ 1 ) − ϱ 1^γ2 ( 1 − ε 1 ) − φ 1^γ2 ( 1 − ϑ 1 ) ] / 2 − [ ε 2^γ1 ( 1 − ϱ 2 ) + ϑ 2^γ1 ( 1 − φ 2 ) − ϱ 2^γ2 ( 1 − ε 2 ) − φ 2^γ2 ( 1 − ϑ 2 ) ] / 2 . (2)
The given inequalities ε 1 < ε 2 and ϑ 1 < ϑ 2 yield the results ε 1^γ1 < ε 2^γ1 and ϑ 1^γ1 < ϑ 2^γ1 . Again, the inequalities ϱ 1 > ϱ 2 and φ 1 > φ 2 yield the results 1 − ϱ 1 < 1 − ϱ 2 and 1 − φ 1 < 1 − φ 2 . Thus, we get
ε 1^γ1 ( 1 − ϱ 1 ) < ε 2^γ1 ( 1 − ϱ 2 ) . (3)
ϑ 1^γ1 ( 1 − φ 1 ) < ϑ 2^γ1 ( 1 − φ 2 ) . (4)
Adding (3) and (4), we have
ε 1^γ1 ( 1 − ϱ 1 ) + ϑ 1^γ1 ( 1 − φ 1 ) < ε 2^γ1 ( 1 − ϱ 2 ) + ϑ 2^γ1 ( 1 − φ 2 ) . (5)
Similarly, we can derive the following inequalities:
− ϱ 1^γ2 ( 1 − ε 1 ) < − ϱ 2^γ2 ( 1 − ε 2 ) .
− φ 1^γ2 ( 1 − ϑ 1 ) < − φ 2^γ2 ( 1 − ϑ 2 ) .
These inequalities give
− ϱ 1^γ2 ( 1 − ε 1 ) − φ 1^γ2 ( 1 − ϑ 1 ) < − ϱ 2^γ2 ( 1 − ε 2 ) − φ 2^γ2 ( 1 − ϑ 2 ) . (6)
Utilizing (5) and (6), the difference (2) becomes
V ( b ˙ 1 ) − V ( b ˙ 2 ) < 0 .
   □
Theorem 2.  
For an arbitrary I V I F N b ˙ = [ ε , ϑ ] , [ ϱ , φ ] , the proposed G S F V ( b ˙ ) increases monotonically with ε and ϑ, and decreases monotonically with ϱ and φ.
Proof. 
Let b ˙ 1 = [ ε 1 , ϑ 1 ] , [ ϱ 1 , φ 1 ] and b ˙ 2 = [ ε 2 , ϑ 2 ] , [ ϱ 2 , φ 2 ] be two I V I F N s, where
ε 1 ≤ ε 2 , ϑ 1 ≤ ϑ 2 and ϱ 1 = φ 1 = ϱ 2 = φ 2 = ϱ .
Assume that
V ( b ˙ 1 ) = [ ( 1 − ϱ ) ( ε 1^γ1 + ϑ 1^γ1 ) − ϱ^γ2 ( 2 − ε 1 − ϑ 1 ) ] / 2 = f ( ε 1 , ϑ 1 ) ,
and
V ( b ˙ 2 ) = [ ( 1 − ϱ ) ( ε 2^γ1 + ϑ 2^γ1 ) − ϱ^γ2 ( 2 − ε 2 − ϑ 2 ) ] / 2 = f ( ε 2 , ϑ 2 ) .
The inequalities ε 1 ≤ ε 2 and ϑ 1 ≤ ϑ 2 with 1 − ϱ ≥ 0 yield the following results:
( 1 − ϱ ) ( ε 1^γ1 + ϑ 1^γ1 ) ≤ ( 1 − ϱ ) ( ε 2^γ1 + ϑ 2^γ1 ) . (7)
− ϱ^γ2 ( 2 − ε 1 − ϑ 1 ) ≤ − ϱ^γ2 ( 2 − ε 2 − ϑ 2 ) . (8)
Using (7) and (8), we get f ( ε 1 , ϑ 1 ) ≤ f ( ε 2 , ϑ 2 ) whenever ε 1 ≤ ε 2 and ϑ 1 ≤ ϑ 2 . This proves that the G S F V ( b ˙ ) is a monotonically increasing function with respect to ε and ϑ .
In a similar manner, it can be proved that the G S F V ( b ˙ ) is a monotonically decreasing function with respect to ϱ and φ .    □
Theorem 3.  
Let b ˙ = [ ε , ϑ ] , [ ϱ , φ ] be an arbitrary I V I F N ; then − 1 ≤ V ( b ˙ ) ≤ 1 .
Proof. 
From the expression (1),
V ( b ˙ ) = [ ε^γ1 ( 1 − ϱ ) + ϑ^γ1 ( 1 − φ ) − ϱ^γ2 ( 1 − ε ) − φ^γ2 ( 1 − ϑ ) ] / 2 .
By Theorem 2, for a given value of ϱ and φ , V ( b ˙ ) is a monotonically increasing function with respect to ε and ϑ .
Consequently, for 0 ≤ ε ≤ ϑ ≤ 1 , we get
V ( [ 0 , 0 ] , [ ϱ , φ ] ) ≤ V ( b ˙ ) ≤ V ( [ 1 , 1 ] , [ ϱ , φ ] ) . (9)
By Theorem 2, for a given value of ε and ϑ , V ( b ˙ ) is a monotonically decreasing function with respect to ϱ and φ .
Thus, we get
V ( [ 1 , 1 ] , [ ϱ , φ ] ) ≤ V ( [ 1 , 1 ] , [ 0 , 0 ] ) ; if 0 ≤ ϱ ≤ φ , (10)
and
V ( [ 0 , 0 ] , [ 1 , 1 ] ) ≤ V ( [ 0 , 0 ] , [ ϱ , φ ] ) ; if ϱ ≤ φ ≤ 1 . (11)
Finally, the results (9)–(11) yield the following inequality:
V ( [ 0 , 0 ] , [ 1 , 1 ] ) ≤ V ( [ 0 , 0 ] , [ ϱ , φ ] ) ≤ V ( b ˙ ) ≤ V ( [ 1 , 1 ] , [ ϱ , φ ] ) ≤ V ( [ 1 , 1 ] , [ 0 , 0 ] ) .
Hence, − 1 ≤ V ( b ˙ ) ≤ 1 .    □
Theorem 4.  
Let b ˙ 1 = [ ε 1 , ϑ 1 ] , [ ϱ 1 , φ 1 ] , b ˙ 2 = [ ε 2 , ϑ 2 ] , [ ϱ 2 , φ 2 ] and b ˙ = [ ε , ϑ ] , [ ϱ , φ ] be three I V I F N s. If b ˙ 1 ≻ b ˙ 2 , then V ( b ˙ 1 ⊕ b ˙ ) > V ( b ˙ 2 ⊕ b ˙ ) .
Proof. 
Using Definition 3, the following results are found:
b ˙ 1 ⊕ b ˙ = [ ε 1 + ε − ε 1 ε , ϑ 1 + ϑ − ϑ 1 ϑ ] , [ ϱ 1 ϱ , φ 1 φ ] and b ˙ 2 ⊕ b ˙ = [ ε 2 + ε − ε 2 ε , ϑ 2 + ϑ − ϑ 2 ϑ ] , [ ϱ 2 ϱ , φ 2 φ ] . (12)
It is given that b ˙ 1 ≻ b ˙ 2 . By using Definition 4, we get
ε 1 > ε 2 , ϑ 1 > ϑ 2 and ϱ 1 < ϱ 2 , φ 1 < φ 2 . (13)
The relations in (13) give the following:
ε 1 ( 1 − ε ) + ε > ε 2 ( 1 − ε ) + ε and ϑ 1 ( 1 − ϑ ) + ϑ > ϑ 2 ( 1 − ϑ ) + ϑ .
This gives
ε 1 + ε − ε 1 ε > ε 2 + ε − ε 2 ε and ϑ 1 + ϑ − ϑ 1 ϑ > ϑ 2 + ϑ − ϑ 2 ϑ . (14)
Also, we have
ϱ 1 ϱ ≤ ϱ 2 ϱ and φ 1 φ ≤ φ 2 φ . (15)
Suppose that
b ˙ 1 ⊕ b ˙ = b ˙ 3 = [ ε 3 , ϑ 3 ] , [ ϱ 3 , φ 3 ] and b ˙ 2 ⊕ b ˙ = b ˙ 4 = [ ε 4 , ϑ 4 ] , [ ϱ 4 , φ 4 ] . (16)
Using Theorem 1 and (16), the results (14) and (15) lead to the following inequality:
[ ε 3^γ1 ( 1 − ϱ 3 ) + ϑ 3^γ1 ( 1 − φ 3 ) − ϱ 3^γ2 ( 1 − ε 3 ) − φ 3^γ2 ( 1 − ϑ 3 ) ] / 2 > [ ε 4^γ1 ( 1 − ϱ 4 ) + ϑ 4^γ1 ( 1 − φ 4 ) − ϱ 4^γ2 ( 1 − ε 4 ) − φ 4^γ2 ( 1 − ϑ 4 ) ] / 2 . (17)
Using (1) and (12), the inequality (17) becomes V ( b ˙ 1 ⊕ b ˙ ) > V ( b ˙ 2 ⊕ b ˙ ) .    □
Theorem 5.  
Let b ˙ 1 = [ ε 1 , ϑ 1 ] , [ ϱ 1 , φ 1 ] , b ˙ 2 = [ ε 2 , ϑ 2 ] , [ ϱ 2 , φ 2 ] and b ˙ = [ ε , ϑ ] , [ ϱ , φ ] be three I V I F N s. If b ˙ 1 ≻ b ˙ 2 , then V ( b ˙ 1 ⊗ b ˙ ) > V ( b ˙ 2 ⊗ b ˙ ) .
Proof. 
The proof follows a similar logic presented in Theorem 4.    □

3.3. Championing the Significance of the Novel G S F

To describe the advantages of the proposed function, we recalculate the examples mentioned in Section 3.1 and discuss their results in Table 2.
After a thorough examination of the results in Table 2, it becomes clear that the suggested G S F offers a valuable means of appraising the effectiveness of decision-making information.

4. Algorithm for IVIFS MADM Based on the Proposed GSF and p-Distance

For a better presentation, we split this section into three subsections—(i) formulation of the M A D M problem and some auxiliary results for developing a novel algorithm, (ii) novel algorithm steps, (iii) numerical examples.
The details are given below.

4.1. Formulation and Auxiliary Results

This section aims to introduce an algorithm to tackle the current M A D M problem, where attribute values are given by I V I F N s for distinct attributes. The decision-making problem involves sets of alternatives T ˙ = T ˙ 1 , T ˙ 2 , , T ˙ m and attributes U ˙ = U ˙ 1 , U ˙ 2 , , U ˙ n , and the alternative–attribute matrix is presented in Table 3. Each entry in the matrix, denoted as b ˙ i j = [ ε i j , ϑ i j ] , [ ϱ i j , φ i j ] , represents the rating of the jth attribute in the ith alternative as an I V I F N .
The attribute values for ith alternative ( T ˙ i ) corresponding to attributes U ˙ 1 , U ˙ 2 , , U ˙ n are represented in the following form:
T ˙ i : { ( U ˙ j , b ˙ i j ) : j = 1 to n } .
Here, [ ε i j , ϑ i j ] indicates the degree of satisfaction with attribute U ˙ j by alternative T ˙ i , while [ ϱ i j , φ i j ] represents the degree of dissatisfaction with attribute U ˙ j by alternative T ˙ i . It is worth noting that [ ε i j , ϑ i j ] ⊆ [ 0 , 1 ] , [ ϱ i j , φ i j ] ⊆ [ 0 , 1 ] and ϑ i j + φ i j ≤ 1 , for i = 1 to m , j = 1 to n . For alternative T ˙ i , the alternative matrix is a row matrix, denoted by [ T ˙ i ] , of size ( 1 × n ) . It is defined by [ T ˙ i ] = [ b ˙ i j ] 1 × n . The score matrix of the matrix [ T ˙ i ] is given by V T ˙ i = [ v i j ] , where v i j = V ( b ˙ i j ) . Thus, we can find a row vector v i = ( v i 1 , v i 2 , … , v i n ) from V T ˙ i for each i.
If each rating of an alternative is given in terms of the I V I F N [ 1 , 1 ] , [ 0 , 0 ] , it is called the positive ideal alternative ( P I T ). Similarly, if each rating of an alternative is of the form [ 0 , 0 ] , [ 1 , 1 ] , it is called the negative ideal alternative ( N I T ). Then, we get the vectors v P I T = ( 1 , 1 , … , 1 ) and v N I T = ( − 1 , − 1 , … , − 1 ) associated with P I T and N I T , respectively.
Definition 9.  
Using Definition 5, the p-distance of v i = ( v i 1 , v i 2 , , v i n ) relative to v P I T , denoted by δ p ( i ) , is given by
δ p ( i ) = ( ∑ j = 1 n | 1 − v i j |^p )^( 1 / p ) ; p ≥ 1 .
Definition 10.  
Let δ p ( i ) and δ p ( k ) be the p-distances of the alternatives T ˙ i and T ˙ k relative to P I T , respectively. Then, T ˙ i ≻ T ˙ k if δ p ( i ) < δ p ( k ) .

4.2. A Novel Algorithm for the Current M A D M

In the face of complex and uncertain situations, D M s often grapple with the task of assigning precise numerical weights to available information. To tackle this challenge, a viable approach involves the utilization of I V I F N s to represent these weights. The following algorithm outlines the steps for implementing this approach:
Step 1. 
The aim of this step is to build up a normalized decision matrix C ˙ = [ c ˙ i j ] from the matrix B ˙ = [ b ˙ i j ] , where
c ˙ i j = b ˙ i j , if the j th attribute is of the benefit type ; c ( b ˙ i j ) , if the j th attribute is of the cost type .
Step 2. 
This step computes the weighted decision matrix D ˙ = [ d ˙ i j ] from the matrix C ˙ = [ c ˙ i j ] and the assigned weight vector μ ˙ = [ μ ˙ 1 , μ ˙ 2 , , μ ˙ n ] , where each μ ˙ j ( j = 1 to n ) is represented by an I V I F N . The i j th entry d ˙ i j of D ˙ is given by
d ˙ i j = c ˙ i j ⊗ μ ˙ j ,
which is obtained by using property (ii) (Definition 3).
Step 3. 
By utilizing the proposed G S F and the matrix D ˙ , the score alternative matrix V T ˙ i = [ v i j ] 1 × n is calculated for each i, where v i j = V ( d ˙ i j ) is obtained by applying Formula (1) to d ˙ i j , with γ 1 + γ 2 = 1 and γ 1 ∈ ( 0 , 0.5 ) .
Step 4. 
For each V T ˙ i , we find a row vector v i = ( v i 1 , v i 2 , , v i n ) . Then, based on Definition 9 and obtained vectors v i , we calculate p-distance δ p ( i ) corresponding to each T ˙ i as follows:
δ p ( i ) = ( ∑ j = 1 n | 1 − v i j |^p )^( 1 / p ) ; p ≥ 1 ( i = 1 to m ) .
Step 5. 
Based on Definition 10 and the calculated δ p ( i ) values, we determine a P r O of the alternatives. The smaller the p-distance δ p ( i ) , the better the P r O of alternative T ˙ i , where i = 1 to m .
The step-by-step procedure of the proposed algorithm can be outlined as follows: First, the normalized decision matrix is obtained from the given decision matrix. Next, the weighted normalized decision matrix is constructed, where the attribute weights are expressed in terms of I V I F N s. In the third step, the alternative score matrix for each alternative is calculated using the proposed G S F together with the weighted normalized decision matrix. Subsequently, a row vector is extracted from each alternative score matrix, and the p-distance between this row vector and the P I T is determined. Finally, the P r O of the alternatives is established based on the computed p-distances. A smaller p-distance indicates a higher preference, meaning that the corresponding alternative T ˙ i (for i = 1 to m ) is ranked higher.
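The five steps above can be sketched end to end in Python. This is an illustrative sketch rather than the paper's own listing: an IVIFN [ ε , ϑ ] , [ ϱ , φ ] is assumed to be a nested tuple ((e, t), (r, f)), and all helper names are ours.

```python
# End-to-end sketch of Steps 1-5 of the proposed MADM algorithm.
# An IVIFN <[e, t], [r, f]> is a nested tuple ((e, t), (r, f)).

def complement(b):                       # Definition 3(v), used in Step 1
    (e, t), (r, f) = b
    return ((r, f), (e, t))

def mul(b1, b2):                         # Definition 3(ii), used in Step 2
    (e1, t1), (r1, f1) = b1
    (e2, t2), (r2, f2) = b2
    return ((e1 * e2, t1 * t2), (r1 + r2 - r1 * r2, f1 + f2 - f1 * f2))

def gsf(b, gamma1=0.4):                  # the proposed GSF, Definition 7
    gamma2 = 1.0 - gamma1
    (e, t), (r, f) = b
    return (e ** gamma1 * (1 - r) + t ** gamma1 * (1 - f)
            - r ** gamma2 * (1 - e) - f ** gamma2 * (1 - t)) / 2

def rank_alternatives(matrix, weights, benefit, p=2, gamma1=0.4):
    """Return alternative indices ordered by increasing p-distance
    from the positive ideal alternative (smaller distance = better)."""
    ranked = []
    for i, row in enumerate(matrix):
        # Step 1: normalize cost-type attributes via the complement.
        norm = [b if benefit[j] else complement(b) for j, b in enumerate(row)]
        # Step 2: weight each attribute value by its IVIFN weight.
        weighted = [mul(b, w) for b, w in zip(norm, weights)]
        # Step 3: score the row with the GSF.
        v = [gsf(d, gamma1) for d in weighted]
        # Step 4: p-distance from the positive ideal vector (1, ..., 1).
        dist = sum(abs(1 - vj) ** p for vj in v) ** (1.0 / p)
        ranked.append((dist, i))
    # Step 5: smaller p-distance -> better preference order.
    return [i for _, i in sorted(ranked)]
```

Varying the hypothetical parameters p and gamma1 reproduces the sensitivity analysis discussed later: the ranking can be recomputed for several p-values and several admissible γ 1 values to check its stability.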
  • Benefits and Limitations
Here, some benefits and limitations of the developed algorithm are presented. The benefits are as follows:
  • This algorithm provides the solution steps to solve an M A D M problem in the I V I F context. In such decision-making problems, the expert focuses not only on the behavior of the fuzzy decision matrix (a matrix in which each entry is shown by an I V I F N ) but also provides attribute weight information in terms of I V I F N s.
  • We calculate a row vector for each alternative score matrix. The advantage of these vectors is that we define a p-distance to check the overall performance of each alternative. Definition 10 states that a small value of the p-distance represents better performance of an alternative in a set of feasible alternatives.
  • The proposed G S F is utilized in this algorithm to obtain reasonable decision results. The use of this function permits a sensitivity analysis with respect to the parameter γ 1 in order to find an effective measure of I V I F N s.
Some limitations of this approach are given as follows:
  • This research does not incorporate group perceptions, which could provide more realistic decision opinions.
  • The G S F used in this algorithm has some limitations for parameter γ 1 . For γ 1 = 0.5 , this function does not work effectively. For example, if γ 1 = 0.5 and b ˙ 1 = [ 0.20 , 0.20 ] , [ 0.20 , 0.20 ] and b ˙ 2 = [ 0.40 , 0.40 ] , [ 0.40 , 0.40 ] , then V ( b ˙ 1 ) = V ( b ˙ 2 ) = 0 . This shows that the function does not provide a P r O for b ˙ 1 and b ˙ 2 , even though the two numbers are different from each other.
  • The present algorithm can be strengthened by including the concept of the negative ideal alternative ( N I T ).
  • A method for determining attribute weights precisely is yet to be developed; such a method would ensure a smoother decision-making process.
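The γ 1 = 0.5 degeneracy noted above is easy to check numerically. A small sketch assuming the G S F form of Step 3, with an I V I F N written as ⟨ [ ε , ϑ ] , [ ϱ , φ ] ⟩ (names are illustrative):

```python
def gsf_score(ivifn, gamma1):
    # GSF of Step 3 with gamma2 = 1 - gamma1.
    (eps, theta), (rho, phi) = ivifn
    g2 = 1.0 - gamma1
    return (eps ** gamma1 * (1 - rho) + theta ** gamma1 * (1 - phi)
            - rho ** g2 * (1 - eps) - phi ** g2 * (1 - theta)) / 2.0

b1 = ((0.20, 0.20), (0.20, 0.20))
b2 = ((0.40, 0.40), (0.40, 0.40))

# At gamma1 = 0.5 the positive and negative terms cancel for both numbers,
# so the function cannot rank them:
print(gsf_score(b1, 0.5), gsf_score(b2, 0.5))  # 0.0 0.0
# Any gamma1 in (0, 0.5) separates the two numbers:
print(gsf_score(b1, 0.35) == gsf_score(b2, 0.35))  # False
```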

4.3. Numerical Examples

The objectives of this subsection are to present (i) the applicability of the developed algorithm and (ii) the impact of the p-distance.
Example 10.  
This example addresses an M A D M problem that involves I V I F N decision information. The decision matrix B ˙ for this problem [7] is presented in Table 4.
The steps involved in the algorithm are outlined below.
Step 1. 
This step is not applicable because each attribute is of the benefit type. Consequently, C ˙ = B ˙ .
Step 2. 
According to the D M , the I V I F weights for the attributes are given as
μ ˙ 1 = [ 0.20 , 0.20 ] , [ 0.20 , 0.20 ] , μ ˙ 2 = [ 0.30 , 0.30 ] , [ 0.30 , 0.30 ] .
Then, we calculate the weighted decision matrix D ˙ , presented in Table 5.
Step 3. 
By utilizing the proposed G S F , we construct the score matrix V D ˙ for γ 1 = 0.35 , presented in Table 6.
Step 4. 
Utilizing (18), the value δ p ( i ) for p = 2 is calculated for T ˙ i , where i = 1 , 2 , 3 , 4 . The calculated values are
δ 2 ( 1 ) = 1.7308 , δ 2 ( 2 ) = 1.7746 , δ 2 ( 3 ) = 2.0217 , δ 2 ( 4 ) = 1.9771 .
Step 5. 
Step 4 produces the following sequence:
δ 2 ( 1 ) < δ 2 ( 2 ) < δ 2 ( 4 ) < δ 2 ( 3 ) .
This yields the following P r O :
T ˙ 1 ≻ T ˙ 2 ≻ T ˙ 4 ≻ T ˙ 3 .
Thus, T ˙ 1 is the most preferable.
Example 11.  
This example addresses an M A D M problem that involves I V I F N decision information. The decision matrix B ˙ for this problem [7] is presented in Table 7.
The steps involved in the algorithm are outlined below.
Step 1. 
This step is not applicable because each attribute is of the benefit type. Consequently, C ˙ = B ˙ .
Step 2. 
According to the D M , the I V I F weights for the attributes are given as
μ ˙ 1 = [ 0.60 , 0.60 ] , [ 0.40 , 0.40 ] , μ ˙ 2 = [ 0.70 , 0.70 ] , [ 0.30 , 0.30 ] .
Then, we calculate the weighted decision matrix D ˙ , presented in Table 8.
Step 3. 
By utilizing the proposed G S F , we construct the score matrix V D ˙ for γ 1 = 0.35 , presented in Table 9.
Step 4. 
Utilizing (18), the value δ p ( i ) for p = 2 is calculated for T ˙ i where i = 1 , 2 , 3 . The calculated values are
δ 2 ( 1 ) = 1.5307 , δ 2 ( 2 ) = 1.6671 , δ 2 ( 3 ) = 1.8303 .
Step 5. 
Step 4 produces the following sequence:
δ 2 ( 1 ) < δ 2 ( 2 ) < δ 2 ( 3 ) .
This yields the following P r O :
T ˙ 1 ≻ T ˙ 2 ≻ T ˙ 3 .
Thus, T ˙ 1 is the most preferable.
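The Step 4 values can be cross-checked directly from the score matrix. A quick sketch using the Table 9 scores with their signs (the G S F is negative for most of these weighted entries):

```python
# Score rows of T1, T2, T3 for Example 11 (signed Table 9 values).
rows = [(-0.1647, 0.0068), (-0.2571, -0.0949), (-0.3540, -0.2315)]

def delta(row, p=2):
    # p-distance from the positive ideal alternative (score 1 everywhere).
    return sum(abs(1 - v) ** p for v in row) ** (1.0 / p)

print([round(delta(r), 4) for r in rows])  # [1.5307, 1.6671, 1.8303]
```

The three values reproduce δ 2 ( 1 ) , δ 2 ( 2 ) , and δ 2 ( 3 ) above to four decimal places.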
Example 12.  
This example addresses an M A D M problem that involves I V I F N decision information. The decision matrix B ˙ for this problem [7] is presented in Table 10.
The steps involved in the algorithm are outlined below.
Step 1. 
This step is not applicable because each attribute is of the benefit type. Consequently, C ˙ = B ˙ .
Step 2. 
According to the D M , the I V I F weights for the attributes are given as
μ ˙ 1 = [ 0.20 , 0.20 ] , [ 0.40 , 0.40 ] , μ ˙ 2 = [ 0.30 , 0.30 ] , [ 0.25 , 0.40 ] , μ ˙ 3 = [ 0.50 , 0.50 ] , [ 0.40 , 0.40 ] .
Then, we calculate the weighted decision matrix D ˙ , presented in Table 11.
Step 3. 
By utilizing the proposed G S F , we construct the score matrix V D ˙ for γ 1 = 0.35 , presented in Table 12.
Step 4. 
Utilizing (18), the value δ p ( i ) for p = 2 is calculated for T ˙ i , where i = 1 , 2 , 3 . The calculated values are
δ 2 ( 1 ) = 2.4322 , δ 2 ( 2 ) = 2.3839 , δ 2 ( 3 ) = 2.5000 .
Step 5. 
Step 4 produces the following sequence:
δ 2 ( 2 ) < δ 2 ( 1 ) < δ 2 ( 3 ) .
This yields the following P r O :
T ˙ 2 ≻ T ˙ 1 ≻ T ˙ 3 .
Thus, T ˙ 2 is the most preferable.
Example 13.  
This example addresses an M A D M problem that involves I V I F N decision information. The decision matrix B ˙ for this problem [7] is presented in Table 13.
The steps involved in the algorithm are outlined below.
Step 1. 
This step is not applicable because each attribute is of the benefit type. Consequently, C ˙ = B ˙ .
Step 2. 
According to the D M , the I V I F weights for the attributes are given as
μ ˙ 1 = [ 0.25 , 0.25 ] , [ 0.25 , 0.25 ] , μ ˙ 2 = [ 0.35 , 0.35 ] , [ 0.40 , 0.40 ] , μ ˙ 3 = [ 0.30 , 0.30 ] , [ 0.65 , 0.65 ] .
Then, we calculate the weighted decision matrix D ˙ , as presented in Table 14.
Step 3. 
By utilizing the proposed G S F , we construct the score matrix V D ˙ for γ 1 = 0.35 , as presented in Table 15.
Step 4. 
Utilizing (18), the value δ p ( i ) for p = 2 is calculated for T ˙ i where i = 1 , 2 , 3 . The calculated values are
δ 2 ( 1 ) = 2.2542 , δ 2 ( 2 ) = 2.4215 , δ 2 ( 3 ) = 2.4352 .
Step 5. 
Step 4 produces the following sequence:
δ 2 ( 1 ) < δ 2 ( 2 ) < δ 2 ( 3 ) .
This yields the following P r O :
T ˙ 1 ≻ T ˙ 2 ≻ T ˙ 3 .
Thus, T ˙ 1 is the most preferable.
Example 14.  
This example addresses an M A D M problem that involves I V I F N decision information. The decision matrix B ˙ for this problem [7] is presented in Table 16.
The steps involved in the algorithm are outlined below.
Step 1. 
This step is not applicable because each attribute is of the benefit type. Consequently, C ˙ = B ˙ .
Step 2. 
According to the D M , the I V I F weights for the attributes are given as
μ ˙ 1 = [ 0 , 0 ] , [ 0 , 0 ] , μ ˙ 2 = [ 0 , 0 ] , [ 0 , 0 ] , μ ˙ 3 = [ 0 , 0 ] , [ 0 , 0 ] .
Then, we calculate the weighted decision matrix D ˙ , as presented in Table 17.
Step 3. 
By utilizing the proposed G S F , we construct the score matrix V D ˙ for γ 1 = 0.35 , presented in Table 18.
Step 4. 
Utilizing (18), the value δ p ( i ) for p = 2 is calculated for T ˙ i , where i = 1 , 2 , 3 . The calculated values are
δ 2 ( 1 ) = 2.8359 , δ 2 ( 2 ) = 2.8359 , δ 2 ( 3 ) = 2.8359 .
Step 5. 
Step 4 produces the following sequence:
δ 2 ( 1 ) = δ 2 ( 2 ) = δ 2 ( 3 ) .
This yields the following P r O :
T ˙ 1 = T ˙ 2 = T ˙ 3 .
Hence, each alternative has an equal P r O ranking.
Example 15.  
This example addresses an M A D M problem that involves I V I F N decision information. The decision matrix B ˙ for this problem [21] is presented in Table 19.
The steps involved in the algorithm are outlined below.
Step 1. 
This step is not applicable because each attribute is of the benefit type. Consequently, C ˙ = B ˙ .
Step 2. 
According to the D M , the I V I F weights for the attributes are given as
μ ˙ 1 = [ 0.25 , 0.25 ] , [ 0.25 , 0.25 ] , μ ˙ 2 = [ 0.35 , 0.35 ] , [ 0.40 , 0.40 ] , μ ˙ 3 = [ 0.30 , 0.30 ] , [ 0.65 , 0.65 ] .
Then, we calculate the weighted decision matrix D ˙ , as presented in Table 20.
Step 3. 
By utilizing the proposed G S F , we construct the score matrix V D ˙ for γ 1 = 0.35 , presented in Table 21.
Step 4. 
Utilizing (18), the value δ p ( i ) for p = 2 is calculated for T ˙ i , where i = 1 , 2 , 3 . The calculated values are
δ 2 ( 1 ) = 2.1794 , δ 2 ( 2 ) = 2.3154 , δ 2 ( 3 ) = 2.4059 .
Step 5. 
Step 4 produces the following sequence:
δ 2 ( 1 ) < δ 2 ( 2 ) < δ 2 ( 3 ) .
This yields the following P r O :
T ˙ 1 ≻ T ˙ 2 ≻ T ˙ 3 .
Thus, T ˙ 1 is the most preferable.
Example 16.  
This example addresses an M A D M problem that involves I V I F N decision information. The decision matrix B ˙ for this problem [25] is presented in Table 22.
The steps involved in the algorithm are outlined below.
Step 1. 
This step is not applicable because each attribute is of the benefit type. Consequently, C ˙ = B ˙ .
Step 2. 
According to the D M , the I V I F weights for the attributes are given as
μ ˙ 1 = [ 0.15 , 0.20 ] , [ 0.40 , 0.60 ] , μ ˙ 2 = [ 0.30 , 0.50 ] , [ 0.20 , 0.40 ] , μ ˙ 3 = [ 0.50 , 0.70 ] , [ 0.10 , 0.30 ] .
Then, we calculate the weighted decision matrix D ˙ , as presented in Table 23.
Step 3. 
By utilizing the proposed G S F , we construct the score matrix V D ˙ for γ 1 = 0.35 , presented in Table 24.
Step 4. 
Utilizing (18), the value δ p ( i ) for p = 2 is calculated for T ˙ i , where i = 1 , 2 , 3 . The calculated values are
δ 2 ( 1 ) = 2.0468 , δ 2 ( 2 ) = 2.3165 , δ 2 ( 3 ) = 2.3397 .
Step 5. 
Step 4 produces the following sequence:
δ 2 ( 1 ) < δ 2 ( 2 ) < δ 2 ( 3 ) .
This yields the following P r O :
T ˙ 1 ≻ T ˙ 2 ≻ T ˙ 3 .
Thus, T ˙ 1 is the most preferable.

5. Comparisons and Advantages

A comprehensive analysis is conducted to compare the performance of the current approach against existing approaches. The ranking results in Examples 10–16 are obtained from both the existing approaches and the proposed approach, and the results are summarized in the following subsections.

5.1. Comparisons and Advantages with Example 10

  • Kumar and Chen [16] presented a connection number-based approach to address an I V I F M A D M problem. However, their method produced an unreasonable P r O of ( T ˙ 3 = T ˙ 4 ≻ T ˙ 1 ≻ T ˙ 2 ) for the given data, as the attribute values of T ˙ 3 and T ˙ 4 are distinct. To address this issue, Definition 4 is applied, revealing the P r O s ( T ˙ 1 ≻ T ˙ 3 ) and ( T ˙ 2 ≻ T ˙ 3 ) for the pairs { T ˙ 1 , T ˙ 3 } and { T ˙ 2 , T ˙ 3 } , respectively. These results differ from those produced by Kumar and Chen's method, rendering it unreliable. In contrast, our proposed method yields a P r O of ( T ˙ 1 ≻ T ˙ 2 ≻ T ˙ 4 ≻ T ˙ 3 ), which not only addresses the limitations of the Kumar and Chen method but also satisfies Definition 4.
  • The method in [7] is based on the mean and variance of score matrices for I V I F values, while the approach in [25] employs a score function along with a non-linear programming model. The existing methods in [7,25] establish a P r O of ( T ˙ 1 ≻ T ˙ 2 ≻ T ˙ 3 ≻ T ˙ 4 ). In harmony with our proposed approach, both P r O s not only adhere to the criteria defined in Definition 4 but also converge on the same top-ranked alternative.

5.2. Comparisons and Advantages with Example 11

  • Kumar and Chen [16] developed a connection number-based method for addressing an I V I F M A D M problem, which yielded the P r O ( T ˙ 3 ≻ T ˙ 1 ≻ T ˙ 2 ) for the given decision problem. However, upon employing Definition 4, it became evident that this method is incapable of providing a consistent P r O . Our proposed algorithm, on the other hand, produced a P r O of ( T ˙ 1 ≻ T ˙ 2 ≻ T ˙ 3 ), which satisfies both relationships ( T ˙ 1 ≻ T ˙ 3 ) and ( T ˙ 2 ≻ T ˙ 3 ).
  • The ranking order yielded by the methods in [7,25] corresponds exactly to that produced by our proposed algorithm.

5.3. Comparisons and Advantages with Example 12

  • The P r O obtained from Kumar and Chen's method [16] for the given data is ( T ˙ 1 = T ˙ 2 = T ˙ 3 ), which appears unreasonable since the attribute values for these alternatives are distinct. In contrast, our proposed algorithm ranks the alternatives as ( T ˙ 2 ≻ T ˙ 1 ≻ T ˙ 3 ), thereby addressing the limitation of Kumar and Chen's method.
  • Chen and Tsai [7] proposed a methodology for solving M A D M problems that utilizes the score function of I V I F values, along with the mean and variance of the resulting score matrices. The proposed algorithm yields a P r O of ( T ˙ 2 ≻ T ˙ 1 ≻ T ˙ 3 ), whereas the existing method [7] produces a P r O of ( T ˙ 3 ≻ T ˙ 2 ≻ T ˙ 1 ). Despite the differences, both methods successfully differentiate between the rankings of T ˙ 1 , T ˙ 2 , and T ˙ 3 .

5.4. Comparisons and Advantages with Example 13

  • Li [26] proposed a nonlinear programming methodology based on the T O P S I S framework to solve M A D M problems, where both the ratings of alternatives and the weights of attributes are represented using I V I F sets. The method given in [26] suffers from the issue of “division by zero,” rendering it incapable of generating a reliable ranking. However, our proposed algorithm effectively overcomes this shortcoming.
  • The methodologies presented in [7,16,25,29,30,31,32,33] are considered for comparison to demonstrate the credibility of our proposed approach. These methods are designed to address M A D M problems within the I V I F context and are based on various concepts, including the mean and variance, non-linear programming models, probability density functions, the U-quadratic distribution, and the Beta distribution. The proposed algorithm identifies the same best alternative as the existing methods in [7,16,25,29,30,31,32,33].

5.5. Comparisons and Advantages with Example 14

  • To address I V I F M A D M problems, Li [26] proposed a nonlinear programming methodology based on the T O P S I S framework, while Zhao and Zhang [27] developed a method utilizing an accuracy function and the I V I F weighted averaging operator. The existing methods [26,27] are unable to solve this problem due to the “division by zero” issue. However, our proposed algorithm effectively addresses this shortcoming.
  • Our proposed algorithm produces the same P r O as that generated by the existing methods outlined in [7,16,25,29,30,31,32,33].

5.6. Comparisons and Advantages with Example 15

  • The novel M A D M approaches presented in [7,21,22,25,27] are developed within the I V I F context, with some methods based on concepts such as probability density functions and the z-score decision matrix. Our proposed algorithm yields the same P r O as these existing methods, whereas the method suggested by Li [26] is unable to generate a P r O for this problem.
  • The above comparisons show the reliability of the proposed algorithm.

5.7. Comparisons and Advantages with Example 16

  • Chen and Tsai’s method, as outlined in [7], initially established a P r O of ( T ˙ 1 ≻ T ˙ 2 = T ˙ 3 ) for the given decision-making problem. However, upon a thorough examination of matrix B ˙ , it became apparent that the I V I F attribute values associated with U ˙ 1 , U ˙ 2 , and U ˙ 3 for alternatives T ˙ 2 and T ˙ 3 exhibited notable differences. This discrepancy indicated that T ˙ 2 and T ˙ 3 should not be considered equal in their ranking. In contrast, our proposed algorithm produced a P r O of ( T ˙ 1 ≻ T ˙ 2 ≻ T ˙ 3 ), effectively rectifying the limitation identified in [7].
  • Chen and Hsu [25] proposed a method in which attribute weights are determined using a non-linear programming model and combined with a score function to derive the P r O . The proposed algorithm yields a P r O of ( T ˙ 1 ≻ T ˙ 2 ≻ T ˙ 3 ), whereas the existing method [25] produces a P r O of ( T ˙ 1 ≻ T ˙ 3 ≻ T ˙ 2 ). Despite the differences, both methods successfully differentiate between the rankings of T ˙ 2 and T ˙ 3 .

6. Applicability of Proposed Algorithm

The following example (investment company problem) is considered to show the applicability of the proposed work in real-world scenarios. The D M aims to invest money in one of the following alternatives: a car company ( T ˙ 1 ), a food company ( T ˙ 2 ), a medicine company ( T ˙ 3 ), or an arms company ( T ˙ 4 ). These alternatives are evaluated based on three attributes: risk analysis ( U ˙ 1 ), growth analysis ( U ˙ 2 ), and environmental impact analysis ( U ˙ 3 ). The decision must be made under an I V I F environment. The decision matrix B ˙ [4] is given in Table 25.
To make the best investment decision, the D M must select the optimal alternative(s) based on the given attributes. The solution is obtained through the following steps:
Step 1. 
This step is not applicable because each attribute is of the benefit type. Consequently, C ˙ = B ˙ .
Step 2. 
According to the D M , the I V I F weights for the attributes are given as
μ ˙ 1 = [ 0 , 0 ] , [ 0 , 0 ] , μ ˙ 2 = [ 0 , 0 ] , [ 0 , 0 ] , μ ˙ 3 = [ 0 , 0 ] , [ 0 , 0 ] .
Then, we calculate the weighted decision matrix D ˙ , as presented in Table 26.
Step 3. 
By utilizing the proposed G S F , we construct the score matrix V D ˙ for γ 1 = 0.35 , as presented in Table 27.
Step 4. 
Utilizing (18), the value δ p ( i ) for p = 2 is calculated for T ˙ i where i = 1 , 2 , 3 , 4 . The calculated values are
δ 2 ( 1 ) = 2.6800 , δ 2 ( 2 ) = 2.3669 , δ 2 ( 3 ) = 2.5144 , δ 2 ( 4 ) = 2.2611 .
Step 5. 
Step 4 produces the following sequence:
δ 2 ( 4 ) < δ 2 ( 2 ) < δ 2 ( 3 ) < δ 2 ( 1 ) .
This yields the following P r O :
T ˙ 4 ≻ T ˙ 2 ≻ T ˙ 3 ≻ T ˙ 1 .
Thus, T ˙ 4 is the most preferable.
Comparisons and advantages
To test the performance of the developed approach against the existing ones, we conducted a comparative experiment; the results are summarized in Table 28.
The information presented in Table 28 is visualized in Figure 1.
From Table 28, the important comparison points are listed as follows:
(i)
Both subjective [21,22] and objective [4,6,25] ways of describing attribute weight information are used in the approaches given in Table 28 in order to create an effective comparative analysis. In the present algorithm, the attribute weight information is given subjectively, but the proposed solution steps are completely different from those of the existing approaches.
(ii)
Kumar and Chen’s method [22] is unable to discern a P r O from the given numerical data, whereas the remaining approaches consistently mirror the P r O produced by our suggested algorithm. These findings lend credence to the robustness and reliability of the proposed algorithm.

7. Conclusions

7.1. Main Outcomes of the Present Research

The main results of this research are given below:
(i)
A G S F was constructed with its robust properties to measure I V I F N s for solving decision-making problems. The main features of this function are as follows: (a) this function incorporates the degree of uncertainty present in an I V I F N ; (b) it contains two flexible parameters, γ 1 and γ 2 , with the relation γ 1 + γ 2 = 1 and γ 1 ∈ ( 0 , 0.5 ) ; (c) the efficacy of this function was demonstrated through various numerical examples that indicate its advantage over existing S F s.
(ii)
An algorithm was devised for M A D M using the I V I F decision framework. This algorithm utilizes the concept of the p-distance of each alternative score matrix relative to the P I T to effectively accomplish the D M ’s task. The novelties of the algorithm are as follows: (a) it is simple to implement, i.e., the algorithm’s running time is very short; (b) a program for this algorithm is given in Python to handle the numerical problems smoothly; (c) the p-distance is computed for each alternative, which plays a crucial role in ranking the alternatives.
(iii)
The implementation and advantages of the suggested algorithm are demonstrated through numerous comparative examples.

7.2. Management Insights and Computer Software

Some management insights for real event-based M A D M problem of the proposed algorithm are given below:
(i)
Including the p-distance of alternatives in M A D M problems is effective and flexible. The p-distance values help the D M prioritize the alternatives accurately.
(ii)
A D M faces many challenges in assigning precise numerical weights to different attributes due to the complex and uncertain nature of the information. Therefore, in this work, the attribute weights are represented by I V I F N s.
(iii)
A practical M A D M problem was successfully modeled and solved by using the proposed algorithm. A comprehensive comparative analysis is also provided to show the advantages of the algorithm.
(iv)
To calculate the numerical results accurately, Python software (version 3.12.3) was used throughout this paper. The results are presented in Appendix A.

7.3. Future Scope and Limitations

(i)
In future studies, the present algorithm can be used for solving M A D M problems in other fuzzy and uncertain environments, such as picture fuzzy and linguistic interval-valued intuitionistic fuzzy contexts.
(ii)
To expand its range of applications, the developed algorithm is anticipated to be employed in addressing various other real-world M A D M problems, such as the selection of healthcare waste disposal alternatives [34] and the prioritization of connected autonomous vehicles in real-time traffic management [35].
(iii)
The present algorithm can be strengthened by including the concept of the negative ideal alternative ( N I T ).
(iv)
The present algorithm ranks the alternatives in M A D M problems only; it can be expanded to solve group decision-making problems.
(v)
A method will be proposed to determine attribute weights precisely, ensuring a smoother decision-making process.

Author Contributions

Conceptualization, S.K., S.R.M. and R.T.; methodology, S.K., S.R.M. and R.T.; software, S.K. and R.T.; validation, S.K., S.R.M. and R.T.; formal analysis, S.K., S.R.M. and R.T.; writing—original draft preparation, R.T.; writing—review and editing, S.K., S.R.M. and R.T.; supervision, S.K.; funding acquisition, S.R.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia [Grant No. KFU253312].

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Python Code

The Python code for calculating rankings from the normalized decision matrix is given as follows:
Listing A1. Python code.
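The listing itself appears as images in the published article. As an illustrative substitute (not the authors' exact code), the ranking computation of Steps 2–5 can be sketched as follows, assuming the product/probabilistic-sum weighting rule for I V I F N s and the G S F of Step 3; all helper names are ours:

```python
def weight_ivifn(b, w):
    """Weighted IVIFN w (x) b: membership bounds multiply, and
    non-membership bounds combine by the probabilistic sum."""
    (a1, b1), (c1, d1) = b
    (a2, b2), (c2, d2) = w
    return ((a1 * a2, b1 * b2), (c1 + c2 - c1 * c2, d1 + d2 - d1 * d2))

def gsf_score(d, gamma1=0.35):
    """Generalized score function of Step 3 (gamma2 = 1 - gamma1)."""
    (eps, theta), (rho, phi) = d
    g2 = 1.0 - gamma1
    return (eps ** gamma1 * (1 - rho) + theta ** gamma1 * (1 - phi)
            - rho ** g2 * (1 - eps) - phi ** g2 * (1 - theta)) / 2.0

def p_distance(row, p=2):
    """p-distance of a score row from the positive ideal (all ones)."""
    return sum(abs(1 - v) ** p for v in row) ** (1.0 / p)

def rank_alternatives(matrix, weights, gamma1=0.35, p=2):
    """Rank alternatives by increasing p-distance (smaller is better).
    Returns (ordered indices, distances)."""
    dists = [p_distance([gsf_score(weight_ivifn(b, w), gamma1)
                         for b, w in zip(row, weights)], p)
             for row in matrix]
    order = sorted(range(len(matrix)), key=lambda i: dists[i])
    return order, dists

# Example 11 data (Table 7) with the Step 2 attribute weights:
B = [
    [((0.60, 0.65), (0.32, 0.35)), ((0.55, 0.63), (0.25, 0.28))],
    [((0.55, 0.55), (0.38, 0.42)), ((0.52, 0.52), (0.33, 0.33))],
    [((0.45, 0.45), (0.45, 0.45)), ((0.35, 0.35), (0.35, 0.35))],
]
W = [((0.60, 0.60), (0.40, 0.40)), ((0.70, 0.70), (0.30, 0.30))]
order, dists = rank_alternatives(B, W)
print(order)  # [0, 1, 2], i.e. T1 is ranked first, matching Example 11
```

Because no intermediate rounding is applied here, the distances can differ from the tabulated ones in the last decimal places, but the resulting P r O is the same.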

References

  1. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353. [Google Scholar] [CrossRef]
  2. Atanassov, K.T. Intuitionistic fuzzy sets. Fuzzy Sets Syst. 1986, 20, 87–96. [Google Scholar] [CrossRef]
  3. Atanassov, K.; Gargov, G. Interval-valued intuitionistic fuzzy sets. Fuzzy Sets Syst. 1989, 31, 343–349. [Google Scholar] [CrossRef]
  4. Garg, H. A new generalized improved score function of interval-valued intuitionistic fuzzy sets and applications in expert systems. Appl. Soft Comput. 2016, 38, 988–999. [Google Scholar] [CrossRef]
  5. Wan, S.; Dong, J. Decision Making Theories and Methods Based on Interval-Valued Intuitionistic Fuzzy Sets; Springer Nature: Singapore, 2020. [Google Scholar]
  6. Kumar, S.; Kumar, M. A game theoretic approach to solve multiple group decision making problems with interval-valued intuitionistic fuzzy decision matrices. Int. J. Manag. Sci. Eng. Manag. 2021, 16, 34–42. [Google Scholar] [CrossRef]
  7. Chen, S.-M.; Tsai, C.-A. Multiattribute decision making using novel score function of interval-valued intuitionistic fuzzy values and the means and the variances of score matrices. Inf. Sci. 2021, 577, 748–768. [Google Scholar] [CrossRef]
  8. Xu, Z.S. Methods for aggregating interval-valued intuitionistic fuzzy information and their application to decision making. Control Decis. 2007, 22, 215–219. [Google Scholar]
  9. Ye, J. Multicriteria fuzzy decision-making method based on a novel accuracy function under interval-valued intuitionistic fuzzy environment. Expert Syst. Appl. 2009, 36, 6899–6902. [Google Scholar] [CrossRef]
  10. Nayagam, V.L.G.; Muralikrishnan, S.; Sivaraman, G. Multi-criteria decision-making method based on interval-valued intuitionistic fuzzy sets. Expert Syst. Appl. 2011, 38, 1464–1467. [Google Scholar] [CrossRef]
  11. Bai, Z.-Y. An interval-valued intuitionistic fuzzy TOPSIS method based on an improved score function. Sci. World J. 2013, 2013, 879089. [Google Scholar] [CrossRef]
  12. Nayagam, V.L.G.; Jeevaraj, S.; Dhanasekaran, P. An intuitionistic fuzzy multi-criteria decision-making method based on nonhesitance score for interval-valued intuitionistic fuzzy sets. Soft Comput. 2018, 21, 7077–7082. [Google Scholar] [CrossRef]
  13. Selvaraj, J.; Majumdar, A. A new ranking method for interval-valued intuitionistic fuzzy numbers and its application in multi-criteria decision-making. Mathematics 2021, 9, 2647. [Google Scholar] [CrossRef]
  14. Chen, S.-M.; Tsai, K.-Y. Multiattribute decision making based on new score function of interval-valued intuitionistic fuzzy values and normalized score matrices. Inf. Sci. 2021, 575, 714–731. [Google Scholar] [CrossRef]
  15. Chen, S.-M.; Yu, S.-H. Multiattribute decision making based on novel score function and the power operator of interval-valued intuitionistic fuzzy values. Inf. Sci. 2022, 606, 763–785. [Google Scholar] [CrossRef]
  16. Kumar, K.; Chen, S.M. Multiattribute decision making based on interval-valued intuitionistic fuzzy values, score function of connection numbers, and the set pair analysis theory. Inf. Sci. 2021, 551, 100–112. [Google Scholar] [CrossRef]
  17. Wang, Z.; Xiao, F.; Ding, W. Interval-valued intuitionistic fuzzy Jenson-Shannon divergence and its application in multi-attribute decision making. Appl. Intell. 2022, 52, 16168–16184. [Google Scholar] [CrossRef]
  18. Ohlan, A. Novel entropy and distance measures for interval-valued intuitionistic fuzzy sets with application in multi-criteria group decision-making. Int. J. Gen. Syst. 2022, 51, 413–440. [Google Scholar] [CrossRef]
  19. Senapati, T.; Mesiar, R.; Simic, V.; Iampan, A.; Chinram, R.; Ali, R. Analysis of interval-valued intuitionistic fuzzy Aczel-Alsina geometric aggregation operators and their application to multiple attribute decision-making. Axioms 2022, 11, 258. [Google Scholar] [CrossRef]
  20. Shen, Q.; Zhang, X.; Lou, J.; Liu, Y.; Jiang, Y. Interval-valued intuitionistic fuzzy multi-attribute second-order decision making based on partial connection numbers of set pair analysis. Soft Comput. 2022, 26, 10389–10400. [Google Scholar] [CrossRef]
  21. Patra, K. An improved ranking method for multi attributes decision making problem based on interval valued intuitionistic fuzzy values. Cybern. Syst. 2022, 54, 648–672. [Google Scholar] [CrossRef]
  22. Kumar, K.; Chen, S.M. Multiattribute decision making based on converted decision matrices, probability density functions, and interval-valued intuitionistic fuzzy values. Inf. Sci. 2021, 554, 313–324. [Google Scholar] [CrossRef]
  23. Shi, X.; Ali, Z.; Mahmood, T.; Liu, P. Power aggregation operators of interval-valued Atanassov-intuitionistic fuzzy sets based on Aczel-Alsina t-norm and t-conorm and their applications in decision making. Int. J. Comput. Intell. Syst. 2023, 16, 43. [Google Scholar] [CrossRef]
  24. Zhong, Y.; Zhang, H.; Cao, L.; Li, Y.; Qin, Y.; Luo, X. Power Muirhead mean operators of interval-valued intuitionistic fuzzy values in the framework of Dempster-Shafer theory for multiple criteria decision-making. Soft Comput. 2023, 27, 763–782. [Google Scholar] [CrossRef]
  25. Chen, S.M.; Hsu, M.H. Multiple attribute decision making based on novel score function of interval-valued intuitionistic fuzzy values, score matrix, and nonlinear programming model. Inf. Sci. 2023, 645, 119332. [Google Scholar] [CrossRef]
  26. Li, D.F. TOPSIS-based nonlinear-programming methodology for multiattribute decision making with interval-valued intuitionistic fuzzy sets. IEEE Trans. Fuzzy Syst. 2010, 18, 299–311. [Google Scholar] [CrossRef]
  27. Zhao, Z.; Zhang, Y. Multiple attribute decision making method in the frame of interval-valued intuitionistic fuzzy sets. In Proceedings of the Eighth International Conference on Fuzzy Systems and Knowledge Discovery, Shanghai, China, 26–28 July 2011; pp. 192–196. [Google Scholar]
  28. Balakrishnan, A.V. Applied Functional Analysis; Springer: Berlin/Heidelberg, Germany, 1980. [Google Scholar]
  29. Chen, S.M.; Huang, Z.C. Multiattribute decision making based on interval-valued intuitionistic fuzzy values and linear programming methodology. Inf. Sci. 2017, 381, 341–351. [Google Scholar] [CrossRef]
  30. Chen, S.M.; Han, W.H. An improved MADM method using interval-valued intuitionistic fuzzy values. Inf. Sci. 2018, 467, 489–505. [Google Scholar] [CrossRef]
  31. Chen, S.M.; Fan, K.Y. Multiattribute decision making based on probability density functions and the variances and standard deviations of largest ranges of evaluating interval-valued intuitionistic fuzzy values. Inf. Sci. 2019, 490, 329–343. [Google Scholar] [CrossRef]
  32. Chen, S.M.; Chu, Y.C. Multiattribute decision making based on U-quadratic distribution of intervals and the transformed matrix in interval-valued intuitionistic fuzzy environments. Inf. Sci. 2020, 537, 30–45. [Google Scholar] [CrossRef]
  33. Chen, S.M.; Liao, W.T. Multiple attribute decision making using Beta distribution of intervals, expected values of intervals, and new score function of interval-valued intuitionistic fuzzy values. Inf. Sci. 2021, 579, 863–887. [Google Scholar] [CrossRef]
  34. Komal. Archimedean t-norm and t-conorm based intuitionistic fuzzy WASPAS method to evaluate health-care waste disposal alternatives with unknown weight information. Appl. Soft Comput. 2023, 146, 110751. [Google Scholar]
  35. Gokasar, I.; Pamucar, D.; Deveci, M.; Ding, W. A novel rough numbers based extended MACBETH method for the prioritization of the connected autonomous vehicles in real-time traffic management. Expert Syst. Appl. 2023, 211, 118445. [Google Scholar] [CrossRef]
Figure 1. Comparison of the proposed algorithm’s alternative ranking with other M A D M methods: Garg (2016) [4], Kumar and Kumar (2021) [6], Patra (2022) [21], Chen and Hsu (2023) [25], and Kumar and Chen (2021) [22].
Table 1. List of Abbreviations.
Abbreviation Definition
A F Accuracy function
D M Decision maker
G S F Generalized score function
I F S Intuitionistic fuzzy set
I V I F Interval-valued intuitionistic fuzzy
I V I F N Interval-valued intuitionistic fuzzy number
I V I F S Interval-valued intuitionistic fuzzy set
M A D M Multiple-attribute decision making
M A G D M Multiple-attribute group decision making
N I T Negative ideal alternative
P I T Positive ideal alternative
P r O Preference order
S F Score function
S P A Set pair analysis
Table 2. The computed P r O s for Examples 1–9.
Values for Examples 1–9 (in order):
For γ 1 = 0.25 : V ( b ˙ 1 ) = 0.1727, 0.2156, 0.4401, 0.0376, 0.4468, 0.4014, 0.3309, 0.5000, 0.5000 ; V ( b ˙ 2 ) = 0.4180, 0.1930, 0.3849, 0.3392, 0.4542, 0.2433, 0.3554, 0.6895, 0.8409
For γ 1 = 0.35 : V ( b ˙ 1 ) = 0.1141, 0.1603, 0.4181, 0.0143, 0.4943, 0.2851, 0.3576, 0.5000, 0.5000 ; V ( b ˙ 2 ) = 0.3023, 0.1474, 0.3636, 0.2267, 0.5042, 0.1312, 0.3718, 0.7239, 0.7846
For γ 1 = 0.45 : V ( b ˙ 1 ) = 0.0487, 0.1273, 0.3973, 0.0723, 0.5478, 0.1739, 0.3804, 0.5000, 0.5000 ; V ( b ˙ 2 ) = 0.1890, 0.0724, 0.3410, 0.1171, 0.5599, 0.0229, 0.3863, 0.7602, 0.7320
P r O : b ˙ 2 ≻ b ˙ 1 , b ˙ 1 ≻ b ˙ 2 , b ˙ 1 ≻ b ˙ 2 , b ˙ 2 ≻ b ˙ 1 , b ˙ 1 ≻ b ˙ 2 , b ˙ 1 ≻ b ˙ 2 , b ˙ 1 ≻ b ˙ 2 , b ˙ 1 ≻ b ˙ 2 , b ˙ 2 ≻ b ˙ 1
Table 3. The alternative–attribute matrix.
U ˙ 1 U ˙ 2 U ˙ 3 U ˙ n
T ˙ 1 b ˙ 11 b ˙ 12 b ˙ 13 b ˙ 1 n
T ˙ 2 b ˙ 21 b ˙ 22 b ˙ 23 b ˙ 2 n
T ˙ m b ˙ m 1 b ˙ m 2 b ˙ m 3 b ˙ m n
Table 4. The decision matrix ( B ˙ ).
U ˙ 1 U ˙ 2
T ˙ 1 [ 0.68 , 0.68 ] , [ 0.32 , 0.32 ] [ 0.63 , 0.63 ] , [ 0.24 , 0.24 ]
T ˙ 2 [ 0.65 , 0.65 ] , [ 0.35 , 0.35 ] [ 0.61 , 0.61 ] , [ 0.25 , 0.25 ]
T ˙ 3 [ 0.40 , 0.50 ] , [ 0.40 , 0.50 ] [ 0.33 , 0.50 ] , [ 0.33 , 0.50 ]
T ˙ 4 [ 0.15 , 0.50 ] , [ 0.15 , 0.50 ] [ 0.22 , 0.50 ] , [ 0.22 , 0.50 ]
Table 5. The weighted decision matrix ( D ˙ ).
U ˙ 1 U ˙ 2
T ˙ 1 [ 0.14 , 0.14 ] , [ 0.46 , 0.46 ] [ 0.19 , 0.19 ] , [ 0.47 , 0.47 ]
T ˙ 2 [ 0.13 , 0.13 ] , [ 0.48 , 0.48 ] [ 0.18 , 0.18 ] , [ 0.48 , 0.48 ]
T ˙ 3 [ 0.08 , 0.10 ] , [ 0.52 , 0.60 ] [ 0.10 , 0.15 ] , [ 0.53 , 0.65 ]
T ˙ 4 [ 0.03 , 0.10 ] , [ 0.32 , 0.60 ] [ 0.07 , 0.10 ] , [ 0.45 , 0.65 ]
Table 6. The score matrix ( V D ˙ ).
U ˙ 1 U ˙ 2
T ˙ 1 0.5847 0.5181
T ˙ 2 0.2853 0.2236
T ˙ 3 0.4276 0.4240
T ˙ 4 0.3651 0.4302
Table 7. The decision matrix ( B ˙ ).
U ˙ 1 U ˙ 2
T ˙ 1 [ 0.60 , 0.65 ] , [ 0.32 , 0.35 ] [ 0.55 , 0.63 ] , [ 0.25 , 0.28 ]
T ˙ 2 [ 0.55 , 0.55 ] , [ 0.38 , 0.42 ] [ 0.52 , 0.52 ] , [ 0.33 , 0.33 ]
T ˙ 3 [ 0.45 , 0.45 ] , [ 0.45 , 0.45 ] [ 0.35 , 0.35 ] , [ 0.35 , 0.35 ]
Table 8. The weighted decision matrix ( D ˙ ).
U ˙ 1 U ˙ 2
T ˙ 1 [ 0.36 , 0.39 ] , [ 0.59 , 0.61 ] [ 0.39 , 0.44 ] , [ 0.48 , 0.50 ]
T ˙ 2 [ 0.33 , 0.33 ] , [ 0.63 , 0.65 ] [ 0.36 , 0.36 ] , [ 0.53 , 0.53 ]
T ˙ 3 [ 0.27 , 0.27 ] , [ 0.67 , 0.67 ] [ 0.25 , 0.25 ] , [ 0.55 , 0.55 ]
Table 9. The score matrix ( V D ˙ ).
U ˙ 1 U ˙ 2
T ˙ 1 0.1647 0.0068
T ˙ 2 0.2571 0.0949
T ˙ 3 0.3540 0.2315
Table 10. The decision matrix ( B ˙ ).
U ˙ 1 U ˙ 2 U ˙ 3
T ˙ 1 [ 0.20 , 0.50 ] , [ 0.06 , 0.50 ] [ 0.10 , 0.50 ] , [ 0.05 , 0.50 ] [ 0.20 , 0.50 ] , [ 0.10 , 0.50 ]
T ˙ 2 [ 0.28 , 0.50 ] , [ 0.14 , 0.50 ] [ 0.30 , 0.50 ] , [ 0.25 , 0.50 ] [ 0.25 , 0.50 ] , [ 0.15 , 0.50 ]
T ˙ 3 [ 0.37 , 0.50 ] , [ 0.23 , 0.50 ] [ 0.50 , 0.50 ] , [ 0.45 , 0.50 ] [ 0.30 , 0.50 ] , [ 0.20 , 0.50 ]
Table 11. The weighted decision matrix ( D ˙ ).
U ˙ 1 U ˙ 2 U ˙ 3
T ˙ 1 [ 0.04 , 0.10 ] , [ 0.44 , 0.70 ] [ 0.03 , 0.15 ] , [ 0.29 , 0.70 ] [ 0.10 , 0.25 ] , [ 0.46 , 0.70 ]
T ˙ 2 [ 0.06 , 0.10 ] , [ 0.48 , 0.70 ] [ 0.09 , 0.15 ] , [ 0.44 , 0.70 ] [ 0.13 , 0.25 ] , [ 0.49 , 0.70 ]
T ˙ 3 [ 0.07 , 0.10 ] , [ 0.54 , 0.70 ] [ 0.15 , 0.15 ] , [ 0.59 , 0.70 ] [ 0.15 , 0.25 ] , [ 0.52 , 0.70 ]
Table 12. The score matrix ( V D ˙ ).
U ˙ 1 U ˙ 2 U ˙ 3
T ˙ 1 0.4806 0.3727 0.3561
T ˙ 2 0.4844 0.4061 0.3538
T ˙ 3 0.5107 0.4559 0.3594
Table 13. The decision matrix ( B ˙ ).
U ˙ 1 U ˙ 2 U ˙ 3
T ˙ 1 [ 0.30 , 0.30 ] , [ 0.10 , 0.10 ] [ 0.60 , 0.60 ] , [ 0.25 , 0.25 ] [ 0.80 , 0.80 ] , [ 0.20 , 0.20 ]
T ˙ 2 [ 0.20 , 0.20 ] , [ 0.15 , 0.15 ] [ 0.68 , 0.68 ] , [ 0.20 , 0.20 ] [ 0.45 , 0.45 ] , [ 0.50 , 0.50 ]
T ˙ 3 [ 0.20 , 0.20 ] , [ 0.45 , 0.45 ] [ 0.70 , 0.70 ] , [ 0.05 , 0.05 ] [ 0.60 , 0.60 ] , [ 0.30 , 0.30 ]
Table 14. The weighted decision matrix ( D ˙ ).
U ˙ 1 U ˙ 2 U ˙ 3
T ˙ 1 [ 0.08 , 0.08 ] , [ 0.33 , 0.33 ] [ 0.21 , 0.21 ] , [ 0.55 , 0.55 ] [ 0.24 , 0.24 ] , [ 0.72 , 0.72 ]
T ˙ 2 [ 0.05 , 0.05 ] , [ 0.36 , 0.36 ] [ 0.24 , 0.24 ] , [ 0.52 , 0.52 ] [ 0.14 , 0.14 ] , [ 0.83 , 0.83 ]
T ˙ 3 [ 0.05 , 0.05 ] , [ 0.59 , 0.59 ] [ 0.25 , 0.25 ] , [ 0.43 , 0.43 ] [ 0.18 , 0.18 ] , [ 0.76 , 0.76 ]
Table 15. The score matrix ( V D ˙ ).
U ˙ 1 U ˙ 2 U ˙ 3
T ˙ 1 0.1707 0.2750 0.4440
T ˙ 2 0.2647 0.2056 0.6765
T ˙ 3 0.5305 0.0825 0.5543
Table 16. The decision matrix ( B ˙ ).
U ˙ 1 U ˙ 2 U ˙ 3
T ˙ 1 [ 0.40 , 0.40 ] , [ 0.50 , 0.50 ] [ 0.40 , 0.40 ] , [ 0.50 , 0.50 ] [ 0.40 , 0.40 ] , [ 0.50 , 0.50 ]
T ˙ 2 [ 0.40 , 0.40 ] , [ 0.50 , 0.50 ] [ 0.40 , 0.40 ] , [ 0.50 , 0.50 ] [ 0.40 , 0.40 ] , [ 0.50 , 0.50 ]
T ˙ 3 [ 0.40 , 0.40 ] , [ 0.50 , 0.50 ] [ 0.40 , 0.40 ] , [ 0.50 , 0.50 ] [ 0.40 , 0.40 ] , [ 0.50 , 0.50 ]
Table 17. The weighted decision matrix ( D ˙ ).
U ˙ 1 U ˙ 2 U ˙ 3
T ˙ 1 [ 0 , 0 ] , [ 0.50 , 0.50 ] [ 0 , 0 ] , [ 0.50 , 0.50 ] [ 0 , 0 ] , [ 0.50 , 0.50 ]
T ˙ 2 [ 0 , 0 ] , [ 0.50 , 0.50 ] [ 0 , 0 ] , [ 0.50 , 0.50 ] [ 0 , 0 ] , [ 0.50 , 0.50 ]
T ˙ 3 [ 0 , 0 ] , [ 0.50 , 0.50 ] [ 0 , 0 ] , [ 0.50 , 0.50 ] [ 0 , 0 ] , [ 0.50 , 0.50 ]
Table 18. The score matrix ( V D ˙ ).
U ˙ 1 U ˙ 2 U ˙ 3
T ˙ 1 0.6373 0.6373 0.6373
T ˙ 2 0.6373 0.6373 0.6373
T ˙ 3 0.6373 0.6373 0.6373
Table 19. The decision matrix ( B ˙ ).
U ˙ 1 U ˙ 2 U ˙ 3
T ˙ 1 [ 0.75 , 0.75 ] , [ 0.10 , 0.10 ] [ 0.60 , 0.60 ] , [ 0.25 , 0.25 ] [ 0.80 , 0.80 ] , [ 0.20 , 0.20 ]
T ˙ 2 [ 0.80 , 0.80 ] , [ 0.15 , 0.15 ] [ 0.68 , 0.68 ] , [ 0.20 , 0.20 ] [ 0.45 , 0.45 ] , [ 0.50 , 0.50 ]
T ˙ 3 [ 0.30 , 0.30 ] , [ 0.45 , 0.45 ] [ 0.70 , 0.70 ] , [ 0.05 , 0.05 ] [ 0.60 , 0.60 ] , [ 0.30 , 0.30 ]
Table 20. The weighted decision matrix ( D ˙ ).
U ˙ 1 U ˙ 2 U ˙ 3
T ˙ 1 [ 0.19 , 0.19 ] , [ 0.33 , 0.33 ] [ 0.21 , 0.21 ] , [ 0.55 , 0.55 ] [ 0.24 , 0.24 ] , [ 0.72 , 0.72 ]
T ˙ 2 [ 0.20 , 0.20 ] , [ 0.36 , 0.36 ] [ 0.24 , 0.24 ] , [ 0.52 , 0.52 ] [ 0.14 , 0.14 ] , [ 0.83 , 0.83 ]
T ˙ 3 [ 0.08 , 0.08 ] , [ 0.59 , 0.59 ] [ 0.25 , 0.25 ] , [ 0.43 , 0.43 ] [ 0.18 , 0.18 ] , [ 0.76 , 0.76 ]
Table 21. The score matrix ( V D ˙ ).
U ˙ 1 U ˙ 2 U ˙ 3
T ˙ 1 0.0194 0.2750 0.4440
T ˙ 2 0.0474 0.2056 0.6765
T ˙ 3 0.4835 0.0825 0.5543
Table 22. The decision matrix ( B ˙ ).
U ˙ 1 U ˙ 2 U ˙ 3
T ˙ 1 [ 0.30 , 0.50 ] , [ 0.10 , 0.20 ] [ 0.20 , 0.30 ] , [ 0.10 , 0.50 ] [ 0.20 , 0.30 ] , [ 0 , 0.10 ]
T ˙ 2 [ 0.03 , 0.21 ] , [ 0.35 , 0.65 ] [ 0.05 , 0.30 ] , [ 0 , 0.10 ] [ 0.26 , 0.52 ] , [ 0.36 , 0.37 ]
T ˙ 3 [ 0.09 , 0.27 ] , [ 0.53 , 0.62 ] [ 0.41 , 0.47 ] , [ 0.26 , 0.47 ] [ 0.19 , 0.23 ] , [ 0 , 0.31 ]
Table 23. The weighted decision matrix ( D ˙ ).
U ˙ 1 U ˙ 2 U ˙ 3
T ˙ 1 [ 0.04 , 0.10 ] , [ 0.46 , 0.52 ] [ 0.06 , 0.15 ] , [ 0.28 , 0.60 ] [ 0.10 , 0.21 ] , [ 0.10 , 0.19 ]
T ˙ 2 [ 0 , 0.04 ] , [ 0.61 , 0.79 ] [ 0.02 , 0.15 ] , [ 0.20 , 0.28 ] [ 0.13 , 0.36 ] , [ 0.42 , 0.43 ]
T ˙ 3 [ 0.01 , 0.05 ] , [ 0.72 , 0.77 ] [ 0.12 , 0.24 ] , [ 0.41 , 0.58 ] [ 0.10 , 0.16 ] , [ 0.10 , 0.38 ]
Table 24. The score matrix ( V D ˙ ).
U ˙ 1 U ˙ 2 U ˙ 3
T ˙ 1 0.3892 0.2730 0.2006
T ˙ 2 0.7404 0.0709 0.0911
T ˙ 3 0.7324 0.2453 0.0396
Table 25. The decision matrix ( B ˙ ).
U ˙ 1 U ˙ 2 U ˙ 3
T ˙ 1 [ 0.40 , 0.50 ] , [ 0.30 , 0.40 ] [ 0.40 , 0.60 ] , [ 0.20 , 0.40 ] [ 0.10 , 0.30 ] , [ 0.50 , 0.60 ]
T ˙ 2 [ 0.60 , 0.70 ] , [ 0.20 , 0.30 ] [ 0.60 , 0.70 ] , [ 0.20 , 0.30 ] [ 0.40 , 0.70 ] , [ 0.10 , 0.20 ]
T ˙ 3 [ 0.30 , 0.60 ] , [ 0.30 , 0.40 ] [ 0.50 , 0.60 ] , [ 0.30 , 0.40 ] [ 0.50 , 0.60 ] , [ 0.10 , 0.30 ]
T ˙ 4 [ 0.70 , 0.80 ] , [ 0.10 , 0.20 ] [ 0.60 , 0.70 ] , [ 0.10 , 0.30 ] [ 0.30 , 0.40 ] , [ 0.10 , 0.20 ]
Table 26. The weighted decision matrix ( D ˙ ).
U ˙ 1 U ˙ 2 U ˙ 3
T ˙ 1 [ 0 , 0 ] , [ 0.30 , 0.40 ] [ 0 , 0 ] , [ 0.20 , 0.40 ] [ 0 , 0 ] , [ 0.50 , 0.60 ]
T ˙ 2 [ 0 , 0 ] , [ 0.20 , 0.30 ] [ 0 , 0 ] , [ 0.20 , 0.30 ] [ 0 , 0 ] , [ 0.10 , 0.20 ]
T ˙ 3 [ 0 , 0 ] , [ 0.30 , 0.40 ] [ 0 , 0 ] , [ 0.30 , 0.40 ] [ 0 , 0 ] , [ 0.10 , 0.30 ]
T ˙ 4 [ 0 , 0 ] , [ 0.10 , 0.20 ] [ 0 , 0 ] , [ 0.10 , 0.30 ] [ 0 , 0 ] , [ 0.10 , 0.20 ]
Table 27. The score matrix ( V D ˙ ).
U ˙ 1 U ˙ 2 U ˙ 3
T ˙ 1 0.5042 0.4513 0.6774
T ˙ 2 0.4043 0.4043 0.2876
T ˙ 3 0.5042 0.5042 0.3405
T ˙ 4 0.2876 0.3405 0.2876
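The ranking step described in the abstract can be sketched from the tables above: each alternative's row of the score matrix is compared with the positive ideal alternative via a p-norm, and alternatives are ranked by that p-distance. The sketch below is an assumption-laden illustration, not the paper's exact algorithm: the construction of the ideal row (taken here as the column-wise minimum, since T ˙ 4, which tops the preference order in Table 28, has the smallest scores in Table 27 as printed), the choice p = 2, and the "smaller distance is better" convention are all inferred from Tables 27 and 28 rather than from the paper's formulas.

```python
def rank_by_p_distance(score_matrix, p=2):
    """Rank alternatives by the p-distance of each score row from the ideal.

    Hypothetical sketch: the ideal row is the column-wise minimum, matching
    Tables 27 and 28 as printed. The paper's GSF and its sign conventions
    are not reproduced in this excerpt.
    """
    # Positive ideal alternative: assumed best (here, smallest) score per attribute.
    ideal = [min(col) for col in zip(*score_matrix)]
    # p-distance of each alternative's score row from the ideal row.
    dists = [sum(abs(v - u) ** p for v, u in zip(row, ideal)) ** (1 / p)
             for row in score_matrix]
    # Smaller p-distance to the ideal = better alternative.
    return sorted(range(len(score_matrix)), key=lambda i: dists[i])

# Score matrix from Table 27 (rows = T1..T4, columns = U1..U3), as printed.
V = [[0.5042, 0.4513, 0.6774],
     [0.4043, 0.4043, 0.2876],
     [0.5042, 0.5042, 0.3405],
     [0.2876, 0.3405, 0.2876]]
order = rank_by_p_distance(V)        # 0-based row indices, best first
print(["T" + str(i + 1) for i in order])
```

Under these assumptions the sketch reproduces the preference order T ˙ 4 ≻ T ˙ 2 ≻ T ˙ 3 ≻ T ˙ 1 reported by the proposed approach in Table 28.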
Table 28. Comparisons with existing approaches.
S. No.   Approach                P r O
1.       Garg [4]                T ˙ 4 ≻ T ˙ 2 ≻ T ˙ 3 ≻ T ˙ 1
2.       Kumar and Kumar [6]     T ˙ 4 ≻ T ˙ 2 ≻ T ˙ 3 ≻ T ˙ 1
3.       Kumar and Chen [22]     cannot evaluate
4.       Patra [21]              T ˙ 4 ≻ T ˙ 2 ≻ T ˙ 3 ≻ T ˙ 1
5.       Chen and Hsu [25]       T ˙ 4 ≻ T ˙ 2 ≻ T ˙ 3 ≻ T ˙ 1
6.       Proposed approach       T ˙ 4 ≻ T ˙ 2 ≻ T ˙ 3 ≻ T ˙ 1

Kumar, S.; Mondal, S.R.; Tyagi, R. A Novel p-Norm-Based Ranking Algorithm for Multiple-Attribute Decision Making Using Interval-Valued Intuitionistic Fuzzy Sets and Its Applications. Axioms 2025, 14, 722. https://doi.org/10.3390/axioms14100722