
Mathematics 2018, 6(12), 280; https://doi.org/10.3390/math6120280

Article
Algorithm for Probabilistic Dual Hesitant Fuzzy Multi-Criteria Decision-Making Based on Aggregation Operators with New Distance Measures
School of Mathematics, Thapar Institute of Engineering & Technology (Deemed University) Patiala, Punjab 147004, India
\* Author to whom correspondence should be addressed.
Received: 2 November 2018 / Accepted: 21 November 2018 / Published: 25 November 2018

## Abstract

The probabilistic dual hesitant fuzzy set (PDHFS) is an enhanced version of the dual hesitant fuzzy set (DHFS) in which each hesitant membership and non-membership value is considered along with its probability of occurrence. These assigned probabilities give more detail about the level of agreeness or disagreeness. By emphasizing the advantages of PDHFSs and of aggregation operators, in this manuscript we propose several weighted and ordered weighted averaging and geometric aggregation operators based on Einstein norm operations, where the preferences related to each object are given in terms of probabilistic dual hesitant fuzzy elements. Several desirable properties and relations are also investigated in detail. In addition, we propose two distance measures and a maximum deviation method based on them to compute the weight vector of the different criteria. Finally, a multi-criteria group decision-making approach is constructed based on the proposed operators, and the presented algorithm is illustrated with the help of a numerical example. The reliability of the presented decision-making method is explored with the help of testing criteria and by comparing the results of the example with several prevailing studies.
Keywords:
probabilistic dual hesitant fuzzy sets; distance measures; aggregation operators; consumer behavior; multi-criteria decision-making; maximum deviation method

## 1. Introduction

With growing advancements in the economic, socio-cultural, and technical aspects of the world, uncertainty has come to play a dominant part in decision-making (DM) processes. DM problems are becoming more and more complex, and the data available for their evaluation increasingly contain uncertain, unprocessed pieces of information [1,2]. Such data lead to inaccurate results and increase the risks manifold. To decrease these risks and reach accurate results, decision-making has attracted the attention of a large number of researchers. In complex decision-making systems, large costs and computational efforts are often required to gather the information and evaluate it into accurate results. In such situations, the major aim of the decision makers remains to decrease the computational overhead and to reach the desired objective in less time.
From time to time, DM techniques are framed that capture uncertain information efficiently and calculate results in a manner that complies easily with real-life situations. From crisp set theory, the analysis shifted towards fuzzy sets (FSs), and Atanassov [3] further extended the FS theory given by Zadeh [4] to intuitionistic FSs (IFSs) by acknowledging the measure of disagreeness along with the measure of agreeness. Afterward, Atanassov and Gargov [5] extended IFSs to interval-valued intuitionistic fuzzy sets (IVIFSs), which contain the degrees of agreeness and disagreeness as interval values instead of single numbers. Since different attributes commonly play a vital part in the selection of the best alternative among the available ones, suitable aggregation operators to evaluate the data must be chosen carefully by the experts to address the nature of the DM problem. In these approaches, preferences are given as truth and falsity membership values in the form of crisp or interval numbers such that the corresponding degrees sum to less than or equal to one. In the above-stated environments, various researchers have constructed methodologies for solving DM problems, focusing on information measures, aggregation operators, etc. For instance, Xu [6] presented some weighted averaging aggregation operators (AOs) for intuitionistic fuzzy numbers (IFNs). Wang et al. [7] presented some AOs to aggregate interval-valued intuitionistic fuzzy (IVIF) numbers (IVIFNs). Garg [8,9] presented some improved interactive AOs for IFNs. Wang and Liu [10] gave interval-valued intuitionistic fuzzy hybrid weighted AOs based on Einstein operations. Wang and Liu [11] presented some hybrid weighted AOs using Einstein norm operations. Garg [12] presented generalized AOs using Einstein norm operations for Pythagorean fuzzy sets.
Garg and Kumar [13] presented some new similarity measures for IVIFNs based on the connection number of set pair analysis theory. Apart from these, a comprehensive overview of the different approaches under IFSs and/or IVIFSs to solve MCDM problems is summarized in [14,15,16,17,18,19,20,21,22,23,24]. In the above theories, it is difficult to capture cases where the preferences related to different objects are given in the form of multiple possible membership entities. To handle this, Torra [25] came up with the idea of hesitant fuzzy sets (HFSs). Zhu et al. [26] enhanced them to dual hesitant fuzzy sets (DHFSs) by assigning the possible non-membership values the same importance as the possible membership values in HFSs. In the field of AOs, Xia and Xu [27] established different operators to aggregate their values. Garg and Arora [28] presented some AOs under the dual hesitant fuzzy soft set environment and applied them to solve MCDM problems. Wei and Zhao [29] presented some induced hesitant AOs for IVIFNs. Apart from these, other kinds of algorithms for solving decision-making problems have been investigated by the authors of [30,31,32,33,34,35,36,37,38] under hesitant fuzzy environments.
Although these approaches are able to capture uncertainties in an efficient way, they are unable to model situations in which the refusal of an expert to provide a decision plays a dominant role. For example, suppose a panel of 6 experts is approached to select the best candidate during a recruitment process and 2 of them refuse to provide any decision. When evaluating the informational data using the existing approaches, the number of decision makers is considered to be 4 instead of 6; that is, the refusing experts are completely ignored and the decision is framed using the preferences given by the 4 decision-providing experts only. This causes a significant loss of information and may lead to inadequate results. In order to address such refusal-oriented cases, Zhu and Xu [39] corroborated probabilistic hesitant fuzzy sets (PHFSs). Wu et al. [40] gave the notion of AOs on interval-valued PHFSs (IVPHFSs), whereas Zhang et al. [41] worked on preference relations based on IVPHFSs and assessed the findings by applying them to real-life decision scenarios. Hao et al. [42] corroborated the concept of PDHFSs. Later on, Li et al. [43] presented the concept of dominance degrees and a DM approach based on the best-worst method under PHFSs. Li and Wang [44] presented a comprehensive way to address vague and uncertain information. Lin and Xu [45] determined various probabilistic linguistic distance measures. Apart from them, several researchers [46,47,48,49,50,51,52] have shown a keen interest in applying probabilistic hesitant fuzzy set environments to different decision-making approaches. Based on these existing studies, the primary motivation of this paper is summarized as below:
(i)
In the existing DHFSs, every membership value has equal probability. For instance, suppose a person has to buy a commodity X, and he is confused whether he is $10\%$ or $20\%$ sure about buying it, and is uncertain between $30\%$ and $40\%$ about not buying it. Under the DHFS environment, this information is captured as $\{0.10, 0.20\}, \{0.30, 0.40\}$. Here, in the dual hesitant fuzzy set, each hesitant value is assumed to have probability $0.5$; since this probability is the same throughout, it is omitted in DHFSs. But suppose the buyer is more confident about the $10\%$ agreeness than the $20\%$ one; say, he is certain that his agreeness towards buying the commodity favors $10\%$ with certainty $70\%$ and $20\%$ with certainty $30\%$, and similarly, for the non-membership case, he favors the $40\%$ rejection level with certainty $60\%$ and the $30\%$ rejection level with certainty $40\%$. The probabilistic dual hesitant fuzzy set is then formulated as $\{0.10|0.70, 0.20|0.30\}, \{0.30|0.4, 0.40|0.6\}$. So, to address such cases, in which even one hesitant value has some preference over another, the PDHFS acts as an efficient modeling tool.
(ii)
In multi-expert DM problems, conflicts often arise among the preferences given by different experts. These issues can easily be resolved using PDHFSs. For example, let A and B be two experts giving their opinions about buying a commodity X. Suppose the opinion provided by A is noted in the form of a DHFS as $\{0.20, 0.30\}, \{0.10, 0.15\}$ and, similarly, B gives the opinion $\{0.20, 0.25\}, \{0.10\}$. Now, both experts are providing different opinions regarding the same commodity X; this is a common problem in real-life DM scenarios. To address this case, the information is combined into a PDHFS by analyzing the probabilities of the decisions given by both experts. The PDHFS thus formed is $\left\{0.20\big|\tfrac{0.5+0.5}{2},\, 0.30\big|\tfrac{0.5}{2},\, 0.25\big|\tfrac{0.5}{2}\right\}$, $\left\{0.10\big|\tfrac{0.5+1}{2},\, 0.15\big|\tfrac{0.5}{2}\right\}$. In simple form, it is $\{0.20|0.5, 0.30|0.25, 0.25|0.25\}$, $\{0.10|0.75, 0.15|0.25\}$. Thus, this paper is motivated by the need to capture the more favorable values among the hesitant values.
(iii)
The existing decision-making approaches based on the DHFS environment are numerically more complex and time-consuming because membership (non-membership) values must be repeated to match the length of one set to another. This manuscript is motivated by the goal of reducing this data redundancy and making the DM approach more time-efficient.
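The expert-merging computation from motivation (ii) above can be sketched in code. The following is a minimal Python illustration (not part of the original formulation): the dict-from-value-to-probability representation and both function names are our own choices for exposition.

```python
def dhfs_to_probabilistic(values):
    """Each of one expert's hesitant values gets equal probability."""
    return {v: 1.0 / len(values) for v in values}

def merge_experts(comp_a, comp_b):
    """Average the two experts' probabilities for each hesitant value."""
    return {v: (comp_a.get(v, 0.0) + comp_b.get(v, 0.0)) / 2
            for v in set(comp_a) | set(comp_b)}

# Expert A: {0.20, 0.30}, {0.10, 0.15}; Expert B: {0.20, 0.25}, {0.10}
h = merge_experts(dhfs_to_probabilistic([0.20, 0.30]),
                  dhfs_to_probabilistic([0.20, 0.25]))
g = merge_experts(dhfs_to_probabilistic([0.10, 0.15]),
                  dhfs_to_probabilistic([0.10]))
# h -> {0.20: 0.5, 0.30: 0.25, 0.25: 0.25}; g -> {0.10: 0.75, 0.15: 0.25}
```

The merged probabilities reproduce the PDHFS of the example, and each component's probabilities still sum to one.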
Motivated by the aforementioned shortcomings in the existing approaches, this paper focuses on eradicating them by developing a series of AOs. In order to do so, the supreme objectives are listed below:
(i)
To consider the PDHFS environment to capture the information.
(ii)
To propose two novel distance measures on PDHFSs.
(iii)
To capture the weighted information regarding the criteria from the available information by solving a non-linear mathematical model.
(iv)
To develop average and geometric Einstein AOs based on the PDHFS environment.
(v)
To propose a DM approach relying on the developed operators.
(vi)
To check numerical applicability of the approach to a real-life case and to compare the outcomes with prevailing approaches.
To achieve the first objective and to provide more degrees of freedom to practitioners, in this article we consider the PDHFS environment to extract data. For the second objective, two distance measures are proposed: one in which the sizes of the two PDHFSs must be the same, and a second in which the sizes may vary. For the third objective, a non-linear model is solved to capture the weighted information. For the fourth objective, averaging and geometric Einstein AOs are proposed. To attain the fifth and sixth objectives, a real-life case study is conducted and its comparative analysis with the prevailing environments is carried out.
The rest of this paper is organized as follows: Section 2 highlights the basic definitions related to DHFSs, PHFSs, and PDHFSs. Section 3 introduces the two distance measures for PDHFSs along with their desirable properties. Section 4 introduces some Einstein operational laws on PDHFSs with the investigation of some properties. In Section 5, some averaging and geometric weighted Einstein AOs are proposed. A non-linear programming model for weights determination is elicited in Section 6. In Section 7, an approach is constructed to address the DM problems and includes the real-life marketing problem including a comparative analysis with the existing ones. Finally, concluding remarks are given in Section 8.

## 2. Preliminaries

This section reviews the basic definitions regarding DHFSs, PHFSs and PDHFSs.
Definition 1.
On the universal set X, Zhu et al. [26] defined the dual hesitant fuzzy set as:
$α = x , h ( x ) , g ( x ) ∣ x ∈ X$
where the sets $h ( x )$ and $g ( x )$ have values in $[ 0 , 1 ]$, which signify the possible membership and non-membership degrees for $x ∈ X$. Also,
$0 ≤ γ , η ≤ 1 ; 0 ≤ γ + + η + ≤ 1$
in which, $γ ∈ h ( x ) ;$ $η ∈ g ( x )$; $γ + ∈ h + ( x ) = ⋃ γ ∈ h ( x ) max { γ }$ and $η + ∈ g + ( x ) = ⋃ η ∈ g ( x ) max { η }$
Definition 2.
Let X be a reference set, then a probabilistic hesitant fuzzy set (PHFS) [39] P on X is given as
$P = { 〈 x , h x ( p x ) 〉 ∣ x ∈ X }$
Here, the set $h x$ contains several values in $[ 0 , 1 ]$, described by the probability distribution $p x$, and denotes the possible membership degrees of x in X. For simplicity, $h x ( p x )$ is called a probabilistic hesitant fuzzy element (PHFE), denoted by $h ( p )$ and given as
$h ( p ) = { γ i ( p i ) ∣ i = 1 , 2 , … , # H } ,$
where $p i$, satisfying $∑ i = 1 # H p i ≤ 1$, is the probability of the possible value $γ i$, and $# H$ is the number of all $γ i ( p i )$.
Definition 3
([49]). A probabilistic dual hesitant fuzzy set (PDHFS) on X is defined as:
$α = x , h ( x ) | p ( x ) , g ( x ) | q ( x ) ∣ x ∈ X$
Here, the sets $h ( x ) | p ( x )$ and $g ( x ) | q ( x )$ contain the possible elements, where $h ( x )$ and $g ( x )$ represent the hesitant fuzzy membership and non-membership degrees of $x ∈ X$, respectively. Also, $p ( x )$ and $q ( x )$ are their associated probabilistic information. Moreover,
$0 ≤ γ , η ≤ 1 ; 0 ≤ γ + + η + ≤ 1$
and
$p i ∈ [ 0 , 1 ] , q j ∈ [ 0 , 1 ] , ∑ i = 1 # h p i = 1 , ∑ j = 1 # g q j = 1$
where $γ ∈ h ( x ) ; η ∈ g ( x ) ; γ + ∈ h + ( x ) = ⋃ γ ∈ h ( x ) max { γ } ; η + ∈ g + ( x ) = ⋃ η ∈ g ( x ) max { η }$. The symbols $# h$ and $# g$ denote the total numbers of values in $h ( x ) | p ( x )$ and $g ( x ) | q ( x )$, respectively. For the sake of convenience, we shall denote it as $h | p , g | q$ and call it a probabilistic dual hesitant fuzzy element (PDHFE).
Definition 4
([49]). For a PDHFE α, defined over a universal set X, the complement is defined as
$α c = ⋃ γ ∈ h , η ∈ g η | q η , γ | p γ , i f g ≠ ϕ a n d h ≠ ϕ ⋃ γ ∈ h 1 − γ , ϕ , i f g = ϕ a n d h ≠ ϕ ⋃ η ∈ g ϕ , 1 − η , i f h = ϕ a n d g ≠ ϕ$
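The three-case complement above can be sketched as a small Python function. This is purely illustrative; representing a PDHFE as a pair $(h, g)$ of (value, probability) lists is our assumption, not notation from the paper.

```python
def complement(alpha):
    """Complement per Definition 4: swap h and g when both are non-empty;
    otherwise replace each value v by 1 - v in the single non-empty
    component, leaving the other empty."""
    h, g = alpha
    if h and g:
        return g, h
    if h:
        return [(1 - v, p) for v, p in h], []
    return [], [(1 - v, q) for v, q in g]

full = complement(([(0.3, 1.0)], [(0.4, 1.0)]))    # components swapped
half = complement(([(0.3, 0.5), (0.6, 0.5)], []))  # values flipped to 1 - v
```

Note that the complement is involutive in the first case (applying it twice returns the original PDHFE), since it merely swaps the two components.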
Definition 5
([49]). Let $α = h | p , g | q$ be a PDHFE, then the score function is defined as:
$S ( α ) = ∑ i = 1 # h γ i · p i − ∑ j = 1 # g η j · q j$
where $# h$ and $# g$ are total numbers of elements in the components $h | p$ and $g | q$ respectively and $γ ∈ h$, $η ∈ g$. For two PDHFEs $α 1$ and $α 2$, if $S ( α 1 ) > S ( α 2 ) ,$ then the PDHFE $α 1$ is regarded more superior to $α 2$ and is denoted as $α 1 ≻ α 2$.
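The score function of Definition 5 is a direct probability-weighted difference, as the following Python sketch shows (the (value, probability) pair representation is our own; this is not the authors' code).

```python
def score(h_p, g_q):
    """S(alpha) = sum(gamma_i * p_i) - sum(eta_j * q_j) for a PDHFE whose
    components are lists of (value, probability) pairs."""
    return sum(v * p for v, p in h_p) - sum(v * q for v, q in g_q)

# The buyer's PDHFE from Section 1: {0.10|0.70, 0.20|0.30}, {0.30|0.4, 0.40|0.6}
s = score([(0.10, 0.70), (0.20, 0.30)], [(0.30, 0.4), (0.40, 0.6)])
# 0.13 - 0.36 = -0.23: the non-membership evidence outweighs the membership here
```

A PDHFE with a larger score is ranked as superior, so in this example the negative score signals overall disagreement with buying the commodity.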

## 3. Proposed Distance Measures for PDHFEs

In this section, we propose some measures to calculate the distance between two PDHFEs defined over a universal set $X = { x 1 , x 2 , … , x n }$. Throughout this paper, the main notations used are listed below:
| Notation | Meaning | Notation | Meaning |
| --- | --- | --- | --- |
| n | number of elements in the universal set | $N A$ | number of elements in $g A$ |
| $h A$ | hesitant membership values of set A | $p A$ | probability for hesitant membership of set A |
| $g A$ | hesitant non-membership values of set A | $q A$ | probability for hesitant non-membership of set A |
| $M A$ | number of elements in $h A$ | $ω$ | weight vector |
Let $A = x , h A i ( x ) | p A i ( x ) , g A j ( x ) | q A j ( x ) ∣ x ∈ X$ and $B = x , h B i ′ ( x ) | p B i ′ ( x ) , g B j ′ ( x ) | q B j ′ ( x ) ∣ x ∈ X$, where $i = 1 , 2 , … , M A$; $j = 1 , 2 , … , N A$; $i ′ = 1 , 2 , … , M B$ and $j ′ = 1 , 2 , … , N B$, be two PDHFSs. Also, let $M = max { M A , M B }$ and $N = max { N A , N B }$; then, for a real number $λ > 0$, we define the distance between A and B as:
$d 1 ( A , B ) = ∑ k = 1 n 1 n 1 M + N ∑ i = 1 M | γ A i ( x k ) p A i ( x k ) − γ B i ( x k ) p B i ( x k ) | λ + ∑ j = 1 N | η A j ( x k ) q A j ( x k ) − η B j ( x k ) q B j ( x k ) | λ 1 λ$
where $γ A i ∈ h A i$, $γ B i ∈ h B i ′$, $η A j ∈ g A j$, $η B j ∈ g B j ′$. It is noticeable that there may arise cases in which $M A ≠ M B$ as well as $N A ≠ N B .$ In such situations, to operate the distance $d 1 ,$ the lengths of these elements must be made equal. To achieve this under hesitant environments, the experts repeat the least or the greatest value among all the hesitant values in the smaller set until the lengths of both A and B become equal. In other words, if $M A > M B ,$ the smallest value in set $h B$ is repeated until $M B$ equals $M A$, and if $M A < M B ,$ the smallest value in set $h A$ is repeated until $M A$ equals $M B$. In place of the smallest values, the largest values may also be repeated; this choice depends entirely on the decision-maker's optimistic or pessimistic approach. An expert opting for the optimistic approach expects the highest membership values and thus repeats the largest values, whereas one following the pessimistic approach expects the least favoring values and repeats the smallest values until the same length is achieved. Sometimes, however, the lengths of A and B cannot be matched by increasing the number of elements; in such cases the distance $d 1$ can be inappropriate for data evaluation. To handle such cases, we propose another distance measure, $d 2 ,$ in which there is no need to repeat values to match the lengths of the elements under consideration. The distance $d 2$ is calculated as:
$d 2 ( A , B ) = ∑ k = 1 n 1 n 1 M A ∑ i = 1 M A γ A i ( x k ) p A i ( x k ) − 1 M B ∑ i ′ = 1 M B γ B i ′ ( x k ) p B i ′ ( x k ) λ 2 + 1 N A ∑ j = 1 N A η A j ( x k ) q A j ( x k ) − 1 N B ∑ j ′ = 1 N B η B j ′ ( x k ) q B j ′ ( x k ) λ 2 1 λ$
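Both distances can be sketched in Python. This is a minimal illustration and not the authors' code: we assume a PDHFS is a list with one element per $x_k$, each element a pair $(h, g)$ whose components are lists of (value, probability) tuples, and we read the outer $1/λ$-th root as applied per element of X. The `pad` helper implements the pessimistic (smallest value) or optimistic (largest value) length-matching described above.

```python
def pad(pairs, length, optimistic=False):
    """Repeat the smallest (pessimistic) or largest (optimistic) hesitant
    value, with its probability, until the component has `length` entries."""
    pick = max if optimistic else min
    filler = pick(pairs, key=lambda vp: vp[0])
    return sorted(pairs + [filler] * (length - len(pairs)))

def d1(A, B, lam=1):
    """Distance d1: requires equal component lengths, obtained by padding."""
    total = 0.0
    for (hA, gA), (hB, gB) in zip(A, B):
        M, N = max(len(hA), len(hB)), max(len(gA), len(gB))
        hA, hB = pad(hA, M), pad(hB, M)
        gA, gB = pad(gA, N), pad(gB, N)
        s = sum(abs(g1 * p1 - g2 * p2) ** lam
                for (g1, p1), (g2, p2) in zip(hA, hB))
        s += sum(abs(e1 * q1 - e2 * q2) ** lam
                 for (e1, q1), (e2, q2) in zip(gA, gB))
        total += (s / (M + N)) ** (1 / lam) / len(A)
    return total

def d2(A, B, lam=1):
    """Distance d2: compares probability-weighted component means, so the
    two PDHFSs need not have equal numbers of hesitant values."""
    total = 0.0
    for (hA, gA), (hB, gB) in zip(A, B):
        dh = abs(sum(v * p for v, p in hA) / len(hA)
                 - sum(v * p for v, p in hB) / len(hB)) ** lam / 2
        dg = abs(sum(v * q for v, q in gA) / len(gA)
                 - sum(v * q for v, q in gB) / len(gB)) ** lam / 2
        total += (dh + dg) ** (1 / lam) / len(A)
    return total

# Two single-element PDHFSs with mismatched component lengths
A = [([(0.2, 0.5), (0.3, 0.5)], [(0.1, 1.0)])]
B = [([(0.4, 1.0)], [(0.1, 0.5), (0.2, 0.5)])]
```

On this pair, `d1` pads pessimistically before comparing term by term, while `d2` needs no padding; both are zero for identical arguments and symmetric in A and B, in line with properties (P1)-(P3) below.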
The distance measures proposed above satisfy the axiomatic statement given below:
Theorem 1.
Let A and B be two PDHFSs, then the distance measure $d 1$ satisfies the following conditions:
(P1)
$0 ≤ d 1 ( A , B ) ≤ 1$;
(P2)
$d 1 ( A , B ) = d 1 ( B , A ) ;$
(P3)
$d 1 ( A , B ) = 0$ if $A = B ;$
(P4)
If $A ⊆ B ⊆ C ,$ then $d 1 ( A , B ) ≤ d 1 ( A , C )$ and $d 1 ( B , C ) ≤ d 1 ( A , C ) .$
Proof.
Let $X = { x 1 , x 2 , … , x n }$ be the universal set and A, B be two PDHFSs defined over X. Then for each $x k , k = 1 , 2 , … , n$, we have
(P1)
Since, $0 ≤ γ A i ( x k ) ≤ 1$ and $0 ≤ p A i ( x k ) ≤ 1 ,$ for all $i = 1 , 2 , … , M$, this implies that $0 ≤ γ A i ( x k ) p A i ( x k ) ≤ 1$ and $0 ≤ γ B i ( x k ) p B i ( x k ) ≤ 1$. Thus, for any $λ > 0 ,$ we have $0 ≤ γ A i ( x k ) p A i ( x k ) − γ B i ( x k ) p B i ( x k ) λ ≤ 1$. Further, $∑ i = 1 M 0 ≤ ∑ i = 1 M γ A i ( x k ) p A i ( x k ) − γ B i ( x k ) p B i ( x k ) λ ≤ ∑ i = 1 M 1$ which leads to $0 ≤ ∑ i = 1 M γ A i ( x k ) p A i ( x k ) − γ B i ( x k ) p B i ( x k ) λ ≤ M$. Similarly, for $j = 1 , 2 , … , N$, $0 ≤ ∑ j = 1 N η A j ( x k ) q A j ( x k ) − η B j ( x k ) q B j ( x k ) λ ≤ N$ which yields
$0 ≤ ∑ i = 1 M γ A i ( x k ) p A i ( x k ) − γ B i ( x k ) p B i ( x k ) λ + ∑ j = 1 N η A j ( x k ) q A j ( x k ) − η B j ( x k ) q B j ( x k ) λ ≤ M + N .$
Thus,
$0 ≤ ∑ k = 1 n 1 n 1 M + N ∑ i = 1 M | γ A i ( x k ) p A i ( x k ) − γ B i ( x k ) p B i ( x k ) | λ + ∑ j = 1 N | η A j ( x k ) q A j ( x k ) − η B j ( x k ) q B j ( x k ) | λ 1 λ ≤ 1 ,$
which clearly implies that $0 ≤ d 1 ( A , B ) ≤ 1 .$
(P2)
Since
$d 1 ( A , B ) = ∑ k = 1 n 1 n 1 M + N ∑ i = 1 M γ A i ( x k ) p A i ( x k ) − γ B i ( x k ) p B i ( x k ) λ + ∑ j = 1 N η A j ( x k ) q A j ( x k ) − η B j ( x k ) q B j ( x k ) λ 1 λ = ∑ k = 1 n 1 n 1 M + N ∑ i = 1 M γ B i ( x k ) p B i ( x k ) − γ A i ( x k ) p A i ( x k ) λ + ∑ j = 1 N η B j ( x k ) q B j ( x k ) − η A j ( x k ) q A j ( x k ) λ 1 λ = d 1 ( B , A )$
Hence, the distance measure $d 1$ possess a symmetric nature.
(P3)
For $A = B$, we have $γ A i ( x k ) = γ B i ( x k )$ and $p A i ( x k ) = p B i ( x k )$. Also, $η A j ( x k ) = η B j ( x k )$ and $q A j ( x k ) = q B j ( x k ) .$ Thus, we have $γ A i ( x k ) p A i ( x k ) − γ A i ( x k ) p A i ( x k ) λ = 0$ and $η A j ( x k ) q A j ( x k ) − η A j ( x k ) q A j ( x k ) λ = 0 .$ Hence, it implies that
$∑ k = 1 n 1 n 1 M + N ∑ i = 1 M γ A i ( x k ) p A i ( x k ) − γ B i ( x k ) p B i ( x k ) λ + ∑ j = 1 N η A j ( x k ) q A j ( x k ) − η B j ( x k ) q B j ( x k ) λ 1 λ = 0 ⇒ d 1 ( A , B ) = 0 .$
(P4)
Since $A ⊆ B ⊆ C ,$ we have $γ A i ( x k ) p A i ( x k ) ≤ γ B i ( x k ) p B i ( x k ) ≤ γ C i ( x k ) p C i ( x k )$ and $η A j ( x k ) q A j ( x k ) ≥ η B j ( x k ) q B j ( x k ) ≥ η C j ( x k ) q C j ( x k )$. Further, $| γ A i ( x k ) p A i ( x k ) − γ B i ( x k ) p B i ( x k ) | λ ≤ | γ A i ( x k ) p A i ( x k ) − γ C i ( x k ) p C i ( x k ) | λ$ and $| η A j ( x k ) q A j ( x k ) − η B j ( x k ) q B j ( x k ) | λ ≤ | η A j ( x k ) q A j ( x k ) − η C j ( x k ) q C j ( x k ) | λ$. Therefore, $d 1 ( A , B ) ≤ d 1 ( A , C )$ and $d 1 ( B , C ) ≤ d 1 ( A , C ) .$
Theorem 2.
Let A and B be two PDHFSs, then the distance measure $d 2$ satisfies the following conditions:
(P1)
$0 ≤ d 2 ( A , B ) ≤ 1$;
(P2)
$d 2 ( A , B ) = d 2 ( B , A ) ;$
(P3)
$d 2 ( A , B ) = 0$ if $A = B ;$
(P4)
If $A ⊆ B ⊆ C ,$ then $d 2 ( A , B ) ≤ d 2 ( A , C )$ and $d 2 ( B , C ) ≤ d 2 ( A , C ) .$
Proof.
The proof is similar to Theorem 1, so we omit it here. □

## 4. Einstein Aggregation Operational Laws for PDHFSs

In this section, we propose some operational laws for PDHFEs and investigate some of their associated properties.
Definition 6.
Let α, $α 1$ and $α 2$ be three PDHFEs such that $α = h | p h , g | q g$, $α 1 = h 1 | p h 1 , g 1 | q g 1$ and $α 2 = h 2 | p h 2 , g 2 | q g 2$. Then, for $λ > 0$, we define the Einstein operational laws for them as follows:
(i)
$α 1 ⊕ α 2 = ⋃ γ 1 ∈ h 1 , η 1 ∈ g 1 γ 2 ∈ h 2 , η 2 ∈ g 2$$γ 1 + γ 2 1 + γ 1 γ 2 | p γ 1 p γ 2 , η 1 η 2 1 + ( 1 − η 1 ) ( 1 − η 2 ) | q η 1 q η 2$;
(ii)
$α 1 ⊗ α 2 = ⋃ γ 1 ∈ h 1 , η 1 ∈ g 1 γ 2 ∈ h 2 , η 2 ∈ g 2$$γ 1 γ 2 1 + ( 1 − γ 1 ) ( 1 − γ 2 ) | p γ 1 p γ 2 , η 1 + η 2 1 + η 1 η 2 | q η 1 q η 2$;
(iii)
$λ α = ⋃ γ ∈ h , η ∈ g$$( 1 + γ ) λ − ( 1 − γ ) λ ( 1 + γ ) λ + ( 1 − γ ) λ | p γ , 2 ( η ) λ ( 2 − η ) λ + ( η ) λ | q η ;$
(iv)
$α λ = ⋃ γ ∈ h , η ∈ g$$2 ( γ ) λ ( 2 − γ ) λ + ( γ ) λ | p γ , ( 1 + η ) λ − ( 1 − η ) λ ( 1 + η ) λ + ( 1 − η ) λ | q η$
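Laws (i) and (iii) can be sketched in Python as follows. This is illustrative only; representing a PDHFE as a pair $(h, g)$ of (value, probability) lists is our assumption, not notation from the paper.

```python
def einstein_sum(a1, a2):
    """Law (i): Einstein sum. Membership values combine via the Einstein
    t-conorm, non-membership via the Einstein t-norm, and the attached
    probabilities multiply pairwise over all combinations."""
    (h1, g1), (h2, g2) = a1, a2
    h = [((x + y) / (1 + x * y), p * q) for x, p in h1 for y, q in h2]
    g = [(x * y / (1 + (1 - x) * (1 - y)), p * q)
         for x, p in g1 for y, q in g2]
    return h, g

def scalar_mult(lam, a):
    """Law (iii): Einstein scalar multiple lam * alpha."""
    h, g = a
    hm = [(((1 + x) ** lam - (1 - x) ** lam)
           / ((1 + x) ** lam + (1 - x) ** lam), p) for x, p in h]
    gm = [(2 * x ** lam / ((2 - x) ** lam + x ** lam), q) for x, q in g]
    return hm, gm

a1 = ([(0.3, 1.0)], [(0.4, 1.0)])
a2 = ([(0.5, 1.0)], [(0.2, 1.0)])
s = einstein_sum(a1, a2)
m = scalar_mult(1.0, a1)  # lam = 1 reproduces a1's values
```

With `lam = 1`, the scalar multiple reduces to the identity, since $((1+γ)-(1-γ))/((1+γ)+(1-γ)) = γ$ and $2η/((2-η)+η) = η$, which matches Theorem 3's claim that the results remain PDHFEs.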
Theorem 3.
For a real value $λ > 0$, the results of the operational laws for PDHFEs given in Definition 6, namely $α 1 ⊕ α 2$, $α 1 ⊗ α 2$, $λ α ,$ and $α λ ,$ are also PDHFEs.
Proof.
For two PDHFEs $α 1$ and $α 2 ,$ we have
$α 1 ⊕ α 2 = ⋃ γ 1 ∈ h 1 , η 1 ∈ g 1 γ 2 ∈ h 2 , η 2 ∈ g 2 γ 1 + γ 2 1 + γ 1 γ 2 | p γ 1 p γ 2 , η 1 η 2 1 + ( 1 − η 1 ) ( 1 − η 2 ) | q η 1 q η 2$
As $0 ≤ γ 1 , γ 2 , η 1 , η 2 ≤ 1$, it is evident that $0 ≤ γ 1 + γ 2 ≤ 2$ and $1 ≤ 1 + γ 1 γ 2 ≤ 2$, so it follows that $0 ≤ γ 1 + γ 2 1 + γ 1 γ 2 ≤ 1$. On the other hand, $0 ≤ η 1 η 2 ≤ 1$ and $1 ≤ 1 + ( 1 − η 1 ) ( 1 − η 2 ) ≤ 2 ,$ thus $0 ≤ η 1 η 2 1 + ( 1 − η 1 ) ( 1 − η 2 ) ≤ 1$. Also, since $0 ≤ p γ 1 , p γ 2 , q η 1 , q η 2 ≤ 1$, we have $0 ≤ p γ 1 p γ 2 ≤ 1$ and $0 ≤ q η 1 q η 2 ≤ 1$. Similarly, $α 1 ⊗ α 2$, $λ α$ and $α λ$ are also PDHFEs. □
Theorem 4.
Let $α 1 , α 2 , α 3$ be three PDHFEs and $λ > 0$ be a real number; then the following results hold:
(i)
$α 1 ⊕ α 2 = α 2 ⊕ α 1$;
(ii)
$α 1 ⊗ α 2 = α 2 ⊗ α 1$;
(iii)
$( α 1 ⊕ α 2 ) ⊕ α 3 = α 1 ⊕ ( α 2 ⊕ α 3 )$;
(iv)
$( α 1 ⊗ α 2 ) ⊗ α 3 = α 1 ⊗ ( α 2 ⊗ α 3 )$;
(v)
$λ ( α 1 ⊕ α 2 ) = λ α 1 ⊕ λ α 2$;
(vi)
$α 1 λ ⊗ α 2 λ = ( α 1 ⊗ α 2 ) λ$.
Proof.
Let $α 1 = h 1 | p h 1 , g 1 | q g 1 ,$ $α 2 = h 2 | p h 2 , g 2 | q g 2$, $α 3 = h 3 | p h 3 , g 3 | q g 3$ be three PDHFEs. Then, we have
(i)
For two PDHFEs $α 1$ and $α 2$, from Definition 6, we have
$α 1 ⊕ α 2 = ⋃ γ 1 ∈ h 1 , η 1 ∈ g 1 γ 2 ∈ h 2 , η 2 ∈ g 2 γ 1 + γ 2 1 + γ 1 γ 2 | p γ 1 p γ 2 , η 1 η 2 1 + ( 1 − η 1 ) ( 1 − η 2 ) | q η 1 q η 2 = ⋃ γ 1 ∈ h 1 , η 1 ∈ g 1 γ 2 ∈ h 2 , η 2 ∈ g 2 γ 2 + γ 1 1 + γ 2 γ 1 | p γ 2 p γ 1 , η 2 η 1 1 + ( 1 − η 2 ) ( 1 − η 1 ) | q η 2 q η 1 = α 2 ⊕ α 1$
(ii)
Proof is obvious so we omit it here.
(iii)
For three PDHFEs $α 1 , α 2$ and $α 3$, consider L.H.S. i.e.,
$α 1 ⊕ α 2 ⊕ α 3 = ⋃ γ 1 ∈ h 1 , η 1 ∈ g 1 γ 2 ∈ h 2 , η 2 ∈ g 2 γ 1 + γ 2 1 + γ 1 γ 2 | p γ 1 p γ 2 , η 1 η 2 1 + ( 1 − η 1 ) ( 1 − η 2 ) | q η 1 q η 2 ⊕ α 3$
$= ⋃ γ 1 ∈ h 1 , η 1 ∈ g 1 γ 2 ∈ h 2 , η 2 ∈ g 2 γ 3 ∈ h 3 , η 3 ∈ g 3 γ 1 + γ 2 + γ 3 + γ 1 γ 2 γ 3 1 + γ 1 γ 2 + γ 2 γ 3 + γ 3 γ 1 | p γ 1 p γ 2 p γ 3 , η 1 η 2 η 3 4 − 2 η 1 − 2 η 2 − 2 η 3 + η 1 η 2 + η 2 η 3 + η 1 η 3 | q η 1 q η 2 q η 3$
Also, on considering the R.H.S., we have
$α 1 ⊕ α 2 ⊕ α 3 = α 1 ⊕ ⋃ γ 2 ∈ h 2 , η 2 ∈ g 2 γ 3 ∈ h 3 , η 3 ∈ g 3 γ 2 + γ 3 1 + γ 2 γ 3 | p γ 2 p γ 3 , η 2 η 3 1 + ( 1 − η 2 ) ( 1 − η 3 ) | q η 2 q η 3$
$= ⋃ γ 1 ∈ h 1 , η 1 ∈ g 1 γ 2 ∈ h 2 , η 2 ∈ g 2 γ 3 ∈ h 3 , η 3 ∈ g 3 γ 1 + γ 2 + γ 3 + γ 1 γ 2 γ 3 1 + γ 1 γ 2 + γ 2 γ 3 + γ 3 γ 1 | p γ 1 p γ 2 p γ 3 , η 1 η 2 η 3 4 − 2 η 1 − 2 η 2 − 2 η 3 + η 1 η 2 + η 2 η 3 + η 1 η 3 | q η 1 q η 2 q η 3$
From Equations (11) and (12), the required result is obtained.
(iv)
Proof is obvious so we omit it here.
(v)
For $λ > 0 ,$ consider
$λ ( α 1 ⊕ α 2 ) = λ ⋃ γ 1 ∈ h 1 , η 1 ∈ g 1 γ 2 ∈ h 2 , η 2 ∈ g 2 ( 1 + γ 1 ) ( 1 + γ 2 ) − ( 1 − γ 1 ) ( 1 − γ 2 ) ( 1 + γ 1 ) ( 1 + γ 2 ) + ( 1 − γ 1 ) ( 1 − γ 2 ) | p γ 1 p γ 2 , 2 η 1 η 2 ( 2 − η 1 ) ( 2 − η 2 ) + η 1 η 2 | q η 1 q η 2$
For sake of convenience, put $( 1 + γ 1 ) ( 1 + γ 2 ) = a$; $( 1 − γ 1 ) ( 1 − γ 2 ) = b$; $η 1 η 2 = c$ and $( 2 − η 1 ) ( 2 − η 2 ) = d$. This implies
$λ ( α 1 ⊕ α 2 ) = λ ⋃ γ 1 ∈ h 1 , η 1 ∈ g 1 γ 2 ∈ h 2 , η 2 ∈ g 2 a − b a + b | p γ 1 p γ 2 , 2 c d + c | q η 1 q η 2 = ⋃ γ 1 ∈ h 1 , η 1 ∈ g 1 γ 2 ∈ h 2 , η 2 ∈ g 2 1 + a − b a + b λ − 1 − a − b a + b λ 1 + a − b a + b λ + 1 − a − b a + b λ | p γ 1 p γ 2 , 2 2 c d + c λ 2 − 2 c d + c λ + 2 c d + c λ | q η 1 q η 2 = ⋃ γ 1 ∈ h 1 , η 1 ∈ g 1 γ 2 ∈ h 2 , η 2 ∈ g 2 2 a a + b λ − 2 b a + b λ 2 a a + b λ + 2 b a + b λ | p γ 1 p γ 2 , 2 2 c d + c λ 2 d d + c λ + 2 c d + c λ | q η 1 q η 2 = ⋃ γ 1 ∈ h 1 , η 1 ∈ g 1 γ 2 ∈ h 2 , η 2 ∈ g 2 a λ − b λ a λ + b λ | p γ 1 p γ 2 , 2 c λ d λ + c λ | q η 1 q η 2$
Re-substituting a, $b ,$ c and d we have
$= ⋃ γ 1 ∈ h 1 , η 1 ∈ g 1 γ 2 ∈ h 2 , η 2 ∈ g 2 1 + γ 1 λ 1 + γ 2 λ − 1 − γ 1 λ 1 − γ 2 λ 1 + γ 1 λ 1 + γ 2 λ + 1 − γ 1 λ 1 − γ 2 λ | p γ 1 p γ 2 , 2 η 1 η 2 λ 2 − η 1 λ 2 − η 2 λ + η 1 η 2 | q η 1 q η 2 = λ α 1 ⊕ λ α 2$
(vi)
For $λ > 0 ,$
$α 1 ⊗ α 2 λ = ⋃ γ 1 ∈ h 1 , η 1 ∈ g 1 γ 2 ∈ h 2 , η 2 ∈ g 2 2 γ 1 γ 2 1 + ( 1 − γ 1 ) ( 1 − γ 2 ) | p γ 1 p γ 2 , 1 + η 1 1 + η 2 − 1 − η 1 1 − η 2 1 + η 1 1 + η 2 + 1 − η 1 1 − η 2 | q η 1 q η 2 λ$
For sake of convenience, put
$γ 1 γ 2 = a ; 2 − γ 1 2 − γ 2 = b ; 1 + η 1 1 + η 2 = c and 1 − η 1 1 − η 2 = d$
So we obtain
$α 1 ⊗ α 2 λ = ⋃ γ 1 ∈ h 1 , η 1 ∈ g 1 γ 2 ∈ h 2 , η 2 ∈ g 2 2 a b + a | p γ 1 p γ 2 , c − d c + d | q η 1 q η 2 λ = ⋃ γ 1 ∈ h 1 , η 1 ∈ g 1 γ 2 ∈ h 2 , η 2 ∈ g 2 2 2 a b + a λ 2 − 2 a b + a λ + 2 a b + a λ | p γ 1 p γ 2 , 1 + c − d c + d λ − 1 − c − d c + d λ 1 + c − d c + d λ + 1 − c − d c + d λ | q η 1 q η 2 = ⋃ γ 1 ∈ h 1 , η 1 ∈ g 1 γ 2 ∈ h 2 , η 2 ∈ g 2 2 2 a b + a λ 2 b b + a λ + 2 a b + a λ | p γ 1 p γ 2 , 2 c c + d λ − 2 d c + d λ 2 c c + d λ + 2 d c + d λ | q η 1 q η 2 = ⋃ γ 1 ∈ h 1 , η 1 ∈ g 1 γ 2 ∈ h 2 , η 2 ∈ g 2 2 a λ b λ + a λ | p γ 1 p γ 2 , c λ − d λ c λ + d λ | q η 1 q η 2$
Re-substituting values of $a ,$ b, c and d we get
$= ⋃ γ 1 ∈ h 1 , η 1 ∈ g 1 γ 2 ∈ h 2 , η 2 ∈ g 2 2 γ 1 γ 2 λ 2 − γ 1 λ 2 − γ 2 λ + γ 1 γ 2 λ | p γ 1 p γ 2 , 1 + η 1 λ 1 + η 2 λ − 1 − η 1 λ 1 − η 2 λ 1 + η 1 λ 1 + η 2 λ + 1 − η 1 λ 1 − η 2 λ | q η 1 q η 2 = α 1 λ ⊗ α 2 λ$
Theorem 5.
Let $α = h | p h , g | q g$, $α 1 = h 1 | p h 1 , g 1 | q g 1 ,$ and $α 2 = h 2 | p h 2 , g 2 | q g 2$ be three PDHFEs, and let $λ > 0$ be a real number; then
(i)
$( α c ) λ = ( λ α ) c$;
(ii)
$λ ( α c ) = ( α λ ) c$;
(iii)
$α 1 c ⊕ α 2 c = ( α 1 ⊗ α 2 ) c$;
(iv)
$α 1 c ⊗ α 2 c = ( α 1 ⊕ α 2 ) c$.
Proof.
(i)
Let $α = h | p h , g | q g$ be a PDHFE, then using Definition 4, the proof for the three possible cases is given as:
(Case 1)
If $h ≠ ϕ$; $g ≠ ϕ$ then for a PDHFE $α = h | p h , g | q g$, from Equation (7) we have
$( α c ) λ = ⋃ γ ∈ h η ∈ g η | q η , γ | p γ λ = ⋃ γ ∈ h η ∈ g 2 ( η ) λ ( 2 − η ) λ + ( η ) λ | q η , ( 1 + γ ) λ − ( 1 − γ ) λ ( 1 + γ ) λ + ( 1 − γ ) λ | p γ = ⋃ γ ∈ h η ∈ g ( 1 + γ ) λ − ( 1 − γ ) λ ( 1 + γ ) λ + ( 1 − γ ) λ | p γ , 2 ( η ) λ ( 2 − η ) λ + ( η ) λ | q η c = λ ⋃ γ ∈ h η ∈ g γ | p γ , η | q η c = ( λ α ) c$
(Case 2)
If $g = ϕ , h ≠ ϕ$, then
$( α c ) λ = ⋃ γ ∈ h 1 − γ | p γ , ϕ λ = ⋃ γ ∈ h 2 ( 1 − γ ) λ ( 2 − ( 1 − γ ) ) λ + ( 1 − γ ) λ | p γ , ϕ = ( λ α ) c$
(Case 3)
If $h = ϕ , g ≠ ϕ$, then
$( α c ) λ = ⋃ η ∈ g ϕ , 1 − η | q η λ = ⋃ η ∈ g ϕ , ( 1 + ( 1 − η ) ) λ − ( 1 − ( 1 − η ) ) λ ( 1 + ( 1 − η ) ) λ + ( 1 − ( 1 − η ) ) λ | q η = ⋃ η ∈ g ( 2 − η ) λ − ( η ) λ ( 2 − η ) λ + ( η ) λ | q η , ϕ c = λ ⋃ η ∈ g ( 1 − η ) | q η , ϕ c = ( λ α ) c$
(ii)
Similar to above, so it is omitted.
(iii)
For two PDHFEs $α 1$, $α 2$ and a real number $λ > 0$, using Definitions 4 and 6 we have,
(Case 1)
If $h 1 ≠ ϕ , g 1 ≠ ϕ , h 2 ≠ ϕ$ and $g 2 ≠ ϕ$
$α 1 c ⊕ α 2 c = ⋃ γ 1 ∈ h 1 η 1 ∈ g 1 η 1 | q η 1 , γ 1 | p γ 1 ⊕ ⋃ γ 2 ∈ h 2 η 2 ∈ g 2 η 2 | q η 2 , γ 2 | p γ 2 = ⋃ γ 1 ∈ h 1 , η 1 ∈ g 1 γ 2 ∈ h 2 , η 2 ∈ g 2 η 1 + η 2 1 + η 1 η 2 | q η 1 q η 2 , γ 1 γ 2 1 + ( 1 − γ 1 ) ( 1 − γ 2 ) | p γ 1 p γ 2 = ⋃ γ 1 ∈ h 1 , η 1 ∈ g 1 γ 2 ∈ h 2 , η 2 ∈ g 2 γ 1 γ 2 1 + ( 1 − γ 1 ) ( 1 − γ 2 ) | p γ 1 p γ 2 , η 1 + η 2 1 + η 1 η 2 | q η 1 q η 2 c = ( α 1 ⊗ α 2 ) c$
(Case 2)
If $h 1 ≠ ϕ , g 1 = ϕ , h 2 ≠ ϕ$ and $g 2 = ϕ$, then
$α 1 c ⊕ α 2 c = ⋃ γ 1 ∈ h 1 , η 1 ∈ g 1 1 − γ 1 | p γ 1 , ϕ ⊕ ⋃ γ 2 ∈ h 2 , η 2 ∈ g 2 1 − γ 2 | p γ 2 , ϕ = ⋃ γ 1 ∈ h 1 , η 1 ∈ g 1 γ 2 ∈ h 2 , η 2 ∈ g 2 ( 1 − γ 1 ) + ( 1 − γ 2 ) 1 + ( 1 − γ 1 ) ( 1 − γ 2 ) | p γ 1 p γ 2 , ϕ = ( α 1 ⊗ α 2 ) c$
(Case 3)
If $h 1 = ϕ , g 1 ≠ ϕ , h 2 = ϕ , g 2 ≠ ϕ$
$α 1 c ⊕ α 2 c = ⋃ γ 1 ∈ h 1 η 1 ∈ g 1 ϕ , 1 − η 1 | q η 1 ⊕ ⋃ γ 2 ∈ h 2 η 2 ∈ g 2 ϕ , 1 − η 2 | q η 2 = ⋃ γ 1 ∈ h 1 , η 1 ∈ g 1 γ 2 ∈ h 2 , η 2 ∈ g 2 ϕ , ( 1 − η 1 ) ( 1 − η 2 ) 1 + η 1 η 2 | q η 1 q η 2 = ( α 1 ⊗ α 2 ) c$
(iv)
Similar, so we omit it here.

## 5. Probabilistic Dual Hesitant Weighted Einstein AOs

In this section, we define some weighted aggregation operators for a collection of PDHFEs by using the aforementioned laws. Throughout, let $Ω$ be the family of PDHFEs.
Definition 7.
Let $α i$ $( i = 1 , 2 , … , n )$ be a collection of PDHFEs with the corresponding weight vector $ω = ( ω 1 , ω 2 , … , ω n ) T$, such that $ω i > 0$ and $∑ i = 1 n ω i = 1$. If PDHFWEA: $Ω n → Ω$ is a mapping defined by
$PDHFWEA ( α 1 , α 2 , … , α n ) = ω 1 α 1 ⊕ ω 2 α 2 ⊕ … ⊕ ω n α n$
then, PDHFWEA is called probabilistic dual hesitant fuzzy weighted Einstein average operator.
Theorem 6.
For a family of PDHFEs $α i = h i | p h i , g i | q g i , ( i = 1 , 2 , … , n )$, the aggregated value obtained by using PDHFWEA operator is still a PDHFE and is given as
$PDHFWEA ( α 1 , α 2 , … , α n ) = ⋃ γ i ∈ h i η i ∈ g i ∏ i = 1 n ( 1 + γ i ) ω i − ∏ i = 1 n ( 1 − γ i ) ω i ∏ i = 1 n ( 1 + γ i ) ω i + ∏ i = 1 n ( 1 − γ i ) ω i | ∏ i = 1 n p γ i , 2 ∏ i = 1 n ( η i ) ω i ∏ i = 1 n ( 2 − η i ) ω i + ∏ i = 1 n ( η i ) ω i | ∏ i = 1 n q η i$
where $ω = ( ω 1 , ω 2 , … , ω n ) T$ is a weight vector such that $∑ i = 1 n ω i = 1$ where $0 < ω i < 1 .$
Proof.
We prove Equation (14) by mathematical induction on n, executed in the following steps:
(Step 1)
For $n = 2 ,$ we have $α 1 = h 1 | p h 1 , g 1 | q g 1$ and $α 2 = h 2 | p h 2 , g 2 | q g 2$. Using the operational laws on PDHFEs as stated in Definition 6, we get
$ω 1 α 1 = ⋃ γ 1 ∈ h 1 , η 1 ∈ g 1 ( 1 + γ 1 ) ω 1 − ( 1 − γ 1 ) ω 1 ( 1 + γ 1 ) ω 1 + ( 1 − γ 1 ) ω 1 | p γ 1 , 2 ( η 1 ) ω 1 ( 2 − η 1 ) ω 1 + ( η 1 ) ω 1 | q η 1 and ω 2 α 2 = ⋃ γ 2 ∈ h 2 , η 2 ∈ g 2 ( 1 + γ 2 ) ω 2 − ( 1 − γ 2 ) ω 2 ( 1 + γ 2 ) ω 2 + ( 1 − γ 2 ) ω 2 | p γ 2 , 2 ( η 2 ) ω 2 ( 2 − η 2 ) ω 2 + ( η 2 ) ω 2 | q η 2$
Hence, by addition of PDHFEs, we get
$PDHFWEA ( α 1 , α 2 ) = ω 1 α 1 ⊕ ω 2 α 2 = ⋃ γ 1 ∈ h 1 , η 1 ∈ g 1 γ 2 ∈ h 2 , η 2 ∈ g 2 ∏ i = 1 2 ( 1 + γ i ) ω i − ∏ i = 1 2 ( 1 − γ i ) ω i ∏ i = 1 2 ( 1 + γ i ) ω i + ∏ i = 1 2 ( 1 − γ i ) ω i | ∏ i = 1 2 p γ i , 2 ∏ i = 1 2 ( η i ) ω i ∏ i = 1 2 ( 2 − η i ) ω i + ∏ i = 1 2 ( η i ) ω i | ∏ i = 1 2 q η i$
Thus, the result holds for $n = 2 .$
(Step 2)
If Equation (14) holds for $n = k$, then for $n = k + 1$, we have
$PDHFWEA ( α 1 , α 2 , … , α k + 1 ) = ⨁ i = 1 k ω i α i ⊕ ( ω k + 1 α k + 1 ) = ⋃ γ i ∈ h i , η i ∈ g i ∏ i = 1 k ( 1 + γ i ) ω i − ∏ i = 1 k ( 1 − γ i ) ω i ∏ i = 1 k ( 1 + γ i ) ω i + ∏ i = 1 k ( 1 − γ i ) ω i | ∏ i = 1 k p γ i , 2 ∏ i = 1 k ( η i ) ω i ∏ i = 1 k ( 2 − η i ) ω i + ∏ i = 1 k ( η i ) ω i | ∏ i = 1 k q η i ⊕ ⋃ γ k + 1 ∈ h k + 1 , η k + 1 ∈ g k + 1 ( 1 + γ k + 1 ) ω k + 1 − ( 1 − γ k + 1 ) ω k + 1 ( 1 + γ k + 1 ) ω k + 1 + ( 1 − γ k + 1 ) ω k + 1 | p γ k + 1 , , 2 ( η k + 1 ) ω k + 1 ( 2 − η k + 1 ) ω k + 1 + ( η k + 1 ) ω k + 1 | q η k + 1 = ⋃ γ i ∈ h i , η i ∈ g i ∏ i = 1 k + 1 ( 1 + γ i ) ω i − ∏ i = 1 k + 1 ( 1 − γ i ) ω i ∏ i = 1 k + 1 ( 1 + γ i ) ω i + ∏ i = 1 k + 1 ( 1 − γ i ) ω i | ∏ i = 1 k + 1 p γ i , 2 ∏ i = 1 k + 1 ( η i ) ω i ∏ i = 1 k + 1 ( 2 − η i ) ω i + ∏ i = 1 k + 1 ( η i ) ω i | ∏ i = 1 k + 1 q η i$
Thus,
$PDHFWEA ( α 1 , α 2 , … , α n ) = ⋃ γ i ∈ h i η i ∈ g i ∏ i = 1 n ( 1 + γ i ) ω i − ∏ i = 1 n ( 1 − γ i ) ω i ∏ i = 1 n ( 1 + γ i ) ω i + ∏ i = 1 n ( 1 − γ i ) ω i | ∏ i = 1 n p γ i , 2 ∏ i = 1 n ( η i ) ω i ∏ i = 1 n ( 2 − η i ) ω i + ∏ i = 1 n ( η i ) ω i | ∏ i = 1 n q η i$
which completes the proof. □
Further, it is observed that the proposed PDHFWEA operator satisfies the properties of boundedness and monotonicity for a family of PDHFEs $α i$ $( i = 1 , 2 , … , n )$, which can be demonstrated as follows:
Property 1.
(Boundedness) For $α i = h i | p h i , g i | q g i$ where $i = ( 1 , 2 , … , n )$, let $α − = min ( h i ) | min ( p h i ) ,$ $max ( g i ) | max ( q g i ) = γ min | p min , η max | q max$ and
$α + = max ( h i ) | max ( p h i ) , min ( g i ) | min ( q g i ) = γ max | p max , η min | q min$ be PDHFEs, then $α − ≤ PDHFWEA ( α 1 , α 2 , … , α n ) ≤ α +$.
Proof.
Since each $α i$ is a PDHFE, it is obvious that $min ( h i ) ≤ h i ≤ max ( h i ) , min ( g i ) ≤ g i ≤ max ( g i ) , p min ≤ p i ≤ p max$ and $q min ≤ q i ≤ q max$. Let $f ( x ) = 1 − x 1 + x , x ∈ [ 0 , 1 ]$, $f ′ ( x ) = − 2 ( 1 + x ) 2 < 0$ i.e., $f ( x )$ is a decreasing function. Since $γ min ≤ γ i ≤ γ max$, for all i, then $f ( γ max ) ≤ f ( γ i ) ≤ f ( γ min )$ i.e., $1 − γ max 1 + γ max ≤ 1 − γ i 1 + γ i ≤ 1 − γ min 1 + γ min$. Let $ω = ( ω 1 , ω 2 , … , ω n ) T$ be the weight vector of $( α 1 , α 2 , … , α n )$ such that each $ω i ∈ ( 0 , 1 )$ and $∑ i = 1 n ω i = 1 ,$ then we have
$1 − γ max 1 + γ max ω i ≤ 1 − γ i 1 + γ i ω i ≤ 1 − γ min 1 + γ min ω i$
Thus, we get
$1 + 1 − γ max 1 + γ max ≤ 1 + ∏ i = 1 n 1 − γ i 1 + γ i ω i ≤ 1 + 1 − γ min 1 + γ min ⇒ 2 1 + γ max ≤ ∏ i = 1 n ( 1 + γ i ) ω i + ∏ i = 1 n ( 1 − γ i ) ω i ∏ i = 1 n ( 1 + γ i ) ω i ≤ 2 1 + γ min ⇒ γ min ≤ 1 − ∏ i = 1 n 1 − γ i 1 + γ i ω i 1 + ∏ i = 1 n 1 − γ i 1 + γ i ω i ≤ γ max ⇒ γ min ≤ ∏ i = 1 n ( 1 + γ i ) ω i − ∏ i = 1 n ( 1 − γ i ) ω i ∏ i = 1 n ( 1 + γ i ) ω i + ∏ i = 1 n ( 1 − γ i ) ω i ≤ γ max$
Hence, we obtain the required result for membership values.
Now, for non-membership, let $c ( y ) = 2 − y y$, $y ∈ ( 0 , 1 ]$, then $c ′ ( y ) < 0$ i.e., $c ( y )$ is the decreasing function. Since, $η min ≤ η i ≤ η max ,$ then for all i, we have $c ( η max ) ≤ c ( η i ) ≤ c ( η min )$, that is $2 − η max η max ≤ 2 − η i η i ≤ 2 − η min η min$. Let $ω = ( ω 1 , ω 2 , … , ω n ) T$ be the weight vector of $( α 1 , α 2 , … , α n )$ such that $ω i ∈ ( 0 , 1 )$ and $∑ i = 1 n ω i = 1 ,$ then
$2 − η max η max ω i ≤ 2 − η i η i ω i ≤ 2 − η min η min ω i Thus , ∏ i = 1 n 2 − η max η max ω i ≤ ∏ i = 1 n 2 − η i η i ω i ≤ ∏ i = 1 n 2 − η min η min ω i ⇒ 2 η min ≤ 1 1 + ∏ i = 1 n 2 − η i η i ω i ≤ 2 η max ⇒ η min ≤ 2 ∏ i = 1 n ( η i ) ω i ∏ i = 1 n ( η i ) ω i + ∏ i = 1 n ( 2 − η i ) ω i ≤ η max$
Hence, the required result for non-membership values is obtained.
Now, for probabilities, since $p min ≤ p i ≤ p max$ and $q min ≤ q i ≤ q max$ this implies that $∏ i = 1 n p min ≤ ∏ i = 1 n p i ≤ ∏ i = 1 n p max$ and $∏ i = 1 n q min ≤ ∏ i = 1 n q i ≤ ∏ i = 1 n q max .$ According to the score function, as defined in Definition 5, we obtain $S ( α − ) ≤ S ( α ) ≤ S ( α + )$. Hence, from all the above notions, $α − ≤ PDHFWEA ( α 1 , α 2 , … , α n ) ≤ α +$. □
Property 2.
(Monotonicity) Let $α i = h i | p h i , g i | q g i$ and $α i * = h i * | p h i * , g i * | q g i *$, for all $i = ( 1 , 2 , … , n )$ be two families of PDHFEs where for each element in $α i$ and $α i *$, there are $γ α i ≤ γ α i *$ and $η α i ≥ η α i *$ while the probabilities remain the same i.e., $p h i = p h i * , q g i = q g i *$ then PDHFWEA$( α 1 , α 2 , … , α n ) ≤$ PDHFWEA$( α 1 * , α 2 * , … , α n * )$.
Proof.
Similar to that of Property 1, so we omit it here. □
However, the PDHFWEA operator does not satisfy idempotency. To illustrate this, we give the following example:
Example 1.
Let $α 1 = α 2 = 0.3 | 0.25 , 0.4 | 0.75 , 0.2 | 0.4 , 0.3 | 0.6$ be two PDHFEs and $ω = ( 0.2 , 0.8 ) T$ be the weight vector, then for $( i = 1 , 2 )$ the aggregated value using PDHFWEA operator is obtained as
$PDHFWEA ( α 1 , α 2 ) = ⋃ γ i ∈ h i η i ∈ g i ∏ i = 1 2 ( 1 + γ i ) ω i − ∏ i = 1 2 ( 1 − γ i ) ω i ∏ i = 1 2 ( 1 + γ i ) ω i + ∏ i = 1 2 ( 1 − γ i ) ω i | ∏ i = 1 2 p γ i , 2 ∏ i = 1 2 ( η i ) ω i ∏ i = 1 2 ( 2 − η i ) ω i + ∏ i = 1 2 ( η i ) ω i | ∏ i = 1 2 q η i = 0.3 | 0.0625 , 0.3807 | 0.1875 , 0.3206 | 0.1875 , 0.4 | 0.5625 , 0.2 | 0.16 , 0.2772 | 0.24 , 0.2173 | 0.24 , 0.30 | 0.36$
which clearly shows that PDHFWEA$( α 1 , α 1 ) ≠ α 1$. Thus, it does not satisfy idempotency.
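For readers who wish to experiment numerically, the PDHFWEA formula of Theorem 6 can be sketched in code as follows. This is a minimal illustration; the representation of a PDHFE as a pair of `{value: probability}` dictionaries and the function name are our own assumptions, not part of the paper.

```python
import math
from itertools import product

def pdhfwea(alphas, weights):
    """Einstein weighted average of PDHFEs per Theorem 6.

    Each PDHFE is modelled as a pair (h, g) of dicts mapping a membership
    value gamma (resp. non-membership value eta) to its probability.
    Probabilities of combinations yielding the same rounded value are merged.
    """
    mem, nonmem = {}, {}
    # Membership part: enumerate all combinations of gamma values.
    for combo in product(*[h.items() for h, _ in alphas]):
        plus = math.prod((1 + g) ** w for (g, _), w in zip(combo, weights))
        minus = math.prod((1 - g) ** w for (g, _), w in zip(combo, weights))
        gamma = round((plus - minus) / (plus + minus), 4)
        mem[gamma] = mem.get(gamma, 0.0) + math.prod(p for _, p in combo)
    # Non-membership part: enumerate all combinations of eta values.
    for combo in product(*[g.items() for _, g in alphas]):
        top = 2 * math.prod(e ** w for (e, _), w in zip(combo, weights))
        bot = math.prod((2 - e) ** w for (e, _), w in zip(combo, weights)) + top / 2
        eta = round(top / bot, 4)
        nonmem[eta] = nonmem.get(eta, 0.0) + math.prod(q for _, q in combo)
    return mem, nonmem
```

Running it on Example 1, with both arguments equal to $α 1$ and $ω = ( 0.2 , 0.8 ) T$, reproduces the aggregated element shown there, confirming in particular that PDHFWEA$( α 1 , α 1 ) ≠ α 1$.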
Definition 8.
Let $α i$ $( i = 1 , 2 , … , n )$ be the collection of PDHFEs, and PDHFOWEA: $Ω n → Ω$, if
$PDHFOWEA ( α 1 , α 2 , … , α n ) = ω 1 α σ ( 1 ) ⊕ ω 2 α σ ( 2 ) ⊕ … ⊕ ω n α σ ( n )$
where Ω is the set of PDHFEs and $ω = ( ω 1 , ω 2 , … , ω n ) T$ is the weight vector of $α i$ such that $ω i > 0$ and $∑ i = 1 n ω i = 1$. $( σ ( 1 ) , σ ( 2 ) , … , σ ( n ) )$ is a permutation of $( 1 , 2 , … , n )$ such that $α σ ( i − 1 ) ≥ α σ ( i )$ for $( i = 2 , 3 , … , n )$, then PDHFOWEA is called probabilistic dual hesitant fuzzy ordered weighted Einstein AO.
Theorem 7.
For a family of PDHFEs $α i = h i | p h i , g i | q g i , ( i = 1 , 2 , … , n )$, the combined value obtained by using PDHFOWEA operator is given as
$P D H F O W E A ( α 1 , α 2 , … , α n ) = ⋃ γ σ ( i ) ∈ h σ ( i ) , η σ ( i ) ∈ g σ ( i ) ∏ i = 1 n ( 1 + γ σ ( i ) ) ω σ ( i ) − ∏ i = 1 n ( 1 − γ σ ( i ) ) ω σ ( i ) ∏ i = 1 n ( 1 + γ σ ( i ) ) ω σ ( i ) + ∏ i = 1 n ( 1 − γ σ ( i ) ) ω σ ( i ) | ∏ i = 1 n p γ σ ( i ) , 2 ∏ i = 1 n ( η σ ( i ) ) ω σ ( i ) ∏ i = 1 n ( 2 − η σ ( i ) ) ω σ ( i ) + ∏ i = 1 n ( η σ ( i ) ) ω σ ( i ) | ∏ i = 1 n q η σ ( i )$
where $ω = ( ω 1 , ω 2 , … , ω n ) T$ is a weight vector such that $∑ i = 1 n ω i = 1$ where $0 < ω i < 1 .$
Proof.
Similar to Theorem 6. □
Property 3.
For all PDHFEs, $α i = h i | p h i , g i | q g i$ where $i = ( 1 , 2 , … , n )$ and for an associated weight vector $ω = ( ω 1 , ω 2 , … , ω n ) T$, such that each $ω i > 0$ and $∑ i = 1 n ω i = 1$, we have
(P1)
(Boundedness) For $α i = h i | p h i , g i | q g i$ where $i = ( 1 , 2 , … , n )$, let $α − = min ( h i ) | min ( p h i ) , max ( g i ) | max ( q g i ) = γ min | p min , η max | q max$ and $α + = max ( h i ) | max ( p h i ) , min ( g i ) | min ( q g i ) = γ max | p max , η min | q min$ be PDHFEs, then $α − ≤ PDHFOWEA ( α 1 , α 2 , … , α n ) ≤ α +$.
(P2)
(Monotonicity) Let $α i = h i | p h i , g i | q g i$ and $α i * = h i * | p h i * , g i * | q g i *$, for all $i = ( 1 , 2 , … , n )$ be two families of PDHFEs where for each element in $α i$ and $α i *$, there are $γ α i ≤ γ α i *$ and $η α i ≥ η α i *$ while the probabilities remain the same i.e., $p h i = p h i * , q g i = q g i *$ then PDHFOWEA$( α 1 , α 2 , … , α n ) ≤$ PDHFOWEA$( α 1 * , α 2 * , … , α n * )$.
Proof.
Similar to Properties 1 and 2. □
Definition 9.
Let Ω be a family of all PDHFEs $α i$ $( i = 1 , 2 , … , n )$ with the corresponding weights $ω = ( ω 1 , ω 2 , … , ω n ) T$, such that $ω i > 0$ and $∑ i = 1 n ω i = 1$. If PDHFWEG: $Ω n → Ω ,$ is a mapping defined by
$PDHFWEG ( α 1 , α 2 , … , α n ) = α 1 ω 1 ⊗ α 2 ω 2 ⊗ … ⊗ α n ω n$
then, PDHFWEG is called probabilistic dual hesitant fuzzy weighted Einstein geometric operator.
Theorem 8.
For a collection of PDHFEs $α i = h i | p h i , g i | q g i , ( i = 1 , 2 , … , n )$, the combined value obtained by using PDHFWEG operator is still a PDHFE and is given as
$PDHFWEG ( α 1 , α 2 , … , α n ) = ⋃ γ i ∈ h i , η i ∈ g i 2 ∏ i = 1 n ( γ i ) ω i ∏ i = 1 n ( 2 − γ i ) ω i + ∏ i = 1 n ( γ i ) ω i | ∏ i = 1 n p γ i , ∏ i = 1 n ( 1 + η i ) ω i − ∏ i = 1 n ( 1 − η i ) ω i ∏ i = 1 n ( 1 + η i ) ω i + ∏ i = 1 n ( 1 − η i ) ω i | ∏ i = 1 n q η i$
where $ω = ( ω 1 , ω 2 , … , ω n ) T$ is a weight vector such that $∑ i = 1 n ω i = 1$ where $0 < ω i < 1 .$
Proof.
Same as Theorem 6. □
Also, it has been seen that the PDHFWEG operator satisfies the properties of boundedness and monotonicity.
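The PDHFWEG formula of Theorem 8 applies the Einstein geometric form to the membership values and the Einstein averaging form to the non-membership values. Below is a minimal sketch, assuming PDHFEs represented as pairs of `{value: probability}` dictionaries (a representation of our own choosing, not prescribed by the paper):

```python
import math
from itertools import product

def pdhfweg(alphas, weights):
    """Einstein weighted geometric operator of Theorem 8 over PDHFEs,
    each given as a pair (h, g) of dicts mapping value -> probability."""
    mem, nonmem = {}, {}
    # Membership part uses the Einstein geometric (product) form.
    for combo in product(*[h.items() for h, _ in alphas]):
        top = 2 * math.prod(g ** w for (g, _), w in zip(combo, weights))
        bot = math.prod((2 - g) ** w for (g, _), w in zip(combo, weights)) + top / 2
        gamma = round(top / bot, 4)
        mem[gamma] = mem.get(gamma, 0.0) + math.prod(p for _, p in combo)
    # Non-membership part uses the Einstein averaging form.
    for combo in product(*[g.items() for _, g in alphas]):
        plus = math.prod((1 + e) ** w for (e, _), w in zip(combo, weights))
        minus = math.prod((1 - e) ** w for (e, _), w in zip(combo, weights))
        eta = round((plus - minus) / (plus + minus), 4)
        nonmem[eta] = nonmem.get(eta, 0.0) + math.prod(q for _, q in combo)
    return mem, nonmem
```

For PDHFEs whose hesitant parts are singletons, all the products collapse and the operator returns the common value; e.g., aggregating $( { 0.5 | 1 } , { 0.2 | 1 } )$ with itself under any weights summing to one yields the same element back.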
Definition 10.
Let $α i$ $( i = 1 , 2 , … , n )$ be the family of PDHFEs, and PDHFOWEG: $Ω n → Ω$, if
$PDHFOWEG ( α 1 , α 2 , … , α n ) = α σ ( 1 ) ω 1 ⊗ α σ ( 2 ) ω 2 ⊗ … ⊗ α σ ( n ) ω n$
where Ω is the set of PDHFEs and $ω = ( ω 1 , ω 2 , … , ω n ) T$ is the weight vector of $α i$ such that $ω i > 0$ and $∑ i = 1 n ω i = 1$. $( σ ( 1 ) , σ ( 2 ) , … , σ ( n ) )$ is a permutation of $( 1 , 2 , … , n )$ such that $α σ ( i − 1 ) ≥ α σ ( i )$ for $( i = 2 , 3 , … , n )$, then PDHFOWEG is called probabilistic dual hesitant fuzzy ordered weighted Einstein geometric operator.
Theorem 9.
For a family of PDHFEs $α i = h i | p h i , g i | q g i , ( i = 1 , 2 , … , n )$, the combined value obtained by using PDHFOWEG operator is given as
$PDHFOWEG ( α 1 , α 2 , … , α n ) = ⋃ γ σ ( i ) ∈ h σ ( i ) , η σ ( i ) ∈ g σ ( i ) 2 ∏ i = 1 n ( γ σ ( i ) ) ω σ ( i ) ∏ i = 1 n ( 2 − γ σ ( i ) ) ω σ ( i ) + ∏ i = 1 n ( γ σ ( i ) ) ω σ ( i ) | ∏ i = 1 n p γ σ ( i ) , ∏ i = 1 n ( 1 + η σ ( i ) ) ω σ ( i ) − ∏ i = 1 n ( 1 − η σ ( i ) ) ω σ ( i ) ∏ i = 1 n ( 1 + η σ ( i ) ) ω σ ( i ) + ∏ i = 1 n ( 1 − η σ ( i ) ) ω σ ( i ) | ∏ i = 1 n q η σ ( i )$
where $ω = ( ω 1 , ω 2 , … , ω n ) T$ is a weight vector such that $∑ i = 1 n ω i = 1$ where $0 < ω i < 1 .$
Proof.
Similar to Theorem 6. □
Also, it has been seen that the PDHFOWEG operator satisfies the properties of boundedness and monotonicity.

## 6. Maximum Deviation Method for Determining the Weights

The choice of weights directly affects the performance of weighted aggregation operators. For this purpose, in this section, the maximizing deviation method is adapted to calculate the criteria weights in MCDM when they are unknown or only partially known.
Given a set of alternatives $A = { A 1 , A 2 , … , A m }$ and a set of criteria $C = { C 1 , C 2 , … , C t }$, which are evaluated by a decision maker under the PDHFS environment over the universal set $X = { x 1 , x 2 , … , x n }$. Assume that the rating values corresponding to each alternative are expressed in terms of PDHFEs as
$A r = C 1 , s r 1 , C 2 , s r 2 , … , C t , s r t ,$
where $s r v = h r v ( x k ) | p r v ( x k ) , g r v ( x k ) | q r v ( x k )$, with $r = 1 , 2 , … , m ;$ $v = 1 , 2 , … , t$, $k = 1 , 2 , … , n$. Assume that the importance of each criterion is given in the form of weights $( ω 1 , ω 2 , … , ω t )$ such that $0 < ω v ≤ 1$ and $∑ v = 1 t ω v = 1$. Now, by using the proposed distance $d 1$ in Equation (9) or $d 2$ in Equation (10), the deviation measure between the alternative $A r$ and all the other alternatives with respect to the criterion $C v$ is given as:
$D r v ( ω ) = ∑ b = 1 m w v D ( s r v , s b v ) r = 1 , 2 , … , m ; v = 1 , 2 , … , t$
In accordance with the notion of the maximizing deviation method, if the distance between the alternatives under a criterion is smaller, then that criterion should be assigned a smaller weight; this indicates that the alternatives are similar with respect to that criterion. On the contrary, it should be assigned a larger weight. Let,
$D v ( ω ) = ∑ r = 1 m D r v ( ω ) = ∑ r = 1 m ∑ b = 1 m w v D ( s r v , s b v ) , v = 1 , 2 , … , t$
Here $D v ( ω )$ represents the distance of all the alternatives to the other alternatives under the criterion $C v ∈ C$. Moreover, ‘$D$’ denotes either distance $d 1$ or $d 2$ as given in Equations (9) and (10), respectively. Based on the concept of maximum deviation, we have to choose a weight vector ‘$ω$’ that maximizes all the deviation measures for the criteria. For this, we construct a non-linear programming model as given below:
$max D ( ω ) = ∑ v = 1 t ∑ r = 1 m D r v ( ω ) = ∑ v = 1 t ∑ r = 1 m ∑ b = 1 m D ( s r v , s b v ) ω v s . t . ω v > 0 ; ∑ v = 1 t ω v = 1 ; v = 1 , 2 , … , t$
where ‘$D$’ can be either $d 1$ or $d 2$.
If $D = d 1$, then for $λ > 0$, we have
$D ( ω ) = ∑ v = 1 t ∑ r = 1 m ∑ b = 1 m ω v ∑ k = 1 n 1 n 1 M + N ∑ i = 1 M ( γ A i ( x k ) p A i ( x k ) ) ( x r v ) − ( γ B i ( x k ) p B i ( x k ) ) ( x b v ) λ + ∑ j = 1 N ( η A j ( x k ) q A j ( x k ) ) ( x r v ) − ( η B j ( x k ) q B j ( x k ) ) ( x b v ) λ 1 λ ;$
and if $D = d 2$, then
$D ( ω ) = ∑ v = 1 t ∑ r = 1 m ∑ b = 1 m ω v ∑ k = 1 n 1 n 1 M A ∑ i = 1 M A γ A i ( x k ) p A i ( x k ) ( x r v ) − 1 M B ∑ i ′ = 1 M B γ B i ′ ( x k ) p B i ′ ( x k ) ( x b v ) λ 2 + 1 N A ∑ j = 1 N A η A j ( x k ) q A j ( x k ) ( x r v ) − 1 N B ∑ j ′ = 1 N B η B j ′ ( x k ) q B j ′ ( x k ) ( x b v ) λ 2 1 λ$
If the information about the criteria weights is completely unknown, then another programming model can be established as:
$max D ( ω ) = ∑ v = 1 t ∑ r = 1 m D r v ( ω ) = ∑ v = 1 t ∑ r = 1 m ∑ b = 1 m D ( s r v , s b v ) ω v s . t . ω v ≥ 0 ; ∑ v = 1 t ω v 2 = 1 ; v = 1 , 2 , … , t$
To solve this, a Lagrange’s function is constructed as
$L ( ω , ζ ) = ∑ v = 1 t ∑ r = 1 m ∑ b = 1 m D ( s r v , s b v ) ω v + ζ 2 ∑ v = 1 t ω v 2 − 1$
where $ζ$ is the Lagrange parameter. Computing the partial derivatives of the Lagrange function with respect to $ω v$ and $ζ$ and setting them equal to zero, we get
$∂ L ∂ ω v = ∑ r = 1 m ∑ b = 1 m D ( s r v , s b v ) + ζ ω v = 0 ; v = 1 , 2 , … , t ∂ L ∂ ζ = ∑ v = 1 t ω v 2 − 1 = 0$
Solving Equation (27), we obtain
$ω v = ∑ r = 1 m ∑ b = 1 m D ( s r v , s b v ) ∑ v = 1 t ∑ r = 1 m ∑ b = 1 m D ( s r v , s b v ) 2 ; v = 1 , 2 , … , t$
Normalizing Equation (28), we get
$ω v = ∑ r = 1 m ∑ b = 1 m D ( s r v , s b v ) ∑ v = 1 t ∑ r = 1 m ∑ b = 1 m D ( s r v , s b v )$
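The normalized weight computation of Equation (29) can be sketched as follows, assuming the pairwise distances $D ( s r v , s b v )$ have already been evaluated with $d 1$ or $d 2$ and stored per criterion; the data layout and names are our own illustrative choices.

```python
def max_deviation_weights(dist):
    """Normalized maximum-deviation weights, Eq. (29).

    dist[v][r][b] holds D(s_rv, s_bv), the distance between the ratings of
    alternatives r and b under criterion v (computed beforehand).
    """
    # Total deviation accumulated under each criterion.
    totals = [sum(sum(row) for row in per_crit) for per_crit in dist]
    grand = sum(totals)
    # Each criterion's weight is its share of the total deviation.
    return [t / grand for t in totals]

# Toy example: 2 criteria, 2 alternatives; criterion 1 separates the
# alternatives three times as strongly as criterion 0, so it gets weight 0.75.
dist = [[[0.0, 0.2], [0.2, 0.0]],
        [[0.0, 0.6], [0.6, 0.0]]]
weights = max_deviation_weights(dist)  # ≈ [0.25, 0.75]
```

This reflects the maximizing deviation principle directly: criteria under which the alternatives differ more receive proportionally larger weights.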
In the DM process, the data values for evaluation are available as DHFSs or PDHFSs, which are integrated to form the PDHFSs. In order to gather the information, probability values are assigned to each possible membership or non-membership value. The procedure followed for this information fusion is outlined in Algorithm 1.
Algorithm 1 Aggregating probabilities for more than one probabilistic fuzzy set.

Input: $α ( 1 ) , α ( 2 ) , … , α ( D )$, where $α ( d ) = h ( d ) | p ( d )$ and $d = 1 , 2 , … , D$, such that D is the total number of elements to be fused together.
Output: $α ( o u t ) = h ( o u t ) | p ( o u t )$
1: Let $u = 1 / D$ be the normalized unit.
2: List all the probabilistic membership values in a set and represent it as $M = { m l | s l }$, where $m l | s l = h ( d ) | p ( d )$, $∀ d = 1 , 2 , … , D$, and $l = 1 , 2 , … , # L$, such that $# L$ is the total number of probabilistic membership values of all the considered elements.
3: Set $i = 1$.
4: Set $m e = m i$.
5: $f ( m e m ) ( l ) = 1 , if m e = m l ; 0 , if m e ≠ m l$.
6: Set $l = l + 1$ and repeat step 5 until $l = # L$.
7: Set $h ( o u t ) = ⋃ i m e$.
8: $p ( o u t ) = ∑ l f ( m e m ) ( l ) · s l · u$.
9: Set $i = i + 1$ and go to step 4 until $i = # L$.
To demonstrate the working of aforementioned algorithm, an example is given below.
Example 2.
Let $α ( 1 ) = { 0.1 | 0.1 , 0.2 | 0.5 , 0.3 | 0.4 } , { 0.5 | 1 }$; $α ( 2 ) = { 0.2 | 0.4 , 0.3 | 0.6 } ,$ ${ 0.5 | 0.2 , 0.6 | 0.8 }$ and $α ( 3 ) = { 0.1 | 0.4 , 0.2 | 0.4 , 0.6 | 0.2 } , { 0.1 | 1 }$ be three PDHFEs to be fused together. Since, $h ( 1 ) , p ( 1 ) = { 0.1 | 0.1 , 0.2 | 0.5 , 0.3 | 0.4 }$, $h ( 2 ) , p ( 2 ) = { 0.2 | 0.4 , 0.3 | 0.6 }$ and $h ( 3 ) , p ( 3 ) = { 0.1 | 0.4 ,$ $0.2 | 0.4 , 0.6 | 0.2 }$, so we get $M = { 0.1 | 0.1 , 0.2 | 0.5 , 0.3 | 0.4 , 0.2 | 0.4 , 0.3 | 0.6 , 0.1 | 0.4 , 0.2 | 0.4 , 0.6 | 0.2 }$ where $# L = 8$ and thus $l = 1 , 2 , … , 8$. Clearly, here $D = 3$. Now, by following Algorithm 1 for both membership and non-membership degrees, we obtained the final PDHFE as:
$α ( o u t ) = { 0.1 | 0.1667 , 0.2 | 0.4333 , 0.3 | 0.3333 , 0.6 | 0.0667 } , { 0.5 | 0.4 , 0.6 | 0.2667 , 0.1 | 0.3333 }$
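Algorithm 1 amounts to pooling all the value–probability pairs and renormalizing by the number of fused elements. A compact sketch (the dictionary representation and function name are our own assumptions):

```python
from collections import defaultdict

def fuse_probabilities(components):
    """Algorithm 1: fuse D probabilistic hesitant components, each a dict
    mapping a (non-)membership value to its probability, averaging by u = 1/D."""
    D = len(components)
    fused = defaultdict(float)
    for component in components:
        for value, prob in component.items():
            fused[value] += prob / D  # u = 1/D is the normalized unit
    return dict(fused)

# Membership part of Example 2:
h_out = fuse_probabilities([
    {0.1: 0.1, 0.2: 0.5, 0.3: 0.4},   # h(1)|p(1)
    {0.2: 0.4, 0.3: 0.6},             # h(2)|p(2)
    {0.1: 0.4, 0.2: 0.4, 0.6: 0.2},   # h(3)|p(3)
])
# h_out ≈ {0.1: 0.1667, 0.2: 0.4333, 0.3: 0.3333, 0.6: 0.0667}
```

Applying the same function to the non-membership parts of Example 2 reproduces the second component of the fused PDHFE.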

## 7. Decision Making Approach Using the Proposed Operators

In this section, a DM approach based on proposed AOs is given followed by a numerical example.

#### 7.1. Approach Based on the Proposed Operators

Consider a set of m alternatives $A = { A 1 , A 2 , … , A m }$ which are evaluated by the experts classified under criteria information $C = { C 1 , C 2 , … , C t }$. The ratings for each alternative in PDHFEs are given as:
$A r = { C 1 , α r 1 , C 2 , α r 2 , … , C t , α r t } ,$
where $α r v = h r v | p r v , g r v | q r v$, with $r = 1 , 2 , … , m ;$ $v = 1 , 2 , … , t$. In order to obtain the best alternative(s) for a problem, the DM approach utilizing the proposed AOs is summarized in the following steps:
Step 1:
Construct the decision matrices $R ( d )$ for the ‘$d$’ decision makers in the form of PDHFEs as:
where $h r v ( d ) | p r v ( d ) , g r v ( d ) | q r v ( d ) = γ r v ( d ) | p r v ( d ) , η r v ( d ) | q r v ( d )$, such that $r = 1 , 2 , … , m$ and $v = 1 , 2 , … , t$.
Step 2:
If $d = 1$, then $h r v ( d ) | p r v ( d ) , g r v ( d ) | q r v ( d )$ is equal to $h r v | p r v , g r v | q r v ,$ where $h r v | p r v , g r v | q r v$ $= γ r v | p r v , η r v | q r v$, such that $r = 1 , 2 , … , m$ and $v = 1 , 2 , … , t$; then go to Step 3. If $d ≥ 2 ,$ then a matrix is formed by combining the probabilities in accordance with Algorithm 1. The comprehensive matrix so obtained is given as:
where $h r v | p r v , g r v | q r v = γ r v | p r v , η r v | q r v$, where $r = 1 , 2 , … , m$ and $v = 1 , 2 , … , t$.
Step 3:
Choose the appropriate distance measure, $d 1$ or $d 2$, as given in Equations (9) and (10), on the basis of the expert's needs. If the largest or smallest dual-hesitant probabilistic values may be repeated according to the optimistic or pessimistic behavior of the expert, then choose measure $d 1$; otherwise, choose measure $d 2$. Then determine the weights of the different criteria using Equation (29).
Step 4:
Compute the overall aggregated assessment $‘ Q r$’ of alternatives using PDHFWEA or PDHFOWEA or PDHFWEG or PDHFOWEG operators as given below in Equations (31)–(34) respectively.
$Q r = PDHFWEA ( α r 1 , α r 2 , … , α r v ) = ⋃ γ r v ∈ h r v η r v ∈ g r v ∏ v = 1 t ( 1 + γ r v ) ω v − ∏ v = 1 t ( 1 − γ r v ) ω v ∏ v = 1 t ( 1 + γ r v ) ω v + ∏ v = 1 t ( 1 − γ r v ) ω v | ∏ v = 1 t p γ r v , 2 ∏ v = 1 t ( η r v ) ω v ∏ v = 1 t ( 2 − η r v ) ω v + ∏ v = 1 t ( η r v ) ω v | ∏ v = 1 t q η r v$
or
$Q r = PDHFOWEA ( α r 1 , α r 2 , … , α r v ) = ⋃ γ σ ( r v ) ∈ h σ ( r v ) η σ ( r v ) ∈ g σ ( r v ) ∏ v = 1 t ( 1 + γ σ ( r v ) ) ω σ ( v ) − ∏ v = 1 t ( 1 − γ σ ( r v ) ) ω σ ( v ) ∏ v = 1 t ( 1 + γ σ ( r v ) ) ω σ ( v ) + ∏ v = 1 t ( 1 − γ σ ( r v ) ) ω σ ( v ) | ∏ v = 1 t p γ σ ( r v ) , 2 ∏ v = 1 t ( η σ ( r v ) ) ω σ ( v ) ∏ v = 1 t ( 2 − η σ ( r v ) ) ω σ ( v ) + ∏ v = 1 t ( η σ ( r v ) ) ω σ ( v ) | ∏ v = 1 t q η σ ( r v )$
or
$Q r = PDHFWEG ( α r 1 , α r 2 , … , α r v ) = ⋃ γ r v ∈ h r v η r v ∈ g r v 2 ∏ v = 1 t ( γ r v ) ω v ∏ v = 1 t ( 2 − γ r v ) ω v + ∏ v = 1 t ( γ r v ) ω v | ∏ v = 1 t p γ r v , ∏ v = 1 t ( 1 + η r v ) ω v − ∏ v = 1 t ( 1 − η r v ) ω v ∏ v = 1 t ( 1 + η r v ) ω v + ∏ v = 1 t ( 1 − η r v ) ω v | ∏ v = 1 t q η r v$
or
$Q r = PDHFOWEG ( α r 1 , α r 2 , … , α r v ) = ⋃ γ σ ( r v ) ∈ h σ ( r v ) η σ ( r v ) ∈ g σ ( r v ) 2 ∏ v = 1 t ( γ σ ( r v ) ) ω σ ( v ) ∏ v = 1 t ( 2 − γ σ ( r v ) ) ω σ ( v ) + ∏ v = 1 t ( γ σ ( r v ) ) ω σ ( v ) | ∏ v = 1 t p γ σ ( r v ) , ∏ v = 1 t ( 1 + η σ ( r v ) ) ω σ ( v ) − ∏ v = 1 t ( 1 − η σ ( r v ) ) ω σ ( v ) ∏ v = 1 t ( 1 + η σ ( r v ) ) ω σ ( v ) + ∏ v = 1 t ( 1 − η σ ( r v ) ) ω σ ( v ) | ∏ v = 1 t q η σ ( r v ) ,$
Step 5:
Utilize Definition 5 to rank the overall aggregated values and select the most desirable alternative(s).

#### 7.2. Illustrative Example

An illustrative example (based on consumer’s buying behavior) for eliciting the numerical applicability of our proposed approach is given below:
In a company’s production oriented decision-making processes, consumers or buyers play a vital role. In order to increase sales and to be in good books of every customer, every production company pays a great attention to customer’s buying behavior. This consumer behavior is the main driving force behind the change of trends, need of updation in the products etc., to which the production company must remain in contact to have a great mutual relationship with the customers and to maintain a strong position in the competitive market environment.
Suppose a multi-national company wants to launch new products on the basis of the consumers in different countries. For that, it has delegated the work to the company heads of three different countries, viz. India, Canada, and Australia. The company heads of these countries have to analyze the customers' buying behavior, and for that, they have information available in the form of PDHFEs. Each expert $( d = 1 , 2 , 3 )$ from the three different countries assessed the available information oriented to four company products $A i$’s $( i = 1 , 2 , 3 , 4 )$, classified under four criteria determining the customers' buying behavior, namely $C 1 :$ ‘Suitability to cultural environment’; $C 2 :$ ‘Global trend accordance’; $C 3 :$ ‘Suitability to weather conditions’; $C 4 :$ ‘Good quality after-sale services’. The aim of the company is to assess the main criteria which affect the customers' buying behavior so as to figure out which product among the $A i$’s $( i = 1 , 2 , 3 , 4 )$ has to be launched first. The following steps are adopted to find the most suitable product for the first launch.
Step 1:
The preference information corresponding to the three decision makers $( d = 1 , 2 , 3 )$ is given in Table 1, Table 2 and Table 3.
Step 2:
Since the number of decision makers satisfies $d ≥ 2$, the comprehensive matrix obtained by integrating all the preferences given by the panel of experts using Algorithm 1 is given in Table 4.
Step 3:
The experts chose an optimistic behavior towards the analysis; thus, utilizing distance $d 1$ in Equation (29), the weights are determined as $ω = ( 0.4385 , 0.1986 , 0.1815 , 0.1814 ) T$.
Step 4:
The aggregated values for each alternative $A i , i = ( 1 , 2 , 3 , 4 )$ by using PDHFWEA operator as given in Equation (31) are:
$Q 1 = 0.5213 | 0.0056 , 0.5439 | 0.0006 , 0.5546 | 0.0154 , 0.5760 | 0.0017 , … … … … , 0.6347 | 0.0037 , 0.2617 | 0.0444 , 0.2531 | 0.0222 , 0.1909 | 0.0222 , 0.1844 | 0.0111 , … … … … , 0.3120 | 0.0074 Q 2 = 0.6080 | 0.0469 , 0.6201 | 0.0614 , 0.4838 | 0.0253 , 0.4985 | 0.0331 , … … … … , 0.4240 | 0.0157 , 0.2531 | 0.0123 , 0.2359 | 0.0198 , 0.2266 | 0.0049 , 0.5372 | 0.0062 , … … … … , 0.6427 | 0.0025 Q 3 = 0.3384 | 0.0173 , 0.3352 | 0.0173 , 0.3515 | 0.0173 , 0.3963 | 0.0074 , … … … … , 0.7379 | 0.0123 , 0.4391 | 0.0444 , 0.4251 | 0.0346 , 0.4256 | 0.0691 , 0.2226 | 0.0222 , … … … … , 0.1540 | 0.0026 Q 4 = 0.4225 | 0.0474 , 0.4036 | 0.0474 , 0.3947 | 0.0474 , 0.4667 | 0.0435 , … … … … , 0.3110 | 0.0078 , 0.3016 | 0.0017 , 0.3413 | 0.0039 , 0.3698 | 0.0111 , 0.2533 | 0.0006 , … … … … , 0.5259 | 0.0197$
Step 5:
The score values are obtained as $S ( Q 1 ) = 0.1810 , S ( Q 2 ) = 0.1799 , S ( Q 3 ) = 0.1739$ and $S ( Q 4 ) = − 0.0002$
Step 6:
Since, the ranking order is $S ( Q 1 ) > S ( Q 2 ) > S ( Q 3 ) > S ( Q 4 )$, thus the ranking is obtained as $A 1 ≻ A 2 ≻ A 3 ≻ A 4$.
Thus, it is clear that according to the experts product $A 1$ should be launched first.
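The ranking in Steps 5 and 6 is simply a descending sort on the score values; for instance, using the PDHFWEA scores reported above:

```python
# Score values from Step 5 (PDHFWEA operator).
scores = {"A1": 0.1810, "A2": 0.1799, "A3": 0.1739, "A4": -0.0002}

# Rank alternatives by score, largest first, per Definition 5's ordering.
ranking = sorted(scores, key=scores.get, reverse=True)
print(" > ".join(ranking))  # A1 > A2 > A3 > A4
```

Note that Definition 5 also prescribes a tie-breaking rule when two score values coincide; this sketch covers only the strict-ordering case arising in the example.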
However, on the other hand, if we utilize the PDHFWEG operator instead of PDHFWEA operator to aggregate the different preferences, then the following steps of the proposed approach are executed to reach the optimal alternative(s) as.
Step 1:
Same as Step 1 above.
Step 2:
Same as Step 2 above.
Step 3:
Same as Step 3 above.
Step 4:
The aggregated values for each alternative $A i , i = ( 1 , 2 , 3 , 4 )$ by using PDHFWEG operator as given in Equation (33) are:
$Q 1 = 0.3959 | 0.0056 , 0.4092 | 0.0006 , 0.4642 | 0.0154 , 0.4792 | 0.0017 , … … … … , 0.5908 | 0.0037 , 0.2917 | 0.0444 , 0.2827 | 0.0222 , 0.2008 | 0.0222 , 0.1913 | 0.0111 , … … … … , 0.3541 | 0.0074 Q 2 = 0.5090 | 0.0469 , 0.5415 | 0.0614 , 0.4391 | 0.0253 , 0.4685 | 0.0331 , … … … … , 0.2959 | 0.0157 , 0.3950 | 0.0123 , 0.3312 | 0.0198 , 0.3078 | 0.0049 , 0.6376 | 0.0062 , … … … … , 0.6516 | 0.0025 Q 3 = 0.1667 | 0.0173 , 0.1615 | 0.0173 , 0.1828 | 0.0173 , 0.2950 | 0.0074 , … … … … , 0.6164 | 0.0123 , 0.4890 | 0.0444 , 0.4646 | 0.0346 , 0.5203 | 0.0691 , 0.3256 | 0.0222 , … … … … , 0.2742 | 0.0026 Q 4 = 0.4150 | 0.0474 , 0.3981 | 0.0474 , 0.3886 | 0.0474 , 0.4580 | 0.0435 , … … … … , 0.2774 | 0.0078 , 0.3395 | 0.0017 , 0.3744 | 0.0039 , 0.4157 | 0.0111 , 0.2974 | 0.0006 , … … … … , 0.5656 | 0.0197$
Step 5:
The score values are obtained as $S ( Q 1 ) = 0.0937 , S ( Q 2 ) = − 0.0073 , S ( Q 3 ) = − 0.0202$ and $S ( Q 4 ) = − 0.0545$
Step 6:
Since the ranking order is $S ( Q 1 ) > S ( Q 2 ) > S ( Q 3 ) > S ( Q 4 )$, the ranking is obtained as $A 1 ≻ A 2 ≻ A 3 ≻ A 4$.
The most desirable alternative is $A 1$.
To analyze the impact of all the proposed operators, along with the distances $d 1$ and $d 2$, on the final ranking order of the alternatives, we perform an experiment in which the steps of the proposed algorithm are executed. The final score values of each alternative $A i$ $( i = 1 , 2 , 3 , 4 )$ are obtained and summarized in Table 5. It is seen that utilizing the different distance measures $d 1$ and $d 2$ does not affect the best alternative $A 1$ in most of the cases. Moreover, the score values obtained by the proposed operators, namely PDHFWEA, PDHFWEG, and PDHFOWEG, identify the same alternative $A 1$ as the best alternative to be launched first, while the PDHFOWEA operator identifies the alternative $A 3$ as the best one. However, it can be seen that the corresponding PDHFWEA and PDHFOWEA score values are greater than those of the PDHFWEG and PDHFOWEG aggregation operators, showing that the average aggregation operators offer the decision maker more optimistic score values as compared to the geometric ones. Also, it can be seen that both distances, despite providing a large variation in numerical evaluation and flexibility in data processing, lead to the same result in most of the cases, with $A 1$ as the best choice among the alternatives to be launched first.

#### 7.3. Comparative Studies

In order to analyze the alignment of the proposed approach's results with the existing theories and to validate our results, the score values corresponding to the different operators are given in Table 6. The operators in the considered existing theories are: the probabilistic dual hesitant fuzzy weighted average (PDHFWA) by Hao et al. [42]; the hesitant probabilistic fuzzy Einstein weighted average and Einstein weighted geometric (HPFEWEA, HPFEWEG) by Park et al. [50]; and the hesitant probabilistic fuzzy weighted average (HPFWA), hesitant probabilistic fuzzy weighted geometric (HPFWG), hesitant probabilistic fuzzy ordered weighted average (HPFOWA), and hesitant probabilistic fuzzy ordered weighted geometric (HPFOWG) aggregation operators by Xu and Zhou [48]. Noticeably, the approach outlined by Hao et al. [42] utilizing the PDHFWA operator figures out $A 2$ as the best alternative, while the least preferred alternative $A 4$ remains the same as in our proposed approach. However, if we consider only the probabilistic hesitant fuzzy information and ignore the non-membership probabilistic hesitant values, then the best alternative fluctuates between $A 1$ and $A 3$ when varying the aggregation operators, and the least preferred alternative remains $A 4$, which coincides with the outcome of our proposed approach. This variation is due to the negligence of the non-membership values and their corresponding probabilities. Thus, the proposed approach is advantageous over the traditional approaches because it remains firm on the same output ranking for the different operators. Moreover, the fact that the best alternative chosen by the proposed approach remains the same as that of the existing approaches signifies that the proposed approach is valid.
Further, a deeper insight into the comparison of our method with the existing DM methods proposed in [42,48,50] is given by comparing the characteristics of all the approaches with the proposed one, as tabulated in Table 7. In Table 7, the symbol ‘✓’ denotes that the corresponding DM approach considers more than one decision maker, handles probabilities, accounts for non-membership entities, or has weights derived by a non-linear approach, whereas the symbol ‘$×$’ means that the associated method fails to do so.
The symbols tabulated in Table 7 depict that the MCDM methods mentioned in [42] as well as [48] consider multiple decision makers, whereas the approach utilized by [50] consists of preference evaluations through a single expert. All three existing approaches are probabilistic, so they consider the probabilities corresponding to their membership or non-membership values; however, only the method of [42] considers the non-membership values along with the membership ones, while the other two consider only the membership value ratings. On the other hand, none of the specified existing approaches adopts a non-linear weight-determination technique such as the maximum deviation method, whereas in the proposed methodology the weights corresponding to two different distance measures are derived in this way. Thus, our proposed approach possesses all four said characteristics and deals with real-life situations more efficiently than the existing approaches [42,48,50].

## 8. Conclusions

In this manuscript, we have utilized the concept of PDHFSs to handle the uncertainty in the data and to capture the information with a greater degree of freedom. To this end, we have defined some new distance measures based on the size of two PDHFSs. Further, focusing on the advantages of aggregation operators in the decision-making process, we have proposed a series of weighted averaging and geometric aggregation operators based on Einstein norm operations. The major advantage of the proposed operators is that they attach probabilistic information to each dual hesitant membership degree, which conveys more information and helps the decision maker reach a decision more confidently. Further, since decision makers are sensitive to losses and exhibit bounded rationality, probabilistic information is needed in the analysis to solve the related MCDM problems; a prominent characteristic of the proposed framework is that it can account for the decision makers' psychological behavior. The primary contributions of this paper are summarized as follows:
(1)
We introduce two new distance measures between pairs of PDHFEs and explore their properties. Further, some basic operational laws for the proposed structure are discussed, and the various relationships among them are explored using Einstein norm operations.
(2)
To obtain the optimal selection in group decision making (GDM) under the probabilistic dual hesitant fuzzy environment, we propose a maximum deviation method (MDM) algorithm and develop several weighted aggregation operators. The MDM is used to determine the optimal weight of each criterion.
(3)
Four new aggregation operators, namely the PDHFWEA, PDHFOWEA, PDHFWEG, and PDHFOWEG operators, are developed to aggregate PDHFE information. In addition, based on a comprehensive scrutiny of DHFSs and PDHFSs, we devise an algorithm to formulate PDHFSs from given probabilistic fuzzy information. Depending on the decision maker's preferences and desired goals, the appropriate distance measure and/or aggregation operator can be chosen.
(4)
Finally, the presented group decision-making approach is illustrated with a numerical example, and an extensive comparative analysis is conducted with the existing decision-making theories [42,48,50] to show the advantages of the proposed approach.
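As a hedged illustration of the Einstein norm operations underlying the proposed averaging and geometric operators, the sketch below applies the standard Einstein weighted-average closed form to plain membership degrees and the dual geometric form to non-membership degrees. Extending this to full PDHFEs applies the same formulas to every combination of hesitant values while multiplying the attached probabilities; the variable names and toy numbers are hypothetical, and this is not the paper's exact operator definition.

```python
from math import prod

def einstein_weighted_avg(mu, w):
    """Einstein weighted average of membership degrees mu with normalized
    weights w, via the closed form
        (prod(1+mu_i)^w_i - prod(1-mu_i)^w_i)
      / (prod(1+mu_i)^w_i + prod(1-mu_i)^w_i),
    which always stays inside [0, 1]."""
    up = prod((1 + m) ** wi for m, wi in zip(mu, w))
    dn = prod((1 - m) ** wi for m, wi in zip(mu, w))
    return (up - dn) / (up + dn)

def einstein_weighted_geo(nu, w):
    """Dual Einstein geometric form, typically applied to the
    non-membership part:
        2 * prod(nu_i^w_i) / (prod((2-nu_i)^w_i) + prod(nu_i^w_i))."""
    g = prod(n ** wi for n, wi in zip(nu, w))
    return 2 * g / (prod((2 - n) ** wi for n, wi in zip(nu, w)) + g)

mu = [0.6, 0.3, 0.8]   # hypothetical membership values
w = [0.3, 0.4, 0.3]    # normalized criteria weights
agg = einstein_weighted_avg(mu, w)
print(round(agg, 4))
```

Because both forms are mean-type operators, the aggregate always lies between the smallest and largest input value, which is one of the boundedness properties investigated for the proposed operators.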
Thus, we conclude that the proposed notion of PDHFSs is applicable in a variety of scenarios, such as when a person indicates how sure he/she is about the uncertain information being evaluated, or when the evaluators have no knowledge of the importance of their decisions or of the considered criteria. Hence, the proposed concepts are efficaciously applicable to situations under uncertainty and are expected to have wide applications in complex DM problems. In the future, there is scope for extending the proposed method to different environments and for applying it in various fields related to decision theory [53,54,55,56,57,58,59,60,61,62,63].
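The algorithm of contribution (3), which formulates a single PDHFS from several experts' probabilistic evaluations, can be sketched with a simple fusion rule that is consistent with many entries of the comprehensive matrix (Table 4): each hesitant value's probability is the expert-weight-weighted sum of the probabilities assigned to it, assuming equal expert weights. This is an illustrative sketch of that pattern, not the paper's formal algorithm.

```python
from collections import defaultdict

def fuse_experts(elements, weights=None):
    """Fuse one PDHF component (e.g. the membership part) given by several
    experts into a single probability-weighted hesitant set.

    Each element is a dict {value: probability}.  Equal expert weights
    are assumed unless supplied.  Probabilities of repeated values
    accumulate, so the fused probabilities still sum to 1 when each
    expert's do.
    """
    n = len(elements)
    weights = weights or [1.0 / n] * n
    fused = defaultdict(float)
    for elem, wk in zip(elements, weights):
        for value, p in elem.items():
            fused[value] += wk * p
    return dict(fused)

# Three experts' membership evaluations of one alternative under one
# criterion (here, the A2/C1 entries of Tables 1-3):
experts = [{0.8: 0.9, 0.6: 0.1}, {0.2: 1.0}, {0.6: 0.6, 0.8: 0.4}]
fused = fuse_experts(experts)
print(fused)
```

Up to rounding, this reproduces the corresponding membership entry of the comprehensive matrix in Table 4 ($0.8\mid 0.4333$, $0.6\mid 0.2333$, $0.2\mid 0.3333$).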

## Author Contributions

Conceptualization, H.G. and G.K.; Methodology, G.K.; Validation, H.G.; Formal analysis, H.G. and G.K.; Investigation, H.G. and G.K.; Writing-original draft preparation, H.G. and G.K.; Writing-review and editing, H.G.; Visualization, H.G.

## Funding

This research received no external funding.

## Acknowledgments

The authors wish to thank the anonymous reviewers for their valuable suggestions.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

1. Kaur, G.; Garg, H. Multi-Attribute Decision—Making Based on Bonferroni Mean Operators under Cubic Intuitionistic Fuzzy Set Environment. Entropy 2018, 20, 65. [Google Scholar] [CrossRef]
2. Kaur, G.; Garg, H. Generalized cubic intuitionistic fuzzy aggregation operators using t-norm operations and their applications to group decision-making process. Arab. J. Sci. Eng. 2018, 1–20. [Google Scholar] [CrossRef]
3. Atanassov, K.T. Intuitionistic fuzzy sets. Fuzzy Sets Syst. 1986, 20, 87–96. [Google Scholar] [CrossRef]
4. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353. [Google Scholar] [CrossRef][Green Version]
5. Atanassov, K.; Gargov, G. Interval-valued intuitionistic fuzzy sets. Fuzzy Sets Syst. 1989, 31, 343–349. [Google Scholar] [CrossRef]
6. Xu, Z.S. Intuitionistic fuzzy aggregation operators. IEEE Trans. Fuzzy Syst. 2007, 15, 1179–1187. [Google Scholar]
7. Wang, W.; Liu, X.; Qin, Y. Interval-valued intuitionistic fuzzy aggregation operators. J. Syst. Eng. Electron. 2012, 23, 574–580. [Google Scholar] [CrossRef]
8. Garg, H. Generalized intuitionistic fuzzy interactive geometric interaction operators using Einstein t-norm and t-conorm and their application to decision making. Comput. Ind. Eng. 2016, 101, 53–69. [Google Scholar] [CrossRef]
9. Garg, H. Novel intuitionistic fuzzy decision making method based on an improved operation laws and its application. Eng. Appl. Artif. Intell. 2017, 60, 164–174. [Google Scholar] [CrossRef]
10. Wang, W.; Liu, X. Interval-valued intuitionistic fuzzy hybrid weighted averaging operator based on Einstein operation and its application to decision making. J. Intell. Fuzzy Syst. 2013, 25, 279–290. [Google Scholar]
11. Wang, W.; Liu, X. The multi-attribute decision making method based on interval-valued intuitionistic fuzzy Einstein hybrid weighted geometric operator. Comput. Math. Appl. 2013, 66, 1845–1856. [Google Scholar] [CrossRef]
12. Garg, H. A New Generalized Pythagorean Fuzzy Information Aggregation Using Einstein Operations and Its Application to Decision Making. Int. J. Intell. Syst. 2016, 31, 886–920. [Google Scholar] [CrossRef]
13. Garg, H.; Kumar, K. An advanced study on the similarity measures of intuitionistic fuzzy sets based on the set pair analysis theory and their application in decision making. Soft Comput. 2018, 22, 4959–4970. [Google Scholar] [CrossRef]
14. Wei, G.W.; Xu, X.R.; Deng, D.X. Interval-valued dual hesitant fuzzy linguistic geometric aggregation operators in multiple attribute decision making. Int. J. Knowl.-Based Intell. Eng. Syst. 2016, 20, 189–196. [Google Scholar] [CrossRef]
15. Peng, X.; Dai, J.; Garg, H. Exponential operation and aggregation operator for q-rung orthopair fuzzy set and their decision-making method with a new score function. Int. J. Intell. Syst. 2018, 33, 2255–2282. [Google Scholar] [CrossRef]
16. Garg, H. Some robust improved geometric aggregation operators under interval-valued intuitionistic fuzzy environment for multi-criteria decision-making process. J. Ind. Manag. Optim. 2018, 14, 283–308. [Google Scholar] [CrossRef]
17. Liu, P. Some Frank Aggregation Operators for Interval-valued Intuitionistic Fuzzy Numbers and their Application to Group Decision Making. J. Mult.-Valued Log. Soft Comput. 2017, 29, 183–223. [Google Scholar]
18. Chen, S.M.; Cheng, S.H.; Tsai, W.H. Multiple attribute group decision making based on interval-valued intuitionistic fuzzy aggregation operators and transformation techniques of interval-valued intuitionistic fuzzy values. Inf. Sci. 2016, 367–368, 418–442. [Google Scholar] [CrossRef]
19. Garg, H. Some arithmetic operations on the generalized sigmoidal fuzzy numbers and its application. Granul. Comput. 2018, 3, 9–25. [Google Scholar] [CrossRef]
20. Chen, S.M.; Cheng, S.H.; Chiou, C.H. Fuzzy multiattribute group decision making based on intuitionistic fuzzy sets and evidential reasoning methodology. Inf. Fus. 2016, 27, 215–227. [Google Scholar] [CrossRef]
21. Garg, H. A new generalized improved score function of interval-valued intuitionistic fuzzy sets and applications in expert systems. Appl. Soft Comput. 2016, 38, 988–999. [Google Scholar] [CrossRef]
22. Wei, G.W. Interval Valued Hesitant Fuzzy Uncertain Linguistic Aggregation Operators in Multiple Attribute Decision Making. Int. J. Mach. Learn. Cybern. 2016, 7, 1093–1114. [Google Scholar] [CrossRef]
23. Kumar, K.; Garg, H. Connection number of set pair analysis based TOPSIS method on intuitionistic fuzzy sets and their application to decision making. Appl. Intell. 2018, 48, 2112–2119. [Google Scholar] [CrossRef]
24. Kumar, K.; Garg, H. Prioritized Linguistic Interval-Valued Aggregation Operators and Their Applications in Group Decision-Making Problems. Mathematics 2018, 6, 209. [Google Scholar] [CrossRef]
25. Torra, V. Hesitant fuzzy sets. Int. J. Intell. Syst. 2010, 25, 529–539. [Google Scholar] [CrossRef]
26. Zhu, B.; Xu, Z.; Xia, M. Dual Hesitant Fuzzy Sets. J. Appl. Math. 2012, 2012, 13. [Google Scholar] [CrossRef]
27. Xia, M.; Xu, Z.S. Hesitant fuzzy information aggregation in decision-making. Int. J. Approx. Reason. 2011, 52, 395–407. [Google Scholar] [CrossRef]
28. Garg, H.; Arora, R. Dual hesitant fuzzy soft aggregation operators and their application in decision making. Cognit. Comput. 2018, 10, 769–789. [Google Scholar] [CrossRef]
29. Wei, G.; Zhao, X. Induced hesitant interval-valued fuzzy Einstein aggregation operators and their application to multiple attribute decision making. J. Intell. Fuzzy Syst. 2013, 24, 789–803. [Google Scholar]
30. Meng, F.; Chen, X. Correlation Coefficients of Hesitant Fuzzy Sets and Their Application Based on Fuzzy Measures. Cognit. Comput. 2015, 7, 445–463. [Google Scholar] [CrossRef]
31. Garg, H. Hesitant Pythagorean fuzzy Maclaurin symmetric mean operators and its applications to multiattribute decision making process. Int. J. Intell. Syst. 2018, 1–26. [Google Scholar] [CrossRef]
32. Zhao, N.; Xu, Z.; Liu, F. Group decision making with dual hesitant fuzzy preference relations. Cognit. Comput. 2016, 8, 1119–1143. [Google Scholar] [CrossRef]
33. Farhadinia, B.; Xu, Z. Distance and aggregation-based methodologies for hesitant fuzzy decision making. Cognit. Comput. 2017, 9, 81–94. [Google Scholar] [CrossRef]
34. Arora, R.; Garg, H. A robust correlation coefficient measure of dual hesitant fuzzy soft sets and their application in decision making. Eng. Appl. Artif. Intell. 2018, 72, 80–92. [Google Scholar] [CrossRef]
35. Garg, H.; Arora, R. Distance and similarity measures for dual hesitant fuzzy soft sets and their applications in multi criteria decision-making problem. Int. J. Uncertain. Quantif. 2017, 7, 229–248. [Google Scholar] [CrossRef]
36. Wei, G.W.; Alsaadi, F.E.; Hayat, T.; Alsaedi, A. Hesitant fuzzy linguistic arithmetic aggregation operators in multiple attribute decision making. Iran. J. Fuzzy Syst. 2016, 13, 1–16. [Google Scholar]
37. Garg, H. Hesitant Pythagorean fuzzy sets and their aggregation operators in multiple attribute decision making. Int. J. Uncertain. Quantif. 2018, 8, 267–289. [Google Scholar] [CrossRef]
38. Garg, H.; Kumar, K. Group decision making approach based on possibility degree measure under linguistic interval-valued intuitionistic fuzzy set environment. J. Ind. Manag. Optim. 2018, 1–23. [Google Scholar] [CrossRef]
39. Zhu, B.; Xu, Z.S. Probability-hesitant fuzzy sets and the representation of preference relations. Technol. Econ. Dev. Econ. 2018, 24, 1029–1040. [Google Scholar] [CrossRef]
40. Wu, W.; Li, Y.; Ni, Z.; Jin, F.; Zhu, X. Probabilistic Interval-Valued Hesitant Fuzzy Information Aggregation Operators and Their Application to Multi-Attribute Decision Making. Algorithms 2018, 11, 120. [Google Scholar] [CrossRef]
41. Zhang, S.; Xu, Z.; Wu, H. Fusions and preference relations based on probabilistic interval-valued hesitant fuzzy information in group decision making. Soft Comput. 2018, 1–16. [Google Scholar] [CrossRef]
42. Hao, Z.; Xu, Z.; Zhao, H.; Su, Z. Probabilistic dual hesitant fuzzy set and its application in risk evaluation. Knowl.-Based Syst. 2017, 127, 16–28. [Google Scholar] [CrossRef]
43. Li, J.; Wang, J.Q.; Hu, J.H. Multi-criteria decision-making method based on dominance degree and BWM with probabilistic hesitant fuzzy information. Int. J. Mach. Learn. Cybern. 2018, 1–15. [Google Scholar] [CrossRef]
44. Li, J.; Wang, J.Q. Multi-criteria outranking methods with hesitant probabilistic fuzzy sets. Cognit. Comput. 2017, 9, 611–625. [Google Scholar] [CrossRef]
45. Lin, M.; Xu, Z. Probabilistic Linguistic Distance Measures and Their Applications in Multi-criteria Group Decision Making. In Soft Computing Applications for Group Decision-Making and Consensus Modeling; Springer: Berlin, Germany, 2018; pp. 411–440. [Google Scholar]
46. Xu, Z.; He, Y.; Wang, X. An overview of probabilistic-based expressions for qualitative decision-making: Techniques, comparisons and developments. Int. J. Mach. Learn. Cybern. 2018, 1–16. [Google Scholar] [CrossRef]
47. Song, C.; Xu, Z.; Zhao, H. A Novel Comparison of Probabilistic Hesitant Fuzzy Elements in Multi-Criteria Decision Making. Symmetry 2018, 10, 177. [Google Scholar] [CrossRef]
48. Xu, Z.; Zhou, W. Consensus building with a group of decision makers under the hesitant probabilistic fuzzy environment. Fuzzy Optim. Decis. Mak. 2017, 16, 481–503. [Google Scholar] [CrossRef]
49. Zhou, W.; Xu, Z. Probability calculation and element optimization of probabilistic hesitant fuzzy preference relations based on expected consistency. IEEE Trans. Fuzzy Syst. 2018, 26, 1367–1378. [Google Scholar] [CrossRef]
50. Park, J.; Park, Y.; Son, M. Hesitant Probabilistic Fuzzy Information Aggregation Using Einstein Operations. Information 2018, 9, 226. [Google Scholar] [CrossRef]
51. Wang, Z.X.; Li, J. Correlation coefficients of probabilistic hesitant fuzzy elements and their applications to evaluation of the alternatives. Symmetry 2017, 9, 259. [Google Scholar] [CrossRef]
52. Zhou, W.; Xu, Z. Group consistency and group decision making under uncertain probabilistic hesitant fuzzy preference environment. Inf. Sci. 2017, 414, 276–288. [Google Scholar] [CrossRef]
53. Garg, H. Linguistic Pythagorean fuzzy sets and its applications in multiattribute decision-making process. Int. J. Intell. Syst. 2018, 33, 1234–1263. [Google Scholar] [CrossRef]
54. Garg, H. New Logarithmic operational laws and their aggregation operators for Pythagorean fuzzy set and their applications. Int. J. Intell. Syst. 2019, 34, 82–106. [Google Scholar] [CrossRef]
55. Garg, H.; Arora, R. Generalized and Group-based Generalized intuitionistic fuzzy soft sets with applications in decision-making. Appl. Intell. 2018, 48, 343–356. [Google Scholar] [CrossRef]
56. Garg, H.; Nancy. New Logarithmic operational laws and their applications to multiattribute decision making for single-valued neutrosophic numbers. Cognit. Syst. Res. 2018, 52, 931–946. [Google Scholar] [CrossRef]
57. Rani, D.; Garg, H. Complex intuitionistic fuzzy power aggregation operators and their applications in multi-criteria decision-making. Expert Syst. 2018, e12325. [Google Scholar] [CrossRef]
58. Garg, H.; Rani, D. Complex Interval- valued Intuitionistic Fuzzy Sets and their Aggregation Operators. Fund. Inf. 2019, 164, 61–101. [Google Scholar]
59. Liu, X.; Kim, H.; Feng, F.; Alcantud, J. Centroid Transformations of Intuitionistic Fuzzy Values Based on Aggregation Operators. Mathematics 2018, 6, 215. [Google Scholar] [CrossRef]
60. Wang, J.; Wei, G.; Gao, H. Approaches to Multiple Attribute Decision Making with Interval-Valued 2-Tuple Linguistic Pythagorean Fuzzy Information. Mathematics 2018, 6, 201. [Google Scholar] [CrossRef]
61. Garg, H.; Kaur, J. A Novel (R, S)-Norm Entropy Measure of Intuitionistic Fuzzy Sets and Its Applications in Multi-Attribute Decision-Making. Mathematics 2018, 6, 92. [Google Scholar] [CrossRef]
62. Joshi, D.K.; Beg, I.; Kumar, S. Hesitant Probabilistic Fuzzy Linguistic Sets with Applications in Multi-Criteria Group Decision Making Problems. Mathematics 2018, 6, 47. [Google Scholar] [CrossRef]
63. Garg, H.; Nancy. Linguistic single-valued neutrosophic prioritized aggregation operators and their applications to multiple-attribute group decision-making. J. Ambient Intell. Hum. Comput. 2018, 9, 1975–1997. [Google Scholar] [CrossRef]
Table 1. Preference values provided by decision-maker 1.

| | $C_1$ | $C_2$ | $C_3$ | $C_4$ |
|---|---|---|---|---|
| $A_1$ | $(\{0.2\mid 0.4, 0.3\mid 0.6\}, \{0.4\mid 1\})$ | $(\{0.45\mid 0.42, 0.60\mid 0.58\}, \{0.2\mid 0.4, 0.3\mid 0.6\})$ | $(\{0.9\mid 1\}, \{0.1\mid 1\})$ | $(\{0.6\mid 1\}, \{0.3\mid 1\})$ |
| $A_2$ | $(\{0.8\mid 0.9, 0.6\mid 0.1\}, \{0.1\mid 1\})$ | $(\{0.30\mid 1\}, \{0.6\mid 1\})$ | $(\{0.6\mid 1\}, \{0.2\mid 0.5, 0.1\mid 0.5\})$ | $(\{0.2\mid 1\}, \{0.8\mid 1\})$ |
| $A_3$ | $(\{0.05\mid 0.7, 0.2\mid 0.3\}, \{0.5\mid 1\})$ | $(\{0.50\mid 1\}, \{0.5\mid 1\})$ | $(\{0.8\mid 0.6, 0.6\mid 0.4\}, \{0.15\mid 1\})$ | $(\{0.12\mid 1\}, \{0.7\mid 0.9, 0.6\mid 0.1\})$ |
| $A_4$ | $(\{0.4\mid 1\}, \{0.3\mid 0.5, 0.2\mid 0.5\})$ | $(\{0.50\mid 1\}, \{0.2\mid 0.3, 0.4\mid 0.7\})$ | $(\{0.3\mid 1\}, \{0.65\mid 1\})$ | $(\{0.5\mid 1\}, \{0.2\mid 0.3, 0.4\mid 0.7\})$ |
Table 2. Preference values provided by decision-maker 2.

| | $C_1$ | $C_2$ | $C_3$ | $C_4$ |
|---|---|---|---|---|
| $A_1$ | $(\{0.3\mid 0.5, 0.5\mid 0.5\}, \{0.4\mid 1\})$ | $(\{0.20\mid 1\}, \{0.7\mid 0.1\})$ | $(\{0.2\mid 1\}, \{0.4\mid 0.8, 0.6\mid 0.2\})$ | $(\{0.6\mid 0.7, 0.7\mid 0.3\}, \{0.25\mid 1\})$ |
| $A_2$ | $(\{0.2\mid 1\}, \{0.7\mid 1\})$ | $(\{0.30\mid 0.5, 0.2\mid 0.5\}, \{0.20\mid 0.5, 0.15\mid 0.5\})$ | $(\{0.2\mid 1\}, \{0.6\mid 1\})$ | $(\{0.2\mid 0.3, 0.3\mid 0.7\}, \{0.6\mid 1\})$ |
| $A_3$ | $(\{0.4\mid 0.4, 0.5\mid 0.6\}, \{0.5\mid 1\})$ | $(\{0.45\mid 1\}, \{0.5\mid 1\})$ | $(\{0.8\mid 0.4, 0.6\mid 0.6\}, \{0.2\mid 0.7, 0.1\mid 0.3\})$ | $(\{0.1\mid 1\}, \{0.6\mid 0.6, 0.8\mid 0.4\})$ |
| $A_4$ | $(\{0.4\mid 0.2, 0.5\mid 0.8\}, \{0.3\mid 1\})$ | $(\{0.2\mid 0.4, 0.5\mid 0.6\}, \{0.4\mid 0.2, 0.3\mid 0.8\})$ | $(\{0.4\mid 0.1, 0.5\mid 0.9\}, \{0.3\mid 1\})$ | $(\{0.4\mid 1\}, \{0.6\mid 1\})$ |
Table 3. Preference values provided by decision-maker 3.

| | $C_1$ | $C_2$ | $C_3$ | $C_4$ |
|---|---|---|---|---|
| $A_1$ | $(\{0.75\mid 1\}, \{0.2\mid 1\})$ | $(\{0.50\mid 1\}, \{0.2\mid 0.5, 0.5\mid 0.5\})$ | $(\{0.3\mid 1\}, \{0.6\mid 1\})$ | $(\{0.6\mid 1\}, \{0.3\mid 1\})$ |
| $A_2$ | $(\{0.6\mid 0.6, 0.8\mid 0.4\}, \{0.1\mid 1\})$ | $(\{0.20\mid 1\}, \{0.7\mid 1\})$ | $(\{0.9\mid 1\}, \{0.1\mid 1\})$ | $(\{0.3\mid 1\}, \{0.5\mid 0.4, 0.6\mid 0.6\})$ |
| $A_3$ | $(\{0.9\mid 1\}, \{0.1\mid 1\})$ | $(\{0.6\mid 1\}, \{0.25\mid 0.5, 0.1\mid 0.5\})$ | $(\{0.8\mid 1\}, \{0.2\mid 1\})$ | $(\{0.2\mid 1\}, \{0.8\mid 1\})$ |
| $A_4$ | $(\{0.3\mid 0.7, 0.5\mid 0.3\}, \{0.4\mid 0.6, 0.5\mid 0.4\})$ | $(\{0.1\mid 1\}, \{0.8\mid 1\})$ | $(\{0.3\mid 1\}, \{0.3\mid 1\})$ | $(\{0.35\mid 1\}, \{0.6\mid 1\})$ |
Table 4. Comprehensive matrix.

| | $C_1$ | $C_2$ | $C_3$ | $C_4$ |
|---|---|---|---|---|
| $A_1$ | $(\{0.2\mid 0.1333, 0.3\mid 0.3667, 0.5\mid 0.1667, 0.75\mid 0.3333\}, \{0.4\mid 0.6667, 0.2\mid 0.3333\})$ | $(\{0.45\mid 0.14, 0.6\mid 0.1934, 0.2\mid 0.3333, 0.5\mid 0.3333\}, \{0.2\mid 0.3, 0.3\mid 0.2, 0.7\mid 0.3333, 0.5\mid 0.1667\})$ | $(\{0.9\mid 0.3333, 0.2\mid 0.3333, 0.3\mid 0.3334\}, \{0.1\mid 0.3333, 0.4\mid 0.2667, 0.6\mid 0.4\})$ | $(\{0.6\mid 0.9, 0.7\mid 0.1\}, \{0.3\mid 0.6667, 0.25\mid 0.3333\})$ |
| $A_2$ | $(\{0.8\mid 0.4333, 0.6\mid 0.2334, 0.2\mid 0.3333\}, \{0.1\mid 0.6667, 0.7\mid 0.3333\})$ | $(\{0.30\mid 0.75, 0.2\mid 0.5\}, \{0.6\mid 0.3333, 0.2\mid 0.1667, 0.15\mid 0.1667, 0.7\mid 0.3333\})$ | $(\{0.6\mid 0.3333, 0.2\mid 0.3334, 0.9\mid 0.3333\}, \{0.2\mid 0.1667, 0.1\mid 0.6667, 0.6\mid 0.1666\})$ | $(\{0.2\mid 0.4333, 0.3\mid 0.5667\}, \{0.8\mid 0.3333, 0.6\mid 0.3333, 0.5\mid 0.1333\})$ |
| $A_3$ | $(\{0.05\mid 0.2334, 0.2\mid 0.1, 0.4\mid 0.1333, 0.5\mid 0.2, 0.9\mid 0.3333\}, \{0.5\mid 0.6667, 0.1\mid 0.3333\})$ | $(\{0.5\mid 0.3333, 0.45\mid 0.3333, 0.6\mid 0.3334\}, \{0.5\mid 0.6667, 0.2\mid 0.1667, 0.1\mid 0.1666\})$ | $(\{0.8\mid 0.6667, 0.6\mid 0.3333\}, \{0.15\mid 0.3333, 0.2\mid 0.5666, 0.1\mid 0.1\})$ | $(\{0.12\mid 0.3333, 0.1\mid 0.3333, 0.2\mid 0.3334\}, \{0.7\mid 0.3, 0.6\mid 0.2333, 0.8\mid 0.4667\})$ |
| $A_4$ | $(\{0.4\mid 0.4, 0.5\mid 0.3667, 0.3\mid 0.2333\}, \{0.3\mid 0.5, 0.2\mid 0.1667, 0.4\mid 0.2, 0.5\mid 0.1333\})$ | $(\{0.5\mid 0.5333, 0.2\mid 0.1333, 0.1\mid 0.3334\}, \{0.2\mid 0.1, 0.4\mid 0.3, 0.3\mid 0.2667, 0.8\mid 0.3333\})$ | $(\{0.30\mid 0.6667, 0.4\mid 0.0333, 0.5\mid 0.3\}, \{0.65\mid 0.3333, 0.3\mid 0.6667\})$ | $(\{0.5\mid 0.3333, 0.4\mid 0.3333, 0.35\mid 0.3334\}, \{0.2\mid 0.1, 0.4\mid 0.2334, 0.6\mid 0.6666\})$ |
Table 5. Score values of proposed approach.

| Distance | Operator | $A_1$ | $A_2$ | $A_3$ | $A_4$ | Ranking |
|---|---|---|---|---|---|---|
| $d_1$ | PDHFWEA | 0.1810 | 0.1799 | 0.1739 | −0.0002 | $A_1 \succ A_2 \succ A_3 \succ A_4$ |
| | PDHFOWEA | 0.2293 | 0.2239 | 0.2940 | 0.0013 | $A_3 \succ A_1 \succ A_2 \succ A_4$ |
| | PDHFWEG | 0.0937 | −0.0073 | −0.0202 | −0.0545 | $A_1 \succ A_2 \succ A_3 \succ A_4$ |
| | PDHFOWEG | 0.1458 | 0.0283 | 0.0856 | −0.0515 | $A_1 \succ A_3 \succ A_2 \succ A_4$ |
| $d_2$ | PDHFWEA | 0.1968 | 0.0754 | 0.1213 | −0.0459 | $A_1 \succ A_3 \succ A_2 \succ A_4$ |
| | PDHFOWEA | 0.1684 | 0.0832 | 0.0971 | −0.0472 | $A_1 \succ A_3 \succ A_2 \succ A_4$ |
| | PDHFWEG | 0.1006 | −0.1189 | −0.1072 | −0.1056 | $A_1 \succ A_4 \succ A_3 \succ A_2$ |
| | PDHFOWEG | 0.0691 | −0.1118 | −0.1268 | −0.1091 | $A_1 \succ A_4 \succ A_2 \succ A_3$ |
Table 6. Comparison of overall rating values and ranking order of alternatives.

| Approach | Operator | $A_1$ | $A_2$ | $A_3$ | $A_4$ | Ranking |
|---|---|---|---|---|---|---|
| Hao et al. [42] | PDHFWA | 0.1985 | 0.2135 | 0.2061 | 0.0098 | $A_2 \succ A_3 \succ A_1 \succ A_4$ |
| Park et al. [50] | HPFEWA | 0.5131 | 0.4915 | 0.5243 | 0.3917 | $A_3 \succ A_1 \succ A_2 \succ A_4$ |
| | HPFEWG | 0.4569 | 0.4094 | 0.4056 | 0.3723 | $A_1 \succ A_2 \succ A_3 \succ A_4$ |
| Xu and Zhou [48] | HPFWA | 0.5253 | 0.5091 | 0.5445 | 0.3953 | $A_3 \succ A_1 \succ A_2 \succ A_4$ |
| | HPFWG | 0.4457 | 0.3937 | 0.3837 | 0.3685 | $A_1 \succ A_2 \succ A_3 \succ A_4$ |
| | HPFOWA | 0.5585 | 0.5215 | 0.6078 | 0.3957 | $A_3 \succ A_1 \succ A_2 \succ A_4$ |
| | HPFOWG | 0.4826 | 0.3998 | 0.4385 | 0.3699 | $A_1 \succ A_3 \succ A_2 \succ A_4$ |
Table 7. Characteristic comparison of the proposed approach with different methods.

| Methods | Considers More Than One Decision Maker | Considers Probabilities | Considers Non-Membership | Weights Derived by Non-Linear Approach |
|---|---|---|---|---|
| Hao et al. [42] | ✓ | ✓ | ✓ | × |
| Park et al. [50] | × | ✓ | × | × |
| Xu and Zhou [48] | ✓ | ✓ | × | × |
| Our proposed approach | ✓ | ✓ | ✓ | ✓ |