1. Introduction
The consideration of the relative importance of the criteria concerned is of great significance in most decision environments and decision theories, including bounded rationality [1], fuzzy group decision-making [2], order-based decision models [3], multi-criteria decision-making (MCDM) [4], non-additive-measure-based decision-making [5,6], preference-involved decision-making [7], random and stochastic decision-making [8], and interactive decision-making [9]; the normalized weight function/vector thus serves as a very suitable embodiment of relative importance. There are numerous methods to generate a normalized weight function, such as the eigenvector-based method of the analytic hierarchy process (AHP) [10], which has a myriad of applications [11,12], and the method used in the ordered weighted averaging (OWA) operator [13] and some of its extensions [14,15,16,17].
Both the methods adopted in AHP and OWA involve subjectivity and objectivity. Note that the involvement of subjectivity does not in general imply arbitrariness or a lack of seriousness; on the contrary, subjectivity is ordinarily linked directly to the working experience, or derived indirectly from the expertise, of decision makers [14].
Many different types of preference are often embedded or embodied in decision-making and aggregation problems [18,19,20,21,22]. Yager proposed generating weight functions using inducing information and bipolar preference [15,16]. Firstly, the inducing information should be embodied by a function/vector that corresponds exactly to the input function/vector. For example, the inducing information can be the different time points at which the input values are obtained, the certainty extents to which the input values are thought to be convincing, or the magnitudes of the input values in their own right. Moreover, the applied bipolar preference should, in its practical meaning, generally pertain to the inducing information concerned. For example, if the inducing information is about time points, then the bipolar preference could be a newer–older preference; if it is about certainty extents, then it should be a more–less certainty preference or an indifference–certainty preference; and if it is about magnitudes, then it should be an optimism–pessimism preference. Lastly, the weight allocation can be conducted by a number of techniques; later we will use a quantifier-based method that is convenient for the related discussions.
Uncertainties are pervasive in practical MCDM problems, and researchers recently proposed an uncertainty paradigm called basic uncertain information (BUI) [23,24] to tackle, effectively and conveniently, a wide variety of uncertainties involved in decision-making and evaluation problems. Since there is a paucity of literature dealing with information-fusion-based MCDM in a BUI environment, this work will mainly focus on the certainty degree as the inducing information for MCDM in this new type of generalized and formalized uncertain decision environment. When only a single certainty inducing variable is concerned, the problem is not complex: once the extent of the indifference–certainty preference is determined, we can easily perform quantifier-based weight allocation. However, MCDM problems frequently involve further factors, such as the different experts consulted, the different extents of certainty attached to both the inputs and the importance of the criteria, the combination of the magnitudes and the certainty extents of both, and the necessity of considering those decision elements in a comprehensive or merging sense.
That is to say, in MCDM problems, where far more complexity may arise, decision makers should in general generate, consider, and handle multiple, complex pieces of inducing information rather than a single simple form. Hence, this article will discuss in detail some merging selections and special merging techniques for inducing information, together with some paradigmatic or prescriptive decision-making suggestions for decision makers to refer to.
Note that in MCDM problems in a BUI environment, there are many restrictions on the selection of methods for merging different pieces of inducing information, which makes the problem more complex. For example, if a certainty degree is associated only with a normalized weight vector as a whole, then it cannot be merged inward with any entry of that normalized weight vector; but if different certainties are specifically linked with the different entries of a non-normalized weight vector in a pointwise way, then each certainty can be merged with the corresponding entry of the non-normalized weight vector. Clearly, the few existing traditional methods cannot work here, because none of them adequately considers the involved numerical uncertainties.
The theoretical advantages and contributions of this study lie in making clearer how to reasonably consider several different types of inducing information in MCDM problems, and how to selectively merge some of them in order to generate desirable weight functions with bipolar preferences. The study will help decision makers build and select suitable, automatic, and relatively objective weighted evaluation models with the given information and under their own preferences.
The remainder of this article is organized as follows. In Section 2, we mainly review some basic concepts and propose a general weight-allocation paradigm with some extended instances. Section 3 discusses the differences in generating a normalized weight vector from given importance information, which takes two different uncertain forms, and then proposes some detailed generating methods. In Section 4, we analyze some different methods and orderings for generating a normalized weight vector from BUI inputs. Section 5 concludes and comments on this study.
2. Weight-Allocation Methods and Aggregation Based on an Inducing Variable
Without loss of generality, a real-valued input with n individual values is represented as a function/vector x: {1, …, n} → ℝ, and the set of all such input functions is conventionally denoted by ℝ^n. Given an n-ary input function x, in order to carry out some further aggregation we need a normalized weight function/vector w of dimension n (i.e., w: {1, …, n} → [0, 1] with Σ_{i=1}^{n} w(i) = 1), and each value w(i) will be associated with the input value x(i). The space of all normalized weight functions of dimension n is denoted by 𝒲_n.
In this study, we will often consider a set of n criteria {C_1, …, C_n}, which can be used to comprehensively evaluate the alternatives or options under consideration in MCDM. Hence, the relative importance of those n criteria will be expressed as a normalized weight function of dimension n, w ∈ 𝒲_n. However, we will also be faced with the concept of the "importance" of (each) criterion, which will be expressed by a weight function/vector d that is not necessarily normalized and should be distinguished from the concept of "relative importance".
With a given normalized weight vector w, we can use it to perform preference-involved aggregations such as the weighted average (also known as the weighted mean) and the geometric weighted average (also known as the geometric weighted mean). Whether or not the weight vector is derived for OWA aggregation, we can always express the corresponding (geometric) OWA operators by (geometric) weighted means [25].
Definition 1. The weighted average operator with weight function w ∈ 𝒲_n is defined as the mapping WA_w: ℝ^n → ℝ, such that
WA_w(x) = Σ_{i=1}^{n} w(i) x(i). (1)
Definition 2. The geometric weighted average operator with weight function w ∈ 𝒲_n is defined as the mapping GWA_w: (0, +∞)^n → (0, +∞), such that
GWA_w(x) = Π_{i=1}^{n} x(i)^{w(i)}. (2)
The input can also be formed by m individuals, each of which is a normalized weight function of dimension n. The space of all such inputs is denoted by (𝒲_n)^m. With a weight function v ∈ 𝒲_m, we can define the average of the collection of input functions (w_k)_{k=1}^{m} by using a mapping WAW_v: (𝒲_n)^m → 𝒲_n, called the Weighted Average for Weights (WAW), such that
WAW_v(w_1, …, w_m) = Σ_{k=1}^{m} v(k) w_k. (3)
Note that since each w_k is a normalized weight function of dimension n, WAW_v(w_1, …, w_m) is still a normalized weight function of the same dimension n.
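These three operators can be sketched in Python as follows (a minimal sketch; the function names and the plain-list representation of weight functions are ours, not the paper's):

```python
import math

def weighted_average(w, x):
    """Definition 1: WA_w(x) = sum_i w(i) * x(i), with w a normalized weight vector."""
    return sum(wi * xi for wi, xi in zip(w, x))

def geometric_weighted_average(w, x):
    """Definition 2: GWA_w(x) = prod_i x(i) ** w(i); requires every x(i) > 0."""
    return math.prod(xi ** wi for wi, xi in zip(w, x))

def waw(v, ws):
    """Weighted Average for Weights: merges m normalized weight functions
    ws (each of dimension n) into one, using v in W_m as the weights."""
    n = len(ws[0])
    return [sum(vk * wk[i] for vk, wk in zip(v, ws)) for i in range(n)]
```

Since each w_k sums to 1 and v sums to 1, the output of `waw` again sums to 1, matching the note above.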
To determine a normalized weight function with certain inducing information, we will use the method originally proposed by Yager for the induced ordered weighted averaging (IOWA) operator [15,16]. The original method did not fully consider the effect of tied inducing values in the actual weight-allocation process; Jin et al. [26] further developed a three-set expression to deal accurately and strictly with the involved weight-allocation problem.
In this study, a piece of inducing information (also called an inducing function) of dimension n is expressed by a function c: {1, …, n} → ℝ, which generally corresponds to the n criteria considered in MCDM or to the input function of dimension n. With a piece of inducing information and a well-defined function called a Regular Increasing Monotone (RIM) quantifier [16], we can generate a normalized weight function. A RIM quantifier Q: [0, 1] → [0, 1] is non-decreasing and satisfies the boundary conditions Q(0) = 0 and Q(1) = 1. We denote by 𝒬 the space of all RIM quantifiers. In addition, the orness of any RIM quantifier Q is defined by orness(Q) = ∫₀¹ Q(t) dt, whose value indicates the preference extent in a general way [16].
The weight-generating method that takes the given inducing function c and RIM quantifier Q and returns a normalized weight function w can be rephrased and revamped by the following formula:
w(i) = [ Q( (|S_i^>| + |S_i^=|) / n ) − Q( |S_i^>| / n ) ] / |S_i^=|, (4)
where S_i^> = {j : c(j) > c(i)}, S_i^= = {j : c(j) = c(i)}, S_i^< = {j : c(j) < c(i)}, and |S| represents the cardinality of any finite set S.
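A minimal Python sketch of this allocation rule, assuming the three-set tie-handling form given above (the name `rim_weights` is ours):

```python
def rim_weights(c, Q):
    """Allocate a normalized weight function from inducing values c and a
    RIM quantifier Q. For each i, r counts inducing values strictly larger
    than c[i], and t counts values tied with c[i] (including itself)."""
    n = len(c)
    w = []
    for ci in c:
        r = sum(1 for cj in c if cj > ci)
        t = sum(1 for cj in c if cj == ci)
        w.append((Q((r + t) / n) - Q(r / n)) / t)
    return w
```

With the neutral quantifier Q(t) = t, every position receives 1/n; a quantifier with orness greater than 1/2, for instance Q(t) = t^0.5, shifts weight toward the larger inducing values, and tied inducing values always receive equal weights.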
In this study, for the sake of convenience, we use a straightforward and strict way to express the weight allocation with the given information, a function/vector c of dimension n, and a RIM quantifier Q:
W: ℝ^n × 𝒬 → 𝒲_n, (5)
and
w = W(c, Q), with each w(i) given by Equation (4). (6)
The mapping W in Equation (5) is called the general weight-allocation formulation, and its value W(c, Q) on given c and Q is called a weight allocation paradigm.
Recall that basic uncertain information (BUI) [23,24] is a recently proposed uncertainty concept that generalizes many different types of uncertainties, such as fuzzy information [27], intuitionistic fuzzy information [28,29,30,31], probability information, interval information, hesitant information [32,33,34,35], and some other types of uncertain information [36]. With some methods or formulations, those different types of uncertain information may be indirectly transformed into BUI (which will not be discussed further in this work). Another feature of BUI lies in the fact that the certainty/uncertainty extents may also be communicated or expressed directly by experts. BUI can conveniently express the extent of uncertainty in decision-making, which helps one make reasonable evaluations and wise decisions in uncertain environments. Recently, apart from the authors of this work, other researchers have also paid attention to BUI and developed related theories, new concepts, and applications [37,38,39,40,41,42,43]. For example, Chen et al. [37] proposed Improved Basic Uncertain Linguistic Information (IBULI) as a new extension of BUI, and Tao et al. [38] proposed the basic uncertain information soft set with its applications.
A BUI is a pair ⟨x, c⟩ in which x is the mainly concerned data (also called the value element, to distinguish it from the certainty element in this work) and c ∈ [0, 1] is its associated certainty degree (also called the certainty element), generally representing the extent to which x takes exactly its value, or the degree to which the decision makers involved believe that it takes that value. The originally defined BUI has a very simple pair form, but it can actually also assume different extended forms. For example, when x (or w) and c are two functions of the same dimension n, the form ⟨x, c⟩ (or ⟨w, c⟩) can also easily be recognized as a BUI pair, understood pointwise as ⟨x(i), c(i)⟩ (i = 1, …, n). Sometimes, the value element in a BUI is a normalized weight vector w ∈ 𝒲_n, and then c should be recognized as the certainty degree of the whole vector w rather than of any single entry w(i).
We can perform weight allocations according to Equations (5) and (6) with the different types of BUI inputs discussed in the preceding paragraph. For example, when we are given a collection of BUI pairs ⟨x_i, c_i⟩ (i = 1, …, n), we can use Equation (6) to obtain a normalized weight function w = W(c, Q) for further information aggregation, in which c = (c_1, …, c_n). When we are given a pair of functions ⟨x, c⟩ of dimension n, then we likewise have w = W(c, Q), in which c = (c(1), …, c(n)). When we are given a collection ⟨w_k, c_k⟩ (k = 1, …, m), where each w_k ∈ 𝒲_n and c_k is the certainty degree of w_k as a whole, we consequently have v = W(c, Q) ∈ 𝒲_m, in which c = (c_1, …, c_m), and the merged normalized weight function WAW_v(w_1, …, w_m).
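For instance, the first case might be computed as follows; the allocation function restates the three-set rule from Section 2 for self-containment, and all input numbers are illustrative:

```python
def rim_weights(c, Q):
    """Three-set quantifier-based weight allocation (restated from Section 2)."""
    n = len(c)
    w = []
    for ci in c:
        r = sum(1 for cj in c if cj > ci)
        t = sum(1 for cj in c if cj == ci)
        w.append((Q((r + t) / n) - Q(r / n)) / t)
    return w

# Three BUI pairs <x_i, c_i>; every number below is made up.
x = [0.6, 0.8, 0.4]            # value elements
c = [0.9, 0.5, 0.7]            # certainty elements

Q = lambda t: t ** 0.5         # RIM quantifier with orness(Q) = 2/3
w = rim_weights(c, Q)          # more certain inputs receive larger weights
aggregated = sum(wi * xi for wi, xi in zip(w, x))
```

Here the certainty elements serve as the inducing information, so the most certain value element contributes the most to the aggregated result.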
3. Generating Relative Importance from Given Importance Information
In MCDM problems, probably the most essential task is to determine the relative importance of the multiple criteria considered. A widely accepted method is to allocate and form a normalized weight function to embody the relative importance of the n criteria. In general, it is much easier to assign a single importance rate to each criterion than to assign a whole normalized weight vector to all of the criteria together. Hence, the weight-allocation methods of this section begin with a simple form that involves a function/vector d that is, in general, not normalized. We assume that d corresponds to the n criteria C_1, …, C_n, so that d(i) is the importance extent of criterion C_i; that is, the larger d(i), the more important the criterion C_i in the comprehensive evaluation.
Next, in order to generate a normalized weight function, we consider two methods that are simple but effective in some situations. In these methods, we suppose that the decision maker carries out the whole weight-allocation process alone, so that the importance of the criteria is judged by the decision maker alone.
The first method is more direct and even somewhat simplistic. If Σ_{i=1}^{n} d(i) > 0, then we can easily obtain a normalized weight function w by w(i) = d(i) / Σ_{j=1}^{n} d(j); and if Σ_{i=1}^{n} d(i) = 0, then we set w(i) = 1/n by the Laplace decision criterion. Alternatively, we may preset a number ε > 0 and generate w by w(i) = (d(i) + ε) / Σ_{j=1}^{n} (d(j) + ε), in which case whether or not Σ_{i=1}^{n} d(i) = 0 no longer matters.
The second method is to perform IOWA weight allocation with the paradigm w = W(d, Q), where Q is a RIM quantifier. In this case, we need orness(Q) > 1/2, because such a quantifier adequately represents a preference for the criteria with higher importance values over those with lower importance values.
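The first method can be sketched as follows (a minimal sketch; the ε-variant is folded into one function, and the name is ours):

```python
def normalize_importance(d, eps=0.0):
    """Turn an importance function d (not necessarily normalized) into a
    normalized weight function. With eps > 0 the zero-total case needs no
    special treatment; with eps = 0 it falls back to the Laplace criterion."""
    n = len(d)
    total = sum(d) + n * eps
    if total == 0:
        return [1.0 / n] * n   # Laplace decision criterion: equal weights
    return [(di + eps) / total for di in d]
```

Note that a larger ε pulls the weights toward the uniform vector, so ε also acts as a mild smoothing parameter.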
Next, we consider two other situations, in which multiple experts are invited to join the determination of the relative importance of the criteria. Assume that m experts have been invited to offer or suggest their own opinions about the importance of the n criteria, represented by m weight functions d_k (k = 1, …, m). With such initial importance functions, there are two types of uncertainty involvement in the opinions of those experts, which can be represented by the two different extensions of BUI below.
The first is to assign, one-to-one, an individual certainty degree c_k(i) to each weight value d_k(i), where c_k(i) may vary with respect to both i and k. Thus, different experts may offer different importance functions d_k and different certainty functions c_k. The initial information provided by the m experts can thus be formulated as the collection ⟨d_k, c_k⟩ (k = 1, …, m) of m extended BUI, called Varying Certainties for Weight Values (VCWV).
The second is to assign a single certainty degree c_k to the importance information d_k as a whole. The formulation for this type of uncertainty is ⟨d_k, c_k⟩ (k = 1, …, m), with the c_k ∈ [0, 1] being certainty degrees (real values); it is called Constant Certainty for Weight Function (CCWF). Note that c_k does not vary with i but may vary with k.
The two types of uncertainty-involved importance information mentioned above, VCWV and CCWF, differ significantly in weight allocation. For the first type, we can use a binary aggregation operator A [26,44] to melt d_k(i) with c_k(i) and obtain the intermediate indices t_k(i) = A(d_k(i), c_k(i)) (i = 1, …, n; k = 1, …, m) as intermediate information to help further determine the weight allocation; in addition, for a fixed i, the certainties (c_k(i))_{k=1}^{m} can be used as inducing information, because the values d_k(i) assigned higher certainties c_k(i) will generally be more convincing to the decision maker. For the second type, however, we cannot consider any form of melting c_k with the entries of d_k, and we will allocate weights merely according to the certainty information (c_1, …, c_m) as the inducing information; in addition, note that c_k cannot be used as inducing information for any single value d_k(i) with any fixed i. This is because we have regarded d_k as an independent whole, which has c_k as its certainty.
The major difference mentioned above leads to differences in the detailed weight-allocation processes. We next present two weight-allocation methods for VCWV and one method for CCWF.
3.1. Weight Allocation for VCWV—Method 1
First, select a binary aggregation operator A: [0, 1]² → [0, 1] to obtain the mn values t_k(i) = A(d_k(i), c_k(i)) (i = 1, …, n; k = 1, …, m). Since any aggregation operator is non-decreasing, this monotonicity ensures that both a larger importance d_k(i) and a larger certainty c_k(i) of it will contribute to a larger relative importance of criterion C_i, and vice versa. Then, a final normalized weight function w can be obtained by
w = W(t̄, Q), (7)
where t̄(i) = (1/m) Σ_{k=1}^{m} t_k(i) (i = 1, …, n) and Q is a RIM quantifier with orness(Q) > 1/2.
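A sketch of Method 1, assuming the product as the melting operator A and plain averaging over the m experts as the pooling step (both choices are our illustrative assumptions, and the allocation function restates the three-set rule from Section 2):

```python
def rim_weights(c, Q):
    """Three-set quantifier-based weight allocation (restated from Section 2)."""
    n = len(c)
    w = []
    for ci in c:
        r = sum(1 for cj in c if cj > ci)
        t = sum(1 for cj in c if cj == ci)
        w.append((Q((r + t) / n) - Q(r / n)) / t)
    return w

def vcwv_method1(d, c, A, Q):
    """d[k][i]: importance of criterion i from expert k; c[k][i]: its certainty.
    Melt each pair with the binary operator A, pool over the m experts by
    plain averaging, then allocate with the RIM quantifier Q."""
    m, n = len(d), len(d[0])
    melted = [sum(A(d[k][i], c[k][i]) for k in range(m)) / m for i in range(n)]
    return rim_weights(melted, Q)

# Two experts, three criteria; every number below is made up.
d = [[0.9, 0.4, 0.6], [0.8, 0.5, 0.7]]
c = [[1.0, 0.6, 0.8], [0.9, 0.7, 0.5]]
w = vcwv_method1(d, c, lambda a, b: a * b, lambda t: t ** 0.5)
```

With the product as A, a criterion that is both highly rated and rated with high certainty ends up with the largest melted index, and hence the largest weight under a quantifier with orness above 1/2.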
3.2. Weight Allocation for VCWV—Method 2
First, for each i ∈ {1, …, n}, obtain a normalized weight function v_i ∈ 𝒲_m by v_i = W((c_k(i))_{k=1}^{m}, Q), where (c_k(i))_{k=1}^{m} is the vector of the m experts' certainties for criterion C_i and Q is a RIM quantifier with orness(Q) > 1/2. Then, for each i, generate a BUI pair ⟨D(i), C(i)⟩ = ⟨Σ_{k=1}^{m} v_i(k) d_k(i), Σ_{k=1}^{m} v_i(k) c_k(i)⟩. Finally, obtain a normalized weight function w by
w = W(C, Q), (8)
where C = (C(1), …, C(n)) and Q is a RIM quantifier with orness(Q) > 1/2. Note that the obtained w will be used (if necessary) to weight and aggregate the values D(i) (i = 1, …, n).
Example 1. We present a simple numerical example of generating a weight function with the above weight allocation for VCWV—Method 2. The example is also representative: after seeing it, the other methods introduced in this work will not be difficult to understand.
Assume that the m experts provide importance functions d_k with pointwise certainty functions c_k, and suppose the RIM quantifier Q satisfies orness(Q) > 1/2. Firstly, we calculate, for each criterion, the normalized weight function v_i = W((c_k(i))_{k=1}^{m}, Q) over the experts. Then, by taking weighted averages, we obtain, respectively, the BUI pairs ⟨D(i), C(i)⟩ (i = 1, …, n). Consequently, the final normalized weight function is w = W(C, Q).
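The steps of Example 1 can be reproduced mechanically; the following sketch runs Method 2 end-to-end on made-up inputs (the numbers below are NOT those of the example, and the allocation function restates the three-set rule from Section 2):

```python
def rim_weights(c, Q):
    """Three-set quantifier-based weight allocation (restated from Section 2)."""
    n = len(c)
    w = []
    for ci in c:
        r = sum(1 for cj in c if cj > ci)
        t = sum(1 for cj in c if cj == ci)
        w.append((Q((r + t) / n) - Q(r / n)) / t)
    return w

def vcwv_method2(d, c, Q):
    """d[k][i]: importance of criterion i from expert k; c[k][i]: its certainty.
    Returns the merged BUI pairs <D(i), C(i)> and the final weights w."""
    m, n = len(d), len(d[0])
    D, C = [], []
    for i in range(n):
        v = rim_weights([c[k][i] for k in range(m)], Q)  # weights over experts
        D.append(sum(v[k] * d[k][i] for k in range(m)))
        C.append(sum(v[k] * c[k][i] for k in range(m)))
    return D, C, rim_weights(C, Q)

# Two experts, three criteria; every number below is made up.
d = [[0.9, 0.4, 0.6], [0.8, 0.5, 0.7]]
c = [[1.0, 0.6, 0.8], [0.9, 0.7, 0.5]]
D, C, w = vcwv_method2(d, c, lambda t: t ** 0.5)  # orness(Q) = 2/3
```

Each criterion's merged certainty C(i) stays in [0, 1], and the criterion whose experts are most certain overall receives the largest final weight.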
3.3. Weight Allocation for CCWF
We next present a weight-allocation method for CCWF, which is relatively simpler.
First, obtain a normalized weight function v ∈ 𝒲_m by v = W((c_1, …, c_m), Q) with orness(Q) > 1/2. Then, take the weighted average of the d_k (k = 1, …, m) by using v, and obtain W̄ such that W̄ = Σ_{k=1}^{m} v(k) d_k. Finally, normalize W̄ and obtain a final normalized weight function w by w(i) = (W̄(i) + ε) / Σ_{j=1}^{n} (W̄(j) + ε), where ε ≥ 0 is a preset real value.
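A sketch of this CCWF procedure, with the quantifier-based allocation restated for self-containment (all inputs are illustrative):

```python
def rim_weights(c, Q):
    """Three-set quantifier-based weight allocation (restated from Section 2)."""
    n = len(c)
    w = []
    for ci in c:
        r = sum(1 for cj in c if cj > ci)
        t = sum(1 for cj in c if cj == ci)
        w.append((Q((r + t) / n) - Q(r / n)) / t)
    return w

def ccwf_weights(d, c, Q, eps=0.0):
    """d[k]: expert k's (non-normalized) importance function; c[k]: the single
    certainty degree attached to d[k] as a whole. Allocate weights over the
    experts from the certainties, merge the importance functions, normalize."""
    m, n = len(d), len(d[0])
    v = rim_weights(c, Q)                                  # v in W_m
    merged = [sum(v[k] * d[k][i] for k in range(m)) for i in range(n)]
    total = sum(merged) + n * eps
    if total == 0:
        return [1.0 / n] * n
    return [(mi + eps) / total for mi in merged]

# Two experts, three criteria; every number below is made up.
d = [[0.9, 0.4, 0.6], [0.2, 0.8, 0.5]]
w = ccwf_weights(d, [0.9, 0.4], lambda t: t ** 0.5)
```

Because each certainty applies to an expert's importance function as a whole, the certainties here induce weights over the experts rather than over individual importance values, in line with the CCWF restriction discussed above.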