1. Introduction
Decision Support Systems (DSS) integrated with Artificial Neural Networks (ANNs) are gaining significant attention among researchers worldwide. Various multi-criteria decision-making (MCDM) methodologies have been proposed to address complex decision problems. For instance, a methodology for transforming depressed industrial areas into sustainable urban spaces, prioritizing social, environmental, and economic factors, was developed in [1]. Theoretical foundations, empirical validations, and a framework for interdisciplinary applications in sequential decision-making were explored in [2]. Several studies have introduced novel approaches to handle uncertainty in decision-making. Interval-valued linguistic neutrosophic sets, Z-numbers, and the trapezium cloud model were effectively combined to address randomness and uncertainty in the Z-IVLNS-TTC model, which minimizes information loss and distortion while developing a novel multi-objective programming approach for weight calculation; a group decision-making problem and a sensitivity analysis demonstrated the model's practicality and superiority over existing methods in [3]. In the field of ranking methodologies, a novel coefficient aiding educators and institutions in selecting effective software solutions for undergraduate power systems instruction was proposed in [4]. Two-dimensional linguistic intuitionistic fuzzy variables (2DLIFVs) were introduced to model expert cognitive information while ensuring evaluation reliability in [5]. Furthermore, a novel score function, a distance measure, and four new aggregation operators were defined to rank and aggregate 2DLIFVs in a complex decision-making process involving multiple conflicting criteria, such as cost, safety, and personal preferences, in [6]. The VIKOR method, combined with fuzzy logic, has been widely used to provide effective compromise solutions. The challenge of selecting appropriate companies for financing in sustainable supply chain finance was addressed with a Multi-Attribute Group Decision-Making (MAGDM) method in [7]. Similarly, a three-way VIKOR method that combines ranking and classification to handle multi-attribute decision-making with conflicting attributes was introduced in [8]. Additionally, a new correlation coefficient (CC) for evaluating relationships between linguistic intuitionistic fuzzy sets (LIFSs) using linguistic intuitionistic fuzzy numbers (LIFNs) was proposed in [9]. Advancements in fuzzy set theory and optimization techniques have also contributed to improving decision-making models. The normal wiggly hesitant fuzzy set was introduced in [10] to overcome limitations of traditional hesitant fuzzy sets, capturing both the explicit and implicit preferences of decision-makers. A hybrid model combining the Gray Wolf Optimization and Archimedes Optimization Algorithms (AOA) with an ANN was developed in [11] to predict construction and demolition waste quantities more accurately; using data from 200 real-life projects in the Gaza Strip, the AOA-ANN model demonstrated superior accuracy compared to other models. Furthermore, machine learning (ML) and deep learning (DL) techniques have been applied to decision-making. Random Forest (RF), Support Vector Machine (SVM), and ANN models were evaluated in a study on ML-based decision-making in [12], with RF showing the highest accuracy. An ANN model for accurately forecasting sky temperature in Djibouti, a hot and humid climate where existing estimation models perform poorly, was introduced in [13]. The relationships between microRNA and cytokine profiles of hematopoietic progenitors from cord blood in hematopoiesis regulation were analyzed in [14]. The MAX78000, an ultra-low-power Edge AI microcontroller with a hardware-based convolutional neural network (CNN) accelerator, was evaluated in [15], with a focus on its behavior in radiation environments. RGB images and multispectral data from UAVs were integrated in [16] to classify five crops using machine learning techniques. The IVN Fuzzy TOPSIS method was applied in [17] to select the optimal location for international education fairs in Türkiye. Two new group decision-making approaches using the linguistic intuitionistic fuzzy Yager weighted arithmetic aggregation operator were proposed in [18]. A Linguistic Intuitionistic Fuzzy ANN was introduced in [19]. In this paper, new aggregation operators for linguistic intuitionistic fuzzy sets are proposed, along with a novel correlation coefficient. The proposed methods are applied to select the best alternative using the VIKOR method and ANN-based decision-making approaches.
2. New Aggregation Operators for Linguistic Intuitionistic Fuzzy Numbers
Definition 1. The Lin-IFWAA operator: Let $\alpha_j = (s_{\theta_j}, s_{\sigma_j})$ for $j = 1, 2, \dots, n$ be a collection of Linguistic Intuitionistic Fuzzy numbers defined on the linguistic term set $S_{[0,t]}$. The Linguistic Intuitionistic Fuzzy Weighted Arithmetic Averaging (Lin-IFWAA) operator is defined as
$$\text{Lin-IFWAA}(\alpha_1, \alpha_2, \dots, \alpha_n) = \bigoplus_{j=1}^{n} \omega_j \alpha_j = \left( s_{t\left(1 - \prod_{j=1}^{n}\left(1 - \frac{\theta_j}{t}\right)^{\omega_j}\right)},\; s_{t\prod_{j=1}^{n}\left(\frac{\sigma_j}{t}\right)^{\omega_j}} \right),$$
where $\omega = (\omega_1, \omega_2, \dots, \omega_n)^T$ is the weight vector of $\alpha_j$, with $\omega_j \in [0, 1]$ and $\sum_{j=1}^{n} \omega_j = 1$ for $j = 1, 2, \dots, n$.
Definition 2. The Lin-IFWG operator: Let $\alpha_j = (s_{\theta_j}, s_{\sigma_j})$ for $j = 1, 2, \dots, n$ be a collection of Linguistic Intuitionistic Fuzzy numbers. The Linguistic Intuitionistic Fuzzy Weighted Geometric (Lin-IFWG) operator is defined as
$$\text{Lin-IFWG}(\alpha_1, \alpha_2, \dots, \alpha_n) = \bigotimes_{j=1}^{n} \alpha_j^{\omega_j} = \left( s_{t\prod_{j=1}^{n}\left(\frac{\theta_j}{t}\right)^{\omega_j}},\; s_{t\left(1 - \prod_{j=1}^{n}\left(1 - \frac{\sigma_j}{t}\right)^{\omega_j}\right)} \right),$$
where $\omega = (\omega_1, \omega_2, \dots, \omega_n)^T$ is the weight vector of $\alpha_j$, with $\omega_j \in [0, 1]$ and $\sum_{j=1}^{n} \omega_j = 1$ for $j = 1, 2, \dots, n$.
Definition 3. The Lin-IFOWG operator: Let $\alpha_j = (s_{\theta_j}, s_{\sigma_j})$ for $j = 1, 2, \dots, n$ be a collection of Linguistic Intuitionistic Fuzzy numbers. The Linguistic Intuitionistic Fuzzy Ordered Weighted Geometric (Lin-IFOWG) operator is defined as
$$\text{Lin-IFOWG}(\alpha_1, \alpha_2, \dots, \alpha_n) = \bigotimes_{j=1}^{n} \alpha_{\delta(j)}^{\omega_j},$$
where $\delta$ is a permutation of $(1, 2, \dots, n)$ such that $\alpha_{\delta(j-1)} \geq \alpha_{\delta(j)}$ for all $j = 2, \dots, n$, and $\omega = (\omega_1, \omega_2, \dots, \omega_n)^T$ is the associated weight vector, with $\omega_j \in [0, 1]$ and $\sum_{j=1}^{n} \omega_j = 1$ for $j = 1, 2, \dots, n$.
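To make the closed forms concrete, the following minimal Python sketch evaluates Lin-IFWAA and Lin-IFWG; the linguistic scale bound t = 8, the sample LIFNs, and the weights are hypothetical illustrations, not data from this paper.

import math

# Hypothetical linguistic scale S_[0,t] with t = 8
T = 8

def lin_ifwaa(lifns, weights, t=T):
    # Lin-IFWAA: weighted arithmetic averaging of LIFNs (theta_j, sigma_j)
    prod_theta = math.prod((1 - th / t) ** w for (th, _), w in zip(lifns, weights))
    prod_sigma = math.prod((sg / t) ** w for (_, sg), w in zip(lifns, weights))
    return (t * (1 - prod_theta), t * prod_sigma)

def lin_ifwg(lifns, weights, t=T):
    # Lin-IFWG: weighted geometric aggregation of LIFNs
    prod_theta = math.prod((th / t) ** w for (th, _), w in zip(lifns, weights))
    prod_sigma = math.prod((1 - sg / t) ** w for (_, sg), w in zip(lifns, weights))
    return (t * prod_theta, t * (1 - prod_sigma))

# Hypothetical LIFNs (s_theta, s_sigma) and normalized weights
lifns = [(6, 1), (5, 2), (7, 1)]
weights = [0.5, 0.3, 0.2]
print(lin_ifwaa(lifns, weights))  # aggregated (theta, sigma) pair, still within [0, t]
print(lin_ifwg(lifns, weights))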
Theorem 1. When the Lin-IFWAA operator is used to aggregate a set of Linguistic Intuitionistic Fuzzy numbers $\alpha_j = (s_{\theta_j}, s_{\sigma_j})$, $j = 1, 2, \dots, n$, the result is also a Linguistic Intuitionistic Fuzzy number.
Proof. Let us use the induction approach to demonstrate this theorem. For $n = 2$, take into account
$$\omega_1\alpha_1 \oplus \omega_2\alpha_2 = \left( s_{t\left(1 - (1 - \theta_1/t)^{\omega_1}(1 - \theta_2/t)^{\omega_2}\right)},\; s_{t(\sigma_1/t)^{\omega_1}(\sigma_2/t)^{\omega_2}} \right),$$
which is an LIFN. Assume the operator holds for $n = k$:
$$\bigoplus_{j=1}^{k}\omega_j\alpha_j = \left( s_{t\left(1 - \prod_{j=1}^{k}(1 - \theta_j/t)^{\omega_j}\right)},\; s_{t\prod_{j=1}^{k}(\sigma_j/t)^{\omega_j}} \right).$$
Then, continuing the process with $\omega_{k+1}\alpha_{k+1}$, when $n = k + 1$,
$$\bigoplus_{j=1}^{k+1}\omega_j\alpha_j = \left( s_{t\left(1 - \prod_{j=1}^{k+1}(1 - \theta_j/t)^{\omega_j}\right)},\; s_{t\prod_{j=1}^{k+1}(\sigma_j/t)^{\omega_j}} \right).$$
Since $0 \leq \theta_j/t \leq 1$ and $0 \leq \sigma_j/t \leq 1$ for all $j$, both linguistic subscripts lie in $[0, t]$. Hence, it is clear that for $n = k + 1$ the operator is valid. The operator is thus true for every $n$ according to the induction principle, concluding the proof. Hence, $\text{Lin-IFWAA}(\alpha_1, \dots, \alpha_n)$ is an LIFN. □
Theorem 2. Let $\alpha_j = (s_{\theta_j}, s_{\sigma_j})$, $j = 1, 2, \dots, n$, be a collection of Linguistic Intuitionistic Fuzzy numbers and let $\omega = (\omega_1, \dots, \omega_n)^T$ be the weight vector of $\alpha_j$, with $\omega_j \in [0, 1]$ and $\sum_{j=1}^{n}\omega_j = 1$. Then the Lin-IFWAA operator is (i) Idempotent, (ii) Bounded, (iii) Monotonic, (iv) Commutative, and (v) Associative.
Proof. (i) Idempotency: If all $\alpha_j$ are equal, that is, $\alpha_j = \alpha = (s_\theta, s_\sigma)$ for all $j$, then, since $\sum_{j=1}^{n}\omega_j = 1$,
$$\text{Lin-IFWAA}(\alpha, \dots, \alpha) = \left( s_{t\left(1 - (1 - \theta/t)^{\sum_j \omega_j}\right)},\; s_{t(\sigma/t)^{\sum_j \omega_j}} \right) = (s_\theta, s_\sigma) = \alpha.$$
(ii) Boundedness: Let $\alpha^- = \left(\min_j s_{\theta_j}, \max_j s_{\sigma_j}\right)$ and $\alpha^+ = \left(\max_j s_{\theta_j}, \min_j s_{\sigma_j}\right)$. Each component of the aggregated value is a weighted product of terms lying between the corresponding extremes, and hence $\alpha^- \leq \text{Lin-IFWAA}(\alpha_1, \dots, \alpha_n) \leq \alpha^+$. (iii) Monotonicity: Let $\alpha_j^* = (s_{\theta_j^*}, s_{\sigma_j^*})$, $j = 1, \dots, n$, be a collection of LIFNs. If $\alpha_j \leq \alpha_j^*$ for all $j$, that is, $\theta_j \leq \theta_j^*$ and $\sigma_j \geq \sigma_j^*$, then $1 - \prod_{j}(1 - \theta_j/t)^{\omega_j} \leq 1 - \prod_{j}(1 - \theta_j^*/t)^{\omega_j}$ and $\prod_{j}(\sigma_j/t)^{\omega_j} \geq \prod_{j}(\sigma_j^*/t)^{\omega_j}$; hence $\text{Lin-IFWAA}(\alpha_1, \dots, \alpha_n) \leq \text{Lin-IFWAA}(\alpha_1^*, \dots, \alpha_n^*)$ for all $\omega$. (iv) Commutativity: Let $(\alpha_1', \dots, \alpha_n')$ be any permutation of $(\alpha_1, \dots, \alpha_n)$, so that $\alpha_j' = \alpha_{\pi(j)}$ for some permutation $\pi$ of $(1, \dots, n)$. The factors in the products of the closed form are merely reordered, and hence $\text{Lin-IFWAA}(\alpha_1', \dots, \alpha_n') = \text{Lin-IFWAA}(\alpha_1, \dots, \alpha_n)$. (v) Associativity: The $\oplus$ operation on LIFNs is associative, so the grouping of the weighted terms $\omega_j\alpha_j$ does not affect the aggregated value. □
Theorem 3. When the Lin-IFWG operator is used to aggregate a set of Linguistic Intuitionistic Fuzzy numbers $\alpha_j = (s_{\theta_j}, s_{\sigma_j})$, $j = 1, 2, \dots, n$, the result is also a Linguistic Intuitionistic Fuzzy number.
Theorem 4. Let $\alpha_j = (s_{\theta_j}, s_{\sigma_j})$, $j = 1, 2, \dots, n$, be a collection of Linguistic Intuitionistic Fuzzy numbers and let $\omega = (\omega_1, \dots, \omega_n)^T$ be the weight vector of $\alpha_j$, with $\omega_j \in [0, 1]$ and $\sum_{j=1}^{n}\omega_j = 1$. Then the Lin-IFWG operator is (i) Idempotent, (ii) Bounded, (iii) Monotonic, (iv) Commutative, and (v) Associative.
Theorem 5. When the Lin-IFOWG operator is used to aggregate a set of Linguistic Intuitionistic Fuzzy numbers $\alpha_j = (s_{\theta_j}, s_{\sigma_j})$, $j = 1, 2, \dots, n$, the result is also a Linguistic Intuitionistic Fuzzy number.
Theorem 6. Let $\alpha_j = (s_{\theta_j}, s_{\sigma_j})$, $j = 1, 2, \dots, n$, be a collection of Linguistic Intuitionistic Fuzzy numbers and let $\omega = (\omega_1, \dots, \omega_n)^T$ be the weight vector of $\alpha_j$, with $\omega_j \in [0, 1]$ and $\sum_{j=1}^{n}\omega_j = 1$. Then the Lin-IFOWG operator is (i) Idempotent, (ii) Bounded, (iii) Monotonic, (iv) Commutative, and (v) Associative.
All the above theorems can be proven in a similar way to Theorems 1 and 2.
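As a quick numerical sanity check of the idempotency and boundedness properties of Lin-IFWAA, the following sketch can be run; the scale t = 8, the sample LIFNs, and the weights are hypothetical, and the helper is redefined here so that the snippet is self-contained.

import math

T = 8
def lin_ifwaa(lifns, w, t=T):
    pt = math.prod((1 - th / t) ** wj for (th, _), wj in zip(lifns, w))
    ps = math.prod((sg / t) ** wj for (_, sg), wj in zip(lifns, w))
    return (t * (1 - pt), t * ps)

w = [0.5, 0.3, 0.2]
# Idempotency: aggregating identical LIFNs returns the same LIFN
th_i, sg_i = lin_ifwaa([(6, 1)] * 3, w)
assert math.isclose(th_i, 6.0) and math.isclose(sg_i, 1.0)
# Boundedness: aggregated degrees stay between the componentwise extremes
th, sg = lin_ifwaa([(6, 1), (5, 2), (7, 1)], w)
assert 5 <= th <= 7 and 1 <= sg <= 2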
3. Correlation Coefficient of LIFNs
The following method is useful for computing an LIFN's correlation coefficient, excluding the hesitance degree. Let $\alpha = (s_\theta, s_\sigma)$ be an LIFN on $S_{[0,t]}$, and write $\bar\theta = \theta/t$ and $\bar\sigma = \sigma/t$ for the normalized linguistic degrees of membership and non-membership, respectively, where $0 \leq \bar\theta + \bar\sigma \leq 1$. For LIFNs, no correlation measure has been explicitly defined so far in the literature. The correlation measure is defined as follows: Let $G = \{(x_i, s_{\theta_G(x_i)}, s_{\sigma_G(x_i)}) : x_i \in X\}$ and $H = \{(x_i, s_{\theta_H(x_i)}, s_{\sigma_H(x_i)}) : x_i \in X\}$ be two LIFSs on $X = \{x_1, \dots, x_n\}$. Then, for each $x_i$, the informational LIF energies of $G$ and $H$ are
$$E(G) = \sum_{i=1}^{n} \left( \bar\theta_G(x_i)^2 + \bar\sigma_G(x_i)^2 \right), \qquad E(H) = \sum_{i=1}^{n} \left( \bar\theta_H(x_i)^2 + \bar\sigma_H(x_i)^2 \right).$$
The correlation of $G$ and $H$ is
$$C(G, H) = \sum_{i=1}^{n} \left( \bar\theta_G(x_i)\,\bar\theta_H(x_i) + \bar\sigma_G(x_i)\,\bar\sigma_H(x_i) \right),$$
and the correlation coefficient between $G$ and $H$ is $K(G, H) = \dfrac{C(G, H)}{\sqrt{E(G)\,E(H)}}$.
Proposition 1. For any LIFSs $G$ and $H$, the following hold: (i) $C(G, G) = E(G)$; (ii) $C(G, H) = C(H, G)$; (iii) $K(G, H) = K(H, G)$; and (iv) $K(G, G) = 1$.
Proof. Here, $C(G, G) = \sum_{i=1}^{n}\left( \bar\theta_G(x_i)^2 + \bar\sigma_G(x_i)^2 \right)$. Hence $C(G, G) = E(G)$, which proves (i). (ii) Clearly, $C(G, H) = C(H, G)$, from the definition of $C$, and (iii) follows immediately.
(iv) If $H = G$, then $K(G, G) = C(G, G)/\sqrt{E(G)E(G)} = 1$. □
Theorem 7. For any LIFSs $G$ and $H$, $0 \leq K(G, H) \leq 1$.
Proof. Obviously $C(G, H) \geq 0$, and by the Cauchy–Schwarz inequality $C(G, H) \leq \sqrt{E(G)}\sqrt{E(H)}$. Hence, $0 \leq K(G, H) \leq 1$. □
Theorem 8. $K(G, H) = 0$ if and only if $G$ and $H$ are non-fuzzy sets with the condition $\bar\theta_G(x_i) = 0$ or $\bar\theta_H(x_i) = 0$, and $\bar\sigma_G(x_i) = 0$ or $\bar\sigma_H(x_i) = 0$, for all $x_i$.
Proof. For all $x_i$, the terms $\bar\theta_G(x_i)\bar\theta_H(x_i)$ and $\bar\sigma_G(x_i)\bar\sigma_H(x_i)$ are non-negative. If $K(G, H) = 0$, then $C(G, H) = 0$, so every term $\bar\theta_G(x_i)\bar\theta_H(x_i)$ must vanish, which holds exactly when $\bar\theta_G(x_i) = 0$ or $\bar\theta_H(x_i) = 0$.
Similarly, it can be proven for $\bar\sigma_G(x_i)\bar\sigma_H(x_i)$. □
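As a numerical illustration of the definition and of the bound in Theorem 7, the following sketch computes $K(G, H)$ for two small LIFSs; the sample normalized degrees are hypothetical.

import math

def energy(F):
    # informational LIF energy E(F) = sum(theta^2 + sigma^2)
    return sum(th**2 + sg**2 for th, sg in F)

def correlation(G, H):
    # C(G, H) = sum(theta_G * theta_H + sigma_G * sigma_H)
    return sum(tg * th + sg * sh for (tg, sg), (th, sh) in zip(G, H))

def correlation_coefficient(G, H):
    # K(G, H) = C(G, H) / sqrt(E(G) * E(H)), always in [0, 1] by Theorem 7
    return correlation(G, H) / math.sqrt(energy(G) * energy(H))

# Hypothetical LIFSs as lists of normalized (theta/t, sigma/t) pairs
G = [(0.75, 0.125), (0.625, 0.25), (0.875, 0.0)]
H = [(0.625, 0.25), (0.75, 0.125), (0.75, 0.125)]
print(correlation_coefficient(G, H))  # e.g. ~0.97; K(G, G) would be exactly 1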
6. Numerical Illustration: LIF-VIKOR with Entropy, Sensitivity and ANN
Assume there are four industries (alternatives) to be weighed against certain criteria. The industries are evaluated in terms of their technological innovation capability, using factors such as resource ability for digitalization, organizational innovation, innovation centers, and innovative products. Consider a group of experts whose weights are given. The experts' assessments of the four industries are listed in the following tables. Make the integrated matrix, as mentioned in the algorithm.
Computation 1: VIKOR method with known weights
Assign the known weights to each indicator and use the cost-type indicators and the benefit-type indicators to derive the following decision matrix:
By using the Linguistic Median Membership function, the above matrix is converted into a crisp matrix.
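The Linguistic Median Membership function is defined earlier in the paper. Purely as an illustrative stand-in, the sketch below assumes the common median-membership form $(1 + \theta/t - \sigma/t)/2$, i.e., the midpoint of the membership interval; this is an assumption and may differ from the paper's exact definition.

# Hedged sketch of defuzzifying an LIFN matrix into a crisp matrix.
# ASSUMPTION: the Linguistic Median Membership function is taken here as the
# midpoint of the membership interval, (1 + theta/t - sigma/t) / 2; substitute
# the paper's exact definition if it differs.
T = 8  # hypothetical linguistic scale bound

def median_membership(theta, sigma, t=T):
    return (1 + theta / t - sigma / t) / 2

# Hypothetical 4x4 LIFN decision matrix: entries are (s_theta, s_sigma) pairs
lifn_matrix = [
    [(6, 1), (5, 2), (7, 1), (4, 3)],
    [(5, 2), (6, 1), (6, 2), (5, 2)],
    [(7, 1), (4, 3), (5, 2), (6, 1)],
    [(6, 2), (7, 1), (4, 3), (7, 1)],
]
crisp = [[median_membership(th, sg) for th, sg in row] for row in lifn_matrix]
print(crisp[0])  # crisp scores for the first alternative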
In the crisp matrix, negative ideal solutions are highlighted by single lines and positive ideal solutions are highlighted by double lines, from which the positive ideal solution and the negative ideal solution are obtained.
Calculate the A, B, and C values for all the alternatives in the decision problem, as in Section 4:
Then, $A^* = \min_j A_j$; $A^- = \max_j A_j$; $B^* = \min_j B_j$; $B^- = \max_j B_j$. $C_4$ is the lowest value, and hence the fourth alternative is the best alternative.
By using the proposed correlation coefficient, the closeness of each alternative is evaluated, and hence the ranking of the alternatives is obtained.
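For reproducibility, a minimal Python sketch of this computation is given below, assuming A, B, and C play the roles of the classical VIKOR S (group utility), R (individual regret), and Q (compromise) measures; the crisp matrix, the weights, and v = 0.5 are hypothetical placeholders, not the paper's data.

import numpy as np

X = np.array([[0.81, 0.69, 0.88, 0.56],
              [0.69, 0.81, 0.75, 0.69],
              [0.88, 0.56, 0.69, 0.81],
              [0.75, 0.88, 0.94, 0.88]])
w = np.array([0.3, 0.2, 0.3, 0.2])            # known criterion weights
benefit = np.array([True, True, True, True])  # flip entries for cost criteria

f_star = np.where(benefit, X.max(axis=0), X.min(axis=0))   # positive ideal
f_minus = np.where(benefit, X.min(axis=0), X.max(axis=0))  # negative ideal

D = w * (f_star - X) / (f_star - f_minus)  # weighted normalized distances
A = D.sum(axis=1)                          # group utility (S)
B = D.max(axis=1)                          # individual regret (R)

v = 0.5  # weight of the "majority of criteria" strategy
C = v * (A - A.min()) / (A.max() - A.min()) + \
    (1 - v) * (B - B.min()) / (B.max() - B.min())
print("Best alternative:", C.argmin() + 1)  # lowest C wins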
Computation 2: VIKOR method with weights from entropy method
Calculate $p_{ij} = \dfrac{x_{ij}}{\sum_{i=1}^{m} x_{ij}}$, where $x_{ij}$ is computed from the integrated matrix. Calculate the entropy value $e_j = -k \sum_{i=1}^{m} p_{ij} \ln p_{ij}$, where $m = 4$ and $k = 1/\ln(m) = 1/\ln(4) = 0.72135$, and then calculate the weights $w_j = \dfrac{1 - e_j}{\sum_{j}(1 - e_j)}$. It can be easily seen that $\sum_j w_j = 1$.
Hence, the weights calculated by the entropy method are obtained.
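A minimal sketch of this entropy weighting step, under the standard Shannon formulation written above, is as follows; the decision matrix is again a hypothetical placeholder.

import numpy as np

X = np.array([[0.81, 0.69, 0.88, 0.56],
              [0.69, 0.81, 0.75, 0.69],
              [0.88, 0.56, 0.69, 0.81],
              [0.75, 0.88, 0.94, 0.88]])
m = X.shape[0]                        # number of alternatives (m = 4)
k = 1 / np.log(m)                     # = 0.72135 for m = 4

P = X / X.sum(axis=0)                 # p_ij = x_ij / sum_i x_ij
E = -k * (P * np.log(P)).sum(axis=0)  # entropy e_j per criterion
w = (1 - E) / (1 - E).sum()           # entropy weights, summing to 1
print(w, w.sum())                     # weights and the check sum(w) == 1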
Proceeding through steps 1 to 4, as in the previous computation, with the weights calculated by the entropy method, the following matrix is obtained:
The positive ideal solution and the negative ideal solution are identified as before.
Calculate the A, B, and C values for all the alternatives in the decision problem, as in Computation 1, but with the entropy weights. Then, $A^* = \min_j A_j$; $A^- = \max_j A_j$; $B^* = \min_j B_j$; $B^- = \max_j B_j$.
$C_4$ is the lowest value; hence, the fourth alternative is the best alternative.
Using the proposed correlation coefficient, the ranking of the alternatives is again obtained.
Sensitivity Analysis for the VIKOR Method
Sensitivity analysis, the process of recalculating outcomes under alternative assumptions, determines the impact of individual variables and can be useful for various purposes. In this work, the sensitivity analysis is performed on the weight vector derived from the entropy method. Let us analyze the change in the output when small changes are allowed in each component of the weight vector, beginning with a small change in the first component.
Following the same computations as are used in the VIKOR method, under sensitivity analysis the following decisions (Computations 3 to 6) are obtained, as given in Table 1.
From the above table, it can be observed that the best alternative remains the fourth alternative.
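The perturbation loop can be sketched as follows, reusing a compact VIKOR helper; the 0.05 step, the stand-in weights, and the data are illustrative assumptions rather than the paper's values.

import numpy as np

# Sensitivity sketch: perturb each entropy weight in turn and re-run VIKOR.
def vikor_best(X, w, v=0.5):
    f_star, f_minus = X.max(axis=0), X.min(axis=0)  # all-benefit assumption
    D = w * (f_star - X) / (f_star - f_minus)
    A, B = D.sum(axis=1), D.max(axis=1)
    C = v * (A - A.min()) / (A.max() - A.min()) \
        + (1 - v) * (B - B.min()) / (B.max() - B.min())
    return C.argmin() + 1

X = np.array([[0.81, 0.69, 0.88, 0.56],
              [0.69, 0.81, 0.75, 0.69],
              [0.88, 0.56, 0.69, 0.81],
              [0.75, 0.88, 0.94, 0.88]])
w = np.array([0.28, 0.22, 0.27, 0.23])  # stand-in entropy weights

for j in range(len(w)):                 # Computations 3-6: one per weight
    w_pert = w.copy()
    w_pert[j] += 0.05                   # small change in the j-th weight
    w_pert /= w_pert.sum()              # renormalize so weights sum to 1
    print(f"perturb w{j+1}: best alternative =", vikor_best(X, w_pert))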
Computation 7: Decision-making with an Artificial Neural Network
The following ANN procedure is incorporated in order to use a high-complexity Artificial Neural Network (ANN) to analyze the defuzzified decision matrix:
Assign the matrix as input, considering the columns as traits or attributes and the rows as options or alternatives;
Normalize the data, scaling it so that each attribute contributes evenly;
Initiate the ANN—
Input layer. The number of neurons equals the number of attributes.
Hidden layers. More than one layer is employed for increased complexity.
Output layer. A single neuron scores each alternative;
Train the ANN, using backpropagation to reduce the error;
Assess the outcomes, choosing the alternative with the greatest output score.
Pseudocode for the ANN, rendered here as a runnable Python sketch. Since the paper does not specify a training target for backpropagation, the mean of each normalized row is used below as a stand-in target, which is an assumption; the matrix values are hypothetical placeholders, and the layer sizes follow the worked example below.

import numpy as np

# Input: defuzzified decision matrix (rows = alternatives, columns = attributes)
X = np.array([[0.81, 0.69, 0.88, 0.56],
              [0.69, 0.81, 0.75, 0.69],
              [0.88, 0.56, 0.69, 0.81],
              [0.75, 0.88, 0.94, 0.88]])

# Step 1: Normalize the decision matrix column-wise to [0, 1]
Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# Step 2: Initiate the ANN and randomly initialize weights and biases
rng = np.random.default_rng(0)
n_in, n_h1, n_h2, n_out = Xn.shape[1], 8, 6, 1
W1, b1 = rng.standard_normal((n_in, n_h1)), np.zeros(n_h1)
W2, b2 = rng.standard_normal((n_h1, n_h2)), np.zeros(n_h2)
W3, b3 = rng.standard_normal((n_h2, n_out)), np.zeros(n_out)

# Activation function: sigmoid(x) = 1 / (1 + exp(-x))
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Step 3: Forward pass through the two hidden layers and the output layer
def forward(row):
    h1 = sigmoid(row @ W1 + b1)
    h2 = sigmoid(h1 @ W2 + b2)
    return h1, h2, sigmoid(h2 @ W3 + b3)

# Step 4: Train the ANN (backpropagation on a squared-error loss)
lr = 0.1
for epoch in range(1000):
    for row in Xn:
        h1, h2, out = forward(row)
        target = row.mean()  # ASSUMPTION: proxy target; the paper gives none
        d3 = (out - target) * out * (1 - out)
        d2 = (d3 @ W3.T) * h2 * (1 - h2)
        d1 = (d2 @ W2.T) * h1 * (1 - h1)
        W3 -= lr * np.outer(h2, d3); b3 -= lr * d3
        W2 -= lr * np.outer(h1, d2); b2 -= lr * d2
        W1 -= lr * np.outer(row, d1); b1 -= lr * d1

# Step 5: Evaluate and select the best alternative (highest output score)
scores = np.array([forward(row)[2].item() for row in Xn])
best_alternative = int(scores.argmax()) + 1
Following the procedure, let us start the ANN with the defuzzified matrix from Computation 1:
Step 1. Normalize the Matrix
We will normalize the matrix column-wise so that each column has values scaled between 0 and 1, ensuring that all attributes contribute equally to the analysis. The normalized matrix is given as follows:
Step 2. Define ANN structure.
Input layer: Four neurons (one for each attribute).
Hidden layer 1: Eight neurons.
Hidden layer 2: Six neurons.
Output layer: One neuron to compute the score for each row.
Following the above structure, randomly initialize weights and biases:
Weights (Input to Hidden Layer 1): 4 × 8 matrix.
Weights (Hidden Layer 1 to Hidden Layer 2): 8 × 6 matrix.
Weights (Hidden Layer 2 to Output): 6 × 1 matrix.
Biases: One bias for each neuron in the hidden and output layers.
Activation function: apply the sigmoid activation function to the outputs passed from one layer to the next.
Step 3: Perform forward pass and training
We will implement the forward pass and compute the outputs.
Using these scores, we will identify the best alternative (row with the highest score).
Computations: For each row of the decision matrix, the outputs of hidden layer 1 are computed, followed by those of hidden layer 2. Utilizing the resultant matrix from hidden layer 2, the final scores of the matrix are obtained as follows: Row 1, 0.9214; row 2, 0.9213; row 3, 0.9219; row 4, 0.9230.
Hence the best alternative is row 4, with the highest score of 0.9230.
Thus, the ANN procedure also identifies the fourth alternative as the best alternative.