Proceeding Paper

Hamming Distance-Based Intuitionistic Fuzzy Artificial Neural Network with Novel Back Propagation Method †

by John Robinson Peter Dawson * and Wilson Arul Prakash Selvaraj
Department of Mathematics, Bishop Heber College, Affiliated to Bharathidasan University, Tiruchirappalli 620017, India
* Author to whom correspondence should be addressed.
Presented at the 4th International Conference on Future Technologies in Manufacturing, Automation, Design and Energy 2024 (ICOFT 2024), Karaikal, India, 12–13 November 2024.
Eng. Proc. 2025, 95(1), 9; https://doi.org/10.3390/engproc2025095009
Published: 6 June 2025

Abstract

An artificial neural network (ANN)-based decision support system model, which aggregates intuitionistic fuzzy matrix data using a recently introduced operator, is developed in this work. Several desirable features related to distance measures of aggregation operators and artificial neural networks, including the backpropagation method, are investigated to support the application of the proposed methodologies to multiple attribute group decision-making (MAGDM) problems using intuitionistic fuzzy information. A novel and enhanced aggregation operator, the Hamming–Intuitionistic Fuzzy Power Generalized Weighted Averaging (H-IFPGWA) operator, is proposed for weight determination in MAGDM situations. Numerical examples are provided, and various ranking techniques are used to demonstrate the effectiveness of the suggested strategy. Subsequently, the same numerical example is solved using the ANN backpropagation approach without a bias vector. Additionally, a novel algorithm is created to address MAGDM problems using the proposed bias-free backpropagation model. Several defuzzification operators are applied to solve the numerical problems, and the efficacy of the solutions is compared. For MAGDM situations, the novel approach outperforms the previous ANN approaches.

1. Introduction

The purpose of artificial neural network organizations is to function as networks of parallel distributed computing. Neural networks have intrinsic characteristics analogous to those of biological neural networks and are, by nature, mathematical models of information processing. The basic processing units of neural networks are artificial neurons, or simply neurons. In an ANN, signals are transmitted between neurons over weighted connection links. To control its output signal, each neuron applies an activation function to its net input. A neural network is characterized by its architecture (the pattern of connections between neurons), its activation function, and its approach to assigning weights to the connections, which can be supervised or unsupervised. Artificial neural networks (ANNs) effectively handle uncertainty, nonlinearity, and high-dimensional data by learning complex patterns and relationships from input data. Their adaptive nature enables robust decision-making in diverse applications, including classification, optimization, and prediction.
Modern computers are fast sequential machines, in contrast to the brain's massively parallel structure. Sequential tasks, such as arithmetic operations that must be completed one after the other, are the kind of work a computer handles best. In the instance of visual or speech recognition problems, which are inherently highly parallel, the brain can easily and concurrently handle the intricacies, while a computer cannot manage such tasks as effectively. The recently developed artificial neuron is modelled on the biological neuron at work in the human brain. A neuron's output can be either on or off, and it depends entirely on the inputs. The authors of [1,2,3,4,5,6,7,8,9] worked extensively on ANNs. Aggregation operators were designed and widely applied to MAGDM scenarios by the authors of [5,6,7,9,10,11,12]. Furthermore, the authors of [3,7,13,14] applied state-of-the-art techniques to tackle real-world problems involving fuzzy systems using machine learning applications. Although a significant number of ANN-related works have been conducted in the past, this paper focuses on the innovative field of ANNs with intuitionistic fuzzy sets and the use of the backpropagation approach, which has not yet been covered by many authors.
The Hamming–Intuitionistic Fuzzy Power Generalized Weighted Averaging (H-IFPGWA) operator is a novel and improved aggregation operator used in this paper to derive inputs for an adapted ANN model from [7]. In the past, the Power Generalized Weighted Averaging operator played a vital role in MAGDM problem-solving; in this work, it is enhanced with the Hamming distance to identify the closest alternatives to the attributes. The proposed ANN model appears to be more effective and reasonable, particularly when the decision-maker provides incomplete information about the problem statement. After processing the input using the novel backpropagated intuitionistic fuzzy ANN, the outcomes are evaluated for efficacy by contrasting them with those obtained using other ANN techniques and ranking strategies.

2. The Backpropagation Method for Artificial Neural Networks

Since their inception, applications of artificial neural networks (recursive nonlinear functions) have transformed machine learning. Proper training of a neural network is the most important step in producing a reliable model. The term “backpropagation”, commonly used in connection with this training, is difficult for most individuals unfamiliar with deep learning to understand. Backpropagation is one of the crucial techniques utilized in ANN training. To calibrate the weights during this network operation, the error of the forward pass is calculated and the loss is distributed backward through the ANN layers. The backpropagation phases are an essential component of any neural network training: the weights of the network are balanced according to the error rate achieved in the previous iteration, or epoch. Lower error rates, obtained as the ANN weight vector is calibrated, improve the model's generalizability and dependability. The backpropagation algorithm uses the chain rule to calculate the gradient of the loss function with respect to a single weight, as is evident from Figure 1. In contrast to a naive direct computation, which produces the gradient but does not disclose how it is transmitted through the layers, backpropagation efficiently computes the gradient one layer at a time and generalizes the calculation in the delta rule.
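To make the layer-by-layer chain-rule computation concrete, the following minimal sketch (an illustration only, not the exact network used later in this paper) trains a tiny two-layer sigmoid network with the delta rule; the layer sizes, random initialization, input pattern, and learning rate are all assumptions chosen for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(3, 2))   # input layer (2) -> hidden layer (3)
W2 = rng.normal(scale=0.5, size=(1, 3))   # hidden layer (3) -> output (1)
x = np.array([0.6, 0.2])                  # hypothetical input pattern
target, lr = 1.0, 0.5

for epoch in range(500):
    h = sigmoid(W1 @ x)                   # forward pass, layer 1
    y = sigmoid(W2 @ h)                   # forward pass, layer 2
    d2 = (y - target) * y * (1 - y)       # output delta for squared-error loss
    d1 = (W2.T @ d2) * h * (1 - h)        # hidden delta via the chain rule
    W2 -= lr * np.outer(d2, h)            # delta-rule weight updates,
    W1 -= lr * np.outer(d1, x)            # one layer at a time
print(y.item())                           # network output after training
```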

3. The Hamming–Intuitionistic Fuzzy Power Generalized Weighted Averaging (H-IFPGWA) Operator

In this section, the Hamming–IFPGWA operator, which functions based on the Hamming distance metric for IFSs, is presented.
Definition 1.
Let $\tilde{a}_j = (\mu_j, \nu_j)$, $j = 1, 2, \ldots, n$, be a cluster of intuitionistic fuzzy numbers (IFNs), and let $\omega = (\omega_1, \omega_2, \ldots, \omega_n)^T$ be the weighting vector of the $\tilde{a}_j$, with $\omega_j \geq 0$, $j = 1, 2, \ldots, n$, and $\sum_{j=1}^{n} \omega_j = 1$. The novel and improvised operator called the Hamming–Intuitionistic Fuzzy Power Generalized Weighted Average (H-IFPGWA) operator is proposed as follows:

$$\mathrm{H\mbox{-}IFPGWA}(\tilde{a}_1, \tilde{a}_2, \ldots, \tilde{a}_n) = \left( \frac{\sum_{j=1}^{n} \omega_j \left(1 + T(\tilde{a}_j)\right) \tilde{a}_j^{\lambda}}{\sum_{j=1}^{n} \omega_j \left(1 + T(\tilde{a}_j)\right)} \right)^{1/\lambda}, \qquad \lambda \in (0, +\infty).$$

Using mathematical induction on $n$, the following can be observed:

$$\mathrm{H\mbox{-}IFPGWA}(\tilde{a}_1, \tilde{a}_2, \ldots, \tilde{a}_n) = \left( \left( 1 - \prod_{j=1}^{n} \left( 1 - \mu_j^{\lambda} \right)^{\frac{\omega_j (1 + T(\tilde{a}_j))}{\sum_{j=1}^{n} \omega_j (1 + T(\tilde{a}_j))}} \right)^{1/\lambda},\; 1 - \left( 1 - \prod_{j=1}^{n} \left( 1 - (1 - \nu_j)^{\lambda} \right)^{\frac{\omega_j (1 + T(\tilde{a}_j))}{\sum_{j=1}^{n} \omega_j (1 + T(\tilde{a}_j))}} \right)^{1/\lambda} \right),$$

where $T(\tilde{a}_i) = \sum_{j=1, j \neq i}^{n} \mathrm{Sup}(\tilde{a}_i, \tilde{a}_j)$, and $\mathrm{Sup}(\tilde{a}_i, \tilde{a}_j)$ is nothing but the support for $\tilde{a}_i$ from $\tilde{a}_j$, which possesses the following three properties:
(1) $\mathrm{Sup}(\tilde{a}_i, \tilde{a}_j) \in [0, 1]$;
(2) $\mathrm{Sup}(\tilde{a}_i, \tilde{a}_j) = \mathrm{Sup}(\tilde{a}_j, \tilde{a}_i)$;
(3) $\mathrm{Sup}(\tilde{a}_i, \tilde{a}_j) \geq \mathrm{Sup}(\tilde{a}_k, \tilde{a}_t)$ whenever $d(\tilde{a}_i, \tilde{a}_j) \leq d(\tilde{a}_k, \tilde{a}_t)$.
Here, $\mathrm{Sup}(\tilde{r}_{ijk}, \tilde{r}_{ijl}) = 1 - d(\tilde{r}_{ijk}, \tilde{r}_{ijl})$, where $d$ is the distance measure calculated using the Hamming distance

$$d(\tilde{a}_1, \tilde{a}_2) = \frac{1}{2} \sum_{i=1}^{n} \left[ \left| \mu_{\tilde{a}_1}(x_i) - \mu_{\tilde{a}_2}(x_i) \right| + \left| \nu_{\tilde{a}_1}(x_i) - \nu_{\tilde{a}_2}(x_i) \right| + \left| \pi_{\tilde{a}_1}(x_i) - \pi_{\tilde{a}_2}(x_i) \right| \right],$$

with $\pi = 1 - \mu - \nu$.
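The Hamming distance, the support function, and $T(\tilde{a}_i)$ translate directly into code. The sketch below is an illustrative Python rendering for single-component IFNs; the sample row of IFNs is taken from the decision matrix $\tilde{I}_1$ used later in Section 5.

```python
# Hamming distance, support, and T(a_i) for single intuitionistic fuzzy
# numbers a = (mu, nu), with hesitation pi = 1 - mu - nu (sketch only).

def hamming(a, b):
    (mu1, nu1), (mu2, nu2) = a, b
    pi1, pi2 = 1 - mu1 - nu1, 1 - mu2 - nu2
    return 0.5 * (abs(mu1 - mu2) + abs(nu1 - nu2) + abs(pi1 - pi2))

def support(a, b):
    # Sup(a, b) = 1 - d(a, b): closer IFNs support each other more.
    return 1.0 - hamming(a, b)

def T(values, i):
    # T(a_i) = sum of the supports for a_i from all other IFNs in the cluster.
    return sum(support(values[i], values[j])
               for j in range(len(values)) if j != i)

row = [(0.4, 0.3), (0.5, 0.2), (0.2, 0.5), (0.1, 0.6)]  # sample IFNs
print([round(T(row, i), 4) for i in range(len(row))])
```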
Here, a few exclusive special cases of the H-IFPGWA operator are provided.

(1) As $\lambda \to 0$,

$$\mathrm{H\mbox{-}IFPGWA}(\tilde{a}_1, \tilde{a}_2, \ldots, \tilde{a}_n) = \left( \prod_{j=1}^{n} \mu_j^{\frac{\omega_j (1 + T(\tilde{a}_j))}{\sum_{j=1}^{n} \omega_j (1 + T(\tilde{a}_j))}},\; 1 - \prod_{j=1}^{n} \left( 1 - \nu_j \right)^{\frac{\omega_j (1 + T(\tilde{a}_j))}{\sum_{j=1}^{n} \omega_j (1 + T(\tilde{a}_j))}} \right),$$

and the H-IFPGWA operator scales down to the H-IFPWG operator.

(2) For $\lambda = 1$,

$$\mathrm{H\mbox{-}IFPGWA}(\tilde{a}_1, \tilde{a}_2, \ldots, \tilde{a}_n) = \left( 1 - \prod_{j=1}^{n} \left( 1 - \mu_j \right)^{\frac{\omega_j (1 + T(\tilde{a}_j))}{\sum_{j=1}^{n} \omega_j (1 + T(\tilde{a}_j))}},\; \prod_{j=1}^{n} \nu_j^{\frac{\omega_j (1 + T(\tilde{a}_j))}{\sum_{j=1}^{n} \omega_j (1 + T(\tilde{a}_j))}} \right).$$

(3) As $\lambda \to +\infty$, $\mathrm{H\mbox{-}IFPGWA}(\tilde{a}_1, \tilde{a}_2, \ldots, \tilde{a}_n) = \max(\tilde{a}_1, \tilde{a}_2, \ldots, \tilde{a}_n)$.
Theorem 1.
If $\mathrm{Sup}(\tilde{a}_i, \tilde{a}_j) = k$ for all $i \neq j$, then

$$\mathrm{H\mbox{-}IFPGWA}(\tilde{a}_1, \tilde{a}_2, \ldots, \tilde{a}_n) = \left( \frac{\sum_{j=1}^{n} \omega_j \left( 1 + T(\tilde{a}_j) \right) \tilde{a}_j^{\lambda}}{\sum_{j=1}^{n} \omega_j \left( 1 + T(\tilde{a}_j) \right)} \right)^{1/\lambda} = \left( \sum_{j=1}^{n} \omega_j \tilde{a}_j^{\lambda} \right)^{1/\lambda}.$$
Note: Support in the IFPGWA operator plays a crucial role in aggregating intuitionistic fuzzy information, balancing conflicting opinions, and improving the reliability of decision-making in MAGDM problems.
Theorem 2 (Commutative Property).
Let $(\tilde{a}'_1, \tilde{a}'_2, \ldots, \tilde{a}'_n)$ be any permutation of $(\tilde{a}_1, \tilde{a}_2, \ldots, \tilde{a}_n)$; then $\mathrm{H\mbox{-}IFPGWA}(\tilde{a}'_1, \tilde{a}'_2, \ldots, \tilde{a}'_n) = \mathrm{H\mbox{-}IFPGWA}(\tilde{a}_1, \tilde{a}_2, \ldots, \tilde{a}_n)$.
Theorem 3 (Idempotent Property).
Let $\tilde{a}_j = \tilde{a}$ for $j = 1, 2, \ldots, n$; then $\mathrm{H\mbox{-}IFPGWA}(\tilde{a}_1, \tilde{a}_2, \ldots, \tilde{a}_n) = \tilde{a}$.
Theorem 4 (Boundedness Property).
The H-IFPGWA operator is found to lie within the minimum and maximum operators: $\min(\tilde{a}_1, \tilde{a}_2, \ldots, \tilde{a}_n) \leq \mathrm{H\mbox{-}IFPGWA}(\tilde{a}_1, \tilde{a}_2, \ldots, \tilde{a}_n) \leq \max(\tilde{a}_1, \tilde{a}_2, \ldots, \tilde{a}_n)$.
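For the $\lambda = 1$ special case (case (2) above), which is the one used in the numerical illustration, a minimal sketch of the operator, reusing the hamming/support/T helpers from the previous snippet and assuming a given prior weight vector omega, is as follows.

```python
import numpy as np

def h_ifpgwa(values, omega):
    """H-IFPGWA for lambda = 1 (sketch): aggregate IFNs (mu, nu) into one IFN."""
    t = [T(values, i) for i in range(len(values))]
    raw = [w * (1 + ti) for w, ti in zip(omega, t)]
    wt = [r / sum(raw) for r in raw]          # support-adjusted weights
    mu = 1 - np.prod([(1 - m) ** w for (m, _), w in zip(values, wt)])
    nu = np.prod([n ** w for (_, n), w in zip(values, wt)])
    return (float(mu), float(nu))

# Example with equal prior weights (an assumption for illustration):
row = [(0.4, 0.3), (0.5, 0.2), (0.2, 0.5), (0.1, 0.6)]
print(h_ifpgwa(row, [0.25, 0.25, 0.25, 0.25]))
```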

4. The Back-Propagated IFS-ANN with the H-IFPGWA Operator

Pseudo-code for the ANN with the H-IFPGWA operator:
Cn: n matrix itemsets of size k × m
Input {An: collection of n intuitionistic fuzzy decision matrices of size k × m}
W = np.array([w1, w2, w3, w4, w5]) # weight initialization
//* Aggregation phase *//
Compute {H-IFPGWA aggregation with the initial weight vector}
For (n = 1; An ≠ ∅; n++) do begin
Generate {Individual preference intuitionistic fuzzy decision matrices, Xn}
//* Xn is the collection of individual preference IF decision matrices *//
Generate {Intuitionistic fuzzy attribute weight vector}
//* The same H-IFPGWA operator is also used to derive the attribute weight vector *//
While i ≤ m do {Defuzzify the IF column matrix into a fuzzy column matrix}
Generate {Collective overall preference intuitionistic fuzzy decision matrices with the new weight vector, WT}
//* Improvise the input vector by different defuzzification functions:
$1 - \mu - \nu$, $1 - \mu + \nu$, $1 + \mu - \nu$, $(1 - \mu - \nu)/2$ *//
Input vector //* Backpropagation: start *//
Forward pass: calculate the net input to each hidden-layer node: $a_j = \sum_i w_{ij} X_i$, $i \in N$
Learning rate $\alpha = 0.5$
Error calculation: Error = Target output − Network output
Backward pass (weight update): $w_{ij}(\text{new}) = \Delta w_{ij} + w_{ij}(\text{old})$; $\Delta w_{ij} = \alpha \, \delta_j \, X_i$
Continue the forward pass with the updated weights from the backward pass:
Calculate the net input to each hidden-layer node: $a_j = \sum_i w_{ij} X_i$, $i \in N$
Continue the weight updates until the error is reduced to the desired level
//* Choosing the best MAGDM alternative *//
Find {The weighted arithmetic averaging (WAA) values between the alternatives}
Find {The distance between the WAA values and the net output}
While distance value ≤ threshold do Generate {The best alternative}
Output {Best alternative(s) to be chosen}
End.
This pseudo-code algorithm works in three phases. Phase 1 is the aggregation phase, in which the proposed H-IFPGWA operator combines each of the given decision matrices into an individual column matrix. Phase 2 involves determining the attribute weight vector from the given decision matrices, which is used in the final phase of aggregation to convert the individual decision matrices into input for the ANN. Phase 3 consists of the forward and backward passes of the artificial neural network with backpropagation to decide the best alternative of the decision problem, as depicted in Figure 2. The forward and backward passes are clearly depicted in the flowchart, and an end-to-end sketch of the three phases is given below.
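As a compact illustration of how the three phases fit together, the following sketch reuses the hamming/support/T helpers and the h_ifpgwa function from Section 3; the uniform initial weights, the sigmoid activation, and the epoch count are assumptions made for the example, since the pseudo-code leaves these choices open.

```python
import numpy as np

def magdm_pipeline(expert_matrices, omega, target=1.0, lr=0.5, epochs=1000):
    # Phase 1: aggregate each alternative's attribute values into one IFN.
    collective = [[h_ifpgwa(row, omega) for row in M] for M in expert_matrices]
    # Phase 2: defuzzify each collective IFN column into a crisp input vector.
    inputs = [np.array([1 - mu - nu for (mu, nu) in col]) for col in collective]
    # Phase 3: backpropagation (forward/backward passes) on each input vector.
    outputs = []
    for x in inputs:
        w = np.full(len(x), 1.0 / len(x))           # assumed uniform initial weights
        for _ in range(epochs):
            out = 1 / (1 + np.exp(-np.dot(w, x)))   # forward pass
            delta = (target - out) * out * (1 - out)
            w = w + lr * delta * x                  # backward pass (delta rule)
        outputs.append(float(out))
    return outputs                                  # net outputs used for ranking
```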

5. Numerical Illustration for the Backpropagated IFS-ANN with the H-IFPGWA Operator

Nowadays, the average person must invest money in a suitable company or plan. Consider an investment firm with five options for where to put its funds: A1 deals in vehicles, A2 in beverages, A3 in software, A4 in heavy alloys, and A5 in oil and gas. The following four attributes are considered: risk analysis (G1), cash flow analysis (G2), liquidity analysis (G3), and debt and equity analysis (G4). Three experts have thoroughly examined this investment strategy challenge and subsequently offered their weighting vector. Their data are presented as the following decision matrices:
$$\tilde{I}_1 = \begin{pmatrix} (0.4, 0.3) & (0.5, 0.2) & (0.2, 0.5) & (0.1, 0.6) \\ (0.6, 0.2) & (0.6, 0.1) & (0.6, 0.1) & (0.3, 0.4) \\ (0.5, 0.3) & (0.4, 0.3) & (0.4, 0.2) & (0.5, 0.2) \\ (0.7, 0.1) & (0.5, 0.2) & (0.2, 0.3) & (0.1, 0.5) \\ (0.5, 0.1) & (0.3, 0.2) & (0.6, 0.2) & (0.4, 0.2) \end{pmatrix}; \quad
\tilde{I}_2 = \begin{pmatrix} (0.5, 0.4) & (0.6, 0.3) & (0.3, 0.6) & (0.2, 0.7) \\ (0.7, 0.3) & (0.7, 0.2) & (0.7, 0.2) & (0.4, 0.5) \\ (0.6, 0.4) & (0.5, 0.4) & (0.5, 0.3) & (0.6, 0.3) \\ (0.8, 0.1) & (0.6, 0.3) & (0.3, 0.4) & (0.2, 0.6) \\ (0.6, 0.2) & (0.4, 0.3) & (0.7, 0.1) & (0.5, 0.3) \end{pmatrix}; \quad
\tilde{I}_3 = \begin{pmatrix} (0.4, 0.5) & (0.5, 0.4) & (0.2, 0.7) & (0.1, 0.8) \\ (0.6, 0.4) & (0.6, 0.3) & (0.6, 0.3) & (0.3, 0.6) \\ (0.5, 0.5) & (0.4, 0.5) & (0.4, 0.4) & (0.5, 0.4) \\ (0.7, 0.2) & (0.5, 0.4) & (0.2, 0.5) & (0.1, 0.7) \\ (0.5, 0.3) & (0.3, 0.4) & (0.6, 0.2) & (0.4, 0.4) \end{pmatrix}.$$
Following the computations of the BPIF-ANN, the results are as follows.
The H-IFPGWA values can be calculated for each data entry provided by the experts in the above three matrices. Suppose $\lambda = 1$; then

$$\tilde{z}_{11} = \left( 1 - \prod_{k=1}^{4} \left( 1 - \mu_{1k} \right)^{\tilde{\omega}_{1k}},\; 1 - \left( 1 - \prod_{k=1}^{4} \left( 1 - (1 - \nu_{1k}) \right)^{\tilde{\omega}_{1k}} \right) \right)$$
$$= \Big( 1 - (1 - 0.4)^{0.30863} (1 - 0.5)^{0.19520} (1 - 0.2)^{0.20575} (1 - 0.1)^{0.29041},$$
$$\quad 1 - \big( 1 - (1 - (1 - 0.3))^{0.30863} (1 - (1 - 0.2))^{0.19520} (1 - (1 - 0.5))^{0.20575} (1 - (1 - 0.6))^{0.29041} \big) \Big) = (0.30889,\; 0.37653).$$
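The value of $\tilde{z}_{11}$ can be verified numerically. For $\lambda = 1$, the nested non-membership expression collapses to $\prod_{k} \nu_{1k}^{\tilde{\omega}_{1k}}$, as the short sketch below (using the support-adjusted weights stated above) confirms.

```python
import numpy as np

wt = np.array([0.30863, 0.19520, 0.20575, 0.29041])  # support-adjusted weights
mu = np.array([0.4, 0.5, 0.2, 0.1])                  # memberships, row 1 of I~1
nu = np.array([0.3, 0.2, 0.5, 0.6])                  # non-memberships, row 1 of I~1

z_mu = 1 - np.prod((1 - mu) ** wt)
z_nu = np.prod(nu ** wt)                             # equals the nested expression
print(round(z_mu, 4), round(z_nu, 4))                # approx. (0.3089, 0.3765)
```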
Similarly, all the other values from the above three matrices can be computed as collective overall matrices, as follows:
$$\tilde{I}_1 = \begin{pmatrix} (0.30889, 0.37653) \\ (0.53555, 0.17999) \\ (0.46399, 0.26435) \\ (0.43637, 0.23209) \\ (0.45965, 0.16183) \end{pmatrix}; \quad
\tilde{I}_2 = \begin{pmatrix} (0.41197, 0.48360) \\ (0.63902, 0.29011) \\ (0.56446, 0.34615) \\ (0.55071, 0.28341) \\ (0.56193, 0.21397) \end{pmatrix}; \quad
\tilde{I}_3 = \begin{pmatrix} (0.30889, 0.58803) \\ (0.53555, 0.39511) \\ (0.46399, 0.44695) \\ (0.43637, 0.40324) \\ (0.45965, 0.32006) \end{pmatrix}.$$
By defuzzifying the collective overall values using the identity $1 - \mu - \nu$, we obtain the input vectors as follows:
$$X_1 = \begin{pmatrix} 0.31458 \\ 0.28446 \\ 0.27166 \\ 0.33154 \\ 0.37852 \end{pmatrix}, \quad X_2 = \begin{pmatrix} 0.10443 \\ 0.07087 \\ 0.08939 \\ 0.16588 \\ 0.22410 \end{pmatrix}, \quad X_3 = \begin{pmatrix} 0.10308 \\ 0.06934 \\ 0.08906 \\ 0.16039 \\ 0.22029 \end{pmatrix}.$$
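For instance, a one-line check (a sketch under the same $1 - \mu - \nu$ identity) reproduces $X_1$ from the collective matrix $\tilde{I}_1$.

```python
import numpy as np

I1 = np.array([[0.30889, 0.37653], [0.53555, 0.17999], [0.46399, 0.26435],
               [0.43637, 0.23209], [0.45965, 0.16183]])  # collective matrix I~1
X1 = 1 - I1[:, 0] - I1[:, 1]                             # defuzzify: 1 - mu - nu
print(np.round(X1, 5))  # [0.31458 0.28446 0.27166 0.33154 0.37852]
```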
Assume the initial weight vector (with permutations of $w_{11}$ used for all the other stages) as follows:
$$w_{11} = \begin{pmatrix} 0.15 \\ 0.22 \\ 0.25 \\ 0.20 \\ 0.18 \end{pmatrix}.$$
Let the target output be 1.
Through computing the forward and backward passes and updating the weights, we can derive the output as recorded in Table 1 and Table 2.
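For readers who wish to reproduce the forward and backward passes, the following sketch runs the training loop on $X_1$ with the assumed weight vector $w_{11}$; the sigmoid activation and the stopping tolerance are assumptions, since the tabulated outputs depend on implementation details that the text does not fully fix.

```python
import numpy as np

X1 = np.array([0.31458, 0.28446, 0.27166, 0.33154, 0.37852])
w = np.array([0.15, 0.22, 0.25, 0.20, 0.18])      # initial weight vector w11
target, lr = 1.0, 0.5                             # target output and learning rate

for epoch in range(1000):                         # up to n = 1000 iterations
    net = np.dot(w, X1)                           # forward pass: a = sum_i w_i X_i
    out = 1 / (1 + np.exp(-net))                  # assumed sigmoid activation
    error = target - out                          # error = target - network output
    w = w + lr * (error * out * (1 - out)) * X1   # backward pass (delta rule)
    if abs(error) < 1e-3:                         # stop at the desired error level
        break
print(np.round(w, 4), round(float(error), 5))
```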

6. Discussion

Table 1 illustrates how this ANN model, based on the backpropagation technique, converts intuitionistic fuzzy input into a fuzzy input vector using three different defuzzification procedures based on the median membership grades. The collective matrix to which the ANN was applied was supplied by the suggested H-IFPGWA operator, and the outcomes are tabulated. Following the completion of the weight updates, the network output is presented in Table 1. The maximum number of iterations is n = 1000, though the decision-maker may choose to increase this to any larger value of n. The best alternative is $A_1$ in one case and $A_2$ in the other two cases. The same numerical example was also illustrated with the P-IFWG aggregation operator in [7], and the results were compared with existing ranking techniques. The choice of the best alternative in [7] was also $A_1$ when the Delta and Perceptron learning rules were employed. Hence, the new ANN algorithm proves to be a consistent method in line with the earlier proposed methods, as is clearly evident from the data presented in Table 1 and Table 2.

7. Conclusions

Using an enhanced aggregation operator for computation, a novel backpropagated model for MAGDM problem-solving is presented in this study. In order to solve the MAGDM problem, the new aggregation operator (H-IFPGWA) defined and used in this work advances the decision matrices to the next level of processing through an ANN. Lastly, the decision alternatives are ranked based on the updated weights of the ANN, following successful forward and backward passes. The model proposed in this paper was compared to conventional MAGDM methods and techniques for addressing the same investment choice problem. Compared to the previous ANN-based method in [7], which employed different learning rules, the new ANN approach proves to be very effective, particularly in handling the complexities faced by decision-makers during the weight-updating process in MAGDM situations. This MAGDM model, together with the proposed backpropagated ANN (implemented without any bias vector), can also be extended by including a bias vector, which is reserved for future work.

Author Contributions

Conceptualization, methodology, validation, formal analysis, resources, supervision, project administration: J.R.P.D.; Writing—original draft preparation, writing—review and editing: W.A.P.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Atanassov, K.; Sotirov, S.; Pencheva, T. Intuitionistic Fuzzy Deep Neural Network. Mathematics 2023, 11, 716.
2. Hájek, P.; Olej, V. Intuitionistic Fuzzy Neural Network: The Case of Credit Scoring Using Text Information. In Engineering Applications of Neural Networks: EANN 2015, Rhodes, Greece, 25–28 September 2015; Iliadis, L., Jayne, C., Eds.; Communications in Computer and Information Science; Springer: Cham, Switzerland, 2015; Volume 517.
3. Jekova, I.; Christov, I.; Krasteva, V. Atrioventricular Synchronization for Detection of Atrial Fibrillation and Flutter in One to Twelve ECG Leads Using a Dense Neural Network Classifier. Sensors 2022, 22, 6071.
4. Krasteva, V.; Christov, I.; Naydenov, S.; Stoyanov, T.; Jekova, I. Application of Dense Neural Networks for Detection of Atrial Fibrillation and Ranking of Augmented ECG Feature Set. Sensors 2021, 21, 6848.
5. Leonishiya, A.; Robinson, P.J. A Fully Linguistic Intuitionistic Fuzzy Artificial Neural Network Model for Decision Support Systems. Indian J. Sci. Technol. 2023, 16, 29–36.
6. Petkov, T.; Bureva, V.; Popov, S. Intuitionistic Fuzzy Evaluation of Artificial Neural Network Model. Notes Intuitionistic Fuzzy Sets 2021, 27, 71–77.
7. Robinson, P.J.; Leonishiya, A. Application of Varieties of Learning Rules in Intuitionistic Fuzzy Artificial Neural Network. In Machine Intelligence for Research & Innovations; Verma, O.P., Wang, L., Kumar, R., Yadav, A., Eds.; Lecture Notes in Networks and Systems; Springer: Singapore, 2024; Volume 832, pp. 35–45.
8. Sotirov, S.; Atanassov, K. Intuitionistic Fuzzy Feed Forward Neural Network. Cybern. Inf. Technol. 2009, 9, 62–68. Available online: https://cit.iict.bas.bg/CIT_09/v9-2/62-68.pdf (accessed on 24 July 2024).
9. Xu, Z.S.; Yager, R.R. Some Geometric Aggregation Operators Based on Intuitionistic Fuzzy Sets. Int. J. Gen. Syst. 2006, 35, 417–433.
10. Leonishiya, A.; Robinson, P.J. Varieties of Linguistic Intuitionistic Fuzzy Distance Measures for Linguistic Intuitionistic Fuzzy TOPSIS Method. Indian J. Sci. Technol. 2023, 16, 2653–2662.
11. Yager, R.R.; Filev, D.P. Induced Ordered Weighted Averaging Operators. IEEE Trans. Syst. Man Cybern. Part B 1999, 29, 141–150.
12. Yager, R.R. The Power Average Operator. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2001, 31, 724–731.
13. Kumar, A.; Sharma, T.K.; Verma, O.P. Detection of Heart Failure by Using Machine Learning. In Machine Intelligence for Research & Innovations; Verma, O.P., Wang, L., Kumar, R., Yadav, A., Eds.; Lecture Notes in Networks and Systems; Springer: Singapore, 2024; Volume 832, pp. 195–203.
14. Sharma, R.; Verma, O.P.; Kumari, P. Application of Dragonfly Algorithm-Based Interval Type-2 Fuzzy Logic Closed-Loop Control System to Regulate the Mean Arterial Blood Pressure. In Machine Intelligence for Research & Innovations; Verma, O.P., Wang, L., Kumar, R., Yadav, A., Eds.; Lecture Notes in Networks and Systems; Springer: Singapore, 2024; Volume 832, pp. 183–194.
Figure 1. A simple backpropagation architecture.
Figure 2. Flow chart for backpropagation method.
Table 1. Transforming vague values into fuzzy values. (Each input vector lists the values for alternatives $A_1$ through $A_5$.)

Sl. No. | Transforming Vague Values to Fuzzy Values | Input Vector 1 ($R_1$) | Input Vector 2 ($R_2$) | Input Vector 3 ($R_3$) | Ranking of Alternatives
1 | $[1 + t_A(u) - f_A(u)]/2$ | (0.4657, 0.6770, 0.5981, 0.5944, 0.6475) | (0.4636, 0.6736, 0.6079, 0.6245, 0.6723) | (0.3599, 0.5694, 0.5073, 0.5077, 0.5681) | A2 < A5 < A4 < A3 < A1
2 | $t_A(u)/[t_A(u) + f_A(u)]$ | (0.3311, 0.3945, 0.3860, 0.3586, 0.3534) | (0.4436, 0.4733, 0.4624, 0.4327, 0.4165) | (0.4283, 0.4691, 0.4551, 0.4199, 0.4028) | A2 < A3 < A4 < A1 < A5
3 | $t_A(u) + [1 - t_A(u) - f_A(u)][t_A(u) + f_A(u)]$ | (0.3820, 0.2728, 0.2978, 0.2676, 0.2299) | (0.4898, 0.3800, 0.3847, 0.3411, 0.3038) | (0.6975, 0.4124, 0.4474, 0.4112, 0.3378) | A1 < A3 < A2 < A4 < A5
Table 2. Selection of best alternatives [7] by intuitionistic fuzzy ANN and different aggregation operators with/without hidden layers and different learning rules. (Each rule/layer combination is reported for two runs.)

Sl. No. | Learning Rule | Hidden Layer | Threshold P-IFWG | Threshold P-IFWA | Threshold IFWG | Threshold IFWA | Ranking
1 | Delta | No | 0.17875 | 0.19861 | 0.17885 | 0.18195 | A1, A2
  |       |    | 0.19772 | 0.21238 | 0.20166 | 0.22589 | A1, A2
2 | Delta | Yes | 0.13971 | 0.13971 | 0.14204 | 0.14204 | A2, A4
  |       |     | 0.13656 | 0.13656 | 0.13641 | 0.13641 | A2, A3, A4
3 | Perceptron | No | 0.12992 | 0.13326 | 0.13017 | 0.2005896 | A1, A2, A3
  |            |    | 0.17017 | 0.17175 | 0.17094 | 0.197562 | A1, A2, A3
4 | Perceptron | Yes | 0.05797 | 0.05797 | 0.05774 | 0.05774 | A4, A5
  |            |     | 0.05775 | 0.05775 | 0.0577160 | 0.057696 | A2, A3, A4
5 | Hebb | No | 0.85408 | 0.81353 | 0.84579 | 1.39 | A4, A5
6 | Hebb | Yes | 2.33259 | 2.33259 | 2.33647 | 2.33647 | A1, A3, A5