Article

Forgetting-Based Concept-Cognitive Learning for Classification in Fuzzy Formal Decision Context

1 Department of Mathematics, Shantou University, Shantou 515063, China
2 School of Mathematics and Statistics, Shaanxi Normal University, Xi’an 710119, China
3 College of Mathematics and Computer Science, Quanzhou Normal University, Quanzhou 362000, China
* Author to whom correspondence should be addressed.
Axioms 2025, 14(8), 593; https://doi.org/10.3390/axioms14080593
Submission received: 29 May 2025 / Revised: 18 July 2025 / Accepted: 25 July 2025 / Published: 1 August 2025

Abstract

Concept-cognitive learning reveals the principle of human cognition by simulating the brain’s process of learning and processing concepts. Nevertheless, for neighborhood similarity granules, the average information of objects over all attributes is not considered, which may lead to unbalanced acquisition of knowledge. On the other hand, there are some unnecessary objects in the extents of fuzzy concepts, which results in poor classification learning. To tackle these challenges, we present a forgetting-based concept-cognitive learning model for classification in a fuzzy formal decision context. Firstly, the fuzzy concept space is established based on the correlation coefficient matrix. Then, to delete unnecessary objects that are in the zone of proximal development, we construct the forgetting fuzzy concept space by selecting the concept with the maximum similarity. Subsequently, a forgetting-based concept-cognitive learning mechanism (FCCLM) is proposed. In the end, experimental results on eight datasets validate the feasibility and efficiency of the proposed learning mechanism through classification performance assessment.

1. Introduction

Cognitive computing [1], as an important applied branch of cognitive science, can employ computational approaches to develop innovative solutions for complex real-world challenges [2,3]. Concepts are created by humans via the generalization and abstraction of the fundamental attributes of objects. As fundamental building blocks of human knowledge systems, they provide a structured approach to knowledge representation and organization, which is crucial for cognitive computing.
Wille [4,5] introduced formal concept analysis (FCA for short) to illustrate the formal context and formal concepts from a mathematical standpoint. With the development of FCA, extended models such as fuzzy concepts, interval set concepts, three-way concepts, and multi-granularity concepts have been proposed by scholars [6,7,8,9]. It should be noted that fuzzy concepts generated from a fuzzy formal context are a special case of Chu space [10,11]. The basic form of a fuzzy Chu space is a triple (A, X, r), where A and X are sets, r : A × X → k is a mapping, and k is a truth value set. When k takes values in the unit interval [0, 1], the fuzzy Chu space reduces to fuzzy formal concept analysis. In recent years, the combination of FCA and cognitive computing has led to the emergence of a popular research direction, namely concept-cognitive learning (CCL for short). CCL denotes the acquisition of concepts from provided clues via specific cognitive models, aiming to uncover the systematic regularity of how the human brain learns concepts. For example, a novel concept-cognitive model based on granular computing was first studied by Zhang et al. [12], and their experiments show that the model is feasible. Yao [13] put forward a three-level framework of concept cognition from three aspects: the abstract layer, the brain layer, and the machine layer. Qiu et al. [14] proposed a concept-cognitive system that can realize multiple concept cognitions. Additionally, Kumar et al. [15] investigated a concept learning approach to analyze the functionalities of bidirectional associative memory. Thereafter, Li et al. [16] discussed the concept learning mechanism from the perspective of philosophy and cognitive psychology. Therefore, the above-mentioned results in concept-cognitive systems have mainly been obtained by means of granular computing.
To date, common concept-cognitive learning models have mainly been based on information similarity matching [17,18,19]. Xu et al. [17] designed a two-way learning method of fuzzy concepts from the necessary or sufficient information granules in fuzzy datasets. Then Xu et al. [18] further proposed the dynamic learning of necessary and sufficient concepts in a dynamic environment. Meanwhile, many scholars have presented achievements in the combination of CCL and machine learning, which mainly address the problems of cognitive classification of data and knowledge fusion. For example, Mi et al. [20,21,22] introduced an incremental CCL model [20], a fuzzy-based CCL model [21], and a concurrent CCL model [22]. Furthermore, Yuan et al. [23] presented an incremental learning mechanism based on progressive fuzzy three-way concepts. Zhang et al. [24] discussed a knowledge discovery and concept classification method by assigning different weights to the fuzzy formal context. Next, Wang et al. [25] studied an innovative multi-view fuzzy CCL model to integrate concepts through multiple views. Liang et al. [26] investigated an incremental CCL method based on concept reduction for dynamic classification in the formal context. Note that in [23,24], the neighborhood similarity granules induced by the Euclidean distance or the cosine theorem do not consider the average information of objects. To this end, the correlation coefficient matrix is introduced here to improve classification performance, which is a motivation for this paper.
The advantage of CCL is that it can integrate past experiences into itself to describe dynamic data, and some achievements have already been made. For instance, Guo et al. [27] introduced a memory-based CCL model to mine knowledge by combining recalling and forgetting mechanisms. It is worth noting that the forgetting mechanism in Guo’s work forgets the maximal concept in the inclusion relationship while retaining the remaining concepts. However, the principle of forgetting is that short-term memories are usually forgotten, which means that objects in the zone of proximal development are easily forgotten. Therefore, the formulation of forgetting rules for fuzzy concepts remains an open research question requiring further investigation; we address this challenge in Section 3.
To overcome the aforementioned constraints, we propose a forgetting-based concept-cognitive learning model (FCCLM for short) for classification. Concretely, the fuzzy concept space is constructed with the correlation coefficient matrix. Since objects in the zone of proximal development are easily forgotten, removing them can improve the classification performance of knowledge discovery. Afterwards, to remove such easily forgotten objects, we introduce a forgetting-based concept-cognitive learning mechanism for fusing concepts and class prediction. The overall workflow diagram is shown in Figure 1. Meanwhile, this paper makes the following innovations.
  • The fuzzy concept based on the correlation coefficient matrix describes the average information embedded in data, which further decreases cognitive biases.
  • It designs a concept-forgetting method via removing some unnecessary objects, which can improve the classification performance of concept recognition.
The remainder of this article is organized as follows. We recall some related notations about fuzzy formal concept analysis in Section 2. Section 3 studies the construction of the forgetting fuzzy concept space with the correlation coefficient and concept classification. The experimental results are presented and analyzed in Section 4. Finally, Section 5 concludes this work and outlines directions for future research.
Figure 1. The overall workflow diagram.

2. Preliminaries

This section provides a concise overview of fundamental notions in fuzzy formal concept analysis. More detailed information can be found in [4,5,6].

Fuzzy Formal Concept Analysis

Consider a universe G; a fuzzy set F̃ of G is characterized by a mapping F̃(·) : G → [0, 1]. For any g ∈ G, the value F̃(g) represents the degree of membership of g in F̃. We denote by F(G) the collection of all fuzzy sets on G.
Let Ẽ and F̃ be two fuzzy sets on G. If Ẽ(x) ≤ F̃(x) for all x ∈ G, then Ẽ is a subset of F̃; that is to say, Ẽ ⊆ F̃. Specifically, the collection of all crisp sets defined on G is denoted as P(G).
A triplet ( G , A , R ˜ ) is referred to as a fuzzy formal context, where G = { x 1 , x 2 , , x n } and A = { a 1 , a 2 , , a m } are the sets of objects and attributes, respectively. Let R ˜ be a fuzzy relation between G and A (i.e., R ˜ : G × A [ 0 , 1 ] ), with each R ˜ ( x , a ) indicating the degree to which object x possesses attribute a.
Definition 1.
Given a fuzzy formal context (G, A, R̃), for Y ⊆ G and C̃ ∈ F(A), two learning operators H̃ : P(G) → F(A) and G : F(A) → P(G) are defined as follows:
$$\tilde H(Y)(a) = \bigwedge_{x \in Y} \tilde R(x, a) \ \ (a \in A), \qquad G(\tilde C) = \{\, x \in G : \forall a \in A,\ \tilde C(a) \le \tilde R(x, a) \,\},$$
in which the pair (Y, C̃) is a fuzzy concept if H̃(Y) = C̃ and G(C̃) = Y. In general, Y and C̃ are called the extent and the intent, respectively.
For any Y, Y1, Y2 ⊆ G and C̃, C̃1, C̃2 ∈ F(A), we have the following characteristic properties:
  • Y1 ⊆ Y2 ⇒ H̃(Y2) ⊆ H̃(Y1);
  • C̃1 ⊆ C̃2 ⇒ G(C̃2) ⊆ G(C̃1);
  • Y ⊆ G(H̃(Y)), C̃ ⊆ H̃(G(C̃));
  • H̃(Y) = H̃(G(H̃(Y))), G(C̃) = G(H̃(G(C̃)));
  • Y ⊆ G(C̃) ⇔ C̃ ⊆ H̃(Y);
  • H̃(Y1 ∪ Y2) = H̃(Y1) ∩ H̃(Y2), G(C̃1 ∪ C̃2) = G(C̃1) ∩ G(C̃2).
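As a minimal sketch (written in Python, since the paper's experiments use MATLAB), the two learning operators of Definition 1 can be implemented directly: H̃ takes the attribute-wise minimum over an object set, and G collects the objects dominating an intent. The membership degrees are those of Table 1; the function names `H` and `G` follow the text.

```python
# Sketch of the learning operators of Definition 1. The data are the
# membership degrees of Table 1 (four attributes per object).
R = {
    "x1": [0.7, 0.3, 0.4, 0.5], "x2": [0.1, 0.8, 0.5, 0.9],
    "x3": [0.3, 0.5, 0.6, 0.7], "x4": [0.4, 0.3, 0.4, 0.4],
    "x5": [0.6, 0.4, 0.3, 0.9], "x6": [0.3, 0.5, 0.6, 0.4],
    "x7": [0.2, 0.5, 0.4, 0.3], "x8": [0.1, 0.7, 0.8, 0.8],
}

def H(Y):
    """Intent of object set Y: attribute-wise minimum membership degree."""
    return tuple(min(R[x][k] for x in Y) for k in range(4))

def G(C):
    """Extent of intent C: all objects whose membership dominates C."""
    return {x for x, row in R.items() if all(row[k] >= C[k] for k in range(4))}

# First property (anti-monotonicity): enlarging Y can only shrink the intent.
assert all(a >= b for a, b in zip(H({"x1", "x3"}), H({"x1", "x2", "x3"})))
print(H({"x1", "x2", "x3", "x4"}))  # (0.1, 0.3, 0.4, 0.4)
```

Note that the extent operator here ranges over all of G; in the decision setting of Section 3, extents are taken within a decision class.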
In addition, suppose (G, A, R̃) and (G, D, J) are two formal contexts, where R̃ : G × A → [0, 1] and J : G × D → {0, 1}. Then a fuzzy formal decision context (FFDC) is characterized by (G, A, R̃, D, J), where A ∩ D = ∅. G/D = {D1, D2, …, Dt} is regarded as a decision partition based on the decision attribute set D, with G = D1 ∪ D2 ∪ ⋯ ∪ Dt.

3. The Construction of a Fuzzy Concept with the Correlation Coefficient

3.1. Constructing the Fuzzy Concept Space

This section introduces a correlation coefficient matrix to construct the correlation similarity granule for achieving concept characterization.
Definition 2.
Given a fuzzy formal context (G, A, R̃), for any x_i, x_j ∈ G, the correlation coefficient between x_i and x_j is defined as follows:
$$r(x_i, x_j) = \frac{\sum_{k=1}^{m} \big|\tilde R(i,k) - \bar R(i)\big|\,\big|\tilde R(j,k) - \bar R(j)\big|}{\sqrt{\sum_{k=1}^{m}\big(\tilde R(i,k) - \bar R(i)\big)^{2}}\ \sqrt{\sum_{k=1}^{m}\big(\tilde R(j,k) - \bar R(j)\big)^{2}}},$$
where R̃(i, k) denotes the membership degree of object x_i with respect to attribute a_k, while R̄(i) represents the average membership degree of x_i across the attribute set A; that is, R̄(i) = (1/m) Σ_{k=1}^{m} R̃(i, k). The value of r(x_i, x_j) expresses the similarity degree between x_i and x_j. The pairwise correlation coefficients form a symmetric matrix Q, where Q(i, j) = r(x_i, x_j) quantifies the linear relationship between objects x_i and x_j. Meanwhile, this correlation coefficient matrix can be used to construct the correlation similarity granule.
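The coefficient of Definition 2 can be sketched in a few lines of Python (note the absolute deviations in the numerator, as in the formula above); the rows used below are the membership vectors of x1 and x4 from Table 1, so the result can be checked against entry Q(1, 4) of Example 1:

```python
import math

# Correlation coefficient of Definition 2, with absolute deviations in
# the numerator and standard deviations in the denominator.
def corr(u, v):
    m = len(u)
    ub, vb = sum(u) / m, sum(v) / m
    num = sum(abs(u[k] - ub) * abs(v[k] - vb) for k in range(m))
    den = math.sqrt(sum((u[k] - ub) ** 2 for k in range(m))) * \
          math.sqrt(sum((v[k] - vb) ** 2 for k in range(m)))
    return num / den

x1 = [0.7, 0.3, 0.4, 0.5]   # membership degrees of x1 in Table 1
x4 = [0.4, 0.3, 0.4, 0.4]   # membership degrees of x4 in Table 1
print(round(corr(x1, x4), 2))   # 0.83, matching Q(1, 4) in Example 1
```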
Definition 3.
Given an FFDC ( G , A , R ˜ , D , J ) , for any D i G / D , the correlation similarity granule of object x is given by
$$CG(x) = \{\, y \in D_i \mid r(x, y) \ge \beta \,\},$$
where β is a similarity threshold; that is, y belongs to the granule of x whenever the correlation similarity degree r(x, y) is at least β.
Definition 4.
Given an FFDC ( G , A , R ˜ , D , J ) , where G / D = { D 1 , D 2 , , D t } , for any D i , the fuzzy concept subspace 𝒮 i about D i is denoted as follows:
$$\mathcal{S}_i = \left\{\, \big(G(\tilde H(CG(x))),\ \tilde H(CG(x))\big) \mid x \in D_i \,\right\}.$$
Subsequently, we denote by S = { 𝒮 1 , 𝒮 2 , , 𝒮 t } the fuzzy concept space, where 𝒮 i is named a fuzzy subspace of S . It should be emphasized that comprehensive feature learning of each individual object serves as a prerequisite for achieving optimal classification performance. Then we propose the procedure of constructing the fuzzy concept space in Algorithm 1 with the time complexity of O ( | G | 2 | A | ) .
Algorithm 1: Constructing fuzzy concept space (CFCS).
Example 1.
Table 1 is an FFDC ( G , A , R ˜ , D , J ) , where G = { x 1 , x 2 , x 3 , x 4 , x 5 , x 6 , x 7 , x 8 } and A = { a 1 , a 2 , a 3 , a 4 } . d is the decision attribute that partitions all objects into two classes, with D 1 = { x 1 , x 2 , x 3 , x 4 } and D 2 = { x 5 , x 6 , x 7 , x 8 } . The correlation coefficient matrix Q is computed as follows:
$$Q = \begin{pmatrix}
1.00 & 0.87 & 0.74 & 0.83 & 0.48 & 0.83 & 0.98 & 0.87 \\
0.87 & 1.00 & 0.95 & 0.72 & 0.67 & 0.79 & 0.90 & 0.94 \\
0.74 & 0.95 & 1.00 & 0.54 & 0.70 & 0.83 & 0.76 & 0.96 \\
0.83 & 0.72 & 0.54 & 1.00 & 0.69 & 0.65 & 0.90 & 0.59 \\
0.48 & 0.67 & 0.70 & 0.69 & 1.00 & 0.68 & 0.59 & 0.60 \\
0.83 & 0.79 & 0.83 & 0.65 & 0.68 & 1.00 & 0.80 & 0.92 \\
0.98 & 0.90 & 0.76 & 0.90 & 0.59 & 0.80 & 1.00 & 0.84 \\
0.87 & 0.94 & 0.96 & 0.59 & 0.60 & 0.92 & 0.84 & 1.00
\end{pmatrix}.$$
Given β = 0.65, for the objects in decision class D1, we have the correlation similarity granules CG(x1) = {x1, x2, x3, x4}, CG(x2) = {x1, x2, x3, x4}, CG(x3) = {x1, x2, x3}, and CG(x4) = {x1, x2, x4}. The fuzzy concept subspace is then 𝒮1 = {({x1, x2, x3, x4}, (0.1, 0.3, 0.4, 0.4)), ({x1, x2, x3}, (0.1, 0.3, 0.4, 0.5))}.
For the objects in decision class D2, the correlation similarity granules and fuzzy concept subspace are as follows: CG(x5) = {x5, x6}, CG(x6) = {x5, x6, x7, x8}, CG(x7) = {x6, x7, x8}, and CG(x8) = {x6, x7, x8}; 𝒮2 = {({x5, x6}, (0.3, 0.4, 0.3, 0.4)), ({x5, x6, x7, x8}, (0.1, 0.4, 0.3, 0.3)), ({x6, x7, x8}, (0.1, 0.5, 0.4, 0.3))}.
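A compact sketch of Algorithm 1 (CFCS) reproduces Example 1: for each object, build its correlation similarity granule, apply the operators of Definition 1, and collect the distinct concepts. Following the example, the extent operator is applied within the decision class of the seed object (an assumption suggested by the worked values rather than stated explicitly):

```python
import math

# Sketch of Algorithm 1 (CFCS): build each fuzzy concept subspace S_i
# from the correlation similarity granules (data of Table 1, beta = 0.65).
R = {"x1": [0.7, 0.3, 0.4, 0.5], "x2": [0.1, 0.8, 0.5, 0.9],
     "x3": [0.3, 0.5, 0.6, 0.7], "x4": [0.4, 0.3, 0.4, 0.4],
     "x5": [0.6, 0.4, 0.3, 0.9], "x6": [0.3, 0.5, 0.6, 0.4],
     "x7": [0.2, 0.5, 0.4, 0.3], "x8": [0.1, 0.7, 0.8, 0.8]}
classes = {1: ["x1", "x2", "x3", "x4"], 2: ["x5", "x6", "x7", "x8"]}

def corr(u, v):
    ub, vb = sum(u) / len(u), sum(v) / len(v)
    num = sum(abs(a - ub) * abs(b - vb) for a, b in zip(u, v))
    den = (math.sqrt(sum((a - ub) ** 2 for a in u)) *
           math.sqrt(sum((b - vb) ** 2 for b in v)))
    return num / den

def subspace(Di, beta=0.65):
    concepts = set()
    for x in Di:
        granule = [y for y in Di if corr(R[x], R[y]) >= beta]
        intent = tuple(min(R[y][k] for y in granule) for k in range(4))
        extent = tuple(y for y in Di
                       if all(R[y][k] >= intent[k] for k in range(4)))
        concepts.add((extent, intent))   # duplicate concepts collapse here
    return concepts

print(len(subspace(classes[1])), len(subspace(classes[2])))  # 2 3, as in Example 1
```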
Table 1. An FFDC.
U    a1    a2    a3    a4    d
x1   0.7   0.3   0.4   0.5   1
x2   0.1   0.8   0.5   0.9   1
x3   0.3   0.5   0.6   0.7   1
x4   0.4   0.3   0.4   0.4   1
x5   0.6   0.4   0.3   0.9   2
x6   0.3   0.5   0.6   0.4   2
x7   0.2   0.5   0.4   0.3   2
x8   0.1   0.7   0.8   0.8   2

3.2. Constructing the Forgetting Fuzzy Concept Space

Section 3.1 discusses the construction of the fuzzy concept space through the correlation similarity granule and a pair of cognitive operators H̃ and G. However, since X ⊆ G(H̃(X)), some of the objects in the extent G(H̃(X)) may be forgotten because of knowledge forgetting.
Definition 5.
Given an FFDC (G, A, R̃, D, J), for arbitrary X ⊆ G, the inner fringe of X is defined as
$$\mathcal{I}(X) = \left\{\, G(\tilde H(X)) \setminus Y_i \;\middle|\; Y_j \nsubseteq Y_i \ \text{for any}\ Y_j \in \Psi \ (i \ne j) \,\right\},$$
where Ψ = {L(H̃(X), a) | a ∈ A, H̃(X)(a) ≠ 0} = {Y_1, Y_2, …, Y_{|Ψ|}} and L(H̃(X), a) = {g ∈ G(H̃(X)) | R̃(g, a) = H̃(X)(a)} for a ∈ A.
Definition 5 introduces a method for forgetting a minimal object set that is proximate to a certain object in the extent G(H̃(X)). This forgetting minimal object set can be represented by a Boolean matrix, as shown in Proposition 1.
Proposition 1.
Let (G, A, R̃, D, J) be an FFDC and ℐ(X) the inner fringe of X ⊆ G. Then we know the following:
$$\mathcal{I}(X) = \left\{\, G(\tilde H(X)) \setminus Y_i \;\middle|\; \sum_{j=1}^{|\Psi|} d_{ij} = 1,\ Y_i \in \Psi \,\right\},$$
where Ψ = {L(H̃(X), a) | a ∈ A, H̃(X)(a) ≠ 0} = {Y_1, Y_2, …, Y_{|Ψ|}} and
$$d_{ij} = \begin{cases} 1, & Y_j \subseteq Y_i, \\ 0, & \text{otherwise}, \end{cases}$$
for i, j = 1, 2, …, |Ψ|.
Proof. 
For any Y_i ∈ Ψ, the condition Σ_{j=1}^{|Ψ|} d_{ij} = 1 means that
$$d_{ij} = \begin{cases} 1, & j = i, \\ 0, & j \ne i. \end{cases}$$
That is to say, for any Y_j ∈ Ψ (j ≠ i), we have Y_j ⊈ Y_i. Hence, for any Y_i ∈ Ψ with Σ_{j=1}^{|Ψ|} d_{ij} = 1, it is obvious that G(H̃(X)) ∖ Y_i ∈ ℐ(X).   □
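As a sketch of Proposition 1, the Boolean-matrix test can be coded directly. The family Ψ and the extent below are the ones that arise for the first concept of Example 2; only rows of d that sum to 1 (i.e., only the diagonal entry is 1) contribute to the fringe:

```python
# Inner fringe via the Boolean matrix d of Proposition 1, using the
# family Psi = {{x2}, {x1, x4}, {x4}} and extent {x1, x2, x3, x4}
# that appear for the first concept of Example 2.
GHX = frozenset({"x1", "x2", "x3", "x4"})
Psi = [frozenset({"x2"}), frozenset({"x1", "x4"}), frozenset({"x4"})]

# d[i][j] = 1 iff Y_j is a subset of Y_i
d = [[1 if Yj <= Yi else 0 for Yj in Psi] for Yi in Psi]

# Y_i enters the fringe iff its row sums to 1 (only d_ii = 1),
# i.e. no other Y_j is contained in Y_i.
fringe = {GHX - Yi for Yi, row in zip(Psi, d) if sum(row) == 1}
print(sorted(sorted(s) for s in fringe))
# → [['x1', 'x2', 'x3'], ['x1', 'x3', 'x4']]
```

Here {x1, x4} is excluded because it contains {x4}, so only {x2} and {x4} are removed from the extent, matching Example 2.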
Proposition 2.
Let (G, A, R̃, D, J) be an FFDC and ℐ(X) the inner fringe of X ⊆ G. For any V_i ∈ ℐ(X), (G(H̃(V_i)), H̃(V_i)) is a fuzzy concept.
Proof. 
It is obvious from Definition 1.   □
In fact, ℐ(X) may contain multiple elements. After applying knowledge forgetting, the forgetting fuzzy concept (G(H̃(V_i)), H̃(V_i)) that is maximally similar to the fuzzy concept (G(H̃(X)), H̃(X)) is selected. For these two concepts, the similarity degree is
$$K\big((G(\tilde H(V_i)), \tilde H(V_i)),\ (G(\tilde H(X)), \tilde H(X))\big) = \frac{\big|G(\tilde H(V_i)) \cap G(\tilde H(X))\big|}{\big|G(\tilde H(V_i)) \cup G(\tilde H(X))\big|} + \sqrt{\sum_{a \in A}\big(\tilde H(V_i)(a) - \tilde H(X)(a)\big)^{2}}.$$
Definition 6.
Let (G, A, R̃, D, J) be an FFDC. For (G(H̃(X)), H̃(X)) ∈ 𝒮i, the collection of forgetting fuzzy concepts is defined as follows:
$$F_X^i = \left\{\, (G(\tilde H(V_i)), \tilde H(V_i)) \;\middle|\; K\big((G(\tilde H(V_i)), \tilde H(V_i)),\ (G(\tilde H(X)), \tilde H(X))\big) = \mathrm{Max}_0 \ \text{for}\ V_i \in \mathcal{I}(X) \,\right\},$$
where $\mathrm{Max}_0 = \max_{V_i \in \mathcal{I}(X)} K\big((G(\tilde H(V_i)), \tilde H(V_i)),\ (G(\tilde H(X)), \tilde H(X))\big)$.
In a fuzzy formal decision context (G, A, R̃, D, J) with X ⊆ G, (G(H̃(X)), H̃(X)) has some fuzzy subconcepts; that is, certain fuzzy concepts can be obtained from (G(H̃(X)), H̃(X)) after knowledge forgetting through the inner fringe ℐ(X). From Proposition 2, for any V_i ∈ ℐ(X), (G(H̃(V_i)), H̃(V_i)) is a fuzzy concept obtained from (G(H̃(X)), H̃(X)) after knowledge forgetting, and it is obviously a subconcept of (G(H̃(X)), H̃(X)). The higher the similarity between (G(H̃(V_i)), H̃(V_i)) and (G(H̃(X)), H̃(X)), the higher the probability that the objects removed from G(H̃(X)), namely those in the corresponding Y_i, are forgotten.
In addition, given D_i, the forgetting fuzzy concept subspace is denoted as F_i = {F_X^i | (G(H̃(X)), H̃(X)) ∈ 𝒮i}. Meanwhile, the process of constructing the forgetting fuzzy concept space is shown in Algorithm 2: Ψ is computed in Steps 3–11, which takes O(|G|²|A|), and unnecessary objects are removed in Steps 12–25, which takes O(|Ψ|²|G|), where Ψ corresponds to the zone of proximal development of an object. Hence, the time complexity of Algorithm 2 is O(|S|(|G|²|A| + |Ψ|²|G|)).
Example 2.
Continue with Example 1. With respect to class D1, we now discuss the forgetting fuzzy concept of ({x1, x2, x3, x4}, (0.1, 0.3, 0.4, 0.4)) in 𝒮1. In fact, Ψ = {{x2}, {x1, x4}, {x4}}, which implies that ℐ(X) = {{x1, x2, x3, x4} ∖ {x2}, {x1, x2, x3, x4} ∖ {x4}} = {{x1, x3, x4}, {x1, x2, x3}}. Then we have two fuzzy concepts, ({x1, x3, x4}, (0.3, 0.3, 0.4, 0.4)) and ({x1, x2, x3}, (0.1, 0.3, 0.4, 0.5)). Subsequently, K(({x1, x3, x4}, (0.3, 0.3, 0.4, 0.4)), ({x1, x2, x3, x4}, (0.1, 0.3, 0.4, 0.4))) = 0.95 and K(({x1, x2, x3}, (0.1, 0.3, 0.4, 0.5)), ({x1, x2, x3, x4}, (0.1, 0.3, 0.4, 0.4))) = 0.85. Hence, we know that F_X^1 = {({x1, x3, x4}, (0.3, 0.3, 0.4, 0.4))}. Similarly, F_Y^1 = {({x2, x3}, (0.1, 0.5, 0.5, 0.7))}. Finally, we have the forgetting fuzzy concept subspace F_1 = {({x1, x3, x4}, (0.3, 0.3, 0.4, 0.4)), ({x2, x3}, (0.1, 0.5, 0.5, 0.7))}.
In addition, the forgetting fuzzy concept subspace in D2 is F_2 = {({x5}, (0.6, 0.4, 0.3, 0.9)), ({x6, x7, x8}, (0.1, 0.5, 0.4, 0.3)), ({x6, x8}, (0.1, 0.5, 0.6, 0.4))}.
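The similarity degree K and the selection of the maximally similar forgetting concept (Definition 6) can be sketched as follows. The precise form of K (a Jaccard ratio of the extents plus the Euclidean distance of the intents) is recovered from the worked values 0.95 and 0.85 of Example 2, so treat it as a reconstruction rather than a verbatim restatement:

```python
import math

# Similarity degree K between two fuzzy concepts (extent, intent),
# and selection of the maximally similar forgetting concept.
def K(c1, c2):
    (e1, i1), (e2, i2) = c1, c2
    jac = len(e1 & e2) / len(e1 | e2)
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(i1, i2)))
    return jac + dist   # form recovered from the values in Example 2

X  = (frozenset({"x1", "x2", "x3", "x4"}), (0.1, 0.3, 0.4, 0.4))
V1 = (frozenset({"x1", "x3", "x4"}),       (0.3, 0.3, 0.4, 0.4))
V2 = (frozenset({"x1", "x2", "x3"}),       (0.1, 0.3, 0.4, 0.5))

print(round(K(V1, X), 2), round(K(V2, X), 2))   # 0.95 0.85
best = max((V1, V2), key=lambda c: K(c, X))     # kept in F_X^1
print(sorted(best[0]))                          # ['x1', 'x3', 'x4']
```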
Meanwhile, the process of learning the forgetting fuzzy concept is shown in Figure 2.
Algorithm 2: Constructing forgetting fuzzy concept space (CFFCS).

3.3. Fusing Concept

Mutual information exists among forgetting fuzzy concepts, manifested through their bidirectional influence mechanisms. In order to overcome the limitations of individual cognition and the incomplete cognitive environment [28], we propose a new method to fuse concepts through the forgetting fuzzy concept space.
Definition 7.
Given an FFDC (G, A, R̃, D, J), for a forgetting fuzzy concept subspace F_i, if there exist forgetting fuzzy concepts (X_1, B̃_1), (X_2, B̃_2), …, (X_n, B̃_n) satisfying X_1 ⊆ X_2 ⊆ ⋯ ⊆ X_n, then (X_n, B̃_n) is regarded as the supremum fuzzy concept, and the fusing forgetting fuzzy pseudo-concept (Y_{i,j}, B̃_{i,j}) is defined as follows:
$$Y_{i,j} = X_1 \cup X_2 \cup \cdots \cup X_n, \qquad \tilde B_{i,j} = \frac{1}{2^{n-1}}\left(\tilde B_1 + \tilde B_2 + 2\tilde B_3 + 4\tilde B_4 + \cdots + 2^{n-2}\tilde B_n\right),$$
where n is the number of forgetting fuzzy concepts in the chain. Note that the weights 1, 1, 2, …, 2^{n−2} sum to 2^{n−1}, so B̃_{i,j} is a normalized weighted average.
Generally speaking, the fusing forgetting fuzzy concept space is denoted as K = { K 1 , K 2 , , K t } , where K i = { K i , j | j = 1 , 2 , , n } = { ( Y i , j , B ˜ i , j ) | j = 1 , 2 , , n } , in which n is the number of pseudo-concepts in K i . The intent B ˜ i , j explicitly characterizes the magnitude of the pseudo-concept, in which the intents of subconcepts have been assigned different weights according to their corresponding extents. In other words, the greater the extent X i is, the larger the weight of its corresponding intent B ˜ i is. Finally, Algorithm 3 describes the clustering process of the fusing forgetting fuzzy concept space with the worst-case time complexity of O ( | G | 2 | A | ) .
Algorithm 3: Cognitive process of fusing forgetting fuzzy concept space.
Example 3.
Continuing with Example 2, according to Definition 7, the fusing forgetting fuzzy concept spaces are represented as follows:
K_{1,1} = ({x1, x3, x4}, (0.3, 0.3, 0.4, 0.4)); K_{1,2} = ({x2, x3}, (0.1, 0.5, 0.5, 0.7)); K_{2,1} = ({x5}, (0.6, 0.4, 0.3, 0.9)); K_{2,2} = ({x6, x7, x8}, (0.1, 0.5, 0.5, 0.35)).
It should be noted that there are two fusing forgetting fuzzy concepts in each decision class. It is evident that these concepts preserve the initial information while eliminating the redundant forgetting fuzzy concepts, which significantly enhances the efficiency of concept-cognitive learning.
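The intent fusion of Definition 7 is easy to sketch; the chain below is the one from Example 3 in class D2 ({x6, x8} ⊆ {x6, x7, x8}), and the weight vector [1, 1, 2, 4, …, 2^(n−2)] / 2^(n−1) reduces to a plain average for n = 2:

```python
# Intent fusion of Definition 7 for a chain X_1 ⊆ ... ⊆ X_n of
# forgetting fuzzy concepts; intents are given from smallest extent up.
def fuse(intents):
    n = len(intents)
    if n == 1:
        return tuple(intents[0])
    # weights = [1, 1, 2, 4, ..., 2^(n-2)]; they sum to 2^(n-1)
    weights = [1] + [2 ** max(j - 1, 0) for j in range(1, n)]
    return tuple(sum(w * b[k] for w, b in zip(weights, intents)) / 2 ** (n - 1)
                 for k in range(len(intents[0])))

# Example 3, class D2: {x6, x8} ⊆ {x6, x7, x8}
B1 = (0.1, 0.5, 0.6, 0.4)   # intent of the smaller extent
B2 = (0.1, 0.5, 0.4, 0.3)   # intent of the supremum concept
print(fuse([B1, B2]))       # (0.1, 0.5, 0.5, 0.35), the intent of K_{2,2}
```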

3.4. Class Prediction

It should be noted that the class prediction of a testing sample is primarily determined based on the Euclidean distance between the testing sample and existing fuzzy concept clustering space [27,29,30].
Definition 8.
Let x n e w be a testing sample. ( x n e w , H ˜ ( x n e w ) ) is a new fuzzy concept; then the distance between H ˜ ( x n e w ) and pseudo-concept ( Y i , j , B ˜ i , j ) in K i is defined as follows:
$$Dis\big(\tilde H(x_{new}), \tilde B_{i,j}\big) = \sqrt{\sum_{a \in A}\big(\tilde H(x_{new})(a) - \tilde B_{i,j}(a)\big)^{2}}.$$
The quantity Dis(H̃(x_new), B̃_{i,j}) measures distance-based similarity: the smaller the distance value, the stronger the correlation between the two fuzzy concepts. Consequently, class determination for a new sample can be achieved by computing this similarity. Algorithm 4 describes the class prediction with the time complexity of O(|K||A|).
Algorithm 4: Class prediction of testing sample.
Example 4.
In the FFDC of Table 2, objects x1–x8 are from Example 1, while x9 and x10 are two new testing objects.
Regarding the object x9, its membership vector with respect to R̃ is H̃(x_new) = (0.6, 0.5, 0.1, 0.6), while its true label is 2. Subsequently, the Euclidean distances between x9 and the existing fusing forgetting fuzzy concept space K are computed as follows: Dis(H̃(x_new), B̃_{1,1}) = 0.5099, Dis(H̃(x_new), B̃_{1,2}) = 0.6481, Dis(H̃(x_new), B̃_{2,1}) = 0.3742, and Dis(H̃(x_new), B̃_{2,2}) = 0.6874. Indeed, the distance between H̃(x_new) and B̃_{2,1} attains the minimum in K. Consequently, x9 is classified into decision class D2, which perfectly matches its real label 2.
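Algorithm 4 reduces to a nearest-intent search; the sketch below reproduces the prediction for x9 using the pseudo-concept intents of Example 3 (the structure `K_space` and the function name `predict` are ours):

```python
import math

# Sketch of Algorithm 4: assign a testing sample the class of the
# nearest pseudo-concept intent. Intents are those of Example 3.
K_space = {                      # fusing forgetting fuzzy concept space
    1: [(0.3, 0.3, 0.4, 0.4), (0.1, 0.5, 0.5, 0.7)],
    2: [(0.6, 0.4, 0.3, 0.9), (0.1, 0.5, 0.5, 0.35)],
}

def predict(x_new):
    dist = lambda b: math.sqrt(sum((u - v) ** 2 for u, v in zip(x_new, b)))
    # minimize the Euclidean distance over all pseudo-concepts, keep the class
    return min(((d, b) for d, Bs in K_space.items() for b in Bs),
               key=lambda p: dist(p[1]))[0]

x9 = (0.6, 0.5, 0.1, 0.6)
print(predict(x9))   # 2, matching the true label of x9
```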
Based on the above discussion, Figure 3 describes the overall flowchart of the FCCLM, which includes four parts, namely (1) constructing a fuzzy concept space; (2) constructing a forgetting fuzzy concept space; (3) constructing a fusing forgetting fuzzy concept space; and (4) class prediction. In summary, the total time complexity of our model FCCLM is O ( m a x { | S | ( | G | 2 | A | + | Ψ | 2 | G | ) , | K | | A | } ) .
Table 2. An FFDC.
U     a1    a2    a3    a4    d
x1    0.7   0.3   0.4   0.5   1
x2    0.1   0.8   0.5   0.9   1
x3    0.3   0.5   0.6   0.7   1
x4    0.4   0.3   0.4   0.4   1
x5    0.6   0.4   0.3   0.9   2
x6    0.3   0.5   0.6   0.4   2
x7    0.2   0.5   0.4   0.3   2
x8    0.1   0.7   0.8   0.8   2
x9    0.6   0.5   0.1   0.6   2
x10   0.2   0.6   0.4   0.9   2

4. Experimental Results

To evaluate the classification efficacy of the FCCLM model, a comparative analysis is conducted between it and both concept-cognitive learning and machine learning classification algorithms. A total of eight datasets are selected from the UCI (dataset source: http://archive.ics.uci.edu/, accessed on 14 March 2015) and Gene (dataset source: https://jundongl.github.io/scikit-feature/datasets.html, accessed on 14 March 2015) dataset repositories, as detailed in Table 3.

4.1. Experimental Setting

As a preprocessing step, Max–Min normalization is applied to all attributes to guarantee dataset compatibility with fuzzy processing requirements, specifically given by
$$\tilde R(u_i, a_j) = \frac{f(u_i, a_j) - \min(f(a_j))}{\max(f(a_j)) - \min(f(a_j))},$$
where f ( u i , a j ) is the initial value of object u i for attribute a j . Additionally, max ( f ( a j ) ) and min ( f ( a j ) ) are the maximum and minimum values of attribute a j across all objects, respectively.
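The column-wise Max–Min normalization can be sketched in a few lines (toy data; constant attributes with max = min would need a guard that this sketch omits):

```python
# Column-wise Max-Min normalization: scale each attribute to [0, 1]
# independently, as in the preprocessing formula above.
def max_min(data):
    cols = list(zip(*data))                  # transpose to attribute columns
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [[(v - l) / (h - l) for v, l, h in zip(row, lo, hi)]
            for row in data]

data = [[2.0, 10.0], [4.0, 30.0], [6.0, 20.0]]   # toy objects x attributes
print(max_min(data))   # [[0.0, 0.0], [0.5, 1.0], [1.0, 0.5]]
```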
In the experiment, we compare FCCLM with three concept-cognitive learning classification algorithms, namely, DMPWFC [23], ILMPFTC [24], and FCLM [21]. Furthermore, we also compare FCCLM with four machine learning classification algorithms, including Complex Tree (CT), Fuzzy-Root-Sum-Square (F-RSS), Decision Tree (DT), and Fuzzy Classification and Regression Tree (F-CART). Notice that parameter β plays an important role in constructing correlation similarity granules. In this investigation, parameter β is varied systematically from 0.55 to 1.0 with a step size of 0.05 to influence correlation similarity granules. All analyses are conducted using five-fold cross-validation to guarantee unbiased performance estimation. Final performance metrics are computed as the mean values obtained from the five cross-validation test partitions. All results are generated in MATLAB 2021a using standardized hardware (Intel(R) Core(TM) i7-4790 CPU @ 3.6 GHz, 16 GB (Lenovo, Quanzhou, China)) to ensure computational consistency.

4.2. Results and Analyses

The results of accuracy and the optimal parameter β of the proposed algorithm FCCLM and the seven comparison classification algorithms on eight datasets are shown in Table 4. The bottom row displays the mean classification accuracy with the standard deviation, where bold–underline formatting highlights statistically superior performance compared to all baseline methods. The results indicate that ILMPFTC and CT each achieve optimal accuracy on one dataset, while FCCLM demonstrates superior performance across six datasets. Furthermore, FCCLM has a higher average accuracy and lower standard deviation than the other algorithms, which indicates that FCCLM outperforms the other seven algorithms. The average accuracy of FCCLM is increased by 9.48% when compared with F-CART over all selected datasets. In summary, the above results indicate that FCCLM is superior to the other models. Figure 4 depicts the accuracy comparison between FCCLM and the other seven algorithms; it shows that the accuracy of FCCLM has a smaller fluctuation range than the other algorithms on most datasets.
Subsequently, the results of recall for the proposed algorithm FCCLM and the seven comparison algorithms on eight datasets are presented in Table 5. The last row illustrates the average recall and standard deviation, with bold–underline formatting denoting statistically superior performance relative to the comparator algorithms. This table indicates that ILMPFTC, CT, and F-CART achieve the best recall twice, once, and once, respectively, while FCCLM has the best performance on four datasets. Furthermore, FCCLM exhibits both a higher average recall and a lower standard deviation than the other algorithms, which illustrates that FCCLM outperforms the other seven algorithms.
Figure 5 describes the classification accuracy as β changes. From this figure, we can observe that the accuracy shows a slow upward trend with fluctuations as the parameter varies, among which the fluctuation range is the largest in the interval [0.75, 0.85]. At the same time, most datasets achieve optimal accuracy in the range of [0.85, 0.95].
Additionally, the statistical significance of the eight classification algorithms can be compared using the Friedman test [31] and the Bonferroni–Dunn test [32]. For the Friedman test, the statistic
$$F_F = \frac{(N-1)\chi_F^2}{N(k-1) - \chi_F^2} \sim F\big(k-1,\ (k-1)(N-1)\big)$$
measures the performance differences of the algorithms, where $\chi_F^2 = \frac{12N}{k(k+1)}\Big(\sum_{i=1}^{k} R_i^2 - \frac{k(k+1)^2}{4}\Big)$, N is the number of datasets, k is the number of algorithms, and R_i is the average rank of the i-th algorithm across all datasets. We first assume the null hypothesis that no statistically significant differences exist in predictive accuracy among the compared classification methods. If F_F > F(k−1, (k−1)(N−1)), the null hypothesis is rejected. The rank results of the eight models are represented in Table 6. In fact, F_F = 4.5634, and the critical value is F(7, 49) = 2.203 at level α = 0.05. Therefore, we reject the null hypothesis and accept the alternative hypothesis that the classification performance of the eight models is remarkably different.
Subsequently, the Bonferroni–Dunn test is applied to conduct pairwise comparisons of classification performance among the five top-performing models. If the difference between the mean ranks of two models exceeds the critical difference
$$CD_\alpha = q_\alpha \sqrt{\frac{k(k+1)}{6N}},$$
then the two models exhibit statistically significant differences in classification performance, in which q_α denotes the critical value of the test.
At level α = 0.05 , we notice that the critical value q 0.05 = 2.690 in [32]. Subsequently, C D α = 3.2946 ( k = 8 and N = 8 ). Therefore, we can see from Figure 6 that algorithm FCCLM outperforms algorithms F-RSS, FCLM, and DT at level α = 0.05 .
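The two statistics of this section are simple enough to check numerically (k = 8 algorithms, N = 8 datasets, q_0.05 = 2.690 as quoted from [32]; the rank vector passed to `friedman_F` below is illustrative, since Table 6 is not reproduced here):

```python
import math

k, N, q = 8, 8, 2.690   # algorithms, datasets, Bonferroni-Dunn critical value

# Bonferroni-Dunn critical difference
CD = q * math.sqrt(k * (k + 1) / (6 * N))
print(round(CD, 4))   # 3.2946, as reported in the text

# Friedman statistic F_F from a vector of average ranks R_i
def friedman_F(ranks):
    chi2 = 12 * N / (k * (k + 1)) * (sum(r ** 2 for r in ranks)
                                     - k * (k + 1) ** 2 / 4)
    return (N - 1) * chi2 / (N * (k - 1) - chi2)

# Sanity check: identical average ranks give chi2 = 0, hence F_F = 0.
print(friedman_F([4.5] * 8))   # 0.0
```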

4.3. Discussion

In the above subsection, we compared the performance of FCCLM with the other algorithms on eight datasets. The experimental results demonstrate the effectiveness and efficiency of the proposed algorithm FCCLM from the perspectives of accuracy and statistical testing. Specifically, its advantages are as follows: (1) By adjusting the threshold, we can balance the sensitivity and specificity of Algorithm 1 (CFCS) to capture rich information, thus facilitating the interpretability and understanding of the concept learning process. (2) Knowledge forgetting of the learned fuzzy concepts is performed through the inner fringe, thereby aligning with the logic of the Ebbinghaus forgetting curve and enhancing interpretability. (3) In the classification task, the fusing forgetting fuzzy pseudo-concept and class prediction accelerate the acquisition of sample labels and reduce the time consumption of FCCLM. In summary, a novel forgetting-based concept-cognitive learning model for classification in a fuzzy formal decision context is introduced to enhance the cognitive process.

5. Conclusions

In the context of rapidly expanding data, how to mine knowledge and fuse information is an important issue. Existing CCL methods in the fuzzy formal decision context mainly focus on forgetting unnecessary concepts when constructing the concept space, ignoring that objects are the key to constructing concepts. In this article, we have proposed a forgetting-based concept-cognitive learning model for classification tasks in a fuzzy formal decision context. Concretely, we first construct the fuzzy concept space with the correlation coefficient matrix. After that, in order to eliminate unnecessary objects in the zone of proximal development, we build a forgetting fuzzy concept space by choosing the concept with the highest similarity. Next, we propose a forgetting-based fuzzy concept model, which handles concept fusion and concept class prediction. Finally, to gain a better understanding of our proposed model, a series of comparative experiments on eight datasets is conducted to show that FCCLM can attain superior classification performance.
This article introduces a forgetting mechanism into the concept-cognitive learning process but does not consider recognition during incremental learning. Our future research will address this issue, with the aim of enhancing the accuracy and efficiency of incremental learning approaches in the context of CCL.

Author Contributions

Conceptualization: X.L.; Software: C.Z.; Writing—original draft: C.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Natural Science Foundation of Fujian Province (2024J01793) and the Natural Science Basic Research Program of Shaanxi Province (Grant No. 2024JC-YBQN-0025).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Pylyshyn, Z. Computation and cognition: Issues in the foundations of cognitive science. Behav. Brain Sci. 1980, 3, 111–132.
  2. Wang, Y.; Howard, N.; Kacprzyk, J.; Frieder, O.; Sheu, P.; Fiorini, R.A.; Gavrilova, M.L.; Patel, S.; Peng, J.; Widrow, B. Cognitive informatics: Towards cognitive machine learning and autonomous knowledge manipulation. Int. J. Cogn. Inform. Nat. Intell. 2018, 12, 1–13.
  3. Feldman, J. Minimization of Boolean complexity in human concept learning. Nature 2000, 407, 630–633.
  4. Ganter, B.; Wille, R. Formal Concept Analysis: Mathematical Foundations; Springer: New York, NY, USA, 1999.
  5. Wille, R. Restructuring lattice theory: An approach based on hierarchies of concepts. In Proceedings of the Formal Concept Analysis: 7th International Conference, ICFCA 2009, Darmstadt, Germany, 21–24 May 2009; Springer: Berlin/Heidelberg, Germany, 2009; pp. 314–339.
  6. Yahia, S.; Arour, K.; Slimani, A.; Jaoua, A. Discovery of compact rules in relational databases. Inf. Sci. 2000, 4, 497–511.
  7. Qi, J.; Qian, T.; Wei, L. The connections between three-way and classical concept lattices. Knowl.-Based Syst. 2016, 91, 143–151.
  8. Xu, F.; Xing, Z.; Yin, H. Attribute reductions and concept lattices in interval-valued intuitionistic fuzzy rough set theory: Construction and properties. J. Intell. Fuzzy Syst. 2016, 30, 1231–1242.
  9. Qi, J.; Wei, L.; Wan, Q. Multi-level granularity in formal concept analysis. Granul. Comput. 2019, 4, 351–362.
  10. Papadopoulos, B.K.; Syropoulos, A. Fuzzy sets and fuzzy relational structures as Chu spaces. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 2000, 8, 471–479.
  11. Papadopoulos, B.; Syropoulos, A. Categorical relationships between Goguen sets and “two-sided” categorical models of linear logic. Fuzzy Sets Syst. 2005, 149, 501–508.
  12. Zhang, W.; Xu, W. Cognitive model based on granular computing. Chin. J. Eng. Math. 2007, 24, 957–971.
  13. Yao, Y. Interpreting concept learning in cognitive informatics and granular computing. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2009, 39, 855–866.
  14. Qiu, G.; Ma, J.; Yang, H.; Zhang, W. A mathematical model for concept granular computing systems. Sci. China Inf. Sci. 2009, 39, 1239–1247.
  15. Kumar, C.; Ishwarya, M.; Loo, C. Formal concept analysis approach to cognitive functionalities of bidirectional associative memory. Biol. Inspired Cogn. Archit. 2015, 12, 20–33.
  16. Li, J.; Mei, C.; Xu, W.; Qian, Y. Concept learning via granular computing: A cognitive viewpoint. Inf. Sci. 2015, 298, 447–467.
  17. Xu, W.; Li, W. Granular computing approach to two-way learning based on formal concept analysis in fuzzy datasets. IEEE Trans. Cybern. 2014, 46, 366–379.
  18. Xu, W.; Guo, D.; Qian, Y.; Ding, W. Two-way concept-cognitive learning method: A fuzzy-based progressive learning. IEEE Trans. Fuzzy Syst. 2022, 31, 1885–1899.
  19. Xu, W.; Guo, D.; Mi, J.; Qian, Y.; Zheng, K.; Ding, W. Two-way concept-cognitive learning via concept movement viewpoint. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 6798–6812.
  20. Shi, Y.; Mi, Y.; Li, J.; Liu, W. Concept-cognitive learning model for incremental concept learning. IEEE Trans. Syst. Man Cybern. Syst. 2018, 51, 809–821.
  21. Mi, Y.; Shi, Y.; Li, J.; Liu, W.; Yan, M. Fuzzy-based concept learning method: Exploiting data with fuzzy conceptual clustering. IEEE Trans. Cybern. 2020, 52, 582–593.
  22. Shi, Y.; Mi, Y.; Li, J.; Liu, W. Concurrent concept-cognitive learning model for classification. Inf. Sci. 2019, 496, 65–81.
  23. Yuan, K.; Xu, W.; Li, W.; Ding, W. An incremental learning mechanism for object classification based on progressive fuzzy three-way concept. Inf. Sci. 2022, 584, 127–147.
  24. Zhang, C.; Tsang, E.C.; Xu, W.; Lin, Y.; Yang, L. Incremental concept-cognitive learning approach for concept classification oriented to weighted fuzzy concepts. Knowl.-Based Syst. 2023, 260, 110093.
  25. Wang, J.; Xu, W.; Ding, W.; Qian, Y. Multi-view fuzzy concept-cognitive learning with high-order information fusion of fuzzy attributes. IEEE Trans. Fuzzy Syst. 2024, 32, 6965–6978.
  26. Liang, T.; Lin, Y.; Li, J.; Lin, G.; Wang, Q. Incremental cognitive learning approach based on concept reduction. Int. J. Approx. Reason. 2025, 179, 109359.
  27. Guo, D.; Xu, W.; Qian, Y.; Ding, W. M-FCCL: Memory-based concept-cognitive learning for dynamic fuzzy data classification and knowledge fusion. Inf. Fusion 2023, 100, 101962.
  28. Li, J.; Mi, Y.; Liu, W. Incremental cognition of concepts: Theories and methods. Chin. J. Comput. 2019, 42, 1–19.
  29. Ding, Y.; Xu, W.; Ding, W.; Qian, Y. IFCRL: Interval-intent fuzzy concept re-cognition learning model. IEEE Trans. Fuzzy Syst. 2024, 32, 3581–3593.
  30. Guo, D.; Xu, W. Fuzzy-based concept-cognitive learning: An investigation of novel approach to tumor diagnosis analysis. Inf. Sci. 2023, 639, 118998.
  31. Friedman, M. A comparison of alternative tests of significance for the problem of m rankings. Ann. Math. Stat. 1940, 11, 86–92.
  32. Dunn, O. Multiple comparisons among means. J. Am. Stat. Assoc. 1961, 56, 52–64.
Figure 2. The process of learning the forgetting fuzzy concept.
Figure 3. The overall procedure of the proposed method.
Figure 4. Accuracy comparison with CCL and other classification algorithms.
Figure 5. Accuracy comparisons with varying β on eight datasets.
Figure 6. CD comparison of all classification algorithms with Bonferroni–Dunn test (α = 0.05).
Table 3. Data description.
| ID | Dataset      | Objects | Attributes | Classes |
|----|--------------|---------|------------|---------|
| 1  | Appendicitis | 106     | 7          | 2       |
| 2  | Dermatology  | 358     | 34         | 6       |
| 3  | Movement     | 360     | 90         | 15      |
| 4  | Air          | 359     | 64         | 3       |
| 5  | Austra       | 690     | 14         | 2       |
| 6  | Sonar        | 208     | 60         | 2       |
| 7  | Wdbc         | 569     | 30         | 2       |
| 8  | Wine         | 178     | 13         | 3       |
Table 4. Accuracy comparison (mean ± standard deviation%) of FCCLM and seven other algorithms.
| ID | β | FCCLM | DMP | WFCILM | PFTC | FCLM | CTF-RS | SDT | F-CART |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.80 | 86.75 ± 5.29 | 73.14 ± 12.03 | 57.80 ± 23.06 | 63.76 ± 19.02 | 83.89 ± 5.56 | 86.70 ± 8.85 | 74.50 ± 2.84 | 83.89 ± 4.43 |
| 2 | 0.95 | 96.36 ± 1.16 | 95.16 ± 2.29 | 95.92 ± 2.68 | 40.52 ± 3.00 | 94.68 ± 2.31 | 53.93 ± 10.06 | 61.77 ± 7.97 | 94.13 ± 2.09 |
| 3 | 1.00 | 81.67 ± 3.85 | 82.76 ± 4.86 | 87.29 ± 5.08 | 66.69 ± 3.23 | 61.67 ± 5.07 | 38.33 ± 4.77 | 14.44 ± 5.07 | 57.70 ± 11.31 |
| 4 | 1.00 | 95.83 ± 3.11 | 95.56 ± 1.95 | 82.61 ± 4.80 | 54.49 ± 5.80 | 85.52 ± 3.59 | 45.41 ± 7.73 | 50.09 ± 10.52 | 80.50 ± 3.83 |
| 5 | 1.00 | 80.14 ± 3.53 | 84.53 ± 3.32 | 53.05 ± 3.97 | 65.23 ± 4.54 | 84.78 ± 3.12 | 78.99 ± 1.85 | 72.46 ± 6.62 | 84.20 ± 4.45 |
| 6 | 1.00 | 86.09 ± 7.80 | 60.89 ± 14.58 | 82.57 ± 7.54 | 66.79 ± 12.77 | 71.18 ± 12.17 | 68.78 ± 4.86 | 65.82 ± 10.01 | 72.13 ± 10.35 |
| 7 | 1.00 | 93.32 ± 1.01 | 41.71 ± 16.69 | 83.07 ± 3.70 | 70.65 ± 1.15 | 78.09 ± 1.47 | 77.72 ± 0.90 | 88.93 ± 4.09 | 92.79 ± 1.34 |
| 8 | 0.95 | 96.63 ± 1.23 | 27.16 ± 37.90 | 89.05 ± 6.40 | 68.87 ± 26.22 | 92.67 ± 5.83 | 93.84 ± 2.29 | 66.81 ± 7.83 | 89.35 ± 6.61 |
| Ave. ± SD | | 89.60 ± 3.37 | 70.11 ± 11.70 | 78.92 ± 7.15 | 62.13 ± 9.47 | 81.56 ± 4.89 | 67.96 ± 5.16 | 61.85 ± 6.87 | 81.84 ± 5.55 |
| Rank | | 1.63 | 4.88 | 4.25 | 6.38 | 3.56 | 5.25 | 6.13 | 3.94 |
| Win/tie/loss | | 6/0/8 | 0/0/8 | 1/0/8 | 0/0/8 | 1/0/8 | 0/0/8 | 0/0/8 | 0/0/8 |
Table 5. Recall comparison (mean ± standard deviation%) of FCCLM and seven other algorithms.
| ID | β | FCCLM | DMP | WFCILM | PFTC | FCLM | CTF-RS | SDT | F-CART |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.80 | 75.36 ± 11.83 | 74.71 ± 24.04 | 57.13 ± 26.77 | 63.33 ± 27.71 | 65.21 ± 9.71 | 71.00 ± 19.64 | 67.02 ± 8.87 | 68.96 ± 8.78 |
| 2 | 0.95 | 95.83 ± 2.08 | 92.58 ± 11.35 | 95.41 ± 9.56 | 27.72 ± 35.27 | 94.03 ± 2.81 | 36.76 ± 3.98 | - | 92.42 ± 1.92 |
| 3 | 1.00 | 80.43 ± 4.79 | 58.54 ± 39.32 | 84.29 ± 22.15 | 68.69 ± 18.01 | 62.05 ± 5.15 | 44.73 ± 2.23 | - | 67.70 ± 16.54 |
| 4 | 1.00 | 95.54 ± 3.46 | 95.29 ± 7.05 | 80.90 ± 10.11 | 56.12 ± 30.88 | 85.35 ± 3.75 | 38.65 ± 3.28 | - | 80.23 ± 4.47 |
| 5 | 1.00 | 79.62 ± 4.02 | 83.85 ± 3.32 | 54.40 ± 14.09 | 64.76 ± 18.61 | 84.75 ± 3.35 | 76.89 ± 1.06 | 62.36 ± 13.42 | 83.92 ± 3.94 |
| 6 | 1.00 | 85.86 ± 7.84 | - | 87.86 ± 6.04 | 77.77 ± 17.59 | 71.60 ± 13.00 | 66.81 ± 4.94 | - | 71.77 ± 10.35 |
| 7 | 1.00 | 90.90 ± 2.26 | - | 84.82 ± 10.02 | 76.85 ± 31.02 | 91.92 ± 2.80 | 82.58 ± 3.28 | - | 92.58 ± 1.37 |
| 8 | 0.95 | 97.19 ± 0.94 | - | 90.33 ± 11.09 | 33.33 ± 57.74 | 93.20 ± 5.42 | 93.95 ± 1.70 | 56.37 ± 14.56 | 88.28 ± 7.52 |
| Ave. ± SD | | 87.59 ± 4.65 | 80.99 ± 17.01 | 79.39 ± 13.73 | 58.57 ± 29.60 | 81.01 ± 5.75 | 63.92 ± 5.01 | 61.91 ± 12.28 | 80.73 ± 6.86 |
Table 6. Rank of classification algorithms.
| ID | FCCLM | DMP | WFCILM | PFTC | FCLM | CTF-RS | SDT | F-CART |
|---|---|---|---|---|---|---|---|---|
| 1 | 1 | 6 | 8 | 7 | 3.5 | 2 | 5 | 3.5 |
| 2 | 1 | 3 | 2 | 8 | 4 | 7 | 6 | 5 |
| 3 | 3 | 2 | 1 | 4 | 5 | 7 | 8 | 6 |
| 4 | 1 | 2 | 4 | 6 | 3 | 8 | 7 | 5 |
| 5 | 4 | 2 | 8 | 7 | 1 | 5 | 6 | 3 |
| 6 | 1 | 8 | 2 | 6 | 4 | 5 | 7 | 3 |
| 7 | 1 | 8 | 4 | 7 | 5 | 6 | 3 | 2 |
| 8 | 1 | 8 | 5 | 6 | 3 | 2 | 7 | 4 |
| Average | 1.63 | 4.88 | 4.25 | 6.38 | 3.56 | 5.25 | 6.13 | 3.94 |
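The average ranks in Table 6 and the critical difference used in the Bonferroni–Dunn comparison (Figure 6) can be reproduced from the per-dataset ranks. The critical difference is CD = q_α · sqrt(k(k + 1) / (6N)) with k = 8 algorithms and N = 8 datasets; the value q_0.05 ≈ 2.690 for k = 8 is a standard tabulated constant and is assumed here rather than taken from the paper:

```python
import math

# Per-dataset ranks from Table 6 (1 = best); column order:
# FCCLM, DMP, WFCILM, PFTC, FCLM, CTF-RS, SDT, F-CART
ranks = [
    [1, 6, 8, 7, 3.5, 2, 5, 3.5],
    [1, 3, 2, 8, 4, 7, 6, 5],
    [3, 2, 1, 4, 5, 7, 8, 6],
    [1, 2, 4, 6, 3, 8, 7, 5],
    [4, 2, 8, 7, 1, 5, 6, 3],
    [1, 8, 2, 6, 4, 5, 7, 3],
    [1, 8, 4, 7, 5, 6, 3, 2],
    [1, 8, 5, 6, 3, 2, 7, 4],
]

k = len(ranks[0])  # number of algorithms
n = len(ranks)     # number of datasets

# Column-wise average ranks; these match the "Average" row of Table 6.
avg = [sum(row[j] for row in ranks) / n for j in range(k)]

# Bonferroni-Dunn critical difference at alpha = 0.05 (two-tailed);
# q_alpha = 2.690 for k = 8 is the assumed tabulated constant.
q_alpha = 2.690
cd = q_alpha * math.sqrt(k * (k + 1) / (6 * n))  # about 3.29
```

Algorithms whose average rank exceeds FCCLM's (1.63) by more than CD are judged significantly worse at α = 0.05.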
Share and Cite

Sun, C.; Ling, X.; Zhang, C. Forgetting-Based Concept-Cognitive Learning for Classification in Fuzzy Formal Decision Context. Axioms 2025, 14, 593. https://doi.org/10.3390/axioms14080593