Axioms
  • Article
  • Open Access

16 October 2025

Construction of Consistent Fuzzy Competence Spaces and Learning Path Recommendation

1 School of Mathematics and Computer Science, Quanzhou Normal University, Quanzhou 362000, China
2 Fujian Provincial Key Laboratory of Data-Intensive Computing, Quanzhou 362000, China
3 Fujian University Laboratory of Intelligent Computing and Information Processing, Quanzhou 362000, China
4 School of Mathematics and Statistics, Minnan Normal University, Zhangzhou 363000, China
This article belongs to the Special Issue Advances in Fuzzy Preference Relations and Decision-Making Methods with Applications

Abstract

Artificial intelligence is playing an increasingly important role in education, and learning path recommendation is one of the key technologies in AI education applications. This paper applies knowledge space theory and fuzzy set theory to study the construction of consistent fuzzy competence spaces and their application to learning path recommendation. With the help of the outer fringe of fuzzy competence states, the paper proves necessary and sufficient conditions for a fuzzy competence space to be a consistent fuzzy competence space and designs an algorithm for verifying consistent fuzzy competence spaces. It also proposes methods for constructing and reducing consistent fuzzy competence spaces, provides learning path recommendation algorithms from the competence perspective combined with a disjunctive fuzzy skill mapping, and constructs a bottom-up, gradual, and effective learning path tree. Simulation experiments are carried out for the construction and reduction of consistent fuzzy competence spaces and for learning path recommendation; the results show that the proposed methods achieve significant performance improvements over related research and produce a more complete recommendation of gradual and effective learning paths. This research can provide theoretical foundations and algorithmic references for the development of AI education applications such as learning assessment systems and intelligent testing systems.

1. Introduction

Artificial intelligence has had a significant impact on the field of education, and related research continues to grow [1,2]. At present, artificial intelligence is widely used by educational institutions in various forms; intelligent educational systems can achieve personalized curriculum customization through machine learning and adaptive adjustments, thereby improving teaching quality and promoting students’ knowledge acquisition [3]. The deep integration of artificial intelligence with educational systems is transforming how students learn, how teachers teach, and how educational institutions operate [4]. Currently, the application of artificial intelligence in education mainly focuses on disciplines such as STEM (science, technology, engineering, and mathematics), computer science, and English education [5]. Studies have shown that to encourage broader adoption of artificial intelligence technologies among educators, it is necessary to highlight their practical benefits in teaching, thus promoting deeper integration of AI into digital instruction [6].
Personalized learning path recommendation can dynamically deliver appropriate learning resources and test questions based on learners’ responses, current proficiency levels, and specific learning objectives. Many scholars have studied learning path recommendation technologies from different perspectives. Kurilovas et al. proposed a learning path selection method based on swarm intelligence, which is suitable for dynamically selecting learning paths according to learners’ learning styles [7]. Nigenda et al. proposed a flexible conceptual framework that represents course information as AI planning and mathematical programming models, promoting the generation of learning paths through domain-independent algorithms [8]. Zhang et al. proposed a process-type learning path model that represents paths as flowcharts and dynamically recommends branches based on the learner’s knowledge state by combining deep knowledge tracing with process mining and decision mining; experiments on e-learning logs show improved learning effectiveness and efficiency [9]. The research by Rahayu et al. showed that current learning path recommendation technologies often combine ontology with Bayesian networks, data mining, and other AI technologies; they also pointed out that ontology can be combined with knowledge representation tools, educational psychology, and evolutionary computation to construct future dynamic learning paths in adaptive learning environments [10]. From a complementary perspective, multi-criteria decision making has explored set-based optimization mechanisms that can inform trade-offs in path selection [11]. Wang proposed an AI model based on deep learning to optimize personalized learning recommendation methods in English education, addressing problems such as ignoring students’ interests, relying on subjective judgment, or a large mean square error caused by popular learning paths in traditional recommendation methods [12]. 
In recent years, related research has also integrated technologies such as reinforcement learning, social networks, knowledge graphs, cognitive graphs, and graph attention mechanisms, combined with dynamic adjustment, multi-behavior modeling, cognitive visualization, and memory forgetting rules, focusing on improving recommendation accuracy and learning effectiveness [13,14,15,16,17]. Recent work also demonstrates knowledge-graph-based path planning that customizes learning sequences by integrating concept relations with refined learner diagnosis [18]. Complementarily, advances in deep knowledge tracing enhance the estimation of evolving knowledge states over long interaction sequences, supporting downstream path recommendation [19].
Most studies on learning path recommendation focus on the technical level and rarely incorporate the internal structure of learning content and learners’ mastery of existing knowledge. Knowledge space theory (KST) provides a theoretical framework for research on personalized learning path recommendation by integrating knowledge structures and learners’ knowledge states. KST, rooted in probabilistic measurement theory and mathematical psychology, is a mathematical framework proposed by Doignon and Falmagne to model and assess individuals’ knowledge states [20,21,22]. Using the formal mechanisms provided by KST, it is possible to infer an individual’s knowledge level in a specific domain based on their responses to a series of questions [23,24]. Currently, knowledge space theory has been extended to competence-based knowledge space theory (CbKST) [25,26,27,28]. Early research focusing on dichotomous knowledge structures has also been extended to polytomous knowledge structures [29,30,31,32,33]. At the same time, the combination of KST with cognitive diagnostic theory, rough set theory, formal concept analysis, and fuzzy set theory has further enriched KST and its applications [34,35,36,37,38]. Although early studies of KST also mentioned its application to learning path recommendation, relevant research has remained relatively limited. In recent years, some scholars have attempted to apply KST to learning path recommendation [39,40,41]. Zhou et al. explored learning path recommendation for fuzzy knowledge structures under conjunctive skill mappings, disjunctive skill mappings, and conjunctive fuzzy skill mappings [39]. Zhou et al. also studied the construction of knowledge structures and learning path recommendation in formal contexts [40]. Wang et al. 
proposed a learning path recommendation method based on fuzzy competence space theory, defined a fuzzy competence space called a consistent fuzzy competence space (CFCS), and designed a gradual and effective learning path recommendation algorithm from the empty state ∅ to the full set Q, which was validated through simulations [41].
In Wang's study [41], the construction of a CFCS was not specified in detail, and the learning path recommendation discussed only how to find one gradual and effective path from the initial knowledge state ∅ to the full set Q. A natural extension is to enumerate all gradual and effective paths from ∅ to Q. In addition, the existing algorithm checks consistency directly from the basic definition, which is computationally inefficient. In this paper, we establish necessary and sufficient conditions under which a fuzzy competence space is consistent, propose an algorithm to verify consistency, and study both construction and reduction methods for CFCSs. Building on these results, we improve the prior learning path algorithm to construct a bottom-up gradual learning path tree from ∅ to Q and enumerate all gradual and effective learning paths from ∅ to Q.
The rest of this paper is organized as follows: Section 2 reviews CFCSs and related basic concepts. Section 3 presents a simplified method to verify whether a fuzzy competence structure is a CFCS and introduces construction methods for CFCSs. Section 4 proposes a gradual and effective learning path recommendation algorithm. Section 5 presents simulation experiments, and Section 6 concludes the work and outlines future research directions.

2. Preliminaries

To clearly articulate the concept of CFCSs, we first revisit foundational concepts from knowledge space theory and fuzzy set theory.
Definition 1
([21]). Let Q be a nonempty finite set representing a domain of problems. In an ideal scenario free from random errors or guessing, the subset of problems K ⊆ Q that an individual can correctly solve is called a knowledge state. Both ∅ (solving none) and Q (solving all) are valid knowledge states. The collection of all such knowledge states is denoted as 𝒦, and the pair (Q, 𝒦) is termed a knowledge structure. For any K_1, K_2 ∈ 𝒦, if K_1 ∪ K_2 ∈ 𝒦, the structure (Q, 𝒦) is called a knowledge space.
Union-closure of knowledge states means that solvable sets can be combined; if one learner solves K_1 ⊆ Q and another solves K_2 ⊆ Q, then a state covering K_1 ∪ K_2 belongs to 𝒦. This conveys that collaboration integrates what each can solve into a jointly attainable set of problems.
Definition 1 provides a formal mathematical basis to model what learners know in a domain. It also supports tracing knowledge progression and designing personalized learning paths within the structure (Q, 𝒦).
Definition 2
([39]). Let S be a nonempty finite set of skills relevant to problem solving in Q. A fuzzy competence state (FC-state) is defined as a mapping T : S → [0, 1], where for each skill s ∈ S, the value T(s) represents the degree to which an individual has mastered skill s. Formally, the set of all such mappings is denoted by F(S) = {T | T : S → [0, 1]}.
From the perspective of fuzzy sets, T can be seen as a fuzzy set over S, with T(s) as the membership degree of skill s. For any two FC-states T_1, T_2 ∈ F(S):
  • T_1 ⊆ T_2 if and only if T_1(s) ≤ T_2(s) for all s ∈ S; T_1 ⊂ T_2 represents T_1 ⊆ T_2 and T_1 ≠ T_2;
  • The union is given by (T_1 ∪ T_2)(s) = max{T_1(s), T_2(s)};
  • The intersection is given by (T_1 ∩ T_2)(s) = min{T_1(s), T_2(s)}.
Definition 2 allows us to model individuals’ proficiency levels in a flexible and graded way, reflecting partial rather than all-or-nothing mastery, and it supports algebraic operations on FC-states.
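The FC-state operations above can be sketched in Python; the dict-based representation and the function names are ours, not from the paper:

```python
# Minimal sketch of FC-state operations from Definition 2 (hypothetical
# representation: an FC-state is a dict mapping skill names to levels in [0, 1]).

def fc_union(t1, t2):
    """Pointwise max: (T1 ∪ T2)(s) = max(T1(s), T2(s))."""
    return {s: max(t1[s], t2[s]) for s in t1}

def fc_intersection(t1, t2):
    """Pointwise min: (T1 ∩ T2)(s) = min(T1(s), T2(s))."""
    return {s: min(t1[s], t2[s]) for s in t1}

def fc_leq(t1, t2):
    """T1 ⊆ T2 iff T1(s) ≤ T2(s) for every skill s."""
    return all(t1[s] <= t2[s] for s in t1)

T1 = {"s1": 0.2, "s2": 0.5, "s3": 0.0}
T2 = {"s1": 0.6, "s2": 0.0, "s3": 0.4}
print(fc_union(T1, T2))              # {'s1': 0.6, 's2': 0.5, 's3': 0.4}
print(fc_leq(T1, fc_union(T1, T2)))  # True
```

Note that an FC-state is always ⊆ its union with any other state, which is what the last line illustrates.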
Definition 3
([39]). Let Q and S be nonempty finite sets representing problems and skills, respectively. A fuzzy skill mapping is defined as a triple (Q, S, τ), which assigns to each problem a fuzzy skill requirement function τ_q : S → [0, 1]. Here, τ_q(s) represents the minimum level of mastery needed for skill s to solve problem q. If τ_q(s) = 0, it means skill s is irrelevant to problem q.
Under the disjunctive model, for an individual with an FC-state T, problem q is considered solvable if there exists at least one skill s ∈ S such that T(s) ≥ τ_q(s) > 0. The corresponding knowledge state K can thus be formally described as:
K = {q ∈ Q | ∃ s ∈ S, 0 < τ_q(s) ≤ T(s)}.
Such a formulation illustrates how differences in skill mastery correspond to problem-solving capabilities and makes it possible to formally specify which problems learners can handle based on their FC-states.
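As an illustration of the disjunctive model, the following sketch computes the knowledge state K from an FC-state; the toy τ, the dict layout, and the function name are our assumptions:

```python
# Sketch of the disjunctive solvability rule: problem q is solvable iff
# some skill s satisfies 0 < τ_q(s) ≤ T(s).
# tau maps each problem to its fuzzy skill requirements {skill: level}.

def knowledge_state(T, tau):
    return {q for q, req in tau.items()
            if any(0 < level <= T[s] for s, level in req.items())}

tau = {
    "q1": {"s1": 0.2, "s2": 0.0},  # q1 needs s1 at level >= 0.2
    "q2": {"s1": 0.6, "s2": 0.5},  # q2 needs s1 >= 0.6 or s2 >= 0.5
}
T = {"s1": 0.2, "s2": 0.5}
print(sorted(knowledge_state(T, tau)))  # ['q1', 'q2']
```

Here q2 is solvable via s2 alone, even though s1 falls short of its requirement; this is exactly the disjunctive (at-least-one-skill) reading.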
Definition 4
([41]). Let S be a nonempty finite set of skills, and for each s ∈ S, let (P_s, ≤_s) be a finite ordered set of numerical values whose minimum is 0 and whose maximum is 1. Denote by P_s^S the set of all transversals of {{s} × P_s}_{s ∈ S}; that is, each element in P_s^S selects exactly one pair (s, p_s) with p_s ∈ P_s for every s ∈ S. In other words, a transversal is an FC-state T such that, for each s ∈ S, T(s) ∈ P_s. (The concept of transversal here follows the explanation in Definition 4 of [41].)
Let C ⊆ P_s^S. A triple (S, P_s, C) is called a fuzzy competence structure if it satisfies the following conditions:
  • C contains both the minimal competence state S × {0} and the maximal competence state S × {1};
  • {(s, p_s) ∈ T | T ∈ C} = ⋃_{s ∈ S} {s} × P_s, i.e., every proficiency level of every skill occurs in some state of C.
Moreover, if for any two FC-states T_1, T_2 ∈ C, the union T_1 ∪ T_2 is also in C, then (S, P_s, C) is called a fuzzy competence space.
Such definitions describe how sets of graded competence states can be systematically organized and specify the algebraic closure under union required to form a fuzzy competence space, providing theoretical support for subsequent research on learning path recommendation.
Building on the above concepts and drawing on Definitions 8 and 9 in [41], we now present a more explicit definition of a consistent fuzzy competence space.
Definition 5
(cf. [41], Definitions 8 and 9). Let (S, P_s, C) be a fuzzy competence structure. For each T ∈ C, define the cardinality |T| as the number of skills with non-zero proficiency:
|T| = |{s ∈ S | T(s) > 0}|.
For any T_1, T_2 ∈ C with T_1 ⊆ T_2, let
T_2 − T_1 = {s ∈ S | T_2(s) − T_1(s) > 0},
and define the distance:
Δ(T_1, T_2) = Σ_{s ∈ S} Δ(T_1(s), T_2(s)),
where, for proficiency levels p_i, p_j ∈ P_s, Δ(p_i, p_j) = j − i, with 1 ≤ i < j ≤ |P_s|.
A fuzzy competence space (S, P_s, C) is called a CFCS if it satisfies the following conditions:
(1)
For any T, T′ ∈ C, if T ⊂ T′, there exist T_0, T_1, …, T_n ∈ C such that T = T_0 ⊂ T_1 ⊂ … ⊂ T_n = T′ and |T_i − T_{i−1}| = 1, where 1 ≤ i ≤ n;
(2)
For any T, T′ ∈ C, if T ⊂ T′ and |T′ − T| = 1, suppose T′ − T = {s} (that is, T′(s) − T(s) > 0); then there exist T_0, T_1, …, T_n ∈ C such that T = T_0 ⊂ T_1 ⊂ … ⊂ T_n = T′ and Δ(T_{i−1}(s), T_i(s)) = 1, where 1 ≤ i ≤ n.
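The distance Δ sums index gaps in the ordered level sets rather than numerical gaps, which a short sketch makes concrete; the level sets, dict layout, and function name below are hypothetical:

```python
# Sketch of the distance Δ from Definition 5: each skill contributes the
# difference of the *indices* of its two levels in the ordered set P_s,
# not the difference of the numerical values themselves.

P = {"s1": [0, 0.2, 0.6, 1], "s2": [0, 0.5, 1], "s3": [0, 0.4, 0.7, 1]}

def delta(T1, T2):
    """Δ(T1, T2) = Σ_s |index of T2(s) in P_s − index of T1(s) in P_s|."""
    return sum(abs(P[s].index(T2[s]) - P[s].index(T1[s])) for s in P)

T1 = {"s1": 0, "s2": 0.5, "s3": 0.4}
T2 = {"s1": 0.2, "s2": 0.5, "s3": 0.4}
print(delta(T1, T2))  # 1: only s1 moves up, by exactly one level
```

So a step from level 0 to 0.2 and a step from 0.6 to 1 both count as distance 1, reflecting one increment along the ordered scale.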
Remark 1.
In Definition 5, the chain condition is stated with strict inclusion, T_0 ⊂ T_1 ⊂ … ⊂ T_n. This slightly strengthens the inclusion in [41], yet it is fully justified by the other clauses: |T_i − T_{i−1}| = 1 and Δ(T_{i−1}(s), T_i(s)) = 1 together imply T_{i−1} ⊂ T_i. Using strict inclusion therefore makes the incremental progress explicit without changing the substance of the definition; using non-strict inclusion would still be correct but less precise.
This formulation reflects three educational learning properties [41]:
  • Closure under union of competence states indicates that the problem sets solvable by different individuals can be integrated through collaboration or instruction, producing a state that covers their combined solvable set;
  • Consistency, understood as progression via single-step increments, captures the gradualness of skill learning;
  • Taken together, closure under union and consistency imply that advancing in some skills does not interfere with other skills (see [41], Proposition 3).
Definition 6 
([41]). Let (S, P_s, C) be a fuzzy competence structure and (Q, S, τ) be a fuzzy skill mapping. Under the disjunctive model, for each FC-state T ∈ C, define:
m(T) = {q ∈ Q | ∃ s ∈ S, 0 < τ_q(s) ≤ T(s)}.
The mapping m : C → 2^Q is called the fuzzy disjunctive problem function associated with (Q, S, τ). Further, the family of subsets
𝒦 = {m(T) | T ∈ C}
is called the knowledge structure induced by the fuzzy competence structure (S, P_s, C) and the fuzzy skill mapping (Q, S, τ) under the disjunctive model.
This formulation explains how individual FC-states translate into solvable problem sets, providing a formal bridge from learners’ skill profiles to knowledge states within the disjunctive reasoning framework.
Example 1.
Let (S, P_s, C) be a fuzzy competence structure, and let (Q, S, τ) be a disjunctive fuzzy skill mapping. Let S = {s_1, s_2, s_3}, with P_{s_1} = {0, 0.2, 0.6, 1}, P_{s_2} = {0, 0.5, 1}, P_{s_3} = {0, 0.4, 0.7, 1}, and Q = {q_1, q_2, q_3, q_4, q_5, q_6, q_7, q_8}. The FC-states in C are listed in Table 1.
Table 1. The FC-states in C .
The fuzzy skill mapping Q , S , τ  is given in Table 2.
Table 2. Fuzzy skill mapping.
By Definition 5, (S, P_s, C) can be verified to be a CFCS. Based on Definition 6, the knowledge states corresponding to each FC-state are shown in Table 3.
Table 3. FC-states and corresponding knowledge states.
For the FC-states T_4, T_5, T_6, T_7, T_8, the proficiency level of skill s_3 is 1. Under the disjunctive fuzzy skill mapping, if at least one skill meets the requirement, the corresponding problem is solvable. Therefore, these states correspond to the complete problem set Q.

3. Construction of CFCSs

To facilitate subsequent applications, constructing CFCSs is an important task. However, the prior study [41] did not provide a concrete method for constructing CFCSs. In this section, inspired by the outer fringe of competence states in CbKST, we first introduce the definition of the outer fringe of an FC-state. On this basis, we propose a simplified method to decide whether a fuzzy competence structure is a CFCS and further design an algorithm to construct such a space. We first recall the outer fringe of competence states.
Definition 7
([25]). Let 𝒞 ⊆ 2^S be a competence structure defined on a nonempty finite set of skills S. For any competence state C ∈ 𝒞, the outer fringe of C, denoted C^O, is the set of skills s ∈ S \ C such that adding s to C results in a valid competence state in 𝒞. Formally:
C^O = {s ∈ S \ C | C ∪ {s} ∈ 𝒞}.
The outer fringe C^O consists of skills not yet mastered in C; acquiring any of them yields a new competence state. This concept is analogous to the zone of proximal development in educational theory and indicates the next skills or knowledge the learner is ready to acquire. The outer fringe highlights the gradual nature of skill learning: competence states improve step by step. If every non-maximal competence state has a nonempty outer fringe, learners can progressively master the entire skill set S by choosing one skill from the current outer fringe at each step until all skills are acquired.
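The crisp outer fringe of Definition 7 can be computed directly when competence states are modeled as sets of skills; the following is a minimal sketch with a toy structure of our own:

```python
# Sketch of the outer fringe C^O from Definition 7: competence states are
# frozensets of skills, and the structure 𝒞 is a set of such states.

def outer_fringe(C_state, structure, skills):
    """C^O = {s ∈ S \\ C | C ∪ {s} ∈ 𝒞}."""
    return {s for s in skills - C_state
            if frozenset(C_state | {s}) in structure}

skills = {"s1", "s2", "s3"}
structure = {frozenset(), frozenset({"s1"}), frozenset({"s1", "s2"}),
             frozenset({"s1", "s2", "s3"})}
print(sorted(outer_fringe(frozenset({"s1"}), structure, skills)))  # ['s2']
```

From {s1} the learner can only move to {s1, s2}, since {s1, s3} is not a valid state in this toy structure; the fringe thus singles out the next learnable skill.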
FC-states extend ordinary competence states by adding proficiency levels for each skill. Fuzzy skill learning is approximately continuous, while FC-states discretize it by selecting finite representative levels in [0, 1]. Based on these discretized values, we defined the fuzzy skill mapping, fuzzy competence structure, fuzzy competence space, and CFCS above. It is also natural to use these selected values as thresholds to define the outer fringe of an FC-state. For example, in Example 1, P_{s_3} = {0, 0.4, 0.7, 1}, and (s_3, 0.7) is the next possible target proficiency after (s_3, 0.4). The concept of the outer fringe of competence states can thus be extended to FC-states.
Definition 8.
Given a fuzzy competence structure (S, P_s, C), for any T ∈ C, if there exists T′ ∈ C such that T ⊂ T′ and there does not exist any T* ∈ C with T ⊂ T* ⊂ T′, then T′ is called an outer fringe state of T. The set of all outer fringe states of T is called the outer fringe of T, denoted by T^O.
Example 2.
In Example 1, the outer fringes of each FC-state in C  are shown in Table 4.
Table 4. The outer fringes of each FC-state in C of Example 1.
As shown in Example 2, except for the last FC-state whose outer fringe is empty, each FC-state has exactly one outer fringe state, which directly corresponds to the next FC-state in the sequence. This illustrates an ideal situation where, starting from the initial state, learners can progressively reach the final FC-state by successively moving to the outer fringe state of the current state.
Based on these observations and the structural properties of CFCSs, we can establish the following proposition.
Proposition 1.
Let (S, P_s, C) be a CFCS. For any T ∈ C with T^O ≠ ∅ and any outer fringe state T′ ∈ T^O, it holds that Δ(T, T′) = 1.
Proof. 
By Definition 8, T′ ∈ T^O implies T ⊂ T′ and that there does not exist T* ∈ C such that T ⊂ T* ⊂ T′. By Definition 5, for any T, T′ ∈ C with T ⊂ T′, there exists a chain:
T = T_0 ⊂ T_1 ⊂ … ⊂ T_n = T′,
where |T_i − T_{i−1}| = 1 for 1 ≤ i ≤ n. Since there does not exist T* with T ⊂ T* ⊂ T′, it follows that n = 1. Thus, the chain reduces to:
T = T_0 ⊂ T_1 = T′,
with |T_1 − T_0| = 1, i.e., |T′ − T| = 1. Moreover, by the second condition in Definition 5, suppose T′ − T = {s}; then there exists a chain T = T_0 ⊂ T_1 = T′ such that:
Δ(T_0(s), T_1(s)) = 1.
Therefore:
Δ(T, T′) = 1. □
Example 3.
The fuzzy competence structure (S, P_s, C) presented in Examples 1 and 2 constitutes a CFCS. As shown in Table 4, all FC-states with nonempty outer fringes satisfy the conclusion of Proposition 1.
Theorem 1.
Let (S, P_s, C) be a fuzzy competence space. Then, (S, P_s, C) is a CFCS if and only if for every FC-state T ∈ C:
(1) 
If T = S × {0}, then there exists T* ∈ T^O such that |T* − T| = 1 and Δ(T, T*) = 1.
(2) 
If T = S × {1}, then there exists T′ ∈ C such that T ∈ T′^O and |T − T′| = 1, Δ(T′, T) = 1.
(3) 
If T ≠ S × {0} and T ≠ S × {1}, then there exist T′, T* ∈ C such that T ∈ T′^O, T* ∈ T^O and |T − T′| = 1, Δ(T′, T) = 1, |T* − T| = 1, Δ(T, T*) = 1.
Proof. 
(Necessity) Suppose (S, P_s, C) is a CFCS. By Definition 5, for any T̃, T̂ ∈ C with T̃ ⊂ T̂, there exists a chain of FC-states:
T̃ = T_0 ⊂ T_1 ⊂ T_2 ⊂ … ⊂ T_{n−1} ⊂ T_n = T̂
such that |T_i − T_{i−1}| = 1 for 1 ≤ i ≤ n.
(1)
When T = S × {0}:
Take T̃ = T and T̂ = S × {1}. Let T* = T_1. The chain is:
T = S × {0} ⊂ T* ⊂ T_2 ⊂ … ⊂ T_{n−1} ⊂ T̂ = S × {1}.
Since no state exists between T and T*, we have T* ∈ T^O and |T* − T| = 1. By Proposition 1, Δ(T, T*) = 1.
(2)
When T = S × {1}:
Take T̃ = S × {0} and T̂ = T. Let T′ = T_{n−1}. The chain is:
T̃ = S × {0} ⊂ T_1 ⊂ T_2 ⊂ … ⊂ T_{n−2} ⊂ T′ ⊂ T = S × {1}.
Since no state exists between T′ and T, we have T ∈ T′^O and |T − T′| = 1. By Proposition 1, Δ(T′, T) = 1.
(3)
When T is an intermediate state (T ≠ S × {0} and T ≠ S × {1}):
First, take T̃ = S × {0} and T̂ = T. Let T′ = T_{n−1}. The chain is:
T̃ = S × {0} ⊂ T_1 ⊂ T_2 ⊂ … ⊂ T_{n−2} ⊂ T′ ⊂ T.
Since no state exists between T′ and T, we have T ∈ T′^O and |T − T′| = 1. By Proposition 1, Δ(T′, T) = 1.
Next, take T̃ = T and T̂ = S × {1}. Let T* = T_1. The chain is:
T ⊂ T* ⊂ T_2 ⊂ … ⊂ T_{n−1} ⊂ T̂ = S × {1}.
Since no state exists between T and T*, we have T* ∈ T^O and |T* − T| = 1. By Proposition 1, Δ(T, T*) = 1.
(Sufficiency) For any T, T′ ∈ C with T ⊂ T′, let T_0 = T. By the three conditions above, there exists T* ∈ T_0^O with |T* − T_0| = 1 and Δ(T_0, T*) = 1. Let T_1 = T*; then T_0 ⊂ T_1 with |T_1 − T_0| = 1 and Δ(T_0, T_1) = 1. If T_1 = T′, we have found the required chain of FC-states. Otherwise, repeat this process until T_n = T′, yielding:
T = T_0 ⊂ T_1 ⊂ T_2 ⊂ … ⊂ T_{n−1} ⊂ T_n = T′,
where T_i ∈ T_{i−1}^O for 1 ≤ i ≤ n, and |T_i − T_{i−1}| = 1, Δ(T_{i−1}, T_i) = 1. Thus, (S, P_s, C) is a CFCS. □
Guided by Proposition 1 and Theorem 1, we design an algorithm to decide whether (S, P_s, C) is a CFCS. We first sort C lexicographically by skill proficiencies and encode each state as an integer vector. We use the Manhattan distance between vectors to quantify level gaps across skills; we verify union-closure using the componentwise maximum and, under union-closure, identify each state's outer fringe T_i^O by locating single-skill increments where the distance equals one. Consistency is rejected on the first violation, either a missing union or an empty outer fringe; otherwise, the structure is accepted as consistent. The complete procedure is given in Algorithm 1.
Algorithm 1 Decide whether (S, P_s, C) is a CFCS
Input: A fuzzy competence structure (S, P_s, C); each P_s is finite and totally ordered.
Output: True if (S, P_s, C) is a CFCS; otherwise False.
 1: m ← |S|
 2: n ← |C|
 3: Sort C increasingly by (T(s_1), T(s_2), …, T(s_m)) in lexicographic order, and relabel the states as T_0, T_1, …, T_{n−1}.
 4: For each s ∈ S, index P_s increasingly as {0, 1, …, |P_s| − 1}.
 5: Map each T ∈ C to an integer vector v_T using these indices.
 6: H ← {v_T : T ∈ C}.
 7: For i = 0 to n − 1 do
 8:     For j = i + 1 to n − 1 do
 9:         u ← coordinatewise_max(v_{T_i}, v_{T_j})
10:         If u ∉ H then
11:             Return False. // T_i ∪ T_j ∉ C
12:         End if
13:     End for
14: End for
15: For i = 0 to n − 2 do // exclude the maximal state T_{n−1}
16:     T_i^O ← ∅
17:     For j = i + 1 to n − 1 do
18:         d ← Σ_{k=1}^{m} |v_{T_j}(k) − v_{T_i}(k)| // Manhattan distance
19:         If d = 1 then
20:             T_i^O ← T_i^O ∪ {T_j} // T_j ∈ T_i^O
21:         End if
22:     End for
23:     If T_i^O = ∅ then
24:         Return False. // some non-maximal state has an empty outer fringe
25:     End if
26: End for
27: If ∃ i ∈ {0, 1, …, n − 2} with T_{n−1} ∈ T_i^O then
28:     Return True. // the maximal state appears in some outer fringe
29: Else
30:     Return False. // the maximal state is not reachable by a 1-step raise
31: End if
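Algorithm 1 can be condensed into a short Python sketch; the tuple-of-indices encoding mirrors steps 3–5, while the helper names and the toy inputs are ours:

```python
from itertools import combinations

def is_cfcs(states):
    """Decide whether a family of FC-states, given as equal-length tuples of
    integer level indices (as in steps 3-5 of Algorithm 1), forms a CFCS."""
    vecs = sorted(states)          # lexicographic order, as in Algorithm 1
    H = set(vecs)
    n = len(vecs)
    # Union-closure: the componentwise max of every pair must belong to H.
    for u, v in combinations(vecs, 2):
        if tuple(max(a, b) for a, b in zip(u, v)) not in H:
            return False
    # Every non-maximal state needs a nonempty outer fringe:
    # some later state at Manhattan distance exactly 1.
    fringe_hits = set()
    for i in range(n - 1):
        fringe = [j for j in range(i + 1, n)
                  if sum(abs(a - b) for a, b in zip(vecs[i], vecs[j])) == 1]
        if not fringe:
            return False
        fringe_hits.update(fringe)
    # The maximal state must appear in some outer fringe.
    return n - 1 in fringe_hits

# A chain 000 -> 100 -> 110 -> 111 over three two-level skills:
chain = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]
print(is_cfcs(chain))                               # True
print(is_cfcs([(0, 0, 0), (1, 1, 0), (1, 1, 1)]))  # False
```

The second call fails because (0, 0, 0) has no neighbor at distance 1, matching the rejection at line 24 of Algorithm 1.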
Example 4.
Let (S, P_s, C) be a fuzzy competence structure. Let S = {s_1, s_2, s_3} with P_{s_1} = P_{s_2} = {0, 0.5, 1} and P_{s_3} = {0, 0.3, 0.7, 1}. The FC-states in C are listed in Table 5.
Table 5. The FC-states in C .
We now apply Algorithm 1 to determine whether (S, P_s, C) is a CFCS.
First, sort C lexicographically by (T(s_1), T(s_2), T(s_3)) in ascending order and relabel the states as T_0, T_1, …, T_{14} in this new order. Each proficiency level set is indexed in ascending order:
P_{s_1} = P_{s_2} = {0, 0.5, 1} → {0, 1, 2}, P_{s_3} = {0, 0.3, 0.7, 1} → {0, 1, 2, 3}.
Replacing each proficiency in a state with its index yields an integer vector v_T ∈ {0, 1, 2} × {0, 1, 2} × {0, 1, 2, 3}. The sorted states and their integer vector encodings are presented in Table 6.
Table 6. Sorted FC-states and corresponding integer vector encodings.
Second, check union-closure. For each 0 ≤ i < j ≤ 14, compute u = coordinatewise_max(v_{T_i}, v_{T_j}) and verify u ∈ H. All pairs satisfy this property.
Third, compute the outer fringe T_i^O for each T_i with i ≤ 13. A state T_j belongs to T_i^O if j > i and the Manhattan distance between v_{T_i} and v_{T_j} equals 1. We illustrate the calculation process for T_0 as an example. Based on the sorted order in Table 6, each FC-state is mapped into its integer vector representation v_{T_i}. We then compute the Manhattan distance between v_{T_0} and each v_{T_j} (j > 0). States with a Manhattan distance equal to 1 constitute T_0^O. The results are presented in Table 7, from which it is clear that T_0^O = {T_1, T_2}.
Table 7. Manhattan distance between v_{T_0} and v_{T_j} (j > 0).
The procedure for computing the outer fringes of the other FC-states follows the same steps as for T_0. According to the definition of the outer fringe, the maximal state T_{14} has no outer fringe; thus, in Table 8 its outer fringe is denoted by ∅. The results for all states are summarized in Table 8.
Table 8. Outer fringes of T_i.
Finally, verify that the maximal state T_{14} appears in at least one outer fringe. Indeed, T_{14} ∈ T_{12}^O and T_{14} ∈ T_{13}^O.
Since union-closure holds, every outer fringe except that of the maximal state is non-empty, and the maximal state T_{14} itself appears in the outer fringes of other FC-states. Therefore, Algorithm 1 confirms that (S, P_s, C) is a CFCS.
Example 5.
Based on the fuzzy competence structure (S, P_s, C) in Example 4, consider the sorted FC-states given in Table 6. Remove T_2 and T_3 from C. It is straightforward to verify that the remaining FC-states still satisfy union-closure.
Following Algorithm 1, we compute the outer fringes. As an illustration, we examine T_1. Using the integer vectors in Table 6, we calculate the Manhattan distance between v_{T_1} = (0, 0, 1) and v_{T_j} for j = 4, 5, …, 14. The results are listed in Table 9. Since no state lies at distance 1 from T_1, we obtain T_1^O = ∅. Hence, despite union-closure, the modified (S, P_s, C) is not a CFCS, and it is unnecessary to compute the outer fringes of the remaining states.
Table 9. Manhattan distance between v_{T_1} and v_{T_j} (j = 4, 5, …, 14).
We proceed to investigate the construction of CFCSs. As a starting point, we consider a special case in which C contains all possible FC-states, i.e., C = P_s^S.
Proposition 2.
Let (S, P_s, C) be a fuzzy competence structure. If C = P_s^S, then (S, P_s, C) is a CFCS.
Proof. 
By Definition 2, for any T_1, T_2 ∈ C, the union is defined so that, for each skill s ∈ S, its level in T_1 ∪ T_2 equals the higher of its levels in T_1 and T_2:
(T_1 ∪ T_2)(s) = max{T_1(s), T_2(s)}, s ∈ S.
Since C contains all possible states, T_1 ∪ T_2 ∈ C. Hence C is union-closed, and by Definition 4, (S, P_s, C) is a fuzzy competence space.
If T = S × {0}, pick any skill s* ∈ S. Let T* be obtained from T by increasing the level of s* to its immediate successor in P_{s*}, keeping all other skills unchanged. Then T* ∈ C, |T* − T| = 1, and Δ(T, T*) = 1.
If T = S × {1}, pick any skill s′ ∈ S. Let T′ be obtained from T by decreasing the level of s′ to its immediate predecessor in P_{s′}, keeping all other skills unchanged. Then T′ ∈ C, T ∈ T′^O, |T − T′| = 1, and Δ(T′, T) = 1.
If T ≠ S × {0} and T ≠ S × {1}, choose a skill s* whose level in T is not maximal and a skill s′ whose level in T is not minimal. Let T* be obtained by increasing the level of s* by one (others unchanged) and T′ by decreasing the level of s′ by one (others unchanged). Then T*, T′ ∈ C, T ∈ T′^O, T* ∈ T^O, |T − T′| = 1, Δ(T′, T) = 1, |T* − T| = 1 and Δ(T, T*) = 1.
Thus, all three conditions in Theorem 1 are satisfied. Therefore (S, P_s, C) is a CFCS. □
Example 6.
Consider (S, P_s, C) with S = {s_1, s_2, s_3} and P_{s_1} = P_{s_2} = {0, 0.5, 1}, P_{s_3} = {0, 0.3, 0.7, 1}, as in Example 4. Let C = P_s^S, i.e., C contains all FC-states over (S, P_s). In accordance with Algorithm 1, we sort the states lexicographically by (T(s_1), T(s_2), T(s_3)) in ascending order and index them as T_0, T_1, …, T_{35}. The complete list is given in Table 10.
Table 10. The FC-states in C = P_s^S.
Since C = P_s^S contains all possible FC-states, Proposition 2 applies directly: (S, P_s, C) is a CFCS.
Proposition 3.
Let (S, P_s, C) with C = P_s^S. After sorting C lexicographically by (T(s_1), T(s_2), …, T(s_{|S|})), relabel the states as T_0, T_1, …, T_{n−1} with n = |C|, so that T_0 = S × {0} and T_{n−1} = S × {1}. Construct a sequence T′_0, T′_1, …, T′_m as follows:
  • T′_0 = T_0;
  • for each i ≥ 0, let T′_{i+1} be the last element of T′_i^O according to the insertion order specified in Algorithm 1;
  • stop at the first m with T′_m = T_{n−1}.
Set C′ = {T′_0, T′_1, …, T′_m}. Then (S, P_s, C′) is a CFCS.
Proof. 
By Definition 8, T′_{i+1} ∈ T′_i^O implies T′_i ⊂ T′_{i+1}, and the change occurs on exactly one skill by one proficiency level; hence
T′_0 ⊂ T′_1 ⊂ … ⊂ T′_m, Δ(T′_i, T′_{i+1}) = 1 (0 ≤ i < m).
The rule "take the last element of T′_i^O in Algorithm 1's order" ensures a deterministic choice at each step, so the chain is uniquely determined. Because C′ is a chain under ⊆, for any i ≤ j we have T′_i ∪ T′_j = T′_j ∈ C′; therefore C′ is union-closed and (S, P_s, C′) is a fuzzy competence space. Finally, let U, V ∈ C′ with U ⊂ V. Writing U = T′_i and V = T′_j with i < j, the subsequence T′_i ⊂ T′_{i+1} ⊂ … ⊂ T′_j satisfies Definition 5: for each i < k ≤ j, |T′_k − T′_{k−1}| = 1 and Δ(T′_{k−1}, T′_k) = 1. Hence (S, P_s, C′) is a CFCS. □
Let (S, P_s, C) with C = P_s^S, and let C′ = {T′_0, …, T′_m} be the chain constructed in Proposition 3 by repeatedly taking, at each step, the last element of the current outer fringe in the order induced by Algorithm 1. Each step changes exactly one skill by exactly one proficiency level, and the state strictly increases under ⊆. Denote
D = Σ_{s ∈ S} (|P_s| − 1).
From T′_0 = S × {0} to T′_m = S × {1}, any chain in which each step modifies exactly one skill by one proficiency level must perform |P_s| − 1 changes for each skill s, so its length is at least D. The construction in Proposition 3 exactly reaches this bound with m = D; hence |C′| = D + 1. Thus, C′ contains the minimal possible number of FC-states among all CFCSs on (S, P_s).
It should be noted that C′ is a maximal chain under ⊆ but not necessarily unique: if an outer fringe contains multiple elements, selecting a different one (instead of the "last" element) may yield a different chain; however, all such chains have the same cardinality D + 1.
According to Proposition 3 and the above counting argument, the following corollary holds.
Corollary 1.
Let (S, P_s, C) be a CFCS with S × {0}, S × {1} ∈ C. Applying the selection rule of Proposition 3 within C yields a subset C′ ⊆ C that is also a CFCS and satisfies
|C′| = D + 1 ≤ |C|.
If |C| > |C′|, this gives a strict reduction; otherwise, C is already minimal. Different admissible choices at outer fringes may lead to different C′, but all such subsets have the same minimal size D + 1.
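The counting behind Corollary 1 can be checked numerically. A minimal Python sketch, using the proficiency sets of Example 4 (the variable names are illustrative):

```python
# Minimal-size check for a CFCS: every one-level step changes exactly one
# skill, so a chain from S x {0} to S x {1} needs |P_s| - 1 steps per skill.
P = {"s1": [0, 0.5, 1], "s2": [0, 0.5, 1], "s3": [0, 0.3, 0.7, 1]}

D = sum(len(levels) - 1 for levels in P.values())   # D = 2 + 2 + 3 = 7
minimal_size = D + 1                                # minimal CFCS has 8 states

full_size = 1
for levels in P.values():
    full_size *= len(levels)                        # |P_s^S| = 3 * 3 * 4 = 36
```

For the full space of Example 6 this gives a minimal chain of 8 states out of 36, consistent with the reduction ratios reported later for larger datasets.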
Building on Proposition 3 and the preliminaries in Algorithm 1, we now introduce a constructive procedure that generates, for a given skill set S and the associated proficiency-level sets {P_s : s ∈ S}, a CFCS with the minimal number of states, as shown in Algorithm 2. The algorithm first enumerates all FC-states in P_s^S in lexicographic order and then extracts a deterministic maximal chain by iteratively selecting the last element from the current outer fringe.
Algorithm 2 Construct a minimal CFCS (S, P_s, C′)
Input: A skill set S and proficiency-level sets {P_s : s ∈ S}; each P_s is finite and totally ordered.
Output: A CFCS (S, P_s, C′) with the minimal number of states.
 1: m ← |S|
 2: Generate all FC-states over P_s^S; sort lexicographically by (T(s_1), …, T(s_m)); relabel as T_0, …, T_{n−1}; set C = {T_0, …, T_{n−1}}.
 3: For each s ∈ S, index P_s increasingly as {0, 1, …, |P_s| − 1}.
 4: Map each T_i ∈ C to an integer vector v(T_i) using these indices.
 5: C′ ← {T_0}; i ← 0
 6: While T_i ≠ T_{n−1} do
 7:    O ← ∅ // outer fringe of T_i
 8:    For j = i + 1 to n − 1 do
 9:        d ← Σ_{k=1}^{m} |v(T_j)[k] − v(T_i)[k]| // Manhattan distance
10:        If d = 1 then
11:            O ← O ∪ {T_j}
12:        End if
13:    End for
14:    Let T_next be the last element of O, ordered lexicographically as in Step 2.
15:    C′ ← C′ ∪ {T_next}
16:    i ← index of T_next among T_0, …, T_{n−1}
17: End while
18: Return (S, P_s, C′)
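Under the assumption that FC-states are represented as index vectors (Steps 3–4), Algorithm 2 admits a direct Python sketch; the `levels` dictionary below reuses Example 4's proficiency sets purely for illustration:

```python
from itertools import product

def minimal_cfcs(levels):
    """Sketch of Algorithm 2: build a minimal consistent chain of FC-states.

    `levels` maps each skill to its sorted proficiency list; states are
    handled as index vectors into those lists (Steps 3-4 of the algorithm).
    """
    tops = tuple(len(p) - 1 for p in levels.values())
    # Step 2: all FC-states in ascending lexicographic order.
    states = sorted(product(*(range(t + 1) for t in tops)))
    chain, cur = [states[0]], states[0]          # start at the all-zero state
    while cur != tops:                           # stop at the all-max state
        # Steps 7-13: outer fringe = later states at Manhattan distance 1.
        fringe = [w for w in states
                  if w > cur and sum(abs(a - b) for a, b in zip(w, cur)) == 1]
        cur = fringe[-1]                         # Step 14: take the last element
        chain.append(cur)
    return chain

# Example 4's proficiency sets.
levels = {"s1": [0, 0.5, 1], "s2": [0, 0.5, 1], "s3": [0, 0.3, 0.7, 1]}
chain = minimal_cfcs(levels)
assert len(chain) == 8                           # D + 1 with D = 2 + 2 + 3
```

Each step raises exactly one index by one, so the returned chain is the deterministic minimal CFCS of Proposition 3.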
Example 7.
We apply Algorithm 2 to the same input as in Example 4: let S = {s_1, s_2, s_3}, with P_s1 = P_s2 = {0, 0.5, 1} and P_s3 = {0, 0.3, 0.7, 1}. The full set P_s^S of FC-states, sorted lexicographically as in Table 10 of Example 6, is denoted by C = {T_0, …, T_35}.
Table 11 reports, for each sorted state T_i, its outer fringe (T_i)^O computed as in Algorithm 2 (Manhattan distance equal to one in the indexed grid).
Table 11. Outer fringes of T_i.
Using Table 11 and the selection rule of Algorithm 2 (at each step, select the last element in lexicographic order), we obtain the minimal chain C′. Table 12 lists the resulting FC-states.
Table 12. FC-states of (S, P_s, C′).
Whereas Algorithm 2 constructs a minimal CFCS by enumerating all states in P_s^S, Algorithm 3 reduces a given (S, P_s, C) to a minimal one while staying within the provided family C. To improve robustness, it first verifies consistency via Algorithm 1 and then reuses the outer-fringe traversal restricted to the given states.
Algorithm 3 Reduction of a CFCS
Input: A fuzzy competence structure (S, P_s, C); each P_s is finite and totally ordered.
Output: If inconsistent, report and stop. If consistent, return a minimal (S, P_s, C′) with |C|, |C′|, and the ratio |C′|/|C|.
 1: m ← |S|; n ← |C|
 2: // Robustness check using Algorithm 1.
 3: If Alg1(S, P_s, C) = False then // Alg1 denotes Algorithm 1
 4:    Report an error: the input is not a CFCS.
 5:    Return without reduction
 6: End if
 7: Sort C lexicographically by (T(s_1), …, T(s_m)); relabel as T_0, …, T_{n−1} with T_0 = S × {0}, T_{n−1} = S × {1}
 8: For each s ∈ S, index P_s increasingly as {0, 1, …, |P_s| − 1}.
 9: Map each T_i ∈ C to an integer vector v(T_i) using these indices.
10: C′ ← {T_0}; i ← 0
11: While T_i ≠ T_{n−1} do
12:    O ← ∅ // outer fringe of T_i in C
13:    For j = i + 1 to n − 1 do
14:        d ← Σ_{k=1}^{m} |v(T_j)[k] − v(T_i)[k]| // Manhattan distance
15:        If d = 1 then
16:            O ← O ∪ {T_j}
17:        End if
18:    End for
19:    Let T_next be the last element of O, ordered lexicographically as in Step 7.
20:    C′ ← C′ ∪ {T_next}
21:    i ← index of T_next among T_0, …, T_{n−1}
22: End while
23: r ← |C′|; q ← r / n
24: Return (S, P_s, C′), n, r, q.
Example 8.
Consider the CFCS (S, P_s, C) of Example 4, where S = {s_1, s_2, s_3} with P_s1 = P_s2 = {0, 0.5, 1} and P_s3 = {0, 0.3, 0.7, 1}. Following Algorithm 3, we reduce C by traversing the sorted states in Table 6 and, at each step, taking the last element of the current outer fringe given in Table 8. Starting from T_0, the chosen sequence is:
T_0 → T_2 → T_3 → T_8 → T_9 → T_11 → T_13 → T_14.
This yields a minimal consistent fuzzy competence subspace (S, P_s, C′) with C′ = {T′_0, T′_1, …, T′_7}. The states are listed in Table 13.
Table 13. The FC-states in (S, P_s, C′) (reduction of Example 4).
Remark 2.
Example 4 has |C| = 15 sorted states T_0, T_1, …, T_14. The reduction above keeps |C′| = 8 states, matching the theoretical minimum D + 1 with D = Σ_{s∈S}(|P_s| − 1) = 2 + 2 + 3 = 7. Thus |C′|/|C| = 8/15.
This section established a constructive route to CFCSs and to their minimal representations. The central result is Theorem 1, which states a necessary and sufficient condition for consistency in terms of the outer fringes of FC-states. Building on Theorem 1, Algorithm 1 decides whether a given fuzzy competence structure (S, P_s, C) is a CFCS. On the same foundation, Algorithm 2 constructs a CFCS with the minimal number of states by extracting a lexicographic chain for the given S and the proficiency-level sets {P_s : s ∈ S}. Algorithm 3 reduces any given CFCS to a minimal one within the supplied C. The next section designs learning path recommendation algorithms based on the CFCSs developed in this section.

4. Learning Path Recommendation

This section develops learning path recommendation in two layers:
(1)
From the competence perspective (without items), we enumerate all gradual paths in a CFCS;
(2)
Under a disjunctive fuzzy skill mapping (Q, S, τ), we annotate each competence state T with its induced knowledge state m(T) and enumerate all gradual and effective paths. The second layer extends [41] from finding a single path to listing all paths from ∅ to Q.

4.1. Learning Path Recommendation from the Competence Perspective

Given a CFCS (S, P_s, C), any outer fringe step increases exactly one skill by one proficiency level, and chaining such steps yields gradual paths from S × {0} to S × {1}. Algorithm 4 constructs a bottom-up tree whose nodes are FC-states and whose edges are one-level increments, and then performs a depth-first search (DFS) to output all gradual paths from ∅ to Q in competence space terms (i.e., as sequences T_0 ⊂ T_1 ⊂ ⋯ ⊂ T_n).
Algorithm 4 Enumerate gradual learning paths in a CFCS
Input: A fuzzy competence structure (S, P_s, C).
Output: A bottom-up learning path tree rooted at T_0 and the full list of paths from T_0 to T_{n−1}.
 1: m ← |S|
 2: n ← |C|
 3: If Alg1(S, P_s, C) = False then // Alg1 denotes Algorithm 1
 4:    Return "(S, P_s, C) is not a CFCS."
 5: End if
 6: Sort C lexicographically by (T(s_1), …, T(s_m)); relabel as T_0, …, T_{n−1}
 7: For each s ∈ S, index P_s as {0, 1, …, |P_s| − 1}
 8: Map each T_i to an integer vector v(T_i)
 9: For i = 0 to n − 2 do
10:    O_i ← ∅ // outer fringe of T_i
11:    For j = i + 1 to n − 1 do
12:        d ← Σ_{k=1}^{m} |v(T_j)[k] − v(T_i)[k]| // Manhattan distance
13:        If d = 1 then
14:            O_i ← O_i ∪ {T_j}
15:        End if
16:    End for
17: End for
18: Define procedure DFS(T):
19:    If T = T_{n−1} then
20:        Append current stack to Paths; Return
21:    End if
22:    For each U ∈ O_{index(T)} do // index(T): the index of T after Step 6
23:        Push U; DFS(U); Pop
24:    End for
25: End procedure
26: Build edge set A = {(T_i, T_j) : T_j ∈ O_i}
27: Paths ← ∅; stack ← ⟨T_0⟩
28: Call DFS(T_0)
29: Output the bottom-up tree (A, T_0) and the list Paths
30: Return n, Tree, Paths
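The fringe construction and DFS above can be sketched compactly in Python. Applied to the full 36-state space of Example 6, the enumeration yields 7!/(2!·2!·3!) = 210 gradual paths (a minimal chain such as that of Example 8 would instead yield exactly one):

```python
from itertools import product

def gradual_paths(states):
    """List every gradual path from the bottom to the top state, where each
    edge raises exactly one skill by one level -- a sketch of Algorithm 4."""
    states = sorted(states)
    bottom, top = states[0], states[-1]
    # Outer fringes double as the edge set of the bottom-up path tree.
    fringe = {v: [w for w in states
                  if w > v and sum(abs(a - b) for a, b in zip(v, w)) == 1]
              for v in states}
    paths, stack = [], [bottom]

    def dfs(v):
        if v == top:
            paths.append(tuple(stack))
            return
        for w in fringe[v]:
            stack.append(w)
            dfs(w)
            stack.pop()

    dfs(bottom)
    return paths

# Full FC-state space of Example 6 as index vectors: a 3 x 3 x 4 grid.
full = list(product(range(3), range(3), range(4)))
paths = gradual_paths(full)
assert len(paths) == 210        # = 7! / (2! * 2! * 3!)
```

When the input is restricted to a smaller CFCS (e.g., the 15 states of Example 4), the same routine enumerates exactly the paths listed in Table 14.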
Example 9.
Applying Algorithm 4 to the CFCS of Example 4 yields the gradual learning path tree in Figure 1 (bottom-up). A label T_i(a, b, c) denotes state T_i with the original proficiency levels (a, b, c) on skills s_1, s_2, s_3. The bold nodes and arrows indicate the deterministic minimal chain produced by Algorithm 3, which always selects the last element of the current outer fringe. Table 14 reports all gradual learning paths from T_0 to T_14.
Figure 1. Gradual learning path tree (bottom-up).
Table 14. All gradual learning paths from T 0 to T 14 .

4.2. Gradual and Effective Learning Paths

We now consider learning path recommendation when a fuzzy skill mapping (Q, S, τ) is given in addition to a CFCS (S, P_s, C). By Definition 6, under the disjunctive model, each FC-state T is mapped to the knowledge state m(T) = {q ∈ Q : ∃s ∈ S, 0 < τ(q, s) ≤ T(s)}, and the induced family K = {m(T) : T ∈ C} forms the knowledge structure associated with (S, P_s, C) and (Q, S, τ).
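The disjunctive problem function m can be sketched in a few lines of Python; the mapping `tau` below is purely illustrative (it is not the mapping of Table 16):

```python
def induced_state(T, tau):
    """m(T) under the disjunctive model: q is solvable iff some skill s
    satisfies 0 < tau[q][s] <= T[s]."""
    return {q for q, req in tau.items()
            if any(0 < lvl <= T[s] for s, lvl in req.items())}

# An illustrative fuzzy skill mapping (NOT the mapping of Table 16).
tau = {"q1": {"s1": 0.5, "s2": 0.0, "s3": 0.0},
       "q2": {"s1": 0.0, "s2": 1.0, "s3": 0.3},
       "q3": {"s1": 1.0, "s2": 0.0, "s3": 0.7}}

T = {"s1": 0.5, "s2": 0.0, "s3": 0.3}
assert induced_state(T, tau) == {"q1", "q2"}   # q1 via s1, q2 via s3
```

Monotonicity of m is immediate from this definition: enlarging T can only add witnessing skills s, never remove them.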
Definition 9.
Let (S, P_s, C) be a CFCS and (Q, S, τ) a fuzzy skill mapping. K is the knowledge space induced by (S, P_s, C) and (Q, S, τ) under the disjunctive model. For any K, K′ ∈ K with K ⊂ K′, we say that there exists a gradual and effective learning path from K to K′ if there exist T_0, T_1, …, T_n ∈ C such that
T_0 ⊂ T_1 ⊂ ⋯ ⊂ T_n,  m(T_0) = K,  m(T_n) = K′,
and for every i = 1, …, n one has
m(T_{i−1}) ⊆ m(T_i)  and  Δ(T_{i−1}, T_i) = 1.
That is, each step raises exactly one skill by one proficiency level while the induced knowledge state is non-decreasing.
Proposition 1 in [41] ensures monotonicity of the disjunctive problem function m: whenever T_1 ⊆ T_2 in C, one has m(T_1) ⊆ m(T_2). Therefore, every upward edge in the competence tree (which raises exactly one skill by one proficiency level) induces a non-decreasing transition of knowledge states. Building on Algorithm 4, Algorithm 5 augments each competence state T with its corresponding knowledge state m(T), producing a bottom-up knowledge-labeled gradual learning path tree and allowing us to enumerate all gradual and effective learning paths from ∅ to Q.
Algorithm 5 Annotate the gradual learning path tree with induced knowledge states and list all gradual and effective learning paths from ∅ to Q under the disjunctive model
Input: A fuzzy competence structure (S, P_s, C) and a fuzzy skill mapping (Q, S, τ).
Output: A bottom-up knowledge-labeled tree where each node carries (T, m(T)), and the full list of gradual and effective paths from ∅ to Q.
 1: // Build a bottom-up learning path tree and enumerate all gradual learning paths
 2: (Tree_C, Paths_C) ← Alg4(S, P_s, C) // Alg4 denotes Algorithm 4
 3: // Compute induced knowledge states m(T) under the disjunctive model
 4: For each node T in Tree_C do
 5:    m[T] ← {q ∈ Q : ∃s ∈ S, 0 < τ(q, s) ≤ T(s)}
 6: End for
 7: // Create knowledge-labeled tree
 8: Tree_CK ← Tree_C with each node T relabeled as (T, m[T])
 9: // Enumerate all gradual and effective paths
10: Paths_CK ← ∅
11: For each competence path p = ⟨T_0, T_1, …, T_{n−1}⟩ in Paths_C do
12:    If m[T_0] ⊆ m[T_1] ⊆ ⋯ ⊆ m[T_{n−1}] then // non-decreasing knowledge states
13:        Append ⟨(T_0, m[T_0]), (T_1, m[T_1]), …, (T_{n−1}, m[T_{n−1}])⟩ to Paths_CK
14:    End if
15: End for
16: Return Tree_CK, Paths_CK
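Steps 3–15 reduce to labeling each path with m(T) and keeping the monotone ones. A Python sketch, with an illustrative single-skill mapping (again, not the mapping of Table 16):

```python
def effective_paths(paths, tau):
    """Annotate each competence path with induced knowledge states and keep
    the paths whose knowledge states are non-decreasing -- a sketch of
    Algorithm 5. States are dicts skill -> proficiency level."""
    def m(T):  # disjunctive model
        return frozenset(q for q, req in tau.items()
                         if any(0 < lvl <= T[s] for s, lvl in req.items()))
    out = []
    for p in paths:
        labeled = [(T, m(T)) for T in p]
        if all(a <= b for (_, a), (_, b) in zip(labeled, labeled[1:])):
            out.append(labeled)
    return out

# Illustrative two-step path under a hypothetical one-skill mapping.
tau = {"q1": {"s1": 0.5}, "q2": {"s1": 1.0}}
path = [{"s1": 0.0}, {"s1": 0.5}, {"s1": 1.0}]
kept = effective_paths([path], tau)
assert len(kept) == 1       # monotone knowledge states, hence effective
```

Under the disjunctive model the monotonicity filter always passes (Proposition 1 in [41]); keeping the check makes the sketch robust to other, non-monotone problem functions.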
Example 10.
Consider the CFCS (S, P_s, C) in Example 4 after lexicographically sorting the FC-states by skill proficiency. For convenience, the states in Table 6 are restated in Table 15. Under the fuzzy skill mapping (Q, S, τ) given in Table 16, the knowledge state induced by each T ∈ C according to m(T) = {q ∈ Q : ∃s ∈ S, 0 < τ(q, s) ≤ T(s)} is reported in Table 17. The bottom-up gradual and effective learning path tree produced by Algorithm 5 is shown in Figure 2. The bold path in Figure 2 corresponds to the bold path in Figure 1.
Table 15. Sorted FC-states in Example 4.
Table 16. Fuzzy skill mapping.
Table 17. FC-states and their induced knowledge states.
Figure 2. Gradual learning path tree under fuzzy skill mapping (bottom-up).
Relative to Figure 1, each node additionally displays the knowledge state m(T) induced by the mapping (Q, S, τ). The bold nodes and edges form one gradual and effective learning path from ∅ to the full set Q, corresponding to the bold path in Figure 1: at every step the FC-state raises exactly one skill by one proficiency level, and the associated knowledge state also increases (or at least does not decrease). This reflects that, as a learner follows the path and upgrades skills step by step, the learner's knowledge state grows accordingly. Other gradual and effective paths may also be chosen, capturing the fact that different learners can reasonably progress along different improvement routes, consistent with what is observed in real learning cohorts.

5. Simulation

To evaluate the proposed algorithms, we conducted simulation experiments in three parts:
(1)
Constructing a minimal CFCS (Algorithm 2);
(2)
Reducing a CFCS (Algorithm 3);
(3)
Searching gradual and effective learning paths (Algorithm 5).
All experiments ran on an Intel® Core™ i7-9700 @ 3.00 GHz with 16 GB RAM under Windows 10, and the simulator was implemented in Python 3.13.2.

5.1. Simulation on Constructing Minimal CFCSs

Algorithm 2 constructed a minimal CFCS. We used the skill sets S and proficiency sets P s from the first group of ten datasets in Ref. [41]. Basic information is summarized in Table 18.
Table 18. Basic information of 10 datasets.
The simulation outcomes are summarized in Table 19.
Table 19. Simulation results for Algorithm 2.
Figure 3 and Figure 4 jointly illustrate scalability and timing trends across the ten datasets. Figure 3 shows that runtime increased roughly with D + 1 at this scale, while Figure 4 highlights that the minimal space grew linearly in D + 1 even when the full Cartesian product size ∏_{s∈S} |P_s| grew rapidly, explaining why Algorithm 2 remained efficient despite large underlying state spaces.
Figure 3. Runtime versus minimal consistent states count.
Figure 4. Minimal consistent states count versus all possible states count.
Key observations from the results are as follows:
  • Minimality was attained. For every dataset the number of constructed states equaled D + 1, matching the theoretical lower bound, where D = Σ_{s∈S}(|P_s| − 1); hence the constructed fuzzy competence spaces were both consistent and minimal.
  • Favorable scalability. Although ∏_{s∈S} |P_s| can be large (e.g., 6561 for d10), the minimal space size depends primarily on D + 1, which grows much more slowly. This property benefits downstream reduction (Algorithm 3) and path search (Algorithm 5).
  • Low computational cost. All runs finished within 0.04 s per dataset; small variations stemmed from OS-level I/O and caching, not from algorithmic complexity.
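To make the scalability observation concrete: the reported full-space size 6561 for d10 equals 3^8, so assuming, purely for illustration, a configuration of eight skills with three proficiency levels each (the actual d10 configuration is given in Ref. [41]), the gap between the full space and the minimal CFCS is:

```python
# Gap between the full FC-state space and the minimal CFCS, assuming (for
# illustration only) 8 skills with 3 proficiency levels each, whose full
# space has 3**8 = 6561 states -- matching the size reported for d10.
num_skills, levels_per_skill = 8, 3

full_size = levels_per_skill ** num_skills        # exponential in |S|
D = num_skills * (levels_per_skill - 1)           # linear in |S|
print(full_size, D + 1)                           # 6561 vs. 17
```

The minimal space grows linearly with the number of skill levels while the full product space grows exponentially, which is exactly the behavior plotted in Figure 4.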

5.2. Simulation on Reducing a CFCS

Algorithm 3 reduced a CFCS. We generated the full fuzzy competence space for each dataset using S and P s from the first group of ten datasets in Ref. [41]. Basic information is listed in Table 20.
Table 20. Basic information of 10 datasets.
The simulation outcomes are summarized in Table 21 (here "Removed (%)" is defined as 100 × (1 − reduced/original)).
Table 21. Reduction results.
Figure 5 and Figure 6 together characterize Algorithm 3's behavior: Figure 5 shows that the removed fraction increased with |C| and rapidly approached 100% for large spaces (e.g., 99.74% for d10), because the reduced space contained only D + 1 states while |C| = ∏_{s∈S} |P_s| grew combinatorially. Figure 6 indicates that runtime increased with dataset size and was dominated by the verification stage for large |C|.
Figure 5. Reduction Ratio by Dataset.
Figure 6. Runtime versus reduced states count.
Key observations from the results are as follows:
  • Near-total pruning for large spaces. As |C| increases, the chain size D + 1 remains small relative to ∏_{s∈S} |P_s|, so the removed percentage grows and can exceed 99% (d08–d10).
  • Moderate pruning for small spaces. When |C| is modest, a larger fraction of states may remain in the minimal chain, so the removed percentage is smaller (e.g., 44.444% for d01).
  • Verification dominates runtime at scale. Consistency verification checks union-closure and outer fringes and performs a reachability test; a straightforward implementation touches many pairwise unions (worst-case O(|C|² · |S|)) and fringe checks (O(|C| · |S|)), which explains the very large "Verify time (s)" for d10 and why it dictates total runtime in Figure 6. Engineering optimizations (memoized maxima, sparse adjacency, early stopping) can reduce this overhead without altering the reduction outcome.

5.3. Simulation on Searching Gradual and Effective Learning Paths

We implemented the simulator for Algorithm 5. Using the data in Example 10 as input, the resulting bottom-up gradual and effective learning path tree is shown in Figure 7. It matches Figure 2 but is rendered in a simplified style; since paths are bottom-up, arrowheads are omitted for readability. We then evaluated the algorithm on the first group of ten datasets from Ref. [41]; the dataset configuration is summarized in Table 22 (details follow Ref. [41]).
Figure 7. Bottom-up gradual and effective learning path tree.
Table 22. Basic information of 10 datasets.
Table 23 reports the simulation results of Algorithm 5. In this table, |K| denotes the number of knowledge states jointly induced by (S, P_s, C) and the fuzzy skill mapping (Q, S, τ), "nodes" and "edges" are the counts in the gradual and effective learning path tree, "paths" is the number of bottom-up gradual and effective paths from ∅ to Q, "verify(s)" is the time spent on consistency checking, and "total(s)" is the total runtime.
Table 23. Simulation results for Algorithm 5.
For ease of exposition, we refer to Algorithm 5 as Algorithm A and the method in Ref. [41] as Algorithm B. For comparison, Table 24 lists the results for Algorithm B. Note that Algorithm B finds a single gradual path from ∅ to Q, whereas Algorithm A enumerates all such paths and represents them as a tree.
Table 24. Simulation results from Ref. [41] (Algorithm B).
Figure 8 plots, for each dataset, the verification-time ratio ρ = (verification time of Algorithm B) / (verification time of Algorithm A), which ranges from 3.88 (d01) to 5.11 (d04), averaging about 4.9.
Figure 8. Verification time ratio per dataset.
Several salient observations emerge:
  • For Algorithm A, as |C| grew, the numbers of nodes and edges scaled roughly with |C|, but the number of paths increased dramatically, from 6 (d01) to 8.17 × 10^10 (d10). This surge reflects the combinatorial recombination of single-skill increments across levels, so the total runtime grew faster than linearly in |C|.
  • For verifying a CFCS, the empirical ratios in Figure 8 show that Algorithm A was 3.9–5.1 times faster than Algorithm B across all datasets (on average, about a 4.9-fold speedup), confirming a substantial improvement.
  • Algorithm B returned one gradual path, whereas Algorithm A enumerated all gradual and effective paths and organized them as a tree, enabling comprehensive recommendation, coverage analysis, and fine-grained visualization; the two total runtimes therefore serve different purposes and are not directly comparable.

6. Conclusions

This paper develops a complete pipeline for constructing, verifying, reducing, and exploiting CFCSs for learning path recommendation. We first derive necessary and sufficient conditions for consistency and implement a fast verification algorithm (Algorithm 1). Building on this, we present two constructive procedures: Algorithm 2 generates a minimal CFCS directly from a given skill set and proficiency grids, and Algorithm 3 reduces any given CFCS to a minimal subspace while preserving consistency. For recommendation, Algorithm 4 enumerates all gradual paths in a competence space, and Algorithm 5 augments the path tree with knowledge states under a disjunctive fuzzy skill mapping, thereby enumerating all gradual and effective learning paths from ∅ to Q. Simulations across ten benchmark datasets showed that Algorithm 2 attained the theoretical lower bound (D + 1) on the number of states, Algorithm 3 removed over 99% of states in large instances, and our verification runs were 3.9–5.1 times faster than a published baseline with the same inputs. These results suggest direct value for learning analytics, intelligent testing, and formative assessment. Minimal consistent spaces provide compact learner models; the knowledge-labeled path tree supports next-step guidance, multi-step planning, and coverage analysis; and the efficient verifier enables frequent reassessment as evidence accumulates. In practice, these algorithms can be packaged as a reusable library that exposes stable APIs for the verifier, the constructor/reducer, and the path engine, allowing developers of learning assessment systems and intelligent testing systems to invoke them directly. In adaptive learning environments, the path engine delivers next-step guidance, and the verifier supports frequent reassessment as evidence accumulates. Figure 9 summarizes the overall CFCS research framework, covering foundations, algorithms, simulations, and future research directions.
Figure 9. CFCS research framework.
Looking forward, several directions appear promising.
(1)
While this work constructs the minimal consistent space, other non-minimal yet structured consistent spaces (e.g., spaces optimized for coverage balance, robustness, or prerequisite density) merit study, together with algorithms that trade off compactness against expressiveness.
(2)
Building on the enumeration of all gradual and effective paths, future work can investigate path ranking, pruning, and personalization (e.g., by cost, time, difficulty, or learning gains), as well as classroom-level aggregation to design cohort-aware interventions.
(3)
On the modeling side, extending from disjunctive to conjunctive or mixed fuzzy skill mappings, handling noisy or learned mappings, and updating spaces online as new data arrive would broaden practical use.
(4)
On scalability, exploiting sparsity, incremental verification, and parallel path graph generation should further reduce time and memory footprints, enabling very large skill and problem domains and real-time integration with adaptive testing engines.
(5)
For practical deployment, we plan to package the CFCS verifier, constructor/reducer, and path engine as a reusable library with stable APIs and to build a CFCS-based learning diagnosis system. We will then conduct empirical studies in real instructional settings to evaluate effectiveness, scalability, and integration with adaptive learning workflows.

Author Contributions

Writing—original draft preparation, R.W., B.H. and J.L.; writing—review and editing, R.W., B.H. and J.L.; visualization, R.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Fujian Provincial Natural Science Foundation of China (Grant No. 2025J01356).

Data Availability Statement

Dataset descriptions are provided in the manuscript; additional information can be obtained from the first author upon reasonable request.

Acknowledgments

We are grateful to the anonymous reviewers and the editorial team for their careful evaluation and constructive feedback.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
KST: Knowledge space theory
FC-state: Fuzzy competence state
CbKST: Competence-based knowledge space theory
CFCS: Consistent fuzzy competence space
DFS: Depth-first search

References

  1. Chen, X.L.; Zou, D.; Xie, H.R.; Cheng, G.; Liu, C.X. Two Decades of Artificial Intelligence in Education: Contributors, Collaborations, Research Topics, Challenges, and Future Directions. Educ. Technol. Soc. 2022, 25, 28–47. Available online: https://doaj.org/article/f8b4d7534bfc42a4818ca1d6bf7e4d4b (accessed on 1 September 2025).
  2. Forero-Corba, W.; Bennasar, F.N. Techniques and applications of Machine Learning and Artificial Intelligence in education: A systematic review. RIED-Rev. Iberoam. Educ. Distancia 2024, 27, 37491.
  3. Chen, L.J.; Chen, P.P.; Lin, Z.J. Artificial Intelligence in Education: A Review. IEEE Access 2020, 8, 75264–75278.
  4. Kamalov, F.; Santandreu Calonge, D.; Gurrib, I. New Era of Artificial Intelligence in Education: Towards a Sustainable Multifaceted Revolution. Sustainability 2023, 15, 12451.
  5. Guo, S.C.; Zheng, Y.Y.; Zhai, X.M. Artificial intelligence in education research during 2013-2023: A review based on bibliometric analysis. Educ. Inf. Technol. 2024, 29, 16387–16409.
  6. Ma, S.Y.; Lei, L. The factors influencing teacher education students’ willingness to adopt artificial intelligence technology for information-based teaching. Asia Pac. J. Educ. 2024, 44, 94–111.
  7. Kurilovas, E.; Zilinskiene, I.; Dagiene, V. Recommending suitable learning scenarios according to learners’ preferences: An improved swarm based approach. Comput. Hum. Behav. 2014, 30, 550–557.
  8. Nigenda, R.S.; Padrón, C.M.; Martínez-Salazar, I.; Torres-Guerrero, F. Design and evaluation of planning and mathematical models for generating learning paths. Comput. Intell. 2018, 34, 821–838.
  9. Zhang, F.; Feng, X.; Wang, Y. Personalized process–type learning path recommendation based on process mining and deep knowledge tracing. Knowl.-Based Syst. 2024, 303, 112431.
  10. Rahayu, N.W.; Ferdiana, R.; Kusumawardani, S.S. A systematic review of learning path recommender systems. Educ. Inf. Technol. 2023, 28, 7437–7460.
  11. Şenel, G. An Innovative Algorithm Based on Octahedron Sets via Multi-Criteria Decision Making. Symmetry 2024, 16, 1107.
  12. Wang, P.Q. An optimisation and application of artificial intelligence models based on deep learning in personalised recommendation of English education. Int. J. Comput. Appl. Technol. 2024, 75, 96–102.
  13. Li, H.; Gong, R.R.; Wang, C.X.; Xu, B.S.; Zhong, Z.M.; Li, H.N. Research on Dynamic Learning Path Recommendation Based on Social Networks. IEEE Trans. Consum. Electron. 2024, 70, 5903–5910.
  14. Ma, D.L.S.; Zhu, H.P.; Liao, S.J.; Chen, Y.; Liu, J.; Tian, F.; Chen, P. Learning path recommendation with multi-behavior user modeling and cascading deep Q networks. Knowl.-Based Syst. 2024, 294, 111743.
  15. Mu, M.N.; Yuan, M. Research on a personalized learning path recommendation system based on cognitive graph with a cognitive graph. Interact. Learn. Environ. 2024, 32, 4237–4255.
  16. Gu, R.L. Personalized learning path based on graph attention mechanism deep reinforcement learning research on recommender systems. J. Comput. Methods Sci. Eng. 2025, 25, 2411–2426.
  17. Luo, G.Q.; Gu, H.N.; Dong, X.X.; Zhou, D.D. HA-LPR: A highly adaptive learning path recommendation. Educ. Inf. Technol. 2025, 30, 14597–14627.
  18. Hou, B.; Lin, Y.; Li, Y.; Fang, C.; Li, C.; Wang, X. KG-PLPPM: A Knowledge Graph-Based Personal Learning Path Planning Method Used in Online Learning. Electronics 2025, 14, 255.
  19. Zhao, W.; Xu, Z.; Qiu, L. BPSKT: Knowledge Tracing with Bidirectional Encoder Representation Model Pre-Training and Sparse Attention. Electronics 2025, 14, 458.
  20. Falmagne, J.C. Random conjoint measurement and loudness summation. Psychol. Rev. 1976, 83, 65–79.
  21. Doignon, J.-P.; Falmagne, J.-C. Spaces for the assessment of knowledge. Int. J. Man-Mach. Stud. 1985, 23, 175–196.
  22. Falmagne, J.-C.; Koppen, P.; Villano, M.; Doignon, J.-P. Introduction to knowledge spaces: How to build, test, and search them. Psychol. Rev. 1990, 97, 201–224.
  23. Falmagne, J.C.; Doignon, J.P. A class of stochastic procedures for the assessment of knowledge. Br. J. Math. Stat. Psychol. 1988, 41, 1–23.
  24. Schrepp, M. A generalization of knowledge space theory to problems with more than two answer alternatives. J. Math. Psychol. 1997, 41, 237–243.
  25. Stefanutti, L.; de Chiusole, D. On the assessment of learning in competence based knowledge space theory. J. Math. Psychol. 2017, 80, 22–32.
  26. de Chiusole, D.; Stefanutti, L.; Anselmi, P.; Robusto, E. Stat-Knowlab. Assessment and learning of statistics with competence-based knowledge space theory. Int. J. Artif. Intell. Educ. 2020, 30, 668–700.
  27. Spoto, A.; Stefanutti, L. Empirical indistinguishability: From the knowledge structure to the skills. Br. J. Math. Stat. Psychol. 2023, 76, 312–326.
  28. Anselmi, P.; de Chiusole, D.; Stefanutti, L. Constructing tests for skill assessment with competence-based test development. Br. J. Math. Stat. Psychol. 2024, 77, 429–458.
  29. Stefanutti, L.; Anselmi, P.; de Chiusole, D.; Spoto, A. On the polytomous generalization of knowledge space theory. J. Math. Psychol. 2020, 94, 102306.
  30. Heller, J. Generalizing quasi-ordinal knowledge spaces to polytomous items. J. Math. Psychol. 2021, 101, 102515.
  31. Sun, W.; Li, J.J.; Lin, F.C.; He, Z.R. Constructing polytomous knowledge structures from fuzzy skills. Fuzzy Sets Syst. 2023, 461, 108395.
  32. Wang, R.H.; Xie, X.J.; Zhi, H.L. Bi-Fuzzy S-Approximation Spaces. Mathematics 2025, 13, 324.
  33. Wang, G.X.; Li, J.J.; Xu, B.C. Constructing polytomous knowledge structures from L-fuzzy S-approximation operators. Int. J. Approx. Reason. 2025, 179, 109363.
  34. Heller, J.; Stefanutti, L.; Anselmi, P.; Robusto, E. On the Link between Cognitive Diagnostic Models and Knowledge Space Theory. Psychometrika 2015, 80, 995–1019.
  35. Liu, G.L. Rough set approaches in knowledge structures. Int. J. Approx. Reason. 2021, 138, 78–88.
  36. Xie, X.X.; Xu, W.H.; Li, J.J. A novel concept-cognitive learning method: A perspective from competences. Knowl.-Based Syst. 2023, 265, 110382.
  37. Xu, B.C.; Li, J.J. The inclusion degrees of fuzzy skill maps and knowledge structures. Fuzzy Sets Syst. 2023, 465, 108540.
  38. Zhou, Y.F.; Yang, H.L.; Li, J.J.; Wang, D.L. Skill assessment method: A perspective from concept-cognitive learning. Fuzzy Sets Syst. 2025, 508, 109331.
  39. Zhou, Y.F.; Li, J.J.; Wang, H.K.; Sun, W. Skills and fuzzy knowledge structures. J. Intell. Fuzzy Syst. 2022, 42, 2629–2645.
  40. Zhou, Y.F.; Li, J.J.; Yang, H.L.; Xu, Q.Y.; Yang, T.L.; Feng, D.L. Knowledge structures construction and learning paths recommendation based on formal contexts. Int. J. Mach. Learn. Cybern. 2024, 15, 1605–1620.
  41. Wang, R.; Huang, B.; Li, J. A Learning Path Recommendation Approach in a Fuzzy Competence Space. Axioms 2025, 14, 396.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
