Abstract
Artificial intelligence is playing an increasingly important role in education, and learning path recommendation is one of the key technologies in artificial intelligence education applications. This paper applies knowledge space theory and fuzzy set theory to study the construction of consistent fuzzy competence spaces and their application to learning path recommendation. With the help of the outer fringe of fuzzy competence states, the paper proves necessary and sufficient conditions for a fuzzy competence space to be a consistent fuzzy competence space and designs an algorithm for verifying consistent fuzzy competence spaces. It also proposes methods for constructing and reducing consistent fuzzy competence spaces, provides learning path recommendation algorithms both from the competence perspective and in combination with a disjunctive fuzzy skill mapping, and constructs a bottom-up gradual and effective learning path tree. Simulation experiments are carried out for the construction and reduction of consistent fuzzy competence spaces and for learning path recommendation; the results show that the proposed methods achieve significant performance improvements over related research and produce a more complete recommendation of gradual and effective learning paths. This research can provide theoretical foundations and algorithmic references for the development of artificial intelligence education applications such as learning assessment systems and intelligent testing systems.
Keywords:
knowledge space theory (KST); fuzzy competence state (FC-state); fuzzy competence structure; consistent fuzzy competence space (CFCS); outer fringe; learning path recommendation; gradual and effective learning paths
MSC:
68T05; 03E72
1. Introduction
Artificial intelligence has had a significant impact on the field of education, and related research continues to grow [1,2]. At present, artificial intelligence is widely used by educational institutions in various forms; intelligent educational systems can achieve personalized curriculum customization through machine learning and adaptive adjustments, thereby improving teaching quality and promoting students’ knowledge acquisition [3]. The deep integration of artificial intelligence with educational systems is transforming how students learn, how teachers teach, and how educational institutions operate [4]. Currently, the application of artificial intelligence in education mainly focuses on disciplines such as STEM (science, technology, engineering, and mathematics), computer science, and English education [5]. Studies have shown that to encourage broader adoption of artificial intelligence technologies among educators, it is necessary to highlight their practical benefits in teaching, thus promoting deeper integration of AI into digital instruction [6].
Personalized learning path recommendation can dynamically deliver appropriate learning resources and test questions based on learners’ responses, current proficiency levels, and specific learning objectives. Many scholars have studied learning path recommendation technologies from different perspectives. Kurilovas et al. proposed a learning path selection method based on swarm intelligence, which is suitable for dynamically selecting learning paths according to learners’ learning styles [7]. Nigenda et al. proposed a flexible conceptual framework that represents course information as AI planning and mathematical programming models, promoting the generation of learning paths through domain-independent algorithms [8]. Zhang et al. proposed a process-type learning path model that represents paths as flowcharts and dynamically recommends branches based on the learner’s knowledge state by combining deep knowledge tracing with process mining and decision mining; experiments on e-learning logs show improved learning effectiveness and efficiency [9]. The research by Rahayu et al. showed that current learning path recommendation technologies often combine ontology with Bayesian networks, data mining, and other AI technologies; they also pointed out that ontology can be combined with knowledge representation tools, educational psychology, and evolutionary computation to construct future dynamic learning paths in adaptive learning environments [10]. From a complementary perspective, multi-criteria decision making has explored set-based optimization mechanisms that can inform trade-offs in path selection [11]. Wang proposed an AI model based on deep learning to optimize personalized learning recommendation methods in English education, addressing problems such as ignoring students’ interests, relying on subjective judgment, or a large mean square error caused by popular learning paths in traditional recommendation methods [12]. In recent years, related research has also integrated technologies such as reinforcement learning, social networks, knowledge graphs, cognitive graphs, and graph attention mechanisms, combined with dynamic adjustment, multi-behavior modeling, cognitive visualization, and memory forgetting rules, focusing on improving recommendation accuracy and learning effectiveness [13,14,15,16,17]. Recent work also demonstrates knowledge-graph-based path planning that customizes learning sequences by integrating concept relations with refined learner diagnosis [18]. Complementarily, advances in deep knowledge tracing enhance the estimation of evolving knowledge states over long interaction sequences, supporting downstream path recommendation [19].
Most studies on learning path recommendation focus on the technical level and rarely incorporate the internal structure of learning content and learners’ mastery of existing knowledge. Knowledge space theory (KST) provides a theoretical framework for research on personalized learning path recommendation by integrating knowledge structures and learners’ knowledge states. KST, rooted in probabilistic measurement theory and mathematical psychology, is a mathematical framework proposed by Doignon and Falmagne to model and assess individuals’ knowledge states [20,21,22]. Using the formal mechanisms provided by KST, it is possible to infer an individual’s knowledge level in a specific domain based on their responses to a series of questions [23,24]. Currently, knowledge space theory has been extended to competence-based knowledge space theory (CbKST) [25,26,27,28]. Early research focusing on dichotomous knowledge structures has also been extended to polytomous knowledge structures [29,30,31,32,33]. At the same time, the combination of KST with cognitive diagnostic theory, rough set theory, formal concept analysis, and fuzzy set theory has further enriched KST and its applications [34,35,36,37,38]. Although early studies of KST also mentioned its application to learning path recommendation, relevant research has remained relatively limited. In recent years, some scholars have attempted to apply KST to learning path recommendation [39,40,41]. Zhou et al. explored learning path recommendation for fuzzy knowledge structures under conjunctive skill mappings, disjunctive skill mappings, and conjunctive fuzzy skill mappings [39]. Zhou et al. also studied the construction of knowledge structures and learning path recommendation in formal contexts [40]. Wang et al. proposed a learning path recommendation method based on fuzzy competence space theory, defined a fuzzy competence space called a consistent fuzzy competence space (CFCS), and designed a gradual and effective learning path recommendation algorithm from the initial knowledge state to the full set, which was validated through simulations [41].
In Wang’s study [41], the construction of a CFCS was not specified in detail, and the learning path recommendation discussed only how to find one gradual and effective path from the initial knowledge state to the full set. A natural extension is to enumerate all gradual and effective paths between these two states. In addition, the existing algorithm checks consistency directly from the basic definition, which is computationally inefficient. In this paper, we establish necessary and sufficient conditions under which a fuzzy competence space is consistent, propose an algorithm to verify consistency, and study both construction and reduction methods for CFCSs. Building on these results, we improve the prior learning path algorithm to construct a bottom-up gradual learning path tree and to enumerate all gradual and effective learning paths from the initial state to the full set.
The rest of this paper is organized as follows: Section 2 reviews CFCSs and related basic concepts. Section 3 presents a simplified method to verify whether a fuzzy competence structure is a CFCS and introduces construction methods for CFCSs. Section 4 proposes a gradual and effective learning path recommendation algorithm. Section 5 presents simulation experiments, and Section 6 concludes the work and outlines future research directions.
2. Preliminaries
To clearly articulate the concept of CFCSs, we first revisit foundational concepts from knowledge space theory and fuzzy set theory.
Definition 1
([21]). Let be a nonempty finite set representing a domain of problems. In an ideal scenario free from random errors or guessing, the subset of problems that an individual can correctly solve is called a knowledge state. Both (solving none) and (solving all) are valid knowledge states. The collection of all such knowledge states is denoted as , and the pair is termed a knowledge structure. For any ,
if , the structure is called a knowledge space.
Union-closure of knowledge states means that solvable sets can be combined: if one learner can solve one set of problems and another learner can solve a different set, then a state covering the union of the two sets also belongs to the structure. This conveys that collaboration integrates what each can solve into a jointly attainable set of problems.
Definition 1 provides a formal mathematical basis to model what learners know in a domain. It also supports tracing knowledge progression and designing personalized learning paths within the structure .
Definition 2
([39]). Let be a nonempty finite set of skills relevant to problem solving in . A fuzzy competence state (FC-state) is defined as a mapping , where for each skill , the value represents the degree to which an individual has mastered skill . Formally, the set of all such mappings is denoted by .
From the perspective of fuzzy sets, can be seen as a fuzzy set over , with as the membership degree of skill . For any two FC-states :
- One FC-state is contained in another if and only if, for every skill, its proficiency level does not exceed that of the other; strict containment means containment together with inequality on at least one skill;
- The union assigns to each skill the higher of the two proficiency levels;
- The intersection assigns to each skill the lower of the two proficiency levels.
Definition 2 allows us to model individuals’ proficiency levels in a flexible and graded way, reflecting partial rather than all-or-nothing mastery, and it supports algebraic operations on FC-states.
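To make these operations concrete, the following Python sketch represents FC-states as mappings from skills to proficiency levels and computes containment, union, and intersection componentwise. The skill names and level values are illustrative assumptions, not the paper's notation.

```python
# Illustrative FC-state operations; skill names and levels are hypothetical.
def fc_leq(k1, k2):
    """k1 is contained in k2 iff k1's level never exceeds k2's on any skill."""
    return all(k1[s] <= k2[s] for s in k1)

def fc_union(k1, k2):
    """Componentwise maximum of proficiency levels."""
    return {s: max(k1[s], k2[s]) for s in k1}

def fc_intersection(k1, k2):
    """Componentwise minimum of proficiency levels."""
    return {s: min(k1[s], k2[s]) for s in k1}

# Example with two skills and graded proficiencies.
a = {"s1": 0.5, "s2": 0.0}
b = {"s1": 0.0, "s2": 1.0}
print(fc_leq(a, b))           # False
print(fc_union(a, b))         # {'s1': 0.5, 's2': 1.0}
print(fc_intersection(a, b))  # {'s1': 0.0, 's2': 0.0}
```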
Definition 3
([39]). Let and be nonempty finite sets representing problems and skills, respectively. A fuzzy skill mapping is defined as a triple , which assigns to each problem a fuzzy skill requirement function . Here, represents the minimum level of mastery needed for skill to solve problem . If , it means skill is irrelevant to problem .
Under the disjunctive model, for an individual with an FC-state, problem is considered solvable if there exists at least one relevant skill whose mastery level in the FC-state meets the minimum level required for that problem. The corresponding knowledge state is the set of all problems solvable in this sense.
Such a formulation illustrates how differences in skill mastery correspond to problem-solving capabilities and makes it possible to formally specify which problems learners can handle based on their FC-states.
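The sketch below derives the knowledge state induced by an FC-state, assuming the disjunctive condition "some skill with a non-zero requirement is mastered at or above the required level". The problem names and the requirement table are hypothetical.

```python
# Hypothetical fuzzy skill mapping: problem -> {skill: minimum required level}.
# A requirement of 0 means the skill is irrelevant to the problem.
TAU = {
    "q1": {"s1": 0.5, "s2": 0.0},
    "q2": {"s1": 0.0, "s2": 1.0},
    "q3": {"s1": 1.0, "s2": 0.5},
}

def knowledge_state(fc_state, tau=TAU):
    """Disjunctive model: q is solvable if at least one relevant skill
    (requirement > 0) is mastered at or above the required level."""
    return {
        q for q, req in tau.items()
        if any(0 < req[s] <= fc_state.get(s, 0) for s in req)
    }

print(knowledge_state({"s1": 0.5, "s2": 0.0}))  # {'q1'}
print(knowledge_state({"s1": 1.0, "s2": 0.5}))  # {'q1', 'q3'}
```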
Definition 4
([41]). Let be a nonempty finite set of skills, and for each , let be a finite ordered set of numerical values where the minimum is 0 and the maximum is 1. Denote by the set of all transversals over ; that is, each element in selects exactly one pair with for every . In other words, a transversal is an FC-state such that, for each , . (The concept of transversal here follows the explanation in Definition 4 of [41].)
Let . A triple is called a fuzzy competence structure if it satisfies the following conditions:
- contains both the minimal competence state and the maximal competence state ;
- .
Moreover, if for any two FC-states , the union is also in , then is called a fuzzy competence space.
Such definitions describe how sets of graded competence states can be systematically organized and specify the algebraic closure under union required to form a fuzzy competence space, providing theoretical support for subsequent research on learning path recommendation.
Building on the above concepts and drawing on Definitions 8 and 9 in [41], we now present a more explicit definition of a consistent fuzzy competence space.
Definition 5
(cf. [41], Definitions 8 and 9). Let be a fuzzy competence structure. For each , define the cardinality as the number of skills with non-zero proficiency:
For any with , let
and define the distance:
where, for proficiency levels , , with .
A fuzzy competence space is called a CFCS if it satisfies the following conditions:
- (1)
- For any , if , there exist such that = , and , where ;
- (2)
- For any , if , and , suppose , then there exist such that , and , where .
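As a rough illustration of the quantities used in Definition 5, the sketch below assumes that the cardinality counts skills with non-zero proficiency and that the distance between two FC-states adds up, over all skills, the number of level steps separating their proficiencies (the same index-based Manhattan distance used later in Algorithm 1). The level grids are hypothetical.

```python
# Hypothetical per-skill proficiency grids (each from 0 to 1, totally ordered).
LEVELS = {"s1": [0, 0.5, 1], "s2": [0, 1]}

def cardinality(fc_state):
    """Number of skills mastered at a non-zero level."""
    return sum(1 for v in fc_state.values() if v > 0)

def distance(k1, k2, levels=LEVELS):
    """Sum over skills of the number of grid steps between the two levels."""
    return sum(abs(levels[s].index(k1[s]) - levels[s].index(k2[s]))
               for s in levels)

print(cardinality({"s1": 0.5, "s2": 0}))                   # 1
print(distance({"s1": 0, "s2": 0}, {"s1": 0.5, "s2": 1}))  # 2
```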
Remark 1.
In Definition 5, the chain condition is stated with strict inclusion, . This slightly strengthens the inclusion in [41], yet it is fully justified by the other clauses: and together imply . Using strict inclusion therefore makes the incremental progress explicit without changing the substance of the definition; using non-strict inclusion would still be correct but less precise.
This formulation reflects three educational learning properties [41]:
- Closure under union of competence states indicates that the problem sets solvable by different individuals can be integrated through collaboration or instruction, producing a state that covers their combined solvable set;
- Consistency, understood as progression via single-step increments, captures the gradualness of skill learning;
- Taken together, closure under union and consistency imply that advancing in some skills does not interfere with other skills (see [41], Proposition 3).
Definition 6
([41]). Let be a fuzzy competence structure and be a fuzzy skill mapping. Under the disjunctive model, for each FC-state , define:
The mapping is called the fuzzy disjunctive problem function associated with . The family of all subsets obtained in this way is called the knowledge structure induced by the fuzzy competence structure and the fuzzy skill mapping under the disjunctive model.
This formulation explains how individual FC-states translate into solvable problem sets, providing a formal bridge from learners’ skill profiles to knowledge states within the disjunctive reasoning framework.
Example 1.
Let be a fuzzy competence structure, and let be a disjunctive fuzzy skill mapping. Let , with , , , and . The FC-states in are listed in Table 1.
Table 1.
The FC-states in .
The fuzzy skill mapping is given in Table 2.
Table 2.
Fuzzy skill mapping.
By Definition 5, can be verified to be a CFCS. Based on Definition 6, the knowledge states corresponding to each FC-state are shown in Table 3.
Table 3.
FC-states and corresponding knowledge states.
For the FC-states , the proficiency level of skill is 1. Under the disjunctive fuzzy skill mapping, if at least one skill meets the requirement, the corresponding problem is solvable. Therefore, these states correspond to the complete problem set .
3. Construction of CFCSs
To facilitate subsequent applications, constructing CFCSs is an important task. However, the prior study [41] did not provide a concrete method for constructing CFCSs. In this section, inspired by the outer fringe of competence states in CbKST, we first introduce the definition of the outer fringe of an FC-state. On this basis, we propose a simplified method to decide whether a fuzzy competence structure is a CFCS and further design an algorithm to construct such a space. We first recall the outer fringe of competence states.
Definition 7
([25]). Let be a competence structure defined on a nonempty finite set of skills . For any competence state , the outer fringe of , denoted , is the set of skills such that adding to results in a valid competence state in . Formally:
The outer fringe consists of skills not yet mastered in ; acquiring any of them yields a new competence state. This concept is analogous to the zone of proximal development in educational theory and indicates the next skills or knowledge the learner is ready to acquire. The outer fringe highlights the gradual nature of skill learning: competence states improve step by step. If every non-maximal competence state has a nonempty outer fringe, learners can progressively master the entire skill set by choosing one skill from the current outer fringe at each step until all skills are acquired.
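For the crisp case of Definition 7, a short sketch (with a hypothetical competence structure) lists the skills whose addition to a state yields another valid state.

```python
# Hypothetical crisp competence structure over skills {"a", "b", "c"}.
STRUCTURE = [set(), {"a"}, {"a", "b"}, {"a", "b", "c"}]

def outer_fringe(state, structure=STRUCTURE):
    """Skills not yet in `state` whose addition gives a state in the structure."""
    return {s for s in {"a", "b", "c"} - state
            if state | {s} in structure}

print(outer_fringe(set()))        # {'a'}
print(outer_fringe({"a"}))        # {'b'}
print(outer_fringe({"a", "b"}))   # {'c'}
```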
FC-states extend ordinary competence states by adding proficiency levels for each skill. Fuzzy skill learning is approximately continuous, while FC-states discretize it by selecting finite representative levels in . Based on these discretized values, we defined the fuzzy skill mapping, fuzzy competence structure, fuzzy competence space and CFCS above. It is also natural to use these selected values as thresholds to define the outer fringe of an FC-state. For example, in Example 1, , and is the next possible target proficiency after . The concept of the outer fringe of competence states can be extended to FC-states.
Definition 8.
Given a fuzzy competence structure , for any , if there exists such that and there does not exist any with , then is called an outer fringe state of . The set of all such outer fringe states of is called the outer fringe of , denoted by .
Example 2.
In Example 1, the outer fringes of each FC-state in are shown in Table 4.
Table 4.
The outer fringes of each FC-state in of Example 1.
As shown in Example 2, except for the last FC-state whose outer fringe is empty, each FC-state has exactly one outer fringe state, which directly corresponds to the next FC-state in the sequence. This illustrates an ideal situation where, starting from the initial state, learners can progressively reach the final FC-state by successively moving to the outer fringe state of the current state.
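For FC-states, the same idea can be sketched on integer level-index vectors: a state belongs to the outer fringe of another if it is componentwise at least as large and their Manhattan distance is exactly one, i.e., one skill is raised by one level. The states and grids below are illustrative.

```python
# FC-states encoded as tuples of level indices (one entry per skill).
def manhattan(u, v):
    return sum(abs(a - b) for a, b in zip(u, v))

def fuzzy_outer_fringe(state, all_states):
    """States reachable from `state` by raising exactly one skill one level."""
    return [t for t in all_states
            if all(a <= b for a, b in zip(state, t)) and manhattan(state, t) == 1]

# Illustrative structure with two skills: three levels on the first, two on the second.
C = [(0, 0), (1, 0), (1, 1), (2, 1)]
print(fuzzy_outer_fringe((0, 0), C))  # [(1, 0)]
print(fuzzy_outer_fringe((1, 0), C))  # [(1, 1)]
print(fuzzy_outer_fringe((2, 1), C))  # []
```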
Based on these observations and the structural properties of CFCSs, we can establish the following proposition.
Proposition 1.
Let be a CFCS. For any and any nonempty outer fringe state , it holds that .
Proof.
By Definition 8, implies and there does not exist such that . By Definition 5, for any with , there exists a chain:
where for . Since there does not exist with , it follows that . Thus, the chain reduces to:
with , i.e., . Moreover, by the second condition in Definition 5, suppose , then there exists a chain such that:
Therefore:
□
Example 3.
The fuzzy competence structure presented in Examples 1 and 2 constitutes a CFCS. As shown in Table 4, all FC-states with nonempty outer fringes satisfy the conclusion of Proposition 1.
Theorem 1.
Let be a fuzzy competence space. Then, is a CFCS if and only if for every FC-state :
- (1)
- If , then there exists such that .
- (2)
- If , then there exists such that and .
- (3)
- If and , then there exist such that and .
Proof.
(Necessity) Suppose is a CFCS. By Definition 5, for any with , there exists a chain of FC-states:
such that for .
- (1)
- When :
Take and . Let . The chain is:
Since no state exists between and , we have and . By Proposition 1, .
- (2)
- When :
Take and . Let . The chain is:
Since no state exists between and , we have and . By Proposition 1, .
- (3)
- When is an intermediate state (neither the minimal nor the maximal state):
First, take and . Let . The chain is:
Since no state exists between and , we have and . By Proposition 1, .
Next, take and . Let . The chain is:
Since no state exists between and , we have and . By Proposition 1, .
(Sufficiency) For any with , let . By the three conditions above, there exists with . Let , then with and . If , we have found the required chain of FC-states. Otherwise, repeat this process until , yielding:
where for , and , . Thus, is a CFCS. □
Guided by Proposition 1 and Theorem 1, we design an algorithm to decide whether is a CFCS. We first sort lexicographically by skill proficiencies and encode each state as an integer vector. We use the Manhattan distance between vectors to quantify level gaps across skills, verify union-closure using the componentwise maximum, and, under union-closure, identify each state’s outer fringe by locating single-skill increments where the distance equals one. Consistency is rejected at the first violation (either a missing union or an empty outer fringe); otherwise, the structure is accepted as consistent. The complete procedure is given in Algorithm 1.
| Algorithm 1 Decide whether is a CFCS | |
| Input: A fuzzy competence structure ; each is finite and totally ordered. | |
| Output: True if is a CFCS; otherwise False. | |
| 1: | |
| 2: | |
| 3: | Sort increasingly by in lexicographic order, and relabel the states as . |
| 4: | For each , index increasingly as . |
| 5: | Map each to an integer vector using these indices. |
| 6: | . |
| 7: | For to do |
| 8: | For to do |
| 9: | |
| 10: | If then |
| 11: | Return False. // |
| 12: | End if |
| 13: | End for |
| 14: | End for |
| 15: | For do // exclude the maximal state |
| 16: | |
| 17: | For to do |
| 18: | // Manhattan distance |
| 19: | If then |
| 20: | // |
| 21: | End if |
| 22: | End for |
| 23: | If = then |
| 24: | Return False. // some non-maximal state has empty outer fringe |
| 25: | End if |
| 26: | End for |
| 27: | If the maximal state appears in the outer fringe of some state then
| 28: | Return True. // maximal state appears as someone’s outer fringe |
| 29: | Else |
| 30: | Return False. // maximal state is not reachable by a 1-step raise
| 31: | End if |
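A minimal, self-contained Python sketch of the decision procedure described above. It assumes FC-states are already encoded as tuples of level indices and follows Theorem 1 by checking union-closure, non-empty outer fringes for all non-maximal states, and reachability of the maximal state by a one-step raise; it is an illustrative reading of Algorithm 1, not the authors' reference implementation.

```python
def manhattan(u, v):
    return sum(abs(a - b) for a, b in zip(u, v))

def is_cfcs(states):
    """states: list of FC-states encoded as tuples of level indices."""
    states = sorted(set(states))                    # lexicographic order
    top = tuple(max(col) for col in zip(*states))   # maximal state
    state_set = set(states)
    # 1. Union-closure: the componentwise maximum of any two states is a state.
    for i, u in enumerate(states):
        for v in states[i + 1:]:
            if tuple(max(a, b) for a, b in zip(u, v)) not in state_set:
                return False
    # 2. Every non-maximal state has a non-empty outer fringe (a one-step raise).
    max_in_fringe = False
    for u in states:
        if u == top:
            continue
        fringe = [v for v in states
                  if all(a <= b for a, b in zip(u, v)) and manhattan(u, v) == 1]
        if not fringe:
            return False
        if top in fringe:
            max_in_fringe = True
    # 3. The maximal state must be reachable by a one-step raise.
    return max_in_fringe

# Example: a simple chain over two skills (three levels and two levels).
print(is_cfcs([(0, 0), (1, 0), (1, 1), (2, 1)]))  # True
print(is_cfcs([(0, 0), (2, 1)]))                  # False (gap larger than one step)
```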
Example 4.
Table 5.
The FC-states in .
We now apply Algorithm 1 to determine whether is a CFCS.
First, sort lexicographically by in ascending order and relabel the states as in this new order. Each proficiency level set is indexed in ascending order:
Replacing each proficiency in a state with its index yields an integer vector . The sorted states and their integer vector encodings are presented in Table 6.
Table 6.
Sorted FC-states and corresponding integer vector encodings.
Second, check union-closure. For each , compute and verify . All pairs satisfy this property.
Third, compute the outer fringe for each with . A state belongs to if and the Manhattan distance between and equals 1. We illustrate the calculation process for as an example. Based on the sorted order in Table 6, each FC-state is mapped into its integer vector representation . We then compute the Manhattan distance between and each . States with a Manhattan distance equal to 1 constitute . The results are presented in Table 7, from which it is clear that .
Table 7.
Manhattan distance between and .
The procedure for computing the outer fringes of the other FC-states follows the same steps as for . According to the definition of the outer fringe, the maximal state has no outer fringe; thus, in Table 8 its outer fringe is denoted by . The results for all states are summarized in Table 8.
Table 8.
Outer fringes of .
Finally, verify that the maximal state appears in at least one outer fringe. Indeed, and .
Since union-closure holds, every outer fringe except that of the maximal state is non-empty, and the maximal state itself appears as the outer fringe of other FC-states. Therefore, Algorithm 1 confirms that is a CFCS.
Example 5.
Based on the fuzzy competence structure in Example 4, consider the sorted FC-states given in Table 6. Remove and from . It is straightforward to verify that the remaining FC-states still satisfy union-closure.
Following Algorithm 1, we compute the outer fringes. As an illustration, we examine . Using the integer vectors in Table 6, we calculate the Manhattan distance between and for . The results are listed in Table 9. Since no state lies at distance from , we obtain = . Hence, despite union-closure, the modified is not a CFCS, and it is unnecessary to compute the outer fringes of the remaining states.
Table 9.
Manhattan distance between and .
We proceed to investigate the construction of CFCSs. As a starting point, we consider a special case in which contains all possible FC-states, i.e., .
Proposition 2.
Let be a fuzzy competence structure. If , then is a CFCS.
Proof.
By Definition 2, for any , the union is defined so that, for each skill , its level in equals the higher of its levels in and :
Since contains all possible states, . Hence is union-closed, and by Definition 4, is a fuzzy competence space.
If , pick any skill . Let be obtained from by increasing the level of to its immediate successor in , keeping all other skills unchanged. Then , , and .
If , pick any skill . Let be obtained from by decreasing the level of to its immediate predecessor in , keeping all other skills unchanged. Then , , , and .
If and , choose a skill whose level in is not maximal and a skill whose level in is not minimal. Let be obtained by increasing the level of by one (others unchanged) and by decreasing the level of by one (others unchanged). Then , , and .
Thus, all three conditions in Theorem 1 are satisfied. Therefore is a CFCS. □
Example 6.
Consider with , as in Example 4. Let , i.e., contains all FC-states over . In accordance with Algorithm 1, we sort the states lexicographically by in ascending order and index them as . The complete list is given in Table 10.
Table 10.
The FC-states in .
Since contains all possible FC-states, Proposition 2 applies directly: is a CFCS.
Proposition 3.
Let with . After sorting lexicographically by , relabel the states as with , so that and . Construct a sequence as follows:
- ;
- for each , let be the last element of according to the insertion order specified in Algorithm 1;
- stop at the first with .
Set . Then is a CFCS.
Proof.
By Definition 8, implies and the change occurs on exactly one skill by one proficiency level; hence
The rule “take the last element of in Algorithm 1’s order” ensures a deterministic choice at each step, so the chain is uniquely determined. Because is a chain under , for any we have ; therefore is union-closed and is a fuzzy competence space. Finally, let with . Writing and with , the subsequence satisfies Definition 5: for each , and . Hence is a CFCS. □
Let with , and let be the chain constructed in Proposition 3 by repeatedly taking, at each step, the last element of the current outer fringe in the order induced by Algorithm 1. Each step changes exactly one skill by exactly one proficiency level, and the state strictly increases under . Denote
From to , any chain in which each step modifies exactly one skill by one proficiency level must perform changes for each skill , so its length is at least . The construction in Proposition 3 exactly reaches this bound with , hence . Hence, contains the minimal possible number of FC-states among all CFCSs on .
It should be noted that is a maximal chain under but not necessarily unique: if an outer fringe contains multiple elements, selecting a different one (instead of the “last” element) may yield a different chain; however, all such chains have the same cardinality .
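As a quick check of the counting argument above, the following one-line sketch computes the minimal chain size under the assumption that the chain contains the initial state plus one state per single-level step, i.e., 1 plus the sum over skills of (number of levels minus 1). The level counts are illustrative.

```python
# Minimal number of FC-states in a chain from the bottom to the top state,
# assuming each step raises exactly one skill by one level.
def minimal_chain_size(level_counts):
    return 1 + sum(n - 1 for n in level_counts)

# E.g., two skills with 3 and 2 proficiency levels respectively (illustrative).
print(minimal_chain_size([3, 2]))  # 4
```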
According to Proposition 3 and the above counting argument, the following corollary holds.
Corollary 1.
Let be a CFCS with . Applying the selection rule of Proposition 3 within yields a subset that is also a CFCS and satisfies
If , this gives a strict reduction; otherwise, is already minimal. Different admissible choices at outer fringes may lead to different , but all such subsets have the same minimal size .
Building on Proposition 3 and the preliminaries in Algorithm 1, we now introduce a constructive procedure that generates, for a given skill set and the associated proficiency-level sets , a CFCS with the minimal number of states, as shown in Algorithm 2. The algorithm first enumerates all FC-states in in lexicographic order and then extracts a deterministic maximal chain by iteratively selecting the last element from the current outer fringe.
| Algorithm 2 Construct a minimal CFCS | |
| Input: A skill set and proficiency-level sets ; each is finite and totally ordered. | |
| Output: A CFCS with the minimal number of states. | |
| 1: | |
| 2: | Generate all FC-states over ; sort lexicographically by ; relabel as ; set . |
| 3: | For each , index increasingly as . |
| 4: | Map each to an integer vector using these indices. |
| 5: | ; 0 |
| 6: | While do |
| 7: | // outer fringe of |
| 8: | For to do |
| 9: | // Manhattan distance |
| 10: | If then |
| 11: | |
| 12: | End if |
| 13: | End for |
| 14: | Let be the last element of , ordered lexicographically as in Step 2. |
| 15: | |
| 16: | index of among |
| 17: | End while |
| 18: | Return ) |
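An illustrative Python sketch of this construction: enumerate all index vectors over the level grids, then walk a deterministic chain by repeatedly raising one skill by one level until the maximal state is reached. Function and variable names are ours; the tie-breaking rule (take the last admissible state in lexicographic order) mirrors the description above.

```python
from itertools import product

def build_minimal_cfcs(level_counts):
    """level_counts[i] = number of proficiency levels of skill i.
    Returns a chain of index vectors forming a minimal CFCS."""
    top = tuple(n - 1 for n in level_counts)
    all_states = sorted(product(*(range(n) for n in level_counts)))
    chain = [tuple(0 for _ in level_counts)]
    while chain[-1] != top:
        cur = chain[-1]
        # Outer fringe within the full grid: one skill raised by one level.
        fringe = [s for s in all_states
                  if all(a <= b for a, b in zip(cur, s))
                  and sum(b - a for a, b in zip(cur, s)) == 1]
        chain.append(fringe[-1])   # deterministic choice: last in lexicographic order
    return chain

# Two skills with 3 and 2 levels: the chain has 1 + (3-1) + (2-1) = 4 states.
print(build_minimal_cfcs([3, 2]))
# [(0, 0), (1, 0), (2, 0), (2, 1)]
```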
Example 7.
We apply Algorithm 2 to the same input as in Example 4: let , with and . The full set of FC-states, sorted lexicographically as in Table 10 of Example 6, is denoted by .
Table 11 reports, for each sorted state , its outer fringe computed as in Algorithm 2 (Manhattan distance equal to one in the indexed grid).
Table 11.
Outer fringes of .
Using Table 11 and the selection rule of Algorithm 2 (at each step select the last element in lexicographic order), we obtain the minimal chain . Table 12 lists the resulting FC-states.
Table 12.
FC-states of .
Whereas Algorithm 2 constructs a minimal CFCS by enumerating all states in , Algorithm 3 reduces a given to a minimal one while staying within the provided family . To improve robustness, it first verifies consistency via Algorithm 1 and then reuses the outer-fringe traversal restricted to the given states.
| Algorithm 3 Reduction of a CFCS | |
| Input: A fuzzy competence structure ; each is finite and totally ordered. | |
| Output: If inconsistent, report and stop. If consistent, return a minimal with , , and the ratio . | |
| 1: | ; |
| 2: | // Robustness check using Algorithm 1. |
| 3: | If then // denotes Algorithm 1 |
| 4: | Report an error: the input is not a CFCS. |
| 5: | Return without reduction |
| 6: | End if |
| 7: | Sort lexicographically by ; relabel as with |
| 8: | For each , index increasingly as . |
| 9: | Map each to an integer vector using these indices. |
| 10: | |
| 11: | While do |
| 12: | // outer fringe of in |
| 13: | For to do |
| 14: | // Manhattan distance |
| 15: | If then |
| 16: | |
| 17: | End if |
| 18: | End for |
| 19: | Let be the last element of , ordered lexicographically as in Step 7. |
| 20: | |
| 21: | |
| 22: | End while |
| 23: | ; |
| 24: | Return , , , . |
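A minimal sketch of the reduction step, assuming the input is a list of FC-states encoded as index vectors and already verified to be a CFCS (e.g., by a check like the one sketched for Algorithm 1): it extracts the deterministic minimal chain restricted to the given states and reports the fraction removed. Names are illustrative.

```python
def manhattan(u, v):
    return sum(abs(a - b) for a, b in zip(u, v))

def reduce_cfcs(states):
    """states: index-vector FC-states assumed to form a CFCS.
    Returns (minimal_chain, removed_ratio)."""
    states = sorted(set(states))
    top = tuple(max(col) for col in zip(*states))
    chain = [min(states)]
    while chain[-1] != top:
        cur = chain[-1]
        fringe = [s for s in states
                  if all(a <= b for a, b in zip(cur, s)) and manhattan(cur, s) == 1]
        chain.append(fringe[-1])   # last element in lexicographic order
    removed = 1 - len(chain) / len(states)
    return chain, removed

# Example: a small union-closed structure over two skills.
C = [(0, 0), (0, 1), (1, 0), (1, 1), (2, 0), (2, 1)]
chain, removed = reduce_cfcs(C)
print(chain)                      # [(0, 0), (1, 0), (2, 0), (2, 1)]
print(f"removed: {removed:.0%}")  # removed: 33%
```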
Example 8.
Consider the CFCS of Example 4, where with and . Following Algorithm 3, we reduce by traversing the sorted states in Table 6 and, at each step, taking the last element of the current outer fringe given in Table 8. Starting from , the chosen sequence is:
This yields a minimal consistent fuzzy competence subspace with . The states are listed in Table 13.
Table 13.
The FC-states in (reduction of Example 4).
Remark 2.
Example 4 has sorted states . The reduction above keeps states, matching the theoretical minimum with . Thus .
This section established a constructive route to CFCSs and to their minimal representations. The central result is Theorem 1, which states a necessary and sufficient condition for consistency in terms of the outer fringes of FC-states. Building on Theorem 1, Algorithm 1 decides whether a given fuzzy competence structure is a CFCS. On the same foundation, Algorithm 2 constructs a CFCS with the minimal number of states by extracting a lexicographic chain for the given and the proficiency-level sets . Algorithm 3 reduces any given CFCS to a minimal one within the supplied . The next section designs learning path recommendation algorithms based on the CFCSs developed in this section.
4. Learning Path Recommendation
This section develops learning path recommendation in two layers:
- (1)
- From the competence perspective (without items), we enumerate all gradual paths in a CFCS;
- (2)
- Under a disjunctive fuzzy skill mapping , we annotate each competence state with its induced knowledge state and enumerate all gradual and effective paths. The second layer extends [41] from finding a single path to listing all such paths from the initial state to the full set.
4.1. Learning Path Recommendation from the Competence Perspective
Given a CFCS , any outer fringe step increases exactly one skill by one proficiency level, and chaining such steps yields gradual paths from to . Algorithm 4 constructs a bottom-up tree whose nodes are FC-states and whose edges are one-level increments and then performs a depth-first search (DFS) to output all gradual paths from to in competence space terms (i.e., as sequences ).
| Algorithm 4 Enumerate gradual learning paths in a CFCS | |
| Input: A fuzzy competence structure . | |
| Output: A bottom-up learning path tree rooted at and the full list of paths from to . | |
| 1: | |
| 2: | |
| 3: | If Alg1() = False then // Alg1 denotes Algorithm 1 |
| 4: | Return “() is not a CFCS.” |
| 5: | End if |
| 6: | Sort lexicographically by ; relabel as |
| 7: | For each , index as |
| 8: | Map each to an integer vector |
| 9: | For to do |
| 10: | // outer fringe of |
| 11: | For to do |
| 12: | // Manhattan distance |
| 13: | If then |
| 14: | |
| 15: | End if |
| 16: | End for |
| 17: | End for |
| 18: | Define procedure : |
| 19: | If then |
| 20: | Append current stack to Paths; Return |
| 21: | End if |
| 22: | For each do // : the index of after Step 6 |
| 23: | Push ; ; Pop |
| 24: | End for |
| 25: | End procedure |
| 26: | Build edge set |
| 27: | ; |
| 28: | Call |
| 29: | Output the bottom-up tree and the list Paths |
| 30: | Return , Tree, Paths |
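A compact sketch of the path enumeration, assuming a CFCS given as index vectors: build the one-step successor relation from outer fringes and run a DFS from the minimal to the maximal state, collecting every gradual path. This is an illustrative reading of Algorithm 4 rather than a reference implementation.

```python
def manhattan(u, v):
    return sum(abs(a - b) for a, b in zip(u, v))

def enumerate_gradual_paths(states):
    """Return all gradual paths (lists of states) from the minimal to the maximal state."""
    states = sorted(set(states))
    bottom, top = min(states), tuple(max(col) for col in zip(*states))
    # One-step edges: raise exactly one skill by one level.
    succ = {u: [v for v in states
                if all(a <= b for a, b in zip(u, v)) and manhattan(u, v) == 1]
            for u in states}
    paths, stack = [], [bottom]

    def dfs(u):
        if u == top:
            paths.append(list(stack))
            return
        for v in succ[u]:
            stack.append(v)
            dfs(v)
            stack.pop()

    dfs(bottom)
    return paths

C = [(0, 0), (0, 1), (1, 0), (1, 1), (2, 0), (2, 1)]
for p in enumerate_gradual_paths(C):
    print(p)   # 3 gradual paths from (0, 0) to (2, 1)
```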
Example 9.
Applying Algorithm 4 to the CFCS of Example 8 yields the gradual learning path tree in Figure 1
(bottom-up). A label denotes state with the original proficiency levels on skills . The bold nodes and arrows indicate the deterministic minimal chain produced by Algorithm 3, which always selects the last element of the current outer fringe. Table 14 reports all gradual learning paths from to .
Figure 1.
Gradual learning path tree (bottom-up).
Table 14.
All gradual learning paths from to .
4.2. Gradual and Effective Learning Paths
We now consider learning path recommendation when a fuzzy skill mapping is given in addition to a CFCS . By Definition 6, under the disjunctive model, each FC-state is mapped to the knowledge state , and the induced family forms the knowledge structure associated with and .
Definition 9.
Let be a CFCS and let be a fuzzy skill mapping, with the knowledge space induced by them under the disjunctive model. For any two FC-states in the space such that the first is strictly contained in the second, we say that there exists a gradual and effective learning path from the first to the second if there is a chain of FC-states in the space, starting at the first and ending at the second, in which each consecutive pair differs by raising exactly one skill by one proficiency level and the induced knowledge states along the chain are non-decreasing. That is, each step raises exactly one skill by one proficiency level while the induced knowledge state is non-decreasing.
Proposition 1 in [41] ensures monotonicity of the disjunctive problem function : whenever in , one has . Therefore, every upward edge in the competence tree (which raises exactly one skill by one proficiency level) induces a non-decreasing transition of knowledge states. Building on Algorithm 4, Algorithm 5 augments each competence state with its corresponding knowledge state , producing a bottom-up knowledge-labeled gradual learning path tree and allowing us to enumerate all gradual and effective learning paths from to .
| Algorithm 5 Annotate the gradual learning path tree with induced knowledge states and list all gradual and effective learning paths from to under the disjunctive model | |
| Input: A fuzzy competence structure and a fuzzy skill mapping . | |
| Output: A bottom-up knowledge-labeled tree where each node carries , and the full list of gradual and effective paths from to . | |
| 1: | // Build a bottom-up learning path tree and enumerate all gradual learning paths |
| 2: | // Alg4 denotes Algorithm 4 |
| 3: | // Compute induced knowledge states under the disjunctive model |
| 4: | For each node in do |
| 5: | |
| 6: | End for |
| 7: | // Create knowledge-labeled tree |
| 8: | with each node relabeled as |
| 9: | // Enumerate all gradual and effective paths |
| 10: | |
| 11: | For each competence path in do |
| 12: | If // non-decreasing knowledge states |
| 13: | Append to |
| 14: | End if |
| 15: | End for |
| 16: | Return , |
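The sketch below combines the path enumeration with a disjunctive problem function: each state on a path is annotated with its induced knowledge state, and a path is kept as effective only if the knowledge states along it are non-decreasing. The fuzzy skill mapping and level grids are illustrative assumptions; by the monotonicity noted above, the filter never discards a path under the disjunctive model, but it is stated explicitly to match Definition 9.

```python
from itertools import product

def manhattan(u, v):
    return sum(abs(a - b) for a, b in zip(u, v))

# Hypothetical data: two skills with level grids [0, 0.5, 1] and [0, 1],
# and a disjunctive fuzzy skill mapping (problem -> required level per skill).
LEVELS = [[0, 0.5, 1], [0, 1]]
TAU = {"q1": (0.5, 0), "q2": (0, 1), "q3": (1, 1)}

def knowledge_state(idx_state):
    """Problems solvable under the disjunctive model for an index-encoded FC-state."""
    values = tuple(LEVELS[i][idx_state[i]] for i in range(len(LEVELS)))
    return frozenset(q for q, req in TAU.items()
                     if any(0 < req[i] <= values[i] for i in range(len(values))))

def effective_paths(states):
    states = sorted(set(states))
    bottom, top = min(states), tuple(max(col) for col in zip(*states))
    succ = {u: [v for v in states
                if all(a <= b for a, b in zip(u, v)) and manhattan(u, v) == 1]
            for u in states}
    results, stack = [], [bottom]

    def dfs(u):
        if u == top:
            path = list(stack)
            ks = [knowledge_state(s) for s in path]
            if all(ks[i] <= ks[i + 1] for i in range(len(ks) - 1)):
                results.append(list(zip(path, ks)))
            return
        for v in succ[u]:
            stack.append(v)
            dfs(v)
            stack.pop()

    dfs(bottom)
    return results

C = list(product(range(3), range(2)))   # the full grid as the competence space
for p in effective_paths(C):
    print([(s, sorted(k)) for s, k in p])
```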
Example 10.
Consider the CFCS in Example 4 after lexicographically sorting the FC-states by skill proficiency. For convenience, the states in Table 6 are restated in Table 15. Under the fuzzy skill mapping given in Table 16, the knowledge state induced by each according to is reported in Table 17. The bottom-up gradual and effective learning path tree produced by Algorithm 5 is shown in Figure 2. The bold path in Figure 2 corresponds to the bold path in Figure 1.
Table 15.
Sorted FC-states in Example 4.
Table 16.
Fuzzy skill mapping.
Table 17.
FC-states and their induced knowledge states.
Figure 2.
Gradual learning path tree under fuzzy skill mapping (bottom-up).
Relative to Figure 1, each node additionally displays the knowledge state induced by the mapping . The bold nodes and edges form one gradual and effective learning path from to the full set corresponding to the bold path in Figure 1: at every step the FC-state increases exactly one skill by one proficiency level, and the associated knowledge state also increases (or at least does not decrease). This reflects that, as a learner follows the path and upgrades skills step by step, the learner’s knowledge state grows accordingly. Other gradual and effective paths may also be chosen, capturing the fact that different learners can reasonably progress along different improvement routes—consistent with what is observed in real learning cohorts.
5. Simulation
To evaluate the proposed algorithms, we conducted simulation experiments in three parts:
- (1)
- Constructing a minimal CFCS (Algorithm 2);
- (2)
- Reducing a CFCS (Algorithm 3);
- (3)
- Searching gradual and effective learning paths (Algorithm 5).
All experiments ran on an Intel® Core™ i7-9700 @ 3.00 GHz with 16 GB RAM under Windows 10, and the simulator was implemented in Python 3.13.2.
5.1. Simulation on Constructing Minimal CFCSs
Algorithm 2 constructed a minimal CFCS. We used the skill sets and proficiency sets from the first group of ten datasets in Ref. [41]. Basic information is summarized in Table 18.
Table 18.
Basic information of 10 datasets.
The simulation outcomes are summarized in Table 19.
Table 19.
Simulation results for Algorithm 2.
Figure 3 and Figure 4 jointly illustrate scalability and timing trends across the ten datasets. Figure 3 shows that runtime increased roughly with at this scale, while Figure 4 highlights that the minimal space grew linearly in even when the full Cartesian product grew rapidly—explaining why Algorithm 2 remained efficient despite large underlying state spaces.
Figure 3.
Runtime versus minimal consistent states count.
Figure 4.
Minimal consistent states count versus all possible states count.
Key observations from the results are as follows:
- Minimality was attained. For every dataset the number of constructed states equaled , matching the theoretical lower bound ; hence the constructed fuzzy competence spaces were both consistent and minimal.
- Favorable scalability. Although can be large (e.g., 6561 for d10), the minimal space size depends primarily on , which grows much more slowly. This property benefits downstream reduction (Algorithm 3) and path search (Algorithm 5).
- Low computational cost. All runs finished within 0.04 s per dataset; small variations stemmed from OS-level I/O and caching, not from algorithmic complexity.
5.2. Simulation on Reducing a CFCS
Algorithm 3 reduced a CFCS. We generated the full fuzzy competence space for each dataset using and from the first group of ten datasets in Ref. [41]. Basic information is listed in Table 20.
Table 20.
Basic information of 10 datasets.
The simulation outcomes are summarized in Table 21 (here “Removed (%)” denotes the percentage of states removed from the original space by the reduction).
Table 21.
Reduction results.
Figure 5 and Figure 6 together characterize Algorithm 3’s behavior: Figure 5 shows that the removed fraction increased with and rapidly approached 100% for large spaces (e.g., 99.74% for d10), because the reduced space contained only states while grew combinatorially. Figure 6 indicates that runtime increased with dataset size and was dominated by the verification stage for large .
Figure 5.
Reduction ratio by dataset.
Figure 6.
Runtime versus reduced states count.
Key observations from the results are as follows:
- Near-total pruning for large spaces. As increases, the chain size remains small relative to , so the removed percentage grows and can exceed 99% (d08–d10).
- Moderate pruning for small spaces. When is modest, a larger fraction of states may remain in the minimal chain, so the removed percentage is smaller (e.g., 44.444% for d01).
- Verification dominates runtime at scale. Consistency verification checks union-closure and outer fringes and performs a reachability test; a straightforward implementation examines all pairwise unions (worst-case quadratic in the number of states) together with the fringe checks, which explains the very large “Verify time (s)” for d10 and why it dictates total runtime in Figure 6. Engineering optimizations (memoized maxima, sparse adjacency, early stopping) can reduce this overhead without altering the reduction outcome.
5.3. Simulation on Searching Gradual and Effective Learning Paths
We implemented the simulator for Algorithm 5. Using the data in Example 10 as input, the resulting bottom-up gradual and effective learning path tree is shown in Figure 7. It matches Figure 2 but is rendered in a simplified style; since paths are bottom-up, arrowheads are omitted for readability. We then evaluated the algorithm on the first group of ten datasets from Ref. [41]; the dataset configuration is summarized in Table 22 (details follow Ref. [41]).
Figure 7.
Bottom-up gradual and effective learning path tree.
Table 22.
Basic information of 10 datasets.
Table 23 reports the simulation results of Algorithm 5. In this table, denotes the number of knowledge states jointly induced by and the fuzzy skill mapping , “nodes” and “edges” are the counts in the gradual and effective learning path tree, “paths” is the number of bottom-up gradual and effective paths from to , “verify(s)” is the time spent on consistency checking, and “total(s)” is the total runtime.
Table 23.
Simulation results for Algorithm 5.
For ease of exposition, we refer to Algorithm 5 as Algorithm A and the method in Ref. [41] as Algorithm B. For comparison, Table 24 lists the results for Algorithm B. Note that Algorithm B finds a single gradual path from to , whereas Algorithm A enumerates all such paths and represents them as a tree.
Table 24.
Simulation results from Ref. [41] (Algorithm B).
Figure 8 plots, for each dataset, the verification-time ratio, which ranges from about 3.9 (d01) to 5.11 (d04), averaging about 4.9.
Figure 8.
Verification time ratio per dataset.
Several salient observations emerge:
- For Algorithm A, as grew, the numbers of nodes and edges scaled roughly with , but the number of paths increased dramatically—from 6 (d01) to (d10). This surge reflects the combinatorial recombination of single-skill increments across levels, so the total runtime grew faster than linearly in .
- For verifying a CFCS, the empirical ratios in Figure 8 show that Algorithm A was 3.9–5.1 times faster than Algorithm B across all datasets (on average, about a 4.9-fold speedup), confirming a substantial improvement.
- Algorithm B returned one gradual path, whereas Algorithm A enumerated all gradual and effective paths and organized them as a tree, enabling comprehensive recommendation, coverage analysis, and fine-grained visualization; hence the total runtimes were not directly comparable in intent.
6. Conclusions
This paper develops a complete pipeline for constructing, verifying, reducing, and exploiting CFCSs for learning path recommendation. We first derive necessary and sufficient conditions for consistency and implement a fast verification algorithm (Algorithm 1). Building on this, we present two constructive procedures: Algorithm 2 generates a minimal CFCS directly from a given skill set and proficiency grids, and Algorithm 3 reduces any given CFCS to a minimal subspace while preserving consistency. For recommendation, Algorithm 4 enumerates all gradual paths in a competence space, and Algorithm 5 augments the path tree with knowledge states under a disjunctive fuzzy skill mapping, thereby enumerating all gradual and effective learning paths from to . Simulations across ten benchmark datasets showed that Algorithm 2 attained the theoretical lower bound () on the number of states, Algorithm 3 removed over 99% of states in large instances, and our verification runs were 3.9–5.1 times faster than a published baseline with the same inputs. These results suggest direct value for learning analytics, intelligent testing, and formative assessment. Minimal consistent spaces provide compact learner models; the knowledge-labeled path tree supports next-step guidance, multi-step planning, and coverage analysis; and the efficient verifier enables frequent reassessment as evidence accumulates. In practice, these algorithms can be packaged as a reusable library that exposes stable APIs for the verifier, the constructor/reducer, and the path engine, allowing developers of learning assessment systems and intelligent testing systems to invoke them directly. In adaptive learning environments, the path engine delivers next-step guidance, and the verifier supports frequent reassessment as evidence accumulates. Figure 9 summarizes the overall CFCS research framework, covering foundations, algorithms, simulations, and future research directions.
Figure 9.
CFCS research framework.
Looking forward, several directions appear promising.
- (1)
- While this work constructs the minimal consistent space, other non-minimal yet structured consistent spaces (e.g., spaces optimized for coverage balance, robustness, or prerequisite density) merit study, together with algorithms that trade off compactness against expressiveness.
- (2)
- Building on the enumeration of all gradual and effective paths, future work can investigate path ranking, pruning, and personalization (e.g., by cost, time, difficulty, or learning gains), as well as classroom-level aggregation to design cohort-aware interventions.
- (3)
- On the modeling side, extending from disjunctive to conjunctive or mixed fuzzy skill mappings, handling noisy or learned mappings, and updating spaces online as new data arrive would broaden practical use.
- (4)
- On scalability, exploiting sparsity, incremental verification, and parallel path graph generation should further reduce time and memory footprints, enabling very large skill and problem domains and real-time integration with adaptive testing engines.
- (5)
- For practical deployment, we plan to package the CFCS verifier, constructor/reducer, and path engine as a reusable library with stable APIs and to build a CFCS-based learning diagnosis system. We will then conduct empirical studies in real instructional settings to evaluate effectiveness, scalability, and integration with adaptive learning workflows.
Author Contributions
Writing—original draft preparation, R.W., B.H. and J.L.; writing—review and editing, R.W., B.H. and J.L.; visualization, R.W. All authors have read and agreed to the published version of the manuscript.
Funding
This research was supported by the Fujian Provincial Natural Science Foundation of China (Grant No. 2025J01356).
Data Availability Statement
Dataset descriptions are provided in the manuscript; additional information can be obtained from the first author upon reasonable request.
Acknowledgments
We are grateful to the anonymous reviewers and the editorial team for their careful evaluation and constructive feedback.
Conflicts of Interest
The authors declare no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| KST | Knowledge space theory |
| FC-state | Fuzzy competence state |
| CbKST | Competence-based knowledge space theory |
| CFCS | Consistent fuzzy competence space |
| DFS | Depth-first search |
References
- Chen, X.L.; Zou, D.; Xie, H.R.; Cheng, G.; Liu, C.X. Two Decades of Artificial Intelligence in Education: Contributors, Collaborations, Research Topics, Challenges, and Future Directions. Educ. Technol. Soc. 2022, 25, 28–47. Available online: https://doaj.org/article/f8b4d7534bfc42a4818ca1d6bf7e4d4b (accessed on 1 September 2025).
- Forero-Corba, W.; Bennasar, F.N. Techniques and applications of Machine Learning and Artificial Intelligence in education: A systematic review. RIED-Rev. Iberoam. Educ. Distancia 2024, 27, 37491. [Google Scholar] [CrossRef]
- Chen, L.J.; Chen, P.P.; Lin, Z.J. Artificial Intelligence in Education: A Review. IEEE Access 2020, 8, 75264–75278. [Google Scholar] [CrossRef]
- Kamalov, F.; Santandreu Calonge, D.; Gurrib, I. New Era of Artificial Intelligence in Education: Towards a Sustainable Multifaceted Revolution. Sustainability 2023, 15, 12451. [Google Scholar] [CrossRef]
- Guo, S.C.; Zheng, Y.Y.; Zhai, X.M. Artificial intelligence in education research during 2013-2023: A review based on bibliometric analysis. Educ. Inf. Technol. 2024, 29, 16387–16409. [Google Scholar] [CrossRef]
- Ma, S.Y.; Lei, L. The factors influencing teacher education students’ willingness to adopt artificial intelligence technology for information-based teaching. Asia Pac. J. Educ. 2024, 44, 94–111. [Google Scholar] [CrossRef]
- Kurilovas, E.; Zilinskiene, I.; Dagiene, V. Recommending suitable learning scenarios according to learners’ preferences: An improved swarm based approach. Comput. Hum. Behav. 2014, 30, 550–557. [Google Scholar] [CrossRef]
- Nigenda, R.S.; Padrón, C.M.; Martínez-Salazar, I.; Torres-Guerrero, F. Design and evaluation of planning and mathematical models for generating learning paths. Comput. Intell. 2018, 34, 821–838. [Google Scholar] [CrossRef]
- Zhang, F.; Feng, X.; Wang, Y. Personalized process–type learning path recommendation based on process mining and deep knowledge tracing. Knowl.-Based Syst. 2024, 303, 112431. [Google Scholar] [CrossRef]
- Rahayu, N.W.; Ferdiana, R.; Kusumawardani, S.S. A systematic review of learning path recommender systems. Educ. Inf. Technol. 2023, 28, 7437–7460. [Google Scholar] [CrossRef]
- Şenel, G. An Innovative Algorithm Based on Octahedron Sets via Multi-Criteria Decision Making. Symmetry 2024, 16, 1107. [Google Scholar] [CrossRef]
- Wang, P.Q. An optimisation and application of artificial intelligence models based on deep learning in personalised recommendation of English education. Int. J. Comput. Appl. Technol. 2024, 75, 96–102. [Google Scholar] [CrossRef]
- Li, H.; Gong, R.R.; Wang, C.X.; Xu, B.S.; Zhong, Z.M.; Li, H.N. Research on Dynamic Learning Path Recommendation Based on Social Networks. IEEE Trans. Consum. Electron. 2024, 70, 5903–5910. [Google Scholar] [CrossRef]
- Ma, D.L.S.; Zhu, H.P.; Liao, S.J.; Chen, Y.; Liu, J.; Tian, F.; Chen, P. Learning path recommendation with multi-behavior user modeling and cascading deep Q networks. Knowl.-Based Syst. 2024, 294, 111743. [Google Scholar] [CrossRef]
- Mu, M.N.; Yuan, M. Research on a personalized learning path recommendation system based on cognitive graph with a cognitive graph. Interact. Learn. Environ. 2024, 32, 4237–4255. [Google Scholar] [CrossRef]
- Gu, R.L. Personalized learning path based on graph attention mechanism deep reinforcement learning research on recommender systems. J. Comput. Methods Sci. Eng. 2025, 25, 2411–2426. [Google Scholar] [CrossRef]
- Luo, G.Q.; Gu, H.N.; Dong, X.X.; Zhou, D.D. HA-LPR: A highly adaptive learning path recommendation. Educ. Inf. Technol. 2025, 30, 14597–14627. [Google Scholar] [CrossRef]
- Hou, B.; Lin, Y.; Li, Y.; Fang, C.; Li, C.; Wang, X. KG-PLPPM: A Knowledge Graph-Based Personal Learning Path Planning Method Used in Online Learning. Electronics 2025, 14, 255. [Google Scholar] [CrossRef]
- Zhao, W.; Xu, Z.; Qiu, L. BPSKT: Knowledge Tracing with Bidirectional Encoder Representation Model Pre-Training and Sparse Attention. Electronics 2025, 14, 458. [Google Scholar] [CrossRef]
- Falmagne, J.C. Random conjoint measurement and loudness summation. Psychol. Rev. 1976, 83, 65–79. [Google Scholar] [CrossRef]
- Doignon, J.-P.; Falmagne, J.-C. Spaces for the assessment of knowledge. Int. J. Man-Mach. Stud. 1985, 23, 175–196. [Google Scholar] [CrossRef]
- Falmagne, J.-C.; Koppen, P.; Villano, M.; Doignon, J.-P. Introduction to knowledge spaces: How to build, test, and search them. Psychol. Rev. 1990, 97, 201–224. [Google Scholar] [CrossRef]
- Falmagne, J.C.; Doignon, J.P. A class of stochastic procedures for the assessment of knowledge. Br. J. Math. Stat. Psychol. 1988, 41, 1–23. [Google Scholar] [CrossRef]
- Schrepp, M. A generalization of knowledge space theory to problems with more than two answer alternatives. J. Math. Psychol. 1997, 41, 237–243. [Google Scholar] [CrossRef]
- Stefanutti, L.; de Chiusole, D. On the assessment of learning in competence based knowledge space theory. J. Math. Psychol. 2017, 80, 22–32. [Google Scholar] [CrossRef]
- de Chiusole, D.; Stefanutti, L.; Anselmi, P.; Robusto, E. Stat-Knowlab. Assessment and learning of statistics with competence-based knowledge space theory. Int. J. Artif. Intell. Educ. 2020, 30, 668–700. [Google Scholar] [CrossRef]
- Spoto, A.; Stefanutti, L. Empirical indistinguishability: From the knowledge structure to the skills. Br. J. Math. Stat. Psychol. 2023, 76, 312–326. [Google Scholar] [CrossRef] [PubMed]
- Anselmi, P.; de Chiusole, D.; Stefanutti, L. Constructing tests for skill assessment with competence-based test development. Br. J. Math. Stat. Psychol. 2024, 77, 429–458. [Google Scholar] [CrossRef] [PubMed]
- Stefanutti, L.; Anselmi, P.; de Chiusole, D.; Spoto, A. On the polytomous generalization of knowledge space theory. J. Math. Psychol. 2020, 94, 102306. [Google Scholar] [CrossRef]
- Heller, J. Generalizing quasi-ordinal knowledge spaces to polytomous items. J. Math. Psychol. 2021, 101, 102515. [Google Scholar] [CrossRef]
- Sun, W.; Li, J.J.; Lin, F.C.; He, Z.R. Constructing polytomous knowledge structures from fuzzy skills. Fuzzy Sets Syst. 2023, 461, 108395. [Google Scholar] [CrossRef]
- Wang, R.H.; Xie, X.J.; Zhi, H.L. Bi-Fuzzy S-Approximation Spaces. Mathematics 2025, 13, 324. [Google Scholar] [CrossRef]
- Wang, G.X.; Li, J.J.; Xu, B.C. Constructing polytomous knowledge structures from L-fuzzy S-approximation operators. Int. J. Approx. Reason. 2025, 179, 109363. [Google Scholar] [CrossRef]
- Heller, J.; Stefanutti, L.; Anselmi, P.; Robusto, E. On the Link between Cognitive Diagnostic Models and Knowledge Space Theory. Psychometrika 2015, 80, 995–1019. [Google Scholar] [CrossRef]
- Liu, G.L. Rough set approaches in knowledge structures. Int. J. Approx. Reason. 2021, 138, 78–88. [Google Scholar] [CrossRef]
- Xie, X.X.; Xu, W.H.; Li, J.J. A novel concept-cognitive learning method: A perspective from competences. Knowl.-Based Syst. 2023, 265, 110382. [Google Scholar] [CrossRef]
- Xu, B.C.; Li, J.J. The inclusion degrees of fuzzy skill maps and knowledge structures. Fuzzy Sets Syst. 2023, 465, 108540. [Google Scholar] [CrossRef]
- Zhou, Y.F.; Yang, H.L.; Li, J.J.; Wang, D.L. Skill assessment method: A perspective from concept-cognitive learning. Fuzzy Sets Syst. 2025, 508, 109331. [Google Scholar] [CrossRef]
- Zhou, Y.F.; Li, J.J.; Wang, H.K.; Sun, W. Skills and fuzzy knowledge structures. J. Intell. Fuzzy Syst. 2022, 42, 2629–2645. [Google Scholar] [CrossRef]
- Zhou, Y.F.; Li, J.J.; Yang, H.L.; Xu, Q.Y.; Yang, T.L.; Feng, D.L. Knowledge structures construction and learning paths recommendation based on formal contexts. Int. J. Mach. Learn. Cybern. 2024, 15, 1605–1620. [Google Scholar] [CrossRef]
- Wang, R.; Huang, B.; Li, J. A Learning Path Recommendation Approach in a Fuzzy Competence Space. Axioms 2025, 14, 396. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).