1. Introduction
Recommender systems (RS) are ubiquitous on today’s digital platforms, where they guide users toward personalized purchasing, learning, traveling, and entertainment choices [1,2,3,4,5,6]. By filtering large volumes of data, RS reduce information overload and improve decision-making, which in turn boosts user satisfaction and engagement. Despite this success, sustaining long-term use remains a major challenge, particularly in contexts where user motivation and persistence are critical [7,8]. Gamification elements also improve motivation, engagement, and behavior outside of games [
9,
10,
11,
12,
13,
14]. Gamification studies report that game elements improve trust, satisfaction, and system use in e-commerce, workplace technology, and healthcare, as well as learning performance and engagement in education [
13,
15,
16,
17,
18,
19,
20,
21]. Gamified recommender systems (GRS) improve user adoption and user experience by combining personalization with incentive design [
22,
23,
24,
25,
26,
27,
28]. GRS has been proposed for career promotion [
29], tourism [
22], adaptive learning [
20,
25], and consumer engagement [
24]. Prior systematic reviews emphasize that explanations in recommender systems should not only enhance decision effectiveness and transparency, but also build trust, increase satisfaction, ensure usefulness and ease of use, improve efficiency, support persuasion, and even enable educational benefits [
29]. These dimensions provide a comprehensive basis for evaluating the quality and impact of gamified recommender designs [
16]. Recent reviews confirm their growing relevance and potential [
30]. When designed well, gamification boosts user motivation and engagement [
31].
However, most existing studies on GRS remain limited to conceptual, domain-specific prototypes or narrative reviews [
6,
7,
30,
32,
33,
34]. Few works have examined the causal interrelationships among gamification elements that influence engagement and adoption. Prior research often employs descriptive or comparative approaches [
10,
11,
24,
32] with limited application of multiple-criteria decision-making (MCDM) techniques [
34,
35,
36]. Prior reviews addressed the absence of analytical methods to inspect causal interrelations among gamified design frameworks [
37]. Several multi-criteria decision-making (MCDM) techniques—such as the Analytic Hierarchy Process (AHP) [
38], Analytic Network Process (ANP) [
39], Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) [
40], and Decision-Making Trial and Evaluation Laboratory (DEMATEL) [
41]—have been applied to gamification evaluation and system design. Recent studies have emphasized the growing role of intelligent and hybrid decision-making approaches in evaluating user-centric systems and complex learning environments. For instance, one study showed that hybrid MCDM approaches such as spherical fuzzy DEMATEL, ANP, and VIKOR (VIšekriterijumsko KOmpromisno Rangiranje) can improve the modeling of interdependencies among evaluation criteria under uncertainty, which is also helpful for analyzing complex relationships among gamification elements in recommender system design [
42]. Researchers [
43] examined decision-making frameworks in engineering that utilize artificial intelligence, focusing on how multi-criteria evaluation can facilitate the creation of adaptive learning environments. These developments align with the current study’s objective of employing fuzzy-based causal modeling to examine gamified recommender systems. Methods such as AHP, ANP, and TOPSIS have been used to evaluate gamification, but they chiefly support initial prioritization rather than the mapping of cause-and-effect connections. Gabus and Fontela developed DEMATEL to demonstrate and quantify complex cause-and-effect relationships between system variables [
41]. This helps decision makers understand how different factors influence one another when making decisions about socio-technical systems [
44]. DEMATEL has since been advanced through the incorporation of fuzzy set theory, hybrid extensions such as gray DEMATEL and rough DEMATEL, and combinations with other MCDM methods such as AHP and TOPSIS [
45,
46,
47,
48,
49].
These extensions have substantially improved its ability to model uncertainty and to capture both direct and indirect effects among evaluation criteria, making fuzzy DEMATEL particularly well suited to systems design and behavioral assessment. In addition, the fuzzy DEMATEL approach has proven effective in modeling interdependencies in fields such as education [
50], retail [
21], and gamification research [
24], yet its application to GRS design evaluation remains scarce. Similarly, one study proposed a personalized gamified learning recommender system that dynamically adapts game elements to learner profiles [
51], yet such systems rarely use DEMATEL to examine the causal relationships between gamification factors that drive engagement and adoption. This methodological gap restricts the ability to identify which gamification elements act as key drivers and how they interact to shape user outcomes.
To bridge this gap, this study introduces a fuzzy DEMATEL-based evaluation framework for gamified recommender systems. Three layers of novelty are offered. First, four JavaScript React-based prototypes were developed—Points, Badges, and Leaderboards (PBL); Acknowledgments, Objectives, and Progression (AOP); Acknowledgments, Competition, and Time Pressure (ACT); and Acknowledgments, Objectives, and Social Pressure (AOS)—since no readily available systems exist for these theoretically grounded designs [
24,
26,
37,
51]. Second, nine user-centric evaluation criteria—Effectiveness, Transparency, Persuasiveness, User Satisfaction, Trust, Usefulness, Ease of Use, Efficiency, and Education—were systematically derived through a PRISMA-guided review of 823 studies, refined to 19 relevant works published between 2013 and 2023. These criteria are widely supported in the prior literature as central to gamification and recommender system evaluation [
13,
15,
17,
18,
19,
20]. Third, fuzzy DEMATEL was applied to capture the causal structure among the criteria, distinguishing driving from dependent factors and thereby revealing the most influential determinants of adoption. Other studies have used DEMATEL to map causal links in sustainability performance frameworks, underscoring its suitability for gamified recommender system evaluation [
52].
The significance of this study lies in its integration of theoretical, methodological, and practical contributions. Theoretically, it advances gamification research by operationalizing four distinct GRS prototypes rather than remaining at the level of conceptual discussion. Methodologically, it combines a systematic literature review with fuzzy DEMATEL analysis to model complex interdependencies and provide a reproducible evaluation methodology. Practically, the findings suggest design priorities for adaptive, user-centered gamified systems, foremost among them usefulness, ease of use, and trust. These insights can benefit domains such as education, e-commerce, and healthcare by fostering adoption, satisfaction, and sustained use.
2. Related Research
Gamification is the use of game-like elements in non-game contexts. It is often used to increase motivation, involvement, and behavioral adjustment [
9,
10,
11,
12,
13,
14,
32]. Its positive effects on learning outcomes, involvement, and user satisfaction in education highlight its importance in boosting effectiveness and educational value [
13,
15,
17,
18,
19,
20]. Systematic evaluations show that gamification can boost motivation and performance, provided that game mechanisms are designed to provide clear feedback and progression routes [
10,
15,
32]. Gamification in workplace technology, retail, and tourism has shown persuasive and trust-enhancing effects on system adoption and use [
16,
21,
22].
Recommender systems (RS) have likewise become central to digital platforms because they personalize content and mitigate information overload [
1,
2,
3,
4,
6,
8]. Collaborative filtering, content-based filtering, and hybrid models have been widely employed in diverse fields such as e-commerce, education, entertainment, and healthcare [
2,
5,
6,
7]. Recent research has examined RS via ethical, explanatory, and cognitive perspectives, pinpointing transparency, trust, and efficiency as essential determinants for enduring adoption [
1,
2,
4]. Even with these improvements, sustaining user engagement and balancing personalization against user autonomy remain open challenges.
Gamified recommender systems (GRS) combine personalization with motivational design to encourage broader adoption and sustained use. Proposed applications encompass career advancement systems [
29], tourism [
22], customized learning trajectories [
17,
25], and entertainment [
26,
27]. For example, researchers in [
53] developed a gamified word-of-mouth RS to foster customer engagement [
24], while researchers in [
30] provided a systematic overview of GRS design trends and applications. To structure such designs, prior works have emphasized different gamification strategies: Points, Badges, and Leaderboards (PBL) as classic extrinsic motivators [
24,
37]; Acknowledgments, Objectives, and Progression (AOP) for transparent progression and competence-based feedback [
26]; Acknowledgments, Competition, and Time Pressure (ACT) for efficiency and engagement under competitive conditions [
24]; and Acknowledgments, Objectives, and Social Pressure (AOS) for leveraging social accountability and persuasiveness [
51]. However, such comprehensive gamified recommender system designs are not readily available in practice, as most existing works remain limited to conceptual discussions or isolated case studies. Developing functional prototypes therefore represents a novel contribution of this study.
To provide more systematic evaluations, researchers have applied multiple-criteria decision-making (MCDM) techniques such as AHP, ANP, and TOPSIS [
24,
34,
35,
36,
37]. Although these methods facilitate prioritization, they seldom account for the causal interdependencies among evaluation criteria, including trust, ease of use, and efficiency. While traditional multi-criteria methods such as AHP, ANP, TOPSIS, and DEMATEL have been extensively applied, more recent studies advocate integrating sustainability-oriented and knowledge-driven decision frameworks. A study improved DEMATEL by using a threshold-based approach to determine the most influential aspects, which helps recommender system evaluation prioritize key gamification criteria [
54].
Building on this perspective, the current research adopts a fuzzy DEMATEL framework to uncover the interdependencies among gamification criteria, extending the methodological line of recent MCDM and AI-enhanced decision studies [
1,
2,
3]. The Decision-Making Trial and Evaluation Laboratory (DEMATEL), particularly in its fuzzy extension, has proven effective in modeling such cause–and–effect structures [
55,
56,
57,
58,
59]. It has been successfully applied in higher education [
49], retail [
21], and environmental management [
44], but its use in GRS design remains limited. Most existing works either list gamification features or apply comparative frameworks without addressing how these nine critical dimensions interact with specific design strategies (PBL, AOP, ACT, AOS) to shape user experience and adoption [
10,
11,
32].
This review reveals two evident gaps. First, comprehensive GRS frameworks are scarce, and most research rests on individual case studies. Second, although MCDM approaches have been employed, there has been insufficient emphasis on determining which criteria—such as usefulness, satisfaction, or trust—act as drivers and which function as dependent outcomes. This study introduces a fuzzy DEMATEL-based evaluation framework designed to address these deficiencies in gamified recommender systems. The contributions include the development of four prototypes utilizing JavaScript React (PBL, AOP, ACT, AOS), informed by the existing literature [
23,
24,
26,
37], the systematic identification of nine user-centric criteria through a PRISMA-guided review, and the application of fuzzy DEMATEL to elucidate causal interrelationships. Together, these contributions establish a robust, reproducible framework for designing user-centered gamified recommender systems applicable to education, e-commerce, and healthcare.
This study aims to identify the principal user-centric factors that facilitate the adoption of gamified recommender systems.
RQ1: What are the causal relationships among main user-centric criteria?
RQ2: What elements act as principal catalysts for the implementation of gamified recommender systems?
RQ3: In what ways do various gamified designs (PBL, AOP, ACT, AOS) affect these relationships?
3. Materials and Methods
This study followed a systematic multi-step methodology to evaluate gamified recommender system (GRS) designs using fuzzy DEMATEL. The approach combined a systematic literature review for criteria identification, prototype development for alternative designs, decision makers’ elicitation for evaluation, and fuzzy DEMATEL analysis for cause–effect modeling.
3.1. Research Design
A multi-criteria decision-making (MCDM) framework was adopted, as such an approach provides structured tools for evaluating complex socio-technical systems with multiple interacting factors. The fuzzy Decision-Making Trial and Evaluation Laboratory (fuzzy DEMATEL) was selected due to its ability to model interdependencies among criteria and distinguish between driving and dependent factors.
3.2. Identification of Criteria
To evaluate gamified recommender systems (GRS), this study adopted a systematic literature review approach guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework [
60]. A total of 823 records were initially identified from IEEE Xplore, ScienceDirect, Scopus, and Web of Science. After applying inclusion and exclusion criteria, duplicate removal, and language filtering, 19 primary studies published between 2013 and 2023 were retained for full-text analysis.
From these 19 studies, recurrent characteristics of user experience and system quality were extracted through a systematic coding process targeting user engagement, system adoption, and gamification outcomes. The analysis identified nine user-focused evaluation criteria: Effectiveness, Transparency, Persuasiveness, User Satisfaction, Trust, Usefulness, Ease of Use, Efficiency, and Education. Each criterion is grounded in empirical research. Effectiveness was derived from studies examining engagement with gamification features and personalization [
26,
35,
53,
61,
62]. Transparency emerged from studies focusing on the clarity and interpretability of gamified recommendation processes [
23,
25,
27,
63,
64].
Persuasiveness is grounded in research on motivational and behavioral change mechanisms [16,29,65,66,67].
User satisfaction was identified in studies of feedback dashboards and educational systems [17,68].
Usefulness and ease of use were identified in gamification frameworks for large-scale, user-friendly system designs [68].
Efficiency was supported by studies emphasizing algorithmic accuracy and learning optimization [
17,
22].
Education was confirmed by studies of gamification in MOOCs and e-learning contexts [
67].
The final selection of criteria thus represents a comprehensive synthesis of recurring evaluation dimensions across multiple contexts, including education, tourism, healthcare, and smart cities. These nine criteria were subsequently employed in the fuzzy DEMATEL framework to capture causal interrelationships and identify the most influential dimensions in the evaluation of gamified recommender systems.
3.3. Developing Four Gamified Recommender System Designs
From the prior literature on gamification design frameworks [
23,
24,
26], four gamified recommender system designs were conceptualized, each combining different game mechanics to personalize gamification based on empirical data from learners’ real experiences. The designs draw on core motivational constructs such as competence, autonomy, relatedness, and social influence to form a theory-based framework for customizing gamified learning environments. Each combination of elements aligns with established motivational constructs and improves personalization by connecting gamification design to learners’ intrinsic motivation and contextual characteristics [
26]. English language learning was chosen as the focus area because of its worldwide demand and its explicit, quantifiable goals, such as vocabulary acquisition, grammatical proficiency, and reading comprehension. Learners often struggle to stay motivated and to practice regularly, making this setting well suited to gamification. Recommender systems tailor content to individual learner requirements, while gamification enhances motivation, feedback, and perseverance. The domain is therefore both practically relevant and well aligned with the nine evaluation criteria used in this study. The prototypes were implemented in JavaScript using React. The four gamified recommender system designs are as follows (see
Appendix A for full design details):
Design 1: Points, Badges, and Leaderboards (PBL)—based on Game Design Theory, specifically traditional gamification mechanics—has demonstrated the ability to enhance extrinsic motivation (see
Figure 1).
Design 2: Acknowledgments, Objectives, and Progression (AOP) places a strong emphasis on Self-Determination Theory (SDT) by recognizing achievement and setting goals (
Figure 2).
Design 3: Acknowledgments, Objectives, and Social Pressure (AOS)—builds on Social Influence Theory, highlighting peer comparison and social accountability (see
Figure 3).
Design 4: Acknowledgments, Competition, and Time Pressure (ACT) is based on Behaviorist Learning Theory, emphasizing that reinforcement and external stimuli, such as competition and urgency, influence behavior (
Figure 4).
The designs represent distinct motivational pathways and serve as alternative strategies for integrating gamification into recommender systems.
Table 1 presents a comparison of the features of the four gamified recommender system designs.
3.4. Data Collection
3.4.1. Participants
A group of 25 undergraduate students served as decision makers. All had previously used the four gamified recommender systems in their classes and projects. Undergraduate participants were selected not as professional practitioners but because they represent the end-users of educational recommender systems, in line with the study’s context. The demographic profile of the 25 participants combines diversity with relevance to the study context. Sixteen participants were male (64%) and nine were female (36%). By academic background, 14 participants (56%) studied Computer Information Systems and 11 (44%) studied Management Information Systems. Ten participants (40%) reported using recommender systems very often, seven (28%) often, five (20%) occasionally, and three (12%) rarely. Platform experience also varied: seven participants (28%) had used Coursera and Khan Academy, six (24%) Netflix and Amazon Prime Video, five (20%) Spotify, and four (16%) Duolingo for language learning, while smaller groups had used Strava (8%) and Fitbit (4%) for fitness goals.
This distribution highlights both the diversity of systems experienced by the participants and their practical awareness of recommender technologies, ensuring that their evaluations provide comprehensive and relevant insights for assessing gamified recommender systems.
3.4.2. Procedure
After ethical approval was granted by the researchers’ university ethical board on 25 April 2024 with the application number NEU/AS/2024/214, the decision makers tried four gamified recommender system designs throughout the coursework. They were presented with an evaluation survey for each gamified recommender system design. The decision makers provided pairwise comparisons of nine criteria using a linguistic questionnaire for each GRS design (e.g., “no influence,” “low influence,” “high influence”). Responses were aggregated into fuzzy direct-influence matrices using the geometric mean method to obtain the final cause–effect diagrams for each design.
3.5. Fuzzy DEMATEL Method
Let $C = \{C_1, C_2, \ldots, C_9\}$ represent the evaluation criteria and $A = \{A_1, A_2, A_3, A_4\}$ the design alternatives.
Step 1—Define Criteria and Alternatives
Identify the nine evaluation criteria and the four designs (AOP, ACT, AOS, PBL).
Step 2—Collect Expert Evaluations
Experts provide pairwise influence judgments $\tilde{z}_{ij}^{(k)}$ using linguistic terms mapped to triangular fuzzy numbers (TFNs) $(l, m, u)$. For each decision maker $k = 1, 2, \ldots, 25$:

$\tilde{Z}^{(k)} = \left[ \tilde{z}_{ij}^{(k)} \right]_{n \times n}, \qquad \tilde{z}_{ii}^{(k)} = (0, 0, 0). \qquad (1)$
Step 3—Aggregate Expert Opinions (Geometric Mean)

$\tilde{z}_{ij} = \left( \prod_{k=1}^{K} \tilde{z}_{ij}^{(k)} \right)^{1/K}, \qquad K = 25, \qquad (2)$

applied component-wise to the lower, middle, and upper values of each TFN.
Step 4—Defuzzification (Centroid Method)

$x_{ij} = \dfrac{l_{ij} + m_{ij} + u_{ij}}{3} \qquad (3)$

This produces the crisp direct-relation matrix $X = [x_{ij}]_{n \times n}$.
Step 5—Normalize the Direct-Relation Matrix

$N = \dfrac{X}{\max\left( \max_{i} \sum_{j=1}^{n} x_{ij}, \; \max_{j} \sum_{i=1}^{n} x_{ij} \right)} \qquad (4)$
Step 6—Compute the Total-Relation Matrix

$T = N (I - N)^{-1} \qquad (5)$

Normalization guarantees $\rho(N) < 1$, so the series $N + N^{2} + N^{3} + \cdots$ converges ($\lim_{k \to \infty} N^{k} = 0$).
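The convergence behind the total-relation computation is the standard Neumann-series identity (written out here as our own expansion, consistent with Equation (5)):

```latex
T = \lim_{k\to\infty}\left(N + N^{2} + \cdots + N^{k}\right)
  = \sum_{k=1}^{\infty} N^{k}
  = N\,(I - N)^{-1},
\qquad \text{since } \rho(N) < 1 \implies \lim_{k\to\infty} N^{k} = 0 .
```

This expansion makes explicit that $T$ accumulates the direct influences in $N$ together with all indirect influence chains $N^{2}, N^{3}, \ldots$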
Step 7—Determine the Threshold Value

$\theta = \dfrac{1}{n(n-1)} \sum_{i \neq j} t_{ij} \qquad (6)$
Step 8—Cause–Effect Analysis

Compute row and column sums:

$D_{i} = \sum_{j=1}^{n} t_{ij}, \qquad R_{i} = \sum_{j=1}^{n} t_{ji} \qquad (7)$

Then calculate prominence and relation:

$P_{i} = D_{i} + R_{i}, \qquad E_{i} = D_{i} - R_{i} \qquad (8)$

Criteria with $D_{i} - R_{i} > 0$ belong to the cause group, and those with $D_{i} - R_{i} < 0$ to the effect group. The pairs $(D_{i} + R_{i}, D_{i} - R_{i})$ are plotted to generate the cause–effect diagram.
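The eight steps above can be sketched end-to-end in code. The following Python sketch (using NumPy) is an illustrative reconstruction, not the authors' implementation: the TFN values stand in for Table 2, and all function names are our own.

```python
import numpy as np

# Linguistic terms mapped to triangular fuzzy numbers (l, m, u).
# These particular TFN values are assumptions standing in for Table 2.
TFN = {
    "NO": (0.00, 0.00, 0.25),
    "VL": (0.00, 0.25, 0.50),
    "L":  (0.25, 0.50, 0.75),
    "H":  (0.50, 0.75, 1.00),
    "VH": (0.75, 1.00, 1.00),
}

def aggregate_geomean(fuzzy_mats):
    """Step 3: component-wise geometric mean of K expert TFN matrices.

    fuzzy_mats has shape (K, n, n, 3), one (l, m, u) triple per cell.
    """
    mats = np.asarray(fuzzy_mats, dtype=float)
    return np.prod(mats, axis=0) ** (1.0 / mats.shape[0])

def defuzzify_centroid(fuzzy_mat):
    """Step 4: centroid defuzzification, (l + m + u) / 3."""
    return np.asarray(fuzzy_mat, dtype=float).mean(axis=-1)

def dematel(X):
    """Steps 5-8 on a crisp direct-relation matrix X (n x n)."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    # Step 5: normalize by the largest row/column sum (Equation (4)).
    s = max(X.sum(axis=1).max(), X.sum(axis=0).max())
    N = X / s
    # Step 6: total-relation matrix T = N (I - N)^{-1} (Equation (5)).
    T = N @ np.linalg.inv(np.eye(n) - N)
    # Step 7: threshold = mean of the off-diagonal entries (Equation (6)).
    theta = T[~np.eye(n, dtype=bool)].mean()
    # Step 8: row sums D, column sums R; prominence D + R, relation D - R.
    D, R = T.sum(axis=1), T.sum(axis=0)
    return T, theta, D + R, D - R
```

Criteria with positive relation values fall in the cause group, and only links whose total-relation value reaches the threshold are drawn in the network relation map.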
4. Results
Following the fuzzy DEMATEL procedure described in
Section 3.5, the analysis for each gamified recommender design—Points, Badges, and Leaderboards (PBL); Acknowledgments, Objectives, and Progression (AOP); Acknowledgments, Competition, and Time Pressure (ACT); and Acknowledgments, Objectives, and Social Pressure (AOS)—was carried out in eight computational steps aligned with Equations (1)–(8).
Step 1–2. Expert Evaluations and Linguistic Judgments
The experts rated each of the four gamified recommender designs using the same procedure: having previously tested each system, they compared the pairwise influence of the nine criteria (Effectiveness, Transparency, Persuasiveness, User Satisfaction, Trust, Usefulness, Ease of Use, Efficiency, and Education) using the linguistic scale No influence (NO), Very Low influence (VL), Low influence (L), High influence (H), and Very High influence (VH), given in
Table 2). Sample evaluation questions for the nine criteria are given below:
C1: Effectiveness: is the Gamified Recommendation System effective in achieving its intended aims?
C2: Transparency: are Gamified Recommendation Systems visible and open in the way they work?
C3: Persuasiveness: does the Gamified Recommendation System influence and motivate users?
C4: User Satisfaction: are you satisfied with the performance of the Gamified Recommendation System?
C5: Trust: is the Gamified Recommendation System stable and consistent?
C6: Usefulness: is the Gamified Recommendation System a helpful system?
C7: Ease of use: is the Gamified Recommendation System easy to use?
C8: Efficiency: does the Gamified Recommender System effectively use resources to achieve its purpose?
C9: Education: does the Gamified Recommendation System foster successful learning outcomes?
Twenty-five decision makers (DMs) assessed the pairwise causal influence of each criterion $C_i$ on $C_j$ using the five-term linguistic scale No, Very Low, Low, High, and Very High, represented by the triangular fuzzy numbers (TFNs) shown in
Table 2, which defines the linguistic-to-TFN mapping applied in all subsequent analyses. This linguistic fuzzy scale was used to construct the individual $\tilde{Z}^{(k)}$ matrices.
Each expert’s judgments generated a fuzzy direct-relation matrix $\tilde{Z}^{(k)}$ as described in Equation (1), with the diagonal elements fixed to (0, 0, 0).
Because all designs share the same evaluation criteria, four parallel sets of matrices were created—one for each gamified design.
Step 3. Aggregation of Expert Opinions
The twenty-five individual matrices were aggregated through the geometric-mean operator (Equation (2)) to obtain the group fuzzy direct-relation matrix $\tilde{Z}$.
After defuzzification via the centroid method (Equation (3)), the crisp direct-relation matrices were generated for each design.
This step yields the mean causal influence of each criterion across all decision makers.
Step 4. Normalization of the Direct-Relation Matrix
To guarantee comparability and convergence, each matrix $X$ was normalized according to Equation (4), resulting in the normalized direct-relation matrices $N$. These normalized matrices serve as the input for the total-relation computation.
Step 5. Computation of the Total-Relation Matrix
Utilizing Equation (5), $T = N(I - N)^{-1}$, each normalized matrix was expanded to encompass both direct and indirect influences, yielding the fuzzy total-relation matrices. After defuzzification, the corresponding crisp total-relation matrices were generated.
Step 6. Threshold Determination and Network Relation Map (NRM)
For each design, the average of all off-diagonal values was used as the threshold (Equation (6)).
Relations below the threshold $\theta$ were omitted to emphasize significant causal influences.
The thresholded total relation matrices for each design were calculated in step 6. The corresponding network relation maps (NRMs), which visually depict causal linkages among criteria, are illustrated in
Figure 5 (PBL),
Figure 6 (AOP),
Figure 7 (AOS), and
Figure 8 (ACT).
Step 7. Cause–Effect Computation
The sums of the rows and columns of $T$ provided $D$ and $R$ (Equation (7)). Prominence ($D + R$) and relation ($D - R$) were subsequently derived (Equation (8)).
The final output matrices were computed from these values.
Step 8. Comparative and Interpretive Analysis
Positive $D - R$ values indicate cause criteria, while negative values denote effect criteria.
In all four designs, Usefulness (C6) and Ease of Use (C7) consistently emerge as the primary causal factors influencing system behavior, while User Satisfaction (C4) and Education (C9) serve as outcome criteria.
Among the designs, AOP exhibits the strongest overall interdependence, suggesting that acknowledgment and progression mechanisms amplify systemic influence propagation, while ACT shows moderate causal density governed by competition and time pressure components.
4.1. Results for Design 1: Points, Badges and Leaderboards (PBL)
Table 3, below, indicates the direct-relation matrix, which is the same as the pairwise comparison matrix of the decision makers in Design 1.
Table 4, below, shows the normalized fuzzy direct-relation matrix of Design 1.
Table 5, below, shows the fuzzy total-relation matrix of Design 1.
Table 6, below, shows the crisp total-relation matrix of Design 1.
In this design, the threshold value is equal to 0.533 (see
Table 7).
Table 8, below, shows the final output for Design 1.
The cause–effect diagram for Design 1 (PBL) is given below in
Figure 5.
Figure 5.
Cause–effect diagram—Design 1.
4.2. Results for Design 2: Acknowledgments, Objectives, and Progression (AOP)
Table 9, below, indicates the direct-relation matrix, which is the same as the pairwise comparison matrix of the decision makers in Design 2.
Table 10, below, shows the normalized fuzzy direct-relation matrix of Design 2.
Table 11, below, shows the fuzzy total-relation matrix of Design 2.
Table 12, below, shows the crisp total-relation matrix of Design 2.
In this design, the threshold value is equal to 0.629 (see
Table 13).
The final output for Design 2 is shown in
Table 14 below.
The cause–effect diagram for Design 2 (AOP) is shown in
Figure 6.
Figure 6.
Cause–effect diagram—Design 2.
4.3. Results for Design 3: Acknowledgments, Objectives, and Social Pressure (AOS)
Table 15 below indicates the direct-relation matrix, which is the same as the pairwise comparison matrix of the decision makers in Design 3.
Table 16, below, shows the normalized fuzzy direct-relation matrix of Design 3.
Table 17, below, shows the fuzzy total-relation matrix of Design 3.
Table 18, below, shows the crisp total-relation matrix of Design 3.
In this design, the threshold value is equal to 0.472 (see
Table 19).
The final output for Design 3 is given in
Table 20 below.
The cause–effect diagram for Design 3 is given in
Figure 7 below.
Figure 7.
Cause–effect diagram—Design 3.
4.4. Results for Design 4: Acknowledgments, Competition, and Time Pressure (ACT)
Table 21, below, indicates the direct-relation matrix, which is the same as the pairwise comparison matrix of the decision makers in Design 4.
Table 22, below, shows the normalized fuzzy direct-relation matrix of Design 4.
Table 23, below, shows the fuzzy total-relation matrix of Design 4.
Table 24, below, shows the crisp total-relation matrix of Design 4.
In this design, the threshold value is equal to 0.419 (see
Table 25).
The final output for Design 4 is given in
Table 26 below:
The cause–effect diagram for Design 4 is shown in
Figure 8 below.
Figure 8.
Cause–effect diagram—Design 4.
4.5. Validation
Validation was ensured through the following:
Threshold sensitivity analysis examines how changes in a model’s input parameters, especially near a chosen threshold, alter its output, thereby identifying the parameter values at which the model’s behavior or resulting decisions change appreciably. In fuzzy DEMATEL, the threshold determines which relationships are sufficiently strong to be represented in the causal diagram: higher thresholds produce simpler, sparser networks, while lower thresholds produce denser, more connected ones.
The thresholds for the four designs were as follows: Design 1 = 0.533, Design 2 = 0.629, Design 3 = 0.472, and Design 4 = 0.419.
Design 2 (0.629): a simpler map that only shows the most important connections.
Design 1 (0.533): a balanced view that shows both major and minor effects.
Design 3 (0.472): a denser network with more indirect effects.
Design 4 (0.419): most connections are still there, but they are less clear.
Sensitivity checks across plausible threshold ranges showed that the (D + R) orderings and causal groupings remained stable, confirming the robustness of the identified criteria.
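The effect of the threshold on network density can be illustrated with a short sketch. The total-relation matrix below is a random stand-in for illustration only; the four threshold values are the ones reported above, but the edge counts produced here do not correspond to the study's actual networks.

```python
import numpy as np

def retained_edges(T, theta):
    """Count off-diagonal causal links with t_ij >= theta."""
    off_diag = ~np.eye(T.shape[0], dtype=bool)
    return int((T[off_diag] >= theta).sum())

# Illustrative 9x9 total-relation matrix (random stand-in, NOT study data).
rng = np.random.default_rng(42)
T = rng.uniform(0.2, 0.8, size=(9, 9))
np.fill_diagonal(T, 0.0)

# The four design thresholds reported in the study.
for theta in (0.419, 0.472, 0.533, 0.629):
    print(f"theta={theta}: {retained_edges(T, theta)} links kept")
```

Raising the threshold can only remove links, never add them, which is why Design 2 (0.629) yields the sparsest map and Design 4 (0.419) the densest.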
Third, findings were validated through comparison with previous applications of fuzzy DEMATEL in education, retail, and gamification. Researchers in [49] demonstrated its effectiveness in explaining the main factors behind students’ course choices. In retail, [21] showed its utility in identifying the most important considerations for metaverse shopping. In gamification research, scholars in [23] emphasized its significance in examining intricate interrelations among gamification components. These precedents support fuzzy DEMATEL as a robust method for identifying cause–effect relationships across domains, including the design of gamified recommender systems.
5. Discussion
Fuzzy DEMATEL was used to evaluate the four gamified recommender system designs, revealing how the evaluation criteria interact and influence one another. Effectiveness, Transparency, Persuasiveness, User Satisfaction, Trust, Usefulness, Ease of Use, Efficiency, and Education were evaluated, and the results show that the criteria differ markedly in their influence and in their cause-and-effect relationships.
Usefulness (C6) and Ease of Use (C7) were the primary drivers of satisfaction, efficiency, and trust. This is consistent with previous research demonstrating the significance of perceived usefulness and usability in technology adoption [21,24]. Trust (C5) functioned as a mediator, shaped by transparency and usability, thereby enhancing satisfaction and effectiveness, in alignment with prior research in education [51] and retail [21].
Satisfaction (C4), Effectiveness (C1), and Education (C9) were the outcome criteria most influenced by upstream causes. This pattern aligns with earlier gamification research, which found that external factors boost short-term satisfaction while long-term effects depend on internal alignment [24]. Transparency (C2) and Efficiency (C8) had a moderate effect, especially in time-sensitive and socially oriented designs. Persuasiveness (C3) mattered most in competition- and peer-based designs, consistent with earlier research showing that persuasive elements strongly affect engagement [24].
Design-level comparisons showed that each design followed a distinct motivational pathway.
PBL (Design 1) was persuasive but weaker on learning and trust because it relied on external motivators; PBL increases short-term motivation, but this effect is not sustained [23,24].
AOP (Design 2) was the most balanced design. It scored well on usefulness and ease of use, improving teaching and learning outcomes. Its higher threshold (0.629) reflected excellent causal clarity, supporting evidence that explicit goals and controlled feedback improve long-term learning [51].
AOS (Design 3) leveraged social influence: transparency and peer accountability strengthened trust and persuasiveness. Education and retail research similarly shows that social approval motivates users to try new behaviors, reinforcing this finding. Because it depended on collaboration, however, it performed less strongly than the preceding designs on some criteria.
ACT (Design 4) was more persuasive but less trustworthy and satisfying because it emphasized competition and urgency. Previous work has shown that intense competition can lower intrinsic motivation [24]. Its dense but less distinct causal network (threshold 0.419) reflected this.
Overall, usefulness and ease of use are the most essential causal criteria across all designs, while satisfaction, effectiveness, and education are the most important outcomes. Its causal clarity and focus on long-term progress make AOP the most pedagogically sustainable design. PBL and ACT may motivate users temporarily but risk undermining trust and educational value, whereas AOS performs well where collaboration supports learning and sustained interest.
These results demonstrate that gamified recommender systems must go beyond points and competition: sustained use and effectiveness require goal setting, progression, and social transparency [24,51,53].
5.1. Limitations
This study has a number of limitations. First, undergraduate students served as the end-users evaluating the gamified recommender systems, which may limit the applicability of the findings to younger learners, professional users, or different cultural contexts. Second, four gamification designs—PBL, AOP, AOS, and ACT—grounded in different theoretical frameworks were examined; these strategies are common, but hybrid models and other designs may produce different results. Third, the fuzzy DEMATEL approach relies on subjective expert opinions; although group aggregation reduced individual bias, the findings depend on participant perceptions. Fourth, the study focused on English language acquisition, and applying the same paradigm to healthcare, e-commerce, or workplace training may yield different results. Finally, despite their potential impact on real-world adoption, long-term engagement, cognitive load, and emotional responses were excluded from the analysis due to the systematic review criteria.
5.2. Implications of the Study
This work has several theoretical and practical implications. Comparing four game-based recommender system designs (AOP, ACT, AOS, and PBL) shows how different learning and behavioral theories shape user perception and system efficacy. The study also demonstrates that decision-making models can untangle complex interactions among design attributes by integrating fuzzy DEMATEL with clearly defined criteria.
The findings can help educators, developers, and system designers integrate gamification into English language learning recommender systems. Identifying cause and effect criteria supports better design by ensuring that recognition, progress, competition, and social influence are leveraged to motivate learners and improve system usability. Because the results are consistent across threshold levels, the methodology can be adapted to many educational environments.
5.3. Recommendations for Future Research
Based on the current results, several avenues for future research are proposed. First, the proposed gamified recommender system designs could be assessed with a broader and more heterogeneous learner demographic, spanning diverse educational levels and cultural contexts, to enhance generalizability. Second, longitudinal studies should evaluate the enduring impacts of gamification elements on learner engagement, performance, and retention. Third, hybrid evaluation approaches that combine fuzzy DEMATEL with other multi-criteria decision-making techniques, such as ANP, TOPSIS, or fuzzy AHP, may yield deeper insight into the relative importance of each criterion. Distance-based methods such as the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) can rank gamification strategies by their closeness to the ideal engagement outcome.
Utility-based methods such as the Analytic Hierarchy Process (AHP) provide structured pairwise preferences that reveal what learners value most. Newer hybrid and range-sensitive models, such as the Ranking based on the Distances and Range (RADAR) method, also combine robustness with interpretability [69]. Applying and comparing these methods on larger datasets could make the analysis more reliable and clarify how causal–evaluative relationships operate in the design of gamified recommender systems. Generalizability may be further improved by comprehensive empirical validation using learner datasets from varied educational contexts and by longitudinal assessments. Causal modeling and forecasting methodologies, such as machine learning-based recommender models, can show how gamification dynamics evolve over time and affect long-term learner engagement. The study’s methods also illustrate that multi-criteria decision-making can address complex rating problems: fuzzy DEMATEL reveals how factors affect gamified systems, much as gray-correlation-based hybrid MCDM models support sustainable material selection [70]. Future research should combine subjective, user-centric elements—such as linguistic expert assessments—with objective performance and interaction metrics (system usage statistics, activity logs, completion rates, response precision, and learning progress indicators) to reduce subjectivity and improve the reliability and external validity of the fuzzy DEMATEL causal analysis. The paradigm may also prove effective beyond English language learning, in STEM education, job training, and workplace learning. Finally, future research should examine adaptive gamification, in which system components dynamically adjust to learners’ profiles, preferences, and progress to deliver more personalized and effective instruction.
6. Conclusions
This research employed the fuzzy DEMATEL method to evaluate four gamified recommender system designs for English language acquisition. The designs were assessed against nine criteria drawn from a systematic review: Effectiveness, Transparency, Persuasiveness, Satisfaction, Trust, Usefulness, Ease of Use, Efficiency, and Education. The gamification strategies—Points, Badges, and Leaderboards (PBL); Acknowledgments, Objectives, and Progression (AOP); Acknowledgments, Objectives, and Social Pressure (AOS); and Acknowledgments, Competition, and Time Pressure (ACT)—differ in clarity, motivation, and social influence.
This study used fuzzy DEMATEL to identify the main user-centric factors influencing GRS adoption. The proposed approach answered all three research questions.
For RQ1, the nine user-centric criteria showed distinct causal linkages, revealing a hierarchy of cause and effect factors. Perceived Usefulness (C6) and Ease of Use (C7) consistently influenced outcome criteria such as Satisfaction (C4) and Effectiveness (C1), and in all four gamified recommender system designs they were the main causal factors. The magnitudes of these effects varied across configurations, but their relative importance and causal direction remained unchanged, suggesting that these factors predict user engagement and adoption beyond design effects.
In response to RQ2, the analysis identified C6 and C7 as the main drivers of GRS adoption, given their prominence and positive influence. This shows that usability and perceived value are essential for system engagement.
In response to RQ3, a comparison of the four gamified designs—PBL, AOP, AOS, and ACT—showed that the Acknowledgments, Objectives, and Progression (AOP) design achieved the best network connectivity and causal balance, whereas the competition and time pressure in the ACT design had only localized effects.
Owing to its high threshold, Design 2 exhibited the clearest causal structure, whereas Design 1 offered a more balanced view. The results are robust, as criterion importance was unaffected by causal network density across thresholds. The study achieved its goals by (i) explaining causal interdependencies among user-centric criteria, (ii) identifying the most important adoption determinants, and (iii) showing how gamified design configurations alter these linkages. The findings show that fuzzy DEMATEL effectively analyzes the interdependencies among gamification criteria, supporting the construction of educational recommender systems, and they offer practical guidance for selecting learner-centered gamification mechanics. Importantly, this is among the first studies to systematically integrate gamification design with fuzzy DEMATEL for recommender systems, providing a novel perspective for both researchers and practitioners.