  • Proceeding Paper
  • Open Access

10 December 2023

A Model of Gamification by Combining and Motivating E-Learners and Filtering Jobs for Candidates †

Hindustan Institute of Technology and Science, Department of Computer Applications, Chennai 603103, Tamilnadu, India
* Author to whom correspondence should be addressed.
Presented at the International Conference on Recent Advances in Science and Engineering, Dubai, United Arab Emirates, 4–5 October 2023.
This article belongs to the Proceedings Eng. Proc., 2023, RAiSE-2023

Abstract

Recommender systems emerged in the early 1990s to help users cope with the cognitive overload caused by the internet. Since then, such systems have expanded into many more roles, such as supporting exploration, enhancing decision making, or even providing entertainment. These capabilities make it possible to understand the user's task and tailor the advice that supports it. Recommender systems for education have been proposed in related research; they help students locate the learning materials that best suit their requirements. One of the primary requirements of an online social platform is to engage the user effectively, and for this purpose online media have begun to use gamification to improve user participation. Reward systems in online media widely use gamification elements such as points and badges. In a badge-based system, unachieved badges strongly influence the gamification outcome. In this paper, unachieved and achievable badges are recommended using an item-based collaborative filtering recommendation model. This enables us to gather information from candidates and make accurate predictions about the jobs that might suit them. The approach is also robust in that missing data about a candidate does not affect the algorithm as a whole, since it can make assumptions about the missing data based on similar data already stored in the database. Beyond this, the algorithm can be employed to host courses on the website. Empirical observation shows that the proposed model recommends badges with 70 percent accuracy.

1. Introduction

Gamification is an intelligent technique used to engage and motivate students. It adds elements of games to non-gaming activities to influence user activity. Game mechanics govern the player's activity, while game dynamics govern the mechanics' run-time behavior. The mechanics of a game include points, badges, leaderboards, limitations, etc., whereas the dynamics are story, progression, and relationships. When students start playing games, they become more immersed and can unlock new levels. Combining learning with gaming elements makes students more proactive and engages them to spend more time learning.
Gamification elements such as scores, ranks, levels, badges, trophies, and leaderboards motivate the students to learn. These gamification features can be categorized as game mechanics and game dynamics.
Game mechanics are the basics of gamification; they allow any process to be converted into a gamified one. However, repeated steps in the mechanics can make students grow bored. This is where game dynamics come in: they combine student behavior with game mechanics and keep students engaged in learning over a long period of time.
The four components of game mechanics are quantity, spatial, state, and action. Quantity is represented by a number, the spatial component by the position and rotation of objects, state by additional rules, and action mechanics by change.
Examples of game dynamics include competition, collaboration, community, achievement, surprise, etc.
The game elements of scores and points motivate students, as they are rewarded for the effort they make and there is no punishment for underperformance. For example, in a quiz, students earn a score and points for every correct answer, and there is no negative score or point for a wrong answer. As a result, students are motivated and their progression increases.
Levels and challenges are the next level of game mechanics in gamification. They help students gauge the depth of their knowledge: students start learning from the basics and advance gradually as the difficulty of each level increases.
Badges and trophies are the third level of gamification. In the real world, soldiers are honored with different levels of badges according to their bravery. In the same way, students are awarded badges and trophies for completing the challenges at each level. Students can therefore display their badges in their profile as a status symbol. Figure 1 shows the bottom-up view of the elements of the game.
Figure 1. Bottom-up view of elements of game.
Collaborative filtering (CF) is a framework for predicting a user's interests by collecting preferences or taste data from a large number of users (collaborating) [1]. A machine learning technique known as content-based filtering investigates the similarity between attributes to generate judgements [2]. This strategy is commonly used in recommender systems, which are algorithms designed to promote or recommend products to users based on information obtained about the user [3]. However, because it is based on historical data, this strategy requires knowledge that may or may not be relevant to the task at hand, including domain-specific data generated by users, clickstream data, and other information. As a result, a more powerful algorithm is required to handle form-based data, and collaborative filtering has been found to be an effective solution.
The main idea of the collaborative filtering framework is that if two users have similar views on a topic, they are more likely to share views on a different topic than two randomly picked users. It should be emphasized that, while these forecasts are based on data from a large number of users, they are unique to each individual. This differs from the simpler strategy of assigning an average (non-specific) score to each item of interest, for example, based on the number of assessments. Collaborative filtering algorithms sometimes operate on massive data sets.
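This idea can be illustrated with a toy sketch (all user and badge names below are hypothetical, not from the paper): users who agreed on past items are treated as better predictors for each other than a randomly chosen user.

```python
# Toy illustration of the collaborative filtering premise: users who
# agreed in the past are better predictors for each other than random
# users. All names and 0/1 ratings here are hypothetical.

ratings = {
    "alice": {"badge_a": 1, "badge_b": 1, "badge_c": 0},
    "bob":   {"badge_a": 1, "badge_b": 1, "badge_c": 1},
    "carol": {"badge_a": 0, "badge_b": 0, "badge_c": 1},
}

def agreement(u, v):
    """Fraction of co-rated items on which two users gave the same 0/1 rating."""
    items = ratings[u].keys() & ratings[v].keys()
    return sum(ratings[u][i] == ratings[v][i] for i in items) / len(items)

# alice agrees with bob on 2 of 3 badges, but with carol on none, so
# bob's badge history is the better signal for predicting alice's interests.
print(agreement("alice", "bob"))    # 0.666...
print(agreement("alice", "carol"))  # 0.0
```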

1.1. Item-Based Approach

The item-based approach is classified as a memory-based collaborative filtering algorithm [4]. In this method, predictions are computed directly from previously collected taste data by measuring the similarity between items that users have rated [5]. This can be utilized to find the best possible match for recommendations. The item-based approach is illustrated in Figure 2.
Figure 2. Memory-based CF.

1.2. Model-Based Approach

This method predicts user ratings of unrated items by building CF models with machine learning methods [6,7]. The algorithms in this approach can be further broken down into three sub-types, as illustrated in Figure 3.
Figure 3. Model-based CF.

3. Recommendation Model

Item-based filtering is used to build the recommendation system within collaborative filtering. The proposed work suggests an item-based framework to find the similarity among badges and to compute a similarity score. Item-based filtering aids in identifying badges that are similar to one another.
The basic idea for computing the similarity among badges is to find a score of similarity; by generating these similarities, an efficient recommendation system is modeled.
To find the similarity among the badges, different similarity scores were used. To compute the similarities, the distance between badge vectors was calculated using Euclidean distance in an n-dimensional space, where n is the number of users.
In a Q&A (Question and Answer) session, awarding badges to students who answer questions makes them more involved; they are then more likely to answer the remaining questions and earn more badges. Figure 5 describes the flow of the recommendation model in e-learning.
Figure 5. Flow of recommendation model.
The distance between two badges is calculated by Equation (1),

d(p, q) = \left( \sum_{i=1}^{n} \lvert p_i - q_i \rvert^{r} \right)^{1/r}    (1)

where p and q are the 0–1 vectors that encode the availability of the badges (r = 2 gives the Euclidean distance). The main challenge in the badging system is finding the badges a student has not yet achieved; for this purpose, a collaborative filtering-based recommendation has been developed. To compute the similarity scores, cosine similarity is used. Its advantage is that it suits sparse data and does not rely on shared-zero (0–0) matches. The cosine similarity is calculated by Equation (2),

\cos(p, q) = \frac{p \cdot q}{\lVert p \rVert \, \lVert q \rVert}    (2)

where \cdot denotes the dot product and \lVert p \rVert denotes the length of the vector p, whose 0/1 entries encode badge availability.
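The distance and similarity computations can be sketched as follows; this is a minimal illustration on hypothetical 0–1 badge-availability vectors, and the function names are our own:

```python
import math

def minkowski(p, q, r=2):
    """Minkowski distance of order r between badge vectors;
    r = 2 gives the Euclidean distance used in the paper."""
    return sum(abs(pi - qi) ** r for pi, qi in zip(p, q)) ** (1 / r)

def cosine(p, q):
    """Cosine similarity between two 0/1 badge-availability vectors."""
    dot = sum(pi * qi for pi, qi in zip(p, q))
    norm_p = math.sqrt(sum(pi * pi for pi in p))
    norm_q = math.sqrt(sum(qi * qi for qi in q))
    return dot / (norm_p * norm_q)

# Hypothetical availability vectors: entry i is 1 if user i holds the badge.
badge_x = [1, 0, 1, 1]
badge_y = [1, 0, 0, 1]
print(minkowski(badge_x, badge_y))  # 1.0 (they differ for one user)
print(cosine(badge_x, badge_y))     # ~0.816
```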
The length of a vector is calculated by Equation (3),

\lVert p \rVert = \sqrt{\sum_{i=1}^{n} p_i^2}    (3)
Combining Equations (2) and (3), the final equation to calculate cosine similarity is Equation (4),

\cos(p, q) = \frac{\sum_{i=1}^{n} p_i q_i}{\sqrt{\sum_{i=1}^{n} p_i^2} \, \sqrt{\sum_{i=1}^{n} q_i^2}}    (4)
Cosine similarity ranges from +1 to −1, with +1 denoting perfect similarity and −1 denoting complete dissimilarity. With the help of cosine similarity, a badge recommendation system is developed that helps students identify unachieved badges based on their history of achieved badges. Using this history, the system recommends the unachieved badges; if a student has already achieved all the badges, the system does not recommend any.
To combine badge history with similarity, the recommendation score is calculated by Equation (5),

\mathrm{Recommendation} = \frac{\sum_{i=1}^{sm} \mathrm{history}_i \times \mathrm{Similarity}_i}{\sum_{i=1}^{sm} (\mathrm{history}_i + \mathrm{Similarity}_i)}    (5)
where sm is the number of similar models selected by our model. In the equation, history is the 0–1 vector and Similarity is the cosine similarity. For every user badge, Similarity measures the most similar badges, while history records which badges are present in the user's profile. Equation (5) recommends the badges the user does not have, using the highest scores it produces. We extend the model to handle the case where a user has no badges at all: the model then recommends common badges within a threshold.
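A minimal sketch of the recommendation step, assuming hypothetical badge names and precomputed similarity scores (the score follows Equation (5); everything else is illustrative):

```python
def recommendation_score(history, similarity):
    """Eq. (5): combine a user's 0/1 badge history with per-badge
    cosine similarity; higher scores indicate stronger candidates."""
    numerator = sum(h * s for h, s in zip(history, similarity))
    denominator = sum(h + s for h, s in zip(history, similarity))
    return numerator / denominator if denominator else 0.0

# Hypothetical user profile: which badges are held, and how similar each
# candidate badge is to the user's achieved badges.
user_badges = {"quiz_novice": 1, "quiz_master": 0, "streak_7": 0}
similarity  = {"quiz_novice": 0.9, "quiz_master": 0.8, "streak_7": 0.3}

# Recommend only unachieved badges, ranked by similarity score.
unachieved = {b: similarity[b] for b, held in user_badges.items() if not held}
ranked = sorted(unachieved, key=unachieved.get, reverse=True)
print(ranked)  # ['quiz_master', 'streak_7']
```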

Experimental Evaluation

The proposed model is analyzed using the decision-support accuracy metrics precision and recall. These measures help assess whether the suggested badges are the ones students have not yet achieved. Equations (6) and (7) give the formulations for precision and recall:

\mathrm{precision}\ (p) = \frac{\text{number of recommended badges that are relevant}}{\text{number of badges recommended by the proposed model}}    (6)

\mathrm{recall}\ (r) = \frac{\text{number of recommended badges that are relevant}}{\text{number of all possible relevant badges}}    (7)
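These two metrics can be computed over badge sets as follows (a minimal sketch; the badge identifiers are hypothetical):

```python
def precision(recommended, relevant):
    """Eq. (6): share of recommended badges that are relevant."""
    rec, rel = set(recommended), set(relevant)
    return len(rec & rel) / len(rec) if rec else 0.0

def recall(recommended, relevant):
    """Eq. (7): share of all relevant badges that were recommended."""
    rec, rel = set(recommended), set(relevant)
    return len(rec & rel) / len(rel) if rel else 0.0

recommended = ["b1", "b2", "b3", "b4"]  # hypothetical model output
relevant = ["b2", "b4", "b5"]           # badges the student actually lacked
print(precision(recommended, relevant))  # 0.5
print(recall(recommended, relevant))     # 0.666...
```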
When recommending N badges out of the n relevant badges available, the average precision at N is given by Equation (8),

AP@N = \frac{1}{n} \sum_{i=1}^{N} P(i) \cdot \mathrm{rel}(i)    (8)
where rel(i) is a 0/1 indicator of whether the ith recommended badge is relevant among the available badges.
AP@N measures precision for a single user, while MAP@N averages it over all K available users, as given by Equation (9),

MAP@N = \frac{1}{K} \sum_{k=1}^{K} (AP@N)_k    (9)
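AP@N and MAP@N can be sketched as below; we normalize by min(n, N) so that AP stays well defined when fewer than N relevant badges exist, which is a common convention rather than something specified in the paper:

```python
def average_precision_at_n(recommended, relevant, n):
    """Eq. (8): AP@N -- precision P(i) accumulated at each rank i where
    the i-th recommended badge is relevant (rel(i) = 1)."""
    rel = set(relevant)
    hits, score = 0, 0.0
    for i, badge in enumerate(recommended[:n], start=1):
        if badge in rel:
            hits += 1
            score += hits / i  # P(i) at a relevant rank
    return score / min(len(rel), n) if rel else 0.0

def mean_average_precision_at_n(per_user, n):
    """Eq. (9): MAP@N -- AP@N averaged over all K users.
    per_user is a list of (recommended, relevant) pairs, one per user."""
    return sum(average_precision_at_n(r, rel, n) for r, rel in per_user) / len(per_user)
```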
Precision and recall were combined using the F-measure approach. The weighted-average approach is used to assess the system's effectiveness: the F-measure uses a degree of proximity ε and a weight σ to measure the system model. A threshold K has been assigned to measure the similarity.
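The paper's exact σ-weighting is not specified here, so as a sketch we use the common F_beta parameterization, in which beta weights recall relative to precision:

```python
def f_measure(precision, recall, beta=1.0):
    """Weighted harmonic mean of precision and recall; beta > 1 favors
    recall, beta < 1 favors precision, beta = 1 gives the F1 score."""
    if precision == 0 and recall == 0:
        return 0.0
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

print(f_measure(0.5, 2/3))  # F1 of precision 0.5 and recall 2/3 = 4/7
```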
Figure 6 makes it clear that recall and precision are at odds with one another: when precision is higher, recall is lower and the degree of matching is lower. As a result, from Figure 6, we can assign a threshold value between 0.5 and 0.6.
Figure 6. Variation of precision and recall.
From Figure 7, it is observed that the F-measure increases gradually as the threshold decreases. When the degree of matching is lower, precision is higher and recall is lower.
Figure 7. F-Measure with multiple threshold matching.

4. Conclusions and Future Work

The proposed item-based recommendation system is based on collaborative filtering. It suggests badges based on user behavior, which helps students determine the path of their study. The findings indicate that the model's badge-suggestion mechanism offers recommendations that are 75% accurate when examining each student's badges. If a user already has a badge, the proposed approach will not suggest it again. The model determines scores for the student's badge history and for related badges; the scores and recommended badges are arranged in descending order.
Future research can assess the state of the art of additional algorithms using student feedback. The model can also be constructed using both student feedback and content-based filtering. The findings demonstrate that the model put forward in this work helps students achieve their learning objectives more effectively.

Author Contributions

Conceptualization, S.E. and R.P.; methodology, S.E.; software, R.P.; validation, S.E.; formal analysis, S.E. and R.P.; investigation, S.E.; resources, S.E.; data curation, S.E.; writing—original draft preparation, S.E.; writing—review and editing, S.E.; visualization, S.E.; supervision, R.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no funding from any source.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data can be obtained from the corresponding author on request.

Acknowledgments

We acknowledge the institutional management and family members for their immense support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Schafer, J.B.; Frankowski, D.; Herlocker, J.; Sen, S. Collaborative filtering recommender systems. In The Adaptive Web: Methods and Strategies of Web Personalization; Springer: Berlin, Germany, 2007; pp. 291–324.
  2. Ekstrand, M.D.; Riedl, J.T.; Konstan, J.A. Collaborative filtering recommender systems. Found. Trends Hum.-Comput. Interact. 2011, 4, 81–173.
  3. Adomavicius, G.; Tuzhilin, A. Toward the next generation of recommender systems: A survey of the state-of-the-art and possible extensions. IEEE Trans. Knowl. Data Eng. 2005, 17, 734–749.
  4. Sarwar, B.; Karypis, G.; Konstan, J.; Riedl, J. Item-based collaborative filtering recommendation algorithms. In Proceedings of the 10th International Conference on World Wide Web, Hong Kong, China, 1–5 May 2001.
  5. Wang, J.; De Vries, A.P.; Reinders, M.J. Unifying user-based and item-based collaborative filtering approaches by similarity fusion. In Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Seattle, WA, USA, 6–11 August 2006.
  6. Aggarwal, C.C. Model-Based Collaborative Filtering. In Recommender Systems: The Textbook; Springer: Cham, Switzerland, 2016; pp. 71–138.
  7. Bergner, Y.; Droschler, S.; Kortemeyer, G.; Rayyan, S.; Seaton, D.; Pritchard, D.E. Model-based collaborative filtering analysis of student response data: Machine-learning item response theory. In Proceedings of the International Conference on Educational Data Mining (EDM), Chania, Greece, 19–21 June 2012.
  8. Tondello, G.F.; Orji, R.; Nacke, L.E. Recommender systems for personalized gamification. In Proceedings of the Adjunct Publication of the 25th Conference on User Modeling, Adaptation and Personalization, Bratislava, Slovakia, 9–12 July 2017.
  9. Seaborn, K.; Fels, D.I. Gamification in theory and action: A survey. Int. J. Hum. Comput. 2015, 74, 14–31.
  10. Prakasa, F.B.P.; Emanuel, A.W.R. Review of benefit using gamification element for countryside tourism. In Proceedings of the International Conference of Artificial Intelligence and Information Technology (ICAIIT), Yogyakarta, Indonesia, 13–15 March 2019.
  11. Rinc, S. Integrating gamification with knowledge management. In Proceedings of the Management, Knowledge and Learning, International Conference, International School for Social and Business Studies, Portorož, Slovenia, 25–27 June 2014.
  12. Pilar, L.; Moulis, P.; Pitrová, J.; Bouda, P.; Gresham, G.; Balcarová, T.; Rojík, S. Education and Business as a key topic at the Instagram posts in the area of Gamification. J. Effic. Responsib. Educ. Sci. 2019, 12, 26–33.
  13. de Paula Porto, D.; de Jesus, G.M.; Ferrari, F.C.; Fabbri, S.C.P.F. Initiatives and challenges of using gamification in software engineering: A Systematic Mapping. J. Syst. Softw. 2021, 173, 1–46.
  14. Corbett, S. Learning by Playing: Video Games in the Classroom. The New York Times, 15 September 2010.
  15. Deterding, S.; Dixon, D.; Khaled, R.; Nacke, L. From game design elements to gamefulness: Defining gamification. In Proceedings of the 15th International Academic MindTrek Conference: Envisioning Future Media Environments, Tampere, Finland, 28–30 September 2011.
  16. Wertz, R.J. Reality is Broken—Why Games Make Us Better and How They Can Change the World. J. Commun. Media Stud. 2011, 3, 174–176.
  17. Lattal, K.A.; Chase, P.N. (Eds.) Behavior Theory and Philosophy; Springer Science & Business Media; West Virginia University: Morgantown, WV, USA, 2013.
  18. Hassan, M.A.; Habiba, U.; Majeed, F.; Shoaib, M. Adaptive gamification in e-learning based on students’ learning styles. Interact. Learn. Environ. 2021, 29, 545–565.
  19. Hamari, J.; Koivisto, J. Why do people use gamification services? Int. J. Inf. Manag. 2015, 35, 419–431.
  20. Huang, B.; Hwang, G.J.; Hew, K.F.; Warning, P. Effects of gamification on students’ online interactive patterns and peer-feedback. Distance Educ. 2019, 40, 350–379.
  21. Urh, M.; Vukovic, G.; Jereb, E. The model for introduction of gamification into e-learning in higher education. In Proceedings of the Procedia-Social and Behavioral Sciences, Novotel Athens Convention Center, Athens, Greece, 5–7 February 2015.
  22. de Marcos Ortega, L.; García-Cabo, A.; López, E.G. Towards the social gamification of e-learning: A practical experiment. Int. J. Eng. Educ. 2017, 33, 66–73.
  23. Yildirim, I. The effects of gamification-based teaching practices on student achievement and students’ attitudes toward lessons. Internet High. Educ. 2017, 33, 86–92.
  24. Göksün, D.O.; Gürsoy, G. Comparing success and engagement in gamified learning experiences via Kahoot and Quizizz. Comput. Educ. 2019, 135, 15–29.
  25. Lopez, C.E.; Tucker, C.S. The effects of player type on performance: A gamification case study. Comput. Hum. Behav. 2019, 91, 333–345.
  26. Eliyas, S.; Ranjana, P. Gamification: Is E-next Learning’s Big Thing. J. Internet Serv. Inf. Secur. 2022, 12, 238–245.