Article

Curiosity-Driven Exploration in Reinforcement Learning: An Adaptive Self-Supervised Learning Approach for Playing Action Games

1 School of Aerospace Engineering & Department of Control and Robot Engineering, Gyeongsang National University, Jinju 52828, Republic of Korea
2 Department of Computer Games Development, Faculty of Computing and AI, Air University, Islamabad 44000, Pakistan
3 Department of Psychology, Faculty of Social Sciences, Air University, Islamabad 44000, Pakistan
* Author to whom correspondence should be addressed.
Computers 2025, 14(10), 434; https://doi.org/10.3390/computers14100434
Submission received: 17 September 2025 / Revised: 3 October 2025 / Accepted: 9 October 2025 / Published: 13 October 2025

Abstract

Games are considered a suitable and standard benchmark for training, evaluating, and comparing the performance of artificial intelligence-based agents. In this research, the application of the Intrinsic Curiosity Module (ICM) combined with the Asynchronous Advantage Actor–Critic (A3C) algorithm is explored in action games. Although this combination has proven successful in several gaming environments, its effectiveness in action games remains largely unexplored. This research therefore assesses whether integrating ICM with A3C promotes curiosity-driven exploration and adaptive learning in action games. Using the MAME Toolkit library, we interface with the game environments, preprocess game screens to focus on relevant visual elements, and create diverse game episodes for training. The A3C policy is optimized using the Proximal Policy Optimization (PPO) algorithm with tuned hyperparameters. Comparisons are made with baseline methods, including vanilla A3C, ICM with pixel-based predictions, and state-of-the-art exploration techniques. Additionally, we evaluate the agent’s generalization capability in separate environments. The results demonstrate that ICM with A3C effectively promotes curiosity-driven exploration in action games: the agent learns exploration behaviors without relying solely on external rewards, and it achieves improved efficiency and learning speed compared to the baseline approaches. This research contributes to curiosity-driven exploration in reinforcement learning-based virtual environments and provides insights into the exploration of complex action games. Successfully applying ICM and A3C in action games presents exciting opportunities for adaptive learning and efficient exploration in challenging real-world environments.
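The core idea behind the ICM described in the abstract is that an agent receives an intrinsic reward proportional to how poorly a learned forward model predicts the next state's features, so familiar transitions become less rewarding over time. The following is a minimal illustrative sketch of that mechanism, not the authors' implementation: it uses a tiny linear forward model and a hypothetical feature dimension and learning rate, whereas the paper's ICM operates on learned convolutional features of preprocessed game screens.

```python
import numpy as np

rng = np.random.default_rng(0)

class ForwardModel:
    """Toy linear forward model: predicts next-state features phi(s')
    from current features phi(s) and a one-hot encoded action."""

    def __init__(self, feat_dim, n_actions, lr=0.05):
        self.W = rng.normal(scale=0.1, size=(feat_dim + n_actions, feat_dim))
        self.lr = lr
        self.n_actions = n_actions

    def _inputs(self, phi_s, action):
        onehot = np.zeros(self.n_actions)
        onehot[action] = 1.0
        return np.concatenate([phi_s, onehot])

    def intrinsic_reward(self, phi_s, action, phi_next, eta=0.5):
        """Return the curiosity bonus and take one SGD step on the
        forward model so repeated transitions grow less 'surprising'."""
        x = self._inputs(phi_s, action)
        pred = x @ self.W
        err = pred - phi_next
        # Curiosity bonus: scaled squared prediction error (Eq. form eta/2 * ||err||^2).
        r_i = eta * 0.5 * float(err @ err)
        # Gradient of 0.5*||x@W - phi_next||^2 w.r.t. W is outer(x, err).
        self.W -= self.lr * np.outer(x, err)
        return r_i

# A transition seen repeatedly yields a shrinking intrinsic reward.
fm = ForwardModel(feat_dim=4, n_actions=3)
phi_s, phi_next = np.ones(4), np.zeros(4)
rewards = [fm.intrinsic_reward(phi_s, 1, phi_next) for _ in range(50)]
```

In a full agent, this bonus would be added to the (often sparse) external game reward before the PPO update, which is what lets the agent keep exploring even when the environment itself gives no feedback.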
Keywords: integration technology; reinforcement learning; interactive learning environments; action games; gamification

Share and Cite

MDPI and ACS Style

Farooq, S.S.; Rahman, H.; Abdul Wahid, S.; Alyan Ansari, M.; Abdul Wahid, S.; Lee, H. Curiosity-Driven Exploration in Reinforcement Learning: An Adaptive Self-Supervised Learning Approach for Playing Action Games. Computers 2025, 14, 434. https://doi.org/10.3390/computers14100434

AMA Style

Farooq SS, Rahman H, Abdul Wahid S, Alyan Ansari M, Abdul Wahid S, Lee H. Curiosity-Driven Exploration in Reinforcement Learning: An Adaptive Self-Supervised Learning Approach for Playing Action Games. Computers. 2025; 14(10):434. https://doi.org/10.3390/computers14100434

Chicago/Turabian Style

Farooq, Sehar Shahzad, Hameedur Rahman, Samiya Abdul Wahid, Muhammad Alyan Ansari, Saira Abdul Wahid, and Hosu Lee. 2025. "Curiosity-Driven Exploration in Reinforcement Learning: An Adaptive Self-Supervised Learning Approach for Playing Action Games" Computers 14, no. 10: 434. https://doi.org/10.3390/computers14100434

APA Style

Farooq, S. S., Rahman, H., Abdul Wahid, S., Alyan Ansari, M., Abdul Wahid, S., & Lee, H. (2025). Curiosity-Driven Exploration in Reinforcement Learning: An Adaptive Self-Supervised Learning Approach for Playing Action Games. Computers, 14(10), 434. https://doi.org/10.3390/computers14100434

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
