Multi-Agent Reinforcement Learning Model Simulation for Attention-Deficit Hyperactivity Disorder Children
Abstract
1. Introduction
2. Related Works
3. Background
4. Materials and Methods
4.1. Objective Function
4.2. Environment
4.3. Agents
4.4. Actions
4.5. States
4.6. Rewards
4.7. Reward Aggregation Function
4.8. Algorithms
| Algorithm 1: Independent Deep Q Network (IDQN) [42] |
| Inputs: value networks with parameters θ_i, target networks with parameters θ_i⁻, replay buffers D_i, time steps T, number of agents n, observations o, actions a, rewards r, next observations o′, mini-batch size B, discount factor γ, loss L. |
| Outputs: updated value network parameters θ_i and target network parameters θ_i⁻, TD target y. |
| Initialize n value networks with random parameters θ_1, …, θ_n; |
| Initialize n target networks with parameters θ_i⁻ ← θ_i; |
| Initialize a replay buffer D_i for each agent i. |
| For time step t = 0, 1, 2, …, T do |
| Collect current observations o_t^1, …, o_t^n. |
| For agent i = 1 to n do |
| Choose an action a_t^i from Q_i(o_t^i, ·; θ_i) (e.g., ε-greedy). |
| End For |
| Apply actions a_t^1, …, a_t^n; get rewards r_t^i and next observations o_{t+1}^i. |
| For agent i = 1 to n do |
| Store transition (o_t^i, a_t^i, r_t^i, o_{t+1}^i) in replay buffer D_i; |
| Sample a random mini-batch of B transitions from D_i. |
| If o_{t+1}^i is terminal, then y = r_t^i |
| else y = r_t^i + γ max_{a′} Q_i(o_{t+1}^i, a′; θ_i⁻) |
| End If |
| Update parameters θ_i by minimizing the loss L = (y − Q_i(o_t^i, a_t^i; θ_i))²; |
| Periodically update target network parameters θ_i⁻ ← θ_i. |
| End For |
| End For |
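To make the control flow of Algorithm 1 concrete, the following is a minimal tabular sketch of independent Q-learning: per-agent Q-tables stand in for the deep value networks, experience replay is omitted for brevity, and the toy two-agent environment in `step` (action 2 pays reward 1) is invented for illustration.

```python
import numpy as np

N_AGENTS, N_STATES, N_ACTIONS = 2, 4, 3
GAMMA, ALPHA, EPS = 0.9, 0.2, 0.1  # discount, learning rate, exploration

rng = np.random.default_rng(0)
# One Q-table per agent, standing in for the per-agent value networks
q = [np.zeros((N_STATES, N_ACTIONS)) for _ in range(N_AGENTS)]

def choose_action(q_i, obs):
    """Epsilon-greedy action selection, as in Algorithm 1."""
    if rng.random() < EPS:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(q_i[obs]))

def step(actions):
    """Toy environment (invented): each agent is rewarded for action 2."""
    rewards = [1.0 if a == 2 else 0.0 for a in actions]
    next_obs = [int(rng.integers(N_STATES)) for _ in range(N_AGENTS)]
    return rewards, next_obs

obs = [0] * N_AGENTS
for t in range(3000):
    actions = [choose_action(q[i], obs[i]) for i in range(N_AGENTS)]
    rewards, next_obs = step(actions)
    for i in range(N_AGENTS):
        # Independent TD target: y = r_i + gamma * max_a' Q_i(o'_i, a')
        y = rewards[i] + GAMMA * q[i][next_obs[i]].max()
        q[i][obs[i], actions[i]] += ALPHA * (y - q[i][obs[i], actions[i]])
    obs = next_obs
```

Because each agent updates its own table from its own reward, the loop is just n copies of single-agent Q-learning; this is exactly what makes IDQN simple but blind to the other agents' learning.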
| Algorithm 2: Value Decomposition Network (VDN) [42] |
| Inputs: value networks with parameters θ_i, target networks with parameters θ_i⁻, shared replay buffer D, time steps T, number of agents n, observations o, actions a, shared reward r, next observations o′, mini-batch size B, discount factor γ, loss L. |
| Outputs: updated value network parameters θ_i and target network parameters θ_i⁻, TD target y. |
| Initialize n value networks with random parameters θ_1, …, θ_n; |
| Initialize n target networks with parameters θ_i⁻ ← θ_i; |
| Initialize a shared replay buffer D for all the agents. |
| For time step t = 0, 1, 2, …, T do |
| Collect current observations o_t^1, …, o_t^n. |
| For agent i = 1 to n do |
| Choose an action a_t^i from Q_i(o_t^i, ·; θ_i) (e.g., ε-greedy). |
| End For |
| Apply actions a_t^1, …, a_t^n; get the shared reward r_t and next observations o_{t+1}^1, …, o_{t+1}^n. |
| Store the joint transition (o_t, a_t, r_t, o_{t+1}) in the shared replay buffer D; |
| Sample a random mini-batch of B transitions from D. |
| If o_{t+1} is terminal, then y = r_t |
| else y = r_t + γ Σ_{i=1}^n max_{a′} Q_i(o_{t+1}^i, a′; θ_i⁻) |
| End If |
| Update parameters θ_1, …, θ_n jointly by minimizing the loss L = (y − Σ_{i=1}^n Q_i(o_t^i, a_t^i; θ_i))²; |
| Update target network parameters θ_i⁻ ← θ_i for each agent i. |
| End For |
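The VDN target and loss can be checked numerically. In this sketch, small fixed Q-tables stand in for the per-agent networks and every number is invented; only the decomposition Q_tot = Σ_i Q_i and the TD target follow Algorithm 2.

```python
import numpy as np

GAMMA = 0.9  # discount factor

# Fixed per-agent Q-tables standing in for the deep value networks
# (rows: observations, columns: actions); all numbers are invented.
q1 = np.array([[0.2, 0.5, 0.1],
               [0.0, 0.3, 0.4]])
q2 = np.array([[0.6, 0.1, 0.2],
               [0.2, 0.2, 0.5]])
q1_target, q2_target = q1.copy(), q2.copy()  # target networks start as copies

# One joint transition as it would be sampled from the shared buffer D
obs, actions = [0, 0], [1, 0]
shared_reward, next_obs, terminal = 1.0, [1, 1], False

# Value decomposition: Q_tot is the SUM of the per-agent utilities
q_tot = q1[obs[0], actions[0]] + q2[obs[1], actions[1]]

# TD target: y = r if terminal, else r + gamma * sum_i max_a' Q_i^-(o'_i, a')
if terminal:
    y = shared_reward
else:
    y = shared_reward + GAMMA * (q1_target[next_obs[0]].max()
                                 + q2_target[next_obs[1]].max())

loss = (y - q_tot) ** 2  # minimized jointly w.r.t. all theta_i
```

Because Q_tot is a plain sum, each agent can still act greedily on its own Q_i at execution time while the shared reward is credited through the joint loss.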
| Algorithm 3: QMIX [42] |
| Inputs: value networks with parameters θ_i, target networks with parameters θ_i⁻, shared replay buffer D, time steps T, number of agents n, observations o, actions a, shared reward r, next observations o′, centralized state s, mini-batch size B, discount factor γ, hypernetwork parameters φ, loss L. |
| Outputs: updated parameters θ_i, θ_i⁻, and φ; TD target y. |
| Initialize n value networks with random parameters θ_1, …, θ_n; |
| Initialize n target networks with parameters θ_i⁻ ← θ_i; |
| Initialize a shared replay buffer D for all the agents; initialize the hypernetwork with random parameters φ. |
| For time step t = 0, 1, 2, …, T do |
| Collect current observations o_t^1, …, o_t^n. |
| For agent i = 1 to n do |
| Choose an action a_t^i from Q_i(o_t^i, ·; θ_i) (e.g., ε-greedy). |
| End For |
| Apply actions a_t^1, …, a_t^n; get the shared reward r_t, next observations o_{t+1}^1, …, o_{t+1}^n, and next centralized state s_{t+1}. |
| Store the joint transition (s_t, o_t, a_t, r_t, s_{t+1}, o_{t+1}) in the shared replay buffer D; |
| Sample a random mini-batch of B transitions from D. |
| If s_{t+1} is terminal, then y = r_t |
| else |
| Compute target mixing parameters from the hypernetwork, (w⁻, b⁻) = f(s_{t+1}; φ⁻); y = r_t + γ max_{a′} Q_tot(o_{t+1}, a′; θ⁻, w⁻, b⁻) |
| End If |
| Compute mixing parameters (w, b) = f(s_t; φ); compute the value estimate Q_tot(o_t, a_t; θ, w, b) by mixing Q_1, …, Q_n. |
| Update parameters θ_i and φ by minimizing the loss L = (y − Q_tot)²; |
| Update target network parameters θ_i⁻ ← θ_i (and φ⁻ ← φ) for each agent i. |
| End For |
| End For |
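The distinguishing step of Algorithm 3 is the state-conditioned monotonic mixing. The sketch below uses a single linear layer as a stand-in hypernetwork (its parameters and inputs are invented for illustration); taking the absolute value of the generated weights is what enforces the constraint ∂Q_tot/∂Q_i ≥ 0 that QMIX relies on.

```python
import numpy as np

rng = np.random.default_rng(1)
n_agents, state_dim = 3, 4

# Stand-in hypernetwork parameters phi (one linear map per output, invented)
W_hyper = rng.normal(size=(n_agents, state_dim))  # produces mixing weights w(s)
b_hyper = rng.normal(size=state_dim)              # produces mixing bias b(s)

def q_tot(q_values, s):
    """Mix per-agent utilities into Q_tot with state-dependent parameters."""
    w = np.abs(W_hyper @ s)   # abs() => non-negative weights => monotonicity
    b = float(b_hyper @ s)    # the bias may stay unconstrained
    return float(w @ q_values + b)

s = rng.normal(size=state_dim)
q_low  = np.array([0.1, 0.4, 0.2])          # per-agent utilities Q_i(o_i, a_i)
q_high = q_low + np.array([0.0, 0.3, 0.0])  # raise one agent's utility

# Monotonic mixing: improving any single agent's Q_i never lowers Q_tot,
# so per-agent greedy actions remain consistent with the joint argmax.
assert q_tot(q_high, s) >= q_tot(q_low, s)
```

Unlike VDN's fixed sum, the weights here change with the centralized state s, so QMIX can represent richer (but still monotonic) combinations of the agents' utilities.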
5. Discussion and Results
6. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| ADHD | Attention Deficit Hyperactivity Disorder |
| EF | Executive Function |
| TD | Typically Developing |
| AI | Artificial Intelligence |
| ML | Machine Learning |
| DL | Deep Learning |
| RL | Reinforcement Learning |
| MARL | Multi-Agent Reinforcement Learning |
| IDQN | Independent Deep Q Network |
| VDN | Value Decomposition Network |
References
- Desseilles, M.; Perroud, N.; Weibel, S. Manuel de L’hyperactivité et Du Déficit de L’attention: Le TDAH Chez L’adulte; Eyrolles: Paris, France, 2020. [Google Scholar]
- CDC. Data and Statistics on ADHD. Available online: https://www.cdc.gov/adhd/data/index.html (accessed on 15 December 2025).
- Liu, J.; Jiang, Z.; Li, F.; Zheng, Y.; Cui, Y.; Xu, H.; Li, Y. Prevalence and Comorbidity of Attention Deficit Hyperactivity Disorder in Chinese School-Attending Students Aged 6–16: A National Survey. Ann. Gen. Psychiatry 2025, 24, 23. [Google Scholar] [CrossRef] [PubMed]
- Laslo-Roth, R.; George-Levi, S.; Rosenstreich, E. Protecting Children with ADHD against Loneliness: Familial and Individual Factors Predicting Perceived Child’s Loneliness. Personal. Individ. Differ. 2021, 180, 110971. [Google Scholar] [CrossRef]
- Ingeborgrud, C.B.; Oerbeck, B.; Friis, S.; Zeiner, P.; Pripp, A.H.; Aase, H.; Biele, G.; Dalsgaard, S.; Overgaard, K.R. Anxiety and Depression from Age 3 to 8 Years in Children with and Without ADHD Symptoms. Sci. Rep. 2023, 13, 15376. [Google Scholar] [CrossRef] [PubMed]
- Nguyen-Thi-Phuong, M.; Nguyen-Thi-Thanh, M.; Goldberg, R.J.; Nguyen, H.L.; Dao-Thi-Minh, A.; Duong-Quy, S. Obstructive Sleep Apnea and Sleep Disorders in Children with Attention Deficit Hyperactivity Disorder. Pulm. Ther. 2025, 11, 423–441. [Google Scholar] [CrossRef]
- Parks, K.M.; Hannah, K.E.; Moreau, C.N.; Brainin, L.; Joanisse, M.F. Language Abilities in Children and Adolescents with DLD and ADHD: A Scoping Review. J. Commun. Disord. 2023, 106, 106381. [Google Scholar] [CrossRef]
- He, Z.; Yang, X.; Li, Y.; Zhao, X.; Li, J.; Li, B. Attention-deficit/Hyperactivity Disorder in Children with Epilepsy: A Systematic Review and Meta-Analysis of Prevalence and Risk Factors. Epilepsia Open 2024, 9, 1148–1165. [Google Scholar] [CrossRef]
- Lotfy, A.S.; Darwish, M.E.S.; Ramadan, E.S.; Sidhom, R.M. The Incidence of Dysgraphia in Arabic Language in Children with Attention-Deficit Hyperactivity Disorder. Egypt J. Otolaryngol. 2021, 37, 115. [Google Scholar] [CrossRef]
- Villa, F.M.; Crippa, A.; Rosi, E.; Nobile, M.; Brambilla, P.; Delvecchio, G. ADHD and Eating Disorders in Childhood and Adolescence: An Updated Minireview. J. Affect. Disord. 2023, 321, 265–271. [Google Scholar] [CrossRef]
- Alqarni, M.M.; Shati, A.A.; Alassiry, M.Z.; Asiri, W.M.; Alqahtani, S.S.; ALZomia, A.S.; Mahnashi, N.A.; Alqahtani, M.S.; Alamri, F.S.; Alqarni, M.M. Patterns of Injuries Among Children Diagnosed with Attention Deficit Hyperactivity Disorder in Aseer Region, Southwestern Saudi Arabia. Cureus 2021, 13, e17396. [Google Scholar] [CrossRef]
- French, B.; Nalbant, G.; Wright, H.; Sayal, K.; Daley, D.; Groom, M.J.; Cassidy, S.; Hall, C.L. The Impacts Associated with Having ADHD: An Umbrella Review. Front. Psychiatry 2024, 15, 1343314. [Google Scholar] [CrossRef]
- Banaschewski, T.; Häge, A.; Hohmann, S.; Mechler, K. Perspectives on ADHD in Children and Adolescents as a Social Construct amidst Rising Prevalence of Diagnosis and Medication Use. Front. Psychiatry 2024, 14, 1289157. [Google Scholar] [CrossRef] [PubMed]
- Jensen, V.H.; Orm, S.; Øie, M.G.; Andersen, P.N.; Hovik, K.T.; Skogli, E.W. Executive Functions and ADHD Symptoms Predict Educational Functioning in Children with ADHD: A Two-Year Longitudinal Study. Appl. Neuropsychol. Child 2025, 14, 225–235. [Google Scholar] [CrossRef] [PubMed]
- Byun, J.; Joung, C.; Lee, Y.; Lee, S.; Won, W. Le Petit Care: A Child-Attuned Design for Personalized ADHD Symptom Management Through AI-Powered Extended Reality. In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems; Association for Computing Machinery: New York, NY, USA, 2025; pp. 1–7. [Google Scholar]
- Chen, X.; Wang, S.; Yang, X.; Yu, C.; Ni, F.; Yang, J.; Tian, Y.; Ye, J.; Liu, H.; Luo, R. Utilizing Artificial Intelligence-Based Eye Tracking Technology for Screening ADHD Symptoms in Children. Front. Psychiatry 2023, 14, 1260031. [Google Scholar] [CrossRef] [PubMed]
- Yu, D.; Fang, J.-H. Using Artificial Intelligence Methods to Study the Effectiveness of Exercise in Patients with ADHD. Front. Neurosci. 2024, 18, 1380886. [Google Scholar] [CrossRef]
- Busby, A.; Wijetunge, M.N.R.; Jayadas, A. Promoting Creativity Among the Students with ADHD in Universities in USA: A Controlled Experiment on the Effects of Environmental Stimulus. A B E-J. 2025, 1, 1–26. [Google Scholar]
- Dahan, A.; Roth, N.; Pelosi, A.D.; Reiner, M. A Reinforcement Learning Framework for Personalized Adaptive E-Learning. In Advanced Technologies and the University of the Future; Vendrell Vidal, E., Cukierman, U.R., Auer, M.E., Eds.; Lecture Notes in Networks and Systems; Springer Nature: Cham, Switzerland, 2025; Volume 1140, pp. 141–162. ISBN 978-3-031-71529-7. [Google Scholar]
- Tejasvi, P.; Kumar, T. A Smart System Facilitating Emotional Regulation in Neurodivergent Children. Procedia Comput. Sci. 2024, 235, 3257–3270. [Google Scholar] [CrossRef]
- Boschello, F.; Conca, A.; Donadello, I.; Giupponi, G.; Holzer, S.; Zini, F. Towards AI-Based Cognitive Training for Adult ADHD Patients. In Proceedings of the First International Conference on AI in Medicine and Healthcare (AiMH’ 2025), Innsbruck, Austria, 8–10 April 2025. [Google Scholar]
- Bansal, D.; Verma, A.; Sharma, A.; Kapoor, I.; Reddy, V.; Patel, A. Improving Object Recognition and Diagnostics with Advanced Learning Techniques. 2024. Available online: https://www.researchgate.net/publication/384844639_Improving_Object_Recognition_and_Diagnostics_with_Advanced_Learning_Techniques (accessed on 18 October 2025).
- Katabi, G.; Shahar, N. Exploring the Steps of Learning: Computational Modeling of Initiatory-Actions among Individuals with Attention-Deficit/Hyperactivity Disorder. Transl. Psychiatry 2024, 14, 10. [Google Scholar] [CrossRef]
- Dong, H.; Chen, D.; Chen, Y.; Tang, Y.; Yin, D.; Li, X. A Multi-Task Learning Model with Reinforcement Optimization for ASD Comorbidity Discrimination. Comput. Methods Programs Biomed. 2024, 243, 107865. [Google Scholar] [CrossRef]
- Zhang, S.; Song, H.; Wang, Q.; Pei, Y. Fuzzy Logic Guided Reward Function Variation: An Oracle for Testing Reinforcement Learning Programs. arXiv 2024, arXiv:2406.19812. [Google Scholar] [CrossRef]
- Gupta, A.; Badr, Y.; Negahban, A.; Qiu, R.G. Energy-Efficient Heating Control for Smart Buildings with Deep Reinforcement Learning. J. Build. Eng. 2021, 34, 101739. [Google Scholar] [CrossRef]
- Sutton, R.S.; Barto, A.G. Reinforcement Learning: An Introduction; MIT Press: Cambridge, MA, USA, 2018. [Google Scholar]
- Naeem, M.; Rizvi, S.T.H.; Coronato, A. A Gentle Introduction to Reinforcement Learning and Its Application in Different Fields. IEEE Access 2020, 8, 209320–209344. [Google Scholar] [CrossRef]
- Tedeschi, T.; Ciangottini, D.; Baioletti, M.; Poggioni, V.; Spiga, D.; Storchi, L.; Tracolli, M. Smart Caching in a Data Lake for High Energy Physics Analysis. arXiv 2022, arXiv:2208.06437. [Google Scholar] [CrossRef]
- Moussaoui, H.; El Akkad, N.; Benslimane, M. Reinforcement Learning: A Review. IJCDS 2023, 13, 1465–1483. [Google Scholar] [CrossRef] [PubMed]
- Malibari, N.; Katib, I.; Mehmood, R. Systematic Review on Reinforcement Learning in the Field of Fintech. arXiv 2023, arXiv:2305.07466. [Google Scholar] [CrossRef]
- Zhong, L. Comparison of Q-Learning and SARSA Reinforcement Learning Models on Cliff Walking Problem; Atlantis Press: Dordrecht, The Netherlands, 2024; pp. 207–213. [Google Scholar]
- Liu, Y.; Yang, J.; Chen, L.; Guo, T.; Jiang, Y. Overview of Reinforcement Learning Based on Value and Policy. In Proceedings of the 2020 Chinese Control And Decision Conference (CCDC), Hefei, China, 22–24 August 2020; IEEE: New York, NY, USA, 2020; pp. 598–603. [Google Scholar]
- Van Hasselt, H.; Guez, A.; Silver, D. Deep Reinforcement Learning with Double Q-Learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016; Association for the Advancement of Artificial Intelligence: Washington, DC, USA, 2016; Volume 30. [Google Scholar]
- Zhou, S.; Liu, X.; Xu, Y.; Guo, J. A Deep Q-Network (DQN) Based Path Planning Method for Mobile Robots. In Proceedings of the 2018 IEEE International Conference on Information and Automation (ICIA), Wuyishan, China, 11–13 August 2018; IEEE: New York, NY, USA, 2018; p. 371. [Google Scholar]
- Tournaire, T. Model-Based Reinforcement Learning for Dynamic Resource Allocation in Cloud Environments. Ph.D. Thesis, Institut Polytechnique de Paris, Paris, France, 2022. [Google Scholar]
- Sivamayil, K.; Rajasekar, E.; Aljafari, B.; Nikolovski, S.; Vairavasundaram, S.; Vairavasundaram, I. A Systematic Study on Reinforcement Learning Based Applications. Energies 2023, 16, 1512. [Google Scholar] [CrossRef]
- Siboo, S.; Bhattacharyya, A.; Raj, R.N.; Ashwin, S.H. An Empirical Study of DDPG and PPO-Based Reinforcement Learning Algorithms for Autonomous Driving. IEEE Access 2023, 11, 125094–125108. [Google Scholar] [CrossRef]
- Lillicrap, T.P.; Hunt, J.J.; Pritzel, A.; Heess, N.M.O.; Erez, T.; Tassa, Y.; Silver, D.; Wierstra, D.P. Continuous Control with Deep Reinforcement Learning. U.S. Patent No 10,776,692, 15 September 2020. [Google Scholar]
- Bick, D. Towards Delivering a Coherent Self-Contained Explanation of Proximal Policy Optimization. Ph.D. Thesis, University of Groningen, Groningen, The Netherlands, 2021. [Google Scholar]
- Schulman, J.; Wolski, F.; Dhariwal, P.; Radford, A.; Klimov, O. Proximal Policy Optimization Algorithms. arXiv 2017, arXiv:1707.06347. [Google Scholar] [CrossRef]
- Albrecht, S.V.; Christianos, F.; Schäfer, L. Multi-Agent Reinforcement Learning: Foundations and Modern Approaches; MIT Press: Cambridge, MA, USA, 2024. [Google Scholar]
- Yuan, L.; Zhang, Z.; Li, L.; Guan, C.; Yu, Y. A Survey of Progress on Cooperative Multi-Agent Reinforcement Learning in Open Environment. arXiv 2023, arXiv:2312.01058. [Google Scholar] [CrossRef]
- Ning, Z.; Xie, L. A Survey on Multi-Agent Reinforcement Learning and Its Application. J. Autom. Intell. 2024, 3, 73–91. [Google Scholar] [CrossRef]
- Liang, J.; Miao, H.; Li, K.; Tan, J.; Wang, X.; Luo, R.; Jiang, Y. A Review of Multi-Agent Reinforcement Learning Algorithms. Electronics 2025, 14, 820. [Google Scholar] [CrossRef]
- Son, K.; Kim, D.; Kang, W.J.; Hostallero, D.E.; Yi, Y. Qtran: Learning to Factorize with Transformation for Cooperative Multi-Agent Reinforcement Learning. In Proceedings of the International Conference on Machine Learning; PMLR: Cambridge MA, USA, 2019; pp. 5887–5896. [Google Scholar]
- Ryu, H.; Shin, H.; Park, J. Cooperative and Competitive Biases for Multi-Agent Reinforcement Learning. arXiv 2021, arXiv:2101.06890. [Google Scholar] [CrossRef]
- Maisonhaute, T.; Michel, F.; Soulie, J.-C. État de l’art Sur Les Approches En Apprentissage Par Renforcement Multi-Agent. In Proceedings of the JFSMA 2024; Cépaduès: Toulouse, France, 2024; pp. 99–108. [Google Scholar]
- Zhang, Y. DQN for Coordinating Multi-Agent Cooking. Highlights Sci. Eng. Technol. 2023, 39, 1228–1238. [Google Scholar] [CrossRef]
- Wang, Y.; Wang, Y.; Tian, F.; Ma, J.; Jin, Q. Intelligent Games Meeting with Multi-Agent Deep Reinforcement Learning: A Comprehensive Review. Artif. Intell. Rev. 2025, 58, 165. [Google Scholar] [CrossRef]
- Li, Z.; Chen, X.; Fu, J.; Xie, N.; Zhao, T. Reducing Q-Value Estimation Bias via Mutual Estimation and Softmax Operation in MADRL. Algorithms 2024, 17, 36. [Google Scholar] [CrossRef]
- Liu, Y.; Wang, W.; Hu, Y.; Hao, J.; Chen, X.; Gao, Y. Multi-Agent Game Abstraction via Graph Attention Neural Network. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; AAAI Press: Palo Alto, CA, USA, 2020; Volume 34, pp. 7211–7218. [Google Scholar]
- Kong, X.; Xin, B.; Liu, F.; Wang, Y. Revisiting the Master-Slave Architecture in Multi-Agent Deep Reinforcement Learning. arXiv 2017, arXiv:1712.07305. [Google Scholar]
- Kilinc, O.; Montana, G. Multi-Agent Deep Reinforcement Learning with Extremely Noisy Observations. arXiv 2018, arXiv:1812.00922. [Google Scholar] [CrossRef]
- Niu, Y.; Paleja, R.R.; Gombolay, M.C. Multi-Agent Graph-Attention Communication and Teaming. In Proceedings of the AAMAS, Online, 3–7 May 2021; Volume 21, p. 20. [Google Scholar]
- Tucker, M.; Li, H.; Agrawal, S.; Hughes, D.; Sycara, K.; Lewis, M.; Shah, J.A. Emergent Discrete Communication in Semantic Spaces. In Proceedings of the Advances in Neural Information Processing Systems; Curran Associates, Inc.: Red Hook, NY, USA, 2021; Volume 34, pp. 10574–10586. [Google Scholar]
- Gohari, P.; Hale, M.; Topcu, U. Privacy-Engineered Value Decomposition Networks for Cooperative Multi-Agent Reinforcement Learning. In Proceedings of the 2023 62nd IEEE Conference on Decision and Control (CDC), Singapore, 13–15 December 2023; IEEE: New York, NY, USA, 2023; pp. 8038–8044. [Google Scholar]
- Hu, J.; Jiang, S.; Harding, S.A.; Wu, H.; Liao, S. Rethinking the Implementation Tricks and Monotonicity Constraint in Cooperative Multi-Agent Reinforcement Learning. arXiv 2023, arXiv:2102.03479. [Google Scholar] [CrossRef]
- Canese, L.; Cardarilli, G.C.; Di Nunzio, L.; Fazzolari, R.; Giardino, D.; Re, M.; Spanò, S. Multi-Agent Reinforcement Learning: A Review of Challenges and Applications. Appl. Sci. 2021, 11, 4948. [Google Scholar] [CrossRef]
- Foerster, J.; Assael, I.A.; de Freitas, N.; Whiteson, S. Learning to Communicate with Deep Multi-Agent Reinforcement Learning. In Proceedings of the Advances in Neural Information Processing Systems; Curran Associates, Inc.: Red Hook, NY, USA, 2016; Volume 29. [Google Scholar]
- Jiang, J.; Lu, Z. Learning Attentional Communication for Multi-Agent Cooperation. In Advances in Neural Information Processing Systems; Curran Associates: Sydney, NSW, Australia, 2018. [Google Scholar]
- Singh, A.; Jain, T.; Sukhbaatar, S. Learning When to Communicate at Scale in Multiagent Cooperative and Competitive Tasks. arXiv 2018, arXiv:1812.09755. [Google Scholar] [CrossRef]
- Wang, R.; He, X.; Yu, R.; Qiu, W.; An, B.; Rabinovich, Z. Learning Efficient Multi-Agent Communication: An Information Bottleneck Approach. In Proceedings of the 37th International Conference on Machine Learning; PMLR: Cambridge MA, USA, 2020; pp. 9908–9918. [Google Scholar]
- Sheng, J.; Wang, X.; Jin, B.; Yan, J.; Li, W.; Chang, T.-H.; Wang, J.; Zha, H. Learning Structured Communication for Multi-Agent Reinforcement Learning. Auton. Agents Multi-Agent Syst. 2022, 36, 50. [Google Scholar] [CrossRef]
- Chu, T.; Chinchali, S.; Katti, S. Multi-Agent Reinforcement Learning for Networked System Control. arXiv 2020, arXiv:2004.01339. [Google Scholar] [CrossRef]
- Hu, G.; Zhu, Y.; Zhao, D.; Zhao, M.; Hao, J. Event-Triggered Multi-Agent Reinforcement Learning with Communication under Limited-Bandwidth Constraint. arXiv 2020, arXiv:2010.04978. [Google Scholar]
- Qu, C.; Li, H.; Liu, C.; Xiong, J.; Chu, W.; Wang, W.; Qi, Y.; Song, L. Intention propagation for multi-agent reinforcement learning. arXiv 2020, arXiv:2002.07085. [Google Scholar]
- Kim, W.; Park, J.; Sung, Y. Communication in multi-agent reinforcement learning: Intention sharing. In Proceedings of the International Conference on Learning Representations (ICLR), Addis Ababa, Ethiopia, 26–30 April 2020. [Google Scholar]
- Yusriyyah, Q.N.; Aziz, A.R.H.; Setiawati, Y.; Dianasari, D.; Pradanita, V.N.; Ardani, I.G.A.I. Learning Disorder in Attention Deficit Hyperactivity Disorder (ADHD) Children: A Literature Review. Int. J. Sci. Adv. 2023, 4, 15–18. [Google Scholar] [CrossRef]
- Ballard, R.; Sadhu, J. Anticipatory Guidance for Children and Adolescents with Attention-Deficit/Hyperactivity Disorder. Pediatr. Ann. 2025, 54, e34–e39. [Google Scholar] [CrossRef] [PubMed]
- Vaughn, M. Confidently Parenting ADHD Children: Practical Tips and Guidance; Independently Published (Amazon KDP): Seattle, WA, USA, 2024. [Google Scholar]
- Cuevas, B.P.G.; Carreño, Y.A.M.; Gamboa, M.R. Integrating Scaffolding Techniques into Listening Comprehension Activities for English Language Learning in Students with ADHD. REGARD 2024, 8, 40. [Google Scholar]
- Wojciechowska, K.; Turek, M.; Jaroń, A.; Jastrzębska, K.; Witkowska, M.; Skotnicka, J.; Błaszczak, K.; Borkowski, A.; Sawicki, M. ADHD-Treatment Options and Consequences of Neglect. Qual. Sport 2024, 17, 53428. [Google Scholar] [CrossRef]
- Lyu, J.; Ishwaran, H. Commentary: To Classify Means to Choose a Threshold. J. Thorac. Cardiovasc. Surg. 2023, 165, 1443–1445. [Google Scholar] [CrossRef]
- Van den Goorbergh, R.; van Smeden, M.; Timmerman, D.; Van Calster, B. The Harm of Class Imbalance Corrections for Risk Prediction Models: Illustration and Simulation Using Logistic Regression. J. Am. Med. Inform. Assoc. 2022, 29, 1525–1534. [Google Scholar] [CrossRef]
- Rajaraman, S.; Ganesan, P.; Antani, S. Deep Learning Model Calibration for Improving Performance in Class-Imbalanced Medical Image Classification Tasks. PLoS ONE 2022, 17, e0262838. [Google Scholar] [CrossRef]
- Salvat, H.; Mohammadi, M.N.; Molavi, P.; Mostafavi, S.A.; Rostami, R.; Salehinejad, M.A. Nutrient Intake, Dietary Patterns, and Anthropometric Variables of Children with ADHD in Comparison to Healthy Controls: A Case-Control Study. BMC Pediatr. 2022, 22, 70. [Google Scholar] [CrossRef]
- Ahn, J.; Shin, J.; Park, H.; Ha, J.-W. Increased Risk of Injury and Adult Attention Deficit Hyperactivity Disorder and Effects of Pharmacotherapy: A Nationwide Longitudinal Cohort Study in South Korea. Front. Psychiatry 2024, 15, 1453100. [Google Scholar] [CrossRef]
- Hyde, C.; Fuelscher, I.; Rosch, K.S.; Seymour, K.E.; Crocetti, D.; Silk, T.; Singh, M.; Mostofsky, S.H. Subtle Motor Signs in Children with ADHD and Their White Matter Correlates. Hum. Brain Mapp. 2024, 45, e70002. [Google Scholar] [CrossRef]
- Meachon, E.J.; Klupp, S.; Grob, A. Gait in Children with and without ADHD: A Systematic Literature Review. Gait Posture 2023, 104, 31–42. [Google Scholar] [CrossRef] [PubMed]
- Downing, C.; Caravolas, M. Handwriting Legibility and Fluency and Their Patterns of Concurrent Relations with Spelling, Graphomotor, and Selective Attention Skills. J. Exp. Child Psychol. 2023, 236, 105756. [Google Scholar] [CrossRef] [PubMed]
- Katsarou, D.V.; Efthymiou, E.; Kougioumtzis, G.A.; Sofologi, M.; Theodoratou, M. Identifying Language Development in Children with ADHD: Differential Challenges, Interventions, and Collaborative Strategies. Children 2024, 11, 841. [Google Scholar] [CrossRef] [PubMed]
- Kaplan Kılıç, B.; Bumin, G.; Öğütlü, H. Effect of Telerehabilitation on Handwriting Performance in Children With Attention Deficit Hyperactivity Disorder: Randomized Controlled Trial. Child 2025, 51, e70055. [Google Scholar] [CrossRef]
- Santos, W.M.D.; Albuquerque, A.R. de Effect of Words Highlighting in School Tasks upon Typical ADHD Behaviors. Psicol. Teor. Pesqui. 2021, 37, e37302. [Google Scholar] [CrossRef]
- Namasse, Z.; Hidila, Z.; Tabaa, M.; Elhaddadi, M.; Mouchawrab, S. Cure-Free: A Free-Model Reinforcement Learning Approach for the ADHD Children. In Emerging Technologies for Developing Countries; 2026, in press.
- Amato, C. A First Introduction to Cooperative Multi-Agent Reinforcement Learning. 2024, in press.
- Zhu, C.; Dastani, M.; Wang, S. A Survey of Multi-Agent Deep Reinforcement Learning with Communication. Auton. Agents Multi-Agent Syst. 2024, 38, 4. [Google Scholar] [CrossRef]
- Amato, C. An Introduction to Centralized Training for Decentralized Execution in Cooperative Multi-Agent Reinforcement Learning. arXiv 2024, arXiv:2409.03052. [Google Scholar] [CrossRef]
- Fang, X.; Cui, P.; Wang, Q. Multiple Agents Cooperative Control Based on QMIX Algorithm in SC2LE Environment. In Proceedings of the 2020 7th International Conference on Information, Cybernetics, and Computational Social Systems (ICCSS), Guangzhou, China, 13–15 November 2020; Curran Associates, Inc.: Red Hook, NY, USA, 2020; pp. 435–439. [Google Scholar]
- U.S. Department of Agriculture. SuperTracker: Source Code and Foods Database. Available online: https://catalog.data.gov/dataset/supertracker-source-code-and-foods-database (accessed on 30 January 2026).
- Cohen, R.; Cohen-Kroitoru, B.; Halevy, A.; Aharoni, S.; Aizenberg, I.; Shuper, A. Handwriting in Children with Attention Deficient Hyperactive Disorder: Role of Graphology. BMC Pediatr. 2019, 19, 484. [Google Scholar] [CrossRef]

| Indices | Action | Explanation |
|---|---|---|
| 1 | Eat sugary food | According to [77], children with ADHD eat more sugary food than non-ADHD children. |
| 2 | Eat less protein | [77] found that children with ADHD consumed less protein-based food. |
| 3 | Eat more protein | [77] concluded that increased protein intake reduces ADHD symptoms. |
| 4 | Eat less sugary food | In agreement with [77], a decrease in sugar consumption could alleviate ADHD symptoms. |
| 5 | Fall | A systematic review cited by [78] indicates that children with ADHD sustain more fractures than those without ADHD. |
| 6 | Move clumsily | According to a study cited by [79], children with ADHD show widespread subtle motor signs and greater motor overflow than non-ADHD children. |
| 7 | Walk more slowly | Studies cited by [80] indicate that a slower walking pace could alleviate ADHD symptoms. |
| 8 | Make graphic design mistakes | According to [81], children with ADHD make more errors in drawing than neurotypical children. |
| 9 | Write incoherent text | [82] mention that children with ADHD write more incoherent texts than children without ADHD. |
| 10 | Write more often | The findings by [83] highlight that more frequent writing practice may increase attention in individuals with ADHD. |
| 11 | Highlight the words | A study conducted by [84] suggests that highlighting words can improve attention in individuals with ADHD. |
| 12 | Summarize the paragraphs | [82] propose paragraph brainstorming as an intervention to reduce inattention in individuals with ADHD. |
| Variable | Meaning | Value Range |
|---|---|---|
| DFT | Duration of focus during a task | 0: less than 10 min; 1: 10–15 min; 2: 15–20 min; 3: 20–30 min |
| WSpe | Writing speed | 0: normal; 1: high; 2: low |
| LS | Letter spacing | 0: normal (1–2 mm); 1: large; 2: narrow; 3: irregularly spaced |
| WSpa | Word spacing | 0: normal; 1: large; 2: narrow; 3: irregularly spaced |
| Pro | Proteins | 0: 0–20; 1: 21–40; 2: 41–60 |
| Kcal | Kilocalories | 0: 0–200; 1: 201–400; 2: 401–600 |
| WC | Walking cadence | 0: very slow; 1: slow; 2: normal; 3: fast; 4: very fast |
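The discretization in the table above maps directly to code. This sketch encodes three of the state variables from raw measurements; the bin edges follow the table, while the helper names, the sample input values, and the gram unit for Pro (the table gives no unit) are assumptions for illustration.

```python
def encode_dft(minutes: float) -> int:
    """Duration of focus during a task (bins from the state table)."""
    if minutes < 10:
        return 0
    if minutes < 15:
        return 1
    if minutes < 20:
        return 2
    return 3

def encode_pro(amount: float) -> int:
    """Proteins (unit assumed, e.g., grams; bins from the state table)."""
    if amount <= 20:
        return 0
    if amount <= 40:
        return 1
    return 2

def encode_kcal(kcal: float) -> int:
    """Kilocalories (bins from the state table)."""
    if kcal <= 200:
        return 0
    if kcal <= 400:
        return 1
    return 2

# Example: 12 min of focus, 35 protein, 450 kcal -> discrete state (1, 1, 2)
state = (encode_dft(12), encode_pro(35), encode_kcal(450))
```

Each agent's observation is then a small tuple of integers, which keeps the per-agent action-value tables or network inputs compact.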
| Algorithm | Value-Based or Policy-Based | Credit Assignment |
|---|---|---|
| MAPPO [50] | Policy-Based | ✕ |
| MADDPG [50] | Policy-Based | ✕ |
| IDQN [86] | Value-Based | ✕ |
| CommNet [87] | Policy-Based | ✕ |
| BicNet [87] | Policy-Based/Value-Based | ✕ |
| DIAL [50] | Value-Based | ✕ |
| COMA [50] | Policy-Based | ✓ |
| VDN [50] | Value-Based | ✓ |
| QMIX [50] | Value-Based | ✓ |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Namasse, Z.; Hidila, Z.; Tabaa, M.; Elhaddadi, M.; Mouchawrab, S. Multi-Agent Reinforcement Learning Model Simulation for Attention-Deficit Hyperactivity Disorder Children. Appl. Sci. 2026, 16, 2158. https://doi.org/10.3390/app16042158

