1. Introduction
The integration of artificial intelligence (AI) into games has revolutionized game design and the player experience, with significant advancements in recent years. From strategic decision-making in board games such as chess and Go to dynamic character behaviors in video games, AI-driven games continue to shape the modern gaming landscape. Notably, deep reinforcement learning (DRL) [1], neural networks [2], and procedural content generation (PCG) [3] have brought breakthroughs in game AI development.
Traditionally, AI in games focused primarily on rule-based systems and finite-state machines, often leading to predictable behaviors. With the advancement of machine learning (ML) techniques, however, game AI has evolved into more adaptive and human-like agents capable of learning from player interactions, generating novel content, and even competing at superhuman levels. Notable examples include AlphaGo [4], which defeated human world champions in Go using deep neural networks and Monte Carlo Tree Search (MCTS), and OpenAI Five, which mastered the complex multiplayer game Dota 2 through reinforcement learning [5].
Despite these advancements, challenges remain in AI for games, particularly in the following areas.
Asymmetric competitive games, where players assume different roles and objectives, require AI to adapt to varying strategies.
Human-like AI opponents require that AI behaviors feel realistic and engaging for players rather than simply maximizing efficiency.
Creative game mechanics call for AI that serves as an opponent, a collaborative agent, a procedural content generator, or an adaptive storyteller.
We developed an AI for Drawing Werewolf, an asymmetric competitive game in which players are assigned the roles of humans or werewolves and attempt to achieve their respective victory conditions through drawing interactions. The key requirements for the AI include recognizing and generating drawings, making SketchRNN, a recurrent neural network (RNN) specialized for drawing, an appropriate learning model for this task. However, since Drawing Werewolf requires a different approach to drawing than standard sketching, a custom dataset must be collected to train the AI model effectively. Therefore, we developed a network-based Drawing Werewolf game and established a framework to integrate AI into the system. In this study, we introduce the system architecture of the developed game and implement a simple AI in the game environment.
The remainder of this paper is structured as follows.
Section 2 reviews related works on AI in games.
Section 3 introduces the Drawing Werewolf game and the system overview.
Section 4 describes the SketchRNN model and its function to recognize and generate drawing sequences.
Section 5 presents the implementation of the AI player in the Drawing Werewolf game, together with a discussion of the remaining issues.
Section 6 discusses applications, ethical considerations, and technical limitations. Finally,
Section 7 summarizes our findings and discusses future research directions.
2. Related Works
We reviewed previous studies on asymmetric competitive games, followed by research on learning-based drawing generation.
2.1. Asymmetric Competitive Games
There has been research aimed at developing AI for asymmetric competitive games. Generally, these approaches involve collecting gameplay data from the target game and using that data to train a learning model.
As in symmetric competitive games, reinforcement learning is commonly applied in asymmetric competitive games. Deep reinforcement learning (RL) algorithms have been applied to a two-player asymmetric real-time strategy game called Hunting of the Plark, which simulates an anti-submarine warfare scenario [6]. Deep Q-network (DQN), proximal policy optimization (PPO), and advantage actor–critic (A2C) were compared, and the results showed that A2C achieved superior performance in learning speed and reward convergence. The authors introduced self-play, environment exploration, and strategy diversification to help the disadvantaged player overcome limited information and break out of equilibrium strategies.
For multiplayer asymmetric games, Sun et al. proposed a novel multi-agent deep reinforcement learning framework featuring asymmetric-evolution training (AET), which incorporates adaptive data adjustment and environment randomization to balance training between distinctly different agents [7]. Experiments on a complex asymmetrical multiplayer (AMP) game, “Tom & Jerry,” demonstrated that this method enables reinforcement learning agents to reach top human-level performance, with ablation studies confirming the role of the proposed modules. A neural-network-based adaptive critic mechanism has also been developed to investigate the optimal tracking control problem for nonlinear continuous-time multiplayer zero-sum games with asymmetric constraints [8]. A single critic neural network with a novel weight-updating rule was employed to ensure near-optimal control policies and system stability.
These studies utilized different learning models, highlighting the importance of selecting an appropriate model based on the characteristics of the target game. Focusing on a drawing-based game, we employed a well-suited model for learning and generating drawings.
2.2. Learning-Based Drawing Recognition and Generation
Unlike image data learning, which focuses on image recognition and generation, various methods have been developed for learning drawing data. Treating a drawing as a single image to be learned by deep learning models is similar to conventional image learning techniques. Castellano et al. reviewed several deep-learning approaches for pattern extraction and recognition in painting and drawing [9]. They investigated the effectiveness of methods such as convolutional neural networks (CNNs), generative adversarial networks (GANs), and RNNs on various datasets of artworks, for tasks such as artistic influence discovery and object recognition. A deep learning-based system for recognizing 2D engineering drawings has also been developed, utilizing tools such as PyTorch and YOLO [10]. The system achieved high accuracy in detecting views, detecting annotation groups, and recognizing text and symbols.
Various approaches for learning sketch data have been studied for a long time, and research in this field remains active today. Chen et al. introduced a novel approach to sketch recognition using part-based hierarchical analogical learning [11]. The method segments a given sketched object into parts to construct multi-level qualitative representations of it. The authors focused on data-efficient learning and explainability, allowing users to understand the system’s decision-making process. A model that focuses on the dynamic structure of a sketch has also been proposed. This model, SketchRNN, is designed to generate stroke-based drawings of common objects [12]. It is trained on a large dataset of human-drawn sketches to create both conditional and unconditional sketch drawings in a vector format, employing robust training methods to ensure the generation of coherent and natural-looking sketches. An advanced architecture, Sketch-R2CNN, builds upon SketchRNN by integrating an RNN with a CNN through a novel rasterization process [13]. The model takes vector sketches as input and uses an RNN to extract per-point features in the vector space. These features are converted into multi-channel point feature maps by a neural line rasterization module and subsequently processed by the CNN to extract convolutional features in the pixel space. This end-to-end learning approach significantly improves sketch recognition performance over traditional CNN baselines.
The Drawing Werewolf game is a deduction-based game that relies on dynamic drawing actions. Since SketchRNN can capture and generate dynamic sketch drawings, it is an appropriate model for the game. Therefore, we incorporated SketchRNN to implement the game’s AI.
3. Drawing Werewolf and System Overview
3.1. Drawing Werewolf
Drawing Werewolf is a game in which players take turns drawing a single-stroke sketch based on a given theme, in order to identify the imposter (the Werewolf), who does not know the theme. Each player draws twice, and after all drawings are complete, the players vote to identify the Werewolf. The humans win by correctly identifying the Werewolf through a majority vote while ensuring the Werewolf does not correctly guess the theme. The Werewolf wins either by avoiding being chosen in the majority vote or by being chosen but correctly guessing the theme.
3.2. System Architecture
The game system consists of one server and four client computers (Figure 1). Both the server and client programs are implemented in C++, and the computers communicate via transmission control protocol/internet protocol (TCP/IP) socket communication. The server-side processing flow is shown in Figure 2, while the client-side processing flow is shown in Figure 3. All image assets used in the game were self-created. Figure 4 shows the flow of the created Drawing Werewolf game (the interface language is Japanese).
The Drawing Werewolf game consists of two main modes: drawing mode and voting mode. In drawing mode, each player takes turns drawing to gather information for deducing the roles of the other players. During a player’s drawing turn, coordinate data are continuously sent to the server, which forwards the received coordinates to all clients. Each client renders the drawing on the canvas upon receiving coordinate information from the server. After each player has drawn twice, the game transitions to voting mode. In this mode, players cast their votes by clicking on the player they suspect to be the Werewolf. The player(s) with the most votes receive a penalty mark above their icons. If none of the marked players is the Werewolf, the Werewolf wins; if the Werewolf is among the marked players, the humans win. The current version of the system omits the theme-guessing phase that typically follows.
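The voting rule described above can be sketched as a small function. This is an illustrative Python sketch, not the actual C++ implementation; the function and argument names are our own, and the theme-guessing phase (omitted in the current system) is not modeled.

```python
from collections import Counter

def resolve_votes(votes, werewolf):
    """Determine the winner of a round from the cast votes.

    votes    -- list of voted-for player IDs, one entry per voter
    werewolf -- ID of the player assigned the Werewolf role
    Returns "humans" if the Werewolf is among the most-voted
    (penalty-marked) players, otherwise "werewolf".
    """
    tally = Counter(votes)
    top = max(tally.values())
    marked = {player, for_count in ()} if False else {p for p, n in tally.items() if n == top}
    return "humans" if werewolf in marked else "werewolf"
```

For example, with votes split two–two between Players 1 and 4 while Player 2 is the Werewolf, `resolve_votes([1, 4, 1, 4], werewolf=2)` returns `"werewolf"`, matching the outcome described in Section 5.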
3.3. Required Functions for AI Player
The AI player in Drawing Werewolf must be able to adapt its behavior based on its assigned role (human or Werewolf).
- 1.
When the AI is a human:
Draw according to the given theme while ensuring clarity.
Add to other players’ drawings in a way that aligns with the theme.
Subtly indicate its human role to other players while concealing the theme from the Werewolf.
- 2.
When the AI is the Werewolf:
If drawing first, create a neutral sketch that aligns with the general category (e.g., for the category “vehicles,” drawing a tire).
If not drawing first, infer the theme from the other players’ drawings and attempt to blend in convincingly.
- 3.
General AI capabilities:
Recognize and analyze previous drawings to make informed drawing decisions.
Generate smooth, sequential sketches using a SketchRNN-based model.
Adapt its strategy dynamically based on the game state.
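The role-dependent requirements above can be organized as a simple dispatch. The following is an illustrative sketch only, not the implemented system; all names are hypothetical, and the theme-inference step is a placeholder for a recognition model.

```python
def choose_action(role, is_first_drawer, theme, category, drawings):
    """Illustrative dispatch of the role-dependent behaviors.

    role            -- "human" or "werewolf"
    is_first_drawer -- True if no one has drawn yet
    theme           -- the secret theme (None for the Werewolf)
    category        -- the public category, e.g. "vehicles"
    drawings        -- strokes drawn so far by the other players
    """
    if role == "human":
        # Draw on-theme, clearly enough for humans but without
        # giving the theme away to the Werewolf.
        return ("draw_theme", theme)
    if is_first_drawer:
        # No information yet: sketch something neutral that fits
        # the general category (e.g. a tire for "vehicles").
        return ("draw_category_neutral", category)
    # Infer the theme from the other players' drawings and blend in.
    return ("draw_inferred", infer_theme(drawings, category))

def infer_theme(drawings, category):
    # Placeholder for a recognition model (e.g. a SketchRNN-based
    # classifier); here it simply admits ignorance.
    return "unknown"
```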
In this study, we developed an AI model for the human role in Drawing Werewolf, which draws according to the given theme. We utilized SketchRNN as the learning model and trained it on a subset of the “Quick, Draw!” dataset.
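SketchRNN and the “Quick, Draw!” data represent each sketch as a sequence of pen offsets (the stroke-3 format): each step is (Δx, Δy, pen_lifted). The helper below is our own illustration of this conversion, not part of the actual training pipeline (which, like the game, is separate from the C++ system); Python is used for brevity.

```python
import numpy as np

def to_stroke3(strokes):
    """Convert absolute-coordinate strokes to the stroke-3 format
    used by SketchRNN: each row is (dx, dy, pen_lifted), with
    pen_lifted = 1 on the final point of a stroke (the pen leaves
    the paper after that point).

    strokes -- list of (xs, ys) coordinate lists, one per stroke
    """
    points = []
    for xs, ys in strokes:
        for i, (x, y) in enumerate(zip(xs, ys)):
            points.append((x, y, 1.0 if i == len(xs) - 1 else 0.0))
    xy = np.array(points, dtype=np.float32)
    deltas = xy.copy()
    # Offsets from the previous point; the first point is measured
    # from the origin (0, 0).
    deltas[1:, :2] = xy[1:, :2] - xy[:-1, :2]
    return deltas
```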
4. SketchRNN
SketchRNN is a type of RNN specifically designed for drawing tasks. The structure of SketchRNN in the developed system is as follows.
4.1. Model Overview
SketchRNN consists of an encoder and a decoder; an overview of the architecture is shown in Figure 5.
The encoder is responsible for recognizing the input drawing sequence. To improve recognition accuracy, a bidirectional learning mechanism is used: the sequence is processed in both the forward and backward directions, and the resulting information is merged into a unified vector representation. When the input sequence passes through the encoder, the model estimates the mean μ and standard deviation σ of the latent distribution. These parameters are then combined with noise sampled from a Gaussian distribution N(0, 1) to sample latent vectors, which are fed into the decoder to generate a drawing sequence represented by the sampled parameters.
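The sampling step described above is the standard reparameterization trick of variational autoencoders: z = μ + σ ⊙ ε with ε ~ N(0, I). A minimal NumPy sketch follows; the exp(σ̂/2) parameterization and the 128-dimensional latent size follow the original SketchRNN paper, while the function itself is illustrative.

```python
import numpy as np

def sample_latent(mu, sigma_hat, rng):
    """Reparameterized sampling of the latent vector z.

    The encoder outputs mu and sigma_hat; as in SketchRNN,
    sigma = exp(sigma_hat / 2) keeps the deviation positive, and
    z = mu + sigma * eps with eps ~ N(0, I), so z follows the
    estimated distribution while remaining differentiable with
    respect to mu and sigma during training.
    """
    sigma = np.exp(sigma_hat / 2.0)
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps

rng = np.random.default_rng(0)
z = sample_latent(np.zeros(128), np.zeros(128), rng)  # 128-dim latent
```

Sampling different ε values from the same μ and σ yields different latent vectors, which is why the decoder can generate varied drawings from the same partial sketch (see Figure 6).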
4.2. Drawing Sequence Generation
The decoder in SketchRNN is designed to generate a drawing sequence from a given latent vector. There are three ways to generate such drawings.
Generate a random drawing by inputting a randomly sampled latent vector.
Reconstruct a full drawing using a latent vector derived from the encoder.
Complete a partial drawing, using the encoder’s latent vector obtained from incomplete input strokes.
In all three cases, the decoder calculates the drawing sequence step by step. First, the latent vector is input along with the initial state, and the model outputs the initial stroke information (including the pen coordinates and pen state) that defines the beginning of the drawing. This output, along with the latent vector, is then fed into the next step to generate the next state. This process is repeated until the entire drawing sequence is produced. While the method of obtaining the latent vector varies among the three approaches, the generation process itself remains the same across all of them.
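The step-by-step generation loop can be summarized as follows. Here `decoder_step` stands in for the trained RNN cell, and the stroke is simplified to (Δx, Δy, pen state) with pen state 2 as the end-of-sketch marker; both are illustrative assumptions (the actual SketchRNN encodes three pen states as a one-hot vector).

```python
import numpy as np

def generate_sequence(z, decoder_step, init_state, max_steps=250):
    """Step-by-step decoding of a drawing sequence.

    At each step the previous stroke and the latent vector z are
    fed to the decoder, which returns the next stroke (dx, dy,
    pen state) and its updated hidden state.  Decoding stops when
    the end-of-sketch pen state (here: 2) is emitted or max_steps
    is reached.
    """
    stroke = np.zeros(3)  # start-of-sequence stroke
    state = init_state
    sequence = []
    for _ in range(max_steps):
        stroke, state = decoder_step(np.concatenate([stroke, z]), state)
        sequence.append(stroke)
        if stroke[2] >= 2:
            break
    return np.array(sequence)
```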
Figure 6 shows an example of drawing with SketchRNN: the model completes a sketch of a “cat” that was initially drawn by a human. In the figure, the human first sketches the ears of the cat. This partial sketch is processed by the encoder, and the decoder generates the complete drawing shown in the bottom left of the figure. Since the decoder receives a value sampled from the mean and variance estimated by the encoder, different drawings are generated by sampling different latent values. Next, the human adds the outline of the cat’s head, and the result generated by SketchRNN from this updated sketch is shown in the bottom right. This demonstrates that SketchRNN can generate various drawing sequences from partial input sketches.
5. AI Player in Drawing Werewolf
As the first step toward building an AI player for the Drawing Werewolf game, we implemented the following features for the AI client. A gameplay scene from an actual game is shown in Figure 7. The AI sketched the theme “cat” using a pre-generated sequence from SketchRNN, which was loaded during gameplay. In voting mode, the AI selects one player at random to vote for.
5.1. Gameplay
The AI player’s screen is shown in the top-right section of Figure 7; in this game, Player 2 was the Werewolf and the AI player’s role was human. We trained SketchRNN on “cat” data from the “Quick, Draw!” dataset and pre-generated a drawing sequence of a cat using random latent variables. In this game scenario, the AI player drew first, outlining the cat as the first stroke during the first round (Figure 7, top left). After the other players took their turns, the AI drew again during the second round, adding the cat’s ears as the second stroke (Figure 7, top right). Following the second round of drawing by all players, the game entered voting mode (Figure 7, bottom left). The AI randomly selected Player 4. The final votes were two for Player 1 and two for Player 4. Since both Player 1 and Player 4 were human and the actual Werewolf was Player 2, the Werewolf won the game.
5.2. Discussion
We built the foundation for an AI player in Drawing Werewolf, and as shown in Figure 7, it operated correctly. The remaining functions listed in Section 3.3 still need to be implemented to complete the AI player’s functionality.
One of the challenges in completing the AI player is adjusting its drawing behavior according to both its role and the current game state. For the human role, the AI must be capable of interpreting a given theme and expressing it visually in a way that is understandable to other human players yet vague enough to prevent the Werewolf from inferring the theme. Achieving this balance requires the AI to develop a nuanced understanding of both visual abstraction and social deception, which remains an open challenge in generative AI.
In the case of the Werewolf role, the AI must infer the theme solely from the partial drawings of other players. This requires integrating sketch recognition capabilities with probabilistic reasoning to generate plausible follow-up sketches that blend in with the group without betraying the AI’s lack of knowledge. Implementing this requires hybrid models that combine sequence generation (e.g., SketchRNN) with classification or attention-based architectures capable of thematic estimation. Furthermore, improvements to the AI’s voting behavior are necessary. The current implementation randomly selects a player to vote for, which does not reflect strategic reasoning. Future iterations of the AI need to incorporate rule-based logic, statistical modeling, or even reinforcement learning to evaluate drawing patterns and make informed voting decisions. Incorporating an interpretability component would also enhance human trust in the AI’s decisions during gameplay.
Ultimately, developing a fully functioning AI player for Drawing Werewolf demands advancements in generative drawing, role-based strategy modeling, and real-time decision-making. By addressing these challenges, the AI could play competitively and contribute meaningfully to the social and creative dynamics of the game.
6. Applications and Ethical Considerations
6.1. Applications in Creative and Educational Contexts
The Drawing Werewolf game and its AI system introduce novel possibilities for combining generative drawing models with social deduction mechanics. Beyond entertainment, such systems can be applied to contexts in which collaborative creativity, ambiguity interpretation, and multi-agent reasoning are required. For instance, education and training environments could benefit from this type of AI to support creative exercises involving drawing, problem-solving, and decision-making. A game-based AI that mimics human ambiguity or deception in drawings could enhance critical thinking or team collaboration skills among learners. In human–computer interaction studies, this system provides a unique testbed for investigating how users interpret and respond to AI-generated visual expressions within a social context. The drawing-based nature of the communication offers a modality for exploring non-verbal human–AI interaction.
6.2. Ethical Implications in Deceptive AI Design
From an ethical perspective, careful attention must be paid to transparency and user expectations. As AI begins to participate in games that involve bluffing or deception, it becomes necessary to consider how players perceive fairness, agency, and trust in such environments. Designing AI that is both competent and ethically aligned requires ongoing dialogue between game designers, AI researchers, and ethicists.
6.3. Limitations and Technical Challenges
While the current implementation demonstrates the feasibility of integrating generative drawing models in a social deduction game, several technical limitations remain.
First, the SketchRNN model, while effective for generating coherent stroke-based sketches, is limited in its ability to generate complex or highly detailed images. This constraint might reduce the expressiveness of AI-drawn content, especially in themes that require fine distinctions (e.g., “lion” vs. “cat”).
Second, due to the pre-generation approach, the AI currently lacks real-time adaptability. All sketches are generated in advance, meaning the AI cannot dynamically respond to the drawings of other players. Real-time generation conditioned on the evolving context is necessary for more nuanced interaction and deception strategies.
Third, the system currently assumes a consistent drawing style across all players. In practice, variations in human sketching styles might lead to biased interpretations or mismatches when AI attempts to mimic or respond to human drawings.
Finally, evaluating the effectiveness of AI decisions, especially in the voting phase, remains a significant challenge. Without ground truth or clear correctness criteria, quantifying AI performance in socially complex games requires innovative evaluation frameworks that incorporate qualitative metrics, such as believability or perceived intention.
Addressing these technical issues is critical for building AI agents that can engage in socially rich and visually grounded gameplay scenarios.
7. Conclusions and Future Work
We developed an AI player for the Drawing Werewolf game—an asymmetric multiplayer game that integrates drawing, deduction, and deception. We implemented a foundational AI agent capable of performing basic drawing behavior as a human-role player, using a SketchRNN model trained on the “Quick, Draw!” dataset. Through experimental gameplay, we confirmed that the AI operated effectively within the game system, producing coherent sketches aligned with the given theme and participating in the voting phase. As a first step, the developed AI generated pre-generated drawing sequences and performed random voting. While this validated the integration of SketchRNN and the game’s communication framework, the following aspects need further development.
Enhance the drawing logic by enabling the AI to interpret themes dynamically and adapt to the evolving game context.
Implement strategic decision-making in the voting phase, allowing the AI to evaluate and reason about the other players’ drawings.
Extend the AI’s functionality to cover the Werewolf role, which requires inferring the hidden theme from the other players’ contributions.
Incorporate context-aware models, such as attention mechanisms or multi-modal learning, to better capture sketch semantics.
Explore cooperative learning approaches and self-play frameworks to iteratively improve the AI’s behavior in social gameplay settings.
These enhancements would enable a robust and socially aware AI player to participate meaningfully in creative, multi-agent games and would offer broader insights into human–AI interaction in playful environments.
Author Contributions
Conceptualization, N.O. and S.N. (Shun Nishide); methodology, N.O. and S.N. (Shun Nishide); software, N.O., S.N. (Sota Nishiguchi) and S.N. (Shun Nishide); validation, N.O., S.N. (Sota Nishiguchi) and S.N. (Shun Nishide); formal analysis, N.O., S.N. (Sota Nishiguchi) and S.N. (Shun Nishide); investigation, N.O. and S.N. (Shun Nishide); resources, N.O. and S.N. (Sota Nishiguchi); data curation, N.O. and S.N. (Sota Nishiguchi); writing—original draft preparation, N.O. and S.N. (Shun Nishide); writing—review and editing, N.O. and S.N. (Shun Nishide); visualization, N.O. and S.N. (Shun Nishide); supervision, S.N. (Shun Nishide); project administration, S.N. (Shun Nishide); funding acquisition, S.N. (Shun Nishide). All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by JSPS KAKENHI, grant number JP23K11277.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
The data presented in this study is available on request from the corresponding author.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Shao, K.; Tang, Z.; Zhu, Y.; Li, N.; Zhao, D. A Survey of Deep Reinforcement Learning in Video Games. arXiv 2019, arXiv:1912.10944. [Google Scholar] [CrossRef]
- Liu, C.; Zhu, E.; Zhang, Q.; Wei, X. Modeling of Agent Cognition in Extensive Games via Artificial Neural Networks. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 4857–4868. [Google Scholar] [CrossRef] [PubMed]
- Bomström, H.; Kelanti, M.; Lappalainen, J.; Annanperä, E.; Liukkunen, K. Synchronizing Game and AI Design in PCG-Based Game Prototypes. In Proceedings of the 15th International Conference on the Foundations of Digital Games, Bugibba, Malta, 15–18 September 2020; Volume 24, pp. 1–8. [Google Scholar]
- Silver, D.; Huang, A.; Maddison, C.J.; Guez, A.; Sifre, L.; Van Den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; et al. Mastering the game of Go with deep neural networks and tree search. Nature 2016, 529, 484–489. [Google Scholar] [CrossRef] [PubMed]
- Berner, C.; Brockman, G.; Chan, B.; Cheung, V.; Dębiak, P.; Dennison, C.; Farhi, D.; Fischer, Q.; Hashme, S.; Hesse, C.; et al. Dota 2 with Large Scale Deep Reinforcement Learning. arXiv 2019, arXiv:1912.06680. [Google Scholar] [CrossRef]
- Dasgupta, P.; Kliem, J. Improved Reinforcement Learning in Asymmetric Real-time Strategy Games via Strategy Diversity. Int. J. Serious Games 2023, 10, 19–38. [Google Scholar] [CrossRef]
- Sun, C.; Zhang, Y.; Zhang, Y.; Lu, Z.; Liu, J.; Xu, S.; Zhang, W. Mastering Asymmetrical Multiplayer Game with Multi-Agent Asymmetric-Evolution Reinforcement Learning. arXiv 2023, arXiv:2304.10124. [Google Scholar]
- Qiao, J.; Li, M.; Wang, D. Asymmetric Constrained Optimal Tracking Control With Critic Learning of Nonlinear Multiplayer Zero-Sum Games. IEEE Trans. Neural Netw. Learn. Syst. 2022, 35, 5671–5683. [Google Scholar] [CrossRef] [PubMed]
- Castellano, G.; Vessio, G. Deep learning approaches to pattern extraction and recognition in paintings and drawings: An overview. Neural Comput. Appl. 2021, 33, 12263–12282. [Google Scholar] [CrossRef]
- Lin, Y.; Ting, Y.; Huang, Y.; Cheng, K.; Jong, W. Integration of Deep Learning for Automatic Recognition of 2D Engineering Drawings. Machines 2023, 11, 802. [Google Scholar] [CrossRef]
- Chen, K.; Forbus, K.; Srinivasan, B.V.; Chhaya, N.; Usher, M. Sketch Recognition via Part-based Hierarchical Analogical Learning. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, Macao, China, 19–25 August 2023; pp. 2967–2974. [Google Scholar]
- Ha, D.; Eck, D. A Neural Representation of Sketch Drawings. arXiv 2017, arXiv:1704.03477. [Google Scholar] [CrossRef]
- Li, L.; Zou, C.; Zheng, Y.; Su, Q.; Fu, H.; Tai, C. Sketch-R2CNN: An RNN-Rasterization-CNN Architecture for Vector Sketch Recognition. IEEE Trans. Vis. Comput. Graph. 2020, 27, 3745–3754. [Google Scholar] [CrossRef] [PubMed]
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).