Article

Designing Co-Creative Systems: Five Paradoxes in Human–AI Collaboration

1 Computer Science Department, Universidad Rey Juan Carlos, 28001 Madrid, Spain
2 Applied Mathematics Department, Universidad Rey Juan Carlos, 28001 Madrid, Spain
* Author to whom correspondence should be addressed.
Information 2025, 16(10), 909; https://doi.org/10.3390/info16100909
Submission received: 17 September 2025 / Revised: 7 October 2025 / Accepted: 14 October 2025 / Published: 17 October 2025
(This article belongs to the Special Issue Emerging Research in Computational Creativity and Creative Robotics)

Abstract

The rapid integration of generative artificial intelligence (AI) into creative workflows is transforming design from a human-driven activity into a synergistic process between humans and AI systems. Yet, most current tools still operate as linear “executors” of user commands, which fundamentally clashes with the non-linear, iterative, and ambiguous nature of human creativity. Addressing this gap, this article introduces a conceptual framework of five irreducible paradoxes—ambiguity vs. precision, control vs. serendipity, speed vs. reflection, individual vs. collective, and originality vs. remix—as core design tensions that shape human–AI co-creative systems. Rather than treating these tensions as problems to solve, we argue they should be understood as design drivers that can guide the creation of next-generation co-creative environments. Through a critical synthesis of existing literature, we show how current executor-based AI tools (e.g., Microsoft 365 Copilot, Midjourney) fail to support non-linear exploration, refinement, and human creative agency. This study contributes a novel theoretical lens for critically analyzing existing systems and a generative framework for designing human–AI collaboration environments that augment, rather than replace, human creative agency.

1. Introduction

The integration of generative artificial intelligence (AI) into creative workflows is transforming design from a predominantly human-driven activity into a synergistic process between humans and AI systems, a paradigm known as human–AI co-creation [1]. We define human–AI co-creation as a collaborative partnership where AI and humans operate as a cohesive system, engaging in a dynamic interchange to produce outcomes that exceed the creative potential of any single agent [2,3]. This differs fundamentally from a tool-based model, where AI executes deterministic commands. Instead, in a co-creative model, AI contributes generatively and interpretively throughout the creative process, acting as an active collaborator rather than a passive instrument [4,5]. This paradigm shift necessitates a deeper understanding of the informational dynamics, challenges, and prerequisites required for such partnerships to truly augment human creativity [6].
As AI becomes increasingly integrated into everyday human activities, we need to reevaluate how we perceive and use this rapidly evolving technology. Non-academic discourse posits that AI has the potential to automate repetitive work, freeing up human attention for more creative endeavors. However, the rise of creative AI is blurring traditional automation boundaries. The authors of [2] outline three different uses for creative AI: (1) comprehension (since creative processes require some level of comprehension), (2) representation (using artificial intelligence to fill in the gaps in existing datasets), and (3) generation (including text-to-image synthesis and visual transformation to create original outputs). In essence, by integrating (generative) AI tools, systems, and agents, human–AI co-creativity has the potential to extend human creative capabilities well beyond what is typical for (non-enhanced) human creativity. This paradigm shift demands a more thorough understanding of these co-creative relationships, their attendant difficulties, and the prerequisites for augmenting human creativity with AI [3]. A collaborative model, in which AI contributes generatively and interpretively to the creative process, has replaced a tool-based model, where AI merely executes deterministic commands, much as a search engine retrieves results.
From the standpoint of information science, this change necessitates a reassessment of how information is exchanged, understood, and transformed in a human–AI partnership. Whereas the AI needs precise, low-level information inputs (prompts, parameters), the human provides ambiguous, high-level information needs (vision, intent, style) [4]. Many collaboration failures stem from this basic mismatch in information behavior. This study contends that, in order to enable the intricate information exchanges that define creative labor (exploration, reflection, and serendipitous discovery), effective co-creative systems must be built as information-rich environments that can translate across this gap.
However, most current AI tools still operate as linear ‘executors’ of user commands, which fundamentally clashes with the non-linear, iterative, and ambiguous nature of human creativity [7,8]. We contend that this mismatch points to deeper, inherent tensions in the collaboration. This article argues that the design space for human–AI co-creation is fundamentally shaped by a set of irreducible paradoxes. These paradoxes—which we identify as ambiguity vs. precision, control vs. serendipity, speed vs. reflection, individual vs. collective, and originality vs. remix—are not problems to be solved but essential dynamics to be managed. While the limitations of current ‘executor-model’ AI tools are well-documented—such as their linear workflow, lack of support for refinement, and struggle with ambiguity—the prevailing research approach has been to address these as discrete technical or interaction-level problems to be solved. For instance, studies focus on improving prompt engineering [4], mitigating specific user experience pitfalls [9], or modeling interaction patterns [10].
However, this problem-solving approach overlooks a more fundamental issue: these limitations are not isolated flaws but surface manifestations of deeper, irreducible tensions inherent in the collaboration between human and artificial cognition. The current paradigm lacks a conceptual framework that explains why these tensions exist and how they can constructively shape design, rather than being eliminated.
Therefore, the critical research gap this article addresses is the lack of a foundational theoretical lens for understanding the core, paradoxical tensions that define the design space of human–AI co-creativity. Without this lens, system design remains reactive, focusing on patching specific interaction failures without guiding the creation of truly synergistic co-creative environments.
This article develops a conceptual framework for the design of co-creative systems between humans and artificial intelligence. We contend that these tensions should be viewed as fundamental design forces rather than as issues to be resolved. This framework offers both a theoretical lens for examining current systems and a generative roadmap for developing next-generation co-creative environments that enhance, rather than replace, human creative agency.

The Shift from AI as a Tool to AI as a Collaborator

Collaboration between humans and creative systems is crucial because it boosts creativity, introduces new viewpoints, promotes continuous learning, and helps solve complex problems. For a system to be deemed autonomously creative, it must be capable of creative behavior, such as generating original concepts or solutions on its own without human assistance [6]. This raises the question of whether generative AI tools are inherently creative. The foundation of this type of creativity is machine learning, which gives algorithms the ability to learn, adapt, and react in ways that can be considered “intelligent”—and hence, potentially creative [7]. But the debate over whether technical systems are truly creative goes beyond science and becomes a philosophical discussion about appearing versus being. This discussion centers on the possible drawbacks of generative AI. One perspective holds that AI’s dependence on pre-existing data limits it to exhibiting “incremental creativity,” raising doubts about the breadth and genuineness of its creative output [11,12]. Non-academic discourse posits that genuine creativity is an exclusive domain of humankind, contingent upon our singular ability to experience profound emotions and exercise empathy [10].
Throughout the creative process, humans employ a wide variety of creative techniques, thought processes, and concepts, and the final product evolves dynamically over time. The agent must be flexible in order to keep up with this constant flow of ideas. Furthermore, the role and interactions of the co-creative AI are not always explicit throughout the co-creation process. For instance, the human may want to take the lead and let the AI assist with certain duties. At other times, the human may wish for the AI to assume a more proactive role, generating unexpected ideas or working semi-autonomously within a defined context to explore a solution space [13]. Many generative design methods currently in use are inadequate at accounting for human factors, which restricts their capacity to address the full range of human skills, limitations, and affective reactions that must be considered in order to advance truly human-centered product and service innovation [14]. Recent studies have also identified critical issues such as role ambiguity, cognitive overload, authorship, uncertainty about outcomes, and control conflicts between users and AI agents [8,15,16].
Generative AI is ushering in a new era known as “human–AI co-creation,” in which AI ceases to be a passive instrument and instead becomes an active, cooperative partner. The goal of this collaboration is to generate collective creativity that is superior to what either AI or humans could achieve on their own. We believe that although AI can improve human creativity and automate activities, its development blurs conventional lines and calls for a better comprehension of co-creative dynamics. Because of the significant shortcomings caused by this mismatch, and building on prior studies [5,13,17,18], we propose five irreducible paradoxes—ambiguity vs. precision, control vs. serendipity, speed vs. reflection, individual vs. collective, and originality vs. remix—that fundamentally shape the design space for human–AI co-creative systems. We suggest that these paradoxes offer a critical lens through which to examine current systems and produce fresh design frameworks, and that they are not issues to be resolved but rather necessary tensions to be managed.

2. Related Work

Although HCI has a long history of researching tools that support creativity, the advent of powerful generative AI has shifted the paradigm from tools that support creativity to systems that actively participate in it. The theoretical foundation for human–computer creative collaborations was established by early research on computational creativity and co-creative systems [19,20,21], which frequently concentrated on domains with strict definitions or rules. By contrast, current research faces the difficulties posed by large-scale, data-driven generative models, which provide unprecedented capabilities but also create new obstacles in the creative process [5,6].
Creativity is an essential component of graphic design, allowing designs to stand out from the competition and capture consumers’ attention [22]. Critical issues such as role ambiguity, cognitive overload, and control conflicts between users and AI agents have been identified in recent studies. As AI technology develops rapidly, a variety of generative design tools have emerged to assist the creative design process [18]. These technologies can interact with users and optimize design outcomes in response to textual instructions. However, because they are unaware of the natural cooperation process in creative design, they can deliver a poor user experience [23].
Since both humans and computers take the initiative in the creative process and collaborate as co-creators, the idea of co-creative systems was born, merging standalone generative systems with creativity-support tools [24]. The creative process in a co-creative system is complex and emergent due to the interaction between the AI agent and the human. According to one study, creativity that results from human–computer interaction cannot be attributed to either party alone and exceeds the initial goals of both parties, because new ideas are generated during the encounter [25]. Researching human creative collaboration can serve as a solid foundation for exploring issues related to modeling an efficient interaction design for co-creative systems [19]. Other authors argue that understanding the elements of human collaborative creativity can lay the groundwork for computer-based systems that support or improve collaborative creativity [20].
Recent information science work has begun to examine the ramifications of generative AI. One study [3] argues that ‘getting humans back in the loop’ is necessary to ensure human agency in socio-technical systems, framing AI engagement through the lens of affordances. This is consistent with our criticism of AI’s passive ‘executor’ position, which allows only a limited range of information sharing. Additionally, studies on information interaction with generative models reveal that prompt engineering is a major information retrieval difficulty for users [4]. Having to translate internal information needs into a syntactic format the AI can grasp can itself hinder creativity. This informational friction is directly addressed in our discussion of the ambiguity vs. precision paradox. While models such as COFI [13] excel at modeling interaction patterns, our paradoxes offer a higher-level conceptual framework that explains why those patterns are tense, providing a complementary lens anchored in the fundamental informational conflicts of co-creation.
To address the gaps researchers face during human–AI collaboration, this study identifies several shortcomings of current tools. Essentially, the very behaviors that give design its unique character are actively hampered by AI’s linear workflow. The user is compelled to behave as an exact “command coder” instead of an adventurous “creative partner”. Instead of exercising imagination, the user expends mental energy crafting the ideal prompt. AI agents in well-known programs like Copilot [21] and Midjourney [26] require exact instructions and do not make it easy to mash up different possibilities. Designers frequently rework their prompts and regenerate images until they obtain outcomes that are largely satisfactory.

2.1. The Executor Model and Its Limitations for Exploration

The creative process becomes a non-linear, iterative loop when prompts can be manually refined and hybridized. Ultimately, however, the AI’s training data and algorithms limit this non-linearity. When contrasted with the open-ended, associative, and frequently intuitive non-linearity of human creativity itself, this basic limitation emphasizes the inadequacy of the AI tool’s non-linearity. The natural non-linear movement between creative design stages [27,28,29,30], which entails drawing inspiration from multiple simultaneous iterations and sometimes changing initial concepts along the way [31], is hampered by such a turn-based linear approach. Because of their need for exact commands and limited operations on results, current AI-assisted tools consequently fail to meet these design requirements.
To work around this issue, the user must exit the loop, craft a new command, and restart the process, which results in a workflow that is sequential and incremental by nature. Every stage is a separate, discrete transaction, which is at odds with the defining properties of non-linearity: divergent thinking, emergence, iteration, and revision. The user bears the full cognitive load of monitoring the creative state and tracking conceptual lineages; even a dedicated user can only mimic a non-linear process by initiating new, isolated conversations and manually curating results. This supports our claim that existing systems merely permit non-linear workflows that they do not natively support.
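As a purely illustrative sketch (our own construction, not a description of any existing tool), the kind of native support that is missing here can be expressed as a branching data structure in which every prompt forks from an earlier creative state, so that the system, rather than the user, tracks conceptual lineages:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CreativeState:
    """One node in the exploration history: a prompt and its result."""
    prompt: str
    result: str
    parent: Optional["CreativeState"] = None
    children: list = field(default_factory=list)

    def branch(self, prompt: str, result: str) -> "CreativeState":
        """Fork a new line of inquiry from this state (non-linear, not turn-based)."""
        child = CreativeState(prompt, result, parent=self)
        self.children.append(child)
        return child

    def lineage(self) -> list:
        """Recover the conceptual lineage the user would otherwise track by hand."""
        node, path = self, []
        while node is not None:
            path.append(node.prompt)
            node = node.parent
        return list(reversed(path))

# A user explores two divergent branches from the same starting concept,
# something an executor-style chat interface would force into two separate,
# unrelated conversations.
root = CreativeState("a quiet seaside town", "<image A>")
warm = root.branch("warmer palette, dusk light", "<image B>")
cold = root.branch("overcast, muted tones", "<image C>")
```

In such a structure, backtracking and recombination become cheap operations on the tree rather than manual bookkeeping by the user.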
In addition to criticizing the executor model, recent studies have started to catalogue particular interaction-level pitfalls in human–AI co-creative systems. ‘Invisible AI Boundaries,’ ‘Conflicts of Territory,’ and ‘Agony of Choice,’ for example, are among the nine potential pitfalls identified by [9], which concretize poorly designed user experiences. By showing how problems like limited expressiveness (a failure to handle the ambiguity vs. precision dilemma) or AI overwriting human input (a failure in the control vs. serendipity tension) materialize in real-world interactions, these pitfalls empirically ground the tensions we perceive. Our framework of paradoxes seeks to explain why these tensions are essential and irreducible in the first place, offering a generative lens for design rather than just a corrective one, even though their work offers a useful taxonomy of what goes wrong.
This executor model is evident in popular AI tools, albeit with slight variations. Microsoft 365 Copilot [21] operates as a direct command-executor within a productivity suite, generating text or slides based on explicit instructions. Midjourney [26], while generating multiple image options, still requires precise prompting and treats each interaction as a largely isolated transaction, placing the burden of curation and lineage tracking on the user. Even conversational agents like ChatGPT-3.5, which can simulate a more collaborative dialogue, often function fundamentally as executors at their core—they require users to translate creative intent into effective prompt sequences and typically generate a single, streamed response per turn. Despite their differences, these tools share the fundamental limitation of the executor model: they respond to commands rather than proactively engaging in the open-ended, exploratory dialogue characteristic of human creative partnership.

2.2. Lack of Support for Refinement

The relationship between human and artificial intelligence in terms of creativity is complex and cannot be reduced to a simple hierarchy. Empirical evidence indicates that while the most exceptional human creators often outperform AI [32,33], artificial intelligence can generate results that exceed those of an average human on standardized creativity assessments [13,34]. It is also critical to note that human creativity is not inherently superior in all contexts; it can be constrained by cognitive fixedness and a tendency toward conformity, particularly in collaborative environments. Therefore, the essential distinction lies not in a quantitative score, but in the qualitative origin of the creative act: AI creativity emerges from a computational recombination of statistical patterns, whereas human creativity is frequently rooted in embodied, subjective experience and sensory-informed cognition [33]. This foundational disparity highlights the current limitations of AI in navigating ambiguity with the nuanced understanding characteristic of humans.
There are several reasons for this. For instance, these tools demand precise input, yet users often provide vague and unclear language, leading to failure. This constitutes another insufficiency: the lack of support for refinement. Recent research comparing AI and human creative outputs has found that modern chatbots like ChatGPT achieve human-level creativity, surpassing average human responders in psychological tasks such as the Ambiguous task [10,32] or the Torrance test [34]. All of this research has also demonstrated, however, that even the most advanced AI systems cannot match the performance of the best human performers. One limitation of these studies is their use of somewhat dated creativity tests, for which a sizable amount of information and examples are available online. These items may have served as training data for the chatbot, which would then reproduce them when given the task. In this scenario, the chatbot’s creative output would be a memory retrieval from stored training data rather than representing genuine creative behavior. Divergent thinking has also been assessed using the Figural Interpretation Quest (FIQ) [33], a multimodal evaluation tool that measures creative potential through the interpretation of ambiguous, abstract forms [35]. In contrast to conventional text-based assessments, the FIQ engages participants’ visual processing and mental imagery by requiring them to generate creative meanings for non-representational figures. This task measures core components of creativity, including the originality, fluency, and flexibility of the generated ideas. A central feature of the FIQ is that participants must provide two distinct interpretations for each figure, aiming for maximum semantic dissimilarity. The semantic distance between these paired responses is a primary indicator of cognitive flexibility, where a larger distance reflects a greater capacity to perceive the stimulus from varied conceptual viewpoints [36].
The FIQ has established predictive validity for forecasting originality in domain-specific contexts, positioning it as a valuable complement to standard verbal tests of divergent thinking.
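To make the scoring principle concrete, semantic distance between a participant’s two interpretations can be approximated, very crudely, with a bag-of-words cosine distance. This is a toy sketch under our own simplifying assumptions, not the FIQ’s actual implementation, which relies on trained semantic-space models; the point is only that a more dissimilar pair of interpretations yields a larger distance:

```python
import math
from collections import Counter

def semantic_distance(a: str, b: str) -> float:
    """Crude proxy for semantic dissimilarity: 1 - cosine similarity of
    word-count vectors. Real FIQ scoring uses trained semantic models;
    word overlap is only a stand-in to illustrate the principle."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)  # Counter returns 0 for missing words
    norm = math.sqrt(sum(c * c for c in va.values())) \
         * math.sqrt(sum(c * c for c in vb.values()))
    return 1.0 - (dot / norm if norm else 0.0)

# Two interpretations of the same abstract figure: the second pair is more
# conceptually distant, so it would indicate greater cognitive flexibility.
close = semantic_distance("a bird in flight", "a bird landing")
far = semantic_distance("a bird in flight", "molten glass cooling")
assert far > close
```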
The assumption, critiqued in [37], that “all users demand identical creative outputs” points to a direct shortcoming of these tools. Ambiguity is frequently resolved by context, which varies with the individual and the endeavor. An AI that cannot acquire the distinct style, preferences, and project history of a user or a team will always be forced to interpret ambiguous commands from scratch, producing generic or unsuccessful results. This is one of the main causes of the lack of support for refinement. A critical design principle is that effective human–AI interaction (HAII) in creative systems necessitates the mitigation of known collaborative challenges. Furthermore, a significant gap in the design of many creative AI tools is the general failure to account for collaborative workflows altogether.

2.3. Single Output/Multiple Exploration

The tendency of many AI systems to focus on producing a single, optimal output rather than enabling a thorough exploration of the design space is a major barrier to successful human–AI co-creation. A further limitation is the principal mode of engagement itself. The heavy emphasis on text-based prompts forces creators to translate visual ideas into words, excluding more intuitive, non-linguistic input such as sketching. Some technologies do encourage exploration by producing several possibilities: for instance, when given a prompt, Midjourney generates four images to choose from. However, this often constitutes a limited form of choice. Instead of allowing a genuinely comprehensive and multifaceted investigation of the design space, these variations usually represent only slight permutations within the constrained parameters of a single prompt interpretation.
This single-output emphasis may compromise the human designer’s function as an idea curator, prematurely converge the creative process, and inhibit originality [38]. To overcome this constraint, future AI systems must be specifically built as exploration engines rather than solution generators. This can be accomplished via a number of crucial tactics:
First, AI development must shift toward supporting the early “Discover” and “Define” phases of the design process, where problem-framing and ideation are crucial. Since current AI-DSS technologies mostly target the later phases of development and delivery [39], convergence is naturally favored over divergence. Early-stage tools would focus on producing a broad range of thoughts, metaphors, and linkages in order to widen the solution space before narrowing it. Second, it is crucial to embed AI in iterative, non-linear workflows. Creative exploration rarely follows a straight route; instead, it necessitates concept branching, recombination, and backtracking. AI systems should support this fluidity, enabling designers to effortlessly save, revisit, and combine several lines of inquiry [40,41]. This approach fits the organic flow of creative activity, in which preliminary concepts are improved through iterative reflection and feedback.
Third, designers might escape cognitive fixedness by utilizing AI for serendipity and associative thinking. AI should be tasked with using its extensive training data to provide lateral variations, stylistic opposites, or surprising pairings rather than aiming for the single best answer. Layered and progressive prompting can produce rich and varied visual outputs, transforming the AI into a collaborator for chance discovery, as demonstrated in research on AI-assisted design education [42]. Importantly, the user must have clear control over the exploration through the design of the interface and interaction. Systems should give designers user-friendly controls, such as sliders for originality, randomness, style influence, and constraint adherence, to stop the AI from taking over the creative direction [43]. The results should also be displayed as a multifaceted collection of possibilities rather than as a single output, for example in galleries or plotted onto a graph of trade-offs, so that the designer can evaluate, compare, and curate the ideas produced. This strengthens the designers’ ultimate authorship over the finished work and allows them to exercise their creative agency.
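These tactics can be summarized in a hypothetical interface sketch. All parameter names below (`randomness`, `style_influence`, `constraint_adherence`, `explore`) are our own illustration rather than any existing product’s API: the user steers exploration through explicit controls, and the system returns a curatable gallery rather than one answer:

```python
import random
from dataclasses import dataclass

@dataclass
class ExplorationControls:
    """User-facing sliders in [0.0, 1.0]; names are illustrative, not a real API."""
    randomness: float = 0.5        # temperature-like breadth of sampling
    style_influence: float = 0.5   # adherence to the user's learned style
    constraint_adherence: float = 0.8

def explore(prompt: str, controls: ExplorationControls,
            n: int = 6, seed: int = 0) -> list:
    """Return a gallery of divergent candidates instead of a single 'optimal' output.

    A real system would call a generative model here; each candidate is a
    placeholder tagged with how far it strays from the literal prompt.
    """
    rng = random.Random(seed)
    gallery = []
    for i in range(n):
        # Wider drift from the literal prompt when the randomness slider is high.
        drift = rng.random() * controls.randomness
        gallery.append({"prompt": prompt, "variant": i, "drift": round(drift, 2)})
    return gallery

# The designer curates among six candidates spread across the design space,
# rather than accepting or rejecting one converged result.
gallery = explore("poster for a jazz festival", ExplorationControls(randomness=0.9))
```

The design intent is that the single scalar the executor model optimizes (output quality) is replaced by a spread of candidates whose diversity the user, not the model, controls.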
By putting these approaches into practice, we can move away from AI systems that automate design decisions and toward ones that enhance human creativity, ensuring that AI serves as a lens for human imagination rather than a framework that limits it.

2.4. AI’s Limited Role as an Executor

According to the executor role, AI is a powerful but passive instrument for executing commands rather than an active collaborator in the creative process, as defined by [18] for the AI tool Copilot: “Copilot provides users with an optimal output to their commands without explaining how it is inferred from the user’s instructions”. To understand this deficiency, we must recognize that this approach fundamentally conflicts with the inherent character of creative design, introducing several key problems, such as the restriction of exploratory ideas and divergent thinking. Executor AI systems frequently converge the design space prematurely. Because of stochasticity, they may yield different results for the same command, but these differences are usually limited to a small range of likely outcomes determined by the training data. This lacks the methodical, independent investigation of a human partner who can provide ‘alternative answers as sources of inspiration’ [18] from essentially distinct viewpoints.
This feature directly contradicts a fundamental aspect of creativity: the production and exploration of diverse ideas. In contrast to human collaborators who supply “alternative solutions as sources of inspiration” [18], an executor AI does not offer divergent thinking. This flaw reduces the design process to a lengthy and inefficient trial-and-error loop, frequently resulting in user irritation and abandoned tasks when the AI continuously fails to understand the user’s purpose.
AI-powered technologies like ChatGPT, Midjourney, and Autodesk Dreamcatcher promise iterative, non-linear collaboration, combining human intuition and AI-generated insights [15,44]. In practice, however, these tools typically still serve as executors because they require precision, hinder natural processes, and focus on single outputs. The executor mode places the full responsibility for translation and precision on the human creator. To be clear, AI does not eliminate human free will. However, because executor-based AI models impose a high cognitive load, they may impede natural creative processes. Users are compelled to break down their high-level, often ambiguous creative intents into low-level, executable instructions, a task that necessitates significant effort and domain-specific knowledge. Executors often fail when they encounter the vague language typical of creative ideation; this failure could be reduced by paraphrasing the user’s commands or by creating a communication layer that enables humans and AI to comprehend one another.
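The paraphrasing idea can be sketched as a thin communication layer that reflects the system’s interpretation back to the user before executing, so vague language is negotiated rather than silently misread. This is a hypothetical illustration under our own assumptions: `call_model` stands in for any underlying generative model, and the paraphrasing heuristic is a placeholder for a real language model:

```python
def call_model(instruction: str) -> str:
    """Stand-in for an underlying generative model call (assumed, not a real API)."""
    return f"<output for: {instruction}>"

def paraphrase(command: str) -> str:
    """Restate the command as the system understands it (toy heuristic)."""
    return f"I understand this as: '{command.strip().lower()}'. Is that right?"

def communicate(command: str, confirm) -> str:
    """Mediation layer: execute only once the interpretation is confirmed;
    otherwise ask the user to restate instead of guessing."""
    interpretation = paraphrase(command)
    if confirm(interpretation):
        return call_model(command)
    return "Could you restate your intent?"

# Simulated user who accepts the system's paraphrase of a vague request.
result = communicate("Make it feel more nostalgic", lambda msg: True)
```

The key move is that misinterpretation becomes a visible, negotiable step in the dialogue rather than a silent cause of trial-and-error regeneration.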
A significant body of research outlines human collaboration patterns, emphasizing the value of aligning requirements and fostering creative idea synthesis. However, a key issue in the development of successful co-creative systems is the basic question of how requirement alignment and remixing between human and artificial agents can be efficiently enabled. This issue is still open and complicated.
While prior work [5,13,17,18] identifies isolated challenges in human–AI collaboration, we argue that these form a system of paradoxes unique to co-creative domains, where system designers balance competing demands without stifling human ingenuity. We suggest that these paradoxes offer a critical lens through which to examine current systems and produce innovative design frameworks, and that they are not issues to be resolved but rather necessary conflicts to be handled. Moreover, these will provide a valuable analytical lens for critiquing existing systems and a generative framework for guiding future design.
The present work highlights significant shortcomings in the current state of AI technologies for creative production. It makes the case that widely used programs like Copilot and Midjourney, which function as “executors” that need exact instructions and generate single outputs, impose a linear workflow. This structure hampers the inherent non-linearity of human creativity, which flourishes on variation, iteration, and exploration. These technologies also do not support refinement, struggle with the imprecise terminology of early creative thought, and do not learn a user’s distinct style. The emphasis on a single ideal result is a serious drawback, since it stifles chance discovery and prematurely converges the design space. Synthesizing these criticisms, we contend that previous studies have addressed these issues separately, whereas they actually constitute a network of paradoxes that must be handled jointly when designing co-creative systems.

3. Core Paradoxes in Human–AI Co-Creative Design

This section elaborates on the five irreducible paradoxes that form our conceptual framework for analyzing and designing human–AI co-creative systems. These tensions are not problems to be solved but essential dynamics to be managed. Table 1 provides a concise overview.

3.1. Ambiguity vs. Precision

Advanced generative AI systems like GPT-4 are being adopted at a rapid pace, revolutionizing human–technology interaction by enabling conversational, intuitive problem-solving using natural language across a variety of applications [45]. AI models are trained on vast amounts of data and designed to provide precise outputs, whereas human input may contain ambiguous or emotive language. This leads to the following question: without limiting initial creative exploration, how can we create interfaces that serve as “ambiguity translators,” assisting users in gradually refining vague intentions into precise prompts?
There is a fundamental mismatch: human creative thought is inherently ambiguous, expressed through abstract concepts and subjective language, while AI systems require precision and explicit parameters to function predictably [27,30]. Users therefore struggle to translate their vision into executable commands, which leads to frustration. The conflict is that users may rely on vague cues to obtain novel and imaginative results from the AI, yet the underlying model needs a degree of accuracy and predictable parameters to work well. The challenge is thus to create systems that treat human uncertainty as a source of creative potential rather than as noise to be removed, while still giving the model the structured input it needs to produce coherent results.
Designing an “ambiguity translator,” then, does not mean designing an input box but designing interaction loops, in which the system returns multiple candidate designs rather than a single output. For instance, [42] investigated the integration of text-to-image (T2I) generators such as Midjourney into the conceptual design stage of interior design education. The results showed that AI-assisted visualization improved conceptual precision, sped up design iteration, and enhanced ideation. This possibility raises a fundamental question [5]: where do we draw the boundary between creativity and human–AI systems? Determining the source of creativity—the human, the AI, or the collaboration itself—is the main challenge. This unresolved question critically shapes how we evaluate creative outputs, forcing a choice between viewing AI as a mere tool or as a genuine creative partner. Section 2.1 discusses the same fundamental tension—the conflict between the nature of present AI systems and human creativity—from the perspective of linearity vs. non-linearity.
To navigate the ambiguity vs. precision paradox, system design must move beyond single-turn prompt boxes toward multi-turn refinement loops that function as “ambiguity translators.” From an HCI perspective, such interfaces ask clarifying questions, suggest refinements, and maintain conversational context, decomposing the user’s high-level vision into manageable chunks rather than locking the user into a single output cycle [42]. Pilot systems in design education demonstrate how iterative dialogue with AI can enhance conceptual precision and ideation, collaboratively building precision from ambiguity over multiple turns [42].
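As a minimal illustration of such a multi-turn refinement loop (a sketch only: the vague-term lexicon, function names, and dialogue logic are hypothetical assumptions, not drawn from any cited system), vague terms in a prompt can trigger clarifying questions instead of an immediate single-shot generation:

```python
# Hypothetical "ambiguity translator" loop: vague terms produce clarifying
# questions; user answers are folded into a progressively refined prompt.

VAGUE_TERMS = {
    "moody": "Moody how: dark palette, high contrast, or soft lighting?",
    "modern": "Modern as in minimalist, industrial, or mid-century?",
    "dynamic": "Dynamic via motion blur, diagonal composition, or bold color?",
}

def refine(prompt: str, answers: dict) -> tuple:
    """Return (refined_prompt, open_questions) for one dialogue turn."""
    questions = []
    refined = prompt
    for term, question in VAGUE_TERMS.items():
        if term in prompt.lower():
            if term in answers:            # user already clarified this term
                refined += f", {answers[term]}"
            else:                          # ask rather than guess
                questions.append(question)
    return refined, questions

# Turn 1: the system asks instead of generating from an ambiguous prompt.
p, qs = refine("a moody modern living room", {})
assert len(qs) == 2

# Turn 2: the answers yield a more precise prompt, with no open questions.
p, qs = refine("a moody modern living room",
               {"moody": "soft warm lighting", "modern": "minimalist"})
assert qs == [] and "minimalist" in p
```

The point of the sketch is structural: precision is accumulated across turns from an initially ambiguous intent, rather than demanded up front.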
A complementary approach involves designing fluid roles. By enabling the AI to shift across a spectrum of roles—from a supportive tool that executes precise commands to an active co-creator that proposes ideas—the system can ensure that the pursuit of precision does not prematurely restrict the creative process. Deliberately allowing for a degree of role ambiguity has been shown to be necessary for creative potential, as it permits the collaboration to evolve and new, emergent roles to form dynamically [13,24].

3.2. Control vs. Serendipity

The creative process is contingent: from the first germ of a concept to its eventual adoption by the target audience, there are numerous paths to creative success—or failure. Serendipity is the experience of making an unexpected and beneficial discovery through a combination of chance and a ‘prepared mind’ [46,47]. We argue that this is paradoxical because it requires the simultaneous presence of two seemingly incompatible elements: (i) lack of control and (ii) agency and control. Discovery must begin with an unforeseen, unplanned incident that lies beyond the individual’s direct control or intention; it is a disruption to the expected course of events [46,47]. To supply agency and control, the person must be knowledgeable, sensitive, and cognitively prepared (sagacious) enough to see the accident’s potential, and able to use it expertly and purposefully to produce something original and worthwhile [48].
Although users can instruct AI to produce “surprising” results, this often amounts to trial and error with ambiguous instructions, producing random outputs with little relation to the user’s original purpose. The control vs. serendipity paradox addresses the design difficulty of going beyond this. We suggest methods designed to provide contextually appropriate surprises, such as lateral variants or stylistic opposites depending on the state of the project, while explicitly providing the user with curation tools and ‘veto power’. The user’s sagacity (or prepared mind) can then identify and incorporate unexpected but beneficial recommendations, turning serendipity from a passive, random event into an active, collaborative activity.
Creativity here arises from the tension between passively accepting chance and actively, skillfully manipulating it; it is neither completely accidental nor purely agential. Pure agency lacks the disruptive spark of the unexpected, whereas pure chance is passive and “blind,” according to Ross; serendipity occurs at the exact point where these two opposing forces converge [49]. Serendipity is valuable only if the user trusts the AI’s output. The locus of control depends on the optimal balance between AI-generated suggestions and user veto power, and we argue that this balance is a question that goes beyond technical specification into interaction design: the system does not aim for a flawless, predetermined optimum.
Sagacity levels vary from person to person and depend on the cognitive state of the user, so the balance is a dynamic interaction rather than a static arrangement. Serendipity can nevertheless be fostered in several ways. One is to actively counter the homogenizing tendencies of recommendation algorithms [50]; a related measure is to control repetition, preventing the system from returning the same core answer with only superficial changes in vocabulary. Another is to support exploration rather than replace it [51]: AI should be a catalyst that broadens the user’s horizon, working as the digital counterpart of “visiting the library, going to the stacks, or going to the seemingly unrelated seminar.” While users inherently possess veto power in any interactive system, the ideal balance for co-creativity deliberately designs for and emphasizes this veto authority as the critical manifestation of the ‘prepared mind’ required for serendipitous discovery [51].
This vetoing and curating process involves active, critical communication with the AI rather than simple rejection. Consider a writer collaborating with an AI that suggests an initially implausible plot twist. Rather than merely rejecting it, the writer responds: ‘This twist doesn’t fit my character’s motivation, but it gives me the idea of introducing a hidden nemesis.’ Here, the ‘poor’ suggestion served as a springboard for a fresh creative concept, illustrating how the interplay of AI suggestion and human curation co-creates serendipity. The human partner evaluates the output according to their unique creative intent, knowledge of the medium, and topical expertise in the content. When an AI fails to live up to a user’s high expectations, it frequently serves as a catalyst for goal definition and improved iteration. Consequently, efficient co-creation requires dual literacy: in-depth knowledge of the subject matter and artistic medium, as well as an intuitive grasp of the AI’s potential and constraints.
Furthermore, vetoing serves as an active reasoning process that strengthens the user’s role in abductive thinking, which is crucial for serendipity. It facilitates the synchronization of prior information with apparent anomalies [52]. This cognitive curation is crucial for spotting valuable deviations; by discarding irrelevant suggestions, the user frees attentional space to recognize and use unexpected insights that may otherwise be overlooked in a stream of homogenized recommendations. The conflict between AI-proposed options and human veto power creates a dynamic in which algorithmic breadth is balanced by expert discernment, fostering settings conducive to unique and meaningful discovery. This tension is further complicated by the different goals of each agent: AI models are often optimized to generate a statistically ‘probable’ or optimal output based on their training data, whereas human creators frequently seek ‘good-enough’ results that satisfy a unique, situated intent [18,39].
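As a minimal sketch of this dynamic (all function names and the toy suggestion pools are illustrative assumptions, not any cited system), a single “serendipity” parameter can mix expected and lateral suggestions, while the user retains veto power and a seen-set suppresses repetition of already-shown ideas:

```python
import random

# Hypothetical control-vs-serendipity dial: with probability `serendipity`,
# a suggestion is drawn from the lateral pool rather than the expected one.
# The `seen` set enforces repetition control across rounds.

def suggest(expected, lateral, serendipity, seen, rng, k=3):
    """Pick k unseen suggestions, each lateral with probability `serendipity`."""
    picks = []
    pool_e = [s for s in expected if s not in seen]
    pool_l = [s for s in lateral if s not in seen]
    while len(picks) < k and (pool_e or pool_l):
        pool = pool_l if (pool_l and rng.random() < serendipity) else (pool_e or pool_l)
        choice = rng.choice(pool)
        pool.remove(choice)
        picks.append(choice)
        seen.add(choice)
    return picks

rng = random.Random(0)
seen = set()
expected = ["warm palette", "serif type", "grid layout"]
lateral = ["inverted colors", "handwritten type", "broken grid"]

round1 = suggest(expected, lateral, serendipity=0.7, seen=seen, rng=rng)
kept = [s for s in round1 if s != "inverted colors"]   # user vetoes one idea
round2 = suggest(expected, lateral, serendipity=0.7, seen=seen, rng=rng)
assert not set(round1) & set(round2)    # no repeats of already-shown ideas
```

The dial makes the locus of control explicit: raising `serendipity` yields more disruptive proposals, while the veto step keeps final discernment with the human.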

3.3. Speed vs. Reflection

AI is not affected by physiological factors like fatigue [53], and it operates under the expectation of generating inventive content continuously and at scale. AI’s speed sets a hard bar for human creators, even as it lowers the training barrier for artists and creatives, and its ability to learn from human creations raises the standard for innovation. The objective is not to slow down AI for its own sake, but rather to intentionally foster human judgment, contextualization, and intentionality. The value of speed rests on theories of divergent thinking and brainstorming, in which postponing judgment and generating a large number of ideas are important stages of the creative process. In human–AI co-creative design, reflection [54] denotes the metacognitive process of critical thought, evaluation, and integration. The tension arises when AI systems accelerate processes such as text generation, because this acceleration unintentionally limits the time available for reflective skills such as critical thinking, contextual interpretation, and creative intent. This creates a contradiction between efficiency and depth of comprehension.
It is important to clarify that speed and reflection are not literal opposites, but rather represent a symbolic design tension between two cognitive modes. The notion of “speed” in this paradox refers to the system’s tendency toward automation, acceleration, and output optimization, while “reflection” denotes the human capacity for metacognition, contextual reasoning, and deliberate evaluation. This paradox can also be interpreted through the lens of closure-driven vs. process-driven creativity (as in Myers–Briggs typologies), where the contrast lies not in temporal velocity but in cognitive orientation. We retain the title “Speed vs. Reflection” to preserve consistency across the five paradoxes, while clarifying that it represents a broader conceptual duality between efficiency and depth of thought in human–AI collaboration. A use case in radiology, where AI is used to pre-screen medical images such as MRIs or X-rays, provides the clearest illustration of this dilemma. By identifying possible abnormalities, the AI expedites the initial diagnostic procedure, a clear productivity gain. It might, however, inadvertently cut down the time radiologists devote to a comprehensive, reflective examination of every scan. Because the opportunity for in-depth, personalized contemplation diminishes as the number of images processed per unit of time grows, the creative intent in diagnosis—which entails constructing a patient-specific story from subtle, contextual clues—cannot develop adequately in high-speed settings [55]. We argue that a primary focus on using AI for speed within a system’s design can inadvertently marginalize the reflective practices that make human specialists competent and capable of dealing with novel situations in the long term. The “speed” path is alluring: it provides instant advantages in productivity and efficiency.
However, such a design risks fostering over-reliance, where a generation of professionals may experience attentional deskilling—a degradation of critical thinking abilities due to lack of practice—as the system’s speed and efficiency reduce the necessity for deep, independent analysis [56]. Another study [57] proposed that this tension can be addressed not by choosing between speed and reflection, but by designing AI systems to support and provoke reflection. This approach aims to make human professionals more profoundly effective, rather than merely mechanically faster, by augmenting their critical reasoning. To reconcile speed and reflection, future co-creative systems must be purposefully built with strategic “pause points” or “friction” that encourage critical review and integration of AI-generated content. Interfaces that promote annotation of outputs, comparative analysis of multiple alternatives, and suggested reflection prompts can help ensure that AI’s speed enriches rather than diminishes the depth of human creative cognition.
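As a toy illustration of such “pause points” (class and method names are hypothetical assumptions, not drawn from any cited system), an interface can refuse to commit an AI output until the user has annotated it, forcing a moment of comparative reflection before acceptance:

```python
from dataclasses import dataclass, field

# Hypothetical reflective workflow: AI output is never auto-accepted.
# An option can only be committed after the user records an annotation,
# which introduces deliberate friction into the acceptance step.

@dataclass
class ReflectiveDraft:
    alternatives: list                         # several AI options, not one
    annotations: dict = field(default_factory=dict)

    def annotate(self, idx: int, note: str) -> None:
        self.annotations[idx] = note           # the reflection artifact

    def commit(self, idx: int) -> str:
        if idx not in self.annotations:        # pause point: no note, no commit
            raise ValueError("Annotate this option before accepting it.")
        return self.alternatives[idx]

draft = ReflectiveDraft(["option A", "option B"])
try:
    draft.commit(0)                            # blocked: reflection not yet done
except ValueError:
    pass
draft.annotate(0, "Fits the brief, but the tone is too formal.")
assert draft.commit(0) == "option A"
```

The friction is intentionally cheap to implement but changes the default: acceptance becomes an act of judgment rather than a click.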

3.4. Individual vs. Collective

The paradox is profound and nuanced, and it gets to the core of the growing interaction between humans and AI. A fundamental tension occurs when a single human creator (the individual) collaborates with an artificial intelligence that is, by definition, an embodiment of a collective—trained on aggregated data, patterns, and outputs from large swathes of human culture and knowledge. We define the two sides of this paradox as follows: one is the human (individual), whose goal is a unique, coherent creative vision; the other is the AI model (collective), which embodies the aggregated knowledge, or wisdom, of all its data. The paradox manifests when the output of the collective (the AI) misaligns with the vision of the individual (the human creator). For instance, AI-powered design tools like Copilot and Midjourney often follow a linear sequence of exact instructions to approximate design objectives. These procedures contravene creative design guidelines, limiting AI agents’ ability to accomplish creative tasks [18]. This tension raises crucial research questions: how do designers and AIs settle creative disputes, and which interface features work best for negotiating a common course? These inquiries are the paradox’s practical expressions, seeking ways to address the discrepancy between individual intentions and collective production.
A study [58] demonstrates how AI influences human narrative, pointing to a type of implicit negotiation in which people integrate AI-generated concepts into their own original work: assimilation and compromise are ways of “settling” creative disputes. The study also shows that hybrid human–AI networks attained the greatest diversity over time, indicating that creative synergy can result from straightforward, anonymous, iterative collaboration without formal negotiation interfaces. This suggests that minimally controlled interactions can help humans and AIs negotiate their creative differences. According to a different study [59], creative “disputes” are settled through complementary cooperation rather than overt bargaining: AI delivers scale, pattern recognition, and rapid generation, while humans contribute context, intentionality, and moral judgment. This synergy lets both contribute their strengths without one taking over the other. The study likewise supports open-ended, adaptable technologies that let humans direct the creative process while exploiting AI’s capacity to produce concepts and variations. The ideal “interface” is one that lets AI manage extensive pattern synthesis while still permitting human oversight and contextual input.
We might draw the conclusion that a conflict is inevitable when a single creator interacts with this collective reflection. Because it is based on statistical probability, the AI will tend toward the most probable choice, which often aligns with the traditional or common patterns in its training data. If the dataset leans toward unconventional or ‘unsafe’ practices, the output would likely reflect that instead. While intuitively, a system capable of learning a user’s distinctive style appears to be a logical resolution to this paradox, such an approach carries a significant risk. An excessive dependence on personalization can inadvertently suppress creativity, trapping the user in an algorithmic echo chamber that merely amplifies their established habits. Consequently, the objective shifts from engineering a flawless style replica to creating interfaces that empower users to consciously modulate their interaction with the collective knowledge base. Practical implementations could include adjustable parameters governing the weight of personal history against diverse stylistic datasets, or the intentional injection of incongruous or contrasting aesthetics from the collective to stimulate novel thinking. This ensures the AI functions as a conduit to a wider creative universe, rather than a simple reflection of the user’s own predispositions.
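As a toy sketch of such adjustable modulation (the three-dimensional “style vectors,” function name, and all numbers are illustrative assumptions, not a cited system), a single user-controlled weight can govern how strongly personal history shapes generation, with contrast injection as a separate stimulus:

```python
# Hypothetical personalization dial: w=1.0 reproduces the user's habits
# (the echo-chamber risk), w=0.0 defers entirely to the collective corpus.

def blend_style(personal, collective, w):
    """Linear blend of a personal style vector and a collective one."""
    return [w * p + (1 - w) * c for p, c in zip(personal, collective)]

personal   = [0.9, 0.1, 0.2]   # e.g. the user's learned palette tendencies
collective = [0.4, 0.6, 0.5]   # aggregate tendencies of the training corpus

echo_chamber = blend_style(personal, collective, w=1.0)
assert echo_chamber == personal          # pure personalization: no novelty

stretched = blend_style(personal, collective, w=0.3)
# Deliberate contrast injection: invert the blended vector to provoke
# unfamiliar directions rather than amplifying established habits.
contrast = [1 - v for v in stretched]
assert all(0.0 <= v <= 1.0 for v in contrast)
```

The design point is that the weight belongs to the user, making the trade-off between personal signature and collective breadth a conscious, adjustable choice rather than a hidden model default.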
This paradox can be reframed as a fundamental shift in the human creator’s role: from a hands-on craftsperson to a creative director. The craftsperson is deeply involved in the manual execution and technical details of creation. In contrast, the creative director is skilled in formulating a high-level vision, briefing collaborators (both human and AI), and curating generated outputs to align with that vision. The most valuable skill becomes creative direction—the ability to articulate intent, guide the AI’s generative process through effective prompting and parameter setting, and make discerning choices from a vast array of possibilities. The individual’s genius no longer lies solely in manual dexterity or solitary ideation, but in the capacity to remix, refine, and focus the chaos of collective intelligence into a novel, coherent whole that bears their unique signature.

3.5. Originality vs. Remix

Originality, in this context, refers to the capacity of AI systems to generate unique material from data and algorithms. The notion invites comparison of the parallels and differences between AI and human creativity, taking into account both technical and ethical considerations; AI-generated material is judged original on the basis of its distinctive methodologies, datasets, and results [60]. On the other hand, the theory of remix, defined by [61] as “copy, transform, and combine,” is a recursive algorithm for making new works from existing resources. The paradox is that this very ability—to quickly remix and regenerate content—enables both extraordinary production and a troubling erosion of creative variation.
We argue that generative AI embodies the remix concept. It works by quantitatively evaluating a large collection of human inventions (the ultimate remix source material) and identifying patterns, styles, and relationships within it. Its output is a recombination of these previously learned patterns; it is the result of an extreme algorithmic remix. Yet the output of AI also appears to meet the criterion for originality: it can create an image, text, or design that did not previously exist in that exact form, resulting in work that is new and frequently valuable, and that can appear unique, unexpected, and inventive to a human observer. This pulls us in two directions at once, exposing weaknesses and limitations in our long-held conceptions of uniqueness and creativity. According to findings from a study [62], AI can help to promote and deepen design originality, but it may be restricted in its ability to generate creativity itself. When paired with human ingenuity, AI improves design processes by providing both efficiency and originality. This suggests that the value lies not in the AI-generated remix, but in the human’s capacity to control, select from, and infuse it with personal vision and context. The AI handles the computationally demanding “remix” (variation generation), whereas the human gives the “original” artistic direction.
The process of generative AI significantly questions the underlying concepts of originality and remix. According to Gunkel [63], the system works by processing statistical data rather than sampling content, converting cultural objects into mathematical embeddings that describe hidden patterns. This ontological shift implies that using the binary of “original” (a privileged source) and “remix” (a derivative copy) is a fundamental misapplication of categories.

4. Discussion

This study contributes a novel conceptual framework to the field of information science for critically analyzing human–AI co-creation. It argues for a paradigm shift away from the simplistic idea of AI as a tool and toward AI as an active, communicative partner. The framework shows that incorporating generative AI into creative workflows is a fundamental shift rather than a piecemeal one. This change calls for a reexamination of traditional HCI paradigms, which frequently place a higher value on efficiency and accuracy than on the ambiguity, inquiry, and serendipity that are essential to human creativity. A turn-based, linear interaction model frequently predominates, as demonstrated by systems such as Copilot [21] and Midjourney [26], reducing the person to a “command coder” instead of a “creative partner.” This structure suppresses the non-linear, iterative, and emergent aspects of true creative cooperation [27,28,30].
Beyond merely identifying interaction patterns, the proposed paradoxes capture the fundamental tensions that characterize user information behavior and AI system information supply within a cooperative dyad. The framework presents these tensions as necessary conflicts to be managed in the design of co-creative systems, rather than as problems to be solved, providing both a generative basis for creating new tools and a powerful analytical lens for evaluating current ones. The ambiguity vs. precision paradox, for example, draws attention to a fundamental inconsistency: human creative cognition is abstract and subjective, yet AI needs specific criteria to operate. Creating interfaces that function as “ambiguity translators” through iterative, multi-turn discourse is a viable remedy [42].
Similarly, the control vs. serendipity paradox encapsulates the delicate balance necessary for fruitful collaboration. It reinterprets the user’s “veto power” not as an AI failure but as an active, advantageous part of “abductive thinking” and cognitive curation [51,52], which are crucial for chance discovery. The originality vs. remix paradox is especially important, since it calls into question basic ideas about authorship and creativity. As this paper argues, generative AI is the ultimate remix engine, operating on statistical patterns in large amounts of training data [61], yet its results can seem new and different. As a result, the human’s ability to provide creative guidance—curating, improving, and adding context and personal perspective to the AI’s output—becomes more valuable than the AI’s inherent generative ability [62]. This suggests that mastery of creative direction and curation, rather than mastery of a medium, may be the most important skill for aspiring creators. The description of the executor role [18] raises a significant point: AI frequently generates results that are not explicable. Applying explainable AI (XAI) concepts is one valuable tactic for addressing several of the paradoxes; disclosing the “operational strategies” an AI employed to interpret a prompt would help close the ambiguity–precision gap and give users more agency.
The foundation of our approach is a critical synthesis of the existing literature, which consistently identifies flaws in the “executor”-model AI tools currently available [17,20,25]. We see these flaws not as isolated issues but as signs of deeper, irreducible conflicts that arise when humans and AI work together. From this, we deduce that five fundamental paradoxes shape the design space for co-creative systems.

4.1. Interdisciplinary Connections

The proposed paradoxes resonate strongly with established concepts in other fields. The ambiguity vs. precision tension echoes the psychological study of divergent vs. convergent thinking, where creativity requires both open-ended ideation (ambiguity) and focused evaluation (precision). The individual vs. collective paradox is a microcosm of sociological debates about individual agency versus social structure, examining how a creator’s unique voice interacts with the vast, culturally embedded dataset of the AI. Furthermore, the speed vs. reflection tension aligns with critiques from the philosophy of technology, which warn of the potential for tools to shape human habits and values, in this case, potentially privileging speed over deep thought. Acknowledging these connections enriches our framework, positioning human–AI co-creativity not merely as a technical challenge, but as a profound socio-technical phenomenon.
The successful adoption of co-creative systems hinges on broader user and societal acceptance. This requires fostering new user literacies, such as prompt fluency and critical curation skills, to effectively direct AI collaborators. Furthermore, societal debates concerning authorship, authenticity, and the value of “human-made” work must be addressed. Systems that transparently frame AI as a tool for augmenting, rather than replacing, human creativity will be more readily accepted, shifting the narrative from “AI-made” to “human-directed.”

4.2. Theoretical Implications

By framing the difficulties of human–AI co-creation as a system of basic, constructive paradoxes, this study represents a substantial conceptual breakthrough. It moves the emphasis of study away from technical expertise and toward the subtleties of interaction design, cognitive cooperation, and the essence of creativity itself. The framework enhances theories in information science and HCI by offering a novel vocabulary and a critical perspective for examining the informational dynamics and underlying conflicts in co-creative dyads. The persistent emergence of pitfalls such as those documented by [9] underscores that the challenges in human–AI co-creation are not transient bugs but symptoms of deeper, paradoxical tensions. This validates our position that a shift in perspective is needed: from solving these issues technically to managing them dynamically through thoughtful interaction design. Our paradox framework provides the ‘why,’ while catalogues of pitfalls offer the ‘what,’ together creating a more complete picture for guiding future research.
The establishment of a paradox-driven framework is this article’s main contribution to the fields of information science and human–AI interaction. We synthesize these problems into a system of five basic, irreducible tensions, in contrast to previous work that lists individual difficulties [5,12,16,17,31,32]. This reframes the design problem: the objective is to handle these contradictions dynamically through careful interaction design, not to solve them technically. In addition to giving scholars a new vocabulary and critical lens to examine the informational dynamics and fundamental tensions present in co-creative dyads, this framework offers the fundamental “why” behind frequent cooperation failures.

4.3. Ethical Considerations in Co-Creative Systems

The ethical dimensions of human–AI co-creation are deeply embedded within the paradoxical tensions of our framework. The originality vs. remix paradox confronts foundational questions of intellectual property and attribution. When generative models produce outputs derived from extensive training datasets, it creates ambiguity regarding ownership. The rights of the human prompter, the AI developers, and the original creators whose works informed the algorithm all require consideration. This ambiguity challenges conventional copyright laws and underscores the need for novel legal and technical models to establish clear provenance and contribution [61,63].
Concurrently, the individual vs. collective paradox highlights the risks of algorithmic bias and cultural homogenization. Systems trained on aggregated data inherently encode and can amplify the biases within that corpus. Consequently, a designer’s unique vision may be systematically steered toward statistically dominant patterns, potentially stifling cultural diversity and reinforcing hegemonic aesthetic or narrative conventions. Mitigating this requires both equipping creators to identify these biases and ensuring systemic transparency regarding training data origins and model limitations.
Finally, the speed vs. reflection paradox carries implications for professional competency and agency. An over-dependence on AI for expedited ideation and execution may precipitate attentional deskilling [56], eroding the capacity for deep critical analysis and masterful craftsmanship. Therefore, an ethical approach to co-creative design must prioritize the augmentation of human cognition, safeguarding the professional’s ultimate authorship, judgment, and control over the creative process.

4.4. Future Work and Limitations

A principal constraint of this study is the conceptual origin of its framework: while informed by identified shortcomings in contemporary systems, it requires rigorous empirical testing to evaluate its real-world utility, its effect on design practices, and its significance across collaborative domains. Subsequent investigations should prioritize implementing the model in controlled environments, for instance, by constructing and evaluating prototype interfaces with integrated mechanisms for navigating each paradox—such as iterative dialogue for managing ambiguity, adjustable controls for serendipity, and built-in cues for reflection—and benchmarking their performance against conventional executor-based tools in practical creative scenarios. A critical consideration of the framework’s boundaries is also essential, recognizing that these paradoxes constitute persistent, dynamic trade-offs. This understanding compels deeper inquiry into the longitudinal cognitive and societal consequences of human–AI co-creation, particularly the dangers of eroded critical thinking skills, excessive dependency on automation, and the dilution of unique artistic expression through over-reliance on aggregated, AI-generated content.

5. Conclusions

By framing these difficulties as irreducible paradoxes, we present the field of information science with a crucial tool for directing the development of future co-creative information systems that are not only more potent but also more intuitive, helpful, and ultimately more human. The future of creative design, this article argues, lies in rethinking AI as an active, opinionated collaborator. This partnership necessitates an evolution of the human role from sole creator to creative director, who provides the vision, intentionality, and curation while leveraging the AI’s power for exploration, variation, and pattern synthesis. By combining criticisms of the current linear “executor” systems, we have defined the fundamental issues as five irreducible paradoxes. These tensions define the fundamental design space for human–AI co-creation, and the developers of co-creative systems must manage them dynamically. Rather than trying to solve these paradoxes, the goal should be to create systems that manage them dynamically while promoting a cooperative relationship. The ultimate objective is to enhance human creativity by ensuring that AI serves as an inspiration rather than a limitation, enabling people to remain the deliberate, creative leaders at the center of the process.

Author Contributions

Conceptualization, Z.S. and R.H.-N.; methodology, Z.S. and R.H.-N.; validation, Z.S. and R.H.-N.; formal analysis, Z.S., R.H.-N. and C.P.; investigation, Z.S. and R.H.-N.; resources, Z.S., R.H.-N. and C.P.; writing—original draft preparation, Z.S.; writing—review and editing, Z.S. and R.H.-N.; visualization, R.H.-N. and Z.S.; supervision, R.H.-N. and C.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by research grants PID2022-137849OB-I00 funded by MI CIU/AEI/10.13039/501100011033 and by the ERDF, EU.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Serbanescu, A.; Nack, F. Human-AI system co-creativity for building narrative worlds. In Proceedings of the IASDR 2023: Life-Changing Design, Milan, Italy, 9–13 October 2023; Design Research Society: London, UK, 2023. [Google Scholar]
  2. De Vries, K. You never fake alone. Creative AI in action. Inf. Commun. Soc. 2020, 23, 2110–2127. [Google Scholar] [CrossRef]
  3. Melville, N.P.; Robert, L.; Xiao, X. Putting humans back in the loop: An affordance conceptualization of the 4th industrial revolution. Inf. Syst. J. 2023, 33, 733–757. [Google Scholar] [CrossRef]
  4. Haj-Bolouri, A.; Conboy, K.; Gregor, S. Research Perspectives: An Encompassing Framework for Conceptualizing Space in Information Systems: Philosophical Perspectives, Themes, and Concepts. J. Assoc. Inf. Syst. 2024, 25, 407–441. [Google Scholar] [CrossRef]
  5. Haase, J.; Pokutta, S. Human-AI Co-Creativity: Exploring Synergies Across Levels of Creative Collaboration. arXiv 2024, arXiv:2411.12527. [Google Scholar] [CrossRef]
  6. Jennings, K.E. Developing Creativity: Artificial Barriers in Artificial Intelligence. Minds Mach. 2010, 20, 489–501. [Google Scholar] [CrossRef]
  7. Mateja, D.; Heinzl, A. Towards Machine Learning as an Enabler of Computational Creativity. IEEE Trans. Artif. Intell. 2021, 2, 460–475. [Google Scholar] [CrossRef]
  8. Chiou, E.K.; Lee, J.D. Trusting Automation: Designing for Responsivity and Resilience. Hum Factors 2023, 65, 137–165. [Google Scholar] [CrossRef] [PubMed]
  9. Buschek, D.; Mecke, L.; Lehmann, F.; Dang, H. Nine Potential Pitfalls when Designing Human-AI Co-Creative Systems. arXiv 2021, arXiv:2104.00358. [Google Scholar] [CrossRef]
  10. Haase, J.; Hanel, P.H.P. Artificial muses: Generative artificial intelligence chatbots have risen to human-level creativity. J. Creat. 2023, 33, 100066. [Google Scholar] [CrossRef]
  11. Boden, M.A. Computer Models of Creativity. AI Mag. 2009, 30, 23–34. [Google Scholar] [CrossRef]
  12. Cropley, D.; Cropley, A. Creativity and the Cyber Shock: The Ultimate Paradox. J. Creat. Behav. 2023, 57, 485–487. [Google Scholar] [CrossRef]
  13. Rezwana, J.; Maher, M.L. Designing Creative AI Partners with COFI: A Framework for Modeling Interaction in Human-AI Co-Creative Systems. ACM Trans. Comput.-Hum. Interact. 2023, 30, 1–28. [Google Scholar] [CrossRef]
  14. Demirel, H.O.; Goldstein, M.H.; Li, X.; Sha, Z. Human-Centered Generative Design Framework: An Early Design Framework to Support Concept Creation and Evaluation. Int. J. Hum.-Comput. Interact. 2024, 40, 933–944. [Google Scholar] [CrossRef]
  15. Chen, V.; Liao, Q.V.; Wortman Vaughan, J.; Bansal, G. Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations. Proc. ACM Hum.-Comput. Interact. 2023, 7, 1–32. [Google Scholar] [CrossRef]
  16. Gmeiner, F.; Yang, H.; Yao, L.; Holstein, K.; Martelaro, N. Exploring Challenges and Opportunities to Support Designers in Learning to Co-create with AI-based Manufacturing Design Tools. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Hamburg, Germany, 23–28 April 2023; Association for Computing Machinery: New York, NY, USA, 2023; pp. 1–20. [Google Scholar] [CrossRef]
  17. Moruzzi, C.; Margarido, S. A User-centered Framework for Human-AI Co-creativity. In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 11–16 May 2024; Association for Computing Machinery: New York, NY, USA, 2024; pp. 1–9. [Google Scholar] [CrossRef]
  18. Zhou, J.; Li, R.; Tang, J.; Tang, T.; Li, H.; Cui, W.; Wu, Y. Understanding Nonlinear Collaboration between Human and AI Agents: A Co-design Framework for Creative Design. arXiv 2024, arXiv:2401.07312. [Google Scholar] [CrossRef]
  19. Davis, N.; Hsiao, C.-P.; Popova, Y.; Magerko, B. An Enactive Model of Creativity for Computational Collaboration and Co-creation. In Creativity in the Digital Age; Zagalo, N., Branco, P., Eds.; Springer: London, UK, 2015; pp. 109–133. ISBN 978-1-4471-6681-8. [Google Scholar]
  20. Mamykina, L.; Candy, L.; Edmonds, E. Collaborative creativity. Commun. ACM 2002, 45, 96–99. [Google Scholar] [CrossRef]
  21. Stallbaumer, C. Introducing Copilot for Microsoft 365. Microsoft 365 Blog. Available online: https://www.microsoft.com/en-us/microsoft-365/blog/2023/03/16/introducing-microsoft-365-copilot-a-whole-new-way-to-work/ (accessed on 26 August 2025).
  22. Lopes, D.; Correia, J.; Machado, P. EvoDesigner: Towards Aiding Creativity in Graphic Design. In Artificial Intelligence in Music, Sound, Art and Design; Martins, T., Rodríguez-Fernández, N., Rebelo, S.M., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 162–178. [Google Scholar]
  23. Frich, J.; MacDonald Vermeulen, L.; Remy, C.; Biskjaer, M.M.; Dalsgaard, P. Mapping the Landscape of Creativity Support Tools in HCI. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 1–18. [Google Scholar] [CrossRef]
  24. Kantosalo, A.; Jordanous, A. Role-Based Perceptions of Computer Participants in Human-Computer Co-Creativity; AISB: London, UK, 2021; pp. 20–26. Available online: https://aisb.org.uk/wp-content/uploads/2021/04/cc_aisb_proc.pdf (accessed on 26 August 2025).
  25. Liapis, A.; Yannakakis, G.N.; Togelius, J. Computational Game Creativity. 2014. Available online: https://www.um.edu.mt/library/oar/handle/123456789/29473 (accessed on 26 August 2025).
  26. Tan, L.; Luhrs, M. Using Generative AI Midjourney to enhance divergent and convergent thinking in an architect’s creative design process. Des. J. 2024, 27, 677–699. [Google Scholar] [CrossRef]
  27. Gero, J.S. Design Prototypes: A Knowledge Representation Schema for Design. AI Mag. 1990, 11, 26. [Google Scholar] [CrossRef]
  28. Gero, J.S.; Kannengiesser, U. The situated function–behaviour–structure framework. Des. Stud. 2004, 25, 373–391. [Google Scholar] [CrossRef]
  29. Hatchuel, A.; Weil, B. A New Approach of Innovative Design: An Introduction to C-K Theory. In DS 31: Proceedings of ICED 03, the 14th International Conference on Engineering Design, Stockholm; 2003; pp. 109–110. Available online: https://www.designsociety.org/publication/24204/a_new_approach_of_innovative_design_an_introduction_to_c-k_theory (accessed on 13 October 2025).
  30. Howard, T.J.; Culley, S.J.; Dekoninck, E. Describing the creative design process by the integration of engineering design and cognitive psychology literature. Des. Stud. 2008, 29, 160–180. [Google Scholar] [CrossRef]
  31. Girotto, V. Collective Creativity through a Micro-Tasks Crowdsourcing Approach. In Proceedings of the 19th ACM Conference on Computer Supported Cooperative Work and Social Computing Companion, San Francisco, CA, USA, 27 February–2 March 2016; Association for Computing Machinery: New York, NY, USA, 2016; pp. 143–146. [Google Scholar] [CrossRef]
  32. Koivisto, M.; Grassini, S. Best humans still outperform artificial intelligence in a creative divergent thinking task. Sci. Rep. 2023, 13, 13601. [Google Scholar] [CrossRef]
  33. Grassini, S.; Koivisto, M. Artificial Creativity? Evaluating AI Against Human Performance in Creative Interpretation of Visual Stimuli. Int. J. Hum.-Comput. Interact. 2024, 41, 4037–4048. [Google Scholar] [CrossRef]
  34. Guzik, E.E.; Byrge, C.; Gilde, C. The originality of machines: AI takes the Torrance Test. J. Creat. 2023, 33, 100065. [Google Scholar] [CrossRef]
  35. Erwin, A.K.; Tran, K.; Koutstaal, W. Evaluating the predictive validity of four divergent thinking tasks for the originality of design product ideation. PLoS ONE 2022, 17, e0265116. [Google Scholar] [CrossRef]
  36. Grassini, S. Shaping the Future of Education: Exploring the Potential and Consequences of AI and ChatGPT in Educational Settings. Educ. Sci. 2023, 13, 692. [Google Scholar] [CrossRef]
  37. Hwang, A.H.-C. Too Late to be Creative? AI-Empowered Tools in Creative Processes. In Proceedings of the CHI Conference on Human Factors in Computing Systems Extended Abstracts, New Orleans, LA, USA, 29 April–5 May 2022; ACM: New Orleans, LA, USA, 2022; pp. 1–9. [Google Scholar] [CrossRef]
  38. Guo, X.; Xiao, Y.; Wang, J.; Ji, T. Rethinking Designer Agency: A Case Study of Co-Creation Between Designers and AI. IASDR Conference Series. 2023. Available online: https://dl.designresearchsociety.org/iasdr/iasdr2023/fullpapers/170 (accessed on 13 October 2025).
  39. Lee, S.; Law, M.; Hoffman, G. When and How to Use AI in the Design Process? Implications for Human-AI Design Collaboration. Int. J. Hum.-Comput. Interact. 2025, 41, 1569–1584. [Google Scholar] [CrossRef]
  40. Baltà-Salvador, R.; El-Madafri, I.; Brasó-Vives, E.; Peña, M. Empowering Engineering Students Through Artificial Intelligence (AI): Blended Human–AI Creative Ideation Processes with ChatGPT. Comput. Appl. Eng. Educ. 2025, 33, e22817. [Google Scholar] [CrossRef]
  41. Ege, D.N.; Øvrebø, H.H.; Stubberud, V.; Berg, M.F.; Steinert, M.; Vestad, H. Benchmarking AI design skills: Insights from ChatGPT’s participation in a prototyping hackathon. Proc. Des. Soc. 2024, 4, 1999–2008. [Google Scholar] [CrossRef]
  42. Karadağ, D.; Ozar, B. A new frontier in design studio: AI and human collaboration in conceptual design. Front. Archit. Res. 2025, in press. [Google Scholar] [CrossRef]
  43. Weisz, J.D.; Muller, M.; He, J.; Houde, S. Toward General Design Principles for Generative AI Applications. arXiv 2023, arXiv:2301.05578. [Google Scholar] [CrossRef]
  44. Dehghani Champiri, Z. UX Design & Evaluation of HealthQB: A Mobile Application to Manage Chronic Pain. Available online: https://summit.sfu.ca/item/35168 (accessed on 22 December 2023).
  45. Devlin, J.; Chang, M.-W.; Lee, K.; Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, MN, USA, 2–7 June 2019; Association for Computational Linguistics: Minneapolis, MN, USA, 2019; pp. 4171–4186. [Google Scholar] [CrossRef]
  46. Ross, W. The possibilities of disruption: Serendipity, accidents and impasse driven search. Possibility Stud. Soc. 2023, 1, 489–501. [Google Scholar] [CrossRef]
  47. Foster, M.I.; Keane, M.T. The Role of Surprise in Learning: Different Surprising Outcomes Affect Memorability Differentially. Top. Cogn. Sci. 2019, 11, 75–87. [Google Scholar] [CrossRef] [PubMed]
  48. Ross, W.; Vallée-Tourangeau, F. Microserendipity in the Creative Process. J. Creat. Behav. 2021, 55, 661–672. [Google Scholar] [CrossRef]
  49. Weisberg, R.W. On the Usefulness of “Value” in the Definition of Creativity. Creat. Res. J. 2015, 27, 111–124. [Google Scholar] [CrossRef]
  50. Finn, E. What Algorithms Want: Imagination in the Age of Computing; MIT Press: Cambridge, MA, USA, 2017; ISBN 978-0-262-03592-7. [Google Scholar]
  51. Lisete, B. Serendipity: Obstacles and facilitators. J. Arts Humanit. Soc. Sci. 2025, 2, 50–56. [Google Scholar] [CrossRef]
  52. Fortes, G. Abduction. In The Palgrave Encyclopedia of the Possible; Glăveanu, V.P., Ed.; Springer International Publishing: Cham, Switzerland, 2022; pp. 1–9. ISBN 978-3-030-90913-0. [Google Scholar]
  53. Ayoub, K.; Payne, K. Strategy in the Age of Artificial Intelligence. J. Strateg. Stud. 2016, 39, 793–819. [Google Scholar] [CrossRef]
  54. Bykova, E.A. Reflection as a Factor in the Success of Learners’ Innovative Activity. Lurian J. 2022, 3, 36–45. [Google Scholar] [CrossRef]
  55. Wilkens, U.; Field, A.E. Creative Intent and Reflective Practices for Reliable and Performative Human-AI Systems. Schriftenreihe Der Wiss. Ges. Für Arb.-Und Betriebsorganisation (WGAB) 2023, 2023, 77–94. [Google Scholar] [CrossRef]
  56. Attewell, P. The Deskilling Controversy. Work Occup. 1987, 14, 323–346. [Google Scholar] [CrossRef]
  57. Abdel-Karim, B.M.; Pfeuffer, N.; Carl, K.V.; Hinz, O. How AI-Based Systems Can Induce Reflections: The Case of AI-Augmented Diagnostic Work1. MIS Q. 2023, 47, 1395–1424. [Google Scholar] [CrossRef]
  58. Shiiku, S.; Marjieh, R.; Anglada-Tort, M.; Jacoby, N. The Dynamics of Collective Creativity in Human-AI Hybrid Societies. arXiv 2025, arXiv:2502.17962. [Google Scholar] [CrossRef]
  59. Linares-Pellicer, J.; Izquierdo-Domenech, J.; Ferri-Molla, I.; Aliaga-Torro, C. We Are All Creators: Generative AI, Collective Knowledge, and the Path Towards Human-AI Synergy. arXiv 2025, arXiv:2504.07936. [Google Scholar] [CrossRef]
  60. Fan, S.; Taylor, M. Will AI Replace Us? Thames and Hudson Ltd.: London, UK, 2019; Available online: https://library.fra.ac.uk/bib/37629 (accessed on 10 September 2025).
  61. Gunkel, D.J. Generative AI and Remix: Difference and Repetition. In The Routledge Companion to Remix Studies, 2nd ed.; Routledge: Oxfordshire, UK, 2025. [Google Scholar]
  62. Günay, M. Artificial Intelligence and Originality in Design. ART/Icle 2025, 4, 449–469. [Google Scholar] [CrossRef]
  63. Orozco, L. Holly Herndon. New Suns. 2023. Available online: https://newsuns.net/holly-herndon-spawning-identities/ (accessed on 10 September 2025).
Table 1. Summary of five paradoxes in human–AI co-creative systems.
Paradox | Core Tension | Design Goal
Ambiguity vs. Precision | Vague human intent vs. AI’s need for clear input. | Translate vision into prompts without limiting exploration.
Control vs. Serendipity | Human direction vs. the value of unexpected AI discoveries. | Enable beneficial accidents while ensuring human authorship.
Speed vs. Reflection | AI’s rapid generation vs. the need for human critical thought. | Use AI for efficiency without causing cognitive deskilling.
Individual vs. Collective | The creator’s unique voice vs. AI’s data-driven “wisdom of the crowd.” | Leverage collective patterns without suppressing individual style.
Originality vs. Remix | The desire for novelty vs. AI’s recombinative nature. | Frame originality as emerging from human curation of AI’s “remix.”

