Commentary

The Misleading Definition of Creativity Suggested by AI Must Be Kept out of the Classroom

Mark A. Runco
Radical Creativity, Aalto University, FI-00076 Helsinki, Finland
Educ. Sci. 2025, 15(9), 1141; https://doi.org/10.3390/educsci15091141
Submission received: 6 May 2025 / Revised: 8 August 2025 / Accepted: 18 August 2025 / Published: 2 September 2025
(This article belongs to the Special Issue Creativity and Education)

Abstract

The advent of AI is likely to make it difficult to support the creativity of students. This article points to the specific problem whereby the pseudo-creativity of AI is misinterpreted as authentic creativity, which could in turn mislead educators. The result would be a failure to optimally support the authentic creativity of students. There are suggestions that AI can be creative, but also compelling reasons to reject that claim. These include the need for a self in creativity and the role of the self in the creative process. Self-expression, for instance, requires a self as part of the process. Then, there is the intrinsic motivation that characterizes the human creative experience and may be involved in the problem finding that is critical for the creative process. Each of these positions is reviewed in this article. The curious thing is that, even with compelling reasons to distinguish the authentic creativity of students from the artificial creativity of AI, definitions of creativity seem to be changing. The unique feature of the present effort is its examination of the reasons why definitions seem to be changing, even when, according to creativity research and theory, they should not. This article describes (a) the surprising and incorrect position that AI can be creative and (b) the problems that would occur if this position were applied in the classroom. It then (c) attempts to explain why that position is in fact taking hold, as suggested by changes in how creativity is defined. This explanation requires looking beyond creativity theory to broad social contexts, which are economic, political, scientific, and technological. This article concludes with (d) recommendations for supporting the authentic creativity of students. The recommendations exclude ideas suggested by the misleading definition of creativity described in (a); they were instead selected based on their relationships specifically with authentic creativity.

1. Introduction

Creativity is a remarkably valuable thing for both individuals and society. Not surprisingly, a large number of methods and programs have been developed to support creativity in schools (see Beghetto, 2021; Ma, 2006). Supporting creativity in the classroom is not easy, however, and there are numerous difficulties. Consider, for example, Westby and Dawson’s (1995) finding that teachers believe ideal students to be courteous, punctual, and conventional. Those preferences are largely at odds with the autonomy, independence, divergent thinking, and unconventional tendencies that play a role in creative behavior. Westby and Dawson (1995) put it this way: “Students displaying creative characteristics appear to be unappealing to teachers” (p. 1). To make matters worse, Westby and Dawson discovered that teachers were unaware that they favored uncreative students. The teachers valued creativity but were, apparently, not appreciating it when they saw it. Standing back, it is easy to see how a student’s divergent thinking (i.e., the production of original and unusual ideas), autonomy, and intrinsic motivation may be incompatible with the typical lesson plan, which is usually designed to involve the entire class of students rather than individuals.
The advent of AI is likely to make it even more difficult to support the creativity of students. That is because generative AI (GenAI) can produce things that may appear to be creative, but the processes it uses do not guarantee (and may not even allow) authentic creativity. For this reason, what GenAI does has been described as artificial creativity, and it may be even more accurate to label it pseudo-creativity (Runco, 2023a, 2023b). This position is summarized below, but the unique contribution of the present article is its focus on the implications and, in particular, on the specific problems faced by educators. The most general of these stems from the fact that it is easy to (mis)attribute creativity to what GenAI does, and, as a result, definitions are being modified in ways that will detract from the authentic creativity that should be supported in the classroom.
The definitions that are used in the classroom are part of the implicit theories of educators. Teachers, like parents, have theories about creativity which lead directly to expectations and judgments (Chan & Chan, 1999; Johnson et al., 2003; Runco et al., 1993; Sternberg, 1985). These implicit theories are not based on scientific definitions, which must be articulated so they can be shared and tested as a part of scientific endeavors. Those are explicit, in contrast to the implicit theories that operate in the natural environment, including the classroom. Implicit theories lead to expectations but need not be articulated or supported by experimental data. The impact of teachers’ expectations on the growth of students is well-documented (Rosenthal, 1991).
The terms “authentic creativity” and “artificial creativity” are used throughout this article, in part because they have been used in previous studies on the ostensible creativity of GenAI. Something more can be said about this, although mostly, those terms are used here simply to be consistent with previous studies (e.g., Guzik et al., 2023; Haase & Hanel, 2023; Runco, 2023a). There is also a logic to these concepts. It starts with the position that humans are unambiguously creative. This is a pretty easy view to accept. We have advanced as a species because we have been creative. Our creativity has led to cultural, technological, scientific, and all forms of advance. Then, there are the 75 years of research demonstrating human creativity with psychological tasks and analyses of creative achievement, cultural differences, creative economies, and so on. The concept of authenticity by itself has a rich history in psychology, and much of it humanistic. Maslow (1973), for example, was explicit that human creativity depends on authenticity (and self-actualization). With all of this in mind, “authentic creativity” makes a good deal of sense, which is no doubt why it is frequently used in studies comparing GenAI and humans, like those cited in this article.
Then, there is the fact that what GenAI does is not the same as what humans do. Simplifying a bit, GenAI is trained by exposure to large amounts of information. It uses algorithms to find patterns and draw inferences such that it can solve certain kinds of problems, make certain kinds of decisions, and produce various kinds of output (e.g., music, code, images, and solutions). The point is that, if humans are authentically creative, and AI is dissimilar to humans in how it operates, where does that leave AI? What AI does could be called “inauthentic creativity” but that is not elegant (in the sense of simplicity and clarity). Ergo, what GenAI does is at most artificially creative. Additional support for this position is cited below.
The present effort begins with a brief comparison of the authentic creativity of humans and the artificial creativity of AI. That comparison is easily summarized because much of it has been presented before (Holyoak, 2024; Runco, 2024a, 2023/2025). This allows the focus to shift to what is unique in the present article, namely the possibility that the definition of creativity is changing in a way that would mislead educators if adopted. This is not a straw-man argument; there are already indications that definitions are changing. These indications are reviewed below, immediately after the research on the ostensible creativity of AI is summarized and the reasons that AI cannot be authentically creative are presented. This article then attempts to explain why definitions are changing, even though there are compelling reasons to recognize what AI does as different from human creativity. To this end, several broad social, economic, scientific, and technological contexts are summarized. The conclusion details the unfortunate outcomes that are likely if businesses and others invested in AI succeed and the definition of creativity used by teachers does change such that the artificial creativity of AI is mistaken for authentic creativity. That could cause great harm by detracting from what it takes to be authentically creative and, instead, rewarding processes which are not actually a part of authentic creativity. If we are not careful, we may soon have students who have the potential for creativity but do not receive optimal support for it.

2. Authentic vs. Artificial Creativity

There are indications that definitions and implicit theories are already changing in response to GenAI. Consider, in this regard, the statement from Aru (2024): “Artificial intelligence (AI) systems capable of generating creative outputs are reshaping our understanding of creativity.” That implies that the outputs or products of AI are in fact creative. This is misleading, given that those products are not really creative when judged by criteria suggested by the authentic creativity of humans. As such, this quotation implies a new definition of creativity of the sort I described above, with the potential to mislead. If teachers rely on a misleading definition of creativity, they are likely to support the pseudo-creativity of AI and neglect the processes that are indicative of actual creative potential and authentic creativity. Aru eventually acknowledged that the processes used by AI differ from those used by humans, and processes may be the clearest difference between AI and humans (Runco, 2024a). Nonetheless, the statement above about “reshaping our understanding” confirms that care must be taken with how the output of AI is described.
There is additional evidence that interpretations and definitions are already changing. Watkins and Barak-Medina (2024), for example, claimed that, “generative AI…affects human creative agency.” Vinchon et al. (2024) pointed to “new forms of creativity” and suggested that efforts be put into “guiding AI to foster genuine originality.” Genuine originality sounds much like authentic creativity. Vinchon et al. (2024) described some empirical results as showing that the “creative output” of AI was only average, but they nevertheless referred to it as creative output. In addition, they implied that humans must adapt to take the creativity of AI into account, which of course assumes that AI can be creative. The thinking of Vinchon et al. (2024), specifically about co-creativity, is reasonable, as long as the AI involved is recognized as a tool and not really a co-creator. Literally speaking, an AI co-creator would itself be creating, along with a human creator. This is not the same as a tool that augments human creativity. A final example is that of Mishra et al. (2024) and their description of “how media technologies have historically shaped human cognition and society [and] the transformative role of generative AI on creativity, education, and communication.” If the transformation is such that AI is viewed as a tool rather than as authentically creative, that statement is probably acceptable, though quite broad.
Why are definitions and implicit theories changing such that creativity is aligned with the products of GenAI? One reason is that various kinds of output from GenAI (e.g., poetry, songs, ideas and solutions, coding) (Chen et al., 2023; Doshi & Hauser, 2024; Hitsuwari et al., 2023; Koivisto & Grassini, 2023; Lyu et al., 2022; Meincke et al., 2024) seem to satisfy the standard definition of creativity. This definition is “standard” because it has been used for decades and is used much more than any other definition (Runco & Jaeger, 2012). It requires only originality and effectiveness. Guzik et al. (2023), for instance, administered the Torrance Tests of Creative Thinking (TTCT) and found GenAI to perform above the 90th percentile. Care must be taken when interpreting these results. For one thing, the TTCT is not really a measure of creativity per se. It is instead an indicator or estimate of creative potential. A test score may indicate that an individual or GenAI has the potential to accomplish creative things, but there is no guarantee. A person who earns a high score may or may not exhibit unambiguously creative behavior in the natural environment. All tests, including the TTCT, sample particular behavior and then summarize the sample with a test score. That score is really only reliably indicative of the sample. It is not an index of what occurred in the natural environment nor of the entire universe of relevant skills. Moreover, authentic creativity involves more than the divergent thinking that is assessed with the TTCT. Motivation, affect, attitude, and meta-cognition, for example, often have a notable impact on actual creative performances (Basadur, 1994; Davis, 1999; Sternberg, 2025). The research of Guzik et al. is interesting and useful, but performance on standardized tests of creative potential does not necessarily lead to the conclusion that GenAI can be creative.
There are other reasons to reject the claim that AI can be creative. Consider the statement above that GenAI appears to be creative because its output sometimes satisfies the standard definition of creativity (Runco & Jaeger, 2012). This definition requires only two things, originality (or novelty) and effectiveness, and what is being attributed with creativity (the poems, code, and so on) is some sort of output or product, not a process. A product usually says little with certainty about the processes involved in its construction; moreover, by the point where there is a product, the process has been completed. The work has already been performed. The only thing to evaluate is the end result, or product, and this does not reveal much about what led to that product. Processes can only be inferred from output, and those inferences are post hoc.
Relatedly, GenAI can produce things that satisfy the standard definition, but the processes it uses to do so may not really allow the creation of something truly new and original. LLMs, for example, produce responses to prompts by what has been called “autocorrect,” meaning that they find the response that has the highest probability of being similar to what they have found in related information during training. Additionally, AI cannot use the same processes as humans when humans are creative. AI lacks authenticity and intentionality, for example, and it is not intrinsically motivated. There is no self, so self-expression is not possible. In Holyoak’s (2024) terminology, AI has no “inner experience”, which is another way of saying that it lacks consciousness or sentience. It may be that the best of these descriptors is “self”. That is because creativity is often self-expressive, which, of course, is impossible if there is no self.
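To make the point about process concrete, consider a minimal, purely illustrative sketch (in Python) of the kind of “autocorrect” generation loop just described. The vocabulary, the probability table, and the generate function below are invented for illustration; no actual model is this simple, but the underlying logic of repeatedly selecting the most probable continuation, based entirely on patterns in pre-existing (training) information, is conceptually similar.

# A hypothetical sketch, not the implementation of any actual model:
# at each step, the continuation with the highest probability, derived
# from pre-existing information, is appended to the output.

next_token_probs = {
    ("the",): {"cat": 0.40, "dog": 0.35, "idea": 0.25},
    ("the", "cat"): {"sat": 0.6, "ran": 0.4},
    ("the", "cat", "sat"): {".": 1.0},
}

def generate(prompt, max_steps=5):
    """Greedy generation: always extend with the most probable next token."""
    tokens = list(prompt)
    for _ in range(max_steps):
        probs = next_token_probs.get(tuple(tokens))
        if not probs:
            break  # no learned pattern to extend
        tokens.append(max(probs, key=probs.get))  # most likely continuation
    return " ".join(tokens)

print(generate(("the",)))  # -> "the cat sat ."

Nothing in such a loop involves problem finding, intrinsic motivation, or self-expression; the output is assembled entirely from probabilities derived from information that already exists.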
The involvement of the self can be put into the context of a debate that is now several decades old. One side held that creativity was merely a kind of problem solving, and the other side posited that creativity was more than problem solving. Creativity is sometimes expressed as an original and effective solution to a problem, but the second perspective turned out to be the stronger argument, in part because sometimes creativity is unrelated to problem solving, as is apparent when the creativity of humans is pure self-expression. Self-expression may be spontaneous and occur when there is no problem. The separation of creativity from problem solving is also suggested by the role of problem finding in creativity (Mumford et al., 1994; Runco, 1994). This may involve problem identification, problem discovery, problem construction, problem formulation, or problem definition. Whichever of these is involved, it occurs before problem solving. Many creative breakthroughs have resulted from creative problem finding of some sort and not because of problem solving. This is germane in that GenAI does not find its own problems. At best it can generate hypotheses (Dai et al., 2024), but even there it must be prompted to do so. Note, also, that the involvement of a self in creativity and the importance of problem finding, which is often described as the first stage of the creative process, both offer specifics about what processes humans use when they are creative.
Importantly, individuals developing AI have admitted that they do not really know what processes are used during machine learning. Sam Bowman of the Anthropic corporation, for example, said the following:
“If we open up ChatGPT or a system like it and look inside, you just see millions of numbers flipping around a few hundred times a second, and we just have no idea what any of it means. With only the tiniest of exceptions, we can’t look inside these things and say, “Oh, here’s what concepts it’s using, here’s what kind of rules of reasoning it’s using. Here’s what it does and doesn’t know in any deep way.” We just don’t understand what’s going on here. We built it, we trained it, but we don’t know what it’s doing” (quoted by Hassenfeld, 2023).
As Dalmath (2024) described it, GenAI models such as “GPT and Claude can simulate human logic and provide comprehensive answers. But it is not their ‘true thought.’ Unlike humans, who rely on introspection or understanding, they count on patterns and probabilities. This intelligence is powered by massive computing infrastructure under the hood.” He also described the processes used by AI as inference and added, “During inference, standard AI models use their pre-trained patterns to recognize the input and generate the most likely output based on probabilities. This process uses computing power to analyze the pre-existing knowledge.” The dependence on pre-existing knowledge (which more accurately should be labeled pre-existing information) is hugely important, for it implies that GenAI may not be original. It relies on something that already exists. A slightly different take is that GenAI can produce output which appears to be new, but it has not created that output; rather, it has found it by searching pre-existing information and discovering seemingly new possibilities. In both cases (creating something new vs. finding something new), the outcome is the same, but the processes leading up to the new thing differ, which is precisely the point. AI does not use the same processes that humans use when they are authentically creative. From a psychological point of view, that is enormously important. It is also enormously important from an educational point of view, given that creative potential can only be optimally supported if the underlying mechanisms are understood. It is suboptimal to work only with students’ output and focus on the results of their efforts. More will be said in the Discussion below about the classroom.
The idea that different processes are used by humans and AI can also be put into the context of the oft-cited framework in creative studies which distinguishes creative product, creative process, creative personality, and creative place (Rhodes, 1961). As a matter of fact, in addition to process and product, the personality approach is relevant to the present discussion in that humans have particular traits (i.e., openness to new experiences and intrinsic motivation), both of which are beyond what could be expected of GenAI. More will be said about intrinsic motivation below, in the Discussion, which summarizes implications for teaching.
The various differences between humans and GenAI in the processes used constitute the rationale for viewing the output of GenAI as what Cropley (1999), May (1959), and Nicholls (1983) each called pseudo-creativity. They described objects and performances which appear to be creative but are in fact not the result of a creative process. Because many of the operations of GenAI are described as “artificial intelligence,” I suggested a parallel label for what AI does, namely artificial creativity (Runco, 2023/2025). Thus, there is AI and AC. I have also suggested that, given what we know about how AI operates, it may be most accurate to describe its processes as discovery, or perhaps innovation, and not creativity. Care must be exercised on this point, however, because, as Bender and Hanna (2025) argued, LLMs are not mere search engines. Nonetheless, for present purposes the important thing is that discovery is an apt label in that AI may find an effective response within the information already contained in existing LLMs. Innovation is an apt descriptor because it is sometimes defined as adapting or extending something which already exists. Admittedly, these ideas apply most directly to large language models and not all of AI. There are new forms of AI where the machine learning is supplemented with human input and reinforcement (Wells & Bednarz, 2021). There are also other views of innovation besides the one just mentioned, which suggests that innovation involves adapting something that already exists. Innovation is, for example, sometimes defined in terms of implementation. There is no reason to conflate creativity, discovery, and innovation, and there would be a benefit to distinguishing them in the classroom.

3. Why Is the Output from AI Given More Weight than the Creative Experience?

The section above suggests that, although there is research that has referred to various outputs from GenAI (e.g., standardized test scores, poems, songs, computer code, artwork) as creative, a careful analysis refutes the position that it is authentic creativity and suggests that what GenAI does is artificial, inauthentic, or even pseudo-creativity. AI may display discovery by finding original ideas, but it is not creating new ideas. It may also innovate by extending existing lines of information. What is puzzling is that there are indications that definitions of, and expectations for, creativity may be changing. Several examples were quoted above. These changes are surprising, given the compelling reasons to reject the position that GenAI can be authentically creative. Why the changes when there is evidence to reject the position that GenAI can really be creative? There are several possible explanations.
Just above, the analysis of GenAI was put into the context of the debate about problem solving and creativity, and right after that a second context was summarized, namely that provided by the 4P framework that is so often used in studies of creativity. As it turns out, there is also a benefit to examining broad economic, political, scientific, and technological contexts. Doing so contributes to an explanation of why definitions of creativity are showing signs of changing, even with the clear reports that what GenAI does is not authentically creative. Thus, to answer the question of why things are changing when they should not be, we can look to context. Immediately below, the economic context is considered.
Of the various contexts just mentioned (i.e., economic, political, and scientific), the least surprising pressure in favor of the position that GenAI is authentically creative is economic. A number of businesses have invested heavily in AI, and they are promoting their chatbots and various products powered by AI, advertising them as creative. Hence, even with evidence, like that reviewed above, that GenAI is not authentically creative, large sums of money are being spent promoting the position that AI is creative. Microsoft, for example, has invested well over 10 billion US dollars in AI, and that is supposed to go up by a factor of eight in the next fiscal year. And that is just one company. Obviously, businesses would not be investing so heavily in AI if they did not expect (and work for) a return. That return is more likely if AI is attributed with creativity.
The political context overlaps with the economic context. That is because money is now the primary force in politics, at least in the USA, and at least going back to 2010 and a highly destructive decision by the Supreme Court of the US. I have never cited Wikipedia in my academic publications, but in the following instance it offers a good summary:
“Citizens United v. Federal Election Commission, 558 U.S. 310, is a landmark decision of the United States Supreme Court regarding campaign finance laws, in which the Court found that laws restricting the political spending of corporations and unions are inconsistent with the Free Speech Clause of the First Amendment to the U.S. Constitution. The Supreme Court’s 5–4 ruling in favor of Citizens United sparked significant controversy, with some viewing it as a defense of American principles of free speech and a safeguard against government overreach, while others criticized it as promoting corporate personhood and granting disproportionate political power to large corporations” (https://en.wikipedia.org/wiki/Citizens_United_v._FEC, accessed on 25 June 2025).
Huge donations to political campaigns are now legal in the USA, even by corporations, and, in turn, government contracts are being given to large donors. It is quid pro quo. Misinformation is now rampant, both in politics and in a de-regulated business environment. This is one reason why AI can be advertised as creative, as an attempt to promote it. As a matter of fact, the misinformation both results from AI (because it does make mistakes, such as ostensible hallucinations (Emsley, 2023)) and supports the interpretations of AI that are the most favorable, including the claim that AI is authentically creative. That is, actually, a good summary and should be emphasized: any definition that accepts the output of AI as truly creative is misinformation, given that there are a number of reasons (summarized above, and in Holyoak (2024) and Runco (2023b, 2024a, 2024b)) to reject that position.
Much more could be said about the relevance of the political context. The current administration in the USA has, for example, started to eliminate the Department of Education, and, as Snyder (2025) pointed out, the current President and his administration “are declaring their power to define reality, independently not only of judicial but of all verification” (italics added). This is entirely consistent with the White House attacks on science and universities. Snyder described how controlling labels and language are attempts to alter categorizations (even of what is legal and what is not) and, as such, are specific signs of the political redefinition of reality. Even more specific are the words (e.g., inclusion, bias, diversity) that can no longer be used in federally supported programs and grants. Snyder’s focus was political, but the idea that political efforts and the use of labels and categories can alter reality applies here, to GenAI, creativity, and education. The idea is that, if AI is called creative often enough, the reality (i.e., that AI is not authentically creative) will be forgotten. Snyder’s recommendation might apply to the present case: he was quite clear that a new reality only emerges with assent. With that in mind, we should not assent to changes in the definition of creativity that do not respect human creative potential, even if GenAI produces things which superficially resemble human creativity.
There is even a scientific context which lends itself to the misinformation about the ostensible creativity of GenAI. That follows from the fact that the sciences emphasize objectivity. Recall here that the output of GenAI is often a product and easily observable. It is in that sense objective, and is certainly more objective or at least concrete than the underlying processes which were cited above as evidence that GenAI is not authentically creative. The creativity research is not immune from this bias which favors that which is most easily observable. Indeed, there is some indication that the creativity research has itself been derailed by what might be seen as a temptation of objectivity, where products and social recognition are preferred because they appear to be objective, even though they do not have explanatory power (Runco, 1995).1
There is one last context to discuss. This is the technological context. In one sense, it is the most obviously relevant to AI. On the other hand, much of the current thinking about technology (and indeed much of the current zeitgeist around the world) respects technology and, therefore, AI. Indeed, the simplest implication of a technological zeitgeist is probably that technology contributes in many ways and is prevalent in all walks of life, and so there is an expectation that it will support creative endeavors. Yet this is simplistic, and there is an alternative implication when it comes to creativity. That is implied by the well-known expression, “the medium is the message.” This expression was used by Marshall McLuhan in 1964 and has been enormously influential. Like all provocative ideas, it has numerous interpretations (see McLuhan, 1964; Postman, 1985; MacKinnon, 1965; Ong, 1982). It was recently applied to creativity by Mishra et al. (2024). The point to be made here is, like much of the present argument, about creative processes. Simplifying some, the medium for human creativity is organic. Creativity results from various processes that are controlled by the nervous system, which, on the most basic level, operates with electro-chemical processes. These are necessarily involved in the various emotional, attitudinal, motivational, and irrational contributions to the creativity complex (MacKinnon, 1965; Mumford & Gustafson, 1988). The medium used by AI is, in contrast, electrical. The neural networks of AI are analogous to the systems and networks operating in the human brain, but they are, really, only analogous. Thus, at best, what GenAI does is mimicry. It does not have the same organic medium as humans. If the message were all-important, the product perspective summarized above might be more acceptable, with the implication that GenAI could be considered creative. Yet, to accept this we would need to ignore the medium and, in particular, the organic basis of human thought (and emotion) and the complex nature of human creativity.
The last few paragraphs have drifted from the practical issues occurring in the classroom. Nonetheless, if we want to keep the shifting definitions (where AI might be viewed as authentically creative) from misleading educational efforts, it is vital to fully understand why those definitions are changing. These last few paragraphs raised the possibility that various contexts explain some of the conceptual drift. In some ways, the economic, political, and scientific contexts of society are facilitating changes in how creativity is defined. The technological zeitgeist is in some ways doing the same thing. The idea that the medium is the message is actually consistent with the position that GenAI’s artificial creativity is different from the authentic creativity of humans. In the Discussion below, I explore various aspects of human creativity that should be supported in the classroom. These are things which are beyond GenAI and important parts of authentic creativity. Educators should support each of them.

4. Discussion

The primary concern expressed in this article is that definitions and expectations about students’ creativity may change as a result of the misinterpretation of output from GenAI. If that occurs, support for students’ creative potential will be suboptimal. Educational efforts would focus on the things that AI does when it is artificially creative instead of what humans do when they are authentically creative. That is to be avoided. Various contributions to creativity, suggested by years of research with humans, might be neglected. Intrinsic motivation, for instance, is often associated with human creativity and is something which should be supported in the classroom. As a matter of fact, a large number of educational theories imply that intrinsic motivation is critical for all learning. Many of these (e.g., those of Piaget, Dewey, Vygotsky, and Bruner) were summarized by Runco (2025). Piaget (1976) argued that cognitive development depends on intrinsic motivation. This is implied by his view that students only react to challenges that are optimally ahead of their current level of functioning. This assumes intrinsic motivation and also suggests that education should be as individualized as possible. Creativity is involved in Piaget’s claim that “to understand is to invent,” with the idea being that authentic learning is itself a creative process whereby meaning is created. If there is no creativity, there is no authentic learning; at best, students merely memorize what they are told to learn. Many others have described how students are the most likely to be creative when they are given opportunities to follow their own interests at an individual pace. The suggestion here is to target authentic learning, which is itself a creative process (Eyler, 2018).
Interestingly, AI supposedly learns. What it does is frequently described as “machine learning.” Yet a cursory examination of what this implies suggests that, even though machines learn, it is not the same learning used by humans—so again, the processes differ. AI does learn, but it is at best equivalent to the memorization mentioned above, and is uncreative and superficial. There are reports that AI does more than memorize, but recall here that we are really not sure what AI does when it is programmed to “learn.” It may process and retain information, but this is very different from the learning of humans that is a part of lived experience. Human learning, when it is meaningful and creative, is both top-down and bottom-up, with emotions, heuristics, and intrinsic motivation all kicking in to construct personal meaning. Thus, even though humans, like AI, must learn from input, human learning uses input very differently from the way AI uses it.
The inclusion of intrinsic motivation in this description of human learning not only acknowledges an important part of the creativity complex but also answers the question, “Why create?” The discussion above about the processes involved in creativity mostly described “what,” as in “what do humans do that AI cannot?” The differences between humans and AI include both different processes (e.g., self-expression) and different motivation. In humans, the motivation may be intrinsic, but, in AI, output results from (extrinsic) prompts. Citing Piaget (1976) one more time, humans are intrinsically motivated because we are genetically predisposed to understand our experiences and environment. Piaget was trained as a biologist and recognized the evolutionary advantage of an intrinsic motivation to adapt in this way.
To practice creative habits, students should also be given opportunities to identify important problems for themselves, and additional opportunities to redefine problems rather than sticking with the way problems are presented to them. These are aspects of problem finding and fit with the suggestions about intrinsic motivation as well. Problem finding might be supported in the classroom by asking questions about existing problems or issues rather than presenting them as fixed or rigid. Problem finding might also be supported by providing latitude with assignments so students have some say-so and can redefine problems for themselves. Shifting perspectives can help with problem finding and should also be encouraged. This is a kind of flexibility and can be supported by suggesting to students that they consider what others might think or how others might view the problem at hand. There is a brief entry in the Encyclopedia of Creativity which summarizes various benefits of, and methods for, perspective shifting.
The fact that authentic creativity may reflect self-expression indicates that students should be given opportunities to reflect and explore personal interests and feelings. This can be a challenge, for certain ages are characterized by the tendency to conform or, at least, to give undue weight to conventions. Conventions are normative and often contrary to individualized self-expression. They direct students to what others are thinking, which is often different from personal considerations. The pressures towards conventionality that occur in the schools mean that creative self-expression also depends a great deal on ego-strength, courage, and risk-tolerance. Each of these can be supported in the classroom. The humanists had methods for exactly this sort of thing (Harrington et al., 1987). These involved positive regard and respect. Discretion is also involved, especially because mature creativity does require an awareness of what is conventional (Sternberg, 2025). Individuals need to know when to go along with norms and when to rebel and stick with their own position.
None of these recommendations are necessarily inconsistent with the use of AI in the classroom. There have, for example, been reasonable suggestions of co-creativity (AI working with humans) (Haase & Hanel, 2023; Vinchon et al., 2024). This may be the best way to interpret the technological context. Recall, here, the statement above that “the simplest implication of a technological zeitgeist is probably that technology contributes in many ways and is prevalent in all walks of life, and so there is an expectation that it will support creative endeavors.” That must be qualified because GenAI is not itself authentically creative. It can, nonetheless, contribute to creativity, even if it is not itself authentically creative. It might provide information and options, for example. Certainly, care must be taken, especially in that “co-creativity” could imply that a human works with AI and that both are creative. That is inconsistent with the position defended above and would mislead educators. It implies that both the human and AI bring creativity to the task. It would be more accurate to say that the computer can bring certain things to the task at hand and in that way support human creativity, even if it is itself not creative. AI can bring enormous amounts of information to projects, and when prompted appropriately it can also offer different perspectives, as recommended above. Co-creativity may help in the classroom, and elsewhere, but this does not imply that GenAI is, by itself, truly creative.
This article started by referring to the benefits of creativity for individuals and society. It is useful, before closing, to consider some specific benefits that have not yet been mentioned. Creativity does lead to advance, progress, and growth. Authentic learning requires creativity (Beghetto, 2021; Runco, 2023/2025), and individuals and society are both more adaptive when they are creative. Creativity makes for good problem solving, be it on the individual or cultural level. Yet, the fulfillment of potential mentioned several times in this article refers to much more than advance and problem solving. Creativity is also related to the quality of life. It adds richness and texture to our experiences, and it can do so each and every day (Richards, 1991). Thus, if GenAI changes how creativity is defined and educators start supporting what it does rather than what it takes for authentic creativity, students will really miss out, not just in terms of conventional indicators of achievement or advance, but also in terms of enjoyment, purpose, meaning, and satisfaction.

Funding

This research received no external funding.

Conflicts of Interest

The author has no conflicts of interest to declare.

Note

1
The history of the field of creativity research actually shows several eras (Albert & Runco, 1999). Early on, studies of creativity were not scientific. Creativity was equated with art. Fifty or so years ago, methods were developed such that various facets of creativity could be objectively studied, and the science of creativity has boomed ever since. It may have gone too far, however, at least in the attributional and systems theories that require social recognition for creativity.

References

  1. Albert, R. S., & Runco, M. A. (1999). The history of creativity research. In R. J. Sternberg (Ed.), Handbook of creativity (pp. 16–31). Cambridge University Press. [Google Scholar]
  2. Aru, J. (2024). Artificial intelligence and the internal processes of creativity. Journal of Creative Behavior, 59(2), e1530. [Google Scholar] [CrossRef]
  3. Basadur, M. S. (1994). Managing the creative process in organizations. In M. A. Runco (Ed.), Problem finding, problem solving, and creativity. Ablex. [Google Scholar]
  4. Beghetto, R. A. (2021). There is no creativity without uncertainty: Dubito Ergo Creo. Journal of Creativity, 31, 100005. [Google Scholar] [CrossRef]
  5. Bender, E. M., & Hanna, A. (2025). The AI con: How to fight Big Tech’s hype and create the future we want. Harper. [Google Scholar]
  6. Chan, D. W., & Chan, L. K. (1999). Implicit theories of creativity: Teachers’ perception of student characteristics in Hong Kong. Creativity Research Journal, 12(3), 185–195. [Google Scholar] [CrossRef]
  7. Chen, L., Sun, L., & Han, J. (2023). A comparison study of human and machine-generated creativity. Journal of Computing and Information Science in Engineering, 23(5), 051012. [Google Scholar] [CrossRef]
  8. Cropley, A. J. (1999). Definitions of creativity. In M. A. Runco, & S. R. Pritzker (Eds.), Encyclopedia of creativity. Elsevier. [Google Scholar]
  9. Dai, T., Vijayakrishnan, S., Szczypinski, F. T., Ayme, J.-F., Simaei, E., Fellowes, T., Clowes, R., Kotopanov, L., Shields, C. E., Zhou, Z., & Ward, J. W. (2024). Autonomous mobile robots for exploratory synthetic chemistry. Nature, 635, 890–897. [Google Scholar] [CrossRef]
  10. Dalmath, R. (2024). Understanding test-time compute: A new mechanism allowing AI to “Think Harder”: Exploring how AI adapts to complex tasks with dynamic reasoning power medium. Available online: https://medium.com/@rendysatriadalimunthe/understanding-test-time-compute-a-new-mechanism-allowing-ai-to-think-harder-19e017abc540 (accessed on 25 June 2025).
  11. Davis, G. (1999). Barriers to creativity and creative attitudes. In M. A. Runco, & S. Pritzker (Eds.), Encyclopedia of creativity (pp. 165–174). Academic Press. [Google Scholar]
  12. Doshi, A. R., & Hauser, O. P. (2024). Generative AI enhances individual creativity but reduces the collective diversity of novel content. Science Advances, 10(28), eadn5290. [Google Scholar] [CrossRef]
  13. Emsley, R. (2023). ChatGPT: These are not hallucinations—they’re fabrications and falsifications. Schizophrenia, 9, 52. [Google Scholar] [CrossRef]
  14. Eyler, J. (2018). How humans learn: The science and stories behind effective college teaching. West Virginia University Press. [Google Scholar]
  15. Guzik, E., Byrge, C., & Gilde, C. (2023). The originality of machines: AI takes the Torrance Test. Journal of Creativity, 33, 100065. [Google Scholar] [CrossRef]
  16. Haase, J., & Hanel, J. H. P. (2023). Artificial muses: Generative artificial intelligence chatbots have risen to human-level creativity. Journal of Creativity, 33, 100066. [Google Scholar] [CrossRef]
  17. Harrington, D. M., Block, J. H., & Block, J. (1987). Testing aspects of Carl Rogers’ theory of creative environments: Child-rearing antecedents of creative potential in young adolescents. Journal of Personality and Social Psychology, 52, 851–856. [Google Scholar] [CrossRef] [PubMed]
  18. Hassenfeld, N. (2023, July 15). We built it, we trained it, but we don’t know what it’s doing. Even the scientists who build AI can’t tell you how it works. Vox. Available online: https://www.vox.com/unexplainable/2023/7/15/23793840/chat-gpt-ai-science-mystery-unexplainable-podcast (accessed on 25 June 2025).
  19. Hitsuwari, J., Ueda, Y., Yun, W., & Nomura, M. (2023). Does human-AI collaboration lead to more creative art? Aesthetic evaluation of human-made and AI-generated haiku poetry. Computers in Human Behavior, 139, 107502. [Google Scholar] [CrossRef]
  20. Holyoak, K. J. (2024). Why I am not a Turing machine. Journal of Cognitive Psychology, 1–12. [Google Scholar] [CrossRef]
  21. Johnson, D., Runco, M. A., & Raina, M. K. (2003). Parents’ and teachers’ implicit theories of children’s creativity: A cross-cultural perspective. Creativity Research Journal, 14, 427–438. [Google Scholar]
  22. Koivisto, M., & Grassini, S. (2023). Best humans still outperform artificial intelligence in a creative divergent thinking task. Scientific Reports, 13, 13601. [Google Scholar] [CrossRef]
  23. Lyu, Y., Wang, X., Lin, R., & Wu, J. (2022). Communication in human-AI co-creation: Perceptual analysis of paintings generated by text-to-image system. Applied Sciences, 12(22), 11312. [Google Scholar] [CrossRef]
  24. Ma, H.-H. (2006). A synthetic analysis of the effectiveness of single components and packages in creativity training programs. Creativity Research Journal, 18, 435–446. [Google Scholar] [CrossRef]
  25. MacKinnon, D. (1965). Personality and the realization of creative potential. American Psychologist, 20, 273–281. [Google Scholar] [CrossRef] [PubMed]
  26. Maslow, A. H. (1973). Creativity in self-actualizing people. In A. Rothenberg, & C. R. Hausman (Eds.), The creative question (pp. 86–92). Duke University Press. [Google Scholar]
  27. May, R. (1959). The nature of creativity. ETC: A Review of General Semantics, 16, 261–276. Available online: https://www.jstor.org/stable/24234376 (accessed on 25 June 2025).
  28. McLuhan, M. (1964). Understanding media: The extensions of man. McGraw-Hill. [Google Scholar]
  29. Meincke, L., Girotra, K., Nave, G., Terwiesch, C., & Ulrich, K. T. (2024). Large language models for idea generation in innovation. SSRN Electronic Journal. [Google Scholar] [CrossRef]
  30. Mishra, P., Oster, N., & Henriksen, D. (2024). To thine own mind be true: Understanding cultural technologies, from cave walls to ChatGPT. TechTrends. [Google Scholar] [CrossRef]
  31. Mumford, M. D., & Gustafson, S. B. (1988). Creativity syndrome: Integration, application, and innovation. Psychological Bulletin, 103, 27–43. [Google Scholar] [CrossRef]
  32. Mumford, M. D., Reiter-Palmon, R., & Redmond, M. R. (1994). Problem construction and cognition: Applying problem representations in ill-defined domains. In M. A. Runco (Ed.), Problem finding, problem solving, and creativity (pp. 3–39). Ablex. [Google Scholar]
  33. Nicholls, J. (1983). Originality in the person who will never produce anything useful or creative. In R. S. Albert (Ed.), Genius and eminence: A social psychology of creativity and exceptional achievement (pp. 265–279). Pergamon. [Google Scholar]
  34. Ong, W. J. (1982). Orality and literacy: The technologizing of the word. Methuen. [Google Scholar]
  35. Piaget, J. (1976). To understand is to invent. Penguin. [Google Scholar]
  36. Postman, N. (1985). Amusing ourselves to death: Public discourse in the age of show business. Penguin. [Google Scholar]
  37. Rhodes, M. (1961). An analysis of creativity. Phi Delta Kappan, 42, 305–310. [Google Scholar]
  38. Richards, R. (1991). A new aesthetic for environmental awareness: Chaos theory, the beauty of nature, and our broader humanistic identity. Journal of Humanistic Psychology, 41, 59–95. [Google Scholar] [CrossRef]
  39. Rosenthal, R. (1991). Teacher expectancy effects: A brief update 25 years after the Pygmalion experiment. Journal of Research in Education, 1, 3–12. [Google Scholar]
  40. Runco, M. A. (1994). Conclusions concerning problem finding, problem solving, and creativity. In M. A. Runco (Ed.), Problem finding, problem solving, and creativity (pp. 272–290). Ablex. [Google Scholar]
  41. Runco, M. A. (1995). Insight for creativity, expression for impact. Creativity Research Journal, 8, 377–390. [Google Scholar] [CrossRef]
  42. Runco, M. A. (2023a, 15–18 May). AI cannot be creative: Lifetime achievement award address. Annual Creativity Conference, Southern Oregon University, Ashland, Oregon. [Google Scholar]
  43. Runco, M. A. (2023b). AI can only produce artificial creativity. Journal of Creativity, 33, 100063. [Google Scholar] [CrossRef]
  44. Runco, M. A. (2024a). The authentic learning of humans is a creative process and very different from the artificially creative output of AI. Journal of Creative Behavior, 37, 1–5. [Google Scholar] [CrossRef]
  45. Runco, M. A. (2024b). The discovery and innovation of AI does not qualify as creativity. Journal of Cognitive Psychology, 1–10. [Google Scholar] [CrossRef]
  46. Runco, M. A. (2025). Updating the Standard Definition of Creativity to Account for the Artificial Creativity of AI. Creativity Research Journal. (Originally presented May 2023). [Google Scholar] [CrossRef]
  47. Runco, M. A., & Jaeger, G. (2012). The standard definition of creativity. Creativity Research Journal, 24, 92–96. [Google Scholar] [CrossRef]
  48. Runco, M. A., Johnson, D. J., & Bear, P. K. (1993). Parents’ and teachers’ implicit theories of children’s creativity. Child Study Journal, 23, 91–113. [Google Scholar]
  49. Sternberg, R. J. (1985). Implicit theories of intelligence, creativity, and wisdom. Journal of Personality and Social Psychology, 49(3), 607–627. [Google Scholar] [CrossRef]
  50. Sternberg, R. J. (2025). Developing creativity in psychological science and beyond. Education Sciences, 15, 201. [Google Scholar] [CrossRef]
  51. Snyder, T. (2025, March 17). The evil at your door. Substack. Available online: https://snyder.substack.com/p/the-evil-at-your-door?triedRedirect=true (accessed on 25 June 2025).
  52. Vinchon, F., Gironnay, V., & Lubart, T. (2024). GenAI creativity in narrative tasks: Exploring new forms of creativity. Journal of Intelligence, 12, 125. [Google Scholar] [CrossRef] [PubMed]
  53. Watkins, R., & Barak-Medina, E. (2024). AI’s influence on human creative agency. Creativity Research Journal, 1–13. [Google Scholar] [CrossRef]
  54. Wells, L., & Bednarz, T. (2021). Explainable AI and reinforcement learning—A systematic review of current approaches and trends. Frontiers in Artificial Intelligence, 4, 550030. [Google Scholar] [CrossRef]
  55. Westby, E. L., & Dawson, V. L. (1995). Creativity: Asset or burden in the classroom? Creativity Research Journal, 8(1), 1–10. [Google Scholar] [CrossRef]