
Self-Rated Originality as a Mediator That Connects Creative Activities and AI-Rated Originality in Divergent Thinking

Department of Educational Psychology, University of Georgia, Athens, GA 30602, USA
* Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(11), 1525; https://doi.org/10.3390/educsci15111525
Submission received: 13 August 2025 / Revised: 7 October 2025 / Accepted: 8 November 2025 / Published: 12 November 2025
(This article belongs to the Special Issue Creativity and Education)

Abstract

Engagement in everyday creative activities is widely considered a good way to develop creative thinking ability. Perhaps by engaging in such activities, creators learn the point at which their work could be acknowledged as novel and useful by peers or experts in the field. However, little is known about the underlying mechanisms of this developmental process. Our study therefore examined the mediating effect of self-rated originality on the relationship between everyday creative activities and AI-rated originality in divergent thinking. In our dataset, the indirect effect of everyday creative activities on AI-rated originality was significant, whereas the direct effect was not, indicating full mediation. These results suggest that engaging in everyday creative activities did not directly enhance AI-rated originality but instead contributed to it indirectly through students' generation of self-perceived original ideas. Our empirical findings open the door to a better understanding of how incorporating students' self-evaluations into creative education might facilitate the transition from creative activities to creative thinking, and eventually to domain-specific creative achievements.

1. Introduction

1.1. Paradox and Balance in Creativity

Creativity inherently depends on two potentially contradictory dimensions, namely originality and appropriateness, as recognized in the standard definition (Runco & Jaeger, 2012). That is, creativity simultaneously requires abilities that seem at odds with each other (Brillenburg Wurth, 2019; Kaufman, 2023; Stein, 1953), entailing an essential paradox (Steele et al., 2021). Therefore, creators must be able to balance novelty and effectiveness to be perceived as creative by others (Dumas et al., 2025). For instance, a clothing design that is very new and deviates completely from mainstream styles, yet appears nonsensical to others, is likely to be perceived as bizarre or improbable rather than creative or original. Conversely, if a designer produces an old-fashioned T-shirt similar to past designs, it will fail to stand out. However, when a design reaches the optimal point where these two opposing attributes coexist without greatly compromising either, it has a higher chance of being perceived as original or creative. What complicates this process is that, although creators generate an idea or a product with the intent of making a creative impact, how it will be valued or perceived by others is inherently unpredictable (Beghetto, 2021; Boden, 1995; Lopez-Persem et al., 2024).
A paradox lies in the fact that while generating an original idea or product, creators must, on the one hand, be autonomous, independent, and isolated from others (Beghetto & Kaufman, 2007, 2014); on the other hand, they will likely need to consider others’ perspectives (Glăveanu, 2015). Originality is subsequently assessed by a group of peers, experts, critics, or audiences, once the creation is complete. In the process of creation, others’ evaluation could act as a double-edged sword (Amabile et al., 1990; Hennessey, 2010). Creators seek the optimal point where their ideas are accepted by others as both novel and meaningful, moving beyond the boundaries of creativity shaped only by their own perspectives (Dietrich, 2015; Simonton, 2019). On the flip side, overemphasis on others’ judgments—or the pressure to consider others’ viewpoints—might discourage creators’ motivation to generate new or original ideas, leading them to stick with conventional ones, particularly when gatekeepers in the field are defensive and resistant to originality (Sosa & Gero, 2004). In other words, self-referenced originality—where creators consider their ideas as original by themselves, even though not acknowledged by others—might be suppressed by socially-referenced originality, in which originality is judged based on the established criteria of the field. Moreover, this paradox or tension becomes even clearer when external evaluations conflict with the creators’ self-perception or self-awareness (Amabile, 1979; Silvia & Phillips, 2004).

1.2. Tensions in Creativity Assessments

One way to explore the tensions between self-perception and social judgments of creativity or originality is to examine the relationship between self-ratings and others' ratings, which represent two foundational lines of inquiry in creativity assessment (Kagan & Dumas, 2025; Kaufman et al., 2016). Self-rating involves self-report measures of self-concepts/self-beliefs (e.g., creative self-efficacy, creative personal identity), everyday creative activities (e.g., participation in creative hobbies like writing a poem), and creative achievements (e.g., publishing a peer-reviewed paper), typically using a Likert scale. In relatively recent scholarly discourse, participants have been asked to rate their own performance while engaging in specific divergent thinking or other creative performance tasks (Pretz & McCollum, 2014; Silvia, 2008), yielding more accurate predictions of experts' ratings on the same tasks than global self-assessments of creativity (Kaufman et al., 2016). Others' ratings, on the other hand, tend to include teachers' or parents' perceptions of children's creativity, typically measured using Likert scales (Acar et al., 2024; Scherbakova et al., 2024), as well as experts' evaluations using the Consensual Assessment Technique (CAT; Amabile, 1982) and Artificial Intelligence (AI) ratings trained to mimic human judges' assessments of divergent thinking (Organisciak et al., 2023, 2024) and other creative performance tasks (DiStefano et al., 2023; Luchini et al., 2025).
The correlations between self- and others-rated originality or creativity have demonstrated a high degree of complexity and variability: linear, nonlinear, or no relationship. One fundamental line of research has shown a positive linear relationship between self- and others-rated originality, most often observed in studies relating self-reported everyday creative activities to others-rated divergent thinking or creative performance (e.g., Silvia et al., 2014), as well as self-reported creative self-concepts to creative performance (Batey et al., 2010; Karwowski et al., 2018; Park et al., 2002; Wang et al., 2025). However, there was an exceptional case reported in PISA 2022 (OECD, 2024a, 2024b), where creative performance showed a negative correlation with self-reported creative activities, both in school and outside school, which might suggest a potential curvilinear relationship between them (Kagan & Dumas, 2025). That is, either sporadic engagement in many different types of creative activities or spending too much unfocused time on creative activities could hinder the development of creativity (Barbot & Kaufman, 2025). Moreover, another line of studies revealed no relationship between self-reported creative self-beliefs and others-rated divergent thinking (Lee et al., 2002), or between self-reported creative self-concepts and expert-rated creative performance (Kaufman et al., 2010), implying that self-perceptions of creativity might differ from others' perspectives. A more specific way to examine how self-perception of creativity aligns with others' judgments is to ask participants to rate their own generated ideas and then see how accurately those ratings predict others' ratings of creativity (Kaufman et al., 2016; Silvia, 2008). In these studies, self-rated creativity showed a more consistent positive correlation with others-rated creativity than global self-report assessments did. However, young students tended to underestimate their creative impact, as reflected in their self-ratings (Kaufman et al., 2016), implying that the relationship between self- and others-rated originality might differ depending on additional factors such as age. On the other hand, narcissists and over-confident individuals tend to consistently overrate their creative abilities (Goncalo et al., 2010), demonstrating a discrepancy between their beliefs and their actual creative performance. For individuals who are either overconfident or underconfident, the correlation between self-ratings and others' ratings might not be as strong as the correlation observed in individuals with more accurate creative self-perceptions (potentially because of stronger creative metacognition; M. Urban & Urban, 2021). That is, the relationships among creativity assessments might be obscured by other individual characteristics or psychological attributes such as expertise, personality, culture, or creative metacognition. Nevertheless, despite the potential risks of inaccurate self-evaluation, without the self-confidence and autonomy developed through self-rating, individuals' outcomes might never be recognized according to the social standards of the field (Chiang et al., 2022). Creators must therefore navigate the tension between self-assurance and social standards (Silvia & Phillips, 2004), progressing along the 'slope of enlightenment' of the Dunning–Kruger curve (Dunning et al., 2003; Lebuda et al., 2024), where confidence becomes calibrated to actual ability as expertise develops.

1.3. Alignment Between Self- and Others-Rated Originality: The Role of Engaging in Everyday Creative Activities

Engagement in diverse everyday creative activities (e.g., writing a short story, making up a melody) may support the development of creators' metacognitive evaluation ability, which in turn allows creators to accurately determine at which point their creative products could be acknowledged by peers, experts, or other judges in their domain (Dumas & Kaufman, 2024; Dumas et al., 2025). In this way, engagement with creative activities may allow self-referenced 'mini-c' creativity (i.e., ideas that are new and meaningful to oneself but may not be recognized by others) to develop into increasingly impactful forms of creativity within a domain (Kaufman & Beghetto, 2009). The more frequently and deeply creators engage in everyday creative activities, the more opportunities they have to express their ideas, communicate with others, receive feedback, engage in self-judgment, and reflect on that feedback in their creative process (Glăveanu, 2015), which may strengthen their metacognitive skills (i.e., the cognitive skills of tracking and assessing one's thinking processes; Nelson & Narens, 1990, 1994; Schraw & Moshman, 1995).
With ample practice and engagement in creative activities, creators may be more likely to accurately calibrate their judgments of their own originality. Calibration refers to the degree to which individuals’ self-judgments of their capability or confidence correspond to their actual performance (Alexander, 2013; Lichtenstein et al., 1982). That is, as they become more calibrated, creators will likely develop a clearer sense of what might be seen as original by others and therefore continue adjusting their ideas to balance their own viewpoints with others’, in order to evoke a sense of surprise in those who judge their work (Ostermaier & Uhl, 2020; Sawyer, 2021; Simonton, 2018). Indeed, as creators build on their expertise and knowledge in the field, they tend to become more capable of aligning their self-ratings with others’ ratings by considering both self-referenced and socially-referenced originality (M. Urban & Urban, 2021), thereby becoming better at predicting what may be impactful within their field (Lebuda et al., 2025; Mutter & Hübner, 2024).
In this way, engagement in creative activities may help creators develop not only self-awareness of their creativity but also a goal or intention to be perceived as original by others (K. Urban & Urban, 2025). Without the self-judgment or self-evaluation step, then, the direct link between engaging in everyday creative activities and creative performance as judged by others might be weaker. Of course, creators who are less goal-oriented, are less willing to communicate openly with others, have low cognitive flexibility, resist change, or have limited metacognitive skills may struggle to develop the ability to align their self-ratings with others' ratings. These individual differences might lead to variability in the transition from self-referenced to socially-referenced originality.

1.4. Goal of the Current Study

This study examines how self-rated originality mediates the relationship between engagement in everyday creative activities and AI-rated originality in U.S. college students. Specifically, self-reports of everyday creative activities, together with self-rated and AI-rated originality scores, were used to examine the potential mediating effect of self-rated originality on the relationship between everyday creative activities and AI-rated originality.
In the present investigation, we addressed the following research questions:
Research Question 1: How and to what extent are self-reported everyday creative activities, self-rated originality, and AI-rated originality correlated?
Research Question 2: Does self-rated originality mediate the relationship between everyday creative activities and AI-rated originality?

2. Methodology

2.1. Participants

A total of 254 undergraduate students enrolled in Educational Psychology courses at a large public research university in the southern U.S. participated in this study in exchange for research credit as part of their course requirements. All participants were recruited via the Educational Psychology department's SONA system. Thirty-six participants who had not completed the DT tasks or the self-ratings of the DT tasks were excluded, leaving a final sample of 218 (Mage = 20.49, SD = 3.25). Of the final sample, 182 (83.5%) reported Native/Bilingual Proficiency in English, 24 (11.0%) Full Professional Proficiency, 10 (4.6%) Professional Proficiency, and 2 (0.9%) Limited Working Proficiency. Most participants identified as female (n = 184, 84.4%). There were 41 freshmen (18.8%), 80 sophomores (36.7%), 65 juniors (29.8%), 30 seniors (13.8%), and 2 master's students (0.9%). With respect to race and ethnicity, 166 (76.1%) were European American or White; 15 (6.9%) were Hispanic or Latino; 11 (5.0%) were African American or Black; 15 (6.9%) were Asian; 9 (4.1%) reported Multi-ethnic or Multi-racial identities; 1 (0.5%) reported a Native Hawaiian or Other Pacific Islander identity; and 1 (0.5%) chose not to answer.

2.2. Measures

2.2.1. Divergent Thinking Tasks (DT Tasks)

Participants completed two DT tasks, each requiring them to generate multiple original ideas in response to a given prompt. In each DT task, participants responded to 5 items, with a maximum of 2 min to complete each item (10 min per task in total). The order of the DT tasks was randomized across participants. The responses generated in both the AUT and rAUT were scored on originality, operationalized as the average originality across the five items, using AI scoring (Organisciak et al., 2024; see details below) and each participant's self-ratings (Silvia et al., 2017). Details about the measures and scoring methods are described below.
Alternative Uses Task (AUT)
The AUT is one of the most widely used and representative measures for assessing divergent thinking, a domain-general creative process (Guilford, 1967). For this study, participants were asked to generate as many unusual or surprising uses as they could think of for five objects (i.e., book, table, fork, hammer, and pants) within the allocated time (i.e., two minutes per item). The sequence of the five items was shuffled for each participant to minimize possible order and fatigue effects.
Reversed Alternative Uses Task (rAUT)
The rAUT is a newly developed domain-general divergent thinking measure that conceptually reverses the instructions of the AUT (Scherbakova et al., 2025). Instead of generating novel and unique uses for given objects (as in the AUT), participants were instructed to list as many unique and surprising objects as they could think of that could be used for each of the following five purposes: (a) an object that they could use for carrying supplies in; (b) an object that they could use to travel across campus; (c) an object that they could use to dig with; (d) an object that they could use to write with; and (e) an object that they could use as a weapon. These 5 items were also presented in a random order for each participant.

2.2.2. Self-Ratings of DT Responses

After generating their ideas for each item, participants were presented with their responses and asked to self-rate their originality. Each response appeared on the screen in the order it was written, and participants rated it on a 5-point Likert scale. The instructions read, “Rate each use/object you generated in terms of how original you think it is on the following scale”: 1 (Not original at all) to 5 (Extremely original). For both the AUT and rAUT, self-ratings of DT responses were averaged within each item and then averaged across the five items. Cronbach’s (1951) alpha, computed to check internal consistency, indicated good reliability (α = 0.90 for the AUT self-ratings and α = 0.89 for the rAUT self-ratings). The self-rated DT score was obtained by summing the self-rated AUT and self-rated rAUT originality scores (α = 0.93 across all ten AUT and rAUT items).
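To make the scoring procedure concrete, the following sketch (in Python; not the authors' actual code) reproduces the aggregation and reliability steps just described, assuming a long-format data frame of self-ratings. All variable and column names are illustrative assumptions.

```python
# A minimal sketch of the self-rating aggregation and reliability check
# described above. Assumes a long-format DataFrame `df` with columns
# 'participant', 'task' ('AUT' or 'rAUT'), 'item' (1-5), and 'self_rating'
# (the 1-5 Likert rating); all names are illustrative assumptions.
import pandas as pd


def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's (1951) alpha for a participants-by-items score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)


# Average the ratings of all responses within each item, giving one score
# per participant x task x item (columns become (task, item) pairs).
item_scores = (
    df.groupby(["participant", "task", "item"])["self_rating"]
      .mean()
      .unstack(["task", "item"])
)

# Average across the five items within each task, then sum the two tasks.
task_scores = item_scores.T.groupby(level="task").mean().T
self_rated_dt = task_scores["AUT"] + task_scores["rAUT"]

alpha_aut = cronbach_alpha(item_scores["AUT"])    # reported as 0.90
alpha_raut = cronbach_alpha(item_scores["rAUT"])  # reported as 0.89
alpha_all = cronbach_alpha(item_scores)           # reported as 0.93
```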

2.2.3. AI Scoring of DT Responses

DT responses were scored for originality by an open-access artificial intelligence (AI) platform (Organisciak et al., 2023, 2024), which has shown a high level of agreement with panels of trained human judges (Acar et al., 2024; Dumas et al., 2023). The system we utilized, OCSAI, is a fine-tuned deep neural network-based large language model (LLM) trained on DT responses assessed by human raters (Organisciak et al., 2023). The scoring platform employed in our study is publicly available at https://openscoring.streamlit.app (accessed on 15 May 2025). The AI-judged originality score for each response was averaged within each item and subsequently averaged across the five items, yielding a person-level score for the AUT and rAUT, respectively. Cronbach’s (1951) alpha was 0.85 for the full set of AUT items and 0.83 for the rAUT items, demonstrating strong reliability. The AI-rated DT score was obtained by summing the AI-rated AUT and rAUT originality scores (α = 0.89 across the AUT and rAUT).
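As an illustration, the sketch below shows how per-response AI scores could be aggregated into the person-level scores described above, assuming the platform's ratings have already been exported to a file; the file name and column names are hypothetical.

```python
# A minimal sketch (not the authors' code) of aggregating per-response AI
# originality ratings into person-level AUT, rAUT, and DT scores. The CSV
# export and its column names are hypothetical.
import pandas as pd

ai = pd.read_csv("ocsai_scored_responses.csv")  # hypothetical export file

# Average AI scores within each item, then across the five items per task.
person_task = (
    ai.groupby(["participant", "task", "item"])["ai_originality"].mean()
      .groupby(["participant", "task"]).mean()
      .unstack("task")
)

# AI-rated DT score: the sum of the AUT and rAUT person-level scores.
person_task["ai_rated_dt"] = person_task["AUT"] + person_task["rAUT"]
```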

2.2.4. Self-Reported Creative Activities

The Creative Activities scale of the Inventory of Creative Activities and Achievements (ICAA; Diedrich et al., 2018) was used to measure the students’ everyday creative activities across different domains. The Creative Activities scale has been validated particularly for assessing little-c creativity and was therefore appropriate for our purpose of assessing young adults’ engagement in everyday creative activities. The scale covers 8 domains: Literature, Music, Arts-and-Crafts, Creative Cooking, Sports, Visual Arts, Performing Arts (Theatre, Dance, Film), and Science and Engineering. Each domain comprises 6 items, and participants indicate how frequently they have carried out each activity over the past 10 years on a 5-point Likert scale (1 = never; 2 = 1–2 times; 3 = 3–5 times; 4 = 6–10 times; 5 = more than 10 times). An example activity in the Arts-and-Crafts domain is “Created an original decoration”. Two items in the Arts-and-Crafts domain were adapted because the original activities did not appear to sufficiently capture university students’ creative activities. For instance, the original activity “Designed a garden or landscape” was updated to “Designed a garden or landscape or set up houseplants”, because college students are unlikely to have access to an outdoor garden, but many have houseplants. Averaging across the six items yielded a creative activity score within each domain. Subsequently, the domain-general score was derived by averaging the domain-specific scores from the eight distinct domains. All of the subscales exhibited adequate internal consistency except for the Literature subscale: Literature (α = 0.67), Music (α = 0.86), Arts-and-Crafts (α = 0.84), Creative Cooking (α = 0.85), Sports (α = 0.87), Visual Arts (α = 0.78), Performing Arts (α = 0.71), and Science and Engineering (α = 0.82). As a general scale incorporating all 48 items, as used in the present study, strong reliability was reached (α = 0.94).
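For clarity, here is a minimal sketch of this two-step averaging, assuming a wide data frame of ICAA responses; domain labels and column names are illustrative assumptions.

```python
# A minimal sketch (not the authors' code) of the ICAA Creative Activities
# scoring described above, assuming a wide DataFrame `icaa` with one row per
# participant and columns '<domain>_1' ... '<domain>_6' holding the 1-5
# frequency ratings. Domain labels and column names are illustrative.
import pandas as pd

DOMAINS = [
    "literature", "music", "arts_and_crafts", "creative_cooking",
    "sports", "visual_arts", "performing_arts", "science_engineering",
]

# Average the six items within each domain ...
domain_scores = pd.DataFrame({
    d: icaa[[f"{d}_{i}" for i in range(1, 7)]].mean(axis=1) for d in DOMAINS
})

# ... then average the eight domain scores into a domain-general score.
creative_activities = domain_scores.mean(axis=1)
```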

2.3. Data Collection Procedures

The research was approved by the Institutional Review Board of the institution where the study took place. All tasks and surveys were administered remotely via the Qualtrics platform. Participants were asked to confirm that they were 18 years of age or older on an eligibility screening page. If they confirmed that they were eligible to participate, they were directed to the consent form. After providing informed consent, participants were asked to continue participation in a comfortable environment without distractions. The current study employed a subset of measures from a larger-scale survey that was part of a broader research project. These included a sociodemographic survey (e.g., age, gender, ethnicity/race, and English proficiency), two DT tasks (the AUT and rAUT), and a measure of everyday creative activities. In the DT tasks, participants were instructed to self-rate the originality of each response on a 5-point Likert scale immediately after completing each item.

2.4. Data Analysis Overview

Descriptive statistics and zero-order correlations were computed to explore the distributions of the variables and how, and to what extent, they were related. Then, the everyday creative activities score, together with the self-rated and AI-rated originality scores, was used to test whether self-rated originality mediates the effect of everyday creative activities on socially-referenced (AI-rated) originality.
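A minimal sketch of this preliminary step (not the authors' code), assuming a participant-level data frame with illustrative column names:

```python
# Assumes a DataFrame `scores` with one row per participant and the
# (illustrative) columns 'creative_activities', 'self_rated_dt', and
# 'ai_rated_dt'.
import pandas as pd

print(scores.describe())              # M, SD, min, max (cf. Table 1)
print(scores.skew(), scores.kurt())   # distribution shape (cf. Table 1)
print(scores.corr(method="pearson"))  # zero-order correlations (cf. Table 2)
```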

3. Results

3.1. Bivariate Correlations

Descriptive statistics and bivariate correlations are reported in Table 1 and Table 2. Pearson’s correlation analysis revealed that self-rated originality scores on the AUT and rAUT were positively and highly correlated (r = 0.80, p < 0.001), indicating that the two DT tasks are closely related yet still distinguishable, so aggregating their scores into a single self-rated DT originality score appeared justified. In parallel, AI-rated originality for the AUT and rAUT was positively and strongly associated (r = 0.63, p < 0.001), justifying the use of the sum of these two DT task scores as a single AI-rated originality score.
Notably, self-rated originality on the DT tasks showed a positive but weak linear relationship with AI-rated originality on the DT tasks (r = 0.21, p < 0.01). There was a small positive correlation between creative activities and self-rated originality on the DT tasks (r = 0.17, p < 0.05), while the correlation with AI-rated originality was similarly small but not statistically significant (r = 0.15, p = 0.257). These similar effect sizes suggest a consistent, though modest, relationship across both self- and AI-ratings, with the difference in statistical significance likely influenced by variability in measurement. With these findings in mind, we tested the potential mediation effect of self-rated originality, as detailed below.

3.2. Testing a Mediation Model

Rather than using the classic approach to testing mediation among observed variables developed by Baron and Kenny (1986), we opted for a more modern methodology: a bootstrapping procedure with 5000 replications in Mplus 8.10. A path analysis was carried out to test the mediation model with creative activities as the independent variable, AI-rated originality on the DT tasks as the outcome variable, and self-rated originality on the DT tasks as the mediator. Bootstrapping does not depend on normality assumptions and generally shows robust statistical power in detecting indirect effects, which is beneficial for testing mediation models that could potentially violate these assumptions (Hayes, 2009). Thus, a bootstrapping approach was well-suited for testing the mediation effect in our analysis.
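For readers who wish to reproduce the logic outside Mplus, the following sketch shows a percentile-bootstrap test of the indirect effect under the same design. It is an illustrative re-implementation, not the authors' Mplus code, and all variable names are assumptions.

```python
# A minimal percentile-bootstrap sketch of the indirect-effect test. Assumes
# standardized NumPy arrays `x` (creative activities), `m` (self-rated
# originality), and `y` (AI-rated originality); names are illustrative.
import numpy as np

rng = np.random.default_rng(0)


def mediation_paths(x, m, y):
    """OLS estimates of a (x -> m), b (m -> y given x), c' (x -> y given m)."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), x, m])
    coefs, *_ = np.linalg.lstsq(design, y, rcond=None)
    _, c_prime, b = coefs
    return a, b, c_prime


n, reps = len(x), 5000                    # 5000 replications, as in the study
indirect = np.empty(reps)
for i in range(reps):
    idx = rng.integers(0, n, n)           # resample participants w/ replacement
    a, b, _ = mediation_paths(x[idx], m[idx], y[idx])
    indirect[i] = a * b                   # indirect effect is the product a*b

lo, hi = np.percentile(indirect, [2.5, 97.5])
print(f"95% bootstrap CI for the indirect effect: [{lo:.3f}, {hi:.3f}]")
# A CI excluding 0 for the indirect effect, alongside a direct-effect CI that
# includes 0, is the pattern interpreted as full mediation.
```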
As the model was just-identified with zero degrees of freedom, it was expected to reproduce the observed variances and covariances exactly (Hayes, 2013). Therefore, model fit indices were uninformative and were not a focus of interpretation, in line with recommendations for path models using only observed variables (Kline, 2023). The total effect of creative activities on AI-rated originality was significant (β = 0.151, 95% CI [0.020, 0.275]), and the indirect effect of creative activities on AI-rated originality was also significant, albeit small (β = 0.033, 95% CI [0.006, 0.080]). However, the direct effect of creative activities on AI-rated originality was not statistically significant (β = 0.119, 95% CI [−0.012, 0.243]). Because the confidence interval (CI) for the direct effect included 0 while the CI for the indirect effect did not, self-rated originality appears to play a fully mediating role, although this interpretation should be approached with caution given the small size of the indirect effect (β = 0.033). Additional details are reported in Table 3 and Figure 1.

4. Discussion

4.1. Key Findings

4.1.1. Engaging in Creative Activities Did Not Directly Enhance AI-Rated Originality

Our mediation model showed no significant direct effect of creative activities on AI-rated originality. That is, while the coefficient indicated a small positive association (β = 0.119), this effect was not statistically significant. This pattern may contradict a well-established finding in creativity research, namely, the positive association between creative activities and others-rated/AI-rated originality (Diedrich et al., 2018; Ishiguro et al., 2022; Silvia et al., 2012; Weiss et al., 2025). Our finding suggests that, at least in this sample, greater engagement in everyday creative activities alone did not reliably predict higher AI-rated originality. Such a finding might be attributable to a measurement misalignment, whereby AI-rated originality on divergent thinking tasks captures domain-general original ideation that is not directly affected by the frequency or range of creative activities. For instance, because these tasks emphasize verbal ideational ability, they are likely more influenced by engagement in literary than in crafting activities. Taken together, the relationship between everyday creative activities and AI-rated originality in divergent thinking appeared to be indirect, motivating our investigation of the mediating role of self-rated originality in this association.

4.1.2. Self-Rated Originality Was a Mediator That Connected Everyday Creative Activities and AI-Rated Originality

Self-rated originality showed a positive, statistically significant, but relatively weak correlation with AI-rated originality on the divergent thinking tasks (r = 0.21, p < 0.01), similar to values found in prior research (Acar et al., 2024; Silvia et al., 2017). Moreover, in our mediation model, self-rated originality fully mediated the relationship between everyday creative activities and AI-rated originality (β = 0.033, 95% CI [0.006, 0.080]), though the indirect effect was small. These findings suggest that while engaging in a variety of creative activities did not appear to directly enhance socially-referenced originality for our participants, it contributed to AI-judged originality by allowing students to generate ideas that they themselves found original, which in turn were more likely to be judged as original by the AI trained to mimic human raters’ assessments of divergent thinking. Within our sample, the AI’s evaluation was linked to creative activities only through college students’ judgments of their own ideas. The perceived originality reflected in these self-ratings might be a useful predictor of how AI or human raters would later evaluate an idea. However, as the coefficient for this mediation pathway was small, the interpretation should remain tentative, and replication is needed. Overall, these empirical findings open the door to a better understanding of how teaching students to accurately evaluate their own ideas, within creative education, might help transform creative activities into domain-specific creative achievements. The findings of this paper therefore appear to correspond with recent arguments underscoring the importance of accurate self-judgment of originality or creativity as a core mechanism in higher actual creative performance (M. Urban & Urban, 2021) and suggesting that this capacity can be further developed through education and practice, including engagement in creative activities (Beghetto & Kaufman, 2007).

4.2. Limitations and Future Studies

As with other social science research, this study has measurement-related limitations. For example, originality was rated only on verbal divergent thinking tasks, rather than on figural divergent thinking tasks or other domain-specific creativity tasks, which may not sufficiently capture how creative activities develop into domain-specific creative performance. Future research could replicate the current mediation pathway model using domain-specific creative tasks, such as songwriting rated for creativity by the creators and by others, and specific creative activity scores, such as musical activities. Additionally, composite scores were used for originality in divergent thinking tasks as well as for engagement in everyday creative activities, potentially obscuring the distinct characteristics of the two divergent thinking tasks (i.e., AUT, rAUT) and the specific domains in creative activities (e.g., creative cooking, literature). Given that our research focus was on how general tendencies of engagement in creative activities indirectly influence AI-rated originality in divergent thinking through self-ratings, composite scores were thought to be more stable estimates of underlying constructs, rather than separate scores, which would have necessitated multiple mediation models to be fit to the data. Future work can extend our study to advanced SEM approaches (e.g., multilevel SEM), allowing for the investigation of task-specific or domain-specific mediation pathway analysis while maintaining measurement accuracy.
Another potential limitation is that our mediation pathway model did not account for individual characteristics or psychological attributes, such as age, expertise, and creative metacognitive ability. Depending on participants’ psychological attributes or characteristics, the mediation effect of self-ratings might become stronger or weaker (i.e., the mediation pathway might be further moderated by other variables). For instance, students with higher accuracy in their self-ratings on creativity tasks (perhaps because of higher levels of creative metacognition) might exhibit a stronger mediating effect. Accordingly, investigating the moderated mediating effects of self-rated originality on the pathway linking everyday creative activities to AI-rated originality could be a fruitful direction for future mediation pathway analysis studies, particularly with moderators that reflect individual differences (e.g., expertise).
Although our study revealed full mediation, the coefficient for the indirect effect of everyday creative activities on AI-rated originality was small, warranting careful interpretation of these results. This small effect size might be due to the sample size, the use of a composite score for measuring creative activities, or the AI ratings of originality. To clarify whether this mediation pathway holds across different contexts, future research could replicate the model using more diverse and larger samples to improve statistical power and confirm the consistency of these findings.

Author Contributions

Conceptualization, Y.K., and D.D.; methodology, Y.K.; software, Y.K.; validation, Y.K., and D.D.; formal analysis, Y.K.; investigation, Y.K.; resources, Y.K.; data curation, Y.K.; writing—original draft preparation, Y.K.; writing—review and editing, D.D.; visualization, Y.K.; supervision, D.D.; project administration, Y.K. and D.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of University of Georgia (protocol code PROJECT00009606 and date of approval 1 May 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data used in this article are available from the authors upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Acar, S., Dumas, D., Organisciak, P., & Berthiaume, K. (2024). Measuring original thinking in elementary school: Development and validation of a computational psychometric approach. Journal of Educational Psychology, 116(6), 953–981. [Google Scholar] [CrossRef]
  2. Alexander, P. A. (2013). Calibration: What is it and why it matters? An introduction to the special issue on calibrating calibration. Learning and Instruction, 24, 1–3. [Google Scholar] [CrossRef]
  3. Amabile, T. M. (1979). Effects of external evaluation on artistic creativity. Journal of Personality and Social Psychology, 37(2), 221. [Google Scholar] [CrossRef]
  4. Amabile, T. M. (1982). Social psychology of creativity: A consensual assessment technique. Journal of Personality and Social Psychology, 43(5), 997. [Google Scholar] [CrossRef]
  5. Amabile, T. M., Goldfarb, P., & Brackfield, S. C. (1990). Social influences on creativity: Evaluation, coaction, and surveillance. Creativity Research Journal, 3(1), 6–21. [Google Scholar] [CrossRef]
  6. Barbot, B., & Kaufman, J. C. (2025). PISA 2022 creative thinking assessment: Opportunities, challenges, and cautions. The Journal of Creative Behavior, 59(1), e70003. [Google Scholar] [CrossRef]
  7. Baron, R. M., & Kenny, D. A. (1986). The moderator–mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51(6), 1173. [Google Scholar] [CrossRef]
  8. Batey, M., Furnham, A., & Safiullina, X. (2010). Intelligence, general knowledge, and personality as predictors of creativity. Learning and Individual Differences, 20, 532–535. [Google Scholar] [CrossRef]
  9. Beghetto, R. A. (2021). There is no creativity without uncertainty: Dubito Ergo Creo. Journal of Creativity, 31, 100005. [Google Scholar] [CrossRef]
  10. Beghetto, R. A., & Kaufman, J. C. (2007). Toward a broader conception of creativity: A case for mini-c creativity. Psychology of Aesthetics, Creativity, and the Arts, 1, 73–79. [Google Scholar] [CrossRef]
  11. Beghetto, R. A., & Kaufman, J. C. (2014). Classroom contexts for creativity. High Ability Studies, 25, 53–69. [Google Scholar] [CrossRef]
  12. Boden, M. (1995). Creativity and unpredictability. Stanford Humanities Review, 4(2), 123–139. [Google Scholar]
  13. Brillenburg Wurth, K. (2019). The creativity paradox: An introductory essay. The Journal of Creative Behavior, 53(2), 127–132. [Google Scholar] [CrossRef]
  14. Chiang, H. L., Lien, Y. C., Lin, A. P., & Chuang, Y. T. (2022). How followership boosts creative performance as mediated by work autonomy and creative self-efficacy in higher education administrative jobs. Frontiers in Psychology, 13, 853311. [Google Scholar] [CrossRef]
  15. Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297–334. [Google Scholar] [CrossRef]
  16. Diedrich, J., Jauk, E., Silvia, P. J., Gredlein, J. M., Neubauer, A. C., & Benedek, M. (2018). Assessment of real-life creativity: The inventory of creative activities and achievements (ICAA). Psychology of Aesthetics, Creativity, and the Arts, 12(3), 304. [Google Scholar] [CrossRef]
  17. Dietrich, A. (2015). How creativity happens in the brain. Springer. [Google Scholar]
  18. DiStefano, P. V., Patterson, J. D., & Beaty, R. (2023). Automatic scoring of metaphor creativity with large language models. PsyArXiv. [Google Scholar] [CrossRef]
  19. Dumas, D., Acar, S., Berthiaume, K., Organisciak, P., Eby, D., Grajzel, K., Vlaamster, T., Newman, M., & Carrera, M. (2023). What makes children’s responses to creativity assessments difficult to judge reliably? The Journal of Creative Behavior, 57(3), 419–438. [Google Scholar] [CrossRef]
  20. Dumas, D., & Kaufman, J. C. (2024). Evaluation is creation: Self and social judgments of creativity across the Four-C model. Educational Psychology Review, 36(4), 107. [Google Scholar] [CrossRef]
  21. Dumas, D., Kim, Y., Carrera-Flores, M., Kagan, S., Acar, S., & Organisciak, P. (2025). Understanding elementary students’ creativity as a trade-off between originality and task appropriateness: A Pareto optimization study. Journal of Educational Psychology. Advance online publication. [Google Scholar] [CrossRef]
  22. Dunning, D., Johnson, K., Ehrlinger, J., & Kruger, J. (2003). Why people fail to recognize their own incompetence. Current Directions in Psychological Science, 12(3), 83–87. [Google Scholar] [CrossRef]
  23. Glăveanu, V. P. (2015). Creativity as a sociocultural act. The Journal of Creative Behavior, 49(3), 165–180. [Google Scholar] [CrossRef]
  24. Goncalo, J. A., Flynn, F. J., & Kim, S. H. (2010). Are two narcissists better than one? The link between narcissism, perceived creativity, and creative performance. Personality and Social Psychology Bulletin, 36(11), 1484–1495. [Google Scholar] [CrossRef]
  25. Guilford, J. P. (1967). The nature of human intelligence. McGraw-Hill. [Google Scholar]
  26. Hayes, A. F. (2009). Beyond Baron and Kenny: Statistical mediation analysis in the new millennium. Communication Monographs, 76(4), 408–420. [Google Scholar] [CrossRef]
  27. Hayes, A. F. (2013). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach. Guilford Publications. [Google Scholar]
  28. Hennessey, B. A. (2010). The creativity-motivation connection. In The Cambridge handbook of creativity (pp. 342–365). Cambridge University Press. [Google Scholar]
  29. Ishiguro, C., Sato, Y., Takahashi, A., Abe, Y., Kato, E., & Takagishi, H. (2022). Relationships among creativity indices: Creative potential, production, achievement, and beliefs about own creative personality. PLoS ONE, 17(9), e0273303. [Google Scholar] [CrossRef]
  30. Kagan, S., & Dumas, D. (2025). More creative activities, lower creative ability: Exploring an unexpected PISA finding. The Journal of Creative Behavior, 59(2), e70035. [Google Scholar] [CrossRef]
  31. Karwowski, M., Lebuda, I., & Wiśniewska, E. (2018). Measuring creative self-efficacy and creative personal identity. The International Journal of Creativity & Problem Solving, 28(1), 45–57. [Google Scholar]
  32. Kaufman, J. C. (2023). The creativity advantage. Cambridge University Press. [Google Scholar]
  33. Kaufman, J. C., & Beghetto, R. A. (2009). Beyond big and little: The four c model of creativity. Review of General Psychology, 13(1), 1–12. [Google Scholar] [CrossRef]
  34. Kaufman, J. C., Beghetto, R. A., & Watson, C. (2016). Creative metacognition and self-ratings of creative performance: A 4-C perspective. Learning and Individual Differences, 51, 394–399. [Google Scholar] [CrossRef]
  35. Kaufman, J. C., Evans, M. L., & Baer, J. (2010). The American idol effect: Are students good judges of their creativity across domains? Empirical Studies of the Arts, 28(1), 3–17. [Google Scholar] [CrossRef]
  36. Kline, R. B. (2023). Principles and practice of structural equation modeling. Guilford Publications. [Google Scholar]
  37. Lebuda, I., Hofer, G., & Benedek, M. (2025). Determinants of creative metacognitive monitoring: Creativity, personality, and task-related predictors of self-assessed ideas and creative performance. Metacognition and Learning, 20(1), 28. [Google Scholar] [CrossRef]
  38. Lebuda, I., Hofer, G., Rominger, C., & Benedek, M. (2024). No strong support for a Dunning–Kruger effect in creativity: Analyses of self-assessment in absolute and relative terms. Scientific Reports, 14(1), 11883. [Google Scholar] [CrossRef] [PubMed]
  39. Lee, J., Day, J. D., Meara, N. M., & Maxwell, S. E. (2002). Discrimination of social knowledge and its flexible application from creativity: A multitrait-multimethod approach. Personality and Individual Differences, 32, 913–928. [Google Scholar] [CrossRef]
  40. Lichtenstein, S., Fischhoff, B., & Phillips, L. D. (1982). Calibration of probabilities: The state of the art to 1980. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 306–334). Cambridge University Press. [Google Scholar]
  41. Lopez-Persem, A., Moreno-Rodriguez, S., Ovando-Tellez, M., Bieth, T., Guiet, S., Brochard, J., & Volle, E. (2024). How subjective idea valuation energizes and guides creative idea generation. American Psychologist, 79(3), 403–422. [Google Scholar] [CrossRef] [PubMed]
  42. Luchini, S. A., Maliakkal, N. T., DiStefano, P. V., Laverghetta, A., Jr., Patterson, J. D., Beaty, R. E., & Reiter-Palmon, R. (2025). Automated scoring of creative problem solving with large language models: A comparison of originality and quality ratings. Psychology of Aesthetics, Creativity, and the Arts. Advance online publication. [Google Scholar] [CrossRef]
  43. Mutter, Y., & Hübner, R. (2024). The effect of expertise on the creation and evaluation of visual compositions in terms of creativity and beauty. Scientific Reports, 14(1), 13675. [Google Scholar] [CrossRef]
  44. Nelson, T. O., & Narens, L. (1990). Metamemory: A theoretical framework and some new findings. In G. H. Bower (Ed.), The psychology of learning and motivation (Vol. 26, pp. 125–173). Academic Press. [Google Scholar]
  45. Nelson, T. O., & Narens, L. (1994). Why investigate metacognition? In J. Metcalfe, & A. P. Shimamura (Eds.), Metacognition: Knowing about knowing (pp. 1–25). MIT Press. [Google Scholar]
  46. OECD. (2024a). PISA 2022 results (Volume III): Creative minds, creative schools. PISA; OECD Publishing. [Google Scholar] [CrossRef]
  47. OECD. (2024b). PISA 2022 technical report. PISA; OECD Publishing. [Google Scholar] [CrossRef]
  48. Organisciak, P., Acar, S., Dumas, D., & Berthiaume, K. (2023). Beyond semantic distance: Automated scoring of divergent thinking greatly improves with large language models. Thinking Skills and Creativity, 49, 101356. [Google Scholar] [CrossRef]
  49. Organisciak, P., Dumas, D., Acar, S., & de Chantal, P. L. (2024). Open creativity scoring [computer software]. University of Denver. Available online: https://openscoring.du.edu (accessed on 15 May 2025).
  50. Ostermaier, A., & Uhl, M. (2020). Performance evaluation and creativity: Balancing originality and usefulness. Journal of Behavioral and Experimental Economics, 86, 101552. [Google Scholar] [CrossRef]
  51. Park, M., Lee, J., & Hahn, D. W. (2002, August 22–25). Self-reported creativity, creativity, and intelligence. Poster presented at the American Psychological Association, Chicago, IL, USA. [Google Scholar]
  52. Pretz, J. E., & McCollum, V. A. (2014). Self-perceptions of creativity do not always reflect actual creative performance. Psychology of Aesthetics, Creativity, and the Arts, 8(2), 227. [Google Scholar] [CrossRef]
  53. Runco, M. A., & Jaeger, G. J. (2012). The standard definition of creativity. Creativity Research Journal, 24(1), 92–96. [Google Scholar] [CrossRef]
  54. Sawyer, R. K. (2021). The iterative and improvisational nature of the creative process. Journal of Creativity, 31, 100002. [Google Scholar] [CrossRef]
  55. Scherbakova, A., Dumas, D., Acar, S., Berthiaume, K., & Organisciak, P. (2024). Performance and perception of creativity and academic achievement in elementary school students: A normal mixture modeling study. The Journal of Creative Behavior, 58(2), 245–261. [Google Scholar] [CrossRef]
  56. Scherbakova, A., Dumas, D., Kagan, S., Vlaamster, T., Acar, S., & Organisciak, P. (2025). Reversing the alternate uses task. Thinking Skills and Creativity, 2025, 101915. [Google Scholar] [CrossRef]
  57. Schraw, G., & Moshman, D. (1995). Metacognitive theories. Educational Psychology Review, 7, 351–371. [Google Scholar] [CrossRef]
  58. Silvia, P. J. (2008). Discernment and creativity: How well can people identify their most creative ideas? Psychology of Aesthetics, Creativity, and the Arts, 2, 139–146. [Google Scholar] [CrossRef]
  59. Silvia, P. J., Beaty, R. E., Nusbaum, E. C., Eddington, K. M., Levin-Aspenson, H., & Kwapil, T. R. (2014). Everyday creativity in daily life: An experience-sampling study of “little c” creativity. Psychology of Aesthetics, Creativity, and the Arts, 8(2), 183. [Google Scholar] [CrossRef]
  60. Silvia, P. J., Nusbaum, E. C., & Beaty, R. E. (2017). Old or new? Evaluating the old/new scoring method for divergent thinking tasks. The Journal of Creative Behavior, 51(3), 216–224. [Google Scholar] [CrossRef]
  61. Silvia, P. J., & Phillips, A. G. (2004). Self-awareness, self-evaluation, and creativity. Personality and Social Psychology Bulletin, 30(8), 1009–1017. [Google Scholar] [CrossRef]
  62. Silvia, P. J., Wigert, B., Reiter-Palmon, R., & Kaufman, J. C. (2012). Assessing creativity with self-report scales: A review and empirical evaluation. Psychology of Aesthetics, Creativity, and the Arts, 6(1), 19–34. [Google Scholar] [CrossRef]
  63. Simonton, D. K. (2018). Defining creativity: Don’t we also need to define what is not creative? The Journal of Creative Behavior, 52(1), 80–90. [Google Scholar] [CrossRef]
  64. Simonton, D. K. (2019). Creativity in sociocultural systems. In The Oxford handbook of group creativity and innovation. Oxford University Press. [Google Scholar]
  65. Sosa, R., & Gero, J. S. (2004). Diffusion of creative design: Gatekeeping effects. International Journal of Architectural Computing, 2(4), 517–531. [Google Scholar] [CrossRef]
  66. Steele, L. M., Hardy, J. H., III, Day, E. A., Watts, L. L., & Mumford, M. D. (2021). Navigating creative paradoxes: Exploration and exploitation effort drive novelty and usefulness. Psychology of Aesthetics, Creativity, and the Arts, 15(1), 149. [Google Scholar] [CrossRef]
  67. Stein, M. I. (1953). Creativity and culture. The Journal of Psychology, 36(2), 311–322. [Google Scholar] [CrossRef]
  68. Urban, K., & Urban, M. (2025). Metacognition and motivation in creativity: Examining the roles of self-efficacy and values as cues for metacognitive judgments. Metacognition and Learning, 20(1), 16. [Google Scholar] [CrossRef]
  69. Urban, M., & Urban, K. (2021). Unskilled but aware of it? Cluster analysis of creative metacognition from preschool age to early adulthood. The Journal of Creative Behavior, 55(4), 937–945. [Google Scholar] [CrossRef]
  70. Wang, Q., Xiao, H., Yin, H., Wei, J., Li, S., & Shi, B. (2025). The different relationships between mobile phone dependence and adolescents’ scientific and artistic creativity: Self-esteem and creative identity as mediators. The Journal of Creative Behavior, 59(2), e70024. [Google Scholar] [CrossRef]
  71. Weiss, S., Jaggy, A. K., & Goecke, B. (2025). Introducing the inventory of creative activities for young adolescents: An adaption and validation study. Thinking Skills and Creativity, 57, 101836. [Google Scholar] [CrossRef]
Figure 1. Full Mediation Model of Self-rated Originality on the Relationship between Creative Activities and AI-rated Originality. Note. Standardized path coefficients are shown with standard errors in parentheses. Solid lines indicate the significant paths (p < 0.05); the dashed line represents a non-significant direct path. * p < 0.05. ** p < 0.01.
Table 1. Descriptive Statistics for All Variables (N = 218).

| Variable | M | SD | Min | Max | Skewness | Kurtosis |
|---|---|---|---|---|---|---|
| 1. AUT Originality (AI-rated) | 2.60 | 0.38 | 1.40 | 3.31 | −1.00 | 0.69 |
| 2. rAUT Originality (AI-rated) | 2.31 | 0.31 | 1.38 | 3.17 | −0.60 | 0.32 |
| 3. DT Originality (AI-rated) | 4.91 | 0.63 | 2.99 | 6.03 | −0.87 | 0.44 |
| 4. AUT Originality (self-rated) | 2.48 | 0.74 | 1.00 | 5.00 | 0.46 | 0.29 |
| 5. rAUT Originality (self-rated) | 2.55 | 0.63 | 1.00 | 4.57 | 0.35 | 0.34 |
| 6. DT Originality (self-rated) | 5.03 | 1.29 | 4.97 | 4.98 | 0.48 | 0.51 |
| 7. Literature Activities | 14.97 | 4.56 | 0.00 | 30.00 | 0.06 | 0.31 |
| 8. Music Activities | 13.37 | 6.73 | 0.00 | 30.00 | 0.79 | 0.46 |
| 9. Arts-and-Crafts Activities | 20.58 | 6.23 | 0.00 | 30.00 | −0.42 | 0.42 |
| 10. Creative Cooking Activities | 16.21 | 6.63 | 0.00 | 30.00 | 0.41 | 0.45 |
| 11. Sports Activities | 10.74 | 5.91 | 0.00 | 30.00 | 1.36 | 0.40 |
| 12. Visual Arts Activities | 14.68 | 5.21 | 0.00 | 30.00 | 0.42 | 0.35 |
| 13. Performing Arts Activities | 11.18 | 4.57 | 0.00 | 27.00 | 0.79 | 0.31 |
| 14. Science and Engineering Activities | 11.90 | 5.29 | 0.00 | 28.00 | 0.93 | 0.36 |
| 15. Creative Activities (Total) | 19.03 | 19.03 | 0.00 | 35.83 | 0.55 | 0.35 |
Note. DT Originality = Sum of AUT and rAUT Originality scores.
Table 2. Bivariate Correlations among Self-rated Originality, AI-rated Originality, and Everyday Creative Activities.

| Variable | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|
| 1. Self-rated AUT | — | | | | | | |
| 2. Self-rated rAUT | 0.80 *** | — | | | | | |
| 3. Self-rated DT tasks | 0.35 *** | 0.32 ** | — | | | | |
| 4. AI-rated AUT | 0.20 ** | 0.17 * | 0.10 | — | | | |
| 5. AI-rated rAUT | 0.12 | 0.25 ** | 0.04 | 0.63 *** | — | | |
| 6. AI-rated DT tasks | 0.15 * | 0.10 | 0.21 ** | 0.52 *** | 0.40 *** | — | |
| 7. Creative Activities | −0.07 | −0.03 | 0.17 * | 0.12 | 0.06 | 0.15 | — |
Note. Self-rated AUT = self-rated Originality in AUT; Self-rated rAUT = self-rated Originality in rAUT; Self-rated DT tasks = self-rated Originality in AUT and rAUT; AI-rated AUT = AI-rated Originality in AUT; AI-rated rAUT = AI-rated Originality in rAUT; AI-rated DT tasks = AI-rated Originality in AUT and rAUT; Creative Activities = domain-general creative activity score. * p < 0.05. ** p < 0.01. *** p < 0.001.
Table 3. Mediation Analysis for Creative Activities, Self-rated Originality, and AI-rated Originality (N = 218).

| Effect | β | SE | LLCI | ULCI |
|---|---|---|---|---|
| Direct effects | | | | |
| Creative Activities → Self-rated Originality | 0.171 | 0.073 | 0.018 | 0.309 |
| Self-rated Originality → AI-rated Originality | 0.191 | 0.064 | 0.061 | 0.316 |
| Creative Activities → AI-rated Originality | 0.119 | 0.065 | −0.012 | 0.243 |
| Indirect effect | | | | |
| Creative Activities → Self-rated Originality → AI-rated Originality | 0.033 | 0.018 | 0.006 | 0.080 |
| Total effect | | | | |
| Creative Activities → AI-rated Originality | 0.151 | 0.065 | 0.020 | 0.275 |
Note. LLCI = lower limit of confidence interval; ULCI = upper limit of confidence interval. R2 for Self-rated Originality = 0.029; R2 for AI-rated Originality = 0.022.
