Article

The Digital Centaur as a Type of Technologically Augmented Human in the AI Era: Personal and Digital Predictors

by Galina U. Soldatova, Svetlana V. Chigarkova * and Svetlana N. Ilyukhina
Faculty of Psychology, Lomonosov Moscow State University, Moscow 119991, Russia
* Author to whom correspondence should be addressed.
Behav. Sci. 2025, 15(11), 1487; https://doi.org/10.3390/bs15111487
Submission received: 18 August 2025 / Revised: 24 October 2025 / Accepted: 28 October 2025 / Published: 31 October 2025
(This article belongs to the Section Social Psychology)

Abstract

Industry 4.0 is steadily advancing a reality of deepening integration between humans and technology, a phenomenon aptly described by the metaphor of the “technologically augmented human”. This study identifies the digital and personal factors that predict a preference for the “digital centaur” strategy among adolescents and young adults. This strategy is defined as a model of human–AI collaboration designed to enhance personal capabilities. A sample of 1841 participants aged 14–39 completed measures assessing digital centaur preference and identification, emotional intelligence (EI), mindfulness, digital competence, technology attitudes, and AI usage, as well as AI-induced emotions and fears. The results indicate that 27.3% of respondents currently identify as digital centaurs, with an additional 41.3% aspiring to adopt this identity within the next decade. This aspiration was most prevalent among 18- to 23-year-olds. Hierarchical regression showed that interpersonal and intrapersonal EI and mindfulness are personal predictors of the digital centaur preference, while digital competence, technophilia, technopessimism (inversely), and daily internet use emerged as significant digital predictors. Notably, intrapersonal EI and mindfulness became non-significant when technology attitudes were included. Digital centaurs predominantly used AI functionally and reported positive emotions (curiosity, pleasure, trust, gratitude) but expressed concerns about human misuse of AI. These findings position the digital centaur as an adaptive and preadaptive strategy for the technologically augmented human. This has direct implications for education, highlighting the need to foster balanced human–AI collaboration.

1. Introduction

Industry 4.0 is gradually confronting us with the reality of increasingly active integration between humans and technology. These changes can be described through the metaphor of the technologically augmented human. It illustrates how a person develops within a constantly evolving socio-cultural environment, where one of the main characteristics is the high speed of digital transformations. Computers, smartphones, the Internet of Things, and AI assistants together form an integrated technosystem that mediates human everyday life. These technologies act as complex cultural tools, serving as extensions and augmentations of people that ultimately become part of their very identity (G. Soldatova & Voiskounsky, 2021).
The study of socio-cultural digital artifacts as external extensions of the human being is carried out within various scientific fields. These disciplines can be viewed as different branches of externalist philosophy, such as the social externalism found in the cultural–historical approach (Vygotskij, 1982; Cole, 1996). Researchers operationalize the processes and phenomena under study within a similar phenomenology of “the extensions of man” (McLuhan, 1964); “extended mind” (Clark & Chalmers, 1998); “augmented human intellect” (Engelbart, 1962); “extended self” (Belk, 2016); and “augmented human” (Faiola et al., 2016). In this work, we draw on the socio-cognitive concept of digital socialization. Within the paradigm of cultural–historical psychology and building on the above approaches, this concept treats the technologically augmented individual as a key construct and the main outcome of digital socialization. In such an individual, cognitive, personal, and behavioral systems are fused with elements of the technosystem (G. Soldatova & Voiskounsky, 2021).
What are the current and future pathways for the technological augmentation of the human being and for our integration and fusion with technology? Drawing on both theoretical and empirical research, scholars have identified several possible trajectories for this development, which represent distinct types of the technologically augmented individual. These include digital donors, techno-conservatives, techno-isolationists, personoids, cyborgs, and digital centaurs (G. U. Soldatova et al., 2024). In this work, we focus on the digital centaur, operationalizing this type through its strategy of adaptation to a technology-saturated reality. Already being in use across certain domains of human activity, this metaphor speaks to the urgent need to enhance the quality of human intelligence. The founding figure of cyberpsychology—physicist, mathematician, and psychologist Joseph Licklider—wrote as early as 1960, in his seminal article Man–Computer Symbiosis: “The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today” (Licklider, 1960, p. 4). This juncture can be considered to have arrived in November 2022 with the public accessibility of ChatGPT.

1.1. Digital Centaurs

A digital centaur is an individual enhanced by digital technologies and artificial intelligence, capable of solving problems more effectively than either a human or AI alone (G. U. Soldatova et al., 2024). Their adaptation strategy is characterized by a conscious and purposeful integration of artificial intelligence as a tool for augmenting personal capabilities, optimizing activities, and enhancing productivity and comfort across various domains. The core of this adaptation lies in human-centered collaboration, in which the individual actively leverages AI while maintaining the integrity of the self and the dominance of the human element. The concept of enhancing human intellectual abilities predates the widespread adoption of AI. Early formulations, such as Engelbart’s (1962) concept of “Augmenting Human Intellect,” laid the groundwork for this idea, which later evolved alongside emerging technologies like the Internet of Things and Intelligence Augmentation (Thiele, 2021). Contemporary research on cognitive processes extended by digital tools rests primarily on the thesis of the extended mind (Clark & Chalmers, 1998) and is often situated within the broader trend known as cognitive offloading—that is, the process of “unloading” cognitive functions onto external tools (Pandey et al., 2023). The collaborative interplay between humans and artificial intelligence is described through a variety of terms, including human–robot teams (Wolf & Stock-Homburg, 2023), human–machine interaction (Ostheimer et al., 2021), hybrid intelligence (Akata et al., 2020; Dellermann et al., 2019), and collaborative intelligence (Schleiger et al., 2024). The term “centaur” is currently used in two key contexts. In cognitive science, it refers to computational models that can predict and simulate human behavior in experiments expressible in natural language (Binz et al., 2025). In a closely related vein, it also denotes a model of human–AI collaboration within philosophy and posthumanist ethics, known as Centaur Intelligence (Youvan, 2025).
The practical embodiment of the digital centaur concept began in 1998, when the first “advanced chess” match was played. Here, the competitors were not merely chess players but human–computer pairs. As a result, chess players’ level of mastery rose dramatically, and the very approach to the game itself was transformed. Today, the strongest chess players are no longer solely human but centaurs: hybrids of humans and machines, a true symbiosis of human and artificial intelligence. In 2016, a computer defeated a human in the Chinese strategy board game Go—an achievement once thought impossible due to the game’s complexity, which defies brute-force algorithmic analysis. Yet by 2021, two of Russia’s strongest Go players, operating according to the digital centaur strategy, faced the AI system Leela Zero and emerged victorious, demonstrating the power of human–machine collaboration.
By late 2022, the digital centaur strategy had moved beyond the closed circles of professional logic-game players as everyday users gained access to ChatGPT—an “intelligent” technological extension. ChatGPT broke the world record as the fastest-growing application in terms of downloads, reaching 100 million users within just two months of its launch. For comparison, TikTok required nine months to achieve the same milestone, while YouTube took four years and one month.
To what extent is the digital centaur strategy being realized across different domains? Most research to date remains experimental in nature, yet several general conclusions can already be drawn. The integration of AI into a range of tasks—such as creative writing, report preparation, email composition, programming, and call center consulting—has been shown to improve productivity. However, the most pronounced gains are typically observed among individuals who initially possess weaker skills (Noy & Zhang, 2023; Doshi & Hauser, 2024; Dell’Acqua et al., 2023; Peng et al., 2023). For example, in an experiment with 444 professionals, ChatGPT significantly enhanced writing productivity (reducing time by 0.8 SD and improving quality by 0.4 SD) while reducing performance inequality by disproportionately benefiting lower-ability workers (Noy & Zhang, 2023). Similar results were observed in a study of 758 consultants, where AI use improved performance by 43% among below-average performers, compared to a 17% increase for above-average performers (Dell’Acqua et al., 2023). Nevertheless, human–AI collaboration enables individuals and organizations alike to acquire deeper insights, accelerate innovation, and tackle complex problems with greater efficiency across diverse sectors. As these technologies evolve, emphasis is shifting toward fostering ethical, inclusive, and human-centered collaboration—one that strengthens, rather than diminishes, collaborative intelligence (Zhao et al., 2025).

1.2. Human Intelligence and Mindfulness

The ability of a digital centaur to harness artificial intelligence, integrate with it from a human-centered perspective, and counterbalance it when needed depends on a foundation of strong human intelligence. This encompasses the full spectrum of “cold” and “hot” forms, such as academic, social (Kihlstrom & Cantor, 2011), emotional (Mayer et al., 2004), practical (Sternberg & Hedlund, 2002), cultural (Earley & Ang, 2003), and personal (Mayer, 2008) intelligence. We argue that the principal meta-intellectual regulatory component capable of integrating the complex identity of the digital centaur may be personal intelligence. Personal intelligence enables the creation of viable strategies for identity formation, allowing individuals to both integrate and differentiate themselves. This capacity develops through cultivating individuality, agency, self-reflection, mindfulness, metacognition, and the ability to evaluate both other people and digital entities, sustaining an identity in which the human element remains dominant over the digital. In this interpretation, personal intelligence remains understudied, though research into emotional intelligence and mindfulness in the context of human–AI collaboration is advancing. While the technical and operational efficiencies of AI are widely discussed, the role of human EI in enhancing this collaboration remains underexplored. Research demonstrates the strategic importance of EI in fostering effective human–AI synergy and proposes that EI-driven human resource management practices can significantly improve organizational adaptability and performance (Huzooree et al., 2025; Gill & Mathur, 2024). Another study found that workplace mindfulness played a critical role in mitigating the negative impact on technology-related anxiety of job insecurity stemming from the integration of human–AI collaboration (Wu et al., 2024).
Accordingly, we hypothesize that a preference for the digital centaur as an adaptation strategy of the augmented individual will be determined by emotional intelligence (H1) and mindfulness (H2).

1.3. Digital Predictors: User Activity, Digital Competence, and Attitudes Toward Technology

The digital centaur is, by definition, intrinsically linked to technology use. In the digital realm, screen time and user activity serve as indicators of access to technological extensions and are important determinants of well-being (Vuorre & Przybylski, 2024). At the same time, hyperconnectivity has become the norm of digital everyday life. Hyperconnectivity is conceptualized as a quantitative aspect of modern life, characterized by an environment saturated with digital devices, high user activity, and maximal screen time. These factors, in turn, lead to qualitative changes in daily functioning (Brubaker, 2022; Otrel-Cass, 2019; G. Soldatova & Voiskounsky, 2021). It can be hypothesized that digital centaurs are likely to exhibit high levels of screen time, even to the point of hyperconnectivity (H3).
In the context of constant and rapid technological change, accompanied by rising technostress, attitudes toward technology may play a significant role in an individual’s resilience and successful adaptation to the modern world. The cognitive dimension of such attitudes is often positioned along two poles: technooptimism and technopessimism. Technooptimism reflects a worldview and life stance in which technological achievements and scientific–technical progress are assigned primary importance in addressing social problems (Königs, 2022; Ridley, 2010). Technopessimism, by contrast, reflects the belief that technological progress impedes societal well-being and that its harms outweigh its benefits (Königs, 2022; Postman, 1992). The emotional–behavioral dimension of attitudes toward technology is reflected in the phenomena of technophilia and technophobia. Technophobia is understood as an internal resistance that arises when people think or speak about new technology; it encompasses fear or anxiety related to its use, as well as hostile or aggressive attitudes toward it (Brosnan, 1998). Technophilia, conversely, denotes a positive disposition toward most technologies, enjoyment derived from using new ones, and a readiness to gain experience in their application (Amichai-Hamburger, 2009; Osiceanu, 2015). Researchers also identify technorationalism—the conscious and deliberate use of technology (G. U. Soldatova et al., 2021). We may hypothesize that digital centaurs are more likely to exhibit technophilia and technorationalism than technopessimism and technophobia (H4).
Another critical attribute of the digital centaur is digital competence—the ability to act effectively and safely in the digital environment, to use complex digital tools, and to critically evaluate the risks associated with them (DQ Institute, 2017; Ribble, 2015; G. U. Soldatova & Rasskazova, 2018; Vuorikari et al., 2022). Digital competence serves as a predictor of a preference for the digital centaur as an adaptation strategy of technologically augmented individuals (H5).

1.4. Relationship with Artificial Intelligence

In understanding the digital centaur, the individual’s relationship with their “intelligent extension” in the form of artificial intelligence is of particular significance. Based on existing research into human–AI interaction, three principal dimensions can be distinguished: AI usage practices (as a companion or as a tool), acceptance of AI across different domains, and fears and concerns associated with AI adoption.
Two areas of research have provided insights into the interaction between humans and AI: functional use and relational use. The first approach views technologies mainly as tools designed to carry out specific functions, emphasizing the necessity of their adoption and use to fulfill these practical purposes. The key theoretical frameworks within this perspective are the diffusion of innovations theory (Rogers, 1983), domestication theory (Silverstone & Haddon, 1996), and the more contemporary six-phase acceptance model for interactive technologies (De Graaf et al., 2018). In contrast, the second area posits that individuals perceive technologies as social entities, fostering personal connections with them. This perspective is largely informed by the computers as social actors (CASA) framework (Reeves & Nass, 1996), which has inspired a considerable body of research (Fox & Gambino, 2021). A longitudinal study examined both types of use (Xu & Li, 2022). The results showed that the two types of use influenced each other over time. Specifically, relational use enhanced future functional use, while self-disclosure strengthened relational use. Interestingly, functional use did not lead to increased relational use; rather, longitudinal mediation analysis indicated that it actually decreased relational use due to insufficient self-disclosure. An international study of AI use practices among students has shown that AI usage and positive AI attitudes significantly predict interest in AI, which, together with AI literacy, in turn enhances AI self-efficacy (Bewersdorff et al., 2024). The study also identified three groups of students: “AI Advocates,” “Cautious Critics,” and “Pragmatic Observers,” each exhibiting unique patterns of AI-related cognitive, affective, and behavioral traits.
The research field is increasingly enriched by studies demonstrating the ambivalence of attitudes toward AI: fears regarding artificial intelligence coexist with expectations of its benefits and its regular use (Dong et al., 2024; Hitsuwari & Takano, 2025; Li et al., 2025). A comprehensive meta-analysis reveals that AI acceptance is driven by a multifaceted framework encompassing both AI characteristics and individual user factors (Li et al., 2025). This dual-perspective approach, examining AI as both tool and agent, demonstrates that acceptance depends on multiple interdependent elements, including capability, role, anthropomorphism and context. A study of 554 Japanese people found a link between the frequency of AI use and acceptance of the technology (Hitsuwari & Takano, 2025). It may be hypothesized that digital centaurs are characterized by functional AI use, a positive emotional profile in interactions with AI, a high level of acceptance of AI in domains related to enhancing productivity and personal comfort, and fears primarily grounded in concerns over the unlawful or unethical use of AI by humans (H6).
As a type of technologically augmented individual, the digital centaur faces a number of specific risks. Given their deep integration with technology, digital centaurs continuously strive to unify their physical and digital dimensions. This ongoing process complicates the achievement of a stable identity and demands considerable self-reflective effort (G. U. Soldatova & Ilyukhina, 2025). A disruption of the balance between the human and technological elements within the digital centaur creates conditions for a new form of identity crisis, the consequences of which remain insufficiently understood. This underscores the importance of studying this strategy as one of the most adaptive in today’s hybrid online/offline reality.
The aim of this study is to identify the digital and personal predictors influencing adolescents’ and young adults’ preference for the digital centaur as a strategy of adaptation—one of the possible developmental trajectories of the technologically augmented individual in the era of artificial intelligence. These findings provide the first evidence for the personal and digital factors that shape young people’s preference for human–AI collaboration as their primary strategy. They also reveal the specific nature of their attitudes toward artificial intelligence within this context.

2. Materials and Methods

2.1. Participants

The study sample comprised 1,841 respondents: 649 adolescents aged 14–17 years (M = 16.3, SD = 0.9, 55.3% female), 817 youth aged 18–23 years (M = 19.8, SD = 1.7, 46.0% female), and 375 young adults aged 24–39 years (M = 31.0, SD = 5.2, 22.2% female). The sample comprised 17.3% secondary school students, 24.7% college students, 34% university students, and 24% employed individuals. In terms of financial status, 39.1% of participants described their income level as high, while 40.7% reported a moderate financial situation, noting that they could not afford to purchase property or a car. Additionally, 20.2% indicated that they were experiencing financial difficulties. Respondents were drawn from major urban centers across five regions of Russia, including Moscow (32.2%), Saint Petersburg (14.9%), Tyumen (14.7%), Rostov-on-Don (19.2%), and Makhachkala (19.1%).

2.2. Data Collection

Participants were recruited through an online survey conducted via Google Forms between autumn 2024 and winter 2025. The recruitment was carried out within a research network comprising universities, schools, and colleges. Prior to completing the questionnaire, all the participants were provided with information about the study and gave informed consent.

2.3. Materials and Procedure

2.3.1. Socio-Demographic Questionnaire

The methodological toolkit of the study included a socio-demographic questionnaire comprising questions on gender, age, place of residence, type of employment, level of education, and socio-economic status, operationalized through family income.

2.3.2. Digital Centaur

To assess participants’ preferences for interaction strategies with digital technologies and AI in the framework of the “digital centaur” concept, the vignette method was employed. A vignette is a description of a certain social situation or human experience that is assessed by a respondent based on a number of parameters (Agostini et al., 2024). Respondents were asked to read a vignette and answer questions. The vignette was presented in two versions, reflecting both female and male gender identities. The vignette included parameters of the digital centaur, such as technooptimism, the importance of technological literacy, and the use of technology and AI to achieve success, optimize activities, and enhance productivity and comfort across various domains: “Milana (Max) believes that success in the modern world is impossible without obtaining technological literacy. She (He) actively explores innovations in the digital sphere, not only to make her (his) life more convenient but also to achieve success across various domains. Artificial intelligence helps her (him) optimize her (his) professional activities, search for information, create content, and translate texts. Milana’s (Max’s) key advantage lies in her (his) ability to enhance her (his) capabilities through the use of technology. As a result, she (he) completes tasks more efficiently and with higher quality, which contributes to her (his) career advancement. She (He) also integrates digital tools into her (his) everyday life—from planning meals to managing a smart home—allowing her (him) to save time and devote more attention to her (his) personal interests and loved ones”.
The case was followed by these questions: “How much do you like this person?”, “How similar are you to this person?”, “How willing would you be to regularly communicate with someone who lives this kind of lifestyle?”, and “Would you like to be like this person in 10 years?” Responses were rated on a 5-point Likert scale ranging from 1 (“strongly disagree”) to 5 (“strongly agree”). The Digital Centaur Scale was constructed by calculating the mean score across the first three items (Cronbach’s α = 0.80, M = 3.30, SD = 0.96). The Digital Centaur Scale measures the overall preference for this adaptation strategy. In the analysis of the results, three indicators were used: (1) The Digital Centaur Scale, (2) Present Self-Identification (“How similar are you to this person?”), and (3) Future Self-Identification (“Would you like to be like this person in 10 years?”).
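The scale construction described above (a mean score across items, with internal consistency summarized by Cronbach’s α) can be sketched as follows. This is an illustrative Python example on simulated Likert responses, not the authors’ actual analysis code (the study used SPSS and Jamovi); all variable names are hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Simulated 5-point Likert responses to the first three vignette items:
# a shared latent tendency plus small item-level noise, clipped to 1..5
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(100, 1))
items = np.clip(base + rng.integers(-1, 2, size=(100, 3)), 1, 5).astype(float)

scale = items.mean(axis=1)       # Digital Centaur Scale score per respondent
alpha = cronbach_alpha(items)    # internal consistency of the three items
```

Because the simulated items share a common latent component, α comes out high, mirroring the α = 0.80 reported for the real scale.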

2.3.3. Trait Mindfulness and Emotional Intelligence

Respondents completed the Mindful Attention Awareness Scale (MAAS) (Carlson & Brown, 2005), using the Russian-language adaptation consisting of 15 items forming a single scale of mindfulness (Yumartova & Grishina, 2016).
Emotional intelligence was measured using the Brief Version of the Emotional Intelligence Test (Pankratova et al., 2022), which consists of 8 items, with two items assessing each of the following scales: Understanding of One’s Own Emotions, Management of One’s Own Emotions, Understanding of Others’ Emotions, and Management of Others’ Emotions. These four scales are further grouped into two subscales: Intrapersonal and Interpersonal Emotional Intelligence.

2.3.4. Internet Use, Attitudes Toward Technology, and Digital Competence

Participants were asked to self-assess their daily internet usage. This was carried out via the question, “On average, how much time do you spend on the internet per day?” The response scale ranged from “Up to 1 h” to “12 h or more,” with 1 h intervals.
Digital competence was assessed using the Screening Version of the Digital Competence Index, which includes 16 items comprising a single scale of digital competence. The scale evaluates knowledge, skills, motivation, and responsibility/safety across four domains: content-related activities, communication, consumption, and the technosphere (G. U. Soldatova & Rasskazova, 2018).
Attitudes toward technology were assessed with the Technology Attitudes Questionnaire, which includes 20 items distributed across four subscales: openness and enthusiasm for using technology (“Technophilia”), mindful and rational use of innovative technology (“Technorationalism”), difficulties in mastering and using technology (“Technophobia”), and a critical view of the social risks of technology (“Technopessimism”). The questionnaire was validated on a Russian-speaking sample (G. U. Soldatova et al., 2021). The validation study included 808 participants, of whom 448 were parents of adolescents aged 14–17 years (M = 40 years, SD = 5.9, 77% female) and 360 were adolescents aged 14–17 years (M = 15.4 years, SD = 1.07, 50% female). The questionnaire’s structure was refined based on the results of exploratory and confirmatory factor analyses (CFI = 0.90, RMSEA = 0.07, SRMR = 0.06), and all scales demonstrated good internal consistency (Cronbach’s α = 0.66–0.88). The results reported by the questionnaire’s authors indicate its reliability and support its use for research purposes. The scales also demonstrated good reliability in our sample, with all Cronbach’s alpha coefficients falling within an acceptable to good range (0.78–0.91).

2.3.5. AI Use and Attitudes Toward AI

As part of the study, a set of questions was developed to assess participants’ use of and attitudes toward artificial intelligence (AI).
Participants were asked to indicate how frequently they use AI-powered chatbots for various purposes using the question, “How often do you use AI chatbots for the following purposes?” The listed purposes included completing academic tasks, solving work-related problems, entertainment, gaming and leisure, obtaining information, functioning as a personal assistant, psychological support, social interaction, receiving relationship advice, and developing specific skills and abilities. Participants were asked to rate each item on a 5-point Likert scale ranging from 1—“never” to 5—“constantly”.
Participants were also asked about the emotions they experience in relation to AI with the question, “What do you usually feel toward AI-based chatbots when interacting with them?” The listed emotional responses included interest, irritation, pleasure, anxiety, gratitude, envy, trust, and a sense of inferiority. Each item was rated on a 5-point Likert scale, where 1 indicated “never feel this emotion,” and 5 indicated “constantly feel this emotion”.
To evaluate participants’ acceptance of AI in various societal roles, they were asked: “Artificial intelligence and robotics are rapidly evolving. Today, AI is already capable of performing complex tasks, monitoring the environment, expressing emotions, and engaging in conversations indistinguishable from human interaction. To what extent would you be willing to accept AI in the future in the following roles?” The listed roles included a work colleague, a friend, a school or university teacher, a household assistant, a psychologist, a romantic partner, a physician, a judge, a nanny, a city mayor, a president, and a police officer. Each role was evaluated on a 5-point Likert scale, from 1—“completely unwilling” to 5—“completely willing”.
To assess fears related to AI and the future of this technology, participants were asked, “To what extent are you concerned about the following?” The statements included: AI will lead to the disappearance of professions and transform the labor market; AI will compete with humans as friends and romantic partners; AI will be used for the benefit of certain individuals, groups, or organizations; AI will be used to commit various crimes; humans will make critical decisions based on AI and may be mistaken; AI will go out of human control and start governing people; AI will make humans lazy and prevent their development; AI will destroy humanity; people will begin to worship AI and turn it into a religion; AI will be used to enhance government control over citizens’ lives; and AI will outperform me in everything, making me feel worthless. Participants rated each concern on a 5-point Likert scale, from 1—“not concerned at all” to 5—“very concerned”.

2.4. Data Analysis

Data analysis was conducted using IBM SPSS Statistics 22.0 and Jamovi 2.4.8, employing Pearson correlation coefficients, ANOVA, hierarchical linear regression, and bootstrapped mediation analysis.
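Hierarchical (blockwise) regression enters predictor blocks in stages and evaluates the incremental variance explained (ΔR²) by each block. The following Python sketch illustrates the logic on simulated standardized data; it is not the authors’ code (the study used SPSS and Jamovi), and the block labels are only shorthand for the paper’s predictor sets.

```python
import numpy as np

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """R^2 of an ordinary least squares fit with an intercept."""
    Xc = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    resid = y - Xc @ beta
    return 1 - resid.var() / y.var()

# Simulated standardized predictors (hypothetical stand-ins)
rng = np.random.default_rng(1)
n = 500
personal = rng.standard_normal((n, 3))  # mindfulness, intra-/interpersonal EI
digital = rng.standard_normal((n, 3))   # technophilia, technopessimism, technorationalism
y = 0.3 * personal[:, 2] + 0.5 * digital[:, 0] + rng.standard_normal(n)

r2_step1 = r_squared(personal, y)                        # block 1: personal predictors
r2_step2 = r_squared(np.hstack([personal, digital]), y)  # block 2: + technology attitudes
delta_r2 = r2_step2 - r2_step1                           # incremental variance explained
```

Because the models are nested on the same data, R² can only stay equal or rise from step 1 to step 2; the test of interest is whether ΔR² is significant, as in the paper’s two-stage model (R² = 0.141 rising to 0.252).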

3. Results

3.1. Digital Centaurs: Age, Gender and Financial Status

Age group differences were found among adolescents aged 14–17 years, youth aged 18–23 years, and young adults aged 24–39 years on the Digital Centaur scale (F = 20.6, df = 2, p < 0.001), Present Self-Identification with Digital Centaur (F = 7.7, df = 2, p < 0.001), and Future Self-Identification with Digital Centaur (F = 14.2, df = 2, p < 0.001). Youth aged 18–23 years demonstrated the highest scores across all indicators. The results are presented in Figure 1. No significant gender differences were found.
Significant differences in financial status (high, moderate, low) were found on the Digital Centaur scale (F = 4.6, df = 2, p < 0.01) and Future Self-Identification with Digital Centaur (F = 7.6, df = 2, p < 0.001). However, no significant differences were observed for Present Self-Identification with the Digital Centaur. These results are presented in Figure 2.
To calculate the percentage distribution of Digital Centaurs based on the three indicators among adolescents, youth, and young adults, participants were categorized into three groups: High Digital Centaur Group (scores ≥ 3.6 and ≤ 5.0), Moderate Digital Centaur Group (scores ≥ 2.5 and ≤ 3.5), and Low Digital Centaur Group (scores ≥ 1.0 and ≤ 2.4). The grouping was based on a ±0.5 range around the scale’s midpoint of 3. At present, 27.3% of respondents identify themselves as Digital Centaurs, while 41.3% would like to become one in the next 10 years. The number of respondents who are uncertain or do not want to become Digital Centaurs declines over a 10-year time horizon. The results are presented in Figure 3.
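The cutoff scheme above can be expressed as a simple classifier. A Python sketch (the short group labels are shorthand for the paper’s group names; the example scores are hypothetical):

```python
def centaur_group(score: float) -> str:
    """Classify a Digital Centaur Scale score (mean of three 5-point items).

    Such means are multiples of 1/3, so the gaps between the published
    cutoffs (2.4-2.5 and 3.5-3.6) are never actually occupied.
    """
    if score >= 3.6:
        return "High"
    if score >= 2.5:
        return "Moderate"
    return "Low"

# Tally a handful of example scale scores
counts = {"High": 0, "Moderate": 0, "Low": 0}
for score in (1.0, 2.33, 2.67, 3.0, 3.67, 5.0):
    counts[centaur_group(score)] += 1
```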

3.2. Predictors of Digital Centaur Strategy Preference

The results presented in Table 1 indicate that the Digital Centaur Scale is associated with mindfulness, intrapersonal and interpersonal emotional intelligence, digital competence, daily internet usage, technophilia, and technorationalism; it is weakly associated with technopessimism and unrelated to technophobia.
The first stage of the hierarchical regression analysis showed that mindfulness, intrapersonal and interpersonal emotional intelligence, digital competence, and user activity were significant predictors of the Digital Centaur preference (R2 = 0.141, p < 0.001). When technophilia, technopessimism, and technorationalism were added at the second stage, mindfulness and intrapersonal emotional intelligence lost significance, and technorationalism was also non-significant (R2 = 0.252, p < 0.001) (Table 2).
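The two-step logic of this analysis can be sketched in code. The snippet below is a minimal illustration with synthetic data (the predictor blocks and effect sizes are invented stand-ins, not values from the study): it fits ordinary least squares for each block with NumPy and reports the R² gain when the technology-attitude block is added, mirroring the hierarchical procedure reported above.

```python
import numpy as np

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """R-squared of an OLS fit with intercept, via least squares."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(0)
n = 500
# Step 1 predictors: stand-ins for EI, mindfulness, digital competence, user activity
block1 = rng.normal(size=(n, 5))
# Step 2 predictors: stand-ins for technophilia, technopessimism, technorationalism
block2 = rng.normal(size=(n, 3))
y = block1 @ [0.2, 0.2, 0.1, 0.1, 0.1] + block2 @ [0.4, -0.2, 0.1] + rng.normal(size=n)

r2_step1 = r_squared(block1, y)
r2_step2 = r_squared(np.hstack([block1, block2]), y)
print(f"Step 1 R2 = {r2_step1:.3f}; Step 2 R2 = {r2_step2:.3f}; "
      f"delta R2 = {r2_step2 - r2_step1:.3f}")
```

In a full analysis the significance of the R² change would be tested with an F test; the sketch shows only the nested-model structure.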
To clarify the mediating role of technophilia in the preference for the Digital Centaur strategy, we performed a bootstrapped mediation analysis with 5000 resamples. The results indicate that technophilia is a significant mediator of the effects of both technorationalism and mindfulness on the Digital Centaur Scale: for these two predictors, only the indirect effects through technophilia were statistically significant, while their direct effects were not. In contrast, digital competence exhibited a dual influence on the Digital Centaur Scale, demonstrating both a significant direct effect and a significant indirect effect mediated through technophilia. The complete results and statistics are presented in Figure 4 and Table 3.
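The bootstrapped mediation procedure can likewise be sketched. The code below is a simplified, single-mediator illustration with synthetic data (the variable names and effect sizes are ours, for demonstration only): it resamples cases with replacement 5000 times and builds a percentile confidence interval for the indirect effect a·b, the logic underlying the analysis reported above.

```python
import numpy as np

def bootstrap_indirect(x, m, y, n_boot=5000, seed=1):
    """Percentile bootstrap CI for the indirect effect a*b in a simple
    x -> m -> y mediation: a is the x -> m slope, b is the m -> y slope
    controlling for x."""
    rng = np.random.default_rng(seed)
    n = len(x)
    effects = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                  # resample cases
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]                 # x -> m slope
        X = np.column_stack([np.ones(n), xs, ms])    # y ~ x + m
        b = np.linalg.lstsq(X, ys, rcond=None)[0][2]
        effects[i] = a * b
    return np.percentile(effects, [2.5, 97.5])

# Synthetic data with a genuine indirect path (e.g., mindfulness ->
# technophilia -> Digital Centaur preference)
rng = np.random.default_rng(0)
x = rng.normal(size=400)
m = 0.5 * x + rng.normal(size=400)   # mediator
y = 0.6 * m + rng.normal(size=400)   # outcome
lo, hi = bootstrap_indirect(x, m, y)
print(f"95% CI for the indirect effect: [{lo:.3f}, {hi:.3f}]")
```

A confidence interval that excludes zero indicates a significant indirect effect, which is the criterion applied to technorationalism and mindfulness above.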

3.3. Attitudes of Digital Centaurs Toward AI

The results presented in Table 4 show that Digital Centaurs primarily use AI for work-related and academic tasks, as well as for developing skills, but not for personal communication or leisure. Additional analysis was conducted on the frequency of AI use within the High Digital Centaur Group (based on the Digital Centaur Scale). In this group, 59.6% use AI frequently or constantly for academic tasks, 46.8% for work tasks, 26.9% for skill and ability development, and 31.6% for entertainment, gaming, and leisure.
Digital Centaurs predominantly experience positive emotions toward AI, including curiosity, pleasure from interaction, and gratitude toward the intelligent digital assistant, which they also tend to trust. At the same time, they are less likely to experience negative emotions such as anxiety, envy, or a sense of inferiority (Table 5).
Digital Centaurs are generally willing to accept AI primarily in professional roles, such as colleagues, teachers, household assistants, doctors, and judges, and even in the more intimate role of a friend. However, they are reluctant to see AI as a nanny for their child or as a romantic partner. They also express doubts about AI’s ability to hold high-level leadership positions, such as a mayor or president (Table 6).
Digital Centaurs are primarily concerned that humans might misuse AI technology, leading to new risks such as increased crime, job loss, enhanced governmental control, or even stagnation of human development due to overreliance on AI. They do not believe that AI can replace humans, especially in personal relationships, such as in the role of a friend or romantic partner. Additionally, Digital Centaurs generally do not fear that AI will surpass them in all areas (Table 7).

4. Discussion

The findings indicate that the digital centaur, conceptualized as an adaptive strategy for individuals in an increasingly complex digital world, is becoming more prevalent among young people. At present, more than a quarter of respondents identify as pronounced digital centaurs, while a larger share, nearly half, prefer this strategy of human–technology interaction and wish to adopt it in the future. The prevalence of the digital centaur strategy in our sample aligns with data on ChatGPT usage among Americans for work (28%), learning (26%), and entertainment (22%) (Pew Research Center, 2025a). The digital centaur is optimistic about technological progress, aware of the importance of technological literacy, and adept at using technology and AI to achieve success, optimize activities, and enhance productivity and comfort across various domains. This strategy is gradually securing an important place not only in the discourse of scholars, policymakers, and business leaders advancing human-centered human–AI collaboration (Gao et al., 2025; Schleiger et al., 2024) but also in the mindset of everyday users. This trend underscores both the readiness of a substantial segment of society for such transformations and the need for targeted efforts to foster the constructive development of this strategy, particularly among youth. According to our data, the most active practitioners and those most prepared to adopt the digital centaur strategy are young people aged 18–23, primarily university students. For children and adolescents, the primary developmental task is to master their own cognitive abilities, which should serve as the foundation for future effective collaboration with AI rather than be replaced by it, whereas young adults are already in a position to employ AI actively as digital centaurs.
Research confirms that this age group is indeed emerging as the most active users of AI at present: 46% of younger adults (18–29) report using AI weekly (GptZERO Team, 2025).
Intrapersonal and interpersonal emotional intelligence, along with mindfulness, emerge as predictors of a preference for the digital centaur strategy, supporting H1 and H2. One risk for digital centaurs lies in the disruption of the balance between the human and the technological, particularly in the context of interpersonal relationships, loss of control over intelligent extensions, and loss of integrity (G. U. Soldatova et al., 2024; G. U. Soldatova & Ilyukhina, 2025). It can be assumed that these personal characteristics are closely linked to self-regulation and may act as protective factors against such risks. One of the few empirical studies on this topic shows that greater workplace mindfulness mitigates the negative impact of job insecurity arising from human–AI collaboration on tech-learning anxiety and well-being among Chinese employees who work with AI daily (Wu et al., 2024). This is consistent with our finding that mindfulness promotes a balanced use of AI within the digital centaur strategy.
In most existing work, the focus is on how AI itself should exhibit traits associated with emotional intelligence to improve human–AI collaboration (Huzooree et al., 2025; Gill & Mathur, 2024). Our results, which reveal the role of user emotional intelligence as a predictor of preference for the digital centaur, are complemented by research on the design of AI systems themselves. As the systematic review by Kolomaznik et al. (2024) demonstrates, effective human–machine collaboration requires AI to exhibit qualities associated with high emotional intelligence: empathy, rapport building, and trustworthiness. Numerous empirical findings suggest that such “socially and emotionally competent” AI significantly enhances user adherence to recommendations (by 20%), reduces anxiety (by 22%), and strengthens trust (by 40%). It can therefore be concluded that successful implementation of the digital centaur strategy rests on a reciprocal process: it is determined not only by the individual’s personal characteristics but also by the AI agent’s ability to respond adequately to their emotional and social needs. The present findings contribute to the still limited body of research on the role of mindfulness and emotional intelligence in human–AI interaction.
User activity contributes to the preference for the digital centaur, confirming H3. In this way, digital centaurs fulfill their need for access to their digital extensions and, to some extent, ensure their own well-being, something that is becoming increasingly normative in the digital everyday life of society as a whole (Vuorre & Przybylski, 2024). These findings are consistent with international research indicating that frequent internet users are more likely than others to report feeling excited about the increasing use of AI in daily life (Pew Research Center, 2025b). At the same time, although the relationship with user activity is significant, it is relatively weak. This suggests that a digital centaur’s hyperconnectivity is defined less by actual screen time and more by the constant availability of digital tools and a highly saturated technological environment (Brubaker, 2022; Otrel-Cass, 2019; G. Soldatova & Voiskounsky, 2021).
Preference for the digital centaur strategy is primarily associated with technophilia and technorationalism and only weakly with technopessimism, confirming H4. This is supported by a meta-analysis confirming the significant role of attitudes toward technology in shaping behavioral choices (Kelly et al., 2023). The data reveal a complex picture of the digital centaur’s relationship with technology. The digital centaur maintains positive, open, and enjoyable engagement with technology, yet approaches it with mindfulness and functional purpose. This approach also involves acknowledging technology’s potential risks, limitations, and more pessimistic scenarios. It suggests that, at the current stage of human–machine symbiosis, this strategy of the technologically augmented individual is relatively balanced. Furthermore, mediation analysis revealed that technorationalism is significant only through its association with technophilia. Thus, a general positive attitude toward technology is the stronger predisposing factor for preferring the digital centaur strategy. These results correspond with the AI Device Use Acceptance Model, which demonstrates the role of hedonic motivation, that is, the perceived pleasure derived from using an AI device (Gursoy et al., 2019); the model indicates that hedonic motivation is positively related to AI performance expectancy. Digital competence serves as a key foundation for this balance and a significant predictor of the digital centaur preference (H5). It provides the motivation, knowledge, and skills necessary for the effective, responsible, and safe use of digital technologies across various domains. This is consistent with research showing a positive association between students’ digital competence and their use of AI (Joseph et al., 2024). Digital competence thus provides the basis for a technologically augmented human to self-regulate their extensions.
At the same time, a positive attitude toward technology, combined with higher levels of digital competence and user activity, appears to be more influential in determining the choice of the digital centaur strategy than mindfulness or intrapersonal emotional intelligence. At this stage, digital predictors play the more significant role, which sharpens the question of whether deliberate efforts are needed to preserve the human element in technologically augmented individuals.
The findings regarding relationships with AI confirm H6. For digital centaurs, AI use is functional rather than relational, centered on work and study, self-development, entertainment, and leisure, thus framing AI primarily as a tool (Rogers, 1983; Silverstone & Haddon, 1996; De Graaf et al., 2018). However, when emotions are analyzed, the Computers as Social Actors (CASA) framework (Reeves & Nass, 1996; Fox & Gambino, 2021) becomes equally relevant. Alongside emotions such as contentment (pleasure) and amusement (curiosity), the digital centaur also experiences connection emotions (gratitude, trust) that are typically associated with interpersonal relationships. As in other studies, this demonstrates that functional and relational use in fact complement each other (Xu & Li, 2022). A qualitative study of emotions toward AI in the workplace identified a broad emotional spectrum (Gkinko & Elbanna, 2022), but unlike those findings, digital centaurs tend not to exhibit frustration emotions such as envy, a sense of inferiority, or anxiety. The results highlight the specificity of AI-induced emotions among digital centaurs and extend current research, which has largely been conducted in the contexts of work and education (e.g., Gkinko & Elbanna, 2022; Shugurova, 2025; Xin & Derakhshan, 2024; Yang & Zhao, 2024).
As previous research has shown, AI acceptance and AI-related fears are complementary categories (Dong et al., 2024; Hitsuwari & Takano, 2025; Li et al., 2025). International data further reveal that public sentiment is often mixed; for instance, 42% of respondents report feeling equally concerned and excited about AI’s growing presence in daily life (Pew Research Center, 2025b). In our study, AI acceptance was assessed across various social roles. Digital centaurs are already prepared to accept AI in many professional roles—more often as a household assistant or workplace colleague, less often as a physician or teacher. However, two categories currently demonstrate low levels of acceptance. The first includes social roles that imply close personal relationships, such as a romantic partner or a nanny who would be a significant figure in a child’s life. The second encompasses domains associated with leadership and high-stakes global decision-making, where the consequences and cost of errors are extremely high, such as the roles of a city mayor or a country’s president. A large-scale international study on fears regarding AI in six professions (doctors, judges, managers, care workers, religious workers, and journalists) found results that varied substantially across cultures (Dong et al., 2024). For example, in Russia, AI was perceived as more acceptable among religious workers, journalists, care workers, and managers than among doctors or judges, a finding that partially aligns with our own data. Fears toward AI among digital centaurs are not strongly pronounced, which reflects their generally critical stance toward technology. Their concerns are focused less on AI itself and more on its potential for misuse by humans, such as for selfish or criminal purposes that restrict human freedom or suppress individuals. 
It can be assumed that individuals who prefer the digital centaur strategy exhibit a higher degree of AI acceptance and less pronounced fears than society at large.
The present study has several limitations that should be considered when interpreting the results and planning future research. First, the data collection relied exclusively on self-report measures, which may not fully capture actual behavioral patterns and the true prevalence of the digital centaur strategy. Furthermore, the methodology assessed the preference for the digital centaur as a strategy of the technologically augmented individual, rather than directly measuring the corresponding collaborative behavior with AI in real-world settings. To obtain a more objective picture, future studies should supplement questionnaires with behavioral data, such as log analysis of AI interactions or experimental methods.
Second, the study sample was drawn from urban centers in Russia, which limits the generalizability of the findings to rural populations who may have different levels of access to digital technologies, digital competence, and attitudes toward technology.
Third, the mindfulness assessment tool used (MAAS) operationalizes mindfulness primarily in terms of present-moment attention and awareness of internal and external experiences. This leaves out other important facets of mindfulness, such as acceptance or curiosity, which could also play a significant role in shaping a preference for the digital centaur strategy.
The phenomenon of the digital centaur is only beginning to be empirically studied in psychology, leaving room for a broad spectrum of future research. To gain a deeper understanding of the nature of digital centaur activity, it is advisable to employ mixed-methods research designs, combining quantitative surveys with qualitative interviews, focus groups, and experimental paradigms. This would help uncover unique patterns of human–AI collaboration and the meanings users ascribe to this partnership.
A particularly important direction is a theoretical and empirical investigation of personal intelligence as the core meta-intelligence of the technologically augmented human. It is crucial to conceptualize its structure and correlate its components with the successful implementation of the digital centaur strategy. Future research should aim to identify which specific facets of personal intelligence—such as self-reflection, agency, identity integration, metacognition, and the capacity for evaluating both humans and digital entities—are most critical for fostering a positive and adaptive variant of the digital centaur. Understanding this would make it possible to predict the risk of losing integrity and control over one’s digital augmentations and, conversely, to develop strategies for the constructive evolution of this type of technologically augmented individual, enhancing their adaptation and well-being in the AI era. Empirical validation of these components will be key for creating targeted educational interventions.

5. Conclusions

This study provides the first empirical identification of a set of digital and personal predictors for the preference for the digital centaur strategy among adolescents and young adults. This strategy represents one of the key developmental trajectories for the technologically augmented individual in the era of AI. The findings demonstrate the growing prevalence and appeal of the digital centaur strategy for a significant portion of young people, both today and to an even greater extent in the future. This trend reflects a desire among adolescents and young adults to engage in preadaptive behavior. Such behavior is partially realized in the present but is primarily perceived as essential for navigating the future, especially amid the rapid evolution of AI. In this sense, adopting the digital centaur strategy can itself be seen as a predictor of preadaptation.
The study’s results have direct practical implications, particularly for education. The identified preadaptive potential of the digital centaur strategy among young people highlights the need for its targeted cultivation within educational systems. Curricula should be designed not only to develop digital competence and technological literacy but also to deliberately foster meta-intellectual qualities such as self-reflection and metacognition. These qualities enable the self-management of complex digital augmentations, ensuring the integrity of the self and the dominance of the human element in human–machine collaboration.
In the context of human–AI collaboration, this work underscores the importance of considering not only technological but also psychological factors when designing effective symbiotic systems. Understanding what drives digital centaurs—functionality, positive technology attitudes (technophilia), and digital competence—informs the development of interfaces and AI solutions that augment human capabilities rather than replace them. Furthermore, the identified skepticism of digital centaurs towards using AI in high-stakes social roles (e.g., nanny, political leader) and their concerns about the unethical use of AI by humans point to the necessity of advancing AI ethics and digital citizenship, making these topics critical components of future educational programs aimed at fostering balanced and human-centered collaboration. An important implication is the need to foster interest in collaborative AI use among individuals with lower socio-economic status. As our data show, they are less engaged with this technology and risk being unable to reap its corresponding benefits and improve their well-being, potentially widening existing social disparities.

Author Contributions

Conceptualization, G.U.S. and S.V.C.; methodology, G.U.S.; validation, G.U.S. and S.V.C.; formal analysis, S.N.I.; investigation, G.U.S., S.V.C. and S.N.I.; resources, S.V.C.; data curation, S.N.I.; writing—original draft preparation, G.U.S., S.V.C. and S.N.I.; writing—review and editing, G.U.S. and S.V.C.; visualization, S.N.I.; supervision, G.U.S.; project administration, S.V.C.; funding acquisition, G.U.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Russian Science Foundation, grant number 23-18-00350, URL: https://rscf.ru/en/project/23-18-00350/ (accessed on 27 October 2025).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Bioethics Commission of Lomonosov Moscow State University (meeting number 164-d and date of approval 26 September 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The authors are grateful to all participants who helped in the collection of materials.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Agostini, E., Schratz, M., & Eloff, I. (2024). Vignette research. Bloomsbury Publishing Plc.
  2. Akata, Z., Balliet, D., De Rijke, M., Dignum, F., Dignum, V., Eiben, G., Fokkens, A., Grossi, D., Hindriks, K., Hoos, H., Hung, H., Jonker, C., Monz, C., Neerincx, M., Oliehoek, F., Prakken, H., Schlobach, S., Van Der Gaag, L., Van Harmelen, F., … Welling, M. (2020). A research agenda for hybrid intelligence: Augmenting human intellect with collaborative, adaptive, responsible, and explainable artificial intelligence. Computer, 53(8), 18–28.
  3. Amichai-Hamburger, Y. (2009). Technology and well-being: Designing the future. In Cambridge University Press eBooks (pp. 260–278). Cambridge University Press & Assessment.
  4. Belk, R. (2016). Extended self and the digital world. Current Opinion in Psychology, 10, 50–54.
  5. Bewersdorff, A., Hornberger, M., Nerdel, C., & Schiff, D. (2024). AI advocates and cautious critics: How AI attitudes, AI interest, use of AI, and AI literacy build university students’ AI self-efficacy. Computers and Education: Artificial Intelligence, 8(1), 100340.
  6. Binz, M., Akata, E., Bethge, M., Brändle, F., Callaway, F., Coda-Forno, J., Dayan, P., Demircan, C., Eckstein, M. K., Éltető, N., Griffiths, T. L., Haridi, S., Jagadish, A. K., Ji-An, L., Kipnis, A., Kumar, S., Ludwig, T., Mathony, M., Mattar, M., … Schulz, E. (2025). A foundation model to predict and capture human cognition. Nature, 644, 1002–1009.
  7. Brosnan, M. J. (1998). Technophobia: The psychological impact of information technology. Routledge.
  8. Brubaker, R. (2022). Hyperconnectivity and its discontents. Polity.
  9. Carlson, L. E., & Brown, K. W. (2005). Validation of the Mindful Attention Awareness Scale in a cancer population. Journal of Psychosomatic Research, 58(1), 29–33.
  10. Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7–19.
  11. Cole, M. (1996). Cultural psychology: A once and future discipline. Harvard University Press.
  12. De Graaf, M. M., Allouch, S. B., & Van Dijk, J. A. (2018). A phased framework for long-term user acceptance of interactive technology in domestic environments. New Media & Society, 20, 2582–2603.
  13. Dell’Acqua, F., McFowland, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., Krayer, L., Candelon, F., & Lakhani, K. R. (2023). Navigating the jagged technological frontier: Field experimental evidence of the effects of AI on knowledge worker productivity and quality. Harvard Business School Technology & Operations Mgt. Unit Working Paper No. 24-013. Available online: https://ssrn.com/abstract=4573321 (accessed on 27 October 2025).
  14. Dellermann, D., Ebel, P., Söllner, M., & Leimeister, J. M. (2019). Hybrid intelligence. Business & Information Systems Engineering, 61, 637–643.
  15. Dong, M., Conway, J. R., Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2024). Fears about artificial intelligence across 20 countries and six domains of application. American Psychologist. Online ahead of print.
  16. Doshi, A. R., & Hauser, O. P. (2024). Generative AI enhances individual creativity but reduces the collective diversity of novel content. Science Advances, 10, eadn5290.
  17. DQ Institute. (2017). Digital intelligence. A conceptual framework & methodology for teaching and measuring digital citizenship. Available online: https://www.dqinstitute.org/wp-content/uploads/2017/08/DQ-Framework-White-Paper-Ver1-31Aug17.pdf (accessed on 29 June 2025).
  18. Earley, P. C., & Ang, S. (2003). Cultural intelligence: Individual interactions across cultures. Stanford University Press.
  19. Engelbart, D. C. (1962). Augmenting human intellect: A conceptual framework. Summary Report, Contract AF 49–1024 21. DTIC.
  20. Faiola, A., Voiskounsky, A. E., & Bogacheva, N. V. (2016). Augmented human beings: Developing cyberconsciousness. Voprosy Filosofii, 3, 147–162. (In Russian).
  21. Fox, J., & Gambino, A. (2021). Relationship development with humanoid social robots: Applying interpersonal theories to human–robot interaction. Cyberpsychology, Behavior, and Social Networking, 24, 294–299.
  22. Gao, Q., Xu, W., Pan, H., Shen, M., & Gao, Z. (2025). Human-centered human-AI collaboration (HCHAC). arXiv.
  23. Gill, A., & Mathur, A. (2024). Emotional intelligence in the age of AI. In Advances in business information systems and analytics book series (pp. 263–285). Business Science Reference.
  24. Gkinko, L., & Elbanna, A. (2022). Hope, tolerance and empathy: Employees’ emotions when using an AI-enabled chatbot in a digitalised workplace. Information Technology and People, 35, 1714–1743.
  25. GptZERO Team. (2025). How many people use AI in 2025? Available online: https://gptzero.me/news/how-many-people-use-ai/ (accessed on 29 July 2025).
  26. Gursoy, D., Chi, O. H., Lu, L., & Nunkoo, R. (2019). Consumers acceptance of artificially intelligent (AI) device use in service delivery. International Journal of Information Management, 49, 157–169.
  27. Hitsuwari, J., & Takano, R. (2025). Associating attitudes towards AI and ambiguity: The distinction of acceptance and fear of AI. Acta Psychologica, 26, 105581.
  28. Huzooree, G., Yadav, M., & Dewasiri, N. J. (2025). AI and emotional intelligence in project management. In Advances in computational intelligence and robotics book series (pp. 111–130). Engineering Science Reference.
  29. Joseph, G. V., Athira, P., Thomas, M. A., Jose, D., Roy, T. V., & Prasad, M. (2024). Impact of digital literacy, use of AI tools and peer collaboration on AI assisted learning-perceptions of the university students. Digital Education Review, 45, 43–49.
  30. Kelly, S., Kaye, S., & Oviedo-Trespalacios, O. (2023). What factors contribute to the acceptance of artificial intelligence? A systematic review. Telematics and Informatics, 77, 101925.
  31. Kihlstrom, J. F., & Cantor, N. (2011). Social intelligence (pp. 564–581). Cambridge University Press eBooks.
  32. Kolomaznik, M., Petrik, V., Slama, M., & Jurik, V. (2024). The role of socio-emotional attributes in enhancing human-AI collaboration. Frontiers in Psychology, 15, 1369957.
  33. Königs, P. (2022). What is techno-optimism? Philosophy & Technology, 35, 63.
  34. Li, B., Lai, E. Y., & Wang, X. (2025). EXPRESS: From tools to agents: Meta-analytic insights into human acceptance of AI. Journal of Marketing.
  35. Licklider, J. C. R. (1960). Man-computer symbiosis. IRE Transactions on Human Factors in Electronics, HFE-1, 4–11.
  36. Mayer, J. D. (2008). Personal intelligence. Imagination, Cognition and Personality, 27, 209–232.
  37. Mayer, J. D., Salovey, P., & Caruso, D. R. (2004). Emotional intelligence: Theory, findings, and implications. Psychological Inquiry, 15, 197–215.
  38. McLuhan, M. (1964). The medium is the message. In Understanding media: The extensions of man. The MIT Press.
  39. Noy, S., & Zhang, W. (2023). Experimental evidence on the productivity effects of generative artificial intelligence. Science, 381(6654), 187–192.
  40. Osiceanu, M.-E. (2015). Psychological implications of modern technologies: “Technofobia” versus “technophilia”. Procedia—Social and Behavioral Sciences, 180, 1137–1144.
  41. Ostheimer, J., Chowdhury, S., & Iqbal, S. (2021). An alliance of humans and machines for machine learning: Hybrid intelligent systems and their design principles. Technology in Society, 66, 101647.
  42. Otrel-Cass, K. (2019). Hyperconnectivity and digital reality: Towards the eutopia of being human. Springer.
  43. Pandey, N. P., Singh, N. T., & Kumar, N. S. (2023). Cognitive offloading: Systematic review of a decade. International Journal of Indian Psychology, 11, 1545–1563.
  44. Pankratova, A. A., Kornienko, D. S., & Lyusin, D. V. (2022). Testing the short version of the EmIn questionnaire. Psychology. Journal of the Higher School of Economics, 19, 822–834. (In Russian).
  45. Peng, S., Kalliamvakou, E., Cihon, P., & Demirer, M. (2023). The impact of AI on developer productivity: Evidence from GitHub Copilot. arXiv.
  46. Pew Research Center. (2025a). 34% of U.S. adults have used ChatGPT, about double the share in 2023. Available online: https://www.pewresearch.org/short-reads/2025/06/25/34-of-us-adults-have-used-chatgpt-about-double-the-share-in-2023/ (accessed on 20 October 2025).
  47. Pew Research Center. (2025b). How people around the world view AI. Available online: https://www.pewresearch.org/global/2025/10/15/how-people-around-the-world-view-ai/ (accessed on 20 October 2025).
  48. Postman, N. (1992). Technopoly: The surrender of culture to technology. Knopf.
  49. Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, television, and new media like real people. Cambridge University Press.
  50. Ribble, M. (2015). Digital citizenship in schools: Nine elements all students should know (3rd ed.). International Society for Technology in Education.
  51. Ridley, M. (2010). The rational optimist: How prosperity evolves. Harper Collins.
  52. Rogers, E. M. (1983). Diffusion of innovations (3rd ed. rev.). Free Press; Collier Macmillan.
  53. Schleiger, E., Mason, C., Naughtin, C., Reeson, A., & Paris, C. (2024). Collaborative intelligence: A scoping review of current applications. Applied Artificial Intelligence, 38, 2327890.
  54. Shugurova, O. (2025). “It’s thrilling and liberating, but I worry”: Explorations of AI-induced emotions from a dialogical perspective. Culture & Psychology. Online ahead of print.
  55. Silverstone, R., & Haddon, L. (1996). Design and the domestication of information and communication technologies: Technical change and everyday life (pp. 44–74). Communication by Design.
  56. Soldatova, G., & Voiskounsky, A. (2021). Socio-cognitive concept of digital socialization: A new ecosystem and social evolution of the mind. Psychology. Journal of the Higher School of Economics, 18(3), 431–450. (In Russian).
  57. Soldatova, G. U., Chigarkova, S. V., & Ilyukhina, S. N. (2024). Metamorphosis of the identity of the human completed: From digital donor to digital centaur. Social Psychology and Society, 15, 40–57. (In Russian).
  58. Soldatova, G. U., & Ilyukhina, S. N. (2025). Digital extended man looking for his wholeness. Cultural-Historical Psychology, 21, 13–23.
  59. Soldatova, G. U., Nestik, T. A., Rasskazova, E. I., & Dorokhov, E. A. (2021). Psychodiagnostics of technophobia and technophilia: Testing a questionnaire of attitudes towards technology for adolescents and parents. Social Psychology and Society, 12(4), 170–188. (In Russian).
  60. Soldatova, G. U., & Rasskazova, E. I. (2018). Brief and screening versions of the digital competence index: Verification and application possibilities. National Psychological Journal, 31(3), 47–56. (In Russian).
  61. Sternberg, R. J., & Hedlund, J. (2002). Practical intelligence, g, and work psychology (pp. 143–160). Psychology Press eBooks.
  62. Thiele, L. P. (2021). Rise of the centaurs: The internet of things intelligence augmentation. In Towards an international political economy of artificial intelligence (pp. 39–61). Springer.
  63. Vuorikari, R., Kluzer, S., & Punie, Y. (2022). DigComp 2.2: The digital competence framework for citizens—With new examples of knowledge, skills and attitudes. Publications Office of the European Union.
  64. Vuorre, M., & Przybylski, A. K. (2024). A multiverse analysis of the associations between internet use and well-being. Technology Mind and Behavior, 5(2), 1–11. [Google Scholar] [CrossRef]
  65. Vygotskij, L. S. (1982). The instrumental method in psychology. In L. S. Vygotsky, Complete works (Vol. 1, pp. 103–108). Pedagogika. (In Russian) [Google Scholar]
  66. Wolf, F. D., & Stock-Homburg, R. M. (2023). How and when can robots be team members? Three decades of research on human–robot teams. Group & Organization Management, 48, 1666–1744. [Google Scholar] [CrossRef]
  67. Wu, T.-J., Liang, Y., & Wang, Y. (2024). The buffering role of workplace mindfulness: How job insecurity of human-artificial intelligence collaboration impacts employees’ work–life-related outcomes. Journal of Business and Psychology, 39, 1395–1411. [Google Scholar] [CrossRef]
  68. Xin, Z., & Derakhshan, A. (2024). From excitement to anxiety: Exploring English as a foreign language learners’ emotional experiences in the artificial intelligence-powered classrooms. European Journal of Education, 60, e12845. [Google Scholar] [CrossRef]
  69. Xu, S., & Li, W. (2022). A tool or a social being? A dynamic longitudinal investigation of functional use and relational use of AI voice assistants. New Media & Society, 26, 3912–3930. [Google Scholar] [CrossRef]
  70. Yang, L., & Zhao, S. (2024). AI-induced emotions in L2 education: Exploring EFL students’ perceived emotions and regulation strategies. Computers in Human Behavior, 159, 108337. [Google Scholar] [CrossRef]
  71. Youvan, D. C. (2025). The centaur compact: Reclaiming human-AI collaboration in an age of division and control. Preprint, 2025, 1–18. [Google Scholar] [CrossRef]
  72. Yumartova, N. M., & Grishina, N. V. (2016). Mindfulness: Psychological characteristics and adaptation of measurement tools. Psychological Journal, 37, 105–115. (In Russian). [Google Scholar]
  73. Zhao, J., Kumar, V. V., Katina, P. F., & Richards, J. (2025). Humans and generative ai tools for collaborative intelligence. IGI Global. [Google Scholar] [CrossRef]
Figure 1. Differences in mean scores across three measures: the Digital Centaur Scale, Present Self-Identification with the Digital Centaur, and Future Self-Identification with the Digital Centaur among three age groups: adolescents aged 14–17 years, youth aged 18–23 years, and young adults aged 24–39 years.
Figure 2. Differences in mean scores across three measures: the Digital Centaur Scale, Present Self-Identification with the Digital Centaur, and Future Self-Identification with the Digital Centaur among three groups: high, moderate and low financial status.
Figure 3. Percentage of participants in the total sample across the Digital Centaur Scale, Present Self-Identification with the Digital Centaur, and Future Self-Identification with the Digital Centaur groups, divided into High Group (scores ≥ 3.6 and ≤5.0), Moderate Group (scores ≥ 2.5 and ≤3.5), and Low Group (scores ≥ 1.0 and ≤2.4).
Figure 4. Model illustrating the direct and indirect effects of Technophilia, based on a bootstrapped mediation analysis with 5000 resamples. The figure displays the coefficients β and their p-values.
Table 1. Pearson correlations between the Digital Centaur Scale and digital/personal indicators.
| Scale | The Digital Centaur Scale (r) |
|---|---|
| Mindfulness | 0.225 ** |
| Interpersonal Emotional Intelligence | 0.167 ** |
| Intrapersonal Emotional Intelligence | 0.212 ** |
| Digital competence | 0.259 ** |
| Technopessimism | 0.119 ** |
| Technophilia | 0.450 ** |
| Technorationalism | 0.387 ** |
| Technophobia | 0.030 |
| Daily Internet Usage | 0.134 ** |

Note: ** p < 0.001.
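The coefficients in Table 1 are ordinary Pearson product-moment correlations. As a minimal, dependency-free sketch of the computation (the scores below are made up for illustration and do not come from the study's data):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical 1-5 scale scores for six respondents (illustration only).
technophilia = [2.0, 3.5, 4.0, 3.0, 4.5, 2.5]
centaur_scale = [2.2, 3.0, 4.2, 2.8, 4.8, 2.4]
print(round(pearson_r(technophilia, centaur_scale), 3))
```

In practice a statistics package would also return the p-values reported in the table; the sketch only covers the coefficient itself.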
Table 2. Hierarchical Regression Model for Predictors of the Digital Centaur Scale.
| Model | Predictor | Standardized Coefficient β | SE (B) | p-Value |
|---|---|---|---|---|
| Step 1 | Daily Internet Usage | 0.101 | 0.008 | 0.000 |
| | Interpersonal Emotional Intelligence | 0.054 | 0.009 | 0.028 |
| | Intrapersonal Emotional Intelligence | 0.128 | 0.010 | 0.000 |
| | Mindfulness | 0.169 | 0.020 | 0.000 |
| | Digital Competence | 0.202 | 0.001 | 0.000 |
| Step 2 | Daily Internet Usage | 0.054 | 0.007 | 0.009 |
| | Interpersonal Emotional Intelligence | 0.015 | 0.009 | 0.529 |
| | Intrapersonal Emotional Intelligence | 0.070 | 0.009 | 0.002 |
| | Mindfulness | 0.041 | 0.021 | 0.079 |
| | Digital Competence | 0.147 | 0.001 | 0.000 |
| | Technopessimism | −0.111 | 0.024 | 0.000 |
| | Technophilia | 0.386 | 0.038 | 0.000 |
| | Technorationalism | 0.047 | 0.037 | 0.239 |
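Hierarchical regression enters predictor blocks in stages and compares the variance explained before and after the second block is added. A dependency-free sketch of that logic on synthetic data (variable names only echo the table; the numbers do not reproduce the study's estimates):

```python
def ols_r2(X, y):
    """R-squared of an OLS fit of y on the columns of X (intercept included)."""
    n, A = len(y), [[1.0] + list(row) for row in X]
    k = len(A[0])
    # Normal equations (A'A) b = A'y, solved by Gauss-Jordan elimination.
    M = [[sum(A[i][p] * A[i][q] for i in range(n)) for q in range(k)]
         + [sum(A[i][p] * y[i] for i in range(n))] for p in range(k)]
    for c in range(k):
        piv = max(range(c, k), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        M[c] = [v / M[c][c] for v in M[c]]
        for r in range(k):
            if r != c:
                M[r] = [v - M[r][c] * w for v, w in zip(M[r], M[c])]
    b = [M[p][k] for p in range(k)]
    y_hat = [sum(bj * aj for bj, aj in zip(b, row)) for row in A]
    y_bar = sum(y) / n
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))
    ss_tot = sum((yi - y_bar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Hypothetical respondents: step 1 enters one predictor, step 2 adds another.
internet_use = [1, 3, 2, 5, 4, 6, 2, 7, 5, 3]
technophilia = [2, 2, 3, 4, 3, 5, 1, 5, 4, 2]
centaur      = [1.8, 2.4, 2.9, 4.3, 3.4, 5.2, 1.5, 5.6, 4.4, 2.6]

r2_step1 = ols_r2([[u] for u in internet_use], centaur)
r2_step2 = ols_r2(list(zip(internet_use, technophilia)), centaur)
print(f"Step 1 R2 = {r2_step1:.3f}, Step 2 R2 = {r2_step2:.3f}, dR2 = {r2_step2 - r2_step1:.3f}")
```

Because the models are nested, R² cannot decrease from step 1 to step 2; what the hierarchical design tests is whether the increment (ΔR²) is statistically meaningful once the new block enters.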
Table 3. Direct and Indirect Effects from the Bootstrapped Mediation Model (N = 5000).
| Type | Effect | SE | β | 95% C.I. | p-Value |
|---|---|---|---|---|---|
| Indirect | Mindfulness ⇒ Technophilia ⇒ The Digital Centaur Scale | 0.006 | 0.033 | 0.018–0.044 | <0.001 |
| Indirect | Digital competence ⇒ Technophilia ⇒ The Digital Centaur Scale | 0.000 | 0.013 | 0.000–0.001 | 0.012 |
| Indirect | Technorationalism ⇒ Technophilia ⇒ The Digital Centaur Scale | 0.027 | 0.289 | 0.217–0.322 | <0.001 |
| Direct | Mindfulness ⇒ The Digital Centaur Scale | 0.021 | 0.028 | −0.022–0.073 | 0.225 |
| Direct | Digital competence ⇒ The Digital Centaur Scale | 0.001 | 0.162 | 0.005–0.009 | <0.001 |
| Direct | Technorationalism ⇒ The Digital Centaur Scale | 0.035 | 0.026 | −0.042–0.090 | 0.480 |
Table 4. Digital Centaurs’ AI use (Pearson correlations).

| Purpose of AI use | The Digital Centaur Scale (r) |
|---|---|
| Completing academic tasks | 0.181 ** |
| Resolving work-related tasks | 0.178 ** |
| Entertainment, gaming and leisure | 0.094 ** |
| Obtaining information | 0.087 |
| Functioning as a personal assistant | 0.048 |
| Psychological support | −0.055 |
| Social interaction | −0.073 |
| Receiving relationship advice | 0.015 |
| Developing specific skills and abilities | 0.132 ** |

Note: ** p < 0.001.
Table 5. AI-induced emotions (Pearson correlations).
| Emotion | The Digital Centaur Scale (r) |
|---|---|
| Curiosity | 0.356 ** |
| Irritation | −0.047 |
| Pleasure | 0.221 ** |
| Anxiety | −0.099 ** |
| Gratitude | 0.188 ** |
| Envy | −0.093 ** |
| Trust | 0.178 ** |
| Sense of inferiority | −0.096 ** |

Note: ** p < 0.001.
Table 6. Social roles where Digital Centaurs are open to integrating AI (Pearson correlations).
| Social role | The Digital Centaur Scale (r) |
|---|---|
| A work colleague | 0.237 ** |
| A friend | 0.070 ** |
| A school or university teacher | 0.168 ** |
| A household assistant | 0.325 ** |
| A psychologist | 0.065 ** |
| A romantic partner | −0.034 |
| A physician | 0.156 ** |
| A judge | 0.112 ** |
| A nanny | 0.052 |
| A city mayor | 0.050 |
| A president | 0.006 |
| A police officer | 0.087 ** |

Note: ** p < 0.001.
Table 7. Digital Centaurs’ fears and concerns regarding AI (Pearson correlations).
| Fear or concern | The Digital Centaur Scale (r) |
|---|---|
| AI will lead to the disappearance of professions and transform the labor market | 0.066 ** |
| AI will compete with humans as friends and romantic partners | −0.065 ** |
| AI will be used for the benefit of certain individuals, groups, or organizations | 0.119 ** |
| AI will be used to commit various crimes | 0.151 ** |
| Humans will make critical decisions based on AI and may be mistaken | 0.120 ** |
| AI will go out of human control and start governing people | 0.017 |
| AI will make humans lazy and prevent their development | 0.115 ** |
| AI will destroy humanity | −0.032 |
| People will begin to worship AI and turn it into a religion | −0.055 |
| AI will be used to enhance government control over citizens’ lives | 0.143 ** |
| AI will outperform me in everything, making me feel worthless | −0.074 ** |

Note: ** p < 0.001.