Article

Cognitive Atrophy Paradox of AI–Human Interaction: From Cognitive Growth and Atrophy to Balance

Engineering Faculty, Transport and Telecommunication Institute, Lauvas 2, LV-1019 Riga, Latvia
Information 2025, 16(11), 1009; https://doi.org/10.3390/info16111009
Submission received: 26 October 2025 / Revised: 14 November 2025 / Accepted: 17 November 2025 / Published: 19 November 2025

Abstract

The rapid integration of artificial intelligence (AI) into professional, educational, and everyday cognitive processes has created a dual dynamic of cognitive growth and cognitive atrophy. This study introduces a unified theoretical and quantitative framework to analyze these opposing tendencies and their equilibrium, conceptualized as the cognitive co-evolution model. The model interprets human–AI interaction as a nonlinear process in which reflective engagement enhances metacognitive skills, while over-delegation to automation reduces analytical autonomy. To quantify this balance, the paper proposes the cognitive sustainability index (CSI) as a composite measure integrating five behavioral parameters representing autonomy, reflection, creativity, delegation, and reliance. Simulation examples and domain-specific illustrations, including the case of software developers, demonstrate how CSI values can reveal distinct cognitive zones ranging from atrophy to synergy. Building upon these findings, the paper develops the framework of applied cognitive management, which links cognitive monitoring with adaptive interventions across individual, educational, professional, and institutional levels. The results highlight the need for organizations and policymakers to monitor cognitive sustainability as a strategic indicator of digital transformation. Maintaining CSI above the sustainability threshold ensures that automation enhances rather than replaces human reasoning, creativity, and ethical responsibility. The study concludes by outlining methodological challenges and future research directions toward a quantitative science of cognitive sustainability and co-evolutionary human–AI ecosystems.

1. Introduction

1.1. Background and Motivation

The rapid diffusion of Artificial Intelligence (AI) across professional, educational, and creative domains has redefined the structure of human cognition. AI-based systems now perform a growing share of analytical, linguistic, and decision-making tasks that were once exclusive to human intellect. This development represents both an unprecedented opportunity and a profound challenge: AI extends the boundaries of knowledge acquisition but simultaneously risks diminishing the very cognitive faculties it was designed to enhance.
This paradox defines a central tension of the digital age—cognitive co-evolution, in which human and artificial intelligence evolve together, sometimes synergistically and sometimes competitively. When used reflectively, AI can stimulate metacognitive growth, deeper problem framing, and creative exploration. However, when used as an unquestioned authority or shortcut, it can lead to cognitive dependency, erosion of analytical reasoning, and loss of creative perseverance.
Recent empirical evidence from education [1], medicine [2], and programming [3] shows that cognitive offloading to intelligent systems produces short-term efficiency but may suppress long-term conceptual understanding. The risk is not immediate intellectual decline but a gradual reconfiguration of cognitive effort: from comprehension to prompting, from synthesis to verification, and from problem-solving to consumption.
This paper builds upon this phenomenon, originally articulated as the cognitive atrophy paradox (CAP): the idea that cognitive augmentation through AI can paradoxically result in human diminishment if reflection and autonomy are not preserved.
The CAP describes a four-stage progression in human–AI interaction. In the traditional cognitive model, humans occupied the entire intellectual loop from problem identification to solution generation.
In the augmentation phase, AI acts as a cognitive assistant, enhancing data processing and offering new perspectives without replacing human comprehension. However, with growing convenience and algorithmic reliability, a bypass phase emerges, and individuals begin to delegate understanding itself, relying on AI to transform data into conclusions without engaging the interpretive process. The final phase represents complete cognitive externalization. Humans become intermediaries between input and AI-generated output, losing the capacity to evaluate, improve, or innovate beyond the system’s recommendations.
These transformations mirror historical examples of technological dependency, such as diminished spatial reasoning with GPS navigation or reduced mental arithmetic with calculators, but they operate on a deeper level, affecting metacognitive skills, judgment, and creative insight. The challenge is thus not to reject AI but to regulate the cognitive boundary: maintaining human-in-the-loop processes where understanding remains active rather than outsourced.

1.2. Related Works

A central theme in contemporary cognitive science and human–AI interaction research is that cognition is not strictly contained within the biological individual. Instead, cognitive work is frequently externalized, distributed, and scaffolded by artifacts, software systems, and other agents. This view has developed along two converging lines: (1) the philosophical claim that tools and environments can become functionally part of the cognitive system itself, and (2) empirical demonstrations that humans routinely offload memory, reasoning, planning, and decision-making to external supports.
The first line is often framed through the extended mind thesis, which argues that, under certain conditions, external resources (for example, notebooks, digital systems, navigation tools) can play the same functional role for an agent as internal memory or inference and therefore should be treated as part of that agent’s cognitive architecture. Clark and Chalmers famously illustrate this with the case of an individual who relies on an external memory aid in a way that is functionally indistinguishable from biological recall [4]. This position was further elaborated in subsequent work on cognitive extension and cognitive scaffolding, which frames tools not as passive supports but as dynamically coupled components in a hybrid human–artifact system [5]. The idea that cognition can be offloaded to the environment has also been expanded from the individual level to the group level in distributed cognition theory, which treats coordinated human–artifact–team systems (for example, navigation crews, air traffic control rooms, surgical teams) as the operative cognitive unit [6].
The second line is empirical and focuses on cognitive offloading: the deliberate delegation of cognitive operations (remembering, calculating, generating text, exploring options) to external media or intelligent systems to reduce working memory load and effort cost. Risko and Gilbert characterize cognitive offloading as a pervasive and adaptive strategy in which individuals use tools and the environment to manage cognitive demands, rather than relying solely on internal processing [7]. Later work models offloading as an economically rational choice: people evaluate the expected value of storing or computing information internally versus externally and increasingly choose the external channel when it is cheaper in effort, time, or risk [8]. This behavior appears early in development. Children as young as four selectively externalize memory and planning demands to physical supports, suggesting that offloading is not merely a convenience for experts but an intrinsic developmental strategy [9].
With the introduction of modern AI systems, offloading is no longer limited to memory or calculation. It now includes delegation of reasoning, drafting, interpretation, and even judgment. Grinschgl and Neubauer argue that AI systems function as “cognitive collaborators,” expanding a user’s effective problem-solving bandwidth, but they also warn that this redistribution of effort reshapes what remains internal to the human [10]. Recent empirical work confirms that people increasingly outsource higher-order tasks (explanation, synthesis, critique) to AI-based assistants and that this offloading mediates changes in core skills such as critical thinking [11].
This dynamic is especially visible in educational and professional settings. Studies on AI-supported learning environments report that generative systems can lower cognitive load and accelerate task completion but may also reduce the learner’s need to actively construct conceptual understanding [12]. In software development, human–AI co-writing and code suggestion tools now act as interactive external memory, design library, and debugging assistant, effectively embedding portions of procedural knowledge “outside the head.” This pattern has been described as a shift from internal problem-solving toward supervisory cognition: instead of generating solutions, the human evaluates, edits, and integrates machine-generated solutions [13].
Distributed and extended cognition research has begun moving from description to mechanism. Contemporary accounts argue that to understand human performance in AI-rich environments, we must model not only how much cognitive work is offloaded, but how stably the human remains in the loop [14]. External tools including AI systems are no longer peripheral aids. They actively participate in cognition, reshape cognitive load allocation, and influence the long-term maintenance of internal competencies [15].
As intelligent systems become more embedded in everyday cognitive workflows, a growing body of research raises the concern that human reliance on AI may inadvertently undermine internal cognitive capabilities. Empirical evidence indicates that when users rely on AI assistants for tasks traditionally performed by the human mind, there is a measurable drop in independent performance, retention, and metacognitive engagement. In educational contexts, systematic reviews show that students using AI dialog systems are more likely to offload reasoning and less likely to critically evaluate outputs, which is associated with weaker critical thinking and analytical reasoning scores [16].
From a cognitive-economics perspective, scholars model this as a shift in cost–benefit trade-offs: when the cost (time, effort) of internal processing is perceived as higher than the external tool’s cost, users offload more, leading to fewer opportunities for rehearsal, error correction, and consolidation [17]. Over time, this process can produce a downward spiral: less internal exercise weakens skills, weaker skills increase tool dependence, and greater dependence drives further skill erosion. This dynamic aligns with the concept of automation complacency, in which trust in the tool substitutes for human vigilance and reasoning [18].
Experimental findings suggest that heavy generative AI use (e.g., essay generation) correlates with lower neural engagement in areas associated with executive control and memory consolidation, implying that the neural substrate of cognition may also be affected [19]. Empirical reviews highlight that automation bias manifests in both errors of commission (following incorrect suggestions) and errors of omission (failing to intervene when required). For instance, in clinical decision support systems, clinicians have been shown to accept incorrect recommendations from automated tools without additional checks, thereby increasing risk [20]. Trust in automation plays a central role: when users view the automated system as highly competent, they are more prone to accept its output and reduce their own cognitive engagement [21].
Human–AI collaboration studies extend these insights, showing that overconfidence in AI may increase even among moderately experienced users, especially when transparency is low, stakes are ambiguous, or monitoring demands are reduced [22]. The concept of “out-of-the-loop” performance loss further captures how automation causes operators to lose situational awareness and manual skill over time, thereby degrading human capacity to intervene effectively [23]. Automation bias is not simply an outcome of trust but can be aggravated by task complexity, time pressure, high workload, and long periods of error-free automation operation (which generate “learned carelessness”) [24].
As AI usage and dependency grow, there are concerns about potential impacts on human cognitive abilities and autonomy. As AI systems handle increasingly complex tasks like planning and organizing, there is a risk that human cognitive properties may atrophy from lack of use. This heavy dependence on AI could erode professional competencies and create anxiety when manual or cognitive intervention becomes necessary [25]. AI’s expanding role in decision-making processes may diminish human agency. By substituting human choices with algorithmic recommendations, AI systems could reduce personal initiative across multiple domains [26]. This erosion of human autonomy and responsibility may have broader implications for well-being, potentially affecting personal satisfaction and sense of purpose [27].

1.3. Research Gap, Contributions and Paper Structure

Despite the growing volume of research on human–AI interaction, cognitive offloading, and automation bias, several significant gaps remain unresolved. Most existing studies examine these effects in isolation, focusing either on the efficiency gains from cognitive delegation, the risks of skill decay, or the psychological mechanisms of automation bias, but they do not explain how these tendencies interact within a continuous human–AI learning cycle. This article addresses that gap by introducing a cognitive co-evolution model, which conceptualizes human cognitive adaptation to AI not as a simple progression or decline but as a dialectical process in which augmentation and regression are interdependent outcomes of the same system. While prior research identifies the dangers of over-reliance on automation, it rarely provides quantitative instruments for monitoring or managing such dynamics. To fill this methodological void, the paper proposes the cognitive sustainability index (CSI) as a new analytical indicator that measures the internal balance between comprehension and delegation. It provides a measurable foundation for assessing cognitive integrity in AI-mediated tasks.
Beyond the analytical level, the study extends the discussion to managerial and institutional dimensions that remain largely under-theorized in the current literature. It introduces the applied cognitive management (ACM) framework, encompassing individual, educational, professional, and organizational domains. This approach transforms cognitive monitoring from a theoretical construct into an operational mechanism for adaptive governance and digital transformation.
The main contributions of the paper are fourfold:
A unified theoretical framework that models cognitive growth and atrophy as a co-evolutionary system rather than independent effects.
Formalization of a new cognitive sustainability index (CSI) that quantifies cognitive sustainability and alignment in human–AI ecosystems.
Design of an applied cognitive management architecture that operationalizes these indices for real-world monitoring and intervention.
Demonstration of the framework in educational and professional contexts, illustrating how cognitive sustainability can be preserved through reflective design and balanced automation.
The paper is structured as follows. Section 2 presents the theoretical foundation and mathematical formulation of cognitive growth, atrophy, and sustainability dynamics. Section 3 reports the modeling results and comparative interpretation of key parameters, including CSI and CAI simulations. Section 4 extends the findings into the Applied Cognitive Management framework, illustrating multi-level feedback and preventive interventions for sustaining cognitive vitality, and concludes with a discussion of limitations, challenges, and future research directions aimed at developing cognitively sustainable and ethically aligned human–AI ecosystems. Section 5 summarizes the research findings.

2. Materials and Methods

The cognitive co-evolution model builds upon the duality between two processes that coexist in human–AI interaction: cognitive growth and cognitive atrophy.
These processes are not opposites but mutually entangled trajectories of human adaptation to intelligent systems. Growth occurs through stimulation, reflection, and creative co-thinking with AI, while atrophy arises from excessive reliance, automation complacency, and the erosion of self-generated reasoning.
This dual dynamic reflects a dialectical principle of technological cognition: every cognitive extension carries the risk of internal contraction if not consciously regulated.
The model synthesizes two conceptual strands:
Cognitive growth model (CGM) representing the reinforcement of metacognitive skills through interactive AI use, feedback loops, and reflective questioning.
Cognitive atrophy paradox (CAP) representing the progressive reduction in analytical effort and epistemic responsibility as AI becomes more autonomous and confident in its outputs.
Although the CGM and the CAP are both presented as four-stage progressions, they represent two opposite trajectories of human–AI cognitive adaptation. CGM explains how reflective, metacognitive engagement leads to increasing autonomy and conceptual reinforcement, whereas CAP describes the erosion pathway driven by over-reliance and automation bias. Their structural similarity reflects the fact that both are evolutionary processes, but they operate in opposite directions and produce different cognitive consequences. To avoid conceptual redundancy, later sections integrate these dual trajectories into a unified co-evolution framework (Section 2.3), where their interaction defines cognitive balance.

2.1. Cognitive Growth Model

The cognitive growth model describes the progressive reinforcement of higher-order thinking skills that emerges when individuals engage with AI through structured reflection, inquiry, and feedback.
Unlike the linear conception of “knowledge acquisition,” the CGM frames human–AI collaboration as a recursive learning loop, where every AI-assisted act of problem solving stimulates new cycles of analysis and self-regulation.
The core mechanisms of cognitive growth are:
Metacognitive reinforcement when interaction with AI externalizes thought processes, allowing users to observe, question, and refine their reasoning.
Reflective feedback loops when each exchange with AI produces immediate feedback that supports hypothesis testing and conceptual calibration. The user’s role shifts from answer seeker to model verifier, transforming AI dialog into a laboratory for reflective cognition.
Cognitive scaffolding when AI functions as a dynamic scaffold that extends the learner’s zone of proximal development. Through adaptive prompting, visualization, and simulation, AI helps users operate temporarily beyond their independent capacity while still requiring active participation.
Creative amplification when exposure to diverse AI-generated associations and perspectives stimulates divergent thinking. The system acts as a catalyst for conceptual recombination, enabling the emergence of novel patterns rather than simple reproduction of prior knowledge.
Together, these mechanisms create a positive cognitive feedback system: reflection enhances understanding, understanding improves questioning, and improved questioning deepens subsequent reflection. This iterative cycle produces cumulative cognitive growth, depicted conceptually in Figure 1.
The CGM can be represented as a four-phase trajectory paralleling the user’s evolving engagement with AI (Table 1).
Within educational contexts, the CGM provides a foundation for AI-assisted metacognitive learning, emphasizing process awareness rather than output quantity. In professional and research environments, it supports cognitive resilience: the capacity to integrate algorithmic insight without sacrificing judgment or originality. Thus, the CGM defines the positive pole of human–AI co-evolution as the condition under which AI truly functions as an amplifier of intelligence rather than a substitute for it.

2.2. Cognitive Atrophy Paradox

The cognitive atrophy paradox describes a counterintuitive phenomenon in which the same technological systems that enhance cognitive efficiency also erode the very mental functions they are meant to support. As AI systems become increasingly capable, transparent, and autonomous, individuals gradually shift from using these systems to thinking through them, substituting machine inference for human understanding.
This paradox captures the essential fragility of human cognition in the era of intelligent automation: the more fluently technology assists us, the less effort we invest in maintaining the cognitive processes that underlie reasoning, creativity, and judgment. The CAP unfolds through four sequential yet overlapping phases, representing a gradual reconfiguration of human mental effort.
Phase 1. Traditional cognition. Before AI mediation, cognitive activity was fully endogenous. Humans independently formulated problems, synthesized information, and validated results. Each cycle of learning acted as a form of cognitive training, reinforcing neural pathways associated with understanding and recall (Figure 2).
Phase 2. Augmentation phase. The introduction of AI-based tools (such as search engines, analytical assistants, and generative models) provides cognitive amplification. AI acts as a supportive extension of the user’s intellect, offloading routine computations while preserving conceptual control. At this stage, cognitive capacity expands through collaboration (Figure 3).
Phase 3. Bypass phase. As AI systems demonstrate growing accuracy and fluency, users begin to bypass internal reasoning. Instead of constructing knowledge, they retrieve it. Understanding becomes secondary to obtaining the correct or efficient output. Reflection declines, and cognitive offloading turns into cognitive avoidance (Figure 4).
Phase 4. Dependency phase. Prolonged reliance culminates in cognitive passivity. Users cease to verify, interpret, or reconstruct information generated by AI, accepting algorithmic outputs as epistemic authority. The mind no longer rehearses analytical processes and gradually loses the ability to regenerate them without external support (Figure 5).
These phases do not represent discrete categories but a continuum of decreasing cognitive autonomy. Over time, the bypass and dependency phases can lead to structural cognitive weakening, analogous to muscular atrophy in physiology: unused neural and metacognitive circuits deteriorate due to disuse.
Cognitive atrophy emerges gradually as individuals delegate increasing portions of analytical and creative effort to intelligent systems. What begins as a convenience evolves into delegation drift: a slow reallocation of cognitive responsibility from comprehension and synthesis to prompting and verification.
This shift produces several manifestations:
Automation bias, where AI outputs are trusted over personal reasoning.
Comprehension erosion, as users rely on summaries rather than underlying principles.
Motivational narrowing, in which intellectual curiosity declines when effort is consistently mediated by algorithms.
The dynamics are subtle: cognitive effort is not lost but redistributed toward managing and interpreting machine outputs. Over time, this reconfiguration weakens deep learning and adaptive problem-solving skills, reinforcing passive consumption behaviors. Mitigation requires balancing automation with deliberate cognitive engagement. Cultivating awareness of these mechanisms allows human cognition to evolve alongside technology rather than erode beneath it.

2.3. Cognitive Co-Evolution Model

The dynamics of human–AI interaction are best understood as a continuous and reciprocal process in which both cognitive growth and cognitive atrophy evolve over time. Rather than representing mutually exclusive outcomes, these trajectories coexist as dual accumulative processes whose interaction defines the overall direction of cognitive development.
The cognitive co-evolution model conceptualizes this relationship as a system of two interdependent, monotonically increasing functions (Figure 6), one capturing the reinforcement of metacognitive and creative capabilities (cognitive growth) and the other representing the gradual accumulation of dependency and automation bias (cognitive atrophy). Both cognitive growth and cognitive atrophy intensify during prolonged interaction with AI systems, though they follow distinct temporal patterns.
To provide a more intuitive interpretation of the dynamics shown in Figure 6, it is useful to view the two curves as representing parallel but opposing tendencies that unfold during prolonged interaction with AI systems. The cognitive growth curve G(t) begins with a steep incline because early engagement with AI typically stimulates reflection, exploration, and rapid acquisition of new conceptual structures. As the user internalizes these benefits, the rate of improvement naturally slows, producing a plateau that reflects diminishing marginal returns to learning. In contrast, the atrophy curve A(t) starts slowly because the initial use of AI still requires active verification and human oversight. Over time, as reliance on automated outputs increases and habits of independent reasoning weaken, the rate of atrophy accelerates, generating the characteristic upward bend of the second curve.
The intersection of the derivatives of these curves, not the values, captures the moment of cognitive balance: a point at which the incremental benefits of AI-assisted learning are matched by the incremental risks of over-delegation. Before this point, growth dominates, and human–AI interaction strengthens cognitive autonomy. Beyond it, atrophy becomes increasingly likely as automated reasoning begins to displace internal cognitive effort. The curves therefore do not depict fixed stages or predetermined outcomes but rather illustrate how cognitive trajectories evolve dynamically in response to user behavior, interaction patterns, and the intensity of AI reliance. This interpretation helps clarify the qualitative meaning of the model and highlights the importance of maintaining reflective engagement to avoid entering the atrophy-dominant region.
The critical transition between these two dynamics occurs at the cognitive balance point (CBP), defined mathematically as the moment when the rates of change in both processes are equal:
G′(t_CBP) = A′(t_CBP)
At this equilibrium, the benefits of reflective augmentation are exactly counterbalanced by the onset of automation-induced decline.
Although both cumulative functions G(t) and A(t) continue to increase beyond this point, their relative slopes determine whether the system evolves toward sustainable co-adaptation or cognitive dependency. The region preceding the CBP corresponds to the cognitive growth zone, characterized by metacognitive reinforcement and creative exploration, while the region beyond it defines the cognitive inversion zone, where apparent efficiency masks the erosion of analytical depth and epistemic independence.
The cognitive co-evolution model thus provides a unifying theoretical foundation for subsequent quantitative analysis. In this sense, the model links theory and measurement: it captures not only what changes as humans engage with AI but also how the pace of those changes defines long-term cognitive sustainability. Through this framework, cognitive health can be continuously monitored and regulated via applied cognitive management (ACM) mechanisms, ensuring that technological augmentation remains reflective, adaptive, and intellectually regenerative.

2.4. Mathematical Framework of the Study

2.4.1. Foundational Analytical Model of Cognitive Dynamics

The dynamics of cognitive growth and atrophy can be represented through a dual-process model in which both trajectories evolve over time as interdependent functions of cognitive effort. Let G(t) denote the cumulative level of cognitive growth, and A(t) represent the level of cognitive atrophy at time t. Both are modeled as sigmoidal (S-shaped) functions reflecting the nonlinear progression of learning and decline:
G(t) = G_max / (1 + exp(−k_g (t − τ_g))),   A(t) = A_max / (1 + exp(−k_a (t − τ_a)))
where k_g and k_a are growth and atrophy rates, and τ_g and τ_a are temporal thresholds representing the inflection points of change.
In the context of the model, G_max represents the maximum attainable level of cognitive growth under ideal learning conditions, where cognitive effort, reflection, and active synthesis are fully engaged. It defines the upper boundary of conceptual understanding, skill acquisition, and adaptive reasoning that an individual or group can achieve within a given domain. A_max represents the maximum potential extent of cognitive atrophy, i.e., the asymptotic level of dependency, passivity, or loss of deep comprehension that may occur if cognitive tasks are almost entirely delegated to intelligent systems. It serves as the upper limit of functional degradation in independent reasoning and problem-solving capacity.
In practical terms, G_max and A_max act as normalization constants that scale the two S-curves within the same cognitive domain, allowing comparison of the competing tendencies of learning expansion and effort reallocation over time.
The balance of cognition at any given time can be expressed as a differential indicator:
ΔC(t) = dG/dt − dA/dt
A positive ΔC(t) indicates a dominance of active learning and skill acquisition, while a negative value signals prevailing automation and dependency. The critical moment of equilibrium, t*, occurs when dG/dt = dA/dt, representing the cognitive co-evolution point.
External factors such as AI reliance intensity λ and metacognitive regulation μ modulate both functions:
k_g = f_1(μ),   k_a = f_2(λ)
High metacognitive awareness μ amplifies growth by reinforcing self-reflection and deliberate practice, while high automation dependence λ accelerates atrophy through reduced cognitive engagement.
This formalization provides a quantitative lens for studying how intelligent system usage reshapes the allocation of cognitive effort. It emphasizes that sustainable cognitive development requires maintaining μ > λ , ensuring that reflective understanding grows faster than dependence on automation.
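For illustration, the dual-process formulation above can be prototyped in a few lines of code. The sketch below assumes G_max = A_max = 1 and uses illustrative rate parameters (the same values later assigned to the researcher profile); it locates the co-evolution point t* numerically as the instant where dG/dt falls back below dA/dt.

```python
# Minimal sketch of the dual-process model, assuming G_max = A_max = 1 and
# illustrative rate parameters; these are placeholders for calibration, not
# empirical estimates.
import numpy as np

def growth(t, g_max=1.0, k_g=0.8, tau_g=4.0):
    """Cumulative cognitive growth G(t) as a logistic (S-shaped) curve."""
    return g_max / (1.0 + np.exp(-k_g * (t - tau_g)))

def atrophy(t, a_max=1.0, k_a=0.3, tau_a=6.0):
    """Cumulative cognitive atrophy A(t) as a logistic (S-shaped) curve."""
    return a_max / (1.0 + np.exp(-k_a * (t - tau_a)))

def delta_c(t, h=1e-4):
    """Differential indicator ΔC(t) = dG/dt - dA/dt (central differences)."""
    d_g = (growth(t + h) - growth(t - h)) / (2 * h)
    d_a = (atrophy(t + h) - atrophy(t - h)) / (2 * h)
    return d_g - d_a

# Locate the cognitive co-evolution point t*: the instant where the growth rate
# stops dominating, i.e., ΔC changes sign from positive to negative.
t = np.linspace(0.0, 10.0, 2001)
dc = delta_c(t)
crossings = np.where((dc[:-1] > 0) & (dc[1:] <= 0))[0]
t_star = t[crossings[0]] if crossings.size else None
print("cognitive co-evolution point t* ≈", t_star)
```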
Building upon the mathematical representation of cognitive dynamics, the cognitive sustainability index (CSI) serves as an integrative measure expressing the long-term equilibrium between cognitive growth and atrophy. It quantifies the extent to which a learning system maintains conceptual understanding and reflective engagement while employing intelligent tools for efficiency.
Formally, the CSI can be defined as a weighted ratio:
CSI(t) = w_g G(t) / (w_g G(t) + w_a A(t))
where w_g and w_a are weight coefficients reflecting the relative importance of growth-promoting and atrophy-inducing factors. The index is normalized within 0 < CSI ≤ 1.
The selection of weight coefficients depends on contextual and behavioral parameters influencing cognitive balance:
w_g increases with metacognitive regulation (μ), reflective feedback, and task complexity, as these factors stimulate comprehension and synthesis.
w_a increases with automation reliance (λ), task simplification, and delegation frequency, which tend to displace active reasoning.
In a normalized form, the weights can be expressed as:
w_g = μ / (μ + λ),   w_a = λ / (μ + λ)
ensuring w_g + w_a = 1.
Substituting these values into the main expression yields:
CSI(t) = μ G(t) / (μ G(t) + λ A(t))
This formulation highlights the interactive regulation of human–AI cognition. The sustainability of intellectual performance depends not merely on the magnitude of growth or atrophy but on the dynamic balance between self-directed regulation (μ) and automation dependence (λ).
When μ > λ, the system sustains cognitive integrity, maintaining a CSI above the stability threshold. Conversely, if λ dominates, atrophy accelerates and CSI declines, signaling the need for interventions, such as reflective training, unaided reasoning sessions, or cognitive load redistribution, to restore equilibrium.
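As a minimal sketch under the same assumptions (logistic G(t) and A(t) with unit maxima), the weighted index and its stability check follow directly from the expressions above; the 0.55 threshold anticipates the lower boundary of the cognitive balance zone introduced later and is used here purely for illustration.

```python
# Weighted CSI sketch: with w_g = mu/(mu + lam) and w_a = lam/(mu + lam),
# CSI(t) reduces to mu*G(t) / (mu*G(t) + lam*A(t)). Parameter values and the
# 0.55 threshold are illustrative assumptions.
import numpy as np

def G(t, k_g=0.8, tau_g=4.0):
    return 1.0 / (1.0 + np.exp(-k_g * (t - tau_g)))

def A(t, k_a=0.3, tau_a=6.0):
    return 1.0 / (1.0 + np.exp(-k_a * (t - tau_a)))

def csi(t, mu, lam):
    """Cognitive sustainability index CSI(t) = mu*G(t) / (mu*G(t) + lam*A(t))."""
    return (mu * G(t)) / (mu * G(t) + lam * A(t))

def needs_intervention(t, mu, lam, threshold=0.55):
    """True when CSI falls below the assumed sustainability threshold."""
    return csi(t, mu, lam) < threshold

print(round(csi(6.0, mu=0.85, lam=0.25), 3))        # reflective profile (mu > lam)
print(needs_intervention(6.0, mu=0.45, lam=0.75))   # delegation-heavy profile
```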
Two complementary methodological approaches can be applied to define the coefficients μ and λ:
1. Expert-based evaluation approach.
In this method, domain experts (educators, cognitive psychologists, or task designers) assess the learner’s or operator’s degree of metacognitive engagement μ and automation reliance λ based on structured observation, self-reflective reports, or validated scales.
μ is derived from qualitative and quantitative indicators such as planning behavior, self-monitoring, adaptive decision-making, and reflective discourse.
λ is assessed through the frequency of delegation, dependence on automated feedback, and reduction in independent reasoning.
This approach provides a contextually rich but subjective estimation of the parameters and is best suited for small-scale, controlled studies or professional training environments.
2. Data-driven behavioral approximation approach.
A more scalable and objective alternative defines μ and λ through observable interaction patterns collected from digital platforms.
μ can be estimated from variables such as the ratio of planning to execution time, the rate of self-correction, or the frequency of reflective queries (“why,” “how”).
λ can be approximated through the density of direct AI prompts, the proportion of unmodified AI-generated outputs, or the ratio of AI-mediated to independent task completion.
Both variables can be normalized on a [0, 1] scale, and their relative magnitudes used directly within the CSI formulation.
The dual approach enables flexibility in empirical implementation: expert evaluation offers interpretive depth and contextual understanding, whereas data-driven estimation provides reproducibility and large-scale applicability. Together, they transform the cognitive sustainability index from a theoretical construct into a measurable indicator of human–AI cognitive equilibrium across learning, medical, and technical domains.
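A behavioral approximation of this kind could be scripted against platform logs. The sketch below is purely illustrative: the log field names are hypothetical, and the equal-weight averaging of the proxies is an assumption rather than a validated estimator.

```python
# Illustrative estimation of mu (metacognitive regulation) and lam (automation
# reliance) from interaction logs. All field names below are hypothetical and
# the equal-weight averaging is an assumed, uncalibrated aggregation rule.
def estimate_mu(log):
    planning_ratio = min(log["planning_time"] / max(log["execution_time"], 1e-9), 1.0)
    reflective_rate = log["reflective_queries"] / max(log["total_queries"], 1)
    self_correction = min(log["self_corrections"] / max(log["submissions"], 1), 1.0)
    return (planning_ratio + reflective_rate + self_correction) / 3.0

def estimate_lam(log):
    prompt_density = min(log["ai_prompts"] / max(log["total_actions"], 1), 1.0)
    unmodified_share = log["unmodified_ai_outputs"] / max(log["ai_outputs"], 1)
    ai_task_share = log["ai_completed_tasks"] / max(log["total_tasks"], 1)
    return (prompt_density + unmodified_share + ai_task_share) / 3.0

sample_log = {
    "planning_time": 35, "execution_time": 60,        # minutes
    "reflective_queries": 12, "total_queries": 40,    # "why"/"how" prompts
    "self_corrections": 9, "submissions": 20,
    "ai_prompts": 25, "total_actions": 120,
    "unmodified_ai_outputs": 6, "ai_outputs": 18,
    "ai_completed_tasks": 7, "total_tasks": 20,
}
print(estimate_mu(sample_log), estimate_lam(sample_log))
```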
Figure 7 summarizes the analytical logic connecting the conceptual and quantitative layers of the research.
Cognitive growth G(t) and cognitive atrophy A(t) are represented as interacting dynamic functions whose derivatives determine the rate of change in cognitive engagement. Their comparison produces the cognitive sustainability index CSI = G(t)/A(t), which serves as the central quantitative indicator of cognitive balance. The CSI then informs applied cognitive management, enabling real-time monitoring and adaptive regulation of human–AI cognitive dynamics.

2.4.2. Empirical Plausibility and Alternative Functional Forms

The selection of sigmoidal functions to represent cognitive growth and cognitive atrophy reflects a widely accepted tradition in cognitive science and learning research, in which human performance, memory consolidation, and skill acquisition frequently follow S-shaped or saturating trajectories driven by diminishing marginal returns to cognitive effort [28,29]. Nevertheless, logistic functions should not be viewed as the only plausible mathematical representation of cognitive change. Alternative functional forms—including exponential learning curves, hyperbolic discounting functions, and power-law dynamics—have been extensively documented in domains such as perceptual learning, expertise development, cognitive offloading, and neural adaptation [30,31,32]. Exponential functions may capture rapid initial improvement under intensive AI scaffolding, whereas hyperbolic and quasi-hyperbolic forms are commonly used to model effort allocation and automation bias due to their sensitivity to short-term benefits and long-term cognitive costs. Power-law learning, meanwhile, remains a dominant empirical descriptor of human performance improvement across diverse skill domains, suggesting potential relevance for modeling long-term human–AI co-adaptation.
Future extensions of this framework should therefore involve systematic comparison of logistic, exponential, hyperbolic, and power-law formulations to determine which function families align most closely with empirical data on cognitive engagement, cognitive decline, and automation reliance. Such comparative modeling can be informed by experimental and neurocognitive evidence from studies of learning efficiency, cognitive offloading, working-memory economics, and skill decay under automation [7,8,17]. Incorporating multiple candidate functions also enhances the model’s adaptability to different task environments, for example, high-pressure decision-making, educational learning cycles, or long-term professional skill maintenance. Accordingly, the present mathematical structure should be interpreted as a flexible baseline that can be empirically calibrated and validated through future behavioral, cognitive, and neuroscientific studies focused on human–AI interaction dynamics.
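To make the comparison concrete, the candidate families can be expressed as interchangeable kernels fitted to the same engagement data; the parameterizations below are placeholders, and model selection would proceed by cross-validated error or information criteria rather than by the specific values shown.

```python
# Candidate functional families for cognitive change; parameter values are
# placeholders awaiting empirical calibration, not fitted estimates.
import numpy as np

def logistic(t, c=1.0, k=0.8, tau=4.0):
    return c / (1.0 + np.exp(-k * (t - tau)))    # S-shaped saturation

def exponential(t, c=1.0, k=0.4):
    return c * (1.0 - np.exp(-k * t))            # rapid early gains

def hyperbolic(t, c=1.0, h=2.0):
    return c * t / (h + t)                       # discounting-style saturation

def power_law(t, c=0.35, beta=0.6):
    return c * np.power(t, beta)                 # classic learning curve

t = np.linspace(0, 10, 101)
candidates = {"logistic": logistic(t), "exponential": exponential(t),
              "hyperbolic": hyperbolic(t), "power_law": power_law(t)}
# Each candidate would be fitted to observed engagement trajectories and
# compared via cross-validated error or AIC/BIC.
print({name: round(curve[-1], 3) for name, curve in candidates.items()})
```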
Although the cognitive sustainability index provides an integrative quantitative measure of human–AI cognitive balance, the initial weighting scheme for the parameters μ and λ remains partly subjective, relying on expert judgements and simulation heuristics. To strengthen methodological rigor and practical applicability, future implementations of CSI should incorporate systematic data-driven calibration procedures grounded in established techniques in learning analytics, behavioral modeling, and psychometric validation.

2.4.3. Data-Driven Calibration, Sensitivity Analysis, and Structural Validation of CSI

Real-world interaction data from learning management systems, intelligent tutoring systems, human–AI writing environments, and collaborative software development platforms (e.g., GitHub repositories, https://github.com/orgs/github/repositories, accessed on 14 November 2025) provide rich behavioral traces of autonomy, reflection, delegation, and reliance [33,34,35]. Such datasets support unbiased extraction of latent cognitive patterns through statistical learning methods. Principal component analysis and exploratory factor analysis, widely used in cognitive and educational measurement, can identify latent dimensions underlying user behaviors and quantify the variance explained by each CSI parameter, enabling objective estimation of weights rather than relying solely on expert judgment [36,37]. Structural equation modeling further enables testing causal relationships between cognitive behaviors, model parameters, and performance outcomes, providing path coefficients, model-fit indices, and cross-domain generalizability metrics [38,39].
Bayesian hierarchical regression and probabilistic graphical models allow parameter estimation under uncertainty, capturing individual variability and temporal dependencies in human–AI interaction [40,41]. Bayesian updating can be applied to dynamically re-estimate μ and λ as new behavioral data accumulate, transforming CSI into an adaptive and continuously calibrated index rather than a static construct.
A crucial methodological step involves sensitivity analysis to evaluate the robustness of CSI with respect to perturbations in parameter values. Techniques such as variance-based sensitivity measures (Sobol indices), one-factor-at-a-time perturbation, Monte Carlo simulations, and global uncertainty quantification frameworks have been demonstrated to be effective in cognitive modeling and human–automation research [42]. These tools identify parameters exerting disproportionate influence on CSI values and determine whether the index remains stable across realistic behavioral ranges.
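A crude, fully hedged version of such a sensitivity screen is sketched below: it samples μ, λ, G, and A from assumed uniform ranges, evaluates the weighted CSI, and uses squared correlations as a first-order proxy for variance-based indices; a rigorous study would replace this with Sobol decomposition over calibrated ranges.

```python
# Monte Carlo sensitivity screen for the weighted CSI. Sampling ranges are
# assumed for illustration; squared correlation is only a rough stand-in for
# proper variance-based (Sobol) indices.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
mu = rng.uniform(0.3, 0.9, n)    # assumed range of metacognitive regulation
lam = rng.uniform(0.1, 0.9, n)   # assumed range of automation reliance
g = rng.uniform(0.4, 1.0, n)     # sampled growth level G(t)
a = rng.uniform(0.1, 0.8, n)     # sampled atrophy level A(t)

csi = (mu * g) / (mu * g + lam * a)

for name, x in {"mu": mu, "lambda": lam, "G": g, "A": a}.items():
    r = np.corrcoef(x, csi)[0, 1]
    print(f"{name}: first-order influence proxy R^2 ≈ {r ** 2:.3f}")
```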
Structural validity of CSI should be examined through convergent, discriminant, and predictive validity testing. This includes correlating CSI values with independent performance indicators (e.g., problem-solving accuracy, code quality metrics, reflective writing indicators), cognitive-load measures, or established psychometric scales of metacognition and autonomy [43,44]. The integration of these empirical procedures ensures that CSI can be reliably deployed in large-scale educational, professional, and human–AI team settings, providing a validated, data-grounded measurement instrument rather than relying primarily on expert-driven weighting.

2.5. Applied Cognitive Management Framework

The applied cognitive management framework translates the theoretical constructs of cognitive growth, atrophy, and sustainability into a practical governance system for maintaining cognitive integrity within human–AI ecosystems (Figure 8).
While the cognitive co-evolution model captures the dynamic interplay between growth and decline, and the cognitive sustainability index quantifies their balance, ACM establishes the feedback and control architecture that transforms analytical insight into adaptive action.
At its core, ACM functions as a closed-loop regulatory system, continuously monitoring, interpreting, and adjusting cognitive dynamics to preserve equilibrium between human autonomy and algorithmic assistance. The framework follows a three-stage operational cycle:
  • Measurement: continuous tracking of cognitive indicators, including CSI values and behavioral proxies, to assess cognitive engagement and automation reliance.
  • Interpretation: analytical evaluation of deviations from the optimal cognitive balance zone, identifying early signs of drift toward over-delegation or cognitive fatigue.
  • Intervention: application of targeted educational, managerial, or behavioral adjustments that restore balance, ensuring sustainable co-adaptation of human and artificial cognition.
This cyclical mechanism mirrors classical cybernetic control systems, but it is specifically adapted for cognitive–organizational regulation.
ACM operates across four interconnected levels of control:
At the individual level, users employ reflective self-assessment tools, dashboards, and feedback loops to monitor their own CSI and cognitive engagement.
At the educational level, instructors and learning platforms analyze aggregated cognitive metrics to adapt course design, assessment methods, and metacognitive scaffolding.
At the professional level, organizations monitor workforce interaction patterns with AI tools, adjusting automation intensity and promoting analytical independence.
At the institutional level, policymakers and leadership bodies incorporate cognitive sustainability metrics into digital transformation strategies, innovation management, and ethical governance frameworks.
At every level, ACM prioritizes preventive rather than corrective regulation, focusing on maintaining awareness and cognitive resilience before dependency or skill erosion occurs. The goal of ACM is not to minimize automation, but to manage its cognitive consequences. Preventive strategies may include:
Scheduled reflective pauses or “human-in-the-loop” checkpoints in automated decision chains.
Alternating cycles of human-only and AI-assisted task execution.
“Explain-your-choice” mechanisms to sustain reasoning transparency and epistemic accountability.
These interventions stabilize the cognitive homeostasis zone, preventing cumulative cognitive drift and fostering a balanced distribution of effort between comprehension and automation.
Formally, the ACM process can be described as a dynamic feedback function:
ACM(t) = f[CSI(t), ΔCBZ, I(t)]
where CSI(t) denotes the instantaneous cognitive sustainability level, ΔCBZ represents the deviation from the cognitive balance zone, and I(t) reflects the magnitude or intensity of corrective interventions.
When ΔCBZ exceeds a critical threshold, ACM triggers adaptive responses that recalibrate the human–AI interaction ratio, reinforcing reflective engagement and cognitive autonomy.
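One possible instantiation of this feedback function is sketched below. The zone boundaries (0.55 and 0.70) follow the cognitive balance zone used throughout the paper, while the 0.05 trigger margin and the specific intervention labels are illustrative design choices rather than prescribed values.

```python
# Closed-loop ACM sketch for ACM(t) = f[CSI(t), dCBZ, I(t)]. Zone limits follow
# the cognitive balance zone (0.55-0.70); the trigger margin and intervention
# labels are illustrative assumptions.
CBZ_LOW, CBZ_HIGH, TRIGGER = 0.55, 0.70, 0.05

def deviation_from_cbz(csi_value):
    """Deviation dCBZ from the cognitive balance zone (0 when inside it)."""
    if csi_value < CBZ_LOW:
        return CBZ_LOW - csi_value       # drift toward over-delegation
    if csi_value > CBZ_HIGH:
        return csi_value - CBZ_HIGH      # under-used augmentation
    return 0.0

def acm_step(csi_value):
    """One measurement-interpretation-intervention cycle."""
    if deviation_from_cbz(csi_value) <= TRIGGER:
        return "maintain current human-AI interaction ratio"
    if csi_value < CBZ_LOW:
        return "schedule reflective pause / unaided reasoning session"
    return "expand AI-assisted cycles toward balanced co-cognition"

for value in (0.42, 0.63, 0.81):
    print(value, "->", acm_step(value))
```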
Practically, ACM acts as the operational interface between cognitive analytics and digital governance. Integrated into organizational dashboards, it enables real-time visualization of team-level cognitive health and early detection of automation-induced dependency.
For executives, educators, and policy leaders, ACM provides a strategic mechanism to reconcile immediate efficiency gains with long-term preservation of intellectual capital, ensuring that digital transformation enhances rather than erodes human cognitive capacity.
Ultimately, the ACM framework transforms the abstract notion of cognitive sustainability into a continuous process of measurement, reflection, and adaptation. It represents a new paradigm of cognitive governance in which human and artificial intelligence co-evolve through deliberate supervision, keeping technological progress both efficient and intellectually regenerative.

2.6. The Cognitive Interaction Spectrum

The relationship between humans and artificial intelligence evolves along a continuum of increasing cognitive maturity, reflecting the degree to which AI transitions from a passive instrument to an active partner in human reasoning.
This continuum is conceptualized as the cognitive interaction spectrum (CIS), a hierarchical model that complements the cognitive co-evolution framework by describing qualitative shifts in the function of AI relative to user cognition (Figure 9).
At the lowest level of maturity, AI operates primarily as a Tool, executing predefined tasks with minimal cognitive engagement from the user. As cognitive maturity develops, AI assumes the role of a Facilitator, supporting the enhancement of insight and comprehension. With further development, AI can evolve into a Mentor, fostering reflection, questioning, and meta-reasoning. At the highest level, AI functions as a Partner, engaging in the co-creation of ideas and serving as an intellectual collaborator within a shared problem-solving process.
The CIS defines the continuum of human–AI collaboration in terms of cognitive control, engagement depth, and automation dependency. It describes how cognitive responsibility shifts from human-driven reasoning to AI-dominant automation and how these transitions correspond to changes in the CSI (Table 2).
The CSI serves as a quantitative indicator of the degree of cognitive sustainability across the interaction spectrum:
High CSI values (0.7–1.0) correspond to cognitively rich and sustainable human engagement, characterized by self-reflection, deep understanding, and resilient reasoning.
Moderate CSI values (0.55–0.7) define the cognitive balance zone where human and AI efforts are optimally integrated. This range supports adaptive learning and co-evolution while avoiding over-dependence.
Low CSI values (<0.55) mark increasing automation dominance, leading to efficiency gains at the cost of conceptual depth and autonomy.
The CIS thus provides a structured framework for identifying the current cognitive mode of a human–AI system and determining whether it resides within, above, or below the cognitive balance zone.
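For reporting purposes, the mapping from CSI values to spectrum zones can be encoded directly; the zone labels below paraphrase the ranges listed above and are intended only as a dashboard convention.

```python
# CSI-to-zone mapping following the ranges above; labels are a reporting
# convention, not normative categories.
def csi_zone(value: float) -> str:
    if value >= 0.7:
        return "human-dominant: cognitively rich, sustainable engagement"
    if value >= 0.55:
        return "cognitive balance zone: integrated human-AI co-evolution"
    return "automation-dominant: efficiency gains with atrophy risk"

print(csi_zone(0.62))
```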
Through the applied cognitive management framework, deviations in CSI can be continuously monitored and corrected via targeted interventions, ensuring that automation remains reflective, reversible, and cognitively regenerative rather than substitutive.
It also establishes a foundation for adaptive system design, where AI interfaces and support functions can be aligned to the user’s current cognitive maturity level, promoting sustainable co-evolution.

3. Results

3.1. Derived Cognitive Parameters and Functional Relationships

To examine the quantitative behavior of the CSI and validate the dynamics of the cognitive co-evolution model, a simplified simulation framework was developed. The model quantifies the balance between human cognitive engagement and AI dependency using normalized behavioral parameters and adaptive weighting. Although the data are illustrative, the simulation reflects realistic interaction scenarios observed in AI-supported educational and professional environments.
To empirically illustrate the operational behavior of the CSI, two representative user profiles were modeled:
Researcher, exemplifying high autonomy and reflective balance.
Student, exhibiting partial cognitive delegation and dependency.
The purpose of this simulation was not to describe actual participants but to demonstrate the interpretability and sensitivity of the index under realistic cognitive configurations.
Cognitive performance over time is represented by two interacting functions:
Cognitive growth G(t)—reinforcement of autonomy, reflection, and creativity.
Cognitive atrophy A(t)—decline of analytical depth caused by over-delegation and automation bias.
Both are parameterized through five behavioral indicators.
The resulting cognitive sustainability index is expressed as:
CSI = W_H (A_H + R_H + C_H) / (W_AI (D_AI + R_AI)),
where A_H, R_H, and C_H denote human-side parameters, D_AI and R_AI capture automation influence, and W_H and W_AI are weight coefficients reflecting contextual emphasis.
Notation of the parameters is shown in Table 3. All parameters were normalized to a 0–10 ordinal scale, enabling comparative assessment between users or time periods. For this study, steady-state (static) values were used to isolate the equilibrium characteristics of the index.
Each variable in the CSI can be evaluated on a 0–10 scale with behavioral anchors (Table 4), ensuring consistency across self-assessment and expert assessment.
Both self-assessment and external (instructor or supervisor) assessment forms are used. The self-assessment captures perceived cognitive engagement, while the external evaluation provides an objective counterpoint based on observed behaviors (e.g., originality, reasoning depth, frequency of independent analysis).
Each parameter set was evaluated through a two-stage computation:
1. Base computation. Raw CSI values were first obtained under a neutral configuration, W_H = W_AI = 1.0, providing an unweighted reference ratio between human and AI factors.
2. Contextual recalibration. The results were then adjusted using the context-specific weighting scheme:
For human-dominant conditions (Researcher), W_H was increased to 1.4.
For automation-dominant conditions (Student), W_AI was increased to 1.5.
This recalibration allows CSI to reflect not only the parameter magnitudes but also the cognitive environment’s sensitivity, emphasizing reflection in growth contexts and delegation pressure in dependency contexts. The context-adjusted results are reported in Table 5.
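A minimal sketch of this two-stage computation is given below. The 0–10 scores assigned to the two profiles are invented placeholders for demonstrating the recalibration step, not values drawn from Table 5.

```python
# Two-stage CSI computation: CSI = W_H*(A_H + R_H + C_H) / (W_AI*(D_AI + R_AI)).
# The profile scores are invented 0-10 placeholders, not reported study data.
def csi_static(a_h, r_h, c_h, d_ai, r_ai, w_h=1.0, w_ai=1.0):
    return (w_h * (a_h + r_h + c_h)) / (w_ai * (d_ai + r_ai))

researcher = dict(a_h=8, r_h=8, c_h=7, d_ai=4, r_ai=4)   # assumed scores
student = dict(a_h=4, r_h=3, c_h=4, d_ai=7, r_ai=8)      # assumed scores

# Stage 1: neutral configuration, W_H = W_AI = 1.0.
print(csi_static(**researcher), csi_static(**student))
# Stage 2: contextual recalibration (W_H = 1.4 for the human-dominant context,
# W_AI = 1.5 for the automation-dominant context, as described above).
print(csi_static(**researcher, w_h=1.4), csi_static(**student, w_ai=1.5))
```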
For each profile, CSI(t) is computed. The researcher profile assumes high metacognitive regulation (μ = 0.85) and low automation reliance (λ = 0.25), with rapid growth (k_g = 0.8, τ_g = 4) and slow atrophy (k_a = 0.3, τ_a = 6). The student profile assumes lower metacognitive regulation (μ = 0.45) and higher automation reliance (λ = 0.75), with slower growth (k_g = 0.5, τ_g = 3) and faster atrophy (k_a = 0.7, τ_a = 5). Time is modeled over t ∈ [0, 10] in normalized learning cycles. The shaded band (0.55 ≤ CSI ≤ 0.70) denotes the cognitive balance zone, where human and AI contributions remain cognitively sustainable.
Figure 10 illustrates the temporal evolution of the CSI for the Researcher and Student representative user profiles, modeled using differential growth and atrophy parameters. The CSI quantifies the dynamic balance between metacognitive engagement (μ) and automation reliance (λ) across time.
The time variable t is expressed in normalized units corresponding to progressive learning or operational cycles, rather than absolute time. Each increment represents a discrete stage in cognitive adaptation or automation exposure.
The Researcher curve (solid line) demonstrates a consistently high level of cognitive sustainability, remaining above the cognitive balance zone (CBZ) defined within the interval 0.55 ≤ CSI ≤ 0.70. This profile reflects a strong dominance of metacognitive regulation over automation reliance, ensuring sustained conceptual understanding and reflective control.
In contrast, the Student curve (dashed line) initially rises as learning progresses but soon declines below the CBZ, entering the cognitive atrophy zone (CSI < 0.55).
This indicates a transition from active synthesis to dependency on automation, where comprehension and reasoning are increasingly replaced by verification-oriented behavior.
The shaded area corresponding to the CBZ represents the optimal equilibrium between human and artificial cognition, where automation enhances performance without undermining understanding. The divergence of the two curves emphasizes how differences in metacognitive regulation and automation reliance can determine the long-term sustainability of cognitive performance.
Overall, the figure demonstrates that maintaining μ > λ is essential for preserving cognitive integrity within AI-assisted learning or professional environments, and that continuous monitoring of CSI can serve as an early diagnostic indicator of cognitive imbalance or dependency.
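The qualitative behavior in Figure 10 can be reproduced from the profile parameters listed above using the weighted index CSI(t) = μG(t) / (μG(t) + λA(t)); the sketch below assumes unit maxima for G and A, so exact values will differ from the published curves even though the divergence pattern is preserved.

```python
# Reproduction sketch of the Figure 10 profiles, assuming G_max = A_max = 1 and
# the weighted index CSI(t) = mu*G(t) / (mu*G(t) + lam*A(t)); intended to show
# the qualitative divergence, not to match the published curves exactly.
import numpy as np

def logistic(t, k, tau):
    return 1.0 / (1.0 + np.exp(-k * (t - tau)))

def csi_t(t, mu, lam, k_g, tau_g, k_a, tau_a):
    g, a = logistic(t, k_g, tau_g), logistic(t, k_a, tau_a)
    return (mu * g) / (mu * g + lam * a)

t = np.linspace(0, 10, 101)                      # normalized learning cycles
researcher = csi_t(t, mu=0.85, lam=0.25, k_g=0.8, tau_g=4, k_a=0.3, tau_a=6)
student = csi_t(t, mu=0.45, lam=0.75, k_g=0.5, tau_g=3, k_a=0.7, tau_a=5)

print("researcher cycles at/above CBZ lower bound:", np.mean(researcher >= 0.55))
print("student cycles below CBZ (atrophy zone):  ", np.mean(student < 0.55))
```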
Figure 11 shows the modeled evolution of CSI for three profiles: Student (high automation reliance, low metacognitive regulation), Researcher (high metacognitive regulation, low automation reliance), and AI-Assisted Expert (high metacognitive regulation with disciplined, audited AI use). Time is expressed in normalized cognitive adaptation cycles. The shaded band represents the CBZ, defined as 0.55 ≤ CSI ≤ 0.70, where human and AI contributions are synergistic and cognitively sustainable.
The Student profile (dashed line) rapidly falls below the CBZ and stabilizes at a low CSI, indicating dependency and conceptual atrophy. The Researcher profile (solid line) maintains a high CSI, reflecting durable autonomy and reflective control, but drifts above the CBZ, meaning performance is sustainable but depends primarily on human cognition rather than balanced co-cognition. The AI-Assisted Expert profile (dash-dotted line) initially accelerates toward very high CSI (rapid learning amplification through AI), then gradually stabilizes inside the CBZ. This indicates an equilibrium in which AI augmentation is strong but remains continuously audited and cognitively internalized by the human actor. This behavior represents the target operational regime for cognitively resilient human–AI teaming.
Figure 12 presents a contour map of CSI as a function of metacognitive regulation μ and automation reliance λ, evaluated at a representative adaptation stage t = 6. Warmer regions correspond to higher CSI (more sustainable cognition), and cooler regions correspond to lower CSI (greater risk of cognitive atrophy). The dashed contour lines indicate the CBZ, here marked at CSI = 0.70 (upper boundary of desirable balance) and CSI = 0.55 (lower boundary). Points above these lines reflect cognitively resilient regimes in which reflective control dominates over blind automation. Regions below the lower CBZ contour correspond to high automation reliance with insufficient metacognitive regulation, indicating unstable cognitive dependence on AI systems.
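The parameter-space view of Figure 12 can be approximated by evaluating the same weighted index on a (μ, λ) grid at a fixed stage t = 6; the logistic rates held constant below are assumed mid-range values, since the exact settings used for the published contour map are not reported here.

```python
# Grid evaluation behind a Figure 12-style contour: CSI(mu, lam) at t = 6 with
# fixed, assumed logistic rates (k_g = 0.65, tau_g = 4, k_a = 0.5, tau_a = 5).
import numpy as np

def logistic(t, k, tau):
    return 1.0 / (1.0 + np.exp(-k * (t - tau)))

g6, a6 = logistic(6, 0.65, 4), logistic(6, 0.5, 5)
mu_grid, lam_grid = np.meshgrid(np.linspace(0.05, 1.0, 50), np.linspace(0.05, 1.0, 50))
csi_grid = (mu_grid * g6) / (mu_grid * g6 + lam_grid * a6)

in_cbz = (csi_grid >= 0.55) & (csi_grid <= 0.70)
print("share of the (mu, lam) plane inside the cognitive balance zone:",
      round(in_cbz.mean(), 3))
```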

3.2. Reference Evaluation Tables for Cognitive Sustainability Assessment

To operationalize the CSI, a detailed evaluation toolkit was developed. It defines five foundational parameters which together characterize the user’s cognitive engagement within AI-supported activities: Human Autonomy (H), Reflection (R), Creativity (C), Delegation (D), and Reliance (L). Each parameter is rated on a 0–10 scale, enabling both self- and expert-based evaluation.
Table 6, Table 7, Table 8, Table 9 and Table 10 provide detailed descriptors for each dimension.
The assessment of the five fundamental cognitive dimensions provides a nuanced understanding of user interaction patterns within AI-supported environments.
Each parameter contributes distinct insights into the mechanisms underlying CSI and reveals characteristic profiles associated with different phases of cognitive co-evolution.
Human autonomy (H) exhibited a wide variance across users, indicating that autonomy is the most sensitive indicator of cognitive balance. Participants or test groups maintaining autonomy scores between 5 and 7 demonstrated stable equilibrium between independent reasoning and algorithmic assistance, whereas those below level 3 showed a clear tendency toward cognitive delegation and automation dependence. These findings confirm that sustained cognitive autonomy is a primary prerequisite for long-term cognitive sustainability.
Reflection (R) emerged as a central mediator of metacognitive regulation. Moderate reflection scores (4–6) corresponded with consistent verification and interpretation of AI-generated reasoning, while higher levels (7–8) indicated the formation of meta-reflective cycles in which users actively anticipated AI limitations. Low reflection values were systematically associated with mechanical acceptance of outputs and correlated negatively with both autonomy and creativity.
Creativity (C) revealed strong nonlinear dependence on autonomy and reflection. Participants with balanced cognitive engagement (CSI ≈ 2.0) demonstrated generative creativity, blending AI suggestions with human conceptual input. When reflection or autonomy dropped below threshold levels, creativity rapidly decreased, producing purely reproductive outputs. Conversely, high reflection combined with active autonomy (scores above 7) corresponded to transformative and original ideation, aligning with the Cognitive Growth Zone.
Delegation (D) showed an inverse relationship to autonomy, functioning as a compensatory variable within the cognitive balance model. Balanced delegation values (5–6) were associated with effective task-sharing between human and AI components, while excessive delegation (>7) indicated over-reliance and reduced epistemic responsibility. Controlled delegation therefore acts as a stabilizing factor that prevents both underutilization and cognitive overload.
Reliance (L), representing the user’s trust calibration toward AI, demonstrated the narrowest optimal range. Scores around 5–6 corresponded to informed reliance—confidence grounded in understanding of algorithmic logic and contextual validation. Low reliance values (<3) reflected systematic distrust leading to cognitive inefficiency, whereas high values (>8) signaled overconfidence in automation. This confirms that trust must be dynamically modulated rather than maximized to preserve reflective engagement.
Overall, the distribution of parameter values supports the theoretical expectation that cognitive sustainability emerges only within a balanced configuration of autonomy, reflection, and creativity, modulated by adaptive levels of delegation and reliance.
This equilibrium represents the CBZ, which functions as the empirical foundation for computing the composite CSI.
To support the practical application of the CSI and its interpretation through the CIS, a set of standardized evaluation tables was developed. These tables provide reference criteria for both self-assessment (subjective user reflection) and external expert assessment (educator, supervisor, or analyst evaluation). They enable consistent qualitative interpretation of quantitative CSI values and can be used in training, educational analytics, or professional development programs to monitor cognitive maturity in human–AI collaboration. The scales follow a five-level structure (0–10), where each range corresponds to a distinct cognitive interaction zone and includes characteristic behavioral indicators.
Table 11 presents a self-assessment reference, while Table 12 provides guidelines for external evaluation.
The two scales can be used together as complementary instruments within the applied cognitive management framework.
In educational or professional settings, individuals perform a self-assessment using Table 11, followed by an expert or supervisor evaluation using Table 12. In this way, the reference tables operationalize the theoretical models introduced in previous sections, offering a transparent and standardized approach to evaluating cognitive balance, autonomy, and reflective engagement in AI-assisted environments.
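As a small illustration of how the reference scales could be applied in a digital workflow, the sketch below maps a 0–10 score to the cognitive zones of Tables 11 and 12 and contrasts self- and expert assessments. The function names and the two-point agreement tolerance are illustrative assumptions rather than part of the published toolkit.

```python
def cognitive_zone(score: float) -> str:
    """Map a 0-10 assessment score to the cognitive zones of Tables 11 and 12."""
    if not 0 <= score <= 10:
        raise ValueError("score must lie on the 0-10 cognitive interaction scale")
    bands = [(2, "Cognitive Atrophy"), (4, "Assisted Dependency"),
             (6, "Cognitive Balance"), (8, "Cognitive Growth"),
             (10, "Cognitive Synergy")]
    return next(zone for upper, zone in bands if score <= upper)

def compare_assessments(self_score: float, expert_score: float, tolerance: float = 2.0) -> str:
    """Contrast self- and expert assessment; the two-point tolerance is an assumed convention."""
    gap = self_score - expert_score
    if abs(gap) <= tolerance:
        return f"Consistent assessment: {cognitive_zone(expert_score)}"
    direction = "overestimates" if gap > 0 else "underestimates"
    return (f"User {direction} own cognitive zone "
            f"(self: {cognitive_zone(self_score)}, expert: {cognitive_zone(expert_score)})")

print(compare_assessments(self_score=7, expert_score=4))
```

A large self-versus-expert gap, as in the example call above, would signal that the reflective self-report should be revisited together with the supervisor before assigning a zone.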

3.3. Case Study: Cognitive Scale of AI Utilization in Software Development

To validate the applicability of the cognitive co-evolution model and the CSI beyond educational contexts, the model was extended to the software engineering professional domain. This sector provides a clear and data-rich example of human–AI co-adaptation: programmers increasingly use generative and assistive AI tools (e.g., code completion, debugging agents, documentation assistants), which directly affect autonomy, reflection, and creativity, the same cognitive dimensions used in the CSI model.
The data were obtained through a combined analytical and observational approach conducted during the first half of 2025.
Three complementary data sources were used:
  • Expert evaluation. Structured interviews were conducted with ten senior developers and technical leads from European software companies. Each participant assessed the extent of AI use within their teams using five-point Likert scales corresponding to the CSI dimensions (Autonomy, Reflection, Creativity, Delegation, and Reliance).
  • Behavioral observation. Public developer repositories were analyzed qualitatively to identify behavioral markers of cognitive engagement—frequency of independent problem formulation, use of AI-generated code without modification, and documentation style.
  • Self-reported surveys. Developers at different career stages completed short questionnaires estimating the proportion of their daily tasks assisted by AI (measured in percentages) and their perceived dependence on automated tools.
All responses were normalized to a 0–10 cognitive interaction scale consistent with the CSI model, where higher values represent higher levels of cognitive maturity and balanced collaboration with AI. Average scores were aggregated by professional level (Junior, Middle, Senior, and R&D/AI Engineers) and cross-validated through expert consensus.
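The exact rescaling rules used in the study are not reproduced here; the fragment below is a hedged sketch of how heterogeneous inputs (five-point Likert ratings and self-reported shares of AI-assisted tasks) could be mapped onto the common 0–10 scale and aggregated by professional level. Both mapping functions and the sample responses are illustrative assumptions; in particular, treating a roughly 50% AI-assisted share as the most balanced pattern is a modeling choice, not a finding of the study.

```python
from statistics import mean

def likert_to_scale(likert: int) -> float:
    """Rescale a 1-5 Likert rating to the 0-10 scale (assumed linear mapping)."""
    return (likert - 1) * 10 / 4

def percent_to_scale(ai_share_percent: float) -> float:
    """Convert the self-reported share of AI-assisted tasks to 0-10.
    Assumption: balanced use (~50%) scores highest; both extremes score lower."""
    return 10.0 * (1.0 - abs(ai_share_percent - 50.0) / 50.0)

# Hypothetical responses grouped by professional level (values are illustrative only).
responses = {
    "Junior": [{"likert": 2, "ai_share": 85}, {"likert": 3, "ai_share": 75}],
    "Senior": [{"likert": 4, "ai_share": 45}, {"likert": 5, "ai_share": 55}],
}

for level, answers in responses.items():
    scores = [mean([likert_to_scale(a["likert"]), percent_to_scale(a["ai_share"])]) for a in answers]
    print(f"{level}: mean cognitive interaction score = {mean(scores):.1f} / 10")
```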
The aggregated data (Table 13) indicate that the average cognitive utilization level of AI in software development in 2025 lies between 5 and 7, corresponding to collaborative interaction rather than full cognitive synergy. Junior developers tend to operate within the assisted range, often using AI as a direct problem-solving substitute. Senior and R&D engineers demonstrate higher integration and reflective control, aligning more closely with the CGZ.
The results confirm that programming professions exhibit the entire cognitive spectrum predicted by the model, from mechanistic use to reflective synergy.
By mapping the behavioral data from Table 13 onto the CSI formula, approximate index values can be inferred:
Junior ≈ 1.0–1.5
Middle ≈ 2.0–2.5
Senior ≈ 3.0–4.0
R&D ≈ 4.5–5.0
The software development domain thus serves as a useful empirical illustration of the CSI's interpretive range and demonstrates the potential for industry-level cognitive monitoring.
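The formal CSI aggregation is defined in the mathematical framework section of the paper and is not restated here; purely to illustrate how the five 0–10 ratings and the weighting coefficients from Table 5 might be combined into a single index, the sketch below assumes a simple ratio-type aggregation and should not be read as the paper's exact formula.

```python
def csi(h, r, c, d, l, w_h=1.0, w_ai=1.0):
    """Illustrative composite index (assumed ratio-type aggregation, not the
    paper's formula): reflective resources H, R, C weighted by w_h over
    delegation D and reliance L weighted by w_ai, all rated on the 0-10
    scales of Tables 6-10."""
    return (w_h * (h + r + c)) / (w_ai * (d + l))

# Parameter values and weights taken from Table 5.
profiles = {
    "Researcher (Growth Phase)": dict(h=8, r=9, c=8, d=5, l=3, w_h=1.4, w_ai=1.0),
    "Student (Atrophy Phase)":   dict(h=5, r=4, c=5, d=7, l=8, w_h=1.0, w_ai=1.5),
}

for name, params in profiles.items():
    print(f"{name}: illustrative CSI = {csi(**params):.2f}")
```

Under this assumed aggregation the researcher profile lands well above the student profile, reproducing the growth-versus-atrophy contrast of Table 5, although the absolute values depend entirely on the chosen functional form.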
Figure 13 visualizes the cognitive positioning of different roles in an AI-assisted software development team. The background heatmap shows the CSI as a function of metacognitive regulation μ (x-axis) and automation reliance λ (y-axis), evaluated at a representative adaptation stage t = 6 . Higher CSI (lighter regions) indicates more sustainable cognition—reflective control, internalized understanding, and reversible use of automation. Lower CSI (darker regions) indicates elevated cognitive atrophy risk.
Dashed contour lines mark the CBZ, here defined between CSI = 0.55 and CSI = 0.70. Points inside or near this band represent roles where AI support is strong but remains cognitively accountable and auditable.
Each white marker corresponds to a software development role or task context, labeled with a numeric ID referenced below:
  1. Junior Developer (AI-assisted coding)
  2. Senior Developer (architectural reasoning)
  3. Code Reviewer (human-led QA + AI linting)
  4. DevOps Engineer (pipeline automation)
  5. QA Tester (AI-generated test cases)
  6. Prompt Engineer/Integrator
  7. AI Tool Maintainer/Governance Lead
Roles 3 and 7 cluster in the high-CSI region: they combine high metacognitive oversight (strong review, governance, or architectural reasoning) with relatively low blind reliance on automation. Roles 1 and 5 sit closer to or just below the CBZ, reflecting heavy use of AI-generated output with limited epistemic verification. These locations reveal which tasks are cognitively self-sustaining and which tasks are drifting toward dependency.
While the current case study provides an initial empirical illustration of CSI applicability in software development, its interpretive scope is constrained by the small and non-random sample, absence of control groups, and descriptive rather than inferential analytical design. Therefore, the results should be viewed as indicative rather than conclusive. A rigorous validation program will require large-scale data collection across educational and industrial settings, longitudinal tracking of cognitive engagement patterns, and statistical testing of CSI’s discriminant, predictive, and convergent validity. Such efforts will support the transition from a conceptual demonstration to evidence-based assessment of cognitive sustainability in AI-assisted work.

4. Discussion

4.1. Interpretation of the Cognitive Co-Evolution Model

The results confirm that cognition under AI influence follows a nonlinear and co-evolutionary trajectory, rather than a unidirectional progression of enhancement. The cognitive co-evolution model captures the duality between cognitive growth and cognitive atrophy as two interacting processes that define the sustainability of human–AI interaction.
The key insight is that these processes are not antagonistic but mutually conditioning: each act of delegation to AI potentially reduces analytical effort, while each reflective engagement strengthens metacognitive resilience. The cognitive sustainability index operationalizes this equilibrium numerically, enabling real-time assessment of cognitive engagement and dependency in diverse contexts.
Cognitive balance should not be interpreted as a fixed or stable characteristic of users but as a dynamic and context-dependent state. It fluctuates according to task complexity, environmental demands, and the degree of reflective regulation.
When users interact with AI under time pressure or cognitive overload, reliance on automation increases, shifting the system toward atrophy. Conversely, when interaction includes questioning, validation, and adaptation, the equilibrium is restored. This dynamic aligns with the principles of metacognitive regulation and adaptive automation, where optimal performance is achieved when cognitive load is neither excessive nor fully externalized.
Maintaining such equilibrium requires continuous recalibration, both through user awareness and system design. Hence, cognitive balance functions as a form of cognitive homeostasis, analogous to physiological balance mechanisms in biological systems. Systems designed with this principle in mind should include reflective prompts, explainable outputs, and intentional slow-down mechanisms to counteract automation drift.

4.2. Cognitive Stratification and Cross-Domain Applicability

The analysis in Section 3.3 illustrated cognitive stratification within software development, showing how professional groups occupy distinct CSI zones. This approach can be generalized across multiple domains, where similar scales may be constructed to map human–AI interaction maturity:
  • In engineering, generative design tools may accelerate innovation yet diminish tacit diagnostic skills.
  • In finance, algorithmic trading enhances efficiency but risks suppressing analytical judgment.
  • In education, automated evaluation improves throughput but may weaken pedagogical creativity.
  • In healthcare, diagnostic AI enhances precision but may reduce clinicians’ holistic reasoning.
In each field, the CSI parameters (A_H, R_H, C_H, D_AI, R_AI) can be adapted to construct a domain-specific cognitive scale, allowing quantitative benchmarking of cognitive sustainability.
For executives and policymakers, such cross-domain scales provide a strategic management instrument. They reveal not only short-term efficiency gains but also long-term cognitive risks, such as the erosion of professional competence resulting from over-automation. If ignored, organizations may accumulate cognitive debt, where apparent productivity masks declining analytical capacity. Monitoring CSI trajectories enables CEOs to anticipate these effects, balancing immediate digital advantages with future workforce resilience. Thus, cognitive stratification becomes a diagnostic layer for digital transformation, allowing leaders to design interventions that sustain both innovation and expertise.
The qualitative structure of human–AI collaboration described by the cognitive interaction spectrum provides an interpretive complement to the quantitative results of the cognitive sustainability index. In managerial and organizational contexts, the CIS can serve as a diagnostic model for evaluating the prevailing mode of cognitive interaction within a workforce or educational system.
When AI functions primarily as a Tool, organizational gains are limited to task automation, with increased risk of cognitive atrophy. Transitioning toward the Facilitator and Mentor stages reflects rising reflective engagement and shared reasoning, which correlate with the cognitive balance and growth zones identified through CSI analysis. The Partner stage represents the strategic ideal, where human and machine cognition converge in co-creative processes, leading to innovation and long-term intellectual resilience.
For CEOs and institutional leaders, mapping departments or teams along this spectrum allows targeted implementation of applied cognitive management interventions to progressively elevate their organization’s cognitive maturity level and ensure sustainable co-evolution with AI technologies.

4.3. Ethical and System-Design Considerations

From an ethical perspective, the cognitive co-evolution model reframes the debate on responsible AI from compliance to cognitive sustainability. Whereas most governance frameworks emphasize transparency, fairness, or data privacy, cognitive sustainability concerns the preservation of human understanding itself. AI systems should therefore be designed to stimulate cognition rather than replace it.
This can be achieved through several design principles:
Explainability with uncertainty visualization. Systems that expose reasoning pathways and confidence intervals invite users to think critically rather than accept outputs blindly.
Interactive reasoning interfaces. Allowing users to interrogate intermediate steps (“Why did you choose this answer?”) encourages reflective dialog.
Adaptive cognitive feedback. AI could estimate the user’s CSI trend (based on interaction behavior) and suggest cognitive “workouts,” such as reasoning challenges or independent synthesis tasks.
Such design directions transform human-in-the-loop from a technical safeguard into a cognitive ecosystem principle: the machine sustains, rather than supplants, human agency.
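As a concrete sketch of the adaptive cognitive feedback principle, the hypothetical monitor below keeps a rolling window of per-session CSI estimates and proposes a reasoning "workout" when the latest estimate falls below the CBZ or the trend turns negative. The class name, thresholds, and recommendation wording are illustrative assumptions, not a specification of any existing system.

```python
from collections import deque

class CognitiveFeedbackMonitor:
    """Hypothetical sketch of adaptive cognitive feedback: track a rolling window
    of CSI estimates inferred from interaction behavior and suggest a reflective
    'workout' when the trend drifts toward dependency."""

    def __init__(self, window: int = 10, lower_cbz: float = 0.55):
        self.history = deque(maxlen=window)
        self.lower_cbz = lower_cbz

    def record(self, csi_estimate: float) -> None:
        self.history.append(csi_estimate)

    def recommendation(self) -> str:
        if len(self.history) < 2:
            return "Insufficient data for a trend estimate."
        trend = self.history[-1] - self.history[0]   # crude slope over the window
        latest = self.history[-1]
        if latest < self.lower_cbz or trend < -0.05:
            return ("Suggest a reasoning challenge: solve the next task manually, "
                    "then compare with the AI output.")
        return "Cognitive engagement within the balance zone; no intervention needed."

monitor = CognitiveFeedbackMonitor()
for estimate in (0.72, 0.68, 0.61, 0.54):   # hypothetical per-session CSI estimates
    monitor.record(estimate)
print(monitor.recommendation())
```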
To preserve cognitive vitality, ACM incorporates preventive and corrective measures that target the root causes of cognitive atrophy. Table 14 summarizes core risks and countermeasures. These practices act as cognitive hygiene rituals; their implementation in educational and professional routines stabilizes CSI values and delays entry into the inversion zone.

4.4. Challenges, Limitations, and Future Research Directions

Although the proposed cognitive co-evolution model and cognitive sustainability index establish a unified conceptual and quantitative framework for analyzing human–AI interaction, the study inevitably faces several theoretical, methodological, and practical challenges. These limitations do not undermine its scientific contribution but instead define a trajectory for future research aimed at refining, validating, and operationalizing the model.
At the theoretical level, the model simplifies complex and multidimensional aspects of human cognition, reflection, and creativity into parameterized constructs. While this abstraction enables quantification and comparison, it inevitably omits emergent socio-cognitive phenomena such as collective learning, contextual intuition, and ethical deliberation. The assumption that the balance between cognitive growth and atrophy can be expressed through deterministic relationships serves as a useful approximation but cannot yet capture the full richness of adaptive reasoning in real-world human–AI collaboration. Overcoming these conceptual boundaries will require interdisciplinary synthesis that includes insights from cognitive neuroscience, psychology, and systems theory.
From a methodological perspective, the study relies primarily on simulated and expert-derived data rather than extensive empirical datasets. The parameter values and weighting coefficients were defined heuristically, reflecting plausible behavioral tendencies rather than statistically validated relationships. Similarly, the assumption that human and AI contributions can be represented on a common normalized scale introduces an interpretive simplification, as human reasoning and algorithmic processing differ qualitatively in transparency, adaptability, and accountability. Future work should therefore involve longitudinal empirical studies integrating psychometric evaluation, behavioral observation, and digital trace analysis to calibrate the CSI model with higher precision. Dynamic weighting mechanisms should also be developed so that coefficients evolve adaptively in response to user behavior, task context, and domain-specific constraints. Advanced statistical and machine learning methods may further enable automatic inference of cognitive states from large-scale interaction data, allowing the CSI to transition from conceptual construct to applied analytics tool.
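One way such dynamic weighting could be prototyped, stated as an assumption rather than a validated procedure, is an incremental update in which the AI-related coefficient grows when users frequently accept erroneous AI output and the human-related coefficient grows when reflective actions dominate the interaction log:

```python
def update_weights(w_h, w_ai, observed_error_rate, reflection_share, eta=0.1):
    """Hypothetical adaptive update of the CSI weighting coefficients.
    Assumption (not from the paper): accepting erroneous AI output inflates the
    penalty weight w_ai; a high share of reflective actions inflates the human
    weight w_h. eta is a learning-rate constant."""
    w_ai_new = w_ai * (1 + eta * observed_error_rate)
    w_h_new = w_h * (1 + eta * (reflection_share - 0.5))   # 50% reflection is neutral
    return round(w_h_new, 3), round(w_ai_new, 3)

w_h, w_ai = 1.0, 1.0
# Hypothetical weekly statistics: (share of accepted-but-wrong AI outputs, share of reflective actions)
for error_rate, reflection in [(0.20, 0.40), (0.15, 0.55), (0.05, 0.70)]:
    w_h, w_ai = update_weights(w_h, w_ai, error_rate, reflection)
print(f"Adapted weights after three cycles: w_h = {w_h}, w_ai = {w_ai}")
```

Whether such behavioral signals can be estimated reliably from interaction logs is precisely the kind of empirical question the proposed validation program would need to answer.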
The implementation of applied cognitive management in educational and professional environments also poses significant organizational and ethical challenges. Reliable cognitive monitoring requires the collection of behavioral and performance data, which raises questions of privacy, trust, and user acceptance. If cognitive indicators are perceived as instruments of control or surveillance, they may provoke resistance rather than engagement. Successful adoption of ACM frameworks therefore depends on transparent governance, voluntary participation, and a shift in institutional culture—where cognitive sustainability is recognized as a strategic investment in workforce intelligence rather than a managerial constraint. From the managerial perspective, a central challenge is balancing short-term automation gains with long-term preservation of professional competence. Without active intervention, organizations risk accumulating cognitive debt, where immediate efficiency conceals a gradual erosion of analytical capacity. To mitigate this, leaders must implement cognitive monitoring systems that identify early signs of over-delegation and enable targeted interventions such as reflective training, manual practice cycles, or adaptive task rebalancing.
The pathway forward is defined by several complementary directions for research and practice. Future studies should focus on empirical validation through large-scale cross-professional assessments that identify benchmark CSI values and measure how cognitive balance evolves across industries. Efforts should also be directed toward dynamic cognitive modeling, where user interaction data, reflection time, and corrective behavior are used to update CSI metrics in real time. The integration of CSI with digital twin architectures offers particularly promising potential, allowing simulation of cognitive trajectories and predictive modeling of human–AI co-adaptation. At the systemic level, cross-sector application of the model can lead to the development of an atlas of cognitive sustainability, mapping domains such as aviation, healthcare, education, and engineering according to their maturity of cognitive co-evolution. In parallel, research should explore ethical and governance frameworks for cognitive data usage, including guidelines for cognitive integrity auditing and responsible AI literacy. The design of interactive cognitive feedback dashboards could turn monitoring from a passive diagnostic function into an active learning tool, enabling users and organizations to self-regulate their cognitive trajectories.
The long-term vision of this research is to establish a quantitative methodology of cognitive sustainability, linking individual metacognition, organizational learning, and AI system design into a unified adaptive ecosystem. The current study represents a step toward this goal, providing the conceptual foundations and mathematical formalism necessary for further empirical expansion. As the model evolves, its objective is not merely to describe cognitive change but to ensure that AI-driven transformation remains intellectually regenerative rather than cognitively depleting, reinforcing human expertise, creativity, and ethical responsibility in the era of intelligent automation.
Future work may integrate insights from management science, where knowledge creation and cognitive sustainability are understood as socially distributed processes shaped by collective interaction. Recent studies highlight that AI systems increasingly mediate collaboration and knowledge co-construction within organizations [45]. This suggests that cognitive sustainability should be examined not only at the individual level but also across teams and knowledge networks.
Integrating the notion of learning loops, both single-loop and double-loop learning [46], could further clarify how reflection and feedback support the maintenance of cognitive balance over time. Future research may explore how these learning mechanisms interact with ACM and CSI across individual and organizational contexts.
Incorporating insights from cognitive bias research could further strengthen the discussion, as biases significantly influence how users rely on AI systems and how cognitive regulation mechanisms function [47]. Future research may examine how specific biases interact with ACM and CSI and how AI-supported bias-mitigation strategies could help maintain cognitive balance.

5. Conclusions

The study developed a unified conceptual and methodological framework for understanding, quantifying, and managing the evolving relationship between human cognition and artificial intelligence. Through the integration of the cognitive co-evolution model, the cognitive sustainability index, and the applied cognitive management framework, it establishes a holistic foundation for analyzing the dynamic interplay of cognitive growth, atrophy, and balance within AI-supported environments.
The cognitive co-evolution model conceptualizes AI–human interaction as a dialectical process in which reflective engagement stimulates metacognitive development, while excessive automation fosters cognitive atrophy. This dynamic equilibrium defines the cognitive balance zone as a state of sustainable co-adaptation where automation enhances rather than replaces human reasoning. The cognitive sustainability index translates this conceptual balance into a measurable construct, integrating behavioral and contextual parameters such as autonomy, reflection, creativity, delegation, and reliance.
Simulation experiments and professional case studies, including the example of software developers, demonstrated that the CSI effectively distinguishes between cognitive zones ranging from mechanical dependency to reflective synergy and can be used as a diagnostic tool for assessing cognitive sustainability in diverse domains.
Building on this analytical foundation, the paper introduced applied cognitive management as the operational mechanism that transforms measurement into action. ACM provides a closed-loop architecture linking continuous cognitive monitoring with adaptive interventions at four interdependent levels: individual, educational, professional, and institutional. This framework allows organizations, educators, and policymakers to maintain cognitive equilibrium through targeted reflection practices, feedback mechanisms, and adaptive task design. By embedding cognitive monitoring into decision-making and governance processes, ACM establishes a path toward cognitive sustainability governance, in which the balance between automation efficiency and human autonomy becomes a measurable, controllable, and ethically guided objective.
The results underscore the strategic importance of maintaining CSI above a sustainability threshold to preserve analytical competence, creativity, and epistemic accountability. For executives and policymakers, the framework provides an early-warning system for cognitive debt. For educators and researchers, it offers a foundation for designing curricula and learning environments that cultivate reflective resilience in the age of AI.
Longitudinal studies and digital twin simulations could further refine the model into a predictive instrument for cognitive risk management and AI ethics auditing. Ultimately, this research contributes to the emergence of a quantitative science of cognitive sustainability, where human expertise, organizational intelligence, and artificial cognition evolve together in reflective and regenerative equilibrium.
The transition from cognitive growth and cognitive atrophy toward cognitive balance represents not merely an intellectual construct but a strategic imperative for the digital era. The proposed framework unites analytical rigor, management design, and ethical foresight to offer both a theoretical lens and a practical roadmap that ensure AI-driven progress remains cognitively sustainable, socially responsible, and profoundly human-centered.

Funding

This research received no external funding.

Institutional Review Board Statement

The consultations included in this manuscript did not collect any personal, medical, or identifiable data and did not include any psychological or physiological experimentation. All opinions were provided voluntarily by qualified professionals in their institutional roles, without recording personal information or responses attributable to individuals. According to the applicable Latvian and EU regulations (including Regulation (EU) 2016/679 on data protection), such anonymous expert input does not qualify as human-subject research and therefore does not require approval or exemption from an Ethics Committee or Institutional Review Board.

Informed Consent Statement

Informed consent for participation is not required as per Regulation (EU) 2016/679 on data protection.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Stadler, M.; Bannert, M.; Sailer, M. Cognitive Ease at a Cost: LLMs Reduce Mental Effort but Compromise Depth in Student Scientific Inquiry. Comput. Hum. Behav. 2024, 160, 108386. [Google Scholar] [CrossRef]
  2. Qazi, I.A.; Ali, A.; Khawaja, A.U.; Akhtar, M.J.; Sheikh, A.Z.; Alizai, M.H. Automation Bias in Large Language Model Assisted Diagnostic Reasoning among AI-Trained Physicians. medRxiv 2025. [Google Scholar] [CrossRef]
  3. Fan, G.; Liu, D.; Zhang, R.; Pan, L. The Impact of AI-Assisted Pair Programming on Student Motivation, Programming Anxiety, Collaborative Learning, and Programming Performance: A Comparative Study. Int. J. STEM Educ. 2025, 12, 16. [Google Scholar] [CrossRef]
  4. Clark, A.; Chalmers, D.J. The Extended Mind. Analysis 1998, 58, 7–19. [Google Scholar] [CrossRef]
  5. Smart, P.R. Toward a Mechanistic Account of Extended Cognition. Philos. Psychol. 2022, 35, 1165–1189. [Google Scholar] [CrossRef]
  6. Wachowski, W.M. Commentary: Distributed Cognition and Distributed Morality—Agency, Artifacts and Systems. Front. Psychol. 2018, 9, 490. [Google Scholar] [CrossRef]
  7. Risko, E.F.; Gilbert, S.J. Cognitive Offloading. Trends Cogn. Sci. 2016, 20, 676–688. [Google Scholar] [CrossRef]
  8. Gilbert, S.J. Cognitive Offloading Is Value-Based Decision Making: Modelling Cognitive Effort and the Expected Value of Memory. Cognition 2024, 247, 105783. [Google Scholar] [CrossRef]
  9. Armitage, K.L.; Gilbert, S.J. The Nature and Development of Cognitive Offloading in Children. Child Dev. Perspect. 2025, 19, 108–115. [Google Scholar] [CrossRef]
  10. Grinschgl, S.; Neubauer, A.C. Supporting Cognition with Modern Technology: Distributed Cognition Today and in an AI-Enhanced Future. Front. Artif. Intell. 2022, 5, 908261. [Google Scholar] [CrossRef]
  11. Gerlich, M. AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies 2025, 15, 6. [Google Scholar] [CrossRef]
  12. Jose, B.; Cherian, J.; Verghis, A.M.; Varghise, S.M.; Joseph, S. The Cognitive Paradox of AI in Education: Between Enhancement and Erosion. Front. Psychol. 2025, 16, 1550621. [Google Scholar] [CrossRef] [PubMed]
  13. Tankelevitch, L.; Kewenig, V.; Simkute, A.; Scott, A.E.; Sarkar, A.; Sellen, A.; Rintel, S. The Metacognitive Demands and Opportunities of Generative AI. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (CHI ’24), Honolulu, HI, USA, 11–16 May 2024; ACM: New York, NY, USA, 2024. Article 680. [Google Scholar] [CrossRef]
  14. Topol, E.J. Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again; Basic Books: New York, NY, USA, 2019. [Google Scholar]
  15. Fisher, A. Critical Thinking: An Introduction; Cambridge University Press: Cambridge, UK, 2011. [Google Scholar]
  16. Zhai, C.; Wibowo, S.; Li, L.D. The Effects of Over-Reliance on AI Dialogue Systems on Students’ Cognitive Abilities: A Systematic Review. Smart Learn. Environ. 2024, 11, 28. [Google Scholar] [CrossRef]
  17. Macnamara, B.N.; Berber, I.; Çavuşoğlu, M.C.; Krupinski, E.A.; Nallapareddy, N.; Nelson, N.E.; Wilson-Delfosse, A.L.; Ray, S. Does Using Artificial Intelligence Assistance Accelerate Skill Decay and Hinder Skill Development Without Performers’ Awareness? Cogn. Res. Princ. Implic. 2024, 9, 46. [Google Scholar] [CrossRef]
  18. Vieriu, A.M.; Petrea, G. The Impact of Artificial Intelligence (AI) on Students’ Academic Development. Educ. Sci. 2025, 15, 343. [Google Scholar] [CrossRef]
  19. Ali, O.; Murray, P.A.; Momin, M.; Dwivedi, Y.K.; Malik, T. The Effects of Artificial Intelligence Applications in Educational Settings: Challenges and Strategies. Technol. Forecast. Soc. Chang. 2024, 199, 123076. [Google Scholar] [CrossRef]
  20. Goddard, K.; Roudsari, A.; Wyatt, J.C. Automation Bias: A Systematic Review of Frequency, Effect Mediators, and Mitigators. J. Am. Med. Inform. Assoc. 2012, 19, 121–127. [Google Scholar] [CrossRef]
  21. Alon-Barkat, S.; Busuioc, M. Human–AI Interactions in Public Sector Decision Making: “Automation Bias” and “Selective Adherence” to Algorithmic Advice. J. Public Adm. Res. Theory 2023, 33, 153–169. [Google Scholar] [CrossRef]
  22. Romeo, G.; Conti, D. Exploring Automation Bias in Human–AI Collaboration: A Review and Implications for Explainable AI. AI Soc. 2025, in press. [Google Scholar]
  23. Jones-Jang, S.M.; Park, Y.J. How Do People React to AI Failure? Automation Bias, Algorithmic Aversion, and Perceived Controllability. J. Comput. Mediat. Commun. 2023, 28, zmac029. [Google Scholar] [CrossRef]
  24. Clark, R.C.; Nguyen, F.; Sweller, J. Efficiency in Learning: Evidence-Based Guidelines to Manage Cognitive Load; Wiley: Hoboken, NJ, USA, 2016. [Google Scholar]
  25. Gocen, A.; Aydemir, F. Artificial Intelligence in Education and Schools. Res. Educ. Media 2020, 12, 13–21. [Google Scholar] [CrossRef]
  26. Danaher, J. Toward an Ethics of AI Assistants: An Initial Framework. Philos. Technol. 2018, 31, 629–653. [Google Scholar] [CrossRef]
  27. Ahmad, S.F.; Han, H.; Alam, M.M.; Rehmat, M.K.; Arraño-Muñoz, M.; Ariza-Montes, A. Impact of Artificial Intelligence on Human Loss in Decision Making, Laziness, and Safety in Education. Humanit. Soc. Sci. Commun. 2023, 10, 311. [Google Scholar] [CrossRef] [PubMed]
  28. Anderson, J.R. Rules of the Mind, 1st ed.; Psychology Press: Hillsdale, NJ, USA, 1993. [Google Scholar]
  29. Heathcote, A.; Brown, S.; Mewhort, D.J.K. The power law repealed: The case for an exponential law of practice. Psychon. Bull. Rev. 2000, 7, 185–207. [Google Scholar] [CrossRef] [PubMed]
  30. Shenhav, A.; Musslick, S.; Lieder, F.; Kool, W.; Griffiths, T.L.; Cohen, J.D.; Botvinick, M.M. Toward a Rational and Mechanistic Account of Mental Effort. Annu. Rev. Neurosci. 2017, 40, 99–124. [Google Scholar] [CrossRef]
  31. Solís-Martín, D.; Galán-Páez, J.; Borrego-Díaz, J. A Model for Learning-Curve Estimation in Efficient Natural Architecture Search and Its Application in Predictive Health Performance. Mathematics 2023, 11, 655. [Google Scholar] [CrossRef]
  32. Hogan, D.; Elshaw, J.; Koschnick, C.; Ritschel, J.; Badiru, A.; Valentine, S. Cost Estimating Using a New Learning Curve Theory for Non-Constant Production Rates. Forecasting 2020, 2, 429–451. [Google Scholar] [CrossRef]
  33. Siemens, G.; Long, P. Penetrating the Fog: Analytics in Learning and Education. Educ. Rev. 2011, 46, 30–40. [Google Scholar] [CrossRef]
  34. Spikol, D.; Cukurova, M. Multimodal Learning Analytics. In Encyclopedia of Education and Information Technologies; Tatnall, A., Ed.; Springer: Cham, Switzerland, 2019. [Google Scholar] [CrossRef]
  35. Van, L.N.; Martínez-Monés, A.; Dimitriadis, Y.; Asensio-Pérez, J.I.; Martínez-Maldonado, R.; Gašević, D. Evidence-based multimodal learning analytics for feedback and reflection in collaborative learning. Br. J. Educ. Technol. 2024, 55, 1902–1925. [Google Scholar] [CrossRef]
  36. DeVellis, R.F. Scale Development: Theory and Applications, 4th ed.; Sage Publications: Thousand Oaks, CA, USA, 2017. [Google Scholar]
  37. Jolliffe, I.T.; Cadima, J. Principal Component Analysis: A Review and Recent Developments. Phil. Trans. R. Soc. A 2016, 374, 20150202. [Google Scholar] [CrossRef]
  38. Kline, R.B. Principles and Practice of Structural Equation Modeling, 5th ed.; Guilford Press: New York, NY, USA, 2023. [Google Scholar]
  39. Schumacker, R.; Lomax, R. A Beginner’s Guide to SEM; Routledge: London, UK, 2016. [Google Scholar]
  40. Gelman, A.; Hill, J.; Vehtari, A. Regression and Other Stories; Cambridge University Press: Cambridge, UK, 2020. [Google Scholar]
  41. Jacobs, R.A.; Kruschke, J.K. Bayesian learning theory applied to human cognition. Wiley Cogn. Sci. 2011, 2, 8–21. [Google Scholar] [CrossRef]
  42. Arnst, M.; Soize, C.; Bulthuis, K. Computation of Sobol indices in global sensitivity analysis from small data sets by probabilistic learning on manifolds. Int. J. Uncertain. Quantif. 2021, 11, 1–23. [Google Scholar]
  43. Pintrich, P.R.; De Groot, E.V. Motivational and self-regulated learning components of classroom academic performance. J. Educ. Psychol. 1990, 82, 33–40. [Google Scholar] [CrossRef]
  44. van Merriënboer, J.J.G.; Sweller, J. Cognitive Load Theory and Complex Learning: Recent Developments and Future Directions. Educ. Psychol. Rev. 2005, 17, 147–177. [Google Scholar] [CrossRef]
  45. He, X.; Burger-Helmchen, T. Evolving Knowledge Management: Artificial Intelligence and the Dynamics of Social Innovation. IEEE Eng. Manag. Rev. 2025. [Google Scholar] [CrossRef]
  46. Argyris, C.; Schön, D.A. Organizational Learning II: Theory, Method and Practice; Addison-Wesley: Boston, MA, USA, 1996. [Google Scholar]
  47. Theodoropoulos, L.; Theodoropoulos, A.; Halkiopoulos, C. Cognitive Bias Mitigation in Executive Decision-Making Within Data-Driven Environments: An Integrated AI- and Knowledge-Based Systems. Electronics 2025, 14, 3930. [Google Scholar] [CrossRef]
Figure 1. Core mechanisms of growth cycle.
Figure 2. The traditional foundation of cognitive development.
Figure 3. The first transition: AI as enhancement tool.
Figure 4. The second transition: bypassing understanding.
Figure 5. The final stage: complete cognitive dependency.
Figure 6. Cognitive co-evolution model.
Figure 7. Mathematical framework of the study.
Figure 8. ACM framework.
Figure 9. Cognitive interaction spectrum.
Figure 10. Comparative CSI dynamics for researcher and student profiles.
Figure 11. CSI trajectories for Student, Researcher, and AI-Assisted Expert profiles.
Figure 12. Sensitivity of CSI to metacognitive regulation μ and automation reliance λ.
Figure 13. Cognitive balance mapping of multiple software development roles in the μ–λ space.
Table 1. Key cognitive and motivational drivers underlying the growth process.

| Phase | Dominant Process | User Behavior | Outcome |
|---|---|---|---|
| Exploration | Curiosity and experimentation | Users test AI capabilities through open prompts and comparisons | Increased awareness of AI’s affordances and limitations |
| Integration | Strategic application | AI becomes a structured partner in research, learning, or design tasks | Development of analytical and reflective routines |
| Synergy | Co-creative cognition | Human and AI jointly generate novel hypotheses and frameworks | Expansion of creative and conceptual capacity |
| Stabilization | Self-directed mastery | AI assistance is selectively applied to enhance, not replace critical reasoning | Sustainable metacognitive autonomy |
Table 2. Cognitive interaction spectrum with corresponding CSI ranges.

| Zone | Dominant Actor | Cognitive Control | Typical Behavior Pattern | CSI Range | Risk | Recommended Management Strategy |
|---|---|---|---|---|---|---|
| 1. Human-Driven Reasoning | Human | Full autonomy; AI used only for data access or visualization | Independent reasoning, critical synthesis, problem-solving without automation | 0.85–1.00 | Cognitive overload, inefficiency, limited scalability | Introduce AI selectively for repetitive or data-heavy tasks while preserving reflective control |
| 2. Human-Guided Assistance | Human with AI support | Human defines goals; AI supports analytics or scenario exploration | Co-processing, model comparison, partial automation | 0.70–0.85 | Overconfidence in partial automation, shallow validation | Maintain reflection checkpoints and explicit reasoning steps |
| 3. Balanced Co-Cognition (CBZ) | Human–AI partnership | Shared cognitive load; bidirectional reasoning and feedback | Iterative dialog, explanation exchange, joint solution refinement | 0.55–0.70 | Coordination bias, potential dilution of accountability | Apply transparent reasoning protocols, explainable AI mechanisms, and reflective validation loops |
| 4. AI-Guided Execution | AI with human oversight | AI produces main outputs; human validates and edits results | Verification, supervision, post hoc reasoning | 0.35–0.55 | Automation bias, comprehension loss | Reinforce human-led review, introduce “explain-your-choice” mechanisms and reflective debriefing |
| 5. AI-Dominant Autonomy | AI | Minimal human involvement; autonomous decision-making | Passive monitoring, consumption of automated outcomes | 0.00–0.35 | Cognitive atrophy, dependency, ethical and accountability risks | Schedule human re-engagement cycles, enforce reversibility and transparency constraints |
Table 3. Parameters used in cognitive dynamics modeling.

| Parameter | Definition | Direction of Influence |
|---|---|---|
| A_H | Autonomy—ability to formulate tasks and make independent decisions | positive |
| R_H | Reflection—capacity for self-evaluation and critical reassessment | positive |
| C_H | Creativity—originality and synthesis of new ideas | positive |
| D_AI | Delegation to AI—frequency of transferring problem-solving to AI systems | negative |
| R_AI | Reliance on AI—degree of uncritical trust in automated output | negative |
Table 4. Behavioral scales and assessment protocol.

| Parameter | 0–2 | 3–4 | 5–6 | 7–8 | 9–10 |
|---|---|---|---|---|---|
| A_H | Fully dependent on AI | Weak initiative | Shared decision-making | Independent with AI support | Fully autonomous reasoning |
| R_H | No evaluation | Occasional verification | Periodic reflection | Regular self-check | Continuous metacognitive analysis |
| C_H | Reproduces AI output | Rephrases AI ideas | Combines sources | Generates novel patterns | Produces original concepts |
| D_AI | No delegation | Minor assistance | Balanced use | Major delegation | Full outsourcing of cognition |
| R_AI | Critical to all outputs | Often checks AI | Selective verification | Rare verification | Blind trust in AI results |
Table 5. Results of CSI calculation.

| Profile | A_H | R_H | C_H | D_AI | R_AI | W_H | W_AI | Cognitive Zone |
|---|---|---|---|---|---|---|---|---|
| Researcher (Growth Phase) | 8 | 9 | 8 | 5 | 3 | 1.4 | 1.0 | Growth |
| Student (Atrophy Phase) | 5 | 4 | 5 | 7 | 8 | 1.0 | 1.5 | Atrophy |
Table 6. Evaluation of human autonomy (H).

| Level Range | Descriptor | Behavioral Indicators | Interpretation |
|---|---|---|---|
| 0–2 | Minimal autonomy | User fully depends on AI outputs; rarely questions results. | Cognitive atrophy risk. |
| 3–4 | Limited autonomy | Partial independence in routine tasks; minimal critical verification. | Transitional zone. |
| 5–6 | Moderate autonomy | User selects among AI outputs using contextual judgment. | Cognitive balance zone. |
| 7–8 | High autonomy | Active human steering of process; AI used as supportive tool. | Cognitive growth. |
| 9–10 | Full autonomy | Human initiates and justifies reasoning independently of AI. | Optimal reflective independence. |
Table 7. Evaluation of reflection (R).

| Level Range | Descriptor | Behavioral Indicators | Interpretation |
|---|---|---|---|
| 0–2 | Passive acceptance | No verification or critical review of AI reasoning. | Cognitive atrophy. |
| 3–4 | Reactive reflection | Occasional questioning when results appear incorrect. | Early awareness. |
| 5–6 | Analytical reflection | Regular verification and interpretation of AI logic. | Balanced reasoning. |
| 7–8 | Meta-reflection | User anticipates AI limitations and self-corrects outcomes. | Cognitive maturity. |
| 9–10 | Systemic reflection | Continuous self-evaluation integrated into workflow. | High metacognitive resilience. |
Table 8. Evaluation of creativity (C).

| Level Range | Descriptor | Behavioral Indicators | Interpretation |
|---|---|---|---|
| 0–2 | Reproductive use | AI used for repetition or minor variation. | Low creativity. |
| 3–4 | Adaptive creativity | Occasional adaptation of AI outputs to new contexts. | Moderate innovation. |
| 5–6 | Generative creativity | User combines AI ideas with personal insights. | Balanced co-creation. |
| 7–8 | Transformative creativity | Novel synthesis of AI and human perspectives. | High creative synergy. |
| 9–10 | Original creation | AI serves as inspiration for fundamentally new ideas. | Peak innovation and balance. |
Table 9. Evaluation of delegation (D).

| Level Range | Descriptor | Behavioral Indicators | Interpretation |
|---|---|---|---|
| 0–2 | Full delegation | All analytical responsibility shifted to AI. | Over-dependence risk. |
| 3–4 | High delegation | Frequent reliance on AI without verification. | Atrophy tendency. |
| 5–6 | Balanced delegation | Human supervises automated actions; shared control. | Stable equilibrium. |
| 7–8 | Selective delegation | Conscious distribution of cognitive tasks. | Effective co-adaptation. |
| 9–10 | Minimal delegation | AI used only for mechanical or repetitive components. | High cognitive autonomy. |
Table 10. Evaluation of reliance (L).

| Level Range | Descriptor | Behavioral Indicators | Interpretation |
|---|---|---|---|
| 0–2 | Blind trust | Uncritical acceptance of AI authority. | Cognitive atrophy. |
| 3–4 | Dependent reliance | AI seen as default authority but occasionally questioned. | Transitional zone. |
| 5–6 | Balanced reliance | Healthy skepticism and verification. | Cognitive balance. |
| 7–8 | Informed reliance | Confidence grounded in understanding AI’s logic. | Mature cooperation. |
| 9–10 | Strategic reliance | Contextual, deliberate engagement; selective trust. | Synergistic mastery. |
Table 11. Self-assessment scale for cognitive sustainability.

| Level Range | Cognitive Zone | Description of Typical Behavior | Self-Reflection Indicators |
|---|---|---|---|
| 0–2 | Cognitive Atrophy | Frequent uncritical use of AI outputs; reliance on automation for reasoning and creativity. | “I rarely verify AI responses or question their logic.” |
| 3–4 | Assisted Dependency | Some reflection and partial understanding; limited modification of AI-generated results. | “I sometimes adjust AI suggestions but mostly depend on them.” |
| 5–6 | Cognitive Balance | Active engagement and moderate independence; AI used to enhance rather than replace thinking. | “I use AI to explore ideas and verify results.” |
| 7–8 | Cognitive Growth | Reflective use of AI; iterative questioning, adaptation, and synthesis of AI insights. | “I use AI as a partner for exploration and reasoning.” |
| 9–10 | Cognitive Synergy | Co-creation of ideas; dynamic and creative dialog between human and AI cognition. | “AI helps me extend my own reasoning and discover new concepts.” |
Table 12. Expert/Instructor assessment scale for cognitive sustainability.

| Level Range | Cognitive Zone | Observable Indicators | Suggested ACM Intervention or Strategy |
|---|---|---|---|
| 0–2 | Atrophy Zone | Minimal self-correction, passive repetition of AI outputs, lack of conceptual understanding. | Introduce reflection exercises and independent reasoning tasks. |
| 3–4 | Assisted Zone | Partial reflection; user modifies outputs but without systematic understanding. | Provide guided questioning frameworks and explainability modules. |
| 5–6 | Balance Zone | Balanced use of AI; visible human judgment in selection, verification, and synthesis. | Encourage deeper reflection cycles and feedback-based learning. |
| 7–8 | Growth Zone | High metacognitive engagement; user uses AI to test, compare, and extend hypotheses. | Support open-ended co-creation tasks and interdisciplinary integration. |
| 9–10 | Synergy Zone | Evidence of creative and ethical reasoning; human–AI co-development of innovative outcomes. | Sustain autonomy and ensure ethical oversight; document best practices. |
Table 13. Evaluation of cognitive interaction with AI among software developers.

| Professional Category | Typical AI Utilization Pattern | Estimated Level on Cognitive Scale (0–10) | Approximate CSI Zone | Characteristic Behavior |
|---|---|---|---|---|
| Junior Developers | Copying or slightly modifying AI-generated code; minimal reflective verification. | 3–4 | Early Atrophy Zone | Dependence on automated suggestions; limited understanding of algorithmic logic. |
| Middle Developers | Selective use of AI for acceleration and refactoring; partial human control maintained. | 5–6 | Cognitive Balance Zone | Efficient use of automation; occasional reflective adaptation of results. |
| Senior Developers/Architects | Integrative use of AI in architectural design, optimization, and testing. | 7–8 | Cognitive Growth Zone | Balanced delegation; strong reflective feedback; strategic use of AI-generated components. |
| R&D/AI Engineers | Use AI as cognitive partner in research, model tuning, and algorithmic innovation. | 8–9 | Cognitive Synergy Zone | Co-creative interaction; continuous learning through feedback and model introspection. |
Table 14. Intervention strategies for preventing cognitive atrophy.

| Risk Factor | Manifestation | Countermeasure (ACM Intervention) |
|---|---|---|
| Excessive delegation | Automatic use of AI for all problem-solving | Apply the manual-first rule: complete initial reasoning without AI, then compare results. |
| Surface-level understanding | Rapid acceptance of outputs without analysis | Introduce reflection checkpoints: “What did I understand?” vs. “What did AI generate?” |
| Cognitive laziness | Avoidance of mental effort | Schedule AI-off intervals or “unplugged days” devoted to independent tasks. |
| Automation bias | Over-trust in system reliability | Require dual validation: human and AI must both justify conclusions. |
| Creativity erosion | Repetition of AI-generated patterns | Assign open-ended or paradoxical tasks that challenge AI and stimulate human novelty. |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
