1. Generative Artificial Intelligence and “Eco-Cognitive Openness and Situatedness”
The concept of eco-cognitive openness and situatedness, which I introduced in [1], describes how cognitive systems—human or artificial—interact with their environment to produce information and creative results through abductive cognition, that is, the process of forming hypotheses to explain observations. This framework emphasizes how external resources, such as tools, social interactions, or cultural artifacts, shape cognitive processes. It also evaluates how generative AI engages with its environment and its potential for adaptive, creative meaning-making. Below, I explore eco-cognitive openness in human cognition, its implications for generative AI, particularly LLMs, and its connection to locked and unlocked strategies.
Eco-cognitive openness and situatedness refer to a cognitive system’s capacity to use external resources alongside internal processes to address challenges, generate ideas, or create novel outcomes [1]. Grounded in my eco-cognitive approach to abduction, it suggests that cognition extends beyond the mind through the following:
External mediators: tools like notes, diagrams, or language that aid reasoning.
Environmental engagement: active interaction with the physical or social world to test ideas or acquire new information.
Contextual flexibility: adapting cognitive processes to dynamic, real-world inputs, promoting creative outcomes.
For example, a scientist using a telescope and various cognitive tools to study celestial bodies demonstrates eco-cognitive openness, facilitating new discoveries. Systems with limited openness, however, operate within rigid frameworks, restricting adaptability. Eco-cognitive openness underscores that cognition is interactive, dynamic, and embedded in a larger system, influenced by the following:
Environmental interaction: humans continually engage with their surroundings, shaping their thinking and learning.
Embodiment: cognition is linked to sensory and bodily experiences.
Social and cultural context: thinking is influenced by social interactions and shared knowledge.
Adaptive learning: humans flexibly adjust to new experiences and contexts.
This dynamic interplay between the mind and environment makes cognition adaptable and fluid.
Human creative abductive intelligence is marked by high eco-cognitive openness, where the brain, as an open system, constantly interacts with its environment. This enables humans to perform the following processes:
Integrate diverse inputs: humans combine cultural knowledge, social interactions, and sensory data (visual, auditory, tactile) to form hypotheses. For instance, an artist draws inspiration from nature, discussions, and tools like software or brushes.
Manipulate the environment: humans actively modify their surroundings (e.g., creating models or jotting notes) to externalize and refine cognitive processes, a process I term manipulative abduction.
Create robust hypotheses: through abduction, humans develop contextually rich, cross-domain solutions, such as new technologies or cultural artifacts.
This openness fosters “unlocked strategies,” where cognition is flexible and transcends predefined limits. For example, a poet crafting a metaphor blends environmental cues, cultural symbols, and personal experiences to create universal meaning.
2. Locked and Unlocked Strategies
The relationship between generative AI and my concept of locked and unlocked strategies highlights differences in cognitive adaptability. Locked strategies characterize systems like AlphaGo, which operate within constrained data domains; such systems excel in specific tasks but lack broader adaptability due to limited environmental interaction. Unlocked strategies, by contrast, reflect human cognition’s flexibility. In Go, both human and AI strategies rely on fixed rules and data, limiting innovative or diverse solutions. For example, AlphaGo’s impressive performance is confined to the game’s rules, making its abductive reasoning “locked” to that domain. In contrast, human cognition often employs unlocked strategies, leveraging eco-cognitive openness to adapt to new environments, create novel rules, or redefine contexts. For instance, AlphaGo’s creative Move 37 against Lee Sedol was innovative within Go’s constraints but lacks the broader flexibility of human cognition in open-ended scenarios. Unlocked strategies enable humans to produce high-level creative outcomes, such as scientific breakthroughs or artistic expressions, by integrating cultural contexts and diverse interactions. These strategies are less constrained by preset limits and adapt to varied situations.
Generative AI, including LLMs and image generators like DALL-E, exhibits limited eco-cognitive openness, aligning with locked strategies. Key limitations include:
Limited environmental interaction: GenAI relies on pre-existing datasets (e.g., text corpora or image libraries) rather than real-time environmental inputs. For example, an LLM generates text based on learned patterns, not dynamic world interactions.
Restricted manipulative abduction: Unlike humans, who use tools or sketches to brainstorm, GenAI cannot independently modify its environment, producing outputs algorithmically.
Limited abductive reasoning: GenAI creates believable outputs (e.g., coherent stories or realistic images) but struggles to incorporate novel environmental factors without retraining.
For instance, DALL-E generates images based on prompts but is limited by its training data, unlike a human artist who adapts to real-time feedback or observations. GenAI’s creativity is rooted in learned patterns, not dynamic environmental engagement.
Generative AI aligns with locked strategies due to its reliance on predefined data environments. LLMs like ChatGPT generate coherent text based on statistical patterns, not real-time interaction, reflecting the constrained creativity of locked systems. Their domain-specific nature limits generalization; for example, a text-to-image model like DALL-E cannot adapt to new modalities like audio without reprogramming. While GenAI performs a form of abduction by generating plausible outputs, it lacks the open-ended, context-sensitive hypothesis generation of human cognition. AlphaGo’s creative moves, for instance, are confined to Go, unlike human innovations that transcend domains, such as scientific discoveries.
3. Eco-Cognitive Openness in LLMs (Generative AI Systems)
Generative AI, such as LLMs, simulates human-like cognition but differs significantly in environmental interaction:
No sensory experience: LLMs lack sensory engagement with the world, relying on datasets and statistical patterns.
Lack of embodiment: without bodies or sensory organs, their cognition is purely computational, not grounded in physical experience.
Context-dependence: LLMs use informational context (e.g., text patterns) but lack real-world environmental engagement.
Cultural and social context: while LLMs process human-created data, they do not “experience” cultural or social contexts, limiting their understanding.
Adaptive learning: LLMs require retraining to adapt to new information, unlike humans’ real-time adaptability.
4. Artificial Intelligence’s Potential to Improve Human Eco-Cognitive Openness
Despite their locked strategies, GenAI systems can enhance human eco-cognitive openness through collaboration. AI can serve as an epistemic mediator, boosting creativity via human–AI co-creation, such as iterative prompting or co-design. For example, architects might use AI-generated design options and refine them based on cultural or environmental factors. In collaborative tasks, AI extends human cognitive abilities, suggesting ideas or aiding problem-solving, creating a shared cognitive ecosystem. Future AI systems could integrate real-time data streams (e.g., from social media or IoT devices) to exhibit greater eco-cognitive flexibility. Multimodal AI, combining text, image, and audio processing, may also mimic human manipulative abduction. However, AI remains locked without intentional design to overcome algorithmic constraints. GenAI’s exploratory potential, such as generating novel product designs, can inspire human creativity, but its outputs are bound by training data, requiring human agency to contextualize and refine them.
5. LLMs Are Powerful Cognitive Tools That Have the Potential to Either Support or Undermine Human Creativity
High-level eco-cognitive openness enables exceptional human creativity, while GenAI’s locked strategies limit its creative potential. LLMs can produce creative outputs but lack the dynamic environmental interaction needed for high-level abductive feats, which remain a human strength. However, LLMs often outperform humans in routine tasks, revealing that much human cognition is imitative, akin to “stochastic parrots.” This highlights human intellectual limitations rather than AI shortcomings. LLMs are powerful tools that can enhance cognitive performance but also threaten creativity by fostering over-reliance. Human–AI collaboration and real-time data integration could improve eco-cognitive openness, but risks like bias and overcomputationalization require human oversight to ensure meaningful results. Maintaining control over AI output is crucial to preserving human creativity and ensuring contextual relevance.
6. Ethical Issues and Difficulties: Endangering Human Discoverability?
Beyond standard AI ethics concerns, I highlight the risk of overcomputationalization [2], where locked AI strategies undermine human creativity by overly structuring decision-making. GenAI’s automation of tasks like content creation may reduce human engagement, limiting eco-cognitive openness and discoverability. The “half-automation problem” underscores how partial AI autonomy can create complacency, undermining human agency. Also, it is well known that AI’s reliance on data may perpetuate biases, even in transparent systems, if training datasets underrepresent certain groups. Overcomputationalization may reduce human environmental interaction, weakening eco-cognitive openness. Technical challenges, such as enabling real-time learning or ethical safeguards, must be addressed to prevent misuse, like misinformation. Additionally, GenAI’s environmental impact, such as energy consumption, necessitates sustainable practices. In summary, my emphasis on the cognitive role of eco-cognitive openness encourages designing AI that supports human creativity and environmental sustainability.
7. Law Washing and Ethics Washing as Negative Possible Consequences
Hildebrandt [3] notes that recommender systems, driven by economic incentives, may undermine human creativity and agency through filter bubbles and echo chambers. She suggests that EU legislative frameworks could realign incentives to support human abductive creativity, aligning with my eco-cognitive framework. However, profiling technologies and AI optimizers challenge constitutional protections, as their harms are often invisible or untraceable. Hildebrandt’s concerns reflect the risk of “ethics washing,” where tech companies use vague ethical guidelines to avoid regulation [4]. Similarly, “law washing” may occur if legal frameworks, like the EU’s AI regulations, fail to be enforced. Tafani [4] argues that machine learning’s lack of transparency and reliance on automated statistics can create deceptive narratives about AI capabilities, treating systems as “magical objects” immune to critique. This risks undermining liberalism and the rule of law by reducing human agency and equality. I fully share the concern that ethical and legal frameworks may struggle to be implemented, leaving societies vulnerable to ethics and law washing.
8. Conclusions
Eco-cognitive openness underscores the differences between human and AI cognition. GenAI’s locked strategies limit its creativity compared to the unlocked strategies of human cognition, which leverage dynamic environmental interactions. Human–AI collaboration can enhance eco-cognitive openness, with AI serving as an epistemic mediator to narrow the gap toward unlocked creativity. However, risks like overcomputationalization and bias require human oversight to maintain agency and contextual relevance. The challenge is ensuring AI supports, rather than supplants, human eco-cognitive processes. I also express concern that ethical and legal frameworks to address AI’s negative impacts may fail to be enforced, risking ethics and law washing and leaving human creativity and agency at risk.