Proceedings, 2025, Volume 126, IOCPh 2025

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
Number of Papers: 11
9 pages, 237 KB  
Proceeding Paper
Jean Piaget and Objectivity—Genetic Epistemology’s Place in a View from Nowhere
by Mark A. Winstanley
Proceedings 2025, 126(1), 1; https://doi.org/10.3390/proceedings2025126001 - 13 Aug 2025
Abstract
Science pursues objectivity. According to Thomas Nagel, “we must get outside of ourselves, and view the world from nowhere within it” is the most natural expression of this goal. However, we cannot literally get outside of ourselves; realistically, we can only hope to achieve a more detached conception by relying “less and less on certain individual aspects, and more and more on something else, less individual, which is also part of us”. This “self-transcendent conception should ideally explain (1) what the world is like; (2) what we are like; (3) why the world appears to beings like us in certain respects as it is and in certain respects as it isn’t; (4) how beings like us can arrive at such a conception.” The natural and human sciences address (1)–(3), but the last condition is rarely met, according to Nagel. In this paper, I argue that the genetic epistemology conceived by Jean Piaget as a science of the growth of knowledge explains how beings like us meet condition (4).
7 pages, 171 KB  
Proceeding Paper
The Evolution of Intelligence from Active Matter to Complex Intelligent Systems via Agent-Based Autopoiesis
by Gordana Dodig-Crnkovic
Proceedings 2025, 126(1), 2; https://doi.org/10.3390/proceedings2025126002 - 18 Aug 2025
Abstract
Intelligence is a central topic in computing and philosophy, yet its origins and biological roots remain poorly understood. The framework proposed in this paper approaches intelligence as the complexification of agency across multiple levels of organization—from active matter to symbolic and social systems. Agents gradually acquire the capacity to detect differences, regulate themselves, and sustain identity within dynamic environments. Grounded in autopoiesis, cognition is reframed as a recursive, embodied process sustaining life through self-construction. Intelligence evolves as a problem-solving capacity of increasing organizational complexity: from physical self-organization to collective and reflexive capabilities. The model integrates systems theory, cybernetics, enactivism, and computational approaches into a unified info-computational perspective.
9 pages, 1005 KB  
Proceeding Paper
General Theory of Information and Mindful Machines
by Rao Mikkilineni
Proceedings 2025, 126(1), 3; https://doi.org/10.3390/proceedings2025126003 - 26 Aug 2025
Abstract
As artificial intelligence advances toward unprecedented capabilities, society faces a choice between two trajectories. One continues scaling transformer-based architectures, such as state-of-the-art large language models (LLMs) like GPT-4, Claude, and Gemini, aiming for broad generalization and emergent capabilities. This approach has produced powerful tools but remains largely statistical, with unclear potential to achieve hypothetical “superintelligence”—a term used here as a conceptual reference to systems that might outperform humans across most cognitive domains, though no consensus on its definition or framework currently exists. The alternative explored here is the Mindful Machines paradigm—AI systems that could, in future, integrate intelligence with semantic grounding, embedded ethical constraints, and goal-directed self-regulation. This paper outlines the Mindful Machine architecture, grounded in Mark Burgin’s General Theory of Information (GTI), and proposes a post-Turing model of cognition that directly encodes memory, meaning, and teleological goals into the computational substrate. Two implementations are cited as proofs of concept.
9 pages, 188 KB  
Proceeding Paper
Intelligence and the Hard Problem of Consciousness—With ‘Dual-Aspect Theory’ Notes
by Marcus Abundis
Proceedings 2025, 126(1), 4; https://doi.org/10.3390/proceedings2025126004 - 10 Sep 2025
Abstract
To model informatic intelligence, agency, consciousness and the like, one must address a claimed Hard Problem: that a grasp of ‘the mind’ lies wholly beyond scientific views. While this claim is suspect, persistent analogues can be identified in the literature, such as a “symbol grounding problem”, “solving intelligence”, a missing “theory of meaning”, and more. The topic of subjective phenomena thus still holds sway in many corners as being unresolved. But firm analysis of Hard Problem claims is rare; researchers instead respond intuitively, claiming that (1) it is an absurd view unworthy of study, or (2) it is an intractable issue defying study, where neither side offers much clarifying detail. In contrast, this paper firmly assesses the Hard Problem’s claim contra one scientific role: evolution by means of natural selection (EvNS). It examines the specific logic behind this claim, as seen in the literature over the years. The paper ultimately shows that the Hard Problem’s logic is deeply flawed, with the further implication that EvNS remains available for exploring consciousness. The paper also suggests that an ‘information theory’ dual-aspect approach is best suited to resolving Hard-Problem-like claims.
8 pages, 171 KB  
Proceeding Paper
How Brooks’ Behavior-Based Robots Teach Us a Lesson About Knowledge
by Saskia Janina Neumann
Proceedings 2025, 126(1), 5; https://doi.org/10.3390/proceedings2025126005 - 12 Sep 2025
Abstract
This work argues that there is more than one form of knowledge. By comparing human cognition with Rodney Brooks’ behavior-based robots, which act without representational content, I show that humans interact with the world through contentful representations, while robots rely on contentless, embodied routines. Drawing on empirical cases—spreading activation, object recognition, agnosia, and vision reconstruction—I argue that humans require content and thus face the hard problem of content. I propose that content is internally generated. Ultimately, I defend a pluralistic view: knowledge can be both contentful and contentless and neither form is inherently superior.
7 pages, 168 KB  
Proceeding Paper
Ritual Practice Robots: The Importance of Incorporating “li”
by Liang Wang and Wenya Ma
Proceedings 2025, 126(1), 6; https://doi.org/10.3390/proceedings2025126006 - 12 Sep 2025
Abstract
Confucian ethics, as a form of virtue ethics, focuses on moral practices and ritual norms, which differ significantly from utilitarian and deontological theories. Confucian ethics emphasizes that moral norms are not only theoretically prescribed but are also deeply embedded in etiquette practice, manifested through actual social behaviors. Integrating Confucian ethics into the design of social robots, so that artificial intelligence can perform etiquette practice, provides an innovative solution for improving the quality of human–robot relationships and the morality of society.
6 pages, 179 KB  
Proceeding Paper
Towards Beneficial AI: A Biomimicry Framework to Design Intelligence That Cooperates with Biological Entities
by Paweł Polak, Peter Niewiarowski, John Huss and Roman Krzanowski
Proceedings 2025, 126(1), 7; https://doi.org/10.3390/proceedings2025126007 - 15 Sep 2025
Abstract
This paper proposes biomimicry as a paradigm for helping to overcome both the conceptual and technological limitations of current AI systems. It begins by outlining three key challenges faced by modern AI and then proceeds to introduce the concept of biomimicry, offering examples of how biologically inspired approaches have informed technical solutions. Furthermore, this paper presents a framework for integrating biomimicry principles into AI research and development. The three central challenges identified here are the energy challenge, the gap challenge, and the conceptual challenge. This paper also presents a case study on beneficial AI to illustrate how a biomimetic approach can be applied to address some current shortcomings in AI technology.
8 pages, 206 KB  
Proceeding Paper
Transitive Self-Reflection—A Fundamental Criterion for Detecting Intelligence
by Krassimir Markov and Velina Slavova
Proceedings 2025, 126(1), 8; https://doi.org/10.3390/proceedings2025126008 - 15 Sep 2025
Abstract
This survey investigates the concept of transitive self-reflection as a fundamental criterion for detecting and measuring intelligence. We explore the manifestation of this ability in humans, consider its potential presence in other animals, and discuss the challenges and possibilities of replicating it in artificial intelligence systems. Transitive self-reflection is characterized by an awareness of oneself through complex cognitive abilities rooted in evolutionary mechanisms that are innate in humans. Although transitive self-reflection cannot be fully replicated in AI as an origin, its behavioral characteristics can be analyzed and, to some extent, imitated. The study delves into various forms of transitive self-reflection, including self-recognition, object-mediated self-reflection, and reflective social cognition, highlighting their philosophical roots and recent advancements in cognitive science. We also examine the multifaceted nature of intelligence, encompassing cognitive, emotional, and social dimensions. Despite significant progress, current AI systems lack true transitive self-reflection. Developing AI with this capability requires advances in knowledge representation, reasoning algorithms, and machine learning. Incorporating transitive self-reflection into AI systems holds transformative potential for creating socially adept and more human-like intelligence in machines. This research underscores the importance of transitive self-reflection in advancing both our understanding and the development of intelligent systems.
4 pages, 6120 KB  
Proceeding Paper
When Planes Fly Better than Birds: Should AIs Think like Humans?
by Soumya Banerjee
Proceedings 2025, 126(1), 9; https://doi.org/10.3390/proceedings2025126009 - 16 Sep 2025
Abstract
As artificial intelligence (AI) systems continue to outperform humans in an increasing range of specialised tasks, a fundamental question emerges at the intersection of philosophy, cognitive science, and engineering: should we aim to build AIs that think like humans, or should we embrace non-human-like architectures that may be more efficient or powerful, even if they diverge radically from biological intelligence? This paper draws on a compelling analogy from the history of aviation: the fact that aeroplanes, while inspired by birds, do not fly like birds. Instead of flapping wings or mimicking avian anatomy, engineers developed fixed-wing aircraft governed by aerodynamic principles that enabled superior performance. This decoupling of function from the biological form invites us to ask whether intelligence, like flight, can be achieved without replicating the mechanisms of the human brain. We explore this analogy through three main lenses. First, we consider the philosophical implications: What does it mean for an entity to be intelligent if it does not share our cognitive processes? Can we meaningfully compare different forms of intelligence across radically different substrates? Second, we examine engineering trade-offs in building AIs modelled on human cognition (e.g., through neural–symbolic systems or cognitive architectures) versus those designed for performance alone (e.g., deep learning models). Finally, we explore the ethical consequences of diverging from human-like thinking in AI systems. If AIs do not think like us, how can we ensure alignment, predictability, and shared moral frameworks? By critically evaluating these questions, this paper advocates for a pragmatic and pluralistic approach to AI design: one that values human-like understanding where it is useful (e.g., for interpretability or human–AI interaction) but also recognises the potential of novel architectures unconstrained by biological precedent. Intelligence may ultimately be a broader concept than the human example suggests, and embracing this plurality may be key to building robust and beneficial AI systems.
5 pages, 160 KB  
Proceeding Paper
Abductive Intelligence, Creativity, Generative AI: The Role of Eco-Cognitive Openness and Situatedness
by Lorenzo Magnani
Proceedings 2025, 126(1), 10; https://doi.org/10.3390/proceedings2025126010 - 17 Sep 2025
Abstract
I recently developed the concept of eco-cognitive openness and situatedness to explain how cognitive systems, whether human or artificial, engage dynamically with their surroundings to generate information and creative outcomes through abductive cognition. Human cognition demonstrates significant eco-cognitive openness, utilizing external resources like tools and cultural contexts to produce contextually rich hypotheses, sometimes highly creative via what I called “unlocked strategies.” Conversely, generative AI, such as large language models (LLMs) and image generators, employs “locked strategies,” relying on pre-existing datasets with minimal real-time environmental interaction—this leads to limited creativity. While these systems can yield some low-level degrees of creative outputs, their lack of human-like eco-cognitive openness restricts their ability to achieve high-level creative abductive feats, which remain a human strength, especially among the most talented. However, LLMs often outperform humans in routine cognitive tasks, exposing human intellectual limitations rather than AI deficiencies. Much human cognition is repetitive and imitative, resembling “stochastic parrots,” much like LLMs. Thus, LLMs are potent cognitive tools that can enhance human performance but also endanger creativity. Future AI developments, such as human–AI partnerships, could improve eco-cognitive openness, but risks like bias and overcomputationalization necessitate human oversight to ensure meaningful results. In collaborative settings, generative AI can serve as an epistemic mediator, narrowing the gap toward unlocked creativity. To safeguard human creativity, control over AI outputs must be maintained, embedding these systems in socio-cultural contexts. I also express concern that ethical and legal frameworks to mitigate AI’s negative impacts may fail to be enforced, risking “ethics washing” and “law washing.”
6 pages, 162 KB  
Proceeding Paper
A Comparison of the Effect of Language on High Level Information Processes in Humans and Linguistically Mature Generative AI
by Daniel Boyd
Proceedings 2025, 126(1), 11; https://doi.org/10.3390/proceedings2025126011 - 26 Sep 2025
Abstract
Recent advances in Large Language Models (LLMs) have reignited discussions concerning the similarities and differences between human and machine intelligence. This article approaches such questions from the viewpoint of the overarching explanation for biological and technological information systems provided by Emergent Information Theory. Particular attention is given to the role of language in the construction of high-level emergent informational processes and entities and to its use in conscious reporting. This leads to the conclusion that language may also provide a window into the inner workings of these systems that can provide evidence relevant to these discussions.