Search Results (138)

Search Parameters:
Keywords = Turing computability

27 pages, 1237 KB  
Article
Constraint, Asymmetry, and Meaning: A Cybernetic Reinterpretation of Probabilistic Emergence Across Complex Systems
by Ezra N. S. Lockhart
Symmetry 2026, 18(3), 518; https://doi.org/10.3390/sym18030518 - 18 Mar 2026
Viewed by 305
Abstract
This study develops a Constraint-Driven Model of Intelligence to explain the emergence of structured meaning in complex systems, reconciling probability and cybernetics. It applies a conceptual–analytic procedure, conducted entirely through logical reasoning and theoretical analysis, without empirical measurement, data acquisition, experimental manipulation, or statistical testing, and is therefore methodologically separate from empirical artificial intelligence research. Phenomena such as model collapse are cited as theoretical instances for epistemic argumentation, without asserting empirical verification. Building on Émile Borel’s Infinite Monkey Theorem, which demonstrates the theoretical inevitability of order in unbounded stochastic processes, and Gregory Bateson’s principle of negative explanation, which defines structure as the result of systematically eliminated alternatives, the analysis formalizes how constraints break ergodicity and generate asymmetry. Shannon’s entropy quantifies the informational effects of constraints, while Simon’s bounded rationality and Turing’s algorithmic limits show how cognitive and computational boundaries produce tractable outcomes. Applied to modern AI, the model accounts for model collapse in recursive training, showing that the loss of asymmetric constraints produces low-entropy, repetitive outputs, demonstrating the epistemic necessity of constraint regulation. Comparing probabilistic and cybernetic accounts of emergence, the study shows that structured intelligence arises not from stochastic exploration alone, but from bounded, recursive, selective processes. This model is transdisciplinary, formalizing how constraints from socioeconomic pressures to subcultural circulation shape diversity, innovation, and functional asymmetry, establishing a generalizable cybernetic epistemology for the generation of structured intelligence and meaning across domains. By formalizing these concepts through set-theoretic derivations and integrative synthesis, this non-empirical model advances a cybernetic epistemology, separate from quantitative AI evaluations or experimental designs. Full article
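The informational effect of a constraint invoked in this abstract can be made concrete with Shannon entropy. The sketch below is purely illustrative and is not the paper's model: it compares a maximally unconstrained letter source with one whose alternatives have largely been eliminated, showing the drop in entropy that the abstract associates with constraint-driven asymmetry.

```python
# Illustrative sketch only, not code from the article: Shannon entropy of an
# unconstrained source versus a constrained one in which most alternatives
# have been eliminated (a loose rendering of Bateson's "negative explanation").
from math import log2

def entropy(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

uniform = [1 / 26] * 26                          # 26 equally likely letters
constrained = [0.4, 0.3, 0.2, 0.1] + [0.0] * 22  # most alternatives ruled out

print(f"unconstrained entropy: {entropy(uniform):.3f} bits")     # about 4.700
print(f"constrained entropy:   {entropy(constrained):.3f} bits")  # about 1.846
```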

39 pages, 67440 KB  
Article
LLM-TOC: LLM-Driven Theory-of-Mind Adversarial Curriculum for Multi-Agent Generalization
by Chenxu Wang, Jiang Yuan, Tianqi Yu, Xinyue Jiang, Liuyu Xiang, Junge Zhang and Zhaofeng He
Mathematics 2026, 14(5), 915; https://doi.org/10.3390/math14050915 - 8 Mar 2026
Viewed by 469
Abstract
Zero-shot generalization to out-of-distribution (OOD) teammates and opponents in multi-agent systems (MASs) remains a fundamental challenge for general-purpose AI, especially in open-ended interaction scenarios. Existing multi-agent reinforcement learning (MARL) paradigms, such as self-play and population-based training, often collapse to a limited subset of Nash equilibria, leaving agents brittle when faced with semantically diverse, unseen behaviors. Recent approaches that invoke Large Language Models (LLMs) at run time can improve adaptability but introduce substantial latency and can become less reliable as task horizons grow; in contrast, LLM-assisted reward-shaping methods remain constrained by the inefficiency of the inner reinforcement-learning loop. To address these limitations, we propose LLM-TOC (LLM-Driven Theory-of-Mind Adversarial Curriculum), which casts generalization as a bi-level Stackelberg game: in the inner loop, a MARL agent (the follower) minimizes regret against a fixed population, while in the outer loop, an LLM serves as a semantic oracle that generates executable adversarial or cooperative strategies in a Turing-complete code space to maximize the agent’s regret. To cope with the absence of gradients in discrete code generation, we introduce Gradient Saliency Feedback, which transforms pixel-level value fluctuations into semantically meaningful causal cues to steer the LLM toward targeted strategy synthesis. We further provide motivating theoretical analysis via the PAC-Bayes framework, showing that LLM-TOC converges at rate O(1/K) and yields a tighter generalization error bound than parameter-space exploration under reasonable preconditions. Experiments on the Melting Pot benchmark demonstrate that, with expected cumulative collective return as the core zero-shot generalization metric, LLM-TOC consistently outperforms self-play baselines (IPPO and MAPPO) and the LLM-inference method Hypothetical Minds across all held-out test scenarios, reaching 75% to 85% of the upper-bound performance of Oracle PPO. Meanwhile, with the number of RL environment interaction steps to reach the target relative performance as the core efficiency metric, our framework reduces the total training computational cost by more than 60% compared with mainstream baselines. Full article
(This article belongs to the Special Issue Applications of Intelligent Game and Reinforcement Learning)
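To make the bi-level regret structure concrete, here is a tiny numeric analogue of my own, assuming nothing beyond the abstract: a "leader" repeatedly adds the opponent type against which the current best-responding "follower" suffers the most regret. It omits the LLM, the Theory-of-Mind curriculum, and Gradient Saliency Feedback entirely and only sketches the outer regret-maximization / inner best-response loop.

```python
import numpy as np

# Toy analogue of the bi-level Stackelberg loop (not the authors' LLM-TOC code):
# the follower best-responds to the current opponent population; the leader then
# adds the opponent type that maximizes the follower's regret.
rng = np.random.default_rng(0)
payoffs = rng.uniform(size=(5, 4))  # follower payoff: 5 follower actions x 4 opponent types
population = [0]                    # opponent types encountered so far

for k in range(3):
    # Inner loop: follower picks the action with the best average payoff vs. the population.
    follower_action = int(np.argmax(payoffs[:, population].mean(axis=1)))
    # Regret of that action against each opponent type.
    regret = payoffs.max(axis=0) - payoffs[follower_action, :]
    # Outer loop: leader appends the most regret-inducing opponent type.
    population.append(int(np.argmax(regret)))
    print(f"iteration {k}: action {follower_action}, worst-case regret {regret.max():.3f}")
```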

24 pages, 389 KB  
Article
The Power of the Lorentz Quantum Computer
by Qi Zhang and Biao Wu
Entropy 2026, 28(3), 266; https://doi.org/10.3390/e28030266 - 28 Feb 2026
Cited by 1 | Viewed by 285
Abstract
We analyze the power of the recently proposed Lorentz quantum computer (LQC), a theoretical model leveraging hyperbolic bits (hybits) governed by complex Lorentz transformations. We define the complexity class BLQP (bounded-error Lorentz quantum polynomial-time) and demonstrate its equivalence to the complexity class P^PP (the class of problems solvable by a deterministic polynomial-time Turing machine with access to a PP oracle). LQC algorithms are shown to solve NP-hard problems, such as the maximum independent set (MIS), in polynomial time, thereby placing NP and co-NP within BLQP. Furthermore, we establish that LQC can efficiently simulate quantum computing with postselection (PostBQP), while the reverse is not possible, highlighting LQC’s unique “super-postselection” capability. By proving BLQP = P^PP, we situate the entire polynomial hierarchy (PH) within BLQP and reveal profound connections between computational complexity and physical frameworks like Lorentz quantum mechanics. These results underscore LQC’s theoretical superiority over conventional quantum computing models and its potential to redefine boundaries in complexity theory. Full article
(This article belongs to the Special Issue Quantum Computation, Quantum AI, and Quantum Information)
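For readers tracking the complexity-class claims, the abstract's results can be summarized as the chain below. This is my restatement, not the paper's notation; the placement of PH inside BLQP follows from the stated equality together with Toda's theorem, PH ⊆ P^PP.

```latex
\[
\mathrm{NP} \cup \mathrm{coNP} \;\subseteq\; \mathrm{BLQP}, \qquad
\mathrm{PostBQP} \;\subseteq\; \mathrm{BLQP}, \qquad
\mathrm{BLQP} \;=\; \mathrm{P}^{\mathrm{PP}} \;\supseteq\; \mathrm{PH}.
\]
```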

41 pages, 1883 KB  
Article
Is Every Cognitive Phenomenon Computable?
by Fernando Rodriguez-Vergara and Phil Husbands
Mathematics 2026, 14(3), 535; https://doi.org/10.3390/math14030535 - 2 Feb 2026
Viewed by 1053
Abstract
According to the Church–Turing thesis, the limit of what is computable is bounded by Turing machines. Following from this, given that general computable functions formally describe the notion of recursive mechanisms, it is sometimes argued that every organismic process that specifies consistent cognitive responses should be both limited to Turing machine capabilities and amenable to formalization. There is, however, a deep intuitive conviction permeating contemporary cognitive science, according to which mental phenomena, such as consciousness and agency, cannot be explained by resorting to this kind of framework. In spite of some exceptions, the overall tacit assumption is that whatever the mind is, it exceeds the reach of what is described by notions of computability. This issue, namely the nature of the relation between cognition and computation, becomes particularly pertinent and increasingly relevant as a possible source of better understanding of the inner workings of the mind, as well as the limits of artificial implementations thereof. Moreover, although it is often overlooked or omitted so as to simplify our models, it will probably define, or so we argue, the direction of future research on artificial life, cognitive science, artificial intelligence, and related fields. Full article
(This article belongs to the Special Issue Non-algorithmic Mathematical Models of Biological Organization)

18 pages, 265 KB  
Article
Wittgenstein, Turing, and the Intelligence of Games
by Rossella Lupacchini
Philosophies 2026, 11(1), 10; https://doi.org/10.3390/philosophies11010010 - 16 Jan 2026
Viewed by 829
Abstract
One of Wittgenstein’s most quoted passages from his Remarks on the Philosophy of Psychology concerns Turing’s “machines” and says verbatim: “These machines are humans who calculate. And one might express what he [Turing] says also in the form of games.” This passage not only captures the kernel of Turing’s conceptual argument for the adequacy of his definition of “computability”, as presented in his article On Computable Numbers (1936), but also helps clarify Turing’s idea of “mechanical intelligence.” Indeed, the notion of game provides an ideal means to focus on similarities and differences between Turing and Wittgenstein’s views of mechanical procedures, mathematical understanding, and thinking activity. The live encounter between Ludwig Wittgenstein and Alan Turing took place in Cambridge in 1939, when Wittgenstein’s Lectures on the Foundations of Mathematics were regularly attended by Turing. Interestingly, during the conversations between the two, Turing seems to play the role of the Wittgenstein of the Tractatus, to allow the present Wittgenstein to reassess what he deplores as mistaken or misleading in his early work. As for Turing himself, his reflection on thinking machines from the late 1940s demonstrates the significance of his dialogue with Wittgenstein. Full article
(This article belongs to the Special Issue Intelligent Inquiry into Intelligence)
6 pages, 210 KB  
Article
Why Turing’s Computable Numbers Are Only Non-Constructively Closed Under Addition
by Jeff Edmonds
Entropy 2026, 28(1), 71; https://doi.org/10.3390/e28010071 - 7 Jan 2026
Viewed by 434
Abstract
Kolmogorov complexity asks whether a string can be output by a Turing Machine (TM) whose description is shorter. Analogously, a real number is considered computable if a Turing machine can generate its decimal expansion. The modern ϵ-approximation definition of computability, widely used in practical computation, ensures that computable reals are constructively closed under addition. However, Turing’s original 1936 digit-by-digit notion, which demands the direct output of the n-th digit, presents a stark divergence. Though the set of Turing-computable reals is not constructively closed under addition, we prove that a Turing machine capable of computing x+y non-constructively exists. The core constructive computational barrier arises from determining the ones digit of a sum like 0.333… + 0.666… = 0.999… = 1.000…. This particular example is ambiguous because both 0.999… and 1.000… are legitimate decimal representations of the same number. However, if any of the infinite number of 3s in the first term is changed to a 2 (e.g., 0.3332 + 0.666…), the sum’s leading digit is definitely zero. Conversely, if it is changed to a 4 (e.g., 0.3334 + 0.666…), the leading digit is definitely one. This implies an inherent undecidability in determining these digits. Recent papers and our work address this issue. Hamkins provides an informal argument, while Berthelette et al. present a more complicated formal proof, and our contribution offers a simple reduction to the Halting Problem. We demonstrate that determining when carry propagation stops can be resolved with a single query to an oracle that tells if and when a given TM halts. Because a concrete answer to this query exists, so does a TM computing the digits of x+y, though the proof is non-constructive. As far as we know, the analogous question for multiplication remains open. This, we feel, is an interesting addition to the story. This reveals a subtle but significant difference between the modern ϵ-approximation definition and Turing’s original 1936 digit-by-digit notion of a computable number, as well as between constructive and non-constructive proof. This issue of computability and numerical precision ties into algorithmic information and Kolmogorov complexity. Full article
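The carry-propagation obstacle in this abstract can be seen with a few lines of code. The sketch below is an illustration of the difficulty, not the paper's reduction: it scans the digit streams from the left; a position whose digit sum exceeds 9 forces a carry into the ones place, one below 9 blocks it, and as long as every inspected position sums to exactly 9, no finite prefix decides the answer.

```python
# Illustrative sketch, not the article's construction: deciding the ones digit of
# 0.d1d2... + 0.e1e2... from a finite prefix of the two digit streams.
def ones_digit_from_prefix(x_digits, y_digits):
    """Return 1 or 0 once the prefix decides the ones digit of the sum, else None."""
    for dx, dy in zip(x_digits, y_digits):
        s = dx + dy
        if s > 9:
            return 1   # a carry is generated and propagates back through the earlier 9s
        if s < 9:
            return 0   # the carry chain is blocked before it can reach the ones place
    return None        # every position seen so far sums to 9: still undecided

print(ones_digit_from_prefix([3, 3, 3, 2], [6, 6, 6, 6]))  # 0, as for 0.3332 + 0.666...
print(ones_digit_from_prefix([3, 3, 3, 4], [6, 6, 6, 6]))  # 1, as for 0.3334 + 0.666...
print(ones_digit_from_prefix([3] * 50, [6] * 50))          # None: 0.333... + 0.666... undecided
```
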
34 pages, 2403 KB  
Article
Literary Language Mashup: Curating Fictions with Large Language Models
by Gerardo Aleman Manzanarez, Raul Monroy, Jorge Garcia Flores and Hiram Calvo
Mathematics 2026, 14(2), 210; https://doi.org/10.3390/math14020210 - 6 Jan 2026
Viewed by 465
Abstract
The artificial generation of text by computers has been a field of study in computer science since the beginning of the twentieth century, from Markov chains to Turing tests. This has evolved into automatic summarization and marketing chatbots. The generation of literary texts by computers, most recently by Large Language Models (LLMs), has also been an area of scholarly inquiry for over six decades. The literary quality of AI-generated text can be evaluated with GrAImes, an evaluation protocol grounded in literary theory and inspired by the editorial process of book publishers. This evaluation can also be framed as part of broader editorial practices within publishing, emphasizing both theoretical grounding and applied assessment. This protocol necessitates the involvement of human judges to validate the texts generated, a process that is often resource-intensive in terms of both time and financial investment, primarily due to the specialized credentials and expertise required of these evaluators. In this paper, we propose an alternative approach by employing LLMs themselves as evaluators within the GrAImes framework. We apply this methodology to assess human-written and AI-generated microfictions in Spanish, previously evaluated by five PhD professors in literature and sixteen literary enthusiasts, as well as short stories in both Spanish and English. By comparing the evaluations performed by LLMs with those of human judges, we examine the degree of alignment and divergence between both perspectives, thereby assessing the feasibility of LLMs as auxiliary literary evaluators. Our analysis focuses on the alignment of responses from LLMs with those of human evaluators, providing insights into the potential of LLMs in literary assessment. The conducted experiments reveal that while LLMs cannot be regarded as substitutes for human judges in the evaluation of literary microfictions and short stories, with a Krippendorff’s alpha reliability coefficient less than 0.66, they can serve as a valuable tool that offers an initial perspective on the editorial quality of the texts in question. Overall, this study contributes to the ongoing discourse on the role of artificial intelligence in literature, underlining both its methodological constraints and its potential as a complementary resource for literary evaluation. Full article
(This article belongs to the Special Issue Advances in Computational Intelligence and Applications)
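Since the agreement threshold above is stated in terms of Krippendorff's alpha, the following self-contained sketch computes the nominal-data version of that coefficient. It is a generic illustration, not the GrAImes tooling or the paper's evaluation script, and the toy ratings are invented.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data.
    `units` is a list; each element holds the labels the raters gave one item
    (missing ratings simply omitted). Generic illustrative implementation."""
    coincidence = Counter()
    for labels in units:
        m = len(labels)
        if m < 2:
            continue  # units rated by fewer than two raters carry no information
        for c, k in permutations(range(m), 2):
            coincidence[(labels[c], labels[k])] += 1.0 / (m - 1)
    n_c = Counter()
    for (c, _), w in coincidence.items():
        n_c[c] += w
    n = sum(n_c.values())
    observed = sum(w for (c, k), w in coincidence.items() if c != k)
    expected = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n - 1)
    return 1.0 - observed / expected

ratings = [["good", "good"], ["good", "fair"], ["fair", "fair"], ["good", "good"]]
print(round(krippendorff_alpha_nominal(ratings), 3))  # ~0.533 on this toy data
```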

23 pages, 3559 KB  
Article
From Static Prediction to Mindful Machines: A Paradigm Shift in Distributed AI Systems
by Rao Mikkilineni and W. Patrick Kelly
Computers 2025, 14(12), 541; https://doi.org/10.3390/computers14120541 - 10 Dec 2025
Cited by 1 | Viewed by 1843
Abstract
A special class of complex adaptive systems—biological and social—thrives not by passively accumulating patterns, but by engineering coherence, i.e., the deliberate alignment of prior knowledge, real-time updates, and teleonomic purposes. By contrast, today’s AI stacks—Large Language Models (LLMs) wrapped in agentic toolchains—remain rooted in a Turing-paradigm architecture: statistical world models (opaque weights) bolted onto brittle, imperative workflows. They excel at pattern completion, but they externalize governance, memory, and purpose, thereby accumulating coherence debt—a structural fragility manifested as hallucinations, shallow and siloed memory, ad hoc guardrails, and costly human oversight. The shortcoming of current AI relative to human-like intelligence is therefore less about raw performance or scaling, and more about an architectural limitation: knowledge is treated as an after-the-fact annotation on computation, rather than as an organizing substrate that shapes computation. This paper introduces Mindful Machines, a computational paradigm that operationalizes coherence as an architectural property rather than an emergent afterthought. A Mindful Machine is specified by a Digital Genome (encoding purposes, constraints, and knowledge structures) and orchestrated by an Autopoietic and Meta-Cognitive Operating System (AMOS) that runs a continuous Discover–Reflect–Apply–Share (D-R-A-S) loop. Instead of a static model embedded in a one-shot ML pipeline or deep learning neural network, the architecture separates (1) a structural knowledge layer (Digital Genome and knowledge graphs), (2) an autopoietic control plane (health checks, rollback, and self-repair), and (3) meta-cognitive governance (critique-then-commit gates, audit trails, and policy enforcement). We validate this approach on the classic Credit Default Prediction problem by comparing a traditional, static Logistic Regression pipeline (monolithic training, fixed features, external scripting for deployment) with a distributed Mindful Machine implementation whose components can reconfigure logic, update rules, and migrate workloads at runtime. The Mindful Machine not only matches the baseline on the predictive task, but also achieves autopoiesis (self-healing services and live schema evolution), explainability (causal, event-driven audit trails), and dynamic adaptation (real-time logic and threshold switching driven by knowledge constraints), thereby reducing the coherence debt that characterizes contemporary ML- and LLM-centric AI architectures. The case study demonstrates “a hybrid, runtime-switchable combination of machine learning and rule-based simulation, orchestrated by AMOS under knowledge and policy constraints”. Full article
(This article belongs to the Special Issue Cloud Computing and Big Data Mining)
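The architectural separation described in the abstract (Digital Genome, autopoietic control, critique-then-commit governance) can be caricatured in a few lines. The sketch below is a hypothetical toy of my own in the credit-default setting mentioned above; every structure, rule, and threshold in it is a placeholder, not the authors' AMOS implementation.

```python
# Hypothetical toy sketch of a Discover-Reflect-Apply-Share (D-R-A-S) loop with a
# critique-then-commit gate; placeholders only, not the authors' AMOS system.
from dataclasses import dataclass, field

@dataclass
class DigitalGenome:
    purpose: str
    constraints: dict = field(default_factory=dict)  # encoded policy limits
    knowledge: dict = field(default_factory=dict)    # stand-in for a knowledge graph

def discover(genome, observation):
    # Propose a new decision threshold from fresh data (placeholder rule).
    return {"threshold": observation["default_rate"] * 1.5}

def reflect(genome, proposal):
    # Critique-then-commit: accept only proposals that respect encoded constraints.
    return proposal["threshold"] <= genome.constraints["max_threshold"]

def apply_change(genome, proposal):
    genome.knowledge["active_threshold"] = proposal["threshold"]

def share(genome, audit_trail):
    audit_trail.append(dict(genome.knowledge))  # record committed state for auditability

genome = DigitalGenome("credit-default scoring", {"max_threshold": 0.6})
audit_trail = []
for observation in [{"default_rate": 0.2}, {"default_rate": 0.5}]:
    proposal = discover(genome, observation)
    if reflect(genome, proposal):
        apply_change(genome, proposal)
        share(genome, audit_trail)

print(audit_trail)  # the 0.75 proposal is rejected by the gate; only 0.3 is committed
```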

66 pages, 819 KB  
Article
Tossing Coins with an 𝒩𝒫-Machine
by Edgar Graham Daylight
Symmetry 2025, 17(10), 1745; https://doi.org/10.3390/sym17101745 - 16 Oct 2025
Viewed by 691
Abstract
In computational complexity, a tableau represents a hypothetical accepting computation path p of a nondeterministic polynomial time Turing machine N on an input w. The tableau is encoded by the formula ψ, defined as ψ = ψ_cell ∧ ψ_rest. The component ψ_cell enforces the constraint that each cell in the tableau contains exactly one symbol, while ψ_rest incorporates constraints governing the step-by-step behavior of N on w. In recent work, we reformulated a critical part of ψ_rest as a compact Horn formula. In another paper, we evaluated the cost of this reformulation, though our estimates were intentionally conservative. Here, we provide a more rigorous analysis and derive a polynomial bound for two enhanced variants of our original Filling Holes with Backtracking algorithm: the refined (rFHB) and streamlined (sFHB) versions, each tasked with solving 3-SAT. The improvements stem from exploiting inter-cell dependencies spanning large regions of the tableau in the case of rFHB, and by incorporating correlated coin-tossing constraints in the case of sFHB. These improvements are purely conceptual; no empirical validation—commonly expected by complexity specialists—is provided. Accordingly, any claim regarding P vs. NP remains beyond the scope of this work. Full article
(This article belongs to the Special Issue Symmetry in Solving NP-Hard Problems)
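For readers unfamiliar with the tableau encoding, ψ_cell has the standard Cook–Levin shape shown below. This is my restatement in the usual textbook notation, and the article's exact formulation may differ; the variable x_{i,j,s} asserts that cell (i, j) of the n^k × n^k tableau holds symbol s from the finite alphabet C.

```latex
\[
\psi_{\mathrm{cell}} \;=\; \bigwedge_{1 \le i, j \le n^{k}}
\left[\;
\Bigl(\bigvee_{s \in C} x_{i,j,s}\Bigr)
\;\wedge\;
\bigwedge_{\substack{s, t \in C \\ s \neq t}} \bigl(\lnot x_{i,j,s} \lor \lnot x_{i,j,t}\bigr)
\right]
\]
```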

18 pages, 2638 KB  
Article
RNA Polymerase I Dysfunction Underlying Craniofacial Syndromes: Integrated Genetic Analysis Reveals Parallels to 22q11.2 Deletion Syndrome
by Spencer Silvey, Scott Lovell and Merlin G. Butler
Genes 2025, 16(9), 1063; https://doi.org/10.3390/genes16091063 - 10 Sep 2025
Viewed by 1363
Abstract
Background/Objective: POLR1A and related gene variants cause craniofacial and developmental syndromes, including Acrofacial Dysostosis-Cincinnati, Treacher-Collins types 2–4, and TWIST1-associated disorders. Using a patient case integrated with molecular analyses, we aimed to clarify shared pathogenic mechanisms and propose these conditions as part of a spectrum of RNA polymerase I (Pol I)–related ribosomopathies. Methods: A patient with a heterozygous POLR1A variant underwent clinical evaluation. Findings were integrated with a literature review of craniofacial syndromes to identify overlapping features. Protein-protein and gene-gene interactions were analyzed with STRING and Pathway Commons, and structural modeling of POLR1A assessed the mutation’s impact. Results: The patient exhibited features overlapping with Sweeney-Cox, Saethre-Chotzen, Robinow-Sorauf, and Treacher-Collins types 2–4, supporting a shared spectrum. Computational analyses identified POLR1A-associated partners and pathways converging on Pol I function, ribosomal biogenesis, and nucleolar processes. Structural modeling of the Met496Ile variant suggested disruption of DNA binding and polymerase activity, linking molecular dysfunction to the clinical phenotype. Conclusion: Significant clinical and genetic overlap exists among Saethre-Chotzen, Sweeney-Cox, Treacher-Collins types 2–4, and Acrofacial Dysostosis-Cincinnati. POLR1A and related Pol I subunits provide a mechanistic basis through impaired nucleolar organization and rRNA transcription, contributing to abnormal craniofacial development. Integrative protein, gene, and structural analyses support classifying these syndromes as Pol I–related ribosomopathies, with implications for diagnosis, counseling, and future mechanistic or therapeutic studies. Full article

25 pages, 489 KB  
Article
A Review on Models and Applications of Quantum Computing
by Eduard Grigoryan, Sachin Kumar and Placido Rogério Pinheiro
Quantum Rep. 2025, 7(3), 39; https://doi.org/10.3390/quantum7030039 - 4 Sep 2025
Cited by 1 | Viewed by 7668
Abstract
This manuscript is intended for readers who have a general interest in the subject of quantum computation and provides an overview of the most significant developments in the field. It begins by introducing foundational concepts from quantum mechanics—such as superposition, entanglement, and the no-cloning theorem—that underpin quantum computation. The primary computational models are discussed, including gate-based (circuit) quantum computing, adiabatic quantum computing, measurement-based quantum computing, and the quantum Turing machine. A selection of significant quantum algorithms is reviewed, notably Grover’s search algorithm, Shor’s factoring algorithm, and Quantum Singular Value Transformation (QSVT), which enables efficient solutions to linear algebra problems on quantum devices. To assess practical performance, we compare quantum and classical implementations of support vector machines (SVMs) using several synthetic datasets. These experiments offer insight into the capabilities and limitations of near-term quantum classifiers relative to classical counterparts. Finally, we review leading quantum programming platforms—including Qiskit, PennyLane, and Cirq—and discuss their roles in bridging theoretical models with real-world quantum hardware. The paper aims to provide a concise yet comprehensive guide for those looking to understand both the theoretical foundations and applied aspects of quantum computing. Full article
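As a minimal taste of the gate-based model discussed in this abstract, the snippet below prepares a Bell state, illustrating superposition and entanglement together. It is my own example, not taken from the review, and assumes a recent Qiskit installation that exposes the qiskit.quantum_info module.

```python
# Minimal gate-model illustration (my example): a Bell state showing superposition
# and entanglement. Assumes a recent Qiskit release with qiskit.quantum_info.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)      # put qubit 0 into superposition
qc.cx(0, 1)  # entangle qubit 1 with qubit 0

state = Statevector.from_instruction(qc)
print(state.probabilities_dict())  # expected (up to float error): {'00': 0.5, '11': 0.5}
```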

9 pages, 1005 KB  
Proceeding Paper
General Theory of Information and Mindful Machines
by Rao Mikkilineni
Proceedings 2025, 126(1), 3; https://doi.org/10.3390/proceedings2025126003 - 26 Aug 2025
Viewed by 1967
Abstract
As artificial intelligence advances toward unprecedented capabilities, society faces a choice between two trajectories. One continues scaling transformer-based architectures, such as state-of-the-art large language models (LLMs) like GPT-4, Claude, and Gemini, aiming for broad generalization and emergent capabilities. This approach has produced powerful tools but remains largely statistical, with unclear potential to achieve hypothetical “superintelligence”—a term used here as a conceptual reference to systems that might outperform humans across most cognitive domains, though no consensus on its definition or framework currently exists. The alternative explored here is the Mindful Machines paradigm—AI systems that could, in future, integrate intelligence with semantic grounding, embedded ethical constraints, and goal-directed self-regulation. This paper outlines the Mindful Machine architecture, grounded in Mark Burgin’s General Theory of Information (GTI), and proposes a post-Turing model of cognition that directly encodes memory, meaning, and teleological goals into the computational substrate. Two implementations are cited as proofs of concept. Full article
(This article belongs to the Proceedings of The 1st International Online Conference of the Journal Philosophies)

23 pages, 372 KB  
Article
Computability of the Zero-Error Capacity of Noisy Channels
by Holger Boche and Christian Deppe
Information 2025, 16(7), 571; https://doi.org/10.3390/info16070571 - 3 Jul 2025
Cited by 1 | Viewed by 1406
Abstract
The zero-error capacity of discrete memoryless channels (DMCs), introduced by Shannon, is a fundamental concept in information theory with significant operational relevance, particularly in settings where even a single transmission error is unacceptable. Despite its importance, no general closed-form expression or algorithm is known for computing this capacity. In this work, we investigate the computability-theoretic boundaries of the zero-error capacity and establish several fundamental limitations. Our main result shows that the zero-error capacity of noisy channels is not Banach–Mazur-computable and therefore is also not Borel–Turing-computable. This provides a strong form of non-computability that goes beyond classical undecidability, capturing the inherent discontinuity of the capacity function. As a further contribution, we analyze the deep connections between (i) the zero-error capacity of DMCs, (ii) the Shannon capacity of graphs, and (iii) Ahlswede’s operational characterization via the maximum-error capacity of 0–1 arbitrarily varying channels (AVCs). We prove that key semi-decidability questions are equivalent for all three capacities, thus unifying these problems into a common algorithmic framework. While the computability status of the Shannon capacity of graphs remains unresolved, our equivalence result clarifies what makes this problem so challenging and identifies the logical barriers that must be overcome to resolve it. Together, these results chart the computational landscape of zero-error information theory and provide a foundation for further investigations into the algorithmic intractability of exact capacity computations. Full article
(This article belongs to the Special Issue Feature Papers in Information in 2024–2025)
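The link between items (i) and (ii) in the abstract is the classical identity below, stated here in its textbook form for context: the zero-error capacity of a DMC W equals the logarithm of the Shannon capacity of its confusability graph G, where α denotes the independence number and ⊠ the strong graph product.

```latex
\[
C_0(W) \;=\; \log \Theta(G),
\qquad
\Theta(G) \;=\; \lim_{n \to \infty} \alpha\!\left(G^{\boxtimes n}\right)^{1/n}
\;=\; \sup_{n \ge 1} \alpha\!\left(G^{\boxtimes n}\right)^{1/n}.
\]
```

The limit exists and equals the supremum by Fekete's lemma, since α is supermultiplicative under the strong product.
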
24 pages, 769 KB  
Article
Injecting Observers into Computational Complexity
by Edgar Graham Daylight
Philosophies 2025, 10(4), 76; https://doi.org/10.3390/philosophies10040076 - 26 Jun 2025
Cited by 1 | Viewed by 1315
Abstract
We characterize computer science as an interplay between two modes of reasoning: the Aristotelian (procedural) method and the Platonic (declarative) approach. We contend that Aristotelian, step-by-step thinking dominates in computer programming, while Platonic, static reasoning plays a more prominent role in computational complexity. Various frameworks elegantly blend both Aristotelian and Platonic reasoning. A key example explored in this paper concerns nondeterministic polynomial time Turing machines. Beyond this interplay, we emphasize the growing importance of the ‘computing by observing’ paradigm, which posits that a single derivation tree—generated with a string-rewriting system—can yield multiple interpretations depending on the choice of the observer. Advocates of this paradigm formalize the Aristotelian activities of rewriting and observing within automata theory through a Platonic lens. This approach raises a fundamental question: How do these Aristotelian activities re-emerge when the paradigm is formulated in propositional logic? By addressing this issue, we develop a novel simulation method for nondeterministic Turing machines, particularly those bounded by polynomial time, improving upon the standard textbook approach. Full article
(This article belongs to the Special Issue Semantics and Computation)

29 pages, 351 KB  
Article
The Computability of the Channel Reliability Function and Related Bounds
by Holger Boche and Christian Deppe
Algorithms 2025, 18(6), 361; https://doi.org/10.3390/a18060361 - 11 Jun 2025
Viewed by 1557
Abstract
The channel reliability function is a crucial tool for characterizing the dependable transmission of messages across communication channels. In many cases, only upper and lower bounds on this function are known. We investigate the computability of the reliability function and its associated functions, demonstrating that the reliability function is not Turing computable. This also holds true for functions related to the sphere packing bound and the expurgation bound. Additionally, we examine the R function and the zero-error feedback capacity, as they are vital in the context of the reliability function. Neither the R function nor the zero-error feedback capacity is Banach–Mazur computable. Full article
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 3rd Edition)
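For context, the reliability function studied above is the error exponent of the channel in the usual textbook sense (this is a standard definition, not a result of the article): with P_e*(n, R) the smallest error probability achievable by block codes of length n and rate R over the DMC,

```latex
\[
E(R) \;=\; \limsup_{n \to \infty} \; -\frac{1}{n} \log P_e^{*}(n, R),
\qquad
E_{\mathrm{ex}}(R) \;\le\; E(R) \;\le\; E_{\mathrm{sp}}(R),
\]
```

where E_sp denotes the sphere-packing upper bound and E_ex the expurgated lower bound referred to in the abstract.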