Proceeding Paper

General Theory of Information and Mindful Machines †

by
Rao Mikkilineni
Center for Business Innovation, Golden Gate University, San Francisco, CA 94105, USA
Presented at the 1st International Online Conference of the Journal Philosophies, 10–14 June 2025; Available online: https://sciforum.net/event/IOCPh2025.
Proceedings 2025, 126(1), 3; https://doi.org/10.3390/proceedings2025126003
Published: 26 August 2025

Abstract

As artificial intelligence advances toward unprecedented capabilities, society faces a choice between two trajectories. One continues scaling transformer-based architectures, such as state-of-the-art large language models (LLMs) like GPT-4, Claude, and Gemini, aiming for broad generalization and emergent capabilities. This approach has produced powerful tools but remains largely statistical, with unclear potential to achieve hypothetical “superintelligence”—a term used here as a conceptual reference to systems that might outperform humans across most cognitive domains, though no consensus on its definition or framework currently exists. The alternative explored here is the Mindful Machines paradigm—AI systems that could, in future, integrate intelligence with semantic grounding, embedded ethical constraints, and goal-directed self-regulation. This paper outlines the Mindful Machine architecture, grounded in Mark Burgin’s General Theory of Information (GTI), and proposes a post-Turing model of cognition that directly encodes memory, meaning, and teleological goals into the computational substrate. Two implementations are cited as proofs of concept.

1. Introduction

Transformer-based LLMs, including GPT-4, Gemini, and Copilot, exemplify what is often called emergent AI—AI whose capabilities arise from scaling parameters, training data, and computational power rather than explicit symbolic or causal modeling. For this paper, emergent AI refers specifically to transformer-based architectures trained on vast corpora using gradient-based optimization and self-attention mechanisms, achieving remarkable generalization without domain-specific programming.
Historically, visions of transformative AI have ranged from von Neumann’s early speculation [1] on self-replicating automata and the “technological singularity”, to Kurzweil’s predictions [2,3,4] of human–machine integration, to Altman’s pragmatic focus on governance and alignment for highly capable systems [5]. OpenAI’s weak-to-strong generalization approach uses smaller, less capable models to guide the training of larger models, representing one alignment strategy for scaled architectures [6].
While emergent AI has delivered remarkably powerful tools, its inherent lack of a theoretical foundation for consciousness, ethics, and self-regulation presents significant philosophical and practical limitations. The paper by Hendrycks, Schmidt, and Wang [7] outlines a comprehensive geopolitical framework for managing the rise of superintelligent AI systems. It introduces the concept of Mutual Assured AI Malfunction (MAIM)—a deterrence model inspired by Cold War nuclear strategy—where any attempt by a state to unilaterally dominate AI development could trigger sabotage by rival powers. The authors argue that the relative ease of disrupting AI projects (e.g., via cyberattacks or strikes on data centers) makes this a credible deterrent. Their strategy rests on three pillars: deterrence, to prevent reckless AI escalation; nonproliferation, to keep dangerous capabilities away from rogue actors; and competitiveness, to ensure national resilience through AI-enhanced economic and military strength. This framework reflects a modern evolution of ideas von Neumann helped pioneer, recognizing superintelligence as a transformative force requiring strategic foresight and global coordination. These approaches point to the fundamental shortfall of emergence without understanding and without a design with purpose.
While current systems deliver impressive general-purpose capabilities, they lack intrinsic models of meaning, persistent memory, causal reasoning, and embedded ethical governance. Philosophical and practical concerns include their black-box opacity, dependence on external alignment mechanisms, and limited ability to adapt autonomously over time.
The purpose of this paper is to
  • Highlight limitations of scaling-driven approaches to intelligence;
  • Introduce the Mindful Machines paradigm, grounded in the GTI (General Theory of Information), which encodes goals, structure, and ethical constraints directly into system architecture;
  • Demonstrate feasibility through working prototypes in distributed computing and medical decision support.
In Section 2, we discuss emergent AI. In Section 3, we present the General Theory of Information and Mindful Machines. In Section 4, we present practical implementations. In Section 5, we conclude by discussing the potential future of AI: a hybrid approach in which Mindful Machines and large language models are harmoniously integrated. If Alan Turing’s symbolic computing is the thesis, and Sam Altman’s superintelligence is the antithesis, then Mark Burgin’s GTI and Mindful Machines provide the synthesis.

2. The Emergent AI Paradigm

Mainstream emergent AI systems are typically characterized by their architecture as deep-learning “black-box” models, often monolithic in design. This inherent opacity makes their internal workings and decision-making processes difficult to interpret or debug, which is why Explainable AI (XAI) is a growing field [8,9,10]. In terms of memory, these systems are fundamentally “stateless”: they possess no persistent semantic or episodic memory, and their knowledge is derived primarily from vast training datasets rather than from accumulated, contextualized experience [11,12]. Their outputs are influenced chiefly by the immediate input context, resembling Markovian processes, so they often lack the capacity for long-term reasoning or for understanding causal relationships well enough to gain insights and develop intuition. Without explicit integration of causal models, such as structural causal models or directed acyclic graphs, these systems cannot robustly reason about interventions or counterfactuals [13,14]. Ethical alignment in current AI systems is typically achieved externally—through mechanisms like human feedback (e.g., Reinforcement Learning from Human Feedback, or RLHF) or regulatory frameworks—rather than being intrinsic to the system’s architecture or reasoning processes; ethical behavior is often “bolted on” rather than deeply embedded [15,16]. Their goal orientation is “prompt-driven,” indicating a lack of intrinsic motivation: their purpose is externally imposed by user queries or developers. Within this paradigm, large language models (LLMs) are viewed primarily as “autonomous generators” [17]. LLMs and transformer-based systems are typically deployed statically, lacking built-in mechanisms for self-modification or dynamic evolution of their architecture or behavior [18]. Additionally, agentic architectures—where multiple AI agents interact or manage each other—face the “who manages the managers” problem, a recursive challenge in oversight and alignment [19].
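To make the statelessness and Markovian character concrete, consider the minimal Python sketch below. It is an illustration only, not any production LLM API: the `toy_lm` function and its bigram table are hypothetical stand-ins for learned weights. Each call conditions only on the prompt it is handed, and nothing persists between calls.

```python
import random

# A hypothetical bigram table standing in for learned weights: the
# "model" is a frozen mapping from the last token to a distribution
# over next tokens. No episodic memory exists anywhere.
BIGRAMS = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.6, "sat": 0.4},
}

def toy_lm(prompt: list[str], steps: int = 3) -> list[str]:
    """Generate tokens conditioned only on the immediate context.

    Each invocation starts from scratch: no state survives across
    calls, mirroring the stateless deployment described above.
    """
    tokens = list(prompt)
    for _ in range(steps):
        dist = BIGRAMS.get(tokens[-1])
        if dist is None:
            break  # unseen context: no causal fallback, just silence
        choices, weights = zip(*dist.items())
        tokens.append(random.choices(choices, weights=weights)[0])
    return tokens

# Two calls with the same prompt share no accumulated experience:
print(toy_lm(["the"]))
print(toy_lm(["the"]))  # unaffected by the previous interaction
```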
Emergent AI systems are powerful yet devoid of intrinsic understanding, purpose, or awareness. This challenges performance-based definitions of intelligence, shifting the philosophical debate from “how much” intelligence to “what kind” of intelligence truly matters. These systems are reactive and opaque, optimized for output but disconnected from meaning, lacking a theoretical foundation for consciousness or semantic grounding.
The post hoc bolting-on of ethics raises concerns about accountability, transparency, and trust, especially in novel or unforeseen contexts where pre-programmed constraints may fail. The “black-box” nature of these models exacerbates these concerns, as their decision-making processes remain largely inscrutable. Moreover, their lack of self-regulation, their stateless memory, and their prompt-driven behavior prevent them from evolving based on accumulated experience or intrinsic goals. They cannot autonomously adapt or correct themselves over time, which limits their capacity for genuine autonomy [16].
In addition, the von Neumann architecture, which implements the Turing Machine model, separates data and processing (both represented as sequences of symbols). This architecture uses computing resources (memory and CPU) to execute operations on data. However, the algorithm lacks awareness of the required resources, and the computing resources do not know the algorithm’s requirements for execution. This separation leads to complexity in resource management, especially when there are large fluctuations in resource demand or availability.
Finally, these systems operate through symbol manipulation without semantic grounding. Their reasoning is statistical and Markovian, predicting likely outputs based on prior data rather than understanding meaning or causality. This distinction between syntax and semantics is central to the philosophical critique: emergent AI may simulate intelligence, but it does not comprehend.
In the next section, we introduce Mindful Machines, derived from the principles of the General Theory of Information [20,21,22] and the Burgin–Mikkilineni Thesis [23,24,25,26,27], which address these concerns.

3. Mindful Machines: Foundations of a Post-Turing Cognition

In this framework, computation is redefined not as the passive execution of instructions, but as an active, self-organizing process that mirrors the properties of living systems—autonomy, understanding, and purpose. Key innovations include: (1) self-modification, where systems can alter their own structure and behavior during operation; (2) semantic interpretation, allowing machines to process meaning rather than just symbols; and (3) causal reasoning, enabling them to infer cause-effect relationships beyond statistical correlations. These capabilities address long-standing critiques of emergent AI, particularly its lack of intrinsic understanding, ethical autonomy, and semantic grounding. Figure 1 captures the new computing structure.
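As a worked illustration of the causal-reasoning capability named in point (3), the following sketch encodes a toy structural causal model and contrasts observational conditioning with an intervention (the do-operator). The variables and probabilities are invented for exposition and are not drawn from the cited prototypes.

```python
import random

# Toy structural causal model (sprinkler example):
# rain -> sprinkler, and both rain and sprinkler cause wet grass.
def sample(do_sprinkler=None):
    rain = random.random() < 0.3
    if do_sprinkler is None:
        sprinkler = (not rain) and random.random() < 0.5
    else:
        sprinkler = do_sprinkler  # intervention severs rain -> sprinkler
    wet = rain or sprinkler
    return rain, sprinkler, wet

# Observational: P(rain | sprinkler on) -- dependence via the mechanism.
obs = [sample() for _ in range(100_000)]
p_obs = sum(r for r, s, w in obs if s) / max(1, sum(1 for r, s, w in obs if s))

# Interventional: P(rain | do(sprinkler := on)) -- rain is unaffected.
do = [sample(do_sprinkler=True) for _ in range(100_000)]
p_do = sum(r for r, s, w in do) / len(do)

print(f"P(rain | sprinkler on)     ~ {p_obs:.2f}")  # ~0.0: observation
print(f"P(rain | do(sprinkler on)) ~ {p_do:.2f}")   # ~0.3: intervention
```

A purely statistical predictor can only estimate the first quantity; distinguishing it from the second requires exactly the explicit causal structure this paradigm proposes to encode.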
The Mindful Machines paradigm marks a fundamental departure from conventional emergent AI by introducing a post-Turing model of cognition rooted in the General Theory of Information (GTI) [20,21,22,23,24]. Unlike traditional systems that are reactive, prompt-driven, and largely statistical, Mindful Machines aim to embed memory, meaning, and intentionality directly into the computational substrate. This approach challenges the Church–Turing thesis by dissolving the rigid boundary between the computer and the computed, enabling systems to engage in self-modification, semantic interpretation, and causal reasoning [27,28,29]. In essence, structure (form) and function are self-regulated, driven by the system’s goals and self-knowledge.
At the core of this paradigm is the Digital Genome (DG) derived from the Burgin–Mikkilineni Thesis [23,24], a prescriptive blueprint inspired by biological genomes. Unlike data-driven models such as large language models (LLMs), which operate as opaque black boxes, DG-based systems are designed with explicit operational knowledge, ethical constraints, and self-regulation policies. This enables them to manage themselves within defined boundaries, fostering transparency, resilience, and principled behavior by design.
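As a loose sketch of what explicit operational knowledge, ethical constraints, and self-regulation policies might look like as an inspectable artifact, the following Python fragment models a Digital Genome record. All field names, the example goals, and the example policy are hypothetical and are not taken from the cited implementations.

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    """A self-regulation rule: a named condition and the action taken."""
    name: str
    condition: str   # predicate over telemetry, e.g. "latency_ms > 200"
    action: str      # e.g. "scale_out", "migrate", "alert_operator"

@dataclass
class DigitalGenome:
    """Prescriptive blueprint: goals, constraints, and policies are
    explicit data, inspectable before and during execution, unlike
    the weights of an opaque statistical model."""
    goals: list[str]                     # teleological goals
    functional_spec: dict[str, str]      # what each component must do
    ethical_constraints: list[str]       # hard limits on behavior
    policies: list[Policy] = field(default_factory=list)

dg = DigitalGenome(
    goals=["serve video with p99 latency < 200 ms"],
    functional_spec={"transcoder": "H.264 -> HLS", "edge": "cache + serve"},
    ethical_constraints=["never log user identity with viewing history"],
    policies=[Policy("failover", "component_down", "restart_or_migrate")],
)
```

Because the blueprint is data rather than weights, it can be versioned, audited, and enforced at runtime, which is the transparency-by-design claim made above.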
Grounded in the Burgin–Mikkilineni Thesis and implemented in two use cases [26,29], the architecture integrates structural machines, triadic automata, and cognizing oracles—components that allow systems to process structured knowledge, adapt dynamically, and monitor their own performance [26]. While these systems do not possess human-like consciousness or subjective experience, their “mindfulness” is functional: they can self-correct, learn from experience, and regulate emergent behaviors within their operational scope.
Ultimately, the DG framework offers a compelling path forward for building safe, adaptive, and ethically aligned AI. It reframes intelligence not as perfect logic or sentience, but as the capacity for dynamic adaptation, principled autonomy, and coherent interaction with the world. For AI professionals, this represents a shift from training models to engineering digital organisms—systems that are not only capable but also accountable by design. Figure 2 shows the evolution of computing paradigms: from the stored-program implementation of the Turing Machine, through deep-learning neural networks and the generative AI pioneered by OpenAI and Sam Altman, to the super-symbolic computing model with structural machines, cognizing oracles, and knowledge structures derived from Burgin’s General Theory of Information [5,6,26,30].
As Figure 2 shows, the DG approach introduces memory populated by symbolic and sub-symbolic structures and adds a super-symbolic layer that uses cognizing oracles as information-processing elements. These oracles analyze memory-based knowledge to reason about, predict, and validate future actions based on goals, ethical considerations, and experience, avoiding self-referential circularity by anchoring to truth as external reality. In the next section, we describe two prototypes of the DG approach that confirm its feasibility. Some of the terms used in this paper are explained in Appendix A.

4. Real-World Prototypes: Demonstrating the Feasibility

Distributed Video Streaming Application Prototype: This prototype [25] leverages the Digital Genome to design, deploy, operate, and manage a cloud-agnostic distributed video streaming service. It utilizes structural machines, cognizing oracles, associative memory, and event-driven interaction history for self-regulation and enhanced cognitive behaviors. The system achieves “autopoietic self-regulation” by maintaining structural stability and smooth communication among distributed components. It also enhances resilience through mechanisms like auto-failover, ensuring continuous service. While this prototype demonstrates sophisticated engineered self-regulation, adaptation, and system management, its “mindfulness” is a function of its explicit design and predefined policies rather than emergent, intrinsic consciousness. There is no evidence presented of genuine subjective experience, self-awareness in the human sense, or a capacity for self-directed goal setting beyond its programmed objectives. The “self” in this context is a functional construct for robust system operation, not a conscious entity.
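The auto-failover behavior described above can be pictured as a short control loop. This is a minimal sketch under assumed names: the component registry, the `healthy` probe, and the backup placements are hypothetical, not the prototype’s actual code.

```python
import random
import time

# Hypothetical registry of components and backup placements.
components = {"transcoder": "us-west", "edge-cache": "us-east"}
backups = {"us-west": "us-east", "us-east": "eu-west", "eu-west": "us-west"}

def healthy(name: str) -> bool:
    """Stand-in health probe; a real system would check heartbeats."""
    return random.random() > 0.1   # simulated 10% failure rate

def autopoietic_loop(cycles: int = 3, period_s: float = 0.1) -> None:
    """Observe component health, compare against the prescribed
    structure, and act (migrate to a backup region) to restore it:
    the auto-failover behavior, reduced to its skeleton."""
    for _ in range(cycles):
        for name, region in list(components.items()):
            if not healthy(name):
                components[name] = backups[region]
                print(f"auto-failover: {name} migrated "
                      f"{region} -> {components[name]}")
        time.sleep(period_s)

autopoietic_loop()
```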
Medical Knowledge-Driven Early Diagnosis Digital Assistant Prototype: This prototype [26] is also implemented using a Digital Genome derived from the GTI, specifically designed to facilitate early disease diagnosis by bridging the knowledge gap between patients and healthcare providers. It utilizes a structural machine, an autopoietic manager (implemented using containers), a cognitive network manager, and a digital replica/long-term memory stored in a graph database. As with the streaming service, while highly functional and adaptive within its domain, its “mindfulness” is a product of sophisticated engineering and adherence to predefined medical knowledge and policies, rather than emergent consciousness.
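A minimal sketch of the digital replica/long-term memory idea follows, using a plain in-memory adjacency structure in place of the prototype’s graph database; the node and relation names are invented for illustration.

```python
from collections import defaultdict

class GraphMemory:
    """Long-term memory as a labeled knowledge graph:
    (subject) -[relation]-> (object), accumulated per encounter."""

    def __init__(self):
        self.edges = defaultdict(list)   # subject -> [(relation, object)]

    def record(self, subject, relation, obj):
        """Append an event/fact; history persists across interactions."""
        self.edges[subject].append((relation, obj))

    def recall(self, subject, relation=None):
        """Retrieve accumulated, contextualized experience -- the
        episodic memory that stateless LLM deployments lack."""
        facts = self.edges.get(subject, [])
        return [o for r, o in facts if relation is None or r == relation]

memory = GraphMemory()
memory.record("patient-042", "reported_symptom", "persistent cough")
memory.record("patient-042", "reported_symptom", "low-grade fever")
print(memory.recall("patient-042", "reported_symptom"))
```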
Both prototypes are presented as videos [31] in the Supplementary Materials, which describe the implementation details, memory representations, and the management of both the structure and function of the distributed software applications.

5. Conclusions

The Digital Genome moves beyond the prevalent data-driven learning paradigm to a prescriptive, blueprint-based approach, providing an intrinsic “genetic code” for digital entities. This enables robust self-correction, sophisticated adaptive learning through integrated memory systems, and principled behavior by design, significantly mitigating the limitations of emergent AI described in Section 2. A key differentiator of this framework is its direct focus on “taming emergence” through internal mechanisms like meta-cognition and self-modeling, steering emergent behavior towards teleonomic goals, unlike many AGI safety proposals that focus on external control.
The understanding derived from this analysis, particularly in light of Gödel’s theorems, points to a philosophical shift in what constitutes “success” for advanced AI. Instead of striving for an “infallible” or “perfect AI” (which Gödel’s theorems suggest is a “mirage” for a fixed formal system), the focus shifts to building “resilient and adaptable AI that can operate effectively despite inherent logical limitations”. This redefines “intelligence” not as absolute provability within a single system, but as the capacity for dynamic self-correction, external grounding, and strategic adaptation across multiple logical frameworks.
A frequent objection to advancing machine intelligence is that modern computing hardware, built on static von Neumann architectures, cannot match the adaptive cellular–neuronal substrate of living organisms, and that continued scaling of identical processors will eventually lead to unsustainable energy consumption. However, we are not simulating a brain. We are replicating the functions of brain and mind as algorithmic processes on a silicon substrate, as explained in Appendix B.
The path forward for principled and adaptive AI necessitates continued research and development across several key areas:
  • Mechanisms for dynamic DG evolution, allowing its foundational specifications to adapt to unpredictable environmental shifts while maintaining consistency and integrity;
  • Rigorous formal verification and empirical validation techniques tailored to DG-driven systems, including metrics for quantifying self-regulation, resilience, and the degree of ethical alignment achieved, moving beyond traditional accuracy metrics;
  • A deeper exploration of the interplay between prescriptive behaviors defined by the DG and emergent behaviors arising from deep learning components, particularly in contexts involving non-Markovian reasoning and long-term dependencies;
  • Expanding practical applications of DG-driven Mindful Machines into critical domains demanding exceptionally high reliability, autonomy, and ethical rigor, such as advanced medical AI assistants, highly complex distributed systems, and potentially foundational architectures for Artificial General Intelligence and advanced robotics;
  • Continued exploration of how the DG can robustly embed and enforce “ethical principles that govern system behavior”.
This will ensure that as machines become more autonomous, they remain aligned with human values and operate within a framework of “conscious human–machine collaboration”.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/proceedings2025126003/s1. Video S1: Mindful Machines; Presentation: Mindful Machines (including Video S2: Digital Genome and self-regulating distributed software applications with associative memory and event-driven history); Video S3: Applying the General Theory of Information.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The author acknowledges many useful discussions with Max Michaels, Patrick Kelly, and Gideon Crawley. During the preparation of this manuscript, the author used Gemini and Copilot for the purposes of research. The author has reviewed and edited the output and takes full responsibility for the content of this publication.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A. Explanations

Here are some terms and their explanations:
  • Large Language Model (LLM): A transformer-based probabilistic program modeling P(next token | context) over large vocabularies, learned from internet-scale corpora. Generates text/code via conditional sampling; has no intrinsic world model, goals, or persistent episodic memory unless extended.
  • Cognizing Oracles: Runtime epistemic middleware that bridges goals/policies and execution. It reads the Digital Genome plus telemetry; consults memory and causal models; chooses and justifies actions (place/move/scale/stop services; adjust configs) to satisfy goals at minimum cost/energy; and writes back outcomes for learning (a minimal sketch follows this list).
  • Unity of the Computer and the Computed (Figure 1): Achieved by co-specifying function, form, and policy in the Digital Genome; compiling it into live graphs via Structural Machines; orchestrating it continuously via Cognizing Oracles; executing it via Autopoietic Managers; and closing the learning loop via integrated semantic/episodic memory.
  • Digital Genome (DG) and Non-Digital Genome: DG = versioned, executable knowledge graph encoding teleological goals, functional specs, non-functional intents, structural patterns, lifecycle policies, and model/memory references. Non-Digital Genomes = biological (chemical) prescriptive blueprints; DG is the engineered, digital counterpart.
  • Autopoietic Systems (Figure 2) vs. von Neumann Self-Reproduction: von Neumann self-reproduction is the syntactic replication of a description via a universal constructor in an abstract lattice; it has no goals or semantics.
  • Autopoietic Mindful Systems: sustain and adapt their organization in real environments; replication, healing, and migration are goal-driven, memory- and semantics-informed, energy-aware, and optimized against SLOs. They couple operational closure with environmental adaptation and learn from history.
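To ground the Cognizing Oracle entry above, here is the promised minimal decision-loop sketch: it reads genome goals and telemetry, scores candidate actions by a predicted cost, and writes the outcome back to memory for later learning. All names and the scoring rule are hypothetical simplifications.

```python
def cognizing_oracle(genome, telemetry, memory):
    """One oracle cycle: sense -> deliberate -> act -> remember.

    genome:    dict of goals/policies (stand-in for the Digital Genome)
    telemetry: current observations, e.g. {"latency_ms": 240, "load": 0.9}
    memory:    list accumulating (state, action, predicted cost)
    """
    candidates = ["no_op", "scale_out", "migrate", "shed_load"]

    def predicted_cost(action):
        # Hypothetical cost model: penalize goal violations heavily and
        # the energy/price of the action lightly. A real oracle would
        # draw on causal models and episodic memory here.
        violation = max(0, telemetry["latency_ms"] - genome["latency_goal_ms"])
        effect = {"no_op": 0, "scale_out": 120, "migrate": 80, "shed_load": 60}
        energy = {"no_op": 0, "scale_out": 3, "migrate": 2, "shed_load": 1}
        return max(0, violation - effect[action]) + energy[action]

    action = min(candidates, key=predicted_cost)
    # Write back (state, action, predicted outcome) for learning.
    memory.append((dict(telemetry), action, predicted_cost(action)))
    return action

memory = []
genome = {"latency_goal_ms": 200}
print(cognizing_oracle(genome, {"latency_ms": 240, "load": 0.9}, memory))
```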

Appendix B. Hardware Limitations and Energy Utilization

A frequent objection to advancing machine intelligence is that modern computing hardware, built on static von Neumann architectures, cannot match the adaptive cellular–neuronal substrate of living organisms, and that continued scaling of identical processors will eventually lead to unsustainable energy consumption. However, it is important to note that we are not simulating brains. We are emulating functional behaviors with engineered software and hardware agents. Figure A1 shows the difference between biological systems and Mindful Machines.
Figure A1. The difference between biological systems and Mindful Machines.
The Mindful Machines paradigm addresses this limitation by replacing monolithic, centralized compute farms with virtualized, distributed, and concurrent execution across heterogeneous cloud IaaS/PaaS infrastructures. Autopoiesis—implemented through redundancy in virtual hardware allocation and replication of software containers—enables systems to self-heal, self-scale, and migrate in real time. Cognizing oracles orchestrate execution based on task priority, latency, cost, and energy constraints, ensuring just-in-time resource use and releasing idle capacity to minimize baseline energy drain. This approach decouples form from function, allowing systems to adapt to the available infrastructure without compromising capability, and shifts growth from brute-force hardware multiplication to knowledge-driven, energy-aware orchestration. As a result, Mindful Machines bypass the hard physical ceilings and energy-death trajectories of traditional AI scaling, aligning computational evolution with principles of sustainability, resilience, and adaptive efficiency.
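The just-in-time resource use described above can be reduced to a few lines. In this sketch the demand figures, per-replica capacity, and energy numbers are assumptions chosen only to illustrate how releasing idle capacity curbs baseline energy drain.

```python
import math

CAPACITY_PER_REPLICA = 100   # requests/s one container absorbs (assumed)
IDLE_WATTS_PER_REPLICA = 40  # baseline drain of an idle replica (assumed)

def replicas_needed(demand_rps: float) -> int:
    """Scale to demand and release idle capacity:
    zero demand means zero replicas."""
    return math.ceil(demand_rps / CAPACITY_PER_REPLICA)

for demand in [0, 50, 250, 1000]:
    n = replicas_needed(demand)
    saved = (10 - n) * IDLE_WATTS_PER_REPLICA
    print(f"demand={demand:4d} rps -> {n} replicas; "
          f"idle baseline avoided vs a fixed fleet of 10: {saved} W")
```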

References

  1. von Neumann, J. The Computer and the Brain; Yale University Press: New Haven, CT, USA, 1958.
  2. Kurzweil, R. The Age of Intelligent Machines; MIT Press: Cambridge, MA, USA, 1990.
  3. Kurzweil, R. The Age of Spiritual Machines: When Computers Exceed Human Intelligence; Viking: New York, NY, USA, 1999.
  4. Kurzweil, R. The Singularity Is Near: When Humans Transcend Biology; Viking: New York, NY, USA, 2005.
  5. Altman, S. The Gentle Singularity. 2025. Available online: https://blog.samaltman.com/the-gentle-singularity (accessed on 30 June 2025).
  6. OpenAI. Weak-to-Strong Generalization. 2024. Available online: https://openai.com/index/weak-to-strong-generalization (accessed on 30 June 2025).
  7. Hendrycks, D.; Schmidt, E.; Wang, A. Superintelligence strategy: Expert version. arXiv 2025, arXiv:2503.05628.
  8. Hassija, V.; Chamola, V.; Mahapatra, A.; Singal, A.; Goel, D.; Huang, K.; Scardapane, S.; Spinelli, I.; Mahmud, M.; Hussain, A. Interpreting black-box models: A review on explainable artificial intelligence. Cogn. Comput. 2024, 16, 45–74.
  9. Savage, N. Breaking into the Black Box of Artificial Intelligence. Nature. Available online: https://www.nature.com/articles/d41586-022-00858-1 (accessed on 11 August 2025).
  10. von Eschenbach, W.J. Transparency and the Black Box Problem: Why We Do Not Trust AI. Philos. Technol. 2021, 34, 1607–1622.
  11. DeChant, C. Episodic memory in AI agents poses risks that should be studied and mitigated. arXiv 2025, arXiv:2501.11739.
  12. Kumar, A.A. Semantic memory: A review of methods, models, and current challenges. Psychon. Bull. Rev. 2020, 28, 40–80.
  13. Zhou, S.; Schölkopf, B. Causal reasoning and large language models: Opening a new frontier for AI. arXiv 2023, arXiv:2305.00050.
  14. Baron, S. Explainable AI and causal understanding: Counterfactual approaches considered. Minds Mach. 2023, 33, 347–377.
  15. Hagendorff, T. The ethics of AI ethics: An evaluation of guidelines. Minds Mach. 2020, 30, 99–120.
  16. van Maanen, H. The philosophy and ethics of AI: Conceptual, empirical, and normative perspectives. Digit. Soc. 2024, 3, 10.
  17. Ye, G.; Pham, K.D.; Zhang, X.; Gopi, S.; Peng, B.; Li, B.; Kulkarni, J.; Inan, H.A. On the emergence of thinking in LLMs I: Searching for the right intuition. arXiv 2025.
  18. Shahzad, T.; Mazhar, T.; Tariq, M.U.; Ahmad, W.; Ouahada, K.; Hamam, H. A comprehensive review of large language models: Issues and solutions in learning environments. Discov. Sustain. 2025, 6, 27.
  19. Cemri, M.; Pan, M.Z.; Yang, S.; Agrawal, L.A.; Chopra, B.; Tiwari, R.; Keutzer, K.; Parameswaran, A.; Klein, D.; Ramchandran, K.; et al. Why do multi-agent LLM systems fail? arXiv 2025.
  20. Burgin, M. Theory of Information: Fundamentality, Diversity, and Unification; World Scientific: Singapore, 2010.
  21. Burgin, M. Theory of Knowledge: Structures and Processes; World Scientific: New York, NY, USA; London, UK; Singapore, 2016.
  22. Burgin, M. Structural Reality; Nova Science Publishers: New York, NY, USA, 2012.
  23. Mikkilineni, R. A New Class of Autopoietic and Cognitive Machines. Information 2022, 13, 24.
  24. Burgin, M.; Mikkilineni, R. On the Autopoietic and Cognitive Behavior. EasyChair Preprint No. 6261, Version 2. 2021. Available online: https://easychair.org/publications/preprint/tkjk (accessed on 27 December 2021).
  25. Mikkilineni, R.; Kelly, W.P.; Crawley, G. Digital genome and self-regulating distributed software applications with associative memory and event-driven history. Computers 2024, 13, 220.
  26. Kelly, W.P.; Coccaro, F.; Mikkilineni, R. General theory of information, digital genome, large language models, and medical knowledge-driven digital assistant. Comput. Sci. Math. Forum 2023, 8, 70.
  27. Mikkilineni, R. Going beyond Church–Turing Thesis Boundaries: Digital Genes, Digital Neurons and the Future of AI. Proceedings 2020, 47, 15.
  28. Cockshott, P.; MacKenzie, L.M.; Michaelson, G. Computation and Its Limits; Oxford University Press: Oxford, UK, 2012.
  29. Mikkilineni, R. Mark Burgin’s Legacy: The General Theory of Information, the Digital Genome, and the Future of Machine Intelligence. Philosophies 2023, 8, 107.
  30. Turing, A.M. The Essential Turing; Copeland, B.J., Ed.; Oxford University Press: Oxford, UK, 2004.
  31. Digital Genome Implementation Presentations: Autopoietic Machines. Available online: https://triadicautomata.com/digital-genome-vod-presentation/ (accessed on 30 June 2025).
Figure 1. Digital Genome and information processing structures (black denotes processes).
Figure 2. Evolution of computing (thesis is black, antithesis is blue, and synthesis is red).