The Science of Information Processing Structures and the Design of a New Class of Distributed Computing Structures

Introduction
Classical computer science, with its origins in John von Neumann's stored-program computer, which implemented an information processing structure based on the universal Turing machine, has given us tools to decipher the mysteries of physical, chemical and biological systems in nature. Both symbolic computing and sub-symbolic computing (with neural network implementations) have allowed us to model and analyze various observations (including both mental and physical processes) and to use information to optimize our interactions with each other and with our environment. In addition, our understanding of the nature of information processing structures, gained from both physical and computer experiments, is pointing us in a new direction that goes beyond the Church-Turing thesis boundaries of classical computer science.
Global access to communication, collaboration and commerce almost at the speed of light has created a demand for secure, non-stop and reliable information processing systems that assure data privacy. Current distributed information processing systems have two shortcomings in providing the required sentience (the capacity to observe or perceive), resilience (stability in the face of fluctuations in the demand for, or the availability of, resources) and intelligence (the ability to predict and mitigate risk). First, the requirement to manage widely distributed information processing software and hardware components leads either to single-vendor lock-in or to the complexity of managing myriad autonomous local management systems. Second, business processes and their automation assume trusted relationships between the participants in various transactions. Unfortunately, global connectivity and non-deterministic fluctuations in the participants and information processing resources make it necessary to verify trust before completing transactions and to act proactively to thwart bad behaviors. In essence, we need intelligent orchestration of information processing structures with global visibility and global regulation, while respecting the local autonomy and privacy of the participating actors.
In this paper, we present a new approach that addresses both shortcomings through the design of autopoietic and cognitive information processing structures, taking a cue from biological systems, which use the genome to specify and execute autopoietic and cognitive behaviors.

The Science of Information Processing Structures
Cockshott et al. [1] conclude their book "Computation and its Limits" with the paragraph: "The key property of general-purpose computers is that they are general purpose. We can use them to deterministically model any physical system, of which they are not themselves a part, to an arbitrary degree of accuracy. Their logical limits arise when we try to get them to model a part of the world that includes themselves." However, cellular organisms have managed to model a part of the world that includes themselves, as we learn from various studies in neuroscience [2,3]. The genome contained in a single cell describes, models and executes life's processes using physical structures that obey physical and chemical laws while transforming matter and energy.
The genome contains the knowledge gained through the process of evolution. Living organisms, in essence, use matter and energy transformations to execute the processes of "life" using the information and knowledge they have accumulated through evolution. This knowledge is organized in genes and neuronal networks, giving rise to both the autopoietic and the cognitive behaviors of the organism. An autopoietic system is a network of processes that produces the components that reproduce the network, and that also regulates the boundary conditions necessary for its ongoing existence as a network. Cognition, on the other hand, is the ability to process information, apply knowledge and change the circumstance. Cognition is associated with intent and its accomplishment through various processes that monitor and control a system and its environment. It is associated with a sense of "self" (the observer) and the systems with which it interacts (the environment, or the "observed"). Cognition makes extensive use of time and history in executing and regulating the tasks that constitute a cognitive process.
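The autopoietic behavior described above can be illustrated with a minimal sketch. The class and attribute names below are hypothetical, chosen only for illustration: a network of processes that regenerates its own components so that a boundary condition (here, simply the component count) is maintained despite environmental fluctuations.

```python
import random

class AutopoieticNetwork:
    """Toy sketch of an autopoietic system: a network of processes that
    produces replacement components and regulates its own boundary
    condition (the target component count). Names are illustrative."""

    def __init__(self, target_size=5):
        self.target_size = target_size  # boundary condition to maintain
        self.components = [f"proc-{i}" for i in range(target_size)]

    def perturb(self):
        # Environmental fluctuation: a component may fail and disappear.
        if self.components and random.random() < 0.5:
            self.components.pop()

    def regulate(self):
        # Autopoietic step: the network reproduces components until the
        # boundary condition is restored.
        while len(self.components) < self.target_size:
            self.components.append(f"proc-{len(self.components)}")

net = AutopoieticNetwork()
for _ in range(10):
    net.perturb()
    net.regulate()
assert len(net.components) == net.target_size
```

However many perturbations occur, the regulation loop restores the network to its target size, which is the essence of the self-maintaining behavior the paper attributes to autopoietic systems.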
The theories of structural machines, triadic automata, autopoietic machines and the "knowledge structure" schema and operations on them, so well articulated by Prof. Mark Burgin [4-6], provide a unified science of information processing structures (SIPS). SIPS allows the transition from current data-structure-based information processing to knowledge structures, from Turing machines to triadic automata, and to computations that go beyond the boundaries of the Church-Turing thesis. SIPS helps us in the following areas:
1. It helps us design and implement a new class of autopoietic machines that go beyond the boundaries of classical computer science, paving the path to a new generation of information processing systems while utilizing current-generation systems, just as the mammalian brain adopted various functions of the reptilian brain to build higher-level intelligence.
2. It allows us to design and implement an intelligent knowledge network that integrates deep learning, deep memory and knowledge from various domains, and provides a framework for deep reasoning to sense and act in real time to maintain stability and manage the risk/reward-based behaviors of the system.
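The shift from data structures to knowledge structures can be sketched informally. The sketch below is not Burgin's formal schema; it is a hypothetical illustration of the difference: a knowledge structure couples an entity with its relationships and executable behaviors, rather than storing a bare data record.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeStructure:
    """Illustrative (non-formal) knowledge structure: an entity together
    with its relationships to other entities and named behaviors."""
    entity: str
    relations: dict = field(default_factory=dict)   # links to other entities
    behaviors: dict = field(default_factory=dict)   # named executable actions

    def act(self, name, *args):
        # Cognition as applying knowledge: look up and execute a behavior.
        return self.behaviors[name](*args)

# A node that knows what it depends on and how to change its circumstance.
server = KnowledgeStructure(entity="web-server")
server.relations["depends_on"] = ["database"]
server.behaviors["scale"] = lambda n: f"web-server scaled to {n} replicas"
print(server.act("scale", 3))   # web-server scaled to 3 replicas
```

The point of the sketch is that behavior and relationship knowledge travel with the entity itself, so a node can act on its own state rather than relying on an external program to interpret a passive record.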

Designing a New Class of Distributed Computing Structures
We present an application of SIPS to design digital autopoietic machines with cognitive abilities to improve the sentience, resilience and intelligence of digital automata based on Turing machines.
Autopoietic machines are built using a knowledge network, which consists of knowledge nodes and information-sharing links between them. The knowledge nodes that are wired together fire together to manage behavioral changes in the system. Each knowledge node contains hardware, software and "infware" that manages the information processing and communication structures within the node. There are three types of knowledge nodes, depending on the nature of the infware:
1. Autopoietic Functional Node (AFN): Provides autopoietic component information processing services. Each node executes a set of specific functions based on its inputs and provides outputs that other knowledge nodes utilize.
2. Autopoietic Network Node (ANN): Provides operations on a set of knowledge nodes to configure, monitor and manage their behaviors based on group-level objectives.
3. Digital Genome Node (DGN): A system-level node that configures a set of autopoietic sub-networks, monitors them and manages them based on system-level objectives.
Each knowledge node is specialized by its infware, which defines the knowledge structures that model downstream entities/objects, their relationships and their behaviors (both autopoietic and cognitive), executed using appropriate software and hardware. The infware contains the knowledge to obtain resources and to configure, execute, monitor and manage the downstream components based on node-level objectives.
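The three node types above form a natural hierarchy, which can be sketched as follows. This is a minimal sketch under stated assumptions, not the paper's implementation; the class names mirror the AFN/ANN/DGN terminology, while the methods and attributes are hypothetical.

```python
class KnowledgeNode:
    """Base knowledge node: an objective (from its infware) plus
    information-sharing links to downstream nodes. Illustrative only."""
    def __init__(self, name, objective):
        self.name = name
        self.objective = objective  # node-level objective held by the infware
        self.links = []             # information-sharing links

    def monitor(self):
        # Report own state together with the state of downstream nodes.
        return {self.name: "ok",
                **{k: v for n in self.links for k, v in n.monitor().items()}}

class AutopoieticFunctionalNode(KnowledgeNode):
    """AFN: executes a specific function on inputs, feeding other nodes."""
    def __init__(self, name, objective, fn):
        super().__init__(name, objective)
        self.fn = fn

    def execute(self, x):
        return self.fn(x)

class AutopoieticNetworkNode(KnowledgeNode):
    """ANN: configures and manages a group of knowledge nodes."""
    def configure(self, nodes):
        self.links.extend(nodes)

class DigitalGenomeNode(KnowledgeNode):
    """DGN: system-level management of autopoietic sub-networks."""
    def configure(self, subnetworks):
        self.links.extend(subnetworks)

# Assemble a minimal three-tier network: DGN -> ANN -> AFN.
afn = AutopoieticFunctionalNode("afn-1", "transform input", lambda x: x * 2)
ann = AutopoieticNetworkNode("ann-1", "manage group")
ann.configure([afn])
dgn = DigitalGenomeNode("dgn-1", "system stability")
dgn.configure([ann])
assert afn.execute(21) == 42
```

With this wiring, a single `monitor()` call at the DGN gives the system-level global visibility the paper calls for, while each node retains local autonomy over how it executes its own objective.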