Triadic Automata and Machines as Information Transformers

Algorithms and abstract automata (abstract machines) are used to describe, model, explore and improve computers, cell phones, computer networks, such as the Internet, and the processes in them. Traditional models of information processing systems, abstract automata, are aimed at performing transformations of data. These transformations are performed by their hardware (abstract devices) and controlled by their software (programs), both of which stay unchanged during the whole computational process. However, in physical computers, the software is also changed by special tools such as interpreters, compilers, optimizers and translators. In addition, people change the hardware of their computers, for example, by extending the external memory. Moreover, the hardware of computer networks is incessantly altering: new computers and other devices are added while other computers and other devices are disconnected. To better represent these peculiarities of computers and computer networks, we introduce and study a more complete model of computations, which is called a triadic automaton or machine. In contrast to traditional models of computations, triadic automata (machines) perform computational processes transforming not only data but also the hardware and the programs that control data transformation. In addition, we further develop the taxonomy of classes of automata and machines, as well as of individual automata and machines, according to the information they produce.


Introduction
It is well known that computers are processing information. At the same time, they are also containers or carriers of information. That is why it is so important to study properties of computers and computer networks from the information perspective. To efficiently do this, researchers need adequate models of computers and computer networks. Many properties of computers, computer networks and computation are well presented in conventional models of automata and algorithms. However, there are still properties that demand new models.
Traditionally, computation is treated as data transformation, which modifies information contained in data and creates new information. At the same time, there are important traits of computers and computer networks that are missed in traditional models. For instance, the majority of these models do not have input and output systems. The absence of these systems prevented finding and formalizing inductive computations for a long time [1,2].
Other ignored characteristics are related to hardware and software transformations, which take place in computers and computer networks. As a matter of fact, in physical computers, programs are changed by special software tools such as interpreters, compilers, optimizers and translators. Besides, by using external memory, people change the hardware of their computers. The hardware of computer networks is permanently altering: new computers and other devices are added while other computers and other devices are disconnected.
The paper is organized as follows. In the next section, we analyze the concepts of algorithms, automata and machines. In the third section, we describe the inner structure of abstract automata, which consists of hardware, software and infware, where infware can be treated as the data processed by an automaton. In the fourth section, we introduce and formally describe the concept of a triadic automaton. In the fifth section, we study the dynamics of triadic automata and machines. In Conclusion, we summarize the obtained results and suggest directions for future research.

Algorithms, Automata and Machines
It would be no exaggeration to say that the basic concepts of computer science are algorithms, automata, machines and computation. Nevertheless, there are no unanimously accepted definitions of these important concepts. That is why, in this section, we analyze these concepts, explicate relations between them and suggest informal definitions of these concepts in the context of information processing.

Definition 1.
An algorithm is a constructive finite description of a set of processes aimed at solving some problem with the exact definitions of its input and result.
Here, constructive means that all described operations are comprehensible, executable and finite. There are different types, categories, kinds, forms and classes of algorithms. Let us consider some of them.
According to the world structuration described in [16], the following types of algorithms are used by people: 1. Physically represented algorithms; 2. Structurally represented algorithms, e.g., structures of computer programs or of transition rules of finite automata; 3. Mentally represented algorithms.
In turn, physically represented algorithms have the following forms: 1. Instrumental algorithms, i.e., algorithms embodied in devices; 2. Textual algorithms, e.g., a system of instructions or rules; 3. Numeric algorithms, e.g., algorithms as weights in neural networks.
Textual and numeric algorithms together form the class of symbolic algorithms.
In this context, textual algorithms can be divided into three subclasses:
Intentionally expressed textual algorithms giving only descriptions of what is necessary to do, e.g., programs in functional or relational programming languages.
This classification reflects levels of explicitness of algorithm representations. According to their forms, the following categories of algorithms are used by people: 1. Parametric algorithms, e.g., systems of weights in neural networks; 2. Instruction algorithms, e.g., systems of transition rules of Turing machines; 3. Description algorithms, e.g., programs in functional or relational programming languages.
This classification reflects levels of symbolism in algorithm representations.
There are also levels of algorithms, which reflect grades of explicit descriptions of computational processes by algorithms [20,21]. For simplicity, we describe these levels for algorithms that have the form of a system of instructions or rules.
Algorithms of the first level contain only instructions (rules) for data transformation.
Algorithms of the second level contain instructions (rules) for data transformation and execution instructions (or metarules), which describe how to apply data transformation instructions (rules).
For instance, in finite automata, instructions (rules) that define how to select an appropriate transition are execution instructions (or metarules).
Note that metarules of instruction selection can essentially influence the functioning of systems where they are used.
Algorithms of the third level contain instructions (rules) for data transformation, execution instructions (metarules) of the first level, which describe how to apply data transformation instructions (rules), and execution instructions (metarules) of the second level, which describe how to apply execution instructions (metarules) of the first level.
It is possible to continue this construction considering algorithms of any level n.
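As an illustration, a two-level algorithm of this kind can be sketched in code (a minimal sketch with hypothetical rule names; only the separation of rules from metarules is taken from the text). Changing the metarule of instruction selection changes the result, which illustrates the remark above.

```python
# A minimal sketch (all names hypothetical) of first- and second-level
# algorithms: first-level rules transform data, while a second-level
# metarule decides which rule to apply next.

def double(x):           # first-level rule: data transformation
    return 2 * x

def increment(x):        # first-level rule: data transformation
    return x + 1

RULES = [double, increment]

def first_listed(rules, x):
    """Second-level metarule: always select the first listed rule."""
    return rules[0]

def last_listed(rules, x):
    """A different metarule: always select the last listed rule."""
    return rules[-1]

def run(x, steps, select=first_listed):
    # The metarule `select` controls how data-transformation rules
    # are chosen, so changing it changes the whole computation.
    for _ in range(steps):
        x = select(RULES, x)(x)
    return x
```

With the same rules and input, `run(1, 3)` yields 8 under the first metarule and 4 under the second, so the metarule essentially influences the functioning of the system.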
Algorithms are utilized to control and direct the functioning of automata and machines, which are systemic devices. That is why, at first, we consider the general concept of a device.

Definition 2.
A device is a structure that performs actions and generates processes.
There are different types, categories, kinds and classes of devices. Let us consider some of them. According to the world structuration described in [16], the following types of devices are used by people: 1. Physical devices, e.g., computers or calculators; 2. Abstract devices, e.g., abstract automata such as finite automata or Turing machines; 3. Mental devices, e.g., mental schemas of neural systems, which perform addition or multiplication [22,23].
It is important to understand that only instrumental algorithms are devices. The two other classes, textual and numeric algorithms, are not devices because they do not perform actions themselves. For instance, a Turing machine is a device while a partial recursive function is not a device. Algorithms that are not devices need devices or people to be performed.
According to their origin, the following classes of devices are created and used by people: 1. Artificial devices are created by people, e.g., computers; 2. Natural devices exist in nature, e.g., an organism of a living being or the Earth, considered as a device; 3. Combined devices are combinations of artificial and natural devices, e.g., a car with a driver or a plane with a pilot.
People construct and use a diversity of artificial devices: computers, cars, planes, cell phones, ships and so on. At the same time, people also use different natural devices. The system of a sundial is an example of such a device. It consists of three subsystems: the Sun, the Earth and the sundial itself.
According to the classification of natural systems, the following kinds of devices exist: 1. Animate devices are or include living individuals, 2. Inanimate devices do not include living individuals, and 3. Hybrid natural devices are combinations of animate and inanimate devices.
In some generalized sense, it is possible to consider all living beings as animate devices. The Solar System is an example of an inanimate device.
Automata and machines are important special cases of devices.

Definition 3.
An automaton is a device that autonomously performs actions prescribed by an algorithm when a relevant input is given.
The functioning of automata is controlled by algorithms.
Similar to devices in general, there are three basic classes of automata: 1. Artificial automata are created by people, e.g., electronic clocks; 2. Natural automata exist in nature, e.g., the Solar system; 3. Combined automata are combinations of artificial and natural devices, e.g., a sundial, which consists of two natural systems, the Sun and the Earth, combined with one artificial system, the sundial itself.
In essence, all classifications of devices are applicable to automata. Very often people treat theoretical automata and machines as the same class of objects. For instance, Turing machines are considered the most popular class of abstract automata. However, in our study, we make a distinction between automata and machines.

Definition 4.
A machine is a device that, when a relevant input is given, performs actions prescribed by an algorithm, which can involve participation of people.
For instance, cell phones are machines while clocks are automata. Note that according to these definitions, any automaton is a machine, but it is not true that any machine is an automaton.
In essence, all classifications of devices induce corresponding classes of automata and machines because automata and machines are devices. To avoid repetition, we do not consider those types, categories, kinds and classes of automata and machine classification that are induced by classifications of devices. At the same time, there are classifications of automata and machines brought about by classification of algorithms. It is possible to take the following classification as an example.
According to the classification of algorithms that control automata, the following categories of automata are used in computer science: 1. Parametric automata, e.g., neural networks, are controlled by parametric algorithms; 2. Instruction automata, e.g., Turing machines, are controlled by instruction algorithms; 3. Description automata, e.g., symmetric Turing machines in the sense of [10,11,24], are controlled by description algorithms.
Devices in general and automata and machines, in particular, can perform various functions and generate diverse processes. Here we are interested in information processing in the form of computation.
Although computation pervades the contemporary society, there is no unanimously accepted definition of computation [25]. At the same time, there is a variety of different definitions and descriptions, some of which we consider here.
After the Turing machine was accepted as the uppermost model of algorithms, computation was interpreted as what a Turing machine does. When it became clear that computation can go beyond Turing machines, the opposite trend appeared, in which it was supposed that any system in nature or society is computing. This approach is called pancomputationalism (cf., for example, [26][27][28]).
As it is explained in [29], there are three basic levels of generality in understanding the phenomenon of computation: On the top level, computation is perceived as any transformation of information and/or information representation.
On the middle level, computation is distinguished as a discrete process of transformation of information and/or information representation.
On the bottom level, computation is recognized as a discrete process of symbolic transformation of information and/or symbolic information representation.
Here, we take on the engineering approach to computation according to which computation is a process performed by information processing devices. Besides, in what follows, we consider only computing automata and machines.

Inner Structure of Abstract Automata
On analyzing a computer, we see that it has various devices, e.g., one or several processors, memory of different types and input/output devices [30]. All these devices and connections between them constitute the hardware of this computer.
In addition, a computer has various programs, which direct its functioning. All these programs constitute the software of this computer.
Besides, a computer works with diverse data. These data are unified under the name of the infware of this computer.
In a similar way, the inner structure of an abstract automaton consists of three components (hardware, software and infware) and relations between them [1].

Definition 5.
The hardware of an abstract automaton comprises theoretical (abstract) devices, such as a control device, processor or memory, which play a part in computations performed by this automaton, and connections between these devices.
In a general situation, the hardware of a real information processing system has three key components: the input subsystem, output subsystem and processing subsystem. Thus, to properly represent information processing systems, which are actually utilized by people, the hardware of an abstract automaton must have three basic components: (abstract) input device, (abstract) information processor, and (abstract) output device [1]. In many theoretical models of computation, input and output devices are either not specified or represented by components of the common memory and/or of the processor. For example, in a typical Turing machine, operations of input and output utilize the working memory, that is, one or several tapes. In contrast to this, inductive Turing machines contain special input and output registers, e.g., tapes [6]. The same is true for pointer machines. Indeed, a pointer machine receives input, finite sequences of symbols (words), from its "read-only tape" (or an equivalent storage device), and it writes output sequences of symbols on an output "write-only" tape (or an equivalent storage device).
Neural networks also have these core components: the input subsystem that comprises all input neurons, output subsystem that consists of all output neurons, and it is possible to treat all neurons of the network or only all hidden neurons as its information processing subsystem [31]. In some cases, input and output neurons are regarded as one group of visible neurons.

Definition 6.
Infware of an abstract automaton consists of objects processed by this automaton including input and output data.
Here are some examples. The majority of abstract automata (computing devices) work with linear (i.e., one-dimensional) languages. Consequently, their infware consists of words in some alphabet.
Kolmogorov algorithms and storage modification machines work with arbitrary graphs [32,33]. Consequently, their infware consists of graphs.
There are also many practical algorithms that work with graphs (cf., for example, [34]). Consequently, their infware also consists of graphs.
Turing machines with two-dimensional tapes and two-dimensional cellular automata work with two-dimensional arrays of words. This means that their infware consists of two-dimensional arrays of words.
Turing machines with n-dimensional tapes and n-dimensional cellular automata work with n-dimensional arrays of words. This means that their infware consists of n-dimensional arrays of words.
Note that such advanced abstract automata as structural machines can process not only data but also knowledge [37]. In this case, their infware consists of knowledge.
It is necessary to remark that the word infoware, which is used in the field of networks and computers, has a very different meaning in comparison with the term infware.

Definition 7.
Software of an abstract automaton consists of texts, which control the functioning of this automaton.
Here are some examples. Many kinds of algorithms and abstract automata, such as finite automata, pushdown automata, register machines, Kolmogorov algorithms, random access machines (RAM), and Turing machines, use systems of instructions, for example, in the form of transition rules, to control computational processes. Such systems of instructions constitute software of these automata and machines.
The system of weights, activation functions, threshold functions and output functions forms the software of a neural network. It is possible to treat such systems as algorithms although their form is different from that of traditional algorithms, which are described as sets of instructions.
In contrast to neural networks, software of the majority of algorithms and abstract automata consists of systems of instructions. These instructions or rules determine computational processes, which are controlled by algorithms and are going in these automata. All these classes of algorithms and abstract automata are unified by the comprehensive concept of an instruction machine.

Definition 8. (a) An instruction machine or instruction automaton M is an automaton the functioning of which is determined by a system of instructions (rules). (b) A pure instruction machine or pure instruction automaton M is an automaton the functioning of which is determined only by a system of instructions (rules) and its input.
Note that the functioning of an instruction automaton is not necessarily uniquely determined by its system of instructions. For instance, its functioning can also depend on the states of its control device as in Turing machines. Besides, in nondeterministic instruction machines, e.g., in nondeterministic Turing machines, there are metarules of instruction selection, which can essentially influence its functioning [20,21].
At the same time, an important dynamic characteristic of the majority of abstract automata is their state. This brings us to another important class of automata.

Definition 9.
(a) A state machine or state automaton M has a control device and is an automaton the functioning of which is determined by the states of its control device. (b) A pure state machine or pure state automaton M has a control device and is an automaton the functioning of which is determined only by the states of its control device and its input.
Note that the control device of an automaton can coincide with the whole automaton. In this case, the functioning of the automaton is determined by its states. However, having a separate control device makes an automaton more flexible.
Often state machines (state automata) are also instruction machines (automata) with systems of instructions. Moreover, implicitly, any state machine (automaton) M is an instruction machine (automaton). Indeed, if we take descriptions of how the functioning of the machine (automaton) M depends on its states, we obtain instructions (rules) of its functioning.
We observe this situation in the case of finite automata. A finite automaton is a pure state machine (automaton) but its transition function (relation) makes it also an instruction machine (automaton).
Let us consider the structure of a state instruction machine (state instruction automaton) in a general case.
In a general case, a state instruction machine (state instruction automaton) M has three components: • The control device C_M, which is a finite automaton and represents states of the machine (automaton) M; • The memory W_M, which stores data; • The processor P_M, which transforms (processes) information (data) from the input and the memory W_M.
The memory W_M consists of cells and connections between them. Each cell can be empty or contain a symbol from the alphabet A_M of the machine (automaton) M.
On each step of computation, the processor P_M observes one cell from the memory W_M at a time, and can change the symbol in this cell and go to another cell using connections in the memory W_M. These operations are performed according to the instructions R_M for the processor P_M. These instructions R_M can be stored in the processor P_M or in the memory W_M.
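This general scheme can be sketched in code (a minimal sketch; the cell contents, connections and instructions are illustrative assumptions, and the instruction format follows the description above):

```python
# A toy state instruction machine: a control device holding a state,
# a memory of cells linked by connections, and a processor that observes
# one cell at a time and follows instructions of the form
# (state, observed symbol) -> (new state, written symbol, connection to follow).

memory = {0: "a", 1: "b", 2: " "}                    # cells of the memory W_M
links = {0: {"right": 1}, 1: {"right": 2}, 2: {}}    # connections between cells

# instructions R_M for the processor P_M
instructions = {
    ("q0", "a"): ("q0", "A", "right"),
    ("q0", "b"): ("q1", "B", "right"),
}

def step(state, cell):
    """One step of the processor P_M according to the instructions R_M."""
    key = (state, memory[cell])
    if key not in instructions:
        return None                         # halt: no applicable instruction
    state, symbol, link = instructions[key]
    memory[cell] = symbol                   # change the symbol in the cell
    cell = links[cell].get(link, cell)      # move along a memory connection
    return state, cell

state, cell = "q0", 0
while (nxt := step(state, cell)) is not None:
    state, cell = nxt
```

Running this sketch rewrites the cell contents to "A" and "B" and halts in state q1 when no instruction applies.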
However, it is possible that an instruction machine consists of a single processor; a finite automaton is such a particular case.

Example 1.
A finite automaton G is an instruction machine, which has the following representation. Namely, a finite automaton (FA) G consists of three structures: • The linguistic structure L = (Σ, Q, Ω) where Σ is a finite set of input symbols, Q is a finite set of states, and Ω is a finite set of output symbols of the automaton G; • The state structure S = (Q, q_0, F) where q_0 is an element from Q that is called the start state and F is a subset of Q that is called the set of final (in some cases, accepting) states of the automaton G; • The action structure, which is traditionally called the transition function of G and has the form δ: Q × Σ → Q × Ω.
The transition relation/function δ is portrayed by descriptions of separate transitions, and each of these descriptions is an instruction (rule) for the automaton functioning.
Note that a finite automaton does not have a memory.
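The representation in Example 1 can be sketched in code (a hypothetical parity automaton chosen for illustration; output symbols are omitted for brevity, so the set Ω plays no role here):

```python
# A finite automaton given by its linguistic, state and action structures;
# each entry of the transition function delta is one instruction (rule).

Sigma = {"0", "1"}                  # finite set of input symbols
Q = {"even", "odd"}                 # finite set of states
start, final = "even", {"even"}     # state structure (Q, q0, F)

# action structure: delta maps (state, input symbol) to the next state
delta = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

def accepts(word):
    """Run the automaton: accept words with an even number of 1s."""
    q = start
    for c in word:
        q = delta[(q, c)]           # execute one instruction per input symbol
    return q in final
```

Here the dictionary `delta` makes explicit that the transition function is a finite collection of separate instructions.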

Example 2.
A Turing machine T is also an instruction machine because its functioning is defined by a system of rules (instructions), which, for a Turing machine with one head, have the following form:

qa → pbQ

Here, a is the symbol which is observed by the head of T and changed to the symbol b, q is a state of the Turing machine, or more exactly, of its control device, which is changed in this operation to the state p, while Q is the direction of the move of the head after performing the writing operation.

Example 3.
An inductive Turing machine K is also an instruction machine because its functioning is defined by a system of rules (instructions), which are similar to rules of Turing machines [1,6].
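A sketch of how rules of the form qa → pbQ are executed may be helpful here (the tape contents, the concrete rules and the halting convention are illustrative assumptions, not part of the formal definition):

```python
# Executing Turing machine rules qa -> pbQ on a sparse tape.

tape = {0: "1", 1: "1"}                 # tape cells; absent cells hold a blank
rules = {                               # (q, a) -> (p, b, Q)
    ("q", "1"): ("q", "0", "R"),        # q1 -> q0R: replace 1 by 0, move right
    ("q", " "): ("p", " ", "N"),        # on a blank, switch to state p, stay
}

def run(state, head):
    while state != "p":                 # "p" serves as a halting state here
        a = tape.get(head, " ")         # a: the symbol observed by the head
        state, b, move = rules[(state, a)]
        tape[head] = b                  # write b over a
        head += {"R": 1, "L": -1, "N": 0}[move]   # Q: direction of the move
    return state, head
```

Each pass through the loop executes exactly one instruction qa → pbQ: it rewrites the observed symbol, changes the state of the control device and moves the head.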
There are numerous kinds of instruction machines with various types of instructions. However, it is possible to distinguish three classes of instructions: • Straightforward or prescriptive instructions directly tell what is necessary to do.

While all these examples are conventional models of computation, in the next section, we introduce and study more advanced models.

Structure of Triadic Automata and Machines
Triadic automata (machines) transform data (infware), instructions (software) and memory (hardware). Before describing their structure, we make an inventory of the types of triadic machines (triadic automata). Besides, there are different ways to perform hardware/software modifications. With respect to the source of modification, it is possible to consider three types of hardware/software modifications in an automaton (machine) M: 1. External modification is performed by another system; 2. Internal modification is performed by the automaton (machine) M; 3. Combined modification is performed by both the automaton (machine) M and another system. What modifications are possible and permissible depends on the structure of a triadic machine or triadic automaton. Its mandatory components are input and output systems working together with one or more processors. Usually, input and output components are specific registers in the memory of the machine (automaton) [1]. At the same time, in neural networks, input and output are organized using specified neurons [31].
However, adding memory and other components to automata allows increasing their flexibility, interoperability and efficiency. These changes are reflected in the structure of triadic machines (automata), which have different types. Here we consider two types: state and instruction triadic automata (machines).

Definition 11.
A triadic state machine or triadic state automaton A with memory has the following core hardware components: • The control device C_A, which is a finite automaton and represents states of the machine (automaton) A; • The data memory W_A, which stores data and includes input and output registers; • The software memory V_A, which stores the software of the machine (automaton) A; • The data processor P_A, which transforms (processes) information (data) from the memory W_A; • The software processor D_A, which transforms (processes) the software of A stored in the memory V_A; • The metaprocessor of A, which transforms (e.g., builds or deletes connections in) the hardware H_A and/or changes the control device C_A.
In the standard form, both memories consist of cells, which are connected by transition links. Processors have their programs of functioning, which constitute the software of the automaton.
In the same way as triadic state machines, triadic instruction machines constitute a special class of triadic machines. In a general case, it is possible that the functioning of a triadic instruction machine does not depend on its state. However, we include the state system in the general description of triadic instruction machines because when the functioning of a triadic instruction machine does not depend on its state, it is possible to treat this as a machine with only one state.

Definition 12.
A triadic instruction machine or triadic instruction automaton H with memory has seven core hardware components: • The control device C_H, which is a finite automaton and represents states of the machine (automaton) H; • The data memory W_H, which stores data; • The instruction memory V_H, which stores instructions; • The data processor P_H, which transforms (processes) information (data) from the memory W_H; • The instruction processor D_H, which transforms (processes) information (instructions) from the memory V_H; • The memory processor P_W, which transforms (builds or deletes connections and/or cells in) the memory W_H; • The memory processor P_V, which transforms (e.g., builds or deletes connections and/or cells in) the memory V_H.
Memory processors are hardware transformers, and it is also possible to include a control device processor in the structure of a triadic instruction machine. This additional processor changes the control device C_H.
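A schematic sketch of a triadic instruction machine may clarify Definition 12 (the class, the operations and the cell layout are hypothetical simplifications; only the division into data, instruction and memory processors is taken from the text):

```python
class TriadicInstructionMachine:
    """A toy triadic instruction machine: separate processors transform
    data, instructions and memory, respectively."""

    def __init__(self):
        self.state = "q0"                      # control device C_H
        self.data = {0: 1}                     # data memory W_H
        self.program = {"q0": ("q0", "inc")}   # instruction memory V_H

    def data_step(self):
        """Data processor P_H: transform data according to an instruction."""
        self.state, op = self.program[self.state]
        self.data[0] += 1 if op == "inc" else -1

    def instruction_step(self):
        """Instruction processor D_H: transform the stored instructions,
        e.g., replace the operation used in state q0."""
        self.program["q0"] = ("q0", "dec")

    def memory_step(self):
        """Memory processor P_W: construct a new cell in the data memory,
        i.e., transform the hardware available to the machine."""
        self.data[len(self.data)] = 0
```

In contrast to a conventional instruction machine, the same computational process here can change the data, rewrite its own program and enlarge its memory.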
There are different classes of triadic instruction machines (automata).

Definition 13. Triadic instruction machines (automata) that:
- do not have processors that transform memory are called symmetric instruction machines;
- have only processor(s) that transform data are called instruction machines;
- have only processor(s) that transform instructions are called translation machines or translators;
- have only processor(s) that transform memory are called construction machines or constructors;
- do not have processors that transform data are called constructors (construction machines) with translators; and
- do not have processors that transform instructions are called generative instruction machines.
Machines from each class have their specific functions. For instance, construction machines (constructors) can be used to construct memory for other machines. This technique is employed in inductive Turing machines of the second and higher orders, which use inductive Turing machines of lower orders as their constructors [1,6].
Besides, there are different methods to organize program formation with the help of computing/constructing agents. If the memory of the automaton has connections between any pair of cells, then the program can use these connections. Thus, it is possible to organize the inductive mode of computing by inductive computation (compilation) of the program for the main computation.
In the simplest approach called the sequential strategy, it is assumed that given some schema, for example, a description of the structure of the memory E of an inductive Turing machine M, an automaton A builds the program and places it in the memory E before the machine M starts its computation. When M is an inductive Turing machine of the first order, its constructor A is a Turing machine, which, for example, puts the names of the connections of the memory of M into instructions (rules) of M. When M is an inductive Turing machine of the second or higher order, its constructor A is also an inductive Turing machine-the order of which is less than the order of M and which modifies instructions (rules) of M. For instance, the program of inductive Turing machines of the second order is constructed by Turing machines of the first order.
According to another methodology, which is called the concurrent strategy, program formation by the automaton A and computations of the machine M go concurrently, that is, while the machine M computes, the automaton A constructs the program in the memory E.
It is also possible to use the mixed strategy, when some parts of the program in the memory E are assembled before the machine M starts its computation, while other parts are formed parallel to the computing process of the machine M.
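The sequential strategy can be sketched as follows (the constructor, the memory schema and the instruction format are illustrative assumptions; the point is only that the program is fully built before the main machine runs):

```python
def constructor_A(connection_names):
    """The constructor A builds the program before M starts: it puts the
    names of memory connections into the instructions of M."""
    program = {}
    for i, name in enumerate(connection_names):
        program[f"q{i}"] = (f"q{i + 1}", name)   # instruction naming a connection
    return program

def machine_M(program, steps):
    """The main machine M computes using the pre-constructed program."""
    state, trace = "q0", []
    for _ in range(steps):
        state, connection = program[state]
        trace.append(connection)                 # follow the named connection
    return trace

E = constructor_A(["c0", "c1", "c2"])   # sequential strategy: E is ready first
result = machine_M(E, 3)
```

Under the concurrent strategy, by contrast, `constructor_A` would keep extending `program` while `machine_M` is already running.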
These three strategies determine three kinds of the constructed program (software):
• In the static program (static software) of the machine M, everything is constructed before M starts working.
• In the growing program (growing software) of the machine M, parts are constructed while M is working but no parts are deleted.
• In the dynamic program (dynamic software) of the machine M, when it is necessary, some parts are constructed and, when it is necessary, some parts are deleted while M is working.
It is possible to use similar strategies for hardware modification. This approach determines three types of the constructed hardware of a triadic automaton/machine:
• In the static hardware of the machine M, everything is constructed before M starts working.
• In the growing hardware of the machine M, parts are constructed while M is working but no parts are deleted.
• In the dynamic hardware of the machine M, when it is necessary, some parts are constructed and some parts are deleted while M is working.
Now, we can analyze the functioning of triadic automata and machines in more detail.
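The difference between static and growing hardware can be contrasted in a small sketch (a toy tape model with hypothetical operations, not a formal construction):

```python
def run_static(tape, steps):
    """Static hardware: the memory is fixed before the computation starts."""
    head = 0
    for _ in range(steps):
        tape[head] += 1
        head = (head + 1) % len(tape)   # can only revisit existing cells
    return tape

def run_growing(tape, steps):
    """Growing hardware: a new cell is constructed whenever the head moves
    past the current end of the memory; no cell is ever deleted."""
    head = 0
    for _ in range(steps):
        if head == len(tape):
            tape.append(0)              # construct a new cell while working
        tape[head] += 1
        head += 1
    return tape
```

Dynamic hardware would additionally allow deleting cells during the computation, which this sketch omits.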

The Dynamics of Triadic Automata and Machines
To describe the dynamics of triadic automata and machines, we need some concepts from the theory of computation and automata.
There are different equivalence relations between automata, machines and algorithms. Three basic ones are determined by properties of computations [1,38]. In particular, automata (machines or algorithms) are operationally equivalent when they perform the same operations in their computations. For instance, a pushdown automaton, which does not use its stack, is operationally equivalent to a nondeterministic finite automaton. Consequently, the class of all pushdown automata that do not use their stack is operationally equivalent to the class of all nondeterministic finite automata.
However, operational equivalence does not completely characterize automata, machines and algorithms with respect to their functioning. To achieve this goal, we need a stronger equivalence, which additionally demands that the results of computations are defined by the same rules. By definition, strictly operationally equivalent automata, machines or algorithms are operationally equivalent. However, properties of computation show that two automata (machines or algorithms) can be operationally equivalent but not strictly operationally equivalent. For instance, a Turing machine with one working tape, one input read-only tape, one output write-only tape and with three corresponding heads is operationally equivalent to a simple inductive Turing machine [1]. However, these machines are not strictly operationally equivalent because their results are defined by different rules.
In a similar way, strictly operationally equivalent classes of automata, machines or algorithms are operationally equivalent. However, the previous example shows that two classes of automata (machines or algorithms) can be operationally equivalent but not strictly operationally equivalent.
In addition to these two kinds of operational equivalence, there are other forms of equivalence of automata, machines and algorithms. In particular, it is known that automata can perform different operations but give the same result. This observation brings us to two more types of equivalence of automata and machines. For instance, the class of all Turing machines with one tape and the class of all one-dimensional cellular automata are functionally equivalent because they compute the same class of partially recursive functions [1].
Properties of automata, machines and algorithms imply the following result.

Lemma 1.
If the results of computations are defined in the same way for two (classes of) automata, machines or algorithms, then their operational equivalence implies their functional equivalence.
In other words, strict operational equivalence implies functional equivalence. However, when the results of computations are defined in different ways, two operationally equivalent (classes of) automata, machines or algorithms need not be functionally equivalent. For instance, the class of all Turing machines with three tapes and heads and the class of all simple inductive Turing machines are operationally equivalent, but they are not functionally equivalent because inductive Turing machines can compute many more functions than Turing machines [6].
Let us assume that all automata (machines) considered below work with words in some alphabet. Naturally, these automata (machines) compute some languages. For instance, the class of all deterministic finite automata and the class of all nondeterministic finite automata are linguistically equivalent [1].
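As a sketch of this linguistic equivalence (toy encodings assumed, not taken from the text), the classical subset construction converts a nondeterministic finite automaton into a deterministic one accepting the same language:

```python
# Sketch under stated assumptions: determinize a toy NFA via the subset
# construction and check that both accept the same words.
from itertools import product

def nfa_accepts(delta, start, finals, word):
    states = {start}
    for sym in word:
        states = {q for s in states for q in delta.get((s, sym), set())}
    return bool(states & finals)

def determinize(delta, start, finals, alphabet):
    """Subset construction: each DFA state is a frozenset of NFA states."""
    dstart = frozenset({start})
    dtrans, seen, todo = {}, {dstart}, [dstart]
    while todo:
        S = todo.pop()
        for a in alphabet:
            T = frozenset(q for s in S for q in delta.get((s, a), set()))
            dtrans[(S, a)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    dfinals = {S for S in seen if S & finals}
    return dtrans, dstart, dfinals

def dfa_accepts(dtrans, dstart, dfinals, word):
    S = dstart
    for sym in word:
        S = dtrans[(S, sym)]
    return S in dfinals

# Toy NFA for words over {a, b} ending in "ab".
delta = {("p", "a"): {"p", "q"}, ("p", "b"): {"p"}, ("q", "b"): {"r"}}
alphabet = ["a", "b"]
dtrans, dstart, dfinals = determinize(delta, "p", {"r"}, alphabet)
for n in range(5):                      # compare on all words up to length 4
    for w in product(alphabet, repeat=n):
        assert nfa_accepts(delta, "p", {"r"}, w) == \
               dfa_accepts(dtrans, dstart, dfinals, w)
```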
Properties of functions imply the following result.
Lemma 2 [38].
Functional equivalence of two (classes of) automata, machines or algorithms implies their linguistic equivalence.
Lemmas 1 and 2 imply the following result.

Corollary 1. Strict operational equivalence implies linguistic equivalence.
However, operational equivalence implies neither functional nor linguistic equivalence. Indeed, when the results of computations are defined in different ways, two operationally equivalent (classes of) automata, machines or algorithms need not be linguistically equivalent. For instance, the class of all Turing machines with three tapes and heads and the class of all simple inductive Turing machines are operationally equivalent, but they are not linguistically equivalent because inductive Turing machines can compute many more languages than Turing machines [6].
All classes of automata, machines or algorithms studied and utilized in computer science and mathematics are usually divided into three substantial categories: subrecursive, recursive and superrecursive classes of automata, machines or algorithms [1]. Here we introduce a more detailed classification.

Definition 18.
A class of automata, machines or algorithms R is called linguistically (functionally) recursive if it is linguistically (functionally) equivalent to the class of all Turing machines.

Example 7.
The class of all Random Access Machines (RAM) is linguistically and functionally recursive [39].

Example 8.
The class of all cellular automata is linguistically recursive [40].

Example 9.
The class of all storage modification machines is linguistically and functionally recursive [33].

Example 10.
The class of all Minsky machines is linguistically and functionally recursive [41].
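As a sketch of why such minimal machines are nonetheless recursive in this sense, the following toy interpreter (the instruction format is an assumption chosen for illustration) runs a two-counter Minsky program that adds its counters:

```python
# Sketch under stated assumptions: a Minsky machine has counters and only
# increment, decrement-or-jump-on-zero and halt instructions.

def run_minsky(program, c0, c1, fuel=10_000):
    """program: list of ('inc', reg, nxt) | ('jzdec', reg, nxt, zero_nxt)
    | ('halt',). Returns the counters when the machine halts."""
    pc, regs = 0, [c0, c1]
    for _ in range(fuel):                  # fuel bounds the simulation
        instr = program[pc]
        if instr[0] == 'halt':
            return tuple(regs)
        if instr[0] == 'inc':
            _, r, nxt = instr
            regs[r] += 1
            pc = nxt
        else:                              # 'jzdec'
            _, r, nxt, zero_nxt = instr
            if regs[r] == 0:
                pc = zero_nxt
            else:
                regs[r] -= 1
                pc = nxt
    raise RuntimeError("out of fuel")

# Addition: repeatedly decrement counter 1 and increment counter 0.
add = [('jzdec', 1, 1, 2),   # 0: if c1 == 0 go to halt, else c1 -= 1
       ('inc', 0, 0),        # 1: c0 += 1, back to instruction 0
       ('halt',)]            # 2
print(run_minsky(add, 3, 4))  # → (7, 0)
```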
Any language computed by some Turing machine is also computed by a universal Turing machine [42]. Recursive classes of automata, machines or algorithms form a theoretical threshold separating the two other basic categories: subrecursive and superrecursive automata, machines or algorithms. Namely, Turing machines are also used to define these two classes of automata, machines or algorithms.

Example 14.
The class of all inductive Turing machines is linguistically and functionally superrecursive [1,6].

Example 15.
The class of all periodic Turing machines is linguistically and functionally superrecursive [43].

Example 16.
The class of all inductive cellular automata is linguistically and functionally superrecursive [44].
However, the possibility to compute a function (language) noncomputable by Turing machines does not guarantee that in this class of algorithms/automata, all functions (languages) computable by Turing machines will be computable. In other words, it is possible that a linguistically and functionally superrecursive class of automata, machines or algorithms does not contain a linguistically and functionally recursive subclass of automata, machines or algorithms. That is why it is reasonable to consider one more important category of automata (machines) related to their functioning.

Definition 21.
A superrecursive class of automata, machines or algorithms U is called strictly linguistically (functionally) superrecursive if all languages (functions) computable/acceptable in the class of all Turing machines are also computable/acceptable in U.

Example 17.
The class of all simple inductive Turing machines is strictly linguistically and functionally superrecursive [1,6].

Example 18.
The class of all inductive cellular automata is strictly linguistically and functionally superrecursive [44].

Example 19.
The class of all neural networks with real number weights is strictly linguistically and functionally superrecursive [45].
Example 20.
The class of all general machines working with real numbers is strictly linguistically and functionally superrecursive [46,47].
Note that any Turing machine is nominally linguistically and functionally recursive, but in the majority of cases, it is linguistically and functionally subrecursive. Properties of universal inductive Turing machines imply the following result.

Example 24.
A universal inductive Turing machine is linguistically and functionally superrecursive.
Note that any inductive Turing machine is nominally linguistically and functionally superrecursive but in the majority of cases, it is linguistically and functionally recursive or subrecursive. For instance, inductive Turing machines that do not use memory are linguistically and functionally subrecursive.
This clearly shows the importance of the memory for the computing power of automata and machines. Structured memory is an important concept introduced in the theory of inductive Turing machines [48]. Recall that a structured memory E consists of cells and is structured by a system of relations and connections (ties) that organize memory functioning and provide connections between cells. This structuring delineates three components of the memory E: input registers, the working memory and output registers. In the general case, cells may be of different types: binary cells store bits of information represented by the symbols 1 and 0; byte cells store information represented by strings of eight binary digits; symbol cells store symbols of the alphabet(s) of the machine M and so on. Thus, the memory is a network of cells, and it is possible to interpret these cells not only as containers of symbols but also as information processing systems, such as neurons in the brain of a human being, computers in the World Wide Web or abstract computing devices in a grid automaton [49].

Definition 25.
The software (hardware) of a triadic automaton is recursive if it and its transformation rules determine recursive algorithms.
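The structured memory described above can be sketched as follows; all class and method names here are hypothetical, chosen only for illustration of cells, ties and the three memory components:

```python
# Sketch under stated assumptions: cells are nodes of a network,
# partitioned into input registers, working memory and output registers,
# with symmetric ties connecting cells.

class StructuredMemory:
    def __init__(self):
        self.cells = {}   # cell name -> stored symbol (or None if empty)
        self.ties = {}    # cell name -> set of connected cell names
        self.role = {}    # cell name -> 'input' | 'working' | 'output'

    def add_cell(self, name, role, symbol=None):
        self.cells[name] = symbol
        self.ties[name] = set()
        self.role[name] = role

    def connect(self, a, b):
        """Ties organize memory functioning; here they are symmetric."""
        self.ties[a].add(b)
        self.ties[b].add(a)

    def neighbours(self, name):
        return self.ties[name]

mem = StructuredMemory()
mem.add_cell("in0", "input", "1")
mem.add_cell("w0", "working")
mem.add_cell("out0", "output")
mem.connect("in0", "w0")
mem.connect("w0", "out0")
print(mem.neighbours("w0"))   # w0 is tied to in0 and out0
```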

Example 25.
The software (hardware) of a Turing machine is recursive.
Let A be an instruction automaton (machine) with a structured memory.

Theorem 3.
If the instructions (rules) of the automaton (machine) A are recursive, i.e., they define a recursive or subrecursive algorithm, but the structuring of its memory is superrecursive, then A can also be superrecursive.
The structuring of the automaton (machine) memory can be performed as hardware modification, similar to the construction of inductive Turing machines of the second and higher orders. As inductive Turing machines of any order form a superrecursive class of automata, Theorem 3 gives us the following result.

Corollary 2.
It is possible to achieve superrecursivity by hardware modification.

Corollary 3.
A triadic automaton with recursive software can be superrecursive.
An example of such an automaton is given in the studies of interactive hypercomputation [50].

Definition 26.
Information processing, e.g., computation, is recursive if it can be realized by a recursive algorithm.
Properties of recursive computations imply the following result [1].

Theorem 4.
A triadic automaton with recursive software and hardware and their parallel recursive data processing and software/hardware modification is recursive.
This result shows that recursive software/hardware modification performed in parallel with data processing cannot increase computing power. However, even recursive rules of hardware modification can essentially improve the efficiency of automata and machines by decreasing their time complexity. Namely, results from [51] allow us to prove the following result.
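The setting of Theorem 4 can be sketched as follows (all names and the toy transformation rules are illustrative assumptions): one step of a triadic automaton transforms data, software and hardware in parallel, each by its own recursive rule.

```python
# Minimal sketch under stated assumptions: a triadic automaton step
# transforms three components — the data, the program (software) and
# the memory (hardware) — so the whole process stays recursive.

class TriadicAutomaton:
    def __init__(self, data, program, memory):
        self.data = data          # processed information
        self.program = program    # rule for data transformation
        self.memory = memory      # hardware: a growing list of cells

    def step(self):
        # 1. data transformation, controlled by the current program
        self.data = self.program(self.data)
        # 2. software transformation: compose the program with itself,
        #    so each step doubles the effect of the current rule
        p = self.program
        self.program = lambda x, p=p: p(p(x))
        # 3. hardware transformation: grow the memory by one empty cell
        self.memory.append(None)

aut = TriadicAutomaton(data=1, program=lambda x: x + 1, memory=[])
for _ in range(3):
    aut.step()
print(aut.data, len(aut.memory))   # data: 1 -> 2 -> 4 -> 8; 3 cells added
```

Every rule used here is computable, so, in line with Theorem 4, the sketch gains flexibility but no extra computing power.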
Theorem 5.
For any recursively computable language L (function f), there is a recursive hardware modification automaton A with a priori structured memory, which computes the language L (the function f) with linear time complexity.
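As a loose illustration of how a priori memory structuring can buy linear-time processing (the trie and the finite language below are illustrative assumptions; the theorem concerns arbitrary recursively computable languages), structuring the memory in advance lets each membership query run in time linear in the length of the input word:

```python
# Sketch under stated assumptions: the memory is pre-structured as a
# trie before any query; each query then makes one pass over the word.

def build_trie(words):
    """A priori structuring: build the cell network once, in advance."""
    root = {}
    for w in words:
        node = root
        for ch in w:
            node = node.setdefault(ch, {})
        node["$"] = True           # end-of-word marker
    return root

def member(trie, word):
    """One pass over the word: time linear in len(word)."""
    node = trie
    for ch in word:
        if ch not in node:
            return False
        node = node[ch]
    return "$" in node

trie = build_trie(["ab", "aab", "ba"])
print(member(trie, "aab"), member(trie, "abb"))  # True False
```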
As recursive hardware modification automata are also triadic automata, the same result holds for triadic automata.

Inductive Turing machines form an important class of automata formalizing scientific induction and inductive reasoning in general. That is why we can use inductive Turing machines for developing a new classification of automata, machines or algorithms. Here we do this for inductive Turing machines of the first order.

Definition 27.
A class of automata, machines or algorithms D is called linguistically (functionally) plainly inductive if it is linguistically (functionally) equivalent to the class IM1 of all inductive Turing machines of the first order.

Example 26.
The class of all simple inductive Turing machines is linguistically and functionally plainly inductive [1,6].

Example 27.
The class of all periodic Turing machines is linguistically and functionally plainly inductive [43].

Example 28.
The class of all inductive cellular automata is linguistically and functionally plainly inductive [44].
Properties of universal inductive Turing machines of the second or higher orders (cf. [1,6]) give us the following result.

Theorem 7.
A class of automata or machines H is linguistically (functionally) plainly superinductive if it is linguistically (functionally) equivalent to the class IM2 of all inductive Turing machines of the second or higher order.
This brings us to the following problem.

Problem 1.
Are there proper subclasses of the class IM2 of all inductive Turing machines of the second order, which are computationally weaker than IM2 but are still linguistically or functionally plainly superinductive?

Conclusions
We introduced the concepts of triadic automata and machines, obtained relations between their different types and established their properties. In addition, we further developed the taxonomy of classes of automata and machines as well as of individual automata and machines. All this opens new directions for further research.
Inductive Turing machines form a powerful class of algorithms [6]. Thus, formalization and exploration of triadic inductive Turing machines is a motivating problem for future research. It is possible to study the same problem for periodic Turing machines [43], which are intrinsically related to inductive Turing machines.
Structural machines provide an extremely flexible model of computation [35,36]. It would be interesting to introduce and study triadic structural machines.
It would also be interesting to formalize triadic cellular automata, triadic inductive cellular automata [44] and triadic neural networks and study their properties.
Traditional recursive and superrecursive models of automata and algorithms are used for the development of algorithmic information theory. Thus, an important direction for future research is the development of triadic information theory based on triadic automata.
Funding: This research received no external funding.