Article

Complex Cognitive Systems and Their Unconscious. Related Inspired Conjectures for Artificial Intelligence

by
Gianfranco Minati
Italian Systems Society, 20161 Milan, Italy
Future Internet 2020, 12(12), 213; https://doi.org/10.3390/fi12120213
Submission received: 4 November 2020 / Revised: 23 November 2020 / Accepted: 26 November 2020 / Published: 27 November 2020
(This article belongs to the Section Smart System Infrastructure and Applications)

Abstract
The aim of the article is to propose a conceptual framework, constructs, and conjectures to act as a guide for future related research aimed at designing and implementing versions of Artificial Intelligence encompassing an artificially simulated unconscious suitable for human-like artificial cognitive processing. This article considers the concept of the unconscious in psychoanalysis. The interdisciplinary understanding of this concept is considered to be the unavoidable property of sufficiently complex cognitive processing. We elaborate on the possibility of an artificial unconscious, able both to self-acquire properties through usage and to self-profile through a supposed implicit, parasitic usage of explicit cognitive processing. Memory activities are considered to be integrated into cognitive processing, with memory no longer being only storage and reminding no longer being only finding. We elaborate on the artificial unconscious as an implicit, usage-dependent, self-profiling, and emergent process. Conceptual characteristics of the research project are the implementation of weighted, networked, fuzzified memorizations; self-generated networks of links of inter-relationships as nodes; self-variation of the intensity of the links according to use; and the activation of internal self-processes such as the introduction of fictitious links intended as variations and combinations of the current ones. Application examples suitable for experimental implementation are also discussed with reference to chatbot technology extended with features of an artificial unconscious. Thus, we introduce the concept of the AU-chatbot. The main purpose is to allow the artificial cognitive processing to acquire suitable human-like attitudes in representing, interfacing, and learning, potentially important in supporting and complementing human-centered activities.
Examples of expected features are the ability to combine current and unconscious links to perform cognitive processing such as representing, deciding, memorizing, and solving equivalencies, and also learning meta-profiles, such as in supporting doctor–patient interactions and educational activities. We also discuss possible technologies suitable for implementing experiments for the artificial unconscious.

1. Introduction

We conjecture a systemic, interdisciplinary understanding, possible related approaches, and applications of the controversial concept of the unconscious, considered in philosophy (see, for instance, [1,2,3]), in psychoanalysis by Sigmund Freud [4], and as a subject of research for medical applications. We consider the subject here without its possible relationship with therapy and medical dimensions, such as dream interpretation and mental illness. The subject is treated in the literature in studies related to the problem of consciousness, mind, and the body–mind, such as [5,6,7], and, recently, in the context of artificial intelligence [8].
The new issues considered here are not introduced in formalized, exhaustive ways alongside corresponding optimum, recommended technologies, but in open, conceptual ways allowing for the implementation of different possible technological approaches.
In the literature, complexity is understood to occur when processes of self-organization and emergence take place [9]. Self-organization is given by the sequence of new properties acquired in a phase transition-like manner, having regularities, repetitiveness, and synchronizations, e.g., whirlpools. Emergence is given by the acquisition of sequences of new properties (in this case self-organized) in non-regular, non-repetitive, but coherent ways, having multiple different synchronizations and correlations, e.g., flocks and swarms [10].
Throughout the article, we consider features within the field of systems science to model complex phenomena and apply them in order to outline approaches that take into account the complexity of the artificial unconscious, which is considered here a source of complexity for artificial cognitive processing.
The features considered are:
-
Logical openness. Considered as the undefined, variable, and inexhaustible number of degrees of freedom of complex systems, such as collective behaviors that are continuously acquired and changing [11,12]. Examples of approaches to logical openness introduced in the literature include ensemble learning [13] and evolutionary game theory [14].
-
Multiple systems. Established by multiple roles of their constituting interacting components and by interchangeability among components, e.g., in ergodic behavior. Component populations interact in such a way that if x% of the population is in a particular state at any moment in time, then each component of the population is assumed to spend x% of its time in that state (in real cases, it is a matter of high percentages establishing the level of ergodicity). In ergodic behaviors, components take on the same roles at different times and different roles simultaneously, however, with the same percentages. This behavior relates to the interchangeability of components playing the same role at different times. Percentages may be considered equal at a suitable threshold, or at least when having high similarity. These are physics concepts used, for instance, in theoretical geomorphology, economics, and population dynamics ([10], pp. 291–320). Multiple roles are established by equivalences and various interactions ([15], pp. 42–45; pp. 166–170). The logical openness of complex systems implies the usage of multiple systems models.
-
Incompleteness. Completeness assumes that a process can be fully represented by a finite sequence of steps, that is, a procedure or system has a finite number of degrees of freedom and is therefore fully described by a finite number of variables, and a finite number of constraints. Incompleteness is a property of processes that cannot be reduced to a finite sequence of steps and therefore have a non-finite number of degrees of freedom, or constraints. We can assume that endless completeness corresponds to incompleteness. Logical openness implies that formal completeness is replaced by levels of coherence with the occurrence of incompleteness and quasi-ness, for instance, when a system is not always a system, not always the same system, and dynamically only partially a system [16,17].
-
Mesoscopic level of representations. Processes and states related to the previous cases are intended to be quasi-represented, summarized by properties of mesoscopic variables, where microscopic representations are clustered by similarity ([15], pp. 110–128), [18].
-
Profiles. Behavioral profiles consist firstly of properties of behavioral histories, measuring the level of compliance with the constraints and boundary conditions, for example, prevalent usage of the maximum and minimum allowed. Secondly, there are properties of mesoscopic variables, for instance, learned through machine learning, modeled through correlations and statistics. Finally, the approach considers populations of profiles and their infra-profile properties. It is then possible to use meta-profiles as profiles of profiles [19].
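The ergodic interchangeability described above for multiple systems can be made concrete with a toy simulation. The sketch below is only an illustration under strong simplifying assumptions (all names and parameters, such as `P_STATE_A`, are hypothetical and not from the article): components independently re-sample their state at every step, so each component's time percentage in a state should match the population percentage occupying that state at any instant.

```python
import random

random.seed(7)

N_COMPONENTS = 200   # population size (hypothetical)
N_STEPS = 2000       # observed time steps (hypothetical)
P_STATE_A = 0.3      # probability of occupying state A at any instant

# Each component independently re-samples its state at every step: a
# deliberately crude stand-in for an ergodic population.
time_in_a = [0] * N_COMPONENTS
for _ in range(N_STEPS):
    for i in range(N_COMPONENTS):
        time_in_a[i] += random.random() < P_STATE_A

# Time average: fraction of the observation each component spent in state A.
time_fractions = [count / N_STEPS for count in time_in_a]

# Ensemble snapshot: fraction of the population in state A at one instant.
snapshot = sum(random.random() < P_STATE_A for _ in range(N_COMPONENTS)) / N_COMPONENTS

print(min(time_fractions), max(time_fractions), snapshot)
```

With these parameters, every component's time fraction and the instantaneous ensemble fraction both land near 30%, illustrating the interchangeability of components playing the same role at different times.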
The following are notions from the literature that form the basis of the conjectures and of the proposal for a research project aimed at filling the gap of an AI without an unconscious.
In the 1980s, the problem of the unconscious was partially considered by Minsky [20]. He related the concept of the unconscious with memory when he wrote:
“Usually, we have no conscious sense of this happening, and we never use words like ‘memory’ or ‘remembering’ when the process works quickly and quietly; instead, we speak of ‘seeing’ or ‘recognizing’ or ‘knowing’. This is because such processes leave too few traces for the rest of the mind to contemplate; accordingly, such processes are unconscious, because consciousness requires short-term memory. It is only when a recognition involves substantial time and effort that we speak of ‘remembering’.”
([20], p.154)
He wrote “It is widely understood that emotional behavior depends on unconscious machinery, but we do not so often recognize that ordinary ‘intellectual’ thinking also depends on mechanisms that are equally hidden from introspection.” ([20], p. 178). He also wrote “In this book we take ‘conscious’ to mean aspects of our mental activity of which we are aware. But since there are very few such processes, we must consider virtually everything done by the mind to be unconscious.” ([20], p. 331).
Furthermore, at around the same time, Moravec [21] introduced the so-called Moravec’s paradox: “… it has become clear that it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility” ([21], p. 15). It is much easier to teach a machine to play chess than to make it use the unconscious for decision-making mechanisms.
We stress that a significant consequence is that we cannot ignore the artificial unconscious of increasingly complex artificial cognitive processing, especially when AI systems are charged with learning, deciding on, and building scenarios for crucial issues. Can the artificial unconscious be, for instance, separated, processed, memorized, and replicated?
Furthermore, we stress that this is a conceptual work; it does not cover experiences with the eventual implementation of the concepts introduced, and it does not describe experimental results. Possible laboratory implementation is related to expected further research.
The purpose of this article is to propose a conceptual framework, constructs, and conjectures to act as a guide for future related research aimed at designing and implementing versions of Artificial Intelligence encompassing an artificially simulated unconscious, based on the understanding of concepts related to the psychoanalytic unconscious and suitable for generalization to sufficiently complex, generic cognitive, particularly artificial, processing [22,23]. The article puts forward various hypotheses and conjectures to be considered in possible research projects allowing implemented versions of Artificial Intelligence (AI) encompassing an artificial, simulated unconscious suitable for human-like artificial cognitive processing. The idea is that an AI without an artificial unconscious is reductive, as the latter is a source of complexity.
The artificial unconscious and related conjectures introduced are proposed to be implemented in Artificial Intelligence systems, for instance, chatbots and decision support systems. We consider self-acquiring properties of memory no longer reduced to only storage. As specified later in Section 5.1, the research application project is supposed to be characterized by the implementation of:
  • weighted networked fuzzified memorizations;
  • self-generated networks of links among the inter-relationships between memorizations (networks of links as nodes, in turn consisting of links);
  • approaches to vary the intensity of the links according to the use, the inputs to be considered and their intra-properties;
  • approaches to activate self-processes such as the introduction of fictitious links intended as variations and combinations of the current ones and by reducing threshold levels and the intensities possessed by links and their levels of fuzzification, at various levels of coherence.
Requests to provide (in other words, to make available) memorizations are not just requests to select and pick up labeled items from a store or a warehouse. Furthermore, processes of remembering consist of using current interrelated memorizations and subnetworks (networks of inter-relationships among memorizations) at thresholds that trigger the involvement of previous effects of removed, usually ignored, partially ignored, and fictitious memorizations. The system is expected to answer questions in multiple ways, allowing the user to consider scenarios. The system should be experimentally used by varying parameters, thresholds, weights, and intensities of links (see Table 1).
In Section 2, we consider cognition and cognitive processing, and the related artificial cognitive processing introducing the artificial unconscious; in Section 3, we introduce a possible interdisciplinary understanding of the unconscious, the concepts of meta-memory and of its self-processing; in Section 4, we consider the emergence of the artificial unconscious within cognitive systems with sufficient complexity, related conjectures such as self-acquiring properties for meta-memory, meta-memorization as artificial unconscious, dreamed memory, and application examples to chatbots; in Section 5, we mention possible technologies to implement an artificial unconscious on AI systems; in Section 6, we mention some possible further research.

2. Cognition and Cognitive Processing

We begin by considering a pending question in the field of cognitive science ([10] pp. 394–397): “to what degree can we consider cognition suitably equivalent to cognitive processing, and the latter symbolically and non-symbolically computable?” The conjecture considered below is that taking account of the artificial unconscious can validly extend previous approaches, for example, connectionism, and at least further approximate the equivalence between cognition and cognitive processing. However, we mention that this equivalence is also dealt with in the literature using quantum approaches not considered here [24,25,26].
Cognition is intended, in a nutshell, as the general mental activity (performed by the brain and body) of acquiring knowledge and understanding, allowing their application to manage inputs, establish behaviors, and enable learning and repeatability [27,28,29,30,31,32,33]. We mention so-called embodied cognition [34], according to which the mental activity is deeply rooted in the body’s interactions with the environment, so the body plays a central role in shaping, or making emergent, the mind. Classic cognitivism considers the mind as a processor of information, while connections with the environment have little or no theoretical importance. Conversely, the focus of embodied cognition is on the relationship of the physical body interacting with the surrounding world. The idea is that living systems provided with cognition evolved as living beings whose neural resources were mainly devoted to perceptual and motor processing, and these cognitive activities largely consisted of immediate interactions and responses to the environment. Therefore, human cognition, rather than being considered centralized, abstract and distinct in input and output modules, is intended to have deep roots in the sensorimotor process [35,36]. Although this approach has several problems, it is considered in research approaches such as reactive robotics ([10], pp. 400–402) and mirror neurons [37].
In a broad sense, metacognition is self-cognition, or cognition of cognition. This opens the crucial subject of consciousness, the subject of countless studies and hypotheses. We consider the matter here in a very limited way as useful to deal with the idea of the unconscious, keeping in mind artificial applications. From this perspective, we can consider consciousness, the conscious, as a representation of cognition, such as the represented, remembered present. Processes of representation of the present may be considered to start from their simultaneous, contextual “coding” as memory [38,39].
Furthermore, artificial consciousness has long been considered in the literature at different levels. For instance, authors have considered general issues such as moral machines [40], machine consciousness for robotics [41,42,43,44,45,46], models for artificial consciousness [47,48,49,50], and related approaches [51,52]. The artificial unconscious is not intended as the negation, the opposite, or the lack of occurrence of artificial consciousness, but rather as induced by the interdisciplinary understandings of psychoanalytical concepts, in particular with reference to memory, its internal self-processing, and related acquired properties.
Regarding the second question, it relates to the understanding of cognitive processing in general and to what degree cognitive processing can be intended as computable, simulated by artificial cognitive processing.
“A Cognitive System is intended as a complete system of interactions among activities, which cannot be separated from one another, such as those related to attention, perception, language, the affectionate–emotional sphere, memory and the inferential system.”
[53]
We mention that a cognitive system is considered to be an architecture (biological, emergent, artificial) of a network, correlated, e.g., through covariance measuring a certain dependency between subsystems and processes, of interactions such as those related to actuators, the affective and emotional sphere, attention, inferential systems and logical activity, knowledge representation, language, memory, perception, and, as we consider here, the unconscious.
Cognitive processing is intended as the activity of the cognitive system related to mixed external and internal inputs and finalized to, for instance, acquire behavior, anticipate, compare, process language, engage in logical activity and have perceptions, and to dream, imagine, judge, reflect, represent, and remember. Furthermore, an example of property acquired through the self-activity of the cognitive system is the decision-making ability, allowing conditioned correlations and hierarchizing or weighting memorizations.
Such internal activity of the cognitive system, the cognitive processing, may be intended as at least sufficiently approximated by an artificial cognitive processing of the information generated by, and related to, internal and external activities. As such they may be performed and simulated by a suitable computer system running simulation software. Such information should be intended as quantification due to possible measures, but also, for instance, by indexing and properties allowed by suitable representations such as phenomenological considerations, similarities and analogies, levels of generic correlations, and topological features.
Such an approach is suitable and typical for the study and simulation of single processes, e.g., memory and visual perception, in laboratory experiments. However, this is at the price of reductively ignoring other aspects, however crucial, such as those related to the cognitive architecture and the emotional–affectionate and motivational aspects, which are difficult to quantify and computationally intractable [54,55,56,57].
More realistically, such artificial cognitive processing should have almost networked aspects such as those allowed by neural network technologies in several versions considered below. However, artificial cognitive processing suitable for having significant levels of equivalence with cognition should perform self-activities allowing acquisition of emergent properties, networking and correlating processes of several non-equivalent nodes that have different semantics. It is reminiscent of multiple models required for complex, collective behaviors (see [10], pp. 64–75, and [15], pp. 201–204). The cognitive architecture is intended as logically open and non-symbolically computable (see Section 5.2.6 and [10], pp. 111–112, [15], pp. 48–51).
Conceptually, taking account of the artificial unconscious can be expected to significantly increase the correspondence between cognitive processing and artificial cognitive processing, the latter being extended by considering intractable, unquantifiable, and non-measurable aspects of the unconscious. The perspective, then, is to make cognitive processing quasi-computable. In this conceptual framework, the subject of this article, the crucial question “is cognition equivalent to computation?” has a possible, limited positive answer complementing the fact that “a cognitive system may be assumed as a system of cognitive models interacting within a cognitive architecture”, where “cognitive models” [58,59,60] refers to specific cognitive processes (single, such as anticipating and learning, or multiple, such as visually searching and making decisions). As such, considering and adding properties of an artificial unconscious to artificial cognitive processing should allow for significant logical openness and get closer to simulating human cognitive systems.
The source of the following concepts and approaches is neither the contrast between the unconscious and the conscious nor the simple reduction of unconscious-less to fully-conscious. Furthermore, as considered in Section 3, the usual artificial unconscious-less information technology should not be intended simply as the opposite of the proposed artificial unconscious. Here, being conscious is absolutely different from being unconscious-less. It would be simplistic to assume that conventional artificial cognitive processing and AI systems lacking consciousness already constitute an artificial unconscious.
In this article, the notion of unconscious corresponds to the cognitive functions and properties of memory and its self-processes, rather than to the processes in a computer system that the system is not conscious of, i.e., lacking artificial consciousness. The issue faced here is that the usual artificial unconscious-less information technology is much poorer in properties than technologies that provide cognitive processing using artificial unconscious.
The term “artificial unconscious” has myriad meanings and possible approaches. This results in many ambiguities and possible misunderstandings, such as unconscious-less being considered equivalent to fully-conscious. To avoid such misleading situations, we could use “subliminal” rather than “unconscious”. However, in the field of cognitive sciences, this is incorrect since most unconscious processes do not occur as subliminal stimuli [61]. Consequently, it was decided to retain the term “artificial unconscious”.

3. Unconscious: Possible Interdisciplinary Understanding

As mentioned in the Introduction, we consider some interdisciplinary cognitive aspects and properties that are networkable, correlated as a system, and have, singularly and collectively, characteristics that are conceptually compatible, partially equivalent, and attributable to the psychoanalytic unconscious.
Briefly, we may consider the unconscious as having properties of inactive memorized information, sedimented by several types of non-usage. Such information is, however, still linked (with the current active memory and among itself) along significant periods of time, where low frequency, replication, redundancy, and confirmation of simple correspondence are crystallized as acquired degrees of freedom. The alleged inactivity of the memorized information constituting the unconscious may relate both to rare recalls and to rare usages in the performance of cognitive processing other than remembering. Otherwise, the memorized information constituting the unconscious may undergo non-rare self-processing and acquire properties such as varied emotional significance; it may thus become, or no longer be, part of clustered reminders. This will be discussed in detail in Section 4.3.2, dealing with the meta-memory constituting, through its self-processing, the unconscious.
Therefore, the unconscious considered here mostly constitutes properties of memorizations and their self-processing, leading to the acquisition of (infra and collective) properties. Such properties are supposed to influence the cognitive processing and the process of remembering which is certainly not reducible to finding a memory within a memory-deposit, but rather understood as reconstruction ([12], p. 52, [62]). Examples of properties include:
-
The removed memorizations, which relates to the effects of removing information on other information. In other words, it refers to metadata on the linked causes, previous coherences, and relationships between the removed and the remaining information. The removal can be only partial, actually ineffective on diluted causes, effects, and links, where dilution consists of diffused partials. The removal introduces forced incompleteness, in turn forcing restructuring. The removal implies new, artificial balances and coherences.
-
The ignored past, which relates to deliberately ignoring the past, as if it was not there; disabled, deactivated, suspended; to act in some ways despite the past; rationally intended as incompatible with the present. The past can be implicitly ignored by dealing instead with its distortions, approximations, and metaphorical representations. Of course, such process can be only partial, actually ineffective on diluted causes, effects, and links. It is often a survival approach and the enabling of a new start despite the past.
While the removed relates to networks of links, causes, and effects around a node (implicitly removed), the ignored relates to nodes and their links, causes, and effects (implicitly ignored). It is a matter of weighted networks and fuzzy nodes suitable for representing different levels of significance and intensity of nodes and links.
-
The evaporated past, which is the unused, unnecessary past that no longer has a role in cognitive activities; it then becomes transformed, made adequate and suitable for new situations. The evaporated past then becomes unrecognizable and non-reverse-engineerable, since evaporation is widespread and completed over time.
-
The residue, which relates to kinds of side effects or remainders, for any reason, of incomplete processes, for instance, those that are interrupted or suspended. We consider the difference between residue (pending alternative use options, including the definitive discard) and waste (the only option is to be waste).
-
The implicit, which relates to meta-representation (through logical inferences applied to the represented), interpretation, paradoxical reformulation, and the generation of the possible, allowing intuition.
-
The unused, which relates to memorizations having, for a significant amount of time, no explicit role in decision-making activities, so links are rarely made and no new links are added.
-
The invented past, which relates to an artificial past invented as suitable, convenient, completing, and explicative of the present. There is a kind of constructivist restructuring of the past, i.e., shaping it into the form most appropriate for the subject to think of it.
In sum, the unconscious is understood as acquired from usage and from the self-processing of memory and of the past.
We mention how the previous characteristics of memorizations may be artificially implemented, for instance as fuzzy properties [63] and as the level of networking intended as a number of directly or intermediated interconnecting links in networks [64,65].
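For instance, the “level of networking” mentioned above could be measured as the number of nodes reachable through direct or intermediated links within a bounded number of hops. The following sketch (the function and argument names are illustrative assumptions, not from the cited works) counts such connections with a bounded breadth-first search:

```python
from collections import deque

def networking_level(adjacency, node, max_hops=2):
    """Count nodes reachable from `node` within `max_hops` links,
    covering both direct and intermediated interconnections."""
    seen = {node}
    frontier = deque([(node, 0)])
    reachable = 0
    while frontier:
        current, dist = frontier.popleft()
        if dist == max_hops:
            continue  # do not expand beyond the hop budget
        for neighbor in adjacency.get(current, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                reachable += 1
                frontier.append((neighbor, dist + 1))
    return reachable

# Chain a - b - c - d: from "a", two hops reach b and c; three hops also reach d.
adjacency = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(networking_level(adjacency, "a"))              # -> 2
print(networking_level(adjacency, "a", max_hops=3))  # -> 3
```

A fuzzy variant would simply attach a membership degree to each node and weight the count accordingly.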
We may say that such acquired weak memorizations and their combinations crystallize into a metaphorical, inherited read-only memory (ROM)-like structure, eventually to be substituted by new versions and releases.
Unconscious-free learning is reduced to recognizable, simple, replicable, successful or unsuccessful, stored stimulus-reaction-like behaviorist sequences subsequently complexified by contextual data.
It is assumed that an artificial cognitive system provided with an artificial unconscious will have the features listed above. Such artificial unconscious is considered a source of complexity inasmuch as it is a source of improbable configurations, non-equivalences, perturbations, randomness, roles for weak links and fuzziness, transformations, and unpredictability, despite having local, temporary levels of coherence. Such a process may be considered occurring also by (in our case simulated) dreaming (see Section 4.3.3).
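A crude sketch of this sedimentation through non-usage, under illustrative assumptions about how memorizations are recorded (the `last_used` and `intensity` fields and both thresholds are hypothetical), could partition memorizations into active ones and candidates for the artificial unconscious:

```python
def sediment(memorizations, now, unused_after=100.0, weak_below=0.1):
    """Partition memorizations into active ones and candidates for the
    artificial unconscious, based on last-use time and link intensity.
    Field names and thresholds are illustrative assumptions."""
    active, unconscious = {}, {}
    for name, record in memorizations.items():
        idle = now - record["last_used"]
        if idle > unused_after or record["intensity"] < weak_below:
            unconscious[name] = record   # unused or weakly linked: sedimented
        else:
            active[name] = record
    return active, unconscious

records = {
    "fresh": {"last_used": 95.0, "intensity": 0.6},    # recently used, strong
    "stale": {"last_used": -20.0, "intensity": 0.6},   # long unused
    "weak":  {"last_used": 95.0, "intensity": 0.05},   # weakly linked
}
active, unconscious = sediment(records, now=100.0)
print(sorted(active), sorted(unconscious))  # -> ['fresh'] ['stale', 'weak']
```

In a fuller prototype, the unconscious partition would not be discarded but kept as the weakly linked, perturbative layer described above, re-enterable when thresholds are lowered.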

4. The Unavoidable Unconscious

The “unconscious” of the cognitive complexity is considered here as the sedimented, non-linearly processed effects of the past, as in “We always try to use old memories to recollect how we solved problems in the past. But nothing’s ever twice the same, so recollections rarely match. Then we must force our memories to fit—so we can see those different things as similar. To do this, we can either modify memory or change how we represent the present scene.” ([20], p. 298).

4.1. The Unconscious and the Cognitive System

A cognitive system can be understood as a system of interactions between cognitive activities and the processing of such a system is intended as cognitive processing performed through a suitable cognitive architecture. However, some of the cognitive processing is internal, in that it elaborates itself; for instance, performing representation, restructuring, and meta-memorization. Cognitive processing establishes and does not ignore the usage of the unconscious [3,66,67].
The term “sufficient level of complexity” relates to the level of complexity supposed to refer, even if controversially non-measurable, to the relevance, significance, and level of the predominance of multiple processes of self-organization and emergence occurring inside the memory and the meta-memory. The thesis we consider here, with AI usage in mind, is that any cognitive system with sufficient complexity is also provided with an unconscious. Significant levels of complexity occur when considering approaches such as the so-called non-symbolic computation, mesoscopic variables, network representations, and emergent computation (see Section 5.2.6 for more discussion on non-symbolic computability).
Low complexity characterizes the AI endowing computers with logical reasoning and problem-solving abilities. The classic behavioristic stimulus-reaction is intended to have minimum complexity, since learning is reduced to replicative reinforcement roles with no or very limited further elaboration.
Furthermore, the complexity of the cognitive system (its logical openness, incompleteness, quasi-ness, and multiple coherences) cannot disregard the role of the unconscious as a source of complexity generated by self-processed memory and meta-memory. Cognitive processing is then performed by using and establishing the unconscious [3,66,67]. As an example, the role of the unconscious may be considered almost as invasive, non-negligible, perturbative, suitable to break critical equivalences and increasing the significance of otherwise irrelevant issues. We consider, in short, that a sufficiently complex cognitive system cannot avoid also processing itself and its self-acquiring properties.
Considering cognitive systems without their unconscious and therefore regardless of their internal restructuring and emergent self-processing, is a matter of reductionism and simplification. Furthermore, the distinction between cognitive systems and their (artificial, in this) unconscious (mind, when appropriate) is a simplification, introduced as convenient to make problems more easily tractable. Self-processing of the unconscious could be, rather, intended as a tentative model of the unconscious mind [7], unavoidably integrated with cognitive processing.

4.2. Thinking in the Same Way as Reminding, and Reminding in the Same Way as Thinking

In living systems provided with sufficiently complex cognition, particularly human beings, remembering can be interpreted as thinking of the past, not as a separate supplementary activity; more precisely, remembering is itself an activity of thinking. Conceptually, we may hypothesize that we reason as we remember, and we remember as we reason. This is also related to the presence of different types of memory, e.g., episodic, explicit, long-term, short-term, and working, located in different interconnected brain regions, e.g., amygdala, basal ganglia, cerebellum, hippocampus, neocortex, and prefrontal cortex. Complex reasoning and complex remembering reciprocally enable each other, where the complexity is given by the occurrence of multi-layered, interconnected processes of emergent networking, fuzziness, correlation, and clustering.
Memorization is considered as information processed in the architecture of memory, acquiring further self-acquired properties [68]. The request to remember should be cognitively formulated and expressed. In simple cases, it is a matter of opening a closed box which we know contains the information we are looking for; this is the case of labeled variables. In other cases, it is a matter of reconstructing the memory, starting from partial, fuzzy, uncertain clues. Often the reconstruction process itself, and its possible cognitive adaptations, constitutes what is remembered more than its results.
The request to remember is answerable with simple or complex answers, depending on the question. In the former case, the answer, or the memorized content, may be intended to consist of data and nodes with little to no fuzziness and with weak, insignificant links or none at all. In the latter case, the answer consists of extracts from the networked memory and linked nodes, with significant fuzziness and correlations. The latter case can be reduced to, coincide with, or combine with the former.
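To make this distinction concrete, we may sketch a networked memory in Python that answers simple requests directly (the labeled case) and complex requests by ranked reconstruction from partial clues. All names and the partial-matching rule are hypothetical illustrations, not a definitive design.

```python
# Illustrative sketch: a networked memory where a request to remember is
# answered either directly (labeled node, "opening a closed box") or by
# reconstruction from partial, fuzzy clues via weighted links.

class NetworkedMemory:
    def __init__(self):
        self.nodes = {}            # label -> content
        self.links = {}            # (a, b) -> weight in [0, 1]

    def memorize(self, label, content, related=()):
        self.nodes[label] = content
        for other, weight in related:
            self.links[(label, other)] = weight
            self.links[(other, label)] = weight

    def remember(self, clue):
        # Simple case: the clue labels a node directly.
        if clue in self.nodes:
            return [(clue, 1.0)]
        # Complex case: reconstruct ranked candidates from nodes linked
        # to labels that partially match the clue.
        scores = {}
        for (a, b), w in self.links.items():
            if clue in a:          # fuzzy, partial match on the label
                scores[b] = max(scores.get(b, 0.0), w)
        return sorted(scores.items(), key=lambda kv: -kv[1])
```

For instance, memorizing "holiday-2019" linked to "beach" lets the partial clue "holiday" return "beach" with its link weight as a ranked reconstruction rather than an exact retrieval.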

4.3. Conjectures for an Artificial Unconscious

The focus here is on memory-related processing; in the current state of knowledge, cognitive processing can be thought of as emergent, cognitive, memory-related processes.
In the usual artificial unconscious-less information technology, memorized information is intended as constant, regardless of its usage and of time. Thus, memory is intended as addressable, invariant storage, and forgetting, therefore, is equivalent to removing. We consider, instead, that forgetting may consist of different levels of negative answers to the request to remember.
The usual storage understanding of memory is suitable for ordinary, well-defined entity-based issues, whose representations are invariable; regardless of the usage and storage, it is supposed to have no properties other than retrievability. The approach considered here, on the other hand, is suitable for multi-level, clustered, networked, and macroscopic information such as contextualized representations of images, sounds, situations, represented emotions, languages, and tentative, incomplete reasoning.
The use of memorized information is not reducible to its availability to be found, as in a catalog, but concerns the possibility of answering requests for memories that are cognitively processed, taking account of self-acquired properties due to external and internal usages, where the two aspects are often mixed, implied, and not precisely distinguishable. Examples are memories as network effects of usages, and self-profiling that both reveals and uses analytically hidden correlations, possibly represented by weight changes in networks, fuzzification, and the tentative, emerging creation of links.
Requests to resume and reconstruct memories may have different forms, such as questions with different levels of precision that are answerable with specific nodes or networks (in case nodes and links are part of the answer), clustered, correlated memories. Requests are supposed to be cognitively processable and metaphorically understandable in order to be answerable. These memories and requests are always interfaced and not directly accessible, as they would be in storage-like memory.
Such a process of answering is essentially a further process of searching for compatibility and similarity, for instance, via correlations and probability intended as plausibility and recurrence depending on usage. However, the process of answering, i.e., remembering and recalling, is a cognitive process in which the memorizing activities are integrated. It cannot ignore the remote history, such as obsolete usages, sedimentation (the artificial unconscious expressed by self-profiles and networks), correlated nodes weighing less than a determined threshold, and clusters with reduced numbers of members, all with low-frequency, fuzzifying usages: memories that have been removed, become ignorable, evaporated, residual, minimally represented, or implicit, or that constitute an invented past.
The role of such nodes and links in networks [64,69,70] and clusters [15,18] may be to constitute alternative decisive paths, to resolve or constitute equivalences, and to provide tentative configurations as creative issues. In weighted networking, it is a matter of considering low weights for rarely used or unused, improbable links; significant and increasing topological distances between nodes; and the fuzziness of nodes and networks, as considered below. This corresponds conceptually to the crucial roles of the weak forces, which take on decisive roles within incompleteness [71]. The artificial unconscious, intended here as a self-acquired property of meta-memory (see Section 4.3.2), is supposed to perform the role of weak forces in complex systems [71].
Consequently, repeated requests are, in reality, never completely the same, and, as such, answers are not expected to always be the same, although they have a very similar preponderant semantic significance.
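The graded forgetting described above, where links fade below a threshold without being deleted, can be sketched as follows. The decay rate and threshold are assumed values for illustration only.

```python
# Sketch: forgetting as graded weight decay rather than deletion. Links
# unused for long fall below a threshold and stop contributing to explicit
# answers, yet remain in the network as implicit "residue".

DECAY = 0.5        # decay factor per unit of disuse time (assumed)
THRESHOLD = 0.2    # below this, a link is implicit / "evaporated" (assumed)

def decayed_weight(weight, idle_time):
    return weight * (DECAY ** idle_time)

def explicit_links(links, now):
    # links: {(a, b): (weight, last_used_time)}
    # Only links still above threshold take part in explicit answering;
    # the full dictionary, residue included, is kept untouched.
    return {edge: decayed_weight(w, now - t)
            for edge, (w, t) in links.items()
            if decayed_weight(w, now - t) >= THRESHOLD}
```

Since answers are computed from the currently decayed weights, repeating the same request at different times yields similar but not identical answers, as conjectured above.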

4.3.1. Self-Acquiring Properties for Memory

Memory becomes part of the cognitive process, sharing aspects of incompleteness [72]. Remembering can be considered as intrinsically incomplete since, explicitly or implicitly, its contextual dependence ignores details and connections deemed non-crucial. It also has self-acquired properties such as attribution of links, a probability which reinforces and confirms in the absence of reliable alternatives. An example of memory possessing properties other than the usual storage and retrieval abilities is the associative memory and the related associative storage, retrieval, and processing methods [73].
The process of self-acquiring properties for memory [74] does not necessarily require active memorization processes such as cataloguing features and making associations in databases. Furthermore, such features may be performed by external retrieval processes. The self-acquired properties we are considering here are acquired in two ways:
(a)
as effects from using the memory, such as disuse, incomplete removal, partial disregarding, residue, making items unfindable, generating ambiguities, making updates, making non-replacement updates, making new inclusions, weakening networks, introducing side effects as nodes, and reducing correlations and interactions. Such properties relate to the memories, e.g., single and clustering, and their structure. However, the structure is not reductively intended as fixed and built-in, but rather as dynamically emergent from usage.
(b)
as self-processed memory going through processes of self-remembering [39] as dreaming (introduced below), and through cognitive processing.
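Case (a) can be illustrated with a minimal sketch in which each use of the memory reinforces the touched links and slightly weakens all the others, so that structure emerges from usage rather than being fixed and built-in. The reinforcement and weakening rates are assumptions.

```python
# Sketch of (a): usage effects as self-acquired properties. Accessed links
# are reinforced (Hebbian-like); all others weaken, modeling disuse,
# partial disregarding, and the weakening of networks.

def use(links, accessed_edges, reinforce=0.1, weaken=0.02):
    for edge in links:
        if edge in accessed_edges:
            links[edge] = min(1.0, links[edge] + reinforce)
        else:
            links[edge] = max(0.0, links[edge] - weaken)
    return links
```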

4.3.2. Meta-Memory, Meta-Memorization as Artificial Unconscious

Meta-memory is a sub-area of meta-cognition [75] within psychology. It relates to representations of memorizations and their self-acquired properties such as correlations and interrelations [76].
The meta-memory that we consider is a network of nodes, clusters, and correlations all having levels of fuzziness and incompleteness, see Section 3. The meta-memorization we consider here occurs when memorization is a self-reflexive process, includes properties of its usage, the profiles of usages, and self-profiles, where profiling is intended as non-ideal, data-driven, mesoscopic modeling, together with their emergent, ongoing properties such as coherence and correlation.
The self-acquired properties of meta-memory’s fuzziness differentiate the artificial unconscious from the self-acquired properties of regularly memorized information, which are not of the type considered in Section 3. The self-acquired properties of meta-memory related to semantic memory are intended as networks and correlations extended with self-generated, hidden, little-used (partially forgotten) sub- and partial networks. They are due to tentative, partial processes of self-organization or emergence establishing synchronizations or coherences among fuzzy (as in Section 3) and non-fuzzy memorizations.
Meta-memory with its self-acquired properties can be intended as constituting the artificial unconscious tout court, then allowing possible, imaginary, and fuzzy representations suitable for the logical openness of cognitive processes. In short, we consider the artificial unconscious as self-processed meta-memory with self-acquired properties. An example occurs when remembering weak or negative memories (i.e., nodes or links), using replicated weak implicative connections, avoiding disturbing representations, and combining current and unconscious links perturb the artificial cognitive processing, making purely symbolic processing unsuitable.

4.3.3. Dreamed Memory

The unlikely abstract separation between memory, the process of representation, the retrieval system, and the general processing is systemically very reductionist. Rather, the reductionist attitude begins from considering a retrieval system per se, an actual simplification of the system of retrieving capabilities together with capabilities of cognitive reconstruction, e.g., through highly probable correlations. However, probabilistic correlations deal with incompleteness and allow tentative, possible equivalent, findings suitable for processes of emergence within memory. The reconstruction process of memory is actually of a semantic nature, where it is not a reductive reconstruction, i.e., expressible through algorithms, but an emerging incomplete system of evocations, adaptations, and suitable distortions where the unconscious can be intended as a dreamed memory [77,78].
The processing of memories through dreaming can be understood as reducing threshold levels and usage of imaginary links and fuzzification, increasing degrees of freedom, and allowing multiple, superimposed, weak processes of emergence. In short, we tentatively intend dreaming as a self-processing of the meta-memory, such as meta-profiling, fuzzification of links and nodes, and reduction of threshold levels.
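The three operations just mentioned can be sketched, under assumed parameter values, as self-processing of a weighted link map: fuzzifying weights with noise, generating fictitious links by combining existing ones, and keeping even weak links thanks to a reduced threshold.

```python
# Sketch: "dreaming" as self-processing of the meta-memory, reduced here to
# three hypothetical operations on a weighted link map.

import itertools
import random

def dream(links, threshold=0.1, noise=0.05, seed=0):
    rng = random.Random(seed)
    # 1. Fuzzification: perturb every weight slightly.
    dreamed = {e: min(1.0, max(0.0, w + rng.uniform(-noise, noise)))
               for e, w in links.items()}
    # 2. Fictitious links: combine pairs of existing links sharing a node,
    #    i.e., variations and combinations of the current ones.
    for (a, b), (c, d) in itertools.combinations(list(links), 2):
        if b == c and (a, d) not in dreamed:
            dreamed[(a, d)] = 0.5 * (links[(a, b)] + links[(c, d)])
    # 3. Reduced threshold: keep even weak, normally ignored links.
    return {e: w for e, w in dreamed.items() if w >= threshold}
```

The combination rule and noise model are one of many possible choices; the point is only that dreaming is an autonomous transformation of the link map, not an answer to an external request.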
The semantic nature of the reconstruction process of memories allows for both semantic mistakes and the use of symbolic mistakes [79], as well as fault-tolerant, repairing, and as-if processes. Dreaming is then conjectured here as autonomous processing of the meta-memory [73,74,77,78]. Simulations are considered in [80,81].

4.3.4. Artificial Unconscious as an Implicit Process

At this point, we introduce how the artificial unconscious should be understood not only as memory with self-acquisition (meta-memory), networking, clustering, correlation, retrieval, and reconstruction properties, autonomously and implicitly (i.e., not on-demand) active and infiltrative as dreamed memory and profiles, but also as an indirect, unavoidable, implicit, intrinsic part of the cognitive process. We can then consider that artificial cognitive processing performed with the artificial unconscious is expected to acquire significant complexity, e.g., quasi-ness, incompleteness, and logical openness (it would be interesting to study whether the reverse is also true).
The unconscious-provided cognitive processing is assumed to be implicitly performed as superimposed and “parasitic”, e.g., through multiple meanings and reuses. As such, it is understood that establishing the artificial unconscious will unavoidably interfere with or contribute to, depending on the point of view, the explicit artificial cognitive process. Simple examples of “parasitic” artificial processing, when it is just a matter of priority, include applications active in the background, such as apps, that are devoted to security, geo-localization and parallel processing (discussed further in Section 5.2.2).
Such implicit, integrated cognitive usage is impossible in completely ruled symbolic processing systems, that is, when the explicit cognitive process exhausts the entire artificial cognitive processing, often provided with a simplified, separated, non-autonomous memory. The artificial unconscious can then be understood as partially active, since it is parasitic to the explicit cognitive processes. The artificial unconscious is intended to combine with the explicit cognitive processes: as a low-intensity option to be arbitrarily considered as an alternative, and then as dominant, when explicit processes (symbolic, for an artificial system) are not totalizing and are incomplete [16].
The artificial unconscious we are considering here is then an implicit (non-symbolic) system, a network of multiple possibly fuzzy, partially memory reconstructed, evoked, nodes and self-acquired properties.
Finally, we mention that the artificial unconscious requires the implementation of software programs provided with some level of logical openness, such as the fuzzy processing mentioned in Section 5.2. As a research issue, we may figure out a Turing-like test [82,83,84] to distinguish whether artificial cognitive processing is provided with an artificial unconscious or not.

4.3.5. Application Examples: AU Chatbots

We consider in the following the case of chatbots [85]. These are chatting robots with natural language processing capabilities [86] and machine learning abilities [87], such as in conversational AI [88]. The technology is largely used in e-communications and interactions with online services such as social networks, e-commerce, standardized user interactions, and games. However, this technology may be considered for more complex use in human interactions, such as in nursing, assisting the sick (especially in neurology) and the elderly, customer interfaces, and education.
In the approach we are considering here, chatbots are differentiated from each other not only because of their technical features. We differentiate chatbots of the same technology based on whether or not their cognitive processing is provided with an artificial unconscious.
The differences in robustness and reliability between chatbots can be characterized by:
(1)
Preponderant deterministic rule-based nature with limited learning capabilities and sensitivity to context and usage;
(2)
Preponderant localized machine learning abilities, such as specializing in dealing with the same types of users;
(3)
Preponderant general machine learning abilities; and
(4)
Machine learning abilities based on memory with self-acquired properties and artificial unconscious.
It is important to distinguish between cases when questions are standardized, such as frequently asked questions (FAQ), and cases when dealing with: different scenarios including previously profiled users, such as in e-commerce; well-defined contexts, such as requests for assistance by users of services, considering contextual specificities, e.g., deadlines, regulatory and functionality news; generic situations, such as requests for help in emergencies, behavioral indications, and precautions to be taken; and advice and educational information for students of specific disciplines. The complexity of the fourth case is intended to increase robustness and availability in response to the complexity of the questions.
Acquiring complex reliability and robustness is indispensable in complex interactions, such as in supporting medical interactions, particularly when inductive reasoning should also involve the artificial unconscious, as in educational and personal entertainment interactions. In these cases, one particular chatbot may be more suitable than another, depending on its training. As discussed above, it is a matter of allowing learning and interactions based on the emergent properties of self-acquired memory, such as under-threshold correlations, weak networking, fuzziness, restricted clustering, and non-implicative but available potential options. Sequences and families of such low-probability options constitute unique, contextually equivalent or non-equivalent configurations.
We may name such artificial unconscious-provided chatbots “AU-chatbots”. AU-chatbots are differentiated depending on usages and are contextually equivalent, where one is worth the other in terms of story, experience, and suitability. Individual features can obviously be replicated an arbitrary number of times and initiate different evolutionary stories. AU-chatbots are intended to perform in ways other than just reacting by promptly answering context-free questions, since the multiplicity of possible answers is larger than the multiplicity of the questions. AU-chatbots are even supposed to be able to propose equivalent possibilities. The expected interesting features of AU-chatbots go beyond the usual learnable properties. It is a matter of the self-acquired ability to perform non-necessary, non-implicated tasks such as introducing new low-probability links, considering reduced coherence thresholds, setting up scenarios, and considering options.
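One hypothetical way an AU-chatbot could use such under-threshold options is sketched below: explicit scoring selects an answer first, and only when candidates are near-equivalent do weak, normally ignored associations break the tie. All names, the score maps, and the tolerance are illustrative assumptions.

```python
# Sketch of an AU-chatbot answer step: the unconscious acts only on
# equivalences, as a low-intensity, normally ignored contribution.

def choose_answer(candidates, explicit, unconscious, epsilon=0.05):
    # candidates: list of answer ids; explicit/unconscious: id -> score.
    best = max(candidates, key=lambda c: explicit[c])
    ties = [c for c in candidates
            if explicit[best] - explicit[c] <= epsilon]
    if len(ties) == 1:
        return best
    # Contextual equivalence: let under-threshold associations decide.
    return max(ties, key=lambda c: unconscious.get(c, 0.0))
```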
Finally, we posit that the approaches considered above may be used for other kinds of AI applications, such as self-driving vehicles. In medical interactions, typically with patients, AU-chatbots may be asked to suggest actions, obviously with lower priority than the doctor’s individual prescriptions (which, in this case, the AU would learn): for instance, actions prescribed by pharmacological producers such as dosages, assessing symptoms (intolerances), requesting information, cross-collecting medical information, and suggesting specialist exams and treatments to the patient’s doctor. The AU-chatbots are expected to also use their experience and unconscious to act as trained assistants, separately interacting with the patient’s doctor(s) and the patient by proposing, reporting, and remembering actions depending on:
-
limitations such as allergies, surgeries, incompatibilities, duration of treatment (min, max, periodicity), temporary inadequacy (e.g., pregnancy or convalescence);
-
coherence and stability of consultable medical tests;
-
convalescence states;
-
formulation of induced online questions;
-
historical non-medical data about the patient;
-
social health emergencies, such as lockdown situations;
-
taking into account seasonal and infectious problems;
-
the crossing and use of information and recommendations received;
-
the detection and representation, e.g., network, of similarities to other cases;
-
the realization of trends, e.g., performed correlation tools, from cases;
-
unconsciously activated inductions and probabilities to be considered, for instance using data as possibly confirmatory, activating questions and suggesting actions such as clinical tests to the doctor.
In these cases, the AU-chatbot acts as a personal assistant to both the patient and the doctor. However, in dramatic emergency cases when there is no medical care available, the advice of an AU-chatbot can be very useful. Furthermore, such AU-chatbots may become personal assistants in education, specializing, for instance, according to age or discipline, e.g., mathematics, languages, music, or physics. They could support educational and creative activities for disabled students, such as writing, reading, drawing, and verbal communication.
We may envisage a future where personalized AU-chatbots are made available in specialized areas such as supporting health, education, tax, and economic activities, accessible by request of corresponding, uniquely (proprietary?) identified users. We also mention how forms of AU-chatbots may eventually support user activities on the Internet, such as browsing and security. These will be real profiles and profiles of profiles, which will then have both strategic uses, such as detecting collective social behaviors to be oriented, e.g., in epidemics, panics, and food or drug shortages, and, unfortunately, possible commercially and sociologically manipulative uses, such as consensus and price manipulation.
Chatbots evolve by different experiences of machine learning depending on the processed inputs. AU-chatbots evolve also for their self-acquired, emergent artificial unconscious.
AU-chatbots may interact in populations of AU-chatbots as agents with levels of autonomy, such as in the case of agent populations establishing simulated collective behavior. Corresponding user human agents will be profiled and profiles of profiles will be available. We think that it will require very serious legislative regulations and embedded unavoidable self-abilities of AU-chatbots to be able to refuse illegal use of profiles, report them to the authorities, and possibly self-destruct. Furthermore, can an unconscious-free chatbot learn to behave like an unconscious-provided chatbot? In sum, is the unconscious learnable or replicable in another chatbot? We believe that considering the artificial unconscious is a fundamental step for AI and for semantic processing [89].
Finally, at this point we ask ourselves whether the artificial unconscious may allow us to consider an AU-chatbot, like Turing’s oracle, almost as a generator of non-algorithmic (non-computable) decisions having, however, a generic coherence [90].

5. Outlines and Possible Technologies for Artificial Unconscious-Based AI Systems

In Section 5.1 we conceptually outline the research project and in Section 5.2 we consider examples of possible technologies for applications. The majority of the issues discussed in Section 5.1 are rarely addressed by the extant literature and constitute novelties despite being implementable through available technologies.
We propose approaches to address the following research gaps: the ability to artificially, cognitively process implicit properties, such as those due to reduced levels of memorizations; their networks of inter-relationships among memorizations and networks of interrelated relationships; the effects of memorization removals; and fictitious links.

5.1. Conceptual Outline of the Research Project

We introduce a consequential conceptual proposal for a research and applicative project aimed at creating AI systems provided with, and using, an artificial unconscious, generated by use and involved in cognitive processing. The project is determined based on the available technology mentioned in Section 5.2 and can be described by the following conceptual modules and characteristics of possible applicative implementations:
  • Active memorization having available options (chosen through algorithms of selection) to code, represent the input to be memorized and including contextual networked inter-relationships with other concomitant inputs. In a nutshell, each memorization is a network of nodes that have different intensities. Active memorization includes the networking of current memorized inputs with previous memorizations which have different levels of intensities, e.g., represented by levels of fuzziness, and have a number of common nodes (the number depends on the level of relatedness).
  • Active memory is presumed to perform several processes. For instance, generate networks of inter-relationships among memorizations (networks of links among nodes, in turn consisting of interrelationships between memorizations). The links of such networks may have different intensities, may be all generic or of different kinds, in case composed. Such networks of inter-relationships are updated on the occasion of any creation of inter-relationship among memorization.
  • The previous processes are supposed to occur at each processing step of any input. Otherwise, it can be assumed to proceed from pre-established initial conditions.
  • The first processing step may be supposed to take place with no links, no previous memorizations, and no network of inter-relationships. That is considering the hypothesis of the tabula rasa.
  • Another case of an internal self-generated process, i.e., occurring not necessarily in the face of an input, consists of the generation of fictitious links, inter-relationships among memorizations and links of the network of inter-relationships. Fictitious links may be intended generable in different ways, such as variations and combinations of the current ones possibly chosen for their levels of occurrence, for their levels of intensity, and also randomly. Furthermore, the generation of fictitious links may be intended as a process of dreaming when occurring by reducing threshold levels of the links’ intensities and levels of fuzzification, for instance only partially coherent.
  • Active memory is also supposed to actively participate in the cognitive process to answer requests to provide, i.e., make available, memorizations. These requests are not just requests directed to labelled items of a warehouse-like store. Specifically, the active memory processes these requests in such a way that they may have non-univocal answers, but, for instance, subnetworks of ranked possibilities, allowing for the reconstruction of answers or of their emergence from the available networked fuzzy memorizations. The intersection of answers to questions can result in univocal answers.
  • The flexibility of the artificial unconscious represented, for instance, by levels of fuzzifications, intensities of links, level of thresholds for acceptable adaptations, and approximations for subnetworks and scenarios, may be used experimentally. For instance, by varying parameters in on-going cognitive processing to simulate the randomness of emergency situations, to induce cognitive properties from memory properties, the decision-making in the face of distorted or manipulated memorizations, and simulate memory-related cognitive pathologies.
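The first two modules above, active memorization and the generation of inter-relationships, can be sketched as a minimal data structure. All names are hypothetical, and the Jaccard-based intensity rule is just one of many possible choices for relating intensity to the number of common nodes.

```python
# Sketch of the "active memorization" module: each memorization is a set of
# nodes; an inter-relationship link to every previous memorization is created
# with an intensity that grows with the number of shared nodes.

class ActiveMemory:
    def __init__(self):
        self.memorizations = []   # list of frozensets of nodes
        self.inter_links = {}     # (i, j) -> intensity in [0, 1]

    def memorize(self, nodes):
        new = frozenset(nodes)
        j = len(self.memorizations)
        for i, old in enumerate(self.memorizations):
            common = len(new & old)
            if common:
                # Jaccard similarity as an illustrative intensity rule.
                self.inter_links[(i, j)] = common / len(new | old)
        self.memorizations.append(new)
        return j
```

The first call starts from the tabula rasa hypothesized above: no links, no previous memorizations, no network of inter-relationships.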
Since the artificial unconscious of AI systems is generated by the usage, such systems are contextualized and, in the case of individual use, even considerable as personalized.
We mention that issues 1, 2, 5, 6, 7, and 8 are still not addressed by the extant literature, at least not to be applied such as for available AI-based solutions.
In Section 4.3.5 we worked out applications to chatbots, while Intelligent Decision Support Systems (IDSS) and Intelligent Educational Support Systems (IESS) for parental and distance education may be other cases.
Advantages and disadvantages of using AI systems provided with artificial unconscious depend on expectancies, e.g., looking for stable scenarios as an advantage or disadvantage, and their effective usages, e.g., looking for multiple rather than univocal answers. Cases are listed in Table 1.
The proposed conceptual framework and conjectures have, as yet, no experimental applications and no tangible results to validate the effectiveness of the proposals. However, we present conceptual cases and examples considered appropriate to represent the features of such research projects and thus allow validations.
These cases typically relate to cognitive, non-fully rational processing, such as decisions involving non-explicitly considered, weakly linked, and poorly correlated memorizations. It is a matter of presenting possible scenarios in support of decisions, taking account of soft, weak, poorly related aspects, such as those of the unconscious, which nevertheless become decisive in the case of equivalences or a lack of features. Here are some examples of this:
-
Deciding between almost equivalent options in decision-making (in marketing the use of a color or suitable images is decisive). The equivalence is solved by considering fuzzy, remote memorizations connected by long, indirect paths.
-
Avoiding the use of, for instance, a term, a product, or approaches, because of a link, even if soft or indirect, with previous negative memorizations.
-
Similarities between the linkage of the input under consideration and previous memorizations may make it possible to consider issues that are poorly linked and not explicitly involved.
-
Considering alternative, lower-intensity, equivalent network paths involves memorizations not directly implicated but that allow possible scenarios. For example, an AU-chatbot bringing to the attention of a doctor their patient’s past, fuzzified cases of related allergies and incompatibilities that occurred in different contexts.
Other cases may relate to the following areas:
-
The re-application of previous approaches, represented as networks of memorizations, and evoked by similarities with the input, for example, in its networking and some other aspects.
-
Influencing the cognitive processing of the input through, for instance, aspects of its representation and form, such as the language used, the accent, and the terminology, implicitly softly linked with previous memorizations and allowing the establishment of influencing scenarios.
-
The availability to the cognitive processing of previous evoked scenarios, correlated at different levels, in which to avoid or perform reapplication.
Suitable applications are not expected for fully rational games, but rather for quasi-rational problems. Learning aspects and strategies apply, however, in a global context where the pending artificial unconscious is always available to solve equivalences, substitute lacking aspects, and support decisions by considering low correlated memorizations and scenarios.

5.2. Possible Technologies for an Artificial Unconscious

In this section, we consider examples of possible technologies suitable for the project proposed in this article. Suitable architectures should be designed depending on the specific approaches and engineering adequacy, and should deal with specific tasks such as the creation and use of clustering, correlation, fuzzification, and networking. It is a matter of an architecture for processing and information representation that overcomes the unsuitable completeness of algorithms, whose eventual computable probabilistic features make them unsuitable generators or acquirers of an unconscious, since their memories are simply available or not. In general, we suppose the necessary availability of hidden, incomplete, implicitly involuntary, multiple, superimposed, reusable parasitic processing and its memory.
We mention in the following an incomplete and not necessarily ranked list of current technologies that we consider relevant for our purpose to conceptually support the transformation of our conjecture of an artificial unconscious into an implementation project.

5.2.1. Long Short-Term Memory for Deep Learning

An example of AI technology we consider is the sub-symbolic processing of artificial neural networks (ANNs) and their deep machine learning [91,92,93]. We particularly refer to recurrent neural networks (RNNs) [94], which can use their internal states, i.e., memory, to process sequences of inputs. A case is given by long short-term memory (LSTM), an artificial recurrent neural network architecture used in deep learning to process not only single data points (such as images), but also entire sequences of data (such as in speech recognition). LSTM is able to retain information for long periods of time and thus to process time-series data, for instance, to classify and predict. The memory stores activation parameters, input and temporary data, and weights [94].
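To make the gating mechanism and the persistent cell state (the "long" memory) concrete, a minimal numpy sketch of a single LSTM cell step follows. The weights here are illustrative placeholders, not a trained model.

```python
# Sketch of one LSTM cell step: the cell state c persists across steps
# (long-term), the output h is recomputed at each step (short-term).

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    # W: (4n, d) input weights, U: (4n, n) recurrent weights, b: (4n,) biases
    z = W @ x + U @ h + b
    f, i, o, g = np.split(z, 4)
    f, i, o, g = sigmoid(f), sigmoid(i), sigmoid(o), np.tanh(g)
    c_new = f * c + i * g          # forget old state, admit new candidate
    h_new = o * np.tanh(c_new)     # exposed short-term output
    return h_new, c_new
```

Iterating `lstm_step` over a sequence retains information in `c` across arbitrarily many steps, which is what makes the architecture suitable for time-series classification and prediction.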

5.2.2. Parallel Processing

Distributed memory processors have their own local memory available, and there is no global address space across all processors. Furthermore, distributed memory systems need a communication network to coherently connect and manage the inter-processor memory. The shared memory of parallel computers can vary, e.g., in managing priorities and competing accesses, but the general characteristic is the ability of all processors to access the global memory. Modern computer systems actually use hybrid distributed–shared memory architectures. This is the case for shared processing systems, multitasking systems optimizing alternation, for instance, between information and communication processing, and virtual processors [95,96].
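A toy Python sketch of the "parasitic", background processing mentioned in Section 4.3.4 follows, using a thread and a lock as stand-ins for a shared-memory segment; all names are hypothetical.

```python
# Sketch: a low-priority background thread updates a shared usage profile
# while the main flow would handle explicit requests; the dictionary plus
# lock plays the role of shared memory with managed competing access.

import threading

profile = {}
lock = threading.Lock()

def log_usage(item):
    with lock:
        profile[item] = profile.get(item, 0) + 1

def background_profiler(items):
    # Runs alongside explicit processing, building the implicit profile.
    for item in items:
        log_usage(item)

t = threading.Thread(target=background_profiler, args=(["a", "b", "a"],))
t.start()
t.join()
```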

5.2.3. Networks

It is possible to remove a web node. However, it is not possible to remove the previous diffused usages of the node and all the incomplete, interrupted, related links (previous usages are almost irreversible, since their diffuse invasiveness makes them non-reverse-engineerable). Removal is effective only insofar as the node is substituted, i.e., replaced by something that acquires a similar role and does not leave the previous node missed, thereby allowing equivalence. We also mention links that are no longer used, having been replaced by more efficient, secure, and adequate connections. This is a matter of decay and of the acquisition of properties from disuse, e.g., unwanted usage by the context.
The fuzziness of a node may be given by its incomplete localization, by the need for its reconstruction, or by its hiddenness [97]; more formally, a node is fuzzy when it is a fuzzy set whose membership function includes uncertain parameters [98]. The networking is supposedly weighted [99], and the fuzzy nodes are intended to represent different levels of significance and intensity of nodes and links, where low intensity characterizes the artificial unconscious. The entire issue is relevant to the Internet, with considerations such as profiling and browsing activities; the usage of localized Internet activities for laboratory experiences, such as psychological and sociological ones; decisions about the right, or suitability, to be forgotten; and related legal and privacy issues that also have security relevance.
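A minimal, hypothetical sketch of such usage-weighted networking — link intensities reinforced by use, decaying from disuse, with below-threshold links forming the implicit, "unconscious" layer rather than being deleted — might look as follows; all names, rates, and thresholds are invented for illustration:

```python
class UsageWeightedNetwork:
    """Hypothetical sketch: links gain intensity when used and fade from
    disuse; low-intensity links are kept as an implicit layer, not removed."""

    def __init__(self, decay=0.9, reinforce=0.2, threshold=0.05):
        self.links = {}  # (a, b) -> intensity in [0, 1]
        self.decay, self.reinforce, self.threshold = decay, reinforce, threshold

    def use(self, a, b):
        # each explicit use strengthens the link, capped at full intensity
        w = self.links.get((a, b), 0.0)
        self.links[(a, b)] = min(1.0, w + self.reinforce)

    def tick(self):
        # disuse: every link fades each cycle instead of being deleted outright
        for k in self.links:
            self.links[k] *= self.decay

    def explicit(self):
        return {k: w for k, w in self.links.items() if w >= self.threshold}

    def unconscious(self):
        # below-threshold links stay available as implicit, low-intensity options
        return {k: w for k, w in self.links.items() if 0 < w < self.threshold}

net = UsageWeightedNetwork()
net.use("question", "answer")
net.use("question", "context")
for _ in range(20):
    net.tick()
    net.use("question", "answer")   # only this link keeps being used
print(sorted(net.explicit()))       # [('question', 'answer')]
print(sorted(net.unconscious()))    # [('question', 'context')]
```

The disused link is not removed: it sinks below the significance threshold, remaining available as an implicit option, as conjectured for the artificial unconscious.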

5.2.4. Clustering

There are several robust, long-established approaches to community detection. Among them, we mention top-down and bottom-up clustering and so-called self-organizing maps (SOMs) [100]. In particular, we mention clustering techniques such as k-means, k-median, and k-medoids [101,102,103], multivariate data analysis (MDA), and cluster analysis to identify classes [104,105]. Furthermore, mesoscopic variables ([17], pp. 110–116), [18,19] are considered as clusters and represented as nodes in networks. Their processing takes account of their properties, such as the number of components, distribution, possible multiple elements, recurrence, density over the intervals used, and the deviation of the actual time evolution from its averaged mesoscopic evolution [106].
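As a minimal illustration of the baseline technique named above, a plain k-means pass (written here in NumPy; the data, seeds, and dimensionality are hypothetical) groups memorization vectors into candidate classes:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: assign points to the nearest centroid, then move each
    centroid to the mean of its cluster, repeating until assignments settle."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # distances of every point to every centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# two well-separated synthetic blobs of 2D "memorization" vectors
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])
labels, centroids = kmeans(X, 2)
print(labels.shape, centroids.shape)
```

k-median and k-medoids follow the same assign-and-update structure, replacing the mean by the median or by a representative member of the cluster.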

5.2.5. Correlational Analysis

In this regard, several specific, robust approaches [107] with a long tradition and a well-established statistical nature are available.
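As an elementary illustration (with invented usage traces), a correlation coefficient can indicate when two memorizations co-occur strongly enough to propose an implicit link between them:

```python
import numpy as np

# hypothetical usage traces of three memorizations over 200 interactions
rng = np.random.default_rng(0)
a = rng.normal(size=200)
b = 0.8 * a + rng.normal(scale=0.3, size=200)  # b tracks a, with noise
c = rng.normal(size=200)                       # independent trace

# Pearson correlation: high |r| suggests proposing a link, low |r| does not
r_ab = np.corrcoef(a, b)[0, 1]
r_ac = np.corrcoef(a, c)[0, 1]
print(round(r_ab, 2), round(r_ac, 2))
```

Robust variants [107] replace the Pearson coefficient with estimators less sensitive to outliers, but the role in link proposal is the same.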

5.2.6. Emergent Computation

We consider here implicit, symbolically inexpressible (sub-symbolic), hidden, partially symbolically unruled zones of information processing. Examples include rules whose specific domains may not completely exhaust all the information processing, as in emergent processing and in the merging of symbolic and sub-symbolic computation [108].
Another example is so-called cloud computing [109], intended to take place when the processing is performed through the use of sets of computational resources from populations of hardware and software resources, which may be:
(1)
partially equivalent or non-equivalent;
(2)
identical, but in different states of availability;
(3)
functionally equivalent, but having, for instance, different time or energy effectiveness, different security levels, and deterministic or approximate results in the face of reduced computational time.
This is generally the case for the more general notion of emergent computation, where computation emerges as an acquired property [110,111].
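The resource heterogeneity of items (1)–(3) can be caricatured in a few lines; the functions below are hypothetical stand-ins for pooled computational resources, one exact and one trading accuracy for reduced computational time:

```python
def exact_sum(xs):
    # deterministic resource: full accuracy, full cost
    return sum(xs)

def approximate_sum(xs):
    # functionally equivalent resource with reduced computational time:
    # strides over half the data and rescales (item (3) above)
    return 2 * sum(xs[::2])

# a task dispatched to a pool of non-identical but substitutable resources
pool = [exact_sum, approximate_sum]
data = list(range(1000))
results = [resource(data) for resource in pool]
print(results[0])  # 499500, the exact result
print(results[1])  # 499000, approximate but with half the additions
```

In a cloud setting, which resource actually serves a request may vary with availability, so the overall behavior of the computation is a property of the pool rather than of any single algorithm.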

5.2.7. Fractional Calculus

Fractional calculus is a mathematical theory dating back to the very foundation of differential calculus and concerns fractional-order derivatives. Today, fractional calculus is applied in different disciplines, such as theoretical and applied physics (e.g., quantum mechanics), engineering, and signal analysis [112].
Regarding the topics covered here, the fractional derivative has been applied in modeling memory phenomena considered as consisting of two stages, corresponding in some ways to explicit (declarative) memory, a part of long-term memory, and to implicit or unconscious memory (which includes procedural memory, i.e., motor skills). In some studies, the first stage is assumed to have permanent retention, while the other is assumed to be governed by fractional derivative models, to the point of considering the physical meaning of the fractional order to be an index of memory [113].
The use of fractional memory enables artificial systems to model human-operator memory patterns, allowing the development of “shared-control systems, where a virtual human model assists the human during a control task, and human operator state monitoring” [114]. The use of fractional calculus allows for building systems that can naturally transfer control from a machine to a human, such as in shared control and semiautonomous driving, which involve transitions of control between humans and machines [115].
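As a sketch of how a fractional order can act as an index of memory, the Grünwald–Letnikov scheme below (a standard discretization of the fractional derivative, written here in Python; the grid and test function are illustrative) weights every past sample, with contributions that fade the further back they lie:

```python
import numpy as np

def gl_fractional_derivative(f, alpha, h):
    """Grünwald–Letnikov fractional derivative of order alpha on a uniform
    grid of step h. Every past sample contributes through the generalized
    binomial weights, so alpha tunes how strongly the past is remembered."""
    n = len(f)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        # recursive generalized binomial coefficients (-1)^k C(alpha, k)
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    out = np.empty(n)
    for j in range(n):
        # weighted sum over the whole history f(t), f(t-h), ..., f(0)
        out[j] = np.dot(w[:j + 1], f[j::-1]) / h**alpha
    return out

t = np.linspace(0, 1, 101)
f = t**2
# sanity check: for alpha = 1 the weights reduce to a backward difference
d1 = gl_fractional_derivative(f, 1.0, t[1] - t[0])
d_half = gl_fractional_derivative(f, 0.5, t[1] - t[0])
print(round(d1[-1], 1))  # 2.0, the derivative of t^2 at t = 1
```

For integer orders the memory truncates (only the most recent samples have nonzero weight); for fractional orders the entire history remains influential, which is precisely the memory effect exploited in [113,114].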

6. Further Research

We list possible lines of research related to specific issues, some of which were mentioned above:
  • Explore the possibility of learning to behave as having an active artificial unconscious.
  • Define a Turing-like test to distinguish between AU-chatbots and chatbots.
  • Determine whether it is possible to consider an AU-chatbot as a Turing oracle, almost as a generator of non-algorithmic decisions.
  • Consider populations of interacting AU-chatbots and chatbots self-acquiring autonomous properties, such as in collective behaviors.
  • Consider clusters of memorizations. Such clusters may be intended as fuzzy memories when a spectral clustering algorithm can support their identification and allow for network reconstruction [99].
  • Consider the possibility of allowing corresponding fuzzy implications and of identifying cluster similarities as semantic parameters.
  • Explore the possibility of artificial dreaming as an autonomous, emergent, preconscious [116] process where large numbers of below-threshold values of symbolic significance and correlation processes are performed. Imaginary links may relate to weak clustering when clustering algorithms use low threshold levels.
  • Consider issues related to the possibility that the unconscious and its emergent properties (as for machine learning) be moved, prescribed, replicated, influenced as is, and artificially produced.
  • Determine whether the unconscious emerging from usage and from self-processing occurs during the learning process or is itself a kind of learning.

7. Conclusions

The artificial unconscious we considered here is intended as meta-memory with self-acquired properties, such as effects from the usage of memory and meta-memorizing as a self-reflexive process. We consider properties and profiles of its usage, and self-profiles, where profiling is intended as non-ideal, data-driven modeling, together with its ongoing, acquired properties, such as coherence and correlation. Such an artificial unconscious is intended to also be generated by dreaming, and is understood as an emergent, implicit, independent, low-intensity network proposer, that is, always implicitly active as an available option and explicitly active when symbolic. Ruled activities do not fully use cognitive resources and thus support implicit, parasitic, i.e., not explicitly allowed, activities. In short, we tentatively intend dreaming to be the self-processing of meta-memory, such as meta-profiling, the fuzzification of links and nodes, and the reduction of threshold levels. Dreaming is thus conjectured here as the autonomous processing of meta-memory.
The crucial role of the artificial unconscious is to be a source of complexity generated by self-processed memory, meta-memory, and related acquired properties. Learning without an unconscious seems reductionist, a generalization of learning from a tabula rasa, which is a simplification. Depending on the application, learning with an unconscious is historical and unique and can be suitable for complex interfacing, while unconscious-free learning may be suitable for well-defined, specific, standardized, simplified interfacing. We consider memory to be integrated with artificial cognitive processing, where requests for memories are themselves cognitive processes. We consider that thinking occurs in the same way as reminding, and reminding in the same way as thinking. Furthermore, memory is no longer storage and reminding no longer finding, but rather a reconstruction process having a semantic nature.
We introduce some application examples suitable for experimental implementation, considering the technology of chatbots extended with features of the artificial unconscious. In this regard, we introduce the term AU-chatbot. In AU-chatbots, we consider the possibilities generated by the processing and self-processing of the artificial unconscious as constituting the complexity of the artificial cognitive processing.
Well-known examples of applications of unconscious-free learning include online e-commerce interfaces, customer interfaces, online services, and the management of FAQs. Examples of applications of learning with an unconscious include, first, personalized medical interactions and educational support activities, where AU-chatbots are expected not only to answer, but to take autonomous initiative based on their artificial unconscious as a processed, usage-dependent past. Second, they include self-profiles, such as formulating online questions, using the information and recommendations received, recognizing trends from cases, and generating unconsciously activated inductions and probabilities to be considered and proposed to the doctor. The approaches considered above may also be used for other kinds of AI applications, such as self-driving vehicles.
What are the advantages and disadvantages of AI in terms of the possible availability of an artificial unconscious? The main purpose is to allow suitable human-like attitudes in representing, interfacing, and learning. This is important for supporting and complementing human-centered activities such as assisting the sick and elderly, nursing, education, and suggesting creative activities. Other purposes relate to decision-making activities beyond optimization, suitable for presenting options to the supported user rather than making synthetic, presupposed optimal decisions.
We mention the possibility of considering AU-chatbots for social profiling, allowing medical and safety simulations, but also market and social manipulation. In particular, populations of interacting AU-chatbots could acquire emergent properties, which is currently impossible for algorithmic interactions such as standardized purchase-sale processes in the stock market.
We discuss some possible technologies suitable for implementing experiments on an artificial unconscious, such as LSTM for deep learning, parallel processing, node deletion in networks, fuzzy nodes and weighted networks, clustering, correlation identification, and emergent computation.
Finally, the article proposes conjectures to be implemented that are possible with the currently available technologies. In Table 2, we present a summary view.
The present research work is dedicated to the memory of Eliano Pessa, for his deep insights and expertise in the science of complexity.

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Von Hartmann, E.; Coupland, W.C.; Trench, K.P. Philosophie des Unbewussten; Trübner, & Co.: London, UK, 1893; Volume III. [Google Scholar]
  2. Shann, H.J. Unconscious Thought in Philosophy and Psychoanalysis; Palgrave MacMillan: New York, NY, USA, 2015. [Google Scholar]
  3. Smith, D.L. Freud’s Philosophy of the Unconscious; Springer: New York, NY, USA, 1999. [Google Scholar]
  4. Freud, S. The Unconscious; Originally published in 1915; Penguin Classics: London, UK, 2005. [Google Scholar]
  5. Chalmers, D. The Conscious Mind. In Search of a Fundamental Theory; Oxford University Press: Oxford, UK, 1996. [Google Scholar]
  6. Crick, F. The Astonishing Hypothesis: The Scientific Search for the Soul; Scribner: New York, NY, USA, 1995. [Google Scholar]
  7. Mlodinow, L. Subliminal: How Your Unconscious Mind Rules Your Behavior; Pantheon Books (Random House): New York, NY, USA, 2012. [Google Scholar]
  8. Piletsky, E. Consciousness and Unconsciousness of Artificial Intelligence. Future Hum. Image 2019, 11, 66–71. [Google Scholar] [CrossRef]
  9. Minati, G. Phenomenological structural dynamics of emergence: An overview of how emergence emerges. In The Systemic Turn in Human and Natural Sciences. A Rock in the Pond; Ulivi, L.U., Ed.; Springer: New York, NY, USA, 2019; pp. 1–39. [Google Scholar]
  10. Minati, G.; Pessa, E. Collective Beings; Springer: New York, NY, USA, 2006. [Google Scholar]
  11. Minati, G.; Penna, M.P.; Pessa, E. Thermodynamic and Logical Openness in General Systems. Syst. Res. Behav. Sci. 1998, 15, 131–145. [Google Scholar] [CrossRef]
  12. Licata, I. Logical openness in cognitive models. Epistemologia 2008, 31, 177–191. [Google Scholar]
  13. Zhou, Z.-H. Ensemble Methods: Foundations and Algorithms; CRC Press: Boca Raton, FL, USA, 2012. [Google Scholar]
  14. Vincent, T.L. Evolutionary Game Theory, Natural Selection, and Darwinian Dynamics; Cambridge University Press: Cambridge, UK, 2012. [Google Scholar]
  15. Minati, G.; Pessa, E. From Collective Beings to Quasi-Systems; Springer: New York, NY, USA, 2018. [Google Scholar]
  16. Minati, G. Knowledge to Manage the Knowledge Society: The Concept of Theoretical Incompleteness. Systems 2016, 4, 26. Available online: https://pdfs.semanticscholar.org/3240/93c48a679dd6e5d5dc4ea1129c17beed46ae.pdf (accessed on 4 August 2020). [CrossRef] [Green Version]
  17. Minati, G.; Abram, M.; Pessa, G. (Eds.) Systemics of Incompleteness and Quasi-Systems; Springer: New York, NY, USA, 2019. [Google Scholar]
  18. Minati, G.; Licata, I. Emergence as Mesoscopic Coherence. Systems 2013, 1, 50–65. [Google Scholar] [CrossRef]
  19. Minati, G. Big Data: From Forecasting to Mesoscopic Understanding. Meta-Profiling as Complex Systems. Systems 2019, 7, 8. [Google Scholar] [CrossRef] [Green Version]
  20. Minsky, M. The Society of Mind; Simon & Schuster: New York, NY, USA, 1986. [Google Scholar]
  21. Moravec, H. Mind Children: The Future of Robot and Human Intelligence; Harvard University Press: Cambridge, MA, USA, 1988. [Google Scholar]
  22. Vernon, V. Artificial Cognitive Systems: A Primer; MIT Press: Cambridge, MA, USA, 2014. [Google Scholar]
  23. Peddemors, A.; Niemegeers, I.; Eertink, E.; de Heer, J. A System Perspective on Cognition for Autonomic Computing and Communication. In Proceedings of the 16th International Workshop on Database and Expert Systems Applications (DEXA’05), Copenhagen, Denmark, 22–26 August 2005; pp. 181–185. [Google Scholar] [CrossRef]
  24. Jibu, M.; Yasue, K. Quantum Brain Dynamics and Consciousness: An Introduction; Benjamins: Amsterdam, The Netherlands, 1995. [Google Scholar]
  25. Vitiello, G. Dissipation and memory capacity in the quantum brain model. Int. J. Mod. Phys. B 1995, 9, 973–989. [Google Scholar] [CrossRef]
  26. Vitiello, G. My Double Unveiled; Benjamins: Amsterdam, The Netherlands, 2001. [Google Scholar]
  27. Diettrich, O. A Physical Approach to the Construction of Cognition and to Cognitive Evolution. Found. Sci. 2001, 6, 273–341. [Google Scholar] [CrossRef]
  28. Benjafield, J.G. Cognition; Prentice-Hall: Englewood Cliffs, NJ, USA, 1992. [Google Scholar]
  29. Newell, A. Unified Theory of Cognition; Harvard University Press: Cambridge, MA, USA, 1990. [Google Scholar]
  30. Fetzer, J.H. Computers and Cognition: Why Minds are not Machines; Kluwer: Dordrecht, The Netherlands, 2001. [Google Scholar]
  31. Pessa, E. Quantum connectionism and the emergence of cognition. In Brain and Being. At the Boundary between Science, Philosophy, Language and Arts; Globus, G.G., Pribram, K.H., Vitiello, G., Eds.; Benjamins: Amsterdam, The Netherlands, 2004; pp. 127–145. [Google Scholar]
  32. Pylyshyn, Z.W. Computation and Cognition; MIT Press: Cambridge, MA, USA, 1984. [Google Scholar]
  33. Arecchi, F.T. Cognition and language: From apprehension to judgment-quantum conjectures. In Chaos, Information Processing and Paradoxical Games; Nicolis, G., Basios, V., Eds.; World Scientific: Singapore, 2014; pp. 319–343. [Google Scholar]
  34. Varela, F.; Thompson, E.; Rosch, E. The Embodied Mind: Cognitive Science and Human Experience; MIT Press: Cambridge, MA, USA, 1991. [Google Scholar]
  35. Wilson, M. Six views of embodied cognition. Psychon. Bull. Rev. 2002, 9, 625–636. [Google Scholar] [CrossRef]
  36. Wilson, R.A.; Foglia, L. Embodied Cognition. In The Stanford Encyclopedia of Philosophy; Edward, N.Z., Ed.; 2017; Available online: https://plato.stanford.edu/archives/spr2017/entries/embodied-cognition (accessed on 4 August 2020).
  37. Caramazza, A.; Anzellotti, S.; Strnad, L.; Lingnau, A. Embodied cognition and mirror neurons: A critical assessment. Annu. Rev. Neurosci. 2014, 37, 1–15. [Google Scholar] [CrossRef] [Green Version]
  38. Joseph, R. Development of Consciousness: Brain, Mind, Cognition, Memory, Language, Social Skills, Sex Differences & Emotion; University Press: New York, NY, USA, 2011. [Google Scholar]
  39. Edelman, G. The Remembered Present: A Biological Theory of Consciousness; Basic Books: New York, NY, USA, 1990. [Google Scholar]
  40. Wallach, W.; Allen, C. Moral Machines: Teaching Robots Right from Wrong; Oxford University Press: Oxford, UK, 2009. [Google Scholar]
  41. Chella, A.; Manzotti, R. Machine consciousness: A manifesto for robotics. Int. J. Mach. Conscious. 2009, 1, 33–51. [Google Scholar] [CrossRef]
  42. Chella, A.; Manzotti, R. AGI and Machine Consciousness. Theor. Found. Artif. Gen. Intell. 2012, 4, 263–282. [Google Scholar]
  43. Clowes, R.; Torrance, S.; Chrisley, R. Machine Consciousness. J. Conscious. Stud. 2007, 14, 7–14. [Google Scholar]
  44. Holland, O. Machine Consciousness; Imprint Academic: Exeter, UK, 2003. [Google Scholar]
  45. Holland, O. The Future of Embodied Artificial Intelligence: Machine Consciousness? In Embodied Artificial Intelligence; Iida, F., Pfeifer, R., Steels, L., Kuniyoshi, Y., Eds.; Springer Nature: Berlin, Germany, 2003; pp. 37–53. [Google Scholar]
  46. Mccarthy, J. Making Robots Conscious of their Mental States. In Machine Intelligence; Muggleton, S., Ed.; Oxford University Press: Oxford, UK, 1995; pp. 1–39. [Google Scholar]
  47. Buttazzo, G.; Manzotti, R. Artificial consciousness: Theoretical and practical issues. Artif. Intell. Med. 2008, 44, 79–82. [Google Scholar] [CrossRef]
  48. Chella, A.; Manzotti, R. (Eds.) Artificial Consciousness; Imprint Academic: Exeter, UK, 2007. [Google Scholar]
  49. Aleksander, I.; Awret, U.; Bringsjord, S.; Chrisley, R.; Clowes, R.; Parthermore, J.; Stuart, S. Assessing Artificial Consciousness. J. Conscious. Stud. 2008, 15, 95–110. [Google Scholar]
  50. Manzotti, R.; Tagliasco, V. Artificial consciousness: A discipline between technological and theoretical obstacles. Artif. Intell. Med. 2008, 44, 105–117. [Google Scholar] [CrossRef]
  51. Taylor, J.G. CODAM: A neural network model of consciousness. Neural Netw. 2007, 20, 983–992. [Google Scholar] [CrossRef]
  52. Tononi, G. An information integration theory of consciousness. BMC Neurosci. 2004, 5, 42. Available online: https://bmcneurosci.biomedcentral.com/articles/10.1186/1471-2202-5-42#citeas (accessed on 4 August 2020). [CrossRef] [Green Version]
  53. Pessa, E. Cognitive modelling and dynamical systems theory. Nuova Crit. 2000, 35, 53–93. [Google Scholar]
  54. Von Neumann, H. Mechanisms of neural architecture for visual contrast and brightness perception. Neural Netw. 1996, 9, 921–936. [Google Scholar] [CrossRef]
  55. Olmstead, W.E.; Davis, S.H.; Rosenblat, S.; Kath, W.L. Bifurcation with memory. Siam J. Appl. Math. 1986, 46, 171–188. [Google Scholar] [CrossRef]
  56. Ratcliff, R. Connectionist models of recognition memory: Constraints imposed by learning and forgetting functions. Psychol. Rev. 1990, 97, 285–308. [Google Scholar] [CrossRef] [PubMed]
  57. Sarnthein, J.; Petsche, H.; Rappelsberger, P.; Shaw, G.L.; Von Stein, A. Synchronization between prefrontal and posterior association cortex during human working memory. Proc. Natl. Acad. Sci. USA 1998, 95, 7092–7096. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  58. Anderson, J.R. The Architecture of Cognition; Harvard University Press: Cambridge, MA, USA, 1983. [Google Scholar]
  59. Anderson, J.R.; Lebiere, C. The Atomic Components of Thought; Erlbaum: Hillsdale, NJ, USA, 1999. [Google Scholar]
  60. Lee, M.D. Bayesian Cognitive Modeling; Cambridge University Press: Cambridge, UK, 2014. [Google Scholar]
  61. Bargh, J. Before You Know It: The Unconscious Reasons We Do What We Do; Atria Books: New York, NY, USA, 2019. [Google Scholar]
  62. Tulving, E. How many memory systems are there? Am. Psychol. 1985, 40, 385–398. [Google Scholar] [CrossRef]
  63. Reyna, V.F.; Brainerd, C.J. Fuzzy Memory and Mathematics in the Classroom. In Memory in Everyday Life; Davies, G.M., Logie, R.M., Eds.; North-Holland: Amsterdam, The Netherlands, 2011; pp. 91–119. [Google Scholar]
  64. Lewis, T.G. Network Science: Theory and Applications; Wiley: Hoboken, NJ, USA, 2009. [Google Scholar]
  65. Valente, T.W. Network interventions. Science 2012, 337, 49–53. [Google Scholar] [CrossRef]
  66. Kellerman, H. The Unconscious Domain; Springer: New York, NY, USA, 2020. [Google Scholar]
  67. Legrand, D.; Trigg, D. Unconsciousness between Phenomenology and Psychoanalysis; Springer: New York, NY, USA, 2017. [Google Scholar]
  68. Addis, D.R.; Barense, M.; Duarte, A. (Eds.) The Wiley Handbook on the Cognitive Neuroscience of Memory; Wiley: Chichester, UK, 2015. [Google Scholar]
  69. Estrada, E. The Structure of Complex Networks: Theory and Applications; Oxford University Press: Oxford, UK, 2016. [Google Scholar]
  70. Cohen, R.; Havlin, S. Complex Networks: Structure, Robustness and Function; Cambridge University Press: Cambridge, UK, 2010. [Google Scholar]
  71. Minati, G. General System(s) Theory 2.0: A brief outline. Towards a Post-Bertalanffy Systemics. In Proceedings of the Sixth National Conference of the Italian Systems Society, Rome, Italy, 21–22 November 2014; Minati, G., Abram, M., Pessa, E., Eds.; Springer: New York, NY, USA, 2016; pp. 211–219. [Google Scholar]
  72. Nagarajan, V.; Sorin, D.J.; Hill, M.D.; Wood, D.A. A Primer on Memory Consistency and Cache Coherence (Synthesis Lectures on Computer Architecture); Morgan & Claypool Publishers: San Rafael, CA, USA, 2020. [Google Scholar]
  73. Kohonen, T. Associative Memory, Content Addressing, and Associative Recall; Springer: Berlin/Heidelberg, Germany, 1987. [Google Scholar]
  74. Kohonen, T. Self-Organization and Associative Memory; Springer: Heidelberg, Germany, 1989. [Google Scholar]
  75. Dunlosky, J.; Metcalfe, J. Metacognition; Sage: Thousand Oaks, CA, USA, 2009. [Google Scholar]
  76. Byrne, J.H. Learning and Memory: A Comprehensive Reference; Academic Press-Elsevier: Cambridge, MA, USA, 2017. [Google Scholar]
  77. Domhoff, G.W. The Emergence of Dreaming: Mind-Wandering, Embodied Simulation, and the Default Network; Oxford University Press: New York, NY, USA, 2018. [Google Scholar]
  78. Kelley, T.D. Robotic Dreams: A Computational Justification for the Post-Hoc Processing of Episodic Memories. Int. J. Mach. Conscious. 2014, 6, 109–123. [Google Scholar] [CrossRef]
  79. Minati, G.; Vitiello, G. Mistake Making Machines. In Systemics of Emergence: Applications and Development; Minati, G., Pessa, E., Abram, M., Eds.; Springer: New York, NY, USA, 2006; pp. 67–78. [Google Scholar]
  80. Revonsuo, A.; Tuominen, J.; Valli, K. The Simulation Theories of Dreaming: How to Make Theoretical Progress in Dream Science—A Reply to Martin Dresler. In Open MIND: Philosophy and the Mind Sciences in the 21st Century; Metzinger, T., Windt, J.M., Eds.; MIT Press: Cambridge, MA, USA, 2016; pp. 1341–1348. [Google Scholar]
  81. Da Lio, M.; Mazzalai, A.; Windridge, D.; Thill, S.; Svensson, H.; Yüksel, M.; Gurney, K.; Saroldi, A.; Andreone, L.; Anderson, S.R.; et al. Exploiting dream-like simulation mechanisms to develop safer agents for automated driving: The “Dreams4Cars”. In Proceedings of the 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), Yokohama, Japan, 16–19 October 2017; pp. 1–6. Available online: https://ieeexplore.ieee.org/document/8317649 (accessed on 4 August 2020).
  82. Turing, A. Computing Machinery and Intelligence. Mind 1950, 59, 433–460. [Google Scholar] [CrossRef]
  83. Cooper, S.B.; Soskova, M.I. (Eds.) The Incomputable: Journeys Beyond the Turing Barrier; Springer: New York, NY, USA, 2017. [Google Scholar]
  84. Moor, J.H. (Ed.) The Turing Test: The Elusive Standard of Artificial Intelligence; Springer: Dordrecht, The Netherlands, 2003. [Google Scholar]
  85. Følstad, A.; Araujo, T.; Papadopoulos, S.; Law, E.L.-C.; Granmo, O.C.; Luger, E.; Brandtzaeg, P.B. (Eds.) Chatbot Research and Design: Third International Workshop, CONVERSATIONS 2019, Amsterdam, The Netherlands, 19–20 November 2019; Springer: Berlin/Heidelberg, Germany, 2020. [Google Scholar]
  86. Tascini, G. AI-Chatbot Using Deep Learning to Assist the Elderly. In Systemics of Incompleteness and QUASI-Systems; Minati, G., Abram, M., Pessa, E., Eds.; Springer: New York, NY, USA, 2019; pp. 317–323. [Google Scholar]
  87. Deng, L.; Li, X. Machine Learning Paradigms for Speech Recognition: An Overview. IEEE Trans. Audio Speech Lang. Process. 2013, 21, 1060–1089. [Google Scholar] [CrossRef]
  88. Cancel, D.; Gerhardt, D. Conversational Marketing: How the World’s Fastest Growing Companies Use Chatbots to Generate Leads 24/7/365 (and How You Can Too); Wiley: Hoboken, NJ, USA, 2019. [Google Scholar]
  89. Acosta, M.; Cudré-Mauroux, P.; Maleshkova, M.; Pellegrini, T.; Sack, H.; Sure-Vetter, Y. (Eds.) Semantic Systems. The Power of AI and Knowledge Graphs. In Proceedings of the 15th International Conference, SEMANTiCS 2019, Karlsruhe, Germany, 9–12 September 2019; Springer: New York, NY, USA, 2019. [Google Scholar]
  90. Soare, R.I. Turing oracle machines, online computing, and three displacements in computability theory. Ann. Pure Appl. Log. 2009, 160, 368–399. [Google Scholar] [CrossRef] [Green Version]
  91. Goodfellow, I.; Bengio, Y.; Courville, A.; Bach, F. Deep Learning; MIT Press: Cambridge, MA, USA, 2017. [Google Scholar]
  92. Aggarwal, C.C. Neural Networks and Deep Learning: A Textbook; Springer: New York, NY, USA, 2018. [Google Scholar]
  93. Kelleher, J.D. Deep Learning; MIT Press: Cambridge, MA, USA, 2019. [Google Scholar]
  94. Bianchi, F.M.; Maiorino, E.; Kampffmeyer, M.C.; Rizzi, A.; Jenssen, R. Recurrent Neural Networks for Short-Term Load Forecasting: An Overview and Comparative Analysis; Springer: New York, NY, USA, 2017. [Google Scholar]
  95. Yahyapour, R. (Ed.) Euro-Par 2019: Parallel Processing: 25th International Conference on Parallel and Distributed Computing; Springer: New York, NY, USA, 2019. [Google Scholar]
  96. Pophale, S.; Neena, I.; Aderholdt, F.; Venkata, M.G. (Eds.) Open SHMEM and Related Technologies; Springer: New York, NY, USA, 2019. [Google Scholar]
  97. Su, R.Q.; Wang, W.X.; Lai, Y.C. Detecting hidden nodes in complex networks from time series. Phys. Rev. E Stat. Nonlinear Soft Matter Phys. 2012, 85, 065201. [Google Scholar] [CrossRef] [Green Version]
  98. Ma, Y.; Cheng, G.; Liu, Z.; Xie, F. Fuzzy nodes recognition based on spectral clustering in complex networks. Phys. A Stat. Mech. Appl. 2017, 465, 792–797. [Google Scholar] [CrossRef]
  99. Horvath, S. Weighted Network Analysis: Applications in Genomics and Systems Biology; Springer: New York, NY, USA, 2011. [Google Scholar]
  100. Schmidt, J.T. Self-Organizing Neural Maps: The Retinotectal Map and Mechanisms of Neural Development: From Retina to Tectum; Academic Press: London, UK, 2019. [Google Scholar]
  101. Aggarwal, C.C.; Reddy, C.K. Data Clustering: Algorithms and Applications; CRC Press: Boca Raton, FL, USA, 2013. [Google Scholar]
  102. Everitt, B.S.; Landau, S.; Leese, M.; Stahl, D. Cluster Analysis; Wiley: Chichester, UK, 2011. [Google Scholar]
  103. Mirkin, B. Clustering: A Data Recovery Approach; CRC Press: Boca Raton, FL, USA, 2012. [Google Scholar]
  104. Christen, P. Data Matching: Concepts and Techniques for Record Linkage, Entity Resolution, and Duplicate Detection; Springer: New York, NY, USA, 2014. [Google Scholar]
  105. Hair, J.F., Jr.; Black, W.C. Multivariate Data Analysis; Pearson: Harlow, UK, 2013. [Google Scholar]
  106. Freeman, W. Neurodynamics: An Exploration in Mesoscopic Brain Dynamics; Springer: London, UK, 2000. [Google Scholar]
  107. Shevlyakov, G.L.; Oja, H. Robust Correlation: Theory and Applications; Wiley: Chichester, UK, 2013. [Google Scholar]
  108. Neto, J.P.G.; Siegelmann, H.T.; Costa, J.F. Symbolic Processing in Neural Networks. J. Braz. Comput. Soc. 2003, 8, 58–70. [Google Scholar] [CrossRef] [Green Version]
  109. Erl, T.; Puttini, R.; Mahmood, Z. Cloud Computing: Concepts, Technology and Architecture; Prentice Hall: New York, NY, USA, 2013. [Google Scholar]
  110. Licata, I.; Minati, G. Emergence, Computation and the Freedom Degree Loss Information Principle in Complex Systems. Found. Sci. 2016, 21, 1–19. [Google Scholar] [CrossRef]
  111. Forrest, S. Emergent Computation; MIT Press: Cambridge, MA, USA, 1990. [Google Scholar]
  112. Cattani, C.; Srivastava, H.M.; Yang, X.-J. (Eds.) Fractional Dynamics; De Gruyter Open Ltd.: Warsaw/Berlin, Germany, 2015; Available online: https://www.degruyter.com/view/title/518543 (accessed on 20 November 2020).
  113. Du, M.; Wang, Z.; Hu, H. Measuring memory with the order of fractional derivative. Sci. Rep. 2013, 3, 3431. [Google Scholar] [CrossRef]
  114. Martínez-García, M.; Zhang, Y.; Gordon, T. Memory Pattern Identification for Feedback Tracking Control in Human-Machine Systems. Hum. Factors 2019. [Google Scholar] [CrossRef] [Green Version]
  115. Martínez-García, M.; Kalawsky, R.S.; Gordon, T.; Smith, T.; Meng, Q.; Flemisch, F. Communication and Interaction with Semiautonomous Ground Vehicles by Force Control Steering. IEEE Trans. Cybern. 2020. [Google Scholar] [CrossRef]
  116. Dixon, N.F. Preconscious Processing; Wiley: Hoboken, NJ, USA, 1982. [Google Scholar]
Table 1. A schematic list of advantages and disadvantages of using artificial intelligence (AI) systems equipped with artificial unconscious.
AI Systems Used with Their Artificial Unconscious Generated by Usage have Intrinsic Learning and Conservative Attitudes
Advantages/Disadvantages
Answers and proposals are expected to implicitly replicate previous assumptions and approaches
Apply and reproduce implicit styles
Breaking equivalences in decisions, though in similar ways
Experimental usage of the AI system by forcing some changes, such as varying its current parameters, the network properties among memorizations, and the usage of fictitious links
Advantages/Disadvantages
The networked answers to questions are supposed to provide multiple options presenting new scenarios
The networked answers to questions allow for refining of the question
Emergence of logically non-deducible options
Unusual breaking of equivalences in decisions
Experimental usage of the AI system by activating–deactivating their artificial unconscious
Advantages/Disadvantages
It is possible to proceed with activation–deactivation sequences to compare the on-going results
It is possible to experiment with interactions among different kinds of AI systems provided with different versions of artificial unconscious to consider properties emergent from populations constituted by them
Table 2. A summarizing conceptual abstract of the article.
Application Examples: Artificial unconscious-based chatbots; Internet profiling and browsing activities; use of localized Internet activities, e.g., for psychological and sociological laboratory experiments.

Possible Technologies for Artificial Unconscious-Based AI Systems: Long short-term memory for deep learning; fractional calculus; parallel processing; networks; clustering; correlational analysis; emergent computation.

Conjectures for an Artificial Unconscious: Artificial metamemory, self-acquired properties, dreamed memory. Unconscious-free machine learning is equivalent to accepting a tabula rasa-like context.

Unconscious: The unconscious in psychoanalysis (Freud), considered by Minsky, is related to acquired properties of memory (removed memorizations, ignored past, evaporated past, residue, the implicit, the unused, invented past, weak memorizations).

Memory: Memorizations as part of cognitive processing. Thinking in the same way as reminding, and reminding in the same way as thinking. Metamemory and dreamed memory. Remembering is not reducible to finding a memorization within a memory-deposit, but is rather understood as reconstruction.

Cognitive Systems: Intended as a complete system of interactions among activities that cannot be separated from one another, such as those related to attention, perception, language, the affective–emotional sphere, memory, and the inferential system.

Systemic Background: Self-organization; emergence; logical openness; multiple systems; incompleteness; mesoscopic level of representation; profiling.
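The view of memory summarized above — remembering as reconstruction over weighted, fuzzified memorizations rather than retrieval from a memory-deposit — can be illustrated with a minimal sketch. The function name `fuzzy_recall`, the link representation, and the attenuation factor are hypothetical choices made here, under the assumption that recall blends direct and second-order neighbours of a cue instead of looking up a single stored item:

```python
from collections import defaultdict

def fuzzy_recall(links, cue, decay=0.5):
    """Reconstructive recall over a weighted memorization network.

    links: dict mapping (node_a, node_b) -> intensity.
    Returns candidate memorizations sorted by reconstructed strength."""
    scores = defaultdict(float)
    # Direct neighbours of the cue contribute at full link intensity.
    for (a, b), w in links.items():
        if cue == a:
            scores[b] += w
        elif cue == b:
            scores[a] += w
    # Second-order reconstruction: neighbours of neighbours leak in,
    # attenuated, so weak or residual memorizations still contribute.
    for n, wn in list(scores.items()):
        for (a, b), w in links.items():
            if n == a and b != cue:
                scores[b] += decay * wn * w
            elif n == b and a != cue:
                scores[a] += decay * wn * w
    scores.pop(cue, None)
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

With links {("sea", "beach"): 1.0, ("beach", "sand"): 0.8, ("sea", "ship"): 0.3}, recalling "sea" reconstructs "sand" (never directly linked to the cue) with strength 0.4, ahead of the weakly linked "ship".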

Share and Cite

Minati, G. Complex Cognitive Systems and Their Unconscious. Related Inspired Conjectures for Artificial Intelligence. Future Internet 2020, 12, 213. https://doi.org/10.3390/fi12120213

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
