Perspective

Uncharted Aspects of Human Intelligence in Knowledge-Based “Intelligent” Systems

1 Consortium for the Advancement of Cognitive Science, College of Arts and Sciences, Psychology Department, Ohio University, Athens, OH 45701, USA
2 Department of Psychology and Education, Columbus State Community College, Columbus, OH 43215, USA
* Author to whom correspondence should be addressed.
Philosophies 2022, 7(3), 46; https://doi.org/10.3390/philosophies7030046
Submission received: 3 December 2021 / Revised: 5 April 2022 / Accepted: 11 April 2022 / Published: 19 April 2022

Abstract

This paper briefly surveys several prominent modeling approaches to knowledge-based intelligent systems (KBIS) design and, especially, expert systems and the breakthroughs that have most broadened and improved their applications. We argue that the implementation of technology that aims to emulate rudimentary aspects of human intelligence has enhanced KBIS design, but that weaknesses remain that could be addressed with existing research in cognitive science. For example, we propose that systems based on representational plasticity, functional dynamism, domain specificity, creativity, and concept learning, with their theoretical and experimental rigor, can best characterize the problem-solving capabilities of humans and can best overcome five key limitations currently exhibited by knowledge-based intelligent systems. We begin with a brief survey of the relevant work related to KBIS design and then discuss these five shortcomings with new suggestions for how to integrate results from cognitive science to resolve each of them. Our ultimate goal is to increase awareness and direct attention to areas of theoretical and experimental cognitive research that are fundamentally relevant to the goals underlying KBISes.

1. Introduction

We begin by distinguishing between the following three types of systems: (1) knowledge-based systems (KBS), (2) knowledge-based intelligent systems (KBIS), and (3) expert systems (ES). A knowledge-based system (KBS) is a program that solves problems by acting on a database of “facts”, either explicitly provided by human experts or, via implicit patterns, extractable from that database. If the focus of the KBS is on the process of extracting “hidden” information in a way that may be construed as “intelligent”, then the KBS is referred to as a “knowledge-based intelligent system” (KBIS). On the other hand, expert systems are knowledge-based systems designed to operate on the knowledge provided by human experts that neither inherently nor necessarily require any intelligent process but can incorporate intelligence (particularly, human-like intelligence, which we will argue is ideal). KBISes became prominent in the AI community after two great ideas in the history of AI research seemed to have lost some of their original impetus. The first of these was an extension of the early work on artificial neural networks (ANNs) of McCulloch and Pitts [1], who introduced a simple model of a neuron. Rosenblatt [2] extended this model by showing how a learning algorithm could adjust the connection strengths of a simple three-layer network he called a “perceptron”. However, this development was not sufficient to address the two main objections to the ANN approach: (1) it was only useful in solving toy problems (i.e., not real-world problems; for more recent examples of ANN applications to expert systems design and theory see [3,4,5,6]) and (2) its low-level nature would make it immensely complicated to obtain systems with the kind of robustness, flexibility, and generality displayed by the human mind.
The second disappointment came from the work of Newell and Simon [7,8] who attempted to build a generalized problem solver (GPS). Their technique was based on what is known as the means–ends analysis. The basic idea was that problem solving could be represented as a process involving states: the present state and the desirable state. At each stage of the process, the difference between the present and the desirable states determined the choice of an operator for minimizing the resulting difference. A description of the problem-solving process was equivalent to a proof of the solution. However, because the program was based on painstakingly difficult logical representations that also required exorbitant memory resources and computing time resources, it proved to be intractable and implausible for anything but the simplest of problems.
Both perceptrons and the GPS were attempts at general, domain-independent problem solving. In other words, the idea behind these models was that a simple set of rules or mechanisms could yield solutions to a wide variety of problems. As it turns out, given the computational limitations of the time, this goal seemed ever more elusive. It was about this time in the early 1970s that a new paradigm for artificial intelligence emerged. When the general-purpose problem-solving approaches of the aforementioned researchers fell short of expectations, researchers began to realize that by restricting problem solving to specific domains of high-level knowledge, one may be able to achieve practical progress. This paradigm shift which favored knowledge (or memory) intensive methods over the algorithmic (computational) generality of earlier methods was first introduced in programs such as DENDRAL [9,10] and MYCIN [11].
DENDRAL emulated the expertise of a chemist by implementing problem-solving heuristics in the form of high-level rules (rules of the form “if $x_1 \wedge x_2 \wedge \cdots \wedge x_n$ then $Y$”, called production rules, where the $x_i$ and $Y$ are statements) applied to a database of facts elicited from an expert in chemistry. The purpose of DENDRAL was to identify chemical compounds from their spectral signatures. Similarly, MYCIN emulated the expertise of doctors in diagnosing blood diseases. In fact, the commercial success of these two programs encouraged the development of a great number of expert systems that have gone far beyond the simple production-rules paradigm.
In this paper, we discuss the major developments in KBIS design that have led to the current generation of expert systems. An expert system is a knowledge-based system that can be designed to emulate decisions of a human expert. This discussion is confined to these types of intelligent expert systems because, when it comes to cognitive research and the emulation of human intelligence (i.e., cognitive processes associated with problem solving, decision-making, concept learning, perception, etc.), expert systems have, to date, been the least effective among the various knowledge representations (e.g., decision trees and Bayesian belief networks). Furthermore, because the nature of these systems is one that emphasizes symbolic processing, there is a natural connection to language as the highest level of processing in the cognitive hierarchy, where the lower-level cognitive capacities shape or determine such symbolic constructs as facilitators of human communication. Admittedly, numerous AI techniques developed to improve KBISes have also been used to improve expert systems [12]. Where relevant, we briefly discuss those techniques. Specifically, this paper examines the key techniques and approaches that have influenced the development of intelligent expert systems and KBISes over the past four decades in an attempt to reveal the various notions of intelligence on which they are based. We will argue that although great progress has been made in the development of knowledge-based intelligent systems, this progress has been hampered by neglecting some of the most fundamental and majestic facets of human intelligence. These aspects of intelligence, as we discuss at greater length in the following sections, refer to the capacities of the human mind studied extensively in cognitive science and psychology that are regarded as characteristic of intelligence, specifically in relation to humans as opposed to other species of animals. From a more functional and primitive perspective, intelligence, as a cognitive construct, has come to be understood as the suite of mental capacities that facilitate the survival of individual human observers and the human species given the challenges they face in their environment. Some examples of these capacities include the ability of the human perceptual system to efficiently distribute attentional resources, the ability to collapse an immense range of available perceptual information into generalized conceptual representations, the ability to reason linguistically, the adaptive and creative capabilities of human problem solvers, and the efficient heuristics that may guide human judgments and decision-making. It should be noted that in cognitive science and psychology, decision-making and problem solving (capacities often emphasized in AI research) are regarded as higher-order capacities that are dependent upon lower-order capacities, namely, the other capacities mentioned above [13,14,15]. By giving more attention to these uncharted aspects of intelligence and the wealth of research dedicated to their study in cognitive science and psychology, KBIS development may be enriched considerably and better approximate the complex capabilities of human cognition.

2. Aspects of Intelligence

In knowledge-based AI, there are two fundamental assumptions about what makes a system intelligent: the system must be able to solve nontrivial problems and, if the goal is to emulate human intelligence, it must be able to solve these problems in ways that are analogous to the ways humans solve them [16,17,18]. The fact that humans are ill-equipped to solve certain types of problems (e.g., calculating √π to 100 figures) is not significant under this view. Indeed, under this view, the focus is on handling complex but generalizable problems with heuristic, efficient, and creative solutions. AI designed to draw inspiration from humanlike processes may produce outputs that human agents can interpret more readily and that serve human needs more effectively when comprehensive, exact algorithmic solutions are impractical.
With these two assumptions in mind, the evolution of expert system design shows gradual improvements fueled by disparities between a system’s performance and human performance on the same problem-solving tasks. Each performance disparity has been the result of some aspect of human intelligence being ignored. In some cases, technology and a lack of know-how could be blamed for the omission. In other cases, a system’s performance was good enough to merit no additional modifications. Since many major developments in expert systems design were reactions to a notable discrepancy between natural (human) intelligence and artificial intelligence, this analysis is organized according to these discrepancies. We propose that these discrepancies fall along four lines, henceforth referred to as the problems of (1) uncertainty, (2) organization, (3) ambiguity, and (4) adaptability.

2.1. Production Rules and Inductive Inference Systems

Among the concerns of early expert systems designers was the ability to solve problems at a very high level. This meant an ability to solve problems using knowledge in the form of natural language statements. This became a reality with the first generation of rule-based intelligent expert systems. Intelligence in these early systems was construed as a very high-level process associated with language and logical inference.
The first generation of knowledge-based systems was based on what is known as production rules. These systems emerged around the 1970s when it became clear that for an artificial system to solve problems in a human-like fashion, the system would have to be constrained to a specific knowledge domain.
The presupposition was that since it would take much too long to discover how one could represent the whole of mental processing in algorithmic terms, the next best thing would be a very high-level representation of the thinking process and, by extension, the process of making decisions. The design of these systems presumes that human experts have specific knowledge or facts about their domain of expertise stored in memory, and by applying inference rules to this storage source, humans can organize, identify, and generate new information to solve problems. Figure 1 displays a possible architecture of a production rule-based expert system.
The system is composed of a database of facts or clauses pertaining to a given domain, a knowledge base of production inference rules that could be applied to those facts or clauses, an inference engine that actually applies the rules to the facts, an explanation engine that lets the user know how an answer was obtained, a user interface, and a developer interface. The user interface allows the user to communicate with the system. The developer interface allows the expert and the knowledge engineer to modify the program, the database of facts, or the knowledge base as the need arises.
As mentioned, the production rules in the knowledge base are simple logical rules of the form $x_1 \wedge x_2 \wedge \cdots \wedge x_n \rightarrow Y$. One possible interpretation of this expression is that if the subgoals $x_1, x_2, \ldots, x_n$ are met, then the goal $Y$ is met. We distinguish these conjunctive rules from disjunctive rules of the form $x_1 \vee x_2 \vee \cdots \vee x_n \rightarrow Y$. The first type of rule shall be called deterministic and the second type shall be called indeterministic in that the $2^n$ subsets of the set $\{x_1, x_2, \ldots, x_n\}$ are all possible conditions for $Y$. This is an interesting point, for it seems to suggest that even a simple knowledge-constrained system can act both probabilistically and deterministically with the right choice of rules.
As already suggested, production rules can take many specific forms. They may be directives such as “if the program crashes and no error prompt shows up, then reboot the computer” or plain relations: “if the program crashes then there is insufficient memory”. Furthermore, they may be recommendations such as “if the program is expensive and the computer is cheap, then the advice is to buy a new computer,” or even heuristics such as “if the program requires at least 0.5 gigabytes of RAM, and the program runs on Windows, and the program was developed by Microsoft, then the program is Microsoft Office”.
In addition to providing expert advice, one of the most attractive features of a production-rules system is its ability to report how it came to a solution. This capability parallels the ability humans have to report their own thought processes. Another powerful feature of the first generation of expert systems is their ability to include metarules in their knowledge base. Metarules are rules about rules supplied by the expert to the knowledge base to add higher-order decision-making or problem-solving capabilities to the expert system, as in the following rule: “indeterministic rules have lower priority than deterministic ones”. In the expert systems that use this approach, metarules are usually given a higher priority by the inference engine [20].
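To make the architecture described above concrete, the following is a minimal sketch in Python of a forward-chaining inference engine over production rules, with a rudimentary explanation trace and a metarule that gives deterministic (conjunctive) rules priority over indeterministic (disjunctive) ones. The sketch is not drawn from any particular expert-system shell; the rule names, facts, and priority scheme are invented for illustration.

```python
# Minimal forward-chaining production-rule engine (illustrative sketch only).
# A rule fires when its conditions are satisfied by the fact base; the
# explanation log records which rule produced which conclusion.

class Rule:
    def __init__(self, name, conditions, conclusion, deterministic=True):
        self.name = name
        self.conditions = conditions        # condition statements
        self.conclusion = conclusion
        self.deterministic = deterministic  # True: conjunctive; False: disjunctive

    def satisfied_by(self, facts):
        if self.deterministic:
            return all(c in facts for c in self.conditions)  # x1 AND ... AND xn
        return any(c in facts for c in self.conditions)      # x1 OR ... OR xn


def forward_chain(facts, rules):
    facts = set(facts)
    explanation = []
    # Metarule: deterministic rules are tried before indeterministic ones.
    agenda = sorted(rules, key=lambda r: not r.deterministic)
    changed = True
    while changed:
        changed = False
        for rule in agenda:
            if rule.conclusion not in facts and rule.satisfied_by(facts):
                facts.add(rule.conclusion)
                explanation.append(
                    f"{rule.conclusion!r} derived by {rule.name} from {rule.conditions}")
                changed = True
    return facts, explanation


rules = [
    Rule("R1", ["program crashes", "no error prompt"], "reboot the computer"),
    Rule("R2", ["program crashes"], "insufficient memory", deterministic=False),
]
derived, why = forward_chain(["program crashes", "no error prompt"], rules)
print(derived)
print("\n".join(why))
```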
Over the last decade, researchers have attempted to integrate symbolic models based on production rules with deep-learning neural networks (DNNs; see [21], for a review pertaining to system health management). DNNs offer an extended capacity for feature extraction given vast and complex datasets for which a large parameter space is needed to generate accurate outputs. DNNs have been effectively applied to pattern, image, and speech recognition as well as recommendation systems in a broad array of arenas (including prominent companies such as Apple, Google, and Microsoft; also see [22], for a survey of applications). However, given their complexity, scope, and potential to easily overfit data, researchers may be challenged to implement the deep-learning approach. Similarly, the computing demands of DNNs may be of concern as they grow in complexity. This shortcoming is particularly glaring when considering the development of AI analogous to human cognition, as the demands of DNNs would seem to far exceed the limitations of what is tractable to the human mind. Reductive heuristic processes that guide cognition (see [23], for an introductory overview) may be more closely approximated by simpler production rules-based expert systems.
While a production-rules system operates upon a database of facts by way of a given set of rules, other systems are designed around an alternative, inductive representation of the knowledge base. Such a system seeks to infer general rules from the instances provided in the data, often represented as attribute-value vectors associated with classes [24]. Two prominent examples are decision trees and Bayesian belief networks, and both parallel contemporary cognitive research (e.g., rational agent decision making [25], and Bayesian analysis in categorization [26]). However, these cognitively inspired approaches have not had the same level of empirical and theoretical success in research on human cognition as the approaches that we discuss in this paper (e.g., [27,28,29,30]). Indeed, one of our objectives in this paper is to draw attention toward the most robust cognitive empirical results, theoretical constructs, and models that can provide greater promise toward effectively emulating human intelligence.
Despite the successes of production-rules and inductive inference expert systems, there are several notable limitations if one examines them from the point of view of human intelligence. For one, human experts are not endowed with perfect information. Human memory can contain uncertain, incomplete, and ambiguous information organized in idiosyncratic ways not mimicked by an expert system. These shortcomings are discussed next.

2.2. Uncertainty

To mimic human intelligence, an expert system should be able to make inferences about uncertain information and include such information in its database. To accomplish this, expert systems can use rules of the form $T(e) \rightarrow T_{p}(h)$ [31], meaning that if evidence e has occurred (i.e., given the evidence that e is true), then hypothesis h has occurred with probability p. According to the popular Bayesian approach to modeling decision making, human experts can, in principle, estimate the prior probability that e has occurred given that h has occurred. This is important since the user provides evidence e, and according to that evidence, the expert system computes the posterior probability $p(h|e)$ given by Bayes’ rule as
$$p(h|e) = \frac{p(e|h)\,p(h)}{p(e|h)\,p(h) + p(e|\neg h)\,p(\neg h)}$$
Expert systems based on the Bayesian approach require as inputs probability estimates for $p(e|h)$, $p(h)$, $p(e|\neg h)$, and $p(\neg h)$. Now, these estimates are often subjective, except in rare situations (e.g., with respect to well-defined scenarios like those encountered in casino gambling [32]) when the sample space of alternatives is abundantly clear. Indeed, psychological research indicates that human probability judgments are not consistent with the rules of probability [32,33] on which Bayesian analysis is based. This means that even an expert’s assessment of conditional and posterior probabilities would not be consistent with Bayes’ rule, and an expert system built around Bayesian statistical properties may not reflect authentic human cognitive processing. Thus, such systems may be limited by failing to emulate the highly adaptable and dynamic processes by which humans contend with uncertainty (we discuss both adaptability and functional dynamism further as aspects of human intelligence in the following sections). One way of overcoming this shortcoming is to adopt the approach of Shortliffe and Buchanan [34] in MYCIN; instead of using probabilities, uncertainty is measured in terms of a subjective scale based on degree of belief. The rules take the form $T(e) \rightarrow T_{cf}(h)$, which means that if evidence e occurs, then hypothesis h follows with a certainty factor of cf.
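As a numerical illustration of the two approaches just described, the short sketch below computes a posterior with Bayes’ rule and combines two certainty factors using the commonly cited MYCIN-style formula for combining positive factors. All probability and certainty values are invented for the example.

```python
# Illustrative Bayesian update for p(h|e) and a MYCIN-style certainty-factor
# combination; all numeric inputs are invented for the example.

def posterior(p_e_given_h, p_h, p_e_given_not_h):
    """Bayes' rule: p(h|e) = p(e|h)p(h) / (p(e|h)p(h) + p(e|~h)p(~h))."""
    p_not_h = 1.0 - p_h
    numerator = p_e_given_h * p_h
    return numerator / (numerator + p_e_given_not_h * p_not_h)

def combine_cf(cf1, cf2):
    """Combine two positive certainty factors (standard MYCIN-style rule)."""
    return cf1 + cf2 * (1.0 - cf1)

# Hypothetical expert estimates for "evidence e supports hypothesis h".
print(round(posterior(p_e_given_h=0.8, p_h=0.1, p_e_given_not_h=0.2), 3))  # ~0.308
print(combine_cf(0.6, 0.4))  # 0.76
```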
On the other hand, several researchers have tried to address the shortcomings of the Bayesian approach by employing more eclectic approaches. For example, Walley [35] evaluated the Bayesian approach along with three other prominent models of uncertainty: coherent lower and upper previsions, belief functions (from Dempster–Shafer theory), and possibility measures (from fuzzy logic). These were evaluated along requisite criteria for a comprehensive model of uncertainty. Although each measure proved useful for specific types of problems, none of the measures proved adequate as a general model of uncertainty. This lack of generality has led to recent advances in modeling uncertainty [36,37,38,39,40,41] and an effort to incorporate human-simulating intelligence into the resolution of uncertainties [42]; however, the development of a general model remains an open problem. The resolution of uncertain information within KBISes may relate to the means by which information is represented and organized. Next, we discuss KBIS progressions in these arenas.

2.3. Organization

The way knowledge is organized determines ease of retrieval. For example, it is a far easier task to search for a name in an alphabetized list than it is in a scrambled list. Furthermore, the greater the amount of information to be searched through in an identification or retrieval task, the more important it is that this information be organized in an efficient manner. Marvin Minsky [43] proposed the idea of representing knowledge in terms of frames. Frames are data structures containing most of the information needed about an object, organized in terms of slots, i.e., indexed characteristics or attributes. For instance, a driver’s license is a type of frame: its slots are the name of the person, date of birth, hair color, appearance (via a picture), and so on.
Object-oriented programming (OOP) is a paradigm for software development based on the idea of a frame: in fact, the notions of “frame” and “object” share many similarities in AI. An object in OOP is a data structure that contains the procedures (called methods) that operate on the data structure [44]. This is a drastic departure from the typical paradigm of program organization where a data structure and the program that operates on it are separate entities. One could say that under OOP, objects are frames that contain behaviors or operations that act on their slots once that behavior is elicited by an external signal called a “message”.
A powerful feature of frames is that they can be organized in terms of the relation of inheritance. Classes of frames in a class hierarchy inherit the behaviors and characteristics of those classes above them in the hierarchy. This kind of arrangement has proven to be extremely efficient for both developing code and conducting queries [45]. Furthermore, there is empirical evidence which suggests that some cognitive systems such as semantic memory organize conceptual information in a similar hierarchical fashion [24,46]. With an object-oriented organization, production rules include the use of slot variables for inferring particular attributes shared by slots across the frames of certain classes. For example, a rule can take the form $C(s_1) = a \;\wedge\; C(s_2) = b \rightarrow P(C)$, which means that if slot $s_1$ of class C has attribute value a and slot $s_2$ of class C has attribute value b, then class C has property P.
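A minimal Python sketch of these ideas is given below: a frame-like class hierarchy in which a subclass inherits slots and behaviors from its parent, followed by a production rule stated over slot values in the spirit of the $C(s_1) = a \wedge C(s_2) = b \rightarrow P(C)$ form above. The class names, slots, and thresholds are invented for illustration.

```python
# Frame-like classes with inheritance: subclasses inherit slots (attributes)
# and behaviors (methods) from their parents; a rule then tests slot values.

class Vehicle:                       # parent frame
    wheels = 4
    def describe(self):
        return f"{type(self).__name__} with {self.wheels} wheels"

class SportsCar(Vehicle):            # inherits 'wheels' and 'describe'
    cylinders = 8
    top_speed_kmh = 280

def slot_rule(frame):
    # If slot 'cylinders' >= 8 and slot 'top_speed_kmh' >= 250,
    # then the frame has the property "fast".
    if getattr(frame, "cylinders", 0) >= 8 and getattr(frame, "top_speed_kmh", 0) >= 250:
        return "fast"
    return "not fast"

car = SportsCar()
print(car.describe())     # inherited behavior
print(slot_rule(car))     # property inferred from slot values
```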
More recently, much attention has been given throughout the computer sciences to the organization of knowledge by way of ontologies. In this context, an ontology serves as an objective and explicit specification of abstract conceptualizations of objects and relations with systems of axioms useful for drawing conclusions or solving problems [47]. These ambitious ontologies may promote improved efficiency in certain specialized or domain-specific KBISes where rigid programming operationalizations of a “concept” are appropriate. This approach may also prove limiting for higher-order concept hierarchies [48] wherein abstract relations cannot be readily represented at a granular level without a more adaptable paradigm or a more human-like approach to concept formation. Likewise, the advent of knowledge graphs as an extension of the ontological approach has aided the efficiency of the process of exhaustive knowledge retrieval by filtering irrelevant information [49], a process that may be further improved by incorporating deterministic models of human category learning. We discuss possible solutions to these standing problems in Section 3.5.
Although much attention has been given to the organizational structure of representations, the current approaches ignore the possibility that, at least in humans, different types of representations may be organized in different ways. For example, low-level information-processing subsystems, such as the human visual system, may convert and organize environmental data in ways that are most efficient for the task of visual perception [13]. On the other hand, language-processing subsystems may reflect the kind of abstract semantic or conceptual organization proposed above. Expert systems that integrate different types of organization schemes as a function of the nature of the allowed representations have not been developed.

2.4. Ambiguity

Experts often use ambiguous language. A biologist might say that a certain species of animal is very fast or that a cell is highly segmented. The utility of imprecise terms, or terms that represent degrees of an attribute, was not addressed effectively in the first generation of KBISes. This problem led to the application of interval arithmetic [50] and a special case of interval arithmetic referred to as fuzzy logic [51,52] to the realm of expert systems design, resulting in what are known as fuzzy experts [53]. Fuzzy experts are expert systems with rules involving linguistic-fuzzy terms such as “if the number of cylinders is large then the car is fast.” To articulate such rules, the fuzzy terms “large” and “fast” need to be defined in terms of set membership functions. This is accomplished by, for example, mapping the number of cylinders in cars to a degree of “largeness” or the top velocities of cars to a degree of “fastness” [54]. From a practical standpoint, this suggests a drastic reduction in the number of required production rules in a classical expert system [55].
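The following is a toy sketch of the membership-function idea: “large” (for cylinder counts) and “fast” (for top speeds) are mapped to degrees in [0, 1], and the rule “if the number of cylinders is large then the car is fast” is evaluated by letting the conclusion inherit the degree of its antecedent. The ramp breakpoints are arbitrary choices for illustration, not calibrated values.

```python
# Toy fuzzy membership functions and one fuzzy rule; the breakpoints are
# arbitrary choices for illustration, not calibrated values.

def degree_large(cylinders):
    """Degree to which a cylinder count is 'large' (ramp from 4 to 12)."""
    return max(0.0, min(1.0, (cylinders - 4) / 8.0))

def degree_fast(top_speed_kmh):
    """Degree to which a top speed is 'fast' (ramp from 150 to 300 km/h)."""
    return max(0.0, min(1.0, (top_speed_kmh - 150) / 150.0))

def rule_if_large_then_fast(cylinders):
    # "If the number of cylinders is large then the car is fast":
    # the conclusion inherits the degree of the antecedent.
    return degree_large(cylinders)

print(degree_large(8))              # 0.5: moderately "large"
print(rule_if_large_then_fast(12))  # 1.0: fully supports "fast"
```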
The flexibility and efficiency permitted by using fuzzy logic has significantly improved the effectiveness of expert systems in diagnostic and forecasting applications [56,57]. For example, CADIAG-2, an internal medicine expert [58,59], and EMERGE, an expert system for the analysis of chest pain, have fuzzy logic engines at their core. Moreover, in the travel industry, Instant Traveling Expert Advice (ITEA) uses an input of fuzzy facts describing weather conditions (e.g., “humidity is very high,” “wind is slightly breezy”). ITEA assigns a degree of certainty to each fact and returns a recommended activity for the user. Fuzzy expert systems have also been developed to help managers (e.g., credit evaluators and damage/risk assessors), educators, and Internet users interpret complex, and often vague, information.
These developments have paved the way towards endowing KBISes with robust natural language-processing capacities. Central to these capacities is the ability for a program to realize the same input in multiple ways so as to choose an appropriate output in context [60]. Indeed, the pragmatic nature of human communication poses a particularly difficult challenge for KBISes in the way that a system must be able to recognize the goals and expectations of the speaker in order to produce a correct output for the given context. Nonetheless, newsworthy advances in natural language processing and KBISes have led to renewed hope with respect to this challenge (for a review, see [61]). Advances in natural language processing have also led to the personal assistant application Siri that has become a huge commercial success for Apple’s iOS. Siri constantly uses what it knows about someone (e.g., their prior search/call history) and their location (using the phone’s map application) to provide context for interpreting ambiguous speech input. Siri excels not only in the processing of natural language, but also in its production of language. Contributing to its commercial success is the way in which Siri engages in a conversation with the user in a way that far outperforms previous generations of natural language processors (e.g., an automatic bill collecting service at a local bank).
Accordingly, the last two decades have seen additional progress in the capacity for fuzzy expert systems to reconcile linguistic ambiguities by the implementation of type 2 fuzzy expert systems [62,63]. The vast majority of the previous works on fuzzy expert systems utilized deterministic membership functions (type 1 systems). These systems were unable to directly model uncertainties stemming from ambiguous semantics in the linguistic formations of rules as well as noise measurement and data [64]. Type 2 fuzzy expert systems eschew this apparent conflict between the deterministic membership function and “fuzzy” characterization as their membership functions are themselves fuzzy. They have generally outperformed their type 1 counterparts. Zarandi et al. have produced numerous examples of potentially promising type 2 fuzzy expert systems applied to a broad range of topics (e.g., the desulphurization of steel [65], stock price analysis [66], and image enhancement [67]). Zadeh [68] originally introduced the type 2 option as a means of more intuitively addressing problems associated with linguistic uncertainty. Its recent resurgence suggests a needed focus on linguistic ambiguity across the literature as a progression from the older fuzzy logic processes discussed previously, albeit not one based on human cognition.
Furthermore, KBISes that are better equipped to handle ambiguous inputs and pragmatics in spoken language better emulate human intelligence, for which ambiguity is a constant feature of real-time adaptation in a complex and unpredictable environment. An example system that was designed with semantic ambiguity in mind is ConceptNet [69], an ontology for language processing that seeks to make contextual flexibility the central concern for textual reasoning, analogy-making, and other functions. Moreover, flexibility in the face of complex and vague information is the foundation for the next aspect of human intelligence in KBISes: adaptation.

2.5. Adaptation

In the first section of this paper, some of the initial failures of the artificial neural networks approach to modeling expert systems were discussed. Some of these failures may be traced to the oversimplified character of the early perceptron networks and their inability to model complex adaptive behavior. However, after over forty years of improvements, artificial neural networks are now a useful tool in fields where modeling dynamic and adaptive learning is important [70]. For example, more expert systems are using neural networks to generate adaptive rules (e.g., see [71] for an integrated ANN-expert system designed for traffic control). These networks are simple three-layer networks that are fully connected. Each input neuron represents an object’s attribute. When weights are assigned to the input values (attributes) according to their importance, the second layer applies a sign activation function such as
$$y = \begin{cases} 1 & \text{if } \sum_{i=1}^{n} x_i w_i \geq 0 \\ -1 & \text{if } \sum_{i=1}^{n} x_i w_i < 0 \end{cases}$$
The activation of neurons in the second layer determines the categories (in the output layer) to which the objects belong and do not belong. Weighted rules that operate on the knowledge base can then be represented by artificial neural networks which modify their weights to achieve the correct decisions whenever new facts and rules are introduced into the system. Note that we highlight a notion of adaptation specific to the integrated ANN approach given its connection to cognitive research. Still, adaptation may take on a different route for other knowledge-based systems; that is, the adaptation may be to the knowledge base itself or to the ways it is organized within a single structure. However, as we discuss later in Section 3.1, expert systems that can reliably adapt across multiple representational structures remain an open problem. Since one of the hallmarks of human learning is its adaptive character, expert systems based on ANNs may be an appropriate starting point toward more realistically capturing human intelligence. This is the case when the connection between ANN integration in expert systems and human cognitive research is given explicit attention by designers as a motivating factor.
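The sketch below implements the sign-activation unit above together with the classical perceptron weight-update rule on a toy attribute-to-category mapping. The data, bias handling, and learning rate are invented for illustration and are not taken from any of the cited systems.

```python
# Single sign-activation unit with the classic perceptron update rule:
# w <- w + eta * (target - output) * x.  Toy data, illustrative only.

def sign_unit(x, w):
    s = sum(xi * wi for xi, wi in zip(x, w))
    return 1 if s >= 0 else -1

def train_perceptron(samples, n_inputs, eta=0.1, epochs=20):
    w = [0.0] * n_inputs
    for _ in range(epochs):
        for x, target in samples:
            error = target - sign_unit(x, w)
            w = [wi + eta * error * xi for wi, xi in zip(w, x)]
    return w

# Attributes (with a constant bias input of 1.0) -> category (+1 / -1).
samples = [([1.0, 0.0, 1.0], 1), ([1.0, 1.0, 0.0], -1),
           ([1.0, 1.0, 1.0], 1), ([1.0, 0.0, 0.0], -1)]
w = train_perceptron(samples, n_inputs=3)
print([sign_unit(x, w) for x, _ in samples])  # should reproduce the targets
```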
This completes a brief tour of the four basic rule formation strategies overtly addressed by the current generation of expert systems. These four strategies have given expert systems the potential to resemble empirically-founded psychological notions of human intelligence. In fact, recent attempts at improving KBISes have integrated all four strategies into a hybrid system (e.g., [72,73,74,75], see [76], for a survey of earlier works). For example, neuro-fuzzy expert systems bring together probabilistic inference, production rules, fuzzy logic, and rule adaptation to try to fulfill the elusive goal of emulating human problem-solving and decision-making intelligence. Despite this formidable blend, as shall be seen in the following section, there are important aspects of human problem-solving and decision-making intelligence that have not been considered by the current generation of expert systems. Without implementing these aspects, it is unlikely that KBISes will reach their full potential.

3. Five Missing Key Aspects of Intelligence in KBISes

We proposed that four key breakthroughs in KBIS design were motivated by the inability of the first generation of KBISes to incorporate rules that handle uncertainty, organization, ambiguity, and adaptation. As mentioned, any system that purports to solve complex decision-making problems in a human-like intelligent fashion must be able to process, build, and utilize these four types of rules. Accordingly, the implementation of such rules has led to the highly publicized success of knowledge-based systems such as IBM’s Watson, which competed in the game show Jeopardy! in February 2011. But as was the case with the success of IBM’s Deep Blue after defeating chess champion Garry Kasparov, Watson’s victory raised new questions regarding the dubious claim of human-like intelligence in KBISes [77].
Accordingly, we propose five capacities of human cognition that have not yet been effectively captured by the current generation of KBISes and intelligent expert systems, preventing them from reaching their full potential. This potential may be realized both for systems explicitly intended for emulating the decision-making and inferential capacities of human experts and, eventually, in broader contexts for adaptive expert system performance overall. Henceforth, when we refer to “expert systems”, we refer predominantly to intelligent expert systems with similar design motivations to the broader array of KBISes for which humanlike intelligence is most readily applicable. Each of the limitations we discuss in this section was informed by empirical and theoretical research on human cognition.

3.1. Representational Plasticity

The first of these is the ability to solve problems using alternative representations. An example of this capacity involves the four-cubes problem popularized under the name “instant insanity”. In a version of the problem one is given four cubes. Each face of each cube is one of four colors: R(ed), G(reen), Y(ellow), and B(lue). The objective is to pile up the four cubes in such a way that all four colors appear on each side of the 4 × 1 stack of cubes. Although there are thousands of possible permutations for stacking up the cubes, there is basically one way that gives a solution. One way of solving this problem is by supplying the expert system with a representation of a cube as a Prolog (a logic programming language) structure called a functor [78], and then by including rules for combinatorially matching the edges of colors shared by the stacked-up cubes. The use of the term “functor” in Prolog, and more generally in functional languages, comes from an attempt to model these programming languages using Category Theory [79]. A category in Category Theory is a directed graph with nodes (objects) and arrows (morphisms) that define the basic rules. Functors provide structure-preserving maps between categories that allow for transformations between object domains. Accordingly, functors in Prolog are primitive terms that map simple terms (their arguments) to compound terms or, in other words, to different syntactic structures. Thus, the term is conceptually related to the notion of a functor in Category Theory as a mapping between categories (or higher-order structures). However, these would be relatively brute-force methods of solving this problem and, hence, not the most “intelligent” approaches.
So how does a human expert solve this problem? The answer is found in the concept of representation. The way a problem is represented often determines how easy it is to solve. This, however, is not without a cost. French [80] argues that representing problems in overly simplified ways can lead to situations where the insight necessary to make breakthrough discoveries can be hindered. With this caveat in mind, by changing the problem space with isomorphic or analogous spaces, a cognitive agent can make difficult problems much more manageable. Thus, instant insanity may be regarded as a problem of insight where arriving at the solution depends on representational plasticity.
In our example displayed in Figure 2, an expert recognizes that representing the problem in terms of edges and nodes (where the edges stand for the relation “is adjacent to”) transforms the problem into a considerably easier one. More specifically, the problem is solved by representing each cube by a graph with four color nodes and by defining the edges connecting these nodes as the relation of “adjacent to”. Color nodes are adjacent to each other if and only if they are the colors of opposite faces in the cubes represented by the graphs. The solution is obtained by superimposing these graphs and examining the resulting subgraphs (for details, see [81]).
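The sketch below illustrates the graph encoding for a toy instance of the puzzle (constructed so that a solution exists): each cube contributes three edges, one per pair of opposite faces, and a solution corresponds to two edge-disjoint selections of one edge per cube in which every color has degree two. This is a simplified brute-force rendering of the representational shift described above, not the published solution method.

```python
# Graph encoding of a toy "instant insanity" instance.  Each cube yields
# three color edges (its opposite-face pairs).  The front/back and
# left/right columns of a valid stack correspond to two edge-disjoint
# selections of one edge per cube, each with every color of degree two.
from itertools import product
from collections import Counter

cubes = [
    [("R", "G"), ("R", "B"), ("Y", "Y")],
    [("G", "B"), ("B", "G"), ("R", "R")],
    [("B", "Y"), ("G", "Y"), ("R", "R")],
    [("Y", "R"), ("Y", "R"), ("B", "B")],
]

def degree_two_everywhere(edges):
    counts = Counter()
    for a, b in edges:
        counts[a] += 1
        counts[b] += 1
    return all(counts[c] == 2 for c in "RGBY")

def solve():
    # Choose one opposite-face pair per cube for the front/back axis and a
    # different one for the left/right axis; both selections must satisfy
    # the degree-two condition.
    for fb in product(range(3), repeat=4):
        for lr in product(range(3), repeat=4):
            if any(f == l for f, l in zip(fb, lr)):
                continue  # the two selections must not reuse a pair on any cube
            front_back = [cubes[i][fb[i]] for i in range(4)]
            left_right = [cubes[i][lr[i]] for i in range(4)]
            if degree_two_everywhere(front_back) and degree_two_everywhere(left_right):
                return front_back, left_right
    return None  # no arrangement exists for this set of cubes

print(solve())
```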
How can intelligent expert systems emulate the capacity that human beings possess for solving problems by using alternative problem spaces? Clearly, the structure of the rules and the knowledge base itself would have to be transformed. This may seem at first as a very difficult task, but it may be accomplished via metaprogramming. Metaprogramming is the process of writing programs that write or manipulate other programs. The language in which the metaprogram is written is called the metalanguage and the language of the programs that are manipulated is called the object language. When the programming language can be its own metalanguage, the language is said to be reflexive. Languages like Lisp and Prolog, both typically used in the development of KBISes, are reflexive languages [82].
With reflexive languages, it is a plausible but tedious task to add transformations that change the nature of the problem space by changing the nature of the rules in the knowledge base and the clauses in the database in such a way that they remain isomorphic to the original knowledge base and database pair. Some similar approaches have been attempted, albeit not within this precise framework of representational plasticity. For example, Gay et al. [83,84] introduced a class-based modular concept applied to object-oriented programming, within which a nonuniform object protocol (i.e., one in which the available methods adapt to the problem state) may be partitioned into separately sequenced and callable methods within an object class. More progress is needed, however, to better reflect the dynamic representational problem-solving capacities of humans. By switching from one representation to another, as in the case of going from cubes to graphs above, an expert can derive answers that would otherwise be difficult to derive by either problem space alone.
It is worth noting that programming common sense representations about the world is nontrivial, a difficulty often discussed in terms of the frame problem [85]. The frame problem refers to the challenge of specifying exactly what is changed by performing an action on the world as well as what remains unchanged. For example, when analyzing instant insanity, a system may have difficulty determining and describing that the blocks that are not touched remain unchanged or that the color of each side remains the same when a block is rotated. Many frame axioms are, therefore, needed to define everything that stays the same when an action is performed. The cumbersome and complex process of using so many frame axioms is the crux of the frame problem [86]. Conversely, cognitive evidence from perception and concept learning research suggests that people can detect invariants in their environment rather efficiently (i.e., by exercising cognitive economy) for the purpose of forming knowledge representations, including judgments [28,29]. This incompatibility is a challenge for KBISes that hope to emulate human intelligence.

3.2. Functional Dynamism

Another limitation of the current generation of expert systems hinges on their inability to emulate core-level cognitive processes that are essential to problem solving. Hence, this section focuses on the process of selective attention. Selective attention has been studied by numerous researchers empirically and theoretically (for a survey of key results on attention, see [87]). One way of thinking about how attention influences problem solving is through the problem of functional fixedness. As an example, examine a famous experiment by Duncker [88]. Duncker presented subjects with a candle, matches, and a box of tacks. Figure 3 shows a variant of his experiment.
The task was to attach a candle to a door for a vision experiment. This problem was difficult for subjects because they experienced difficulty thinking of the box of tacks as a platform for the candle rather than as a container exclusively. This type of difficulty is referred to as functional fixedness and, more colloquially, as mental inertia. How functional fixedness and attention relate to each other has been studied by several researchers, most notably Knoblich et al. [89]. Their research on eye movement during a problem-solving task suggests a link between the two processes. Notably, there were longer fixation times and few eye movements during an impasse (or functional fixedness) and increased attention to relevant information toward the end of problem solving, but only for successful problem solvers. Correspondingly, Kaplan and Simon [90] suggest that attention to additional features assists problem-solving behavior. Particularly, after a representation is revised with the addition of new information, individuals search for properties that are consistent in both representations. Finding such invariant properties while encoding additional features may overcome functional fixedness, expediting the problem-solving process.
It seems that the more one fixates on particular representations of objects in a problem-solving task, the less likely one is to find an insightful answer. However, how can insight of the kind that it takes to solve the Duncker problem be incorporated within expert systems? One possible approach would be to incorporate a mechanism of attention shifting for the functional properties of the clausal relations in the knowledge base and the database. In respect to the Duncker problem, this would mean that the various functional properties of the box of tacks, such as its ability to act as a platform, a cup, a kite, and so on, would be specified in functor structures. These would then be invoked to activate new goal queries that would lead to multiple possible solutions, each rated by its degree of ease. Thus, KBISes would benefit from implementing a low-level attentional mechanism that resembles the intelligent attention-shifting strategies of human problem solvers.
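As a rough illustration of this proposal, the sketch below stores alternative functional properties (affordances) for each object together with an ease rating and lets a query shift attention from an object’s typical function to its less typical ones. The objects, functions, and ratings are invented for the example.

```python
# Sketch of "attention shifting" over functional properties: each object in
# the knowledge base lists alternative functions (affordances) with a rough
# ease rating; a query for a goal function returns candidate objects, so a
# solver can shift attention from an object's typical use to atypical ones.

knowledge_base = {
    "box of tacks": {"container": 0.9, "platform": 0.4, "scoop": 0.2},
    "matches":      {"ignition source": 0.9, "lever": 0.1},
    "candle":       {"light source": 0.9, "adhesive (melted wax)": 0.5},
}

def candidate_solutions(goal_function):
    """Return objects able to serve the goal, rated by ease, best first."""
    hits = [(obj, funcs[goal_function])
            for obj, funcs in knowledge_base.items()
            if goal_function in funcs]
    return sorted(hits, key=lambda pair: -pair[1])

# Fixated query: only the most typical function of the box is considered.
print(candidate_solutions("container"))
# Attention shift: a less typical function is queried, revealing the insight
# needed for the Duncker candle problem (use the box as a platform).
print(candidate_solutions("platform"))
```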

3.3. Domain Specificity

The third human intelligence emulation failure of intelligent expert systems concerns their domain-specific nature. In this regard, one may argue that the strength of intelligent expert systems can also be considered their weakness. To appreciate this statement, consider that by narrowing the knowledge of KBISes to particular domains, one can never hope to emulate the multidomain connectivity of the human mind. The type of domain integration envisioned here is of a much broader scale than the one seen in a few existing expert systems.
The dangers associated with reducing connectivity in favor of narrow knowledge domains have been elaborated by Forbus et al. [91]. They believe that through the use of narrow knowledge domains, a model becomes susceptible to irrelevant constraints and the researcher becomes incapable of analyzing why a particular model is successful. The problem with domain integration is that unless one can find very general rules that can make connections between multiple domains efficient, the creation of production rules and a knowledge database capable of achieving such integration would be a task of insuperable complexity. Nonetheless, by sorting out the general rules and clauses that can stretch across domains of knowledge from the more domain-specific ones, one may be able to get a partial handle on this problem.
High-level perception and analogical thought have been suggested as a possible domain-general process allowing for the multidomain connectivity characteristic of human thought [92]. Chalmers et al. argue that many models are flawed because of how they downplay high-level perception. This is in accord with some of the points made previously with respect to attention shifting. They present a model of high-level perception and analogical thought that places emphasis on the integration of perceptual processing and analogical mapping as well as accumulation of appropriate representations in a given context. Additional accounts of analogical reasoning have been proposed to describe the domain-general mechanistic underpinnings of analogical learning in human cognitive development or otherwise highlight the role of domain-general processes (e.g., [93,94]). Likewise, some recent steps have been taken toward developing a domain-general neural network effective in solving analogy tasks [95]. This is, however, intended only as an improvement over alternative neural networks for determining task solutions and not as an accurate depiction of human knowledge interactions.
A possible direction that might provide a KBIS designer with a way forward toward achieving greater domain generality lies with concept structures, described by the unique relations of the dimensions within a category. Feldman [96] cataloged numerous Boolean concept structures (in this case, a collection of structure families defined over strictly binary dimensions) for study in relation to human category learning, drawing from the works of Shepard et al. [6], Aiken [97], and others. Furthermore, Vigo [28] studied the largest set of concept structures to date from different perspectives and suggests that concept structures may be defined by any number of dimensions, dimensional values, and cardinality, and that relationships between the dimensional values of any category of objects, regardless of the domain of investigation, may be described in this framework [98]. As such, concept structures are domain-general, and their implementation into KBIS design should be reasonably simple. For example, a class of facts within a knowledge base can be described by its dimensions and their relations as a structure, and any other class, even if drawn from an entirely different domain for which the same structure is applicable (at the least, any class featuring as many facts and the same number or more dimensions to be related), can be treated equivalently by the system acting upon those knowledge bases. Undoubtedly, multidomain connectivity is a challenge that deserves consideration in KBIS design if these systems are to resemble natural intelligence, and applicable research in cognitive science abounds.
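The following sketch illustrates, in a deliberately simplified way, what treating categories by their structure alone might look like: two small categories of binary feature vectors drawn from unrelated “domains” are reduced to a canonical structural signature (up to renaming dimensions and flipping value labels) and compared. This is an illustration of domain-general structural description, not an implementation of the specific theories cited above.

```python
# A category is described only by the pattern of values over its (here
# binary) dimensions, so categories from unrelated domains that share a
# structure can be treated identically.  Simplified illustration only.
from itertools import permutations

def structure_signature(category):
    """Canonical form of a set of binary feature vectors, up to renaming
    dimensions and flipping value labels (a crude structural equivalence)."""
    n = len(category[0])
    best = None
    for perm in permutations(range(n)):
        for flips in range(2 ** n):
            remapped = frozenset(
                tuple(obj[perm[i]] ^ ((flips >> i) & 1) for i in range(n))
                for obj in category
            )
            key = tuple(sorted(remapped))
            if best is None or key < best:
                best = key
    return best

# Two categories from different "domains" (animals vs. machines) that share
# the same underlying structure over two binary dimensions.
animals  = [(0, 0), (0, 1)]     # e.g., (has fur, is large)
machines = [(1, 1), (1, 0)]     # e.g., (is electric, is portable)
print(structure_signature(animals) == structure_signature(machines))  # True
```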

3.4. Creativity

Another human intelligence emulation failure of intelligent expert systems lies in their inability to be creative. If one considers composing a beautiful melody a difficult problem, an expert system can certainly not solve such a problem—at least not consistently. Creativity is evident in countless human activities, although the cognitive mechanisms and neural structures that support creativity have remained elusive. Thagard and Stewart [99] proposed a computational account of the “Aha!” experience accompanying creative thinking. They suggest mental representations are patterns of neural activity and that combining unconnected neural patterns may produce novel representations. Creativity as a combination of representations has also been suggested elsewhere [100,101,102]. Thagard and Stewart describe computer simulations that produce new patterns of neural activity whereby concepts combine to form a novel representation. Fauconnier and Turner [103] describe this notion of conceptual blending as a synthesis of frame-like input spaces containing the conceptual framework and semantic knowledge associated with each element combined into a blended space (e.g., “house” and “boat” blended to a new notion of “houseboat”). In addition, Eppe et al. [104] note challenges inherent to generating a computational account for blending. Combining input spaces demands a preliminary generic space containing the relational properties and similarities that may tie them together (such as “a person lives in a house on ground,” “a person travels in a boat on water,” and “a person lives in a houseboat on water”). Most present blending accounts cannot compute the generic space on their own. Moreover, the number of possible combinations of inputs may be immense, and the computational resolution of these combinations may be highly inefficient.
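For concreteness, the toy sketch below blends two frame-like input spaces into a “houseboat” frame through a hand-specified generic space. As noted above, computing the generic space and resolving conflicts automatically is the hard part; here both are simply given, so this is only an illustration of the blending idea.

```python
# Toy conceptual blend in the spirit of the "houseboat" example: two
# frame-like input spaces are aligned through a hand-specified generic space
# (shared roles), and conflicting roles are resolved by explicit preferences.

house = {"inhabitant": "person", "medium": "ground", "function": "dwelling"}
boat  = {"inhabitant": "person", "medium": "water",  "function": "travel"}

generic_space = ["inhabitant"]      # roles the two inputs are taken to share

def blend(space_a, space_b, generic, preferences):
    """Merge two input spaces; conflicts outside the generic space are
    resolved by an explicit preference for one input or the other."""
    blended = {}
    for role in set(space_a) | set(space_b):
        if role in generic or space_a.get(role) == space_b.get(role):
            blended[role] = space_a.get(role, space_b.get(role))
        else:
            blended[role] = preferences[role]
    return blended

houseboat = blend(house, boat, generic_space,
                  preferences={"medium": "water", "function": "dwelling"})
print(houseboat)  # the blended "houseboat" frame (dict order may vary)
```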
On the other hand, Hofstadter and the Fluid Analogies Research Group [105] suggest that the answer to how expert systems can be made more creative lies with analogical thinking. Through the process of analogy construction, one filters some information and adds other information that makes concepts “fluid” rather than static (see [106] for a discussion of concepts as analogies). Consequently, considerable attention has been given to creativity as a process of analogical generalization, and it was described as heuristic-driven theory projection (HDTP; [107]). HDTP is an algorithmic system that is meant to generalize mappings between the source and target domains given their structural commonalities. Guhe et al. [108] suggest, however, that this is merely a special case of conceptual blending and may not always be sufficient to describe the generation of novel concepts [109,110]. Besold and Robere [111] also provide thorough argumentation bringing into question the computational tractability of HDTP as a general account for human creativity. Accordingly, rather than incorporating an analogical engine in expert systems, we take the parsimonious route and suggest that much of the required fluidity may be achieved by following the suggestions discussed in previous sections for incorporating domain integration, attention shifting, and representational plasticity in these systems. In fact, these notions are roughly analogous to the underlying processes specified by Thagard and Stewart [99] in their model of creativity: namely, the model’s reliance on malleable mental representations (representational plasticity), combining neural patterns to produce novel representations (domain integration), and changes in perceptual inputs (attention shifting). Creative human cognition may be an emergent property of these fundamental processes, and accordingly, the capacity of an expert system to emulate human creative behavior may be reliant upon each of these previously discussed shortcomings being resolved. As such, KBIS designers seeking improved creativity ought to explore these hypothesized sources of human creativity.

3.5. Concept Learning

Previously, we discussed the importance of organizing data in ways that facilitate their retrieval and connection to other related data. We also argued that the way of organizing data under the object-oriented programming paradigm is consistent with well-known cognitive models of semantic memory [24,46]. Both approaches use the same simple and intuitive core idea of arranging concepts or categories of entities in terms of a class membership tree-like structure. However, although conceptual organization is important in the efficient retrieval of information, it says little about one of the key capacities of an intelligent human agent: namely, the ability to form concepts.
The concept learning and classification literature in the field of cognitive psychology is quite extensive (for a gentle introduction to concepts, see [106]). However, this discussion is confined to a particular result that may have a positive impact on expert systems design. In a recent work, Vigo [28,29] proposed a mathematical theory of human conceptual behavior whose models predict the degree of subjective difficulty experienced by humans when learning different types of well-defined and ill-defined concepts. The core models associated with generalized invariance structure theory, or GIST (e.g., the invariance law of human conceptual behavior, or the GISTM), have made accurate predictions with respect to very large classes of concepts that have been of great interest to researchers in the past few decades. In GIST, a system for how structural precursors of concept formation inform a rule formation system is proposed. Under the system, concepts are formed by the detection of atomic patterns referred to as invariants. These patterns are then stored as compound memory traces called ideotypes. Ideotypes facilitate the formation of simple rules, prototypes, and holistic magnitude judgments used to classify objects in the environment. Furthermore, a new theory of information derived from GIST and referred to as GRIT (generalized representational information theory, see [112] for an extensive investigation supporting GRIT’s efficacy) provides a way of measuring the information carried by objects of a category about the category.
Thus, in short, GIST provides a framework for explaining and predicting how the human conceptual system simplifies complex categories of datapoints into simpler ones containing the essential information present in the larger sets—a key hallmark of human intelligence. As such, it provides a deterministic mechanism for human generalization and, hence, can enrich the capabilities of modern expert systems in several ways. In particular, we envision engines capable of capturing the essence of a database of facts in KBISes and condensing it to essential facts in ways consistent with the way that human experts would do it. A possible way of doing this was described in detail by Vigo [113]; however, here, a simple sketch of the approach is given.
Figure 4 below illustrates the knowledge information compression module that may be at the core of a conceptualizing engine. This module shall be referred to as the HIC (human information compression module). Whenever new facts are added to the database of facts, the combined facts are reduced by the conceptual engine. To do this, facts are first represented as objects. Sets of these objects are used to build concepts. The number of nonessential objects in the database may be reduced by applying the conceptual information measure developed by Vigo [114] that is derived in GIST and is referred to as “representational information”. The reduced database consisting of these essential concepts/facts is then used as a secondary database that better emulates the efficient derivation of heuristic solutions whenever the inference engine acts on its content. Indeed, when the inference engine operates on the HIC database, efficient and elegant advice of the type often displayed by a human expert would be expected. Admittedly, such advice will not be as precise as advice generated from the full database of facts, but it will be a good approximation based on the kind of generalization and, hence, information compression that the process of concept formation facilitates.
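A very rough sketch of such a compression stage is given below. Facts are encoded as binary feature vectors, and an object is kept as “essential” only if deleting it would change the category’s profile of invariant features (features shared by every member). This surrogate criterion is invented for illustration; it is not an implementation of GIST, GRIT, or the representational information measure cited above.

```python
# Crude stand-in for a "human information compression" (HIC) stage: keep only
# the objects whose removal would alter the category-wide regularities.

def invariant_profile(objects):
    """Features (by index) whose value is identical across all objects."""
    n = len(objects[0])
    return {i: objects[0][i] for i in range(n)
            if all(obj[i] == objects[0][i] for obj in objects)}

def compress(objects):
    essential = []
    baseline = invariant_profile(objects)
    for k, obj in enumerate(objects):
        rest = objects[:k] + objects[k + 1:]
        if invariant_profile(rest) != baseline:
            essential.append(obj)        # removal would alter the regularities
    return essential or objects[:1]      # never return an empty database

facts = [(1, 0, 1), (1, 0, 0), (1, 1, 1), (1, 0, 1)]
print(invariant_profile(facts))   # {0: 1}: only feature 0 is invariant
print(compress(facts))            # the objects that keep that profile in place
```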
If future generations of KBISes are to attain a capacity for emulating the decisional and inferential capacities of human experts, they will need to address these limitations of current systems. Cognitive scientific empirical and theoretical research abounds with directions that can inform future KBIS design. Designers should attend to the need for problem-solving engines that can incorporate alternative representations (representational plasticity); overcome functional fixedness to become more functionally dynamic; integrate narrow knowledge domains for greater generality; pursue novel, creative solutions; and compress information efficiently by processes analogous to human conceptual representation. These adaptations would continue an established tradition of KBIS improvement driven by more human-like processes and inspired by research on human cognition.

4. Conclusions

KBISes as engines truly capable of emulating human intelligence have not yet been realized. To help close this gap, in addition to summarizing the problems of uncertainty, organization, ambiguity, and adaptability, which are recognized as important factors in the development of expert systems, we proposed five additional unresolved discrepancies in KBIS design. Incorporating the capacities of representational plasticity, attention shifting, domain integration, creativity, and concept formation into the next generation of expert systems will yield systems with the best chance of embodying truly intelligent behavior. In presenting these novel approaches, this paper has also offered brief suggestions as to how they might be implemented. Thus, this article has shown how theoretical and empirical developments in cognitive psychology can inform and fortify future generations of knowledge-based intelligent systems.

Author Contributions

The original idea for the piece and the predominant theoretical discussion come from R.V.; D.E.Z. and J.W. contributed to shaping that discussion. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors would like to thank Mikayla Doan, Charles Doan, Andrew Halsey, and Jinling Zhao for their helpful suggestions regarding this work. Correspondence and requests for materials can be addressed to either Ronaldo Vigo at [email protected], Derek Zeigler at [email protected], or Jay Wimsatt at [email protected].

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. McCulloch, W.S.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 1943, 5, 115–137. [Google Scholar] [CrossRef]
  2. Rosenblatt, F. Principles of Neurodynamics; Spartan Books: Washington, DC, USA, 1962. [Google Scholar]
  3. Shin, C.; Park, S. Memory and neural network based expert system. Expert Syst. Appl. 1999, 16, 145–155. [Google Scholar] [CrossRef]
  4. Hatzilygeroudis, I.; Prentzas, J. Integrating (rules, neural networks) and cases for knowledge representation and reasoning in expert systems. Expert Syst. Appl. 2004, 27, 63–75. [Google Scholar] [CrossRef]
  5. Mukerjee, A.; Deshpande, J.M. Application of artificial neural networks in structural design expert systems. Comput. Struct. 1993, 54, 367–375. [Google Scholar] [CrossRef]
  6. Shepard, R.N.; Hovland, C.I.; Jenkins, H.M. Learning and memorization of classifications. Psychol. Monogr. Gen. Appl. 1961, 75, 1–42. [Google Scholar] [CrossRef]
  7. Newell, A.; Simon, H.A. Computer Simulation of Human Thinking. Science 1961, 134, 2011–2017. [Google Scholar] [CrossRef]
  8. Newell, A.; Simon, H.A. Human Problem Solving; Prentice Hall: Hoboken, NJ, USA, 1972. [Google Scholar]
  9. Buchanan, B.; Sutherland, G.; Feigenbaum, E.A. Heuristic DENDRAL: A program for generating explanatory hypotheses in organic chemistry. In Machine Intelligence; Edinburgh University Press: Edinburgh, UK, 1969. [Google Scholar]
  10. Feigenbaum, E.A.; Buchanan, B.G.; Lederberg, J. On generality and problem solving: A case study using the dendral program. In Machine Intelligence 6; Meltzer, B., Michie, D., Eds.; Edinburgh University Press: Edinburgh, UK, 1971; pp. 165–190. [Google Scholar]
  11. Shortliffe, E.H. Mycin: Computer-Based Medical Consultations; Elsevier Press: Cambridge, MA, USA, 1976. [Google Scholar]
  12. Firebaugh, M. Artificial Intelligence: A Knowledge-Based Approach; Boyd & Fraser: Boston, MA, USA, 1988. [Google Scholar]
  13. Marr, D. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information; W.H. Freeman: San Francisco, CA, USA, 1982. [Google Scholar]
  14. Sun, R. Anatomy of the Mind: Exploring Psychological Mechanisms and Processes with the Clarion Cognitive Architecture; Oxford University Press: Oxford, UK, 2016. [Google Scholar]
  15. Vigo, R. Musings on the utility and challenges of cognitive unification: Review of Anatomy of the Mind, Ron Sun. Rensselaer Polytechnic Institute (2016). Cogn. Syst. Res. 2018, 51, 14–23. [Google Scholar] [CrossRef]
  16. Cercone, N.; McCalla, G. Artificial intelligence: Underlying assumptions and basic objectives. J. Am. Soc. Inf. Sci. 1984, 35, 280–288. [Google Scholar] [CrossRef]
  17. Lucas, P.; Van Der Gaag, L. Principles of Expert Systems; Addison-Wesley Longman Publishing: Boston, MA, USA, 1991. [Google Scholar]
  18. Wilson, B.G.; Welsh, J.R. Small knowledge-based systems in education and training: Something new under the sun. Educ. Technol. 1986, 26, 7–13. [Google Scholar]
  19. Medsker, L. Hybrid Neural Network and Expert Systems; Kluwer Academic Publishers: Amsterdam, The Netherlands, 1994. [Google Scholar]
  20. Waterman, D.A. A Guide to Expert Systems; Addison-Wesley: Boston, MA, USA, 1986. [Google Scholar]
  21. Khan, S.; Yairi, T. A review on the application of deep learning in system health management. Mech. Syst. Signal Process. 2018, 107, 241–265. [Google Scholar] [CrossRef]
  22. Liu, W.; Wang, Z.; Liu, X.; Zeng, N.; Liu, Y.; Alsaadi, F.E. A survey of deep neural network architectures and their applications. Neurocomputing 2017, 234, 11–26. [Google Scholar] [CrossRef]
  23. Tversky, A.; Kahneman, D. Judgment under Uncertainty: Heuristics and Biases. Science 1974, 185, 1124–1131. [Google Scholar] [CrossRef] [PubMed]
  24. Quillian, M.R. Semantic Memory—Unpublished Doctoral Dissertation; MIT: Cambridge, MA, USA, 1966. [Google Scholar]
  25. Rao, A.S.; Georgeff, M.P. BDI agents: From theory to practice. In Proceedings of the First International Conference on Multi-agent Systems, San Francisco, CA, USA, 12–14 June 1995; AAAI Press: Palo Alto, CA, USA, 1995. [Google Scholar]
  26. Anderson, J.R. Cognitive Psychology and Its Applications; W.H. Freeman: New York, NY, USA, 1985. [Google Scholar]
  27. Nosofsky, R.M. An exemplar-model account of feature inference from uncertain categorizations. J. Exp. Psychol. Learn. Mem. Cogn. 2015, 41, 1929–1941. [Google Scholar] [CrossRef]
  28. Vigo, R. The GIST of concepts. Cognition 2013, 129, 138–162. [Google Scholar] [CrossRef]
  29. Vigo, R. Mathematical Principles of Human Conceptual Behavior: The Structural Nature of Conceptual Representation and Processing; Routledge: London, UK; Taylor & Francis: London, UK, 2015; Original work published 2014. [Google Scholar]
  30. Vigo, R.; Doan, C.A.; Zhao, L. Classification of three-dimensional integral stimuli: Accounting for a replication and extension of Nosofsky & Palmeri (1996) with a dual discrimination model. J. Exp. Psychol. Learn. Mem. Cogn. 2022. [Google Scholar] [CrossRef]
  31. Medsker, L.; Leibowitz, J. Design and Development of Expert Systems and Neural Computing; Macmillan: New York, NY, USA, 1994. [Google Scholar]
  32. Bowers, J.S.; Davis, C.J. Bayesian just-so stories in psychology and neuroscience. Psychol. Bull. 2012, 138, 389–414. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  33. Tversky, A.; Kahneman, D. Causal Schemes in Judgements under Uncertainty; Cambridge University Press: Cambridge, UK, 1982. [Google Scholar]
  34. Shortliffe, E.H.; Buchanan, B.G. A model of inexact reasoning in medicine. Math. Biosci. 1975, 23, 351–379. [Google Scholar] [CrossRef]
  35. Walley, P. Measures of uncertainty in expert systems. Artif. Intell. 1996, 83, 1–58. [Google Scholar] [CrossRef] [Green Version]
  36. Baroni, P.; Vicig, P. An uncertainty interchange format with imprecise probabilities. Int. J. Approx. Reason. 2005, 40, 147–180. [Google Scholar] [CrossRef] [Green Version]
  37. Capotorti, A.; Formisano, A. Comparative uncertainty: Theory and automation. Math. Struct. Comput. Sci. 2008, 18, 57–79. [Google Scholar] [CrossRef]
  38. Luo, X.; Zhang, C.; Leung, H. Information sharing between heterogeneous uncertain reasoning models in a multi-agent environment: A case study. Int. J. Approx. Reason. 2001, 27, 27–59. [Google Scholar] [CrossRef] [Green Version]
  39. Mauá, D.D.; Antonucci, A.; de Campos, C.P. Hidden Markov models with set-valued parameters. Neurocomputing 2016, 180, 94–107. [Google Scholar] [CrossRef] [Green Version]
  40. Núñez, R.C.; Scheutz, M.; Premaratne, K.; Murthi, M.N. Modeling uncertainty in first-order logic: A Dempster-Shafer theoretic approach. In Proceedings of the 8th International Symposium on Imprecise Probability: Theories and Applications, Compiègne, France, 2–5 July 2013. [Google Scholar]
  41. Zaffalon, M.; Miranda, E. Conservative Inference Rule for Uncertain Reasoning under Incompleteness. J. Artif. Intell. Res. 2009, 34, 757–821. [Google Scholar] [CrossRef]
  42. Liu, Z.N. Human-simulating intelligent PID control. Int. J. Mod. Nonlinear Theory Appl. 2017, 6, 74–83. [Google Scholar] [CrossRef] [Green Version]
  43. Minsky, M.L. A Framework for Representing Knowledge; McGraw-Hill: New York, NY, USA, 1975. [Google Scholar]
  44. Taylor, D. Object-Oriented Information Systems; John Wiley: Hoboken, NJ, USA, 1992. [Google Scholar]
  45. Touretzky, D.S. The Mathematics of Inheritance Systems; Morgan Kaufmann: Burlington, MA, USA, 1986. [Google Scholar]
  46. Anderson, J.R. The adaptive nature of human categorization. Psychol. Rev. 1991, 98, 409–429. [Google Scholar] [CrossRef]
  47. Gruber, T.R. A translation approach to portable ontology specifications. Knowl. Acquis. 1993, 5, 199–220. [Google Scholar] [CrossRef]
  48. Frické, M. Classification, facets, and metaproperties. J. Inf. Archit. 2010, 2, 43–65. [Google Scholar]
  49. Pujara, J.; Miao, H.; Getoor, L.; Cohen, W. Knowledge graph identification. In Proceedings of The Semantic Web—ISWC 2013; Springer: Berlin, Germany, 2013. [Google Scholar]
  50. Trott, M. Interval arithmetic in the Mathematica Guidebook for Numerics; Springer-Verlag: Berlin, Germany, 2006; pp. 54–66. [Google Scholar]
  51. Yager, R.R.; Zadeh, L.A. An Introduction to Fuzzy Logic Applications in Intelligent Systems; Kluwer Academic Publishers: Amsterdam, The Netherlands, 1992. [Google Scholar]
  52. Zadeh, L.A. Fuzzy sets. Inf. Control. 1965, 8, 338–353. [Google Scholar] [CrossRef] [Green Version]
  53. Kandel, A. Fuzzy Expert Systems; CRC Press: Boca Raton, FL, USA, 1992. [Google Scholar]
  54. Dubois, D.; Prade, H.; Yager, R.R. Fuzzy Rules in Knowledge-Based Systems; Morgan Kaufmann: San Francisco, CA, USA, 1993. [Google Scholar]
  55. Kohout, L.J.; Bandler, W. Fuzzy relational products in knowledge engineering. In Fuzzy Approach to Reasoning and Decision Making; Springer Science and Business Media: Dordrecht, The Netherlands, 1992. [Google Scholar]
  56. Munakata, T.; Jain, Y. Fuzzy systems: An overview. Commun. ACM 1994, 37, 69–76. [Google Scholar] [CrossRef]
  57. Zimmermann, H.J. Fuzzy Set Theory—And Its Applications, 4th ed.; Kluwer Academic Publishers: Amsterdam, The Netherlands, 2001. [Google Scholar]
  58. Adlassnig, K.P. Fuzzy Set Theory in Medical Diagnosis. IEEE Trans. Syst. Man, Cybern. 1986, 16, 260–265. [Google Scholar] [CrossRef]
  59. Adlassnig, K.P. Update on CADIAG-2: A fuzzy medical expert system for general internal medicine. In Progress in Fuzzy Sets and Systems; Janko, W.H., Roubens, M., Zimmermann, H.-J., Eds.; Kluwer Academic Publishers: Amsterdam, The Netherlands, 1990; pp. 1–6. [Google Scholar]
  60. Hovy, E. Generating natural language under pragmatic constraints. J. Pragmat. 1987, 11, 689–719. [Google Scholar] [CrossRef]
  61. Collobert, R.; Weston, J.; Bottou, L.; Karlen, M.; Kavukcuoglu, K.; Kuksa, P. Natural language processing (almost) from scratch. J. Mach. Learn. Res. 2011, 12, 2493–2537. [Google Scholar]
  62. Mendel, J.M. Advances in type-2 fuzzy sets and systems. Inf. Sci. 2007, 177, 84–110. [Google Scholar] [CrossRef]
  63. Tavana, M.; Hajipour, V. A practical review and taxonomy of fuzzy expert systems: Methods and applications. Benchmarking Int. J. 2019, 27, 81–136. [Google Scholar] [CrossRef]
  64. Mendel, J.M.; John, R.I.B. Type-2 fuzzy sets made simple. IEEE Trans. Fuzzy Syst. 2002, 10, 117–127. [Google Scholar] [CrossRef]
  65. Zarandi, M.F.; Türkşen, I.; Kasbi, O.T. Type-2 fuzzy modeling for desulphurization of steel process. Expert Syst. Appl. 2007, 32, 157–171. [Google Scholar] [CrossRef]
  66. Zarandi, M.F.; Rezaee, B.; Turksen, I.; Neshat, E. A type-2 fuzzy rule-based expert system model for stock price analysis. Expert Syst. Appl. 2009, 36, 139–154. [Google Scholar] [CrossRef]
  67. Zarinbal, M.; Zarandi, M.F. Type-2 fuzzy image enhancement: Fuzzy rule based approach. J. Intell. Fuzzy Syst. 2014, 26, 2291–2301. [Google Scholar] [CrossRef]
  68. Zadeh, L.A. The concept of a linguistic variable and its application to approximate reasoning. Part III Inf. Sci. 1975, 9, 43–80. [Google Scholar] [CrossRef]
  69. Liu, H.; Singh, P. ConceptNet: A practical commonsense reasoning tool-kit. BT Technology Journal 2004, 22, 211–216. [Google Scholar] [CrossRef]
  70. Haykin, S. Neural Networks: A Comprehensive Foundation; Macmillan College Publishing Company: New York, NY, USA, 1994. [Google Scholar]
  71. Patel, M.; Ranganathan, N. IDUTC: An intelligent decision-making system for urban traffic-control applications. IEEE Trans. Veh. Technol. 2001, 50, 816–829. [Google Scholar] [CrossRef]
  72. Bagloee, S.A.; Asadi, M.; Patriksson, M. Minimization of water pumps’ electricity usage: A hybrid approach of regression models with optimization. Expert Syst. Appl. 2018, 107, 222–242. [Google Scholar] [CrossRef]
  73. Sharaf-El-Deen, D.A.; Moawad, I.F.; Khalifa, M.E. A new hybrid case-based reasoning approach for medical diagnosis systems. J. Med. Syst. 2014, 38, 9. [Google Scholar] [CrossRef] [PubMed]
  74. Sharaf-El-Deen, D.A.; Moawad, I.F.; Khalifa, M.E. A breast cancer diagnosis system using hybrid case-based approach. Int. J. Comput. Appl. 2013, 72, 14–19. [Google Scholar]
  75. Zhou, Q.; Yan, P.; Liu, H.; Xin, Y. A hybrid fault diagnosis method for mechanical components based on ontology and signal analysis. J. Intell. Manuf. 2017, 30, 1693–1715. [Google Scholar] [CrossRef]
  76. Sahin, S.; Tolun, M.; Hassanpour, R. Hybrid expert systems: A survey of current approaches and applications. Expert Syst. Appl. 2012, 39, 4609–4617. [Google Scholar] [CrossRef]
  77. Allen, C.; Wallach, W. Wise machines? On the Horizon 2011, 19, 251–258. [Google Scholar] [CrossRef]
  78. Sterling, L.; Shapiro, E. The Art of Prolog: Advanced Programming Techniques; MIT Press: Cambridge, MA, USA, 1994. [Google Scholar]
  79. Awodey, S. Category Theory; Oxford University Press: Oxford, UK, 2010. [Google Scholar]
  80. French, R.M. Why representation modules don’t make sense. In Proceedings of the 1997 International Conference on New Trends in Cognitive Science: International Conference on New Trends in Cognitive Science; Riegler, A., Peschl, M., Von Stein, A., Eds.; Austrian Society for Cognitive Science: Vienna, Austria; pp. 158–163.
  81. Beineke, L.W.; Wilson, R.J. Introduction to Graph Theory. Am. Math. Mon. 1974, 81, 679. [Google Scholar] [CrossRef] [Green Version]
  82. Demers, F.N.; Malenfant, J. Reflection in logic, functional and object oriented programming: A short comparative study. In Proceedings of the IJCAI’95 Workshop on Reflection and Metalevel Architectures and their Applications in AI, Montreal, QC, Canada, 21 August 1995; pp. 29–38. [Google Scholar]
  83. Gay, S.; Gesbert, N.; Ravara, A.; Vasconcelos, V.T. Modular session types for objects. Log. Methods Comput. Sci. 2015, 11, 1–76. [Google Scholar] [CrossRef] [Green Version]
  84. Gay, S.; Vasconcelos, V.T.; Ravara, A.; Gesbert, N.; Caldeira, A. Modular session types for distributed object-oriented programming. In Proceedings of the 37th Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL’10), Madrid, Spain, 17–23 January 2010; ACM: New York, NY, USA, 2010. [Google Scholar]
  85. McCarthy, J.; Hayes, P. Some philosophical problems from the standpoint of artificial intelligence. In Machine Intelligence; Meltzer, B., Ed.; Edinburgh University Press: Edinburgh, UK, 1969; Volume 4, pp. 463–502. [Google Scholar]
  86. Morgenstern, L. The problem with solutions to the frame problem. In The Robot’s Dilemma Revisited: The Frame Problem in Artificial Intelligence; Ablex Publishing Company: New York, NY, USA, 1996; pp. 99–133. [Google Scholar]
  87. Kruschke, J.K. Models of attentional learning. In Formal Approaches in Categorization; Pothos, E.M., Willis, A.J., Eds.; Cambridge University Press: Cambridge, UK, 2009; pp. 120–152. [Google Scholar]
  88. Duncker, K. On problem solving (translated by L.S. Lees). Psychol. Monogr. 1945, 58, Whole No. 270. [Google Scholar]
  89. Knoblich, G.; Ohlsson, S.; Raney, G.E. An eye movement study of insight problem solving. Mem. Cogn. 2001, 29, 1000–1009. [Google Scholar] [CrossRef] [PubMed]
  90. Kaplan, C.A.; Simon, H.A. In search of insight. Cogn. Psychol. 1990, 22, 374–419. [Google Scholar] [CrossRef]
  91. Forbus, K.D.; Gentner, D.; Markman, A.B.; Ferguson, R.W. Analogy just looks like high level perception: Why a domain-general approach to analogical mapping is right. J. Exp. Theor. Artif. Intell. 1998, 10, 231–257. [Google Scholar] [CrossRef]
  92. Chalmers, D.J.; French, R.M.; Hofstadter, D. High-level perception, representation, and analogy: A critique of artificial intelligence methodology. J. Exp. Theor. Artif. Intell. 1992, 4, 185–211. [Google Scholar] [CrossRef]
  93. Alvarez, J.; Abdul-Chani, M.; Deutchman, P.; DiBiasie, K.; Iannucci, J.; Lipstein, R.; Zhang, J.; Sullivan, J. Estimation as analogy-making: Evidence that preschoolers’ analogical reasoning ability predicts their numerical estimation. Cogn. Dev. 2017, 41, 73–84. [Google Scholar] [CrossRef]
  94. Thibodeau, P.H.; Flusberg, S.J.; Glick, J.J.; Sternberg, D.A. An emergent approach to analogical inference. Connect. Sci. 2013, 25, 27–53. [Google Scholar] [CrossRef]
  95. Yuan, A. Domain-general learning of neural network models to solve analogy task: A large-scale simulation. In Proceedings of the 39th Annual Meeting of the Cognitive Science Society, London, UK, 26–29 July 2017; Cognitive Science Society: Seattle, WA, USA, 2017. [Google Scholar]
  96. Feldman, J. A catalog of Boolean concepts. J. Math. Psychol. 2003, 47, 75–89. [Google Scholar] [CrossRef]
  97. Aiken, H.H. Synthesis of Electronic Computing and Control Circuits; Harvard University Press: Cambridge, MA, USA, 1951. [Google Scholar]
  98. Vigo, R.; Wimsatt, J.; Doan, C.A.; Zeigler, D.E. Raising the bar for theories of categorisation and concept learning: The need to resolve five basic paradigmatic tensions. J. Exp. Theor. Artif. Intell. 2021. [Google Scholar] [CrossRef]
  99. Thagard, P.; Stewart, T.C. The AHA! experience: Creativity through emergent binding in neural networks. Cogn. Sci. 2011, 35, 1–33. [Google Scholar] [CrossRef] [Green Version]
  100. Boden, M.A. The Creative Mind: Myths and Mechanisms; Routledge: London, UK, 2004. [Google Scholar]
  101. Koestler, A. The Act of Creation: A Study of the Conscious and Unconscious in Science and Art; Dell: New York, NY, USA, 1967. [Google Scholar]
  102. Mednick, S. The associative basis of the creative process. Psychol. Rev. 1962, 69, 220–232. [Google Scholar] [CrossRef] [Green Version]
  103. Fauconnier, G.; Turner, M. The Way We Think: Conceptual Blending and the Mind’s Hidden Complexities; Basic Books: New York, NY, USA, 2003. [Google Scholar]
  104. Eppe, M.; Maclean, E.; Confalonieri, R.; Kutz, O.; Schorlemmer, M.; Plaza, E.; Kühnberger, K.-U. A computational framework for conceptual blending. Artif. Intell. 2018, 256, 105–129. [Google Scholar] [CrossRef]
  105. Hofstadter, D.R. Fluid Analogies Research Group. Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought; Basic Books: New York, NY, USA, 1995. [Google Scholar]
  106. Vigo, R. A dialog on concepts. Think 2010, 9, 109–120. [Google Scholar] [CrossRef] [Green Version]
  107. Schwering, A.; Krumnack, U.; Kühnberger, K.-U.; Gust, H. Syntactic principles of heuristic-driven theory projection. Cogn. Syst. Res. 2009, 10, 251–269. [Google Scholar] [CrossRef]
  108. Guhe, M.; Pease, A.; Smaill, A.; Martinez, M.; Schmidt, M.; Gust, H.; Kühnberger, K.-U.; Krumnack, U. A computational account of conceptual blending in basic mathematics. Cogn. Syst. Res. 2011, 12, 249–265. [Google Scholar] [CrossRef]
  109. Hedblom, M.M.; Kutz, O.; Neuhaus, F. Image schemas in computational conceptual blending. Cogn. Syst. Res. 2016, 39, 42–57. [Google Scholar] [CrossRef]
  110. Kutz, O.; Bateman, J.; Neuhaus, F.; Mossakowski, T.; Bhatt, M. E pluribus unum: Formalisation, use-cases, and computational support for conceptual blending. In Computational Creativity Research: Towards Creative Machines; Atlantis Thinking, Machines; Besold, T., Schorlemmer, M., Smaill, A., Eds.; Atlantis Press: Amsterdam, The Netherlands, 2015; Volume 7. [Google Scholar]
  111. Besold, T.R.; Robere, R. When almost is not even close: Remarks on the approximability of HDTP. In Proceedings of the Artificial General Intelligence—6th International Conference, AGI 2013, Beijing, China, 31 July–3 August 2013; Lecture Notes in Computer Science. Kuhnberger, K., Rudolph, S., Wang, P., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; Volume 7999, pp. 11–20. [Google Scholar]
  112. Vigo, R.; Doan, C.A.; Basawaraj; Zeigler, D.E. Context, structure, and informativeness judgments: An extensive empirical investigation. Mem. Cognit. 2020, 48, 1089–1111. [Google Scholar] [CrossRef]
  113. Vigo, R. Towards a law of invariance in human concept learning. In Proceedings of the 33rd Annual Conference of the Cognitive Science Society, Boston, MA, USA, 20–23 July 2011; Carlson, L., Hölscher, C., Shipley, T., Eds.; Cognitive Science Society: Seattle, WA, USA, 2011; pp. 2580–2585. [Google Scholar]
  114. Vigo, R. Representational information. Inf. Sci. 2011, 181, 4847–4859. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Production-rules expert system architecture. Note. This graphic describes an expert system according to the authors’ definition and visualization. See [19] for a similar depiction.
Figure 2. Alternative representations of the four-cubes problem in terms of graphs.
Figure 3. Duncker problem.
Figure 4. Production-rules expert system with the knowledge information compression module. Note. HIC = human information compression; CD = compressed data, reduced by the conceptual engine.