Article

Design of a Computable Approximate Reasoning Logic System for AI

Institute of Uncertainty Information, Hebei University of Engineering, Handan 056038, China
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(9), 1447; https://doi.org/10.3390/math10091447
Submission received: 28 March 2022 / Revised: 12 April 2022 / Accepted: 20 April 2022 / Published: 25 April 2022

Abstract

Fuzzy reasoning based on the “If... then...” rule is not the inexact reasoning that AI needs in order to handle ambiguity, because fuzzy reasoning is antilogical. To solve this problem, a redundancy theory of discriminative weight filtering, comprising six theorems and one M(1,2,3) model, was proposed and the approximate reasoning process was exhibited; on this basis, a logic system by which AI handles ambiguity was proposed as an extension of the classical logic system. The system is a generalized dynamic logic system characterized by machine learning; it is the practical-application logic system of AI and can effectively deal with practical problems including conflict, noise, emergencies and various unknown uncertainties. It is characterized by combining approximate reasoning and computation, through machine learning, to convert specific data; its core is data and computation, and its condition is “sufficient” high-quality training data. The innovation is that we proposed the redundancy theory of discriminative weight filtering and designed a computable approximate reasoning logic system that combines approximate reasoning and computation through machine learning to convert specific data. It is a general logic system for AI to deal with uncertainty. The study has theoretical and practical significance for AI and logical reasoning research.

1. Introduction

This work proposes a logic system that enables AI to deal with various uncertainties.
Many scholars believe that over the past 70 years, progress in artificial intelligence (AI) technology has followed an “antilogical” route. Is this really the case? To identify whether the research route of AI is logical or antilogical, we must first distinguish between a logic system and an antilogic system supporting that route. A logic system is a reasoning system in which reasoning is “computational thinking”. An antilogic system is a reasoning system in which reasoning does not rely on logical operation. Secondly, it should be clear that a logic system supporting AI research is not only a mathematical logic system of “accurate reasoning” (i.e., a classical logic system), but can also be an approximate reasoning logic system of “inexact reasoning”, obtained by extending the classical logic system, that processes fuzzy information.
Nowadays, artificial intelligence is developing rapidly and is applied in various fields [1]. From the construction industry [2] to modern power systems [3], from precipitation prediction [4] to urban intelligent systems [5], from medical health [6] to music education [7], AI has become an indispensable part of the development of modern society. Machine learning has also been applied to medical health [8], building safety [9] and other areas.
Most reasoning methods have problems when facing inconsistency, and many scholars have studied this. Floridi et al. considered that complete uncertainty is out of reach [10]. Dealing with uncertainty is part of most intelligent behaviors: D.A. Clark proposed that techniques for managing uncertainty are a key step toward producing intelligent behavior in machine learning [11], and D.T. Li held that understanding the underlying uncertainty is very important to improving the reliability of a system [12]. M. Kifer believed that logic theory is a sufficient model for people’s views on inconsistency [13]. L.A. Zadeh put forward the possibility theory of fuzzy logic, which focused on knowledge representation and reasoning under uncertainty [14]. Teodorescu et al. believed that fuzzy logic can be exchanged with other logics, although there are subtle differences in natural language [15].
In fact, an approximate reasoning logic system is also a logic system for AI to deal with the various conflicts, emergencies and unknown uncertainties that give AI “great limitations”. Its condition is that the indicators affecting the system state can be assumed to be independent. When the indicators affecting the system state cannot be assumed to be independent, the logic system for AI to deal with uncertainties is the generalized dynamic logic system characterized by machine learning.
To solve the problem that fuzzy reasoning is antilogical, that is, that reasoning based on the “If... then...” rules is not AI’s inexact reasoning about ambiguity, a generalized dynamic logic system characterized by machine learning is proposed. It is a general logic system for AI to deal with uncertainty, and it is also the practical-application logic system of AI, which has been used to deal with practical problems including conflict, noise, emergencies and various unknown uncertainties. It is characterized by combining approximate reasoning and computation for specific data conversion through machine learning, and its condition is that training data must be provided “sufficiently” and basically cover the scene.
The structure of the paper is as follows. First, the mathematical logic system based on precise reasoning and the first winter of AI are introduced. On this basis, the irrationality of the fuzzy logic system and the second trough of AI research are expounded. Then, a redundancy theory of discriminative weight filtering is constructed and, by exhibiting the approximate reasoning process, a computable approximate reasoning logic supporting fuzziness research is obtained; this is the logic system for AI to deal with ambiguity. Finally, a generalized dynamic logic system characterized by machine learning is described and conclusions are drawn.

2. Mathematical Logic System Based on Precise Reasoning

In 1956, AI was pushed onto the stage of history at the Dartmouth Conference. Following the mathematical logic of precise reasoning, AI passed through the golden decade of its early development by completing the theory of machine proof [16]. When dealing with deterministic problems such as theorem proving, AI following the mathematical logic of accurate reasoning has great advantages. However, when dealing with practical problems containing a large number of uncertainties, it has great limitations. These “great limitations” show that a mathematical logic system of accurate reasoning is not a “sufficiently good” logic system for AI to deal with “a large number of uncertainties”. What, then, is a “sufficiently good” logic system for AI to deal with “a large number of uncertainties”?
That the “large number of uncertainties” giving AI “great limitations” long received little attention shows again that the classical logic system is not a “sufficiently good” logic system for AI to deal with “uncertainties”. These “great limitations” made AI researchers start to doubt the rationality of simulating the intelligent process with logical calculus. H. Dreyfus believed that people do not rely on logical operation in solving problems [17]. This was the first challenge to the rationality of simulating the intelligent process with logical calculus. However, the challenge did not succeed, because perceptron models that do not rely on logical operation were proved to have great limitations [18]. The value of this proof is that it unquestionably defends the rationality of using logical operation to simulate the process of human intelligence. However, the proof does not point out what kind of operation the logical operation is that leaves AI with “great limitations” in the face of “a large number of uncertainties”. In fact, this inability to express it is normal, because the different uncertainties behind the great limitations of AI usually involve different reasoning methods and different logical operation methods.
However, AI research went into a severe winter throughout the 1970s because of the “great limitations” caused by the emergence of “a large number of uncertainties”. The first severe winter of AI shows that mathematical logic systems based on accurate reasoning (i.e., classical logic systems) are not “sufficiently good” for dealing with “a large number of uncertainties”. From AI’s first severe winter, we should think about the following questions:
(1)
What other uncertainties besides randomness constitute the large number of uncertainties that make AI have “great limitations”?
(2)
The deeper question is whether there is a nonrandom uncertainty that fundamentally changes the “accurate reasoning mode” and the “accurate reasoning logic” that AI originally follows. If there is, what kind of uncertainty is it? How does it change AI’s reasoning mode and reasoning logic into an “inexact reasoning mode” and an “inexact reasoning logic”? What are the new reasoning and the new logic? Obviously, this is not just a concern of AI, but also a concern of logic research.

3. Irrationality of a Fuzzy Logic System

3.1. Fuzzy Set Theory and Fuzzy Logic

In 1965, nine years after the appearance of AI, Dr. Zadeh challenged the certainty of sets by publishing the famous paper Fuzzy Sets [19]. “Taking the larger and taking the smaller” (the max and min operations) is the characteristic operation of fuzzy sets. After the appearance of fuzzy sets, people realized that besides randomness there is an uncertainty called fuzziness, namely the virtual fuzziness presented by an “element partly belongs” fuzzy set.
The first fuzzy study was the fuzzy set theory of virtual fuzziness. The fuzzy set theory uses a fuzzy set to describe fuzzy information and deals with fuzzy information through fuzzy set conversion, which is to be realized by fuzzy reasoning based on the “If… then…” rule generated by the operations of “taking the larger and taking the smaller”. What provides the theoretical basis for fuzzy reasoning is fuzzy logic. As Ross put it in his book Fuzzy Logic and Its Engineering Applications [20], the ultimate goal of fuzzy logic is to provide a theoretical basis for inexact reasoning, which is also known as approximate reasoning (Zadeh, 1979). The theory and methods of expressing and processing fuzzy information generated from fuzzy sets, fuzzy reasoning and fuzzy logic are called fuzzy mathematics.
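To make these operations concrete, the following is a minimal sketch (ours, not from the paper; the membership values are invented) of “taking the larger and taking the smaller” on discrete fuzzy sets, together with the max–min compositional rule that underlies “If… then…” fuzzy inference:

```python
import numpy as np

# Discrete membership vectors over the same universe (illustrative values).
A = np.array([0.2, 0.7, 1.0, 0.4])   # fuzzy set A
B = np.array([0.5, 0.3, 0.6, 0.9])   # fuzzy set B

union = np.maximum(A, B)        # "taking the larger": membership in A OR B
intersection = np.minimum(A, B) # "taking the smaller": membership in A AND B

# Zadeh's compositional rule for "If x is A then y is B":
# encode the rule as a relation R(x, y) = min(A(x), B(y)); an input A'
# is then mapped to B'(y) = max_x min(A'(x), R(x, y)).
R = np.minimum.outer(A, B)
A_prime = np.array([0.1, 0.9, 0.5, 0.2])
B_prime = np.minimum(A_prime[:, None], R).max(axis=0)
print(union, intersection, B_prime)
```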
The 1990s was a period of rapid development of fuzzy mathematics, but the debate over whether fuzzy mathematics is “right or wrong” and “usable or unusable” became more and more intense. Despite the different views, the rapid development and application of fuzzy mathematics and of fuzzy reasoning based on the “If…then…” rule for dealing with fuzzy information still posed a great temptation for AI researchers. In fact, since the first international conference on NN and FR held in Japan in 1990, interest in fuzzy reasoning (FR) has increased sharply. Research on the integration of NN and FR technology has developed rapidly and attracted increasing attention, becoming a new research direction in the field of control and information processing [21,22,23,24,25]. In particular, the fusion of fuzzy reasoning (FR), artificial intelligence (AI) and neural networks (NN) has attracted attention. Some scholars call it the FAN system [26] and believe that the FAN system is a very practical information processing system with the computer as its main platform [27].
However, the fuzzy reasoning used in the fuzzy set theory to deal with fuzzy information turned out not to be the inexact reasoning by which AI processes fuzziness. This explosive result greatly disappointed AI researchers and caused AI research to fall into a trough again (1987–1993).
The more important question is why fuzzy reasoning based on the “If… then…” rule, as used in the fuzzy set theory to deal with fuzzy information, is not the imprecise reasoning of AI for dealing with fuzziness: only by figuring out the reason can we know how to design a “sufficiently good” logic system.
In fact, FR is a typical example of antilogic. Hence, it is necessary to study in depth why the antilogical route is not the right route for AI research. Further research shows that a fuzzy set, which disrupts the electron-like inseparability of the elements of a set, confuses two different physical concepts, “element” and “element attribute”, when describing fuzzy information: “elements” cannot be divided and an “element partly belongs” set has no corresponding entity in the objective world, whereas “element attributes” can be divided in the attribute space.
The fuzzy set theory does not define the attribute space but replaces it with a fuzzy set, and a fuzzy set confuses the virtual membership in the “element partly belongs” subspace with the (real) membership in the “element attribute partly belongs” subspace. The result of this confusion is that the “fuzzy set conversion” used by the fuzzy set theory to deal with fuzzy information is actually a conversion of irrelevant data composed of the measurement results of different objects under the same measurement conditions. Because irrelevant data cannot be converted, the calculation of fuzzy set conversion is not feasible. Hence, the fuzzy reasoning based on the “If… then…” rule that is meant to realize fuzzy set conversion is not “the idea of computation”, and the fuzzy logic that provides the theoretical basis for fuzzy reasoning is not a computable logic. The fuzzy set theory, which studies virtual fuzziness, thus takes an antilogical route, and the fundamental reason why the antilogical route cannot be used in AI research is that its reasoning does not rely on logical operation, and models that do not rely on logical operation, such as the perceptron, have been proved to have limitations [18].

3.2. Fuzzy Test and Fuzziness

There are two kinds of uncertainty tests, a random test and a fuzzy test. The differences between them are as follows:
(1)
A fuzzy test cannot be repeated under the same conditions.
(2)
The event of interest in a test is not a matter of “happening or not happening” but of the quantitative value of the event of interest in the $j$th $(j = 1 \sim m)$ test, which we call $\mu_j(x)$; the true value of $\mu_j(x)$ is not known, and only its approximation can be determined by decisionmakers.
(3)
The key problem is that the event of interest is locked not by one test or infinitely many tests, but by $m$ tests. The possibility of the event of interest locked by the $m$ tests is denoted $\alpha(x)$.
(4)
Uncertainty here means that people do not know the true values of $\mu_j(x)$ and $\alpha(x)$ and can only determine approximations of the true values; this nonrandom uncertainty is called fuzziness.
(5)
The reason why we should study fuzziness is that only by determining $\alpha(x)$ can we solve the practical problems that need to be solved. To determine $\alpha(x)$, firstly, we should study and determine the fuzziness of $\mu_j(x)$ and $\alpha(x)$.
(6)
The process of determining the quantitative value $\mu_j(x)$ of the occurrence of the event of interest in the $j$th $(j = 1 \sim m)$ fuzzy test is called knowledge data acquisition.
The sequence of quantitative values of the event of interest determined by the $m$ fuzzy tests is defined as follows:
$$\mu_1(x), \mu_2(x), \ldots, \mu_m(x) \tag{1}$$
We call the process of determining the possibility $\alpha(x)$ of the event of interest locked by the $m$ fuzzy tests knowledge data conversion.
(7)
The “acquisition and conversion” of knowledge data are the content of fuzziness research, and they are two different stages. The difference between them is that knowledge data acquisition realizes the conversion between a “qualitative concept” and a “quantitative value” and completes the acquisition and expression of fuzzy information, whereas knowledge data conversion realizes the conversion between “quantitative values” and completes fuzzy information processing.
(8)
The “acquisition and conversion” of knowledge data must be computed, and the computation contains fuzziness. There is no doubt that the first focus of fuzziness research is the characteristics of this kind of computation containing fuzziness, because “uncertainty research” usually begins with recognizing the characteristics of the computation containing the “uncertainty”. This is not only the starting point of uncertainty research, but also the dividing point between uncertainty research taking the logical route and taking the antilogical route.
With that in mind, what are the characteristics of the computation containing fuzziness?

4. Preliminary Knowledge of a Regression Logic Route in Fuzziness Research

4.1. Characteristics of Computation including Ambiguity

Note that the true values of $\mu_j(x)$ and $\alpha(x)$ are not known; what the decisionmaker determines is only an approximation of the true values. However, it is common sense that approximations differ in quality, and only the “optimal” approximation is valuable. What, then, is the “optimal” approximation?
Since the true values of $\mu_j(x)$ and $\alpha(x)$ cannot be known, there is no absolute standard by which to measure whether an approximation is superior or inferior. What kind of standard should a reasonable “relative standard” be?
Definition of a relative standard: The approximation determined by the decisionmaker should be the “optimal” approximation among all the approximations that people may determine under the current conditions, that is, the “relatively optimal” approximation. Hence, the fuzziness research is to determine the “relatively optimal” approximation. This is an invariable proposition throughout the study of fuzziness.
How can the decisionmaker determine the best approximation among all possible approximations under current conditions? For this purpose, it is necessary to figure out what kind of fuzziness produces the approximation.
(1)
Knowledge data acquisition stage.
The determination of $\mu_j(x)$ is usually an indirect measurement without the aid of a tool. Because the information is incomplete, it is necessary to supplement it with the necessary subjective knowledge and combine subjective and objective knowledge; only then can we reasonably determine the quantitative value $\mu_j(x)$ of the occurrence of the event of interest in the $j$th fuzzy test. Avoiding or even deliberately ignoring subjective knowledge in the knowledge data acquisition stage is not the correct method of fuzziness research. However, the randomness in the subjectivity of decisionmakers makes the calculation of $\mu_j(x)$ unrepeatable; the calculation result is not unique, and it is not guaranteed to be a “relatively optimal” approximation.
(2)
Knowledge data conversion stage.
Although the data in Equation (1) are the basic condition for determining $\alpha(x)$, they are not a sufficient condition for determining $\alpha(x)$, but a relevant condition. Under the relevant condition of Equation (1), the calculation of $\alpha(x)$ is feasible; it is not a deterministic calculation, but one with algorithm diversity. It is this diversity of algorithms that makes the calculation determining $\alpha(x)$ unrepeatable; the calculation result is not unique, and it is not guaranteed to be a “relatively optimal” approximation.
From the above, we know that to make the calculation of the “acquisition and conversion” of fuzzy knowledge data repeatable, its result unique and its approximation “relatively optimal”, that is, to conduct fuzziness research normally, there must be logical support that can suppress the randomness in the subjectivity of decisionmakers and regulate the diversity of the objectively existing algorithms. This supporting logic is called the approximate reasoning logic.
What kind of logic is the approximate reasoning logic? How can it, by restraining the randomness in the subjectivity of the decisionmaker and standardizing the diversity of the objectively existing algorithms, make the calculation of the approximation of the true value repeatable, its result unique and the approximation “relatively optimal”? Obviously, this is the first logical question that must be clearly answered before fuzziness research can start, and it is also the starting point that decides whether the study of fuzziness embarks on the logical route.

4.2. Approximate Reasoning Logic Supporting Fuzziness Research

To make the calculation of the “acquisition and conversion” of fuzzy knowledge data repeatable, its result unique and its approximation “relatively optimal”, the approximate reasoning logic supporting fuzziness research requires that every choice made by decisionmakers in the “acquisition and conversion” of knowledge data containing fuzziness must be the best choice under the current conditions. This is the rule of fuzziness research. If this is done, it is certain that there is no better approximation than the approximation determined by the decisionmaker. Therefore, the calculation of the approximation is the calculation of the “relatively optimal” approximation under the current conditions. There is no doubt that such a calculation is repeatable, its result is unique and its approximation is “relatively optimal”.
The reason why the above rule is feasible and can be formulated for the approximate reasoning logic is that the standardized human approximate reasoning ability exists objectively. Just as it took humans more than a thousand years to obtain a computable logic, namely mathematical logic, it took humans thousands of years to acquire the approximate reasoning ability.
The difference between the approximate reasoning logic and binary logic is the logical relationship. The logical relationship of binary logic is causal, so the binary logic system is a strict proof system; the logical relationship of the approximate reasoning logic is correlation, so the approximate reasoning logic system is a “relatively optimal” approximate verification system. The difference between the approximate reasoning logic and fuzzy logic is substantive: the approximate reasoning logic is a computable logic, while fuzzy logic is not. The approximate reasoning logic faces the computable “acquisition and conversion” of knowledge data containing fuzziness, and its rules are made to ensure that the feasible calculations are repeatable, their results unique and their approximations “relatively optimal”. However difficult the execution may be, because repeatable calculations, unique results and “relatively optimal” approximations exist objectively, people will find them in the end.
Fuzzy logic, in contrast, faces the incomputable conversion of fuzzy sets, hoping that fuzzy reasoning based on the “If… then…” rules generated by “taking the larger and taking the smaller” can achieve fuzzy set conversion. Because the calculation of fuzzy set conversion is not feasible, regardless of whether there is fuzzy reasoning based on the “If…then…” rule that can achieve fuzzy set conversion, it is not “computational thinking”; thus, the fuzzy logic that provides the theoretical basis for fuzzy reasoning is not a computable logic.
The emergence of the fuzzy set theory on virtual fuzziness makes the uncertainty research have two routes. One is the logical route in which reasoning is the idea of computation; the other is the antilogical route in which reasoning does not rely on computation.
Although the approximate reasoning logic exists objectively, whether the study of fuzziness can embark on this logical route is another matter. In determining the quantitative value $\mu_j(x)$ of the occurrence of the event of interest in the $j$th $(j = 1 \sim m)$ fuzzy test, difficulties may sometimes be encountered, but they are all non-substantive difficulties, because determining $\mu_j(x)$ achieves the transition between “qualitative concepts” and “quantitative values”, for which humans have accumulated many methods.
It is very difficult, however, to achieve the “conversion of knowledge data containing fuzziness”. Although the calculation realizing this conversion is feasible, it is, almost without exception, a “nonlinear calculation”. To complete this kind of “unknown nonlinear calculation”, we must provide reasoning that is “computational thinking”. What kind of reasoning is the approximate reasoning that can provide “computational thinking” for “unknown nonlinear calculation”, and how can it be found?
This is the reason why a computable approximate reasoning logic has not been established since mathematical logic. To find this approximate reasoning that can provide “computational thinking” for “unknown nonlinear calculation”, we should first understand the characteristics of the conversion of knowledge data containing fuzziness.

4.3. Conversion Characteristics of Knowledge Data Containing Fuzziness

Equation (1) is the most general conversion of knowledge data containing fuzziness. It is characterized by an uncertain conversion between a limited number of related values. Because it is a data conversion with correlation, the calculation of the data conversion is feasible. Because finite, computationally feasible numerical conversions have no randomness, the “uncertainty” in the conversion is not randomness but fuzziness; more precisely, it is the diversity of algorithms, called objective fuzziness. Because it is a conversion between “numerical values”, not a conversion between “qualitative concepts” and “quantitative values”, cloud models based on the concepts of “cloud” and “cloud drops” cannot cover it.
This is the fundamental reason why many experts and scholars usually fail to obtain substantial results when they do their best to replace it with probabilistic methods or cloud models to cover fuzziness research.
For example, the D–S evidence theory is an extension of the classical probability theory. Its general framework was first proposed by Dempster in 1967 to construct an “uncertain inference model” [28]; it was later expanded by Shafer, who in 1976 established an evidence theory for processing uncertain information based on the trust function and authenticity measures [29]. The D–S evidence method is a powerful method for expressing and synthesizing uncertain information [30,31], and it is especially suitable for decision-making-level information fusion [32]. However, applying the D–S evidence method to decision-making-level information fusion actually employs a random method to solve the conversion of knowledge data containing fuzziness. Because the D–S evidence theory cannot reveal the deep internal links in the conversion of knowledge data containing fuzziness, its effect is mediocre. That is why most users find that the D–S evidence method is usually not as good as the “weighted average” model of fuzzy comprehensive assessment.
Despite the many application cases of “uncertain data conversion”, there are actually only two types of data conversion that contain uncertainty and are computationally feasible: conversion of data containing randomness and related conversion of data containing fuzziness. As we all know, the results of $m$ measurements of the same object under the same measurement conditions differ; this is the result of randomness. It is easy to prove that the arithmetic mean of the $m$ measurement results is the best approximation of the true value in the sense of the “minimum sum of squares of errors”. This result shows that the conversion of data containing randomness is a deterministic “linear addition”.
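As a brief check of this claim (a standard least-squares argument, added here for completeness), let $x_1, \ldots, x_m$ be the $m$ measurement results and $a$ a candidate approximation of the true value. Then
$$\frac{d}{da}\sum_{j=1}^{m}(x_j - a)^2 = -2\sum_{j=1}^{m}(x_j - a) = 0 \;\Longrightarrow\; a = \frac{1}{m}\sum_{j=1}^{m} x_j = \bar{x},$$
so the arithmetic mean $\bar{x}$ is the unique minimizer of the sum of squared errors, confirming that data containing only randomness convert by deterministic linear addition.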

4.4. Data Conversions Containing Fuzziness

The measurement results of the same object under different measurement conditions are called relevant data. This is the most abundant type of data in applications. Under conditions of randomness, fuzziness (including approximation), emergencies, various conflicts, noise, contradictions and other uncertainties, data obtained by different sensors reflecting the state of the same object can be regarded as relevant data. The data of Equation (1) are the most general related conversion containing fuzziness.
The obvious fact is that the measurement results of the same object under different measurement conditions are essentially different from those under the same measurement conditions. There is no reason to assume that the data of Equation (1), which contain fuzziness, are also deterministically “linearly additive” like converted data containing randomness. Therefore, the data conversion of Equation (1) must solve three problems:
(1)
Characterize why the data of Equation (1) are not “linearly additive” (this characterization is the approximate reasoning).
(2)
Construct the calculation method for the “nonlinear addition” of the data of Equation (1).
(3)
When the “addition result” is regarded as the possibility $\alpha(x)$ of the occurrence of the event of interest locked by the $m$ fuzzy tests, it must be the “optimal” approximation among all the approximations that people may determine under the current conditions.
Obviously, these are three mathematical problems; hence, fuzziness, like randomness, is a mathematical subject. If randomness became an object of mathematical research because Kolmogorov established the axiomatic system of probability theory in the 1930s, then fuzziness becomes an object of mathematical research because the three mathematical problems of the conversion of knowledge data containing fuzziness must be solved. The key and difficult point among the three problems is the first, the approximate reasoning. Therefore, approximate reasoning is not only the soul of fuzziness research, but also the bottleneck restricting its development, as well as the substantive link deciding whether the approximate reasoning logic can eventually become a computable logic.
So far, whether the approximate reasoning logic supporting fuzziness research can eventually become a computable logic can be summed up as follows: can we exhibit the approximate reasoning that shows why Equation (1) is not linearly additive? Although humans developed the approximate reasoning ability through thousands of years of hard work, the problem of how to reasonably determine, from the data of Equation (1), a data conversion satisfying the three requirements above has not been solved; that is, the approximate reasoning of “computational thinking” that achieves the data conversion of Equation (1) has not been found.

5. Redundancy Theory: Computable Logic, Approximate Reasoning Logic

5.1. Basic Data

The obvious fact is that $\mu_j(x)$ with different connotations corresponds to different application backgrounds, different knowledge data acquisition methods and different reasoning and calculation for knowledge data conversion. Although there are countless such data, the most important of them are the membership conversion data. “Most important” means that the approximate reasoning can be directly exhibited in the space of these basic data. The basic data are as follows.
There are $m$ main indicators known to affect the state of the system $G$. When the index $j$ $(j = 1 \sim m)$ takes the monitoring value $x_j$ on the feature interval $[a_j, b_j]$, $\mu_{jk}$ is the membership degree with which $j$ makes the system $G$ belong to the $I_k$ $(k = 1 \sim p)$ state level; $\mu_{jk}$ is also called the index membership, and it meets the following conditions:
$$0 \le \mu_{jk} \le 1, \qquad \sum_{k=1}^{p} \mu_{jk} = 1 \tag{2}$$
where $I_k$ $(k = 1 \sim p)$ is the $k$th specific state (the $k$th subspace, state level or class) in the dimensionless continuous state space $I$, which meets the following condition:
$$I_i \cap I_j = \varnothing \ (i \ne j), \qquad \bigcup_{k=1}^{p} I_k = I \tag{3}$$
where $\{I_1, I_2, \ldots, I_p\}$ is called the partition of $I$, recorded as $I = \{I_1, I_2, \ldots, I_p\}$, and $I$ is represented by its partition (discretization of the continuous space).
The index membership degree $\mu_{jk}$ is determined by constructing a standard membership function on the feature interval $[a_j, b_j]$. A standard membership function is a membership function that satisfies the three algebraic properties of “nonnegativity”, “additivity” and “normalization”, and it usually has a piecewise-linear (broken line) structure. These three algebraic properties are rules that must be followed, like the conservation of energy. The value $\mu_{jk}$ determined by a standard membership function can be regarded as the “relatively optimal” approximation of the true value.
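As an illustration (our sketch, not code from the paper; the grade points are invented), a family of piecewise-linear membership functions built on increasing grade points $c_1 < \cdots < c_p$ of the feature interval satisfies all three algebraic properties: every value is nonnegative, the memberships at any monitoring value sum to 1, and the shape is a broken line:

```python
import numpy as np

def standard_memberships(x: float, centers: np.ndarray) -> np.ndarray:
    """Piecewise-linear membership of monitoring value x in p state levels.

    centers: increasing grade points c_1 < ... < c_p on the feature interval.
    Returns (mu_1, ..., mu_p) with 0 <= mu_k <= 1 and sum(mu) == 1.
    """
    p = len(centers)
    mu = np.zeros(p)
    if x <= centers[0]:          # below the first grade point
        mu[0] = 1.0
    elif x >= centers[-1]:       # above the last grade point
        mu[-1] = 1.0
    else:                        # linear interpolation between adjacent grades
        i = np.searchsorted(centers, x) - 1
        t = (x - centers[i]) / (centers[i + 1] - centers[i])
        mu[i], mu[i + 1] = 1.0 - t, t
    return mu

# Example: an index on [0, 100] with p = 4 state levels (assumed grade points).
centers = np.array([10.0, 35.0, 60.0, 90.0])
print(standard_memberships(47.0, centers))   # -> [0.   0.52 0.48 0.  ]
```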
When the index membership degree $\mu_{jk}$ is determined, the fuzzy information about the state of $G$ provided by index $j$ can be explicitly expressed by the membership vector $\mu(j) = (\mu_{j1}, \mu_{j2}, \ldots, \mu_{jp})$. The fuzzy information reflecting the state of $G$ provided by all $m$ indexes can be expressed as an $m \times p$ state transition matrix $U(G) = (\mu_{jk})_{m \times p}$. The $k$th column of the matrix is
$$(\mu_{1k}, \mu_{2k}, \ldots, \mu_{mk})^{T} \tag{4}$$
It is the result of $m$ fuzzy tests and is called the basic data, where $\mu_{jk}$ is the quantitative value of the event of interest in the $j$th test, that is, the membership degree with which $j$ makes the state of $G$ belong to the state level $I_k$.
The problem is to determine the comprehensive membership degree $\alpha_k(G)$ with which the state of system $G$ belongs to the $I_k$ state level under the common influence of the $m$ indexes, that is, to determine the occurrence degree $\alpha_k(G)$ of the event of interest locked by the $m$ fuzzy tests. Specifically, it is to study the knowledge data conversion that determines $\alpha_k(G)$ from Equation (4).
Fuzzy comprehensive assessment uses the “weighted average” model to determine $\alpha_k(G)$:
$$\alpha_k(G) = \sum_{j=1}^{m} \lambda_j(G) \cdot \mu_{jk} \tag{5}$$
where $\lambda_j(G)$ is the normalized weight of index $j$.
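A minimal numerical sketch of Equation (5) (ours; the matrix and weights are invented for illustration):

```python
import numpy as np

# State transition matrix U(G): m = 3 indexes (rows), p = 3 state levels (columns).
U = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])
lam = np.array([0.5, 0.3, 0.2])   # normalized index weights, summing to 1

alpha = lam @ U                    # Equation (5): weighted average over indexes
print(alpha)                       # [0.38 0.38 0.24], sums to 1
```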
Obviously, fuzzy comprehensive assessment uses a linear method to realize the data conversion without being able to establish the linear additivity of the data of Equation (4). Because it fails to provide the reasoning of computational thinking, the “weighted average” model of fuzzy comprehensive assessment is an antilogical model.
However, this antilogical model is the most widely used membership conversion model in fuzziness research, and there must be little-known reasons for this. The approximate reasoning masked by the “weighted average” model is the soul of constructing the “inexact reasoning logic”, which also shows the importance of exhibiting the approximate reasoning process. What kind of reasoning, then, realizes the approximate reasoning of the “nonlinear additivity” of Equation (4)?

5.2. Heuristic Knowledge Acquisition

We point out that the data of Equation (4) are not “linearly additive” because the index membership degree $\mu_{jk}$ usually contains a nonlinear redundancy value that plays no role in determining the comprehensive membership degree $\alpha_k(G)$ of the system. Thus, the approximate reasoning of the “nonlinear addition” of the data of Equation (4) can be made precise as the mathematical expression of the nonlinear redundancy value that may be contained in $\mu_{jk}$ and that plays no role in determining $\alpha_k(G)$.
As we all know, when the indicator $j$ provides classification information reflecting the state of $G$ with the membership vector $\mu(j) = (\mu_{j1}, \mu_{j2}, \ldots, \mu_{jp})$, then, according to information entropy theory, different indexes $j$ make different contributions to the classification of $G$; some are big, some are small, and some make no contribution at all.
Although we do not know what the nonlinear redundancy value contained in the index membership $\mu_{jk}$ that plays no role in determining $\alpha_k(G)$ is, it is certain that the redundancy value must be related to the contribution of index $j$ to the classification of $G$. This important enlightening knowledge shows that the quantitative value of the contribution of each index $j$ to the classification of $G$ is most probably the breakthrough for recognizing the true face of the redundancy value.
For this purpose, starting from the state transition matrix $U(G) = (\mu_{jk})_{m \times p}$, the following are calculated:
$$H_j(G) = -\sum_{k=1}^{p} \mu_{jk} \cdot \lg \mu_{jk} \tag{6}$$
with the convention that $\mu_{jk} \cdot \lg \mu_{jk} = 0$ when $\mu_{jk} = 0$, and
$$V_j(G) = 1 - \frac{1}{\lg p} H_j(G) \tag{7}$$
where $H_j(G)$ is the entropy and $V_j(G)$ is the peak value.
Definition:
$$\omega_j(G) = V_j(G) \Big/ \sum_{t=1}^{m} V_t(G) \tag{8}$$
where $\omega_j(G)$ is called the discrimination weight of index $j$ with respect to $G$. Obviously, the discrimination weight $\omega_j(G)$ meets the following conditions:
$$0 \le \omega_j(G) \le 1, \qquad \sum_{j=1}^{m} \omega_j(G) = 1 \tag{9}$$
The intuitive significance of the discrimination weight $\omega_j(G)$ is that, when the index $j$ provides classification information reflecting the state of $G$ with the membership vector $\mu(j)$, the discrimination weight $\omega_j(G)$ is the degree to which the classification information provided by $j$ can distinguish the category to which $G$ belongs.
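The following sketch (ours, with an invented matrix) computes Equations (6)–(8) and checks Equation (9); note that a row sharply concentrated on one state level receives a large discrimination weight, while a uniform, uninformative row receives weight zero:

```python
import numpy as np

def discrimination_weights(U: np.ndarray) -> np.ndarray:
    """Entropy -> peak value -> normalized discrimination weights, Eqs. (6)-(8)."""
    m, p = U.shape
    # Entropy H_j(G), with the convention 0 * lg 0 = 0 (Equation (6)).
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(U > 0, U * np.log10(U), 0.0)
    H = -terms.sum(axis=1)
    V = 1.0 - H / np.log10(p)      # peak value (Equation (7))
    return V / V.sum()             # discrimination weights (Equation (8))

U = np.array([[0.9, 0.1, 0.0],     # sharply classified index: large weight
              [1/3, 1/3, 1/3],     # uninformative index: zero peak value
              [0.2, 0.6, 0.2]])
w = discrimination_weights(U)
print(w, w.sum())                  # weights in [0, 1] summing to 1 (Equation (9))
```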

5.3. Approximate Reasoning

To exhibit the approximate reasoning more clearly, we propose the following theorems and set up one model.
Theorem 1 (redundancy theorem 1).
In the state transition matrix $U(G)$, if the $j$th row vector corresponds to a discrimination weight $\omega_j(G) = 0$, then the membership degrees provided by index $j$ are redundant membership degrees that play no role in determining the comprehensive membership degree of the system.
Theorem 2 (redundancy theorem 2).
In the state transition matrix $U(G) = (\mu_{jk})_{m \times p}$, if at least two row vectors correspond to non-zero discrimination weights, then each non-zero element in row $j$ and column $k$ of the matrix must contain a redundancy value that plays no role in determining $\alpha_k(G)$, and the redundancy value can be expressed as $\mu_{jk} \cdot (1 - \omega_j(G))$, where $\omega_j(G)$ is the discrimination weight of index $j$.
Theorem 3 (nonlinear conversion theorem).
In the state transition matrix $U(G)$, if at least two row vectors correspond to non-zero discrimination weights and at least one row vector has no component with a value of 1, then the data of Equation (4) that determine the system’s comprehensive membership $\alpha_k(G)$ must be “nonlinear”.
The above three theorems complete the characterization of the “nonlinear additivity” of the data of Equation (4) and also provide the calculation idea for determining the comprehensive membership degree $\alpha_k(G)$ from Equation (4).
After the nonlinear redundancy value $\mu_{jk} \cdot (1 - \omega_j(G))$ is removed, $\omega_j(G) \cdot \mu_{jk}$ is called the effective membership of $j$. Using the effective membership degree $\omega_j(G) \cdot \mu_{jk}$ of index $j$ in place of the membership degree $\mu_{jk}$ to calculate the comprehensive membership degree $\alpha_k(G)$ of the system is called discriminative weight filtering.
Theorem 4 (linear additive theorem).
The effective memberships $\omega_j(G) \cdot \mu_{jk}$ of different indexes $j$ are obviously linearly additive with respect to $j$.
Suppose $\lambda_j(G)$ is the normalized weight of indicator $j$ and call $\lambda_j(G) \cdot \omega_j(G) \cdot \mu_{jk}$ the comparable membership of $j$; its intuitive meaning is the degree to which the classification information provided by index $j$ makes the state of $G$ belong to grade $I_k$.
Theorem 5 (directly additive theorem).
The comparable memberships $\lambda_j(G) \cdot \omega_j(G) \cdot \mu_{jk}$ of different indexes $j$ can obviously be added directly with respect to $j$:
$$M_k(G) = \sum_{j=1}^{m} \lambda_j(G) \cdot \omega_j(G) \cdot \mu_{jk} \tag{10}$$
is called the $k$-class comparable sum of system $G$.
However, whether $M_k(G)$ can be used as the basis for calculating the comprehensive membership $\alpha_k(G)$ of the system depends on whether the $m$ indexes affecting the system state are independent.
It is usually difficult to determine whether the $m$ indexes affecting the system state are independent, but to determine the comprehensive membership of the system it is necessary to assume that the $m$ indexes are independent; otherwise, the calculation determining the comprehensive membership of the system cannot continue.
Corollary.
In a multi-index system, the necessary condition to determine the comprehensive membership of the system is that the main indicators affecting the system state are independent.
Theorem 6 (membership conversion theorem).
Under the assumption of $m$ independent indicators, the following holds:
$$\alpha_k(G) = M_k(G) \Big/ \sum_{t=1}^{p} M_t(G), \quad (k = 1 \sim p) \tag{11}$$
Obviously, $\alpha_k(G)$ meets the following conditions:
$$0 \le \alpha_k(G) \le 1, \qquad \sum_{k=1}^{p} \alpha_k(G) = 1 \tag{12}$$
Therefore, $\alpha_k(G)$ is the comprehensive membership degree with which the state of system $G$ belongs to the state level $I_k$ under the common influence of the $m$ indicators. The membership conversion model is obtained as follows:
$$\left\{\begin{aligned} \alpha_k(G) &= M_k(G) \Big/ \sum_{t=1}^{p} M_t(G) \\ M_k(G) &= \sum_{j=1}^{m} \lambda_j(G) \cdot \omega_j(G) \cdot \mu_{jk} \\ \omega_j(G) &= V_j(G) \Big/ \sum_{t=1}^{m} V_t(G) \\ V_j(G) &= 1 - \frac{1}{\lg p} H_j(G) \\ H_j(G) &= -\sum_{k=1}^{p} \mu_{jk} \cdot \lg \mu_{jk} \end{aligned}\right. \tag{13}$$
where $\lambda_j(G)$ is the normalized weight of indicator $j$.
The above model is recorded as $M(1,2,3)$, where 1 denotes discriminative weight filtering, 2 denotes the transformation of effective membership degrees into comparable membership degrees and 3 denotes the calculation of the comprehensive membership of the system from the comparable memberships.
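To make the whole pipeline concrete, here is a compact implementation of Equation (13) (our sketch under the paper’s assumptions; the matrix and weights are invented, and base-10 logarithms are used as in the model):

```python
import numpy as np

def m123(U: np.ndarray, lam: np.ndarray) -> np.ndarray:
    """M(1,2,3) membership conversion model, Equation (13).

    U:   m x p state transition matrix of index memberships mu_jk.
    lam: normalized index weights lambda_j(G), summing to 1.
    Returns the comprehensive membership vector (alpha_1, ..., alpha_p).
    Assumes the m indexes are independent, as required by Theorem 6.
    """
    m, p = U.shape
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(U > 0, U * np.log10(U), 0.0)
    H = -terms.sum(axis=1)                 # entropy H_j(G)
    V = 1.0 - H / np.log10(p)              # peak value V_j(G)
    omega = V / V.sum()                    # step 1: discriminative weight filtering
    M = (lam * omega) @ U                  # steps 2-3: comparable sums M_k(G)
    return M / M.sum()                     # normalize to alpha_k(G), Equation (11)

U = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])
lam = np.array([0.5, 0.3, 0.2])
print(m123(U, lam))   # nonlinear result; compare with Equation (5) above
```

Unlike the weighted average of Equation (5), the result depends nonlinearly on $U(G)$ through the entropy-based discrimination weights $\omega_j(G)$.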

5.4. Specific Technical Details of the Model M ( 1 , 2 , 3 ) and Its Differences from Similar Models

The differences between $M(1,2,3)$ and similar models are as follows:
(1)
$M(1,2,3)$ is a nonlinear mathematical model.
(2)
Given the known state transition matrix, the index weight vector and the index-independence assumption, the establishment of $M(1,2,3)$ neither adds any new knowledge nor causes the loss of existing information.
(3)
$M(1,2,3)$ is an approximate model supported by the approximate reasoning logic. Every choice made in building the $M(1,2,3)$ model is the best choice under the current conditions, so $M(1,2,3)$ is a “relatively optimal” approximate model; the “weighted average” model of fuzzy comprehensive assessment, by contrast, is an antilogical model.
The approximate reasoning and the $M(1,2,3)$ model are both constructed on the basis of the specific data of Equation (4). For all the relatively complex conversions of knowledge data containing fuzziness in fuzziness research, by mapping the conversion data into a high-dimensional state space, the data conversion can be completed based on the above approximate reasoning and the model $M(1,2,3)$. Therefore, the above approximate reasoning and the $M(1,2,3)$ model are the approximate reasoning and the basic model that support fuzziness research.
By constructing the redundancy theory of discriminative weight filtering to exhibit the approximate reasoning process, the approximate reasoning logic becomes a computable logic. Its difference from classical mathematical logic is that the approximate reasoning logic is a computable logic of “inexact reasoning”; it is therefore an extension of classical logic.
Although the approximate reasoning logic was designed for fuzziness research, it is a practical-application logic system for AI to deal with the “great limitations” arising from “a large number of uncertainties”. In fact, the “large number of uncertainties” faced by a complex system features not only randomness and fuzziness (including approximation), but also emergencies, various conflicts, contradictions, noise and unknown uncertainties. However, if measurement data reflecting the state of the same object under different measurement conditions can be provided, the problem can be transformed: the conversion data, following the law of data conversion, can be converted through the approximate reasoning and the $M(1,2,3)$ model as specific data containing fuzziness, irrespective of the kind of “uncertainty” present before data acquisition.
This is the reason why an approximate reasoning logic system is a logic system for AI to deal with the “great limitations” due to the emergence of “a large number of uncertainties”.

6. Generalized Dynamic Logic System Characterized by Machine Learning

Before the construction of the approximate reasoning logic system, people did not know what kind of inexact reasoning could realize the conversion of knowledge data containing fuzziness, that is, the approximate reasoning of “computational thinking”. If the decisionmaker cannot assume that the main indicators affecting the system state are independent, it is impossible to construct the redundancy theory of discriminative weight filtering and solve the nonlinear calculation problem of the conversion of knowledge data containing fuzziness by exhibiting the “approximate reasoning process”. In this case, decisionmakers can only use machine learning methods to mine the statistical laws hidden in the conversion data, instead of using the approximate reasoning of “computational thinking”, to solve the nonlinear calculation problem of knowledge data conversion. This mapping system, characterized by machine learning and realizing the nonlinear conversion from input data to output data, is called a generalized dynamic logic system.
The advantages of the approximate reasoning logic system are that, from obtaining the input data to outputting the decision data, not only is the output a true “relatively optimal” approximation, but the calculation is also simple, can run online and needs few resources. However, its requirement is harsh: the decisionmaker must be able to assume that the main indicators affecting the system state are independent.
An advantage of a generalized dynamic logic system characterized by machine learning is that it does not need to assume that the main indicators affecting the system state are independent. As long as the training data reflecting the system state are provided, the statistical law reflecting the regularity of the input data can be obtained through machine learning. The condition is that the training data must be provided “sufficiently” and can basically cover the scene.
Machine learning has long been the mainstream method of AI research.
After fuzzy reasoning based on the “If…then…” rule was proven not to be the imprecise reasoning by which AI deals with fuzziness, AI research fell into a trough again; since 1994, a large number of AI researchers have turned to solving specific problems in specific application fields, such as speech recognition, image recognition and natural language processing. In this process, researchers realized the importance of data and the value of statistical models. The breakthrough mainly benefited from machine learning.
Machine learning refers to the process of acquiring a series of knowledge, rules, methods and skills by receiving external information (including sample observation, external supervision, interactive feedback, etc.). Compared with traditional algorithms, the advantage of machine learning is that the programmer does not have to define the specific process; they just need to provide the machine with some general knowledge and define a sufficiently flexible learning structure; the machine can then accumulate practical experience through observation and adjust and improve the defined structure so as to obtain the processing ability for specific tasks. It is this ability that can solve, all at once, the “nonlinear computation” problems of converting knowledge data containing fuzziness, which otherwise can only be solved by constructing approximate reasoning that is “computational thinking”. Machine learning now emphasizes deep learning. What we do is online, that is, we turn the system into software and then form an AI system for practical application, not offline. Machine learning is generally good, but its requirements are stringent: it relies on algorithms to work, and it needs a variety of algorithms rather than basic learning in simple increments. Another disadvantage is the need for a large number of data samples for training and learning. The rapid development of AI after 2011, which made machine learning the mainstream method, was achieved thanks to the continuous accumulation of big data, deep network learning and the emergence of large-scale computing clusters.
A generalized dynamic logic system characterized by machine learning is composed of the learning objective, the learning structure, the training data and the learning algorithm.
(1)
Learning objectives. From the application perspective, learning objectives include perception, reasoning and generation. Perception includes listening to sounds, looking at pictures and so on; reasoning includes finding reasons and making decisions; generation includes generating voice, images and texts. From the task perspective, learning objectives can be divided into prediction (using one part of the data to predict another part) and description (discovering and characterizing the internal laws of the data).
(2)
Learning structures, also known as models, define the specific forms used to express system knowledge. A function is a common model that expresses knowledge as a mapping from an input to an output. During learning, the new knowledge obtained from the data can be absorbed by changing the function parameters.
(3)
Training data. Data are the accumulation of experience. Training the system with data can update its prior knowledge and improve its practicability. The quality and quantity of the data and their coverage of actual scenarios directly affect machine learning results. Therefore, data are the foundation of machine learning. When collecting and organizing data, we need to pay special attention to whether the data are accurate and complete and to the correlations between different data. Generally, the original data are not used directly but go through a series of preprocessing steps: the original data are cleaned and filtered and features are extracted for learning.
(4)
Learning algorithms. A learning algorithm is the concrete embodiment of the machine learning process. Machine learning algorithms can be divided into supervised learning, unsupervised learning and reinforcement learning. Regardless of the kind of learning, however, its core is computing. The choice of learning algorithm is determined by several factors, such as the learning structure, the learning objectives and the data characteristics, and there is no universal algorithm. A minimal sketch combining these four components follows this list.
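The following is a minimal sketch (ours, with synthetic data) of the four components working together: the objective is prediction, the structure is a small parametric function, the data are noisy samples of an unknown nonlinear mapping, and the algorithm is supervised gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: noisy samples of an unknown nonlinear mapping (synthetic).
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(2.5 * X[:, 0]) + 0.05 * rng.standard_normal(200)

# Learning structure: y = w2 . tanh(W1 x + b1) + b2, a flexible parametric form.
W1, b1 = 0.5 * rng.standard_normal((16, 1)), np.zeros(16)
w2, b2 = 0.5 * rng.standard_normal(16), 0.0

lr = 0.05
for _ in range(2000):                      # learning algorithm: gradient descent
    h = np.tanh(X @ W1.T + b1)             # hidden activations, shape (200, 16)
    err = h @ w2 + b2 - y                  # learning objective: squared error
    g_w2 = h.T @ err / len(X)
    g_b2 = err.mean()
    g_h = np.outer(err, w2) * (1 - h**2)   # backpropagate through tanh
    g_W1 = g_h.T @ X / len(X)
    g_b1 = g_h.mean(axis=0)
    W1 -= lr * g_W1; b1 -= lr * g_b1; w2 -= lr * g_w2; b2 -= lr * g_b2

print("final MSE:", np.mean(err**2))       # the learned nonlinear conversion
```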
The obvious fact is that the core of both the generalized dynamic logic system characterized by machine learning and the approximate reasoning logic system is “data and computation”; hence, both follow the logical route. The difference is between the typical logical route and the generalized logical route: one achieves the nonlinear computation of the conversion of knowledge data containing fuzziness by constructing the approximate reasoning of “computational thinking”, and the other mines the statistical laws hidden in the conversion data through machine learning to realize the nonlinear computation of the conversion of knowledge data containing uncertainty.
Both the generalized dynamic logic system characterized by machine learning and the approximate reasoning logic system are conditional. The condition of the latter is that the main indicators affecting the system state can be assumed to be independent of each other; the condition of the former is that training data covering the actual scenes must be “sufficient” and of relatively high quality.
Generalized dynamic logic systems characterized by machine learning and approximate reasoning logic systems are not invariable; they change with the data provided by uncertain environments with different connotations, because Gödel’s incompleteness theorem proves that there is no consistent and complete logic system in which all knowledge in a field can be determined by calculation as long as the basic assumptions are correct. The real difficulty of AI research is that, when the decisionmaker can neither assume that the main indicators affecting the system state are independent of each other nor provide a “sufficient” amount of training data that basically covers the scene, neither the generalized dynamic logic system characterized by machine learning nor the approximate reasoning logic system can be directly applied. This is clearly a concern for AI research. With this in mind, what kind of logic system can help AI cope with the “great limitations” arising from a large number of uncertainties in an uncertain environment?
Approximate reasoning is helpful to the construction of the learning structure and the generation of learning algorithms in machine learning. Therefore, one way to design a “better logic system” may be to consider the existing logic systems comprehensively and to study their complementarity and reasonable “docking”, so as to make up for the distortion of output results caused by non-independent indicators or “insufficient” training data. That is why the design of “sufficiently good” logic systems for AI is still under way.

7. Conclusions

To find the inexact reasoning by which AI can handle ambiguity, a generalized dynamic logic system characterized by machine learning was proposed and the related models were set up. We also analyzed the reason why fuzzy reasoning based on the “If… then…” rule is not AI’s inexact reasoning for dealing with fuzziness. To solve the problem that fuzzy reasoning based on the “If... then...” rules is antilogical and is not AI’s inexact reasoning about ambiguity, and to find a real logic system that can effectively deal with practical problems including conflict, noise, emergencies and all kinds of unknown uncertainty, an approximate reasoning logic system was proposed; it is an extension of the classical logic systems that implements the conversion of specific data containing fuzziness, its condition being that the indicators affecting the system state are independent of each other. The generalized dynamic logic system characterized by machine learning is the practical-application logic system of AI, which has been used to deal with practical problems including conflict, noise, emergencies and various unknown uncertainties. It is characterized by combining approximate reasoning and computation for specific data conversion through machine learning, and its condition is that training data must be provided “sufficiently” and basically cover the scene. For the design of “sufficiently good” logic systems for AI, we still have a long way to go.

Author Contributions

Conceptualization, K.L. and Y.L.; methodology, K.L.; validation, K.L. and Y.L.; formal analysis, K.L.; investigation, Y.L.; writing—original draft preparation, K.L. and Y.L.; writing—review and editing, K.L., Y.L. and R.C.; supervision, K.L.; project administration, Y.L.; funding acquisition, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

The work was supported by the Natural Science Foundation of Hebei Province, China (E2020402079) and a project of the Scientific Research Program of Colleges and Universities in Hebei Province, China (No. ZD 2019114).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Asan, O.; Bayrak, A.E.; Choudhury, A. Artificial intelligence and human trust in healthcare: Focus on clinicians. J. Med. Internet Res. 2020, 22, e15154.
  2. Bianca, W.L. Corporate digital responsibility (CDR) in construction engineering—Ethical guidelines for the application of digital transformation and artificial intelligence (AI) in user practice. SN Appl. Sci. 2021, 3, 1–25.
  3. Sahu, P.C.; Prusty, R.C.; Panda, S. Active power management in wind/solar farm integrated hybrid power system with AI based 3DOF-FOPID approach. Energy Sources Part A Recovery Util. Environ. Eff. 2021, 1–21.
  4. Yang, L.S.; Feng, Q.; Yin, Z.L.; Wen, X.H.; Deo, R.C.; Si, J.H.; Li, C.B. Application of multivariate recursive nesting bias correction, multiscale wavelet entropy and AI-based models to improve future precipitation projection in upstream of the Heihe River, Northwest China. Theor. Appl. Climatol. 2019, 137, 323–339.
  5. Yigitcanlar, T.; Cugurullo, F. The sustainability of artificial intelligence: An urbanistic viewpoint from the lens of smart and sustainable cities. Sustainability 2020, 12, 8548.
  6. Leatherdale, S.T.; Lee, J. Artificial intelligence (AI) and cancer prevention: The potential application of AI in cancer control programming needs to be explored in population laboratories such as COMPASS. Cancer Causes Control 2019, 30, 671–675.
  7. Chen, J.C.W. AI in music education: The impact of using artificial intelligence (AI) application to practise scales and arpeggios in a virtual learning environment. Learn. Environ. Des. 2020, 307–322.
  8. Strohm, L.; Hehakaya, C.; Ranschaert, E.R.; Boon, W.P.C.; Moors, E.H.M. Implementation of artificial intelligence (AI) applications in radiology: Hindering and facilitating factors. Eur. Radiol. 2020, 30, 5525–5532.
  9. Poh, C.Q.X.; Ubeynarayana, C.U.; Goh, Y.M. Safety leading indicators for construction sites: A machine learning approach. Autom. Constr. 2018, 93, 375–386.
  10. Floridi, L.; Cowls, J.; King, T.C.; Taddeo, M. How to design AI for social good: Seven essential factors. Sci. Eng. Ethics 2020, 26, 1771–1796.
  11. Clark, D.A. Numerical and symbolic approaches to uncertainty management in AI. Artif. Intell. Rev. 1990, 4, 109–146.
  12. Li, D.T.; Hu, L.T.; Peng, X.T. A proposed artificial intelligence workflow to address application challenges leveraged on algorithm uncertainty. iScience 2022, 25, 103961.
  13. Kifer, M.; Lozinskii, E.L. A logic for reasoning with inconsistency. J. Autom. Reason. 1992, 9, 179–215.
  14. Zadeh, L.A. Fuzzy sets as a basis for a theory of possibility. Fuzzy Sets Syst. 1978, 1, 3–28.
  15. Teodorescu, M.; Teodorescu, H.N. Noncommutative logic systems with applications in management and engineering. Int. J. Comput. Commun. Control 2021, 16, 1–19.
  16. Wang, H. Toward mechanical mathematics. IBM J. Res. Dev. 1960, 4, 2–22.
  17. Dreyfus, H.; Dreyfus, S.E.; Athanasiou, T. Mind over Machine; Simon & Schuster: New York, NY, USA, 2000.
  18. Minsky, M.; Papert, S. Perceptrons: An Introduction to Computational Geometry; MIT Press: Cambridge, MA, USA, 1969.
  19. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353.
  20. Ross, T.J. Fuzzy Logic and Its Engineering Applications; Qian, T.H., Translator; Electronic Industry Press: Beijing, China, 2001.
  21. Quan, T.F. Information Fusion Neural Network Fuzzy Reasoning Theory and Application; National Defense Industry Press: Beijing, China, 2002.
  22. Buckley, J.J.; Hayashi, Y. Fuzzy neural networks: A survey. Fuzzy Sets Syst. 1994, 66, 1–13.
  23. Horikawa, S.; Furuhashi, T.; Uchikawa, Y. On fuzzy modeling using fuzzy neural networks with the back-propagation algorithm. IEEE Trans. Neural Netw. 1992, 3, 801–806.
  24. Quan, T.; Yuan, Y.; Zhou, B. Neural network fuzzy inference inner fusion tracking. Control Theory Appl. 1997, 14, 697–701.
  25. Zhang, Z.; Sun, C.; Mizutani, H. Neuro-Fuzzy and Soft Computing; Zhang, P.; Gao, C., Translators; Xi’an Jiaotong University Press: Xi’an, China, 2000.
  26. Seminar Album Fan. Control Intelligence; Tohoku University: Tokyo, Japan, 1995.
  27. Kosko, B. Neural Networks and Fuzzy Systems; Prentice Hall: Englewood Cliffs, NJ, USA, 1992.
  28. Dempster, A.P. Upper and lower probabilities induced by a multivalued mapping. Ann. Math. Stat. 1967, 38, 325–339.
  29. Shafer, G. A Mathematical Theory of Evidence; Princeton University Press: Princeton, NJ, USA, 1976.
  30. Pan, Z.Z. Shafer–Dempster method for multi-sensor information fusion. Firepower Command 1995, 3, 12–16.
  31. Liu, M.T. Surface target recognition based on evidence theory, fuzzy inference and multisensor information fusion. J. Softw. 1999, 10, 277–282.
  32. Xu, C.; Geng, W.; Pan, Y. Total number of D-S methods for data fusion. J. Electron. 2001, 29, 393–396.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

