Quantifying Aristotle’s Fallacies

Abstract: Fallacies are logically false statements which are often considered to be true. In the "Sophistical Refutations", the last of his six works on Logic, Aristotle identified the first thirteen of today's many known fallacies and divided them into linguistic and non-linguistic ones. A serious problem with fallacies is that, due to their bivalent texture, they can under certain conditions disorient the nonexpert. It is, therefore, very useful to quantify each fallacy by determining the "gravity" of its consequences. This is the target of the present work, where for historical and practical reasons (the fallacies are too many to deal with all of them) our attention is restricted to Aristotle's fallacies only. However, the tools (Probability, Statistics and Fuzzy Logic) and the methods that we use for quantifying Aristotle's fallacies could also be used for quantifying any other fallacy, which gives the required generality to our study.


Introduction
Fallacies are logically false statements that are often considered to be true. The first fallacies appeared in the literature simultaneously with the generation of Aristotle's bivalent Logic. In the "Sophistical Refutations" (Sophistici Elenchi), the last of the six works on logic that his followers, the Peripatetics, collectively named the "Organon" (Instrument), the great ancient Greek philosopher identified thirteen fallacies and divided them into two categories, the linguistic and the non-linguistic fallacies [1].
The research on logical fallacies was reanimated during the later Middle Ages (1300-1500 AD) with the establishment of the first Universities, where the study of Aristotle's Logic was one of the first priorities. Many of the now-existing fallacies took their Latin names at that time. A long period of reduced interest on the subject followed. However, after the end of the Second World War, and in particular after 1970, the interest in fallacies was renewed with the addition of the cognitive biases (prejudices) and other distortions of logic. In addition to Aristotle's fallacies, many other fallacies are known today. A list of the most important of them is given in [2], while many fallacies are analyzed in the first of the present authors' books [3].
Because of their variety of structure and applications, it is difficult to classify the fallacies in a way that satisfies all practitioners. A standard approach classifies them, according to their structure or content, into formal fallacies (whose deductive arguments can be shown to be invalid) and informal fallacies [4].
Another big problem with fallacies is that, due to their bivalent texture, they can under certain conditions disorient the nonexpert. This explains the frequent use of fallacies as rhetorical devices in the desire to persuade, when the focus is more on communicating and gaining common agreement than on the correctness of reasoning. It is, therefore, useful to quantify the fallacies by determining the "gravity" of their consequences. In the present work, our attention is focused on Aristotle's fallacies, but the tools and the methods used for quantifying them could also be used for any other logical fallacy.
The rest of the article is organized as follows: In Section 2 Aristotle's thirteen fallacies are presented and described in brief, while their quantification is attempted in Section 3. Representative examples are also presented. A general discussion follows in Section 4, and the article closes with the final conclusions, which are stated in Section 5.

Aristotle's Thirteen Fallacies
In his "Sophistical Refutations", Aristotle (384-322 BC) identified thirteen fallacies, or absurdities, as he used to call them. Aristotle divided those fallacies, which are presented here with their modern names, into two categories as follows.

Linguistic Fallacies

1.
Accent: The emphasis used within a statement gives a different meaning from that of the words alone. Example: "EAT your meal" (emphasis on "eat") means "Do not throw your meal away", whereas "Eat YOUR meal" (emphasis on "your") means "Do not eat the meal of the others".

2.
Amphiboly: A sentence having two different meanings. Example: Once the famous ancient Greek soothsayer Pythia, priestess of the god Apollo in Delphi, was asked by a king about the gender of the child that his wife was going to bear. Her answer was "Boy no girl", without any comma in the sentence. That answer admits two interpretations: "Boy, no girl" means a boy, whereas "Boy no, girl" means a girl.

3.
Figure of speech: The way of expressing something affects the listener or the public view. Example: "She swims like a mermaid" instead of simply saying "She is a good swimmer".

4.
Equivocation: When the perceived meaning of a message is different from the one intended, that meaning is false. Example: Since only man [human] is rational and no woman is a man [male], no woman is rational.

5.
Composition: Generalizing from a few members to the whole set. Example: Oxygen and hydrogen are gases at room temperature. Therefore, water (H2O) is a gas at room temperature.

6.
Division: Assuming that the parts have the characteristics of the whole. Example: You are working at IBM, which constructs computers. Therefore, you can construct computers.

Non-Linguistic Fallacies

7.
Ignorance of refutation: An argument is given from which a perfectly valid and sound conclusion may be drawn, yet the stated conclusion is something else (irrelevant conclusion). Example: There has been an increase in criminality in the area. This must be due to the fact that more people are moving into the area.

8.
Many questions: Confusion is caused when many different questions are asked at once. This could lead the listener to stumble through a proper answer, allowing the speaker to continue answering the question in the way he wants. The effect is multiplied if unrelated questions are asked, as the listener not only tries to remember them, but also to make sense of the relationship between them. Examples: What do you want me to do, how frequently, and what is your opinion about my past activities? Is Mary wearing a blue, a white, a red, or a yellow dress?

9.
Unqualified generalizations (Accident): A general rule is used to explain a specific case that does not fall under this rule. Example: Football is a very popular game; therefore, you must like it.

10.
Hasty generalizations: Assuming that something is true in general, because it happens to be true in certain cases. Example: This school has in total 20 teachers. I know that two of them are very good teachers; therefore, the school is a good one.
11.
Wicked circle: Circular reasoning used to prove the assumed premise; hence, the premise is not really proven by the argument. Example: God exists because the Bible says so, and the Bible is true because it comes from God.

12.
False cause: A causal relationship is asserted between two facts without proof that it actually exists. Example: Money makes people happy (not all people, and not always just money).

13.
Affirming the consequent: This assumes that, given an "if A then B" argument, you can also invert it (false inversion). The A part of such a statement is called the "antecedent" and the B part the "consequent". Example: All cats have four feet; therefore, all the animals having four feet are cats.

A Modern Classification of Aristotle's Non-Linguistic Fallacies
On the basis of present-day knowledge, the last three of Aristotle's fallacies (11-13) can be characterized as fallacies of cause and effect. Those fallacies reflect the essence of Aristotle's philosophy, who argued that individuals have free will and are therefore responsible for their actions. This is imprinted in the Latin phrase "Causarum Cognitio" ("knowledge of the causes"), which appears in the upper part of Raphael's fresco in the Vatican representing the "School of Athens".
Later on, these three of Aristotle's fallacies gave rise to a total of five fallacies of cause and effect. In fact, the fallacy of "begging the question" is a special case of the wicked circle (no. 11). For example: "The store is closed today, because it is not open". This statement is a tautology, which does not actually explain anything.
Additionally, the fallacy of the false cause (no. 12) can be analyzed into two fallacies, that of "simultaneous events" and that of "irrelevant correlations". An example of the former fallacy is connected to the following anecdote: A policeman is watching a fellow who drops a white powder all around Syntagma Square, the central square of Athens, the capital of Greece. Full of curiosity, the policeman asks the fellow: "Why are you dropping the powder all around the square?" and receives the answer: "To eliminate the elephants!" The policeman starts laughing and replies: "However, there are no elephants in Syntagma Square!" The answer he receives, however, is completely confounding: "Yes, you can see how effective the powder is!" That is, this fallacy exploits the time order between cause and effect, combining causes and effects which could be irrelevant to each other.
One of the most characteristic examples of the latter fallacy is probably the correlation between the parallel decrease in the number of births and in the number of storks in Germany. In fact, during the period 1965-1987, the curves of the evolution of those two phenomena were almost identical [5]. However, this accidental coincidence does not mean that the storks bring the babies! It is impressive that two of Aristotle's non-linguistic fallacies, namely 9 and 10, are connected to Statistics, which was a completely unknown topic at that time. Statistics, when used in a misleading way, can lead the observer to believe something different from what the data show. This is called a statistical fallacy. Apart from 9 and 10, several other statistical fallacies have been studied more recently than the time of Aristotle; for more details, see Section 3.5.
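The "irrelevant correlation" can be made concrete with a small computation: two series that both happen to decline over time correlate strongly even when no causal link exists between them. The figures below are synthetic illustrative data, not the actual German records cited in [5].

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Two independently caused declining trends (synthetic numbers):
storks = [120, 110, 95, 90, 80, 70, 60]   # breeding pairs per year
births = [900, 850, 760, 700, 640, 560, 500]  # births per year

# The correlation is very high, yet storks do not bring the babies.
print(round(pearson(storks, births), 3))
```

The high coefficient only reflects a shared downward trend; attributing causation to it is precisely the fallacy of irrelevant correlations.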

Quantification of Aristotle's Fallacies
Although the fallacies have been identified on the basis of the principles of bivalent logic, the information provided by this logic about the gravity of their consequences is very poor. In fact, the inference about those consequences in terms of bivalent logic can be characterized only as true or false. This information, however, is almost useless in practice, where one wants to know the degree of truth of that inference.
In certain cases, this can be achieved with the help of Probability and Statistics. Edwin T. Jaynes (1922-1998), Professor of Physics at Washington University in St. Louis, argued that Probability theory can be considered a generalization of bivalent logic, which reduces to the latter in the special case where our hypothesis is either absolutely true or absolutely false [6]. Many eminent scientists have been inspired by the ideas of Jaynes, like the expert in Algebraic Geometry David Mumford, who believes that Probability and Statistics are emerging as a better way of building scientific models [7]. Probability and Statistics are related mathematical topics with, however, fundamental differences. In fact, Probability is a branch of theoretical mathematics dealing with the estimation of the likelihood of future events, whereas Statistics is an applied branch, which tries to make sense of data by analyzing the frequencies of past events.
Nevertheless, both Probability and Statistics have been developed on the basis of the principles of bivalent logic. As a result, they can effectively tackle only the cases of uncertainty in the real world that are due to randomness, and not those due to imprecision [8]. For example, the expression "The probability that Mary is a clever person is 75%" means that Mary is, according to Aristotle's law of the excluded middle, either a clever person or not. Her profile (heredity, educational background, etc.), however, suggests that the probability of her being a clever person is high. The problem here is that no exact criterion (e.g., an IQ index) is available to the observer enabling him to decide definitely whether or not Mary is a clever person. In such cases Fuzzy Logic (FL), introduced during the 1970s [9], comes to bridge the existing gap.
Multi-valued logics, challenging the law of the excluded middle, had been systematically proposed earlier by Lukasiewicz (1878-1956) and Tarski, although their ideas can already be traced to the philosophical beliefs of the Ionian Greek philosopher Heraclitus (535-475 BC), who spoke about the "harmony of the opposites", and of Gautama Buddha, who lived in India around 500 BC. They were followed by Plato (427-347 BC), the teacher of Aristotle, and by several other, more recent, philosophers, like Hegel, Marx, Engels, etc. (see [10], Section 2). However, the electrical engineer of Iranian origin Lotfi Zadeh, Professor of Computer Science at the University of California, Berkeley, was the first to mathematically formulate the infinite-valued FL through the notion of the fuzzy set (FS), which assigns membership degrees (degrees of truth) in the real interval [0, 1] to all elements of the universal set [11].
Probabilities and membership degrees, although both defined in the same interval [0, 1], are essentially different from each other. For example, the expression "Mary's membership degree in the FS of the clever persons is 0.75" means that Mary is a rather clever person. In fact, all people belong to the FS of clever persons, with membership degrees varying from 0 (stupid) to 1 (genius)! A disadvantage of FL is that the definition of the membership function of a FS, although it must always be based on logical arguments, is not uniquely determined, depending instead on the observer's personal criteria and goals. This was the reason for a series of generalizations and related theories that followed the introduction of FL [12]. All those theories together form an effective framework for tackling all the forms of uncertainty existing in the real world and science, although none of them alone has been proved suitable for solving all the related problems. Statistical data or probability distributions can be used in certain cases to define membership degrees, but this is not the rule in general. This will become evident in the rest of the paper through our efforts to quantify the inferences of Aristotle's fallacies, starting from his non-linguistic fallacies.
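As a minimal sketch of how such a membership function might be defined for the fuzzy set of "clever persons", assuming a piecewise-linear shape over an IQ-like score (the cut-off values below are our own illustrative assumptions, not part of any standard):

```python
def membership_clever(score, low=80.0, high=140.0):
    """Degree of membership in the fuzzy set of 'clever persons'.

    A hypothetical piecewise-linear membership function: scores at or
    below `low` map to 0, scores at or above `high` map to 1, and
    scores in between are interpolated linearly. Another observer,
    with different criteria, could legitimately choose other cut-offs.
    """
    if score <= low:
        return 0.0
    if score >= high:
        return 1.0
    return (score - low) / (high - low)

# Every person receives some degree in [0, 1]; e.g. a score of 125
# yields a membership degree of 0.75 ("rather clever").
print(membership_clever(125))  # 0.75
```

The non-uniqueness noted in the text is visible here: changing `low` or `high` changes every membership degree, which is why the choice must always be justified by logical arguments.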

Statistical Fallacies
Assume that a high school employs 100 teachers in total. Three of them are not good, whereas the other 97 are good teachers. Parent A happens to know only the three not-good teachers. Based on this, he concludes that the school is not good, and he decides to choose another school for his child. On the contrary, parent B, who knows the 97 good teachers, concludes that the school is good and decides to choose it for his child.
In that case, parent A has fallen into fallacy no. 10 of hasty generalizations, whereas parent B has fallen into the fallacy no. 9 of unqualified generalizations. It becomes evident, however, that the gravity of those two fallacies is not the same. In fact, the decision of parent A could jeopardize the future of his child, whereas the decision of parent B is very likely to benefit his child. Numerically speaking, the degree of truth of the first fallacy is only 3%, whereas the degree of truth of the second fallacy is 97%. Consequently, it is crucial for people to avoid hasty generalizations, but at the same time, they must be careful about unqualified generalizations. Those two fallacies must be examined simultaneously in order to make the right decision.
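Numerically, the degrees of truth above are simply frequency ratios over the known cases; a trivial sketch (the function name is ours, for illustration):

```python
def degree_of_truth(favorable, total):
    """Degree of truth of a generalization, read as the fraction of
    all cases that actually support it (a frequency-based estimate)."""
    return favorable / total

# Parent A generalizes from the 3 not-good teachers: the claim
# "the school is not good" has degree of truth 3/100 = 0.03.
print(degree_of_truth(3, 100))   # 0.03
# Parent B generalizes from the 97 good teachers: degree 0.97.
print(degree_of_truth(97, 100))  # 0.97
```

The asymmetry of the two ratios is exactly what bivalent logic hides when it labels both conclusions merely "false".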
The cultivation of statistical literacy is very important, but it alone is not enough; it must be combined with critical thinking. The great ancient Greek philosopher Socrates (470-399 BC), in his dialogue with Euthydemus (written by his student Plato in 384 BC, i.e., the year of Aristotle's birth), tacitly exploited the dicto simpliciter to give the following important example of the importance of critical thinking in decision-making.
Socrates asked his friend Euthydemus if he thinks that cheating is immoral. Of course it is, answered Euthydemus. However, what happens, replied Socrates, if your friend, feeling terrible, wants to commit suicide and you steal his knife? There is no doubt that you cheat him in that case, but is this immoral? No, said the embarrassed Euthydemus [13]. Here Euthydemus followed the statistical way of thinking, since in most cases cheating is considered to be an immoral action. Socrates, however, taught him to combine it with critical thinking. It is recalled that critical thinking is considered to be a higher mode of thinking by which the individual transcends his subjective self in order to arrive rationally at conclusions substantiated using valid information (see [14], Section 3). Through critical thinking, reasoning skills such as analysis, synthesis and evaluation are combined, giving rise to other skills like inferring, estimating, predicting, generalizing, problem solving, etc. [15].
Note also that the dialogue of Socrates with Euthydemus indirectly introduces the fallacy of "the end justifies the means". In Socrates' example, the stealing of the knife (the means) was moral, since it could save a life (the end). Stealing for one's own profit, however, is an immoral action. The nature of the fallacies of morality in general requires the help of FL for their quantification [16].
Let us now transfer the dialogue of Socrates with Euthydemus to the previous case of the two parents. Imagine that Socrates (if he were alive today) met parent B downtown and asked him: "If your child has a particular interest in the lessons taught by the three bad teachers and is not interested in the lessons taught by the 97 good teachers, is your decision to choose this school right for his future?" After this, parent B becomes puzzled and thinks that he should reconsider his decision after discussing it with his child.
In conclusion, these two of Aristotle's statistical fallacies are connected to the error created by inductive reasoning [17]. Therefore, by quantifying the gravity of those fallacies, one actually quantifies the inductive error. Nevertheless, the error of the dicto simpliciter is much smaller than that of the secundum quid, so that many people consider the former as not actually being a fallacy. On the contrary, the latter is a serious fallacy caused by the lack of statistical literacy and must be avoided in all cases.

Fallacies of Cause and Effect
A usual characterization of human affairs and relationships is the distinction between cause and effect, with the former always preceding the latter in time. Consequently, the study of the fallacies of cause and effect is of particular interest.
Concerning Aristotle's fallacies that fall into this category, the wicked circle (no. 11) is a wily fallacy, the degree of truth of which is very often indeterminate, or at least very difficult to determine. For example, to determine statistically the degree of truth of the argument "I am not a liar", it is not enough to know if I am usually telling the truth or not. In fact, in that particular moment I could be under pressure to confirm something. Therefore, there is a need to look for another way, within the premises of FL, to determine the degree of truth in this case. Furthermore, in cases of tautology, like "The store is closed today, because it is not open", there is no information at all about the reason for which the store is closed. Consequently, in such cases the degree of truth cannot be determined.
The false cause (no. 12) was categorized in Section 2.3 into the fallacies of irrelevant correlations and of simultaneous events. The degree of truth in the former case is obviously zero, since no relation exists between the cause (e.g., the storks) and the effect (e.g., the births). In the latter case, Statistics could possibly help in calculating the degree of truth. In the anecdote with the elephants, for example (see Section 2.3), one could bring an elephant to the square to stand on the white powder and observe whether or not it goes away. This could be repeated several times in order to draw conclusions about the effectiveness or otherwise of the white powder. A similar procedure is usually followed for testing the effectiveness of a new medicine.
In other cases, however, things are more complicated. Consider, for example, the case of an experimental school, where a continuous selection of both teachers and students is made, and everyone with non-satisfactory performance is replaced. The good teachers increase the level and interest of the students; therefore, student demand also increases. This forces the teachers to improve their teaching methods even more, which causes a further improvement of the students, and so on. Finally, why is this school a good school? Because it has good students or because it has good teachers? In other words, which is the cause and which is the effect? It is almost impossible to give a definite answer to this question.
The general form of the fallacy of false inversion (no. 13) is: "If A then B" implies that "If B then A", where A = the cause and B = the effect. To quantify this fallacy, a shift is needed from the Aristotelian logic to Bayesian Reasoning, because its degree of truth is equal to the conditional probability P(A/B). Bayes' formula [18] gives that

P(A/B) = P(B/A)P(A)/P(B)    (1)

The fallacy of false inversion is also connected to the credibility of medical tests. Assume, for example, that Mr. X lives in a city where 2% of the inhabitants have been infected by a dangerous virus. Mr. X does a test for the virus, whose statistical accuracy is 95%. The test is positive. What is the probability of Mr. X being a carrier of the virus?
To answer this question, let us consider the events A = "The subject is a carrier of the virus" and B = "The test is positive". According to the given data, we have that P(A) = 0.02 and P(B/A) = 0.95. Furthermore, assuming that 100 inhabitants of the city take the test, we should have on average 2 × 95% = 1.9 positive results from the carriers and 98 × 5% = 4.9 from the non-carriers of the virus. Therefore, P(B) = 6.8/100 = 0.068. Replacing the values of those probabilities in Equation (1), one finds P(A/B) ≈ 0.2794. Therefore, the probability of Mr. X being a carrier of the virus is only 27.94%, and not 95%, as a rough first estimation might suggest! It is worth noting that the only information given within the premises of bivalent logic about this fallacy is that the inversion between cause and effect is false, or, otherwise, that the conditional probability P(A/B) is not equal to 1. However, this information is useless in practice, when one wants to know "what is" (via positiva) and not "what is not" (via negativa). The latter, for example, is a method that has been followed by religion: failing to define "what God is", it was decided to define instead "what God is not" (the cataphatic and apophatic theologies), which is much easier.
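The computation above can be packaged as a short script. The function name is ours; following the worked example, we read the 95% accuracy as applying to both carriers and non-carriers, i.e., a 5% false-positive rate:

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(carrier | positive) via Bayes' rule (Equation (1)).

    P(B) is expanded by the law of total probability over carriers
    and non-carriers, as in the worked example in the text:
    P(B) = P(B/A)P(A) + P(B/not A)P(not A).
    """
    p_b = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_b

# 2% prevalence, 95% sensitivity, 5% false-positive rate.
p = posterior(prior=0.02, sensitivity=0.95, false_positive_rate=0.05)
print(round(p, 4))  # 0.2794
```

The result 0.2794 reproduces the 27.94% of the text, far below the naive 95% estimate.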
From the beginning of the 19th century, several researchers in the area of bivalent logic (Bentham, Hamilton, De Morgan, Frege, etc.), in their effort to improve the quality of bivalent inferences, introduced the universal (∀) and the existential (∃) quantifiers. In this way, the false inversion becomes valid by saying, for example, "There exist animals with four feet which are cats", or "Some of the brain mechanisms are Bayesian, but it has not been proved that all of them (even the cognitive ones) are"; however, the information given by such a modified expression still remains very poor.

Aristotle's Other Non-Linguistic Fallacies
The ignorance of refutation (no. 7) is a completely false fallacy, i.e., its degree of truth is zero. It is frequently used when one wants to change the subject of the discourse. For example, a member of the government answers the remark that many people in the country are living below the poverty line as follows: "We have increased the unemployment allowance by 25%, the allowance for disabled people by 8.6%, the allowance for widows by 10%, we have provided an allowance of 200 euros for the first child, etc.".
For the fallacy of many questions (no. 8), one has to determine all the existing choices, assign coefficients of gravity to each of them, and then try to combine them in a suitable criterion that makes the proper decision possible. FL could help towards this, although in practice it faces many difficulties.

Linguistic Fallacies
Aristotle's linguistic fallacies (1-6) are characterized by imprecision or by complete vagueness. As a result, Probability and Statistics cannot usually help to quantify their degree of truth. In such cases, FL and/or theories related to it are frequently appropriate tools for this purpose.
More explicitly, in the case of the fallacy of accent (no. 1), one has to find a proper way, probably with the help of computers, to measure the intensity of each word in the corresponding expression and then to identify the word with the greatest intensity (e.g., "EAT your meal") in order to understand the true meaning of the expression.
Something similar happens with amphiboly (no. 2). In the case of written speech, one has to identify the possible position of the missing comma, whereas in the case of oral speech one has to measure the mediating time between the words of the corresponding phrase in order to understand its correct meaning. For example, "Boy (t1) no (t2) girl" means boy when t1 > t2, but girl when t1 < t2.
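Assuming the pause lengths t1 and t2 can actually be measured, the disambiguation rule just described amounts to a one-line comparison (a sketch of the rule only, not a tested speech-processing method):

```python
def interpret(t1, t2):
    """Disambiguate "Boy no girl" from the pause lengths (in seconds)
    between the spoken words, following the rule in the text:
    a longer pause after "Boy" (t1 > t2) reads as "Boy, no girl".
    """
    return "boy" if t1 > t2 else "girl"

print(interpret(0.8, 0.2))  # boy   ("Boy, no girl")
print(interpret(0.2, 0.8))  # girl  ("Boy no, girl")
```

A real system would of course need a tolerance band for the case t1 ≈ t2, where the utterance remains genuinely ambiguous.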
Furthermore, due to the nature of the fallacy of the figure of speech (no. 3), it becomes evident that its degree of truth cannot be determined. Additionally, the degree of truth of the equivocation (no. 4) is zero, because of the double meaning of the crucial word contained in the corresponding expression (e.g., man [human] and man [male] in the example of Section 2.1).
To quantify the fallacy of composition (no. 5), one has to examine the influence of all the components on the final result. In the case of water (see Section 2.1), for example, the degree of truth is zero, but this is not always the case. Assume, for instance, that an orchestra consists of excellent musicians (each one on his own instrument). If, however, the coordination among the musicians is not of the required level, the orchestra might not be as good as expected.
Finally, to quantify the fallacy of division (no. 6), one has to examine the characteristics of the part without taking into account the characteristics of the whole. For example, the fact that a person is working at IBM (Section 2.1) does not guarantee that they are able to construct computers. However, Statistics could help in that case, if one knows the percentage of those working at IBM that are able to construct computers.

Other Fallacies
As mentioned in our Introduction, apart from Aristotle's thirteen fallacies, many other fallacies have been studied since. Among them, several statistical fallacies are known, such as sampling bias, data dredging, survivorship bias, cherry picking, the gambler's fallacy, the regression toward the mean, the thought-terminating cliché, etc. [19]. Frequently, statistical fallacies are characterized by lack of critical thinking.
Cognitive biases are another group of fallacies frequently characterized by a lack of statistical literacy. A cognitive bias is defined as an unreasonable attitude that is unusually resistant to rational influence. Examples of cognitive biases include racism, nationalism, sexism, and religious, linguistic, sexual or neurological discrimination [20]. The Israeli psychologist Daniel Kahneman, winner of the Nobel prize in Economics (2002), together with his collaborator Amos Tversky, contributed significantly to the study of the cognitive biases related to Economics [21]. The fact that Kahneman, a psychologist, was awarded the Nobel prize in Economics emphasizes the useful role of psychology in quantifying the cognitive fallacies and the fuzziness of human reasoning.
In general, many sources of fuzziness exist in real life, creating several types of fallacies; consider, for example, all the adjectives and adverbs of natural language. There is obviously a need to determine the gravity of the consequences of all those fallacies in a way analogous to Aristotle's fallacies, which is a good proposal for future research.

Discussion
The quantification of fallacies is very important in everyday life, where people want to know not simply whether something is true or false, but the actual degree of its truth. Nevertheless, as has been illustrated by the present study, the latter cannot always be achieved with the help of bivalent logic. One could think about the role of logic in such cases in terms of a new plot of land: the plot has to be fenced first (bivalent logic), and then one can watch what happens inside it (FL).
FL does not oppose bivalent logic; on the contrary it extends and complements it [22][23][24]. The fact that FL sometimes uses statistical data or probability distributions to define membership degrees does not mean that it "steals" ideas and methods from those topics. As we saw in Section 3.1, probabilities and membership degrees are completely different concepts. In addition, FL frequently uses other innovative techniques, like linguistic variables, the calculus of fuzzy if-then rules, etc.
In an earlier work [17], we provided full evidence that scientific progress is due to the collaboration of these two equally valuable types of logic. This collaboration is expressed in everyday life by the method of trial and error and in the human way of thinking through inductive and deductive reasoning. Inductive reasoning always precedes the tracing of a new scientific idea, while deductive reasoning only guarantees the validity and correctness of the corresponding theory on the basis of the axioms on which it has been built. In addition, whereas deduction is purely based on the principles of bivalent logic, FL, rejecting the principle of the excluded middle, marks out the real value of induction, which is disregarded by bivalent logic.
Another important point that was illustrated in Section 3.2 of the present study is the essential role of the conditional probabilities in quantifying the fallacies of cause and effect. Bayes' rule (Equation (1)) connects the conditional probability P(A/B) to the time-inverse conditional probability P(B/A) in terms of the prior probability P(A) and the total probability P(B). Thus, by changing the value of the prior probability P(A), one obtains different values for the posterior probability P(A/B), which in this study represents the degree of truth of the corresponding fallacy.
The amazing thing, however, is that, although probabilities in general and conditional probabilities in particular have been defined and developed on the basis of the principles of bivalent logic, the change of the values of the prior probability P(A) provides multiple values for the conditional probability P(A/B), introducing in this way a multi-valued logic! Consequently, one could argue that the conditional probabilities-often called Bayesian probabilities as well-constitute an interface between bivalent and fuzzy logic.
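This multi-valued behaviour can be seen by sweeping the prior in the medical-test example of Section 3.2 (the function name is ours; the 95% sensitivity and 5% false-positive rate are kept fixed purely for illustration):

```python
def posterior_for_prior(prior, sensitivity=0.95, false_positive_rate=0.05):
    """P(A/B) from Bayes' rule (Equation (1)) for a given prior P(A),
    with the test characteristics of the medical-test example."""
    p_b = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_b

# Sweeping the prior traces a continuum of degrees of truth in (0, 1):
# the single bivalent verdict "the inversion is false" becomes a whole
# spectrum of posterior values.
for p_a in (0.01, 0.02, 0.10, 0.50):
    print(p_a, round(posterior_for_prior(p_a), 4))
```

Each prior yields a different posterior, which is exactly the sense in which Bayesian probabilities behave like a multi-valued logic.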
At first glance, Bayes' rule is an immediate consequence of the definition of conditional probability. In fact, we have that P(A/B) = P(A∩B)/P(B) and P(B/A) = P(A∩B)/P(A), whence P(A/B)P(B) = P(B/A)P(A), which gives Equation (1). However, the consequences of this simple rule have proved to be very important for all of science, while recent research gives evidence that even the mechanisms under which the human brain works are Bayesian [18,25]! As seen in [17], the validation of any scientific theory T can be expressed by a deductive argument of the form "If H, then T", where H represents the premises of T (observations, intuitive conclusions, axioms on which T has been built, etc.), which have been obtained by inductive reasoning. Therefore, the inductive error is transferred through H to the deductive argument. Consequently, the conditional probability P(T/H) expresses the degree of truth of the theory T. Thus, the characterization of the Bayesian rule as the "Pythagorean Theorem of Probability Theory" by Sir Harold Jeffreys, the British mathematician who played an important role in the revival of the Bayesian view of probability [26], is fully justified.

Conclusions
The highlights of the present work can be summarized as follows:

•
A deep wisdom must be attributed to Aristotle for introducing the logical fallacies. The description of his statistical fallacies is particularly impressive, because at that time Statistics was a completely unknown concept.

•
Aristotle's fallacies, and all the other fallacies of bivalent logic, contain very poor information about the gravity of their consequences, which can be enriched by statistical and critical thinking, as some textbooks in logic suggest (e.g., [27]).

•
Probability and Statistics are able to quantify, i.e., to calculate the degree of truth of, the statistical fallacies and the fallacies of cause and effect. The Bayesian probabilities in particular, which have proved to be very important for all of science and human cognition, play an essential role in quantifying the fallacies of cause and effect. The fuzziness of the linguistic fallacies, however, cannot be handled by probabilistic and statistical methods. In fact, innovative methods of FL, like the use of linguistic variables, the calculus of fuzzy if-then rules, etc., must be used to quantify those fallacies. It is worth noting that in certain cases (e.g., figures of speech) the degree of truth of the corresponding fallacy is indeterminate.

•
The fact that FL sometimes uses statistical data or probability distributions to define membership degrees does not mean that it "steals" ideas and methods from those topics. In fact, although probabilities and membership degrees are defined in the same interval [0, 1], they are completely different concepts. FL does not oppose bivalent logic; on the contrary, it extends and complements it. The whole of human scientific progress is due to the collaboration of these two types of logic.
Author Contributions: E.A., methodology, formal analysis, resources, visualization; M.G.V., writing-original draft preparation, conceptualization, resources, data curation, visualization. All authors have read and agreed to the published version of the manuscript.