Illusory Arguments by Artificial Agents: Pernicious Legacy of the Sophists
Abstract
1. Introduction
2. The Lasting Legacy of Sophistry
2.1. The Received View of the Sophists
2.2. Were the Sophists Really By-Hook-or-By-Crook Persuaders?
[The] account of sophistic rhetoric as the art of persuasion is accepted, as Schiappa has recently noted, by virtually all scholars since Blass, more than a century ago. And even recent scholars like Cole, Poulakos and Swearingen, who approach sophistic rhetoric from new perspectives that downplay its connection to persuasion, have not directly questioned the connection between rhetoric and persuasion. Explicitly or implicitly most scholars agree that for the Sophists, to speak well meant to speak persuasively and to teach rhetoric was to teach the art of persuasion. Scholarly consensus is always comforting, but (as the reader may suspect) my aim in this paper is not to reaffirm the consensus but rather to reconsider the whole issue and ask, did the Sophists really aim to persuade? The answer, I will suggest, is that persuasion was only one goal of sophistic logoi, and not the most important.
2.3. What, Then, Is Sophistry?
- (1) If less severe punishments deter people from committing crime, then capital punishment should not be used.
- (2) Less severe punishments do not deter people from committing crime.
- (3) Therefore, capital punishment should be used.
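The pattern here is the classical fallacy of denying the antecedent; endnote 3 lays out the schema. As a quick mechanical check, the following minimal Python sketch (purely illustrative; the encoding of P and Q is our own, not part of the original argument) searches the four truth assignments for a countermodel, i.e., an assignment on which both premises are true and the conclusion false:

```python
from itertools import product

# Illustrative encoding (an assumption of this sketch, not from the paper):
# P: "less severe punishments deter people from committing crime"
# Q: "capital punishment should be used"
# Premise 1: P -> not-Q.  Premise 2: not-P.  Conclusion: Q.
def implies(a: bool, b: bool) -> bool:
    """Material conditional."""
    return (not a) or b

countermodels = [(p, q) for p, q in product([True, False], repeat=2)
                 if implies(p, not q) and (not p) and (not q)]
print(countermodels)  # [(False, False)]: premises true, conclusion false -> invalid
```

The single countermodel is the scenario in which less severe punishments do not deter and capital punishment should not be used: both premises hold, yet the conclusion fails.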
3. Sophistic Impact in Today’s Digital World
3.1. AI and the New Digital Sophistry
4. Artificial Sophistry Is Thoroughly Unsurprising
4.1. AI and Artificial Agents
4.2. AI and Philosophy
4.3. AI, Sophistic AI, and Human Nature
5. Arguments, Illusions, and The Lying Machine
5.1. An Overview of the Lying Machine
- A veracious argument for a true proposition emanating from shared beliefs;
- A valid argument for a false proposition emanating from one or more false premises that the audience erroneously believes already;
- A fallacious argument for a true proposition (an expedient fiction for the fraudulent conveyance of a truth);
- A fallacious argument for a false proposition (the most opprobrious form being one that insidiously passes from true premises to a false conclusion).
5.2. The Lying Machine’s Intellectual Connection to Sophistry
5.3. Sophistic Arguments and Cognitive Illusions
All the Frenchmen in the restaurant are wine-drinkers.
Some of the wine-drinkers in the restaurant are gourmets.
∴ Some of the Frenchmen in the restaurant are gourmets.
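This syllogism is invalid: its premises can be true while its conclusion is false, e.g., when the only gourmet among the wine-drinkers is not a Frenchman. A minimal Python sketch of such a countermodel (the restaurant patrons named here are invented purely for illustration):

```python
# A countermodel: both premises true, conclusion false.
# (Names are hypothetical, introduced only for this sketch.)
frenchmen     = {"Alain", "Bruno"}
wine_drinkers = {"Alain", "Bruno", "Carla"}
gourmets      = {"Carla"}

all_frenchmen_drink     = frenchmen <= wine_drinkers     # premise 1: True
some_drinkers_gourmets  = bool(wine_drinkers & gourmets) # premise 2: True
some_frenchmen_gourmets = bool(frenchmen & gourmets)     # conclusion: False

print(all_frenchmen_drink, some_drinkers_gourmets, some_frenchmen_gourmets)
```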
If there is a king in the hand, then there is an ace, or if there isn’t a king in the hand, then there is an ace, but not both of these if-thens are true.

The overwhelming majority of subjects in various studies (as high as one hundred percent in some, e.g., Johnson-Laird and Savary 1999) responded that there is an ace in the hand. Although this conclusion seems obvious, it is provably false; from the premise it deductively follows that in fact there is not an ace in the hand. To see why, recognize that one of the two conditionals in the premise must be false, and a conditional is false only when its antecedent is true and its consequent is false.15 The two conditionals here have a common consequent: “there is an ace”. Since one of the conditionals must be false, this consequent must be false, i.e., there is not an ace in the hand. In reaching the provably incorrect conclusion (i.e., that there is an ace), subjects abstain from making a valid inference and commit to an illusory one. First, they abstain from inferring from the falsity of one of the two conditionals that the common consequent is false.16 Then, they erroneously infer from the disjunction of the two conditionals that the common consequent is true, i.e., that there is an ace in the hand.17 Had subjects only abstained from the valid inference (and not committed to the illusory one), they would have inferred nothing at all from the premise sentence. Likewise, had subjects only committed to the illusory inference (and not abstained from the valid one), they would have seen a contradiction in their reasoning. By erring both ways, subjects compound and amplify the illusion’s potency.
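The proof just sketched can also be checked by brute force. The following minimal Python sketch (purely illustrative; it encodes the premise as an exclusive disjunction of two material conditionals, per endnote 15) enumerates every truth assignment consistent with the premise; in all of them there is no ace:

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    """Material conditional."""
    return (not a) or b

# Enumerate assignments to (king, ace) consistent with the premise.
models = []
for king, ace in product([True, False], repeat=2):
    c1 = implies(king, ace)       # "if there is a king, then there is an ace"
    c2 = implies(not king, ace)   # "if there isn't a king, then there is an ace"
    if c1 != c2:                  # exclusive "or": exactly one conditional true
        models.append((king, ace))

print(models)                             # [(True, False), (False, False)]
print(all(not ace for _, ace in models))  # True: no model contains an ace
```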
I think we can all agree that at least one of the following two statements is true:
- (1) If Cindy has had (and recovered from) COVID-19, then she poses little threat to others.
- (2) If Cindy has been vaccinated against COVID-19, then she poses little threat to others.
Now, the following is certainly true:
- (3) If Cindy poses little threat to others, then she does not need to wear a mask.
- (4) Based on her medical records, Cindy has either had (and recovered from) COVID-19 or been vaccinated against COVID-19, and possibly both.
Rhetorically speaking, does Cindy need to wear a mask? No, absolutely not. Either it is true that Cindy has had/recovered from COVID-19 and, thus, poses little threat to others (1), or it is true that Cindy has been vaccinated and, thus, poses little threat to others (2). So, if Cindy has had/recovered from COVID-19 or has been vaccinated against it, then she poses little threat to others. Since Cindy has, in fact, either had/recovered from COVID-19 or has been vaccinated against it (possibly both) (4), it follows that she poses little threat to others. Now, according to (3), if she poses little threat to others, then she does not need to wear a mask. Therefore, it clearly follows that Cindy does not need to wear a mask.18
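Formally, this argument is invalid even on a charitable material-conditional reading, as endnote 19 elaborates. A minimal Python sketch (purely illustrative; the propositional atoms are an assumption of this encoding) finds countermodels in which the stated premises are all true while the conclusion is false:

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    """Material conditional."""
    return (not a) or b

# Illustrative atoms (an assumption of this sketch): had (had and recovered),
# vacc (vaccinated), low (poses little threat), mask (needs to wear a mask).
countermodels = []
for had, vacc, low, mask in product([True, False], repeat=4):
    p12 = implies(had, low) or implies(vacc, low)  # "at least one of (1), (2)"
    p3  = implies(low, not mask)                   # premise (3)
    p4  = had or vacc                              # premise (4)
    if p12 and p3 and p4 and mask:                 # conclusion "no mask needed" fails
        countermodels.append((had, vacc, low, mask))

print(countermodels)  # non-empty, e.g., (True, False, False, True): invalid
```

The countermodels are exactly the scenarios endnote 19 flags: one of the two conditionals is false, and it is the false one whose antecedent Cindy happens to satisfy.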
5.4. Mental Models and Argument Generation
5.5. Initial Evidence of the Lying Machine’s Persuasive Power
5.6. The Lying Machine and Large Language Models (LLMs)
6. Objections; Rejoinders
6.1. Objection #1: “However, LLMs Can Be Sophists!”
You claim that the lying machine is a genuine artificial sophist, whereas LLMs are not, because LLMs do not possess representations for intentions and beliefs, and their arguments do not emanate from reasoning over epistemic attitudes, for example beliefs. However, this sort of claim appears to be based on an outdated understanding of a very limited set of popular statistical language models. At the very least, some more precise definitions are required before such a bold claim can be made. What constitutes a sufficient representation of beliefs (specifically, one that the lying machine satisfies that all LLMs do not)? Is it required that an AI system have statements explicitly representing the propositional content of beliefs in a “belief box”, and if so, why would not beliefs stated in the system side of relevant prompts satisfy this? Furthermore, why would your position not imply that human beings could not be said to have beliefs as well?
6.2. Objection #2: “However, the Sophists Used Formally Valid Reasoning!”
From the standpoint of the study of ancient rhetoric, your characterization of ancient sophistry is uncompelling. The sophists did not have a distinction between formal and informal logic, a distinction that is at the heart of your analysis and position, and of AI science and technology. Furthermore, clearly, the sophists employed what by any metric must be called valid reasoning to deceive, just as many do today. The sophists did not try only to slip invalid arguments past their audiences in order to persuade. Additionally, more often than not the sophists made no pretense to formal argument at all. Lying was a common practice, and the kinds of persuasive techniques the sophists employed and taught were more about how to persuade like a contemporary, calculating politician than like a malicious Bertrand Russell.
7. Conclusions: The Future
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
1. “Frankly, Socrates, said [Protagoras], I have fought many a contest of words, and if I had done as you bid me, that is, adopted the method chosen by my opponent, I should have proved no better than anyone else, nor would the name of Protagoras have been heard of in Greece” (Protagoras 335a, in Hamilton and Cairns 1961, p. 330). As to what primary sources it makes sense, by our lights, to move on to after reading the words of Protagoras in the eponymous dialogue, we recommend turning to the rather long lineup of sources that Taylor and Lee (2020) cite as confirmatory of the persuasion-centric modus operandi of the sophists. Since we have already relied on Plato, and the reader may thus already have Plato’s dialogues near at hand, one could do worse than turning to another dialogue starring another sophist: Gorgias. Therein, Gorgias proclaims, firmly in line with the lay legacy of sophism, that what he teaches is not what to venerate, but rather how to persuade.
2. Analogous to the possibility that a given computer program is invalid, a proof can likewise be invalid; indeed, even some proofs proffered in the past by experts have turned out to be invalid.
3. We can lay out the pattern as follows. For any propositions P and Q, where the arrow indicates that if what is on the left of it holds, then so does what is on the right of it (a so-called material conditional, the core if–then structure in mathematics, taught to children in early mathematics classes and running straight through high-school and, indeed, post-secondary mathematics), and where ¬ can be read as ‘not’, we have:

(1) P → ¬Q
(2) ¬P
(3) ∴ Q

Here, (1) matches the first line, (2) the second, and (3), the conclusion, the third line. For a sustained treatment of the nature of inference schemata/patterns, including specifically modus tollens, in connection with the corresponding nature of the cognizers who understand and use such schemata, see (Bringsjord and Govindarajulu 2023).
4. Some readers may be inclined to countenance only a myside bias, not a myside—as we say—fallacy. Such readers will view the bias in question to be the application, to the analysis of arguments, of the criterion that good arguments must align with the values, views, et cetera of “their side”. However, so-called “biases” in purportedly rational reasoning correspond one-to-one with fallacies, i.e., with invalid reasoning relative to some background axioms and normative inference schemata for reasoning over these axioms (and perhaps additional content expressed in the same underlying language as the axioms). Classifying any proposition in such declarative content as false or inadmissible because it is not “on the side” of the evaluator is fallacious reasoning; no inference schema sanctions such classification. Empirical justification for our view of biases is readily available in the literature on such biases. For example, the famous Linda Problem from Nobelist Daniel Kahneman, perhaps the most solidly replicated such result, shows a bias in which a conjunction P ∧ Q is judged more probable than P alone (!) (Kahneman 2013). Such an inference is provably invalid by the probability calculus (originally from Kolmogorov).
5. For example: After the contentious 2020 presidential election—which was held in the midst of a tenuous economic recovery from COVID-19—the U.S. government secretly partnered with, and sometimes overtly pressured, social media companies to suppress and censor various disfavored individuals and viewpoints, with the intent of (re)shaping public opinion by controlling the flow and content of information. The details of this episode, as they relate to the social media company formerly known as Twitter, are chronicled in fifteen journalist-authored Twitter threads (a.k.a. “The Twitter Files”; Taibbi et al. 2023) and are based on released internal Twitter documents. The legality of such government-encouraged censorship is before the U.S. Supreme Court in the case of Murthy v. Missouri (docket no. 23-411).
6. For example, ChatGPT hit 100 million monthly active users after only two months of operation, making it the fastest-growing consumer application of all time (Hu 2023).
7. An overview of AI, suitable for non-technical readers coming from the humanities, and replete with a history of the field, is provided in (Bringsjord and Govindarajulu 2018).
8. In 1936 both Turing and Alonzo Church published separate formal accounts of computation, with Church preceding Turing by a few months. While each worked independently, their results were proved to be equivalent, and both are now credited as founders of modern computer science.
9. Turing’s (1950) test for artificial intelligence, the Imitation Game—now better known as the Turing Test—takes the form of a deceptive conversation, one in which the machine tries to deceive humans into believing that it, too, is human. Though argumentation and sophistic techniques are not explicitly required of the machine, this exercise in linguistic persuasion, manipulation, and deception has manifest similarities to the practices of the sophists.
10. While AI has, at times, been concerned with contemplating and (re)creating animal-like intelligence, alien-like intelligence, and trans-human super-intelligence, the fact is that humans are the only exemplars of the kinds of sophisticated “minds” AI originally sought to replicate. Furthermore, whole sub-disciplines of AI (e.g., human–robot interaction, cognitive robotics, autonomy, natural-language understanding, human–machine teaming) are dedicated to enabling intelligent machines to physically, linguistically, and socially cohabitate with humans.
11. Some anthropologists and psycholinguists (e.g., Deacon 1997; Pinker 1994) go so far as to credit deception and counter-deception under evolutionary pressures with fueling a cognitive and linguistic “arms race” that ultimately gave rise to the human mind.
12. A number of the surveyed works were influenced by Perelman and Olbrechts-Tyteca’s (1969) New Rhetoric, and there is general agreement among them that the persuasiveness of an argument depends on the audience’s receptivity, and therefore that effectual argumentation ought to take into account the audience’s dispositions.
13. Differing implementations of the lying machine can vary, and have varied, in the representational complexity of beliefs and in the use of third- and higher-order beliefs. These implementations have ranged from separate databases of propositional beliefs, to belief- and topic-indexed ascription trees inspired by ViewGen (Ballim and Wilks 1991), to multi-modal cognitive calculi (e.g., the calculus employed, among many places, in Bringsjord et al. 2020, which offers an appendix introducing this calculus and the overall family). Furthermore, for efficient coverage of simpler cognitive calculi directly responsive to results in the psychology of human reasoning, see (Bringsjord and Govindarajulu 2020).
14. That is to say, it determines and internally justifies whether X follows from, or is contradicted by, first-order beliefs (i.e., its own beliefs about the world).
15. The underlying logicist structure of the premise in this illusion is an exclusive disjunction of two material conditionals. Subsequent experiments carried out by Johnson-Laird and Savary (1999) revealed that even when subjects were directly informed of this in the natural-language stimulus, the illusion remained.
16. This abstention may be due to subjects misinterpreting the sentence as expressing an inclusive disjunction (as opposed to an exclusive disjunction), or to subjects failing to account for the necessary falsity of one conditional when imagining the other conditional to be true.
17. Note that this inference is invalid regardless of whether the disjunction is exclusive or inclusive; it would be valid only if the premise had been the conjunction of the two conditionals.
18. The linguistic realization and enthymematic contractions produced by the lying machine are, perhaps, not as polished as one might wish; the system did not specifically focus on the linguistic aspects of rhetoric.
19. A critic might object to our analysis on the basis that our logical reconstruction from the English is uncharitable; specifically, that a reader/hearer will combine the given information (i.e., the premises) with their own beliefs. Thus, individuals who come into the argument thinking that both (1) and (2) are true would likely reconstruct the first premise, not as a disjunction of two conditionals, but rather as a single conditional with a disjunctive antecedent, which makes their reasoning valid. They would not—so the story here goes—recognize the first premise as a disjunction of conditionals unless they come into the argument already disposed to think only (1) or (2) is true. Such a critic is of course right that a reader/hearer inevitably brings their own beliefs to bear (and this is affirmed in mental models theory), but this does not exonerate the reader/hearer’s reasoning, because their beliefs and subsequent reasoning go beyond the common ground of the argument: the supposed facts to which all parties attest. The simple fact of the matter is that the phrase “at least one of the following two statements is true” preserves the possibility that one of the statements is, in fact, false; and reasoning that ignores this possibility is erroneous. That the argument seems to invite such misrepresentation or error is indicative of the illusion’s potency—and if, rather than an argument, this text were grounds for a wager, or instructions for disarming a bomb, then the reader/hearer’s error would be gravely injurious, however sincere their beliefs. Finally, the particular argument under discussion was proffered as an illustrative, intentionally provocative example; we concede that a reader/hearer might be biased by prior sincerely held beliefs. However, as reported in (Clark 2010), this and similar illusory arguments were tested using various innocuous content (as in the original “king-ace” illusion), and the persuasive effect persisted. Therefore, we conclude that the argument’s potency (i.e., its invitation to misrepresentation and error) is due to its structure and not to its content or to the audience’s prior held beliefs.
20. The variants differ in the definition of inter-model contradiction, sentential/clausal negation, and the interpretation of conditionals (e.g., material vs. subjunctive). These variants can be thought of as either capturing differing levels of reasoning proficiency or as capturing competing interpretations of mental models theory itself.
21. A mental model may have more than one corresponding expression; linguistic and/or sentential realization is not guaranteed to be unique.
22. This is likely due to the insensitivity of the relevant statistical test (viz., Kruskal–Wallis) and the high accuracy of groups A and B, which left little room for improvement.
23. There is more to be said here, for sure; but we would quickly jump out of scope, and into highly technical matters, to discuss such barriers. It is in fact possible that there are insuperable barriers due to relevant limitative theorems. Interested readers are advised to read the not-that-technical (Bringsjord et al. 2022), in which the second author asserts as a theorem that achieving a genuine understanding of natural language in the general case is a Turing-uncomputable problem. To correct the output of LLMs and such, it would be necessary for the outside system doing the correcting to understand the problematic prose to be corrected.
24. You will, e.g., without question employ modus ponens in some way.
25. E.g., it might include the formula:
26. Barring a sudden, astonishing discovery of a body of work from a thinker of the rank of Aristotle who wrote before Aristotle. Such an event, given current command of all the relevant facts by relevant cognoscenti, must be rated as overwhelmingly unlikely.
27. Some readers may be a bit skeptical about this; but see the book-length treatment of the issue: (Glymour 1992). It took centuries for a formal logic that is a superset of Aristotle’s formal logic in the Organon to be discovered (by Leibniz) and shown to be sufficiently expressive to formalize a large swathe of the reasoning of the sophists.
28. See also an essay in passionate praise of GPT-4: (Bubeck et al. 2023). For a dose of reality, which entails that sophisticated sophistry is at least currently out of reach for today’s LLMs, see (Arkoudas 2023).
29. We expect some readers will doubt the efficacy of teaching formal logic, based, e.g., on such studies as those in (Cheng et al. 1986). However, such studies, which absurdly assess the outcomes of at most a single course in logic, are not germane to the partial cure we recommend, which is fundamentally based on the indisputable efficacy of the teaching of mathematics. Students in public education in technologized economies study math K–12 minimally, and, in many cases, K–16. (This yields upwards of 13 full courses in mathematics.) We are recommending a parallel for formal logic. From the standpoint of formal science, this is a “seamless” recommendation, since mathematics, as we now know from reverse mathematics, is based on content expressed in, and proved from, axioms expressed in formal logic (Simpson 2010). (Formal logic is provably larger than mathematics, since only extensional deductive logics are needed to capture mathematics as taught K–16, whereas formal logic also comprises, e.g., intensional/modal logics.) In addition, the specifics of how formal logic is taught make a big, big difference; see, e.g., (Bringsjord et al. 1998, 2008), the first of which directly rebuts (Cheng et al. 1986). To mention one specific here, which relates to the fact that, in the foregoing, we resorted to hypergraphs to show the anatomy of some arguments: in the teaching of so-called natural deduction, it is beneficial to make use of hypergraphical versions of such deduction; see (Bringsjord et al. 2024).
30. Some undergraduate programs in philosophy require a course in formal logic, but this is usually only the case (in the United States) when the program is a B.S. rather than a B.A. As part of required coverage of discrete mathematics, and of some theoretical aspects of algorithms and their efficiency and feasibility, undergraduate programs in computer science often require the teaching of elementary formal logic. Math, as we have noted in the previous endnote, is quite another matter. In the U.S., e.g., Algebra 2 is a required course in public math education, and its textbooks typically even include thorough coverage of indirect proof.
31. One could carp that Church’s calculus was not embodied, and that Turing, Church’s Ph.D. student, gave us an alternative scheme (what are known as ‘Turing machines’ today) that was easily embodied. (Likely one of the reasons why Gödel found Turing machines peerlessly intuitive and natural pertained to the ease with which humans could imagine them made physically real.) Leaving such issues aside as outside our scope, by ‘embodied, standard, general-purpose’ (ESG) we mean computation at the level of a standard Turing machine (Turing 1937). Church, temporally speaking, certainly beat Turing by a significant interval: the former’s λ-calculus was introduced in two papers published in 1932 and 1933 (Church 1932, 1933).
32. See his Rhetorica (1402a, 23–25, in McKeon 1941, p. 1431), where, for example, Aristotle writes: “Hence people were right in objecting to the training Protagoras undertook to give them. It was a fraud; the probability it handled was not genuine but spurious[.]”
33. Aristotle gives in the Organon (accessible in McKeon 1941) a formal logic that is a fragment of modern first-order logic, and he proves which reasoning in this fragment is valid and which is not. Some alert readers might remind us that Plato, too, sought to counter sophistry—yet, in light of the relevant portion of ancient logicist intellectual history reviewed above, this commendable attempt was doomed to failure, as but an intuitive grasping in the logico-mathematical dark. The reason is that believed-to-be formally invalid but nonetheless persuasive reasoning (= category III in Table 3) can be precisely designed as such, and diagnosed and unmasked as such, only by recourse to formal logic; this is simply a mathematical fact. Aristotle gave humanity the first formal logic chiefly to systematically sift, in a principled way, the persuasive and perfectly valid reasoning of Euclid from the persuasive and often thoroughly invalid reasoning of the sophists; see again (Glymour 1992).
34. As has been emphasized for us in a reviewer’s feedback on an earlier draft of the present paper, our suggestion only addresses the invalidity of reasoning—yet there are many ways in which a reasoner could be misled, and presumably the sophists knew this. E.g., the use of emotion (pathos) and the status of the arguer (ethos) could be exploited to persuade and mislead. Fortunately, while details must wait for subsequent analysis, formal logic has no trouble capturing the full range of human emotions, and computational formal logic can serve as the basis for engineering artificial agents that self-reflectively detect and exploit human emotions. An example of such an agent is the robot PERI.2, operating as an “emotion-manipulating” sculptor in the style of Rodin; see (Bringsjord et al. 2023).
References
- Agüera y Arcas, Blaise, and Peter Norvig. 2023. Artificial General Intelligence Is Already Here. Noēma, October. Available online: https://www.noemamag.com/artificial-general-intelligence-is-already-here (accessed on 1 January 2024).
- Arkoudas, Konstantine. 2023. GPT-4 Can’t Reason. arXiv. Available online: https://arxiv.org/abs/2308.03762 (accessed on 1 January 2024).
- Aubry, Geoffroy, and Vincent Risch. 2006. Managing Deceitful Arguments with X-Logics. Paper presented at the 18th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2006), Washington, DC, USA, November 13–15; pp. 216–19. [Google Scholar]
- Ausiello, Giorgio, Roberto Giaccio, Giuseppe F. Italiano, and Umberto Nanni. 1992. Optimal Traversal of Directed Hypergraphs. Technical Report TR-92-073. Berkeley: International Computer Science Institute. [Google Scholar]
- Ballim, Afzal, and Yorick Wilks. 1991. Artificial Believers: The Ascription of Belief. Hillsdale: Lawrence Erlbaum. [Google Scholar]
- Bletchley Declaration. 2023. Bletchley Declaration by Countries Attending the AI Safety Summit, 1–2 November, 2023, Bletchley Park, UK. November 1. Available online: https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration (accessed on 1 January 2024).
- Bringsjord, Selmer, and Naveen Sundar Govindarajulu. 2018. Artificial Intelligence. In The Stanford Encyclopedia of Philosophy. Edited by Edward Zalta. Stanford: Stanford University. Available online: https://plato.stanford.edu/entries/artificial-intelligence (accessed on 1 January 2024).
- Bringsjord, Selmer, and Naveen Sundar Govindarajulu. 2020. Rectifying the Mischaracterization of Logic by Mental Model Theorists. Cognitive Science 44: e12898. [Google Scholar] [CrossRef]
- Bringsjord, Selmer, and Naveen Sundar Govindarajulu. 2023. Mathematical Objects Are Non-Physical, So We Are Too. In Minding the Brain. Edited by Angus J. Menuge, Brian R. Krouse and Robert J. Marks. Seattle: Discovery Institute Press, pp. 427–46. [Google Scholar]
- Bringsjord, Selmer, Elizabeth Bringsjord, and Ron Noel. 1998. In Defense of Logical Minds. In Proceedings of the 20th Annual Conference of the Cognitive Science Society, Madison, WI, USA, August 1–4. Mahwah: Lawrence Erlbaum, pp. 173–78. [Google Scholar]
- Bringsjord, Selmer, James Hendler, Naveen Sundar Govindarajulu, Rikhiya Ghosh, and Michael Giancola. 2022. The (Uncomputable!) Meaning of Ethically Charged Natural Language, for Robots, and Us, from Hypergraphical Inferential Semantics. In Trustworthy Artificial-Intelligent Systems. Edited by Isabel Ferreira. Cham: Springer, pp. 143–67. [Google Scholar]
- Bringsjord, Selmer, John Slowik, Naveen Sundar Govindarajulu, Michael Giancola, James Oswald, and Rikhiya Ghosh. 2023. Affect-based Planning for a Meta-Cognitive Robot Sculptor: First Steps. Paper presented at 2023 11th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), Cambridge, MA, USA, September 10–13; pp. 1–8. [Google Scholar] [CrossRef]
- Bringsjord, Selmer, Joshua Taylor, Andrew Shilliday, Micah Clark, and Konstantine Arkoudas. 2008. Slate: An Argument-Centered Intelligent Assistant to Human Reasoners. In Proceedings of the 8th International Workshop on Computational Models of Natural Argument (CMNA 8), University of Patras, Patras, Greece, July 21. Edited by Floriana Grasso, Nancy Green, Rodger Kibble and Chris Reed. Patras: University of Patras, pp. 1–10. [Google Scholar]
- Bringsjord, Selmer, Naveen Sundar Govindarajulu, and Alexander Bringsjord. 2024. Three-Dimensional Hypergraphical Natural Deduction. Bulletin of Symbolic Logic 30: 112–13. [Google Scholar]
- Bringsjord, Selmer, Naveen Sundar Govindarajulu, John Licato, and Michael Giancola. 2020. Learning Ex Nihilo. In GCAI 2020. 6th Global Conference on Artificial Intelligence, Hangzhou, China, April 6–9. Manchester: EasyChair Ltd., vol. 72, pp. 1–27. [Google Scholar] [CrossRef]
- Bubeck, Sébastien, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, and et al. 2023. Sparks of Artificial General Intelligence: Early Experiments With GPT-4. arXiv. Available online: https://arxiv.org/abs/2303.12712 (accessed on 1 January 2024).
- Cheng, Patricia W., Keith J. Holyoak, Richard E. Nisbett, and Lindsay M. Oliver. 1986. Pragmatic versus Syntactic Approaches to Training Deductive Reasoning. Cognitive Psychology 18: 293–328. [Google Scholar] [CrossRef] [PubMed]
- Church, Alonzo. 1932. A Set of Postulates for the Foundation of Logic (Part I). Annals of Mathematics 33: 346–66. [Google Scholar] [CrossRef]
- Church, Alonzo. 1933. A Set of Postulates for the Foundation of Logic (Part II). Annals of Mathematics 34: 839–64. [Google Scholar] [CrossRef]
- Clark, Micah H. 2010. Cognitive Illusions and the Lying Machine: A Blueprint for Sophistic Mendacity. Ph.D. thesis, Rensselaer Polytechnic Institute, Troy, NY, USA. [Google Scholar]
- Clark, Micah H. 2011. Mendacity and Deception: Uses and Abuses of Common Ground. In Building Representations of Common Ground with Intelligent Agents: Papers from the AAAI Fall Symposium. Edited by Sam Blisard and Wende Frost. Technical Report FS-11-02. Cambridge: AAAI Press, pp. 2–9. [Google Scholar]
- Curtius, Ernst. 1870. The History of Greece. London: Richard Bentley, vol. III. [Google Scholar]
- Deacon, Terrence W. 1997. The Symbolic Species: The Co-Evolution of Language and the Brain. New York: W. W. Norton & Co. [Google Scholar]
- Forster, Edward Seymour, and David John Furley, eds. 1955. Aristotle: On Sophistical Refutations, On Coming-To-Be and Passing-Away, On the Cosmos. Cambridge: Harvard University Press. [Google Scholar]
- Frankfurt, Harry G. 2005. On Bullshit. Princeton: Princeton University Press. [Google Scholar]
- Gagarin, Michael. 2001. Did the Sophists Aim to Persuade? Rhetorica: A Journal of the History of Rhetoric 19: 275–91. [Google Scholar] [CrossRef]
- Gallo, Giorgio, Giustino Longo, Stefano Pallottino, and Sang Nguyen. 1993. Directed hypergraphs and applications. Discrete Applied Mathematics 42: 177–201. [Google Scholar] [CrossRef]
- Gampa, Anup, Sean P. Wojcik, Matt Motyl, Brian A. Nosek, and Peter H. Ditto. 2019. (Ideo)Logical Reasoning: Ideology Impairs Sound Reasoning. Social Psychological and Personality Science 10: 1075–83. [Google Scholar] [CrossRef]
- Gillies, John, ed. 1823. A New Translation of Aristotle’s Rhetoric. London: T. Cadell. [Google Scholar]
- Glymour, Clark. 1992. Thinking Things through. Cambridge: MIT Press. [Google Scholar]
- Grasso, Floriana. 2002. Would I Lie To You? Fairness and Deception in Rhetorical Dialogues. In Working Notes of the AAMAS 2002 Workshop on Deception, Fraud and Trust in Agent Societies, Bologna, Italy, July 15–16. Edited by Rino Falcone, Suzanne Barber, Larry Korba and Munindar Singh. Berlin and Heidelberg: Springer. [Google Scholar]
- Grote, George. 1879. History of Greece, rev. ed. New York: Harper & Brothers, vol. VIII. [Google Scholar]
- Hamilton, Edith, and Huntington Cairns, eds. 1961. The Collected Dialogues of Plato (Including the Letters). Princeton: Princeton University Press. [Google Scholar]
- Hegel, Georg W. F. 1857. Lectures on the Philosophy of History, 3rd German ed. London: Henry G. Bohn. [Google Scholar]
- Held, Carsten, Markus Knauff, and Gottfried Vosgerau, eds. 2006. Mental Models and the Mind: Current Developments in Cognitive Psychology, Neuroscience, and Philosophy of Mind. Advances in Psychology. Amsterdam: Elsevier, vol. 138. [Google Scholar]
- Hu, Krystal. 2023. ChatGPT Sets Record for Fastest-Growing User Base—Analyst Note. Reuters, February 2. Available online: https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/ (accessed on 1 January 2024).
- Johnson-Laird, Philip N. 1983. Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness. Cambridge: Harvard University Press. [Google Scholar]
- Johnson-Laird, Philip N. 1999. Deductive reasoning. Annual Review of Psychology 50: 109–35. [Google Scholar] [CrossRef] [PubMed]
- Johnson-Laird, Philip N. 2006a. How We Reason. New York: Oxford University Press. [Google Scholar]
- Johnson-Laird, Philip N. 2006b. Models and heterogeneous reasoning. Journal of Experimental & Theoretical Artificial Intelligence 18: 121–48. [Google Scholar]
- Johnson-Laird, Philip N., and Fabien Savary. 1996. Illusory Inferences about Probabilities. Acta Psychologica 93: 69–90. [Google Scholar] [CrossRef]
- Johnson-Laird, Philip N., and Fabien Savary. 1999. Illusory inferences: A novel class of erroneous deductions. Cognition 71: 191–229. [Google Scholar] [CrossRef] [PubMed]
- Johnson-Laird, Philip N., and Ruth M. J. Byrne. 1991. Deduction. Hove: Lawrence Erlbaum. [Google Scholar]
- Johnson-Laird, Philip N., and Yingrui Yang. 2008. Mental Logic, Mental Models, and Simulations of Human Deductive Reasoning. In The Cambridge Handbook of Computational Psychology. Edited by Ron Sun. New York: Cambridge University Press, chp. 12. pp. 339–58. [Google Scholar]
- Kahneman, Daniel. 2013. Thinking, Fast and Slow. New York: Farrar, Straus, and Giroux. [Google Scholar]
- Khemlani, Sangeet S., and Philip N. Johnson-Laird. 2013. The Processes of Inference. Argument and Computation 4: 1–20. [Google Scholar] [CrossRef]
- Kissinger, Henry A., Eric Schmidt, and Daniel Huttenlocher. 2021. The Age of AI and Our Human Future. New York: Little, Brown and Company. [Google Scholar]
- Marback, Richard. 1999. Plato’s Dream of Sophistry. Columbia: University of South Carolina Press. [Google Scholar]
- Maslej, Nestor, Loredana Fattorini, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Helen Ngo, Juan Carlos Niebles, Vanessa Parli, and et al. 2023. The AI Index 2023 Annual Report. Stanford: Institute for Human-Centered AI, Stanford University. Available online: https://aiindex.stanford.edu/report/ (accessed on 1 January 2024).
- McComiskey, Bruce. 2002. Gorgias and the New Sophistic Rhetoric. Carbondale: Southern Illinois University Press. [Google Scholar]
- McKeon, Richard, ed. 1941. The Basic Works of Aristotle. New York: Random House. [Google Scholar]
- Meta Fundamental AI Research Diplomacy Team (FAIR), Anton Bakhtin, Noam Brown, Emily Dinan, Gabriele Farina, Colin Flaherty, Daniel Fried, Andrew Goff, Jonathan Gray, Hengyuan Hu, and et al. 2022. Human-level play in the game of Diplomacy by combining language models with strategic reasoning. Science 378: 1067–74. [Google Scholar] [CrossRef] [PubMed]
- Nguyen, C. Thi. 2020. Echo Chambers and Epistemic Bubbles. Episteme 17: 141–61. [Google Scholar] [CrossRef]
- Oakhill, Jane, and Alan Garnham, eds. 1996. Mental Models in Cognitive Science: Essays in Honour of Phil Johnson-Laird. Hove: Psychology Press. [Google Scholar]
- Oakhill, Jane, Alan Garnham, and Philip N. Johnson-Laird. 1990. Belief bias effects in syllogistic reasoning. In Lines of Thinking: Reflections on the Psychology of Thought. Volume 1: Representation, Reasoning and Decision Making. Edited by K. J. Gilhooly, M. T. G. Keane, R. H. Logie and G. Erdos. Chichester: John Wiley & Sons, pp. 125–38. [Google Scholar]
- Perelman, Chaïm, and Lucy Olbrechts-Tyteca. 1969. The New Rhetoric: A Treatise on Argumentation. Notre Dame: University of Notre Dame Press. [Google Scholar]
- Pinker, Steven. 1994. The Language Instinct: How the Mind Creates Language. New York: William Morrow & Co. [Google Scholar]
- Pinker, Steven. 2021. Rationality: What It Is, Why It Seems Scarce, Why It Matters. New York: Penguin Books. [Google Scholar]
- Pohl, Rüdiger F., ed. 2004. Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory. New York: Psychology Press. [Google Scholar]
- Reed, Chris, and Floriana Grasso. 2001. Computational Models of Natural Language Argument. In Computational Science—ICCS 2001: International Conference San Francisco, CA, USA, May 28–30, 2001 Proceedings, Part I. Edited by Vassil N. Alexandrov, Jack J. Dongarra, Benjoe A. Juliano, René S. Renner and C. J. Kenneth Tan. Lecture Notes in Computer Science. Berlin and Heidelberg: Springer, vol. 2073, pp. 999–1008. [Google Scholar]
- Reed, Chris, and Floriana Grasso. 2007. Recent Advances in Computational Models of Natural Argument. International Journal of Intelligent Systems 22: 1–15. [Google Scholar] [CrossRef]
- Reed, Chris, and Timothy J. Norman, eds. 2004. Argumentation Machines: New Frontiers in Argument and Computation. Dordrecht: Kluwer Academic Publishers. [Google Scholar]
- Russell, Stuart, and Peter Norvig. 2020. Artificial Intelligence: A Modern Approach, 4th ed. New York: Pearson. [Google Scholar]
- Schaeken, Walter, André Vandierendonck, Walter Schroyens, and Géry d’Ydewalle, eds. 2007. The Mental Models Theory of Reasoning: Refinements and Extensions. Mahwah: Lawrence Erlbaum. [Google Scholar]
- Schiappa, Edward. 1999. The Beginnings of Rhetorical Theory in Classical Greece. New Haven: Yale University Press. [Google Scholar]
- Simpson, Stephen G. 2010. Subsystems of Second Order Arithmetic, 2nd ed. Cambridge: Cambridge University Press. [Google Scholar]
- Taibbi, Matt, Bari Weiss, Michael Shellenberger, Lee Fang, David Zweig, and Alex Berenson. 2022–2023. The Twitter Files. Originally published as a series of Twitter threads. Available online: https://www.twitterfiles.co/archive/ (accessed on 1 January 2024).
- Taylor, Christopher, and Mi-Kyoung Lee. 2020. The Sophists. In The Stanford Encyclopedia of Philosophy. Edited by Edward Zalta. Stanford: Stanford University. Available online: https://plato.stanford.edu/entries/sophists (accessed on 1 January 2024).
- Tindale, Christopher W. 2010. Reason’s Dark Champions: Constructive Strategies of Sophistic Argument. Columbia: University of South Carolina Press. [Google Scholar]
- Turing, Alan M. 1937. On Computable Numbers, with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society 42: 230–65. [Google Scholar] [CrossRef]
- Turing, Alan M. 1950. Computing Machinery and Intelligence. Mind LIX: 433–60. [Google Scholar] [CrossRef]
- Tversky, Amos, and Daniel Kahneman. 1974. Judgment under Uncertainty: Heuristics and Biases. Science 185: 1124–31. [Google Scholar] [CrossRef] [PubMed]
- Wason, Peter C. 1966. Reasoning. In New Horizons in Psychology. Edited by Brian M. Foss. Hammondsworth: Penguin, pp. 135–51. [Google Scholar]
- Wolfram, Stephen. 2023. What Is ChatGPT Doing …and Why Does It Work? February 14. Available online: https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work (accessed on 1 January 2024).
| Group (N) | Control Accuracy (N) | Experimental Accuracy (N) | Control Mean Confidence (SD) | Experimental Mean Confidence (SD) | Control Mean Agreement (SD) | Experimental Mean Agreement (SD) |
|---|---|---|---|---|---|---|
| A (96) | 81.3% (96) | 60.4% (91) | 6.13 (1.15) | 5.99 (1.25) | NA | NA |
| B (80) | 82.3% (79) | 50.7% (75) | 6.03 (1.27) | 5.92 (1.26) | 3.84 (2.37) | 2.71 (2.17) |
| C (80) | 93.2% (73) | 25.0% (72) | 6.34 (0.99) | 6.19 (1.18) | 5.84 (1.66) | 5.04 (2.24) |
| Item Type | Accuracy (N) | Mean Confidence (SD) | Mean Agreement (SD) with Valid Arguments | Mean Agreement (SD) with Invalid Arguments |
|---|---|---|---|---|
| Control | 91.8% (73) | 6.01 (1.37) | 6.17 (1.49) | 1.83 (1.49) |
| Experimental | 37.1% (70) | 6.04 (1.20) | 3.37 (2.44) | 4.63 (2.44) |
| | Convincing | Unconvincing |
|---|---|---|
| Believed formally valid | I | II |
| Believed formally invalid | III (!!) | IV |
| Agnostic, formally valid | V | VI |
| Agnostic, formally invalid | VII | VIII |