Article

Technical Language as Evidence of Expertise

Andrei Moldovan

Department of Philosophy, Logic and Aesthetics, University of Salamanca, 37007 Salamanca, Spain
Languages 2022, 7(1), 41; https://doi.org/10.3390/languages7010041
Submission received: 2 December 2021 / Revised: 3 February 2022 / Accepted: 8 February 2022 / Published: 21 February 2022
(This article belongs to the Special Issue Pragmatics and Argumentation)

Abstract

In this paper, I focus on one argumentative strategy with which experts (or putative experts) in a particular field provide evidence of their expertise to a lay audience. The strategy consists in using technical vocabulary that the speaker knows the audience does not comprehend with the intention of getting the audience to infer that the speaker possesses expert knowledge in the target domain. This strategy has received little attention in argumentation theory and epistemology. For this reason, the present paper does not aim to reach any definitive conclusions; its purpose is mainly exploratory. After introducing the phenomenon, I discuss various examples. Next, I analyse the phenomenon from an argumentative perspective. I discuss the pragmatic mechanism that underlies it, the quality of the evidence offered, and its capacity to persuade.

1. Introduction

The distribution of expert knowledge in modern societies is essential to the advancement of science and technology and, ultimately, to social progress. However, as has often been pointed out, this distribution of expertise leads to a complex epistemic problem for everyone, given that expert knowledge is relevant to virtually all the practical decisions that we need to make on a daily basis, ranging from the decision to buy a particular type of food to the decision to get a particular vaccine or not. In his discussion of the problems that expert knowledge raises, Alvin Goldman (2001, p. 85) introduces “the novice/expert problem” to refer to the epistemic question that a layperson faces when evaluating the testimony of experts, especially in those cases where different putative experts disagree on a particular topic. The novice is someone who does not have knowledge or even an opinion on a particular topic, or has an opinion but does not have enough confidence in it to use it in evaluating the disagreement between rival putative experts. There seems to be agreement in the literature dedicated to this problem that novices need to proceed indirectly, by first identifying which one of the persons making claims in the target domain is a genuine expert. As Collins and Evans put it, people make “social judgments about who ought to be agreed with, not scientific judgments about what ought to be believed” (Collins and Evans 2007, p. 47). It is not claims that novices evaluate, but the source of those claims. In turn, this puts pressure on experts and institutions to give evidence of their expertise on a particular topic. As Sarah Sorial notes, “persons with expertise thus typically appeal to audiences to accept their views by emphasising who they are, rather than what they say” (Sorial 2017, p. 292).
Indicating one’s own expertise in a particular topic plays a significant role in boosting one’s credibility and offering support to the assertions one makes. Although experts do not usually build explicit arguments in support of their claims on the basis of their own level of expertise, they might invite audiences to infer that their assertions are correct on this basis. For that purpose, they might convey, in direct or more indirect ways, that they are experts in a particular topic. A direct way might involve asserting that one is a specialist in a particular topic, showing a track record of research results, giving one’s Hirsch index or other quantitative indicators of research impact, using one’s professional title (“Prof. Taylor”), qualifications (“PhD in Microbiology”), prizes, etc., or the name of the institution one is affiliated with (“London Research Institute”, “University of…”). These might be mentioned either by the speaker themselves or by someone introducing the speaker (when, for instance, a television channel conducts an interview with an expert, and the interviewee is presented to the public in advance as an expert). One’s status as an expert in a topic might also be conveyed in less direct ways. A speaker might rely on cues, either nonverbal or verbal, to signal their expertise. Nonverbal cues might include details of the scenario (e.g., setting an interview in a science laboratory) or clothing (e.g., wearing a white coat or having a stethoscope hanging around one’s neck). Verbally, one indirect way in which a speaker might give clues of their expertise consists in displaying their competence with the vocabulary of a particular field of specialised inquiry. In addressing a lay audience, a speaker might intentionally use technical jargon that they know is incomprehensible to the audience. This strategy is the focus of this paper.
There is, as far as I can tell, no extended discussion of the phenomenon in the literature devoted to argumentation theory and epistemology.1 For this reason, the aim of the present paper is not to reach any definitive conclusions, but rather to offer a first grasp of the phenomenon and make a few tentative suggestions about how to analyse it from an argumentative perspective.
The phenomenon of using language that is (partially or totally) inaccessible to an audience has been discussed in communication studies from a more descriptive perspective as part of an approach called Communication Accommodation Theory. This approach introduces a useful distinction between speech that is accommodative, when speakers make an effort to achieve common understanding, and speech that is nonaccommodative, when speakers, purposefully or unintentionally, do not pay sufficient attention to the cognitive needs of audiences. Jessica Gasiorek notes that “Adjustments that make a message more difficult to understand (e.g., speaking a language an interlocutor does not know, using unfamiliar jargon, speaking quickly) are considered nonaccommodative moves, in terms of interpretability” (Gasiorek 2016, p. 87). A display of technical language in order to impress the audience and present oneself as possessing knowledge of a particular topic is a kind of nonaccommodative behaviour. “Using the language of science,” Krieger and Gallois write, “although it is essential for certain occupational tasks, is often criticized in the public sphere for being inaccessible to nonexperts, disempowering them…” (Krieger and Gallois 2017, p. 2). Rice and Giles note that scientific language and information “can be interpreted as accommodating (relevant) or nonaccommodating (distancing through scientific terminology and not human scale)” (Rice and Giles 2016, p. 9). The nonaccommodative use of technical language that I discuss in this paper consists of cases in which the speaker or writer does not make an effort to simplify the language, avoid unnecessary technical vocabulary, or define the terms with which the audience is unfamiliar. I focus on cases in which this is used as a strategy to establish one’s expertise or epistemic credibility concerning a particular topic.
Nonaccommodative speech involving technical language in an interaction with a novice might have different causes and motivations. The speaker might be unable to convey the information in simple nontechnical language, or they might simply not be willing to make the effort to accommodate. In other cases, they might believe that in the given context a precise formulation in the language of the theory is required. A teacher might use the language of a particular theory in class in order to familiarise students with it. Sophisticated technical vocabulary might be used to emphasise social differences, for instance, as a way to remind the interlocutor that the speaker is the professor, and the interlocutor is the student. Convoluted technical language might be used to avoid giving a straightforward answer to a question asked. A physician might appeal to this strategy to end a conversation with a patient who asks too many questions, giving an answer that confuses the latter and leaves them with no reply. The kind of strategy I focus on in this paper is different from all the above: the speaker intentionally uses technical jargon that they know is incomprehensible to the audience in order to convey evidence of their own expertise in the target domain.
To sum up, two conditions must be met for the use of technical language to be of the kind considered here:
(C1)
the use is intentionally nonaccommodative (the speaker/writer uses technical jargon that they know the audience does not understand);
(C2)
technical language is used with the intention to persuade the audience that the speaker/writer possesses expert knowledge of the target domain.
Notice that it is not a necessary condition that the speaker have real linguistic competence with the technical jargon. In some of the examples that I discuss below, the speaker only fakes competence with the technical jargon or genuinely but falsely believes they have it.

2. Examples

Let us now consider various plausible instances of the strategy characterised in the previous section. The first one is a fragment from an advertisement for “Fractional Neck Lift Concentrate” from the October 2009 edition of the magazine “Wallpaper”:
“Y-42 FRACTIONAL NECK LIFT CONCENTRATE More than fractional treatments. For less than fractional treatments. In simple terms: neck therapy as its avant-garde finest. Protein fractions maximize fibrillin synthesis and minimize the inevitable, irreversible degradation of elastic tissue. Discovered-on-Mars iron rose crystal comes from effusive magma rock to increase prolyl hydroxylase activity by 381% and boost collagen production. Our new tetrapeptide-9 from France, as well as a next-generation tripeptide-10 citrulline and a bi-blinked dipeptide from Switzerland, to stimulate Laminin V, Collagen IV, Collagen VII, Collagen XVII and Integrin. A new tetrapeptide-11 from France also comes to the rescue to stimulate Syndecan-1 synthesis and reinforce epidermal cohesion.”
Meibauer (2016, p. 73) characterises this fragment as an “authentic piece of bullshitting”. I will not commit here to this claim but want to point out a different (albeit not totally unrelated) feature of the text: the way in which technical language is used with the purpose of impressing the audience. Condition C1, concerning the nonaccommodative character of the use of the language, is fulfilled. There is no reason to suppose that the average reader of a style and design magazine is capable of understanding the data provided and their relevance to the quality of the product. Is “bi-blinked dipeptide from Switzerland” better than bi-blinked dipeptide from other places? Is it good for your skin to “increase prolyl hydroxylase activity by 381%”? Do other skin care creams contain the same components or different ones? The average audience is not likely to have any clue how to answer these questions. Most probably, the readers have never heard of tripeptide-10 citrulline, Laminin V and tetrapeptide-11 “from France”. The authors of the advertisement know this very well, so they are not using this language in order to convey information. The only plausible explanation is that their purpose is not to inform, but to impress. Given that the ultimate aim of a commercial advertisement is to persuade the audience to buy the product, the text seems designed to show that the company uses complex scientific research and sophisticated procedures in creating its products and that the company (referred to as “we” later on in the text of the advertisement) has expertise in that topic and, therefore, is to be trusted when it comes to skin care products. Hence, condition C2 is fulfilled. Moreover, it is fulfilled in a manifest way, in the sense that the authors of the advertisement openly invite the audience to draw the conclusion that the company is an expert in the science behind products such as the one advertised. We will see in what follows that C2 is not always so manifestly fulfilled.
The next two examples are from Alan Sokal and Jean Bricmont’s 1999 book Fashionable Nonsense: Postmodern Intellectuals’ Abuse of Science. The book focuses on texts written by postmodern philosophers who, according to Sokal and Bricmont, are “masters of language and can impress their audience with a clever abuse of sophisticated terminology—nonscientific as well as scientific” (Sokal and Bricmont 1999, p. 8), especially from mathematics and physics. The authors discuss several quotes from Jacques Lacan’s work in which theorems and concepts from geometry are mentioned in connection with the discussion of certain mental phenomena. One of the fragments analysed is as follows:
“A geometry implies the heterogeneity of locus, namely that there is a locus of the Other. Regarding this locus of the Other, of one sex as Other, as absolute Other, what does the most recent development in topology allow us to posit? I will posit here the term “compactness.” Nothing is more compact than a fault [faille], assuming that the intersection of everything that is enclosed therein is accepted as existing over an infinite number of sets, the result being that the intersection implies this infinite number. That is the very definition of compactness.”
According to the authors, Lacan uses words from the mathematical theory of compactness, but he combines them “arbitrarily and without the slightest regard for their meaning. His “definition” of compactness is not just false: it is gibberish” (Sokal and Bricmont 1999, p. 23). This and other fragments are given as examples of a display of “superficial erudition by shamelessly throwing around technical terms in a context where they are completely irrelevant. The goal is, no doubt, to impress and, above all, to intimidate the non-scientist reader” (Sokal and Bricmont 1999, p. 31). What is relevant to our purpose is not so much whether the text is gibberish from a mathematical point of view, but the latter comment the authors make: that the technical terminology is used to impress a particular audience which is not sufficiently familiar with the geometrical concepts used, and thus to enhance the speaker’s epistemic status as a sophisticated and knowledgeable author. Hence, conditions C1 (nonaccommodation) and C2 (intention to impress) are fulfilled.2
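For orientation only (this gloss is mine, not Sokal and Bricmont’s), the standard topological notion that Lacan’s passage invokes can be stated as follows:

% Textbook definition of compactness, added here only to make the contrast with the quoted passage visible.
A topological space $X$ is compact iff every open cover of $X$ has a finite subcover.
Equivalently, $X$ is compact iff every family $\{F_i\}_{i \in I}$ of closed subsets of $X$ with the
finite intersection property (every finite subfamily has nonempty intersection) satisfies
$\bigcap_{i \in I} F_i \neq \emptyset$.

The point of displaying this is simply that the quoted passage does not track this (or any equivalent) formulation, which is part of what Sokal and Bricmont mean when they call it gibberish.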
Sokal and Bricmont also analyse various fragments of a text from Julia Kristeva’s writings on poetic language and arrive at similar conclusions. Here is one such fragment:
“Having assumed that poetic language is a formal system whose theorization can be based on set theory, we may observe, at the same time, that the functioning of poetic meaning obeys the principles designated by the axiom of choice… The notion of constructibility implied by the axiom of choice associated to what we have just set forth for poetic language, explains the impossibility of establishing a contradiction in the space of poetic language. This observation is close to Gödel’s observation concerning the impossibility of proving the inconsistency [contradiction] of a system by means formalized within the system.”
The metalogical concepts are not used metaphorically here, but, according to Sokal and Bricmont, with their literal meaning. The authors argue that Kristeva’s comments about the axiom of choice and Gödel’s theorem are not only incorrect, but way off the mark, and that they show a poor understanding of the metalogical theorems invoked (Sokal and Bricmont 1999, p. 46). More significantly, they note that Kristeva uses these concepts “without bothering to explain to the reader the content of these theorems” and that she tries to “impress the reader with technical jargon” (Sokal and Bricmont 1999, pp. 44–45). If this is correct, the fragment quoted is a good candidate for an instance of the nonaccommodative use of technical jargon intended to enhance the writer’s epistemic status.
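Again for orientation (the formulations are mine, standard textbook renderings rather than Sokal and Bricmont’s), the two results alluded to in the fragment are roughly the following:

% Textbook statements of the two results alluded to, added for orientation only.
Axiom of choice: for every family $\{A_i\}_{i \in I}$ of nonempty sets there exists a choice function $f$ with $f(i) \in A_i$ for all $i \in I$.
Gödel’s second incompleteness theorem: if $T$ is a consistent, recursively axiomatizable theory containing enough arithmetic, then $T \nvdash \mathrm{Con}(T)$, i.e., $T$ cannot prove the arithmetical sentence expressing its own consistency.

Note that the theorem concerns a consistent system’s inability to prove its own consistency, not “the impossibility of proving the inconsistency of a system”, which may be part of what Sokal and Bricmont have in mind when they call the comments incorrect.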
Admittedly, these fragments from Lacan and Kristeva are not straightforward examples of the strategy we are considering. The strategy, as introduced, requires that the speaker or writer use nonaccommodative technical language (condition C1) with the intention that it be taken as evidence of their expertise in the topic under consideration (condition C2). However, it is not always easy to identify such an intention, in part because writers tend not to be open about it. Intellectual modesty requires that the author of a study such as the ones quoted above try to convince their readership on the basis of the merits of the arguments provided and not on the basis of a superficial appearance of sophistication. Theoretical sophistication must be a natural consequence of the intricate development of the discussion and must not appear to be added for the sake of impressing the reader. For this reason, authors like Lacan and Kristeva, if they have the intention to impress, would most likely hide it and, if questioned, deny that they use the language of the formal sciences with that intention. Is there a good reason, then, to conclude that Lacan and Kristeva had this intention at all? Is there a good method applicable to any similar case to determine whether the speaker has the intention to present themselves as an expert?
This is a very delicate matter and I do not have a definitive answer. It seems correct to postulate the intention to impress the audience in those cases in which there are reasons to believe that a writer who uses technical language suspects (or knows) their use of this language is wide of the mark. In those cases, the writer cannot be using technical language simply with the intention to convey correct information. A different intention must be at play, and the intention to impress is a plausible explanation for the use of technical vocabulary. On the other hand, people are generally not in a position to assess correctly their own competence with a particular topic and the corresponding vocabulary because they tend to overestimate it.3 This cognitive bias is known as the Dunning–Kruger effect (Kruger and Dunning 1999), according to which people generally tend to assume they are competent with a particular vocabulary or theory even when they are not. This fact makes it more difficult to find cases in which the speaker clearly intends to impress but not to inform. Indeed, Lacan and Kristeva might genuinely think they are informing correctly. Their intention to impress is in this case secondary or not present at all. Therefore, we can only tentatively suggest that the confusing use of badly understood theoretical language in Lacan’s and Kristeva’s texts is aimed at impressing the audience and maybe at gaining for their own theories a little bit of the prestige the formal disciplines invoked therein have.
I turn now to the fourth and last example. This one belongs to the pseudoscientific literature, which includes texts in which pseudoexperts misuse the language of a particular science or mix the vocabularies of various sciences, producing meaningless technobabble, either out of ignorance or as a strategy to gain the reader’s trust. Many such cases are analysed in the study by Garrett et al. (2019), which focuses on a variety of Internet health scams currently active in Canada. By searching major databases such as PubMed and MEDLINE and by relying on a panel of medical experts, the authors identified 112 types of such activities. One such scam is DNA manipulation, also known as ThetaHealing, which is meant to address a supposed health concern related to DNA damage and provide other benefits, as well as teach clients how to “repair” their own DNA (Garrett et al. 2019, p. 234). The study classifies the risk of deception in this case as high. One of the criteria they use to test the risk of deception is whether the text which describes and advertises the service or product resorts to “pseudotechnical language”, which the authors define as “uses [of] new words (neologisms), repetitive and tautological statements, or jargon to explain how it works” (Garrett et al. 2019, p. 231). The study concludes that this is one of the most common persuasive techniques employed in Internet health scams, together with the use of authority and social influence (including celebrity endorsement) (Garrett et al. 2019, p. 232).
On the ThetaHealing official webpage, it is stated that this meditation technique, invented by Vianna Stibal in 1995, has the capacity to “wake up our DNA to our highest potential”. The following characterisation of the technique is also given on the same webpage:4
“You don’t have to be a scientist to do this technique, but you should know that the Pineal Gland is located exactly in the center of the brain; directly down from the crown and directly back of the third eye… Within the Pineal Gland is what is called the Master Cell, and it is this cell that is the operation center for all the other cells in the body. The Master Cell is the beginning point of healing for many of the functions that the body performs. Within this Master Cell is the chromosome of DNA that is the heart of the DNA Activation. Inside the Master Cell is a tiny universe all its own that is a master-key to our function. It runs everything in the body, from the color of our hair to the way we wiggle our feet. All parts of the body are controlled by the Program in the chromosomes and the DNA. Inside the Master Cell is the Youth and Vitality Chromosomes […]”
The text then goes on to introduce the meditation technique for “Activation of the Youth and Vitality Chromosomes”, which involves making the command “Creator of All That Is, it is commanded that the activation of the youth and vitality chromosomes (state client’s name) take place on this day”, etc.
The fragment starts with the sentence: “You don’t have to be a scientist to do this technique, but you should know that…”. This sentence invites us to think that what follows is scientific information. Although the fragment includes scientific terminology (chromosomes, DNA, the pineal gland), the author combines it with made-up vocabulary (“Master Cell”, “Youth and Vitality Chromosomes”) and fantastic claims about the functioning of the human organism and the healing powers of the mind, for which no evidence is given. Her misuse of scientific terminology shows a lack of understanding of the relevant scientific knowledge and a lack of concern for getting things right. Pieces of pseudoscientific text such as this one are plausible examples of the phenomenon we are considering: in addressing a lay audience, the author uses the vocabulary of science; (C1) the text is nonaccommodative, as the technical notions are left unexplained (albeit less nonaccommodative than the other cases discussed above); (C2) her intention is to impress and not to inform, mimicking scientific discourse in order to gain an appearance of scientific respectability. Evidence for the latter point is that she does not employ scientific terms correctly and shows manifest disregard for scientific method and rigor, which is incompatible with an honest intention to understand science and contribute to scientific knowledge.

3. An Argumentative Approach

The examples of the phenomenon discussed above are all problematic, in the sense that they do not constitute good evidence of the speaker’s expert knowledge in the target domain. I adopt in what follows a—very broadly conceived—argumentative approach to the phenomenon introduced, in the sense that I take the notion of fallacy as a theoretical guide to the study of the phenomenon. A fallacy is usually defined in argumentation theory by identifying the three components distinguished by Hansen (2002):
  • the ontological component: it is an argument, “a pattern of argumentation” (Johnson and Blair 1994, p. 54), “or at least something that purports to be an argument” (Walton 1995, p. 255);
  • the logical component: it is bad, or at least worse than it seems (Hamblin 1970, p. 39; Hansen 2002, p. 152);
  • the psychological component: it seems to be good, or at least better than it is.
Methodologically, this distinction allows us to divide the analysis of the phenomenon into three parts, corresponding broadly to the three dimensions identified above. I address, in turn, the following questions: what kind of phenomenon is it? What is the quality of the evidence provided? How persuasive is it?

3.1. The Ontological Dimension

Given that the phenomenon considered is, broadly speaking, a communicative one, the best perspective to take in order to answer the question is a pragmatic one. A naïve reconstruction of the use of technical language to provide evidence of expertise might involve the following inference: I am competent with (the relevant) technical vocabulary. Therefore, I possess expert knowledge of this topic. From a pragmatic perspective, making assertions with this content counts as realising a speech act of arguing. This is usually defined as a “complex speech act” (van Eemeren and Grootendorst 1984, p. 43) or a “secondary speech act” containing various first-order speech acts, which can be classified in two categories: acts of adducing premises and acts of concluding (Bermejo-Luque 2011, p. 60). Now, obviously no such speech acts were performed in our case: the speakers did not explicitly argue or assert the premise and the conclusion of the above inference.
An alternative account could suggest that what we have here is an act of arguing indirectly. Both the premise and the conclusion of the above inference are, it might be suggested, conveyed as the content of a conversational implicature. The mechanism which explains the generation of this implicature might involve a violation of the maxim of manner at the level of what is said. Consider the following pair of sentences:
  (a) You seem to have caught a cold.
  (b) You have symptoms of a viral infectious disease of the upper respiratory tract.
Assuming that the two sentences express the same semantic content relative to the context of utterance (which might be too far-fetched an assumption to begin with), choosing (b) over (a) constitutes a violation of the Maxim of Manner, which, among other things, requires the speaker to be brief and clear. The best explanation of this behaviour is (at least in those contexts where no better alternative explanation is available) that the speaker intends to convey that they are competent with the relevant technical vocabulary.
Again, this account cannot be right. One problem is that it is not possible to find for every sentence formulated in technical jargon a way to convey the same content in plain language. An even more important one is that the nonaccommodative use of technical jargon is noncooperative. Lepore and Stone (2014) discuss cases in which speakers purposefully use technical language that is incomprehensible to the listener and argue that they are noncooperative uses “where interlocutors are not even committed to reaching mutual understanding” (Lepore and Stone 2014, p. 221). If the speaker/writer’s contribution violates the Cooperative Principle, the framework in which conversational implicatures are postulated is not an option.
To sum up, the phenomenon cannot be accounted for as a speech act of arguing, and the conversational implicature approach does not seem more promising either. Nevertheless, it is plausible to think that, with nonaccommodative use of technical language, when C1 and C2 are fulfilled, the speaker implies—in some sense—that they are an expert in the topic. This implication, however, is of a different kind than conversational implicatures, conventional implicatures, semantic entailments or other forms of known implications.
In order to get a better grasp of the phenomenon considered here, we might start with the observation that the use of technical language is a communicative phenomenon in the wide sense of the term. However, not all communicative phenomena are of the same kind. Just like one’s accent in speaking a language (e.g., Scottish, Irish, Russian, Cockney, Hindi, etc., for English), using a particular vocabulary is something that comes with an utterance and carries all kinds of information. This information is not always something that the speaker wants to convey. However, given the way in which we circumscribed the phenomenon we focus on here with the help of conditions C1 and C2, in the cases considered, technical language is used with the intention to convey certain information. It is both intentionally nonaccommodative as well as aimed at persuading the audience that the speaker is knowledgeable of the topic considered. The presence of these intentions suggests that Grice’s (1957, 1969) analysis of speaker meaning might be a useful theoretical approach at this point. I make use of it in what follows but, as I hope will become clear, the conclusions reached with the help of Grice’s proposal do not presuppose that this is ultimately a correct analysis of speaker meaning. In fact, I argue below that the phenomenon we are considering is not an instance of speaker meaning. For this reason, it is not necessary to invoke a sophisticated version of the analysis, one that might have better chances of being ultimately correct. A simple formulation of the analysis will suffice for the present purposes.
According to Grice (1957, p. 384), “A [a speaker] means something by x” is roughly equivalent to “A uttered x with the intention of inducing a belief by means of the recognition of this intention.” A more detailed presentation of the analysis distinguishes three conditions. A speaker A means p by uttering a sentence x if and only if (1) A intends to induce in the audience the belief that p, (2) A intends the former intention to be recognised and (3) the recognition is intended by A to be a reason for the audience to form the belief. Grice (1957) draws our attention to cases that the analysis must rule out. There are, for instance, cases for which conditions (1) and (2) are fulfilled (the speaker has the intention to induce a belief, and the intention is manifest), but not condition (3). One such case is that of a child who, feeling faint, lets his mother see how pale he is, hoping that she may draw her own conclusions and help. The child manifestly presents evidence of a fact but does not intend the mother to draw the conclusion on the basis of recognising his intention, but instead on the basis of seeing the degree of paleness. Such cases might be characterised as inviting the audience to draw the inference that the fact obtains.
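Schematically, and keeping to the simple formulation just quoted (this restatement is mine, with the audience written as H):

% Schematic restatement of the three Gricean conditions referred to in the text.
$A$ means that $p$ by uttering $x$ iff:
(1) $A$ utters $x$ intending the audience $H$ to come to believe that $p$;
(2) $A$ intends $H$ to recognise intention (1);
(3) $A$ intends $H$’s recognition of (1) to serve as (at least part of) $H$’s reason for believing that $p$.

In the child example, (1) and (2) hold but (3) fails: the mother is meant to conclude that the child is unwell from the paleness itself, not from recognising his intention that she believe it.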
Using technical language to provide evidence of expertise, when this is performed in a manifest way (e.g., the skin care advertisement discussed above), is very similar to the case of the sick child: the first two conditions of the Gricean analysis are fulfilled, but not the third. The speaker/writer manifestly shows the audience evidence of their expert knowledge of the target domain and invites the audience to infer that they have expert knowledge on the basis of the evidence provided. That is, the implication we are trying to characterise is a form of inviting an audience to draw an inference from the evidence shown to that proposition which the speaker takes the evidence to be evidence of.5
A different kind of case is that in which the intention to provide evidence of one’s expertise is not manifest. That is, condition C2 is fulfilled but not in a manifest way. In those cases (the fragments from Lacan and Kristeva might be of this kind), the above characterisation in terms of an invitation to make an inference does not seem to be adequate. No invitation to draw an inference is made, assuming that an invitation is necessarily a manifest act. In fact, no implication of any kind is conveyed. Only condition (1) of the Gricean analysis is fulfilled, but not (2) and (3). The speaker offers evidence of their expertise and intends the audience to form the belief that they are an expert on this basis but does not want the audience to detect this intention. These are cases of showing evidence with the purpose of inducing a particular belief in the audience but without manifesting this intention in any way.
To sum up, the nonaccommodative uses of technical language aimed at providing evidence of expertise might be of two kinds: those in which the intention to provide evidence is manifest and those in which it is not. Only the former might be correctly characterised as involving an invitation to draw an inference. Consequently, only in these cases might the strategy be said to be an argumentative move, assuming that for something to be an argumentative move it must be a move in an argumentative discussion that is manifest, i.e., perceived as such by all interlocutors. A case in which the intention is present but not manifest is better characterised as a strategy to persuade but not as an argumentative move. There are reasons to expect that cases of the latter kind are more common than the former in practice, given that speakers normally do not want to be perceived as using technical language just for the sake of showing their linguistic competence, as opposed to using the language because rigor and precision require it in the context. On the other hand, when the intention to persuade is not manifest, it is difficult to determine whether it is present at all, that is, whether condition C2 is fulfilled or not. We have seen that Lacan’s and Kristeva’s texts raise questions of this kind.

3.2. The Epistemic Dimension

The examples introduced in the second section, especially the later ones, share a certain negative feature to a greater or lesser degree: they are all cases in which we might suspect that the writer aims to mislead the audience into believing that they possess expert knowledge about the topic under consideration, when in fact this is not the case. However, not all nonaccommodative uses of technical jargon are designed to mislead the audience. The intention to mislead is not a necessary condition of the phenomenon as we have characterised it, nor is it a condition that the speaker believe that they lack expert-level knowledge of the topic. A real expert in the field of metalogic or human biology might make a nonaccommodative use of technical jargon with the intention of giving a clue of their expertise in the topic, but without any intention to deceive.
This brings us to the following question: is the evidence that the speaker provides by displaying their use of technical jargon good evidence that they have expert-level knowledge of the target domain? This is an important question, given that identification of genuine experts is an essential step in identifying reliable sources of expert knowledge.6
Collins and Evans (2007) do not discuss the kind of potential evidence of expertise the present paper focuses on, but what they say seems to imply a negative answer to our question. The authors propose a classification of the levels of knowledge of a particular topic that one might have. The highest level of knowledge is what the authors call “specialist expertise”. This can be of two kinds: interactional expertise and contributory expertise. The former is “expertise in the language of a specialism in the absence of expertise in its practice” (Collins and Evans 2007, p. 28). Contributory expertise is the ability to correctly perform an activity that requires special training. In the case of scientific investigation, contributory expertise “enables those who have it to contribute to the domain to which the expertise pertains” (Collins and Evans 2007, p. 24). Contributory expertise is superior to interactional expertise, and it presupposes the latter (Collins and Evans 2007, p. 36). The authors also introduce what they call the Principle of Downward Discrimination: “only the downward direction is reliable, the other directions tending to lead to wrong impressions of reliability” (Collins and Evans 2007, p. 28), that is, only a person with a higher level of expertise could be a reliable judge of someone’s knowledge of a particular topic. It follows that a speaker’s competence with technical language is not good evidence to a layperson of the speaker’s expertise in a topic simply because nothing is. The Principle of Downward Discrimination entails (at least if we take it literally) that there is no solution to the problem that novices face when they need to decide who is an expert in a particular topic. However, Collins and Evans seem to think that not all criteria are equally bad, and some are more reliable than others. For instance, credentials are not a good criterion (Collins and Evans 2007, p. 67), but evaluating the putative expert’s experience within the domain is a much better one (Collins and Evans 2007, p. 68). However, the scepticism their principle expresses relative to the possibility of obtaining good evidence of expertise clearly suggests a negative answer to our question.7
Goldman’s (2001) approach to “the novice/expert problem” rejects full-fledged scepticism (Goldman 2001, p. 86) and instead offers five possible sources of evidence that a novice might appeal to in deciding whether to trust an expert or not. Goldman considers a scenario in which two putative experts disagree and a novice needs to decide which one to trust. The five sources of evidence identified are as follows: (i) the arguments presented by the contending experts to support their own views and criticising their rivals’ views; (ii) the existence of agreement with the claim made among additional putative experts; (iii) appraisals by “meta-experts”, including credentials earned by the experts; (iv) evidence of the experts’ interests and biases relative to the question at issue; (v) evidence of the experts’ past “track-records”.
Goldman does not consider the use of technical language as a possible source of evidence of expert knowledge. However, the considerations he offers concerning point (i) are relevant to our discussion. In this respect, Goldman distinguishes direct and indirect argumentative justification. With direct argumentative justification, the hearer becomes justified in believing the conclusion of an argument by becoming justified in believing the premises and that they offer support to the conclusion. With indirect argumentative justification, the hearer obtains justification to believe that the speaker has expertise about the topic under consideration indirectly, on the basis of their argumentative performance. Novices are not in a position to obtain direct argumentative justification as they do not have the resources to directly evaluate the quality of the arguments the putative experts provide. However, they could obtain indirect argumentative justification by making an inference to the best explanation from the speakers’ argumentative performance to their respective levels of expertise (Goldman 2001, p. 96). They could form an impression of the quality of the speakers’ performance by considering their display of dialectical abilities in a contradictory dialogue, their capacity to formulate arguments and objections and respond to objections and criticism. Goldman adds that “additional signs of superior expertise” may come from other aspects of the debate, such as “the comparative quickness and smoothness” with which a speaker responds to objections and challenges, which suggests that they are already familiar with them and have prepared counterarguments well in advance of the debate. Quickness and smoothness in defending one’s claims suggest “prior mastery of the relevant information and support considerations”.
However, Goldman does not consider these pieces of evidence to be reliable. He notes that “dialectical superiority may reasonably be taken as an indicator of [the speaker’s] having superior expertise on the question at issue” but adds that it is a “(non-conclusive) indicator” (Goldman 2001, pp. 95, 96). Concerning quickness and smoothness, he notes that they are “problematic indicators of informational mastery”. This is because “Skilled debaters and well-coached witnesses can appear better-informed because of their stylistic polish, which is not a true indicator of superior expertise. This makes the proper use of indirect argumentative justification a very delicate matter” (Goldman 2001, p. 95).
Similar considerations apply to the case of the fluency and smoothness with which a speaker appears to be using technical vocabulary. Goldman (2001) does not treat this as a potential source of evidence of the speaker’s expertise, but the similarity with the case of indirect evidence of debating skills invites a parallel conclusion: it is an indicator of expertise, but a problematic and nonconclusive one. Cases in which the speech or text is a genuine instance of a correct use of technical jargon are to be distinguished from cases in which the speaker or writer misuses the terminology and only mimics correct use (as in some of the examples introduced above). In a sense, the cases of the former type do offer good evidence of expertise, while the cases of the latter type do not. However, the problem here is that novices cannot discriminate between the two. By definition, the use of technical language we considered fulfils condition (C1), which means that the use is intentionally nonaccommodative. Therefore, novices are not in a position to distinguish correct use of genuine technical language from apparently sophisticated but actually incomprehensible language that includes technical vocabulary and which the speaker uses fluently and with confidence. As in the case of dialectical skills with arguments and objections, the only evidence that is available to novices is the appearance of quickness and smoothness but not actual ability. The ability to use a particular theoretical vocabulary with confidence, giving the impression of being competent with it, is widespread among purveyors of pseudoscience. As Blancke et al. (2017, p. 87) note, pseudoexperts “explicitly use scientific publications, language and typical features such as graphs and formulas, to convince people that they are dealing with genuinely scientific and thus reliable information.” Mimicking scientific language is a strategy consciously assumed by pseudoexperts in order to gain the trust of lay audiences: “pseudo-scientific claims dress up so to look reliable, often by mimicking superficial features of science” (Briciu 2021, p. 641). It is an instance of precisely the phenomenon I discuss in this paper, the nonaccommodative use of technical language with the intention of providing evidence of expertise. The fact that pseudoexpertise is such a widespread phenomenon is a reason to conclude that the appearance of linguistic competence with technical language is insufficient evidence to establish the conclusion that the speaker is an expert in the topic. A further reason that adds to our scepticism is that, even if a person has minimal mastery of a certain vocabulary (in the sense of avoiding making nonsensical and absurd claims), that does not mean they are a reliable source of information on the topic under consideration.
Two final comments concerning this point are in order. The first one is that saying that the evidence in question is not sufficient to establish that the speaker is an epistemic authority is not to say that it is not good evidence in any context whatsoever. However, it is not good evidence for the particular audience we considered, one composed of laypersons who are not able to distinguish between a speaker who is competent with the vocabulary of a theory and a speaker who only appears to be. The second one is that the scepticism I defend concerns the quality of the evidence of expertise the nonaccommodative use of technical language provides, considered independently of any other possible source of evidence. However, the appearance of competence might contribute to providing good reasons for the speaker’s expertise in a particular topic if considered in conjunction with other kinds of evidence, such as the speaker’s credentials or their past track record. When such alternative evidence is available, their capacity to articulate claims using technical vocabulary might correlate with the rest of the evidence available and contribute to establishing their status as an epistemic authority.

3.3. Persuasiveness

The third of the dimensions of fallacy is the psychological one, consisting in the ability of the move to seem a good argument or argumentative move, that is, to be persuasive, at least to a certain extent. Whether the strategy is persuasive is an empirical question, and there is empirical evidence—albeit not much—that nonaccommodative use of technical language does indeed have the effect of boosting the speaker’s credibility. Although, according to Sorial (2017, p. 304), there are no specific studies of this phenomenon, related phenomena have been the focus of empirical investigation. In their literature review, Zaboski and Therriault (2020) mention studies showing that presenting a text in a scientific format—that is, using references, citations, a discussion of methods, and the passive voice—makes the information transmitted more credible. Even small linguistic changes influence the way in which lay audiences interpret texts that purport to convey factual knowledge. For instance, hedging is common in scientific literature and viewed as a mark of academic writing, while sensational and extraordinary claims (e.g., “a revolutionary treatment” or a “cure”) indicate “an overzealousness to persuade a reader” (Zaboski and Therriault 2020, p. 14). What is directly relevant to our focus here on technical language is that the studies have shown that those factors that increase the perceived scientificity of a text also increase its perceived credibility (Zaboski and Therriault 2020, pp. 10–12). The two variables tend to be strongly associated and highly correlated, both in other studies mentioned by Zaboski and Therriault and in their own study. It is, therefore, to be expected that the use of scientific jargon to increase the perceived scientificity of a text or speech also increases its credibility. However, the effects on readers vary depending on their own knowledge and training. Thus, the authors write that “jargon may increase the scientificness of a claim, but it may not persuade readers enough to accept a pseudo-scientific claim” (Zaboski and Therriault 2020, p. 4), when the latter is disguised in technical language. The use of scientific (or, in general, technical) jargon might serve to boost the credibility of the speaker, even if this does not mean that the audience is prepared to accept any claim the speaker makes. It is safe to assume that many factors influence the listener’s disposition to accept the speaker’s claim, such as whether the claim seems plausible or extraordinary.8

4. Conclusions

In the previous section, we reached several (tentative) conclusions concerning the nonaccommodative use of technical vocabulary with the intention of providing evidence of expertise. I argued that the strategy in question is not an argument (in the speech act sense of the term), is not to be accounted for as a conversational implicature, and does not necessarily involve an implication of any kind. Nevertheless, a subclass of cases, those in which the intention to provide evidence of expertise is manifest, might be correctly characterised as involving an invitation to draw an inference and as an argumentative move. There are also cases in which the intention is present but not manifest, which might be characterised as strategic moves aimed to persuade. Grice’s (1957, 1969) analysis of speaker meaning has proven helpful in accounting for the difference between the two.
Concerning the epistemic dimension, I argued that the strategy does not constitute a reliable source of evidence of expert knowledge. Moreover, there is empirical evidence suggesting that the strategy is nevertheless persuasive. Going back to the concept of fallacy, which I took as a theoretical guide for the discussion, the tentative conclusion we seem to have reached is that the strategy is epistemically unreliable. When used as an argumentative strategy by which the speaker invites us (or aims, in less manifest ways, to get us) to infer a certain conclusion concerning their level of expertise, the argumentative move is deceptive to the extent that the evidence is not conclusive but potentially persuasive.
The discussion that led to these tentative conclusions made use of resources from both pragmatic theory (e.g., Grice’s analysis of speaker meaning) and argumentation theory (e.g., the concept of fallacy, which I used as a guide to analyse the phenomenon under consideration along various dimensions). I hope that this might serve as an indication of how these resources might be profitably combined to advance our understanding of certain phenomena that belong to what might be called “the pragmatics/argumentation interface”. One such phenomenon, which has received little attention so far, is the kind of nonaccommodative use of technical language that I focused on here.

Funding

The research that led to this paper was partially funded by the I+D+i Project Expercitec, PID2019-105783GB-I00, financed by MCIN Spain.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

A previous version of this paper has been presented at the conference “Understanding II”, organized by Andrei Mărăşoiu (University of Bucharest), and at one of the project meetings of the Research Project “Expercitec” at the University of Salamanca. I am indebted to the audiences of the two talks, especially to Andrei Mărăşoiu, Dan Zeman, Obdulia Torres, Ana Cuevas, and Carmen Fernández Juncal for their comments and suggestions, as well as to the three anonymous reviewers for this journal. I am particularly grateful to Steve Oswald for his editorial work for this issue, as well as for carefully reading the paper and offering valuable comments.

Conflicts of Interest

The author declares no conflict of interest.

Notes

  1. An insight into this phenomenon might be gained by taking a rhetorical perspective, like the one developed by Lyne and Howe (1990). The authors discuss some of the rhetorical strategies that experts use in persuading a lay audience, including metaphors, analogies and other rhetorical figures that function as “instructions” for the public on how to read a scientific text. The authors focus mostly on uses of technical language that are meant to be comprehensible to a lay audience, cases in which the expert is sensitive to the audience’s level of understanding of the topic considered. For this reason, their analysis is not directly relevant to the present discussion, which focuses precisely on cases in which this does not occur. See also Tindale (2011) for a rhetorical approach that considers the role the expert’s ethos plays in persuading a lay audience.
  2. A different yet related topic is that of obscurity. Lacan’s fragment that I quoted might be characterised as obscure. However, this is not because the vocabulary is too technical for the audience. The fact that an author is nonaccommodating and the intended readers fail to grasp the meaning of the words used does not make the text obscure. The text might be perfectly comprehensible and clear to someone familiar with that vocabulary. Instead, Lacan’s text is obscure because the author chose to use the vocabulary in ways that are nonstandard and confusing even to the specialists in the field (topology, in this case). For a discussion of how obscurity might be used by pseudoexperts and gurus to impress audiences and create the impression that the speaker has something profound and important to convey, even when no one can really say what it is, see Sperber (2010).
  3. I am grateful to an anonymous reviewer for pointing this out.
  4. The address of the website is as follows: https://www.thetahealing.com/thetahealing-dna-activation.html (accessed on 2 December 2021).
  5. Pinto (2006, p. 309) famously characterised arguments as invitations to draw an inference. However, this characterisation, and the modified version that Pinto subsequently proposed, is insufficient to distinguish arguing from other kinds of speech acts (Moldovan 2012, pp. 303–4). The strategy we discuss here is an example of an invitation to draw an inference which, nevertheless, is not an argument.
  6. Consider, in this sense, the critical questions that Walton et al. (2008) developed as a tool that arguers can use to evaluate arguments that appeal to expert opinion. Among these questions we find: “How credible is E as an expert source?” and “Is E an expert in the field that A [the target claim] is in?” (Walton et al. 2008, p. 310). The arguers who need to appeal to expert opinion are novices in the field to which A belongs.
  7. In any case, in this framework, fluency with the vocabulary of the theory could only amount to evidence in favour of interactional expertise, but not contributory expertise. The former is not a guarantee of possessing knowledge of the field, the capacity to solve problems, or the ability to put theoretical knowledge into practice.
  8. An anonymous reviewer made the interesting comment that a novice who is persuaded by this strategy might be said to provide an example of a derivative Dunning–Kruger cognitive bias. The classic Dunning–Kruger effect concerns the fact that people usually do not recognise that they are not in a position to make object-level judgements in a particular field of technical inquiry. The derivative version might refer to the case of someone who, while aware that they lack competence in a particular field, does not realise that they are thereby not in a position to make meta-level judgements about a speaker’s competency in using technical jargon.

References

  1. Bermejo-Luque, Lilian. 2011. Giving Reasons. A Linguistic-Pragmatic Approach to Argumentation Theory. Dordrecht: Springer.
  2. Blancke, Stefaan, Maarten Boudry, and Massimo Pigliucci. 2017. Why do irrational beliefs mimic science? The cultural evolution of pseudoscience. Theoria 83: 78–97.
  3. Briciu, Adrian. 2021. Bullshit, trust, and evidence. Intercultural Pragmatics 18: 633–56.
  4. Collins, Harry M., and Robert Evans. 2007. Rethinking Expertise. Chicago: University of Chicago Press.
  5. Garrett, Bernie, Emilie Mallia, and Joseph Anthony. 2019. Public perceptions of Internet-based health scams, and factors that promote engagement with them. Health & Social Care in the Community 27: e672–e686.
  6. Gasiorek, Jessica. 2016. The ‘dark side’ of CAT: Nonaccommodation. In Communication Accommodation Theory. Negotiating Personal Relationships and Social Identities across Contexts. Edited by Howard Giles. Cambridge: Cambridge University Press.
  7. Goldman, Alvin I. 2001. Experts: Which Ones Should You Trust? Philosophy and Phenomenological Research 63: 85–110.
  8. Grice, H. Paul. 1957. Meaning. Philosophical Review 66: 377–88.
  9. Grice, H. Paul. 1969. Utterer’s Meaning and Intentions. Philosophical Review 78: 147–77.
  10. Hamblin, Charles L. 1970. Fallacies. Willersey: Vale Press.
  11. Hansen, Hans V. 2002. The Straw Thing of Fallacy Theory: The Standard Definition of ‘Fallacy’. Argumentation 16: 133–55.
  12. Johnson, Ralph, and J. Anthony Blair. 1994. Logical Self-Defense. Brussels: International Debate Education Association.
  13. Krieger, Janice L., and Cindy Gallois. 2017. Translating Science: Using the Science of Language to Explicate the Language of Science. Journal of Language and Social Psychology 36: 1–11.
  14. Kristeva, Julia. 1969. Semeiotike. Recherches pour une Semanalyse. Paris: Editions du Seuil.
  15. Kruger, Justin, and David Dunning. 1999. Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology 77: 1121–34.
  16. Lacan, Jacques. 1998. The Seminar of Jacques Lacan, Book XX, Encore 1972–1973. Edited by Jacques-Alain Miller. Translated with notes by Bruce Fink. New York: Norton.
  17. Lepore, Ernie, and Matthew Stone. 2014. Imagination and Convention: Distinguishing Grammar and Inference in Language. Oxford: Oxford University Press.
  18. Lyne, John, and Henry F. Howe. 1990. The rhetoric of expertise: E. O. Wilson and sociobiology. Quarterly Journal of Speech 76: 134–51.
  19. Meibauer, Jörg. 2016. Aspects of a theory of bullshit. Pragmatics and Cognition 23: 68–91.
  20. Moldovan, Andrei. 2012. Arguments, Implicatures and Argumentative Implicatures. In Inside Arguments: Logic and the Study of Argumentation. Edited by Henrique Jales Ribeiro. Cambridge: Cambridge Scholars Publishing.
  21. Pinto, Robert C. 2006. Evaluating Inferences: The Nature and Role of Warrants. Informal Logic 26: 287–317.
  22. Rice, Ronald, and Howard Giles. 2016. The contexts and dynamics of science language and communication. Journal of Language and Social Psychology 36: 127–39.
  23. Sokal, Alan, and Jean Bricmont. 1999. Fashionable Nonsense: Postmodern Intellectuals’ Abuse of Science. New York: Picador.
  24. Sorial, Sarah. 2017. The Legitimacy of Pseudo-Expert Discourse in the Public Sphere. Metaphilosophy 48: 304–24.
  25. Sperber, Dan. 2010. The Guru Effect. Review of Philosophy and Psychology 1: 583–92.
  26. Tindale, Christopher W. 2011. Character and Knowledge: Learning from the Speech of Experts. Argumentation 25: 341–53.
  27. van Eemeren, Frans H., and Rob Grootendorst. 1984. Speech Acts in Argumentative Discussions. Dordrecht and Cinnaminson: Foris Publications.
  28. Walton, Douglas, Chris Reed, and Fabrizio Macagno. 2008. Argumentation Schemes. Cambridge and New York: Cambridge University Press.
  29. Walton, Douglas. 1995. A Pragmatic Theory of Fallacies. Tuscaloosa: University of Alabama Press.
  30. Zaboski, Brian A., and David J. Therriault. 2020. Faking science: Scientificness, credibility, and belief in pseudoscience. Educational Psychology 40: 820–37.