Editorial

Rendering Humanities Sustainable

Robert G. Bednarik
International Federation of Rock Art Organizations (IFRAO), P.O. Box 216, Caulfield South, VIC 3162, Australia
Humanities 2012, 1(1), 64-71; https://doi.org/10.3390/h1010064
Submission received: 16 September 2011 / Revised: 25 September 2011 / Accepted: 10 October 2011 / Published: 19 October 2011
Launching a journal intended to cover the entire humanities is certainly an audacious project, for at least two reasons. Firstly, this journal will be expected to cover much academic diversity, particularly by including the “social sciences.” However, in this time of rampant overspecialization, perhaps it is precisely such wholeness and breadth of vision that could become a journal’s strength. Secondly, since the viability of the humanities has been questioned from a number of perspectives, it seems essential to meet these challenges by reinventing the discipline in response to the issues raised—also a major task. It involves justifying the continuation of humanistic traditions. For this, humanists need to consider the nature of these challenges, understand and analyze them, and respond to them. It is therefore inevitable that a forward-looking new journal in this discipline will deem it relevant to review these matters.
The tensions between the natural sciences and the Geisteswissenschaften (“spiritual sciences”) began with such scholars as Wilhelm Dilthey, who focused on human inner experience (“sovereignty of the will, responsibility for actions”, etc. [1]). Instead of a scientific approach, the humanities have developed a hermeneutical understanding of meaning in literature, culture, and history, which has brought them into conflict with rising postmodernism. There are vague references to creating “well rounded citizens” [2], apparently achievable by immersion in the great texts of Western culture. That would suggest that the most generous, good-hearted, and selfless people should be found among the higher echelons of humanities departments. Judging from the debates among such high-ranking humanist academics, that does not seem to be the case. Moreover, as one humanist notes, “it is not the business of the humanities to save us”, and to the question of what use the humanities are, the only honest answer is: none whatsoever [3]. For this he was rewarded with a torrent of 484 comments, which quasi-democratically define the issues faced by the discipline. Elsewhere, Stanley Fish clarifies that he is talking about humanities departments (an academic rather than cultural category), and “not about poets and philosophers and the effects they do or do not have in the world” [4]—an important clarification about academia’s role. In Fish’s words, the tiny effect a humanistic education might have “hardly amounts to a reason for supporting the entire apparatus of departments, degrees, colloquia, etc. that has grown up around the academic study of humanistic texts.” As he notes, the architects of the United States’ preemptive attack on a defenseless nation on the other side of the globe are individuals widely read in history, philosophy, and the arts, participating in deeply intellectual discussions of important texts. The same, of course, applies to many war criminals, and the good-citizenry case falters when examined closely.
A powerful argument in favor of retaining the humanities is that the question of what living is for needs to be clarified, and it is not one the sciences can deal with effectively. Another supportive point derives from their ability to promote critical thinking, but on reflection the sciences are perhaps better qualified to teach this, and critical thought processes can be trained by all life experiences, except obviously in religious or totalitarian contexts. In a sense, much of the humanities can be compared to the quest of an art critic, and I understand from artists that such commentators regularly misinterpret their work. We have no evidence that an academic’s pronouncements about what Shakespeare really meant would find the Bard’s agreement. This is then an exercise in determining intention, i.e., the undeterminable.
Apart from the primary humanities (languages, literature, history, philosophy, arts, religion), the “social sciences” are intended to occupy a space somewhere between the sciences and the humanities, and include anthropology, law, communication, cultural studies, psychology, sociology, political “science,” economics, technology, geographical studies, and linguistics. That taxonomy is certainly not universally recognized, and there are subtle differences among various geographical regions. In other words, these largely arbitrary divisions reflect historical, social, and political factors; there is nothing objective or absolute about them, and they result essentially from accidents of history. To determine the relative positions of these disciplines more precisely, it would be useful to contrast them with the sciences, because the sciences have a much clearer epistemological agenda. I would like to address, very briefly indeed, their inherent rationale—not to persuade the humanities to emulate it, but to clarify matters of difference.
It is impossible to deal effectively with any scholarly pursuit without recourse to epistemology, the branch of philosophy concerned with the origin, nature, and limits of human knowledge, and with the methods by which it is acquired. This is self-evident in relation to any academic topic, but in investigating the origins of human models of reality, which happens to be a prerequisite to examining any construct of knowledge, it is utterly indispensable.
The human being is an intelligent organism, the product of a long evolutionary process, and its continued existence is contingent upon its possession of several sensory faculties. It is on the basis of these faculties that, as a species, we map and comprehend physical reality, as if that were their role. That assumption is the greatest misunderstanding in the intellectual comprehension of reality, and in the human instance it is the basis of anthropocentrism. The interpretation of reality purely in terms of human values or experience is totally unscientific: anthropos metron hapanton—man is the measure of everything—is both true and false, depending on whether it refers to the anthropocentric reality we exist in or not. Man is certainly the measure of all he comprehends, but since cognitively he is a severely encumbered creature, he is in fact no measure of the world. Plato captured this state of profound ignorance perfectly in his allegory of the cave, so very many years ago.
To view anthropocentrism with even a semblance of objectivity, which is by no means easy for us, it is useful to consider the role of human sensory faculties. These were certainly not selected on the basis of being the best possible combination of such abilities for the purpose of determining “objective reality” (for the sake of the argument, let us assume that such a state can exist; I am not suggesting that we can know this). The principal criterion in their evolutionary “selection” was that the sensory faculties, or other means of relating to the world, of every organism in any planet’s global biotope must relate to the same physical reality as the rest of that planet’s biomass, often even to the same forms of perceptual manifestations of such reality. Evolutionary dynamics would not permit exceptions within such a system, and an organism not relating to it would not survive even if ambient environmental conditions were perfectly suitable for it. The perceptive abilities of any species in the universe are perhaps best described as a compromise between the need to match those dominant in the rest of its particular biotope and the need to possess enough variation relative to competitors to gain an evolutionary edge over them.
However, the fact that in a particular biotope any participatory organism, from a microbe to a human, relates in some fashion to a particular set of variables (e.g., spatial or temporal variables) is no proof that these are the only ones possible, or that they define some finite reality. Yet it is from this that anthropocentrism (in the case of humans) derives its confidence. The obvious explanation for our confidence in equating the reality we experience with “objective reality” is that, provided we continue to experience it only within the cognitive framework we have traditionally used, it is not likely to be challenged. Much of what we call science is actually an exercise in systematically augmenting an anthropocentric framework, through the misapplication of empiricism. Valid empiricism is the principle that human sensory experience is the source of human knowledge; if this view is corrupted into regarding human sensory experience as the sole arbiter of how things really are in the world, it becomes a major falsity.
Let us look at some generic propositions about perception. The possibility of perception is attributable to physical processes spreading out from centers and retaining certain characters. Without them, it would be impossible for different percipients to perceive the same object or phenomenon from different perspectives, and no intelligent organism would have been able to discover that its conspecifics exist in a common world. A significant factor in our “perceptual confidence” (by which I mean our confidence that our perceptions are “valid” in the determination of their causes) is the similarity between the perceptions of different organisms in similar situations. Without the discovery of a common reality, intelligence itself would have been impossible, and intelligent reflection would not have occurred.
“Awareness,” like “intentionality,” is a very rubbery concept, and I emphasize that I shall use it here only in the vaguest of meanings. Now, this awareness of a common reality experienced by most, but not all, humans (and the same, one presumes, applies to all intelligent beings in the universe, should any others exist, have existed, or ever come to exist) is clearly attributable to perception. Perceptions are patterned responses to sensations caused by physical objects or their properties. For instance, an object might reflect light radiation in a particular way, so that certain wavelengths dominate. A visual system sensitive to this selective reflection of light will register a sensation we call color, and the organism possessing such a system will infer the physical property of color. While the perceived object in question no doubt possesses a large number of alternative properties, only very few of which a human may perceive (even with the help of the technological extensions of our sensory abilities, certain instruments), there are good reasons why natural selection promoted particular sensory faculties and not others in us. In a nutshell, the faintly symbiotic relationship between ripening fruit and primates, ourselves included, may well explain why we have color vision. It was not bestowed on us to facilitate the appreciation of artworks.
The sensory perceptions of any organism, including one possessing some level of “self-reflective” intelligence as we define it, were presumably acquired through its evolution. They are then a rather haphazard collection of neural abilities in relating to particular physical processes in the world outside our bodies. The brain “knows” enough to carry out its function of fabricating individual reality, and from this we construct consensus reality through social intercourse.
Two crucial points emerge from this. Firstly, our knowledge of anything occurring outside our brains, neural systems, and proprioceptors must be very precarious, and considering how little we understand even of what goes on within our brains, this should be of concern. Secondly, the dynamics governing the evolution-determined acquisition of sensory abilities cannot be assumed to be related to some design aiming to equip us with the ability to define “objective reality”; there was no survival benefit in such an ability [5]. Rather, one would suppose that these dynamics resulted from chance variation in the struggle for existence, so they would have been selected for their utility in survival. Survival, of course, is in no way related to detecting objective reality; it merely reflects an ability to respond to environmental stimuli.
All our perceptions relate to events, to changes in the physical world; a steady-state reality would not be perceptible to us or to any other being. To perceive an event not taking place in the percipient’s body, there must be a physical process in the world, outside the reach of the percipient’s hard-wired neural system, which produces a neurally detectable stimulus on the surface of the body (or at a receptor such as the retina). It is most reasonable to conclude that this rather tenuous link between our nervous system and the real world provides absolutely no justification for the fond delusion of humans that they have access to some significant reality. Of course they do not, and it is salutary to remember that this was known to some Greek scholars millennia ago.
Immanuel Kant, in his seminal Critique of Pure Reason of 1781, developed Plato’s notion of a dichotomy between the knowable and the unknowable, and coined the concepts of a perceptual construct of reality (consisting of phenomena) and of an objective reality (consisting of noumena). While this distinction remains embedded in contemporary epistemology, there are significant problems with it. Basic to a Kantian model of the world is the assumption that phenomenal reality is experienced uniformly by all humans, irrespective of their cultural conditioning. In the 18th century, this was certainly the expectation in European thought, which at that time was incapable of perceiving its own ethnocentrism. Even Ludwig Wittgenstein initially reaffirmed its basic validity with the aphorisms of his Tractatus Logico-Philosophicus of 1921; but he contributed significantly to questioning the logical positivism developed from Kantian thought when he examined the role of language in concept formation and maintenance. In his early phase, Wittgenstein asserted that thought (and he referred to human thought, no doubt) is the logical picture of the facts, which in turn are made up of “atomic facts” (actually Bertrand Russell’s term; the 1922 English translation is a corrupted version of the German text). The thought is the significant proposition, and propositions are truth functions of elementary propositions. The purpose of language is to state the facts, which it does by picturing them. Thus language seems to have a structural similarity to what it describes. Ethical or metaphysical statements can only be nonsensical violations of the legitimate application of language, and in this Wittgenstein includes his own utterances on the theory of language. He regarded his own metaphysics as useful or important nonsense, and philosophy, as it is traditionally understood, as rooted in linguistic confusion. His sentiments were later expressed in a different way by Richard Rorty in The Linguistic Turn (1967), when he called for overthrowing the “spectatorial account of knowledge”: philosophers had never been able to establish that they were doing anything more than eternalize contingent prejudices.
In Wittgenstein’s later phase (commencing around 1929), in which he contradicts his own Tractatus, he rightly focuses on the influence of language. However, this seems to have had no effect on logical positivism. The “facts” that positivist reality is made up of are linguistic symbols, or “pictures.” These “facts” are phenomenological facts, i.e., they relate to the internal and relativistic construct of the world. To present in (philosophical) language anything that contradicts logic is impossible, because thought is itself (meant to be) logical. Hence the only mode of constrained mental activity (which the positivist calls “thought”) intelligible to the human “mind” (I would rather resist the urge to explore here what this word tries to capture) is logical thought as we define it. However, since it is entirely couched in tautologies, it cannot express anything of significance. So while logical positivism has to accept that nothing at all can be said about reality, it nevertheless pretends that its knowledge about the world, derived entirely from linguistic formulations of empiricist constructs, is valid. Thus the tyranny of empiricism, which has become the hallmark of 20th-century scientism (as opposed to science), lacks the integrity of Wittgenstein, who denounced his own earlier work as a mistake and a self-contradiction.
It is therefore necessary to examine the influence language has on our preferred concept of reality. The communicative units of any language, verbal or otherwise, are of course symbols. A symbol is thus a mediating tool the “mind” uses to represent the world. An intelligent species’ knowledge is mediated by symbols, which represent abstracted components of species-centric (or, we should preferably say, culture-specific) reality, carved out from perceptions of the objective world in the analytical process of the “mind” as it builds its image of the world. The linguistic structure of anthropocentric reality, including that created by the humanities, is difficult for the “human mind” to appreciate, precisely because all it can know is predicated on its own symbolism. All conceptual beings seek validation of their conceptual standard from others, through reference to an external standard. However, the need for external validation behaves inversely to the number of successful inductions the organism has experienced ontogenetically. Therein lies the only reason for ethnocentrism, and ultimately for the anthropocentrism that permeates the traditional humanities.
To express this state of understanding more succinctly: humans are incapable of determining what is true. Proper science is not satisfied with this state of affairs, so it has found a way around this rather large problem. And here we arrive at the crux of the matter. In the 1930s, Karl Popper devised falsification as a means of separating science from non-science: propositions must be presented in such a way that they can be disproved by some conceivable spatio-temporally located event exemplifying a possibility which the proposition would exclude. In the second half of the 20th century, this burden of falsification was somewhat modified: we now speak of refutation instead. This is because the falsifying evidence may itself be misinterpreted, and a refuted proposition is not necessarily false. The refuting evidence may be subjected to further testing, and if it is found to be problematic (as is often the case in science), a refuted proposition may be reinstated.
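Schematically, the refutation logic described here reduces to a simple modus tollens. The following formulation is a standard textbook gloss of Popper’s criterion, not the author’s own notation: where H is a hypothesis and P a testable prediction entailed by it,
\[ (H \rightarrow P) \land \neg P \;\vdash\; \neg H \qquad \text{(the prediction fails, so the hypothesis is refuted)} \]
\[ (H \rightarrow P) \land P \;\nvdash\; H \qquad \text{(the converse is invalid: a surviving hypothesis is corroborated, never proven)} \]
The asymmetry between these two schemata is what the following paragraph calls the hallmark of a scientific proposition: a failed prediction can eliminate a hypothesis, but no number of successful predictions can establish its truth.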
Essentially, this system of scientific testing through refutation has become so universally accepted in the sciences that refutability is now considered the principal hallmark of a scientific proposition, hypothesis, or theory. The assumption of scientists is that, if a proposition has been tested thoroughly and we have failed to refute it, the idea or model is strengthened, and it continues to be strengthened by every refutation attempt it survives. However, at no stage will the real scientist consider it to be “true.” He can never know that; that knowledge, and the humility it embodies, is what makes him a scientist. So a scientist is not someone who knows something to be true, but someone who knows no truth. “Truth” can be found in religion, but no real scientist has ever come across one finite, absolute truth.
These simple principles govern the sciences. Or perhaps we should say: areas of human knowledge-claims to which this principle of refutability cannot be applied are not scientific. This does not in any way suggest that they must be invalid, or that we ought to ignore them. They simply do not belong in the realm of science. Disciplines whose models and theories lack significant potential for refutability, such as archaeology or paleoanthropology, may well comprise valid hypotheses, but if these are not internally testable (i.e., by the means available to the proposing discipline) they fall outside of science. The problem is that within a non-refutable system of knowledge-claims it is easy to invent interpretations and defend them by recourse to academic influence. Yet by pretending that these interpretations are the result of scientific investigation, we are not only using false pretenses to bolster the credibility of our claims, we are also discrediting science. This happens a great deal in, for instance, archaeology, a discipline that draws extensively on several sciences but then habitually misinterprets their findings, because most archaeologists are humanists and cannot comprehend the severe limitations and qualifications that are attached to all scientific propositions. As one of the most celebrated American archaeologists, Lewis R. Binford, declares in exasperation, “humanists are committed to the defense of their chosen identity. Their methods are vacuous and their attempts at learning pathetic” [6].
Binford has for decades tried to steer archaeology toward a scientific trajectory, with ultimately very limited success, which may serve as a salutary lesson for other humanities. The reason for this is found in the above explanation of the epistemology the sciences subscribe to: the propositions presented in the humanities are not testable, and in many fields there is not even the slightest pretense that they are refutable. In law or religion they are simply prescriptive. Other areas seem principally concerned with aesthetics, which of course provides only anthropocentric notions about beauty that are devoid of any objective qualifier. Aesthetic judgments are imposed by human perceptions alone, on the basis of conditioned neural responses to sensory input. Although they can be defined scientifically, their justification exists purely in the human realm, which is precisely why they are concerns of the humanities.
To consider these variables in non-anthropocentric terms is impossible: if there were other intelligent species to communicate with, we could safely assume that they would lack any understanding of the aesthetics of humanists. Clearly, then, the notions and concerns of the humanities can be of relevance only on one planet and only to one species—and only for an instant in cosmic time. The reality we see ourselves existing in is simply the imagined world made real [7]. By contrast, the laws the sciences pursue apply throughout the universe and across the entire continuum of time, irrespective of the existence of human appreciators of an imagined property. It is also readily apparent that the sciences, because of their epistemologically enforced humility, are engaged in a relentless march forward, forever pushing the cutting edge toward better understanding. The humanities, by comparison, remain largely static, forever asking questions they cannot answer, such as what the purpose of human life is. Since the answer to that question has long been obvious, it would seem that the humanities have somehow not paid attention.
It emerges from these considerations that the humanities have an unsustainable epistemological basis and are prone to harking back to some golden age when human society had certainties instead of hypotheses, and when wise men had the answers to human yearnings for something to believe in. However, just as astrology was replaced by astronomy and phrenology by neuroscience, the study of the human primate must also move on, and must choose between the path of slow and gradual oblivion and the alternative of renewal. Based on recent developments and perceived trajectories, the humanities in their present form are no more sustainable than humanity itself is in the long run. Just as humanity is in dire need of addressing the challenges it has itself created, albeit largely unintentionally, the challenges deriving from the intractability of many humanists are just as much in need of reflection. When voices from within the humanities counsel a scaling down of the discipline, we might ask: what would the reaction be if there were calls for reining in the mushrooming cost of medical research? I think it is obvious that such voices would be drowned out in a cacophony of indignation, even though there are widely publicized ethical issues with some of this research. What is it that explains the over-generous funding of both medical and strategic research, which exceeds that of all other pursuits of knowledge combined? The answer is human selfishness. These two quests seem to strive for contradictory ideals: prolonging and saving human life on the one hand, and perfecting ways of destroying it on the other. Yet both are about self-preservation, whether at the species or individual level, or at the level of political, ethnic, or religious entities. Let us be quite clear about this: medical research is founded on what Albert Einstein laconically called the “ideal of swine”; it is the self-indulgent, self-centered pursuit of self-preservation of one species. There is nothing idealistic about it. The humanities, by contrast, are economically irrelevant, and this should define their main strength. Just as Fish has recommended, their best course is to accept that they have no utility; they exist purely for their own sake.
However, this manifested integrity may not suffice to preserve them. So how can the humanities be rendered sustainable, capable of making the transition into future centuries?
My recommendations reflect the views and arguments presented above. The discipline’s core areas seem to deal with issues that cannot be resolved with the tools available to humanists. For instance, the purpose of existence, the meaning of life, the role of the human soul, or the functioning of the mind cannot be established by word games of the kind Wittgenstein alludes to. The formulation of these humanistic questions is based on understandings that are neither relevant in this day and age, nor were they universally accepted even in the past. The various constructs of reality held by different societies over the millennia all disagree with the one that has emerged in Europe, and its dominance today is no proof of its validity. It only demonstrates the political and military strengths developed since the time the more aggressive Europeans beat the Chinese to the Americas. So the preoccupation with European ideology and ontology merely reflects historical developments. Perhaps more importantly, if one recasts the central questions the humanities are likely to ask in a slightly different framework, they do become fathomable, and they have long been investigated by some of the sciences. For instance, instead of assuming that minds exist, one can ask: what are the processes that make up the system that has traditionally been called the human mind, how do they work, and how can their interplay be described? Credible responses to such inquiries are perfectly achievable, as shown by certain sciences. Therein, however, lies the problem: humanists have shown very limited interest in these pursuits, for reasons that are themselves in need of explanation.
In recommending that the humanities should draw much more on the wealth of information from the sciences—information that is undeniably relevant to them—I am not suggesting that they need to somehow merge with the sciences. I have provided above the example of archaeology, where the experiment of turning it into a science has been largely a failure. So I am not suggesting that the humanities become sciences, but that they learn how to exploit the sciences to their advantage. Again, I emphasize that this needs to be based on an understanding of what the sciences are and what they provide—which is not truth, but testable propositions that can help resolve confusion. Numerous sciences are capable of providing information relevant to understanding the human condition [8], including human ecology, human biology, pathology, physiology, evolutionary biology, genetics (population, molecular, behavioral), endocrinology, neuroscience, the cognitive sciences [9], paleoanthropology, paleogenetics, and various branches of medicine. Availing themselves of the opportunities offered by their data and hypotheses will not convert the humanities into sciences, but it will greatly enrich them, make them more credible, and give them more confidence in facing the next century. It will render them sustainable.

References

  1. W. Dilthey. Einleitung in die Geisteswissenschaften. Versuch einer Grundlegung für das Studium der Gesellschaft und der Geschichte. Göttingen, Germany: Bernhard Groethuysen, 1914. [Google Scholar]
  2. A.T. Kronman. Education’s End: Why Our Colleges and Universities Have Given up on the Meaning of Life. New Haven, CT, USA: Yale University Press, 2007. [Google Scholar]
  3. S. Fish. “Will the Humanities Save Us?” The New York Times, 6 January 2008. [Google Scholar]
  4. S. Fish. “The Uses of the Humanities, Part 2.” The New York Times, 13 January 2008. [Google Scholar]
  5. R.G. Bednarik. “Editor’s response to B.J. Wright.” Rock Art Res. 2 (1984): 90–91. [Google Scholar]
  6. L.R. Binford. “On science bashing: A bashful archaeologist speaks out.” Bull. Deccan Coll. Post-Grad. Res. Inst. 60–61 (2001): 329–335. [Google Scholar]
  7. H. Plotkin. The Imagined World Made Real: Towards a Natural Science of Culture. London, UK: Penguin Books, 2002. [Google Scholar]
  8. R.G. Bednarik. The Human Condition. New York, NY, USA: Springer, 2011. [Google Scholar]
  9. F. Adams, and K. Aizawa. The Bounds of Cognition. Malden, MA, USA: Blackwell Publishing, 2007. [Google Scholar]
