1. Introduction and Overview
The development of information technologies, especially the hasty digitalization of education and other spheres of social activity brought about by the COVID-19 pandemic, has radically changed our everyday life. The role of computers and other electronic devices has grown rapidly over the past couple of decades and, in turn, has given rise to questions about the very nature of human existence. The spirit of our days clearly differs from the scientism of the 19th and 20th centuries, with its unambiguously optimistic view of ever-continuing progress in science and technology, as global crises now pose evident threats to the existence of civilization and its natural environment. Still, today we also hear the ideas of ‘transhumanism’, whose main point is abandoning classical humanism in favor of the gradual replacement of humans by machines as a more perfect and fruitful creation. It is quite questionable whether such a symbiosis could leave any place for human values, including faith and love, and for the human capacity for a moral attitude towards the world. I would argue that this kind of digitalization could and should result not in philosophical humanism being replaced by some ideology of a human–computer hybrid but, on the contrary, in the further all-round development of humans and humanism.
2. Humanism and Its Many Aspects
What is humanism, then, and how is it possible in the age of digitalization? According to the definition proposed by the International Humanist and Ethical Union, “Humanism is a democratic and ethical life stance that affirms that human beings have the right and responsibility to give meaning and shape to their own lives” [1]. In other words, humanism is the Weltanschauung that asserts the autonomy of the human being: their right and responsibility to be the subject of their own life.
Historically, different aspects of humanism have been brought into focus in the course of the development of human culture and history. In the Renaissance, this position was perceived as secularism, opposing theistic views of human nature and declaring the moral independence of humans from their Creator. At the same time, humanism meant the idea of the completeness of the human being as opposed to various estate and professional limitations: not only secularism as the autonomy of humans from God, but also a way of perceiving a person in their whole nature, and not merely as a nobleman or noblewoman, a blacksmith, or a housewife. In other words, humanism manifests itself here as universalism, asserting the priority of common values and identities over any form of particularism.
A human person is a human person first of all and above everything, and only then are they a representative of a certain profession, class, nationality, ethnicity, gender, religious confession, and so on; in fact, it is that universal humanity that serves as the general ground enabling a person to possess any other particular identity out of the multitude of possibilities. At the same time, humanism remains but an abstraction unless it is concretized in ‘lower’ and more precise identities, as in the famous words of Joseph de Maistre: “Now, there is no such thing as man in the world. In my life, I have seen Frenchmen, Italians, Russians, and so on; thanks to Montesquieu, I even know that one can be a Persian; but as for man, I declare that I have never met him in my life; if he exists, it is entirely without my knowledge” [2] (p. 102). It is true that the human per se does not exist empirically; it does exist, however, as an ideal entity comprising what is common to a Frenchman, an Italian, a Russian, etc.: as the common ground both for being a Frenchman, an Italian, or a Russian and for establishing mutual understanding and common interests among people of different identities. In fact, by bringing all people together in a digital world, current information technologies enable them to obtain such common ground, perhaps for the first time in human history.
At the same time, I would argue that the main idea of humanism is that of democracy: both as the idea of the primacy of the human person and as the idea of the general liberation of humans from the power of external authorities. The latter becomes clear in the Age of Enlightenment, and its philosophical appeal is well expressed in Kant’s famous words about having the courage to use one’s own reason as the motto of the Enlightenment:
“Enlightenment is man’s emergence from his self-incurred immaturity. Immaturity is the inability to use one’s own understanding without the guidance of another. This immaturity is self-incurred when its cause lies not in a lack of understanding but in a lack of the resolution and courage to use it without the guidance of another. Sapere aude! Have the courage to use your own understanding!” [3] (p. 481).
According to the logic of the Enlightenment, such courage should by definition be granted to every human person without exception: freeing themselves, in the course of their development and education, from the power of traditions and authorities, every person learns how to manage their own life and the life of their society without the urge to alienate that natural ability for the benefit of kings, presidents, deputies, or anyone else. In fact, I would argue that the values of both modern democracy (considered not just as political representation but as a culture of human sovereignty) and the classical type of scientific rationality are deeply grounded in that premise of humanism as the moral autonomy of the human person [4]. For example, more than a hundred years ago, John Dewey was one of the first to point out the profound affinity of education, science, and democracy, opposing the idea of science as an esoteric occupation of a handful of ‘initiates’ to its understanding as a public enterprise [5]. However, the 20th century saw the rise of machines that could well change the situation, and the ongoing digitalization presents both risks and opportunities for this approach to humanism, science, and democracy.
3. Computers and the New Enlightenment
The risks mentioned above are the more visible side: while in the Renaissance humanism served as a kind of assertion of humans’ ontological independence from their Creator, today, paradoxically enough, it could be applied to freeing humans from their own creations. The development of information technologies, the proliferation of computer devices, and the spread of the Internet have led to a rather radical transformation of lifestyles and working methods in many areas of human activity. In my opinion, the most important risk of this situation of the information age is the possible confusion of information with human knowledge. While closely related, the two terms are by no means identical in meaning: information can be described as alienated knowledge, depersonalized knowledge, knowledge deprived of its subject-carrier and made available for transfer, for sale, and, in fact, for the use of a machine.
Information itself is based on the presumption of the non-human nature of its recipient: information does not address a person in their human qualities. On the one hand, information as the abstraction of knowledge, being oriented towards communication between human and machine, contributes to the self-alienation of the human person proper, to the transformation of humans into a kind of machine themselves. The role of a person is reduced to the single quality of being a carrier of some information. As a result, there can appear an illusion of omniscience (if all information is readily available at our disposal in, say, “Wikipedia”, then why study anything at all?), an illusion of mathesis universalis (if everything can be digitalized, then everything is computable, and the one who masters computing masters the world), and an illusion of cyberhumanism or transhumanism (if all human knowledge is but information, and computers are better at handling information than humans, then computers can ‘think’ and will gradually replace humans in any field of activity).
I think that all three of these assumptions rest on incorrect premises. As argued already in the 1960s by philosophers who opposed overly optimistic and positivist views of the development of information technologies, a computer could indeed model a human brain; however, it is not the brain that thinks, but the whole human person with the help of a brain. Any technology (be it cybernetic or non-cybernetic) should be considered only as a means, only as a tool for the fulfillment of human goals; as explained recently by Adriana Braga and Robert Logan: “There is a subjective, non-rational (or perhaps extra-rational) aspect of human intelligence, which a computer can never duplicate” [6] (p. 134). A computer can count faster and play chess better than a human person, but that is not something we can call intelligence: it is a very narrow form of intelligence that has little to do with true human nature and, in particular, with creativity.
On the other hand, it is worth reminding ourselves that until the late 1940s, computers actually “were human”, as described by David Grier in his book [7]: ‘human computers’ were mostly women (lower-paid, but usually more accurate workers than men) who performed simple calculations, particularly those associated with military aviation and the development of the atomic bomb in the USA. Moreover, with the emergence of electronic computers, their “operators” were often recruited from yesterday’s human computers, who thus became the first programmers. In fact, such computing is a kind of monotonous, routine labor that machines are definitely better at than humans, while machines are unable to perform many other tasks that humans are capable of. In other words, the digitalization and computerization of our life brings not only risks but also opportunities for humans and humanization; some ‘pro et contra’ are listed in Table 1.
I think that, today, the development of the ICT sphere can actually augment human persons by empowering them with new possibilities. Computers are just tools and means for the development of human knowledge and human personality, and as routine work can now be handled by machines instead of humans, humans are given a chance to become more humane. Thanks to computers, a living person can devote all their free time to truly human work: work in terms of scientific, technical, artistic, and social creativity, which, however, also demands of them a greater responsibility for defining the future of the world itself.
4. Conclusions
The ongoing computerization and digitalization of social activities presents a great challenge for the philosophical concept of humanism. Having historically been an assertion of human autonomy from the Creator and any other external authorities, it now turns out to be a form of questioning the way humans actually depend on their own creations, namely computers and other machines. Still, the risks associated with the digitalization of human life (the possible illusions of omniscience, mathesis universalis, and cyberhumanism) are argued to be grounded in the erroneous assumption that human knowledge can be reduced to information: the problem of human identity, just like any other problem related to values, is a problem of human intelligence, not of a computer. Computing machines are good at doing what they are intended to do, namely monotonous routine labor, while they are unable to do many other things humans are capable of, including critical thinking.
That is why it can be concluded that the development of the ICT sphere can actually augment human persons by empowering them with new possibilities. As humanism refers to democratization as the ability of each unique human personality to act as a subject of their own life and activities (and not as an object of any plans, programs, or curricula) [8], it is computers that can take away inhumane machine labor and free humans for creative work. That optimistic perspective is, for now, a possibility rather than a reality, but it can still be drawn as a conclusion from the philosophical comprehension of the topic discussed.