Multidisciplinarity, Interdisciplinarity, and Transdisciplinarity: The Tower of Babel in the Age of Two Cultures

Despite the continuous emphasis on globalization, we witness increasing divisions and divisiveness in all domains of human activity. One of the reasons, if not the main one, is the intellectual fragmentation of humanity, compared in the title to the failed attempt at building the Biblical Tower of Babel. The attempts to reintegrate the worldview fragmented by the specialization of education (C.P. Snow's The Two Cultures), expected to be achieved through reforms of curricula at all levels of education, were based on the assumption that the design of a curriculum should focus on the wide distribution of subjects of study, as if the distribution were the goal. The key point is not the distribution of themes, but the development of skills in the integration of knowledge. The quantitative assessment of the breadth of knowledge by the number of disciplines is of secondary importance. We cannot expect the miracle that students without any intellectual tools developed for this purpose would perform the job of integration, which their teachers do not promote or demonstrate, and which they cannot achieve for themselves. There are many other reasons for the increasing interest in making inquiries interdisciplinary, but there is little progress in the methodology of the integration of knowledge. This paper is a study of the transition from multidisciplinarity to interdisciplinarity, and further, to transdisciplinarity, with some suggestions regarding the use of the methodological tools of structuralism and the choice of a conceptual framework.


Motivations for the Paper
The subtitle of this paper refers to two memes: the Biblical Tower of Babel, and The Two Cultures by C.P. Snow. Both are expressions of concern about the damage brought by the intellectual fragmentation of humanity. This concern is not new. The first of the memes belongs to ancient Biblical tradition, and the second was born of reflection on the Second World War and the inhumane aspects of modern technology. The revival of interest in the problems they represent comes with the transformation of the world generated by information technology, in particular the rise of Artificial Intelligence (AI) and automation. AI and automation do not have any direct impact on the intellectual fragmentation of humanity, but they amplify the problems created by this fragmentation.
There is one obvious reason why we should worry about the divisions of human collectives large and small. We all face a global ecological crisis of multiple dimensions. The most dramatic is the change in climate, but this is only one facet of the complex problem of the interaction between humanity and its ecosphere. Others, such as the depletion of resources, overpopulation, pollution, destruction of natural habitats, suppression of biodiversity, etc., are less obvious; however, they are equally dangerous threats to the future of humanity. It is not clear whether we can prevent the most serious consequences of these challenges; however, it is very clear that without global cooperation, there is no chance of success. This global cooperation requires some forms of political and economic unity; however, it also requires a united vision of reality, and common values and norms, forming an inclusive global culture of humanity overarching the local cultures associated with diverse ethnic, religious, and regional identities.
The task of achieving even a rudimentary level of the global unity necessary for the coordinated effort to solve the problems of the global ecological crisis is very difficult. There are too many local, short-term interests involved and too many centrifugal forces opposing unification. However, there is no alternative to the continuing effort of propagating the smaller-scale forms of unification. One of them is education. The role of education is not as obvious and simple as might be expected. The traditional role of educational systems is primarily the perpetuation of existing culture and the promotion of the political interests of those who control these systems. Fortunately, education has become interdependent with a more general endeavor of scholarship, engaging people who want to learn and to share what they have learned. The young audience of education has always been more than eager to challenge the inherited products of the culture of their predecessors and to pursue their own goals. This has created an unusual role for higher education, not only as a mode of transmission from the past to the future, but also from the future to the present.

Objectives of the Paper
One of the goals of this paper is a broad reflection on the conditions necessary for the future of education, education for the future, and above all, the direction of its evolution in the context of the need to build a common stage for a coordinated effort to resolve the problems faced by humanity. This common stage is a unified view of reality, for which we can use the Tower of Babel as a symbol, and whose pursuit is the main concern of this paper. No discussion of the future can be fruitful without reference to the present. To think about the ways of unifying human intellect, we have to consider the reasons for the centrifugal forces that push in the direction of fragmentation, symbolized here by The Two Cultures. Thus, instead of speculation about the future, this paper is an attempt to assess the past and present status of intellectual centrifugal and centripetal forces, which can inform attempts to design education that supports the goal of achieving unity.
The centripetal forces, not as new as usually reported, can be aligned with the three stages of inquiries transcending the borders of disciplinary divisions; these are identified in the literature as multidisciplinary, interdisciplinary, and transdisciplinary studies. An attempt will be made to clarify their meanings and each of their roles. Finally, some suggestions will be presented regarding the methodology of the highest of the three forms of inquiry, in particular regarding the intellectual experience of structuralism. The purpose of invoking structuralism is to demonstrate the feasibility of a transdisciplinary methodology bridging the Two (or more than Two) Cultures.

The Two Cultures and Other Divisions
The story of the Tower of Babel, which explains the diversity of human languages as a means of disrupting, through confusion and mutual misunderstanding, the blasphemous attempt to reach Heaven, can be applied to every domain of current human endeavors. Differences in religious beliefs stimulate violence and hatred, contradicting the precepts of religious conduct. Claims of unfair international treaties stimulate withdrawals from cooperation and a reversion to the rule of military or economic power, completely ignoring the interests of weaker societies. Democratic forms of governance are blamed, without any rational explanation, for the allegedly dysfunctional economic or social organization of society; this is done to justify the exclusion of minorities and other vulnerable communities from access to resources such as education, work, health care, and income, and to solidify political power gained through the support of those who are privileged or who are promised privileges. Accusations of lying, directed at opponents, are used as justification for the inconsistency of one's own statements. It seems that everything that made humanity closer to Heaven, or that was intended as a path in its direction, is now disrupted by the fragmentation of interests, mistrust, and a complete lack of mutual understanding.
Of course, many people and organizations oppose these frightening tendencies, and the tendencies themselves are not new. The principle of divide et impera (divide and conquer) goes back at least to Julius Caesar, if not to Philip II of Macedon; since those times, it has been the main political tool for European (and not only European) sovereigns to maintain their monopoly of power. Others have not needed to learn this doctrine to use it effectively. History has a long record of the calamities caused by allowing enemies to destroy alliances, but this does not make the principle less effective in its destructive mission.
In the story of the Tower of Babel from Genesis, the division was intentional, meant to prevent the insubordination of the human race. Politicians use such division with the clear intention of gaining or maintaining power. This is why the second meme of The Two Cultures is more appropriate when we consider another form of fragmentation, that among contemporary intellectual elites. In this case, the process is not driven by someone's intention, although it is sometimes used for political purposes. The meme was born from an article written by Charles Percy Snow in 1956 for The New Statesman. It was later developed into his 1959 Rede Lecture at the University of Cambridge, and finally, into his famous book. All of these addressed the same subject: the division and growing distance between The Two Cultures, that of the humanities and that of science and technology [1,2]. The complicated background of educational and cultural politics in post-war Britain that motivated Snow, and that made his publications so prominent, is irrelevant here [3]. After all, Snow's critique of the increasing separation of The Two Cultures was not very innovative when we look at the developments in the world.

Elusive Goals of General Education
The famous 1945 Red Book of Harvard (General Education in a Free Society: Report of the Harvard Committee) blamed the atrocities of the Second World War on the deficiencies of an education which, in the pursuit of depth of knowledge, pushed for a higher level of specialization at the cost of neglecting alternative domains of study [4]. The faculty of Harvard University proposed General Education as a required component of the lower-level curriculum, in which students study topics complementary to their future major concentration to broaden their intellectual perspective. By now, General Education (GE) is a fixed element of American college education in virtually every undergraduate-level institution of higher education. The ideals of education promoted a century earlier by Wilhelm von Humboldt were already deeply ingrained in American higher education, even before the initiative of the Harvard Committee. However, this common but purely formal GE requirement of completing a couple of credits in introductory science courses or the humanities is only a faint shadow of the original idea. This is especially true of the innovative content of The Red Book, which planned specially designed courses, introducing students to alternative ways of inquiry and inviting them to reflect on the similarities and differences in ways of thinking. An example of this type of education can be found in the 1957 book The Copernican Revolution: Planetary Astronomy in the Development of Western Thought, containing lecture notes from the course taught at Harvard in the late 1940s by the young Thomas Kuhn [5].
On both sides of the Atlantic Ocean, a cure was seen in the distribution of required subjects at some stage of education, especially in the education of future elites. Within the next fifty years, General Education in several countries of the world went through periods of enthusiastic support and periods of negligence and suppression. The latter were associated with calls to educate the highly qualified specialists necessary for high-tech industry, the management of production, or health care. After all, the time spent on studying literature or art could instead be used to raise the level of professional education, whose constantly increasing specialization required additional time in the curriculum.
The pressure on students in the humanities or social sciences was also increasing, but the bigger problem here was the ineffectiveness of the attempts to increase the level of scientific literacy. The main reason for this failure was, and still is, the necessity to involve a higher level of abstraction and mathematical skills; the development of this at the college level in students who do not major in science requires too much time to be acceptable for typical curriculum development.

Error of Waiting for the Miracle
The actual source of the problems in achieving the ideal of comprehensive and unified education was always overlooked. All attempts were based on the assumption that the design of curricula should focus on the wide distribution of subjects of study, as if the distribution were the goal. The key point is the development of skills in the integration of knowledge. The quantitative assessment of the breadth of knowledge, measured by the number of disciplines covered in the curriculum, is of secondary importance. We cannot expect the miracle that students without any intellectual tools would perform the job of integration, which their teachers do not promote and which they most likely cannot achieve for themselves. This is the reason we have to start from the identification and development of intellectual tools for the unification of knowledge. Only after these tools are available can we think about a design for education in which students acquire the ability to integrate the pieces of curricular knowledge, competencies, and abilities into their individual intellectual armor, making them autonomous.
The unresolved issue of the increasing intellectual fragmentation of humanity received an additional dimension with the entirely new sources of social, cultural, economic, and technological transformation brought by computer technology and based on the communication technology of the Internet. These new, revolutionary technologies changed the meaning of the idea of cultural fragmentation, making it much more complex. The Internet accelerated the processes of globalization, i.e., to a high degree, it abolished the established forms of spatial division, such as political borders or geographic regionalization. However, the idea that the world became more uniform or more connected is a gross oversimplification. The divisions did not disappear, but they changed.

Borgesian Library of the Internet
At first sight, the Internet revolutionized access to information and, therefore, created unprecedented learning opportunities. Today, the majority of people in industrialized societies have at hand knowledge accumulated over centuries, including the most recent results of research. Search engines make it easy to access immense intellectual resources in the search for answers to all possible questions. Does this bring us closer to the dream of an informed and educated citizenry?
Liberal arts education, which shaped the ideal of learning in Western civilization, had, as its core, the development of intellectual autonomy; this was understood as the ability to make informed, rational judgments and decisions, independently of the opinions or authority of others. When someone has direct access to such extensive resources, this ideal seems easy to achieve. However, this is only an illusion of accessibility. The external barrier to the record of knowledge was eliminated. Everyone can enter the Internet, which is a slightly simplified version of Jorge Luis Borges' repository of all possible knowledge, described in his short story The Library of Babel [6]. The tower of the library, with its innumerable stories, held the collection of all possible 410-page books. The only task in the search for the answer to any question was to find the book that provides it, which, of course, was impossible. This does not mean that the task is much easier on the Internet, with its overflow of information and its promotion of sources willing to pay for placement at the top of the lists provided by search engines.
The Internet created a highly democratized "market" of free expression. Everyone can post their works and make them accessible to a very wide audience. This cyberspace Hyde Park Corner has many advantages. For instance, the prospect of becoming a contributor to collective knowledge stimulates some people to conduct research in their leisure time and to share its results. This comes at the price of making it very difficult for someone without extensive experience to distinguish between rigorous, well-organized, and well-documented research and the products of ignorant self-confidence. To be sure, this is not a new phenomenon that came with Internet blogs and influencers. Alexander Pope, in his 1709 Essay on Criticism, warned us that "A little learning is a dangerous thing" and "Fools rush in where angels fear to tread" [7]. It is questionable whether the academic affiliation of an author gives more credibility to the content of blogs. Very often, extensive blogging is a substitute activity for members of the academic world who are frustrated by their inability to pass through (not necessarily fair) peer review in research journals. Even those who pass through this obstacle sometimes easily trespass into domains where they are not qualified to claim authority.
There is no simple recipe for the selection of research resources. This is one of the tools acquired through academic apprenticeship in the work of developing knowledge. Authentic knowledge requires some form of structural unity which cannot be found in or on the Internet. This does not mean that Internet resources are useless, but that they become useful only for someone with sufficiently structured intellectual capacities. Information technology changed the range of skills necessary for being an autonomous thinker. Contemporary education can eliminate the exercises that develop skills in memorization, and it can focus on skills for the integration of information. The question now is whether the only problem in designing educational programs is overcoming the separation of Snow's Two Cultures. Do we have two cohesive domains of intellectual activity with organized systems of values, norms, and procedures, as Snow suggested? Or is the division into the Two Cultures a tip-of-the-iceberg symptom of a deeper and more fundamental fragmentation? When we use the term "science", does it have a clearly delineated, commonly agreed meaning? Or is it, rather, just a convenient short description of a variety of human activities with some similarities and some differences of equal importance?

The Lost Unity of Natural Philosophy
The idea of the unification of knowledge accompanied the European intellectual tradition from its beginnings in Mediterranean Antiquity, in the form of attempts to compile all knowledge and organize it in some way. Examples include the works of Aristotle, Simplicius of Cilicia, and Albert the Great, who was called in his time Doctor Universalis for his extensive knowledge. The individual efforts of distinguished philosophers, whose knowledge spanned the entire scientific knowledge of their generations, eventually had to be replaced by collective work initiated and coordinated by a few enthusiasts (for example, Denis Diderot and Jean le Rond d'Alembert, who edited the Encyclopédie, which had many contributors).
The task itself of collective work on scientific programs, originally promoted by Francis Bacon, was oriented more towards the effective sharing of the effort of creating knowledge within separate domains of individual interest than towards an organization of diverse disciplines with separate conceptual frameworks into structured units. The task of unification, even in the time of Immanuel Kant, belonged to the individual person equipped with the exceptional ability to span various directions of inquiry. Only in the 19th century did the specialization of scientific work become reflected not only in the choice of the direction of inquiries, but also in their consolidation into separate, independent scientific disciplines, displacing the general idea of Natural Philosophy. The separation was stimulated by the organizational changes in European universities, initiated by Wilhelm von Humboldt, who propagated the idea of education through engagement in research. Universities gradually transformed from institutions where students learned about science into institutions where they participated in its creation, or at least observed it.
The division of Natural Philosophy into scientific disciplines, although not invented by Auguste Comte, was propagated by him in his writings from the 1830s, collected into The Course in Positive Philosophy; this came together with their organization into a linearly ordered ladder hierarchy, with mathematics on the first rung, followed by astronomy, physics, chemistry, and biology, up to physique sociale (sociology). The disciplines were arranged according to their perceived levels of exactness (positivity), reflecting the reversed order of their complexity.
Comte's classification of the disciplines of science was epistemic rather than ontological (these now-standard philosophical terms are used here anachronistically, as they were introduced only later, in 1854, by James Frederick Ferrier in his Institutes of Metaphysic: The Theory of Knowing and Being) [8]. The order of the ladder of disciplines was based on the qualification of knowledge corresponding to the complexity of phenomena, but without the claim that phenomena at one level can be reduced to those of another level. He did not claim that astronomy or physics can be reduced to mathematics, but rather that mathematical methods are tools for these disciplines. The reductionist view of reality that emerged in the 19th century required an additional ontological step, which came with the attempt to justify synthesis: unity was now based on the assumption that phenomena of a higher level can be reduced to phenomena of the lower levels.
The emergence of separate disciplines of science, and of specialization within them, raised from the very beginning concerns about the loss of the meaning of knowledge fragmented by specialization. These concerns were already explicitly expressed by William Whewell in the March 1834 issue of the Quarterly Review, in an enthusiastic commentary on On the Connexion of the Physical Sciences, published by Mary Somerville. It was in this commentary that he introduced, in written form, the term "scientist"; he sometimes used this term in discussions as a rather satirical name for those engaged in the pursuit of truncated, specialized knowledge, long before it started to be used with a more respectful meaning [9].
The sense of the lost unity of Natural Philosophy generated several different attitudes to the issue. This is too extensive a subject to elaborate on here. Comte's hierarchic structure of disciplines initiated diverse forms of reductionism, including its ontological forms. Its extreme form, which acquired its name only in the 20th century, was physicalism. It can be identified by the use of the terminology of physics to address philosophical ideas outside of the discourse on phenomena directly studied in the theories of physics (e.g., energy, force, matter), and even more clearly in the use of the adjective "physical" as an ontological qualification of existence (e.g., "physical space"). This ontological reductionism and its extreme version of physicalism can be understood as one of the possible ways in which to re-establish the unity of the scientific vision of reality.
It is one of the ironies of intellectual history that, after a century of attempts to eliminate physicalism from the philosophical vision of reality, it is, according to a recent survey, the dominant position among philosophers [10]. Surprisingly, it is physicists who, with full confidence, object to physicalist reduction, or at least to its traditional forms. This critique of physicalism is not always completely free from reduction, but it is free from reductionism in its classical understanding, as presented by Nagel [11]. The best example would be the view presented by Philip W. Anderson in his excellent explanation of the non-reductive forms of complexity, in the short but very influential 1972 article More Is Different [12]. By now, the emergent character of the higher forms of complexity, not only in the domain of phenomena studied by physics, is commonly acknowledged by physicists. However, this does not mean that overcoming physicalism resolves the issue of the fragmentation of the scientific vision of reality.
Yet another irony of intellectual history lies in the role of the theory of the biological evolution of species, published by Charles Darwin in his 1859 On the Origin of Species. For those who believed that life can be explained in terms of physicochemical processes, it stimulated interpretations aligned with reductionism, by removing the barrier separating humanity, identified with the mental aspects of reality, from other forms of life, identified with the material aspects. On the other hand, the theory of evolution stimulated interest in the unity of biological systems with its symbolic concept of the organism, which was absent from the subjects of study of the physical sciences. Positivistic thought moved from Comte's "physics of society" to "society as an organism".
The extraordinary popularity of the views of Herbert Spencer, in particular, his Synthetic Philosophy, is an example of confusion in the intellectual circles of the 19th century. At first glance, the Synthetic approach, as presented in his First Principles (1862), seems an attempt to restore the unity of the scientific vision of the entirety of reality governed by a few universal laws.
Spencer proposed three principles: the Law of the Persistence of Force, the Law of the Instability of the Homogeneous, and the Law of the Multiplicity of Effects [13]. The universe is a subject of permanent action of non-uniform forces driven by energy. An originally homogeneous universe is in a constant transition to an increasingly heterogeneous state, because force acts in an unstable and variable way in different parts of the universe. The third law, the Law of the Multiplicity of Effects, states that the heterogeneity of the universe grows exponentially, accelerating the transition from homogeneity up to some state of equilibrium, after which it will return to homogeneity.
We know, in hindsight, that Spencer's Principles were doomed by the arrival, around the same time, of the Second Law of Thermodynamics, which established the equally universal but opposite direction of the transformation of all closed systems, including the universe. Moreover, the Principles involved many inconsistencies. For instance, we can identify one in the force which acts unevenly and non-uniformly, yet directs the universe in some specific direction. If we want to avoid this inconsistency, we have to question the possibility of any science, as such an action of force makes it impossible to discover regularities. Some followers of Spencer (progressionists and supporters of orthogenesis) maintained that force has some teleological meaning. Whatever the interpretation of the Principles, they have no more explanatory power than his (in)famous "survival of the fittest".
The great popularity of Spencer's ideas can be interpreted as the result of the need for a systematic organization of the view of reality fragmented by diverging scientific disciplines; at the same time, it can be interpreted as a defense against reductionist tendencies that seemed to oversimplify relations between different domains of study. In particular, we can see in it the need to find tools for the understanding of complexity.

From Unity of Science to Holism
Probably the most radical opposition to the reductionist vision of reality, accompanied by proposals of radically new methodology, came from biology. After the initial success in merging some methods of research on life with chemistry into biochemistry, more and more problems appeared in the reduction of the study of life to the concepts of disciplines from the lower rungs of the traditional ladder of science. It is not an accident that the calls for an entirely new methodology focused on the organic unity of complex systems came from biologists.
The concept of holism was introduced not by a biologist, but by a lawyer and politician with interests in botany. Jan Smuts, in his 1926 book Holism and Evolution, was influenced by biology [14]; however, his holism was intended as a general methodology for the study of complex systems that form wholes irreducible to their components. Smuts did not attempt to provide philosophical foundations or a clearly defined methodology for holism. As he wrote in the preface to the second (1927) edition of his book: "I recognize that there is a Metaphysic or Logic of Holism which has still to be written; but it is not for me to write it" [14]. The task of building philosophical foundations for holism was undertaken more than two decades later by Ludwig von Bertalanffy in his General Systems Theory (GST). His work generated great hope for a new chapter in the philosophy and methodology of science [15]. It is not a surprise that von Bertalanffy was a biologist working on mathematical models of the growth of organisms. His work in biology retained its relevance and value over the decades, but his General Systems Theory did not acquire a clearly defined methodology that could unify studies of complex systems. Its importance manifested more in the stimulation of searches for new methodologies than in being, itself, a clearly defined and codified methodology that guides and organizes research.
The General Systems Theory had a very wide range of followers; some of them, such as Anatol Rapoport, Kenneth E. Boulding, William Ross Ashby, Margaret Mead, Gregory Bateson, and more recently, Ervin Laszlo, made important contributions to their fields of study, inspired by the idea of the holistic approach. Unfortunately, GST has frequently been reduced to the trivial apocryphal quotation, supposedly from the Metaphysics of Aristotle: "The whole is more than the sum of its parts." This easy-to-remember and easy-to-repeat slogan attracted the attention of people who searched for shortcuts to philosophy and wisdom, giving GST its prominence; however, it also gained bad publicity when the authors of wild speculations tried to support their ideas by referencing the work of Bertalanffy. Of course, we cannot blame him for giving inspiration to questionable intellectual enterprises. After all, he promoted an idea that opened a new chapter in scientific inquiry.
The idea of reorienting scientific methodology to include holistic aspects of complex systems, initiated by Bertalanffy in the 1950s, resurfaced later in many different ways. The authors of similar scientific or philosophical initiatives did not always refer to GST or give credit to Bertalanffy; they might not, in fact, have been familiar with his written works. However, it is hard to believe that they were not influenced by the intellectual trend of the holistic way of thinking.
The philosophical reflection on the part-whole relation was the central theme of the 1967 book by Arthur Koestler, The Ghost in the Machine [16], which introduced the concept of a holon, something which is simultaneously a part and a whole. In this view, neither parts nor wholes exist anywhere in an absolute sense; there is only a holarchy, understood as a hierarchy of self-regulating holons which have some level of stability and autonomy. As an example of a holon, Koestler gives the human being, stating: "No man is an island-he is a holon. A Janus-faced entity who looking inward, sees himself as a self-contained unique whole, looking outward as a dependent part" [16].
The systemic way of thinking was not restricted to philosophy. We can find its reflection, for instance, in the work of Robert Rosen, who followed his teacher, Nicolas Rashevsky, in the development of relational biology. Rashevsky engaged in the development of a new mathematical approach to biology, starting with the change in the language of biological discourse [17]. His Generalized Postulate of Relation Forces was as follows: "The development of organismic set proceeds in such a manner as to maximize the total number of relations and the number of different kinds of relations during the total course of development" [18]. The novelty of this approach was in the direct involvement of mathematics in the study of living organisms and in the structural analysis describing, in a clearly defined way, the idea of the unity of biological systems.
Rosen's approach was similar, although he employed a different mathematical formalism: category theory [19]. The choice of category theory was dictated by Rosen's interest in self-reference as the main characteristic of life, distinguishing it from the subjects of study in physics [20,21]. He believed that category theory could help him avoid the destructive logical consequences of self-reference.
Philosophy and the science of life received an intellectual tool of exceptional importance in the concept of autopoiesis, introduced by Humberto Maturana and Francisco Varela [22]. The idea of the fundamental and distinctive characteristic of life expressed as self-creation goes much further than the cybernetic concept of self-regulation, and its influence on the understanding of life was, and is, immense. However, we can ask whether the entire direction of holistic, systemic, and autopoietic studies takes us closer to a cohesive and uniform scientific worldview. The fact that we are studying complex phenomena, which require consideration of wholes irreducible to their parts, does not mean that our view of reality is more cohesive, or that our methodology is more uniform and consistent.
There is no simple, straightforward answer to the question of whether the holistic, systemic approach belongs to the centrifugal or centripetal tendencies in human knowledge. On one hand, we get closer to resolving one of the most fundamental and challenging problems, that of mind-body duality, which opened the precipice between the humanities on the mental side and the natural sciences on the side of the body. Embodied cognition, described in terms of life and its scientific characteristics, which escape epistemic reduction to physics, brings us closer to a uniform vision of reality studied by a uniform science. However, this uniformity of science can be, and actually is, questioned. What contributions has this systemic direction of studies made to physics? Can we bridge the mind-body precipice when we still claim that, to cross the other border between physics and the life sciences, we have to change our intellectual tools? What constitutes the authentic unity of knowledge?
The last question is sometimes answered without direct dependence on the systemic or holistic way of thinking. An example can be found in the two books published by Edward O. Wilson, the first in 1998, Consilience: The Unity of Knowledge [23], and the second in 2014, The Meaning of Human Existence [24]. Both are more attempts at the reconciliation of the sciences than methodological proposals for the integration that the title of the first promises. The motif of the unification of knowledge is too broad to discuss or review more elaborately in this paper. However, even this fragmentary account of the centripetal tendencies relevant to the study of transdisciplinarity would have been incomplete without mentioning Gregory Bateson and his 1979 Mind and Nature: A Necessary Unity [25].

Interdisciplinary vs. Multidisciplinary Studies
The typical answer to the question about achieving unity of knowledge is that the solution can be found in so-called interdisciplinary studies. There have been examples of (to some degree) successful mergers of disciplines, such as the previously mentioned merger that established the discipline of biochemistry. There have been diverse research organizations that included the expression "interdisciplinary" in their names. More recently, the term "interdisciplinary" has become fashionable and is applied in a wide range of contexts, always in the sense of broadening the perspective of knowledge; however, there are rarely any attempts to define it or to make claims of interdisciplinarity accountable.
The reflection on the experience of these attempts at interdisciplinarity produced the distinction of three forms of crossing the borders of disciplines: multidisciplinarity, interdisciplinarity, and transdisciplinarity. The distinction between the first two became a standard, commonly accepted recognition of the failed programs of interdisciplinarity that had promised too much. For instance, Peter van den Besselaar and Gaston Heimeriks write that in multidisciplinary research, "the subject under study is approached from different angles, using different disciplinary perspectives. However, neither the theoretical perspectives nor the findings of the various disciplines are integrated in the end"; meanwhile, interdisciplinary research "creates its own theoretical, conceptual and methodological identity. Consequently, the results of an interdisciplinary study [. . . ] are more coherent and integrated" [26].
Bernard C.K. Choi and Anita W.P. Pak express the distinction as follows: "[Multidisciplinarity] draws on knowledge from different disciplines but stays within their boundaries. [Interdisciplinarity] analyzes, synthesizes and harmonizes links between disciplines into a coordinated and coherent whole. [Transdisciplinarity] integrates the natural, social and health sciences in a humanities context, and transcends their traditional boundaries" [27].
Similar but different distinctions, based more on the level of mutual interdependence of the subjects of study, had long been present in discussions of the differences between cross-disciplinary and interdisciplinary studies. These were understood as a distinction between studying diverse instances of phenomena and studying these phenomena in their mutual interactions, for instance, the distinction between cross-cultural analysis and intercultural analysis, which require different methodologies.

Transdisciplinary Inquiries
Transdisciplinarity is also not a new concept and was, from the beginning, more a methodological program for the future than an existing methodology. It appeared first in the early 1970s, in a vision of such a program propagated by Jean Piaget: "Finally, we hope to see succeeding to the stage of interdisciplinary relations a superior stage, which should be 'transdisciplinary,' i.e., which will not be limited to recognize the interactions and/or reciprocities between the specialized researches, but which will locate these links inside a total system without stable boundaries between the disciplines" [28].
The triad of the approaches to crossing disciplinary borders was described by Basarab Nicolescu as follows: "Multidisciplinarity concerns itself with studying a research topic in not just one discipline but in several simultaneously. From this perspective, any topic will ultimately be enriched by incorporating the perspectives of several disciplines. Multidisciplinarity brings a plus to the discipline in question, but this "plus" is always in the exclusive service of the home discipline. In other words, the multidisciplinary approach overflows disciplinary boundaries while its goal remains limited to the framework of disciplinary research. Interdisciplinarity has a different goal than multidisciplinarity. It concerns the transfer of methods from one discipline to another. Like multidisciplinarity, interdisciplinarity overflows the disciplines, but its goal remains within the framework of disciplinary research. Interdisciplinarity even has the capacity of generating new disciplines, such as quantum cosmology and chaos theory.
Transdisciplinarity concerns that which is at once between the disciplines, across the different disciplines, and beyond all disciplines. Its goal is the understanding of the present world, of which one of the imperatives is the unity of knowledge" [29,30]. We can see that Nicolescu's view of transdisciplinarity is strikingly close to what we are looking for in the present paper. However, even if the goals are similar, there are substantial differences; as the popular saying goes, "the devil is in the detail." Nicolescu presents his program of transdisciplinarity in what he claims to be an axiomatic approach: "After many years of research, we have arrived at the following three axioms of the methodology of transdisciplinarity:
1. The ontological axiom: There are, in Nature and society and in our knowledge of Nature and society, different levels of Reality of the Object and, correspondingly, different levels of Reality of the Subject.
2. The logical axiom: The passage from one level of Reality to another is ensured by the logic of the included middle.
3. The complexity axiom: The structure of the totality of levels of Reality or perception is a complex structure: every level is what it is because all the levels exist at the same time" [30].
The program proposed by Nicolescu requires very strong and radical commitments, and at the same time offers little beyond very general principles that refer to undefined concepts of levels of reality, multivalued logic, and complexity. Lacking these specifics, it cannot be considered a methodology of transdisciplinarity. Axiomatic systems require a much higher level of specific identification of their primitive concepts, or alternatively, very precise definitions of the conceptual framework. On the other hand, we cannot expect that a very disruptive approach, excluding the accumulated knowledge of the past acquired within disciplines, can serve this purpose.
Another example of a program intended as support for transdisciplinarity can be found in the works of Søren Brier on cybersemiotics. Brier refers directly to transdisciplinarity in his articles, and provides a comprehensive description of his vision of the role of cybersemiotics in overcoming the barriers between the biological and social realms [31][32][33][34]. In Can Cybersemiotics Solve the Paradox of Transdisciplinary Knowing? Brier writes: "My major claim is that combining Luhmann's system theory with biosemiotics provides a new transdisciplinary framework, which is an alternative to 'the unity science' of positivism on one hand, and post-modernism on the other. I advocate Cybersemiotics as a multidimensional semiotic constructive realism, the point of which is that signs as concepts and classifications arise in our embodied biological and social 'life forms'. For our understanding of meaning production, a concept has to have a phenomenological and emotional constitution; there is therefore no good reason why the inner world of cognition, emotions, and volition should not be accepted as just as real as the physical world as well as our cultural world of signs and meaning" [34]. In the slightly different context of informational transdisciplinarity, in the article Can Cybersemiotics Solve the Problem of Informational Transdisciplinarity?, he writes: "Cybersemiotics attempts to combine a systemic and a semiotic view trying to amend the shortcoming of the above described transdisciplinary models into a model that is not a totalitarian mechanistic, algorithmic or physicalistic reductionism and on the other hand is not a constructivist relativism giving up any scientific truth claims. Cybernetics and systems science attempt to overcome these problems through their dynamic theory of emergence, where, like in dialectical materialism, new qualities arise in systems development or when two types of systems are integrated" [34].
We can see that Brier's work is focused on bridging social and biological inquiries by finding an alternative method that avoids the extremes of reductionism and constructivist relativism. However, Brier does not provide a distinction between interdisciplinarity and transdisciplinarity, and does not present methodological tools that can be used outside the context of socio-biological research. He considers the two forms of synthesis just parts of the same tendency, stating that "There is a long history of striving towards inter- and trans-disciplinarity in the sciences, from Newton and Laplace through Comte and the logical positivists" [34]. It is not clear whether the cybersemiotic approach can benefit other disciplines of science, or what methodologies it can offer for a unified view of reality.

From Multidisciplinarity and Interdisciplinarity to Transdisciplinarity
Someone could question the importance of the distinctions between different forms or levels of synthesis. However, the distinction has practical meaning: it gives both a direction for the search for methodological transformations and a measure of achievement. Catherine Scott and Anne Hofmeyer give their very practical perspective on this issue: "It is timely to develop improved understandings about strengthening interdisciplinary contexts to guide effective and quality healthcare research; contexts in which health and social issues occur do not recognize disciplinary boundaries. Similar to the notion of "partnership", the terms multidisciplinary, interdisciplinary and transdisciplinary are in danger of becoming conceptually indistinct and thus of limited usefulness for researchers, practitioners and teams" [35].
Although there is no consensus about the meaning of transdisciplinarity, despite frequent calls for its establishment, we can conclude that it is the expected highest stage of research activity, one that requires intellectual tools common to all disciplines, serving the purpose of unification. The most important feature of these tools should be that they facilitate the transfer of results and methods between different disciplines, and that they have the integrative power to unify fragmentary models of reality into a larger whole.
We have seen that the third, highest form of crossing the borders of disciplines is desired and expected, but it is far from being implemented. This does not mean that the lower level of interdisciplinarity is easily achieved, or that we can consider science unified even at that level. Here, we can use as an example the case of cognitive science, analyzed in an excellent study by Rafael Núñez, Michael Allen, Richard Gao, Carson Miller Rigoli, Josephine Relaford-Doyle, and Arturs Semenuks, with the title What Happened to Cognitive Science? [36].
A dictionary definition, for instance in the Merriam-Webster Dictionary, describes cognitive science as "an interdisciplinary science that draws on many fields (such as psychology, artificial intelligence, linguistics, and philosophy) in developing theories about human perception, thinking, and learning." There are institutes and departments of cognitive science, and academic organizations with this science in their names. However, the identity of this science is not as obvious as it may seem. In 2003, George A. Miller, commonly considered one of the founders of the discipline, recalled the early times when cognitive science emerged in the 1950s, and when, in the 1970s, it received formal recognition in the program sponsored by the Sloan Foundation: "I argued that at least six disciplines were involved: psychology, linguistics, neuroscience, computer science, anthropology and philosophy. I saw psychology, linguistics and computer science as central, the other three as peripheral. These fields represented, and still represent, an institutionally convenient but intellectually awkward division. Each, by historical accident, had inherited a particular way of looking at cognition and each had progressed far enough to recognize that the solution to some of its problems depended crucially on the solution of problems traditionally allocated to other disciplines. The Sloan Foundation accepted my argument [. . . ]" [37].
In the conclusion of his reminiscences, Miller wrote about his view of cognitive science in 2003: "Some veterans of those days question whether the program was successful and whether there really is something now that we can call 'cognitive science'. For myself, I prefer to speak of the cognitive sciences, in the plural. But the original dream of a unified science that would discover the representational and computational capacities of the human mind and their structural and functional realization in the human brain still has an appeal that I cannot resist" [37]. Thus, it is not cognitive science, but cognitive sciences. The extensive study carried out by Núñez and his colleagues gives very clear confirmation of this diagnosis.
"More than a half-century ago, the 'cognitive revolution', with the influential tenet 'cognition is computation', launched the investigation of the mind through a multidisciplinary endeavour called cognitive science. Despite significant diversity of views regarding its definition and intended scope, this new science, explicitly named in the singular, was meant to have a cohesive subject matter, complementary methods and integrated theories. Multiple signs, however, suggest that over time the prospect of an integrated cohesive science has not materialized. Here we investigate the status of the field in a data-informed manner, focusing on four indicators, two bibliometric and two socio-institutional. These indicators consistently show that the devised multi-disciplinary program failed to transition to a mature inter-disciplinary coherent field. Bibliometrically, the field has been largely subsumed by (cognitive) psychology, and educationally, it exhibits a striking lack of curricular consensus, raising questions about the future of the cognitive science enterprise" [36].
To obtain a more comprehensive view of present-day science and its cohesion through multidisciplinary, interdisciplinary, and transdisciplinary studies, we need many more studies of this type about attempts to form interdisciplinary fields. However, even this single example shows that overcoming the fragmentation of scientific inquiries is a formidable task that requires very broad and deep foundations, especially in the form of a clearly defined methodology.
In the absence of an example of a mature, fully developed transdisciplinary study, we can only try to conceive an outline of the necessary methodological tools. The starting point could be the question about the conceptual and methodological framework of Natural Philosophy. What was the glue that held it together and gave it an identity? There are many possible answers. For instance, the role of empirical methods could be considered, combined with the increasing use of the quantitative description of phenomena. However, empirical methods do not make much sense without some form of realism, and the quantitative description of phenomena, perceived by Comte and many others as evidence of exactness, can impress only novices. In my view, the key is in the balance and correspondence between ontology and epistemology, more specifically, the correspondence between the structures that define and describe entities and epistemological structures (logical, mathematical, and other). Empirical methods are just ways to control this correspondence. This is the reason why, in the following, we will explore the intellectual experience brought by structuralism in its specific version associated with symmetry. After all, it was in the attempt to use structuralism as the common methodology of The Two Cultures that the Tower of Babel of human knowledge was closest to Heaven.

Symmetry Study as Methodology for Structural Analysis in Science
About half a century ago, for the first and last time, there was hope for building a bridge across all disciplines, including the humanities, science, and mathematics, in the form of structuralism. The story of structuralism is probably the closest parallel to the Biblical story of The Tower of Babel.
It is difficult to find a term of more universal use in all contemporary domains of human inquiry than "structure". This does not mean that people who use this term can define it or explain its meaning. If asked about the meaning, people would refer to the explanation from a dictionary: that a structure is an organization or arrangement of parts, elements, or constituents of a complex system, considered from the point of view of the whole rather than of any particular part. The terms "organization" and "arrangement", in turn, are explained in dictionaries with the use of the word "structure". Once we restrict the term structure to a particular discipline, we can find an increasingly precise definition of its meaning. The concept of a structure is intertwined with another concept, that of symmetry. Both were born in the mid-19th century to replace the traditional general ideas of form and harmony, respectively. The marriage of these concepts had, as its offspring, the formalization of the concept of structure in terms of symmetry, understood as invariance with respect to transformations.
The question about the common meaning of the geometric structures considered in the different forms of geometry led to the modern general methodology of structural analysis in mathematics and the mathematical sciences. This methodology involved the concept of symmetry. Up to the late 18th century, symmetry was usually conceptualized as a harmony of proportions [38]. It may be surprising that, although its modern understanding can be traced to earlier times, the precise definition of symmetry in its simplest, mirror type was provided by Ernst Mach only in 1872 [38], the same year the entire methodology of symmetry study was born in the Erlangen Program of Felix Klein [39]. Klein proposed a new paradigm of mathematical study focusing not on its objects, but on their transformations. His mathematical theory of geometric symmetry was understood as an investigation of invariance with respect to transformations of the geometric space (a two-dimensional plane or higher-dimensional space).
Klein used this very general concept of geometric symmetry for the unification of different types of geometries (Euclidean and non-Euclidean), and for the classification of different geometric structures within these geometries. The fundamental conceptual framework of Klein's Program (which was intended by Klein as a paradigm of study, and became such a paradigm on a scale he did not expect) was based on the scheme of (1) space as a collection of points → (2) the algebraic structure (group) of its transformations → (3) invariants of the transformations, i.e., configurations of points that do not change as a whole, while their points can be permuted by transformations. Selections of algebraic substructures (subgroups) of transformations correspond to different types and levels of invariant configurations, allowing the differentiation and comparison of structural properties associated with symmetry. The classical example of mirror symmetry (symmetry with respect to the surface of the mirror) can be identified with invariance with respect to the mirror reflection, understood as a transformation of the entire space.
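Klein's three-step scheme can be illustrated with a minimal computational sketch (a hypothetical toy example of my own construction, not part of Klein's program): take the plane as the space, the two-element group generated by the mirror reflection across the vertical axis as the transformation group, and test which configurations of points are invariant as wholes while their individual points are permuted.

```python
# Toy illustration of Klein's scheme: space -> transformation group -> invariants.
# The group here is {identity, reflection across the vertical axis}; a
# configuration is "symmetric" if the reflection permutes its points while
# leaving the configuration unchanged as a set.

def reflect(point):
    """Mirror reflection across the vertical axis: (x, y) -> (-x, y)."""
    x, y = point
    return (-x, y)

def is_invariant(configuration, transformation):
    """True if the transformation maps the configuration onto itself as a set."""
    return {transformation(p) for p in configuration} == set(configuration)

# Mirror-symmetric configuration: invariant as a whole, points permuted.
symmetric = {(1.0, 2.0), (-1.0, 2.0), (0.0, 0.0)}
# Asymmetric (chiral) configuration: its mirror image is a different set.
asymmetric = {(1.0, 2.0), (2.0, 3.0)}

print(is_invariant(symmetric, reflect))   # True
print(is_invariant(asymmetric, reflect))  # False
```

The same test, applied with larger transformation groups (rotations, translations), separates configurations by their level of symmetry, which is the content of Klein's classification.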
Klein's work applied the new theory of groups, which in the works of Arthur Cayley [40] and Camille Jordan [41] had become a part of algebra. Klein's Erlangen Program to classify geometries has been extended to many other disciplines of mathematics, becoming one of the most common methods in all mathematical research. The new analytical tool was very soon adapted to studies in theoretical physics.
Group theory was originally "preconceived" in the early 19th-century work of the young Évariste Galois in his study of the impossibility of solving polynomial equations of degree higher than four (the actual definition of a group was introduced several decades later). Galois considered the invariance of the solutions of equations under transformations of the set of numbers that can be substituted for the variable. This line of thinking was pursued later, in the mid-19th century, by James Joseph Sylvester and Arthur Cayley in the context of solving equations. It led to the natural question about the invariance of physical equations under transformations of physical coordinates.
It was quickly recognized, under the influence of the Klein Program, that in classical mechanics, the equations have to be invariant with respect to a group of transformations of coordinates, which was called the Galilean group. This group was, and is, called Galilean because it consists of the transformations which Galileo considered several centuries earlier, when he observed that the description of reality has to be independent of the choice of the position, orientation, and linear motion with constant velocity of the observer, and of the choice of the time of observation. It turns out that the equations of Newtonian mechanics are invariant with respect to the Galilean group of transformations, but curiously, Maxwell's equations of electrodynamics are not. The latter turned out to be invariant with respect to another group of transformations, recognized in 1905 by Henri Poincaré, who called it the Lorentz group. This is the group with respect to which the relativistic mechanical equations are invariant (the full group of transformations, including the Lorentz group, for which they are invariant was, in turn, called the Poincaré group by Hermann Minkowski in 1908).
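The difference between the two groups can be stated explicitly (standard textbook formulas, added here for illustration, for one spatial dimension and relative observer velocity v):

```latex
\text{Galilean boost:}\quad x' = x - vt, \qquad t' = t
\qquad\qquad
\text{Lorentz boost:}\quad x' = \gamma\,(x - vt), \qquad
t' = \gamma\!\left(t - \frac{vx}{c^{2}}\right), \qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
```

Newton's equations keep their form under the first transformation but not under the second; Maxwell's equations behave in exactly the opposite way, which is what forced the revision of mechanics.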
The transition from classical to relativistic mechanics started to make sense as a change in the type of symmetry, with a different geometry (the change from Euclidean geometry to the geometry of Minkowski space) and with a change in the invariant physical magnitudes. In this new description of reality, the separate Galilean-invariant magnitudes of mass and energy are combined into the single Lorentz-invariant magnitude of mass-energy.
However, a bigger, Copernican-type conceptual revolution came a little later. The transition from classical to relativistic mechanics, in terms of the change in symmetry groups from the Galilean to the Lorentz or Poincaré group, was a realization of the Klein Program in physics through the change in geometry. The change in the invariant physical magnitudes from separate mass and energy to united mass-energy was a welcome gift of great theoretical and practical importance (its consequence is the most famous equation of physics: E = mc²).
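The unification can be made explicit (a standard relation of special relativity, added here for illustration): under Lorentz transformations, energy E and momentum p mix with each other, but the combination below remains invariant and defines the rest mass; for a body at rest (p = 0), it reduces to the famous equation.

```latex
E^{2} - (pc)^{2} = \left(mc^{2}\right)^{2}
\quad\Longrightarrow\quad
E = mc^{2} \ \ \text{for } p = 0
```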

Copernican Revolution of Symmetry in Physics
One of the most celebrated achievements of 19th-century physics, expressed as the Law of Conservation of Energy (i.e., the First Law of Thermodynamics), was interpreted as an expression of the ontological status of an alternative form of physical entities. Energy was an alternative to matter, which was associated with the Law of the Conservation of Mass. Until the 19th century, with its dominant Newtonian corpuscular theory of light, the concept of matter was associated with the atomistic tradition founded on the matter-vacuum distinction; matter was meant to belong to the realm of the Cartesian res extensa. The wave theory of light and the development of electrodynamics, which introduced electromagnetic waves transmitted through a vacuum, made the earlier atomistic matter-vacuum distinction meaningless.
At first, there was hope that this distinction could be retained if electromagnetic waves were waves of the aether, an exotic form of matter filling the entire space. The end of the 19th century was the end of this hope. Special Relativity eliminated the aether, but also united mass and energy into mass-energy, restoring a uniform ontological status. This may explain why, even today, anachronisms such as "matter and energy" find their place in philosophical discussions, when in physics the actual ontological distinctions run along the dualisms of wave-particle or field-particle.
A Copernican revolution in the understanding of conservation laws was a consequence of one of the most important contributions to mathematical physics of all time, published in 1918 by Emmy Noether [42], stating that every differentiable symmetry of the action of a physical system has a corresponding conservation law. Noether showed that the conservation laws for physical magnitudes such as energy, momentum, and angular momentum are associated with transformations describing changes of reference frames, i.e., of observers. Thus, it is not true that we are given, in advance, distinct physical entities with corresponding magnitudes that obey conservation laws, for which we can then find fundamental equations invariant with respect to some groups of symmetries.
The roles are reversed. If we have a general description of a physical system and we want to render this description objective, in the sense that every theoretical observer of the system, free from the action of external forces, describes reality in the same way (i.e., the description is the same in every inertial reference frame equivalent with respect to symmetry group transformations), then with each type of symmetry there is a corresponding magnitude conserved in time. Conservation of energy is a consequence of invariance with respect to time shifts. Conservation of momentum is a consequence of invariance with respect to space shifts. Conservation of angular momentum is a result of invariance with respect to rotations of the reference frame.
This, in fact, answers a naïve but legitimate question: what makes ½mv² better than ½mv³ as the description of kinetic energy? The answer is that the former is invariant with respect to theoretical changes of observers (i.e., reference frames) in the case of an isolated physical system in the absence of forces, while the latter is not invariant at all. Therefore, the distinguished physical magnitudes satisfying the corresponding conservation rules are determined by the choice of symmetry transformations, and those are determined by the condition of the equivalence of all possible observers. Naturally, this makes the study of symmetry a central tool of scientific methodology. Physics (and science in general) looks for an objective description of reality, i.e., a description that is invariant or covariant under changes of observers. Noether's theorem tells us that such a description can be carried out with the conserved magnitudes. This is why the expression "matter and energy" does not make sense in physics. What is the group of symmetry for which matter is invariant? What does it mean that matter is invariant?
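The reasoning above can be made concrete in a standard textbook sketch (added for illustration, not part of the original argument): for a free particle with Lagrangian L = ½mv², the Noether charge associated with time-translation symmetry is exactly ½mv², and it is conserved along solutions of the equation of motion; ½mv³ is not the conserved charge of any such symmetry.

```latex
L(v) = \tfrac{1}{2}\, m v^{2}, \qquad
E \;=\; v\,\frac{\partial L}{\partial v} - L
  \;=\; m v^{2} - \tfrac{1}{2}\, m v^{2}
  \;=\; \tfrac{1}{2}\, m v^{2}, \qquad
\frac{dE}{dt} = m v \dot{v} = 0
\quad \text{(free particle: } m\dot{v} = 0\text{)}
```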

Symmetry Climbs Comte's Ladder
The year 1872, when the Erlangen Program was published, can be considered the starting point for the study of symmetries clearly defined in terms of group theory, but not for the scientific exploration of symmetries, which had been studied before the concept of symmetry was formalized. The intuitive recognition of the similarities and differences between some configurations of points and their mirror reflections engaged the greatest minds for quite a long time. For instance, Immanuel Kant tried to rationalize the distinction between the human left and right hands (although they are different, they are mutual mirror images), but his hypothetical argument that, in a world in which God had created humans with one hand only, this hand would be neither left nor right, is not very convincing. Sooner or later, humans could invent a mirror and realize that there is an alternative form of their single hand. More importantly, Kant's reflection did not contribute much to the understanding of mirror symmetry.
The structural characteristic that distinguishes left- and right-handedness was given the name chirality by Lord Kelvin much later, but the distinction could be recognized easily thanks to our everyday experience with our hands and the gloves that have to match them. Thus, either of our hands is chiral, and they are enantiomorphs of each other, while the majority of simple organisms are symmetric with respect to all rotations and reflections, and therefore achiral.
In 1848, Louis Pasteur published one of his most important papers, explaining the isomerism of tartrates, more specifically of tartaric acid, by molecular chirality (the "left- or right-handedness" of molecules) [43]. He showed that the differences in the optical properties of solutions of this organic compound between samples synthesized in living organisms and samples synthesized artificially result from the fact that artificially synthesized molecules, although constructed from the same atoms as those in natural synthesis, come in two geometric configurations. These configurations are mutual mirror images, but are not exchangeable by spatial translations or rotations (in the same way as the left and right palms of human hands); in nature, by contrast, only the left-handed configurations occur. Later, it turned out that naturally synthesized amino acids (and therefore proteins) are almost exclusively "left-handed", and sugars "right-handed". Artificial synthesis, if not constrained by special procedures, leads to the equal production of left- and right-handed forms. There is no commonly accepted explanation of this mysterious phenomenon, even today.
The chirality of organic molecules became one of the most important subjects of 19th-century biochemistry, leading to the discovery of the role of carbon atoms in the formation of chiral molecules, formulated as the Le Bel-van 't Hoff Rule, published independently by these two researchers in 1874.
The symmetries of biological organisms, particularly the chirality of complex organisms, could not be explained in the 19th century; however, researchers published some phenomenological laws of the evolution and phenotypic development of organisms, such as Bateson's Rule. Much later, William Bateson's son Gregory explained this rule in terms of information science [44,45].
A similar interpretation can be given to Curie's Dissymmetry Principle. Pierre Curie made so many important contributions to physics and chemistry that this fundamental principle of great philosophical importance is overshadowed by them and rarely invoked. The outdated original formulation, using the term "dissymmetry" instead of the now commonly used "asymmetry", was: A physical effect cannot have a dissymmetry absent from its efficient cause [46]. This rather unintuitive principle has very important consequences in biology and chemistry. The real importance of these early developments could be fully appreciated only half a century later, when it became clear, thanks to advances in physics (elementary particle theory), that the study of the conditions for maintaining symmetry is no more important than the study of symmetry breaking.
By the mid 20th century, the study of symmetry had become a fundamental tool for mathematics, physics, chemistry, and several branches of biology. This can explain the explosion of interest in symmetry among philosophers. The pendulum of dominating philosophical interests, swinging between the tendency to seek an objective methodology for philosophical inquiry inspired by scientific methodology, and the calls for freedom in the use of introspective, and therefore subjective, phenomenal experience, reached the apex of the former tendency. The most influential expression of the alignment of the humanities with science was structuralism.

Structuralism or Many Structuralisms?
Although the generic term "structuralism" was already in use in the late 19th century in the context of mental structures in the psychology of Wilhelm Wundt (for instance, in the description of Wundt's position by his student Edward B. Titchener), the beginnings of structuralism, understood as a broad methodological perspective, can be traced most directly to the works of Ferdinand de Saussure on linguistics (more specifically, his lectures from 1907-1911, posthumously published by his disciples in 1916 as Course in General Linguistics [47]). The emphasis on the structural characteristics of language and their synchronous analysis prompted increased interest in the meaning of the concept of structure, although de Saussure himself used the term "system" rather than "structure". It is only speculation, but it seems that the words "structure" and "system" gained their popularity in the 19th century in parallel with the decline in the use of the term "form", due to the latter's baggage of associations from its diverse uses in philosophy through the centuries. Through the association with de Saussure, the concept of a structure acquired an implicit characteristic of synchrony. In disciplines in which diachrony was fundamental, such as biology, the dynamical concept of morphogenesis, popularized by Goethe, was preferred, complementing the idea of structure expressed as morphology. We should not be deceived by terminological preferences. In all cases, the central concept was that of a structure, viewed either statically or dynamically; by the end of the 19th century, this concept had replaced the concept of form in science. Klein's Erlangen Program, originally formulated for geometry, provided the pattern of a methodological tool for mathematics, and soon after, for physics.
It was a natural consequence that the tools used in science for structural analysis in terms of symmetry found their way to psychology, anthropology, and philosophy.
The clearest programmatic work, Structuralism by Jean Piaget, published originally in 1968, refers explicitly to the concept of the group of transformations, although very little of the formal apparatus is presented there [48,49]. On the other hand, in his work on developmental psychology, Piaget used this methodology explicitly. For instance, he based his theory of child development on the so-called Klein "Four-group". The works of others, for instance Claude Levi-Strauss, also directly employed the methods developed as a consequence of the Erlangen Program, including the use of Klein's group [50].
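The Klein "Four-group" invoked by Piaget and Levi-Strauss is small enough to be written out in full. A minimal sketch (my illustration, not Piaget's own presentation): the group can be realized as the symmetries of a non-square rectangle (identity, horizontal flip, vertical flip, 180-degree rotation), encoded as sign changes of the plane coordinates.

```python
from itertools import product

# The four elements, encoded as sign patterns applied to (x, y):
ID  = (1, 1)    # identity
FH  = (-1, 1)   # horizontal flip (mirror across the vertical axis)
FV  = (1, -1)   # vertical flip (mirror across the horizontal axis)
RT  = (-1, -1)  # rotation by 180 degrees
group = [ID, FH, FV, RT]

def op(g, h):
    """Compose two symmetries (componentwise sign multiplication)."""
    return (g[0] * h[0], g[1] * h[1])

# Closure: composing any two elements stays inside the group.
assert all(op(g, h) in group for g, h in product(group, repeat=2))

# The defining feature of the four-group: every element is its own inverse.
assert all(op(g, g) == ID for g in group)

# The group is commutative (abelian).
assert all(op(g, h) == op(h, g) for g, h in product(group, repeat=2))
```

This is the smallest non-cyclic group, which is partly why it lends itself to modeling binary oppositions and their combinations.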
This was the time in which structuralism was triumphant, but also a time of great confusion. The popularity of structuralism made its name a buzzword, and everything written at that time with the word "structure" was (and unfortunately still frequently is) associated with structuralism. Of course, nobody owns the name of structuralism and there is nothing wrong with using it for different purposes. However, mixing these different uses and the resulting misattributions of views are errors.
Piaget should be praised for the popularization of the idea of structuralism as a broad philosophical direction of thought, but his postulate to use the concept of a structure as a bridge between the scientific and humanistic forms of inquiry was preceded by the short but very influential 1952 book Symmetry, written by Hermann Weyl [51]. Weyl did not use the name structuralism, but demonstrated the use of the method of Klein's Erlangen Program in studying structures from mathematics, physics, crystallography, and biology, to art, design, etc. In his view, the study of structures was the study of the invariants of transformations (more exactly, of groups of transformations).
It was a time when both terms "structure" and "symmetry" had already established fundamental roles in mathematics, physics, chemistry, and biology. A more elaborate exposition of these roles in the present context of their unifying power is presented by the author elsewhere, in the study of the question "What is a structure?", carried out with the use of, or reference to, rather advanced mathematical formalisms [52]. However, to prevent confusion, it is necessary to disentangle some ideas before we proceed to further discussion of the methodological tools that structuralism, based on the symmetry concept, can offer for transdisciplinary studies.
Piaget's book Structuralism generated enormous interest among a very wide audience, although already in its Conclusion he complains that "[. . . ] one can only be disturbed by the current modishness of structuralism, which weakens and distorts it" [49] (p. 137). Thus, the book was intended as a means to clarify the confusion that often comes with popularity. Unfortunately, it also generated a lot of misunderstandings, possibly in part because of the differences between its original French edition and its English translation (in the following, I refer to the English version). Some of this could have been prevented had Piaget given a reference to the much earlier book Symmetry by Hermann Weyl (a different book of Weyl's, on a different subject, he did quote).
Piaget praises Klein's Erlangen Program as "a prime example of the scientific fruitfulness of structuralism" [49] (p. 22) and provides an explanation of it, but one so oversimplified that it does not make much sense. Statements such as "Groups are systems of transformations; but more important, groups are so defined that transformation can, so to say, be administered in small doses, for any group can be divided into subgroups and the avenues of approach from any one to any other can be marked out" [49] (p. 21), or "Group structure and transformation go together. However, when we speak of transformation, we mean an intelligible change, which does not transform things beyond recognition at one stroke, and which always preserves invariance in certain respects" [49] (p. 20) are nonsensical. What is meant by "preserves invariance in certain respects"?
Piaget made it clear that structuralism, or to use his words, "structuralism in general", is a descendant of Klein's Erlangen Program: "In this little book we shall, therefore, confine ourselves to the kinds of structuralism that are to be met in mathematics and the several empirical sciences, already a sufficiently venturesome undertaking. [. . . ] But first we must elaborate somewhat on the definition of the structuralism in general that is here proposed, else it will be hard to understand why a notion as abstract as that of a 'system closed under transformation' should raise such high hopes in all domains of inquiry" [49] (p. 6).
This may explain the bizarre formulation of his presentation of Klein's ideas, which he might have considered easier to understand for those who abhor abstraction. Whatever his intention, some of his statements are confusing, and some are confused. For instance, in the passage "Indeed, all known structures -from mathematical groups to kinship systems -are, without exception, systems of transformation. But transformation need not be a temporal process [. . . ]" [49] (p. 11), he is obviously right in that symmetry is an invariance with respect to transformations that do not have to be temporal, and that what is invariant is structure; however, it is the invariant of a group of transformations, not a "system of transformations". Thus far, we can talk about the confusing formulation of some claims. The real problem starts when Piaget makes claims that are meaningless or explicitly inconsistent with the mathematical description of structures as invariants of transformations: "In short, the notion of structure is comprised of three key ideas: the idea of wholeness, the idea of transformation, and the idea of self-regulation" [49] (p. 6). While the idea of transformation (in the mathematical description realized by transformations understood as functions of a specific type) is fundamental for structures, and we can interpret wholeness as the result of invariance, the idea of self-regulation does not have any meaning in the structures of mathematics or physics. By introducing the idea of self-regulation, Piaget resurrects the ghost of the systemic, organismic conceptual framework without giving self-regulation any formal meaning.
A similar problem of the error of commission can be identified in the Conclusion, where Piaget writes, in the context of all possible structures: "There is no structure apart from construction, either abstract or genetic" [49] (original emphasis, p. 140) and "The problem of genesis is not just a question of psychology; its framing and its solution determine the very meaning of the idea of structure. The basic epistemological alternatives are predestination or some sort of constructivism" [49] (p. 141). Here, Piaget makes the mistake of mixing two levels of the discourse.
There is nothing in the general inquiry of the structures defined as invariants of transformations that commits us to a particular epistemological or ontological position. These commitments in works of contributors are always posterior to the study of structures. Naturally, from the position of his Genetic Epistemology, transformations can be interpreted as types of constructions, and with this interpretation comes the interpretation of structures as constructs [49,53]; however, this is the result of his commitment, not an inherent feature of structures. In any case, Piaget's interpretation seems artificial when we consider the symmetries of physics. For instance, transformations from the Galileo group or Lorentz group are transitions between potential observers (reference frames). Not only can these transitions hardly be considered constructions, but the transition from Newtonian to Relativistic Mechanics-which, in hindsight, we can associate with the transition from the Galilean to the Lorentz group-was a discovery made against the expectations and the will of physicists involved. Later, we had a string of major discoveries in physics that consisted of surprising cases of breaking symmetry.
Thus far, we have seen examples of ramifications in the understanding of structures and the related forms of structuralism. There is no reason to claim that one form is better than another. We simply have to choose one; in this paper, it is the one based on the concept of symmetry, initiated by Klein's Erlangen Program, and free from any additional assumptions or interpretations. The reason for this choice is that the task of this paper is to build a bridge between the Two Cultures and to search for patterns in developing a transdisciplinary methodology.
Thus, we have clarified that there is more than one structuralism, and the one that serves our purpose best is derived from the Erlangen Program. However, this does not eliminate the confusion. Another source of misunderstanding is the false belief that our preference excludes directions of inquiry of special importance. An example of an apparent contender for the role of a methodological tool for transdisciplinarity is category theory, introduced in 1945 by Samuel Eilenberg and Saunders MacLane in a long, epoch-making paper, General Theory of Natural Equivalences [54]. The misunderstanding concerns the relationship between these two directions of thought. First, it should be made clear that the more important idea of this famous paper is that of a functor, understood as a transition between categories, which are themselves auxiliary concepts.
For those who are familiar with the research genealogy of the subject, in which Emmy Noether was a mentor for MacLane, and Felix Klein for Emmy Noether, it should not be a surprise that the work of Eilenberg and MacLane was intended as a continuation of the Erlangen Program, as this quotation from the Introduction tells us: "The invariant character of a mathematical discipline can be formulated in these terms. Thus, in group theory all the basic constructions can be regarded as the definitions of co-or contravariant functors, so we may formulate the dictum: The subject of group theory is essentially the study of those constructions of groups which behave in a covariant or contravariant manner under induced homomorphisms. More precisely, group theory studies functors defined on well specified categories of groups, with values in another such category. This may be regarded as a continuation of the Klein Erlanger Programm, in the sense that a geometrical space with its group of transformations is generalized to a category with its algebra of mappings" [54].
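The functor idea invoked in this quotation can be illustrated by a deliberately elementary sketch, far simpler than Eilenberg and MacLane's formalism: the "list" construction maps objects (element types) to objects (lists of such elements) and functions to functions, preserving identity and composition. These preservation properties are exactly the functor laws.

```python
def fmap(f):
    """Lift a function on elements to a function on lists (the list functor)."""
    return lambda xs: [f(x) for x in xs]

identity = lambda x: x
compose = lambda g, f: (lambda x: g(f(x)))

xs = [1, 2, 3]
f = lambda x: x + 1
g = lambda x: 2 * x

# Functor law 1: mapping the identity changes nothing.
assert fmap(identity)(xs) == xs

# Functor law 2: mapping a composite equals composing the mapped functions.
assert fmap(compose(g, f))(xs) == fmap(g)(fmap(f)(xs))
```

The point of the laws is structural: a functor transports not just objects but the whole web of mappings between them, which is what makes it a natural generalization of a space together with its group of transformations.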
Thus, the two directions of thought are not only not in competition, but one is an extension of the other. This may generate the question of whether the theory of categories and functors is a better choice of a methodological tool for transdisciplinarity. The answer is that it is a matter of preference. The theory of structures based on symmetry is simply less abstract. Most of the research in the physical sciences and, of course, in other disciplines is carried out at this lower level of abstraction, so there is no compelling reason to go further. There is a close analogy between the relationship of these two programs of inquiry and the pair of algebraic structures of groups and monoids (the former being special cases of the latter type). In mathematics, if you can carry out something using exclusively groups, you do not introduce the concept of a monoid. If it turns out that there is a need for generalization, you can always achieve it; however, you then have to pay the price of losing some tools that require the eliminated assumptions (in this case, the reversibility of morphisms and the methods of set theory).
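The group/monoid contrast invoked above can be shown concretely. A minimal sketch (my example, chosen for familiarity): strings under concatenation form a monoid, since the operation is associative and the empty string is an identity, but not a group, because a non-empty string has no inverse; integers under addition additionally have inverses and so form a group.

```python
# Monoid laws for strings under concatenation:
a, b, c = "ab", "cd", "ef"
assert (a + b) + c == a + (b + c)   # associativity
assert "" + a == a + "" == a        # identity element (the empty string)

# No inverses: concatenation can only grow a string, so nothing joined
# with "ab" ever gives back the identity "" (checked here on a sample,
# but true for all strings, since lengths add).
assert all(a + s != "" for s in ["", "a", "b", "ba"])

# Integers under addition also have inverses, so they form a group:
x = 42
assert x + (-x) == 0
```

Giving up inverses buys generality (every group is a monoid) at the cost of the tools that depend on reversibility, which mirrors the trade-off between symmetry-based structuralism and category theory.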
There is one aspect of symmetry and structure studies that is worth mentioning here. The theory of categories and functors has its role in mending the historical division of research on symmetries between Felix Klein and Sophus Lie. Klein, in his Erlangen Program, explicitly renounced the exploration of symmetries described by the continuous groups studied by Lie, and left it to his friend [39]. This led to the study of continuous symmetries in terms of the so-called pseudogroups. Emmy Noether's two famous theorems regarding the relationship between symmetries and the conservation laws of physics were formulated in the conceptual framework of continuous symmetries [42]. Moreover, there is a parallel direction of research in physics initiated by Lie's work within mathematics-the study of dynamic systems, culminating in the works of René Thom-especially his 1972 Structural Stability and Morphogenesis [55]-and in the complexity studies of the members of the Santa Fe Institute focusing on complex adaptive systems. Although the two directions of research initiated by Klein and Lie differ in their methodologies, they do not compete but complement each other. Unfortunately, the general audience's perception of their supposed opposition is biased by the differences in terminology (e.g., structure vs. complex system).

Is the Paradise of Structuralism Lost?
The swing of the pendulum reversed its direction, and in the late 20th century structuralism lost its dominant position to competitors; however, its lasting importance can be seen in the very name of this reversed swing: "post-structuralism". Some of this criticism is naïve. For instance, structuralism was criticized as "ahistorical", "static", "too formalized", and "too restrictive of the freedom of expression".
The view of structuralism as ahistorical is most likely the result of a mistaken association with the views of Ferdinand de Saussure. In the context of linguistics, de Saussure distinguished two modes of inquiry: diachronic and synchronic. The synchronic perspective focuses on the structure at some particular moment. However, in a more general context, there is nothing precluding the evolution of structures, as physics commonly demonstrates. We can see here why the inquiry into structures using the methodology of symmetry is so effective. Structures are invariants of transformations, and they are distinguished from their environment by this invariance. Their existence is a resolution of the opposition between change (diachrony) and identity (synchrony). This is the key distinction between structuralism based on symmetry invariance and its other types or versions.
More justified is the objection to the lack of interest in explaining the origin of the structures considered in the studies of Levi-Strauss and others. The missing evolutionary or dynamic theory of structures can be blamed on these authors, but it is more a matter of misunderstanding the mathematical tools than of their absence. Physics and chemistry possess powerful, exact dynamic theories of their structures in terms of group theory and symmetry, so there is no good reason to believe that such a dynamic approach is impossible in other disciplines. The most convincing explanation of the shortcomings identified in the applications to the study of culture and society is probably that the mathematical tools of symmetry theory found little use in the works of the most prominent propagators of structuralism. Symmetry was more a metaphor for the literary treatment of the subject than an actual study of the invariants of groups of transformations [56].
Symmetry can easily be identified in the studies of visual arts and music. The structural study of music initiated by the Pythagoreans found its way to medieval philosophy via Neoplatonic authors, and then to the works of the founders of modern science such as Johannes Kepler. The music of the heavens, understood literally as music produced by the motion of the planets, served as a mathematical model of the universe. An example of the highest-quality contemporary study of symmetries in art in a cross-cultural perspective can be found in the book Symmetries of Culture: Theory and Practice of Plane Pattern Analysis by Dorothy K. Washburn and Donald W. Crowe [57].
By this time, group theory in the context of symmetries had already become an everyday tool for all physicists and had assumed a permanent place in university curricula in physics, chemistry, and biology [58]. The statement in a 1972 Science article by a future Nobel laureate in physics, Philip Warren Anderson, that "It is only slightly overstating the case to say that physics is the study of symmetry", was an expression of a commonly accepted truth [12].
The study of symmetry became a fundamental methodological tool. Anderson's article not only closed the century of its development, but also included another very important message. Anderson emphasized the role of "breaking symmetry" and of the hierarchical structure of reality. He demonstrated that, at least in the perspective of physics, reality has a hierarchic structure of increasing complexity and that the transition from one level of complexity to the next is associated with breaking symmetry, understood as a transition from one group of symmetry to another of a lower level. Thus, not only is the study of symmetry important, but so are the ways in which it changes.

Symmetry as a Unification Tool
The role of symmetry and its breaking is as fundamental in physics and other scientific disciplines today as it was half a century ago, while structuralism in the humanities has gone through a period of strong denial (seen in the proudly declared dissent of those who, like Umberto Eco, were once considered structuralists, or in frequent denunciations of its ineffectiveness). However, recently there has been increased interest in structuralism, not only in philosophy but also in the social sciences and economics [59].
One of the reasons for the revival of interest in structuralism and symmetry outside of the scientific disciplines-where they both remained at the center of attention without any decline in interest-was the growing recognition that practically every complex system retains its identity only as a structure, not as an aggregation of elements. For instance, every living organism replaces its chemical substrates in a time incomparably shorter than its life span, which applies, of course, to the human organism. Every social organization goes through a similar process of exchange. This naturally led to the claim that, because of this universal feature of complex systems and because of the deficiencies in describing complexes in terms of their simple components, the status of actual reality should be given to structures, not to their substrates.
We can already find the same way of thinking in Weyl's book which, in 1952, initiated an interest in symmetry: "We found that objectivity means invariance with respect to the group of automorphisms. Reality may not always give a clear answer to the question of what the actual group of automorphisms is, and for the purpose of some investigations, it may be quite useful to replace it by a wider group" [51].
A legitimate concern may arise regarding the backlash against structuralism, which generated such strong polarization of views within The Two Cultures: if we want to use a methodology that was denounced in the past as faulty, this may lead to yet another story of The Tower of Babel. The answer is that the use of symmetry in humanistic or cultural contexts was misguided by the lack of appropriate methodological tools.
The most typical misunderstanding in attempts to extend the methodology of symmetry studies from geometry to other contexts is a consequence of a misinterpretation of Klein's Program. Klein did not consider arbitrary transformations of the plane (or of the set of points on which geometry is defined), but only those that preserve the underlying geometric structure. This very important but very frequently ignored aspect of the Program was clearly described in Weyl's book popularizing symmetry among the general audience: "What has all this to do with symmetry? It provides the adequate mathematical language to define it. Given a spatial configuration F, those automorphisms of space which leave F unchanged form a group Γ, and this group describes exactly the symmetry possessed by F. Space itself has the full symmetry corresponding to the group of all automorphisms, of all similarities. The symmetry of any figure in space is described by a subgroup of that group." [51] Even in recent books popularizing symmetry studies within the restricted domain of geometry, we can find statements exhibiting a lack of understanding of this aspect of Klein's Program. Thus, everywhere in textbooks we find statements such as "A symmetry of a geometric object consisting of some set of points A is every transformation of a space S, i.e., a bijective function from S to itself, that leaves A unchanged." In these cases, the authors are talking about "groups of symmetries" as groups of arbitrary transformations leaving object A unchanged. Of course, such "symmetries" and "groups of symmetries" would only be useful in very limited situations. If symmetry is just one particular collection of transformations, then any two squares with different centers would have different symmetries. In addition, we would have to count as separate "symmetries" transformations that leave all points of a square fixed but arbitrarily permute all other points.
The beauty and power of Klein's Program lie in the recognition of what is important for the study of symmetry. We start from a more general group of transformations of a particular type, i.e., determined by a specific type of structure (geometric, topological, algebraic, etc.); then, we look for the subgroup of transformations that leave our object unchanged, even if particular points within the object have different images under the transformations. The crucial point is the pre-defined total group of transformations, which typically is a proper subgroup of the group of all transformations. Weyl calls it the group of symmetries for the entire space: "Space itself has the full symmetry" [51]. Only then can we select the subgroup describing a specific symmetry. This is the point where we can find the sources of the doubts about the applications of symmetry in cultural studies. In the geometric context, everyone, even those who did not understand the method in its generality, automatically considered only subgroups of the group of all isometries, i.e., transformations preserving the metric (distance) characterizing a particular type of geometry. In applications to the humanities, the choice of the symmetry group was arbitrary, guided only by the desired result. The presence of arbitrary choices of transformations generated resentment, expressed in the form of claims that Levi-Strauss and others using this methodology could not get anything new beyond what they had entered into consideration. However, this is more of an aberration of the structuralist way of thinking than the norm [60].
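Klein's two-step recipe can be executed end to end on a toy case. The following minimal sketch (an illustration of the recipe, with representations chosen for convenience, not drawn from the sources discussed above) first fixes a structure-preserving group, here the linear isometries of the plane that map the coordinate axes and diagonals to each other (orthogonal sign/swap matrices), and then selects the subgroup leaving a given figure unchanged as a set: for a square this yields the dihedral group of order 8, and for a non-square rectangle the Klein four-group.

```python
from itertools import product

def apply(m, p):
    """Apply a 2x2 matrix m = ((a, b), (c, d)) to a point p = (x, y)."""
    (a, b), (c, d) = m
    return (a * p[0] + b * p[1], c * p[0] + d * p[1])

# Step 1 -- the pre-defined "full" group: orthogonal matrices with entries
# in {-1, 0, 1}, i.e., rotations by multiples of 90 degrees and reflections
# across the axes and diagonals (8 elements in total).
isometries = [
    ((a, b), (c, d))
    for a, b, c, d in product((-1, 0, 1), repeat=4)
    if abs(a) + abs(b) == 1 and abs(c) + abs(d) == 1 and a * c + b * d == 0
]
assert len(isometries) == 8

# Step 2 -- select the subgroup fixing a figure as a set (individual points
# may move, as long as the figure lands on itself).
square = {(1, 1), (1, -1), (-1, 1), (-1, -1)}
square_sym = [m for m in isometries
              if {apply(m, p) for p in square} == square]
assert len(square_sym) == 8   # the dihedral group D4

rectangle = {(2, 1), (2, -1), (-2, 1), (-2, -1)}
rect_sym = [m for m in isometries
            if {apply(m, p) for p in rectangle} == rectangle]
assert len(rect_sym) == 4     # the Klein four-group
```

Had we allowed arbitrary bijections of the plane in Step 1, each figure would acquire an unmanageable infinity of "symmetries", which is precisely the misunderstanding criticized above.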

Expulsion from Paradise?
The title of the preceding section included the expression "Lost Paradise" with an explicit intention. In this paper, there is not much about the intellectual movement of postmodernism, which is openly hostile not only towards structuralism but also towards any tendency to look for general syntheses of knowledge. The initiator of this movement, Jean-François Lyotard, in his programmatic work The Postmodern Condition: A Report on Knowledge, declares war on totalizing syntheses: "Let us wage a war on totality; let us be witnesses to the unpresentable; let us activate the differences and save the honor of the name" [61]. The war was, and still is, in the name of the postmodern: "Simplifying to the extreme, I define postmodern as incredulity toward metanarratives. This incredulity is undoubtedly a product of progress in the sciences: but that progress in turn presupposes it. To the obsolescence of the metanarrative apparatus of legitimation corresponds, most notably, the crisis of metaphysical philosophy and of the university institution which in the past relied on it" [61]. The book exemplifies the dangers of an external view of science acquired from usually unsuccessful attempts at popularization. Lyotard gives the evidence for his external view, writing: "A crude proof of this: what do scientists do when they appear on television or are interviewed in the newspapers after making a 'discovery'? They recount an epic of knowledge that is in fact wholly unepic. They play by the rules of the narrative game; [. . . ]" [61]. What scientists do in front of TV cameras has nothing to do with science. Lyotard's reflection on science is not only external but also very naive: "I have already made the point that the question of proof is problematical since proof needs to be proven. One can begin by publishing a description of how the proof was obtained, so other scientists can check the result by repeating the same process. But the fact still has to be observed in order to stand proven. What constitutes a scientific observation? A fact that has been registered by an eye, an ear, a sense organ? Senses are deceptive, and their range and powers of discrimination are limited. This is where technology comes in" [61]. Is this supposed to be a competent critique of science? It is not necessary to publish spoof articles in postmodernist journals to ridicule the movement [62]. Postmodernists perform this job better by publishing their own articles.
There are many other statements in Lyotard's book that show his disarmingly naive view of science, its methodology, and the actual problems that can rightly be interpreted as a crisis. These problems cannot, and should not, be conflated with the crisis in modern societies that do not understand science, the needs of the people contributing to it, or their organizations. Among the other misconceptions, there are some clearly mistaken views on what science is. Science is not a narrative, not because it is descriptive (as Lyotard claims), but because it is a dialog that never ends. The crucial point of every scientific activity is the formulation of the research question, which initiates the dialog and which is often more difficult than finding the answer. The popularization of science (admittedly not of the highest quality) frequently takes the form of a narrative. The most important feature of science that distinguishes it from the narratives of culture and religion is that while the latter make every effort to suspend disbelief, the former starts from generating disbelief and keeps disbelief as a self-control tool.
How is the story of postmodernism related to John Milton's Paradise Lost? Postmodernists fell to the temptation of Satan and ate the fruit from the tree of the knowledge of good and evil. Postmodernists believe that they tasted the fruit and that this gives them the power to judge what is good or evil in knowledge or philosophy. This makes them so interested in the issue of legitimization, because they reject the criteria of objective reality. There is no objective reality, so there is no objective truth. Reality consists of social constructs, so the task is to achieve the legitimization of these constructions.
An example of the postmodernist way of thinking on the subject of this paper is the book The Unity of Nature: Wholeness and Disintegration in Ecology and Science by Alan Marshall [63]. The author explicitly identifies his postmodernist position: "[T]here is an ongoing debate between naturalism and realism on the one hand and social constructivism on the other; and I should state at the onset that I am, myself, more allied to the latter than to the former" [63]. The following quotation effectively presents the way of thinking of the author, who opposes the idea of the unity of nature: "Natural unity, we shall see, does not just exist on its own as an independent idea. It has a whole attendant army of supporting concepts, narratives and metaphors from which it gains its strength. For example, unity has attachments to the ideas of 'balance', 'order', 'hierarchy', 'stability', and the concept of the 'system'. In this book, these companion ideas are filtered out from the unity of nature idea and then, one by one, distilled so as to expose their unfortunate philosophical side-effects." We ought to be thankful to the author that he did not dispose of the entire idea of 'philosophy' because of its unfortunate side effects. This is a good example of a postmodernist meta-narrative which, with the arrogance typical of this movement, claims possession of the fruits from the tree of the knowledge of good and evil to judge what is fortunate or unfortunate. I am afraid that the followers of postmodernism are not aware of their affinity to those who, like Kellyanne Conway, Senior Counselor to the President of the US, promote such social constructs as "alternative facts" [64].

Conclusions
To go beyond declarations of the importance of transdisciplinarity for the reunification of knowledge dispersed through diverse disciplines of inquiry, it is necessary to develop a clearly defined transdisciplinary methodology based not on the reduction of the conceptual frameworks of component domains to the specific framework of a dominant discipline, but on concepts that are central to all contexts.
In this article, we revisited the concept of structure, which was of great importance in modern intellectual history. This concept raised hopes for reconciliation between diverse studies, but was rejected on the humanistic side because of misunderstandings and attempts to transmit directly a highly abstract version of its methodology, the study of symmetry. The revolt against structuralism and symmetry may be puzzling when we realize that both structure and symmetry are descendants of the ideas of form and harmony, without which no humanistic discourse is possible.
Even if we witnessed how Paradise was Lost, there is still a chance for Paradise Regained. In this paper, diverse tendencies in searching for a unified view of reality were reviewed, and the most promising direction seems to be in the attempt to revive, extend, and develop the methodology of structuralism. I am sure, at least, that structuralism is more universal and no more totalitarian than postmodernism. I believe that there is no other methodological framework that could compete with it. However, more important than my belief is the demonstration that there are available methodological tools for transdisciplinary studies.
Funding: This research received no external funding.

Acknowledgments:
The author appreciates comments from the audience of the PHAEDE 2019 Conference, where an early version of this paper was presented.

Conflicts of Interest:
The author declares no conflict of interest.