Opinion

The Systemic Unity in Mathematics and Science: Beyond Techno-Science Myths †

CNRS & Ecole Normale Supérieure, 75005 Paris, France
This paper is an extended and largely revised summary of the preface, in Italian, to “Il Liceo matematico: un approccio storico e interdisciplinare all’insegnamento delle scienze e della matematica” by A. Nigrelli and F. S. Tortoriello (Mimesis, to appear, 2025).
Systems 2025, 13(2), 136; https://doi.org/10.3390/systems13020136
Submission received: 20 December 2024 / Revised: 24 January 2025 / Accepted: 29 January 2025 / Published: 19 February 2025
(This article belongs to the Special Issue Theoretical Issues on Systems Science)

Abstract

Dualistic views in mathematics and the natural sciences are severely reducing the scope of science. This began more than a century ago with the claim that, in mathematical theories, a “formal list of signs” (the axioms and rules of inference, independent of meaning) would allow one to consistently and completely deduce all “true” theorems. It continues today with the claims that “we can control evolution by re-writing the instructions of life written in the four signs of DNA”, or that human cognition can be completely mimicked in “sequences of 0s and 1s”, independently of our biological body and brain and their historicity. The damages of, and the alternatives to, these views will be briefly hinted at.

1. Introduction

The strict separation between “syntax” and “semantics”, “intuition” and “logical reasoning” in mathematics and, more broadly, in science has severely affected their epistemology. Federigo Enriques (1871–1946), one of the most prominent representatives of the Italian school of algebraic geometry, can be considered a precursor of the conceptual framework set forth in this contribution. According to Enriques, mathematics cannot be fully understood without reference to philosophy and its history, and vice versa: philosophy has always had to deal with the developments of mathematics and of science in general. Enriques was publicly and violently criticized by Benedetto Croce (the most influential Italian philosopher of the first decades of the last century) for having dared, as a mathematician and therefore a “technician without knowledge”, to organize a philosophy conference in Bologna in 1911; this opened a lively polemic with Croce and, later, with Gentile. A key issue in Enriques’ perspective is the above-mentioned inseparability of intuition and logic, of sense and formal deduction:
[…] the usual question, whether mathematics should educate intuition or logic, is flawed by an imperfect vision of the value of teaching. In fact, the presupposition of this question is that logic and intuition can be separated as distinct faculties of the intelligence, whereas they are rather two inseparable aspects of the same active process, which refer to each other.
F. Enriques, Scienza e razionalismo, Zanichelli, Bologna 1912
Why the role attributed by Enriques to “intuition” in this quotation? Because it is by means of intuition (a close relative of Pascal’s “esprit de finesse”) that it becomes possible to “see” mathematical objects. Intuition generates the “insight” (Weyl) essential to invention and proof in mathematics. It is grounded in our human bodily gestures, which organize space through our actions in it.

2. Mathematical Invention

I dare to think that a very ancient gesture, of an eminently mathematical nature, was made by our ancestors many tens of thousands of years ago, when they interpolated the stars with lines that do not exist and gave contours and names to constellations, thus giving meaning, probably mythical, to those meaningless bright points in the sky. That is, they “imagined configurations of meaning”. Then, at Lascaux and Altamira, more than 15,000 years before our era, humans drew lines and edges on the walls of caves: the edge of a horse or a bison. Objects have no edges: it is the brain that, beginning in the primary visual cortex, constructs them around objects in many vertebrates [1,2]; then we humans, and only we, as far as I know, managed to transform this trace in the neurons, this “physiological invention” of the animal brain, into a form of pictorial-mathematical communication between humans: to draw figures as pure edges on the walls of a cave, probably rich in mythical symbolism. Until we get to the splendid definition β (the second) in Euclid’s books: the line is a length without thickness. Then tracing is combined with language: the student will understand what a continuous line is only when seeing the gesture, the trajectory of the hand, the line traced on the blackboard by the teacher; but only in language will the student be able to add, like Euclid, that the line drawn is without thickness: at the same time an absurdity and the invention of the mathematical notion of edge, a very profound one; Euclid’s geometric figures are nothing but edges [3]. And this is performing mathematics: an intertwining of geometric gesture, an invariant of action on the plane or in space, and language; paradigmatic examples of proposals for concepts and structures, not arbitrary, because rich in meaning. All mathematics is a permanent invention of new concepts and structures: there is no profound and original proof of a relevant theorem that does not require the invention of new ones.
This is the primary incompleteness of every formal-deductive approach: only in a second phase can we make explicit the notions and principles, which are always new and not always applicable according to formal-mechanical rules, even in proofs of theorems of Formal Number Theory (Arithmetic) [4].
Mathematical idealities are deeply rooted in the world, even if we tend to place them outside it. It is not, as Galileo says, nature that is written in the language of mathematics, but mathematics that is written in natural and historical practices and in language. In this way, we can build a “dialogue between humanistic culture and scientific culture”, as the authors of [5] hope, and “recover the inseparable link between science and philosophy through a method capable of bringing out the meaning of science for the life of man” [5,6], and from human life.

3. All Is Code and Computation

Let us now move from a philosophical/scientific debate to techno-science, where the absence (or poverty) of philosophical vision causes very serious distortions. The Theory of Computability, to which I have long contributed as a mathematician, was born in mathematical logic in the 1930s (Gödel, Church, Turing, etc.); it is a beautiful theory of the discrete, defined on the integers. Much work was later devoted to computability in the “continuum” (or quasi-continuum) and to its interpretation in continuous geometric structures (part of the author’s own work). But the machine that implements it, the contemporary computer, is a “discrete state machine”, as its founder, Alan Turing, defined it. Even Deep Learning, at the forefront of the new Artificial Intelligence (AI), which uses and develops very powerful mathematical methods in continua [7], often coming from mathematical physics (e.g., wavelets, re-normalization), must necessarily be implemented in the discrete state machine as sequences of 0s and 1s. Even if some “shapes” that emerge in the mathematical dynamics described by Deep Learning resemble brain dynamics (in language recognition, it is possible to reproduce dynamics that can be glimpsed in the cerebral cortex, or the same mathematics can be used to describe them), the animal brain does not have, “behind” or “underneath”, a discrete state machine doing the calculations, as Deep Learning does. That is, the image of “thought” that AI sends back to us is always that of a computation on integers. Thus, as some philosophers say, “what cannot be calculated, cannot be thought” [8]. Sequences of 0s and 1s and calculations in the discrete, algorithms: this is their world.
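A minimal sketch in Python (my illustration, not part of the original argument) may make the point concrete: on a discrete state machine, even the “real numbers” of continuous mathematics are finite bit patterns (IEEE 754 doubles), so the arithmetic of the continuum is only approximated.

```python
import struct

# On a discrete state machine, "real numbers" are finite bit patterns,
# not points of the mathematical continuum: 0.1, 0.2, and 0.3 have no
# exact binary representation, so their arithmetic is only approximate.
x = 0.1 + 0.2
print(x == 0.3)   # False on any IEEE 754 implementation

# The 64 bits actually stored for the "real number" 0.1:
bits = format(struct.unpack('>Q', struct.pack('>d', 0.1))[0], '064b')
print(bits)
```

However refined the continuous mathematics of Deep Learning, its implementation ultimately manipulates such finite sequences of 0s and 1s.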
And Pearl and Valiant, both Turing Award winners (the Nobel Prize of Computer Science), explain to us that the laws of physics and biology are algorithms (enriched by statistical correlations between numerical data, for Pearl; “ecorithms”, says Valiant, to highlight the interaction between programs and their environment) [9,10]. Excellent technicians in their discipline, they project onto the world what they know how to do, without any critical reflection.
Therefore, in an algorithmic world, a stone falls because it is “programmed to fall” (see also Stephen Wolfram’s A New Kind of Science, Wolfram Media, Champaign, IL, 2002), just as happens on a computer screen. Fortunately, Einstein explained to us that a stone falls “for reasons of symmetry” [11], and physics goes on ignoring such nonsense. This is not the case in biology, where, in the absence of a “Theory of the Organism” (ontogenesis), and despite the rich theoretical debate in the Theory of Evolution (phylogenesis) from Darwin onwards, vague computer metaphors continue to be evoked to talk about the living. So it is said that “we can control evolution” by reprogramming organisms [12,13,14]. Difficult techniques justify this arrogance (hybris), with very modest consequences, especially compared to the promises. As for the promises made after the sequencing of the human genome, the title of von Eschenbach’s paper suffices: “NCI [Nat. Cancer Inst.] sets goal of eliminating suffering and death due to cancer by 2015” [15], together with the promise that within two or three years cancer diagnosis and prognosis would be made by analyzing DNA. Instead, DNA sequencing does not help distinguish a primary tumor from a metastatic one, nor a benign from a malignant one [16]; in fact, “63 to 69% of all somatic mutations [are] not detectable across every tumor region… Gene-expression signatures of good and poor prognosis were detected in different regions of the same tumor” [17], and tumors without mutations are observed [18]. Even today, only the histologist, looking at the tissue and the shape of the cells under the light microscope, can tell whether a tumor is malignant, benign, primary, or metastatic. Then consider the GWA (Genome-Wide Association) project, funded for 10 years with USD 8 billion (!) for the purpose of associating diseases of all sorts with “genes”: a mountain (of money) that gave birth to a few mice [19,20].
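The contrast between the screen-stone that is “programmed to fall” and the physical stone that falls “for reasons of symmetry” can be sketched in a few lines of Python (my illustration, with assumed values for the time step and duration): on the screen, the fall is a sequence of orders executed tick by tick; in physics, the trajectory follows from the law itself (here the closed form y = gt²/2 for continuous time), of which the discrete orders are only an approximation.

```python
# On a screen, a "stone" falls by obeying orders, one tick at a time:
def animated_fall(steps: int, dt: float = 0.1, g: float = 9.8) -> float:
    y, v = 0.0, 0.0
    for _ in range(steps):   # each iteration is an instruction to obey
        v += g * dt
        y += v * dt
    return y

# In physics, the trajectory is a consequence of the law (framed by
# symmetries/conservation), here in closed form for continuous time:
def physical_fall(t: float, g: float = 9.8) -> float:
    return 0.5 * g * t * t

# The discrete orders only approximate the continuous law:
print(animated_fall(1000, dt=0.01), physical_fall(10.0))
```

Refining the tick (smaller `dt`) brings the obeyed orders closer to the continuous law, but the two remain conceptually distinct: one is a list of instructions, the other a consequence of invariance.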
In biology, there is no privileged causal level, and even gene expression, which is very important, is a network of genes activated in interaction [21], whose network dynamics are channeled by epigenetic constraints, ranging from the proteome to the structure/position of the cell in the tissue, to the organism, to the ecosystem [22,23].
It is then fair to say that we are dealing with two techno-sciences united by a new Imperative Pythagoreanism. Let me explain. Not only would the essence of the world lie in the integer numbers, as the Pythagoreans said (or in sequences of letters, such as DNA, which is the same thing), or in any case be codified in them, but, excluding physical causality as Einstein and Bohr had understood it, the world is made to function like a digital computer: by giving orders. In fact, even functional or “object-oriented” programming languages, widely used in AI, must still be reduced to sequences of orders (“formal reductions”, Church–Rosser theorems [24], and normalization theorems [25]); furthermore, their management in the computer, with compilers and operating systems, is based on orders; it is imperative (they are implemented in languages called imperative). And a black stone falls on the screen because the pixels obey orders that make them become black and white in succession. The world and cognition would be the same: they follow orders; the causal world of physics, framed by symmetries (conservation properties), is replaced by sets of instructions to be obeyed.

4. Geometry’s Sensitivity to Coding

Furthermore, in addition to causality, the geometric notion of dimension is also lost. In fact, what is geometric is “sensitive to coding”. Continuous spaces, from Descartes to Riemann, of multiple dimensions and with “natural” topologies, cannot be coded in a single dimension: what counts, continuity, is lost, and with it, as we said, classical and relativistic causality. Consider that, in order to represent three dimensions in two with perspective in Italian Renaissance painting, the first symbolic representation of “actual” infinity had to be invented: the point of convergence of parallel lines, the projective point, the result of over 1000 years of debate on the infinity of God (in potential? in act?) [26,27]. In the discrete, instead, any number of spatial dimensions can be encoded in a computer’s one-dimensional sequence of 0s and 1s: the discrete is insensitive to coding. And this is the common watchword of AI and of dominant molecular biology: encoding, encoding… “all is code”, everything is (reducible to) linear, alphanumeric code. A conceptual catastrophe, if extended to the world: 2500 years of its mathematical and physical intelligibility, of its causal and spatial structuring, are erased, replaced by orders coded in the four letters of DNA, or in the 0s and 1s aligned in a computer. The latter is an excellent technology, based moreover on a radical dualism, the distinction between hardware and software: a splendid idea of Turing’s for constructing machines (1936), a cognitive and epistemic disaster if projected onto inert and living matter.
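The insensitivity of the discrete to coding can be made precise (my illustration, using the classical Cantor pairing function, which the text does not name): a bijection between pairs of naturals and naturals, iterated, codes any number of discrete dimensions into one with no loss, since the map is exactly invertible. No continuous analogue exists: dimension is a topological invariant of continua.

```python
# Cantor pairing: a bijection N x N -> N. Iterating it codes any number
# of discrete dimensions into a single natural number, losslessly.
def pair(x: int, y: int) -> int:
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z: int) -> tuple:
    # invert by finding the diagonal w such that w(w+1)/2 <= z
    w = int(((8 * z + 1) ** 0.5 - 1) // 2)
    t = w * (w + 1) // 2
    y = z - t
    return (w - y, y)

# A 3D point coded into one integer and fully recovered:
p = (3, 5, 7)
code = pair(pair(p[0], p[1]), p[2])
inner, z3 = unpair(code)
print(unpair(inner) + (z3,))   # recovers (3, 5, 7)
```

For continua, by contrast, no such dimension-collapsing coding preserves what matters: a space-filling curve from the line onto the plane cannot be injective, and an injective coding cannot be continuous, so topology, and with it causal structure, is destroyed.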
My long and repeated American experience suggests to me a reason for the success of such visions in Silicon Valley and elsewhere. For many years, American high schools have taught only computational mathematics, directly programmable on the discrete state machine. If a brilliant student then continues their studies in computer science or engineering, they will never see the continuum of Euclid’s gestures; Euclid writes: draw a line from one point to another (axiom 1), extend a segment continuously into a line (axiom 2), draw a circle around a point (axiom 3), etc.: continuous trajectories, edges, as was said. They will never truly grasp the importance of the continuous deformations of Riemann spaces needed to understand Einstein’s relativity, which is a dynamics of metric spaces; nor the fluctuations in the continuum, below physical measurement, which are thus non-measurable causes of the unpredictability of deterministic systems. Thus the world, as Pearl says, is “Laplacian”, i.e., deterministic and predictable, like computational dynamics (except, he says, for quantum phenomena in the discrete).
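That fluctuations below measurement cause the unpredictability of deterministic systems can be seen in a few lines (my illustration, using the standard chaotic logistic map, which the text does not mention): two trajectories of the same deterministic rule, started closer together than any physical measurement could distinguish, diverge to a gap of order one.

```python
# Two trajectories of the deterministic logistic map x -> 4x(1 - x),
# started with a perturbation far below any physical "measurability":
def trajectory(x0: float, n: int) -> list:
    xs = [x0]
    for _ in range(n):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.3, 100)
b = trajectory(0.3 + 1e-12, 100)   # non-measurable initial difference
max_gap = max(abs(x - y) for x, y in zip(a, b))
print(max_gap)   # grows to order 1: determinism without predictability
```

Each step is a simple deterministic rule; the unpredictability lies not in the rule but in the non-measurable difference between initial conditions in the continuum.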
As a consequence, in dominant molecular biology (fortunately, there are some heroic dissidents), data are collected in all the possible “-omics” (genomics, proteomics, transcriptomics, glycomics, lipidomics, etc.), in the hope of receiving answers from the observation of regularities in the data, without a theory of the organism, which is replaced by vague imperative numerical metaphors: “we have decoded the instructions written by God in the DNA of every organism”, as Collins declared in 2001, in the presence of Clinton and Blair [28]. Among the dissidents, Sydney Brenner (molecular biologist, Nobel Prize, 2002) observes: “This science of ‘-omics’ has corrupted us. It has created the idea that if you collect a lot of data, everything is solved” [29]. And so powerful data extraction and processing technologies are developed in AI and molecular biology, of great engineering intelligence, without theoretical, critical thinking, to be applied without asking questions: a “proletarianization” of scientific work [30].

5. Reversing Prometheus’ Nightmare

In order to struggle against the cultural degradation of scientific work, we need a “reversal of the myth of Prometheus”, that myth which has become a “nightmare”, in which “we self-transform for the love of our machines, because we take our machines as a model for our alterations: we therefore renounce assuming ourselves as units of measurement and with this we limit our freedom or renounce it”. To get out of what “Anders defines as Promethean shame” [5,31], philosophy and science must dialogue intensely. It is good that those who do science, or teach it, grasp the meaning of their work thanks also to philosophical criticism. On the other hand, philosophy must also subsume in its critical framework the characteristics of science as “sustained, collective, critical enquiry” [32], with its techniques, of course, but always to be distinguished from techno-science. The scientistic vision of knowledge, based only on the accumulation of techniques, from “problem solving” to “techno-fix”, is incompatible with the negative, limiting results that science can also propose; moreover, as was said, such results have always opened up new points of view. Even in AI, thanks to the relative solidity and rigor of some mathematical methods in the continuum, such as the search for “optimal” paths or values of geodesics in immense phase spaces, we are starting to see some limiting results. Recently, it has been demonstrated that the existence of such optima, necessary to find solutions to the problem in question, is equivalent to the Continuum Hypothesis of Set Theory [33]. This hypothesis is undecidable; consequently, there cannot be a “uniform and effective” method to find said optima. That is, and this is also the reality of the facts, in the face of every problem it will take human work to build the mathematics and the programming environments and techniques that can deal with it. A minimal change (the passage from 19 squares to 17 in the game of Go) and… the machine stops.
Each time, talented technicians must build ad hoc software, which can also work very well and be very useful.
AI is anything but “plastic”, the great property of the animal brain that makes it “generalist”. The brain not only learns to hunt, to look for mushrooms, to play with the owner’s son… but, above all, it modifies its internal structures. The layered mathematical structure of Deep Learning vaguely resembles the stratification of the visual cortex. But hearing, smell, touch, etc., from what little we know, work thanks to profoundly different brain connectivity structures. Yet, in the case of a cognitive deficit, one can replace another, modifying itself structurally to perform the new function [34].
These observations, outside of myths, can also help to construct better machines, which, as humans, we certainly need: “for Heidegger technology is co-determining of knowing” [5]. And of being in the world: our humanity is also “technological”, since the time of the invention of stone tools worked for a purpose. But as soon as we manage to propose even a partial theoretical framework, science knows how to understand its limits and, if possible, identify other ways. Some recent theoretical AI allows, in addition to remarkable techniques, also some limiting results, as we said. By contrast, in the absence of a rigorous theoretical frame, it is not possible to prove a vague theory false, as Feynman claims, nor to provide negative/limiting results, which are a good way to propose new ideas. This is the case for the “genetic program theory”, from Monod to Doudna (mentioned above), with its phantom “exact editing” of DNA thanks to the CRISPR-Cas9 technique [12,13]. However, a difference between the most important books of these two authors must be underlined. Monod’s book of 1970 is certainly scientistic, but it tells of great observations and extraordinary laboratory work, though within an erroneous theoretical framework. It can happen; it also happened to Ibn Yunus (Egypt, 10th century), a great astronomer and mathematician, co-inventor of spherical trigonometry and a very fine observer of the sky, who worked within the Ptolemaic, geocentric framework [35,36]. But is theoretical and critical thinking so important? As Boltzmann says, “there is nothing more practical, in science, than a good theory”. With the principle of inertia, Galileo definitively demolished the Ptolemaic framework (the “retrograde movements” of the planetary epicycles become impossible) and opened the way to Newton.
The Theory of Evolution allowed us to make good use of DNA sequencing, which has contributed to identifying evolutionary correlations of great interest. DNA, for us, is a chemical-physical trace of all evolution, of extraordinary importance, a “constraint” on molecular dynamics, which are largely stochastic and channeled by the cell, in interaction with the organism, in its evolutionary history, in an ecosystem [37,38]. Explicit and rigorous principles, from Galileo (inertia) to Heisenberg (indeterminacy) and to Darwin, whose first principle, “reproduction with variation” (and motility), is at the heart of the production of biological diversity, allow theoretical construction and that critical step aside. This critical attitude is a fundamental bridge to philosophy and eventually helps to change direction, to invent something new, as happened after the negative results of the Pythagoreans, Poincaré, and Gödel. Major examples in this direction are given by the results of the 1930s on “undecidability” and “incomputability”: in order to prove the existence of unprovable assertions and of functions that cannot be computed, Gödel, Church, and Turing, between 1931 and 1936, gave different notions of algorithm, later proved equivalent, and, therefore, of computable function, which would be at the heart of the numerical-algorithmic machine that is changing the world. And changing it for the better, if one develops more critical thinking and less hybris, if one does not project it onto the world, saying that the brain and the cell, the world, are instances of that alphanumeric machine [31,39].
This is impossible in biology as long as its “theoretical” framework is dominated by vague digital and alphabetic metaphors (such as the “exact editing” of DNA, as if it were a written text, against the very practice in the laboratory, which is stochastic), and in which the ratio of results to investments (and promises), from human health to GMOs, is very low (see the notes and references on the GWA and the new GMOs). But the cascade of failures after the promises to eliminate world hunger with GMOs (2000) [40,41] and cancer (von Eschenbach, 2003, cited above), and the empirical evidence, are not enough. As noted in the book, in reference to Lakatos: “falsification cannot be the immediate result of an experimental procedure. In his view, no basic assertion can by itself induce a scientist to reject a theory: ‘nature can shout its NO, but human ingenuity can always shout louder; with enough ingenuity and a bit of luck any theory can be defended “progressively” for a long time even if it is false’”. This is especially true when financial interests clearly outweigh theoretical content; as a courageous biologist colleague says, the geno-centric enterprise is “too big to fail” [42,43,44].

6. Theory Building

Let it be clear that the critical look we have briefly cast on two techno-sciences that are very powerful and dominant today is intended to contribute to their better development: the digital computer and its networks, as well as molecular analyses, of DNA in particular, and their role in the dynamics of the organism, are of great importance. But they must be immersed in a critical reflection, both scientific and philosophical. Mathematics plays a crucial role here, given that it is continuously used and invoked in current technologies, both in specific and in metaphorical ways. Furthermore, together with physics, it has been able to propose limits and, therefore, new theoretical inventions. As for its teaching, it must be stressed that an important part of the work of those who do research consists of tackling problems by changing the point of view, trying to formulate them differently, and transporting them from one context to another. Science is not so much “problem solving”, as techno-science claims, but rather “theory building”: changing the point of view requires criticism and theoretical construction, and is also a way to solve problems, by “immersing them in a sea of new concepts and structures”, as Grothendieck, an immense mathematician, used to say.

7. Conclusions: Back to Intuition

We started with a reference to Enriques and the interplay he stressed between logic and intuition in mathematics. Then, we mentioned the understanding of continua “by intuition” of the “gesture”, by our “mathematical seeing” of a trajectory. Brouwer, the founder of the “intuitionist school” in logic, refers instead to the intuition of the discrete flow of time. The insight by “subitizing” (the immediate counting of a small number of objects), which we share with some animals, constitutes another “intuitive practice” of the discrete. The junction of these two experiences of discreteness sets the condition of possibility for the mathematical invariant that we propose as the infinite discrete sequence of integer numbers, the paradigm of a discrete structure (for work on and references to “continuous gestures”, “subitizing”, and Brouwer on discrete counting, see [45]). Then, along history, discrete vs. continuous mathematics provided two different organizations of “reality”, which yield different understandings of causality, as we stressed. The Discrete State Machine (DSM), as Turing soundly called his 1936 invention of the “Logical Computing Machine” in the 1950s, opened the way to our fantastic achievements in computing. Turing understood the role of the interplay between the continuous and the discrete through the difference he draws between imitation and modeling (see [46]): his DSM may imitate a process (human thinking, in the 1950 paper; see [46]), with no commitment to its causal structure, while his model of morphogenesis (the 1952 paper; see [46]) proposes the mathematical description of an action–reaction–diffusion dynamics in continua. As a matter of fact, causes are framed by conservation properties in physics, which correspond to “continuous symmetries” in the equations [11,47]. In summary, by the use of continuous vs. discrete mathematics, we obtain different understandings of “reality”.
Intuition and logic may underlie both, but they radically differ in the way we “look” at the world. In particular, because of the lack of an analysis of causality, the description and understanding of phenomena by discrete tools suggest either the intrinsic randomness proper to quantum measurement (assumed by Bohr in the Bohr–Einstein debate, see [5]) or the Imperative Pythagoreanism mentioned above: mind, biological organisms, inert matter, etc., function by “following orders”. Thus, we could control and reprogram them at will, as on the screens of our digital machines. Our analysis aims at proposing alternatives, from biology [22] to cognition [45].

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Hubel, D.H.; Wiesel, T.N. Shape and arrangement of columns in the cat’s striate cortex. J. Physiol. 1963, 163, 559–568. [Google Scholar] [CrossRef] [PubMed]
  2. Petitot, J. Elements of Neurogeometry: Functional Architectures of Vision; Springer: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
  3. Rudyak, Y. On Thom Spectra, Orientability, and (co)Bordism; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
  4. Longo, G. Reflections on Concrete Incompleteness. Philos. Math. 2011, 19, 255–280. [Google Scholar] [CrossRef]
  5. Nigrelli, A.; Tortoriello, F.S. Il Liceo Matematico: Un Approccio Storico e Interdisciplinare All’insegnamento Delle Scienze e Della Matematica; Mimesis: Milan, Italy, 2025. [Google Scholar]
  6. Minati, G. Systems, Complex Systems, and Intelligence: An Educational Overview. Wseas Trans. Adv. Eng. Educ. 2022, 19, 87–108. [Google Scholar] [CrossRef]
  7. Le Cun, Y. Quand la Machine Apprend: La Révolution des Neurones Artificiels et de L’apprentissage Profond; Odile Jacob: Paris, France, 2023. [Google Scholar]
  8. Ladyman, J.; Ross, D. Every Thing Must Go: Metaphysics Naturalized; Oxford University Press: Oxford, UK, 2007. [Google Scholar]
  9. Pearl, J.; Mackenzie, D. The Book of Why: The New Science of Cause and Effect; Basic Book: New York, NY, USA, 2018. [Google Scholar]
  10. Valiant, L. Probably Approximately Correct; Basic Books: New York, NY, USA, 2013. [Google Scholar]
  11. Kosmann-Schwarzbach, Y. The Noether Theorems: Invariance and Conservation Laws in the Twentieth Century; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  12. Doudna, J.; Sternberg, S. A Crack in Creation: Gene Editing and the Unthinkable Power to Control Evolution; Bodley Head: London, UK, 2017. [Google Scholar]
  13. Longo, G. Programming Evolution: A Crack in Science. Org. J. Biol. Sci. 2021, 5, 5–16. [Google Scholar]
  14. Hoepers, A.M.; Heinemann, J.A.; Zanatta, C.B.; Chu, P.; Hiscox, T.C.; Agapito-Tenfen, S.Z. Predicted multispecies unintended effects from outdoor genome editing. Ecotoxicol. Environ. Saf. 2024, 282, 116707. [Google Scholar] [CrossRef]
  15. Von Eschenbach, A.C. NCI sets goal of eliminating suffering and death due to cancer by 2015. J. Natl. Med. Assoc. 2003, 95, 637. [Google Scholar]
  16. Weinberg, R. Coming Full Circle—From endless complexity to simplicity and back again. Cell 2014, 157, 267–271. [Google Scholar] [CrossRef]
  17. Gerlinger, M.; Rowan, A.J.; Horswell, S.; Larkin, J.; Endesfelder, D.; Gronroos, E.; Martinez, P.; Matthews, N.; Stewart, A.; Tarpey, P.; et al. Intratumor Heterogeneity and Branched Evolution Revealed by Multiregion Sequencing. N. Engl. J. Med. 2012, 366, 883–892. [Google Scholar] [CrossRef]
  18. Versteeg, R. Tumors outside the mutation box. Nature 2014, 506, 438–439. [Google Scholar] [CrossRef]
  19. Heeney, C. Problems and promises: How to tell the story of a Genome Wide Association Study? Stud. Hist. Philos. Sci. 2021, 89, 1–10. [Google Scholar] [CrossRef]
  20. Baverstock, K. The Gene: An appraisal. Prog. Biophys. Mol. Biol. 2024, 187, 1–77. [Google Scholar] [CrossRef] [PubMed]
  21. Noble, D. Genes are not the blueprint for life. Nature 2024, 626, 254–255. [Google Scholar] [CrossRef]
  22. Soto, A.; Longo, G. From the century of the genome to the century of the organism: New theoretical approaches. Prog. Biophys. Mol. Biol. 2016, 122, 1. [Google Scholar] [CrossRef]
  23. Ball, P. How Life Works: A User’s Guide to the New Biology; Pan Macmillan: London, UK, 2024. [Google Scholar]
  24. Barendregt, H. The Lambda-Calculus: Its Syntax, Its Semantics; North-Holland: Amsterdam, The Netherlands, 1984. [Google Scholar]
  25. Girard, J.-Y.; Taylor, P.; Lafont, Y. Proofs and Types; Cambridge University Press: Cambridge, UK, 1990. [Google Scholar]
  26. Zellini, P. Breve Storia Dell’infinito; Adelphi: Milan, Italy, 1980. [Google Scholar]
  27. Longo, G.; Longo, S. Infinity of God and Space of Men in Painting, Conditions of Possibility for the Scientific Revolution. In Mathematics in the Visual Arts; Scheps, R., Maurel, M.-C., Eds.; ISTE-WILEY: London, UK, 2020; Available online: https://www.di.ens.fr/users/longo/download.html (accessed on 20 January 2025).
  28. Venter, C. We Have Learned Nothing from the Genome; Der Spiegel: Hamburg, Germany, 2010. [Google Scholar]
  29. Parrington, J. The Deeper Genome: Why There Is More to the Human Genome Than Meets the Eye; Oxford University Press: Oxford, UK, 2016. [Google Scholar]
  30. Soto, A.; Sonnenschein, C. The proletarianization of biological thought. Philosophy World Democracy. 2021. Available online: https://www.philosophy-world-democracy.org/articles-1/the-proletarianization-of-biological-thought (accessed on 23 December 2022).
  31. Longo, G. Le Cauchemar de Prométhée. Les Sciences et Leurs Limites; PUF: Paris, France, 2023. [Google Scholar]
  32. Philosophy World Democracy. Available online: https://www.philosophy-world-democracy.org/articles-1/science-in-the-storm (accessed on 20 August 2021).
  33. Ben-David, S.; Hrubeš, P.; Moran, S.; Shpilka, A.; Yehudayoff, A. Learnability can be undecidable. Nat. Mach. Intell. 2019, 1, 44–48. [Google Scholar] [CrossRef]
  34. Berthoz, A. La Vicariance; Odile-Jacob: Paris, France, 2013. [Google Scholar]
  35. Longo, G.; Mossio, M. Geocentrism vs genocentrism: Theories without metaphors, metaphors without theories. Interdiscip. Sci. Rev. 2020, 45, 380–405. [Google Scholar] [CrossRef]
  36. Longo, G. Information at the Threshold of Interpretation, Science as Human Construction of Sense. In Will Science Remain Human? Bertolaso, M., Sterpetti, F., Eds.; Springer: New York, NY, USA, 2019; pp. 67–100. [Google Scholar]
  37. Elowitz, M.B.; Levine, A.J.; Siggia, E.D.; Swain, P.S. Stochastic Gene Expression in a Single Cell. Science 2002, 297, 1183–1186. [Google Scholar] [CrossRef]
  38. Montévil, M.; Mossio, M. The Identity of Organisms in Scientific Practice: Integrating Historical and Relational Conceptions. Front. Physiol. 2020, 11, 611. [Google Scholar] [CrossRef]
  39. Lassègue, J.; Longo, G. L’empire numérique. De l’alphabet à l’IA. 2025. Available online: https://www.observatoireplurilinguisme.eu/dossiers-thematiques/culture-et-industries-culturelles/177778372-technologies-de-la-traduction/17821-ia,-soci%C3%A9t%C3%A9-et-sciences-de-la-culture-inalco,-4-f%C3%A9vrier-2025,-18h-19h30 (accessed on 20 January 2025).
  40. Kranthi, K.R.; Stone, G.D. Long-term impacts of Bt cotton in India. Nat. Plants 2020, 6, 188–196. [Google Scholar] [CrossRef] [PubMed]
  41. For more negative consequences and the current debate in Europe on new genetic technologies (CRISPR-Cas9), see ENSSER. Available online: https://ensser.org/ (accessed on 20 January 2025).
  42. Are GM Crops Better for the Environment? 2015. Available online: http://gmoinquiry.ca/environment/ (accessed on 20 January 2025).
  43. Heinemann, J.A.; Massaro, M.; Coray, D.S.; Agapito-Tenfen, S.Z.; Wen, J.D. Sustainability and innovation in staple crop production in the US Midwest. Int. J. Agric. Sustain. 2014, 12, 71–88+387–390. [Google Scholar] [CrossRef]
  44. Rising Pesticide Use Harming Farmers, Environment: Report. 2022. Available online: https://www.dw.com/en/pesticide-atlas-2022/a-60390427 (accessed on 20 January 2025).
  45. Longo, G.; Viarouge, A. Mathematical intuition and the cognitive roots of mathematical concepts. Topoi 2010, 29, 15–27. [Google Scholar] [CrossRef]
  46. Longo, G. Letter to Alan Turing. Theory Cult. Soc. 2018, 6, 73–94. [Google Scholar]
  47. Longo, G.; Montévil, M. Perspectives on Organisms: Biological Time, Symmetries and Singularities; Springer: Dordrecht, The Netherlands, 2014. [Google Scholar]

Share and Cite

Longo, G. The Systemic Unity in Mathematics and Science: Beyond Techno-Science Myths. Systems 2025, 13, 136. https://doi.org/10.3390/systems13020136
