Foundations of Biology

It is often stated that there are no laws in biology, where everything is contingent and could have been otherwise, being solely the result of historical accidents. Furthermore, the customary introduction of fundamental biological entities such as individual organisms, cells, genes, catalysts, and motors remains largely descriptive; constructive approaches involving deductive reasoning appear, in comparison, almost absent. As a consequence, both the logical content and the principles of biology need to be reconsidered. The present article describes an inquiry into the foundations of biology. These foundations are built in terms of elements, logic, and principles, using both the language and the general methods employed in other disciplines. This approach assumes the existence of a certain unity of human knowledge that transcends disciplinary boundaries. Leibniz's principle of sufficient reason is revised through a study of the complementary concepts of symmetry and asymmetry and of necessity and contingency, and this revision is used to explain how these concepts are involved in the elaboration of theories or laws of nature. Four fundamental theories of biology are then identified: cell theory, Darwin's theory of natural selection, an informational theory of life (which includes Mendel's theory of inheritance), and a physico-chemical theory of life. Atomism and deductive reasoning are shown to enter into the elaboration of the concepts of natural selection, individual living organisms, cells and their reproduction, genes, as well as catalysts and motors. This work contributes to clarifying the philosophical and logical structure of biology and its major theories. It should ultimately lead to a better understanding of the origin of life, of systems and synthetic biology, and of artificial life.


Introduction
Our work investigating the foundations of biology began as a theoretical construction of the genetic material. We observed that the fundamental concept of a material basis to heredity is commonly introduced through a description of the DNA double helix, followed by another description of its replication process [1,2]. Asking "Why is DNA such as it is and not otherwise?", we chose to pursue a complementary approach: our focus was on a construction in the general deductive spirit of the work of Crane [3], establishing first a minimal list of requirements that a biological device for information storage should possess. The genetic material emerged from this construction as a transient state in a succession of quasi-invariant processes of replication. Over the years, this work has been slowly amended through presentations to various audiences, in research seminars, meetings, and courses (both at the undergraduate and graduate levels), and has benefited from the criticisms of many people. Encouraged by the positive features of this construction, we sought to better understand the nature of the rules upon which it was made.
The structure of DNA, when compared to our theoretical construct, is, in a qualified sense, unique and ideal. This conclusion contrasts with a widely held opinion, according to which there are no laws in biology, where everything could have been otherwise, being solely the result of historical accidents. It thus raises the broad question of the roles of necessity and contingency in this discipline. Asking "Why is life such as it is and not otherwise?", we were inevitably led to ponder the foundations of biology and their connection with the problems of the origin of life, of artificial life, and of synthetic biology. Finally, we came to compare this with the foundations of other disciplines.

The Foundations of Knowledge
The disciplines of logic, mathematics, and physics have gone through foundational crises leading to a deeper understanding of their elements, principles, and historical development.
- In mathematics, the basic concepts of number (for example, real or infinite) and geometry (for example, Euclidean versus Riemannian) have been revised; novel axioms, as well as constructive approaches, have been introduced, and new measures have been defined (as in the so-called measure theory used in modern probability and integration).
- In physics, the concepts of space, time, and matter have been reconsidered, and metrology has been completely renovated. Modern physics has identified many new elements (such as photons, atoms, and subatomic particles), and this has required the elaboration of discrete logical systems (both in statistical physics and quantum mechanics). The role of probability theory in the logic of this discipline has grown considerably. As a result, the philosophical issues of necessity and determinism, of contingency and free will, have also been reconsidered.
- Logic itself has been transformed by Gödel's work, showing that certain propositions can neither be proved nor refuted, thus giving an unexpected role to contingency in this discipline, and radically transforming the concept of demonstration. Various types of logical systems have also developed: finite or not, discrete or continuous, multivalued or fuzzy, temporal, bearing on past and future, on retroduction and prediction, or describing other modalities. As a result, the largely illusory a priori character of logic has faded away, the process of differentiation into diverse systems making patent (in the wording of Quine) its naturalization.
Lastly, the broad importance of the concept of symmetry has been grasped progressively in logic, in mathematics, in physics, and in the elaboration of theories or laws of nature, as further explained below.
These three disciplines appear to have common characteristics: the importance of their deductive content, as well as the key role played by elemental objects in the logical inferences employed. A certain "elemental logic", bearing on items that are irreducible and invariant, is at the heart of many constructive approaches. This resembles the logical atomism of Russell [4] which has been a source for modern analytical philosophy.
Biology appears to differ today from these disciplines in several ways. It has not gone through a foundational crisis. Furthermore, it is often claimed that biology is unique, being unlike other disciplines, and that contingency reigns, providing the ultimate explanation for everything. Such statements both deny the existence of theories in biology and, more generally, of a unity of knowledge. The logical content, especially deductive, of biological knowledge remains uncertain. This also raises the question of the role of atomism in biology, a question having many facets as well as ancient roots. The three great biological theories introduced in the nineteenth century, namely cell theory, Darwin's theory of natural selection [5], and Mendel's theory of inheritance [6,7], were formulated at a time when the atomic structure of matter was still being debated. This implies that their initial formulation must be updated in order to make room for an elemental logic. Mendel's theory is often considered as a main root of atomism in biology, introducing, implicitly, the concept of particulate inheritance, as put forth by Fisher [8], thus correcting the erroneous idea of blending inheritance [9,10]. Yet Mendel's "elements", later called genes by Johannsen [11], are now known to be compound items rather than true units of biological information. Furthermore, modern physical atomism originates from other sources, namely the kinetic theory of gases and the theory of Brownian motion, and its general consequences for biological thought must also be explored.

Aim of the Present Work
The goal of the present work is threefold: first to provide a succinct exposition of our investigations on the foundations of biology and of the methodology used, sketching the main arguments and major conclusions; secondly, to describe a research program; thirdly, to present a pedagogical project. A companion work will describe theoretical constructions of the genetic material and of proteins using the results obtained here [12].
In the investigation of the foundations of biology, one must identify elemental phenomena of the discipline, describe the measures employed to study them and the resulting logic. At the same time, the underlying principles must be disclosed, bringing into light philosophical considerations. Foundations are also to be studied in historical terms.
Working on all these aspects of the foundations of biology is an immense task that cannot be completed in a few years by a small group of researchers. It constitutes a research program, similar to the mathematical Erlangen program of Felix Klein [13] seeking to incorporate group theory into the study of geometry.
Finally, we also think of foundations in terms of what we believe should be taught first, in terms of elementary or basic phenomena, concepts, and theories. We have found over the years that the foundations of biology can be to a large extent taught at the undergraduate level (and even before). The present work should, therefore, be viewed also as a pedagogical project.

Results and Discussion
Biology is a scientific discipline based on phenomena derived from observations or experiments, on concepts elaborated from them, and on further generalizations derived from these concepts through theories or laws of nature. We investigate its foundations using the language and the methods employed in the investigation of disciplines outside of biology. Our approach is based on a fundamental and general belief in the existence of a unity of human knowledge. This assumption is, nevertheless, constantly held as a working hypothesis susceptible of refutation.

Revising the Principle of Sufficient Reason
The question "Why is DNA such as it is and not otherwise?" can be addressed, following Leibniz, using the philosophical principle of sufficient reason. This states that we can always provide an answer to the following two questions: "Why does something exist rather than nothing?", and, if it exists, "Why is it as such and not otherwise?". The broad importance of this principle is well known: as remarked by Enriques [14], it constitutes a postulate of the intelligibility of reality, as well as a requirement for the elaboration of models, and provides rules for scientific constructions. At the same time, its practical use is not so easily grasped. This principle is constrained by the observation that, most often, the reasons will remain unknown to us. This limitation illustrates the general finding that fundamental principles never come alone, but always appear in complementary pairs, in a dialectic manner.
Underlying the principle of sufficient reason are the concepts of necessity and contingency and of symmetry and asymmetry. To make the use of the principle of sufficient reason easier, we have broken it into four sub-principles, defining four philosophical attitudes towards these dual couples of antithetic concepts, illustrated in Table 1. One can associate, for instance, symmetry with contingency (COSY) through the classical point of view according to which contingency arises from ignorance, a lack of information. Conversely, one can associate asymmetry with necessity (NEASY) by observing that a phenomenon, to occur, requires the absence of certain symmetry elements, in other words, the presence of necessary asymmetries. Thirdly, symmetries can be taken as necessary (NESY), focusing on simplicity, economy, and invariances. Fourthly, asymmetries can be taken as contingent (COASY), focusing on imagination and invention. These four attitudes can be used to explain how the concepts of necessity and contingency and of symmetry and asymmetry enter into the formation of knowledge, in particular, in the principle of sufficient reason and in the elaboration of theories or laws of nature.

Table 1. Four attitudes towards necessity and contingency, symmetry, and asymmetry.
COSY associates symmetry with contingency, in a principle of "non-sufficient" reason or indifference. Both in physics and in information theory, ignorance, viewed as a lack of information, can prevent the establishment of the uniqueness of a phenomenon. This is treated through a maximization of the relevant (Gibbs-Boltzmann or Shannon) entropies.
NEASY associates asymmetry with necessity. The presence of necessary asymmetries for the occurrence of a phenomenon constitutes Curie's asymmetry principle: "The characteristic symmetry of a phenomenon is the maximal symmetry compatible with the existence of the phenomenon. A phenomenon can exist in a medium that possesses its characteristic symmetry or that of one of the subgroups of its characteristic symmetry. In other words, certain symmetry elements can coexist with certain phenomena, but they are not necessary. What is necessary is that some symmetry elements be missing. Asymmetry is what creates a phenomenon." [26]. For instance, a macroscopic heat engine requires: (1) a transfer between two reservoirs differing in temperature (a space heterogeneity), (2) an absence of time reversibility (time orientation), and (3) an oriented heat flow from the hot to the cold reservoir. The Carnot principle can be viewed as a manifestation of a principle of division of the simplest type relating division with efficiency. A Curie analysis of the phenomenon of the Brownian motion of a colloidal particle in a fluid at rest similarly reveals two fundamental spatial and temporal asymmetries. It establishes the granularity of the surrounding fluid, its discrete, atomic (or molecular) structure (and, therefore, the disruption of the symmetry of scale invariance); it also reveals that the surrounding fluid medium is not microscopically at rest (in agreement with the kinetic theory of heat).
NESY asserts the necessity of symmetry at the foundations of laws of nature. It arises, first, through the principle of simplicity or economy: when several laws can describe the same phenomenon, we must retain the one that is the simplest, most parsimonious, devoid of redundancy. Simplicity can be here defined and measured through algorithmic information theory (as developed by Solomonoff, Kolmogorov, and Chaitin). The information content of a phenomenon is formally given by the size of the smallest algorithm characterizing it. In NESY, phenomena are not described through necessary asymmetries, but instead through a group of compatible symmetries, either continuous or discrete, and the focus is on conservation laws or selection rules. Symmetries are called "non-observables", and, accordingly, asymmetries become associated observables as in NEASY.
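The identification of simplicity with the size of the smallest description admits a concrete, if rough, illustration. Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives a practical upper bound on it. The following minimal Python sketch (our illustration, with hypothetical function names, not part of the cited theory) shows that a regular, highly symmetric string admits a much shorter description than an irregular string of the same length.

```python
import random
import zlib

def compressed_size(s: str) -> int:
    # Practical upper bound on algorithmic information content:
    # the length, in bytes, of a general-purpose compression of s.
    # (Kolmogorov complexity itself is uncomputable.)
    return len(zlib.compress(s.encode(), level=9))

# A highly regular string has a short description (low information),
# while an irregular string of the same length does not.
regular = "AB" * 500
rng = random.Random(0)
irregular = "".join(rng.choice("AB") for _ in range(1000))
```

Here the regular string compresses to a few dozen bytes while the irregular one resists compression, in line with the identification of information content with the size of the smallest algorithm characterizing a phenomenon.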
COASY associates asymmetry with contingency. Here, one decides to suppress some of the necessary asymmetries of a phenomenon, treating them as contingent. It releases imagination and contributes to creative thinking at large. Gedanken (thought) experiments in the natural sciences lead to the prediction of phenomena, thus to conjectural asymmetries. It can also be found in all disciplines that are not constrained by natural phenomena, such as mathematics or the sciences of the artificial. This can be illustrated with mathematical Brownian motion (also called a Wiener process), which is obtained as the continuous limit of a discrete random walk on a lattice and possesses the properties of unbounded scale and time invariances. The existence of a ballistic regime in the physical process of Brownian motion is incompatible with exact scale and time invariances, showing that mathematical Brownian motion does not describe physical reality.
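The relation between the discrete random walk and its scale-invariant Brownian limit can be checked numerically. The short Python sketch below (illustrative only; the function and variable names are ours) estimates the mean squared displacement of a symmetric random walk and exhibits the diffusive scaling E[x²] ∝ t that survives in the continuum limit.

```python
import random

def mean_square_displacement(steps: int, trials: int, rng: random.Random) -> float:
    # Endpoint of a symmetric +/-1 random walk on the integer lattice,
    # squared and averaged over many independent trials.
    total = 0
    for _ in range(trials):
        x = sum(rng.choice((-1, 1)) for _ in range(steps))
        total += x * x
    return total / trials

rng = random.Random(42)
msd_100 = mean_square_displacement(100, 2000, rng)
msd_400 = mean_square_displacement(400, 2000, rng)
# Diffusive scaling: E[x^2] grows linearly with the number of steps,
# so quadrupling the "time" roughly quadruples the mean square displacement.
```

With the seed fixed as above, msd_400/msd_100 comes out close to 4, the signature of the diffusive (rather than ballistic) regime.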
The construction of a theory describing a phenomenon (or a set of phenomena) consists of two steps: The first is the elaboration of appropriate measures aiming at its study. Secondly, from these measures can then be identified the invariants of the phenomenon which include both necessary asymmetries and compatible symmetries. Phenomenal asymmetries can be deemed necessary according to Curie's asymmetry principle [26]. In a complementary manner, the set of all the symmetries compatible with a phenomenon defines its symmetry group. Such symmetries can be viewed philosophically as being either contingent or necessary. The necessity of symmetry in mathematics is found in a principle of symmetry, stated by Weyl [21], according to which any mathematical object must be characterized in terms of the set of symmetries, called its automorphism group, leaving it invariant. This represents an extension of Klein's program for geometry to the whole of mathematics and then to the natural sciences. The method of construction of a theory that we follow, following Lautman [29], therefore aims both to extract the necessary asymmetries from a phenomenon and to incorporate as many symmetry elements as possible in the symmetry group which describes it. In this approach, one does not construct something that actually exists, but rather an ideal, Platonic structure. We follow a similar approach in the theoretical construction of a compound item. A necessary item is unique and, if it is compound, can be assembled from elemental components using the rules of construction given above. In contrast, an item that is not necessary is called contingent as it could be otherwise; a contingent item cannot be constructed, but only described.
The claim that everything in biology is contingent raises a general philosophical problem as to the origin of necessity. We view necessity as having its main sources in logic, in symmetry considerations, and, lastly, in atomism and the Fermatian inference of descent (discussed in the next section). In Aristotelian logic, necessity is associated with deductive reasoning (called apodictic necessity or certainty by Kant) [30]. Extending this idea to the natural sciences, we state that the necessary character of an inference made in any scientific discipline is related to its content in deductive reasoning. The inference leading, for instance, to the establishment of a difference between two objects using a measure of sufficient accuracy possesses such a deductive character. It allows us to hold, as necessary, the conclusion of non-identity. This can explain, in part, the origin of the necessity found in Curie's asymmetry principle. Another major source of necessity arises from the attitude associating it with symmetry, at the heart of modern physics. Making symmetry necessary has the consequence of conferring a character of certainty on the generalization of induction.

Biology and Physical Atomism
The introduction of atomism in biology is required to deduce the existence of individual living organisms (meaning, literally, that they cannot be divided) and to construct schemes for a process of reproduction. The basic logical tool here is derived from Fermat's principle of infinite (or indefinite) descent (which, in his terms, can be used both in a negative and in an affirmative manner) [31]. This inference rules out reproduction processes based on a constant reduction in the size of (nested) living organisms (as in the theory of the "homunculus"). This was understood by Buffon [32], who, however, could not draw his argumentation to a clear conclusion, as the size and reality of atoms themselves were unknown in his day. Another consequence of atomism is that reproduction cannot proceed unbounded under finite resources, pointing to the necessary existence of some sort of competition or struggle for life, a statement that is at the basis of the theory of natural selection. One can thus conclude that physical atomism plays a central role in the construction of biological concepts and theories. In the following sections, we shall see how it enters into the four fundamental theories of life: the theory of natural selection, cell theory, the informational theory of life, and the physico-chemical theory of life.

Four Fundamental Theories of Biology
We construct four theories of life: a theory of natural selection, a cell theory, an informational theory of life, and lastly, a physico-chemical theory of the living. There is a historical basis to this choice, and the boundaries between these theories are difficult to define and somewhat arbitrary. The theory of natural selection was first built at a human scale, with a macroscopic view of living organisms. Yet, the principle of atomicity is a central component of the theory. Cell theory requires a mesoscopic description, which also mixes discrete and continuous logic. The informational theory is the most abstract, also focusing on the discrete, through the notion of an elemental bit of information. The physico-chemical theory of life investigates natural selection at the elemental level (discrete: supramolecular, molecular, as well as sub-or infra-molecular), and as such, it contains what is also described as molecular biology.
The process of reproduction stands out among the familiar phenomena associated with life. All living organisms come from previous living organisms through reproduction. The process of reproduction has two aspects: it is a kind of multiplication, which can increase the number of living organisms and a certain transmission of hereditary information, implying a memory system. Any system involving a memory requires its persistence through time. This requirement is absolute and cannot tolerate even the slightest interruption, thus relying on a principle of continuity (also known as Leibniz's law of continuity). Biology which relies on reproduction of individuals is, therefore, based on a pair of seemingly contradictory, yet actually complementary, principles: continuity and discontinuity. The simultaneous presence of both continuity and discontinuity is found in all sciences, as observed by d'Arcy Thompson [33], among others. This is illustrated in physics with the roles played by the continuum in field physics compared to that of the discrete in crystallography, atomic or quantum physics.
The most conspicuous mode of reproduction is biparental, observed both in macroscopic plants and animals. Biparental reproduction was long thought to be the major type of generation in the living kingdom. Yet, biparental generation is now known to be much less common than uniparental generation, in the number of organisms reproducing in this manner, in their biomass, and in the variety of species. Uniparental reproduction, which can also be called haploid reproduction (from the Greek απλoειδής, having a simple aspect), is the simplest mode of generation, with only one parent and two offspring. Indeed, reproduction, in order to result in a multiplication, requires the production of at least two surviving organisms. The simplest type of reproduction is thus a gemination, first observed by Trembley, Horace Bénédict de Saussure, and Spallanzani: a duplication through a process of growth followed by a binary fission, a division into two parts. Uniparental reproduction remains a constant source of wonder, stemming from its disarming simplicity, and also a source of bewilderment, as the parent mysteriously vanishes into its two offspring.

The Theory of Natural Selection
Darwin observed that biparental reproduction is not an exact invariance, but an approximate symmetry that generates heritable, in general minute, differences [5]. He then generalized by induction this empirical finding to all living organisms (indeed, we know today that uniparental reproduction is also not an exact invariance). The current underlying explanation is based on the existence of a genetic material consisting of information-carrying heteropolymers, made of assemblies of a finite number of kinds of monomers (elementary bits), such that both the number of hereditary traits and the informational content of each one (in terms of number of bits) are always very large. This makes possible the existence of discrete variations, slight enough to be treated as infinitesimal, compatible with the use of a continuous description of the reproduction process. Furthermore, we know today that the material structure is indeed prone to mutate as a result of interactions with ionizing radiation, UV light, or base-pair tautomerism, ensuring a constant generation of variations.
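The informational capacity invoked here is easy to quantify. As an illustration of ours (the function name is hypothetical), the maximal information content of a heteropolymer grows linearly with its length, at log₂ of the alphabet size bits per monomer:

```python
import math

def sequence_bits(length: int, alphabet_size: int) -> float:
    # Maximum information content, in bits, of a heteropolymer of
    # `length` monomers drawn from `alphabet_size` kinds of monomer.
    return length * math.log2(alphabet_size)

dna_bits = sequence_bits(1000, 4)       # 4 bases -> exactly 2 bits per position
protein_bits = sequence_bits(300, 20)   # 20 amino acids -> about 4.32 bits each
```

Even short sequences thus carry thousands of bits, which is why variations of a single bit are slight enough to be treated as infinitesimal against the whole.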
A close look at the logic of Darwin's theory of natural selection leads to the identification of three main types of logical inferences:
• inductive, such as the statement that the reproduction of individuals is not an exact invariance, but an approximate symmetry that generates heritable differences;
• deductive, proving the existence of a process called natural selection, leading to the retention of certain living organisms and to the elimination of others, as a necessary consequence of the imperfection of reproduction and of the finiteness of available resources; and
• retroductive or abductive [16], leading (when augmented by concepts elaborated in cell theory below) to the parsimonious conclusion of the existence of a unique common ancestor for all living organisms.
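The deductive component can be made concrete with a toy model. The following Python sketch is only an illustration of ours (all names and parameter values are hypothetical): imperfect reproduction supplies heritable variation, and a fixed resource cap forces the elimination of some offspring; the retained trait value then increases across generations, with no further assumption needed.

```python
import random

def generation(population, capacity, mutation_sd, rng):
    # Imperfect reproduction: each parent leaves two offspring whose
    # heritable trait differs slightly from the parent's.
    offspring = [trait + rng.gauss(0.0, mutation_sd)
                 for trait in population
                 for _ in range(2)]
    # Finite resources: only `capacity` offspring survive, and survival
    # is biased towards the larger trait value (the selective criterion).
    offspring.sort(reverse=True)
    return offspring[:capacity]

rng = random.Random(1)
pop = [1.0] * 50
for _ in range(100):
    pop = generation(pop, capacity=50, mutation_sd=0.05, rng=rng)
mean_trait = sum(pop) / len(pop)  # rises well above the initial 1.0
```

Retention and elimination here are not added as hypotheses; they follow from the two premises alone, which is the sense in which natural selection is a deduction.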
From a pedagogical point of view, this logic of natural selection is simple enough to be taught to high school students. The inductive, as well as the key deductive, components of Darwin's theory were identified long ago by Julian Huxley [34] and by Leigh van Valen [35]. In contrast, many recent works, such as Mayr's book on the history of biology or articles written in 2009 (150 years after the publication of the Origin), discuss the logic of this theory following Huxley in terms of five major inferences [36][37][38], yet of unspecified status (deductive, inductive or otherwise).
Natural selection defines a second law of interaction between living organisms (reproduction being the first). This quasi-invariant process provides a constant means of dispersal, acting on all forms of life following the last common ancestor. The existence of this common ancestor, when compared with the immense variety of present living organisms, brings to the fore the action of a principle of divergence underlying natural selection. Natural selection explains, in a parsimonious manner, the immense variety of living organisms in their totality. Indeed, of all the theories conceived by man to understand the universe, the theory of natural selection stands amongst the simplest in hypotheses and the richest in phenomena.
Finally, natural selection operates through a constant search for extremes, both maxima and minima, over all parameters at its disposal. We observe the results of this search for extremes, for instance, in terms of size, of largeness and minuteness, at the level of entire organisms (with elephants and whales, for example, at one end and prokaryotic single-cell organisms at the other). At the cellular level, the eggs of birds are single cells of macroscopic size; similarly, the largest neurons are of the size of an entire animal and can be several meters long. At the molecular level, natural selection leads to the formation of the giant macromolecules of meter-long chromosomal DNA but also to a constantly increasing repertoire of small molecules such as secondary metabolites. At the metabolic level, this search for extremes leads to states of maximum, as well as minimum, dissipation of energy. The overall measure of natural selection, introduced by Fisher, is called fitness and has the dimension of the reciprocal of time [8]. The relative fitness of living organisms is to be understood as an expression of a biological principle of fine division.
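Fisher's observation that fitness has the dimension of reciprocal time can be illustrated with the Malthusian parameter. In this hedged sketch (function and variable names are ours, and the population figures are invented for the example), r is recovered from exponential growth, and the relative fitness of two lineages measured over the same interval is the difference of their growth rates, again a 1/time quantity:

```python
import math

def malthusian_fitness(n0: float, nt: float, t: float) -> float:
    # Malthusian parameter r, from exponential growth N(t) = N0 * exp(r * t).
    # Its dimension is the reciprocal of the time unit used for t.
    return math.log(nt / n0) / t

r_fast = malthusian_fitness(100, 800, t=3.0)  # doubles once per time unit
r_slow = malthusian_fitness(100, 200, t=3.0)
relative = r_fast - r_slow  # relative (log) fitness: still a 1/time quantity
```

Changing the time unit rescales r and the relative fitness by the same factor, which is what "dimension of reciprocal time" means operationally.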

Cell Theory
We construct cell theory starting from the requirements of natural selection: reproduction, imperfect transmission of hereditary information and constant operation of selective pressure. As explained above, the feasibility of this constructive approach is based on the principle of atomicity that excludes a constant reduction in the size of living organisms. Concomitantly, the theory of natural selection tells us that cells, being complex entities, are themselves assembled from a very large number of elements and can reasonably be described as continuous objects. Our approach is, therefore, semi-continuous. Elaborated in this manner, cell theory is both deductive and inductive and completes the nineteenth century view (of Schleiden, Schwann, and others) which remained purely inductive.
The simplest cellular shape is spherical (having maximal symmetry). The process of reproduction of a cell of minimal size requires a phase of cellular growth, during which all the components of the parental cell are duplicated, prior to a binary fission in which the duplicated components are segregated to the two daughter cells. This combined process of growth followed by division, that we call cell germination, makes the spherical symmetry of cells only approximate, as observed by d'Arcy Thompson [33] and others. Binary fission creates a singular point of contact between two daughter cells at the moment of separation. This point becomes a new pole, leading to the deduction that all cells are polarized, both in space and in time (similar statements can be found in the literature, for instance, in the work of Taddei and coworkers [39]), in accordance with the principle of fine division.
Cell theory constructs both states and processes. It defines different types of cells: the simplest are those involved in ordinary uniparental or haploid reproduction, the more complex (which are germ cells or gametes in addition to somatic cells) are involved in biparental or diploid reproduction. Cell theory states that haploid generation preceded diploid generation. Furthermore, the common ancestor of all living organisms was unicellular. It was perhaps not an isolated entity, but a set of cells freely sharing their genomic content that later evolved into distinct organisms [40].
Cell theory defines the state of latent life [41], also called cryptobiosis [42], in which living organisms can survive for extended periods of time in the absence of metabolism, as isolated thermodynamic systems in a reversible, dormant stage. Cryptobiosis is both necessary and universal as a potentiality at the cellular level. Thus, at any time, a cell is either living, in a cryptobiotic state, or dead. Cell death is a necessity dictated by natural selection. Yet, every cell living today is connected to the common ancestor by a lineage of cells, all of which have escaped death. The continuity of cellular life thus implies that cell death, although necessary, is not universal. Cell theory also constructs the process of programmed cell death, necessary and universal as a potentiality in living organisms today. Both programmed cell death and cryptobiosis are necessary to ensure the continuity of life across most adverse conditions. In unicellular organisms, programmed cell death is often an intermediate process in a pathway leading to cryptobiosis.
Cryptobiosis is a phenomenon of deep and general significance for life. During cryptobiosis, the metabolic process is transiently suspended. As a result, life appears as a discontinuous rather than a continuous process. Cryptobiosis confers robustness to cellular life: it is the discontinuity of metabolism that renders possible the continuity of life. Cryptobiosis demonstrates that metabolism need not be continuous, that the process of life can be temporarily interrupted. Rather, what is necessary is a certain stability of cellular structure in the absence of metabolism. Cryptobiosis justifies, indeed makes indispensable, a materialistic view of life. The protoplasm of a cell in a cryptobiotic state remains (almost) constant, independently of the environment. The overall stability of a cell in a cryptobiotic state further implies that its components must be endowed with a similar stability, of purely static nature, demonstrating the relevance of the concepts of equilibrium thermodynamics and statistical physics for biology. Cryptobiosis offers a strong argument in favor of an analytical, reductionist approach, seeking to fractionate cells and to isolate subcellular structures performing specific functions (for instance: genetic material, viruses, biological catalysts, biological motors, and fuel).
Cryptobiosis also offers a strong argument in favor of synthetic biology, which aims at the reconstitution of functional components of a cell starting from inert parts. It ensures the existence of stable intermediate forms, the sub-systems necessary for the assembly and evolution of any complex system, whether living or artificial.
Cryptobiosis contributes to our understanding of the origin of life on Earth, as it shows that it is possible to obtain a living organism starting from a fully inert structure, and therefore to envision its abiogenic synthesis. Given the molecular structure of a cryptobiotic inert cell, and a mastery of the techniques required to assemble it from individual atoms, it is possible to construct a living organism, without the need for a hypothetical "breath of life".
The phenomenon of cryptobiosis thus removes a fundamental obstacle raised by some proponents of vitalism who maintain that a spontaneous appearance of life is inconceivable.

The Informational Theory of Life
The theory of biological inheritance or informational theory of life envisions living organisms, their structures, their functions, and their interactions, in terms of information, stored or transmitted, emitted or received, and involved in regulation or control, in homeostasis [43] as well as adaptation. Biology can be seen as a branch of information and communication sciences, and of cybernetics [44,45]. Biological information constitutes a fundamental asymmetry of life (comparable with molecular chirality, itself often called the asymmetry of life). Information is a physical concept and is associated with an energy cost [46]. The informational theory of life, as such, includes the study of heredity or genetics, and we focus here on this aspect. The theory of inheritance has its roots, in part, in the work of Mendel in the nineteenth century [6,7]. The logical basis of this theory, however, was developed mostly starting in the second half of the twentieth century.
The work of von Neumann on self-reproducing automata offers a proof of existence of such a logic [47,48]. The architecture of his six-component automaton, described in 1948 [47], is shown in Table 2, where its components are compared to their structural counterparts in living organisms (the aggregate A + B + C + I_D corresponding to the cell, the simplest living organism). The self-reproducing automaton is built from two fundamental, distinct components: a set of instructions and an aggregate of smaller automata. This agrees with the observed existence of two types of informational biopolymers (nucleic acids and proteins) and thus, of a genetic code relating their sequences [49]. The automaton of von Neumann consists of five automata and one instruction. This instruction I_D is itself an aggregate of simple parts and acts as the tape in a Turing computing automaton. The automaton B makes a copy of I_D: "the copying mechanism B performs the fundamental act of reproduction, the duplication of the genetic material, which is clearly the fundamental operation in the multiplication of living cells." [47]. When the constructing automaton A is furnished with the instruction I_D, the controlling automaton "C will first cause A to construct the automaton which is described by this instruction I_D. Next C will cause B to copy the instruction I_D referred to above, and insert the copy into the automaton referred to above, which has just been constructed by A. Finally, C will separate this construction from the system A + B + C and 'turn it loose' as an independent entity." [47]. In order to function, the aggregate D = A + B + C must be furnished with the instruction I_D describing this very automaton D, thus forming the self-reproducing automaton E. "E is clearly self-reproductive. Note that no vicious circle is involved. The decisive step occurs in E, when the instruction I_D, describing D, is constructed and attached to D. When the construction (the copying) of I_D is called for, D exists already, and it is in no wise modified by the construction of I_D. I_D is simply added to form E. Thus there is a definite chronological and logical order in which D and I_D have to be formed, and the process is legitimate and proper according to the rules of logic." [47].
The automaton described in 1948 performs a process of uniparental reproduction, in which the instruction part I_D is duplicated and segregated. Yet it differs from living organisms as we know them today: the process of reproduction is exact, occurring without changes in the instruction part, which is further envisioned by von Neumann as a one-dimensional string of two symbols (zero and one). In addition, the mechanism by which the replication of the string occurs, and the problems raised by its transport to daughter structures, are not addressed. Our work on the construction of the genetic material will address these issues and investigate the current significance of von Neumann's work [12].
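The chronological and logical order of von Neumann's construction can be illustrated by a minimal sketch. All function and variable names here are ours, chosen for illustration; the sketch only encodes the sequence quoted above (A constructs from I_D, B copies I_D, C attaches the copy and releases the offspring) and shows that the offspring E' is identical to the parent E, without any vicious circle.

```python
# Minimal sketch of von Neumann's self-reproducing automaton [47].
# Names (Automaton, construct, copy, reproduce) are illustrative only.

from typing import NamedTuple


class Automaton(NamedTuple):
    description: str  # encodes the aggregate D = A + B + C


def construct(instruction: str) -> Automaton:
    """A: build the automaton described by the instruction I_D."""
    return Automaton(description=instruction)


def copy(instruction: str) -> str:
    """B: duplicate I_D ('the fundamental act of reproduction')."""
    return instruction[:]  # a literal copy of the tape


def reproduce(parent: Automaton, instruction: str):
    """C: control the sequence -- construct, copy, attach, release."""
    offspring = construct(instruction)    # A builds a new D from I_D
    instruction_copy = copy(instruction)  # B copies I_D; D itself is unmodified
    return offspring, instruction_copy    # E' = D' + I_D, 'turned loose'


# E = D + I_D: the instruction describes the very aggregate it is attached to.
I_D = "A+B+C"
D = Automaton(description=I_D)
E = (D, I_D)
assert reproduce(D, I_D) == E  # exact, uniparental reproduction
```

The decisive point of the sketch is that `D` exists before `copy(I_D)` is called for, so the self-description is attached, not self-referentially generated.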
One can combine natural selection, cell theory, and information theory to construct a theory of inheritance, first for the simplest living organisms, reproducing through uniparental, haploid generation, then for more complex organisms, reproducing through biparental, diploid generation. Assuming for simplicity that gametes possess a haploid system of inheritance, one can deduce, first, the existence of a system of diploid inheritance in zygotes and somatic cells and then, demonstrate Mendel's laws of inheritance.
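The deductive step from haploid gametes to diploid inheritance can be made concrete with a small sketch. The genotype notation ('A' dominant, 'a' recessive) and function names are illustrative assumptions; the sketch simply enumerates all equiprobable unions of haploid gametes and recovers the 1:2:1 genotypic (and 3:1 phenotypic) ratios of a monohybrid cross.

```python
# Sketch: Mendelian segregation deduced from haploid gametes forming
# diploid zygotes. Notation ('A' dominant, 'a' recessive) is illustrative.

from collections import Counter
from itertools import product


def gametes(diploid: str) -> list:
    """Each of a diploid parent's two alleles is transmitted with equal frequency."""
    return list(diploid)


def cross(parent1: str, parent2: str) -> Counter:
    """All equiprobable unions of haploid gametes yield the zygote genotypes."""
    zygotes = ("".join(sorted(g1 + g2))
               for g1, g2 in product(gametes(parent1), gametes(parent2)))
    return Counter(zygotes)


# Cross of two heterozygotes: genotypes segregate 1 AA : 2 Aa : 1 aa,
# hence the 3:1 dominant-to-recessive phenotypic ratio.
offspring = cross("Aa", "Aa")
dominant = offspring["AA"] + offspring["Aa"]
recessive = offspring["aa"]
assert (dominant, recessive) == (3, 1)
```

The same enumeration applied to a cross of homozygotes, `cross("AA", "aa")`, yields only heterozygous `Aa` offspring, the uniformity of the first hybrid generation.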
The concepts of genetic linkage and genetic recombination can be introduced (to be used in the construction of the genetic material [12]). The seven pairs of characters investigated by Mendel are transmitted independently of one another. Later experiments with sweet peas [50] have shown that for certain pairs of genes a preferred co-transmission can be observed, a phenomenon called genetic linkage, leading to the elaboration of the concept of linkage group. In complex living organisms such as metazoans, the genes are usually clustered into several linkage groups. In simpler prokaryotic unicellular organisms, such as the bacterium Escherichia coli, there can exist a single linkage group [51]. This latter discovery relied, in fact, on the exploitation of the newly discovered phenomenon of genetic recombination in bacteria [52]. Indeed, investigations of genetic linkage revealed the plasticity of linkage groups and led to the elaboration of the concept of genetic recombination, according to which genes do not occupy fixed positions, but can occasionally move either within a given linkage group or to a different linkage group. This has been observed both in eukaryotes and in prokaryotes. We formulate a generalization of these observations stating that recombination is a universal process in living organisms. The process of genetic recombination points to the existence of cut and paste tools operating on the genetic material within or between linkage groups.
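The cut and paste character of recombination can be sketched as a single crossover between two linkage groups. Modeling a linkage group as a string of allele symbols is an assumption made for illustration only; the sketch shows how genes on the same side of the cut remain co-transmitted (linkage) while genes on opposite sides are recombined.

```python
# Sketch of genetic recombination as a cut-and-paste operation: a single
# crossover between two linkage groups, each modeled as a string of alleles.


def crossover(group1: str, group2: str, cut: int):
    """Cut both linkage groups at the same position and exchange the tails."""
    return (group1[:cut] + group2[cut:],
            group2[:cut] + group1[cut:])


# Genes A, B stay together (linked); C, D, E are exchanged as a block.
r1, r2 = crossover("ABCDE", "abcde", 2)
assert (r1, r2) == ("ABcde", "abCDE")
```

The closer two genes lie on either side of possible cut points, the rarer their separation, which is the observational basis of linkage mapping.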

The Physico-Chemical Theory of Life
The physico-chemical theory of life seeks to understand the process of natural selection using the concepts of physics and chemistry. This theory includes molecular biology (and extends into the infra-molecular level, as both nuclear and electronic properties have to be considered, as well as into the supra-molecular level, indeed up to the macroscopic level). We focus here on the elemental level. This theory tries to explain the high rates, high yields, and high specificity of biochemical reactions, and how efficient transport processes operate. It is through the study of the biochemistry, physical chemistry, and biophysics of nucleic acids that we came to study the general questions raised here. Indeed, we described methods coupling chemical reactions and phase transitions that increase the rates of nucleic-acid hybridization and cyclization by many thousand-fold. These investigations fostered our initial interest in heterogeneous biochemistry and in a principle of fine division underlying biochemical fitness. This principle of fine division manifests itself through complementariness, heterogeneity (such as compartmentation and phase heterogeneity), anisotropy (such as molecular chirality [53]), and fine division of time. It can be understood as an extension of the Carnot principle for heat engines [27] to other sources of energy.
Chemical catalysis (a concept introduced by Berzelius and developed by Ostwald, Fischer, and Pauling, among others) also involves the principle of fine division, as it operates through a tight, supramolecular complementariness between the catalyst and a transition-state conformation occurring in the chemical reaction. The most general definition of a catalyst is that of a quasi-invariant of a reaction, able to turn over, transforming a virtual process allowed by the laws of thermodynamics, but which will not necessarily occur, into a phenomenon observable within a constant finite time. Both the biological process of reproduction and von Neumann's self-replicating automaton are instances of (auto)catalysis. Catalysis defines a cycle, conveniently drawn in the Krebs representation [54]. In a Carnot cycle, for instance, the motor itself is a catalyst; in another example, a carbon nucleus serves as the catalyst in the cyclic process of energy production in stars [55]. Catalysts contribute to fitness by increasing the rates of biochemical reactions and are thus necessary biological entities. Catalysis (as originally perceived by Berzelius [56]) brings together the fields of engineering, chemistry, and biology, illustrating a unity of knowledge combining the sciences of nature with the sciences of the artificial.
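The definition of a catalyst as a quasi-invariant able to turn over can be sketched schematically. The state representation below (a catalyst symbol, substrates transformed into lowercase products) is purely illustrative; the point of the sketch is only that the catalyst emerges from each cycle unchanged, so a single catalyst can process many substrates.

```python
# Sketch of a catalytic cycle: the catalyst binds a substrate, releases a
# product, and is regenerated unchanged at each turnover. Representation
# (uppercase substrate -> lowercase product) is illustrative only.


def catalytic_cycle(catalyst: str, substrates: list):
    """Run one turnover per substrate; the catalyst is regenerated each time."""
    products = []
    for s in substrates:
        bound = (catalyst, s)                    # transient catalyst-substrate complex
        catalyst, prod = bound[0], bound[1].lower()  # catalyst released intact
        products.append(prod)
    return catalyst, products


cat, prods = catalytic_cycle("E", ["S1", "S2", "S3"])
assert cat == "E"                  # quasi-invariant: regenerated at each cycle
assert prods == ["s1", "s2", "s3"]  # many turnovers from one catalyst
```

In this schematic sense, von Neumann's aggregate D is likewise a catalyst of its own reproduction: it emerges from each cycle unmodified, having turned out a new copy of E.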
The physico-chemical theory of life investigates the relation between motion and life. The identification of movement with life and stillness with death is ancient. The observation of the phenomenon of Brownian motion in microscopic studies by Robert Brown [57,58] renewed the question of this relationship. Unger and Dujardin distinguished Brownian motion from active cell motion, killing cells either by heating or through the addition of poisons [59][60][61]. Further questions are raised today by the study of biological motors.
Mitchell's chemiosmotic theory endeavors to unite transport and metabolism into a vectorial metabolism and introduces Curie's asymmetry principle into biochemistry [62]. Osmoenzymes (in the language of Mitchell [63,64]), now called biological motors, perform various types of mechanical work, improving, in particular, the transport of compounds beyond that permitted by Brownian motion; like catalysts, they are universal and necessary biological entities.
The physico-chemical theory of natural selection explains the extraordinary stability of the components of living organisms in the state of cryptobiosis. The necessary existence of specific macromolecules, heteropolymers, whether catalysts, motors or informational devices for the storage of hereditary information, can then be established deductively. Such approaches can contribute to explain the unique character of essential biological structures and functions, leading, in particular, to a clearer, more intuitive understanding of the structures of proteins and nucleic acids, described in a separate work [12].

Conclusions and Perspectives
We have tried, in the present work, to give a succinct account of our investigations on the foundations of biology. Our key conclusions are the following:
- Concerning the origin of necessity and contingency and their respective roles in biology and other disciplines. We have identified different sources of necessity: (1) in logic, where it is associated with deductive reasoning; (2) in symmetry considerations, where it is associated with asymmetry through Pierre Curie's principle and through symmetry in the construction of theories; (3) in the study of elements through the Fermatian inference and its use in atomism. We have shown that the concepts of necessity and contingency can be associated with those of symmetry and asymmetry, leading to a description of four philosophical attitudes that applies to all disciplines, and to a general definition of theories (or laws of nature) in terms of necessary asymmetries and compatible symmetries.
- The foundations of biology appear similar to those of other disciplines, being based on elemental phenomena, on pairs of complementary principles (of continuity and discontinuity or atomicity, of symmetry and asymmetry, of necessity and contingency, of parsimony and plenty, etc.), and on logic. We have seen, for instance, how the foundations of biology ultimately rely, through atomism, on an elemental logic. Maxwell's conclusion on the need to incorporate atomism in biological thought remains as timely today as it was almost a century and a half ago [65]. The foundations of biology provide this discipline with elements, laws of interaction, and theories, making possible a deductive formulation of the concept of natural selection and then of cells, genes, catalysts, and motors. They help to explain, in a plausible manner, the necessary and unique character of essential biological structures and functions.
- Our work supports the existence of a unity of knowledge, observed here in the common foundations of biology and other disciplines, which finds its main manifestation in a science of research (Leibniz's Ars inveniendi [66] or Peirce's Economy of Research [67]), describing methods shared by all disciplines. The belief in the unity of knowledge, common to students of methods such as Leibniz and Peirce, is shared by a large number of thinkers (for instance Schrödinger [68], E. O. Wilson [69], Eigen [70]), even in an age of increasing specialization. One can mention here the current interest in transdisciplinarity and convergence. Yet, the tree of knowledge also grows through its many disciplinary branches, the existence of which can be explained in terms of the increased efficiency associated with a division of intellectual labor, at the basis of an economical employment of thought. We find here again fundamental principles coming as complementary pairs: a principle of diversity expressing a pluralism coupled with a principle of unity, as well as the idea that efficiency is associated with a certain fine division, whether in the process of natural selection or in the economy of knowledge.
- Finally, we would like to comment on the obvious limitations and shortcomings of the present work. Every topic examined could deserve a more in-depth treatment. Major issues have been left out, either deliberately or because of sheer ignorance. We are aware of these necessary imperfections. They make room for further investigations.
Author Contributions: Text conceived, written, and reviewed by J.-L.S., A.B. and A.G. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.