1. Mathematics as the Language of Nature
At the very beginning of modern science, in the seventeenth century, Galileo Galilei, in his book Il Saggiatore (The Assayer) [1], wrote:
“Natural Philosophy is written in this great book that is constantly open before our eyes (I say the universe), but it cannot be understood without first learning to understand the language and to know the characters in which it is written. It is written in mathematical language, and the characters are triangles, circles, and other geometrical figures, without which it is impossible to humanly understand a single word; without these, it is a vain wandering in a dark labyrinth”.
Who wrote this book? Galileo was convinced that God was the author. In the subsequent centuries, other possibilities were proposed, but what remained almost universally accepted among scientists was that mathematics is the “natural” language of science. This conviction shaped the trajectory of scientific progress: different fields adopted mathematics to different degrees, from the a posteriori statistical evidence of biological phenomena to the abstract formalisms of theoretical physics, yet always under the same guiding principle that natural phenomena could be described once translated into a formal mathematical syntax. The key implication of this view was the rigorous separation between “syntax” and “semantics”. Mathematics was treated as a neutral syntax, a “formal system of signs and rules” capable of producing “logically consequent statements” independently of meaning, while semantics, the meaning and function of natural phenomena, was seen as an independent reality. Nature became legible only after being recast into mathematical syntax, and meaning was something derived afterwards. This methodological stance was extraordinarily powerful: it allowed prediction, generalization, and mechanistic explanation, but at the same time it constrained the way science conceived of the relationship between matter and meaning. It reinforced the idea that matter itself was mute, waiting for external syntax to give it intelligibility [2]. To simplify subsequent considerations, Table 1 provides a summary and brief explanation of the key concepts introduced in this perspective.
2. Reduction of Semantics to Syntax in Science
The reduction of semantics to syntax became particularly pronounced in the first half of the 20th century. It was driven by two converging developments: the advent of automatic computation and the discovery of the genetic code. The computational paradigm transformed continuous mathematical entities into discrete sequences of “0”s and “1”s, reducing complex phenomena to binary operations. In parallel, the diversity of biological forms and functions was increasingly interpreted as the “readout” of discrete sequences encoded in the four nucleotides of DNA. This perspective suggested a deterministic mapping from genotype to phenotype: the sequence of characters, in principle, entirely generated the observable spectrum of biological phenomena. Within this framework, the distinction between syntax and semantics appeared blurred: the DNA sequence (syntax) was assumed to contain all necessary instructions to produce the features of life (semantics) [2], while the unfolding of these features could be explained entirely through mechanistic and stochastic principles such as mutation and natural selection. “Chance”, filtered by the survival-of-the-fittest principle, became the author of “the great book”, and living systems were perceived as the passive expressions of prewritten sequences.
Yet this reductionist view was challenged even before the completion of the Human Genome Project. It became increasingly evident that a deterministic one-to-one relation between genotype and phenotype does not exist; the same genotype can give rise to multiple phenotypes depending on environmental, epigenetic, and stochastic factors. While biologists and clinicians had long recognized this complexity, it was largely ignored by the dominant geno-centric paradigm [
3]. The post-genomic era has therefore underscored the irreducibility of life to DNA sequences alone: phenotype emerges not solely from a static code but from the dynamic interplay between sequences, molecular networks, cellular contexts, and environmental inputs. According to Denis Noble, genes are not the blueprint for life, as an organism’s fate is not solely determined by its genome [
4]. Instead, life emerges from the complex and dynamic interactions among genes, environmental factors, and processes occurring at various biological scales [
4]. In line with these important considerations, Philip Ball announced the dawn of “new biology” that provides a systems-level view of life, emphasizing the interactions between multiple levels—genes, proteins, cells, and tissues—rather than a strictly gene-centric or mechanical one [
5]. This approach is known as systems biology, an interdisciplinary field that studies the complex interactions within biological systems as a whole, rather than focusing on individual components in isolation. It aims to understand the emergent properties of biological systems, such as cells, organs, and ecosystems, by analyzing the dynamics and complexity of how components interact, often using mathematical modeling and computational tools.
This historical shift in biology parallels developments in artificial intelligence (AI). In the late 20th century, AI research experienced a profound transition from the notion of intelligence as a strict symbolic manipulation of discrete rules, the so-called “Symbolic AI”, to the connectionist, sub-symbolic paradigm. Here, intelligence is no longer imposed externally through syntax but emerges spontaneously from the interactions among numerous simple processing units. As Paul Smolensky somewhat prophetically noted [
6], “mental categories, and frames or schemata turn out to provide approximate descriptions of the coarse-grained behaviour of connectionist systems.”
Again, in Smolensky’s words [6], “Connectionist AI systems are large networks of extremely simple numerical processors, massively interconnected and running in parallel”. In other words, sufficiently complex networked systems can give rise to behaviors and properties that we associate with intelligence, without pre-specified symbolic rules.
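Smolensky’s point can be illustrated with a toy sketch (a generic example of the sub-symbolic idea, not drawn from [6]): a single threshold unit with numerical weights learns the logical OR function from examples alone, via the classical perceptron rule; the “rule” is never written down symbolically but emerges in the weights.

```python
# A minimal connectionist unit: numeric weights, no symbolic rules.
# The perceptron learning rule adjusts the weights from examples until
# the unit's behavior matches the logical OR function.

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [0.0, 0.0]   # connection weights
b = 0.0          # bias (threshold)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                       # perceptron learning rule
    for x, y in data:
        err = y - predict(x)
        w[0] += 0.1 * err * x[0]
        w[1] += 0.1 * err * x[1]
        b += 0.1 * err

print([predict(x) for x, _ in data])      # learned OR behavior
```

Convergence is guaranteed here because OR is linearly separable; the interesting point is that the learned behavior is stored in continuous weights, not in an explicit rule.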
Thus, both in biology and in computation, we observe a fundamental departure from syntax-dominated thinking. The recognition that complex networks (whether of genes, proteins, natural or artificial neurons) can exhibit emergent, adaptive, and context-dependent behavior lays the foundation for the notion of an “Intelligence of Matter” [
7,
8]. Matter, organized in multi-scale networks, is no longer passive; it can process information, adapt, and, in a minimal sense, “learn” from experience.
Scientific progress is grounded in the rigorous application of the scientific method, which prioritizes empirical evidence and verifiable data over rhetoric or opinion. Central to scientific inquiry is the requirement that hypotheses be tested through observation and experimentation and that results be reproducible and subjected to peer review before being accepted as scientific knowledge. This process ensures that our understanding rests on a solid, evidence-based foundation.
Accordingly, the strength of modern science (particularly biomedical science) lies in the effectiveness of experimental work in terms of predictability, repeatability, clarity, and the accumulation of well-substantiated facts. A substantial body of literature highlights challenges in this process, ranging from the lack of reproducibility due to misunderstandings of statistical inference (see [
9,
10,
11]) to the creation of micro-paradigms that bias interpretation when an assumed “realism” of the link between an observed result and its meaning is either incorrect or highly uncertain (see [
12,
13]).
These problems arise from the simple fact that “theory-free,” perfectly objective data do not exist [
14]. Every step in the production of scientific results depends on theory-informed choices, the most evident examples being:
- Choosing descriptors (variables) for the statistical units in the dataset, guided by the specific question being addressed.
- Selecting inclusion criteria that determine which statistical units form the training set—the “working material”—for both hypothesis testing and hypothesis generation.
- Defining operative rules that establish the applicability domain of the results and the boundary conditions for the validity of any predictions.
- Selecting an appropriate structure-preserving, dimension-reduction method to eliminate redundancies while retaining the “stiff” information and filtering out the “sloppy” features that obscure the true meaning of the results. This step is particularly delicate in the study of complex systems (see [15]) and is closely related to the syntactic/semantic divide.
- Choosing an error-free, stable gold standard (Y-variable) that can act as a class label across experimental conditions and ensure unbiased evaluation of predictive performance.
To these fundamental points, we must add the challenge of identifying the most productive scale of analysis, i.e., the level at which a phenomenon should be examined to yield the most meaningful insights (see [
16]). Other often underappreciated issues include the applicability domain of measurement instruments and the nature of the noise affecting the data (e.g., multiplicative vs. additive noise). In short, every “fact” is saturated with theory.
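The dimension-reduction point above can be made concrete with a purely illustrative PCA sketch (a generic example with synthetic data; the descriptors are hypothetical, not from any study discussed here): two redundant, noisy descriptors collapse onto a single “stiff” component that carries nearly all the variance, while the residual “sloppy” direction contains mostly noise.

```python
import numpy as np

# Two descriptors that are both noisy readouts of one latent degree of
# freedom; PCA separates the shared ("stiff") direction from the noise.

rng = np.random.default_rng(42)
latent = rng.normal(0, 1, 200)                    # one true degree of freedom
X = np.column_stack([latent + rng.normal(0, 0.1, 200),
                     2 * latent + rng.normal(0, 0.1, 200)])

Xc = X - X.mean(axis=0)                           # center the data
cov = Xc.T @ Xc / (len(Xc) - 1)                   # sample covariance
eigvals, eigvecs = np.linalg.eigh(cov)            # eigvals in ascending order
explained = eigvals[::-1] / eigvals.sum()         # variance fractions, descending
print(f"stiff component: {explained[0]:.1%}, sloppy residual: {explained[1]:.1%}")
```

The choice of which directions to discard is exactly the theory-informed decision the text describes: the analyst must judge that the low-variance direction is noise rather than signal.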
It is also worth noting that all forms of knowledge are inherently reductionist in the sense that to study any phenomenon, we must narrow an otherwise unmanageable infinity of possible perspectives to a limited set of descriptors, an interpretive framework, and specific observations. Our own reductionist approach focuses on the emergent properties of complex systems that arise from their adaptive and memory capacities. This means that our “reduction” is grounded in the systemic nature of the phenomena under study, rather than in attempts to decompose the system into isolated components stitched together through linear, “if…then” causal chains.
3. The Geno-Centric Illusion and Its Unraveling
The multi-level organization of biological entities as “networks-of-networks” is self-evident. Proteins interact among themselves to give rise to organized metabolism, yet each protein (a single node in such a network) is itself a network of amino acid residues, whose coordinated motions allow systemic behaviors such as allostery. The same principle extends across the entire spectrum of organizational scales: from cells to tissues, organs, and ecological systems [
17]. Interactions at one level give rise to emergent behaviors at higher levels. This nested architecture suggests that manifestations of intelligence are not out of reach for matter itself.
Such a network perspective challenges the traditional view of information flow codified in the “central dogma” of molecular biology, i.e., a linear, unidirectional flux from DNA to RNA to proteins (and phenotypes). The rise of epigenetics and epitranscriptomics underscores that information in living systems is nonlinear and context-dependent. Epigenetics, for instance, explores heritable changes in gene expression that occur without altering the underlying DNA sequence. Chemical modifications to the DNA or associated histone proteins act as instructions, turning genes “on” or “off” and controlling cellular behavior [
18]. Importantly, these modifications are influenced by environmental and lifestyle factors [
19], can persist through cell division, and in some cases are inherited across generations [
20], providing a form of cellular memory and indicating that instead of being a Read-Only Memory (ROM) subject to change by copying errors and accidents, the genome is in fact “an intricately formatted Read–Write (RW) data storage system constantly subject to cellular modifications and inscriptions” [
21,
22].
The term “epigenetics” itself, coined by the developmental biologist Conrad Waddington in 1942, reflected this networked view as the “whole complex of developmental processes” that lie between “genotype and phenotype”. He described development as a concatenation of processes linked together, such that disturbances at early stages could cascade into widespread changes [
23]. The interest in this field rose strongly only in the 21st century [
24]. Modern epigenetics extends Waddington’s vision and elucidates how dynamic chemical tags, installed by “writers” (e.g., DNA methyltransferases, histone acetyltransferases), interpreted by “readers” (e.g., bromodomain-containing proteins), and removed by “erasers” (e.g., histone deacetylases, demethylases), regulate gene expression in an adaptive, context-sensitive manner [
25]. Transgenerational inheritance of these marks illustrates that cellular systems not only respond to environmental stimuli but also integrate these experiences into their configuration, effectively “remembering” past events [
26]. Epigenetic modifications and associated memory are central to the development of multicellular organisms [
27]; therefore, it is not surprising that the aberrant cellular memory originating from the deregulation and disruption of the epigenetic modification process is linked to the development of various diseases, including cancer [
28].
In 2012, a related dynamic phenomenon of post-transcriptional chemical modifications of nucleic acids was termed epitranscriptomics (also known as RNA epigenetics) [
29]. Similar to the epigenetic modifications of DNA, epitranscriptomic modifications are controlled by the interplay of specific RNA-modifying enzymes and RNA-interacting proteins (“writers”, e.g., the METTL3/METTL14 complex for m6A methylation; “erasers”, e.g., the fat mass and obesity-associated protein (FTO); and “readers”, such as the YTH domain proteins) [30]. These modifications are triggered by environmental cues, can be inherited across generations [31], and therefore represent another form of cellular memory. These and many other modern discoveries and observations provided the foundation for transforming the Modern Synthesis (the comprehensive theory of evolution that combines Charles Darwin’s natural selection with Gregor Mendel’s principles of genetics) [
32] into the Inclusive Biological Synthesis [
33]. This new paradigm, proposed in 2020 by Peter A. Corning, expands the traditional Modern Synthesis by incorporating additional evolutionary forces, such as developmental processes, epigenetics, inclusive inheritance (considering all heritable factors, not just genes), and niche construction [
33].
Both epigenetics and epitranscriptomics involve a fine-tuned set of interactions among different molecular players in order for long-lasting adaptive phenomena to emerge, but even single protein molecules exhibit behaviors consistent with minimal intelligence [
30]. Such molecular intelligence, long observed in allostery, signal transduction, and enzyme regulation, is now being harnessed in the nascent field of Intelligent Soft Matter (ISM). ISM sits at the intersection of materials science, physics, and cognitive science, with the aim of creating materials with life-like capabilities: perception, learning, memory, and adaptive behavior [
34,
35]. Drawing inspiration from biological systems, these materials use the intrinsic flexibility and responsiveness of soft matter to perform functions akin to cognitive processes [
31]. By studying and engineering matter that can “compute” through its own dynamics, ISM exemplifies how intelligence may emerge not only in living systems but in non-living, physical substrates as well when they reach sufficient complexity to support multiple equilibrium states.
4. Intelligence Emerging from Networks of Matter
According to evolutionary epistemology [
36], living systems are knowledge systems operating as entities capable of acquiring, storing, and utilizing information to survive and reproduce. The process of evolution represents the cumulative acquisition of knowledge through adaptation and natural selection over time. All processes by which knowledge is gained share fundamental, common features, regardless of the specific mechanism or domain [
36]. Therefore, intelligence, which is a product of evolutionary knowledge processes, can be defined “as the capacity of organisms to gain information about their environment, process that information internally, and translate it into phenotypic forms” [
37]. Based on these premises and on the fact that there are different forms of biological intelligence, Predrag Slijepcevic formulated the Distributed Biological Intelligence concept, the fundamental principles of which include [
37]:
- (i) Intelligence is a universal biological phenomenon that promotes the fitness of individual organisms and is required for effective interactions with their environments. Importantly, neural intelligence should not be considered the evolutionary norm, as animals constitute less than 0.01% of the total planetary biomass.
- (ii) The fundamental unit of intelligence is a single-celled prokaryote; all other forms of intelligence are derived from this basic structure. For example, Michael Levin, while introducing a Technological Approach to Mind Everywhere (TAME) framework for understanding and manipulating cognition in unconventional substrates, indicated that anatomical homeostasis can be viewed as the result of the behavior of the swarm intelligence of cells [38].
- (iii) Intelligence manifests across a range of complexity, from simple life forms, such as bacteria, to expansive, self-regulating systems like the Earth’s biosphere, also referred to as Gaia.
- (iv) The concept of “information” gains a new significance as information processing is fundamental to biological intelligence. All biological systems—from bacteria to the global ecosystem (Gaia)—are intelligent, open thermodynamic systems that continuously exchange information, matter, and energy with their surroundings.
- (v) The interaction between an organism and its environment functions as a cybernetic system. The environment influences the organism, causing changes; simultaneously, the organism’s responses to these changes have an impact on the environment, thereby influencing future organism–environment interactions.
We argue that the fundamental unit of intelligence is not a single-celled prokaryote; instead, the concept of “epigenetic information flux” offers a more minimal and fundamental representation of intelligence. The system does not merely react passively to environmental perturbations; it integrates the “experience” of a stimulus into its configuration, creating a form of cellular memory. For example, when cells encounter stress, chemical modifications to DNA or histones are established, retained, and sometimes propagated, which allows the cell to respond more effectively to similar future events. This adaptive memory is a functional hallmark of intelligence: the system learns from past events to modulate future behavior. Similarly, the epitranscriptomic information flux contributes to minimal intelligence. Post-transcriptional modifications of RNA, introduced in response to environmental cues, can persist within the cell and, in some cases, be inherited across generations. These modifications influence gene expression dynamically and contextually, forming a network of information that captures temporal and environmental history.
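The logic of such epigenetic memory can be caricatured with a minimal bistable-switch simulation (an illustrative toy model with arbitrary parameters, not any specific published mechanism): a self-reinforcing mark level retains the trace of a transient stimulus long after the stimulus is withdrawn, which is the functional signature of memory described above.

```python
# Toy bistable switch: a "mark" level x with cooperative positive
# feedback. A transient stimulus pushes x into the basin of a high
# stable state, where it remains after the stimulus is removed.

def simulate(stimulus_on, steps=20000, dt=0.01):
    """Euler integration of dx/dt = basal + stimulus + feedback(x) - decay*x."""
    x = 0.0
    for t in range(steps):
        s = 1.0 if stimulus_on(t * dt) else 0.0
        feedback = 2.0 * x**2 / (1.0 + x**2)   # cooperative self-reinforcement
        x += dt * (0.05 + s + feedback - 1.0 * x)
    return x

# Transient "stress" between t = 20 and t = 40, then removed.
never = simulate(lambda t: False)
pulsed = simulate(lambda t: 20.0 <= t <= 40.0)
print(f"no stimulus: {never:.2f}; long after a transient stimulus: {pulsed:.2f}")
```

The final state depends on the system’s history, not on its current input: a minimal, mechanistic analogue of “remembering” a past event.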
Beyond nucleic acids, single proteins also exhibit this minimal intelligence. Proteins often contain a combination of structured domains and Intrinsically Disordered Regions (IDRs), which together provide a balance between stability and flexibility. This structural duality enables proteins to respond adaptively to molecular cues, maintain functional identity while remaining sensitive to perturbations, and transmit information across molecular networks [
39]. Allosteric regulation exemplifies this principle: binding at one site induces conformational changes that propagate through the molecule, modulating activity at distant sites in a coordinated manner. In essence, the protein itself “computes” information about its environment and history.
An important illustration of organized molecular responsiveness rooted in protein-based networks is given by the concept of “molecular brains”: within the ribosome, ribosomal proteins form molecular-scale “neuron-like circuits” by establishing mostly permanent connections via long, disordered filaments and small, evolutionarily conserved interfaces that resemble “molecular synapses” [
40,
41,
42,
43]. These neuron-like networks were suggested to “innervate” the ribosomal functional centers, such as tRNA sites, the peptidyl transferase center (PTC), and the peptide tunnel [
40,
41,
42,
43]. Furthermore, Youri Timsit and S. P. Gregoire emphasized that such “ribosomal protein networks and simple nervous systems display architectural and functional analogies, whose comparison could suggest scale invariance in information processing” [
40]. Clusters of bacterial chemotactic receptors represent another illustration of molecular “protobrains” that, by integrating signals, produce the optimal cellular reaction and control bacterial motility in response to attractants or repellents [
44,
45,
46,
47]. Similarly, single-celled euglenoids possess graviperception and graviorientation, being capable of sensing their own cytoplasmic weight and modifying swimming activity [
48].
Plants were shown to possess behavior that can be described as “intelligent” [
49], as illustrated by the marine brown alga
Fucus, which was shown to sense at least 17 environmental conditions (e.g., gradients in temperature, osmotic pressure, light, pH, minerals (K+, Ca2+), other chemicals, solution flow, electrical fields, gases, and probably gravity) and then either sum or synergistically integrate this information to make choices and direct the orientation of growth accordingly many hours later [
50].
The shift from viewing matter as a passive substrate to recognizing its capacity for minimal intelligence transforms our understanding of computation in natural sciences. Computation is no longer confined to a “formal list of signs” that is, per se, devoid of any meaning. It emerges from the dynamic interactions within networks of molecules, cells, or materials. Matter, in this view, is not silent: it processes, stores, and responds to information, exhibiting adaptive behaviors that parallel learning and memory. This by no means implies that we should avoid translating the latent and, in its raw form, completely inaccessible intelligence of matter into formal mathematical language, nor does it suggest that we should settle for a merely “holistic” and ultimately uninformative description. On the contrary, research should be directed toward formal approaches capable of simulating the adaptive, “intelligent” behavior exhibited by matter. One example of this orientation is the use of self-organizing associative networks to model emergent behaviors [
51,
52].
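The flavor of such associative networks can be conveyed with a minimal Hopfield-type sketch (a generic illustration of associative recall, not the specific self-organizing models of [51,52]): patterns are stored in the connection weights by a Hebbian rule, and a corrupted cue relaxes, through the network’s own dynamics, back to the stored pattern.

```python
import numpy as np

# Hopfield-type associative memory: Hebbian storage, then recall by
# letting the network dynamics relax a corrupted cue.

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(2, 64))      # two stored +/-1 patterns

# Hebbian storage: sum of outer products, no self-connections.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)

def recall(cue, steps=10):
    s = cue.astype(float).copy()
    for _ in range(steps):                         # synchronous sign updates
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s

cue = patterns[0].copy()
cue[:10] *= -1                                     # corrupt 10 of 64 bits
restored = recall(cue)
print("overlap with stored pattern:", int(restored @ patterns[0]))
```

No rule “locate and repair the flipped bits” exists anywhere in the system; error correction is an emergent property of the weight matrix and the relaxation dynamics.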
The paradigm shift inherent in this line of reasoning lies in abandoning the assumption that mathematical models are “neutral” with respect to the phenomena they describe. Instead, we adopt a more naturalistic perspective—one that explicitly incorporates and tracks the intrinsic “intelligence” embodied in matter itself.
5. Ecosystems as Computational Reservoirs
This emergent perspective becomes even more striking at the level of ecosystems [
53]. Recent studies suggest that ecosystems process information in ways that resemble computation, using intrinsic dynamics to encode, transform, and respond to temporal data. In this context, the concept of Reservoir Computing (RC) provides a powerful framework. Unlike conventional artificial neural networks, which require extensive training of internal weights, RC harnesses the innate, transient dynamics of a complex medium, the “reservoir”, to map input signals into rich internal states [
53]. These states can then be read out to produce meaningful outputs, effectively encoding the history and context of environmental perturbations.
In ecosystems, RC manifests as Environmental Reservoir Computing (ERC). Here, the collective interactions among organisms, abiotic factors, and chemical cycles act as a natural reservoir, integrating information over time and responding adaptively to external stimuli. The intrinsic dynamics of the reservoir allow the system to retain memory, anticipate recurring perturbations, and optimize responses without centralized control or pre-specified algorithms. In this sense, the ecosystem itself performs computation, using its material and organizational properties to generate meaningful representations of its environment.
This principle represents a profound inversion of the traditional reductionist paradigm. In conventional science, semantics is derived from syntax: meaning is imposed upon matter through formal models, codes, or equations. In RC and ERC, semantics dominates syntax: the intrinsic organization and dynamics of the system generate adaptive, context-sensitive behavior, while formal representations serve only as an interpretive layer. Computation emerges not from abstract symbols but from self-organizing processes far from equilibrium, where the material properties of the system and networked interactions encode and process information. Such encoding requires the presence of metastable equilibrium states whose configurations represent mappings of environmental cues. The metastable nature of these states is essential, as it prevents the system from becoming trapped when the environment changes and demands a transition to a different configuration. Within this framework, computation corresponds to the system’s temporal progression through successive metastable states.
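The RC principle can be illustrated with a minimal echo state network (sizes, scalings, and the delay task are arbitrary illustrative choices): the recurrent “reservoir” itself is random and never trained; only a linear readout is fitted, yet the reservoir’s transient dynamics supply the memory needed to recall past inputs.

```python
import numpy as np

# Minimal echo state network: a fixed random reservoir maps an input
# stream into rich internal states; a linear readout trained by least
# squares reproduces the input from two steps earlier, i.e., the
# reservoir's transients act as memory.

rng = np.random.default_rng(1)
N = 100                                            # reservoir size
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # spectral radius < 1
w_in = rng.normal(0, 0.5, N)

u = rng.uniform(-1, 1, 500)                        # random input stream
target = np.roll(u, 2)                             # task: recall u[t - 2]

x = np.zeros(N)
states = []
for t in range(len(u)):
    x = np.tanh(W @ x + w_in * u[t])               # untrained reservoir update
    states.append(x)
X = np.array(states)[50:]                          # discard transient washout
y = target[50:]

w_out, *_ = np.linalg.lstsq(X, y, rcond=None)      # train linear readout only
error = np.sqrt(np.mean((X @ w_out - y) ** 2))     # in-sample RMSE
print(f"readout RMSE on delay-2 recall: {error:.3f}")
```

All the adaptive “work” is done by the untrained medium; the trained readout is only the thin interpretive layer, mirroring the syntax-as-interpretation role described above.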
From the perspective of scientific methodology, ERC and RC differ from Galileo’s “great book” in form, yet they are aligned in spirit: they aim to produce quantitative, intelligible representations of nature. The key conceptual shift is that, rather than imposing an external formal language onto matter, we now recognize that matter itself possesses computational and adaptive capacities for which no existing mathematics can yet fully account. Consequently, we must rely on simulation-based approaches, using artificial network systems whose dynamics mirror those of the natural systems under study. Although simulation is not equivalent to a full mathematical description, it can nonetheless generate valuable hypotheses about the principles that govern the intelligence of matter. In other words, we must return to careful observation of natural phenomena and use simulation techniques to extract meaningful insights, rather than forcing results into existing theoretical frameworks that may be inadequate or misleading. By studying these innate properties, we approach a deeper understanding of the natural language of matter, discovering that intelligence is not an abstract imposition but a property emerging from complex organization and dynamic interaction.
6. Conclusions: Toward a New Style of Doing Science
In this trajectory, the notion of an “Intelligence of Matter” ceases to be metaphorical and becomes an epistemic category for science itself. From the scale of single proteins that exhibit allosteric adaptability, to epigenetic networks encoding cellular memory, to ecosystems performing reservoir-like computation, we observe a unifying principle: matter, when organized in complex networks, expresses adaptive behaviors that cannot be reduced to syntax alone. These behaviors include memory, anticipation, context sensitivity, and plasticity, all features we traditionally ascribe to intelligence. This view reorients the scientific enterprise. If Galileo’s great book was written in mathematical symbols, the emerging perspective suggests that these symbols are not imposed from the outside but are already inscribed within the dynamics of matter. Mathematics here becomes not merely a tool of description but a lens through which we attempt to translate the computational and semantic capacities of physical and biological systems. In other words, the task of science shifts from reducing semantics to syntax, toward decoding the intrinsic semantics of matter itself.
Such a shift carries profound implications. First, it challenges the lingering reductionist dogma in biochemistry, genetics, neuroscience, and materials science, and encourages us to consider information processing as an emergent property of organization rather than a pre-programmed sequence of instructions. Second, it invites the development of new methodologies that bridge formal models with material intelligence: intelligent soft matter, adaptive biomolecular networks, or ecological computing may become laboratories where the cognitive capacities of matter are experimentally harnessed. Finally, it opens a philosophical horizon in which “intelligence” is no longer the prerogative of human-designed machines or biological nervous systems but a general property of organized matter far from equilibrium.
The rise of the intelligence-of-matter perspective may indeed represent not just a new field but a new style of doing science. It requires us to move beyond the symbolic manipulation of detached models and to engage with matter as an active participant in knowledge production. If successful, this style will not only enrich our understanding of life, cognition, and complexity but may also redefine the epistemological foundations of science, bringing us closer (paradoxically!) to Galileo’s vision: to read the great book of nature, now understood as written not only in symbols but in the very dynamics of matter itself.
Author Contributions
Conceptualization, T.T., V.N.U. and A.G.; methodology, T.T., V.N.U. and A.G.; validation, T.T., V.N.U. and A.G.; data analysis, T.T., V.N.U. and A.G.; investigation, T.T., V.N.U. and A.G.; data curation, T.T., V.N.U. and A.G.; writing—original draft preparation, T.T., V.N.U. and A.G.; writing—review and editing, T.T., V.N.U. and A.G. All authors have read and agreed to the published version of the manuscript.
Funding
T.T. acknowledges the Indian National Science Academy (INSA), New Delhi, India, for the INSA Associate Fellowship. V.N.U. acknowledges University of South Florida for the Collaborative Research Excellence and Translational Efforts (CREATE) award. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflicts of interest.
References
1. Galilei, G. Il Saggiatore; Giacomo Mascardi: Rome, Italy, 1623.
2. Longo, G. The systemic unity in Mathematics and Science: Beyond Techno-Science myths. Systems 2025, 13, 136.
3. Noble, D. A theory of biological relativity: No privileged level of causation. Interface Focus 2012, 2, 55–64.
4. Noble, D. Dance to the Tune of Life: Biological Relativity; Cambridge University Press: Cambridge, UK, 2017.
5. Ball, P. How Life Works: A User’s Guide to the New Biology; University of Chicago Press: Chicago, IL, USA, 2023.
6. Smolensky, P. Connectionist AI, symbolic AI, and the brain. Artif. Intell. Rev. 1987, 1, 95–109.
7. Shang, Y.; Lei, Z.; Alvares, E.; Garroni, S.; Chen, T.; Dore, R.; Rustici, M.; Enzo, S.; Schoekel, A.; Shi, Y. Ultra-lightweight compositionally complex alloys with large ambient-temperature hydrogen storage capacity. Mater. Today 2023, 67, 113–126.
8. Kaspar, C.; Ravoo, B.J.; van der Wiel, W.G.; Wegner, S.V.; Pernice, W.H.P. The rise of intelligent matter. Nature 2021, 594, 345–355.
9. Nuzzo, R. Scientific method: Statistical errors. Nature 2014, 506, 150–152.
10. Young, S.S.; Karr, A. Deming, data and observational studies: A process out of control and needing fixing. Significance 2011, 8, 116–120.
11. Ioannidis, J.P. Why most published research findings are false. PLoS Med. 2005, 2, e124.
12. Rzhetsky, A.; Iossifov, I.; Loh, J.M.; White, K.P. Microparadigms: Chains of collective reasoning in publications about molecular interactions. Proc. Natl. Acad. Sci. USA 2006, 103, 4940–4945.
13. Sadri, A. Is target-based drug discovery efficient? Discovery and “off-target” mechanisms of all drugs. J. Med. Chem. 2023, 66, 12651–12677.
14. Giuliani, A. System Science Can Relax the Tension Between Data and Theory. Systems 2024, 12, 474.
15. Transtrum, M.K.; Machta, B.B.; Brown, K.S.; Daniels, B.C.; Myers, C.R.; Sethna, J.P. Perspective: Sloppiness and emergent theories in physics, biology, and beyond. J. Chem. Phys. 2015, 143, 010901.
16. Pascual, M.; Levin, S.A. From individuals to population densities: Searching for the intermediate scale of nontrivial determinism. Ecology 1999, 80, 2225–2236.
17. Uversky, V.N.; Giuliani, A. Networks of Networks: An Essay on Multi-Level Biological Organization. Front. Genet. 2021, 12, 706260.
18. Dupont, C.; Armant, D.R.; Brenner, C.A. Epigenetics: Definition, mechanisms and clinical perspective. Semin. Reprod. Med. 2009, 27, 351–357.
19. Deans, C.; Maggert, K.A. What do you mean, “epigenetic”? Genetics 2015, 199, 887–896.
20. Lacal, I.; Ventura, R. Epigenetic Inheritance: Concepts, Mechanisms and Perspectives. Front. Mol. Neurosci. 2018, 11, 292.
21. Shapiro, J.A. Evolution: A View from the 21st Century; Pearson Education: London, UK, 2011.
22. Shapiro, J.A. How life changes itself: The Read-Write (RW) genome. Phys. Life Rev. 2013, 10, 287–323.
23. Waddington, C.H. The epigenotype. 1942. Int. J. Epidemiol. 2012, 41, 10–13.
24. Deichmann, U. Epigenetics: The origins and evolution of a fashionable topic. Dev. Biol. 2016, 416, 249–254.
25. Biswas, S.; Rao, C.M. Epigenetic tools (The Writers, The Readers and The Erasers) and their implications in cancer therapy. Eur. J. Pharmacol. 2018, 837, 8–24.
26. Bove, G.; Del Gaudio, N.; Altucci, L. Epitranscriptomics and epigenetics: Two sides of the same coin? Clin. Epigenet. 2024, 16, 121.
27. Henikoff, S.; Greally, J.M. Epigenetics, cellular memory and gene regulation. Curr. Biol. 2016, 26, R644–R648.
28. Moosavi, A.; Motevalizadeh Ardekani, A. Role of Epigenetics in Biology and Human Diseases. Iran. Biomed. J. 2016, 20, 246–258.
29. Saletore, Y.; Meyer, K.; Korlach, J.; Vilfan, I.D.; Jaffrey, S.; Mason, C.E. The birth of the Epitranscriptome: Deciphering the function of RNA modifications. Genome Biol. 2012, 13, 175.
30. Esteve-Puig, R.; Bueno-Costa, A.; Esteller, M. Writers, readers and erasers of RNA modifications in cancer. Cancer Lett. 2020, 474, 127–137.
31. Liebers, R.; Rassoulzadegan, M.; Lyko, F. Epigenetic regulation by heritable RNA. PLoS Genet. 2014, 10, e1004296.
32. Huxley, J. Evolution. The Modern Synthesis; George Allen & Unwin Ltd.: London, UK, 1942.
33. Corning, P.A. Beyond the modern synthesis: A framework for a more inclusive biological synthesis. Prog. Biophys. Mol. Biol. 2020, 153, 5–12.
34. Liu, K.; Tebyetekerwa, M.; Ji, D.; Ramakrishna, S. Intelligent materials. Matter 2020, 3, 590–593.
35. Baulin, V.A.; Giacometti, A.; Fedosov, D.A.; Ebbens, S.; Varela-Rosales, N.R.; Feliu, N.; Chowdhury, M.; Hu, M.; Fuchslin, R.; Dijkstra, M.; et al. Intelligent soft matter: Towards embodied intelligence. Soft Matter 2025, 21, 4129–4145.
36. Plotkin, H.C. Evolutionary epistemology and evolutionary theory. In Learning, Development and Culture: Essays in Evolutionary Epistemology; Wiley: Hoboken, NJ, USA, 1982; pp. 3–16.
37. Slijepcevic, P. Evolutionary epistemology: Reviewing and reviving with new data the research programme for distributed biological intelligence. Biosystems 2018, 163, 23–35.
38. Levin, M. Technological Approach to Mind Everywhere: An Experimentally-Grounded Framework for Understanding Diverse Bodies and Minds. Front. Syst. Neurosci. 2022, 16, 768201.
39. Tripathi, T.; Uversky, V.N.; Giuliani, A. ‘Intelligent’ proteins. Cell Mol. Life Sci. 2025, 82, 239.
40. Timsit, Y.; Gregoire, S.P. Towards the Idea of Molecular Brains. Int. J. Mol. Sci. 2021, 22, 11868.
41. Timsit, Y.; Bennequin, D. Nervous-Like Circuits in the Ribosome: Facts, Hypotheses and Perspectives. Int. J. Mol. Sci. 2019, 20, 2911.
42. Poirot, O.; Timsit, Y. Neuron-like networks between ribosomal proteins within the ribosome. Sci. Rep. 2016, 6, 26485.
43. Timsit, Y.; Sergeant-Perthuis, G.; Bennequin, D. Evolution of ribosomal protein network architectures. Sci. Rep. 2021, 11, 625.
44. Stock, J.; Levit, M. Signal transduction: Hair brains in bacterial chemotaxis. Curr. Biol. 2000, 10, R11–R14.
45. Stock, J.B.; Levit, M.N.; Wolanin, P.M. Information processing in bacterial chemotaxis. Sci. STKE 2002, 2002, pe25.
46. Bourret, R.B.; Stock, A.M. Molecular information processing: Lessons from bacterial chemotaxis. J. Biol. Chem. 2002, 277, 9625–9628.
47. Webre, D.J.; Wolanin, P.M.; Stock, J.B. Bacterial chemotaxis. Curr. Biol. 2003, 13, R47–R49.
48. Hader, D.P.; Hemmersbach, R. Graviperception and graviorientation in flagellates. Planta 1997, 203, S7–S10.
49. Trewavas, A. Plant Behaviour and Intelligence; OUP Oxford: Oxford, UK, 2014.
50. Gilroy, S.; Trewavas, A. Signal processing and transduction in plant cells: The end of the beginning? Nat. Rev. Mol. Cell Biol. 2001, 2, 307–314.
51. Smart, M.; Zilman, A. Emergent properties of collective gene-expression patterns in multicellular systems. Cell Rep. Phys. Sci. 2023, 4, 101247.
52. Carrasquilla, J.; Melko, R.G. Machine learning phases of matter. Nat. Phys. 2017, 13, 431–434.
53. Chiolerio, A.; Konkoli, Z.; Adamatzky, A. Ecosystem-based reservoir computing: Hypothesis paper. BioSystems 2025, 255, 105525.
Table 1. Summary of concepts introduced in this study.
| Term | Definition |
|---|---|
| Intelligence of matter | We cannot apply the classical definition of human intelligence—understood as insight (from the Latin intus legere, meaning “to read within”)—to material systems. Instead, we must restrict ourselves to a minimal form of intelligence shared by both biological and artificial adaptive systems, defined by their ability to map their environment and retain memories of past experiences. In a highly simplified form, this minimal intelligence can be viewed as a type of hysteresis: a situation in which the response of an experienced system to a given input differs from that of a naïve system encountering the same input for the first time. This property distinguishes an “intelligent” system from a mere sensor. |
| Intrinsic dynamics of matter | While the motion of an inert piece of matter (e.g., a sheet of paper blown by the wind) is entirely dependent on external forces, a complex material—such as a protein or a network of metabolic reactions—possesses intrinsic dynamics governed by its internal structural and kinetic constraints. Any external stimulus acting on such a complex system triggers a response shaped by these intrinsic dynamical constraints, leading to a fundamentally nonlinear stimulus–response relationship. |
| Far from equilibrium | Any system embedded in a continuous flow of energy is described as being “far from equilibrium.” Such a system responds to energy input by self-organizing, i.e., by dissipating excess energy through the formation of correlations among its constituent parts. The most basic and well-studied examples are the so-called convective cells: organized patterns of molecular motion that arise in a liquid subjected to an energy flux created by a temperature gradient (as in the familiar case of water boiling in a pot). |
| Semantics of nature | The term semantics is closely tied to the notion of meaning. For example, two very different character strings—“CANE” and “DOG”—refer to the same concept (the familiar domestic animal) in Italian and English, respectively. In natural systems such as cells or organs, the “meaning” assigned to external perturbations relates to the need to maintain proper functioning. Thus, the “semantics” associated with a bacterium entering the body corresponds to a perceived threat, which must be addressed through the activation of an immune response. |
| Emergence | We define a property as emergent when its appearance requires a minimum number of elementary components acting together. One of the simplest and most striking examples is a traffic jam: although traffic jams arise from the presence of vehicles, no single vehicle can produce one. A traffic jam emerges only when several conditions simultaneously reach critical thresholds—such as the number of vehicles, road capacity, and momentary reductions in speed. |
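The hysteresis criterion in Table 1 can be made concrete with a small sketch (our illustration, not code from any cited work): a memoryless sensor maps each input to an output by a fixed rule, whereas a minimally "intelligent" unit retains a trace of past stimuli, so an experienced unit responds to the same input differently than a naïve one. All names and parameter values here are hypothetical.

```python
def sensor(x, threshold=1.0):
    """Memoryless detector: output depends only on the present input."""
    return x >= threshold

class HystereticUnit:
    """Toy adaptive unit: each supra-threshold stimulus lowers the
    threshold, so past experience reshapes future responses."""
    def __init__(self, threshold=1.0, adaptation=0.2):
        self.threshold = threshold
        self.adaptation = adaptation

    def respond(self, x):
        fired = x >= self.threshold
        if fired:
            # Memory trace: the unit becomes more sensitive after firing.
            self.threshold -= self.adaptation
        return fired

naive = HystereticUnit()
experienced = HystereticUnit()
for _ in range(3):                # stimulation history for one unit only
    experienced.respond(1.5)

probe = 0.7                       # sub-threshold for a naive unit
print(sensor(probe))              # False, and always False for this input
print(naive.respond(probe))       # False: no history, threshold still 1.0
print(experienced.respond(probe)) # True: past stimuli lowered its threshold
```

The same input thus elicits different responses depending on the system's history, which is exactly what distinguishes the "intelligent" unit from the mere sensor in the table's definition.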
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).