Article

Humanities’ Metaphysical Underpinnings of Late Frontier Scientific Research

by
Alcibiades Malapi-Nelson
School of Liberal Arts and Sciences, Humber College, 205 Humber College Boulevard, Toronto, ON M9W 5L7, Canada
Humanities 2014, 3(4), 740-765; https://doi.org/10.3390/h3040740
Submission received: 26 May 2014 / Revised: 1 December 2014 / Accepted: 2 December 2014 / Published: 22 December 2014

Abstract:
The behavior/structure methodological dichotomy as a locus of scientific inquiry is closely related to the issue of modeling and theory change in scientific explanation. Given that the traditional tension between structure and behavior in scientific modeling is likely here to stay, considering the relevant precedents in the history of ideas could help us better understand this theoretical struggle. This better understanding might open up unforeseen possibilities and new instantiations, particularly concerning the proposed technological modification of the human condition. The sequential structure of this paper is twofold. First, the contributions of three philosophers better known in the humanities than in the study of science proper are laid out. The key theoretical notions interweaving the whole narrative are those of mechanization, constructability and simulation; they provide the conceptual bridge between these classical thinkers and the following section, where a panoramic view of three significant experimental approaches in contemporary scientific research is displayed, suggesting that their undisclosed ontological premises have deep roots in the Western tradition of the humanities. This ontological lock between core humanist ideals and late research in biology and nanoscience is ultimately suggested as being responsible for pervasively altering what is canonically understood as “human”.

1. Philosophical Precedents for the Scientific (Explanatory) Notions of Modeling, Mechanism and Simulation

1.1. Duns Scotus: Continuum between Physical Entities and the Divine Being

After several centuries during which Aristotle’s works were unavailable to Europe, the twelfth century witnessed a revival of Aristotelian philosophy. After a millennium of a Christian philosophy largely based upon Platonism—inheriting at times its relative contempt for the physical realm—St. Thomas Aquinas, a member of the newly formed mendicant Order of St. Dominic, embarked upon the major task of relocating the fundamentals of Christianity upon the newly rediscovered Aristotelian writings. This displacement created waves of tension throughout Christendom. There were deep concerns regarding the outcome of combining the Christian faith with a philosophy which, even while acknowledging the existence of the Divine, did so in a way that could forever remove it from human relational intimacy—Aristotle’s god was not a personal god, even less one that would become man. Aquinas was eventually vindicated by the Church, which three centuries later placed his Summa Theologica beside the Bible on the altar—the only book ever to enjoy such an honor—at the closing Mass of the Council of Trent, when it defined its position against the Protestant revolt.
One of the prominent figures who relentlessly attacked Aquinas’ ideas at the time was Blessed Duns Scotus, a member of the order started by Saint Francis of Assisi. Both men are Doctors of the Catholic Church (Angelicus—Aquinas—and Subtilis—Scotus), and their orders were founded within a few years of each other. The two orders embodied a profound reform of the Church at the time, accentuating poverty as a charism, and they quickly grew large all over Europe. During the lifetime of both philosophers, the mendicant feature of both orders gave increasing space to an emphasis on education, eventually founding universities with their distinctive character. The famed rivalry between the Franciscans and the Dominicans (the first being somewhat prone to heresy and the second to orthodoxy) has arguably been somewhat exaggerated, although it is widely recognized that each order characterized one of the two major flavors of European universities for centuries to come—one under the light of an Aristotelian pre-Modern realism (Dominicans), the other under a more Platonic take on reality, with heavy leanings towards mysticism (Franciscans) ([1], chapter 2).
Scotus was concerned about Aquinas’ understanding of concepts when these are ultimately predicated of God (divine attributes). For Aquinas, when we speak of God as being good, we actually mean this “goodness” in an equivocal way, not in the same way in which we use it when we refer to, for example, humans. The goodness of God can only be analogically related to that of humans—or of creation, health, etc. Aquinas clearly emphasized the transcendent nature of God, and even if there are cues in nature that can eventually lead us to Him (as in his famed Quinque Viae), we ultimately know more about what He is not than about what He is.
Scotus saw a problem right there. For him, negative knowledge was ultimately not knowledge at all. And this was only the beginning of the danger he saw in following Aristotle. He feared that if Aquinas had his way, the human capability of reaching God would be truncated, and ultimately metaphysics might even cease to exist.1 Furthermore, Aquinas would think of “being” as not necessarily entailing “existence” (God is, more than merely exists, since “existence”, being contingent, necessitates a “being” to give it its very “existence” in the first place). For Scotus this equivocality could have a catastrophic effect, given the nature of human intelligence. The primal object of the human intellect is “being inasmuch as being” (ens inquantum ens). That entails that if something is, it will necessarily fall within the realm of the intelligible. There is no being without existence. To affirm the contrary would allow for the possibility of a being beyond intelligibility, which could lead to the denial of the existence of the Ultimate Being. Hence for Scotus there is a semantic continuum in our predication of the attributes of a being—including God—for otherwise a being—including God—could be equivocally referred to, and thus removed beyond the grasp of our intellectual might. In his own words:
And lest there be a dispute about the name “univocation”, I designate that concept univocal which possesses sufficient unity in itself, so that to affirm or deny it of one and the same thing would be a contradiction. It also has sufficient unity to serve as the middle term of a syllogism, so that wherever two extremes are united by a middle term that is one in this way, we may conclude to the union of two extremes among themselves.
([3], p. 20)2
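As a schematic gloss on this criterion (mine, not Scotus’s own notation), consider the bare syllogistic form:

Every B is C; A is B; therefore, A is C.

The middle term B unites the extremes A and C only if it is taken in one and the same sense in both premises; if “B” shifted in meaning between them, the inference would collapse into the fallacy of four terms. Univocity, for Scotus, is precisely the unity that lets a concept do this mediating work.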
For Scotus, in predicating, say, the “cleanliness” of a man, we can refer to the moral aspect of his persona or to his physical, bodily realm—among other levels of qualified existence. So one can say that such a man is clean and not clean without committing a contradiction, given that one is applying cleanliness to different aspects, while still maintaining the univocity of being (univocatio entis) regarding cleanliness. The meaning of the noun (or adjective) is the same in both cases, but the variation occurs in the way or degree in which the notion is applied. When thinking of God’s attributes, “the extremes united by means of a middle term possessing such a unity are themselves united with each other”, so that we speak of Him as being good in an infinite way, but of creatures as being good in a finite, fallible way. The same is said of being: the radical opposite of nothingness is being—even if this being can be infinite (God) or finite (everything else). Regardless of the way in which being is opposed to nothingness, it is the same notion of being at work, whether applied to God or to His creatures. This ontological continuum3 should warrant our finite capacity for reaching the Infinite One. After all, our knowledge would differ from the Creator’s in degree, not in kind—pace Thomas Aquinas. This Scotian ontological bridge established between the physical and the non-physical would re-emerge much later under a technological umbrella, as we shall see in part II.

1.2. Francis Bacon: Mechanical Arts against the Epistemic Darkness Set by Original Sin

The pervading notion in Christendom of humans having been conceived in the “image and likeness of God”,4 one of the tenets of faith operating behind Scotus’ attack on Aquinas, evolved in increasingly liberal ways over the next two hundred years. The beginning of the sixteenth century signaled the formal (theoretical and then political) rebellion against the Church in Europe. Tradition has it that the Ninety-Five Theses on the Power and Efficacy of Indulgences were nailed to the front door of the castle church of Wittenberg by the Augustinian priest Martin Luther in 1517. This revolt eventually amounted to the biggest breakaway in the body of Christianity after the Great Schism of the eleventh century. Five hundred years later, the Protestant “reform” commands much scholarship;5 it is not my intention to address it here. What is relevant, however, is the evolution of the notion of imago Dei within the ranks of those who fostered the Scientific Revolution.
Although a plethora of non-religious circumstances led to the historic event, the reformers’ attack against the Church was symbolically triggered by the lack of clarification, on the part of the Church, regarding what is referred to in the English title of Luther’s manifesto, namely, “the power and efficacy of indulgences”.6 During the ensuing struggle between the reformers and Rome, the former gradually found scriptural passages that would allegedly justify their independence from the Hierarchy in particular and the Holy Orders7 in general. The Hierarchy (composed of the bishops, cardinals and patriarchs) is the only entity that can provide the sacrament of Holy Orders. Once a society gets rid of it, priests cannot be ordained, and the rest of the sacraments, which priests have the mandate to deliver, cannot be provided. We would be left with a sacrament-less Christianity, which is what the reformers were after—in order to render the Church effectively useless.
Martin Luther held a profoundly dark outlook regarding what remained of human nature after the Fall. Whereas for the Catholic Church this Fall deeply “corrupted” our nature, for Protestantism it irreparably “shattered” it. For Catholicism, this wounded nature could still be healed—via the sacraments—since the destruction was not complete. For Protestantism, we are so completely damned that only Christ—and He alone—can save us. However, our underlying nature remains ultimately broken. Simul iustus et peccator (simultaneously justified and sinner), Luther would proclaim.
This shift carried existential and epistemological issues. For fifteen centuries there had been a relatively unbroken certainty that the seven sacraments were specifically instituted by Jesus Christ for us to attain self-realization in this life and salvation in the next. That was never denied by the earlier Eastern Christian split of the eleventh century—the aforementioned Great Schism. The sacramental institution just “made sense”, having in mind the inescapable catastrophe depicted in Genesis 3. Until we accept the redemption given by Christ, which is literally effectuated via the reception of the sacraments, the Great Fall inherently dooms human existence—including our epistemic capacities. Thus the human sacramental participation in divine grace allegedly heals our fallen nature—allowing us to return to the “essence of humanity,” lost as a consequence of Original Sin.
Such was the Christian view of man until then. Now, however, given the post-Reformation absence of sacraments, an anxious inquiry regarding the scope and limits of human capabilities quickly set in. Prima facie, given the absolutely totalizing nature of the Fall, any hopeful human endeavor is destined to crash right from the start. Specifically, accounts of the now lost Adamian wisdom, which enjoyed pre-Fall super-human intellectual and cognitive capabilities, began to flourish. Scriptura Sola, now without the tutelary guidance of the Church and written in vernacular languages, allegedly gave accounts of the encyclopedic knowledge that Adam enjoyed, e.g., in calling creatures by their name. A veil of hopelessness seemed to descend over the possibilities of knowledge, and indeed over any significant outcome from human experience whatsoever [7].
The strongest voice to defy this looming prospect was arguably the one credited with having theoretically fostered the Scientific Revolution: Francis Bacon. Immanuel Kant tellingly opened his Critique of Pure Reason with a quote from Baron Verulam (Bacon’s honorific title). Kant does this in order to highlight Bacon’s role in the new science, having “made the proposal that partly prompted this road’s discovery, and partly—in so far as some were already on the trail of this discovery—invigorated it further” ([8], Bxii). These are strong words, and they find an echo in the copious scholarship on this author, the alleged articulator of the scientific method for early Modern science. One aspect of Bacon’s work that has arguably received less attention, however, was the main reason behind the construction of the new science in the first place—namely, the retrieval of the lost epistemic capabilities that humans were endowed with in virtue of having been created in imago Dei.8
Such relative absence in the scholarship is all the more surprising once we realize that “for Francis Bacon, the reform of learning was not a secular pursuit, but a divine mandate” ([9], p. 51). Indeed, Bacon did not mince words in pointing out that Original Sin catastrophically affected man’s intellectual capabilities, removing the mastery over nature that Adam once enjoyed. In The New Organon, he asserts that “[f]or man by the fall fell at the same time from his state of innocence and from his dominion over creation” ([10], p. 189). The allusion to Genesis 3 is clear in his understanding of the dramatic loss we underwent due to Adam’s fault. The Devil, represented in Genesis by a serpent, introduced darkness and confusion into the human mind forever. In The Great Instauration, the sharp decrease in our intellectual capabilities is sorely lamented by Bacon in a Trinitarian prayer, which asks for “illumination”, one of the Gifts pertaining to the Holy Spirit: “Lastly, that knowledge being now discharged of that venom which the serpent infused into it, and which makes the mind of man to swell, we may not be wise above measure and sobriety, but cultivate truth in charity” ([11], p. 74). A grim depiction of our epistemic panorama is indeed laid out.
However, there is a way out of this cognitive darkness. Again in the New Organon, after referring to both the loss of innocence and the loss of mastery over nature triggered by Original Sin, Bacon gives a glimpse of hope: “Both of these losses, however, can even in this life be in some part repaired; the former by religion and faith, the latter by arts and sciences.” ([10], p. 189). Our intellectual capacity is corrupted, that much is clear. Hence we need a way to put it in check, to rely on something solid. Reliance on our “natural lights” is out of the question, due to their inherited state of contamination. The way to get rid of the “idols”9 of the mind will thus entail a return to hard reality. However, not just in any way. The Greeks were faithful realists, but their appreciation of nature, even with powerful insights, did not produce growing knowledge. Their experience as great philosophers but not great scientists stands as a warning for us:
Signs also are to be drawn from the increase and progress of systems and sciences. For what is founded on nature grows and increases; while what is founded on opinion varies but increases not. If, therefore, those doctrines had not plainly been like a plant torn up from its roots, but had remained attached to the womb of nature and continued to draw nourishment from her, that could never have come to pass which we have seen now for twice a thousand years; namely, that the sciences stand where they did and remain almost in the same condition, receiving no noticeable increase, but on the contrary, thriving most under their first founder, and then declining.
([10], p. 113)
Thus there is an element missing. We indeed have to observe nature, but not as passive gatherers, accumulating data and speculating about possible connections and causes; rather, we must contrive situations that would force the studied objects to behave in such a way that our previously formulated questions could be satisfied. Immanuel Kant, who appropriately understood the Baconian dictum, captured well Bacon’s suggestion for pressing nature, since Bacon himself was allegedly inspired by the torturous questioning of witnesses in court—which often provides surprising and fruitful outcomes. In the words of Kant, “reason must indeed approach nature in order to be instructed by it; yet it must do so not in the capacity of a pupil who lets the teacher tell him whatever the teacher wants, but in the capacity of an appointed judge who compels the witness to answer the questions that he puts to them” ([8], Bxiii).
Alluding to the Ancient Greeks once more, this time specifically to Aristotle, Bacon pointed out the crucial difference it makes to “push” nature into fitting inquiring categories, declaring that “in the business of life, a man’s disposition and the secret workings of his mind and affections are better discovered when he is in trouble than at other times, so likewise the secrets of nature reveal themselves more readily under the vexations of art than when they go their own way” ([10], p. 130). The Greek experience, in contrast…
is the opposite of what happens with the mechanical arts, which are based on nature and the light of experience: they (as long as they find favor with people) continually thrive and grow, having a special kind of spirit in them, so that they are at first rough and ready, then manageable, from then onwards made smoothly convenient by use—and always growing.
([10], p. 113)
One cannot help but stop for a moment to admire the brilliance of the suggestion. Our fallible intellectual capabilities become anchored in physical substrata, neither imposing their faults on nature nor letting themselves be steamrolled by nature’s might. Instead, a reciprocal relation is formed, whose results make themselves noticed in a growth of knowledge. When this constructive conflict is not present in the inquiry, the fate of the Greeks stands as a warning. The “mechanical arts” provide a key element of solidity, whose determinism (in so far as they are mechanical) acts as a reliable platform for our otherwise weakened reason to perform its questioning tasks. The notion of the experimentum crucis, which constituted a methodological pillar for subsequent developments in science, served as a basic element upon which much scientific thinking regarding reliable parameters for experimentation was built. However, one could be misled regarding the ultimate aim of this historical progress if one failed to take into consideration Bacon’s primal reason behind his proposal to reshape scientific inquiry:
[I]t is not the pleasure of curiosity, nor the quiet of resolution, nor the raising of the spirit, nor victory of wit, nor faculty of speech, nor lucre of profession, nor ambition of honour or fame, nor inablement for business, that are the true ends of knowledge; some of these being more worthy than other, though all inferior and degenerate: but it is a restitution and reinvesting (in great part) of man to the sovereignty and power (for whensoever he shall be able to call the creatures by their true names he shall again command them) which he had in his first state of creation.
([12], p. 188)
The Protestant Reformation’s role in the Scientific Revolution pivots around a concerted philosophical effort to overcome the epistemologically crippling effects introduced by Original Sin—effects only brought to the fore after the epistemic “antidote” (the sacraments) was no longer available. Baron Verulam’s intention behind his revolutionary proposal is laid out clearly: to employ the “new science” as the way out of the cognitive darkness inherited from our sinful first fathers. Bacon’s longing for an Adamian nature that would excel in epistemic abilities—now lost—might finally find rest, thanks to the restitutional capabilities of the newly formed “mechanical arts”. Experimentation that fostered mechanical explanation became the defining signature of early Modern science—epitomized in the great names of Galileo, Newton, Boyle and even J. C. Maxwell at some point.10 However, the set of interrogations that would construct the backdrop for the devised experiment would all constitute, once the results are obtained, a model of a portion of reality—a phenomenon’s identified patterns of reaction to our experimentation. The nature of this model itself generated inquiries, and its constitution, aims and even struggles were pondered in the centuries to come. The aspect of constructability that a mechanical model inherently possesses would go on to inform realms that were perhaps not foreseen by Bacon himself—as we will see below. First, however, let us explore one more instance of the evolution of this “mechanical epistemology”. This time it came from one of Bacon’s most fervent admirers, two centuries later.

1.3. Giambattista Vico: Secular Creatio and Constructability as Criterion of Truth

Canonically known for La Scienza Nuova (a treatise on how to lead nations that was later to be regarded as the foundational text for the philosophy of history), Giambattista Vico made clear in his early writings how dramatically important Bacon’s insights were for his own era. In an autobiography where the Italian philosopher refers to himself in the third person, he speaks of his discovery of Bacon. He recounts that “from his De augmentis scientiarum Vico [himself] concluded that, as Plato is the prince of Greek wisdom, and the Greeks have no Tacitus, so Romans and Greeks alike have no Bacon” ([14], p. 139). Vico, himself a Catholic, castigated Bacon’s infamous vicious attacks against Rome, but trying to balance things out, he gave the hostile Anglican philosopher his fair due, saying of him that “without professional and sectarian bias, save for a few things which offend the Catholic religion, he did justice to all the sciences” ([14], p. 139). Indeed Vico acknowledged Bacon’s original contribution to scientific methodology, which would itself later reverberate into an augmentation of empirical knowledge.
Vico’s metaphysics, developed years before La Scienza—and upon which the latter was eventually based—had a strong philological accent. He advanced the argument that, in the case of the Latin language, key linguistic terms had been formed out of philosophical insights rather than vulgar trades—indeed “derived from some inward learning rather than from the vernacular usage of the people” ([15], p. 37). Vico engaged in etymological archaeologies to bring into the open the deep meaning of certain metaphysically dense concepts. By way of guiding examples, he brought up the verb intelligere, which means both “to read perfectly” and “to have plain knowledge”; the verb cogitare, which means “to think” and “to gather”; and the noun ratio, which means both the recognition of mathematical proportions and man’s endowed reason. Most tellingly, verum and factum were at the time interchangeable (convertible), as is shown in the Latin adage verum esse ipsum factum: “the truth is what is made, itself”—or, the truth is precisely what is done.11 This assertion carries some immediate epistemological and ontological consequences. For starters, the only one who can fully know is God, for He—and He alone—creates His own object of knowledge.
Before casting Vico’s etymology-based metaphysics in a negative light, one should be reminded that he is not alone in this understanding of divine knowledge as the only one which both perfectly knows something and creates precisely that “something” while knowing it. Indeed Vico’s views on this are not dissimilar from those of probably the most representative philosopher of the Modern Era. Immanuel Kant, roughly a contemporary of Vico, very clearly ascribed the faculty of “intellectual intuition” to God alone. For Kant, we are endowed with “sensible intuition” (space and time being its pure forms); our faculty is passive, receiving raw sensible material from the thing as it is presented to us—all of which constructs the uniquely Kantian understanding of “experience”, which will then be ordered by our intellect. This lies at the core of the “Transcendental Aesthetic” in the Critique of Pure Reason. Our intuitus derivativus does not create the object of our intellect. In contrast, in the case of God’s intuitus originarius, to think about something is to create that something:
Our kind of intuition is called sensible because it is not original. I.e., it is not such that through this intuition itself the existence of its object is given (the latter being a kind of intuition that, as far as we can see, can belong only to the original Being)…intellectual intuition seems to belong solely to the original Being, and never to a being that is dependent as regards both its existence and intuition (an intuition that determines that being’s existence by reference to given objects).
([8], B72)
Vico’s take on truth as convertible with that which is built is mindful of Bacon’s emphasis on our need to rely on experimentation for extracting answers from nature. However, the line of thought that goes from Vico’s claim regarding the verum factum to Bacon’s need for experimenting with reality needs some explanation. God knows everything totally and perfectly because all things that constitute this “everything” are already in Him; humans, on the other hand, are external to these things—we have a superficial grasp of them, since they are not internal to us:
God reads all the elements of things whether inner or outer, because He contains and disposes them in order, whereas the human mind, because it is limited and external to everything else that is not itself, is confined to the outside edges of things only and, hence, can never gather them all together.
([15], p. 46)
This nature of ultimate knowledge has a bearing on science “in general”, namely, both for God and for humans. Science is “knowledge of the genus or mode by which a thing is made; and by this very knowledge the mind makes the thing, because in knowing it puts together the elements of that thing” ([15], p. 46). Scientific knowledge, thus, implies a creative process. One knows in the very act of creating. Again, only God fully knows, because He creates. Having said that, there is a realm where we do in fact fully know, creating the very object that we know: mathematics. “Just as he who occupies himself with geometry is, in his world of figures, a god (so to speak), so God Almighty is, in his world of spirits and bodies, a geometer (so to speak)” ([16], p. 66). It is in mathematical knowledge that we find the closest resemblance to divine cognition.
The situation is dramatically different when it comes to the understanding of the natural order, i.e., knowledge of physical things. Here, once again, both God and humans possess science, but ours is essentially different from His. We carry an inherent stain in our scientific endeavors regarding the natural realm—namely, our lack of the ability to “create” (available to God alone) and our subsequent need for abstraction. After all, “[s]ince human knowledge is purely abstractive, the more our sciences are immersed in bodily matter, the less certain they are” ([15], p. 52). So Vico identified the aspect we ought to emphasize while doing science in order to overcome this genetic limitation: “Just as divine truth is what God sets in order and creates in the act of knowing it, so human truth is what man puts together and makes in the act of knowing it” ([15], p. 46). Constructability—“putting together”—should be the first criterion of truth, and thus, of scientific success.
In this vein, Vico reminded us that “God knows all things because in Himself He contains the elements with which He puts all things together. Man, on the other hand, strives to know these things by a process of division” ([15], p. 48). This “process of division” quite literally fashions the essence of human science, which itself amounts to the “anatomy of nature’s works” ([15], p. 48). This “anatomy” goes beyond a mere figure of speech: we need to dissect, qua physician, the elements of nature, and then cognitively—not existentially, which is only divinely feasible—recreate the object according to our inquiry. Thus, when Vico refers to “dissection”, he is referring to experimentation! ([17], p. 103).
Vico’s indebtedness to Bacon is most explicit when he alludes to an incipient Modern scientific method. He asserted that “hypothesis about the natural order are considered most illuminating and are accepted with the fullest consent of everyone, if we can base experiments on them, in which we make something similar to nature” ([15], p. 52). Considering the ingeniousness of Bacon’s proposal noted above—namely, pressing nature to answer our own questions—we are actually anchoring ourselves in reality, not removing our minds from it. This critical reverence towards reality begets wisdom, which ultimately “in its broad sense is nothing but the science of making such use of things as their nature dictates” ([18], CXIV: 326). According to Vico, Bacon did nothing less than bridge our intellectual capacity with the natural world. However, it is Vico himself who carries the Baconian lesson to its fulfillment: the experimental reconstruction of a phenomenon under controlled conditions is just a first step towards the recreation of the object. True knowledge is acquired when the model becomes the modeled.

2. Undisclosed Ontological Commitments of Contemporary Scientific Modeling

2.1. Nanotechnology: Feasibility of the Viconian Re-Building of Reality

Duns Scotus proposed a “univocity of being” where our difference from the Divine would be one of quantity, not of quality. This view later morphed via Vico into the notion that true knowledge would imply the necessity of building. Kant himself asserted that God’s intuitive knowledge—His equivalent of the kind of knowledge generated by our “pure intuitions of sensibility”—would necessarily imply creation. Moreover, Bacon gave back to us the confidence in the possibility of knowledge (lost after Original Sin) by means of experimentation and the “mechanical arts”—mechanistic modeling that arguably fostered the Scientific Revolution.
It is Vico who most clearly stated, regarding the essential reliance of knowledge upon the best possible constructed model, that “Verum et factum convertuntur”: the True and the Made are convertible—or in more colloquial language, “if you know it, build it”. Of course, only a Creator could fully know His own creation. However, since the Scotian ethos holds that humans can epistemically share His Light in an infinitely lower but real degree, we can in principle also know under such light. This emphasis on constructability as a criterion of truth (and also as part and parcel of a critique of Cartesian epistemology) has inspired later thinkers more concerned with the tractability of physically embedded problems.
Vico’s prescient Verum Factum seems to neatly link the Univocatio Entis and Imago Dei views into a zenith of knowledge attainment that drives an important aspect of contemporary scientific research. Vico’s claim about the future “certain sciences” is telling: “The most certain sciences are those that wash away the blemish of their origin and that become similar to divine science through their creative activity in as much as in these sciences that which is true and that which is made are convertible” ([15], p. 52). It should thus not come as a surprise that one can find in contemporary times a cutting-edge research laboratory named “The Giambattista Institute of Cybernetics and Applied Epistemology” in the Netherlands. In fact, the whole rise and demise of cybernetics12 could be understood as having suffered such a fate due to cybernetics’ legendary emphasis on “embodiment” as the core of its scientific methodology—itself deeply rooted in the drive behind “Modern”13 scientific practice. The tension between, on the one hand, a theoretical structure per se, and, on the other, the physical anchoring of such a theory, becomes all the more interesting once one notices how contemporary science still understands its explanatory task in fairly Modern terms—in other words, roughly mechanistically.14 These values are indeed essential to the core of the Western tradition, as expressed in the longing for a divine knowledge that entails the possibility of radically transforming (modifying, fixing and improving) a creation given to us for our disposal—ourselves included. Such an impetus for control and re-construction is ever more present with us, and in tangible ways.
In 1945, the Manhattan Project physicist Richard Feynman, better known later for his Nobel Prize-winning research in quantum electrodynamics, refused a teaching invitation from Princeton University—despite having earned his doctorate there in 1942. Feynman eventually accepted a teaching position at Caltech, where, in 1959, he delivered a talk that would go down in history as the first to address the possibility of manipulating reality at a level where the transformation and recreation of objects becomes scientifically feasible. The one-time lecture, entitled “There’s Plenty of Room at the Bottom” [22], remained, however, largely unknown until Eric Drexler, a doctor of engineering from M.I.T., found it and brought it to wide attention almost three decades later with his book Engines of Creation: The Coming Era of Nanotechnology [23]. Although the term “nanotechnology” had been coined a decade earlier by Norio Taniguchi (Tokyo University of Science), it was after Drexler’s book that the notion started to gain traction in the engineering and—later—the science community. The journal Nature now has a permanent section dedicated to nanoscale15 research.
Canonically, nanotechnology aims at the mechanical manipulation of matter at the molecular level—literally, atom by atom. This level of reality is allegedly of utmost importance, since matter can still be “classically” understood under a Newtonian light and rearranged without incurring quantum indeterminacy.16 Since a physical thing owes its nature to its particular atomic arrangement (one modifies this arrangement and ends up with another thing), the possibilities immediately foreseen are arguably staggering. In principle, an eventual “molecular assembler” could transform any physical entity into another physical entity by reordering its atomic structure. Such an envisioned feat would obviously transform the world as we know it in radical ways, replacing its economy, resetting global health, and trivializing some deep problems—such as the possibility of attaining intelligence and life artificially (now amenable to being duplicated and enhanced, instead of “found” or “created”).
Even if by consensus the field is still in its infancy, the first decade of our century witnessed some relatively important achievements, which keep fostering interest in the field in a gradual but relentless way. Feynman’s explicit proposal, even if only theoretical, tried to garner attention towards that “untreated” level of reality in the midst of the Cold War. He predicted that research performed at a nanoscale17 would have as beneficiaries, among other fields, microscopy and computation. Indeed, only six years later, in 1965, Gordon Moore would proclaim what would henceforth be known as “Moore’s Law”: every 18–24 months the number of transistors that could fit in an integrated circuit would double. Thereafter, somewhat predictably, two cornerstone achievements of nanotechnology occurred in an area shared between computation and microscopy. In 1981 two IBM scientists constructed the first “scanning tunneling microscope”, which allowed humans, for the first time, to actually see individual atoms. Eight years later, in 1989, another two IBM scientists rearranged 35 individual atoms at will, forming the logo “IBM”. This last achievement showed that what nanotechnology was aiming for—the mechanical manipulation of individual atoms—was in fact feasible [24].
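Expressed as a simple formula (a gloss on the claim rather than Moore’s own formulation), the law amounts to exponential growth in transistor count:

N(t) = N0 · 2^(t/T),

where N0 is the initial transistor count and T, the doubling period, lies between 18 and 24 months.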
At the turn of the millennium, the U.S. National Nanotechnology Initiative was founded, receiving its first funding during the administration of President Bill Clinton. Quickly realizing nanoscience’s potential for multilayered, pervasive and disruptive consequences, in 2004 the European Union produced a document entitled Towards a European Strategy for Nanotechnology [25]. There the European community made clear its preferred emphasis on the possible benefits for society as a whole—rather than on the human individual, as the American initiative would suggest. The same year, the United Kingdom’s Royal Society released the report Nanoscience and Nanotechnologies: Opportunities and Uncertainties [26]. In this report Great Britain called for a focus on investigating the possible toxic side effects of doing research at the nanoscale. By 2005, further manipulation of individual atoms was accomplished, this time by means of constructing a nanoscale “car”. A team led by Professor James Tour (Rice University) put together four fullerenes18 united by a “frame” composed of hydrogen molecules. The experiment was designed to find out in which way fullerenes move across materials. When the surface (the “road”) was warmed up (e.g., to 200 degrees Celsius), the diminutive cars began to run over the road at relatively “high speed”.19 The idea of a nano-machine, indeed proposed by Feynman as a future possibility, seemed to be closer to reality.
One needs to stop for a minute and consider the import of these advances. Indeed, they pertain to the very heart of scientific practice tout court. Vico’s aim of an absolute recreation of the studied object (the model) as the only—or at least the best—signature of true knowledge has certainly been at the heart of the scientific mind since the beginning of Modern Science. Indeed the case could be made that such an aim is the essence of what is expected of Bacon’s experimentum crucis. Unfortunately, due to the very nature of reality—let alone the nature of human cognition—all we could aim for was the best possible description of a behavior manifested by a certain phenomenon or object. Furthermore, when the mapping of a complex system’s behavior triggers an explosion of variables, the resulting model seems to face imminent demise. There comes a point where the range of possible behaviors is just too immense to be captured and articulated. It would seem that at that moment, a piece-by-piece mapping of the structure of the complex object itself becomes more feasible. However, again, in light of the nature of reality and human cognition, this has proven to be a practically unattainable feat, due to the staggering complexity of such structure—which is what made scientists focus on behavior in the first place (e.g., behaviorism’s reduction of the human person to a black box).20 Ultimately, the endeavor of mapping and rebuilding the actual structure of an entity—a.k.a., attaining truth—was from the start inherently beyond our reach.
But now the epistemic hope of knowing the object in an almost divine way21 seems to be amenable to physical realization. The Viconian horizon of literally re-creating our object of study as proof of true knowledge could finally be within reach. Taking us closer to ultimate knowledge and control, the constructed model could become so close to the modeled object that any distinction would be rendered trivial. Vico’s dictum of “if you know it, build it” seems to be close to actual instantiation at a qualitatively distinct level. After all, the famous—if unfulfilled—cybernetic dogma was that “the best material model for a cat is another, or preferably the same cat” ([30], p. 320). Reordering atoms one by one, indeed restructuring the very fabric of reality, would have sounded to Vico like the logical step towards the final understanding and mastery of nature.
Furthermore, the possibility of building nanomachines arguably surpasses Vico’s “verum factum” stance, connecting it with the profoundly ambitious Baconian project—a project that was recognized and praised by Vico himself. Not only can objects be partitioned atom by atom, but these parts can be rearranged so well that a working machine can arise. After all, that is what we see living organisms being composed of: countless nanomachines working in perfect synchrony, making up what is regarded as a living system. In fact, the sum of such realities constitutes the totality of biology. At this point, the distinction between the natural and the artificial has already collapsed. The reckoning of the building blocks of reality subsequently allows for their manipulation and arrangement all the way up to a living mechanical process. In nanoscience, both the organic and the inorganic are equally dissected, treated and put together at will.
Currently operating scientific epistemologies stemming from this development might profoundly affect our overall outlook regarding life, as the following section will show.

2.2. Synthetic Biology: Accomplishment of the Baconian Machinization22 of Nature

Scientists of the above-mentioned Nanotechnology Initiative have noted that most biological processes indeed happen at the nano level of reality. As examples, the width of a strand of the DNA helix is around 2 nm; the average size of a virus is 40 nm; a small bacterium’s size is 200 nm; and so on. The legacy inherited from the Scientific Revolution gradually deemed “natural” occurrences as mechanical in nature—to the chagrin of those who saw a machine and a living system as being fundamentally at odds.23 For instance, cybernetics took the mechanistic understanding of life to its ultimate workable and theoretical extreme. As such, it gave examples in nature that speak of clever mechanisms that must be underpinned by neatly working machinery. In cybernetic parlance, if nature was capable of accomplishing such a feat, we should logically be able to do it as well, since a machine by definition deems its type of materiality (or even lack thereof) irrelevant, relying strictly on its determinate behavior.24 One classically mechanistic case in biology occurring at the sub-cellular level is that of the ribosome, which strikingly resembles a factory assembly line—and which thereby gains the strange “honor” of being referred to as a “molecular assembler” ([23], Chapter 1).
Biologists have in recent years expressed interest in another instance in nature that offers an even more striking example of machinery functioning at the nano-level. The bacterial flagellum has been of interest to biologists, philosophers and even theologians for several reciprocally intertwined reasons, which could all be put under one conceptual umbrella. If rigorously situated in a historical context, the attraction exerted on the scientific community by this organism’s feature is indeed to be expected. What is curious is that, beyond scientists, some of the people behind this interest come from unlikely flanks. Thanks to progress in crystallography and nano-photography, we are now able to see as part of the organism what could only be referred to as an “off-board” motor. The flagellum itself, a tube 20 nm wide, is attached to a complex array of amino acids that make up a structure that not only resembles an engine, but also seems to be an engine. One can identify pistons, a camshaft, levers and an axle. The rotation produced (which allows the flagellum to act as a propeller) is analogous to the motion found in an engine: pistons moving an axle via the camshaft, which in turn makes an exterior propeller quickly turn. Endowed with this acquired motility, the bacterium can freely navigate through a liquid environment. The system seems to have found a way to achieve locomotion in so thoroughly mechanical a way that Keiichi Namba, from the Graduate School of Frontier Biosciences at Osaka University (and one of the world’s leading authorities on bacterial flagellum studies), asserts that
The structural designs and functional mechanisms to be revealed in the complex machinery of the bacterial flagellum could provide many novel technologies that would become a basis for future nanotechnology, from which we should be able to find many useful applications.
[36]
Once we have identified a naturally assembled motor, we have found proof that nano-motors are in fact possible. Nature now seems further disenchanted of any surrounding aura of mystery, thoroughly mechanized all the way down. For starters, once we recognize self-motion in an entity, we can enrich a view that understandably prefers to focus on the structure of the object (as seen above) by now considering the object’s behavior in a complementary manner. This behavior will no longer be modeled with the structure producing it “blacked out”. Instead, it will have a clear correspondence relation with the underlying structure. Indeed, we are facing something deeper than just a biomimetic avenue for future nanoscale research.
There are several issues to consider, starting with the very ascription of the notion of “machine” to the mechanism connected to the flagellum. In earlier times, when pictures of the structure were substantially blurred, there was debate regarding the possibility that we were anthropomorphizing whatever we were seeing. After William Paley, we are uneasily aware of the consequences of finding and proclaiming real machinery in nature [37]. However, once the images became strikingly clear, capturing the complete atomic structure of the area responsible for bacterial motility, little doubt remained regarding whether or not we were witnessing an actual natural motor. We indeed are.25 And that created another set of considerations.
It would seem that an evolutionary account of how the amino acid “pieces” of this engine are put together would not be satisfactory. Moreover, this evolutionary blind spot allegedly occurs at two levels. At first glance, it would seem that, since we are, after all, talking about a machine, we should be willing to accept some entailments. One of them is the claimed fact that a machine is constituted by “parts”, and that these parts cannot submit to an explanation dependent on evolution and genetic mutation as a whole. These pieces should have always existed as such, without evolutionary change. If the latter were the case (if a piece were to “change”), then it would not fit into the motor and would render it useless, similarly to what occurs when a gear changes its shape and ruins an engine. This observation would seem to trump the way in which evolution works.
What is behind this line of discourse is the falsificationist goal of finding an evolutionary niche which could not be explained away by “natural selection”—thus defeating Charles Darwin on his own turf, given that he famously conceded that “[i]f it could be demonstrated that any complex organ existed which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down” ([39], p. 146).
An answer to this observation has certainly emerged. There are instances where bacteria lack some of these amino acid “pieces” in the equivalent area that interfaces more directly with their environment. The bacteria that cause the bubonic plague, for instance, have their flagella attached to the body but not through a rotor. Instead, the structure is stiff and fixed, serving as a needle through which the bacteria transmit the contagious agent. This occurrence should point to a horizon where evolution can still play the ultimate explanatory role. In this case it would do so by somehow getting rid (supposedly via genetic mutation) of certain pieces of the “engine”, in order to serve another role in the survival of the entity. However, such a counterargument also has a rebuttal. It would seem that the bacterial motility system of a rotary flagellum is older than its fixed counterpart, hence rendering the latter an unlikely ancestor. To which it is still answered that the flagellum’s ancestor might have gone completely extinct, without a trace. The discussion goes on.
Some “Intelligent Design” theorists exploit this issue as a workhorse against what they refer to as a scientific tyranny of sorts exercised by Neo-Darwinism26 in American academia. Intelligent Design (ID), according to its proponents, is a valid scientific alternative to the widely adopted Neo-Darwinian standpoint. ID claims the allegedly verifiable fact that “design” (that which could only have been accomplished by an intelligent agency) is present in nature by default. It was always the case, its defenders would say, but now we get to see it more clearly thanks to technological progress that furthered science—such as crystallography—and not just via philosophical reasoning.27 They stop short of saying what kind of intelligence they are referring to, but since most of them seem to be practicing Protestant science professors,28 those who dispute this view—usually atheist philosophers29 and Catholic scientists30—accuse them of attempting to smuggle religious “creationism” into the realm of scientific discourse.31 At this point (when the religious convictions of those engaged in the debate are brought to the fore), a sort of justified ad hominem fallacy is invoked, and the whole environment of discourse morphs into what is known as the not so surreptitiously ongoing “Culture Wars”.32 It is in this context that ID advocates denounce the said “tyranny” of Neo-Darwinism, which allegedly monopolizes academic environments in meta-scientific (even political) ways, securing a pre-epistemic (anti-religious) and epistemic (atheist) attitude in culture and society.
To be sure, it is not the first time in history that a religious motivation has attempted to defeat a certain scientific view due to the supposed negative consequences that the latter would entail. Jesuit mathematicians (e.g., Giovanni Saccheri) were drawn towards the possibility of finding a geometry that would not be Euclidean, in order to dismantle both the Critique of Pure Reason’s Transcendental Aesthetic and the Prolegomena, so that Immanuel Kant’s challenge to metaphysics would be eroded. In similar fashion, ID supporters tend to be particularly prone to finding instances of machinery in nature, in order to advance their claims regarding the presence of “design” in the natural world. Allegedly, as indicated above, a machine is by definition constituted in such a way (not only in its being deterministic, but also in its being inherently constituted by an array of parts) that it defies “natural selection”, thus obtaining for us a sort of reverse Turing test for nature. If a natural entity cannot be accounted for by evolution—as is the case with a machine—then it has to be designed.
The debate continues, and it shows no signs of receding any time soon. Without attempting to reduce, minimize or dismiss it, one could argue that deep divergences regarding the different conceptualizations of notions such as “creation”, “design”, and most importantly, “machine” rest at the core of the conflict. Be that as it may, one feature particularly relevant for this section stands out. Of all the people involved in this debate,33 nobody seems to have a problem with one premise which, in the past, would likely have stopped the dialogue in its tracks. Everyone seems to take for granted that what we have before our eyes is, beyond any reasonable doubt, a machine. From a cybernetic reading of reality, there is considerable import in this assertion—or realization. Facing the bacterial flagellum, whatever vitalist or exceptionalist remnants of a metaphysical tradition were seemingly still at work in the most materialist thinkers at the time of cybernetics (the mid-twentieth century) seem to be extinct by the second decade of the twenty-first century. The metaphysical safety-bumpers seem to have worn off. Probably few of us today would have any trouble acknowledging that what we are witnessing is both a product of nature and a machine. A fully natural and fully machinal entity.34 This is probably the most typically cybernetic feature of all the ones we have—somewhat unwittingly—inherited from cybernetics proper.35
The Baconian mechanical arts that were to rescue mankind from the damning epistemic darkness set in by Original Sin—and which mechanized the physical universe after the Scientific Revolution they triggered—seem indeed to have provided successful explanatory mechanisms. To the extent that biology is being physicalized, it was just a matter of time until the line separating the artificial from the natural became blurred—via the articulate consideration of what a machine essentially is. As long as we have a scientific method in place with verifiable outcomes (experimentum crucis) oriented towards the augmentation of knowledge—and we do—Bacon would likely have approved.
However, the Culture Wars ironically give us yet another opportunity to witness novel approaches in scientific modeling—beyond the Baconian radical mechanization towards intentional modification. The all-important issue of simulation in science will be shown to emerge logically from the sometimes practically impossible frameworks that scientific experimentation requires. Its validation as a modeling methodology occurred within an important turn of events in the ongoing Culture Wars. This is the topic of the last section.

2.3. Simulation as Reality: Setting a Scotian Continuum between Immateriality and Physicality

During the fall of 2004, the public school of the town of Dover, Pennsylvania, introduced a new biology textbook [50] in which Neo-Darwinian evolution was presented alongside an “Intelligent Design” alternative account. Following a mandate from the school committee, a statement had to be read out loud by the biology teacher, referring clearly to the existence of both accounts, so that the students could evaluate and decide on their own. By the next fall, a number of concerned teachers and parents had filed a lawsuit against the school. The plaintiffs maintained that such a “strategy” was in fact introducing religion into science class at a public high school. What occurred next signaled the latest chapter in the conflict that has emerged several times out of the idiosyncratically American “separation of church and state”.
Expert witnesses were called to testify; several of those mentioned above testified on the subject.36 It soon became clear that what was at stake was the American legal take on the very identity of science. The religious motivations behind the Scientific Revolution—an event that could be understood, as per Francis Bacon, as a tension within Christianity aimed at overcoming a “genetic” ignorance set off by Original Sin37—were brought to the fore. The methodology of science, the so-called “scientific method”, took precedence. ID supporters claimed that Intelligent Design had a valid space in scientific discourse based upon this very method. They denied the interference of any preconceived theological element, relying instead on methodological deduction from a hypothesis and subsequent induction following the evidence. In this context, instances such as the above-mentioned bacterial flagellum were presented in detail, allegedly demonstrating in purely scientific fashion that there is evidence in nature for the existence of “irreducibly complex” machines—dynamical arrays whose perfectly fitting internal parts cannot be accounted for by evolution. As seen in the preceding section, these parts needed to have been “intelligently designed” to fit each other and function in tandem.
There was critical reaction against ID, pointing to the allegedly undisclosed feature indicated above—a secularly masked creationism. However, it soon became apparent that "evolution" had to be examined under the same judging scope. The supposedly undeniable evidence for the truth of evolution, so often taken for granted, had to be produced. How do we know, with the sharp certainty that only science could provide, that evolution is true? After all, ID was being attacked on those very grounds. Accordingly, U.S. District Judge John E. Jones, a Republican Christian, asked for the same sort of evidence that evolution advocates were claiming ID could not produce.
Since ID theorists almost make a living out of reciting by heart the claimed Darwinian flaws, evolution's blind spots were so severely criticized that its advocates at some point indeed struggled. Robert Pennock, a philosopher of science and expert witness in the trial on the side of evolution, came up with an alleged proof that neo-Darwinian evolution is not just "a theory", namely, something perfectly falsifiable and hence amenable to being discarded and superseded.38
Avida39 is a computer program that generates an "evolutionary" environment in silico. More specifically, the program is designed to create and maintain entities that compete for resources. Each has its own memory allocation and virtual central processing unit (CPU), and they are protected from each other. They have to evolve in such a way that they can gain access to time on the main CPU. These digital entities have the capacity to modify (re-program) themselves in order to reach a better fit for survival—not completely unlike sophisticated computer viruses that fight for their "lives" in domestic computing environments (a minimal sketch of such a digital-evolution loop is given after the quotation below). In Pennock's words, "it is clear that natural selection can be perfectly instantiated in A-life systems" ([52], p. 37). In case any doubt still lingers as to how to understand these words, he asserted that "this is not a simulation" of evolution. Rather, it is an "instance" of it. The program—or what it purported to show—did convince the judge.40 Thus the court declared evolution a fact. Materiality was not deemed necessary to accept the truth of evolution. Pennock further affirmed that:
While it is true that evolution of carbon-based organisms is the prototype of the concept, this historical fact is not a sufficient reason to limit its scope. Why be carbon-centric? It is the patterns of causal interactions that are relevant, not the particular material substrate. …the material substrate of the Darwinian processes should be irrelevant to whether we recognize something as an instance of Darwin’s evolutionary mechanism.
([52], p. 32)
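To give a rough sense of the kind of process Avida instantiates (without any claim to reproduce its actual architecture), the following is a minimal, hypothetical sketch of a digital-evolution loop: imperfectly self-copying "genomes" compete for a limited number of slots, and those that perform better on a rewarded task are given more chances to leave offspring. The target pattern, the mutation rate and the fitness-proportional selection scheme are illustrative assumptions of this sketch, not Avida's own mechanisms.

```python
# Minimal, hypothetical sketch of a digital-evolution loop (not Avida itself):
# self-copying "genomes" compete for limited slots, mutate on copying,
# and are selected by how well they match a rewarded bit pattern.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # stand-in for a rewarded computation
POPULATION_SIZE = 50                # limited "memory" slots
MUTATION_RATE = 0.02                # per-bit copy error

def fitness(genome):
    """Merit of an organism: how many bits match the rewarded pattern."""
    return sum(g == t for g, t in zip(genome, TARGET))

def replicate(genome):
    """Copy a genome imperfectly: each bit may flip with MUTATION_RATE."""
    return [1 - b if random.random() < MUTATION_RATE else b for b in genome]

def evolve(generations=200):
    # start from random digital organisms
    population = [[random.randint(0, 1) for _ in TARGET]
                  for _ in range(POPULATION_SIZE)]
    for _ in range(generations):
        # organisms with higher merit get proportionally more chances
        # to place offspring in the limited slots ("CPU time")
        weights = [fitness(g) + 1 for g in population]
        parents = random.choices(population, weights=weights, k=POPULATION_SIZE)
        population = [replicate(p) for p in parents]
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best genome:", best, "fitness:", fitness(best))
```

Even in such a toy setting, genomes matching the rewarded pattern tend to emerge without being explicitly designed, which is precisely the point Pennock's testimony leveraged.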
Thus even if the witnessed "evolution" happened entirely inside a machine, one can nevertheless now claim to have "proof"—at least in the eyes of the judge—that evolution is, plainly and simply, true. Facing this instance of the awkward relation between scientific practice and institutionality, one could object that a legal decision cannot make, let alone rule over, science. Such an objection would recognize in this court ruling, at most, a geo-culturally limited change of attitude regarding what is real and what is not within its own legal environment. Now virtual realities count as legal facts. However, once we factor in Pennock's statements, we can readily notice that this epistemological ethos has a theoretical and empirical background that goes well beyond contemporary legalisms—in fact, it is increasingly shared by salient scientific enterprises. Indeed, Pennock is by no means isolated in advancing these views, and the judge probably knew as much. Some recent examples are relevant.
In 2006, a group of scientists led by Klaus Schulten (University of Illinois at Urbana-Champaign) simulated an entire virus, modeling it atom by atom. Unsurprisingly, the study was published in the journal Structure. The satellite tobacco mosaic virus, one of the smallest in nature, had its approximately one million atoms "recreated" simultaneously. The effort required the sheer computing capability of one of the most powerful machines on the planet, housed at the National Center for Supercomputing Applications. Still, it took 100 days to bring to "life" only 50 nanoseconds of the "virtual" virus [53]. Later, in 2012, Markus Covert (Stanford University) and his team recreated a complete organism—a bacterium.41 This unicellular entity, Mycoplasma genitalium, is the living being with the smallest known genetic footprint: 525 genes. Remarkably, this time the entity recreated in silico underwent a reproductive process. Testifying to the kind of progress that computation can make in just six years, the virtual entity lasted 10 hours: The actual time the modeled organism takes to reproduce, and the time the constructed model took to do it as well. The constructed model and the modeled organism were operationally equivalent down to the level of their constituent molecular processes. Attaining this required a cluster of 128 interconnected computers. It was the first time a complete organism had been entirely reproduced inside a virtual environment [54].
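To illustrate, in the most schematic terms, what "atom by atom" recreation involves, the following is a toy, hypothetical sketch of the basic molecular dynamics idea: interatomic forces are computed from a potential, and positions and velocities are advanced in tiny time steps (velocity Verlet is a standard integration scheme). It is not the code used in either of the studies cited above; the handful of particles, the Lennard-Jones potential and the step size are illustrative assumptions, whereas production simulations of millions of atoms run on heavily optimized parallel software for months.

```python
# Toy sketch (illustrative only, not the codes used in [53] or [54]):
# a handful of Lennard-Jones "atoms" advanced with velocity Verlet
# integration, in reduced units.
import numpy as np

def lj_forces(pos, epsilon=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces acting on each particle."""
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r_vec = pos[i] - pos[j]
            r = np.linalg.norm(r_vec)
            # magnitude of -dU/dr for U = 4*eps*((sigma/r)**12 - (sigma/r)**6)
            f_mag = 24 * epsilon * (2 * (sigma / r) ** 12 - (sigma / r) ** 6) / r
            f = f_mag * r_vec / r   # repulsive at short range, attractive beyond
            forces[i] += f
            forces[j] -= f
    return forces

def velocity_verlet(pos, vel, dt=0.001, steps=1000, mass=1.0):
    """Advance positions and velocities; each step covers a tiny slice of simulated time."""
    f = lj_forces(pos)
    for _ in range(steps):
        pos = pos + vel * dt + 0.5 * (f / mass) * dt ** 2
        f_new = lj_forces(pos)
        vel = vel + 0.5 * (f + f_new) / mass * dt
        f = f_new
    return pos, vel

if __name__ == "__main__":
    # five "atoms" placed on a loose grid so no pair starts unphysically close
    positions = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 1.5, 0.0],
                          [0.0, 0.0, 1.5], [1.5, 1.5, 0.0]])
    velocities = np.zeros_like(positions)
    positions, velocities = velocity_verlet(positions, velocities)
    print(positions)
```

The disproportion between simulated time and computing time reported above (50 nanoseconds in 100 days) stems directly from this stepping: each step covers only a few femtoseconds, so tens of millions of force evaluations over a million atoms are needed to produce even a sliver of biological time.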
In all fairness to cybernetics, these scientific accomplishments were the stuff that cyberneticians' dreams were made of—albeit unattainable in their time, due to the lack of technological advancement regarding the manipulation of materials.42 However, cybernetics' penchant for actual (physical) model building,43 at least theoretically, was not entirely new—as we have seen above. Although it became, right from the start, a sort of defining feature of the cybernetic enterprise, the recreation of a studied object has deep roots in the Western tradition of philosophy, arguably reaching a peak with Vico's dictum of the interchangeability between object and model as the unmistakable hallmark of truth. However, the tension between the mandate for material model construction and the seemingly perennial temptation, present since Plato, to abandon all grip on physical reality might have proven deeper and more pervasive—and might have spelled, at the time,44 the demise of the cybernetic project.
It is ironic that a model existing solely in virtual reality would come to undermine the survival of Intelligent Design, since "ID clearly descends from the Franciscan side in accepting a univocal account of language" ([1], p. 107). Indeed, it was the Franciscan Duns Scotus who established a strong type of relation between physical entities and disembodied realms. This connection found an echo in Newton's idea of having access to God's mind. Reacting against Original Sin's epistemic darkness via Bacon's mechanical arts, we can recover that Adamic, divinely infused knowledge, instantiated in the only possible way that can show its divine origin: the radical (re)creation of objects—as Vico foresaw. This closes the Scotian circle, where the attributes of humans and those of God differ in degree, not in kind. This dynamic might entail a new context for thinking about the status of what we understand as a human entity. Specifically, a synthetic biology spearheaded by a profoundly mechanistic view of life—where the distinction between the natural and the artificial is blurred—might have found renewed strength thanks to the promise of a fundamental restructuring of nature via nanoscience.

3. Concluding Remarks

Whatever outcome may emerge regarding the status of "the human" from this deep restructuring of the technological impetus, it would be useful to realize that the ideas driving these late scientific outlooks and technological goals are not radically new—at least in spirit. Francis Bacon launched a mechanization of reality—successfully implemented by Isaac Newton—which shaped the way in which natural phenomena are scientifically dissected and systematized via an inter-operatively successful corpus of "explanatory mechanisms". For Giambattista Vico, the ultimate signature of true knowledge becomes the capacity to literally build the object of knowledge, and so the total mechanization of explainable phenomena had to eventually—but necessarily—occur. Nature becomes understandable due to its mechanical essence. Moreover, a machine, by definition, can be tinkered with, fixed and improved: Man can indeed come closer to his own potential. After all, as Scotian philosophy would suggest with a reasonable stretch, divine qualities were always present in us to some degree. The angelic features dormant in us might gradually—but finally—come to fruition. Indeed, the intellectual tendency towards disengagement from physical substrata in favor of an immaterial, "cleaner" and universal reality has been with us since Ancient Greece. It would seem that the force transcending embodiment towards non-materiality is finally allowing for a purification of the best possible mold. Once that abstracted container is filled up, it can be re-instantiated in physical reality—thus radically modifying it.
To repeat, this hub of technological outlooks might have important implications for the Western notion of what it means to be a human being. It is no wonder that mega-projects such as the so-called Nano-Bio-Info-Cogno (NBIC) Convergence,45 exploiting the possibilities opened up by methodologies of bio-mechanization, virtual simulation and nano-construction, have high hopes for an upcoming Renaissance 2.0 of sorts.46 A scientifically viable instantiation of a collective hope for a positively transhumanist [58] future seems to be in the making, regardless of whether the approach to its implementation takes a high-risk stance47 or one that advocates precaution.48 This dynamic of mechanization, purified abstraction and subsequent re-instantiation might indeed hold the key to our biological survival vis-à-vis a potentially radical ecological change—even if, in the process, the very notion of "human" gets trivialized all the way into oblivion.
The awareness that "human nature"—whatever we understand by it—would indeed not survive the advance of science has been a common sentiment among scholars in recent decades. It is as if we would have to give up on our philosophy in order to preserve our biology. If we remind them that the very forces behind these pervasive technological disruptions are deeply rooted in what prominent humanists proposed between the thirteenth and sixteenth centuries, the context for understanding this progress can be geared towards a more enriched and embedded approach—even if what we obtain as an outcome is beyond recognition. Taking into consideration the metaphysical tenets behind these disruptive technologies could further foster their advancement, while correspondingly cultivating the awareness that there is a hopeful, transcendent notion of what it means to be human—the very notion that inspired these Medieval and early Modern intellectual forces in the first place. The difference is that now we might be witnessing the beginning of the technological feasibility of this "next" human. It would then seem that the relevance of the humanities should be considered of utmost importance when facing a genuinely new mode of human existence: As a "biological citizen", aware of its "neurochemical self", and displaying "somatic expertise" with its own "biocapital".49 Indeed, the renewed fostering of the humanities might be necessary for our very survival. If one is to thoroughly articulate the underlying humanistic impetus behind pervasively disruptive technologies, the input of the humanities could not only make itself felt, but indeed guide the late scientific ethos of human alteration towards self-critical flourishing.

Acknowledgments

I would like to thank the external editor, Albrecht Classen, and the various anonymous reviewers for comments on earlier versions of this piece. Any persisting errors are entirely my fault.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Steve Fuller. Humanity 2.0: What it Means to be Human Past, Present and Future. Hampshire: Palgrave Macmillan, 2011. [Google Scholar]
  2. Immanuel Kant. Prolegomena to any Future Metaphysics. Indianapolis: Hackett, 2002. [Google Scholar]
  3. Duns Scotus. Philosophical Writings: A Selection. Indianapolis: Hackett, 1987. [Google Scholar]
  4. Aristotle. De Generatione Animalium. Translated by Arthur Platt. Gloucestershire: Clarendon Press, 1910. [Google Scholar]
  5. Elizabeth Eisenstein. The Printing Revolution in Early Modern Europe. New York: Cambridge University Press, 2012. [Google Scholar]
  6. Catholic Church. Catechism of the Catholic Church: Revised in Accordance with the Official Latin Text Promulgated by Pope John Paul II. Vatican City: Libreria Editrice Vaticana, 1997. [Google Scholar]
  7. Peter Harrison. “Original Sin and the Problem of Knowledge in Early Modern Europe.” Journal of the History of Ideas 63 (2002): 239–59. [Google Scholar] [CrossRef]
  8. Immanuel Kant. Critique of Pure Reason. Indianapolis: Hackett, 1996. [Google Scholar]
  9. Steven Mathews. Theology and Science in the Thought of Francis Bacon. Hampshire: Ashgate, 2008. [Google Scholar]
  10. Francis Bacon. “The New Organon.” In Selected Philosophical Works. Indianapolis: Hackett, 1999, pp. 86–189. [Google Scholar]
  11. Francis Bacon. “The Great Instauration.” In Selected Philosophical Works. Indianapolis: Hackett, 1999, pp. 66–85. [Google Scholar]
  12. Francis Bacon. “Valerius Terminus.” In The Philosophical Works of Francis Bacon. London: Routledge, 2011, pp. 186–205. [Google Scholar]
  13. Margaret Morrison. Unifying Scientific Theories: Physical Concepts and Mathematical Structures. Cambridge: Cambridge University Press, 2000. [Google Scholar]
  14. Giambattista Vico. The Autobiography of Giambattista Vico. Ithaca: Cornell University Press, 1963. [Google Scholar]
  15. Giambattista Vico. On the Most Ancient Wisdom of the Italians: Unearthed from the Origins of the Latin Language. Ithaca: Cornell University Press, 1988. [Google Scholar]
  16. Robert Miner. “Verum-factum and Practical Wisdom in the Early Writings of Giambattista Vico.” Journal of the History of Ideas 59 (1998): 53–73. [Google Scholar]
  17. Robert Miner. Truth in the Making: Creative Knowledge in Theology and Philosophy. London: Routledge, 2004. [Google Scholar]
  18. Giambattista Vico. The New Science of Giambattista Vico. Ithaca: Cornell University Press, 1988. [Google Scholar]
  19. Roberto Cordeschi. “Cybernetics.” In The Blackwell Guide to the Philosophy of Computing and Information. Edited by Luciano Floridi. Oxford: Blackwell, 2008, pp. 186–96. [Google Scholar]
  20. Jean-Pierre Dupuy. The Mechanization of the Mind: On the Origins of Cognitive Science. Princeton: Princeton University Press, 2000. [Google Scholar]
  21. Ilya Prigogine, and Isabelle Stengers. Order Out of Chaos: Man’s New Dialogue with Nature. New York: Bantam, 1984. [Google Scholar]
  22. Richard Feynman. “There’s Plenty of Room at the Bottom.” Engineering and Science 23 (1960): 22–36. [Google Scholar]
  23. Eric Drexler. Engines of Creation: The Coming Era of Nanotechnology. New York: Anchor, 1986. [Google Scholar]
  24. IBM. “IBM ‘atoms’.” Available online: https://www-03.ibm.com/ibm/history/exhibits/vintage/vintage_4506VV1003.html (accessed on 19 December 2014).
  25. European Commission. Towards a European Strategy for Nanotechnology. Luxembourg City: Office for Official Publications of the European Communities, 2004. [Google Scholar]
  26. Royal Academy of Engineering. Nanoscience and Nanotechnologies: Opportunities and Uncertainties. London: The Royal Society, 2004. [Google Scholar]
  27. Yasuhiro Shirai, Andrew J. Osgood, Yuming Zhao, Kevin F. Kelly, and James M. Tour. “Directional Control in Thermally Driven Single-Molecule Nanocars.” Nano Letters 5 (2005): 2330–34. [Google Scholar] [CrossRef] [PubMed]
  28. John von Neumann. “Letter to Wiener.” In John von Neumann: Selected Letters. Edited by Miklós Redei. Providence: American Mathematical Society, 2005, pp. 277–82. [Google Scholar]
  29. John von Neumann. “The general and logical theory of automata.” In Cerebral Mechanisms in Behavior: The Hixon Symposium. Edited by Lloyd Jeffress. New York: Wiley, 1951, pp. 1–41. [Google Scholar]
  30. Arturo Rosenblueth, and Norbert Wiener. “The role of models in science.” Philosophy of Science 12 (1945): 316–21. [Google Scholar] [CrossRef]
  31. William Ross Ashby. “The Nervous System as Physical Machine: With Special Reference to the Origin of Adaptive Behavior.” Mind 56 (1947): 44–59. [Google Scholar] [CrossRef] [PubMed]
  32. William Ross Ashby. “Principles of the Self-Organizing Dynamic System.” Journal of General Psychology 37 (1947): 125–28. [Google Scholar] [CrossRef] [PubMed]
  33. William Ross Ashby. An Introduction to Cybernetics. London and New York: Chapman and Hall, John Wiley & Sons, 1956. [Google Scholar]
  34. Alan Turing. “On Computable Numbers, with an Application to the Entscheidungsproblem.” Proceedings of the London Mathematical Society 42 (1937): 230–65. [Google Scholar] [CrossRef]
  35. Alan Turing. “Systems of Logic Based on Ordinals.” Proceedings of the London Mathematical Society 45 (1939): 161–228. [Google Scholar] [CrossRef]
  36. William Paley. Natural Theology: Or, Evidences of the Existence and Attributes of the Deity, Collected from the Appearances of Nature. Oxford: Oxford University Press, 2006. [Google Scholar]
  37. Keiichi Namba. “Self-Assembly of Bacterial Flagella.” In Paper presented at the Annual Meeting of the American Crystallographic Association, San Antonio, TX, USA, May 2002.
  38. Keiichi Namba. “Conformational change of flagellin for polymorphic supercoiling of the flagellar filament.” Nature Structural & Molecular Biology 17 (2010): 417–22. [Google Scholar]
  39. Charles Darwin. On the Origin of Species: By Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life. New York: Cambridge University Press, 2009. [Google Scholar]
  40. St. Thomas Aquinas. Summa Theologica. Notre Dame: Christian Classics, 1981, vol. 5. [Google Scholar]
  41. John Paul II. “Address to the Plenary Session on the Subject ‘The Origins and Early Evolution of Life’ (22 October 1996).” In Papal Addresses to the Pontifical Academy of Sciences 1917–2002 and to The Pontifical Academy of Social Sciences 1994–2002. Vatican City: The Pontifical Academy of Sciences, 2003, pp. 370–74. [Google Scholar]
  42. International Theological Commission. “Communion and Stewardship: Human Persons Created in the Image of God.” In Texts and Documents, Volume 2: 1986–2007. San Francisco: Ignatius Press, 2009, pp. 319–52. [Google Scholar]
  43. Nature (Editorial). “A pope for today: Latest pontiff looks to enhance social relevance of Catholic Church.” Nature 495 (2013): 282. [Google Scholar] [CrossRef]
  44. Steve Fuller. Science v. Religion? Intelligent Design and the Problem of Evolution. Cambridge: Polity Press, 2007. [Google Scholar]
  45. Steve Fuller. Dissent Over Descent: Intelligent Design’s Challenge to Darwinism. London: Icon Books, 2008. [Google Scholar]
  46. Ray Kurzweil. The Singularity Is Near: When Humans Transcend Biology. New York: Penguin, 2005. [Google Scholar]
  47. Steve Fuller. Preparing for Life in Humanity 2.0. Hampshire: Palgrave Macmillan, 2013. [Google Scholar]
  48. Margaret Boden. Mind as Machine: A History of Cognitive Science. Oxford and New York: Clarendon Press, Oxford University Press, 2006. [Google Scholar]
  49. Gualtiero Piccinini. “Review of the Mechanization of Mind: On the Origins of Cognitive Science, by J.P. Dupuy.” Minds and Machines 12 (2002): 449–53. [Google Scholar] [CrossRef]
  50. Percival Davis, and Dean H. Kenyon. Of Pandas and People: The Central Question of Biological Origins, 2nd ed. Dallas: Haughton, 1989. [Google Scholar]
  51. Richard E. Lenski, Charles Ofria, Robert T. Pennock, and Christoph Adami. “The evolutionary origin of complex features.” Nature 423 (2003): 134–44. [Google Scholar] [CrossRef] [PubMed]
  52. Robert Pennock. “Models, simulations, instantiations, and evidence: the case of digital evolution.” Journal of Experimental & Theoretical Artificial Intelligence 19 (2007): 29–42. [Google Scholar] [CrossRef]
  53. Peter L. Freddolino, Anton S. Arkhipov, Steven B. Larson, Alexander McPherson, and Klaus Schulten. “Molecular Dynamics Simulations of the Complete Satellite Tobacco Mosaic Virus.” Structure 14 (2006): 437–49. [Google Scholar] [CrossRef] [PubMed]
  54. Jonathan R. Karr, Jayodita C. Sanghvi, Derek N. Macklin, Miriam V. Gutschow, Jared M. Jacobs, Benjamin Bolival Jr., Nacyra Assad-Garcia, John I. Glass, and Markus W. Covert. “A Whole-Cell Computational Model Predicts Phenotype from Genotype.” Cell 150 (2012): 389–401. [Google Scholar] [CrossRef] [PubMed]
  55. Andrew Pickering. “A Gallery of Monsters: Cybernetics and Self-Organisation, 1940–1970.” In Mechanical Bodies, Computational Minds: Artificial Intelligence from Automata to Cyborgs. Edited by Stefano Franchi and Guven Güzeldere. Cambridge: MIT Press, 2005, pp. 229–45. [Google Scholar]
  56. National Science Foundation. Converging Technologies for Improving Human Performance: Nanotechnology, Biotechnology, Information Technology and Cognitive Science. Edited by William S. Bainbridge and Mihail C. Roco. Dordrecht: Kluwer Academic Publishers, 2003. [Google Scholar]
  57. Norbert Wiener. Cybernetics: Or Control and Communication in the Animal and the Machine, 2nd rev. ed. Paris and Cambridge: Hermann et Cie., MIT Press, 1948. [Google Scholar]
  58. Steve Fuller, and Veronika Lipinska. “Transhumanism.” In Ethics, Science, Technology, and Engineering: A Global Resource. Edited by Britt Holbrook and Carl Mitcham. Farmington Hills: Gale, Cengage Learning, 2015. [Google Scholar]
  59. European Commission. Converging Technologies: Shaping the Future of European Societies. Reported by Alfred Nordmann; Luxembourg: CORDIS, 2004. [Google Scholar]
  60. European Commission. “Research trajectories and institutional settings of new converging technologies.” In Report for Knowledge Politics and New Converging Technologies: A Social Science Perspective. Reported by Steve Fuller; Luxembourg: CORDIS, 2008. [Google Scholar]
  61. Defence Research, and Development Canada. NBIC Disruptive Technology Watch. Prepared by Scott MacKenzie, Allen Chong, Tiit Romet, and Kimberly Thomas; Ottawa: Consulting and Audit Canada, 2003. [Google Scholar]
  62. Nikolas Rose. The Politics of Life Itself: Biomedicine, Power, and Subjectivity in the Twenty-First Century. Princeton: Princeton University Press, 2009. [Google Scholar]
  • 1Scotus’ concern antecedes Immanuel Kant’s questioning of the possibility of metaphysics as a science by four centuries. Kant would later dispatch his famous ultimatum: “All metaphysicians are therefore solemnly and legally suspended from their occupations until they shall have satisfactorily answered the question: How are synthetic cognitions a priori possible?” ([2], Preamble: Section 5.)
  • 2Ordinatio I, distinctio. 3, part 1, quaestio. 2, numero 26.
  • 3Certainly the idea of a continuum as the default state of affairs of reality was already entertained by the Ancient Greeks. In the realm of the inanimate, it was understood that natura abhorret vacuum. Among living entities, a careful continuity in “how rightly Nature orders generation in regular gradation” ([4], II, 1) was recognized.
  • 4“God created man in his image; in the divine image he created him; male and female he created them” (Gen. 1:27, NAB).
  • 5Arguably, Western civilization was split in such a way that it never healed. The instantiation of this division was eventually settled, embodied on one side by those countries that remained faithful to Rome (mostly southern Europe) and on the other by those that did not (the Anglo-Saxon world and northern Europe)—a cosmological split that was naturally extended to their respective colonies. Eisenstein reports this rift as follows: “Sixteenth-century heresy and schism shattered Christendom so completely that even after religious warfare had ended, ecumenical movements led by men of good will could not put all the pieces together again. Not only were there too many splinter groups, separatists, and independent sects who regarded a central church government as incompatible with true faith; but the main lines of cleavage had been extended across continents and carried overseas along with Bibles and breviaries. Within a few generations, the gap between Protestant and Catholic had widened sufficiently to give rise to contrasting literary cultures and lifestyles. Long after Christian theology had ceased to provoke wars, Americans as well as Europeans were separated from each other by invisible barriers that are still with us today” ([5], pp. 172–73).
  • 6The actual title was Disputatio pro declaratione virtutis indulgentiarum.
  • 7With the sacrament of Holy Orders one is “ordered” a presbyter (a priest), a deacon or a bishop. In the case of a priest, it institutes in that person ad perpetuum the capability of providing the other six sacraments. The provision of this one sacrament is reserved solely to the bishop. “Since the sacrament of Holy Orders is the sacrament of the apostolic ministry, it is for the bishops as the successors of the apostles to hand on the “gift of the Spirit”, the “apostolic line”. Validly ordained bishops, i.e., those who are in the line of apostolic succession, validly confer the three degrees of the sacrament of Holy Orders” ([6], para. 1576).
  • 8Peter Harrison would extend the influence of the awareness of a doomed humanity vis-à-vis Original Sin beyond Bacon’s motivations and into the very birth of Modern science, arguing that “the biblical narrative of the Fall played a far more direct role in the development of early Modern knowledge—both in England and on the Continent—than has often been assumed, and that competing strategies for the advancement of knowledge in the XVII century were closely related to different assessments of the Fall and of its impact upon the human mind” ([7], p. 240).
  • 9Idols of the tribe (race), of the cave (individual), of the marketplace (language) and of the theatre (authority) ([10], pp. 95ff).
  • 10It is widely acknowledged that J. C. Maxwell held on to visually mechanical explanations of “action at a distance” in electromagnetism until the visual heuristics were no longer needed ([13], chapter 3).
  • 11“For the Latins, verum (the true) and factum (what is made) are interchangeable, or to use the customary language of the Schools, they are convertible” ([15], p. 45).
  • 12Cybernetics was an Anglo-American scientific movement that emerged in the 1940s and lasted about a decade. I am currently writing a dissertation on the possible reasons for its collapse. Cybernetics might have been the strong proxy between the humanistic advances laid out in the previous three sections and the later proposals in techno-scientific outlooks displayed below. Lack of space precludes me from further articulating this missing link. For a short introduction to cybernetics, see [19]. For a longer exposition on this scientific movement, see [20].
  • 13As opposed to “contemporary”.
  • 14Although this is bound to continue changing. See [21].
  • 15One billionth of a meter.
  • 16Quantum effects, however, are part and parcel of the nanoscale’s somewhat “exotic” halo. Within the threshold of 1–100 nm, the effects pertaining to the so-called “quantum realm” are present, and thus materials tend to behave in ways absent at the macro-level (e.g., an otherwise inert material becomes conductive). For a treatment of the quantum-related problems in nanoscale research, see Drexler’s “An Open Letter to Richard Smalley” in ([23], Appendix).
  • 17The prefix “nano-” for these matters was not yet coined at the time, so he did not use it.
  • 18A fullerene—or “buckyball”—is a spherical carbon molecule.
  • 19However, the “nanocar” lacked an engine, and thus it was not a car in the full sense of the word (i.e., a machine with four wheels, self-propelled by a motor) [27].
  • 20This dichotomy was an important source of tension for cybernetics. It might in fact have contributed to cybernetics’ ultimate demise. See [28] and [29].
  • 21After all, this was the thrust behind the Scientific Revolution, as indicated in the section pertaining to Francis Bacon.
  • 22I use “machinization” instead of “mechanization” in order to put emphasis on the ontology of the object qua machine, instead of its process or behavior—without saying that they are unrelated.
  • 23The denial of a machine-organism isomorphism during the last century largely rested upon the alleged fact that a living organism can self-organize (to achieve biological homeostasis with its environment and survive) whereas a machine cannot. In the face of this, the cyberneticist William Ross Ashby embarked upon the project of elaborating a more sophisticated and complete notion of a machine. This enriched notion can indeed show powers of self-organization. Thus the bridge between the living and the non-living (which behaves as living) was established: Machines could be alive. In fact, they always were—in nature—but we did not notice it until recently. For Ashby’s treatment of self-organization in machines, see [31] and [32].
  • 24For cybernetics’ addressing of the question regarding the nature of a machine in general, see [33]. For incipient attempts at understanding the nature of a machine (mainly stemming from efforts to articulate the nature of an algorithm) see [34] and [35].
  • 25Professor Namba does not hesitate to identify in such a mechanism an actual engine: “The bacterial flagellum is a rotary nanomachine that spins at hundreds of revolutions per second driven by the electrochemical potential difference across the cytoplasmic membrane” ([38], p. 417).
  • 26Or the Modern Synthesis: Darwinian evolution later combined with Mendelian genetics and the theory of genetic mutation.
  • 27E.g., As in St. Thomas Aquinas’ fifth way within the Quinque Viae ([40], Part I, Question 2, Article 3).
  • 28E.g., William Dembski and Jonathan Wells.
  • 29E.g., Daniel Dennett and Richard Dawkins.
  • 30E.g., Kenneth Miller and George Coyne, S.J.
  • 31The Catholic Church’s Magisterium has repeatedly referred to neo-Darwinism as more than “just a hypothesis”. In 1996, John Paul II, addressing the Pontifical Academy of Sciences, said: “Taking into account the state of scientific research at the time as well as of the requirements of theology, the Encyclical Humani Generis considered the doctrine of ‘evolutionism’ a serious hypothesis, worthy of investigation and in-depth study equal to that of the opposing hypothesis…Today, almost half a century after the publication of the Encyclical, new knowledge has led to the recognition of more than one hypothesis in the theory of evolution. It is indeed remarkable that this theory has been progressively accepted by researchers, following a series of discoveries in various fields of knowledge. The convergence, neither sought nor fabricated, of the results of work that was conducted independently is in itself a significant argument in favour of this theory” ([41], para. 4). Cardinal Joseph Ratzinger (later Benedict XVI), then Prefect of the Congregation for the Doctrine of the Faith, ratified in 2004 a document drafted by the International Theological Commission that read: “Since it has been demonstrated that all living organisms on earth are genetically related, it is virtually certain that all living organisms have descended from this first organism. Converging evidence from many studies in the physical and biological sciences furnishes mounting support for some theory of evolution to account for the development and diversification of life on earth” ([42], para. 63). Regarding the latest Pope, Francis (and the Catholic Tradition’s position), an editorial in the journal Nature had this to say on the matter: “…what is clear is that, contrary to widespread belief, the modern Catholic Church is science-friendly and Pope Francis will no doubt continue, and perhaps deepen, that tradition. The Church’s strong support for Darwinian evolution, for example, contrasts sharply with the backwards unscientific belief in creationism of many US evangelicals and lawmakers—a concept that Pope Benedict XVI rightly criticized in 2007 as ‘absurd’” [43].
  • 32For an extensive coverage of the issue, see [44] and [45].
  • 33Against ID: Kenneth Miller, Robert Pennock, Michael Ruse. For ID: Michael Behe, William Dembski, Steve Fuller and Jerry Fodor (the last more anti-Darwin than pro-ID, although he has written on ID blogs).
  • 34Ray Kurzweil, National Medal of Technology and Innovation laureate and Google’s current director of engineering, advocates preparing for an inescapable future in which humans and machines will be completely merged, rendering any meaningful distinction impossible—even discriminatory. Fuller calls this stance the “cybernetic” view of the future of humanity, seeing an understanding of anthropology as artificial theology (see ([46], Chapter 1) and ([47], Chapter 1)). However, a truly cybernetic view would refer to man at his present stage, and in an obvious way, as just another instance of a typical machine.
  • 35Fully fleshing out the lineage between cybernetics and current scientific disciplines would entail a painstaking articulation that remains to be done. The closest attempt is probably Margaret Boden’s two-volume work [48]. Jean-Pierre Dupuy’s book [20] attempts to unveil some of the cybernetic ancestral features of contemporary disciplines, but according to Gualtiero Piccinini, it falls short of completing the task [49].
  • 36See the previous note on the principal figures in the Culture Wars debate.
  • 37See Section 2.
  • 38The perceived interchangeability between the notions of “hypothesis” and “theory” seems to occur mainly in the English language. The statement that evolution is “just a hypothesis” would in any case make more sense. Evolution as “just a theory” translates to evolution as “just a tested hypothesis”—the canonical definition of a theory—which would of course be problematic.
  • 39For an explanation co-written by Pennock of the role of Avida in scientific methodology, see [51].
  • 40To be sure, the ID side (the defendants) lost the case not only due to this finding. An early draft of the controversial textbook was found in which the word “creationism” had been crossed out and replaced with “intelligent design”. This was, for the judge, enough evidence that religion was being smuggled into science class in a public school, thereby violating the constitutional separation between church and state in that country.
  • 41It is traditionally understood that a virus does not qualify as a “complete” organism, given that it needs a host to survive. In fact, debates regarding its status as a “living” entity pivot upon this very issue.
  • 42In fact this very issue, the shift from behavior to structure in scientific modeling, caused deep internal tensions within cybernetics, as indicated above [28,29].
  • 43Famous—even “media-friendly”—cybernetic autonomous machines included Claude Shannon’s “rats”, Grey Walter’s “tortoises” and Ross Ashby’s “homeostat” [55].
  • 44Late 1950s.
  • 45This was a coordinated effort launched in 2000 to synchronize current developments in sciences and technologies towards a unified vision that would allow for the possibility of a qualitative leap in the human experience: “The phrase ‘convergent technologies’ refers to the synergistic combination of four major ‘NBIC’ (nano-bio-info-cogno) provinces of science and technology, each of which is currently progressing at a rapid rate: (a) nanoscience and nanotechnology; (b) biotechnology and biomedicine, including genetic engineering; (c) information technology, including advanced computing and communications; and (d) cognitive science, including cognitive neuroscience” ([56], pp. 1–2). Correspondingly, a summarizing motto found in its founding document reads: “If the Cognitive Scientists can think it, / the Nano people can build it, / the Bio people can implement it, and / the IT people can monitor and control it” ([56], p. 13).
  • 46Expressing a concern not dissimilar to cybernetics’ own during the 1940s ([57], pp. 2–3), the founding NBIC document voices a preoccupation with the wide separation existing between different areas of scientific research. The countervailing epistemic position would be that of a renaissance man (e.g., the polymath Leonardo da Vinci). Accordingly, a new, unifying view is proposed: “We stand at the threshold of a new renaissance in science and technology, based on a comprehensive understanding of the structure and behavior of matter from the nanoscale up to the most complex system yet discovered, the human brain…Developments in systems approaches, mathematics, and computation in conjunction with NBIC allow us for the first time to understand the natural world, human society, and scientific research as closely coupled complex, hierarchical systems. At this moment in the evolution of technical achievement, improvement of human performance through integration of technologies becomes possible” ([56], p. 2. Emphasis added).
  • 47As the founding, American document would suggest [56].
  • 48As the document produced by the European Union in response would propose [59]. For an account of the difference between the American and the European views on NBIC, see [60]. The Canadian stance was aligned with its American neighbor’s [61].
  • 49See chapters 5, 7 and the Afterword of [62].
