Philosophies
  • Article
  • Open Access

29 November 2025

Intensional Differences Between Programming Languages: A Conceptual and Practical Analysis

Faculty of Administration and Social Sciences, Warsaw University of Technology, Plac Politechniki 1, 00-661 Warsaw, Poland
This article belongs to the Special Issue Semantics and Computation

Abstract

This paper investigates intensional differences between programming languages—understood as differences in how computational processes are expressed, structured, and specified rather than merely in what they compute. While such differences have been studied in classical models of computation, they remain underexplored in the context of programming languages. Yet, programming languages undeniably compute, and any account of what “computing” means must include the ways in which they do so. The paper first clarifies the extensional/intensional distinction and introduces a methodological framework to study this distinction based on Carnapian explication. It then follows an idealized programming workflow, which I structure according to the Carnapian framework, to identify where and how intensional differences arise—including during problem specification, algorithm design, language choice, data representation, and physical implementation. The final part situates intensionality within the broader epistemology of programming practice, examining how it is shaped by type-theoretic assumptions, social and historical context, and the implications of bounded outcomes. Throughout, the paper examines both the “nature” (inherent features of computable functions) and “nurture” (human factors influencing programming language design and use) of intensional differences.

1. Introduction

This paper is part of a larger project aimed at developing an inclusive conception of what “computing” means. In my previous research, I explored intensional differences1 between models of computation, such as the theories of Turing machines and recursive functions, and examined how the concept of computing varies in contexts such as computational complexity, the Cobham–Edmonds thesis [1,2], and analogue computation [3,4]. In this paper, I extend this investigation to computer programs—systems that undeniably compute. My ultimate goal—of which this paper is a milestone—is to understand and organize the conceptual content of “computing”: for example, how the kind of computation performed by a computer program differs from that embodied in models of computation, or how intuitions about what “computing” means—as shaped by different complexity-theoretic frameworks—influence the evolving meaning of the concept of computing. Future work will extend this analysis to other types of computation, including stochastic generative and predictive models operating on Big Data, quantum computing, analog computing, and various forms of natural computation.
Intensionality, in the most general sense, refers to distinctions that go beyond extensional equality. It operates at a finer granularity than merely identifying when two functions or programs produce the same outputs. While mainstream mathematics often regards intensional characterizations as secondary or even problematic, they are central to computer science, where programs and processes are not merely defined by what they compute but also by how they compute it.
Models of computation formulated to explore the fundamental nature of effective procedures emerged long before programming languages and programming practice as we know it today. These foundational frameworks—such as Turing machines, λ -calculus, or the theory of recursive functions—were developed in the early 20th century to answer mathematical and philosophical questions about which functions could be computed in principle rather than how they could be implemented in practice.
While extensional equivalence (e.g., the fact that Turing machines and recursive functions compute the same class of functions) tells us that these models are computationally equivalent, intensional distinctions are important in a broader scientific context. How intensional differences between models of computation come into play is well captured by Rudolf Carnap’s framework of explication, in which informal pre-theoretical concepts are clarified and replaced in the specific scientific context by their precise formal counterparts [5]. Thus, models such as Turing machines and recursive functions—although extensionally equivalent—can be seen as different explicata of the same informal concept of effective procedure. As I have shown elsewhere [6], the framework of explication can be applied to the model of recursive functions, and this application can naturally be extended to models of computation more generally.
Programming languages also exhibit intensional differences through their type systems. Inherited from Church’s simply typed λ -calculus, types in programming languages constrain not merely what values are computed but how those values can be used, combined, and manipulated. This type-theoretic intensionality operates at a different level than Carnapian explication: rather than offering alternative formalizations of the same concept, type systems encode the internal logical structure and organizational principles of the language itself.
Here, I am mostly interested in the first type of intensionality. I reconstruct the intensional properties of programming languages from their underlying formal models. Intensional differences between programming languages have profound practical implications: once programmers are constrained to a particular language, the intensional characterization plays a fundamental role in how they reason, what solutions they implement, how they maintain code, and consequently how efficiently certain problems are modeled or solved. These differences further influence developers’ mental models, the design of software systems, and even the evolution of programming communities and tools.
This paper explores intensional differences between programming languages. Section 2 provides a conceptual foundation by explaining the extensional/intensional distinction and recalling how Carnapian explication can be used to identify intensional differences between models of computation. It also makes a first attempt to determine what intensionality is in and for computer science. In Section 3, I follow an idealized programming workflow to gain more systematic insights into intensional differences between programming languages. I illustrate this with concrete comparisons (e.g., Euclid’s vs. Stein’s algorithm for the greatest common divisor, or list handling in Haskell, Java, and Prolog). Finally, Section 4 explores intensionality in the broader perspective of programming practice. I examine intensionality conveyed by type theory, intensionality issued from social contexts, and finally show how intensionality becomes salient in the light of limiting results.

2. The Intensional Aspect of Computing

The distinction between extensional and intensional properties has its roots in logic and the philosophy of language. A well-known example from Frege illustrates the following: the terms the morning star and the evening star refer to the same object (Venus) and are thus extensionally equivalent but differ intensionally because they represent that object under different descriptions. The general distinction—between what something refers to and how it is represented—has proved useful across a number of formal and conceptual domains, including theories of meaning, formal logic, and computation. In the context of computing, it helps to distinguish between what a system computes and how it computes it. In the philosophy of computation, the extensional–intensional distinction relates to functions in computability theory: two functions can be extensionally equivalent (producing the same outputs for the same inputs) while being specified differently and thus intensionally distinct.

2.1. Intensional Differences Between Models of Computation

Two models of computation are said to be extensionally equivalent if they compute the same class of functions. This is the case, for example, with Turing machines, λ -calculus, and recursive function theory—all of which characterize the class of computable functions. From an extensional point of view, these models are interchangeable. However, the intensional differences between them are significant.
To better understand where these differences come from, it is useful to invoke Rudolf Carnap’s framework of explication [5]. Explication involves the transformation of an informal concept—the explicandum—into a formal counterpart—the explicatum—tailored to a particular scientific context. The process of explication unfolds in two stages: first, clarification, where the vague concept is delimited and exemplified to identify the aspects most relevant for theoretical use; and, second, specification, where a well-defined formal concept is introduced to replace the original in the scientific context.
Sometimes, this scientific context is an existing one and the explication is crafted to meet its specific needs—as in the case of the axiomatic theory of recursive functions, which explicitly addressed the question of which arithmetic functions are computable. At other times, the explication of an intuitive concept may lead to entirely new ways of thinking in the sciences, such as when Turing machines laid the foundations for what would become computer science. In still other cases, explication proceeds by conforming to specific modes of scientific reasoning, emphasizing certain theoretical possibilities—as in the purely functional reasoning of the λ -calculus. Thus, clarification does not merely prepare the ground for explication but actively shapes the kind of intensional content that the resulting formal concept will convey.
This active role of clarification is exemplified in the contributions of Emil Post. As De Mol [7] points out, Post not only anticipated the formalization of computability in his early work but also insisted that these formalisms be treated as empirical hypotheses rather than mere definitions. He emphasized that
to mask this identification under a definition hides the fact that a fundamental discovery in the limitations of the mathematicizing power of Homo Sapiens has been made and blinds us to the need of its continual verification. ([8], p. 105) after ([7], p. 55).
The strength of models such as Post’s lies in their “psychological fidelity” and their ambition to capture
all the possible processes we humans can set-up to solve problems. ([8], p. 105) after ([7], p. 55).
Post’s approach thus illustrates how different priorities in the clarification stage—here, psychological fidelity versus mathematical abstraction—lead to different intensional properties in the resulting models.
An explication is not judged by whether it is “correct” but by whether it satisfies Carnap’s adequacy conditions: similarity to the informal concept, exactness of definition, fruitfulness for generating results, and simplicity. Different models of computation can thus be seen as distinct explicata of the same informal concept of effective procedure, reflecting different theoretical goals and conceptual priorities.
From this perspective, differences between models such as λ -calculus, which emphasizes substitution and abstraction and is historically associated with proof theory and functional reasoning [9], Turing machines, which emphasize state transitions and mechanical procedures and are more directly associated with the idea of algorithmic step-by-step execution [10], and recursive functions, which reflect a model grounded in arithmetic and bounded computation [11,12], are best understood as different explication strategies. These models are extensionally equivalent: they express the same class of computable functions, and it is widely believed that they encompass all the effectively computable functions. Yet, each clarifies the shared intuitive concept of computation in a different manner and embeds it within a distinct formal framework. Each explication reflects different underlying conceptual priorities: abstraction versus mechanistic procedure, substitution versus state transition, and so on. An explication succeeds not by being uniquely correct but by being useful and illuminating within the context of a particular theoretical or philosophical task.2
The most thoroughly studied intensional differences are those between Turing machines and recursive functions, especially in what can be seen as a revival of the Gödelian ideal of absolute computation—that is, the search for a privileged account of computability. The debate has been developed from many different angles, including Soare’s (1996) [16], Shapiro’s (1982) [17], Gandy’s (1980, 1988) [18,19], Rescorla’s (2007, 2012) [20,21], Copeland’s (2010) [22], and Quinon’s (2014) [23]; see also the centenary volume edited by Floyd and Bokulich [24]. Various arguments have been put forward. Some authors rely on bold assertions. For example, Gandy [19] claims that Turing’s analysis of computability is “a paradigm of philosophical analysis: it shows that what appears to be a vague intuitive notion actually has a unique meaning that can be stated with complete precision” (pp. 84–86).
Others offer more nuanced criticisms. For instance, Rescorla [20] defends Church’s thesis by arguing that attempts to define the semantics of Turing computability risk either extensional inadequacy or circularity. According to this line of reasoning, Turing’s analysis does not qualify as a genuine explication of the intuitive concept of computability. Rescorla appeals to a well-known problem—sometimes called the “semantic halting problem”—which shows that, in the absence of external constraints on the choice of denotation functions (used to map digits to numbers), Turing machines can be made to compute non-computable functions under certain “deviant semantics” [17,22,25].
A less well-known but very interesting perspective on the intensional differences between the model based on Turing machines, recursive functions, and also λ -calculus models of computation has been proposed by Trakhtenbrot [26], who distinguishes between two orientations of foundational approaches to computability. On the one hand, Trakhtenbrot describes the Turing approach as “computer- (or hardware-) oriented”—focused on providing “a direct analysis of computing processes” and “clear arguments that all possible algorithms for computing functions can be embodied in such machines” ([26], p. 603). On the other hand, he characterizes the Church–Kleene and Gödel–Herbrand approaches as “programming or software-oriented” ([26], p. 604), more aligned with symbolic reasoning and traditional mathematical logic. Importantly, Trakhtenbrot presents these approaches as complementary rather than competing accounts of computation.
A similar distinction reappears in the work of Moschovakis, who uses it to frame his account of the difference between functions and algorithms. In his framework, a function corresponds to the value being computed, while an algorithm is formalized as a recursor—a mathematical object consisting of recursive equations that captures the computational procedure. The distinction is explicitly modeled on Frege’s classic division between Sinn (sense) and Bedeutung (denotation), where sense corresponds to intension and denotation to extension. The algorithm (as a recursor) expresses the mode of presentation (sense), and the function value is the denotation.

2.2. Intensional Difference Between Programming Languages

The distinction between intensional and extensional aspects also provides a useful framework as we shift our attention from foundational models to programming languages. Whereas models of computation offer abstract accounts of what it means to compute in principle, programming languages make these accounts operational in concrete contexts. They exhibit different computational styles, each emphasizing certain aspects of a model while downplaying others. Moreover, certain (low-level or domain-specific) programming languages are inseparably tied to their physical implementation in hardware.
In theory, most general-purpose programming languages are Turing complete. That is, any computable function expressible in one can, in principle, be expressed in another. From an extensional point of view—considering only the input–output behavior of computable functions—they are therefore equivalent. However, this theoretical equivalence masks substantial practical differences. Just as formal models such as Turing machines and the λ -calculus differ intensionally, so do programming languages—in how they represent state, organize control flow, and structure abstractions. However, unlike theoretical models that were formulated to capture mathematical intuitions about the process of computation, programming languages are designed to serve specific practical purposes: system programming, rapid prototyping, scientific computation, web development, and so forth. These different purposes shape how each language structures computation and, as a consequence, guides the programmer’s thinking and creativity in solving practical problems.
From this perspective, Church’s λ -calculus carries what Trakhtenbrot calls a “prophetical message” ([26], p. 605), anticipating key computational features of high-level programming languages—such as abstraction (treating functions like entities that can be stored and reused) and higher-order functions (functions that manipulate other functions).3 These features became explicit in languages like Lisp and ISWIM [28], and subsequently influenced Edinburgh ML [29] and LUCID [30]. What I find significant in this view is the idea that the intensional differences between foundational models reappear—often in amplified form—at the level of language design and use. The intensional features of Turing machines, according to Trakhtenbrot, run even deeper: they not only influenced imperative programming styles but also anticipated “the era of universal digital computers” ([26], p. 604) and inspired the very architecture of digital computation. Rather than treating these intensional differences as rival theories, Trakhtenbrot suggests that they should be understood as complementary accounts of computation, each contributing a distinct but overlapping legacy.
This focus parallels Turner’s distinction between the functional and structural aspects of algorithms. While functional specification—the extensional mapping from inputs to outputs—captures what an algorithm does, it does not exhaust what an algorithm is. Structural features—how a computation proceeds, how a problem is decomposed, and how the process is internally represented—are equally central to understanding algorithmic artifacts [31]. The structural aspect corresponds to intensional characterization. Trakhtenbrot’s account targets models of computation; Turner extends the concern to algorithms and, more broadly, to programming languages, understood as layered abstract artifacts that must be interpretable, analyzable, and executable. Intensional features—such as control structures (which determine the order in which instructions are executed), programming styles (like functional or object-oriented approaches to organizing code), and abstraction mechanisms (ways to simplify complex operations, such as functions or classes)—are the central means of assessing the usefulness of a programming language.
The important lesson, I believe, is that the intensional level of description is indispensable for understanding programming languages. Unlike models of computation—where researchers still search for a single universal concept that captures the essence of what it means to compute—programming languages are inherently pragmatic. Programming languages depend not only on what can be computed but also on how that computation is organized, structured, and communicated. Each programming language imposes a particular conceptual lens: some encourage object-oriented thinking; others favor functional composition or logical reasoning.
In addition, unlike foundational models of computation, programming languages are almost always accompanied by a set of practical tools: compilers that translate code into machine instructions, debuggers that help to locate and correct errors, and editors that assist in organizing and refining code. These tools operate not only on the basis of what a program does—its extensional behavior—but also on the basis of how the program is written and structured. Two programs may perform the same task yet differ significantly in how easily they can be debugged or refactored.
The nature of intensionality in formal systems, which is inherited by programming languages, relates to differences in how things are expressed or specified. Analyzing how intensional content is embedded in formally defined concepts using Carnap’s method of explication has proven to be extremely helpful. In this paper, I extend that analysis to programming languages. However, another approach to studying intensionality is more widely recognized in the context of programming languages: one that reveals the internal meta-structure of languages through type theory.
The type structures are inherited from foundational models, most notably Church’s simply typed λ -calculus, in which types were introduced to classify expressions, ensure the well-formedness of computations, prevent logical paradoxes, and clarify the structural properties of functional abstraction. However, as Petricek [32] notes, while Church originally introduced types and functions as formal mathematical constructs, modern programming languages have reinterpreted them as computational and organizational primitives, giving them practical meanings and roles that extend well beyond their foundational origins. In contemporary programming languages, types serve as tools for error detection, documentation, abstraction, and impact management in software development. Type structure is referred to as an intensional construct because types constrain not only the values that are computed but also how those values can be used, combined, and manipulated. I return to the topic of types in programming languages in the final section of this paper.
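To make this constraining role concrete, consider a minimal Haskell sketch (my own illustration, not drawn from the cited sources; the type and function names are arbitrary): two types share the same underlying numeric representation, yet the type system refuses to combine them directly, constraining how values may be used rather than which values can be computed.

-- Two distinct types with the same underlying representation (Double).
newtype Meters = Meters Double
newtype Feet   = Feet Double

addMeters :: Meters -> Meters -> Meters
addMeters (Meters a) (Meters b) = Meters (a + b)

-- addMeters (Meters 3) (Feet 10)   -- rejected by the type checker:
-- both arguments are numbers, but the types constrain how they combine.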

2.3. How Programming Languages Inherit the Intensional Aspects of Models of Computation

The development of practical programming tools, partially inspired by abstract mathematical models of computation, began in the 1950s with early high-level languages such as Fortran and Lisp. These languages operationalized computation, enabling symbolic instructions to be executed on physical hardware. The relationship with abstract models, however, was more complex than simple derivation: language creation was driven largely by immediate practical needs.
Fortran, developed under the leadership of John Backus at IBM beginning in 1954, exemplified the style of programming that became closely associated with the von Neumann architecture: a model where programs work by modifying stored data step by step, much like following a recipe that constantly updates its ingredients. This approach was revolutionary in the 1950s because it automated what human computers had conducted manually—performing calculations by updating values on paper—but it also meant that programs were essentially sequences of commands that changed a machine’s memory state. While Backus is credited with designing Fortran, his 1977 Turing Award lecture is a scathing critique of this legacy. Rather than celebrating Fortran, the paper exposes fundamental limitations of von Neumann-style languages, describing them as “fat and weak” due to their reliance on low-level stateful computation and their lack of algebraic structure [33,34]. As an alternative, Backus proposes a functional programming model based on mathematical systems like λ -calculus and combinatory logic, which builds programs by combining functions like mathematical equations rather than through step-by-step instructions that modify data.4
Around the same time, Lisp emerged as another response to the limitations of von Neumann computation—specifically the difficulty of expressing mathematical and logical relationships when programs are confined to sequential manipulation of memory states. In [36], McCarthy traces Lisp’s theoretical roots to the system of partial recursive functions, adapting it to operate on symbolic expressions rather than mere numerals.5 He stresses the need for a formalism that directly supports recursive definitions—that is, a way for functions to call themselves without cumbersome workarounds. While Church’s λ -calculus theoretically permits recursion, it requires complex encoding tricks to achieve it.6 To address this, McCarthy introduces a “label” construct that assigns functions explicit names, allowing them to refer to themselves naturally, much as mathematicians write recursive equations like “factorial(n) = n × factorial(n − 1)”, where the function’s name appears on both sides of the definition. Together, Fortran and Lisp illustrate how, in the decades following the formalization of computability, theoretical models began to inform programming practice in different and sometimes divergent ways.
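As a minimal illustration of such named recursion, here is a Haskell sketch of my own (not McCarthy’s Lisp notation): the function refers to itself by name, directly mirroring the recursive equation quoted above.

-- Named recursion: the defined function appears on both sides.
factorial :: Integer -> Integer
factorial 0 = 1
factorial n = n * factorial (n - 1)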
This dual nature—at once formal and pragmatic—makes intensional aspects particularly relevant in the philosophy of programming. A programming language is not just a symbolic system for expressing procedures; it is also a tool for solving real-world problems. As a consequence, intensional differences between programming languages arise not only from the underlying models of computation but also from the epistemological and cognitive principles embedded in each programming paradigm. The management of state, memory allocation and access, and data organization differs greatly between functional, object-oriented, and logical programming styles. Even when addressing the same problem, programs written in these different styles can diverge substantially in structure and presentation.
Functional programming is most directly related to the λ -calculus, a formal system introduced by Alonzo Church in the 1930s for studying functions, abstraction, and application. The λ -calculus serves as the basic model for purely functional languages such as Haskell or OCaml, where programs are expressions and evaluation corresponds to reducing those expressions. Programs are constructed by combining simple functions into more complex ones and applying them to values. The fact that functions themselves serve as the primary units both of representation and composition is a key intensional feature that distinguishes both the λ -calculus and functional programming from other computational models and language styles. The intensional structure of a functional program often mirrors the structure of evaluation in the λ -calculus: how terms are reduced, which redexes (reducible expressions) are chosen, and how fixed points—mathematical constructs that enable recursive definitions by finding values that remain unchanged under a given function—are constructed to support recursion. Notably, the FP (functional programming) system that Backus proposed in his 1978 lecture as an alternative to von Neumann programming represents a different intensional approach within functional programming: rather than using λ -calculus with named variables, FP employs a fixed set of combining forms in a “point-free” style that eschews variable names entirely, building programs purely through function composition.
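The contrast can be made concrete with a small Haskell sketch of my own (the function names are arbitrary): the same computation written once with a named argument and once in the point-free style favored by Backus’s FP, built purely by composing functions.

-- With a named argument: the list xs is passed explicitly.
sumOfSquares :: [Int] -> Int
sumOfSquares xs = sum (map (\x -> x * x) xs)

-- Point-free: no variable names, only function composition.
sumOfSquares' :: [Int] -> Int
sumOfSquares' = sum . map (^ 2)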
Object-oriented programming, as exemplified by Java, C#, or Python (when used in an object-oriented style), structures programs around objects—units that hold data and contain instructions for what to do with that data. These languages derive from the traditions of Simula and Smalltalk, two influential systems developed in the 1960s and 1970s. Simula introduced the idea of modeling software as a set of interacting objects, originally to simulate real-world processes [38]. Smalltalk extends this by treating everything as an object and allowing communication through message passing—a paradigm where objects interact not by directly manipulating each other’s data but by sending requests or “messages” to one another, such as asking a “Circle” object to calculate its area or telling a “Window” object to close itself [39,40].
Unlike functional programming, which is based on the λ -calculus and emphasizes stateless computation, object-oriented programming is based on an imperative model of computation—one where programs consist of commands that explicitly tell the computer what to do step by step. Its most natural correspondence is with the Turing machine model; however, as Kay [39] emphasized, the paradigm is based on message-passing between autonomous computational agents, not merely state manipulation as in traditional imperative programming. This message-passing semantic approach—where objects interact only through well-defined interfaces without direct access to each other’s internal state—constitutes a distinct semantic model, not merely a metaphorical description. It therefore provides a distinct intensional structure that cannot be fully reduced to Turing machine state transitions. It also bears a certain resemblance to von Neumann architecture as object-oriented programming particularly exploits this architecture’s ability to treat data as mutable entities that can be modified in place. Object-oriented programs are typically written as sequences of instructions that modify objects’ internal states. For instance, a “BankAccount” object might have its balance updated when a deposit method is called, permanently changing that object’s state. Execution unfolds as a series of such state changes, with each method call potentially altering the program’s overall state, much as a Turing machine progresses by reading, writing, and moving along its tape, transforming its configuration at each step.
From an intensional point of view, object-oriented programming enforces encapsulation and information hiding as semantic properties. The computational semantics are defined by message dispatch mechanisms, inheritance hierarchies, and dynamic binding—formal properties that determine which code executes when a method is invoked, not merely implementation details. The intensional structure of object-oriented languages focuses on how objects are grouped into categories (classes that define shared properties and behaviors), how internal details are kept private from the rest of the program (a principle called encapsulation), how the program determines which specific code to run when an object receives a message (a process called method dispatch), and how objects change over the course of program execution. This temporal dimension is particularly important because object-oriented programs are often designed to model real-world processes that unfold over time, such as a bank account whose balance changes with each transaction or a game character whose position updates as it moves. The intensional character of a given object-oriented implementation thus emerges from multiple factors: how information flows between objects, how unexpected situations are managed, and how the cumulative effects of state changes shape the program’s behavior over time.
Logical programming, as exemplified in languages such as Prolog, operates according to a computational model that is fundamentally different from both functional and object-oriented styles. Rather than specifying a sequence of instructions or composing functions, the programmer declares a set of logical relationships and constraints. Computation then proceeds through a process of automatic inference, typically implemented as a backtracking search that attempts to satisfy the specified conditions.
The computational model underlying logical programming originates from formal logic, particularly first-order predicate logic and proof theory [41,42]. More specifically, computation is considered to be equivalent to proof-seeking: a query is treated as a goal to be proved, and the program consists of rules that guide the inference process. These rules are often written as Horn clauses—logical statements of the form “if A and B and C are true, then D is true” (or simply “D is true” for facts without conditions)—which make automated reasoning more efficient. Prolog is based on a subset of first-order logic that treats logical rules as procedures: when provided a query like “is Socrates mortal?”, the system searches for a proof by working backwards from the goal, finding rules that could establish it and then trying to prove those rules’ conditions in turn. This search process follows Robinson’s resolution principle [43], which works by combining compatible rules to derive new conclusions—for instance, combining “all humans are mortal” with “Socrates is human” to conclude “Socrates is mortal.” The process also relies on unification, a technique for matching logical expressions by finding substitutions that make them identical.
Unlike imperative or functional programming, where commands are executed or expressions evaluated, a logic program attempts to construct a proof. For example, given the rules
parent(alice, bob).
parent(bob, claire).
grandparent(X, Y) :- parent(X, Z), parent(Z, Y).
and the query grandparent(alice, claire)?, the system tries to prove the goal by matching it with the third rule and checking whether the two facts about parents support the conclusion. The variables X, Y, and Z are automatically filled in through unification. The system finds specific values that make the rule match the query, and the proof is completed by linking these matching pieces of information together, like connecting puzzle pieces that fit.
In logic programming, the programmer cedes much of the control over execution to the inference engine: instead of specifying how to solve a problem step by step, the programmer specifies what should be true—for instance, declaring “X is an ancestor of Y if X is a parent of Y” rather than writing loops to traverse a family tree. The system then searches for a derivation that satisfies the stated conditions, automatically finding all the ancestors when asked. This declarative style does not make the program structure irrelevant. On the contrary, the order in which rules are written, the design of recursive definitions, and the placement of constraints all determine whether the system will find solutions, how quickly it will find them, and whether it will find all the possible solutions or become stuck in infinite loops.
The alignment of programming languages with foundational models reinforces the point that intensional differences are not arbitrary stylistic preferences. They reflect deep structural choices: what counts as a basic operation (function application in λ -calculus versus state modification in Turing machines), how control is expressed (through function composition versus sequential instructions), what abstractions are made explicit (objects and messages versus logical rules), and what is left to the system (memory management in functional languages versus proof search in logic programming). The same problem—such as checking membership in a list—becomes a different cognitive and computational task depending on which model the language assumes and which intensional commitments that model brings along.

3. Programming Workflow

In this section, I follow a simplified, idealized programming workflow to gain systematic insight into intensional differences between programming languages. My primary focus in this paper is theoretical: more precisely, the step from the function one wants to compute to the algorithm used to compute it, and the interrelation between the computational problem to be solved and the choice of programming language. This theoretical analysis serves as an initial exploration of a vast terrain; it would be difficult to cover every aspect comprehensively in a single study. I have therefore selected two focal points: the function-to-algorithm transition, which represents the most thoroughly studied aspect of the computational process, and the problem–language interrelation, which has been historically evident throughout the evolution of programming paradigms. The workflow reconstruction I propose extends to hardware implementation, albeit incompletely, in preparation for subsequent work.
An idealized software development process starts with problem specification: the identification of the external problem to be solved. Next comes a preliminary program specification, which states what the program should do: what information it will receive, what results it should produce, what limitations it must respect, what unusual situations it might encounter, and what kind of data organization it will work with. Following this, algorithm design specifies the computational steps needed to solve the problem. This three-stage process parallels the clarification stage of Carnapian explication in a crucial way: just as Carnapian clarification transforms an intuitive concept into a precisely delineated notion ready for formalization, these three stages transform a real-world problem into a computational specification ready for software and hardware implementation.
The specification phase of Carnapian explication7, which follows the clarification phase, takes the clarified intuitive concept and formalizes it within a particular theoretical framework. In programming, this means expressing the envisioned computational process—one that solves a specific problem—in code using a particular programming language. As I argued in [6], in the context of formal models of computation, the clarification stage is the main bearer of intensionality. However, programming languages present a fundamentally different case. Unlike purely theoretical models, programming languages cannot be evaluated at the theoretical level alone as their primary objective is practical application. Therefore, analyzing intensional aspects of programming languages requires examining not only the theoretical framework but also the implementation stage on actual hardware. Thus, while formal models of computation conduct their clarification stage with a scientific model “viewed on the horizon,” programming languages must consider both the programming framework and the target hardware as integral parts of their clarification process.

3.1. The Problem Specification, Program Specification, and Algorithm Design Stage

This subsection examines the clarification stage—the three-stage process from problem specification through program specification to algorithm design—in the context of programming framework selection. Particular attention is paid to the function–algorithm distinction, the most significant theoretical contribution to understanding this stage, developed by Moschovakis through his theory of algorithms and his notion of algorithmic meaning. Moschovakis’s framework provides tools for distinguishing between what is computed (the function) and how it is computed (the algorithm), establishing that algorithms carry intensional content beyond their extensional input–output behavior.
I extend this function–algorithm analysis to the algorithm–implementation relationship, examining how the same algorithm takes different forms across programming paradigms. Through examples from functional, object-oriented, and logic programming, I demonstrate how a single algorithmic concept manifests differently depending on the computational framework: as compositions of pure functions, as interactions between encapsulated objects, or as declarative rules and unification processes. These variations reveal additional layers of intensionality beyond the initial choice of algorithm.
Finally, I examine the back-and-forth influence between algorithm choice and programming language selection. This bidirectional relationship—where the choice of algorithm influences which programming language is most suitable, while simultaneously the available language features shape which algorithms are considered to be viable—constitutes a distinctive characteristic of practical programming that goes beyond the linear progression typically assumed in theoretical models of computation.

3.1.1. Functions and Algorithms

The function to be computed—understood as a graph of inputs and outputs—belongs to the problem and program specification. It is in the algorithm design stage where we determine how this function will be computed, introducing the computational methods and operational structure that go beyond the extensional definition. As previously noted, a widely discussed intensional difference in the area of programming languages is the intensional difference between the function that is being computed and the algorithm that is used to compute it. The decision of which algorithm to use is an important step on the way to language selection.
Reflection on the problem to be solved leads to different algorithmic approaches in the case of programming languages. Different algorithms may solve the same problem but differ significantly in their operational logic and conceptual underpinnings. To illustrate, consider the two classical algorithms for computing the greatest common divisor (GCD) of two natural numbers, a and b, as discussed by Turner [31]:
Euclidean Algorithm:
  1. Let a and b be the inputs, with a ≥ b.
  2. If b = 0, return a as the GCD.
  3. Otherwise, replace a with b and b with a mod b.
  4. Repeat from step 2.
Stein’s Algorithm (Binary GCD Algorithm):
  1. If a = b, return a.
  2. If a = 0, return b; if b = 0, return a.
  3. If both a and b are even, return 2 · GCD(a/2, b/2).
  4. If a is even and b is odd, return GCD(a/2, b).
  5. If a is odd and b is even, return GCD(a, b/2).
  6. If both are odd and a ≥ b, return GCD((a − b)/2, b).
  7. If both are odd and a < b, return GCD((b − a)/2, a).
Although both algorithms compute the same mathematical function, they rely on different computational primitives: the Euclidean algorithm uses division and modulo operations, while Stein’s algorithm replaces division with subtraction and binary shifts. These differences exemplify intensional variation: the internal structure of the algorithm, the sequence and nature of its operations, and the assumptions it encodes. Even when the extensional behavior (input–output mapping) is the same, the means of computation can be radically different depending on both algorithmic choices and the representational tools offered by the programming language.
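To see these intensional differences side by side, here is a minimal Haskell sketch of my own transcribing the two sets of steps above (assuming nonnegative integer inputs; this is an illustration, not code from the cited sources).

-- Euclidean algorithm: division and modulo are the computational primitives.
gcdE :: Integer -> Integer -> Integer
gcdE a 0 = a
gcdE a b = gcdE b (a `mod` b)

-- Stein's algorithm: parity tests, halving, and subtraction replace division.
gcdS :: Integer -> Integer -> Integer
gcdS a b
  | a == b           = a
  | a == 0           = b
  | b == 0           = a
  | even a && even b = 2 * gcdS (a `div` 2) (b `div` 2)
  | even a           = gcdS (a `div` 2) b
  | even b           = gcdS a (b `div` 2)
  | a >= b           = gcdS ((a - b) `div` 2) b
  | otherwise        = gcdS ((b - a) `div` 2) a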
The distinction between the function being computed and the algorithm used to compute it aligns with Moschovakis’s formal theory of algorithms, which treats algorithms as structured procedures whose identity is not exhausted by the function they compute. In his framework, the algorithm is a mathematical object in its own right—a recursor, which is a tuple of recursive equations that captures the internal logic, sequence of steps, and representational decisions involved in computation. This corresponds to Frege’s notion of sense (the intensional aspect) as opposed to denotation (the extensional aspect). From this perspective, the Euclidean and Stein algorithms are distinct recursors—distinct senses—that compute the same function (i.e., the same denotation in Fregean terms) [44,45].
In his analysis Moschovakis opposes identifying algorithms with their specific implementations in machines or theoretical machine models. This position diverges from Trakhtenbrot’s more inclusive view, where machine-based characterizations are seen as having complementary value alongside more abstract approaches [26]. Moschovakis—without explicitly referring to Trakhtenbrot—argues that defining algorithms in terms of machines creates serious problems for complexity analysis. When an algorithm is tied to a specific machine, the complexity becomes dependent on arbitrary implementation details rather than the algorithm’s logical structure. For example, the same procedure might run faster if the machine uses binary rather than unary encoding, or if it has two memory tapes instead of one. These differences have nothing to do with the algorithm’s inherent difficulty; they are accidents of the implementation.
Moschovakis instead analyzes complexity directly from the algorithm’s recursive mathematical structure—that is, from the recursor itself—making the results independent of implementation choices and focusing on what the computation logically requires. He illustrates this with mergesort—a sorting algorithm whose recursor captures the divide-and-conquer strategy through a system of recursive equations. The complexity of this algorithm depends only on the input size: for a list of length n, it requires at most n log n comparisons. Once this is proven from the structure of the recursor, the result holds regardless of whether it is implemented in Python, Java, or any other language.
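For reference, here is a minimal Haskell sketch of mergesort (my own illustration): its divide-and-conquer structure (split the list, sort the halves recursively, merge the results) is exactly what the corresponding recursor captures, independently of the host language.

msort :: Ord a => [a] -> [a]
msort []  = []
msort [x] = [x]
msort xs  = merge (msort front) (msort back)
  where
    (front, back) = splitAt (length xs `div` 2) xs
    -- Merge two sorted lists into one sorted list.
    merge [] bs = bs
    merge as [] = as
    merge (a:as) (b:bs)
      | a <= b    = a : merge as (b:bs)
      | otherwise = b : merge (a:as) bs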
In consequence, Moschovakis’s framework distinguishes three levels: the function being computed (i.e., the input–output mapping), the algorithm that computes it (formalized as a recursor—the mathematical object derived from recursive definitions), and the machine implementation that realizes the algorithm. Only the recursor is treated as the proper unit of analysis: functions are what recursors compute, and machines are realizations whose performance may vary but whose essential structure is captured in the recursor.
To apply this to the Euclidean and Stein algorithms, we analyze their recursive definitions as distinct recursors rather than their machine implementations. Each corresponds to a different system of recursive equations: one relying on division and modulus, the other on binary decomposition and subtraction. Following Moschovakis, we understand these as distinct recursors even though they compute the same function. Their complexity can then be evaluated by examining these recursors directly, independently of the programming language or hardware used.
In Moschovakis’s framework, the Euclidean and Stein algorithms for computing the greatest common divisor exemplify the distinction between algorithms as intensional objects and the functions they compute. Just as there are many sorting algorithms—mergesort, quicksort, and heapsort—each with its own distinct recursor for the same sorting function, the GCD function admits multiple fundamentally different recursors. The Euclidean algorithm yields a simple recursor as follows:
\[
\mathrm{GCD}_E(a, b) = p(a, b) \quad \text{where} \quad
p(a, b) = \begin{cases}
a & \text{if } b = 0\\
p(b,\ a \bmod b) & \text{otherwise}
\end{cases}
\]
This recursor consists of a single recursive equation with a base case (when b = 0, return a) and a recursive step (otherwise, take the remainder of the division and swap the arguments). The algorithm simply replaces the pair (a, b) with (b, a mod b) until b reaches zero. By contrast, Stein’s algorithm requires a more complex recursor structure with mutual recursion:
\[
\mathrm{GCD}_S(a, b) = q(a, b) \quad \text{where} \quad
q(a, b) = \begin{cases}
a & \text{if } a = b\\
b & \text{if } a = 0\\
a & \text{if } b = 0\\
r(a, b) & \text{otherwise}
\end{cases}
\qquad
r(a, b) = \begin{cases}
2 \cdot q(a/2,\ b/2) & \text{if both } a \text{ and } b \text{ are even}\\
q(a/2,\ b) & \text{if } a \text{ is even and } b \text{ is odd}\\
q(a,\ b/2) & \text{if } a \text{ is odd and } b \text{ is even}\\
q((a - b)/2,\ b) & \text{if both are odd and } a \ge b\\
q((b - a)/2,\ a) & \text{if both are odd and } a < b
\end{cases}
\]
The resulting recursor has three components—capturing the mutual recursion between q and r, where q handles base cases and delegates complex parity-based branching to r.
The key difference is immediately visible: the Euclidean algorithm has one recursive equation, while Stein’s has two mutually recursive equations. This structural difference—simple recursion versus mutual recursion with complex case analysis—is precisely what Moschovakis’s framework captures as the intensional distinction between these algorithms. These recursors are distinct mathematical objects. They are algorithms in the sense of Moschovakis, existing independently of the functions they compute and the machines that implement them. The existence of multiple recursors for the same function (GCD) shows that the set of algorithms is more extensive than the set of computable functions: a single function can be computed using structurally different recursive procedures, each with its own computational primitives and complexity profile.
Moschovakis’s emphasis on analyzing complexity directly from recursor structure rather than machine implementations gains further support when we consider how different classes of recursors exhibit fundamentally different complexity profiles even when computing identical functions. The same function—for instance, the minimum function that takes two natural numbers and returns the smaller one—can be computed by both general recursive and primitive recursive recursors, yet the structural constraints that define these classes lead to markedly different computational behaviors. A general recursive recursor can examine both arguments simultaneously, performing comparisons until one argument is exhausted—achieving optimal complexity proportional to the smaller input value. In contrast, a primitive recursive recursor must follow a predetermined pattern that examines arguments sequentially, preventing simultaneous comparison and forcing suboptimal performance even for this simple function.
Colson’s study from 1991 [46] of primitive recursive algorithms provides an analysis of their computational limitations when evaluated under call-by-name semantics.8 Colson’s analysis reveals a computational constraint: primitive recursive algorithms must examine only one argument at a time, exhausting it completely before switching to another. When a primitive recursive function is defined, its structure determines once and for all which argument it will examine first, second, and so on. For instance, a primitive recursive function computing the minimum of two numbers n and p might be constructed to always process its first argument completely before examining the second—even when examining the second argument first would be more efficient for particular inputs. This order cannot change based on the actual values being computed.
This structural limitation prevents optimal computation—that is, O ( min ( n , p ) ) complexity, where an algorithm could compute the minimum in just min ( n , p ) steps by examining both arguments and stopping as soon as one is exhausted. Colson proves that no primitive recursive algorithm can achieve this optimal complexity. However, when functions are allowed to accept other functions as arguments—that is, when functions can be passed around and used as data within other functions—this limitation disappears. Colson shows that, with such functional parameters, the minimum can be computed with the optimal O ( min ( n , p ) ) complexity, demonstrating that the extension from first-order to higher-order primitive recursion changes the class of efficiently computable functions, not just the class of computable functions. These results align with Moschovakis’s framework of recursors, where algorithms are distinguished from the functions they compute: while the minimum function remains the same mathematical object, the primitive recursive algorithms computing it constitute distinct recursors with measurably different computational properties than their higher-order counterparts.
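The contrast can be sketched in Haskell over unary natural numbers (my own illustration, not Colson’s formal setting): the first definition descends on both arguments in lockstep, exactly the simultaneous examination that first-order primitive recursion forbids; the second uses only iteration (a special case of primitive recursion) on its first argument, but at the function type, and under lazy evaluation it also runs in O(min(n, p)) steps, in the spirit of Colson’s higher-order result.

data Nat = Z | S Nat

-- General recursion on both arguments at once: stops as soon as
-- either argument runs out, after min(n, p) steps.
minNat :: Nat -> Nat -> Nat
minNat Z _ = Z
minNat _ Z = Z
minNat (S n) (S p) = S (minNat n p)

-- Iteration on a natural number: the only recursion used below.
foldNat :: r -> (r -> r) -> Nat -> r
foldNat z _ Z     = z
foldNat z s (S n) = s (foldNat z s n)

-- Recursion on the first argument only, but at result type Nat -> Nat.
minHO :: Nat -> Nat -> Nat
minHO = foldNat (\_ -> Z) step
  where
    step rec p = case p of
      Z    -> Z
      S p' -> S (rec p')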
Colson’s analysis of primitive recursive sequentiality received its name from Thierry Coquand, who termed it the “theorem of ultimate obstination” in his 1992 direct proof [47]. The theorem states that, once a primitive recursive function begins to consume one of its arguments, it exhibits ultimate obstination (it cannot adaptively switch to examining another argument, remaining stubbornly committed to its initial choice). This rigidity explains why certain seemingly simple functions, such as the minimum of two numbers, cannot be computed with optimal efficiency in the primitive recursive framework. The obstination property reveals a fundamental intensional constraint: it characterizes not what functions can be computed but how the computational process must unfold. This exemplifies Moschovakis’s central insight that algorithms possess properties independent of both the functions they compute and their machine implementations. The recursor’s structure imposes constraints that transcend both its extensional behavior and its physical realization.
The theorem of ultimate obstination thus reveals how mathematical constraints in a computational framework produce concrete intensional differences. Colson’s finding that primitive recursive functions cannot optimally compute even simple functions like the minimum, together with Coquand’s constructive proof that these functions’ behavior can be predicted from finite information, demonstrates that the primitive recursive framework imposes structural limitations that are neither properties of the computed functions nor implementation artifacts. These limitations emerge directly from the framework’s recursive organization—a pure instance of intensional constraint shaping computational possibility.
These results by Colson and Coquand exemplify a broader principle—that identical extensional behavior can mask fundamentally different computational processes. This principle has been explored from different angles, such as Fredholm’s [48] focus on the inherent asymmetry of primitive recursive constructions, which reinforces that algorithmic choices reflect structural commitments determining not just what can be computed but how efficiently it can be computed.

3.1.2. Programming Framework “Viewed on the Horizon”

Similarly to Carnapian clarification, problem specification, program specification, and algorithm design are conducted with a particular programming framework “viewed on the horizon.” Sometimes, this framework is an existing one, but the process might also lead to a new way of thinking that requires new tools. This anticipatory relationship between problem conception and programming framework determines intensional differences between programming languages.
Different programming approaches fundamentally shape how programmers conceptualize problems. Logic-based languages like Prolog encourage focus on what the solution should look like: programmers describe the rules and constraints that define a correct answer. Procedural languages like C emphasize how to reach the solution: programmers write step-by-step instructions for the computer to follow, addressing concerns like memory management and execution order.
Languages also vary in what they require programmers to make explicit. Haskell’s type system demands specification of input and output types. If a function takes a list of integers and produces a string, this must be declared explicitly. Python, by contrast, permits more relaxed approaches where functions can be written without specifying whether they take numbers or text, leaving such assumptions implicit until runtime.
The computational tools a language provides determine how solutions can be expressed. Languages with built-in pattern matching, such as Haskell, naturally encourage breaking problems down into cases—for instance, handling an empty list differently from a non-empty one. Languages emphasizing loops, like Java, promote iterative thinking, where operations repeat until a condition is met. Languages supporting recursion facilitate solutions where functions call themselves with progressively simpler inputs. These intrinsic constraints of programming languages lead to genuinely different algorithmic approaches to identical problems.
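A minimal Haskell sketch of my own illustrates the two preceding observations at once (the function name and strings are arbitrary): the type signature must be stated explicitly, and the definition proceeds by cases, handling the empty list separately from a non-empty one.

-- Explicit type: a list of integers in, a string out.
describe :: [Int] -> String
describe []    = "empty list"
describe (x:_) = "starts with " ++ show x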
Industrial control languages like Structured Text, used to program Programmable Logic Controllers (PLCs), must accommodate hardware constraints, such as scan cycles and real-time requirements. In these systems, the entire program executes repeatedly—often hundreds or thousands of times per second—within fixed time bounds determined by the underlying hardware and communication protocols like CAN bus. This cyclical execution model fundamentally differs from traditional programming, where programs run to completion, forcing programmers to think in terms of state machines and time-dependent behaviors rather than sequential procedures.
When Alan Kay coined the term “object-oriented programming” during graduate school in 1966 or 1967, his central insight was to use encapsulated mini-computers in software that communicated via message passing rather than direct data sharing [39]. As Kay later explained, “I thought of objects being like biological cells and/or individual computers on a network, only able to communicate with messages” [40]. This biological metaphor for computation led to Smalltalk’s development and later influenced C++, fundamentally altering how programmers conceptualize and structure code.
Similarly, when Joe Armstrong joined Ericsson’s Computer Science Laboratory in 1985, he confronted the challenge of building fault-tolerant telephone systems that could never fail—systems serving hundreds of thousands of users where brief outages meant enormous losses [49]. Armstrong’s solution emerged from a counterintuitive insight: rather than attempting to prevent all failures, build systems that handle failures gracefully through isolated processes and supervisor hierarchies. As Armstrong described, research beginning in 1981 to find better ways to program telecom applications “resulted in the development of a new programming language (called Erlang).” This approach generated Erlang’s famous “let it crash” philosophy, representing an entirely novel approach to fault-tolerant distributed systems.
Data structure design is another place where intensional differences become apparent. Languages differ in how data is represented: some support user-defined data types with named fields (like structs in C), pattern matching over algebraic data types (like Haskell), or aggregate types built from primitive collections (like dictionaries in Python). Others rely on more generic containers, such as arrays or hash tables. Even when representing the same logical content, the available structures—and how they are defined, accessed, and transformed—differ significantly. These differences affect how data is individuated, structured, and manipulated, thereby affecting its status as a computational object.
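As a small illustration, the same logical record (a hypothetical point-of-sale entry) might be an algebraic data type in Haskell, a struct in C, or a dictionary in Python; the Haskell sketch below shows named fields and a sum type for the payment method:
data Payment = Cash | Card String   -- card number kept as a plain String here
  deriving (Show)

data Sale = Sale
  { item     :: String
  , quantity :: Int
  , payment  :: Payment
  } deriving (Show)

exampleSale :: Sale
exampleSale = Sale { item = "coffee", quantity = 2, payment = Cash }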
Similarly, in practice, the program specification is iteratively refined and completed throughout the data structure design phase—including when interactions with memory and states are defined—and further in the program implementation using a specific programming language in a specific hardware environment.
In an idealized process, the choice of the most appropriate programming language would be made after the clarification phase; in practice, this decision is initiated—if not made—before or during this phase. The same applies to the choice of computational models. As Sieg [50] notes, Church clarified the idea of step-by-step effectiveness not in terms of physical symbol manipulation but abstract functions over the natural numbers—specifically as recursive functions defined as a subclass of arithmetic functions based on a prior understanding of what it means to “compute” with numbers. The programming language chosen carries with it similar assumptions: it determines what operations are primitive, what can be abstracted or composed, and what kinds of control and data structures are most naturally expressed.

3.2. Software Implementation

Returning to an idealized software development workflow, the second stage, corresponding to the specification stage of Carnapian explication, involves code implementation in a particular programming language. As Turner observes,
natural language programs have no direct implementation, and, ideally, structural descriptions must provide the input for a manufacturing process that is as mechanical as possible. For this we require programs that are expressed in an implemented language ([31], p. 49)
Let me provide a few short examples to illustrate the different intensional characteristics of the styles described above, beginning with a simple problem: checking whether a number appears in a list.
In Haskell, which is a functional language, the algorithm is expressed as follows (Listing 1):
Listing 1. Membership test in Haskell.
member :: Eq a => a -> [a] -> Bool
member _ []     = False
member x (y:ys) = (x == y) || member x ys
Here, recursion and pattern matching define the logic. The structure of the data drives the computation. The program is a pure function, with no mutable state, and its control flow is expressed declaratively.
In Java, which is object-oriented, it proceeds as follows (Listing 2):
Listing 2. Membership test in Java.
public boolean contains(int[] arr, int target) {
    boolean found = false;
    for (int i = 0; i < arr.length && !found; i++) {
        if (arr[i] == target) found = true;
    }
    return found;
}
The control flow is explicit: I tell the computer what to do step by step. The iteration is expressed through a loop with an explicit state variable (found) that tracks whether the target has been located. The loop condition checks both the array bounds and this state variable, making the termination condition fully explicit. If this were part of a class, it might be wrapped in a method of a collection object.
Finally, in Prolog, which belongs to the logic programming style, the solution is expressed as follows (Listing 3):
Listing 3. Membership test in Prolog.
member(X, [X|_]).
member(X, [_|T]) :- member(X, T).
This code does not say how to find X; it states two logical rules: one for success (X is the head) and one for recursion (X is in the tail). The system conducts the work of traversing the list and backtracking as needed.
Each of these programs solves the same problem extensionally: they all decide whether an element is in a list. Intensionally, however, they could hardly be more different. The Haskell version emphasizes recursion and the decomposition of structure. The Java version stresses explicit control flow and mutable state. The Prolog version encodes logical rules and relies on automated search. The style shapes the way programmers think, what tools they reach for, and what kinds of errors or ambiguities are likely to arise.
These intensional differences become apparent when programmers need to find and fix errors. In the Haskell version, many mistakes are caught before the program even runs. If you try to search for a number in a list of words, the type system stops you immediately. The Java version requires different debugging strategies: programmers must trace through each loop iteration to see where things go wrong. The Prolog version presents yet another challenge: understanding why the system chose certain logical paths over others. These distinct debugging experiences reflect how each language conceptualizes computation itself.
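As a concrete (hypothetical) instance of the first kind of error catching, the call below is rejected before the program runs, because the type of member from Listing 1 forces the element and the list elements to share a single type (the sketch assumes member is in scope):
-- Rejected at compile time: a String cannot be looked up in a list of Ints.
-- badCall = member "three" [1, 2, 3 :: Int]

-- Accepted: both arguments agree on the element type.
goodCall :: Bool
goodCall = member (3 :: Int) [1, 2, 3]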

3.3. Hardware Implementation

We must be careful when using the term “intensional” in the hardware context since intensionality—as opposed to extensionality—is properly a property of expressions or processes, not of hardware varieties themselves. Nevertheless, programming languages create a distinctive feedback loop between software and hardware: software can influence hardware design (as Kay [51] famously said, “People who are really serious about software should make their own hardware”), which in turn affects software development. This interplay has two consequences. First, intensional differences in how functions are expressed in code can have real-world effects on hardware performance and design. Second, hardware constraints can shape software design decisions and language affinities.
There are several cases in which hardware was explicitly created or optimized for a particular software or programming model. Lisp Machines in the 1970s and 1980s were designed around the Lisp language, embedding garbage collection and list operations directly into the processor. Sun developed the picoJava processors, which could run Java’s bytecode instructions directly in hardware rather than relying on the Java Virtual Machine to interpret them. Google’s Tensor Processing Units (TPUs) were developed to accelerate TensorFlow, incorporating matrix-multiplication arrays specialized for neural network computation. More broadly, graphics processing units (GPUs) were shaped by the requirements of graphics APIs such as OpenGL and DirectX, optimizing their architecture for the kinds of linear algebra operations needed in rendering software. These examples illustrate how the structure of a programming language or software environment can directly motivate and constrain hardware design.
The feedback loop also works in the opposite direction, where new hardware architectures have shaped how software was written. A historical example is the early IBM System/360 (1960s): its hardware design introduced a uniform instruction set across a whole family of machines but with specific word sizes and memory models. This influenced compiler writers for COBOL and Fortran, who adjusted language features and optimizations to exploit the new architecture. In this case, software adapted to hardware constraints and possibilities, showing the reciprocal nature of the hardware–software relation.
The implementation of the program in a physical device is the final stage of the programming workflow. At this point, the abstract symbolic construct—refined through layers of specification, algorithmic design, and linguistic encoding—is instantiated as a causal process within a physical system. This is the point at which computation ceases to be hypothetical or simulated and becomes operational: the program is compiled, deployed, or executed on a concrete machine. In this sense, the workflow converges with the perspective of computation as a physically embodied process, where symbolic operations are grounded in mechanisms such as silicon logic gates, memory hierarchies, and instruction pipelines. From the perspective of intensional analysis, this stage reveals how earlier representational and structural choices—at the level of language and algorithm—shape the real-world behavior and efficiency of the computational system.
At the same time, intensional differences also emerge at the level of code implementation through surface-level aspects such as syntax, layout, naming conventions, and code organization. Some languages rely on strict indentation to denote structure; others use explicit parentheses or keywords. Some favor concise expressions, while others emphasize verbosity. These features affect how the program is presented as text, how its internal structure is made manifest, and how it is interpreted by human readers. Although they do not alter the extensional behavior of the program, they do affect its identity as a written object and how it encodes its semantics.

4. Broader Perspective on Intensionality in Computation

In this final section of the paper, I extend the reflection on intensionality in computer science to a broader philosophical perspective on computation—encompassing formal, social, and conceptual dimensions. Discussions about extra-extensional features of programming languages are widespread in philosophy, social philosophy, and the cognitive sciences and deserve a place in this paper.
The term intensionality refers to different but related meta-concepts across disciplines. In philosophy, it is most often associated with the tradition of the philosophy of language (Frege, Carnap, etc.). In this context, intensionality refers to the mode of presentation or the way of referring to something rather than to its reference (denotation). Recall the example of the morning star and the evening star, both of which refer to Venus, presented in the first section of this paper. This understanding of intensionality is formally modeled in possible-world semantics: intensions are functions from possible worlds to extensions (see Carnap [52] and Montague [53]). An instructive case study for intensionality, so understood, involves contexts that do not preserve the substitutivity of co-referential terms, most prominently propositional-attitude reports. For example, “Lois believes that Superman can fly” might be true while “Lois believes that Clark Kent can fly” is false, even though Clark Kent is Superman, so the expressions “Clark Kent” and “Superman” refer to the same object. The study of intensionality in the philosophy of language aims to account for meaning beyond reference; it is, as mentioned, especially well developed in contexts involving knowledge, belief, modality, and counterfactuals.
In my research on intensional differences between models of computation (e.g., Quinon [6]; see also Antonutti Marfori and Quinon [54]), this is exactly the tradition that I referred to. When this tradition is applied to problems related to computing, intensionality refers to differences in how functions or programs are specified, even when their extensional behavior (input–output relation) is the same. For example, a Turing machine and a recursive function can compute the same function but differ in the steps, structure, or representation used. As discussed earlier in this paper, Moschovakis’s formal theory of algorithms explicitly builds on Frege’s distinction between Sinn (sense) and Bedeutung (denotation), treating algorithms as recursors whose identity captures the computational procedure (sense) rather than merely the function computed (denotation). This philosophical framework finds concrete support in Martin-Löf’s work on constructive semantics [55], who refines Dummett’s suggestion that “sense will be related to semantic value as a programme to its execution” ([56], p. 125). Martin-Löf clarifies that the crucial relationship is between a program and the result of its execution ([55], p. 502), emphasizing that what matters is not the computational process itself but the value that emerges from that process. To illustrate this distinction concretely, when we write 2 + 2 in a program, this expression has a sense (the computational procedure for addition) that is distinct from its reference (the final result 4 or s(s(s(s(0)))) in canonical form) ([55], pp. 503–505). The passage from sense to reference happens through computation—by following the definitions and reduction rules step by step. This directly reinforces how programming languages inherit intensional properties from their underlying computational models, and why these differences in computational procedure (sense) are just as important as the final results (reference).
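Martin-Löf’s 2 + 2 example can be made concrete in the notation of a functional language. The following is a minimal sketch of my own, using Peano-style numerals:
data Nat = Zero | Succ Nat deriving (Show)

add :: Nat -> Nat -> Nat
add Zero     n = n
add (Succ m) n = Succ (add m n)

two :: Nat
two = Succ (Succ Zero)

-- The expression `add two two` is the sense: a recipe for computing.
-- Its reference is the canonical value it evaluates to,
-- Succ (Succ (Succ (Succ Zero))), i.e., s(s(s(s(0)))).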
In this final part of the paper, I examine additional methodological and epistemic commitments that influence the structure of computation beyond the function–algorithm distinction. These commitments arise from both the “nature” of computation—the formal constraints and theoretical limits inherent in computational systems—and its “nurture”—the social, institutional, and cognitive factors that shape how programming languages are designed, taught, and used. The term “intensional” appears in various contexts within computer science, some of which may require further investigation to clarify their relationship to intensionality, understood as a matter of “how” computation proceeds. I begin with Martin-Löf’s better-known contribution to programming language theory: his development of type systems as a foundational framework for structuring computation. While his work on constructive semantics illuminates the sense/reference distinction, his type theory provides another lens through which intensional properties manifest in programming languages.
The use of type systems reveals an additional intensional structure in programming languages. Types are used to classify the elements of a programming language, enabling meta-level reasoning about programs (see Martin-Löf [57]). Type-based analysis also reveals a theoretical correspondence known as the Curry–Howard isomorphism, which in certain type systems establishes that programs can be viewed as proofs and types as propositions; that is, a program of a given type corresponds to a proof of the corresponding proposition, and executing (normalizing) a program corresponds to normalizing a proof (for example, the type A → B can be read as the logical implication “if A holds, then B holds”). While this correspondence holds precisely only for specific type systems (such as the simply typed lambda calculus and intuitionistic logic) and has limited direct applicability to practical programming languages like ML or Java, it illustrates how type systems encode logical structure within programming languages.
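To illustrate the correspondence in the simply typed setting, the Haskell term below can be read as a proof that implication is transitive: its type says “if A implies B and B implies C, then A implies C.” This is a standard textbook example, not drawn from any source cited here:
-- A proof of (A -> B) -> (B -> C) -> (A -> C), read via Curry–Howard
compose :: (a -> b) -> (b -> c) -> (a -> c)
compose f g = \x -> g (f x)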

4.1. Types as Intensional Aspects of Programming Languages

A range of analytically significant intensional differences between programming languages emerge when their elements are organized or interpreted through a type system. In programming, a type system is a formal system that defines how expressions are classified according to the types of values they produce, and it specifies the rules that determine which operations are permitted on those values. A type—such as int, bool, or list of strings—is a label that indicates what kind of value an expression represents. Types are not defined in isolation; they are part of a type system, which provides the rules that determine how types can be assigned, combined, and checked in programs. For example, a type system can prevent a programmer from adding a number to a string or calling a function with the wrong kind of input. While type systems are often seen as technical tools for avoiding errors, their design can also have a profound effect on the structure of programs and the ways in which programmers think about data and computation (see Pierce [58], p. 4). As Harper writes, “a programming language is a theory of programming,” and its type system “makes manifest the principles on which the language is based” ([59], p. 3). Different type systems, in this sense, reflect different philosophies of programming, just as Carnapian explications reflect divergent scientific objectives.
As discussed earlier, the evolution of types from Church’s simply typed λ -calculus to modern programming practice illustrates how a construct originally introduced to ensure logical well-formedness has taken on a wider range of computational roles. Type systems were first formalized in the early 20th century as a means of avoiding logical paradoxes such as Russell’s paradox [60], which posed a serious threat to the foundations of mathematics. From this origin in foundational logic, types developed into standard tools in proof theory and formal systems [61,62], with the key contributions including Russell’s ramified theory of types [63], Ramsey’s simplification [64], and Church’s simply typed λ -calculus [65]. In programming languages, types now serve conceptual as well as technical roles, and, by constraining how values can be used, composed, or combined, they exemplify intensional constructs—structures that reflect not just what a program does but how it is organized.
To illustrate this point, consider two programming languages that allow the construction and manipulation of a list of integers. In Haskell, a functional language, you might declare myList :: [Int] and define myList = [1, 2, 3], stating that myList is a list of integers. In Python, the same structure might appear as myList: List[int] = [1, 2, 3], using a type hint to indicate that the list contains integers. On the surface, these types appear to be extensionally equivalent: they both allow the same operations (such as accessing or appending elements) and produce the same results. Intensionally, however, they can differ in important ways. One language might require a formal guarantee (such as a proof, or a suitably refined type) that the list is not empty before allowing access to its first element, while another might allow such access without any safety check, risking a runtime error if the list is empty (see [66]).
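In Haskell, such a non-emptiness guarantee can be pushed into the type itself, as in this sketch using the standard Data.List.NonEmpty module (the function name firstElement is mine):
import Data.List.NonEmpty (NonEmpty (..))
import qualified Data.List.NonEmpty as NE

firstElement :: NonEmpty Int -> Int
firstElement = NE.head   -- total: the empty case is ruled out by the type

-- By contrast, Prelude.head :: [a] -> a compiles on a possibly empty
-- list and fails only at runtime if the list turns out to be empty.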
Type systems can be understood as encoding the internal logic of a programming language. They determine which distinctions must be made explicit—such as whether physical units like meters and seconds are tracked—what operations are guaranteed to be safe, and what kinds of abstraction or generalization the language permits. These are not merely technical conveniences but conceptual decisions that shape how programmers think, write, and reason about code. As Cardelli puts it, “type systems are engineering tools that enforce clean interfaces and design intentions,” but they also “embody and enforce methodological principles” ([67], p. 222).
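A minimal sketch of how a type system can track physical units (the newtype names are mine, chosen only for illustration):
newtype Meters  = Meters Double
newtype Seconds = Seconds Double

speed :: Meters -> Seconds -> Double
speed (Meters d) (Seconds t) = d / t   -- metres per second, as a plain Double

-- speed (Seconds 10) (Meters 3) is rejected at compile time:
-- the arguments are wrapped in the wrong types.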
Such differences become especially apparent when expressive, proof-oriented languages such as Haskell or Idris are compared with dynamically typed languages such as JavaScript or with gradually typed ones such as TypeScript. In the former, type systems encode detailed invariants, behavioral constraints, and logical properties; in the latter, types are either absent or serve as flexible development aids, with many checks deferred to runtime. These differences are not merely technical but philosophical: they reflect contrasting conceptions of programming—whether it is closer to formal reasoning or to incremental construction. In this sense, differences between type systems reflect deliberate design choices, just as different explications reflect different explanatory goals.
Modern developments such as effect systems, unsound types, and externally generated types (e.g., those derived from APIs or ontologies) further illustrate this intensional diversity. As Petricek [68] notes, attempts to define type once and for all tend to obscure the multiplicity of practices and purposes that types now serve. Rather than being reducible to static sets of values, types have become dynamic instruments of conceptual articulation—evolving in response to new needs and methodologies.
Petricek [68] also argues that the concept of type resists a universal formal definition precisely because it functions as a boundary object—adapted by different communities to serve different methodological ends. In some contexts, types are used to enforce logical soundness; in others, they facilitate tooling, serve as documentation, or act as epistemic interfaces to external systems, as in the case of type providers. These evolving uses highlight types as intensional constructs, whose meaning is shaped by practice and purpose rather than extensional interpretation alone. The societal influence on systems of types that Petricek describes leads naturally to the following topic of discussion.

4.2. Social Context of Programming Languages

A number of philosophical accounts—although not explicitly formulated in terms of intensionality or directly concerned with intensional differences between programming languages—offer insights that are highly relevant to this topic. These accounts point to human factors, such as social, psychological, or cognitive aspects, that influence how programming languages are designed, understood, and used. In this section, I outline the positions of several authors whose work suggests that such human-centered dimensions of programming practice give rise to, or at least help to explain, intensional differences. I do not present these discussions in detail but rather aim to indicate the breadth and character of this line of inquiry.
Over the past few decades, there has been a remarkable increase in the integration of the social sciences into computer science. Universities have established interdisciplinary programs and research centers that bring together computer scientists, philosophers, anthropologists, and sociologists to study the human aspects of computing. For example, the Massachusetts Institute of Technology (MIT) houses the Program in Science, Technology, and Society, which examines the social and cultural dimensions of science and technology. Similarly, Stanford University offers the Symbolic Systems Program, which combines computer science with philosophy, linguistics, and psychology to study cognition and computation. These initiatives reflect a wider recognition that an understanding of computing needs a strong humanities component.
In his paper, Eden [69] studies disagreements in computer science regarding how programs should be understood, how knowledge about them should be gained, and what kind of discipline computer science really is. He shows that these disagreements often result from deeper assumptions that are not always made explicit. To make sense of these differences, Eden introduces three main ways of thinking about computing, which he calls paradigms: the rationalist paradigm, associated with theoretical computer science, which treats programs as mathematical objects subject to deductive reasoning; the technocratic paradigm, prevalent in software engineering, which views programs as technical artifacts and emphasizes empirical testing and practical reliability; and the scientific paradigm, characteristic of artificial intelligence, which treats programs as analogous to mental processes and supports both formal and empirical methods. These paradigmatic orientations influence how programs are conceptualized, what kinds of knowledge are sought, and what methods are considered to be appropriate. Eden’s analysis shows that philosophical assumptions are embedded not only in foundational models but also in the practical development and use of programming languages.
Fetzer [70] criticizes purely formal views of programming, arguing that programs are not just abstract logical structures but intensional constructs shaped by human goals, choices, and social contexts. He builds on earlier work by DeMillo, Lipton and Perlis [71] that questions whether formal proofs are really how programs—or even mathematical theorems—are accepted in practice. They argue that the acceptance of mathematical proofs and the correctness of programs depend on social processes rather than formal rigor alone. Fetzer takes this further by distinguishing between two types of verification. In mathematics, absolute verification is possible because proofs follow strict logical rules. However, programs work in the real world: they depend on compilers, hardware, and user environments. Because of this, they can only be relatively verified—that is, tested under certain assumptions, with results that may not always hold. Fetzer sees programs as causal models, meaning that their correctness depends not only on logic but also on how they behave when executed. He concludes that program verification is not like proving a theorem; it is more like testing a hypothesis in science. This makes program verification fallible and context-dependent rather than certain and universal.
Many other authors argue along similar lines. Colburn [72] examines the rhetorical and syntactic structure of programming languages, showing how naming conventions, syntactic design, and code organization affect comprehensibility and cognitive access. Humphreys [73] views software systems, including programming languages, as epistemic tools that mediate and extend human reasoning. Smith and Ceusters [74] emphasize that software artifacts acquire meaning and function through their position within broader social and technical contexts. Taken together, these accounts highlight aspects of programming languages that go beyond their extensional behavior, focusing instead on how programs are written, understood, and embedded in practice. In this sense, they support the motivation for studying intensional differences, even if their arguments come from different philosophical backgrounds.
Concrete historical cases further illustrate this point. For example, the design of COBOL in the 1950s was driven by the need to make the code readable by business managers, not just trained mathematicians or engineers—prioritizing English-like syntax and readability over computational elegance [75,76]. By contrast, ALGOL, developed around the same time, was aimed at scientific communities and reflected formalism and compact mathematical expression [77]. These design choices were not merely syntactic preferences but reflected distinct assumptions and priorities about who programs, how, and for what purpose. More recently, languages such as Python have gained popularity in part because of their alignment with contemporary pedagogical and industrial values—simplicity, accessibility, and rapid prototyping—while languages such as Haskell embody intensional commitments to purity, immutability, and compositional reasoning. These differences reflect not only technical goals but also different communities of practice, educational philosophies, and industry norms.
Institutional forces actively shape which intensional features become dominant in programming language development. Universities influence this through curriculum decisions: when MIT adopted Python for its introductory computer science course, it reinforced Python’s intensional commitment to readability and simplicity over performance optimization [78]. Corporate backing determines language trajectories: Google’s investment in Go reflected their need for concurrent programming and fast compilation in large-scale systems [79], while Facebook’s development of React popularized functional programming patterns in JavaScript ecosystems [80]. Government and military funding historically privileged languages with formal verification capabilities, as seen in Ada’s design requirements for safety-critical systems, embedding intensional features that prioritize correctness proofs over ease of use [81]. The rise of data science has led funding agencies like the National Science Foundation to support R and Julia development—languages whose intensional characteristics align with statistical notation and mathematical syntax rather than general-purpose programming [82]. These institutional decisions create feedback loops where certain intensional features become standard and others marginalized. What we perceive as natural or optimal programming language characteristics often reflects the accumulated influence of powerful institutions rather than purely technical considerations.
This perspective aligns with the ideas of Petricek [68], already mentioned in the discussion of types: types evolve to meet the needs of different communities and serve different epistemic and practical goals. The very idea of what a “program” should look like, and what it should express, is shaped by similar contexts. Intensional differences are thus not merely artifacts of syntax or semantics but reflect historically and socially embedded conceptions of what computing is and ought to be.
These social and institutional forces shape both abstract design decisions and practical coding practices. Python’s emphasis on readability, reinforced by its widespread adoption in education, influences how programmers structure and explain their code. The formal verification requirements in safety-critical domains have created programming styles centered on provable correctness. This connection between social context and practical comprehension provides essential background for examining how intensional properties support or hinder human understanding of programs.

4.3. Intensional Properties and Human Comprehension

The intensional differences between programming languages have direct implications for code explainability—how readily programmers can understand, interpret, and communicate what a program does. This dimension of intensionality connects to epistemological questions about knowledge transmission in technical practices.
Consider how the same algorithm—checking list membership—communicates differently across paradigms. The Haskell version uses pattern matching (decomposing data structures into cases) and recursion (functions calling themselves), making the algorithmic structure explicit through these mechanisms. The Java version employs loops (repeated execution of code blocks) with an explicit state variable that tracks progress, representing computation as a sequence of state changes. The Prolog version states logical rules without specifying the execution order, relying on unification (matching logical terms) and backtracking (systematic search through possibilities).
These intensional differences affect cognitive accessibility. Type annotations—explicit declarations of what kind of data functions accept and return—serve as embedded documentation. Pattern matching makes data decomposition visible by showing how complex structures break into simpler parts. Declarative syntax allows programs to express what should be computed rather than how, similar to stating mathematical relationships versus calculating procedures. Conversely, implicit type conversions (automatic transformation between data types) can obscure what transformations occur, while deeply nested control structures make execution paths difficult to trace.
The relationship between intensional structure and comprehension extends to programming communities as intensional properties shape entire epistemic communities. Each programming paradigm develops its own explanatory vocabulary, pedagogical methods, and standards of clarity. Functional programmers explain algorithms through recursive decomposition and type transformations—describing computation as compositions of mathematical-like functions. Object-oriented programmers use metaphors of interacting entities with internal states—conceptualizing programs as collections of cooperating agents. Logic programmers think in terms of constraints and inference—viewing computation as deriving conclusions from stated facts. This parallels the way different philosophical schools develop distinct conceptual frameworks. Just as phenomenologists and analytic philosophers approach consciousness differently, functional and imperative programmers conceptualize computation through fundamentally different lenses. The intensional structure of a language thus becomes part of its community’s epistemic infrastructure.

4.4. Limiting Results and the Intensional Structure of Programming Languages

A final area in which intensional differences between programming languages become apparent is in the context of formal results that set limits on what can be computed, analyzed, or optimized. These results do more than describe technical constraints; they illustrate how the internal structure of algorithms and representations (i.e., their intensional aspects) is often essential for meaningful reasoning about programs.
One of the most fundamental results in computability theory, Rice’s theorem [83], states that any non-trivial semantic property of the function computed by a Turing machine is undecidable. This means that there is no general way to determine whether an arbitrary program has a given behavioral property (e.g., whether it halts on all inputs, whether it computes a constant function, etc.). From a strictly extensional perspective—which looks only at input–output behavior—this imposes a dramatic limitation: many seemingly natural questions about programs are provably unanswerable. The only way around this barrier in practice is to rely on intensional descriptions: to examine the internal structure, composition, and syntax of the program itself. As Copeland [84] and Turner [85] argue, such limitations reveal the philosophical importance of the algorithmic level of description as distinct from mere input–output mappings.
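Stated schematically, in standard notation (my restatement; here $\varphi_e$ is the partial computable function computed by the program with index $e$, and $\mathcal{PC}$ is the class of all partial computable functions):
\[
\emptyset \neq \mathcal{P} \subsetneq \mathcal{PC} \ \Longrightarrow\ \{\, e \in \mathbb{N} \mid \varphi_e \in \mathcal{P} \,\} \ \text{is undecidable.}
\]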
A related epistemic constraint arises from the No Free Lunch theorems in optimization theory [86]. These theorems show that, averaged over all possible problem instances, no optimization algorithm is better than any other. In the absence of assumptions about the structure of the problem domain, there is no universally optimal strategy. This implies that algorithmic efficiency and effectiveness cannot be evaluated in the abstract; they depend crucially on how the problem is represented and how the search space is structured—again, intensional factors. As Shagrir [87] points out, such results reinforce the view that computation is a context-sensitive process in which representational choices carry significant theoretical weight.
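A schematic restatement of Wolpert and Macready’s central result [86]: for any two search algorithms $a_1$ and $a_2$ and any number $m$ of distinct evaluations,
\[
\sum_{f} P\big(d^{y}_{m} \mid f, m, a_1\big) \;=\; \sum_{f} P\big(d^{y}_{m} \mid f, m, a_2\big),
\]
where the sum ranges over all objective functions $f$ on a finite search space and $d^{y}_{m}$ denotes the sequence of cost values observed after $m$ evaluations. Averaged over every possible problem, no algorithm gains an advantage.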
Further reinforcement of this theme comes from Blum’s Speedup Theorem [88], which shows that, for certain computational problems, there is no single “fastest” algorithm: one can construct an infinite sequence of programs, each asymptotically faster than the last, so that no optimal algorithm exists. This striking result undermines the idea that program efficiency can be captured solely in terms of extensional performance, and it suggests that algorithmic performance is inherently bound up with intensional choices about coding, abstraction, and reuse. As Gandy [18] and Turner [89] note, such phenomena highlight how computational practice is shaped by human-centered conventions and constructions, not just abstract function mappings.
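Schematically (a restatement in a standard form, with $\Phi_i$ a Blum complexity measure—e.g., running time—of the program with index $i$): for every total computable function $r$ there is a computable function $f$ such that, for every program $i$ computing $f$, there is another program $j$ computing $f$ with
\[
r\big(n, \Phi_j(n)\big) \le \Phi_i(n) \quad \text{for all but finitely many } n .
\]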
Finally, the undecidability of program equivalence—the problem of determining whether two programs compute the same function—brings the point home (see [83,90]). While extensionally equivalent programs may behave identically on all inputs, the algorithmic recognition of this fact is generally undecidable. Once again, we are forced to examine the intensional form of programs in order to make even basic comparative claims. Turner ([85]) has argued that this gap between intensional description and extensional behavior is not a defect of programming languages but a necessary condition of expressive computational practice.
Taken together, these theorems illustrate that the intensional features of programs—how they are written, structured, and represented—are not just pragmatic details or syntactic sugar. They are deeply embedded in the theoretical and philosophical landscape of computation. Without attention to intensional structure, the practice and analysis of programming would be not only impoverished but in many ways impossible.

5. Conclusions

This paper has examined intensional differences between programming languages through multiple lenses—from their inheritance of properties from foundational models of computation through the stages of an idealized programming workflow to their embedding in broader epistemic and social contexts. The analysis reveals that intensionality in programming languages operates at multiple levels simultaneously: in how algorithms are structured beyond their input–output behavior, in how type systems constrain and organize computation, and in how institutional and social forces shape language design and adoption.
The Carnapian framework of explication has proven particularly valuable for understanding these differences. Just as models of computation can be seen as different explications of the informal concept of effective procedure, programming languages represent different explications of what it means to program—each reflecting distinct theoretical priorities, practical constraints, and conceptual commitments. This perspective moves beyond viewing intensional differences as mere implementation details or stylistic preferences, revealing them instead as fundamental to how computation is conceptualized, expressed, and realized.
Several key findings emerge from this analysis. First, the function–algorithm distinction, formalized through Moschovakis’s theory of recursors, demonstrates that computational procedures possess intensional properties independent of both the functions they compute and their machine implementations. Second, limiting results such as Rice’s theorem and Colson’s theorem of ultimate obstination show that intensional structure is not optional but necessary for meaningful reasoning about programs. Third, the evolution from theoretical models to practical programming languages reveals a consistent pattern where intensional commitments made at the foundational level persist and amplify in practical contexts.
The paper opens new avenues for investigation in the area of programming. The relationship between intensional properties and code explainability deserves deeper exploration. How do different intensional features support or hinder human understanding of programs? The bidirectional influence between hardware and software design raises questions about how physical constraints shape conceptual possibilities. Moreover, the role of institutional forces in determining which intensional features become dominant suggests that our understanding of “natural” or “optimal” programming constructs may need fundamental reconsideration. A systematic study of low-level languages and their relationship to hardware would help to complete the picture of how abstract computations relate to physical processes.
Future work should also extend the analysis in several directions outside classical computing. I have deliberately focused on deterministic digital computation as embodied in traditional programming languages; this seemed a natural next step after addressing intensional differences between classical theoretical models of computing. The project will be further developed to examine stochastic methods in generative and predictive models, which constitute a fundamentally different computational paradigm requiring its own intensional analysis—one that accounts for probabilistic inference and emergent behaviors rather than discrete state transitions. Hansen and Quinon [91] provide a starting point for analyzing how stochastic methods differ from classical computation. This different way of structuring computation corresponds to separate intuitions about what “computing” means. Beyond machine learning and generative AI, the application of this framework to other emerging paradigms such as quantum computing, analog computing, and various forms of natural computation could reveal new forms of intensional structure.
Ultimately, this paper argues that intensional differences are not peripheral to our understanding of computation but central to it. Programming languages do more than compute functions: they embody ways of thinking about computation itself. As we extend this analysis to stochastic methods, quantum computing, and other non-classical paradigms, we will likely discover that each embodies fundamentally different conceptualizations of computation, yet through these differences we may also uncover what is common to all—approaching perhaps the absolute concept of computing. Recognizing and analyzing these intensional dimensions are essential for a complete understanding of what it means to compute.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this article.

Conflicts of Interest

The author declares no conflicts of interest.

Notes

1
It is important to distinguish intensional (with an s) from intentional (with a t). Intensional refers to structural or representational aspects—such as how a computation is expressed or organized—while intentional refers to the goals, purposes, or mental states of agents. The focus here is on intensional differences between formal systems and programming languages, not on the intentions of individual programmers.
2
An important way intensional differences identified during the clarification process manifest in formal models is through their relative expressiveness—differences in computational power under various constraints. See [13] on computational complexity, [14] on descriptive complexity, and [15] on implicit computational complexity.
3
The untyped λ -calculus, which Church needed to represent all computable functions, initially lacked proper mathematical semantics due to self-application paradoxes. Scott and Strachey’s denotational semantics (developed in the late 1960s and early 1970s) resolved this, with Scott’s domain theory providing the mathematical framework to handle recursion and self-application; see in particular [27].
4
In functional programming, the same expression always produces the same result when provided the same inputs—a property called referential transparency. By contrast, imperative programs have referential opacity: the same expression can produce different results depending on when it is executed and what state has been modified [35].
5
McCarthy explicitly borrowed the lambda notation from Alonzo Church’s λ -calculus, although he later noted that he “didn’t understand the rest of his book,” referring to Church’s broader theoretical framework, and thus “wasn’t tempted to try to implement his more general mechanism for defining functions” ([37], p. 218).
6
Pure lambda calculus achieves recursion through fixed-point combinators such as the Y combinator, discovered by Haskell B. Curry. The Y combinator is a function that takes a non-recursive function as input and returns a recursive version of it, enabling self-reference without requiring functions to have names. For example, to define the factorial recursively in pure lambda calculus, one must write something like Y (λf. λn. if n = 0 then 1 else n × f(n − 1)), where Y is the Y combinator.
7
Observe that, here, specification refers to Carnapian explication’s systematic concept development, distinct from the problem or program specifications mentioned in the previous paragraph.
8
Primitive recursive functions constitute a mathematically well-defined subclass of computable functions—those constructed exclusively through composition and primitive recursion operations, a restriction that guarantees termination for all inputs. In programming, there are different ways to handle function arguments. Imagine a function that takes two inputs. In one approach (call-by-value), both inputs are completely calculated before the function even begins its work—like requiring all ingredients to be fully prepared before starting to cook. In another approach (call-by-name), the function starts working immediately and only calculates each input value at the moment it is actually needed—like gathering ingredients only as the recipe calls for them. Colson examines primitive recursive algorithms under this second approach. The choice of evaluation strategy is fundamental in computer science because it can dramatically affect both the complexity and even the computability of algorithms. For primitive recursive functions specifically, this choice reveals deep structural properties: under call-by-name evaluation, their inherent sequential nature becomes a measurable limitation, while, under call-by-value, different constraints would emerge. This demonstrates that computational complexity is not just about the function being computed but how the computation unfolds.

References

  1. Cobham, A. The Intrinsic Computational Difficulty of Functions. In Logic, Methodology, and Philosophy of Science, Proceedings of the 1964 International Congress, Jerusalem, Israel, 26 August–2 September 1964; Bar-Hillel, Y., Ed.; North-Holland: Amsterdam, The Netherlands, 1965; pp. 24–30. [Google Scholar]
  2. Edmonds, J. Paths, Trees, and Flowers. Can. J. Math. 1965, 17, 449–467. [Google Scholar] [CrossRef]
  3. Pour-El, M.B.; Richards, J.I. Computability in Analysis and Physics; Springer: Berlin/Heidelberg, Germany, 1989. [Google Scholar]
  4. Moore, C. Recursion Theory on the Reals and Continuous-Time Computation. Theor. Comput. Sci. 1996, 162, 23–44. [Google Scholar] [CrossRef]
  5. Carnap, R. Logical Foundations of Probability, 2nd revised ed., 1962 ed.; University of Chicago Press: Chicago, IL, USA, 1950. [Google Scholar]
  6. Quinon, P. Can Church’s thesis be viewed as a Carnapian explication? Synthese 2021, 198, S1047–S1074. [Google Scholar] [CrossRef]
  7. De Mol, L. Generating, Solving and the Mathematics of Homo Sapiens: Emil Post’s Views on Computation. In A Computable Universe: Understanding Computation and Exploring Nature as Computation; Zenil, H., Ed.; World Scientific: Singapore, 2013; pp. 1–28. Available online: https://hal.univ-lille.fr/hal-01396500 (accessed on 5 September 2025).
  8. Post, E.L. Finite Combinatory Processes—Formulation 1. J. Symb. Log. 1936, 1, 103–105. [Google Scholar] [CrossRef]
  9. Church, A. An Unsolvable Problem of Elementary Number Theory. Am. J. Math. 1936, 58, 345–363. [Google Scholar] [CrossRef]
  10. Turing, A.M. On Computable Numbers, with an Application to the Entscheidungsproblem. Proc. Lond. Math. Soc. 1936, 42, 230–265. [Google Scholar]
  11. Kleene, S.C. λ-Definability and Recursiveness. Duke Math. J. 1936, 2, 340–353. [Google Scholar] [CrossRef]
  12. Kleene, S.C. Introduction to Metamathematics; North-Holland: Amsterdam, The Netherlands, 1952. [Google Scholar]
  13. Papadimitriou, C.H. Computational Complexity; Addison-Wesley: Reading, MA, USA, 1994. [Google Scholar]
  14. Immerman, N. Descriptive Complexity; Springer: New York, NY, USA, 1999. [Google Scholar]
  15. Dal Lago, U.; Martini, S. Implicit Computational Complexity: An Introduction. Theor. Comput. Sci. 2008, 399, 191–199. [Google Scholar]
  16. Soare, R.I. Computability and Recursion. Bull. Symb. Log. 1996, 2, 284–321. [Google Scholar] [CrossRef]
  17. Shapiro, S. Acceptable Notation. Notre Dame J. Form. Log. 1982, 23, 14–35. [Google Scholar] [CrossRef]
  18. Gandy, R. Church’s Thesis and Principles for Mechanisms. In The Kleene Symposium; Jon Barwise, H., Jerome Keisler, K.K., Eds.; North-Holland: Amsterdam, The Netherlands, 1980; pp. 123–148. [Google Scholar]
  19. Gandy, R. The Confluence of Ideas in 1936. In The Universal Turing Machine: A Half-Century Survey; Oxford University Press: New York, NY, USA, 1988; pp. 55–111. [Google Scholar]
  20. Rescorla, M. Church’s Thesis and the Conceptual Analysis of Computability. Notre Dame J. Form. Log. 2007, 48, 253–280. [Google Scholar] [CrossRef]
  21. Rescorla, M. Copeland and Proudfoot on Computability. Stud. Hist. Philos. Sci. Part A 2012, 43, 199–202. [Google Scholar] [CrossRef]
  22. Copeland, B.J.; Proudfoot, D. Alan Turing’s Forgotten Ideas in Computer Science. Sci. Am. 2010, 302, 76–81. [Google Scholar] [CrossRef]
  23. Quinon, P. From Computability over Strings of Characters to Natural Numbers. In Church’s Thesis: Logic, Mind, and Nature; Olszewski, A., Woleński, J., Urbaniak, R., Eds.; Copernicus Center Press: Berlin, Germany, 2014; pp. 310–330. [Google Scholar]
  24. Floyd, J.; Bokulich, A. (Eds.) Philosophical Explorations of the Legacy of Alan Turing: Turing 100; Springer: Cham, Switzerland, 2017. [Google Scholar]
  25. Heuveln, B.V. Emergence and Consciousness. Ph.D. Thesis, State University of New York at Binghamton, Binghamton, NY, USA, 2000. [Google Scholar]
  26. Trakhtenbrot, B. Comparing the Church and Turing approaches: Two prophetical messages. In The Universal Turing Machine, A Half-Century Survey; Herken, R., Ed.; Kammerer & Unversagt: Hamburg, Germany; Berlin, Germany; Oxford University Press: Oxford, UK; New York, NY, USA, 1988; pp. 603–630. [Google Scholar]
  27. Scott, D.S.; Strachey, C. Toward a Mathematical Semantics for Computer Languages; Technical Report PRG-6; Oxford University Computing Laboratory: Oxford, UK, 1971. [Google Scholar]
  28. Landin, P.J. The Next 700 Programming Languages. Commun. ACM 1966, 9, 157–166. [Google Scholar] [CrossRef]
  29. Milner, R. A Theory of Type Polymorphism in Programming. J. Comput. Syst. Sci. 1978, 17, 348–375. [Google Scholar] [CrossRef]
  30. Ashcroft, E.A.; Wadge, W.W. LUCID, a Nonprocedural Language with Iteration. Commun. ACM 1977, 20, 519–526. [Google Scholar] [CrossRef]
  31. Turner, R. Computational Artifacts: Towards a Philosophy of Computer Science; Springer: Berlin/Heidelberg, Germany, 2018. [Google Scholar] [CrossRef]
  32. Petricek, T. What We Talk About When We Talk About Monads. In Proceedings of the ACM SIGPLAN Symposium on Haskell, Vancouver, BC, Canada, 3–4 September 2015; ACM: New York, NY, USA, 2015; pp. 1–12. [Google Scholar] [CrossRef]
  33. Backus, J.W.; Beeber, R.J.; Best, S.; Goldberg, H.; Haibt, L.M.; Herrick, H.L.; Nelson, R.A.; Sayre, D.; Sheridan, P.B.; Stern, H.; et al. The FORTRAN Automatic Coding System. In Proceedings of the Western Joint Computer Conference, Los Angeles, CA, USA, 26–28 February 1957; ACM/AIEE/IRE: Los Angeles, CA, USA, 1957; pp. 188–198. [Google Scholar]
  34. Backus, J. Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs. Commun. ACM 1978, 21, 613–641. [Google Scholar] [CrossRef]
  35. Søndergaard, H.; Sestoft, P. Referential transparency, definiteness and unfoldability. Acta Inform. 1990, 27, 505–517. [Google Scholar] [CrossRef]
  36. McCarthy, J. Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I. Commun. ACM 1960, 3, 184–195. [Google Scholar] [CrossRef]
  37. McCarthy, J. History of LISP. ACM SIGPLAN Not. 1978, 13, 217–223. [Google Scholar] [CrossRef]
  38. Dahl, O.J.; Nygaard, K. SIMULA—An ALGOL-Based Simulation Language. Commun. ACM 1966, 9, 671–678. [Google Scholar] [CrossRef]
  39. Kay, A.C. A Personal Computer for Children of All Ages. In Proceedings of the ACM National Conference, Boston, MA, USA, 1 August 1972; ACM: New York, NY, USA, 1972; pp. 1–11. [Google Scholar]
  40. Kay, A.C. The Early History of Smalltalk. ACM SIGPLAN Not. (HOPL II) 1993, 28, 69–95. [Google Scholar] [CrossRef]
  41. Kowalski, R.A. Predicate Logic as a Programming Language. In Proceedings of the IFIP Congress, Stockholm, Sweden, 5–10 August 1974; pp. 569–574. [Google Scholar]
  42. Kowalski, R.A. Logic for Problem Solving; North-Holland: Amsterdam, The Netherlands, 1979. [Google Scholar]
  43. Robinson, J.A. A Machine-Oriented Logic Based on the Resolution Principle. J. ACM 1965, 12, 23–41. [Google Scholar] [CrossRef]
  44. Moschovakis, Y.N. What is an Algorithm? J. Log. Algebr. Program. 2001, 48, 1–36. [Google Scholar]
  45. Moschovakis, Y.N. Sense and Denotation as Algorithm and Value; Center for the Study of Language and Information (CSLI): Stanford, CA, USA, 2009. [Google Scholar]
  46. Colson, L. About primitive recursive algorithms. Theor. Comput. Sci. 1991, 83, 57–69. [Google Scholar] [CrossRef]
  47. Coquand, T. Une preuve directe du théorème d’ultime obstination. Comptes Rendus l’Académie Sci. Série I 1992, 314, 389–392. [Google Scholar]
  48. Fredholm, N. On the Inherent Asymmetry of Primitive Recursive Constructions. Theor. Comput. Sci. 1995, 152, 1–66. [Google Scholar] [CrossRef]
  49. Armstrong, J. A History of Erlang. In Proceedings of the Third ACM SIGPLAN Conference on History of Programming Languages (HOPL III), San Diego, CA, USA, 9–10 June 2007; ACM: New York, NY, USA, 2007; pp. 6-1–6-26. [Google Scholar] [CrossRef]
  50. Sieg, W. Step by Recursive Step: Church’s Analysis of Effective Calculability. Bull. Symb. Log. 2002, 8, 485–501. [Google Scholar]
  51. Kay, A.C. Talk at the Creative Think Seminar; Palo Alto: Santa Clara, CA, USA, 1982. [Google Scholar]
  52. Carnap, R. Meaning and Necessity: A Study in Semantics and Modal Logic; University of Chicago Press: Chicago, IL, USA, 1947. [Google Scholar]
  53. Montague, R. Formal Philosophy: Selected Papers of Richard Montague; Yale University Press: New Haven, CT, USA, 1974. [Google Scholar]
  54. Antonutti Marfori, M.; Quinon, P. Intensionality in mathematics: Problems and prospects. Synthese 2021, 198, 995–999. [Google Scholar] [CrossRef]
  55. Martin-Löf, P. The Sense/Reference Distinction in Constructive Semantics. Bull. Symb. Log. 2021, 27, 501–513. [Google Scholar] [CrossRef]
  56. Dummett, M. The Logical Basis of Metaphysics; Harvard University Press: Cambridge, MA, 1991; Originally delivered as the William James Lectures, Harvard University, 1976. See p. 125 for the quotation on sense and semantic value. [Google Scholar]
  57. Martin-Löf, P.; Sambin, G. Intuitionistic Type Theory; Notes by Giovanni Sambin; Bibliopolis: Naples, Italy, 1984. [Google Scholar]
  58. Pierce, B.C. Types and Programming Languages; MIT Press: Cambridge, MA, USA, 2002. [Google Scholar]
  59. Harper, R. Practical Foundations for Programming Languages, 2nd ed.; Cambridge University Press: Cambridge, UK, 2016. [Google Scholar]
  60. Russell, B. Letter to Frege. In From Frege to Gödel; van Heijenoort, J., Ed.; Harvard University Press: Cambridge, MA, USA, 2002. [Google Scholar]
  61. Gandy, R. Proof Theory and Logical Complexity. Br. J. Philos. Sci. 1976, 27, 213–230. [Google Scholar]
  62. Hindley, J.R. Basic Simple Type Theory; Cambridge University Press: Cambridge, UK, 1997. [Google Scholar]
  63. Whitehead, A.N.; Russell, B. Principia Mathematica; Cambridge University Press: Cambridge, UK, 1910. [Google Scholar]
  64. Ramsey, F.P. The Foundations of Mathematics. Proc. Lond. Math. Soc. 1925, s2-25, 338–384. [Google Scholar] [CrossRef]
  65. Church, A. A Formulation of the Simple Theory of Types. J. Symb. Log. 1940, 5, 56–68. [Google Scholar] [CrossRef]
  66. Xi, H.; Pfenning, F. Dependent Types in Practical Programming. In Proceedings of the 26th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL), San Antonio, TX, USA, 20–22 January 1999; ACM: New York, NY, USA, 1999; pp. 214–227. [Google Scholar] [CrossRef]
  67. Cardelli, L. Type Systems. In The Computer Science and Engineering Handbook; Tucker, A.B., Ed.; CRC Press: Boca Raton, FL, USA, 1996; pp. 220–239. [Google Scholar]
  68. Petricek, T. Against a Universal Definition of ‘Type’. In Proceedings of the Onward! 2015: ACM International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software, Pittsburgh, PA, USA, 25–30 October 2015; SPLASH Systems, Programming, and Applications. pp. 254–266. [Google Scholar] [CrossRef]
  69. Eden, A.H. Three Paradigms of Computer Science. Minds Mach. 2007, 17, 135–167. [Google Scholar] [CrossRef]
  70. Fetzer, J.H. Program Verification: The Very Idea. Commun. ACM 1988, 31, 1048–1063. [Google Scholar] [CrossRef]
  71. DeMillo, R.A.; Lipton, R.J.; Perlis, A.J. Social Processes and Proofs of Theorems and Programs. Commun. ACM 1979, 22, 271–280. [Google Scholar] [CrossRef]
  72. Colburn, T.R. Philosophy and Computer Science; M.E. Sharpe: Armonk, NY, USA, 2000. [Google Scholar]
  73. Humphreys, P. Extending Ourselves: Computational Science, Empiricism, and Scientific Method; Oxford University Press: Oxford, UK, 2004; See Chapter on software as epistemic tools. [Google Scholar]
  74. Smith, B.; Ceusters, W. Ontology as the Core Discipline of Biomedical Informatics: Legacies of the Past and Recommendations for the Future. In Proceedings of the International Conference on Formal Ontology in Information Systems (FOIS), Turin, Italy, 4–6 November 2004; IOS Press: Amsterdam, The Netherlands, 2004; pp. 103–112. [Google Scholar]
  75. CODASYL. Report on the Programming Language COBOL; Technical Report, Conference on Data Systems Languages; Washington, DC, USA, 1960. [Google Scholar]
76. Hopper, G.M. The Education of a Computer. In Proceedings of the Symposium on the Mechanization of Thought Processes, London, UK, 1959; pp. 155–160. [Google Scholar]
  77. Naur, P.; Backus, J.W.; Bauer, F.L.; Green, J.; Katz, C.; McCarthy, J.; Perlis, A.J.; Rutishauser, H.; Samelson, K.; Vauquois, B.; et al. Report on the Algorithmic Language ALGOL 60. Commun. ACM 1960, 3, 299–314. [Google Scholar] [CrossRef]
  78. Guttag, J.V. Introduction to Computation and Programming Using Python; MIT Press: Cambridge, MA, USA, 2013. [Google Scholar]
  79. Cox, R.; Griesemer, R.; Pike, R.; Taylor, I.L.; Thompson, K. The Go Programming Language and Environment. Commun. ACM 2022, 65, 70–78. [Google Scholar] [CrossRef]
80. Occhino, T.; Walke, J. Introducing React.js. Talk at Facebook Seattle, WA, USA, 2013. Available online: https://youtu.be/XxVg_s8xAms?si=BborxU0_7JlfMN85 (accessed on 5 September 2025).
  81. Ichbiah, J.D.; Krieg-Brueckner, B.; Wichmann, B.A.; Barnes, J.G.; Roubine, O.; Heliard, J.C. Rationale for the Design of the Ada Programming Language. ACM SIGPLAN Not. 1980, 14, 1–261. [Google Scholar] [CrossRef]
  82. Bezanson, J.; Edelman, A.; Karpinski, S.; Shah, V.B. Julia: A Fresh Approach to Numerical Computing. SIAM Rev. 2017, 59, 65–98. [Google Scholar] [CrossRef]
  83. Rice, H.G. Classes of Recursively Enumerable Sets and Their Decision Problems. Trans. Am. Math. Soc. 1953, 74, 358–366. [Google Scholar] [CrossRef]
  84. Copeland, B.J. Even Turing Machines Can Compute Uncomputable Functions. In Unconventional Models of Computation; Calude, C., Casti, J., Dinneen, M.J., Eds.; Springer: London, UK, 1998; pp. 150–164. [Google Scholar]
  85. Turner, R. Philosophy of Computer Science. Stanford Encyclopedia of Philosophy 2007. Available online: https://plato.stanford.edu/entries/computer-science/ (accessed on 5 September 2025).
  86. Wolpert, D.H.; Macready, W.G. No Free Lunch Theorems for Optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  87. Shagrir, O. Why We View the Brain as a Computer. Synthese 2006, 153, 393–416. [Google Scholar] [CrossRef]
  88. Blum, M. A Machine-Independent Theory of the Complexity of Recursive Functions. J. ACM 1967, 14, 322–336. [Google Scholar] [CrossRef]
  89. Turner, R. Constructive Foundations for Functional Languages; McGraw-Hill: London, UK, 1991; p. 269. [Google Scholar]
  90. Paterson, M.S.; Hewitt, C.E. Comparative Schematology. In Proceedings of the Project MAC Conference on Concurrent Systems and Parallel Computation; ACM: New York, NY, USA, 1970; pp. 119–127. [Google Scholar]
  91. Hansen, J.U.; Quinon, P. The Importance of Expert Knowledge in Big Data and Machine Learning. Synthese 2023, 201, 1–21. [Google Scholar] [CrossRef]