
On the Interpretation of Denotational Semantics

Dipartimento di Informatica, Università di Torino, Corso Svizzera, 185, 10149 Torino, Italy
Philosophies 2025, 10(3), 54; https://doi.org/10.3390/philosophies10030054
Submission received: 24 February 2025 / Revised: 18 April 2025 / Accepted: 23 April 2025 / Published: 4 May 2025
(This article belongs to the Special Issue Semantics and Computation)

Abstract
The denotational approach to the semantics of programming languages views program meanings as elements of domains, abstract partially ordered structures that form the basis of a mathematical theory of computation. This paper discusses how the interpretation of these structures as models of information in computation has evolved into the interactive view of domains represented by game semantics. As a unifying motif of these developments, we propose the interpretation of program meanings as fulfillments of intentions suggested by Heyting’s phenomenological meaning explanation of intuitionistic logical constants, recently carried forward by van Atten and Tieszen. We also attempt to interpret this idea in the light of a pragmatic view of computation as communication, connecting it to the foundation of game semantics for programming languages and looking at this pragmatic turn as a source of new intentionalities.

1. The Need for Interpretations of Semantics

Undertaking a semantic account of a programming language is the expression of an intentional stance that interprets programs as meaning something beyond their physical execution on a machine. The something that is meant by a program constitutes the basis for affirming that the execution of the program has given the right result.
Denotational semantics [1] gives a mathematical form to the intentional relation between a program e and its denotation ⟦e⟧, building ⟦e⟧ as an element of a domain D in a compositional way, by recursion on the abstract syntax tree of e. The construction of ⟦e⟧ exploits the additional structure of D in order to guarantee, for each e, the existence of its meaning ⟦e⟧ ∈ D. (In the literature, the term ‘domain’ refers to several different kinds of mathematical structures, having in common the property of being partially ordered sets where certain sets of elements (depending on the specific kind of domain) have least upper bounds. A presentation at a suitable level of generality would consider a category [2] of domains with a notion of morphism between objects of this category. This generality is, however, not essential to the topics discussed in this paper.)
Consider, as an example, the denotational interpretation of a simple (functional) program 1 + 1. (Here, ‘1’ abbreviates ‘S(0)’ and the symbols ‘0’ and ‘S’ stand for zero and successor, respectively. We shall focus on functional programs, whose execution consists of the evaluation of expressions, or better, closed expressions of ground type. In this sense, 1 is also a program, whose value is itself. Most of the aspects of denotational semantics that we shall deal with in this paper are already present in this small example.) We have first to provide a specific domain D to which the denotation of expressions like ‘1’ belongs. The denotation of ‘+’ is given by a binary function ⟦+⟧ : D × D → D and we may define the denotation ⟦1 + 1⟧ ∈ D as the composition ⟦+⟧(⟦1⟧, ⟦1⟧).
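The compositional scheme ⟦1 + 1⟧ = ⟦+⟧(⟦1⟧, ⟦1⟧) can be sketched in a few lines of Python. The encoding below is ours, not the paper’s: a flat domain of natural numbers with None standing for the undefined element ⊥, and illustrative names `denote` and `den_plus`.

```python
# A minimal sketch of compositional denotation over a flat domain of natural
# numbers; None stands for the undefined element ⊥. The encoding and the
# names (denote, den_plus) are illustrative, not taken from the paper.

BOTTOM = None  # the undefined element ⊥ of the flat domain

def den_plus(x, y):
    """Denotation of '+': strict in both arguments, hence monotonic."""
    if x is BOTTOM or y is BOTTOM:
        return BOTTOM
    return x + y

def denote(expr):
    """Build the denotation by recursion on the abstract syntax tree.
    Expressions are nested tuples: ('num', n) or ('plus', e1, e2)."""
    if expr[0] == 'num':
        return expr[1]
    if expr[0] == 'plus':
        return den_plus(denote(expr[1]), denote(expr[2]))
    raise ValueError(f'unknown expression: {expr!r}')

# The example of the text: the denotation of 1 + 1
one_plus_one = ('plus', ('num', 1), ('num', 1))
```

Note how the meaning of the compound expression is built only from the meanings of its parts, which is the essence of compositionality.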
Depending on the structure of D and on the properties that characterize functions over domains, we then have properties of the denotational models, in particular, a notion of program equivalence that holds between e and e′ when ⟦e⟧ = ⟦e′⟧. For this reason, it is important to choose axioms for domains so that the required program denotations exist and support operational intuitions about program behavior. The central question therefore is as follows: what are the criteria to be used in choosing the structure of domains?
An answer to this question should provide such criteria in a form that, albeit preliminary to the proper mathematical development of domains, can provide a rationale behind the choice of the specific primitive notions and axioms by explaining the structure of D and the material out of which the denotations ⟦e⟧ are built, leading to a first intuitive model of the axioms. This does not aim at giving an ultimate, let alone unique, notion of domain for denotational semantics, but rather at grounding its basic vocabulary in sound computational intuitions. Using the language of phenomenology loosely, one may say that an interpretation of semantics provides the material out of which meaning intentions can be fulfilled.
The need for an interpretation of the mathematical apparatus of denotational semantics, namely domains, was urged at an early stage of its development by John Reynolds [3] (p. 124):
the theory has been presented as an abstract mathematical development from assumptions which are only informally and intuitively justified. However, since these assumptions lead to conclusions which are quite different from the conventional theory of computation, it is important to understand their precise interpretation in terms of real computation.
In the next Section 2.1, we shall outline, by means of one example, the original interpretation of domains [4] as a theory of information in computation, according to which elements may be partial and may approximate other elements in the sense of being less defined than others. We also consider refinements of the original interpretation of domains motivated by certain intensional properties, like sequentiality, whose definition involves aspects of the causal structure of computations that must be reflected somehow in the structure of domains (Section 2.2). The quest for solutions to the full abstraction problem [5,6,7,8] has led to the interactive interpretation of domains that has been formalized by using games and strategies [9], as discussed in Section 2.3. Then, in Section 3, we resume the theme suggested at the beginning of the paper, namely the unifying aspect of intentionality in the construction of program meaning, drawing inspiration from the semantic account of intuitionistic logic formulated originally by Heyting [10,11,12,13] (see also [14] for a critical appraisal). After a general analysis of the schema of an intentional act as directed toward an object by way of its content (Section 3.1), we show how this schema applies, in particular, to the elements of domains arising from game semantics (Section 3.3) following a digression on logical proofs as fulfillments of intentions associated with certain contracts (Section 3.2). These will serve as the foundation of a pragmatic approach to semantics suggested at the end of this paper, together with some hints on further work.

2. Information in Computation: The Rise of Domains

The axioms for domains have been distilled from an analysis of informal notions needed to conceptualize phenomena related to fixed points of functions and non-termination of computations. These notions include information, approximation and limit [4] (pp. 170–174); they have a natural formulation in the language of partial order, and will be introduced by means of an example that also illustrates the use of their properties in giving meaning to programs. (Our presentation has the sole purpose of making the present paper as self-contained as possible. Among the many references on domains in relation to programming language semantics, see, for example, [15,16]. For the mathematical theory of domains from many points of view, see [17]. Inexhaustible sources of insights into the interpretation of domains, in addition to Scott’s papers, are Plotkin’s almost mythical ‘Pisa notes’ [18] and Winskel’s PhD thesis [19].) Later, we shall return to this example from a phenomenological standpoint, as an illustration of the fulfillment of an intention of an act, namely the act of evaluating (semantically) a program.

2.1. Axiomatizing Domains

Consider the program that we may write as
∞ where ∞ = S(∞)
assuming that execution is suspended at expressions of the form 0 or S(e), like in lazy evaluation. Then, ∞ can be used in meaningful contexts without necessarily causing non-terminating behaviors; for example, the expression 0 < ∞ is true, whereas 0 = ∞ is false. The denotational interpretation of the program aims at building an element ⟦∞⟧ of a domain that we shall call N (the domain of lazy natural numbers) with the function s : N → N, which is the interpretation of the function symbol S occurring in the program. The denotation of ∞ should satisfy the equation
⟦∞⟧ = ⟦S(∞)⟧ = ⟦S⟧(⟦∞⟧) = s(⟦∞⟧).
Observe that such an object ⟦∞⟧ might not exist; we shall select axioms for information, approximation, and limits in domains by investigating what is needed in order to build ⟦∞⟧. Initially, we have no information about ⟦∞⟧. Still, there is an element ⊥ of N that reifies, so to speak, this absence of information: the undefined element ⊥. (Notice that absence is an essential component of one of the formal structures in phenomenology according to [20]. We shall give below, in Section 3.3, a phenomenological interpretation of this construction as the (process of) fulfillment of an intention directed towards ⟦∞⟧.) According to [18],
Scott introduced an element, ⊥, into the value domain […] rather than worrying about partial functions and undefined results: to coin a phrase, he objectifies (some of) the metalanguage.
In order to gain more information on ⟦∞⟧, we can perform one step of unfolding of the (circular) equation defining ∞. Now, we have information s(⊥); the information on the object has increased and we may write ⊥ ⊑ s(⊥), meaning that ⊥ is an approximation of s(⊥). Abstractly, approximation is a partial-order relation ⊑ on the elements of a domain, where Scott suggests reading ‘x ⊑ y’ as expressing the fact that “y is consistent with x and is (possibly) more accurate than x” [4]. Of course, ⊥ approximates every other element of N, which may be of one of the following two forms:
s(sⁿ(0))    s(sⁿ(⊥))
for n ≥ 0, where all elements of the first kind are maximal with respect to ⊑.
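To make the order ⊑ concrete, here is a small Python model of the finite elements of the lazy-natural domain. The encoding is our own, not the paper’s: a pair ('fin', n) stands for sⁿ(0) and ('part', n) for sⁿ(⊥); the infinite limit element of the domain is omitted from this finite sketch.

```python
# Finite elements of the lazy-natural domain, encoded as pairs:
# ('fin', n) stands for s^n(0) (total, maximal), ('part', n) for s^n(⊥).
# The infinite limit element of the domain is omitted from this sketch.

def leq(x, y):
    """The approximation order ⊑: s^n(⊥) is below both s^m(⊥) and s^m(0)
    whenever n <= m; total elements s^n(0) are maximal (below only themselves)."""
    (kx, nx), (ky, ny) = x, y
    if kx == 'part':
        return nx <= ny
    return x == y  # kx == 'fin': maximal element

def s(x):
    """The successor function on the domain: one more application of the
    constructor. It is monotonic: x ⊑ y implies s(x) ⊑ s(y)."""
    return (x[0], x[1] + 1)

bottom = ('part', 0)  # ⊥: no information at all
```

For instance, `leq(bottom, ('fin', 3))` holds because ⊥ approximates every element, while a total element such as ('fin', 2) is below no element other than itself.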
If one regards the theory of domains as an algebraic formulation of a theory of computable functions, then it is natural to characterize such functions in the language of domains. First,
if f were a function defined by a program in any of the usual ways, it would be sensitive to the accuracy of the arguments (inputs) in a special way: the more accurate the input, the more accurate the output [4].
We regard s as such a function; therefore, for every pair of elements x, y of the domain such that x ⊑ y, we must have s(x) ⊑ s(y). In general, if D and E are domains and f : D → E is a function, then f is monotonic, i.e., f(x) ⊑ f(y) whenever x ⊑ y.
Assume now that X ⊆ D is a set of elements of a domain D such that for every pair x, y of elements of X, there is a common upper bound, i.e., an element z such that x ⊑ z, y ⊑ z. This happens, for example, when X is directed, i.e., any two elements of X have a common upper bound in X. Intuitively, a directed set X should “converge” to its least upper bound ⊔X, which is regarded as its “limit”. This completeness property is required to hold in any domain.
In order to give a finite amount of information e about the output of a computable function f on the argument x, e ⊑ f(x), there must exist a finite amount of information d about the input, d ⊑ x, so that e ⊑ f(d). A function f satisfying this requirement is called continuous. Continuity can be expressed equivalently as a preservation property by requiring a monotonic f : D → E to satisfy
f(⊔X) = ⊔{ f(x) ∣ x ∈ X }
whenever X ⊆ D is directed. This equivalence relies upon assuming that each element of a domain is the limit (= least upper bound) of the directed set of its finite approximations. Here, d ∈ D is finite (or, better, compact) if, for any directed X ⊆ D, d ⊑ ⊔X implies that there is an x ∈ X so that d ⊑ x. The motivating example for this definition is the finite subsets of a set S in the complete lattice of all subsets of S, or the finite functions in the domain of partial functions from a set A to a set B. Plotkin formulates continuity in terms of (an abstract notion of) physical feasibility:
It is reasonably self-evident that any physically feasible function must be monotonic and obey the condition proved equivalent to continuity as an output event could hardly depend on infinitely many input events as a machine should only be able to do a finite amount of computation before causing a given output event [18]
We can even state an abstract counterpart to Church’s Thesis:
Scott’s Thesis: Computable functions are continuous [19].
In our example, completeness and continuity together allow us to achieve the construction of ⟦∞⟧. By monotonicity, we have a chain (a particular kind of directed set)
⊥ ⊑ s(⊥) ⊑ s(s(⊥)) ⊑ ⋯
obtained by repeatedly applying the monotonic function s to both sides of the infinite series of inequalities of the form sⁿ(⊥) ⊑ sⁿ⁺¹(⊥), starting from the initial one, ⊥ ⊑ s(⊥). Then, by completeness, we can take the “limit” of that chain as the required element:
⟦∞⟧ =def ⊔_{n≥0} sⁿ(⊥).
In general, a similar construction shows that every continuous function f : D → D has a least fixed point fix(f) with respect to the partial order:
fix(f) =def ⊔_{n≥0} fⁿ(⊥).
This property is essential to give meaning to systems of recursive equations that define elements of domains [21] and, in fact, ⟦∞⟧ = fix(s).
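The Kleene iteration fix(f) = ⊔ₙ fⁿ(⊥) can be run directly whenever the chain stabilizes after finitely many steps. The sketch below is our own illustration, not the paper’s: the domain is that of partial functions on {0, …, 5} ordered by graph inclusion, with the empty partial function as ⊥, and a functional encoding the recursive definition of factorial; restricting to a finite domain makes termination of the loop obvious.

```python
# Kleene iteration fix(f) = ⊔_{n≥0} f^n(⊥) on the domain of partial functions
# {0,...,5} ⇀ N, ordered by graph inclusion; ⊥ is the empty partial function
# (the empty dict). The functional below is continuous, so the chain
# ⊥ ⊑ F(⊥) ⊑ F²(⊥) ⊑ ... stabilizes at the least fixed point.

DOM = range(6)

def fact_functional(f):
    """One unfolding of the recursive definition
    fact(n) = 1 if n = 0 else n * fact(n - 1).
    f is a dict representing a partial function (absent key = undefined)."""
    g = {}
    for n in DOM:
        if n == 0:
            g[0] = 1
        elif n - 1 in f:
            g[n] = n * f[n - 1]
    return g

def kleene_fix(functional, bottom):
    """Iterate from ⊥ until the chain stabilizes
    (guaranteed here by the finiteness of DOM)."""
    x = bottom
    while True:
        y = functional(x)
        if y == x:
            return x
        x = y

fact = kleene_fix(fact_functional, {})
```

Each iteration adds one more defined argument to the partial function, mirroring how each unfolding of a recursive equation adds information to the approximations of its solution.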
At this point, we can perhaps formulate more clearly the purposes of an interpretation of semantics. Our analysis, which follows the original account of Scott [4], has motivated the essential structural requirements on domains: approximation and completeness with respect to increasing chains of approximations. The perplexities expressed in the above quote from Reynolds still remain; the desired properties have been postulated, but we would like to interpret domains and their elements as constructions over simpler structures, closer to computational intuitions, so that these properties can be proven on the basis of the interpretation. Historically, some of these interpretations have been motivated by the need to refine the continuity of functions to account for the intensional properties of computations, in particular sequentiality. We will briefly outline the main issues because they lead directly to the interactive view of computation that shall be our main topic later.

2.2. Domains as Abstract Physical Models of Computation

Consider the domain-theoretic version of Kleene’s three-valued disjunction [22,23], also known as ‘parallel-or’, where ⊥ is the undefined value, given by the table

∨	⊥	t	f
⊥	⊥	t	⊥
t	t	t	t
f	⊥	t	f
This is a continuous function, as Scott has pointed out in [24], but cannot be defined in paradigmatic programming languages like the (typed or untyped) λ-calculus, as Plotkin [6] and Berry [25,26] have shown. In fact, program evaluation in these cases is sequential: semantically, this can be characterized by saying, roughly, that in order to increase information on the output of a sequential function, there is an input place where it is necessary to increase the information [5,27,28]. This is not true of parallel-or: it is not possible to choose in advance one argument (place) to be evaluated completely before evaluating the other, because a computing device might become stuck in a non-terminating process trying to complete the evaluation of the argument chosen. Therefore, any device that has the task of evaluating an expression of the form e ∨ e′, where ∨ represents parallel-or, needs to bifurcate into two parallel processes, one evaluating e and the other evaluating e′, terminating the evaluation and returning true if, and when, at least one of these returns true.
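A direct Python transcription of the table makes the point tangible (None standing for ⊥ is our own encoding): the output can become defined by increasing either argument alone, so there is no single argument that must be evaluated first.

```python
# Kleene's parallel-or on the flat booleans {⊥, t, f}; None stands for ⊥.

def por(x, y):
    """Return a defined output as soon as the available information allows it:
    true if either argument is true, false only when both are false."""
    if x is True or y is True:
        return True
    if x is False and y is False:
        return False
    return None  # not enough information yet: ⊥

# Monotone but not sequential: por(None, True) and por(True, None) are both
# already True, so neither argument is the one that must be evaluated first.
```

A sequential evaluator that committed to finishing, say, the left argument first would diverge on por(Ω, true), even though the table assigns that call the defined value t.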
In order to restrict domains to support the semantic counterparts to these algorithmic aspects, it turned out to be convenient to represent domain elements as certain sets of events (states, or configurations) that take into account their causal relations in an abstract space and time where they happen [19,29]. The idea is that a state corresponds to a cross section of the causal structure on events, and thus can be identified with a set of events that, when it contains a, also contains all events that causally precede a; these are the events that have happened “before” that state. These are stylized physical models of computation, like the simple models of relativistic space–time used in the study of distributed systems [30]. It may be useful to distinguish some of the properties enjoyed by such causal structures, as we will generically call them:
  • Events are meant as event occurrences at specific places, and causality turns out to be a partial-order relation on events.
  • There is a natural notion of conflict between events that happen at the same place, and are therefore mutually exclusive. This notion is derived from the corresponding notion for Petri nets:
    an event is imagined to occur at a fixed point in space and time; conflict between events is localised in that two conflicting events are enabled at the same time and are competing for the same point in space and time [29]
  • A state that is finite as a set can only be preceded by finitely many other states. This important property is not true in all Scott domains, but is fairly natural when
    information has to do with (occurrences of) events: namely the information that those events occurred. For example […], ⊥ might mean that no event occurred and an integer n, might mean that the event occurred of the integer n being output (or, in another circumstance being input) [18]
  • If a state x can evolve into two different states by the happening of two non-conflicting, causally independent events a and a′, then there is a state x ∪ {a, a′} to which both x ∪ {a} and x ∪ {a′} can evolve (by performing a′ and a, respectively). In this situation, events a and a′ are concurrent in x—this, again, is related to the way concurrency is defined in Petri nets [31].
Using a simpler relation of causality as a source of insight into the interpretation of domains provides, at the same time, a way of justifying semantics that relies upon concretization rather than abstraction. The properties of causality have been partly inspired, as we have seen, by the analysis of discrete motion of tokens across places according to a causal mechanism of transition firing, carried out by Petri and others in their quest for models of concurrent computations [31,32,33]. This idea, translated into the language of domains, is at the basis of the concrete domains of Kahn and Plotkin [28]. (This notion of “concreteness” was introduced in order to characterize domains of data—as opposed to domains of continuous functions—that were useful both in the study of coroutine computations introduced earlier by Kahn and in the definition of a sufficiently general notion of ‘sequential function’ over domains; see [34] for the historical context.) It consists of representing physically concrete domains as domains of configurations of such causal models (called information matrices in [28] and concrete data structures in [35]) where, in the words of Plotkin,
a domain is physically concrete iff it can be implemented as a collection of digital events spread through space–time according to some rules (such domains should be suitable for communication between processes) [36].
A companion to Scott’s Thesis in this restricted context is then that
a domain is concrete iff it is physically concrete (ibidem).
Here, the domain structure is induced by the underlying causal structure via a representation theorem [28].
Let us see how this is employed in the particular case of the domain N of (lazy) natural numbers. This concrete domain is built from a concrete data structure given by the following:
  • A collection of places p that may be filled with values 0 and s. Filling of a place p with a value v is an event, represented by the pair (p, v);
  • A relation of enabling between (finite sets of) events and places that describes the causal structure of events.
A place p may be filled either with 0 or with s; in the latter case, the filling enables a new place p′, starting from an initial, enabled place N. Clearly, these two events—filling a place p with 0 and filling it with s—are conflicting because they happen at the same place p.
The elements of the concrete domain generated by this concrete data structure are sets x of non-conflicting events that, along with any event that fills a place p, contain all events that enable p, hereditarily so until the initial place N is reached. The elements of the resulting domain have (up to isomorphism) the same tree structure as N . Observe that this domain structure reflects faithfully the way information increases for numerical expressions in a lazy functional programming language, where evaluation stops at constructors and, therefore, values have either the form 0 or S ( e ) .
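As a sketch, the states of this concrete data structure can be characterized in a few lines of Python. The encoding is our own: places are named by their depth 0, 1, 2, … rather than N, N′, N″, …, and events are pairs (place, value).

```python
# States of the concrete data structure generating the lazy naturals.
# Events are pairs (place, value), with places encoded by their depth
# 0, 1, 2, ... and values '0' or 's'; the event (p, 's') enables place p + 1.

def is_state(events):
    """A state is a conflict-free, causally closed set of events:
    at most one value per place, and a place p > 0 may be filled only if
    the enabling event (p - 1, 's') is also present."""
    places = [p for p, _ in events]
    if len(places) != len(set(places)):   # '0' and 's' at one place: conflict
        return False
    return all(p == 0 or (p - 1, 's') in events for p, _ in events)

# {(0, 's'), (1, '0')} is the state denoting s(0); {(1, '0')} is not causally
# closed; {(0, '0'), (0, 's')} contains two conflicting events.
```

The least upper bound of a directed set of such states is simply their union, which illustrates why completeness needs no separate postulate once elements are represented as sets of events.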
It may be interesting at this point to consider what we have gained from this interpretation of concrete domains as sets of states of a concrete data structure. For the purpose of justifying the properties of domains, we can consider, as an example, completeness: every directed set of elements has a least upper bound. Elements of a concrete domain are represented by states, i.e., causally closed sets of non-conflicting events. If we assume that X is a directed set of states, we can observe that their set-theoretic union gives the least upper bound of X [28] (T-6.2). The existence of least upper bounds of chains need not be postulated, but is now a consequence of the interpretation of elements as sets (of events).
In order to introduce a suitable notion of mapping between concrete domains, we can exploit their representation as concrete data structures. Then, such a mapping f : D → E can be presented by an algorithm that associates output events in E to input events in D, respecting the causal structures of D and E. As a simple example, assume that we want to define the so-called “test for zero” function from the domain of natural numbers N introduced above to the domain of boolean values B. The causal structure of the latter domain is represented by a unique initial place B to be filled either by the value t (representing true) or the value f (representing false). It is natural to define “test for zero” as an algorithm for filling B by analyzing the values of places of N. The definition might proceed as follows (for a precise definition, see [35]):
    to fill B
        read the value of N
            if it is 0, fill B with t
            if it is s, fill B with f
In order to understand the working of this algorithm, take the state {(N, s)} of N as input. The only information we have about that element is that it is the successor of an element of N on which we have no information; hence, it represents the lazy natural number s(⊥). Applying the above algorithm to this element yields the state {(B, f)} of B, representing the total truth value false, without the need for further information about the input.

2.3. From Physics to Communication

The shift in emphasis brought about by taking into account the causal structure of computations consists of the fact that now the meaning [ [ e ] ] of a program represents an abstract trace of its operational behavior. The definition of transformations of concrete domains as algorithms has already undermined the extensional framework of classical domain theory. The efforts to solve the full abstraction problem were a significant motivation for this change in perspective.
The full abstraction problem consists of building models for a reference programming language (a simply typed λ -calculus with arithmetical constants), where the denotational equivalence of two terms coincides with the equivalence induced by the operational indistinguishability of these terms by replacing them in the same program contexts [5,6,7,8,37,38]. This problem is essentially caused by the presence of continuous functions that cannot be evaluated sequentially. A new series of insights leading to the solution of this problem came from different sources [39] (§6): combinatorial games, the semantics of linear logic, and the interactive approach to computation that originated in the work of Hoare and Milner on process calculi. (The extensive work of Abramsky in this area is especially illuminating from a general computational perspective. Among the many references, see [40] for a comprehensive introduction to the main themes.)
These insights have contributed to replace the original perspective of denotational semantics with a view of computation as communication between system components. From the point of view of semantic structures, this shifted the attention from the investigation of the combinatorial properties of the space–time of digital events, to the definition of abstract models of the flow of acts of communication between modules within a digital system.
The idea of a combinatorial game was already available, together with a systematic mathematical account [41], that could be made into a category with games as objects and certain strategies as morphisms [42]. In these categories, the orientation of a morphism from input to output is not absolute. This is a significant departure from models based on domains and continuous functions, and a step forward to a more symmetric notion of interaction [43]. In linear logic [44], we observe the same phenomenon in the context of a constructive logical system with an involutive negation A⊥. Its interpretation makes use of game models [39,45] formulated in terms of moves of two players, traditionally labeled Proponent and Opponent, partly under the influence of the proposal, by Paul Lorenzen [46], of dialogue games as a semantics of intuitionistic logic.
It is now possible to reinterpret domains based on causal structures, like the concrete domains, as domains of interactions similar in structure to dialogue games. Consider again the sequential algorithm computing the “test for zero” in the setting of the concrete domain N of lazy natural numbers. The pattern of control in the execution of this algorithm resembles the handshake protocol used in asynchronous hardware design [47], described by the dashed line in the following figure: upon request for an event filling B (lower right), a request for the value of N is sent (lower left); upon receiving the value of N (upper left), B can be filled with the appropriate value (upper right).
(Figure: the handshake pattern of requests and responses between the places N and B.)
The main new intuition at the basis of these insights is the interactive, dialogical nature of computation, formalized in game semantics (see [37,38] and the references therein for details). This intuition also leads to new ingredients out of which denotational meanings can be constructed: no longer, as before, as traces of abstract physical processes developing information about results, but as traces of dialogues between a computing system and an outside user [9]. A unifying role is played by an intentional interpretation of the resulting dialogues that allows us to build a bridge between logic and computation. It is this field, and the new approach to the interpretation of semantics that it suggests, that we would like to explore now.
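The handshake reading of the “test for zero” algorithm can be spelled out as an explicit move sequence. The sketch below is our own rendering (the move format and names are assumptions, not the paper’s notation): the Opponent requests the output, the Proponent requests the input, the Opponent supplies it, and the Proponent answers.

```python
# The 'test for zero' interaction as a dialogue between Opponent (environment)
# and Proponent (strategy), following the request/response handshake pattern.
# The move format ('O'/'P', description) is our own illustrative encoding.

def dialogue(input_value):
    """Interaction trace under the request/response protocol;
    input_value is '0' or 's', the value eventually supplied at place N."""
    answer = 'B = t' if input_value == '0' else 'B = f'
    return [
        ('O', 'request B'),           # the user asks for the boolean output
        ('P', 'request N'),           # the strategy asks for its input
        ('O', f'N = {input_value}'),  # the environment supplies the value
        ('P', answer),                # the strategy answers
    ]
```

The strictly alternating O/P polarity of the trace is exactly the discipline imposed on plays in game semantics, with the strategy determining the P-moves from the preceding O-moves.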

3. From Semantics to Pragmatics

3.1. The Basic Structure of Intentionality

We resume the intentional theme introduced at the beginning, observing that the denotational semantics of programs e consists of acts of (semantic) evaluation aiming at the values ⟦e⟧. Recall that, for simplicity, we speak only of functional programs; this is without loss of generality, because denotational semantics transforms all programs into expressions to be evaluated.
Consider the act of evaluating 2 + 3. This intentional act is directed toward something that, we claim, is not the number 5 but a state, which is entered when the evaluation of the program is accomplished with (a presentation of) the number 5 as its result. Therefore, this result is not an element of a set but of the more complex domain structure N, where it lies strictly above ⊥ and the elements s(⊥), s(s(⊥)), s(s(s(⊥))), s(s(s(s(⊥)))), s(s(s(s(s(⊥))))), and is identified with s(s(s(s(s(0))))). It is therefore natural to identify ⟦2 + 3⟧ with the intentional object of the act of evaluation, which acknowledges its dependence on the act and also the mediation of this act through the structure of the expression, in particular the canonical form for the natural numbers.
We can recognize here the formal structure of intentionality pointed out in [11] (p. 279) that expresses the intentional act as directed toward its (intentional) object by way of its content, in symbols:
act(content) directed toward [object]
We can see a miniature example of this structure in the actions expressed by telic verbs [48]:
they have just like acts of knowing the characteristic that they have an object, an object towards which they are directed, which is the result of the action [49] (p. 145)
Consider one crossing the street; every proper initial segment of this action cannot be distinguished from merely walking across the street, which is atelic. Crossing the street is accomplished only when the goal of reaching the opposite sidewalk is achieved, resulting in the state of being on the opposite sidewalk [50] (Ch. 2, §2). We shall eventually exploit this fragment of the vocabulary of intentionality in framing the construction of program meanings, but we first review how intentionality arose originally in the semantic analysis of intuitionistic logic.
Heyting devised rules specifying how the constructive truth of a compound proposition depends on the constructions of its components according to the main logical constant, and formulated his insights referring explicitly to Husserl and Becker for the notion of intention and (intentional) act (see the account in [13] and the references therein):
The affirmation of a proposition means the fulfillment of an intention [51] (Eng. translation, p. 59)
In an early example of this idea, the proposition “Euler’s constant is rational” is interpreted as expressing the intention which is fulfilled by finding two integers such that their ratio equals Euler’s constant [51]. At about the same time, Kolmogorov proposed his interpretation of intuitionistic logic as a calculus of problems and their solutions, which, to some extent, is close to Heyting’s interpretation [13]. The general scheme of these explanations is an instance of (2):
affirmation [act] of a proposition [content] means fulfillment of an intention [object of the act] = judgement
We would like to reinterpret intentions and their fulfillments by means of suitably structured communicative acts in a pragmatic scenario that slightly reformulates the interpretation of assertion proposed by Brandom [52] (see also [53,54,55]). The latter has already been grafted onto the tradition of phenomenology in [56], and has been related to ideas from game semantics by Marion [57,58]. Our pragmatic reformulation of intentionality in the semantics of logic and computation has drawn initial motivation from the approach to semantics developed by Per Martin-Löf starting from [10]. The interest in the formal notion of commitment outlined below came from an attempt to replace his appeal to knowledge in the explanation of assertion with behaviorally oriented notions, and was influenced by [59] (see, especially, the interpretation of implication proposed there on pp. 158–159). (The recent work of Martin-Löf, including [60], is clearly very close to our proposal, but it was discovered only when the collection edited by Ansten Klev and Ivo Pezlar at the web page https://pml.flu.cas.cz was announced, on 17 September 2022, after our approach to assertion had taken the present form. A detailed comparison, also taking into account [61], is deferred to a future paper.)
Before applying the same scheme to program evaluation and denotational interpretation, we first digress to show how the general pattern (2) of intentional acts arises in the interpretation of proof.

3.2. Digression: Proofs and Contracts

While the Brouwer–Heyting–Kolmogorov explanation of mathematical theorems is based on the cognitive activity of a subject, we would like to sketch an account of proofs as structures of speech acts, and the justification of deduction as part of a formal theory of contracts and their execution, of rights and obligations, and of the rules that allow us to compose, create, transfer, and annihilate them. The new interpretation of intention and fulfillment that we propose takes the affirmation of a proposition to be directed towards the fulfillment of a contract.
Here, the process/result ambiguity of the term ‘fulfillment’ may be especially confusing, so we distinguish, when necessary, its two readings by writing ‘fulfillment₁’ for the process of fulfillment and ‘fulfillment₀’ for the state resulting from such a process. Then, we may say that asserting a proposition is directed toward the fulfillment₀ of the contract expressed by the proposition, meaning that the proposition determines the conditions under which a behavior can be regarded as a fulfillment₁ of the contract. The general schema that summarizes this interpretation is a new instance of (2):
assertion [act] of a proposition [content]   means   fulfillment₀ of a contract [object of the act]
This account is compatible with the insights of Brandom [54], who describes inference in terms that do not come from a theory of truth but rather from a basic vocabulary of human lawful interaction:
[T]o define the inferential role of an expression ‘&’ of Boolean conjunction, one specifies that anyone who is committed to p, and committed to q, is thereby to count also as committed to p & q , and that anyone who is committed to p & q is thereby committed both to p and to q. For a commitment to become explicit is for it to be thrown into the game of giving and asking for reasons as something whose justification, in terms of other commitments and entitlements, is liable to question [54] (Ch. 1)
Our approach amounts, essentially, to a reading in terms of contracts of the rules of the dialogical semantics devised by Paul Lorenzen [46].
A proof of an assertion is an act of communication between two parties or, better, a series of such acts, subject to rules whose normative character can be justified on pragmatic grounds. A proof involves a proposition asserted by a Proponent and consists of its successful defence through the ensuing debate with the other participant, the Opponent, who challenges the original assertion.
As the debate unfolds, there is a pragmatic structure that amounts to the evolving pattern of rights and obligations in which the participants become involved as they perform actions that may be either questions or new assertions functioning as answers. Indeed, a participant who asserts a proposition A becomes liable to be asked for reasons by the other participant, being committed, as the passive party, to a contract associated with the proposition, with the other participant as the active party. (Associating a contract with a proposition is a matter for a theory of meaning. Below, we give essentially one example, relative to the meaning of implication, from this pragmatic perspective.)
As actions are made in the course of a debate, the contracts are written in an account recording the rights and obligations of both participants. The current state of the debate is the set of contracts in the account.
The performance of a contract is a behavior that starts with a question made by the active party and ends with an answer to the initial question issued by the passive party, with questions and answers alternating. Such a behavior is a fulfillment₁ of the contract. A proof of an assertion of A is a debate that fulfills₁ the contract associated with A.
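The shape of such a behavior can be stated mechanically. The following Python sketch is our own illustration, not part of the paper's formal apparatus; the labels `active`/`passive` and `question`/`answer`, and the name `is_fulfillment`, are hypothetical conventions. It checks that a sequence of moves opens with the active party's question, strictly alternates between the parties, and closes with an answer by the passive party.

```python
# Toy well-formedness check for a fulfillment₁ of a contract.
# Moves are (party, kind) pairs; the labels are illustrative only.
def is_fulfillment(moves):
    if not moves or moves[0] != ("active", "question"):
        return False                      # must open with the active party's question
    if moves[-1] != ("passive", "answer"):
        return False                      # must close with the passive party's answer
    # questions and answers must strictly alternate between the two parties
    return all(a[0] != b[0] for a, b in zip(moves, moves[1:]))

# a minimal performance: one question, one matching answer
assert is_fulfillment([("active", "question"), ("passive", "answer")])
# two moves in a row by the same party break the alternation
assert not is_fulfillment([("active", "question"), ("active", "question")])
```

Under this reading, a proof of an assertion of A is any move sequence passing this check for the contract associated with A.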
In order to study proofs, we may use a graphical notation that represents a contract, associated with a generic proposition P, between Proponent (always on the left) and Opponent (always on the right) as follows, where the active party is marked with • and the passive party with ∘:
[Diagram: the contract for P, with the Proponent (∘, passive) on the left and the Opponent (•, active) on the right]
Accordingly, in this example, the Opponent is active and the Proponent is passive.
The basic operation in this approach consists of adding to the current state of the account, whenever a party asserts P, the contract associated with P, with the asserting party as the passive party. Assume now that the Opponent has asserted an atomic proposition A and is, therefore, committed as the passive party to the associated contract, which is added to the account:
[Diagram: the account after the Opponent's assertion of A, containing the contract for A with the Opponent passive]
At this point, the Proponent can assert A, adding to the account a new contract:
[Diagram: the account with a second contract for A, in which the Proponent is passive and the Opponent is active]
The justification of this transformation on accounts is that the only question, q, that can be asked by the Opponent in the lower contract can be answered by the Proponent by asking in turn the only available question, q′, of the upper contract, and then using the answer given to q′ by the Opponent to answer q. This is the ‘ipse dixisti’ rule of dialogue games [62] (or the copy-cat strategy in game semantics [40] (§5.4)); in terms of rights and obligations, this mechanism amounts to what in the Roman code is called compensatio [59]. The idea is that a party can compensate an obligation in one contract with the dual obligation of the other party in the other contract.
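The annihilation of dual obligations admits a minimal bookkeeping model. In the following Python sketch (a hypothetical illustration; the names `assert_prop` and `compensate` are our own), the account is a multiset of contracts indexed by the passive party, and compensatio cancels matching contracts with opposite passive parties, as in the ‘ipse dixisti’ transformation above.

```python
from collections import Counter

# The account: a multiset of contracts, each indexed by (passive party, proposition).
def assert_prop(account, party, prop):
    """Asserting prop adds a contract with the asserting party as the passive party."""
    account[(party, prop)] += 1

def compensate(account, prop):
    """Compensatio: cancel pairs of dual obligations on the same proposition."""
    pairs = min(account[("P", prop)], account[("O", prop)])
    account[("P", prop)] -= pairs
    account[("O", prop)] -= pairs

account = Counter()
assert_prop(account, "O", "A")   # the Opponent asserts the atomic A
assert_prop(account, "P", "A")   # the Proponent replies by asserting A
compensate(account, "A")         # ipse dixisti: the dual obligations annihilate
assert sum(account.values()) == 0
```

The multiset (rather than set) representation allows several contracts for the same proposition to coexist before compensation, as in the two-contract state above.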
The meaning of a logical constant specifies the performances that count as fulfillments₁ of a contract associated with a proposition having that constant as its main connective. Consider a contract associated with the proposition A ⊃ B, where the symbol ‘⊃’ denotes implication. A question of A ⊃ B is an assertion of A by the active party to the contract, and the corresponding answer is the assertion of B by the passive party. It follows that B is a logical consequence of A (A ⊨ B) when there is a behavior (starting with an active Opponent) fulfilling₁ B, provided that the Opponent commits to a contract (with the Proponent as the active party) associated with A.
As an example of a proof as the fulfillment₁ of a contract, and of our definition of logical consequence, consider (A ⊃ B) ⊨ A ⊃ B. Initially, we are in a state where the Opponent has already asked the initial question:
[Diagram: the initial state, with the Opponent committed to the contract for A ⊃ B and the initial question already asked]
This contract, to which the Opponent is committed, can be replaced by the two contracts
[Diagram: the two contracts replacing the Opponent's contract for A ⊃ B]
In this state, the Proponent may assert A, entering the state
[Diagram: the account after the Proponent asserts A]
and committing the Opponent to B, by the rule for implication,
[Diagram: the account with the Opponent committed to B]
finally allowing the Proponent to answer the initial question:
[Diagram: the final state of the account, after the Proponent answers the initial question]
In this state, all obligations annihilate, by compensatio, and the relation (A ⊃ B) ⊨ A ⊃ B holds.
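This debate can be replayed as a sequence of transformations on a list of contracts. The snippet below is a purely illustrative Python transcription: the encoding of contracts as (passive party, proposition) pairs and the string "A>B" for A ⊃ B are our own conventions, not the paper's formal notation. The final assertion records that, after compensatio, all obligations annihilate.

```python
# Contracts are (passive party, proposition) pairs; "A>B" stands for A ⊃ B.
account = [("O", "A>B"),          # hypothesis: the Opponent committed to A ⊃ B
           ("P", "A>B")]          # the Proponent asserts the conclusion A ⊃ B
# The Opponent asks the initial question, i.e., asserts A; the Proponent owes B:
account.remove(("P", "A>B"))
account += [("O", "A"), ("P", "B")]
# The Opponent's implication contract is split by the rule for implication:
# the Proponent asserts A (its question), committing the Opponent to B.
account.remove(("O", "A>B"))
account += [("P", "A"), ("O", "B")]
# Compensatio on A and on B: dual obligations annihilate pairwise.
for prop in ("A", "B"):
    account.remove(("P", prop))
    account.remove(("O", prop))
assert account == []              # the debate fulfills the contract
```

This is only one possible bookkeeping of the diagrammatic steps; it ignores the order in which questions become available, which the diagrams track explicitly.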

3.3. Program Semantics as Dialogue

The general schema of the intentional act also applies to the semantic evaluation of a program e, which has been our starting point as an example of denotational semantics:
evaluation [act] of e [content]   directed toward   [[e]] [object of the act]
It is in this phenomenological scenario that the search for interpretations of semantics finds a motivation. The fulfillment₀ of the intention expressed by the act of evaluation is the accomplishment of the construction of [[e]] on the basis of existence principles made available by the domain structure. We have seen that the latter reduce essentially to the existence of limits (completeness) and the continuity of the functions involved in the construction. When domain elements are interpreted as, for example, states of a causal structure, and approximation and limit become, respectively, set inclusion and union, the domain structure is grounded in sets and the proofs rely upon the laws for set-theoretic operations.
Consider now a program that consists of the evaluation of an arithmetic expression. There are two roles involved in this process: (1) the user, who challenges (2) a processor. The communication relation between user and processor allows the former to write, after the prompt, an arithmetic expression. It is then the responsibility of the processor to accomplish the task of evaluating the expression (i.e., to fulfill the intention of the act), which means printing its result. The fulfillment₁ of the intention in this operational setting is a process that involves a flow of requests and matching replies from the user to the processor and conversely, which may remain forever only partial. Husserl [63] (VI, §18) already considered the possibility that such intentions might be fulfilled only partially, as in the chain
5² → 5 · 5 → 5 + 5 + 5 + 5 + 5 → 1 + 1 + 1 + ⋯ + 1 (25 times)
Here, each expression is a partial fulfillment of the original intention expressed by 5² and arises by expanding the definition of symbols, reducing ultimately to a complete fulfillment consisting of a sum of units, a canonical presentation of the value of the original expression.
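Husserl's chain of partial fulfillments can be mimicked by a small rewriting procedure. The Python sketch below is our own illustration (the string representation and the function name `expand` are arbitrary conventions): each step expands one definition, and the process stops at the canonical sum of units.

```python
# Illustrative stepwise expansion of 5^2 into a canonical sum of units.
def expand(expr):
    if expr == "5^2":
        return "5*5"                               # expand the definition of squaring
    if expr == "5*5":
        return "+".join(["5"] * 5)                 # expand multiplication into a sum
    if "5" in expr:
        return expr.replace("5", "+".join(["1"] * 5), 1)  # expand one '5' into units
    return expr                                    # canonical form reached

chain = ["5^2"]
while True:
    step = expand(chain[-1])
    if step == chain[-1]:
        break
    chain.append(step)

assert chain[1] == "5*5"
assert chain[-1].count("1") == 25                  # complete fulfillment: 25 units
```

Each intermediate string is a partial fulfillment of the original intention; the chain is finer-grained than Husserl's (it expands one symbol at a time), but ends in the same canonical presentation.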
This interactive situation involving user and processor can be seen as another instance of the dialogical setting between a Proponent and an Opponent, as in our account of logical dialogues, leading to a slightly different conceptualization of the semantic account of programming based on games. Skipping the formal details of game semantics, which we use just as a notation for the type of interaction on which the particular act of evaluation is based, we sketch a description of the game interpretation of the domain N of lazy natural numbers, which we have already considered as a concrete domain, along with a computation of the meaning [[1 + 1]] (where 1 =def S(0)), showing the trace of the corresponding interactive behavior.
As before, we may consider the construction of [[1 + 1]] as an instance of schema (3), namely as the fulfillment of a contract between the Proponent (processor) and the Opponent (user), where the Proponent is committed to eventually answer the initial question by the Opponent with the value of the expression 1 + 1. (The difference in notation is due to the fact that, in the logical setting of Section 3.2, there is no need to mention values, which are, instead, essential to computation.) The intervening actions are recorded in a non-empty, prefix-closed set of alternating sequences of moves of the Proponent and the Opponent, starting with the initial question q by the Opponent: the positions of the game-theoretic version of N.
Let us see first how dialogues allow us to build elements of N. Initially, no choice has been made; the empty sequence ε represents the initial state of the dialogue and corresponds to the element ⊥ of the domain. Then, we have the initial question, asked by the Opponent, which may be answered by the Proponent either by stop, yielding the maximal element 0 of the domain, or by next. In the latter case, the resulting position represents the partial number s(⊥). At each step, the (copy of the) Opponent's question may be answered by the Proponent by (a copy of) one of these values: by stopping the construction of the current element of N, or by passing to the next partial natural number. These even-length positions are the finite elements of a domain describing a (partial) fulfillment₁ of a contract between the Proponent and the Opponent, where the order relation is the restriction of the prefix ordering to these positions. The resulting domain turns out to be (isomorphic to) N. (A formal description of this domain as a recursive type is given in [64].) Using the infinite chain of positions
ε ⊑ q ⊑ q · next ⊑ q · next · q₁ ⊑ q · next · q₁ · next₁ ⊑ q · next · q₁ · next₁ · q₂ ⊑ ⋯
to define the “infinite” number ∞, we see that ∞ is the least upper bound of the chain of elements:
⊥ ⊑ s(⊥) ⊑ s(s(⊥)) ⊑ ⋯
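The correspondence between positions and elements of N can be checked concretely. In the following Python sketch (illustrative only; `position` and `below` are our own names), finite elements are tuples of moves and the domain order is the prefix order, so that ⊥ is the empty position, the partial numbers sⁿ(⊥) are positions without a final stop, and total numbers are maximal.

```python
def position(n, complete=True):
    """The position for s^n(0) if complete, else for the partial number s^n(⊥)."""
    moves = []
    for _ in range(n):
        moves += ["q", "next"]      # one successor per question/answer pair
    if complete:
        moves += ["q", "stop"]      # close the construction: the number is total
    return tuple(moves)

def below(p, q):
    """The domain order: the restriction of the prefix ordering to positions."""
    return q[:len(p)] == p

bottom = position(0, complete=False)                     # the empty sequence ε, i.e., ⊥
assert below(bottom, position(2))                        # ⊥ ⊑ s(s(0))
assert below(position(1, complete=False), position(2))   # s(⊥) ⊑ s(s(0))
assert not below(position(1), position(2))               # total numbers are maximal
```

The chain to ∞ consists of the positions `position(n, complete=False)` for increasing n, none of which contains a stop move.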
The interaction associated with the computation of 1 + 1 according to the clauses of the recursive definition
e + 0 = e        e + S(e′) = S(e + e′)
is a behavior (a strategy, in the game terminology) fulfilling₁ a contract between the Proponent and the Opponent, and is described schematically in the following table, from which we can read the history of the interactive evaluation of 1 + 1, i.e., of S(0) + S(0):
          N₁      ×      N₂      →      N₃
  O                                     q₁³
  P                      q₁²
  O                      next₁²
  P                                     next₁³
  O                                     q₂³
  P                      q₂²
  O                      stop₂²
  P       q₁¹
  O       next₁¹
  P                                     next₂³
  O                                     q₃³
  P       q₂¹
  O       stop₂¹
  P                                     stop₃³
The table displays the debate between the Proponent and the Opponent on the vertical dimension; horizontally, there are the three instances of the game N in which the moves take place, each instance Nⁱ having the initial question q₁ⁱ and the possible answers stopⱼⁱ and nextⱼⁱ. Every time a participant in the dialogue answers nextⱼⁱ to a question qⱼⁱ, a new question qⱼ₊₁ⁱ of the opposing party becomes available. The table summarizes the following narrative.
Initially, the only available move is the Opponent's question q₁³ in N₃. Observe that, in the instances of N occurring to the left of the arrow, the roles of the Proponent and Opponent are exchanged; by asking a question, the Proponent becomes, locally, a user interacting with a processor role played by the Opponent. Therefore, the Proponent can ask, playing as Opponent in N₂ and following the strategy implicit in the recursive definition of the sum, the question q₁², obtaining the answer next₁². The Proponent copies this answer in N₃, partially fulfilling the contract whose starting move was the initial question.
This pattern of interaction is repeated until the Proponent's question in N₂ is answered by stop₂²; this means that the recursion parameter has been examined completely, and the same pattern of interaction is then repeated for the first argument.
Finally, the value of the sum can be read off from the rightmost column of the table, namely
q₁³ · next₁³ · q₂³ · next₂³ · q₃³ · stop₃³
which represents the element [[2]] =def [[S(S(0))]] = s(s(0)) of the domain N as the history of an interaction that fulfills₁ the contract associated with the evaluation of 1 + 1, namely the object of the intentional act of evaluating 1 + 1 in game semantics.
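The strategy read off from the recursive definition of addition can be animated directly. The Python sketch below is an illustration under our own encoding (canonical numerals as streams of answers; the names `number` and `add` are hypothetical): the strategy interrogates the recursion parameter first, as in the table, and the projection of the move log on N₃ reproduces the trace above.

```python
def number(n):
    """Answer stream for the canonical numeral S^n(0): n 'next's, then 'stop'."""
    def answer():
        nonlocal n
        if n == 0:
            return "stop"
        n -= 1
        return "next"
    return answer

def add(left, right, log):
    """Proponent strategy for the sum, following e + 0 = e, e + S(e') = S(e + e')."""
    state = {"arg": right, "idx": 2}          # examine the recursion parameter first
    def answer():
        log.append(("O", "q", 3))             # the user's question in N3
        while True:
            log.append(("P", "q", state["idx"]))
            move = state["arg"]()             # the Opponent answers in N1 or N2
            log.append(("O", move, state["idx"]))
            if move == "next":
                log.append(("P", "next", 3))  # copy the answer into N3
                return "next"
            if state["idx"] == 2:             # right argument exhausted: e + 0 = e
                state["arg"], state["idx"] = left, 1
            else:
                log.append(("P", "stop", 3))  # both arguments exhausted
                return "stop"
    return answer

log, trace3 = [], []
play = add(number(1), number(1), log)         # evaluate S(0) + S(0)
while not trace3 or trace3[-1] != "stop":
    trace3.append(play())

assert trace3 == ["next", "next", "stop"]     # the value 2, i.e., s(s(0))
assert [m for (p, m, i) in log if i == 3] == ["q", "next", "q", "next", "q", "stop"]
```

The full `log` contains the fourteen moves of the table (without the copy indices, which the columns make redundant); filtering on the third instance yields the history of the contract between user and processor.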
There is an objection that may be raised concerning the status of the meanings [ [ e ] ] as objects of intentional acts (we are indebted to a referee for formulating this objection, articulating the steps of the argument by adapting one in [14] (§5) and suggesting the first solution sketched below). In our account based on game semantics, program meanings are (i) interpreted as (partial) fulfillments of intentions expressed as (ii) relations between a program e and its denotational meaning [ [ e ] ] ; (iii) the intention involved in the act of evaluating e is directed toward [ [ e ] ] ; (iv) however, before an intention can be expressed, there has to be an object to be intended, according to diagram (2). (v) Therefore, our account may only interpret program meanings as (partial) fulfillments of intentions if one assumes the [ [ e ] ] ’s as the intended objects. Thus, our entire approach and the possibility of using intentionality in explaining proof and computation appears to be undermined by a circularity.
A way out might consist of granting denotational meanings an existence prior to our directing our intentions toward them in evaluating programs. Another approach to escaping the alleged circularity, which also helps to clarify our reading of the basic diagram (2) of intentionality, consists of questioning (iv) in the above argument and observing that this diagram should in fact be read as a movie, where the object is initially intended emptily and then increasingly more fully, until (some degree of) fulfillment of the intention is reached. Throughout this process, the object does not change; only the degree of fulfillment of the intention directed towards it may increase. Recall that, in models based on games, the meaning of a program is the object of an act directed toward the fulfillment₀ of a contract. The object of this act is completely defined by the contract, which determines what should count as a fulfillment₀. The alleged circularity is thus unfolded into a series of increasing degrees of fulfillment, analogous to a series of approximations converging to a limit.

4. Further Developments

We have considered several approaches to the interpretation of the denotational meanings [[e]] in mathematical structures motivated by general computational insights. We have recognized the relation between a program e and its denotational meaning [[e]] as expressing an intention, and we can read what we have carried out so far as an exploration of ways of fulfilling those intentions (i.e., building the values [[e]]) in semantic structures that have come progressively closer to abstract models of digital spaces as theaters of computation. Starting from the lattice-theoretic models proposed originally by Dana Scott to formalize an abstract notion of information in computation, we have seen how domain structure has been restricted by making assumptions about the causal structure of those spaces.
The developments of these investigations have led to a change in attitude in the construction of the meanings [ [ e ] ] , by reinterpreting the events of these abstract spaces as communications between participants in an interactive process of computation. Here, the meanings represent the traces of sessions of questions and answers in a dialogue that can work as a proof of a logical law or as the evaluation of an expression, as in functional programming.
In all these approaches to the construction of [ [ e ] ] , the general schema of an intentional act has played a unifying role as a formal structure without bringing in the rich analytic background provided by the phenomenological method. We believe that one direction in which our work needs to be extended consists of recognizing that programs may be the content of many acts beyond denotational or operational interpretation: parsing, typing, compiling, and testing, among others. The general schema (2) also applies in these cases, for example,
parsing [act] of a program [content]   aims at   abstract syntax tree [object of the act]
These acts are not independent of each other, but the relation between them is not yet completely clear. For example, when we say (loosely) that “⊥ represents nontermination”, we allude to the existence of such a relation between the denotational and operational interpretations. Typing and compilation, moreover, depend on parsing. The mutual relations of dependence between acts that have a program as their content are, in general, still unexplored.
There is a further important aspect of intentionality in connection with programming, namely its mediated character, that immediately raises the issue of understanding how technological mediation [65]—through those particular technical artefacts that are programming languages [66]—is reflected in the intentional objects involved in programming semantics. In particular, a deeper analysis of this type of mediated intentionality could help in bringing to light general schemes for reasoning about intentional objects and providing grounds for justifying axioms for semantic structures, along lines similar to the use of Husserl’s noetic/noematic correlation in [67]. This would probably be one outcome of further investigations into the formal structure of the intentional act, that in this paper we have imported ready-made from the phenomenological approaches to constructive logic [11], taking into account the variations discussed in [49,68,69].
A general suggestion of this paper is, brutally stated, that semantics depends on pragmatics. This is not simply adherence to a ‘meaning is use’ principle. Adopting an account based on acts may give one a vantage point for finer semantical investigations from different angles and for explaining their interconnections. Eventually reducing such acts to patterns of interactive behavior and the associated dynamics of commitments, as we have hinted in the last section, gives us a systematic way of explaining and comparing them. We have outlined here the bare bones of these ideas; more work is needed to flesh them out.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Acknowledgments

I am grateful to the Editors for inviting me to submit a paper to this Special Issue. Many thanks to the two anonymous referees for providing helpful criticism and advice, and also to Henri Stephanou and Julian Rohrhuber for careful readings of previous versions of the paper and for their comments.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Scott, D.S.; Strachey, C. Toward a mathematical semantics for computer languages. In Proceedings of the Symposium on Computers and Automata, New York, NY, USA, 13–15 April 1971; Fox, J., Ed.; Polytechnic Press: New York, NY, USA, 1971; pp. 19–46. [Google Scholar]
  2. Walters, R.F.C. Categories and Computer Science; Cambridge Computer Science Texts, Cambridge University Press: Cambridge, UK, 1992. [Google Scholar]
  3. Reynolds, J.C. On the interpretation of Scott domains. In Symposia Mathematica; Academic Press: Cambridge, MA, USA, 1975; Volume 15, pp. 123–135. [Google Scholar]
  4. Scott, D.S. Outline of a mathematical theory of computation. In Proceedings of the Fourth Annual Princeton Conference on Information Sciences and Systems, Princeton, NJ, USA, 25–26 March 1970; pp. 169–176. [Google Scholar]
  5. Milner, R. Fully abstract models of typed λ-calculi. Theor. Comput. Sci. 1977, 4, 1–22. [Google Scholar] [CrossRef]
  6. Plotkin, G. LCF considered as a programming language. Theor. Comput. Sci. 1977, 5, 223–257. [Google Scholar] [CrossRef]
  7. Berry, G.; Curien, P.L.; Lévy, J.J. Full abstraction for sequential languages: The state of the art. In Proceedings of the Algebraic Methods in Semantics, Fontainebleau, France, 8–15 June 1982; Nivat, M., Reynolds, J., Eds.; Cambridge University Press: Cambridge, UK, 1985; pp. 89–132. [Google Scholar]
  8. Ong, C.H.L. Correspondence between operational and denotational semantics. In Handbook of Logic in Computer Science; Abramsky, S., Gabbay, D., Maibaum, T.S.E., Eds.; Oxford University Press: Oxford, UK, 1995; Volume 4, pp. 269–356. [Google Scholar]
  9. Abramsky, S. Semantics of Interaction: An Introduction to Game Semantics. In Semantics and Logics of Computation; Pitts, A.M., Dybjer, P., Eds.; Publications of the Newton Institute, Cambridge University Press: Cambridge, UK, 1997; pp. 1–32. [Google Scholar]
  10. Martin-Löf, P. On the meanings of the logical constants and the justifications of the logical laws. In Atti degli Incontri di Logica Matematica; Bernardi, C., Pagli, P., Eds.; Università di Siena: Siena, Italy, 1985; pp. 203–281. [Google Scholar]
  11. Tieszen, R.L. Phenomenology, Logic, and the Philosophy of Mathematics; Cambridge University Press: Cambridge, UK, 2005. [Google Scholar]
  12. Tieszen, R. Intentionality, Intuition, and Proof in Mathematics. In Foundational Theories of Classical and Constructive Mathematics; Sommaruga, G., Ed.; Springer: Dordrecht, The Netherlands, 2011; pp. 245–263. [Google Scholar]
  13. van Atten, M. The Development of Intuitionistic Logic. In The Stanford Encyclopedia of Philosophy, Fall, 2023rd ed.; Zalta, E.N., Nodelman, U., Eds.; Metaphysics Research Lab, Stanford University: Stanford, CA, USA, 2023. [Google Scholar]
  14. Bentzen, B. Propositions as Intentions. Husserl Stud. 2023, 39, 143–160. [Google Scholar] [CrossRef]
  15. Gunter, C.A. Semantics of Programming Languages. Structures and Techniques; Foundations of Computing; MIT Press: Cambridge, MA, USA, 1993. [Google Scholar]
  16. Winskel, G. The Formal Semantics of Programming Languages—An Introduction; Foundation of Computing Series; MIT Press: Cambridge, MA, USA, 1993. [Google Scholar]
  17. Amadio, R.; Curien, P.L. Domains and Lambda-Calculi. In Cambridge Tracts in Theoretical Computer Science; Cambridge University Press: Cambridge, UK, 1998; Volume 46. [Google Scholar]
  18. Plotkin, G. The Category of Complete Partial Orders: A Tool for Making Meanings; Summer School on Foundations of Artificial Intelligence and Computer Science; Istituto di Scienze dell’Informazione, Università di Pisa: Pisa, Italy, 1978. [Google Scholar]
  19. Winskel, G. Events in Computation. Ph.D. Thesis, Department of Computer Science, University of Edinburgh, Edinburgh, UK, 1980. [Google Scholar]
  20. Sokolowski, R. Introduction to Phenomenology; Cambridge University Press: Cambridge, UK, 1999. [Google Scholar]
  21. Manna, Z. Mathematical Theory of Computation; Dover Publications, Inc.: Mineola, NY, USA, 2003. [Google Scholar]
  22. Kleene, S.C. On notation for ordinal numbers. J. Symb. Log. 1938, 3, 150–155. [Google Scholar] [CrossRef]
  23. Kleene, S.C. Introduction to Metamathematics; Van Nostrand: New York, NY, USA, 1952. [Google Scholar]
  24. Scott, D.S. A type-theoretical alternative to ISWIM, CUCH, OWHY. Theor. Comput. Sci. 1993, 121, 411–420. [Google Scholar] [CrossRef]
  25. Berry, G. Stable models of typed λ-calculi. In Automata, Languages and Programming, Fifth Colloquium; Ausiello, G., Böhm, C., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 1978; Volume 62, pp. 72–89. [Google Scholar]
  26. Berry, G. Séquentialité de l’évaluation formelle des lambda-expressions. In Program Transformations, Proceedings of the 3rd International Colloquium on Programming, Paris, France, 28–30 March 1978; Robinet, B., Ed.; Dunod: Paris, France, 1978; pp. 67–80. [Google Scholar]
  27. Vuillemin, J. Correct and optimal implementations of recursion in a simple programming language. J. Comput. Syst. Sci. 1974, 9, 332–354. [Google Scholar] [CrossRef]
  28. Kahn, G.; Plotkin, G. Concrete domains. Theor. Comput. Sci. 1993, 121, 187–277. [Google Scholar]
  29. Nielsen, M.; Plotkin, G.; Winskel, G. Petri nets, event structures and domains, Part I. Theor. Comput. Sci. 1981, 13, 85–108. [Google Scholar] [CrossRef]
  30. Lamport, L. Time, clocks and the ordering of events in a distributed system. Commun. ACM 1978, 21, 558–565. [Google Scholar] [CrossRef]
  31. Holt, A. Final Report of the Information System Theory Project; Technical Report RADC-TR-68-305; Rome Air Development Center, Air Force Systems Command, Griffiss Air Force Base: Rome, NY, USA, 1968. [Google Scholar]
  32. Holt, A.; Commoner, F. Events and conditions. In Proceedings of the MIT Conference on Concurrent Systems and Parallel Computation, Woods Hole, MA, USA, 2–5 June 1970; Association for Computing Machinery: New York, NY, USA, 1970; pp. 3–52. [Google Scholar]
  33. Petri, C.A. Non-Sequential Processes; Interner Bericht ISF-77-5; Gesellschaft für Mathematik und Datenverarbeitung: Bonn, Germany, 1977. [Google Scholar]
  34. Brookes, S. Historical introduction to “concrete domains” by G. Kahn and G. Plotkin. Theor. Comput. Sci. 1993, 121, 179–186. [Google Scholar] [CrossRef]
  35. Berry, G.; Curien, P.L. Sequential algorithms on concrete data structures. Theor. Comput. Sci. 1982, 20, 265–321. [Google Scholar] [CrossRef]
  36. Plotkin, G. Gilles Kahn and concrete domains, January 2007. In Proceedings of the Slides of a Presentation at the Colloquium in Memory of Gilles Kahn, Paris, France, 12 January 2007. [Google Scholar]
  37. Abramsky, S.; Jagadeesan, R.; Malacaria, P. Full Abstraction for PCF. Inf. Comput. 2000, 163, 409–470. [Google Scholar] [CrossRef]
  38. Hyland, J.M.E.; Ong, C.H.L. On full abstraction for PCF: I. Models, observables and the full abstraction problem, II. Dialogue games and innocent strategies, III. A fully abstract and universal game model. Inf. Comput. 2000, 163, 285–408. [Google Scholar] [CrossRef]
  39. Abramsky, S.; Jagadeesan, R. Games and Full Completeness for Multiplicative Linear Logic. J. Symb. Log. 1994, 59, 543–574. [Google Scholar] [CrossRef]
  40. Abramsky, S. Information, processes and games. In Handbook of the Philosophy of Science, Volume 8: Philosophy of Information; van Benthem, J., Adriaans, P., Eds.; Elsevier: Amsterdam, The Netherlands, 2008; pp. 483–549. [Google Scholar]
  41. Conway, J.H. On Numbers and Games. In London Mathematical Society Monographs; Academic Press: London, UK, 1976; Volume 6. [Google Scholar]
  42. Joyal, A. Remarques sur la théorie des jeux a deux personnes. Gaz. Sci. Math. Que. 1977, 1, 46–52. [Google Scholar]
  43. Curien, P.L. Symmetry and Interactivity in Programming. Bull. Symb. Log. 2003, 9, 169–180. [Google Scholar] [CrossRef]
  44. Girard, J.Y. Linear logic. Theor. Comput. Sci. 1987, 50, 1–102. [Google Scholar] [CrossRef]
  45. Blass, A. A game semantics for linear logic. Ann. Pure Appl. Log. 1992, 56, 183–220. [Google Scholar] [CrossRef]
  46. Lorenzen, P. Ein dialogisches Konstruktivitätskriterium. In Infinitistic Methods; Pergamon Press and PWN: Oxford, UK; Warsaw, Poland, 1961; pp. 193–200. [Google Scholar]
  47. van Berkel, K. Handshake Circuits: An Asynchronous Architecture for VLSI Programming; Cambridge University Press: New York, NY, USA, 1993. [Google Scholar]
  48. Krifka, M. The Origins of Telicity. In Events and Grammar; Rothstein, S., Ed.; Springer: Dordrecht, The Netherlands, 1998; pp. 197–235. [Google Scholar]
  49. Martin-Löf, P. A path from logic to metaphysics. In Atti del Congresso: Nuovi Problemi della Logica e della Filosofia della Scienza, Viareggio 8–13 Gennaio 1990; CLUEB: Bologna, Italy, 1991; Volume II, pp. 141–149. [Google Scholar]
  50. Kosman, A. The Activity of Being: An Essay on Aristotle’s Ontology; Harvard: Cambridge, MA, USA, 2013. [Google Scholar]
  51. Heyting, A. Die Intuitionistische Grundlegung der Mathematik. Erkenntnis 1931, 2, 106–115. [Google Scholar] [CrossRef]
  52. Brandom, R. Asserting. Noûs 1983, 17, 637–650. [Google Scholar] [CrossRef]
  53. Brandom, R. Making It Explicit: Reasoning, Representing, and Discursive Commitment; Harvard University Press: Cambridge, MA, USA, 1994. [Google Scholar]
  54. Brandom, R. Articulating Reasons: An Introduction to Inferentialism; Harvard University Press: Cambridge, MA, USA, 2000. [Google Scholar]
  55. Brandom, R. Between Saying and Doing: Towards an Analytic Pragmatism; Oxford University Press UK: Oxford, UK, 2008. [Google Scholar]
  56. Gallagher, S. Phenomenology; Palgrave-Macmillan: New York, NY, USA, 2012. [Google Scholar]
  57. Marion, M. Why Play Logical Games? In Games: Unifying Logic, Language, and Philosophy; Majer, O., Pietarinen, A., Tulenheimo, T., Eds.; Springer: Dordrecht, The Netherlands, 2009; pp. 3–26. [Google Scholar]
  58. Marion, M. Game Semantics and the Manifestation Thesis. In The Realism-Antirealism Debate in the Age of Alternative Logics; Rahman, S., Primiero, G., Marion, M., Eds.; Springer: Dordrecht, The Netherlands, 2012; pp. 141–168. [Google Scholar]
  59. Cardone, F. The algebra and geometry of commitment. In Ludics, Dialogue and Interaction; Lecomte, A., Tronçon, S., Eds.; Lecture Notes in Artificial intelligence FoLLI; Springer: Berlin/Heidelberg, Germany, 2011; Volume 6505, pp. 147–160. [Google Scholar]
  60. Martin-Löf, P. Logic and ethics. In Meaning and understanding; Sedlár, I., Blicha, M., Eds.; College Publications: London, UK, 2020; pp. 83–92. [Google Scholar]
  61. Pagin, P.; Marsili, N. Assertion. In The Stanford Encyclopedia of Philosophy, Winter, 2021st ed.; Zalta, E.N., Ed.; Metaphysics Research Lab, Stanford University: Stanford, CA, USA, 2021. [Google Scholar]
  62. Barth, E.M.; Krabbe, E.C. From Axiom to Dialogue; De Gruyter: Berlin, Germany; New York, NY, USA, 1982. [Google Scholar]
  63. Husserl, E. Logical Investigations; Moran, D., Ed.; Routledge: New York, NY, USA, 1970. [Google Scholar]
  64. Abramsky, S.; McCusker, G. Games for recursive types. In Proceedings of the 1994 Workshop on Theory and Formal Methods, London, UK, 28–30 March 1994; Hankin, C.L., Ed.; Imperial College Press: London, UK, 1995; pp. 1–20. [Google Scholar]
  65. Verbeek, P.P. Beyond interaction: A short introduction to mediation theory. Interactions 2015, 22, 26–31. [Google Scholar] [CrossRef]
  66. Turner, R. Programming Languages as Technical Artifacts. Philos. Technol. 2013, 27, 377–397. [Google Scholar] [CrossRef]
  67. van Atten, M. Brouwer Meets Husserl: On the Phenomenology of Choice Sequences; Springer: Dordrecht, The Netherlands, 2007. [Google Scholar]
  68. Sundholm, G. Questions of proof. Manuscrito 1993, XVI, 47–70. [Google Scholar]
  69. Sundholm, G. “Inference Versus Consequence” Revisited: Inference, Consequence, Conditional, Implication. Synthese 2012, 187, 943–956. [Google Scholar] [CrossRef]

Share and Cite

MDPI and ACS Style

Cardone, F. On the Interpretation of Denotational Semantics. Philosophies 2025, 10, 54. https://doi.org/10.3390/philosophies10030054
