Article

Dynamic Syntax in a Theory of Types with Records

Department of Philosophy, Linguistics and Theory of Science, University of Gothenburg, Box 200, 405 30 Göteborg, Sweden
* Authors to whom correspondence should be addressed.
Languages 2025, 10(12), 300; https://doi.org/10.3390/languages10120300
Submission received: 20 August 2025 / Revised: 27 November 2025 / Accepted: 1 December 2025 / Published: 10 December 2025
(This article belongs to the Special Issue The Development of Dynamic Syntax)

Abstract

This paper presents a recasting of key aspects of dynamic syntax (DS) in a theory of types with records (TTR), concentrating on the incremental processing of speech events as they unfold: speech events are classified in terms of grammatical types, and predictions are made about the types that will be realized as the speech event progresses. TTR, like DS, attempts to provide formal analyses of language in terms of a theory of action related to cognitive processes. It therefore seems appropriate to explore one in terms of the other in the hope of revealing how the two theories may interact with and contribute to each other.

1. Introduction

Suppose you are out walking with a friend, and after a silence, this person utters John. Being a speaker of English, you immediately classify this event as being an utterance of the English proper name whose phonemic representation is /ʤɔn/. That is, you have classified this utterance as being a string of three phonemes: /ʤ/ /ɔ/ /n/. Of course, this is not a conscious act of classification.
Even linguists, when out walking with their friends, do not consciously keep track of the individual phonemes that are being uttered. Non-linguists may struggle to even understand what a phoneme is. And yet standard linguistic analysis has suggested for a long time that this kind of classification is going on. The input which triggers this classification does not in any simple sense contain these phonemes. What we are presented with is a continuous event involving soundwaves and perhaps a visual event involving the movement of our friend’s lips if we happen to be looking at their face when they make the utterance. It is well-known that lip reading can be helpful in a noisy environment or for the hard of hearing.
What exactly is a phoneme? It is well-known that the phoneme /ʤ/ is realized slightly differently in different phonetic environments. For example, the /ʤ/ in an utterance of Jim is slightly different to the one in an utterance of John. The high front quality of /i/ and the lip-rounding quality of /ɔ/ are already present in the realization of /ʤ/ in the two words, a phenomenon known as coarticulation. This is not to mention the wide variation in sound associated with children, male and female adults, speakers who have a cold, and a whole host of other things that can influence the exact nature of sound.
Still, we, as English speakers, do a pretty good job of recognizing an utterance of John when we hear one. In our terms, we say that this is because we are attuned to different types of utterances. For us, the phonemes like /ʤ/ are types whose witnesses are events of varying kinds which speakers of English would recognize as a realization of the phoneme. Similarly, we think of /ʤɔn/ as representing a type, a type of strings of events witnessing the three phonemes, that is, three smaller events strung together to make a larger event, which is an utterance of John. Of course, the nature of coarticulation means that there are no discrete boundaries between the three subevents, but as this paper is not about phonology, we will leave this discussion for another time.1 In fact, when we talk about phonological types in the rest of the paper, we will use an informal notation using standard orthography in double inverted commas, that is, “John” instead of /ʤɔn/, following the convention used in Cooper (2023).
When we hear John at the beginning of an utterance, we can do more than classify it as a witness of the phonological type “John”. We recognize it as being a proper name of English which can be used to refer to an individual (who in some sense is normally referred to by phonological events of the type “John”). That is, in our terms, the utterance belongs to the type PropName and has as content an individual (of type Ind) named John. As an event of type PropName, it can also be of type NounPhrase. Events of this type can function as the subjects of sentences (that is, events of type Sentence). Thus, on hearing John, we can not only classify it using the types to which we are attuned but we can also predict the type of a larger speech event of which it might be a smaller component event. For example, we can predict that it is the subject of a sentence utterance and that it will be followed by an event of type VerbPhrase and that the contents of the two events will be combined in a certain way to obtain the content of the whole sentence event.
This is, of course, only one of many predictions that are possible. This initial noun phrase could also be the start of something like John, I’ve grown very fond of, where John is the object of the preposition, placed in the initial position for pragmatic reasons. It could also be the complete utterance, the short answer to a question, for example, or an expression of dismay or approval at some action that John has just performed. Thus, we should properly talk of a probabilistic distribution over predictions that can be made. The probability of the various predictions that are made will vary depending on matters of context (for example, whether John is present in the situation) or where we are in the dialogue (for example, the short answer prediction will be more likely if a question has just been asked). We will not address these probability issues in this paper.
The discussion above clearly indicates that we are in the territory of Dynamic Syntax2 (DS, Kempson et al., 2001). The aim of this paper is to show that the proposals that have been made in DS can be recast in terms of classification by types and the making of type predictions as these have been discussed in TTR, a theory of types with records (Cooper, 2023). In particular, we wish to show that the trees and tree descriptions used in DS can be interpreted as types in the sense in which types have been introduced in TTR. This is, then, a more radical proposal than earlier work on DS-TTR (for example, Eshghi, 2015), which adds TTR interpretations to DS but does not code the whole of DS in TTR.
The point of this paper is, importantly, not to improve on DS as such, but rather to take steps towards integrating DS into TTR. TTR aims to be a unifying framework for theories of language and cognition, where record types can be used to model syntactic feature structures, semantics, phonology, etc. Whether readers who are DS practitioners want to continue using DS as before (secure in the knowledge that it could be incorporated into TTR) or instead start using TTR-DS is, of course, a matter of choice. This having been said, it is always possible that casting a theory in a new framework can shed light on both the theory and the framework and reveal new directions for future development. At the least, it can show similarities between what at first sight might appear to be very different approaches.
The remainder of this paper is structured as follows: in Section 2, we give a brief introduction to the aspects of TTR that we will use in this paper. We also give an overview of some previous work that has been performed on combining DS and TTR. In Section 3, we show how we plan to represent key objects and types needed for classical DS3 in terms of records, types, and action rules as used in TTR. In Section 4, we will address the action-based nature of DS by showing how to use TTR action rules to parse in the DS style. In Section 5, we conclude.
Below, we will assume that the reader has a working knowledge of DS. We refer to Howes and Gibson (2021) for a concise overview.

2. Background

2.1. A Short Introduction to TTR (A Theory of Types with Records)

As mentioned, TTR attempts to be a foundational theory of language and cognition, and as such aims to cover many (ultimately all) aspects of linguistics. TTR also makes semantics more cognitively oriented than model-theoretic semantics and sees language as action, similarly to DS. This means that it is reasonable to try various ways of marrying the two.
We give a brief sketch of the aspects of TTR which we will use in this paper. For more detailed accounts, see Cooper (2023) (or Cooper and Ginzburg (2015) for a short overview).
s:T represents a judgment that s is of type T. Types may either be basic or complex (in the sense that they are structured objects which have types or other objects introduced in the theory as components). One basic type that we will use is Ind, the type of individuals; another is Real, the type of real numbers.
Given that T1 and T2 are types, (T1→T2) is the type of total functions whose domain is the set of objects of type T1 and whose range is included in the collection of objects of type T2.
Among the complex types are ptypes, which are constructed from a predicate and arguments of appropriate types as specified for the predicate. Examples are ‘man(a)’, ‘see(a,b)’, where a, b : Ind. The objects or witnesses of ptypes can be thought of as situations, states or events in the world which instantiate the type. Thus, s : man(a) can be glossed as “s is a situation which shows (or proves) that a is a man”.
In TTR, records are modeled as finite sets of fields. Each field is an ordered pair, ⟨ℓ, o⟩, where ℓ is a label (drawn from a countably infinite stock of labels) and o is an object which is a witness of some type. No two fields of a record can have the same label. Importantly, o can itself be a record.
A record type is like a record except that the fields are of the form ⟨ℓ, T⟩, where ℓ is a label as before and T is a type. The basic intuition is that a record, r, is a witness for a record type, T, just in case, for each field, ⟨ℓᵢ, Tᵢ⟩, in T, there is a field, ⟨ℓᵢ, oᵢ⟩, in r, where oᵢ : Tᵢ (note that this allows the record to have additional fields with labels not included in the fields of the record type). The types within fields in record types may depend on objects which can be found in the record that is being tested as a witness for the record type. We use a graphical display to represent both records and record types, where each line represents a field. Example (1) represents a type of record which can be used to model situations where a man runs.
(1)     [ ref : Ind
          c_man : man(ref)
          c_run : run(ref) ]
A record of this type would be of the form
(2)     [ ref = a
          c_man = s
          c_run = e ]
where a : Ind, s : man(a), and e : run(a).
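To make the witness condition concrete, the following is a minimal Python sketch under a toy encoding of our own (it is not part of TTR or of the formalism of this paper): records are plain dicts, basic types carry a checking predicate, and a dependent field type such as man(ref) is modeled as a function from the record being checked.

```python
# A toy, illustrative encoding (ours, not the formal TTR definitions):
# records are plain dicts, basic types carry a predicate that checks
# witnesses, and a dependent field type such as man(ref) is modeled as a
# function from the record being checked to a checkable type.

class BasicType:
    def __init__(self, name, pred):
        self.name, self.pred = name, pred

    def check(self, obj):
        return self.pred(obj)

Ind = BasicType('Ind', lambda o: isinstance(o, str))  # toy universe of individuals

def ptype(pred_name, *args):
    """A toy ptype whose witnesses are situation tokens tagged with the claim."""
    name = f'{pred_name}({",".join(args)})'
    return BasicType(name, lambda s: s == name)

def is_witness(record, rectype):
    """record : rectype iff every field <label, T> of the type is matched by a
    field <label, o> of the record with o : T; extra fields are allowed."""
    for label, typ in rectype.items():
        if label not in record:
            return False
        if callable(typ):              # dependent field: resolve against the record
            typ = typ(record)
        if not typ.check(record[label]):
            return False
    return True

# The record type in (1) and the record in (2):
T = {'ref': Ind,
     'c_man': lambda r: ptype('man', r['ref']),
     'c_run': lambda r: ptype('run', r['ref'])}
r = {'ref': 'a', 'c_man': 'man(a)', 'c_run': 'run(a)'}
print(is_witness(r, T))  # True
```

Note that the check passes even if the record has extra fields, in line with the parenthetical remark above.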
Types may contain manifest fields like the c_man field below:
(3)     [ ref : Ind
          c_man = s23 : man(ref) ]
Here, c_man = s23 : man(ref) is a convenient notation for c_man : man(ref)_{s23}, where man(ref)_{s23} is a singleton type. If a : T, then T_a is a singleton type and b : T_a if b = a.4 Manifest fields allow us to progressively specify what values are required for the fields in a type.
A type T1 is a subtype of a type T2, T1 ⊑ T2, just in case a : T1 implies a : T2 no matter what we assign to the basic types and ptypes. Record types introduce a restrictive notion of subtyping.
(4)     [ x : Ind, c1 : boy(x), y : Ind, c2 : dog(y), e : hug(x,y) ]  ⊑  [ x : Ind, c1 : boy(x), y : Ind, c2 : dog(y) ]
(4) holds independently of what boys and dogs there are and what kind of hugging is going on. We can tell that the record type to the left is a subtype of the one to the right simply by the fact that the set of fields of the latter is a subset of the set of fields of the former.
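The field-subset test just described can be sketched in Python under a toy encoding of our own in which record types are dicts from labels to (string representations of) types. This simplification requires shared fields to have identical types; full TTR subtyping also allows the field type in the subtype to itself be a subtype of the corresponding field type.

```python
# A sketch (ours) of the field-subset test for record subtyping: t1 is a
# subtype of t2 if every field of t2 occurs in t1 with an identical type.
# Full TTR subtyping is more general (field types may themselves be in the
# subtype relation); this is the simple sufficient check used for (4).

def is_record_subtype(t1, t2):
    """Sufficient check that t1 ⊑ t2 for flat record types as dicts."""
    return all(label in t1 and t1[label] == t2[label] for label in t2)

# The two record types in (4):
T1 = {'x': 'Ind', 'c1': 'boy(x)', 'y': 'Ind', 'c2': 'dog(y)', 'e': 'hug(x,y)'}
T2 = {'x': 'Ind', 'c1': 'boy(x)', 'y': 'Ind', 'c2': 'dog(y)'}
print(is_record_subtype(T1, T2))  # True
print(is_record_subtype(T2, T1))  # False
```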
It is possible to combine record types. An object a is of the meet type of T1 and T2, a : T1 ∧ T2, if a : T1 and a : T2. If T1 and T2 are record types, then there will always be a record type (not a meet), T3, which is necessarily equivalent to T1 ∧ T2. T3 is the merge of T1 and T2, which we write T1 ⩓ T2:
(5)     a.  [ f : T1 ] ⩓ [ g : T2 ]  =  [ f : T1, g : T2 ]
        b.  [ f : T1 ] ⩓ [ f : T2 ]  =  [ f : T1 ∧ T2 ]
If T1 and T2 are record types, the asymmetric merge T1 ⩕ T2 is a record type similar to the meet type T1 ∧ T2, except that if a label ℓ occurs in both T1 and T2, the value of ℓ in T1 ⩕ T2 will be T2.ℓ. Intuitively, it is the union of the fields of the two record types, but for any shared label, the value of the label in T2 is chosen over the value of the label in T1.
(6)     a.  [ f : T1 ] ⩕ [ g : T2 ]  =  [ f : T1, g : T2 ]
        b.  [ f : T1 ] ⩕ [ f : T2 ]  =  [ f : T2 ]
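The behaviors in (5) and (6) can be sketched under a toy encoding of our own in which record types are dicts from labels to type representations; the function names and the tuple representation of meet types are ours, not TTR notation.

```python
# A sketch (ours) of merge (5) and asymmetric merge (6) over flat record
# types encoded as dicts from labels to type representations.

def merge(t1, t2, meet=lambda a, b: ('meet', a, b)):
    """Merge as in (5): union of fields; a shared label gets the meet of
    the two field types (built here with a toy `meet` constructor)."""
    out = dict(t1)
    for label, typ in t2.items():
        out[label] = meet(out[label], typ) if label in out else typ
    return out

def asym_merge(t1, t2):
    """Asymmetric merge as in (6): union of fields; on a label clash, the
    field type from t2 wins."""
    return {**t1, **t2}

print(merge({'f': 'T1'}, {'g': 'T2'}))       # {'f': 'T1', 'g': 'T2'}
print(merge({'f': 'T1'}, {'f': 'T2'}))       # {'f': ('meet', 'T1', 'T2')}
print(asym_merge({'f': 'T1'}, {'f': 'T2'}))  # {'f': 'T2'}
```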
In addition to the various kinds of types discussed so far, TTR introduces type actions and action rules. Judgments are a kind of type action, and we use the formula in (7) to indicate that an agent, A, judges the object, o, to be of type T.
(7)     o :_A T
There can also be non-specific judgments where an agent judges that there is something of a type, that is, that the type is non-empty. When we are thinking of types as propositions, the type being non-empty corresponds to the proposition being true. In standard type theories, this is often written as ‘T true’. Thus, if ‘run(d)’ is a type of situation where the individual d runs, then the type (proposition) is true just in case there is a situation of this type. (8) represents an agent, A, judging that there is something of type T.
(8)     :_A T
A third kind of type action is to create an object of a given type. This is particularly useful when we wish to represent an agent performing some action, that is, creating a situation of a given type. In the notation, we use ‘!’ to indicate a creation. Thus, (9) represents an agent, A, creating an object of type T.
(9)     :_A T !
Note that there is no specific version of this type action—you cannot create something that already exists.
For more kinds of type acts and further discussion, see Cooper (2023, Section 2.3). Type actions are regulated by action rules of the form (10).
(10)Languages 10 00300 i001
Each φi is either an action or some other condition that must hold, and ψ must be an action. While (10) is in the form of a proof tree indicating that the φi are premises and ψ is a conclusion, we use a wavy line as opposed to a straight line in order to indicate that this is not a rule of inference but a rule indicating that the premises above the line license or afford the action below the line. Thus, there is no guarantee that the agent will carry out the action even if all the premises hold. In this way, TTR’s action rules can be seen as providing a calculus of affordances in something like the sense of Gibson (1979). We will use TTR action rules in this paper to model the parse actions introduced by DS.

2.2. Previous Work on Combining DS and TTR

TTR was first used together with DS in Purver et al. (2010), for the purpose of adding speech act context to DS parse trees. In Purver et al. (2011) and Eshghi (2015), fully incremental DS-TTR parsing was modeled, implemented, and explained. DS-TTR parsing was incorporated within a wider TTR-encoded state in Hough et al. (2020). A comprehensive overview of how DS-TTR can be applied to dialogue phenomena was provided in Eshghi et al. (2022). The current paper differs from all of these in that we use TTR to encode DS parsing itself, not just the semantic representations resulting from parsing. To contrast our approach with DS-TTR, we will refer to it here as “TTR-DS” (or “DS in TTR”). The current paper is also limited in coverage to recasting (parts of) DS in TTR, but we nevertheless think it provides the basic tools that could be useful in casting what has been done in the previous DS-TTR literature fully in TTR.

3. Objects and Types for DS Parsing

In this section, we provide TTR infrastructure for representing and processing utterances into DS-style contents, using types e and t together with TTR-style types. In this paper, we are not yet using full TTR types for representing contents of utterances (e.g., ptypes) since we are concentrating on the syntactic part of classical DS.

3.1. Tree Nodes

Cann et al. (forthcoming) describe tree nodes in Dynamic Syntax as follows (p. 3):
The inhabitants of tree-nodes are now feature structures that can be recursive, with the features being partial functions indicating the various labels and the formula inhabiting the node. Features are functions like Fo, Ty, Tn, standing for ‘formula’, ‘type’, and ‘tree node’ respectively, as well as any potentially useful linguistic features that might be relevant to the processing task.
Following this, we will say that DS tree nodes are modeled as records of the type (11).
(11)     [ ty : Type, fo : ty ]
We use the label ‘ty’ for the type and ‘fo’ for the field corresponding to the formula in DS. Normally in TTR, we would use ‘cont’ (“content”) for this field, since what appears in this field in a record of this type will be not a formula but some kind of semantic object such as a type, a function, or an individual. We will retain ‘fo’ here to make the correspondence to DS more obvious.
We do not include a field labeled ‘Tn’ because we will use paths in records and record types to identify the positions of nodes in a tree. How we do this will become clear as we proceed.
Consider a tree node that would be represented in DS as (12a). We could think of this as corresponding to the record (12b).
(12)    a.  Ty(e), Fo(john)
        b.  [ ty = e, fo = john ]
(12b) is of type (11).
Tree nodes may have daughters. There are a few ways to represent this in TTR. One is to introduce a field labeled ‘daughters’ whose value is required to be a string of trees. Another variant with ‘daughters’ is to require a list of trees. If the linear precedence of the daughters is not relevant, we can introduce a set of trees as the value labeled ‘daughters’. If the daughters always play fixed roles and their order is not relevant, then we can introduce a separate field for each daughter. This is the case in most or all variants of DS, where the daughters introduced are function and argument, and, by convention, the argument is always displayed on the left and the function on the right. We will introduce fields labeled ‘arg’ and ‘fun’ and define a (basic, recursive) type Tree, such that
(13)     a : Tree if a : [ ty : Type, fo : ty, arg : Tree, fun : Tree ] or a = ϵ (the empty tree)
Thus, what started out as a type of tree node has become a type of tree since the nodes contain their daughters; (14a) is a completed tree for John arrived in DS (ignoring complications of tense) and (14b) is the corresponding record.5
(14)Languages 10 00300 i002
We can also display (14b) as a graphical tree with the convention that the ‘arg’ field is on the left branch and the ‘fun’ field is on the right branch. If there are no branches below a node, then the values of both ‘arg’ and ‘fun’ are the empty tree, ϵ . This is shown in (15).
(15)Languages 10 00300 i003
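For concreteness, a tree such as (14b)/(15) can be sketched in Python as nested dicts, with ϵ rendered as None, together with a recursive well-formedness check corresponding to (13). The dict encoding and the names here are ours, for illustration only.

```python
# The completed tree for "John arrived" sketched as nested dicts, with the
# empty tree ϵ rendered as None and 'arg'/'fun' daughter fields as in (13).
# The encoding and names are ours, not part of the TTR formalism.

EMPTY = None  # the empty tree ϵ

john_arrived = {
    'ty': 't', 'fo': 'arrive(john)',
    'arg': {'ty': 'e', 'fo': 'john', 'arg': EMPTY, 'fun': EMPTY},
    'fun': {'ty': '(e -> t)', 'fo': 'lambda x.arrive(x)',
            'arg': EMPTY, 'fun': EMPTY},
}

def is_tree(a):
    """a : Tree per (13): the empty tree, or a record with 'ty' and 'fo'
    fields and daughters 'arg' and 'fun' that are themselves trees."""
    if a is EMPTY:
        return True
    return (isinstance(a, dict)
            and {'ty', 'fo', 'arg', 'fun'} <= set(a)
            and is_tree(a['arg']) and is_tree(a['fun']))

print(is_tree(john_arrived))  # True
```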
As mandated by DS, we will be modeling the incremental specification of a tree. In TTR, incremental specification of objects is typically handled by keeping track of types that get progressively more specified. Hence, we will not keep track of the tree as such but rather of a tree type, that is, an object of type TreeType.
(16)     T : TreeType if T ⊑ Tree
That is, T is a tree type just in case it is a subtype of Tree. Thus, the type, Tree, introduced in (13), is a tree type, since every type is a subtype of itself. Intuitively, we can also think of it as an underspecified tree. Anything that is a tree as defined by this type will be a witness and no further constraints are placed on the witness. Types, T, can be refined to form a proper subtype of T. For example, the type Tree, as defined in (13), could be refined to (17).
(17)     [ ty : Type, fo : ty, arg : Tree, fun : Tree ]
This leaves out the option of the empty tree and thus could be called the type of non-empty tree, Tree ϵ . Record types like (17) can be further refined by introducing manifest fields which tie down the object that occurs in a particular field. For example, in (18), we have required that any tree of this type should have t in the ‘ty’ field.
(18)     [ ty = t : Type, fo : ty, arg : Tree, fun : Tree ]
(18) is then the type of non-empty trees which contain t in the ‘ty’ field. This type could be represented in standard DS notation as ?Ty(t). The ‘?’ here indicates that the formula, Fo, has not yet been specified and is required to be of type t. For us, this underspecification is represented not by a question mark but by the fact that the ‘fo’ field in the type has not yet been specified; that is, it is not a manifest field. The use of manifest fields in this way means that we can incrementally specify the type as the parse proceeds.6 A successful parse results in a fully specified tree type, that is, one in which each total path in the type (that is, each path from the outermost level of the type down to a field whose type is not a record type) terminates in a manifest field.
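The notion of a fully specified tree type can be sketched under a toy encoding of our own in which a manifest field is a pair ('=', value), an embedded record type is a dict, and any other value stands for a non-manifest field:

```python
# A sketch (ours) of "fully specified": in a tree type, every total path
# must end in a manifest field. Manifest fields are pairs ('=', value),
# embedded record types are dicts, anything else is a non-manifest field.

def fully_specified(rectype):
    """True iff every total path in the type ends in a manifest field."""
    for label, val in rectype.items():
        if isinstance(val, dict):          # record-type-valued field: recurse
            if not fully_specified(val):
                return False
        elif not (isinstance(val, tuple) and val[0] == '='):
            return False                   # a total path ends in a non-manifest field
    return True

# (18): 'ty' is manifest, but 'fo', 'arg', and 'fun' are not.
t18 = {'ty': ('=', 't'), 'fo': 'ty', 'arg': 'Tree', 'fun': 'Tree'}
print(fully_specified(t18))  # False
```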
As an example, consider a DS tree for John arrived, shown in (19). The corresponding TTR type is shown in (20) (here, the content of the top t node is still not specified, but at a later stage of utterance interpretation, it would be specified as arrive(john)).
(19)Languages 10 00300 i004
(20)     [ ty = t : Type
           fo : ty
           arg : [ ty = e : Type, fo = john : ty, arg = ϵ : Tree, fun = ϵ : Tree ]
           fun : [ ty = (e→t) : Type, fo = λx:e.arrive(x) : ty, arg = ϵ : Tree, fun = ϵ : Tree ] ]
This can be rendered diagrammatically as in (21).
(21)Languages 10 00300 i005
This now represents a tree type, not a tree. We do not yet have anything corresponding to the pointer, ⋄. We will take this up in Section 4.

3.2. Expanding a Tree Node

Now let us see how we can use the above to render the rule Introduction from Cann et al. (forthcoming) (and provided in a different format in Cann et al. (2005)) in TTR:
(22)    IF      ?Ty(α)
        THEN    make(⟨↓₁⟩), go(⟨↓₁⟩), put(?Ty(β→α)), go(⟨↑₁⟩),
                make(⟨↓₀⟩), go(⟨↓₀⟩), put(?Ty(β))
        ELSE    Abort
Given that α and β in (22) are not types but rather metavariables that can be instantiated by types, we take (22) to be showing a rule schema rather than a fully specified rule.
We see parsing as the construction and successive refinement of a tree type, that is, some type which is a subtype of Tree. We use T*_i to represent the type constructed so far which is currently in focus and T*_{i+1} to represent a new type in focus at the subsequent step. We will refine this later when we come to parse states below. The initial TTR rendering of the rule schema in (22) is shown in (23).
(23)Languages 10 00300 i006
An instance of the rule schema in (23) is the DS rule in (24):
(24)    IF      ?Ty(t)
        THEN    make(⟨↓₁⟩), go(⟨↓₁⟩), put(?Ty(e→t)), go(⟨↑₁⟩),
                make(⟨↓₀⟩), go(⟨↓₀⟩), put(?Ty(e))
        ELSE    abort
We can render (24) as the TTR rule in (25).
(25)Languages 10 00300 i007
Note that (25) requires T*_i.fo to be unspecified, thus (together with T*_i.ty) implementing the DS operator “?”, which in the DS expression ?Ty(t) indicates that something of type t is required. Also, note that T*_{i+1} is a subtype of T*_i in (25); that is, any tree of type T*_{i+1} must be of type T*_i. Thus, (25) is an action rule for refining a tree type. Type refinement is an important action in the parsing process as we gain an increasing amount of information about the utterance we are parsing. The refinement in (25) introduces function–argument application as a way of finding a content of the type α.

3.3. Instantiating a Tree Node

Now consider an alternative way of refining T i * by simply introducing an object of type α (intuitively creating a non-branching node in the tree). An example of this is the DS lexical entry rule in (26), where α = e .
(26)    john:
        IF      ?Ty(e)
        THEN    put(Ty(e)),
                put(Fo(john))
        ELSE    abort
In logical terms, we can call this an instantiation of the type α . The TTR version of the generalized rule of which (26) is an instance is shown in (27), and the TTR version of (26) is shown in (28).
(27)Languages 10 00300 i008
(28)Languages 10 00300 i009
Technically, (25) and (28) are not action rules in the sense of Cooper (2023), since what comes below the wavy line is not an action. We will fix this in Section 4, where we will recast this in terms of a judgment concerning a type that the current parse state witnesses.
In the meantime we will introduce a couple of things that will enable us to state our action rules more concisely and perspicuously.

3.4. Notational Simplifications

First, note that the type in (29a) is identical to that in (29b), the merge of the type Tree with [ ty = α : Type ].
(29)    a.  [ ty = α : Type, fo : ty, arg : Tree, fun : Tree ]
        b.  Tree ⩓ [ ty = α : Type ]
Following the convention introduced in Cooper (2023), we will use the notation in (30) to represent (29b).
(30)     Tree[ty = α : Type]
Apart from being more concise, this notation has the advantage that if you change your definition of the type Tree, this notation will still be valid and carry over the change. Intuitively, we can think of (30) as the type of trees which have α in their ‘ty’ field.
A second notational (and conceptual) simplification is that we can explicitly recognize the operations on the labeled sets that constitute the types being refined in (23) and (28). We will call these operations FunApp (“function application”) and Inst (“instantiation”), respectively. We characterize these operations in (31).
(31)    a.  If α and β are types, then
            FunApp(Tree[ty = α : Type], β) =
            [ ty = α : Type
              fo = fun.fo(arg.fo) : ty
              arg : [ ty = β : Type, fo : ty, arg : Tree, fun : Tree ]
              fun : [ ty = (β→α) : Type, fo : ty, arg : Tree, fun : Tree ] ]
        b.  If α is a type and a : α, then
            Inst(Tree[ty = α : Type], a) =
            [ ty = α : Type, fo = a : ty, arg = ϵ : Tree, fun = ϵ : Tree ]
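The operations in (31) can be sketched under a toy encoding of our own (types as strings, manifest fields as ('=', value) pairs). Note that our fun_app takes the node type α as an explicit parameter, whereas in (31a) it is read off the first argument.

```python
# A sketch (ours) of the operations in (31). Types are strings, manifest
# fields are ('=', value), and 'Tree' stands for the unrefined type Tree.

def inst(alpha, a):
    """Inst, as in (31b): close off a node of type alpha with content a and
    empty daughters (rendered here as the token 'eps')."""
    return {'ty': ('=', alpha), 'fo': ('=', a),
            'arg': ('=', 'eps'), 'fun': ('=', 'eps')}

def fun_app(alpha, beta):
    """FunApp, as in (31a): refine a node of type alpha into a function
    daughter of type (beta -> alpha) applied to an argument daughter of
    type beta; the mother's content is the application fun.fo(arg.fo)."""
    return {'ty': ('=', alpha), 'fo': ('=', 'fun.fo(arg.fo)'),
            'arg': {'ty': ('=', beta), 'fo': 'ty',
                    'arg': 'Tree', 'fun': 'Tree'},
            'fun': {'ty': ('=', f'({beta} -> {alpha})'), 'fo': 'ty',
                    'arg': 'Tree', 'fun': 'Tree'}}

print(inst('e', 'john')['fo'])         # ('=', 'john')
print(fun_app('t', 'e')['fun']['ty'])  # ('=', '(e -> t)')
```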
Given these additions, we can characterize (25) and (28) more succinctly as in (32).
(32)Languages 10 00300 i010
FunApp includes the effect of the operation which Howes and Gibson (2021) refer to as Elimination; that is, the operation which creates the formula representing function application in the mother node. If desired, this could be separated out as a separate action rule of the kind discussed in Section 4.

4. Parsing by Action Rules

In this section, we will make (32a,b) into proper action rules by ensuring that what comes below the wavy line is an action.

4.1. Parse States

A parse state in DS is a set of triples ⟨T, W, A⟩, where T is a (possibly partial) propositional tree, W is the string of words so far parsed, and A is the sequence of actions (computational and lexical) used to construct T from W (Cann et al., 2007). Cann et al. say (p. 340): “A final, acceptable parse state is one in which there is a complete propositional tree T, i.e., one with no requirements outstanding, W is the complete parsed string and A the complete sequence of actions deriving T from W, taking each word in W in order.”
To define the TTR counterpart to the DS parse state, we begin by defining a type that we choose to call Edge7 as (33), whose witnesses correspond to a triple in the DS parse state.
(33)     [ treety : TreeType
           ⋄ : Path_treety
           actions : ActionRuleID*
           string : Lex* ]
where π : Path_T for a record type T if π ∈ paths(T) (see p. 34 of Cooper (2023)). In addition to the triple ⟨T, W, A⟩, (33) also includes a TTR counterpart to the currently active node ⋄ in DS. In TTR, the value of ⋄ is a path in the tree type pointing to the active node.
We will use u as a variable over phonological events corresponding to lexical items, for example, u : “John”. We use s as a variable over strings of such events, s : Lex*, where Lex is the type of lexical events, “John”∨“arrives”∨…. We use s to represent the string of lexical items so far in the utterance being processed and u to represent the current lexical event to be appended to s.
If s is a string, let InitialSubstring_s be the type whose witnesses are the initial substrings of s (including s itself). For each string of lexical events, s, we introduce a type ParseState_s whose witnesses are of the type in (34) (i.e., an edge whose string is an initial substring of s).
(34)     Edge[string : Lex* ∧ InitialSubstring_s]
We make the parse state cumulative, that is, (35) holds.
(35)     If q : ParseState_s and u : Lex, then q : ParseState_{s⌢u}
An initial state in DS is P₀ = {⟨{?Ty(t), ⋄}, ∅, ∅⟩}. One possibility would be to let the parse state be a set of objects of type Edge. However, it is more in line with TTR to keep track of objects of ParseState types. In TTR-DS, we encode this initial state by the judgment in (36), corresponding to the DS Axiom rule (Cann et al., 2005).
(36)     [ treety = Tree[ty = t : Type]
           string = ϵ
           actions = ϵ
           ⋄ = ϵ ]  :_A  ParseState_ϵ

4.2. Parsing as Judgment Actions

Since parsing involves modifying (specifying) the active node, which may occur at arbitrary depth in the tree type, we will need versions of Inst and FunApp that are relative to a path at which the updates are carried out. These are given in (37). In characterizing these rules, we will use a notation for converting a path into a record type which has that path. Let π be the path ℓ₁.ℓ₂.….ℓₙ. Then [π : T] is [ ℓ₁ : [ ℓ₂ : … [ ℓₙ : T ] ] ]. T1 ⩕ T2 is the asymmetric merge of the types T1 and T2 defined on p. 79 of Cooper (2023). Intuitively, this adds the paths in T2 to T1. If the two types have a path in common, then the value of that path in T2 will replace that of T1.
(37)    a.  If α is a type, a : α, T : TreeType, π : Path_T, and T.π ⊑ Tree[ty = α : Type],
            then Inst(T, a, π) = T ⩕ [π : [ ty = α : Type, fo = a : ty, arg = ϵ : Tree, fun = ϵ : Tree ]]
        b.  If α and β are types, T : TreeType, π : Path_T, and T.π ⊑ Tree[ty = α : Type], then
            FunApp(T, β, π) = T ⩕ [π : [ ty = α : Type
                                         fo = fun.fo(arg.fo) : ty
                                         arg : [ ty = β : Type, fo : ty, arg : Tree, fun : Tree ]
                                         fun : [ ty = (β→α) : Type, fo : ty, arg : Tree, fun : Tree ] ]]
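The two ingredients of (37), wrapping a type under a path and merging the result in asymmetrically, can be sketched under a toy dict encoding of our own. The simplified path-aware merge below only approximates the real operation defined on p. 79 of Cooper (2023); the function names are ours.

```python
# A sketch (ours) of the path-relative machinery in (37): wrap a node type
# under a path, then asymmetrically merge it into the tree type so that the
# update lands at the active node.

def path_to_type(path, T):
    """Wrap T under the labels of `path`, giving {l1: {l2: ... {ln: T}}}."""
    for label in reversed(path):
        T = {label: T}
    return T

def asym_merge_paths(t1, t2):
    """Simplified path-aware asymmetric merge: add t2's paths to t1; where
    a path is shared, t2's value replaces t1's."""
    out = dict(t1)
    for label, typ in t2.items():
        if label in out and isinstance(out[label], dict) and isinstance(typ, dict):
            out[label] = asym_merge_paths(out[label], typ)
        else:
            out[label] = typ
    return out

def inst_at(tree_type, node_type, path):
    """In the spirit of Inst(T, a, pi) in (37a): place a node type at path pi."""
    return asym_merge_paths(tree_type, path_to_type(path, node_type))

base = {'ty': ('=', 't'), 'fo': 'ty', 'arg': 'Tree', 'fun': 'Tree'}
node = {'ty': ('=', 'e'), 'fo': ('=', 'john')}
print(inst_at(base, node, ['arg'])['arg'])  # {'ty': ('=', 'e'), 'fo': ('=', 'john')}
```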
We can now render the rules in (32) as proper TTR action rules that can be used for parsing, as seen in (38) (we use the rule ID IntroductionET for the instantiation of the rule schema Introduction where α is e and β is t).
(38)Languages 10 00300 i011
As an illustration, we will show the initial stages of parsing an utterance beginning with “John”. The IntroductionET rule (38a) together with the initial state (36) licenses the judgment in (39).
(39)     [ treety = Tree[ ty = t : Type
                          fo = fun.fo(arg.fo) : ty
                          arg : [ ty = e : Type, fo : ty, arg : Tree, fun : Tree ]
                          fun : [ ty = (e→t) : Type, fo : ty, arg : Tree, fun : Tree ] ]
           string = ϵ
           actions = ⟨IntroductionET⟩
           ⋄ = arg ]  :_A  ParseState_ϵ
Given this judgment, if at this point A judges that “John” was uttered (u_John :_A “John”), (38b) licenses (40).
(40)     [ treety = Tree[ ty = t : Type
                          fo = fun.fo(john) : ty
                          arg : [ ty = e : Type, fo = john : ty, arg = ϵ : Tree, fun = ϵ : Tree ]
                          fun : [ ty = (e→t) : Type, fo : ty, arg : Tree, fun : Tree ] ]
           string = u_John
           actions = ⟨IntroductionET, John⟩
           ⋄ = arg ]  :_A  ParseState_“John”

4.3. Action Rules That Move the Pointer

Howes and Gibson (2021) mention two rules that move the pointer: Completion and Anticipation. Completion allows the pointer to be moved from a completed node to its mother. Anticipation allows the pointer to be moved to any daughter which is incomplete. In our proposal, the “moving” of the pointer is represented by changing the path in the field labeled ‘⋄’. Thus, taking (40) as an example, “moving up to the mother” would involve changing the value for ‘⋄’ from ‘arg’ to ‘ϵ’ (the empty path). This corresponds to the action of Completion. This would be appropriate in the case of (40) because all the paths from ‘arg’ in the tree type are specified; that is, they end with a manifest field which determines their value in any tree which is of this type. Thus, we would generate a new edge in the parse state, which is (41).
(41)     [ treety  = Tree [ ty  = t : Type
                            fo  = fun.fo(john) : ty
                            arg : [ ty  = e : Type
                                    fo  = john : ty
                                    arg = ϵ : Tree
                                    fun = ϵ : Tree ]
                            fun : [ ty  = (e→t) : Type
                                    fo  : ty
                                    arg : Tree
                                    fun : Tree ] ]
           string  = u_John
           actions = ⟨IntroductionET, John, Completion⟩
           ⋄       = ϵ ]  :_A  ParseState_John
Now notice that there are paths from ‘fun’ in the type labeled ‘treety’ that are not specified, namely, ‘fun.fo’, ‘fun.arg’, and ‘fun.fun’. This then enables Anticipation to replace ‘ϵ’ with ‘fun’ as the value labeled ‘⋄’, as in (42).
(42)     [ treety  = Tree [ ty  = t : Type
                            fo  = fun.fo(john) : ty
                            arg : [ ty  = e : Type
                                    fo  = john : ty
                                    arg = ϵ : Tree
                                    fun = ϵ : Tree ]
                            fun : [ ty  = (e→t) : Type
                                    fo  : ty
                                    arg : Tree
                                    fun : Tree ] ]
           string  = u_John
           actions = ⟨IntroductionET, John, Completion, Anticipation⟩
           ⋄       = fun ]  :_A  ParseState_John
Completion and Anticipation are allowed to “move” the pointer one level at a time, that is, respectively, remove the last label in a path or add a label within the current record type at the end of the path. This corresponds to moving to a mother or daughter node.
The rules Completion and Anticipation can be characterized as action rules which allow an agent to add edges to the witnesses of the type ParseState_s for some string s without observing any element to be added to s. That is, they are triggered by reasoning about the parse state so far, without the input of any new data. A (partial) path π in a record type, T, is said to be fully specified just in case every total path π.π′ in T is such that T.π.π′ is a singleton type (normally indicated by a manifest field). This means that for any witness, r : T, r.π.π′ is determined to be a particular object. We represent this in our action rules by the condition ‘T specified’; if T is not fully specified, we use the condition ‘T underspecified’. If r is a record, we use r[π = a] to represent a record exactly like r except that the value at the path π is a (cf. Cooper (2023), p. 323). We use π[−1] to represent the path π with its last element removed. As usual, we use ℓ to represent a single label. Finally, we use the empty path ϵ, such that for any record (or record type) r, r.ϵ = r. We can now define Completion and Anticipation as the action rules in (43).
(43)Languages 10 00300 i012
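The pointer-movement rules can be sketched in Python, encoding tree types as nested dicts (with None marking a field that is not yet manifest) and pointer paths as lists of labels; fully_specified, completion, and anticipation are our own illustrative names for the conditions and actions, not the paper's definitions.

```python
def fully_specified(treety, path):
    """A path is fully specified if every field on and below it has a
    value (here: is not None), mimicking 'ends in a manifest field'."""
    node = treety
    for lab in path:
        node = node[lab]
    if isinstance(node, dict):
        return all(v is not None and (not isinstance(v, dict) or fully_specified(v, []))
                   for v in node.values())
    return node is not None

def completion(path):
    """Move the pointer to the mother: drop the last label (cf. (43))."""
    return path[:-1]

def anticipation(treety, path, daughter):
    """Move the pointer to an incomplete daughter (cf. (43))."""
    new_path = path + [daughter]
    assert not fully_specified(treety, new_path), "daughter already complete"
    return new_path

# Tree type after parsing "John" (cf. (40)): 'arg' is complete, 'fun' is not.
treety = {"ty": "t", "fo": "fun.fo(john)",
          "arg": {"ty": "e", "fo": "john", "arg": "empty", "fun": "empty"},
          "fun": {"ty": "(e->t)", "fo": None, "arg": None, "fun": None}}

p = ["arg"]
assert fully_specified(treety, p)    # licenses Completion, as in (41)
p = completion(p)                    # pointer at the root
p = anticipation(treety, p, "fun")   # pointer at 'fun', as in (42)
print(p)
```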

4.4. Parsing a Complete Sentence

To parse “John arrived” from the state in (42), we need a rule for “arrived”, constructed similarly to that for “John”:
(44)Languages 10 00300 i013
Applying this rule followed by Completion yields the state in (45):
(45)     [ treety  = Tree [ ty  = t : Type
                            fo  = arrive(john) : ty
                            arg : [ ty  = e : Type
                                    fo  = john : ty
                                    arg = ϵ : Tree
                                    fun = ϵ : Tree ]
                            fun : [ ty  = (e→t) : Type
                                    fo  = λx.arrive(x) : ty
                                    arg = ϵ : Tree
                                    fun = ϵ : Tree ] ]
           string  = u_John u_arrived
           actions = ⟨IntroductionET, John, Completion, Anticipation, Arrived, Completion⟩
           ⋄       = ϵ ]  :_A  ParseState_{John arrived}
The tree in this state has no underspecified fields, and hence, Anticipation does not apply. This is thus a final parse state that has “no requirements outstanding”8.

4.5. DS Basic Actions as TTR Action Rules

Above, we have chosen to model DS “macro” actions (such as John and IntroductionET) directly as TTR action rules. Alternatively, one could follow DS even more closely by modeling macro actions in terms of lower-level basic actions, including make(X) for creating new nodes, go(X) for moving the pointer (where X is a tree address), and put(Y) for annotating nodes (where Y is a node decoration, e.g., a formula or a type). TTR-DS counterparts of these operations are shown in (46). Note that put has been divided into two separate rules, PutFo(α) and PutTy(α), and that Make and Go take paths (not just labels) as arguments, allowing for slightly briefer formulations of macro rules.
(46)Languages 10 00300 i014
One could imagine using these basic actions to spell out macro actions and devising action rules which execute sequences of basic actions. However, we will not pursue this approach further in this paper.
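As a rough illustration of the basic actions in (46), here is a Python sketch under our own encoding (tree types as nested dicts with None for unspecified fields, pointer paths as lists of labels); make, go, put_ty, and put_fo are illustrative names, not the paper's definitions.

```python
def _node_at(treety, path):
    """Follow a path of labels down the nested-dict tree type."""
    node = treety
    for lab in path:
        node = node[lab]
    return node

def make(state, path):
    """Create a fresh, unspecified node at `path` (cf. Make)."""
    parent = _node_at(state["treety"], path[:-1])
    parent[path[-1]] = {"ty": None, "fo": None, "arg": None, "fun": None}
    return state

def go(state, path):
    """Move the pointer to `path` (cf. Go)."""
    state["pointer"] = path
    return state

def put_ty(state, ty):
    """Annotate the pointed node with a type (cf. PutTy)."""
    _node_at(state["treety"], state["pointer"])["ty"] = ty
    return state

def put_fo(state, fo):
    """Annotate the pointed node with a formula (cf. PutFo)."""
    _node_at(state["treety"], state["pointer"])["fo"] = fo
    return state

# Spelling out part of a John-style macro as a sequence of basic actions:
state = {"treety": {"ty": "t", "fo": None, "arg": None, "fun": None},
         "pointer": []}
make(state, ["arg"])
go(state, ["arg"])
put_ty(state, "e")
put_fo(state, "john")
print(state["treety"]["arg"])
```

A macro action is then just a fixed sequence of such basic actions, which is the alternative architecture mentioned above.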

4.6. Unfixed Nodes

A feature of record types is that they place restrictions on certain labelled fields of the records which are their witnesses. However, there can also be fields in the witnessing record which do not have labels mentioned in the type. Consider the type in (47).
(47)     [ x : T₁
           y : T₂ ]
If a : T₁ and b : T₂, then the record in (48) would be a witness for (47).
(48)     [ x = a
           y = b ]
However, we could add any further fields to (48) as long as they meet the general requirements of a record, that is, that each label only occurs once and each value is a witness for some type. Such a record is still a witness for (47), since it meets the requirements of the fields mentioned in the type. For example, (49), where c is an object of some type, would also be a witness for (47).
(49)     [ x = a
           y = b
           z = c ]
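The point made by (47)–(49) can be sketched computationally: a hypothetical is_witness check that inspects only the fields mentioned in the type will accept records with extra fields. The toy types T1 and T2 and the membership test are our own assumptions, not part of the paper's formalism.

```python
def is_witness(record, rectype, of_type):
    """record : rectype iff every field mentioned in the type is present
    in the record with a value of the required type; any extra fields in
    the record are simply ignored."""
    return all(label in record and of_type(record[label], ty)
               for label, ty in rectype.items())

# Toy types T1 and T2, checked by a simple membership test.
of_type = lambda value, ty: value in {"T1": ["a"], "T2": ["b"]}.get(ty, [])

rectype = {"x": "T1", "y": "T2"}                                     # (47)
print(is_witness({"x": "a", "y": "b"}, rectype, of_type))            # (48)
print(is_witness({"x": "a", "y": "b", "z": "c"}, rectype, of_type))  # (49)
```

Both checks succeed: the extra field z in the second record does not disturb witnesshood, which is exactly what the treatment of unfixed nodes below exploits.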
We will take advantage of this in our characterization of DS’s unfixed nodes. The type Tree requires the presence of fields labelled ‘ty’, ‘fo’, ‘arg’, and ‘fun’. As long as a record meets the requirements for these fields, it will be of type Tree even if it contains additional fields. We will now introduce action rules which allow the agent to introduce tree types with a label ‘unfixed’. The occurrence of ‘unfixed’ fields is governed by two action rules: AdjunctionIntro9, which introduces an ‘unfixed’ field into a tree type, and AdjunctionElim10, which removes an ‘unfixed’ field from a tree type and merges its type with a compatible type that has been found on another path within the tree type. The two action rules are given in (50). We will use the notation π₁ ⊑ π₂ to mean that π₁ is an initial subpath of π₂ (see Cooper, 2023, p. 151). Finally, if r is a record and ℓ is a label in r (that is, one of the labels at the topmost level of r), then r[−ℓ] represents the record like r except that the field labelled ℓ has been removed.11 Here, we use standard “straight line” inferences among the conditions for the action rules: if φ is above the line and ψ is below the line, this represents that you can reason from φ to ψ. Any variables in φ are universally quantified.
(50)Languages 10 00300 i015
Note that only one unfixed node will be allowed for any tree node since AdjunctionIntro involves a merging of types and only one ‘unfixed’ field can be introduced at any level in a record type. Thus, like the logic of finite trees, the nature of TTR types imposes this constraint on the system.
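A rough Python sketch of the two rules in (50), under our own dict encoding of tree types: adjunction_intro adds an ‘unfixed’ field (only one per level, since a dict, like a record type, cannot repeat a label), and adjunction_elim merges it with a compatible node and removes it. All function names and the compatibility test are illustrative assumptions.

```python
def adjunction_intro(treety, unfixed_node):
    """Add an 'unfixed' field alongside the Tree fields; at most one is
    possible per level, since the label 'unfixed' cannot occur twice."""
    assert "unfixed" not in treety
    return dict(treety, unfixed=unfixed_node)

def compatible(node, unfixed):
    """The unfixed node's specified fields must not clash with the target's."""
    return all(node.get(k) in (None, v) for k, v in unfixed.items() if v is not None)

def adjunction_elim(treety, target):
    """Merge the unfixed node into a compatible node at label `target`
    and remove the 'unfixed' field (cf. AdjunctionElim)."""
    unfixed = treety["unfixed"]
    assert compatible(treety[target], unfixed)
    merged = dict(treety[target], **{k: v for k, v in unfixed.items() if v is not None})
    new = {k: v for k, v in treety.items() if k != "unfixed"}
    new[target] = merged
    return new

# An unfixed 'john' node, later fixed at the 'arg' daughter.
leaf = {"ty": "e", "fo": "john", "arg": "empty", "fun": "empty"}
t = {"ty": "t", "fo": None,
     "arg": {"ty": "e", "fo": None, "arg": None, "fun": None},
     "fun": {"ty": "(e->t)", "fo": None, "arg": None, "fun": None}}
t = adjunction_intro(t, leaf)
t = adjunction_elim(t, "arg")
print("unfixed" in t, t["arg"]["fo"])
```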

4.7. Generalized Parsing Rules

In DS, lexical entries are provided in the form of parsing rules, and so far, we have followed this convention. However, it is perfectly possible to provide more general parsing rules that are not specific to any particular lexical entry and instead move information pertaining to particular lexical entries to a more “declarative” lexicon. Thus, we may generalize from (38b) (repeated below in (51) for convenience) to a general rule and a lexical entry (52).
(51)Languages 10 00300 i016
(52)Languages 10 00300 i017
This allows us to straightforwardly add more lexical entries, such as the one for “arrived” in (53) (again ignoring tense).
(53)     [ phon   = arrived
           ty     = e→t
           fo     = λx.arrive(x)
           lex-id = Arrived ]  :_A  LexEntry
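The move from word-specific rules to a generic rule plus a declarative lexicon can be sketched as follows; the LEXICON entries mirror (52) and (53), tree types are again encoded as nested dicts with None for unspecified fields, and lexical_rule is our own illustrative formulation, not the paper's.

```python
# A declarative lexicon, mirroring the entries in (52) and (53).
LEXICON = [
    {"phon": "John",    "ty": "e",      "fo": "john",                "lex_id": "John"},
    {"phon": "arrived", "ty": "(e->t)", "fo": "lambda x. arrive(x)", "lex_id": "Arrived"},
]

def lexical_rule(state, utterance):
    """Generic rule: if the pointed node's type matches a lexical entry
    for the observed word, annotate the node and record the action."""
    node = state["treety"][state["pointer"]]
    for entry in LEXICON:
        if entry["phon"] == utterance and entry["ty"] == node["ty"]:
            node = dict(node, fo=entry["fo"], arg="empty", fun="empty")
            return dict(state,
                        treety=dict(state["treety"], **{state["pointer"]: node}),
                        string=state["string"] + [utterance],
                        actions=state["actions"] + [entry["lex_id"]])
    return None  # no entry applies

# Pointer at the e-node after IntroductionET, as in (39).
state = {"treety": {"ty": "t", "fo": "fun.fo(arg.fo)",
                    "arg": {"ty": "e", "fo": None, "arg": None, "fun": None},
                    "fun": {"ty": "(e->t)", "fo": None, "arg": None, "fun": None}},
         "string": [], "actions": [], "pointer": "arg"}
s = lexical_rule(state, "John")
print(s["treety"]["arg"]["fo"], s["actions"])
```

Adding a new word then only requires a new LEXICON entry, with no change to the parsing rule itself.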

4.8. DS Parsing with Different Semantics

Here, we have adopted DS-TTR types (for example, e and t). It would of course be possible to reformulate TTR-DS with the kind of semantic types used in Cooper (2023) or Larsson (2020), but we leave this for future work. The way we have set things up here also allows for other semantic representations without the need to change the TTR-DS parsing rules. For example, Sadrzadeh et al. (2018) explore using DS with vector space semantics. In line with this, given that vectors can be construed as records, one could attempt to recreate vector semantics in TTR. This, however, is another topic for future work.

5. Conclusions

Above, we have provided the fundamentals of TTR infrastructure to enable DS-style parsing. We hope that this will eventually allow us to work out (and implement) a detailed grammar based on these ideas.
Furthermore, recasting DS in TTR has brought our attention to a few theoretical issues. Firstly, we have noted the possibility of replacing parsing rules specific to lexical entries with generalized parsing rules and lexical entries. Secondly, we have considered the possibility of including different semantic representations in TTR-DS. Finally, having encoded DS in TTR, one might also consider whether we could do without the explicit list of actions and just keep a history of parse states.
One reason for recasting DS in TTR is that TTR bills itself as a foundation for various linguistic theories, allowing the integration of analyses in different theories or pointing out relations between them. The hope is that there are elements in TTR which DS would find useful, in addition to the use of TTR for semantic representation. In general, the more theoretical light we can shed on the incremental processing of language from different perspectives, the better.
In current work (Larsson et al., 2023, 2025), we are exploring a mapping of TTR onto the Semantic Pointer Architecture (SPA) (Eliasmith, 2013), a neural implementation of a Vector Symbolic Architecture (Plate, 2003). If this succeeds, casting DS fully in TTR would, by transitivity, automatically provide a neural implementation of DS. This would give a neurolinguistic perspective on the psycholinguistic motivation of the original DS model.

Author Contributions

Both authors contributed equally to all aspects of this article. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by a grant from the Swedish Research Council (VR project 2014-39) for the establishment of the Centre for Linguistic Theory and Studies in Probability (CLASP) at the University of Gothenburg.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

Notes

1
Some preliminary thoughts about how to deal with coarticulation in a type-based approach are given by Cooper (2023).
2
We are grateful to Chris Howes and Eleni Gregoromichelaki for important discussion of details of Dynamic Syntax.
3
That is, DS which does not use TTR for meaning representations.
4
Cooper (2023) uses a modification of this characterization of singleton types: if a is of some type, then T_a is a singleton type, and b : T_a just in case b : T and b = a. This allows for there to be types T_a where a is not of type T; such types have no witnesses.
5
Here, following classical DS, we use Montague’s types, e (for entities), t (for truth values), and (e→t) (which Montague would write ⟨e, t⟩, for functions from entities to truth values), in addition to TTR types.
6
It also has the consequence that the operation of Thinning (referred to, for example, by Howes and Gibson (2021)) is no longer necessary. Thinning removes the question mark when a node has been fully specified, whereas we rely on the fact that the type is underspecified to indicate the type that we are required to provide a witness of.
7
This corresponds to the notion of Edge in the computational linguistic literature on chart parsing.
8
This is not to say that the sentence could not continue (e.g., “John arrived yesterday”). To handle this, one would need additional rules that would generate an alternative sequence of parse states so that a non-final parse state would be available after parsing “John arrived”.
9
Corresponding to *Adjunction in DS.
10
Corresponding to Merge in DS. We avoid the term “merge” here so as not to confuse it with the TTR merge operation on types.
11
For readability, we omit the appending of the rule ID to the actions field from the effects of this and the following rule.

References

  1. Cann, R., Kempson, R., Gregoromichelaki, E., & Howes, C. (2026). Dynamic Syntax: Foundations and developments. In Oxford handbook of the philosophy of linguistics. Oxford University Press. [Google Scholar]
  2. Cann, R., Kempson, R., & Purver, M. (2007). Context and well-formedness: The dynamics of ellipsis. Research on Language and Computation, 5, 333–358. [Google Scholar] [CrossRef]
  3. Cann, R., Kempson, R. M., & Marten, L. (2005). The dynamics of language: An introduction. Elsevier. [Google Scholar]
  4. Cooper, R. (2023). From perception to communication: A theory of types for action and meaning. Oxford University Press. Available online: https://global.oup.com/academic/product/from-perception-to-communication-9780192871312 (accessed on 30 November 2025).
  5. Cooper, R., & Ginzburg, J. (2015). Type theory with records for natural language semantics. In S. Lappin, & C. Fox (Eds.), The handbook of contemporary semantic theory (2nd ed., pp. 375–407). Wiley-Blackwell. [Google Scholar]
  6. Eliasmith, C. (2013). How to build a brain: A neural architecture for biological cognition. Oxford University Press. [Google Scholar]
  7. Eshghi, A. (2015). DS-TTR: An incremental, semantic, contextual parser for dialogue. In Proceedings of the 19th workshop on the semantics and pragmatics of dialogue—poster abstracts. SEMDIAL. [Google Scholar]
  8. Eshghi, A., Gregoromichelaki, E., & Howes, C. (2022). Action coordination and learning in dialogue. In Probabilistic approaches to linguistic theory. CSLI Publications. [Google Scholar]
  9. Gibson, J. J. (1979). The ecological approach to visual perception. Houghton, Mifflin and Company. (Also available in a Classic Edition published by Psychology Press, 2015.). [Google Scholar]
  10. Hough, J., Jamone, L., Schlangen, D., Walck, G., & Haschke, R. (2020). A types-as-classifiers approach to human-robot interaction for continuous structured state classification. CLASP Papers in Computational Linguistics, 2, 28–40. [Google Scholar]
  11. Howes, C., & Gibson, H. (2021). Dynamic syntax. Journal of Logic, Language and Information, 30(2), 263–276. [Google Scholar] [CrossRef]
  12. Kempson, R., Meyer-Viol, W., & Gabbay, D. (2001). Dynamic syntax: The flow of language understanding. Blackwell. [Google Scholar]
  13. Larsson, S. (2020). Discrete and probabilistic classifier-based semantics. In Proceedings of the Probability and Meaning Conference (PaM 2020) (pp. 62–68). Association for Computational Linguistics. [Google Scholar]
  14. Larsson, S., Cooper, R., Ginzburg, J., & Lücking, A. (2023). TTR at the SPA: Relating type-theoretical semantics to neural semantic pointers. In S. Chatzikyriakidis, & V. de Paiva (Eds.), Proceedings of the 4th natural logic meets machine learning workshop (NALOMA23). Association for Computational Linguistics. [Google Scholar]
  15. Larsson, S., Ginzburg, J., Cooper, R., & Lücking, A. (2025). Finding answers to questions: Bridging between type-based and computational neuroscience approaches. In Proceedings of the 16th international conference on computational semantics (pp. 118–126). Association for Computational Linguistics. [Google Scholar]
  16. Plate, T. A. (2003). Holographic reduced representation. CSLI Publications. [Google Scholar]
  17. Purver, M., Eshghi, A., & Hough, J. (2011). Incremental semantic construction in a dialogue system. In Proceedings of the Ninth International Conference on Computational Semantics (IWCS 2011). Association for Computational Linguistics. [Google Scholar]
  18. Purver, M., Gregoromichelaki, E., Meyer-Viol, W., & Cann, R. (2010). Splitting the ‘I’s and crossing the ‘You’s: Context, speech acts and grammar. In Proceedings of the fourteenth workshop on the aspects of semantics and pragmatics of dialogue (pp. 43–50). Association for Computational Linguistics. [Google Scholar]
  19. Sadrzadeh, M., Purver, M., Hough, J., & Kempson, R. (2018). Exploring semantic incrementality with dynamic syntax and vector space semantics. arXiv, arXiv:1811.00614. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Cooper, R.; Larsson, S. Dynamic Syntax in a Theory of Types with Records. Languages 2025, 10, 300. https://doi.org/10.3390/languages10120300

