Article

Morphological Dependencies in English

School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Dugald Stewart Building, 3 Charles St., Edinburgh EH8 9AD, UK
Languages 2025, 10(12), 289; https://doi.org/10.3390/languages10120289
Submission received: 11 August 2025 / Revised: 11 November 2025 / Accepted: 14 November 2025 / Published: 27 November 2025
(This article belongs to the Special Issue The Development of Dynamic Syntax)

Abstract

This paper presents accounts of preposition selection and agreement in English within Dynamic Syntax. To achieve this, I introduce two new, non-semantic labels into the tree language: Ph, which takes as values phonological forms modelled as ordered sets of phonemes, and Md, which takes as values sets of Ph values, the phonological forms of certain words and forms with which a particular word can collocate. While these labels are not grounded in semantic concepts like type and formula, they are nevertheless grounded in phonological concepts and thus ultimately in phonetic phenomena. These labels are introduced through the parsing of words and are used to constrain, in a straightforward manner, the forms of other words they can felicitously appear with, such as nouns and certain determiners, or verbs and their selected prepositions or prepositional phrases. It is shown how the remnant agreement and selection patterns in modern (standard) English can be captured without any recourse to traditional categories such as gender, person and number. Certain disagreement phenomena are discussed, as are the broader implications of the approach.

1. Introduction

One of the notable characteristics of inflecting languages is that the morphological forms of words often depend on the properties of other words such as co-variation of form in agreement (concord) relations or the lexical selection for particular case-marked or prepositional complements. The ancient Indo-European languages exhibit extensive patterns of such relations and many of their modern descendants still retain some of these. Even in morphologically restricted languages like modern English, remnant selection and agreement patterns are found. While case-marking is restricted to the personal and relative/interrogative pronouns and shows at most a three-way distinction (nominative, accusative and genitive) with only accusative forms appearing as verbal complements, many verbs, adjectives and nouns select for particular prepositional complements as exemplified in (1).
(1)a.Maire is looking for you.
b.The answer to your request is “no”!
c.I’m not brilliant at singing.
Vestigial agreement also remains in standard Englishes. In the verbal system, main verbs and the auxiliary verbs have and do show a single person–number distinction in the present tense between third person singular forms ending in -s and the verbal base forms used elsewhere. The copula be, however, retains more distinctions, having a number distinction in the preterite (was/were) and three person–number distinctions in the present tense: first singular (am), third singular (is), and plural and second person (are). Number agreement exists between certain determiners like demonstratives and their nouns, and gender and number are visible in anaphoric relations between pronouns and their antecedents.
It might seem odd to take morphological dependencies in English as the starting point for an attempt to account for such dependencies within Dynamic Syntax (DS), but the fact that it is so restricted in extent means that there are fewer word forms and constructions to account for. To analyse morphologically richer languages like Latin and Ancient Greek requires detailed discussion of the nature of nominal and verbal paradigms, the significance of syncretism, the type and number of dependent relationships within the language and so on, none of which are particularly relevant to English. The approach taken in this paper can, however, be extended to such languages, although the basis for their analysis is the inflectional morpheme rather than the word.
There are a few analyses, but not many, of morphological dependencies within DS (see, e.g., Kiaer (2014); Kempson and Kiaer (2009, 2010); Turner (2016); Christopher (2021) on case-marking and Cann et al. (2005, ch. 7); Marten (2005); McCormack (2008); Lucas (2014) on agreement). This is, perhaps, unsurprising, given the emphasis in DS on representing meanings as the sole output of parsing strings of words in context. Most of the labels that decorate nodes in DS tree representations are grounded in semantic concepts, such as Ty and Fo for type and formula, respectively, or in relations between nodes in a semantic tree, such as Tn giving treenode addresses and modal operations over these (Kempson et al., 2001, ch. 1; Cann et al., 2005, ch. 2; Howes & Gibson, 2021). There has, therefore, been some reluctance to introduce labels that have a purely morpho-syntactic function. It would, of course, be possible to introduce labels that mimic the traditional analyses of case and agreement, such as are often found in other syntactic frameworks, which postulate features or functional heads like Agr or so-called φ-features for person, number and gender. Such apparently unquestioning importation of the traditional categories is not necessarily that enlightening as to the cognitive status of such concepts within any particular language. Modern English is a case in point. Given the very restricted applicability of case, gender, person and number within the vocabulary of the language as sketched above, it is questionable whether such categories would serve any significant function within the grammar of English. A possible exception is number within the nominal system, which permits certain generalisations to be made across an unrestricted domain (e.g., ‘plural forms of determiners can only modify nouns with a plural form’).
But even here, the number of forms that show a number distinction that are not nouns is restricted to four—the demonstratives this, these, that, and those—while in other languages, number is exhibited on adjectives and other elements in the nominal domain as well as nouns. It is, therefore, unlikely that positing a category Num would serve any significant purpose in the grammar, nor indeed would labels that encode categories like case, gender or person. I thus eschew the use of such concepts in my analysis, relying instead on specifying constraints on collocational restrictions directly in terms of the phonological forms of particular expressions.
In the next section, I introduce a new, non-semantic, label Ph into the DS tree vocabulary that takes as values phonological forms, here represented as ordered sets of phonemes, thus introducing (lexical) phonological information directly into DS representations.1 This innovation is used to provide an analysis of ‘particle’ verbs and those that take specific prepositional phrase complements. In Section 3, developing an idea concerning functional categories sketched in Cann (2000), another new label Md (for morphological dependency) is introduced. This takes as values sets of Ph values, i.e., sets of ordered sets of phonemes. These sets encode phonological forms of closed-class expressions such as personal pronouns and determiners. This label is then used to specify agreement between nouns and other expressions they appear with. The discussion begins with gender and number agreement between anaphoric pronouns and their antecedents, before moving on to an account of dependencies between nouns and associated determiners. The section ends with an analysis of verb–subject agreement in English. Cases of acceptable mismatches of expected forms are also discussed and analysed. The final section explores wider implications of the analyses provided.

2. Preposition Selection

In English, as in many other languages, predicate-denoting expressions like verbs, adjectives and some nouns may lexically select for particular prepositions to mark their complements. Such selected preposition-marked complements are ungrammatical or change their meaning in contexts that do not contain the relevant preposition, as illustrated in examples (2)–(4):
(2)a.Morag took after her mother.[resemblance]
b.Morag took from her mother.[acquisition]
c.Morag took her mother to the shops.[accompaniment]
(3)a.Suzie fell for the oldest trick in the book.[deception]
b.Suzie fell on the oldest trick in the book.[discover]
c.*Suzie fell the oldest trick in the book.
(4)a.I was always good at Greek at school.[aptitude]
b.I was always good for Greek at school.[behaviour]
c.*I was always good Greek at school.
In examples (2a)–(4a), the prepositions seem to lack their inherent meanings of posteriority (after), beneficiary (for) and location (at). Instead, they modulate the meanings of their associated predicates. In certain cases, as in (5), the function of the selected preposition signals a grammatical relation rather than modifying the meaning of the predicate in any significant way.
(5)a.Hugo gave a book to his mother for her birthday.[goal]
b.#Hugo gave a book his mother for her birthday.
c.Hugo gave his mother a book for her birthday.
Finally, there are the so-called particle verbs where a selected preposition and the nominal complement do not form a constituent:2
(6)a.Jeremy looked up the number.
b.Jeremy looked the number up.
c.*It was up the number that Jeremy looked.
d.It was the number Jeremy looked up, not the address.
e.#Jeremy looked the number.
(but cf. Jeremy looked the part.)
f.Jeremy looked up.[direction]
As noted above, to analyse such examples in DS, I introduce a new label Ph (phonology) whose values are phonological forms which, for the sake of simplicity, I represent as ordered sets of phonemes. Thus, we have labels like Ph(⟨ʌ,p⟩), Ph(⟨æ,t⟩), Ph(⟨f,ɔ,r⟩) and Ph(⟨a,f,t,ə,r⟩).3 Notice that these are intended to be the phonological representations of actual word forms, not lexemes. So we have Ph(⟨k,æ,t,s⟩) as well as Ph(⟨k,æ,t⟩), and Ph(⟨s,ɪ,ŋ,ɪ,ŋ⟩) as well as Ph(⟨s,ɪ,ŋ⟩), Ph(⟨s,ɪ,ŋ,s⟩), Ph(⟨s,æ,ŋ⟩) and Ph(⟨s,ʌ,ŋ⟩). Such labels are introduced into DS trees as part of the lexical actions associated with words. Thus, as well as building structure and adding formula and type information, a word also introduces its phonology label.4 In DS, lexical entries for words contain actions that the parser uses to annotate an unfolding semantic tree, encoded as ⟨IF, THEN, ELSE⟩ statements. Thus, the actions associated with parsing forms of the English masculine pronoun are triggered by a requirement to construct a term (?Ty(e_i)) and involve the annotation of the current node with type, formula and phonological values plus a requirement (?∃x.Fo(x)) to find a proper value from the context to substitute for the metavariable U (see Cann et al., 2005, pp. 67 ff.), as illustrated in (7).
(7)a. he
   IF    ?Ty(e_i)
   THEN  put(Fo(U), Ty(e_i), Ph(⟨h,i⟩), ?∃x.Fo(x))
   ELSE  Abort
b. him
   IF    ?Ty(e_i)
   THEN  put(Fo(U), Ty(e_i), Ph(⟨h,ɪ,m⟩), ?∃x.Fo(x))
   ELSE  Abort
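Read procedurally, such an entry is a guarded update to the node under the pointer. The following Python sketch (entirely hypothetical: DS has no canonical implementation, and all names here are invented for illustration) mimics the actions in (7a), with Ph values modelled as tuples of phoneme strings standing in for ordered sets of phonemes:

```python
# Hypothetical sketch of a DS lexical entry as a guarded action.
# A node is a dict with a set of open requirements and a dict of labels.

class Abort(Exception):
    """Raised when a lexical action's trigger is not met (the ELSE clause)."""

def parse_he(node):
    # IF ?Ty(e_i): the node must carry an open individual-term requirement
    if "?Ty(e_i)" not in node["requirements"]:
        raise Abort
    # THEN put(Fo(U), Ty(e_i), Ph(<h,i>), ?∃x.Fo(x))
    node["labels"].update({"Fo": "U",            # metavariable awaiting substitution
                           "Ty": "e_i",
                           "Ph": ("h", "i")})
    node["requirements"].discard("?Ty(e_i)")
    node["requirements"].add("?∃x.Fo(x)")        # find a proper value in context
    return node

node = {"requirements": {"?Ty(e_i)"}, "labels": {}}
parse_he(node)
```

On a node lacking the trigger requirement, the action simply raises Abort, matching the ELSE clause of the entry.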
This move to integrate lexical phonological information directly into the grammar does not mark a radical reinterpretation of the tenets of DS. It has long been hypothesised that phonology may have an effect on grammatical processes (Selkirk, 1984; Gussenhoven, 1986, inter alia). There are also, as already noted, publications that discuss the effects of phonology on structure within DS, such as Kula and Marten (2011), Kula (2007) and Kiaer (2014). Although phonology is not often referenced in theoretical syntactic analyses, in sign-based theories such as Head-driven Phrase Structure Grammar (Müller et al., 2024), it forms one aspect of linguistic signs, encoded as a feature phon alongside features encoding syntax, semantics and pragmatics. Other frameworks also at least allow the possibility of phonology interacting with syntax, so the introduction in the current paper of phonological forms that interact with syntactic processes is not particularly controversial.
The Ph label provides the means to handle preposition selection straightforwardly: relevant words impose a requirement for a particular phonological form on their complements, ?Ph(φ), that is only satisfied by parsing the particular preposition with that phonological form. Consider the lexical entry for take after, which forms part of the overall lexical actions associated with parsing take:5
(8)take after
   IF    ?Ty(t)
   THEN  [Subject and Situation Node Actions]
         make(⟨↓0⟩), go(⟨↓0⟩), put(?Ty(e_i))
         go(⟨↑0⟩), make(⟨↓1⟩), go(⟨↓1⟩), put(?Ty(e_i → (e_s → t)))
         make(⟨↓1⟩), go(⟨↓1⟩),
         put(Ty(e_i → (e_i → (e_s → t))), Fo(Take-after), Ph(⟨t,eɪ,k⟩))
         go(⟨↑1⟩), make(⟨↓0⟩), go(⟨↓0⟩), put(?Ty(e_i), ?⟨↓1⟩Ph(⟨a,f,t,ə,r⟩))
   ELSE  [Other Actions]
These actions do more than add labels to nodes as in (7); they also build nodes using the make(⟨ν⟩) action, where ⟨ν⟩ is some modal relation between nodes: ⟨↓1⟩ relates the current node to an immediately dominated functor node (conventionally shown on the right in tree displays) and ⟨↓0⟩ to an immediately dominated argument node (on the left); ⟨↑1⟩ and ⟨↑0⟩ are their inverses. The predicate go(⟨ν⟩) moves the pointer (◊ in tree displays) from one node to another, indicating which node is currently being developed. These actions create a transitive predicate–argument structure just as for take, except that the logical object carries a requirement not only for some formula of the type of a term (?Ty(e_i)), but also a modal requirement, ?⟨↓1⟩Ph(⟨a,f,t,ə,r⟩), which requires the immediately dominated functor node to be specified as having the relevant phonological form. The result of parsing the first two words of Morag took after her mother is shown in Figure 1, with the complement node appearing in a box.6
Note that the pointer is on the object node in Figure 1, i.e., on a node with an open individual term requirement as well as an open modal phonological requirement. The latter is satisfied by a parse of the preposition after, whose associated parsing actions build functor and argument nodes, decorating the former with a formula consisting only of the identity function over individual terms, Fo(λx.x), the type Ty(e_i → e_i) and the appropriate phonological value, which satisfies the dominating modal phonological requirement. The parse also builds an argument node carrying an open term requirement. These actions form part of the full set of lexical actions induced by parsing after, as shown in (9).
(9) [lexical actions for after: image Languages 10 00289 i001]
The result of parsing Morag took after is shown in Figure 2, which has the pointer on a node with an open term requirement, allowing the parse of her mother as shown in Figure 3.7
Compiling the tree in Figure 3 via functional application yields the propositional formula in (10a), to which beta-reduction of the identity function applies to give (10b).
(10)a. (Take-after(λx.x(ϵ, y, R(Morag, y) ∧ Mother(y)))(Morag))(s_i)
b. (Take-after(ϵ, y, R(Morag, y) ∧ Mother(y))(Morag))(s_i)
In the output formula, there is no evidence of the preposition at all, although the compiled tree contains its trace through its phonological form and the structure it builds. This analysis, therefore, bears a strong resemblance to the analysis of these constructions introduced first in Generalized Phrase Structure Grammar (Gazdar et al., 1985) in which selected PPs are assigned to the category NP with a PFORM attribute that has a prepositional form as the value.
An obvious question at this point is as follows: if the preposition is semantically null, why build the extra structure? The answer has to do with left dislocation and word order. If the preposition were to build no structure and the phonological requirement on the object node were not modal but to be satisfied on the current node, the preposition and noun phrase could appear in either order, as with particle verbs. This is because the object complement node would contain two non-modal requirements, {?Ty(e_i), ?Ph(⟨a,f,t,ə,r⟩)}. Since there are no constraints on the order in which requirements can be satisfied, the term requirement could trigger the parse of a noun phrase like her mother first. If a preposition like after simply annotated the node with its Ph value, it could be parsed next without giving rise to anything that would abort the parse. However, Morag took her mother after, while grammatical, does not mean Morag took after her mother. The use of the modal Ph requirement and the structure built by the preposition prevent a noun phrase from being parsed before the preposition. Consider the structure of the argument term built by parsing her mother in Figure 3. This consists of a functor node of type t → e_i, decorated with a formula and the Ph value ⟨h,ə,r⟩, and a propositional argument node. If this structure were built by parsing her mother before after, then it would be rooted in the term node that is the argument of the verbal predicate. This would mean the modal phonological requirement could never be satisfied, because the parse of the preposition would build a structure that is incompatible with that built by the noun phrase: the functor node would be annotated with inconsistent type, formula and phonological values, any one of which would cause the parse to abort.
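The contrast between a modal requirement, checked on a daughter node, and a non-modal one, checked on the current node, can be sketched as follows (a hypothetical Python illustration; the dictionary-based node representation and all names are invented for this example):

```python
# Hypothetical sketch: a modal requirement ?<↓1>Ph(φ) is satisfied only if
# the node's functor daughter carries Ph(φ); a non-modal ?Ph(φ) would be
# checked on the node itself.

AFTER = ("a", "f", "t", "ə", "r")

def modal_ph_satisfied(node, form):
    """Check ?<↓1>Ph(form): the functor daughter must carry Ph(form)."""
    daughter = node.get("functor_daughter")
    return daughter is not None and daughter["labels"].get("Ph") == form

# Parsing the preposition first: it builds the functor daughter with the
# identity-function formula and the right Ph value.
obj = {"labels": {}, "functor_daughter": None}
obj["functor_daughter"] = {"labels": {"Ph": AFTER, "Fo": "λx.x"}}
assert modal_ph_satisfied(obj, AFTER)

# If the NP were parsed first, its own functor daughter (e.g. for "her",
# Ph(<h,ə,r>)) would occupy that position, so the modal requirement could
# never be met without inconsistent annotations.
obj2 = {"labels": {}, "functor_daughter": {"labels": {"Ph": ("h", "ə", "r")}}}
assert not modal_ph_satisfied(obj2, AFTER)
```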
The second reason to analyse the semantically void preposition as giving rise to structure has to do with left dislocation: unlike particle verbs, a preposition plus noun phrase can be left-dislocated.
(11)a.It was after her mother that Morag took, not her father.
b.It was her mother that Morag took after.
c.It was after that Morag took her mother.
d.Jeremy looked up the number.
e.*It was up that Jeremy looked the number.
f.*It was up the number that Jeremy looked.
g.It was the number Jeremy looked up, not the address.
Without going into details about left dislocation in DS (for which see Kempson et al., 2001, ch. 3; Cann et al., 2005, pp. 59 ff.), such constructions involve unfixed nodes of various types, allowing a phrase to be parsed that does not yet have a fixed function in the current unfolding propositional tree. Any word whose parse involves the building of structure will always induce the building of that structure whether in a fixed or unfixed position. This means that an unfixed node annotated with a term and modal phonological requirements will necessarily mean that a full PP is parsed. (11c) does not have the same analysis, or mean the same, as (11a) (after being adverbial in this context). The fact that the NP complement of the preposition can be left-dislocated independently in (11b) follows because it can be fixed in the position of the null preposition’s argument without leading to inconsistency.
The same analysis can be used for ditransitive verbs that do not alter their meaning but must select a particular prepositional complement as one of their arguments, either always or only in certain circumstances, as with give exemplified in (5). However, the analysis of the particle verbs differs slightly. As shown in (6), with these verbs, the preposition and NP complement can appear in either order and cannot be separated by other expressions (modulo right positioning of a ‘heavy’ noun phrase). The analysis I propose has already been foreshadowed in the above discussion: particle verbs annotate their term argument with a non-modal phonological requirement, and prepositions in this context build no structure. The tree that results from a parse of the first two words of Jeremy looked up the number is shown in Figure 4.
The actions associated with parsing up here simply annotate the node with the relevant Ph value, thus satisfying the requirement. The lexical actions associated with any preposition selected by a particle verb begin like those for up in (12). The condition ∃x.Tn(x), meaning that the current node has a fixed treenode address, prevents the action from being triggered on an unfixed node, thus disallowing (11e) above.
(12) [lexical actions for selected up: image Languages 10 00289 i002]
Having parsed up, the pointer remains on the complement node in Figure 4, which still has an open term requirement allowing the parse of the final NP the number to yield the tree in Figure 5.8
As noted above, there is nothing in the actions associated with look up and up that requires the preposition to be parsed before the NP, so either the phonological requirement or the term requirement can be resolved before the other. Whichever is parsed first, the output representation is the same.
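This order-independence can be illustrated with another small sketch (again hypothetical Python, with invented names): two non-modal requirements on one node may be discharged in either order, with the same final annotation:

```python
# Hypothetical sketch: for a particle verb like look up, the complement node
# carries two non-modal requirements, {?Ty(e_i), ?Ph(<ʌ,p>)}, which may be
# discharged in either order with the same output.

UP = ("ʌ", "p")

def parse_up(node):
    # up builds no structure here: it just annotates the node with its Ph value
    node["labels"]["Ph"] = UP
    node["requirements"].discard("?Ph(up)")

def parse_the_number(node):
    # the NP satisfies the open term requirement
    node["labels"].update({"Ty": "e_i", "Fo": "the-number"})
    node["requirements"].discard("?Ty(e_i)")

def fresh_complement():
    return {"labels": {}, "requirements": {"?Ty(e_i)", "?Ph(up)"}}

a, b = fresh_complement(), fresh_complement()
parse_up(a); parse_the_number(a)        # "looked up the number"
parse_the_number(b); parse_up(b)        # "looked the number up"
assert a == b and not a["requirements"]
```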
An anonymous referee queries whether this theory predicts an ambiguity that might be encountered in parsing the string Jeremy looked up, between up being selected and so meaningless, and up as a spatial preposition with its own semantics, as in Jeremy looked up the stairs. The answer is ‘yes’ in principle, because the different uses of the preposition involve different structures and different semantic outputs. This is why I asserted above that the lexical actions associated with prepositions that may appear with a selecting predicate start with a check for a Ph requirement. Such prepositions obviously have other uses and appear in different constructions, and so involve different sets of actions that operate in different contexts, but only once it has been established that the preposition is not selected. It is not possible to demonstrate this here, as it would require a discussion of how spatial and other prepositions are analysed in DS, whether as complements or adjuncts, a discussion that remains to be provided (although see Marten, 2002 for a DS analysis that treats adjuncts as arguments).
In this section, I have introduced a new label Ph into the DS tree language which has phonological forms as values. This allows verbs (and other expressions) to directly impose a requirement to be collocated with some other expression of a particular form. In this way, the core data of prepositional selection can be accounted for.

3. Agreement

Traditionally, agreement patterns in English (and other modern Germanic languages) are discussed in terms of there being two numbers (singular and plural), three genders (masculine, feminine and neuter) and three persons (first, second and third), and agreement (or concord) patterns are defined in terms of these categories. However, as briefly discussed at the end of the introductory section, these categories have limited, if any, usefulness in the grammatical system of English, given the very restricted agreement patterns displayed. In order to account for gender and number agreement between anaphoric and reflexive pronouns and their antecedents, the collocational constraints on determiner plus noun combinations and the vestigial subject–verb agreement, I do not mimic such grammatical categories directly and propose instead to reference the forms of agreeing words to capture the allowable collocations. To this end, I introduce a new label Md (for ‘morphological dependency’) which takes as values sets of phonological forms, i.e., sets of ordered sets of phonemes. As will become evident during the discussion, the set of forms that need to be set up to act as values to the label is not only finite but very small and restricted to closed-class expressions such as pronouns and determiners. I begin by analysing anaphoric agreement, and then move on to determiner–noun concord and finally to subject–verb agreement.

3.1. Anaphora

I start my analysis by reviewing the theory of third person anaphora introduced in Kempson et al. (2001), which skirts around the problem of matching gendered pronouns with appropriate antecedents, merely subscripting metavariables with labels like masc, fem, pl, etc., without addressing what these subscripts actually mean. Such matching is, of course, profoundly important in understanding (and producing) utterances, as illustrated in the short dialogue in (13):
(13)A: Mike gave Jean some daffodils.
B: Did he? Did she like them?
For anyone like me, for whom Mike is a name for males and Jean for females, the most accessible antecedents are Mike for he, Jean for she and the daffodils for them, but the rule of Substitution introduced in Kempson et al. (2001), which selects some type-matched formula in context and substitutes that for a metavariable projected by an anaphor, cannot ensure this and allows any of the potential antecedents to be associated with any of the pronouns in (13). A proper analysis of what it is that determines the correct dependencies needs a sound theoretical basis within the tenets of DS.
A potential solution is to treat the subscripts on metavariables as semantic in nature, so that a metavariable like U_pl entails (or presupposes) that any substituent must have a plural meaning. Of course, this fails immediately because grammatical number is not isomorphic with the mental concepts that it can express. While the plural markers in English often indicate that some concept is to be taken as a real plurality, this may not be the case, and the denotata may be singular, as with trousers, scissors, etc. Such expressions nevertheless require a pronominal anaphor to be plural (they), not singular (it). Conversely, the fact that an expression is not morphologically plural only weakly implicates that what it denotes does not consist of discrete parts, as with nouns like committee and furniture.
Similarly, it cannot be taken for granted that she or he necessarily always refers to female or male entities. So dog may be referred to with any of the third singular pronouns, bull with he or it, ship with she or it and so on. And in our flexibly gendered times, the pronoun that might be used for the biological gender of a human baby may not, in fact, be applicable, either through choice or biology.
To account for these facts, I take up and develop an idea proposed in Cann (2000), where it is suggested that there is a significant cognitive distinction between ‘lexical’ and ‘functional’ categories: the former are defined intensionally in terms of overarching syntactic categories or semantic types, the latter simply in terms of sets of word forms. While I do not accept the hypothesis that there is a real cognitive dichotomy involved, I here adopt the proposal that certain closed-class expressions, individually or in sets, may per se play an important part in the grammars of natural languages, independently of any syntactic or semantic categorisation. It is the phonological forms of these expressions that I take to define the possible values for the Md label mentioned above. This label encodes the set of expressions with which another expression or set of expressions, such as nouns and verbs, can be linked or collocated. While being a purely morpho-syntactic label, not ultimately grounded in meaning, Md is nevertheless grounded in the objective obverse of meaning, phonology, and ultimately in phonetics, realised as sound waves.
Dependent forms, such as anaphoric pronouns, are sensitive to particular values of an Md label. Hence, I treat nouns as associated with the set of phonological representations of the various forms of the personal pronouns that are associated with them as possible co-referring expressions. So we have labels such as Md({⟨h,i⟩, ⟨h,ɪ,m⟩, ⟨h,ɪ,z⟩}) or Md({⟨ʃ,i⟩, ⟨h,ə,r⟩}) or even Md({⟨h,i⟩, ⟨h,ɪ,m⟩, ⟨h,ɪ,z⟩, ⟨ʃ,i⟩, ⟨h,ə,r⟩, ⟨ɪ,t⟩, ⟨ɪ,t,s⟩}). The parse of a common noun I thus take as involving not just projecting type, formula and phonological labels but also an Md label which decorates the dominating term node. As an example, (14) shows the lexical entry for woman and (15) the tree that results from parsing a woman:9
(14) [lexical entry for woman: image Languages 10 00289 i003]
(15) Parsing a woman: [image Languages 10 00289 i004]
Pronouns are also associated with Md values, so that she and her, like woman and other feminine nouns, project Md({⟨ʃ,i⟩, ⟨h,ə,r⟩}) in addition to a metavariable and associated formula requirement. The process of Substitution assumed in Kempson et al. (2001), Cann et al. (2005) and many others replaces a metavariable with some other type-matching formula from the current context.10 To ensure a proper match between antecedents and pronouns, this process needs also to be made sensitive to the Md values of both anaphor and antecedent. Hence, the rule targets a tree T in a context C that contains a node with a formula value not only of a matching type but also with a matching Md value. The relevant actions are given in (16):11,12
(16) [the rule of Substitution: image Languages 10 00289 i005]
Notice that the relation between the Md values is one of non-empty intersection, not identity. This is because words like dog, bull, and ship mentioned above carry an Md value that covers more than one set of potential anaphoric forms. So cow projects Md({⟨ʃ,i⟩, ⟨h,ə,r⟩, ⟨ɪ,t⟩, ⟨ɪ,t,s⟩}), which needs to match both feminine (Md({⟨ʃ,i⟩, ⟨h,ə,r⟩})) and neuter (Md({⟨ɪ,t⟩, ⟨ɪ,t,s⟩})) pronouns. The rule of Substitution in (16) ensures that the pronouns in (13) are paired with the appropriate antecedents under the following assumptions: the referent of Mike is, for the speaker, male, and so the word is associated with Md({⟨h,i⟩, ⟨h,ɪ,m⟩, ⟨h,ɪ,z⟩}), as is the pronoun he; the speaker’s intended referent for Jean is female and so associated with Md({⟨ʃ,i⟩, ⟨h,ə,r⟩}), as is she; and plural noun phrases like the daffodils and forms of the third person plural pronoun are associated with Md({⟨ð,ɛ,i⟩, ⟨ð,ɛ,m⟩, ⟨ð,ɛ,r⟩}). Given the requirement that the Md values of anaphors and antecedents have a non-null intersection, no other construal is possible.
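The intersection condition on Substitution can be sketched as a filter over context nodes (a hypothetical Python illustration; the context representation and all names are invented for the example):

```python
# Hypothetical sketch: Substitution constrained by Md values. An antecedent
# is eligible only if it type-matches the anaphor and its Md set has a
# non-empty intersection with the anaphor's Md set.

SHE = ("ʃ", "i"); HER = ("h", "ə", "r")
IT = ("ɪ", "t"); ITS = ("ɪ", "t", "s")
HE = ("h", "i"); HIM = ("h", "ɪ", "m"); HIS = ("h", "ɪ", "z")

# A toy context: term nodes with formula, type and Md annotations.
context = [
    {"Fo": "Mike",    "Ty": "e_i", "Md": {HE, HIM, HIS}},
    {"Fo": "Jean",    "Ty": "e_i", "Md": {SHE, HER}},
    {"Fo": "the-cow", "Ty": "e_i", "Md": {SHE, HER, IT, ITS}},
]

def substitute(pronoun_md, ty, context):
    """Return the formulae of type-matched antecedents whose Md values
    have a non-empty intersection with the pronoun's Md value."""
    return [n["Fo"] for n in context
            if n["Ty"] == ty and n["Md"] & pronoun_md]

# she/her can pick up Jean or the cow, but not Mike; he/him only Mike.
assert substitute({SHE, HER}, "e_i", context) == ["Jean", "the-cow"]
assert substitute({HE, HIM, HIS}, "e_i", context) == ["Mike"]
```

Note that the cow matches both feminine and neuter pronouns precisely because intersection, not identity, is required.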
The same sort of analysis for agreement with pronominal anaphors and antecedents can be used for reflexives. The third person reflexive pronouns himself, herself, itself, and themselves all require agreement with their local antecedent (where, as usual in DS, ‘local’ signifies the minimal propositional domain containing some node):
(17)a.Mary doesn’t like herself/*itself/*himself/*themselves very much at the moment.
b.I told that man to get himself/*herself/*itself/*themselves together.
c.Someone sent me the students’ pictures of themselves/*himself/*herself/*itself.
As in Kempson et al. (2001) and Cann et al. (2005), I adopt an analysis which provides lexical actions for the reflexive pronouns that mimic a local version of Substitution, excluding the possibility of there being a discourse or non-local antecedent. However, no metavariable is projected by these actions, so no actual substitution is involved. So herself, unlike she and her, is restricted to local (proposition-internal) antecedents with an Md value {⟨ʃ,i⟩, ⟨h,ə,r⟩}, as shown in the lexical actions in (18):
(18)herself
   IF    {?Ty(e_i), Tn(n)}
   THEN  put(Ty(e_i), Md({⟨ʃ,i⟩, ⟨h,ə,r⟩})),
         gofirst(?Ty(t)),
         go(⟨↓1*↓0⟩)
         IF    {Ty(e_i), Fo(α), Md(β)}
         THEN  IF    β ∩ {⟨ʃ,i⟩, ⟨h,ə,r⟩} ≠ ∅
               THEN  go(⟨↑0↑1*⟩Tn(n)),
                     put(Fo(α))
               ELSE  Abort
         ELSE  Abort
   ELSE  Abort
The actions thus add the appropriate type and Md labels to the trigger node, target the closest propositional node, seek a locally dominated node of type e_i decorated with a formula value α and an Md value that has a non-null intersection with the feminine value, and then return the pointer to the original node and copy the formula α. So for the datum in (17a), the grammatical version yields the desired output: ¬(((Love(Mary))(Mary))(s_j)).
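The locality of the reflexive's search can likewise be sketched (hypothetical Python; the flat list standing in for the minimal propositional domain is a deliberate simplification of the tree walk in (18)):

```python
# Hypothetical sketch: herself searches only the local propositional domain
# for a type-e_i node whose Md value intersects {<ʃ,i>, <h,ə,r>}, then
# copies that node's formula in place; no metavariable is projected.

SHE = ("ʃ", "i"); HER = ("h", "ə", "r")

class Abort(Exception):
    """No suitable local antecedent: the parse aborts."""

def parse_herself(local_domain):
    # local_domain: the term nodes under the minimal ?Ty(t) node
    for n in local_domain:
        if n["Ty"] == "e_i" and n["Md"] & {SHE, HER}:
            # copy the antecedent's formula directly onto the trigger node
            return {"Ty": "e_i", "Fo": n["Fo"], "Md": {SHE, HER}}
    raise Abort

# "Mary doesn't like herself": Mary is local and feminine-compatible.
local = [{"Ty": "e_i", "Fo": "Mary", "Md": {SHE, HER}}]
assert parse_herself(local)["Fo"] == "Mary"
```

A domain containing only, say, a masculine term would raise Abort, matching the ungrammaticality of *Bill doesn't like herself.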
Another thing to note about the theory of agreement presented in this paper is that it is not fully deterministic. If it were, then it should be impossible to resolve the antecedent for the pronoun they/them in examples like those in (19):
(19)a.Every dog adores whoever feeds it. They are such fickle creatures.
b.No student came to the tutorial. None of them had done the set work.
The reason that it should be impossible to resolve they to every dog or them to no student is that the M d values do not share at least one value, so that singular referents should not be compatible with plural anaphors. What, of course, is happening here is that the plural third person pronoun, while not entailing plurality, implicates it, and this implicature induces pragmatic enrichment of the formula of the antecedent to yield an appropriate group referent for the pronoun. Thus, in (19a), the substitution of the antecedent formula for the metavariable projected by the anaphor does not pick up the quantificational force of the antecedent but refers to some contextually maximal group of dogs (in this case the kind), while in (19b), it identifies the maximal group of students who are in the tutorial group. Hence, Substitution that involves matching M d values can be overridden just in case any implicature of the anaphor can be used to create an appropriate referent based on the semantic properties of the putative antecedent and, in particular, the head noun that incorporates the content of the relevant implicature.
The same sort of semantic/pragmatic agreement occurs with respect to the third person plural pronouns whose antecedents are conjoined singular NPs, as with Malcolm and John hugged each other. They were lovers once. This is easy enough to explain under the assumption that the NP Malcolm and John denotes a group (however defined) and they implicates a group antecedent: as with the examples in (19), the implicature allows the anaphoric link between the pronoun and the conjoined NP, as desired. There is, however, a problem. Consider, for example, Gordon and Sarah said they were not going to the USA this year. The analysis of co-ordinate structures in DS involves a link relation between two type-identical nodes, here T y ( e u ) , where linked trees relate to, but are independent of, the parse tree under construction (Cann et al., 2005, ch. 3). A link Evaluation rule then combines the two formulae on the linked nodes using generalised semantic conjunction, here yielding F o ( Gordon ∧ Sarah ) as the evaluated output. The problem arises with respect to M d values. Since Gordon is parsed first, it will host the conjoined formula, but the node is decorated with the masculine singular M d value. Substitution, as defined above, will license an interpretation of Gordon and Sarah said he was not going to the USA this year as meaning Gordon and Sarah said Gordon and Sarah were not going to the USA this year, because the M d values of he and that of the head node of the link structure match, and the latter, after evaluation, has as F o value the conjoined individual terms. A solution would be to modify the link Evaluation rule to not only combine the linked formulae but also to non-monotonically replace the masculine M d value with the third plural one. This would allow the straightforward substitution of the conjoined formula for the metavariable projected by the parse of a third plural pronoun without the need to resort to pragmatics. In turn, however, this will pose another problem.
In an example like Gordon and Sarah said they were not going to the USA this year, because he despises the current administration, substituting F o ( Gordon ) for F o ( U ) is no longer straightforward, as there would be no masculine singular M d value for Substitution to target, and a pragmatic strategy would be needed to satisfy the formula requirement of he by means of the male implicature of the pronoun to identify Gordon as the antecedent. So it seems that one way or another, there does need to be a semantic/pragmatic means of construing anaphors in addition to the purely mechanical approach taken above. My preference in the current instance is to proceed with the non-monotonic modification to Conjunction link Evaluation as envisaged above for two reasons. Firstly, my intuition is that the example with anaphoric he requires the word to be stressed, which generally induces the hearer to infer that some pragmatic process is involved in identifying the antecedent (possibly indicating some contrast with Sarah). Secondly, the modification does not give rise to problems in unacceptable interpretations of singular pronouns with plural referents. The non-monotonicity of the replacement of one M d value with another, though not ideal, nevertheless occurs in other instances of ‘disagreement’, such as the use of plural verbs with group-denoting singular nouns in British English, and the use of mass determiners with count nouns and vice versa, as discussed below.
Before moving on to an analysis of nominal agreement in English, it is important to note that my analysis of gender and number agreement of anaphors and antecedents makes no reference at all to any notions of gender or number as reified categories within the grammar. Although I refer to masculine or plural M d values, the grammar itself is blind to such concepts. All that the grammar encodes are collocational restrictions between nouns and pronouns, learned, I assume, during successful linguistic interactions during the language acquisition process. I return to this topic briefly at the end of the paper.

3.2. Nominal Agreement

The treatment of anaphoric agreement of the previous section provides many of the means to ensure agreement between nouns and verbs and determiners and nouns. I begin by reviewing the properties of nominal selection by determiners as in (20):13
(20)Determiner Agreement:
a.this, that, every all select for singular nouns: this computer/*computers, that announcement/*announcements, every dog/*dogs.
b.a(n) selects for singular count nouns: an owl, #a rice.
c.these, those, many, fewer etc. select plural count nouns: these words/*word, those trends/*trend, many thoughts/*thought, fewer mistakes/*mistake.
d.much, less select for singular mass nouns: much rice, #much human, less fussing, ??less eye.
e.more selects for singular mass nouns and plurals: more food, more dogs/#dog.
f.the, some, any and the genitive pronouns have no selection requirements: the dog/dogs/rice, some idea/ideas/furniture.
To analyse the determiners that are sensitive only to number such as the demonstratives, we can simply assume that they decorate their dominating term node with a requirement for an M d value of the appropriate sort: either ? M d ( { { < ɪ,t > , < ɪ,t,s > } , { < ʃ,i > , < h,ə,r > } , { < h,i > , < h,ɪ,m > , < h,ɪ,z > } } ) for singular or ? M d ( { < ð,ei > , < ð,ɛ,m > , < ð,ɛ,r > } ) for plural. So we have analyses like the one shown in Figure 6 for these songs.14
As the M d value projected by the noun matches that of the requirement, Thinning applies to eliminate the requirement (indicated by strikethrough). However, for singular NPs like this song or that bull, matters are not so straightforward, since the values projected by the nouns may not exactly match those of the requirement: nouns only project the relevant set of pronominal forms to ensure correct anaphoric relations. So, song projects only third singular neuter values and bull third singular neuter and masculine singular ones. In order to eliminate the requirement, the rule of Thinning must be revised so that a projected M d value that subsumes the requirement value satisfies the requirement. This is sufficient to allow the acceptable combination of singular demonstratives with singular nouns and to exclude singular forms in combination with plural ones: *this dogs, *those song. (21) provides the appropriate rule, with L a standing for any label:
(21)Thinning
IF   { ? L a ( α ) , L a ( β ) }
THEN IF   α ⊇ β
     THEN Delete ( ? L a ( α ) )
     ELSE Abort
ELSE Abort
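The revised rule can be sketched in the same illustrative style, with a requirement eliminated whenever some value on the node is a subset of the required set (the representation of decorations as tagged pairs is my own device, not part of DS):

```python
# Illustrative sketch: node decorations as ("Md", value) / ("?Md", value)
# pairs, with values as frozensets of phonological forms (tuples of phonemes).
FEM = frozenset({("ʃ", "i"), ("h", "ə", "r")})
NEUT = frozenset({("ɪ", "t"), ("ɪ", "t", "s")})
MASC = frozenset({("h", "i"), ("h", "ɪ", "m"), ("h", "ɪ", "z")})
SG3 = FEM | NEUT | MASC            # the flattened '3sg' value
PLUR = frozenset({("ð", "ɛ", "i"), ("ð", "ɛ", "m"), ("ð", "ɛ", "r")})

def thin(decorations):
    """Delete every requirement ?Md(a) for which some value Md(b) with b ⊆ a
    decorates the same node; any surviving requirement blocks compilation."""
    values = [v for (kind, v) in decorations if kind == "Md"]
    return [(kind, v) for (kind, v) in decorations
            if not (kind == "?Md" and any(b <= v for b in values))]

# 'this song': the singular requirement is satisfied by the noun's neuter value
assert thin([("?Md", SG3), ("Md", NEUT)]) == [("Md", NEUT)]
# *'those song': the plural requirement survives, so the parse aborts
assert ("?Md", PLUR) in thin([("?Md", PLUR), ("Md", NEUT)])
```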
The use of the personal pronominal forms as M d values is sufficient to capture simple number agreement between determiners and their collocated nouns. However, when it comes to characterising the mass/count distinction, this is not sufficient, as all common nouns are third person, so there must be some other forms that differentiate them. Much ink has been spilled in discussing the difference between count nouns and mass nouns, principally from a semantic perspective (Bunt, 1985; Link, 1983; Gillon, 1992, 1999; Meulen, 1981; Joosten, 2003; Moltmann, 1998; Nicolas, 2002, amongst many others). While there is undoubtedly a human cognitive ability to conceptualise objects in terms of individuation or substance, semantic accounts of mass and count terms are controversial. One of the problems is the arbitrariness within any language, and certainly cross-linguistically, in terms of which nouns are treated as typically mass or count. This makes a universal characterisation of the distinction problematic, if not impossible. There is, for example, no ontological reason why rice but not bean is mass (cf. the earlier English mass noun pease, from which the count noun pea/peas is derived by back-formation) or why hair should be mass in English but cheveux count in French. Fortunately, to account for determiner agreement per se, we can leave aside any serious discussion of the various semantic theories and only consider morpho-syntactic behaviour, at least with respect to the common usage of nouns as mass or count.
Because of the arbitrary way nouns are treated as mass or count, the distinction can really only be determined by the most common syntactic behaviour of a given noun in a given language, dialect or sociolect. Bloomfield (1935) provides a set of morpho-syntactic properties of different nouns in English. These are summarised in Bale and Barner (2018, p. 4) and reproduced in (22):
(22)a.Singular–Plural Contrast: Count nouns have alternate forms corresponding to singular and plural. Most mass nouns only have a singular form (though there are some with only a plural form).
b.Antecedents: Only noun phrases headed by count nouns in the singular serve as antecedents for the pronouns another and one.
c.Quantifier Distribution: The indefinite article a, the determiners each, every, either and neither and the cardinal numeral one modify only count nouns in the singular. The determiners few, a few, fewer, many and several and the cardinal numerals greater than or less than one modify only count nouns in the plural. The determiners all, enough and more may modify mass nouns or plural count nouns, but not singular count nouns; and mass nouns and plural count nouns, but not singular count nouns, may occur without a determiner. The quantifiers little, a little, less and much modify only mass nouns.
Building on this description, a morpho-syntactic approach is proposed here using the M d label. Common nouns, in addition to being associated with third person pronominal forms, must also record the sorts of determiners that they are typically collocated with, without any need for further inferential processing. Consider singular count nouns, which all require a determiner: apart from general determiners like the definite article and the demonstratives, the only ones they can appear with are, as noted above, the indefinite article, each, every, neither, either and one. So, including the two associated anaphoric forms in (22b), the associated M d value of count nouns is as follows:
(23)Singular count nouns:
{ < ə > , < ə,n > , < i,tʃ > , < ɛ,v,r,i > , < w,ʌ,n > , < ə,n,ʌ,θ,ə,r > , < n,i,ð,ə,r > , < i,ð,ə,r > }
As this is rather unwieldy, I will abbreviate this value as countsg in the discussion that follows, and I will abbreviate the third person singular M d value as 3sg. Note that these are just abbreviations for sets of phonological forms not grammatical categories as such.15
The analysis of count and mass noun phrases differs from that of demonstrative ones, although the basic idea is the same. All mass and count determiners follow the same general procedures, which involve the projection of a requirement for third person while at the same time asserting an M d value that consists of the phonological form of the individual determiner and a metavariable with which it unifies. Thus, each will decorate the top term node with M d ( { < i,tʃ > } ∪ U ) . The function of this metavariable is discussed below.
In addition, a more complex set of actions has to be associated with the parse of a common noun. In addition to projecting a person M d value on the top term node, the actions check for an existing M d value on that node. If there is one, a requirement ? M d ( countsg) is imposed to ensure that the existing M d value is that of a singular count determiner. Then, the pointer returns to the nominal predicate node and adds formula, type and P h values as normal. If there is no M d value, such as with the definite article and the demonstratives, no requirement for a singular count determiner is imposed, and type, formula and P h values are asserted. The actions associated with parsing chair are thus as given in (24), and taking the count noun chair and the indefinite article as exemplars, Figure 7 illustrates the parse of the noun phrase a chair.
(24)[lexical actions for the noun chair; given as an image in the original]
Note that I am assuming that the determiner encodes an individuating predicate indiv, following Allan (1980)’s division of determiners into individuating and non-individuating types which can be defined within Link (1983)’s lattice-theoretic semantics of plural and mass terms.
In the second tree, on the top node, there are two M d requirements and two values. Both requirements can be eliminated straightforwardly through subsumption, as discussed above, but after compilation, this node contains two non-subsuming M d values, { < ɪ,t > , < ɪ,t,s > } and { < ə > } ∪ U , which should abort the parse. This is where the metavariable projected by the determiner comes into play. If the third singular M d value substitutes for the metavariable, then the resulting value, M d ( { < ə > , < ɪ,t > , < ɪ,t,s > } ) , is subsumed by the third neuter singular value. Given that a node may contain two different instances of the same label just in case one value subsumes the other, the analysis presented here is fully licensed by the grammar.16
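The interplay of metavariable substitution and subsumption can be sketched as follows (again purely illustrative; COUNT_SG and SG3 are hypothetical names of mine abbreviating the set in (23) and the third singular forms):

```python
# Sketch: parsing 'a chair'. The determiner asserts Md({<ə>} ∪ U); the noun's
# third neuter singular value then substitutes for the metavariable U.
COUNT_SG = frozenset({("ə",), ("ə", "n"), ("i", "tʃ"), ("ɛ", "v", "r", "i"),
                      ("w", "ʌ", "n"), ("ə", "n", "ʌ", "θ", "ə", "r"),
                      ("n", "i", "ð", "ə", "r"), ("i", "ð", "ə", "r")})
NEUT = frozenset({("ɪ", "t"), ("ɪ", "t", "s")})
SG3 = NEUT | frozenset({("ʃ", "i"), ("h", "ə", "r"), ("h", "i"),
                        ("h", "ɪ", "m"), ("h", "ɪ", "z")})

def substitute_metavariable(det_form, noun_value):
    """Md({det_form} ∪ U) with U := noun_value gives one combined Md value."""
    return frozenset({det_form}) | noun_value

combined = substitute_metavariable(("ə",), NEUT)
# Both requirements on the node are eliminated by subsumption:
assert {("ə",)} <= COUNT_SG   # satisfies ?Md(countsg)
assert NEUT <= SG3            # satisfies ?Md(3sg)
assert combined == {("ə",), ("ɪ", "t"), ("ɪ", "t", "s")}
```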
Nouns that are typically used as mass terms are not associated with forms of the indefinite article, but they can be analysed similarly to singular count terms by referring to the forms of the mass determiners:
(25)Singular mass nouns (masssg):
M d ( { < l,ɛ,s > , < m,ɔ > , < l,ɪ,t,l > , < i,n,ʌ,f > , < m,ʌ,tʃ > } )
As with count NPs, singular mass determiners assert an M d value containing their phonological form plus a metavariable and project a third person singular requirement. Mass nouns then check whether there is an M d value on the top term node, adding a requirement ? M d ( masssg ) before projecting their associated M d value and returning to the triggering node and decorating it with P h , formula and type values. Again, if there is no initial M d value on the top term node, then the pointer simply returns to the nominal predicate node and adds type, formula and P h values. Hence, a parse of a mass NP like less rice proceeds exactly as one of a chair except for the different M d requirement and with the determiner formula involving a predicate subst, indicating that the entity referred to is a non-individuated substance (has m-parts in Link (1983)’s terminology).
Plural count NPs like many cars, several bushes, fewer children and the like can again be analysed in exactly the same way, despite, of course, showing different morphological dependencies. So plural NPs involve the annotation of the trigger node with its phonological form and metavariable as the M d value and the plural requirement ? M d ( { < ð,ɛ,i > , < ð,ɛ,m > , < ð,ɛ,r > } ) , while building the internal structure of the term. Plural nouns then check for an M d value on the top term node and, if found, add a plural determiner requirement ? M d ( { < m,ɔ,r > , < f,j,u > , < f,j,u,w,ə,r > , < m,ɛ,n,i > , < s,ɛ,v,ə,r,ə,l > } ) before adding formula, P h and type values to the nominal predicate node. Figure 8 shows the parse of many cars before compilation, where μ stands for whatever content is given to the operator provided by the determiner. As before, I am assuming that the third plural M d value substitutes for the metavariable projected by the determiner to yield a single value for the top node: M d ( { < m,ɛ,n,i > , < ð,ɛ,i > , < ð,ɛ,m > , < ð,ɛ,r > } ) .
Unlike singular count nouns but like singular mass nouns, plural count nouns can appear without any determiner at all. To handle these, I take the position that the parse of a plural count noun or a singular mass noun has two potential triggers: a predicate requirement constructed by the parse of a plural or mass determiner and a term requirement with no dominated nodes. In the latter environment, the parser constructs an epsilon operator and then builds the term as normal. Figure 9 gives the structure of the term node after parsing cars, and the compiled output is ϵ x . Car ( x ) ∧ group ( x ) .
That, of course, is not the whole story of nominal agreement in English. Nouns typically used as mass terms can take individuating determiners, giving rise to interpretations as kinds or specified quantities (26), while at the same time, nouns typically used as count may appear in non-individuating contexts, giving rise to interpretations of substances of some inferred sort (27).
(26)Mass noun disagreement
a.Three coffees, please.[Quantity]
b.That shop stocks at least a dozen coffees from all over the world.[Kind]
c.Every coffee seems to taste different.[Quantity/Kind]
(27)Count noun disagreement
a.After the accident with the butcher’s van, there was a lot of pig all over the road.[Meat]
b.I could have done with less Martin in that meeting![Personality]
c.My sister’s guide dog is more goat than dog.[Property]
The analyses of such examples yield trees that have a mismatch between M d requirements and projected values, clashes that need to be resolved for the tree to compile. For example, the term node in (26c) will have the set of decorations in (28a), and that of (27a) will have that in (28b).17
(28)a. { T y ( e i ) , F o ( τ x . Coffee ( x ) ∧ indiv ( x ) ) , M d ( { < ɛ,v,r,i > } ∪ U ) ,
? M d ( masssg ) , M d ( { < ɪ,t > , < ɪ,t,s > } ) , ? M d ( 3 sg ) }
b. { T y ( e i ) , F o ( L y . Pig ( y ) ∧ subst ( y ) ) , M d ( { < ə,l,ɔ,t,ə,v > } ∪ U ) ,
? M d ( countsg ) , M d ( { < ɪ,t > , < ɪ,t,s > } ) , ? M d ( 3 sg ) }
What is needed is some sort of rule to delete the non-matching M d requirement on the inconsistent term node and to modify the formula value. Of course, this violates the general DS condition that tree growth is monotonic. However, disagreement phenomena like those in (26) and (27), where there is always a semantic or strong pragmatic effect, are widespread enough cross-linguistically that it is necessary to countenance non-monotonic tree transitions in limited, specified contexts as a sort of ‘last resort’ mechanism. Rules that specify these transitions, which I will refer to as non-monotonic resolution rules, affect only a single node in a tree and license a new tree with modified decorations on that node.
The rule I propose for the resolution of mass-count disagreement is given in (29), where the symbol ⇝ indicates a non-monotonic tree transition:18
(29) { ? M d ( α ) , M d ( β ∪ U ) , M d ( γ ) , F o ( q x . P ( x ) ∧ δ ( x ) ) , T y ( e i ) , … }
⇝ { M d ( β ∪ γ ) , F o ( q x . R ( x , ϵ y . P ( y ) ) ∧ δ ( x ) ) , T y ( e i ) , … }
where β ⊈ α and γ ⊈ α .
In the output, the offending requirement is eliminated and the formula value modified. The quantifier expressed by the determiner ( β ) is maintained and a contextual/pragmatic relation R is introduced that relates the bound variable to an epsilon term containing the property P of the original formula (the property derived from the common noun phrase argument of the determiner). At the same time, the individual or substance predicate of the input formula is maintained. The pragmatic variable, which is not a metavariable as its value may never be realised, then relies on context in the broad sense for updating to an appropriate relation between a substance and an individual or vice versa. This rule yields the output for every coffee in (30) and for a lot of pig in (31).
(30)Every coffee
{ T y ( e i ) , F o ( τ x . R ( x , ϵ y . Coffee ( y ) ) ∧ indiv ( x ) ) , M d ( { < ɛ,v,r,i > , < ɪ,t > , < ɪ,t,s > } ) , … }
(31)A lot of pig
{ T y ( e i ) , F o ( L y . R ( y , ϵ z . Pig ( z ) ) ∧ subst ( y ) ) , M d ( { < ə,l,ɔ,t,ə,v > , < ɪ,t > , < ɪ,t,s > } ) , … }
Exactly how R is instantiated depends on many factors: the semantics of both noun and determiner, the context in which the utterance occurs and the interactional concerns of the speech participants. So with regard to the instances of coffee in the examples in (26), the first naturally gives rise to an interpretation ‘relevant portion of’ with exactly what portion that is being dependent on who the speaker is talking to and where they are. In a restaurant, the portion is likely to be a cup; in a takeaway a plastic or paper container; and in a friend’s house a mug. The same variability in precise interpretation applies to all examples of count–mass mismatch.
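The effect of the resolution rule in (29) can be sketched as a function on node decorations. The dictionary fields (req, beta, gamma, and so on) are hypothetical labels of mine, and formulae are represented as plain strings for illustration only:

```python
# Illustrative sketch of the mass/count resolution rule. A clashing node is a
# dict with an unsatisfiable requirement 'req' (?Md(α)), determiner value
# 'beta' (the β of Md(β ∪ U)), pronominal value 'gamma' (Md(γ)) and the
# pieces of the formula; all field names are assumptions of mine.
def resolve(node):
    """Delete the offending requirement, merge β with γ into one Md value and
    rewrap the formula with a pragmatic relation R (a non-monotonic step)."""
    if node["beta"] <= node["req"] or node["gamma"] <= node["req"]:
        return node  # no clash: the resolution rule does not apply
    return {
        "Md": node["beta"] | node["gamma"],
        "Fo": f"{node['q']}.R(x, ϵy.{node['P']}(y)) ∧ {node['delta']}(x)",
    }

MASS_SG = {("l", "ɛ", "s"), ("l", "ɪ", "t", "l"), ("i", "n", "ʌ", "f"), ("m", "ʌ", "tʃ")}
every_coffee = {"req": MASS_SG, "beta": {("ɛ", "v", "r", "i")},
                "gamma": {("ɪ", "t"), ("ɪ", "t", "s")},
                "q": "τx", "P": "Coffee", "delta": "indiv"}
out = resolve(every_coffee)
assert out["Md"] == {("ɛ", "v", "r", "i"), ("ɪ", "t"), ("ɪ", "t", "s")}
assert out["Fo"] == "τx.R(x, ϵy.Coffee(y)) ∧ indiv(x)"
```

The guard clause reflects the side condition on (29): where the determiner or pronominal value satisfies the requirement, ordinary Thinning applies and no non-monotonic step is taken.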
Not all nouns can be used felicitously as either count or mass. So #much bean, #many admirations, #five waters, #less table, etc., are peculiar, although they could be acceptable given a rich enough context.19 It is possible that frequency of use of certain determiners in certain contexts is involved in the acceptability of typically mass or count terms being used in atypical constructions. There is certainly a familiarity effect that makes some uses more acceptable than others. Thus, replacing instances of coffee in (26) with rice is less acceptable, although interpretable, while substituting furniture is not really acceptable at all. Equally, not all count nouns are felicitous in non-individuating contexts, as illustrated in (32) and (33).
(32)a.Three #rices/?*furnitures, please.
b.That shop stocks at least twenty not often seen #rices/*furnitures from all over the world.
c.I won’t touch a #rice/*furniture in the evening.
(33)a.After the accident with the butcher’s van, there was a lot of #eagle/*seat all over the road.
b.The room was buzzing, with ?*idea all over it.
c.I could have done with less #chair/?*idea in that meeting!
Resolution rules for plural mass terms and for singular count nouns appearing without a determiner do not involve non-monotonic tree transitions, as both require new sets of lexical actions for parsing the particular nouns. This is most obvious with plural mass terms, where the addition of a plural morph to a mass noun also necessitates a plural M d value, a plural count M d requirement and a modified formula value. There is no theory of lexical processes in DS that I am aware of, but one might envisage a lexical process like that given in (34), where L stands for a macro of lexical actions:
(34)  IF   { go ( ⟨↑₁⟩⟨↑₀⟩ ? T y ( e i ) ) , put ( ? M d ( masssg ) , M d ( { < ɪ,t > , < ɪ,t,s > } ) ) } ⊆ L ,
      THEN IF   { put ( T y ( e i → t ) , F o ( α ) , P h ( β ) , … ) } ⊆ L ,
           THEN make-new ( L′ )
           ELSE Abort
      ELSE Abort
where L′ = IF   ? T y ( e i → t )
           THEN go ( ⟨↑₁⟩⟨↑₀⟩ ) ,
                IF   ∃ x . M d ( x )
                THEN put ( ? M d ( countpl ) , M d ( { < ð,ei > , < ð,ɛ,m > , < ð,ɛ,r > } ) ) ,
                     go ( ⟨↓₀⟩⟨↓₁⟩ ) ,
                     put ( T y ( e i → t ) , F o ( λ x [ R ( x , ϵ y . α ( y ) ) ∧ group ( x ) ∧ indiv ( x ) ] ) , P h ( β ⌢ γ ) )
                     where γ ∈ { < s > , < z > , < ɪ,z > }
                ELSE Abort
This rule checks to see if a lexical entry L contains a sequence of actions that moves the pointer up to a dominating open term node and adds a mass singular M d requirement and a third neuter singular M d value. If there is, there is a further check to see if there is an action that adds a predicate type, a formula α and phonological form β . If the condition is met, a new lexical entry is created whose actions are triggered on an open predicate node and involve the following: moving the pointer to the closest dominating open term node; checking if there is an M d value; adding a plural count M d requirement and a third plural M d value, if an M d value already decorates the node; and then finally moving the pointer down to the open predicate node, adding a predicate type value, a phonological form that appends a plural morph to β and a formula value that has a plural epsilon operator that binds a variable that is pragmatically related to an epsilon term whose restrictor is α .
This lexical rule would license a macro of lexical actions for the word coffees, allowing the parse of phrases like three coffees in exactly the same way as with less coffee. The compiled formula value that results from a parse of the former is as follows:
3 x . [ R ( x , ϵ y . Coffee ( y ) ) ∧ group ( x ) ∧ indiv ( x ) ]
Notice that this provides a lexical instantiation of the generalised resolution rule in (29), and it is possible that the rule is derived from repeated exposure to plural mass terms and allows an agent to essentially routinise encounters with particular terms used in this way, circumventing the need to apply the resolution rule every time the word in question is parsed or produced. A lexical approach would certainly explain the gaps in acceptable instances of mass–count disagreement, as illustrated above. Furthermore, such a lexical approach can be assumed to routinise all common instances of count–mass disagreement, leaving the resolution rule to apply only in novel or uncommon instances.
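The lexical process in (34) can likewise be sketched as a function from one entry to another. The entry representation (a dictionary with ph, formula and md_req fields) is entirely my own device, and the plural morph is fixed here to /z/ for illustration rather than being phonologically conditioned as in (34):

```python
# Illustrative sketch of the lexical process in (34): deriving a plural
# ('sortal') entry from a mass-noun entry. Field names and the fixed /z/
# plural morph are simplifying assumptions.
def pluralize_mass_entry(entry):
    """Return a new entry for the pluralised noun, or None if the process
    does not apply (i.e., the input is not a mass-noun entry)."""
    if entry["md_req"] != "masssg":
        return None
    return {
        "ph": entry["ph"] + ("z",),   # append a plural morph (here /z/)
        "md_req": "countpl",          # now requires a plural count determiner
        "md_value": "3pl",            # and projects a third plural value
        "formula": f"λx[R(x, ϵy.{entry['formula']}(y)) ∧ group(x) ∧ indiv(x)]",
    }

coffee = {"ph": ("k", "ɒ", "f", "i"), "formula": "Coffee",
          "md_req": "masssg", "md_value": "3sg-neut"}
coffees = pluralize_mass_entry(coffee)
assert coffees["ph"] == ("k", "ɒ", "f", "i", "z")
assert coffees["formula"] == "λx[R(x, ϵy.Coffee(y)) ∧ group(x) ∧ indiv(x)]"
```

Routinisation of common cases then amounts to storing the derived entry, so that the generalised resolution rule need not apply on every parse.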

3.3. Verbal Agreement

As noted in the introduction, verbal agreement in English is limited to the present tense in main verbs, have and do, but is visible in both present and past with the copula be. (35) provides a summary of the patterns:
(35)a.Main verbs, have, do: Those forms marked by the suffix -s (and phonological variants) require a third person singular subject; unmarked base forms appear with third plural and first and second person subjects; forms in past tense allow any subject.
b.Copula be: am requires a first person singular subject; is requires a third person singular subject; are requires a second person or plural first and third person subject; was requires a singular subject; were requires a second person or plural subject.
c.Auxiliary verbs: These allow any subjects.
The analysis of person and number agreement follows that of gender and number above: no categories as such, just sets of forms that collocate with other forms. Third person M d values are those already used for gender and number agreement, i.e., { < ɪ,t > , < ɪ,t,s > } , { < ʃ,i > , < h,ə,r > } and { < h,i > , < h,ɪ,m > , < h,ɪ,z > } for the singular and { < ð,e,i > , < ð,ɛ,m > , < ð,ɛ,r > } for the plural. First and second person pronouns are again associated with M d values for their various forms: first singular M d ( { < ai > , < m,i > , < m,ai > } ) , first plural M d ( { < w,i > , < ʌ,s > , < au,r > } ) and second M d ( { < j,u > , < j,ɔ,r > } ) .
Under the analysis of auxiliary and main verbs in Cann (2011), tensed main verbs can only be parsed (or generated) in the context of there being a locally unfixed node that represents the content of the local subject, where locality as mentioned earlier indicates the minimal propositional structure containing the current node.20 To handle agreement, verbal actions need to require such a node to be decorated with an appropriate M d value. Third singular present tense verbs thus require any of the nominative third singular values on their subjects, which must form part of the triggering environment for a successful parse, as illustrated in the partial entry specifying the initial triggering environment for sings in (36) ( ⟨↓₁*↓₀⟩ is the ‘locally dominates’ modality associated with Local*Adjunction; see Cann et al., 2005, pp. 234 ff.).21
(36)sings
IF   ? T y ( t ) , T n ( n )
THEN IF   ¬ ⟨↓₁*↓₀⟩ ⊤
     THEN Abort
     ELSE IF   ⟨↓₁*↓₀⟩ ( T y ( e i ) ∧ M d ( α ) ) ,
               where α ∩ { < ʃ,i > , < h,i > , < ɪ,t > } ≠ ∅
          THEN [Other Actions]
Conversely, finite base forms will require reference to nominative non-third person M d values:22
(37)sing
IF   ? T y ( t ) , T n ( n )
THEN IF   ¬ ⟨↓₁*↓₀⟩ ⊤
     THEN Abort
     ELSE IF   ⟨↓₁*↓₀⟩ ( T y ( e i ) ∧ M d ( α ) ) ,
               where α ∩ { < ð,ei > , < ai > , < j,u > , < w,i > } ≠ ∅
          THEN [Other Actions]
The initial restriction in (36) and (37), which requires there to be a locally unfixed node that the trigger node dominates and which the verbal actions fix as the grammatical subject, means that there only needs to be a match with person M d values. However, because of Subject Auxiliary Inversion, have, do and be lack this restriction. Instead, a parse of their forms induces the construction of a locally unfixed node on which the relevant M d value is imposed; this value must subsume, or be subsumed by, the M d value projected by a subject nominal.23
The different forms of the copula be are analysed similarly but involve different requirements, analogous to the description in (35b):
(38)a.am projects M d ( { < ai > } ) .
b.is projects M d ( { < ʃ,i > , < h,i > , < ɪ,t > } ) .
c.are projects M d ( { < j,u > , < w,i > , < ð,ei > } ) .
d.was projects M d ( { < ai > , < ʃ,i > , < h,i > , < ɪ,t > } ) .
e.were projects M d ( { < j,u > , < w,i > , < ð,ei > } ) .
Have and do follow the pattern of the main verbs like sing/sings above, except that they impose M d values on the locally unfixed node that must subsume or be subsumed by that projected by a subject nominal as with be.
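The verbal checks in (36) and (37) reduce to the same intersection test used for anaphoric agreement. A sketch, with selection sets abbreviated and all names my own:

```python
# Illustrative sketch: a finite verb form licenses a parse only if the
# subject node's Md value intersects the verb's selection set of
# nominative forms.
SINGS_SEL = {("ʃ", "i"), ("h", "i"), ("ɪ", "t")}           # she, he, it
SING_SEL = {("ð", "ei"), ("ai",), ("j", "u"), ("w", "i")}  # they, I, you, we

def verb_licensed(subject_md, selection):
    """Non-null intersection with the selection set licenses the combination."""
    return bool(subject_md & selection)

MARY = {("ʃ", "i"), ("h", "ə", "r")}                       # 'Mary' projects feminine forms
DAFFODILS = {("ð", "ei"), ("ð", "ɛ", "m"), ("ð", "ɛ", "r")}

assert verb_licensed(MARY, SINGS_SEL)           # 'Mary sings'
assert not verb_licensed(MARY, SING_SEL)        # *'Mary sing'
assert verb_licensed(DAFFODILS, SING_SEL)       # 'the daffodils sing'
assert not verb_licensed(DAFFODILS, SINGS_SEL)  # *'the daffodils sings'
```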
To close this section, I turn to a short discussion of the one type of verbal disagreement in British English: the use of present tense base verb forms with group-denoting singular nouns as exemplified in (39).
(39)a.The committee are currently considering your request.
b.This flock of sparrows take to the air every evening at dusk.
It seems that only definite NPs are truly felicitous in this context (my acceptability judgements):
(40)a.?No committee make decisions on a Thursday.
b.*A committee are considering your appeal.
c.*Every committee make decisions on a Thursday.
Note that anaphoric reference to these examples requires the plural third person pronoun:
(41)The committee are considering your application. They/*it usually meet on Mondays to make final decisions.
It is clear that what we have here is a case of semantic agreement as they picks out the group of entities that comprise the denotation of the noun, and again, some rule is needed to resolve the clash of M d values and to provide a semantic representation of the effect of the morphological disagreement. In the current case, I propose the non-monotonic resolution rule in (42), which again uses ⇝ to indicate such tree growth:24
(42) { M d ( { < j,u > , < w,i > , < ð,ei > } ) , M d ( { < ɪ,t > , < ɪ,t,s > } ) , T y ( e i ) , F o ( ι x . P ( x ) ) }
⇝ { M d ( { < ð,e,i > , < ð,ɛ,m > , < ð,ɛ,r > } ) , T y ( e i ) , F o ( ϵ y . group ( y ) ∧ ∀ z [ z <ₐ y → z <ₐ ι x . P ( x ) ] ) }
This rule essentially deletes the original M d values projected by the noun and verb, replacing them with the third person plural value, while at the same time altering the original formula to pick out a group of individuals whose atomic parts ( <ₐ ) are all atomic parts of the object denoted by the original iota term. Under the assumption that the predicate group requires there to be at least two entities, this semantic representation ensures that only group-denoting nouns can be resolved in this way and that it may only be a subset of the relevant group that engages in the event expressed by the rest of the sentence. Hence, (39a) could be true even if not all members of the committee actually consider the request.

4. Conclusions

In this paper, I have extended the tree vocabulary of DS to include lexical phonological information about a word, used this to account for prepositional selection by verbs, and developed a theory of the restricted patterns of anaphoric, nominal and verbal agreement in English by introducing a label whose values are sets of phonological forms. These sets encode morphological dependencies based on collocational restrictions on word combinations and linkages.
The label P h specifies the phonological form of a word, modelled here as an ordered set of phonemes. This permits a straightforward account of verbs (and other predicates) that select particular prepositional complements or particles: the verb imposes a P h requirement on its argument node that can only be satisfied by parsing the relevant preposition. The difference between PP selection and particle selection is captured by the former having a modal P h requirement, satisfied by the construction of functor and argument nodes below the argument node through a parse of the preposition, and the latter having a non-modal requirement that projects no extra structure.
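This selection mechanism can be illustrated with a small sketch. Here P h values are encoded as tuples of phonemes; the Node class, the broad transcriptions and the requirement-checking logic are illustrative assumptions rather than the formal DS machinery:

```python
# Sketch of Ph-requirement checking for preposition selection.
# Ph values are modelled as tuples of phonemes; the Node class and the
# transcriptions are illustrative, not the paper's lexical entries.

AFTER = ("ɑ", "f", "t", "ə", "r")   # Ph value of 'after'
UP = ("ʌ", "p")                     # Ph value of 'up'

class Node:
    """A tree node that may carry an outstanding Ph requirement, ?Ph(x)."""

    def __init__(self, required_ph=None):
        self.required_ph = required_ph  # None means no Ph requirement
        self.ph = None

    def parse_word(self, ph):
        """Decorate the node with a word's Ph value; abort on a mismatch."""
        if self.required_ph is not None and ph != self.required_ph:
            raise ValueError(f"requirement Ph{self.required_ph} not met by {ph}")
        self.ph = ph
        return self

# 'take' (in its 'take after' reading) imposes ?Ph(after) on its argument node:
node = Node(required_ph=AFTER)
node.parse_word(AFTER)              # requirement discharged
assert node.ph == AFTER
```

Attempting to discharge the same requirement with the form of up raises an exception, modelling an aborted parse in which the selected preposition never appears.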
The remnant agreement patterns of the language are analysed in terms of another new label, M d, which encodes morphological dependencies between words and phrases and, as such, has values that are sets of phonological forms (so sets of ordered sets of phonemes). Using this label, I have shown how restrictions between anaphoric pronouns and their antecedents, between determiners and nouns, and between grammatical subjects and verb forms can be analysed within DS without specific labels for traditional syntactic categories like gender and number. The drivers of the analyses of these phenomena are M d values that consist of the phonological forms of closed-class expressions such as determiners and pronouns. Traditional agreement categories are thus represented not by categorial labels but as sets of phonological forms. So plural agreement can be determined by the phonological forms of the three lexical exponents of they, and singular by the forms of he, she and it. Gender can be represented as the forms of the gendered third person singular pronouns, and distinctions between count and mass terms are analysed by sets of individuating and non-individuating determiners. Acceptable clashes in agreement patterns like a lot of pig, three coffees and the committee convene every Thursday are accounted for by specific lexical rules or non-monotonic tree-growth resolution rules that eliminate clashing information and modify formula values on specific nodes.
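On this view, agreement checking reduces to set operations over phonological forms. In the following sketch, M d values are finite sets of phoneme tuples and compatibility is simply non-empty intersection; the miniature lexicon is invented for illustration and is not the paper's actual set of lexical entries:

```python
# Agreement as form matching: Md values are sets of phoneme tuples
# (so sets of Ph values); two decorations are compatible iff their
# Md values share at least one form. Lexicon is illustrative only.

SG_FORMS = {("h", "i"), ("ʃ", "i"), ("ɪ", "t")}   # he, she, it
PL_FORMS = {("ð", "e", "i")}                       # they (them/their omitted)

LEXICON_MD = {
    "dog":   SG_FORMS,    # singular count noun: collocates with he/she/it
    "dogs":  PL_FORMS,
    "this":  SG_FORMS,    # demonstrative restricted to singular collocates
    "these": PL_FORMS,
}

def compatible(md1, md2):
    """Two Md decorations can co-occur iff their form sets intersect."""
    return bool(md1 & md2)

assert compatible(LEXICON_MD["this"], LEXICON_MD["dog"])        # this dog
assert not compatible(LEXICON_MD["these"], LEXICON_MD["dog"])   # *these dog
```

Nothing in this check mentions number, person or gender as categories; 'plural agreement' is just the fact that the two words' form sets overlap in the forms of they and its kin.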
It should be noted here that the analysis of agreement presented above is equivalent to analyses of the same phenomena in other frameworks and breaks no new ground apart from the explicit discussion of disagreement and treating the matching of mass/count with mass/count nouns as a form of agreement. However, the theoretical tools used to achieve this are firmly grounded in the forms of words rather than the manipulation of abstract linguistic objects. The overall architecture of the analysis has wider implications for the concepts of language and linguistic structure, to a short discussion of which the last pages of this paper are devoted.
I conceive of an M d value as a pointer to part of a speaker–hearer’s memory space of collocations that an agent has successfully experienced, either through parsing or production, in acts of linguistic interaction in relation to a particular word. In the same way, I conceive of the values of P h as pointers to a memory space of phonetic elements that group together to express the possible phonetic realisations of expressions. This analysis thus cannot offer any universal definition of agreement systems, beyond that of matching word forms with collocating expressions. While the general approach is generalisable to other languages, it does not provide definitions of number, person, gender and the mass/count distinction that are applicable to different languages. So there is no recourse to putatively universal reified categories or types of any cross-linguistic validity. The analysis thus shows how it is possible to capture morpho-syntactic dependencies in DS without postulating new labels for specific morphological categories.
Unlike (most) other labels in DS, which are grounded in semantics and ultimately in whatever processes of the human brain allow meanings to be constructed from linguistic input, the new labels P h and M d are grounded in phonology and thus ultimately in the acoustic and visual stimuli that induce meanings. So while they operate in this paper in a purely morpho-syntactic capacity, they are nevertheless not purely abstract grammatical categories like gender, person and number, let alone specific values such as fem or pl. Although I have, throughout this paper, referred to M d values as count-sg or 3sg and talked about ‘neuter’ or ‘first person’ forms, these do not form part of the grammar itself but are merely abbreviations for collections of forms that are put together from the linguistic experiences of an agent. Of course, there is nothing to stop an agent creating ad hoc types to refer to collections of collocating phonological forms through routinisation, if such types provide a means of making parsing or generating utterances more efficient. The point is that such things are not immanent in some specialised mental ‘language faculty’ but experientially derived from successful linguistic interactions. Any such ad hoc types are language- and, indeed, agent-specific and not necessary for understanding or producing well-formed linguistic utterances.
The approach taken here to morphological dependencies is rooted in the work of 20th-century structuralist linguists such as Bloomfield (1938), Harris (1951, 1954), Firth (1935, 1957) and, in a somewhat different form, the later Wittgenstein (1953). Although extremely unfashionable within current mainstream theoretical linguistics since Chomsky’s (1965) rejection of the notion of linguistic theory as a ‘discovery procedure’, data-driven, distributional approaches to language have nonetheless been extensively used in research into child language acquisition (Clark, 2015; Goldsmith, 2001; Ibbotson & Salnikov, 2019; Mintz, 2003; Mintz et al., 2002; Ibbotson, 2020, inter alia multa).
Earlier objections to the possibility of children acquiring language through distributional collocations of words with each other have been shown to be unfounded (Snow, 1972; Saxton, 2009). Contrary to Chomsky’s (1965) assertion that language is too full of errors to be learnable, research on actual speech, especially Child Directed Speech (CDS), has shown that hesitations, dysfluencies and false starts are rarely found. Such speech also tends to develop from very simple collocations, becoming more complex as the child ages and develops greater language skills (see, for example, Mintz, 2003). This observation nullifies the potential objection that a dependent word may be indefinitely separated from a word on which it depends, since recursive modification may disrupt the close collocation of dependent words. For example, while much rice has determiner and noun adjacent, adjectival modification can obscure the close dependence of the expressions, as in much hot, spicy, delicious rice or The dog that chased the cat that ate the mouse that ate the cheese was/*were violently sick; but, as is evident in the now highly extensive corpora of CDS, such disruptions do not occur with strongly collocated expressions. Given these results, computational techniques have been used to model how children may acquire grammatical categories or types (Mintz et al., 2002) and even syntactic structure (Ibbotson & Salnikov, 2019; Clark & Lappin, 2010; McCauley & Christiansen, 2019) from distributional patterns alone. There is now a great deal of evidence from experimental studies using CDS corpora as input that strongly supports the hypothesis that children acquire language through the distribution of expressions in a language, in line with Firth’s dictum that ‘you shall know a word by the company it keeps’ (Firth, 1957).
In line with the works on child development cited above, I assume that collocational spaces defined by probabilistic distributional properties of words develop throughout an agent’s lifetime. It is likely that such collocational spaces involve not just collocations of content words with ‘grammatical’ expressions but the whole array of linguistic associations between words, suggesting a tie-up with Distributional Semantics (Boleda, 2020; Lenci & Sahlgren, 2023), such as Vector Space Semantics (see, for example, Erk, 2012; Gregoromichelaki et al., 2019; Gregoromichelaki et al., 2022; Purver et al., 2021). Assuming that such collocational word spaces do underlie linguistic behaviour, the M d label acts as an address that points to the part of a collocational space that is involved in morphological dependencies, and the fact that the label primarily involves closed-class expressions may provide at least a partial reason for the differences in behaviour between ‘functional’ expressions and ‘lexical’, contentive ones (Cann, 2000; Muysken, 2008).
While these ideas are very speculative, it seems that the unfashionable, form/distribution account of morphological dependencies presented in this paper can provide analyses of such phenomena that are grounded in general linguistic experience, without recourse to concepts of ‘universal grammar’ or some innate, specifically linguistic ‘mental organ’. Of course, the theory needs to be tested against phenomena presented by languages with a much richer morphology than English. For such languages, M d values will include morphs smaller than the word. Indeed, in fusional languages like Latin, these will be bound inflectional forms. Thus, in Latin, the M d value projected by a masculine noun like puer ‘boy’ will include the second declension endings -us, -um, -ī, -ō, etc., to be matched by the M d value of an adjective form like bonus ‘good’ but not the feminine form bona. This work is ongoing.

Funding

This research received no external funding.

Institutional Review Board Statement

I declare that this research conforms to all ethical guidelines of the University of Edinburgh.

Informed Consent Statement

Not applicable, as no informants were consulted.

Data Availability Statement

All relevant data are given in the text.

Conflicts of Interest

The author declares no conflicts of interest.

Notes

1
There has been some discussion of phonology within the DS literature, although this has tended to be concerned with prosody as being more obviously associated with syntax and semantics (Kula, 2007; Kula & Marten, 2011; Kiaer, 2014).
2
Note that here and throughout, all acceptability and grammaticality judgements are my own. I use the asterisk to mark only those strings that seem to me to have no meaning in any context, while the hashtag marks examples that are pragmatically marked but interpretable in suitable contexts. I use a question mark to indicate that I judge an example to be marked but am unsure whether it could be meaningfully contextualised or used.
3
All representations are assumed to underlie my idiolect of standard Southern British English, although I retain word final /r/ as I tend to pronounce this before a word initial vowel.
4
I make no attempt in this paper to model phrasal phonology. Hence, tree displays show P h labels only on nodes decorated through the actions induced by the parse of a word.
5
The initial set of actions that are not given fix a subject and specify tense and aspect information. See Cann (2011) for details and discussion. Note that T y ( e i ) is the type of individual entities contrasting with situations of type e s . T y ( e ) denotes the supersort.
6
In this and subsequent trees, completed nodes are shown with pairs of formula plus associated type values within square brackets and separated by a colon: so [ Morag : e i ] is equivalent to { F o ( Morag ) , T y ( e i ) } . There is nothing significant in this change in notation. It is simply an attempt to make the trees slightly more reader-friendly. For the same reason, P h labels are omitted unless relevant.
7
The symbol R in Figure 3 stands for a range of pragmatically determined relations that can obtain between a ‘possessor’ and the ‘possessed’, and the structure of epsilon terms differs from that set out in Kempson et al. (2001). Here, it is the quantifier that introduces the bound variable, not the common noun, and the scope is propositional T y ( t ) , not the peculiar, ad hoc-type c n . This means that common nouns have the predicate type e t , thus bringing the system in line with predicate logic. Space constraints prevent me from discussing the matter further here.
8
Note that because the complement tree is rooted in a term requirement, any noun phrase may be parsed, thereby licensing Jeremy looked it/Mary up.
9
The modality ⟨↑₁⟩⟨↑₀⟩ points to a node reached by going up from a functor node and then up from an argument node; its obverse is ⟨↓₀⟩⟨↓₁⟩. By convention, a subscripted 0 with an up or down arrow indicates a left-hand, argument node, while a subscripted 1 indicates a right-hand, functor node.
10
Defined as the ordered set of trees that are outputs of parsing the sequential utterances of the current speech exchange; see Purver et al. (2006).
11
Note that for the sake of (some) readability, I have omitted the ‘Principle B’ restriction against substitution of a co-argument. Note further that this is only part of the actions associated with Substitution, as there are instances where a target or antecedent node does not carry an M d label, such as in the predicate substitution found with the copula and other auxiliary constructions (Cann, 2006, 2007, 2011). Where this occurs, Substitution simply requires type matching. There is also, of course, substitution from the non-linguistic context. In these situations, I assume that the pronouns give rise to implicatures that direct the addressee to possible antecedents.
12
Readers unfamiliar with DS can ignore the specification of the rule.
13
I discuss how to analyse instances of ‘disagreement’ like those marked by a hashtag below.
14
There is no attempt here to account properly for demonstrative interpretations, but I sketch them as epsilon terms with an operator that assigns entities as distal or proximal with respect to a particular context ( C ). So I am assuming that dist is a function that locates entities at a distance with respect to the context of utterance and prox locates an entity as close with respect to the context, no matter exactly what concept of distance or proximity is relevant.
15
There is nothing, however, to stop an agent conceptualising these values as types or categories if doing so leads to some greater efficiency in processing, but, equally, there is no reason why an agent needs to do so. No difference in grammatical output would result. Note also that individual agents need not be aware of all the determiners that may modify singular count nouns in order to distinguish the syntactic difference between these and mass nouns, provided that some of the relevant determiners have been acquired. Indeed, since all singular count nouns can appear with the indefinite article, all that an agent needs to have acquired is M d ( { ⟨ə⟩, ⟨ə,n⟩ } ).
16
Although this sort of substitution is not covered by the rule of Substitution given above, it seems to me to be intuitively right and requires further formal investigation, which is not pursued in this paper.
17
L stands for whatever operator is expressed by a lot of.
18
The rule is generalised but assumed to involve only determiner and person, gender and number M d values. Further development of the theory may require extra conditions, but the rule as given is sufficient for current purposes.
19
For example, a manager of a cafe could felicitously say to an employee ‘I want less table in here and more space to move around in’.
20
An unfixed node is a node without a current fixed position within an unfolding tree that has to be fixed at some point in the current parsing process. A locally unfixed node is one that has to be given a fixed position within the minimal propositional structure that contains it.
21
The initial action in this macro checks that there is a dominated locally unfixed node and, if there is not, aborts the parse. This captures the fact that non-imperative main verbs in English cannot appear clause-initially.
22
An alternative is to abort the parse if the locally unfixed node has a third person singular M d value. However, this would make it impossible to account for base verb forms appearing with third singular collective nouns in British English, which is discussed below.
23
For a discussion of SAI without agreement, see Cann (2011).
24
Russell’s iota operator is used to indicate a restriction to definite NPs. Of course, any other way of marking out definite NPs would work as well.

References

1. Allan, K. (1980). Nouns and countability. Language, 56, 541–567.
2. Bale, A., & Barner, D. (2018). Quantity judgment and the mass-count distinction across languages: Advances, problems, and future directions for research. Glossa: A Journal of General Linguistics, 3(1), 63.
3. Bloomfield, L. (1935). Language. George Allen and Unwin.
4. Bloomfield, L. (1938). Language or ideas? Language, 12(2), 89–95.
5. Boleda, G. (2020). Distributional semantics and linguistic theory. Annual Review of Linguistics, 6, 213–234.
6. Bunt, C. (1985). Mass terms and model theoretic semantics. Cambridge University Press.
7. Cann, R. (2000). Functional versus lexical: A cognitive dichotomy. In R. D. Borsley (Ed.), The nature and function of syntactic categories: Syntax and semantics 26 (pp. 37–78). Academic Press.
8. Cann, R. (2006). Semantic underspecification and the pragmatic interpretation of be. In K. von Heusinger, & K. Turner (Eds.), Where semantics meets pragmatics (pp. 307–335). CRISPI, Elsevier.
9. Cann, R. (2007). Towards a dynamic account of be in English. In I. Comorowski, & K. von Heusinger (Eds.), Existence: Semantics and syntax (pp. 13–48). Kluwer.
10. Cann, R. (2011). Towards an account of the auxiliary system in English: Building interpretations incrementally. In R. Kempson, E. Gregoromichelaki, & C. Howes (Eds.), The dynamics of lexical interfaces (pp. 279–317). CSLI Publications.
11. Cann, R., Kempson, R., & Marten, L. (2005). The dynamics of language (Syntax and Semantics, Vol. 35). Academic Press.
12. Chomsky, N. (1965). Aspects of the theory of syntax. MIT Press.
13. Christopher, N. (2021). Differential object marking in Kazakh: The dynamic syntax approach. Journal of Logic, Language and Information, 30(2), 305–329.
14. Clark, A. (2015). Distributional learning of syntax. In N. Chater, A. Clark, J. A. Goldsmith, & A. Perfors (Eds.), Empiricism and language learnability (pp. 106–145). Oxford University Press.
15. Clark, A., & Lappin, S. (2010). Unsupervised learning and grammar induction. In A. Clark, C. Fox, & S. Lappin (Eds.), Handbook of computational linguistics and natural language processing (pp. 197–220). Wiley-Blackwell.
16. Erk, K. (2012). Vector space models of word meaning and phrase meaning: A survey. Language and Linguistics Compass, 6(10), 635–653.
17. Firth, J. R. (1935). The technique of semantics. Transactions of the Philological Society, 34, 36–73.
18. Firth, J. R. (1957). A synopsis of linguistic theory 1930–1955. In Studies in linguistic analysis. Oxford University Press.
19. Gazdar, G., Klein, E., Pullum, G. K., & Sag, I. A. (1985). Generalized phrase structure grammar. Basil Blackwell.
20. Gillon, B. S. (1992). A common semantics for English count and mass nouns. Linguistics and Philosophy, 15(6), 597–639.
21. Gillon, B. S. (1999). The lexical semantics of English count and mass nouns. In E. Viegas (Ed.), The breadth and depth of semantic lexicons (pp. 19–37). Kluwer Publications.
22. Goldsmith, J. A. (2001). Unsupervised learning of the morphology of a natural language. Computational Linguistics, 27, 153–198.
23. Gregoromichelaki, E., Eshghi, A., Howes, C., Mills, G., Kempson, R., Hough, J., Healey, P., & Purver, M. (2022). Language and cognition as distributed process interactions. Proceedings of SemDial 2022. Available online: http://semdial.org/anthology/Z22-Gregoromichelaki_semdial_0018.pdf (accessed on 17 February 2023).
24. Gregoromichelaki, E., Howes, C., & Kempson, R. (2019). Actionism in syntax and semantics. CLASP Papers in Computational Linguistics, 2, 12–27.
25. Gussenhoven, C. (1986). Phonology and syntax. Journal of Linguistics, 22(2), 455–474.
26. Harris, Z. (1951). Methods in structural linguistics. University of Chicago Press.
27. Harris, Z. (1954). Distributional structure. Word, 10, 146–162.
28. Howes, C., & Gibson, H. (2021). Dynamic syntax: The dynamics of incremental processing: Constraints on underspecification. Journal of Logic, Language and Information, 30(2), 263–276.
29. Ibbotson, P. (2020). What it takes to talk. Mouton de Gruyter.
30. Ibbotson, P., & Salnikov, V. (2019). A dynamic network analysis of emergent grammar. First Language, 39(6), 652–680.
31. Joosten, F. (2003). Accounts of the count-mass distinction: A critical survey. Nordlyd, 31(1), 216–229.
32. Kempson, R., & Kiaer, J. (2009). Japanese scrambling and the dynamics of on-line processing. In H. Hoshi (Ed.), Language, mind and brain: Perspectives from linguistics and cognitive neuroscience (pp. 127–192). Kuroshio.
33. Kempson, R., & Kiaer, J. (2010). Multiple long-distance scrambling: Syntax as reflections of processing. Journal of Linguistics, 46(1), 127–192.
34. Kempson, R., Meyer-Viol, W., & Gabbay, D. (2001). Dynamic syntax. Basil Blackwell.
35. Kiaer, J. (2014). Pragmatic syntax. Bloomsbury Publishing.
36. Kula, N. (2007). Effects of phonological phrasing on syntactic structure. The Linguistic Review, 24, 201–231.
37. Kula, N., & Marten, L. (2011). The prosody of Bemba relative clauses: A case study of the syntax-phonology interface in dynamic syntax. In R. Kempson, E. Gregoromichelaki, & C. Howes (Eds.), The dynamics of lexical interfaces (pp. 61–90). CSLI.
38. Lenci, A., & Sahlgren, M. (2023). Distributional semantics. Studies in natural language processing. Cambridge University Press.
39. Link, G. (1983). The logical analysis of plurals and mass terms. In R. Bäuerle, C. Schwarze, & A. von Stechow (Eds.), Meaning, use and interpretation of language (pp. 302–323). De Gruyter.
40. Lucas, C. (2014). Indefinites and negative concord in Maltese: Towards a dynamic account. In Perspectives on Maltese linguistics (pp. 225–248). Akademie Verlag.
41. Marten, L. (2002). At the syntax-pragmatics interface: Verbal underspecification and concept formation in dynamic syntax. Oxford University Press.
42. Marten, L. (2005). The dynamics of agreement and conjunction. Lingua, 115(4), 527–547.
43. McCauley, S. M., & Christiansen, M. H. (2019). Language learning as language use: A cross-linguistic model of child language development. Psychological Review, 126(1), 1–51.
44. McCormack, A. (2008). Subject and object pronominal agreement in the southern Bantu languages: From a dynamic syntax perspective [Ph.D. dissertation, University of London, School of Oriental and African Studies].
45. Meulen, A. T. (1981). Intensional logic for mass terms. Philosophical Studies, 40, 105–125.
46. Mintz, T. H. (2003). Frequent frames as a cue for grammatical categories in child directed speech. Cognition, 90, 91–117.
47. Mintz, T. H., Newport, E. L., & Bever, T. G. (2002). The distributional structure of grammatical categories in speech to young children. Cognitive Science, 26, 393–424.
48. Moltmann, F. (1998). Part structures, integrity, and the mass-count distinction. Synthèse, 116, 75–111.
49. Muysken, P. (2008). Functional categories. Cambridge University Press.
50. Müller, S., Abeillé, A., Borsley, R. D., & Koenig, J.-P. (2024). Head-driven phrase structure grammar: The handbook. Language Science Press.
51. Nicolas, D. (2002). Is there anything characteristic about the meaning of a count noun? Revue de la Lexicologie, 18, 125–138.
52. Purver, M., Cann, R., & Kempson, R. (2006). Grammars as parsers: Meeting the dialogue challenge. Research on Language and Computation, 4(2–3), 289–326.
53. Purver, M., Sadrzadeh, M., Kempson, R., Wijnholds, G., & Hough, J. (2021). Incremental composition in distributional semantics. Journal of Logic, Language and Information, 30, 379–406.
54. Saxton, M. (2009). The inevitability of child directed speech. In S. Foster-Cohen (Ed.), Language acquisition (pp. 62–86). Palgrave MacMillan.
55. Selkirk, E. (1984). Phonology and syntax: The relation between sound and structure. MIT Press.
56. Snow, C. E. (1972). Mothers’ speech to children learning language. Child Development, 43, 549–566.
57. Turner, D. (2016). The morpho-syntax of Katcha: A dynamic syntax account [Ph.D. dissertation, University of Edinburgh].
58. Wittgenstein, L. (1953). Philosophical investigations. MacMillan Publishers.
Figure 1. Parsing Morag took.
Figure 2. Parsing Morag took after.
Figure 3. Parsing Morag took after her mother.
Figure 4. Parsing Jeremy looked.
Figure 5. Parsing Jeremy looked up the number.
Figure 6. Parsing these songs.
Figure 7. Parsing a chair.
Figure 8. Parsing many cars.
Figure 9. Parsing cars.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Cann, R. Morphological Dependencies in English. Languages 2025, 10, 289. https://doi.org/10.3390/languages10120289
