
Constructed Action in American Sign Language: A Look at Second Language Learners in a Second Modality

1 Department of American Sign Language & Interpretation Education, Rochester Institute of Technology, National Technical Institute for the Deaf, Rochester, NY 14623, USA
2 Center on Cognition and Language, Rochester Institute of Technology, National Technical Institute for the Deaf, Rochester, NY 14623, USA
* Author to whom correspondence should be addressed.
Languages 2019, 4(4), 90; https://doi.org/10.3390/languages4040090
Submission received: 20 April 2019 / Revised: 13 September 2019 / Accepted: 31 October 2019 / Published: 12 November 2019
(This article belongs to the Special Issue HDLS 13: Challenges to Common Beliefs in Linguistic Research)

Abstract

Constructed action is a cover term used in signed language linguistics to describe multi-functional constructions which encode perspective-taking and viewpoint. Within constructed action, viewpoint constructions serve to create discourse coherence by allowing signers to share perspectives and psychological states. Character, observer, and blended viewpoint constructions have been well documented in the signed language literature on Deaf signers. However, little is known about hearing second language learners’ use of constructed action or about the acquisition and use of viewpoint constructions. We investigate the acquisition of viewpoint constructions in 11 college students acquiring American Sign Language (ASL) as a second language in a second modality (M2L2). Participants viewed video clips from the cartoon Canary Row and were asked to “retell the story as if you were telling it to a deaf friend”. We analyzed the signed narratives for time spent in character, observer, and blended viewpoints. Our results show that despite predictions of an overall increase in use of all types of viewpoint constructions, students varied in their time spent in observer and character viewpoints, while blended viewpoint was rarely observed. We frame our preliminary findings within the context of M2L2 learning, briefly discussing how gestural strategies used in multimodal speech-gesture constructions may influence learning trajectories.

1. Introduction

Depicting exists alongside indexing and describing as basic communicative tools used by human beings to relay information about people, places, things, and events. Clark describes this tripartite communicative strategy as follows: “In describing, people use arbitrary symbols (e.g., words, phrases, nods, and thumbs-up) to denote things categorically, and in indicating, they use pointing, placing, and other indexes to locate things in time and space. In depicting, people create one physical scene to represent another” (Clark 2016, p. 324). Clark suggests that both indexing and describing have received the lion’s share of attention, while depiction has escaped the gaze of many researchers operating within mainstream linguistic theory. This is an unfortunate oversight, stemming, as Clark notes, from the false assumption that depiction does not participate in the complex semantic and syntactic calculations made by language users.
Signed language and gesture researchers have not been fooled by these base assumptions regarding the seemingly elementary nature of depiction. When looking at language use in the visual modality, depiction is an undeniable feature which requires careful consideration and theoretical attention. Both signers and gesturers utilize a range of multimodal semiotic resources, requiring researchers who focus on understanding the complexities of languages in the visual modality to account for these ubiquitous constructions. Gesturers have been shown to deploy depiction in constructions like “like this”, where ‘like this’ functions to introduce a depiction which can simply be a gesture (Fillmore 1997). Similarly, constructed speech is also recognized as a type of depiction (Partee 1973) in which direct quotations are an enactment of a speech act. Signers can use ‘constructed action’ to encode different perspectives, either simultaneously or sequentially relaying information about referents by pivoting their gaze, shoulders, or bodies or by changing facial markers.
The use of constructed action in deaf signers has been well documented across a variety of signed languages including Danish Sign Language (Engberg-Pedersen 1993), American Sign Language (Metzger 1995; Janzen 2004; Dudis 2004), French Sign Language (Cuxac 2000), South African Sign Language (Aarons and Morgan 2003), German Sign Language (Perniss 2007), Icelandic Sign Language (Thorvaldsdottir 2008), Mexican Sign Language (Quinto-Pozos et al. 2009), Swedish Sign Language (Nilsson 2010), Irish Sign Language (Leeson and Saeed 2012), British Sign Language (Cormier et al. 2013), Auslan (Ferrara and Johnston 2014), Austrian Sign Language (Lackner 2017), and Finnish Sign Language (Jantunen 2017). To give an approximation of how pervasive ‘constructed action’ can be in a stretch of signed discourse, Thumann (2013) found that, in 160 min of recorded video presentations in ASL, presenters averaged 20 “depictions” per minute1. Constructed action is so prevalent it has led researchers to question why it seems to be obligatory in signed languages (Quinto-Pozos 2007). Given the frequency of occurrence and the existence of constructed action cross-linguistically, it does seem to be a ubiquitous strategy for reporting narratives and for general event enactment.
First language acquisition studies suggest that children use constructed action in signed language productions as young as 2–3 years of age, but have yet to master these constructions even at age 8 (Schick 1987; Slobin et al. 2003). Previous studies with adult signers have suggested that Deaf signers are able to use constructed action to switch seamlessly between perspectives during the telling or retelling of stories (Cormier et al. 2013). Central to the effective use of constructed action is the signer’s ability to use space to assign viewpoints belonging to different actors in the narrative, thereby maintaining consistent reference tracking while rapidly switching between participants in the narrative discourse (Metzger 1995; Janzen 2017, 2019).
Unfortunately, little is known about the acquisition or use of constructed action among sequential bimodal bilinguals, or signers who learn American Sign Language (ASL) much later than their first spoken language. Signers who learn a signed language as their second language, in a second modality, are often referred to as M2L2 (second language, second modality) signers. However, some have used M2L2 to refer more generally to signers with two modalities, including simultaneous bilinguals such as CODAs (children of deaf adults), who learn a signed language alongside a spoken language from birth (Reynolds 2016). We restrict our use of M2L2 to college-aged adults learning a second language in a second modality, with a large gap between first and second language acquisition.
M2L2 studies are still in their infancy. Most publications have focused on controlled lab experiments rather than on natural language acquisition trajectories or natural language use (but see Ferrara and Nilsson 2017 for Norwegian M2L2 signers). Fortunately, researchers in Norway, Germany, Ireland, and the United States have all taken steps toward creating corpora of M2L2 language use, and more research on the acquisition of M2L2 signers will be forthcoming. We have based our research questions on what is known about constructed action and the use of viewpoint constructions in Deaf adult signers. We wondered, for example, about the acquisition trajectory and use of character, observer, and blended viewpoint in new hearing signers acquiring ASL as a second language in a second modality.
In the next section we give a brief introduction to constructed action, focusing on viewpoint constructions in signed languages. We then explain the methods used to investigate constructed action acquisition in hearing M2L2 signers, report our preliminary findings with regard to constructed action, discuss possible interpretations of the data, and conclude with some remarks about the role of gesture in the acquisition of M2L2 constructions.
Depiction, to the extent that it has been researched, is generally tackled by cognitive-functional linguists working on multimodal aspects of spoken languages, or sign language linguists (both formally and functionally oriented) who are forced to confront depiction in the grammars of the world’s signed languages. This in turn has led to a proliferation of terminology as researchers from different theoretical camps continue to reinvent the wheel in their discussions of frequent and well-attested perspective-taking functions in spoken and signed languages. Debates on how best to split these complex constructions into cohesive linguistic groupings vary greatly depending on whether one attempts to describe functional or formal similarities. Within sign language linguistics and gesture studies, ‘depiction’ is often used interchangeably with ‘constructed action’, both as umbrella terms for various depicting constructions which themselves are categorized using varying terminologies, e.g., role-shift (Padden 1986), imagistic gesture (McNeill 1992), constructed speech, enactment, referential shift (Engberg-Pedersen 1993), surrogate blends (Liddell 1995), and personal transfer (Cuxac 2000), just to name a few2. For the purposes of our analysis, ‘constructed action’ serves as a macro-category which includes ‘depictive/classifier’ constructions as well as more canonical examples of ‘constructed action’ which encode viewpoint and perspective through articulations of the hands, face, and body (Metzger 1995; Janzen 2004; Quinto-Pozos and Parrill 2015)3.
But even across the varied approaches to constructed action and depiction, researchers seem to agree that one of the major functions instantiated by the use of constructed action is encoding viewpoint or perspective-taking. Perspective-taking is a complex cognitive task that involves the construction of the conceptualizer’s point-of-view relative to the object of conception (be it an object or event). Perspective-taking in signed languages requires the ability to map the physical articulatory space surrounding the signer onto various referential frameworks which are part of sign language grammars. That is, there is a ‘right way’ and a ‘wrong way’ to effectively use constructed action to convey viewpoints. In this sense, signers can make grammaticality judgements about the use or obligatoriness of constructed action (Quinto-Pozos 2007).
While researchers have developed many terms for referring to the semantic mapping of space onto articulatory space (e.g., surrogate space, token space, depicting space, referential space), it is clear from the list in Table 1 that most have identified two distinct types of perspective construction. For the purposes of our analysis, we adopt the terms ‘observer’ and ‘character viewpoint’ to refer to these two different vantage points. In the following section we describe how character and observer viewpoint differ with regard to how articulatory space and semantic space are structured.
The space around the signer is referred to as signing space or, more generally, articulatory space. Articulatory space is ‘where the articulation occurs.’ At the same time, it is clear that this space is also semantically structured: the space around the signer can encode semantic relationships that are set up during the discourse of the narrative. In this sense, space is dynamically organized during a given discourse context into semantically significant space with grammatical meaning (Liddell 1990; Engberg-Pedersen 1993). ‘Semantic space’, which can also be considered ‘grammatical space’, encodes where referents are positioned during an event as well as how they move and interact. Thus, the space around the signer comprises both forms (the articulations) and meanings (the who’s, how’s, and what’s). In the retelling of a narrative, the relationship between physical articulatory space and the structure of semantic space is dynamically negotiated within that narrative space. Form-meaning relationships can be temporarily used to encode various viewpoints either in sequence or simultaneously.
The relationship between articulatory space and semantic space is a key determiner when deciding whether a signer is using character or observer viewpoint constructions. Perniss and Özyürek (2008) have discussed this relationship between articulatory space and narrative space as ‘projection.’ Using their term, narrative space is thus “projected” onto articulatory space to create a temporary form-meaning relationship for the purposes of discourse cohesion. Table 2 outlines the main differences in how observer and character viewpoints are mapped, or projected, from ‘event space’ to ‘sign space’, or, using our terminology, from ‘narrative space’ to ‘articulatory space’.
Observer perspective encodes a point-of-view in which the signer takes a global view of the event, looking at it from an external vantage point rather than as a participant in the event itself. Importantly, the narrative space during observer viewpoint is reduced in size relative to the real space occupied by the signer. This is what gives the sense of a ‘bird’s-eye-view’: because the space in front of the signer is of a reduced size, large objects like buildings and streets, or humans and animals, can be set in space to create a map-like reference between fixed or moving entities. When occupying an observer viewpoint, the signer is likely to use ‘classifier constructions’ to depict the placement of objects in the scene, as seen in Figure 1. When using observer viewpoint, the narrator does not put themselves ‘on-stage’ by using first person referring constructions such as ‘I’ and does not convey their own feelings, thoughts, or inner-states. In observer viewpoint the signer describes the scene by showing how objects move, what they look like, or how they are positioned relative to one another. Because of the descriptive off-stage presence of the signer, this viewpoint has sometimes been referred to as ‘narrator viewpoint’ (Slobin et al. 2003).
The character viewpoint construction, on the other hand, describes a perspective in which the signer represents the narrative event space to depict a participant within the story (Slobin et al. 2003). The semantic space is ‘projected’ onto articulatory space. One way to conceptualize this relationship is as a one-to-one mapping between the signer’s body and the character’s body they are depicting. Notably, in character viewpoint, the size of the narrative space (or projected space) is life-sized (Perniss and Özyürek 2008). To the untrained eye, character viewpoint may look like charades, because the body of the signer ‘becomes’ the body of the character they are depicting. When the signer’s body, head, and face move, the referent’s body, head, and face move. An example of character viewpoint can be seen in Figure 2, where the signer is looking through the binoculars at the cat (from the perspective of Tweety).
In addition to character and observer viewpoint constructions, we also analyzed a third viewpoint type, a combination of character and observer viewpoints produced simultaneously. This dual viewpoint construction (McNeill 1992; Parrill 2009) combines the features of character and observer viewpoints and is often referred to as ‘blended viewpoint’ (Dudis 2004; Wulf and Dudis 2005). Blended viewpoint is produced by using different articulators to encode different viewpoints: for example, the hands and body may encode character viewpoint while the face encodes observer viewpoint. During blended viewpoint the signer enacts characteristics of both the character and the observer perspectives simultaneously (Figure 3).
To give an example of blended viewpoint, Figure 4 shows a signer who is simultaneously producing a character viewpoint, depicting the life-sized element of |cat|, and a much smaller-scale, ‘bird’s-eye’ articulation of an entity classifier signifying |cat|, combining these two elements into a single blended viewpoint construction. Note that neither of these articulations on its own means CAT, but within the blended viewpoint construction, embedded within the larger narrative discourse space constructed for the retelling of the narrative, the meaning of |cat| is clearly evoked. Clark (2016) notes that multimodal co-speech gesture productions can also consist of what he calls ‘hybrid depictions’, which correspond to blended viewpoint as discussed in the signed language literature on depiction constructions.

2. Materials and Methods

2.1. Participants

Eleven M2L2 (second language, second modality) students, eight males and three females, were recruited from first semester ASL 1 classes. At the time of the first recording, the students were monolingual English speakers who had not studied a second spoken language. Participants ranged in age from 18 to 44 years old. The protocol for this research was approved by the National Technical Institute for the Deaf’s Institutional Review Board, according to the ethical guidelines laid out by the governing body, in accordance with the Declaration of Helsinki. Participants completed both an informed consent form and a video-release form agreeing to participate in the study and agreeing to allow researchers to share their video data for presentation, publication, and teaching purposes. Participants who did not wish to have their video data released but who gave informed consent were allowed to participate in the study. Their data were collected and used for analysis, but their videos were not used for the creation of still-images or the presentation of data in the public sphere.
All participants completed language background questionnaires and gave self-proficiency ratings for their ASL skills. All of the participants were matriculated undergraduate students at Rochester Institute of Technology (RIT), a private university in the eastern United States. Because of the large demand for ASL courses at RIT, classes are divided into separate sections for ASL interpreting majors and for students who take ASL as a ‘foreign language.’ We restricted our analysis to students enrolled in ASL for foreign language credit and did not include ASL interpreting majors. This decision to exclude interpreting majors was made because curriculum for ASL interpreting majors and ASL foreign language students is structured differently and students are not exposed to the same content on the same timeline. Table 3 provides information for each participant, regarding testing week for session one and session two (during a 16-week semester), the time interval between testing sessions (provided in months), as well as the ASL course level in which the student was enrolled. In some cases, the student was not enrolled in ASL 2 during the second session which is also noted.

2.2. Stimuli

The stimuli consisted of short clips from Canary Row, a series of Sylvester and Tweety cartoons which have proven to be an effective elicitation tool for narrative retellings (McNeill 1992). Three clips were selected to elicit signed stories from first year M2L2 students. For the purposes of this analysis, we focus on one of the three retellings, based on a 52 s video clip.

2.3. Procedure

Research assistants, who were hearing ASL interpreting majors, gave the participants an informed consent sheet and described the benefits and risks of the study in English. Instructions were read aloud to participants stating, “For this part of the study, you will watch a short clip from a Sylvester and Tweety cartoon. You will sign in ASL what you saw in the cartoon clip. I will show you the clip two times before I ask you to sign the story”. Participants were then asked to retell the narrative ‘as if to a deaf friend’ (i.e., using gesture, mime, ASL, or a combination). This specific direction was added so that participants would not feel limited in their production skills if they were not confident retelling the stories in ASL alone, and so that they would use any semiotic device they deemed fit. Participants watched the cartoon videos on a desktop computer set up in a private testing room with no other distractions. Participants sat in front of a monitor with a built-in webcam running in the background during testing to capture the students’ signing4. After watching the video up to two times, the students then “retold” the cartoon stories using whatever semiotic devices they needed to complete the task. Participants were paid for their participation in the study.

2.4. Coding

Videos were coded and analyzed using ELAN, a video annotation software program developed by researchers at the Max Planck Institute for Psycholinguistics in Nijmegen, Netherlands (Crasborn and Sloetjes 2008). Coding tiers were developed to capture observer viewpoint, character viewpoint (with separate tiers for Sylvester and Tweety), and blended viewpoint. We used ELAN to track how many times each tier was marked and how long each annotation stretched across each tier. Identification of whether a discourse stretch was presented from character, observer, or blended viewpoint was primarily based on the direction or placement of the signs in space as they corresponded with the cartoon-space stimulus in the actual cartoon. Coding was completed by two signers, a hearing research assistant who is also a fourth year interpreting major and a Deaf faculty member.
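To make this kind of tier-based tally concrete, the sketch below shows one way per-tier annotation counts and total durations can be extracted from an ELAN (.eaf) file using the open-source pympi library. This is a minimal sketch of the workflow, not our actual analysis script; the file name and tier names are hypothetical placeholders.

```python
# Minimal sketch: count annotations and total annotated time per viewpoint
# tier in an ELAN (.eaf) file, using the pympi library (pip install pympi-ling).
# The file name and tier names below are hypothetical placeholders.
import pympi

eaf = pympi.Elan.Eaf("participant01_session1.eaf")

for tier in ("Observer", "Character-Sylvester", "Character-Tweety", "Blended"):
    # Each annotation is returned as a (start_ms, end_ms, value, ...) tuple.
    annotations = eaf.get_annotation_data_for_tier(tier)
    total_ms = sum(end - start for start, end, *_ in annotations)
    print(f"{tier}: {len(annotations)} annotations, {total_ms / 1000:.1f} s total")
```

ELAN’s built-in annotation statistics view yields the same totals; scripting the tally simply makes it reproducible across many participant files.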
Our first hypothesis proposed that students would exhibit a common acquisition trajectory for constructed action across sessions, with a general increase in the use of all types of constructed action over time. We found instead that the acquisition trajectory varied greatly across individuals and that the details of how individuals acquire constructed action constructions are more variable than previously thought. Following Ortega (2013, 2017, inter alia), we also hypothesized that students would rely heavily on their English co-speech gesture, due to the mimetic nature of the task; however, this proved hard to test. Because gesture and sign are conveyed in the same modality, making principled decisions as to what is a sign and what is a gesture is sometimes impossible. We return to this problem of categorizing signs versus gestures in the discussion (Section 4).

3. Results

Participants’ videos were qualitatively analyzed for evidence of the characteristics of each of the three viewpoints, based on prototypical articulations seen in native-like ASL use. Analysis particularly centered on the participants’ ability to adopt character or blended viewpoints when producing verbs, as the event-internal vantage point required to produce these narration styles is a feature of constructed action. The Canary Row clips repeatedly elicited the concept of Sylvester the cat looking at, or searching for, Tweety Bird, with and without explicit mention of a pair of binoculars. The verbs produced to convey this concept were most frequently HOLDING-BINOCULARS, LOOK, and SEARCH, or similar variations (see Table 4 for individual analyses).
Figure 5 illustrates a single participant utilizing two different constructions in session one and session two; however, each is an instantiation of the character viewpoint. At time one, the participant depicts the character looking through binoculars, while at time two, the participant produces the lexical sign LOOK paired with a constructed action, with their eyes wide, eyebrows raised, and mouth slightly open, to show the character looking straight ahead, mildly astonished at the scene.
Figure 6 illustrates a participant using a different lexical sign in a different construal of the same situation, choosing instead the sign SEARCH with widened eyes and mouth slightly agape. Note that the use of different lexical signs, LOOK-AT by Participant 1 at time two or SEARCH by Participant 11 at time two, does not indicate that either depiction is ‘wrong’, but simply that the two participants offer different construals, or different ways of conveying the same part of the story, where the character is looking or searching for the antagonist.
Participants’ productions of each of these verbs can also be categorized as successful and unsuccessful attempts at deploying constructed action. An unsuccessful production of constructed action can be seen when the signer does not adequately depict the referent on their body. For example, the participant’s eye gaze, facial expression, head movement, and body movement may not align with the gaze and posture that would be present if they were participating in the narrative from an internal point of view. As a result, the interlocutor does not receive the impression that the participant has fully enacted the perspective of the referent in their narrative. In successful character or blended viewpoint production, the participant adequately enacts characteristics of the referent. Incorporated eye gaze, facial expression, head and body movements are accurate and projected to the relative space an internal participant in the narrative might occupy.
The resulting enactment gives the impression that the participant is telling the story as a character actively experiencing the narrative, not merely as an external narrator. Successful and unsuccessful deployment of character viewpoint constructions can be seen in Figure 7 and Figure 8, both showing the same signer, during the same session. In Figure 7, at the beginning of the narration, the signer does not successfully implement the character viewpoint construction because the straightforward eye gaze and lack of head movement are not modified to match the perspective of a character that is looking around for something.
However, we see that only a little later in this same session, this participant successfully employs the same HOLDING-BINOCULARS construction with the addition of the appropriate head movement (a sweeping motion) and eye-gaze: the rotation of the head and hands mimics the movement of a character that is looking around the space for something, as shown in Figure 8.
We might take this variable implementation of a fully instantiated character viewpoint as evidence that the student is aware of, but has not yet fully acquired, the appropriate formal elements of this construction. One might then expect that, when tested at a later time, this participant would represent the character viewpoint more systematically in their signing. In Figure 9, however, we see that at time 2 this participant still does not achieve successful character viewpoint, because the straightforward eye gaze and lack of head movement do not enact the expressions or movements of a character that is searching for something. This is akin to the original production of HOLDING-BINOCULARS (Figure 7), except that the lexical item is replaced by the sign SEARCH; neither is an example of successful deployment of constructed action.
In contrast to the unsuccessful deployment of the character viewpoint construction with the lexical item SEARCH in Figure 9, we can see that in Figure 10, Participant 1, at time 2, successfully deploys the character viewpoint construction using SEARCH accompanied by the appropriate formal elements of eye-gaze, and head/body movement. Notice that in Figure 10, the eye gaze moves about the space, and the head moves with the hands/body, enacting the movement of the character that is searching the premises for something in their vicinity.
Table 5, Table 6 and Table 7 outline the number of times each predication involving the signs HOLDING-BINOCULARS, LOOK-AT, and SEARCH was successfully or unsuccessfully produced using the character or blended viewpoint constructions. Only signers who produced the predication using one of these three lexical signs at least once (whether successfully or unsuccessfully) are shown; thus, not every participant is represented in each table. If a participant is not included in a table, they did not produce the viewpoint construction during time 1 or time 2.
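As a sketch of how such tables can be assembled from coded annotations, the fragment below tallies productions by verb, session, and outcome. The coded records shown are illustrative placeholders, not our study data.

```python
# Hypothetical sketch of the tally behind Tables 5-7: aggregating coded
# productions by verb, session, and outcome. Records are illustrative only.
from collections import Counter

coded = [
    # (participant, session, verb, successful)
    (1, "T1", "HOLDING-BINOCULARS", True),
    (7, "T1", "HOLDING-BINOCULARS", False),
    (7, "T2", "SEARCH", False),
    (11, "T2", "SEARCH", True),
]

tally = Counter(
    (verb, session, "successful" if ok else "unsuccessful")
    for _participant, session, verb, ok in coded
)

for (verb, session, outcome), n in sorted(tally.items()):
    print(f"{verb} at {session}: {n} {outcome}")
```

Grouping on (verb, session, outcome) mirrors the row and column structure of the tables.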
The data in these tables show that eight of the eleven participants decreased production of the sign HOLDING-BINOCULARS and/or the sign LOOK from T1 to T2 (Participants 1–4, 6–7, 9, and 10). Three of these eight participants introduced the verb SEARCH in T2 (Participants 1, 6, and 7). The verb SEARCH never appeared in any participant’s data in T1. Participant 11 consistently used LOOK successfully across T1 and T2, and later successfully implemented the use of SEARCH, suggesting that they have learned to use these verbs in predication with constructed action to enact perspectives other than their own. It should be noted that all of the participants completed at least Beginning ASL I at RIT during T1 and prior to T2. The verb SEARCH is part of the ASL I curriculum at RIT and is typically introduced during the fifth unit, at approximately Week 7 of the semester. While six of the participants would not have been introduced to the sign SEARCH prior to time one testing, the other participants may not have acquired the use of SEARCH prior to T1 testing, despite having been introduced to it. As such, SEARCH only begins to appear during the T2 testing session and is used to a much lesser extent than LOOK.
Additionally, the viewpoint constructions paired with the verbs HOLDING-BINOCULARS and LOOK were more often successfully implemented with character or blended viewpoints. However, the verb SEARCH was successfully modified in only two of the five instances produced by the four participants at time two. It is possible that because HOLDING-BINOCULARS and LOOK are articulated with a static hand shape and position, the participants are more successful at integrating these verbs with the movement of the head and body within the viewpoint construction. SEARCH is not articulated with a static hand configuration but instead requires the hand to circle the face. This may prove to be articulatorily more difficult for novice signers.
Some participants, such as Participant 7 (as seen in Figure 7, Figure 8 and Figure 9), used viewpoint constructions both successfully and unsuccessfully at T1 and T2, indicating that while they have knowledge of how to produce viewpoint constructions, they are not able to implement them consistently. Other signers, such as Participant 3, were unsuccessful in all T1 attempts and successful in all T2 attempts, suggesting that they may have acquired the use of viewpoint constructions. However, signer 3 did not produce many viewpoint constructions at T2, suggesting they are still unconfident or unsure of exactly where these constructions should occur.

4. Discussion

We were interested to see whether the data collected from the M2L2 students revealed qualitative information related to common patterns. The initial hypothesis focused on the search for quantitative similarities and differences among students across times 1 (T1) and 2 (T2). It was hypothesized that students would exhibit similar patterns, revealing a common acquisition trajectory for constructed action in new signers, specifically exhibiting an increase in the presence of character and blended viewpoints due to the acquisition of grammar skills key to proficiency in ASL.
A common acquisition trajectory for constructed action across sessions was not seen in this population. It is possible that we did not have a large enough sample to make generalizations, but it seems that students varied in their use of viewpoint types at T1 and T2. Because we were unable to make any quantitative claims about the data, the scope of our analysis shifted to qualitative description of the observed language production to discern the nature of variation among participants. We did not find a general increase in the use of all types of constructed action over time; in fact, many participants showed a decrease in character viewpoint use between T1 and T2. The acquisition trajectory varied greatly across individuals, and the details of how individuals acquire constructed action constructions are more variable than previously thought.
We hypothesized that students would exhibit similar patterns across T1 and T2, revealing a common acquisition trajectory for constructed action in new signers, and that students would exhibit a higher number of character viewpoints during the narratives due to the mimetic nature of the task. We found instead that use of these viewpoint constructions, and consistency in using them, varied both within and across participants.
It is very possible that the students felt more comfortable using their natural co-speech gesture inclinations to depict character viewpoint early on in their learning trajectory, as they were not yet familiar with the patterns or rules associated with the appropriate use of character viewpoint constructions in ASL structure. Hearing students’ experience with games such as Charades, which encourage character viewpoint depictions, gives students prior scripts for moving their body in ways which embody a character. However, judging whether a given articulation in the visual modality is a sign or a gesture is impossible based on formal properties alone. Only a speaker or signer knows whether the articulation they produced was intended to be a sign or gesture, and in some cases, in spontaneous conversation, they may not have the metalinguistic awareness to know the difference. To further complicate the matter, regardless of the intent of the speaker/signer, the interlocutor may categorize an articulation as a sign or gesture differently, based on their own linguistic experience. As Occhino and Wilcox have stated previously, deciding whether or not something is a sign or a gesture is a categorization task that is influenced by the linguistic experience of the interlocutors (Occhino and Wilcox 2017).
What does seem clear is that as the students progress through their ASL education, they are exposed to new vocabulary and learn more prescriptive rules of ASL in the formal classroom setting. Over time, the students may have become more sensitive to the formal rules, biasing a slight increase in the production of observer viewpoint constructions, which may feel “more like ASL” due to the overt rule-based instruction students receive for the classifier handshapes involved in producing observer viewpoint constructions. The use of character viewpoint constructions, on the other hand, requires mapping the body of the character onto the body of the signer, which is not taught as a rule-based one-to-one mapping between a form and a function. This is a more general schematization of the body that signers need to acquire as a skill for specific discourse genres, one not necessarily needed for everyday, face-to-face communication.
It should be mentioned that this preliminary analysis is a small sample from a much larger longitudinal study of ASL M2L2 acquisition and as such this study is limited in scope. First, due to the small sample size collected during this first round of data collection, our results are not easily generalizable. Upon analyzing more data and expanding beyond our first 11 participants, we hope to gain a better understanding of norms for ASL M2L2 acquisition outcomes for college students.
A second limitation is the high percentage of deaf and hard-of-hearing students at Rochester Institute of Technology. The university has approximately 20,000 hearing students and 1200 deaf and hard-of-hearing students, which means that many of the hearing students have regular contact with deaf and signing peers. Although we screened participants to be sure they did not have prior training in ASL and were not enrolled in other language courses prior to beginning our study, it is still possible that these students received minimal exposure to deaf and hard-of-hearing students who use ASL on campus in shared classroom, dorm, dining hall, and other social environments. It is possible that RIT M2L2 students do not represent the ‘average’ M2L2 signer, who does not have the same socio-cultural exposure to signers and Deaf culture outside of the classroom. Studies should be carried out at other institutions of higher education which have ASL programs to test whether there are unseen benefits outside of the classroom which affect the M2L2 population at RIT.
Another consideration is that while we controlled for ASL courses, we did not control for courses outside of the ASL curriculum or foreign language classes. What about those signers who take a course such as an acting class or a Visual Gestural Communication class, where they are strongly encouraged to use as many visual gestures as possible as part of developing communication strategies? What are the non-linguistic factors, such as aptitude, motivation, and learning styles, that exert influence on the degree of M2L2 attainment (as suggested by Chen Pichler and Koulidobrova 2015)?
With regard to the sign versus gesture question, the only foreseeable way to determine to what extent these students relied on the gestural repertoire of their L1 English would be to conduct a round of follow-up interviews in which we watch the story-telling videos with the students and ask them metalinguistic questions regarding their choice of constructions: whether they thought they were using ASL, or whether they did not know the appropriate construction at the time and substituted gestures instead. This would require an extension of our IRB protocol as well as more funding, but it is definitely something to consider if we are to better understand the role of what is traditionally considered “transfer” from L1 to L2, which here could be found in the extension of articulatory gestures from multimodal use of English to ASL.
Whether or not new M2L2 signers can capitalize on their gestural repertoire as a way to bootstrap learning a language in a visual modality remains to be seen. Recent research has shown that hearing signers “generate expectations about the form of iconic signs never seen before based on their implicit knowledge of gestures” (Ortega et al. 2019). It is still unclear whether ASL teachers could somehow leverage this gestural repertoire to teach constructed action or viewpoint constructions. To be sure, this would involve making the implicit explicit, and would of course vary by individual.
While we have made some preliminary observations, we uncovered many more questions than answers. Future studies should examine whether new signers require explicit instruction on the use of constructed action, or whether they begin to use constructed action during ASL 1 with only minimal exposure from seeing instructors use it in their own dialogues. It is still unclear at what level of instruction new signers become able to use a combination of observer viewpoint and character viewpoint, also known as blended viewpoint constructions. We failed to observe any regularized use by M2L2 signers of encoding simultaneous information about the observer and character through body partitioning. The lack of robust use of blended viewpoint suggests it arises later in the M2L2 acquisition trajectory, but exactly when and how remains to be seen.
Further longitudinal studies are needed to analyze the later stages of acquisition of constructed action and to measure the amount of improvement at each level of ASL. In future studies, we wish to correlate our findings with the ASL curriculum used at the university, to track how long after specific constructions are introduced they consistently, and correctly, manifest in the signed productions of M2L2 students. Further analysis of our data will also reveal whether students who take ASL as part of a foreign language requirement differ from students majoring in ASL interpreting with the intention of becoming sign language interpreters. It is possible that different curricula and different emphases during classroom contact hours may accelerate or inhibit the acquisition of these complex constructions. Furthermore, studies are needed to compare the acquisition trajectories of constructed action in M2L2 ASL users with those of M2L2 learners of other signed languages in other countries.

5. Conclusions

Over the last twenty years, enrollment in American Sign Language (ASL) classes offered for credit has “risen exponentially” (Rosen 2008, p. 19), both in secondary schools and in colleges and universities. In 2016, the Modern Language Association reported that although foreign language enrollment declined from 2013 to 2016, ASL rose to the third most enrolled foreign language class in the United States, after Spanish and French (displacing German) (Looney and Lusin 2018). In the past decade the average number of ASL 1 classes offered at Rochester Institute of Technology has risen to 10 sections per semester. This rise in the demand for ASL courses has in turn led to a rise in the demand for standardized instructional materials, setting a trend toward national standards for teaching ASL (Ashton et al. 2014). Studies such as the one reported here are just the beginning of a burgeoning new field of inquiry. As enrollment in ASL as a foreign language continues to rise, how adult language learners who are new to language in the visual modality acquire ASL and other signed languages will need to be explored further.
M2L2 research is in its infancy, and we are still discovering what a normal acquisition trajectory looks like in this population. Some Second Language Acquisition (SLA) researchers have expressed concerns that SLA results can be messy due to variables and contributing factors that cannot be controlled for in a lab. Nevertheless, it is our hope that this study will contribute to a better understanding of second language, second modality acquisition and point to possible directions for future work in this vein.
Our results suggest that learning constructed action is a complex linguistic skill that is not easily acquired by L2 students who are learning ASL as their second language in a second modality. We have shown that M2L2 students vary in their use and proficiency in production of character and observer viewpoint constructions over the course of two semesters of testing. Blended viewpoint constructions were not readily observed in this student population suggesting that this type of constructed action may take more time for signers to acquire. More research is needed to explore the long-term trajectory of the acquisition of constructed action, especially as it relates to the interplay between character viewpoint and blended viewpoint constructions, and how they are used in constructed actions and constructed dialogues in ASL in the domain of spatial event representations.

Author Contributions

Conceptualization, K.B.K. and C.O.; methodology, K.B.K.; formal analysis, K.B.K.; investigation, K.M.; resources, K.B.K.; writing—original draft preparation, K.B.K.; writing—review and editing, K.B.K., C.O., and K.M.; visualization, K.B.K., C.O., and K.M.; supervision, K.B.K., C.O.; project administration, K.B.K.; funding acquisition, K.B.K.

Funding

This research received no external funding.

Acknowledgments

The authors would like to thank the NTID Department of ASL Interpreter Education, and the NTID Center on Cognition and Language for their research support. A special thank you to Peter Hauser for his guidance in development and analysis of this project. Thank you to Gerry Buckley and the NTID Office of the President for their generous support through the Scholarship Portfolio Development Initiative. We would also like to thank our research assistants (in Alphabetical Order) Carmen Bowman, Melissa Stillman, and Ian White, who all contributed their ideas to the success of the M2L2 project, and the ASL students whose participation made this research possible.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Aarons, Debra, and Ruth Morgan. 2003. Classifier predicates and the creation of multiple perspectives in South African Sign Language. Sign Language Studies 3: 125–56. [Google Scholar] [CrossRef]
  2. Ashton, Glenna, Keith Cagle, Kim Brown Kurz, William Newell, Rico Peterson, and Jason E. Zinza. 2014. Standards for learning American Sign Language. In Standards for Foreign Language Learning in the 21st Century. Washington, DC: American Council on Teaching Foreign Languages. [Google Scholar]
  3. Chen Pichler, Deborah, and Elena Koulidobrova. 2015. Acquisition of Sign Language as a Second Language. In The Oxford Handbook of Deaf Studies in Language. Oxford: Oxford University Press. [Google Scholar]
  4. Clark, Herbert H. 2016. Depicting as a method of communication. Psychological Review 123: 324. [Google Scholar] [CrossRef] [PubMed]
  5. Cormier, Kearsy, Sandra Smith, and Martine Zwets. 2013. Framing constructed action in British Sign Language narratives. Journal of Pragmatics 55: 119–39. [Google Scholar] [CrossRef]
  6. Crasborn, Onno, and Han Sloetjes. 2008. Enhanced ELAN functionality for sign language corpora. Paper presented at 3rd Workshop on the Representation and Processing of Sign Languages: Construction and Exploitation of Sign Language Corpora, Marrakech, Morocco, June 1; Edited by Onno Crasborn, Eleni Efthimiou, Thomas Hanke, Ernst D. Thoutenhoofd and Inge Zwitserlood. Paris: ELRA, pp. 39–43. [Google Scholar]
  7. Cuxac, Christian. 2000. La Langue des Signes Française. Les Voies de l’Iconicité. Paris: Ophrys. [Google Scholar]
  8. Dudis, Paul. 2004. Body partitioning and real space blends. Cognitive Linguistics 15: 223–38. [Google Scholar] [CrossRef]
  9. Emmorey, Karen, and Brenda Falgier. 1999. Talking about space with space: Describing environments in ASL. In Storytelling and Conversation: Discourse in Deaf Communities. Edited by Elizabeth Winston. Washington, DC: Gallaudet University Press, pp. 3–26. [Google Scholar]
  10. Engberg-Pedersen, Elisabeth. 1993. Space in Danish Sign Language: The Semantics and Morphosyntax of the Use of Space in a Visual Language. Hamburg: Signum Press. [Google Scholar]
  11. Ferrara, Lindsay N., and Trevor Johnston. 2014. Elaborating Who’s What: A Study of Constructed Action and Clause Structure in Auslan (Australian Sign Language). Australian Journal of Linguistics 34: 193–215. [Google Scholar] [CrossRef]
  12. Ferrara, Lindsay N., and Anna-Lena Nilsson. 2017. Describing spatial layouts as an M2 signed language learner. Sign Language and Linguistics 20: 1–26. [Google Scholar] [CrossRef]
  13. Fillmore, Charles J. 1997. Lectures on Deixis. Stanford: CSLI Publications. [Google Scholar]
  14. Jantunen, Tommi. 2017. Constructed Action, the Clause and the Nature of Syntax in Finnish Sign Language. Open Linguistics 3: 65–85. [Google Scholar] [CrossRef]
  15. Janzen, Terry. 2004. Space rotation, perspective shift, and verb morphology in ASL. Cognitive Linguistics 15: 149–74. [Google Scholar] [CrossRef]
  16. Janzen, Terry. 2017. Composite utterances in a signed language: Topic constructions and perspective-taking in ASL. Cognitive Linguistics 28: 511–38. [Google Scholar] [CrossRef]
  17. Janzen, Terry. 2019. Shared Spaces, Shared Mind: Connecting Past and Present Viewpoints in American Sign Language Narratives. Cognitive Linguistics 30: 253–79. [Google Scholar] [CrossRef]
  18. Lackner, Andrea. 2017. Functions of Head and Body Movements in Austrian Sign Language. Berlin: De Gruyter Mouton, Preston: Ishara Press. [Google Scholar]
  19. Leeson, Lorraine, and John I. Saeed. 2012. Irish Sign Language. Edinburgh: Edinburgh University Press. [Google Scholar]
  20. Liddell, Scott K. 1990. Four Functions of a Locus: Re-examining the Structure of Space in ASL. In Sign Language Research: Theoretical Issues. Edited by Ceil Lucas. Washington, DC: Gallaudet University Press, pp. 199–218. [Google Scholar]
  21. Liddell, Scott K. 1995. Real, surrogate, and token space: Grammatical consequences in ASL. In Language, Gesture, and Space. Edited by Karen Emmorey and Judy Reilly. Hillsdale: Lawrence Erlbaum Associates, pp. 19–41. [Google Scholar]
  22. Liddell, Scott K. 2000. Blended spaces and deixis in sign language discourse. In Language and Gesture. Edited by David McNeill. Cambridge: Cambridge University Press, pp. 331–57. [Google Scholar]
  23. Liddell, Scott K. 2003. Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge University Press. [Google Scholar]
  24. Looney, Dennis, and Natalia Lusin. 2018. Enrollments in Languages Other than English in United States Institutions of Higher Education, Summer 2016 and Fall 2016: Preliminary Report. New York: Modern Language Association. [Google Scholar]
  25. McNeill, David. 1992. Hand and Mind: What Gestures Reveal about the Mind. Chicago: University of Chicago Press. [Google Scholar]
  26. Metzger, Melanie. 1995. Constructed dialogue and constructed action in American Sign Language. In Sociolinguistics in Deaf Communities. Edited by Ceil Lucas. Washington, DC: Gallaudet University Press, pp. 255–71. [Google Scholar]
  27. Morgan, Gary. 1999. Event packaging in British Sign Language discourse. In StoryTelling and Conversation: Discourse in Deaf Communities. Edited by Elizabeth Winston. Washington, DC: Gallaudet University Press, pp. 27–58. [Google Scholar]
  28. Morgan, Gary. 2002. Children’s encoding of simultaneity in BSL narratives. Sign Language and Linguistics 5: 131–65. [Google Scholar] [CrossRef]
  29. Nilsson, Anna-Lena. 2010. Space in Swedish Sign Language: Reference, Real-Space Blending, and Interpretation. Stockholm: Stockholm University Press. [Google Scholar]
  30. Occhino, Corrine, and Sherman Wilcox. 2017. Gesture or sign? A categorization problem. Behavioral and Brain Sciences 40: e66. [Google Scholar] [CrossRef] [PubMed]
  31. Ortega, Gerardo. 2013. Acquisition of a Signed Phonological System by Hearing Adults: The Role of Sign Structure and Iconicity. Ph.D. dissertation, University College London, London, UK. [Google Scholar]
  32. Ortega, Gerardo. 2017. Iconicity and Sign Lexical Acquisition: A Review. Frontiers in Psychology 8: 1280. [Google Scholar] [CrossRef] [PubMed]
  33. Ortega, Gerardo, Asli Özyürek, and David Peeters. 2019. Iconic gestures serve as manual cognates in hearing second language learners of a sign language: An ERP study. Journal of Experimental Psychology: Learning, Memory, and Cognition. [Google Scholar] [CrossRef] [PubMed]
  34. Padden, Carol. 1986. Verbs and role-shifting in ASL. Paper presented at Fourth National Symposium on Sign Language Research and Teaching, Las Vegas, NV, USA, January 27–February 1; Edited by Carol Padden. Silver Spring: National Association of the Deaf, pp. 44–57. [Google Scholar]
  35. Parrill, Fay. 2009. Dual viewpoint gestures. Gesture 9: 271–89. [Google Scholar] [CrossRef]
  36. Partee, Barbara H. 1973. The syntax and semantics of quotation. In A Festschrift for Morris Halle. Edited by Steven R. Anderson and Paul Kiparsky. New York: Holt Rinehart and Winston, pp. 410–18. [Google Scholar]
  37. Perniss, Pamela. 2007. Space and iconicity in German Sign Language (DGS). Ph.D. dissertation, Radboud University, Nijmegen, The Netherlands. [Google Scholar]
  38. Perniss, Pamela, and Asli Özyürek. 2008. Representations of action, motion, and location in sign space: A comparison of German (DGS) and Turkish (TİD) Sign Language narratives. In Signs of the Time: Selected Papers from TISLR. Hamburg: Signum Press, pp. 353–78. [Google Scholar]
  39. Poizner, Howard, Edward Klima, and Ursula Bellugi. 1990. What the Hands Reveal about the Brain. Cambridge: MIT Press. [Google Scholar]
  40. Pyers, Jennie, and Ann Senghas. 2007. Referential shift in Nicaraguan Sign Language: A comparison with American Sign Language. In Visible Variation: Comparative Studies on Sign Language Structure. Edited by Roland Pfau, Pamela Perniss and Markus Steinbach. Amsterdam: Mouton de Gruyter, pp. 279–302. [Google Scholar]
  41. Quinto-Pozos, David. 2007. Can constructed action be considered obligatory? The Linguistics of Sign Language Classifiers: Phonology, Morpho-Syntax, Semantics and Discourse 117: 1285–314. [Google Scholar] [CrossRef]
  42. Quinto-Pozos, David, and Fay Parrill. 2015. Signers and Co-speech Gesturers Adopt Similar Strategies for Portraying Viewpoint in Narratives. Topics in Cognitive Science 7: 12–35. [Google Scholar] [CrossRef] [PubMed]
  43. Quinto-Pozos, David, Kearsy Cormier, and Claire Ramsey. 2009. Constructed action of highly animate referents: Evidence from American, British and Mexican Sign Languages. Paper presented at the 35th Annual Meeting of the Berkeley Linguistics Society (Special Session on Non-Speech Modalities), Berkeley, CA, USA, February 14–16. [Google Scholar]
  44. Reynolds, Wanette. 2016. Early Bimodal Bilingual Development of ASL Narrative Referent Cohesion: Using a Heritage Language Framework. Ph.D. dissertation, Gallaudet University, Washington, DC, USA. Available online: https://slla.lab.uconn.edu/wp-content/uploads/sites/1793/2019/05/Reynolds_Dissertation_2016.pdf (accessed on 9 June 2019).
  45. Rosen, Russell. 2008. American Sign Language as a foreign language in US high schools: State of the art. The Modern Language Journal 92: 10–38. [Google Scholar] [CrossRef]
  46. Schick, Brenda S. 1987. The Acquisition of Classifier Predicates in American Sign Language. West Lafayette: Purdue University Press, Available online: https://books.google.com/books?id=MY9DuwAACAAJ (accessed on 9 June 2019).
  47. Schick, Brenda S. 1990. Classifier predicates in American Sign Language. International Journal of Sign Linguistics 1: 15–40. [Google Scholar]
  48. Slobin, Dan, Nini Hoiting, Marlon Kuntze, Reyna Lindert, Amy Weinberg, Jennie Pyers, Michelle Anthony, Yael Biederman, and Helen Thumann. 2003. A cognitive/functional perspective on the acquisition of “classifiers”. In Perspectives on Classifier Constructions in Sign Languages. Edited by Karen Emmorey. Mahwah: Lawrence Erlbaum Associates, pp. 271–98. [Google Scholar]
  49. Thorvaldsdottir, Gudny Bjork. 2008. Mental Space Theory and Icelandic Sign Language. The ITB Journal 9: 4–19. [Google Scholar] [CrossRef]
  50. Thumann, Mary. 2013. Identifying Recurring Depiction in ASL Presentations. Sign Language Studies 13: 316–49. [Google Scholar] [CrossRef]
  51. Wulf, Alyssa, and Paul Dudis. 2005. Body Partitioning in ASL Metaphorical Blends. Sign Language Studies 5: 317–32. [Google Scholar] [CrossRef]
1. Thumann defines depiction as “the representation of aspects of an entity, event, or an abstract concept by signers’ use of their articulators, their body, and the signing space around them”, which likely encompasses more constructions than are covered by our use of ‘constructed action’.
2. To complicate the picture, the term ‘constructed action’ has also been used in contrast to ‘depicting construction’ or ‘classifier constructions’ due to perceived differences in formal characteristics, despite similar functions.
3. While both ‘depiction’ and ‘constructed action’ have their positives and negatives, we adopt the term ‘constructed action’ because our task involves the retelling of narratives, which makes it akin to constructed dialogue and constructed action in co-speech gesture studies, while depiction is, in our minds, a much broader function of language more generally.
4. Shortly after we began our study, we realized that sitting was not conducive to students being able to depict the scene easily, as they were limited in their mobility. In later iterations we had students stand in front of the computer to sign their retellings. As such, some of our preliminary videos are with seated signers and some are with standing signers. We had each signer complete their time 2 testing in the same manner: if they sat for time 1, we had them sit for time 2.
Figure 1. Observer viewpoint: |BUILDING| vertical-structure. “The vertical buildings (positioned like-so)”.
Figure 2. Character viewpoint: first-person looking-through |BINOCULARS|. Translation: (Tweety bird) “Looking through the binoculars (at an object)”.
Figure 3. Observer, character, and blended viewpoint constructional overlap.
Figure 4. Blended viewpoint: dominant (right) hand “upright creature slinking”; non-dominant (left) hand and face “cat slinking while watching the bird (like-so)”. Translation: “The cat was slinking around keeping his eyes on his prey.”
Figure 5. (a) Participant 1, Time 1: character viewpoint, HOLDING-BINOCULARS; (b) Participant 1, Time 2: character viewpoint, LOOK-AT + (eye-gaze forward ‘looking’).
Figure 6. Participant 11, Time 2: SEARCH.
Figure 7. Unsuccessful depiction of character viewpoint with verb. Participant 7, Time 1: HOLDING-BINOCULARS construction without appropriate facial expressions, eye-gaze, or body/head movement; the participant did not move their head.
Figure 8. Successful depiction of character viewpoint with verb. Participant 7, Time 1: HOLDING-BINOCULARS construction with appropriate head-turn and eye-gaze.
Figure 9. Unsuccessful depiction of character viewpoint with verb. Participant 7, Time 2: SEARCH without appropriate head-turn and eye-gaze to show the character searching for something in the vicinity.
Figure 10. Successful depiction of character viewpoint with verb. Participant 1, Time 2: SEARCH with appropriate head-turn and eye-gaze to show the character searching for something in the vicinity.
Table 1. Different terminology used for “observer perspective” and “character perspective” in previous research on event space descriptions in different languages.

Reference | Language | Observer Perspective | Character Perspective
Liddell (1995, 2000) | ASL | Token Space | Surrogate Space
Liddell (2003) | ASL | Depicting Space | Surrogate Space
Morgan (1999, 2002) | ASL | Fixed referential framework/space | Shifted referential framework/space
Poizner et al. (1990) | ASL | Fixed referential framework | Shifted referential framework
Dudis (2004) | ASL | Global viewpoint | Participant viewpoint
Slobin et al. (2003) | ASL | Narrator perspective | Protagonist perspective
Schick (1990) | ASL | Model space | Real-world space
Emmorey and Falgier (1999) | ASL | Diagrammatic spatial format | Viewer spatial format
Pyers and Senghas (2007) | ASL and NicaSL | Diagrammatic space | Viewer space
Perniss and Özyürek (2008) | DGS and TİD | Observer perspective | Character perspective
Cormier et al. (2013) | BSL | Constructed Action (CA) Token | Constructed Action (CA) Character
Table 2. Characteristics of observer and character perspective in terms of event space projection in our coding (see Perniss and Özyürek 2008).

 | Observer Perspective | Character Perspective
Projection of Event Space to Sign Space | Event-external vantage point; in front of signer; reduced size | Event-internal vantage point; encompasses signer; life size
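The criteria in Table 2 operate jointly: a token counts as observer perspective only when the vantage point, placement, and scale cues all align, and as character perspective when their opposites align. As a purely hypothetical sketch of how a coder might operationalize these cues (the function and its string labels are our invention, not the study’s coding instrument), consider:

```python
# Hypothetical operationalization of the Table 2 cues. A clip is labeled
# observer or character perspective only when all three cues agree;
# anything else is flagged for manual review as potentially blended.

def classify_perspective(vantage: str, location: str, size: str) -> str:
    """Classify a viewpoint construction from three event-space cues."""
    observer = (vantage == "event-external"
                and location == "in front of signer"
                and size == "reduced")
    character = (vantage == "event-internal"
                 and location == "encompasses signer"
                 and size == "life-size")
    if observer:
        return "observer"
    if character:
        return "character"
    return "mixed/blended"  # cues disagree; review by hand

# Example: a life-size enactment viewed from inside the event.
print(classify_perspective("event-internal", "encompasses signer", "life-size"))
# -> character
```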
Table 3. Individual Participant Testing Information.

Participant 01. T1-T2 interval: 5.25 months
   T1: Fall semester Week 13, enrolled in Beginning ASL 1
   T2: Spring semester Week 14, not enrolled in ASL
Participant 02. T1-T2 interval: 8.5 months
   T1: Spring semester Week 3, enrolled in Beginning ASL 1
   T2: Fall semester Week 10, not enrolled in ASL
Participant 03. T1-T2 interval: 8.25 months
   T1: Spring semester Week 4, enrolled in Beginning ASL 1
   T2: Fall semester Week 10, not enrolled in ASL
Participant 04. T1-T2 interval: 7.5 months
   T1: Spring semester Week 4, enrolled in Beginning ASL 1
   T2: Fall semester Week 7, enrolled in Beginning ASL 2
Participant 05. T1-T2 interval: 7 months
   T1: Spring semester Week 4, enrolled in Beginning ASL 1
   T2: Fall semester Week 4, enrolled in Beginning ASL 2
Participant 06. T1-T2 interval: 7.5 months
   T1: Spring semester Week 5, enrolled in Beginning ASL 1
   T2: Fall semester Week 4, enrolled in Beginning ASL 2
   Declared intent to minor in ASL
Participant 07. T1-T2 interval: 7 months
   T1: Spring semester Week 7, enrolled in Beginning ASL 1
   T2: Fall semester Week 7, enrolled in Beginning ASL 2
Participant 08. T1-T2 interval: 6.75 months
   T1: Spring semester Week 7, enrolled in Beginning ASL 1
   T2: Fall semester Week 7, not enrolled in ASL
Participant 09. T1-T2 interval: 6.5 months
   T1: Spring semester Week 8, enrolled in Beginning ASL 1
   T2: Fall semester Week 6, not enrolled in ASL
Participant 10. T1-T2 interval: 7 months
   T1: Spring semester Week 8, enrolled in Beginning ASL 1
   T2: Fall semester Week 8, not enrolled in ASL
Participant 11. T1-T2 interval: 8.25 months
   T1: Spring semester Week 4, enrolled in Beginning ASL 1
   T2: Fall semester Week 9, enrolled in Beginning ASL 2
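The scheduling details in Table 3 can be summarized with simple arithmetic. Below is a minimal Python sketch (the variable names are ours, for illustration only) that computes the mean and range of the eleven T1-T2 intervals listed above:

```python
# T1-T2 intervals in months for Participants 01-11, as listed in Table 3.
intervals = [5.25, 8.5, 8.25, 7.5, 7.0, 7.5, 7.0, 6.75, 6.5, 7.0, 8.25]

mean_interval = sum(intervals) / len(intervals)
print(f"Mean T1-T2 interval: {mean_interval:.2f} months")  # ~7.23 months
print(f"Shortest: {min(intervals)} months; longest: {max(intervals)} months")
```

Run as written, this reports a mean interval of roughly 7.23 months, with individual intervals spanning 5.25 to 8.5 months.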
Table 4. Canary Row elicitation verb counts. (– indicates no tokens produced.)

Subject # | Time 1 BINOCULARS | Time 1 SEARCH | Time 1 LOOK | Time 2 BINOCULARS | Time 2 SEARCH | Time 2 LOOK
1 | 2 | – | 1 | – | 1 | 3
2 | 2 | – | – | 1 | – | 1
3 | 2 | – | 2 | – | – | 2
4 | 2 | – | 1 | 2 | – | –
5 | – | – | 1 | – | – | 1
6 | – | – | 1 | – | 2 | –
7 | 2 | – | – | – | 1 | –
8 | – | – | – | – | – | –
9 | 4 | – | – | 3 | – | –
10 | 2 | – | – | 1 | – | –
11 | – | – | 2 | – | 1 | 2
Table 5. Viewpoint construction success frequency during production of HOLDING-BINOCULARS.

Subject # | Time 1 Unsuccessful | Time 1 Successful | Time 2 Unsuccessful | Time 2 Successful
1 | – | 2 | – | –
2 | – | 2 | – | 1
3 | 2 | – | – | –
4 | – | 2 | – | 2
7 | 1 | 1 | – | –
9 | – | 4 | – | 3
10 | – | 2 | – | 1
Table 6. Viewpoint construction success frequency during production of LOOK.

Subject # | Time 1 Unsuccessful | Time 1 Successful | Time 2 Unsuccessful | Time 2 Successful
1 | – | 1 | – | 3
2 | – | – | 1 | –
3 | 2 | – | – | 2
4 | 1 | – | – | –
5 | – | 1 | – | 1
6 | – | 1 | – | –
11 | – | 2 | – | 2
Table 7. Viewpoint construction success frequency during production of SEARCH.

Subject # | Time 1 Unsuccessful | Time 1 Successful | Time 2 Unsuccessful | Time 2 Successful
1 | – | – | – | 1
6 | – | – | 2 | –
7 | – | – | 1 | –
11 | – | – | – | 1
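Because Tables 5–7 share one layout, their per-participant cells can be pooled into overall success rates. The following Python sketch re-enters the pooled counts from Tables 5–7 (the dictionary and loop are our illustration, not analysis code from the study) and prints the proportion of successful viewpoint constructions per verb at each test time:

```python
# (unsuccessful, successful) counts summed across participants,
# re-entered from Tables 5-7.
counts = {
    "HOLDING-BINOCULARS": {"Time 1": (3, 13), "Time 2": (0, 7)},
    "LOOK":               {"Time 1": (3, 5),  "Time 2": (1, 8)},
    "SEARCH":             {"Time 1": (0, 0),  "Time 2": (3, 2)},
}

for verb, times in counts.items():
    for time, (unsucc, succ) in times.items():
        total = unsucc + succ
        if total == 0:
            print(f"{verb:<19} {time}: no tokens")
        else:
            print(f"{verb:<19} {time}: {succ}/{total} successful "
                  f"({succ / total:.0%})")
```

Pooled this way, HOLDING-BINOCULARS was produced successfully in 13 of 16 tokens at Time 1 and 7 of 7 at Time 2, LOOK in 5 of 8 and then 8 of 9, while SEARCH appeared only at Time 2 (2 of 5 successful).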
