Search Results (565)

Search Parameters:
Keywords = signed language

15 pages, 595 KB  
Article
The Impact of Sustainable Aesthetics: A Qualitative Analysis of the Influence of Visual Design and Materiality of Green Products on Consumer Purchase Intention
by Ana-Maria Nicolau and Petruţa Petcu
Sustainability 2025, 17(20), 9082; https://doi.org/10.3390/su17209082 (registering DOI) - 14 Oct 2025
Viewed by 16
Abstract
The transition to a circular economy depends on the widespread adoption of sustainable products by consumers. However, the point-of-sale purchase decision is a complex process, influenced not only by ethical arguments but also by sensory cues. This study investigates how the aesthetics (visual design) and materiality (tactile sensation) of green products shape value perception and purchase intention. Using a qualitative methodology based on a focus group, the research directly compares consumer reactions to green products (e.g., a bamboo toothbrush) versus their conventional alternatives (e.g., plastic). Thematic analysis of the data reveals a fundamental dichotomy among consumers: while one segment associates high-tech aesthetics and perfect finishes with quality and hygiene, another segment values natural materials and their “imperfections” as signs of authenticity and responsibility. The results demonstrate that there is no single, universally accepted “sustainable aesthetic” and highlight the need for designers and marketers to align the visual and tactile language of products with the value system of the target consumer segment. The study provides a framework for understanding how design can act as either a barrier to or a catalyst for the adoption of sustainable products.

17 pages, 1005 KB  
Article
Leveraging Clinical Record Geolocation for Improved Alzheimer’s Disease Diagnosis Using DMV Framework
by Peng Zhang and Divya Chaudhary
Biomedicines 2025, 13(10), 2496; https://doi.org/10.3390/biomedicines13102496 - 14 Oct 2025
Viewed by 131
Abstract
Background: Early detection of Alzheimer’s disease (AD) is critical for timely intervention, but clinical assessments and neuroimaging are often costly and resource intensive. Natural language processing (NLP) of clinical records offers a scalable alternative, and integrating geolocation may capture complementary environmental risk signals. Methods: We propose the DMV (Data processing, Model training, Validation) framework that frames early AD detection as a regression task predicting a continuous risk score (“data_value”) from clinical text and structured features. We evaluated embeddings from Llama3-70B, GPT-4o (via text-embedding-ada-002), and GPT-5 (text-embedding-3-large) combined with a Random Forest regressor on a CDC-derived dataset (≈284 k records). Models were trained and assessed using 10-fold cross-validation. Performance metrics included Mean Squared Error (MSE), Mean Absolute Error (MAE), and R²; paired t-tests and Wilcoxon signed-rank tests assessed statistical significance. Results: Including geolocation (latitude and longitude) consistently improved performance across models. For the Random Forest baseline, MSE decreased by 48.6% when geolocation was added. Embedding-based models showed larger gains; GPT-5 with geolocation achieved the best results (MSE = 14.0339, MAE = 2.3715, R² = 0.9783), and the reduction in error from adding geolocation was statistically significant (p < 0.001, paired tests). Conclusions: Combining high-quality text embeddings with patient geolocation yields substantial and statistically significant improvements in AD risk estimation. Incorporating spatial context alongside clinical text may help clinicians account for environmental and regional risk factors and improve early detection in scalable, data-driven workflows.
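
A minimal sketch of the ablation the abstract reports: Random Forest regression on text-embedding features with and without geolocation, scored by 10-fold cross-validated MSE. The file name and column layout (text_emb_*, latitude, longitude, data_value) are assumptions for illustration, not the paper's actual pipeline.

```python
# Hypothetical column layout; the CDC-derived dataset is not public here.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

df = pd.read_csv("clinical_records.csv")              # assumed file
emb_cols = [c for c in df.columns if c.startswith("text_emb_")]
geo_cols = ["latitude", "longitude"]
y = df["data_value"]                                  # continuous risk score

def cv_mse(cols):
    model = RandomForestRegressor(n_estimators=300, random_state=0)
    cv = KFold(n_splits=10, shuffle=True, random_state=0)
    scores = cross_val_score(model, df[cols], y, cv=cv,
                             scoring="neg_mean_squared_error")
    return -scores.mean()

mse_text = cv_mse(emb_cols)
mse_geo = cv_mse(emb_cols + geo_cols)
print(f"text only:  MSE {mse_text:.4f}")
print(f"text + geo: MSE {mse_geo:.4f} "
      f"({100 * (mse_text - mse_geo) / mse_text:.1f}% lower)")
```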

14 pages, 1917 KB  
Article
Moroccan Sign Language Recognition with a Sensory Glove Using Artificial Neural Networks
by Hasnae El Khoukhi, Assia Belatik, Imane El Manaa, My Abdelouahed Sabri, Yassine Abouch and Abdellah Aarab
Digital 2025, 5(4), 53; https://doi.org/10.3390/digital5040053 - 8 Oct 2025
Viewed by 297
Abstract
Every day, countless individuals with hearing or speech disabilities struggle to communicate effectively, as their conditions limit conventional verbal interaction. For them, sign language becomes an essential and often sole tool for expressing thoughts and engaging with others. However, the general public’s limited understanding of sign language poses a major barrier, often resulting in social, educational, and professional exclusion. To bridge this communication gap, the present study proposes a smart wearable glove system designed to translate Arabic sign language (ArSL), especially Moroccan sign language (MSL), into a written alphabet in real time. The glove integrates five MPU6050 motion sensors, one on each finger, capable of capturing detailed motion data, including angular velocity and linear acceleration. These motion signals are processed using an Artificial Neural Network (ANN), implemented directly on a Raspberry Pi Pico through embedded machine learning techniques. A custom dataset comprising labeled gestures corresponding to the MSL alphabet was developed for training the model. Following the training phase, the neural network attained a gesture recognition accuracy of 98%, reflecting strong performance in terms of reliability and classification precision. We developed an affordable and portable glove system aimed at improving daily communication for individuals with hearing impairments in Morocco, contributing to greater inclusivity and improved accessibility.
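
A rough sketch of the embedded-ML route such a glove could take: a small dense network trained on per-gesture IMU feature vectors, then converted to TFLite for a microcontroller target. Feature dimensions, class count, and file names are assumptions; the paper's exact feature encoding is not reproduced here.

```python
# Assumed shapes and files; five MPU6050s give 6 values each per reading.
import numpy as np
import tensorflow as tf

N_FEATURES = 30   # 5 sensors x (3-axis gyro + 3-axis accel), an assumption
N_CLASSES = 28    # assumed MSL alphabet size

X = np.load("glove_features.npy")   # hypothetical dataset files
y = np.load("glove_labels.npy")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(N_FEATURES,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=50, validation_split=0.2)

# Convert for on-device inference (e.g., TFLite Micro on a microcontroller).
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
open("msl_glove.tflite", "wb").write(tflite_model)
```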

43 pages, 3034 KB  
Article
Real-Time Recognition of NZ Sign Language Alphabets by Optimal Use of Machine Learning
by Seyed Ebrahim Hosseini, Mubashir Ali, Shahbaz Pervez and Muneer Ahmad
Bioengineering 2025, 12(10), 1068; https://doi.org/10.3390/bioengineering12101068 - 30 Sep 2025
Viewed by 331
Abstract
The acquisition of a person’s first language is one of their greatest accomplishments. Nevertheless, being fluent in sign language presents challenges for many deaf students who rely on it for communication. Effective communication is essential for both personal and professional interactions and is critical for community engagement. However, the lack of a mutually understood language can be a significant barrier. Estimates indicate that a large portion of New Zealand’s disability population is deaf, with an educational approach predominantly focused on oralism, emphasizing spoken language. This makes it essential to bridge the communication gap between the general public and individuals with speech difficulties. The aim of this project is to develop an application that systematically cycles through each letter and number in New Zealand Sign Language (NZSL), assessing the user’s proficiency. This research investigates various machine learning methods for hand gesture recognition, with a focus on landmark detection. In computer vision, identifying specific points on an object—such as distinct hand landmarks—is a standard approach for feature extraction. Evaluation of this system has been performed using machine learning techniques, including Random Forest (RF) Classifier, k-Nearest Neighbours (KNN), AdaBoost (AB), Naïve Bayes (NB), Support Vector Machine (SVM), Decision Trees (DT), and Logistic Regression (LR). The dataset used for model training and testing consists of approximately 100,000 hand gesture expressions, formatted as a CSV file.
(This article belongs to the Special Issue AI and Data Science in Bioengineering: Innovations and Applications)
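
A hedged sketch of the classifier comparison described above, assuming the landmark features have been exported to a CSV with one label column; the actual dataset layout is not public.

```python
# Hypothetical CSV of hand-landmark coordinates plus a "label" column.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("nzsl_landmarks.csv")      # assumed file
X, y = df.drop(columns="label"), df["label"]

models = {
    "RF": RandomForestClassifier(),
    "KNN": KNeighborsClassifier(),
    "AB": AdaBoostClassifier(),
    "NB": GaussianNB(),
    "SVM": SVC(),
    "DT": DecisionTreeClassifier(),
    "LR": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```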

18 pages, 1181 KB  
Article
Inclusion in Higher Education: An Analysis of Teaching Materials for Deaf Students
by Maria Aparecida Lima, Ana Garcia-Valcárcel and Manuel Meirinhos
Educ. Sci. 2025, 15(10), 1290; https://doi.org/10.3390/educsci15101290 - 30 Sep 2025
Viewed by 548
Abstract
This study investigates the challenges of promoting accessibility for deaf teachers and students in higher education, focusing on the development of inclusive teaching materials. A qualitative case study was conducted in ten teacher training programmes at the Federal University of Alagoas (Brazil), including nine distance learning courses and one face-to-face LIBRAS programme. Analysis of the Virtual Learning Environment revealed a predominance of text-based content, with limited use of Libras videos, visual resources, or assistive technologies. The integration of Brazilian Sign Language into teaching practices was minimal, and digital translation tools were rarely used or contextually appropriate. Educators reported limited training, technical support, and institutional guidance for the creation of accessible materials. Time constraints and resource scarcity further hampered inclusive practices. The results highlight the urgent need for institutional policies, continuous teacher training, multidisciplinary support teams, and the strategic use of digital technologies and Artificial Intelligence (AI). Compared with previous studies, significant progress has been made. The present study highlights the establishment of an Accessibility Centre (NAC) and an Accessibility Laboratory (LAB) at the university. These facilities are designed to support the development of policies for the inclusion of people with disabilities, including deaf students, and to assist teachers in designing educational resources, which is essential for enhancing accessibility and learning outcomes. Artificial intelligence tools—such as sign language translators including Hand Talk, VLibras, SignSpeak, Glove-Based Systems, the LIBRAS Online Dictionary, and the Spreadthesign Dictionary—can serve as valuable resources in the teaching and learning process.

6 pages, 649 KB  
Proceeding Paper
Meteorology in Aratus’ Phaenomena
by Dorotheos Evaggelos Aggelis
Environ. Earth Sci. Proc. 2025, 35(1), 46; https://doi.org/10.3390/eesp2025035046 - 25 Sep 2025
Viewed by 219
Abstract
Aratus’ poem Phaenomena, and particularly its second part commonly known as Diosemeia (Signs from Zeus), offers a compelling blend of poetic narrative and proto-scientific observation. Composed in the 3rd century B.C., the work reflects the Hellenistic interest in systematizing knowledge of the natural world through both literary and empirical means. Within its verses, meteorological phenomena such as clouds, rain, hail, winds, and atmospheric changes are not merely described but interpreted through a cosmological lens that reflects the worldview of the era. Aratus Solensis employs a poetic language that transforms everyday weather into a meaningful sequence of signs tied to divine order and celestial cycles, thereby providing a kind of classified weather prognostics.

17 pages, 2255 KB  
Article
Electromyography-Based Sign Language Recognition: A Low-Channel Approach for Classifying Fruit Name Gestures
by Kudratjon Zohirov, Mirjakhon Temirov, Sardor Boykobilov, Golib Berdiev, Feruz Ruziboev, Khojiakbar Egamberdiev, Mamadiyor Sattorov, Gulmira Pardayeva and Kuvonch Madatov
Signals 2025, 6(4), 50; https://doi.org/10.3390/signals6040050 - 25 Sep 2025
Viewed by 868
Abstract
This paper presents a method for recognizing sign language gestures corresponding to fruit names using electromyography (EMG) signals. The proposed system focuses on classification using a limited number of EMG channels, aiming to reduce classification process complexity while maintaining high recognition accuracy. The dataset (DS) contains EMG signal data of 46 hearing-impaired people and descriptions of fruit names, including apple, pear, apricot, nut, cherry, and raspberry, in sign language (SL). Based on the presented DS, gesture movements were classified using five classification algorithms—Random Forest, k-Nearest Neighbors, Logistic Regression, Support Vector Machine, and neural networks—to determine which algorithm performs best. The best classification result, 97% accuracy, was achieved for the word cherry using the Random Forest algorithm.
(This article belongs to the Special Issue Advances in Signal Detecting and Processing)
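
As an illustration of low-channel EMG classification, the sketch below extracts simple time-domain features per window and scores a Random Forest; window shapes, file names, and the feature set are invented for the example, not taken from the paper.

```python
# Assumed window layout: (n_windows, samples, channels).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(window):
    """Common time-domain EMG features, computed per channel."""
    mav = np.mean(np.abs(window), axis=0)            # mean absolute value
    rms = np.sqrt(np.mean(window ** 2, axis=0))      # root mean square
    zc = np.sum(np.abs(np.diff(np.signbit(window).astype(int), axis=0)),
                axis=0)                              # zero crossings
    return np.concatenate([mav, rms, zc])

signals = np.load("emg_windows.npy")   # hypothetical files
labels = np.load("emg_labels.npy")     # fruit-name gesture classes

X = np.array([window_features(w) for w in signals])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```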

22 pages, 1588 KB  
Article
Generative Sign-Description Prompts with Multi-Positive Contrastive Learning for Sign Language Recognition
by Siyu Liang, Yunan Li, Wentian Xin, Huizhou Chen, Xujie Liu, Kang Liu and Qiguang Miao
Sensors 2025, 25(19), 5957; https://doi.org/10.3390/s25195957 - 24 Sep 2025
Viewed by 523
Abstract
While sign language combines sequential hand motions with concurrent non-manual cues (e.g., mouth shapes and head tilts), current recognition systems lack multimodal annotation methods capable of capturing their hierarchical semantics. To bridge this gap, we propose GSP-MC, the first method integrating generative large language models into sign language recognition. It leverages retrieval-augmented generation with domain-specific large language models and expert-validated corpora to produce precise multipart descriptions. A dual-encoder architecture bidirectionally aligns hierarchical skeleton features with multi-level text descriptions (global, synonym, part) through probabilistic matching. The approach combines global and part-level losses with KL divergence optimization, ensuring robust alignment across relevant text-skeleton pairs while capturing sign semantics and detailed dynamics. Experiments demonstrate state-of-the-art performance, achieving 97.1% accuracy on the Chinese SLR500 (surpassing SSRL’s 96.9%) and 97.07% on the Turkish AUTSL (exceeding SML’s 96.85%), confirming cross-lingual potential for inclusive communication technologies.
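
A minimal sketch of a multi-positive contrastive objective of the kind described, aligning each skeleton embedding with several positive text descriptions via KL divergence; tensor shapes and the temperature value are illustrative assumptions, not the paper's configuration.

```python
# Toy multi-positive contrastive loss; not the authors' implementation.
import torch
import torch.nn.functional as F

def multi_positive_contrastive(skel, text, pos_mask, tau=0.07):
    """
    skel:     (B, D) skeleton embeddings
    text:     (T, D) candidate text embeddings (global/synonym/part)
    pos_mask: (B, T) 1.0 where text t is a positive for sample b
    """
    skel = F.normalize(skel, dim=-1)
    text = F.normalize(text, dim=-1)
    log_p = F.log_softmax(skel @ text.t() / tau, dim=-1)   # (B, T)
    # Target: mass spread uniformly over each row's positives.
    target = pos_mask / pos_mask.sum(dim=-1, keepdim=True)
    return F.kl_div(log_p, target, reduction="batchmean")

# Usage with random tensors; each sample gets at least one positive.
B, T, D = 4, 10, 128
pos = torch.zeros(B, T)
pos[torch.arange(B), torch.randint(0, T, (B,))] = 1.0
loss = multi_positive_contrastive(torch.randn(B, D), torch.randn(T, D), pos)
print(float(loss))
```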

25 pages, 11348 KB  
Article
Discourse Markers in French Belgian Sign Language (LSFB) Dialogues and Their Translation into French: A Corpus-Based Study
by Sílvia Gabarró-López
Languages 2025, 10(9), 243; https://doi.org/10.3390/languages10090243 - 22 Sep 2025
Viewed by 456
Abstract
Discourse markers have been extensively studied in spoken languages from different perspectives, covering monolingual, contrastive, and translation studies. However, research on these items remains limited for signed languages, with only a handful of scattered publications. Following a corpus-based approach, this paper aims to investigate discourse markers (DMs) in French Belgian Sign Language (LSFB), including their types, functions, and translation(s) into written French. An 18 min sample of three dialogues and six signers was analyzed using a two-level independent taxonomy (domain and function) previously applied to spoken and signed data. Overall, 251 discourse markers were identified in the LSFB sample. They can be manual, nonmanual, or a combination of both, the latter type being the most frequent. In contrast to the previous literature, discourse markers cannot be spatial in LSFB. Regarding their functional spectrum, most discourse markers belong to the sequential domain (i.e., they are mostly used to structure discourse) and express ‘addition’ (i.e., providing more information) or ‘monitoring’ (i.e., keeping control over one’s turn or over the interaction). When examining the translation of DMs, most are either omitted or substituted by other non-discourse-marking items in the target texts. Although these results are generally similar to previous studies on DMs in spoken languages, more research on these items in other signed languages is needed to obtain a precise overview of their role in human communication.
(This article belongs to the Special Issue Current Trends in Discourse Marker Research)
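
For readers reproducing this kind of corpus work, here is a hypothetical sketch of tallying discourse-marker annotations from ELAN (.eaf) files with pympi-ling; the tier name and file path are invented, and the authors' actual tooling is not stated in the abstract.

```python
# Invented tier/file names; pympi-ling is one common ELAN reader.
from collections import Counter
from pympi.Elan import Eaf

eaf = Eaf("lsfb_dialogue_01.eaf")                  # hypothetical file
counts = Counter()
for ann in eaf.get_annotation_data_for_tier("DM-domain"):
    counts[ann[2]] += 1     # annotation value, e.g., 'sequential'
print(counts.most_common())
```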

23 pages, 2168 KB  
Article
Interactive Functions of Palm-Up: Cross-Linguistic and Cross-Modal Insights from ASL, American English, LSFB and Belgian French
by Alysson Lepeut and Emily Shaw
Languages 2025, 10(9), 239; https://doi.org/10.3390/languages10090239 - 19 Sep 2025
Cited by 1 | Viewed by 333
Abstract
This study dives into the interactive functions of the palm-up across four language ecologies, drawing on comparable corpus data from American Sign Language (ASL)-American English and French Belgian Sign Language (LSFB)-Belgian French. While researchers have examined palm-up in many different spoken and signed language contexts, they have primarily focused on the canonical form and its epistemic variants. Work that directly compares palm-up across modalities and language ecologies remains scarce. This study addresses such gaps by documenting all instances of the palm approaching supination in four language ecologies to analyze its interactive functions cross-linguistically and cross-modally. Capitalizing on an existing typology of interactive gestures, palm-up annotations were conducted using ELAN on a total sample of 48 participants interacting face-to-face in dyads. Findings highlight the multifunctional nature of palm-up in terms of conversational dynamics, with cross-modal differences in the specific interactive use of palm-up between spoken and signed language contexts. These findings underscore the versatility of the palm-up and reinforce its role in conversational dynamics as not merely supplementary but integral to human interaction.
(This article belongs to the Special Issue Non-representational Gestures: Types, Use, and Functions)

18 pages, 6253 KB  
Article
Exploring Sign Language Dataset Augmentation with Generative Artificial Intelligence Videos: A Case Study Using Adobe Firefly-Generated American Sign Language Data
by Valentin Bercaru and Nirvana Popescu
Information 2025, 16(9), 799; https://doi.org/10.3390/info16090799 - 15 Sep 2025
Viewed by 538
Abstract
Currently, high-quality datasets focused on Sign Language Recognition are either private, proprietary, or difficult to obtain due to costs. Therefore, we aim to mitigate this problem by augmenting a publicly available dataset with artificially generated data in order to enrich it and obtain a more diverse dataset. The performance of Sign Language Recognition (SLR) systems is highly dependent on the quality and diversity of training datasets. However, acquiring large-scale and well-annotated sign language video data remains a significant challenge. This experiment explores the use of Generative Artificial Intelligence (GenAI), specifically Adobe Firefly, to create synthetic video data for American Sign Language (ASL) fingerspelling. Thirteen letters out of 26 were selected for generation, and short videos representing each sign were synthesized and processed into static frames. These synthetic frames replaced approximately 7.5% of the original dataset and were integrated into the training data of a publicly available Convolutional Neural Network (CNN) model. After retraining the model with the augmented dataset, the accuracy did not drop. Moreover, the validation accuracy was approximately the same. The resulting model achieved a maximum accuracy of 98.04%. While the performance gain was limited (less than 1%), the approach illustrates the feasibility of using GenAI tools to generate training data and supports further research into data augmentation for low-resource SLR tasks.
(This article belongs to the Section Artificial Intelligence)
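
A sketch of the substitution step the abstract describes, swapping roughly 7.5% of real training frames for synthetic ones before retraining; the directory layout and sampling policy are assumptions for illustration.

```python
# Hypothetical directory layout: one subfolder per fingerspelled letter.
import random
from pathlib import Path

REPLACE_FRACTION = 0.075
real = sorted(Path("data/asl_real").glob("*/*.png"))
synthetic = sorted(Path("data/asl_firefly").glob("*/*.png"))

random.seed(0)
n_swap = int(len(real) * REPLACE_FRACTION)
dropped = set(random.sample(range(len(real)), n_swap))
train_set = [p for i, p in enumerate(real) if i not in dropped]
train_set += random.sample(synthetic, min(n_swap, len(synthetic)))
print(f"{len(train_set)} training frames, {n_swap} synthetic substitutions")
```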

10 pages, 2364 KB  
Proceeding Paper
AI-Powered Sign Language Detection Using YOLO-v11 for Communication Equality
by Ivana Lucia Kharisma, Irma Nurmalasari, Yuni Lestari, Salma Dela Septiani, Kamdan and Muchtar Ali Setyo Yudono
Eng. Proc. 2025, 107(1), 83; https://doi.org/10.3390/engproc2025107083 - 8 Sep 2025
Viewed by 632
Abstract
Communication plays a vital role in conveying messages, expressing emotions, and sharing perceptions, becoming a fundamental aspect of human interaction with the environment. For individuals with hearing impairments, sign language serves as an essential communication tool, enabling interaction both within the deaf community and with non-deaf individuals. This study aims to bridge this communication gap by developing a sign language recognition system using the Deep Learning-based YOLO-v11 algorithm. YOLO-v11, a state-of-the-art object detection algorithm, is known for its speed, accuracy, and efficiency. The system uses image recognition to identify hand gestures in ASL and translates them into text or speech, facilitating inclusive communication. The accuracy of the training model is 94.67%, and the accuracy of the testing model is 93.02%, indicating that the model has excellent performance in recognizing sign language from the training and testing datasets. Additionally, the model is very reliable in recognizing the classes “Hello”, “I Love You”, “No”, and “Thank You” with a sensitivity close to or equal to 100%. This research contributes to advancing communication equality for individuals with hearing impairments, promoting inclusivity, and supporting their integration into society.
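
A hedged sketch using the Ultralytics Python API, which ships YOLO11 weights; the dataset YAML and training settings below are assumptions, while the gesture classes follow the paper.

```python
# "asl_gestures.yaml" is a hypothetical dataset config listing the four
# gesture classes; training hyperparameters are illustrative.
from ultralytics import YOLO

model = YOLO("yolo11n.pt")                       # pretrained YOLO11 nano
model.train(data="asl_gestures.yaml", epochs=100, imgsz=640)

results = model("webcam_frame.jpg")              # single-image inference
for box in results[0].boxes:
    cls_name = results[0].names[int(box.cls)]    # e.g., "Hello"
    print(cls_name, float(box.conf))
```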

19 pages, 2809 KB  
Article
SSTA-ResT: Soft Spatiotemporal Attention ResNet Transformer for Argentine Sign Language Recognition
by Xianru Liu, Zeru Zhou, E Xia and Xin Yin
Sensors 2025, 25(17), 5543; https://doi.org/10.3390/s25175543 - 5 Sep 2025
Viewed by 1126
Abstract
Sign language recognition technology serves as a crucial bridge, fostering meaningful connections between deaf individuals and hearing individuals. This technological innovation plays a substantial role in promoting social inclusivity. Conventional sign language recognition methodologies that rely on static images are inadequate for capturing the dynamic characteristics and temporal information inherent in sign language. This limitation restricts their practical applicability in real-world scenarios. The proposed framework, called SSTA-ResT, integrates ResNet, soft spatiotemporal attention, and Transformer encoders to address these limitations. The framework utilizes ResNet to extract robust spatial feature representations, employs the lightweight SSTA module for dual-path complementary representation enhancement to strengthen spatiotemporal associations, and leverages the Transformer encoder to capture long-range temporal dependencies. Experimental results on the LSA64 Argentine Sign Language dataset demonstrate that the proposed method achieves an accuracy of 96.25%, a precision of 97.18%, and an F1 score of 0.9671. These results surpass the performance of existing methods across all metrics while maintaining a relatively low model parameter count of 11.66 M. This demonstrates the framework’s effectiveness and practicality for sign language video recognition tasks.
(This article belongs to the Section Intelligent Sensors)
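
A structural sketch of the described pipeline in PyTorch: per-frame ResNet features, a simplified soft temporal attention, and a Transformer encoder over time. Layer sizes are illustrative and the paper's SSTA module (dual-path, spatial and temporal) is not reproduced; the 64-class head matches LSA64.

```python
# Illustrative architecture, not the authors' implementation.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class SignVideoClassifier(nn.Module):
    def __init__(self, n_classes=64, d_model=512):
        super().__init__()
        backbone = resnet18(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # drop FC
        self.attn = nn.Linear(d_model, 1)        # simplified soft attention
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=8,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, video):                    # video: (B, T, 3, H, W)
        b, t = video.shape[:2]
        feats = self.cnn(video.flatten(0, 1)).flatten(1)   # (B*T, 512)
        feats = feats.view(b, t, -1)                       # (B, T, 512)
        weights = torch.softmax(self.attn(feats), dim=1)   # (B, T, 1)
        feats = self.encoder(feats * weights)              # temporal modeling
        return self.head(feats.mean(dim=1))                # clip-level logits

logits = SignVideoClassifier()(torch.randn(2, 16, 3, 112, 112))
print(logits.shape)  # torch.Size([2, 64])
```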

13 pages, 1560 KB  
Article
Towards a Lightweight Arabic Sign Language Translation System
by Mohammed Algabri, Mohamed A. Mekhtiche, Mohamed A. Bencherif and Fahman Saeed
Sensors 2025, 25(17), 5504; https://doi.org/10.3390/s25175504 - 4 Sep 2025
Viewed by 1210
Abstract
There is a pressing need to build a sign-to-text translation system to simplify communication between deaf and non-deaf people. This study investigates the building of a high-performance, lightweight sign language translation system suitable for real-time applications. Two Saudi Sign Language datasets are used for evaluation. We also investigate the effects of the number of signers and number of repetitions in sign language datasets. To this end, eight experiments are conducted in both signer-dependent and signer-independent modes. A comprehensive ablation study is presented to study the impacts of model components, network depth, and the size of the hidden dimension. The best accuracies achieved are 97.7% and 90.7% for the signer-dependent and signer-independent modes, respectively, using the KSU-SSL dataset. Similarly, the model achieves 98.38% and 96.22% for signer-dependent and signer-independent modes using the ArSL dataset.
(This article belongs to the Special Issue Sensor Systems for Gesture Recognition (3rd Edition))
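
To clarify the signer-dependent versus signer-independent distinction the abstract relies on, here is a small sketch using scikit-learn splitters; the feature, label, and signer arrays are random placeholders.

```python
# Placeholder data; a real run would use the datasets' features and IDs.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit, train_test_split

X = np.random.rand(1000, 64)               # placeholder features
y = np.random.randint(0, 40, size=1000)    # placeholder sign labels
signer = np.random.randint(0, 10, size=1000)

# Signer-dependent: the same signers may appear in train and test.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Signer-independent: whole signers are held out of training.
gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(gss.split(X, y, groups=signer))
print("held-out signers:", sorted(set(signer[test_idx])))
```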

34 pages, 3234 KB  
Article
L1 Attrition vis-à-vis L2 Acquisition: Lexicon, Syntax–Pragmatics Interface, and Prosody in L1-English L2-Italian Late Bilinguals
by Mattia Zingaretti, Vasiliki Chondrogianni, D. Robert Ladd and Antonella Sorace
Languages 2025, 10(9), 224; https://doi.org/10.3390/languages10090224 - 4 Sep 2025
Viewed by 874
Abstract
Late bilingual speakers immersed in a second language (L2) environment often experience the non-pathological attrition of their first language (L1), exhibiting selective and reversible changes in L1 processing and production. While attrition research has largely focused on long-term residents in anglophone countries, examining changes primarily within a single L1 domain, the present study employs a novel experimental design to investigate L1 attrition, alongside L2 acquisition, across three domains (i.e., the lexicon, syntax–pragmatics interface, and prosody) in two groups of L1-English L2-Italian late bilinguals: long-term residents in Italy vs. university students in the UK. A total of 112 participants completed online tasks assessing lexical retrieval, anaphora resolution, and sentence stress patterns in both languages. First, both bilingual groups showed comparable levels of semantic interference in lexical retrieval. Second, at the syntax–pragmatics interface, only residents in Italy showed signs of L1 attrition in real-time processing of anaphora, while resolution preferences were similar between groups; in the L2, both bilingual groups demonstrated target-like preferences, despite some slowdown in processing. Third, while both groups showed some evidence of target-like L2 prosody, with residents in Italy matching L1-Italian sentence stress patterns closely, prosodic attrition was only reported for residents in Italy in exploratory analyses. Overall, this study supports the notion of L1 attrition as a natural consequence of bilingualism—one that is domain- and experience-dependent, unfolds along a continuum, and involves a complex (and possibly inverse) relationship between L1 and L2 performance that warrants further investigation.