Search Results (97)

Search Parameters:
Keywords = comparative linguistic expression

49 pages, 21554 KiB  
Article
A Disappearing Cultural Landscape: The Heritage of German-Style Land Use and Pug-And-Pine Architecture in Australia
by Dirk H. R. Spennemann
Land 2025, 14(8), 1517; https://doi.org/10.3390/land14081517 - 23 Jul 2025
Viewed by 227
Abstract
This paper investigates the cultural landscapes established by nineteenth-century German immigrants in South Australia and the southern Riverina of New South Wales, with particular attention to settlement patterns, architectural traditions and toponymic transformation. German immigration to Australia, though numerically modest compared to the Americas, significantly shaped local communities, especially due to religious cohesion among Lutheran migrants. These settlers established distinct, enduring rural enclaves characterized by linguistic, religious and architectural continuity. The paper examines three manifestations of these cultural landscapes. A rich toponymic landscape was created by imposing on natural landscape features and newly founded settlements the names of the communities from which the German settlers originated. It discusses the erosion of German toponyms under wartime nationalist pressures, the subsequent partial reinstatement and the implications for cultural memory. The study traces the second manifestation of a cultural landscape in the form of nucleated villages such as Hahndorf, Bethanien and Lobethal, which often followed the Hufendorf or Straßendorf layout, integrating Silesian land-use principles into the Australian context. Intensification of land use through housing subdivisions in two communities as well as agricultural intensification through broad acre farming has led to the fragmentation (town) and obliteration (rural) of the uniquely German form of land use. The final focus is the material expression of cultural identity through architecture, particularly the use of traditional Fachwerk (half-timbered) construction and adaptations such as pug-and-pine walling suited to local materials and climate. The paper examines domestic forms, including the distinctive black kitchen, and highlights how environmental and functional adaptation reshaped German building traditions in the antipodes. Despite a conservation movement and despite considerable documentation research in the late twentieth century, the paper shows that most German rural structures remain unlisted and vulnerable. Heritage neglect, rural depopulation, economic rationalization, lack of commercial relevance and local government policy have accelerated the decline of many of these vernacular buildings. The study concludes by problematizing the sustainability of conserving German Australian rural heritage in the face of regulatory, economic and demographic pressures. With its layering of intangible (toponymic), structural (buildings) and land use (cadastral) features, the examination of the cultural landscape established by nineteenth-century German immigrants adds to the body of literature on immigrant communities, settler colonialism and landscape research. Full article

27 pages, 1566 KiB  
Article
Is There a Woman in Los Candidatos? Gender Perception with Masculine “Generics” and Gender-Fair Language Strategies in Spanish
by Laura Vela-Plo, Marta De Pedis and Marina Ortega-Andrés
Languages 2025, 10(7), 175; https://doi.org/10.3390/languages10070175 - 21 Jul 2025
Viewed by 360
Abstract
This study examines how several gender-encoding strategies in Spanish and social factors influence gender perception, reinforcing or mitigating a sexist male bias. Using an experimental design, we tested four linguistic conditions in a job recruitment context: masculine forms (theoretically generic), gender-splits, epicenes, and non-binary neomorpheme “-e”. After reading a profile in one of these conditions, 837 participants (52% women) selected an image of a woman or man. Results show that masculine forms lead to the lowest selection of female candidates, manifesting a male bias. In contrast, gender-fair language (GFL) strategies, particularly the neomorpheme (les candidates), elicited the highest selection of female images. Importantly, not only did linguistic factors and participants’ gender identity influence results—with male participants selecting significantly more men in the masculine condition, but affinity with feminist movements and LGBTQIA+ communities or positive attitudes towards GFL also modulated responses—increasing female selections in GFL, but reinforcing male selections in the masculine. Additionally, no extra cognitive cost was found for GFL strategies compared to masculine expressions. These findings highlight the importance, not only of linguistic forms, but of social and attitudinal factors in shaping gender perception, with implications for reducing gender biases in language use and broader efforts toward social equity. Full article

19 pages, 1092 KiB  
Article
Seeing Through Other Eyes: How Language Experience and Cognitive Abilities Shape Theory of Mind
by Manali Pathare, Ester Navarro and Andrew R. A. Conway
Behav. Sci. 2025, 15(6), 755; https://doi.org/10.3390/bs15060755 - 30 May 2025
Viewed by 648
Abstract
Understanding others’ perspectives, or Theory of Mind (ToM), is a critical cognitive skill essential for social competence and effective interpersonal interactions. Although ToM is present in varying degrees across individuals, recent research indicates that linguistic factors, particularly bilingualism, can significantly influence its expression. Building on these findings, the current study examined performance on the perspective-taking trials of the Director Task among bilinguals and monolinguals. The results showed a nonsignificant trend in accurate responses as a function of bilingualism; however, a significant effect was found when examining only perspective-taking trials, with bilinguals outperforming monolinguals, suggesting that larger sample sizes are needed to identify this effect. Interestingly, a significant interaction between fluid intelligence and bilingualism was found, suggesting that bilinguals with higher fluid intelligence performed better on perspective-taking trials compared to bilinguals with lower fluid intelligence. The results emphasize the importance of domain-general abilities for the effect of bilingualism on perspective-taking and suggest that bilingualism’s effect on ToM may be more salient in individuals with higher cognitive abilities. Full article

29 pages, 2368 KiB  
Article
Chinese “Dialects” and European “Languages”: A Comparison of Lexico-Phonetic and Syntactic Distances
by Chaoju Tang, Vincent J. van Heuven, Wilbert Heeringa and Charlotte Gooskens
Languages 2025, 10(6), 127; https://doi.org/10.3390/languages10060127 - 29 May 2025
Viewed by 2927
Abstract
In this article, we tested some specific claims made in the literature on relative distances among European languages and among Chinese dialects, suggesting that some language varieties within the Sinitic family traditionally called dialects are, in fact, more linguistically distant from one another than some European varieties that are traditionally called languages. More generally, we examined whether distances among varieties within and across European language families were larger than those within and across Sinitic language varieties. To this end, we computed lexico-phonetic as well as syntactic distance measures for comparable language materials in six Germanic, five Romance and six Slavic languages, as well as for six Mandarin and nine non-Mandarin (‘southern’) Chinese varieties. Lexico-phonetic distances were expressed as the length-normalized MPI-weighted Levenshtein distances computed on the 100 most frequently used nouns in the 32 language varieties. Syntactic distance was implemented as the (complement of) the Pearson correlation coefficient found for the PoS trigram frequencies established for a parallel corpus of the same four texts translated into each of the 32 languages. The lexico-phonetic distances proved to be relatively large and of approximately equal magnitude in the Germanic, Slavic and non-Mandarin Chinese language varieties. However, the lexico-phonetic distances among the Romance and Mandarin languages were considerably smaller, but of similar magnitude. Cantonese (Guangzhou dialect) was lexico-phonetically as distant from Standard Mandarin (Beijing dialect) as European language pairs such as Portuguese–Italian, Portuguese–Romanian and Dutch–German. Syntactically, however, the differences among the Sinitic varieties were about ten times smaller than the differences among the European languages, both within and across the families—which provides some justification for the Chinese tradition of calling the Sinitic varieties dialects of the same language. Full article
(This article belongs to the Special Issue Dialectal Dynamics)
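
For readers who want to see the general idea, the sketch below (not the authors' code) illustrates the two distance notions described in the abstract: a length-normalized edit distance averaged over an aligned word list, and a syntactic distance defined as one minus the Pearson correlation of part-of-speech trigram frequencies. The paper's weighted Levenshtein variant and its 100-noun word list are not reproduced; plain edit distance and toy inputs stand in for them.

```python
# Illustrative only: plain Levenshtein distance stands in for the paper's
# weighted variant, and the word lists / trigram counts are placeholders.
def levenshtein(a: str, b: str) -> int:
    """Edit distance between two phonetic transcriptions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                      # deletion
                           cur[j - 1] + 1,                   # insertion
                           prev[j - 1] + (ca != cb)))        # substitution
        prev = cur
    return prev[-1]

def lexico_phonetic_distance(words_a: list[str], words_b: list[str]) -> float:
    """Mean length-normalized edit distance over an aligned word list."""
    pairs = list(zip(words_a, words_b))
    return sum(levenshtein(a, b) / max(len(a), len(b), 1) for a, b in pairs) / len(pairs)

def syntactic_distance(trigrams_a: dict[str, int], trigrams_b: dict[str, int]) -> float:
    """One minus the Pearson correlation of PoS-trigram frequencies."""
    keys = sorted(set(trigrams_a) | set(trigrams_b))
    xs = [trigrams_a.get(k, 0) for k in keys]
    ys = [trigrams_b.get(k, 0) for k in keys]
    n = len(keys)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return 1 - cov / (sx * sy)

print(lexico_phonetic_distance(["ɕyɛ", "ʂu"], ["hɔk", "sy"]))  # toy transcriptions
print(syntactic_distance({"DT NN VB": 12, "NN VB DT": 7}, {"DT NN VB": 9, "NN VB DT": 11}))
```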

19 pages, 16096 KiB  
Article
Evaluating Translation Quality: A Qualitative and Quantitative Assessment of Machine and LLM-Driven Arabic–English Translations
by Tawffeek A. S. Mohammed
Information 2025, 16(6), 440; https://doi.org/10.3390/info16060440 - 26 May 2025
Viewed by 996
Abstract
This study investigates translation quality between Arabic and English, comparing traditional rule-based machine translation systems, modern neural machine translation tools such as Google Translate, and large language models like ChatGPT. The research adopts both qualitative and quantitative approaches to assess the efficacy, accuracy, and contextual fidelity of translations. It particularly focuses on the translation of idiomatic and colloquial expressions as well as technical texts and genres. Using well-established evaluation metrics such as bilingual evaluation understudy (BLEU), translation error rate (TER), and character n-gram F-score (chrF), alongside the qualitative translation quality assessment model proposed by Juliane House, this study investigates the linguistic and semantic nuances of translations generated by different systems. This study concludes that although metric-based evaluations like BLEU and TER are useful, they often fail to fully capture the semantic and contextual accuracy of idiomatic and expressive translations. Large language models, particularly ChatGPT, show promise in addressing this gap by offering more coherent and culturally aligned translations. However, both systems demonstrate limitations that necessitate human post-editing for high-stakes content. The findings support a hybrid approach, combining machine translation tools with human oversight for optimal translation quality, especially in languages with complex morphology and culturally embedded expressions like Arabic. Full article
(This article belongs to the Special Issue Machine Translation for Conquering Language Barriers)
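
The automatic metrics named in the abstract can be computed with the sacrebleu package; the snippet below is a generic usage sketch, with invented English sentences rather than data from the study.

```python
# Generic metric computation with sacrebleu; sentences are invented placeholders.
import sacrebleu

hypotheses = ["The early bird catches the worm."]               # system output
references = [["Whoever wakes up early finds the gold."]]       # one list per reference set

bleu = sacrebleu.corpus_bleu(hypotheses, references)
ter = sacrebleu.corpus_ter(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references)
print(f"BLEU {bleu.score:.1f} | TER {ter.score:.1f} | chrF {chrf.score:.1f}")
```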

22 pages, 3576 KiB  
Article
A Deep Learning Approach to Unveil Types of Mental Illness by Analyzing Social Media Posts
by Rajashree Dash, Spandan Udgata, Rupesh K. Mohapatra, Vishanka Dash and Ashrita Das
Math. Comput. Appl. 2025, 30(3), 49; https://doi.org/10.3390/mca30030049 - 3 May 2025
Viewed by 938
Abstract
Mental illness has emerged as a widespread global health concern, often unnoticed and unspoken. In this era of digitization, social media has provided a prominent space for people to express their feelings and find solutions faster. Thus, this area of study, with its sheer amount of information on users’ behavioral attributes, can be explored in combination with the power of machine learning (ML) to make the entire diagnosis process smoother. In this study, an efficient ML model using Long Short-Term Memory (LSTM) is developed to determine the kind of mental illness a user may have from an arbitrary text the user has posted on social media. This study is based on natural language processing, where the prerequisites involve data collection from different social media sites and then pre-processing the collected data as required through stemming, lemmatization, stop word removal, etc. After examining the linguistic patterns of different social media posts, a reduced feature space is generated using appropriate feature engineering, which is then fed as input to the LSTM model to identify a type of mental illness. The performance of the proposed model is also compared with that of three other ML models, both with the full feature space and with the reduced one. The optimal model is selected by training and testing all of the models on the publicly available Reddit Mental Health Dataset. Overall, utilizing deep learning (DL) for mental health analysis can offer a promising avenue toward improved interventions, outcomes, and a better understanding of mental health issues at both the individual and population levels, aiding in decision-making processes. Full article
(This article belongs to the Section Engineering)
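
A minimal sketch of the kind of LSTM classifier the abstract describes, written in Keras; the vocabulary size, embedding dimension, and number of illness categories are assumed values, not the authors' configuration.

```python
# Assumed hyperparameters; this is a shape sketch, not the authors' model.
from tensorflow.keras import layers, models

VOCAB_SIZE, EMBED_DIM, NUM_CLASSES = 20000, 128, 5

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, EMBED_DIM),           # token ids -> dense vectors
    layers.LSTM(64),                                    # sequence encoder
    layers.Dropout(0.3),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),    # one probability per illness type
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```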

18 pages, 1049 KiB  
Article
Impact of Respectfulness on Semantic Integration During Discourse Processing
by Wenjing Yu, Yuhan Xie and Xiaohong Yang
Behav. Sci. 2025, 15(4), 448; https://doi.org/10.3390/bs15040448 - 1 Apr 2025
Viewed by 384
Abstract
Linguistic expressions of respectful terms are shaped by social status. Previous studies have shown that respectful term usage affects online language processing. This study investigates its impact on semantic integration through three self-paced reading experiments, manipulating Respect Consistency (Respect vs. Disrespect) and Semantic Consistency (Semantically Consistent vs. Semantically Inconsistent). In Experiment 1, disrespect was manipulated by using the plain form of pronouns instead of the respectful form when addressing individuals of higher social status. The results showed longer reading times for semantically inconsistent sentences compared to consistent ones, reflecting the classic semantic integration effect. Nevertheless, this effect was only detected when respectful pronouns were employed. For Experiments 2 and 3, disrespect was operationalized by directly addressing individuals of higher social status by their personal names. A comparable interaction to that in Experiment 1 was identified solely in Experiment 3, which involved an appropriateness judgment task. In contrast, no such interaction was observed in Experiment 2, which involved a reading comprehension task. These results indicated that both disrespectful pronouns and addressing individuals by their personal names hinder semantic integration, but through different mechanisms. These findings provide important insights into the role of respectful term usage in semantic integration during discourse comprehension. Full article

16 pages, 332 KiB  
Article
The Ultimate in Verbalization: How Japanese Writer Furui Yoshikichi Reads Western Mystical Experiences
by Seungjun Lee and Do-Hyung Kim
Religions 2025, 16(3), 354; https://doi.org/10.3390/rel16030354 - 12 Mar 2025
Viewed by 657
Abstract
This study examines how the Japanese writer Furui Yoshikichi engages with Western mystical experiences, particularly through his reading of Martin Buber’s Ecstatic Confessions and his broader engagement with Meister Eckhart and medieval German mysticism. Furui’s literary inquiry revolves around the inherent tension between the ineffability of mystical experiences and their articulation through language. He critically engages with the paradox of verbalization, recognizing that while mystical experiences transcend linguistic and temporal boundaries, they nevertheless achieve resonance through written and spoken expressions. His reflections converge with Buddhist notions of Sūnyatā, underscoring intersections between Eastern and Western spiritual traditions. Drawing upon his background as a translator of German literature, Furui mediates mystical experiences within a comparative framework, navigating cultural and linguistic boundaries. His approach elucidates the concept of the multiplicity of qualities in mystical experiences, demonstrating particularity and universality simultaneously. By analyzing Furui’s interpretation of mystical texts, this study contributes to broader discussions on the limitations of language in conveying transcendence and the role of literary imagination in rendering the ineffable. Full article
(This article belongs to the Special Issue Imagining Ultimacy: Religious and Spiritual Experience in Literature)
26 pages, 921 KiB  
Article
Communication Outcomes of Children with Hearing Loss: A Comparison of Two Early Intervention Approaches
by Aisha Casoojee, Katijah Khoza-Shangase and Amisha Kanji
Audiol. Res. 2025, 15(2), 27; https://doi.org/10.3390/audiolres15020027 - 8 Mar 2025
Cited by 1 | Viewed by 1570
Abstract
Background: Early intervention approaches play a critical role in shaping the communication outcomes of children with hearing loss, influencing their language development and overall learning trajectory. Objectives: The main objective of this study was to compare the communication outcomes of children with hearing loss who received Listening and Spoken Language-South Africa (LSL-SA) with those who received Traditional Speech-Language Therapy (TSLT). Methods: A retrospective record review was conducted to gather data on communication outcomes from participants’ speech-language therapy records. Communication outcomes were measured using standardized assessments evaluating speech intelligibility, expressive vocabulary, receptive language, expressive language, audition, and cognitive–linguistic skills. The data were analyzed using quantitative statistics. Key statistical methods included measures to determine associations, identify statistical significance, determine outcomes, and compare differences between the two groups. Results: The study found that children in the LSL-SA group had significantly better communication outcomes, with 63% achieving age-appropriate speech intelligibility compared to 45% in the TSLT group (p = 0.046). Similar trends were observed for expressive vocabulary (LSL-SA: 58% vs. TSLT: 39%, p = 0.048) and receptive language (LSL-SA: 60% vs. TSLT: 39%, p = 0.043). Additionally, 66% of children in the LSL-SA group were recommended for mainstream schooling, compared to 39% in the TSLT group (p = 0.0023). These findings highlight the importance of early amplification and structured intervention in improving communication outcomes. The results also emphasize the importance of Early Hearing Detection and Intervention (EHDI) in decreasing the odds of delay in communication outcomes, irrespective of the type of communication approach, although a higher proportion of children in the LSL-SA approach group achieved age-appropriate communication outcomes than those in the TSLT group. Conclusions: This study highlights that communication intervention approaches aligned with LSL-SA practice promote better communication development and enhance spoken language outcomes in children with hearing loss, facilitating successful transitions to mainstream schooling. Contribution: This study provides contextually relevant evidence for implementing an LSL-SA intervention approach for children with hearing loss. The implications of these findings for clinical practice and future research are discussed in detail. Full article

14 pages, 423 KiB  
Article
A Small-Scale Evaluation of Large Language Models Used for Grammatical Error Correction in a German Children’s Literature Corpus: A Comparative Study
by Phuong Thao Nguyen, Bernd Nuss, Roswita Dressler and Katie Ovens
Appl. Sci. 2025, 15(5), 2476; https://doi.org/10.3390/app15052476 - 25 Feb 2025
Viewed by 1237
Abstract
Grammatical error correction (GEC) has become increasingly important for enhancing the quality of OCR-scanned texts. This small-scale study explores the application of Large Language Models (LLMs) for GEC in German children’s literature, a genre with unique linguistic challenges due to modified language, colloquial expressions, and complex layouts that often lead to OCR-induced errors. While conventional rule-based and statistical approaches have been used in the past, advancements in machine learning and artificial intelligence have introduced models capable of more contextually nuanced corrections. Despite these developments, limited research has been conducted on evaluating the effectiveness of state-of-the-art LLMs, specifically in the context of German children’s literature. To address this gap, we fine-tuned encoder-based models GBERT and GELECTRA on German children’s literature, and compared their performance to decoder-based models GPT-4o and Llama series (versions 3.2 and 3.1) in a zero-shot setting. Our results demonstrate that all pretrained models, both encoder-based (GBERT, GELECTRA) and decoder-based (GPT-4o, Llama series), failed to effectively remove OCR-generated noise in children’s literature, highlighting the necessity of a preprocessing step to handle structural inconsistencies and artifacts introduced during scanning. This study also addresses the lack of comparative evaluations between encoder-based and decoder-based models for German GEC, with most prior work focusing on English. Quantitative analysis reveals that decoder-based models significantly outperform fine-tuned encoder-based models, with GPT-4o and Llama-3.1-70B achieving the highest accuracy in both error detection and correction. Qualitative assessment further highlights distinct model behaviors: GPT-4o demonstrates the most consistent correction performance, handling grammatical nuances effectively while minimizing overcorrection. Llama-3.1-70B excels in error detection but occasionally relies on frequency-based substitutions over meaning-driven corrections. Unlike earlier decoder-based models, which often exhibited overcorrection tendencies, our findings indicate that state-of-the-art decoder-based models strike a better balance between correction accuracy and semantic preservation. By identifying the strengths and limitations of different model architectures, this study enhances the accessibility and readability of OCR-scanned German children’s literature. It also provides new insights into the role of preprocessing in digitized text correction, the comparative performance of encoder- and decoder-based models, and the evolving correction tendencies of modern LLMs. These findings contribute to language preservation, corpus linguistics, and digital archiving, offering an AI-driven solution for improving the quality of digitized children’s literature while ensuring linguistic and cultural integrity. Future research should explore multimodal approaches that integrate visual context to further enhance correction accuracy for children’s books with image-embedded text. Full article
(This article belongs to the Special Issue Applications of Natural Language Processing to Data Science)
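
The zero-shot setting for the decoder-based models can be approximated as a single chat-completion request; the sketch below is an assumption about how such a prompt might look (model name, German instruction, and the noisy example sentence are all invented), not the authors' exact pipeline.

```python
# Hypothetical zero-shot GEC prompt; not the study's actual prompt or data.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment
ocr_line = "Der kleine Bär gehtt in den Wald und sucht Honigg."  # toy OCR-noisy sentence

response = client.chat.completions.create(
    model="gpt-4o",
    temperature=0,
    messages=[
        {"role": "system",
         "content": "Korrigiere Grammatik- und OCR-Fehler im folgenden Satz. "
                    "Gib nur den korrigierten Satz zurück."},
        {"role": "user", "content": ocr_line},
    ],
)
print(response.choices[0].message.content)
```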

24 pages, 464 KiB  
Article
Probabilistic Linguistic Multiple Attribute Group Decision-Making Based on a Choquet Operator and Its Application in Supplier Selection
by Weijia Kang, Xin Liang and Yan Peng
Mathematics 2025, 13(5), 740; https://doi.org/10.3390/math13050740 - 25 Feb 2025
Viewed by 519
Abstract
As an enhanced version of traditional linguistic term sets, Probabilistic Linguistic Term Sets (PLTS) incorporate probabilistic information, thereby offering a more robust approach to Multiple Attribute Group Decision-Making (MAGDM) and significantly improving its efficacy. This paper proposes two novel information aggregation operators for PLTS to address MAGDM problems in the PLTS context. Firstly, we introduce Choquet integral-based generalized arithmetic and geometric operators, which are designed to fuse decision information expressed by different PLTSs, thereby more comprehensively considering the interrelationships among various attributes. Subsequently, we further define measures of group consistency and inconsistency for individual decision information in MAGDM, which are used to determine the information weights of decision-makers. Finally, the group decision information is aggregated using the proposed PLTS aggregation operators. The effectiveness as well as the applicability of the developed method are illustrated through numerical examples and comparative analysis. Full article
(This article belongs to the Special Issue Advanced Intelligent Algorithms for Decision Making Under Uncertainty)
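
The aggregation principle behind the proposed operators is the discrete Choquet integral, which weights sorted attribute scores by a capacity (fuzzy measure) over attribute subsets and can therefore model interaction between criteria. The sketch below shows that principle on crisp numbers with an invented capacity; the paper's PLTS-valued generalization is not reproduced.

```python
# Crisp Choquet integral over an assumed toy capacity; the PLTS layer is omitted.
def choquet(values: dict[str, float], capacity: dict[frozenset, float]) -> float:
    """Choquet integral of attribute scores with respect to a capacity."""
    total, prev = 0.0, 0.0
    remaining = set(values)                              # attributes still at or above the level
    for attr, score in sorted(values.items(), key=lambda kv: kv[1]):
        total += (score - prev) * capacity[frozenset(remaining)]
        prev = score
        remaining.remove(attr)
    return total

scores = {"price": 0.6, "quality": 0.9, "delivery": 0.4}
mu = {frozenset(): 0.0,
      frozenset({"price"}): 0.3, frozenset({"quality"}): 0.4, frozenset({"delivery"}): 0.2,
      frozenset({"price", "quality"}): 0.8, frozenset({"price", "delivery"}): 0.5,
      frozenset({"quality", "delivery"}): 0.6,
      frozenset({"price", "quality", "delivery"}): 1.0}
print(round(choquet(scores, mu), 3))  # 0.68 with this capacity
```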

32 pages, 1379 KiB  
Article
Multi-Criteria Decision Analysis for Sustainable Medicinal Supply Chain Problems with Adaptability and Challenges Issues
by Alaa Fouad Momena, Kamal Hossain Gazi and Sankar Prasad Mondal
Logistics 2025, 9(1), 31; https://doi.org/10.3390/logistics9010031 - 14 Feb 2025
Cited by 1 | Viewed by 1252
Abstract
Background: The supply chain refers to the full process of creating and providing a good or service, starting with the raw materials and ending with the final customer. It requires cooperation and coordination between many parties, including the suppliers, manufacturers, distributors, retailers, and customers. Methods: In the medicinal supply chain (MSC), the critical nature of these processes becomes more complicated. It requires strict regulation, quality control, and traceability to ensure patient safety and compliance with regulatory standards. This study is conducted to suggest a smooth channel to deal with the challenges and adaptability of the MSC. Different MSC challenges are considered as criteria which deal with various adaptation plans. Multi-criteria decision-making (MCDM) methodologies are taken as optimization tools and probabilistic linguistic term sets (PLTSs) are used to express uncertainty. Results: The subscript degree function (SDF) and deviation degree function (DDF) are introduced to evaluate the crisp value of the PLTSs. An MSC model is constructed to optimize the sustainable medicinal supply chain and overcome various barriers to MSC problems. Conclusions: Additionally, sensitivity analysis and comparative analysis were conducted to check the robustness and flexibility of the system. Finally, the conclusion section determines the optimal weighted criteria for the MSC problem and identifies the best possible solutions for MSC using PLTS-based MCDM methodologies. Full article

18 pages, 617 KiB  
Article
Probabilistic Uncertain Linguistic VIKOR Method for Teaching Reform Plan Evaluation for the Core Course “Big Data Technology and Applications” in the Digital Economy Major
by Wenshuai Wu
Mathematics 2024, 12(23), 3710; https://doi.org/10.3390/math12233710 - 26 Nov 2024
Cited by 1 | Viewed by 796
Abstract
The reform of the teaching plan for the core course “big data technology and applications” in the digital economy major has become a globally recognized challenge. Qualitative data, fuzzy and uncertain information, a complex decision-making environment, and various impact factors create significant difficulties in formulating effective responses to teaching reform plans. Therefore, the objective of this paper is to develop a precision evaluation technology for teaching reform plans in the core course “big data technology and applications”, to address the challenges of uncertainty and fuzziness in complex decision-making environments. In this study, an extended VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) method based on probabilistic uncertain linguistic term sets (PULTS) is presented for teaching reform plan evaluation. The extended probabilistic uncertain linguistic VIKOR method can effectively and accurately capture the fuzziness and uncertainty of complex decision-making processes. In addition, PULTS is integrated into the VIKOR method to express decision-makers’ fuzzy language preference information in terms of probability. A case study is conducted to verify and test the extended method, and the research results demonstrate that it is highly effective for decision-making regarding teaching reform plans to foster the high-quality development of education, especially in uncertain and fuzzy environments. Furthermore, parameter and comparative analyses verify the effectiveness of the extended method. Finally, the paper outlines directions for future research. Full article
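
As background, classical crisp VIKOR ranks alternatives by a group-utility score S, an individual-regret score R, and a compromise index Q; the extension in the paper replaces crisp ratings with PULTS. The sketch below shows only the crisp ranking step, with an invented decision matrix and weights.

```python
# Crisp VIKOR with assumed toy data; the paper's PULTS extension is not shown.
import numpy as np

X = np.array([[7.0, 8.0, 6.0],     # plan A on three benefit criteria
              [8.0, 6.0, 7.0],     # plan B
              [6.0, 9.0, 8.0]])    # plan C
w = np.array([0.4, 0.35, 0.25])    # criterion weights
v = 0.5                            # weight of the "majority of criteria" strategy

f_best, f_worst = X.max(axis=0), X.min(axis=0)
regret = (f_best - X) / (f_best - f_worst)          # normalized distance to the ideal
S = (w * regret).sum(axis=1)                        # group utility
R = (w * regret).max(axis=1)                        # individual regret
Q = v * (S - S.min()) / (S.max() - S.min()) + (1 - v) * (R - R.min()) / (R.max() - R.min())

for name, q in sorted(zip(["A", "B", "C"], Q), key=lambda t: t[1]):
    print(f"Plan {name}: Q = {q:.3f}")              # smaller Q = better compromise
```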

40 pages, 3414 KiB  
Article
Investigating the Predominance of Large Language Models in Low-Resource Bangla Language over Transformer Models for Hate Speech Detection: A Comparative Analysis
by Fatema Tuj Johora Faria, Laith H. Baniata and Sangwoo Kang
Mathematics 2024, 12(23), 3687; https://doi.org/10.3390/math12233687 - 25 Nov 2024
Cited by 5 | Viewed by 3023
Abstract
The rise in abusive language on social media is a significant threat to mental health and social cohesion. For Bengali speakers, the need for effective detection is critical. However, current methods fall short in addressing the massive volume of content. Improved techniques are urgently needed to combat online hate speech in Bengali. Traditional machine learning techniques, while useful, often require large, linguistically diverse datasets to train models effectively. This paper addresses the urgent need for improved hate speech detection methods in Bengali, aiming to fill the existing research gap. Contextual understanding is crucial in differentiating between harmful speech and benign expressions. Large language models (LLMs) have shown state-of-the-art performance in various natural language tasks due to their extensive training on vast amounts of data. We explore the application of LLMs, specifically GPT-3.5 Turbo and Gemini 1.5 Pro, for Bengali hate speech detection using Zero-Shot and Few-Shot Learning approaches. Unlike conventional methods, Zero-Shot Learning identifies hate speech without task-specific training data, making it highly adaptable to new datasets and languages. Few-Shot Learning, on the other hand, requires minimal labeled examples, allowing for efficient model training with limited resources. Our experimental results show that LLMs outperform traditional approaches. In this study, we evaluate GPT-3.5 Turbo and Gemini 1.5 Pro on multiple datasets. To further enhance our study, we consider the distribution of comments in different datasets and the challenge of class imbalance, which can affect model performance. The BD-SHS dataset consists of 35,197 comments in the training set, 7542 in the validation set, and 7542 in the test set. The Bengali Hate Speech Dataset v1.0 and v2.0 include comments distributed across various hate categories: personal hate (629), political hate (1771), religious hate (502), geopolitical hate (1179), and gender abusive hate (316). The Bengali Hate Dataset comprises 7500 non-hate and 7500 hate comments. GPT-3.5 Turbo achieved impressive results with 97.33%, 98.42%, and 98.53% accuracy. In contrast, Gemini 1.5 Pro showed lower performance across all datasets. Specifically, GPT-3.5 Turbo excelled with significantly higher accuracy compared to Gemini 1.5 Pro. These outcomes highlight a 6.28% increase in accuracy compared to traditional methods, which achieved 92.25%. Our research contributes to the growing body of literature on LLM applications in natural language processing, particularly in the context of low-resource languages. Full article
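
To make the Zero-Shot versus Few-Shot distinction concrete, the sketch below builds the two kinds of chat prompts; the instruction wording, labels, and example comments are invented English placeholders rather than items from the Bengali datasets.

```python
# Hypothetical prompt construction; labels and examples are invented placeholders.
SYSTEM = "Classify the comment as HATE or NOT_HATE. Reply with one word."

def zero_shot_messages(comment: str) -> list[dict]:
    return [{"role": "system", "content": SYSTEM},
            {"role": "user", "content": comment}]

def few_shot_messages(comment: str, demos: list[tuple[str, str]]) -> list[dict]:
    msgs = [{"role": "system", "content": SYSTEM}]
    for text, label in demos:                      # a few labeled demonstrations
        msgs += [{"role": "user", "content": text},
                 {"role": "assistant", "content": label}]
    msgs.append({"role": "user", "content": comment})
    return msgs

demos = [("You people should all disappear.", "HATE"),
         ("Great match yesterday, well played!", "NOT_HATE")]
print(few_shot_messages("I hope your whole group fails.", demos))
```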

17 pages, 511 KiB  
Article
Interval Linguistic-Valued Intuitionistic Fuzzy Concept Lattice and Its Application to Linguistic Association Rule Extraction
by Kuo Pang, Chao Fu, Li Zou, Gaoxuan Wang and Mingyu Lu
Axioms 2024, 13(12), 812; https://doi.org/10.3390/axioms13120812 - 21 Nov 2024
Viewed by 814
Abstract
In a world rich with linguistic-valued data, traditional methods often lead to significant information loss when converting such data into other formats. This paper presents a novel approach for constructing an interval linguistic-valued intuitionistic fuzzy concept lattice, which adeptly manages qualitative linguistic information by leveraging the strengths of interval-valued intuitionistic fuzzy sets to represent both fuzziness and uncertainty. First, the interval linguistic-valued intuitionistic fuzzy concept lattice is constructed by integrating interval intuitionistic fuzzy sets, capturing the bidirectional fuzzy linguistic information between objects, which encompasses both positive and negative aspects. Second, by analyzing the expectations of concept extent relative to intent, and considering both the membership and non-membership perspectives of linguistic expressions, we focus on the extraction of linguistic association rules. Finally, comparative analyses and examples demonstrate the effectiveness of the proposed approach, showcasing its potential to advance the management of linguistic data in various domains. Full article