Search Results (6)

Search Parameters:
Keywords = sememe

28 pages, 529 KiB  
Article
A Novel Approach to Semic Analysis: Extraction of Atoms of Meaning to Study Polysemy and Polyreferentiality
by Vanessa Bonato, Giorgio Maria Di Nunzio and Federica Vezzani
Languages 2024, 9(4), 121; https://doi.org/10.3390/languages9040121 - 27 Mar 2024
Cited by 3 | Viewed by 1797
Abstract
Semic analysis is a linguistic technique aimed at methodically factorizing the meaning of terms into a collection of minimum non-decomposable atoms of meaning. In this study, we propose a methodology targeted at enhancing the systematicity of semic analysis of medical terminology in order to improve the quality of the resulting set of atoms of meaning, the identification of concepts, and specialized domain studies. Our approach is based on: (1) a semi-automatic domain-specific corpus-based extraction of semes, (2) the application of the property of termhood to address the diaphasic and the diastratic variations of language, (3) the automatic lemmatization of semes, and (4) seme weighting to establish the order of semes in the sememe. The paper explores the distinction between denotative and connotative semes, offering insights into polysemy and polyreferentiality in medical terminology.
(This article belongs to the Special Issue Semantics and Meaning Representation)
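The seme-weighting step (point 4 of the approach above) can be sketched as a tf-idf-style ranking. The function name, the weighting formula, and the toy medical semes below are illustrative assumptions, not the authors' actual scheme:

```python
import math

def order_semes(term_semes, corpus_seme_counts, total_terms):
    """Toy seme weighting: semes that occur in fewer terms of the
    domain corpus are treated as more distinctive and ranked first.
    (Illustrative only; the paper's actual weighting may differ.)"""
    weights = {
        seme: math.log(total_terms / corpus_seme_counts.get(seme, 1))
        for seme in term_semes
    }
    # The sememe is the ordered list of semes, most distinctive first.
    return sorted(term_semes, key=lambda s: weights[s], reverse=True)

# e.g. for a term whose semes are /organ/, /inflammation/, /liver/
ranked = order_semes(
    ["organ", "inflammation", "liver"],
    {"organ": 10, "inflammation": 2, "liver": 1},
    total_terms=10,
)
```

Here /liver/ appears in only one of the ten corpus terms, so it is ranked as the most distinctive seme of the sememe.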
18 pages, 1643 KiB  
Article
A Sememe Prediction Method Based on the Central Word of a Semantic Field
by Guanran Luo and Yunpeng Cui
Electronics 2024, 13(2), 413; https://doi.org/10.3390/electronics13020413 - 19 Jan 2024
Cited by 2 | Viewed by 1588
Abstract
A “sememe” is an indivisible minimal unit of meaning in linguistics. Manually annotating sememes in words requires a significant amount of time, so automated sememe prediction is often used to improve efficiency. Semantic fields serve as crucial mediators connecting the semantics between words. This paper proposes an unsupervised method for sememe prediction based on the common semantics between words and semantic fields. In comparison to methods based on word vectors, this approach demonstrates a superior ability to align the semantics of words and sememes. We construct various types of semantic fields through ChatGPT and design a semantic field selection strategy to adapt to different scenario requirements. Subsequently, following the order of word–sense–sememe, we decompose the process of calculating the semantic sememe similarity between semantic fields and target words. Finally, we select the word with the highest average semantic sememe similarity as the central word of the semantic field, using its semantic primes as the predicted result. On the BabelSememe dataset constructed based on the sememe knowledge base HowNet, the method of semantic field central word (SFCW) achieved the best results for both unstructured and structured sememe prediction tasks, demonstrating the effectiveness of this approach. Additionally, we conducted qualitative and quantitative analyses on the sememe structure of the central word.
(This article belongs to the Section Artificial Intelligence)
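The central-word selection described above can be sketched as follows. The toy 2-d vectors are assumptions for illustration, and plain cosine similarity stands in for the paper's word–sense–sememe similarity decomposition:

```python
import numpy as np

def central_word(field_vectors):
    """Pick the word of a semantic field with the highest average
    cosine similarity to the other members; its sememes would then
    serve as the prediction for the field's target word.
    (Sketch only; the paper's similarity is computed over sememes.)"""
    words = list(field_vectors)
    mat = np.array([field_vectors[w] for w in words], dtype=float)
    mat /= np.linalg.norm(mat, axis=1, keepdims=True)
    sim = mat @ mat.T
    # Average similarity to the other members (exclude self-similarity).
    avg = (sim.sum(axis=1) - 1.0) / (len(words) - 1)
    return words[int(avg.argmax())]

field = {"dog": [1.0, 0.0], "wolf": [0.9, 0.1], "cat": [0.5, 0.5]}
center = central_word(field)
```

In this toy field, "wolf" lies between the other two vectors, so it has the highest average similarity and is chosen as the central word.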
11 pages, 448 KiB  
Article
Palamism, Humboldtianism, and Magicism in Pavel Florensky’s Philosophy of Language
by Dmitry Biriukov and Artyom Gravin
Religions 2023, 14(2), 197; https://doi.org/10.3390/rel14020197 - 2 Feb 2023
Cited by 6 | Viewed by 2146
Abstract
This article analyzes the evolution of Pavel Florensky’s teachings about language from the end of the 1910s to the early 1920s in the context of the two lines of influence (Humboldtian–Potebnian and Palamite) on the basis of which this teaching developed. In his reasoning about language, Florensky, proceeding from intuition, declares that there is a rigid connection between the word’s sound/phoneme; its morpheme, etymon, and sememe (the given here and now meaning); and its denotate. According to Florensky, this points to the magicism of the word as such. At the beginning of the 1910s, Florensky, having become a participant in the name-glorifying debates, also adhered to the line presupposing a rigid connection between the word’s sound (the name, which is applied to God), its meaning, and its denotate. All these lines converged in Florensky’s thoughts on the nature of language in the late 1910s and the early 1920s. He turned again to the Humboldtian–Potebnian language scheme but rethought it, speaking of the intentionally charged sememe as the word’s inner form. In texts written in the late 1910s and the early 1920s, we single out two aspects of the understanding of the magicism of the word which were key for Florensky, namely the aspect revealed in the discourse of the independent and autonomous existence of words and names and the aspect presupposing the intentionally willed moment in the phenomenon of the magicism of the word.
15 pages, 525 KiB  
Article
Completing WordNets with Sememe Knowledge
by Shengwen Li, Bing Li, Hong Yao, Shunping Zhou, Junjie Zhu and Zhuang Zeng
Electronics 2022, 11(1), 79; https://doi.org/10.3390/electronics11010079 - 27 Dec 2021
Cited by 1 | Viewed by 3781
Abstract
WordNets organize words into synonymous word sets, and the connections between words represent the semantic relationships between them, making WordNets an indispensable resource for natural language processing (NLP) tasks. As languages develop and evolve, WordNets need to be constantly updated by hand. To address the problem of inadequate semantic knowledge of “new words”, this study explores a novel method to automatically update the WordNet knowledge base by incorporating word-embedding techniques with sememe knowledge from HowNet. The model first characterizes the relationships among words and sememes with a graph structure and jointly learns the embedding vectors of words and sememes; finally, it synthesizes word similarities to predict concepts (synonym sets) of new words. To examine the performance of the proposed model, a new dataset connecting sememe knowledge and WordNet is constructed. Experimental results show that the proposed model outperforms the existing baseline models.
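The final prediction step can be sketched as below, under the assumption that word embeddings (here toy 2-d vectors) have already been learned jointly with sememes; a k-nearest-neighbour vote stands in for the paper's synthesis of word similarities, and the synset names are hypothetical:

```python
import numpy as np
from collections import Counter

def predict_synset(new_vec, word_vecs, word_synset, k=2):
    """Assign a new word to the synonym set that its k most similar
    known words vote for (a nearest-neighbour stand-in for the
    paper's learned similarity synthesis)."""
    q = np.asarray(new_vec, dtype=float)
    q /= np.linalg.norm(q)
    scored = []
    for w, v in word_vecs.items():
        v = np.asarray(v, dtype=float)
        scored.append((float(q @ (v / np.linalg.norm(v))), w))
    scored.sort(reverse=True)  # highest cosine similarity first
    votes = Counter(word_synset[w] for _, w in scored[:k])
    return votes.most_common(1)[0][0]

vecs = {"happy": [1.0, 0.0], "glad": [0.95, 0.05], "sad": [0.0, 1.0]}
synsets = {"happy": "joy.n.01", "glad": "joy.n.01", "sad": "sorrow.n.01"}
guess = predict_synset([0.9, 0.1], vecs, synsets)
```

The new word's two nearest neighbours both belong to the same synset, so that synset is predicted for it.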
13 pages, 2264 KiB  
Article
Incorporating Synonym for Lexical Sememe Prediction: An Attention-Based Model
by Xiaojun Kang, Bing Li, Hong Yao, Qingzhong Liang, Shengwen Li, Junfang Gong and Xinchuan Li
Appl. Sci. 2020, 10(17), 5996; https://doi.org/10.3390/app10175996 - 29 Aug 2020
Cited by 6 | Viewed by 4145
Abstract
A sememe is the smallest semantic unit for describing real-world concepts, and sememes improve the interpretability and performance of Natural Language Processing (NLP). To keep sememe descriptions accurate, the knowledge base needs to be continuously updated, which is time-consuming and labor-intensive. Sememe prediction can assign sememes to unlabeled words and is valuable for automatically building and/or updating sememe knowledge bases (KBs). Existing methods are overdependent on the quality of word embedding vectors, so accurate sememe prediction remains a challenge. To address this problem, this study proposes a novel model that improves sememe prediction by introducing synonyms. The model scores candidate sememes from synonyms by combining distances of words in the embedding vector space, and derives an attention-based strategy to dynamically balance the two kinds of knowledge: the synonymous word set and the word embedding vectors. A series of experiments shows that the proposed model significantly improves sememe prediction accuracy. The model provides a methodological reference for updating commonsense KBs and for embedding commonsense knowledge.
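The scoring idea above can be sketched as a softmax attention over synonym similarities. The helper names, toy vectors, and sememe labels are hypothetical; this is a simplified stand-in for the paper's attention strategy:

```python
import math

def score_sememes(target_vec, synonyms, syn_vecs, syn_sememes):
    """Sketch: a softmax attention over each synonym's cosine
    similarity to the target word weights how strongly that
    synonym's sememes are recommended for the target."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    sims = [cos(target_vec, syn_vecs[s]) for s in synonyms]
    exps = [math.exp(v) for v in sims]
    attn = [e / sum(exps) for e in exps]  # attention weights sum to 1
    scores = {}
    for a, s in zip(attn, synonyms):
        for sem in syn_sememes[s]:
            scores[sem] = scores.get(sem, 0.0) + a
    return sorted(scores, key=scores.get, reverse=True)

ranking = score_sememes(
    [1.0, 0.0],
    ["glad", "cheerful"],
    {"glad": [1.0, 0.0], "cheerful": [0.0, 1.0]},
    {"glad": ["joy", "feel"], "cheerful": ["joy", "bright"]},
)
```

A sememe shared by both synonyms ("joy") accumulates weight from both and outranks sememes contributed by a single synonym.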
20 pages, 3028 KiB  
Article
DAWE: A Double Attention-Based Word Embedding Model with Sememe Structure Information
by Shengwen Li, Renyao Chen, Bo Wan, Junfang Gong, Lin Yang and Hong Yao
Appl. Sci. 2020, 10(17), 5804; https://doi.org/10.3390/app10175804 - 21 Aug 2020
Cited by 1 | Viewed by 3268
Abstract
Word embedding is an important resource for natural language processing tasks, generating distributed representations of words from large amounts of text data. Recent evidence demonstrates that introducing sememe knowledge is a promising strategy to improve the performance of word embedding. However, previous works ignored the structural information of sememe knowledge. To fill this gap, this study implicitly synthesizes the structural features of sememes into word embedding models based on an attention mechanism. Specifically, we propose a novel double attention-based word embedding (DAWE) model that encodes the characteristics of sememes into words by a “double attention” strategy. DAWE is integrated with two specific word training models through context-aware semantic matching techniques. The experimental results show that, in the word similarity and word analogy reasoning tasks, the performance of word embedding can be effectively improved by synthesizing the structural information of sememe knowledge. A case study also verifies the power of the DAWE model in the word sense disambiguation task. Furthermore, DAWE is a general framework for encoding sememes into words that can be integrated into other existing word embedding models, providing more options for various downstream natural language processing tasks.
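A single attention hop of the kind such models stack can be sketched as follows; the vectors are toy values and the function is an illustrative simplification, not the DAWE architecture itself:

```python
import numpy as np

def sememe_attention(context_vec, sememe_vecs):
    """Sketch of one attention hop: sememes more relevant to the
    current context get larger softmax weights, and the word
    representation is the weighted sum of its sememe vectors."""
    S = np.asarray(sememe_vecs, dtype=float)
    logits = S @ np.asarray(context_vec, dtype=float)
    w = np.exp(logits - logits.max())   # numerically stable softmax
    w /= w.sum()
    return w @ S

# A context aligned with the first sememe pulls the word vector toward it.
vec = sememe_attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
```

Because the weights depend on the context, the same word gets different representations in different contexts, which is what makes such models useful for word sense disambiguation.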