Special Issue "Current Approaches and Applications in Natural Language Processing"

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 31 January 2022.

Special Issue Editors

Prof. Dr. Arturo Montejo-Ráez
Guest Editor
SINAI Research Group, CEATIC, Universidad de Jaén, 23071 Jaén, Spain
Interests: natural language processing; machine learning; deep NLP; text mining; knowledge engineering; linked data
Dr. Salud María Jiménez-Zafra
Guest Editor
SINAI Research Group, Computer Science Department, CEATIC, Universidad de Jaén, 23071 Jaén, Spain
Interests: natural language processing; negation detection and treatment; semantics; text mining

Special Issue Information

Dear Colleagues,

Current approaches in Natural Language Processing (NLP) have shown impressive improvements in many major tasks: machine translation, language modelling, text generation, sentiment/emotion analysis, natural language understanding, and question answering, among others. The advent of new methods and techniques, such as graph-based approaches, reinforcement learning, and deep learning, has boosted many NLP tasks to human-level performance and beyond. This progress has attracted the interest of many companies, so new products and solutions can profit from the advances in this relevant area of artificial intelligence.

This Special Issue, which focuses on emerging techniques and popular applications of NLP methods, is an opportunity to report on these achievements and to establish a useful reference for industry and researchers on cutting-edge human language technologies. Given the focus of the journal, we expect to receive works that propose new NLP algorithms as well as applications of current and novel NLP tasks. Updated overviews of the given topics will also be considered, identifying trends, potential future research areas, and new commercial products.

The topics of this Special Issue include but are not limited to:

  • Question answering: open-domain Q&A, knowledge-based Q&A...
  • Knowledge extraction: relation extraction, fine-grained entity recognition...
  • Text generation: summarization, style transfer, dial...
  • Text classification: sentiment/emotion analysis, semi-supervised and zero-shot learning...
  • Behaviour modelling: early risk detection, cyberbullying, customer modelling...
  • Dialogue systems: chatbots, voice assistants...
  • Reinforcement learning
  • Data augmentation
  • Graph-based approaches
  • Adversarial approaches
  • Multi-modal approaches
  • Multi-lingual/cross-lingual approaches

Prof. Dr. Arturo Montejo-Ráez
Dr. Salud María Jiménez-Zafra
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (14 papers)


Research


Article
Fine-Grained Named Entity Recognition Using a Multi-Stacked Feature Fusion and Dual-Stacked Output in Korean
Appl. Sci. 2021, 11(22), 10795; https://doi.org/10.3390/app112210795 - 15 Nov 2021
Abstract
Named entity recognition (NER) is a natural language processing task to identify spans that mention named entities and to annotate them with predefined named entity classes. Although many NER models based on machine learning have been proposed, their performance in terms of processing fine-grained NER tasks was less than acceptable. This is because the training data of a fine-grained NER task is much more unbalanced than those of a coarse-grained NER task. To overcome the problem presented by unbalanced data, we propose a fine-grained NER model that compensates for the sparseness of fine-grained NEs by using the contextual information of coarse-grained NEs. From another viewpoint, many NER models have used different levels of features, such as part-of-speech tags and gazetteer look-up results, in a nonhierarchical manner. Unfortunately, these models experience the feature interference problem. Our solution to this problem is to adopt a multi-stacked feature fusion scheme, which accepts different levels of features as its input. The proposed model is based on multi-stacked long short-term memories (LSTMs) with a multi-stacked feature fusion layer for acquiring multilevel embeddings and a dual-stacked output layer for predicting fine-grained NEs based on the categorical information of coarse-grained NEs. Our experiments indicate that the proposed model is capable of state-of-the-art performance. The results show that the proposed model can effectively alleviate the unbalanced data problem that frequently occurs in a fine-grained NER task. In addition, the multi-stacked feature fusion layer contributes to the improvement of NER performance, confirming that the proposed model can alleviate the feature interference problem. Based on this experimental result, we conclude that the proposed model is well-designed to effectively perform NER tasks. Full article
(This article belongs to the Special Issue Current Approaches and Applications in Natural Language Processing)

Article
A Language Model for Misogyny Detection in Latin American Spanish Driven by Multisource Feature Extraction and Transformers
Appl. Sci. 2021, 11(21), 10467; https://doi.org/10.3390/app112110467 - 08 Nov 2021
Abstract
Creating effective mechanisms to detect misogyny online automatically represents significant scientific and technological challenges. The complexity of recognizing misogyny through computer models lies in the fact that it is a subtle type of violence, it is not always explicitly aggressive, and it can even hide behind seemingly flattering words, jokes, parodies, and other expressions. Currently, it is even difficult to have an exact figure for the rate of misogynistic comments online because, unlike other types of violence, such as physical violence, these events are not registered by any statistical systems. This research contributes to the development of models for the automatic detection of misogynistic texts in Latin American Spanish and contributes to the design of data augmentation methodologies since the amount of data required for deep learning models is considerable. Full article

Article
Causal Pathway Extraction from Web-Board Documents
Appl. Sci. 2021, 11(21), 10342; https://doi.org/10.3390/app112110342 - 03 Nov 2021
Abstract
This research aims to extract causal pathways, particularly disease causal pathways, through cause-effect relation (CErel) extraction from web-board documents. The causal pathways provide people with a comprehensible representation of disease complications. A causative/effect-concept expression is based on a verb phrase of an elementary discourse unit (EDU) or a simple sentence. The research addresses three main problems: how to determine CErel on an EDU-concept pair containing both causative and effect concepts in one EDU; how to extract causal pathways from EDU-concept pairs having CErel; and how to indicate and represent implicit effect/causative-concept EDUs as implicit mediators in the extracted causal pathways. Therefore, we apply the EDU's word co-occurrence concept (wrdCoc) as an EDU-concept and the self-Cartesian product of a wrdCoc set from the documents for extracting wrdCoc pairs having CErel into a wrdCoc-pair set, after learning CErel on wrdCoc pairs by supervised machine learning. The wrdCoc-pair set is used for extracting the causal pathways by wrdCoc-pair matching through the documents. We then propose transitive closure and a dynamic template to indicate and represent the implicit mediators along with the explicit ones. In contrast to previous works, the proposed approach enables causal-pathway extraction with high accuracy from the documents. Full article
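The transitive-closure step described in the abstract can be illustrated with a short sketch. The cause-effect pairs below are invented for illustration; in the paper they come from wrdCoc pairs classified as having CErel:

```python
# Sketch: transitive closure over extracted cause->effect pairs,
# used to link causes to downstream effects via (possibly implicit)
# mediators. The example pairs are hypothetical.

def transitive_closure(pairs):
    """Return all (cause, effect) pairs reachable by chaining relations."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

pairs = {("diabetes", "nephropathy"), ("nephropathy", "renal failure")}
closed = transitive_closure(pairs)
# The inferred ("diabetes", "renal failure") link passes through
# "nephropathy", which plays the role of a mediator on the pathway.
```

The inferred long-range links are what make an intermediate concept that never co-occurs explicitly with the original cause visible as a mediator in the pathway.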

Article
A Query Expansion Method Using Multinomial Naive Bayes
Appl. Sci. 2021, 11(21), 10284; https://doi.org/10.3390/app112110284 - 02 Nov 2021
Abstract
Information retrieval (IR) aims to obtain relevant information according to a certain user need and involves a great diversity of data such as texts, images, or videos. Query expansion techniques, as part of information retrieval (IR), are used to obtain more items, particularly documents, that are relevant to the user requirements. The user initial query is reformulated, adding meaningful terms with similar significance. In this study, a supervised query expansion technique based on an innovative use of the Multinomial Naive Bayes to extract relevant terms from the first documents retrieved by the initial query is presented. The proposed method was evaluated using MAP and R-prec on the first 5, 10, 15, and 100 retrieved documents. The improved performance of the expanded queries increased the number of relevant retrieved documents in comparison to the baseline method. We achieved more accurate document retrieval results (MAP 0.335, R-prec 0.369, P5 0.579, P10 0.469, P15 0.393, P100 0.175) as compared to the top performers in TREC2017 Precision Medicine Track. Full article

Article
Enhance Text-to-Text Transfer Transformer with Generated Questions for Thai Question Answering
Appl. Sci. 2021, 11(21), 10267; https://doi.org/10.3390/app112110267 - 01 Nov 2021
Abstract
Question Answering (QA) is a natural language processing task that enables the machine to understand a given context and answer a given question. There are several QA research trials containing high resources of the English language. However, Thai is one of the languages that have low availability of labeled corpora in QA studies. According to previous studies, while the English QA models could achieve more than 90% of F1 scores, Thai QA models could obtain only 70% in our baseline. In this study, we aim to improve the performance of Thai QA models by generating more question-answer pairs with Multilingual Text-to-Text Transfer Transformer (mT5) along with data preprocessing methods for Thai. With this method, the question-answer pairs can synthesize more than 100 thousand pairs from provided Thai Wikipedia articles. Utilizing our synthesized data, many fine-tuning strategies were investigated to achieve the highest model performance. Furthermore, we have presented that the syllable-level F1 is a more suitable evaluation measure than Exact Match (EM) and the word-level F1 for Thai QA corpora. The experiment was conducted on two Thai QA corpora: Thai Wiki QA and iApp Wiki QA. The results show that our augmented model is the winner on both datasets compared to other modern transformer models: Roberta and mT5. Full article
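The overlap-based F1 used for QA evaluation can be sketched at the unit level; computing the syllable-level variant advocated in the abstract would additionally require segmenting Thai answers into syllables, which is assumed to be done by an external tokenizer here:

```python
# Sketch of overlap-based F1 between a predicted answer and a gold
# answer, computed over units (words, or syllables for Thai once the
# strings are segmented).
from collections import Counter

def overlap_f1(pred_units, gold_units):
    """F1 over the multiset overlap of prediction and gold units."""
    common = Counter(pred_units) & Counter(gold_units)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_units)
    recall = overlap / len(gold_units)
    return 2 * precision * recall / (precision + recall)

# Word-level example (English units for readability):
pred = ["king", "rama", "v"]
gold = ["rama", "v"]
print(overlap_f1(pred, gold))  # 0.8
```

Because Thai is written without spaces, an Exact Match or word-level comparison penalizes boundary disagreements that a syllable-level overlap tolerates, which motivates the measure.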

Article
Classification of Problem and Solution Strings in Scientific Texts: Evaluation of the Effectiveness of Machine Learning Classifiers and Deep Neural Networks
Appl. Sci. 2021, 11(21), 9997; https://doi.org/10.3390/app11219997 - 26 Oct 2021
Abstract
One of the central aspects of science is systematic problem-solving. Therefore, problem and solution statements are an integral component of the scientific discourse. The scientific analysis would be more successful if the problem–solution claims in scientific texts were automatically classified. It would help in knowledge mining, idea generation, and information classification from scientific texts. It would also help to compare scientific papers and automatically generate review articles in a given field. However, computational research on problem–solution patterns has been scarce. The linguistic analysis, instructional-design research, theory, and empirical methods have not paid enough attention to the study of problem–solution patterns. This paper tries to solve this issue by applying the computational techniques of machine learning classifiers and neural networks to a set of features to intelligently classify a problem phrase from a non-problem phrase and a solution phrase from a non-solution phrase. Our analysis shows that deep learning networks outperform machine learning classifiers. Our best model was able to classify a problem phrase from a non-problem phrase with an accuracy of 90.0% and a solution phrase from a non-solution phrase with an accuracy of 86.0%. Full article

Article
NASca and NASes: Two Monolingual Pre-Trained Models for Abstractive Summarization in Catalan and Spanish
Appl. Sci. 2021, 11(21), 9872; https://doi.org/10.3390/app11219872 - 22 Oct 2021
Abstract
Most of the models proposed in the literature for abstractive summarization are generally suitable for the English language but not for other languages. Multilingual models were introduced to address that language constraint, but despite their applicability being broader than that of the monolingual models, their performance is typically lower, especially for minority languages like Catalan. In this paper, we present a monolingual model for abstractive summarization of textual content in the Catalan language. The model is a Transformer encoder-decoder which is pretrained and fine-tuned specifically for the Catalan language using a corpus of newspaper articles. In the pretraining phase, we introduced several self-supervised tasks to specialize the model on the summarization task and to increase the abstractivity of the generated summaries. To study the performance of our proposal in languages with higher resources than Catalan, we replicate the model and the experimentation for the Spanish language. The usual evaluation metrics, not only the most used ROUGE measure but also other more semantic ones such as BertScore, do not allow to correctly evaluate the abstractivity of the generated summaries. In this work, we also present a new metric, called content reordering, to evaluate one of the most common characteristics of abstractive summaries, the rearrangement of the original content. We carried out an exhaustive experimentation to compare the performance of the monolingual models proposed in this work with two of the most widely used multilingual models in text summarization, mBART and mT5. The experimentation results support the quality of our monolingual models, especially considering that the multilingual models were pretrained with many more resources than those used in our models. Likewise, it is shown that the pretraining tasks helped to increase the degree of abstractivity of the generated summaries. 
To our knowledge, this is the first work that explores a monolingual approach for abstractive summarization both in Catalan and Spanish. Full article

Article
Ternion: An Autonomous Model for Fake News Detection
Appl. Sci. 2021, 11(19), 9292; https://doi.org/10.3390/app11199292 - 06 Oct 2021
Abstract
In recent years, the consumption of social media content to keep up with global news and to verify its authenticity has become a considerable challenge. Social media enables us to easily access news anywhere, anytime, but it also gives rise to the spread of fake news, thereby delivering false information. This also has a negative impact on society. Therefore, it is necessary to determine whether or not news spreading over social media is real. This will allow for confusion among social media users to be avoided, and it is important in ensuring positive social development. This paper proposes a novel solution by detecting the authenticity of news through natural language processing techniques. Specifically, this paper proposes a novel scheme comprising three steps, namely, stance detection, author credibility verification, and machine learning-based classification, to verify the authenticity of news. In the last stage of the proposed pipeline, several machine learning techniques are applied, such as decision trees, random forest, logistic regression, and support vector machine (SVM) algorithms. For this study, the fake news dataset was taken from Kaggle. The experimental results show an accuracy of 93.15%, precision of 92.65%, recall of 95.71%, and F1-score of 94.15% for the support vector machine algorithm. The SVM is better than the second best classifier, i.e., logistic regression, by 6.82%. Full article
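The final classification stage, pairing text features with an SVM, can be sketched with scikit-learn. The toy headlines and labels are invented stand-ins; the study used a Kaggle fake-news dataset and reported SVM as its best classifier:

```python
# Sketch of the classification stage: TF-IDF features feeding a
# linear SVM. Training data here is a hypothetical miniature corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

headlines = [
    "scientists confirm water found on mars",
    "celebrity spotted with secret alien twin",
    "parliament passes new budget bill",
    "miracle pill cures all diseases overnight",
]
labels = [0, 1, 0, 1]  # 0 = real, 1 = fake

model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(headlines, labels)
print(model.predict(["aliens endorse miracle diet pill"]))
```

In the full pipeline this classifier would run only after the stance-detection and author-credibility steps have produced their signals.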

Article
A Corpus-Based Study of Linguistic Deception in Spanish
Appl. Sci. 2021, 11(19), 8817; https://doi.org/10.3390/app11198817 - 23 Sep 2021
Abstract
In the last decade, fields such as psychology and natural language processing have devoted considerable attention to the automatization of the process of deception detection, developing and employing a wide array of automated and computer-assisted methods for this purpose. Similarly, another emerging research area is focusing on computer-assisted deception detection using linguistics, with promising results. Accordingly, in the present article, the reader is firstly provided with an overall review of the state of the art of corpus-based research exploring linguistic cues to deception as well as an overview on several approaches to the study of deception and on previous research into its linguistic detection. In an effort to promote corpus-based research in this context, this study explores linguistic cues to deception in the Spanish written language with the aid of an automatic text classification tool, by means of an ad hoc corpus containing ground truth data. Interestingly, the key findings reveal that, although there is a set of linguistic cues which contributes to the global statistical classification model, there are some discursive differences across the subcorpora, yielding better classification results on the analysis conducted on the subcorpus containing emotionally loaded language. Full article

Article
Incorporating Concreteness in Multi-Modal Language Models with Curriculum Learning
Appl. Sci. 2021, 11(17), 8241; https://doi.org/10.3390/app11178241 - 06 Sep 2021
Abstract
Over the last few years, there has been an increase in the studies that consider experiential (visual) information by building multi-modal language models and representations. It is shown by several studies that language acquisition in humans starts with learning concrete concepts through images and then continues with learning abstract ideas through the text. In this work, the curriculum learning method is used to teach the model concrete/abstract concepts through images and their corresponding captions to accomplish multi-modal language modeling/representation. We use the BERT and Resnet-152 models on each modality and combine them using attentive pooling to perform pre-training on the newly constructed dataset, which is collected from the Wikimedia Commons based on concrete/abstract words. To show the performance of the proposed model, downstream tasks and ablation studies are performed. The contribution of this work is two-fold: A new dataset is constructed from Wikimedia Commons based on concrete/abstract words, and a new multi-modal pre-training approach based on curriculum learning is proposed. The results show that the proposed multi-modal pre-training approach contributes to the success of the model. Full article

Article
Named Entity Correction in Neural Machine Translation Using the Attention Alignment Map
Appl. Sci. 2021, 11(15), 7026; https://doi.org/10.3390/app11157026 - 29 Jul 2021
Abstract
Neural machine translation (NMT) methods based on various artificial neural network models have shown remarkable performance in diverse tasks and have become mainstream for machine translation currently. Despite the recent successes of NMT applications, a predefined vocabulary is still required, meaning that it cannot cope with out-of-vocabulary (OOV) or rarely occurring words. In this paper, we propose a postprocessing method for correcting machine translation outputs using a named entity recognition (NER) model to overcome the problem of OOV words in NMT tasks. We use attention alignment mapping (AAM) between the named entities of input and output sentences, and mistranslated named entities are corrected using word look-up tables. The proposed method corrects named entities only, so it does not require retraining of existing NMT models. We carried out translation experiments on a Chinese-to-Korean translation task for Korean historical documents, and the evaluation results demonstrated that the proposed method improved the bilingual evaluation understudy (BLEU) score by 3.70 from the baseline. Full article

Article
Comparative Analysis of Current Approaches to Quality Estimation for Neural Machine Translation
Appl. Sci. 2021, 11(14), 6584; https://doi.org/10.3390/app11146584 - 17 Jul 2021
Abstract
Quality estimation (QE) has recently gained increasing interest as it can predict the quality of machine translation results without a reference translation. QE is an annual shared task at the Conference on Machine Translation (WMT), and most recent studies have applied the multilingual pretrained language model (mPLM) to address this task. Recent studies have focused on the performance improvement of this task using data augmentation with finetuning based on a large-scale mPLM. In this study, we eliminate the effects of data augmentation and conduct a pure performance comparison between various mPLMs. Separate from the recent performance-driven QE research involved in competitions addressing a shared task, we utilize the comparison for sub-tasks from WMT20 and identify an optimal mPLM. Moreover, we demonstrate QE using the multilingual BART model, which has not yet been utilized, and conduct comparative experiments and analyses with cross-lingual language models (XLMs), multilingual BERT, and XLM-RoBERTa. Full article

Article
English–Welsh Cross-Lingual Embeddings
Appl. Sci. 2021, 11(14), 6541; https://doi.org/10.3390/app11146541 - 16 Jul 2021
Cited by 1
Abstract
Cross-lingual embeddings are vector space representations where word translations tend to be co-located. These representations enable learning transfer across languages, thus bridging the gap between data-rich languages such as English and others. In this paper, we present and evaluate a suite of cross-lingual embeddings for the English–Welsh language pair. To train the bilingual embeddings, a Welsh corpus of approximately 145 M words was combined with an English Wikipedia corpus. We used a bilingual dictionary to frame the problem of learning bilingual mappings as a supervised machine learning task, where a word vector space is first learned independently on a monolingual corpus, after which a linear alignment strategy is applied to map the monolingual embeddings to a common bilingual vector space. Two approaches were used to learn monolingual embeddings, including word2vec and fastText. Three cross-language alignment strategies were explored, including cosine similarity, inverted softmax and cross-domain similarity local scaling (CSLS). We evaluated different combinations of these approaches using two tasks, bilingual dictionary induction, and cross-lingual sentiment analysis. The best results were achieved using monolingual fastText embeddings and the CSLS metric. We also demonstrated that by including a few automatically translated training documents, the performance of a cross-lingual text classifier for Welsh can increase by approximately 20 percent points. Full article
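The CSLS criterion mentioned in the abstract has a compact closed form and can be sketched with NumPy. The vectors below are random stand-ins for aligned English and Welsh embeddings:

```python
# Sketch of cross-domain similarity local scaling (CSLS) between a
# source embedding matrix X and a target matrix Y, following the
# standard formulation: CSLS(x, y) = 2*cos(x, y) - r_T(x) - r_S(y),
# where r_T(x) is the mean cosine of x to its k nearest target
# neighbours and r_S(y) the mean cosine of y to its k nearest source
# neighbours. This penalises "hub" words that are close to everything.
import numpy as np

def csls(X, Y, k=2):
    """CSLS similarity matrix between rows of X and rows of Y."""
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    Y = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    sims = X @ Y.T                                    # cosine matrix
    r_T = np.sort(sims, axis=1)[:, -k:].mean(axis=1)  # per source row
    r_S = np.sort(sims, axis=0)[-k:, :].mean(axis=0)  # per target col
    return 2 * sims - r_T[:, None] - r_S[None, :]

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))   # toy "English" vectors
Y = rng.normal(size=(5, 8))   # toy "Welsh" vectors
scores = csls(X, Y, k=2)
best = scores.argmax(axis=1)  # CSLS nearest target for each source word
```

For bilingual dictionary induction, `best` gives each source word's translation candidate; this is the retrieval step that the paper found works best on top of fastText embeddings.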

Review


Review
A Survey on Recent Named Entity Recognition and Relationship Extraction Techniques on Clinical Texts
Appl. Sci. 2021, 11(18), 8319; https://doi.org/10.3390/app11188319 - 08 Sep 2021
Abstract
Significant growth in Electronic Health Records (EHR) over the last decade has provided an abundance of clinical text that is mostly unstructured and untapped. This huge amount of clinical text data has motivated the development of new information extraction and text mining techniques. Named Entity Recognition (NER) and Relationship Extraction (RE) are key components of information extraction tasks in the clinical domain. In this paper, we highlight the present status of clinical NER and RE techniques in detail by discussing the existing proposed NLP models for the two tasks and their performances and discuss the current challenges. Our comprehensive survey on clinical NER and RE encompass current challenges, state-of-the-art practices, and future directions in information extraction from clinical text. This is the first attempt to discuss both of these interrelated topics together in the clinical context. We identified many research articles published based on different approaches and looked at applications of these tasks. We also discuss the evaluation metrics that are used in the literature to measure the effectiveness of the two these NLP methods and future research directions. Full article
