Reprint

Current Approaches and Applications in Natural Language Processing

Edited by
August 2022
476 pages
  • ISBN 978-3-0365-4439-7 (Hardback)
  • ISBN 978-3-0365-4440-3 (PDF)

This book is a reprint of the Special Issue Current Approaches and Applications in Natural Language Processing that was published in

Summary

Current approaches to Natural Language Processing (NLP) have shown impressive improvements in many important tasks: machine translation, language modeling, text generation, sentiment/emotion analysis, natural language understanding, and question answering, among others. The advent of new methods and techniques, such as graph-based approaches, reinforcement learning, and deep learning, has boosted many NLP tasks to human-level performance (and even beyond). This has attracted the interest of many companies, whose new products and solutions can benefit from advances in this relevant area within the artificial intelligence domain. This Special Issue reprint, focusing on emerging techniques and trending applications of NLP methods, reports on some of these achievements, establishing a useful reference for industry and researchers on cutting-edge human language technologies.

Format
  • Hardback
License
© 2022 by the authors; CC BY-NC-ND license
Keywords
natural language processing; distributional semantics; machine learning; language model; word embeddings; machine translation; sentiment analysis; quality estimation; neural machine translation; pretrained language model; multilingual pre-trained language model; WMT; neural networks; recurrent neural networks; named entity recognition; multi-modal dataset; Wikimedia Commons; multi-modal language model; concreteness; curriculum learning; electronic health records; clinical text; relationship extraction; text classification; linguistic corpus; deception; linguistic cues; statistical analysis; discriminant function analysis; fake news detection; stance detection; social media; abstractive summarization; monolingual models; multilingual models; transformer models; transfer learning; discourse analysis; problem–solution pattern; automatic classification; machine learning classifiers; deep neural networks; question answering; machine reading comprehension; query expansion; information retrieval; multinomial naive bayes; relevance feedback; cause-effect relation; transitive closure; word co-occurrence; automatic hate speech detection; multisource feature extraction; Latin American Spanish language models; fine-grained named entity recognition; k-stacked feature fusion; dual-stacked output; unbalanced data problem; document representation; semantic analysis; conceptual modeling; universal representation; trend analysis; topic modeling; BERT; geospatial data technology and application; attention model; dual multi-head attention; inter-information relationship; question difficulty estimation; BERT model; conditional random field; pre-trained model; fine-tuning; feature fusion; attention mechanism; task-oriented dialogue systems; Arabic; multi-lingual transformer model; mT5; language marker; mental disorder; deep learning; LIWC; spaCy; RobBERT; fastText; LIME; conversational AI; intent detection; slot filling; retrieval-based question answering; query generation; entity linking; knowledge graph; entity embedding; global model; DISC model; personality recognition; predictive model; text analysis; data privacy; federated learning; transformer