
Artificial Neural Networks and Their Applications

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (22 April 2022) | Viewed by 19,423

Special Issue Editors


Guest Editor
1. National Research Nuclear University MEPhI, 115409 Moscow, Russia
2. National Research Centre “Kurchatov Institute”, 123182 Moscow, Russia
Interests: machine learning; artificial intelligence; neural computing

Guest Editor
1. National Research Centre “Kurchatov Institute”, 1 Akademika Kurchatova sq., 123182 Moscow, Russia
2. Moscow Institute of Physics and Technologies (State University), 9 Institutskiy Lane, Moscow Region, 141701 Dolgoprudny, Russia
3. Department of Physics, Lomonosov Moscow State University, 1-2 Leninskie Gory, 119991 Moscow, Russia
Interests: physics of laser interaction with matter; physics of low-dimensional structures; semiconductor physics; optoelectronics; nanoelectronics; neurology

Guest Editor
Institute of Philology and Intercultural Communication, Kazan Federal University, Kremlevskaya Street 18, 420008 Kazan, Russia
Interests: quantitative linguistics; cognitive science; computational linguistics

Special Issue Information

Dear Colleagues,

The applications of neural networks are currently expanding towards the analysis of data with complicated structure, whether textual, audio, visual, sensor, or other. Machine learning methods increasingly embrace not only the traditional fields of physics and technology but also numerous problems from medicine, the humanities, and social studies, such as the analysis of social network data, knowledge bases, and communications, autonomous device control, the filtering of malicious content, and so on.

This expansion owes not only to the quantitative growth in available data and computational resources, but largely also to ongoing progress in learning methods, in particular: deep learning; transfer learning, which applies a large body of knowledge inferred from one task to another task lacking data; and bio-inspired (spiking) neural networks, which are potentially implementable in low-power biomorphic hardware.

This Special Issue aims to outline the current state of neural-network-based methods and promising directions for further development. We therefore cordially invite contributions that present learning topologies, methods for data representation and preprocessing, or methods of collecting data sets (corpora) for training, along with comprehensive comparisons to conventional machine learning methods on practical applied tasks.

Dr. Alexander Sboev
Prof. Dr. Pavel K. Kashkarov
Prof. Dr. Valery Solovyev
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • artificial intelligence
  • neural networks
  • deep learning
  • transfer learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (3 papers)


Research


14 pages, 693 KiB  
Article
Extrapolation of Human Estimates of the Concreteness/Abstractness of Words by Neural Networks of Various Architectures
by Valery Solovyev and Vladimir Ivanov
Appl. Sci. 2022, 12(9), 4750; https://doi.org/10.3390/app12094750 - 9 May 2022
Viewed by 2035
Abstract
In a great deal of theoretical and applied cognitive and neurophysiological research, it is essential to have more vocabularies with concreteness/abstractness ratings. Since creating such dictionaries by interviewing informants is labor-intensive, considerable effort has been made to machine-extrapolate human rankings. The purpose of the article is to study the possibility of the fast construction of high-quality machine dictionaries. In this paper, state-of-the-art deep learning neural networks are applied to this problem for the first time. For the English language, the BERT model achieved a record result for the quality of a machine-generated dictionary. It is known that the use of multilingual models makes it possible to transfer ratings from one language to another; however, this approach is understudied and the results achieved so far are rather weak. Microsoft’s Multilingual-MiniLM-L12-H384 model obtained the best result to date in transferring ratings from one language to another. Thus, the article demonstrates the advantages of transformer-type neural networks in this task. Their use will allow the generation of good-quality dictionaries in low-resource languages. Additionally, we study the dependence of the result on the amount of initial data and the number of languages in the multilingual case. The possibilities of transferring into a certain language from one language and from several languages together are compared, and the influence of the volume of training and test data has been studied. It has been found that an increase in the amount of training data in a multilingual case does not improve the result.
(This article belongs to the Special Issue Artificial Neural Networks and Their Applications)
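The extrapolation idea in the abstract above can be illustrated with a minimal nearest-neighbour sketch: words with known human ratings anchor predictions for unrated words via embedding similarity. The words, vectors, and ratings below are invented stand-ins; the paper itself uses BERT-family embeddings and trained prediction heads rather than this similarity heuristic.

```python
from math import sqrt

# Toy stand-in vectors: in the paper's setting these would be embeddings
# from a model such as BERT; words and values here are purely illustrative.
embeddings = {
    "stone": (0.9, 0.1, 0.2),
    "chair": (0.8, 0.2, 0.1),
    "idea":  (0.1, 0.9, 0.8),
    "truth": (0.2, 0.8, 0.9),
    "brick": (0.85, 0.15, 0.15),  # no human rating: to be extrapolated
}
# Human concreteness ratings (e.g. on a 1-5 scale) for the rated words.
ratings = {"stone": 4.9, "chair": 4.7, "idea": 1.5, "truth": 1.8}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def extrapolate(word, k=2):
    """Predict a rating as the mean rating of the k rated words whose
    embeddings are most similar to the target word's embedding."""
    nearest = sorted(ratings, reverse=True,
                     key=lambda w: cosine(embeddings[word], embeddings[w]))
    return sum(ratings[w] for w in nearest[:k]) / k
```

Here `extrapolate("brick")` averages the ratings of the two most similar rated words (`stone` and `chair`), yielding 4.8; the same scheme scales to real embedding spaces with thousands of rated anchor words.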

34 pages, 1431 KiB  
Article
Analysis of the Full-Size Russian Corpus of Internet Drug Reviews with Complex NER Labeling Using Deep Learning Neural Networks and Language Models
by Alexander Sboev, Sanna Sboeva, Ivan Moloshnikov, Artem Gryaznov, Roman Rybka, Alexander Naumov, Anton Selivanov, Gleb Rylkov and Vyacheslav Ilyin
Appl. Sci. 2022, 12(1), 491; https://doi.org/10.3390/app12010491 - 4 Jan 2022
Cited by 11 | Viewed by 3934
Abstract
The paper presents the full-size Russian corpus of Internet users’ reviews on medicines with complex named entity recognition (NER) labeling of pharmaceutically relevant entities. We evaluate the accuracy levels reached on this corpus by a set of advanced deep learning neural networks for extracting mentions of these entities. The corpus markup includes mentions of the following entities: medication (33,005 mentions), adverse drug reaction (1778), disease (17,403), and note (4490). Two of them, medication and disease, include a set of attributes. A part of the corpus has a coreference annotation with 1560 coreference chains in 300 documents. A multi-label model based on a language model and a set of features has been developed for recognizing entities of the presented corpus. We analyze how the choice of different model components affects the entity recognition accuracy. Those components include methods for vector representation of words, types of language models pre-trained for the Russian language, ways of text normalization, and other pre-processing methods. The sufficient size of our corpus allows us to study the effects of particularities of annotation and entity balancing. We compare our corpus to existing ones by the occurrences of entities of different types and show that balancing the corpus by the number of texts with and without adverse drug event (ADR) mentions improves the ADR recognition accuracy with no notable decline in the accuracy of detecting entities of other types. As a result, the state of the art for the pharmacological entity extraction task for the Russian language is established on a full-size labeled corpus. For the ADR entity type, the accuracy achieved is 61.1% by the F1-exact metric, which is on par with the accuracy level for other language corpora with similar characteristics and ADR representativeness. The accuracy of the coreference relation extraction evaluated on our corpus is 71%, which is higher than the results achieved on other Russian-language corpora.
(This article belongs to the Special Issue Artificial Neural Networks and Their Applications)
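Corpora like the one described above are typically annotated with character-level entity spans, which must be converted to per-token BIO labels before training a neural tagger. A minimal sketch of that conversion, assuming whitespace tokenization and illustrative spans (the corpus's own tokenization and full label inventory may differ):

```python
def to_bio(text, spans):
    """Convert character-level entity spans [(start, end, label), ...]
    into per-token BIO labels over a whitespace tokenization."""
    tagged, pos = [], 0
    for tok in text.split():
        # Locate this token's character offsets in the original text.
        start = text.index(tok, pos)
        end = start + len(tok)
        pos = end
        tag = "O"
        for s, e, label in spans:
            if s <= start and end <= e:
                # B- on the first token of a span, I- on continuations.
                tag = ("B-" if start == s else "I-") + label
                break
        tagged.append((tok, tag))
    return tagged

# Hypothetical review fragment; Medication and ADR mirror two of the
# corpus's entity types, but the sentence and offsets are invented.
example = to_bio("Took aspirin and got a headache",
                 [(5, 12, "Medication"), (23, 31, "ADR")])
```

The resulting `(token, tag)` pairs are the standard input format for sequence-labeling models such as the language-model-based tagger evaluated in the paper.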

Review


20 pages, 940 KiB  
Review
BERT Models for Arabic Text Classification: A Systematic Review
by Ali Saleh Alammary
Appl. Sci. 2022, 12(11), 5720; https://doi.org/10.3390/app12115720 - 4 Jun 2022
Cited by 97 | Viewed by 12319
Abstract
Bidirectional Encoder Representations from Transformers (BERT) has gained increasing attention from researchers and practitioners as it has proven to be an invaluable technique in natural language processing. This is mainly due to its unique features, including its ability to predict words conditioned on both the left and the right context, and its ability to be pretrained on the enormous body of plain text available on the web. As BERT gained more interest, more BERT models were introduced to support different languages, including Arabic. The current state of knowledge and practice in applying BERT models to Arabic text classification is limited. In an attempt to begin remedying this gap, this review synthesizes the different Arabic BERT models that have been applied to text classification. It investigates the differences between them and compares their performance. It also examines how effective they are compared to the original English BERT models. It concludes by offering insight into aspects that need further improvement and future work.
(This article belongs to the Special Issue Artificial Neural Networks and Their Applications)
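Whatever the pretrained encoder, BERT-style text classification ends in the same computation: a pooled sentence vector passes through a linear layer and a softmax. A dependency-free sketch of that head, with weights, dimensions, and class names chosen purely for illustration (none come from the reviewed models):

```python
from math import exp

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    m = max(logits)
    exps = [exp(v - m) for v in logits]
    total = sum(exps)
    return [v / total for v in exps]

def classify(pooled, weights, biases):
    """Linear softmax head: the shape of computation a BERT classifier
    applies to the pooled [CLS] representation to pick a class."""
    logits = [sum(w * x for w, x in zip(row, pooled)) + b
              for row, b in zip(weights, biases)]
    probs = softmax(logits)
    return probs.index(max(probs))

# Illustrative 2-class head over a 3-dimensional pooled vector.
W = [[1.0, 0.0, -0.5],    # class 0 weights
     [-0.5, 1.0, 0.0]]    # class 1 weights
b = [0.0, 0.1]
```

In a real fine-tuning setup the pooled vector is 768-dimensional (BERT-base) and `W` and `b` are learned jointly with the encoder; the classification step itself is exactly this small.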
