Application of Artificial Intelligence, Deep Neural Networks

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (13 April 2022) | Viewed by 53743

Special Issue Editors


Prof. Dr. Keon Myung Lee
Guest Editor
Department of Computer Science, Chungbuk National University, Cheongju 28644, Korea
Interests: AI; machine learning; data mining; bioinformatics

Dr. Mi-Hye Kim
Guest Editor
Department of Computer Engineering, Chungbuk National University, Cheongju 28644, Korea
Interests: AI; data analysis; medical IT; gesture recognition; decision-making problems; serious games

Special Issue Information

Dear Colleagues,

A deep neural network (DNN) is an artificial neural network (ANN) with multiple layers between the input and output layers. There are different types of neural networks, but they always consist of the same components: neurons, synapses, weights, biases, and functions. These components operate in a way loosely analogous to the human brain and can be trained like any other machine learning (ML) algorithm. Deep neural networks have recently become the standard tool for solving a variety of computer vision problems, and their use is gradually extending to hardware implementations and human–computer interfaces. The Special Issue ‘Application of Artificial Intelligence, Deep Neural Networks’ covers all aspects of theoretical, methodological, and applied artificial intelligence (AI).
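
As a minimal illustration of this layered structure, the sketch below builds a small fully connected DNN with two hidden layers between the input and output layers and trains it like any other ML model. The framework (TensorFlow/Keras), layer sizes, and synthetic data are illustrative assumptions, not requirements of the Special Issue.

```python
# Minimal sketch of a deep neural network (DNN): an input layer, two hidden
# layers, and an output layer. Framework and sizes are illustrative assumptions.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(16,)),             # 16 input features (arbitrary)
    tf.keras.layers.Dense(32, activation="relu"),   # hidden layer 1: weights, biases, function
    tf.keras.layers.Dense(16, activation="relu"),   # hidden layer 2
    tf.keras.layers.Dense(1, activation="sigmoid")  # output layer (binary decision)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Train on synthetic data, exactly as one would train any other ML algorithm.
X = np.random.rand(256, 16).astype("float32")
y = (X.sum(axis=1) > 8).astype("float32")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```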

The Editors welcome original research articles, comprehensive reviews, correspondence, and perspectives supported by machine/deep learning, data science, data analysis, etc.

Prof. Dr. Keon Myung Lee
Dr. Mi-Hye Kim
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • data analysis
  • DNN
  • machine learning
  • CNN
  • data computing
  • fog/edge computing
  • deep learning
  • IoT

Published Papers (13 papers)

Research

Jump to: Review

19 pages, 4171 KiB  
Article
Deep-Learning-Incorporated Augmented Reality Application for Engineering Lab Training
by John Estrada, Sidike Paheding, Xiaoli Yang and Quamar Niyaz
Appl. Sci. 2022, 12(10), 5159; https://doi.org/10.3390/app12105159 - 20 May 2022
Cited by 11 | Viewed by 6484
Abstract
Deep learning (DL) algorithms have achieved significantly high performance in object detection tasks. At the same time, augmented reality (AR) techniques are transforming the ways that we work and connect with people. With the increasing popularity of online and hybrid learning, we propose a new framework for improving students’ learning experiences with electrical engineering lab equipment by incorporating the abovementioned technologies. The DL powered automatic object detection component integrated into the AR application is designed to recognize equipment such as multimeter, oscilloscope, wave generator, and power supply. A deep neural network model, namely MobileNet-SSD v2, is implemented for equipment detection using TensorFlow’s object detection API. When a piece of equipment is detected, the corresponding AR-based tutorial will be displayed on the screen. The mean average precision (mAP) of the developed equipment detection model is 81.4%, while the average recall of the model is 85.3%. Furthermore, to demonstrate practical application of the proposed framework, we develop a multimeter tutorial where virtual models are superimposed on real multimeters. The tutorial includes images and web links as well to help users learn more effectively. The Unity3D game engine is used as the primary development tool for this tutorial to integrate DL and AR frameworks and create immersive scenarios. The proposed framework can be a useful foundation for AR and machine-learning-based frameworks for industrial and educational training. Full article
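
As a rough sketch of the detection step described in this abstract, the following Python snippet loads an object detector exported with TensorFlow's Object Detection API and maps high-confidence detections to equipment labels. The model path, label map, and confidence threshold are hypothetical placeholders, not the authors' artifacts.

```python
# Hypothetical inference sketch: load an exported SSD MobileNet v2 detector and
# detect lab equipment in a single image. Paths and the label map are placeholders.
import tensorflow as tf

detect_fn = tf.saved_model.load("exported_model/saved_model")  # assumed export path
LABELS = {1: "multimeter", 2: "oscilloscope", 3: "wave generator", 4: "power supply"}

image = tf.io.decode_image(tf.io.read_file("lab_bench.jpg"), channels=3)
input_tensor = tf.expand_dims(tf.cast(image, tf.uint8), 0)     # [1, H, W, 3] batch

detections = detect_fn(input_tensor)
scores = detections["detection_scores"][0].numpy()
classes = detections["detection_classes"][0].numpy().astype(int)

for cls, score in zip(classes, scores):
    if score > 0.5:                                # confidence threshold (assumption)
        print(f"Detected {LABELS.get(cls, cls)} with score {score:.2f}")
        # In the AR application, this is where the matching tutorial overlay
        # (e.g., the multimeter tutorial in Unity3D) would be triggered.
```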

24 pages, 10161 KiB  
Article
Model Establishment of Cross-Disease Course Prediction Using Transfer Learning
by Josh Jia-Ching Ying, Yen-Ting Chang, Hsin-Hua Chen and Wen-Cheng Chao
Appl. Sci. 2022, 12(10), 4907; https://doi.org/10.3390/app12104907 - 12 May 2022
Cited by 1 | Viewed by 1430
Abstract
In recent years, the development and application of artificial intelligence have both been topics of concern. In the medical field, an important direction of medical technology development is the extraction and use of applicable information from existing medical records to provide more accurate and helpful diagnosis suggestions. Therefore, this paper proposes using the development of diseases with easily discernible symptoms to predict the development of other medically related but distinct diseases that lack similar data. The aim of this study is to improve the ease of assessing the development of diseases in which symptoms are difficult to detect, and to improve the utilization of medical data. First, a time series model was used to capture the continuous manifestations of diseases with symptoms that could be easily found at different time intervals. Then, through transfer learning and attention mechanism, the general features captured were applied to the predictive model of the development of diseases with insufficient data and symptoms that are difficult to detect. Finally, we conducted a comprehensive experimental study based on a dataset collected from the National Health Insurance Research Database in Taiwan. The results demonstrate that the effectiveness of our transfer learning approach outperforms state-of-the-art deep learning prediction models for disease course prediction. Full article
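
A simplified sketch of the transfer idea described above: a sequence encoder with attention is pretrained on the data-rich source disease and then reused, with a new head, for the data-poor target disease. The Keras layers, shapes, and freezing strategy are assumptions rather than the authors' exact architecture.

```python
# Hedged sketch of cross-disease transfer: pretrain on the data-rich source disease,
# then reuse the sequence/attention encoder and fine-tune a new head on the target
# disease. Shapes and layer sizes are illustrative assumptions.
import tensorflow as tf

def build_encoder(timesteps=24, features=32):
    inputs = tf.keras.Input(shape=(timesteps, features))      # visit/time-interval sequence
    h = tf.keras.layers.LSTM(64, return_sequences=True)(inputs)
    h = tf.keras.layers.Attention()([h, h])                   # attention over time steps
    h = tf.keras.layers.GlobalAveragePooling1D()(h)
    return tf.keras.Model(inputs, h, name="course_encoder")

encoder = build_encoder()

# 1) Pretrain on the source disease (symptoms easy to observe, plenty of records).
source = tf.keras.Sequential([encoder, tf.keras.layers.Dense(1, activation="sigmoid")])
source.compile("adam", "binary_crossentropy")
# source.fit(X_source, y_source, ...)

# 2) Transfer: freeze the encoder, train a new prediction head on the target
#    disease with insufficient data, then optionally unfreeze to fine-tune.
encoder.trainable = False
target = tf.keras.Sequential([encoder, tf.keras.layers.Dense(1, activation="sigmoid")])
target.compile("adam", "binary_crossentropy")
# target.fit(X_target, y_target, ...)
```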

24 pages, 1513 KiB  
Article
Deep-Learning-Based Stream-Sensing Method for Detecting Asynchronous Multiple Signals
by Yeongjun Kim and Harim Lee
Appl. Sci. 2022, 12(9), 4534; https://doi.org/10.3390/app12094534 - 29 Apr 2022
Viewed by 1336
Abstract
In a disaster site, terrestrial communication infrastructures are often destroyed or malfunctioning, and hence it is very difficult to detect the existence of survivors at the site. At such sites, UAVs are rapidly emerging as an alternative to mobile base stations to establish temporary infrastructure. In this paper, a novel deep-learning-based multi-source detection scheme is proposed for the scenario in which a UAV wants to estimate the number of survivors sending rescue signals within its coverage in a disaster site. For practicality, survivors are assumed to use off-the-shelf smartphones to send rescue signals, and hence the transmitted signals are orthogonal frequency division multiplexing (OFDM)-modulated. Since the line of sight between the UAV and survivors cannot generally be secured, the sensing performance of existing radar techniques significantly deteriorates. Furthermore, we discover that the transmitted signals of survivors are unavoidably asynchronous with each other, and thus existing frequency-domain multi-source classification approaches cannot work. To overcome the limitations of these existing technologies, we propose a lightweight deep-learning-based multi-source detection scheme by carefully designing the neural network architecture, the input and output signals, and a training method. Extensive numerical simulations show that the proposed scheme outperforms existing methods for various signal-to-noise ratios (SNRs) under the scenario where synchronous and asynchronous transmission is mixed in a received signal. For almost all cases, the precision and recall of the proposed scheme are nearly one, even when users’ SNRs are randomly changing within a certain range. The precision and recall are improved up to 100% compared to existing methods, confirming that the proposal overcomes the limitation of the existing works due to the asynchronicity. Moreover, on an Intel(R) Core(TM) i7-6900K CPU, the processing time of our proposal for a case is 31.8 milliseconds. As a result, the proposed scheme provides robust and reliable detection performance with fast processing time. This proposal can also be applied to any field that needs to detect the number of wireless signals in a scenario where synchronization between wireless signals is not guaranteed. Full article
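
The following sketch illustrates, under stated assumptions, how such a lightweight source counter might be framed: a small 1D CNN maps a window of received I/Q samples to a predicted number of simultaneous transmitters. The window length, maximum source count, and layer widths are assumptions, not the paper's design.

```python
# Illustrative sketch only: a lightweight 1D CNN that maps a window of received
# I/Q samples to a predicted number of simultaneous (possibly asynchronous)
# transmitters, framed as classification over 0..MAX_SOURCES.
import tensorflow as tf

WINDOW = 1024        # complex samples per sensing window (assumed)
MAX_SOURCES = 4      # maximum number of rescue signals to count (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 2)),         # real/imag parts as 2 channels
    tf.keras.layers.Conv1D(32, 9, strides=2, activation="relu"),
    tf.keras.layers.Conv1D(64, 9, strides=2, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(MAX_SOURCES + 1, activation="softmax"),  # predicted count
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Training data would be simulated OFDM mixtures with random timing offsets and
# SNRs, so the network never relies on the sources being synchronized.
```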

25 pages, 2997 KiB  
Article
Driver Drowsiness Detection by Applying Deep Learning Techniques to Sequences of Images
by Elena Magán, M. Paz Sesmero, Juan Manuel Alonso-Weber and Araceli Sanchis
Appl. Sci. 2022, 12(3), 1145; https://doi.org/10.3390/app12031145 - 22 Jan 2022
Cited by 50 | Viewed by 14545
Abstract
This work presents the development of an ADAS (advanced driving assistance system) focused on driver drowsiness detection, whose objective is to alert drivers of their drowsy state to avoid road traffic accidents. In a driving environment, it is necessary that fatigue detection is performed in a non-intrusive way, and that the driver is not bothered with alarms when he or she is not drowsy. Our approach to this open problem uses sequences of images that are 60 s long and are recorded in such a way that the subject’s face is visible. To detect whether the driver shows symptoms of drowsiness or not, two alternative solutions are developed, focusing on the minimization of false positives. The first alternative uses a recurrent and convolutional neural network, while the second one uses deep learning techniques to extract numeric features from images, which are introduced into a fuzzy logic-based system afterwards. The accuracy obtained by both systems is similar: around 65% accuracy over training data, and 60% accuracy on test data. However, the fuzzy logic-based system stands out because it avoids raising false alarms and reaches a specificity (proportion of videos in which the driver is not drowsy that are correctly classified) of 93%. Although the obtained results do not achieve very satisfactory rates, the proposals presented in this work are promising and can be considered a solid baseline for future works. Full article
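
A minimal sketch of the first alternative described above (a recurrent plus convolutional network), assuming fixed-length 60-frame grayscale sequences; the frame rate, image size, and layer widths are illustrative assumptions.

```python
# Sketch of the recurrent-convolutional alternative: a CNN feature extractor is
# applied frame-by-frame, followed by an LSTM over the 60-second sequence.
import tensorflow as tf

FRAMES, H, W = 60, 64, 64      # e.g., one frame per second of a 60 s clip (assumed)

frame_cnn = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(H, W, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
])

model = tf.keras.Sequential([
    tf.keras.layers.TimeDistributed(frame_cnn, input_shape=(FRAMES, H, W, 1)),
    tf.keras.layers.LSTM(32),                          # temporal evolution of drowsiness cues
    tf.keras.layers.Dense(1, activation="sigmoid"),    # drowsy vs. alert
])
model.compile("adam", "binary_crossentropy", metrics=["accuracy"])
# A high decision threshold on the sigmoid output is one simple way to trade
# sensitivity for the low false-positive rate the application requires.
```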

14 pages, 6649 KiB  
Article
Factorial Analysis for Gas Leakage Risk Predictions from a Vehicle-Based Methane Survey
by Khongorzul Dashdondov and Mi-Hwa Song
Appl. Sci. 2022, 12(1), 115; https://doi.org/10.3390/app12010115 - 23 Dec 2021
Cited by 3 | Viewed by 2137
Abstract
Natural gas (NG), typically methane, is released into the air, causing significant air pollution and environmental and health problems. There is now a widespread need for machine-learning-based methods to predict gas losses. In this article, we propose predicting NG leakage levels through feature selection based on a factorial analysis (FA) of the USA’s urban natural gas open data. The paper is divided into three sections. First, we select essential features using FA. Then, the dataset is labeled by k-means clustering with OrdinalEncoder (OE)-based normalization. The final module uses six algorithms (extreme gradient boost (XGBoost), K-nearest neighbors (KNN), decision tree (DT), random forest (RF), Naive Bayes (NB), and multilayer perceptron (MLP)) to predict gas leakage levels. The proposed method is evaluated by the accuracy, F1-score, mean standard error (MSE), and area under the ROC curve (AUC). The test results indicate that the F-OE-based classification method improves prediction performance. Moreover, F-OE-based XGBoost (F-OE-XGBoost) showed the best performance, giving 95.14% accuracy, an F1-score of 95.75%, an MSE of 0.028, and an AUC of 96.29%. Following these, the second-best outcomes of an accuracy rate of 95.09%, F1-score of 95.60%, MSE of 0.029, and AUC of 96.11% were achieved by the F-OE-RF model. Full article
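
A hedged sketch of the three-stage pipeline described above, using scikit-learn and xgboost: factor analysis for feature reduction, k-means labelling of leakage levels after OrdinalEncoder-based normalization, and supervised prediction. The file name, number of factors and clusters, and hyperparameters are assumptions, not the study's settings.

```python
# Hedged sketch: FA-based feature reduction -> k-means leakage-level labels ->
# XGBoost prediction. The input file and all settings are hypothetical.
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans
from sklearn.preprocessing import OrdinalEncoder
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score
from xgboost import XGBClassifier

df = pd.read_csv("methane_survey.csv")       # hypothetical open-data extract

# OrdinalEncoder-based normalization of all columns (crude stand-in for the
# paper's preprocessing), then factor analysis for feature reduction.
X = pd.DataFrame(OrdinalEncoder().fit_transform(df.astype(str)), columns=df.columns)
factors = FactorAnalysis(n_components=5, random_state=0).fit_transform(X)

# k-means assigns the leakage-level labels that the classifier learns to predict.
levels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(factors)

X_tr, X_te, y_tr, y_te = train_test_split(X, levels, test_size=0.2, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=6, eval_metric="mlogloss")
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred),
      "macro F1:", f1_score(y_te, pred, average="macro"))
```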

17 pages, 640 KiB  
Article
Weibo Text Sentiment Analysis Based on BERT and Deep Learning
by Hongchan Li, Yu Ma, Zishuai Ma and Haodong Zhu
Appl. Sci. 2021, 11(22), 10774; https://doi.org/10.3390/app112210774 - 15 Nov 2021
Cited by 22 | Viewed by 5622
Abstract
With the rapid increase of public opinion data, Weibo text sentiment analysis plays an increasingly significant role in monitoring online public opinion. Due to the sparseness and high dimensionality of text data and the complex semantics of natural language, sentiment analysis tasks face tremendous challenges. To address these problems, this paper proposes a new model based on BERT and deep learning for Weibo text sentiment analysis. Specifically, BERT is first used to represent the text with dynamic word vectors, and a processed sentiment dictionary is used to enhance the sentiment features of the vectors; a BiLSTM is then adopted to extract the contextual features of the text, and the resulting vector representation is weighted by an attention mechanism. After weighting, a CNN extracts the important local sentiment features in the text, and finally the processed sentiment feature representation is classified. A comparative experiment conducted on a Weibo text dataset collected during the COVID-19 epidemic showed that the performance of the proposed model was significantly improved compared with other similar models. Full article
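
A rough sketch of the hybrid architecture described above: BERT token embeddings feed a BiLSTM, an attention layer weights the contextual features, and a 1D convolution extracts local sentiment features before classification. The pretrained model name, sequence length, and layer sizes are assumptions, and the sentiment-dictionary enhancement step is omitted.

```python
# Rough sketch of BERT -> BiLSTM -> attention -> CNN -> classifier.
# Model name and all sizes are assumptions, not the paper's exact configuration.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModel

MAX_LEN = 128
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")   # Weibo posts are Chinese
bert = TFAutoModel.from_pretrained("bert-base-chinese")
# tokenizer(texts, padding="max_length", truncation=True, max_length=MAX_LEN)
# would produce the input_ids / attention_mask fed below.

ids = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name="input_ids")
mask = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name="attention_mask")
emb = bert(input_ids=ids, attention_mask=mask).last_hidden_state   # dynamic word vectors

h = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True))(emb)
h = tf.keras.layers.Attention()([h, h])                  # weight contextual features
h = tf.keras.layers.Conv1D(64, 3, activation="relu")(h)  # local sentiment features
h = tf.keras.layers.GlobalMaxPooling1D()(h)
out = tf.keras.layers.Dense(2, activation="softmax")(h)  # e.g., positive vs. negative

model = tf.keras.Model([ids, mask], out)
model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
```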

13 pages, 1213 KiB  
Article
Classification between Elderly Voices and Young Voices Using an Efficient Combination of Deep Learning Classifiers and Various Parameters
by Ji-Yeoun Lee
Appl. Sci. 2021, 11(21), 9836; https://doi.org/10.3390/app11219836 - 21 Oct 2021
Viewed by 1712
Abstract
The objective of this research was to develop deep learning classifiers and various parameters that provide an accurate and objective system for classifying elderly and young voice signals. This work focused on deep learning methods, such as feedforward neural network (FNN) and convolutional neural network (CNN), for the detection of elderly voice signals using mel-frequency cepstral coefficients (MFCCs) and linear prediction cepstrum coefficients (LPCCs), skewness, as well as kurtosis parameters. In total, 126 subjects (63 elderly and 63 young) were obtained from the Saarbruecken voice database. The highest performance of 93.75% appeared when the skewness was added to the MFCC and MFCC delta parameters, although the fusion of the skewness and kurtosis parameters had a positive effect on the overall accuracy of the classification. The results of this study also revealed that the performance of FNN was higher than that of CNN. Most parameters estimated from male data samples demonstrated good performance in terms of gender. Rather than using mixed female and male data, this work recommends the development of separate systems that represent the best performance through each optimized parameter using data from independent male and female samples. Full article
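
A brief sketch of the feature extraction and FNN classifier described above: MFCCs, their deltas, and skewness summarize each recording, and a small feedforward network performs the binary classification. File handling, frame settings, and layer sizes are assumptions.

```python
# Sketch of MFCC/delta/skewness features feeding a small FNN (elderly vs. young).
# Paths, frame settings and layer sizes are illustrative assumptions.
import numpy as np
import librosa
import tensorflow as tf
from scipy.stats import skew

def voice_features(path, sr=16000, n_mfcc=13):
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    delta = librosa.feature.delta(mfcc)
    # Summarize each coefficient track by its mean, then append signal skewness.
    return np.concatenate([mfcc.mean(axis=1), delta.mean(axis=1), [skew(y)]])

# X = np.stack([voice_features(p) for p in wav_paths]); y = labels (0=young, 1=elderly)
fnn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2 * 13 + 1,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
fnn.compile("adam", "binary_crossentropy", metrics=["accuracy"])
```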

16 pages, 39044 KiB  
Article
Deep Representation of a Normal Map for Screen-Space Fluid Rendering
by Myungjin Choi, Jee-Hyeok Park, Qimeng Zhang, Byeung-Sun Hong and Chang-Hun Kim
Appl. Sci. 2021, 11(19), 9065; https://doi.org/10.3390/app11199065 - 29 Sep 2021
Cited by 1 | Viewed by 2248
Abstract
We propose a novel method for addressing the problem of efficiently generating a highly refined normal map for screen-space fluid rendering. Because the process of filtering the normal map is crucially important to ensure the quality of the final screen-space fluid rendering, we employ a conditional generative adversarial network (cGAN) as a filter that learns a deep normal map representation, thereby refining the low-quality normal map. In particular, we have designed a novel loss function dedicated to refining the normal map information, and we use a specific set of auxiliary features to train the cGAN generator to learn features that are more robust with respect to edge details. Additionally, we constructed a dataset of six different typical scenes to enable effective demonstrations of multitype fluid simulation. Experiments indicated that our generator was able to infer clearer and more detailed features for this dataset than a basic screen-space fluid rendering method. Moreover, in some cases, the results generated by our method were even smoother than those generated by the conventional surface reconstruction method. Our method improves the fluid rendering results via the high-quality normal map while preserving the advantages of the screen-space fluid rendering methods and the traditional surface reconstruction methods, including that of the computation time being independent of the number of simulation particles and the spatial resolution being related only to image resolution. Full article
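
A hedged sketch of the cGAN filtering idea: a generator maps a low-quality screen-space normal map (plus auxiliary feature maps) to a refined one, trained with an adversarial term plus an L1 reconstruction term. The network body, channel counts, and loss weighting are assumptions, not the paper's exact design.

```python
# Hedged sketch of a cGAN-style normal-map filter: generator stub plus a
# combined adversarial + L1 generator loss. All details are assumptions.
import tensorflow as tf

def make_generator(channels_in=6):   # e.g., raw normals + auxiliary feature maps
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(None, None, channels_in)),
        tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(3, 3, padding="same", activation="tanh"),  # refined normals
    ])

def generator_loss(disc_fake, fake_normals, target_normals, l1_weight=100.0):
    bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
    adv = bce(tf.ones_like(disc_fake), disc_fake)                # fool the discriminator
    l1 = tf.reduce_mean(tf.abs(target_normals - fake_normals))   # stay close to ground truth
    return adv + l1_weight * l1
```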

18 pages, 937 KiB  
Article
Modified Neural Architecture Search (NAS) Using the Chromosome Non-Disjunction
by Kang-Moon Park, Donghoon Shin and Sung-Do Chi
Appl. Sci. 2021, 11(18), 8628; https://doi.org/10.3390/app11188628 - 16 Sep 2021
Cited by 2 | Viewed by 1776
Abstract
This paper proposes a deep neural network structuring methodology through a genetic algorithm (GA) using chromosome non-disjunction. The proposed model includes methods for generating and tuning the neural network architecture without the aid of human experts. Since the original neural architecture search (henceforth, NAS) was announced, NAS techniques, such as NASBot, NASGBO and CoDeepNEAT, have been widely adopted in order to improve cost- and/or time-effectiveness for human experts. In these models, evolutionary algorithms (EAs) are employed to effectively enhance the accuracy of the neural network architecture. In particular, CoDeepNEAT uses a constructive GA starting from minimal architecture. This will only work quickly if the solution architecture is small. On the other hand, the proposed methodology utilizes chromosome non-disjunction as a new genetic operation. Our approach differs from previous methodologies in that it includes a destructive approach as well as a constructive approach, and is similar to pruning methodologies, which realizes tuning of the previous neural network architecture. A case study applied to the sentence word ordering problem and AlexNet for CIFAR-10 illustrates the applicability of the proposed methodology. We show from the simulation studies that the accuracy of the model was improved by 0.7% compared to the conventional model without human expert. Full article
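
A toy illustration of the constructive/destructive intuition behind the proposed operator: if a chromosome is a list of layer widths, a non-disjunction-style mutation can either duplicate a layer (growing the architecture) or drop one (pruning it). This is a didactic sketch, not the paper's genetic operator.

```python
# Didactic sketch: a chromosome as a list of layer widths, with a mutation that
# can grow (duplicate a layer) or shrink (drop a layer) the architecture.
import random

def mutate_architecture(layers, grow_prob=0.5):
    layers = list(layers)
    i = random.randrange(len(layers))
    if random.random() < grow_prob or len(layers) == 1:
        layers.insert(i, layers[i])     # constructive: the layer is duplicated
    else:
        layers.pop(i)                   # destructive: the layer is removed (pruning-like)
    return layers

random.seed(0)
arch = [64, 128, 64]
for _ in range(3):
    arch = mutate_architecture(arch)
    print(arch)
```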

20 pages, 3341 KiB  
Article
Machine Learning in Discriminating Active Volcanoes of the Hellenic Volcanic Arc
by Athanasios G. Ouzounis and George A. Papakostas
Appl. Sci. 2021, 11(18), 8318; https://doi.org/10.3390/app11188318 - 8 Sep 2021
Cited by 2 | Viewed by 1751
Abstract
Identifying the provenance of volcanic rocks can be essential for improving geological maps in the field of geology and providing a tool for the geochemical fingerprinting of ancient artifacts like millstones and anchors in the field of geoarchaeology. This study examines a new approach to this problem by using machine learning algorithms (MLAs). In order to discriminate the four active volcanic regions of the Hellenic Volcanic Arc (HVA) in Southern Greece, MLAs were trained with geochemical data of major elements, acquired from the GEOROC database, of the volcanic rocks of the HVA. Ten MLAs were trained with six variations of the same dataset of volcanic rock samples originating from the HVA. The experiments revealed that the Extreme Gradient Boost model achieved the best performance, reaching 93.07% accuracy. The model developed in the framework of this research was used to implement a cloud-based application, which is publicly accessible online. This application can be used to predict the provenance of a volcanic rock sample, within the area of the HVA, based on its geochemical composition, easily obtained by using the X-ray fluorescence (XRF) technique. Full article
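
A minimal sketch of such a provenance classifier: major-element oxide concentrations (as obtainable by XRF) are used to predict the HVA region with a gradient-boosted model. The oxide columns, file name, and model settings are assumptions; the GEOROC-derived training table is hypothetical.

```python
# Minimal sketch: major-element oxides -> predicted HVA volcanic region.
# The training table and all settings are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

OXIDES = ["SiO2", "TiO2", "Al2O3", "FeOT", "MgO", "CaO", "Na2O", "K2O"]

data = pd.read_csv("hva_georoc_major_elements.csv")    # hypothetical GEOROC export
X = data[OXIDES]
y = data["region"].astype("category").cat.codes        # 4 HVA regions as integer labels

model = XGBClassifier(n_estimators=300, max_depth=4, eval_metric="mlogloss")
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```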

14 pages, 11174 KiB  
Article
Experimental Evaluation of Deep Learning Methods for an Intelligent Pathological Voice Detection System Using the Saarbruecken Voice Database
by Ji-Yeoun Lee
Appl. Sci. 2021, 11(15), 7149; https://doi.org/10.3390/app11157149 - 2 Aug 2021
Cited by 21 | Viewed by 3275
Abstract
This work is focused on deep learning methods, such as feedforward neural network (FNN) and convolutional neural network (CNN), for pathological voice detection using mel-frequency cepstral coefficients (MFCCs), linear prediction cepstrum coefficients (LPCCs), and higher-order statistics (HOSs) parameters. In total, 518 voice data samples were obtained from the publicly available Saarbruecken voice database (SVD), comprising recordings of 259 healthy and 259 pathological women and men, respectively, and using /a/, /i/, and /u/ vowels at normal pitch. Significant differences were observed between the normal and the pathological voice signals for normalized skewness (p = 0.000) and kurtosis (p = 0.000), except for normalized kurtosis (p = 0.051) that was estimated in the /u/ samples in women. These parameters are useful and meaningful for classifying pathological voice signals. The highest accuracy, 82.69%, was achieved by the CNN classifier with the LPCCs parameter in the /u/ vowel in men. The second-best performance, 80.77%, was obtained with a combination of the FNN classifier, MFCCs, and HOSs for the /i/ vowel samples in women. There was merit in combining the acoustic measures with HOS parameters for better characterization in terms of accuracy. The combination of various parameters and deep learning methods was also useful for distinguishing normal from pathological voices. Full article

19 pages, 5348 KiB  
Article
Prediction Method of Underwater Acoustic Transmission Loss Based on Deep Belief Net Neural Network
by Yihao Zhao, Maofa Wang, Huanhuan Xue, Youping Gong and Baochun Qiu
Appl. Sci. 2021, 11(11), 4896; https://doi.org/10.3390/app11114896 - 26 May 2021
Cited by 3 | Viewed by 2454
Abstract
The prediction of underwater acoustic transmission loss in the sea plays a key role in generating situational awareness in complex naval battles and assisting underwater operations. However, the traditional classical underwater acoustic transmission loss models do not consider regional hydrological elements, and their performance under complex environmental conditions across a wide range of sea areas is limited. In order to solve this problem, we propose a deep-learning-based underwater acoustic transmission loss prediction method. First, we studied the application domains of typical underwater acoustic transmission loss models (ray model, normal model, fast field program model, parabolic equation model), analyzed the constraint rules of their characteristic parameters, and constructed a dataset according to the rules. Then, according to the characteristics of the dataset, we built a deep belief net (DBN) model and used it to learn the dataset. Through the DBN method, the adaptation and calculation of the underwater acoustic transmission loss model under different regional hydrological elements were carried out in a simulation environment. Finally, the new method was verified with measured transmission loss data from acoustic sea trials in a certain sea area. The results show that the RMSE between the underwater acoustic transmission loss calculated by the new method and the measured data was less than 6.5 dB; the accuracy was higher than that of the traditional method, the prediction speed was faster, and the method had a wide range of adaptability in complex seas. Full article
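
A rough stand-in for the DBN approach, under stated assumptions: greedy layer-wise pretraining with stacked RBMs, followed by a regressor that maps environmental inputs to transmission loss in dB. Unlike a full DBN, the RBM weights are not fine-tuned end-to-end here, and the feature layout and placeholder data are purely illustrative.

```python
# Rough DBN-like stand-in: stacked RBMs (greedy pretraining) feeding a regressor
# that predicts transmission loss (dB). Features and data are placeholders.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.neural_network import BernoulliRBM, MLPRegressor

# Columns might include frequency, source/receiver depth, range, sound-speed
# profile descriptors, sea-floor parameters, etc. (hypothetical layout).
rng = np.random.default_rng(0)
X = rng.random((500, 8))                                         # placeholder environmental features
y = 20 * np.log10(1 + 1000 * X[:, 3]) + rng.normal(0, 1, 500)    # placeholder loss in dB

dbn_like = Pipeline([
    ("scale", MinMaxScaler()),
    ("rbm1", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)),
    ("reg", MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)),
])
dbn_like.fit(X, y)
print("training RMSE (dB):", np.sqrt(np.mean((dbn_like.predict(X) - y) ** 2)))
```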

Review

Jump to: Research

27 pages, 853 KiB  
Review
A Review on Medical Textual Question Answering Systems Based on Deep Learning Approaches
by Emmanuel Mutabazi, Jianjun Ni, Guangyi Tang and Weidong Cao
Appl. Sci. 2021, 11(12), 5456; https://doi.org/10.3390/app11125456 - 11 Jun 2021
Cited by 43 | Viewed by 6808
Abstract
The advent of Question Answering Systems (QASs) has been envisaged as a promising solution and an efficient approach for retrieving significant information over the Internet. A considerable amount of research work has focused on open domain QASs based on deep learning techniques due to the availability of data sources. However, the medical domain receives less attention due to the shortage of medical datasets. Although Electronic Health Records (EHRs) are empowering the field of Medical Question-Answering (MQA) by providing medical information to answer user questions, the gap is still large in the medical domain, especially for textual-based sources. Therefore, in this study, the medical textual question-answering systems based on deep learning approaches were reviewed, and recent architectures of MQA systems were thoroughly explored. Furthermore, an in-depth analysis of deep learning approaches used in different MQA system tasks was provided. Finally, the different critical challenges posed by MQA systems were highlighted, and recommendations to effectively address them in forthcoming MQA systems were given out. Full article
