Article

Deep Learning in the Biomedical Applications: Recent and Future Status

by
Ryad Zemouri
1,*,
Noureddine Zerhouni
2 and
Daniel Racoceanu
3,4
1
CEDRIC Laboratory of the Conservatoire National des Arts et Métiers (CNAM), HESAM Université, 292, rue Saint-Martin, 75141 Paris CEDEX 03, France
2
FEMTO-ST, University of Bourgogne-Franche-Comté, 15B avenue des Montboucons, 25030 Besançon CEDEX, France
3
Sorbonne University, 4 Place Jussieu, 75005 Paris, France
4
Scientific Director of Orqual Group, Kitview, 65 bd. Niels Bohr, 69100 Villeurbanne, France
*
Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(8), 1526; https://doi.org/10.3390/app9081526
Submission received: 12 March 2019 / Revised: 8 April 2019 / Accepted: 9 April 2019 / Published: 12 April 2019
(This article belongs to the Special Issue Machine Learning for Biomedical Data Analysis)

Abstract

Deep neural networks currently represent the most effective machine learning technology in the biomedical domain. In this domain, the main areas of interest are the Omics (the study of the genome—genomics—and of proteins—transcriptomics, proteomics, and metabolomics), bioimaging (the study of biological cells and tissue), medical imaging (the study of human organs by creating visual representations), the BBMI (the study of the brain and body machine interface) and public and medical health management (PmHM). This paper reviews the major deep learning concepts pertinent to such biomedical applications. Concise overviews are provided for the Omics and the BBMI. We end our analysis with a critical discussion, interpretation and relevant open challenges.

1. Introduction

The biomedical domain nourishes a very rich research field, with many applications, medical specialties and associated pathologies. Some of these pathologies are well known and mastered by physicians, but others much less so. With technological and scientific advances, the biomedical data used by medical practitioners have become very heterogeneous, spanning a wide range of clinical analyses and parameters, biological parameters and medical imaging modalities. Owing to the multitude of these data, as well as the rarity of certain atypical diseases, biomedical data are usually imbalanced [1,2] and nonstationary [3], and are characterized by a high complexity [1]. In this context, machine learning represents a tremendous opportunity: (1) to support physicians, biologists and medical authorities in exploiting and significantly improving big medical data analysis; (2) to reduce the risk of medical errors; and (3) to generate a better harmonization of diagnosis and prognosis protocols.
Artificial Neural Networks (ANNs) and Deep Learning (DL) are currently the leading machine-learning tools in several domains, such as image analysis and fault diagnosis. The applications of DL in the biomedical fields cover all medical levels, from genomic applications, such as gene expression, to public medical health management, such as predicting demographic information or infectious disease epidemics. Figure 1 outlines the rapid surge of interest, in terms of the number of papers published in recent years, in biomedical applications of deep learning. The number of research publications has recorded exponential growth during the last three years (compared to other research fields, e.g., fault diagnosis). The two main sub-fields, namely medical/bioimaging and genomics, constitute the major part of these publications (around 70% per year).
The use of machine learning in biomedical applications can be structured into three main orientations: (1) as computer-aided diagnosis, to help physicians reach an efficient and early diagnosis, with better harmonization and fewer contradictory diagnoses; (2) to enhance the medical care of patients with better personalized therapies; and (3) to improve human wellbeing, for example by analyzing the spread of disease and social behaviors in relation to environmental factors, or by implementing a brain–machine interface for controlling a wheelchair [4]. To reach these three objectives, we segment the biomedical field into several sub-fields, as illustrated in Figure 2:
  • Omics: genomics and proteomics, the study of DNA/RNA sequencing and of protein interactions and structure prediction.
  • Bioimaging: the study of biological cells and tissue by analyzing histopathology or immunohistochemistry images.
  • Medical imaging: the study of the human organs by analyzing magnetic resonance imaging (MRI), X-ray, etc.
  • Brain and body machine interface: the study of brain and body decoding machine interfaces by analyzing different biosignals, such as the electroencephalogram (EEG) and the electromyogram (EMG).
  • Public and medical health management (PmHM): the study of big medical data to develop and enhance public health-care decisions for the wellbeing of humanity.
Several survey papers on deep learning biomedical applications have recently been published [5,6,7,8,9,10], covering all or part of the biomedical sub-fields, but under different appellations. Indeed, by analyzing these survey papers, we found a lack of harmonization in the definitions of certain sub-fields. Figure 3 outlines the contributions and the different points of view given in each survey paper. The biomedical research sub-field appellations used in this survey paper are given at the bottom of the figure. For example, the term “bioinformatics” used in [7] refers to the Omics, and the term “biomedical signal processing” used in [10] refers to brain and body machine interfaces (BBMIs).
This survey paper is structured as follows. In Section 2, we introduce the main deep neural architectures that have been used for biomedical applications. Section 3 describes each field of the biomedical applications, namely the Omics, bio- and medical imaging, the brain and body machine interface (BBMI) and public and medical health management (PmHM). In this section, we also give the main survey papers on the use of deep neural networks. Section 4 and Section 5 give detailed descriptions of, respectively, the Omics and the BBMI, with the main recent publications using deep learning. A critical discussion and open challenges are given in Section 6.

2. Neural Network and Deep Learning

2.1. From Shallow to Deep Neural Networks

Artificial Neural Networks (ANNs) were inspired in the 1960s by the biological neural networks of the brain [11,12]. Feedforward ANNs are composed of layers of interconnected units (neurons). Mathematically, an ANN implements a non-linear transformation y = F(x) of the input x (Figure 4A). Compared to shallow architectures, ANNs with more hidden layers, called Deep Neural Networks (DNNs) [13], offer a much higher capacity to fit and extract features from highly complex input data (Figure 4B). The starting point of deep learning was in 2006, with the greedy layer-wise unsupervised learning algorithm used for Deep Belief Networks (DBNs) [14,15,16].
The interconnection between two units or neurons has an associated connection weight w_ji, which is fitted during the learning phase. The input data are propagated from the input layer, neuron after neuron, to the output layer. This propagation transforms the data from one space to another, nonlinearly, through the neurons of the successive layers. Each neuron computes a weighted sum of its inputs and applies a nonlinear activation function to calculate its output f(x) (Figure 4C). The most used activation functions are: the sigmoid function and its variant the hyperbolic tangent for shallow architectures, the rectified linear unit (ReLU) and its variant the softplus for deep architectures, and the softmax, commonly used for the final layer in classification tasks.
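To make the neuron computation concrete, the following minimal sketch (in Python/NumPy, with toy weights and inputs of our own choosing) implements the weighted sum followed by the activation functions mentioned above.

```python
import numpy as np

def neuron_output(x, w, b, activation):
    """Weighted sum of the inputs followed by a nonlinear activation."""
    z = np.dot(w, x) + b          # weighted sum
    return activation(z)

# Common activation functions mentioned above.
sigmoid  = lambda z: 1.0 / (1.0 + np.exp(-z))
tanh     = np.tanh
relu     = lambda z: np.maximum(0.0, z)
softplus = lambda z: np.log1p(np.exp(z))

def softmax(z):
    """Typically used in the final layer of a classifier."""
    e = np.exp(z - np.max(z))     # shift for numerical stability
    return e / e.sum()

x = np.array([0.5, -1.2, 3.0])   # toy input vector
w = np.array([0.1, 0.4, -0.2])   # connection weights w_ji
b = 0.05                         # bias
print(neuron_output(x, w, b, relu))
```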
The two main applications of ANNs are classification and regression. The objective of classification is to organize the input data space into several classes by supervised or unsupervised learning techniques. In regression applications, or function approximation, the objective is to predict an unknown output parameter, usually via supervised learning.
In supervised learning, the predicted label is compared with the true label for the current set of model weights θ_w to compute the output error, also called a loss function L(θ_w) (Figure 5A). This loss function is high-dimensional and non-convex, with many local optima. The learning phase consists of tuning the connection weights at each learning step to minimize L(θ_w) by propagating the gradient of the loss function backwards through the network. This gradient backpropagation renewed interest in ANNs in the mid-1980s, when the backpropagation (BP) algorithm was used for classification [17]. During the learning procedure, two sets of data are usually used: a training set and a test set (sometimes a third set is used for validation). The training set is used for learning, while the test set is used to evaluate the NN’s performance. An efficient learning algorithm converges towards a global optimum while avoiding the local optima of the loss function, which looks like a landscape with many hills and valleys [18]. A learning rate η is used to jump over valleys at the beginning of training and to fine-tune the weights in the later stages of the learning process. If the learning rate is too low (small jumps), convergence may take forever, with a high risk of getting stuck in a local optimum. Conversely, too high a value (big jumps) can prevent the learning algorithm from converging (Figure 5B). Varying and adapting the learning rate during the training process produces better weight updates [19,20,21,22,23,24,25,26]. Another method, grafting the stimulus sampling model onto the standard BP technique, was developed by Gorunescu and Belciug [27].
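The following sketch illustrates the weight-update rule on a toy least-squares problem; the data, the learning rate value and the mean-squared-error loss are illustrative choices, not taken from any reviewed paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))              # toy training set
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

w = np.zeros(3)                            # initial weights theta_w
eta = 0.1                                  # learning rate

for step in range(200):
    y_hat = X @ w                          # forward pass (prediction)
    grad = 2 * X.T @ (y_hat - y) / len(y)  # gradient of the MSE loss L(theta_w)
    w -= eta * grad                        # step against the gradient
    # A schedule such as eta *= 0.99 would shrink the jumps over time.

print(w)   # close to true_w if eta is well chosen
```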
When deep architectures are used, the magnitude of the backpropagated error derivative decreases rapidly across the layers, resulting in only slight updates of the weights in the first layers (Figure 5C). This drawback is partially solved by using the ReLU or softplus activation function, which allows faster learning and superior performance compared to the conventional activation functions (e.g., sigmoid or hyperbolic tangent) [28]. Another solution is to treat the learning rate as a hyperparameter, with different learning rates used for different layers, but few works in the literature use this concept (see [29] for a review). The most popular method used to create deep architectures and solve the problem of the random initialization of the weight parameters is an unsupervised pre-training phase applied before the supervised fine-tuning phase: Auto-Encoders (AEs) and Restricted Boltzmann Machines (RBMs) are stacked layer-wise as the basic building blocks [13,30].
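The vanishing of the backpropagated gradient can be observed numerically. The sketch below (with arbitrary random layers of our own choosing) propagates a unit gradient backwards through a stack of sigmoid layers and through the same stack with ReLU activations, and compares the resulting gradient norms at the first layer.

```python
import numpy as np

rng = np.random.default_rng(1)
depth, width = 20, 64
x = rng.normal(size=width)

def first_layer_grad_norm(act, act_deriv):
    """Propagate a unit gradient backwards through `depth` random layers
    and return its norm at the first layer."""
    h, caches = x, []
    for _ in range(depth):                 # forward pass, keeping pre-activations
        W = rng.normal(scale=1.0 / np.sqrt(width), size=(width, width))
        z = W @ h
        caches.append((W, z))
        h = act(z)
    g = np.ones(width)                     # gradient at the output
    for W, z in reversed(caches):          # backward pass
        g = W.T @ (g * act_deriv(z))
    return np.linalg.norm(g)

sigmoid = lambda z: 1 / (1 + np.exp(-z))
print(first_layer_grad_norm(sigmoid, lambda z: sigmoid(z) * (1 - sigmoid(z))))
print(first_layer_grad_norm(lambda z: np.maximum(z, 0),
                            lambda z: (z > 0).astype(float)))
```

The sigmoid derivative is at most 0.25, so the gradient norm collapses over twenty layers, while the ReLU stack preserves a much larger gradient.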
Finding the best set of neural parameters that minimizes the loss function is still challenging, especially for DNN learning, and remains an active research area. Another difficulty encountered during the learning process is the choice of the neural network architecture for a given problem, in terms of the number of hidden layers and units per layer. What criteria define the number of hidden layers and neurons per layer? The larger the learning base, the greater the need for deep architectures. The user often proceeds by trying several neural network topologies to find the best structure, trying to avoid oversized and undersized structures (Figure 6). This is a real drawback, and computationally expensive, especially when deep architectures are used [13,29,31,32]; there is no guarantee that the selected number of hidden layers/units is optimal. To prevent the network from overtraining, caused by an oversized design, regularization techniques are used, such as dropout [33,34], Maxout [35] or weight decay, a penalty added to the error function [36].
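As an illustration of these regularization techniques, the following sketch (using PyTorch; the layer sizes and coefficient values are arbitrary) adds dropout between the hidden layers and applies weight decay through the optimizer.

```python
import torch
import torch.nn as nn

# Hypothetical classifier with dropout between the hidden layers.
model = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),
    nn.Dropout(p=0.5),          # randomly zeroes units during training
    nn.Linear(64, 32), nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(32, 10),
)

# Weight decay adds the L2 penalty directly in the optimizer.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)

model.train()   # dropout active during training
model.eval()    # dropout disabled at inference time
```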
Evolutionary learning procedures [37] also give interesting solutions, where the NN evolves gradually during the training procedure into an optimal structure that satisfies some evolutionary criteria. These adaptive neural networks are divided into three categories [38]: constructive or growing algorithms, pruning algorithms and hybrid methods.

2.2. Most Popular Deep Neural Networks Architectures Used in Biomedical Applications

In this review paper, we consider only feedforward NN architectures, namely ANNs without recurrences. Recurrent Neural Networks (RNNs) capture the dynamic aspect of the data through self-connections or connections between units of the same layer. Few biomedical applications use RNNs, because of the complexity of their training process; the long short-term memory (LSTM) is the most used RNN architecture in biomedical applications [7].
We also consider two kinds of data used in biomedical applications: data without and data with local correlations. In the first case, the input data are represented by a one-dimensional (1D) vector, while, in the second case, the data are represented by a multidimensional matrix. Here, there is a high local correlation in the input data (e.g., RGB images), and it is very important to take this local correlation into account during the learning process. In this section, we give the most used deep architectures for each type of input data: DNNs used for non-correlated data and DNNs used for highly locally correlated data.

2.2.1. DNN for Non-Locally Correlated Input Data

For this type of data, three architectures are used: deep multilayer perceptron (DMLP), deep auto-encoders (DAEs) and deep belief network (DBN).
Deep multilayer perceptron
The well-known multilayer perceptron, developed in 1986 by Rumelhart and trained by the backpropagation algorithm, is the ancestor of deep learning [13,17] (Figure 7A–C). At that time, the number of hidden layers was at most two or three, with few units per layer. Nowadays, thanks to the development of several heuristics for training large architectures and the use of GPU hardware [39,40,41], NNs can reach several hidden layers with more than 650 thousand neurons and 60 million trained parameters (e.g., AlexNet [42]).
Deep auto-encoders
An auto-encoder is a special case of a one-hidden-layer MLP (Figure 7D). The aim of an AE is to recreate the input vector, x̂ = F(x), where x and x̂ are, respectively, the input and the output vector. In an AE, an equal number of units is used in the input and output layers, with fewer units in the hidden layer. Deep AEs are obtained by stacking several AEs (Figure 7E). In deep learning, DAEs are used either for feature extraction/reduction, or to pre-train the parameters of a complex network.
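A minimal AE sketch in PyTorch is given below; the input and code dimensions are illustrative, and the mean-squared reconstruction error is one common choice of loss. A deep AE would be obtained by stacking several such encoder/decoder pairs.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Minimal AE: equal input/output sizes, narrower hidden (code) layer."""
    def __init__(self, n_in=784, n_code=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_code), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(n_code, n_in), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))   # x_hat = F(x)

ae = AutoEncoder()
x = torch.rand(16, 784)                  # toy batch of inputs
loss = nn.functional.mse_loss(ae(x), x)  # reconstruction error
loss.backward()
```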
Restricted Boltzmann machine and deep belief networks
A restricted Boltzmann machine (RBM) is a generative stochastic network consisting of one layer of visible units and one layer of hidden units, with no connections within each layer [30,36,43] (Figure 7F). In other words, the visible and hidden layers correspond to the observation and the feature extractor, respectively. The resulting model is less general than a Boltzmann machine, but is still useful; for example, it can learn to extract interesting features from images. An RBM is an energy-based model, where the energy of a joint configuration of the visible and hidden units is defined as a Hopfield energy function [44]. This energy function assigns a probability to each pair of visible and hidden vectors in the modeled network [45].
Traditionally, an RBM is used to model the distribution of the input data or the joint distribution between the input data and the target classes [46]. In deep learning, similar to AEs, RBMs can also pre-train the parameters of a complex network.
A deep belief network can be viewed as a stack of RBMs, where the hidden states of each RBM are used as the data for training the next RBM [13,14,47,48] (Figure 7G). Therefore, each RBM perceives pattern representations from the level below and learns to encode them in an unsupervised fashion.
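To make the RBM training idea concrete, the following sketch implements one step of the common contrastive-divergence (CD-1) approximation for a binary RBM; the sizes are arbitrary and, for brevity, the bias terms are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid, lr = 6, 3, 0.05
W = 0.01 * rng.normal(size=(n_vis, n_hid))   # visible-hidden weights

sigmoid = lambda z: 1 / (1 + np.exp(-z))

def cd1_update(v0):
    """One contrastive-divergence (CD-1) weight update for a binary RBM
    (biases omitted to keep the sketch short)."""
    ph0 = sigmoid(v0 @ W)                    # hidden probabilities
    h0 = (rng.random(n_hid) < ph0) * 1.0     # sampled hidden state
    pv1 = sigmoid(h0 @ W.T)                  # reconstruction of the visibles
    ph1 = sigmoid(pv1 @ W)                   # hidden probabilities again
    # Approximate gradient of the log-likelihood:
    return lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))

v = np.array([1, 0, 1, 1, 0, 0], dtype=float)   # one toy visible vector
W += cd1_update(v)
```

In a DBN, once one RBM is trained this way, its hidden activations become the visible data for the next RBM in the stack.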

2.2.2. DNN for High Locally Correlated Data

When there is a high local correlation within the data, the architecture of choice is the convolutional neural network.
Convolutional neural networks
Convolutional neural networks (CNNs) were inspired by the neurobiological model of the visual cortex, where cells are sensitive to small regions of the visual field [11,49,50,51]. CNNs are used to model input data in the form of multidimensional arrays, such as two-dimensional images with three colour channels. CNN architectures contain two types of layers: the convolutional layer and the pooling layer (Figure 7H).
A convolutional layer consists of multiple maps of neurons, so-called feature maps or filters. Unlike in a fully-connected network, each neuron within a feature map is only connected to a local patch of neurons in the previous layer, the so-called receptive field. The input data are then convolved by the different convolutional filters by shifting the receptive fields step by step. The convolutional filters share the same parameters over every small portion of the image, greatly reducing the number of parameters in the model.
A pooling layer, taking advantage of the stationarity property of images, takes the mean, max, or other statistics of the features at various locations in the feature maps, thus reducing the variance and capturing essential features [6,9].
A CNN typically consists of multiple convolutional and pooling layers, which allow learning more and more abstract features. In the last layers of a CNN, a fully-connected classifier is used to classify the features extracted by the previous convolutional and pooling layers. The most popular CNNs used in machine learning applications are AlexNet [42], Clarifai [52], VGG [53], and GoogLeNet [54].
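The following PyTorch sketch assembles such a network from the pieces described above; the number of filters, kernel sizes and input resolution are illustrative placeholders.

```python
import torch
import torch.nn as nn

# Alternating convolution/pooling stages followed by a fully-connected
# classifier, as described above (all sizes are illustrative).
cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 3 colour channels in
    nn.ReLU(),
    nn.MaxPool2d(2),                             # pooling: downsample by 2
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                   # fully-connected classifier
)

x = torch.rand(4, 3, 32, 32)   # toy batch of 32x32 RGB images
print(cnn(x).shape)            # torch.Size([4, 10])
```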

3. Biomedical Applications

3.1. Omics

The first level of the biomedical research field concerns all studies ranging from genomic sequencing and gene expression to protein structure prediction and protein interactions with other proteins or drugs. This is a very active research domain, where the use of deep neural networks is growing rapidly (see Figure 1). Usually, in the literature, the generic name for this research area is the “Omics”, but other appellations can be found, such as bioinformatics [7] or biomedicine [55] (see Figure 3 for the different research sub-field denominations found in the literature). The Omics covers data from genetics and (gen/transcript/epigen/prote/metabol/pharmacogen/multi)omics [5] and aims to investigate and understand biological processes at the molecular level, in order to predict and prevent diseases by involving patients in the development of more efficient and personalized treatments. A great part of the Omics concerns protein–protein interactions (PPIs) [56], the prediction of human drug targets and their interactions [57], and protein function prediction [58]. Table 1 gives the recently published survey papers on the Omics. We recommend the papers by Leung et al. [59] and Mamoshina et al. [55], which provide a complete overview of genomics and of the major practical machine learning challenges.

3.2. Bio and Medical Imaging

The next stage after the DNA and protein levels is the study of the cell (cytopathology) and of the tissue (histopathology). Cytopathology and histopathology are commonly used in the diagnosis of cancer, some infectious diseases and other inflammatory conditions. The histological and cytopathological slides, usually obtained by fine-needle aspiration biopsies, are examined under a microscope [62,63]. This research field, referred to in the literature as bioimaging, is the main research area of deep learning in biomedical applications (see Figure 1).
In medical imaging, the human organs are studied by analyzing different (medical/clinical/health) imaging modalities [5]. Nowadays, large medical high-resolution imaging acquisition systems are available, such as parallel magnetic resonance imaging (MRI), multi-slice computed tomography (CT), ultrasound (US) transducer technology, digital positron emission tomography (PET) and 2D/3D X-ray. These medical images contain a wealth of information, causing some disagreement across interpreters [64]. The manufacturers of medical imaging systems try to provide software, workstations and solutions for archiving, visualizing and analyzing the images [64].
In bio and medical imaging, the accuracy of the diagnosis and/or of the assessment of a disease depends on both image acquisition and image interpretation [65]. Image acquisition has improved substantially over recent years with the development of the technology. Medical image interpretation is mostly performed by physicians and can be subject to large variations across interpreters, as well as to fatigue (see the guest editorial [65]). Indeed, the main part of deep learning applications in bio and medical imaging concerns computer-aided image interpretation and analysis [66,67,68,69,70], for example analyzing histopathology images for breast cancer diagnosis [63,71,72,73], or digital pathology and image analysis with a focus on research and biomarker discovery [74]. The major part of the deep learning papers published in bio and medical imaging concerns segmentation, localization and classification: of nuclei [75] and mitoses [76] in bioimaging, and of lesions and anatomical objects (such as organs, landmarks and other substructures) in medical imaging. Table 2 provides the recently published survey papers on DL for medical and bioimaging. We recommend the work of Litjens et al. [77] for a complete overview.

3.3. Brain and Body Machine Interfaces

The next level of the biomedical applications concerns the brain and body machine interfaces (BBMIs), which cover the electrical signals generated by the brain and the muscles, acquired using appropriate sensors [5,82]. A BBMI system is composed of four main parts: a sensing device, an amplifier, a filter, and a control system [83]. For the brain interface, the system decodes and processes signals from complex brain mechanisms to provide a digital interface between the brain and the computer [84]. The brain signals reflect the voluntary or involuntary neural actions generated by the human’s current activity. Various techniques for signal acquisition have recently been developed [85]: invasive techniques, with electrodes implanted under the scalp (e.g., electrocorticography (ECoG)), and non-invasive techniques, which do not require implanting external objects into the subject’s brain. Different assessment techniques exist, such as the electroencephalogram (EEG), magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI), and functional near-infrared spectroscopy (fNIRS). Beyond the brain–machine interface, the second part of deep learning applied to BMIs concerns anomaly detection and disease diagnosis, such as the identification of coronary artery disease from ECG signals [86], the automated detection of myocardial infarction using ECG signals [87], seizure detection from electroencephalography data [88], and the EEG diagnosis of Alzheimer’s disease [89]. Some of the deep learning bibliography concerns muscle activity, i.e., muscle–computer interfaces (MCIs), such as the classification of electromyography movements for prosthetic hands [90] and gesture recognition from instantaneous surface EMG images [91]. Table 3 gives recent survey papers on machine learning techniques in general [92] and on deep learning [93] used in BMIs.

3.4. Public and Medical Health Management

The aim of public and medical health management (PmHM) is to analyze large medical data in order to develop and enhance health-care decisions for the wellbeing of humanity. Analyzing the spread of disease, such as in the case of epidemics and pandemics, in relation to social behavior and environmental factors [95] is one of the main challenges of PmHM in the coming years [7]. One of the richest sources of patient information is electronic health records (EHRs), which include medical history details such as laboratory test results, allergies, radiology images, multivariate sensor time series (such as EEG), medications and treatment plans [7]. The analysis of such clinical information along the temporal dimension provides a valuable opportunity for deep learning in health-care decision making [96], in developing knowledge-distillation approaches [97], for temporal pattern discovery over Rochester Epidemiology Project data [98], or to classify diagnoses given multivariate pediatric intensive care unit (PICU) time series [99]. A novel unsupervised deep feature learning method to derive a general-purpose patient representation from EHR data, facilitating clinical predictive modeling, was developed by Lipton et al. [99].
PmHM also includes modeling lifestyle diseases, such as obesity, in relation to geographical areas. Tracking public health concerns such as infectious intestinal diseases [100], or geographical obesity, by using social media, where people’s lives and social interactions are publicly shared online, is nowadays feasible [101]. In [102], geo-tagged images from Instagram were used to study lifestyle diseases such as obesity, drinking or smoking. A social-media-nested epidemic simulation (SimNest) via online semi-supervised DL was developed by Zhao et al. [103].

4. Omics

Scientific research in the Omics field is divided into two areas: DNA and protein. In the DNA field, the most challenging tasks are protein–DNA interactions, gene expression prediction and genomic sequencing. Identifying the critical genes for early cancer detection is an open challenge [104,105,106,107]. In the protein field, most of the research is concentrated around protein structure prediction (PSP) and protein interaction prediction (PIP).

4.1. Around the Genome

Nowadays, machine learning techniques are widely used to predict phenotypes from the genome. The input features usually used for the learning process are genomic sequences (ChIP-, RIP- and CLIP-seq). The Omics applications around the genome can be divided into three areas: protein binding prediction (PBP), gene expression and genomic sequencing. Figure 8 gives the whole pipeline for these research areas.

4.1.1. Protein Binding Prediction

Protein–DNA interactions play important roles in different cell processes, including the transcription, translation, repair, and replication machinery [60]. Finding the exact location of these binding sites is very important in several domains, such as drug design and development.
  • DNA–RNA-binding proteins: The chemical interaction between a protein and a DNA or RNA strand is called protein binding. Predicting these binding sites is crucial for various biological activities, such as gene regulation, transcription, DNA repair, and mutations [59], and is essential for the interpretation of the genome. To model the binding sites, the position–frequency matrix is mainly used as four input channels, and the output is a binding score [108,109,110,111,112,113,114] (see the sketch after this list).
  • Enhancer and promoter identification: Promoters and enhancers act via complex interactions across time and space in the nucleus to control when, where and at what magnitude genes are active [115]. Promoters and enhancers were early discoveries during the molecular characterization of genes [115]. Promoters specify and enable the positioning of the RNA polymerase machinery at transcription initiation sites, while enhancers are short regions of DNA sequences bound by a certain type of proteins, the transcription factors [115,116,117,118,119].
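As announced above, the following sketch illustrates the common encoding used in binding-site prediction: the DNA sequence is one-hot encoded into four input channels and scanned by one-dimensional convolutions that act as learned motif detectors. The architecture and all sizes are hypothetical simplifications, not the model of any particular reviewed paper.

```python
import torch
import torch.nn as nn

def one_hot_dna(seq):
    """Encode a DNA sequence as a 4-channel array (one channel per base)."""
    idx = {"A": 0, "C": 1, "G": 2, "T": 3}
    x = torch.zeros(4, len(seq))
    for i, base in enumerate(seq):
        x[idx[base], i] = 1.0
    return x

# Hypothetical binding-score model: 1D convolutions act as motif scanners.
model = nn.Sequential(
    nn.Conv1d(4, 16, kernel_size=8),   # 16 learned motif detectors
    nn.ReLU(),
    nn.AdaptiveMaxPool1d(1),           # strongest motif match along the sequence
    nn.Flatten(),
    nn.Linear(16, 1),                  # scalar binding score
)

x = one_hot_dna("ACGTGGGTACCATTGACA").unsqueeze(0)  # add a batch dimension
print(model(x))   # predicted binding score (untrained)
```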

4.1.2. Gene Expression

Gene expression is a biological process divided into three steps: transcription, RNA processing and translation. Transcription creates an RNA molecule (called precursor messenger RNA (pre-mRNA)) that is essentially a copy of the DNA in the gene being transcribed. RNA processing then modifies the pre-mRNA into a new RNA molecule called messenger RNA (mRNA). Translation creates a protein molecule (an amino-acid chain) by reading the three-letter codes (codons) in the mRNA sequence [59]. Deep learning techniques are applied to gene expression along two orientations: alternative splicing and the prediction of gene expression.
  • Alternative splicing: Alternative splicing (AS) is a process whereby the exons of a primary transcript may be connected in different ways during pre-mRNA splicing [120]. The objective is to build a model that can predict the outcome of AS from sequence information in different cellular contexts [120,121,122].
  • Gene expression prediction: Gene expression refers to the process of producing a protein from sequence information encoded in DNA [123]. Predicting gene expression from histone modifications can be formulated as a binary classification, where the output represents the gene expression level (high or low) [124,125,126].

4.1.3. Genomic Sequencing

Genomic sequencing is the process of determining the precise order of nucleotides within a DNA molecule, and it is nowadays crucial in several applications, such as basic biological research, medical diagnosis, biotechnology, forensic biology, virology and biological systematics. The applications of deep learning in genomic sequencing are divided into two fields: learning the functional activity of DNA sequences and DNA methylation.
  • DNA sequencing: Learning the functional activity [127], quantifying the function [128] or identifying the functional effects of noncoding variants [129] of DNA sequences from genomics data are fundamental problems in Omics applications, recently touched by the enthusiasm for deep learning. The de novo identification of replication domain types using replication timing profiles is also a crucial application in genomic sequencing [130].
  • DNA methylation: DNA methylation is a process by which methyl groups are added to the DNA molecule; it can change the activity of a DNA segment without changing the sequence. DNA methylation plays a crucial role in the establishment of tissue-specific gene expression and in the regulation of key biological processes [131,132].

4.2. Around the Protein

The applications of deep learning in the protein field can be divided into two areas: protein structure prediction (PSP) (Figure 9) and protein interaction prediction (PIP) (Figure 10). The features commonly used in these various protein prediction problems are [133]: physicochemical properties, the protein position-specific scoring matrix (PSSM), solvent accessibility, secondary structure, protein disorder, contact number and the estimated probability density function of the errors (differences) between true torsion angles and torsion angles predicted from related sequence fragments.

4.2.1. Protein Structure Prediction (PSP)

Nowadays, one of the most challenging problems in the Omics concerning the study of proteins is to obtain a better modeling and understanding of the protein folding process [58]. During the gene expression process, every protein folds into a unique three-dimensional structure that determines its specific biological function. This three-dimensional structure is also known as the native conformation, and it is a function of the protein’s secondary structures. Protein structure prediction is crucial for analyzing protein function and applications; several diseases are directly linked to the aggregation of ill-formed proteins [134]. We highlight five main orientations in the field of protein structure prediction. Most of these studies concern protein secondary and tertiary structure prediction:
  • Backbone angles prediction: Prediction of the protein backbone torsion angles (Psi and Phi) can provide important information for protein structure prediction and sequence alignment [133,135,136,137].
  • Protein secondary structure prediction: Prediction of the secondary structure of the protein is an important step for the prediction of the three-dimensional structure and concentrates a large part of the scientific publications [134,138,139,140,141,142,143,144,145,146].
  • Protein tertiary structure (3D) prediction: Protein tertiary structure deals with the three-dimensional protein structure and describes how the regional structures are put together in space [147,148,149,150,151,152].
  • Protein quality assessment (QA): In the process of protein three-dimensional structure prediction, assessing the quality of the generated models accurately is crucial. The protein structure quality assessment (QA) is an essential component in protein structure prediction and analysis [153,154,155].
  • Protein loop modeling and disorder prediction: Biology and medicine have a long-standing interest in the computational prediction and modeling of protein structures. There are often missing regions, or regions that need to be remodeled, in protein structures. The process of predicting the missing regions of a protein structure is called loop modeling [156,157,158]. Many proteins contain regions with unstable tertiary structure; these regions are called disorder regions [159].

4.2.2. Protein Interaction Prediction (PIP)

Most cellular processes involve interactions between proteins and other proteins, drugs or compounds [56]. The identification of drug targets and drug–target interactions is an important step in the drug-discovery pipeline and a great challenge in drug repositioning [57]. The identification of the interactions between chemical compounds and proteins plays a critical role in network pharmacology, drug discovery, drug target identification, the elucidation of protein functions, and drug repositioning. Nowadays, different machine learning algorithms are employed for protein interaction prediction, to reveal the molecular mechanisms of cellular processes. Protein interaction prediction is divided into three fields: protein–protein interactions (PPIs), drug–target interactions (DTIs) and compound–protein interactions (CPIs).
  • Protein–Protein Interactions (PPIs): Protein–protein interactions (PPIs) play critical roles in many cellular biological processes, such as signal transduction, immune response, and cellular organization. The analysis of protein–protein interactions is very important for drug target detection and therapy design [160,161,162,163,164,165,166,167].
  • Drug–Target Interactions (DTIs): Identifying drug–target interactions (DTIs) is a major challenge in drug development and provides detailed descriptions of the biological activity, genomic features and chemical structure to the disease treatment [168,169,170].
  • Compound–Protein Interactions (CPIs): The interactions between compounds and proteins play a crucial role in pharmacology and drug discovery [171,172,173].

5. Brain and Body Machine Interface

The brain and body machine interface is defined as a combination of hardware and software that allows brain activity to control external devices or even computers [174]. Different signals, with varying numbers of channels, are recorded from the brain using different electrode technologies. These electrodes are essentially classified into two categories: invasive and non-invasive. For invasive electrodes, neurosurgery is necessary to implant the microelectrodes in the brain under the skull, while non-invasive electrodes do not require any penetration of the scalp. The signals produced by invasive electrodes are of high quality; however, non-invasive electrodes are still preferable because they avoid surgery [174]. The control signals are classified into three categories [174]: (1) evoked signals; (2) spontaneous signals; and (3) hybrid signals. Evoked signals are generated unconsciously by the subject when he/she receives an external stimulus. The most well-known evoked signals are steady-state evoked potentials (SSEP) and the P300. Spontaneous signals are generated by the subject voluntarily, without any external stimulation [174]. The most well-known spontaneous signals are the motor and sensorimotor rhythms, slow cortical potentials (SCP), and non-motor cognitive tasks. Hybrid signals combine several brain-generated signals for control.
We strongly recommend several complete survey papers [82,85,174,175,176,177,178,179,180] that provide the state of the art on brain–machine interface applications, with a detailed focus on the definitions, classifications and comparisons of the brain signals. In addition, they survey the current brain interface hardware and software and explain the current challenges of the field.

5.1. Brain Decoding

5.1.1. Evoked Signals

  • Steady-state evoked potentials (SSEP): SSEP signals are brain signals generated when the subject perceives a periodic external stimulus [174]. Table 4 gives an overview of papers using deep learning techniques for SSEP applications. The SSEP applications are divided into three main parts: the classification of human affective and emotional states, auditory evoked potentials and visual evoked potentials.
  • P300: The P300 is an EEG signal that appears approximately 300 ms after the subject is exposed to an infrequent or surprising stimulus [174]. Table 4 gives an overview of papers using deep learning techniques for P300 applications. The two main applications are the P300 speller and driver fatigue classification.

5.1.2. Spontaneous Signals

  • Motor and sensorimotor rhythms (MSR): Motor and sensorimotor rhythms are brain signals related to motor actions, such as moving the arms [174]. Most of the applications concern motor imagery tasks, i.e., the translation of the subject’s motor intention into control signals through motor imagery states (see Table 4).
  • Non-motor cognitive tasks: The non-motor cognitive tasks concern all the signals generated by the brain during spontaneous music imagination, visual counting, mental rotation, and mathematical computation [174] (see Table 4).

5.1.3. Hybrid Signals

For better reliability, and to avoid the disadvantages of each type of signal, some techniques combine several biosignals, such as electroencephalography (EEG) for brain activity, electrooculography (EOG) for eye movements, and electromyography (EMG) [174,181] (see Table 4).

5.2. Diseases Diagnosis

The second part of the BBMIs concerns disease diagnosis. Many of these applications concern the automated detection of arrhythmias and cardiac abnormalities, i.e., irregularities in the heart rate or rhythm, automatically detected by deep neural networks. DL has also been applied to other tasks, such as the automated detection and diagnosis of seizures, the screening of depression [238] and neonatal sleep state identification [239]. Table 5 gives an overview of papers using DL techniques for disease diagnosis and abnormality detection.

6. Discussions and Outlooks

6.1. Overview of the Reviewed Papers

A total of 158 papers are reviewed in this paper: 64 papers for the Omics and 94 for the BBMIs. Figure 11 gives the distribution of these papers. Most of the reviewed papers (more than 50%) were published in 2017 or in the first months of 2018, which shows the popularity of deep learning in biomedical applications during the last two years. Much of the Omics research concerns proteins, specifically protein structure prediction and protein binding prediction (see Figure 12). In the BBMI, 37% of the papers deal with disease diagnosis and 60% concern brain decoding. Most of these papers use the EEG input signal (see Figure 13 for a complete overview).

6.2. Overview of the Used Deep Neural Networks Architectures

It is well known today that the most used deep neural network architectures in image analysis are the convolutional neural networks (this is also true for medical image analysis—see the Discussion Section of [77]). Figure 14 shows a global overview of the deep neural architectures used in the two biomedical fields (Omics and BBMI). We distinguish the two types of architectures:
  • DNN: Deep neural architectures for non-locally correlated input data, which encompass the deep multilayer perceptron, deep auto-encoders and deep belief networks (see Section 2.2.1).
  • CNN: Deep neural architectures for high locally correlated data.
Figure 11 shows that CNNs were the preferred architecture in the papers published during 2017. CNNs are now the most used deep neural architectures in BBMI applications, and are not far from it in the Omics (see Figure 14). However, CNNs are not really preferred in two Omics sub-fields: protein–protein interaction prediction and gene expression (see Figure 14).
Generally, there is an increasing interest in using CNNs in biomedical analysis and decoding. For bio and medical image analysis, we think that this interest will grow exponentially in the next few years, as it will for the other biomedical fields, such as the Omics and the BBMI. Recent publications in image analysis and biomedical imaging show that the current standard practice is to replace the traditional handcrafted machine learning methods by CNNs used in an end-to-end way [77]. Indeed, for BBMI signal analysis, a complete study on how to design and train a CNN to decode task-related information from raw EEG signals, without handcrafted features, was recently presented [270]. There, the EEG signals are represented as a 2D array (image) whose width is the number of EEG time steps and whose height is the number of electrodes. In the Omics field, several input features can also be represented as a 2D array, such as the position–frequency matrix model (Figure 8) or the protein structure features (Figure 9). Dynamic sequencing data, in the past usually tailored for recurrent neural architectures, are transformed into a 2D array image and easily processed by CNNs. A notable consequence of this 2D representation is that recurrent neural networks are nowadays less used in biomedical applications (see Figure 2 of [7]). This trend will be further accentuated by the success of CNN architectures such as AlexNet or GoogLeNet, and also by the latest advances in Graphics Processing Units (GPUs). However, other new architectures are emerging, such as DeeperBind, a long short-term recurrent convolutional network for the prediction of protein binding [109], or convolutional deep belief networks [30].
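The following sketch illustrates this 2D representation of EEG signals; the numbers of electrodes and time steps, and the temporal kernel size, are arbitrary placeholders.

```python
import numpy as np
import torch

# Suppose `eeg` holds recordings from 32 electrodes over 512 time steps.
eeg = np.random.randn(32, 512).astype(np.float32)   # placeholder signal

# Represent the trial as a single-channel 2D "image":
# height = number of electrodes, width = number of time steps.
image = torch.from_numpy(eeg).unsqueeze(0).unsqueeze(0)  # (batch, 1, 32, 512)

conv = torch.nn.Conv2d(1, 8, kernel_size=(1, 25))  # temporal filters per electrode
print(conv(image).shape)   # torch.Size([1, 8, 32, 488])
```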

6.3. Biomedical Data and Transfer Learning

Transfer learning is the ability to exploit similarities between different knowledge sources or datasets to facilitate the learning of a new task that shares some common characteristics. CNNs are the main deep neural architecture that has shown a great ability to transfer knowledge between apparently different image classification tasks. In most cases, transfer learning is done by weight transfer, where a network is pretrained on a source task and the weights of some of its layers are then transferred to a second network used for another task [271].
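The sketch below shows this weight-transfer mechanism on two toy fully-connected networks; the architecture, the layer split and the frozen-layer policy are illustrative assumptions.

```python
import torch.nn as nn

def make_net(n_out):
    return nn.Sequential(
        nn.Linear(100, 64), nn.ReLU(),   # feature-extraction layers
        nn.Linear(64, 32), nn.ReLU(),
        nn.Linear(32, n_out),            # task-specific output layer
    )

source = make_net(n_out=5)    # assumed pretrained on the source task
target = make_net(n_out=2)    # new task with a different output size

# Transfer the weights of the early (feature) layers only.
state = {k: v for k, v in source.state_dict().items()
         if not k.startswith("4.")}      # "4." is the last Linear layer here
target.load_state_dict(state, strict=False)

# Optionally freeze the transferred layers and fine-tune only the head.
for name, p in target.named_parameters():
    p.requires_grad = name.startswith("4.")
```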
Recently, transfer learning has been widely used in image analysis, and especially in biomedical imaging [271,272,273,274]. Indeed, in the biomedical field, obtaining datasets that are comprehensively labeled and annotated remains a challenge (see the paragraph on transfer learning and fine-tuning of the guest editorial [65]). CNN models are pre-trained on a natural image dataset, or on a different medical domain, and then fine-tuned on the new medical images. This practice is in its infancy in the Omics and BBMI: we found only a few papers referring to transfer learning (e.g., [258,259] for seizure detection, [234] for mental task classification and [116] for enhancer prediction). An original transfer learning approach was proposed by Tan et al. [275], who transferred knowledge from the ImageNet computer vision database to EEG signal classification. The obtained results are very promising and will certainly open new perspectives in the Omics and BBMI.

6.4. Model Building

The steps to follow when using an artificial neural network (shallow or deep) are: (1) model choice; (2) model building; (3) model learning; and (4) model checking. In the first step, we must choose a neural architecture (CNN, AE, DBN, etc.). In the second step, we have to define the size of the NN: how many layers, how many units per layer, how many convolution filters and of what size. In the third step, the neural network is trained by unsupervised or supervised techniques, while avoiding overfitting and underfitting. In the last step, we have to check the quality of the NN.
The main difficulty with neural networks is the model building. None of the papers reviewed in this survey gives a scientific motivation for the choice of the number of hidden layers or the number of units per layer. For example, when using a CNN, it is very hard for a non-expert to define the best size of the convolution filters and to choose the best number of convolutional, pooling and fully-connected layers. A good alternative to this drawback is to consider the neural architecture as a hyperparameter evolving during the learning process: the neural network is built step by step during learning, until a convergence criterion is reached. To avoid an oversized architecture, non-significant parameters, such as units or connections between neurons, can be removed. Recently, several promising studies on constructive and pruning algorithms have been published (see [38,276] for a complete survey).
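A minimal example of the pruning idea is the magnitude-based criterion sketched below, in which connections with small absolute weights are zeroed out; the threshold value is an arbitrary choice.

```python
import torch
import torch.nn as nn

def prune_small_weights(layer, threshold=1e-2):
    """Zero out connections whose magnitude falls below `threshold`,
    a simple magnitude-based pruning criterion."""
    with torch.no_grad():
        mask = layer.weight.abs() >= threshold
        layer.weight *= mask
    return mask.float().mean().item()    # fraction of weights kept

layer = nn.Linear(64, 32)
kept = prune_small_weights(layer)
print(f"{kept:.0%} of the connections survive pruning")
```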
The other difficulty is the interpretability of the obtained results. Artificial neural networks learn to associate an output to a given input, but they do not learn to give any reason or interpretation associated with this response. It is very hard to understand what happens in the hidden layers and why a trained NN gives a positive diagnosis for a certain pathology. This black-box aspect is very restrictive, especially in medicine, where the interpretability of a decision is very important and can have serious legal consequences [77]. When convolutional networks are used for image processing, several methods have been developed to visualize what happens in the intermediate layers; for example, visual explanations from CNNs via gradient-based localization [277], a technique for visualizing the input stimuli that excite individual feature maps at any layer of a CNN model [278], and a deep Taylor decomposition method for interpreting generic multilayer neural networks by decomposing the network’s classification decision into contributions of its input elements [279]. When the input data are not images, the interpretability of the hidden-layer activities is less obvious. Some visualization techniques, such as t-distributed stochastic neighbor embedding (t-SNE) [280], project a high-dimensional dataset into a two-dimensional map that preserves pairwise similarities. Feature maps of the model can then be inspected; the difficulty lies in explaining the classification decisions according to these maps.
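As an illustration, the sketch below applies t-SNE (as implemented in scikit-learn) to placeholder hidden-layer activations; in practice, the activations would be extracted from a trained network, e.g., with a forward hook.

```python
import numpy as np
from sklearn.manifold import TSNE

# Suppose `features` holds hidden-layer activations for 500 samples.
features = np.random.randn(500, 128)    # placeholder activations
labels = np.random.randint(0, 3, 500)   # placeholder class labels

embedding = TSNE(n_components=2, perplexity=30).fit_transform(features)
print(embedding.shape)   # (500, 2): one 2D point per sample

# Plotting `embedding` coloured by `labels` shows whether the hidden
# layer has organized the classes into separable clusters.
```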

6.5. Emergent Architectures: The Generative Adversarial Networks

One of the most prominent emerging architectures used in biomedical applications is the generative adversarial network (GAN) (Figure 15). GANs provide a way of augmenting data to enlarge the deep representations without extensively annotated training data [281]. Proposed in 2014 by Goodfellow [282], a GAN includes two deep networks: a generator and a discriminator. The first network is often described through a common analogy: an art forger who creates forgeries with the aim of producing realistic images. The discriminator represents the art expert who has to distinguish between synthetic and authentic images [281,283]. Training a GAN requires finding the parameters of both the discriminator and the generator: the discriminator has to maximize its classification accuracy, and the generator has to confuse the discriminator as much as possible [281]. During the training process, only one of the two networks has its parameters updated at a time, while the other keeps its parameters frozen. Several variants of GANs have recently been developed from the original fully-connected architecture proposed by Goodfellow [282], such as convolutional GANs, conditional GANs, adversarial autoencoders (AAEs) and class-experts GANs (see [281,283,284,285,286]).
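The alternating update scheme can be summarized by the following minimal sketch of the original (fully-connected) GAN training loop; the network sizes, the toy “authentic” data and the optimizer settings are illustrative choices.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))   # generator
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(64, 2) + torch.tensor([3.0, 3.0])   # toy "authentic" data

for step in range(100):
    z = torch.randn(64, 8)                       # noise input
    fake = G(z)

    # 1) Update the discriminator (generator frozen via .detach()).
    loss_d = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Update the generator: try to make D label fakes as real.
    loss_g = bce(D(G(z)), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```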
GANs have recently been applied in all biomedical fields, for example in the Omics for protein modeling by Li et al. [156], who considered loop modeling as an image inpainting problem, with the generative network having to capture the context of the loop region and predict the missing area.
In BBMI applications, such as cardiac ECG applications, a novel concept that embeds a generative variational autoencoder (VAE) into the objective function of Bayesian optimization was applied to the estimation of tissue excitability in a cardiac electrophysiological model by Dhamala et al. [287]. In [288], a deep generative model is trained to generate the spatiotemporal dynamics of the transmembrane potential (TMP).
In bioimaging histology applications, to classify newly given prostate datasets into low and high Gleason grades, adversarial training is used to minimize the distribution discrepancy in the feature space, with the loss function adopted from the GAN [289]. In [290], a cascade of refinement GANs for phase contrast microscopy image super-resolution is proposed.
Most research using GANs concerns medical imaging applications. The main objectives are: image quality enhancement, image reconstruction, crafted images generation, image registration and segmentation.
The first objective of GANs in medical imaging is to enhance image quality. A conditional GAN is used for the reduction of metal artifacts (RMA) in computed tomography (CT) ear images of cochlear implant (CI) recipients in [291], and to estimate high-quality full-dose PET images from low-dose ones in [292]. To reduce the artifacts of sparsely reconstructed cone-beam CT (CBCT) images, Liao et al. [293] introduced a least squares generative adversarial network (LSGAN), formulated as an image-to-image generative model. Chen et al. [294] introduced a 3D neural network design, namely a multi-level densely connected super-resolution network (mDCSRN) with a GAN, to generate high-resolution (HR) magnetic resonance images (MRI) from a single low-resolution (LR) input image. Similarly, a deep residual network (ResNet) is used as part of a GAN and trained to enhance the quality of ultrasound images [295]. In [296], a noise-reducing generator based on a CNN, together with a GAN, is developed.
In image reconstruction methods, 3D anatomical structures remain a challenge when a limited number of 2D views is used to reduce cost and the radiation exposure of patients. A new deep conditional generative network for the 3D reconstruction of the fetal skull from 2D ultrasound (US) standard planes of the head, routinely acquired during the fetal screening process, was introduced by Cerrolaza et al. [297]. This 3D reconstruction is based on the generative properties of conditional variational autoencoders (CVAEs), yielding a deep conditional generative network for the 3D reconstruction of the fetal skull from freehand, non-aligned 2D scans of the head. Separately, compressed-sensing-based magnetic resonance imaging (CS-MRI) is a promising paradigm that accelerates MRI acquisition by reconstructing images from only a fraction of the normally required measurements [298]. To solve this problem, deep generative adversarial networks were recently used by Seitzer et al. [298], Zhang et al. [299], and Quan et al. [300].
Due to the lack of annotated data, GANs are used to generate crafted images, for example to synthesize multiple realistic-looking retinal images from an unseen tubular structured annotation containing the binary vessel morphology in [301], or to generate synthetic liver lesion images in [302]. In [303], adversarial examples are explored in medical imaging and leveraged in a constructive fashion to benchmark model performance not only on clean and noisy, but also on adversarially crafted, data. In [304], a conditional GAN is explored to augment artificially generated lung nodules, to improve the robustness of the progressive holistically nested network (P-HNN) model for the pathological lung segmentation of CT scans. In [305], a novel generative invertible network (GIN), a combination of a convolutional neural network and a generative adversarial network, is proposed to extract features from real patients and use them to generate virtual patients that are both visually and pathophysiologically plausible. In [306], a deep generative multi-task model is developed to address the problem of limited training data and of data with lesion annotations, because producing the annotations is a very expensive and time-consuming task; the model is built upon the combination of a convolutional neural network and a generative adversarial network. Transfer learning is also used to adapt a model trained on one type of data to another. In [307], a conditional generative adversarial network is used to generate realistic chest X-ray images with different disease characteristics, by conditioning the generation on a real image sample. This approach has the advantage of overcoming the limitations of small training datasets by generating truly informative samples. To solve the problem of missing data in multi-modal studies, Pan et al. [308] proposed a two-stage deep learning framework for Alzheimer’s disease diagnosis using incomplete multi-modal imaging data. In the first stage, a 3D cycle-consistent generative adversarial network model imputes the missing PET data based on their corresponding MRI data; a landmark-based multi-modal multi-instance neural network for brain disease classification is used in the second stage.
Image registration can be defined as the process of aligning two or more images to find the optimal transformation that best aligns the structures of interest in the input images. Fan et al. [309] introduced an adversarial similarity network, inspired by the generative adversarial network, to automatically learn the similarity metric for training a deformable registration network. In [310], an adversarial learning approach is used to constrain the training of a convolutional neural network for image registration. The end-to-end trained network enables efficient and fully-automated registration that only requires a magnetic resonance and transrectal ultrasound image pair as input, without anatomical labels or simulated data during inference.
Generative adversarial networks are also used in medical image segmentation. In [311], a novel real-time voxel-to-voxel conditional generative adversarial network is used for 3D left-ventricle segmentation on 3D echocardiography. For the automatic segmentation of bony structures, a cascaded generative adversarial network with deep-supervision discriminators was used by Zhao et al. [312]. In [313], a novel transfer-learning framework using generative adversarial networks is proposed for the robust segmentation of different HEp-2 datasets. A recurrent generative adversarial network called Spine-GAN, performing the automated segmentation and classification of intervertebral discs, vertebrae and neural foramina in MRIs in one shot, was developed by Han et al. [314]. Jiang et al. [315] aimed to learn to segment lung cancer tumors from MR images through domain adaptation from CT to MRI, where a reasonable number of labeled data are available in the source CT domain but only a very limited number of target MRI samples, with fewer labels, are provided. For breast mass segmentation in mammography, conditional generative adversarial and convolutional networks were used by Singh et al. [316]. For the automatic segmentation and classification of images from patients with cardiac diseases associated with structural remodeling, a variational autoencoder (VAE) model based on 3D convolutional layers was introduced by Biffi et al. [317]. A multitask generative adversarial network was proposed by Xu et al. [318] as a contrast-free, stable and automatic clinical tool to segment and quantify myocardial infarction simultaneously. In [319], a task-driven generative adversarial network is introduced to achieve simultaneous image synthesis and automatic multi-organ segmentation on X-ray images.
To model the respiratory motion of mobile tumors in the thorax and abdomen during radiotherapy, Giger et al. [320] developed a conditional generative adversarial network trained to learn the relation between temporally related ultrasound and 4D MRI navigator images.

7. Conclusions

Deep neural networks are currently the main machine learning tools in all areas of biomedical application, such as the Omics, bio- and medical imaging, the brain and body machine interface and public health management. In this survey paper, we have presented the recent status of deep learning in these biomedical applications. Among all the deep neural architectures, there is a growing interest in end-to-end convolutional neural networks, which replace traditional handcrafted machine learning pipelines. The analysis of the literature shows that CNNs are the dominant deep neural architecture and have shown a great ability to transfer knowledge between apparently different classification tasks by means of weight transfer. In most cases, this transfer learning concerns biomedical imaging applications.
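As an illustration of such weight transfer, the following minimal PyTorch/torchvision sketch reuses an ImageNet-pretrained backbone and trains only a new task-specific head on a (typically small) biomedical dataset. The choice of ResNet-18 and the two-class head are assumptions of ours for the sake of the example.

```python
# Minimal weight-transfer sketch: frozen pretrained backbone, new trainable head.
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1")       # downloads ImageNet weights on first use

for p in model.parameters():                    # freeze the transferred backbone
    p.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, 2)   # new head, e.g. benign vs. malignant

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
x, y = torch.randn(4, 3, 224, 224), torch.randint(0, 2, (4,))  # dummy batch
loss = nn.functional.cross_entropy(model(x), y)
opt.zero_grad(); loss.backward(); opt.step()
```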
The main emergent architectures used in biomedical applications are the generative adversarial networks. These architectures provide a data augmentation mechanism that enlarges the deep representations without requiring extensively annotated training data. GANs are mainly applied in biomedical imaging applications.
Despite the great success of deep neural networks in biomedical applications, their users still face many difficulties, such as model building and the interpretability of the obtained results.

Author Contributions

These authors contributed equally to this work.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. He, H.; Garcia, E.A. Learning from Imbalanced Data. IEEE Trans. Knowl. Data Eng. 2009, 21, 1263–1284. [Google Scholar] [CrossRef]
  2. Yu, H.; Yang, X.; Zheng, S.; Sun, C. Active Learning From Imbalanced Data: A Solution of Online Weighted Extreme Learning Machine. IEEE Trans. Neural Netw. Learn. Syst. 2018, 30, 1–16. [Google Scholar] [CrossRef]
  3. Ditzler, G.; Roveri, M.; Alippi, C.; Polikar, R. Learning in Nonstationary Environments: A Survey. IEEE Comput. Intell. Mag. 2015, 10, 12–25. [Google Scholar] [CrossRef]
  4. Alvarado-Díaz, W.; Lima, P.; Meneses-Claudio, B.; Roman-Gonzalez, A. Implementation of a brain-machine interface for controlling a wheelchair. In Proceedings of the 2017 CHILEAN Conference on Electrical, Electronics Engineering, Information and Communication Technologies (CHILECON), Pucon, Chile, 18–20 October 2017; pp. 1–6. [Google Scholar] [CrossRef]
  5. Mahmud, M.; Kaiser, M.; Hussain, A.; Vassanelli, S. Applications of Deep Learning and Reinforcement Learning to Biological Data. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 2063–2079. [Google Scholar] [CrossRef]
  6. Cao, C.; Liu, F.; Tan, H.; Song, D.; Shu, W.; Li, W.; Zhou, Y.; Bo, X.; Xie, Z. Deep Learning and Its Applications in Biomedicine. Genom. Proteom. Bioinform. 2018, 16, 17–32. [Google Scholar] [CrossRef] [PubMed]
  7. Ravì, D.; Wong, C.; Deligianni, F.; Berthelot, M.; Andreu-Perez, J.; Lo, B.; Yang, G.Z. Deep Learning for Health Informatics. IEEE J. Biomed. Health Inform. 2017, 21, 4–21. [Google Scholar] [CrossRef]
  8. Jones, W.; Alasoo, K.; Fishman, D.; Parts, L. Computational biology: Deep learning. Emerg. Top. Life Sci. 2017, 1, 257–274. [Google Scholar] [CrossRef]
  9. Angermueller, C.; Pärnamaa, T.; Parts, L.; Stegle, O. Deep learning for computational biology. Mol. Syst. Biol. 2016, 12, 878. [Google Scholar] [CrossRef] [PubMed]
  10. Min, S.; Lee, B.; Yoon, S. Deep Learning in Bioinformatics. Brief. Bioinform. 2016, 18, 851–869. [Google Scholar] [CrossRef]
  11. Hubel, D.H.; Wiesel, T.N. Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. J. Physiol. 1962, 160, 106–154. [Google Scholar] [CrossRef] [PubMed]
  12. Hubel, D.H.; Wiesel, T.N. Receptive fields of single neurones in the cat’s striate cortex. J. Physiol. 1959, 148, 574–591. [Google Scholar] [CrossRef]
  13. Schmidhuber, J. Deep learning in neural networks: An overview. Neural Netw. 2015, 61, 85–117. [Google Scholar] [CrossRef]
  14. Hinton, G.E.; Osindero, S.; Teh, Y.W. A Fast Learning Algorithm for Deep Belief Nets. Neural Comput. 2006, 18, 1527–1554. [Google Scholar] [CrossRef]
  15. Hinton, G.E.; Salakhutdinov, R.R. Reducing the Dimensionality of Data with Neural Networks. Science 2006, 313, 504–507. [Google Scholar] [CrossRef]
  16. Bengio, Y.; Lamblin, P.; Popovici, D.; Larochelle, H. Greedy Layer-Wise Training of Deep Networks. In Advances in Neural Information Processing Systems (NIPS 06); Schölkopf, B., Platt, J., Hoffman, T., Eds.; MIT Press: Cambridge, MA, USA, 2007; pp. 153–160. [Google Scholar]
  17. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536. [Google Scholar] [CrossRef]
  18. Choi, B.; Lee, J.H.; Kim, D.H. Solving local minima problem with large number of hidden nodes on two-layered feed-forward artificial neural networks. Neurocomputing 2008, 71, 3640–3643. [Google Scholar] [CrossRef]
  19. Behera, L.; Kumar, S.; Patnaik, A. On Adaptive Learning Rate That Guarantees Convergence in Feedforward Networks. IEEE Trans. Neural Netw. 2006, 17, 1116–1125. [Google Scholar] [CrossRef]
  20. Duffner, S.; Garcia, C. An Online Backpropagation Algorithm with Validation Error-Based Adaptive Learning Rate. In Artificial Neural Networks—ICANN 2007: 17th International Conference, Porto, Portugal, 9–13 September 2007, Proceedings, Part I; de Sá, J.M., Alexandre, L.A., Duch, W., Mandic, D., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; pp. 249–258. [Google Scholar]
  21. Zeraatkar, E.; Soltani, M.; Karimaghaee, P. A fast convergence algorithm for BPNN based on optimal control theory based learning rate. In Proceedings of the 2nd International Conference on Control, Instrumentation and Automation, Shiraz, Iran, 27–29 December 2011; pp. 292–297. [Google Scholar] [CrossRef]
  22. Zhang, R.; Xu, Z.B.; Huang, G.B.; Wang, D. Global Convergence of Online BP Training With Dynamic Learning Rate. IEEE Trans. Neural Netw. Learn. Syst. 2012, 23, 330–341. [Google Scholar] [CrossRef]
  23. Zeiler, M.D. ADADELTA: An Adaptive Learning Rate Method. arXiv, 2012; arXiv:1212.5701. [Google Scholar]
  24. Shrestha, S.B.; Song, Q. Adaptive learning rate of SpikeProp based on weight convergence analysis. Neural Netw. 2015, 63, 185–198. [Google Scholar] [CrossRef]
  25. Asha, C.; Narasimhadhan, A. Adaptive Learning Rate for Visual Tracking Using Correlation Filters. Procedia Comput. Sci. 2016, 89, 614–622. [Google Scholar] [CrossRef]
  26. Narayanan, S.J.; Bhatt, R.B.; Perumal, B. Improving the Accuracy of Fuzzy Decision Tree by Direct Back Propagation with Adaptive Learning Rate and Momentum Factor for User Localization. Procedia Comput. Sci. 2016, 89, 506–513. [Google Scholar] [CrossRef]
  27. Gorunescu, F.; Belciug, S. Boosting backpropagation algorithm by stimulus-sampling: Application in computer-aided medical diagnosis. J. Biomed. Inform. 2016, 63, 74–81. [Google Scholar] [CrossRef]
  28. Glorot, X.; Bordes, A.; Bengio, Y. Deep Sparse Rectifier Neural Networks. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (AISTATS-11), Fort Lauderdale, FL, USA, 11–13 April 2011; Volume 15, pp. 315–323. [Google Scholar]
  29. Chandra, B.; Sharma, R.K. Deep learning with adaptive learning rate using laplacian score. Expert Syst. Appl. 2016, 63, 1–7. [Google Scholar] [CrossRef]
  30. Zhang, N.; Ding, S.; Zhang, J.; Xue, Y. An overview on Restricted Boltzmann Machines. Neurocomputing 2018, 275, 1186–1199. [Google Scholar] [CrossRef]
  31. Hosseini-Asl, E.; Zurada, J.M.; Nasraoui, O. Deep Learning of Part-Based Representation of Data Using Sparse Autoencoders With Nonnegativity Constraints. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 2486–2498. [Google Scholar] [CrossRef]
  32. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  33. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
  34. Dahl, G.E.; Sainath, T.N.; Hinton, G.E. Improving deep neural networks for LVCSR using rectified linear units and dropout. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; pp. 8609–8613. [Google Scholar] [CrossRef]
  35. Goodfellow, I.J.; Warde-farley, D.; Mirza, M.; Courville, A.; Bengio, Y. Maxout networks. arXiv, 2013; arXiv:1302.4389. [Google Scholar]
  36. Hastie, T.; Tibshirani, R.; Friedman, J. The Elements of Statistical Learning; Springer: New York, NY, USA, 2009. [Google Scholar] [CrossRef]
  37. Yao, X. A review of evolutionary artificial neural networks. Int. J. Intell. Syst. 1993, 8, 539–567. [Google Scholar] [CrossRef]
  38. Pérez-Sánchez, B.; Fontenla-Romero, O.; Guijarro-Berdiñas, B. A review of adaptive online learning for artificial neural networks. Artif. Intell. Rev. 2016, 49, 281–299. [Google Scholar] [CrossRef]
  39. Kalaiselvi, T.; Sriramakrishnan, P.; Somasundaram, K. Survey of using GPU CUDA programming model in medical image analysis. Inform. Med. Unlocked 2017, 9, 133–144. [Google Scholar] [CrossRef]
  40. Smistad, E.; Falch, T.L.; Bozorgi, M.; Elster, A.C.; Lindseth, F. Medical image segmentation on GPU—A comprehensive review. Med. Image Anal. 2015, 20, 1–18. [Google Scholar] [CrossRef]
  41. Eklund, A.; Dufort, P.; Forsberg, D.; LaConte, S.M. Medical image processing on the GPU—Past, present and future. Med. Image Anal. 2013, 17, 1073–1094. [Google Scholar] [CrossRef]
  42. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; Curran Associates Inc.: Lake Tahoe, NV, USA, 2012; pp. 1097–1105. [Google Scholar]
  43. Fischer, A.; Igel, C. Training restricted Boltzmann machines: An introduction. Pattern Recognit. 2014, 47, 25–39. [Google Scholar] [CrossRef]
  44. Hopfield, J.J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 1982, 79, 2554–2558. [Google Scholar] [CrossRef]
  45. Sheri, A.M.; Rafique, A.; Pedrycz, W.; Jeon, M. Contrastive divergence for memristor-based restricted Boltzmann machine. Eng. Appl. Artif. Intell. 2015, 37, 336–342. [Google Scholar] [CrossRef]
  46. Sankaran, A.; Goswami, G.; Vatsa, M.; Singh, R.; Majumdar, A. Class sparsity signature based Restricted Boltzmann Machine. Pattern Recognit. 2017, 61, 674–685. [Google Scholar] [CrossRef]
  47. Längkvist, M.; Karlsson, L.; Loutfi, A. A review of unsupervised feature learning and deep learning for time-series modeling. Pattern Recognit. Lett. 2014, 42, 11–24. [Google Scholar] [CrossRef]
  48. Hinton, G.E. Deep belief networks. Scholarpedia 2009, 4, 5947. [Google Scholar] [CrossRef]
  49. Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
  50. Matsugu, M.; Mori, K.; Mitari, Y.; Kaneda, Y. Subject independent facial expression recognition with robust face detection using a convolutional neural network. Neural Netw. 2003, 16, 555–559. [Google Scholar] [CrossRef]
  51. Ciregan, D.; Meier, U.; Schmidhuber, J. Multi-column deep neural networks for image classification. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 3642–3649. [Google Scholar] [CrossRef]
  52. Zeiler, M.D.; Fergus, R. Visualizing and Understanding Convolutional Networks. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2013. [Google Scholar]
  53. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv, 2014; arXiv:1409.1556. [Google Scholar]
  54. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.E.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going Deeper with Convolutions. arXiv, 2014; arXiv:1409.4842. [Google Scholar]
  55. Mamoshina, P.; Vieira, A.; Putin, E.; Zhavoronkov, A. Applications of Deep Learning in Biomedicine. Mol. Pharm. 2016, 13, 1445–1454. [Google Scholar] [CrossRef]
  56. Bagchi, A. Protein-protein Interactions: Basics, Characteristics, and Predictions. In Soft Computing for Biological Systems; Purohit, H.J., Kalia, V.C., More, R.P., Eds.; Springer: Singapore, 2018; pp. 111–120. [Google Scholar] [CrossRef]
  57. Nath, A.; Kumari, P.; Chaube, R. Prediction of Human Drug Targets and Their Interactions Using Machine Learning Methods: Current and Future Perspectives. In Computational Drug Discovery and Design; Gore, M., Jagtap, U.B., Eds.; Springer: New York, NY, USA, 2018; pp. 21–30. [Google Scholar] [CrossRef]
  58. Shehu, A.; Barbará, D.; Molloy, K. A Survey of Computational Methods for Protein Function Prediction. In Big Data Analytics in Genomics; Wong, K.C., Ed.; Springer International Publishing: Cham, Switzerland, 2016; pp. 225–298. [Google Scholar] [CrossRef]
  59. Leung, M.K.K.; Delong, A.; Alipanahi, B.; Frey, B.J. Machine Learning in Genomic Medicine: A Review of Computational Problems and Data Sets. Proc. IEEE 2016, 104, 176–197. [Google Scholar] [CrossRef]
  60. Xiong, Y.; Zhu, X.; Dai, H.; Wei, D.Q. Survey of Computational Approaches for Prediction of DNA-Binding Residues on Protein Surfaces. In Computational Systems Biology: Methods and Protocols; Huang, T., Ed.; Springer: New York, NY, USA, 2018; pp. 223–234. [Google Scholar] [CrossRef]
  61. Libbrecht, M.W.; Noble, W.S. Machine learning in genetics and genomics. Nat. Rev. Genet. 2015, 16, 321–332. [Google Scholar] [CrossRef]
  62. Khosravi, P.; Kazemi, E.; Imielinski, M.; Elemento, O.; Hajirasouliha, I. Deep Convolutional Neural Networks Enable Discrimination of Heterogeneous Digital Pathology Images. EBioMedicine 2018, 27, 317–328. [Google Scholar] [CrossRef]
  63. Veta, M.; Pluim, J.P.W.; van Diest, P.J.; Viergever, M.A. Breast Cancer Histopathology Image Analysis: A Review. IEEE Trans. Biomed. Eng. 2014, 61, 1400–1411. [Google Scholar] [CrossRef]
  64. Weese, J.; Lorenz, C. Four challenges in medical image analysis from an industrial perspective. Med. Image Anal. 2016, 33, 44–49. [Google Scholar] [CrossRef]
  65. Greenspan, H.; van Ginneken, B.; Summers, R.M. Guest Editorial Deep Learning in Medical Imaging: Overview and Future Promise of an Exciting New Technique. IEEE Trans. Med. Imaging 2016, 35, 1153–1159. [Google Scholar] [CrossRef]
  66. Wells, W.M. Medical Image Analysis—Past, present, and future. Med. Image Anal. 2016, 33, 4–6. [Google Scholar] [CrossRef]
  67. Zhang, S.; Metaxas, D. Large-Scale medical image analytics: Recent methodologies, applications and Future directions. Med. Image Anal. 2016, 33, 98–101. [Google Scholar] [CrossRef]
  68. Criminisi, A. Machine learning for medical images analysis. Med. Image Anal. 2016, 33, 91–93. [Google Scholar] [CrossRef]
  69. De Bruijne, M. Machine learning approaches in medical image analysis: From detection to diagnosis. Med. Image Anal. 2016, 33, 94–97. [Google Scholar] [CrossRef]
  70. Kergosien, Y.L.; Racoceanu, D. Semantic knowledge for histopathological image analysis: From ontologies to processing portals and deep learning. In Proceedings of the 13th International Conference on Medical Information Processing and Analysis, San Andres Island, Colombia, 5–7 October 2017; p. 105721F. [Google Scholar] [CrossRef]
  71. Gandomkar, Z.; Brennan, P.; Mello-Thoms, C. Computer-based image analysis in breast pathology. J. Pathol. Inform. 2016, 7, 43. [Google Scholar] [CrossRef]
  72. Chen, J.M.; Li, Y.; Xu, J.; Gong, L.; Wang, L.W.; Liu, W.L.; Liu, J. Computer-aided prognosis on breast cancer with hematoxylin and eosin histopathology images: A review. Tumor Biol. 2017, 39, 1010428317694550. [Google Scholar] [CrossRef]
  73. Aswathy, M.; Jagannath, M. Detection of breast cancer on digital histopathology images: Present status and future possibilities. Inform. Med. Unlocked 2016, 8, 74–79. [Google Scholar] [CrossRef]
  74. Hamilton, P.W.; Bankhead, P.; Wang, Y.; Hutchinson, R.; Kieran, D.; McArt, D.G.; James, J.; Salto-Tellez, M. Digital pathology and image analysis in tissue biomarker research. Methods 2014, 70, 59–73. [Google Scholar] [CrossRef]
  75. Irshad, H.; Veillard, A.; Roux, L.; Racoceanu, D. Methods for Nuclei Detection, Segmentation, and Classification in Digital Histopathology: A Review-Current Status and Future Potential. IEEE Rev. Biomed. Eng. 2014, 7, 97–114. [Google Scholar] [CrossRef]
  76. Saha, M.; Chakraborty, C.; Racoceanu, D. Efficient deep learning model for mitosis detection using breast histopathology images. Comput. Med. Imaging Graph. 2018, 64, 29–40. [Google Scholar] [CrossRef]
  77. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.A.; van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88. [Google Scholar] [CrossRef]
  78. Pouliakis, A.; Karakitsou, E.; Margari, N.; Bountris, P.; Haritou, M.; Panayiotides, J.; Koutsouris, D.; Karakitsos, P. Artificial Neural Networks as Decision Support Tools in Cytopathology: Past, Present, and Future. Biomed. Eng. Comput. Biol. 2016, 7, 1–18. [Google Scholar] [CrossRef]
  79. Janowczyk, A.; Madabhushi, A. Deep learning for digital pathology image analysis: A comprehensive tutorial with selected use cases. J. Pathol. Inform. 2016, 7, 29. [Google Scholar] [CrossRef]
  80. Erickson, B.J.; Korfiatis, P.; Akkus, Z.; Kline, T.L. Machine Learning for Medical Imaging. RadioGraphics 2017, 37, 505–515. [Google Scholar] [CrossRef]
  81. Akkus, Z.; Galimzianova, A.; Hoogi, A.; Rubin, D.L.; Erickson, B.J. Deep Learning for Brain MRI Segmentation: State of the Art and Future Directions. J. Digit. Imaging 2017, 30, 449–459. [Google Scholar] [CrossRef]
  82. Mahmud, M.; Vassanelli, S. Processing and Analysis of Multichannel Extracellular Neuronal Signals: State-of-the-Art and Challenges. Front. Neurosci. 2016, 10, 248. [Google Scholar] [CrossRef]
  83. Major, T.C.; Conrad, J.M. A survey of brain computer interfaces and their applications. In Proceedings of the IEEE SOUTHEASTCON 2014, Lexington, KY, USA, 13–16 March 2014; pp. 1–8. [Google Scholar] [CrossRef]
  84. Kerous, B.; Liarokapis, F. Brain-Computer Interfaces—A Survey on Interactive Virtual Environments. In Proceedings of the 2016 8th International Conference on Games and Virtual Worlds for Serious Applications (VS-GAMES), Barcelona, Spain, 7–9 September 2016; pp. 1–4. [Google Scholar] [CrossRef]
  85. Abdulkader, S.N.; Atia, A.; Mostafa, M.S.M. Brain computer interfacing: Applications and challenges. Egypt. Inform. J. 2015, 16, 213–230. [Google Scholar] [CrossRef]
  86. Tan, J.H.; Hagiwara, Y.; Pang, W.; Lim, I.; Oh, S.L.; Adam, M.; Tan, R.S.; Chen, M.; Acharya, U.R. Application of stacked convolutional and long short-term memory network for accurate identification of CAD ECG signals. Comput. Biol. Med. 2018, 94, 19–26. [Google Scholar] [CrossRef]
  87. Acharya, U.R.; Fujita, H.; Oh, S.L.; Hagiwara, Y.; Tan, J.H.; Adam, M. Application of deep convolutional neural network for automated detection of myocardial infarction using ECG signals. Inform. Sci. 2017, 415, 190–198. [Google Scholar] [CrossRef]
  88. Turner, J.T.; Page, A.; Mohsenin, T.; Oates, T. Deep Belief Networks used on High Resolution Multichannel Electroencephalography Data for Seizure Detection; CoRR: Leawood, KS, USA, 2017. [Google Scholar]
  89. Zhao, Y.; He, L. Deep learning in the EEG diagnosis of Alzheimer’s disease. In Asian Conference on Computer Vision; Springer: Cham, Switzerland, 2015; pp. 340–353. [Google Scholar] [CrossRef]
  90. Atzori, M.; Cognolato, M.; Müller, H. Deep learning with convolutional neural networks applied to EMG data: A resource for the classification of movements for prosthetic hands. Front. Neurorobot. 2016, 10, 9. [Google Scholar] [CrossRef]
  91. Geng, W.; Du, Y.; Jin, W.; Wei, W.; Hu, Y.; Li, J. Gesture recognition by instantaneous surface EMG images. Sci. Rep. 2016, 6, 36571. [Google Scholar] [CrossRef]
  92. Lotte, F.; Bougrain, L.; Cichocki, A.; Clerc, M.; Congedo, M.; Rakotomamonjy, A.; Yger, F. A review of classification algorithms for EEG-based brain-computer interfaces: A 10 year update. J. Neural Eng. 2018, 15, 031005. [Google Scholar] [CrossRef]
  93. Faust, O.; Hagiwara, Y.; Hong, T.J.; Lih, O.S.; Acharya, U.R. Deep learning for healthcare applications based on physiological signals: A review. Comput. Methods Programs Biomed. 2018, 161, 1–13. [Google Scholar] [CrossRef]
  94. Bhaskar, H.; Hoyle, D.C.; Singh, S. Machine learning in bioinformatics: A brief survey and recommendations for practitioners. Comput. Biol. Med. 2006, 36, 1104–1125. [Google Scholar] [CrossRef]
  95. Ong, B.T.; Sugiura, K.; Zettsu, K. Dynamically pre-trained deep recurrent neural networks using environmental monitoring data for predicting PM2.5. Neural Comput. Appl. 2016, 27, 1553–1566. [Google Scholar] [CrossRef]
  96. Liang, Z.; Zhang, G.; Huang, J.X.; Hu, Q.V. Deep learning for healthcare decision making with EMRs. In Proceedings of the 2014 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Belfast, UK, 2–5 November 2014; pp. 556–559. [Google Scholar] [CrossRef]
  97. Che, Z.; Purushotham, S.; Khemani, R.; Liu, Y. Distilling Knowledge from Deep Networks with Applications to Healthcare Domain. arXiv, 2015; arXiv:1512.03542. [Google Scholar]
  98. Mehrabi, S.; Sohn, S.; Li, D.; Pankratz, J.J.; Therneau, T.; Sauver, J.L.S.; Liu, H.; Palakal, M. Temporal Pattern and Association Discovery of Diagnosis Codes Using Deep Learning. In Proceedings of the 2015 International Conference on Healthcare Informatics, Dallas, TX, USA, 21–23 October 2015; pp. 408–416. [Google Scholar] [CrossRef]
  99. Lipton, Z.C.; Kale, D.C.; Elkan, C.; Wetzel, R.C. Learning to Diagnose with LSTM Recurrent Neural Networks. arXiv, 2015; arXiv:1511.03677. [Google Scholar]
  100. Zou, B.; Lampos, V.; Gorton, R.; Cox, I.J. On Infectious Intestinal Disease Surveillance Using Social Media Content. In Proceedings of the 6th International Conference on Digital Health Conference, Montréal, QC, Canada, 11–13 April 2016; ACM: New York, NY, USA, 2016; pp. 157–161. [Google Scholar] [CrossRef]
  101. Phan, N.; Dou, D.; Piniewski, B.; Kil, D. Social restricted Boltzmann Machine: Human behavior prediction in health social networks. In Proceedings of the 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), Paris, France, 25–28 August 2015; pp. 424–431. [Google Scholar] [CrossRef]
  102. Garimella, V.R.K.; Alfayad, A.; Weber, I. Social Media Image Analysis for Public Health. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, San Jose, CA, USA, 7–12 May 2016. [Google Scholar]
  103. Zhao, L.; Chen, J.; Chen, F.; Wang, W.; Lu, C.; Ramakrishnan, N. SimNest: Social Media Nested Epidemic Simulation via Online Semi-Supervised Deep Learning. In Proceedings of the 2015 IEEE International Conference on Data Mining, Atlantic City, NJ, USA, 14–17 November 2015; pp. 639–648. [Google Scholar] [CrossRef]
  104. Ibrahim, R.; Yousri, N.A.; Ismail, M.A.; El-Makky, N.M. Multi-level gene/MiRNA feature selection using deep belief nets and active learning. In Proceedings of the 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, IL, USA, 26–30 August 2014; pp. 3957–3960. [Google Scholar] [CrossRef]
  105. Fakoor, R.; Ladhak, F.; Nazi, A.; Huber, M. Using Deep Learning to Enhance Cancer Diagnosis and Classification. In Proceedings of the International Conference on Machine Learning (ICML), WHEALTH Workshop, Atlanta, GA, USA, 16–21 June 2013. [Google Scholar]
  106. Danaee, P.; Ghaeini, R.; Hendrix, D.A. A Deep Learning Approach for Cancer Detection and Relevant Gene Identification. In Proceedings of the Pacific Symposium on Biocomputing, Kohala Coast, HI, USA, 3–7 January 2017; Volume 22, pp. 219–229. [Google Scholar]
  107. Yuan, Y.; Shi, Y.; Li, C.; Kim, J.; Cai, W.; Han, Z.; Feng, D.D. DeepGene: An advanced cancer type classifier based on deep learning and somatic point mutations. BMC Bioinform. 2016, 17, 476. [Google Scholar] [CrossRef]
  108. Stormo, G.D. DNA binding sites: Representation and discovery. Bioinformatics 2000, 16, 16–23. [Google Scholar] [CrossRef]
  109. Hassanzadeh, H.R.; Wang, M.D. DeeperBind: Enhancing prediction of sequence specificities of DNA binding proteins. In Proceedings of the 2016 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Shenzhen, China, 15–18 December 2016; pp. 178–183. [Google Scholar] [CrossRef]
  110. Zeng, H.; Edwards, M.D.; Liu, G.; Gifford, D.K. Convolutional neural network architectures for predicting DNA–protein binding. Bioinformatics 2016, 32, i121–i127. [Google Scholar] [CrossRef]
  111. Zhang, S.; Zhou, J.; Hu, H.; Gong, H.; Chen, L.; Cheng, C.; Zeng, J. A deep learning framework for modeling structural features of RNA-binding protein targets. Nucleic Acids Res. 2016, 44, e32. [Google Scholar] [CrossRef] [PubMed]
  112. Pan, X.; Shen, H.B. RNA-protein binding motifs mining with a new hybrid deep learning based cross-domain knowledge integration approach. BMC Bioinform. 2017, 18, 136. [Google Scholar] [CrossRef]
  113. Pan, X.; Fan, Y.X.; Yan, J.; Shen, H.B. IPMiner: Hidden ncRNA-protein interaction sequential pattern mining with stacked autoencoder for accurate computational prediction. BMC Genom. 2016, 17, 582. [Google Scholar] [CrossRef] [PubMed]
  114. Haberal, I.; Ogul, H. DeepMBS: Prediction of Protein Metal Binding-Site Using Deep Learning Networks. In Proceedings of the 2017 Fourth International Conference on Mathematics and Computers in Sciences and in Industry (MCSI), Corfu, Greece, 24–27 August 2017; pp. 21–25. [Google Scholar] [CrossRef]
  115. Li, Y.; Shi, W.; Wasserman, W.W. Genome-wide prediction of cis-regulatory regions using supervised deep learning methods. BMC Bioinform. 2018, 19, 202. [Google Scholar] [CrossRef] [PubMed]
  116. Min, X.; Chen, N.; Chen, T.; Jiang, R. DeepEnhancer: Predicting enhancers by convolutional neural networks. In Proceedings of the 2016 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Shenzhen, China, 15–18 December 2016; pp. 637–644. [Google Scholar] [CrossRef]
  117. Liu, F.; Li, H.; Ren, C.; Bo, X.; Shu, W. PEDLA: Predicting enhancers with a deep learning-based algorithmic framework. Sci. Rep. 2016, 6, 28517. [Google Scholar] [CrossRef] [PubMed]
  118. Umarov, R.K.; Solovyev, V.V. Recognition of prokaryotic and eukaryotic promoters using convolutional deep learning neural networks. PLoS ONE 2017, 12, e0171410. [Google Scholar] [CrossRef]
  119. Qin, Q.; Feng, J. Imputation for transcription factor binding predictions based on deep learning. PLoS Comput. Biol. 2017, 13, e1005403. [Google Scholar] [CrossRef]
  120. Leung, M.K.K.; Xiong, H.Y.; Lee, L.J.; Frey, B.J. Deep learning of the tissue-regulated splicing code. Bioinformatics 2014, 30, i121–i129. [Google Scholar] [CrossRef]
  121. Dutta, A.; Dubey, T.; Singh, K.K.; Anand, A. SpliceVec: Distributed feature representations for splice junction prediction. Comput. Biol. Chem. 2018, 74, 434–441. [Google Scholar] [CrossRef]
  122. Lee, T.; Yoon, S. Boosted Categorical Restricted Boltzmann Machine for Computational Prediction of Splice Junctions. In Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 6–11 July 2015; Bach, F., Blei, D., Eds.; PMLR: Lille, France, 2015; Volume 37, pp. 2483–2492. [Google Scholar]
  123. Frasca, M.; Pavesi, G. A neural network based algorithm for gene expression prediction from chromatin structure. In Proceedings of the 2013 International Joint Conference on Neural Networks (IJCNN), Dallas, TX, USA, 4–9 August 2013; pp. 1–8. [Google Scholar] [CrossRef]
  124. Singh, R.; Lanchantin, J.; Robins, G.; Qi, Y. DeepChrome: Deep-learning for predicting gene expression from histone modifications. Bioinformatics 2016, 32, i639–i648. [Google Scholar] [CrossRef]
  125. Chen, Y.; Li, Y.; Narayan, R.; Subramanian, A.; Xie, X. Gene expression inference with deep learning. Bioinformatics 2016, 32, 1832–1839. [Google Scholar] [CrossRef]
  126. Xie, R.; Wen, J.; Quitadamo, A.; Cheng, J.; Shi, X. A deep auto-encoder model for gene expression prediction. BMC Genom. 2017, 18, 845. [Google Scholar] [CrossRef]
  127. Kelley, D.R.; Snoek, J.; Rinn, J.L. Basset: Learning the regulatory code of the accessible genome with deep convolutional neural networks. Genome Res. 2016, 26, 990–999. [Google Scholar] [CrossRef]
  128. Quang, D.; Xie, X. DanQ: A hybrid convolutional and recurrent deep neural network for quantifying the function of DNA sequences. Nucleic Acids Res. 2016, 44, e107. [Google Scholar] [CrossRef]
  129. Zhou, J.; Troyanskaya, O.G. Predicting effects of noncoding variants with deep learning–based sequence model. Nat. Methods 2015, 12, 931. [Google Scholar] [CrossRef]
  130. Liu, F.; Ren, C.; Li, H.; Zhou, P.; Bo, X.; Shu, W. De novo identification of replication-timing domains in the human genome by deep learning. Bioinformatics 2016, 32, 641–649. [Google Scholar] [CrossRef]
  131. Zeng, H.; Gifford, D.K. Predicting the impact of non-coding variants on DNA methylation. Nucleic Acids Res. 2017, 45, e99. [Google Scholar] [CrossRef]
  132. Angermueller, C.; Lee, H.J.; Reik, W.; Stegle, O. Deepcpg: Accurate prediction of single-cell dna methylation states using deep learning. Genome Biol. 2017, 18, 67. [Google Scholar] [CrossRef]
  133. Li, H.; Hou, J.; Adhikari, B.; Lyu, Q.; Cheng, J. Deep learning methods for protein torsion angle prediction. BMC Bioinform. 2017, 18, 417. [Google Scholar] [CrossRef]
  134. Hattori, L.T.; Benitez, C.M.V.; Lopes, H.S. A deep bidirectional long short-term memory approach applied to the protein secondary structure prediction problem. In Proceedings of the 2017 IEEE Latin American Conference on Computational Intelligence (LA-CCI), Arequipa, Peru, 8–10 November 2017; pp. 1–6. [Google Scholar] [CrossRef]
  135. James, L.; Abdollah, D.; Rhys, H.; Alok, S.; Kuldip, P.; Abdul, S.; Yaoqi, Z.; Yuedong, Y. Predicting backbone Cα angles and dihedrals from protein sequences by stacked sparse auto-encoder deep neural network. J. Comput. Chem. 2014, 35, 2040–2046. [Google Scholar] [CrossRef]
  136. Fang, C.; Shang, Y.; Xu, D. Prediction of Protein Backbone Torsion Angles Using Deep Residual Inception Neural Networks. IEEE/ACM Trans. Comput. Biol. Bioinform. 2018, 1. [Google Scholar] [CrossRef]
  137. Gao, Y.; Wang, S.; Deng, M.; Xu, J. RaptorX-Angle: Real-value prediction of protein backbone dihedral angles through a hybrid method of clustering and deep learning. BMC Bioinform. 2018, 19, 100. [Google Scholar] [CrossRef]
  138. Heffernan, R.; Paliwal, K.; Lyons, J.; Dehzangi, A.; Sharma, A.; Wang, J.; Sattar, A.; Yang, Y.; Zhou, Y. Improving prediction of secondary structure, local backbone angles, and solvent accessible surface area of proteins by iterative deep learning. Sci. Rep. 2015, 5, 11476. [Google Scholar] [CrossRef]
  139. Fang, C.; Shang, Y.; Xu, D. A New Deep Neighbor Residual Network for Protein Secondary Structure Prediction. In Proceedings of the 2017 IEEE 29th International Conference on Tools with Artificial Intelligence (ICTAI), Boston, MA, USA, 6–8 November 2017; pp. 66–71. [Google Scholar] [CrossRef]
  140. Hu, Y.; Nie, T.; Shen, D.; Yu, G. Sequence Translating Model Using Deep Neural Block Cascade Network: Taking Protein Secondary Structure Prediction as an Example. In Proceedings of the 2018 IEEE International Conference on Big Data and Smart Computing (BigComp), Shanghai, China, 15–17 January 2018; pp. 58–65. [Google Scholar] [CrossRef]
  141. Liu, Y.; Cheng, J.; Ma, Y.; Chen, Y. Protein secondary structure prediction based on two dimensional deep convolutional neural networks. In Proceedings of the 2017 3rd IEEE International Conference on Computer and Communications (ICCC), Chengdu, China, 13–16 December 2017; pp. 1995–1999. [Google Scholar] [CrossRef]
  142. Aydin, Z.; Uzut, O.G. Combining classifiers for protein secondary structure prediction. In Proceedings of the 2017 9th International Conference on Computational Intelligence and Communication Networks (CICN), Girne, Cyprus, 16–17 September 2017; pp. 29–33. [Google Scholar] [CrossRef]
  143. Wang, S.; Peng, J.; Ma, J.; Xu, J. Protein Secondary Structure Prediction Using Deep Convolutional Neural Fields. Sci. Rep. 2016, 6, 18962. [Google Scholar] [CrossRef]
  144. Li, Y.; Shibuya, T. Malphite: A convolutional neural network and ensemble learning based protein secondary structure predictor. In Proceedings of the 2015 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Washington, DC, USA, 9–12 November 2015; pp. 1260–1266. [Google Scholar] [CrossRef]
  145. Zhou, J.; Troyanskaya, O. Deep Supervised and Convolutional Generative Stochastic Network for Protein Secondary Structure Prediction. arXiv, 2014; arXiv:1403.1347. [Google Scholar]
  146. Chen, Y. Long sequence feature extraction based on deep learning neural network for protein secondary structure prediction. In Proceedings of the 2017 IEEE 3rd Information Technology and Mechatronics Engineering Conference (ITOEC), Chongqing, China, 3–5 October 2017; pp. 843–847. [Google Scholar] [CrossRef]
  147. Bai, L.; Yang, L. A Unified Deep Learning Model for Protein Structure Prediction. In Proceedings of the 2017 3rd IEEE International Conference on Cybernetics (CYBCONF), Exeter, UK, 21–23 June 2017; pp. 1–6. [Google Scholar] [CrossRef]
  148. Ibrahim, W.; Abadeh, M.S. Protein fold recognition using Deep Kernelized Extreme Learning Machine and linear discriminant analysis. Neural Comput. Appl. 2018, 1–14. [Google Scholar] [CrossRef]
  149. Deng, L.; Fan, C.; Zeng, Z. A sparse autoencoder-based deep neural network for protein solvent accessibility and contact number prediction. BMC Bioinform. 2017, 18, 569. [Google Scholar] [CrossRef]
  150. Li, H.; Lyu, Q.; Cheng, J. A Template-Based Protein Structure Reconstruction Method Using Deep Autoencoder Learning. J. Proteom. Bioinform. 2016, 9, 306–313. [Google Scholar] [CrossRef]
  151. Di Lena, P.; Nagata, K.; Baldi, P. Deep architectures for protein contact map prediction. Bioinformatics 2012, 28, 2449–2457. [Google Scholar] [CrossRef]
  152. Wu, H.; Wang, K.; Lu, L.; Xue, Y.; Lyu, Q.; Jiang, M. Deep Conditional Random Field Approach to Transmembrane Topology Prediction and Application to GPCR Three-Dimensional Structure Modeling. IEEE/ACM Trans. Comput. Biol. Bioinform. 2017, 14, 1106–1114. [Google Scholar] [CrossRef]
  153. Nguyen, S.P.; Shang, Y.; Xu, D. DL-PRO: A novel deep learning method for protein model quality assessment. In Proceedings of the 2014 International Joint Conference on Neural Networks (IJCNN), Beijing, China, 6–11 July 2014; pp. 2071–2078. [Google Scholar] [CrossRef]
  154. Nguyen, S.; Li, Z.; Shang, Y. Deep Networks and Continuous Distributed Representation of Protein Sequences for Protein Quality Assessment. In Proceedings of the 2017 IEEE 29th International Conference on Tools with Artificial Intelligence (ICTAI), Boston, MA, USA, 6–8 November 2017; pp. 527–534. [Google Scholar] [CrossRef]
  155. Wang, J.; Li, Z.; Shang, Y. New Deep Neural Networks for Protein Model Evaluation. In Proceedings of the 2017 IEEE 29th International Conference on Tools with Artificial Intelligence (ICTAI), Boston, MA, USA, 6–8 November 2017; pp. 309–313. [Google Scholar] [CrossRef]
  156. Li, Z.; Nguyen, S.P.; Xu, D.; Shang, Y. Protein Loop Modeling Using Deep Generative Adversarial Network. In Proceedings of the 2017 IEEE 29th International Conference on Tools with Artificial Intelligence (ICTAI), Boston, MA, USA, 6–8 November 2017; pp. 1085–1091. [Google Scholar] [CrossRef]
  157. Nguyen, S.P.; Li, Z.; Xu, D.; Shang, Y. New Deep Learning Methods for Protein Loop Modeling. IEEE/ACM Trans. Comput. Biol. Bioinform. 2017, 1. [Google Scholar] [CrossRef]
  158. Spencer, M.; Eickholt, J.; Cheng, J. A Deep Learning Network Approach to ab initio Protein Secondary Structure Prediction. IEEE/ACM Trans. Comput. Biol. Bioinform. 2015, 12, 103–112. [Google Scholar] [CrossRef]
  159. Eickholt, J.; Cheng, J. DNdisorder: Predicting protein disorder using boosting and deep networks. BMC Bioinform. 2013, 14, 88. [Google Scholar] [CrossRef]
  160. Sun, T.; Zhou, B.; Lai, L.; Pei, J. Sequence-based prediction of protein protein interaction using a deep-learning algorithm. BMC Bioinform. 2017, 18, 277. [Google Scholar] [CrossRef]
  161. Lei, H.; Wen, Y.; Elazab, A.; Tan, E.L.; Zhao, Y.; Lei, B. Protein-protein Interactions Prediction via Multimodal Deep Polynomial Network and Regularized Extreme Learning Machine. IEEE J. Biomed. Health Inform. 2018. [Google Scholar] [CrossRef]
  162. Chen, H.; Shen, J.; Wang, L.; Song, J. Leveraging Stacked Denoising Autoencoder in Prediction of Pathogen-Host Protein-Protein Interactions. In Proceedings of the 2017 IEEE International Congress on Big Data (BigData Congress), Honolulu, HI, USA, 25–30 June 2017; pp. 368–375. [Google Scholar] [CrossRef]
  163. Huang, L.; Liao, L.; Wu, C.H. Completing sparse and disconnected protein-protein network by deep learning. BMC Bioinform. 2018, 19, 103. [Google Scholar] [CrossRef]
  164. Zhao, Z.; Gong, X. Protein-protein interaction interface residue pair prediction based on deep learning architecture. IEEE/ACM Trans. Comput. Biol. Bioinform. 2018, 1. [Google Scholar] [CrossRef]
  165. Farhoodi, R.; Akbal-Delibas, B.; Haspel, N. Accurate prediction of docked protein structure similarity using neural networks and restricted Boltzmann machines. In Proceedings of the 2015 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Washington, DC, USA, 9–12 November 2015; pp. 1296–1303. [Google Scholar] [CrossRef]
  166. Han, Y.; Kim, D. Deep convolutional neural networks for pan-specific peptide-MHC class I binding prediction. BMC Bioinform. 2017, 18, 585. [Google Scholar] [CrossRef]
  167. Kuksa, P.P.; Min, M.R.; Dugar, R.; Gerstein, M. High-order neural networks and kernel methods for peptide-MHC binding prediction. Bioinformatics 2015, 31, 3600–3607. [Google Scholar] [CrossRef]
  168. Wang, L.; You, Z.H.; Chen, X.; Xia, S.X.; Liu, F.; Yan, X.; Zhou, Y. Computational Methods for the Prediction of Drug-Target Interactions from Drug Fingerprints and Protein Sequences by Stacked Auto-Encoder Deep Neural Network. In Bioinformatics Research and Applications; Cai, Z., Daescu, O., Li, M., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 46–58. [Google Scholar]
  169. Bahi, M.; Batouche, M. Drug-Target Interaction Prediction in Drug Repositioning Based on Deep Semi-Supervised Learning. In Computational Intelligence and Its Applications; Amine, A., Mouhoub, M., Ait Mohamed, O., Djebbar, B., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 302–313. [Google Scholar]
  170. Hu, P.W.; Chan, K.C.C.; You, Z.H. Large-scale prediction of drug-target interactions from deep representations. In Proceedings of the 2016 International Joint Conference on Neural Networks (IJCNN), Vancouver, BC, Canada, 24–29 July 2016; pp. 1236–1243. [Google Scholar] [CrossRef]
  171. Masatoshi, H.; Kei, T.; Hiroaki, I.; Jun, Y.; Jianguo, P.; Jinlong, H.; Yasushi, O. CGBVS-DNN: Prediction of Compound-protein Interactions Based on Deep Learning. Mol. Inform. 2016, 36, 1600045. [Google Scholar] [CrossRef]
  172. Tian, K.; Shao, M.; Zhou, S.; Guan, J. Boosting compound-protein interaction prediction by deep learning. In Proceedings of the 2015 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Washington, DC, USA, 9–12 November 2015; pp. 29–34. [Google Scholar] [CrossRef]
  173. Tian, K.; Shao, M.; Wang, Y.; Guan, J.; Zhou, S. Boosting compound-protein interaction prediction by deep learning. Methods 2016, 110, 64–72. [Google Scholar] [CrossRef]
  174. Ramadan, R.A.; Vasilakos, A.V. Brain computer interface: Control signals review. Neurocomputing 2017, 223, 26–44. [Google Scholar] [CrossRef]
  175. Al-Nafjan, A.; Hosny, M.; Al-Ohali, Y.; Al-Wabil, A. Review and Classification of Emotion Recognition Based on EEG Brain-Computer Interface System Research: A Systematic Review. Appl. Sci. 2017, 7, 1239. [Google Scholar] [CrossRef]
  176. Ahn, M.; Jun, S.C. Performance variation in motor imagery brain–computer interface: A brief review. J. Neurosci. Methods 2015, 243, 103–110. [Google Scholar] [CrossRef]
  177. Alonso-Valerdi, L.M.; Salido-Ruiz, R.A.; Ramirez-Mendoza, R.A. Motor imagery based brain–computer interfaces: An emerging technology to rehabilitate motor deficits. Neuropsychologia 2015, 79, 354–363. [Google Scholar] [CrossRef]
  178. Pattnaik, P.K.; Sarraf, J. Brain Computer Interface issues on hand movement. J. King Saud Univ. Comput. Inf. Sci. 2018, 30, 18–24. [Google Scholar] [CrossRef]
  179. McFarland, D.; Wolpaw, J. EEG-based brain–computer interfaces. Curr. Opin. Biomed. Eng. 2017, 4, 194–200. [Google Scholar] [CrossRef]
  180. Chan, H.L.; Kuo, P.C.; Cheng, C.Y.; Chen, Y.S. Challenges and Future Perspectives on Electroencephalogram-Based Biometrics in Person Recognition. Front. Neuroinform. 2018, 12, 66. [Google Scholar] [CrossRef]
  181. Längkvist, M.; Karlsson, L.; Loutfi, A. Sleep Stage Classification Using Unsupervised Feature Learning. Adv. Artif. Neural Syst. 2012, 2012, 9. [Google Scholar] [CrossRef]
  182. Li, K.; Li, X.; Zhang, Y.; Zhang, A. Affective state recognition from EEG with deep belief networks. In Proceedings of the 2013 IEEE International Conference on Bioinformatics and Biomedicine, Shanghai, China, 18–21 December 2013; pp. 305–310. [Google Scholar] [CrossRef]
  183. Jia, X.; Li, K.; Li, X.; Zhang, A. A Novel Semi-Supervised Deep Learning Framework for Affective State Recognition on EEG Signals. In Proceedings of the 2014 IEEE International Conference on Bioinformatics and Bioengineering, Boca Raton, FL, USA, 10–12 November 2014; pp. 30–37. [Google Scholar] [CrossRef]
  184. Xu, H.; Plataniotis, K.N. EEG-based affect states classification using Deep Belief Networks. In Proceedings of the 2016 Digital Media Industry Academic Forum (DMIAF), Santorini, Greece, 4–6 July 2016; pp. 148–153. [Google Scholar] [CrossRef]
  185. Zheng, W.L.; Guo, H.T.; Lu, B.L. Revealing critical channels and frequency bands for emotion recognition from EEG with deep belief network. In Proceedings of the 2015 7th International IEEE/EMBS Conference on Neural Engineering (NER), Montpellier, France, 22–24 April 2015; pp. 154–157. [Google Scholar] [CrossRef]
  186. Zheng, W.L.; Zhu, J.Y.; Peng, Y.; Lu, B.L. EEG-based emotion classification using deep belief networks. In Proceedings of the 2014 IEEE International Conference on Multimedia and Expo (ICME), Chengdu, China, 14–18 July 2014; pp. 1–6. [Google Scholar] [CrossRef]
  187. Zheng, W.L.; Lu, B.L. Investigating Critical Frequency Bands and Channels for EEG-Based Emotion Recognition with Deep Neural Networks. IEEE Trans. Auton. Ment. Dev. 2015, 7, 162–175. [Google Scholar] [CrossRef]
  188. Gao, Y.; Lee, H.J.; Mehmood, R.M. Deep learninig of EEG signals for emotion recognition. In Proceedings of the 2015 IEEE International Conference on Multimedia Expo Workshops (ICMEW), Turin, Italy, 29 June–3 July 2015; pp. 1–5. [Google Scholar] [CrossRef]
  189. Jirayucharoensak, S.; Pan-Ngum, S.; Israsena, P. EEG-based emotion recognition using deep learning network with principal component based covariate shift adaptation. Sci. World J. 2014, 2014, 627892. [Google Scholar] [CrossRef]
  190. KB, S.K.; Krishna, G.; Bhalaji, N.; Chithra, S. BCI cinematics—A pre-release analyser for movies using H2O deep learning platform. Comput. Electr. Eng. 2018, 74, 547–556. [Google Scholar] [CrossRef]
  191. Tripathi, S.; Acharya, S.; Sharma, R.; Mittal, S.; Bhattacharya, S. Using deep and convolutional neural networks for accurate emotion classification on deap dataset. In Proceedings of the Twenty-Ninth IAAI Conference, San Francisco, CA, USA, 6–9 February 2017; pp. 4746–4752. [Google Scholar]
  192. Li, J.; Zhang, Z.; He, H. Hierarchical Convolutional Neural Networks for EEG-Based Emotion Recognition. Cogn. Comput. 2018, 10, 368–380. [Google Scholar] [CrossRef]
  193. Stober, S.; Cameron, D.J.; Grahn, J.A. Classifying EEG Recordings of Rhythm Perception. In Proceedings of the 15th International Society for Music Information Retrieval Conference (ISMIR 2014), Taipei, Taiwan, 27–31 October 2014. [Google Scholar]
  194. Stober, S.; Cameron, D.J.; Grahn, J.A. Using Convolutional Neural Networks to Recognize Rhythm Stimuli from Electroencephalography Recordings. Adv. Neural Inf. Process. Syst. 2014, 27, 1449–1457. [Google Scholar]
  195. Stober, S.; Sternin, A.; Owen, A.M.; Grahn, J.A. Deep Feature Learning for EEG Recordings. arXiv, 2015; arXiv:1511.04306. [Google Scholar]
  196. Sun, X.; Qian, C.; Chen, Z.; Wu, Z.; Luo, B.; Pan, G. Remembered or Forgotten?—An EEG-Based Computational Prediction Approach. PLoS ONE 2016, 11, 1–20. [Google Scholar] [CrossRef]
  197. Wand, M.; Schultz, T. Pattern learning with deep neural networks in EMG-based speech recognition. In Proceedings of the 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, IL, USA, 26–30 August 2014; pp. 4200–4203. [Google Scholar] [CrossRef]
  198. Cecotti, H.; Graeser, A. Convolutional Neural Network with embedded Fourier Transform for EEG classification. In Proceedings of the 2008 19th International Conference on Pattern Recognition, Tampa, FL, USA, 8–11 December 2008; pp. 1–4. [Google Scholar] [CrossRef]
  199. Bashivan, P.; Rish, I.; Yeasin, M.; Codella, N. Learning Representations from EEG with Deep Recurrent-Convolutional Neural Networks. arXiv, 2015; arXiv:1511.06448. [Google Scholar]
  200. Spampinato, C.; Palazzo, S.; Kavasidis, I.; Giordano, D.; Souly, N.; Shah, M. Deep Learning Human Mind for Automated Visual Classification. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4503–4511. [Google Scholar] [CrossRef]
  201. Kwak, N.S.; Müller, K.R.; Lee, S.W. A convolutional neural network for steady state visual evoked potential classification under ambulatory environment. PLoS ONE 2017, 12, 1–20. [Google Scholar] [CrossRef]
  202. Thomas, J.; Maszczyk, T.; Sinha, N.; Kluge, T.; Dauwels, J. Deep learning-based classification for brain-computer interfaces. In Proceedings of the 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Banff, AB, Canada, 5–8 October 2017; pp. 234–239. [Google Scholar] [CrossRef]
  203. Ahn, M.H.; Min, B.K. Applying deep-learning to a top-down SSVEP BMI. In Proceedings of the 2018 6th International Conference on Brain-Computer Interface (BCI), GangWon, Korea, 15–17 January 2018; pp. 1–3. [Google Scholar] [CrossRef]
  204. Ma, T.; Li, H.; Yang, H.; Lv, X.; Li, P.; Liu, T.; Yao, D.; Xu, P. The extraction of motion-onset VEP BCI features based on deep learning and compressed sensing. J. Neurosci. Methods 2017, 275, 80–92. [Google Scholar] [CrossRef]
  205. Zhao, S.; Rudzicz, F. Classifying phonological categories in imagined and articulated speech. In Proceedings of the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brisbane, Australia, 19–24 April 2015; pp. 992–996. [Google Scholar] [CrossRef]
  206. Ahmed, S.; Merino, L.M.; Mao, Z.; Meng, J.; Robbins, K.; Huang, Y. A Deep Learning method for classification of images RSVP events with EEG data. In Proceedings of the 2013 IEEE Global Conference on Signal and Information Processing, Austin, TX, USA, 3–5 December 2013; pp. 33–36. [Google Scholar] [CrossRef]
  207. Jiao, Z.; Gao, X.; Wang, Y.; Li, J.; Xu, H. Deep Convolutional Neural Networks for mental load classification based on EEG data. Pattern Recognit. 2018, 76, 582–595. [Google Scholar] [CrossRef]
  208. Cecotti, H.; Graser, A. Convolutional Neural Networks for P300 Detection with Application to Brain-Computer Interfaces. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 433–445. [Google Scholar] [CrossRef]
  209. Kshirsagar, G.B.; Londhe, N.D. Deep convolutional neural network based character detection in devanagari script input based P300 speller. In Proceedings of the 2017 International Conference on Electrical, Electronics, Communication, Computer, and Optimization Techniques (ICEECCOT), Mysuru, India, 15–16 December 2017; pp. 507–511. [Google Scholar] [CrossRef]
  210. Liu, M.; Wu, W.; Gu, Z.; Yu, Z.; Qi, F.; Li, Y. Deep learning based on Batch Normalization for P300 signal detection. Neurocomputing 2018, 275, 288–297. [Google Scholar] [CrossRef]
  211. Lawhern, V.J.; Solon, A.J.; Waytowich, N.R.; Gordon, S.M.; Hung, C.P.; Lance, B.J. EEGNet: A Compact Convolutional Network for EEG-based Brain-Computer Interfaces. J. Neural Eng. 2018, 15, 056013. [Google Scholar] [CrossRef]
  212. Nashed, N.N.; Eldawlatly, S.; Aly, G.M. A deep learning approach to single-trial classification for P300 spellers. In Proceedings of the 2018 IEEE 4th Middle East Conference on Biomedical Engineering (MECBME), Tunis, Tunisia, 28–30 March 2018; pp. 11–16. [Google Scholar] [CrossRef]
  213. Manor, R.; Geva, A.B. Convolutional Neural Network for Multi-Category Rapid Serial Visual Presentation BCI. Front. Comput. Neurosci. 2015, 9, 146. [Google Scholar] [CrossRef]
  214. Hajinoroozi, M.; Mao, Z.; Huang, Y. Prediction of driver’s drowsy and alert states from EEG signals with deep learning. In Proceedings of the 2015 IEEE 6th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), Cancun, Mexico, 13–16 December 2015; pp. 493–496. [Google Scholar] [CrossRef]
  215. Mao, Z.; Yao, W.X.; Huang, Y. EEG-based biometric identification with deep learning. In Proceedings of the 2017 8th International IEEE/EMBS Conference on Neural Engineering (NER), Shanghai, China, 25–28 May 2017; pp. 609–612. [Google Scholar] [CrossRef]
  216. Hajinoroozi, M.; Mao, Z.; Jung, T.P.; Lin, C.T.; Huang, Y. EEG-based prediction of driver’s cognitive performance by deep convolutional neural network. Signal Process. Image Commun. 2016, 47, 549–555. [Google Scholar] [CrossRef]
  217. Chai, R.; Ling, S.H.; San, P.P.; Naik, G.R.; Nguyen, T.N.; Tran, Y.; Craig, A.; Nguyen, H.T. Improving EEG-Based Driver Fatigue Classification Using Sparse-Deep Belief Networks. Front. Neurosci. 2017, 11, 103. [Google Scholar] [CrossRef]
  218. Deep Feature Learning Using Target Priors with Applications in ECoG Signal Decoding for BCI; AAAI Press: Menlo Park, CA, USA, 2013.
  219. Yang, H.; Sakhavi, S.; Ang, K.K.; Guan, C. On the use of convolutional neural networks and augmented CSP features for multi-class motor imagery of EEG signals classification. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 2620–2623. [Google Scholar]
  220. Hartmann, K.G.; Schirrmeister, R.T.; Ball, T. Hierarchical internal representation of spectral features in deep convolutional networks trained for EEG decoding. In Proceedings of the 2018 6th International Conference on Brain-Computer Interface (BCI), GangWon, Korea, 15–17 January 2018; pp. 1–6. [Google Scholar] [CrossRef]
  221. Sakhavi, S.; Guan, C.; Yan, S. Learning Temporal Information for Brain-Computer Interface Using Convolutional Neural Networks. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 1–11. [Google Scholar] [CrossRef]
  222. Carvalho, S.R.; Filho, I.C.; Resende, D.O.D.; Siravenha, A.C.; Souza, C.D.; Debarba, H.G.; Gomes, B.; Boulic, R. A Deep Learning Approach for Classification of Reaching Targets from EEG Images. In Proceedings of the 2017 30th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), Niteroi, Brazil, 17–20 October 2017; pp. 178–184. [Google Scholar] [CrossRef]
  223. Tabar, Y.R.; Halici, U. A novel deep learning approach for classification of EEG motor imagery signals. J. Neural Eng. 2017, 14, 016003. [Google Scholar] [CrossRef]
  224. Ren, Y.; Wu, Y. Convolutional deep belief networks for feature extraction of EEG signal. In Proceedings of the 2014 International Joint Conference on Neural Networks (IJCNN), Beijing, China, 6–11 July 2014; pp. 2850–2853. [Google Scholar] [CrossRef]
  225. Jingwei, L.; Yin, C.; Weidong, Z. Deep learning EEG response representation for brain computer interface. In Proceedings of the 2015 34th Chinese Control Conference (CCC), Hangzhou, China, 28–30 July 2015; pp. 3518–3523. [Google Scholar] [CrossRef]
  226. Sakhavi, S.; Guan, C.; Yan, S. Parallel convolutional-linear neural network for motor imagery classification. In Proceedings of the 2015 23rd European Signal Processing Conference (EUSIPCO), Nice, France, 31 August–4 September 2015; pp. 2736–2740. [Google Scholar] [CrossRef]
  227. Tang, Z.; Li, C.; Sun, S. Single-trial EEG classification of motor imagery using deep convolutional neural networks. Opt. Int. J. Light Electron. Opt. 2017, 130, 11–18. [Google Scholar] [CrossRef]
  228. Lu, N.; Li, T.; Ren, X.; Miao, H. A Deep Learning Scheme for Motor Imagery Classification based on Restricted Boltzmann Machines. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 566–576. [Google Scholar] [CrossRef]
  229. An, X.; Kuang, D.; Guo, X.; Zhao, Y.; He, L. A Deep Learning Method for Classification of EEG Data Based on Motor Imagery. In Intelligent Computing in Bioinformatics; Huang, D.S., Han, K., Gromiha, M., Eds.; Springer International Publishing: Cham, Switzerland, 2014; pp. 203–210. [Google Scholar] [CrossRef]
  230. Li, J.; Cichocki, A. Deep Learning of Multifractal Attributes from Motor Imagery Induced EEG. In Neural Information Processing; Loo, C.K., Yap, K.S., Wong, K.W., Teoh, A., Huang, K., Eds.; Springer International Publishing: Cham, Switzerland, 2014; pp. 503–510. [Google Scholar] [CrossRef]
  231. Kumar, S.; Sharma, A.; Mamun, K.; Tsunoda, T. A Deep Learning Approach for Motor Imagery EEG Signal Classification. In Proceedings of the 2016 3rd Asia-Pacific World Congress on Computer Science and Engineering (APWC on CSE), Nadi, Fiji, 5–6 December 2016; pp. 34–39. [Google Scholar] [CrossRef]
  232. Sturm, I.; Lapuschkin, S.; Samek, W.; Müller, K.R. Interpretable deep neural networks for single-trial EEG classification. J. Neurosci. Methods 2016, 274, 141–145. [Google Scholar] [CrossRef]
  233. Hennrich, J.; Herff, C.; Heger, D.; Schultz, T. Investigating deep learning for fNIRS based BCI. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 2844–2847. [Google Scholar] [CrossRef]
  234. Völker, M.; Schirrmeister, R.T.; Fiederer, L.D.J.; Burgard, W.; Ball, T. Deep transfer learning for error decoding from non-invasive EEG. In Proceedings of the 2018 6th International Conference on Brain-Computer Interface (BCI), GangWon, Korea, 15–17 January 2018; pp. 1–6. [Google Scholar] [CrossRef]
  235. Huve, G.; Takahashi, K.; Hashimoto, M. Brain activity recognition with a wearable fNIRS using neural networks. In Proceedings of the 2017 IEEE International Conference on Mechatronics and Automation (ICMA), Takamatsu, Japan, 6–9 August 2017; pp. 1573–1578. [Google Scholar] [CrossRef]
  236. Yin, Z.; Zhao, M.; Wang, Y.; Yang, J.; Zhang, J. Recognition of emotions using multimodal physiological signals and an ensemble deep learning model. Comput. Methods Programs Biomed. 2017, 140, 93–110. [Google Scholar] [CrossRef]
  237. Du, L.H.; Liu, W.; Zheng, W.L.; Lu, B.L. Detecting driving fatigue with multimodal deep learning. In Proceedings of the 2017 8th International IEEE/EMBS Conference on Neural Engineering (NER), Shanghai, China, 25–28 May 2017; pp. 74–77. [Google Scholar] [CrossRef]
  238. Acharya, U.R.; Oh, S.L.; Hagiwara, Y.; Tan, J.H.; Adeli, H.; Subha, D.P. Automated EEG-based screening of depression using deep convolutional neural network. Comput. Methods Programs Biomed. 2018, 161, 103–113. [Google Scholar] [CrossRef]
  239. Fraiwan, L.; Lweesy, K. Neonatal sleep state identification using deep learning autoencoders. In Proceedings of the 2017 IEEE 13th International Colloquium on Signal Processing its Applications (CSPA), Batu Ferringhi, Malaysia, 10–12 March 2017; pp. 228–231. [Google Scholar] [CrossRef]
  240. Wu, Z.; Ding, X.; Zhang, G. A Novel Method for Classification of ECG Arrhythmias Using Deep Belief Networks. Int. J. Comput. Intell. Appl. 2016, 15, 1650021. [Google Scholar] [CrossRef]
  241. Rahhal, M.A.; Bazi, Y.; AlHichri, H.; Alajlan, N.; Melgani, F.; Yager, R. Deep learning approach for active classification of electrocardiogram signals. Inf. Sci. 2016, 345, 340–354. [Google Scholar] [CrossRef]
  242. Sannino, G.; Pietro, G.D. A deep learning approach for ECG-based heartbeat classification for arrhythmia detection. Future Gener. Comput. Syst. 2018, 86, 446–455. [Google Scholar] [CrossRef]
  243. Acharya, U.R.; Oh, S.L.; Hagiwara, Y.; Tan, J.H.; Adam, M.; Gertych, A.; Tan, R.S. A deep convolutional neural network model to classify heartbeats. Comput. Biol. Med. 2017, 89, 389–396. [Google Scholar] [CrossRef]
  244. Kiranyaz, S.; Ince, T.; Gabbouj, M. Real-Time Patient-Specific ECG Classification by 1-D Convolutional Neural Networks. IEEE Trans. Biomed. Eng. 2016, 63, 664–675. [Google Scholar] [CrossRef]
  245. Acharya, U.R.; Fujita, H.; Lih, O.S.; Hagiwara, Y.; Tan, J.H.; Adam, M. Automated detection of arrhythmias using different intervals of tachycardia ECG segments with convolutional neural network. Inf. Sci. 2017, 405, 81–90. [Google Scholar] [CrossRef]
  246. Acharya, U.R.; Fujita, H.; Oh, S.L.; Raghavendra, U.; Tan, J.H.; Adam, M.; Gertych, A.; Hagiwara, Y. Automated identification of shockable and non-shockable life-threatening ventricular arrhythmias using convolutional neural network. Future Gener. Comput. Syst. 2018, 79, 952–959. [Google Scholar] [CrossRef]
  247. Majumdar, A.; Ward, R. Robust greedy deep dictionary learning for ECG arrhythmia classification. In Proceedings of the 2017 International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA, 14–19 May 2017; pp. 4400–4407. [Google Scholar] [CrossRef]
248. Zheng, G.; Ji, S.; Dai, M.; Sun, Y. ECG Based Identification by Deep Learning. In Biometric Recognition; Zhou, J., Wang, Y., Sun, Z., Xu, Y., Shen, L., Feng, J., Shan, S., Qiao, Y., Guo, Z., Yu, S., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 503–510. [Google Scholar] [CrossRef]
  249. Luo, K.; Li, J.; Wang, Z.; Cuschieri, A. Patient-Specific Deep Architectural Model for ECG Classification. J. Healthc. Eng. 2017, 2017, 13. [Google Scholar] [CrossRef]
  250. Huanhuan, M.; Yue, Z. Classification of Electrocardiogram Signals with Deep Belief Networks. In Proceedings of the 2014 IEEE 17th International Conference on Computational Science and Engineering, Chengdu, China, 19–21 December 2014; pp. 7–12. [Google Scholar] [CrossRef]
  251. Yan, Y.; Qin, X.; Wu, Y.; Zhang, N.; Fan, J.; Wang, L. A restricted Boltzmann machine based two-lead electrocardiography classification. In Proceedings of the 2015 IEEE 12th International Conference on Wearable and Implantable Body Sensor Networks (BSN), Cambridge, MA, USA, 9–12 June 2015; pp. 1–9. [Google Scholar] [CrossRef]
  252. Taji, B.; Chan, A.D.C.; Shirmohammadi, S. Classifying measured electrocardiogram signal quality using deep belief networks. In Proceedings of the 2017 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), Turin, Italy, 22–25 May 2017; pp. 1–6. [Google Scholar] [CrossRef]
  253. Mirowski, P.; Madhavan, D.; LeCun, Y.; Kuzniecky, R. Classification of patterns of EEG synchronization for seizure prediction. Clin. Neurophysiol. 2009, 120, 1927–1940. [Google Scholar] [CrossRef]
  254. Mirowski, P.W.; LeCun, Y.; Madhavan, D.; Kuzniecky, R. Comparing SVM and convolutional networks for epileptic seizure prediction from intracranial EEG. In Proceedings of the 2008 IEEE Workshop on Machine Learning for Signal Processing, Cancun, Mexico, 16–19 October 2008; pp. 244–249. [Google Scholar] [CrossRef]
  255. Thodoroff, P.; Pineau, J.; Lim, A. Learning Robust Features using Deep Learning for Automatic Seizure Detection. In Machine Learning for Healthcare Conference; Doshi-Velez, F., Fackler, J., Kale, D., Wallace, B., Wiens, J., Eds.; PMLR: Los Angeles, CA, USA, 2016; Volume 56, pp. 178–190. [Google Scholar]
  256. Acharya, U.R.; Oh, S.L.; Hagiwara, Y.; Tan, J.H.; Adeli, H. Deep convolutional neural network for the automated detection and diagnosis of seizure using EEG signals. Comput. Biol. Med. 2017, 100, 270–278. [Google Scholar] [CrossRef]
  257. Antoniades, A.; Spyrou, L.; Took, C.C.; Sanei, S. Deep learning for epileptic intracranial EEG data. In Proceedings of the 2016 IEEE 26th International Workshop on Machine Learning for Signal Processing (MLSP), Vietri sul Mare, Italy, 13–16 September 2016; pp. 1–6. [Google Scholar] [CrossRef]
  258. Liang, J.; Lu, R.; Zhang, C.; Wang, F. Predicting Seizures from Electroencephalography Recordings: A Knowledge Transfer Strategy. In Proceedings of the 2016 IEEE International Conference on Healthcare Informatics (ICHI), Chicago, IL, USA, 4–7 October 2016; pp. 184–191. [Google Scholar] [CrossRef]
  259. Page, A.; Shea, C.; Mohsenin, T. Wearable seizure detection using convolutional neural networks with transfer learning. In Proceedings of the 2016 IEEE International Symposium on Circuits and Systems (ISCAS), Montreal, QC, Canada, 22–25 May 2016; pp. 1086–1089. [Google Scholar] [CrossRef]
  260. Hosseini, M.P.; Tran, T.X.; Pompili, D.; Elisevich, K.; Soltanian-Zadeh, H. Deep Learning with Edge Computing for Localization of Epileptogenicity Using Multimodal rs-fMRI and EEG Big Data. In Proceedings of the 2017 IEEE International Conference on Autonomic Computing (ICAC), Columbus, OH, USA, 17–21 July 2017; pp. 83–92. [Google Scholar] [CrossRef]
  261. Wulsin, D.F.; Gupta, J.R.; Mani, R.; Blanco, J.A.; Litt, B. Modeling electroencephalography waveforms with semi-supervised deep belief nets: Fast classification and anomaly measurement. J. Neural Eng. 2011, 8, 036015. [Google Scholar] [CrossRef]
262. Wulsin, D.; Blanco, J.; Mani, R.; Litt, B. Semi-Supervised Anomaly Detection for EEG Waveforms Using Deep Belief Nets. In Proceedings of the 2010 Ninth International Conference on Machine Learning and Applications, Washington, DC, USA, 12–14 December 2010; pp. 436–441. [Google Scholar] [CrossRef]
  263. San, P.P.; Ling, S.H.; Nguyen, H.T. Deep learning framework for detection of hypoglycemic episodes in children with type 1 diabetes. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 3503–3506. [Google Scholar] [CrossRef]
  264. Van Putten, M.J.A.M.; Hofmeijer, J.; Ruijter, B.J.; Tjepkema-Cloostermans, M.C. Deep Learning for outcome prediction of postanoxic coma. In EMBEC & NBC 2017; Eskola, H., Väisänen, O., Viik, J., Hyttinen, J., Eds.; Springer: Singapore, 2018; pp. 506–509. [Google Scholar] [CrossRef]
  265. Pourbabaee, B.; Roshtkhari, M.J.; Khorasani, K. Deep Convolutional Neural Networks and Learning ECG Features for Screening Paroxysmal Atrial Fibrillation Patients. IEEE Trans. Syst. Man Cybern. Syst. 2017, 48, 1–10. [Google Scholar] [CrossRef]
  266. Cheng, M.; Sori, W.J.; Jiang, F.; Khan, A.; Liu, S. Recurrent Neural Network Based Classification of ECG Signal Features for Obstruction of Sleep Apnea Detection. In Proceedings of the 2017 IEEE International Conference on Computational Science and Engineering (CSE) and IEEE International Conference on Embedded and Ubiquitous Computing (EUC), Guangzhou, China, 21–24 July 2017; Volume 2, pp. 199–202. [Google Scholar] [CrossRef]
  267. Shashikumar, S.P.; Shah, A.J.; Li, Q.; Clifford, G.D.; Nemati, S. A deep learning approach to monitoring and detecting atrial fibrillation using wearable technology. In Proceedings of the 2017 IEEE EMBS International Conference on Biomedical Health Informatics (BHI), Orlando, FL, USA, 16–19 February 2017; pp. 141–144. [Google Scholar] [CrossRef]
  268. Muduli, P.R.; Gunukula, R.R.; Mukherjee, A. A deep learning approach to fetal-ECG signal reconstruction. In Proceedings of the 2016 Twenty Second National Conference on Communication (NCC), Guwahati, India, 4–6 March 2016; pp. 1–6. [Google Scholar] [CrossRef]
  269. Zhu, X.; Zheng, W.L.; Lu, B.L.; Chen, X.; Chen, S.; Wang, C. EOG-based drowsiness detection using convolutional neural networks. In Proceedings of the 2014 International Joint Conference on Neural Networks (IJCNN), Beijing, China, 6–11 July 2014; pp. 128–134. [Google Scholar] [CrossRef]
  270. Schirrmeister, R.T.; Springenberg, J.T.; Fiederer, L.D.J.; Glasstetter, M.; Eggensperger, K.; Tangermann, M.; Hutter, F.; Burgard, W.; Ball, T. Deep learning with convolutional neural networks for EEG decoding and visualization. Hum. Brain Mapp. 2017, 38, 5391–5420. [Google Scholar] [CrossRef]
  271. Christodoulidis, S.; Anthimopoulos, M.; Ebner, L.; Christe, A.; Mougiakakou, S. Multisource Transfer Learning with Convolutional Neural Networks for Lung Pattern Analysis. IEEE J. Biomed. Health Inform. 2017, 21, 76–84. [Google Scholar] [CrossRef]
  272. Chang, H.; Han, J.; Zhong, C.; Snijders, A.M.; Mao, J.H. Unsupervised Transfer Learning via Multi-Scale Convolutional Sparse Coding for Biomedical Applications. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 1182–1194. [Google Scholar] [CrossRef]
  273. Chen, H.; Ni, D.; Qin, J.; Li, S.; Yang, X.; Wang, T.; Heng, P.A. Standard Plane Localization in Fetal Ultrasound via Domain Transferred Deep Neural Networks. IEEE J. Biomed. Health Inform. 2015, 19, 1627–1636. [Google Scholar] [CrossRef]
  274. Van Opbroek, A.; Ikram, M.A.; Vernooij, M.W.; de Bruijne, M. Transfer Learning Improves Supervised Image Segmentation Across Imaging Protocols. IEEE Trans. Med. Imaging 2015, 34, 1018–1030. [Google Scholar] [CrossRef]
  275. Tan, C.; Sun, F.; Zhang, W. Deep Transfer Learning for EEG-Based Brain Computer Interface. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018; pp. 916–920. [Google Scholar] [CrossRef]
276. Zemouri, R.; Omri, N.; Fnaiech, F.; Zerhouni, N.; Fnaiech, N. A New Growing Pruning Deep Learning Neural Network Algorithm (GP-DLNN). Neural Comput. Appl. 2019, in press. [Google Scholar]
  277. Selvaraju, R.R.; Das, A.; Vedantam, R.; Cogswell, M.; Parikh, D.; Batra, D. Grad-CAM: Why did you say that? Visual Explanations from Deep Networks via Gradient-based Localization. In Proceedings of the NIPS 2016 Workshop on Interpretable Machine Learning for Complex Systems, Barcelona, Spain, 9 December 2016. [Google Scholar]
278. Zeiler, M.D.; Fergus, R. Visualizing and Understanding Convolutional Networks. In Computer Vision—ECCV 2014; Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T., Eds.; Springer International Publishing: Cham, Switzerland, 2014; pp. 818–833. [Google Scholar]
  279. Montavon, G.; Lapuschkin, S.; Binder, A.; Samek, W.; Müller, K.R. Explaining nonlinear classification decisions with deep Taylor decomposition. Pattern Recognit. 2017, 65, 211–222. [Google Scholar] [CrossRef]
  280. Van der Maaten, L.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
  281. Creswell, A.; White, T.; Dumoulin, V.; Arulkumaran, K.; Sengupta, B.; Bharath, A.A. Generative Adversarial Networks: An Overview. IEEE Signal Process. Mag. 2018, 35, 53–65. [Google Scholar] [CrossRef]
282. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. In Advances in Neural Information Processing Systems 27; Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N.D., Weinberger, K.Q., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2014; pp. 2672–2680. [Google Scholar]
  283. Liu, W.; Luo, Z.; Li, S. Improving deep ensemble vehicle classification by using selected adversarial samples. Knowl.-Based Syst. 2018, 160, 167–175. [Google Scholar] [CrossRef]
  284. Li, Y.; Xiao, N.; Ouyang, W. Improved Generative Adversarial Networks with Reconstruction Loss. Neurocomputing 2018, 323, 363–372. [Google Scholar] [CrossRef]
  285. Ji, Y.; Zhang, H.; Wu, Q.J. Saliency detection via conditional adversarial image-to-image network. Neurocomputing 2018, 316, 357–368. [Google Scholar] [CrossRef]
  286. Kiasari, M.A.; Moirangthem, D.S.; Lee, M. Coupled generative adversarial stacked Auto-encoder: CoGASA. Neural Netw. 2018, 100, 1–9. [Google Scholar] [CrossRef]
287. Dhamala, J.; Ghimire, S.; Sapp, J.L.; Horáček, B.M.; Wang, L. High-Dimensional Bayesian Optimization of Personalized Cardiac Model Parameters via an Embedded Generative Model. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 499–507. [Google Scholar]
288. Ghimire, S.; Dhamala, J.; Gyawali, P.K.; Sapp, J.L.; Horacek, M.; Wang, L. Generative Modeling and Inverse Imaging of Cardiac Transmembrane Potential. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 508–516. [Google Scholar]
289. Ren, J.; Hacihaliloglu, I.; Singer, E.A.; Foran, D.J.; Qi, X. Adversarial Domain Adaptation for Classification of Prostate Histopathology Whole-Slide Images. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 201–209. [Google Scholar]
290. Han, L.; Yin, Z. A Cascaded Refinement GAN for Phase Contrast Microscopy Image Super Resolution. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 347–355. [Google Scholar]
291. Wang, J.; Zhao, Y.; Noble, J.H.; Dawant, B.M. Conditional Generative Adversarial Networks for Metal Artifact Reduction in CT Images of the Ear. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 3–11. [Google Scholar]
  292. Wang, Y.; Yu, B.; Wang, L.; Zu, C.; Lalush, D.S.; Lin, W.; Wu, X.; Zhou, J.; Shen, D.; Zhou, L. 3D conditional generative adversarial networks for high-quality PET image estimation at low dose. NeuroImage 2018, 174, 550–562. [Google Scholar] [CrossRef]
293. Liao, H.; Huo, Z.; Sehnert, W.J.; Zhou, S.K.; Luo, J. Adversarial Sparse-View CBCT Artifact Reduction. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 154–162. [Google Scholar]
294. Chen, Y.; Shi, F.; Christodoulou, A.G.; Xie, Y.; Zhou, Z.; Li, D. Efficient and Accurate MRI Super-Resolution Using a Generative Adversarial Network and 3D Multi-level Densely Connected Network. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 91–99. [Google Scholar]
  295. Mishra, D.; Chaudhury, S.; Sarkar, M.; Soin, A.S. Ultrasound Image Enhancement Using Structure Oriented Adversarial Network. IEEE Signal Process. Lett. 2018, 25, 1349–1353. [Google Scholar] [CrossRef]
  296. Wolterink, J.M.; Leiner, T.; Viergever, M.A.; Išgum, I. Generative Adversarial Networks for Noise Reduction in Low-Dose CT. IEEE Trans. Med. Imaging 2017, 36, 2536–2545. [Google Scholar] [CrossRef]
297. Cerrolaza, J.J.; Li, Y.; Biffi, C.; Gomez, A.; Sinclair, M.; Matthew, J.; Knight, C.; Kainz, B.; Rueckert, D. 3D Fetal Skull Reconstruction from 2DUS via Deep Conditional Generative Networks. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 383–391. [Google Scholar]
298. Seitzer, M.; Yang, G.; Schlemper, J.; Oktay, O.; Würfl, T.; Christlein, V.; Wong, T.; Mohiaddin, R.; Firmin, D.; Keegan, J.; et al. Adversarial and Perceptual Refinement for Compressed Sensing MRI Reconstruction. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 232–240. [Google Scholar]
299. Zhang, P.; Wang, F.; Xu, W.; Li, Y. Multi-channel Generative Adversarial Network for Parallel Magnetic Resonance Image Reconstruction in K-space. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 180–188. [Google Scholar]
  300. Quan, T.M.; Nguyen-Duc, T.; Jeong, W. Compressed Sensing MRI Reconstruction Using a Generative Adversarial Network With a Cyclic Loss. IEEE Trans. Med. Imaging 2018, 37, 1488–1497. [Google Scholar] [CrossRef]
  301. Zhao, H.; Li, H.; Maurer-Stroh, S.; Cheng, L. Synthesizing retinal and neuronal images with generative adversarial nets. Med. Image Anal. 2018, 49, 14–26. [Google Scholar] [CrossRef]
  302. Frid-Adar, M.; Diamant, I.; Klang, E.; Amitai, M.; Goldberger, J.; Greenspan, H. GAN-based synthetic medical image augmentation for increased CNN performance in liver lesion classification. Neurocomputing 2018, 321, 321–331. [Google Scholar] [CrossRef]
303. Paschali, M.; Conjeti, S.; Navarro, F.; Navab, N. Generalizability vs. Robustness: Investigating Medical Imaging Networks Using Adversarial Examples. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 493–501. [Google Scholar]
304. Jin, D.; Xu, Z.; Tang, Y.; Harrison, A.P.; Mollura, D.J. CT-Realistic Lung Nodule Simulation from 3D Conditional Generative Adversarial Networks for Robust Lung Segmentation. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 732–740. [Google Scholar]
305. Chen, J.; Xie, Y.; Wang, K.; Wang, Z.H.; Lahoti, G.; Zhang, C.; Vannan, M.A.; Wang, B.; Qian, Z. Generative Invertible Networks (GIN): Pathophysiology-Interpretable Feature Mapping and Virtual Patient Generation. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 537–545. [Google Scholar]
306. Shams, S.; Platania, R.; Zhang, J.; Kim, J.; Lee, K.; Park, S.J. Deep Generative Breast Cancer Screening and Diagnosis. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 859–867. [Google Scholar]
307. Mahapatra, D.; Bozorgtabar, B.; Thiran, J.P.; Reyes, M. Efficient Active Learning for Image Classification and Segmentation Using a Sample Selection and Conditional Generative Adversarial Network. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 580–588. [Google Scholar]
308. Pan, Y.; Liu, M.; Lian, C.; Zhou, T.; Xia, Y.; Shen, D. Synthesizing Missing PET from MRI with Cycle-consistent Generative Adversarial Networks for Alzheimer’s Disease Diagnosis. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 455–463. [Google Scholar]
309. Fan, J.; Cao, X.; Xue, Z.; Yap, P.T.; Shen, D. Adversarial Similarity Network for Evaluating Image Alignment in Deep Learning Based Registration. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 739–746. [Google Scholar]
310. Hu, Y.; Gibson, E.; Ghavami, N.; Bonmati, E.; Moore, C.M.; Emberton, M.; Vercauteren, T.; Noble, J.A.; Barratt, D.C. Adversarial Deformation Regularization for Training Image Registration Neural Networks. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 774–782. [Google Scholar]
311. Dong, S.; Luo, G.; Wang, K.; Cao, S.; Mercado, A.; Shmuilovich, O.; Zhang, H.; Li, S. VoxelAtlasGAN: 3D Left Ventricle Segmentation on Echocardiography with Atlas Guided Generation and Voxel-to-Voxel Discrimination. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 622–629. [Google Scholar]
312. Zhao, M.; Wang, L.; Chen, J.; Nie, D.; Cong, Y.; Ahmad, S.; Ho, A.; Yuan, P.; Fung, S.H.; Deng, H.H.; et al. Craniomaxillofacial Bony Structures Segmentation from MRI with Deep-Supervision Adversarial Learning. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 720–727. [Google Scholar]
  313. Li, Y.; Shen, L. cC-GAN: A Robust Transfer-Learning Framework for HEp-2 Specimen Image Segmentation. IEEE Access 2018, 6, 14048–14058. [Google Scholar] [CrossRef]
  314. Han, Z.; Wei, B.; Mercado, A.; Leung, S.; Li, S. Spine-GAN: Semantic segmentation of multiple spinal structures. Med. Image Anal. 2018, 50, 23–35. [Google Scholar] [CrossRef]
315. Jiang, J.; Hu, Y.C.; Tyagi, N.; Zhang, P.; Rimner, A.; Mageras, G.S.; Deasy, J.O.; Veeraraghavan, H. Tumor-Aware, Adversarial Domain Adaptation from CT to MRI for Lung Cancer Segmentation. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 777–785. [Google Scholar]
316. Singh, V.K.; Romani, S.; Rashwan, H.A.; Akram, F.; Pandey, N.; Sarker, M.M.K.; Abdulwahab, S.; Torrents-Barrena, J.; Saleh, A.; Arquez, M.; et al. Conditional Generative Adversarial and Convolutional Networks for X-ray Breast Mass Segmentation and Shape Classification. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 833–840. [Google Scholar]
317. Biffi, C.; Oktay, O.; Tarroni, G.; Bai, W.; De Marvao, A.; Doumou, G.; Rajchl, M.; Bedair, R.; Prasad, S.; Cook, S.; et al. Learning Interpretable Anatomical Features Through Deep Generative Models: Application to Cardiac Remodeling. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 464–471. [Google Scholar]
318. Xu, C.; Xu, L.; Brahm, G.; Zhang, H.; Li, S. MuTGAN: Simultaneous Segmentation and Quantification of Myocardial Infarction Without Contrast Agents via Joint Adversarial Learning. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 525–534. [Google Scholar]
319. Zhang, Y.; Miao, S.; Mansi, T.; Liao, R. Task Driven Generative Modeling for Unsupervised Domain Adaptation: Application to X-ray Image Segmentation. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 599–607. [Google Scholar]
320. Giger, A.; Sandkühler, R.; Jud, C.; Bauman, G.; Bieri, O.; Salomir, R.; Cattin, P.C. Respiratory Motion Modelling Using cGANs. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 81–88. [Google Scholar]
Figure 1. Distribution of published papers involving deep learning in biomedical applications. The statistics are obtained from Google Scholar; the search phrase is defined as the subfield name with deep learning, e.g., “genomics” and “deep learning”.
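For clarity, the short sketch below shows how such search phrases can be composed; the subfield list is only an illustration, and the counts in Figure 1 were read from the Google Scholar result pages rather than retrieved programmatically.

```python
# Illustrative sketch of the Figure 1 query construction (our own, hypothetical
# subfield list); as_ylo/as_yhi are standard Google Scholar URL parameters.
from urllib.parse import urlencode

subfields = ["genomics", "bioimaging", "medical imaging",
             "brain machine interface", "health management"]

for year in (2016, 2017, 2018):
    for name in subfields:
        params = {
            "q": f'"{name}" "deep learning"',  # subfield name with deep learning
            "as_ylo": year,                    # restrict the results
            "as_yhi": year,                    # to a single year
        }
        print(f"https://scholar.google.com/scholar?{urlencode(params)}")
```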
Figure 2. The different levels of the biomedical applications associated with each biomedical sub-field.
Figure 3. Contributions and viewpoints of each survey paper recently published in the field of deep learning applied to biomedical applications.
Figure 4. Artificial neural networks.
Figure 5. The loss function and the backward propagation of its gradient. (A) The predicted label is compared with the true label for the current set of model weights $\theta_w$ to compute the output error, also called the loss function $L(\theta_w)$. (B) The learning rate $\eta$ sets the size of each update step. If the learning rate is too low (small step), convergence may take very long, with a high risk of getting stuck in a local optimum. Conversely, a learning rate that is too high (large step) can prevent the learning algorithm from converging at all. (C) For deep architectures, the magnitude of the backpropagated error derivative decreases rapidly across the layers, so the weights of the first layers are only slightly updated.
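To make the two failure modes of panels (B) and (C) concrete, here is a minimal NumPy sketch of our own (not taken from the figure); the toy loss, the learning rates and the layer count are illustrative assumptions.

```python
import numpy as np

# (B) Gradient descent on the toy loss L(theta) = theta^2, with dL/dtheta = 2*theta.
for eta in (0.01, 0.4, 1.1):              # too low / adequate / too high
    theta = 5.0
    for _ in range(20):
        theta -= eta * 2.0 * theta        # theta <- theta - eta * dL/dtheta
    print(f"eta = {eta}: theta after 20 steps = {theta:.4g}")
# eta = 0.01 creeps toward the minimum, eta = 0.4 converges,
# and eta = 1.1 overshoots the valley at every step and diverges.

# (C) Vanishing gradient: the derivative of the sigmoid is at most 0.25,
# so the backpropagated error shrinks multiplicatively, layer after layer.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

grad, x = 1.0, 0.5                        # output-layer error, fixed pre-activation
for layer in range(10, 0, -1):
    grad *= sigmoid(x) * (1.0 - sigmoid(x))   # chain rule through one sigmoid layer
print(f"|gradient| reaching the first layer: {grad:.3e}")  # around 5e-07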
Figure 6. The correlation between the size of the training data and NN parameters.
Figure 7. Most popular neural network architectures used in biomedical applications: (A–C) feedforward neural networks of several depths (one hidden layer, two hidden layers and a deep architecture with many hidden layers); (D,E) an Auto-Encoder and a Deep Auto-Encoder, respectively; (F,G) representations of a restricted Boltzmann machine and a deep belief network, respectively; and (H) an AlexNet [42] convolutional neural network.
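As a concrete counterpart to panel (D), the following Keras sketch builds the simplest possible auto-encoder; the framework choice, the layer sizes and the random data are our own assumptions, since the figure does not prescribe any.

```python
# Minimal auto-encoder sketch (hypothetical sizes and toy data).
import numpy as np
from tensorflow import keras

input_dim, code_dim = 64, 8                  # illustrative dimensions

inputs = keras.Input(shape=(input_dim,))
code = keras.layers.Dense(code_dim, activation="relu")(inputs)       # encoder
outputs = keras.layers.Dense(input_dim, activation="sigmoid")(code)  # decoder

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")  # reconstruction error

x = np.random.rand(256, input_dim).astype("float32")       # toy unlabeled data
autoencoder.fit(x, x, epochs=5, batch_size=32, verbose=0)  # target = input
```

Stacking additional Dense layers in both the encoder and the decoder yields the deep auto-encoder of panel (E).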
Figure 8. Around the genome.
Figure 9. Protein Structure Prediction (PSP).
Figure 10. Protein Interaction Prediction (PIP).
Figure 11. Distribution of papers reviewed in this survey.
Figure 12. Distribution of the Omics papers reviewed in this survey.
Figure 13. Distribution of the BBMI papers reviewed in this survey.
Figure 14. Distribution of the deep learning architectures used in the papers reviewed in this survey.
Figure 15. Distribution of published papers that use Generative Adversarial Networks (GANs) in biomedical applications. The statistics are obtained from Google Scholar.
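Since Figures 14 and 15 report a fast-growing use of GANs, a minimal sketch of the adversarial training loop introduced in [282] may help; the architecture sizes, toy data and hyper-parameters below are illustrative assumptions, not a reproduction of any reviewed model.

```python
# Minimal GAN training-loop sketch (hypothetical sizes, toy "real" data).
import numpy as np
from tensorflow import keras

latent_dim, data_dim = 16, 64

generator = keras.Sequential([
    keras.Input(shape=(latent_dim,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(data_dim, activation="sigmoid"),
])
discriminator = keras.Sequential([
    keras.Input(shape=(data_dim,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),     # P(input is real)
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Combined model for the generator update: freeze the discriminator weights
# *after* it has been compiled, so only this combined model treats it as fixed.
discriminator.trainable = False
gan = keras.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

real = np.random.rand(128, data_dim).astype("float32")  # toy "real" samples
for step in range(100):
    z = np.random.normal(size=(64, latent_dim)).astype("float32")
    fake = generator.predict(z, verbose=0)
    # 1. Train the discriminator to separate real (label 1) from fake (label 0).
    discriminator.train_on_batch(real[:64], np.ones((64, 1)))
    discriminator.train_on_batch(fake, np.zeros((64, 1)))
    # 2. Train the generator to make the frozen discriminator output "real".
    gan.train_on_batch(z, np.ones((64, 1)))
```

The generator is only ever updated through the frozen discriminator, which is what turns the two networks into adversaries.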
Table 1. Survey papers on deep learning in Omics.

| Ref. | Year of Publication | # of Citations | Research Topics |
|---|---|---|---|
| [60] | 2018 | 1 | Brief survey on machine learning used for the prediction of DNA-binding residues on protein surfaces |
| [55] | 2016 | 122 | Biomarkers, genOmics, transcriptOmics, proteOmics; structural biology and chemistry; drug discovery and repurposing; multiplatform data (MultiOmics); challenges and limitations of deep learning systems |
| [59] | 2016 | 58 | A complete survey paper on how machine learning can be used to solve key problems in genomics |
| [61] | 2015 | 304 | An overview of machine learning as applied to practical problems and major challenges in genOmics |
Table 2. Survey papers on Deep Learning in bio and medical imaging.

| Ref. | Year of Publication | # of Citations | Research Topics |
|---|---|---|---|
| [77] | 2017 | 563 | A complete survey on deep learning in bio and medical imaging: image/exam classification; object or lesion classification; organ, region and landmark localization; organ and substructure segmentation; lesion segmentation; anatomical application areas (brain, eye, chest, digital pathology and microscopy, breast, cardiac, abdomen, musculoskeletal); anatomical/cell structure detection; tissue segmentation; computer-aided disease diagnosis/prognosis |
| [78] | 2016 | 18 | ANNs as decision support tools in cytopathology (past, present, and future): gynecological cytopathology and the PAPNET system; cytopathology of the gastrointestinal system, the thyroid gland, the breast, the urinary system, and effusions |
| [79] | 2016 | 130 | DL for digital pathology image analysis: use cases in nuclei, epithelium, tubule and invasive ductal carcinoma segmentation; lymphocyte and mitosis detection; lymphoma subtype classification |
| [80] | 2017 | 55 | Machine learning for medical imaging |
| [81] | 2017 | 58 | DL for brain magnetic resonance imaging (MRI) segmentation: state of the art and future directions |
Table 3. Survey papers on deep learning in brain and body machine interfaces.

| Ref. | Year of Publication | # of Citations | Research Topics |
|---|---|---|---|
| [92] | 2018 | 19 | A review of classification algorithms for EEG-based brain–computer interfaces: a 10-year update |
| [93] | 2018 | 21 | A complete survey on DL for healthcare applications based on physiological signals |
| [94] | 2006 | 85 | Machine learning in bioinformatics: a brief survey and recommendations for practitioners |
Table 4. Overview of papers using DL techniques for brain decoding applications.

| Purpose | Signal Type | NN Type | Ref. |
|---|---|---|---|
| Steady state evoked potentials (SSEP) applications | | | |
| Human’s affective and emotion states classification | EEG | DBN | [182,183,184,185,186,187,188] |
| | EEG | DMLP | [189,190] |
| | EEG | CNN | [191,192] |
| Auditory evoked potential | EEG | CNN | [193,194,195,196] |
| | EMG | DMLP | [197] |
| Visual evoked potential | EEG | CNN | [198,199,200,201,202] |
| | EEG | DMLP | [203] |
| | EEG | DBN | [204,205,206] |
| Classification of mental load | EEG | CNN | [207] |
| P300 applications | | | |
| The P300 speller | EEG | CNN | [208,209,210,211] |
| | EEG | DNN | [212] |
| Rapid serial visual presentation | EEG | CNN | [213] |
| Driver fatigue classification | EEG | CNN | [214,215,216] |
| | EEG | DBN | [217] |
| Motor and sensorimotor rhythms (MSR) applications | | | |
| Motor imagery tasks | ECoG | CNN | [218] |
| | EEG | CNN | [219,220,221,222,223,224,225,226,227] |
| | EEG | DBN | [228,229] |
| | EEG | DMLP | [230,231,232] |
| Non-motor cognitive tasks | | | |
| Mental arithmetics (MA), word generation (WG) and mental rotation (MR) | fNIR | DMLP | [233] |
| Mental task | EEG | CNN | [234] |
| Mental subtractions | fNIR | DMLP | [235] |
| Hybrid signals | | | |
| Sleep stage classification | EEG, EOG, EMG | DBN | [181] |
| Emotion analysis | EEG, EOG, EMG, skin temperature, GSR, blood volume pressure, and respiration signals | DMLP | [236] |
| Detecting driving fatigue | EEG, EOG | DMLP | [237] |
Table 5. Overview of papers using deep learning techniques for disease diagnosis.

| Purpose | Signal Type | NN Type | Ref. |
|---|---|---|---|
| Automated detection of arrhythmia and cardiac abnormalities | ECG | DBN | [240] |
| | ECG | DNN | [241,242] |
| | ECG | CNN | [243,244,245,246] |
| | ECG | DNN | [247,248,249] |
| | ECG | DBN | [250,251,252] |
| Automated detection and diagnosis of seizure | EEG | DBN | [88] |
| | EEG | CNN | [253,254,255,256,257,258,259] |
| Localization of epileptogenicity | EEG | CNN | [260] |
| Epileptiform discharge anomaly detection | EEG | DBN | [261] |
| Alzheimer’s disease diagnosis | EEG | DBN | [89] |
| Neurophysiological clinical monitoring | EEG | DBN | [262] |
| Detection of hypoglycemic episodes in children | ECG | DBN | [263] |
| Automated detection of myocardial infarction | EEG | CNN | [87] |
| Screening of depression | EEG | CNN | [238] |
| Outcome prediction of postanoxic coma | EEG | CNN | [264] |
| Coronary artery disease (CAD) diagnosis | EEG | CNN | [86] |
| Screening paroxysmal atrial fibrillation | ECG | CNN | [265] |
| Detection of sleep apnea | ECG | DNN | [266] |
| Detection of atrial fibrillation | ECG | CNN | [267] |
| Fetal electrocardiogram (FECG) monitoring | ECG | DNN | [268] |
| Drowsiness detection | EOG | CNN | [269] |
| Neonatal sleep state identification | EEG | DNN | [239] |
