Machine Learning (ML) is a subfield of Artificial Intelligence that seeks to endow computers with the capacity to learn from data, so that explicit programming is not necessary to perform a task. ML algorithms allow computers to extract information and infer patterns from recorded data, so that computers can learn from previous examples to make good predictions about new ones. ML algorithms have been successfully applied to a variety of computational tasks in many fields. Pharmacology and bioinformatics are “hot topics” for these techniques because of the complexity of their tasks. For example, in bioinformatics, ML methods are applied to protein structure prediction and to the mining of genomic (and other omics) data. In the case of pharmacology, these methods are used to discover, design and prioritize bioactive compounds, which can be candidates for new drugs [1
]. Moreover, ML can be helpful to analyze clinical studies of these compounds, optimize drug forms, and evaluate drug quality [2
]. The development of a drug has different phases: in the first step, a set of molecular representations, or descriptors, is selected. These descriptors represent the relevant properties of the molecules of interest. The encoded molecules are compared to one another using a metric or scoring scheme. Next, the data set is usually divided into three parts: a training set, a validation set and a test set. The final step involves the use of ML methods to extract features of interest that can help to differentiate active compounds from inactive ones. Quantitative Structure–Activity Relationship (QSAR) modeling is used to find relationships between the structure of a compound and its activity, both biological and physicochemical [4
]. There are similar mathematical models that look for other relationships, such as Quantitative Structure–Property Relationship (QSPR), Quantitative Structure–Toxicity Relationship (QSTR) or Quantitative Structure–Pharmacokinetic Relationship (QSPkR) [5].
It is of major importance to select the right descriptors to extract valuable features from the input data. The accuracy of these data, and the statistical tools used, are also relevant in the development process [4
]. Over the past decades, the ML techniques used in pharmaceutical and bioinformatics applications were “shallow”, with only a few layers of feature transformations. Some of the most used algorithms are principal component analysis, k-means clustering, decision trees, Support Vector Machines (SVMs) and Artificial Neural Networks (ANNs) [1].
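As a minimal illustration of the pipeline described above, the following hedged scikit-learn sketch encodes molecules as a descriptor matrix, splits the data into training, validation and test sets, and trains a shallow classifier (an SVM) to separate active from inactive compounds. The random descriptor data and all names are placeholders, not taken from any cited study:

```python
# Sketch only: random numbers stand in for real molecular descriptors.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(42)
n_molecules, n_descriptors = 500, 32

# Hypothetical descriptor matrix (e.g., logP, molecular weight, atom counts)
X = rng.standard_normal((n_molecules, n_descriptors))
y = rng.integers(0, 2, n_molecules)          # 1 = active, 0 = inactive

# Split into training (60%), validation (20%) and test (20%) sets
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# A "shallow" ML model, as used before the deep learning era
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X_train, y_train)

print("validation accuracy:", clf.score(X_val, y_val))
print("test accuracy:", clf.score(X_test, y_test))
```

With real data, the validation set would guide descriptor and hyperparameter selection, and the test set would be held out for the final evaluation.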
ANNs have been applied to pharmacology and bioinformatics for more than two decades. Historically, the first report on the application of ANNs in these fields was published by Qian and Sejnowski in 1988 [6
]. They used ANNs for the prediction of the protein secondary structure. In 1990, Aoyama et al. presented the first report on the application of ANNs to QSAR [7
], whereas in 1993, Wikel and Dow described an application of ANNs to the descriptor-pruning step of QSAR [8
]. One example of an effective application of ANNs was the descriptor selection process for a data set of HIV-1 reverse transcriptase inhibitors [9
]. Kovalishyn et al. developed a pruning method based on an ANN trained with the Cascade-Correlation learning method in 1998 [10
]. These are some examples of early applications of ANNs, but huge advances have since been made in these ML techniques. To gain a historical perspective, and to understand in detail the applications of ANNs, and other ML algorithms, to pharmacology and bioinformatics, the reader is referred to these reviews [1].
Although ANNs were soon identified as useful tools for pharmacology and bioinformatics, SVMs and random forests made great progress, dominating the field until recently. The reasons for the limited application of ANNs were the “scarcity” of data, the difficulty of interpreting the extracted features, and the computational cost entailed by network training. Over the past decade, DNNs have become the state-of-the-art ML algorithms in speech recognition, computer vision, natural language processing and many other tasks. This was made possible by the advances in Big Data and Deep Learning, and by the exponential increase in chip processing capabilities, especially of GPGPUs.
The term Big Data reflects the exponential growth of data: 90% of the data in the world today has been created in the last two years alone. This data explosion is transforming the way research is conducted, making it necessary to acquire skills in the use of Big Data to solve complex problems related to scientific discovery, biomedical research, education, health and national security, among others. In genomic medicine, this can be illustrated by the fact that sequencing the first human genome cost nearly $3 billion, whereas today it can be done for less than $1000. In cancer research, the large volumes of data produced by researchers can be analyzed to support new discoveries. Multiple protein sequences can be analyzed to determine evolutionary links and predict molecular structures. In medicine and bioinformatics, there are numerous opportunities to make the most of the huge amount of data available. Some of the challenges include developing safer drugs, reducing the costs of clinical trials, exploring new alternatives, such as novel antibiotics, to fight resistant microorganisms, and, finally, extracting valuable information from the vast amount of data generated by public health systems.
In order to make the most of the huge amount of information available, different data analysis software frameworks, such as Hadoop, have been created [19
]. These frameworks allow simple programming models to be used to process large data sets across thousands of computers. Figure 1
shows a general workflow for Big Data.
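To give a flavor of the “simple programming models” these frameworks expose, the following is a loose, single-machine Python sketch of the MapReduce pattern popularized by Hadoop, here counting word frequencies; real Hadoop jobs are typically written in Java and run distributed across a cluster:

```python
from collections import defaultdict

def map_phase(document):
    # Emit a (key, 1) pair for every word in the document
    for word in document.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    # Group all values by key, as the framework does between phases
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Aggregate the values for one key
    return (key, sum(values))

documents = ["deep learning for drug design",
             "deep networks learning protein structure"]

pairs = [pair for doc in documents for pair in map_phase(doc)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts)   # e.g., {'deep': 2, 'learning': 2, ...}
```

The appeal of the model is that the map and reduce functions contain no distribution logic at all; the framework handles partitioning, scheduling and fault tolerance.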
DL is a new area of ML research, inspired by the brain, in which data abstractions are created by multiple stages of processing. DL algorithms allow high-level abstractions to be built from the data, which is helpful for automatic feature extraction and for pattern analysis/classification. A key aspect of DL was the development of unsupervised training methods to make the best use of the huge amount of unlabeled data available [11
]. Deep Feedforward Neural Networks (DFNN), Deep Belief Networks (DBN), Deep AutoEncoder Networks, Deep Boltzmann Machines (DBM), Deep Convolutional Neural Networks (DCNN) and Deep Recurrent Neural Networks (DRNN) are examples of artificial neural networks with deep learning. They have been applied to fields such as computer vision, automatic speech recognition and natural language processing, where they have been shown to produce state-of-the-art results on multiple tasks (see Table 1
). The idea of building DNNs is not new, but there was a historical obstacle, the so-called “vanishing gradient problem” [20
]. It is difficult to train these types of large networks with several layers when the backpropagation algorithm is used to optimize the weights, because the gradients that propagate backwards rapidly diminish in magnitude as the depth of the network increases; thus, the weights in the early layers change very slowly [21].
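The effect is easy to reproduce numerically. The sketch below (an illustrative toy, not any specific network from the literature) backpropagates an error signal through a stack of sigmoid layers with small random weights and prints the gradient norm at each layer:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

depth, width = 10, 50
weights = [0.1 * rng.standard_normal((width, width)) for _ in range(depth)]

# Forward pass, keeping pre-activations for the backward pass
a = rng.standard_normal(width)
pre_acts = []
for W in weights:
    z = W @ a
    pre_acts.append(z)
    a = sigmoid(z)

# Backward pass: delta_l = W_l^T (delta_{l+1} * sigmoid'(z_l))
delta = np.ones(width)   # error signal at the output
for layer in reversed(range(depth)):
    z = pre_acts[layer]
    delta = weights[layer].T @ (delta * sigmoid(z) * (1 - sigmoid(z)))
    print(f"layer {layer}: |gradient| = {np.linalg.norm(delta):.2e}")
```

Because the derivative of the sigmoid never exceeds 0.25, each layer multiplies the gradient by a small factor, so the norm shrinks rapidly toward the input layers, which is exactly the vanishing gradient problem described above.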
DNNs have become the leading ML technology for a range of applications since Geoffrey Hinton examined the issues around training large networks [22
], and came up with a new approach that reduced the cost of training these networks [23
]. Over the past decade, a variety of algorithms and techniques have been developed to design and train different DNN architectures [25].
Finally, GPUs were created to process graphics, especially for gaming and design. Some researchers programmed GPUs through graphics APIs, but this was a difficult task [33
]. In 2007, NVIDIA published “Compute Unified Device Architecture” (CUDA) [34
], a C-based programming language for developing GPGPU applications. CUDA allows researchers to make the most of the computing capabilities of GPUs for parallel programming. Nowadays, almost every supercomputer in the TOP500 combines CPUs and GPUs [35
]. GPUs are beneficial for DL because the training of DNNs is very computationally intensive; this training can be parallelized with GPUs, obtaining performance improvements greater than 10×. However, the ongoing work on the design and construction of neuromorphic chips should be pointed out, as they represent a more efficient way to implement DNNs [36
]. Neuromorphic chips attempt to mimic the neuronal architectures present in the brain in order to reduce energy consumption by several orders of magnitude and to improve the performance of information processing. However, to run DNNs on a neuromorphic chip, they must be mapped onto a Spiking Neural Network (SNN) [37].
In this review, the main architectures of DNNs and their applications in pharmacology and bioinformatics are presented. The future need for neuromorphic hardware for DNNs is also discussed, and the two most advanced chips that have already implemented DL are reviewed: IBM TrueNorth and SpiNNaker. In addition, this work points out the importance of considering astrocytes in DNNs and neuromorphic chips, given the proven importance of this type of glial cell in the brain.
3. Neuromorphic Chips
Since Alan Turing laid the foundations of modern computing, the progress in computer science has been remarkable. This progress was predicted by Gordon Moore in 1965, who foretold that the number of transistors that could be manufactured on a single silicon chip would double every 18 months to two years. This is known as Moore’s Law, and over the past five decades it has been fulfilled by making transistors increasingly smaller. As CMOS transistors get smaller, they become cheaper to make, faster, and more energy-efficient. This win-win scenario has driven society into a digital era in which computers play a key role in almost every aspect of our lives [22].
However, Moore’s Law has limitations when it comes to shrinking transistors: there is a physical limit set by the size of the atom. At this scale, around 1 nm, the properties of the semiconductor material in the active region of a transistor are compromised by quantum effects like quantum tunneling. In addition, there are also other limitations, such as the energy wall [69
] and memory wall [71
], which denote the high power density and low memory bandwidth [72
]. There are also economic limitations, since the cost of designing a chip and the cost of building a fabrication facility are growing alarmingly [74].
Trying to avoid some of these limitations, in the early years of this century all of the major microprocessor manufacturers moved from ever-faster clock speeds to multicore processors. Over the past decade, instead of creating faster single-processor machines, new systems have included more processors per chip. Now we have CPUs with multiple cores, and GPUs with thousands of cores [22].
As already stated, DNNs have become the state-of-the-art ML algorithms in many tasks. However, both training and execution of large-scale DNNs require vast computing resources, leading to high power requirements and communication overheads. The ongoing work on the design and construction of neuromorphic chips, the spike-based hardware platforms that grew out of the book on VLSI (Very Large Scale Integration) written by Lynn Conway and Carver Mead and published in the 1980s [75
], offers an alternative by running DNNs with significantly lower power consumption. However, neuromorphic chips have to overcome hardware limitations in terms of noise and limited weight precision, as well as the noise inherent in the sensor signal [36
]. Moreover, it is necessary to design the structure, neurons, network input, and weights of a DNN during training so that these networks can be efficiently mapped onto SNNs on the neuromorphic chips (see Figure 9).
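One common mapping strategy is rate coding, in which each analog activation is replaced by a proportional spike rate. The toy sketch below illustrates the general idea only (it is not the TrueNorth or SpiNNaker toolchain), converting the ReLU activations of one trained layer into Poisson-like spike trains:

```python
import numpy as np

rng = np.random.default_rng(1)

# Activations of one trained DNN layer (placeholder values)
activations = np.array([0.0, 0.2, 0.5, 1.0])

T = 1000            # simulation time steps
max_rate = 0.2      # maximum firing probability per time step

# Rate coding: spike probability proportional to the normalized activation
rates = max_rate * activations / activations.max()
spikes = rng.random((T, activations.size)) < rates   # Poisson-like trains

# The empirical firing rates recover the original activations (up to scale)
print("target rates:   ", rates)
print("measured rates: ", spikes.mean(axis=0))
```

The longer the spike trains run, the more closely the measured rates approximate the original activations, which is the trade-off between latency and accuracy in such conversions.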
Focusing on projects involving neuromorphic hardware, the IBM TrueNorth chip [77
] is one of the most impressive silicon implementations of DNNs. SpiNNaker [78], a project developed by the University of Manchester, also achieved excellent results implementing DNNs. Both
chips are digital: they compute information using the binary system. However, some neuromorphic chips are analog: they consist of neuromorphic hardware elements where information is processed with analog signals; that is, they do not operate with binary values, as information is processed with continuous values [22
]. In analog chips, there is no separation between hardware and software, because the hardware configuration is in charge of performing all the computation and can modify itself [79
]. A good example is the HiCANN chip, developed at the University of Heidelberg, which uses wafer-scale above-threshold analog circuits [80
]. There are also hybrid neuromorphic chips, like the Neurogrid from Stanford [81
], which seek to make the most of each type of computing, usually processing in analog and communicating in digital. This review will focus only on the digital neuromorphic chips, IBM TrueNorth and SpiNNaker, because they are the most advanced projects, have obtained the best results implementing DNNs, and have published the largest number of technical papers. For further details about other projects and the differences between digital, analog and hybrid neuromorphic chips, the reader should refer to other reviews [82].
3.1. TrueNorth. International Business Machines (IBM)
The DARPA SyNAPSE (System of Neuromorphic Adaptive Plastic Scalable Electronics) initiative selected and funded the proposal “Cognitive Computing via Synaptronics and Supercomputing (C2S2)” of the Cognitive Computing Group at IBM Research-Almaden directed by Dharmendra Modha [77
]. The project is based on the design and creation of a neuromorphic chip called TrueNorth, which has a non-von Neumann architecture. It is characterized by modularity, parallelism and scalability, and it is inspired by the brain in its function, low power consumption, and compact volume (see Figure 10
). This chip can be used to integrate spatio-temporal and real-time cognitive algorithms for different applications [84
]. In the final phase of the project, the researchers created a board with 16 TrueNorth neuromorphic chips, capable of simulating 16 million neurons and four billion synapses. In 2015, they assembled a system consisting of 128 chips and 128 million neurons [85
]. The next goal is to integrate 4096 chips into a single rack, which would represent four billion neurons and one trillion synapses, consuming around 4 kW of power [86].
The TrueNorth prototype was created in 2011 [87
], and it was a neurosynaptic core with 256 digital leaky integrate-and-fire neurons [37
] and up to 256,000 synapses. The core is composed of memory and processor, and communication takes place through all-or-none spike events. This allows an efficient implementation of parallel asynchronous communication and Address Event Representation (AER) [88
]. In this communication system, each neuron has a unique identifier, called an address, and when a neuron spikes, its address is sent to other neurons. In 2012, Compass [90
] was developed, a simulator to design neural networks to be implemented in the neuromorphic chip. Compass is a multi-threaded, massively parallel functional simulator and a parallel compiler. It uses the C++ language, sends spike events via MPI communication and uses OpenMP for thread-level parallelism. A simulator for GPGPU [91
] was also developed. In 2007, Modha’s team simulated the brain of a rat on an IBM BlueGene/L supercomputer [92
]. In 2010, they simulated a monkey’s brain [93
] in IBM BlueGene/P supercomputers, using a network map of long-distance neural connections in the brain obtained from 410 anatomical studies (Collation of Connectivity data on the Macaque brain). Later that same year, they published the results of a simulation with Compass of 2048 million neurosynaptic cores, 5.4 × 10¹¹ neurons and 1.37 × 10¹⁴ synapses. The execution was 1542 times slower than real time, and 1.5 million Blue Gene/Q processor cores were needed.
A program in the TrueNorth chips consists of a definition of the inputs and outputs to the network and of the topology of the network of neurosynaptic cores. The parameters of the neurons and the synaptic weights must be specified, as well as the inter- and intra-core connectivity [84].
The programming paradigm has four levels. The lowest level is the corelet, which represents an abstraction of a TrueNorth program as a black box, showing only the inputs and outputs and hiding all other details. The next level is the Corelet Language, which allows the creation and combination of corelets. The validated corelets are included in the Corelet Library and can be reused to create new corelets; this library is like a repository and makes up the third level. The last level is the Corelet Laboratory, a programming environment for developing new applications, integrated with Compass, the TrueNorth simulator [84].
The Corelet Library has a collection of functions that have been implemented, verified and parameterized for the TrueNorth chip. Some examples are algebraic, logical and temporal functions, convolutions, discrete Fourier transforms and many others. Using these functions, different algorithms were implemented on the TrueNorth chip, like CNNs (see Figure 11
) and Restricted Boltzmann Machines for feature extraction, hidden Markov models, spectral content estimators, liquid state machines, looming detectors, logistic regression, backpropagation and some others. A corelet algorithm can be re-used in different applications, and there are different corelet implementations of the same algorithm, showing the flexibility of corelet construction [76].
TrueNorth was used in different applications, such as recognition of voices, composers, digits, sequences, emotions or eyes. It was also used in collision avoidance and optical flow [96].
TrueNorth was also applied to bioinformatics by a group from the University of Pittsburgh, who used the RS130 protein secondary structure data set to predict the local conformation of the polypeptide chain, classifying it into three classes: α-helices, β-sheets, and coils [74].
3.2. SpiNNaker. University of Manchester
SpiNNaker is a project developed at the University of Manchester, whose principal investigator is Steve B. Furber [78
]. Within this project, chips containing many small CPUs were produced. Each CPU is designed to simulate about 1000 neurons, using neural models such as the leaky integrate-and-fire or Izhikevich models [37
], and to communicate spike events to other CPUs through network packets. Each chip consists of 18 ARM968 processors, one of them acting as a monitor processor. In 2015, a cabinet with 5760 chips was created, which can simulate 100 million point neurons with approximately 1000 synapses per neuron [98
]. The chips are connected to adjacent chips by a two-dimensional toroidal mesh network, and each chip has six network ports [99
]. This system is expected to mimic the features of biological neural networks in various ways: (1) native parallelism—each neuron is a primitive computational element within a massively parallel system [102
]; (2) spiking communications—the system uses AER, thus the information flow in a network is represented as a time series of neural identifiers [103
]; (3) event-driven behavior—to reduce power consumption, the hardware is put into “sleep” mode while waiting for an event; (4) distributed memory—the system uses memory local to each of the cores and an SDRAM local to each chip; and (5) reconfigurability—the SpiNNaker architecture allows on-the-fly reconfiguration [104].
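Features (1)–(3) can be illustrated with a toy, event-driven leaky integrate-and-fire simulation in which spikes are routed by neuron address, AER-style. This is a loose sketch of the principle, not SpiNNaker code; the network topology and all parameter values are arbitrary:

```python
import numpy as np
from collections import deque

n = 5                         # number of LIF neurons
v = np.zeros(n)               # membrane potentials
threshold, leak, w = 1.0, 0.9, 1.2
targets = {i: [(i + 1) % n] for i in range(n)}   # simple ring network

events = deque([(0, 0)])      # (time step, source address): AER packets
for t in range(50):
    v *= leak                             # leaky decay each step
    while events and events[0][0] == t:   # deliver this step's packets
        _, addr = events.popleft()
        for tgt in targets[addr]:
            v[tgt] += w                   # synaptic input
    fired = np.where(v >= threshold)[0]
    for addr in fired:                    # emit only addresses, not values
        events.append((t + 1, addr))
        print(f"t={t}: neuron {addr} spiked")
    v[fired] = 0.0                        # reset after spike
```

Note that only neuron addresses travel through the network, and computation happens only when an event arrives, which is what makes this style of hardware so power-efficient.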
In order to configure a large number of cores, with millions of neurons and synapses, PACMAN [105
] was developed. It is a software tool that helps the user to create models and to translate and run them on SpiNNaker. This allows the user to work with neural languages like PyNN [106
] or Nengo [107].
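For illustration, the following PyNN script sketches how a small spiking network might be described and run. The overall structure follows the PyNN 0.8 API, but the parameter values are arbitrary, and the sPyNNaker backend import is an assumption about the installed toolchain:

```python
import pyNN.spiNNaker as sim   # assumption: sPyNNaker backend installed
# (the same script runs on other backends, e.g., import pyNN.nest as sim)

sim.setup(timestep=1.0)        # 1 ms resolution

# 100 Poisson spike sources driving 100 leaky integrate-and-fire neurons
stimulus = sim.Population(100, sim.SpikeSourcePoisson(rate=10.0))
neurons = sim.Population(100, sim.IF_curr_exp(tau_m=20.0, v_thresh=-50.0))

sim.Projection(stimulus, neurons, sim.OneToOneConnector(),
               synapse_type=sim.StaticSynapse(weight=0.5, delay=1.0))

neurons.record("spikes")
sim.run(1000.0)                # simulate one second

spike_data = neurons.get_data("spikes")
sim.end()
```

The same model description can thus be simulated in software or placed onto SpiNNaker hardware, with PACMAN handling the mapping in the latter case.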
SpiNNaker was created to simulate models in real time, but the algorithms had to be defined during the design process; therefore, the models were static. In 2013, a paper [109
] was published in which a novel learning rule was presented, together with its implementation in the SpiNNaker system; this allows the use of the Neural Engineering Framework to establish a supervised framework for learning both linear and non-linear functions. The learning rule belongs to the Prescribed Error Sensitivity class.
SpiNNaker supports two types of Deep Neural Networks:
Deep Belief Networks: these deep learning networks can be implemented on SpiNNaker, obtaining an accuracy rate of 95% in the classification of the MNIST database of handwritten digits, only 0.06% below the accuracy of the software implementation, while consuming only 0.3 W [36]. (A minimal sketch of the Restricted Boltzmann Machine, the building block of DBNs, appears after this list.)
Convolutional Neural Networks: this type of network shares the same weight values across many neuron-to-neuron connections, which reduces the amount of memory required to store the synaptic weights. A five-layer deep learning network was implemented to recognize symbols obtained through a Dynamic Vision Sensor. Each ARM core can accommodate 2048 neurons, and the full chip can contain up to 32,000 neurons. A particular ConvNet architecture was implemented on SpiNNaker for visual object recognition, such as poker card symbol classification [111].
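As referenced in the Deep Belief Network item above, DBNs are stacks of Restricted Boltzmann Machines trained greedily, layer by layer. The following is a minimal numpy sketch of one RBM trained with one-step contrastive divergence (CD-1) on toy binary data; it illustrates the algorithm only and is unrelated to the SpiNNaker implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data: 100 binary "images" of 64 pixels
data = (rng.random((100, 64)) < 0.2).astype(float)

n_vis, n_hid = 64, 32
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_v, b_h, lr = np.zeros(n_vis), np.zeros(n_hid), 0.1

for epoch in range(10):
    for v0 in data:
        # Positive phase: sample hidden units given the data
        p_h0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.random(n_hid) < p_h0).astype(float)
        # Negative phase: one Gibbs step back to visible and hidden (CD-1)
        p_v1 = sigmoid(h0 @ W.T + b_v)
        p_h1 = sigmoid(p_v1 @ W + b_h)
        # Update weights toward the data statistics, away from the model's
        W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
        b_v += lr * (v0 - p_v1)
        b_h += lr * (p_h0 - p_h1)
    err = np.mean((data - sigmoid(sigmoid(data @ W + b_h) @ W.T + b_v)) ** 2)
    print(f"epoch {epoch}: reconstruction MSE {err:.4f}")
```

A DBN stacks several such RBMs, using the hidden activities of one as the visible data of the next; the pretrained stack can then be fine-tuned with backpropagation.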
Currently, there are no applications in pharmacology or bioinformatics, but SpiNNaker showed its potential by implementing DNNs and DCNNs for visual recognition and robotics. In the future, it could be applied to drug design, protein structure prediction, or the mining of genomic and other omics data.
As was pointed out, DNNs have become the state-of-the-art algorithms of ML in speech recognition, computer vision, natural language processing and many other tasks (see Table 1
). According to the results obtained, DNNs match human capabilities, and even surpass them in some tasks. Besides, the inner workings of DNNs have similarities with the processing of information in the brain. The pattern of activation of the artificial neurons is very similar to that observed in the brain, due to the sparse coding used, which may, for example, be applied to audio to obtain almost exactly the same functions (see Figure 12
). In the case of images, it was also shown that the functions learned in each layer were similar to the patterns recognized by each layer of the human visual system (V1 and V2).
This review analyzed applications in pharmacology and bioinformatics (see Table 2
). DNNs can be used in the drug discovery, design and validation processes, in the prediction of ADME properties, and in QSAR models. They can also be applied to protein structure prediction and to the mining of genomic and other omics data. All these applications are very intensive from a computational perspective, thus DNNs are very helpful because of their ability to deal with Big Data. Besides, DL complements the use of other techniques; for example, the quality and success of a QSAR model depend strictly on the accuracy of the input data, the selection of appropriate descriptors and statistical tools, and, most importantly, the validation of the developed model. Feature extraction from the descriptor patterns is the decisive step in the model development process [4].
Regarding architectures, nowadays, the largest DNN has millions of artificial neurons and around 160 billion parameters [112
]. Building larger networks will improve the results of DL, but the development of new DL architectures is also a very interesting way to enhance the capabilities of these networks. For example, the latest DRNN architectures with “memory” show excellent results in natural language processing, one of the hardest tasks for ML [26].
Some authors, such as Ray Kurzweil [114
], claim that the exponential growth based on Moore’s Law and The Law of Accelerating Returns [115
] will be maintained, therefore, in the next decades, building a machine with a similar number of neurons as the human brain, of around 86 billion neurons, should be possible. As previously mentioned, there are some physical limitations to the current architecture of computers, such as the memory wall [69
] and energy wall [71
], which denote the high power density and low memory bandwidth [72
]. There are also economic limitations; the cost of designing a chip and the cost of building a fabrication facility are growing alarmingly [74
]. However, these limitations will probably be surpassed using other technologies and architectures, like GPU clusters or networks of neuromorphic chips. It was historically estimated that the human brain computes approximately 20 billion operations per second [116
]. Some authors think that these values underestimate the brain capacity, and calculated around 10²¹ operations per second [120
]. However, reaching the human brain capacity is not enough, because one of the main features of the brain is the connectivity of its billions of cells, which form trillions of synapses. Natural evolution has molded the brain for millions of years, creating a highly complex process of development. As Andrew Ng remarkably pointed out, neurons in the brain are very complex structures, and after a century of study researchers are still not able to fully understand how they work. The neurons in an ANN are simple mathematical functions that attempt to mimic biological neurons; however, the artificial neurons only reach the level of loose inspiration. Consequently, reaching the level of human brain computation will not necessarily mean that future computers will surpass human intelligence. In our opinion, advances in understanding the human brain will be more important in order to make a breakthrough that will lead us to new types of DNNs.
In this regard, it should be pointed out that the human brain is composed of neurons, but also glial cells, and there is almost the same number of both [121
]. More importantly, over the past decade, it has been proven that astrocytes, a type of glial cells of the central nervous system, actively participate in the information processing in the brain. There are many works published over the past two decades on multiple modes of interaction between neurons and glial cells [122
]. Many studies suggest the existence of bidirectional communication between neurons and astrocytes [126
]. This evidence has led to the proposal of the concept of tripartite synapse [128
], formed by three functional elements: presynaptic neuron, postsynaptic neuron and perisynaptic astrocyte (see Figure 13).
The relation between these three elements is very complex and there are different pathways of communication: astrocytes can respond to different neurotransmitters (glutamate, GABA, acetylcholine, ATP or noradrenaline) [130
], generating an intracellular Ca²⁺ signal, known as a calcium wave, that can be transmitted to other astrocytes through gap junctions. In addition, astrocytes may release gliotransmitters that activate presynaptic and postsynaptic neuronal receptors, leading to a regulation of neural excitability, synaptic transmission, plasticity and memory [131
]. The possibility of a quad-partite synapse, in which microglia are engaged [133
], has recently been proposed.
In addition, there is interesting scientific evidence that suggests an important role of glial cells in the intelligence of species. Although there are no major differences between the neurons of different mammalian species, glial cells have evolved along the evolutionary chain. For example, a rodent astrocyte may encompass between 20,000 and 120,000 synapses, while a human astrocyte may encompass up to two million synapses [134
]. Not only should the complexity of the astrocytes be pointed out, but also their size. Human astrocytes have a volume 27 times greater than the same cells in the mouse’s brain [134
]. Besides, the ratio of glial cells to neurons increased along the evolutionary chain. One of the most striking research events has been the discovery of a single glial cell for every 30 neurons in the leech. This single glial cell receives neuronal sensory input and controls neuronal firing to the body. As we move up the evolutionary ladder, in a widely researched worm, Caenorhabditis elegans
, glial cells make up 16% of the nervous system. The fruit fly’s brain has about 20% glia. In rodents such as mice and rats, glial cells make up 60% of the nervous system. The nervous system of the chimpanzee is 80% glia, while that of the human is 90%. The ratio of glia to neurons increases with our definition of intelligence [123
]. The number of astrocytes per neuron also increases as we move up the evolutionary ladder, with humans having around 1.5 astrocytes per neuron [136].
Furthermore, the ratio of glial cells to neurons varies in different brain regions. In the cerebellum, for instance, there are almost five times more neurons than astrocytes. However, in the cortex, there are four times more glial cells than neurons [121
]. All these data suggest that the more complex the task performed, by either an animal or a brain region, the greater the number of glial cells involved.
Currently, there are two projects aimed at implementing astrocytes in neuromorphic chips: one is BioRC, developed by the University of Southern California [138
] and the other project is carried out by the University of Tehran and University of Kermanshah, Iran [142
]. Moreover, the RNASA-IMEDIR group from the University of A Coruña developed an Artificial Neuron-Glia Network (ANGN) incorporating two different types of processing elements: artificial neurons and artificial astrocytes. This extends classical ANN by incorporating recent findings and suppositions regarding the way information is processed via neural and astrocytic networks in the most evolved living organisms [145
]. In our opinion, neurons are specialized in the transmission and processing of information, whereas glial cells are specialized in processing and modulation. Besides, glial cells play a key role in the establishment of synapses and of the neural architecture. That is why it would be interesting to combine these two types of elements in order to create a Deep Artificial Neuron–Astrocyte Network (DANAN).
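As one loose illustration of the idea (not the RNASA-IMEDIR implementation), the sketch below adds an artificial astrocyte to a layer of artificial neurons: the astrocyte tracks recent presynaptic activity and transiently potentiates or depresses the layer’s weights, mimicking the modulatory role described above. All thresholds and factors are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

n_in, n_out = 8, 4
W = rng.standard_normal((n_out, n_in))

class Astrocyte:
    """Tracks average layer activity over a window and modulates weights."""
    def __init__(self, window=5, boost=1.1, decay=0.95, threshold=0.6):
        self.history = []
        self.window, self.boost = window, boost
        self.decay, self.threshold = decay, threshold

    def modulate(self, activity, W):
        self.history.append(activity.mean())
        self.history = self.history[-self.window:]
        if np.mean(self.history) > self.threshold:
            return W * self.boost   # potentiate a persistently active layer
        return W * self.decay       # otherwise let the weights decay slightly

astro = Astrocyte()
for step in range(10):
    x = rng.random(n_in)
    y = sigmoid(W @ x)
    W = astro.modulate(y, W)        # glial modulation of synaptic weights
    print(f"step {step}: mean activity {y.mean():.3f}, |W| {np.abs(W).mean():.3f}")
```

In a DANAN, such modulatory elements would operate on a slower time scale than the neurons, which is one of the properties that makes the biological neuron–astrocyte interaction interesting as a computational mechanism.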
DNNs represent a turning point in the history of Artificial Intelligence, achieving results that match, or even surpass, human capabilities in some tasks. These results have motivated major companies like Google, Facebook, Microsoft, Apple and IBM to focus their research on this field. Nowadays, DNNs are unknowingly used every day, since our smartphones run numerous applications based on Deep Learning. For example, some cameras use a DNN to perform face recognition, while others employ voice recognition software, which is also based on DL. There are many other applications in which DNNs achieve state-of-the-art results.
Pharmacology and bioinformatics are very interesting fields for DL application, because their data are growing exponentially. There is huge potential in applying DNNs to the processes of drug discovery, design and validation, which could improve performance and greatly reduce costs. However, the most promising area is genomics, together with other omics such as proteomics, transcriptomics or metabolomics. These types of data are so complex that it is almost impossible for humans to extract valuable insights from them. Thus, DNNs will be necessary to extract the information needed to understand the relationships between DNA, epigenetic variations, and different diseases.
Consequently, scientific and economic interests have led to the creation of numerous R&D projects to keep improving DNNs. Developing new hardware architectures is also important in order to improve on current CPUs and GPUs. Neuromorphic chips represent a great opportunity to reduce energy consumption and enhance the capabilities of DNNs, and they will be very helpful for processing the vast volume of information generated by the Internet of Things. Besides, using neuromorphic chips may lead to the creation of a large-scale system that would attempt to represent an Artificial General Intelligence, moving beyond the current Artificial Narrow Intelligence.
Finally, it would be of great interest to create networks with the two types of processing elements, that is, DANANs that would work more similarly to the human brain. This should be considered a very resourceful way of improving current systems, and our group’s objective is to implement the first DANAN of this type. These networks would take into account the proven capabilities of glial cells in information processing and in the regulation of neural excitability, synaptic transmission, plasticity and memory, to create more complex systems that could bring us closer to an Artificial General Intelligence.