Review

An Overview of Open Source Deep Learning-Based Libraries for Neuroscience

by Louis Fabrice Tshimanga 1,2,*, Federico Del Pup 1,2,3, Maurizio Corbetta 1,2,4 and Manfredo Atzori 1,2,5,*
1 Department of Neuroscience, University of Padova, Via Belzoni 160, 35121 Padova, Italy
2 Padova Neuroscience Center, University of Padova, Via Orus 2/B, 35129 Padova, Italy
3 Department of Information Engineering, University of Padova, Via Gradenigo 6/b, 35131 Padova, Italy
4 Venetian Institute of Molecular Medicine (VIMM), 35129 Padova, Italy
5 Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), 2800 Sierre, Switzerland
* Authors to whom correspondence should be addressed.
Appl. Sci. 2023, 13(9), 5472; https://doi.org/10.3390/app13095472
Submission received: 22 March 2023 / Revised: 19 April 2023 / Accepted: 22 April 2023 / Published: 27 April 2023
(This article belongs to the Special Issue Artificial Intelligence (AI) in Neuroscience)

Abstract

In recent years, deep learning has revolutionized machine learning and its applications, producing results comparable to those of human experts in several domains, including neuroscience. Each year, hundreds of scientific publications present applications of deep neural networks for biomedical data analysis. Due to the fast growth of the domain, it can be a complicated and extremely time-consuming task for researchers worldwide to maintain a clear perspective on the most recent and advanced software libraries. This work contributes to clarifying the current situation in the domain, outlining the most useful libraries that implement and facilitate deep learning applications for neuroscience and allowing scientists to identify the most suitable options for their research or clinical projects. This paper summarizes the main developments in deep learning and their relevance to neuroscience; it then reviews neuroinformatic toolboxes and libraries collected from the literature and from specific hubs of software projects oriented to neuroscience research. The selected tools are presented in tables detailing key features grouped by domain of application (e.g., data type, neuroscience area, task), model engineering (e.g., programming language, model customization), and technological aspect (e.g., interface, code source). The results show that, among a high number of available software tools, several libraries stand out in terms of functionalities for neuroscience applications. The aggregation and discussion of this information can help the neuroscience community develop research projects more efficiently and quickly, both by means of readily available tools and by knowing which modules may be improved, connected, or added.

1. Introduction

In the last decade, deep learning has taken over most classic approaches in machine learning, computer vision, and Natural Language Processing (NLP) research, showing unprecedented versatility and matching or surpassing the performance of human experts in narrow tasks. The recent growth of deep learning applications in several domains, including neuroscience, consequently offers numerous open-source software opportunities for researchers, and mapping the available resources can allow for their faster and more precise exploitation. Neuroscience is a diversified field in its own right, as much for the objects and scales it focuses on as for the types of data it relies on. The discipline is also historically tied to developments in electrical, electronic, and information technology, and modern neuroscience relies on computerization in many aspects of data generation, acquisition, and analysis. Statistical and machine learning techniques already empower many software packages that have become de facto standards in several subfields of neuroscience, such as Principal and Independent Component Analysis (PCA, ICA) in electroencephalography and neuroimaging, to name a few. Concurrently, the rich and rapidly evolving taxonomy of Deep Neural Networks (DNNs) is becoming both an opportunity and a hindrance. On the one hand, open-source deep learning libraries currently enable an increasing number of applications and studies in neuroscience. On the other hand, the adoption of available methods is slowed down by a lack of standards, reference frameworks, and established workflows. Scientific communities whose primary focus or background is not in machine learning engineering may be left partially aside from the ongoing Artificial Intelligence (AI) gold rush. For such reasons, it is fundamental to provide an overview of open-source libraries and toolkits. Framing a panorama could help researchers select ready-made tools and solutions when convenient and aid them in pointing out problems and filling in the blanks with new applications. This work aims to contribute to advancing the community’s possibilities, reducing the workload required for researchers to exploit deep learning, and allowing neuroscience to benefit from its most recent advancements.
The rest of the paper is organized as follows: Section 2 provides a historical perspective on the rise of deep learning in the last decade, a general presentation of the vast field of neuroscience, a definition of neuroinformatics, and a discussion of the role that open-source culture and deep learning can play in it; Section 3 describes the methodology used to collect and present the libraries and discusses their most prominent features; Section 4 presents the tables with library information; lastly, discussion and final remarks are offered to the reader in Section 5 and Section 6.

2. Background

2.1. Deep Learning

Deep learning has contributed many of the best solutions to problems in its parent field, machine learning, thanks to theoretical and technological achievements that unlocked its intrinsic versatility. Machine learning is the study of computer algorithms that tackle problems without complete access to predefined rules or analytical, closed-form solutions. The algorithms often require a training phase to adjust parameters and satisfy internal or external constraints (e.g., of exactness, approximation, or generality) on dedicated data for which solutions might already be known. Machine learning comprises a wide array of statistical and mathematical methods, including Artificial Neural Networks (ANNs), biologically inspired systems that connect inputs and outputs through simple computing units (neurons), which act as function approximators. Each unit implements a nonlinear function of the weighted sum of its inputs; thus, the output of the whole ANN is a composite function, as formally intended in mathematics. The networks of neurons are most often layered and “feed-forward”, meaning that units from any layer only output results to units in subsequent layers. The width of a layer refers to its neuron count, while the depth of a network refers to its layer count. The typical architecture instantiating the above characteristics is the MultiLayer Perceptron (MLP) [1]. Universal approximation theorems [2,3] ensure that whenever a nonlinear network, such as the MLP, is either bound in width and unbound in depth or vice versa, its weights can be set to represent virtually any function (i.e., a wide variety of function families). The training problem thus consists of finding sets of weights such that the network instantiates or approximates the function that solves the assigned task or that represents the input-output relation. This search is not trivial: it can be framed as an optimization problem for a function over the ANN weights. Such a function, typically called a “loss function”, associates the “errors” made on the training data with the neural network parameters (its weights), acting as a total performance score. Approaching local minima of the loss function and improving the network performance on the training data is the prerequisite to generalizing to real-world, unseen data. Deep learning is concerned with the use of deep ANNs, i.e., networks that stack several intermediate (hidden) layers between input and output units. As mentioned above, with other dimensions being equal, depth increases the representational power of ANNs and, more specifically, aims at modeling complicated functions as meaningful compositions of simpler ones. As in their biological counterparts [4], depth is supposed to capture hierarchies of features over larger input portions, reflecting characteristics often inherent to real-world objects and effective in modeling actual data. Overall, depth is one of the key features that made it possible to overcome the historical limits [5] of simpler ANNs such as the Perceptron. At the same time, depth comes with numerical and methodological hardships in model training. Part of the difficulty arises because the search space for the optimal set of parameters grows considerably with the number of layers (and their width as well). Other issues are strictly numerical, since the training algorithms involve long computation chains that may affect the stability of training and learning.
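To make these notions concrete, the following minimal sketch (an illustrative assumption, not taken from any of the reviewed libraries) builds a small feed-forward network in PyTorch, evaluates a loss function on a batch of training data, and performs one optimization step on the weights; layer sizes and hyperparameters are arbitrary.

```python
import torch
import torch.nn as nn

# A small MLP: two hidden layers ("depth"), 64 units each ("width").
mlp = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(),   # each unit: nonlinearity of a weighted sum of inputs
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),               # output layer
)

x = torch.randn(32, 10)             # a batch of 32 training examples with 10 features each
y = torch.randn(32, 1)              # known target outputs for the training data

loss_fn = nn.MSELoss()              # the "loss function": total error score over the weights
optimizer = torch.optim.SGD(mlp.parameters(), lr=1e-2)

loss = loss_fn(mlp(x), y)           # error made on the training batch
loss.backward()                     # gradients of the loss with respect to the weights
optimizer.step()                    # move the weights toward a local minimum of the loss
```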
Hence, new or rediscovered ideas in training protocols and mathematical optimization (e.g., applying the “backpropagation of errors” algorithm to neural nets [6]) played an important role through times when the scientific interest and hopes in ANNs faded (so-called “AI winters”), paving the way for later advancement. The main drivers for the latest success of deep neural networks are of varied nature and can be schematized as technical and human-related factors. On a technical side, deep learning has profited from [7]:
  • The datafication of the world, i.e., the growing availability of (Big) data
  • The diffusion of Graphical Processing Units (GPUs) as hardware tools.
To outperform classic machine learning models, deep neural networks often require larger quantities of data samples. Such data hunger and high parameter counts contribute to the high requirements of deep models in terms of memory, number of operations, and computation time. Training models with highly parallelized and smartly scheduled computations gained momentum thanks to GPUs. In 2012, a milestone exemplified both of the above technical aspects when AlexNet [8], a deep Convolutional Neural Network (CNN) based on ideas from Fukushima [4] and LeCun [9,10], won the ImageNet Large Scale Visual Recognition Challenge after being trained using two GPUs [11]. Since then, deep learning has brought outstanding new results in various tasks and domains, processing different data types. Nowadays, deep networks can work on images, video, audio, text, and speech data, time series and sequences, graphs, and more; the main tasks consist of classification, prediction, or estimating the probability density of data distributions, with the possibility of modifying or completing the input, or even generating new instances. On a more sociological side, the drivers of deep learning success can be related to the synergy of big tech companies, advanced research centers, and developer communities [12]. Investments of economic and scientific resources in relatively independent, collective projects, such as open-source libraries, frameworks, and APIs (Application Programming Interfaces), have offered varied tools adapted to multiple specific situations and objectives, exploiting horizontal organization [13] and mixing top-down and bottom-up approaches. It is difficult to imagine a rapid rise of successful endeavors without both active communities and the technical means to incorporate and manage lower-level aspects. In fact, applying deep learning to a relevant problem in any research field requires, in addition to specific domain knowledge, a vast background of statistical, mathematical, and programming notions and skills. The tools that support scientists and engineers in focusing on their main tasks encompass the languages to express numerical operations on GPUs, such as CUDA [14] and cuDNN [15] by NVIDIA, the frameworks to design models, such as TensorFlow [16] and Keras [17] by Google and PyTorch by Meta [18], and the supporting strategies to build data pipelines. In particular, PyTorch, TensorFlow, and Keras offer the building blocks for model design. These frameworks comprise the mathematical operations and functions that deep learning models perform during training and at test time. The functions can be treated as modular objects, stacked one upon another or connected in more complex ways, and the data are processed through these chains of object-functions to return the corresponding outputs. On a higher level, one can ignore computational and mathematical details as long as the role and effect of such components are understood. On a lower level, these frameworks allow experts in the community to introduce and share novel custom objects and operations that push deep learning research forward. Data loading and preprocessing modules are included, as well as many pre-trained deep learning models, enhancing the frameworks’ adaptability and usability. Many deep learning achievements are relevant to biomedical and clinical research, and the above-presented tools have enabled explorations of the capabilities of deep neural networks with neuroscience and biomedical data.
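As an illustration of this building-block style (a minimal sketch, not specific to any neuroscience library; layer sizes and the choice of a ResNet50 backbone are arbitrary assumptions), a small Keras model can be assembled by stacking layer objects, and a pretrained model can be loaded from the same framework:

```python
from tensorflow import keras

# Layers are modular objects stacked into a chain that maps inputs to outputs.
model = keras.Sequential([
    keras.layers.Input(shape=(64, 64, 1)),
    keras.layers.Conv2D(16, 3, activation="relu"),   # one reusable building block
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(2, activation="softmax"),     # a small classification head
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# The same ecosystems also ship data utilities and pretrained models,
# e.g., an ImageNet-trained backbone ready for reuse.
backbone = keras.applications.ResNet50(weights="imagenet", include_top=False)
```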
Fuller exploitation and routine employment of modern algorithms are yet to come, both in research and in clinical practice. This process could be accelerated by popularizing, democratizing, and jointly developing models, improving their usability, and expanding their environments, i.e., by wrapping solutions into libraries and shared frameworks.

2.2. Neuroscience

As per the journal Nature, “neuroscience is a multidisciplinary science that is concerned with the study of the structure and function of the nervous system. It encompasses the evolution, development, cellular and molecular biology, physiology, anatomy, and pharmacology of the nervous system, as well as computational, behavioral, and cognitive neuroscience” [19]. In summary, neuroscience investigates:
  • The evolutionary and individual development of the nervous system;
  • The cellular and molecular biology that characterizes neurons and glial cells;
  • The physiology of living organisms and the role of the nervous system in the homeostatic function;
  • The anatomy, i.e., the identification and description of the system’s structures;
  • Pharmacology, i.e., the effects of chemicals of external origin on the nervous system and their interactions with endogenous molecules;
  • The computational features of the brain and nerves, how information is processed, which mathematical and physical models best predict and approximate the behavior of neurons;
  • Cognition, the mental processes at the intersection of psychology and computational neuroscience;
  • Behavior as a phenomenon rooted in genetics, development, mental states, and so forth.
Overall, given the wide range of phenomena and the apparatus it investigates, neuroscience research is profoundly multi-modal. Data range from sequences or signals (e.g., electromyography (EMG), electroencephalography (EEG), eye-tracking, genetic sequencing), to 2D/3D images (e.g., Magnetic Resonance Imaging (MRI), X-rays, tomography, histopathology microscopy, eye fundus photography) or videos. Tabular data and text data are also common in this field, from clinical reports and anamneses to surveys, test scores, and inspections of cognitive and sensorimotor functions (e.g., the National Institute of Health (NIH) Stroke Scale test scores [20]), and more.

2.3. Neuroinformatics

Neuroscience is evolving into a data-centric discipline. Modern research heavily depends on human researchers as well as machine agents to store, manage, and process computerized data from the experimental apparatus to the end stage. Before delving into the specifics of artificial neural networks applied to the study of biological neural systems, it is useful to outline the broader concepts of neuroinformatics regarding data and coding, especially in the light of open culture. According to the International Neuroinformatics Coordinating Facility (INCF), “neuroinformatics is a research field devoted to the development of neuroscience data and knowledge bases together with computational models and analytical tools for sharing, integration, and analysis of experimental data and advancement of theories about the nervous system function” [21]. Given the relevance of neuroinformatics to neuroscience, supporting open and reproducible science implies and requires attention to standards and best practices regarding open data and code. The INCF itself is an independent organization devoted to validating and promoting such standards and practices, interacting with the research communities [22] and aiming at the “FAIR principles for scientific data management and stewardship” [23]. The FAIR principles consist of:
  • Being Findable, registered and indexed, searchable, richly described in metadata;
  • Being Accessible, through open, free, universally implementable protocols;
  • Being Interoperable, with appropriate standards for metadata in the context of knowledge representation;
  • Being Reusable, clearly licensed, well described, relevant to a domain, and meeting community standards.
Among free and open resources, several software tools and organized packages integrating pre-processing and data analysis workflows for neuroimaging and signal processing have become references for neuroscience researchers worldwide.
Such tools allow researchers to perform scientific work in neuroscience easily, in solid and repeatable ways. It can be useful to mention, for neuroimaging, FreeSurfer (https://surfer.nmr.mgh.harvard.edu/) [24] and FSL (https://fsl.fmrib.ox.ac.uk/fsl/fslwiki) [25], which are standalone software packages, and the MATLAB-connected SPM (https://www.fil.ion.ucl.ac.uk/spm/) [26]. In the domain of signal processing, examples are EEGLAB (https://sccn.ucsd.edu/eeglab/index.php) [27], Brainstorm (https://neuroimage.usc.edu/brainstorm/Introduction) [28], and PaWFE (http://ninapro.hevs.ch/node/229) [29], all MATLAB-related yet free and open, and MNE (https://mne.tools/stable/index.html) [30], which runs on Python. Regarding applications for neurorobotics and Brain Computer Interfaces (BCIs), a recent open-source platform can be found in ROS-Neuro (https://github.com/rosneuro) [31]. All URLs were accessed on 22 November 2022. Interested readers can find lists of open resources for computational neuroscience (including code, data, models, repositories, textbooks, analysis, simulation, and management software) at Open Computational Neuroscience Resources (https://github.com/asoplata/open-computational-neuroscience-resources) (by Austin Soplata) and at Open Neuroscience (https://open-neuroscience.com/). Additional software resources oriented to neuroinformatics in general, but not necessarily open, are indexed at “COMPUTATIONAL NEUROSCIENCE on the Web” (https://compneuroweb.com/sftwr.html) (by Jim Perlewitz).
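As a brief illustration of how such Python-based tools are typically used (a minimal sketch with an assumed, placeholder file name and arbitrary parameters, shown only to convey the flavor of the workflow), MNE can load a raw EEG recording, filter it, and cut it into epochs ready for further analysis or model training:

```python
import mne

# Load a raw EEG recording (the file name is a placeholder assumption).
raw = mne.io.read_raw_edf("subject01_task.edf", preload=True)
raw.filter(l_freq=1.0, h_freq=40.0)      # band-pass filter, a common preprocessing step

# Build fixed-length epochs around annotated events.
events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=-0.2, tmax=0.5, preload=True)
data = epochs.get_data()                 # NumPy array: (n_epochs, n_channels, n_times)
```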

2.4. Bringing Deep Learning to the Neurosciences

The deep learning community is accustomed to open science, as many datasets, models, programming frameworks, and scientific outcomes are continuously released to the public by both academia and companies. However, while deep learning can openly provide state-of-the-art models for old and new problems in neuroscience, theoretical understanding, formalization, and standardization are often yet to be achieved, which may prevent adoption in other research endeavors. From a technical standpoint, deep networks are a viable tool for many tasks involving data from the brain sciences. Image classification has arguably been the task in which deep neural networks have had the highest momentum in terms of pushing the state of the art forward. This now translates into a rich taxonomy of architectures and pre-trained models that consistently maintain interesting performance in pattern recognition across a number of image domains. Pattern recognition is indeed central for diagnostic purposes, in the form of classification of images with pathological features (e.g., types of brain tumors such as meningiomas), segmentation of structures (such as the brain, brain tumors, or stroke lesions), and classification of signals (e.g., classification of electromyography or electroencephalography data), as well as for action recognition in Human-Computer Interfaces (HCIs) and Brain-Computer Interfaces (BCIs), where the complex systems underlying human behavior and mind must be interpreted, processed, and used by artificial systems (see [32] for a larger review of BCIs). The initiatives BRain Tumor Segmentation (BRATS) Challenge (https://www.med.upenn.edu/cbica/brats/) [33], Ischemic Stroke LEsion Segmentation (ISLES) Challenge (https://www.isles-challenge.org/) [34,35], and Ninapro (http://ninaweb.hevs.ch/node/7) [36] are examples of data releases for which the above-mentioned tools proved effective. There are models learning image-to-image functions capable of enhancing data, preprocessing it, correcting artifacts and aberrations, allowing smart compression as well as super-resolution, and even expressing cross-modal transformations between different acquisition apparatuses. In the related tasks of object tracking, action recognition, and pose estimation, research results from the automotive sector or crowd analysis have inspired solutions for behavioral neuroscience, especially in animal behavioral studies. When dealing with sequences, the success of deep networks in computer vision has inspired CNN-based approaches to EEG and EMG studies [37,38], either with or without relying on 2D data, given that mathematical convolution has a 1D version and 1D signals have 2D spectra. Other architectures more directly instantiate temporal and sequential aspects, e.g., Recurrent Neural Networks (RNNs) such as the Long Short-Term Memory (LSTM) [39] and Gated Recurrent Units (GRUs) [40], and they too can be applied to sequence problems and sub-tasks in neuroscience, such as decoding time-dependent brain signals. Although deep neural networks do not explicitly model the nervous system, they are inspired by biological knowledge and mimic some aspects of biological computation and dynamical systems. This has inspired new comparative studies and analogous approaches to learning and perception in a way that is unique among machine learning algorithms [41]. Many neuroinformatic studies demonstrate how novel deep learning concepts and methods apply to neurological data [12].
However, they often showcase advances in performance metrics that do not translate directly into accepted neuroscience discoveries or clinical best practices.
Such results are very often published together with open code repositories, allowing for reproducibility, yet they may not be explicitly organized for widespread, routine adoption in domains different from machine learning. Algorithms are usually written in open programming languages like Python [42], R [43], and Julia [44], and in deep learning design frameworks such as TensorFlow, PyTorch, or Flux [45]. Still, they are often more inspiring to the experienced machine learning researcher than practically helpful to end users such as neuroscientists. In fact, to successfully build a deep learning application from scratch, vast knowledge is needed in the data science aspects of the task and in coding, as much as in the theoretical and experimental foundations and frontiers of the application domain, here being neuroscience. For the above reasons, the open-source and open-science domains are promising frames for the common development and testing of relevant solutions for neuroscience, as they provide an active flow of ideas and robust diversification, avoiding “reinvention of the wheel”, harmful redundancies, or starting from completely blank slates. As a contribution to clarifying the current situation and reducing the workload for researchers, this work collects and analyzes several open libraries that implement and facilitate deep learning applications in neuroscience, with the aim of allowing scientists worldwide to identify the most suitable options for their inquiries and clinical tasks.

3. Materials and Methods

Given the large corpus of available open code, it is useful to specify what qualifies, for the present scope, as a coding library or framework rather than as a model accompanied by utilities. In programming, a library is a collection of pre-coded functions and object definitions, often relying on one another and written to optimize programming for custom tasks. The functions are meant to be useful, without modification, across multiple unrelated programs and tasks, and the main program at hand calls the library within the control flow specified by the end users. A framework is a higher-level concept, akin to a library, but typically with pre-designed control flows into which custom code from the end users is inserted.
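The distinction can be illustrated with a short, hedged sketch (the choice of SciPy and Keras is only an example): with a library, the user’s script owns the control flow and calls functions; with a framework, a pre-designed control flow (here, the training loop behind model.fit) calls the pieces supplied by the user.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from tensorflow import keras

# Library style: the user's script owns the control flow and calls functions when needed.
signal = np.random.randn(1000)
b, a = butter(4, 0.1)                  # design a low-pass filter
filtered = filtfilt(b, a, signal)      # apply it where and when the script decides

# Framework style: the user plugs components (layers, a loss) into a pre-designed
# control flow; the training loop itself is owned by the framework (model.fit).
model = keras.Sequential([keras.layers.Input(shape=(10,)), keras.layers.Dense(1)])
model.compile(optimizer="sgd", loss="mse")
model.fit(np.random.randn(64, 10), np.random.randn(64, 1), epochs=1, verbose=0)
```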
In this review, a repository that simply collects a set of functions defining and instantiating a deep learning model is not considered a library. On the contrary, a collection of notebooks that allows users to train, retrain, and test models with different architectures, also taking care of data preprocessing and preparation, fully meets the present scope. The explicit definition given by the authors, their aims, and their level of maintenance were relevant in determining whether a repository would be considered a library (or toolkit/toolbox, etc.). Open code for this review also comprises code written for proprietary languages such as MATLAB, the reasons being compatibility with free languages such as GNU Octave (https://octave.org/) (where noted) and the general value of openly accessible algorithms. For the sake of the review, several resources were queried or scanned. Google Scholar was queried with:
  • Allintitle: “deep learning library”;
  • Allintitle: “deep learning toolbox”;
  • Allintitle: “deep learning package”;
  • “deep learning library|toolbox|package” AND “neuroscience|neuroimaging”;
  • “deep learning library|toolbox|package” AND “EEG|EMG”;
  • “deep learning library” OR “deep learning toolbox” OR “deep learning package” -“MATLAB deep learning toolbox”
preserving the top 100 search results, ordered for relevance by the engine algorithm. On PubMed the queries were:
  • opensource (deep learning) AND (toolbox OR toolkit OR library);
  • (EEG OR EMG OR MRI OR (brain (X-ray OR CT OR PT))) (deep learning) AND (toolbox OR toolkit OR library).
Moreover, the site https://open-neuroscience.com/ was scanned specifically for “deep learning” mentions. Citations stemming from these results and automatic recommendations from the engines of the hosting and publishing platforms were also analyzed. The time window was unrestricted toward the past, given the recent development of the field, and the search for the selection of library entries was concluded by 22 November 2022. Data regarding library use were updated to 17 April 2023.
The collected libraries were organized according to their principal aim, in the form of the data type processed or the supporting function in the workflow, thus dividing them into:
  • Libraries for sequence data (e.g., EMG, EEG)
  • Libraries for image data (including scalar volumes, 4-dimensional data as in fMRI, video)
  • Libraries and frameworks for further data types and abstractions (including data handling, evaluation, and cloud platforms)
In each category, a set of three tables presents separately the results related to the following library characteristics:
  • Domain of application
  • Model engineering
  • Technology and sources
The domain of application comprises the Neuroscience area, the Data types handled, the provision of Datasets, and the machine learning Task to which the library is dedicated. When available (73 entries out of 74), a publication is referenced for the library entry in the domain table. Together with the repositories, the referenced publications contain valuable information that every potential user should check before experimenting, such as the data sets leveraged and the use cases intended by the original authors. The model engineering tables include information on the architecture of Models manageable in the library, the main DL (Deep Learning) framework and Programming language dependencies, and the possibility of Customization of the model structure or training parameters. Technology and sources refer to the type of Interface available for a library and to whether it works Online/Offline, i.e., with real-time or logged data. Maintenance refers to the ongoing activity of releasing features, solving issues and bugs, or offering support through dedicated channels (considered active with commits or releases in 2022); Source specifies where code files and instructions are made available. Stars (Forks) refers to the counts of “stars” and “forks” of the repositories, from which the number of users and possible new developers can be approximately estimated. Contributors is the number of people adding code and features to the repository, as declared on site (effective contributors might be acknowledged differently), which is useful to estimate the amount of teamwork and developer support, as well as the degree of customization that could be expected for the given library. The entry “(*)” signals missing data.

4. Results

The analysis of the literature allowed us to select a total of 74 entries for the tables, with publications that describe libraries implementing or empowering deep learning applications for neuroscience. Several publications, despite being open source and effective, did not provide an ecosystem of reusable functions; proofs of concept and single-shot experiments were discarded. Please refer to the Abbreviations section for acronyms used throughout the paper and specifically in the following tables.

4.1. Libraries for Sequence Data

Libraries and frameworks for sequence data are shown in Table 1 (domains of application), Table 2 (model characteristics), and Table 3 (technologies and sources). The majority of models process EEG signals, which are among the most common types of sequential data in neuroscience research. A common objective is deducing the activity or state of the subject based on temporal or spectral (2D) patterns. Deep learning is capable of bypassing some of the preprocessing steps often required by other common statistical and engineering techniques, and it comprises both 1D and 2D approaches through MLP, CNN, or RNN architectures. An example of a sequence-oriented library is gumpy, whose intended area of application is that of BCIs, where decoding a signal is the first step towards communication and interaction with a computer or robotic system. Given the setting, gumpy allows working with EEG or EMG data and suits them with specific defaults, e.g., 1D CNNs or LSTMs. Similarly to ExBrainable, it was validated on data from the BCI Competition IV (https://www.bbci.de/competition/iv/).
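To give a concrete idea of the kind of default just described (a generic, hedged sketch in PyTorch, not gumpy’s or ExBrainable’s actual API; channel count, window length, and class count are arbitrary assumptions reminiscent of a motor-imagery setting), a 1D CNN classifying multichannel EEG windows can be as compact as the following:

```python
import torch
import torch.nn as nn

n_channels, n_times, n_classes = 22, 512, 4      # assumed EEG montage, window length, classes

model = nn.Sequential(
    nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),  # temporal convolution over samples
    nn.ReLU(),
    nn.MaxPool1d(4),
    nn.Conv1d(32, 64, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),                     # collapse the time axis
    nn.Flatten(),
    nn.Linear(64, n_classes),                    # one score per class (e.g., imagined movement)
)

eeg_batch = torch.randn(8, n_channels, n_times)  # (batch, channels, time samples)
class_scores = model(eeg_batch)                  # shape: (8, n_classes)
```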
Notable mentions in the sequence category are the library Traja and the VARDNN toolbox, as they depart from the common scenarios of the previous examples. Traja stands out as an example of less usual sequential data, namely trajectory data (sequences of coordinates in 2 or 3 dimensions through time). Moreover, in Traja, sequences are modeled and analyzed employing the advanced architectures of Variational AutoEncoders (VAEs) and Generative Adversarial Networks (GANs), usually encountered in image tasks. With different theoretical backgrounds, both architectures allow the simulation and characterization of data through their statistical properties. The VARDNN toolbox enables analyses of blood-oxygen-level-dependent (BOLD) signals in the established domain of functional Magnetic Resonance Imaging (fMRI), but it uses a unique approach mixing autoregressive processes with deep neural networks, allowing causal analysis and the study of functional connections between brain regions through their patterns of activity in time. It was developed from the ADNI-2 data set (73 subjects) of the Alzheimer’s Disease Neuroimaging Initiative (https://adni.loni.usc.edu/).
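For readers unfamiliar with the variational approach mentioned above, the following is a deliberately minimal, generic sketch of a VAE for fixed-length 2D trajectory windows (an illustrative assumption, not Traja’s actual API; all dimensions are arbitrary):

```python
import torch
import torch.nn as nn

class TrajectoryVAE(nn.Module):
    """Encode (x, y) trajectory windows into a small latent space and decode them back."""

    def __init__(self, seq_len=100, latent_dim=8):
        super().__init__()
        d = seq_len * 2                                   # flattened (x, y) coordinates over time
        self.encoder = nn.Sequential(nn.Linear(d, 128), nn.ReLU())
        self.to_mu = nn.Linear(128, latent_dim)
        self.to_logvar = nn.Linear(128, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, d))

    def forward(self, traj):                              # traj: (batch, seq_len, 2)
        h = self.encoder(traj.flatten(1))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization trick
        return self.decoder(z).view_as(traj), mu, logvar

recon, mu, logvar = TrajectoryVAE()(torch.randn(4, 100, 2))       # reconstruct 4 trajectories
```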
Overall, the libraries oriented to sequence data analysis are mainly directed at the classification of EEG signals, whose variety of acquisition settings and downstream applications can largely be approached with the aid of deep models as part of the pipeline. Other types of sequence data in neuroscience could be processed by newer or harder-to-retrieve libraries. Although preprocessing and domain-specific features require special care, sequence models can in principle still be applied to such data to perform several machine learning tasks on magnetoencephalography (MEG), electrocorticography (ECoG), spike train data, and more. The expert end user may apply or adapt the libraries mentioned above to new domains, or develop new applications, possibly leveraging the openness of the available source code.

4.2. Libraries for Image Data

Libraries and frameworks for image data are shown in Table 4 (domains of application), Table 5 (model characteristics), and Table 6 (technologies and sources). Computer vision and 2D image processing are arguably the fields in which deep learning has achieved its most impressive, state-of-the-art-defining results, often inspiring and translating breakthroughs in other domains. Classification and segmentation (i.e., the separation of parts of the image based on their classes) are the most common tasks addressed by image processing libraries. Magnetic resonance is the primary source of data; however, various deep learning libraries are built for microscopic and eye-tracking data as well. Most of the libraries collected in our analysis took advantage of classical CNN architectures for classification, Convolutional AutoEncoders (CAEs) for segmentation, and GANs for synthesis. It is common to employ transfer learning to lessen the computational and memory burden during the training phase and take advantage of pre-trained models. Transfer learning consists of initializing models with parameters learned on usually larger data sets, possibly from different domains and tasks, with varying amounts of further training in the target domain. The best such examples are pose-estimation libraries extending the DeepLabCut system, arguably the most relevant project on the topic. DeepLabCut is an interactive framework for labeling, training, testing, and refining models that originally exploits the weights learned by ResNets (or newer architectures) on the ImageNet data. The results match human annotation using relatively few training samples, and this holds for many (human and non-human) animals and settings. Validated data sets comprise TRI-MOUSE (161 data points), Parenting Mouse (542), MARMOSET (7600), FISH (100), and HORSE (8114). The documentation, demonstrative notebooks, and tools offered by the Mathis Lab allow different levels of understanding and customization of the process with high levels of robustness. Among the considered libraries, two stand apart from the majority given the type of tasks they perform: GaNDLF addresses eXplainable AI (XAI), i.e., Artificial Intelligence whose decisions and outputs can be understood by humans through more transparent mental models; ANTsX performs both the co-registration step and super-resolution as a quality-enhancing step for neuroimages, with the former usually being performed by traditional algorithms. GaNDLF sets its goal as the provision of deep learning resources at different layers of abstraction, allowing medical researchers with virtually no ML knowledge to perform robust experiments with models trained on carefully split data, with augmentations and preprocessing, under standardized protocols that can easily integrate interpretability tools such as Grad-CAM [56] and attention maps, which highlight the parts of an image according to how they influenced a model outcome. It was validated on 7 data sets comprising from 371 up to 180,000 images of different systems, ranging from brain MRI to dental X-ray and eye fundus. The ANTsX ecosystem is of similarly wide scope and is intended for building workflows on quantitative biology and medical imaging data, both in Python and R. Packages from the same ecosystem perform registration of brain structures (by classical methods) as well as brain extraction by deep networks, aggregating structural MRI data sets for over 1200 subjects.
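As a hedged sketch of the transfer-learning recipe described above (a generic illustration, not the DeepLabCut API; it assumes a recent torchvision release, where older versions use pretrained=True instead of the weights argument, and an arbitrary two-class target task), one can start from an ImageNet-pretrained ResNet, replace its head, and fine-tune only the later layers:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load ImageNet-pretrained weights and swap the classification head for the target task.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = nn.Linear(backbone.fc.in_features, 2)       # e.g., two target classes

# Freeze the early layers so that only the last block and the new head are fine-tuned.
for name, param in backbone.named_parameters():
    if not name.startswith(("layer4", "fc")):
        param.requires_grad = False

optimizer = torch.optim.Adam((p for p in backbone.parameters() if p.requires_grad), lr=1e-4)
logits = backbone(torch.randn(4, 3, 224, 224))            # a batch of RGB images
```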

4.3. Libraries for Further Data Types and Abstractions

Libraries for further data types and abstractions are shown in Table 7 (domains of application), Table 8 (model characteristics), and Table 9 (technologies and sources). The libraries listed in this section differ from the previous ones in two ways. On the one hand, they can process data types that do not fall into the sequence or image category. On the other hand, they are the product of projects that transcend the training of deep learning models, i.e., larger-scope frameworks, deep learning support functions (e.g., preprocessing and data augmentation pipelines), and services or infrastructure to host computational experiments. NeuroCAAS is an ambitious project that both standardizes experimental schedules and analyses and offers computational resources on the cloud. The platform lifts the burden of configuring and deploying data analysis tools, also guaranteeing replicability and readily available usage of pre-made pipelines with high efficiency. Other platform-oriented libraries are concerned with federated learning, i.e., the training of deep learning models on separated and private datasets, a relevant issue in healthcare. MONAI is a project that brings deep learning tools to many health and biology problems. The framework builds on PyTorch and aims at unifying healthcare AI practices throughout both academia and enterprise research, not only in model development but also in the creation of shared annotated datasets. It also focuses on deployment and work in real-world clinical production, positioning itself as a strong candidate to become the standard solution in the domain. Importantly, it is a commonly used framework for the 3D variations of UNet [105] that have lately dominated the yearly BraTS challenge [33] (see http://braintumorsegmentation.org/). In this regard, nnU-Net is a narrower-scope framework explicitly for building UNet-like models, focused on data-driven self-configuration of training hyperparameters, reducing the burden on researchers and practitioners. It was validated on 23 public data sets of biomedical interest with great success. PsychRNN, PyCog, and THINGvision, as well as NeuroGym, are libraries that bridge deep learning research and computational neuroscience. They are concerned with studying how artificial neural systems solve the same tasks that animal and human brains are subject to, with the aims of simulating, modeling, learning representations, and reverse engineering cognition and behavior.
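To illustrate the kind of building blocks MONAI provides (a minimal, hedged sketch based on recent MONAI releases, where argument names may differ across versions; channel sizes, class count, and input shape are arbitrary assumptions), a 3D U-Net for volumetric segmentation and a simple preprocessing chain can be set up as follows:

```python
import torch
from monai.networks.nets import UNet
from monai.transforms import Compose, LoadImage, EnsureChannelFirst, ScaleIntensity

# A typical preprocessing chain for a single volume (e.g., a NIfTI MRI file).
preprocess = Compose([LoadImage(image_only=True), EnsureChannelFirst(), ScaleIntensity()])

# A 3D U-Net for volumetric segmentation, e.g., background vs. lesion.
model = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=2,
    channels=(16, 32, 64, 128, 256),
    strides=(2, 2, 2, 2),
    num_res_units=2,
)
logits = model(torch.randn(1, 1, 96, 96, 96))   # (batch, channel, depth, height, width)
```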

5. Discussion

The use of deep learning in neuroscience research requires framing a study, or a part of it, as a machine learning problem or task. The setting may or may not require researchers to develop a novel learning architecture and algorithm, train an existing model on the data, or directly apply a trained model to the scientific use case. These situations require different levels of machine learning knowledge and data literacy; consequently, different libraries and frameworks will be of use. In general, researchers should be aware of which mathematical and data objects can represent their experimental data: deep learning typically processes vectors, matrices, and tensors, which can represent signals, images, scalar and tensor fields, and more. Scientists should also consider the nature of the data used to train models and how similar they are to the use-case data, and hence how the model would perform on the latter. Importantly, there must be a specific task the model performs in processing input data and providing the output; e.g., many problems are formulated as classification tasks, while others are very specific to the neuroscience domain. Deep learning models may retain general information from the training data but generally cannot apply it to different tasks without design adaptations. The application of deep learning to neuroscience is thus challenging. Working in teams with different scientific backgrounds is a possible solution to the problem of the specialized expertise that deep learning requires, and leveraging the platforms and practices of open science and open source communities offers the support that a working team would still need. Deep learning has the potential hindrance of thriving on large data sets and powerful hardware, but transfer learning and the availability of “model zoos” in many libraries allow researchers to build on systems with great knowledge or efficacy from the start. These systems are not an end in themselves and can be integrated into larger workflows, allowing domain expertise and researchers’ creativity both to enhance them and to be enhanced by them.
The tables in this work help to evaluate candidate tools for neuroscience research problems, providing information on the type and domain of data they process, the machine learning task they perform, and whether models are trainable, customizable, or frozen and ready for use.
The panorama of open-source libraries dedicated to deep learning applications in neuroscience is quite rich and diversified. There is a corpus of organized packages that integrate preprocessing, training, testing, and performance analyses of deep neural networks for neurological research. Most of these projects are tuned to specific data modalities and formats, but some libraries are quite versatile and customizable, and there are projects that encompass quantitative biology and medical analysis as a whole. There is a common tendency to develop GUIs, enhancing the user-friendliness of toolkits for non-programmers and for researchers unacquainted with command line interfaces, for example. Visualizations of building blocks, operations, and results allow users to focus on them without allocating further cognitive and time resources to producing the respective code. Moreover, for the many libraries developed in Python, the (Jupyter) Notebook format appears as a widespread tool both for tutorials and documentation and as an interface to cloud computational resources (e.g., Google Colab [121]). Through notebooks, experimental templates can be modified and adapted in complete environments, where text instructions, dynamic visualization, and code are blended efficiently and can be shared and reproduced. Although learning curves depend on subjective experiences, the availability of such interfaces and instruments reduces the burden on the end user, enhancing accessibility. Apart from specific papers and documentation, and outside of deep learning per se, it is important to make researchers and developers aware of the main topics and initiatives in open culture and neuroinformatics in order to sustain the development of the field. For this reason, the interested reader is invited to rely on competent institutions (e.g., INCF) and databases of open resources (e.g., open-neuroscience) dedicated to neuroscience. Among possibly missing technologies, the queries employed did not retrieve Natural Language Processing libraries dedicated to neuroscience, nor toolkits specifically employing Graph Neural Networks (GNNs), although the latter are available in EEG-DL. NLP is actually fundamental in healthcare, since medical reports often come in non-standardized forms. Large Language Models (LLMs), Named Entity Recognition (NER) systems, and text mining approaches in biomedical research exist [122,123]. GNNs comprise recent architectures that are extremely promising in a variety of fields [124], including biomedical research and particularly neuroscience [125,126]. Even if promising, their application is still less mature than that of computer vision models or time series analysis.

Future Directions

Overall, current libraries cover a wide range of applications, but it seems unlikely that a single deep learning framework could dominate the entire neuroscience field in the near future. Nonetheless, projects such as the PyTorch-based MONAI are strong candidates for unifying ecosystems for deep learning in medicine and biology. DeepLabCut and nnU-Net are also worth mentioning, since they are widely applied as blueprints for newer applications in pose estimation and segmentation, respectively. Investing efforts in open tools and data, accessible documentation, and modularity may guarantee useful results regardless of the specific application field, and such investments are indeed foundational to transferring successful tools across domains. This effort should be paired with data literacy and basic knowledge of machine learning best practices (e.g., how to avoid data leakage between train and test sets, and how to assess model performance), together with digital frameworks with strong priors towards these experimental practices, such as NeuroCAAS. Interpretability and explainability of models depend not only on the researchers’ theoretical understanding but also on transparent models and effective tools to open and shed light on black boxes. In fact, XAI is essential for systems that support decision-making in healthcare or experiments in biomedical sciences, and tools and libraries such as GaNDLF can be expected to gain value. Another relevant aspect of biomedicine is that of privacy and sensitive data: records cannot be processed just anywhere, and personal data cannot be published indiscriminately. Standards and protocols to protect people’s privacy, and machine learning models and frameworks that respect them, such as federated learning, are expected to gain momentum. All developments should hold up against the intrinsic multimodality of data from the field and the multidisciplinarity required to analyze them. In the end, the interplay of common practices and flexible models can be expected to be of central importance.

6. Conclusions

Although a large and growing number of repositories offer code to build specific models, as published in experimental papers, these resources seldom aim to constitute proper libraries or frameworks for research or clinical practice. Both deep learning and neuroscience gain much value even from sophisticated proofs of concept. In parallel, organized packages are spreading and starting to provide and integrate pre-processing, training, testing, and performance analyses of deep neural networks for neurological and biomedical research. This paper has offered both a historical and a technical context for the use of deep neural networks in neuroinformatics, focusing on open-source tools that scientists can comprehend and adapt to their necessities. At the same time, this work underlines the value of open culture and points to relevant institutions and platforms for neuroscientists. Although the aim is not restricted to enabling clinicians to develop their own deep models without a coding or machine learning background, as was the case in [127], the overall effect of these libraries and sources is to democratize deep learning applications and results, as well as to standardize such complex and varied models, supporting the research community in obtaining proper means to an end and in envisioning, and then collectively realizing, new projects and tools.

Author Contributions

Conceptualization and methodology, M.A. and L.F.T.; validation, L.F.T. and F.D.P.; writing—original draft preparation, L.F.T.; writing—review and editing, M.A., L.F.T., F.D.P., M.C.; supervision, M.A. and M.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the “Department of excellence 2018–2022” initiative of the Italian Ministry of Education (MIUR) awarded to the Department of Neuroscience—University of Padua.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
ANN: Artificial Neural Network
BCI: Brain-Computer Interface
BOLD: Blood-Oxygen-Level-Dependent
CAE: Convolutional AutoEncoder
CLI: Command Line Interface
CNN: Convolutional Neural Network
CT: Computed Tomography
DNN: Deep Neural Network
EEG: Electroencephalography
EM: Electron Microscopy
EMG: Electromyography
GAN: Generative Adversarial Network(s)
GRU: Gated Recurrent Unit
GUI: Graphical User Interface
LSTM: Long Short-Term Memory
MLP: MultiLayer Perceptron
(rs-f)MRI: (resting-state functional) Magnetic Resonance Imaging
NLP: Natural Language Processing
PET: Positron Emission Tomography
RNN: Recurrent Neural Network
SEM: Scanning Electron Microscopy
TEM: Transmission Electron Microscopy
VAE: Variational AutoEncoder
XAI: eXplainable Artificial Intelligence

References

  1. Rosenblatt, F. The perceptron: A probabilistic model for information storage and organization in the brain. Psychol. Rev. 1958, 65, 386–408. [Google Scholar] [CrossRef] [PubMed]
  2. Cybenko, G. Approximation by superpositions of a sigmoidal function. Math. Control. Signals Syst. 1989, 2, 303–314. [Google Scholar] [CrossRef]
  3. Hornik, K.; Stinchcombe, M.; White, H. Multilayer feedforward networks are universal approximators. Neural Netw. 1989, 2, 359–366. [Google Scholar] [CrossRef]
  4. Fukushima, K. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biol. Cybern. 1980, 36, 193–202. [Google Scholar] [CrossRef]
  5. Minsky, M.; Papert, S. Perceptrons: An Introduction to Computational Geometry; MIT Press: Cambridge, MA, USA, 1969. [Google Scholar]
  6. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning Representations by Back-propagating Errors. Nature 1986, 323, 533–536. [Google Scholar] [CrossRef]
  7. Meijering, E. A bird’s-eye view of deep learning in bioimage analysis. Comput. Struct. Biotechnol. J. 2020, 18, 2312–2325. [Google Scholar] [CrossRef]
  8. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems 25; Pereira, F., Burges, C.J.C., Bottou, L., Weinberger, K.Q., Eds.; Curran Associates, Inc.: Morehouse Lane Red Hook, NY, USA, 2012; pp. 1097–1105. [Google Scholar]
  9. LeCun, Y.; Boser, B.; Denker, J.S.; Henderson, D.; Howard, R.E.; Hubbard, W.; Jackel, L.D. Backpropagation Applied to Handwritten Zip Code Recognition. Neural Comput. 1989, 1, 541–551. [Google Scholar] [CrossRef]
  10. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-Based Learning Applied to Document Recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
  11. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet Large Scale Visual Recognition Challenge. arXiv 2014, arXiv:1409.0575. [Google Scholar] [CrossRef]
  12. Valliani, A.A.A.; Ranti, D.; Oermann, E.K. Deep Learning and Neurology: A Systematic Review. Neurol. Ther. 2019, 8, 351–365. [Google Scholar] [CrossRef]
  13. Raymond, E.S. The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary, 2nd ed.; O’Reilly Media: Beijing, China; Cambridge, MA, USA; Farnham, UK; Köln, Germany; Paris, France; Sebastopol, CA, USA; Taipei, Taiwan, 2001; p. 241. [Google Scholar]
  14. Nvidia; Vingelmann, P.; Fitzek, F.H. CUDA, Release: 10.2.89. 2020. Available online: https://developer.nvidia.com/cuda-toolkit (accessed on 22 November 2022).
  15. Chetlur, S.; Woolley, C.; Vandermersch, P.; Cohen, J.M.; Tran, J.; Catanzaro, B.; Shelhamer, E. cuDNN: Efficient Primitives for Deep Learning. arXiv 2014, arXiv:1410.0759. [Google Scholar]
  16. Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. 2015. Available online: tensorflow.org (accessed on 22 November 2022).
  17. Chollet, F. keras. 2015. Available online: https://github.com/fchollet/keras (accessed on 22 November 2022).
  18. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Advances in Neural Information Processing Systems 32; Curran Associates, Inc.: Morehouse Lane Red Hook, NY, USA, 2019; pp. 8024–8035. [Google Scholar]
  19. Nature Neuroscience. Available online: https://www.nature.com/subjects/neuroscience (accessed on 18 August 2022).
  20. Brott, T.; Adams, H.; Olinger, C.; Marle, J.; Barsan, W.; Biller, J.; Spilker, J.; Holleran, R.; Eberle, R.; Hertzberg, V.; et al. Measurements of acute cerebral infarction: A clinical examination scale. Stroke 1989, 20, 864–870. [Google Scholar] [CrossRef] [PubMed]
  21. What Is Neuroinformatics? Available online: https://www.incf.org/about/what-is-neuroinformatics (accessed on 18 August 2022).
  22. Abrams, M.B.; Bjaalie, J.G.; Das, S.; Egan, G.F.; Ghosh, S.S.; Goscinski, W.J.; Grethe, J.S.; Kotaleski, J.H.; Ho, E.T.W.; Kennedy, D.N.; et al. A Standards Organization for Open and FAIR Neuroscience: The International Neuroinformatics Coordinating Facility. Neuroinformatics 2021, 20, 25–36. [Google Scholar] [CrossRef] [PubMed]
  23. Wilkinson, M.D.; Dumontier, M.; Aalbersberg, I.J.; Appleton, G.; Axton, M.; Baak, A.; Blomberg, N.; Boiten, J.W.; da Silva Santos, L.B.; Bourne, P.E.; et al. The FAIR Guiding Principles for scientific data management and stewardship. Sci. Data 2016, 3, 160018. [Google Scholar] [CrossRef]
  24. Fischl, B. FreeSurfer. Neuroimage 2012, 62, 774–781. [Google Scholar] [CrossRef]
  25. Smith, S.M.; Jenkinson, M.; Woolrich, M.W.; Beckmann, C.F.; Behrens, T.E.J.; Johansen-Berg, H.; Bannister, P.R.; Luca, M.D.; Drobnjak, I.; Flitney, D.; et al. Advances in functional and structural MR image analysis and implementation as FSL. NeuroImage 2004, 23, S208–S219. [Google Scholar] [CrossRef]
  26. Flandin, G.; Friston, K. Statistical Parametric Mapping. Scholarpedia 2008, 3, 6232. [Google Scholar] [CrossRef]
  27. Delorme, A.; Makeig, S. EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J. Neurosci. Methods 2004, 134, 9–21. [Google Scholar] [CrossRef]
  28. Tadel, F.; Baillet, S.; Mosher, J.C.; Pantazis, D.; Leahy, R.M. Brainstorm: A User-Friendly Application for MEG/EEG Analysis. Comput. Intell. Neurosci. 2011, 2011, 1–13. [Google Scholar] [CrossRef]
  29. Atzori, M.; Müller, H. PaWFE: Fast Signal Feature Extraction Using Parallel Time Windows. Front. Neurorobot. 2019, 13, 74. [Google Scholar] [CrossRef]
  30. Gramfort, A.; Luessi, M.; Larson, E.; Engemann, D.A.; Strohmeier, D.; Brodbeck, C.; Goj, R.; Jas, M.; Brooks, T.; Parkkonen, L.; et al. MEG and EEG Data Analysis with MNE-Python. Front. Neurosci. 2013, 7, 1–13. [Google Scholar] [CrossRef] [PubMed]
  31. Tonin, L.; Beraldo, G.; Tortora, S.; Menegatti, E. ROS-Neuro: An Open-Source Platform for Neurorobotics. Front. Neurorobot. 2022, 16, 886050. [Google Scholar] [CrossRef]
  32. Hramov, A.E.; Maksimenko, V.A.; Pisarchik, A.N. Physical principles of brain–computer interfaces and their applications for rehabilitation, robotics and control of human brain states. Phys. Rep. 2021, 918, 1–133. [Google Scholar] [CrossRef]
  33. Menze, B.H.; Jakab, A.; Bauer, S.; Kalpathy-Cramer, J.; Farahani, K.; Kirby, J.; Burren, Y.; Porz, N.; Slotboom, J.; Wiest, R.; et al. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS). IEEE Trans. Med. Imaging 2015, 34, 1993–2024. [Google Scholar] [CrossRef] [PubMed]
  34. Winzeck, S.; Hakim, A.; McKinley, R.; Pinto, J.; Alves, V.; Silva, C.; Pisov, M.; Krivov, E.; Belyaev, M.; Monteiro, M.; et al. ISLES 2016 and 2017-benchmarking ischemic stroke lesion outcome prediction based on multispectral MRI. Front. Neurol. 2018, 9, 679. [Google Scholar] [CrossRef]
  35. Petzsche, M.R.H.; de la Rosa, E.; Hanning, U.; Wiest, R.; Pinilla, W.E.V.; Reyes, M.; Meyer, M.I.; Liew, S.L.; Kofler, F.; Ezhov, I.; et al. ISLES 2022: A multi-center magnetic resonance imaging stroke lesion segmentation dataset. Sci. Data 2022, 9, 762. [Google Scholar] [CrossRef]
  36. Atzori, M.; Gijsberts, A.; Heynen, S.; Hager, A.G.M.; Deriaz, O.; van der Smagt, P.; Castellini, C.; Caputo, B.; Muller, H. Building the Ninapro database: A resource for the biorobotics community. In Proceedings of the 2012 4th IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), Rome, Italy, 24–27 June 2012; pp. 1258–1265. [Google Scholar]
  37. Park, K.H.; Lee, S.W. Movement intention decoding based on deep learning for multiuser myoelectric interfaces. In Proceedings of the 2016 4th International Winter Conference on Brain-Computer Interface (BCI), Gangwon, Republic of Korea, 22–24 February 2016; pp. 1–2. [Google Scholar] [CrossRef]
  38. Atzori, M.; Cognolato, M.; Müller, H. Deep Learning with Convolutional Neural Networks Applied to Electromyography Data: A Resource for the Classification of Movements for Prosthetic Hands. Front. Neurorobot. 2016, 10, 9. [Google Scholar] [CrossRef]
  39. Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
  40. Cho, K.; van Merrienboer, B.; Bahdanau, D.; Bengio, Y. On the Properties of Neural Machine Translation: Encoder-Decoder Approaches. arXiv 2014, arXiv:1409.1259. [Google Scholar] [CrossRef]
  41. Yamins, D.L.K.; DiCarlo, J.J. Using goal-driven deep learning models to understand sensory cortex. Nat. Neurosci. 2016, 19, 356–365. [Google Scholar] [CrossRef]
  42. Van Rossum, G.; Drake, F.L. Python 3 Reference Manual; CreateSpace: Scotts Valley, CA, USA, 2009. [Google Scholar]
  43. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2022. [Google Scholar]
  44. Bezanson, J.; Edelman, A.; Karpinski, S.; Shah, V.B. Julia: A fresh approach to numerical computing. SIAM Rev. 2017, 59, 65–98. [Google Scholar] [CrossRef]
  45. Innes, M. Flux: Elegant Machine Learning with Julia. J. Open Source Softw. 2018, 3, 602. [Google Scholar] [CrossRef]
  46. Heilmeyer, F.A.; Schirrmeister, R.T.; Fiederer, L.D.J.; Völker, M.; Behncke, J.; Ball, T. A large-scale evaluation framework for EEG deep learning architectures. In Proceedings of the 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Miyazaki, Japan, 7–10 October 2018; pp. 1039–1045. [Google Scholar] [CrossRef]
  47. Kuntzelman, K.M.; Williams, J.M.; Lim, P.C.; Samal, A.; Rao, P.K.; Johnson, M.R. Deep-Learning-Based Multivariate Pattern Analysis (dMVPA): A Tutorial and a Toolbox. Front. Hum. Neurosci. 2021, 15, 638052. [Google Scholar] [CrossRef] [PubMed]
48. Kostas, D.; Rudzicz, F. DN3: An open-source Python library for large-scale raw neurophysiology data assimilation for more flexible and standardized deep learning. bioRxiv 2020. [Google Scholar] [CrossRef]
  49. Hou, Y.; Zhou, L.; Jia, S.; Lun, X. A novel approach of decoding EEG four-class motor imagery tasks via scout ESI and CNN. J. Neural Eng. 2020, 17, 016048. [Google Scholar] [CrossRef] [PubMed]
  50. Huang, Y.L.; Hsieh, C.Y.; Huang, J.X.; Wei, C.S. ExBrainable: An Open-Source GUI for CNN-based EEG Decoding and Model Interpretation. arXiv 2022, arXiv:2201.04065. [Google Scholar]
  51. Tayeb, Z.; Waniek, N.; Fedjaev, J.; Ghaboosi, N.; Rychly, L.; Widderich, C.; Richter, C.; Braun, J.; Saveriano, M.; Cheng, G.; et al. Gumpy: A Python toolbox suitable for hybrid brain–computer interfaces. J. Neural Eng. 2018, 15, 065003. [Google Scholar] [CrossRef]
  52. Fabietti, M.; Mahmud, M.; Lotfi, A.; Kaiser, M.S.; Averna, A.; Guggenmos, D.J.; Nudo, R.J.; Chiappalone, M.; Chen, J. SANTIA: A Matlab-based open-source toolbox for artifact detection and removal from extracellular neuronal signals. Brain Inform. 2021, 8, 14. [Google Scholar] [CrossRef]
  53. Shenk, J.; Byttner, W.; Nambusubramaniyan, S.; Zoeller, A. Traja: A Python toolbox for animal trajectory analysis. J. Open Source Softw. 2021, 6, 3202. [Google Scholar] [CrossRef]
  54. Luxem, K.; Mocellin, P.; Fuhrmann, F.; Kürsch, J.; Miller, S.R.; Palop, J.J.; Remy, S.; Bauer, P. Identifying behavioral structure from deep variational embeddings of animal motion. Commun. Biol. 2022, 5, 1267. [Google Scholar] [CrossRef]
  55. Okuno, T.; Woodward, A. Vector Auto-Regressive Deep Neural Network: A Data-Driven Deep Learning-Based Directed Functional Connectivity Estimation Toolbox. Front. Neurosci. 2021, 15, 764796. [Google Scholar] [CrossRef] [PubMed]
56. Selvaraju, R.R.; Das, A.; Vedantam, R.; Cogswell, M.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. Int. J. Comput. Vis. 2020, 128, 336–359. [Google Scholar] [CrossRef]
57. Chen, J.; Ding, L.; Viana, M.P.; Lee, H.; Sluezwski, M.F.; Morris, B.; Hendershott, M.C.; Yang, R.; Mueller, I.A.; Rafelski, S.M. The Allen Cell and Structure Segmenter: A new open source toolkit for segmenting 3D intracellular structures in fluorescence microscopy images. bioRxiv 2018. [Google Scholar] [CrossRef]
  58. Aljovic, A.; Zhao, S.; Chahin, M.; de la Rosa, C.; Van Steenbergen, V.; Kerschensteiner, M.; Bareyre, F.M. A deep learning-based toolbox for Automated Limb Motion Analysis (ALMA) in murine models of neurological disorders. Commun. Biol. 2022, 5, 131. [Google Scholar] [CrossRef]
  59. Tustison, N.J.; Cook, P.A.; Holbrook, A.J.; Johnson, H.J.; Muschelli, J.; Devenyi, G.A.; Duda, J.T.; Das, S.R.; Cullen, N.C.; Gillen, D.L.; et al. The ANTsX ecosystem for quantitative biological and medical imaging. Sci. Rep. 2021, 11, 9068. [Google Scholar] [CrossRef]
  60. Inés, A.; Domínguez, C.; Heras, J.; Mata, E.; Pascual, V. Biomedical image classification made easier thanks to transfer and semi-supervised learning. Comput. Methods Programs Biomed. 2021, 198, 105782. [Google Scholar] [CrossRef]
  61. Zaimi, A. AxonDeepSeg: Automatic axon and myelin segmentation from microscopy data using convolutional neural networks. Sci. Rep. 2018, 8, 3816. [Google Scholar] [CrossRef]
  62. Blumenthal, M.; Luo, G.; Schilling, M.; Holme, H.C.M.; Uecker, M. Deep, deep learning with BART. Magn. Reson. Med. 2023, 89, 678–693. [Google Scholar] [CrossRef]
  63. Zhao, A.; Balakrishnan, G.; Durand, F.; Guttag, J.V.; Dalca, A.V. Data augmentation using learned transformations for one-shot medical image segmentation. arXiv 2019, arXiv:1902.09383. [Google Scholar]
  64. Rupprecht, P.; Carta, S.; Hoffmann, A.; Echizen, M.; Blot, A.; Kwan, A.C.; Dan, Y.; Hofer, S.B.; Kitamura, K.; Helmchen, F.; et al. A database and deep learning toolbox for noise-optimized, generalized spike inference from calcium imaging. Nat. Neurosci. 2021, 24, 1324–1337. [Google Scholar] [CrossRef]
  65. Haberl, M.G.; Churas, C.; Tindall, L.; Boassa, D.; Phan, S.; Bushong, E.A.; Madany, M.; Akay, R.; Deerinck, T.J.; Peltier, S.T.; et al. CDeep3M—Plug-and-Play cloud-based deep learning for image segmentation. Nat. Methods 2018, 15, 677–680. [Google Scholar] [CrossRef] [PubMed]
  66. Apte, A.P.; Iyer, A.; Thor, M.; Pandya, R.; Haq, R.; Jiang, J.; LoCastro, E.; Shukla-Dave, A.; Sasankan, N.; Xiao, Y.; et al. Library of deep-learning image segmentation and outcomes model-implementations. Phys. Medica 2020, 73, 190–196. [Google Scholar] [CrossRef] [PubMed]
  67. Thibeau-Sutre, E.; Diaz, M.; Hassanaly, R.; Routier, A.; Dormont, D.; Colliot, O.; Burgos, N. ClinicaDL: An open-source deep learning software for reproducible neuroimaging processing. Comput. Methods Programs Biomed. 2022, 220, 106818. [Google Scholar] [CrossRef] [PubMed]
  68. Dunn, T.W.; Marshall, J.D.; Severson, K.S.; Aldarondo, D.E.; Hildebrand, D.G.C.; Chettih, S.N.; Wang, W.L.; Gellis, A.J.; Carlson, D.E.; Aronov, D.; et al. Geometric deep learning enables 3D kinematic profiling across species and environments. Nat. Methods 2021, 18, 564–573. [Google Scholar] [CrossRef] [PubMed]
  69. Arac, A.; Zhao, P.; Dobkin, B.H.; Carmichael, S.T.; Golshani, P. DeepBehavior: A Deep Learning Toolbox for Automated Analysis of Animal and Human Behavior Imaging Data. Front. Syst. Neurosci. 2019, 13, 20. [Google Scholar] [CrossRef]
  70. Sun, G.; Lyu, C.; Cai, R.; Yu, C.; Sun, H.; Schriver, K.E.; Gao, L.; Li, X. DeepBhvTracking: A Novel Behavior Tracking Method for Laboratory Animals Based on Deep Learning. Front. Behav. Neurosci. 2021, 15, 750894. [Google Scholar] [CrossRef]
  71. Denis, J.; Dard, R.F.; Quiroli, E.; Cossart, R.; Picardo, M.A. DeepCINAC: A Deep-Learning-Based Python Toolbox for Inferring Calcium Imaging Neuronal Activity Based on Movie Visualization. eNeuro 2020, 7, ENEURO.0038–20.2020. [Google Scholar] [CrossRef]
  72. Mehrtash, A.; Pesteie, M.; Hetherington, J.; Behringer, P.A.; Kapur, T.; Wells, W.M.; Rohling, R.; Fedorov, A.; Abolmaesumi, P. DeepInfer: Open-Source Deep Learning Deployment Toolkit for Image-Guided Therapy; SPIE: Bellingham, WA, USA, 2017; p. 101351K. [Google Scholar] [CrossRef]
  73. Nath, T.; Mathis, A.; Chen, A.C.; Patel, A.; Bethge, M.; Mathis, M.W. Using DeepLabCut for 3D markerless pose estimation across species and behaviors. Nat. Protoc. 2019, 14, 2152–2176. [Google Scholar] [CrossRef]
  74. Schweihoff, J.F.; Loshakov, M.; Pavlova, I.; Kück, L.; Ewell, L.A.; Schwarz, M.K. DeepLabStream enables closed-loop behavioral experiments using deep learning-based markerless, real-time posture detection. Commun. Biol. 2021, 4, 130. [Google Scholar] [CrossRef]
  75. Beers, A.; Brown, J.; Chang, K.; Hoebel, K.; Patel, J.; Ly, K.I.; Tolaney, S.M.; Brastianos, P.; Rosen, B.; Gerstner, E.R.; et al. DeepNeuro: An open-source deep learning toolbox for neuroimaging. Neuroinformatics 2021, 19, 127–140. [Google Scholar] [CrossRef]
  76. Zhou, Z.; Kuo, H.C.; Peng, H.; Long, F. DeepNeuron: An open deep learning toolbox for neuron tracing. Brain Inform. 2018, 5, 3. [Google Scholar] [CrossRef] [PubMed]
  77. Graving, J.M.; Chae, D.; Naik, H.; Li, L.; Koger, B.; Costelloe, B.R.; Couzin, I.D. DeepPoseKit, a software toolkit for fast and robust animal pose estimation using deep learning. eLife 2019, 8, e47994. [Google Scholar] [CrossRef] [PubMed]
  78. Yiu, Y.H.; Aboulatta, M.; Raiser, T.; Ophey, L.; Flanagin, V.L.; zu Eulenburg, P.; Ahmadi, S.A. DeepVOG: Open-source pupil segmentation and gaze estimation in neuroscience using deep learning. J. Neurosci. Methods 2019, 324, 108307. [Google Scholar] [CrossRef] [PubMed]
  79. Pawlowski, N.; Ktena, S.I.; Lee, M.C.H.; Kainz, B.; Rueckert, D.; Glocker, B.; Rajchl, M. DLTK: State of the Art Reference Implementations for Deep Learning on Medical Images. arXiv 2017, arXiv:1711.06853. [Google Scholar]
  80. Chen, X.; Zhou, M.; Gong, Z.; Xu, W.; Liu, X.; Huang, T.; Zhen, Z.; Liu, J. DNNBrain: A Unifying Toolbox for Mapping Deep Neural Networks and Brains. Front. Comput. Neurosci. 2020, 14, 580632. [Google Scholar] [CrossRef]
  81. Henschel, L.; Conjeti, S.; Estrada, S.; Diers, K.; Fischl, B.; Reuter, M. FastSurfer—A fast and accurate deep learning based neuroimaging pipeline. NeuroImage 2020, 219, 117012. [Google Scholar] [CrossRef]
  82. Rutherford, S.; Sturmfels, P.; Angstadt, M.; Hect, J.; Wiens, J.; van den Heuvel, M.I.; Scheinost, D.; Sripada, C.; Thomason, M. Automated Brain Masking of Fetal Functional MRI with Open Data. Neuroinformatics 2021, 20, 173–185. [Google Scholar] [CrossRef]
  83. Pati, S.; Thakur, S.P.; Bhalerao, M.; Thermos, S.; Baid, U.; Gotkowski, K.; Gonzalez, C.; Guley, O.; Hamamci, I.E.; Er, S.; et al. GaNDLF: A Generally Nuanced Deep Learning Framework for Scalable End-to-End Clinical Workflows in Medical Imaging. arXiv 2021, arXiv:2103.01006. [Google Scholar]
  84. Billot, B.; Bocchetta, M.; Todd, E.; Dalca, A.V.; Rohrer, J.D.; Iglesias, J.E. Automated segmentation of the hypothalamus and associated subunits in brain MRI. NeuroImage 2020, 223, 117287. [Google Scholar] [CrossRef]
  85. Gros, C.; Lemay, A.; Vincent, O.; Rouhier, L.; Bucquet, A.; Cohen, J.P.; Cohen-Adad, J. ivadomed: A Medical Imaging Deep Learning Toolbox. arXiv 2020, arXiv:2010.09984. [Google Scholar] [CrossRef]
  86. Pereira, T.D.; Aldarondo, D.E.; Willmore, L.; Kislin, M.; Wang, S.S.H.; Murthy, M.; Shaevitz, J.W. Fast animal pose estimation using deep neural networks. Nat. Methods 2019, 16, 117–125. [Google Scholar] [CrossRef] [PubMed]
  87. Pereira, T.D.; Tabris, N.; Matsliah, A.; Turner, D.M.; Li, J.; Ravindranath, S.; Papadoyannis, E.S.; Normand, E.; Deutsch, D.S.; Wang, Z.Y.; et al. SLEAP: A deep learning system for multi-animal pose tracking. Nat. Methods 2022, 19, 486–495. [Google Scholar] [CrossRef] [PubMed]
  88. Segalin, C.; Williams, J.; Karigo, T.; Hui, M.; Zelikowsky, M.; Sun, J.J.; Perona, P.; Anderson, D.J.; Kennedy, A. The Mouse Action Recognition System (MARS) software pipeline for automated analysis of social behaviors in mice. eLife 2021, 10, e63720. [Google Scholar] [CrossRef] [PubMed]
  89. Xiao, D.; Forys, B.J.; Vanni, M.P.; Murphy, T.H. MesoNet allows automated scaling and segmentation of mouse mesoscale cortical maps using machine learning. Nat. Commun. 2021, 12, 5992. [Google Scholar] [CrossRef] [PubMed]
  90. Mazziotti, R.; Carrara, F.; Viglione, A.; Lupori, L.; Lo Verde, L.; Benedetto, A.; Ricci, G.; Sagona, G.; Amato, G.; Pizzorusso, T. MEYE: Web App for Translational and Real-Time Pupillometry. eNeuro 2021, 8, ENEURO.0122–21.2021. [Google Scholar] [CrossRef] [PubMed]
  91. Müller, D.; Kramer, F. MIScnn: A framework for medical image segmentation with convolutional neural networks and deep learning. BMC Med. Imaging 2021, 21, 12. [Google Scholar] [CrossRef]
  92. Dalca, A.V.; Guttag, J.; Sabuncu, M.R. Anatomical Priors in Convolutional Networks for Unsupervised Biomedical Segmentation. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 9290–9299. [Google Scholar] [CrossRef]
  93. Gibson, E.; Li, W.; Sudre, C.; Fidon, L.; Shakir, D.I.; Wang, G.; Eaton-Rosen, Z.; Gray, R.; Doel, T.; Hu, Y.; et al. NiftyNet: A deep-learning platform for medical imaging. Comput. Methods Programs Biomed. 2018, 158, 113–122. [Google Scholar] [CrossRef]
94. Subramanian, A.; Lan, H.; Govindarajan, S.; Viswanathan, L.; Choupan, J.; Sepehrband, F. NiftyTorch: A Deep Learning framework for NeuroImaging. bioRxiv 2021. [Google Scholar] [CrossRef]
  95. Isensee, F.; Jaeger, P.F.; Kohl, S.A.A.; Petersen, J.; Maier-Hein, K.H. nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation. Nat. Methods 2021, 18, 203–211. [Google Scholar] [CrossRef]
  96. Lin, Z.; Wei, D.; Lichtman, J.; Pfister, H. PyTorch Connectomics: A Scalable and Flexible Segmentation Framework for EM Connectomics. arXiv 2021, arXiv:2112.05754. [Google Scholar]
  97. Greve, D.N.; Billot, B.; Cordero, D.; Hoopes, A.; Hoffmann, M.; Dalca, A.V.; Fischl, B.; Iglesias, J.E.; Augustinack, J.C. A deep learning toolbox for automatic segmentation of subcortical limbic structures from MRI images. NeuroImage 2021, 244, 118610. [Google Scholar] [CrossRef] [PubMed]
  98. Nilsson, S.R.; Goodwin, N.L.; Choong, J.J.; Hwang, S.; Wright, H.R.; Norville, Z.C.; Tong, X.; Lin, D.; Bentzley, B.S.; Eshel, N.; et al. Simple Behavioral Analysis (SimBA)—an open source toolkit for computer classification of complex social behaviors in experimental animals. bioRxiv 2020. [Google Scholar] [CrossRef]
  99. Hoopes, A.; Mora, J.S.; Dalca, A.V.; Fischl, B.; Hoffmann, M. SynthStrip: Skull-stripping for any brain image. NeuroImage 2022, 260, 119474. [Google Scholar] [CrossRef] [PubMed]
  100. Imbrosci, B.; Schmitz, D.; Orlando, M. Automated Detection and Localization of Synaptic Vesicles in Electron Microscopy Images. eNeuro 2022, 9, ENEURO.0400-20.2021. [Google Scholar] [CrossRef]
  101. Josserand, M.; Rosa-Salva, O.; Versace, E.; Lemaire, B.S. Visual Field Analysis: A reliable method to score left and right eye use using automated tracking. Behav. Res. Methods 2021, 54, 1715–1724. [Google Scholar] [CrossRef]
102. King, O.N.F.; Bellos, D.; Basham, M. Volume Segmantics: A Python Package for Semantic Segmentation of Volumetric Data Using Pre-trained PyTorch Deep Learning Models. J. Open Source Softw. 2022, 7, 4691. [Google Scholar] [CrossRef]
  103. Balakrishnan, G.; Zhao, A.; Sabuncu, M.R.; Guttag, J.; Dalca, A.V. VoxelMorph: A Learning Framework for Deformable Medical Image Registration. IEEE Trans. Med. Imaging 2019, 38, 1788–1800. [Google Scholar] [CrossRef]
  104. Hoopes, A.; Hoffmann, M.; Fischl, B.; Guttag, J.; Dalca, A.V. HyperMorph: Amortized Hyperparameter Learning for Image Registration. arXiv 2021, arXiv:2101.01035. [Google Scholar]
  105. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv 2015, arXiv:1505.04597. [Google Scholar]
  106. Ding, J.; Wen, H.; Tang, W.; Liu, R.; Li, Z.; Venegas, J.; Su, R.; Molho, D.; Jin, W.; Zuo, W.; et al. DANCE: A Deep Learning Library and Benchmark for Single-Cell Analysis. bioRxiv 2022. [Google Scholar] [CrossRef]
  107. Ehrlich, D.B.; Stone, J.T.; Brandfonbrener, D.; Atanasov, A.; Murray, J.D. PsychRNN: An Accessible and Flexible Python Package for Training Recurrent Neural Network Models on Cognitive Tasks. eNeuro 2021, 8, ENEURO.0427–20.2020. [Google Scholar] [CrossRef] [PubMed]
  108. Song, H.F.; Yang, G.R.; Wang, X.J. Training Excitatory-Inhibitory Recurrent Neural Networks for Cognitive Tasks: A Simple and Flexible Framework. PLoS Comput. Biol. 2016, 12, e1004792. [Google Scholar] [CrossRef] [PubMed]
  109. Muttenthaler, L.; Hebart, M.N. THINGSvision: A Python Toolbox for Streamlining the Extraction of Activations from Deep Neural Networks. Front. Neuroinform. 2021, 15, 679838. [Google Scholar] [CrossRef] [PubMed]
110. Kinahan, S.; Liss, J.; Berisha, V. TorchDIVA: An Extensible Computational Model of Speech Production built on an Open-Source Machine Learning Library. PLoS ONE 2023, 18, e0281306. [Google Scholar] [CrossRef]
  111. Rootes-Murdy, K.; Gazula, H.; Verner, E.; Kelly, R.; DeRamus, T.; Plis, S.; Sarwate, A.; Turner, J.; Calhoun, V. Federated Analysis of Neuroimaging Data: A Review of the Field. Neuroinformatics 2022, 20, 377–390. [Google Scholar] [CrossRef] [PubMed]
  112. Silva, S.; Altmann, A.; Gutman, B.; Lorenzi, M. Fed-BioMed: A General Open-Source Frontend Framework for Federated Learning in Healthcare. In Domain Adaptation and Representation Transfer, and Distributed and Collaborative Learning; Albarqouni, S., Bakas, S., Kamnitsas, K., Cardoso, M.J., Landman, B., Li, W., Milletari, F., Rieke, N., Roth, H., Xu, D., et al., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2020; Volume 12444, pp. 201–210. [Google Scholar] [CrossRef]
  113. Pati, S.; Baid, U.; Edwards, B.; Sheller, M.J.; Foley, P.; Anthony Reina, G.; Thakur, S.; Sako, C.; Bilello, M.; Davatzikos, C.; et al. The federated tumor segmentation (FeTS) tool: An open-source solution to further solid tumor research. Phys. Med. Biol. 2022, 67, 204002. [Google Scholar] [CrossRef]
  114. Zhang, L.; Li, J.; Li, P.; Lu, X.; Gong, M.; Shen, P.; Zhu, G.; Shah, S.A.; Bennamoun, M.; Qian, K.; et al. MEDAS: An open-source platform as a service to help break the walls between medicine and informatics. Neural Comput. Appl. 2022, 34, 6547–6567. [Google Scholar] [CrossRef]
  115. Cardoso, M.J.; Li, W.; Brown, R.; Ma, N.; Kerfoot, E.; Wang, Y.; Murrey, B.; Myronenko, A.; Zhao, C.; Yang, D.; et al. MONAI: An open-source framework for deep learning in healthcare. arXiv 2022, arXiv:2211.02701. [Google Scholar]
  116. Abe, T.; Kinsella, I.; Saxena, S.; Buchanan, E.K.; Couto, J.; Briggs, J.; Kitt, S.L.; Glassman, R.; Zhou, J.; Paninski, L.; et al. Neuroscience Cloud Analysis As a Service: An open-source platform for scalable, reproducible data analysis. Neuron 2022, 110, 2771–2789. [Google Scholar] [CrossRef]
  117. Molano-Mazon, M.; Barbosa, J.; Pastor-Ciurana, J.; Fradera, M.; Zhang, R.Y.; Forest, J.; del Pozo Lerida, J.; Ji-An, L.; Cueva, C.J.; de la Rocha, J.; et al. NeuroGym: An open resource for developing and sharing neuroscience tasks. PsyArXiv 2022. [Google Scholar] [CrossRef]
  118. Reina, G.A.; Gruzdev, A.; Foley, P.; Perepelkina, O.; Sharma, M.; Davidyuk, I.; Trushkin, I.; Radionov, M.; Mokrov, A.; Agapov, D.; et al. OpenFL: An open-source framework for Federated Learning. Phys. Med. Biol. 2022, 67, 214001. [Google Scholar] [CrossRef]
  119. Jungo, A.; Scheidegger, O.; Reyes, M.; Balsiger, F. pymia: A Python package for data handling and evaluation in deep learning-based medical image analysis. Comput. Methods Programs Biomed. 2021, 198, 105796. [Google Scholar] [CrossRef] [PubMed]
  120. Pérez-García, F.; Sparks, R.; Ourselin, S. TorchIO: A Python library for efficient loading, preprocessing, augmentation and patch-based sampling of medical images in deep learning. Comput. Methods Programs Biomed. 2021, 208, 106236. [Google Scholar] [CrossRef] [PubMed]
  121. Bisong, E. Google Colaboratory. In Building Machine Learning and Deep Learning Models on Google Cloud Platform: A Comprehensive Guide for Beginners; Apress: Berkeley, CA, USA, 2019; pp. 59–64. [Google Scholar] [CrossRef]
  122. Basyal, G.P.; Rimal, B.P.; Zeng, D. A Systematic Review of Natural Language Processing for Knowledge Management in Healthcare. arXiv 2020, arXiv:2007.09134. [Google Scholar]
123. Locke, S.; Bashall, A.; Al-Adely, S.; Moore, J.; Wilson, A.; Kitchen, G.B. Natural language processing in medicine: A review. Trends Anaesth. Crit. Care 2021, 38, 4–9. [Google Scholar] [CrossRef]
  124. Zhou, J.; Cui, G.; Zhang, Z.; Yang, C.; Liu, Z.; Sun, M. Graph Neural Networks: A Review of Methods and Applications. arXiv 2018, arXiv:1812.08434. [Google Scholar] [CrossRef]
  125. Zhang, X.M.; Liang, L.; Liu, L.; Tang, M.J. Graph Neural Networks and Their Current Applications in Bioinformatics. Front. Genet. 2021, 12, 690049. [Google Scholar] [CrossRef]
  126. Li, M.M.; Huang, K.; Zitnik, M. Graph Representation Learning in Biomedicine. arXiv 2022, arXiv:2104.04883. [Google Scholar]
  127. Faes, L.; Wagner, S.K.; Fu, D.J.; Liu, X.; Korot, E.; Ledsam, J.R.; Back, T.; Chopra, R.; Pontikos, N.; Kern, C.; et al. Automated deep learning design for medical image classification by health-care professionals with no coding experience: A feasibility study. Lancet Digit. Health 2019, 1, e232–e242. [Google Scholar] [CrossRef]
Table 1. Domains of applications for the libraries and frameworks processing sequence data.
| Name | Neuroscience Area | Data Type | Datasets | Task |
| --- | --- | --- | --- | --- |
| braindecode [46] | General | EEG, MEG | External | Classification |
| DeepEEG | Electrophysiology | EEG | No | Classification |
| DeLINEATE [47] | General | Images, sequences | External | Classification |
| DN3 [48] | BCI | EEG | No | Classification |
| EEG-DL [49] | BCI | EEG | No | Classification |
| ExBrainable [50] | Electrophysiology | EEG | External | Classification, XAI |
| gumpy [51] | BCI | EEG, EMG | No | Classification |
| SANTIA [52] | Electrophysiology | Local Field Potentials | No | Processing |
| Traja [53] | Behavioral neuroscience | Trajectories | No | Prediction, Classification, Synthesis |
| VAME [54] | Behavioral neuroscience | Trajectories | No | Embedding, Clustering |
| VARDNN toolbox [55] | Connectomics (Functional Connectivity) | Sequences (BOLD signal) | No | Time series causal analysis |
Table 2. Model engineering specifications for the libraries and frameworks processing sequence data.
| Name | Models | DL Framework | Customization | Programming Language |
| --- | --- | --- | --- | --- |
| braindecode | 1-D CNN | PyTorch | Yes (weights, model) | Python |
| DeepEEG | MLP, 1,2,3-D CNN, LSTM | Keras, TensorFlow | Yes (weights) | Python |
| DeLINEATE | CNN | Keras, TensorFlow | Yes (weights, model) | Python |
| DN3 | MLP | PyTorch | Yes | Python |
| ExBrainable | CNN | PyTorch | Yes (weights) | Python |
| EEG-DL | Miscellaneous | TensorFlow | Yes (weights, model) | Python, MATLAB |
| gumpy | CNN, LSTM | Keras, Theano | Yes (weights, model) | Python |
| SANTIA | MLP, LSTM, 1-D CNN | Deep Learning Toolbox | Yes (weights, model) | MATLAB |
| Traja | LSTM, VAE, GAN | PyTorch | Yes (weights, model) | Python |
| VAME | VAE | PyTorch | Yes (weights, size) | Python |
| VARDNN toolbox | Vector Auto-Regressive DNN | TensorFlow | Yes (weights) | Python |
Table 3. Technological aspects and code sources for the libraries and frameworks processing sequence data. The entry “(*)” signals missing data.
| Name | Interface | Online/Offline | Maintenance | Source | Stars (Forks) | Contributors |
| --- | --- | --- | --- | --- | --- | --- |
| braindecode | CLI | Offline | Active | https://github.com/braindecode/braindecode | 463 (123) | 26 |
| DeepEEG | Colab Notebooks | Offline | Inactive | https://github.com/kylemath/DeepEEG | 213 (55) | 2 |
| DeLINEATE | GUI, Colab Notebooks | Offline | Active | https://bitbucket.org/delineate/delineate | (*) | 3 |
| DN3 | CLI | Offline | Inactive | https://github.com/SPOClab-ca/dn3 | 50 (16) | 4 |
| EEG-DL | CLI | Offline | Active | https://github.com/SuperBruceJia/EEG-DL | 640 (179) | 1 |
| ExBrainable | GUI | Offline | Active | https://github.com/CECNL/ExBrainable | 1 (2) | 4 |
| gumpy | CLI | Online, Offline | Inactive | https://github.com/gumpy-bci | 61 (22) | 4 |
| SANTIA | GUI | Offline | Inactive | https://github.com/IgnacioFabietti/SANTIAtoolbox | 4 (2) | 1 |
| Traja | CLI | Offline | Active | https://github.com/traja-team/traja | 71 (23) | 10 |
| VAME | CLI | Offline | Active | https://github.com/LINCellularNeuroscience/VAME | 130 (43) | 5 |
| VARDNN toolbox | CLI | Offline | Active | https://github.com/takuto-okuno-riken/vardnnpy | 2 (0) | 2 |
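As Tables 1–3 show, most sequence-oriented toolboxes wrap compact one-dimensional convolutional or recurrent models around windowed signals such as EEG epochs. For orientation only, the following minimal sketch illustrates in plain PyTorch the kind of 1-D CNN window classifier that several of these libraries (e.g., braindecode, EEG-DL, gumpy) implement internally; it is not the code or API of any listed tool, and the channel count, window length, and number of classes are placeholder assumptions.

```python
# Minimal sketch, assuming fixed-length multichannel EEG windows as input.
# This is NOT the implementation of any library in Tables 1-3; it only
# illustrates the 1-D CNN model family that recurs in Table 2.
import torch
import torch.nn as nn

class TinyEEGNet(nn.Module):
    def __init__(self, n_channels: int = 22, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=25, padding=12),  # temporal filtering
            nn.BatchNorm1d(32),
            nn.ELU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=11, padding=5),
            nn.BatchNorm1d(64),
            nn.ELU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time samples), e.g., 2 s of 22-channel EEG at 250 Hz
        return self.classifier(self.features(x).squeeze(-1))

model = TinyEEGNet()
windows = torch.randn(8, 22, 500)   # a dummy batch of EEG windows
logits = model(windows)             # (8, 4) class scores
print(logits.shape)
```

In practice, the listed libraries add dataset loaders, preprocessing, cross-validation, and training loops around models of this kind, which is where most of their value for neuroscience workflows lies.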
Table 4. Domains of applications for the libraries and frameworks processing image data.
| Name | Neuroscience Area | Data Type | Datasets | Task |
| --- | --- | --- | --- | --- |
| Allen Cell Structure Segmenter [57] | Microbiology, Histology | 3D-fluorescence microscopy | No | Segmentation |
| ALMA [58] | Behavioral neuroscience | Video | External | Pose estimation, Classification |
| ANTsX [59] (ANTsPyNet, ANTsRNet) | Neuroimaging | MRI | No | Classification, Segmentation, Registration, Superresolution |
| ATLASS [60] | Medical Imaging | Images | No | Annotation, Classification |
| AxonDeepSeg [61] | Microbiology, Histology | SEM, TEM | External | Segmentation |
| BART [62] | Medical Imaging | MRI | No | Reconstruction |
| Brainstorm [63] | Medical Imaging | MRI | No | Synthesis, Augmentation |
| CASCADE [64] | Electrophysiology | 2-photon calcium video, sequences | Yes | Event detection |
| CDeep3M2 [65] | Microbiology, Histology | Microscopy | Yes | Segmentation |
| CERR [66] | Oncology, Radiomics | Images | No | Segmentation, Outcome prediction |
| ClinicaDL [67] | Neuroimaging | MRI, PET | External | Classification, Segmentation |
| DANNCE [68] | Behavioral neuroscience | Video | Yes | Pose estimation |
| DeepBehavior [69] | Behavioral neuroscience | Video | Yes | Pose estimation |
| DeepBhvTracking [70] | Behavioral neuroscience | Video | No | Pose estimation |
| DeepCINAC [71] | Electrophysiology | 2-photon calcium video | No | Classification |
| DeepInfer [72] | Medical Imaging | Images (3D) | No | Classification, Segmentation |
| DeepLabCut [73] | Behavioral neuroscience | Video | No | Pose estimation |
| DeepLabStream [74] | Behavioral neuroscience | Video | No | Pose estimation |
| DeepNeuro [75] | Neuroimaging | Images (fMRI, miscellaneous) | No | Classification, Segmentation, Synthesis |
| DeepNeuron [76] | Morphology | Images (2D, 3D) | No | Classification, Segmentation |
| DeepPoseKit [77] | Behavioral neuroscience | Video | No | Pose estimation |
| DeLINEATE [47] | Medical Imaging | Images, sequences | External | Classification |
| DeepVOG [78] | Oculography | Images, Video | Demo | Segmentation |
| DLTK [79] | Medical Imaging | Images | No | Classification, Segmentation |
| DNNBrain [80] | Brain mapping | Images | No | Classification |
| FastSurfer [81] | Neuroimaging | MRI | No | Segmentation |
| fetal-code [82] | Neuroimaging | rs-fMRI | External | Segmentation |
| GaNDLF [83] | Medical Imaging | Images (2D, 3D) | External | Segmentation, Regression, XAI |
| hypothalamus_seg [84] | Neuroimaging | MRI | No | Segmentation |
| ivadomed [85] | Neuroimaging | Images (2D, 3D) | No | Classification, Segmentation |
| LEAP [86], SLEAP [87] | Behavioral neuroscience | Video | No | Pose estimation |
| MARS, BENTO [88] | Behavioral neuroscience | Video | Yes | Pose estimation, Classification, Action recognition, Tag |
| MesoNet [89] | Neuroimaging | Images (fluorescence microscopy) | External | Segmentation, Registration |
| MEYE [90] | Oculography | Images, Video | Yes | Segmentation |
| MIScnn [91] | Medical Imaging | Images (2D, 3D) | No | Segmentation |
| Neurite, Neuron [92] | Neuroimaging | Images | No | Segmentation |
| NiftyNet [93] | Medical Imaging | MRI, CT | No | Classification, Segmentation, Synthesis |
| NiftyTorch [94] | Neuroimaging | Images (2D, 3D) | No | Classification, Segmentation, Synthesis |
| nnU-Net [95] | Medical Imaging | Images (2D, 3D) | No | Segmentation |
| PyTC [96] | Connectomics | Images (2D, 3D) | No | Segmentation |
| ScLimbic [97] | Neuroimaging | MRI | External | Segmentation |
| SimBA [98] | Behavioral neuroscience | Video | No | Pose estimation |
| SynthStrip [99] | Neuroimaging | Images (3D) | No | Segmentation, Extraction |
| VesicleSeg [100] | Microbiology, Histology | EM | No | Segmentation |
| Visual Fields Analysis [101] | Eye tracking, Behavioral neuroscience | Video | No | Pose estimation, Classification |
| Volume Segmantics [102] | Neuroimaging | Images (3D) | No | Segmentation |
| VoxelMorph [103], HyperMorph [104] | Neuroimaging | MRI | No | Registration |
Table 5. Model engineering specifications for the libraries and frameworks processing image data.
| Name | Models | DL Framework | Customization | Programming Language |
| --- | --- | --- | --- | --- |
| Allen Cell Structure Segmenter | CAE | PyTorch | No | Python |
| ALMA | CNN | Unspecified | No | Python |
| ANTsX (ANTsPyNet, ANTsRNet) | CNN, CAE, GAN | Keras, TensorFlow | Yes | Python, R, C++ |
| ATLASS | CNN | FastAI | Yes | Python |
| AxonDeepSeg | CAE | TensorFlow | Yes (weights) | Python |
| BART | CAE, VAE | BART, TensorFlow | Yes | Python |
| Brainstorm | 3D-CAE | Keras, TensorFlow | Yes (weights) | Python |
| CASCADE | 1-D CNN | TensorFlow | Yes (weights) | Python |
| CDeep3M2 | CAE | TensorFlow | Yes (weights) | Python |
| CERR | CAE | CERR | Yes | Octave, MATLAB |
| ClinicaDL | CNN, CAE | PyTorch | Yes | Python |
| DANNCE | 3D-CNN | PyTorch | Yes (weights) | Python, MATLAB |
| DeepBehavior | CNN | TensorFlow | Yes (weights) | Python, MATLAB |
| DeepBhvTracking | CNN | TensorFlow | Yes (weights) | Python, MATLAB |
| DeepCINAC | DeepCINAC (CNN+LSTM) | Keras, TensorFlow | Yes (weights) | Python |
| DeepInfer | 3D-CNN, CAE | Unspecified | Yes (weights) | Python, C++ (3D Slicer) |
| DeepLabCut | CNN | TensorFlow | Yes (weights) | Python |
| DeepLabStream | CNN | TensorFlow | Yes (weights) | Python |
| DeepNeuro | CNN, CAE, GAN | Keras, TensorFlow | Yes (weights, model) | Python |
| DeepNeuron | 2,3-D CNN | Unspecified (Vaa3D) | Yes (weights) | C++ |
| DeepPoseKit | CNN | Keras, TensorFlow | Yes (weights) | Python |
| DeLINEATE | CNN | Keras, TensorFlow | Yes (weights, model) | Python |
| DeepVOG | CAE | TensorFlow | No | Python |
| DLTK | CNN, CAE | TensorFlow | Yes (weights) | Python |
| DNNBrain | CNN | PyTorch | Yes (model) | Python |
| FastSurfer | CNN | PyTorch | Yes (weights) | Python (FreeSurfer) |
| fetal-code | 2-D CNN | TensorFlow | No | Python |
| GaNDLF | CNN, CAE | PyTorch | Yes | Python |
| hypothalamus_seg | 3D-CNN | Keras, TensorFlow | Yes (weights) | Python |
| ivadomed | 2,3-D CNN, CAE | PyTorch | Yes (weights, model) | Python |
| LEAP, SLEAP | CNN, CAE | TensorFlow | Yes (weights, model) | Python |
| MARS, BENTO | CNN | TensorFlow | Yes (weights) | Python |
| MesoNet | CNN, CAE | Keras, TensorFlow | No | Python |
| MEYE | CAE, CNN | TensorFlow | Yes (model) | Python |
| MIScnn | 2,3-D CNN | TensorFlow | Yes (weights) | Python |
| Neurite, Neuron | VAE | Keras, TensorFlow | Yes (weights) | Python |
| NiftyNet | CNN | TensorFlow | Yes | Python |
| NiftyTorch | CNN, CAE, GAN | PyTorch | Yes | Python |
| nnU-Net | 2,3-D CAE | PyTorch | Yes | Python |
| PyTC | 2,3-D CAE | PyTorch | Yes | Python |
| ScLimbic | 3-D CAE | Neurite, TensorFlow | No | Python (FreeSurfer) |
| SimBA | CNN | TensorFlow | Yes (weights) | Python |
| SynthStrip | 3-D CAE | PyTorch | No | Python |
| VesicleSeg | CNN | PyTorch | No | Python |
| Visual Fields Analysis | DeepLabCut | TensorFlow, DeepLabCut | Yes (weights) | Python |
| Volume Segmantics | CAE | PyTorch | Yes (weights) | Python |
| VoxelMorph, HyperMorph | CAE | TensorFlow | Yes (weights) | Python |
Table 6. Technological aspects and code sources for the libraries and frameworks processing image data. The entry “(*)” signals missing data.
| Name | Interface | Online/Offline | Maintenance | Source | Stars (Forks) | Contributors |
| --- | --- | --- | --- | --- | --- | --- |
| Allen Cell Structure Segmenter | GUI, Jupyter Notebooks | Offline | Active | https://github.com/AllenCell/aics-ml-segmentation | 24 (4) | 2 |
| ALMA | GUI | Offline | Active | https://github.com/sollan/alma | 7 (1) | 2 |
| ANTsX (ANTsPyNet, ANTsRNet) | CLI | Offline | Active | https://github.com/ANTsX | 961 (354) | 55 |
| ATLASS | Jupyter Notebooks | Offline | Inactive | https://github.com/adines/ATLASS | 2 (1) | 2 |
| AxonDeepSeg | Jupyter Notebooks | Offline | Active | https://github.com/axondeepseg/axondeepseg | 106 (28) | 21 |
| BART | CLI | Offline | Active | https://github.com/mrirecon/deep-deep-learning-with-bart | 4 (0) | 1 |
| Brainstorm | CLI | Offline | Inactive | https://github.com/xamyzhao/brainstorm | 387 (93) | 1 |
| CASCADE | GUI, Colab Notebooks | Offline | Active | https://github.com/HelmchenLabSoftware/Cascade | 74 (26) | 7 |
| CDeep3M2 | GUI, Colab Notebooks | Offline | Active | https://github.com/CRBS/cdeep3m2 | 4 (2) | 2 |
| CERR | GUI | Offline | Active | https://github.com/cerr/CERR | 170 (95) | 7 |
| ClinicaDL | GUI, Colab Notebooks | Offline | Active | https://github.com/aramis-lab/clinicadl | (*) | (*) |
| DANNCE | GUI | Offline | Inactive | https://github.com/spoonsso/dannce | 162 (23) | 7 |
| DeepBehavior | GUI | Offline | Inactive | https://github.com/aarac/DeepBehavior | 29 (17) | 1 |
| DeepBhvTracking | GUI | Offline | Inactive | https://github.com/SunGL001/DeepBhvTracking | 3 (0) | 1 |
| DeepCINAC | GUI, Colab Notebooks | Offline | Active | https://gitlab.com/cossartlab/deepcinac | 7 (0) | 10 |
| DeepInfer | GUI | Offline | Active | http://www.deepinfer.org/ | 24 (14) | 4 |
| DeepLabCut | GUI, Colab Notebooks | Offline | Active | https://github.com/DeepLabCut/DeepLabCut | 3600 (1500) | 102 |
| DeepLabStream | GUI | Online | Active | https://github.com/SchwarzNeuroconLab/DeepLabStream | 45 (8) | 5 |
| DeepNeuro | CLI | Offline | Active | https://github.com/QTIM-Lab/DeepNeuro | 113 (35) | 3 |
| DeepNeuron | GUI | Offline | Inactive | https://github.com/Vaa3D/vaa3d_tools/tree/master/hackathon/MK/DeepNeuron | 92 (69) | 69 |
| DeepPoseKit | GUI | Offline | Inactive | https://github.com/jgraving/DeepPoseKit | 353 (83) | 6 |
| DeLINEATE | GUI, Colab Notebooks | Offline | Active | bitbucket.org/delineate/delineate | (*) | 3 |
| DeepVOG | CLI | Offline | Inactive | https://github.com/pydsgz/DeepVOG | 131 (58) | 4 |
| DLTK | CLI | Offline | Inactive | https://github.com/DLTK/DLTK | 1400 (408) | 6 |
| DNNBrain | CLI | Offline | Active | https://github.com/BNUCNL/dnnbrain | 37 (39) | 10 |
| FastSurfer | CLI | Offline | Active | https://github.com/Deep-MI/FastSurfer | 338 (83) | 12 |
| fetal-code | GUI, Colab Notebooks | Offline | Active | https://github.com/saigerutherford/fetal-code | 12 (5) | 2 |
| GaNDLF | GUI | Offline | Active | https://github.com/CBICA/GaNDLF | 85 (53) | 30 |
| hypothalamus_seg | CLI | Offline | Active | https://github.com/BBillot/hypothalamus_seg | 18 (5) | 1 |
| ivadomed | CLI | Offline | Active | https://github.com/ivadomed/ivadomed | 146 (151) | 34 |
| LEAP, SLEAP | CLI, GUI, Colab Notebooks | Online | Active | https://github.com/talmolab/sleap | 278 (61) | 26 |
| MARS, BENTO | GUI, MATLAB GUI, Jupyter Notebooks | Offline | Active | https://github.com/neuroethology | 37 (8) | 3 |
| MesoNet | GUI, Colab Notebooks | Offline | Active | osf.io/svztu | (*) | 3 |
| MEYE | Web app | Online, Offline | Active | pupillometry.it | 21 (5) | 2 |
| MIScnn | Jupyter Notebook | Offline | Inactive | https://github.com/frankkramer-lab/MIScnn | 357 (116) | 6 |
| Neurite, Neuron | CLI | Offline | Active | https://github.com/adalca/neurite | 279 (59) | 11 |
| NiftyNet | CLI | Offline | Inactive | https://github.com/NifTK/NiftyNet | 1300 (408) | 41 |
| NiftyTorch | CLI | Offline | Active | https://github.com/NiftyTorch/NiftyTorch.doc | 34 (8) | 3 |
| nnU-Net | CLI | Offline | Active | https://github.com/MIC-DKFZ/nnUNet | 3500 (1200) | 38 |
| PyTC | CLI | Offline | Active | https://github.com/zudi-lin/pytorch_connectomics | 139 (67) | 28 |
| ScLimbic | CLI | Offline | Active | https://surfer.nmr.mgh.harvard.edu/fswiki/ScLimbic | (*) | (*) |
| SimBA | GUI | Offline | Active | https://github.com/sgoldenlab/simba | 201 (115) | 9 |
| SynthStrip | CLI | Offline | Active | https://github.com/freesurfer/freesurfer/tree/dev/mri_synthstrip | (*) | (*) |
| VesicleSeg | GUI | Offline | Active | https://github.com/Imbrosci/synaptic-vesicles-detection | 4 (3) | 2 |
| Visual Fields Analysis | GUI | Offline | Active | https://github.com/mathjoss/VisualFieldsAnalysis | 1 (3) | 4 |
| Volume Segmantics | CLI, API | Offline | Active | https://github.com/DiamondLightSource/volume-segmantics | 7 (3) | 1 |
| VoxelMorph, HyperMorph | CLI | Offline | Active | https://github.com/voxelmorph/voxelmorph | 1800 (534) | 13 |
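Table 5 shows that convolutional auto-encoders and U-Net-style architectures [105] dominate the image-processing tools. As a rough illustration of this recurring design, the sketch below builds a toy two-dimensional encoder-decoder segmenter with a single skip connection in plain PyTorch; it is an illustrative assumption rather than the implementation used by any of the listed toolboxes, and all layer sizes are placeholders.

```python
# Minimal sketch, assuming 2-D grayscale inputs. This is NOT taken from any
# toolbox in Tables 4-6; it only illustrates the CAE / U-Net-like family [105].
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

class TinySegmenter(nn.Module):
    def __init__(self, in_channels=1, n_classes=2):
        super().__init__()
        self.enc1 = block(in_channels, 16)
        self.enc2 = block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec = block(32, 16)                 # 32 = 16 upsampled + 16 skip
        self.head = nn.Conv2d(16, n_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                        # full-resolution features
        e2 = self.enc2(self.pool(e1))            # half-resolution features
        d = self.up(e2)                          # back to full resolution
        d = self.dec(torch.cat([d, e1], dim=1))  # skip connection, as in U-Net
        return self.head(d)                      # per-pixel class logits

x = torch.randn(1, 1, 128, 128)                  # a dummy grayscale slice
print(TinySegmenter()(x).shape)                  # torch.Size([1, 2, 128, 128])
```

Production tools such as nnU-Net or ivadomed automate the choices of depth, patch size, normalization, and post-processing that are hard-coded in this toy example.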
Table 7. Domains of applications for the libraries and frameworks for special applications.
| Name | Neuroscience Area | Data Type | Datasets | Task |
| --- | --- | --- | --- | --- |
| DANCE [106] | Single cell analysis | Gene sequences | External availability | Clustering, Classification, Prediction |
| PsychRNN [107] | Computational neuroscience | Sequences | No | Classification, Prediction, Cognitive tasks |
| PyCog [108] | Computational neuroscience | Sequences | No | Classification, Prediction, Cognitive tasks |
| THINGSvision [109] | Computational neuroscience | Images, Text | External availability | Classification |
| TorchDIVA [110] | Speech production | Sequences | No | Audio synthesis |
| COINSTAC [111] | Neuroimaging | Images | No | Federated learning, Classification, Segmentation |
| Fed-BioMed [112] | Neuroimaging | Images | No | Federated learning, Classification, Aggregation |
| FeTS [113] | Neuroimaging | Images | No | Federated learning, Segmentation |
| MeDaS [114] | Neuroimaging | Images | No | Utilities, Classification, Segmentation, Object detection |
| MONAI [115] | General | General | External availability | General |
| NeuroCAAS [116] | General | General | External availability | General |
| NeuroGym [117] | Behavioral, cognitive neuroscience | General | Internal | Behavioral, cognitive task generation, evaluation |
| NiftyNet [93] | Neuroimaging | Images | No | Utilities, Classification, Segmentation, Regression, Synthesis |
| OpenFL [118] | General | General | No | Federated learning, Segmentation |
| pymia [119] | General | Images | No | Utilities (data handling, evaluations) |
| TorchIO [120] | Imaging | All images | No | Augmentation |
Table 8. Model engineering specifications for the libraries and frameworks for special applications.
| Name | Models | DL Framework | Customization | Programming Language |
| --- | --- | --- | --- | --- |
| DANCE | GNN | PyTorch | Yes | Python |
| PsychRNN | RNN | TensorFlow | Yes | Python |
| PyCog | RNN | Theano | Yes | Python |
| THINGSvision | CNN, RNN, Transformers | PyTorch, TensorFlow | No | Python |
| TorchDIVA | CNN | PyTorch | Yes (weights) | Python |
| COINSTAC | CNN | COINSTAC | Yes | JavaScript, Python |
| Fed-BioMed | VAE | PyTorch | Yes (weights) | Python |
| FeTS | 3D-ResUNet | PyTorch | Yes (weights) | Python |
| MeDaS | 2,3-D CNN, CAE | PyTorch, TensorFlow | Yes | Python |
| MONAI | General | PyTorch | Yes | Python |
| NeuroCAAS | CNN | TensorFlow | Yes | Python |
| NeuroGym | RNN, General | Keras, TensorFlow, PyTorch | Yes | Python |
| NiftyNet | CNN, CAE, GAN | TensorFlow | Yes | Python |
| OpenFL | General | Keras, TensorFlow, PyTorch | Yes | Python |
| pymia | General | General | Yes | Python |
| TorchIO | CNN | PyTorch | Yes | Python |
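Several of the packages in Tables 7 and 8 act as thin utility layers on top of PyTorch rather than end-to-end pipelines; TorchIO [120], for instance, focuses on loading and augmenting volumetric images. The short sketch below, based only on basic TorchIO calls with default arguments, applies a small augmentation pipeline to a synthetic volume; the tensor size and the choice of transforms are illustrative assumptions, not a prescription from the TorchIO documentation.

```python
# Hedged example of TorchIO-style augmentation (default arguments only).
import torch
import torchio as tio

# A synthetic single-channel 64x64x64 volume standing in for an MRI scan.
subject = tio.Subject(image=tio.ScalarImage(tensor=torch.rand(1, 64, 64, 64)))

# A small spatial and intensity augmentation pipeline.
transform = tio.Compose([
    tio.RandomFlip(),     # random flip along a spatial axis
    tio.RandomAffine(),   # random rotation, scaling, and translation
    tio.RandomNoise(),    # additive Gaussian noise
])

augmented = transform(subject)
print(augmented['image'].shape)  # still a (1, 64, 64, 64) volume
```

In a training pipeline, such transforms are typically attached to TorchIO subject datasets and consumed through a standard PyTorch DataLoader, which is the intended mode of use for this kind of utility library.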
Table 9. Technological aspects and code sources for the libraries and frameworks for special applications. The entry “(*)” signals missing data.
| Name | Interface | Online/Offline | Maintenance | Source | Stars (Forks) | Contributors |
| --- | --- | --- | --- | --- | --- | --- |
| DANCE | CLI | Offline | Active | https://github.com/OmicsML/dance | 206 (15) | 8 |
| PsychRNN | None | Offline | Active | https://github.com/murraylab/PsychRNN | 122 (38) | 8 |
| PyCog | None | Offline | Inactive | https://github.com/xjwanglab/pycog | 45 (29) | 1 |
| THINGSvision | None | Offline | Active | github.com/ViCCo-Group/THINGSvision | 108 (17) | 12 |
| TorchDIVA | CLI | Offline | Active | https://github.com/skinahan/DIVA_PyTorch | 12 (0) | 1 |
| COINSTAC | GUI | Offline | Active | https://github.com/trendscenter/coinstac | 35 (19) | 19 |
| Fed-BioMed | CLI | Offline | Active | https://gitlab.inria.fr/fedbiomed | 6 (0) | 26 |
| FeTS | GUI, CLI | Offline | Active | https://github.com/FETS-AI/Front-End | 55 (6) | 2 |
| MeDaS | GUI | Offline | Inactive | https://medas.bnc.org.cn/ | (*) | (*) |
| MONAI | GUI, Colab Notebooks | Offline | Active | github.com/Project-MONAI/MONAI | 4000 (776) | 151 |
| NeuroCAAS | GUI, Jupyter Notebooks | Offline | Active | github.com/cunningham-lab/neurocaas | 25 (22) | 6 |
| NeuroGym | Colab Notebooks | Offline | Active | https://neurogym.github.io | (*) | 4 |
| NiftyNet | CLI | Offline | Inactive | https://github.com/NifTK/NiftyNet | 1300 (408) | 41 |
| OpenFL | CLI | Offline | Active | https://github.com/securefederatedai/openfl | 484 (134) | 49 |
| pymia | CLI | Offline | Active | https://github.com/rundherum/pymia | 54 (12) | 4 |
| TorchIO | GUI, CLI | Offline | Active | torchio.rtfd.io | 1700 (204) | 49 |