Special Issue "Information Theory in Computational Biology"

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: 30 May 2022 | Viewed by 2660

Special Issue Editors

Dr. Alon Bartal
Guest Editor
The School of Business Administration, Bar-Ilan University, Ramat Gan 5290002, Israel
Interests: information networks; complex networks; information science; machine learning; computational biology
Dr. Kathleen M. Jagodnik
Guest Editor
The School of Business Administration, Bar-Ilan University, Ramat Gan 5290002, Israel
Interests: computational biology; bioinformatics; network science; information science; complexity; meta-analysis of science; communication

Special Issue Information

Dear Colleagues,

Starting with Claude Shannon’s foundational work in 1948, the field of Information Theory, key to statistical learning and inference, has shaped a wide range of scientific disciplines. Concepts including self-information, entropy, and mutual information have guided the progress of research ranging from physics to the biological sciences. In recent decades, Information Theory has contributed to significant advances in Computational Biology and Bioinformatics across a broad range of topics.

We are pleased to invite submissions to this Special Issue of Entropy, with the theme “Information Theory in Computational Biology”. Submissions can include, but are not limited to, the following research areas: sequencing, sequence comparison, and error correction; gene expression and transcriptomics; biological networks; omics analyses; genome-wide disease-gene association mapping; and protein sequence, structure, and interaction analysis.

Topics that are particularly welcome include analyses, and/or development of application tools, involving single-cell data; multi-omics integration; biological networks; human health; high-dimensional statistical theory for biological applications; unifying definitions and interpretations of statistical interactions; adaptation of existing information theoretic test statistics and estimators for cases involving missing, erroneous, or heterogeneous data; analyses when distributions of the test statistics under the null and the alternative hypotheses are unknown; biologically inspired information storage; and efficient analysis of very large datasets.

Submitted manuscripts should present original work, and may describe novel algorithms, methods, metrics, applications, tools, platforms, and other resources that apply Information Theory principles to advance the field of Computational Biology. We encourage the rigorous comparison of original work with existing methods. We also welcome survey papers, as well as essays reflecting on theory, controversies, and/or the state of current research involving the application of Information Theory concepts to Computational Biology, and providing informed recommendations to advance this research. (Please limit commentary/perspective articles to 3,000 words.)

Thank you for considering this publication opportunity. We look forward to your submission!

Requests for extensions to the stated manuscript submission deadline will be considered.

Sincerely,

Dr. Alon Bartal
Dr. Kathleen M. Jagodnik
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website; once registered, use the online submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

 

Keywords

  • bioinformatics
  • biological networks
  • computational biology
  • gene expression and transcriptomics
  • heterogeneous data
  • human health
  • information theory
  • machine learning
  • omics
  • systems biology

Published Papers (2 papers)


Research

Article
Multi-Classifier Fusion Based on MI–SFFS for Cross-Subject Emotion Recognition
Entropy 2022, 24(5), 705; https://doi.org/10.3390/e24050705 - 16 May 2022
Viewed by 266
Abstract
With the widespread use of emotion recognition, cross-subject emotion recognition based on EEG signals has become a hot topic in affective computing. Electroencephalography (EEG) can be used to detect the brain’s electrical activity associated with different emotions. The aim of this research is to improve cross-subject accuracy by enhancing the generalization of features. A Multi-Classifier Fusion method based on mutual information with sequential forward floating selection (MI_SFFS) is proposed. The dataset used in this paper is DEAP, which is a multi-modal open dataset containing 32 EEG channels and multiple other physiological signals. First, high-dimensional features are extracted from 15 EEG channels of DEAP after using a 10 s time window for data slicing. Second, MI and SFFS are integrated as a novel feature-selection method. Then, support vector machine (SVM), k-nearest neighbor (KNN) and random forest (RF) are employed to classify positive and negative emotions, and the output probabilities of the classifiers are used as weighted features for further classification. To evaluate the model performance, leave-one-out cross-validation is adopted. Finally, cross-subject classification accuracies of 0.7089, 0.7106 and 0.7361 are achieved by the SVM, KNN and RF classifiers, respectively. The results demonstrate the feasibility of the model by splicing different classifiers’ output probabilities as a portion of the weighted features.
(This article belongs to the Special Issue Information Theory in Computational Biology)
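As a hedged sketch (not the authors' code), the probability-fusion idea in the abstract above can be illustrated with scikit-learn; synthetic data stands in for the DEAP EEG features, and a plain mutual-information ranking stands in for the full MI–SFFS search:

```python
# Hedged sketch of MI-based feature selection plus probability fusion,
# loosely following the pipeline in the abstract. Synthetic data stands in
# for the DEAP EEG features; plain MI ranking stands in for MI-SFFS.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

SEED = 0
X, y = make_classification(n_samples=400, n_features=50, n_informative=8,
                           random_state=SEED)

# 1) Rank features by mutual information with the label; keep the top 10.
mi = mutual_info_classif(X, y, random_state=SEED)
X_sel = X[:, np.argsort(mi)[::-1][:10]]

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.25,
                                          random_state=SEED)

# 2) Train the three base classifiers named in the abstract.
base = [SVC(probability=True, random_state=SEED),
        KNeighborsClassifier(),
        RandomForestClassifier(random_state=SEED)]
for clf in base:
    clf.fit(X_tr, y_tr)

# 3) Splice the classifiers' output probabilities into a fused feature
#    vector (3 classifiers x 2 classes = 6 columns) for a meta-learner.
def fused(X):
    return np.hstack([clf.predict_proba(X) for clf in base])

meta = LogisticRegression().fit(fused(X_tr), y_tr)
acc = meta.score(fused(X_te), y_te)
print(f"fused accuracy: {acc:.3f}")
```

The meta-learner here is a simple logistic regression over the spliced probabilities; the paper instead weights the probabilities and feeds them back into the base classifiers, so this sketch shows only the general fusion pattern.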

Article
RNA World Modeling: A Comparison of Two Complementary Approaches
Entropy 2022, 24(4), 536; https://doi.org/10.3390/e24040536 - 11 Apr 2022
Viewed by 415
Abstract
The origin of life remains one of the major scientific questions in modern biology. Among many hypotheses aiming to explain how life on Earth started, the RNA world is probably the most extensively studied. It assumes that, in the very beginning, RNA molecules served as both enzymes and as genetic information carriers. However, even if this is true, there are many questions that still need to be answered—for example, whether the population of such molecules could achieve stability and retain genetic information for many generations, which is necessary in order for evolution to start. In this paper, we try to answer this question based on the parasite–replicase model (RP model), which divides RNA molecules into enzymes (RNA replicases) capable of catalyzing replication and parasites that do not possess replicase activity but can be replicated by RNA replicases. We describe the aforementioned system using partial differential equations and, based on the analysis of the simulation, surmise general rules governing its evolution. We also compare this approach with one where the RP system is modeled and implemented using a multi-agent modeling technique. We show that approaching the description and analysis of the RP system from different perspectives (microscopic, represented by MAS, and macroscopic, depicted by PDE) provides consistent results. Therefore, applying MAS does not lead to erroneous results and allows us to study more complex situations involving many cases, which would not be possible through the PDE model.
(This article belongs to the Special Issue Information Theory in Computational Biology)
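A minimal, well-mixed ODE caricature of the replicase–parasite (RP) system can illustrate the dynamics discussed above; the rate constants are illustrative assumptions, and the paper's actual models are spatial PDEs and multi-agent simulations:

```python
# Well-mixed ODE caricature of the replicase-parasite (RP) system, with
# illustrative (assumed) rate constants; the paper itself uses spatial PDEs
# and a multi-agent model, and space is deliberately ignored here.
def simulate(r0=0.3, p0=0.01, a=1.0, b=1.2, d=0.1, K=1.0,
             dt=0.01, steps=20_000):
    """Euler-integrate replicase (r) and parasite (p) concentrations.

    dr/dt = a*r*r*(1 - (r+p)/K) - d*r   # replicases copy replicases
    dp/dt = b*r*p*(1 - (r+p)/K) - d*p   # replicases copy parasites
    """
    r, p = r0, p0
    for _ in range(steps):
        crowd = max(0.0, 1.0 - (r + p) / K)
        r, p = (max(0.0, r + dt * (a * r * r * crowd - d * r)),
                max(0.0, p + dt * (b * r * p * crowd - d * p)))
    return r, p

r, p = simulate()
print(f"replicases={r:.3f}, parasites={p:.3f}")
```

Because the parasite here is copied faster than the replicase copies itself (b > a), the well-mixed system tends toward collapse; spatial structure of the kind the PDE and multi-agent models capture is what can rescue replicase populations.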

Planned Papers

The list below represents only planned manuscripts. Some of these manuscripts have not yet been received by the Editorial Office. Papers submitted to MDPI journals are subject to peer review.

Tentative Title:  Applications and Methodologies for Evolving Bio-Ontologies and Knowledge Graphs
Author: Hande Kucuk-McGinty

Tentative Title: Revealing the dynamics of neural information processing with multivariate information decomposition
Authors: Newman, E.L.; Varley, T.F.; Sherill, S.P.; Timme, N.M.; Beggs, J.M.
Abstract: The varied cognitive abilities and rich adaptive behaviours enabled by the animal nervous system are often described in terms of “information processing.” This framing raises the issue of how biological neural circuits actually implement “computations”, and some of the most fundamental outstanding questions in neuroscience centre on defining and determining the mechanisms of neural information processing. Classical information theory has long been understood to be a natural framework by which neural computation can be understood, and recent advances in the field of multivariate information theory specifically offer exciting new insights into the structure of computation in complex nervous systems. In this paper, we specifically focus on the partial information decomposition (PID), which reveals multiple redundant, unique, and synergistic “modes” by which neurons integrate information from multiple upstream sources. Of these different dynamics, synergistic integration, where the output of a computation is irreducible to its individual components, represents a fundamental operation, with implications for many cognitive processes such as pattern recognition, learning, and memory updating. Recent work in a variety of model systems has revealed that synergistic dynamics are ubiquitous in neural circuitry and show reliable structure–function relationships: emerging disproportionately in neuronal rich clubs, downstream of recurrent connectivity, and during correlated processes. In this review, we synthesize the existing literature on higher-order information dynamics in neuronal networks and describe what insights have been gained by taking an information-decomposition perspective on neural activity. Furthermore, we provide an introduction to the PID framework intended to be accessible to both novice and experienced scientists.
Finally, we briefly discuss future promising directions for information decomposition approaches to neuroscience, such as work in behaving animals, multi-target generalizations of the PID, and time-resolved local analyses.
Highlights:
  • We will describe the use of multivariate information theory to decompose neural dynamics into unique, redundant, and synergistic components.
  • We will survey and synthesize prior successes in using this approach to study synergistic processing.
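The synergy concept at the heart of PID can be previewed with classical Shannon quantities alone: for Y = XOR(X1, X2) with uniform inputs, each input is individually uninformative about Y, yet the pair determines it completely. A minimal sketch (illustrative only; this is not the PID itself, which requires further axioms to separate redundancy from synergy):

```python
# Classical-MI illustration of synergy: for Y = XOR(X1, X2) with uniform
# inputs, neither input alone carries information about Y, yet jointly
# they determine it. PID refines this picture beyond Shannon quantities.
import itertools
import math
from collections import Counter

# All four equally likely (x1, x2, y) tuples of the XOR gate.
samples = [(x1, x2, x1 ^ x2) for x1, x2 in itertools.product((0, 1), repeat=2)]

def entropy(idx):
    """Shannon entropy (bits) of the marginal over the given tuple indices."""
    counts = Counter(tuple(s[i] for i in idx) for s in samples)
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def mi(a, b):
    """Mutual information I(A; B) = H(A) + H(B) - H(A, B), in bits."""
    return entropy(a) + entropy(b) - entropy(a + b)

i1 = mi((0,), (2,))    # I(X1; Y) = 0 bits
i2 = mi((1,), (2,))    # I(X2; Y) = 0 bits
ij = mi((0, 1), (2,))  # I(X1, X2; Y) = 1 bit: purely synergistic
print(i1, i2, ij)
```

The 1 bit of joint information with 0 bits in each marginal is the canonical example of purely synergistic integration that PID generalizes to arbitrary distributions.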