Open Access Article
CFD-PBM Approach with Different Inlet Locations for the Gas-Liquid Flow in a Laboratory-Scale Bubble Column with Activated Sludge/Water
Computation 2017, 5(3), 38; doi:10.3390/computation5030038
Abstract
A novel computational fluid dynamics-population balance model (CFD-PBM) for the simulation of gas mixing in activated sludge (i.e., an opaque non-Newtonian liquid) in a bubble column is developed and described to solve the problem of measuring the hydrodynamic behavior of opaque non-Newtonian liquid-gas two-phase flow. We study the effects of the inlet position and liquid-phase properties (water/activated sludge) on characteristics such as the liquid flow field, gas hold-up, liquid dynamic viscosity, and volume-averaged bubble diameter. As the inlet position changed, the two symmetric vortices in the flow field gradually merged into a single main vortex. In the simulations, the global gas hold-up was higher when the liquid phase was water than when it was activated sludge, and the flow field varied dynamically with time; with activated sludge as the liquid phase, no periodic velocity changes were found. As the inlet position was varied, the non-Newtonian liquid phase exhibited different peak values and distributions of (dynamic) liquid viscosity in the bubble column, which were related to the gas hold-up: the high gas hold-up zone corresponded to the low dynamic viscosity zone. Finally, the volume-averaged bubble diameter was much larger with activated sludge than with water as the liquid phase.

Open Access Article
A Non-Isothermal Chemical Lattice Boltzmann Model Incorporating Thermal Reaction Kinetics and Enthalpy Changes
Computation 2017, 5(3), 37; doi:10.3390/computation5030037
Abstract
The lattice Boltzmann method is an efficient computational fluid dynamics technique that can accurately model a broad range of complex systems. As well as single-phase fluids, it can simulate thermohydrodynamic systems and passive scalar advection. In recent years, it has also gained attention as a means of simulating chemical phenomena, as interest in self-organization processes has increased. This paper presents a widely-used and versatile lattice Boltzmann model that can simultaneously incorporate fluid dynamics, heat transfer, buoyancy-driven convection, passive scalar advection, chemical reactions, and enthalpy changes. All of these effects interact in a physically accurate framework that is simple to code and readily parallelizable. As well as a complete description of the model equations, several example systems are presented in order to demonstrate the accuracy and versatility of the method. New simulations, which analyzed the effect of a reversible reaction on the transport properties of a convecting fluid, are also described in detail. This extra chemical degree of freedom was utilized by the system to augment its net heat flux. The numerical method outlined in this paper can be readily deployed for a vast range of complex flow problems, spanning a variety of scientific disciplines.
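
As a rough illustration of the collide-and-stream structure this model family is built on, the following sketch runs a minimal D1Q3 BGK lattice Boltzmann scheme for passive scalar diffusion; the lattice size, weights, and relaxation time are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Minimal D1Q3 lattice Boltzmann sketch for passive scalar diffusion.
N = 64                            # lattice sites
w = np.array([2 / 3, 1 / 6, 1 / 6])   # D1Q3 weights
c = np.array([0, 1, -1])              # lattice velocities
tau = 0.8                             # BGK relaxation time

# initialize with a concentration spike in the middle
rho = np.zeros(N)
rho[N // 2] = 1.0
f = w[:, None] * rho[None, :]         # equilibrium initialization

for _ in range(100):
    rho = f.sum(axis=0)               # macroscopic scalar field
    feq = w[:, None] * rho[None, :]   # equilibrium (zero advection velocity)
    f += (feq - f) / tau              # BGK collision step
    for i in range(3):                # streaming step, periodic boundaries
        f[i] = np.roll(f[i], c[i])

print(round(rho.sum(), 6))            # total scalar is conserved -> 1.0
```

Both the collision and the streaming step conserve the scalar exactly, which is why the printed total stays at 1.0 while the spike spreads out.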

Open Access Article
TFF (v.4.1): A Mathematica Notebook for the Calculation of One- and Two-Neutron Stripping and Pick-Up Nuclear Reactions
Computation 2017, 5(3), 36; doi:10.3390/computation5030036
Abstract
The program TFF calculates stripping single-particle form factors for one-neutron transfer in prior representation with appropriate perturbative treatment of recoil. Coupled equations are then integrated along a semiclassical trajectory to obtain one- and two-neutron transfer amplitudes and probabilities within first- and second-order perturbation theory. Total and differential cross-sections are then calculated by folding with a transmission function (obtained from a phenomenological imaginary absorption potential). The program description, user instructions and examples are discussed.

Open Access Article
Using an Interactive Lattice Boltzmann Solver in Fluid Mechanics Instruction
Computation 2017, 5(3), 35; doi:10.3390/computation5030035
Abstract
This article gives an overview of the diverse range of teaching applications that can be realized using an interactive lattice Boltzmann simulation tool in fluid mechanics instruction and outreach. In an inquiry-based learning framework, examples are given of learning scenarios that address instruction on scientific results, scientific methods or the scientific process at varying levels of student activity, from consuming to applying to researching. Interactive live demonstrations on portable hardware enable new and innovative teaching concepts for fluid mechanics, including for large audiences and in the early stages of university education. Moreover, selected examples successfully demonstrate that the integration of high-fidelity CFD methods into fluid mechanics teaching facilitates high-quality student research work within reach of the current state of the art in the respective field of research.

Open Access Article
Tensor-Based Semantically-Aware Topic Clustering of Biomedical Documents
Computation 2017, 5(3), 34; doi:10.3390/computation5030034
Abstract
Biomedicine is a pillar of the collective scientific effort of human self-discovery, as well as a major source of humanistic data codified primarily in biomedical documents. Despite their rigid structure, maintaining and updating a considerably-sized collection of such documents is a task of overwhelming complexity, mandating efficient information retrieval through the integration of clustering schemes. The latter should work natively with inherently multidimensional data and higher-order interdependencies. Additionally, past experience indicates that clustering should be semantically enhanced. Tensor algebra is the key to extending the current term-document model to more dimensions. In this article, we propose an alternative keyword-term-document strategy whose algorithmic cornerstones are third-order tensors and MeSH ontological functions, based on the scientometric observation that keywords typically possess more expressive power than ordinary text terms. This strategy has been compared against a baseline using two different biomedical datasets: the TREC (Text REtrieval Conference) genomics benchmark and a large custom set of cognitive science articles from PubMed.
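
To make the third-order keyword-term-document representation concrete, the sketch below builds such a tensor over a toy corpus: axis 0 indexes keywords, axis 1 terms, axis 2 documents. The vocabulary and the plain co-occurrence counting are illustrative assumptions; the paper's actual weighting and MeSH-based functions are not reproduced here.

```python
import numpy as np

# Toy keyword-term-document tensor: T[k, t, d] counts occurrences of term t
# in document d, gated by whether document d carries keyword k.
keywords = ["genomics", "cognition"]
terms = ["gene", "memory", "pathway"]
docs = [
    {"keywords": ["genomics"], "text": "gene pathway gene"},
    {"keywords": ["cognition"], "text": "memory pathway"},
]

T = np.zeros((len(keywords), len(terms), len(docs)))
for d, doc in enumerate(docs):
    tokens = doc["text"].split()
    for k, kw in enumerate(keywords):
        if kw not in doc["keywords"]:
            continue                      # keyword absent -> slice stays zero
        for t, term in enumerate(terms):
            T[k, t, d] = tokens.count(term)

print(T[0, :, 0])   # keyword "genomics" vs terms in doc 0 -> [2. 0. 1.]
```

Clustering methods can then operate on this tensor directly (e.g., via tensor decompositions) rather than on a flattened term-document matrix.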

Open Access Article
A Discrete Approach to Meshless Lagrangian Solid Modeling
Computation 2017, 5(3), 33; doi:10.3390/computation5030033
Abstract
The author demonstrates a stable Lagrangian solid modeling method, tracking the interactions of solid mass particles rather than using a meshed grid. This numerical method avoids the problem of tensile instability often seen with smooth particle applied mechanics by having the solid particles apply stresses expected with Hooke’s law, as opposed to using a smoothing function for neighboring solid particles. This method has been tested successfully with a bar in tension, compression, and shear, as well as a disk compressed into a flat plate, and the numerical model consistently matched the analytical Hooke’s law as well as Hertz contact theory for all examples. The solid modeling numerical method was then built into a 2-D model of a pressure vessel, which was tested with liquid water particles under pressure and simulated with smoothed particle hydrodynamics. This simulation was stable, and demonstrated the feasibility of Lagrangian specification modeling for fluid–solid interactions.
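
The core pairwise interaction can be sketched as a linear (Hooke's-law) restoring force between neighboring particles, proportional to the deviation of their separation from a rest length; the stiffness and rest length below are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

k = 100.0    # spring stiffness (N/m), assumed for illustration
r0 = 1.0     # rest separation (m), assumed for illustration

def pair_force(xa, xb):
    """Force on particle a from particle b: linear in the stretch (r - r0),
    attractive when stretched (r > r0), repulsive when compressed."""
    d = xb - xa
    r = np.linalg.norm(d)
    return k * (r - r0) * d / r

f = pair_force(np.array([0.0, 0.0]), np.array([1.5, 0.0]))
print(f)   # stretched by 0.5 m -> pulls a toward b: [50.  0.]
```

Because the force is linear in the stretch rather than derived from a smoothing kernel, pairs under tension always pull back toward the rest length, which is the property that sidesteps tensile instability.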

Open Access Article
Anomalous Diffusion within the Transcriptome as a Bio-Inspired Computing Framework for Resilience
Computation 2017, 5(3), 32; doi:10.3390/computation5030032
Abstract
Much of biology-inspired computer science is based on the Central Dogma, as implemented with genetic algorithms or evolutionary computation. That 60-year-old biological principle, based on the genome, transcriptome and proteome, is becoming overshadowed by a new paradigm of complex ordered associations and connections between layers of biological entities, such as interactomes and metabolomes. We define a new hierarchical concept, the “Connectosome”, and propose new avenues of computational data structures based on a conceptual framework called the “Grand Ensemble”, which contains the Central Dogma as a subset. Connectedness and communication within and between living or biology-inspired systems comprise ensembles from which a physical computing system can be conceived. In this framework, the delivery of messages is filtered by size and by a simple and rapid semantic analysis of their content. This work aims to initiate discussion on the Grand Ensemble in network biology as a representation of a Persistent Turing Machine. This framework, which adds interaction and persistence to the classic Turing machine model, uses metrics based on resilience that have application to dynamic optimization problem solving in Genetic Programming.

Open Access Article
Artificial Immune Classifier Based on ELLipsoidal Regions (AICELL)
Computation 2017, 5(2), 31; doi:10.3390/computation5020031
Abstract
Pattern classification is a central problem in machine learning, with a wide array of applications, and rule-based classifiers are one of the most prominent approaches. Among these classifiers, Incremental Rule Learning algorithms combine the advantages of classic Pittsburgh and Michigan approaches, while, on the other hand, classifiers using fuzzy membership functions often result in systems with fewer rules and better generalization ability. To discover an optimal set of rules, learning classifier systems have always relied on bio-inspired models, mainly genetic algorithms. In this paper we propose a classification algorithm based on an efficient bio-inspired approach, Artificial Immune Networks. The proposed algorithm encodes the patterns as antigens, and evolves a set of antibodies, representing fuzzy classification rules of ellipsoidal surface, to cover the problem space. The innate immune mechanisms of affinity maturation and diversity preservation are modified and adapted to the classification context, resulting in a classifier that combines the advantages of both incremental rule learning and fuzzy classifier systems. The algorithm is compared to a number of state-of-the-art rule-based classifiers, as well as Support Vector Machines (SVM), producing very satisfying results, particularly in problems with a large number of attributes and classes.
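
A fuzzy rule with an ellipsoidal surface can be sketched as a center with per-axis radii, whose membership decays with the normalized ellipsoidal distance. The Gaussian-style decay below is an illustrative choice, not necessarily the paper's exact membership function.

```python
import numpy as np

def membership(x, center, radii):
    """Fuzzy membership of point x in an axis-aligned ellipsoidal rule:
    1 at the center, decaying toward 0 with ellipsoidal distance."""
    d2 = np.sum(((x - center) / radii) ** 2)   # normalized distance squared
    return np.exp(-d2)

center = np.array([0.0, 0.0])
radii = np.array([2.0, 1.0])                   # wider along the first axis

print(round(membership(np.array([0.0, 0.0]), center, radii), 3))  # 1.0
# equal offsets score higher along the wider axis:
print(membership(np.array([2.0, 0.0]), center, radii) >
      membership(np.array([0.0, 2.0]), center, radii))            # True
```

An antibody in this picture is just `(center, radii)` plus a class label; affinity maturation would then mutate these parameters toward better coverage of the antigens.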

Open Access Article
Theoretical Prediction of Electronic Structures and Phonon Dispersion of Ce2XN2 (X = S, Se, and Te) Ternary
Computation 2017, 5(2), 29; doi:10.3390/computation5020029
Abstract
A systematic study of the structural, electronic, and vibrational properties of the new ternary dicerium selenide dinitride, Ce2SeN2, and the predicted compounds Ce2SN2 and Ce2TeN2 is performed using first-principles calculations within the Perdew–Burke–Ernzerhof functional with Hubbard correction. Our calculated structural parameters agree well with the experimental measurements. We predict that all ternary dicerium chalcogenide nitrides are thermodynamically stable. The predicted elastic constants and related mechanical properties demonstrate their mechanical stability as well. Moreover, our results show that the Ce2XN2 compounds are insulators. Trends of the structural parameters, electronic structures, and phonon dispersion are discussed in terms of the characteristics of the Ce (4f) states.

Open Access Article
Levy-Lieb-Based Monte Carlo Study of the Dimensionality Behaviour of the Electronic Kinetic Functional
Computation 2017, 5(2), 30; doi:10.3390/computation5020030
Abstract
We consider a gas of interacting electrons in the limit of nearly uniform density and treat the one dimensional (1D), two dimensional (2D) and three dimensional (3D) cases. We focus on the determination of the correlation part of the kinetic functional by employing a Monte Carlo sampling technique of electrons in space based on an analytic derivation via the Levy-Lieb constrained search principle. Of particular interest is the question of the behaviour of the functional as one passes from 1D to 3D; according to the basic principles of Density Functional Theory (DFT), the form of the universal functional should be independent of the dimensionality. However, in practice the straightforward use of current approximate functionals in different dimensions is problematic. Here, we show that going from the 3D to the 2D case the functional form is consistent (a concave function), but in 1D it becomes convex; such a drastic difference is peculiar to 1D electron systems, as it is for other quantities. Given the interesting behaviour of the functional, this study represents a basic first-principles approach to the problem and suggests further investigations using highly accurate (though expensive) many-electron computational techniques, such as Quantum Monte Carlo.

Open Access Article
Geometric Derivation of the Stress Tensor of the Homogeneous Electron Gas
Computation 2017, 5(2), 28; doi:10.3390/computation5020028
Abstract
The foundation of many approximations in time-dependent density functional theory (TDDFT) lies in the theory of the homogeneous electron gas. However, unlike ground-state DFT, in which the exchange-correlation potential of the homogeneous electron gas is known exactly via quantum Monte Carlo calculation, the time-dependent or frequency-dependent dynamical potential of the homogeneous electron gas is not known exactly, due to the absence of a similar variational principle for excited states. In this work, we present a simple geometric derivation of the time-dependent dynamical exchange-correlation potential for the homogeneous system. With this derivation, the dynamical potential can be expressed in terms of the stress tensor, offering an alternative way to calculate the bulk and shear moduli, two key input quantities in TDDFT.
Open Access Article
Energetic Study of Clusters and Reaction Barrier Heights from Efficient Semilocal Density Functionals
Computation 2017, 5(2), 27; doi:10.3390/computation5020027
Abstract
The accurate first-principles prediction of the energetic properties of molecules and clusters from efficient semilocal density functionals is of broad interest. Here we study the performance of the non-empirical Tao-Mo (TM) density functional on binding energies and excitation energies of titanium dioxide and water clusters, as well as reaction barrier heights. To make a comparison, a combination of the TM exchange part with the TPSS (Tao–Perdew–Staroverov–Scuseria) correlation functional—called TMTPSS—is also included in this study. Our calculations show that the best binding energies of titanium dioxide are predicted by PBE0 (Perdew–Burke–Ernzerhof hybrid functional), TM, and TMTPSS with nearly the same accuracy, while B3LYP (Becke’s three-parameter exchange with Lee–Yang–Parr correlation), TPSS, and PBE (Perdew–Burke–Ernzerhof) yield larger mean absolute errors. For excitation energies of titanium dioxide and water clusters, PBE0 and B3LYP are the most accurate functionals, outperforming the semilocal functionals due to the nonlocality problem suffered by the latter. Nevertheless, the TMTPSS and TM functionals are still accurate semilocal methods, improving upon the commonly-used TPSS and PBE functionals. We also find that the best reaction barrier heights are predicted by PBE0 and B3LYP, thanks to the nonlocality incorporated into these two hybrid functionals, but TMTPSS and TM are clearly more accurate than SCAN (Strongly Constrained and Appropriately Normed), TPSS, and PBE, suggesting the good performance of TM and TMTPSS for physically different systems and properties.

Open Access Article
Deep Visual Attributes vs. Hand-Crafted Audio Features on Multidomain Speech Emotion Recognition
Computation 2017, 5(2), 26; doi:10.3390/computation5020026
Abstract
Emotion recognition from speech may play a crucial role in many applications related to human–computer interaction or understanding the affective state of users in certain tasks, where other modalities such as video or physiological parameters are unavailable. In general, a human’s emotions may be recognized using several modalities such as analyzing facial expressions, speech, physiological parameters (e.g., electroencephalograms, electrocardiograms), etc. However, measuring these modalities may be difficult, obtrusive, or require expensive hardware. In that context, speech may be the best alternative modality in many practical applications. In this work we present an approach that uses a Convolutional Neural Network (CNN) functioning as a visual feature extractor and trained using raw speech information. In contrast to traditional machine learning approaches, CNNs are responsible for identifying the important features of the input, thus making hand-crafted feature engineering optional in many tasks. In this paper, no extra features are required other than the spectrogram representations; hand-crafted features were extracted only for validation purposes of our method. Moreover, the approach does not require any linguistic model and is not specific to any particular language. We compare the proposed approach using cross-language datasets and demonstrate that it is able to provide superior results versus traditional approaches that use hand-crafted features.
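
The spectrogram input that such a CNN consumes can be sketched directly from raw samples: frame the signal, apply a window, and take the magnitude of the FFT per frame. The frame and hop sizes below are illustrative; the paper's exact front end may differ.

```python
import numpy as np

def spectrogram(signal, frame=256, hop=128):
    """Magnitude spectrogram: rows are time frames, columns frequency bins."""
    window = np.hanning(frame)
    n_frames = 1 + (len(signal) - frame) // hop
    frames = np.stack([signal[i * hop:i * hop + frame] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))   # shape (time, frame//2 + 1)

sr = 8000
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 440 * t)     # 1 s of a 440 Hz tone
S = spectrogram(sig)
peak_bin = S.mean(axis=0).argmax()
print(peak_bin)   # bin 14 -> 14 * (8000 / 256) = 437.5 Hz, near the tone
```

The resulting 2-D array is then treated exactly like an image, which is what lets an off-the-shelf CNN act as the feature extractor.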

Open Access Article
Numerical Simulation of the Laminar Forced Convective Heat Transfer between Two Concentric Cylinders
Computation 2017, 5(2), 25; doi:10.3390/computation5020025
Abstract
The dual reciprocity method (DRM) is a highly efficient numerical method of transforming domain integrals arising from the non-homogeneous term of the Poisson equation into equivalent boundary integrals. In this paper, the velocity and temperature fields of laminar forced heat convection in a concentric annular tube, with constant heat flux boundary conditions, have been studied using numerical simulations. The DRM has been used to solve the governing equation, which is expressed in the form of a Poisson equation. A test problem is employed to verify the DRM solutions with different boundary element discretizations and numbers of internal points. The results of the numerical simulations are discussed and compared with exact analytical solutions. Good agreement between the numerical results and exact solutions is evident, as the maximum relative errors are below 5–6%, and the R2-values are greater than 0.999 in all cases. These results confirm the effectiveness and accuracy of the proposed numerical model, which is based on the DRM.

Open Access Feature Paper Article
Analyzing the Effect and Performance of Lossy Compression on Aeroacoustic Simulation of Gas Injector
Computation 2017, 5(2), 24; doi:10.3390/computation5020024
Abstract
Computational fluid dynamic simulations involve large state data, leading to performance degradation due to data transfer times, while requiring large disk space. To alleviate the situation, an adaptive lossy compression algorithm has been developed, which is based on regions of interest. This algorithm uses prediction-based compression and exploits the temporal coherence between subsequent simulation frames. The difference between the actual value and the predicted value is adaptively quantized and encoded. The adaptation is in line with user requirements, which consist of the acceptable inaccuracy, the regions of interest and the required compression throughput. The data compression algorithm was evaluated with simulation data obtained by the discontinuous Galerkin spectral element method. We analyzed the performance, compression ratio and inaccuracy introduced by the lossy compression algorithm. The post-processing analysis shows high compression ratios with reasonable quantization errors.
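
The prediction-plus-quantization idea can be sketched as follows: predict each frame from the previous reconstructed frame (temporal coherence), uniformly quantize the residual to a user-chosen tolerance, and reconstruct on the encoder side as well so that errors do not accumulate. The scalar mid-tread quantizer is an illustrative stand-in for the paper's adaptive, region-of-interest scheme.

```python
import numpy as np

def compress(frames, tol):
    """Quantize per-frame temporal prediction residuals to integer codes,
    guaranteeing a pointwise reconstruction error of at most tol."""
    recon_prev = np.zeros_like(frames[0])
    codes = []
    for frame in frames:
        residual = frame - recon_prev            # temporal prediction error
        q = np.round(residual / (2 * tol))       # uniform quantization
        codes.append(q.astype(np.int64))         # these would be entropy-coded
        recon_prev = recon_prev + q * (2 * tol)  # track decoder-side state
    return codes

def decompress(codes, tol):
    recon = np.zeros(codes[0].shape)
    out = []
    for q in codes:
        recon = recon + q * (2 * tol)
        out.append(recon.copy())
    return out

frames = [np.array([0.0, 1.0]), np.array([0.05, 1.4])]
rec = decompress(compress(frames, tol=0.1), tol=0.1)
print(np.max(np.abs(rec[1] - frames[1])) <= 0.1)  # error bounded by tol -> True
```

Because rounding introduces at most half a quantization step of error, the reconstruction stays within the user-specified tolerance at every point, which is the contract an adaptive scheme can then tighten inside regions of interest.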

Open Access Article
Implicit Large Eddy Simulation of Flow in a Micro-Orifice with the Cumulant Lattice Boltzmann Method
Computation 2017, 5(2), 23; doi:10.3390/computation5020023
Abstract
A detailed numerical study of turbulent flow through a micro-orifice is presented in this work. The flow becomes turbulent due to the orifice at the considered Reynolds numbers (∼10^4). The obtained flow rates are in good agreement with the experimental measurements. The discharge coefficient and the pressure loss are presented for two input pressures. The laminar stress and the generated turbulent stresses are investigated in detail, and the location of the vena contracta is quantitatively reproduced.

Open Access Article
Scatter Search Applied to the Inference of a Development Gene Network
Computation 2017, 5(2), 22; doi:10.3390/computation5020022
Abstract
Efficient network inference is one of the challenges of current-day biology. Its application to the study of development has seen noteworthy success, yet a multicellular context, tissue growth, and cellular rearrangements impose additional computational costs and prohibit a wide application of current methods. Therefore, reducing computational cost and providing quick feedback at intermediate stages are desirable features for network inference. Here we propose a hybrid approach composed of two stages: exploration with scatter search and exploitation of intermediate solutions with low-temperature simulated annealing. We test the approach on the well-understood process of early body plan development in flies, focusing on the gap gene network. We compare the hybrid approach to simulated annealing, a method of network inference with a proven track record. We find that scatter search performs well at exploring parameter space and that low-temperature simulated annealing refines the intermediate results into excellent model fits. From this we conclude that for poorly-studied developmental systems, scatter search is a valuable tool for exploration and accelerates the elucidation of gene regulatory networks.
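
The exploitation stage can be sketched as low-temperature simulated annealing that locally refines a candidate parameter vector handed over by the exploration stage. The quadratic toy objective, move size, and cooling schedule below are illustrative stand-ins for the gap gene model fit, not the paper's settings.

```python
import math
import random

def refine(x0, cost, t0=0.1, cooling=0.95, steps=500, seed=1):
    """Low-temperature simulated annealing: small Gaussian moves, Metropolis
    acceptance, geometric cooling."""
    random.seed(seed)
    x, t = list(x0), t0
    for _ in range(steps):
        cand = [xi + random.gauss(0, 0.1) for xi in x]   # local move
        delta = cost(cand) - cost(x)
        # accept improvements always, uphill moves with Boltzmann probability
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = cand
        t *= cooling                                      # cool down
    return x

cost = lambda p: sum(pi ** 2 for pi in p)    # toy objective, optimum at 0
x0 = [0.8, -0.5]
x = refine(x0, cost)
print(cost(x) < cost(x0))                    # the refined fit improves -> True
```

Starting at a low temperature is what distinguishes the exploitation stage: early exploration has already been done by scatter search, so the annealer mostly descends rather than wanders.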

Open Access Article
An Information Technology Framework for the Development of an Embedded Computer System for the Remote and Non-Destructive Study of Sensitive Archaeology Sites
Computation 2017, 5(2), 21; doi:10.3390/computation5020021
Abstract
The paper proposes an information technology framework for the development of an embedded remote system for non-destructive observation and study of sensitive archaeological sites. The overall concept and motivation are described. The general hardware layout and software configuration are presented. The paper concentrates on the implementation of the following information technology components: (a) a geographically unique identification scheme supporting a global key space for a key-value store; (b) a common method of octree modeling for spatial geometrical models of the archaeological artifacts, and abstract object representation in the global key space; (c) a broadcast of the archaeological information as an Extensible Markup Language (XML) stream over the Web for worldwide availability; and (d) a set of testing methods increasing the fault tolerance of the system. This framework can serve as a foundation for the development of a complete system for remote archaeological exploration of enclosed archaeological sites like buried churches, tombs, and caves. An archaeological site is opened once upon discovery, the embedded computer system is installed inside on a robotic platform equipped with sensors, cameras, and actuators, and the intact site is sealed again. Archaeological research is conducted on a multimedia data stream which is sent remotely from the system and conforms to the necessary standards for digital archaeology.
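
One way to combine components (a) and (b) is a hierarchical locational code: descend the octree one level at a time, emit the child index (0–7) per level, and prefix the digits with a site identifier to make the key globally unique. The site prefix, depth, and bounding box below are illustrative assumptions, not the paper's exact scheme.

```python
def octree_key(site, point, depth=4, lo=(0.0, 0.0, 0.0), hi=(1.0, 1.0, 1.0)):
    """Build a key-value-store key from a site id and an octree locational
    code for a 3-D point inside the bounding box [lo, hi]."""
    lo, hi = list(lo), list(hi)
    digits = []
    for _ in range(depth):
        child = 0
        for axis in range(3):                 # one bit per axis: x, y, z
            mid = (lo[axis] + hi[axis]) / 2
            if point[axis] >= mid:
                child |= 1 << axis            # upper half along this axis
                lo[axis] = mid
            else:
                hi[axis] = mid                # lower half along this axis
        digits.append(str(child))             # child index 0..7
    return f"{site}/{''.join(digits)}"

# a point near the upper corner of the unit cube descends through child 7
print(octree_key("tomb-17", (0.95, 0.95, 0.95)))   # tomb-17/7777
```

Keys built this way sort hierarchically: all voxels inside one octant share a key prefix, so a prefix scan of the key-value store retrieves a spatial region.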

Open Access Article
Detecting Perturbed Subpathways towards Mouse Lung Regeneration Following H1N1 Influenza Infection
Computation 2017, 5(2), 20; doi:10.3390/computation5020020
Abstract
It has already been established by systems-level approaches that the future of predictive disease biomarkers will not be sketched by plain lists of genes, proteins or other biological entities, but rather by integrated entities that consider all underlying component relationships. Towards this orientation, early pathway-based approaches coupled expression data with whole-pathway interaction topologies, but it was the recent approaches that zoomed into subpathways (local areas of the entire biological pathway) that provided more targeted and context-specific candidate disease biomarkers. Here, we explore the application potential of PerSubs, a graph-based algorithm which identifies differentially activated disease-specific subpathways. PerSubs is applicable both to microarray and RNA-Seq data and utilizes the Kyoto Encyclopedia of Genes and Genomes (KEGG) database as a reference for biological pathways. PerSubs operates in two stages: in the first, it identifies differentially expressed genes (or uses any list of disease-related genes); in the second, treating each gene of the list as a start point, it scans the surrounding pathway topology to build meaningful subpathway topologies. Here, we apply PerSubs to investigate which pathways are perturbed towards mouse lung regeneration following H1N1 influenza infection.
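
The second stage — scanning the pathway topology around each seed gene — can be sketched as a bounded breadth-first search over a pathway graph. The toy adjacency list and the fixed radius are illustrative assumptions, not KEGG data or PerSubs's exact scan rule.

```python
from collections import deque

# Toy undirected pathway graph: gene -> interacting genes.
pathway = {
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A"],
    "D": ["B", "E"], "E": ["D"],
}

def subpathway(graph, seed, radius=2):
    """Collect all genes within `radius` interaction hops of a seed gene."""
    seen = {seed: 0}                  # gene -> hop distance from the seed
    queue = deque([seed])
    while queue:
        node = queue.popleft()
        if seen[node] == radius:
            continue                  # do not expand beyond the radius
        for nbr in graph[node]:
            if nbr not in seen:
                seen[nbr] = seen[node] + 1
                queue.append(nbr)
    return sorted(seen)

print(subpathway(pathway, "A"))   # ['A', 'B', 'C', 'D'] -- E is 3 hops away
```

Running this once per differentially expressed gene yields one candidate subpathway per seed, which can then be scored for differential activation.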

Open Access Article
Esoteric Twist: An Efficient in-Place Streaming Algorithmus for the Lattice Boltzmann Method on Massively Parallel Hardware
Computation 2017, 5(2), 19; doi:10.3390/computation5020019
Abstract
We present and analyze the Esoteric Twist algorithm for the Lattice Boltzmann Method. Esoteric Twist is a thread-safe in-place streaming method that combines streaming and collision and requires only a single data set. Compared to other in-place streaming techniques, Esoteric Twist minimizes the memory footprint and the memory traffic when indirect addressing is used. Esoteric Twist is particularly suitable for the implementation of the Lattice Boltzmann Method on Graphics Processing Units.