Computation
http://www.mdpi.com/journal/computation
Latest open access articles published in Computation (ISSN 2079-3197) at http://www.mdpi.com/journal/computation

Computation, Vol. 3, Pages 600-615: An Incompressible, Depth-Averaged Lattice Boltzmann Method for Liquid Flow in Microfluidic Devices with Variable Aperture
http://www.mdpi.com/2079-3197/3/4/600
Two-dimensional (2D) pore-scale models have successfully simulated microfluidic experiments of aqueous-phase flow with mixing-controlled reactions in devices with small aperture. A standard 2D model is not generally appropriate when the presence of mineral precipitate or biomass creates complex and irregular three-dimensional (3D) pore geometries. We modify the 2D lattice Boltzmann method (LBM) to incorporate viscous drag from the top and bottom microfluidic device (micromodel) surfaces, typically excluded in a 2D model. Viscous drag from these surfaces can be approximated by uniformly scaling a steady-state 2D velocity field at low Reynolds number. We demonstrate increased accuracy by approximating the viscous drag with an analytically-derived body force which assumes a local parabolic velocity profile across the micromodel depth. Until now, the accuracy of the generated 2D velocity field and the simulated permeability had not been evaluated in geometries with variable aperture. We obtain permeabilities within approximately 10% error and accurate streamlines from the proposed 2D method relative to results obtained from 3D simulations. In addition, the proposed method requires a CPU run time approximately 40 times shorter than a standard 3D method, representing a significant computational benefit for permeability calculations.

Computation 2015, 3(4), 600-615; Article; doi: 10.3390/computation3040600; published 24 November 2015. Authors: Artin Laleian, Albert Valocchi and Charles Werth.

Computation, Vol. 3, Pages 586-599: Dominant Strategies of Quantum Games on Quantum Periodic Automata
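The analytically-derived body force mentioned in the depth-averaged LBM abstract above can be illustrated with a short sketch. Assuming a locally parabolic (Poiseuille) profile across the local aperture h, the top and bottom walls exert a drag of -12νu/h² per unit mass on the depth-averaged velocity. The function below is a minimal illustration of that Hele-Shaw-type closure, not the paper's actual implementation, and all numbers are hypothetical.

```python
import numpy as np

def depth_averaged_drag(u, v, aperture, nu):
    """Drag body force (per unit mass) from the top and bottom micromodel
    surfaces in a depth-averaged 2D model.  A locally parabolic velocity
    profile across the local aperture h gives F = -12 * nu * u_mean / h**2
    (a standard Hele-Shaw-type closure; the paper's exact formulation may
    differ in detail)."""
    h2 = aperture ** 2
    return -12.0 * nu * u / h2, -12.0 * nu * v / h2

# hypothetical cell: 1 mm/s flow, 50 micrometre aperture, water-like nu
fx, fy = depth_averaged_drag(np.array([1e-3]), np.array([0.0]),
                             np.array([50e-6]), nu=1e-6)
```

Because each lattice cell carries its own aperture value, a single 2D grid can represent variable 3D depth this way.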
http://www.mdpi.com/2079-3197/3/4/586
Game theory and its quantum extension apply in numerous fields that affect people’s social, political, and economic life. Physical limits imposed by the current technology used in computing architectures (e.g., circuit size) give rise to the need for novel mechanisms, such as quantum inspired computation. Elements from quantum computation and mechanics combined with game-theoretic aspects of computing could open new pathways towards the future technological era. This paper associates dominant strategies of repeated quantum games with quantum automata that recognize infinite periodic inputs. As a reference, we used the PQ-PENNY quantum game where the quantum strategy outplays the choice of pure or mixed strategy with probability 1 and therefore the associated quantum automaton accepts with probability 1. We also propose a novel game played on the evolution of an automaton, where players’ actions and strategies are also associated with periodic quantum automata.

Computation 2015, 3(4), 586-599; Article; doi: 10.3390/computation3040586; published 20 November 2015. Authors: Konstantinos Giannakis, Christos Papalitsas, Kalliopi Kastampolidou, Alexandros Singh and Theodore Andronikos.

Computation, Vol. 3, Pages 574-585: A Comparative Density Functional Theory and Density Functional Tight Binding Study of Phases of Nitrogen Including a High Energy Density Material N8
http://www.mdpi.com/2079-3197/3/4/574
We present a comparative dispersion-corrected Density Functional Theory (DFT) and Density Functional Tight Binding (DFTB-D) study of several phases of nitrogen, including the well-known alpha, beta, and gamma phases as well as recently discovered highly energetic phases: covalently bound cubic gauche (cg) nitrogen and molecular (vdW-bound) N8 crystals. Among several tested parametrizations of N–N interactions for DFTB, we identify only one that is suitable for modelling all of these phases. This work therefore establishes the applicability of DFTB-D to studies of nitrogen phases, including highly metastable ones, which will be of great use for modelling the dynamics of reactions involving these phases, where DFT may not be practical due to the large space and time scales required. We also derive a dispersion-corrected DFT (DFT-D) setup (atom-centered basis parameters and Grimme dispersion parameters) tuned for the simultaneous accurate description of several nitrogen allotropes, including covalently and vdW-bound crystals and high-energy phases.

Computation 2015, 3(4), 574-585; Article; doi: 10.3390/computation3040574; published 11 November 2015. Authors: Nicholas Capel, Devang Bharania and Sergei Manzhos.

Computation, Vol. 3, Pages 558-573: Effective Thermal Conductivity of MOF-5 Powder under a Hydrogen Atmosphere
http://www.mdpi.com/2079-3197/3/4/558
Effective thermal conductivity is an important thermophysical property in the design of metal-organic framework-5 (MOF-5)-based hydrogen storage tanks. A modified thermal conductivity model is built by coupling a theoretical model with the grand canonical Monte Carlo simulation (GCMC) to predict the effect of the H2 adsorption process on the effective thermal conductivity of a MOF-5 powder bed at pressures ranging from 0.01 MPa to 50 MPa and temperatures ranging from 273.15 K to 368.15 K. Results show that the mean pore diameter of the MOF-5 crystal decreases with an increase in pressure and increases with an increase in temperature. The thermal conductivity of the adsorbed H2 increases with an increased amount of H2 adsorption. The effective thermal conductivity of the MOF-5 crystal is significantly enhanced by the H2 adsorption at high pressure and low temperature. The effective thermal conductivity of the MOF-5 powder bed increases with an increase in pressure and remains nearly unchanged with an increase in temperature. The thermal conductivity of the MOF-5 powder bed increases linearly with the decreased porosity and increased thermal conductivity of the skeleton of the MOF-5 crystal. The variation in the effective thermal conductivities of the MOF-5 crystals and bed mainly results from the thermal conductivities of the gaseous and adsorption phases.

Computation 2015, 3(4), 558-573; Article; doi: 10.3390/computation3040558; published 6 November 2015. Authors: Hui Wang, Zhiguo Qu, Wen Zhang and Wenquan Tao.

Computation, Vol. 3, Pages 541-557: Numerical Simulation of Fluid-Solid Coupling in Fractured Porous Media with Discrete Fracture Model and Extended Finite Element Method
http://www.mdpi.com/2079-3197/3/4/541
Fluid-solid coupling is ubiquitous in the process of fluid flow underground and has a significant influence on the development of oil and gas reservoirs. To investigate these phenomena, a coupled mathematical model of solid deformation and fluid flow in fractured porous media is established. In this study, the discrete fracture model (DFM) is applied to capture fluid flow in the fractured porous media, which represents fractures explicitly and avoids calculating the shape factor for cross flow. In addition, the extended finite element method (XFEM) is applied to capture solid deformation due to the discontinuity caused by fractures. More importantly, this model captures changes in fracture aperture during the simulation and accordingly adjusts fluid flow in the fractures. The final linear equation set is derived and solved for a 2D plane strain problem. Results show that the combination of the discrete fracture model and the extended finite element method is well suited for simulating coupled deformation and fluid flow in fractured porous media.

Computation 2015, 3(4), 541-557; Article; doi: 10.3390/computation3040541; published 30 October 2015. Authors: Qingdong Zeng and Jun Yao.

Computation, Vol. 3, Pages 528-540: A Scale Invariant Distribution of the Prime Numbers
http://www.mdpi.com/2079-3197/3/4/528
The irregular distribution of prime numbers amongst the integers has found multiple uses, from engineering applications of cryptography to quantum theory. The degree to which this distribution can be predicted thus has become a subject of current interest. Here, we present a computational analysis of the deviations between the actual positions of the prime numbers and their predicted positions from Riemann’s counting formula, focused on the variance function of these deviations from sequential enumerative bins. We show empirically that these deviations can be described by a class of probabilistic models known as the Tweedie exponential dispersion models that are characterized by a power law relationship between the variance and the mean, known by biologists as Taylor’s power law and by engineers as fluctuation scaling. This power law behavior of the prime number deviations is remarkable in that the same behavior has been found within the distribution of genes and single nucleotide polymorphisms (SNPs) within the human genome, the distribution of animals and plants within their habitats, as well as within many other biological and physical processes. We explain the common features of this behavior through a statistical convergence effect related to the central limit theorem that also generates 1/f noise.

Computation 2015, 3(4), 528-540; Article; doi: 10.3390/computation3040528; published 28 October 2015. Authors: Wayne Kendal and Bent Jørgensen.

Computation, Vol. 3, Pages 509-527: CFD and Experimental Study on the Effect of Progressive Heating on Fluid Flow inside a Thermal Wind Tunnel
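Taylor's power law, the variance-to-mean relationship at the heart of the prime-number abstract above, is typically estimated by ordinary least squares on a log-log scale. The sketch below recovers the exponent from synthetic data; the exponent 1.7, the group means, and the sample sizes are arbitrary illustrations, not values from the paper.

```python
import numpy as np

# Taylor's power law / fluctuation scaling: Var(Y) = a * Mean(Y)**p.
# Estimate (a, p) by least squares on log(variance) vs. log(mean).
rng = np.random.default_rng(0)
means = np.array([2.0, 5.0, 10.0, 20.0, 50.0])
# synthetic groups with Var = 0.5 * mean**1.7 (hypothetical parameters)
samples = [rng.normal(m, np.sqrt(0.5 * m**1.7), size=20000) for m in means]
log_m = np.log([s.mean() for s in samples])
log_v = np.log([s.var() for s in samples])
p, log_a = np.polyfit(log_m, log_v, 1)  # p ≈ 1.7, the power-law exponent
```

In the Tweedie framework the fitted exponent p identifies the model family (e.g. 1 < p < 2 corresponds to compound Poisson-gamma distributions).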
http://www.mdpi.com/2079-3197/3/4/509
A detailed Computational Fluid Dynamics (CFD) and experimental investigation into characterizing the fluid flow and thermal profiles in a wind tunnel was carried out, highlighting the effect of progressive heating on the non-uniformity of the air flow profile. Using controllable electrical heating elements, the operating temperatures in the test-section were gradually increased in order to determine their influence on the subsequent velocity and thermal profiles found inside the test-section. The numerical study was carried out using the FLUENT CFD code and validated against the experimental results. Good correlation was observed, as the comparison yielded a mean error of 6.4% for the air velocity parameter and 2.3% for the air temperature parameter between the two techniques. The good correlation established between the numerically predicted and experimentally tested results identified broad scope for applying the advanced computational capabilities of CFD to the thermal modeling of wind tunnels. For a constant temperature process, the non-uniformity and turbulence intensity in the test section were 0.9% and 0.5%, respectively, which is within the recommended guidelines for wind tunnels. The findings revealed that the increase in temperature from 20 °C to 50 °C reduced the velocity by 15.2% inside the test section.

Computation 2015, 3(4), 509-527; Article; doi: 10.3390/computation3040509; published 21 October 2015. Authors: Hassam Chaudhry, John Calautit, Ben Hughes and Lik Sim.

Computation, Vol. 3, Pages 479-508: Applicability of URANS and DES Simulations of Flow Past Rectangular Cylinders and Bridge Sections
http://www.mdpi.com/2079-3197/3/3/479
This paper discusses the results of computational fluid dynamics simulations carried out for rectangular cylinders with various side ratios of interest for many civil engineering structures. A bridge deck of common cross-section geometry was also considered. Unsteady Reynolds-averaged Navier–Stokes (URANS) equations were solved in conjunction with either an eddy viscosity or a linearized explicit algebraic Reynolds stress model. The analysis showed that for the case studies considered, the 2D URANS approach was able to give reasonable results if coupled with an advanced turbulence model and a suitable computational mesh. The simulations even reproduced, at least qualitatively, complex phenomena observed in the wind tunnel, such as Reynolds number effects for a sharp-edged geometry. The study focused both on stationary and harmonically oscillating bodies. For the latter, self-excited forces and flutter derivatives were calculated and compared to experimental data. In the particular case of a benchmark rectangular 5:1 cylinder, 3D detached eddy simulations were also carried out, highlighting the improvement in the accuracy of the results with respect to both 2D and 3D URANS calculations. All of the computations were performed with the Tau code, a non-commercial unstructured solver developed by the German Aerospace Center.

Computation 2015, 3(3), 479-508; Article; doi: 10.3390/computation3030479; published 18 September 2015. Author: Claudio Mannini.

Computation, Vol. 3, Pages 444-478: Towards Online Visualization and Interactive Monitoring of Real-Time CFD Simulations on Commodity Hardware
http://www.mdpi.com/2079-3197/3/3/444
Real-time rendering in the realm of computational fluid dynamics (CFD) in particular and scientific high performance computing (HPC) in general is a comparably young field of research, as the complexity of most problems with practical relevance is too high for a real-time numerical simulation. However, recent advances in HPC and the development of very efficient numerical techniques allow running first optimized numerical simulations in or near real-time, which in turn requires integrated and optimized visualization techniques that do not affect performance. In this contribution, we present concepts, implementation details and several application examples of a minimally-invasive, efficient visualization tool for the interactive monitoring of 2D and 3D turbulent flow simulations on commodity hardware. The numerical simulations are conducted with ELBE, an efficient lattice Boltzmann environment based on NVIDIA CUDA (Compute Unified Device Architecture), which provides optimized numerical kernels for 2D and 3D computational fluid dynamics with fluid-structure interactions and turbulence.

Computation 2015, 3(3), 444-478; Article; doi: 10.3390/computation3030444; published 11 September 2015. Authors: Nils Koliha, Christian Janßen and Thomas Rung.

Computation, Vol. 3, Pages 427-443: Computational Modeling of Teaching and Learning through Application of Evolutionary Algorithms
http://www.mdpi.com/2079-3197/3/3/427
Within the mind, there are a myriad of ideas that make sense within the bounds of everyday experience, but are not reflective of how the world actually exists; this is particularly true in the domain of science. Classroom learning with teacher explanation is a bridge through which these naive understandings can be brought in line with scientific reality. The purpose of this paper is to examine how the application of a Multiobjective Evolutionary Algorithm (MOEA) can work in concert with an existing computational model to effectively model critical thinking in the science classroom. An evolutionary algorithm is an algorithm that iteratively optimizes machine learning based computational models. The research question is: does the application of an evolutionary algorithm provide a means to optimize the Student Task and Cognition Model (STAC-M), and does the optimized model sufficiently represent and predict teaching and learning outcomes in the science classroom? Within this computational study, the authors outline and simulate the effect of teaching on the ability of a “virtual” student to solve a Piagetian task. Using the STAC-M, a computational model of student cognitive processing in science class developed in 2013, the authors complete a computational experiment which examines the role of cognitive retraining on student learning. Comparison of the STAC-M with and without the Multiobjective Evolutionary Algorithm shows greater success in solving the Piagetian science tasks after cognitive retraining when the algorithm is included. This illustrates the potential uses of cognitive and neuropsychological computational modeling in educational research. The authors also outline the limitations and assumptions of computational modeling.

Computation 2015, 3(3), 427-443; Article; doi: 10.3390/computation3030427; published 2 September 2015. Authors: Richard Lamb and Joshua Premo.

Computation, Vol. 3, Pages 386-426: Numerical Simulations of Wave-Induced Flow Fields around Large-Diameter Surface-Piercing Vertical Circular Cylinder
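The iterative optimize-by-mutation-and-selection loop that the evolutionary-algorithm abstract above relies on can be reduced to a few lines. The sketch below is a toy single-objective (1+1) evolution strategy, not the Multiobjective Evolutionary Algorithm applied to STAC-M in the paper; the objective function and its optimum are hypothetical.

```python
import random

def evolve(fitness, x0, sigma=0.3, generations=200, seed=42):
    """Minimal (1+1) evolution strategy: mutate the current solution,
    keep the mutant only if it improves fitness.  Illustrative of the
    general EA loop only; multiobjective EAs additionally maintain a
    population and a Pareto front."""
    rng = random.Random(seed)
    best, best_f = list(x0), fitness(x0)
    for _ in range(generations):
        cand = [g + rng.gauss(0, sigma) for g in best]
        f = fitness(cand)
        if f < best_f:            # selection: keep only improvements
            best, best_f = cand, f
    return best, best_f

# toy objective: squared distance from a hypothetical optimum (1, -2)
obj = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
sol, err = evolve(obj, [0.0, 0.0])
```

A multiobjective variant would return the set of non-dominated trade-offs between several fitness functions instead of a single best point.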
http://www.mdpi.com/2079-3197/3/3/386
A computational analysis is performed on the diffraction of water waves induced by a large-diameter, surface-piercing, vertical circular cylinder. With reference to linear-wave cases, the phenomenon is preliminarily considered in terms of velocity potential, a simplified theoretical framework in which both hypotheses of inviscid fluid and irrotational flow are incorporated. Then, as a first-approximation analysis, the Euler equations in primitive variables are considered (a framework in which the fluid is still handled as inviscid, but the field can be rotational). Finally, the real-fluid behavior is analyzed by numerically integrating the full Navier-Stokes equations (viscous fluid and rotational field) in their velocity-pressure formulation, following the approach of Direct Numerical Simulation (DNS; no models are used for the fluctuating portion of the velocity field). For further investigation of the flow fields, the swirling-strength criterion for flow-structure extraction and the Karhunen-Loève (KL) decomposition technique for the extraction of the most energetic flow modes are applied to the computed fields. It is found that remarkable differences exist between the wave-induced fields, as derived within the different computing frameworks tested.

Computation 2015, 3(3), 386-426; Article; doi: 10.3390/computation3030386; published 28 August 2015. Author: Giancarlo Alfonsi.

Computation, Vol. 3, Pages 354-385: Validation of the GPU-Accelerated CFD Solver ELBE for Free Surface Flow Problems in Civil and Environmental Engineering
http://www.mdpi.com/2079-3197/3/3/354
This contribution is dedicated to demonstrating the high potential and manifold applications of state-of-the-art computational fluid dynamics (CFD) tools for free-surface flows in civil and environmental engineering. All simulations were performed with the academic research code ELBE (Efficient Lattice Boltzmann Environment, http://www.tuhh.de/elbe). The ELBE code follows the supercomputing-on-the-desktop paradigm and is especially designed for local supercomputing, without tedious access to remote supercomputers. ELBE uses graphics processing units (GPU) to accelerate the computations and can be used in a single GPU-equipped workstation of, e.g., a design engineer. The code has been successfully validated in very different fields, mostly related to naval architecture and mechanical engineering. In this contribution, we give an overview of past and present applications with practical relevance for civil engineers. The presented applications are grouped into three major categories: (i) tsunami simulations, considering wave propagation, wave runup, inundation and debris flows; (ii) dam break simulations; and (iii) numerical wave tanks for the calculation of hydrodynamic loads on fixed and moving bodies. This broad range of applications in combination with accurate numerical results and very competitive times to solution demonstrates that modern CFD tools in general, and the ELBE code in particular, can be a helpful design tool for civil and environmental engineers.

Computation 2015, 3(3), 354-385; Article; doi: 10.3390/computation3030354; published 7 July 2015. Authors: Christian Janßen, Dennis Mierke, Micha Überrück, Silke Gralher and Thomas Rung.

Computation, Vol. 3, Pages 336-353: In Silico Comparison of the Hemicelluloses Xyloglucan and Glucuronoarabinoxylan in Protecting Cellulose from Degradation
http://www.mdpi.com/2079-3197/3/3/336
We used a previously developed simulation model of a plant cell wall and its enzymatic degradation to compare the abilities of two hemicelluloses, glucuronoarabinoxylan (GAX) and xyloglucan (XG), to protect cellulose microfibrils (CMFs) from attack by cellulose-degrading enzymes. Additionally, we investigated the effect of XG abundance on the degradation rate of CMFs in the presence of the same enzymes. Simulations were run using hypothetical cell-wall compositions in which the numbers and arrangement of CMFs and (1,3;1,4)-β-glucan were kept constant, but the proportions of GAX and XG were altered. Scenarios considered walls with low and equal proportions of either GAX or XG, and also low, medium and high proportions of XG in the absence of GAX. The rate of CMF degradation was much lower in walls with GAX than walls with XG, except for early in the simulation when the reverse held, suggesting that XGs were protecting CMFs by competitive inhibition. Increasing XG content reduced both the degradation rate of CMFs and the percent of XG degraded, indicating that activity of enzymes decreased with XG density despite XG being degradable. Glucose oligosaccharide breakdown products were analysed on the basis of the originating polysaccharide and their degree of polymerisation (DP). The presence of GAX as opposed to equal amounts of XG had some significant effects on the amount and profile of breakdown products from XG and CMFs.

Computation 2015, 3(3), 336-353; Article; doi: 10.3390/computation3030336; published 6 July 2015. Authors: Indrakumar Vetharaniam, Martin Upsdell, William Kelly, Graeme Attwood, Christina Moon and Philip Harris.

Computation, Vol. 3, Pages 326-335: Fast Computation of the Non-Central Chi Square PDF Outside the HDR Under a Requisite Precision Constraint
http://www.mdpi.com/2079-3197/3/2/326
Computation of the non-central chi square probability density function is encountered in diverse fields of applied statistics and engineering. The distribution is commonly computed as a Poisson mixture of central chi square densities, where the terms of the sum are computed starting with the integer nearest the non-centrality parameter. However, for computation of the values in either tail region these terms are not the most significant, and starting with them results in an increased computational load without a corresponding increase in accuracy. The most significant terms are shown to be a function of the non-centrality parameter, the degrees of freedom, and the point of evaluation. A computationally simple approximate solution to the location of the most significant terms as well as the exact solution based on a Newton–Raphson iteration is presented. A quadratic approximation of the interval of summation is also developed in order to meet a requisite number of significant digits of accuracy. Computationally efficient recursions are used over these improved intervals. The method provides a means of computing the non-central chi square probability density function to a requisite accuracy as a Poisson mixture over all domains of interest.

Computation 2015, 3(2), 326-335; Article; doi: 10.3390/computation3020326; published 19 June 2015. Author: Paul Gendron.

Computation, Vol. 3, Pages 299-325: On Roof Geometry for Urban Wind Energy Exploitation in High-Rise Buildings
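The Poisson-mixture representation discussed in the abstract above can be written down directly. The naive sketch below sums from j = 0 in log space; the paper's contribution is precisely to avoid this by locating the dominant terms first and truncating to a requisite precision, so treat this as the textbook baseline, not the proposed method.

```python
import math

def log_chi2_pdf(x, k):
    """log of the central chi-square density with k degrees of freedom."""
    return ((k / 2 - 1) * math.log(x) - x / 2
            - (k / 2) * math.log(2) - math.lgamma(k / 2))

def ncx2_pdf(x, k, lam, terms=200):
    """Non-central chi-square density as a Poisson mixture of central
    chi-square densities:
        f(x; k, lam) = sum_j Poisson(j; lam/2) * chi2_pdf(x; k + 2j).
    Summed naively from j = 0 in log space to avoid overflow; far from
    the mode most of these terms contribute nothing."""
    total = 0.0
    for j in range(terms):
        log_w = -lam / 2 + j * math.log(lam / 2) - math.lgamma(j + 1)
        total += math.exp(log_w + log_chi2_pdf(x, k + 2 * j))
    return total

# usage: density at x = 10 for 4 degrees of freedom, non-centrality 9
val = ncx2_pdf(10.0, 4, 9.0)
```

As the non-centrality parameter tends to zero the mixture collapses onto its j = 0 term and the central chi-square density is recovered.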
http://www.mdpi.com/2079-3197/3/2/299
The European program HORIZON2020 aims to have 20% of electricity produced by renewable sources. The building sector represents 40% of the European Union energy consumption. Reducing energy consumption in buildings is therefore a priority for energy efficiency. The present investigation explores the most adequate roof shapes compatible with the placement of different types of small wind energy generators on high-rise buildings for urban wind energy exploitation. The wind flow around traditional state-of-the-art roof shapes is considered. In addition, the influence of the roof edge on the wind flow on high-rise buildings is analyzed. These geometries are investigated, both qualitatively and quantitatively, and the turbulence intensity threshold for horizontal axis wind turbines is considered. The most adequate shapes for wind energy exploitation are identified, studying vertical profiles of velocity, turbulent kinetic energy and turbulence intensity. Curved shapes are the most interesting building roof shapes from the wind energy exploitation point of view, leading to the highest speed-up and the lowest turbulence intensity.

Computation 2015, 3(2), 299-325; Article; doi: 10.3390/computation3020299; published 10 June 2015. Authors: Francisco Toja-Silva, Carlos Peralta, Oscar Lopez-Garcia, Jorge Navarro and Ignacio Cruz.

Computation, Vol. 3, Pages 285-298: Computational Recognition of RNA Splice Sites by Exact Algorithms for the Quadratic Traveling Salesman Problem
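The turbulence-intensity criterion used in the roof-geometry study above can be evaluated from RANS output with a one-liner. The sketch below assumes isotropic fluctuations, a common way to estimate TI from turbulent kinetic energy; the numbers are illustrative, and the exact turbine threshold applied in the paper is not reproduced here.

```python
import math

def turbulence_intensity(k, u_mag):
    """Turbulence intensity from turbulent kinetic energy k [m^2/s^2]
    and local mean wind speed |U| [m/s], assuming isotropic fluctuations:
        TI = sqrt(2k/3) / |U|.
    Siting criteria for horizontal-axis wind turbines compare this value
    against a manufacturer threshold (the paper's threshold is not
    assumed here)."""
    return math.sqrt(2.0 * k / 3.0) / u_mag

# e.g. k = 0.54 m^2/s^2 at a point where the mean speed is 6 m/s
ti = turbulence_intensity(0.54, 6.0)  # sqrt(0.36) / 6 = 0.1
```

Applied along a vertical line above the roof, this gives exactly the TI profiles the abstract describes comparing across roof shapes.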
http://www.mdpi.com/2079-3197/3/2/285
One fundamental problem of bioinformatics is the computational recognition of DNA and RNA binding sites. Given a set of short DNA or RNA sequences of equal length such as transcription factor binding sites or RNA splice sites, the task is to learn a pattern from this set that allows the recognition of similar sites in another set of DNA or RNA sequences. Permuted Markov (PM) models and permuted variable length Markov (PVLM) models are two powerful models for this task, but the problem of finding an optimal PM model or PVLM model is NP-hard. While the problem of finding an optimal PM model or PVLM model of order one is equivalent to the traveling salesman problem (TSP), the problem of finding an optimal PM model or PVLM model of order two is equivalent to the quadratic TSP (QTSP). Several exact algorithms exist for solving the QTSP, but it is unclear if these algorithms are capable of solving QTSP instances resulting from RNA splice sites of at least 150 base pairs in a reasonable time frame. Here, we investigate the performance of three exact algorithms for solving the QTSP for ten datasets of splice acceptor sites and splice donor sites of five different species and find that one of these algorithms is capable of solving QTSP instances of up to 200 base pairs with a running time of less than two days.

Computation 2015, 3(2), 285-298; Article; doi: 10.3390/computation3020285; published 3 June 2015. Authors: Anja Fischer, Frank Fischer, Gerold Jäger, Jens Keilwagen, Paul Molitor and Ivo Grosse.

Computation, Vol. 3, Pages 274-284: Effects of a Sprinkler on Evacuation Dynamics in Fire
http://www.mdpi.com/2079-3197/3/2/274
A fire in an enclosed space, such as a room in a building, is generally called a compartment fire. To suppress compartment fires, sprinklers for first-aid fire-fighting are installed in rooms. However, it is difficult to determine the degree to which smoke generation and fire spreading will be inhibited when sprinklers are on. In particular, demonstrating evacuation behavior in an actual fire is impossible. In this study, we evaluated the effectiveness of a sprinkler by numerical simulation. To model evacuation dynamics, a real-coded cellular automaton (RCA) was used, in which the direction and velocity of an evacuee can be set freely based on a floor field model. To model the conditions in the burning room, we used the Fire Dynamics Simulator (FDS). Two cases, with and without the sprinkler, were compared to assess the sprinkler’s effect on evacuation dynamics. The effect of smoke and the expansion of the fire-spreading region were discussed. Results show that, since the fire-spreading region disappears when the sprinkler is actuated, the evacuation time decreases. Even when the sprinkler is actuated, the smoke generated at the beginning of a fire diffuses through the whole room. However, the duration for which evacuees are overwhelmed by smoke is shorter, because the amount of smoke generated by the pyrolysis reaction is much reduced.

Computation 2015, 3(2), 274-284; Article; doi: 10.3390/computation3020274; published 3 June 2015. Authors: Kazuhiro Yamamoto, Yuki Takeuchi and Shinnosuke Nishiki.

Computation, Vol. 3, Pages 262-273: LES: Unsteady Atmospheric Turbulent Layer Inlet. A Precursor Method Application and Its Quality Check
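The floor field idea underlying the evacuation model in the sprinkler abstract above can be sketched on a discrete grid: precompute the distance from every free cell to the nearest exit, then move each evacuee down that gradient. The paper's real-coded CA allows continuous directions and speeds; this integer-grid version and the 5x5 room are simplified illustrations.

```python
from collections import deque

def floor_field(grid, exits):
    """Static floor field: BFS distance from every free cell (grid value 0)
    to the nearest exit.  Walls (non-zero cells) stay at None."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    q = deque()
    for r, c in exits:
        dist[r][c] = 0
        q.append((r, c))
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

def step(pos, dist):
    """Move an evacuee one cell down the distance gradient (greedy)."""
    r, c = pos
    cand = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    cand = [p for p in cand
            if 0 <= p[0] < len(dist) and 0 <= p[1] < len(dist[0])
            and dist[p[0]][p[1]] is not None]
    return min(cand + [pos], key=lambda p: dist[p[0]][p[1]])

room = [[0] * 5 for _ in range(5)]   # hypothetical 5x5 room, exit at (0, 0)
d = floor_field(room, [(0, 0)])
pos = step((4, 4), d)                # one cell closer to the exit
```

Coupling to a fire simulation then amounts to modulating this field, e.g. penalising cells that FDS reports as smoke-filled or burning.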
http://www.mdpi.com/2079-3197/3/2/262
The motivation of this work is to bridge the gap between experimental approaches in wind tunnel testing and numerical computations, in the field of structural design against strong winds. This paper focuses on the generation of an unsteady flow field, representative of a natural wind field, but still compatible with Computational Fluid Dynamics inlet requirements. A simple and “naive” procedure is explained, and the results are in good agreement with some international standards.

Computation 2015, 3(2), 262-273; Article; doi: 10.3390/computation3020262; published 26 May 2015. Authors: Julien Berthaut-Gerentes and Didier Delaunay.

Computation, Vol. 3, Pages 235-261: Engineering-Based Thermal CFD Simulations on Massive Parallel Systems
http://www.mdpi.com/2079-3197/3/2/235
The development of parallel Computational Fluid Dynamics (CFD) codes is a challenging task that entails efficient parallelization concepts and strategies in order to achieve good scalability values when running those codes on modern supercomputers with several thousands to millions of cores. In this paper, we present a hierarchical data structure for massive parallel computations that supports the coupling of a Navier–Stokes-based fluid flow code with the Boussinesq approximation in order to address complex thermal scenarios for energy-related assessments. The data structure is specifically designed for interactive data exploration and visualization during runtime of the simulation code, addressing a major shortcoming of traditional high-performance computing (HPC) simulation codes. We further show and discuss speed-up values obtained on one of Germany’s top-ranked supercomputers with up to 140,000 processes and present simulation results for different engineering-based thermal problems.

Computation 2015, 3(2), 235-261; Article; doi: 10.3390/computation3020235; published 22 May 2015. Authors: Jérôme Frisch, Ralf-Peter Mundani, Ernst Rank and Christoph van Treeck.

Computation, Vol. 3, Pages 222-234: Computational Approach to 3D Modeling of the Lymph Node Geometry
http://www.mdpi.com/2079-3197/3/2/222
In this study we present a computational approach to the generation of the major geometric structures of an idealized murine lymph node (LN). In this generation, we consider the major compartments such as the subcapsular sinus, B cell follicles, trabecular and medullar sinuses, blood vessels and the T cell zone with a primary focus on the fibroblastic reticular cell (FRC) network. Confocal microscopy data of LN macroscopic structures and structural properties of the FRC network have been generated and utilized in the present model. The methodology sets a library of modules that can be used to assemble a solid geometric LN model and subsequently generate an adaptive mesh model capable of implementing transport phenomena. Overall, based on the use of high-resolution confocal microscopy and morphological analysis of cell 3D reconstructions, we have developed a computational model of the LN geometry, suitable for further investigation in studies of fluid transport and cell migration in this immunologically essential organ.

Computation 2015, 3(2), 222-234; Communication; doi: 10.3390/computation3020222; published 22 May 2015. Authors: Alexey Kislitsyn, Rostislav Savinkov, Mario Novkovic, Lucas Onder and Gennady Bocharov.

Computation, Vol. 3, Pages 197-221: Simulation Frameworks for Morphogenetic Problems
http://www.mdpi.com/2079-3197/3/2/197
Morphogenetic modelling and simulation help to understand the processes by which the form and shapes of organs (organogenesis) and organisms (embryogenesis) emerge. This requires two mutually coupled entities: the biomolecular signalling network and the tissue. Whereas the modelling of the signalling has been discussed and used in a multitude of works, the realistic modelling of the tissue has only started on a larger scale in the last decade. Here, common tissue modelling techniques are reviewed. Besides the continuum approach, the principles and main applications of the spheroid, vertex, Cellular Potts, Immersed Boundary and Subcellular Element models are discussed in detail. In recent years, many software frameworks, implementing the aforementioned methods, have been developed. The most widely used frameworks and modelling markup languages and standards are presented.

Computation 2015, 3(2), 197-221; Review; doi: 10.3390/computation3020197; published 24 April 2015. Author: Simon Tanaka.

Computation, Vol. 3, Pages 177-196: A Guide to Phylogenetic Reconstruction Using Heterogeneous Models—A Case Study from the Root of the Placental Mammal Tree
http://www.mdpi.com/2079-3197/3/2/177
There are numerous phylogenetic reconstruction methods and models available—but which should you use and why? Important considerations in phylogenetic analyses include data quality, structure, signal, alignment length and sampling. If poorly modelled, variation in rates of change across proteins and across lineages can lead to incorrect phylogeny reconstruction which can then lead to downstream misinterpretation of the underlying data. The risk of choosing and applying an inappropriate model can be reduced with some critical yet straightforward steps outlined in this paper. We use the question of the position of the root of placental mammals as our working example to illustrate the topological impact of model misspecification. Using this case study we focus on using models in a Bayesian framework and we outline the steps involved in identifying and assessing better fitting models for specific datasets.Computation2015-04-1532Article10.3390/computation30201771771962079-31972015-04-15doi: 10.3390/computation3020177Raymond MoranClaire MorganMary O'Connell<![CDATA[Computation, Vol. 3, Pages 150-176: Coupling of Petri Net Models of the Mycobacterial Infection Process and Innate Immune Response]]>
http://www.mdpi.com/2079-3197/3/2/150
Computational and mathematical modeling is important in support of a better understanding of complex behavior in biology. For the investigation of biological systems, researchers have used computers to construct, verify, and validate models that describe the mechanisms behind biological processes in multi-scale representations. In this paper we combine Petri net models that represent the mycobacterial infection process and innate immune response at various levels of organization, from molecular interaction to granuloma dissemination. In addition to the conventional graphical representation of the Petri net, the outcome of the model is projected onto a 3D model representing the zebrafish embryo. In this manner we provide a visualization of the process in a simulation framework that portrays the infection in the living system.Computation2015-04-0832Article10.3390/computation30201501501762079-31972015-04-08doi: 10.3390/computation3020150Rafael CarvalhoJeroen van den HeuvelJetty KleijnFons Verbeek<![CDATA[Computation, Vol. 3, Pages 128-149: CFD Simulation and Optimisation of a Low Energy Ventilation and Cooling System]]>
http://www.mdpi.com/2079-3197/3/2/128
Mechanical Heating Ventilation and Air-Conditioning (HVAC) systems account for 60% of the total energy consumption of buildings. As a sector, buildings contribute about 40% of the total global energy demand. By using passive technology coupled with natural ventilation from wind towers, significant amounts of energy can be saved, reducing the emissions of greenhouse gases. In this study, the role of Computational Fluid Dynamics (CFD) analysis in aiding the development of wind towers was explored. Designs ranging from initial concepts of simple wind tower mechanics to detailed wind towers integrating modifications specifically to improve their efficiency were detailed. From this, using CFD analysis, heat transfer devices were integrated into a wind tower to provide cooling for incoming air, thus reducing the reliance on mechanical HVAC systems. The commercial CFD code Fluent was used in this study to simulate the airflow inside the wind tower model with the heat transfer devices. Scaled wind tunnel testing was used to validate the computational model. The airflow supply velocity was measured and compared with the numerical results and good correlation was observed. Additionally, the spacing between the heat transfer devices was varied to optimise the performance. The technology presented here is subject to a patent application (PCT/GB2014/052263).Computation2015-04-0232Article10.3390/computation30201281281492079-31972015-04-02doi: 10.3390/computation3020128John CalautitDominic O'ConnorPolytimi SofotasiouBen Hughes<![CDATA[Computation, Vol. 3, Pages 114-127: Evolution by Pervasive Gene Fusion in Antibiotic Resistance and Antibiotic Synthesizing Genes]]>
http://www.mdpi.com/2079-3197/3/2/114
Phylogenetic (tree-based) approaches to understanding evolutionary history are unable to incorporate convergent evolutionary events where two genes merge into one. In this study, as exemplars of what can be achieved when a tree is not assumed a priori, we have analysed the evolutionary histories of polyketide synthase genes and antibiotic resistance genes and have shown that their history is replete with convergent events as well as divergent events. We demonstrate that the overall histories of these genes more closely resembles the remodelling that might be seen with the children’s toy Lego, than the standard model of the phylogenetic tree. This work demonstrates further that genes can act as public goods, available for re-use and incorporation into other genetic goods.Computation2015-03-2632Article10.3390/computation30201141141272079-31972015-03-26doi: 10.3390/computation3020114Orla ColemanRuth HoganNicole McGoldrickNiamh RuddenJames McInerney<![CDATA[Computation, Vol. 3, Pages 99-113: Evolutionary Dynamics in Gene Networks and Inference Algorithms]]>
http://www.mdpi.com/2079-3197/3/1/99
Dynamical interactions among sets of genes (and their products) regulate developmental processes and some dynamical diseases, like cancer. Gene regulatory networks (GRNs) are directed networks that define interactions (links) among different genes/proteins involved in such processes. Genetic regulation can be modified during the time course of the process, which may imply changes in the nodes' activity that lead the system from a specific state to a different one at a later time (dynamics). How the GRN modifies its topology, to properly drive a developmental process, and how this regulation was acquired across evolution are questions that the evolutionary dynamics of gene networks tackles. In the present work we review important methodology in the field and highlight the combination of these methods with evolutionary algorithms. In recent years, this combination has become a powerful tool to fit models with the increasingly available experimental data.Computation2015-03-1331Review10.3390/computation3010099991132079-31972015-03-13doi: 10.3390/computation3010099Daniel Aguilar-HidalgoMaría LemosAntonio Córdoba<![CDATA[Computation, Vol. 3, Pages 72-98: Use of CMEIAS Image Analysis Software to Accurately Compute Attributes of Cell Size, Morphology, Spatial Aggregation and Color Segmentation that Signify in Situ Ecophysiological Adaptations in Microbial Biofilm Communities]]>
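As a minimal sketch of the model-fitting combination the review highlights, the following toy evolutionary algorithm recovers a single model parameter by minimising a squared-error fitness. All names, parameter values, and the one-dimensional search space are illustrative assumptions, not taken from any of the reviewed methods.

```python
import random

def evolve(fitness, pop_size=20, gens=60, sigma=0.1, seed=1):
    """Toy (mu+lambda)-style evolutionary search minimising `fitness`."""
    rng = random.Random(seed)
    pop = [rng.uniform(-1.0, 1.0) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)                 # rank by fitness (lower = better)
        parents = pop[: pop_size // 2]        # keep the better half (elitism)
        children = [p + rng.gauss(0.0, sigma) for p in parents]
        pop = parents + children
    return min(pop, key=fitness)

# Hypothetical example: recover a decay-rate parameter via squared error
target = 0.3
best = evolve(lambda k: (k - target) ** 2)
```

In a real GRN study the fitness would compare simulated network dynamics against experimental time series rather than a known target value.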
http://www.mdpi.com/2079-3197/3/1/72
In this review, we describe computational features of computer-assisted microscopy that are unique to the Center for Microbial Ecology Image Analysis System (CMEIAS) software, and examples illustrating how they can be used to gain ecophysiological insights into microbial adaptations occurring at micrometer spatial scales directly relevant to individual cells occupying their ecological niches in situ. These features include algorithms that accurately measure (1) microbial cell length relevant to avoidance of protozoan bacteriovory; (2) microbial biovolume body mass relevant to allometric scaling and local apportionment of growth-supporting nutrient resources; (3) pattern recognition rules for morphotype classification of diverse microbial communities relevant to their enhanced fitness for success in the particular habitat; (4) spatial patterns of coaggregation that reveal the local intensity of cooperative vs. competitive adaptations in colonization behavior relevant to microbial biofilm ecology; and (5) object segmentation of complex color images to differentiate target microbes reporting successful cell-cell communication. These unique computational features contribute to the CMEIAS mission of developing accurate and freely accessible tools of image bioinformatics that strengthen microscopy-based approaches for understanding microbial ecology at single-cell resolution.Computation2015-03-0931Review10.3390/computation301007272982079-31972015-03-09doi: 10.3390/computation3010072Frank DazzoBrighid Niccum<![CDATA[Computation, Vol. 3, Pages 58-71: Visual Simulation of Soil-Microbial System Using GPGPU Technology]]>
http://www.mdpi.com/2079-3197/3/1/58
General Purpose (use of) Graphics Processing Units (GPGPU) is a promising technology for simulation upscaling; in particular for bottom–up modelling approaches seeking to translate micro-scale system processes to macro-scale properties. Many existing simulations of soil ecosystems do not recover the emergent system scale properties and this may be a consequence of “missing” information at finer scales. Interpretation of model output can be challenging and we advocate the “built-in” visual simulation afforded by GPGPU implementations. We apply this GPGPU approach to a reaction–diffusion soil ecosystem model with the intent of linking micro (micron) and core (cm) spatial scales to investigate how microbes respond to changing environments and the consequences on soil respiration. The performance is evaluated in terms of computational speed up, spatial upscaling and visual feedback. We conclude that a GPGPU approach can significantly improve computational efficiency and offers the potential added benefit of visual immediacy. For massive spatial domains distribution over GPU devices may still be required.Computation2015-02-2731Article10.3390/computation301005858712079-31972015-02-27doi: 10.3390/computation3010058Ruth FalconerAlasdair Houston<![CDATA[Computation, Vol. 3, Pages 29-57: A Review of Two Multiscale Methods for the Simulation of Macromolecular Assemblies: Multiscale Perturbation and Multiscale Factorization]]>
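As an illustration only (not the authors' GPU code), the core update that such a reaction-diffusion soil model parallelises is a local stencil. Below is a hypothetical one-dimensional CPU reference of a single explicit step, with assumed parameter names; on a GPU each thread would compute one `out[i]`.

```python
def diffusion_reaction_step(u, D=0.1, uptake=0.05, dt=1.0):
    """One explicit Euler step: diffusion (3-point Laplacian with zero-flux
    boundaries) plus first-order microbial uptake of the substrate field u."""
    n = len(u)
    out = [0.0] * n
    for i in range(n):
        left = u[i - 1] if i > 0 else u[i]        # mirrored (zero-flux) edge
        right = u[i + 1] if i < n - 1 else u[i]
        lap = left - 2.0 * u[i] + right
        out[i] = u[i] + dt * (D * lap - uptake * u[i])
    return out

# A substrate pulse spreads out and is consumed over ten steps
field = [0.0] * 4 + [1.0] + [0.0] * 4
for _ in range(10):
    field = diffusion_reaction_step(field)
```

Because the stencil is local and uniform, the same kernel maps naturally onto GPU threads, which is what makes the upscaling described above feasible.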
http://www.mdpi.com/2079-3197/3/1/29
Many mesoscopic N-atom systems derive their structural and dynamical properties from processes coupled across multiple scales in space and time. That is, they simultaneously deform or display collective behaviors, while experiencing atomic scale vibrations and collisions. Due to the large number of atoms involved and the need to simulate over long time periods of biological interest, traditional computational tools, like molecular dynamics, are often infeasible for such systems. Hence, in the current review article, we present and discuss two recent multiscale methods, stemming from the N-atom formulation and an underlying scale separation, that can be used to study such systems in a friction-dominated regime: multiscale perturbation theory and multiscale factorization. These novel analytic foundations provide a self-consistent approach to yield accurate and feasible long-time simulations with atomic detail for a variety of multiscale phenomena, such as viral structural transitions and macromolecular self-assembly. As such, the accuracy and efficiency of the associated algorithms are demonstrated for a few representative biological systems, including satellite tobacco mosaic virus (STMV) and lactoferrin.Computation2015-02-0531Article10.3390/computation301002929572079-31972015-02-05doi: 10.3390/computation3010029Stephen PankavichPeter Ortoleva<![CDATA[Computation, Vol. 3, Pages 2-28: Computational Studies of the Intestinal Host-Microbiota Interactome]]>
http://www.mdpi.com/2079-3197/3/1/2
A large and growing body of research implicates aberrant immune response and compositional shifts of the intestinal microbiota in the pathogenesis of many intestinal disorders. The molecular and physical interaction between the host and the microbiota, known as the host-microbiota interactome, is one of the key drivers in the pathophysiology of many of these disorders. This host-microbiota interactome is a set of dynamic and complex processes, and needs to be treated as a distinct entity and subject for study. Disentangling this complex web of interactions will require novel approaches, using a combination of data-driven bioinformatics with knowledge-driven computational modeling. This review describes the computational approaches for investigating the host-microbiota interactome, with emphasis on the human intestinal tract and innate immunity, and highlights open challenges and existing gaps in the computational methodology for advancing our knowledge about this important facet of human health.Computation2015-01-1431Review10.3390/computation30100022282079-31972015-01-14doi: 10.3390/computation3010002Scott ChristleyChase CockrellGary An<![CDATA[Computation, Vol. 3, Pages 1: Acknowledgement to Reviewers of Computation in 2014]]>
http://www.mdpi.com/2079-3197/3/1/1
The editors of Computation would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2014:[...]Computation2015-01-0931Editorial10.3390/computation3010001112079-31972015-01-09doi: 10.3390/computation3010001 Computation Editorial Office<![CDATA[Computation, Vol. 2, Pages 246-257: SBMLSimulator: A Java Tool for Model Simulation and Parameter Estimation in Systems Biology]]>
http://www.mdpi.com/2079-3197/2/4/246
The identification of suitable model parameters for biochemical reactions has been recognized as a quite difficult endeavor. Parameter values from the literature or from experiments often cannot be directly combined in complex reaction systems. Nature-inspired optimization techniques can find appropriate sets of parameters that calibrate a model to experimentally obtained time series data. We present SBMLsimulator, a tool that combines the Systems Biology Simulation Core Library for dynamic simulation of biochemical models with the heuristic optimization framework EvA2. SBMLsimulator provides an intuitive graphical user interface with various options as well as a fully-featured command-line interface for large-scale and script-based model simulation and calibration. In a parameter estimation study based on a published model and artificial data we demonstrate the capability of SBMLsimulator to identify parameters. SBMLsimulator is useful both for interactive simulation and exploration of the parameter space and for large-scale model calibration and estimation of uncertain parameter values.Computation2014-12-1824Article10.3390/computation20402462462572079-31972014-12-18doi: 10.3390/computation2040246Alexander DörrRoland KellerAndreas ZellAndreas Dräger<![CDATA[Computation, Vol. 2, Pages 221-245: Computational and Statistical Analyses of Insertional Polymorphic Endogenous Retroviruses in a Non-Model Organism]]>
http://www.mdpi.com/2079-3197/2/4/221
Endogenous retroviruses (ERVs) are a class of transposable elements found in all vertebrate genomes that contribute substantially to genomic functional and structural diversity. A host species acquires an ERV when an exogenous retrovirus infects a germ cell of an individual and becomes part of the genome inherited by viable progeny. ERVs that colonized ancestral lineages are fixed in contemporary species. However, in some extant species, ERV colonization is ongoing, which results in variation in ERV frequency in the population. To study the consequences of ERV colonization of a host genome, methods are needed to assign each ERV to a location in a species’ genome and determine which individuals have acquired each ERV by descent. Because well annotated reference genomes are not widely available for all species, de novo clustering approaches provide an alternative to reference mapping that are insensitive to differences between query and reference and that are amenable to mobile element studies in both model and non-model organisms. However, there is substantial uncertainty in both identifying ERV genomic position and assigning each unique ERV integration site to individuals in a population. We present an analysis suitable for detecting ERV integration sites in species without the need for a reference genome. Our approach is based on improved de novo clustering methods and statistical models that take the uncertainty of assignment into account and yield a probability matrix of shared ERV integration sites among individuals. 
We demonstrate that polymorphic integrations of a recently identified endogenous retrovirus in deer reflect contemporary relationships among individuals and populations.Computation2014-11-2824Article10.3390/computation20402212212452079-31972014-11-28doi: 10.3390/computation2040221Le BaoDaniel EllederRaunaq MalhotraMichael DeGiorgioTheodora MaravegiasLindsay HorvathLaura CarrelColin GillinTomáš HronHelena FábryováDavid HunterMary Poss<![CDATA[Computation, Vol. 2, Pages 199-220: Computation of the Likelihood in Biallelic Diffusion Models Using Orthogonal Polynomials]]>
http://www.mdpi.com/2079-3197/2/4/199
In population genetics, parameters describing forces such as mutation, migration and drift are generally inferred from molecular data. Lately, approximate methods based on simulations and summary statistics have been widely applied for such inference, even though these methods waste information. In contrast, probabilistic methods of inference can be shown to be optimal, if their assumptions are met. In genomic regions where recombination rates are high relative to mutation rates, polymorphic nucleotide sites can be assumed to evolve independently from each other. The distribution of allele frequencies at a large number of such sites has been called “allele-frequency spectrum” or “site-frequency spectrum” (SFS). Conditional on the allelic proportions, the likelihoods of such data can be modeled as binomial. A simple model representing the evolution of allelic proportions is the biallelic mutation-drift or mutation-directional selection-drift diffusion model. With series of orthogonal polynomials, specifically Jacobi and Gegenbauer polynomials, or the related spheroidal wave function, the diffusion equations can be solved efficiently. In the neutral case, the product of the binomial likelihoods with the sum of such polynomials leads to finite series of polynomials, i.e., relatively simple equations, from which the exact likelihoods can be calculated. In this article, the use of orthogonal polynomials for inferring population genetic parameters is investigated.Computation2014-11-1424Review10.3390/computation20401991992202079-31972014-11-14doi: 10.3390/computation2040199Claus Vogl<![CDATA[Computation, Vol. 2, Pages 182-198: Incongruencies in Vaccinia Virus Phylogenetic Trees]]>
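The binomial building block mentioned above, the likelihood of observing k copies of an allele among n sampled chromosomes given allelic proportion x, can be written directly. This is a generic sketch of that single ingredient, not the article's Jacobi/Gegenbauer-series machinery, and the function names are invented for illustration.

```python
from math import comb, log

def binomial_site_likelihood(k, n, x):
    """P(k derived alleles among n sampled chromosomes | allelic proportion x)."""
    return comb(n, k) * x**k * (1.0 - x) ** (n - k)

def sfs_log_likelihood(counts, n, x):
    """Joint log-likelihood of independently evolving polymorphic sites,
    where counts[i] is the derived-allele count observed at site i."""
    return sum(log(binomial_site_likelihood(k, n, x)) for k in counts)
```

In the full method, this binomial likelihood is not evaluated at a fixed x but integrated against the diffusion model's polynomial series over allelic proportions, which is what yields the exact site-frequency-spectrum likelihoods.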
http://www.mdpi.com/2079-3197/2/4/182
Over the years, as more complete poxvirus genomes have been sequenced, phylogenetic studies of these viruses have become more prevalent. In general, the results show similar relationships between the poxvirus species; however, some inconsistencies are notable. Previous analyses of the viral genomes contained within the vaccinia virus (VACV)-Dryvax vaccine revealed that their phylogenetic relationships were sometimes clouded by low bootstrapping confidence. To analyze the VACV-Dryvax genomes in detail, a new tool-set was developed and integrated into the Base-By-Base bioinformatics software package. Analyses showed that fewer unique positions were present in each VACV-Dryvax genome than expected. A series of patterns, each containing several single nucleotide polymorphisms (SNPs) were identified that were counter to the results of the phylogenetic analysis. The VACV genomes were found to contain short DNA sequence blocks that matched more distantly related clades. Additionally, similar non-conforming SNP patterns were observed in (1) the variola virus clade; (2) some cowpox clades; and (3) VACV-CVA, the direct ancestor of VACV-MVA. Thus, traces of past recombination events are common in the various orthopoxvirus clades, including those associated with smallpox and cowpox viruses.Computation2014-10-1424Article10.3390/computation20401821821982079-31972014-10-14doi: 10.3390/computation2040182Chad SmithsonSamantha KampmanBenjamin HetmanChris Upton<![CDATA[Computation, Vol. 2, Pages 159-181: Multiscale Modeling of the Early CD8 T-Cell Immune Response in Lymph Nodes: An Integrative Study]]>
http://www.mdpi.com/2079-3197/2/4/159
CD8 T-cells are critical in controlling infection by intracellular pathogens. Upon encountering antigen presenting cells, T-cell receptor activation promotes the differentiation of naïve CD8 T-cells into strongly proliferating activated and effector stages. We propose a 2D-multiscale computational model to study the maturation of CD8 T-cells in a lymph node controlled by their molecular profile. A novel molecular pathway is presented and converted into an ordinary differential equation model, coupled with a cellular Potts model to describe cell-cell interactions. Key molecular players such as activated IL2 receptor and Tbet levels control the differentiation from naïve into activated and effector stages, respectively, while caspases and Fas-Fas ligand interactions control cell apoptosis. Coupling this molecular model to the cellular scale successfully reproduces qualitatively the evolution of total CD8 T-cell counts observed in mice lymph node, between Day 3 and 5.5 post-infection. Furthermore, this model allows us to make testable predictions of the evolution of the different CD8 T-cell stages.Computation2014-09-2924Article10.3390/computation20401591591812079-31972014-09-29doi: 10.3390/computation2040159Sotiris ProkopiouLoic BarbarrouxSamuel BernardJulien MafilleYann LeverrierChristophe ArpinJacqueline MarvelOlivier GandrillonFabien Crauste<![CDATA[Computation, Vol. 2, Pages 131-158: Computational Models of the NF-KB Signalling Pathway]]>
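A minimal sketch of the cellular Potts ingredient used at the cellular scale: a lattice of cell IDs whose boundary energy drives cell-cell interactions. The contact penalty J and the tiny grid below are illustrative assumptions, not the model's actual Hamiltonian terms.

```python
def potts_boundary_energy(grid, J=1.0):
    """Sum a contact penalty J over every pair of unlike 4-neighbours.
    grid[r][c] holds the ID of the cell occupying that lattice site."""
    rows, cols = len(grid), len(grid[0])
    energy = 0.0
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((1, 0), (0, 1)):   # each neighbour pair counted once
                rr, cc = r + dr, c + dc
                if rr < rows and cc < cols and grid[r][c] != grid[rr][cc]:
                    energy += J
    return energy

# Two cells sharing a flat interface of length 2
E = potts_boundary_energy([[1, 1], [2, 2]])
```

In a full cellular Potts simulation, Metropolis-style copy attempts would be accepted or rejected based on the change in this energy, with additional terms (e.g. volume constraints) coupled to the molecular ODE state of each cell.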
http://www.mdpi.com/2079-3197/2/4/131
In this review article, we discuss the current state of computational modelling of the nuclear factor-kappa B (NF-κB) signalling pathway. NF-κB is a transcription factor, which is ubiquitous within cells and controls a number of immune responses, including inflammation and apoptosis. The NF-κB signalling pathway is tightly regulated, commencing with activation at the cell membrane, signal transduction through various components within the cytoplasm, translocation of NF-κB into the nucleus and, finally, the transcription of various genes relating to the innate and adaptive immune responses. There have been a number of computational (mathematical) models developed of the signalling pathway over the past decade. This review describes how these approaches have helped advance our understanding of NF-κB control.Computation2014-09-2924Review10.3390/computation20401311311582079-31972014-09-29doi: 10.3390/computation2040131Richard WilliamsJon TimmisEva Qwarnstrom<![CDATA[Computation, Vol. 2, Pages 112-130: On Mechanistic Modeling of Gene Content Evolution: Birth-Death Models and Mechanisms of Gene Birth and Gene Retention]]>
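The negative-feedback core shared by most of these models (nuclear NF-κB induces its own inhibitor, IκB, which in turn represses NF-κB activation) can be caricatured with two ODEs. The rate constants below are arbitrary illustrations, not fitted values from any reviewed model.

```python
def feedback_step(n, i, dt=0.01, k_act=1.0, k_syn=1.0, k_deg=0.5):
    """One Euler step of a two-variable negative-feedback caricature:
    nuclear NF-kB activity n induces its inhibitor i, which represses n."""
    dn = k_act / (1.0 + i) - k_deg * n    # activation repressed by inhibitor
    di = k_syn * n - k_deg * i            # NF-kB-driven inhibitor synthesis
    return n + dt * dn, i + dt * di

n, i = 0.0, 0.0
for _ in range(2000):        # integrate to t = 20; settles near steady state
    n, i = feedback_step(n, i)
```

Richer published models add the delays and compartments (cytoplasmic/nuclear shuttling, transcriptional lag) that produce the sustained NF-κB oscillations this simple two-variable caricature cannot.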
http://www.mdpi.com/2079-3197/2/3/112
Characterizing the mechanisms of duplicate gene retention using phylogenetic methods requires models that are consistent with different biological processes. The interplay between complex biological processes and necessarily simpler statistical models leads to a complex modeling problem. A discussion of the relationship between biological processes, existing models for duplicate gene retention and data is presented. Existing models are then extended in deriving two new birth/death models for phylogenetic application in a gene tree/species tree reconciliation framework to enable probabilistic inference of the mechanisms from model parameterization. The goal of this work is to synthesize a detailed discussion of modeling duplicate genes to address biological questions, moving from previous work to future trajectories with the aim of generating better models and better inference.Computation2014-08-2823Article10.3390/computation20301121121302079-31972014-08-28doi: 10.3390/computation2030112Ashley TeufelJing ZhaoMalgorzata O'ReillyLiang LiuDavid Liberles<![CDATA[Computation, Vol. 2, Pages 102-111: Investigation of the Ergopeptide Epimerization Process]]>
http://www.mdpi.com/2079-3197/2/3/102
Ergopeptides, like ergocornine and α-ergocryptine, exist in an S- and in an R-configuration. Kinetic experiments imply that certain configurations are preferred depending on the solvent. The experimental methods are explained in this article. Furthermore, computational methods are used to understand this configurational preference. Standard quantum chemical methods can predict the favored configurations by using minimum energy calculations on the potential energy landscape. However, the explicit role of the solvent is not revealed by this type of method. In order to better understand its influence, classical mechanical molecular simulations are applied. It appears from our research that “folding” the ergopeptide molecules into an intermediate state (between the S- and the R-configuration) is mechanically hindered for the preferred configurations.Computation2014-08-0823Article10.3390/computation20301021021112079-31972014-08-08doi: 10.3390/computation2030102Karsten AndraeStefan MerkelVedat DurmazKonstantin FackeldeyRobert KöppenMarcus WeberMatthias Koch<![CDATA[Computation, Vol. 2, Pages 83-101: Cultural Collapse and System Survival Due to Environmental Modification]]>
http://www.mdpi.com/2079-3197/2/3/83
We consider a simple mathematical approach to the rise and fall of societies based on population growth and its effects on the environment, both beneficial and detrimental. We find that in any simple model of population dynamics with environmental coupling, stable cultures are impossible. Populations inevitably grow or decline exponentially. Further, if the parameters defining a civilisation are allowed to evolve towards an evolutionarily stable state, the only possible solutions are those where each culture ultimately declines. However, computer simulations with multiple competing cultures show that, while each eventually collapses, some are always extant and the system is robust. In this broad class of models, individual death is a requirement for system survival.Computation2014-07-2923Article10.3390/computation2030083831012079-31972014-07-29doi: 10.3390/computation2030083Graeme AcklandAdrien HenryAlexander WilliamsMorrel Cohen<![CDATA[Computation, Vol. 2, Pages 61-82: Universal Dimensions of Meaning Derived from Semantic Relations among Words and Senses: Mereological Completeness vs. Ontological Generality]]>
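A hypothetical toy in the spirit of the abstract's claim (not the authors' model): a population P whose growth rate is set by environmental quality E, which the population permanently degrades. With depletion-only coupling the culture first grows exponentially, then collapses exponentially once E falls below the break-even level d/r; all parameter values are arbitrary assumptions.

```python
def step(P, E, dt=0.01, r=1.0, d=0.6, damage=0.5):
    """Euler step: P grows when environmental quality E exceeds d/r and
    declines otherwise; population activity permanently degrades E."""
    dP = (r * E - d) * P
    dE = -damage * P * E
    return P + dt * dP, E + dt * dE

P, E = 0.1, 1.0
history = []
for _ in range(5000):          # integrate to t = 50
    P, E = step(P, E)
    history.append(P)
```

The trajectory rises to a peak and then decays towards zero, illustrating the rise-and-collapse behaviour; multi-culture robustness would require running many such populations in competition.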
http://www.mdpi.com/2079-3197/2/3/61
A key to semantic analysis is a precise and practically useful definition of meaning that is general for all domains of knowledge. We previously introduced the notion of weak semantic map: a metric space allocating concepts along their most general (universal) semantic characteristics while at the same time ignoring other, domain-specific aspects of their meanings. Here we address questions of the number, quality, and mutual independence of the weak semantic dimensions. Specifically, we employ semantic relationships not previously used for weak semantic mapping, such as holonymy/meronymy (“is-part/member-of”), and we compare maps constructed from word senses to those constructed from words. We show that the “completeness” dimension derived from the holonym/meronym relation is independent of, and practically orthogonal to, the “abstractness” dimension derived from the hypernym-hyponym (“is-a”) relation, while both dimensions are orthogonal to the maps derived from synonymy and antonymy. Interestingly, the choice of using relations among words vs. senses implies a non-trivial trade-off between rich and unambiguous information due to homonymy and polysemy. The practical utility of the new and prior dimensions is illustrated by the automated evaluation of different kinds of documents. Residual analysis of available linguistic resources, such as WordNet, suggests that the number of universal semantic dimensions representable in natural language may be finite. Their complete characterization, as well as the extension of results to non-linguistic materials, remains an open challenge.Computation2014-07-1523Article10.3390/computation203006161822079-31972014-07-15doi: 10.3390/computation2030061Alexei SamsonovichGiorgio Ascoli<![CDATA[Computation, Vol. 2, Pages 47-60: Can the Thermodynamic Hodgkin-Huxley Model of Voltage-Dependent Conductance Extrapolate for Temperature?]]>
http://www.mdpi.com/2079-3197/2/2/47
Hodgkin and Huxley (H-H) fitted their model of voltage-dependent conductances to experimental data using empirical functions of voltage. The thermodynamic H-H model of voltage dependent conductances is more physically plausible, as it constrains and parameterises its empirical fit by assuming that ion channel transition rates depend exponentially on a free energy barrier that in turn, linearly or non-linearly, depends on voltage. The original H-H model contains no explicit temperature terms and requires Q10 factors to describe data at different temperatures. The thermodynamic H-H model does have explicit terms for temperature. Do these endow the model with extrapolation for temperature? We utilised voltage clamp data for a voltage-gated K+ current, recorded at three different temperatures. The thermodynamic H-H model’s free parameters were fitted (Marquardt-Levenberg algorithm) to a data set recorded at one (or more) temperature(s). Then we assessed whether it could describe another data set, recorded at a different temperature, with these same free parameter values and its temperature terms set to the new temperature. We found that it could not.Computation2014-05-1422Article10.3390/computation202004747602079-31972014-05-14doi: 10.3390/computation2020047Michael Forrest<![CDATA[Computation, Vol. 2, Pages 23-46: A 3-D Model of a Perennial Ryegrass Primary Cell Wall and Its Enzymatic Degradation]]>
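The explicit temperature terms in question enter through a transition rate of the generic Eyring/Boltzmann form. The sketch below shows only that structure, with made-up parameter values (not the paper's fitted K+-channel parameters), for the linear voltage-dependence case.

```python
import math

R = 8.314        # gas constant, J/(mol*K)
F = 96485.0      # Faraday constant, C/mol

def transition_rate(V, T, A=1.0e3, dH=5.0e4, dS=100.0, z=1.5):
    """Rate over a free-energy barrier dG = dH - T*dS - z*F*V that depends
    linearly on membrane voltage V (volts) and explicitly on temperature T (K).
    All parameter values here are illustrative, not fitted."""
    dG = dH - T * dS - z * F * V
    return A * math.exp(-dG / (R * T))
```

The extrapolation test the authors perform amounts to fitting A, dH, dS and z against recordings at one temperature and then evaluating the same expression with T set to another recording temperature; their result is that the predicted rates do not match the second data set.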
http://www.mdpi.com/2079-3197/2/2/23
We have developed a novel 3-D, agent-based model of cell-wall digestion to improve our understanding of ruminal cell-wall digestion. It offers a capability to study cell walls and their enzymatic modification, by providing a representation of cellulose microfibrils and non-cellulosic polysaccharides and by simulating their spatial and catalytic interactions with enzymes. One can vary cell-wall composition and the types and numbers of enzyme molecules, allowing the model to be applied to a range of systems where cell walls are degraded and to the modification of cell walls by endogenous enzymes. As a proof of principle, we have modelled the wall of a mesophyll cell from the leaf of perennial ryegrass and then simulated its enzymatic degradation. This is a primary, non-lignified cell wall and the model includes cellulose, hemicelluloses (glucuronoarabinoxylans, 1,3;1,4-β-glucans, and xyloglucans) and pectin. These polymers are represented at the level of constituent monosaccharides, and assembled to form a 3-D, meso-scale representation of the molecular structure of the cell wall. The composition of the cell wall can be parameterised to represent different walls in different cell types and taxa. The model can contain arbitrary combinations of different enzymes. It simulates their random diffusion through the polymer networks taking collisions into account, allowing steric hindrance from cell-wall polymers to be modelled. Steric considerations are included when target bonds are encountered, and breakdown products resulting from enzymatic activity are predicted.Computation2014-05-0522Article10.3390/computation202002323462079-31972014-05-05doi: 10.3390/computation2020023Indrakumar VetharaniamWilliam KellyGraeme AttwoodPhilip Harris<![CDATA[Computation, Vol. 2, Pages 12-22: Ab Initio Research on a New Type of Half-Metallic Double Perovskites, A2CrMO6 (A = IVA Group Elements; M = Mo, Re and W)]]>
http://www.mdpi.com/2079-3197/2/1/12
The research based on density functional theory was carried out using generalized gradient approximation (GGA) for full-structural optimization and the addition of the correlation effect (GGA + U (Coulomb parameter)) in a double perovskite structure, A2BB'O6. Owing to the similar valence electrons of IIA(s2) and IVA(p2), IVA group elements instead of alkaline-earth elements were placed on the A-site ion position with fixed BB' combinations as CrM (M = Mo, Re and W). The ferrimagnetic half-metallic (HM-FiM) properties can be attributed to the p-d hybridization between the Crd-Mp and the double exchange. All the compounds can be half-metallic (HM) materials, except Si2CrMoO6, Ge2CrMoO6 and Ge2CrReO6, because the strong-correlation correction should be considered. For M = W, only A = Sn and Pb are possible candidates as HM materials. Nevertheless, an examination of the structural stability is needed, because Si, Ge, Sn and Pb are quite different from Sr. All compounds are stable, except for the Si-based double perovskite structure.Computation2014-03-2121Article10.3390/computation201001212222079-31972014-03-21doi: 10.3390/computation2010012Yun-Ping LiuHuei-Ru FuhYin-Kuo Wang<![CDATA[Computation, Vol. 2, Pages 1-11: Linear Scaling Solution of the Time-Dependent Self-Consistent-Field Equations]]>
http://www.mdpi.com/2079-3197/2/1/1
A new approach to solving the Time-Dependent Self-Consistent-Field equations is developed based on the double quotient formulation of Tsiper (2001, J. Phys. B). Dual-channel, quasi-independent non-linear optimization of these quotients is found to yield convergence rates approaching those of the best-case (single-channel) Tamm-Dancoff approximation. This formulation is variational with respect to matrix truncation, admitting linear scaling solution of the matrix eigenvalue problem, which is demonstrated for bulk excitons in a polyphenylene vinylene oligomer and a (4,3) carbon nanotube segment. Computation, Vol. 2, Issue 1, pages 1–11, 2014-03-14. Letter. ISSN 2079-3197. doi: 10.3390/computation2010001. Author: Matt Challacombe.<![CDATA[Computation, Vol. 1, Pages 31-45: Second-Row Transition-Metal Doping of (ZniSi), i = 12, 16 Nanoclusters: Structural and Magnetic Properties]]>
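The quotient-based eigenvalue idea in the letter above can be illustrated, in much-simplified form, by minimizing a single Rayleigh quotient of a small symmetric matrix by steepest descent. Tsiper's dual-channel double-quotient scheme and the linear-scaling machinery are beyond this sketch, and the matrix here is random rather than a TDSCF operator:

```python
import numpy as np

def rayleigh_quotient_min(A, x0, lr=0.05, tol=1e-10, max_iter=20000):
    """Minimize rho(x) = x^T A x / x^T x by projected steepest descent
    on the unit sphere; for symmetric A this converges to an eigenpair
    (generically the lowest one from a random start)."""
    x = np.asarray(x0, dtype=float)
    x /= np.linalg.norm(x)
    for _ in range(max_iter):
        rho = x @ A @ x                  # current quotient (x has unit norm)
        grad = 2.0 * (A @ x - rho * x)   # gradient of rho on the sphere
        if np.linalg.norm(grad) < tol:
            break
        x = x - lr * grad
        x /= np.linalg.norm(x)           # project back onto the unit sphere
    return x @ A @ x, x

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = 0.5 * (B + B.T)                      # small symmetric test matrix
lam, vec = rayleigh_quotient_min(A, rng.standard_normal(6))
```

The variational character of the quotient is what makes truncation safe in the letter's formulation; the sketch shows only the underlying stationarity property, rho being stationary exactly at eigenvectors.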
http://www.mdpi.com/2079-3197/1/3/31
TM@ZniSi nanoclusters have been characterized by means of density functional theory, where the transition metal (TM) ranges from Y to Cd and i = 12 and 16. These two nanoclusters were chosen owing to their highly spheroidal shapes, which favor endohedral structures compared to other nanoclusters. Doping with TMs was chosen because of their magnetic properties. In similar cluster-assembled materials, these magnetic properties are related to the transition metal-transition metal (TM-TM) distances. Here, endohedral doping presents a clear advantage over substitutional or exohedral doping, since in the cluster-assembled materials the TM would occupy the well-fixed center of the cluster, giving experimentalists better control over the TM-TM distance. In addition to endohedral compounds, surface structures and the transition states (TSs) connecting both isomers have been characterized, allowing the kinetic and thermal stability of the endohedral nanoclusters to be predicted. We anticipate that silver and cadmium endohedrally doped nanoclusters have the longest lifetimes, owing to the weak interaction of these metals with the cage, in contrast to the remaining cases, where the TM bonds covalently to a region of the cage. The open-shell electronic structure of Ag provides magnetic properties to Ag@ZniSi clusters. Therefore, we have further characterized (Ag@Zn12S12)2 and (Ag@Zn16S16)2 dimers in both the ferromagnetic and antiferromagnetic states in order to calculate the corresponding magnetic exchange coupling constant, J. Computation, Vol. 1, Issue 3, pages 31–45, 2013-11-14. Article. ISSN 2079-3197. doi: 10.3390/computation1030031. Authors: Elisa Jimenez-Izal, Jon Matxain, Mario Piris, Jesus Ugalde.<![CDATA[Computation, Vol. 1, Pages 27-30: Computation: A New Open Access Journal of Computational Chemistry, Computational Biology and Computational Engineering]]>
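The magnetic exchange coupling constant J in the nanocluster study above is typically extracted from the total-energy difference between the ferromagnetic and antiferromagnetic solutions of the dimer. A minimal sketch under one common convention (Heisenberg Hamiltonian H = -2J S1·S2 with classical spins); the energies and spin below are hypothetical illustrations, not the paper's results:

```python
def exchange_coupling(E_fm, E_afm, S):
    """Exchange constant J for a spin dimer under H = -2 J S1.S2
    (one common convention), treating the spins classically:
    E_FM = -2 J S^2 and E_AFM = +2 J S^2, so J = (E_AFM - E_FM) / (4 S^2).
    J > 0 means the ferromagnetic alignment is lower in energy."""
    return (E_afm - E_fm) / (4.0 * S ** 2)

# Hypothetical total energies (eV) for an (Ag@Zn12S12)2-like dimer with S = 1/2
J = exchange_coupling(E_fm=-100.010, E_afm=-100.000, S=0.5)  # 0.010 eV = 10 meV
```

Sign and prefactor conventions for J vary between papers (H = -J S1·S2 and H = +J S1·S2 are also common), so an extracted value should always be quoted together with its Hamiltonian.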
http://www.mdpi.com/2079-3197/1/2/27
Computation (ISSN 2079-3197; http://www.mdpi.com/journal/computation) is an international scientific open access journal focusing on fundamental work in the field of computational science and engineering. Computational science has become essential in many research areas by contributing to solving complex problems, from fundamental science all the way to engineering. The very broad range of application domains suggests structuring this journal into three sections, which are briefly characterized below. In each section, further focus will be provided by occasionally organizing special issues on topics of high interest, collecting papers on fundamental work in the field. More applied papers should be submitted to the corresponding specialist journals. To help us achieve our goal with this journal, we have an excellent editorial board to advise us on the exciting current and future trends in computation, from methodology to application. We very much look forward to hearing all about the research going on across the world. [...] Computation, Vol. 1, Issue 2, pages 27–30, 2013-09-04. Editorial. ISSN 2079-3197. doi: 10.3390/computation1020027. Authors: Karlheinz Schwarz, Rainer Breitling, Christian Allen.<![CDATA[Computation, Vol. 1, Pages 16-26: Structural Features That Stabilize ZnO Clusters: An Electronic Structure Approach]]>
http://www.mdpi.com/2079-3197/1/1/16
We show that a simple approach to building small, computationally inexpensive clusters offers insights into specific structural motifs that stabilize the electronic structure of ZnO. All-electron calculations on ZniOi needle (i = 6, 9, 12, 15 and 18) and plate (i = 9 and 18) clusters within the density functional theory (DFT) formalism show a higher stability for ZnO needles that increases with length. Puckering of the rings to achieve a more wurtzite-like structure destabilizes the needles, although this destabilization is reduced for infinite needles (calculated using periodic boundary conditions). Calculations of density of states (DOS) curves and band gaps for finite clusters and infinite needles highlight opportunities for band-gap tuning through kinetic control of nanocrystal growth. Computation, Vol. 1, Issue 1, pages 16–26, 2013-05-31. Article. ISSN 2079-3197. doi: 10.3390/computation1010016. Authors: Csaba Szakacs, Erika Merschrod S., Kristin Poduska.<![CDATA[Computation, Vol. 1, Pages 1-15: Effect of Isotopic Substitution on Elementary Processes in Dye-Sensitized Solar Cells: Deuterated Amino-Phenyl Acid Dyes on TiO2]]>
http://www.mdpi.com/2079-3197/1/1/1
We present the first computational study of the effects of isotopic substitution on the operation of dye-sensitized solar cells. Ab initio molecular dynamics is used to study the effect of deuteration on light absorption, dye adsorption dynamics, the vibrationally averaged driving forces for injection (∆Gi) and regeneration (∆Gr), and the promotion of electron back-donation in the dyes NK1 (2E,4E-2-cyano-5-(4-dimethylaminophenyl)penta-2,4-dienoic acid) and NK7 (2E,4E-2-cyano-5-(4-diphenylaminophenyl)penta-2,4-dienoic acid) adsorbed in monodentate molecular and bidentate bridging dissociative configurations on the anatase (101) surface of TiO2. Deuteration causes a red shift of the absorption spectrum of the dye/TiO2 complex by about 5% (tens of nm), which can noticeably affect the overlap with the solar spectrum in real cells. The effect of the dynamics on the driving forces for injection and recombination (the difference between the averaged ⟨∆Gi,r⟩ and ∆Gi,r at the equilibrium configuration) is strong, yet there is surprisingly little isotopic effect: the average driving forces for injection ⟨∆Gi⟩ and regeneration ⟨∆Gr⟩ change by only about 10 meV upon deuteration. The nuclear dynamics enhance recombination to the dye ground state owing to the approach of the electron-donating group to TiO2, yet this effect is similar for deuterated and non-deuterated dyes. We conclude that the nuclear dynamics of the C-H(D) bonds, which are most affected by deuteration, might not be important for the operation of photoelectrochemical cells based on organic dyes. As the expectation value of the ground-state energy is higher than its optimum-geometry value (by up to 0.1 eV in the present case), nuclear motions will affect dye regeneration by recently proposed redox shuttle-dye combinations operating at low driving forces. Computation, Vol. 1, Issue 1, pages 1–15, 2013-03-11. Article. ISSN 2079-3197. doi: 10.3390/computation1010001. Authors: Sergei Manzhos, Hiroshi Segawa, Koichi Yamashita