Computation (ISSN 2079-3197)
http://www.mdpi.com/journal/computation
Latest open access articles published in Computation at http://www.mdpi.com/journal/computation

Computation, Vol. 4, Pages 32: Calculation of the Acoustic Spectrum of a Cylindrical Vortex in Viscous Heat-Conducting Gas Based on the Navier–Stokes Equations
http://www.mdpi.com/2079-3197/4/3/32
An extremely interesting problem in aero-hydrodynamics is the sound radiation of a single vortical structure. Currently, this type of problem is mainly considered for an incompressible medium. In this paper, a method is developed that takes into account the viscosity and thermal conductivity of the gas. The acoustic radiation frequency of a cylindrical vortex on a flat wall in a viscous heat-conducting gas (air) is investigated. The problem is solved on the basis of the Navier–Stokes equations using the small-initial-vorticity approach: the unknown functions are expanded in a power series in a small parameter (the vorticity). It is shown that there are high-frequency oscillations modulated by a low-frequency signal. The value of the high frequency remains constant over a long period of time, so it can be considered a natural frequency of the vortex radiation. The natural frequency depends only on the initial radius of the cylindrical vortex and does not depend on the intensity of the initial vorticity. As expected from physical considerations, the natural frequency decreases exponentially as the initial radius of the cylinder increases. Furthermore, the natural frequency differs between the oscillations inside the initial cylinder and those in the outer domain. The results may be of interest for aeroacoustics and tornado modeling.
Computation 2016, 4(3), 32; Article; doi: 10.3390/computation4030032; published 2016-08-20. Authors: Tatiana Petrova, Fedor Shugaev.

Computation, Vol. 4, Pages 31: Computational Analysis of Natural Ventilation Flows in Geodesic Dome Building in Hot Climates
http://www.mdpi.com/2079-3197/4/3/31
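The buoyancy-induced ventilation examined in the abstract below is often cross-checked against the textbook stack-effect estimate. A minimal sketch, where the discharge coefficient, opening area, height and temperatures are illustrative assumptions and not values from the paper:

```python
import math

def stack_flow_rate(area_m2, height_m, t_in_c, t_out_c, cd=0.6):
    """Volumetric flow rate (m^3/s) from the classic stack-effect formula:
    Q = Cd * A * sqrt(2 * g * H * (Ti - To) / To), temperatures in kelvin.
    Returns 0 when the indoor air is not warmer than the outdoor air."""
    g = 9.81
    t_in, t_out = t_in_c + 273.15, t_out_c + 273.15
    dt = t_in - t_out
    if dt <= 0:
        return 0.0
    return cd * area_m2 * math.sqrt(2 * g * height_m * dt / t_out)

# Hypothetical case: 0.5 m^2 roof opening, 6 m stack height,
# 32 degC indoors, 24 degC outdoors
q = stack_flow_rate(0.5, 6.0, 32.0, 24.0)
```

Such a hand estimate only gives an order of magnitude; the CFD study below resolves the actual airflow and temperature fields.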
For centuries, dome roofs have been used in traditional houses in hot regions such as the Middle East and the Mediterranean basin due to their thermal advantages, structural benefits and the availability of construction materials. This article presents the computational modelling of wind- and buoyancy-induced ventilation in a geodesic dome building in a hot climate. The airflow and temperature distributions and ventilation flow rates were predicted using Computational Fluid Dynamics (CFD). The three-dimensional Reynolds-Averaged Navier-Stokes (RANS) equations were solved using the CFD tool ANSYS FLUENT 15, with the standard k-epsilon turbulence model. The modelling was verified using grid sensitivity and flux balance analysis. To validate the modelling method, an additional simulation of a similar domed-roof building was conducted for comparison. For wind-induced ventilation, the dome building was modelled with upper roof vents. For buoyancy-induced ventilation, the geometry was modelled with roof vents and also with two windows open at the lower level. The results showed that using the upper roof openings as a natural ventilation strategy during winter periods is advantageous and could reduce the indoor temperature and introduce fresh air. The results also revealed that natural ventilation using roof vents cannot satisfy thermal requirements during hot summer periods, and complementary cooling solutions should be considered. The analysis showed that the buoyancy-induced ventilation model can still generate air movement inside the building during periods with no or very low wind.
Computation 2016, 4(3), 31; Article; doi: 10.3390/computation4030031; published 2016-08-17. Authors: Zohreh Soleimani, John Calautit, Ben Hughes.

Computation, Vol. 4, Pages 30: Electron Correlations in Local Effective Potential Theory
http://www.mdpi.com/2079-3197/4/3/30
Local effective potential theory, both stationary-state and time-dependent, constitutes the mapping from a system of electrons in an external field to one of noninteracting fermions possessing the same basic variable, such as the density, thereby enabling the determination of the energy and other properties of the electronic system. This paper is a description, via Quantal Density Functional Theory (QDFT), of the electron correlations that must be accounted for in such a mapping. It is proved through QDFT that, independent of the form of the external field, (a) it is possible to map to a model system possessing all the basic variables; and (b) with the requirement that the model fermions are subject to the same external fields, the only correlations that must be considered are those due to the Pauli exclusion principle, Coulomb repulsion, and Correlation–Kinetic effects. The cases of both a static and a time-dependent electromagnetic field, for which the basic variables are the density and the physical current density, are considered. The examples of solely an external electrostatic or time-dependent electric field constitute special cases. An efficacious unification in terms of electron correlations, independent of the type of external field, is thereby achieved. The mapping is explicated for the example of a quantum dot in a magnetostatic field, and for a quantum dot in a magnetostatic and time-dependent electric field.
Computation 2016, 4(3), 30; Article; doi: 10.3390/computation4030030; published 2016-08-16. Authors: Viraht Sahni, Xiao-Yin Pan, Tao Yang.

Computation, Vol. 4, Pages 29: DiamondTorre Algorithm for High-Performance Wave Modeling
http://www.mdpi.com/2079-3197/4/3/29
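The memory-bandwidth limitation discussed in the abstract below can be made concrete with a roofline-style bound. A sketch assuming a 7-point cross stencil on float32 data and an illustrative 320 GB/s GPU memory bandwidth (the bandwidth figure and traffic counts are assumptions, not numbers from the paper):

```python
def bandwidth_limited_rate(bytes_per_cell_update, bandwidth_bytes_per_s):
    """Upper bound on cell updates per second for a memory-bound sweep:
    cells cannot be updated faster than memory traffic allows."""
    return bandwidth_bytes_per_s / bytes_per_cell_update

BW = 320e9  # assumed GPU memory bandwidth, bytes/s

# Naive traversal: roughly 9 float32 values of traffic per update
# (7-point cross-stencil reads + previous time level + result write)
naive = bandwidth_limited_rate(9 * 4, BW)

# Perfect spatial caching: 3 streamed arrays (u, u_prev, u_next)
ideal = bandwidth_limited_rate(3 * 4, BW)

# A temporally blocked algorithm such as DiamondTorre reuses data across
# many time steps in registers/shared memory, so its update rate can
# exceed even the ideal single-sweep bound computed above.
```

With these assumed numbers the naive bound is on the order of 10^10 cells/s, which is why the 50 billion cells/s reported below requires breaking the one-sweep-per-step traffic pattern.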
Efficient algorithms for the numerical modeling of physical media are discussed. The computation rate of such problems is limited by memory bandwidth when they are implemented with traditional algorithms. The numerical solution of the wave equation is considered, using a finite difference scheme with a cross stencil and a high order of approximation. The DiamondTorre algorithm is constructed with regard to the specifics of the memory hierarchy and parallelism of the GPGPU (general-purpose graphics processing unit). The advantages of this algorithm are a high level of data localization and the property of asynchrony, which allows one to effectively utilize all levels of GPGPU parallelism. Its computational intensity is greater than that of the best traditional algorithms with stepwise synchronization; as a consequence, it becomes possible to overcome the above-mentioned limitation. The algorithm is implemented with CUDA. For the scheme with the second order of approximation, a calculation performance of 50 billion cells per second is achieved, which exceeds the result of the best traditional algorithm by a factor of five.
Computation 2016, 4(3), 29; Article; doi: 10.3390/computation4030029; published 2016-08-12. Authors: Vadim Levchenko, Anastasia Perepelkina, Andrey Zakirov.

Computation, Vol. 4, Pages 28: Highly Excited States from a Time Independent Density Functional Method
http://www.mdpi.com/2079-3197/4/3/28
A constrained optimized effective potential (COEP) methodology, proposed earlier by us for low-lying singly excited states, is extended to highly excited states having the same spatial and spin symmetry. Basic tenets of time independent density functional theory and its COEP implementation for excited states are briefly reviewed. The amended Kohn–Sham-like equations for excited state orbitals and their specific features for highly excited states are discussed. The accuracy of the method is demonstrated using exchange-only calculations for highly excited states of the He and Li atoms.
Computation 2016, 4(3), 28; Article; doi: 10.3390/computation4030028; published 2016-08-05. Authors: Vitaly Glushkov, Mel Levy.

Computation, Vol. 4, Pages 27: Automatic Generation of Massively Parallel Codes from ExaSlang
http://www.mdpi.com/2079-3197/4/3/27
Domain-specific languages (DSLs) have the potential to provide an intuitive interface for domain experts to specify problems and solutions. Based on such specifications, code generation frameworks can produce compilable source code. However, apart from optimizing execution performance, parallelization is key for pushing the limits in problem size and an essential ingredient for exascale performance. We discuss the concepts necessary for introducing such capabilities in code generators. In particular, those for partitioning the problem to be solved and accessing the partitioned data are elaborated. Furthermore, possible approaches to exposing parallelism to users through a given DSL are discussed. Moreover, we present the implementation of these concepts in the ExaStencils framework, within whose scope a code generation framework for highly optimized and massively parallel geometric multigrid solvers is developed. It uses specifications from its multi-layered external DSL ExaSlang as input. Based on a general version for generating parallel code, we develop and implement widely applicable extensions and optimizations. Finally, a performance study of generated applications is conducted on the JuQueen supercomputer.
Computation 2016, 4(3), 27; Article; doi: 10.3390/computation4030027; published 2016-08-04. Authors: Sebastian Kuckuk, Harald Köstler.

Computation, Vol. 4, Pages 26: Interaction of Hydrogen with Au Modified by Pd and Rh in View of Electrochemical Applications
http://www.mdpi.com/2079-3197/4/3/26
Hydrogen interaction with bimetallic Au(Pd) and Au(Rh) systems is studied with a density functional theory (DFT)-based periodic approach. Several bimetallic configurations with varying concentrations of Pd and Rh atoms in the underlayer of the gold (111) surface were considered. The reactivity of the doped Au(111) toward hydrogen adsorption and absorption was related to the property modifications induced by the presence of the metal dopants. DFT-computed quantities, such as the energy stability, the inter-atomic and inter-slab binding energies between gold and dopants, and the charge density, were used to infer the similarities and differences between the Pd and Rh dopants in these model alloys. Hydrogen penetration into the surface is favored in the bimetallic slab configurations. The underlayer dopants affect the reactivity of the surface gold toward hydrogen adsorption in the systems with a dopant underlayer covered by absorbed hydrogen up to a monolayer. This indicates the possibility of tuning the gold surface properties of bimetallic electrodes by modulating the degree of hydrogen coverage of the inner dopant layer(s).
Computation 2016, 4(3), 26; Article; doi: 10.3390/computation4030026; published 2016-07-20. Authors: Fernanda Juarez, German Soldano, Elizabeth Santos, Hazar Guesmi, Frederik Tielens, Tzonka Mineva.

Computation, Vol. 4, Pages 25: Predictions of Physicochemical Properties of Ionic Liquids with DFT
http://www.mdpi.com/2079-3197/4/3/25
Nowadays, density functional theory (DFT)-based high-throughput computational approaches are becoming more efficient and, thus, attractive for finding advanced materials for electrochemical applications. In this work, we illustrate how theoretical models, computational methods, and informatics techniques can be combined into a simple DFT-based high-throughput computational workflow for predicting the physicochemical properties of room-temperature ionic liquids. The developed workflow has been used to screen a set of 48 ionic pairs and to analyze the gathered data. The predicted relative electrochemical stabilities, ionic charges and dynamic properties of the investigated ionic liquids are discussed in light of their potential practical applications.
Computation 2016, 4(3), 25; Article; doi: 10.3390/computation4030025; published 2016-07-19. Authors: Karl Karu, Anton Ruzanov, Heigo Ers, Vladislav Ivaništšev, Isabel Lage-Estebanez, José García de la Vega.

Computation, Vol. 4, Pages 23: Orbital Energy-Based Reaction Analysis of SN2 Reactions
http://www.mdpi.com/2079-3197/4/3/23
An orbital energy-based reaction analysis theory is presented as an extension of the orbital-based conceptual density functional theory. In the orbital energy-based theory, the orbitals contributing to reactions are interpreted to be the valence orbitals giving the largest orbital energy variation from reactants to products. Reactions are taken to be electron transfer-driven when they provide small variations in the gaps between the contributing occupied and unoccupied orbital energies along the intrinsic reaction coordinates in the initial processes. The orbital energy-based theory is then applied to calculations of several SN2 reactions. Using a reaction path search method, the Cl− + CH3I → ClCH3 + I− reaction, for which an alternative reaction path called the “roundabout path” has been proposed, is found to have a precursor process similar to the roundabout path just before the SN2 reaction process. The orbital energy-based theory indicates that this precursor process is clearly driven by structural change, while the subsequent SN2 reaction proceeds through electron transfer between the contributing orbitals. Comparing the calculated results of the SN2 reactions in the gas phase and in aqueous solution shows that the contributing orbitals depend significantly on solvent effects and that these orbitals can be correctly determined by this theory.
Computation 2016, 4(3), 23; Article; doi: 10.3390/computation4030023; published 2016-07-08. Authors: Takao Tsuneda, Satoshi Maeda, Yu Harabuchi, Raman Singh.

Computation, Vol. 4, Pages 24: On the v-Representabilty Problem in Density Functional Theory: Application to Non-Interacting Systems
http://www.mdpi.com/2079-3197/4/3/24
Based on a computational procedure for determining the functional derivative, with respect to the density, of any antisymmetric N-particle wave function for a non-interacting system that leads to the density, we devise a test of whether or not a wave function known to lead to a given density corresponds to a solution of a Schrödinger equation for some potential. We examine explicitly the case of non-interacting systems described by Slater determinants. Numerical examples for a one-dimensional square-well potential with infinite walls and for the harmonic oscillator potential illustrate the formalism.
Computation 2016, 4(3), 24; Article; doi: 10.3390/computation4030024; published 2016-07-05. Authors: Markus Däne, Antonios Gonis.

Computation, Vol. 4, Pages 22: Online Adaptive Local-Global Model Reduction for Flows in Heterogeneous Porous Media
http://www.mdpi.com/2079-3197/4/2/22
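The offline POD stage described in the abstract below is, at its core, a truncated SVD of a snapshot matrix. A self-contained sketch with synthetic random snapshots; the 99.9% energy threshold is an illustrative choice, not a value from the paper:

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Truncated POD basis of a snapshot matrix (columns = solution states).

    Keeps the smallest number of left singular vectors capturing the
    requested fraction of the snapshot energy (sum of squared singular
    values)."""
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1
    return u[:, :r], s

rng = np.random.default_rng(0)
# Synthetic snapshots living (up to tiny noise) in a 3-dimensional subspace
modes = rng.standard_normal((200, 3))
coeffs = rng.standard_normal((3, 40))
snaps = modes @ coeffs + 1e-8 * rng.standard_normal((200, 40))
basis, s = pod_basis(snaps)
# For this data the recovered reduced dimension matches the true rank,
# and projecting the snapshots onto the basis reconstructs them closely.
```

The paper's contribution is what happens after this offline step: updating the basis online from local error indicators, which the sketch above does not attempt.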
We propose an online adaptive local-global POD-DEIM model reduction method for flows in heterogeneous porous media. The main idea of the proposed method is to use local online indicators to decide on the global update, which is performed via reduced-cost local multiscale basis functions. This local-global online combination allows (1) developing local indicators that are used for both local and global updates, and (2) computing global online modes via local multiscale basis functions. The multiscale basis functions consist of offline and some online local basis functions. The approach used for constructing the global reduced system is based on Proper Orthogonal Decomposition (POD) Galerkin projection. The nonlinearities are approximated by the Discrete Empirical Interpolation Method (DEIM). The online adaptation is performed by incorporating new data that become available at the online stage. Once the criterion for updates is satisfied, we adapt the reduced system online by changing the POD subspace and the DEIM approximation of the nonlinear functions. The main contribution of the paper is that the criterion for adaptation and the construction of the global online modes are based on local error indicators and local multiscale basis functions, which can be computed cheaply. Since the adaptation is performed infrequently, the new methodology does not add significant computational overhead associated with deciding when and how to adapt the reduced basis. Our approach is particularly useful for situations where it is desired to solve the reduced system for inputs or controls that result in a solution outside the span of the snapshots generated in the offline stage. Our method also offers an alternative for constructing a robust reduced system even if a potentially poor initial choice of snapshots is used. Applications to single-phase and two-phase flow problems demonstrate the efficiency of our method.
Computation 2016, 4(2), 22; Article; doi: 10.3390/computation4020022; published 2016-06-07. Authors: Yalchin Efendiev, Eduardo Gildin, Yanfang Yang.

Computation, Vol. 4, Pages 21: Pore-Network Modeling of Water and Vapor Transport in the Micro Porous Layer and Gas Diffusion Layer of a Polymer Electrolyte Fuel Cell
http://www.mdpi.com/2079-3197/4/2/21
On the cathode side of a polymer electrolyte fuel cell (PEFC), a micro porous layer (MPL) added between the catalyst layer (CL) and the gas diffusion layer (GDL) plays an important role in water management. In this work, water and vapor transport in the MPL and GDL is investigated using both quasi-static and dynamic pore-network models. We illustrate how the MPL improves water management in the cathode. Furthermore, it was found that dynamic liquid water transport in the GDL is very sensitive to the thermal gradient built up along the through-plane direction. Thus, water vapor condensation may be confined to the GDL-land interfaces by properly adjusting the GDL thermal conductivity. Our numerical results can provide guidelines for optimizing GDL pore structures for good water management.
Computation 2016, 4(2), 21; Article; doi: 10.3390/computation4020021; published 2016-05-30. Authors: Chao-Zhong Qin, S. Hassanizadeh, Lucas Van Oosterhout.

Computation, Vol. 4, Pages 20: On the Use of Benchmarks for Multiple Properties
http://www.mdpi.com/2079-3197/4/2/20
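The point made in the abstract below, that condensed benchmark indicators can hide important differences, is easy to illustrate: two hypothetical result sets can share the same mean absolute error yet have very different worst cases. All numbers here are invented for illustration:

```python
import numpy as np

def error_indicators(predicted, reference):
    """Summary indicators commonly used to condense benchmark data:
    mean error (ME), mean absolute error (MAE), maximum absolute error."""
    d = np.asarray(predicted, float) - np.asarray(reference, float)
    return {"ME": d.mean(), "MAE": np.abs(d).mean(), "MaxAE": np.abs(d).max()}

# Two hypothetical methods against the same reference values:
# their MAE is identical, but method b has a larger worst-case error.
a = error_indicators([5.61, 4.42, 3.95], [5.60, 4.40, 3.99])
b = error_indicators([5.66, 4.40, 4.00], [5.60, 4.40, 3.99])
```

Ranking by MAE alone would call the two methods equivalent; inspecting the full error distribution, as the article advocates, would not.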
Benchmark calculations provide a large amount of information that can be very useful in assessing the performance of density functional approximations and in choosing which one to use. To condense this information, summary indicators are provided. However, these indicators can be insufficient, and a more careful analysis is needed, as shown by several examples from an existing data set for cubic crystals.
Computation 2016, 4(2), 20; Article; doi: 10.3390/computation4020020; published 2016-04-30. Authors: Bartolomeo Civalleri, Roberto Dovesi, Pascal Pernot, Davide Presti, Andreas Savin.

Computation, Vol. 4, Pages 19: Kinetic and Exchange Energy Densities near the Nucleus
http://www.mdpi.com/2079-3197/4/2/19
We investigate the behavior of the kinetic and exchange energy densities near the nuclear cusp of atomic systems. Considering hydrogenic orbitals, we derive analytical expressions near the nucleus for single shells, as well as in the semiclassical limit of large non-relativistic neutral atoms. We show that a model based on the helium iso-electronic series is very accurate, as confirmed by numerical calculations on real atoms with up to two thousand electrons. Based on this model, we propose non-local density-dependent ingredients that are suitable for describing the kinetic and exchange energy densities in the region close to the nucleus. These non-local ingredients are invariant under uniform scaling of the density, and they can be used in the construction of non-local exchange-correlation and kinetic functionals.
Computation 2016, 4(2), 19; Article; doi: 10.3390/computation4020019; published 2016-04-02. Authors: Lucian Constantin, Eduardo Fabiano, Fabio Della Sala.

Computation, Vol. 4, Pages 18: Grand Canonical Monte Carlo Simulation of Nitrogen Adsorption in a Silica Aerogel Model
http://www.mdpi.com/2079-3197/4/2/18
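The GCMC method used in the article below rests on two Metropolis acceptance rules for trial particle insertions and deletions. A sketch of the standard textbook (Frenkel and Smit) form of these rules, not the hybrid-potential variant developed in the paper:

```python
import math

def gcmc_insertion_acceptance(beta, mu, volume, n, du, lam3):
    """Acceptance probability for inserting a particle in the grand
    canonical ensemble; du = U(N+1) - U(N), lam3 = cube of the thermal
    de Broglie wavelength, mu = chemical potential, beta = 1/(kB*T)."""
    return min(1.0, volume / (lam3 * (n + 1)) * math.exp(beta * (mu - du)))

def gcmc_deletion_acceptance(beta, mu, volume, n, du, lam3):
    """Acceptance probability for deleting a particle;
    du = U(N-1) - U(N)."""
    return min(1.0, lam3 * n / volume * math.exp(-beta * (mu + du)))
```

In a production code such as the one below, `du` comes from the fluid-fluid and fluid-solid interaction potentials evaluated at the trial position; here it is left as an input.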
In this paper, the Diffusion Limited Cluster Aggregation (DLCA) method is employed to reconstruct the three-dimensional network of a silica aerogel. Simulation of nitrogen adsorption at 77 K in the silica aerogel is then conducted with the Grand Canonical Monte Carlo (GCMC) method. To reduce the computational cost while maintaining accuracy, a continuous-discrete hybrid potential model, as well as an adsorbed-layer thickness estimation method, is employed. Four different structures are generated to investigate the impacts of specific surface area and porosity on adsorptive capacity. Good agreement with experimental results is found over a wide range of relative pressures, which supports the validity of the model. Specific surface area and porosity mainly affect nitrogen uptake at low pressure and high pressure, respectively.
Computation 2016, 4(2), 18; Article; doi: 10.3390/computation4020018; published 2016-04-01. Authors: Wen-Li Xie, Zheng-Ji Chen, Zeng Li, Wen-Quan Tao.

Computation, Vol. 4, Pages 17: Analytical Results on the Behavior of a Liquid Junction across a Porous Diaphragm or a Charged Porous Membrane between Two Solutions According to the Nernst–Planck Equation
http://www.mdpi.com/2079-3197/4/2/17
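The Henderson approximation that the article below improves upon fits in a few lines of code. A sketch using the common textbook form of the Henderson equation; the mobilities and concentrations in the example are illustrative values for an HCl junction, not data from the paper:

```python
import math

R, F = 8.314, 96485.0  # gas constant (J/(mol K)), Faraday constant (C/mol)

def henderson_potential(z, u, c_a, c_b, T=298.15):
    """Henderson approximation for the liquid-junction potential
    phi(beta) - phi(alpha), in volts.

    z: ion valences; u: ionic mobilities (any consistent units);
    c_a, c_b: concentrations of each ion on the alpha and beta sides."""
    num = sum(abs(zi) * ui / zi * (cb - ca)
              for zi, ui, ca, cb in zip(z, u, c_a, c_b))
    den = sum(abs(zi) * ui * (cb - ca)
              for zi, ui, ca, cb in zip(z, u, c_a, c_b))
    sa = sum(abs(zi) * ui * ca for zi, ui, ca in zip(z, u, c_a))
    sb = sum(abs(zi) * ui * cb for zi, ui, cb in zip(z, u, c_b))
    return (num / den) * (R * T / F) * math.log(sa / sb)

# HCl junction, 0.1 M | 0.01 M; mobilities roughly proportional to
# the limiting ionic conductivities of H+ and Cl-
e_j = henderson_potential([1, -1], [36.2, 7.9], [0.1, 0.1], [0.01, 0.01])
```

For a binary 1:1 electrolyte this reduces to the familiar (RT/F)(t+ − t−)ln(c_alpha/c_beta); the paper's analytical solutions go beyond this approximation for up to three valence classes.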
We model the behavior of an ideal liquid junction, across a porous and possibly charged medium between two ion-containing solutions, by means of the Nernst–Planck equation for the stationary state, under conditions of local electroneutrality. An analytical solution of the equation was found long ago by Planck for the uncharged junction with only ions of valences +1 and −1. Other analytical results, later obtained for more general situations, seem impractical for performing calculations. In this paper, we obtain analytical solutions for systems with up to three valence classes that can be applied in numerical calculations in a straightforward way. Our method provides much more information on the behavior of the system than the well-known Henderson approximation. At the same time, it is simpler, more reliable, and much less demanding in terms of computational effort than the numerical methods commonly employed nowadays, which are typically based on discrete integration and trial-and-error numerical inversions. We present some examples of practical applications of our results. In particular, we study the uphill transport (i.e., transport from the lower-concentration to the higher-concentration region) of a divalent cation in a liquid junction that also contains univalent anions and cations.
Computation 2016, 4(2), 17; Article; doi: 10.3390/computation4020017; published 2016-03-30. Authors: Massimo Marino, Doriano Brogioli.

Computation, Vol. 4, Pages 16: Current Issues in Finite-T Density-Functional Theory and Warm-Correlated Matter †
http://www.mdpi.com/2079-3197/4/2/16
Finite-temperature density functional theory (DFT) has become of topical interest, partly due to the increasing ability to create novel states of warm-correlated matter (WCM). Warm dense matter (WDM), ultra-fast matter (UFM), and high-energy density matter (HEDM) may all be regarded as subclasses of WCM. Strong electron-electron, ion-ion and electron-ion correlation effects and partial degeneracies are found in these systems, where the electron temperature Te is comparable to the electron Fermi energy EF. Thus, many electrons are in continuum states which are partially occupied. The ion subsystem may be solid, liquid or plasma, with many states of ionization with ionic charge Zj. Quasi-equilibria with the ion temperature Ti ≠ Te are common. The ion subsystem in WCM can no longer be treated as a passive “external potential”, as is customary in T = 0 DFT, which is dominated by solid-state theory and quantum chemistry. Many basic questions arise in trying to implement DFT for WCM. Hohenberg-Kohn-Mermin theory can be adapted for treating these systems if suitable finite-T exchange-correlation (XC) functionals can be constructed. These are functionals of both the one-body electron density ne and the one-body ion densities ρj, where j counts the species of nuclei or charge states. A method of approximately but accurately mapping the quantum electrons to a classical Coulomb gas enables one to treat electron-ion systems entirely classically, at any temperature and arbitrary spin polarization, using exchange-correlation effects calculated in situ, directly from the pair-distribution functions; this eliminates the need for any XC functionals. The classical map has been used to calculate the equation of state of WDM systems and to construct a finite-T XC functional that is found to be in close agreement with recent quantum path-integral simulation data. In this review, current developments and concerns in finite-T DFT are presented, especially in the context of non-relativistic warm dense matter and ultra-fast matter.
Computation 2016, 4(2), 16; Article; doi: 10.3390/computation4020016; published 2016-03-28. Author: M. Dharma-wardana.

Computation, Vol. 4, Pages 15: Bonding Strength Effects in Hydro-Mechanical Coupling Transport in Granular Porous Media by Pore-Scale Modeling
http://www.mdpi.com/2079-3197/4/1/15
The hydro-mechanical coupling transport process of sand production is numerically investigated, with special attention paid to the bonding effect between sand grains. By coupling the lattice Boltzmann method (LBM) and the discrete element method (DEM), we are able to capture particle movements and fluid flow simultaneously. To account for the bonding effects on sand production, a contact bond model is introduced into the LBM-DEM framework. Our simulations first reproduce the experimental observation that initial sand production is evoked by localized failure, and then show that bonding or cementation plays an important role in sand production: lower bonding strength leads to more sand production than higher bonding strength. It is also found that the influence of flow rate on sand production depends on the bonding strength of the cemented granular media; for a sample with low bonding strength, the higher the flow rate, the more severe the erosion in the localized failure zone.
Computation 2016, 4(1), 15; Article; doi: 10.3390/computation4010015; published 2016-03-07. Authors: Zhiqiang Chen, Chiyu Xie, Yu Chen, Moran Wang.

Computation, Vol. 4, Pages 14: Influence of the Localization of Ge Atoms within the Si(001)(4 × 2) Surface Layer on Semicore One-Electron States
http://www.mdpi.com/2079-3197/4/1/14
Adsorption complexes of germanium on the reconstructed Si(001)(4 × 2) surface have been simulated with the Si96Ge2H84 cluster. For Ge atoms located in the surface layer, DFT calculations (B3LYP/6-31G**) of their 3d semicore-level energies have shown a clear-cut correlation between the 3d5/2 chemical shifts and the mutual arrangement of the Ge atoms. The shift is positive when only one Ge atom penetrates into the crystalline substrate, and negative when both Ge atoms penetrate. We interpret these results in terms of the charge distribution in the clusters under consideration.
Computation 2016, 4(1), 14; Article; doi: 10.3390/computation4010014; published 2016-03-03. Authors: Olha Tkachuk, Maria Terebinskaya, Victor Lobanov, Alexei Arbuznikov.

Computation, Vol. 4, Pages 13: Direct Numerical Simulation of Turbulent Channel Flow on High-Performance GPU Computing System
http://www.mdpi.com/2079-3197/4/1/13
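The vortex-detection criterion used in the article below, the imaginary part of the complex eigenvalue pair of the velocity-gradient tensor (often called the swirling strength), can be evaluated pointwise in a few lines. The example tensor is a made-up rotation-plus-strain field, not DNS data:

```python
import numpy as np

def swirling_strength(grad_u):
    """Swirling strength: the imaginary part of the complex eigenvalue
    pair of the 3x3 velocity-gradient tensor. It is zero when all
    eigenvalues are real, i.e., when there is no local swirling motion."""
    eig = np.linalg.eigvals(grad_u)
    return float(np.max(np.abs(eig.imag)))

# Solid-body rotation of rate 2 about the z axis, plus a weak strain field;
# the 2x2 rotation block yields the eigenvalue pair 0.1 +/- 2i.
g = np.array([[0.1, -2.0, 0.0],
              [2.0, 0.1, 0.0],
              [0.0, 0.0, -0.2]])
lam_ci = swirling_strength(g)   # equals the rotation rate, 2.0
```

In a DNS post-processing pass this function would be applied to the fluctuating velocity-gradient tensor at every grid point, and vortical structures identified as connected regions where the swirling strength exceeds a threshold.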
The flow of a viscous fluid in a plane channel is simulated numerically following the DNS approach, using a computational code for the numerical integration of the Navier-Stokes equations implemented on a hybrid CPU/GPU computing architecture (for the meaning of symbols and acronyms used, one can refer to the Nomenclature). Three turbulent-flow databases, each representing the turbulent statistically-steady state of the flow at a different value of the Reynolds number, are built up, and a number of statistical moments of the fluctuating velocity field are computed. For the investigation of turbulent-flow structure, the vortex-detection technique based on the imaginary part of the complex eigenvalue pair of the velocity-gradient tensor is applied to the fluctuating-velocity fields. As a result, hairpin vortical structures, among other types, are unveiled. The processes of evolution that characterize the hairpin vortices in the near-wall region of the turbulent channel are investigated, in particular at one of the three Reynolds numbers tested, with specific attention given to the relationship between the dynamics of the vortical structures and the occurrence of ejection and sweep quadrant events. Interestingly, it is found that the latter events play a preeminent role in the way in which the morphological evolution of a hairpin vortex develops over time, in particular as related to the establishment of symmetric and persistent hairpins. The present results have been obtained from a database that incorporates genuine DNS solutions of the Navier-Stokes equations, without superposition of any synthetic structures in the form of initial and/or boundary conditions for the simulations.
Computation 2016, 4(1), 13; Article; doi: 10.3390/computation4010013; published 2016-02-26. Authors: Giancarlo Alfonsi, Stefania Ciliberti, Marco Mancini, Leonardo Primavera.

Computation, Vol. 4, Pages 12: Contact Angle Effects on Pore and Corner Arc Menisci in Polygonal Capillary Tubes Studied with the Pseudopotential Multiphase Lattice Boltzmann Model
http://www.mdpi.com/2079-3197/4/1/12
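The critical contact angle θc discussed in the abstract below follows from the Concus-Finn corner condition; for a regular n-sided tube it reduces to a one-liner. This is a geometric sketch consistent with the square and triangular tubes the paper simulates:

```python
def critical_contact_angle_deg(n_sides):
    """Critical contact angle (degrees) for corner arc menisci in a regular
    polygonal tube. The Concus-Finn condition theta + alpha < 90 deg, with
    corner half-angle alpha = 90 - 180/n for a regular n-gon, gives
    theta_c = 180/n."""
    return 180.0 / n_sides

def has_corner_menisci(theta_deg, n_sides):
    """Corner arc menisci can exist only below the critical angle."""
    return theta_deg < critical_contact_angle_deg(n_sides)

# Square tube: theta_c = 45 deg; equilateral triangular tube: theta_c = 60 deg
```

This reproduces the two regimes the LBM simulations below distinguish: a pore meniscus for θ > θc and corner arc menisci for θ < θc.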
In porous media, pore geometry and wettability are determinant factors for capillary flow in drainage or imbibition. Pores are often idealized as cylindrical tubes in analytical or computational studies, a simplification that prevents the capture of phenomena occurring in pore corners. Considering the corners of pores is crucial to realistically study capillary flow and to accurately estimate liquid distribution, degree of saturation and dynamic liquid behavior in pores and in porous media. In this study, capillary flow in polygonal tubes is studied with the Shan-Chen pseudopotential multiphase lattice Boltzmann model (LBM). The LB model is first validated through a contact angle test and a capillary intrusion test. Then capillary rise in square and triangular tubes is simulated, and the pore meniscus height is investigated as a function of the contact angle θ. The occurrence of fluid in the tube corners, referred to as corner arc menisci, is also studied in terms of curvature versus degree of saturation. In polygonal capillary tubes, the number of sides leads to a critical contact angle θc, which is known as a key parameter for the existence of the two configurations. LBM succeeds in simulating the formation of a pore meniscus at θ > θc and the occurrence of corner arc menisci at θ < θc. The curvature of corner arc menisci is known to decrease with increasing saturation and decreasing contact angle, as described by the Mayer–Stowe–Princen (MS-P) theory. Our simulation results are in good qualitative and quantitative agreement with the analytical solutions, in terms of the height of the pore meniscus versus contact angle and the curvature of corner arc menisci versus saturation degree. LBM is a suitable and promising tool for a better understanding of the complicated phenomena of multiphase flow in porous media.
Computation 2016, 4(1), 12; Article; doi: 10.3390/computation4010012; published 2016-02-20. Authors: Soyoun Son, Li Chen, Qinjun Kang, Dominique Derome, Jan Carmeliet.

Computation, Vol. 4, Pages 11: Enhancing Computational Precision for Lattice Boltzmann Schemes in Porous Media Flows
http://www.mdpi.com/2079-3197/4/1/11
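The precision issue addressed in the article below is catastrophic cancellation: near equilibrium the populations are dominated by the constant lattice weights, so velocity moments computed from the full populations in single precision lose most of their significant digits, while moments of the stored deviations do not. A minimal numerical sketch of this deviation-storage idea using D2Q9 weights and a synthetic deviation field (not the paper's MRT scheme):

```python
import numpy as np

# D2Q9 lattice: weights and x-components of the discrete velocities
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
cx = np.array([0, 1, 0, -1, 0, 1, -1, -1, 1], dtype=np.float64)

# Near-equilibrium population: a tiny deviation h on top of the weights,
# constructed to carry a small x-momentum (note: sum of cx*w is zero)
h = 1e-7 * cx * w
f = w + h

ux_exact = np.dot(cx, h)  # the momentum lives entirely in the deviation
ux_full32 = np.dot(cx.astype(np.float32), f.astype(np.float32))
ux_dev32 = np.dot(cx.astype(np.float32), h.astype(np.float32))
# ux_dev32 matches ux_exact to single-precision *relative* accuracy,
# whereas ux_full32 cancels order-0.1 terms and loses most digits.
```

Storing deviations from a reference equilibrium is the essence of the transformation reassessed below; the paper adds the implementation details needed to make it effective in multiple-relaxation-time porous-media simulations.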
We reassess a method for increasing the computational accuracy of lattice Boltzmann schemes by a simple transformation of the distribution function, originally proposed by Skordos, which was found to give only a marginal increase in accuracy in the original paper. We restate the method, give further important implementation considerations that were missed in the original work, and show that this method can in fact enhance the precision of velocity-field calculations by orders of magnitude; unlike the usual LB approach, it does not lose accuracy when velocities are small. The analysis is framed within the multiple-relaxation-time method for porous media flows; however, the approach extends directly to other lattice Boltzmann schemes. First, we compute the flow between parallel plates and compare the error against the analytical profile for the traditional approach and the transformed scheme, using single (4-byte) and double (8-byte) precision. Then we compute the flow inside a complex-structured porous medium and show that the traditional approach in single precision leads to large, systematic errors compared to double precision, whereas the transformed approach avoids this issue whilst maintaining all the computational efficiency benefits of using single precision.
Computation 2016, 4(1), 11; Article; doi: 10.3390/computation4010011; published 2016-02-17. Authors: Farrel Gray, Edo Boek.

Computation, Vol. 4, Pages 9: A New Method to Infer Advancement of Saline Front in Coastal Groundwater Systems by 3D: The Case of Bari (Southern Italy) Fractured Aquifer
http://www.mdpi.com/2079-3197/4/1/9
A new method to study 3D saline front advancement in coastal fractured aquifers is presented. Field groundwater salinity was measured in boreholes of the Bari (Southern Italy) coastal aquifer as a function of depth below the water table. Then, the Ghyben-Herzberg freshwater/saltwater (50%) sharp interface and the saline front position were determined by model simulations of the freshwater flow in groundwater. Afterward, a best-fit procedure relating groundwater salinity measurements in boreholes, at an assigned water depth of 1.0 m, to the distance of each borehole from the modelled freshwater/saltwater front was used to convert each position (x, y) in groundwater into a water salinity concentration at a depth of 1.0 m. Moreover, a second best-fit procedure was applied to the salinity measurements in boreholes as a function of depth z. These results provided a grid file (x, y, z, salinity) suitable for plotting the actual Bari aquifer salinity as 3D maps. Subsequently, in order to assess the effects of pumping on the saltwater-freshwater transition zone in the coastal aquifer, the Navier-Stokes (N-S) equations were applied to study transient density-driven flow and salt mass transport into the freshwater of a single fracture. The rate of seawater/freshwater interface advancement given by the N-S solution was used to define the progression of the saline front in Bari groundwater, starting from the actual salinity 3D map. The impact of pumping at 335 L·s−1 during the transition period of 112.8 days was clearly highlighted on the 3D salinity maps of the Bari aquifer.Computation2016-02-1641Article10.3390/computation401000992079-31972016-02-16doi: 10.3390/computation4010009Costantino MasciopintoDomenico Palmiotta<![CDATA[Computation, Vol. 4, Pages 8: CFD Simulation and Experimental Analyses of a Copper Wire Woven Heat Exchanger Design to Improve Heat Transfer and Reduce the Size of Adsorption Beds]]>
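The Ghyben-Herzberg sharp-interface relation used in the first modelling step fixes the depth of the 50% freshwater/saltwater interface below sea level from the freshwater head above sea level. A minimal sketch of that relation (the density values are typical assumptions, not taken from the paper):

```python
def ghyben_herzberg_depth(head, rho_f=1000.0, rho_s=1025.0):
    """Depth (m below sea level) of the sharp freshwater/saltwater
    interface for a freshwater head `head` (m above sea level):
    z = rho_f / (rho_s - rho_f) * head."""
    return rho_f / (rho_s - rho_f) * head

# With these typical densities the interface sits ~40x the freshwater
# head below sea level (the classical "40:1 rule").
depth = ghyben_herzberg_depth(0.5)
```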
http://www.mdpi.com/2079-3197/4/1/8
The chief objective of this study is the design and computational fluid dynamics (CFD) simulation of a new compact copper wire woven fin heat exchanger and silica gel adsorbent bed used as part of an adsorption refrigeration system. This type of heat exchanger design has a large surface area because of the wire woven fin design. It is estimated that this will help improve the coefficient of performance (COP) of the adsorption phase and increase the heat transfer in this system arrangement. To study the heat transfer between the fins and the porous adsorbent reactor bed, two experiments were carried out and matched to the CFD results.Computation2016-02-0641Review10.3390/computation401000882079-31972016-02-06doi: 10.3390/computation4010008John White<![CDATA[Computation, Vol. 4, Pages 7: Applications of Computational Modelling and Simulation of Porous Medium in Tissue Engineering]]>
http://www.mdpi.com/2079-3197/4/1/7
In tissue engineering, porous biodegradable scaffolds are used as templates for regenerating required tissues. With the advances in computational tools, many modeling approaches have been considered. For example, fluid flow through porous medium can be modeled using the Brinkman equation where permeability of the porous medium has to be defined. In this review, we summarize various models recently reported for defining permeability and non-invasive pressure drop monitoring as a tool to validate dynamic changes in permeability. We also summarize some models used for scaffold degradation and integrating mass transport in the simulation.Computation2016-02-0641Article10.3390/computation401000772079-31972016-02-06doi: 10.3390/computation4010007Carrie GermanSundararajan Madihally<![CDATA[Computation, Vol. 4, Pages 10: Localized Polycentric Orbital Basis Set for Quantum Monte Carlo Calculations Derived from the Decomposition of Kohn-Sham Optimized Orbitals]]>
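Reviews of this kind typically cover closed-form permeability models that feed the Brinkman equation. One classical example is the Kozeny-Carman relation for a packed bed; the sketch below is an illustrative choice under that assumption, not necessarily one of the models surveyed in this review:

```python
def kozeny_carman(porosity, particle_diameter):
    """Classical Kozeny-Carman permeability (same units as diameter^2)
    for a bed of spheres: k = d^2 * eps^3 / (180 * (1 - eps)^2)."""
    eps = porosity
    d = particle_diameter
    return (eps**3 / (1.0 - eps)**2) * d**2 / 180.0

# Example: 50% porosity, unit particle diameter.
k = kozeny_carman(0.5, 1.0)
```

Such a scalar (or tensor) permeability is what the Brinkman model consumes; dynamic changes in it, e.g. due to scaffold degradation, are what the non-invasive pressure-drop monitoring discussed above is meant to track.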
http://www.mdpi.com/2079-3197/4/1/10
In this work, we present a simple decomposition scheme of the Kohn-Sham optimized orbitals which is able to provide a reduced basis set, made of localized polycentric orbitals, specifically designed for Quantum Monte Carlo. The decomposition follows a standard Density functional theory (DFT) calculation and is based on atomic connectivity and shell structure. The new orbitals are used to construct a compact correlated wave function of the Slater–Jastrow form which is optimized at the Variational Monte Carlo level and then used as the trial wave function for a final Diffusion Monte Carlo accurate energy calculation. We are able, in this way, to capture the basic information on the real system brought by the Kohn-Sham orbitals and use it for the calculation of the ground state energy within a strictly variational method. Here, we show test calculations performed on some small selected systems to assess the validity of the proposed approach in a molecular fragmentation, in the calculation of a barrier height of a chemical reaction and in the determination of intermolecular potentials. The final Diffusion Monte Carlo energies are in very good agreement with the best literature data within chemical accuracy.Computation2016-02-0641Article10.3390/computation4010010102079-31972016-02-06doi: 10.3390/computation4010010Claudio AmovilliFranca FlorisAndrea Grisafi<![CDATA[Computation, Vol. 4, Pages 6: Computation of the Likelihood of Joint Site Frequency Spectra Using Orthogonal Polynomials]]>
http://www.mdpi.com/2079-3197/4/1/6
In population genetics, information about evolutionary forces, e.g., mutation, selection and genetic drift, is often inferred from DNA sequence information. Generally, DNA consists of two long strands of nucleotides or sites that pair via the complementary bases cytosine and guanine (C and G), on the one hand, and adenine and thymine (A and T), on the other. With whole genome sequencing, most genomic information stored in the DNA has become available for multiple individuals of one or more populations, at least in humans and model species, such as fruit flies of the genus Drosophila. In a genome-wide sample of L sites for M (haploid) individuals, the state of each site may be made binary, by binning the complementary bases, e.g., C with G to C/G, and contrasting C/G to A/T, to obtain a “site frequency spectrum” (SFS). Two such samples of either a single population from different time-points or two related populations from a single time-point are called joint site frequency spectra (joint SFS). While mathematical models describing the interplay of mutation, drift and selection have been available for more than 80 years, calculation of exact likelihoods from joint SFS is difficult. Sufficient statistics for inference of, e.g., mutation or selection parameters that would make use of all the information in the genomic data are rarely available. Hence, often suites of crude summary statistics are combined in simulation-based computational approaches. In this article, we use a bi-allelic boundary-mutation and drift population genetic model to compute the transition probabilities of joint SFS using orthogonal polynomials. This allows inference of population genetic parameters, such as the mutation rate (scaled by the population size) and the time separating the two samples. 
We apply this inference method to a population dataset of neutrally-evolving short intronic sites from six DNA sequences of the fruit fly Drosophila melanogaster and the reference sequence of the related species Drosophila sechellia.Computation2016-02-0441Article10.3390/computation401000662079-31972016-02-04doi: 10.3390/computation4010006Claus VoglJuraj Bergman<![CDATA[Computation, Vol. 4, Pages 5: Extracting Conformational Ensembles of Small Molecules from Molecular Dynamics Simulations: Ampicillin as a Test Case]]>
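The binarization step described in the preceding abstract (binning complementary bases and counting C/G alleles per site across M haploid sequences) can be sketched as follows; the function name and the toy data are illustrative assumptions:

```python
from collections import Counter

def site_frequency_spectrum(allele_counts, m):
    """Build an SFS from per-site C/G allele counts in a sample of m
    haploid sequences: entry k = number of sites at which exactly k of
    the m sequences carry the C/G state."""
    counts = Counter(allele_counts)
    return [counts.get(k, 0) for k in range(m + 1)]

# Toy sample: 5 sites scored for m = 4 haploid sequences.
sfs = site_frequency_spectrum([0, 1, 1, 4, 2], m=4)
```

A joint SFS, as used in the paper, is the analogous two-dimensional table built from two such samples.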
http://www.mdpi.com/2079-3197/4/1/5
The accurate and exhaustive description of the conformational ensemble sampled by small molecules in solution, possibly at different physiological conditions, is of primary interest in many fields of medicinal chemistry and computational biology. Recently, we have built an on-line database of compounds with antimicrobial properties, where we provide all-atom force-field parameters and a set of molecular properties, including representative structures extracted from cluster analysis over μs-long molecular dynamics (MD) trajectories. In the present work, we used a medium-sized antibiotic from our sample, namely ampicillin, to assess the quality of the conformational ensemble. To this aim, we compared the conformational landscape extracted from previous unbiased MD simulations to those obtained by means of Replica Exchange MD (REMD) and those originating from three freely-available conformer generation tools widely adopted in computer-aided drug-design. In addition, for different charge/protonation states of ampicillin, we made available force-field parameters and static/dynamic properties derived from both Density Functional Theory and MD calculations. For the specific system investigated here, we found that: (i) the conformational statistics extracted from plain MD simulations is consistent with that obtained from REMD simulations; (ii) overall, our MD-based approach performs slightly better than any of the conformer generator tools if one takes into account both the diversity of the generated conformational set and the ability to reproduce experimentally-determined structures.Computation2016-01-2641Article10.3390/computation401000552079-31972016-01-26doi: 10.3390/computation4010005Giuliano MallociGiovanni SerraAndrea BosinAttilio Vargiu<![CDATA[Computation, Vol. 4, Pages 4: Acknowledgement to Reviewers of Computation in 2015]]>
http://www.mdpi.com/2079-3197/4/1/4
The editors of Computation would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2015. [...]Computation2016-01-2241Editorial10.3390/computation401000442079-31972016-01-22doi: 10.3390/computation4010004 Computation Editorial Office<![CDATA[Computation, Vol. 4, Pages 3: A Test of Various Partial Atomic Charge Models for Computations on Diheteroaryl Ketones and Thioketones]]>
http://www.mdpi.com/2079-3197/4/1/3
The effective use of partial atomic charge models is essential for such purposes in molecular computations as a simplified representation of global charge distribution in a molecule and predicting its conformational behavior. In this work, ten of the most popular models of partial atomic charge are taken into consideration, and these models operate on the molecular wave functions/electron densities of five diheteroaryl ketones and their thiocarbonyl analogs. The ten models are tested in order to assess their usefulness in achieving the aforementioned purposes for the compounds in title. Therefore, the following criteria are used in the test: (1) how accurately these models reproduce the molecular dipole moments of the conformers of the investigated compounds; (2) whether these models are able to correctly determine the preferred conformer as well as the ordering of higher-energy conformers for each compound. The results of the test indicate that the Merz-Kollman-Singh (MKS) and Hu-Lu-Yang (HLY) models approximate the magnitude of the molecular dipole moments with the greatest accuracy. The natural partial atomic charges perform best in determining the conformational behavior of the investigated compounds. These findings may constitute important support for the effective computations of electrostatic effects occurring within and between the molecules of the compounds in question as well as similar compounds.Computation2016-01-1941Article10.3390/computation401000332079-31972016-01-19doi: 10.3390/computation4010003Piotr Matczak<![CDATA[Computation, Vol. 4, Pages 2: Modeling Groundwater Flow in Heterogeneous Porous Media with YAGMod]]>
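Criterion (1) in the preceding abstract rests on the fact that a set of partial atomic charges determines a molecular dipole vector directly, via mu = sum_i q_i r_i (origin-independent only for a neutral molecule). A minimal sketch; the two-point-charge "molecule" is hypothetical:

```python
import numpy as np

def dipole_moment(charges, coords):
    """Molecular dipole vector from partial atomic charges:
    mu = sum_i q_i r_i (origin-independent for a neutral molecule)."""
    q = np.asarray(charges, dtype=float)
    r = np.asarray(coords, dtype=float)
    return q @ r

# Hypothetical neutral pair: +0.5 and -0.5 charges, unit separation.
mu = dipole_moment([0.5, -0.5], [[0.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
```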
http://www.mdpi.com/2079-3197/4/1/2
Modeling flow and transport in porous media requires the management of complexities related both to physical processes and to subsurface heterogeneity. A thorough approach needs a great number of spatially-distributed phenomenological parameters, which are seldom measured in the field. For instance, modeling a phreatic aquifer under high water extraction rates is very challenging, because it requires the simulation of variably-saturated flow. 3D steady groundwater flow is modeled with YAGMod (yet another groundwater flow model), a model based on a finite-difference conservative scheme and implemented in a computer code developed in Fortran90. YAGMod simulates also the presence of partially-saturated or dry cells. The proposed algorithm and other alternative methods developed to manage dry cells in the case of depleted aquifers are analyzed and compared to a simple test. Different approaches yield different solutions, among which, it is not possible to select the best one on the basis of physical arguments. A possible advantage of YAGMod is that no additional non-physical parameter is needed to overcome the numerical difficulties arising to handle drained cells. YAGMod also includes a module that allows one to identify the conductivity field for a phreatic aquifer by solving an inverse problem with the comparison model method.Computation2015-12-2941Article10.3390/computation401000222079-31972015-12-29doi: 10.3390/computation4010002Laura CattaneoAlessandro ComunianGiovanna de FilippisMauro GiudiciChiara Vassena<![CDATA[Computation, Vol. 4, Pages 1: Reduced Numerical Model for Methane Hydrate Formation under Conditions of Variable Salinity. Time-Stepping Variants and Sensitivity]]>
http://www.mdpi.com/2079-3197/4/1/1
In this paper, we consider a reduced computational model of methane hydrate formation in variable salinity conditions, and give details on the discretization and phase equilibria implementation. We describe three time-stepping variants: Implicit, Semi-implicit, and Sequential, and we compare the accuracy and efficiency of these variants depending on the spatial and temporal discretization parameters. We also study the sensitivity of the model to the simulation parameters and in particular to the reduced phase equilibria model.Computation2015-12-2441Article10.3390/computation401000112079-31972015-12-24doi: 10.3390/computation4010001Malgorzata PeszynskaFrancis MedinaWei-Li HongMarta Torres<![CDATA[Computation, Vol. 3, Pages 701-713: Exact Likelihood Calculation under the Infinite Sites Model]]>
http://www.mdpi.com/2079-3197/3/4/701
A key parameter in population genetics is the scaled mutation rate θ = 4 N μ , where N is the effective haploid population size and μ is the mutation rate per haplotype per generation. While exact likelihood inference is notoriously difficult in population genetics, we propose a novel approach to compute a first order accurate likelihood of θ that is based on dynamic programming under the infinite sites model without recombination. The parameter θ may be either constant, i.e., time-independent, or time-dependent, which allows for changes of demography and deviations from neutral equilibrium. For time-independent θ, the performance is compared to the approach in Griffiths and Tavaré’s work “Simulating Probability Distributions in the Coalescent” (Theor. Popul. Biol. 1994, 46, 131–159) that is based on importance sampling and implemented in the “genetree” program. Roughly, the proposed method is computationally fast when n × θ &lt; 100 , where n is the sample size. For time-dependent θ ( t ) , we analyze a simple demographic model with a single change in θ ( t ) . In this case, the ancestral and current θ need to be estimated, as well as the time of change. To our knowledge, this is the first accurate computation of a likelihood in the infinite sites model with non-equilibrium demography.Computation2015-12-1134Article10.3390/computation30407017017132079-31972015-12-11doi: 10.3390/computation3040701Muhammad FaisalAndreas FutschikClaus Vogl<![CDATA[Computation, Vol. 3, Pages 687-700: Molecular Simulation of Shale Gas Adsorption and Diffusion in Clay Nanopores]]>
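The two quantitative statements in the abstract above, the definition θ = 4Nμ and the rough feasibility bound n × θ < 100, can be written down directly; the function names are illustrative:

```python
def scaled_mutation_rate(n_eff, mu):
    """theta = 4 * N * mu, with N the effective haploid population size
    and mu the mutation rate per haplotype per generation."""
    return 4.0 * n_eff * mu

def is_fast_regime(n_sample, theta, limit=100.0):
    """Rough feasibility bound quoted in the abstract: the proposed
    dynamic-programming method is computationally fast when
    n * theta < 100."""
    return n_sample * theta < limit

theta = scaled_mutation_rate(1e6, 2.5e-9)
```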
http://www.mdpi.com/2079-3197/3/4/687
The present work aims to study the adsorption behavior and dynamical properties of CH4 in clay slit pores, with or without cation exchange structures, at sizes of 1.0 nm–4.0 nm using grand canonical Monte Carlo (GCMC) and molecular dynamics (MD) methods. The adsorption isotherms of CH4 have been investigated by GCMC simulations at different temperatures and various pore sizes. In the montmorillonite (MMT) clays without a cation exchange structure, the density profiles show that the molecules preferentially adsorb onto the surface, and only a single pronounced layer is observed. The general trend within slit pores is that the adsorbed amount increases with increasing pore width. However, larger pores exhibit lower excess density and smaller pores exhibit higher excess density. Preloaded water reduces CH4 sorption. The in-plane self-diffusion coefficient of CH4, investigated by MD simulations combined with the Einstein relation, increases rapidly with increasing pore size at low pressure. Under the given conditions, temperature has little influence on the in-plane self-diffusion coefficient. In the MMT clays with a cation exchange structure, cation exchange has little effect on CH4 adsorption and self-diffusion.Computation2015-12-1134Article10.3390/computation30406876877002079-31972015-12-11doi: 10.3390/computation3040687Hongguang SuiJun YaoLei Zhang<![CDATA[Computation, Vol. 3, Pages 670-686: Multiscale Simulations for Coupled Flow and Transport Using the Generalized Multiscale Finite Element Method]]>
http://www.mdpi.com/2079-3197/3/4/670
In this paper, we develop a mass conservative multiscale method for coupled flow and transport in heterogeneous porous media. We consider a coupled system consisting of a convection-dominated transport equation and a flow equation. We construct a coarse grid solver based on the Generalized Multiscale Finite Element Method (GMsFEM) for a coupled system. In particular, multiscale basis functions are constructed based on some snapshot spaces for the pressure and the concentration equations and some local spectral decompositions in the snapshot spaces. The resulting approach uses a few multiscale basis functions in each coarse block (for both the pressure and the concentration) to solve the coupled system. We use the mixed framework, which allows mass conservation. Our main contributions are: (1) the development of a mass conservative GMsFEM for the coupled flow and transport; (2) the development of a robust multiscale method for convection-dominated transport problems by choosing appropriate test and trial spaces within Petrov-Galerkin mixed formulation. We present numerical results and consider several heterogeneous permeability fields. Our numerical results show that with only a few basis functions per coarse block, we can achieve a good approximation.Computation2015-12-1134Article10.3390/computation30406706706862079-31972015-12-11doi: 10.3390/computation3040670Eric ChungYalchin EfendievWing LeungJun Ren<![CDATA[Computation, Vol. 3, Pages 657-669: Optical Properties of Silicon-Rich Silicon Nitride (SixNyHz) from First Principles]]>
http://www.mdpi.com/2079-3197/3/4/657
The real and imaginary parts of the complex refractive index of SixNyHz have been calculated from first principles. Optical spectra for reflectivity, absorption coefficient, energy-loss function (ELF), and refractive index were obtained. The results for Si3N4 are in agreement with the available theoretical and experimental results. To understand the electron energy loss mechanism in Si-rich silicon nitride, the influence on the ELF of the Si/N ratio, the positions of the excess Si atoms, and H in the bulk and on the surface has been investigated. It has been found that all defects, such as dangling bonds in the bulk and on surfaces, increase the intensity of the ELF in the low energy range (below 10 eV). H in the bulk and on the surface has a healing effect, which can reduce the intensity of the loss peaks by saturating the dangling bonds. Electronic structure analysis has confirmed the origin of the changes in the ELF. It has been demonstrated that the changes in the ELF are affected not only by the composition but also by the microstructure of the materials. The results can be used to tailor the optical properties, in this case the ELF of Si-rich Si3N4, which is essential for secondary electron emission applications.Computation2015-12-0834Article10.3390/computation30406576576692079-31972015-12-08doi: 10.3390/computation3040657Shu TaoAnne TheulingsVioleta ProdanovićJohn SmedleyHarry van der Graaf<![CDATA[Computation, Vol. 3, Pages 616-656: Assessment of Density-Functional Tight-Binding Ionization Potentials and Electron Affinities of Molecules of Interest for Organic Solar Cells Against First-Principles GW Calculations]]>
http://www.mdpi.com/2079-3197/3/4/616
Ionization potentials (IPs) and electron affinities (EAs) are important quantities input into most models for calculating the open-circuit voltage (Voc) of organic solar cells. We assess the semi-empirical density-functional tight-binding (DFTB) method with the third-order self-consistent charge (SCC) correction and the 3ob parameter set (the third-order DFTB (DFTB3) organic and biochemistry parameter set) against experiments (for smaller molecules) and against first-principles GW (Green’s function, G, times the screened potential, W) calculations (for larger molecules of interest in organic electronics) for the calculation of IPs and EAs. Since GW calculations are relatively new for molecules of this size, we have also taken care to validate these calculations against experiments. As expected, DFTB is found to behave very much like density-functional theory (DFT), but with some loss of accuracy in predicting IPs and EAs. For small molecules, the best results were found with ΔSCF (Δ self-consistent field) SCC-DFTB calculations for first IPs (good to ± 0.649 eV). When considering several IPs of the same molecule, it is convenient to use the negative of the orbital energies (which we refer to as Koopmans’ theorem (KT) IPs) as an indication of trends. Linear regression analysis shows that KT SCC-DFTB IPs are nearly as accurate as ΔSCF SCC-DFTB eigenvalues (± 0.852 eV for first IPs, but ± 0.706 eV for all of the IPs considered here) for small molecules. For larger molecules, SCC-DFTB was also the ideal choice with IP/EA errors of ± 0.489/0.740 eV from ΔSCF calculations and of ± 0.326/0.458 eV from (KT) orbital energies. Interestingly, the linear least squares fit for the KT IPs of the larger molecules also proves to have good predictive value for the lower energy KT IPs of smaller molecules, with significant deviations appearing only for IPs of 15–20 eV or larger. 
We believe that this quantitative analysis of errors in SCC-DFTB IPs and EAs may be of interest to other researchers interested in DFTB investigation of large and complex problems, such as those encountered in organic electronics.Computation2015-12-0434Article10.3390/computation30406166166562079-31972015-12-04doi: 10.3390/computation3040616Ala DarghouthMark CasidaWalid TaoualiKamel AlimiMathias LjungbergPeter KovalDaniel Sánchez-PortalDietrich Foerster<![CDATA[Computation, Vol. 3, Pages 600-615: An Incompressible, Depth-Averaged Lattice Boltzmann Method for Liquid Flow in Microfluidic Devices with Variable Aperture]]>
http://www.mdpi.com/2079-3197/3/4/600
Two-dimensional (2D) pore-scale models have successfully simulated microfluidic experiments of aqueous-phase flow with mixing-controlled reactions in devices with small aperture. A standard 2D model is not generally appropriate when the presence of mineral precipitate or biomass creates complex and irregular three-dimensional (3D) pore geometries. We modify the 2D lattice Boltzmann method (LBM) to incorporate viscous drag from the top and bottom microfluidic device (micromodel) surfaces, typically excluded in a 2D model. Viscous drag from these surfaces can be approximated by uniformly scaling a steady-state 2D velocity field at low Reynolds number. We demonstrate increased accuracy by approximating the viscous drag with an analytically-derived body force which assumes a local parabolic velocity profile across the micromodel depth. Accuracy of the generated 2D velocity field and simulation permeability have not been evaluated in geometries with variable aperture. We obtain permeabilities within approximately 10% error and accurate streamlines from the proposed 2D method relative to results obtained from 3D simulations. In addition, the proposed method requires a CPU run time approximately 40 times less than a standard 3D method, representing a significant computational benefit for permeability calculations.Computation2015-11-2434Article10.3390/computation30406006006152079-31972015-11-24doi: 10.3390/computation3040600Artin LaleianAlbert ValocchiCharles Werth<![CDATA[Computation, Vol. 3, Pages 586-599: Dominant Strategies of Quantum Games on Quantum Periodic Automata]]>
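For a local parabolic (Poiseuille) profile across an aperture h, the depth-averaged wall drag from the top and bottom surfaces commonly enters a 2D model as the body-force acceleration −12 ν ū / h²; the sketch below is written under that standard Hele-Shaw-type assumption, and the exact form used in the paper may differ:

```python
def depth_averaged_drag(u_mean, nu, h):
    """Body-force approximation of top/bottom wall drag in a
    depth-averaged 2D model: assuming a local parabolic velocity
    profile across the aperture h, the drag acceleration on the
    depth-averaged velocity u_mean is -12 * nu * u_mean / h**2."""
    return -12.0 * nu * u_mean / h**2

# Halving the aperture quadruples the drag at fixed mean velocity,
# which is why a variable-aperture geometry needs a local (not
# uniformly scaled) correction.
drag = depth_averaged_drag(1.0, 1.0, 2.0)
```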
http://www.mdpi.com/2079-3197/3/4/586
Game theory and its quantum extension apply in numerous fields that affect people’s social, political, and economic life. Physical limits imposed by the current technology used in computing architectures (e.g., circuit size) give rise to the need for novel mechanisms, such as quantum-inspired computation. Elements from quantum computation and mechanics, combined with game-theoretic aspects of computing, could open new pathways towards the future technological era. This paper associates dominant strategies of repeated quantum games with quantum automata that recognize infinite periodic inputs. As a reference, we used the PQ-PENNY quantum game, where the quantum strategy outplays any choice of pure or mixed strategy with probability 1, and therefore the associated quantum automaton accepts with probability 1. We also propose a novel game played on the evolution of an automaton, where players’ actions and strategies are also associated with periodic quantum automata.Computation2015-11-2034Article10.3390/computation30405865865992079-31972015-11-20doi: 10.3390/computation3040586Konstantinos GiannakisChristos PapalitsasKalliopi KastampolidouAlexandros SinghTheodore Andronikos<![CDATA[Computation, Vol. 3, Pages 574-585: A Comparative Density Functional Theory and Density Functional Tight Binding Study of Phases of Nitrogen Including a High Energy Density Material N8]]>
http://www.mdpi.com/2079-3197/3/4/574
We present a comparative dispersion-corrected Density Functional Theory (DFT) and Density Functional Tight Binding (DFTB-D) study of several phases of nitrogen, including the well-known alpha, beta, and gamma phases as well as recently discovered highly energetic phases: covalently bound cubic gauche (cg) nitrogen and molecular (vdW-bound) N8 crystals. Among several tested parametrizations of N–N interactions for DFTB, we identify only one that is suitable for modeling of all these phases. This work therefore establishes the applicability of DFTB-D to studies of phases, including highly metastable phases, of nitrogen, which will be of great use for modelling of dynamics of reactions involving these phases, which may not be practical with DFT due to large required space and time scales. We also derive a dispersion-corrected DFT (DFT-D) setup (atom-centered basis parameters and Grimme dispersion parameters) tuned for accurate description simultaneously of several nitrogen allotropes including covalently and vdW-bound crystals and including high-energy phases.Computation2015-11-1134Article10.3390/computation30405745745852079-31972015-11-11doi: 10.3390/computation3040574Nicholas CapelDevang BharaniaSergei Manzhos<![CDATA[Computation, Vol. 3, Pages 558-573: Effective Thermal Conductivity of MOF-5 Powder under a Hydrogen Atmosphere]]>
http://www.mdpi.com/2079-3197/3/4/558
Effective thermal conductivity is an important thermophysical property in the design of metal-organic framework-5 (MOF-5)-based hydrogen storage tanks. A modified thermal conductivity model is built by coupling a theoretical model with the grand canonical Monte Carlo simulation (GCMC) to predict the effect of the H2 adsorption process on the effective thermal conductivity of a MOF-5 powder bed at pressures ranging from 0.01 MPa to 50 MPa and temperatures ranging from 273.15 K to 368.15 K. Results show that the mean pore diameter of the MOF-5 crystal decreases with an increase in pressure and increases with an increase in temperature. The thermal conductivity of the adsorbed H2 increases with an increased amount of H2 adsorption. The effective thermal conductivity of the MOF-5 crystal is significantly enhanced by the H2 adsorption at high pressure and low temperature. The effective thermal conductivity of the MOF-5 powder bed increases with an increase in pressure and remains nearly unchanged with an increase in temperature. The thermal conductivity of the MOF-5 powder bed increases linearly with the decreased porosity and increased thermal conductivity of the skeleton of the MOF-5 crystal. The variation in the effective thermal conductivities of the MOF-5 crystals and bed mainly results from the thermal conductivities of the gaseous and adsorption phases.Computation2015-11-0634Article10.3390/computation30405585585732079-31972015-11-06doi: 10.3390/computation3040558Hui WangZhiguo QuWen ZhangWenquan Tao<![CDATA[Computation, Vol. 3, Pages 541-557: Numerical Simulation of Fluid-Solid Coupling in Fractured Porous Media with Discrete Fracture Model and Extended Finite Element Method]]>
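The abstract's observation that bed conductivity rises with decreasing porosity and increasing skeleton conductivity is consistent with simple two-phase mixing bounds. The sketch below shows the classical parallel/series (Wiener) bounds; it is an illustrative baseline only, not the modified GCMC-coupled model used in the paper:

```python
def wiener_bounds(k_solid, k_gas, porosity):
    """Classical series (lower) and parallel (upper) bounds on the
    effective thermal conductivity of a two-phase bed, with porosity
    taken as the gas volume fraction."""
    k_parallel = (1.0 - porosity) * k_solid + porosity * k_gas
    k_series = 1.0 / ((1.0 - porosity) / k_solid + porosity / k_gas)
    return k_series, k_parallel

# Example: a conductive skeleton (50 W/m/K) with stagnant gas (0.05 W/m/K).
lo, hi = wiener_bounds(50.0, 0.05, 0.5)
```

Any physically sensible effective conductivity, including the adsorption-enhanced values reported in the paper, must fall between these two bounds.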
http://www.mdpi.com/2079-3197/3/4/541
Fluid-solid coupling is ubiquitous in the process of fluid flow underground and has a significant influence on the development of oil and gas reservoirs. To investigate these phenomena, the coupled mathematical model of solid deformation and fluid flow in fractured porous media is established. In this study, the discrete fracture model (DFM) is applied to capture fluid flow in the fractured porous media, which represents fractures explicitly and avoids calculating shape factor for cross flow. In addition, the extended finite element method (XFEM) is applied to capture solid deformation due to the discontinuity caused by fractures. More importantly, this model captures the change of fractures aperture during the simulation, and then adjusts fluid flow in the fractures. The final linear equation set is derived and solved for a 2D plane strain problem. Results show that the combination of discrete fracture model and extended finite element method is suited for simulating coupled deformation and fluid flow in fractured porous media.Computation2015-10-3034Article10.3390/computation30405415415572079-31972015-10-30doi: 10.3390/computation3040541Qingdong ZengJun Yao<![CDATA[Computation, Vol. 3, Pages 528-540: A Scale Invariant Distribution of the Prime Numbers]]>
http://www.mdpi.com/2079-3197/3/4/528
The irregular distribution of prime numbers amongst the integers has found multiple uses, from engineering applications of cryptography to quantum theory. The degree to which this distribution can be predicted thus has become a subject of current interest. Here, we present a computational analysis of the deviations between the actual positions of the prime numbers and their predicted positions from Riemann’s counting formula, focused on the variance function of these deviations from sequential enumerative bins. We show empirically that these deviations can be described by a class of probabilistic models known as the Tweedie exponential dispersion models that are characterized by a power law relationship between the variance and the mean, known by biologists as Taylor’s power law and by engineers as fluctuation scaling. This power law behavior of the prime number deviations is remarkable in that the same behavior has been found within the distribution of genes and single nucleotide polymorphisms (SNPs) within the human genome, the distribution of animals and plants within their habitats, as well as within many other biological and physical processes. We explain the common features of this behavior through a statistical convergence effect related to the central limit theorem that also generates 1/f noise.Computation2015-10-2834Article10.3390/computation30405285285402079-31972015-10-28doi: 10.3390/computation3040528Wayne KendalBent Jørgensen<![CDATA[Computation, Vol. 3, Pages 509-527: CFD and Experimental Study on the Effect of Progressive Heating on Fluid Flow inside a Thermal Wind Tunnel]]>
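The Taylor/Tweedie power law described above, variance = a · mean^b, is conventionally fitted by least squares in log-log space. A minimal sketch of that fit (the synthetic data are illustrative, generated exactly on var = 2 · mean^1.5):

```python
import math

def fit_power_law(means, variances):
    """Least-squares fit of log(var) = log(a) + b * log(mean),
    i.e. the Taylor power law var = a * mean**b."""
    xs = [math.log(m) for m in means]
    ys = [math.log(v) for v in variances]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    b = num / den                      # power-law exponent
    a = math.exp(ybar - b * xbar)      # prefactor
    return a, b

# Synthetic data lying exactly on var = 2 * mean**1.5.
a, b = fit_power_law([1.0, 4.0, 9.0], [2.0, 16.0, 54.0])
```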
http://www.mdpi.com/2079-3197/3/4/509
A detailed Computational Fluid Dynamics (CFD) and experimental investigation into characterizing the fluid flow and thermal profiles in a wind tunnel was carried out, highlighting the effect of progressive heating on the non-uniformity flow profile of air. Using controllable electrical heating elements, the operating temperatures in the test-section were gradually increased in order to determine their influence on the subsequent velocity and thermal profiles found inside the test-section. The numerical study was carried out using the CFD code FLUENT and validated against the experimental results. Good correlation was observed, as the comparison yielded a mean error of 6.4% for the air velocity parameter and 2.3% for the air temperature parameter between the two techniques. The good correlation established between the numerically predicted and experimentally tested results identified broad scope for using the advanced computational capabilities of CFD applicable to the thermal modeling of wind tunnels. For a constant temperature process, the non-uniformity and turbulence intensity in the test section were 0.9% and 0.5%, respectively, both under the recommended guidelines for wind tunnels. The findings revealed that the increase in temperature from 20 °C to 50 °C reduced the velocity by 15.2% inside the test section.
Computation 2015-10-21, Vol. 3, Issue 4, pp. 509–527, Article, ISSN 2079-3197, doi: 10.3390/computation3040509. Hassam Chaudhry, John Calautit, Ben Hughes, Lik Sim.
<![CDATA[Computation, Vol. 3, Pages 479-508: Applicability of URANS and DES Simulations of Flow Past Rectangular Cylinders and Bridge Sections]]>
http://www.mdpi.com/2079-3197/3/3/479
This paper discusses the results of computational fluid dynamics simulations carried out for rectangular cylinders with various side ratios of interest for many civil engineering structures. A bridge deck of common cross-section geometry was also considered. Unsteady Reynolds-averaged Navier–Stokes (URANS) equations were solved in conjunction with either an eddy viscosity or a linearized explicit algebraic Reynolds stress model. The analysis showed that for the case studies considered, the 2D URANS approach was able to give reasonable results if coupled with an advanced turbulence model and a suitable computational mesh. The simulations even reproduced, at least qualitatively, complex phenomena observed in the wind tunnel, such as Reynolds number effects for a sharp-edged geometry. The study focused on both stationary and harmonically oscillating bodies. For the latter, self-excited forces and flutter derivatives were calculated and compared to experimental data. In the particular case of a benchmark rectangular 5:1 cylinder, 3D detached eddy simulations were also carried out, highlighting the improvement in the accuracy of the results with respect to both 2D and 3D URANS calculations. All of the computations were performed with the Tau code, a non-commercial unstructured solver developed by the German Aerospace Center.
Computation 2015-09-18, Vol. 3, Issue 3, pp. 479–508, Article, ISSN 2079-3197, doi: 10.3390/computation3030479. Claudio Mannini.
<![CDATA[Computation, Vol. 3, Pages 444-478: Towards Online Visualization and Interactive Monitoring of Real-Time CFD Simulations on Commodity Hardware]]>
http://www.mdpi.com/2079-3197/3/3/444
Real-time rendering in the realm of computational fluid dynamics (CFD) in particular and scientific high performance computing (HPC) in general is a comparably young field of research, as the complexity of most problems with practical relevance is too high for a real-time numerical simulation. However, recent advances in HPC and the development of very efficient numerical techniques allow first optimized numerical simulations to run in or near real time, which in turn requires integrated and optimized visualization techniques that do not affect performance. In this contribution, we present concepts, implementation details and several application examples of a minimally-invasive, efficient visualization tool for the interactive monitoring of 2D and 3D turbulent flow simulations on commodity hardware. The numerical simulations are conducted with ELBE, an efficient lattice Boltzmann environment based on NVIDIA CUDA (Compute Unified Device Architecture), which provides optimized numerical kernels for 2D and 3D computational fluid dynamics with fluid-structure interactions and turbulence.
Computation 2015-09-11, Vol. 3, Issue 3, pp. 444–478, Article, ISSN 2079-3197, doi: 10.3390/computation3030444. Nils Koliha, Christian Janßen, Thomas Rung.
<![CDATA[Computation, Vol. 3, Pages 427-443: Computational Modeling of Teaching and Learning through Application of Evolutionary Algorithms]]>
http://www.mdpi.com/2079-3197/3/3/427
Within the mind, there are a myriad of ideas that make sense within the bounds of everyday experience, but are not reflective of how the world actually exists; this is particularly true in the domain of science. Classroom learning with teacher explanation is a bridge through which these naive understandings can be brought in line with scientific reality. The purpose of this paper is to examine how the application of a Multiobjective Evolutionary Algorithm (MOEA) can work in concert with an existing computational model to effectively model critical thinking in the science classroom. An evolutionary algorithm is an optimization method that iteratively improves candidate solutions, here machine-learning-based computational models. The research question is: does the application of an evolutionary algorithm provide a means to optimize the Student Task and Cognition Model (STAC-M), and does the optimized model sufficiently represent and predict teaching and learning outcomes in the science classroom? Within this computational study, the authors outline and simulate the effect of teaching on the ability of a “virtual” student to solve a Piagetian task. Using the Student Task and Cognition Model (STAC-M), a computational model of student cognitive processing in science class developed in 2013, the authors complete a computational experiment which examines the role of cognitive retraining on student learning. Comparison of the STAC-M and the STAC-M with inclusion of the Multiobjective Evolutionary Algorithm shows greater success in solving the Piagetian science tasks post cognitive retraining with the Multiobjective Evolutionary Algorithm. This illustrates the potential uses of cognitive and neuropsychological computational modeling in educational research. The authors also outline the limitations and assumptions of computational modeling.
Computation 2015-09-02, Vol. 3, Issue 3, pp. 427–443, Article, ISSN 2079-3197, doi: 10.3390/computation3030427. Richard Lamb, Joshua Premo.
<![CDATA[Computation, Vol. 3, Pages 386-426: Numerical Simulations of Wave-Induced Flow Fields around Large-Diameter Surface-Piercing Vertical Circular Cylinder]]>
http://www.mdpi.com/2079-3197/3/3/386
A computational analysis is performed on the diffraction of water waves induced by a large-diameter, surface-piercing, vertical circular cylinder. With reference to linear-wave cases, the phenomenon is preliminarily considered in terms of velocity potential, a simplified theoretical framework in which both hypotheses of inviscid fluid and irrotational flow are incorporated. Then, as a first-approximation analysis, the Euler equations in primitive variables are considered (a framework in which the fluid is still handled as inviscid, but the field can be rotational). Finally, the real-fluid behavior is analyzed by numerically integrating the full Navier-Stokes equations (viscous fluid and rotational field) in their velocity-pressure formulation, following the approach of Direct Numerical Simulation (DNS; no models are used for the fluctuating portion of the velocity field). For further investigation of the flow fields, the swirling-strength criterion (for flow-structure extraction) and the Karhunen-Loève (KL) decomposition technique (for the extraction of the most energetic flow modes) are applied to the computed fields. It is found that remarkable differences exist between the wave-induced fields as derived within the different computing frameworks tested.
Computation 2015-08-28, Vol. 3, Issue 3, pp. 386–426, Article, ISSN 2079-3197, doi: 10.3390/computation3030386. Giancarlo Alfonsi.
<![CDATA[Computation, Vol. 3, Pages 354-385: Validation of the GPU-Accelerated CFD Solver ELBE for Free Surface Flow Problems in Civil and Environmental Engineering]]>
http://www.mdpi.com/2079-3197/3/3/354
This contribution is dedicated to demonstrating the high potential and manifold applications of state-of-the-art computational fluid dynamics (CFD) tools for free-surface flows in civil and environmental engineering. All simulations were performed with the academic research code ELBE (efficient lattice Boltzmann environment, http://www.tuhh.de/elbe). The ELBE code follows the supercomputing-on-the-desktop paradigm and is especially designed for local supercomputing, without tedious access to remote supercomputers. ELBE uses graphics processing units (GPU) to accelerate the computations and can be used in a single GPU-equipped workstation of, e.g., a design engineer. The code has been successfully validated in very different fields, mostly related to naval architecture and mechanical engineering. In this contribution, we give an overview of past and present applications with practical relevance for civil engineers. The presented applications are grouped into three major categories: (i) tsunami simulations, considering wave propagation, wave runup, inundation and debris flows; (ii) dam break simulations; and (iii) numerical wave tanks for the calculation of hydrodynamic loads on fixed and moving bodies. This broad range of applications, in combination with accurate numerical results and very competitive times to solution, demonstrates that modern CFD tools in general, and the ELBE code in particular, can be a helpful design tool for civil and environmental engineers.
Computation 2015-07-07, Vol. 3, Issue 3, pp. 354–385, Article, ISSN 2079-3197, doi: 10.3390/computation3030354. Christian Janßen, Dennis Mierke, Micha Überrück, Silke Gralher, Thomas Rung.
<![CDATA[Computation, Vol. 3, Pages 336-353: In Silico Comparison of the Hemicelluloses Xyloglucan and Glucuronoarabinoxylan in Protecting Cellulose from Degradation]]>
http://www.mdpi.com/2079-3197/3/3/336
We used a previously developed simulation model of a plant cell wall and its enzymatic degradation to compare the abilities of two hemicelluloses, glucuronoarabinoxylan (GAX) and xyloglucan (XG), to protect cellulose microfibrils (CMFs) from attack by cellulose-degrading enzymes. Additionally, we investigated the effect of XG abundance on the degradation rate of CMFs in the presence of the same enzymes. Simulations were run using hypothetical cell-wall compositions in which the numbers and arrangement of CMFs and (1,3;1,4)-β-glucan were kept constant, but the proportions of GAX and XG were altered. Scenarios considered walls with low and equal proportions of either GAX or XG, and also low, medium and high proportions of XG in the absence of GAX. The rate of CMF degradation was much lower in walls with GAX than walls with XG, except for early in the simulation when the reverse held, suggesting that XGs were protecting CMFs by competitive inhibition. Increasing XG content reduced both the degradation rate of CMFs and the percent of XG degraded, indicating that activity of enzymes decreased with XG density despite XG being degradable. Glucose oligosaccharide breakdown products were analysed on the basis of the originating polysaccharide and their degree of polymerisation (DP). The presence of GAX as opposed to equal amounts of XG had some significant effects on the amount and profile of breakdown products from XG and CMFs.
Computation 2015-07-06, Vol. 3, Issue 3, pp. 336–353, Article, ISSN 2079-3197, doi: 10.3390/computation3030336. Indrakumar Vetharaniam, Martin Upsdell, William Kelly, Graeme Attwood, Christina Moon, Philip Harris.
<![CDATA[Computation, Vol. 3, Pages 326-335: Fast Computation of the Non-Central Chi Square PDF Outside the HDR Under a Requisite Precision Constraint]]>
http://www.mdpi.com/2079-3197/3/2/326
Computation of the non-central chi square probability density function is encountered in diverse fields of applied statistics and engineering. The distribution is commonly computed as a Poisson mixture of central chi square densities, where the terms of the sum are computed starting with the integer nearest the non-centrality parameter. However, for computation of the values in either tail region these terms are not the most significant, and starting with them results in an increased computational load without a corresponding increase in accuracy. The most significant terms are shown to be a function of the non-centrality parameter, the degrees of freedom and the point of evaluation. A computationally simple approximate solution to the location of the most significant terms, as well as the exact solution based on a Newton–Raphson iteration, is presented. A quadratic approximation of the interval of summation is also developed in order to meet a requisite number of significant digits of accuracy. Computationally efficient recursions are used over these improved intervals. The method provides a means of computing the non-central chi square probability density function to a requisite accuracy as a Poisson mixture over all domains of interest.
Computation 2015-06-19, Vol. 3, Issue 2, pp. 326–335, Article, ISSN 2079-3197, doi: 10.3390/computation3020326. Paul Gendron.
<![CDATA[Computation, Vol. 3, Pages 299-325: On Roof Geometry for Urban Wind Energy Exploitation in High-Rise Buildings]]>
http://www.mdpi.com/2079-3197/3/2/299
The European programme HORIZON2020 aims to have 20% of electricity produced from renewable sources. The building sector represents 40% of the European Union's energy consumption, so reducing energy consumption in buildings is a priority for energy efficiency. The present investigation explores the most adequate roof shapes compatible with the placement of different types of small wind energy generators on high-rise buildings for urban wind energy exploitation. The wind flow around traditional state-of-the-art roof shapes is considered. In addition, the influence of the roof edge on the wind flow on high-rise buildings is analyzed. These geometries are investigated, both qualitatively and quantitatively, and the turbulence intensity threshold for horizontal axis wind turbines is considered. The most adequate shapes for wind energy exploitation are identified by studying vertical profiles of velocity, turbulent kinetic energy and turbulence intensity. Curved shapes are the most interesting building roof shapes from the wind energy exploitation point of view, leading to the highest speed-up and the lowest turbulence intensity.
Computation 2015-06-10, Vol. 3, Issue 2, pp. 299–325, Article, ISSN 2079-3197, doi: 10.3390/computation3020299. Francisco Toja-Silva, Carlos Peralta, Oscar Lopez-Garcia, Jorge Navarro, Ignacio Cruz.
<![CDATA[Computation, Vol. 3, Pages 285-298: Computational Recognition of RNA Splice Sites by Exact Algorithms for the Quadratic Traveling Salesman Problem]]>
http://www.mdpi.com/2079-3197/3/2/285
One fundamental problem of bioinformatics is the computational recognition of DNA and RNA binding sites. Given a set of short DNA or RNA sequences of equal length such as transcription factor binding sites or RNA splice sites, the task is to learn a pattern from this set that allows the recognition of similar sites in another set of DNA or RNA sequences. Permuted Markov (PM) models and permuted variable length Markov (PVLM) models are two powerful models for this task, but the problem of finding an optimal PM model or PVLM model is NP-hard. While the problem of finding an optimal PM model or PVLM model of order one is equivalent to the traveling salesman problem (TSP), the problem of finding an optimal PM model or PVLM model of order two is equivalent to the quadratic TSP (QTSP). Several exact algorithms exist for solving the QTSP, but it is unclear if these algorithms are capable of solving QTSP instances resulting from RNA splice sites of at least 150 base pairs in a reasonable time frame. Here, we investigate the performance of three exact algorithms for solving the QTSP for ten datasets of splice acceptor sites and splice donor sites of five different species and find that one of these algorithms is capable of solving QTSP instances of up to 200 base pairs with a running time of less than two days.
Computation 2015-06-03, Vol. 3, Issue 2, pp. 285–298, Article, ISSN 2079-3197, doi: 10.3390/computation3020285. Anja Fischer, Frank Fischer, Gerold Jäger, Jens Keilwagen, Paul Molitor, Ivo Grosse.
<![CDATA[Computation, Vol. 3, Pages 274-284: Effects of a Sprinkler on Evacuation Dynamics in Fire]]>
http://www.mdpi.com/2079-3197/3/2/274
A fire in an enclosed space, such as a room in a building, is generally called a compartment fire. To counter compartment fires, sprinklers for first-aid fire-fighting are installed in rooms. However, it is difficult to determine the degree to which smoke generation and fire spread will be inhibited when sprinklers are on. In particular, demonstrating evacuation behavior under an actual fire is impossible. In this study, we evaluated the effectiveness of a sprinkler through numerical simulations. To model evacuation dynamics, a real-coded cellular automaton (RCA) was used, in which the direction and velocity of an evacuee can be set freely based on a floor field model. To model the conditions in the room fire, we used the Fire Dynamics Simulator (FDS). Two cases, with and without the sprinkler, were compared to assess the sprinkler's effect on evacuation dynamics. The effect of smoke and the expansion of the fire-spreading region were discussed. Results show that, since the fire-spreading region disappears when the sprinkler is actuated, the evacuation time decreases. Even when the sprinkler is actuated, the smoke generated at the beginning of a fire diffuses throughout the whole room. However, the duration for which evacuees are overwhelmed by smoke is shorter, because the amount of smoke generated by the pyrolysis reaction is greatly reduced.
Computation 2015-06-03, Vol. 3, Issue 2, pp. 274–284, Article, ISSN 2079-3197, doi: 10.3390/computation3020274. Kazuhiro Yamamoto, Yuki Takeuchi, Shinnosuke Nishiki.
<![CDATA[Computation, Vol. 3, Pages 262-273: LES: Unsteady Atmospheric Turbulent Layer Inlet. A Precursor Method Application and Its Quality Check]]>
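The floor-field idea behind the evacuation model above can be illustrated with an ordinary integer-grid cellular automaton: a static field stores each cell's distance to the exit, and each time step every evacuee moves to the free neighbouring cell with the smallest field value. This is a deliberately simplified stand-in for the paper's real-coded automaton (no continuous velocities, no smoke or fire coupling); all function and variable names are mine.

```python
from collections import deque

def static_field(grid, exit_pos):
    """BFS distance-to-exit over walkable cells (0 = free): the static floor field."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    dist[exit_pos[0]][exit_pos[1]] = 0
    q = deque([exit_pos])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

def evacuate(grid, agents, exit_pos, max_steps=500):
    """Sequentially move each agent downhill on the floor field; agents
    reaching the exit leave the room. Returns (steps, agents remaining)."""
    dist = static_field(grid, exit_pos)
    agents = set(agents)
    steps = 0
    while agents and steps < max_steps:
        steps += 1
        for pos in sorted(agents):  # snapshot of current positions
            r, c = pos
            options = [(dist[nr][nc], (nr, nc))
                       for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1))
                       if 0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                       and dist[nr][nc] is not None and (nr, nc) not in agents]
            if options:
                best = min(options)[1]
                agents.remove(pos)
                if best != exit_pos:  # stepping onto the exit means leaving
                    agents.add(best)
    return steps, len(agents)
```

In a sprinkler study of the kind described above, the fire simulator would additionally mark cells as blocked or degrade movement where smoke is present; here the room is static.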
http://www.mdpi.com/2079-3197/3/2/262
The motivation of this work is to bridge the gap between experimental approaches in wind tunnel testing and numerical computations, in the field of structural design against strong winds. This paper focuses on the generation of an unsteady flow field, representative of a natural wind field, but still compatible with Computational Fluid Dynamics inlet requirements. A simple and “naive” procedure is explained, and the results are in good agreement with some international standards.
Computation 2015-05-26, Vol. 3, Issue 2, pp. 262–273, Article, ISSN 2079-3197, doi: 10.3390/computation3020262. Julien Berthaut-Gerentes, Didier Delaunay.
<![CDATA[Computation, Vol. 3, Pages 235-261: Engineering-Based Thermal CFD Simulations on Massive Parallel Systems]]>
http://www.mdpi.com/2079-3197/3/2/235
The development of parallel Computational Fluid Dynamics (CFD) codes is a challenging task that entails efficient parallelization concepts and strategies in order to achieve good scalability values when running those codes on modern supercomputers with several thousands to millions of cores. In this paper, we present a hierarchical data structure for massive parallel computations that supports the coupling of a Navier–Stokes-based fluid flow code with the Boussinesq approximation in order to address complex thermal scenarios for energy-related assessments. The data structure is designed specifically for interactive data exploration and visualization during the runtime of the simulation code, the absence of which is a major shortcoming of traditional high-performance computing (HPC) simulation codes. We further show and discuss speed-up values obtained on one of Germany’s top-ranked supercomputers with up to 140,000 processes and present simulation results for different engineering-based thermal problems.
Computation 2015-05-22, Vol. 3, Issue 2, pp. 235–261, Article, ISSN 2079-3197, doi: 10.3390/computation3020235. Jérôme Frisch, Ralf-Peter Mundani, Ernst Rank, Christoph van Treeck.
<![CDATA[Computation, Vol. 3, Pages 222-234: Computational Approach to 3D Modeling of the Lymph Node Geometry]]>
http://www.mdpi.com/2079-3197/3/2/222
In this study we present a computational approach to the generation of the major geometric structures of an idealized murine lymph node (LN). The model comprises the major compartments such as the subcapsular sinus, B cell follicles, trabecular and medullar sinuses, blood vessels and the T cell zone, with a primary focus on the fibroblastic reticular cell (FRC) network. Confocal microscopy data of LN macroscopic structures and structural properties of the FRC network have been generated and utilized in the present model. The methodology provides a library of modules that can be used to assemble a solid geometric LN model and subsequently generate an adaptive mesh model capable of implementing transport phenomena. Overall, based on the use of high-resolution confocal microscopy and morphological analysis of cell 3D reconstructions, we have developed a computational model of the LN geometry, suitable for further investigation in studies of fluid transport and cell migration in this immunologically essential organ.
Computation 2015-05-22, Vol. 3, Issue 2, pp. 222–234, Communication, ISSN 2079-3197, doi: 10.3390/computation3020222. Alexey Kislitsyn, Rostislav Savinkov, Mario Novkovic, Lucas Onder, Gennady Bocharov.
<![CDATA[Computation, Vol. 3, Pages 197-221: Simulation Frameworks for Morphogenetic Problems]]>
http://www.mdpi.com/2079-3197/3/2/197
Morphogenetic modelling and simulation help to understand the processes by which the form and shapes of organs (organogenesis) and organisms (embryogenesis) emerge. This requires two mutually coupled entities: the biomolecular signalling network and the tissue. Whereas the modelling of the signalling has been discussed and used in a multitude of works, the realistic modelling of the tissue has only started on a larger scale in the last decade. Here, common tissue modelling techniques are reviewed. Besides the continuum approach, the principles and main applications of the spheroid, vertex, Cellular Potts, Immersed Boundary and Subcellular Element models are discussed in detail. In recent years, many software frameworks, implementing the aforementioned methods, have been developed. The most widely used frameworks and modelling markup languages and standards are presented.
Computation 2015-04-24, Vol. 3, Issue 2, pp. 197–221, Review, ISSN 2079-3197, doi: 10.3390/computation3020197. Simon Tanaka.
<![CDATA[Computation, Vol. 3, Pages 177-196: A Guide to Phylogenetic Reconstruction Using Heterogeneous Models—A Case Study from the Root of the Placental Mammal Tree]]>
http://www.mdpi.com/2079-3197/3/2/177
There are numerous phylogenetic reconstruction methods and models available, but which should you use and why? Important considerations in phylogenetic analyses include data quality, structure, signal, alignment length and sampling. If poorly modelled, variation in rates of change across proteins and across lineages can lead to incorrect phylogeny reconstruction, which can then lead to downstream misinterpretation of the underlying data. The risk of choosing and applying an inappropriate model can be reduced with some critical yet straightforward steps outlined in this paper. We use the question of the position of the root of placental mammals as our working example to illustrate the topological impact of model misspecification. Using this case study, we focus on using models in a Bayesian framework and outline the steps involved in identifying and assessing better fitting models for specific datasets.
Computation 2015-04-15, Vol. 3, Issue 2, pp. 177–196, Article, ISSN 2079-3197, doi: 10.3390/computation3020177. Raymond Moran, Claire Morgan, Mary O'Connell.
<![CDATA[Computation, Vol. 3, Pages 150-176: Coupling of Petri Net Models of the Mycobacterial Infection Process and Innate Immune Response]]>
http://www.mdpi.com/2079-3197/3/2/150
Computational and mathematical modeling supports a better understanding of complex behavior in biology. For the investigation of biological systems, researchers have used computers to construct, verify, and validate models that describe the mechanisms behind biological processes in multi-scale representations. In this paper we combine Petri net models that represent the mycobacterial infection process and innate immune response at various levels of organization, from molecular interaction to granuloma dissemination. In addition to the conventional graphical representation of the Petri net, the outcome of the model is projected onto a 3D model representing the zebrafish embryo. In this manner we provide a visualization of the process in a simulation framework that portrays the infection in the living system.
Computation 2015-04-08, Vol. 3, Issue 2, pp. 150–176, Article, ISSN 2079-3197, doi: 10.3390/computation3020150. Rafael Carvalho, Jeroen van den Heuvel, Jetty Kleijn, Fons Verbeek.
<![CDATA[Computation, Vol. 3, Pages 128-149: CFD Simulation and Optimisation of a Low Energy Ventilation and Cooling System]]>
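A place/transition Petri net of the kind coupled in the paper above can be simulated in a few lines: a marking maps places to token counts, and a transition fires when its input places hold enough tokens. The net below is a toy loosely inspired by an infection step; the place and transition names are invented for illustration and are not taken from the paper's models, which are far richer and multi-scale.

```python
def enabled(marking, transition):
    """A transition is enabled when every input place holds enough tokens."""
    pre, _ = transition
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, transition):
    """Consume tokens from input places, produce tokens in output places."""
    pre, post = transition
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Toy net: a bacterium meeting a macrophage yields an infected macrophage,
# and infected macrophages allow bacteria to replicate.
transitions = {
    "phagocytosis": ({"bacteria": 1, "macrophages": 1}, {"infected": 1}),
    "replication": ({"infected": 1}, {"infected": 1, "bacteria": 1}),
}

marking = {"bacteria": 2, "macrophages": 3, "infected": 0}
for _ in range(4):  # deterministic round-robin firing for illustration
    for name, t in transitions.items():
        if enabled(marking, t):
            marking = fire(marking, t)
print(marking)  # -> {'bacteria': 3, 'macrophages': 0, 'infected': 3}
```

Coupling nets, as the paper does, amounts to sharing places between subnets so that tokens produced at one level of organization enable transitions at another.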
http://www.mdpi.com/2079-3197/3/2/128
Mechanical Heating Ventilation and Air-Conditioning (HVAC) systems account for 60% of the total energy consumption of buildings, and the building sector contributes about 40% of the total global energy demand. By using passive technology coupled with natural ventilation from wind towers, significant amounts of energy can be saved, reducing the emissions of greenhouse gases. In this study, the role of Computational Fluid Dynamics (CFD) analysis in aiding the development of wind towers was explored, from initial concepts of simple wind tower mechanics to detailed designs of wind towers that integrate modifications specifically intended to improve their efficiency. On this basis, using CFD analysis, heat transfer devices were integrated into a wind tower to provide cooling for incoming air, thus negating the reliance on mechanical HVAC systems. The commercial CFD code Fluent was used in this study to simulate the airflow inside the wind tower model with the heat transfer devices. Scaled wind tunnel testing was used to validate the computational model. The airflow supply velocity was measured and compared with the numerical results, and good correlation was observed. Additionally, the spacing between the heat transfer devices was varied to optimise the performance. The technology presented here is subject to a patent application (PCT/GB2014/052263).
Computation 2015-04-02, Vol. 3, Issue 2, pp. 128–149, Article, ISSN 2079-3197, doi: 10.3390/computation3020128. John Calautit, Dominic O'Connor, Polytimi Sofotasiou, Ben Hughes.
<![CDATA[Computation, Vol. 3, Pages 114-127: Evolution by Pervasive Gene Fusion in Antibiotic Resistance and Antibiotic Synthesizing Genes]]>
http://www.mdpi.com/2079-3197/3/2/114
Phylogenetic (tree-based) approaches to understanding evolutionary history are unable to incorporate convergent evolutionary events where two genes merge into one. In this study, as exemplars of what can be achieved when a tree is not assumed a priori, we have analysed the evolutionary histories of polyketide synthase genes and antibiotic resistance genes and have shown that their history is replete with convergent events as well as divergent events. We demonstrate that the overall histories of these genes more closely resemble the remodelling that might be seen with the children’s toy Lego than the standard model of the phylogenetic tree. This work further demonstrates that genes can act as public goods, available for re-use and incorporation into other genetic goods.
Computation 2015-03-26, Vol. 3, Issue 2, pp. 114–127, Article, ISSN 2079-3197, doi: 10.3390/computation3020114. Orla Coleman, Ruth Hogan, Nicole McGoldrick, Niamh Rudden, James McInerney.
<![CDATA[Computation, Vol. 3, Pages 99-113: Evolutionary Dynamics in Gene Networks and Inference Algorithms]]>
http://www.mdpi.com/2079-3197/3/1/99
Dynamical interactions among sets of genes (and their products) regulate developmental processes and some dynamical diseases, like cancer. Gene regulatory networks (GRNs) are directed networks that define interactions (links) among different genes/proteins involved in such processes. Genetic regulation can be modified during the time course of the process, which may imply changes in the nodes' activity, leading the system from a specific state to a different one at a later time (dynamics). How the GRN modifies its topology to properly drive a developmental process, and how this regulation was acquired across evolution, are questions that the evolutionary dynamics of gene networks tackles. In the present work we review important methodology in the field and highlight the combination of these methods with evolutionary algorithms. In recent years, this combination has become a powerful tool to fit models with the increasingly available experimental data.
Computation 2015-03-13, Vol. 3, Issue 1, pp. 99–113, Review, ISSN 2079-3197, doi: 10.3390/computation3010099. Daniel Aguilar-Hidalgo, María Lemos, Antonio Córdoba.
<![CDATA[Computation, Vol. 3, Pages 72-98: Use of CMEIAS Image Analysis Software to Accurately Compute Attributes of Cell Size, Morphology, Spatial Aggregation and Color Segmentation that Signify in Situ Ecophysiological Adaptations in Microbial Biofilm Communities]]>
http://www.mdpi.com/2079-3197/3/1/72
In this review, we describe computational features of computer-assisted microscopy that are unique to the Center for Microbial Ecology Image Analysis System (CMEIAS) software, and examples illustrating how they can be used to gain ecophysiological insights into microbial adaptations occurring at micrometer spatial scales directly relevant to individual cells occupying their ecological niches in situ. These features include algorithms that accurately measure (1) microbial cell length relevant to avoidance of protozoan bacteriovory; (2) microbial biovolume body mass relevant to allometric scaling and local apportionment of growth-supporting nutrient resources; (3) pattern recognition rules for morphotype classification of diverse microbial communities relevant to their enhanced fitness for success in the particular habitat; (4) spatial patterns of coaggregation that reveal the local intensity of cooperative vs. competitive adaptations in colonization behavior relevant to microbial biofilm ecology; and (5) object segmentation of complex color images to differentiate target microbes reporting successful cell-cell communication. These unique computational features contribute to the CMEIAS mission of developing accurate and freely accessible tools of image bioinformatics that strengthen microscopy-based approaches for understanding microbial ecology at single-cell resolution.
Computation 2015-03-09, Vol. 3, Issue 1, pp. 72–98, Review, ISSN 2079-3197, doi: 10.3390/computation3010072. Frank Dazzo, Brighid Niccum.
<![CDATA[Computation, Vol. 3, Pages 58-71: Visual Simulation of Soil-Microbial System Using GPGPU Technology]]>
http://www.mdpi.com/2079-3197/3/1/58
General Purpose (use of) Graphics Processing Units (GPGPU) is a promising technology for simulation upscaling, in particular for bottom–up modelling approaches seeking to translate micro-scale system processes to macro-scale properties. Many existing simulations of soil ecosystems do not recover the emergent system-scale properties, and this may be a consequence of “missing” information at finer scales. Interpretation of model output can be challenging, and we advocate the “built-in” visual simulation afforded by GPGPU implementations. We apply this GPGPU approach to a reaction–diffusion soil ecosystem model with the intent of linking micro (micron) and core (cm) spatial scales to investigate how microbes respond to changing environments and the consequences for soil respiration. The performance is evaluated in terms of computational speed-up, spatial upscaling and visual feedback. We conclude that a GPGPU approach can significantly improve computational efficiency and offers the potential added benefit of visual immediacy. For massive spatial domains, distribution over multiple GPU devices may still be required.
Computation 2015-02-27, Vol. 3, Issue 1, pp. 58–71, Article, ISSN 2079-3197, doi: 10.3390/computation3010058. Ruth Falconer, Alasdair Houston.
<![CDATA[Computation, Vol. 3, Pages 29-57: A Review of Two Multiscale Methods for the Simulation of Macromolecular Assemblies: Multiscale Perturbation and Multiscale Factorization]]>
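The reaction–diffusion core of a soil model like the one described above can be prototyped on the CPU with NumPy before any GPGPU port: substrate diffuses on a grid (here with periodic boundaries via np.roll) and is consumed wherever microbial biomass sits. This is a minimal sketch; all parameter values, names and the uptake term are illustrative assumptions, not the paper's model.

```python
import numpy as np

def step(c, uptake, D=0.1, dt=0.1, dx=1.0):
    """One explicit finite-difference step of dc/dt = D*laplacian(c) - uptake*c,
    a minimal stand-in for substrate diffusion with microbial consumption."""
    lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
           np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c) / dx**2
    return c + dt * (D * lap - uptake * c)

rng = np.random.default_rng(0)
c = np.zeros((64, 64))
c[32, 32] = 1.0                               # point source of substrate
uptake = 0.05 * (rng.random((64, 64)) < 0.2)  # sparse microbial biomass
for _ in range(200):
    c = step(c, uptake)
```

On a GPU the same per-cell stencil update maps naturally to one thread per cell, which is what makes this model class attractive for GPGPU upscaling; the visual-feedback argument of the paper follows from rendering `c` each step.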
http://www.mdpi.com/2079-3197/3/1/29
Many mesoscopic N-atom systems derive their structural and dynamical properties from processes coupled across multiple scales in space and time. That is, they simultaneously deform or display collective behaviors, while experiencing atomic scale vibrations and collisions. Due to the large number of atoms involved and the need to simulate over long time periods of biological interest, traditional computational tools, like molecular dynamics, are often infeasible for such systems. Hence, in the current review article, we present and discuss two recent multiscale methods, stemming from the N-atom formulation and an underlying scale separation, that can be used to study such systems in a friction-dominated regime: multiscale perturbation theory and multiscale factorization. These novel analytic foundations provide a self-consistent approach to yield accurate and feasible long-time simulations with atomic detail for a variety of multiscale phenomena, such as viral structural transitions and macromolecular self-assembly. As such, the accuracy and efficiency of the associated algorithms are demonstrated for a few representative biological systems, including satellite tobacco mosaic virus (STMV) and lactoferrin.
Computation 2015, Vol. 3, Issue 1, pp. 29-57; Article; ISSN 2079-3197; published 2015-02-05; doi: 10.3390/computation3010029. Authors: Stephen Pankavich, Peter Ortoleva.
<![CDATA[Computation, Vol. 3, Pages 2-28: Computational Studies of the Intestinal Host-Microbiota Interactome]]>
http://www.mdpi.com/2079-3197/3/1/2
A large and growing body of research implicates aberrant immune response and compositional shifts of the intestinal microbiota in the pathogenesis of many intestinal disorders. The molecular and physical interaction between the host and the microbiota, known as the host-microbiota interactome, is one of the key drivers in the pathophysiology of many of these disorders. This host-microbiota interactome is a set of dynamic and complex processes, and needs to be treated as a distinct entity and subject for study. Disentangling this complex web of interactions will require novel approaches, using a combination of data-driven bioinformatics with knowledge-driven computational modeling. This review describes the computational approaches for investigating the host-microbiota interactome, with emphasis on the human intestinal tract and innate immunity, and highlights open challenges and existing gaps in the computational methodology for advancing our knowledge about this important facet of human health.
Computation 2015, Vol. 3, Issue 1, pp. 2-28; Review; ISSN 2079-3197; published 2015-01-14; doi: 10.3390/computation3010002. Authors: Scott Christley, Chase Cockrell, Gary An.
<![CDATA[Computation, Vol. 3, Pages 1: Acknowledgement to Reviewers of Computation in 2014]]>
http://www.mdpi.com/2079-3197/3/1/1
The editors of Computation would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2014:[...]
Computation 2015, Vol. 3, Issue 1, p. 1; Editorial; ISSN 2079-3197; published 2015-01-09; doi: 10.3390/computation3010001. Author: Computation Editorial Office.
<![CDATA[Computation, Vol. 2, Pages 246-257: SBMLSimulator: A Java Tool for Model Simulation and Parameter Estimation in Systems Biology]]>
http://www.mdpi.com/2079-3197/2/4/246
The identification of suitable model parameters for biochemical reactions has been recognized as a difficult endeavor. Parameter values from the literature or from experiments often cannot be directly combined in complex reaction systems. Nature-inspired optimization techniques can find appropriate sets of parameters that calibrate a model to experimentally obtained time series data. We present SBMLsimulator, a tool that combines the Systems Biology Simulation Core Library for dynamic simulation of biochemical models with the heuristic optimization framework EvA2. SBMLsimulator provides an intuitive graphical user interface with various options, as well as a fully featured command-line interface for large-scale and script-based model simulation and calibration. In a parameter estimation study based on a published model and artificial data, we demonstrate the capability of SBMLsimulator to identify parameters. SBMLsimulator is useful both for interactive simulation and exploration of the parameter space and for large-scale model calibration and estimation of uncertain parameter values.
Computation 2014, Vol. 2, Issue 4, pp. 246-257; Article; ISSN 2079-3197; published 2014-12-18; doi: 10.3390/computation2040246. Authors: Alexander Dörr, Roland Keller, Andreas Zell, Andreas Dräger.
<![CDATA[Computation, Vol. 2, Pages 221-245: Computational and Statistical Analyses of Insertional Polymorphic Endogenous Retroviruses in a Non-Model Organism]]>
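As an illustration of the calibration idea this abstract describes (a generic sketch, not SBMLsimulator's or EvA2's actual API; the one-parameter decay model, the sampling times and the mutation step are invented), a minimal (1+1) evolution strategy can recover a rate parameter from time series data:

```python
import math, random

random.seed(0)

# Synthetic "experimental" time series from x(t) = x0 * exp(-k_true * t).
x0, k_true = 2.0, 0.7
data = [(t, x0 * math.exp(-k_true * t)) for t in (0.0, 0.5, 1.0, 2.0, 4.0)]

def sse(k):
    """Sum of squared residuals between model prediction and data."""
    return sum((x0 * math.exp(-k * t) - x) ** 2 for t, x in data)

# (1+1) evolution strategy: mutate the parameter, keep the better candidate.
k, best = 0.1, sse(0.1)
for _ in range(2000):
    cand = abs(k + random.gauss(0.0, 0.05))  # reflect at zero: rates are positive
    s = sse(cand)
    if s < best:
        k, best = cand, s
```

Real tools optimize many coupled parameters at once, but the accept-if-better loop over a simulation-derived cost is the same basic mechanism.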
http://www.mdpi.com/2079-3197/2/4/221
Endogenous retroviruses (ERVs) are a class of transposable elements found in all vertebrate genomes that contribute substantially to genomic functional and structural diversity. A host species acquires an ERV when an exogenous retrovirus infects a germ cell of an individual and becomes part of the genome inherited by viable progeny. ERVs that colonized ancestral lineages are fixed in contemporary species. However, in some extant species, ERV colonization is ongoing, which results in variation in ERV frequency in the population. To study the consequences of ERV colonization of a host genome, methods are needed to assign each ERV to a location in a species’ genome and determine which individuals have acquired each ERV by descent. Because well annotated reference genomes are not widely available for all species, de novo clustering approaches provide an alternative to reference mapping that are insensitive to differences between query and reference and that are amenable to mobile element studies in both model and non-model organisms. However, there is substantial uncertainty in both identifying ERV genomic position and assigning each unique ERV integration site to individuals in a population. We present an analysis suitable for detecting ERV integration sites in species without the need for a reference genome. Our approach is based on improved de novo clustering methods and statistical models that take the uncertainty of assignment into account and yield a probability matrix of shared ERV integration sites among individuals. We demonstrate that polymorphic integrations of a recently identified endogenous retrovirus in deer reflect contemporary relationships among individuals and populations.
Computation 2014, Vol. 2, Issue 4, pp. 221-245; Article; ISSN 2079-3197; published 2014-11-28; doi: 10.3390/computation2040221. Authors: Le Bao, Daniel Elleder, Raunaq Malhotra, Michael DeGiorgio, Theodora Maravegias, Lindsay Horvath, Laura Carrel, Colin Gillin, Tomáš Hron, Helena Fábryová, David Hunter, Mary Poss.
<![CDATA[Computation, Vol. 2, Pages 199-220: Computation of the Likelihood in Biallelic Diffusion Models Using Orthogonal Polynomials]]>
http://www.mdpi.com/2079-3197/2/4/199
In population genetics, parameters describing forces such as mutation, migration and drift are generally inferred from molecular data. Lately, approximate methods based on simulations and summary statistics have been widely applied for such inference, even though these methods waste information. In contrast, probabilistic methods of inference can be shown to be optimal, if their assumptions are met. In genomic regions where recombination rates are high relative to mutation rates, polymorphic nucleotide sites can be assumed to evolve independently from each other. The distribution of allele frequencies at a large number of such sites has been called “allele-frequency spectrum” or “site-frequency spectrum” (SFS). Conditional on the allelic proportions, the likelihoods of such data can be modeled as binomial. A simple model representing the evolution of allelic proportions is the biallelic mutation-drift or mutation-directional selection-drift diffusion model. With series of orthogonal polynomials, specifically Jacobi and Gegenbauer polynomials, or the related spheroidal wave function, the diffusion equations can be solved efficiently. In the neutral case, the product of the binomial likelihoods with the sum of such polynomials leads to finite series of polynomials, i.e., relatively simple equations, from which the exact likelihoods can be calculated. In this article, the use of orthogonal polynomials for inferring population genetic parameters is investigated.
Computation 2014, Vol. 2, Issue 4, pp. 199-220; Review; ISSN 2079-3197; published 2014-11-14; doi: 10.3390/computation2040199. Author: Claus Vogl.
<![CDATA[Computation, Vol. 2, Pages 182-198: Incongruencies in Vaccinia Virus Phylogenetic Trees]]>
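To illustrate the binomial modelling step mentioned in this abstract (a toy sketch with invented counts and a grid search, not the article's orthogonal-polynomial machinery), the log-likelihood of site-frequency data conditional on a shared allelic proportion can be written directly:

```python
from math import comb, log

def binom_loglik(p, counts, n):
    """Log-likelihood of derived-allele counts at independent sites,
    each binomial with sample size n and allelic proportion p."""
    return sum(log(comb(n, c)) + c * log(p) + (n - c) * log(1.0 - p)
               for c in counts)

# Toy site-frequency data: derived-allele counts at five sites, sample size 10.
counts, n = [1, 2, 1, 3, 2], 10

# Maximum-likelihood proportion over a crude grid search.
p_hat = max((i / 100 for i in range(1, 100)),
            key=lambda p: binom_loglik(p, counts, n))
```

The article's contribution is to make allelic proportions themselves random, evolving under the diffusion model, so that the binomial likelihood is integrated against polynomial expansions of the transition density rather than maximized over a fixed p.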
http://www.mdpi.com/2079-3197/2/4/182
Over the years, as more complete poxvirus genomes have been sequenced, phylogenetic studies of these viruses have become more prevalent. In general, the results show similar relationships between the poxvirus species; however, some inconsistencies are notable. Previous analyses of the viral genomes contained within the vaccinia virus (VACV)-Dryvax vaccine revealed that their phylogenetic relationships were sometimes clouded by low bootstrapping confidence. To analyze the VACV-Dryvax genomes in detail, a new tool-set was developed and integrated into the Base-By-Base bioinformatics software package. Analyses showed that fewer unique positions were present in each VACV-Dryvax genome than expected. A series of patterns, each containing several single nucleotide polymorphisms (SNPs), were identified that were counter to the results of the phylogenetic analysis. The VACV genomes were found to contain short DNA sequence blocks that matched more distantly related clades. Additionally, similar non-conforming SNP patterns were observed in (1) the variola virus clade; (2) some cowpox clades; and (3) VACV-CVA, the direct ancestor of VACV-MVA. Thus, traces of past recombination events are common in the various orthopoxvirus clades, including those associated with smallpox and cowpox viruses.
Computation 2014, Vol. 2, Issue 4, pp. 182-198; Article; ISSN 2079-3197; published 2014-10-14; doi: 10.3390/computation2040182. Authors: Chad Smithson, Samantha Kampman, Benjamin Hetman, Chris Upton.
<![CDATA[Computation, Vol. 2, Pages 159-181: Multiscale Modeling of the Early CD8 T-Cell Immune Response in Lymph Nodes: An Integrative Study]]>
http://www.mdpi.com/2079-3197/2/4/159
CD8 T-cells are critical in controlling infection by intracellular pathogens. Upon encountering antigen presenting cells, T-cell receptor activation promotes the differentiation of naïve CD8 T-cells into strongly proliferating activated and effector stages. We propose a 2D-multiscale computational model to study the maturation of CD8 T-cells in a lymph node controlled by their molecular profile. A novel molecular pathway is presented and converted into an ordinary differential equation model, coupled with a cellular Potts model to describe cell-cell interactions. Key molecular players such as activated IL2 receptor and Tbet levels control the differentiation from naïve into activated and effector stages, respectively, while caspases and Fas-Fas ligand interactions control cell apoptosis. Coupling this molecular model to the cellular scale qualitatively reproduces the evolution of total CD8 T-cell counts observed in mouse lymph nodes between days 3 and 5.5 post-infection. Furthermore, this model allows us to make testable predictions of the evolution of the different CD8 T-cell stages.
Computation 2014, Vol. 2, Issue 4, pp. 159-181; Article; ISSN 2079-3197; published 2014-09-29; doi: 10.3390/computation2040159. Authors: Sotiris Prokopiou, Loic Barbarroux, Samuel Bernard, Julien Mafille, Yann Leverrier, Christophe Arpin, Jacqueline Marvel, Olivier Gandrillon, Fabien Crauste.
<![CDATA[Computation, Vol. 2, Pages 131-158: Computational Models of the NF-KB Signalling Pathway]]>
http://www.mdpi.com/2079-3197/2/4/131
In this review article, we discuss the current state of computational modelling of the nuclear factor-kappa B (NF-κB) signalling pathway. NF-κB is a transcription factor, which is ubiquitous within cells and controls a number of immune responses, including inflammation and apoptosis. The NF-κB signalling pathway is tightly regulated, commencing with activation at the cell membrane, signal transduction through various components within the cytoplasm, translocation of NF-κB into the nucleus and, finally, the transcription of various genes relating to the innate and adaptive immune responses. A number of computational (mathematical) models of the signalling pathway have been developed over the past decade. This review describes how these approaches have helped advance our understanding of NF-κB control.
Computation 2014, Vol. 2, Issue 4, pp. 131-158; Review; ISSN 2079-3197; published 2014-09-29; doi: 10.3390/computation2040131. Authors: Richard Williams, Jon Timmis, Eva Qwarnstrom.
<![CDATA[Computation, Vol. 2, Pages 112-130: On Mechanistic Modeling of Gene Content Evolution: Birth-Death Models and Mechanisms of Gene Birth and Gene Retention]]>
http://www.mdpi.com/2079-3197/2/3/112
Characterizing the mechanisms of duplicate gene retention using phylogenetic methods requires models that are consistent with different biological processes. The interplay between complex biological processes and necessarily simpler statistical models leads to a complex modeling problem. A discussion of the relationship between biological processes, existing models for duplicate gene retention and data is presented. Existing models are then extended in deriving two new birth/death models for phylogenetic application in a gene tree/species tree reconciliation framework to enable probabilistic inference of the mechanisms from model parameterization. The goal of this work is to synthesize a detailed discussion of modeling duplicate genes to address biological questions, moving from previous work to future trajectories with the aim of generating better models and better inference.
Computation 2014, Vol. 2, Issue 3, pp. 112-130; Article; ISSN 2079-3197; published 2014-08-28; doi: 10.3390/computation2030112. Authors: Ashley Teufel, Jing Zhao, Malgorzata O'Reilly, Liang Liu, David Liberles.
<![CDATA[Computation, Vol. 2, Pages 102-111: Investigation of the Ergopeptide Epimerization Process]]>
http://www.mdpi.com/2079-3197/2/3/102
Ergopeptides, like ergocornine and α-ergocryptine, exist in an S- and in an R-configuration. Kinetic experiments imply that certain configurations are preferred depending on the solvent. The experimental methods are explained in this article. Furthermore, computational methods are used to understand this configurational preference. Standard quantum chemical methods can predict the favored configurations by using minimum energy calculations on the potential energy landscape. However, the explicit role of the solvent is not revealed by this type of method. In order to better understand its influence, classical mechanical molecular simulations are applied. It appears from our research that “folding” the ergopeptide molecules into an intermediate state (between the S- and the R-configuration) is mechanically hindered for the preferred configurations.
Computation 2014, Vol. 2, Issue 3, pp. 102-111; Article; ISSN 2079-3197; published 2014-08-08; doi: 10.3390/computation2030102. Authors: Karsten Andrae, Stefan Merkel, Vedat Durmaz, Konstantin Fackeldey, Robert Köppen, Marcus Weber, Matthias Koch.
<![CDATA[Computation, Vol. 2, Pages 83-101: Cultural Collapse and System Survival Due to Environmental Modification]]>
http://www.mdpi.com/2079-3197/2/3/83
We consider a simple mathematical approach to the rise and fall of societies based on population growth and its effects on the environment, both beneficial and detrimental. We find that in any simple model of population dynamics with environmental coupling, stable cultures are impossible: populations inevitably grow or decline exponentially. Further, if the parameters defining a civilisation are allowed to evolve towards an evolutionarily stable state, the only possible solutions are those where each culture ultimately declines. However, computer simulations with multiple competing cultures show that while each culture eventually collapses, some are always extant and the system is robust. In this broad class of models, individual death is a requirement for system survival.
Computation 2014, Vol. 2, Issue 3, pp. 83-101; Article; ISSN 2079-3197; published 2014-07-29; doi: 10.3390/computation2030083. Authors: Graeme Ackland, Adrien Henry, Alexander Williams, Morrel Cohen.
<![CDATA[Computation, Vol. 2, Pages 61-82: Universal Dimensions of Meaning Derived from Semantic Relations among Words and Senses: Mereological Completeness vs. Ontological Generality]]>
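A minimal sketch of the kind of population-environment coupling this abstract discusses (the equations and all parameter values are invented for illustration, not taken from the paper) shows the claimed qualitative behaviour: depending on parameters, the population grows or declines roughly exponentially rather than settling on a stable culture:

```python
def simulate(n0, e0, b=0.02, c=0.05, d=0.1, dt=0.01, steps=1500):
    """Euler-integrate a toy population n coupled to environment quality e:
    the per-capita growth rate is (e - d), while the population both builds
    (b) and degrades (c*e) its environment in proportion to its size."""
    n, e = n0, e0
    for _ in range(steps):
        n, e = n + dt * n * (e - d), e + dt * (b - c * e) * n
    return n, e

n_grow, _ = simulate(1.0, 0.5)           # low death rate: runaway growth
n_fall, _ = simulate(1.0, 0.5, d=0.6)    # high death rate: exponential decline
```

Neither run settles at a finite, non-zero steady state, which is the instability the abstract argues is generic in this class of models.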
http://www.mdpi.com/2079-3197/2/3/61
A key to semantic analysis is a precise and practically useful definition of meaning that is general for all domains of knowledge. We previously introduced the notion of weak semantic map: a metric space allocating concepts along their most general (universal) semantic characteristics while at the same time ignoring other, domain-specific aspects of their meanings. Here we address questions of the number, quality, and mutual independence of the weak semantic dimensions. Specifically, we employ semantic relationships not previously used for weak semantic mapping, such as holonymy/meronymy (“is-part/member-of”), and we compare maps constructed from word senses to those constructed from words. We show that the “completeness” dimension derived from the holonym/meronym relation is independent of, and practically orthogonal to, the “abstractness” dimension derived from the hypernym-hyponym (“is-a”) relation, while both dimensions are orthogonal to the maps derived from synonymy and antonymy. Interestingly, the choice of using relations among words vs. senses implies a non-trivial trade-off between rich and unambiguous information due to homonymy and polysemy. The practical utility of the new and prior dimensions is illustrated by the automated evaluation of different kinds of documents. Residual analysis of available linguistic resources, such as WordNet, suggests that the number of universal semantic dimensions representable in natural language may be finite. Their complete characterization, as well as the extension of results to non-linguistic materials, remains an open challenge.
Computation 2014, Vol. 2, Issue 3, pp. 61-82; Article; ISSN 2079-3197; published 2014-07-15; doi: 10.3390/computation2030061. Authors: Alexei Samsonovich, Giorgio Ascoli.
<![CDATA[Computation, Vol. 2, Pages 47-60: Can the Thermodynamic Hodgkin-Huxley Model of Voltage-Dependent Conductance Extrapolate for Temperature?]]>
http://www.mdpi.com/2079-3197/2/2/47
Hodgkin and Huxley (H-H) fitted their model of voltage-dependent conductances to experimental data using empirical functions of voltage. The thermodynamic H-H model of voltage-dependent conductances is more physically plausible, as it constrains and parameterises its empirical fit by assuming that ion channel transition rates depend exponentially on a free energy barrier that, in turn, depends linearly or non-linearly on voltage. The original H-H model contains no explicit temperature terms and requires Q10 factors to describe data at different temperatures. The thermodynamic H-H model does have explicit terms for temperature. Do these terms enable the model to extrapolate across temperature? We utilised voltage clamp data for a voltage-gated K+ current, recorded at three different temperatures. The thermodynamic H-H model's free parameters were fitted (Marquardt-Levenberg algorithm) to a data set recorded at one (or more) temperature(s). We then assessed whether it could describe another data set, recorded at a different temperature, with the same free parameter values and its temperature terms set to the new temperature. We found that it could not.
Computation 2014, Vol. 2, Issue 2, pp. 47-60; Article; ISSN 2079-3197; published 2014-05-14; doi: 10.3390/computation2020047. Author: Michael Forrest.
<![CDATA[Computation, Vol. 2, Pages 23-46: A 3-D Model of a Perennial Ryegrass Primary Cell Wall and Its Enzymatic Degradation]]>
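The thermodynamic formulation described in this abstract can be sketched as follows (illustrative parameter values only, not fitted to the paper's data): a transition rate with an explicit temperature term in the exponent implies a particular Q10 rather than requiring one as a separate empirical factor:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def rate(V, T, A=1e3, dG0=50e3, z=1.5, F=96485.0):
    """Transition rate over a free-energy barrier dG0 (J/mol) that is
    lowered linearly by membrane voltage V (volts); T is in kelvin.
    All parameter values here are illustrative, not fitted."""
    return A * math.exp(-(dG0 - z * F * V) / (R * T))

# The explicit temperature term implies a Q10 instead of requiring one:
q10 = rate(-0.02, 293.15 + 10.0) / rate(-0.02, 293.15)
```

The paper's question is whether this built-in temperature dependence, once fitted at one temperature, predicts data at another; its answer is that, for the K+ current studied, it does not.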
http://www.mdpi.com/2079-3197/2/2/23
We have developed a novel 3-D, agent-based model to improve our understanding of ruminal cell-wall digestion. It offers a capability to study cell walls and their enzymatic modification, by providing a representation of cellulose microfibrils and non-cellulosic polysaccharides and by simulating their spatial and catalytic interactions with enzymes. One can vary cell-wall composition and the types and numbers of enzyme molecules, allowing the model to be applied to a range of systems where cell walls are degraded and to the modification of cell walls by endogenous enzymes. As a proof of principle, we have modelled the wall of a mesophyll cell from the leaf of perennial ryegrass and then simulated its enzymatic degradation. This is a primary, non-lignified cell wall, and the model includes cellulose, hemicelluloses (glucuronoarabinoxylans, 1,3;1,4-β-glucans, and xyloglucans) and pectin. These polymers are represented at the level of constituent monosaccharides, and assembled to form a 3-D, meso-scale representation of the molecular structure of the cell wall. The composition of the cell wall can be parameterised to represent different walls in different cell types and taxa. The model can contain arbitrary combinations of different enzymes. It simulates their random diffusion through the polymer networks, taking collisions into account, allowing steric hindrance from cell-wall polymers to be modelled. Steric considerations are included when target bonds are encountered, and breakdown products resulting from enzymatic activity are predicted.
Computation 2014, Vol. 2, Issue 2, pp. 23-46; Article; ISSN 2079-3197; published 2014-05-05; doi: 10.3390/computation2020023. Authors: Indrakumar Vetharaniam, William Kelly, Graeme Attwood, Philip Harris.
<![CDATA[Computation, Vol. 2, Pages 12-22: Ab Initio Research on a New Type of Half-Metallic Double Perovskites, A2CrMO6 (A = IVA Group Elements; M = Mo, Re and W)]]>
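As a hedged sketch of the diffusion-with-collisions mechanism this abstract describes (the lattice size, obstacle layout and step count are invented; the actual model is 3-D and far more detailed), a rejection-based random walk captures how cell-wall polymers sterically hinder enzyme movement:

```python
import random

random.seed(1)
N = 20  # periodic 2-D lattice, N x N sites

# Sparse "polymer" obstacles blocking part of the lattice (positions invented).
polymer = {(x, 10) for x in range(5, 15)}

def diffuse(pos, steps=500):
    """Random walk with steric hindrance: moves into polymer sites are rejected,
    so the agent collides with the obstacle instead of passing through it."""
    x, y = pos
    for _ in range(steps):
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        nx, ny = (x + dx) % N, (y + dy) % N
        if (nx, ny) not in polymer:  # collision check
            x, y = nx, ny
    return x, y

end = diffuse((0, 0))
```

In the full model the same rejection idea operates in 3-D against monosaccharide-resolution polymer networks, with additional steric rules applied when an enzyme reaches a target bond.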
http://www.mdpi.com/2079-3197/2/1/12
Research based on density functional theory was carried out using the generalized gradient approximation (GGA) for full structural optimization, with the addition of the correlation effect (GGA + U (Coulomb parameter)), in a double perovskite structure, A2BB'O6. Owing to the similar valence electrons of IIA (s2) and IVA (p2) elements, IVA group elements instead of alkaline-earth elements were placed on the A-site ion position, with fixed BB' combinations CrM (M = Mo, Re and W). The ferrimagnetic half-metallic (HM-FiM) properties can be attributed to the p-d hybridization between Cr-d and M-p states and to double exchange. All the compounds can be half-metallic (HM) materials, except Si2CrMoO6, Ge2CrMoO6 and Ge2CrReO6, because the strong-correlation correction should be considered. For M = W, only A = Sn and Pb are possible candidates as HM materials. Nevertheless, an examination of the structural stability is needed, because Si, Ge, Sn and Pb are quite different from Sr. All compounds are stable, except for the Si-based double perovskite structure.
Computation 2014, Vol. 2, Issue 1, pp. 12-22; Article; ISSN 2079-3197; published 2014-03-21; doi: 10.3390/computation2010012. Authors: Yun-Ping Liu, Huei-Ru Fuh, Yin-Kuo Wang.
<![CDATA[Computation, Vol. 2, Pages 1-11: Linear Scaling Solution of the Time-Dependent Self-Consistent-Field Equations]]>
http://www.mdpi.com/2079-3197/2/1/1
A new approach to solving the Time-Dependent Self-Consistent-Field equations is developed based on the double quotient formulation of Tsiper 2001 (J. Phys. B). Dual channel, quasi-independent non-linear optimization of these quotients is found to yield convergence rates approaching those of the best case (single channel) Tamm-Dancoff approximation. This formulation is variational with respect to matrix truncation, admitting linear scaling solution of the matrix-eigenvalue problem, which is demonstrated for bulk excitons in the polyphenylene vinylene oligomer and the (4,3) carbon nanotube segment.
Computation 2014, Vol. 2, Issue 1, pp. 1-11; Letter; ISSN 2079-3197; published 2014-03-14; doi: 10.3390/computation2010001. Author: Matt Challacombe.
<![CDATA[Computation, Vol. 1, Pages 31-45: Second-Row Transition-Metal Doping of (ZniSi), i = 12, 16 Nanoclusters: Structural and Magnetic Properties]]>
http://www.mdpi.com/2079-3197/1/3/31
TM@ZniSi nanoclusters have been characterized by means of density functional theory, where the transition metal (TM) ranges from Y to Cd, and i = 12 and 16. These two nanoclusters were chosen owing to their highly spheroidal shape, which allows for favored endohedral structures as compared to other nanoclusters. Doping with TM is chosen due to their magnetic properties. In similar cluster-assembled materials, these magnetic properties are related to the Transition Metal-Transition Metal (TM-TM) distances. At this point, endohedral doping presents a clear advantage over substitutional or exohedral doping, since in the cluster-assembled materials, these TM would occupy the well-fixed center of the cluster, providing in this way a better TM-TM distance control to experimentalists. In addition to endohedral compounds, surface structures and the transition states (TSs) connecting both isomers have been characterized. In this way, the kinetic and thermal stability of endohedral nanoclusters is predicted. We anticipate that silver and cadmium endohedrally doped nanoclusters have the longest lifetimes. This is due to the weak interaction of these metals with the cage, in contrast to the remaining cases, where the TM bonds covalently to a region of the cage. The open-shell electronic structure of Ag provides magnetic properties to Ag@ZniSi clusters. Therefore, we have further characterized (Ag@Zn12S12)2 and (Ag@Zn16S16)2 dimers, both in the ferromagnetic and antiferromagnetic states, in order to calculate the corresponding magnetic exchange coupling constant, J.
Computation 2013, Vol. 1, Issue 3, pp. 31-45; Article; ISSN 2079-3197; published 2013-11-14; doi: 10.3390/computation1030031. Authors: Elisa Jimenez-Izal, Jon Matxain, Mario Piris, Jesus Ugalde.
<![CDATA[Computation, Vol. 1, Pages 27-30: Computation: A New Open Access Journal of Computational Chemistry, Computational Biology and Computational Engineering]]>
http://www.mdpi.com/2079-3197/1/2/27
Computation (ISSN 2079-3197; http://www.mdpi.com/journal/computation) is an international scientific open access journal focusing on fundamental work in the field of computational science and engineering. Computational science has become essential in many research areas by contributing to solving complex problems in fundamental science all the way to engineering. The very broad range of application domains suggests structuring this journal into three sections, which are briefly characterized below. In each section, a further focusing will be provided by occasionally organizing special issues on topics of high interest, collecting papers on fundamental work in the field. More applied papers should be submitted to their corresponding specialist journals. To help us achieve our goal with this journal, we have an excellent editorial board to advise us on the exciting current and future trends in computation from methodology to application. We very much look forward to hearing all about the research going on across the world. [...]
Computation 2013, Vol. 1, Issue 2, pp. 27-30; Editorial; ISSN 2079-3197; published 2013-09-04; doi: 10.3390/computation1020027. Authors: Karlheinz Schwarz, Rainer Breitling, Christian Allen.
<![CDATA[Computation, Vol. 1, Pages 16-26: Structural Features That Stabilize ZnO Clusters: An Electronic Structure Approach]]>
http://www.mdpi.com/2079-3197/1/1/16
We show that a simple approach to building small computationally inexpensive clusters offers insights on specific structural motifs that stabilize the electronic structure of ZnO. All-electron calculations on ZniOi needle (i = 6, 9, 12, 15, and 18) and plate (i = 9 and 18) clusters within the density functional theory (DFT) formalism show a higher stability for ZnO needles that increases with length. Puckering of the rings to achieve a more wurtzite-like structure destabilizes the needles, although this destabilization is reduced by going to infinite needles (calculated using periodic boundary conditions). Calculations of density of states (DOS) curves and band gaps for finite clusters and infinite needles highlight opportunities for band-gap tuning through kinetic control of nanocrystal growth.
Computation 2013, Vol. 1, Issue 1, pp. 16-26; Article; ISSN 2079-3197; published 2013-05-31; doi: 10.3390/computation1010016. Authors: Csaba Szakacs, Erika Merschrod S., Kristin Poduska.
<![CDATA[Computation, Vol. 1, Pages 1-15: Effect of Isotopic Substitution on Elementary Processes in Dye-Sensitized Solar Cells: Deuterated Amino-Phenyl Acid Dyes on TiO2]]>
http://www.mdpi.com/2079-3197/1/1/1
We present the first computational study of the effects of isotopic substitution on the operation of dye-sensitized solar cells. Ab initio molecular dynamics is used to study the effect of deuteration on light absorption, dye adsorption dynamics, the vibrationally averaged driving forces for injection (∆Gi) and regeneration (∆Gr), as well as on the promotion of electron back-donation in dyes NK1 (2E,4E-2-cyano-5-(4-dimethylaminophenyl)penta-2,4-dienoic acid) and NK7 (2E,4E-2-cyano-5-(4-diphenylaminophenyl)penta-2,4-dienoic acid) adsorbed in monodentate molecular and bidentate bridging dissociative configurations on the anatase (101) surface of TiO2. Deuteration causes a red shift of the absorption spectrum of the dye/TiO2 complex by about 5% (dozens of nm), which can noticeably affect the overlap with the solar spectrum in real cells. The dynamics effect on the driving force for injection and recombination (the difference between the averaged ⟨∆Gi,r⟩ and the equilibrium-configuration value ∆Gi,r(equil)) is strong, yet there is surprisingly little isotopic effect: the average driving force for injection ⟨∆Gi⟩ and for regeneration ⟨∆Gr⟩ changes by only about 10 meV upon deuteration. The nuclear dynamics enhance recombination to the dye ground state due to the approach of the electron-donating group to TiO2, yet this effect is similar for deuterated and non-deuterated dyes. We conclude that the nuclear dynamics of the C-H(D) bonds, which are the modes most affected by deuteration, might not be important for the operation of photoelectrochemical cells based on organic dyes. As the expectation value of the ground state energy is higher than its optimum-geometry value (by up to 0.1 eV in the present case), nuclear motions will affect dye regeneration by recently proposed redox shuttle-dye combinations operating at low driving forces.
Computation 2013, Vol. 1, Issue 1, pp. 1-15; Article; ISSN 2079-3197; published 2013-03-11; doi: 10.3390/computation1010001. Authors: Sergei Manzhos, Hiroshi Segawa, Koichi Yamashita.