Computation doi: 10.3390/computation6020038

Authors: Jean-Paul Kone, Xinyu Zhang, Yuying Yan, Stephen Adegbite

In this paper, an open-source toolbox that can be used to accurately predict the distribution of the major physical quantities that are transported within a proton exchange membrane (PEM) fuel cell is presented. The toolbox has been developed using the Open Source Field Operation and Manipulation (OpenFOAM) platform, which is an open-source computational fluid dynamics (CFD) code. The base case results for the distribution of velocity, pressure, chemical species, Nernst potential, current density, and temperature are as expected. The plotted polarization curve was compared to the results from a numerical model and experimental data taken from the literature. The conducted simulations have generated a significant amount of data and information about the transport processes that are involved in the operation of a PEM fuel cell. The key role played by the concentration constant in shaping the cell polarization curve has been explored. The development of the present toolbox is in line with the objectives outlined in the International Energy Agency (IEA, Paris, France) Advanced Fuel Cell Annex 37 that is devoted to developing open-source computational tools to facilitate fuel cell technologies. The work therefore serves as a basis for devising additional features that are not always feasible with a commercial code.
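The role of the concentration constant in shaping the polarization curve can be illustrated with a standard semi-empirical polarization-curve expression. The sketch below is not the OpenFOAM toolbox itself, and all parameter values (`E_oc`, `b`, `R`, `m`, `n`) are illustrative assumptions:

```python
import math

def cell_voltage(i, E_oc=1.0, b=0.05, R=2e-4, m=3e-5, n=8e-3):
    """Semi-empirical PEM polarization curve: open-circuit voltage
    minus activation (Tafel), ohmic, and concentration losses.
    i is the current density in mA/cm^2; all constants here are
    illustrative, not fitted values from the paper."""
    return E_oc - b * math.log(i) - R * i - m * math.exp(n * i)

# A larger concentration constant n steepens the voltage drop at
# high current density, reshaping the tail of the curve.
curve_lo = [cell_voltage(i, n=8e-3) for i in (100, 500, 1000)]
curve_hi = [cell_voltage(i, n=1.2e-2) for i in (100, 500, 1000)]
```

Sweeping `i` over its full range and plotting `cell_voltage(i)` reproduces the familiar activation, ohmic, and concentration regions of a polarization curve.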

Computation doi: 10.3390/computation6020037

Authors: Khalid Hattaf, Noura Yousfi

Human immunodeficiency virus (HIV) is a retrovirus that causes HIV infection and over time acquired immunodeficiency syndrome (AIDS). It can be spread and transmitted through two fundamental modes, one by virus-to-cell infection, and the other by direct cell-to-cell transmission. In this paper, we propose a new mathematical model that incorporates both modes of transmission and takes into account the role of the adaptive immune response in HIV infection. We first show that the proposed model is mathematically and biologically well posed. Moreover, we prove that the dynamical behavior of the model is fully determined by five threshold parameters. Furthermore, numerical simulations are presented to confirm our theoretical results.
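A minimal sketch of the two transmission modes (without the adaptive immune compartments of the full model) can be written as three ODEs for target cells T, infected cells I, and free virus V; the equations and all parameter values below are illustrative assumptions, not the paper's model:

```python
import numpy as np

def hiv_step(T, I, V, dt, lam=10.0, d=0.01, beta=2e-4, gamma=1e-5,
             a=0.5, k=50.0, c=3.0):
    """One explicit Euler step of a reduced model keeping only the
    two transmission modes (no immune compartments):
      dT/dt = lam - d*T - beta*V*T - gamma*I*T   (target cells)
      dI/dt = beta*V*T + gamma*I*T - a*I         (infected cells)
      dV/dt = k*I - c*V                          (free virus)
    beta*V*T is virus-to-cell infection, gamma*I*T is direct
    cell-to-cell transmission. Parameter values are illustrative."""
    dT = lam - d * T - beta * V * T - gamma * I * T
    dI = beta * V * T + gamma * I * T - a * I
    dV = k * I - c * V
    return T + dt * dT, I + dt * dI, V + dt * dV

T, I, V = 1000.0, 0.0, 1e-3     # uninfected steady state + tiny inoculum
peak_I = 0.0
for _ in range(20000):          # integrate to t = 200
    T, I, V = hiv_step(T, I, V, dt=0.01)
    peak_I = max(peak_I, I)
```

With these values the basic reproduction number exceeds one, so the infection takes off and target cells are depleted below their infection-free level.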

Computation doi: 10.3390/computation6020036

Authors: Claudio Amovilli, Franca Floris

Electron density is used to compute the Shannon entropy. The deviation of this quantity from its Hartree–Fock (HF) value has been observed to be related to the correlation energy. Thus, Shannon entropy is here proposed as a valid quantity for assessing the quality of an energy density functional developed within Kohn–Sham theory. To this purpose, results from eight different functionals, representative of Jacob’s ladder, are compared with accurate results obtained from diffusion quantum Monte Carlo (DMC) computations. For three series of atomic ions, our results show that the revTPSS and the PBE0 functionals are the best, whereas those based on the local density approximation give the largest discrepancy from the DMC Shannon entropy.
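The quantity being compared is the Shannon entropy of the (unit-normalized) electron density, S = -∫ ρ ln ρ d³r. A minimal numerical sketch using the hydrogen 1s density, for which S is known analytically:

```python
import numpy as np

def shannon_entropy_radial(rho, r):
    """S = -int rho ln(rho) d^3r for a spherically symmetric,
    unit-normalized density rho(r), via the radial quadrature
    -4*pi * int r^2 rho ln(rho) dr (trapezoid rule)."""
    safe = np.where(rho > 0.0, rho, 1.0)        # rho*ln(rho) -> 0 at rho = 0
    f = 4.0 * np.pi * r ** 2 * safe * np.log(safe)
    return -np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))

# Hydrogen 1s density in atomic units: rho(r) = exp(-2 r) / pi,
# for which the entropy is analytic: S = 3 + ln(pi) ~ 4.1447.
r = np.linspace(0.0, 30.0, 20001)
rho = np.exp(-2.0 * r) / np.pi
S = shannon_entropy_radial(rho, r)
```

The same quadrature applied to a Kohn–Sham density versus an accurate DMC density gives the entropy deviation used as the quality measure in the abstract.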

Computation doi: 10.3390/computation6020035

Authors: Eberhard Engel

Far outside the surface of slabs, the exact exchange (EXX) potential v_x falls off as −1/z, if z denotes the direction perpendicular to the surface and the slab is localized around z = 0. Similarly, the EXX energy density e_x behaves as −n/(2z), where n is the electron density. Here, an alternative proof of these relations is given, in which the Coulomb singularity in the EXX energy is treated in a particularly careful fashion. This new approach allows the derivation of the next-to-leading order contributions to the asymptotic v_x and e_x. It turns out that in both cases, the corrections are proportional to 1/z^2 in general.

Computation doi: 10.3390/computation6020034

Authors: Ali Cemal Benim, Michael Diederich, Björn Pfeiffelmann

The purpose of this study is the development of an automated two-dimensional airfoil shape optimization procedure for small horizontal axis wind turbines (HAWT), with an emphasis on high thrust and aerodynamically stable performance. The procedure combines Computational Fluid Dynamics (CFD) analysis with the Response Surface Methodology (RSM), the Biobjective Mesh Adaptive Direct Search (BiMADS) optimization algorithm, and an automatic geometry and mesh generation tool. In the CFD analysis, a Reynolds-Averaged Navier–Stokes (RANS) formulation is applied in combination with a two-equation turbulence model. For describing the system behaviour under alternating wind conditions, a number of 2D RANS CFD simulations with varying Reynolds numbers and wind angles are performed. The number of cases is reduced by the use of RSM. In the analysis, an emphasis is placed upon the role of the blade-to-blade interaction. The average and the standard deviation of the thrust are optimized by a derivative-free optimization algorithm to define a Pareto optimal set, using the BiMADS algorithm. The results show that improvements in performance can be achieved by modifications of the blade shape, and that the present procedure can be used as an effective tool for blade shape optimization.

Computation doi: 10.3390/computation6020033

Authors: María Teresa Sánchez, José Manuel García-Aznar

Cell migration is an important biological process that has generated increasing interest during the last several years. This process is based on three phases: protrusion at the front end of the cell, de-adhesion at the rear end and contraction of the cell body, all of them coordinated due to the polymerization/depolymerization of certain cytoskeletal proteins. The aim of this work is to present a mathematical model to simulate the actin polymerization/depolymerization process that regulates the final outcome of the cell migration process, considering all the above phases, in a particular case: when the cell is confined in a microfluidic channel. Under these specific conditions, cell migration can be approximated by using one-dimensional simulations. We will propose a system of reaction–diffusion equations to simulate the behavior of the cytoskeletal proteins responsible for protrusion and contraction in the cell, coupled with the mechanical response of the cell, computing its deformations and stresses. Furthermore, a numerical procedure is presented in order to simulate the whole process in a moving and deformable domain corresponding to the cell body.
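A generic one-dimensional reaction-diffusion step of the kind described above can be sketched as follows; the two fields stand in for filamentous (F) and monomeric (G) actin, and the rate and diffusion constants are illustrative assumptions, not the paper's model:

```python
import numpy as np

def rd_step(F, G, dx, dt, DF=0.01, DG=0.1, kp=1.0, kd=0.5):
    """One explicit step of a two-field reaction-diffusion model:
    monomer G polymerizes into filament F at rate kp, F
    depolymerizes back at rate kd, and both fields diffuse.
    Zero-flux ends mimic the closed cell body; all constants
    are illustrative."""
    def lap(u):
        out = np.empty_like(u)
        out[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
        out[0] = u[1] - u[0]            # reflecting (zero-flux) end
        out[-1] = u[-2] - u[-1]
        return out / dx ** 2
    Fn = F + dt * (DF * lap(F) + kp * G - kd * F)
    Gn = G + dt * (DG * lap(G) - kp * G + kd * F)
    return Fn, Gn

x = np.linspace(0.0, 1.0, 101)
F = np.zeros_like(x)
G = np.exp(-((x - 0.5) / 0.1) ** 2)   # monomer pool mid-channel
mass0 = (F + G).sum()
for _ in range(2000):                 # integrate to t = 0.2
    F, G = rd_step(F, G, dx=x[1] - x[0], dt=1e-4)
```

With zero-flux ends the scheme conserves total protein mass exactly, a useful sanity check before coupling such equations to a moving, deformable domain.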

Computation doi: 10.3390/computation6020032

Authors: Pham Phuc, Tsuyoshi Nozu, Hirotoshi Kikuchi, Kazuki Hibi, Yukio Tamura

A subgrid-scale model based on coherent structures, called the Coherent Structure Smagorinsky Model (CSM), has been applied to a large eddy simulation to assess its performance in the prediction of wind pressure distributions on buildings. The study cases were carried out for the assessment of an isolated rectangular high-rise building and a building with a setback (both in a uniform flow) and an actual high-rise building in an urban city with turbulent boundary layer flow. For the isolated rectangular high-rise building in uniform flow, the CSM showed good agreement with both the traditional Smagorinsky Model (SM) and the experiments (values within 20%). For the building with a setback as well as the actual high-rise building in an urban city, both of which have a distinctive wind pressure distribution with large negative pressure caused by the complicated flow due to the strong influence of neighboring buildings, the CSM effectively gives more accurate results with less variation than the SM in comparison with the experimental results (within 20%). The CSM also yielded consistent peak pressure coefficients for all wind directions, within 20% of experimental values in a relatively high-pressure region of the case study of the actual high-rise building in an urban city.

Computation doi: 10.3390/computation6020031

Authors: Ruifeng Hu, Limin Wang, Ping Wang, Yan Wang, Xiaojing Zheng

In the present work, a highly efficient incompressible flow solver with semi-implicit time advancement on a fully staggered grid, using a high-order compact difference scheme, is first developed in the framework of approximate factorization. The fourth-order compact difference scheme is adopted for approximations of derivatives and interpolations in the incompressible Navier–Stokes equations. The pressure Poisson equation is efficiently solved by the fast Fourier transform (FFT). The framework of approximate factorization significantly simplifies the implementation of semi-implicit time advancement with a high-order compact scheme. Benchmark tests demonstrate the high accuracy of the proposed numerical method. Second, by applying the proposed numerical method, we compute turbulent channel flows at low and moderate Reynolds numbers by direct numerical simulation (DNS) and large eddy simulation (LES). It is found that the predictions of turbulence statistics, and especially energy spectra, can be markedly improved by adopting the high-order scheme rather than the traditional second-order central difference scheme.
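The FFT-based pressure Poisson solve can be sketched for a doubly periodic domain; this is a generic spectral Poisson solver under assumed box sizes, not the authors' code:

```python
import numpy as np

def poisson_fft_2d(f, Lx=2 * np.pi, Ly=2 * np.pi):
    """Solve nabla^2 p = f on a doubly periodic box with FFTs,
    the strategy used for the pressure Poisson equation above.
    The k = 0 mode (mean of p, undetermined under periodic
    boundary conditions) is set to zero."""
    ny, nx = f.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=Lx / nx)   # angular wavenumbers
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=Ly / ny)
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX ** 2 + KY ** 2
    k2[0, 0] = 1.0                    # avoid 0/0; mode zeroed below
    ph = -np.fft.fft2(f) / k2         # p_hat = -f_hat / |k|^2
    ph[0, 0] = 0.0                    # zero-mean solution
    return np.real(np.fft.ifft2(ph))

# Manufactured solution p = sin(x) cos(2y), so f = -5 sin(x) cos(2y).
x = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)
X, Y = np.meshgrid(x, x)
p_exact = np.sin(X) * np.cos(2 * Y)
p = poisson_fft_2d(-5.0 * p_exact)
```

Because the discrete Laplacian is diagonal in Fourier space, the solve costs one forward and one inverse FFT, which is what makes this step cheap inside a time-stepping loop.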

Computation doi: 10.3390/computation6020030

Authors: Samir Matar

Topochemical and electronic structure relationships are shown upon going from ANCl to A2N2Se (A = Zr, Ce) through metathesis. The chalcogen Se (divalent) displacing the halogen Cl (monovalent) modifies the arrangement of A–N monolayers within ANCl (…Cl|{AN}|Cl… sequences) to double layers in A2N2Se (…Se|{A2N2}|Se… sequences). The investigation, carried out in the framework of quantum density functional theory (DFT), points to peculiar features pertaining to the dominant effect of the A–N covalent bond, which is stronger than the ionic A–Cl and ionocovalent A–Se bonds, as identified from analyses of bonding based on the overlap integral, charge transfer, and electron localization function mapping. The electronic density of states shows semiconducting behavior due to the tetravalent character of A. The resulting overall pseudo-binary compounds are expressed formally, with full ionization, as {AN}Cl and {A2N2}Se.

Computation doi: 10.3390/computation6020029

Authors: Alexander Landa, Per Söderlind, Ivan Naumov, John Klepeis, Levente Vitos

In the periodic table, only a few pure metals exhibit lattice or magnetic instabilities associated with Fermi surface nesting, the classical examples being α-U and Cr. Whereas α-U displays a strong Kohn anomaly in the phonon spectrum that ultimately leads to the formation of charge density waves (CDWs), Cr is known for its nesting-induced spin density waves (SDWs). Recently, it has become clear that a pronounced Kohn anomaly and the corresponding softening in the elastic constants is also the key factor that controls structural transformations and mechanical properties in compressed group VB metals—materials with relatively high superconducting critical temperatures. This article reviews the current understanding of the structural and mechanical behavior of these metals under pressure with an introduction to the concept of the Kohn anomaly and how it is related to the important concept of Peierls instability. We review both experimental and theoretical results showing different manifestations of the Kohn anomaly in the transverse acoustic phonon mode TA (ξ00) in V, Nb, and Ta. Specifically, in V the anomaly triggers a structural transition to a rhombohedral phase, whereas in Nb and Ta it leads to an anomalous reduction in yield strength.

Computation doi: 10.3390/computation6020028

Authors: Gongbo Zu, Kit Lam

Wind flow structures and their consequent wind loads on two high-rise buildings in staggered arrangement are investigated by Large Eddy Simulation (LES). Synchronized pressure and flow field measurements by particle image velocimetry (PIV) are conducted in a boundary layer wind tunnel to validate the numerical simulations. The instantaneous and time-averaged flow fields are analyzed and discussed in detail. The coherent flow structures in the building gap are clearly observed and the upstream building wake is found to oscillate sideways and meander down to the downstream building in a coherent manner. The disruptive effect on the downstream building wake induced by the upstream building is also observed. Furthermore, the connection between the upstream building wake and the wind loads on the downstream building is explored by the simultaneous data of wind pressures and wind flow fields.

Computation doi: 10.3390/computation6020027

Authors: S. Paz, Cameron Abrams

In this work, we study the influence of hidden barriers on the convergence behavior of three free-energy calculation methods: well-tempered metadynamics (WTMD), adaptive-biasing forces (ABF), and on-the-fly parameterization (OTFP). We construct a simple two-dimensional potential-energy surface (PES) that allows for an exact analytical result for the free energy in any one-dimensional order parameter. We then choose different CV definitions and PES parameters to create three different systems with increasing sampling challenges. We find that none of the three methods is greatly affected by the hidden barriers in the simplest case considered. The adaptive sampling methods show faster sampling, while the auxiliary high-friction requirement of OTFP makes it slower for this case. However, a slight change in the CV definition has a strong impact on the ABF and WTMD performance, illustrating the importance of choosing suitable collective variables.

Computation doi: 10.3390/computation6010026

Authors: Fredrik Nilsson, Ferdi Aryasetiawan

Substantial progress has been achieved in the last couple of decades in computing the electronic structure of correlated materials from first principles. This progress has been driven by parallel development in theory and numerical algorithms. Theoretical development in combining ab initio approaches and many-body methods is particularly promising. A crucial role is also played by a systematic method for deriving a low-energy model, which bridges the gap between real and model systems. In this article, an overview is given tracing the development from LDA+U to the latest progress in combining the GW method and (extended) dynamical mean-field theory (GW+EDMFT). The emphasis is on conceptual and theoretical aspects rather than technical ones.

Computation doi: 10.3390/computation6010025

Authors: Xiao-Yin Pan, Viraht Sahni

Dissipative effects arise in an electronic system when it interacts with a time-dependent environment. Here, the Schrödinger theory of electrons in an electromagnetic field including dissipative effects is described from a new perspective. Dissipation is accounted for via the effective Hamiltonian approach in which the electron mass is time-dependent. The perspective is that of the individual electron: the corresponding equation of motion for the electron or time-dependent differential virial theorem—the ‘Quantal Newtonian’ second law—is derived. According to the law, each electron experiences an external field comprised of a binding electric field, the Lorentz field, and the electromagnetic field. In addition, there is an internal field whose components are representative of electron correlations due to the Pauli exclusion principle and Coulomb repulsion, kinetic effects, and density. There is also an internal contribution due to the magnetic field. The response of the electron is governed by the current density field in which a damping coefficient appears. The law leads to further insights into Schrödinger theory, and in particular the intrinsic self-consistent nature of the Schrödinger equation. It is proved that in the presence of dissipative effects, the basic variables (gauge-invariant properties, knowledge of which determines the Hamiltonian) are the density and physical current density. Finally, a local effective potential theory of dissipative systems—quantal density functional theory (QDFT)—is developed. This constitutes the mapping from the interacting dissipative electronic system to one of noninteracting fermions possessing the same dissipation and basic variables. Attributes of QDFT are the separation of the electron correlations due to the Pauli exclusion principle and Coulomb repulsion, and the determination of the correlation contributions to the kinetic energy. Hence, Schrödinger theory in conjunction with QDFT leads to additional insights into the dissipative system.

Computation doi: 10.3390/computation6010024

Authors: Katrina Calautit, Angelo Aquino, John Calautit, Payam Nejat, Fatemeh Jomehzadeh, Ben Hughes

Global demand for energy continues to increase rapidly due to economic and population growth, especially in emerging market economies. This leads to challenges and concerns about energy security, which grow as more users require more energy resources. Higher consumption of fossil fuels also leads to more greenhouse gas emissions, which contribute to global warming, and many people still lack access to electricity. Several studies have reported that wind energy is one of the most rapidly developing sources of power, and with declining costs due to technology and manufacturing advancements, together with concerns over energy security and environmental issues, this trend is predicted to continue. As a result, tools and methods to simulate and optimize wind energy technologies must also continue to advance. This paper reviews the most recently published work on Computational Fluid Dynamics (CFD) simulations of micro to small wind turbines, buildings integrated with wind turbines, and wind turbines installed in wind farms. In addition, the existing limitations and complications involved in wind energy system modelling are examined, and issues that need further work are highlighted. This study investigated the current development of CFD modelling of wind energy systems. Studies on the aerodynamic interaction among the atmospheric boundary layer or wind farm terrain, the turbine rotor, and their wakes were reviewed. Furthermore, CFD combined with other tools, such as blade element momentum theory, was examined.

Computation doi: 10.3390/computation6010023

Authors: B. Shadrack Jabes, Christian Krekeler

We use the Grand Canonical Adaptive Resolution Molecular Dynamics Technique (GC-AdResS) to examine the essential degrees of freedom necessary for reproducing the structural properties of the imidazolium class of ionic liquids (ILs). In this technique, the atomistic details are treated in an open sub-region of the system, while the surrounding environment is modelled as a generic coarse-grained model. We systematically characterize spatial quantities such as intramolecular and intermolecular radial distribution functions, along with other structural and orientational properties of the ILs. The spatial quantities computed in an open sub-region of the system are in excellent agreement with the equivalent quantities calculated in a full atomistic simulation, suggesting that the atomistic degrees of freedom outside the sub-region are negligible. The size of the sub-region considered in this study is 2 nm, which is essentially the size of a few ions. Insight from the study suggests that a high degree of spatial locality plays a crucial role in characterizing the properties of imidazolium-based ionic liquids.

Computation doi: 10.3390/computation6010022

Authors: Péter Koltai, Hao Wu, Frank Noé, Christof Schütte

There are multiple ways in which a stochastic system can be out of statistical equilibrium. It might be subject to time-varying forcing; it might be in a transient phase on its way towards equilibrium; it might even be in equilibrium without us noticing it, due to insufficient observations; or it might fail to admit an equilibrium distribution at all. We review some of the approaches that model the effective statistical behavior of equilibrium and non-equilibrium dynamical systems, and show that both cases can be considered under the unified framework of optimal low-rank approximation of so-called transfer operators. Particular attention is given to the connection between these methods, Markov state models, and the concept of metastability, as well as to the estimation of such reduced-order models from finite simulation data. All of these topics play an important role in, e.g., molecular dynamics, where Markov state models are often and successfully utilized, and which is the main motivating application of this paper. We illustrate our considerations with numerical examples.
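The simplest estimator of such a reduced-order model from finite simulation data is a Markov state model built by counting transitions; a minimal sketch on synthetic two-state data (the metastable matrix `P_true` is an assumption for illustration):

```python
import numpy as np

def estimate_msm(dtraj, n_states, lag=1):
    """Maximum-likelihood Markov state model: count transitions at
    the given lag time and row-normalize into a stochastic matrix
    (no reversibility constraint imposed)."""
    C = np.zeros((n_states, n_states))
    for a, b in zip(dtraj[:-lag], dtraj[lag:]):
        C[a, b] += 1.0
    return C / np.maximum(C.sum(axis=1, keepdims=True), 1.0)

rng = np.random.default_rng(0)
P_true = np.array([[0.98, 0.02],      # mostly stays in state 0,
                   [0.05, 0.95]])     # rarely hops: two metastable wells
dtraj, s = [0], 0
for _ in range(20000):
    s = rng.choice(2, p=P_true[s])
    dtraj.append(s)
T = estimate_msm(dtraj, n_states=2)
evals = np.sort(np.linalg.eigvals(T).real)[::-1]
# evals[0] = 1; the gap 1 - evals[1] sets the slow (metastable) timescale.
```

The dominant eigenvalues of `T` are exactly the low-rank transfer-operator approximation the text refers to: the closer `evals[1]` is to one, the more metastable the system.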

Computation doi: 10.3390/computation6010021

Authors: Joseph Rudzinski, Tristan Bereau

Coarse-grained molecular simulation models can provide significant insight into the complex behavior of protein systems, but suffer from an inherently distorted description of dynamical properties. We recently demonstrated that, for a heptapeptide of alanine residues, the structural and kinetic properties of a simulation model are linked in a rather simple way, given a certain level of physics present in the model. In this work, we extend these findings to a longer peptide, for which the representation of configuration space in terms of a full enumeration of sequences of helical/coil states along the peptide backbone is impractical. We verify the structural-kinetic relationships by scanning the parameter space of a simple native-biased model and then employ a distinct transferable model to validate and generalize the conclusions. Our results further demonstrate the validity of the previous findings, while clarifying the role of conformational entropy in the determination of the structural-kinetic relationships. More specifically, while the global, long timescale kinetic properties of a particular class of models with varying energetic parameters but approximately fixed conformational entropy are determined by the overarching structural features of the ensemble, a shift in these kinetic observables occurs for models with a distinct representation of steric interactions. At the same time, the relationship between structure and more local, faster kinetic properties is not affected by varying the conformational entropy of the model.

Computation doi: 10.3390/computation6010020

Authors: Marcus Weber

Upon ligand binding or during chemical reactions the state of a molecular system changes in time. Usually we consider a finite set of (macro-) states of the system (e.g., ‘bound’ vs. ‘unbound’), although the process itself takes place in a continuous space. In this context, the formula χ = X A connects the micro-dynamics of the molecular system to its macro-dynamics. χ can be understood as a clustering of micro-states of a molecular system into a few macro-states. X is a basis of an invariant subspace of a transfer operator describing the micro-dynamics of the system. The formula claims that there is an unknown linear relation A between these two objects. With the aid of this formula we can understand rebinding effects, the electron flux in pericyclic reactions, and systematic changes of binding rates in kinetic ITC experiments. We can also analyze sequential spectroscopy experiments and rare event systems more easily. This article provides an explanation of the formula and an overview of some of its consequences.
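A toy instance of χ = XA: for a nearly uncoupled four-state chain, the indicator (membership) vectors χ of the two macro-states lie in the span of the two dominant eigenvectors X, so the linear map A can be recovered by least squares. This is only an illustrative sketch, not the full machinery discussed in the article:

```python
import numpy as np

# Nearly uncoupled 4-state chain: micro-states {0,1} and {2,3}
# form two metastable macro-states (eps is the weak coupling).
eps = 1e-3
T = np.array([[0.5 - eps, 0.5,       eps,       0.0],
              [0.5,       0.5 - eps, 0.0,       eps],
              [eps,       0.0,       0.5 - eps, 0.5],
              [0.0,       eps,       0.5,       0.5 - eps]])
evals, evecs = np.linalg.eig(T)
idx = np.argsort(evals.real)[::-1][:2]        # two dominant eigenvalues
X = evecs[:, idx].real                        # basis of invariant subspace
chi = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])  # memberships
A = np.linalg.lstsq(X, chi, rcond=None)[0]    # the unknown linear relation
resid = np.abs(X @ A - chi).max()             # ~0: chi = X A holds
```

For crisper or fuzzier macro-states the same least-squares step applies; PCCA-type methods differ mainly in how the columns of A are constrained.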

Computation doi: 10.3390/computation6010019

Authors: Xinghao Liang, Yang Li, Qiang Zhao, Zheng Zhang, Xiaoping Ouyang

Silicon carbide (SiC) is considered an important material for nuclear engineering due to its excellent properties. Changing the carbon content in SiC can regulate and control its elastic and thermodynamic properties, but a simulation study of the effect of carbon content on the sputtering of SiC (caused by helium ions) is still lacking. In this work, we used Monte Carlo and molecular dynamics simulation methods to study the effects of carbon concentration, incidence energy, incident angle, and target temperature on the sputtering yield of SiC. The results show that the energy and angle of the incident ions have a significant effect on the sputtering yield of SiC when the carbon concentration in SiC is around 62 at %, while the target temperature has little effect on the sputtering yield. Our work may provide theoretical support for the experimental research and engineering application of carbon fiber-reinforced SiC used as a plasma-facing material in tokamak fusion reactors.

Computation doi: 10.3390/computation6010018

Authors: George Kasapoglu, Evangelia Karagianni, Michael Fafalios, Ioannis Koukos

The recently modified Pascal function is further exploited in this paper in the design of passive analog filters. The Pascal approximation has a non-equiripple magnitude, in contrast to the most well-known approximations, such as the Chebyshev approximation. A novelty of this work is the introduction of a precise method for calculating the coefficients of the Pascal function. Two passive design examples are presented to illustrate the advantages and disadvantages of the Pascal approximation. Moreover, the values of the passive elements can be taken from tables, which are created to define the normalized values of these elements for the Pascal approximation, as Zverev had done for the Chebyshev, Elliptic, and other approximations. Although the Pascal approximation can be applied to both passive and active filter designs, a passive filter design is addressed in this paper, and the benefits and shortcomings of the Pascal approximation are presented and discussed.

Computation doi: 10.3390/computation6010017

Authors: Darrell Pepper, Maria Ramos Gonzalez

A localized meshless method is used to simulate 3-D atmospheric wind fields for wind energy assessment and emergency response. The meshless (or mesh-free) method with radial basis functions (RBFs) alleviates the need to create a mesh required by finite difference, finite volume, and finite element methods. The method produces a fast solution that converges with high accuracy, establishing 3-D wind estimates over complex terrain. The method does not require discretization of the domain or boundary and removes the need for domain integration. The meshless method converges exponentially for smooth boundary shapes and boundary data, and is insensitive to dimensional constraints. Coding of the method is very easy and can be done using MATLAB or MAPLE. By employing a localized RBF procedure, 3-D wind fields can be established from sparse meteorological data. The meshless method can be easily run on PCs and hand-held mobile devices. This article summarizes previous work where the meshless method has successfully simulated 3D wind fields over various environments, along with the equations used to obtain the simulations.
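The core of the RBF approach — interpolating scattered data without any mesh by solving one dense linear system — can be sketched as follows; the Gaussian basis, the shape parameter `c`, and the synthetic "station" data are assumptions for illustration:

```python
import numpy as np

def rbf_fit(centers, values, c=0.15):
    """Solve the dense RBF interpolation system Phi w = values,
    with Gaussian basis Phi_ij = exp(-(|x_i - x_j| / c)^2).
    Only node coordinates are needed -- no mesh, no connectivity."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    return np.linalg.solve(np.exp(-(d / c) ** 2), values)

def rbf_eval(x, centers, w, c=0.15):
    """Evaluate the fitted RBF expansion at arbitrary points x."""
    d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
    return np.exp(-(d / c) ** 2) @ w

rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 1.0, size=(50, 2))   # scattered "stations"
f = np.sin(2 * np.pi * pts[:, 0]) * np.cos(np.pi * pts[:, 1])
w = rbf_fit(pts, f)
err = np.abs(rbf_eval(pts, pts, w) - f).max()   # interpolation: ~0 at nodes
```

A localized variant solves many such small systems over neighborhoods instead of one global one, which is what keeps the method fast on sparse meteorological data.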

Computation doi: 10.3390/computation6010016

Authors: Wienczyslaw Stalewski

Optimization methods are increasingly used to solve problems in aeronautical engineering. Typically, optimization methods are utilized in the design of an aircraft airframe or its structure. The presented study is focused on improvement of aircraft flight control procedures through numerical optimization. The optimization problems concern selected phases of flight of a light gyroplane—a rotorcraft using an unpowered rotor in autorotation to develop lift and an engine-powered propeller to provide thrust. An original methodology of computational simulation of rotorcraft flight was developed and implemented. In this approach the aircraft motion equations are solved step-by-step, simultaneously with the solution of the Unsteady Reynolds-Averaged Navier–Stokes equations, which is conducted to assess aerodynamic forces acting on the aircraft. As a numerical optimization method, the BFGS (Broyden–Fletcher–Goldfarb–Shanno) algorithm was adapted. The developed methodology was applied to optimize the flight control procedures in selected stages of gyroplane flight in direct proximity to the ground, where proper control of the aircraft is critical to ensure flight safety and performance. The results of conducted computational optimizations proved the qualitative correctness of the developed methodology. The research results can be helpful in the design of easy-to-control gyroplanes and also in the training of pilots for this type of rotorcraft.

Computation doi: 10.3390/computation6010015

Authors: Andrea Cesari, Sabine Reißer, Giovanni Bussi

Molecular dynamics (MD) simulations allow the investigation of the structural dynamics of biomolecular systems with unrivaled time and space resolution. However, in order to compensate for the inaccuracies of the utilized empirical force fields, it is becoming common to integrate MD simulations with experimental data obtained from ensemble measurements. We review here the approaches that can be used to combine MD and experiment under the guidance of the maximum entropy principle. We mostly focus on methods based on Lagrangian multipliers, either implemented as reweighting of existing simulations or through an on-the-fly optimization. We discuss how errors in the experimental data can be modeled and accounted for. Finally, we use simple model systems to illustrate the typical difficulties arising when applying these methods.
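For a single ensemble-averaged observable, the maximum entropy correction reduces to one Lagrange multiplier λ with weights w_i ∝ exp(−λ s_i), chosen so that the reweighted average matches the experimental value. A minimal sketch on synthetic data (the bisection solver and all values are illustrative, and errors in the data are not modeled here):

```python
import numpy as np

def reweighted_mean(s, lam):
    """Average of observable s under maximum-entropy weights
    w_i proportional to exp(-lam * s_i)."""
    a = -lam * s
    w = np.exp(a - a.max())          # shift exponents for stability
    return np.sum(w * s) / np.sum(w)

def solve_lambda(s, s_exp, lo=-50.0, hi=50.0):
    """Bisection for the Lagrange multiplier: the reweighted mean
    is monotonically decreasing in lam (its derivative is minus
    the reweighted variance), so the root is bracketed."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if reweighted_mean(s, mid) > s_exp:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(2)
s = rng.normal(1.0, 0.5, size=5000)  # simulated observable (illustrative)
lam = solve_lambda(s, s_exp=0.8)     # pull the mean from ~1.0 down to 0.8
```

With several observables the scalar bisection becomes a small multidimensional optimization over the multipliers, but the exponential form of the weights is unchanged.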

Computation doi: 10.3390/computation6010014

Authors: Petros Daltzis, Christos Volos, Hector Nistazakis, Andreas Tsigopoulos, George Tombras

In this work, a 4D hyperchaotic hyperjerk system, with better results for its Lyapunov exponents and Kaplan–Yorke dimension compared with other systems of this family, is presented, together with its circuit implementation. Hyperchaotic hyperjerk systems exhibit complex dynamical behavior in a high-dimensional phase space with n ≥ 4, offering robustness against many types of attacks in private communications. For this reason, an adaptive controller is designed to achieve global chaos synchronization of coupled 4D hyperchaotic hyperjerk systems with unknown parameters. The adaptive results in this work are proved using Lyapunov stability theory, and the effectiveness of the proposed synchronization scheme is confirmed through simulation results.
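A hyperjerk system is a single fourth-order ODE, d^4x/dt^4 = f(x, x', x'', x'''), rewritten as four first-order equations. The sketch below shows that state-space rewrite with an RK4 integrator; the right-hand side and its coefficients are an illustrative (dissipative, non-chaotic) assumption, not the paper's system:

```python
import numpy as np

def hyperjerk_rhs(state):
    """Hyperjerk system d^4x/dt^4 = f(x, x', x'', x''') in
    state-space form x1..x4 = (x, x', x'', x''').  The right-hand
    side and coefficients are illustrative, chosen dissipative so
    the sketch stays bounded; hyperchaos needs tuned parameters."""
    x1, x2, x3, x4 = state
    f = -x1 - 2.0 * x2 - 3.0 * x3 - 2.0 * x4 + x1 ** 2
    return np.array([x2, x3, x4, f])

def rk4_step(state, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = hyperjerk_rhs(state)
    k2 = hyperjerk_rhs(state + 0.5 * dt * k1)
    k3 = hyperjerk_rhs(state + 0.5 * dt * k2)
    k4 = hyperjerk_rhs(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

state = np.array([0.1, 0.0, 0.0, 0.0])   # small initial displacement
for _ in range(5000):                    # integrate to t = 50
    state = rk4_step(state, dt=0.01)
```

Long trajectories of such a system are what Lyapunov exponents and the Kaplan–Yorke dimension are computed from; synchronization couples two copies of these four equations through the adaptive controller.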

Computation doi: 10.3390/computation6010013

Authors: Per Söderlind, David Young

The last decade has seen a continued development of better experimental techniques to measure the equation of state (EOS) of various materials. These improvements in both static and shock-compression approaches have increased the accuracy of the EOS and challenged the complementary theoretical modeling. The conventional modeling of EOS, at least at pressure and temperature conditions that are not too extreme, is founded on density-functional theory (DFT). Naturally, there is an increased interest in the accuracy of DFT as the measurements become more refined, and there is a particular interest in the robustness and validity of DFT at conditions where experimental data are not available. Here, we consider a broad and large set of 64 elemental solids from low atomic number Z up to the very high Z actinide metals. The intent is to compare DFT with experimental zero-temperature isotherms up to 1 Mbar (100 GPa), draw conclusions regarding the theoretical (DFT) error, and quantify a reasonable and defensible approach to defining the theoretical uncertainty. We find that in all 64 cases the DFT error at high pressure is smaller than or equal to the DFT error at lower pressures, which thus provides an upper bound on the error at high compression.

Computation doi: 10.3390/computation6010012

Authors: Christos Vazouras, George Kasapoglu, Evangelia Karagianni, Nikolaos Uzunoglu

A microwave free space reflectometry technique with swept frequency measurements for the profiling of wall structures and the detection of hidden (covered) layers has been applied to the Hagia Sophia byzantine monument. Experimental measurement results are presented and compared with three-dimensional (3D) simulated results, exhibiting fair agreement in some (though not all) aspects. Based on the experimental results, the possibility of clear discrimination between regions with and without covered mosaic layers, and hence the detection of such layers, is demonstrated.

Computation doi: 10.3390/computation6010011

Authors: Giovanni Ciccotti Mauro Ferrario

A dynamical system submitted to holonomic constraints is Hamiltonian only if considered in the reduced phase space of its generalized coordinates and momenta, which need to be defined ad hoc in each particular case. However, especially in molecular simulations, where the number of degrees of freedom is exceedingly high, the representation in generalized coordinates, although conceptually unavoidable, is completely unsuitable for providing a rigorous description of its evolution and statistical properties. In this paper, we first review the state of the art of the numerical approach that defines the way to conserve exactly the constraint conditions (by an algorithm universally known as SHAKE) and permits integrating the equations of motion directly in the phase space of the natural Cartesian coordinates and momenta of the system. We then discuss in detail SHAKE numerical implementations in the notable cases of Verlet and velocity-Verlet algorithms. After discussing in the same framework how constraints modify the properties of the equilibrium ensemble, we show how, at the price of moving to a dynamical system that is no longer (directly) Hamiltonian, it is possible to provide a direct interpretation of the dynamical system and so derive its statistical mechanics both at equilibrium and in non-equilibrium conditions. To achieve that, we generalize the statistical treatment to systems no longer conserving the phase space volume (equivalently, we introduce a non-Euclidean invariant measure in phase space) and derive a generalized Liouville equation describing the ensemble even out of equilibrium. As a result, we can extend the response theory of Kubo (linear and nonlinear) to systems subjected to constraints.

Computation doi: 10.3390/computation6010010

Authors: Denise-Penelope Kontoni Ahmed Farghaly

Various cases of two adjacent multi-story buildings with different numbers of floors and equal or unequal foundation levels under earthquake loading, considering soil-structure interaction (SSI), are investigated. A two-dimensional model for each case of the two adjacent unequal buildings without a separation distance is used, and a special arrangement of contact elements in the contact zone is employed to capture all possible contact deformation modes that take place under seismic loading. The soil is modelled by two-dimensional 4-node elements which are in contact with the foundations of the two adjacent buildings. This paper studies the earthquake-induced double pounding that takes place between the two adjacent buildings of unequal height at some upper points of the superstructure in the contact zone, and also at foundation level, considering soil-structure interaction (SSI). The double pounding and SSI effects should be taken into consideration in the seismic analysis of adjacent buildings, especially those with different heights and different foundation levels.

Computation doi: 10.3390/computation6010009

Authors: Abhishekh Gupta Pedro Mendes

Stochastic simulation has been widely used to model the dynamics of biochemical reaction networks. Several algorithms that are exact with respect to the chemical master equation have been proposed, following the work of Gillespie. These stochastic simulation approaches can be broadly classified into two categories: network-based and network-free simulation. The network-based approach requires that the full network of reactions be established at the start, while the network-free approach is based on reaction rules that encode classes of reactions, and by applying rule transformations, it generates reaction events as they are needed without ever having to derive the entire network. In this study, we compare the efficiency and limitations of several available implementations of these two approaches. The results allow for an informed selection of the implementation and methodology for specific biochemical modeling applications.
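As an illustration of the network-based family of algorithms, here is a minimal sketch of Gillespie's direct method. The reaction system and rate constant are invented for the example; the paper benchmarks existing implementations rather than this toy code.

```python
import random

def gillespie_direct(x, propensities, stoich, t_end, seed=0):
    """Gillespie's direct method: sample exact reaction times and indices."""
    rng = random.Random(seed)
    t = 0.0
    while True:
        a = [p(x) for p in propensities]
        a0 = sum(a)
        if a0 == 0.0:
            break                      # no reaction can fire any more
        dt = rng.expovariate(a0)       # exponentially distributed waiting time
        if t + dt > t_end:
            break
        t += dt
        u, acc = rng.random() * a0, 0.0
        for j, aj in enumerate(a):     # choose reaction j with probability a_j / a0
            acc += aj
            if u <= acc:
                break
        for species, change in stoich[j].items():
            x[species] += change       # apply the stoichiometric update
    return x

# Toy irreversible isomerization A -> B with rate constant k = 1.0
final = gillespie_direct(
    {"A": 100, "B": 0},
    propensities=[lambda x: 1.0 * x["A"]],
    stoich=[{"A": -1, "B": +1}],
    t_end=50.0,
)
```

The network-free, rule-based approach differs in that the list of reactions is generated on the fly rather than enumerated up front as it is here.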

Computation doi: 10.3390/computation6010008

Authors: Denise-Penelope Kontoni Ahmed Farghaly

An existing riverine platform in Egypt, together with its pile group foundation, is analyzed under environmental loads using 3D FEM structural analysis software incorporating soil-pile interaction. The interaction between the transfer plate and the piles supporting the platform is investigated. Two connection conditions were studied, assuming a fixed or hinged connection between the piles and the reinforced concrete platform, for the purpose of comparing the structural behavior. The analysis showed that the fixed or hinged connection condition between the piles and the platform altered the values and distribution of displacements, normal force, bending moments, and shear forces along the length of each pile. The distribution of piles in the pile group affects the stress distribution on both the soil and platform. The piles were found to suffer from displacement failure rather than force failure. Moreover, the resulting bending stresses on the reinforced concrete plate in the case of a fixed connection between the piles and the platform were almost doubled, much higher than the allowable reinforced concrete stress, and even exceeded the ultimate design strength. Thus, the environmental loads acting on a pile-supported riverine offshore platform may cause collapse if they are not properly considered in the structural analysis and design.

Computation doi: 10.3390/computation6010007

Authors: Aleksandr Terentjev Pietro Cortona Lucian Constantin José Pitarke Fabio Della Sala Eduardo Fabiano

We extend the SG4 generalized gradient approximation, developed for covalent and ionic solids, with a nonlocal van der Waals functional. The resulting SG4-rVV10m functional is tested, considering two possible parameterizations, for various kinds of bulk solids, including layered materials and molecular crystals as well as regular bulk materials. The results are compared to those of similar methods, PBE + rVV10L and rVV10. In most cases, SG4-rVV10m yields a quite good description of systems (from iono-covalent to hydrogen-bond and dispersion interactions), being competitive with PBE + rVV10L and rVV10 for dispersion-dominated systems and slightly superior for iono-covalent ones. Thus, it shows a promising applicability for solid-state applications. In a few cases, however, overbinding is observed. This is analysed in terms of gradient contributions to the functional.

Computation doi: 10.3390/computation6010006

Authors: Tobias Martin Ivan Shevchuk

In this article, the development of high-order semi-implicit interpolation schemes for convection terms on unstructured grids is presented. It is based on weighted essentially non-oscillatory (WENO) reconstructions which can be applied to the evaluation of any field in finite volumes using its known cell-averaged values. Here, the algorithm handles convex cells in arbitrary three-dimensional meshes. The implementation is parallelized using the Message Passing Interface. All schemes are embedded in the code structure of OpenFOAM®, giving access to a huge open-source community and applicability to high-level programming. Several verification cases and applications of the scalar advection equation and the incompressible Navier-Stokes equations show the improved accuracy of the WENO approach due to a mapping of the stencil to a reference space without scaling effects. An efficiency analysis indicates an increased computational effort of high-order schemes in comparison to available high-resolution methods. However, the reconstruction time can be efficiently decreased when more processors are used.
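For readers unfamiliar with WENO, a minimal third-order, one-dimensional reconstruction conveys the core idea of blending candidate stencils by smoothness. The paper's schemes are far more general (arbitrary convex cells in 3D meshes), so this is only a conceptual sketch with textbook WENO3 coefficients.

```python
def weno3_reconstruct(um1, u0, up1, eps=1e-6):
    """Third-order WENO value at the interface i+1/2 from cell averages
    u_{i-1}, u_i, u_{i+1} (left-biased reconstruction)."""
    # candidate reconstructions from the two sub-stencils
    q0 = -0.5 * um1 + 1.5 * u0
    q1 = 0.5 * u0 + 0.5 * up1
    # smoothness indicators: large where the stencil crosses a jump
    b0 = (u0 - um1) ** 2
    b1 = (up1 - u0) ** 2
    # nonlinear weights built from the ideal linear weights 1/3 and 2/3
    a0 = (1.0 / 3.0) / (eps + b0) ** 2
    a1 = (2.0 / 3.0) / (eps + b1) ** 2
    return (a0 * q0 + a1 * q1) / (a0 + a1)

# Smooth (linear) data: both stencils agree and the result is exact
val = weno3_reconstruct(1.0, 2.0, 3.0)   # cell averages of u(x) = x
```

Near a discontinuity the weight of the stencil that crosses the jump collapses, which is what suppresses spurious oscillations.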

Computation doi: 10.3390/computation6010005

Authors: Matthias Schramm Bernhard Stoevesandt Joachim Peinke

The adjoint approach in gradient-based optimization combined with computational fluid dynamics is commonly applied in various engineering fields. In this work, the gradients are used for the design of a two-dimensional airfoil shape, where the aim is a change in lift and drag coefficient, respectively, to a given target value. The optimizations use the unconstrained quasi-Newton method with an approximation of the Hessian. The flow field is computed with a finite-volume solver where the continuous adjoint approach is implemented. A common assumption in this approach is the use of the same turbulent viscosity in the adjoint diffusion term as for the primal flow field. The effect of this so-called “frozen turbulence” assumption is compared to the results using adjoints to the Spalart–Allmaras turbulence model. The comparison is done at a Reynolds number of Re = 2 × 10^6 for two different airfoils at different angles of attack.
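The quasi-Newton machinery mentioned above can be sketched on a toy objective: a generic BFGS iteration with a backtracking line search. This is not the airfoil optimization itself; the objective function, step rule, and tolerances are invented for illustration.

```python
import numpy as np

def bfgs(f, grad, x0, iters=100, tol=1e-8):
    """Quasi-Newton minimization with a BFGS approximation of the inverse Hessian."""
    x = np.asarray(x0, dtype=float)
    n = len(x)
    H = np.eye(n)                      # inverse-Hessian approximation
    g = grad(x)
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                     # quasi-Newton search direction
        t = 1.0                        # backtracking (Armijo) line search
        while f(x + t * p) > f(x) + 1e-4 * t * (g @ p):
            t *= 0.5
        s = t * p
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        rho = 1.0 / (y @ s)
        I = np.eye(n)
        H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
            + rho * np.outer(s, s)     # BFGS update of the inverse Hessian
        x, g = x_new, g_new
    return x

# Toy quadratic objective standing in for the lift/drag cost functional
f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)])
xmin = bfgs(f, grad, [0.0, 0.0])
```

In the adjoint setting, `grad` would be supplied by the adjoint solver at the cost of roughly one extra flow solve, independent of the number of design variables.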

Computation doi: 10.3390/computation6010004

Authors: Computation Editorial Office

Peer review is an essential part in the publication process, ensuring that Computation maintains high quality standards for its published papers.

Computation doi: 10.3390/computation6010003

Authors: Rudolf Podgornik Julija Zavadlav Matej Praprotnik

Densely packed DNA arrays exhibit hexagonal and orthorhombic local packings, as well as a weakly first-order transition between them. While we have some understanding of the interactions between DNA molecules in aqueous ionic solutions, the structural details of their ordered phases and the mechanisms governing the phase transitions between them remain less well understood. Since at high DNA densities, i.e., small interaxial spacings, one can neither neglect the atomic details of the interacting macromolecular surfaces nor the atomic details of the intervening ionic solution, atomistic resolution is a sine qua non to properly describe and analyze the interactions between DNA molecules. In fact, in order to properly understand the details of the observed osmotic equation of state, one needs to implement multiple levels of organization, spanning the range from the molecular order of DNA itself, the possible ordering of counterions, and then all the way to the induced molecular ordering of the aqueous solvent, all coupled together by electrostatic, steric, thermal and direct hydrogen-bonding interactions. Multiscale simulations therefore appear as singularly suited to connect the microscopic details of this system with its macroscopic thermodynamic behavior. We review the details of the simulation of dense atomistically resolved DNA arrays with different packing symmetries and the ensuing osmotic equation of state obtained by enclosing a DNA array in a monovalent salt and multivalent (spermidine) counterions within a solvent permeable membrane, mimicking the behavior of DNA arrays subjected to external osmotic stress.
By varying the DNA density, the local packing symmetry, and the counterion type, we are able to analyze the osmotic equation of state together with the full structural characterization of the DNA subphase, the counterion distribution and the solvent structural order in terms of its different order parameters and consequently identify the most important contribution to the DNA-DNA interactions at high DNA densities.

Computation doi: 10.3390/computation6010002

Authors: Martin Skote Tze Sim Narasimalu Srikanth

The objective of this study is to investigate the temporal behavior of the pressure field of a stationary dry microburst-like wind phenomenon utilizing Unsteady Reynolds-averaged Navier-Stokes (URANS) numerical simulations. Using an axisymmetric steady impinging jet model, the dry microburst-like wind is simulated from the initial release of a steady downdraft flow until after the primary vortices have fully convected out of the stagnation region. The validated URANS results presented herein shed light on the temporal variation of the pressure field, which is in agreement with the qualitative description obtained from field measurements. The results have an impact on understanding the wind load on structures from the initial touch-down phase of the downdraft from a microburst. The investigation is based on CFD techniques, together with a simple impinging jet model that does not include any microphysical processes. Unlike previous investigations, this study focuses on the transient pressure field from a downdraft without obstacles.

Computation doi: 10.3390/computation6010001

Authors: Rufina Tretyakova Rostislav Savinkov Gennady Lobov Gennady Bocharov

The lymphatic system is a body-wide network of lymphatic vessels and lymphoid organs. The complexity of the structural and functional organization of the lymphatic system implies the necessity of using computational modeling approaches to unravel the mechanisms of its regulation in quantitative terms. Although it is a vital part of the circulatory and immune systems, the lymphatic system remains poorly investigated as a mathematical modeling object. Modeling of the lymphatic vessel network needs to be established using a systematic approach in order to advance the model-driven research of this important physiological system. In our study, we elucidate the key general features underlying the 3D structural organization of the lymphatic system, develop computational geometry and network graph models of the human lymphatic system based on available anatomical data (from the PlasticBoy project), which provide an estimate of the structure of the lymphatic system, and analyze the topological properties of the resulting models.

Computation doi: 10.3390/computation5040049

Authors: Karlheinz Schwarz

The routinely made assumptions for simulating solid materials are briefly summarized, since they need to be critically assessed when new aspects become important, such as excited states, finite temperature, time-dependence, etc. The significantly higher computer power combined with improved experimental data open new areas for interdisciplinary research, for which new ideas and concepts are needed.

Computation doi: 10.3390/computation5040048

Authors: Christoph Riesinger Arash Bakhtiari Martin Schreiber Philipp Neumann Hans-Joachim Bungartz

Heterogeneous clusters are a widely utilized class of supercomputers assembled from different types of computing devices, for instance CPUs and GPUs, providing a huge computational potential. Programming them in a scalable way exploiting the maximal performance introduces numerous challenges such as optimizations for different computing devices, dealing with multiple levels of parallelism, the application of different programming models, work distribution, and hiding of communication with computation. We utilize the lattice Boltzmann method for fluid flow as a representative of a scientific computing application and develop a holistic implementation for large-scale CPU/GPU heterogeneous clusters. We review and combine a set of best practices and techniques ranging from optimizations for the particular computing devices to the orchestration of tens of thousands of CPU cores and thousands of GPUs. Eventually, we come up with an implementation using all the available computational resources for the lattice Boltzmann method operators. Our approach shows excellent scalability behavior making it future-proof for heterogeneous clusters of the upcoming architectures on the exaFLOPS scale. Parallel efficiencies of more than 90% are achieved leading to 2604.72 GLUPS utilizing 24,576 CPU cores and 2048 GPUs of the CPU/GPU heterogeneous cluster Piz Daint and computing more than 6.8 × 10^9 lattice cells.

Computation doi: 10.3390/computation5040047

Authors: Timothy Sands

By reversing paradigms that normally utilize mathematical models as the basis for nonlinear adaptive controllers, this article describes using the controller to serve as a novel computational approach for mathematical system identification. System identification usually begins with the dynamics, and then seeks to parameterize the mathematical model in an optimization relationship that produces estimates of the parameters that minimize a designated cost function. The proposed methodology uses a DC motor with a minimum-phase mathematical model controlled by a self-tuning regulator without model pole cancelation. The normal system identification process is briefly articulated by parameterizing the system for least squares estimation that includes an allowance for exponential forgetting to deal with time-varying plants. Next, towards the proposed approach, the Diophantine equation is derived for an indirect self-tuner where feedforward and feedback controls are both parameterized in terms of the motor’s mathematical model. As the controller seeks to nullify tracking errors, the assumed plant parameters are adapted and quickly converge on the correct parameters of the motor’s mathematical model. Next, a more challenging non-minimum phase system is investigated, and the earlier implemented technique is modified utilizing a direct self-tuner with an increased pole excess. The nominal method experiences control chattering (an undesirable characteristic that could potentially damage the motor during testing), while the increased pole excess eliminates the control chattering, yet maintains effective mathematical system identification. This novel approach permits algorithms normally used for control to instead be used effectively for mathematical system identification.
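The least-squares estimation with exponential forgetting mentioned above can be sketched as a standard recursive least-squares (RLS) update. The regression data below are synthetic, not the DC motor model from the paper; only the update equations reflect the described technique.

```python
import numpy as np

def rls(phis, ys, n, lam=0.98):
    """Recursive least squares with exponential forgetting factor lam < 1."""
    theta = np.zeros(n)                       # parameter estimate
    P = 1e4 * np.eye(n)                       # covariance (large = weak prior)
    for phi, y in zip(phis, ys):
        phi = np.asarray(phi, dtype=float)
        k = P @ phi / (lam + phi @ P @ phi)   # gain vector
        theta = theta + k * (y - phi @ theta) # correct by the prediction error
        P = (P - np.outer(k, phi @ P)) / lam  # discount old information
    return theta

# Identify y = 2.0*u1 - 0.5*u2 from noise-free synthetic regressor data
rng = np.random.default_rng(0)
U = rng.standard_normal((200, 2))
Y = U @ np.array([2.0, -0.5])
theta = rls(U, Y, n=2)
```

The forgetting factor discounts old data geometrically, which is what allows the estimator to track time-varying plant parameters.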

Computation doi: 10.3390/computation5040046

Authors: David Sadowsky Andrew Abboud Anthony Cyr Lena Vodovotz Paulo Fontes Ruben Zamora Yoram Vodovotz

Extracorporeal organ perfusion, in which organs are preserved in an isolated, ex vivo environment over an extended time-span, is a concept that has led to the development of numerous alternative preservation protocols designed to better maintain organ viability prior to transplantation. These protocols offer researchers a novel opportunity to obtain extensive sampling of isolated organs, free from systemic influences. Data-driven computational modeling is a primary means of integrating the extensive and multivariate data obtained in this fashion. In this review, we focus on the application of dynamic data-driven computational modeling to liver pathophysiology and transplantation based on data obtained from ex vivo organ perfusion.

Computation doi: 10.3390/computation5040045

Authors: Nikolai Bessonov Vitaly Volpert

This paper is devoted to modelling tissue growth with a deformable cell model. Each cell represents a polygon with particles located at its vertices. Stretching, bending and pressure forces act on particles and determine their displacement. Pressure-dependent cell proliferation is considered. Various patterns of growing tissue are observed. An application of the model to tissue regeneration is illustrated. Approximate analytical models of tissue growth are developed.

Computation doi: 10.3390/computation5040044

Authors: Animesh Agarwal Brooks Rabideau Ahmed Ismail

We present a hierarchical coarse-graining framework for modeling semidilute polymer solutions, based on the wavelet-accelerated Monte Carlo (WAMC) method. This framework forms a hierarchy of resolutions to model polymers at length scales that cannot be reached via atomistic or even standard coarse-grained simulations. Previously, it was applied to simulations examining the structure of individual polymer chains in solution using up to four levels of coarse-graining (Ismail et al., J. Chem. Phys., 2005, 122, 234901 and Ismail et al., J. Chem. Phys., 2005, 122, 234902), recovering the correct scaling behavior in the coarse-grained representation. In the present work, we extend this method to the study of polymer solutions, deriving the bonded and non-bonded potentials between coarse-grained superatoms from the single chain statistics. A universal scaling function is obtained, which does not require recalculation of the potentials as the scale of the system is changed. To model semidilute polymer solutions, we assume the intermolecular potential between the coarse-grained beads to be equal to the non-bonded potential, which is a reasonable approximation in the case of semidilute systems. Thus, a minimal input of microscopic data is required for simulating the systems at the mesoscopic scale. We show that coarse-grained polymer solutions can reproduce results obtained from the more detailed atomistic system without a significant loss of accuracy.

Computation doi: 10.3390/computation5040042

Authors: Matthew Chan Rogelio Cuevas-Saavedra Debajit Chakraborty Paul Ayers

We propose a limited-memory quasi-Newton method using the bad Broyden update and apply it to the nonlinear equations that must be solved to determine the effective Fermi momentum in the weighted density approximation for the exchange energy density functional. This algorithm has advantages for nonlinear systems of equations with diagonally dominant Jacobians, because it is easy to generalize the method to allow for periodic updates of the diagonal of the Jacobian. Systematic tests of the method for atoms show that one can determine the effective Fermi momentum at thousands of points in less than fifteen iterations.
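A minimal sketch of the "bad" Broyden update, which maintains an approximate inverse Jacobian directly, is shown below on a toy diagonally dominant system. This is not the paper's limited-memory, weighted-density-approximation solver; the test system and tolerances are invented for illustration.

```python
import numpy as np

def broyden_bad(F, x0, iters=100, tol=1e-10):
    """'Bad' Broyden method: rank-one updates of an approximate inverse Jacobian."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x))                        # inverse-Jacobian approximation
    fx = F(x)
    for _ in range(iters):
        if np.linalg.norm(fx) < tol:
            break
        s = -B @ fx                           # quasi-Newton step
        x_new = x + s
        f_new = F(x_new)
        y = f_new - fx
        B += np.outer(s - B @ y, y) / (y @ y) # bad Broyden rank-one update
        x, fx = x_new, f_new
    return x

# Toy diagonally dominant nonlinear system (invented for illustration)
def F(x):
    return np.array([x[0] + 0.1 * x[1] ** 2 - 1.0,
                     x[1] + 0.1 * x[0] ** 2 - 2.0])

sol = broyden_bad(F, [0.0, 0.0])
```

Because the Jacobian here is close to the identity, the initial guess B = I is already good, which mirrors the diagonal-dominance argument made in the abstract.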

Computation doi: 10.3390/computation5040043

Authors: Xuhua Xia

A self-organizing map (SOM) is an artificial neural network algorithm that can learn from the training data consisting of objects expressed as vectors and perform non-hierarchical clustering to represent input vectors into discretized clusters, with vectors assigned to the same cluster sharing similar numeric or alphanumeric features. SOM has been used widely in transcriptomics to identify co-expressed genes as candidates for co-regulated genes. I envision SOM to have great potential in characterizing heterogeneous sequence motifs, and aim to illustrate this potential by a parallel presentation of SOM with a set of numerical vectors and a set of equal-length sequence motifs. While there are numerous biological applications of SOM involving numerical vectors, few studies have used SOM for heterogeneous sequence motif characterization. This paper is intended to encourage (1) researchers to study SOM in this new domain and (2) computer programmers to develop user-friendly motif-characterization SOM tools for biologists.
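To make the SOM procedure concrete, here is a minimal one-dimensional SOM trained on toy numeric vectors. The map size, decay schedules, and data are invented; applying this to sequence motifs, as the paper proposes, would additionally require encoding the sequence characters as vectors.

```python
import math
import random

def train_som(data, n_units, dim, epochs=200, seed=0):
    """Train a one-dimensional self-organizing map on equal-length vectors."""
    rng = random.Random(seed)
    w = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for e in range(epochs):
        alpha = 0.5 * (1.0 - e / epochs)                        # decaying learning rate
        sigma = max(0.5, (n_units / 2.0) * (1.0 - e / epochs))  # shrinking neighborhood
        for x in data:
            # best-matching unit: the weight vector closest to the input
            bmu = min(range(n_units),
                      key=lambda i: sum((w[i][k] - x[k]) ** 2 for k in range(dim)))
            for i in range(n_units):
                h = math.exp(-((i - bmu) ** 2) / (2.0 * sigma ** 2))
                for k in range(dim):
                    w[i][k] += alpha * h * (x[k] - w[i][k])     # pull toward input
    return w

# Two well-separated toy clusters; trained units should split between them
data = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
        [1.0, 1.0], [0.9, 1.0], [1.0, 0.9]]
weights = train_som(data, n_units=4, dim=2)
```

The neighborhood function is what distinguishes a SOM from plain k-means: neighboring units on the map are dragged along with the winner, producing a topologically ordered clustering.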

Computation doi: 10.3390/computation5030041

Authors: DooJin Jeon KiTae Kim SangEul Han

Among the various blast load equations, the Kingery-Bulmash equation is applicable to both a free-air burst and a surface burst, and enables calculation of the parameters of a pressure-time history curve. On the other hand, this equation is quite complicated. This paper proposes a modified equation that may replace the conventional Kingery-Bulmash equation. The proposed modified equation, which was constructed by curve-fitting the original equation, requires a brief calculation process with a simpler form than the original. The modified equation is also applicable to both types of bursts and has the same calculable scaled distance range as the conventional equation. The calculation results obtained using the modified equation were similar to those obtained from the original equation, with less than a 1% difference.
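The curve-fitting strategy described above — sampling the original equation over its scaled-distance range and fitting a simpler polynomial in log-log space — can be sketched generically. The stand-in function and fitted coefficients below are invented for illustration and are not the Kingery-Bulmash parameters.

```python
import numpy as np

# Invented stand-in for a "complicated" blast-parameter curve (illustration only)
def original(Z):
    return 1000.0 * Z ** -1.5 * np.exp(-0.1 * Z)

Z = np.logspace(-1.0, 1.0, 200)        # sampled scaled-distance range
coeffs = np.polyfit(np.log10(Z), np.log10(original(Z)), deg=8)

def modified(Z):
    """Simpler curve-fit replacement, evaluated in log-log space."""
    return 10.0 ** np.polyval(coeffs, np.log10(Z))

# Maximum relative deviation of the fit over the sampled range
max_rel_err = float(np.max(np.abs(modified(Z) - original(Z)) / original(Z)))
```

Fitting in log-log space keeps the relative (rather than absolute) error uniform across a quantity that decays over several orders of magnitude, which is why the sub-1% agreement reported in the paper is achievable with a compact expression.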

Computation doi: 10.3390/computation5030040

Authors: Mohammed Hassan Mohamed Hamada

Accuracy improvement is among the primary key research focuses in the area of recommender systems. Traditionally, recommender systems work on two sets of entities, Users and Items, to estimate a single rating that represents a user’s acceptance of an item. This technique was later extended to multi-criteria recommender systems that use an overall rating from multi-criteria ratings to estimate the degree of acceptance by users for items. A primary concern that is still open to the recommender systems community is finding suitable optimization algorithms that can explore the relationships between multiple ratings to compute an overall rating. One of the approaches for doing this is to treat the overall rating as an aggregation of the multiple criteria ratings. Given this assumption, this paper proposes using feed-forward neural networks to predict the overall rating. Five powerful training algorithms have been tested, and the results of their performance are analyzed and presented in this paper.
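Under the paper's assumption that the overall rating aggregates the criteria ratings, a minimal feed-forward network trained by plain gradient descent looks like the following. The data, the architecture, and the linear "true" aggregation are invented for illustration; the paper evaluates five other, more powerful training algorithms.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(200, 3))       # three criteria ratings per sample
y = X @ np.array([0.5, 0.3, 0.2])              # assumed "true" linear aggregation

# One hidden tanh layer, linear output; full-batch gradient descent on half-MSE
W1 = 0.5 * rng.standard_normal((3, 8))
b1 = np.zeros(8)
W2 = 0.5 * rng.standard_normal(8)
b2 = 0.0
lr = 0.1

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)                   # forward pass
    err = h @ W2 + b2 - y                      # prediction error
    gW2 = h.T @ err / len(X)                   # backpropagation
    gb2 = err.mean()
    dh = np.outer(err, W2) * (1.0 - h ** 2)
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W1 -= lr * gW1
    b1 -= lr * gb1
    W2 -= lr * gW2
    b2 -= lr * gb2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

The network learns the aggregation function from examples rather than assuming a fixed weighted average, which is the point of using neural networks for this task.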

Computation doi: 10.3390/computation5030039

Authors: Volodymyr Turkowski Naseem Din Talat Rahman

In this work, we summarize the recent progress made in constructing time-dependent density-functional theory (TDDFT) exchange-correlation (XC) kernels capable of describing excitonic effects in semiconductors, and apply these kernels in two important cases: a “classic” bulk semiconductor, GaAs, with weakly-bound excitons, and a novel two-dimensional material, MoS2, with very strongly-bound excitonic states. Namely, after a brief review of the standard many-body semiconductor Bloch and Bethe-Salpeter equations (SBE and BSE) and the combined TDDFT+BSE approach, we proceed with details of the proposed pure TDDFT XC kernels for excitons. We analyze the reasons for the successes and failures of these kernels in describing the excitons in bulk GaAs and monolayer MoS2, and conclude with a discussion of possible alternative kernels capable of accurately describing the bound electron-hole states in both bulk and two-dimensional materials.

Computation doi: 10.3390/computation5030038

Authors: Le Wang Qiang Pan Jie Chen Shunsheng Yang

A novel computational fluid dynamics-population balance model (CFD-PBM) for the simulation of gas mixing in activated sludge (i.e., an opaque non-Newtonian liquid) in a bubble column is developed and described to solve the problem of measuring the hydrodynamic behavior of opaque non-Newtonian liquid-gas two-phase flow. We study the effects of the inlet position and liquid-phase properties (water/activated sludge) on various characteristics, such as liquid flow field, gas hold-up, liquid dynamic viscosity, and volume-averaged bubble diameter. As the inlet position changed, two symmetric vortices gradually became a single main vortex in the flow field in the bubble column. In the simulations, when water was in the liquid phase, the global gas hold-up was higher than when activated sludge was in the liquid phase in the bubble column, and a flow field that was dynamic with time was observed in the bubble column. Additionally, when activated sludge was used as the liquid phase, no periodic velocity changes were found. When the inlet position was varied, the non-Newtonian liquid phase had different peak values and distributions of (dynamic) liquid viscosity in the bubble column, which were related to the gas hold-up. The high gas hold-up zone corresponded to the low dynamic viscosity zone. Finally, when activated sludge was in the liquid phase, the volume-averaged bubble diameter was much larger than when water was in the liquid phase.

Computation doi: 10.3390/computation5030037

Authors: Stuart Bartlett

The lattice Boltzmann method is an efficient computational fluid dynamics technique that can accurately model a broad range of complex systems. As well as single-phase fluids, it can simulate thermohydrodynamic systems and passive scalar advection. In recent years, it has also gained attention as a means of simulating chemical phenomena, as interest in self-organization processes has increased. This paper presents a widely-used and versatile lattice Boltzmann model that can simultaneously incorporate fluid dynamics, heat transfer, buoyancy-driven convection, passive scalar advection, chemical reactions and enthalpy changes. All of these effects interact in a physically accurate framework that is simple to code and readily parallelizable. As well as a complete description of the model equations, several example systems are presented in order to demonstrate the accuracy and versatility of the method. New simulations, which analyzed the effect of a reversible reaction on the transport properties of a convecting fluid, are also described in detail. This extra chemical degree of freedom was utilized by the system to augment its net heat flux. The numerical method outlined in this paper can be readily deployed for a vast range of complex flow problems, spanning a variety of scientific disciplines.
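As a minimal, runnable taste of the method, here is a one-dimensional (D1Q3) BGK lattice Boltzmann sketch for pure scalar diffusion. The paper's model couples many more effects (flow, buoyancy, reactions, enthalpy changes), so this toy code only illustrates the collide-and-stream structure; the grid size and relaxation time are invented.

```python
import numpy as np

# D1Q3 lattice: rest particle plus right- and left-moving populations
n = 64
w = np.array([4.0, 1.0, 1.0]) / 6.0     # lattice weights
c = [0, 1, -1]                          # lattice velocities
tau = 1.0                               # BGK relaxation time
                                        # (diffusivity (tau - 0.5)/3 in lattice units)
rho = np.ones(n)
rho[n // 2] = 2.0                       # scalar field with an initial spike
f = w[:, None] * rho[None, :]           # start from the zero-velocity equilibrium

for _ in range(100):
    rho = f.sum(axis=0)                 # macroscopic scalar = sum over populations
    feq = w[:, None] * rho[None, :]     # zero-velocity equilibrium distribution
    f += (feq - f) / tau                # BGK collision step
    for i in range(3):
        f[i] = np.roll(f[i], c[i])      # streaming step, periodic boundaries

rho = f.sum(axis=0)                     # diffused scalar field
```

Collision conserves the scalar exactly and streaming merely shifts populations, so the total scalar content is preserved to round-off while the spike spreads out, which is the local, conservative structure that makes the method so easy to parallelize.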

Computation doi: 10.3390/computation5030036

Authors: Lorenzo Fortunato Ilyas Inci José-Antonio Lay Andrea Vitturi

The program TFF calculates stripping single-particle form factors for one-neutron transfer in prior representation with appropriate perturbative treatment of recoil. Coupled equations are then integrated along a semiclassical trajectory to obtain one- and two-neutron transfer amplitudes and probabilities within first- and second-order perturbation theory. Total and differential cross-sections are then calculated by folding with a transmission function (obtained from a phenomenological imaginary absorption potential). The program description, user instructions and examples are discussed.

Computation doi: 10.3390/computation5030035

Authors: Mirjam Glessmer Christian Janßen

This article gives an overview of the diverse range of teaching applications that can be realized using an interactive lattice Boltzmann simulation tool in fluid mechanics instruction and outreach. In an inquiry-based learning framework, examples are given of learning scenarios that address instruction on scientific results, scientific methods or the scientific process at varying levels of student activity, from consuming to applying to researching. Interactive live demonstrations on portable hardware enable new and innovative teaching concepts for fluid mechanics, also for large audiences and in the early stages of the university education. Moreover, selected examples successfully demonstrate that the integration of high-fidelity CFD methods into fluid mechanics teaching facilitates high-quality student research work within reach of the current state of the art in the respective field of research.

Computation doi: 10.3390/computation5030034

Authors: Georgios Drakopoulos Andreas Kanavos Ioannis Karydis Spyros Sioutas Aristidis G. Vrahatis

Biomedicine is a pillar of the collective, scientific effort of human self-discovery, as well as a major source of humanistic data codified primarily in biomedical documents. Despite the rigid structure of these documents, maintaining and updating a collection of considerable size is a task of overwhelming complexity, mandating efficient information retrieval integrated with clustering schemes. The latter should work natively with inherently multidimensional data and higher-order interdependencies. Additionally, past experience indicates that clustering should be semantically enhanced. Tensor algebra is the key to extending the current term-document model to more dimensions. In this article, an alternative keyword-term-document strategy is proposed, based on the scientometric observation that keywords typically possess more expressive power than ordinary text terms, and whose algorithmic cornerstones are third-order tensors and MeSH ontological functions. This strategy has been compared against a baseline using two different biomedical datasets, the TREC (Text REtrieval Conference) genomics benchmark and a large custom set of cognitive science articles from PubMed.

Computation doi: 10.3390/computation5030033

Authors: Matthew Marko

The author demonstrates a stable Lagrangian solid modeling method, tracking the interactions of solid mass particles rather than using a meshed grid. This numerical method avoids the problem of tensile instability often seen with smooth particle applied mechanics by having the solid particles apply stresses expected with Hooke’s law, as opposed to using a smoothing function for neighboring solid particles. This method has been tested successfully with a bar in tension, compression, and shear, as well as a disk compressed into a flat plate, and the numerical model consistently matched the analytical Hooke’s law as well as Hertz contact theory for all examples. The solid modeling numerical method was then built into a 2-D model of a pressure vessel, which was tested with liquid water particles under pressure and simulated with smoothed particle hydrodynamics. This simulation was stable, and demonstrated the feasibility of Lagrangian specification modeling for fluid–solid interactions.

]]>Computation doi: 10.3390/computation5030032

Authors: William Seffens

Much of biology-inspired computer science is based on the Central Dogma, as implemented with genetic algorithms or evolutionary computation. That 60-year-old biological principle based on the genome, transcriptome and proteome is becoming overshadowed by a new paradigm of complex ordered associations and connections between layers of biological entities, such as interactomes, metabolomes, etc. We define a new hierarchical concept as the “Connectosome”, and propose new avenues of computational data structures based on a conceptual framework called the “Grand Ensemble”, which contains the Central Dogma as a subset. Connectedness and communication within and between living or biology-inspired systems comprise ensembles from which a physical computing system can be conceived. In this framework the delivery of messages is filtered by size and a simple and rapid semantic analysis of their content. This work aims to initiate discussion on the Grand Ensemble in network biology as a representation of a Persistent Turing Machine. This framework, which adds interaction and persistency to the classic Turing-machine model, uses metrics based on resilience that have application to dynamic optimization problem solving in Genetic Programming.

]]>Computation doi: 10.3390/computation5020031

Authors: Aris Lanaridis Giorgos Siolas Andreas Stafylopatis

Pattern classification is a central problem in machine learning, with a wide array of applications, and rule-based classifiers are one of the most prominent approaches. Among these classifiers, Incremental Rule Learning algorithms combine the advantages of the classic Pittsburgh and Michigan approaches, while, on the other hand, classifiers using fuzzy membership functions often result in systems with fewer rules and better generalization ability. To discover an optimal set of rules, learning classifier systems have always relied on bio-inspired models, mainly genetic algorithms. In this paper we propose a classification algorithm based on an efficient bio-inspired approach, Artificial Immune Networks. The proposed algorithm encodes the patterns as antigens, and evolves a set of antibodies, representing fuzzy classification rules with ellipsoidal surfaces, to cover the problem space. The innate immune mechanisms of affinity maturation and diversity preservation are modified and adapted to the classification context, resulting in a classifier that combines the advantages of both incremental rule learning and fuzzy classifier systems. The algorithm is compared to a number of state-of-the-art rule-based classifiers, as well as Support Vector Machines (SVM), producing very satisfying results, particularly in problems with a large number of attributes and classes.
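A minimal sketch of an ellipsoidal fuzzy rule, assuming an axis-aligned ellipsoid and an exponential membership decay (the paper's antibodies are richer than this and also evolve their shape):

```python
import math

def ellipsoid_membership(x, center, radii):
    """Fuzzy membership of pattern x in an ellipsoidal rule:
    1 at the centre, decaying with the normalized squared distance.
    Axis-aligned radii only; a full antibody would also encode
    rotation of the ellipsoid."""
    d2 = sum(((xi - ci) / ri) ** 2 for xi, ci, ri in zip(x, center, radii))
    return math.exp(-d2)

center, radii = (0.0, 0.0), (2.0, 1.0)
print(ellipsoid_membership((0.0, 0.0), center, radii))  # 1.0 at the centre
print(ellipsoid_membership((2.0, 0.0), center, radii))  # exp(-1) ~ 0.368
```

A pattern would then be classified by the rule (antibody) giving it the highest membership.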

]]>Computation doi: 10.3390/computation5020029

Authors: Mohammed Benali Kanoun Souraya Goumri-Said

A systematic study of the structural, electronic, and vibrational properties of the new ternary dicerium selenide dinitride, Ce2SeN2, and the predicted compounds Ce2SN2 and Ce2TeN2 is performed using first-principles calculations within the Perdew–Burke–Ernzerhof functional with Hubbard correction. Our calculated structural parameters agree well with the experimental measurements. We predict that all ternary dicerium chalcogenide nitrides are thermodynamically stable. The predicted elastic constants and related mechanical properties demonstrate their mechanical stability as well. Moreover, our results show that the Ce2XN2 compounds are insulators. Trends in the structural parameters, electronic structures, and phonon dispersion are discussed in terms of the characteristics of the Ce (4f) states.

]]>Computation doi: 10.3390/computation5020030

Authors: Seshaditya A. Luca Ghiringhelli Luigi Delle Site

We consider a gas of interacting electrons in the limit of nearly uniform density and treat the one dimensional (1D), two dimensional (2D) and three dimensional (3D) cases. We focus on the determination of the correlation part of the kinetic functional by employing a Monte Carlo sampling technique of electrons in space based on an analytic derivation via the Levy-Lieb constrained search principle. Of particular interest is the question of the behaviour of the functional as one passes from 1D to 3D; according to the basic principles of Density Functional Theory (DFT), the form of the universal functional should be independent of the dimensionality. However, in practice the straightforward use of current approximate functionals in different dimensions is problematic. Here, we show that in going from the 3D to the 2D case the functional form remains consistent (a concave function), but in 1D it becomes convex; such a drastic difference is peculiar to 1D electron systems, as it is for other quantities. Given the interesting behaviour of the functional, this study represents a basic first-principle approach to the problem and suggests further investigations using highly accurate (though expensive) many-electron computational techniques, such as Quantum Monte Carlo.

]]>Computation doi: 10.3390/computation5020028

Authors: Jianmin Tao Giovanni Vignale Jian-Xin Zhu

The foundation of many approximations in time-dependent density functional theory (TDDFT) lies in the theory of the homogeneous electron gas. However, unlike the ground-state DFT, in which the exchange-correlation potential of the homogeneous electron gas is known exactly via the quantum Monte Carlo calculation, the time-dependent or frequency-dependent dynamical potential of the homogeneous electron gas has not been known exactly, due to the absence of a similar variational principle for excited states. In this work, we present a simple geometric derivation of the time-dependent dynamical exchange-correlation potential for the homogeneous system. With this derivation, the dynamical potential can be expressed in terms of the stress tensor, offering an alternative to calculate the bulk and shear moduli, two key input quantities in TDDFT.

]]>Computation doi: 10.3390/computation5020027

Authors: Guocai Tian Yuxiang Mo Jianmin Tao

The accurate first-principles prediction of the energetic properties of molecules and clusters from efficient semilocal density functionals is of broad interest. Here we study the performance of the non-empirical Tao-Mo (TM) density functional on binding energies and excitation energies of titanium dioxide and water clusters, as well as reaction barrier heights. To make a comparison, a combination of the TM exchange part with the TPSS (Tao–Perdew–Staroverov–Scuseria) correlation functional—called TMTPSS—is also included in this study. Our calculations show that the best binding energies of titanium dioxide are predicted by PBE0 (Perdew–Burke–Ernzerhof hybrid functional), TM, and TMTPSS with nearly the same accuracy, while B3LYP (Becke's three-parameter exchange with Lee–Yang–Parr correlation), TPSS, and PBE (Perdew–Burke–Ernzerhof) yield larger mean absolute errors. For excitation energies of titanium dioxide and water clusters, PBE0 and B3LYP are the most accurate functionals, outperforming semilocal functionals due to the nonlocality problem suffered by the latter. Nevertheless, the TMTPSS and TM functionals are still accurate semilocal methods, improving upon the commonly-used TPSS and PBE functionals. We also find that the best reaction barrier heights are predicted by PBE0 and B3LYP, thanks to the nonlocality incorporated into these two hybrid functionals, but TMTPSS and TM are clearly more accurate than SCAN (Strongly Constrained and Appropriately Normed), TPSS, and PBE, suggesting the good performance of TM and TMTPSS for physically different systems and properties.

]]>Computation doi: 10.3390/computation5020026

Authors: Michalis Papakostas Evaggelos Spyrou Theodoros Giannakopoulos Giorgos Siantikos Dimitrios Sgouropoulos Phivos Mylonas Fillia Makedon

Emotion recognition from speech may play a crucial role in many applications related to human–computer interaction or understanding the affective state of users in certain tasks, where other modalities such as video or physiological parameters are unavailable. In general, a human's emotions may be recognized using several modalities, such as analyzing facial expressions, speech, physiological parameters (e.g., electroencephalograms, electrocardiograms), etc. However, measuring these modalities may be difficult, obtrusive or require expensive hardware. In that context, speech may be the best alternative modality in many practical applications. In this work we present an approach that uses a Convolutional Neural Network (CNN) functioning as a visual feature extractor and trained using raw speech information. In contrast to traditional machine learning approaches, CNNs are responsible for identifying the important features of the input, thus making hand-crafted feature engineering optional in many tasks. In this paper, no features other than the spectrogram representations are required, and hand-crafted features were extracted only to validate our method. Moreover, the approach does not require any linguistic model and is not specific to any particular language. We compare the proposed approach using cross-language datasets and demonstrate that it is able to provide superior results compared to traditional approaches that use hand-crafted features.

]]>Computation doi: 10.3390/computation5020025

Authors: Ioan Sarbu Anton Iosif

The dual reciprocity method (DRM) is a highly efficient numerical method of transforming domain integrals arising from the non-homogeneous term of the Poisson equation into equivalent boundary integrals. In this paper, the velocity and temperature fields of laminar forced heat convection in a concentric annular tube, with constant heat flux boundary conditions, have been studied using numerical simulations. The DRM has been used to solve the governing equation, which is expressed in the form of a Poisson equation. A test problem is employed to verify the DRM solutions with different boundary element discretizations and numbers of internal points. The results of the numerical simulations are discussed and compared with exact analytical solutions. Good agreement between the numerical results and exact solutions is evident, as the maximum relative errors are less than 5–6%, and the R²-values are greater than 0.999 in all cases. These results confirm the effectiveness and accuracy of the proposed numerical model, which is based on the DRM.

]]>Computation doi: 10.3390/computation5020024

Authors: Seyyed Najmabadi Philipp Offenhäuser Moritz Hamann Guhathakurta Jajnabalkya Fabian Hempert Colin Glass Sven Simon

Computational fluid dynamics simulations involve large state data, leading to performance degradation due to data transfer times, while requiring large disk space. To alleviate the situation, an adaptive lossy compression algorithm has been developed, which is based on regions of interest. This algorithm uses prediction-based compression and exploits the temporal coherence between subsequent simulation frames. The difference between the actual value and the predicted value is adaptively quantized and encoded. The adaptation is in line with user requirements, which consist of the acceptable inaccuracy, the regions of interest and the required compression throughput. The data compression algorithm was evaluated with simulation data obtained by the discontinuous Galerkin spectral element method. We analyzed the performance, compression ratio and inaccuracy introduced by the lossy compression algorithm. The post-processing analysis shows high compression ratios with reasonable quantization errors.
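The predict-quantize-encode loop can be sketched as follows, assuming the simplest possible predictor (the value from the previous frame) and a uniform quantization step standing in for the user's acceptable inaccuracy; this is an illustration of the general technique, not the evaluated algorithm.

```python
def compress(frame, prev_frame, step):
    """Predict each value from the previous frame (temporal coherence)
    and uniformly quantize the residual with the user-chosen step;
    larger steps give higher compression ratios but larger errors."""
    return [round((v - p) / step) for v, p in zip(frame, prev_frame)]

def decompress(codes, prev_frame, step):
    return [p + c * step for c, p in zip(codes, prev_frame)]

prev = [1.00, 2.00, 3.00]
curr = [1.04, 2.11, 2.97]
codes = compress(curr, prev, step=0.05)
recon = decompress(codes, prev, step=0.05)
print(codes)                                         # small integers, cheap to encode
print(max(abs(a - b) for a, b in zip(curr, recon)))  # bounded by step/2
```

The small residual integers are what an entropy coder would then compress; region-of-interest adaptivity would simply vary `step` per region.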

]]>Computation doi: 10.3390/computation5020023

Authors: Ehsan Kian Far Martin Geier Konstantin Kutscher Manfred Krafczyk

A detailed numerical study of turbulent flow through a micro-orifice is presented in this work. The flow becomes turbulent due to the orifice at the considered Reynolds numbers (∼10⁴). The obtained flow rates are in good agreement with the experimental measurements. The discharge coefficient and the pressure loss are presented for two input pressures. The laminar stress and the generated turbulent stresses are investigated in detail, and the location of the vena contracta is quantitatively reproduced.

]]>Computation doi: 10.3390/computation5020022

Authors: Amir Abdol Damjan Cicin-Sain Jaap Kaandorp Anton Crombach

Efficient network inference is one of the challenges of current-day biology. Its application to the study of development has seen noteworthy success, yet a multicellular context, tissue growth, and cellular rearrangements impose additional computational costs and prohibit a wide application of current methods. Therefore, reducing computational cost and providing quick feedback at intermediate stages are desirable features for network inference. Here we propose a hybrid approach composed of two stages: exploration with scatter search and exploitation of intermediate solutions with low temperature simulated annealing. We test the approach on the well-understood process of early body plan development in flies, focusing on the gap gene network. We compare the hybrid approach to simulated annealing, a method of network inference with a proven track record. We find that scatter search performs well at exploring parameter space and that low temperature simulated annealing refines the intermediate results into excellent model fits. From this we conclude that for poorly-studied developmental systems, scatter search is a valuable tool for exploration and accelerates the elucidation of gene regulatory networks.
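A toy one-dimensional sketch of the two-stage idea, with random sampling standing in for a real scatter-search population and a fixed low temperature for the annealing stage (all parameter values are illustrative assumptions, not those of the paper):

```python
import math
import random

def hybrid_fit(cost, bounds, n_scatter=50, n_anneal=200, seed=0):
    """Two-stage sketch: (1) scatter a diverse set of candidate
    parameters over the search box and keep the best (exploration);
    (2) refine it with low-temperature simulated annealing, i.e.
    small perturbations that rarely accept uphill moves (exploitation)."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Stage 1: exploration over the parameter box.
    best = min((rng.uniform(lo, hi) for _ in range(n_scatter)), key=cost)
    # Stage 2: exploitation around the intermediate solution.
    temp = 0.01
    for _ in range(n_anneal):
        cand = best + rng.gauss(0.0, 0.01 * (hi - lo))
        delta = cost(cand) - cost(best)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            best = cand
    return best

# Toy objective with its minimum at x = 3:
x = hybrid_fit(lambda v: (v - 3.0) ** 2, bounds=(-10.0, 10.0))
print(x)  # close to 3
```

In the paper the "cost" is the mismatch between simulated and observed gap gene expression, and the search space has many dimensions, but the division of labour between the two stages is the same.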

]]>Computation doi: 10.3390/computation5020021

Authors: Iliya Georgiev Ivo Georgiev

The paper proposes an information technology framework for the development of an embedded remote system for non-destructive observation and study of sensitive archaeological sites. The overall concept and motivation are described. The general hardware layout and software configuration are presented. The paper concentrates on the implementation of the following informational technology components: (a) a geographically unique identification scheme supporting a global key space for a key-value store; (b) a common method for octree modeling for spatial geometrical models of the archaeological artifacts, and abstract object representation in the global key space; (c) a broadcast of the archaeological information as an Extensible Markup Language (XML) stream over the Web for worldwide availability; and (d) a set of testing methods increasing the fault tolerance of the system. This framework can serve as a foundation for the development of a complete system for remote archaeological exploration of enclosed archaeological sites like buried churches, tombs, and caves. An archaeological site is opened once upon discovery, the embedded computer system is installed inside upon a robotic platform, equipped with sensors, cameras, and actuators, and the intact site is sealed again. Archaeological research is conducted on a multimedia data stream which is sent remotely from the system and conforms to necessary standards for digital archaeology.
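Component (b), an octree path doubling as an entry in a global key space, can be sketched as follows; the site identifier and key layout are hypothetical, invented purely for illustration.

```python
def octree_key(point, origin, size, depth):
    """Octree path code for a 3-D point inside a cubic cell of the
    given origin and edge size: one octant digit (0-7) per level,
    built by repeatedly halving the cell around the point."""
    x, y, z = point
    ox, oy, oz = origin
    digits = []
    for _ in range(depth):
        size /= 2.0
        octant = 0
        if x >= ox + size: octant |= 1; ox += size
        if y >= oy + size: octant |= 2; oy += size
        if z >= oz + size: octant |= 4; oz += size
        digits.append(octant)
    return "".join(map(str, digits))

# Hypothetical globally unique key: site identifier + octree path
# (the paper's actual key scheme is not reproduced here).
site_id = "BG-SOFIA-TOMB-042"          # illustrative only
path = octree_key((0.6, 0.2, 0.9), origin=(0.0, 0.0, 0.0), size=1.0, depth=3)
key = f"{site_id}/{path}"
print(key)
```

Because deeper cells share their parent's prefix, prefix queries on such keys retrieve whole spatial subvolumes from a key-value store.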

]]>Computation doi: 10.3390/computation5020020

Authors: Aristidis G. Vrahatis Konstantina Dimitrakopoulou Andreas Kanavos Spyros Sioutas Athanasios Tsakalidis

It has already been established by systems-level approaches that the future of predictive disease biomarkers will not be sketched by plain lists of genes, proteins or other biological entities, but rather by integrated entities that consider all underlying component relationships. Towards this orientation, early pathway-based approaches coupled expression data with whole-pathway interaction topologies, but it was the recent approaches that zoomed into subpathways (local areas of the entire biological pathway) that provided more targeted and context-specific candidate disease biomarkers. Here, we explore the application potential of PerSubs, a graph-based algorithm which identifies differentially activated disease-specific subpathways. PerSubs is applicable to both microarray and RNA-Seq data and utilizes the Kyoto Encyclopedia of Genes and Genomes (KEGG) database as a reference for biological pathways. PerSubs operates in two stages: first, it identifies differentially expressed genes (or uses any list of disease-related genes), and in the second stage, treating each gene of the list as a start point, it scans the surrounding pathway topology to build meaningful subpathway topologies. Here, we apply PerSubs to investigate which pathways are perturbed in mouse lung regeneration following H1N1 influenza infection.
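The second-stage scan around each start gene can be sketched as a bounded breadth-first traversal of the pathway graph; the toy topology and hop-radius cutoff below are illustrative assumptions, not PerSubs itself.

```python
from collections import deque

def subpathway(graph, seed, radius):
    """Breadth-first scan of the pathway topology around a seed gene:
    returns the set of genes within the given number of interaction
    hops, i.e. a local subpathway rooted at the seed."""
    seen = {seed}
    frontier = deque([(seed, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == radius:
            continue
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return seen

# Toy KEGG-like topology (gene -> interacting genes):
pathway = {"g1": ["g2", "g3"], "g2": ["g4"], "g3": [], "g4": ["g5"]}
print(sorted(subpathway(pathway, "g1", radius=2)))  # ['g1', 'g2', 'g3', 'g4']
```

Running this from every differentially expressed gene yields candidate subpathways, which would then be scored for differential activation.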

]]>Computation doi: 10.3390/computation5020019

Authors: Martin Geier Martin Schönherr

We present and analyze the Esoteric Twist algorithm for the Lattice Boltzmann Method. Esoteric Twist is a thread safe in-place streaming method that combines streaming and collision and requires only a single data set. Compared to other in-place streaming techniques, Esoteric Twist minimizes the memory footprint and the memory traffic when indirect addressing is used. Esoteric Twist is particularly suitable for the implementation of the Lattice Boltzmann Method on Graphic Processing Units.

]]>Computation doi: 10.3390/computation5010018

Authors: Theodore D. Katsilieris George P. Latsas Hector E. Nistazakis George S. Tombras

Terrestrial optical wireless communication links have attracted significant research and commercial interest worldwide over the last few years because they offer very high and secure data-rate transmission with relatively low installation and operational costs, and without the need for licensing. However, since the propagation path of the information signal, i.e., the laser beam, is the atmosphere, their effectiveness depends strongly on the atmospheric conditions in the specific area. Thus, system performance depends significantly on rain, fog, hail, atmospheric turbulence, etc. Due to the influence of these effects, such a communication system must be studied very carefully, theoretically and numerically, before its installation. In this work, we present exact and accurate approximate mathematical expressions for the estimation of the average capacity and the outage probability performance metrics, as functions of the link parameters, the transmitted power, the attenuation due to fog, the ambient noise and the atmospheric turbulence phenomenon. The latter causes the scintillation effect, which results in random and fast fluctuations of the irradiance at the receiver's end. These fluctuations can be studied accurately with statistical methods. Thus, in this work, we use either the lognormal or the gamma–gamma distribution for weak or moderate-to-strong turbulence conditions, respectively. Moreover, using the derived mathematical expressions, we design, implement and present a computational tool for the estimation of these systems' performance, taking into account the link parameters and the atmospheric conditions. Furthermore, in order to increase the accuracy of the presented tool, for the cases where the obtained analytical mathematical expressions are complex, the performance results are verified with the numerical estimation of the appropriate integrals. Finally, using the derived mathematical expressions and the presented computational tool, we present the corresponding numerical results, using common parameter values for realistic terrestrial free-space optical communication systems.
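For the weak-turbulence case, the outage probability under a lognormal irradiance model reduces to a Gaussian tail: with ln I ~ N(μ, σ²), P(I < I_th) = Φ((ln I_th − μ)/σ). A minimal sketch with illustrative (assumed) parameter values:

```python
import math

def outage_probability_lognormal(i_th, mu, sigma):
    """Outage probability P(I < I_th) for lognormally distributed
    received irradiance, ln I ~ N(mu, sigma^2) -- the weak-turbulence
    scintillation model. Phi is evaluated via the error function."""
    z = (math.log(i_th) - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Normalized irradiance (mu = 0), moderate log-irradiance standard
# deviation sigma = 0.3, threshold at half the median irradiance:
p_out = outage_probability_lognormal(i_th=0.5, mu=0.0, sigma=0.3)
print(p_out)   # ~0.0104
```

The gamma–gamma case for stronger turbulence has no such closed-form CDF in elementary functions, which is why numerical integration is needed to verify the analytical expressions there.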

]]>Computation doi: 10.3390/computation5010017

Authors: Mostafa Farajian Mohammad Khodakarami Denise-Penelope Kontoni

Soil–structure interaction (SSI) can affect the seismic response of structures. Since liquid storage tanks are vital structures that must continue operating under severe earthquakes, their seismic behavior should be studied. Accordingly, the seismic response of two types of steel liquid storage tanks (namely, broad and slender, with height-to-radius aspect ratios of 0.6 and 1.85) founded on half-space soil is scrutinized under different earthquake ground motions. For a better comparison, the six considered ground motions are classified, based on their pulse-like characteristics, into two groups: far-fault and near-fault ground motions. To model the liquid storage tanks, the simplified mass-spring model is used; the liquid is modeled as two lumped masses, known as sloshing and impulsive, and the interaction of fluid and structure is considered using two coupled springs and dashpots. The SSI effect is likewise considered using a coupled spring and dashpot. Additionally, four types of soil are used to cover a wide variety of soil properties. To this end, after deriving the equations of motion, MATLAB programming is employed to obtain the time-history responses. Results show that although the SSI effect leads to a decrease in the impulsive displacement, overturning moment, and normalized base shear, the sloshing (or convective) displacement is not affected by such effects due to its long period.
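The lumped mass-spring-dashpot idea can be sketched for a single degree of freedom under ground acceleration (the actual model couples the sloshing and impulsive masses and the SSI spring-dashpot); all numbers below are illustrative assumptions.

```python
def sdof_response(mass, stiffness, damping, ground_acc, dt):
    """Relative displacement history of a lumped (e.g. impulsive)
    mass-spring-dashpot under ground acceleration, integrated with
    the semi-implicit Euler scheme for m*u'' + c*u' + k*u = -m*ag(t)."""
    u, v = 0.0, 0.0
    history = []
    for ag in ground_acc:
        a = (-mass * ag - damping * v - stiffness * u) / mass
        v += a * dt          # update velocity first (semi-implicit)
        u += v * dt          # then displacement with the new velocity
        history.append(u)
    return history

# Toy excitation: a short acceleration pulse, then free vibration.
acc = [1.0] * 10 + [0.0] * 990
disp = sdof_response(mass=1.0, stiffness=100.0, damping=0.5,
                     ground_acc=acc, dt=0.01)
print(min(disp), max(disp))  # decaying oscillation about zero
```

Replacing the toy pulse with a recorded far- or near-fault accelerogram, and coupling several such equations, gives the kind of time-history analysis the abstract describes.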

]]>Computation doi: 10.3390/computation5010016

Authors: Anass Bouchnita Fatima-Ezzahra Belmaati Rajae Aboulaich Mark Koury Vitaly Volpert

Multiple myeloma (MM) is a genetically complex hematological cancer that is characterized by proliferation of malignant plasma cells in the bone marrow. MM evolves from the clonal premalignant disorder monoclonal gammopathy of undetermined significance (MGUS) by sequential genetic changes involving many different genes, resulting in dysregulated growth of multiple clones of plasma cells. The migration, survival, and proliferation of these clones require direct and indirect interactions with the non-hematopoietic cells of the bone marrow. We develop a hybrid discrete-continuous model of MM development from the MGUS stage. The discrete aspect of the model is observed at the cellular level: cells are represented as individual objects which move, interact, divide, and die by apoptosis. Each of these actions is regulated by intracellular and extracellular processes as described by continuous models. The hybrid model consists of the following submodels that have been simplified from the much more complex state of evolving MM: cell motion due to chemotaxis, intracellular regulation of plasma cells, extracellular regulation in the bone marrow, and acquisition of mutations upon cell division. By extending a previous, simpler model in which the extracellular matrix was considered to be uniformly distributed, the new hybrid model provides a more accurate description in which cytokines are produced by the marrow microenvironment and consumed by the myeloma cells. The complex multiple genetic changes in MM cells and the numerous cell-cell and cytokine-mediated interactions between myeloma cells and their marrow microenvironment are simplified in the model such that four related but evolving MM clones can be studied as they compete for dominance in the setting of intraclonal heterogeneity.

]]>Computation doi: 10.3390/computation5010014

Authors: Guillaume Madelaine Elisa Tonello Cédric Lhoussaine Joachim Niehren

Reaction networks can be simplified by eliminating linear intermediate species in partial steady states. In this paper, we study the question of whether this rewrite procedure is confluent, so that for any given reaction network with kinetic constraints, a unique normal form will be obtained independently of the elimination order. We first show that confluence fails for the elimination of intermediates even without kinetics, if “dependent reactions” introduced by the simplification are not removed. This leads us to revising the simplification algorithm into a variant of the double description method for computing elementary modes, so that it keeps track of kinetic information. Folklore results on elementary modes imply the confluence of the revised simplification algorithm with respect to the network structure, i.e., the structure of fully simplified networks is unique. We show, however, that the kinetic rates assigned to the reactions may not be unique, and provide a biological example where two different simplified networks can be obtained. Finally, we give a criterion on the structure of the initial network that is sufficient to guarantee the confluence of both the structure and the kinetic rates.
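The elimination step can be sketched as composing the producing and consuming reactions of a linear intermediate. The sketch below deliberately keeps only the producer's rate, which is exactly the kind of naive kinetic bookkeeping whose non-uniqueness the paper analyzes; the data layout is an assumption for illustration.

```python
def eliminate_intermediate(reactions, intermediate):
    """Eliminate a linear intermediate at partial steady state:
    every producing reaction is composed with every consuming one.
    NOTE: keeping the producer's rate ignores branching ratios
    between competing consumers -- a faithful simplification must
    track kinetic information, which is the point of the paper."""
    producers = [r for r in reactions if r["product"] == intermediate]
    consumers = [r for r in reactions if r["substrate"] == intermediate]
    others = [r for r in reactions
              if intermediate not in (r["substrate"], r["product"])]
    composed = [{"substrate": p["substrate"], "product": c["product"],
                 "rate": p["rate"]}
                for p in producers for c in consumers]
    return others + composed

# A -> X, then X branches to B and C; eliminate the intermediate X:
network = [
    {"substrate": "A", "product": "X", "rate": "k1*A"},
    {"substrate": "X", "product": "B", "rate": "k2*X"},
    {"substrate": "X", "product": "C", "rate": "k3*X"},
]
print(eliminate_intermediate(network, "X"))
```

The result contains A → B and A → C; how the flux k1*A should be divided between them is precisely where different elimination orders can disagree.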

]]>Computation doi: 10.3390/computation5010015

Authors: Viraht Sahni Xiao-Yin Pan

The Schrödinger theory of electrons in an external electromagnetic field is described from the new perspective of the individual electron. The perspective is arrived at via the time-dependent “Quantal Newtonian” law (or differential virial theorem). (The time-independent law, a special case, provides a similar description of stationary-state theory.) These laws are in terms of “classical” fields whose sources are quantal expectations of Hermitian operators taken with respect to the wave function. The laws reveal the following physics: (a) in addition to the external field, each electron experiences an internal field whose components are representative of a specific property of the system, such as the correlations due to the Pauli exclusion principle and Coulomb repulsion, the electron density, kinetic effects, and an internal magnetic field component. The response of the electron is described by the current density field; (b) the scalar potential energy of an electron is the work done in a conservative field. It is thus path-independent. The conservative field is the sum of the internal and Lorentz fields. Hence, the potential is inherently related to the properties of the system, and its constituent property-related components are known. As the sources of the fields are functionals of the wave function, so are the respective fields, and, therefore, the scalar potential is a known functional of the wave function; (c) as such, the system Hamiltonian is a known functional of the wave function. This reveals the intrinsic self-consistent nature of the Schrödinger equation, thereby providing a path for the determination of the exact wave functions and energies of the system; (d) with the Schrödinger equation written in self-consistent form, the Hamiltonian now admits via the Lorentz field a new term that explicitly involves the external magnetic field.
The new understandings are explicated for the stationary state case by application to two quantum dots in a magnetostatic field, one in a ground state and the other in an excited state. For the time-dependent case, the evolution of the same states of the quantum dots in both a magnetostatic and a time-dependent electric field is described. In each case, the satisfaction of the corresponding “Quantal Newtonian” law is demonstrated.

]]>Computation doi: 10.3390/computation5010013

Authors: Dimitra Douvi Dionissios Margaris Aristeidis Davaris

This paper opens up a new perspective on the aerodynamic performance of a wind turbine airfoil. More specifically, the paper deals with a steady, incompressible two-phase flow, consisting of air and two different concentrations of sand particles, over an airfoil from the National Renewable Energy Laboratory, NREL S809. The numerical simulations were performed with turbulence models suitable for aerodynamic applications, using a commercial computational fluid dynamics (CFD) code. The computational results obtained for the aerodynamic performance of the S809 airfoil at various angles of attack, operating at Reynolds numbers of Re = 1 × 10⁶ and Re = 2 × 10⁶ in a dry, dusty environment, were compared with existing experimental data on air flow over an S809 airfoil from reliable sources. Notably, a structured mesh consisting of 80,000 cells had already been identified as the most appropriate for the numerical simulations. Finally, it was concluded that sand concentration significantly affected the aerodynamic performance of the airfoil: the predicted drag coefficients increased, and the predicted lift coefficients decreased, with increasing concentrations of sand particles. The region around the airfoil was studied using contours of static pressure and discrete phase model (DPM) concentration.

]]>Computation doi: 10.3390/computation5010012

Authors: Michaela Elmatzoglou Aris Avdelas

Double-steel plate concrete composite shear walls are being used for nuclear plants and high-rise buildings. They consist of thick concrete walls, exterior steel faceplates serving as reinforcement and shear connectors, which guarantee the composite action between the two different materials. Several researchers have used the Finite Element Method to investigate the behaviour of double-steel plate concrete walls. The majority of them model every element explicitly leading to a rather time-consuming solution, which cannot be easily used for design purposes. In the present paper, the main objective is the introduction of a three-dimensional finite element model, which can efficiently predict the overall performance of a double-steel plate concrete wall in terms of accuracy and time saving. At first, empirical formulations and design relations established in current design codes for shear connectors are evaluated. Then, a simplified finite element model is used to investigate the nonlinear response of composite walls. The developed model is validated using results from tests reported in the literature in terms of axial compression and monotonic, cyclic in-plane shear loading. Several finite element modelling issues related to potential convergence problems, loading strategies and computer efficiency are also discussed. The accuracy and simplicity of the proposed model make it suitable for further numerical studies on the shear connection behaviour at the steel-concrete interface.

]]>Computation doi: 10.3390/computation5010011

Authors: Andrey Golov Sergey Simakov Yan Soe Roman Pryamonosov Ospan Mynbaev Alexander Kholodov

Airflow in the first four generations of the tracheobronchial tree was simulated by a 1D model of incompressible fluid flow through a network of elastic tubes, coupled with 0D models of lumped alveolar components that aggregate parts of the alveolar volume and the smaller airways. This was extended with a convective transport model throughout the lung and alveolar components, combined with a model of oxygen and carbon dioxide transport between the alveolar volume and an averaged blood compartment during pathological respiratory conditions. The novel features of this work are: 1D reconstruction of the tracheobronchial tree structure on the basis of 3D segmentation of computed tomography (CT) data; 1D–0D coupling of the 1D bronchial tube models with the 0D alveolar components; and the alveolar gas exchange model. The results of our simulations include mechanical ventilation, the breathing patterns of severely ill patients with cluster (Biot) and periodic (Cheyne–Stokes) respirations, and a bronchial asthma attack. The suitability of the proposed mathematical model was validated, and carbon dioxide elimination efficiency was analyzed in all of these cases. In the future, these results might be integrated into research and practical studies aimed at designing cyberbiological systems for the remote real-time monitoring, classification, and prediction of breathing patterns and alveolar gas exchange for patients with breathing problems.

]]>Computation doi: 10.3390/computation5010010

Authors: Sebastian Neumeyer Konrad Exner Simon Kind Haygazun Hayka Rainer Stark

As a result of the growing demand for highly customized and individual products, companies need to enable flexible and intelligent manufacturing. Cyber-physical production systems (CPPS) will act autonomously in the future in an interlinked production and enable such flexibility. However, German mid-sized plant manufacturers rarely use virtual technologies for design and validation in order to design CPPS. The research project Virtual Commissioning with Smart Hybrid Prototyping (VIB-SHP) investigated the usage of virtual technologies for manufacturing systems and CPPS design. Aspects of asynchronously communicating, intelligently and autonomously acting production equipment in an immersive validation environment have been investigated. To enable manufacturing system designers to validate CPPS, a software framework for virtual prototyping has been developed. A mechatronic construction kit for production system design integrates discipline-specific models and manages them in a product lifecycle management (PLM) solution. With this construction kit, manufacturing system designers are able to apply virtual technologies and to validate communication processes with the help of behavior models. The presented approach resolves the sequential design process for the development of mechanical, electrical, and software elements and ensures the consistency of these models. With the help of a bill of material (BOM)- and signal-based alignment of the discipline-specific models in an integrated mechatronic product model, the communication of design status and changes is improved. The re-use of already-specified and -designed modules enables quick behavior modeling, code evaluation, as well as interaction with the virtualized assembly system in an immersive environment.

]]>Computation doi: 10.3390/computation5010009

Authors: Young-Moo Byun Carsten Ullrich

Excitonic effects in solids can be calculated using the Bethe-Salpeter equation (BSE) or the Casida equation of time-dependent density-functional theory (TDDFT). In both methods, the Tamm-Dancoff approximation (TDA), which decouples excitations and de-excitations, is widely used to reduce computational cost. Here, we study the effect of the TDA on exciton binding energies of solids obtained from the Casida equation using long-range-corrected (LRC) exchange-correlation kernels. We find that the TDA underestimates TDDFT-LRC exciton binding energies of semiconductors slightly, but those of insulators significantly (i.e., by more than 100%), and thus it is essential to solve the full Casida equation to describe strongly bound excitons. These findings are relevant in the ongoing search for accurate and efficient TDDFT approaches for excitons.
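The difference between the full Casida problem and the Tamm-Dancoff approximation can already be seen in a tiny model eigenproblem: the TDA diagonalizes the excitation block alone, while the full problem couples excitations and de-excitations and pushes the lowest excitation down. The matrices below are invented for illustration, not derived from an LRC kernel:

```python
import numpy as np

# Toy Casida-type eigenproblem: full solution vs. Tamm-Dancoff
# approximation (TDA). Model matrices, not a real material.
A = np.array([[1.0, 0.2],
              [0.2, 1.5]])   # excitation block
B = np.array([[0.3, 0.1],
              [0.1, 0.3]])   # coupling between excitations and de-excitations

# Full Casida problem: eigenvalues of [[A, B], [-B, -A]];
# the positive branch gives the excitation energies.
M = np.block([[A, B], [-B, -A]])
w = np.linalg.eigvals(M)
full = np.sort(w[w.real > 0].real)

# TDA: drop B and diagonalize A alone.
tda = np.sort(np.linalg.eigvalsh(A))
```

Because neglecting `B` raises the lowest excitation energy, a TDA calculation places the exciton closer to the continuum, i.e., it underestimates the binding energy, consistent with the trend reported in the abstract.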

]]>Computation doi: 10.3390/computation5010008

Authors: Eleni Douvi Dimitra Douvi Dionissios Margaris Ioannis Drosis

This paper concentrates on a new configuration for a wind turbine, named KIONAS. The main purpose is to determine the performance and aerodynamic behavior of KIONAS, which is a vertical axis wind turbine with a stator over the rotor and the special feature that it can consist of several stages. Notably, the stator is shaped in such a way that it increases the velocity of the air impacting the rotor blades. Moreover, the performance of each stage can be increased by increasing the total number of stages. The effects of wind velocity, the number of inclined rotor blades, the rotor diameter, the stator’s shape and the number of stages on the performance of KIONAS were studied. A FORTRAN code was developed to predict the power in several cases by solving the equations of continuity and momentum. Further knowledge of the flow field was subsequently obtained using a commercial Computational Fluid Dynamics code. Based on the results, it can be concluded that higher wind velocities and a greater number of blades produce more power. Furthermore, higher performance was found for a stator with curved guide vanes and for a KIONAS configuration with more stages.
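The reported trends, more power at higher wind velocity and with more stages, follow from the standard wind-power relation. The sketch below is a back-of-envelope estimate only; the power coefficient and the stator speed-up factor are illustrative assumptions, not outputs of the KIONAS FORTRAN model:

```python
# Back-of-envelope estimate of multi-stage VAWT power using the
# standard relation P = 0.5 * rho * A * Cp * v^3.
# Cp and the stator speed-up factor are illustrative assumptions.

def stage_power(v, rotor_area, cp=0.3, rho=1.225):
    """Power (W) extracted by one stage from wind at velocity v (m/s)."""
    return 0.5 * rho * rotor_area * cp * v**3

def kionas_power(v, rotor_area, n_stages, stator_gain=1.2):
    """Total power if the stator raises the impact velocity by a constant
    factor and each stage contributes independently (simplification)."""
    return n_stages * stage_power(stator_gain * v, rotor_area)
```

The cubic dependence on velocity means doubling the wind speed multiplies the stage power by eight, which is why the stator's acceleration of the incoming air is the configuration's key feature.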

]]>Computation doi: 10.3390/computation5010007

Authors: Computation Editorial Office

The editors of Computation would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2016.[...]

]]>Computation doi: 10.3390/computation5010006

Authors: Anass Bouchnita Gennady Bocharov Andreas Meyerhans Vitaly Volpert

Human Immunodeficiency Virus (HIV) infection of humans represents a complex biological system and a great challenge to public health. Novel approaches for the analysis and prediction of the infection dynamics based on a multi-scale integration of virus ontogeny and immune reactions are needed to deal with the system’s complexity. The aims of our study are: (1) to formulate a multi-scale mathematical model of HIV infection; (2) to implement the model computationally following a hybrid approach; and (3) to calibrate the model by estimating the parameter values enabling one to reproduce the “standard” observed dynamics of HIV infection in blood during the acute phase of primary infection. The modeling approach integrates the processes of infection spread and immune responses in Lymph Nodes (LN) with those observed in blood. The spatio-temporal population dynamics of T lymphocytes in LN in response to HIV infection is governed by equations linking the intracellular regulation of lymphocyte fate to intercellular cytokine fields. We describe the balance of proliferation, differentiation and death at a single cell level as a consequence of gene activation via multiple signaling pathways activated by IL-2, IFNa and FasL. Distinct activation thresholds are used in the model to relate different modes of cellular responses to the hierarchy of the relative levels of the cytokines. We specify a reference set of model parameter values for the fundamental processes in lymph nodes that ensures a reasonable agreement with viral load and CD4+ T cell dynamics in blood.

]]>Computation doi: 10.3390/computation5010004

Authors: Samy Bakheet

Early detection of skin cancer through improved techniques and innovative technologies has the greatest potential for significantly reducing both morbidity and mortality associated with this disease. In this paper, an effective framework of a CAD (Computer-Aided Diagnosis) system for melanoma skin cancer is developed mainly by application of an SVM (Support Vector Machine) model on an optimized set of HOG (Histogram of Oriented Gradient) based descriptors of skin lesions. Experimental results obtained by applying the presented methodology on a large, publicly accessible dataset of dermoscopy images demonstrate that the proposed framework is a strong contender for the state-of-the-art alternatives by achieving high levels of sensitivity, specificity, and accuracy (98.21%, 96.43% and 97.32%, respectively), without sacrificing computational soundness.
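The classification stage of such a CAD pipeline can be sketched as follows. The HOG extraction step is replaced here by synthetic descriptor vectors, and the class separation, descriptor length, and SVM settings are assumptions for illustration, not the paper's tuned configuration:

```python
import numpy as np
from sklearn.svm import SVC

# Sketch of the classification stage only: an SVM over fixed-length
# lesion descriptors. Real HOG features would come from dermoscopy
# images; here we use synthetic two-class descriptor vectors.
rng = np.random.default_rng(0)
benign = rng.normal(0.0, 0.3, size=(50, 36))    # stand-in HOG vectors
melanoma = rng.normal(1.0, 0.3, size=(50, 36))
X = np.vstack([benign, melanoma])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
pred = clf.predict(X)

# sensitivity = TP / (TP + FN), specificity = TN / (TN + FP)
sens = ((pred == 1) & (y == 1)).sum() / (y == 1).sum()
spec = ((pred == 0) & (y == 0)).sum() / (y == 0).sum()
```

In practice the reported sensitivity/specificity would be measured on a held-out split of the dermoscopy dataset, not on the training data as in this toy illustration.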

]]>Computation doi: 10.3390/computation5010005

Authors: Mariachiara Pastore

In the context of solar energy exploitation, dye-sensitized solar cells and dye-sensitized photoelectrosynthetic cells offer the promise of low-cost sunlight conversion and storage, respectively. In this perspective we discuss the main successes and limitations of modern computational methodologies, ranging from hybrid and long-range corrected density functionals, GW approaches and multi-reference perturbation theories, in describing the electronic and optical properties of isolated components and complex interfaces relevant to these devices. While computational modelling has had a crucial role in the development of the dye-sensitized solar cells technology, the theoretical characterization of the interface structure and interfacial processes in water splitting devices is still in its infancy, especially concerning the electron and hole transfer phenomena. Quantitative analysis of interfacial charge separation and recombination reactions in multiple metal-oxide/dye/catalyst heterointerfaces, thus, undoubtedly represents the compelling challenge in the field of modern computational material science.

]]>Computation doi: 10.3390/computation5010003

Authors: Dmitry Grebennikov Raoul van Loon Mario Novkovic Lucas Onder Rostislav Savinkov Igor Sazonov Rufina Tretyakova Daniel Watson Gennady Bocharov

In this study, we discuss critical issues in modelling the structure and function of lymph nodes (LNs), with emphasis on how LN physiology is related to its multi-scale structural organization. In addition to macroscopic domains such as B-cell follicles and the T cell zone, there are vascular networks which play a key role in the delivery of information to the inner parts of the LN, i.e., the conduit and blood microvascular networks. We propose object-oriented computational algorithms to model the 3D geometry of the fibroblastic reticular cell (FRC) network and the microvasculature. Assuming that a conduit cylinder is densely packed with collagen fibers, the computational flow study predicted that diffusion, rather than convective flow, should be the dominant process in mass transport. The geometry models are used to analyze the lymph flow properties through the conduit network in unperturbed and damaged states of the LN. The analysis predicts that elimination of 60%–90% of edges is required to stop the lymph flux. This result suggests a high degree of functional robustness of the network.
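The edge-elimination experiment can be mimicked on a toy network: remove random edges until no path remains between an inlet and an outlet, and record the removed fraction. The grid topology, inlet/outlet choice, and removal order below are illustrative assumptions, not the paper's FRC-network geometry:

```python
import random
from collections import deque

# Toy robustness experiment in the spirit of the conduit-network
# analysis: remove random edges from a grid network until no path
# remains between an inlet and an outlet node.

def connected(adj, s, t):
    """Breadth-first search for a path from s to t."""
    seen, q = {s}, deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return True
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                q.append(v)
    return False

def removal_fraction_to_disconnect(n=8, seed=1):
    """Fraction of edges that must be removed (in random order) to
    disconnect opposite corners of an n x n grid graph."""
    edges = []
    for i in range(n):
        for j in range(n):
            if i + 1 < n: edges.append(((i, j), (i + 1, j)))
            if j + 1 < n: edges.append(((i, j), (i, j + 1)))
    rng = random.Random(seed)
    rng.shuffle(edges)
    alive = set(edges)
    for removed, e in enumerate(edges, start=1):
        alive.discard(e)
        adj = {}
        for a, b in alive:
            adj.setdefault(a, []).append(b)
            adj.setdefault(b, []).append(a)
        if not connected(adj, (0, 0), (n - 1, n - 1)):
            return removed / len(edges)
    return 1.0
```

Averaging this fraction over many random orders gives a robustness estimate analogous in spirit to the 60%–90% figure quoted for the conduit network.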

]]>Computation doi: 10.3390/computation5010002

Authors: Supratik Kar Juganta Roy Danuta Leszczynska Jerzy Leszczynski

The post-silicon solar cell era involves light-absorbing dyes for dye-sensitized solar cells (DSSCs). Therefore, there is great interest in the design of competent organic dyes for DSSCs with high power conversion efficiency (PCE) to bypass some of the disadvantages of silicon-based solar cell technologies, such as high cost, heavy weight, limited silicon resources, and production methods that lead to high environmental pollution. The DSSC has the unique feature of a distance-dependent electron transfer step, which depends on the relative position of the sensitized organic dye in the metal oxide composite system. In the present work, we developed quantitative structure-property relationship (QSPR) models to set up the quantitative relationship between the overall PCE and quantum chemical molecular descriptors, calculated from density functional theory (DFT) and time-dependent DFT (TD-DFT) methods as well as from the DRAGON software. This allows for understanding the basic electron transfer mechanism along with the structural attributes of arylamine organic dye sensitizers for DSSCs specific to the cobalt electrolyte. The identified properties and structural fragments are particularly valuable for guiding time-saving synthetic efforts toward the development of efficient arylamine organic dyes with improved power conversion efficiency.
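At its core, the QSPR step regresses the measured PCE on computed molecular descriptors. A minimal least-squares sketch with synthetic data (the descriptor values are stand-ins for DFT/TD-DFT and DRAGON outputs, and the coefficients are invented) might look like:

```python
import numpy as np

# Minimal QSPR-style fit: regress power conversion efficiency (PCE)
# on quantum-chemical descriptors. Synthetic data; real descriptors
# (HOMO-LUMO gap, dipole moment, ...) would come from DFT/DRAGON.
rng = np.random.default_rng(42)
n_dyes, n_desc = 40, 3
X = rng.normal(size=(n_dyes, n_desc))       # descriptor matrix
true_w = np.array([1.5, -0.8, 0.3])         # underlying (invented) relationship
pce = X @ true_w + 5.0 + rng.normal(0, 0.05, n_dyes)

A = np.column_stack([X, np.ones(n_dyes)])   # append intercept column
coef, *_ = np.linalg.lstsq(A, pce, rcond=None)
```

A real QSPR workflow would add descriptor selection, cross-validation, and applicability-domain checks on top of this fitting step; the fitted coefficients then indicate which structural attributes drive the PCE.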

]]>Computation doi: 10.3390/computation5010001

Authors: Reem Yassine Faten Salman Ali Al Shaer Mohammad Hammoud Denis Duhamel

The frequency response function is a quantitative measure used in structural analysis and engineering design; hence, it is targeted for accuracy. For a large structure, a high number of substructures, also called cells, must be considered, which leads to a high computational time. In this paper, the recursive method, a finite element method, is used for computing the frequency response function, independent of the number of cells and at much lower time cost. The fundamental principle is to eliminate the internal degrees of freedom that lie at the interface between a cell and its successor. The method is applied only to free (unloaded) nodes. Based on the boundary and interior degrees of freedom, the global dynamic stiffness matrix is computed by means of products and inverses, resulting in a matrix of the same dimension as that of a single cell. The recursive method is demonstrated on periodic structures (cranes and buildings) under harmonic vibrations. The method yielded a satisfying time decrease, with a maximum time ratio of 1/18 and a percentage difference of 19%, in comparison with the conventional finite element method. Close values were attained at low and very high frequencies; the analysis is supported for two types of materials (steel and plastic). The method maintained its efficiency with a high number of forces, excluding the case when all of the nodes are under loads.
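The condensation at the heart of such a recursive method, eliminating the shared interface degree of freedom each time a cell is appended, can be sketched for a 1D chain of identical two-DOF cells via a Schur complement. The cell model (one spring, lumped half-masses) and the parameters are illustrative, not the paper's crane/building elements:

```python
import numpy as np

def cell_dynamic_stiffness(k, m, omega):
    """2-DOF cell: spring k between two nodes, half the cell mass
    lumped at each node.  D = K - omega^2 * M."""
    K = np.array([[k, -k], [-k, k]], dtype=float)
    M = np.diag([m / 2.0, m / 2.0])
    return K - omega**2 * M

def recursive_frf(n_cells, k, m, omega, force=1.0):
    """Condense a chain of identical cells down to its two end DOFs by
    eliminating each shared interface DOF (Schur complement), then
    return the displacement at the free end under a unit harmonic
    force, with the other end clamped."""
    D = cell_dynamic_stiffness(k, m, omega)
    for _ in range(n_cells - 1):
        cell = cell_dynamic_stiffness(k, m, omega)
        # 3x3 system over (left end, shared interface, new right end)
        big = np.zeros((3, 3))
        big[:2, :2] += D
        big[1:, 1:] += cell
        bb = big[np.ix_([0, 2], [0, 2])]   # boundary block
        bi = big[np.ix_([0, 2], [1])]      # boundary-interface coupling
        ii = big[1, 1]                     # interface diagonal entry
        D = bb - bi @ bi.T / ii            # eliminate the interface DOF
    return force / D[1, 1]                 # clamp left end, force right end
```

Because the elimination is exact Gaussian elimination of the interior DOFs, the condensed 2x2 matrix reproduces the full assembly's response while its size never grows with the number of cells, which is the source of the reported time savings.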

]]>Computation doi: 10.3390/computation4040046

Authors: Kazuhiro Yamamoto Tatsuya Sakai

In the after-treatment of diesel exhaust gas, a diesel particulate filter (DPF) is used to trap nanoparticles of diesel soot. However, as more particles accumulate inside the filter, the pressure corresponding to the filter backpressure increases, which worsens the fuel consumption rate and reduces the available torque. Thus, a filter with lower backpressure is needed. To achieve this, it is necessary to utilize information on the phenomena involved, including both the soot transport and its removal inside the DPF, and to optimize the filter substrate structure. In this paper, to obtain useful information for optimization of the filter structure, we tested seven filters whose porosities and pore sizes were changed systematically. To consider the soot filtration, the particle-laden flow was simulated by a lattice Boltzmann method (LBM). The flow field and the pressure change during the filtration process are then discussed.
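The collide-and-stream structure shared by all lattice Boltzmann solvers can be illustrated with a minimal 1D diffusion example (D1Q2 lattice, BGK collision). This is a didactic sketch only, far simpler than the particle-laden DPF simulations of the paper:

```python
# Minimal 1D lattice Boltzmann (D1Q2, BGK) diffusion sketch.
# Two populations stream left/right and relax toward the local
# equilibrium rho/2 each time step.

def lbm_diffuse(steps=200, n=100, tau=0.8):
    rho = [0.0] * n
    rho[n // 2] = 1.0                      # initial mass pulse
    f1 = [r / 2 for r in rho]              # right-moving population
    f2 = [r / 2 for r in rho]              # left-moving population
    for _ in range(steps):
        rho = [a + b for a, b in zip(f1, f2)]
        # collision: BGK relaxation toward local equilibrium rho/2
        f1 = [f + (r / 2 - f) / tau for f, r in zip(f1, rho)]
        f2 = [f + (r / 2 - f) / tau for f, r in zip(f2, rho)]
        # streaming with periodic boundaries
        f1 = [f1[-1]] + f1[:-1]            # shift right
        f2 = f2[1:] + [f2[0]]              # shift left
    return [a + b for a, b in zip(f1, f2)]
```

The scheme conserves total mass exactly while the pulse spreads diffusively; a DPF solver builds the same collide-and-stream loop in 3D with solid-wall boundaries for the porous substrate and extra physics for soot deposition.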

]]>Computation doi: 10.3390/computation4040045

Authors: Ágnes Nagy Karlheinz Schwarz

The properties of many materials at the atomic scale depend on the electronic structure, which requires a quantum mechanical treatment. The most widely used approach to make such a treatment feasible is density functional theory (DFT), advances in which were presented and discussed during the DFT conference in Debrecen. Some of these issues are presented in this Special Issue.

]]>Computation doi: 10.3390/computation4040043

Authors: Mausumi Chattopadhyaya Md. Alam

In the present work, we have theoretically studied the one- and two-photon absorption (OPA and TPA) probabilities of the native D-luciferin molecule and attempted to find the origin of its larger TPA cross-sections in polar solvents than in non-polar ones. The calculations, using state-of-the-art linear and quadratic response theory in the framework of time-dependent density functional theory with the hybrid B3LYP functional and the cc-pVDZ basis set, suggest that the two-photon transition probability of this molecule increases with increasing solvent polarity. In order to explicate our present findings, we employed the generalized few-state model and inspected the role of different optical channels related to the TPA process. We have found that the two-photon transition probability is always guided by a destructive interference term, the magnitude of which decreases with increasing solvent polarity. Furthermore, we have evaluated the OPA parameters of D-luciferin and noticed that the excitation energy is in very good agreement with the available experimental results.
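The interference of optical channels in a few-state model can be illustrated for a three-level system: the TPA probability is the squared sum of channel amplitudes, so the cross term between channels of opposite sign is destructive. The transition moments and energies below are invented for illustration, not D-luciferin's computed values:

```python
# Few-state-model sketch of degenerate two-photon absorption in a
# three-level system (ground 0, intermediate 1, final f).
# All numerical inputs are hypothetical.

def tpa_channels(mu01, mu1f, mu0f, dmu, w1, wf):
    """Return the TPA probability (arbitrary units) and the
    interference term for photon energy wf/2.

    mu01, mu1f: transition moments via the intermediate state
    mu0f, dmu : 0->f transition moment and dipole-moment difference
    w1, wf    : excitation energies of states 1 and f
    """
    s_inter = mu01 * mu1f / (w1 - wf / 2)   # channel via state 1
    s_dip = mu0f * dmu / (wf / 2)           # 'dipolar' two-state channel
    interference = 2 * s_inter * s_dip      # cross term, sign-sensitive
    delta = s_inter**2 + s_dip**2 + interference
    return delta, interference
```

When the two amplitudes have opposite signs the cross term reduces the total TPA probability; a solvent that shrinks the magnitude of this destructive term therefore enhances the cross-section, which is the mechanism the abstract identifies.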

]]>Computation doi: 10.3390/computation4040044

Authors: Salem Osta-Omar Christopher Micallef

The objective of this paper is to develop a mathematical model for thermodynamic analysis of an absorption refrigeration system equipped with an adiabatic absorber using a lithium-bromide/water (LiBr/water) pair as the working fluid. The working temperatures of the generator, adiabatic absorber, condenser, and evaporator, the cooling capacity of the system, and the ratio of the solution mass flow rate at the circulation pump to that at the solution pump are used as input data. The model evaluates the thermodynamic properties of all state points, the heat transfer in each component, the various mass flow rates, and the coefficient of performance (COP) of the cycle. The results are used to investigate the effect of key parameters on the overall performance of the system. For instance, increasing the generator temperature and decreasing the adiabatic absorber temperature can increase the COP of the cycle. The results of this mathematical model can be used for designing and sizing new LiBr/water absorption refrigeration systems equipped with an adiabatic absorber or for optimizing existing aforementioned systems.
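The COP evaluation at the end of such a thermodynamic model reduces to a simple energy ratio: refrigeration effect over driving heat plus pump work. The heat duties below are assumed round numbers for illustration, not results of the LiBr/water model:

```python
# Energy-balance sketch for an absorption refrigeration cycle.
# At steady state: Q_gen + Q_evap + W_pump = Q_cond + Q_abs.

def cop(q_evaporator, q_generator, w_pumps):
    """COP = refrigeration effect / (driving heat + pump work)."""
    return q_evaporator / (q_generator + w_pumps)
```

For example, a cycle absorbing 10 kW at the evaporator while consuming 13 kW of generator heat and 0.05 kW of pump work has a COP just under 0.77; lowering the required generator heat for the same cooling capacity raises the COP, which is the kind of parameter study the model supports.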

]]>Computation doi: 10.3390/computation4040042

Authors: Sergei Pravdin

Computer simulation of normal and diseased human heart activity requires a 3D anatomical model of the myocardium, including myofibers. For clinical applications, such a model has to be constructed based on routine methods of cardiac visualization, such as sonography. Symmetrical models are shown to be too rigid, so an analytical non-symmetrical model with enough flexibility is necessary. Based on previously-made anatomical models of the left ventricle, we propose a new, much more flexible spline-based analytical model. The model is fully described and verified against DT-MRI data. We show a way to construct it on the basis of sonography data. To use this model in further physiological simulations, we propose a finite-difference numerical method for solving the reaction-diffusion problem, together with an example of scroll wave dynamics simulation.
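An explicit finite-difference scheme of the kind used for such reaction-diffusion problems can be sketched in 1D; FitzHugh-Nagumo kinetics are used here as a generic stand-in for the cardiac cell model, and all parameters are illustrative:

```python
# Explicit finite-difference step for a 1D reaction-diffusion system
# (FitzHugh-Nagumo kinetics as a stand-in for a cardiac model).

def step(u, v, dt=0.05, dx=0.5, d=1.0, a=0.1, eps=0.01, b=0.5):
    n = len(u)
    un, vn = u[:], v[:]
    for i in range(n):
        # no-flux boundaries via mirrored neighbors
        left = u[i - 1] if i > 0 else u[1]
        right = u[i + 1] if i < n - 1 else u[n - 2]
        lap = (left - 2 * u[i] + right) / dx**2          # 1D Laplacian
        un[i] = u[i] + dt * (d * lap + u[i] * (1 - u[i]) * (u[i] - a) - v[i])
        vn[i] = v[i] + dt * eps * (u[i] - b * v[i])      # slow recovery
    return un, vn

# excite the left end and let the pulse propagate
n = 100
u = [1.0 if i < 5 else 0.0 for i in range(n)]
v = [0.0] * n
for _ in range(2000):
    u, v = step(u, v)
```

The explicit scheme is stable for `dt * d / dx**2 <= 0.5`; the 3D scroll-wave simulations mentioned in the abstract apply the same update on the spline-based ventricular geometry with a physiological cell model.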

]]>Computation doi: 10.3390/computation4040041

Authors: Konstantinos Kalaitzis Evripidis Sotiriadis Ioannis Papaefstathiou Apostolos Dollas

The motivation of this research was to evaluate the main memory performance of a hybrid supercomputer such as the Convey HC-x, and to ascertain how the controller performs in several access scenarios, vis-à-vis hand-coded memory prefetches. Such memory patterns are very useful in stencil computations. The theoretical bandwidth of the memory of the Convey is compared with the results of our measurements. The accurate study of the memory subsystem is particularly useful for users when they are developing their application-specific personality. Experiments were performed to measure the bandwidth between the coprocessor and the memory subsystem. The experiments aimed mainly at measuring the reading access speed of the memory from the Application Engines (FPGAs). Different ways of accessing data were used in order to find the most efficient way to access memory; this access pattern is proposed for future work on the Convey HC-x. When performing a series of accesses to memory, non-uniform latencies occur, which the Memory Controller of the Convey HC-x coprocessor attempts to cover. We measure memory efficiency as the ratio of the number of memory accesses to the number of execution cycles; this ratio converges to one in most cases. In addition, we performed experiments with hand-coded memory accesses. The analysis of the experimental results shows how the memory subsystem and Memory Controllers work. From this work we conclude that the memory controllers do an excellent job, largely because (transparently to the user) they seem to cache large amounts of data, and hence hand-coding is not needed in most situations.

]]>Computation doi: 10.3390/computation4040039

Authors: Simeone Marino Denise Kirschner

Tuberculosis (TB) is a world-wide health problem with approximately 2 billion people infected with Mycobacterium tuberculosis (Mtb, the causative bacterium of TB). The pathologic hallmark of Mtb infection in humans and Non-Human Primates (NHPs) is the formation of spherical structures, primarily in lungs, called granulomas. Infection occurs after inhalation of bacteria into lungs, where resident antigen-presenting cells (APCs) take up bacteria and initiate the immune response to Mtb infection. APCs traffic from the site of infection (lung) to lung-draining lymph nodes (LNs) where they prime T cells to recognize Mtb. These T cells, circulating back through blood, migrate back to lungs to perform their immune effector functions. We have previously developed a hybrid agent-based model (ABM, labeled GranSim) describing in silico immune cell, bacterial (Mtb) and molecular behaviors during tuberculosis infection and recently linked that model to operate across three physiological compartments: lung (infection site where granulomas form), lung-draining lymph node (LN, site of generation of adaptive immunity) and blood (a measurable compartment). Granuloma formation and function are captured by a spatio-temporal model (i.e., ABM), while the LN and blood compartments represent temporal dynamics of the whole body in response to infection and are captured with ordinary differential equations (ODEs). In order to have a more mechanistic representation of APC trafficking from the lung to the lymph node, and to better capture antigen presentation in a draining LN, this current study incorporates the role of dendritic cells (DCs) in a computational fashion into GranSim. Results: The model was calibrated using experimental data from the lungs and blood of NHPs. The addition of DCs allowed us to investigate in greater detail mechanisms of recruitment, trafficking and antigen presentation and their role in tuberculosis infection.
Conclusion: The main conclusion of this study is that early events after Mtb infection are critical to establishing a timely and effective response. Manipulating CD8+ and CD4+ T cell proliferation rates, as well as DC migration early on during infection can determine the difference between bacterial clearance vs. uncontrolled bacterial growth and dissemination.

]]>Computation doi: 10.3390/computation4040040

Authors: Karlheinz Schwarz Lu Sham Ann Mattsson Matthias Scheffler

Walter Kohn (Figure 1) is one of the most cited scientists of our time, who died on 19 April 2016 in Santa Barbara, CA, USA. [...]

]]>Computation doi: 10.3390/computation4040038

Authors: Atanas Atanasov Benjamin Uekermann Carlos Pachajoa Mejía Hans-Joachim Bungartz Philipp Neumann

We present an Anderson acceleration-based approach to spatially couple three-dimensional Lattice Boltzmann and Navier–Stokes (LBNS) flow simulations. This allows us to locally exploit the computational features of both fluid flow solver approaches to the fullest extent and yields enhanced control to match the LB and NS degrees of freedom within the LBNS overlap layer. Designed for parallel Schwarz coupling, the Anderson acceleration allows for the simultaneous execution of both the Lattice Boltzmann and the Navier–Stokes solvers. We detail our coupling methodology, validate it, and study convergence and accuracy of the Anderson accelerated coupling, considering three steady-state scenarios: plane channel flow, flow around a sphere and channel flow across a porous structure. We find that the Anderson accelerated coupling yields a speed-up (in terms of iteration steps) of up to 40% in the considered scenarios, compared to strictly sequential Schwarz coupling.
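Anderson acceleration itself is a general accelerator for fixed-point iterations x = g(x). A minimal sketch of the standard (type-II) scheme follows, with a toy fixed-point map standing in for the LB-NS coupling operator:

```python
import numpy as np

# Plain Anderson acceleration for a fixed-point iteration x = g(x).
# The toy map below stands in for the LB-NS coupling operator.

def anderson(g, x0, m=3, tol=1e-10, maxit=500):
    """Type-II Anderson acceleration with history depth m."""
    x = np.asarray(x0, dtype=float)
    G, F = [], []                          # histories of g(x) and residuals
    for it in range(maxit):
        gx = g(x)
        f = gx - x                         # fixed-point residual
        if np.linalg.norm(f) < tol:
            return x, it
        G.append(gx); F.append(f)
        G, F = G[-m:], F[-m:]
        if len(F) == 1:
            x = gx                         # plain Picard step to start
        else:
            # minimize ||sum_i a_i f_i|| subject to sum_i a_i = 1
            dF = np.array([Fj - F[-1] for Fj in F[:-1]]).T
            gamma, *_ = np.linalg.lstsq(dF, -F[-1], rcond=None)
            a = np.append(gamma, 1.0 - gamma.sum())
            x = sum(ai * Gi for ai, Gi in zip(a, G))
    return x, maxit

# toy usage: componentwise fixed point of cos(x)
root, iters = anderson(np.cos, np.zeros(3))
```

Compared with plain Picard (Schwarz) iteration, the least-squares mixing of recent iterates typically cuts the iteration count substantially, which is the mechanism behind the reported speed-up in coupling steps.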

]]>Computation doi: 10.3390/computation4030037

Authors: Paul Torrens

Streetscapes have been of long-standing interest in many fields. Recently, there has been a resurgence of attention on streetscape issues, catalyzed in large part by computing. Because of computing, there is more understanding, vistas, data, and analysis of and on streetscape phenomena than ever before. This diversity of lenses trained on streetscapes permits us to address long-standing questions, such as how people use information while mobile, how interactions with people and things occur on streets, how we might safeguard crowds, how we can design services to assist pedestrians, and how we could better support special populations as they traverse cities. Amid each of these avenues of inquiry, computing is facilitating new ways of posing these questions, particularly by expanding the scope of what-if exploration that is possible. With assistance from computing, consideration of streetscapes now reaches across scales, from the neurological interactions that form among place cells in the brain up to informatics that afford real-time views of activity over whole urban spaces. For some streetscape phenomena, computing allows us to build realistic but synthetic facsimiles in computation, which can function as artificial laboratories for testing ideas. In this paper, I review the domain science for studying streetscapes from vantages in physics, urban studies, animation and the visual arts, psychology, biology, and behavioral geography. I also review the computational developments shaping streetscape science, with particular emphasis on modeling and simulation as informed by data acquisition and generation, data models, path-planning heuristics, artificial intelligence for navigation and way-finding, timing, synthetic vision, steering routines, kinematics, and geometrical treatment of collision detection and avoidance.
I also discuss the implications that the advances in computing streetscapes might have on emerging developments in cyber-physical systems and new developments in urban computing and mobile computing.

]]>Computation doi: 10.3390/computation4030036

Authors: Yingyi Liu Ying Gou Bin Teng Shigeo Yoshida

The present study aims to develop an efficient numerical method for computing the diffraction and radiation of water waves with horizontal long cylindrical structures, such as floating breakwaters in the coastal region, etc. A higher-order scheme is used to discretize geometry of the structure as well as the physical wave potentials. As the kernel of this method, Wehausen’s free-surface Green function is calculated by a newly-developed Gauss–Kronrod adaptive quadrature algorithm after elimination of its Cauchy-type singularities. To improve its computation efficiency, an analytical solution is derived for a fast evaluation of the Green function that needs to be implemented thousands of times. In addition, the OpenMP parallelization technique is applied to the formation of the influence coefficient matrix, significantly reducing the running CPU time. Computations are performed on wave-exciting forces and hydrodynamic coefficients for the long cylindrical structures, either floating or submerged. Comparison with other numerical and analytical methods demonstrates a good performance of the present method.
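The combination of a Cauchy-type singularity and Gauss-Kronrod adaptive quadrature can be illustrated with QUADPACK, as exposed by SciPy: the `cauchy` weight evaluates the principal value of an integral with a `1/(x - c)` factor. The integrand is a toy stand-in for the free-surface Green function kernel:

```python
import numpy as np
from scipy.integrate import quad

# Gauss-Kronrod adaptive quadrature (QUADPACK via scipy.integrate.quad)
# on an oscillatory integrand with a Cauchy-type singularity:
# principal value of  integral_0^2  cos(x) / (x - 1) dx.
val, err = quad(np.cos, 0.0, 2.0, weight="cauchy", wvar=1.0)
```

The exact value here is -2 sin(1) Si(1) (by shifting to t = x - 1 and splitting into odd/even parts), so the adaptive routine can be checked against a closed form; in the paper's setting the same quadrature machinery is applied to the singularity-free remainder of the Green function.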

]]>Computation doi: 10.3390/computation4030035

Authors: Alexander Danilov Roman Pryamonosov Alexandra Yurova

In this study, we present several image segmentation techniques for various image scales and modalities. We consider cellular-, organ-, and whole organism-levels of biological structures in cardiovascular applications. Several automatic segmentation techniques are presented and discussed in this work. The overall pipeline for reconstruction of biological structures consists of the following steps: image pre-processing, feature detection, initial mask generation, mask processing, and segmentation post-processing. Several examples of image segmentation are presented, including patient-specific abdominal tissues segmentation, vascular network identification and myocyte lipid droplet micro-structure reconstruction.

]]>Computation doi: 10.3390/computation4030034

Authors: Shree Acharya Volodymyr Turkowski Talat Rahman

We present some details of our recently-proposed Time-Dependent Density-Functional Theory (TDDFT) for strongly-correlated materials in which the exchange-correlation (XC) kernel is derived from the charge susceptibility obtained using Dynamical Mean-Field Theory (the TDDFT + DMFT approach). We proceed with deriving the expression for the XC kernel for the one-band Hubbard model by solving DMFT equations via two approaches, the Hirsch–Fye Quantum Monte Carlo (HF-QMC) and an approximate low-cost perturbation theory approach, and demonstrate that the latter gives results that are comparable to the exact HF-QMC solution. Furthermore, through a variety of applications, we propose a simple analytical formula for the XC kernel. Additionally, we use the exact and approximate kernels to examine the nonhomogeneous ultrafast response of two systems: a one-band Hubbard model and a Mott insulator YTiO3. We show that the frequency dependence of the kernel, i.e., memory effects, is important for dynamics at the femtosecond timescale. We also conclude that strong correlations lead to the presence of beats in the time-dependent electric conductivity in YTiO3, a feature that could be tested experimentally and that could help validate the few approximations used in our formulation. We conclude by proposing an algorithm for the generalization of the theory to non-linear response.

]]>