Computation doi: 10.3390/computation7030053

Authors: Shengkun Xie, Anna T. Lawniczak, Chong Gan

Computer simulations and the analysis of the resulting data are major tools for better understanding the nature of complex systems modeling. In this paper, we study a statistical modeling problem for data coming from a simulation model that investigates the correctness of autonomous agents' decisions in learning to cross a cellular automaton-based highway. The goal is a better understanding of cognitive agents' performance in learning to cross a cellular automaton-based highway with different traffic densities. We investigate the effects of the simulation model's parameter values (e.g., knowledge base transfer, car creation probability, agents' fear and desire to cross the highway) and their interactions on cognitive agents' decisions (i.e., correct crossing decisions, incorrect crossing decisions, correct waiting decisions, and incorrect waiting decisions). We first use canonical correlation analysis (CCA) to check whether all the considered parameter values and decision types are statistically significantly correlated, so that no considered dependent or independent variables (i.e., decision types and configuration parameters, respectively) can be omitted from the simulation model in potential future studies. After CCA, we use the regression tree method to explore the effects of the model configuration parameter values on the agents' decisions. In particular, we focus on the effects of the knowledge base transfer, a key factor in investigating how accumulated knowledge/information about the agents' performance in one traffic environment affects the agents' learning outcomes in another traffic environment.
This factor strongly affects the cognitive agents' decision-making abilities in a new traffic environment, where the cognitive agents start learning from existing accumulated knowledge/information about their performance in an environment with a different traffic density. The obtained results provide a better understanding of how cognitive agents learn to cross the highway, i.e., how the knowledge base transfer as a factor affects the experimental outcomes. Furthermore, the proposed methodology can be useful for modeling and analyzing data coming from other computer simulation models and can provide an approach for better understanding a factor or treatment effect.
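As an illustration of the regression tree step, a minimal sketch of a single best binary split (a regression stump) of one configuration parameter against a decision-rate response is given below; the data are hypothetical, not the simulation output analyzed in the paper.

```python
def sse(ys):
    """Sum of squared errors of ys around their mean."""
    if not ys:
        return 0.0
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def best_split(xs, ys):
    """Best binary split threshold on x and its squared-error reduction."""
    pairs = sorted(zip(xs, ys))
    total = sse([y for _, y in pairs])
    best_thr, best_gain = None, 0.0
    for i in range(1, len(pairs)):
        gain = (total
                - sse([y for _, y in pairs[:i]])
                - sse([y for _, y in pairs[i:]]))
        if gain > best_gain:
            best_thr = (pairs[i - 1][0] + pairs[i][0]) / 2
            best_gain = gain
    return best_thr, best_gain

# Hypothetical runs: correct-crossing rate vs. car creation probability
prob = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
rate = [0.90, 0.88, 0.85, 0.60, 0.55, 0.50, 0.45, 0.40]
thr, gain = best_split(prob, rate)
```

A full regression tree applies such splits recursively to the resulting subsets; here the single split already separates the low-density regime from the high-density one.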

Computation doi: 10.3390/computation7030052

Authors: Norma Flores-Holguín, Juan Frau, Daniel Glossman-Mitnik

A methodology based on Conceptual Density Functional Theory (CDFT) was chosen for the calculation of global and local reactivity descriptors of the Seragamide family of marine anticancer peptides. The active sites of the molecules were determined by resorting to descriptors from Molecular Electron Density Theory (MEDT), such as the Fukui functions. The pKa values of the six studied peptides were established using a proposed relationship between this property and the calculated chemical hardness. The drug-likeness and bioactivity properties of the peptides considered in this study were obtained through a homology model, by comparison with the bioactivity of related molecules in their interaction with different receptors. To analyze the concept of drug repurposing, a study of the potential AGE-inhibition abilities of the Seragamide peptides was pursued by comparison with well-known drugs that are already available as pharmaceuticals.

Computation doi: 10.3390/computation7030051

Authors: Alireza Sahebgharani, Mahmoud Mohammadi, Hossein Haghshenas

The space-time prism (STP) is a comprehensive and powerful model for computing accessibility to urban opportunities. Unlike other types of accessibility measures, STP models capture the spatial and temporal dimensions in a unified framework. Classical STPs assume that travel time in street networks is a deterministic, fixed variable. However, this assumption contradicts the uncertain nature of travel time, which arises from fluctuations and traffic congestion. In addition, travel time in street networks mostly follows non-normal probability distributions, which are not modeled in the structure of classical STPs. Neglecting travel time uncertainty and disregarding the different types of probability distributions cause unrealistic accessibility values in STP-based metrics. Accordingly, this paper proposes a spatiotemporal accessibility model by extending classical STPs to non-normal stochastic urban networks and blending this modified STP with the attractiveness of urban opportunities. The elaborated model was applied to the city of Isfahan to assess the accessibility of its traffic analysis zones (TAZs) to Kowsar discount retail markets. A significant difference was found between the accessibility values in normally and non-normally distributed networks. In addition, the results show that the northern TAZs had a higher accessibility level than the southern ones.
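The stochastic reachability test behind the extended STP can be sketched as a Monte Carlo estimate: an opportunity is inside the prism if the travel times to and from it fit within the time budget, which under uncertain travel times becomes a probability. The lognormal distribution and all parameter values below are illustrative assumptions, not the paper's calibrated network.

```python
import math
import random

def reach_probability(mu1, s1, mu2, s2, budget, n=20000, seed=1):
    """Estimate P(t1 + t2 <= budget) for lognormal travel times t1, t2."""
    rng = random.Random(seed)
    hits = sum(
        rng.lognormvariate(mu1, s1) + rng.lognormvariate(mu2, s2) <= budget
        for _ in range(n)
    )
    return hits / n

# Hypothetical legs with a median travel time of 10 min each, 30 min budget
p = reach_probability(math.log(10), 0.3, math.log(10), 0.3, 30.0)
```

With deterministic travel times this probability collapses to 0 or 1, which is exactly the classical STP membership test.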

Computation doi: 10.3390/computation7030050

Authors: Georgia Karataraki, Andreas Sapalidis, Elena Tocci, Anastasios Gotzias

We employed molecular dynamics simulations of the water solvation of conically shaped carbon nanoparticles. We explored the hydrophobic behaviour of the nanoparticles and investigated microscopically the cavitation of water in conical confinements with different angles. We performed additional molecular dynamics simulations in which the carbon structures do not interact with water, as if they were in a vacuum. We detected a waving of the surface of the cones that resembles the shape agitations of artificial water channels and biological porins. The surface waves were induced by the pentagonal carbon rings (in an otherwise hexagonal network of carbon rings) concentrated near the apex of the cones. The waves were affected by the curvature gradients on the surface and were almost undetectable in the case of an armchair nanotube. Understanding such nanoscale phenomena is the key to better-designed molecular models for membrane systems and nanodevices for energy applications and separation.

Computation doi: 10.3390/computation7030049

Authors: Noman Saleem, Kashif Zafar, Alizaa Sabzwari

Redundant and irrelevant features disturb the accuracy of a classifier. To avoid redundancy and irrelevancy problems, feature selection techniques are used. Finding the most relevant feature subset that can enhance the accuracy rate of the classifier is one of the most challenging parts. This paper presents a new solution for finding relevant feature subsets: the niche-based bat algorithm (NBBA). It is compared with existing state-of-the-art approaches, including evolutionary approaches. The multi-objective bat algorithm (MOBA) selected 8, 16, and 248 features with 93.33%, 93.54%, and 78.33% accuracy on the ionosphere, sonar, and Madelon datasets, respectively. The multi-objective genetic algorithm (MOGA) selected 10, 17, and 256 features with 91.28%, 88.70%, and 75.16% accuracy on the same datasets, respectively. The multi-objective particle swarm optimization (MOPSO) selected 9, 21, and 312 features with 89.52%, 91.93%, and 76% accuracy on the above datasets, respectively. In comparison, NBBA selected 6, 19, and 178 features with 93.33%, 95.16%, and 80.16% accuracy; the niche multi-objective genetic algorithm selected 8, 15, and 196 features with 93.33%, 91.93%, and 79.16% accuracy; and the niche multi-objective particle swarm optimization selected 9, 19, and 213 features with 91.42%, 91.93%, and 76.5% accuracy on the above datasets, respectively. Hence, the results show that MOBA outperformed MOGA and MOPSO, and NBBA outperformed the niche multi-objective genetic algorithm and the niche multi-objective particle swarm optimization.
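The multi-objective comparisons above rest on Pareto dominance over the two objectives (maximize accuracy, minimize the number of selected features); a minimal sketch, using the ionosphere figures reported above, follows. The bat algorithm itself is not reproduced.

```python
def dominates(a, b):
    """a and b are (accuracy, n_features) pairs; a dominates b if it is
    no worse in both objectives and strictly better in at least one."""
    no_worse = a[0] >= b[0] and a[1] <= b[1]
    strictly_better = a[0] > b[0] or a[1] < b[1]
    return no_worse and strictly_better

# Ionosphere figures from the abstract: NBBA (93.33%, 6 features)
# versus MOGA (91.28%, 10 features)
nbba, moga = (93.33, 6), (91.28, 10)
```

Here NBBA dominates MOGA, since it is more accurate with fewer features; solutions that trade accuracy against subset size in opposite directions would be mutually non-dominated.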

Computation doi: 10.3390/computation7030048

Authors: Dejan Brkić, Pavel Praks

The logarithmic Colebrook flow friction equation is given implicitly with respect to an unknown flow friction factor. Traditionally, an explicit approximation of the Colebrook equation requires the evaluation of computationally demanding transcendental functions, such as the logarithmic, exponential, non-integer power, Lambert W, and Wright Ω functions. Conversely, we herein present several computationally cheap explicit approximations of the Colebrook equation that require only one logarithmic function in the initial stage, whilst for the remaining iterations the cheap Padé approximant of the first order is used instead. Moreover, symbolic regression was used to develop a novel starting point, which significantly reduces the error of the internal iterations compared with the fixed-value starting point. Although the starting point uses a simple rational function, it reduces the relative error of the approximation with one internal cycle from 1.81% to 0.156% (i.e., by a factor of 11.6), whereas the relative error of the approximation with two internal cycles is reduced from 0.317% to 0.0259% (i.e., by a factor of 12.24). This error analysis uses a sample of 2 million quasi-Monte Carlo points generated with the Sobol sequence.
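For orientation, a plain fixed-point iteration of the implicit Colebrook equation can be sketched as follows; this is the classical scheme, not the paper's Padé-accelerated method, and the fixed starting point below is a crude placeholder for the symbolic-regression one.

```python
import math

def colebrook(re, eps_rel, iters=10):
    """Fixed-point iteration of the Colebrook equation in terms of
    x = 1/sqrt(f):  x = -2 log10(eps_rel/3.7 + 2.51 x / Re).
    Returns the Darcy flow friction factor f."""
    x = 5.0  # crude fixed starting point; the paper develops a better one
    for _ in range(iters):
        x = -2.0 * math.log10(eps_rel / 3.7 + 2.51 * x / re)
    return 1.0 / (x * x)

# Turbulent flow, Re = 1e5, relative roughness 1e-4
f = colebrook(1e5, 1e-4)
```

Note that each iteration here evaluates a logarithm; the point of the paper's approach is to pay for the logarithm only once and replace it with a first-order Padé approximant in the remaining cycles.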

Computation doi: 10.3390/computation7030047

Authors: Arash Mirhashemi

At the cost of added complexity and time, hyperspectral imaging provides a more accurate measure of the scene's irradiance than an RGB camera. Several camera designs with more than three channels have been proposed to improve the accuracy, which is often evaluated based on the estimation quality of the spectral data. Currently, such evaluations are carried out with either simulated data or color charts to relax the spatial registration requirement between the images. To overcome this limitation, this article presents an accurately registered image database of six icon paintings captured with five cameras with different numbers of channels, ranging from three (RGB) to more than a hundred (hyperspectral camera). Icons are challenging subjects because they have complex surfaces that reflect light specularly with a high dynamic range. Two contributions are proposed to tackle this challenge. First, an imaging configuration is carefully arranged to control the specular reflection, confine the dynamic range, and provide a consistent signal-to-noise ratio for all the camera channels. Second, a multi-camera, feature-based registration method is proposed with an iterative outlier removal phase that improves the convergence and the accuracy of the process. The method was tested against three other approaches with different features or registration models.

Computation doi: 10.3390/computation7030046

Authors: Franziska Erlekam, Sinaida Igde, Susanna Röblitz, Laura Hartmann, Marcus Weber

In addition to what conventional Isothermal Titration Calorimetry (ITC) provides, kinetic ITC (kinITC) yields not only thermodynamic information but also kinetic data from a biochemical binding process. Moreover, kinITC gives insights into reactions consisting of two separate kinetic steps, such as protein folding or sequential binding processes. The ITC method alone cannot deliver kinetic parameters, especially not for multivalent bindings. This paper describes how to solve this problem using kinITC and an invariant subspace projection. The algorithm is tested on multivalent systems with different valencies.

Computation doi: 10.3390/computation7030045

Authors: Pavel Markov, Sergey Rodionov

This article presents applications of continuous symmetry groups to the computational fluid dynamics simulation of gas flow in porous media. The family of equations for one-phase flow in porous media, such as the equations of gas flow with the Klinkenberg effect, is considered. This consideration has been made in terms of difference scheme construction with preservation of the continuous symmetries present in the original parabolic differential equations. A new method of numerical solution generation using continuous symmetry groups has been developed for the equation of gas flow in porous media. Four classes of invariant difference schemes have been found by using known group classifications of parabolic partial differential equations. The invariance of the necessary conditions for stability has been shown for the difference schemes from the presented classes. A comparison with the classical approach for seeking numerical solutions for a particular case from the presented classes has shown that the calculation speed is several orders of magnitude greater than for the classical approach. An analysis of the accuracy of the presented method of numerical solution generation on the basis of continuous symmetries shows that the accuracy of the generated numerical solutions depends on the accuracy of the initial solutions used for generation.

Computation doi: 10.3390/computation7030044

Authors: Francesco Rundo, Giuseppe Luigi Banna, Sabrina Conoci

Skin cancer is the most common type of cancer, as well as among the riskiest in the medical oncology field. Skin cancer is more common in people who work or practice sports outdoors and those who expose themselves to the sun. It may also develop years after radiographic therapy or exposure to cancer-causing substances (e.g., arsenic ingestion). Numerous tumors can affect the skin, which is the largest organ in our body and is made up of three layers: the epidermis (superficial layer), the dermis (middle layer), and the subcutaneous tissue (deep layer). The epidermis is formed by different types of cells: melanocytes, which have the task of producing melanin (a pigment that protects against the damaging effects of sunlight), and the more numerous keratinocytes. The keratinocytes of the deepest layer are called basal cells and can give rise to basal cell carcinomas. We are interested in the type of skin cancer that originates from melanocytes, i.e., the so-called melanoma, because it is the most aggressive. The dermatologist, during a complete visit, evaluates the personal and family history of the patient and carries out an accurate visual examination of the skin, thanks to the use of epi-luminescence (or dermoscopy), a special technique for enlarging and illuminating the skin. This paper mentions one of the most widely used diagnostic methods, due to its simplicity and validity: the ABCDE method (Asymmetry, Border irregularity, Color variegation, Diameter, Evolution). This methodology, based on "visual" investigation by the dermatologist and/or oncologist, has the advantage of not being invasive and being quite easy to perform. However, it is affected by the subjective judgment of the physician who applies it. For this reason, a certain diagnosis of cancer is made only with a biopsy, a procedure during which a portion of tissue is taken and then analyzed under a microscope. Obviously, this is particularly invasive for the patient.
The authors of this article have developed a method that achieves, with good accuracy, the early diagnosis of skin neoplasms using non-invasive but, at the same time, robust methodologies. To this end, the authors propose the adoption of a deep learning pipeline based on morphological analysis of the skin lesion. The results obtained, compared with previous approaches, confirm the good performance of the proposed pipeline.

Computation doi: 10.3390/computation7030043

Authors: Jordan Guillot, Diego Restrepo-Leal, Carlos Robles-Algarín, Ingrid Oliveros

In engineering, when a problem cannot be solved analytically, efforts are made to develop methods that approximate a possible solution. These efforts gave rise to the numerical methods known at present, which allow formulating mathematical problems that can be solved using logical and arithmetic operations. This paper presents a comparison between two numerical optimization algorithms, golden section search and simulated annealing, which are tested in four different scenarios. These scenarios are functions implemented with a feedforward neural network, which emulate partial shading behavior in photovoltaic modules with local and global maxima. The presence of the local maxima makes it difficult to track the maximum power point, which is necessary to obtain the highest possible performance of the photovoltaic module. The algorithms were programmed in the C language. The results demonstrate the effectiveness of the algorithms in finding global maxima. However, the golden section search method showed better performance in terms of percentage of error, computation time, and number of iterations, except in test scenario number three, where a better percentage of error was obtained with the simulated annealing algorithm for a computational temperature of 1000.
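A minimal sketch of golden section search for maximizing a unimodal function follows (in Python rather than the C used in the paper); the neural-network-emulated P-V curves are replaced here by an illustrative quadratic with a single maximum.

```python
import math

def golden_section_max(f, a, b, tol=1e-6):
    """Golden section search for the maximum of a unimodal f on [a, b]."""
    inv_phi = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) > f(d):       # maximum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                 # maximum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

# Toy power curve with a single maximum at v = 17.0 (illustrative only)
v_mpp = golden_section_max(lambda v: 100.0 - (v - 17.0) ** 2, 0.0, 40.0)
```

Each iteration shrinks the bracket by the golden ratio while reusing one interior point, which is what makes the method cheap in function evaluations; on multimodal curves it only guarantees convergence to a maximum within the bracket, which is why the global-search scenarios in the paper are the harder test.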

Computation doi: 10.3390/computation7030042

Authors: Joseph F. Rudzinski

Coarse-grained (CG) models can provide computationally efficient and conceptually simple characterizations of soft matter systems. While generic models probe the underlying physics governing an entire family of free-energy landscapes, bottom-up CG models are systematically constructed from a higher-resolution model to retain a high level of chemical specificity. The removal of degrees of freedom from the system modifies the relationship between the relative time scales of distinct dynamical processes, through both a loss of friction and a "smoothing" of the free-energy landscape. While these effects typically result in faster dynamics, decreasing the computational expense of the model, they also obscure the connection to the true dynamics of the system. The lack of consistent dynamics is a serious limitation for CG models, which not only prevents quantitatively accurate predictions of dynamical observables but can also lead to qualitatively incorrect descriptions of the characteristic dynamical processes. With many methods available for optimizing the structural and thermodynamic properties of chemically specific CG models, recent years have seen a sharp increase in investigations addressing the accurate description of the dynamical properties generated from CG simulations. In this review, we present an overview of these efforts, ranging from bottom-up parameterizations of generalized Langevin equations to refinements of the CG force field based on a Markov state modeling framework. We aim to make connections between seemingly disparate approaches, while laying out some of the major challenges as well as potential directions for future efforts.

Computation doi: 10.3390/computation7030041

Authors: Cezary J. Walczyk, Leonid V. Moroz, Jan L. Cieśliński

We present a new algorithm for the approximate evaluation of the inverse square root for single-precision floating-point numbers. It is a modification of the famous fast inverse square root code. We use the same "magic constant" to compute the seed solution, but then we apply Newton-Raphson corrections with modified coefficients. Compared to the original fast inverse square root code, the new algorithm is two times more accurate in the case of one Newton-Raphson correction and almost seven times more accurate in the case of two corrections. We discuss relative errors within our analytical approach and perform numerical tests of our algorithm for all numbers of the type float.
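For reference, the classic fast inverse square root that the paper modifies can be sketched in Python via bit-level reinterpretation; the code below uses the well-known magic constant 0x5F3759DF and the standard correction coefficients 1.5 and 0.5, not the paper's modified ones.

```python
import struct

def fast_inv_sqrt(x, corrections=2):
    """Classic fast inverse square root: reinterpret the float bits as an
    integer, shift and subtract from the magic constant to get a seed,
    then refine with Newton-Raphson steps."""
    i = struct.unpack('<I', struct.pack('<f', x))[0]   # float bits as uint32
    i = 0x5F3759DF - (i >> 1)                          # magic-constant seed
    y = struct.unpack('<f', struct.pack('<I', i))[0]   # bits back to float
    for _ in range(corrections):
        y = y * (1.5 - 0.5 * x * y * y)                # Newton-Raphson step
    return y

approx = fast_inv_sqrt(2.0)
```

The paper keeps this seed but replaces the 1.5 and 0.5 coefficients with tuned values, which is what yields the reported accuracy gains over the original code.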

Computation doi: 10.3390/computation7030040

Authors: Raúl Rivera-Blas, Salvador Antonio Rodríguez Paredes, Luis Armando Flores-Herrera, Ignacio Adrián Romero

This paper presents an active control design for the synchronization of two identical Petrzela chaotic systems (Petrzela, J.; Gotthans, T. New chaotic dynamical system with a conic-shaped equilibrium located on the plane structure. Applied Sciences 2017, 7, 976) in a master-slave configuration. For the active control, the parameters of both systems are assumed to be known a priori; the control law is designed from the dynamics of the synchronization error to guarantee that the error states converge to zero, and the synchronization process is verified by numerical simulation. Taking advantage of the ease of executing and implementing microcontroller-based chaotic systems in digital devices, the active controller is implemented in a 32-bit ARM microcontroller. The experimental results were obtained by using the fourth-order Runge-Kutta numerical method to integrate the differential equations of the controller, and the results were measured with a digital oscilloscope.

Computation doi: 10.3390/computation7030039

Authors: Laura Sani, Riccardo Pecori, Monica Mordonini, Stefano Cagnoni

The so-called Relevance Index (RI) metrics are a set of recently introduced indicators, based on information theory principles, that can be used to analyze complex systems by detecting the main interacting structures within them. Such structures can be described as subsets of the variables describing the system status that are strongly statistically correlated with one another and mostly independent of the rest of the system. The goal of the work described in this paper is to apply the same principles to pattern recognition and to check whether the RI metrics can also identify, in a high-dimensional feature space, attribute subsets from which it is possible to build new features that can be effectively used for classification. Preliminary results indicating that this is possible had been obtained using the RI metrics in a supervised way, i.e., by separately applying the metrics to homogeneous datasets comprising data instances that all belong to the same class, and iterating the procedure over all classes taken into consideration. In this work, we checked whether this would also be possible in a totally unsupervised way, i.e., by considering all available data at the same time, independently of the class to which they belong. The underlying hypothesis is that the peculiarities of the variable sets that the RI metrics can identify correspond to the peculiarities by which data belonging to a certain class are distinguishable from data belonging to different classes. The results obtained in experiments with some publicly available real-world datasets show that, especially when coupled with tree-based classifiers, the performance of an RI metrics-based unsupervised feature extraction method can be comparable to or better than that of other classical supervised or unsupervised feature selection or extraction methods.

Computation doi: 10.3390/computation7030038

Authors: Anatoli A. Rogovoy, Olga S. Stolbova

The paper considers ferromagnetic alloys, which exhibit the shape memory effect during the phase transition from the high-temperature cubic phase (austenite) to the low-temperature tetragonal phase (martensite) in the ferromagnetic state. In these alloys, significant macroscopic strains are generated during the direct temperature phase transition from the austenitic to the martensitic state, provided that the process proceeds under the action of applied mechanical stresses. The critical phase transition temperatures in such alloys depend not only on the stress fields, but also on the magnetic field; by changing the magnetic field, it is possible to control the phase transition process. In this work, within the framework of finite deformation theory, we develop a model that describes how the direct (austenite-martensite) and reverse (martensite-austenite) phase transitions in ferromagnetic shape memory polycrystalline materials, proceeding under the action of external force, thermal, and magnetic fields, can be controlled with the aid of the magnetic field. Since the magnetic field affects the material deformation, which, in turn, changes the magnetic field, we formulated and solved a coupled boundary value problem. As an example, we considered the problem of a shift of the outer surface of a long hollow cylinder made of a ferromagnetic alloy. The numerical implementation of the problem was based on the finite element method using a step-by-step loading procedure. Complete recovery of the strains accumulated during the direct phase transition, and the return of the axially displaced outer surface of the cylinder to its original position, occurred both on heating the sample to the temperatures of the reverse phase transition and at a constant temperature, when the magnetic field previously applied in the martensitic state was removed.

Computation doi: 10.3390/computation7030037

Authors: Sitalakshmi Venkatraman, Anthony Overmars, Fiona Wahr

The information and communications technology (ICT) industry workforce is now required to deal with 'Big Data', and there is a need to fill the computational skill shortage in data analytics. The integrated skills combining computer and mathematics capabilities are much sought after by every industry embarking on digital transformation. Studies conducted internationally and by the Australian Industry Group show the requirements for improving computational skills in the workplace. This research takes a positive step toward addressing this issue by introducing visualization and experiential learning into the ICT curriculum in order to uplift the mathematics skills required for data analytics. We present the use of such innovative methods adopted in a higher education setting and report the results and positive impact achieved through this study.

Computation doi: 10.3390/computation7030036

Authors: Seda Keskin, Sacide Alsoy Altinkaya

Computational modeling of membrane materials is a rapidly growing field that investigates the properties of membrane materials beyond the limits of experimental techniques and complements experimental membrane studies by providing insights at the atomic level. In this study, we first reviewed the fundamental approaches employed to describe the gas permeability/selectivity trade-off of polymer membranes and then addressed the great promise of mixed matrix membranes (MMMs) for overcoming this trade-off. We then reviewed the current approaches for predicting gas permeation through MMMs, focusing specifically on MMMs composed of metal organic frameworks (MOFs). Computational tools, such as atomically detailed molecular simulations, that can predict the gas separation performance of MOF-based MMMs prior to experimental investigation are reviewed, and new computational methods that can provide information about the compatibility between the MOF and the polymer of the MMM are discussed. We finally addressed the opportunities and challenges of using computational studies to analyze the barriers that must be overcome to advance the application of MOF-based membranes.
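One widely used permeation model for MMMs is the Maxwell model, which estimates the effective permeability from the permeabilities of the polymer (continuous) and filler (dispersed) phases and the filler volume fraction; a minimal sketch with illustrative values follows.

```python
def maxwell_permeability(p_cont, p_disp, phi):
    """Maxwell-model effective permeability of a mixed matrix membrane:
    p_cont = continuous (polymer) phase permeability,
    p_disp = dispersed (filler, e.g., MOF) phase permeability,
    phi    = filler volume fraction."""
    num = p_disp + 2 * p_cont - 2 * phi * (p_cont - p_disp)
    den = p_disp + 2 * p_cont + phi * (p_cont - p_disp)
    return p_cont * num / den

# Illustrative values only: a permeable filler (p_disp > p_cont) at 20% loading
p_mmm = maxwell_permeability(10.0, 100.0, 0.2)
```

At zero loading the expression reduces to the pure polymer permeability, and a more permeable filler raises the effective value, which is the qualitative behavior the reviewed molecular-simulation approaches refine with material-specific detail.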

Computation doi: 10.3390/computation7030035

Authors: George D. Roumelas, Hector E. Nistazakis, Argyris N. Stassinakis, Christos K. Volos, Andreas D. Tsigopoulos

The obsolete communication systems used in the underwater environment necessitate the development and use of modern telecommunications technologies. One such technology is optical wireless communications, which can provide very high data rates, almost unlimited bandwidth, and very high transmission speeds for fast, secure, real-time underwater links. However, the composition and the optical density of seawater hinder the communication between transmitter and receiver, and many significant effects strongly degrade the performance of underwater optical wireless communication (UOWC) systems. In this work, the influences of chromatic dispersion and time jitter are investigated. Chromatic dispersion causes temporal broadening or narrowing of the pulse, while time jitter complicates the detection process at the receiver. Thus, the broadening of the optical pulse due to chromatic dispersion is studied and the influence of the initial chirp is examined. Moreover, the effect of time jitter is also taken into consideration, and for the first time, to the best of our knowledge, a mathematical expression for the probability of fade is derived, taking into account the influence of both of the above-mentioned effects for a UOWC system. Finally, the appropriate numerical results are presented.

Computation doi: 10.3390/computation7030034

Authors: Nikolaos A. Androutsos, Hector E. Nistazakis, Hira Khalid, Sajid S. Muhammad, George S. Tombras

Over the past few years, terrestrial free space optical (FSO) communication systems have attracted increasing research and commercial interest. However, due to the signal's propagation path, the operation of FSO links depends strongly on atmospheric conditions and related phenomena. One such significant phenomenon is the scintillation caused by atmospheric turbulence effects; in order to address the significant performance degradation that it causes, several statistical models have been proposed. Here, turbulence-induced fading of the received optical signal is investigated through the recently presented mixture Gamma distribution, which accurately describes the irradiance fluctuations at the receiver's input of the FSO link while significantly reducing the mathematical complexity of the expressions used to describe composite channels with turbulence along with nonzero boresight pointing-error-induced fading. In order to counterbalance the performance degradation due to these effects, serial decode-and-forward relays are employed, and the performance of the system is estimated through derived mathematical expressions.

Computation doi: 10.3390/computation7030033

Authors: George K. Varotsos, Hector E. Nistazakis, Konstantinos Aidinis, F. Jaber, K.K. Mujeeb Rahman

In the last few years, the scientific field of optical wireless communications (OWC) has witnessed tremendous progress, as reflected in the continuous emergence of new successful high-data-rate services and various sophisticated applications. One such development of vital research importance and interest is the employment of high-speed, robust, and energy-efficient transdermal optical wireless (TOW) links for telemetry with implantable medical devices (IMDs), which have also made considerable progress lately in a variety of medical applications, mainly neural recording and prostheses. However, the outage performance of such TOW links is significantly degraded by the strong attenuation that affects the propagating information-bearing optical signal through the skin, along with random misalignments between the transmitter and receiver terminals, commonly known as the pointing error effect. To address this, in this work we introduce a SIMO TOW reception diversity system that employs either OOK or the more power-efficient L-PPM scheme. Taking into account the joint impact of skin-induced attenuation and non-zero boresight pointing errors, modeled through the suitable Beckmann distribution, novel closed-form mathematical expressions for the average BER of the total TOW system are derived. Thus, the possibility of enhancing TOW availability by using reception diversity configurations along with the appropriate modulation format is investigated. Finally, the corresponding numerical results are presented using the newly derived theoretical outcomes.

Computation doi: 10.3390/computation7020032

Authors: Giuseppe Battaglia, Luigi Gurreri, Girolama Airò Farulla, Andrea Cipollina, Antonina Pirrotta, Giorgio Micale, Michele Ciofalo

In electro-membrane processes, a pressure difference may arise between solutions flowing in alternate channels. This transmembrane pressure (TMP) causes a deformation of the membranes and of the fluid compartments. This, in turn, affects pressure losses and mass transfer rates with respect to undeformed conditions and may result in uneven flow rate and mass flux distributions. These phenomena were analyzed here for round pillar-type profiled membranes by integrated mechanical and fluid dynamics simulations. The analysis involved three steps: (1) A conservatively large value of TMP was imposed, and mechanical simulations were performed to identify the geometry with the minimum pillar density still able to withstand this TMP without collapsing (i.e., without exhibiting contacts between opposite membranes); (2) the geometry thus identified was subject to expansion and compression conditions in a TMP interval including the values expected in practical applications, and for each TMP, the corresponding deformed configuration was predicted; and (3) for each computed deformed configuration, flow and mass transfer were predicted by computational fluid dynamics. Membrane deformation was found to have important effects; friction and mass transfer coefficients generally increased in compressed channels and decreased in expanded channels, while a more complex behavior was obtained for mass transfer coefficients.

Computation doi: 10.3390/computation7020031

Authors: Jingwei Too Abdul Rahim Abdullah Norhashimah Mohd Saad

Feature selection is known to be an NP-hard combinatorial problem in which the number of possible feature subsets increases exponentially with the number of features. As the feature size grows, exhaustive search becomes impractical. In addition, a feature set normally includes irrelevant, redundant, and relevant information. Therefore, in this paper, binary variants of a competitive swarm optimizer are proposed for wrapper feature selection. The proposed approaches are used to select a subset of significant features for classification purposes. The binary versions introduced here employ the S-shaped and V-shaped transfer functions, which allow the search agents to move in the binary search space. The proposed approaches are tested on 15 benchmark datasets collected from the UCI machine learning repository, and the results are compared with those of other conventional feature selection methods. Our results demonstrate the capability of the proposed binary competitive swarm optimizer, not only in terms of high classification performance but also in terms of low computational cost.
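As a brief illustration of the transfer-function mechanism described in this abstract (a generic sketch; the paper's exact S-shaped and V-shaped function families and update rules may differ), a sigmoid S-shaped function maps a search agent's continuous velocity to the probability of setting a bit, while a V-shaped function maps the velocity's magnitude to the probability of flipping the current bit:

```python
import math

def s_shaped(v):
    """S-shaped (sigmoid) transfer function: maps a continuous
    velocity to the probability that a feature bit is set to 1."""
    return 1.0 / (1.0 + math.exp(-v))

def v_shaped(v):
    """V-shaped transfer function: maps the velocity magnitude to
    the probability that the current bit is flipped."""
    return abs(math.tanh(v))

def update_bit_s(bit, v, rand):
    # S-shaped rule: set the bit to 1 with probability s_shaped(v),
    # regardless of its current value.
    return 1 if rand < s_shaped(v) else 0

def update_bit_v(bit, v, rand):
    # V-shaped rule: flip the current bit with probability v_shaped(v).
    return 1 - bit if rand < v_shaped(v) else bit
```

The S-shaped rule resets each bit independently of its current value, while the V-shaped rule keeps the current bit when the velocity is small, which is why the two families explore the binary search space differently.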

Computation doi: 10.3390/computation7020030

Authors: Dimitra K. Manousou Argyris N. Stassinakis Emmanuel Syskakis Hector E. Nistazakis Spiros Gardelis George S. Tombras

Visible Light Communication (VLC) systems use light-emitting diode (LED) technology to provide high-capacity optical links. The advantages they offer, such as high data rates and low installation and operational costs, have established them as a significant solution for modern networks. However, such systems are vulnerable to various exogenous factors, with background sunlight noise having the greatest impact. In order to reduce the negative influence of the background noise effect, optical filters can be used. In this work, for the first time, a low-cost vanadium dioxide (VO2) optical filter has been designed and experimentally implemented based on the requirements of typical and realistic VLC systems, in order to significantly increase their performance by reducing the transmittance of background noise. The functionality of the specific filter is investigated by means of its bit error rate (BER) performance estimation, taking into account its experimentally measured characteristics. Numerous results are provided to demonstrate the significant performance enhancement of the VLC systems which, as shown, reaches almost six orders of magnitude in some cases when using the specific experimental optical filter.

Computation doi: 10.3390/computation7020029

Authors: Matthew David Marko

This manuscript discusses a novel method for mapping pressure results from one 3D surface shell mesh onto another. The method works independently of the actual pressures and focuses only on ensuring that the surface areas consistently match. With this approach, the cumulative forces consistently match for all input pressures. The method is demonstrated to work for pressure profiles with precipitous changes in pressure, and with small quadrangular source elements applied to a mix of large quadrangular and triangular target elements, the forces match remarkably well for all pressure profiles.

Computation doi: 10.3390/computation7020028

Authors: Hira Khalid Sajid Sheikh Muhammad Hector E. Nistazakis George S. Tombras

The hybrid combination of free space optics (FSO) and radio frequency (RF) has emerged as a good alternative solution to the increasing demand for high data rates in wireless communication networks. In this paper, wireless networks with hard-switching between an FSO and an RF link are analyzed, assuming that at any given time only one of the two links is active, with the FSO link having higher priority. When the signal-to-noise ratio (SNR) of the FSO link falls below a certain selected threshold, the RF link is activated. In this work, it is assumed that the FSO link experiences Gamma-Gamma fading due to the atmospheric turbulence effect, whereas the RF link experiences Rayleigh fading. To analyze the proposed hybrid model, analytical expressions are derived for the outage probability, bit error rate, and ergodic capacity. A numerical comparison is also made between the performance of the proposed hybrid FSO/RF model and that of the single FSO model.
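The hard-switching rule described in this abstract can be sketched in a few lines (an illustrative reduction; the actual system model involves Gamma-Gamma and Rayleigh fading statistics, and the threshold value is a design parameter):

```python
def select_link(snr_fso_db, threshold_db):
    """Hard-switching rule: the FSO link has priority and stays active
    while its SNR is at or above the selected threshold; once it drops
    below, the RF link is activated."""
    return "FSO" if snr_fso_db >= threshold_db else "RF"

def run_switching(snr_trace_db, threshold_db):
    # Apply the rule sample by sample to an FSO SNR trace (illustrative).
    return [select_link(s, threshold_db) for s in snr_trace_db]
```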

Computation doi: 10.3390/computation7020027

Authors: Mikhail Mazo Nikolay Balabaev Alexandre Alentiev Ivan Strelnikov Yury Yampolskii

Using molecular dynamics, a comparative study was performed of two pairs of glassy polymers: low-permeability polyetherimides (PEIs) and highly permeable Si-containing polytricyclononenes. All calculations were made with 32 independent models for each polymer. In both cases, the accessible free volume (AFV) increases with decreasing probe size. However, for a zero-size probe, the curves for both types of polymers cross the ordinate in the vicinity of 40%. The size distributions of free volume in the PEIs and the highly permeable polymers differ significantly. In the former case, they are represented by relatively narrow peaks, with maxima in the range of 0.5–1.0 Å for all the probes from H2 to Xe. In the case of the highly permeable Si-containing polymers, much broader peaks are observed, extending up to 7–8 Å for all the gaseous probes. The obtained size distributions of free volume and accessible volume explain the differences in the selectivity of the studied polymers. The surface area of the AFV is found for the PEIs using Delaunay tessellation. Its analysis, and the chemical nature of the groups that form the surface of free volume elements, are presented and discussed.

Computation doi: 10.3390/computation7020026

Authors: Yusra Sajid Kiani Ishrat Jabeen

The cytochrome P450s (CYPs) play a central role in the metabolism of various endogenous and exogenous compounds, including drugs. CYPs are vulnerable to inhibition and induction, which can lead to adverse drug reactions. Therefore, insights into the underlying mechanism of CYP450 inhibition and the estimation of overall CYP inhibitor properties might serve as valuable tools during the early phases of drug discovery. Herein, we present a large data set of inhibitors against five major metabolic CYPs (CYP1A2, CYP2C9, CYP2C19, CYP2D6 and CYP3A4) for the evaluation of important physicochemical properties and ligand efficiency metrics to define property trends across various activity levels (active, efficient and inactive). Decision tree models for CYP inhibition were developed with an accuracy >90% for both the training set and 10-fold cross-validation. Overall, molecular weight (MW), hydrogen bond acceptors/donors (HBA/HBD) and lipophilicity (clogP/logPo/w) represent important physicochemical descriptors for CYP450 inhibitors. However, highly efficient CYP inhibitors show mean MW, HBA, HBD and logP values in the ranges 294.18–482.40, 5.0–8.2, 1–7.29 and 1.68–2.57, respectively. Our results might help in the optimization of toxicological profiles associated with new chemical entities (NCEs), through a better understanding of the inhibitor properties leading to CYP-mediated interactions.

Computation doi: 10.3390/computation7020025

Authors: Abhaya Kumar Sahoo Chittaranjan Pradhan Rabindra Kumar Barik Harishchandra Dubey

In today’s digital world, healthcare is a core area of the medical domain. A healthcare system is required to analyze a large amount of patient data, which helps to derive insights and assists in the prediction of diseases. Such a system should be intelligent, predicting a health condition by analyzing a patient’s lifestyle, physical health records and social activities. The health recommender system (HRS) is becoming an important platform for healthcare services. In this context, health intelligent systems have become indispensable tools in decision-making processes in the healthcare sector. Their main objective is to ensure the availability of valuable information at the right time while guaranteeing information quality and trustworthiness and addressing authentication and privacy concerns. As people use social networks to understand their health condition, the health recommender system is very important for deriving outcomes such as recommending diagnoses, health insurance, clinical pathway-based treatment methods and alternative medicines based on the patient’s health profile. Recent research that targets the utilization of large volumes of medical data while combining multimodal data from disparate sources is discussed; this reduces the workload and cost in health care. In the healthcare sector, big data analytics using recommender systems have an important role in decision-making processes with respect to a patient’s health. This paper proposes an intelligent HRS using the Restricted Boltzmann Machine (RBM)-Convolutional Neural Network (CNN) deep learning method, provides an insight into how big data analytics can be used to implement an effective health recommender engine, and illustrates an opportunity for the health care industry to transition from a traditional scenario to a more personalized paradigm in a tele-health environment.
In terms of Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) values, the proposed deep learning method (RBM-CNN) yields smaller errors than other approaches.

Computation doi: 10.3390/computation7020024

Authors: Vasiliki Liagkou Vasileios Kavvadas Spyridon K. Chronopoulos Dionysios Tafiadis Vasilis Christofilakis Kostas P. Peppas

Data security plays a crucial role in healthcare monitoring systems, since critical patient information is transacted over the Internet, especially through wireless devices, wireless routes such as optical wireless channels, or optical transport networks related to optical fibers. Many hospitals are acquiring their own metro dark fiber networks for collaborating with other institutes as a way to maximize their capacity to meet patient needs, as sharing scarce and expensive assets, such as scanners, allows them to optimize their efficiency. The primary goal of this article is to develop an attack detection model suitable for healthcare monitoring systems that use internet protocol (IP) virtual private networks (VPNs) over optical transport networks. To this end, this article presents the vulnerabilities in healthcare monitoring system networks which employ VPNs over an optical transport layer architecture. Furthermore, a multilayer network architecture for closer integration of the IP and optical layers is proposed, and an application for detecting DoS attacks is introduced. The proposed application is a lightweight implementation that could be applied and installed in various remote healthcare control devices with limited processing and memory resources. Finally, an analytical and focused approach to attack detection is proposed, which can also serve as a tutorial, oriented even towards nonprofessionals, for practical and learning purposes.

Computation doi: 10.3390/computation7020023

Authors: Yong Xian Ng Chang Phang

Nowadays, the dynamics of non-integer-order systems, or fractional modelling, has become a widely studied topic due to the belief that fractional systems have hereditary properties. Hence, as part of understanding the dynamic behaviour, in this paper we compute the stability criterion for a fractional Shimizu–Morioka system. Differently from the existing stability analyses for fractional dynamical systems in the literature, we apply the optimal Routh–Hurwitz conditions to this fractional Shimizu–Morioka system. Furthermore, we introduce a way to calculate the range of the adjustable control parameter β that yields the stability criterion for the fractional Shimizu–Morioka system. The result is verified by using a predictor-corrector scheme to obtain the time series solution of the fractional Shimizu–Morioka system. The findings of this study can provide a better understanding of how the adjustable control parameter β influences the stability criterion for the fractional Shimizu–Morioka system.

Computation doi: 10.3390/computation7020022

Authors: Hyun-Goo Kim

To assess wind resources, a number of simulations must be performed over wind direction, wind speed, and atmospheric stability bins to conduct micro-siting using computational fluid dynamics (CFD). This study proposes a method of accelerating CFD convergence by generating initial conditions that are closer to the converged solution. Specifically, the study proposes, as well-posed initial conditions, the ‘mirrored’ initial condition (IC) using the symmetry of wind direction and geography, the ‘composed’ IC using the vector composition principle, and the ‘shifted’ IC, which assumes that the wind speed vectors are similar under conditions characterized by minute differences in wind direction. These provided a significantly closer approximation to the converged flow field than the conventional initial condition, which simply assumes a homogeneous atmospheric boundary layer over the entire simulation domain. The results of this study show that the computation time taken for micro-siting can be shortened by around 35% when conducting CFD with 16 wind direction sectors by mixing the conventional and the proposed ICs properly.
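A minimal sketch of two of the proposed well-posed initial conditions (the linear blending weights and the choice of mirror plane below are illustrative assumptions, not the paper's formulation):

```python
def composed_ic(u1, v1, u2, v2, w1=0.5):
    """'Composed IC' sketch: build an initial velocity field for an
    intermediate wind direction by vector composition of two already
    converged fields, given as per-cell (u, v) component lists."""
    w2 = 1.0 - w1
    u = [w1 * a + w2 * b for a, b in zip(u1, u2)]
    v = [w1 * a + w2 * b for a, b in zip(v1, v2)]
    return u, v

def mirrored_ic(u, v):
    """'Mirrored IC' sketch: for symmetric geography, a converged field
    for one wind direction can seed the mirror direction by negating
    the cross-stream component (mirror plane: the x-axis, assumed)."""
    return list(u), [-c for c in v]
```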

Computation doi: 10.3390/computation7020021

Authors: Khalid Hattaf

In this paper, we propose and investigate a diffusive viral infection model with distributed delays and a cytotoxic T lymphocyte (CTL) immune response. Both routes of infection, namely virus-to-cell infection and cell-to-cell transmission, are modeled by two general nonlinear incidence functions. The well-posedness of the proposed model is proved by establishing the global existence, uniqueness, nonnegativity and boundedness of solutions. Moreover, the threshold parameters and the global asymptotic stability of equilibria are obtained. Furthermore, the diffusive and delayed virus dynamics models presented in many previous studies are improved and generalized.

Computation doi: 10.3390/computation7020020

Authors: Yunfei Teng Anna Choromanska

Unsupervised image-to-image translation aims at finding a mapping between the source (A) and target (B) image domains, where in many applications aligned image pairs are not available at training time. This is an ill-posed learning problem, since it requires inferring the joint probability distribution from marginals. Joint learning of coupled mappings F_AB: A → B and F_BA: B → A is commonly used by state-of-the-art methods, such as CycleGAN, which learn this translation by introducing a cycle consistency requirement into the learning problem, i.e., F_AB(F_BA(B)) ≈ B and F_BA(F_AB(A)) ≈ A. Cycle consistency enforces the preservation of the mutual information between input and translated images. However, it does not explicitly enforce F_BA to be the inverse operation of F_AB. We propose a new deep architecture that we call the invertible autoencoder (InvAuto) to explicitly enforce this relation. This is done by forcing the encoder to be an inverted version of the decoder, where corresponding layers perform opposite mappings and share parameters. The mappings are constrained to be orthonormal. The resulting architecture leads to a reduction of the number of trainable parameters (up to 2 times). We present image translation results on benchmark datasets and demonstrate state-of-the-art performance of our approach. Finally, we test the proposed domain adaptation method on the task of road video conversion. We demonstrate that videos converted with InvAuto have high quality and show that the NVIDIA neural-network-based end-to-end learning system for autonomous driving, known as PilotNet, trained on real road videos, performs well when tested on the converted ones.
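The core parameter-sharing constraint can be sketched for a single linear layer: if the encoder weight matrix is orthonormal, the decoder can reuse the same parameters transposed and act as an exact inverse (the full architecture stacks such tied layers with nonlinearities; this is only the linear one-layer case):

```python
import numpy as np

rng = np.random.default_rng(0)

# Orthonormal weight matrix via QR decomposition: W.T @ W = I.
W, _ = np.linalg.qr(rng.standard_normal((8, 8)))

def encode(x):
    # Encoder layer: y = W x.
    return W @ x

def decode(y):
    # Decoder layer shares the encoder's parameters and applies the
    # transpose; orthonormality makes it an exact inverse of encode.
    return W.T @ y
```

Because the decoder holds no parameters of its own, the trainable parameter count is halved, matching the "up to 2 times" reduction mentioned in the abstract.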

Computation doi: 10.3390/computation7010019

Authors: George Oguntala Gbeminiyi Sobamowo Yinusa Ahmed Raed Abd-Alhameed

In recent times, the subject of effective cooling has become an interesting research topic for electronic and mechanical engineers due to the increased miniaturization trend in modern electronic systems. Fins are useful for cooling various low- and high-power electronic systems. For improved thermal management of electronic systems, porous fins of functionally graded materials (FGM) have been identified as a viable candidate to enhance cooling. The present study presents an analysis of a convective–radiative porous fin of FGM. For the theoretical investigations, the thermal property of the functionally graded material is assumed to follow linear and power-law functions. In this study, we investigated the effects of the inhomogeneity index of the FGM and of the convective and radiative variables on the thermal performance of the porous heatsink. The results of the present study show that an increase in the inhomogeneity index of the FGM and in the convective and radiative parameters improves fin efficiency. Moreover, the rate of heat transfer in the longitudinal FGM fin increases as β increases. The temperature prediction using the Adomian decomposition method is in excellent agreement with that of other analytical and numerical methods.

Computation doi: 10.3390/computation7010018

Authors: Mohammad Hossein Ahmadi Ali Ghahremannezhad Kwok-Wing Chau Parinaz Seifaddini Mohammad Ramezannezhad Roghayeh Ghasempour

The thermophysical properties of nanofluids play a key role in their heat transfer capability and can be significantly affected by several factors, such as temperature and the concentration of nanoparticles. Developing practical and simple-to-use predictive models to accurately determine these properties can be advantageous when numerous dependent variables are involved in controlling the thermal behavior of nanofluids. Artificial neural networks are reliable approaches which have recently gained increasing prominence and are widely used in different applications for predicting and modeling various systems. In the present study, two novel approaches, Genetic Algorithm–Least Square Support Vector Machine (GA-LSSVM) and Particle Swarm Optimization–Artificial Neural Network (PSO-ANN), are applied to model the thermal conductivity and dynamic viscosity of Fe2O3/EG-water by considering concentration, temperature, and the mass ratio of EG/water as the input variables. The results obtained from the models indicate that the GA-LSSVM approach is more accurate in predicting the thermophysical properties. The maximum relative deviation obtained with GA-LSSVM was found to be approximately ±5% for the thermal conductivity and dynamic viscosity of the nanofluid. In addition, it was observed that the mass ratio of EG/water has the most significant impact on these properties.

Computation doi: 10.3390/computation7010017

Authors: Giorgio Besagni Fabio Inzoli

A precise estimation of the bubble size distribution (BSD) is required to understand the fluid dynamics in gas–liquid bubble columns at the “bubble scale”, evaluate the heat and mass transfer rate, and support scale-up approaches. In this paper, we formulate a population balance model and validate it against a previously published experimental dataset. The experimental dataset consists of BSDs obtained in the “pseudo-homogeneous” flow regime, in a large-diameter and large-scale bubble column. The aim of the population balance model is to predict the BSD in the developed region of the bubble column using the BSD at the sparger as input. The proposed model has been able to estimate the BSD correctly and is promising for future studies and for estimating bubble size in large-scale gas–liquid bubble columns.

Computation doi: 10.3390/computation7010016

Authors: Anna Choromanska Ish Kumar Jain

We analyze the theoretical properties of the recently proposed objective function for efficient online construction and training of multiclass classification trees in the settings where the label space is very large. We show the important properties of this objective and provide a complete proof that maximizing it simultaneously encourages balanced trees and improves the purity of the class distributions at subsequent levels in the tree. We further explore its connection to the three well-known entropy-based decision tree criteria, i.e., Shannon entropy, Gini-entropy and its modified variant, for which efficient optimization strategies are largely unknown in the extreme multiclass setting. We show theoretically that this objective can be viewed as a surrogate function for all of these entropy criteria and that maximizing it indirectly optimizes them as well. We derive boosting guarantees and obtain a closed-form expression for the number of iterations needed to reduce the considered entropy criteria below an arbitrary threshold. The obtained theorem relies on a weak hypothesis assumption that directly depends on the considered objective function. Finally, we prove that optimizing the objective directly reduces the multi-class classification error of the decision tree.

Computation doi: 10.3390/computation7010015

Authors: Saeedeh Bahrami Alireza Bosaghzadeh Fadi Dornaika

In semi-supervised label propagation (LP), the data manifold is approximated by a graph, which is considered as a similarity metric. Graph estimation is a crucial task, as it affects the further processes applied on the graph (e.g., LP, classification). As our knowledge of the data is limited, a single approximation cannot easily find the appropriate graph, so, in line with this, multiple graphs are constructed. Recently, multi-metric fusion techniques have been used to construct more accurate graphs which better represent the data manifold and, hence, improve the performance of LP. However, most of these algorithms disregard the label space information in the LP process. In this article, we propose a new multi-metric graph-fusion method based on the Flexible Manifold Embedding algorithm. Our proposed method represents a unified framework that merges two phases: graph fusion and LP. Based on one available view, different simple graphs were efficiently generated and used as input to our proposed fusion approach. Moreover, our method incorporates the label space information, in the form of a new graph called the Correlation Graph, together with the other similarity graphs. Furthermore, it updates the correlation graph to find a better representation of the data manifold. Our experimental results on four face datasets in face recognition demonstrate the superiority of the proposed method compared to other state-of-the-art algorithms.

Computation doi: 10.3390/computation7010014

Authors: Piotr Jaśkowski Sławomir Biruk

This study adopts the flow shop concept used in industrial production to schedule repetitive non-linear construction projects, where specialized groups of workers execute processes in work zones (buildings) in a predefined order common to all groups. This problem is characteristic of construction projects that involve erecting multiple buildings. As the duration of the project heavily depends upon the sequence of the work zones, this study aims at providing a model and a practical approach for finding the optimal solution that assures the shortest duration of the project, allows the contractor to complete particular work zones (buildings) as soon as possible (without idle time), and conforms to a predefined sequence of work zone completion. This last constraint may arise from the client’s requirements or the physical conditions of the project and has not been addressed by existing scheduling methods. Reducing the duration of the entire project brings the benefit of lower indirect costs and, if accompanied by a reduced duration of completing particular buildings (i.e., work zones), may also provide the opportunity to sell project deliverables sooner, thus improving the economic efficiency of the project. In search of optimal schedules, the authors apply algorithms for the Minimum Hamiltonian Cycle/Asymmetric Traveling Salesman Problem (ATSP).
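The scheduling model can be sketched as a permutation flow shop: crews visit work zones in a common order, and the project duration (makespan) follows the standard recurrence C[i][j] = max(C[i-1][j], C[i][j-1]) + p[i][j]. The exhaustive search below is only illustrative for tiny instances; the paper instead maps the sequencing problem to the Minimum Hamiltonian Cycle/ATSP:

```python
from itertools import permutations

def makespan(order, p):
    """Permutation flow-shop makespan: crews (rows of p) pass through
    work zones in the given order; each crew enters a zone only after
    the previous crew leaves it and after finishing its own previous
    zone (no-idle variants add further constraints)."""
    m = len(p)
    C = [[0.0] * len(order) for _ in range(m)]
    for i in range(m):
        for j, zone in enumerate(order):
            prev_crew = C[i - 1][j] if i else 0.0
            prev_zone = C[i][j - 1] if j else 0.0
            C[i][j] = max(prev_crew, prev_zone) + p[i][zone]
    return C[-1][-1]

def best_sequence(p):
    """Brute-force search over zone orders (feasible only for small n)."""
    n = len(p[0])
    return min(permutations(range(n)), key=lambda o: makespan(o, p))
```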

Computation doi: 10.3390/computation7010013

Authors: Francesco Rundo Sergio Rinella Simona Massimino Marinella Coco Giorgio Fallica Rosalba Parenti Sabrina Conoci Vincenzo Perciavalle

The development of detection methodologies for reliable drowsiness tracking is a challenging task requiring both appropriate signal inputs and accurate and robust algorithms of analysis. The aim of this research is to develop an advanced method to detect the drowsiness stage in the electroencephalogram (EEG), the most reliable physiological measurement, using promising machine learning methodologies. The methods used in this paper are based on machine learning methodologies such as stacked autoencoders with softmax layers. Results obtained from 62 volunteers indicate 100% accuracy in drowsy/wakeful discrimination, proving that this approach can be very promising for use in the next generation of medical devices. The methodology can be extended to other uses in everyday life in which maintaining the level of vigilance is critical. Future work aims to perform extended validation of the proposed pipeline with a wide-ranging training set in which we integrate the photoplethysmogram (PPG) signal and visual information with the EEG analysis in order to improve the robustness of the overall approach.

Computation doi: 10.3390/computation7010012

Authors: Jingwei Too Abdul Rahim Abdullah Norhashimah Mohd Saad Weihown Tee

As the number of hand motion types increases, more electromyography (EMG) features are required for accurate EMG signal classification. However, increasing the number of EMG features not only degrades classification performance but also increases the complexity of the classifier. Feature selection is an effective process for eliminating redundant and irrelevant features. In this paper, we propose a new personal-best (Pbest) guided binary particle swarm optimization (PBPSO) to solve the feature selection problem for EMG signal classification. First, the discrete wavelet transform (DWT) decomposes the signal into multiresolution coefficients. Features are then extracted from each coefficient to form the feature vector, after which PBPSO is used to select the most informative features from the original feature set. In order to measure the effectiveness of PBPSO, binary particle swarm optimization (BPSO), a genetic algorithm (GA), a modified binary tree growth algorithm (MBTGA), and binary differential evolution (BDE) were used for performance comparison. Our experimental results show the superiority of PBPSO over the other methods, especially in feature reduction, where it can remove more than 90% of the features while keeping a very high classification accuracy. Hence, PBPSO is well suited to clinical and rehabilitation applications.

Computation doi: 10.3390/computation7010011

Authors: Wilfried Gappmair

Parameter estimation is of paramount importance in every digital receiver. This is true not only for radio but also for optical links; otherwise, subsequent processing stages, like detector units or error correction schemes, could not be operated reliably. However, for a bandlimited optical intensity channel, the problem of parameter estimation is strongly related to non-negative pulse shapes that also satisfy the Nyquist criterion, so as to keep the detection process as simple as possible. To the best of the author’s knowledge, this is the first time that both topics, parameter estimation on the one hand and bandlimited intensity modulation on the other, are jointly investigated. Since symbol timing and signal amplitude are the parameters of interest in this case, the corresponding Cramér–Rao lower bounds are derived as the theoretical limit of the jitter variance generated by the related estimator algorithms. In this context, a maximum likelihood solution is developed for the recovery of both timing and amplitude. Since this approach requires a receiver matched filter, which destroys the Nyquist criterion of the non-negative pulse shape, we compare it to a flat receiver filter preserving the required orthogonality property. It turns out that the jitter performance of the matched filter method is close to the Cramér–Rao lower bound in the medium-to-low SNR range, but due to inter-symbol interference effects an error floor emerges at higher SNR values. The flat filter solution avoids this drawback, although the price to be paid is a larger noise level at the filter output, so that a somewhat increased jitter variance is observed.

Computation doi: 10.3390/computation7010010

Authors: Hafiz Waqar Ahmad Jeong Ho Hwang Kamran Javed Umer Masood Chaudry Dong Ho Bae

Welding alloy 617 to other metals and alloys has been receiving significant attention in the last few years. It is considered to be the benchmark for the development of economical hybrid structures to be used in different engineering applications. The differences in the physical and metallurgical properties of the dissimilar materials to be welded usually result in weaker structures. Fatigue failure is one of the most common failure modes of dissimilar-material welded structures. In this study, the fatigue life of a dissimilar-material weld was predicted by the accelerated life method and an artificial neural network (ANN) approach. The accelerated life testing approach was evaluated for different distributions; the Weibull distribution was the most appropriate, fitting the fatigue data very well. Acceleration of the fatigue life test data was attained with 95% reliability for the Weibull distribution. The probability plot verified that the accelerating variables at each level were appropriate. The experimental test data and the predicted fatigue life were in good agreement with each other. Two training algorithms, Bayesian regularization (BR) and Levenberg–Marquardt (LM), were employed for training the ANN. The Bayesian regularization training algorithm exhibited a better performance than the Levenberg–Marquardt algorithm. The results confirmed that the assessment methods are effective for the lifetime prediction of dissimilar-material welded joints.

Computation doi: 10.3390/computation7010009

Authors: Christoph Rettinger Ulrich Rüde

Parallel multiphysics simulations often suffer from load imbalances originating from the applied coupling of algorithms with spatially and temporally varying workloads. It is, thus, desirable to minimize these imbalances to reduce the time to solution and to better utilize the available hardware resources. Taking particulate flows as an illustrative example application, we present and evaluate load balancing techniques that tackle this challenging task. This involves a load estimation step in which the currently generated workload is predicted. We describe in detail how such a workload estimator can be developed. In a second step, load distribution strategies like space-filling curves or graph partitioning are applied to dynamically distribute the load among the available processes. To compare and analyze their performance, we apply these techniques to a benchmark scenario and observe a reduction of the load imbalances by almost a factor of four. This results in a decrease of the overall runtime by 14% for space-filling curves.
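A common concrete instance of the space-filling-curve strategy mentioned in this abstract is Z-order (Morton) indexing; the sketch below is an illustration, not the paper's implementation, and the greedy chunking rule is an assumption. Cells are ordered along the curve, then the ordering is cut into chunks of roughly equal estimated workload:

```python
def morton2d(x, y, bits=16):
    """Z-order (Morton) index: interleave the bits of the cell
    coordinates, so that spatially nearby cells tend to stay
    nearby in the resulting 1D ordering."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

def partition_by_curve(cells, weights, n_procs):
    """Greedy cut of the curve-ordered cell list into n_procs chunks of
    roughly equal estimated workload (a load estimation step would
    supply the per-cell weights)."""
    order = sorted(range(len(cells)), key=lambda k: morton2d(*cells[k]))
    target = sum(weights) / n_procs
    chunks, current, acc = [], [], 0.0
    for k in order:
        current.append(cells[k])
        acc += weights[k]
        if acc >= target and len(chunks) < n_procs - 1:
            chunks.append(current)
            current, acc = [], 0.0
    chunks.append(current)
    return chunks
```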

Computation doi: 10.3390/computation7010008

Authors: Hugo Valdés Kevin Unda Aldo Saavedra

This research answers the following question: what is the fluid dynamic behavior of a supercritical fluid (SCF) inside a membrane module? At this time, there is very little or no reported information that can provide an answer to this question. The research studies related to the themes of supercritical CO2 (SC-CO2), hollow fiber membrane contactors (HFMCs), and numerical simulations have mainly reported 2D simulations, but in this work, 3D profiles are presented. Simulations were performed based on the experimental results and other simulations, using the geometry of a commercial module. The results were mainly based on the different operating conditions and geometric dimensions. A mesh study was performed to ensure the mesh independence of the results presented here. It was observed that the velocity profile developed at 10 mm from the wall of the supercritical CO2 entrance pipe. Compared with the commercial hollow fiber membrane contactor, profile equilibrium around the fiber close to the entrance of the module was achieved in the experimental hollow fiber membrane contactor. The results of this research provide a visualization of the boundary layer, which did not cover the entire fiber length. Finally, the results of this paper are interesting for technical applications and contribute to our understanding of the hydrodynamics of SCFs.

Computation doi: 10.3390/computation7010007

Authors: Olaoluwa Rotimi Popoola Sinan Sinanović Wasiu O. Popoola Roberto Ramirez-Iniguez

Overlap of footprints of light emitting diodes (LEDs) increases the positioning accuracy of wearable LED indoor positioning systems (IPS), but such an approach assumes that the footprint boundaries are defined. In this work, we develop a mathematical model for defining the footprint boundaries of an LED in terms of a threshold angle instead of the conventional half or full angle. To show the effect of the threshold angle, we compare how overlaps and receiver tilts affect the performance of an LED-based IPS when the optical boundary is defined at the threshold angle and at the full angle. Using experimental measurements, simulations, and theoretical analysis, the effect of the defined threshold angle is estimated. The results show that the positioning time when using the newly defined threshold angle is 12 times shorter than the time when the full angle is used. When the effect of tilt is considered, the threshold angle positioning time is 22 times shorter than the full angle positioning time. Regarding accuracy, it is shown in this work that a positioning error as low as 230 mm can be obtained. Consequently, while the IPS gives a very low positioning error, a defined threshold angle reduces delays in an overlap-based LED IPS.

Computation doi: 10.3390/computation7010006

Authors: Eric M. Miller Cody J. Brazel Krystina A. Brillos-Monia Philip W. Crawford Hannah C. Hufford Michael R. Loncaric Monica N. Mruzik Austin W. Nenninger Christina M. Ragain

The ability of DFT B3LYP calculations using the 6-31G and LANL2DZ basis sets to predict the electrochemical properties of twenty (20) 3-aryl-quinoxaline-2-carbonitrile 1,4-di-N-oxide derivatives with varying degrees of cytotoxic activity in dimethylformamide (DMF) was investigated. There was a strong correlation for the first reduction and a moderate-to-low correlation for the second reduction of the diazine ring between the computational and the experimental data, with the exception of the derivatives containing the nitro functionality. The four (4) nitro group derivatives are clear outliers in the overall data sets, and the derivative E4 is ill-behaved. The remaining three (3) derivatives containing the nitro groups had a strong correlation between the computational and experimental data; however, the computational data fall substantially outside of the expected range.

Computation doi: 10.3390/computation7010005

Authors: Computation Editorial Office

Rigorous peer review is the cornerstone of high-quality academic publishing [...]

Computation doi: 10.3390/computation7010004

Authors: Francesco Rundo Francesca Trenta Agatino Luigi Di Stallo Sebastiano Battiato

Stock market prediction and trading have attracted the effort of many researchers in several scientific areas because they are challenging tasks, owing to the high complexity of the market. Many investors have put effort into the development of systematic approaches, i.e., so-called "Trading Systems (TS)", for stock pricing and trend prediction. The introduction of Trading On-Line (TOL) has significantly increased the overall number of daily transactions on the stock market, with a consequent increase in market complexity and liquidity. One of the main consequences of TOL is "automatic trading", i.e., ad-hoc algorithmic robots able to automatically analyze large amounts of financial data and open/close many trading operations in a very short time in order to increase the profitability of the trading system. When the number of such automatic operations increases significantly, the trading approach is known as High Frequency Trading (HFT). In this context, the usage of machine learning has recently improved the robustness of trading systems, including in the HFT sector. The authors propose an innovative approach, based on an ad-hoc machine learning method that starts from historical data analysis, which is able to perform careful stock price prediction. The stock price prediction accuracy is further improved by using an adaptive correction based on the hypothesis that stock price formation is regulated by the Markov stochastic property. The validation results for selected shares and financial instruments confirm the robustness and effectiveness of the proposed automatic trading algorithm.

Computation doi: 10.3390/computation7010003

Authors: Panteleimon D. Mavroudis Jeremy D. Scheff John C. Doyle Yoram Vodovotz Ioannis P. Androulakis

The dysregulation of inflammation, normally a self-limited response that initiates healing, is a critical component of many diseases. Treatment of inflammatory disease is hampered by an incomplete understanding of the complexities underlying the inflammatory response, motivating the application of systems and computational biology techniques in an effort to decipher this complexity and ultimately improve therapy. Many mathematical models of inflammation are based on systems of deterministic equations that do not account for the biological noise inherent at multiple scales, and consequently the effect of such noise in regulating inflammatory responses has not been studied widely. In this work, noise was added to a deterministic system of the inflammatory response in order to account for biological stochasticity. Our results demonstrate that the inflammatory response is highly dependent on the balance between the concentration of the pathogen and the level of biological noise introduced to the inflammatory network. In cases where the pro- and anti-inflammatory arms of the response do not mount the appropriate defense to the inflammatory stimulus, inflammation transitions to a different state compared to cases in which pro- and anti-inflammatory agents are elaborated adequately and in a timely manner. In this regard, our results show that noise can be both beneficial and detrimental for the inflammatory endpoint. By evaluating the parametric sensitivity of noise characteristics, we suggest that efficiency of inflammatory responses can be controlled. Interestingly, the time period on which parametric intervention can be introduced efficiently in the inflammatory system can be also adjusted by controlling noise. These findings represent a novel understanding of inflammatory systems dynamics and the potential role of stochasticity thereon.

Computation doi: 10.3390/computation7010002

Authors: Maria T. Plytaria Christos Tzivanidis Evangelos Bellos Ioannis Alexopoulos Kimon A. Antonopoulos

The building sector is responsible for a very large share of electricity consumption worldwide. Reducing this consumption is a crucial issue for achieving sustainability. The objective of this work is to investigate the use of phase change materials (PCMs) in building walls in order to reduce the heating and the cooling loads. The novelty of this work lies in the investigation of different scenarios for the position of the PCM layer in the south and north walls. PCMs can improve the thermal performance and the thermal comfort of a building due to their ability to store large amounts of thermal energy in latent form and thus reduce the temperature fluctuations of the structural components, keeping them within the desired temperature levels. More specifically, this work presents and compares the heating loads, the cooling loads and the temperature distribution of a building in Athens (Greece), with and without PCMs in different positions in the south and north walls. The simulation is performed with the commercial software TRNSYS 17, using the TRNSYS component Type 1270 (PCM Wall). The results proved that the maximum energy savings per year were achieved by the combination of the insulation and the PCM layer in the north and south walls. More specifically, the reductions in the heating and the cooling loads were found to be 1.54% and 5.90%, respectively. Furthermore, the temperature distribution with the use of a PCM layer is the most acceptable, especially during the summer period.

Computation doi: 10.3390/computation7010001

Authors: Manisha Ajmani Sinan Sinanović Tuleen Boutaleb

In this paper, the performance of the optimal beam radius indoor positioning (OBRIP) and two-receiver indoor positioning (TRIP) algorithms is analysed by varying system parameters in the presence of an indoor optical wireless channel modelled in a line-of-sight configuration. From all the conducted simulations, the minimum average error value obtained for TRIP is 0.61 m against 0.81 m obtained for OBRIP for room dimensions of 10 m × 10 m × 3 m. In addition, for each simulated condition, TRIP, which uses two receivers, outperforms OBRIP and reduces position estimation error by up to 30%. To get a better understanding of the error in position estimation for different combinations of beam radius and separation between light emitting diodes, the 90th percentile error is determined using a cumulative distribution function (CDF) plot, which gives an error value of 0.94 m for TRIP as compared to 1.20 m obtained for OBRIP. Both algorithms also prove to be robust towards changes in receiver tilting angle, thus providing flexibility in the selection of the parameters to adapt to any indoor environment. In addition, in this paper, a mathematical model based on the concept of raw moments is used to confirm the findings of the simulation results for the proposed algorithms. Using this mathematical model, closed-form expressions are derived for the standard deviation of uniformly distributed points in an optical wireless communication based indoor positioning system with circular and rectangular beam shapes.
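The closed-form standard deviations mentioned in the final sentence rest on standard raw-moment integrals. As an illustration (a textbook derivation under the stated uniformity assumption, not necessarily the paper's exact expressions), for points uniformly distributed over a circular beam footprint of radius R centered at the origin:

```latex
\mathrm{E}[x] = 0, \qquad
\mathrm{E}[x^2] = \frac{1}{\pi R^2}\int_0^{2\pi}\!\!\int_0^{R} (r\cos\theta)^2 \, r \, dr \, d\theta
               = \frac{1}{\pi R^2}\cdot\frac{R^4}{4}\cdot\pi
               = \frac{R^2}{4},
\qquad
\sigma_x = \sqrt{\mathrm{E}[x^2]-\mathrm{E}[x]^2} = \frac{R}{2}.
```

For a rectangular footprint of width a (x uniform on [-a/2, a/2]), the same raw-moment computation gives the familiar σ_x = a/√12.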

Computation doi: 10.3390/computation6040065

Authors: Konstantinos Vasilopoulos Michalis Mentzos Ioannis E. Sarris Panagiotis Tsoutsanis

A hazardous release accident taking place within the complex morphology of an urban setting could cause grave damage both to the population's safety and to the environment. An unpredicted accident constitutes a complicated physical phenomenon with unanticipated outcomes. This is because, in the event of an unforeseen accident, the dispersion of the hazardous materials exhausted into the environment is determined by unstable parameters such as the wind flow and the complex turbulent diffusion around urban blocks of buildings. Our case study focused on a diesel pool fire accident that occurred between an array of nine cubical buildings. The accident was studied with a large eddy simulation model based on the Fire Dynamics Simulation method. This model was successfully compared against the nine cubes of the Silsoe experiment. The model's results were used for the determination of the immediately dangerous to life or health smoke zones of the accident. It was found that the urban geometry defined the hazardous gases' dispersion, thus increasing the toxic mass concentration around the buildings.

Computation doi: 10.3390/computation6040064

Authors: Alberto Viskovic

Wind tunnel experiments are necessary for geometries that are not covered by codes or that are not generally and parametrically investigated in the literature. One example is the hyperbolic paraboloid shape mostly used for cable net roofs, for which codes do not provide pressure coefficients and the literature only gives mean, maximum, and minimum pressure coefficient maps. However, most of the pressure series acquired in wind tunnels on the roof are not Gaussian processes and, for this reason, the mean values are not precisely representative of the process. The paper investigates the ratio between the mean and the mode of pressure coefficient series acquired in wind tunnels on buildings covered with hyperbolic paraboloid roofs with square plans. Mode pressure coefficient maps are given as an addition to traditional pressure coefficient maps.

Computation doi: 10.3390/computation6040063

Authors: Volker Eyert Mikael Christensen Walter Wolf David Reith Alexander Mavromaras Clive Freeman Erich Wimmer

The development of density functional theory and the tremendous increase of compute power in recent decades have created a framework for the incredible success of modern computational materials engineering (CME). CME has been widely adopted in the academic world and is now established as a standard tool for industrial applications. As theory and compute resources have developed, highly efficient computer codes to solve the basic equations have been implemented and successively integrated into comprehensive computational environments, leading to unprecedented increases in productivity. The MedeA software of Materials Design combines a set of comprehensive productivity tools with leading computer codes such as the Vienna Ab initio Simulation Package (VASP), LAMMPS, GIBBS, and the UNiversal CLuster Expansion code (UNCLE), and provides interoperability at different length and time scales. In the present review, technological applications including microelectronic materials, Li-ion batteries, disordered systems, high-throughput applications and transition-metal oxides for electronics applications are described in the context of the development of CME and with reference to the MedeA environment.

Computation doi: 10.3390/computation6040062

Authors: Rojalina Priyadarshini Rabindra Kumar Barik Harishchandra Dubey

The use of wearables and the Internet-of-Things (IoT) for smart and affordable healthcare is trending. In traditional setups, the cloud backend receives the healthcare data and performs monitoring and prediction for diseases, diagnosis, and wellness prediction. Fog computing (FC) is a distributed computing paradigm that leverages low-power embedded processors in an intermediary node between the client layer and the cloud layer. The diagnosis for wellness and fitness monitoring could be transferred from the cloud layer to the fog layer. Such a paradigm leads to a reduction in latency at an increased throughput. This paper proposes a fog-based deep learning model, DeepFog, that collects data from individuals and predicts their wellness status using a deep neural network model that can handle heterogeneous and multidimensional data. Three important abnormalities in wellness, namely (i) diabetes, (ii) hypertension attacks, and (iii) stress type classification, were chosen for the experimental studies. We performed a detailed analysis of the proposed models' accuracy on standard datasets. The results validated the efficacy of the proposed system and architecture for accurate monitoring of these critical wellness and fitness criteria. We used standard datasets and open source software tools for our experiments.

Computation doi: 10.3390/computation6040061

Authors: Matthew David Marko

This algorithm is designed to perform numerical transforms to convert data from the temporal domain into the spectral domain. This algorithm obtains the spectral magnitude and phase by studying the Coefficient of Determination of a series of artificial sinusoidal functions with the temporal data, and normalizing the variance data into a high-resolution spectral representation of the time-domain data with a finite sampling rate. What is especially beneficial about this algorithm is that it can produce spectral data at any user-defined resolution, and this highly resolved spectral data can be transformed back to the temporal domain.
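A minimal sketch of the idea as we read it (our own illustrative reconstruction, not the author's code; the signal, frequency grid, and normalization below are assumptions): fit an artificial sinusoid at each candidate frequency by least squares, and record the amplitude, phase, and coefficient of determination (R²) of each fit on a user-defined frequency grid.

```python
import numpy as np

def r2_spectrum(t, x, freqs):
    """For each candidate frequency, least-squares fit a sinusoid to the
    time series and record amplitude, phase, and R^2 of the fit."""
    x = np.asarray(x, float)
    xm = x - x.mean()
    ss_tot = np.sum(xm ** 2)
    amp, phase, r2 = [], [], []
    for f in freqs:
        # design matrix: cosine and sine columns at this trial frequency
        A = np.column_stack([np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t)])
        coef, *_ = np.linalg.lstsq(A, xm, rcond=None)
        resid = xm - A @ coef
        r2.append(1.0 - np.sum(resid ** 2) / ss_tot)
        amp.append(np.hypot(*coef))            # sqrt(a^2 + b^2)
        phase.append(np.arctan2(coef[1], coef[0]))
    return np.array(amp), np.array(phase), np.array(r2)

# synthetic test signal: 3.2 Hz sinusoid, amplitude 2, sampled at 50 Hz
t = np.arange(0, 4, 1 / 50)
x = 2.0 * np.sin(2 * np.pi * 3.2 * t) + 0.5
freqs = np.arange(0.5, 10, 0.01)               # user-defined resolution
amp, phase, r2 = r2_spectrum(t, x, freqs)
print(freqs[np.argmax(r2)])                    # peak near 3.2 Hz
```

Note how the frequency grid spacing (0.01 Hz here) is independent of the record length, which mirrors the abstract's claim of spectral data at any user-defined resolution.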

Computation doi: 10.3390/computation6040060

Authors: Vinh-Tan Nguyen Pankaj Kumar Jason Yu Chuan Leong

Piezoelectric structures are widely used in engineering designs including sensors, actuators, and energy-harvesting devices. In this paper, we present the development of a three-dimensional finite element model for simulations of piezoelectric actuators and quantification of their responses under uncertain parameter inputs. The implementation of the finite element model is based on a standard nodal approach extended for piezoelectric materials using three-dimensional tetrahedral and hexahedral elements. To account for electrical-mechanical coupling in piezoelectric materials, an additional degree of freedom for electrical potential is added to each node in those elements together with the usual mechanical displacement unknowns. The development was validated with analytical and experimental data for a range of problems from a single-layer piezoelectric beam to multiple-layer beams in unimorph and bimorph arrangements. A more detailed analysis is conducted for a unimorph composite plate actuator with different design parameters. Uncertainty quantification was also performed to evaluate the sensitivity of the responses of the piezoelectric composite plate to an uncertain input of material properties. This sheds light on the variations in reported responses of the device, while at the same time providing extra confidence in the numerical model.

Computation doi: 10.3390/computation6040059

Authors: Yadigar Sekerci Sergei Petrovskii

A decreasing level of dissolved oxygen has recently been reported as a growing ecological problem in seas and oceans around the world. The concentration of oxygen is an important indicator of the marine ecosystem's health, as a lack of oxygen (anoxia) can lead to mass mortality of marine fauna. The oxygen decrease is thought to be a result of global warming, as warmer water can hold less oxygen. The actual reasons for the observed oxygen decay remain controversial, though. Recently, it has been shown that it may as well result from a disruption of phytoplankton photosynthesis. In this paper, we further explore this idea by considering a model of coupled plankton-oxygen dynamics in two spatial dimensions. By means of extensive numerical simulations performed for different initial conditions and in a broad range of parameter values, we show that the system's dynamics normally lead to the formation of a rich variety of patterns. We reveal how these patterns evolve when the system approaches the tipping point, i.e., the boundary of the safe parameter range beyond which the depletion of oxygen is the only possibility. In particular, we show that close to the tipping point the spatial distribution of the dissolved oxygen tends to become more regular; arguably, this can be considered as an early warning of the approaching catastrophe.

Computation doi: 10.3390/computation6040058

Authors: Simeone Marino Caitlin Hult Paul Wolberg Jennifer J. Linderman Denise E. Kirschner

Within the first 2–3 months of a Mycobacterium tuberculosis (Mtb) infection, 2–4 mm spherical structures called granulomas develop in the lungs of the infected hosts. These are the hallmark of tuberculosis (TB) infection in humans and non-human primates. A cascade of immunological events occurs in the first 3 months of granuloma formation that likely shapes the outcome of the infection. Understanding the main mechanisms driving granuloma development and function is key to generating treatments and vaccines. In vitro, in vivo, and in silico studies have been performed in the past decades to address the complexity of granuloma dynamics. This study builds on our previous 2D spatio-temporal hybrid computational model of granuloma formation in TB (GranSim) and presents for the first time a more realistic 3D implementation. We use uncertainty and sensitivity analysis techniques to calibrate the new 3D resolution to non-human primate (NHP) experimental data on bacterial levels per granuloma during the first 100 days post infection. Due to the large computational cost associated with running a 3D agent-based model, our major goal is to assess to what extent 2D and 3D simulations differ in predictions for TB granulomas and what can be learned in the context of 3D that is missed in 2D. Our findings suggest that in terms of major mechanisms driving bacterial burden, 2D and 3D models return very similar results. For example, Mtb growth rates and molecular regulation mechanisms are very important both in 2D and 3D, as are cellular movement and modulation of cell recruitment. The main difference we found was that the 3D model is less affected by crowding when cellular recruitment and movement of cells are increased. Overall, we conclude that the use of a 2D resolution in GranSim is warranted when large scale pilot runs are to be performed and if the goal is to determine major mechanisms driving infection outcome (e.g., bacterial load).
To comprehensively assess the role of model dimensionality, further tests and experimental data will be needed to extend our conclusions to molecular-scale dynamics and multi-scale resolutions.

Computation doi: 10.3390/computation6040057

Authors: Eva Roos Nerut Karl Karu Iuliia V. Voroshylova Kathleen Kirchner Tom Kirchner Maxim V. Fedorov Vladislav B. Ivaništšev

Computational modeling is increasingly used in studies of novel ionic liquids. The inevitable side effect is the growing number of similar computations that require automation. This article introduces NaRIBaS (Nanomaterials and Room Temperature Ionic Liquids in Bulk and Slab), a scripting framework that combines bash scripts with computational codes to ease the modeling of nanomaterials and ionic liquids in bulk and slab. NaRIBaS helps to organize and document all input and output data, thus improving the reproducibility of computations. Three examples are given to illustrate the NaRIBaS workflows for density functional theory (DFT) calculations of ionic pairs, molecular dynamics (MD) simulations of bulk ionic liquids (ILs), and MD simulations of ILs at an interface.

Computation doi: 10.3390/computation6040056

Authors: Ghassan Ghssein Samir F. Matar

In bacterial pathology, metallophores fabricated by bacteria such as Staphylococcus aureus and Pseudomonas aeruginosa are exported to surrounding physiological media via a specific process to sequester and import metals, resulting in enhanced virulence of the bacteria. While these mechanisms are understood at qualitative levels, our investigation presents a complementary original view based on quantum chemical computations. Further understanding of the active centers in particular was provided for pseudopaline and staphylopine metallophores, which were described chemically and with vibration spectroscopy. Then, for complexes formed with a range of transition metal divalent ions (Ni, Cu, and Zn), description and analyses of the frontier molecular orbitals (FMOs) are provided, highlighting a mechanism of metal-to-ligand charge transfer (MLCT), based on excited-states calculations (time-dependent density functional theory (TD-DFT)) at the basis of the delivery of the metallic ionic species to the bacterial medium, leading eventually to its enhanced virulence. Such investigation gains importance especially in view of stepwise syntheses of metallophores in the laboratory, providing significant progress in the understanding of mechanisms underlying the enhancement of bacterial pathologies.

Computation doi: 10.3390/computation6040055

Authors: Michalis P. Ninos Hector E. Nistazakis

A CDMA RoFSO link with receivers' spatial diversity is studied. Turbulence-induced fading, modeled by the Málaga (M) distribution, is considered to hamper the FSO link performance, along with the effect of nonzero boresight pointing errors. Novel analytical closed-form expressions are derived for the estimation of the average bit-error-rate and the outage probability of the CDMA RoFSO system for both the forward and the reverse link. The numerical results clearly show the performance improvement obtained from spatial diversity, even in the most adverse atmospheric conditions with strong and saturated atmospheric turbulence and enhanced misalignment. Also, the effects of nonlinear distortion, multiple access interference, and clipping noise aggravate the performance of the link when a large number of users is taken into account.

Computation doi: 10.3390/computation6040054

Authors: Senthil Kumar Raman Heuy Dong Kim

A centrifugal compressor working with supercritical CO2 (S-CO2) has several advantages over other supercritical and conventional compressors. S-CO2 is as dense as liquid CO2 and becomes difficult to compress; thus, during operation, the S-CO2 centrifugal compressor requires less compression work than one working with gaseous CO2. The performance of S-CO2 compressors varies strongly with the tip clearance and the vanes in the diffuser. To improve the performance of the S-CO2 centrifugal compressor, knowledge about the influence of individual components on the performance characteristics is necessary. The present study considers an S-CO2 compressor designed with traditional engineering design tools based on ideal gas behaviour and tested by Sandia National Laboratories. Three-dimensional, steady, viscous flow through the S-CO2 compressor was analysed with a computational fluid dynamics solver based on the finite volume method. The Navier-Stokes equations are solved with the k-ω SST turbulence model at operating conditions in the supercritical regime. The performance of the impeller, the main component of the centrifugal compressor, is compared with that of the impeller combined with vaneless and vaned diffuser configurations. The flow characteristics of the shrouded impeller are also studied to analyse the tip-leakage effect.

Computation doi: 10.3390/computation6040053

Authors: Hyunjong Kim Mohan Kumar Dey Nobuyuki Oshima Yeon Won Lee

Sloshing characteristics in a rectangular tank, which is horizontally excited within a specific range of the Reynolds number, are studied numerically. The nonlinearity of the sloshing flow is confirmed by comparison with the linear solution based on potential theory, and the time series results of the sloshing pressure are analyzed by the Fast Fourier Transform (FFT) algorithm. The pressure fluctuation phenomena are then examined and the magnitudes of the amplitude spectra are compared. The results show that, when the impact pressure is generated, a large pressure fluctuation within a pressure cycle is observed, together with the effects of frequencies at integral multiples of the fundamental frequency, which appears dominantly in the sloshing flow.
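The FFT analysis step described above can be sketched as follows (an illustrative stand-in signal, not the paper's pressure data): compute the one-sided amplitude spectrum of a sampled time series and locate the fundamental peak and its integer multiples.

```python
import numpy as np

def amplitude_spectrum(signal, dt):
    """One-sided FFT amplitude spectrum of a uniformly sampled series."""
    n = len(signal)
    amp = np.abs(np.fft.rfft(signal)) / n
    amp[1:] *= 2.0                       # fold in the negative frequencies
    freqs = np.fft.rfftfreq(n, dt)
    return freqs, amp

# stand-in pressure record: a 1 Hz fundamental plus a weaker 2nd harmonic,
# mimicking the integer-multiple peaks discussed in the abstract
dt = 0.01
t = np.arange(0, 20, dt)
p = 3.0 * np.sin(2 * np.pi * 1.0 * t) + 0.8 * np.sin(2 * np.pi * 2.0 * t)
freqs, amp = amplitude_spectrum(p, dt)
print(freqs[np.argmax(amp)])             # dominant peak at the fundamental
```

With the frequencies chosen to fall exactly on FFT bins, the recovered amplitudes match the input amplitudes; for real sloshing records, windowing would be needed to limit spectral leakage.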

Computation doi: 10.3390/computation6040052

Authors: Kazuhiro Yamamoto Yusuke Toda

Using five samples with different porous materials of Al2TiO5, SiC, and cordierite, we numerically reproduced the fluid dynamics in a diesel filter (diesel particulate filter, DPF). The inner structures were obtained by X-ray CT scanning to reproduce the flow field in the real product. The porosity as well as the pore size was selected systematically. Inside the DPF, a complex flow pattern appears. The maximum filtration velocity is over ten times larger than the velocity at the inlet. When the flow is forced to go through the consecutive small pores along the filter's porous walls, the resultant pressure drop becomes large. The ratio of the flow path length to the filter wall thickness is almost the same for all samples, and its value is only 1.2. Hence, the filter backpressure closely depends on the flow pattern inside the filter, which is due to the local substrate structure. In the modified filter substrate, by enlarging the pores and reducing the resistance to the net flow, the pressure drop is largely suppressed.

Computation doi: 10.3390/computation6040051

Authors: Pradeep R. Varadwaj Arpita Varadwaj Helder M. Marques Koichi Yamashita

The divergence of fluorine-based systems and the significance of their nascent non-covalent chemistry in molecular assemblies are presented in a brief review of the field. Emphasis has been placed on showing that type-I and -II halogen-centered F···F long-ranged intermolecular distances viable between the entirely negative fluorine atoms in some fluoro-substituted dimers of C6H6 can be regarded as the consequence of significant non-covalent attractive interactions. Such attractive interactions observed in the solid-state structures of C6F6 and other similar fluorine-substituted aromatic compounds have frequently been underappreciated. While these are often ascribed to crystal packing effects, we show using first-principles level calculations that they are much more fundamental in nature. The stability and reliability of these interactions are supported by their negative binding energies that emerge from a supermolecular procedure using MP2 (second-order Møller-Plesset perturbation theory), and from Symmetry Adapted Perturbation Theory, where the latter does not determine the interaction energy by computing the total energy of the monomers or dimer. The Quantum Theory of Atoms in Molecules and Reduced Density Gradient Non-Covalent Index charge-density-based approaches confirm that the F···F contacts are a consequence of attraction through their unified bond path (and bond critical point) and isosurface charge density topologies, respectively. These interactions can be explained neither by the so-called molecular electrostatic surface potential (MESP) model approach, which often demonstrates attraction between sites of opposite electrostatic surface potential by means of Coulomb's law of electrostatics, nor purely by the effect of electrostatic polarization.
We provide evidence against the standalone use of this approach and the overlooking of other approaches, as the former does not allow for the calculation of the electrostatic potential on the surfaces of the overlapping atoms on the monomers as in the equilibrium geometry of a complex. This study thus provides unequivocal evidence of the limitation of the MESP approach for its use in gaining insight into the nature of reactivity of overlapped interacting atoms and the intermolecular interactions involved.

Computation doi: 10.3390/computation6030050

Authors: Jonatas E. Borges Marcos Lourenço Elie L. M. Padilla Christopher Micallef

The immersed boundary method has attracted considerable interest in the last few years. The method is a computationally cheap alternative for representing the boundaries of a geometrically complex body, while using a Cartesian mesh, by adding a force term in the momentum equation. The advantage of this is that bodies of any arbitrary shape can be added without grid restructuring, a procedure which is often time-consuming. Furthermore, multiple bodies may be simulated, and relative motion of those bodies may be accomplished at reasonable computational cost. The numerical platform in development has a parallel distributed-memory implementation to solve the Navier-Stokes equations. The Finite Volume Method is used in the spatial discretization, where the diffusive terms are approximated by the central difference method. The temporal discretization is accomplished using the Adams-Bashforth method. Both temporal and spatial discretizations are second-order accurate. Velocity-pressure coupling is done using the two-step fractional-step method. The present work applies the immersed boundary method to simulate a Newtonian laminar flow through a three-dimensional sudden contraction. Results are compared to the published literature. Flow patterns upstream and downstream of the contraction region are analysed at various Reynolds numbers in the range 44 ≤ ReD ≤ 993 for the large tube and 87 ≤ ReD ≤ 1956 for the small tube, considering a contraction ratio of β = 1.97. Comparison between numerical and experimental velocity profiles has shown good agreement.
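The second-order Adams-Bashforth time stepping mentioned above can be illustrated on a model ODE (a generic sketch, not the platform's Navier-Stokes implementation; the Heun bootstrap for the first step is our own choice):

```python
import math

def ab2(f, y0, t_end, n):
    """Integrate dy/dt = f(y) over [0, t_end] with n steps of the two-step
    Adams-Bashforth scheme: y_{k+1} = y_k + dt*(3/2 f_k - 1/2 f_{k-1})."""
    dt = t_end / n
    f_prev = f(y0)
    # bootstrap the second starting value with one Heun (RK2) step
    y_curr = y0 + dt * 0.5 * (f_prev + f(y0 + dt * f_prev))
    for _ in range(n - 1):
        f_curr = f(y_curr)
        y_curr, f_prev = y_curr + dt * (1.5 * f_curr - 0.5 * f_prev), f_curr
    return y_curr

decay = lambda y: -y                          # model problem: dy/dt = -y
err = lambda n: abs(ab2(decay, 1.0, 1.0, n) - math.exp(-1.0))
print(err(40) / err(80))                      # near 4: second-order accuracy
```

Halving the step size should reduce the global error by roughly a factor of four, which is the second-order behaviour the abstract claims for both the temporal and spatial discretizations.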

Computation doi: 10.3390/computation6030049

Authors: Sheng-Chang Zhang Jing-Zhou Zhang Xiao-Ming Tan

Film cooling enhancement by incorporating an upstream sand-dune-shaped ramp (SDSR) to the film hole exit was numerically investigated on a flat plate under typical blowing ratios ranging from 0.5 to 1.5. Three heights of SDSRs were designed: 0.25D, 0.5D, and 0.75D. The results indicated that the upstream SDSR effectively controlled the near-wall primary flow and subsequent mutual interaction with the coolant jet, which was the main mechanism of the film cooling enhancement. First, a pair of anti-kidney vortices was formed at the trailing ridges of the SDSR, which helped suppress the kidney vortex pair due to the interaction between the coolant jet and the primary flow. Second, a weak separation and a low pressure zone were induced behind the backside of the SDSR, which caused the coolant jet to spread around the film cooling hole and improve the lateral film coverage. With respect to the baseline cylindrical film cooling holes, the effect of the upstream SDSR was distinct under different blowing ratios. Under a low blowing ratio, the upstream SDSR shortened the streamwise film layer coverage in the vicinity of the film hole centerline but increased the span-wise film layer coverage. A relatively optimal ramp height seemed to be 0.5D. Under a high blowing ratio, both the streamwise and span-wise film layer coverages improved in comparison with the baseline case. The film cooling effectiveness improved gradually with increasing ramp height.

Computation doi: 10.3390/computation6030048

Authors: Fabrizio Ferrari-Ruffino Lorenzo Fortunato

The program diagonalizes the Geometric Collective Model (Bohr Hamiltonian) with a generalized Gneuss&ndash;Greiner potential with terms up to the sixth power in &beta;. In nuclear physics, the Bohr&ndash;Mottelson model with later extensions into the rotovibrational Collective model is an important theoretical tool with predictive power and it represents a fundamental step in the education of a nuclear physicist. Nuclear spectroscopists might find it useful for fitting experimental data, reproducing spectra, EM transitions and moments, and testing theoretical predictions, while students might find it useful for learning about connections between the nuclear shape and its quantum origin. Matrix elements for the kinetic energy operator and for scalar invariants such as &beta;^2 and &beta;^3 cos(3&gamma;) have been calculated in a truncated five-dimensional harmonic oscillator basis with a different program, checked with three different methods, and stored in a matrix library for the lowest values of angular momentum. These matrices are called by the program, which uses them to write generalized Hamiltonians as linear combinations of certain simple operators. Energy levels and eigenfunctions are obtained as outputs of the diagonalization of these Hamiltonian operators.

Computation doi: 10.3390/computation6030047

Authors: Zhen-Zong He Jun-Kui Mao Xing-Si Han

The comparison of the angular light-scattering method (ALSM) and the spectral extinction method (SEM) in solving the inverse problem of aerosol size distribution (ASD) is studied. The inverse problem is solved by an SPSO-DE hybrid algorithm, which is based on the stochastic particle swarm optimization (SPSO) algorithm and the differential evolution (DE) algorithm. To improve the retrieval accuracy, the sensitivity of the measurement signals to the characteristic parameters in ASDs is studied, and the corresponding optimal measurement angle selection region for ALSM and optimal measurement wavelength selection region for SEM are proposed, respectively. Results show that more satisfactory convergence properties can be obtained by using the SPSO-DE hybrid algorithm. Moreover, short measurement wavelengths and forward measurement angles are beneficial to obtaining more accurate results. Then, common monomodal and bimodal ASDs are estimated under different random measurement errors by using ALSM and SEM, respectively. Numerical tests show that retrieval results obtained using ALSM have better convergence accuracy and robustness than those obtained using SEM, which is attributed to the distribution of the objective function values. As a whole, considering the convergence properties and the independence from prior optical information, the ALSM combined with the SPSO-DE hybrid algorithm provides a more effective and reliable technique to obtain the ASDs.
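The abstract does not spell out the SPSO-DE hybrid, but the general idea of combining a particle swarm update with a differential evolution mutation step can be sketched schematically. The toy minimizer below (all parameter values and the exact coupling of the two steps are assumptions, not the authors' algorithm) applies a PSO velocity update and then a DE/rand/1 trial on each particle's personal best:

```python
import random

def spso_de(f, dim, bounds, n_part=20, iters=200, seed=1):
    """Toy hybrid of particle swarm optimization and differential
    evolution: PSO moves the swarm, DE refines the personal bests."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_part)]
    vel = [[0.0] * dim for _ in range(n_part)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    g = min(range(n_part), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n_part):
            # PSO update with inertia and cognitive/social pulls
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.4 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.4 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pval[i]:
                pval[i], pbest[i] = v, pos[i][:]
            # DE/rand/1 mutation-crossover on the personal bests
            a, b, c = rng.sample(range(n_part), 3)
            trial = [pbest[a][d] + 0.5 * (pbest[b][d] - pbest[c][d])
                     if rng.random() < 0.9 else pbest[i][d]
                     for d in range(dim)]
            tv = f(trial)
            if tv < pval[i]:
                pval[i], pbest[i] = tv, trial
            if pval[i] < gval:
                gval, gbest = pval[i], pbest[i][:]
    return gbest, gval

# Smoke test: minimize the 2-D sphere function
best, val = spso_de(lambda x: sum(c * c for c in x), dim=2, bounds=(-5.0, 5.0))
```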

Computation doi: 10.3390/computation6030046

Authors: Francesco Rundo Alessandro Ortis Sebastiano Battiato Sabrina Conoci

Blood Pressure (BP) is one of the most important physiological indicators that provides useful information in the field of health-care monitoring. Blood pressure may be measured by both invasive and non-invasive methods. A novel algorithmic approach is presented to estimate systolic and diastolic blood pressure accurately in a way that does not require any explicit user calibration, i.e., it is non-invasive and cuff-less. The approach described herein can be applied in a medical device, as well as in commercial mobile smartphones through ad hoc software based on the proposed algorithm. The authors propose a system suitable for blood pressure estimation based on sampled time-series of the PhotoPlethysmoGraphy (PPG) physiological signal. Photoplethysmography is a simple optical technique that can be used to detect blood volume changes in the microvascular bed of tissue. It is non-invasive since it takes measurements at the skin surface. In this paper, the authors present an easy and smart method to measure BP through careful neural and mathematical analysis of the PPG signals. The PPG data are processed with an ad hoc bio-inspired mathematical model that estimates systolic and diastolic pressure values through an innovative analysis of the collected physiological data. We compared our results with those measured using a classical cuff-based blood pressure measuring device, with an encouraging accuracy of about 97%.

Computation doi: 10.3390/computation6030045

Authors: Mohammed Mahmoud Mark Hoffmann Hassan Reza

Sparse matrix-vector multiplication (SpMV) can be used to solve diverse-scaled linear systems and eigenvalue problems that exist in numerous and varied scientific applications. One of the scientific applications that SpMV is involved in is known as Configuration Interaction (CI). CI is a linear method for solving the nonrelativistic Schr&ouml;dinger equation for quantum chemical multi-electron systems, and it can deal with the ground state as well as multiple excited states. In this paper, we have developed a hybrid approach in order to deal with CI sparse matrices. The proposed model includes a newly developed hybrid format for storing CI sparse matrices on the Graphics Processing Unit (GPU). In addition to the newly developed format, the proposed model includes the SpMV kernel for multiplying the CI matrix (proposed format) by a vector using the C language and the Compute Unified Device Architecture (CUDA) platform. The proposed SpMV kernel is a vector kernel that uses the warp approach. We have gauged the newly developed model in terms of two primary factors: memory usage and performance. Our proposed kernel was compared to the cuSPARSE library and the CSR5 (Compressed Sparse Row 5) format and outperformed both.
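The paper's hybrid GPU format is not reproduced in the abstract; for orientation, the baseline that such warp-per-row kernels parallelize is the plain CSR sparse matrix-vector product, sketched here in Python (the example matrix is illustrative):

```python
def dense_to_csr(mat):
    """Convert a dense matrix (list of rows) to CSR arrays."""
    values, col_idx, row_ptr = [], [], [0]
    for row in mat:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))  # running count of stored nonzeros
    return values, col_idx, row_ptr

def csr_spmv(values, col_idx, row_ptr, x):
    """y = A @ x for a CSR matrix: one pass over the stored nonzeros,
    with row r owning entries row_ptr[r] .. row_ptr[r+1]-1."""
    y = []
    for r in range(len(row_ptr) - 1):
        s = 0.0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            s += values[k] * x[col_idx[k]]
        y.append(s)
    return y

A = [[4.0, 0.0, 1.0],
     [0.0, 2.0, 0.0],
     [1.0, 0.0, 3.0]]
vals, cols, ptr = dense_to_csr(A)
y = csr_spmv(vals, cols, ptr, [1.0, 1.0, 1.0])
```

In the CUDA version, the inner loop over one row is the unit of work assigned to a warp; the CSR layout itself is unchanged.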

Computation doi: 10.3390/computation6030044

Authors: Min-Rui Chen Jin-Yuan Qian Zan Wu Chen Yang Zhi-Jiang Jin Bengt Sunden

When liquids flow through a throttling element, the velocity increases and the pressure decreases. At this point, if the pressure falls below the saturated vapor pressure of the liquid, the liquid will vaporize into small bubbles, causing hydraulic cavitation. In fact, a vaporization nucleus is another crucial condition for vaporization, and particles contained in the liquid can also act as vaporization nuclei. As a novel heat transfer medium, nanofluids have attracted the attention of many scholars. The nanoparticles contained in the nanofluids play a significant role in the vaporization of liquids. In this paper, the effects of the nanoparticles on hydraulic cavitation are investigated. Firstly, a geometric model of a perforated plate, the throttling element in this paper, is established. Then, with different nanoparticle volume fractions and diameters, the nanofluids flowing through the perforated plate are numerically simulated based on a validated numerical method. The operating conditions, such as the ratio of inlet to outlet pressures and the temperature, are the considered variables. Additionally, cavitation numbers under different operating conditions are obtained to investigate the effects of nanoparticles on hydraulic cavitation. Meanwhile, the contours are extracted to examine the distribution of bubbles for further investigation. This study is of interest for researchers working on hydraulic cavitation or nanofluids.

Computation doi: 10.3390/computation6030043

Authors: Hermann Knaus Martin Hofsäß Alexander Rautenberg Jens Bange

A model for the simulation of wind flow in complex terrain is presented based on the Reynolds averaged Navier&ndash;Stokes (RANS) equations. For the description of turbulence, the standard k-&epsilon;, the renormalization group (RNG) k-&epsilon;, and a Reynolds stress turbulence model are applied. Additional terms are implemented in the momentum equations to describe stratification of the Earth&rsquo;s atmosphere and to account for the Coriolis forces driven by the Earth&rsquo;s rotation, as well as for the drag force due to forested canopy. Furthermore, turbulence production and dissipation terms are added to the turbulence equations for the two-equation, as well as for the Reynolds stress models, in order to capture different types of land use. The approaches for the turbulence models are verified by means of a homogeneous canopy test case with flat terrain and constant forest height. The validation of the models is performed by investigating the WindForS wind test site. The simulation results are compared with five-hole probe velocity measurements using multipurpose airborne sensor carrier (MASC) systems (small unmanned research aircraft, UAVs) at different locations for the main wind regime. Additionally, Reynolds stresses measured with sonic anemometers at a meteorological wind mast at different heights are compared with simulation results using the Reynolds stress turbulence model.

Computation doi: 10.3390/computation6030042

Authors: Alessio Fuoco Sylvain Galier Hélène Roux-de Balmann Giorgio De Luca

The widespread use of nanofiltration and electrodialysis membrane processes is slowed down by the difficulties in predicting the membrane performances for treating streams of variable ionic compositions. Correlations between ion hydration properties and solute transfer can help to overcome this drawback. This research aims to investigate the correlation between theoretically evaluated hydration properties of major ions in solution and experimental values of neutral organic solute fluxes. In particular, the ion hydration energies, coordination and hydration numbers, and the average ion&ndash;water distance of Na+, Ca2+, Mg2+, Cl&minus; and SO42&minus; were calculated at a high quantum mechanics level and compared with experimental sugar fluxes previously reported. The properties computed by simple and not computationally expensive models were validated with information from the literature. This work discusses the correlation between the hydration energies of ions and fluxes of three saccharides, measured through nanofiltration and ionic-exchange membranes. In nanofiltration, the sugar flux increases with the presence of ions of increasing hydration energy. Instead, inverse linear correlations were found between the hydration energy and the sugar fluxes through ion exchange membranes. Finally, an empirical model is proposed for a rough evaluation of the variation in sugar fluxes as a function of hydration energy for the ion exchange membranes in diffusion experiments.

Computation doi: 10.3390/computation6030041

Authors: Omar Kebiri Lara Neureither Carsten Hartmann

We study linear-quadratic stochastic optimal control problems with bilinear state dependence where the underlying stochastic differential equation (SDE) has multiscale features. We show that, in the same way in which the underlying dynamics can be well approximated by a reduced-order dynamics in the scale separation limit (using classical homogenization results), the associated optimal expected cost converges to an effective optimal cost in the scale separation limit. This entails that we can approximate the stochastic optimal control for the whole system by a reduced-order stochastic optimal control, which is easier to compute because of the lower dimensionality of the problem. The approach uses an equivalent formulation of the Hamilton-Jacobi-Bellman (HJB) equation, in terms of forward-backward SDEs (FBSDEs). We exploit the efficient solvability of FBSDEs via a least squares Monte Carlo algorithm and show its applicability by a suitable numerical example.
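The least squares Monte Carlo step mentioned above amounts to projecting sampled outcomes onto basis functions of the state in order to approximate a conditional expectation. A minimal sketch of that regression step (the monomial basis and the sample data are illustrative assumptions, not the authors' FBSDE scheme):

```python
import random

def lsmc_conditional_expectation(xs, ys, degree=2):
    """Least-squares Monte Carlo projection: fit E[Y | X = x] by a
    polynomial in x, solving the normal equations with Gaussian
    elimination and partial pivoting."""
    m = degree + 1
    # Normal equations A c = b for the monomial basis 1, x, ..., x^degree
    A = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeff = [0.0] * m
    for r in range(m - 1, -1, -1):
        coeff[r] = (b[r] - sum(A[r][c] * coeff[c]
                               for c in range(r + 1, m))) / A[r][r]
    return lambda x: sum(c * x ** i for i, c in enumerate(coeff))

# Synthetic data with Y = X^2 + noise; the regression should recover x^2
rng = random.Random(0)
xs = [rng.uniform(-1.0, 1.0) for _ in range(2000)]
ys = [x * x + 0.1 * rng.gauss(0.0, 1.0) for x in xs]
cond = lsmc_conditional_expectation(xs, ys)
```

In an FBSDE solver this regression is performed backwards in time at each step, with the state samples playing the role of `xs`.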

Computation doi: 10.3390/computation6030040

Authors: Afshan Kanwal Chang Phang Umer Iqbal

In this paper, two-dimensional Genocchi polynomials and the Ritz&ndash;Galerkin method were developed to investigate the Fractional Diffusion Wave Equation (FDWE) and the Fractional Klein&ndash;Gordon Equation (FKGE). A satisfier function that satisfies all the initial and boundary conditions was used. A linear system of algebraic equations was obtained for the considered equation with the help of two-dimensional Genocchi polynomials along with the Ritz&ndash;Galerkin method. The FDWE and FKGE, including the nonlinear case, were reduced to solving a linear system of algebraic equations. Hence, the proposed method was able to greatly reduce the complexity of the problems and provide an accurate solution. The effectiveness of the proposed technique is demonstrated through several examples.
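The two-dimensional basis and the satisfier function are beyond a short sketch, but the one-dimensional Genocchi polynomials underlying the basis can be generated from Bernoulli numbers via the identity G_n = 2(1 - 2^n) B_n. This is one standard construction, assumed here for illustration; the authors may build the basis differently:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Bernoulli numbers B_0..B_n (convention B_1 = -1/2) via the
    recurrence sum_{k=0}^{m} C(m+1, k) B_k = 0."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-Fraction(1, m + 1)
                 * sum(comb(m + 1, k) * B[k] for k in range(m)))
    return B

def genocchi_numbers(n):
    """Genocchi numbers G_0..G_n via G_m = 2 (1 - 2^m) B_m."""
    B = bernoulli(n)
    return [2 * (1 - 2 ** m) * B[m] for m in range(n + 1)]

def genocchi_poly(n, x):
    """Evaluate G_n(x) = sum_{k=0}^{n} C(n, k) G_k x^(n-k) exactly."""
    G = genocchi_numbers(n)
    return sum(comb(n, k) * G[k] * Fraction(x) ** (n - k)
               for k in range(n + 1))

print(genocchi_numbers(8))
```

Exact rational arithmetic (`Fraction`) avoids the cancellation that plagues floating-point evaluation of these recurrences.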

Computation doi: 10.3390/computation6020039

Authors: Nasrin Akhter Wanli Qiao Amarda Shehu

The energy landscape, which organizes microstates by energies, has shed light on many cellular processes governed by dynamic biological macromolecules leveraging their structural dynamics to regulate interactions with molecular partners. In particular, the protein energy landscape has been central to understanding the relationship between protein structure, dynamics, and function. The landscape view, however, remains underutilized in an important problem in protein modeling, decoy selection in template-free protein structure prediction. Given the amino-acid sequence of a protein, template-free methods compute thousands of structures, known as decoys, as part of an optimization process that seeks minima of an energy function. Selecting biologically-active/native structures from the computed decoys remains challenging. Research has shown that energy is an unreliable indicator of nativeness. In this paper, we advocate that, while comparison of energies is not informative for structures that already populate minima of an energy function, the landscape view exposes the overall organization of generated decoys. As we demonstrate, such organization highlights macrostates that contain native decoys. We present two different computational approaches to extracting such organization and demonstrate through the presented findings that a landscape-driven treatment is promising in furthering research on decoy selection.

Computation doi: 10.3390/computation6020038

Authors: Jean-Paul Kone Xinyu Zhang Yuying Yan Stephen Adegbite

In this paper, an open-source toolbox that can be used to accurately predict the distribution of the major physical quantities that are transported within a proton exchange membrane (PEM) fuel cell is presented. The toolbox has been developed using the Open Source Field Operation and Manipulation (OpenFOAM) platform, which is an open-source computational fluid dynamics (CFD) code. The base case results for the distribution of velocity, pressure, chemical species, Nernst potential, current density, and temperature are as expected. The plotted polarization curve was compared to the results from a numerical model and experimental data taken from the literature. The conducted simulations have generated a significant amount of data and information about the transport processes that are involved in the operation of a PEM fuel cell. The key role played by the concentration constant in shaping the cell polarization curve has been explored. The development of the present toolbox is in line with the objectives outlined in the International Energy Agency (IEA, Paris, France) Advanced Fuel Cell Annex 37 that is devoted to developing open-source computational tools to facilitate fuel cell technologies. The work therefore serves as a basis for devising additional features that are not always feasible with a commercial code.

Computation doi: 10.3390/computation6020037

Authors: Khalid Hattaf Noura Yousfi

Human immunodeficiency virus (HIV) is a retrovirus that causes HIV infection and over time acquired immunodeficiency syndrome (AIDS). It can be spread and transmitted through two fundamental modes, one by virus-to-cell infection, and the other by direct cell-to-cell transmission. In this paper, we propose a new mathematical model that incorporates both modes of transmission and takes into account the role of the adaptive immune response in HIV infection. We first show that the proposed model is mathematically and biologically well posed. Moreover, we prove that the dynamical behavior of the model is fully determined by five threshold parameters. Furthermore, numerical simulations are presented to confirm our theoretical results.
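The authors' two-transmission-mode model with adaptive immunity is not given in the abstract; to show the kind of within-host dynamics involved, here is the classical three-compartment virus dynamics model (virus-to-cell infection only, with purely illustrative parameter values) integrated with a fourth-order Runge-Kutta step:

```python
def rk4_step(f, y, h):
    """One classical RK4 step for the autonomous system y' = f(y)."""
    k1 = f(y)
    k2 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k1)])
    k3 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k2)])
    k4 = f([yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6.0 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Target cells T, infected cells I, free virus V; illustrative rates
lam, d, beta, delta, p, c = 10.0, 0.1, 0.001, 0.5, 100.0, 5.0

def rhs(y):
    T, I, V = y
    return [lam - d * T - beta * T * V,   # production, death, infection
            beta * T * V - delta * I,     # infection, clearance
            p * I - c * V]                # virion production, clearance

y = [100.0, 0.0, 1.0]        # uninfected steady state plus a small inoculum
for _ in range(20000):        # 200 days with step h = 0.01
    y = rk4_step(rhs, y, 0.01)
```

With these rates the basic reproduction number exceeds one, so the infection takes hold and the system settles towards an endemic equilibrium rather than clearing the virus.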

Computation doi: 10.3390/computation6020036

Authors: Claudio Amovilli Franca Floris

Electron density is used to compute the Shannon entropy. The deviation of this quantity from its Hartree–Fock (HF) value has been observed to be related to the correlation energy. Thus, Shannon entropy is here proposed as a valid quantity to assess the quality of an energy density functional developed within Kohn–Sham theory. To this purpose, results from eight different functionals, representative of Jacob’s ladder, are compared with accurate results obtained from diffusion quantum Monte Carlo (DMC) computations. For three series of atomic ions, our results show that the revTPSS and the PBE0 functionals are the best, whereas those based on the local density approximation give the largest discrepancy from the DMC Shannon entropy.
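The quantity in question is S = -∫ ρ(r) ln ρ(r) d³r. As a sanity check of this functional on a density with a known analytic answer, the sketch below evaluates it for the hydrogen 1s density in atomic units, ρ(r) = e^(-2r)/π, for which S = 3 + ln π exactly (an illustration, not the paper's DMC densities):

```python
import math

def shannon_entropy_radial(rho, r_max=40.0, n=4000):
    """S = -integral of rho*ln(rho) over R^3 for a spherically symmetric
    density, via composite Simpson's rule on the radial integral
    -4*pi * r^2 * rho(r) * ln(rho(r))."""
    h = r_max / n
    total = 0.0
    for i in range(n + 1):
        r = i * h
        dens = rho(r)
        val = -4.0 * math.pi * r * r * dens * math.log(dens)
        w = 1.0 if i in (0, n) else (4.0 if i % 2 == 1 else 2.0)
        total += w * val
    return total * h / 3.0

# Hydrogen 1s density in atomic units; analytic entropy is 3 + ln(pi)
s = shannon_entropy_radial(lambda r: math.exp(-2.0 * r) / math.pi)
```

The same radial quadrature applies to any spherically averaged atomic density, e.g., one tabulated from a Kohn–Sham calculation.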

Computation doi: 10.3390/computation6020035

Authors: Eberhard Engel

Far outside the surface of slabs, the exact exchange (EXX) potential v_x falls off as &minus;1/z, if z denotes the direction perpendicular to the surface and the slab is localized around z = 0. Similarly, the EXX energy density e_x behaves as &minus;n/(2z), where n is the electron density. Here, an alternative proof of these relations is given, in which the Coulomb singularity in the EXX energy is treated in a particularly careful fashion. This new approach allows the derivation of the next-to-leading order contributions to the asymptotic v_x and e_x. It turns out that in both cases, the corrections are proportional to 1/z^2 in general.

Computation doi: 10.3390/computation6020034

Authors: Ali Cemal Benim Michael Diederich Björn Pfeiffelmann

The purpose of this study is the development of an automated two-dimensional airfoil shape optimization procedure for small horizontal axis wind turbines (HAWT), with an emphasis on high thrust and aerodynamically stable performance. The procedure combines the Computational Fluid Dynamics (CFD) analysis with the Response Surface Methodology (RSM), the Biobjective Mesh Adaptive Direct Search (BiMADS) optimization algorithm and an automatic geometry and mesh generation tool. In CFD analysis, a Reynolds-Averaged Navier&ndash;Stokes (RANS) formulation is applied in combination with a two-equation turbulence model. For describing the system behaviour under alternating wind conditions, a number of CFD 2D-RANS-Simulations with varying Reynolds numbers and wind angles are performed. The number of cases is reduced by the use of RSM. In the analysis, an emphasis is placed upon the role of the blade-to-blade interaction. The average and the standard deviation of the thrust are optimized by a derivative-free optimization algorithm to define a Pareto optimal set, using the BiMADS algorithm. The results show that improvements in the performance can be achieved by modifications of the blade shape and the present procedure can be used as an effective tool for blade shape optimization.

Computation doi: 10.3390/computation6020033

Authors: María Teresa Sánchez José Manuel García-Aznar

Cell migration is an important biological process that has generated increasing interest during the last several years. This process is based on three phases: protrusion at the front end of the cell, de-adhesion at the rear end and contraction of the cell body, all of them coordinated due to the polymerization/depolymerization of certain cytoskeletal proteins. The aim of this work is to present a mathematical model to simulate the actin polymerization/depolymerization process that regulates the final outcome of the cell migration process, considering all the above phases, in a particular case: when the cell is confined in a microfluidic channel. Under these specific conditions, cell migration can be approximated by using one-dimensional simulations. We will propose a system of reaction&ndash;diffusion equations to simulate the behavior of the cytoskeletal proteins responsible for protrusion and contraction in the cell, coupled with the mechanical response of the cell, computing its deformations and stresses. Furthermore, a numerical procedure is presented in order to simulate the whole process in a moving and deformable domain corresponding to the cell body.

Computation doi: 10.3390/computation6020032

Authors: Pham Phuc Tsuyoshi Nozu Hirotoshi Kikuchi Kazuki Hibi Yukio Tamura

A subgrid-scale model based on coherent structures, called the Coherent Structure Smagorinsky Model (CSM), has been applied to a large eddy simulation to assess its performance in the prediction of wind pressure distributions on buildings. The study cases were carried out for the assessment of an isolated rectangular high-rise building and a building with a setback (both in a uniform flow) and an actual high-rise building in an urban city with turbulent boundary layer flow. For the isolated rectangular high-rise building in uniform flow, the CSM showed good agreement with both the traditional Smagorinsky Model (SM) and the experiments (values within 20%). For the building with a setback as well as the actual high-rise building in an urban city, both of which have a distinctive wind pressure distribution with large negative pressure caused by the complicated flow due to the strong influence of neighboring buildings, the CSM effectively gives more accurate results with less variation than the SM in comparison with the experimental results (within 20%). The CSM also yielded consistent peak pressure coefficients for all wind directions, within 20% of experimental values in a relatively high-pressure region of the case study of the actual high-rise building in an urban city.

Computation doi: 10.3390/computation6020031

Authors: Ruifeng Hu Limin Wang Ping Wang Yan Wang Xiaojing Zheng

In the present work, a highly efficient incompressible flow solver with semi-implicit time advancement on a fully staggered grid using a high-order compact difference scheme is first developed in the framework of approximate factorization. The fourth-order compact difference scheme is adopted for approximations of derivatives and interpolations in the incompressible Navier–Stokes equations. The pressure Poisson equation is efficiently solved by the fast Fourier transform (FFT). The framework of approximate factorization significantly simplifies the implementation of the semi-implicit time advancement with a high-order compact scheme. Benchmark tests demonstrate the high accuracy of the proposed numerical method. Second, by applying the proposed numerical method, we compute turbulent channel flows at low and moderate Reynolds numbers by direct numerical simulation (DNS) and large eddy simulation (LES). It is found that the predictions of turbulence statistics and especially energy spectra can be obviously improved by adopting the high-order scheme rather than the traditional second-order central difference scheme.
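The FFT-based pressure Poisson solve can be illustrated in one dimension: with periodic boundary conditions, each Fourier mode of u'' = f satisfies u_hat(k) = -f_hat(k)/k². A minimal 1-D analogue (not the solver from the paper, where the FFT is applied in the periodic channel directions):

```python
import numpy as np

def poisson_fft_1d(f_vals, length):
    """Solve u'' = f with periodic boundary conditions (zero-mean data)
    using the FFT: in Fourier space u_hat = -f_hat / k^2."""
    n = f_vals.size
    # Angular wavenumbers matching numpy's FFT ordering
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)
    f_hat = np.fft.fft(f_vals)
    u_hat = np.zeros_like(f_hat)
    nonzero = k != 0.0
    u_hat[nonzero] = -f_hat[nonzero] / (k[nonzero] ** 2)  # drop mean mode
    return np.real(np.fft.ifft(u_hat))

n, L = 64, 2.0 * np.pi
x = np.arange(n) * L / n
u = poisson_fft_1d(-np.sin(x), L)   # u'' = -sin(x)  has solution  u = sin(x)
err = np.max(np.abs(u - np.sin(x)))
```

For smooth periodic data the solve is spectrally accurate, which is why the FFT route dominates the cost profile of such solvers so little.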

Computation doi: 10.3390/computation6020030

Authors: Samir Matar

Topochemical and electronic structure relationships are shown upon going from ANCl to A2N2Se (A = Zr, Ce) through metathesis. The chalcogen Se (divalent) displacing halogen Cl (monovalent) modifies the arrangements of A–N monolayers within ANCl (…Cl|{AN}|Cl… sequences) to double layers in A2N2Se (…Se|{A2N2}|Se… sequences). The investigation, carried out in the framework of quantum density functional theory (DFT), points to peculiar features pertaining to the dominant effect of the A–N covalent bond, stronger than the ionic A–Cl and ionocovalent A–Se bonds, as identified from analyses of the bonding based on the overlap integral, charge transfer, and electron localization function mapping. The electronic density of states shows semiconducting behavior due to the tetravalent character of A. The resulting overall pseudo-binary compounds are expressed formally with full ionization as {AN}Cl and {A2N2}Se.

Computation doi: 10.3390/computation6020029

Authors: Alexander Landa Per Söderlind Ivan Naumov John Klepeis Levente Vitos

In the periodic table, only a few pure metals exhibit lattice or magnetic instabilities associated with Fermi surface nesting, the classical examples being α-U and Cr. Whereas α-U displays a strong Kohn anomaly in the phonon spectrum that ultimately leads to the formation of charge density waves (CDWs), Cr is known for its nesting-induced spin density waves (SDWs). Recently, it has become clear that a pronounced Kohn anomaly and the corresponding softening in the elastic constants is also the key factor that controls structural transformations and mechanical properties in compressed group VB metals—materials with relatively high superconducting critical temperatures. This article reviews the current understanding of the structural and mechanical behavior of these metals under pressure with an introduction to the concept of the Kohn anomaly and how it is related to the important concept of Peierls instability. We review both experimental and theoretical results showing different manifestations of the Kohn anomaly in the transverse acoustic phonon mode TA (ξ00) in V, Nb, and Ta. Specifically, in V the anomaly triggers a structural transition to a rhombohedral phase, whereas in Nb and Ta it leads to an anomalous reduction in yield strength.

Computation doi: 10.3390/computation6020028

Authors: Gongbo Zu Kit Lam

Wind flow structures and their consequent wind loads on two high-rise buildings in staggered arrangement are investigated by Large Eddy Simulation (LES). Synchronized pressure and flow field measurements by particle image velocimetry (PIV) are conducted in a boundary layer wind tunnel to validate the numerical simulations. The instantaneous and time-averaged flow fields are analyzed and discussed in detail. The coherent flow structures in the building gap are clearly observed and the upstream building wake is found to oscillate sideways and meander down to the downstream building in a coherent manner. The disruptive effect on the downstream building wake induced by the upstream building is also observed. Furthermore, the connection between the upstream building wake and the wind loads on the downstream building is explored by the simultaneous data of wind pressures and wind flow fields.

Computation doi: 10.3390/computation6020027

Authors: S. Paz Cameron Abrams

In this work, we study the influence of hidden barriers on the convergence behavior of three free-energy calculation methods: well-tempered metadynamics (WTMD), adaptive-biasing forces (ABF), and on-the-fly parameterization (OTFP). We construct a simple two-dimensional potential-energy surface (PES) that allows for an exact analytical result for the free energy in any one-dimensional order parameter. We then choose different collective variable (CV) definitions and PES parameters to create three different systems with increasing sampling challenges. We find that all three methods are not greatly affected by the hidden barriers in the simplest case considered. The adaptive sampling methods show faster sampling, while the auxiliary high-friction requirement of OTFP makes it slower for this case. However, a slight change in the CV definition has a strong impact on the ABF and WTMD performance, illustrating the importance of choosing suitable collective variables.
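The tempering mechanism of WTMD can be shown in a few lines: each new Gaussian is deposited with height w0 · exp(-V_bias(s)/ΔT), so heights decay wherever bias has already accumulated. A schematic sketch (units with k_B absorbed into ΔT; all parameter values are illustrative, not the simulation setup of the paper):

```python
import math

def wt_metad_heights(s_visited, w0=1.0, sigma=0.2, delta_T=2.0):
    """Heights of successively deposited Gaussians in well-tempered
    metadynamics: w_k = w0 * exp(-V_bias(s_k) / delta_T), where V_bias
    is the sum of all previously deposited Gaussians."""
    centers, heights = [], []
    for s in s_visited:
        v_bias = sum(w * math.exp(-(s - c) ** 2 / (2.0 * sigma ** 2))
                     for w, c in zip(heights, centers))
        w = w0 * math.exp(-v_bias / delta_T)
        centers.append(s)
        heights.append(w)
    return heights

# A walker stuck at s = 0 (e.g., behind a hidden barrier orthogonal to
# the CV): the deposited heights must decay monotonically
h = wt_metad_heights([0.0] * 10)
```

This decay is exactly what distinguishes well-tempered from standard metadynamics, and why a poorly chosen CV (where the walker is stuck for the wrong reason) flattens the bias prematurely.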

Computation doi: 10.3390/computation6010026

Authors: Fredrik Nilsson Ferdi Aryasetiawan

Substantial progress has been achieved in the last couple of decades in computing the electronic structure of correlated materials from first principles. This progress has been driven by parallel development in theory and numerical algorithms. Theoretical development in combining ab initio approaches and many-body methods is particularly promising. A crucial role is also played by a systematic method for deriving a low-energy model, which bridges the gap between real and model systems. In this article, an overview is given tracing the development from the LDA+U to the latest progress in combining the GW method and (extended) dynamical mean-field theory (GW+EDMFT). The emphasis is on conceptual and theoretical aspects rather than technical ones.

Computation doi: 10.3390/computation6010025

Authors: Xiao-Yin Pan Viraht Sahni

Dissipative effects arise in an electronic system when it interacts with a time-dependent environment. Here, the Schrödinger theory of electrons in an electromagnetic field including dissipative effects is described from a new perspective. Dissipation is accounted for via the effective Hamiltonian approach in which the electron mass is time-dependent. The perspective is that of the individual electron: the corresponding equation of motion for the electron or time-dependent differential virial theorem—the ‘Quantal Newtonian’ second law—is derived. According to the law, each electron experiences an external field comprised of a binding electric field, the Lorentz field, and the electromagnetic field. In addition, there is an internal field whose components are representative of electron correlations due to the Pauli exclusion principle and Coulomb repulsion, kinetic effects, and density. There is also an internal contribution due to the magnetic field. The response of the electron is governed by the current density field in which a damping coefficient appears. The law leads to further insights into Schrödinger theory, and in particular the intrinsic self-consistent nature of the Schrödinger equation. It is proved that in the presence of dissipative effects, the basic variables (gauge-invariant properties, knowledge of which determines the Hamiltonian) are the density and physical current density. Finally, a local effective potential theory of dissipative systems—quantal density functional theory (QDFT)—is developed. This constitutes the mapping from the interacting dissipative electronic system to one of noninteracting fermions possessing the same dissipation and basic variables. Attributes of QDFT are the separation of the electron correlations due to the Pauli exclusion principle and Coulomb repulsion, and the determination of the correlation contributions to the kinetic energy. 
Hence, Schrödinger theory in conjunction with QDFT leads to additional insights into the dissipative system.

Computation doi: 10.3390/computation6010024

Authors: Katrina Calautit Angelo Aquino John Calautit Payam Nejat Fatemeh Jomehzadeh Ben Hughes

Global demand for energy continues to increase rapidly, due to economic and population growth, especially in emerging market economies. This leads to challenges and worries about energy security, which can increase as more users need more energy resources. Also, higher consumption of fossil fuels leads to more greenhouse gas emissions, which contribute to global warming. Moreover, many people still lack access to electricity. Several studies have reported that wind energy is one of the most rapidly developing sources of power, and with declining costs due to technology and manufacturing advancements and concerns over energy security and environmental issues, the trend is predicted to continue. As a result, tools and methods to simulate and optimize wind energy technologies must also continue to advance. This paper reviews the most recently published works on Computational Fluid Dynamics (CFD) simulations of micro to small wind turbines, buildings integrated with wind turbines, and wind turbines installed in wind farms. In addition, the existing limitations and complications of wind energy system modelling were examined, and issues that need further work are highlighted. This study investigated the current development of CFD modelling of wind energy systems. Studies on the aerodynamic interaction between the atmospheric boundary layer or wind farm terrain, the turbine rotor, and their wakes were investigated. Furthermore, CFD combined with other tools, such as the blade element momentum method, was examined.

Computation doi: 10.3390/computation6010023

Authors: B. Shadrack Jabes Christian Krekeler

We use the Grand Canonical Adaptive Resolution Molecular Dynamics Technique (GC-AdResS) to examine the essential degrees of freedom necessary for reproducing the structural properties of the imidazolium class of ionic liquids (ILs). In this technique, the atomistic details are treated in an open sub-region of the system, while the surrounding environment is modelled by a generic coarse-grained model. We systematically characterize spatial quantities such as the intramolecular and intermolecular radial distribution functions, as well as other structural and orientational properties of the ILs. The spatial quantities computed in the open sub-region of the system are in excellent agreement with the equivalent quantities calculated in a full atomistic simulation, suggesting that the atomistic degrees of freedom outside the sub-region are negligible. The size of the sub-region considered in this study is 2 nm, which is essentially the size of a few ions. Insight from the study suggests that a high degree of spatial locality plays a crucial role in characterizing the properties of imidazolium-based ionic liquids.
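The radial distribution functions used for the comparison above can be computed from particle coordinates in a few lines. The following is a generic sketch for point particles in a cubic periodic box (the uniform random configuration at the end is a stand-in for real IL coordinates, so its g(r) is flat rather than structured):

```python
import numpy as np

def radial_distribution(positions, box, r_max, n_bins=100):
    """Radial distribution function g(r) for point particles in a cubic
    periodic box, using the minimum-image convention."""
    n = len(positions)
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts = np.zeros(n_bins)
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box * np.round(d / box)               # minimum-image displacement
        r = np.linalg.norm(d, axis=1)
        counts += np.histogram(r, bins=edges)[0]
    rho = n / box ** 3                             # number density
    shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    ideal = 0.5 * n * rho * shell                  # expected pair count in an ideal gas
    g = counts / ideal
    r_mid = 0.5 * (edges[1:] + edges[:-1])
    return r_mid, g

rng = np.random.default_rng(0)
box = 5.0
pos = rng.uniform(0.0, box, size=(500, 3))         # ideal-gas-like test configuration
r, g = radial_distribution(pos, box, r_max=2.0)
```

In an AdResS-type analysis, the same histogramming would simply be restricted to particles inside the atomistic sub-region.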

Computation doi: 10.3390/computation6010022

Authors: Péter Koltai Hao Wu Frank Noé Christof Schütte

There are multiple ways in which a stochastic system can be out of statistical equilibrium. It might be subject to time-varying forcing; it might be in a transient phase on its way towards equilibrium; it might be in equilibrium without us noticing it, due to insufficient observations; and it might even be a system failing to admit an equilibrium distribution at all. We review some of the approaches that model the effective statistical behavior of equilibrium and non-equilibrium dynamical systems, and show that both cases can be considered under the unified framework of optimal low-rank approximation of so-called transfer operators. Particular attention is given to the connection between these methods, Markov state models, and the concept of metastability, as well as to the estimation of such reduced-order models from finite simulation data. All these topics play an important role in, e.g., molecular dynamics, where Markov state models are often and successfully utilized, and which is the main motivating application in this paper. We illustrate our considerations by numerical examples.
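The two central ingredients above, estimating a transfer operator from finite simulation data and approximating it by a low-rank operator, can be sketched for a finite state space. The toy four-state chain below is an illustrative stand-in for discretized molecular dynamics data, not an example from the paper:

```python
import numpy as np

def count_transition_matrix(traj, n_states, lag=1):
    """Row-normalized transition matrix estimated from a discrete trajectory."""
    C = np.zeros((n_states, n_states))
    for i, j in zip(traj[:-lag], traj[lag:]):
        C[i, j] += 1.0
    return C / C.sum(axis=1, keepdims=True)

def low_rank_approximation(T, rank):
    """Spectral low-rank approximation of the transfer operator T: keep the
    `rank` dominant eigenpairs (slow processes), discard the fast ones."""
    w, vr = np.linalg.eig(T)
    order = np.argsort(-np.abs(w))[:rank]
    vl = np.linalg.inv(vr)                         # left eigenvectors as rows
    return (vr[:, order] * w[order]) @ vl[order, :]

# toy metastable system: two blocks of states with rare jumps between them
T_true = np.array([[0.95, 0.05, 0.00, 0.00],
                   [0.05, 0.90, 0.05, 0.00],
                   [0.00, 0.05, 0.90, 0.05],
                   [0.00, 0.00, 0.05, 0.95]])
rng = np.random.default_rng(1)
traj = [0]
for _ in range(50000):
    traj.append(rng.choice(4, p=T_true[traj[-1]]))
T_hat = count_transition_matrix(np.array(traj), 4)
T2 = low_rank_approximation(T_hat, rank=2).real    # rank-2 "Markov state model"
```

The rank-2 operator retains the stationary process and the slow block-to-block exchange while averaging out the fast intra-block fluctuations, which is the essence of a two-state Markov state model for this chain.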

Computation doi: 10.3390/computation6010021

Authors: Joseph Rudzinski Tristan Bereau

Coarse-grained molecular simulation models can provide significant insight into the complex behavior of protein systems, but suffer from an inherently distorted description of dynamical properties. We recently demonstrated that, for a heptapeptide of alanine residues, the structural and kinetic properties of a simulation model are linked in a rather simple way, given a certain level of physics present in the model. In this work, we extend these findings to a longer peptide, for which the representation of configuration space in terms of a full enumeration of sequences of helical/coil states along the peptide backbone is impractical. We verify the structural-kinetic relationships by scanning the parameter space of a simple native-biased model and then employ a distinct transferable model to validate and generalize the conclusions. Our results further demonstrate the validity of the previous findings, while clarifying the role of conformational entropy in the determination of the structural-kinetic relationships. More specifically, while the global, long timescale kinetic properties of a particular class of models with varying energetic parameters but approximately fixed conformational entropy are determined by the overarching structural features of the ensemble, a shift in these kinetic observables occurs for models with a distinct representation of steric interactions. At the same time, the relationship between structure and more local, faster kinetic properties is not affected by varying the conformational entropy of the model.
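The "global, long timescale kinetic properties" discussed above are commonly summarized by the implied timescales of a Markov state model built from the simulation. A minimal sketch follows; the three-state transition matrix and unit lag time are illustrative, not taken from the peptide models studied in the paper:

```python
import numpy as np

def implied_timescales(T, lag_time):
    """Implied timescales t_i = -tau / ln|lambda_i| from the non-stationary
    eigenvalues of a Markov state model transition matrix T(tau)."""
    w = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
    return -lag_time / np.log(w[1:])               # skip the stationary eigenvalue 1

# toy helix/coil-like model: one slow global transition, one fast local process
T = np.array([[0.98, 0.02, 0.00],
              [0.02, 0.88, 0.10],
              [0.00, 0.10, 0.90]])
ts = implied_timescales(T, lag_time=1.0)
```

Comparing such timescales between a coarse-grained model and a reference simulation is one concrete way to quantify the distorted dynamics that the abstract refers to.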

Computation doi: 10.3390/computation6010020

Authors: Marcus Weber

Upon ligand binding or during chemical reactions, the state of a molecular system changes in time. Usually, we consider a finite set of (macro-)states of the system (e.g., ‘bound’ vs. ‘unbound’), although the process itself takes place in a continuous space. In this context, the formula χ = X A connects the micro-dynamics of the molecular system to its macro-dynamics. χ can be understood as a clustering of the micro-states of a molecular system into a few macro-states, and X is a basis of an invariant subspace of a transfer operator describing the micro-dynamics of the system. The formula claims that there is an unknown linear relation A between these two objects. With the aid of this formula, we can understand rebinding effects, the electron flux in pericyclic reactions, and systematic changes of binding rates in kinetic ITC experiments. We can also analyze sequential spectroscopy experiments and rare-event systems more easily. This article provides an explanation of the formula and an overview of some of its consequences.
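For two macro-states, the relation χ = X A can be made concrete: X holds the constant eigenvector and the slowest non-trivial eigenvector of a transition matrix, and a 2 × 2 matrix A turns them into non-negative membership functions summing to one. The sketch below is in the spirit of PCCA-type methods; the four-state transition matrix is an illustrative toy, not an example from the article:

```python
import numpy as np

# micro-dynamics: 4 micro-states forming two metastable macro-states
# (a 'bound'-like block {0,1} and an 'unbound'-like block {2,3})
T = np.array([[0.90, 0.10, 0.00, 0.00],
              [0.10, 0.89, 0.01, 0.00],
              [0.00, 0.01, 0.89, 0.10],
              [0.00, 0.00, 0.10, 0.90]])

w, v = np.linalg.eig(T)
order = np.argsort(-w.real)
X = v[:, order[:2]].real          # basis of the dominant invariant subspace
X[:, 0] /= X[0, 0]                # normalize so the first column is constant

# choose A so that chi = X @ A maps the slow eigenvector onto [0, 1]
psi = X[:, 1]
lo, hi = psi.min(), psi.max()
A = np.array([[-lo / (hi - lo), hi / (hi - lo)],
              [1.0 / (hi - lo), -1.0 / (hi - lo)]])
chi = X @ A                       # soft clustering into two macro-states
```

Each row of χ gives the membership of a micro-state in the two macro-states; micro-states near the block boundary receive fractional memberships, which is exactly the soft-clustering interpretation described in the abstract.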

Computation doi: 10.3390/computation6010019

Authors: Xinghao Liang Yang Li Qiang Zhao Zheng Zhang Xiaoping Ouyang

Silicon carbide (SiC) is considered an important material for nuclear engineering due to its excellent properties. Changing the carbon content in SiC can regulate and control its elastic and thermodynamic properties, but a simulation study of the effect of carbon content on the sputtering of SiC by helium ions is still lacking. In this work, we used Monte Carlo and molecular dynamics simulation methods to study the effects of carbon concentration, incidence energy, incidence angle, and target temperature on the sputtering yield of SiC. The results show that the incident ions’ energy and angle have a significant effect on the sputtering yield of SiC when the carbon concentration in SiC is around 62 at %, while the target temperature has little effect on the sputtering yield. Our work may provide theoretical support for experimental research on and engineering applications of carbon fiber-reinforced SiC used as a plasma-facing material in tokamak fusion reactors.
