Computation doi: 10.3390/computation7010017

Authors: Besagni Inzoli

A precise estimation of the bubble size distribution (BSD) is required to understand the fluid dynamics in gas-liquid bubble columns at the "bubble scale," evaluate the heat and mass transfer rates, and support scale-up approaches. In this paper, we formulate a population balance model and validate it against a previously published experimental dataset. The dataset consists of BSDs obtained in the "pseudo-homogeneous" flow regime in a large-diameter, large-scale bubble column. The aim of the population balance model is to predict the BSD in the developed region of the bubble column using the BSD at the sparger as input. The proposed approach estimates the BSD correctly and is promising for future studies of bubble size in large-scale gas–liquid bubble columns.

Computation doi: 10.3390/computation7010016

Authors: Anna Choromanska Ish Kumar Jain

We analyze the theoretical properties of the recently proposed objective function for efficient online construction and training of multiclass classification trees in settings where the label space is very large. We show the important properties of this objective and provide a complete proof that maximizing it simultaneously encourages balanced trees and improves the purity of the class distributions at subsequent levels in the tree. We further explore its connection to three well-known entropy-based decision tree criteria, i.e., Shannon entropy, Gini-entropy, and its modified variant, for which efficient optimization strategies are largely unknown in the extreme multiclass setting. We show theoretically that this objective can be viewed as a surrogate function for all of these entropy criteria and that maximizing it indirectly optimizes them as well. We derive boosting guarantees and obtain a closed-form expression for the number of iterations needed to reduce the considered entropy criteria below an arbitrary threshold. The obtained theorem relies on a weak hypothesis assumption that directly depends on the considered objective function. Finally, we prove that optimizing the objective directly reduces the multiclass classification error of the decision tree.
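
As an illustrative sketch (not taken from the paper), the Shannon and Gini entropy criteria mentioned above can be computed for a node's class distribution as follows; the example distributions are hypothetical:

```python
import math

def shannon_entropy(p):
    """Shannon entropy (in bits) of a class distribution p (probabilities summing to 1)."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def gini_entropy(p):
    """Gini impurity: the expected error of randomly labeling from p."""
    return 1.0 - sum(pi * pi for pi in p)

# A balanced distribution maximizes both criteria; a pure node minimizes them.
balanced = [0.25, 0.25, 0.25, 0.25]
pure = [1.0, 0.0, 0.0, 0.0]

print(shannon_entropy(balanced))  # 2 bits for 4 equiprobable classes
print(gini_entropy(pure))         # 0 for a pure node
```

Both criteria are maximized by balanced splits, which is the behavior the paper's surrogate objective encourages.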

Computation doi: 10.3390/computation7010015

Authors: Saeedeh Bahrami Alireza Bosaghzadeh Fadi Dornaika

In semi-supervised label propagation (LP), the data manifold is approximated by a graph, which is considered as a similarity metric. Graph estimation is a crucial task, as it affects the processes subsequently applied to the graph (e.g., LP, classification). As our knowledge of the data is limited, a single approximation cannot easily find the appropriate graph, so multiple graphs are constructed. Recently, multi-metric fusion techniques have been used to construct more accurate graphs that better represent the data manifold and, hence, improve the performance of LP. However, most of these algorithms disregard the label-space information in the LP process. In this article, we propose a new multi-metric graph-fusion method based on the Flexible Manifold Embedding algorithm. The proposed method is a unified framework that merges two phases: graph fusion and LP. Based on one available view, different simple graphs are efficiently generated and used as input to the proposed fusion approach. Moreover, the method incorporates the label-space information as a new form of graph, namely the Correlation Graph, alongside the other similarity graphs, and updates the correlation graph to find a better representation of the data manifold. Our experimental results on four face datasets in face recognition demonstrate the superiority of the proposed method over other state-of-the-art algorithms.

Computation doi: 10.3390/computation7010014

Authors: Piotr Jaśkowski Sławomir Biruk

This study adopts the flow shop concept used in industrial production to schedule repetitive non-linear construction projects, where specialized groups of workers execute processes in work zones (buildings) in a predefined order common to all groups. This problem is characteristic of construction projects that involve erecting multiple buildings. As the duration of the project heavily depends upon the sequence of the work zones, this study aims at providing a model and a practical approach for finding the optimal solution that assures the shortest duration of the project, allows the contractor to complete particular work zones (buildings) as soon as possible (without idle time), and conforms to a predefined sequence of work zone completion. This last constraint may arise from the client's requirements or the physical conditions of the project and has not been addressed by existing scheduling methods. Reducing the duration of the entire project brings the benefit of lower indirect costs and, if accompanied by a reduced duration of completing particular buildings (i.e., work zones), may also provide the opportunity to sell project deliverables sooner, thus improving the economic efficiency of the project. In search of optimal schedules, the authors apply algorithms for the Minimum Hamiltonian Cycle/Asymmetric Traveling Salesman Problem (ATSP).
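
As a minimal illustration of the optimization machinery named above (not the authors' implementation), the ATSP can be solved exactly by enumeration for a handful of work zones; the cost matrix below is hypothetical:

```python
from itertools import permutations

def shortest_hamiltonian_cycle(cost):
    """Brute-force ATSP: find the minimum-cost tour visiting every zone once.
    Feasible only for small instances; cost[i][j] is the (possibly asymmetric)
    cost of moving the crews from zone i to zone j."""
    n = len(cost)
    best_tour, best_cost = None, float("inf")
    for perm in permutations(range(1, n)):  # fix zone 0 as the start
        tour = (0,) + perm
        c = sum(cost[tour[k]][tour[(k + 1) % n]] for k in range(n))
        if c < best_cost:
            best_tour, best_cost = tour, c
    return best_tour, best_cost

# Hypothetical 4-zone asymmetric cost matrix.
cost = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
tour, c = shortest_hamiltonian_cycle(cost)
print(tour, c)  # -> (0, 2, 3, 1) with cost 21
```

Real instances would use dedicated ATSP solvers rather than enumeration, since the number of tours grows factorially with the number of zones.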

Computation doi: 10.3390/computation7010013

Authors: Francesco Rundo Sergio Rinella Simona Massimino Marinella Coco Giorgio Fallica Rosalba Parenti Sabrina Conoci Vincenzo Perciavalle

The development of detection methodologies for reliable drowsiness tracking is a challenging task requiring both appropriate signal inputs and accurate, robust algorithms of analysis. The aim of this research is to develop an advanced method to detect the drowsiness stage in the electroencephalogram (EEG), the most reliable physiological measurement, using promising machine learning methodologies. The method is based on a stacked autoencoder with softmax layers. Results obtained from 62 volunteers indicate 100% accuracy in drowsy/wakeful discrimination, showing that this approach can be very promising for use in the next generation of medical devices. The methodology can be extended to other uses in everyday life in which maintaining the level of vigilance is critical. Future work aims to perform extended validation of the proposed pipeline with a wide-ranging training set in which the photoplethysmogram (PPG) signal and visual information are integrated with the EEG analysis in order to improve the robustness of the overall approach.

Computation doi: 10.3390/computation7010012

Authors: Jingwei Too Abdul Rahim Abdullah Norhashimah Mohd Saad Weihown Tee

As the number of hand motion types grows, more electromyography (EMG) features are required for accurate classification of EMG signals. However, increasing the number of EMG features not only degrades classification performance but also increases the complexity of the classifier. Feature selection is an effective process for eliminating redundant and irrelevant features. In this paper, we propose a new personal-best (Pbest) guided binary particle swarm optimization (PBPSO) to solve the feature selection problem for EMG signal classification. First, the discrete wavelet transform (DWT) decomposes the signal into multiresolution coefficients, and features are extracted from each coefficient to form the feature vector. PBPSO is then used to select the most informative features from the original feature set. To measure the effectiveness of PBPSO, binary particle swarm optimization (BPSO), a genetic algorithm (GA), a modified binary tree growth algorithm (MBTGA), and binary differential evolution (BDE) were used for performance comparison. Our experimental results show the superiority of PBPSO over the other methods, especially in feature reduction: it can eliminate more than 90% of the features while keeping classification accuracy very high. Hence, PBPSO is well suited to clinical and rehabilitation applications.
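
A minimal sketch of binary PSO for feature selection may help fix ideas; this is plain BPSO, not the paper's Pbest-guided variant, and it uses a toy fitness (hypothetical "relevant" feature indices) instead of classifier accuracy:

```python
import math
import random

random.seed(0)

def fitness(mask, relevant):
    """Toy objective: reward selecting the relevant features, penalize the
    selected-feature count. A real run would use classifier accuracy."""
    hits = sum(1 for i in relevant if mask[i])
    return hits - 0.1 * sum(mask)

def binary_pso(n_feats, relevant, n_particles=20, iters=60, vmax=6.0):
    swarm = [[random.randint(0, 1) for _ in range(n_feats)] for _ in range(n_particles)]
    vel = [[0.0] * n_feats for _ in range(n_particles)]
    pbest = [list(p) for p in swarm]
    gbest = list(max(swarm, key=lambda m: fitness(m, relevant)))
    for _ in range(iters):
        for i, p in enumerate(swarm):
            for d in range(n_feats):
                r1, r2 = random.random(), random.random()
                v = vel[i][d] + 2 * r1 * (pbest[i][d] - p[d]) + 2 * r2 * (gbest[d] - p[d])
                vel[i][d] = max(-vmax, min(vmax, v))       # clamp velocity
                prob = 1.0 / (1.0 + math.exp(-vel[i][d]))  # sigmoid transfer function
                p[d] = 1 if random.random() < prob else 0
            if fitness(p, relevant) > fitness(pbest[i], relevant):
                pbest[i] = list(p)
            if fitness(p, relevant) > fitness(gbest, relevant):
                gbest = list(p)
    return gbest

best = binary_pso(10, relevant={1, 4, 7})
print(best, fitness(best, {1, 4, 7}))
```

The Pbest-guided modification of the paper alters how the personal best steers the position update; the swarm mechanics above are otherwise the standard ones.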

Computation doi: 10.3390/computation7010011

Authors: Wilfried Gappmair

Parameter estimation is of paramount importance in every digital receiver. This is true not only for radio but also for optical links; otherwise, subsequent processing stages, like detector units or error correction schemes, could not be operated reliably. However, for a bandlimited optical intensity channel, the problem of parameter estimation is strongly related to non-negative pulse shapes that also satisfy the Nyquist criterion, so as to keep the detection process as simple as possible. To the best of the author's knowledge, this is the first time that both topics, parameter estimation on the one hand and bandlimited intensity modulation on the other, are jointly investigated. Since symbol timing and signal amplitude are the parameters of interest in this case, the corresponding Cramer–Rao lower bounds are derived as the theoretical limit of the jitter variance generated by the related estimator algorithms. In this context, a maximum likelihood solution is developed for the recovery of both timing and amplitude. Since this approach requires a receiver matched filter, which destroys the Nyquist property of the non-negative pulse shape, we compare it to a flat receiver filter that preserves the required orthogonality property. It turns out that the jitter performance of the matched filter method is close to the Cramer–Rao lower bound in the medium-to-low SNR range, but an error floor emerges at higher SNR values due to inter-symbol interference effects. The flat filter solution avoids this drawback, although the price to be paid is a larger noise level at the filter output, so that a somewhat increased jitter variance is observed.

Computation doi: 10.3390/computation7010010

Authors: Hafiz Waqar Ahmad Jeong Ho Hwang Kamran Javed Umer Masood Chaudry Dong Ho Bae

Welding alloy 617 to other metals and alloys has received significant attention in the last few years and is considered a benchmark for the development of economical hybrid structures for different engineering applications. Differences in the physical and metallurgical properties of the dissimilar materials to be welded usually result in weaker structures, and fatigue is one of the most common failure modes of dissimilar-material welded structures. In this study, the fatigue life of a dissimilar-material weld was predicted using the accelerated life method and an artificial neural network (ANN) approach. The accelerated life testing approach was evaluated for several distributions; the Weibull distribution fitted the fatigue data best, and acceleration of the fatigue life test data was attained with 95% reliability for the Weibull distribution. The probability plot verified that the accelerating variables at each level were appropriate, and the experimental test data and predicted fatigue lives were in good agreement. Two training algorithms, Bayesian regularization (BR) and Levenberg–Marquardt (LM), were employed for training the ANN; the Bayesian regularization algorithm exhibited better performance than the Levenberg–Marquardt algorithm. The results confirm that the assessment methods are effective for lifetime prediction of dissimilar-material welded joints.
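
For readers unfamiliar with the distribution-fitting step, a two-parameter Weibull distribution can be fitted to lifetime data by maximum likelihood using only the standard library; this sketch and its sample lifetimes are illustrative, not the study's data or procedure:

```python
import math

def weibull_mle(data, lo=0.01, hi=20.0, tol=1e-9):
    """Fit a two-parameter Weibull distribution to lifetime data by maximum
    likelihood. The shape equation g(k) = 0 is monotone in k, so it can be
    solved by bisection; the scale then follows in closed form."""
    logs = [math.log(x) for x in data]
    mean_log = sum(logs) / len(logs)

    def g(k):  # the root of g(k) = 0 is the MLE shape parameter
        xk = [x ** k for x in data]
        return (sum(v * math.log(x) for v, x in zip(xk, data)) / sum(xk)
                - 1.0 / k - mean_log)

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    k = 0.5 * (lo + hi)
    lam = (sum(x ** k for x in data) / len(data)) ** (1.0 / k)
    return k, lam

# Hypothetical fatigue lives (cycles); real use would fit measured data.
lives = [1.2e5, 1.5e5, 1.7e5, 2.0e5, 2.3e5, 2.6e5, 3.1e5]
shape, scale = weibull_mle(lives)
print(shape, scale)
```

With the shape and scale in hand, reliability at N cycles follows as exp(-(N/scale)**shape).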

Computation doi: 10.3390/computation7010009

Authors: Christoph Rettinger Ulrich Rüde

Parallel multiphysics simulations often suffer from load imbalances originating from the coupling of algorithms with spatially and temporally varying workloads. It is, thus, desirable to minimize these imbalances to reduce the time to solution and to better utilize the available hardware resources. Taking particulate flows as an illustrative example application, we present and evaluate load balancing techniques that tackle this challenging task. This involves a load estimation step in which the currently generated workload is predicted, and we describe in detail how such a workload estimator can be developed. In a second step, load distribution strategies like space-filling curves or graph partitioning are applied to dynamically distribute the load among the available processes. To compare and analyze their performance, we apply these techniques to a benchmark scenario and observe a reduction of the load imbalances by almost a factor of four. This results in a decrease of the overall runtime by 14% for space-filling curves.
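
As a rough sketch of the space-filling-curve strategy (the actual framework is far more elaborate), blocks can be linearized along a Morton (Z-order) curve and then cut into contiguous chunks of similar total workload; the grid and the weight model below are hypothetical:

```python
def morton_key(x, y, bits=8):
    """Interleave the bits of (x, y) to get a Z-order (Morton) index, which
    linearizes the 2D block grid while preserving spatial locality."""
    key = 0
    for b in range(bits):
        key |= ((x >> b) & 1) << (2 * b) | ((y >> b) & 1) << (2 * b + 1)
    return key

def balance(blocks, weights, n_procs):
    """Greedily split the Morton-ordered blocks into contiguous chunks of
    roughly equal estimated workload, one chunk per process."""
    order = sorted(blocks, key=lambda b: morton_key(*b))
    target = sum(weights[b] for b in blocks) / n_procs
    parts, current, acc = [], [], 0.0
    for b in order:
        current.append(b)
        acc += weights[b]
        if acc >= target and len(parts) < n_procs - 1:
            parts.append(current)
            current, acc = [], 0.0
    parts.append(current)
    return parts

# Hypothetical 4x4 grid of blocks with a nonuniform (heavy-diagonal) workload.
blocks = [(x, y) for x in range(4) for y in range(4)]
weights = {b: 1.0 + 3.0 * (b[0] == b[1]) for b in blocks}
parts = balance(blocks, weights, n_procs=4)
print([sum(weights[b] for b in p) for p in parts])
```

A graph-partitioning strategy would instead model inter-block communication as edge weights; the curve-based cut shown here is cheaper but ignores communication volume.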

Computation doi: 10.3390/computation7010008

Authors: Hugo Valdés Kevin Unda Aldo Saavedra

This research answers the following question: what is the fluid dynamic behavior of a supercritical fluid (SCF) inside a membrane module? At this time, there is very little or no reported information that can answer this question. Research related to supercritical CO2 (SC-CO2), hollow fiber membrane contactors (HFMCs), and numerical simulation has mainly reported 2D simulations, whereas in this work 3D profiles are presented. Simulations were performed based on experimental results and other simulations, using the geometry of a commercial module, and the results were examined for different operating conditions and geometric dimensions. A mesh study was performed to ensure that the results presented here are mesh-independent. It was observed that the velocity profile developed at 10 mm from the wall of the supercritical CO2 entrance pipe. Compared to the commercial hollow fiber membrane contactor, the experimental contactor achieved profile equilibrium around the fibers close to the entrance of the module. The results also provided a visualization of the boundary layer, which did not cover the entire fiber length. Finally, the results of this paper are relevant for technical applications and contribute to our understanding of the hydrodynamics of SCFs.

Computation doi: 10.3390/computation7010007

Authors: Olaoluwa Rotimi Popoola Sinan Sinanović Wasiu O. Popoola Roberto Ramirez-Iniguez

Overlap of the footprints of light emitting diodes (LEDs) increases the positioning accuracy of wearable LED indoor positioning systems (IPS), but such an approach assumes that the footprint boundaries are defined. In this work, we develop a mathematical model for defining the footprint boundary of an LED in terms of a threshold angle instead of the conventional half or full angle. To show the effect of the threshold angle, we compare how overlaps and receiver tilts affect the performance of an LED-based IPS when the optical boundary is defined at the threshold angle and at the full angle. Using experimental measurements, simulations, and theoretical analysis, the effect of the defined threshold angle is estimated. The results show that the positioning time when using the newly defined threshold angle is 12 times shorter than when the full angle is used; when the effect of tilt is considered, the threshold-angle positioning time is 22 times shorter than the full-angle positioning time. Regarding accuracy, it is shown in this work that a positioning error as low as 230 mm can be obtained. Consequently, while the IPS gives a very low positioning error, a defined threshold angle reduces delays in an overlap-based LED IPS.
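
Assuming a generalized Lambertian emitter, a minimal sketch of how a threshold angle shrinks the footprint compared with a boundary at the half angle might look as follows; the half-angle, mounting height, and threshold values are hypothetical, not the paper's parameters:

```python
import math

def lambertian_order(half_angle_deg):
    """Lambertian mode number m from the LED's half-power semi-angle."""
    return -math.log(2) / math.log(math.cos(math.radians(half_angle_deg)))

def footprint_radius(h, half_angle_deg, threshold):
    """Radius on the floor, for mounting height h, where the normalized
    Lambertian intensity cos(phi)**m falls to `threshold`. A stricter
    threshold gives a tighter, better-defined footprint boundary."""
    m = lambertian_order(half_angle_deg)
    phi = math.acos(threshold ** (1.0 / m))
    return h * math.tan(phi)

# Hypothetical ceiling LED: 60-degree half-angle, mounted 2.5 m above the floor.
r_half = footprint_radius(2.5, 60.0, 0.5)    # boundary at half intensity
r_thresh = footprint_radius(2.5, 60.0, 0.8)  # stricter threshold -> smaller footprint
print(r_half, r_thresh)
```

Tightening the threshold from 0.5 to 0.8 shrinks the footprint radius, which is the mechanism by which a threshold-angle boundary reduces ambiguous overlap regions and, with it, positioning delay.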

Computation doi: 10.3390/computation7010006

Authors: Eric M. Miller Cody J. Brazel Krystina A. Brillos-Monia Philip W. Crawford Hannah C. Hufford Michael R. Loncaric Monica N. Mruzik Austin W. Nenninger Christina M. Ragain

The ability of DFT B3LYP calculations using the 6-31g and lanl2dz basis sets to predict the electrochemical properties of twenty 3-aryl-quinoxaline-2-carbonitrile 1,4-di-N-oxide derivatives with varying degrees of cytotoxic activity in dimethylformamide (DMF) was investigated. There was a strong correlation between the computational and experimental data for the first reduction of the diazine ring and a moderate-to-low correlation for the second reduction, with the exception of the derivatives containing the nitro functionality. The four nitro-group derivatives are clear outliers in the overall data sets, and the derivative E4 is ill-behaved. The remaining three nitro-group derivatives showed a strong correlation between the computational and experimental data; however, the computational data fall substantially outside of the expected range.

Computation doi: 10.3390/computation7010005

Authors: Computation Editorial Office

Rigorous peer-review is the corner-stone of high-quality academic publishing [...]

Computation doi: 10.3390/computation7010004

Authors: Francesco Rundo Francesca Trenta Agatino Luigi Di Stallo Sebastiano Battiato

Stock market prediction and trading have attracted the effort of many researchers in several scientific areas because the high complexity of the market makes them challenging tasks. Many investors have put their effort into developing systematic approaches, the so-called "Trading Systems (TS)", for stock pricing and trend prediction. The introduction of Trading On-Line (TOL) has significantly increased the number of daily transactions on the stock market, with a consequent increase in market complexity and liquidity. One of the main consequences of TOL is "automatic trading", i.e., ad hoc algorithmic robots able to automatically analyze large amounts of financial data and to open/close several trading operations in a very short time, increasing the profitability of the trading system. When the number of such automatic operations increases significantly, the trading approach is known as High Frequency Trading (HFT). In this context, the use of machine learning has recently improved the robustness of trading systems, including in the HFT sector. The authors propose an innovative approach, based on an ad hoc machine learning method starting from historical data analysis, that is able to perform careful stock price prediction. The prediction accuracy is further improved by an adaptive correction based on the hypothesis that stock price formation is regulated by the Markov stochastic property. Validation results on selected shares and financial instruments confirm the robustness and effectiveness of the proposed automatic trading algorithm.

Computation doi: 10.3390/computation7010003

Authors: Panteleimon D. Mavroudis Jeremy D. Scheff John C. Doyle Yoram Vodovotz Ioannis P. Androulakis

The dysregulation of inflammation, normally a self-limited response that initiates healing, is a critical component of many diseases. Treatment of inflammatory disease is hampered by an incomplete understanding of the complexities underlying the inflammatory response, motivating the application of systems and computational biology techniques in an effort to decipher this complexity and ultimately improve therapy. Many mathematical models of inflammation are based on systems of deterministic equations that do not account for the biological noise inherent at multiple scales, and consequently the effect of such noise in regulating inflammatory responses has not been studied widely. In this work, noise was added to a deterministic system of the inflammatory response in order to account for biological stochasticity. Our results demonstrate that the inflammatory response is highly dependent on the balance between the concentration of the pathogen and the level of biological noise introduced to the inflammatory network. In cases where the pro- and anti-inflammatory arms of the response do not mount the appropriate defense to the inflammatory stimulus, inflammation transitions to a different state compared to cases in which pro- and anti-inflammatory agents are elaborated adequately and in a timely manner. In this regard, our results show that noise can be both beneficial and detrimental for the inflammatory endpoint. By evaluating the parametric sensitivity of noise characteristics, we suggest that the efficiency of inflammatory responses can be controlled. Interestingly, the time window in which parametric intervention can be introduced efficiently into the inflammatory system can also be adjusted by controlling noise. These findings represent a novel understanding of inflammatory system dynamics and the potential role of stochasticity therein.
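
The general idea of adding noise to a deterministic inflammatory model can be sketched with a toy two-variable system integrated by the Euler-Maruyama method; the equations and parameter values below are illustrative stand-ins, not the paper's model:

```python
import math
import random

random.seed(1)

def simulate(p0, sigma, dt=0.01, steps=2000):
    """Euler-Maruyama integration of a toy pathogen (P) / anti-inflammatory
    mediator (A) pair with multiplicative noise on P:
        dP = (r*P*(1 - P) - k*P*A) dt + sigma*P dW
        dA = (c*P - d*A) dt
    sigma = 0 recovers the deterministic system."""
    r, k, c, d = 1.0, 2.0, 0.5, 0.4
    P, A = p0, 0.0
    for _ in range(steps):
        dW = random.gauss(0.0, math.sqrt(dt))  # Wiener increment
        P += (r * P * (1 - P) - k * P * A) * dt + sigma * P * dW
        A += (c * P - d * A) * dt
        P = max(P, 0.0)  # concentrations stay non-negative
    return P, A

P_det, _ = simulate(0.1, sigma=0.0)   # deterministic baseline settles near 1/3.5
P_noisy, _ = simulate(0.1, sigma=0.3) # one stochastic realization
print(P_det, P_noisy)
```

Running many noisy realizations and comparing their endpoint distribution against the deterministic trajectory is the kind of experiment used to probe whether noise helps or hinders resolution of the response.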

Computation doi: 10.3390/computation7010002

Authors: Maria T. Plytaria Christos Tzivanidis Evangelos Bellos Ioannis Alexopoulos Kimon A. Antonopoulos

The building sector is responsible for a very large share of electricity consumption worldwide, and reducing this consumption is a crucial issue for achieving sustainability. The objective of this work is to investigate the use of phase change materials (PCMs) in building walls in order to reduce heating and cooling loads. The novelty of this work lies in the investigation of different scenarios for the position of the PCM layer in the south and north walls. PCMs can improve the thermal performance and thermal comfort of a building due to their ability to store large amounts of thermal energy in latent form, reducing the temperature fluctuations of the structural components and keeping them within the desired temperature levels. More specifically, this work presents and compares the heating loads, cooling loads, and temperature distribution of a building in Athens (Greece), with and without PCMs in different positions in the south and north walls. The simulation is performed with the commercial software TRNSYS 17, using the Type 1270 (PCM Wall) component. The results show that the maximum energy savings per year were achieved by combining the insulation and the PCM layer in the north and south walls; more specifically, the reductions in the heating and cooling loads were 1.54% and 5.90%, respectively. Furthermore, the temperature distribution with a PCM layer is the most acceptable, especially during the summer period.

Computation doi: 10.3390/computation7010001

Authors: Manisha Ajmani Sinan Sinanović Tuleen Boutaleb

In this paper, the performance of the optimal beam radius indoor positioning (OBRIP) and two-receiver indoor positioning (TRIP) algorithms is analysed by varying system parameters in the presence of an indoor optical wireless channel modelled in a line-of-sight configuration. Across all the conducted simulations, the minimum average error obtained for TRIP is 0.61 m, against 0.81 m for OBRIP, for room dimensions of 10 m × 10 m × 3 m. In addition, for each simulated condition, TRIP, which uses two receivers, outperforms OBRIP and reduces the position estimation error by up to 30%. To better understand the position estimation error for different combinations of beam radius and separation between light emitting diodes, the 90th percentile error is determined using a cumulative distribution function (CDF) plot, which gives an error of 0.94 m for TRIP compared to 1.20 m for OBRIP. Both algorithms also prove robust to changes in receiver tilting angle, thus providing flexibility in the selection of the parameters to adapt to any indoor environment. In addition, a mathematical model based on the concept of raw moments is used to confirm the simulation results for the proposed algorithms. Using this model, closed-form expressions are derived for the standard deviation of uniformly distributed points in an optical wireless communication based indoor positioning system with circular and rectangular beam shapes.
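
The raw-moment idea can be illustrated on the beam-shape cases: for points uniform on a disk of radius R, the raw second moment of one coordinate is E[x^2] = R^2/4, so std(x) = R/2, and a rectangular beam of side a gives std(x) = a/sqrt(12). This is an illustrative check, not the paper's derivation:

```python
import math
import random

random.seed(2)

def circle_coord_std(R):
    """Closed form via raw moments: uniform on a disk of radius R gives
    E[x^2] = R^2/4, hence a per-coordinate standard deviation of R/2."""
    return R / 2.0

def rect_coord_std(a):
    """Closed form for a rectangular beam: a side of length a gives a/sqrt(12)."""
    return a / math.sqrt(12.0)

def monte_carlo_circle_std(R, n=200000):
    """Empirical check of the circular case by rejection sampling in the disk."""
    xs = []
    while len(xs) < n:
        x, y = random.uniform(-R, R), random.uniform(-R, R)
        if x * x + y * y <= R * R:
            xs.append(x)
    mean = sum(xs) / n
    return math.sqrt(sum((v - mean) ** 2 for v in xs) / n)

R = 1.5
mc = monte_carlo_circle_std(R)
print(circle_coord_std(R), mc)  # the two estimates should agree closely
```

The Monte Carlo estimate converges to the closed form as the sample size grows, which is the same consistency check the mathematical model provides for the simulations.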

Computation doi: 10.3390/computation6040065

Authors: Konstantinos Vasilopoulos Michalis Mentzos Ioannis E. Sarris Panagiotis Tsoutsanis

A hazardous release accident taking place within the complex morphology of an urban setting could cause grave damage both to the population's safety and to the environment. An unpredicted accident constitutes a complicated physical phenomenon with unanticipated outcomes. This is because, in the event of an unforeseen accident, the dispersion of the hazardous materials exhausted into the environment is determined by unstable parameters such as the wind flow and the complex turbulent diffusion around urban blocks of buildings. Our case study focused on a diesel pool fire accident that occurred within an array of nine cubical buildings. The accident was studied with a Large Eddy Simulation model based on the Fire Dynamics Simulator method. This model was successfully compared against the nine-cube Silsoe experiment. The model's results were used to determine the immediately-dangerous-to-life-or-health smoke zones of the accident. It was found that the urban geometry governed the dispersion of the hazardous gases, increasing the toxic mass concentration around the buildings.

Computation doi: 10.3390/computation6040064

Authors: Alberto Viskovic

Wind tunnel experiments are necessary for geometries that are not covered by codes or that are not generally and parametrically investigated in the literature. One example is the hyperbolic paraboloid shape mostly used for cable net roofs, for which codes do not provide pressure coefficients and the literature only gives maps of mean, maximum, and minimum pressure coefficients. However, most of the pressure series acquired in wind tunnels on such roofs are not Gaussian processes and, for this reason, the mean values do not precisely represent the process. This paper investigates the ratio between the mean and the mode of pressure coefficient series acquired in wind tunnels on buildings covered with hyperbolic paraboloid roofs with square plans. Mode pressure coefficient maps are given as an addition to traditional pressure coefficient maps.
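
The mean/mode discrepancy for non-Gaussian series is easy to demonstrate: estimating the mode as the center of the most populated histogram bin of a skewed, hypothetical (lognormal-like) pressure-coefficient sample shows it sitting well away from the mean. A sketch of the statistic, not the paper's procedure:

```python
import math
import random

random.seed(3)

def histogram_mode(samples, bins=40):
    """Estimate the mode of a continuous series as the center of the most
    populated histogram bin; for skewed data it can differ noticeably
    from the mean."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins
    counts = [0] * bins
    for s in samples:
        i = min(int((s - lo) / width), bins - 1)
        counts[i] += 1
    k = counts.index(max(counts))
    return lo + (k + 0.5) * width

# Hypothetical skewed (non-Gaussian) suction series: lognormal-shaped negative peaks.
cp = [-math.exp(random.gauss(0.0, 0.6)) for _ in range(50000)]
mean_cp = sum(cp) / len(cp)
mode_cp = histogram_mode(cp)
print(mean_cp, mode_cp)  # the mean is noticeably more negative than the mode
```

For a Gaussian series the two statistics coincide; the gap shown here is exactly why mode maps add information to mean maps.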

Computation doi: 10.3390/computation6040063

Authors: Volker Eyert Mikael Christensen Walter Wolf David Reith Alexander Mavromaras Clive Freeman Erich Wimmer

The development of density functional theory and the tremendous increase of compute power in recent decades have created a framework for the incredible success of modern computational materials engineering (CME). CME has been widely adopted in the academic world and is now established as a standard tool for industrial applications. As theory and compute resources have developed, highly efficient computer codes to solve the basic equations have been implemented and successively integrated into comprehensive computational environments, leading to unprecedented increases in productivity. The MedeA software of Materials Design combines a set of comprehensive productivity tools with leading computer codes such as the Vienna Ab initio Simulation Package (VASP), LAMMPS, GIBBS, and the UNiversal CLuster Expansion code (UNCLE), and provides interoperability at different length and time scales. In the present review, technological applications including microelectronic materials, Li-ion batteries, disordered systems, high-throughput applications, and transition-metal oxides for electronics applications are described in the context of the development of CME and with reference to the MedeA environment.

Computation doi: 10.3390/computation6040062

Authors: Rojalina Priyadarshini Rabindra Kumar Barik Harishchandra Dubey

The use of wearables and the Internet-of-Things (IoT) for smart and affordable healthcare is trending. In traditional setups, the cloud backend receives the healthcare data and performs monitoring and prediction for diseases, diagnosis, and wellness. Fog computing (FC) is a distributed computing paradigm that leverages low-power embedded processors in an intermediary node between the client layer and the cloud layer, so that diagnosis for wellness and fitness monitoring can be transferred from the cloud layer to the fog layer. Such a paradigm reduces latency at an increased throughput. This paper proposes a fog-based deep learning model, DeepFog, which collects data from individuals and predicts their wellness stats using a deep neural network model that can handle heterogeneous and multidimensional data. Three important wellness abnormalities, namely (i) diabetes, (ii) hypertension attacks, and (iii) stress type, were chosen for the experimental studies. We performed a detailed analysis of the proposed models' accuracy on standard datasets, and the results validated the efficacy of the proposed system and architecture for accurate monitoring of these critical wellness and fitness criteria. Standard datasets and open source software tools were used for the experiments.

Computation doi: 10.3390/computation6040061

Authors: Matthew David Marko

This algorithm performs numerical transforms to convert data from the temporal domain into the spectral domain. It obtains the spectral magnitude and phase by studying the Coefficient of Determination between a series of artificial sinusoidal functions and the temporal data, normalizing the variance data into a high-resolution spectral representation of the time-domain data with a finite sampling rate. What is especially beneficial about this algorithm is that it can produce spectral data at any user-defined resolution, and this highly resolved spectral data can be transformed back to the temporal domain.
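
A simplified sketch of the described idea (least-squares sinusoid fits scored by the coefficient of determination over a user-chosen frequency grid; not the author's exact normalization) might look like this:

```python
import math

def r2_spectrum(t, y, freqs):
    """For each trial frequency, least-squares fit y ~ a*cos + b*sin (data
    assumed zero-mean) and return R^2 of the fit; peaks mark spectral
    content. The trial grid can be as fine as desired, independent of the
    record length."""
    ss_tot = sum(v * v for v in y)
    spectrum = []
    for f in freqs:
        c = [math.cos(2 * math.pi * f * ti) for ti in t]
        s = [math.sin(2 * math.pi * f * ti) for ti in t]
        # 2x2 normal equations for the coefficients a, b
        cc = sum(ci * ci for ci in c)
        ss = sum(si * si for si in s)
        cs = sum(ci * si for ci, si in zip(c, s))
        cy = sum(ci * yi for ci, yi in zip(c, y))
        sy = sum(si * yi for si, yi in zip(s, y))
        det = cc * ss - cs * cs
        a = (cy * ss - sy * cs) / det
        b = (sy * cc - cy * cs) / det
        resid = sum((yi - a * ci - b * si) ** 2 for yi, ci, si in zip(y, c, s))
        spectrum.append(1.0 - resid / ss_tot)
    return spectrum

# Synthetic signal: a 3 Hz tone sampled at 50 Hz, scanned on a 0.25 Hz grid.
t = [i / 50.0 for i in range(200)]
y = [math.sin(2 * math.pi * 3.0 * ti) for ti in t]
freqs = [0.25 * k for k in range(1, 80)]
spec = r2_spectrum(t, y, freqs)
peak = freqs[spec.index(max(spec))]
print(peak)  # -> 3.0
```

The fitted (a, b) pair also carries the phase at each frequency, via atan2(b, a), which is what permits the transform back to the temporal domain.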

Computation doi: 10.3390/computation6040060

Authors: Vinh-Tan Nguyen Pankaj Kumar Jason Yu Chuan Leong

Piezoelectric structures are widely used in engineering designs, including sensors, actuators, and energy-harvesting devices. In this paper, we present the development of a three-dimensional finite element model for simulating piezoelectric actuators and quantifying their responses under uncertain parameter inputs. The implementation of the finite element model is based on the standard nodal approach extended for piezoelectric materials using three-dimensional tetrahedral and hexahedral elements. To account for electrical-mechanical coupling in piezoelectric materials, an additional degree of freedom for the electrical potential is added at each node of those elements, alongside the usual mechanical displacement unknowns. The development was validated with analytical and experimental data for a range of problems, from a single-layer piezoelectric beam to multi-layer beams in unimorph and bimorph arrangements. A more detailed analysis is conducted for a unimorph composite plate actuator with different design parameters. Uncertainty quantification was also performed to evaluate the sensitivity of the responses of the piezoelectric composite plate to uncertain material properties. This sheds light on the variations in reported responses of the device while providing extra confidence in the numerical model.

Computation doi: 10.3390/computation6040059

Authors: Yadigar Sekerci Sergei Petrovskii

A decreasing level of dissolved oxygen has recently been reported as a growing ecological problem in seas and oceans around the world. The concentration of oxygen is an important indicator of the marine ecosystem's health, as a lack of oxygen (anoxia) can lead to mass mortality of marine fauna. The oxygen decrease is thought to be a result of global warming, as warmer water can hold less oxygen, though the actual reasons for the observed oxygen decay remain controversial. Recently, it has been shown that it may as well result from a disruption of phytoplankton photosynthesis. In this paper, we further explore this idea by considering a model of coupled plankton-oxygen dynamics in two spatial dimensions. By means of extensive numerical simulations performed for different initial conditions and in a broad range of parameter values, we show that the system's dynamics normally lead to the formation of a rich variety of patterns. We reveal how these patterns evolve when the system approaches the tipping point, i.e., the boundary of the safe parameter range beyond which the depletion of oxygen is the only possibility. In particular, we show that close to the tipping point the spatial distribution of the dissolved oxygen tends to become more regular; arguably, this can be considered an early warning of the approaching catastrophe.

]]>Computation doi: 10.3390/computation6040058

Authors: Simeone Marino Caitlin Hult Paul Wolberg Jennifer J. Linderman Denise E. Kirschner

Within the first 2&ndash;3 months of a Mycobacterium tuberculosis (Mtb) infection, 2&ndash;4 mm spherical structures called granulomas develop in the lungs of the infected hosts. These are the hallmark of tuberculosis (TB) infection in humans and non-human primates. A cascade of immunological events occurs in the first 3 months of granuloma formation that likely shapes the outcome of the infection. Understanding the main mechanisms driving granuloma development and function is key to generating treatments and vaccines. In vitro, in vivo, and in silico studies have been performed over the past decades to address the complexity of granuloma dynamics. This study builds on our previous 2D spatio-temporal hybrid computational model of granuloma formation in TB (GranSim) and presents, for the first time, a more realistic 3D implementation. We use uncertainty and sensitivity analysis techniques to calibrate the new 3D version against non-human primate (NHP) experimental data on bacterial levels per granuloma during the first 100 days post-infection. Due to the large computational cost associated with running a 3D agent-based model, our major goal is to assess to what extent 2D and 3D simulations differ in their predictions for TB granulomas, and what can be learned in 3D that is missed in 2D. Our findings suggest that, in terms of the major mechanisms driving bacterial burden, the 2D and 3D models return very similar results. For example, Mtb growth rates and molecular regulation mechanisms are very important in both 2D and 3D, as are cellular movement and modulation of cell recruitment. The main difference we found was that the 3D model is less affected by crowding when cellular recruitment and movement of cells are increased. Overall, we conclude that the use of a 2D resolution in GranSim is warranted when large-scale pilot runs are to be performed and when the goal is to determine the major mechanisms driving infection outcome (e.g., bacterial load).
To comprehensively compare the roles of model dimensionality, further tests and experimental data will be needed to expand our conclusions to molecular scale dynamics and multi-scale resolutions.

]]>Computation doi: 10.3390/computation6040057

Authors: Eva Roos Nerut Karl Karu Iuliia V. Voroshylova Kathleen Kirchner Tom Kirchner Maxim V. Fedorov Vladislav B. Ivaništšev

Computational modeling is increasingly used in studies of novel ionic liquids. An inevitable side-effect is the growing number of similar computations, which calls for automation. This article introduces NaRIBaS (Nanomaterials and Room Temperature Ionic Liquids in Bulk and Slab)&mdash;a scripting framework that combines bash scripts with computational codes to ease the modeling of nanomaterials and ionic liquids in bulk and slab. NaRIBaS helps to organize and document all input and output data, thus improving the reproducibility of computations. Three examples are given to illustrate the NaRIBaS workflows: density functional theory (DFT) calculations of ionic pairs, molecular dynamics (MD) simulations of bulk ionic liquids (ILs), and MD simulations of ILs at an interface.

]]>Computation doi: 10.3390/computation6040056

Authors: Ghassan Ghssein Samir F. Matar

In bacterial pathology, metallophores fabricated by bacteria such as Staphylococcus aureus and Pseudomonas aeruginosa are exported to the surrounding physiological medium via a specific process to sequester and import metals, resulting in enhanced virulence of the bacteria. While these mechanisms are understood at a qualitative level, our investigation presents a complementary, original view based on quantum chemical computations. Further understanding of the active centers in particular was provided for the pseudopaline and staphylopine metallophores, which were characterized chemically and by vibrational spectroscopy. Then, for complexes formed with a range of divalent transition metal ions (Ni, Cu, and Zn), a description and analysis of the frontier molecular orbitals (FMOs) is provided, highlighting a metal-to-ligand charge transfer (MLCT) mechanism, based on excited-state calculations (time-dependent density functional theory, TD-DFT), that underlies the delivery of the metallic ionic species to the bacterial medium and leads eventually to enhanced virulence. Such an investigation gains importance especially in view of the stepwise synthesis of metallophores in the laboratory, providing significant progress in the understanding of the mechanisms underlying the enhancement of bacterial pathologies.

]]>Computation doi: 10.3390/computation6040055

Authors: Michalis P. Ninos Hector E. Nistazakis

A CDMA RoFSO link with receivers&rsquo; spatial diversity is studied. Turbulence-induced fading, modeled by the M&aacute;laga (M) distribution, is considered to hamper the FSO link performance, along with the effect of nonzero boresight pointing errors. Novel analytical closed-form expressions are derived for estimating the average bit error rate and the outage probability of the CDMA RoFSO system for both the forward and the reverse link. The numerical results clearly show the performance improvement obtained by using spatial diversity, even in the most adverse atmospheric conditions with strong and saturated atmospheric turbulence and enhanced misalignment. The effects of nonlinear distortion, multiple access interference, and clipping noise also aggravate the performance of the link, particularly when a large number of users is taken into account.

]]>Computation doi: 10.3390/computation6040054

Authors: Senthil Kumar Raman Heuy Dong Kim

A centrifugal compressor working with supercritical CO2 (S-CO2) has several advantages over other supercritical and conventional compressors. S-CO2 is nearly as dense as liquid CO2 and, although difficult to compress further, requires less compression work during operation than gaseous CO2. The performance of S-CO2 compressors varies strongly with the tip clearance and the vanes in the diffuser. To improve the performance of the S-CO2 centrifugal compressor, knowledge of the influence of individual components on the performance characteristics is necessary. The present study considers an S-CO2 compressor designed with traditional engineering design tools based on ideal gas behaviour and tested by Sandia National Laboratories. Three-dimensional, steady, viscous flow through the S-CO2 compressor was analysed with a computational fluid dynamics solver based on the finite volume method. The Navier-Stokes equations are solved with the k-&omega; SST turbulence model at operating conditions in the supercritical regime. The performance of the impeller, the main component of the centrifugal compressor, is compared between the impeller-only, impeller-with-vaneless-diffuser, and impeller-with-vaned-diffuser configurations. The flow characteristics of the shrouded impeller are also studied to analyse the tip-leakage effect.

]]>Computation doi: 10.3390/computation6040053

Authors: Hyunjong Kim Mohan Kumar Dey Nobuyuki Oshima Yeon Won Lee

The sloshing characteristics in a rectangular tank excited horizontally over a specific range of Reynolds numbers are investigated numerically. The nonlinearity of the sloshing flow is confirmed by comparison with the linear solution based on potential theory, and the time series of the sloshing pressure are analyzed with the Fast Fourier Transform (FFT) algorithm. The pressure fluctuation phenomena are then examined and the magnitudes of the amplitude spectra are compared. The results show that, when impact pressure is generated, a large pressure fluctuation within a pressure cycle is observed, and the effect of integer-multiple harmonics appears when the fundamental frequency dominates the sloshing flow.
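The FFT-based amplitude-spectrum analysis mentioned above can be sketched as follows; the signal, sampling rate, and frequencies are made-up illustrative values, not data from the study:

```python
import numpy as np

def amplitude_spectrum(p, dt):
    """One-sided amplitude spectrum of a pressure time series p
    sampled with constant time step dt (seconds)."""
    n = len(p)
    amp = np.abs(np.fft.rfft(p)) / n
    amp[1:] *= 2.0                      # fold negative frequencies in
    freq = np.fft.rfftfreq(n, d=dt)
    return freq, amp

# Synthetic example: a 1.2 Hz fundamental plus its second harmonic,
# mimicking the integer-multiple harmonics seen in sloshing pressure.
dt = 0.01
t = np.arange(0.0, 20.0, dt)
p = 3.0 * np.sin(2 * np.pi * 1.2 * t) + 0.8 * np.sin(2 * np.pi * 2.4 * t)
freq, amp = amplitude_spectrum(p, dt)
f_peak = freq[np.argmax(amp)]           # dominant (fundamental) frequency
```

With a 20 s record the frequency resolution is 0.05 Hz, so both the fundamental and its harmonic fall on exact bins and the recovered peak amplitude matches the input amplitude.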

]]>Computation doi: 10.3390/computation6040052

Authors: Kazuhiro Yamamoto Yusuke Toda

Using five samples of different porous materials (Al2TiO5, SiC, and cordierite), we numerically reproduced the fluid dynamics in a diesel particulate filter (DPF). The inner structures were obtained by X-ray CT scanning to reproduce the flow field in the real product. The porosity as well as the pore size was varied systematically. Inside the DPF, a complex flow pattern appears. The maximum filtration velocity is over ten times larger than the velocity at the inlet. When the flow is forced through consecutive small pores along the filter&rsquo;s porous walls, the resulting pressure drop becomes large. The ratio of flow path length to filter wall thickness is almost the same for all samples, at only about 1.2. Thus, the filter backpressure depends closely on the flow pattern inside the filter, which is determined by the local substrate structure. In the modified filter substrate, enlarging the pores and reducing the resistance to the net flow largely suppresses the pressure drop.

]]>Computation doi: 10.3390/computation6040051

Authors: Pradeep R. Varadwaj Arpita Varadwaj Helder M. Marques Koichi Yamashita

The diversity of fluorine-based systems and the significance of their nascent non-covalent chemistry in molecular assemblies are presented in a brief review of the field. Emphasis has been placed on showing that the type-I and type-II halogen-centered F&middot;&middot;&middot;F long-range intermolecular distances viable between the entirely negative fluorine atoms in some fluoro-substituted dimers of C6H6 can be regarded as the consequence of significant non-covalent attractive interactions. Such attractive interactions, observed in the solid-state structures of C6F6 and other similar fluorine-substituted aromatic compounds, have frequently been underappreciated. While they are often ascribed to crystal packing effects, we show using first-principles calculations that they are much more fundamental in nature. The stability and reliability of these interactions are supported by their negative binding energies, which emerge from a supermolecular procedure using MP2 (second-order M&oslash;ller&ndash;Plesset perturbation theory) and from Symmetry-Adapted Perturbation Theory, the latter of which does not determine the interaction energy from the total energies of the monomers or the dimer. The Quantum Theory of Atoms in Molecules and the Reduced Density Gradient Non-Covalent Index, both charge-density-based approaches, confirm that the F&middot;&middot;&middot;F contacts are attractive, as shown by their unified bond-path (and bond-critical-point) and isosurface charge-density topologies, respectively. These interactions can be explained neither by the so-called molecular electrostatic surface potential (MESP) model approach, which often demonstrates attraction between sites of opposite electrostatic surface potential by means of Coulomb&rsquo;s law of electrostatics, nor purely by the effect of electrostatic polarization. 
We provide evidence against the standalone use of this approach, and against overlooking other approaches, as the MESP model does not allow for the calculation of the electrostatic potential on the surfaces of the overlapping atoms of the monomers as they occur in the equilibrium geometry of a complex. This study thus provides unequivocal evidence of the limitation of the MESP approach for gaining insight into the nature of the reactivity of overlapping interacting atoms and the intermolecular interactions involved.

]]>Computation doi: 10.3390/computation6030050

Authors: Jonatas E. Borges Marcos Lourenço Elie L. M. Padilla Christopher Micallef

The immersed boundary method has attracted considerable interest in the last few years. The method is a computationally cheap alternative for representing the boundaries of a geometrically complex body on a Cartesian mesh by adding a force term to the momentum equation. The advantage is that bodies of arbitrary shape can be added without grid restructuring, a procedure that is often time-consuming. Furthermore, multiple bodies may be simulated, and the relative motion of those bodies may be accomplished at reasonable computational cost. The numerical platform under development has a parallel distributed-memory implementation for solving the Navier-Stokes equations. The finite volume method is used for the spatial discretization, with the diffusive terms approximated by the central difference method. The temporal discretization uses the Adams-Bashforth method. Both temporal and spatial discretizations are second-order accurate. Velocity-pressure coupling is handled with a two-step fractional-step method. The present work applies the immersed boundary method to simulate Newtonian laminar flow through a three-dimensional sudden contraction. Results are compared with the published literature. Flow patterns upstream and downstream of the contraction region are analysed at various Reynolds numbers in the range 44 &le; Re_D &le; 993 for the large tube and 87 &le; Re_D &le; 1956 for the small tube, considering a contraction ratio of &beta; = 1.97. Comparison between numerical and experimental velocity profiles shows good agreement.
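The second-order Adams-Bashforth time stepping used by the platform can be sketched in isolation on a scalar test equation; this is the generic textbook scheme, not the authors' implementation:

```python
import math

def ab2(f, y0, t0, dt, nsteps):
    """Second-order Adams-Bashforth integration of dy/dt = f(t, y):
    y_{n+1} = y_n + dt * (3/2 f_n - 1/2 f_{n-1}),
    bootstrapped with a single forward Euler step."""
    t, y = t0, y0
    f_prev = f(t, y)
    y = y + dt * f_prev          # Euler start-up step
    t += dt
    for _ in range(nsteps - 1):
        f_curr = f(t, y)
        y = y + dt * (1.5 * f_curr - 0.5 * f_prev)
        f_prev = f_curr
        t += dt
    return y

# Decay test problem dy/dt = -y with exact solution exp(-t).
y_end = ab2(lambda t, y: -y, 1.0, 0.0, 1e-3, 1000)
```

Halving dt should reduce the error by roughly a factor of four, confirming the second-order accuracy claimed for the temporal discretization.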

]]>Computation doi: 10.3390/computation6030049

Authors: Sheng-Chang Zhang Jing-Zhou Zhang Xiao-Ming Tan

Film cooling enhancement by incorporating a sand-dune-shaped ramp (SDSR) upstream of the film hole exit was numerically investigated on a flat plate under typical blowing ratios ranging from 0.5 to 1.5. Three SDSR heights were designed: 0.25D, 0.5D, and 0.75D. The results indicated that the upstream SDSR effectively controlled the near-wall primary flow and its subsequent interaction with the coolant jet, which was the main mechanism of the film cooling enhancement. First, a pair of anti-kidney vortices was formed at the trailing ridges of the SDSR, which helped suppress the kidney vortex pair arising from the interaction between the coolant jet and the primary flow. Second, a weak separation and a low-pressure zone were induced behind the backside of the SDSR, which caused the coolant jet to spread around the film cooling hole and improved the lateral film coverage. With respect to the baseline cylindrical film cooling holes, the effect of the upstream SDSR differed between blowing ratios. Under a low blowing ratio, the upstream SDSR shortened the streamwise film layer coverage in the vicinity of the film hole centerline but increased the spanwise film layer coverage; a ramp height of 0.5D appeared to be optimal. Under a high blowing ratio, both the streamwise and spanwise film layer coverages improved in comparison with the baseline case, and the film cooling effectiveness improved gradually with increasing ramp height.

]]>Computation doi: 10.3390/computation6030048

Authors: Fabrizio Ferrari-Ruffino Lorenzo Fortunato

The program diagonalizes the Geometric Collective Model (Bohr Hamiltonian) with a generalized Gneuss&ndash;Greiner potential containing terms up to the sixth power in &beta;. In nuclear physics, the Bohr&ndash;Mottelson model, with its later extensions into the rotovibrational collective model, is an important theoretical tool with predictive power, and it represents a fundamental step in the education of a nuclear physicist. Nuclear spectroscopists might find it useful for fitting experimental data, reproducing spectra, EM transitions and moments, and testing theoretical predictions, while students might find it useful for learning about the connections between the nuclear shape and its quantum origin. Matrix elements for the kinetic energy operator and for scalar invariants such as &beta;^2 and &beta;^3 cos(3&gamma;) have been calculated in a truncated five-dimensional harmonic oscillator basis with a separate program, checked with three different methods, and stored in a matrix library for the lowest values of angular momentum. These matrices are called by the program, which uses them to write generalized Hamiltonians as linear combinations of certain simple operators. Energy levels and eigenfunctions are obtained as outputs of the diagonalization of these Hamiltonian operators.
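The final step, assembling a Hamiltonian as a linear combination of precomputed operator matrices and diagonalizing it, can be sketched as follows; the small symmetric stand-in matrices are illustrative, not the program's stored matrix-element library:

```python
import numpy as np

def diagonalize(coeffs, operators):
    """Build H = sum_i c_i * O_i from precomputed operator matrices
    and return its eigenvalues and eigenvectors (ascending order)."""
    H = sum(c * O for c, O in zip(coeffs, operators))
    return np.linalg.eigh(H)

# Toy basis of dimension 4: symmetric "kinetic" and "potential" blocks
# standing in for the stored matrix library (names are illustrative).
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)); T = (A + A.T) / 2
B = rng.standard_normal((4, 4)); V = (B + B.T) / 2
energies, states = diagonalize([1.0, 0.3], [T, V])
```

In the actual program, the coefficients would be the fitted strengths of the kinetic term and of each scalar invariant, and the operators would come from the matrix library truncated at a given basis size.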

]]>Computation doi: 10.3390/computation6030047

Authors: Zhen-Zong He Jun-Kui Mao Xing-Si Han

The angular light-scattering method (ALSM) and the spectral extinction method (SEM) are compared for solving the inverse problem of aerosol size distribution (ASD) retrieval. The inverse problem is solved by an SPSO-DE hybrid algorithm, which combines the stochastic particle swarm optimization (SPSO) algorithm with the differential evolution (DE) algorithm. To improve retrieval accuracy, the sensitivity of the measurement signals to the characteristic parameters of the ASDs is studied, and the corresponding optimal measurement-angle selection region for the ALSM and optimal measurement-wavelength selection region for the SEM are proposed. Results show that more satisfactory convergence properties can be obtained with the SPSO-DE hybrid algorithm. Moreover, short measurement wavelengths and forward measurement angles are beneficial for obtaining more accurate results. Common monomodal and bimodal ASDs are then estimated under different random measurement errors using the ALSM and SEM, respectively. Numerical tests show that retrieval results obtained with the ALSM have better convergence accuracy and robustness than those obtained with the SEM, which is attributed to the distribution of the objective function value. On the whole, considering the convergence properties and the independence from prior optical information, the ALSM combined with the SPSO-DE hybrid algorithm provides a more effective and reliable technique for obtaining ASDs.
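The DE half of the SPSO-DE hybrid can be sketched with a minimal DE/rand/1/bin loop; the objective below is a toy least-squares misfit standing in for the ASD retrieval objective, the SPSO component is omitted, and all control parameters are generic assumptions:

```python
import numpy as np

def de_minimize(obj, bounds, pop_size=20, F=0.7, CR=0.9, iters=200, seed=0):
    """Minimal DE/rand/1/bin loop: mutate a + F*(b - c), binomial
    crossover with rate CR, greedy one-to-one selection."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    cost = np.array([obj(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i],
                             3, replace=False)
            a, b, c = pop[idx]
            trial = np.where(rng.random(len(lo)) < CR,
                             a + F * (b - c), pop[i])
            trial = np.clip(trial, lo, hi)
            c_trial = obj(trial)
            if c_trial < cost[i]:
                pop[i], cost[i] = trial, c_trial
    return pop[np.argmin(cost)], cost.min()

# Recover two "characteristic parameters" of a toy distribution by
# least-squares misfit (a stand-in for the ASD inverse problem).
true = np.array([2.0, 0.5])
obj = lambda x: float(np.sum((x - true) ** 2))
best, best_cost = de_minimize(obj, [(0.0, 5.0), (0.0, 5.0)])
```

In the hybrid, SPSO would supply additional candidate moves per generation; the greedy replacement rule shown here is what gives DE its robustness on multimodal retrieval objectives.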

]]>Computation doi: 10.3390/computation6030046

Authors: Francesco Rundo Alessandro Ortis Sebastiano Battiato Sabrina Conoci

Blood Pressure (BP) is one of the most important physiological indicators and provides useful information in the field of health-care monitoring. Blood pressure may be measured by both invasive and non-invasive methods. A novel algorithmic approach is presented to estimate systolic and diastolic blood pressure accurately without any explicit user calibration, i.e., non-invasively and cuff-less. The approach described herein can be applied in a medical device, as well as in commercial mobile smartphones, through ad hoc software based on the proposed algorithm. The authors propose a system for blood pressure estimation based on time series sampled from the PhotoPlethysmoGraphy (PPG) physiological signal. Photoplethysmography is a simple optical technique that can be used to detect blood volume changes in the microvascular bed of tissue. It is non-invasive, since it takes measurements at the skin surface. In this paper, the authors present an easy and smart method to measure BP through careful neural and mathematical analysis of the PPG signals. The PPG data are processed with an ad hoc bio-inspired mathematical model that estimates systolic and diastolic pressure values through an innovative analysis of the collected physiological data. We compared our results with those measured using a classical cuff-based blood pressure measuring device, with encouraging results of about 97% accuracy.

]]>Computation doi: 10.3390/computation6030045

Authors: Mohammed Mahmoud Mark Hoffmann Hassan Reza

Sparse matrix-vector multiplication (SpMV) can be used to solve diverse-scaled linear systems and eigenvalue problems that exist in numerous and varying scientific applications. One scientific application in which SpMV is involved is known as Configuration Interaction (CI). CI is a linear method for solving the nonrelativistic Schr&ouml;dinger equation for quantum chemical multi-electron systems, and it can deal with the ground state as well as multiple excited states. In this paper, we have developed a hybrid approach for dealing with CI sparse matrices. The proposed model includes a newly developed hybrid format for storing CI sparse matrices on the Graphics Processing Unit (GPU). In addition to the newly developed format, the proposed model includes an SpMV kernel for multiplying a CI matrix (in the proposed format) by a vector, implemented in the C language on the Compute Unified Device Architecture (CUDA) platform. The proposed SpMV kernel is a vector kernel that uses the warp approach. We have gauged the newly developed model in terms of two primary factors: memory usage and performance. Our proposed kernel was compared with the cuSPARSE library and the CSR5 (Compressed Sparse Row 5) format and outperformed both.
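As a point of reference for the GPU kernel, the standard CSR sparse matrix-vector product can be written as a plain serial loop; this is the generic CSR scheme, not the paper's hybrid CI format, and the per-row dot product is what a CUDA warp would compute cooperatively in a vector kernel:

```python
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    """Reference CSR sparse matrix-vector product y = A @ x.
    values:  nonzeros in row-major order
    col_idx: column index of each nonzero
    row_ptr: start offset of each row (length n_rows + 1)"""
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        start, end = row_ptr[i], row_ptr[i + 1]
        y[i] = np.dot(values[start:end], x[col_idx[start:end]])
    return y

# 3x3 example matrix: [[4, 0, 1], [0, 2, 0], [3, 0, 5]]
values  = np.array([4.0, 1.0, 2.0, 3.0, 5.0])
col_idx = np.array([0, 2, 1, 0, 2])
row_ptr = np.array([0, 2, 3, 5])
y = spmv_csr(values, col_idx, row_ptr, np.array([1.0, 1.0, 1.0]))
```

In the warp approach, the 32 threads of a warp split the `start:end` range of one row and reduce their partial products, which is why row-length distribution strongly affects GPU performance.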

]]>Computation doi: 10.3390/computation6030044

Authors: Min-Rui Chen Jin-Yuan Qian Zan Wu Chen Yang Zhi-Jiang Jin Bengt Sunden

When a liquid flows through a throttling element, its velocity increases and its pressure decreases. If the pressure falls below the saturated vapor pressure of the liquid, the liquid vaporizes into small bubbles, causing hydraulic cavitation. In fact, a vaporization nucleus is another crucial condition for vaporization, and particles contained in the liquid can act as vaporization nuclei. As a novel heat transfer medium, nanofluids have attracted the attention of many scholars. The nanoparticles contained in nanofluids play a significant role in the vaporization of liquids. In this paper, the effects of nanoparticles on hydraulic cavitation are investigated. Firstly, a geometric model of a perforated plate, the throttling element considered here, is established. Then, with different nanoparticle volume fractions and diameters, the nanofluids flowing through the perforated plate are numerically simulated based on a validated numerical method. The operating conditions, such as the ratio of inlet to outlet pressure and the temperature, are the considered variables. Additionally, cavitation numbers under different operating conditions are obtained to investigate the effects of nanoparticles on hydraulic cavitation. Meanwhile, contours are extracted to study the distribution of bubbles for further investigation. This study is of interest for researchers working on hydraulic cavitation or nanofluids.
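The cavitation number used to characterize the operating conditions can be computed directly from its standard definition; the fluid properties below are illustrative values for water at room temperature, not conditions from the study:

```python
def cavitation_number(p_ref, p_vapor, rho, velocity):
    """Cavitation number sigma = (p_ref - p_v) / (0.5 * rho * v^2).
    Smaller sigma means cavitation is more likely to occur."""
    return (p_ref - p_vapor) / (0.5 * rho * velocity ** 2)

# Illustrative values: water near room temperature passing a
# perforated-plate orifice at 20 m/s (hypothetical conditions).
sigma = cavitation_number(p_ref=101_325.0,   # reference pressure, Pa
                          p_vapor=2_339.0,   # vapor pressure, Pa
                          rho=998.0,         # density, kg/m^3
                          velocity=20.0)     # throat velocity, m/s
```

Raising the inlet-to-outlet pressure ratio lowers the effective throat pressure and hence sigma, which is how the operating conditions drive cavitation in the simulations.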

]]>Computation doi: 10.3390/computation6030043

Authors: Hermann Knaus Martin Hofsäß Alexander Rautenberg Jens Bange

A model for the simulation of wind flow in complex terrain is presented based on the Reynolds-averaged Navier&ndash;Stokes (RANS) equations. For the description of turbulence, the standard k-&epsilon;, the renormalization group (RNG) k-&epsilon;, and a Reynolds stress turbulence model are applied. Additional terms are implemented in the momentum equations to describe the stratification of the Earth&rsquo;s atmosphere, to account for the Coriolis forces driven by the Earth&rsquo;s rotation, and to include the drag force due to forested canopy. Furthermore, turbulence production and dissipation terms are added to the turbulence equations of the two-equation models, as well as of the Reynolds stress model, in order to capture different types of land use. The approaches for the turbulence models are verified by means of a homogeneous canopy test case with flat terrain and constant forest height. The validation of the models is performed by investigating the WindForS wind test site. The simulation results are compared with five-hole-probe velocity measurements acquired with multipurpose airborne sensor carrier (MASC) systems, small unmanned research aircraft (UAVs), at different locations for the main wind regime. Additionally, Reynolds stresses measured with sonic anemometers at different heights on a meteorological wind mast are compared with simulation results obtained using the Reynolds stress turbulence model.

]]>Computation doi: 10.3390/computation6030042

Authors: Alessio Fuoco Sylvain Galier Hélène Roux-de Balmann Giorgio De Luca

The widespread use of nanofiltration and electrodialysis membrane processes is slowed down by the difficulty of predicting membrane performance when treating streams of variable ionic composition. Correlations between ion hydration properties and solute transfer can help to overcome this drawback. This research aims to investigate the correlation between theoretically evaluated hydration properties of the major ions in solution and experimental values of neutral organic solute fluxes. In particular, the ion hydration energies, coordination and hydration numbers, and average ion-water distances of Na+, Ca2+, Mg2+, Cl&minus; and SO42&minus; were calculated at a high quantum mechanics level and compared with experimental sugar fluxes reported previously. The properties computed by simple and computationally inexpensive models were validated against information from the literature. This work discusses the correlation between the hydration energies of ions and the fluxes of three saccharides, measured through nanofiltration and ion exchange membranes. In nanofiltration, the sugar flux increases in the presence of ions of increasing hydration energy. In contrast, inverse linear correlations were found between the hydration energy and the sugar fluxes through ion exchange membranes. Finally, an empirical model is proposed for a rough evaluation of the variation in sugar fluxes as a function of hydration energy for ion exchange membranes in diffusion experiments.

]]>Computation doi: 10.3390/computation6030041

Authors: Omar Kebiri Lara Neureither Carsten Hartmann

We study linear-quadratic stochastic optimal control problems with bilinear state dependence where the underlying stochastic differential equation (SDE) has multiscale features. We show that, in the same way in which the underlying dynamics can be well approximated by a reduced-order dynamics in the scale separation limit (using classical homogenization results), the associated optimal expected cost converges to an effective optimal cost in the scale separation limit. This entails that we can approximate the stochastic optimal control for the whole system by a reduced-order stochastic optimal control, which is easier to compute because of the lower dimensionality of the problem. The approach uses an equivalent formulation of the Hamilton-Jacobi-Bellman (HJB) equation, in terms of forward-backward SDEs (FBSDEs). We exploit the efficient solvability of FBSDEs via a least squares Monte Carlo algorithm and show its applicability by a suitable numerical example.

]]>Computation doi: 10.3390/computation6030040

Authors: Afshan Kanwal Chang Phang Umer Iqbal

In this paper, two-dimensional Genocchi polynomials and the Ritz&ndash;Galerkin method are developed to investigate the Fractional Diffusion Wave Equation (FDWE) and the Fractional Klein&ndash;Gordon Equation (FKGE). A satisfier function that satisfies all the initial and boundary conditions is used. A linear system of algebraic equations is obtained for the considered equation with the help of two-dimensional Genocchi polynomials together with the Ritz&ndash;Galerkin method. The FDWE and FKGE, including the nonlinear case, are thereby reduced to the solution of a linear system of algebraic equations. Hence, the proposed method greatly reduces the complexity of the problems and provides an accurate solution. The effectiveness of the proposed technique is demonstrated through several examples.

]]>Computation doi: 10.3390/computation6020039

Authors: Nasrin Akhter Wanli Qiao Amarda Shehu

The energy landscape, which organizes microstates by energies, has shed light on many cellular processes governed by dynamic biological macromolecules leveraging their structural dynamics to regulate interactions with molecular partners. In particular, the protein energy landscape has been central to understanding the relationship between protein structure, dynamics, and function. The landscape view, however, remains underutilized in an important problem in protein modeling, decoy selection in template-free protein structure prediction. Given the amino-acid sequence of a protein, template-free methods compute thousands of structures, known as decoys, as part of an optimization process that seeks minima of an energy function. Selecting biologically-active/native structures from the computed decoys remains challenging. Research has shown that energy is an unreliable indicator of nativeness. In this paper, we advocate that, while comparison of energies is not informative for structures that already populate minima of an energy function, the landscape view exposes the overall organization of generated decoys. As we demonstrate, such organization highlights macrostates that contain native decoys. We present two different computational approaches to extracting such organization and demonstrate through the presented findings that a landscape-driven treatment is promising in furthering research on decoy selection.

]]>Computation doi: 10.3390/computation6020038

Authors: Jean-Paul Kone Xinyu Zhang Yuying Yan Stephen Adegbite

In this paper, an open-source toolbox that can be used to accurately predict the distribution of the major physical quantities that are transported within a proton exchange membrane (PEM) fuel cell is presented. The toolbox has been developed using the Open Source Field Operation and Manipulation (OpenFOAM) platform, which is an open-source computational fluid dynamics (CFD) code. The base case results for the distribution of velocity, pressure, chemical species, Nernst potential, current density, and temperature are as expected. The plotted polarization curve was compared to the results from a numerical model and experimental data taken from the literature. The conducted simulations have generated a significant amount of data and information about the transport processes that are involved in the operation of a PEM fuel cell. The key role played by the concentration constant in shaping the cell polarization curve has been explored. The development of the present toolbox is in line with the objectives outlined in the International Energy Agency (IEA, Paris, France) Advanced Fuel Cell Annex 37 that is devoted to developing open-source computational tools to facilitate fuel cell technologies. The work therefore serves as a basis for devising additional features that are not always feasible with a commercial code.
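The way the concentration constant shapes the polarization curve can be illustrated with the textbook loss decomposition; the functional form and every parameter value below are generic assumptions, not the toolbox's actual model:

```python
import numpy as np

def cell_voltage(i, E_nernst=1.2, i0=1e-4, iL=1.5, R_ohm=0.15,
                 A_tafel=0.06, B_conc=0.05):
    """Textbook PEM polarization curve: Nernst potential minus
    activation (Tafel), ohmic, and concentration losses. B_conc plays
    the role of the 'concentration constant' that shapes the
    mass-transport tail near the limiting current density iL."""
    eta_act = A_tafel * np.log(np.maximum(i, i0) / i0)
    eta_ohm = R_ohm * i
    eta_conc = -B_conc * np.log(np.maximum(1.0 - i / iL, 1e-9))
    return E_nernst - eta_act - eta_ohm - eta_conc

i = np.linspace(0.01, 1.4, 50)   # current density sweep, A/cm^2
V = cell_voltage(i)
```

Increasing B_conc steepens the voltage drop as i approaches iL, which is the qualitative effect the abstract attributes to the concentration constant.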

]]>Computation doi: 10.3390/computation6020037

Authors: Khalid Hattaf Noura Yousfi

Human immunodeficiency virus (HIV) is a retrovirus that causes HIV infection and, over time, acquired immunodeficiency syndrome (AIDS). It can be spread and transmitted through two fundamental modes: virus-to-cell infection and direct cell-to-cell transmission. In this paper, we propose a new mathematical model that incorporates both modes of transmission and takes into account the role of the adaptive immune response in HIV infection. We first show that the proposed model is mathematically and biologically well posed. Moreover, we prove that the dynamical behavior of the model is fully determined by five threshold parameters. Furthermore, numerical simulations are presented to confirm our theoretical results.
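A model with both transmission modes can be sketched with a basic target-cell/infected-cell/virus system integrated by forward Euler; the equations and all parameter values are generic stand-ins (the adaptive immune response and the authors' five threshold parameters are not reproduced here):

```python
def simulate(beta1=2e-7, beta2=1e-7, lam=1e4, d=0.01, a=0.5,
             k=100.0, mu=3.0, dt=0.01, days=200):
    """Forward-Euler run of a basic within-host model:
    new infections = beta1*T*V (virus-to-cell) + beta2*T*I (cell-to-cell),
    with target cells T, infected cells I, and free virus V."""
    T, I, V = 1e6, 0.0, 1.0          # illustrative initial state
    for _ in range(int(days / dt)):
        new_inf = beta1 * T * V + beta2 * T * I
        T += dt * (lam - d * T - new_inf)   # production, death, infection
        I += dt * (new_inf - a * I)         # infected-cell death rate a
        V += dt * (k * I - mu * V)          # virion burst k, clearance mu
    return T, I, V

T, I, V = simulate()
```

With these illustrative rates the infection establishes itself and target cells fall well below their uninfected level; in the paper, such outcomes are classified by the five threshold parameters rather than by simulation alone.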

]]>Computation doi: 10.3390/computation6020036

Authors: Claudio Amovilli Franca Floris

The electron density is used to compute the Shannon entropy. The deviation of this quantity from its Hartree–Fock (HF) value has been observed to be related to the correlation energy. Thus, the Shannon entropy is proposed here as a valid quantity for assessing the quality of an energy density functional developed within Kohn–Sham theory. To this purpose, results from eight different functionals, representative of Jacob’s ladder, are compared with accurate results obtained from diffusion quantum Monte Carlo (DMC) computations. For three series of atomic ions, our results show that the revTPSS and PBE0 functionals perform best, whereas those based on the local density approximation give the largest discrepancy from the DMC Shannon entropy.
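The Shannon entropy of an electron density, S = −∫ ρ ln ρ d³r, can be checked on a density with a known closed-form answer; the hydrogen 1s density below is a standard test case, not one of the ionic systems studied:

```python
import numpy as np

def shannon_entropy(rho, r):
    """S = -integral of rho*ln(rho) over 3D space for a spherically
    symmetric density rho(r) on a uniform radial grid (Riemann sum)."""
    integrand = -4.0 * np.pi * r ** 2 * rho * np.log(rho)
    return float(np.sum(integrand) * (r[1] - r[0]))

# Hydrogen 1s density rho = exp(-2r)/pi in atomic units; the exact
# entropy is S = 3 + ln(pi), since <r> = 3/2 for the 1s orbital.
r = np.linspace(1e-6, 30.0, 200_000)
rho = np.exp(-2.0 * r) / np.pi
S = shannon_entropy(rho, r)
```

In practice the same quadrature would be applied to HF, Kohn–Sham, and DMC densities, and the deviations of S between them would be compared as in the paper.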

]]>Computation doi: 10.3390/computation6020035

Authors: Eberhard Engel

Far outside the surface of slabs, the exact exchange (EXX) potential v_x falls off as −1/z, if z denotes the direction perpendicular to the surface and the slab is localized around z = 0. Similarly, the EXX energy density e_x behaves as −n/(2z), where n is the electron density. Here, an alternative proof of these relations is given, in which the Coulomb singularity in the EXX energy is treated in a particularly careful fashion. This new approach allows the derivation of the next-to-leading order contributions to the asymptotic v_x and e_x. It turns out that in both cases, the corrections are proportional to 1/z² in general.

]]>Computation doi: 10.3390/computation6020034

Authors: Ali Cemal Benim Michael Diederich Björn Pfeiffelmann

The purpose of this study is the development of an automated two-dimensional airfoil shape optimization procedure for small horizontal axis wind turbines (HAWT), with an emphasis on high thrust and aerodynamically stable performance. The procedure combines Computational Fluid Dynamics (CFD) analysis with the Response Surface Methodology (RSM), the Biobjective Mesh Adaptive Direct Search (BiMADS) optimization algorithm and an automatic geometry and mesh generation tool. In the CFD analysis, a Reynolds-Averaged Navier–Stokes (RANS) approach is applied in combination with a two-equation turbulence model. For describing the system behaviour under alternating wind conditions, a number of 2D RANS simulations with varying Reynolds numbers and wind angles are performed. The number of cases is reduced by the use of RSM. In the analysis, an emphasis is placed upon the role of the blade-to-blade interaction. The average and the standard deviation of the thrust are optimized by a derivative-free optimization algorithm to define a Pareto optimal set, using the BiMADS algorithm. The results show that improvements in the performance can be achieved by modifications of the blade shape and that the present procedure can be used as an effective tool for blade shape optimization.

]]>Computation doi: 10.3390/computation6020033

Authors: María Teresa Sánchez José Manuel García-Aznar

Cell migration is an important biological process that has generated increasing interest during the last several years. This process is based on three phases: protrusion at the front end of the cell, de-adhesion at the rear end, and contraction of the cell body, all of them coordinated through the polymerization/depolymerization of certain cytoskeletal proteins. The aim of this work is to present a mathematical model to simulate the actin polymerization/depolymerization process that regulates the final outcome of the cell migration process, considering all the above phases, in a particular case: when the cell is confined in a microfluidic channel. Under these specific conditions, cell migration can be approximated by one-dimensional simulations. We propose a system of reaction–diffusion equations to simulate the behavior of the cytoskeletal proteins responsible for protrusion and contraction in the cell, coupled with the mechanical response of the cell, computing its deformations and stresses. Furthermore, a numerical procedure is presented in order to simulate the whole process in a moving and deformable domain corresponding to the cell body.
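The numerical ingredients of such a one-dimensional model can be sketched for a single species with invented rates (explicit Euler in time, central differences in space, on a fixed rather than moving domain; the authors' model couples several species to cell mechanics):

```python
import numpy as np

# Minimal 1D reaction-diffusion update for one species c(x, t):
#   dc/dt = D d2c/dx2 + k_on * (c_tot - c) - k_off * c
# All parameters are illustrative; this only sketches the numerics.
L, nx = 10.0, 101
dx = L / (nx - 1)
D, k_on, k_off, c_tot = 1.0, 0.1, 0.05, 1.0
dt = 0.4 * dx**2 / D                  # below the explicit diffusive limit

c = np.zeros(nx)
c[nx // 2] = 1.0                      # initial pulse at mid-channel
for _ in range(20000):
    lap = np.empty(nx)
    lap[1:-1] = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
    lap[0] = 2.0 * (c[1] - c[0]) / dx**2      # zero-flux (Neumann) ends
    lap[-1] = 2.0 * (c[-2] - c[-1]) / dx**2
    c = c + dt * (D * lap + k_on * (c_tot - c) - k_off * c)
```

At long times the profile relaxes to the uniform steady state c = k_on·c_tot/(k_on + k_off), a useful sanity check for the scheme.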

]]>Computation doi: 10.3390/computation6020032

Authors: Pham Phuc Tsuyoshi Nozu Hirotoshi Kikuchi Kazuki Hibi Yukio Tamura

A subgrid-scale model based on coherent structures, called the Coherent Structure Smagorinsky Model (CSM), has been applied in a large eddy simulation to assess its performance in the prediction of wind pressure distributions on buildings. Case studies were carried out for an isolated rectangular high-rise building and a building with a setback (both in a uniform flow), and for an actual high-rise building in an urban city with turbulent boundary layer flow. For the isolated rectangular high-rise building in uniform flow, the CSM showed good agreement with both the traditional Smagorinsky Model (SM) and the experiments (values within 20%). For the building with a setback as well as the actual high-rise building in an urban city, both of which have a distinctive wind pressure distribution with large negative pressure caused by the complicated flow due to the strong influence of neighboring buildings, the CSM effectively gives more accurate results with less variation than the SM in comparison with the experimental results (within 20%). The CSM also yielded consistent peak pressure coefficients for all wind directions, within 20% of experimental values in a relatively high-pressure region of the case study of the actual high-rise building in an urban city.

]]>Computation doi: 10.3390/computation6020031

Authors: Ruifeng Hu Limin Wang Ping Wang Yan Wang Xiaojing Zheng

In the present work, a highly efficient incompressible flow solver with semi-implicit time advancement on a fully staggered grid using a high-order compact difference scheme is first developed in the framework of approximate factorization. The fourth-order compact difference scheme is adopted for approximations of derivatives and interpolations in the incompressible Navier–Stokes equations. The pressure Poisson equation is efficiently solved by the fast Fourier transform (FFT). The framework of approximate factorization significantly simplifies the implementation of semi-implicit time advancement with a high-order compact scheme. Benchmark tests demonstrate the high accuracy of the proposed numerical method. Second, by applying the proposed numerical method, we compute turbulent channel flows at low and moderate Reynolds numbers by direct numerical simulation (DNS) and large eddy simulation (LES). It is found that the predictions of turbulence statistics, and especially energy spectra, can be clearly improved by adopting the high-order scheme rather than the traditional second-order central difference scheme.
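The FFT route to the pressure Poisson equation can be sketched in one periodic dimension. A purely spectral Laplacian is used here for brevity, whereas the paper couples FFTs with compact stencils:

```python
import numpy as np

# FFT solution of the periodic 1D Poisson problem p'' = f with a
# manufactured right-hand side whose exact solution is known.
n = 128
L = 2.0 * np.pi
x = np.arange(n) * L / n
f = -np.sin(3.0 * x)                          # manufactured RHS

k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)  # wavenumbers
fh = np.fft.fft(f)
ph = np.zeros_like(fh)
nz = k != 0
ph[nz] = fh[nz] / (-k[nz] ** 2)               # k = 0 mode set to zero mean
p = np.fft.ifft(ph).real

p_exact = np.sin(3.0 * x) / 9.0               # since (sin 3x)'' = -9 sin 3x
```

For band-limited data such as this the spectral solve is exact to machine precision, which is what makes the FFT an attractive pressure solver.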

]]>Computation doi: 10.3390/computation6020030

Authors: Samir Matar

Topochemical and electronic structure relationships are shown upon going from ANCl to A2N2Se (A = Zr, Ce) through metathesis. The chalcogen Se (divalent) displacing the halogen Cl (monovalent) modifies the arrangement of A–N monolayers within ANCl (…Cl|{AN}|Cl… sequences) to double layers in A2N2Se (…Se|{A2N2}|Se… sequences). The investigation, carried out in the framework of density functional theory (DFT), points to peculiar features pertaining to the dominant effect of the A–N covalent bond, which is stronger than the ionic A–Cl and ionocovalent A–Se bonds, as identified from analyses of bonding based on overlap integrals, charge transfer, and electron localization function mapping. The electronic density of states shows semi-conducting behavior due to the tetravalent character of A. The resulting overall pseudo-binary compounds are expressed formally, with full ionization, as {AN}Cl and {A2N2}Se.

]]>Computation doi: 10.3390/computation6020029

Authors: Alexander Landa Per Söderlind Ivan Naumov John Klepeis Levente Vitos

In the periodic table, only a few pure metals exhibit lattice or magnetic instabilities associated with Fermi surface nesting, the classical examples being α-U and Cr. Whereas α-U displays a strong Kohn anomaly in the phonon spectrum that ultimately leads to the formation of charge density waves (CDWs), Cr is known for its nesting-induced spin density waves (SDWs). Recently, it has become clear that a pronounced Kohn anomaly and the corresponding softening in the elastic constants is also the key factor that controls structural transformations and mechanical properties in compressed group VB metals—materials with relatively high superconducting critical temperatures. This article reviews the current understanding of the structural and mechanical behavior of these metals under pressure with an introduction to the concept of the Kohn anomaly and how it is related to the important concept of Peierls instability. We review both experimental and theoretical results showing different manifestations of the Kohn anomaly in the transverse acoustic phonon mode TA (ξ00) in V, Nb, and Ta. Specifically, in V the anomaly triggers a structural transition to a rhombohedral phase, whereas in Nb and Ta it leads to an anomalous reduction in yield strength.

]]>Computation doi: 10.3390/computation6020028

Authors: Gongbo Zu Kit Lam

Wind flow structures and their consequent wind loads on two high-rise buildings in staggered arrangement are investigated by Large Eddy Simulation (LES). Synchronized pressure and flow field measurements by particle image velocimetry (PIV) are conducted in a boundary layer wind tunnel to validate the numerical simulations. The instantaneous and time-averaged flow fields are analyzed and discussed in detail. The coherent flow structures in the building gap are clearly observed and the upstream building wake is found to oscillate sideways and meander down to the downstream building in a coherent manner. The disruptive effect on the downstream building wake induced by the upstream building is also observed. Furthermore, the connection between the upstream building wake and the wind loads on the downstream building is explored by the simultaneous data of wind pressures and wind flow fields.

]]>Computation doi: 10.3390/computation6020027

Authors: S. Paz Cameron Abrams

In this work, we study the influence of hidden barriers on the convergence behavior of three free-energy calculation methods: well-tempered metadynamics (WTMD), adaptive-biasing forces (ABF), and on-the-fly parameterization (OTFP). We construct a simple two-dimensional potential-energy surface (PES) that allows for an exact analytical result for the free energy along any one-dimensional order parameter. We then choose different collective variable (CV) definitions and PES parameters to create three different systems with increasing sampling challenges. We find that none of the three methods is greatly affected by the hidden barriers in the simplest case considered. The adaptive sampling methods show faster sampling, while the auxiliary high-friction requirement of OTFP makes it slower in this case. However, a slight change in the CV definition has a strong impact on the ABF and WTMD performance, illustrating the importance of choosing suitable collective variables.

]]>Computation doi: 10.3390/computation6010026

Authors: Fredrik Nilsson Ferdi Aryasetiawan

Substantial progress has been achieved in the last couple of decades in computing the electronic structure of correlated materials from first principles. This progress has been driven by parallel development in theory and numerical algorithms. Theoretical development in combining ab initio approaches and many-body methods is particularly promising. A crucial role is also played by a systematic method for deriving a low-energy model, which bridges the gap between real and model systems. In this article, an overview is given tracing the development from the LDA+U to the latest progress in combining the GW method and (extended) dynamical mean-field theory (GW+EDMFT). The emphasis is on conceptual and theoretical aspects rather than technical ones.

]]>Computation doi: 10.3390/computation6010025

Authors: Xiao-Yin Pan Viraht Sahni

Dissipative effects arise in an electronic system when it interacts with a time-dependent environment. Here, the Schrödinger theory of electrons in an electromagnetic field including dissipative effects is described from a new perspective. Dissipation is accounted for via the effective Hamiltonian approach in which the electron mass is time-dependent. The perspective is that of the individual electron: the corresponding equation of motion for the electron or time-dependent differential virial theorem—the ‘Quantal Newtonian’ second law—is derived. According to the law, each electron experiences an external field comprised of a binding electric field, the Lorentz field, and the electromagnetic field. In addition, there is an internal field whose components are representative of electron correlations due to the Pauli exclusion principle and Coulomb repulsion, kinetic effects, and density. There is also an internal contribution due to the magnetic field. The response of the electron is governed by the current density field in which a damping coefficient appears. The law leads to further insights into Schrödinger theory, and in particular the intrinsic self-consistent nature of the Schrödinger equation. It is proved that in the presence of dissipative effects, the basic variables (gauge-invariant properties, knowledge of which determines the Hamiltonian) are the density and physical current density. Finally, a local effective potential theory of dissipative systems—quantal density functional theory (QDFT)—is developed. This constitutes the mapping from the interacting dissipative electronic system to one of noninteracting fermions possessing the same dissipation and basic variables. Attributes of QDFT are the separation of the electron correlations due to the Pauli exclusion principle and Coulomb repulsion, and the determination of the correlation contributions to the kinetic energy. 
Hence, Schrödinger theory in conjunction with QDFT leads to additional insights into the dissipative system.

]]>Computation doi: 10.3390/computation6010024

Authors: Katrina Calautit Angelo Aquino John Calautit Payam Nejat Fatemeh Jomehzadeh Ben Hughes

Global demand for energy continues to increase rapidly due to economic and population growth, especially in emerging market economies. This growth raises concerns about energy security, which intensify as more users require more energy resources. Higher consumption of fossil fuels also leads to more greenhouse gas emissions, which contribute to global warming, and many people still lack access to electricity. Several studies have reported that wind energy is one of the most rapidly developing sources of power and, with costs declining due to advancements in technology and manufacturing, together with concerns over energy security and environmental issues, this trend is predicted to continue. As a result, tools and methods to simulate and optimize wind energy technologies must also continue to advance. This paper reviews the most recently published work on Computational Fluid Dynamics (CFD) simulations of micro to small wind turbines, buildings integrated with wind turbines, and wind turbines installed in wind farms. In addition, the existing limitations and complications of wind energy system modelling are examined, and issues that need further work are highlighted. Studies on the aerodynamic interaction among the atmospheric boundary layer or wind farm terrain, the turbine rotor, and its wake are investigated, and CFD combined with other tools, such as blade element momentum theory, is examined.

]]>Computation doi: 10.3390/computation6010023

Authors: B. Shadrack Jabes Christian Krekeler

We use the Grand Canonical Adaptive Resolution Molecular Dynamics Technique (GC-AdResS) to examine the essential degrees of freedom necessary for reproducing the structural properties of the imidazolium class of ionic liquids (ILs). In this technique, the atomistic details are treated in an open sub-region of the system, while the surrounding environment is modelled as a generic coarse-grained model. We systematically characterize spatial quantities such as intramolecular and intermolecular radial distribution functions, along with other structural and orientational properties of the ILs. The spatial quantities computed in an open sub-region of the system are in excellent agreement with the equivalent quantities calculated in a full atomistic simulation, suggesting that the atomistic degrees of freedom outside the sub-region are negligible. The size of the sub-region considered in this study is 2 nm, which is essentially the size of a few ions. Insight from the study suggests that a high degree of spatial locality plays a crucial role in characterizing the properties of imidazolium-based ionic liquids.

]]>Computation doi: 10.3390/computation6010022

Authors: Péter Koltai Hao Wu Frank Noé Christof Schütte

There are multiple ways in which a stochastic system can be out of statistical equilibrium. It might be subject to time-varying forcing; it might be in a transient phase on its way towards equilibrium; it might even be in equilibrium without us noticing it, due to insufficient observations; or it might fail to admit an equilibrium distribution at all. We review some of the approaches that model the effective statistical behavior of equilibrium and non-equilibrium dynamical systems, and show that both cases can be considered under the unified framework of optimal low-rank approximation of so-called transfer operators. Particular attention is given to the connection between these methods, Markov state models, and the concept of metastability, as well as to the estimation of such reduced-order models from finite simulation data. All these topics play an important role in, e.g., molecular dynamics, where Markov state models are often and successfully utilized, and which is the main motivating application in this paper. We illustrate our considerations by numerical examples.
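The basic estimation step behind Markov state models, building a low-rank surrogate of the transfer operator from trajectory data, can be sketched on a toy two-state chain with invented transition probabilities:

```python
import numpy as np

# Estimate a Markov state model from simulation data: count lag-1
# transitions, normalize rows, and read metastability off the second
# eigenvalue. The chain is a toy stand-in for molecular dynamics data.
rng = np.random.default_rng(0)
T_true = np.array([[0.99, 0.01],
                   [0.02, 0.98]])        # two metastable states

# simulate a long discrete trajectory of the chain
n_steps = 200000
traj = np.empty(n_steps, dtype=int)
traj[0] = 0
u = rng.random(n_steps)
for t in range(1, n_steps):
    traj[t] = int(u[t] < T_true[traj[t - 1], 1])   # jump to state 1?

# count matrix and row-normalized transition-matrix estimate
C = np.zeros((2, 2))
np.add.at(C, (traj[:-1], traj[1:]), 1.0)
T_est = C / C.sum(axis=1, keepdims=True)

lam = np.sort(np.linalg.eigvals(T_est).real)[::-1]
t2 = -1.0 / np.log(lam[1])               # implied relaxation timescale
```

The second eigenvalue close to one signals metastability, and the implied timescale t2 is the reduced-order model's estimate of the slow relaxation time.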

]]>Computation doi: 10.3390/computation6010021

Authors: Joseph Rudzinski Tristan Bereau

Coarse-grained molecular simulation models can provide significant insight into the complex behavior of protein systems, but suffer from an inherently distorted description of dynamical properties. We recently demonstrated that, for a heptapeptide of alanine residues, the structural and kinetic properties of a simulation model are linked in a rather simple way, given a certain level of physics present in the model. In this work, we extend these findings to a longer peptide, for which the representation of configuration space in terms of a full enumeration of sequences of helical/coil states along the peptide backbone is impractical. We verify the structural-kinetic relationships by scanning the parameter space of a simple native-biased model and then employ a distinct transferable model to validate and generalize the conclusions. Our results further demonstrate the validity of the previous findings, while clarifying the role of conformational entropy in the determination of the structural-kinetic relationships. More specifically, while the global, long timescale kinetic properties of a particular class of models with varying energetic parameters but approximately fixed conformational entropy are determined by the overarching structural features of the ensemble, a shift in these kinetic observables occurs for models with a distinct representation of steric interactions. At the same time, the relationship between structure and more local, faster kinetic properties is not affected by varying the conformational entropy of the model.

]]>Computation doi: 10.3390/computation6010020

Authors: Marcus Weber

Upon ligand binding or during chemical reactions the state of a molecular system changes in time. Usually we consider a finite set of (macro-) states of the system (e.g., ‘bound’ vs. ‘unbound’), although the process itself takes place in a continuous space. In this context, the formula χ = X A connects the micro-dynamics of the molecular system to its macro-dynamics. χ can be understood as a clustering of micro-states of a molecular system into a few macro-states. X is a basis of an invariant subspace of a transfer operator describing the micro-dynamics of the system. The formula claims that there is an unknown linear relation A between these two objects. With the aid of this formula we can understand rebinding effects, the electron flux in pericyclic reactions, and systematic changes of binding rates in kinetic ITC experiments. We can also analyze sequential spectroscopy experiments and rare event systems more easily. This article provides an explanation of the formula and an overview of some of its consequences.
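A toy illustration of χ = XA follows, assuming a four-micro-state chain with two metastable blocks; the sign-based crisp assignment is only a minimal stand-in for a full PCCA-type construction of A:

```python
import numpy as np

# chi = X A on a toy chain: X holds the two dominant eigenvectors of the
# transition matrix, and A maps them to membership vectors chi that are
# ~1 inside one macro-state and ~0 outside.
T = np.array([[0.95, 0.05, 0.00, 0.00],
              [0.05, 0.94, 0.01, 0.00],
              [0.00, 0.01, 0.94, 0.05],
              [0.00, 0.00, 0.05, 0.95]])
w, V = np.linalg.eig(T)
order = np.argsort(-w.real)
X = V[:, order[:2]].real        # basis of the dominant invariant subspace

# crisp macro-state assignment from the sign structure of the slow mode
labels = (X[:, 1] > 0).astype(float)
chi = np.column_stack([labels, 1.0 - labels])

# least-squares A such that X A ~ chi (exact when chi lies in span(X))
A = np.linalg.lstsq(X, chi, rcond=None)[0]
```

Because the blocks are strongly metastable, the indicator vectors lie almost in the span of X, so a linear A reproduces χ to within a few percent.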

]]>Computation doi: 10.3390/computation6010019

Authors: Xinghao Liang Yang Li Qiang Zhao Zheng Zhang Xiaoping Ouyang

Silicon carbide (SiC) is considered an important material for nuclear engineering due to its excellent properties. Changing the carbon content in SiC can regulate and control its elastic and thermodynamic properties, but a simulation study of the effect of carbon content on the sputtering of SiC (caused by helium ions) is still lacking. In this work, we used Monte-Carlo and molecular dynamics simulation methods to study the effects of carbon concentration, incidence energy, incident angle, and target temperature on the sputtering yield of SiC. The results show that the energy and angle of the incident ions have a significant effect on the sputtering yield when the carbon concentration in SiC is around 62 at %, while the target temperature has little effect. Our work might provide theoretical support for the experimental research and engineering application of carbon fiber-reinforced SiC used as a plasma-facing material in tokamak fusion reactors.

]]>Computation doi: 10.3390/computation6010018

Authors: George Kasapoglu Evangelia Karagianni Michael Fafalios Ioannis Koukos

The recently modified Pascal function is further exploited in this paper in the design of passive analog filters. The Pascal approximation has a non-equiripple magnitude, in contrast to most well-known approximations, such as the Chebyshev approximation. A novelty of this work is the introduction of a precise method that calculates the coefficients of the Pascal function. Two examples of passive designs are presented to illustrate the advantages and the disadvantages of the Pascal approximation. Moreover, the values of the passive elements can be taken from tables, which are created to define the normalized values of these elements for the Pascal approximation, as Zverev had done for the Chebyshev, Elliptic, and other approximations. Although the Pascal approximation can be applied to both passive and active filter designs, a passive filter design is addressed in this paper, and the benefits and shortcomings of the Pascal approximation are presented and discussed.

]]>Computation doi: 10.3390/computation6010017

Authors: Darrell Pepper Maria Ramos Gonzalez

A localized meshless method is used to simulate 3-D atmospheric wind fields for wind energy assessment and emergency response. The meshless (or mesh-free) method with radial basis functions (RBFs) alleviates the need to create a mesh required by finite difference, finite volume, and finite element methods. The method produces a fast solution that converges with high accuracy, establishing 3-D wind estimates over complex terrain. The method does not require discretization of the domain or boundary and removes the need for domain integration. The meshless method converges exponentially for smooth boundary shapes and boundary data, and is insensitive to dimensional constraints. Coding of the method is very easy and can be done using MATLAB or MAPLE. By employing a localized RBF procedure, 3-D wind fields can be established from sparse meteorological data. The meshless method can be easily run on PCs and hand-held mobile devices. This article summarizes previous work where the meshless method has successfully simulated 3D wind fields over various environments, along with the equations used to obtain the simulations.
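The core RBF step can be sketched in Python (the article mentions MATLAB or MAPLE); the stations, the "true" wind field, and the multiquadric shape parameter below are all synthetic, whereas the article's applications use real meteorological data:

```python
import numpy as np

# Radial-basis-function interpolation of sparse wind observations -- the
# core meshless ingredient: scattered stations, no mesh.
rng = np.random.default_rng(1)
stations = rng.uniform(0.0, 10.0, size=(25, 2))   # (x, y) of 25 stations

def wind(p):
    """Synthetic smooth wind-speed field used to generate observations."""
    return 5.0 + np.sin(0.5 * p[:, 0]) * np.cos(0.5 * p[:, 1])

obs = wind(stations)
c = 2.0                                  # multiquadric shape parameter

def phi(r):
    return np.sqrt(r**2 + c**2)          # multiquadric basis function

# solve A w = obs with A_ij = phi(|x_i - x_j|) for the RBF weights
dists = np.linalg.norm(stations[:, None, :] - stations[None, :, :], axis=-1)
weights = np.linalg.solve(phi(dists), obs)

def interpolate(points):
    d = np.linalg.norm(points[:, None, :] - stations[None, :, :], axis=-1)
    return phi(d) @ weights

test_pts = rng.uniform(1.0, 9.0, size=(50, 2))
u_est = interpolate(test_pts)
u_true = wind(test_pts)
```

The interpolant matches the observations exactly at the stations and, because no mesh or domain integration is involved, evaluating it anywhere in the domain is a single matrix-vector product.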

]]>Computation doi: 10.3390/computation6010016

Authors: Wienczyslaw Stalewski

Optimization methods are increasingly used to solve problems in aeronautical engineering. Typically, optimization methods are utilized in the design of an aircraft airframe or its structure. The presented study is focused on improvement of aircraft flight control procedures through numerical optimization. The optimization problems concern selected phases of flight of a light gyroplane—a rotorcraft using an unpowered rotor in autorotation to develop lift and an engine-powered propeller to provide thrust. An original methodology of computational simulation of rotorcraft flight was developed and implemented. In this approach the aircraft motion equations are solved step-by-step, simultaneously with the solution of the Unsteady Reynolds-Averaged Navier–Stokes equations, which is conducted to assess aerodynamic forces acting on the aircraft. As a numerical optimization method, the BFGS (Broyden–Fletcher–Goldfarb–Shanno) algorithm was adapted. The developed methodology was applied to optimize the flight control procedures in selected stages of gyroplane flight in direct proximity to the ground, where proper control of the aircraft is critical to ensure flight safety and performance. The results of conducted computational optimizations proved the qualitative correctness of the developed methodology. The research results can be helpful in the design of easy-to-control gyroplanes and also in the training of pilots for this type of rotorcraft.

]]>Computation doi: 10.3390/computation6010015

Authors: Andrea Cesari Sabine Reißer Giovanni Bussi

Molecular dynamics (MD) simulations allow the investigation of the structural dynamics of biomolecular systems with unrivaled time and space resolution. However, in order to compensate for the inaccuracies of the utilized empirical force fields, it is becoming common to integrate MD simulations with experimental data obtained from ensemble measurements. We review here the approaches that can be used to combine MD and experiment under the guidance of the maximum entropy principle. We mostly focus on methods based on Lagrangian multipliers, either implemented as reweighting of existing simulations or through an on-the-fly optimization. We discuss how errors in the experimental data can be modeled and accounted for. Finally, we use simple model systems to illustrate the typical difficulties arising when applying these methods.
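A minimal sketch of the Lagrangian-multiplier route for a single restrained observable, with synthetic frames and a synthetic "experimental" value:

```python
import numpy as np

# Maximum-entropy reweighting: find the Lagrange multiplier lmbda such
# that the reweighted simulation average matches the experimental value.
rng = np.random.default_rng(2)
s = rng.normal(3.0, 1.0, size=10000)   # observable evaluated per frame
s_exp = 3.5                            # target (experimental) average

def reweighted_mean(lmbda):
    logw = -lmbda * s
    w = np.exp(logw - logw.max())      # numerically stabilized weights
    w /= w.sum()
    return w @ s

# Newton iteration on g(lmbda) = <s>_lmbda - s_exp, using the identity
# d<s>/dlmbda = -Var_lmbda(s)
lmbda = 0.0
for _ in range(50):
    logw = -lmbda * s
    w = np.exp(logw - logw.max())
    w /= w.sum()
    mean = w @ s
    var = w @ (s - mean) ** 2
    step = (mean - s_exp) / var        # Newton: lmbda <- lmbda + g / Var
    lmbda += step
    if abs(step) < 1e-12:
        break
```

The resulting weights exp(−λs) are the minimal bias, in the entropy sense, that reconciles the ensemble with the measurement; experimental error can be modeled by softening this constraint.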

]]>Computation doi: 10.3390/computation6010014

Authors: Petros Daltzis Christos Volos Hector Nistazakis Andreas Tsigopoulos George Tombras

In this work, a 4D hyperchaotic hyperjerk system, with better results for its Lyapunov exponents and Kaplan–Yorke dimension compared with other systems of this family, as well as its circuit implementation, is presented. Hyperchaotic hyperjerk systems exhibit complex dynamical behavior in a high-dimensional phase space with n ≥ 4, offering robustness against many types of attacks in private communications. For this reason, an adaptive controller is designed to achieve global chaos synchronization of coupled 4D hyperchaotic hyperjerk systems with unknown parameters. The adaptive results in this work are proved using Lyapunov stability theory, and the effectiveness of the proposed synchronization scheme is confirmed through simulation results.
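The state-space form in which such an n = 4 hyperjerk system x'''' = f(x, x', x'', x''') is simulated can be sketched as follows; the linear, stable f below is a placeholder chosen so the sketch has a checkable outcome, not the authors' nonlinear hyperchaotic system:

```python
import numpy as np
from scipy.integrate import solve_ivp

# A hyperjerk system is one 4th-order ODE rewritten as four first-order
# equations for (x, x', x'', x''').
def hyperjerk(t, y):
    x0, x1, x2, x3 = y                  # x and its first three derivatives
    f = -(x0 + 4.0 * x1 + 6.0 * x2 + 4.0 * x3)   # roots of (s + 1)^4
    return [x1, x2, x3, f]

sol = solve_ivp(hyperjerk, (0.0, 20.0), [1.0, 0.0, 0.0, 0.0],
                rtol=1e-9, atol=1e-12)
final = np.abs(sol.y[:, -1]).max()      # decays like t^3 e^(-t)
```

Replacing f with a suitable nonlinear function is what produces chaotic and hyperchaotic behavior in this family.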

]]>Computation doi: 10.3390/computation6010013

Authors: Per Söderlind David Young

The last decade has seen a continued development of better experimental techniques to measure the equation of state (EOS) of various materials. These improvements in both static and shock-compression approaches have increased the accuracy of the EOS and challenged the complementary theoretical modeling. The conventional modeling of EOS, at least at pressure and temperature conditions that are not too extreme, is founded on density-functional theory (DFT). Naturally, there is an increased interest in the accuracy of DFT as the measurements become more refined, and there is a particular interest in the robustness and validity of DFT at conditions where experimental data are not available. Here, we consider a broad and large set of 64 elemental solids, from low atomic number Z up to the very high Z actinide metals. The intent is to compare DFT with experimental zero-temperature isotherms up to 1 Mbar (100 GPa) and to draw conclusions regarding the theoretical (DFT) error and quantify a reasonable and defensible approach to define the theoretical uncertainty. We find that in all 64 cases the DFT error at high pressure is smaller than or equal to the DFT error at lower pressures, which thus provides an upper bound on the error at high compression.

]]>Computation doi: 10.3390/computation6010012

Authors: Christos Vazouras George Kasapoglu Evangelia Karagianni Nikolaos Uzunoglu

A microwave free space reflectometry technique with swept frequency measurements for the profiling of wall structures and the detection of hidden (covered) layers has been applied to the Hagia Sophia byzantine monument. Experimental measurement results are presented and compared with three-dimensional (3D) simulated results, exhibiting fair agreement in some (though not all) aspects. Based on the experimental results, the possibility of clear discrimination between regions with and without covered mosaic layers, and hence the detection of such layers, is demonstrated.

]]>Computation doi: 10.3390/computation6010011

Authors: Giovanni Ciccotti Mauro Ferrario

A dynamical system submitted to holonomic constraints is Hamiltonian only if considered in the reduced phase space of its generalized coordinates and momenta, which need to be defined ad hoc in each particular case. However, specially in molecular simulations, where the number of degrees of freedom is exceedingly high, the representation in generalized coordinates is completely unsuitable, although conceptually unavoidable, to provide a rigorous description of its evolution and statistical properties. In this paper, we first review the state of the art of the numerical approach that defines the way to conserve exactly the constraint conditions (by an algorithm universally known as SHAKE) and permits integrating the equations of motion directly in the phase space of the natural Cartesian coordinates and momenta of the system. We then discuss in detail SHAKE numerical implementations in the notable cases of Verlet and velocity-Verlet algorithms. After discussing in the same framework how constraints modify the properties of the equilibrium ensemble, we show how, at the price of moving to a dynamical system no more (directly) Hamiltonian, it is possible to provide a direct interpretation of the dynamical system and so derive its Statistical Mechanics both at equilibrium and in non-equilibrium conditions. To achieve that, we generalize the statistical treatment to systems no longer conserving the phase space volume (equivalently, we introduce a non-Euclidean invariant measure in phase space) and derive a generalized Liouville equation describing the ensemble even out of equilibrium. As a result, we can extend the response theory of Kubo (linear and nonlinear) to systems subjected to constraints.
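A single SHAKE-corrected Verlet step for the simplest case, two unit-mass particles joined by one rigid bond, can be sketched as follows; the positions, time step, and uniform external force are illustrative:

```python
import numpy as np

# One SHAKE-corrected position-Verlet step in plain Cartesian
# coordinates, for two unit-mass particles with a bond of length d0.
d0 = 1.0
dt = 0.01
r_prev = np.array([[0.0, 0.0, 0.0], [1.001, 0.0, 0.0]])  # bond was stretched
r_curr = np.array([[0.0, 0.0, 0.0], [1.000, 0.0, 0.0]])  # constraint holds now

def force(r):
    """Uniform external field acting on both particles."""
    return np.array([[0.0, -1.0, 0.0], [0.0, -1.0, 0.0]])

# unconstrained Verlet step
r_new = 2.0 * r_curr - r_prev + dt**2 * force(r_curr)

# SHAKE: iteratively apply constraint forces along the *old* bond vector
# until |r1 - r2| = d0 within tolerance
bond_old = r_curr[0] - r_curr[1]
for _ in range(100):
    bond = r_new[0] - r_new[1]
    diff = bond @ bond - d0**2
    if abs(diff) < 1e-12:
        break
    g = diff / (4.0 * (bond @ bond_old))   # linearized Lagrange multiplier
    r_new[0] -= g * bond_old               # equal and opposite corrections
    r_new[1] += g * bond_old               # (unit masses)

bond_len = np.linalg.norm(r_new[0] - r_new[1])
```

With several constraints the same correction is cycled over all bonds until every one is satisfied, which is the iteration the article analyzes for Verlet and velocity-Verlet integrators.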

]]>Computation doi: 10.3390/computation6010010

Authors: Denise-Penelope Kontoni Ahmed Farghaly

Various cases of two adjacent multi-story buildings with different numbers of floors and equal or unequal foundation levels under earthquake loading, considering soil-structure interaction (SSI), are investigated. A two-dimensional model for each case of the two adjacent unequal buildings without a separation distance is used, and a special arrangement of contact elements in the contact zone is employed to capture all possible deformation contact modes that take place under seismic loading. The soil is modelled by two-dimensional 4-node elements in contact with the foundations of the two adjacent buildings. This paper studies the earthquake-induced double pounding that takes place between the two adjacent unequal-height buildings at some upper points of the superstructure in the contact zone and also at the foundation level, considering SSI. The double pounding and SSI effects should be taken into consideration in the seismic analysis of adjacent buildings, especially those with different heights and different foundation levels.

]]>Computation doi: 10.3390/computation6010009

Authors: Abhishekh Gupta Pedro Mendes

Stochastic simulation has been widely used to model the dynamics of biochemical reaction networks. Several algorithms have been proposed that are exact solutions of the chemical master equation, following the work of Gillespie. These stochastic simulation approaches can be broadly classified into two categories: network-based and network-free simulation. The network-based approach requires that the full network of reactions be established at the start, while the network-free approach is based on reaction rules that encode classes of reactions and, by applying rule transformations, generates reaction events as they are needed without ever having to derive the entire network. In this study, we compare the efficiency and limitations of several available implementations of these two approaches. The results allow for an informed selection of the implementation and methodology for specific biochemical modeling applications.
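The direct method underlying these exact approaches is compact enough to sketch. The following is a minimal, hypothetical Python illustration for a single decay reaction A &rarr; &empty; with rate constant k; `gillespie_decay` and its arguments are names chosen here for illustration, not part of any of the compared packages, which handle full networks and rule systems.

```python
import random

def gillespie_decay(n0, k, t_max, seed=1):
    """Direct-method SSA for the single reaction A -> 0 with rate constant k."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    trajectory = [(t, n)]
    while n > 0 and t < t_max:
        propensity = k * n                 # a(x) for a first-order reaction
        t += rng.expovariate(propensity)   # exponentially distributed waiting time
        n -= 1                             # the sole reaction channel fires
        trajectory.append((t, n))
    return trajectory

traj = gillespie_decay(n0=100, k=0.5, t_max=50.0)
```

With more than one channel, a second random number would select which reaction fires in proportion to its propensity; the waiting time is always drawn from the current total propensity.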

]]>Computation doi: 10.3390/computation6010008

Authors: Denise-Penelope Kontoni Ahmed Farghaly

An existing riverine platform in Egypt, together with its pile group foundation, is analyzed under environmental loads using 3D FEM structural analysis software incorporating soil-pile interaction. The interaction between the transfer plate and the piles supporting the platform is investigated. Two connection conditions were studied, assuming a fixed or hinged connection between the piles and the reinforced concrete platform, for the purpose of comparing the structural behavior. The analysis showed that the fixed or hinged connection condition between the piles and the platform altered the values and distribution of displacements, normal force, bending moments, and shear forces along the length of each pile. The distribution of piles in the pile group affects the stress distribution on both the soil and platform. The piles were found to suffer from displacement failure rather than force failure. Moreover, the resulting bending stresses on the reinforced concrete plate in the case of a fixed connection between the piles and the platform were almost doubled, much higher than the allowable reinforced concrete stress, and even exceeded the ultimate design strength. Thus, the environmental loads acting on a pile-supported riverine offshore platform may cause collapse if they are not properly considered in the structural analysis and design.

]]>Computation doi: 10.3390/computation6010007

Authors: Aleksandr Terentjev Pietro Cortona Lucian Constantin José Pitarke Fabio Della Sala Eduardo Fabiano

We extend the SG4 generalized gradient approximation, developed for covalent and ionic solids, with a nonlocal van der Waals functional. The resulting SG4-rVV10m functional is tested, considering two possible parameterizations, for various kinds of bulk solids, including layered materials and molecular crystals as well as regular bulk materials. The results are compared to those of similar methods, PBE + rVV10L and rVV10. In most cases, SG4-rVV10m yields a quite good description of the systems (from iono-covalent to hydrogen-bond and dispersion interactions), being competitive with PBE + rVV10L and rVV10 for dispersion-dominated systems and slightly superior for iono-covalent ones. Thus, it shows promising applicability for solid-state applications. In a few cases, however, overbinding is observed. This is analysed in terms of gradient contributions to the functional.

]]>Computation doi: 10.3390/computation6010006

Authors: Tobias Martin Ivan Shevchuk

In this article, the development of high-order semi-implicit interpolation schemes for convection terms on unstructured grids is presented. It is based on weighted essentially non-oscillatory (WENO) reconstructions, which can be applied to the evaluation of any field in finite volumes using its known cell-averaged values. Here, the algorithm handles convex cells in arbitrary three-dimensional meshes. The implementation is parallelized using the Message Passing Interface. All schemes are embedded in the code structure of OpenFOAM®, giving access to a huge open-source community and applicability to high-level programming. Several verification cases and applications to the scalar advection equation and the incompressible Navier-Stokes equations show the improved accuracy of the WENO approach due to a mapping of the stencil to a reference space without scaling effects. An efficiency analysis indicates an increased computational effort of high-order schemes in comparison to available high-resolution methods. However, the reconstruction time can be efficiently decreased when more processors are used.

]]>Computation doi: 10.3390/computation6010005

Authors: Matthias Schramm Bernhard Stoevesandt Joachim Peinke

The adjoint approach in gradient-based optimization combined with computational fluid dynamics is commonly applied in various engineering fields. In this work, the gradients are used for the design of a two-dimensional airfoil shape, where the aim is a change in lift and drag coefficient, respectively, to a given target value. The optimizations use the unconstrained quasi-Newton method with an approximation of the Hessian. The flow field is computed with a finite-volume solver where the continuous adjoint approach is implemented. A common assumption in this approach is the use of the same turbulent viscosity in the adjoint diffusion term as for the primal flow field. The effect of this so-called “frozen turbulence” assumption is compared to the results using adjoints to the Spalart–Allmaras turbulence model. The comparison is done at a Reynolds number of Re = 2 × 10^6 for two different airfoils at different angles of attack.

]]>Computation doi: 10.3390/computation6010004

Authors: Computation Editorial Office

Peer review is an essential part in the publication process, ensuring that Computation maintains high quality standards for its published papers.

]]>Computation doi: 10.3390/computation6010003

Authors: Rudolf Podgornik Julija Zavadlav Matej Praprotnik

Densely packed DNA arrays exhibit hexagonal and orthorhombic local packings, as well as a weakly first-order transition between them. While we have some understanding of the interactions between DNA molecules in aqueous ionic solutions, the structural details of its ordered phases and the mechanism governing the respective phase transitions between them remain less well understood. Since at high DNA densities, i.e., small interaxial spacings, one can neglect neither the atomic details of the interacting macromolecular surfaces nor the atomic details of the intervening ionic solution, atomistic resolution is a sine qua non to properly describe and analyze the interactions between DNA molecules. In fact, in order to properly understand the details of the observed osmotic equation of state, one needs to implement multiple levels of organization, spanning the range from the molecular order of DNA itself, to the possible ordering of counterions, and all the way to the induced molecular ordering of the aqueous solvent, all coupled together by electrostatic, steric, thermal and direct hydrogen-bonding interactions. Multiscale simulations therefore appear singularly suited to connect the microscopic details of this system with its macroscopic thermodynamic behavior. We review the details of the simulation of dense atomistically resolved DNA arrays with different packing symmetries and the ensuing osmotic equation of state obtained by enclosing a DNA array in a monovalent salt and multivalent (spermidine) counterions within a solvent-permeable membrane, mimicking the behavior of DNA arrays subjected to external osmotic stress.
By varying the DNA density, the local packing symmetry, and the counterion type, we are able to analyze the osmotic equation of state together with the full structural characterization of the DNA subphase, the counterion distribution and the solvent structural order in terms of its different order parameters and consequently identify the most important contribution to the DNA-DNA interactions at high DNA densities.

]]>Computation doi: 10.3390/computation6010002

Authors: Martin Skote Tze Sim Narasimalu Srikanth

The objective of this study is to investigate the temporal behavior of the pressure field of a stationary dry microburst-like wind phenomenon utilizing Unsteady Reynolds-averaged Navier-Stokes (URANS) numerical simulations. Using an axisymmetric steady impinging jet model, the dry microburst-like wind is simulated from the initial release of a steady downdraft flow until the time after the primary vortices have fully convected out of the stagnation region. The validated URANS results presented herein shed light on the temporal variation of the pressure field, which is in agreement with the qualitative description obtained from field measurements. The results have an impact on understanding the wind load on structures from the initial touch-down phase of the downdraft from a microburst. The investigation is based on CFD techniques, together with a simple impinging jet model that does not include any microphysical processes. Unlike previous investigations, this study focuses on the transient pressure field from a downdraft without obstacles.

]]>Computation doi: 10.3390/computation6010001

Authors: Rufina Tretyakova Rostislav Savinkov Gennady Lobov Gennady Bocharov

The lymphatic system is a body-wide network of lymphatic vessels and lymphoid organs. The complexity of the structural and functional organization of the lymphatic system implies the necessity of using computational modeling approaches to unravel the mechanisms of its regulation in quantitative terms. Although it is a vital part of the circulatory and immune systems, the lymphatic system remains poorly investigated as a mathematical modeling object. Modeling of the lymphatic vessel network needs to be established using a systematic approach in order to advance the model-driven research of this important physiological system. In our study, we elucidate key general features underlying the 3D structural organization of the lymphatic system in order to develop computational geometry and network graph models of the human lymphatic system based on available anatomical data (from the PlasticBoy project), which provides an estimate of the structure of the lymphatic system, and to analyze the topological properties of the resulting models.

]]>Computation doi: 10.3390/computation5040049

Authors: Karlheinz Schwarz

The routinely made assumptions for simulating solid materials are briefly summarized, since they need to be critically assessed when new aspects become important, such as excited states, finite temperature, time-dependence, etc. The significantly higher computer power combined with improved experimental data open new areas for interdisciplinary research, for which new ideas and concepts are needed.

]]>Computation doi: 10.3390/computation5040048

Authors: Christoph Riesinger Arash Bakhtiari Martin Schreiber Philipp Neumann Hans-Joachim Bungartz

Heterogeneous clusters are a widely utilized class of supercomputers assembled from different types of computing devices, for instance CPUs and GPUs, providing huge computational potential. Programming them in a scalable way that exploits their maximal performance introduces numerous challenges such as optimizations for different computing devices, dealing with multiple levels of parallelism, the application of different programming models, work distribution, and hiding of communication with computation. We utilize the lattice Boltzmann method for fluid flow as a representative of a scientific computing application and develop a holistic implementation for large-scale CPU/GPU heterogeneous clusters. We review and combine a set of best practices and techniques ranging from optimizations for the particular computing devices to the orchestration of tens of thousands of CPU cores and thousands of GPUs. Eventually, we come up with an implementation using all the available computational resources for the lattice Boltzmann method operators. Our approach shows excellent scalability behavior, making it future-proof for heterogeneous clusters of the upcoming architectures on the exaFLOPS scale. Parallel efficiencies of more than 90% are achieved, leading to 2604.72 GLUPS utilizing 24,576 CPU cores and 2048 GPUs of the CPU/GPU heterogeneous cluster Piz Daint and computing more than 6.8 × 10^9 lattice cells.

]]>Computation doi: 10.3390/computation5040047

Authors: Timothy Sands

By reversing paradigms that normally utilize mathematical models as the basis for nonlinear adaptive controllers, this article describes using the controller to serve as a novel computational approach for mathematical system identification. System identification usually begins with the dynamics, and then seeks to parameterize the mathematical model in an optimization relationship that produces estimates of the parameters that minimize a designated cost function. The proposed methodology uses a DC motor with a minimum-phase mathematical model controlled by a self-tuning regulator without model pole cancelation. The normal system identification process is briefly articulated by parameterizing the system for least squares estimation that includes an allowance for exponential forgetting to deal with time-varying plants. Next, towards the proposed approach, the Diophantine equation is derived for an indirect self-tuner where feedforward and feedback controls are both parameterized in terms of the motor’s math model. As the controller seeks to nullify tracking errors, the assumed plant parameters are adapted and quickly converge on the correct parameters of the motor’s math model. Next, a more challenging non-minimum phase system is investigated, and the earlier implemented technique is modified utilizing a direct self-tuner with an increased pole excess. The nominal method experiences control chattering (an undesirable characteristic that could potentially damage the motor during testing), while the increased pole excess eliminates the control chattering, yet maintains effective mathematical system identification. This novel approach permits algorithms normally used for control to instead be used effectively for mathematical system identification.
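The least-squares estimation step with exponential forgetting mentioned above can be sketched in its simplest scalar form. This is a hypothetical, minimal illustration of the estimator alone; the paper couples such an estimator to a self-tuning regulator and a DC motor model, and `rls_scalar` and all numeric values here are illustrative only.

```python
def rls_scalar(data, lam=0.98, theta0=0.0, p0=1000.0):
    """Scalar recursive least squares; lam < 1 exponentially forgets old data,
    which lets the estimate track a time-varying plant parameter."""
    theta, p = theta0, p0
    for x, y in data:
        k = p * x / (lam + x * x * p)   # estimator gain
        theta += k * (y - theta * x)    # correct the estimate from the prediction error
        p = (p - k * x * p) / lam       # covariance update with forgetting
    return theta

# noiseless measurements of y = theta * x with theta = 2.5
true_theta = 2.5
data = [(x, true_theta * x) for x in (1.0, 0.5, 2.0, 1.5, 3.0)]
theta_hat = rls_scalar(data)
```

As the prediction error is driven to zero, the estimate converges on the true parameter, mirroring how the adapted controller parameters in the abstract converge on the motor's model.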

]]>Computation doi: 10.3390/computation5040046

Authors: David Sadowsky Andrew Abboud Anthony Cyr Lena Vodovotz Paulo Fontes Ruben Zamora Yoram Vodovotz

Extracorporeal organ perfusion, in which organs are preserved in an isolated, ex vivo environment over an extended time-span, is a concept that has led to the development of numerous alternative preservation protocols designed to better maintain organ viability prior to transplantation. These protocols offer researchers a novel opportunity to obtain extensive sampling of isolated organs, free from systemic influences. Data-driven computational modeling is a primary means of integrating the extensive and multivariate data obtained in this fashion. In this review, we focus on the application of dynamic data-driven computational modeling to liver pathophysiology and transplantation based on data obtained from ex vivo organ perfusion.

]]>Computation doi: 10.3390/computation5040045

Authors: Nikolai Bessonov Vitaly Volpert

This paper is devoted to modelling tissue growth with a deformable cell model. Each cell represents a polygon with particles located at its vertices. Stretching, bending and pressure forces act on particles and determine their displacement. Pressure-dependent cell proliferation is considered. Various patterns of growing tissue are observed. An application of the model to tissue regeneration is illustrated. Approximate analytical models of tissue growth are developed.

]]>Computation doi: 10.3390/computation5040044

Authors: Animesh Agarwal Brooks Rabideau Ahmed Ismail

We present a hierarchical coarse-graining framework for modeling semidilute polymer solutions, based on the wavelet-accelerated Monte Carlo (WAMC) method. This framework forms a hierarchy of resolutions to model polymers at length scales that cannot be reached via atomistic or even standard coarse-grained simulations. Previously, it was applied to simulations examining the structure of individual polymer chains in solution using up to four levels of coarse-graining (Ismail et al., J. Chem. Phys., 2005, 122, 234901 and Ismail et al., J. Chem. Phys., 2005, 122, 234902), recovering the correct scaling behavior in the coarse-grained representation. In the present work, we extend this method to the study of polymer solutions, deriving the bonded and non-bonded potentials between coarse-grained superatoms from the single chain statistics. A universal scaling function is obtained, which does not require recalculation of the potentials as the scale of the system is changed. To model semidilute polymer solutions, we assume the intermolecular potential between the coarse-grained beads to be equal to the non-bonded potential, which is a reasonable approximation in the case of semidilute systems. Thus, a minimal input of microscopic data is required for simulating the systems at the mesoscopic scale. We show that coarse-grained polymer solutions can reproduce results obtained from the more detailed atomistic system without a significant loss of accuracy.

]]>Computation doi: 10.3390/computation5040042

Authors: Matthew Chan Rogelio Cuevas-Saavedra Debajit Chakraborty Paul Ayers

We propose a limited-memory quasi-Newton method using the bad Broyden update and apply it to the nonlinear equations that must be solved to determine the effective Fermi momentum in the weighted density approximation for the exchange energy density functional. This algorithm has advantages for nonlinear systems of equations with diagonally dominant Jacobians, because it is easy to generalize the method to allow for periodic updates of the diagonal of the Jacobian. Systematic tests of the method for atoms show that one can determine the effective Fermi momentum at thousands of points in less than fifteen iterations.
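The "bad" Broyden update maintains an approximation H of the inverse Jacobian directly: H &larr; H + (s − Hy)yᵀ/(yᵀy), with s the step and y the change in residual. Below is a minimal, hypothetical dense-matrix sketch on a small diagonally dominant system; the limited-memory variant with periodic diagonal refreshes proposed in the paper, and its application to the weighted density approximation, are not reproduced here. The test system `f` is invented for illustration.

```python
import math

def f(x):
    # A small, strongly diagonally dominant test system (illustrative only).
    return [x[0] + 0.1 * math.sin(x[1]) - 1.0,
            x[1] + 0.1 * math.cos(x[0]) - 2.0]

def bad_broyden(f, x, tol=1e-10, max_iter=50):
    """Root finding with Broyden's 'bad' update: H <- H + (s - H y) y^T / (y^T y)."""
    n = len(x)
    H = [[float(i == j) for j in range(n)] for i in range(n)]  # H0 = identity
    fx = f(x)
    for _ in range(max_iter):
        if max(abs(v) for v in fx) < tol:
            break
        step = [-sum(H[i][j] * fx[j] for j in range(n)) for i in range(n)]  # s = -H f
        x_new = [x[i] + step[i] for i in range(n)]
        f_new = f(x_new)
        y = [f_new[i] - fx[i] for i in range(n)]
        yy = sum(v * v for v in y)
        Hy = [sum(H[i][j] * y[j] for j in range(n)) for i in range(n)]
        for i in range(n):
            for j in range(n):
                H[i][j] += (step[i] - Hy[i]) * y[j] / yy
        x, fx = x_new, f_new
    return x

root = bad_broyden(f, [0.0, 0.0])
```

Because the update corrects H along the observed secant direction only, a diagonally dominant Jacobian, as in the abstract's application, keeps the identity (or a refreshed diagonal) a good starting approximation.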

]]>Computation doi: 10.3390/computation5040043

Authors: Xuhua Xia

A self-organizing map (SOM) is an artificial neural network algorithm that can learn from training data consisting of objects expressed as vectors and perform non-hierarchical clustering to map input vectors into discretized clusters, with vectors assigned to the same cluster sharing similar numeric or alphanumeric features. SOM has been used widely in transcriptomics to identify co-expressed genes as candidates for co-regulated genes. I envision SOM to have great potential in characterizing heterogeneous sequence motifs, and aim to illustrate this potential by a parallel presentation of SOM with a set of numerical vectors and a set of equal-length sequence motifs. While there are numerous biological applications of SOM involving numerical vectors, few studies have used SOM for heterogeneous sequence motif characterization. This paper is intended to encourage (1) researchers to study SOM in this new domain and (2) computer programmers to develop user-friendly motif-characterization SOM tools for biologists.
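The numeric-vector case can be made concrete with a minimal, hypothetical 1-D SOM: a best-matching unit is found for each input, and that unit and its neighbors (weighted by a shrinking Gaussian neighborhood) are pulled toward the input with a decaying learning rate. All names, map sizes, and the two synthetic clusters below are illustrative, not from the paper.

```python
import math
import random

def train_som(data, n_nodes=4, epochs=100, seed=0):
    """Minimal 1-D SOM: nodes on a line, Gaussian neighborhood, decaying rates."""
    rng = random.Random(seed)
    dim = len(data[0])
    nodes = [[rng.random() for _ in range(dim)] for _ in range(n_nodes)]
    for epoch in range(epochs):
        frac = 1.0 - epoch / epochs
        lr = 0.5 * frac                            # decaying learning rate
        sigma = max(1e-3, (n_nodes / 2.0) * frac)  # shrinking neighborhood radius
        for v in data:
            # best-matching unit: node closest to the input in Euclidean distance
            bmu = min(range(n_nodes),
                      key=lambda i: sum((nodes[i][d] - v[d]) ** 2 for d in range(dim)))
            for i in range(n_nodes):
                h = math.exp(-((i - bmu) ** 2) / (2.0 * sigma ** 2))
                for d in range(dim):
                    nodes[i][d] += lr * h * (v[d] - nodes[i][d])
    return nodes

# two synthetic clusters of 2-D vectors (e.g., toy expression profiles)
data = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1)] * 3 \
     + [(1.0, 1.0), (0.9, 1.0), (1.0, 0.9), (0.9, 0.9)] * 3
nodes = train_som(data)
```

For sequence motifs, the Euclidean distance would be replaced by a distance defined on equal-length strings, which is the extension the paper advocates.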

]]>Computation doi: 10.3390/computation5030041

Authors: DooJin Jeon KiTae Kim SangEul Han

Among the various blast load equations, the Kingery-Bulmash equation is applicable to both a free-air burst and a surface burst, and it enables calculation of the parameters of a pressure-time history curve. On the other hand, this equation is quite complicated. This paper proposes a modified equation that may replace the conventional Kingery-Bulmash equation. The proposed modified equation, which was constructed by curve-fitting the original equation, requires a brief calculation process with a simpler equation compared to the original. The modified equation is also applicable to both types of bursts and has the same calculable scaled distance range as the conventional equation. The calculation results obtained using the modified equation were similar to those obtained from the original equation, with less than a 1% difference.
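The curve-fitting step behind such a simplification can be illustrated generically: sample the original (complicated) equation over the scaled distance range and fit a low-order polynomial by least squares. The sketch below solves the normal equations with Gaussian elimination; `polyfit_normal`, the sample points, and the target polynomial are all illustrative and do not reproduce the actual Kingery-Bulmash coefficients, which must be taken from the paper or its references.

```python
def polyfit_normal(xs, ys, degree):
    """Least-squares polynomial fit via normal equations + Gaussian elimination."""
    n = degree + 1
    # normal equations A c = b for the monomial basis 1, x, ..., x^degree
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # forward elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for c2 in range(col, n):
                A[r][c2] -= m * A[col][c2]
            b[r] -= m * b[col]
    # back substitution
    c = [0.0] * n
    for i in range(n - 1, -1, -1):
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, n))) / A[i][i]
    return c

xs = [0.5, 1.0, 1.5, 2.0, 3.0]
ys = [2.0 - 1.5 * x + 0.25 * x * x for x in xs]   # synthetic target curve
coeffs = polyfit_normal(xs, ys, degree=2)
```

In blast-parameter work the fit is typically performed on logarithmic transforms of pressure and scaled distance, which keeps the polynomial order low over a wide range.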

]]>Computation doi: 10.3390/computation5030040

Authors: Mohammed Hassan Mohamed Hamada

Accuracy improvement is among the primary research focuses in the area of recommender systems. Traditionally, recommender systems work on two sets of entities, Users and Items, to estimate a single rating that represents a user’s acceptance of an item. This technique was later extended to multi-criteria recommender systems that use an overall rating from multi-criteria ratings to estimate the degree of acceptance by users for items. The primary concern that is still open to the recommender systems community is to find suitable optimization algorithms that can explore the relationships between multiple ratings to compute an overall rating. One of the approaches for doing this is to treat the overall rating as an aggregation of multiple criteria ratings. Given this assumption, this paper proposes using feed-forward neural networks to predict the overall rating. Five powerful training algorithms have been tested, and the results of their performance are analyzed and presented in this paper.
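The aggregation idea can be made concrete with a minimal, hypothetical one-hidden-layer network trained by plain stochastic gradient descent to map criteria ratings to an overall rating. The paper evaluates five more powerful training algorithms; none of them is reproduced here, and the synthetic mean-rating data and all names below are illustrative.

```python
import math
import random

def train_aggregator(samples, hidden=4, lr=0.05, epochs=200, seed=0):
    """One-hidden-layer (tanh) network: criteria ratings -> overall rating."""
    rng = random.Random(seed)
    n_in = len(samples[0][0])
    w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0

    def predict(x):
        h = [math.tanh(sum(w1[j][i] * x[i] for i in range(n_in)) + b1[j])
             for j in range(hidden)]
        return sum(w2[j] * h[j] for j in range(hidden)) + b2, h

    for _ in range(epochs):
        for x, y in samples:
            out, h = predict(x)
            err = out - y                                    # d(loss)/d(out), squared error
            for j in range(hidden):
                grad_pre = err * w2[j] * (1.0 - h[j] ** 2)   # backprop through tanh
                w2[j] -= lr * err * h[j]
                b1[j] -= lr * grad_pre
                for i in range(n_in):
                    w1[j][i] -= lr * grad_pre * x[i]
            b2 -= lr * err
    return predict

# synthetic data: the overall rating is the mean of three criteria ratings in [0, 1]
samples = [([a / 4, b / 4, c / 4], (a + b + c) / 12)
           for a in range(5) for b in range(5) for c in range(5)]
predict = train_aggregator(samples)
mse = sum((predict(x)[0] - y) ** 2 for x, y in samples) / len(samples)
```

Any learnable aggregation function, linear or not, can be represented this way; the open question the abstract raises is which training algorithm optimizes it best.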

]]>Computation doi: 10.3390/computation5030039

Authors: Volodymyr Turkowski Naseem Din Talat Rahman

In this work, we summarize the recent progress made in constructing time-dependent density-functional theory (TDDFT) exchange-correlation (XC) kernels capable of describing excitonic effects in semiconductors, and apply these kernels in two important cases: a “classic” bulk semiconductor, GaAs, with weakly-bound excitons, and a novel two-dimensional material, MoS2, with very strongly-bound excitonic states. Namely, after a brief review of the standard many-body semiconductor Bloch and Bethe-Salpeter equation (SBE and BSE) approaches and the combined TDDFT+BSE approach, we proceed with details of the proposed pure TDDFT XC kernels for excitons. We analyze the reasons for the successes and failures of these kernels in describing the excitons in bulk GaAs and monolayer MoS2, and conclude with a discussion of possible alternative kernels capable of accurately describing the bound electron-hole states in both bulk and two-dimensional materials.

]]>Computation doi: 10.3390/computation5030038

Authors: Le Wang Qiang Pan Jie Chen Shunsheng Yang

A novel computational fluid dynamics-population balance model (CFD-PBM) for the simulation of gas mixing in activated sludge (i.e., an opaque non-Newtonian liquid) in a bubble column is developed and described to solve the problem of measuring the hydrodynamic behavior of opaque non-Newtonian liquid-gas two-phase flow. We study the effects of the inlet position and liquid-phase properties (water/activated sludge) on various characteristics, such as liquid flow field, gas hold-up, liquid dynamic viscosity, and volume-averaged bubble diameter. As the inlet position changed, two symmetric vortices gradually became a single main vortex in the flow field in the bubble column. In the simulations, when water was in the liquid phase, the global gas hold-up was higher than when activated sludge was in the liquid phase in the bubble column, and a flow field that was dynamic with time was observed in the bubble column. Additionally, when activated sludge was used as the liquid phase, no periodic velocity changes were found. When the inlet position was varied, the non-Newtonian liquid phase had different peak values and distributions of (dynamic) liquid viscosity in the bubble column, which were related to the gas hold-up. The high gas hold-up zone corresponded to the low dynamic viscosity zone. Finally, when activated sludge was in the liquid phase, the volume-averaged bubble diameter was much larger than when water was in the liquid phase.

]]>Computation doi: 10.3390/computation5030037

Authors: Stuart Bartlett

The lattice Boltzmann method is an efficient computational fluid dynamics technique that can accurately model a broad range of complex systems. As well as single-phase fluids, it can simulate thermohydrodynamic systems and passive scalar advection. In recent years, it has also gained attention as a means of simulating chemical phenomena, as interest in self-organization processes has increased. This paper will present a widely-used and versatile lattice Boltzmann model that can simultaneously incorporate fluid dynamics, heat transfer, buoyancy-driven convection, passive scalar advection, chemical reactions and enthalpy changes. All of these effects interact in a physically accurate framework that is simple to code and readily parallelizable. As well as a complete description of the model equations, several example systems will be presented in order to demonstrate the accuracy and versatility of the method. New simulations, which analyzed the effect of a reversible reaction on the transport properties of a convecting fluid, will also be described in detail. This extra chemical degree of freedom was utilized by the system to augment its net heat flux. The numerical method outlined in this paper can be readily deployed for a vast range of complex flow problems, spanning a variety of scientific disciplines.
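The scalar-transport building block of such models is easy to sketch. Below is a minimal, hypothetical D1Q3 BGK scheme for pure diffusion of a passive scalar on a periodic 1-D lattice (with these weights, the diffusivity is D = (&tau; − 1/2)/3 in lattice units); the model described in the paper is a richer multi-population scheme with thermal, buoyant and reactive couplings, none of which appears here.

```python
def lbm_diffusion(rho0, tau=1.0, steps=200):
    """D1Q3 BGK lattice Boltzmann for scalar diffusion on a periodic 1-D lattice."""
    n = len(rho0)
    w = (2.0 / 3.0, 1.0 / 6.0, 1.0 / 6.0)     # weights for velocities c = 0, +1, -1
    f = [[w[i] * rho0[x] for x in range(n)] for i in range(3)]  # start at equilibrium
    for _ in range(steps):
        rho = [f[0][x] + f[1][x] + f[2][x] for x in range(n)]   # zeroth moment
        for i in range(3):                     # collide: relax toward f_i^eq = w_i * rho
            for x in range(n):
                f[i][x] += (w[i] * rho[x] - f[i][x]) / tau
        f[1] = [f[1][(x - 1) % n] for x in range(n)]   # stream c = +1 to the right
        f[2] = [f[2][(x + 1) % n] for x in range(n)]   # stream c = -1 to the left
    return [f[0][x] + f[1][x] + f[2][x] for x in range(n)]

rho0 = [0.0] * 64
rho0[32] = 1.0                                 # initial point pulse of scalar
rho = lbm_diffusion(rho0)
```

The collide-and-stream structure is what makes the method so readily parallelizable: each step touches only a site and its immediate neighbors.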

]]>Computation doi: 10.3390/computation5030036

Authors: Lorenzo Fortunato Ilyas Inci José-Antonio Lay Andrea Vitturi

The program TFF calculates stripping single-particle form factors for one-neutron transfer in prior representation with appropriate perturbative treatment of recoil. Coupled equations are then integrated along a semiclassical trajectory to obtain one- and two-neutron transfer amplitudes and probabilities within first- and second-order perturbation theory. Total and differential cross-sections are then calculated by folding with a transmission function (obtained from a phenomenological imaginary absorption potential). The program description, user instructions and examples are discussed.

]]>Computation doi: 10.3390/computation5030035

Authors: Mirjam Glessmer Christian Janßen

This article gives an overview of the diverse range of teaching applications that can be realized using an interactive lattice Boltzmann simulation tool in fluid mechanics instruction and outreach. In an inquiry-based learning framework, examples are given of learning scenarios that address instruction on scientific results, scientific methods or the scientific process at varying levels of student activity, from consuming to applying to researching. Interactive live demonstrations on portable hardware enable new and innovative teaching concepts for fluid mechanics, also for large audiences and in the early stages of university education. Moreover, selected examples successfully demonstrate that the integration of high-fidelity CFD methods into fluid mechanics teaching facilitates high-quality student research work within reach of the current state of the art in the respective field of research.

]]>Computation doi: 10.3390/computation5030034

Authors: Georgios Drakopoulos Andreas Kanavos Ioannis Karydis Spyros Sioutas Aristidis G. Vrahatis

Biomedicine is a pillar of the collective, scientific effort of human self-discovery, as well as a major source of humanistic data codified primarily in biomedical documents. Despite their rigid structure, maintaining and updating a considerably sized collection of such documents is a task of overwhelming complexity, one that mandates efficient information retrieval integrated with clustering schemes. The latter should work natively with inherently multidimensional data and higher-order interdependencies. Additionally, past experience indicates that clustering should be semantically enhanced. Tensor algebra is the key to extending the current term-document model to more dimensions. In this article, an alternative keyword-term-document strategy is proposed, based on the scientometric observation that keywords typically possess more expressive power than ordinary text terms, whose algorithmic cornerstones are third-order tensors and MeSH ontological functions. This strategy has been compared against a baseline using two different biomedical datasets, the TREC (Text REtrieval Conference) genomics benchmark and a large custom set of cognitive science articles from PubMed.

]]>Computation doi: 10.3390/computation5030033

Authors: Matthew Marko

The author demonstrates a stable Lagrangian solid modeling method, tracking the interactions of solid mass particles rather than using a meshed grid. This numerical method avoids the problem of tensile instability often seen with smooth particle applied mechanics by having the solid particles apply stresses expected with Hooke’s law, as opposed to using a smoothing function for neighboring solid particles. This method has been tested successfully with a bar in tension, compression, and shear, as well as a disk compressed into a flat plate, and the numerical model consistently matched the analytical Hooke’s law as well as Hertz contact theory for all examples. The solid modeling numerical method was then built into a 2-D model of a pressure vessel, which was tested with liquid water particles under pressure and simulated with smoothed particle hydrodynamics. This simulation was stable, and demonstrated the feasibility of Lagrangian specification modeling for fluid–solid interactions.

]]>Computation doi: 10.3390/computation5030032

Authors: William Seffens

Much of biology-inspired computer science is based on the Central Dogma, as implemented with genetic algorithms or evolutionary computation. That 60-year-old biological principle based on the genome, transcriptome and proteome is becoming overshadowed by a new paradigm of complex ordered associations and connections between layers of biological entities, such as interactomes, metabolomes, etc. We define a new hierarchical concept, the “Connectosome”, and propose new venues of computational data structures based on a conceptual framework called the “Grand Ensemble”, which contains the Central Dogma as a subset. Connectedness and communication within and between living or biology-inspired systems comprise ensembles from which a physical computing system can be conceived. In this framework, the delivery of messages is filtered by size and by a simple and rapid semantic analysis of their content. This work aims to initiate discussion on the Grand Ensemble in network biology as a representation of a Persistent Turing Machine. This framework, which adds interaction and persistence to the classic Turing-machine model, uses metrics based on resilience that have application to dynamic optimization problem solving in genetic programming.

]]>