
Computation, Volume 8, Issue 4 (December 2020) – 20 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; the PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
Open Access Article
Graph Reachability on Parallel Many-Core Architectures
Computation 2020, 8(4), 103; https://doi.org/10.3390/computation8040103 - 02 Dec 2020
Abstract
Many modern applications are modeled using graphs of some kind. Given a graph, reachability, that is, discovering whether there is a path between two given nodes, is a fundamental problem as well as one of the most important steps of many other algorithms. The rapid accumulation of very large graphs (up to tens of millions of vertices and edges) from a diversity of disciplines demands efficient and scalable solutions to the reachability problem. General-purpose computing has been successfully used on Graphics Processing Units (GPUs) to parallelize algorithms that present a high degree of regularity. In this paper, we extend the applicability of GPU processing to graph-based manipulation by re-designing a simple but efficient state-of-the-art graph-labeling method, namely the GRAIL (Graph Reachability Indexing via RAndomized Interval) algorithm, for many-core CUDA-based GPUs. This algorithm first generates a label for each vertex of the graph, then exploits these labels to answer reachability queries. Unfortunately, the original algorithm executes a sequence of depth-first visits, which are intrinsically recursive and cannot be efficiently implemented on parallel systems. For that reason, we design an alternative approach in which a sequence of breadth-first visits substitutes the original depth-first traversal to generate the labeling, and in which a high number of concurrent visits is exploited during query evaluation. The paper describes our strategy to re-design these steps, the difficulties we encountered in implementing them, and the solutions adopted to overcome the main inefficiencies. To prove the validity of our approach, we compare (in terms of time and memory requirements) our GPU-based approach with the original sequential CPU-based tool. Finally, we report some hints on how to conduct further research in the area. Full article
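For readers unfamiliar with interval labeling, the containment test at the heart of GRAIL-style indexing can be sketched as follows. This is a hypothetical single-label, recursive CPU version for illustration only; the paper's contribution is precisely to replace this recursive depth-first scheme with BFS-based labeling on the GPU.

```python
# Hypothetical sketch of GRAIL-style interval labeling (single label,
# recursive DFS post-order), for illustration only.
def label(graph, roots):
    rank, low, counter = {}, {}, [0]
    def dfs(v):
        m = float("inf")
        for w in graph.get(v, []):
            if w not in rank:
                dfs(w)
            m = min(m, low[w])
        counter[0] += 1
        rank[v] = counter[0]          # post-order rank of v
        low[v] = min(m, rank[v])      # smallest rank reachable from v
    for r in roots:
        if r not in rank:
            dfs(r)
    return rank, low

def reaches(graph, rank, low, u, v):
    # Containment is a necessary condition: if v's interval [low, rank]
    # is not inside u's, then u cannot reach v.
    if not (low[u] <= low[v] and rank[v] <= rank[u]):
        return False
    if u == v:
        return True
    # Otherwise fall back to a DFS pruned by the same containment test.
    return any(reaches(graph, rank, low, w, v) for w in graph.get(u, []))
```

If the target's interval is not contained in the source's, the query is answered negatively in constant time; only otherwise does a pruned search run.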
Open Access Article
Simulation of Diffusion Bonding of Different Heat Resistant Nickel-Base Alloys
Computation 2020, 8(4), 102; https://doi.org/10.3390/computation8040102 - 30 Nov 2020
Viewed by 205
Abstract
Currently, an important fundamental problem of practical relevance is the production of high-quality solid-phase compounds of various metals. This paper presents a theoretical model that allows one to study the diffusion process in nickel-base refractory alloys. As an example, a two-dimensional model of a ternary alloy is considered to model the diffusion bonding of alloys with different compositions. The main idea is to divide the alloy components into three groups: (i) the base element Ni, (ii) the intermetallic-forming elements Al and Ti and (iii) the alloying elements. This approach allows one to consider multi-component alloys as ternary alloys, which greatly simplifies the analysis. The calculations are carried out within the framework of the hard sphere model, describing interatomic interactions by pair potentials. The energy of any configuration of a given system is written in terms of order parameters and ordering energies. A vacancy diffusion model is described that takes into account the gain/loss of potential energy due to a vacancy jump, as well as temperature. The diffusion bonding of two dissimilar refractory alloys is modeled. The concentration profiles of the components and the order parameters are analyzed at different times. The results obtained indicate that the ternary alloy model is efficient in modeling the diffusion bonding of dissimilar Ni-base refractory alloys. Full article
(This article belongs to the Section Computational Chemistry)

Open Access Article
Combined Heat and Power Dynamic Economic Emissions Dispatch with Valve Point Effects and Incentive Based Demand Response Programs
Computation 2020, 8(4), 101; https://doi.org/10.3390/computation8040101 - 23 Nov 2020
Viewed by 204
Abstract
In this paper, the Combined Heat and Power Dynamic Economic Emissions Dispatch (CHPDEED) problem formulation is considered. This problem is a complicated nonlinear mathematical formulation with multiple, conflicting objective functions. The aim of this mathematical problem is to obtain the optimal quantities of heat and power output for the committed generating units, which include power-only and heat-only units. Heat and load demands are expected to be satisfied throughout the total dispatch interval. In this paper, Valve Point effects are considered in the fuel cost function of the units, which leads to a non-convex cost function. Furthermore, an Incentive Based Demand Response Program formulation is also simultaneously considered with the CHPDEED problem, further complicating the mathematical problem. The decision variables are thus the optimal power and heat output of the generating units and the optimal power curbed and monetary incentive for the participating demand response consumers. The resulting mathematical formulations are tested on four practical scenarios depicting different system operating conditions, and the obtained results show the efficacy of the developed mathematical optimization model. The results indicate that, when the Incentive-Based Demand Response (IBDR) program’s operational hours are unrestricted with a residential load profile, the energy curtailed is highest (2680 MWh), the energy produced by the generators is lowest (38,008.53 MWh), power losses are lowest (840.5291 MW) and both fuel costs and emissions are lowest. Full article
(This article belongs to the Section Computational Engineering)

Open Access Article
Near-Field Flow Structure and Entrainment of a Round Jet at Low Exit Velocities: Implications on Microclimate Ventilation
Computation 2020, 8(4), 100; https://doi.org/10.3390/computation8040100 - 23 Nov 2020
Viewed by 213
Abstract
This paper explores the flow structure, the mean/turbulent statistical characteristics of the vector field and the entrainment of round jets issued from a smooth contracting nozzle at low nozzle exit velocities (1.39–6.44 m/s). The motivation of the study was to increase understanding of the near field and gain insight into how to control and reduce entrainment, particularly in applications that use jets with low-to-medium momentum flow, such as microclimate ventilation systems. Additionally, the near field of free jets with low momentum flow is not extensively covered in the literature. Particle image velocimetry (PIV), a whole-field vector measurement method, was used for data acquisition of the flow from a 0.025 m smooth contracting nozzle. The results show that at low nozzle exit velocities the jet flow was unstable, with oscillations, and this increased entrainment; however, increasing the nozzle exit velocity stabilized the jet flow and reduced entrainment. This is linked to the momentum flow of the jet, the structural characteristics of the flow and the type or disintegration distance of vortices created in the shear layer. The study discusses practical implications for microclimate ventilation systems and at the same time contributes data to the development and validation of a planned computational turbulence model for microclimate ventilation. Full article

Open Access Article
A Discrete Particle Swarm Optimization to Solve the Put-Away Routing Problem in Distribution Centres
Computation 2020, 8(4), 99; https://doi.org/10.3390/computation8040099 - 18 Nov 2020
Viewed by 251
Abstract
Put-away operations typically consist of moving products from depots to allocated storage locations using either operators or Material Handling Equipment (MHE), accounting for important operating costs in warehouses and impacting operational efficiency. Therefore, this paper aims to formulate and solve a Put-away Routing Problem (PRP) in distribution centres (DCs). This PRP formulation represents a novel approach due to its consideration of a fleet of homogeneous MHE, heterogeneous products linked to a put-away list size, depot location and multiple parallel aisles in a distribution centre. It should be noted that the literature has usually studied the slotting problem rather than the PRP addressed in this paper. The PRP is solved using a discrete particle swarm optimization (PSO) algorithm that is compared to tabu search approaches (Classical Tabu Search (CTS) and Tabu Search (TS) 2-Opt) and an empirical rule. As a result, the discrete PSO was found to generate the best solutions, with time savings ranging from 2 to 13% relative to CTS and TS 2-Opt for the different combinations of factor levels evaluated in the experimentation. Full article
(This article belongs to the Section Computational Engineering)
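As a rough illustration of what a discrete PSO for a routing problem can look like, the sketch below encodes a particle as a visiting permutation and its "velocity" as a list of swaps. The operators, parameters and cost function here are assumptions for illustration, not the formulation used in the paper.

```python
import random

# Hedged sketch: a common discretization of PSO for routing, where a particle
# is a permutation and "velocity" is a swap sequence (illustrative only).
def tour_cost(tour, dist):
    return sum(dist[tour[i]][tour[i + 1]] for i in range(len(tour) - 1))

def swaps_toward(current, target):
    # Swap sequence that would transform `current` into `target`.
    cur, out = list(current), []
    for i, city in enumerate(target):
        j = cur.index(city)
        if i != j:
            cur[i], cur[j] = cur[j], cur[i]
            out.append((i, j))
    return out

def discrete_pso(dist, n_particles=20, iters=100, seed=0):
    rng = random.Random(seed)
    n = len(dist)
    swarm = [rng.sample(range(n), n) for _ in range(n_particles)]
    pbest = list(swarm)
    gbest = min(swarm, key=lambda t: tour_cost(t, dist))
    for _ in range(iters):
        for k, x in enumerate(swarm):
            moves = []
            # Keep a random subset of the swaps pulling x toward pbest/gbest.
            for s in swaps_toward(x, pbest[k]) + swaps_toward(x, gbest):
                if rng.random() < 0.5:
                    moves.append(s)
            x = list(x)
            for i, j in moves:
                x[i], x[j] = x[j], x[i]
            swarm[k] = x
            if tour_cost(x, dist) < tour_cost(pbest[k], dist):
                pbest[k] = x
        gbest = min(pbest, key=lambda t: tour_cost(t, dist))
    return gbest, tour_cost(gbest, dist)
```

Each particle is stochastically pulled toward its personal best and the global best by applying a random subset of the swaps that would transform it into them.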

Open Access Article
Effect of Computational Schemes on Coupled Flow and Geo-Mechanical Modeling of CO2 Leakage through a Compromised Well
Computation 2020, 8(4), 98; https://doi.org/10.3390/computation8040098 - 13 Nov 2020
Viewed by 260
Abstract
Carbon capture, utilization, and storage (CCUS) describes a set of technically viable processes to separate carbon dioxide (CO2) from industrial byproduct streams and inject it into deep geologic formations for long-term storage. Legacy wells located within the spatial domain of new injection and production activities represent potential pathways for fluids (i.e., CO2 and aqueous phase) to leak through compromised components (e.g., through fractures or micro-annulus pathways). The finite element (FE) method is a well-established numerical approach to simulate the coupling between the multi-phase fluid flow and solid-phase deformation interactions that occur in a compromised well system. We assumed the spatial domain consists of a three-phase system: a solid, a liquid, and a gas phase. For flow in the two fluid phases, we considered two sets of primary variables: the first, the capillary pressure and gas pressure (PP) scheme; the second, the liquid pressure and gas saturation (PS) scheme. Fluid phases were coupled with the solid phase using the full coupling (i.e., monolithic coupling) and iterative coupling (i.e., sequential coupling) approaches. The challenge of achieving numerical stability in the coupled formulation in heterogeneous media was addressed using the mass lumping and upwinding techniques. Numerical results were compared with three benchmark problems to assess the performance of the coupled FE solutions: 1D Terzaghi consolidation, the Liakopoulos experiments, and the Kueper and Frind experiments. We found good agreement between our results and the three benchmark problems. For the Kueper and Frind test, the PP scheme successfully captured the observed experimental response of the non-aqueous phase infiltration, in contrast to the PS scheme. These exercises demonstrate the importance of the choice of fluid-phase primary variables for heterogeneous porous media.
We then applied the developed model to the hypothetical case of leakage along a compromised well representing a heterogeneous medium. With the mass lumping and upwinding techniques, both the monolithic and the sequential coupling provided identical results, but mass lumping was needed to avoid numerical instabilities in the sequential coupling. Additionally, in the monolithic coupling, the magnitude of the primary variables in the coupled solution without mass lumping and upwinding is higher, which is important for risk-based analyses. Full article
(This article belongs to the Special Issue Computational Models for Complex Fluid Interfaces across Scales)

Open Access Editorial
Computational Insights into Industrial Chemistry
Computation 2020, 8(4), 97; https://doi.org/10.3390/computation8040097 - 12 Nov 2020
Viewed by 210
Abstract
This brief Editorial is dedicated to announcing the Special Issue “Computational Insights into Industrial Chemistry”. The Special Issue covers the most recent progress in the rapidly growing field of computational chemistry, and the application of computer modeling in topics relevant to industrial chemistry (chemical industrial processes and materials, environmental effects caused by chemical industry activities, computer-aided design of catalysts, green chemistry, etc.). Full article
(This article belongs to the Special Issue Computational Insights into Industrial Chemistry)
Open Access Article
Application of the Robust Fixed Point Iteration Method in Control of the Level of Twin Tanks Liquid
Computation 2020, 8(4), 96; https://doi.org/10.3390/computation8040096 - 10 Nov 2020
Viewed by 279
Abstract
Precise control of the flow rate of fluids stored in multiple-tank systems is an important task in the process industries. For this reason, coupled tanks are popular paradigms in studies, because they form strongly nonlinear systems that challenge controller designers to develop various approaches. In this paper, the application of a novel Fixed Point Iteration (FPI)-based technique is reported for controlling the fluid level in a “lower tank” that is fed by the egress of an “upper” one. The control signal is the ingress rate at the upper tank. Numerical simulation results, obtained with simple sequential Julia code using Euler integration, are presented to illustrate the efficiency of this approach. Full article
(This article belongs to the Section Computational Engineering)
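The fixed-point idea can be conveyed on a scalar toy problem: find the control input whose response matches the desired one by iterating a contractive map. This is a schematic stand-in under assumed dynamics, not the Robust Fixed Point Transformation machinery or the twin-tank model of the paper.

```python
# Toy fixed-point iteration: drive the input u so that response(u) reaches
# y_des; the update is contractive when gain * response'(u) stays in (0, 2).
def fpi_solve(response, y_des, u0=0.0, gain=0.5, iters=200, tol=1e-10):
    u = u0
    for _ in range(iters):
        err = y_des - response(u)
        if abs(err) < tol:
            break
        u = u + gain * err   # move toward the fixed point of the map
    return u
```

With a linear response the iteration lands on the fixed point immediately; for mildly nonlinear responses it converges geometrically as long as the map stays contractive.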

Open Access Article
Explicit Sensitivity Coefficients for Estimation of Temperature-Dependent Thermophysical Properties in Inverse Transient Heat Conduction Problems
Computation 2020, 8(4), 95; https://doi.org/10.3390/computation8040095 - 06 Nov 2020
Viewed by 348
Abstract
Explicit expressions are obtained for sensitivity coefficients to separately estimate temperature-dependent thermophysical properties, such as specific heat and thermal conductivity, in two-dimensional inverse transient heat conduction problems for bodies with irregular shape from temperature measurement readings of a single sensor inside the body. The proposed sensitivity analysis scheme allows for the computation of all sensitivity coefficients in only one direct problem solution at each iteration with no need to solve the sensitivity and adjoint problems. In this method, a boundary-fitted grid generation (elliptic) method is used to mesh the irregular shape of the heat conducting body. Explicit expressions are obtained to calculate the sensitivity coefficients efficiently and the conjugate gradient method as an iterative gradient-based optimization method is used to minimize the objective function and reach the solution. A test case with different initial guesses and sensor locations is presented to investigate the proposed inverse analysis. Full article
(This article belongs to the Section Computational Engineering)
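The conjugate gradient iteration at the core of such gradient-based minimization can be sketched in its linear form, applied to a small symmetric positive definite system standing in for the article's objective; the actual inverse problem is nonlinear and relies on the explicit sensitivity coefficients the abstract describes.

```python
# Linear conjugate gradient for a symmetric positive definite system A x = b,
# shown as an illustration of the CG update rules (not the article's solver).
def cg_solve(A, b, iters=None, tol=1e-12):
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A x for x = 0
    d = r[:]                      # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(iters or n):
        Ad = [sum(A[i][j] * d[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(di * adi for di, adi in zip(d, Ad))
        x = [xi + alpha * di for xi, di in zip(x, d)]
        r = [ri - alpha * adi for ri, adi in zip(r, Ad)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        # new direction: residual plus a conjugacy correction
        d = [ri + (rs_new / rs) * di for ri, di in zip(r, d)]
        rs = rs_new
    return x
```

On an n-dimensional quadratic, exact arithmetic CG terminates in at most n iterations, which is why it is attractive for iterative inverse-problem solvers.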

Open Access Article
Spatial and Temporal Validation of a CFD Model Using Residence Time Distribution Test in a Tubular Reactor
Computation 2020, 8(4), 94; https://doi.org/10.3390/computation8040094 - 06 Nov 2020
Viewed by 226
Abstract
Computational fluid dynamics (CFD) has been increasingly exploited for the design and optimization of (bio)chemical processes. Validation is a crucial part of any modeling application; in CFD, when validation is done, complex and expensive techniques are normally employed. The aim of this study was to test the capability of a CFD model to represent a residence time distribution (RTD) test temporally and spatially inside a reactor. The RTD tests were carried out in a tubular reactor operated in continuous mode, with and without the presence of artificial biomass. Two hydraulic retention times, of 7.2 and 13 h, and superficial velocities of 0.65, 0.6, 1.3, and 1.1 m h−1 were evaluated. An aqueous solution of methylene blue was used as the tracer. The CFD model was implemented in ANSYS Fluent; to solve the equation system, the SIMPLE scheme and second-order discretization methods were selected. The proposed CFD model of the reactor was able to predict the spatial and temporal distribution of the injected tracer. The main disagreements between the simulations and the experimental results were observed in the first 50 min of the RTD, caused by different error sources associated with the manual execution of the triplicates, as well as some channeling or tracer by-pass that cannot be predicted by the CFD model. The CFD model performed better as the experiment elapsed for all the sampling ports. A validation methodology based on an RTD with sampling at different reactor positions can be employed as a simple way to validate CFD models. Full article
(This article belongs to the Section Computational Chemistry)
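The standard reduction of tracer data from an RTD test can be sketched as follows; the discretization and sample data here are illustrative, not the article's measurements.

```python
# Classic E-curve analysis of tracer data: normalize the concentration signal
# and take its first two moments (mean residence time and variance).
def trapz(y, t):
    # composite trapezoidal rule on possibly non-uniform time points
    return sum((y[i] + y[i + 1]) * (t[i + 1] - t[i]) / 2 for i in range(len(y) - 1))

def rtd_stats(t, conc):
    area = trapz(conc, t)                              # ∫ C dt
    E = [c / area for c in conc]                       # exit-age distribution E(t)
    t_mean = trapz([ti * e for ti, e in zip(t, E)], t) # mean residence time
    var = trapz([(ti - t_mean) ** 2 * e for ti, e in zip(t, E)], t)
    return t_mean, var
```

The normalized exit-age curve E(t) and its first two moments are the quantities typically compared between experiment and CFD prediction.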

Open Access Article
The Influence of Pressure on the Formation of FM/AF Configurations in LSMO Films: A Monte Carlo Approach
Computation 2020, 8(4), 93; https://doi.org/10.3390/computation8040093 - 06 Nov 2020
Viewed by 337
Abstract
In this work, Monte Carlo simulations of the magnetic properties of thin films, including the influence of an external pressure, are presented. These simulations were developed using a Hamiltonian composed of terms that represent the exchange interaction, the dipolar interaction, the Zeeman effect, monocrystalline anisotropy, and the pressure influence. The term representing the pressure influence on the magnetic properties was included since, in many applications, magnetic materials are part of a multiferroic material together with a piezoelectric or a ferroelectric compound. Initially, the model was developed using generic parameters in order to probe its performance; after that, the parameters were adjusted to simulate thin films of La0.67Sr0.33MnO3, a manganite with several technological applications because its Curie temperature is greater than room temperature. When the pressure influence was included, the formation of several kinds of FM/AF configurations, such as stripe, labyrinth, and checkerboard patterns, was observed. Furthermore, as the pressure increased, the critical temperature tended to decrease, in agreement with experimental reports. Full article
(This article belongs to the Section Computational Chemistry)

Open Access Article
A Modified Heart Dipole Model for the Generation of Pathological ECG Signals
Computation 2020, 8(4), 92; https://doi.org/10.3390/computation8040092 - 06 Nov 2020
Viewed by 283
Abstract
In this paper, we introduce a new dynamic model for simulating electrocardiograms (ECGs) affected by pathologies, starting from the well-known McSharry dynamic model for ECGs without cardiac disorders. In particular, the McSharry model has been generalized (by a linear transformation and a rotation) to simulate ECGs affected by heart diseases, verifying, on the one hand, the existence and uniqueness of the solution and, on the other, whether it admits instabilities. The results, obtained numerically by a procedure based on a four-stage Lobatto IIIa formula, show the good performance of the proposed model in producing ECGs, with or without heart diseases, very similar to those acquired directly from patients. Moreover, having verified through the computation of the linear index and the fuzzy entropy index (whose values are close to unity) that the ECG signals are affected by uncertainty and/or imprecision, the similarities among ECG signals (with or without heart diseases) have been quantified by a well-established fuzzy approach based on fuzzy similarity computations. This highlights that the proposed model for simulating ECGs affected by pathologies can be considered a solid starting point for the development of synthetic pathological ECG signals. Full article
(This article belongs to the Section Computational Biology)

Open Access Article
All-Nitrogen Cages and Molecular Crystals: Topological Rules, Stability, and Pyrolysis Paths
Computation 2020, 8(4), 91; https://doi.org/10.3390/computation8040091 - 06 Nov 2020
Viewed by 223
Abstract
We combined ab initio molecular dynamics with the intrinsic reaction coordinate in order to investigate the mechanisms of stability and pyrolysis of N4–N120 fullerene-like nitrogen cages. The stability of the cages was evaluated in terms of the activation barriers and the activation Gibbs energies of their thermally induced breaking. We found that binding energies, bond lengths, and quantum-mechanical descriptors failed to predict the stability of the cages. However, we derived a simple topological rule: adjacent hexagons on the cage surface result in instability. For this reason, the number of stable nitrogen cages is significantly restricted in comparison with their carbon counterparts. As a rule, smaller clusters are more stable, whereas the large cages proposed earlier collapse at room temperature. The most stable all-nitrogen cages are the N4 and N6 clusters, which can form van der Waals crystals with densities of 1.23 and 1.36 g/cm3, respectively. Examination of their band structures and densities of electronic states shows that both are insulators. Their power and sensitivity are not inferior to those of modern advanced high-energy nanosystems. Full article
(This article belongs to the Section Computational Chemistry)

Open Access Article
Self-Adjusting Variable Neighborhood Search Algorithm for Near-Optimal k-Means Clustering
Computation 2020, 8(4), 90; https://doi.org/10.3390/computation8040090 - 05 Nov 2020
Viewed by 244
Abstract
The k-means problem is one of the most popular models in cluster analysis; it minimizes the sum of the squared distances from clustered objects to the sought cluster centers (centroids). The simplicity of its algorithmic implementation encourages researchers to apply it in a variety of engineering and scientific branches. Nevertheless, the problem is proven to be NP-hard, which makes exact algorithms inapplicable for large-scale problems, and the simplest and most popular algorithms result in very poor values of the squared distance sum. If a problem must be solved within a limited time with the maximum accuracy, which would be difficult to improve using known methods without increasing computational costs, then variable neighborhood search (VNS) algorithms, which search in randomized neighborhoods formed by the application of greedy agglomerative procedures, are competitive. In this article, we investigate the influence of the most important parameter of such neighborhoods on computational efficiency and propose a new VNS-based algorithm (solver), implemented on the graphics processing unit (GPU), which adjusts this parameter. Benchmarking on data sets composed of up to millions of objects demonstrates the advantage of the new algorithm over known local search algorithms within a fixed time, allowing for online computation. Full article
(This article belongs to the Section Computational Engineering)
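As a baseline for the SSE objective the article's VNS solver minimizes, the classic Lloyd local search for k-means can be sketched in one dimension; this plain CPU version is only the kind of simple local search that VNS-based methods improve upon.

```python
import random

# Lloyd's algorithm for 1-D k-means: alternate assignment to the nearest
# center and recomputation of centers until a fixed point is reached.
def sse(points, centers, assign):
    # sum of squared distances from each point to its assigned center
    return sum((p - centers[a]) ** 2 for p, a in zip(points, assign))

def kmeans_1d(points, k, iters=100, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)   # random initial centroids
    assign = [0] * len(points)
    for _ in range(iters):
        assign = [min(range(k), key=lambda j: (p - centers[j]) ** 2)
                  for p in points]
        new = []
        for j in range(k):
            members = [p for p, a in zip(points, assign) if a == j]
            new.append(sum(members) / len(members) if members else centers[j])
        if new == centers:            # converged to a local minimum of SSE
            break
        centers = new
    return centers, assign
```

Lloyd's iteration monotonically decreases the SSE but only reaches a local minimum, which is exactly the weakness that randomized VNS neighborhoods target.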

Open Access Article
Modeling of Isocyanate Synthesis by the Thermal Decomposition of Carbamates
Computation 2020, 8(4), 89; https://doi.org/10.3390/computation8040089 - 18 Oct 2020
Viewed by 392
Abstract
The presented work is devoted to a model of isocyanate synthesis by the thermal decomposition of carbamates. The work describes the existing isocyanate production processes and the main problems in the study of isocyanate synthesis by the thermal decomposition of carbamates, which can be solved using mathematical and computer models. Experiments with carbamates of various structures were carried out. After processing the experimental data, the activation energy and the pre-exponential factor for isocyanate synthesis by the thermal decomposition of carbamates were determined. Then, a mathematical model of the reactor for the thermal decomposition of carbamates was developed using the COMSOL Multiphysics software. For this model, computational experiments under different conditions were carried out. The calculation results were shown to correspond to the experimental ones, so the suggested model can be used in the design of equipment for isocyanate synthesis by the thermal decomposition of carbamates. Full article
(This article belongs to the Section Computational Chemistry)

Open Access Article
A QP Solver Implementation for Embedded Systems Applied to Control Allocation
Computation 2020, 8(4), 88; https://doi.org/10.3390/computation8040088 - 13 Oct 2020
Viewed by 374
Abstract
Quadratic programming problems (QPs) frequently appear in control engineering. For use on embedded platforms, a QP solver implementation in the programming language C is required. A new solver for quadratic optimization problems, EmbQP, is described, which was implemented in readable C code. The algorithm is based on the dual method of Goldfarb and Idnani and solves strictly convex QPs with a positive definite objective function matrix and linear equality and inequality constraints. The algorithm is outlined and some details of an efficient implementation in C are shown, with regard to the requirements of embedded systems. The newly implemented QP solver is demonstrated on the control allocation of an over-actuated vehicle as an application example. Its performance is assessed in a simulation experiment. Full article
(This article belongs to the Section Computational Engineering)
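To give a flavor of the linear algebra inside such a solver: active-set and dual QP methods repeatedly solve a KKT system for the current working set of constraints. The sketch below solves a strictly convex QP with a single equality constraint via its KKT system, using plain Gaussian elimination; it illustrates the principle only and is not the Goldfarb–Idnani algorithm implemented in EmbQP.

```python
# Solve a dense linear system by Gaussian elimination with partial pivoting.
def solve_linear(A, b):
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]   # augmented matrix
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def eq_qp(G, a, c, b):
    # minimize 1/2 x'Gx + a'x  subject to  c'x = b, via the KKT system
    #   [G  c; c' 0] [x; lam] = [-a; b]
    n = len(G)
    K = [G[i][:] + [c[i]] for i in range(n)] + [c[:] + [0.0]]
    rhs = [-ai for ai in a] + [b]
    sol = solve_linear(K, rhs)
    return sol[:n], sol[n]   # primal solution and Lagrange multiplier
```

A dual method like Goldfarb–Idnani starts from the unconstrained minimizer and adds violated constraints one at a time, solving a system of this shape at each step.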

Open Access Article
Complex Modelling and Design of Catalytic Reactors Using Multiscale Approach—Part 2: Catalytic Reactions Modelling with Cellular Automata Approach
Computation 2020, 8(4), 87; https://doi.org/10.3390/computation8040087 - 10 Oct 2020
Viewed by 492
Abstract
The presented work is devoted to a model of the reactions for obtaining 4,4’-Diaminodiphenylmethane (MDA) in the presence of a catalyst. The work describes the importance of studying the MDA production process and the potential of the cellular automata (CA) approach in the modelling of chemical reactions. The work suggests a CA model that makes it possible to predict the kinetic curves of the studied MDA-producing reaction. The developed model was used to carry out computational experiments under different conditions: aniline:formaldehyde:catalyst ratios, stirrer speeds, and reaction temperatures. The results of the computational experiments were compared with the corresponding experimental data. The suggested model was shown to be suitable for predicting the kinetics of the MDA-producing reaction. The proposed CA model can be used with the CFD model suggested in Part 1, allowing the implementation of complex multiscale modeling of a flow catalytic reactor from the molecule level to the level of the entire apparatus. Full article
(This article belongs to the Section Computational Chemistry)
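As a toy illustration of the CA idea (not the paper's MDA model), even a one-dimensional automaton in which neighbouring 'A' and 'B' cells react into product 'C' with some probability produces a kinetic curve of product count versus time step:

```python
import random

# Toy cellular automaton for a bimolecular reaction A + B -> 2C.
# All names and parameters here are illustrative assumptions.
def ca_step(grid, p, rng):
    """Pick one random neighbouring pair; react it with probability p."""
    i = rng.randrange(len(grid) - 1)
    if {grid[i], grid[i + 1]} == {"A", "B"} and rng.random() < p:
        grid[i] = grid[i + 1] = "C"

def simulate(n_cells=100, steps=2000, p=0.5, seed=1):
    rng = random.Random(seed)
    grid = [rng.choice("AB") for _ in range(n_cells)]
    curve = []                       # product count over time
    for _ in range(steps):
        ca_step(grid, p, rng)
        curve.append(grid.count("C"))
    return curve

curve = simulate()  # monotonically non-decreasing kinetic curve
```

Varying the initial A:B ratio, the lattice size, or the reaction probability plays the role of varying reagent ratios, stirring, and temperature in the paper's computational experiments.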

Open AccessArticle
Biomass Steam Gasification: A Comparison of Syngas Composition between a 1-D MATLAB Kinetic Model and a 0-D Aspen Plus Quasi-Equilibrium Model
Computation 2020, 8(4), 86; https://doi.org/10.3390/computation8040086 - 05 Oct 2020
Abstract
Biomass is one of the most widespread and accessible energy sources, and steam gasification is one of the most important processes for converting biomass into combustible gases. However, to date, the differences in the results of the main models used to predict the composition of steam gasification producer gas have not been analyzed in detail. Indeed, gasification, involving heterogeneous reactions, does not reach thermodynamic equilibrium, so thermodynamic models with experimental corrections and kinetic models are mainly applied. Thus, this paper compares a 1-D kinetic model developed in MATLAB, combining hydrodynamics and reaction kinetics, with a 0-D thermodynamic model developed in Aspen Plus, based on Gibbs free energy minimization with a quasi-equilibrium approach calibrated by experimental data. After a comparison of the results of the models against experimental data at two S/B ratios, a sensitivity analysis over a wide range of S/B ratios was performed. The experimental comparison and sensitivity analysis show that the two models provide sufficiently similar results for the main components of the syngas, although with increasing S/B the thermodynamic model shows a greater increase of H2 and CO2 and a smaller decrease of CH4 and CO with respect to the kinetic model and the experimental data. Thus, the thermodynamic model, despite being calibrated by experimental data, is mainly suited to analyzing global plant performance, where this discrepancy matters less from an overall energy and plant perspective, while the more complex kinetic model should be used when a more precise gas composition is needed and, of course, for reactor design. Full article
(This article belongs to the Section Computational Engineering)
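To illustrate the equilibrium side of the comparison (a simplified stand-in for the paper's Aspen Plus model, not its actual implementation), the water-gas shift reaction CO + H2O <-> CO2 + H2, which largely sets the H2/CO split of syngas, can be solved for its equilibrium extent given a temperature-dependent equilibrium constant. Moe's correlation for K(T) is assumed here:

```python
import math

def wgs_equilibrium(co, h2o, T):
    """Extent x (mol) of CO + H2O <-> CO2 + H2 at equilibrium.
    K(T) from Moe's correlation (assumed); equal mole numbers on both
    sides make K dimensionless, so K = x^2 / ((co-x)(h2o-x))."""
    K = math.exp(4577.8 / T - 4.33)
    f = lambda x: x * x - K * (co - x) * (h2o - x)  # increasing in x
    lo, hi = 0.0, min(co, h2o)
    for _ in range(100):                             # bisection
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return lo   # moles of CO2 (= moles of H2) formed

x = wgs_equilibrium(co=1.0, h2o=1.0, T=1073.0)  # roughly 800 degrees C
```

A quasi-equilibrium model applies experimentally calibrated temperature offsets or approach factors to such equilibrium relations, which is why it tracks global trends well but drifts from kinetic predictions for CH4 and CO at high S/B.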

Open AccessArticle
Causal Modeling of Twitter Activity during COVID-19
Computation 2020, 8(4), 85; https://doi.org/10.3390/computation8040085 - 29 Sep 2020
Abstract
Understanding the characteristics of public attention and sentiment is an essential prerequisite for appropriate crisis management during adverse health events. This is even more crucial during a pandemic such as COVID-19, as the primary responsibility for risk management is not centralized in a single institution but distributed across society. While numerous studies have utilized Twitter data in descriptive or predictive contexts during the COVID-19 pandemic, causal modeling of public attention has not been investigated. In this study, we propose a causal inference approach to discover and quantify causal relationships between pandemic characteristics (e.g., the number of infections and deaths) and Twitter activity as well as public sentiment. Our results show that the proposed method successfully captures epidemiological domain knowledge and identifies variables that affect public attention and sentiment. We believe our work contributes to the field of infodemiology by distinguishing events that correlate with public attention from events that cause public attention. Full article
(This article belongs to the Special Issue Computation to Fight SARS-CoV-2 (CoVid-19))
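The abstract does not detail the paper's causal-inference machinery, so as a hedged sketch of the general idea only, a Granger-style check asks whether a lagged driver series (e.g., daily case counts) reduces the prediction error of a target series (e.g., tweet volume) beyond what the target's own history provides:

```python
import numpy as np

def granger_gain(target, driver, lag=1):
    """Ratio of residual sums of squares: autoregression of `target`
    alone vs. autoregression augmented with lagged `driver`.
    A ratio well above 1 suggests the driver carries predictive
    (Granger-causal) information about the target."""
    y = target[lag:]
    X0 = np.column_stack([np.ones(len(y)), target[:-lag]])
    X1 = np.column_stack([X0, driver[:-lag]])
    r0 = y - X0 @ np.linalg.lstsq(X0, y, rcond=None)[0]
    r1 = y - X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]
    return (r0 @ r0) / max(r1 @ r1, 1e-12)

# Synthetic data (illustrative): tweet volume follows case counts by 1 day.
rng = np.random.default_rng(0)
cases = rng.poisson(50, 200).astype(float)
tweets = 3.0 * np.roll(cases, 1) + rng.normal(0.0, 1.0, 200)
tweets[0] = cases[0]      # discard the wrap-around artifact of np.roll
gain = granger_gain(tweets, cases)   # much greater than 1 here
```

Granger-style tests capture predictive precedence rather than true causation; the paper's causal inference approach is aimed precisely at separating the two.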

Open AccessArticle
Development of a Parallel 3D Navier–Stokes Solver for Sediment Transport Calculations in Channels
Computation 2020, 8(4), 84; https://doi.org/10.3390/computation8040084 - 25 Sep 2020
Abstract
We propose a method to parallelize a 3D incompressible Navier–Stokes solver that uses a fully implicit fractional-step method to simulate sediment transport in prismatic channels. The governing equations are transformed into generalized curvilinear coordinates on a non-staggered grid. To develop a parallel version of the code that can run on various platforms, in particular on PC clusters, the code was parallelized using the Message Passing Interface (MPI), one of the most flexible parallel programming libraries. Parallelization is accomplished by "message passing", whereby the code uses explicit library calls to communicate between the individual processors of the machine (e.g., a PC cluster). As part of the parallelization effort, besides the Navier–Stokes solver, the deformable bed module used in simulations with loose beds was also parallelized. The flow, sediment transport, and bathymetry at equilibrium conditions were computed with the parallel and serial versions of the code for the case of a 140-degree curved channel bend of rectangular cross-section. The parallel simulation conducted on eight processors gives exactly the same results as the serial solver, and the parallel version showed good scalability. Full article
(This article belongs to the Section Computational Engineering)
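The core pattern behind this kind of MPI parallelization is domain decomposition with halo (ghost-cell) exchange between neighbouring subdomains. As an illustrative sketch in plain Python (the actual solver uses MPI library calls such as MPI_Sendrecv; the partitioning here is a toy assumption), the pattern looks like this:

```python
def split(u, nprocs):
    """Split a 1-D field among ranks; each rank stores
    [left ghost] + local cells + [right ghost]."""
    n = len(u) // nprocs
    return [[0.0] + u[r * n:(r + 1) * n] + [0.0] for r in range(nprocs)]

def exchange_halos(parts):
    """Refresh each rank's ghost cells from its neighbours' boundary
    cells; in MPI this loop becomes pairwise send/receive calls."""
    for r in range(len(parts)):
        if r > 0:
            parts[r][0] = parts[r - 1][-2]      # from left neighbour
        if r < len(parts) - 1:
            parts[r][-1] = parts[r + 1][1]      # from right neighbour

u = [float(i) for i in range(8)]
parts = split(u, 2)        # rank 0 holds cells 0..3, rank 1 holds 4..7
exchange_halos(parts)      # ghosts now mirror neighbour boundary values
```

After each implicit solver sweep, one such exchange keeps the stencil computations at subdomain edges consistent, which is why the eight-processor run can reproduce the serial results exactly.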
