Computation doi: 10.3390/computation8040085

Authors: Oguzhan Gencoglu Mathias Gruber

Understanding the characteristics of public attention and sentiment is an essential prerequisite for appropriate crisis management during adverse health events. This is even more crucial during a pandemic such as COVID-19, as the primary responsibility of risk management is not centralized in a single institution but distributed across society. While numerous studies have utilized Twitter data in descriptive or predictive contexts during the COVID-19 pandemic, causal modeling of public attention has not been investigated. In this study, we propose a causal inference approach to discover and quantify causal relationships between pandemic characteristics (e.g., number of infections and deaths) and Twitter activity as well as public sentiment. Our results show that the proposed method can successfully capture epidemiological domain knowledge and identify variables that affect public attention and sentiment. We believe our work contributes to the field of infodemiology by distinguishing events that correlate with public attention from events that cause public attention.

Computation doi: 10.3390/computation8040084

Authors: Gokhan Kirkil

We propose a method to parallelize a 3D incompressible Navier–Stokes solver that uses a fully implicit fractional-step method to simulate sediment transport in prismatic channels. The governing equations are transformed into generalized curvilinear coordinates on a non-staggered grid. To develop a parallel version of the code that can run on various platforms, in particular on PC clusters, we parallelized the code using the Message Passing Interface (MPI), one of the most flexible parallel programming libraries. Code parallelization is accomplished by “message passing”, whereby the program explicitly uses library calls to communicate between the individual processors of the machine (e.g., a PC cluster). As part of the parallelization effort, besides the Navier–Stokes solver, the deformable bed module used in simulations with loose beds is also parallelized. The flow, sediment transport, and bathymetry at equilibrium conditions were computed with the parallel and serial versions of the code for the case of a 140-degree curved channel bend of rectangular section. The parallel simulation conducted on eight processors gives exactly the same results as the serial solver. The parallel version of the solver showed good scalability.

Computation doi: 10.3390/computation8030083

Authors: Md. Mamun Molla Preetom Nag Sharaban Thohura Amirul Khan

A modified power-law (MPL) viscosity model of non-Newtonian fluid flow has been used with the multiple-relaxation-time (MRT) lattice Boltzmann method (LBM) and then validated against benchmark problems using graphics processing unit (GPU) parallel computing via the Compute Unified Device Architecture (CUDA) C platform. The MPL model for characterizing non-Newtonian behavior is an empirical correlation that recovers the Newtonian behavior of a non-Newtonian fluid at very low and very high shear rates. A new time unit parameter (λ) governing the flow has been identified; this parameter is the consequence of the length scale induced by the power law. The MPL model is free from singularities at very low or even zero shear rate. The proposed MPL model was first validated on the benchmark lid-driven cavity and channel flows. The model was then applied to shear-thinning and shear-thickening fluid flows through a backward-facing step at relatively low Reynolds numbers, Re = 100–400. In the case of shear-thinning fluids (n = 0.5), laminar-to-transitional flow arises for Re ≥ 300, and the large vortex breaks into several small vortices. The numerical results are presented in terms of the velocity distribution, streamlines, and the reattachment lengths.
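
The abstract does not reproduce the MPL correlation itself; the sketch below shows one common form of a modified power law with Newtonian plateaus at the extremes of the shear rate. The parameter values (m, n, mu_0, mu_inf) are illustrative assumptions, not the paper's.

```python
def mpl_viscosity(shear_rate, m=1.0, n=0.5, mu_0=10.0, mu_inf=0.1):
    """Modified power-law (MPL) apparent viscosity (sketch).

    Power-law behaviour m * g**(n-1) at intermediate shear rates g,
    with Newtonian plateaus mu_0 / mu_inf at very low / very high
    shear rates, so the model has no singularity at zero shear rate.
    Cut-offs follow from continuity: m * g**(n-1) = mu.
    Defaults describe a shear-thinning fluid (n < 1).
    """
    gamma_low = (mu_0 / m) ** (1.0 / (n - 1.0))     # start of power-law region
    gamma_high = (mu_inf / m) ** (1.0 / (n - 1.0))  # end of power-law region
    if shear_rate <= gamma_low:
        return mu_0
    if shear_rate >= gamma_high:
        return mu_inf
    return m * shear_rate ** (n - 1.0)
```

With these defaults the viscosity is 10.0 at rest, follows the power law between shear rates 0.01 and 100, and plateaus at 0.1 beyond that, which is the singularity-free behaviour the abstract highlights.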

Computation doi: 10.3390/computation8030082

Authors: Chang Phang Yoke Teng Toh Farah Suraya Md Nasrudin

In this work, we derive an operational matrix using poly-Bernoulli polynomials. These polynomials generalize the Bernoulli polynomials via a generating function involving the polylogarithm function. We first show some new properties of these poly-Bernoulli polynomials; we then derive a new operational matrix based on poly-Bernoulli polynomials for the Atangana–Baleanu derivative, as well as a delay operational matrix based on poly-Bernoulli polynomials. The error bound of this new method is shown. We apply this poly-Bernoulli operational matrix to solve fractional delay differential equations with variable coefficients. The numerical examples show that this method is easy to use and yet able to give accurate results.

Computation doi: 10.3390/computation8030081

Authors: Martine Castellà-Ventura Alain Moissette Emile Kassab

The Si/Al ratio and the confinement effects of the zeolite framework on the energetics and vibrational frequencies of pyridine and 4,4′-bipyridine adsorbed on Brønsted acid sites in the straight channel of H-ZSM-5 are investigated by DFT calculations at the B3LYP and M06-2X+D3 levels. The straight channel of H-ZSM-5 is simulated by a cluster of 32 tetrahedral centers covering the intersection between the straight and zigzag channels. Pyridine and 4,4′-bipyridine adsorption at two different sites, in the intersection (open region) and/or in the narrow region situated between two intersections (closed region), is studied. For two Si/Al ratios (31, 15), the ion pair complexes formed by proton transfer upon pyridine and 4,4′-bipyridine adsorption in the open region and, for the first time, in the closed region are characterized. Our results indicate that: (i) the stability of all adsorption complexes is essentially governed by dispersive van der Waals interactions, and the open region is energetically more favorable than the closed region owing to the predominance of the dispersive interactions over the steric constraints exerted by the confinement effects; (ii) as the Al centers are sufficiently spaced apart, the Si/Al ratio does not influence the pyridine adsorption energy, but significantly affects the adsorption energies and the relative stability of the 4,4′-bipyridine complexes; (iii) neither the Si/Al ratio nor confinement significantly influences the pyridine and 4,4′-bipyridine vibrational frequencies within their complexes.

Computation doi: 10.3390/computation8030080

Authors: Christos Kalyvas Manolis Maragoudakis

One of the most common tasks nowadays in big data environments is the need to classify large amounts of data. There are numerous classification models designed to perform best in different environments and datasets, each with its advantages and disadvantages. However, when dealing with big data, their performance is significantly degraded because they are not designed for, or even capable of, handling very large datasets. The current approach is based on a novel proposal: exploiting the dynamics of skyline queries to efficiently identify the decision boundary and classify big data. A comparison against the popular k-nearest neighbor (k-NN), support vector machine (SVM) and naïve Bayes classification algorithms shows that the proposed method is faster than k-NN and SVM. The novelty of this method lies in the fact that only a small number of computations is needed to make a prediction, while its full potential is revealed on very large datasets.
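
The classifier itself is not detailed in the abstract, but the skyline-query primitive it builds on, the set of points not dominated by any other point, can be sketched in a few lines (minimizing both dimensions here; the paper's actual decision-boundary construction is more involved):

```python
def skyline(points):
    """Skyline (Pareto-optimal set) of 2D points, smaller-is-better
    in both dimensions.  A point p is dominated if some point q is
    <= p in both coordinates and strictly < in at least one; a point
    never dominates itself, so no self-exclusion is needed."""
    def dominates(q, p):
        return q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])
    return [p for p in points
            if not any(dominates(q, p) for q in points)]
```

Only the few non-dominated points survive, which is why skyline-based prediction needs so few comparisons relative to the dataset size.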

Computation doi: 10.3390/computation8030079

Authors: Ibrahim Ahmad Muhammad Kanikar Muangchoo Auwal Muhammad Ya’u Sabo Ajingi Ibrahim Yahaya Muhammad Ibrahim Dauda Umar Abubakar Bakoji Muhammad

Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) emerged as a severe threat to global public health in late 2019. Nevertheless, no approved medicines have been found to inhibit the virus effectively. Anti-malarial and antiviral medicines have been reported to target the SARS-CoV-2 virus. This paper chose eight natural eucalyptus compounds to study their binding interactions with the SARS-CoV-2 main protease (Mpro) to assess their potential for becoming herbal drugs against the new SARS-CoV-2 virus. In silico methods such as molecular docking, molecular dynamics (MD) simulations, and Molecular Mechanics Poisson–Boltzmann Surface Area (MM/PBSA) analysis were used to examine interactions at the atomistic level. The results of molecular docking indicate that Mpro has good binding energy for all compounds studied. The three docked compounds with the highest binding energies, α-gurjunene, aromadendrene, and allo-aromadendrene, at −7.34 kcal/mol (−30.75 kJ/mol), −7.23 kcal/mol (−30.25 kJ/mol), and −7.17 kcal/mol (−29.99 kJ/mol) respectively, were simulated with the GROningen MAchine for Chemical Simulations (GROMACS) to measure the molecular interactions between Mpro and the inhibitors in detail. Our MD simulation results show that α-gurjunene has the strongest binding energy of −20.37 kcal/mol (−85.21 kJ/mol), followed by aromadendrene with −18.99 kcal/mol (−79.45 kJ/mol), and finally allo-aromadendrene with −17.91 kcal/mol (−74.95 kJ/mol). The findings indicate that eucalyptus compounds may be used to inhibit the Mpro enzyme as drug candidates. This is the first computational analysis that gives insight into the potential role of structural flexibility during interactions with eucalyptus compounds. It also sheds light on the structural design of new herbal medicinal products against Mpro.

Computation doi: 10.3390/computation8030078

Authors: Claudia Germoso Giacomo Quaranta Jean Louis Duval Francisco Chinesta

Mesh-based solution of 3D models defined in plate or shell domains remains a challenging issue nowadays, owing to the fact that the required meshes generally involve too many degrees of freedom. When the considered problem involves parameters and one aims at computing its parametric solution, the difficulty is twofold. The authors proposed, in some of their former works, strategies for addressing both issues; however, these suffer from deep intrusiveness. This paper proposes a totally novel approach that, starting from any existing discretization, is able to reduce the 3D parametric complexity to that of a simple 2D calculation. Thus, the 3D complexity is reduced to 2D, the parameters are included naturally in the solution, and the procedure is applied to a discretization performed with standard software, which taken together enables real-time engineering.

Computation doi: 10.3390/computation8030077

Authors: Giulia Culletta Maria Rita Gulotta Ugo Perricone Maria Zappalà Anna Maria Almerico Marco Tutone

To date, the SARS-CoV-2 infectious disease, named COVID-19 by the World Health Organization (WHO) in February 2020, has caused millions of infections and hundreds of thousands of deaths. Despite the scientific community's efforts, there are currently no approved therapies for treating this coronavirus infection. The process of new drug development is expensive and time-consuming, so drug repurposing may be the ideal solution to fight the pandemic. In this paper, we selected the proteins encoded by SARS-CoV-2 and, using homology modeling, identified high-quality models of these proteins. A structure-based pharmacophore modeling study was performed to identify the pharmacophore features for each target. The pharmacophore models were then used to perform a virtual screening against the DrugBank library (investigational, approved and experimental drugs). Potential inhibitors were identified for each target using XP docking and induced fit docking. MM-GBSA was also performed to better prioritize potential inhibitors. This study provides important new insight into the crucial binding hot spots usable for further studies on COVID-19. Our results can be used to guide supervised virtual screening of large commercially available libraries.

Computation doi: 10.3390/computation8030076

Authors: Gilberto González-Parra Miguel Díaz-Rodríguez Abraham J. Arenas

In this paper, we study and explore two control strategies to decrease the spread of Zika virus in the human and mosquito populations. The control strategies that we consider in this study are awareness and spraying campaigns. We solve several optimal control problems relying on a mathematical epidemic model of Zika that considers both human and mosquito populations. The first control strategy is broad and includes information campaigns, encouraging people to use bed netting, wear long-sleeved shirts, or take similar protective actions. The second control is more specific and relies on spraying insecticides. The control system relies on a Zika mathematical model with control functions. To solve the optimal control problem, we use Pontryagin's maximum principle, which is numerically solved as a boundary value problem. For the mathematical model of the Zika epidemic, we use parameter values extracted from real data from an outbreak in Colombia. We study the effect of the costs related to the controls and infected populations. These costs are important in real life since they can change the outcomes and recommendations for health authorities dramatically. Finally, we explore different options regarding which control measures are more cost-efficient for society.

Computation doi: 10.3390/computation8030075

Authors: Angel E. Rodriguez-Fernandez Bernardo Gonzalez-Torres Ricardo Menchaca-Mendez Peter F. Stadler

MAX-CUT is one of the well-studied NP-hard combinatorial optimization problems. It can be formulated as an integer quadratic programming problem and admits a simple relaxation obtained by replacing the integer “spin” variables x_i by unit vectors v_i. The Goemans–Williamson rounding algorithm assigns each solution vector of the relaxed quadratic program to an integer spin depending on the sign of the scalar product v_i · r with a random vector r. Here, we investigate whether better graph cuts can be obtained by instead using a more sophisticated clustering algorithm. We answer this question affirmatively: different initializations of k-means and k-medoids clustering produce better cuts for the graph instances of the best-known benchmark for MAX-CUT. In particular, we found a strong correlation between cluster quality and cut weight during the evolution of the clustering algorithms. Finally, since in general the maximal cut weight of a graph is not known beforehand, we derived instance-specific lower bounds for the approximation ratio, which indicate how close a solution is to the global optimum for a particular instance. For the graphs in our benchmark, the instance-specific lower bounds significantly exceed the Goemans–Williamson guarantee.
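
A minimal sketch of the rounding step described above (the solved relaxation vectors are assumed given; the helper names are illustrative, not the paper's code):

```python
import random

def hyperplane_round(vectors, seed=0):
    """Goemans-Williamson rounding step: draw a random Gaussian
    vector r and assign each solution vector v_i the integer spin
    sign(v_i . r)."""
    rng = random.Random(seed)
    r = [rng.gauss(0.0, 1.0) for _ in vectors[0]]
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return [1 if dot(v, r) >= 0 else -1 for v in vectors]

def cut_weight(edges, spins):
    """Weight of the cut induced by the spins: sum the weights of
    edges (i, j, w) whose endpoints received opposite spins."""
    return sum(w for i, j, w in edges if spins[i] != spins[j])
```

Replacing `hyperplane_round` by a clustering of the vectors into two groups (k-means with k = 2, say) is exactly the substitution the paper investigates.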

Computation doi: 10.3390/computation8030074

Authors: Silvia Mirri Giovanni Delnevo Marco Roccetti

The Nobel laureate Niels Bohr once said: “Predictions are very difficult, especially if they are about the future”. Nonetheless, models that can forecast future COVID-19 outbreaks are receiving special attention from policymakers and health authorities, with the aim of putting control measures in place before infections begin to increase. Two main problems emerge, however. First, there is no general agreement on which kind of data should be registered for judging the resurgence of the virus (e.g., infections, deaths, percentage of hospitalizations, reports from clinicians, signals from social media). Moreover, all these data suffer from common defects, linked to their reporting delays and to the uncertainties in the collection process. Second, the complex nature of COVID-19 outbreaks makes it difficult to understand whether traditional epidemiological models, such as susceptible-infectious-recovered (SIR) models, are more effective for timely prediction of an outbreak than alternative computational models. Well aware of the complexity of this forecasting problem, we propose here an innovative metric for predicting COVID-19 diffusion, based on the hypothesis that a relation exists between the spread of the virus and the presence in the air of particulate pollutants such as PM2.5, PM10, and NO2. Drawing on the recent statement of 239 experts who claimed that this virus can be airborne, and further considering that particulate matter may favor this airborne route, we developed a machine learning (ML) model that was trained with: (i) all the COVID-19 infections that occurred in the Italian region of Emilia-Romagna, one of the most polluted areas in Europe, in the period February–July 2020; (ii) the daily values of all the particulates measured in the same period and in the same region; and (iii) the chronology according to which restrictions were imposed by the Italian Government on human activities.
Our ML model was then subjected to a classic ten-fold cross-validation procedure that returned a promising 90% accuracy value. Finally, the model was used to predict a possible resurgence of the virus in all nine provinces of Emilia-Romagna in the period September–December 2020. To make those predictions, the inputs to our ML model were the daily measurements of the aforementioned pollutants registered in the periods September–December of 2017, 2018, and 2019, along with the hypothesis that the mild containment measures taken in Italy in the so-called Phase 3 are obeyed. At the time we write this article, we cannot yet confirm the precision of our predictions. Nevertheless, we are projecting a scenario based on an original hypothesis that makes our COVID-19 prediction model unique in the world. Its accuracy will soon be judged by history, and this, too, is science at the service of society.
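
The ten-fold cross-validation protocol mentioned above is generic and can be sketched independently of the model; `train_fn` below is a hypothetical stand-in for fitting the authors' ML model, not their actual pipeline:

```python
def k_fold_cross_validate(samples, labels, train_fn, k=10):
    """Plain k-fold cross-validation: hold out each fold once,
    train on the remaining samples, and average the held-out
    accuracy over the k folds."""
    folds = [list(range(i, len(samples), k)) for i in range(k)]
    accuracies = []
    for held_out in folds:
        hold = set(held_out)
        train_idx = [i for i in range(len(samples)) if i not in hold]
        model = train_fn([samples[i] for i in train_idx],
                         [labels[i] for i in train_idx])
        correct = sum(model(samples[i]) == labels[i] for i in held_out)
        accuracies.append(correct / len(held_out))
    return sum(accuracies) / k
```

The reported 90% figure would correspond to the averaged held-out accuracy returned by a procedure of this shape.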

Computation doi: 10.3390/computation8030073

Authors: Dmitriy Klyuchinskiy Nikita Novikov Maxim Shishlenin

We investigate a mathematical model of 2D acoustic wave propagation in a heterogeneous domain. The hyperbolic first-order system of partial differential equations is considered and solved by the Godunov method of first-order approximation. This is the direct problem, with appropriate initial and boundary conditions. We then solve the coefficient inverse problem (IP) of recovering the density. The IP is reduced to an optimization problem, which is solved by the gradient descent method. The quality of the IP solution depends strongly on the quantity of IP data and the positions of the receivers. We introduce a new approach for computing the gradient in the descent method in order to use as much IP data as possible on each iteration of the descent.
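
The reduction of an inverse problem to optimization solved by gradient descent can be illustrated with a toy scalar version; the quadratic forward map below is an assumed stand-in for the acoustic solver, and all names are hypothetical:

```python
def recover_coefficient(forward, grad_forward, data, c0, lr=0.01, steps=500):
    """Gradient descent on the misfit J(c) = 0.5 * (forward(c) - data)**2,
    the optimization form to which a coefficient inverse problem is
    reduced.  grad J = (forward(c) - data) * forward'(c)."""
    c = c0
    for _ in range(steps):
        residual = forward(c) - data
        c -= lr * residual * grad_forward(c)
    return c

# Hypothetical scalar forward map F(c) = c**2 standing in for the
# acoustic solver; recover c = 3 from the observation F(3) = 9.
c_est = recover_coefficient(lambda c: c * c, lambda c: 2.0 * c, 9.0, c0=1.0)
```

In the paper the forward map is the Godunov solve and the data come from the receivers, which is why the gradient computation and the amount of IP data per iteration matter so much.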

Computation doi: 10.3390/computation8030072

Authors: Amit Kumar Verma Mukesh Kumar Rawani Ravi P. Agarwal

In this paper, we propose a seventh-order weakly L-stable time integration scheme. In the derivation of the scheme, we use an explicit backward Taylor polynomial approximation of sixth order and a Hermite interpolation polynomial approximation of fifth order. We apply this formula in vector form to solve Burgers' equation, a simplified form of the Navier–Stokes equation. The literature survey reveals that several methods fail to capture the solutions in the presence of inconsistency and for small values of the viscosity, e.g., 10^−3, whereas the present scheme produces highly accurate results. To check the effectiveness of the scheme, we examine it on six test problems and generate several tables and figures. All of the calculations are executed with the help of Mathematica 11.3. The stability and convergence of the scheme are also discussed.

Computation doi: 10.3390/computation8030071

Authors: Miroslava Mikusova Jamshid Abdunazarov Joanna Zukowska Juraj Jagelcak

Nowadays, all cities face an acute shortage of parking spaces. The number of vehicles is constantly increasing, not only in big cities and megacities but also in small towns, and there are not enough parking places: the pace of solving the problem is several times slower than the growth rate of vehicle ownership among citizens. This paper is dedicated to determining the optimal size of a parking place for design vehicles in a parking space as an element of roads. Using the example of passenger cars and trucks, the optimal dimensions of parking places are presented. The results of the research on the dimensioning of parking spaces serve as recommendations and can be used for the design of transportation infrastructure. Based on the research, the authors introduce the term “design vehicle” and provide its definition. They also determine optimal parameters for each design vehicle and recommend a special template for designing parking places.

Computation doi: 10.3390/computation8030070

Authors: YM Tang Ka-Yin Chau Wenqiang Li TW Wan

Time series forecasting technology and related applications for stock price forecasting are increasingly receiving attention. These approaches can be a great help in making decisions based on historical information to predict possible future situations. This research aims at establishing forecasting models with deep learning technology for share price prediction in the logistics industry. The historical share price data of five logistics companies in Hong Kong were collected and trained with various time series forecasting algorithms. Based on the Mean Absolute Percentage Error (MAPE) results, we adopted Long Short-Term Memory (LSTM) as the methodology to further predict share prices. The proposed LSTM model was trained with different hyperparameters and validated by the Root Mean Square Error (RMSE). In this study, we found various optimal parameters for the proposed LSTM model for six different logistics stocks in Hong Kong, and the best RMSE result was 0.43%. Finally, the LSTM model can be used to forecast economic recessions through the prediction of these stocks.
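
The two selection metrics named above, MAPE for choosing the algorithm and RMSE for validating hyperparameters, are standard; a minimal reference implementation:

```python
import math

def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent (used here for
    choosing among forecasting algorithms)."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root Mean Square Error (used here for hyperparameter
    validation).  Same units as the series being forecast."""
    return math.sqrt(sum((a - p) ** 2
                         for a, p in zip(actual, predicted)) / len(actual))
```

MAPE is scale-free (a percentage), which makes it convenient for comparing models across stocks with different price levels, while RMSE penalizes large individual errors more heavily.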

Computation doi: 10.3390/computation8030069

Authors: Gus I. Argyros Michael I. Argyros Samundra Regmi Ioannis K. Argyros Santhosh George

The method of discretization is used to solve nonlinear equations involving Banach-space-valued operators using Lipschitz or Hölder constants. But these constants cannot always be found. That is why we present results using ω-continuity conditions on the Fréchet derivative of the operator involved. In this way, we extend the applicability of the discretization technique. It turns out that if we specialize ω-continuity to the Lipschitz or Hölder case, our new results also improve those in the literature. Our analysis includes tighter upper error bounds on the distances involved.

Computation doi: 10.3390/computation8030068

Authors: Kizito Muzhinji Stanford Shateyi

In this paper, we consider the numerical solution of optimal control problems governed by an elliptic partial differential equation. Numerically tackling these problems with the finite element method produces a large, coupled block algebraic system of equations of saddle point form. These systems are high-dimensional, block, sparse, indefinite and ill-conditioned. Their solution is a major computational task and poses a great challenge for iterative techniques; they therefore require specialised methods involving preconditioning strategies. The preconditioned solvers must have good convergence properties independent of changes in the discretisation and problem parameters. Most well-known preconditioned solvers converge independently of the mesh size, but not as the regularisation parameter decreases. This work proposes and extends the formulation of preconditioners that yield optimal performance of the iterative solvers independent of both the decreasing mesh size and the regularisation parameter. We solve the indefinite system using the preconditioned minimum residual method. The main task of this work is to analyse a 3 × 3 block diagonal preconditioner based on an approximation of the Schur complement obtained from the matrix system. The eigenvalue distributions of both the proposed Schur complement approximation and the preconditioned system are investigated, since the clustering of eigenvalues indicates the effectiveness of a preconditioner in accelerating an iterative solver. This is done in order to create fast, efficient solvers for such problems. Numerical experiments confirm the effectiveness and performance of the proposed approximation compared to other approximations and demonstrate that it can be used in practice. The solver is robust and optimal with respect to changes in both the mesh size and the regularisation parameter.

Computation doi: 10.3390/computation8030067

Authors: Riccardo Longo Alessandro Sebastian Podda Roberto Saia

Currently, an increasing number of third-party applications exploit the Bitcoin blockchain to immutably store tamper-proof records of their executions. For this purpose, they leverage the few extra bytes available for encoding custom metadata in Bitcoin transactions. A sequence of records of the same application can thus be abstracted as a stand-alone subchain inside the Bitcoin blockchain. However, several existing approaches do not make any assumptions about the consistency of their subchains, either (i) neglecting the possibility that this sequence of messages can be altered, mainly due to unhandled concurrency, network malfunctions, application bugs, or malicious users, or (ii) giving only weak guarantees about their security. To tackle this issue, in this paper we propose an improved version of a consensus protocol formalized in our previous work, built on top of the Bitcoin protocol, that incentivizes third-party nodes to consistently extend their subchains. In addition, we perform an extensive analysis of this protocol, both defining its properties and presenting some real-world attack scenarios, to show how its specific design choices and parameter configurations can be crucial to preventing malicious practices.

Computation doi: 10.3390/computation8030066

Authors: Suyash Verma Arman Hemmati

The wake dynamics of sharp-edged rigid panels are examined using the Overset Grid Assembly (OGA) method available in OpenFOAM, an open-source platform. The OGA method is an efficient solution technique based on the overlap of one or more moving grids on a stationary background grid. Five test cases for a stationary panel at different angles of attack are compared with available computational data and show good agreement in predicting global flow variables, such as the mean drag. The models also provide accurate results in predicting the main flow features and structures. The flow past a pitching square panel is also investigated at two Reynolds numbers. The study of the surface pressure distribution and shear forces acting on the panel suggests that a higher streamwise pressure gradient exists in the high-Reynolds-number case, which leads to an increase in lift, whereas the highly viscous effects at low Reynolds number lead to increased drag production. The wake visualizations for the stationary and pitching cases show that the vortex shedding and wake characteristics are captured accurately by the OGA method.

Computation doi: 10.3390/computation8030065

Authors: Winter Sinkala

Construction of conservation laws of differential equations is an essential part of the mathematical study of differential equations. In this paper we derive, using two approaches, general formulas for finding conservation laws of the Black–Scholes equation. In one approach, we exploit the nonlinear self-adjointness and Lie point symmetries of the equation, while in the other we use the multiplier method. We present illustrative examples and also show how every solution of the Black–Scholes equation leads to a conservation law of the same equation.

Computation doi: 10.3390/computation8030064

Authors: Shengkun Xie Anna T. Lawniczak Junlin Hao

A great deal of effort has been devoted to the mathematical modelling and simulation of complex systems for a better understanding of their dynamics and control. Modelling and analysing the outcomes of computer simulations are also important aspects of studying the behaviour of complex systems. This often involves both traditional and modern statistical approaches, including multiple linear regression, generalized linear models, and non-linear regression models such as artificial neural networks. In this work, we first conduct a simulation study of the agents' decisions when learning to cross a cellular-automaton-based highway, and then model the simulation data using artificial neural networks. Our research shows that artificial neural networks are capable of capturing the functional relationships between the input and output variables of our simulation experiments, and that they outperform the classical modelling approaches. Variable importance measure techniques can consistently identify the most dominant factors affecting the response variables, which helps us better understand how decision-making by the autonomous agents is affected by the input factors. The significance of this work lies in extending the investigation of complex systems from mathematical modelling and computer simulations to the analysis and modelling of the data obtained from the simulations using advanced statistical models.
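
The abstract does not name the specific variable importance measure used; one widely used option, permutation importance, can be sketched as follows (the model and data here are hypothetical, not the paper's):

```python
import random

def permutation_importance(model, X, y, feature, seed=0):
    """Increase in mean squared error after shuffling one input
    column; a large increase means the model relies on that
    feature, a near-zero increase means it does not."""
    def mse(rows):
        return sum((model(r) - t) ** 2 for r, t in zip(rows, y)) / len(rows)
    col = [row[feature] for row in X]
    random.Random(seed).shuffle(col)
    X_perm = [row[:feature] + [v] + row[feature + 1:]
              for row, v in zip(X, col)]
    return mse(X_perm) - mse(X)
```

Ranking features by this quantity is one way to identify the dominant input factors affecting a response variable, as described above.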

Computation doi: 10.3390/computation8030063

Authors: Uygulana Gavrilieva Maria Vasilyeva Eric T. Chung

In this work, we consider elastic wave propagation in fractured media. The mathematical model is described by the Helmholtz problem related to wave propagation, with specific interface conditions (the Linear Slip Model, LSM) on the fractures, in the frequency domain. For the numerical solution, we construct a fine grid that resolves all fracture interfaces at the grid level and construct the approximation using a finite element method. We use a discontinuous Galerkin method for the approximation in space, which helps to weakly impose the interface conditions on the fractures. Such an approximation leads to a large system of equations and is computationally expensive. In this work, we construct a coarse grid approximation for an effective solution using the Generalized Multiscale Finite Element Method (GMsFEM). We construct and compare two types of multiscale methods: the Continuous Galerkin Generalized Multiscale Finite Element Method (CG-GMsFEM) and the Discontinuous Galerkin Generalized Multiscale Finite Element Method (DG-GMsFEM). Multiscale basis functions are constructed by solving local spectral problems in each local domain to extract the dominant modes of the local solution. In CG-GMsFEM, we construct continuous multiscale basis functions that are defined in the local domains associated with the coarse grid nodes, each containing four coarse grid cells for the structured quadratic coarse grid. The multiscale basis functions in DG-GMsFEM are discontinuous and defined in each coarse grid cell. The results of the numerical solution of the two-dimensional Helmholtz equation are presented for CG-GMsFEM and DG-GMsFEM for different numbers of multiscale basis functions.

Computation doi: 10.3390/computation8030062

Authors: Ravi P. Agarwal

Following the corrected chronology of ancient Hindu scientists and mathematicians, in this article a sincere effort is made to report the origin of Pythagorean triples. We account for the development of these triples from the period of their origin and list some astonishing known directions. Although there is not much that is new here for researchers in this field, we genuinely hope students and teachers of mathematics will enjoy this article and search for new directions and patterns.
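
Euclid's classical formula, which generates Pythagorean triples and appears throughout this history, can be stated in a few lines:

```python
def euclid_triple(m, n):
    """Euclid's formula: for integers m > n > 0, the numbers
    (m**2 - n**2, 2*m*n, m**2 + n**2) form a Pythagorean triple,
    since (m^2 - n^2)^2 + (2mn)^2 = (m^2 + n^2)^2."""
    assert m > n > 0
    return (m * m - n * n, 2 * m * n, m * m + n * n)
```

For example, (m, n) = (2, 1) yields the familiar (3, 4, 5); choosing m and n coprime and of opposite parity yields exactly the primitive triples.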

]]>Computation doi: 10.3390/computation8030061

Authors: Kifayat Ullah Junaid Ahmad Manuel de la Sen

We introduce a very general class of generalized non-expansive maps. This new class properly includes the classes of Suzuki non-expansive maps, Reich&ndash;Suzuki type non-expansive maps, and generalized &alpha;-non-expansive maps. We establish some basic properties and a demiclosedness principle for this class of maps. After this, we establish existence and convergence results for this class of maps in the context of uniformly convex Banach spaces and compare several well-known iterative algorithms.

]]>Computation doi: 10.3390/computation8030060

Authors: Orchidea Maria Lecian

The optical equivalence principle is analyzed with regard to the possibility of describing unbounded states, and suitable approximations are calculated for highly energetic phenomena. Among these possibilities, the relevance to laser fields, interferometers, and optomechanical systems is examined. Their suitability for research in General Relativity, Cosmology, and High-Energy Physics is outlined.

]]>Computation doi: 10.3390/computation8020059

Authors: Giovanni Delnevo Silvia Mirri Marco Roccetti

As we prepare to emerge from an extensive and unprecedented lockdown period, due to the COVID-19 infection that hit the Northern regions of Italy with Europe&rsquo;s highest death toll, it becomes clear that what went wrong rests upon a combination of demographic, healthcare, political, business, organizational, and climatic factors that are outside our scientific scope. Nonetheless, looking at this problem from a patient&rsquo;s perspective, it is indisputable that the risk factors associated with the development of the virus disease include older age, a history of smoking, hypertension, and heart disease. While several studies have already shown that many of these conditions can also be favored by protracted exposure to air pollution, there has recently been a surge of negative commentary against authors who have correlated the fatal consequences of COVID-19 (also) with exposure to specific air pollutants. Well aware that understanding the real connection between the spread of this fatal virus and air pollutants would require many further investigations at a level appropriate to the scale of this phenomenon (e.g., biological, chemical, and physical), we propose the results of a study in which the series of the daily values of PM2.5, PM10, and NO2 were considered over time, while the Granger causality statistical hypothesis test was used to determine the presence of a possible correlation with the series of new daily COVID-19 infections, in the period February&ndash;April 2020, in Emilia-Romagna. Results taken both before and after the governmental lockdown decisions show a clear correlation, although strictly in a Granger causality sense. Beyond the question of the real extent of such a correlation, our scientific efforts aim at reinvigorating the debate on a relevant case that should not remain unsolved or uninvestigated.
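
The Granger test asks, in essence, whether past values of one series improve the prediction of another beyond what the other series&rsquo; own past already provides. The toy sketch below (ours, not the study&rsquo;s pipeline, which applied a proper statistical test to daily pollutant and infection series) illustrates that idea with a synthetic pair of series and ordinary least squares:

```python
# Toy illustration of the idea behind a Granger-causality check:
# does adding lagged values of series x improve the prediction of y?

def solve(a, b):
    """Solve the linear system a.w = b by Gaussian elimination with pivoting."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (m[r][n] - sum(m[r][c] * w[c] for c in range(r + 1, n))) / m[r][r]
    return w

def ssr(y, X):
    """Sum of squared residuals of the least-squares fit y ~ X.w."""
    n = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(n)] for i in range(n)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(n)]
    w = solve(XtX, Xty)
    return sum((yi - sum(wi * xi for wi, xi in zip(w, row))) ** 2
               for yi, row in zip(y, X))

# Synthetic data where x drives y with a one-step lag.
x = [(3 * t) % 7 for t in range(60)]
y = [0.0] + [0.5 * x[t - 1] + 0.1 for t in range(1, 60)]

# Restricted model: y_t from y_{t-1} only; unrestricted adds x_{t-1}.
restricted = ssr(y[1:], [[1.0, y[t - 1]] for t in range(1, 60)])
unrestricted = ssr(y[1:], [[1.0, y[t - 1], x[t - 1]] for t in range(1, 60)])
print(restricted > unrestricted)  # True: lagged x clearly helps predict y
```

In the full test, the drop in residual error is turned into an F-statistic; here the raw comparison of residual sums already shows the directionality being probed.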

]]>Computation doi: 10.3390/computation8020058

Authors: Valentin Alekseev Qili Tang Maria Vasilyeva Eric T. Chung Yalchin Efendiev

In this paper, we consider a coupled system of equations that describes a simplified magnetohydrodynamics (MHD) problem in perforated domains. We construct a fine grid that resolves the perforations on the grid level in order to use a traditional approximation. For the solution on the fine grid, we construct an approximation using the mixed finite element method. To reduce the size of the fine grid system, we develop a Mixed Generalized Multiscale Finite Element Method (Mixed GMsFEM). The method differs from existing approaches and requires some modifications to represent the flow and magnetic fields. Numerical results are presented for a two-dimensional model problem in perforated domains. This model problem is a special case of the general 3D problem. We study the influence of the number of multiscale basis functions on the accuracy of the method and show that the proposed method provides good accuracy with few basis functions.

]]>Computation doi: 10.3390/computation8020057

Authors: Winter Sinkala Tembinkosi F. Nkalashe

Two equations are considered in this paper&mdash;the Black&ndash;Scholes equation and an equation that models the spatial dynamics of a brain tumor under some treatment regime. We shall call the latter equation the tumor equation. The Black&ndash;Scholes and tumor equations are partial differential equations that arise in very different contexts. The tumor equation is used to model the propagation of brain tumors, while the Black&ndash;Scholes equation arises in financial mathematics as a model for the fair price of a European option and other related derivatives. We use Lie symmetry analysis to establish a mapping between them and hence deduce solutions of the tumor equation from solutions of the Black&ndash;Scholes equation.

]]>Computation doi: 10.3390/computation8020056

Authors: Ikha Magdalena Muh Fadhel Atras Leo Sembiring M. A. Nugroho Roi Solomon B. Labay Marian P. Roque

In this paper, we investigate the wave damping mechanism caused by the presence of submerged bars using the Shallow Water Equations (SWEs). We first solve these equations for the single bar case using separation of variables to obtain the analytical solution for the wave elevation over a rectangular bar wave reflector with specific heights and lengths. From the analytical solution, we derive the wave reflection and transmission coefficients and determine the optimal height and length of the bar that would give the smallest transmission coefficient. We also measure the effectiveness of the bar by comparing the amplitude of the incoming wave before and after the wave passes the submerged bar, and extend the result to the case of n-submerged bars. We then construct a numerical scheme for the SWEs based on the finite volume method on a staggered grid to simulate the propagation of a monochromatic wave as it passes over a single submerged rectangular bar. For validation, we compare the transmission coefficient values obtained from the analytical solution, numerical scheme, and experimental data. The result of this paper may be useful in wave reflector engineering and design, particularly that of rectangle-shaped wave reflectors, as it can serve as a basis for designing bar wave reflectors that reduce wave amplitudes optimally.
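
For intuition on the reflection and transmission analysis, the classical linear long-wave result for a single abrupt depth change can be sketched as follows. These are textbook step formulas, not necessarily the exact coefficients the paper derives for a finite-length rectangular bar (where the two bar edges also interfere):

```python
import math

def step_coefficients(h1, h2, g=9.81):
    """Amplitude reflection and transmission coefficients for a linear
    long wave crossing an abrupt depth change h1 -> h2; c = sqrt(g*h)
    is the shallow-water wave speed on each side."""
    c1, c2 = math.sqrt(g * h1), math.sqrt(g * h2)
    R = (c1 - c2) / (c1 + c2)
    T = 2 * c1 / (c1 + c2)
    return R, T

R, T = step_coefficients(h1=2.0, h2=0.5)  # wave running onto a submerged bar
print(round(R, 3), round(T, 3))           # prints: 0.333 1.333

# Continuity of the surface elevation at the step: 1 + R = T
assert abs(1 + R - T) < 1e-12
# Energy flux balance: R^2 + (c2/c1) * T^2 = 1
```

A shallower bar (smaller h2) increases R, which is why the bar height enters the optimization of the transmission coefficient described above.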

]]>Computation doi: 10.3390/computation8020055

Authors: Stanford Shateyi Hillary Muzara

A thorough and detailed investigation of the unsteady free convection boundary layer flow of an incompressible, electrically conducting Williamson fluid over a stretching sheet saturated with a porous medium has been carried out numerically. The governing partial differential equations are transformed into a system of non-linear dimensionless ordinary differential equations by employing suitable similarity transformations. The resulting equations are then solved numerically using the spectral quasi-linearization method. Numerical solutions are obtained in terms of the velocity, temperature, and concentration profiles, as well as the skin friction and the heat and mass transfer rates. These numerical results are presented graphically and in tabular form. From the results, it is found that the Weissenberg number, the local electric parameter, the unsteadiness parameter, and the magnetic, porosity, and buoyancy parameters have significant effects on the flow properties.

]]>Computation doi: 10.3390/computation8020054

Authors: Piotr Soczówka Renata Żochowska Grzegorz Karoń

The transport system of a Smart City consists of many subsystems; therefore, the modeling of the transportation network, which maps its structure, requires consideration of both the connections between individual subsystems and the relationships within each of them. The road and street network is one of the most important subsystems, whose main task is to ensure access to places generating travel demand in the city. Thus, its effectiveness should be at an appropriate level of quality. Connectivity is one of the most important characteristics of a road and street network. It describes how elements of that network are connected, which translates to travel times and costs. The analysis of the connectivity of the road and street network in urban areas is often conducted with the application of topological measures. In the case of a large city area, such analysis requires its division into smaller parts, which may affect the computational results of these measures; therefore, the main goal of the study was to present a method of analysis based on computing the numerical values of selected connectivity measures of the road and street network for a city area divided into fields of regular shape. To achieve that goal, the analyzed area was split into a regular grid. Subsequently, the numerical values of the chosen connectivity measures were calculated for each basic field, and the results allowed us to determine whether they are influenced by the method of division of the area. The obtained results showed that the size of the basic field influences the numerical values of the connectivity measures; however, that influence differs for each of the selected measures.
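
As an illustration of the kind of topological connectivity measures involved, the classical alpha, beta, and gamma indices from transport geography can be computed per basic field from its node and edge counts. The helper below is a generic sketch of these standard indices, not a reproduction of the authors&rsquo; exact measure set:

```python
def connectivity_indices(num_nodes, num_edges):
    """Classical topological connectivity measures for a planar road
    network graph (transport-geography conventions)."""
    v, e = num_nodes, num_edges
    beta = e / v                           # edges per node
    gamma = e / (3 * (v - 2))              # share of the max possible planar edges
    alpha = (e - v + 1) / (2 * v - 5)      # share of the max possible circuits
    return {"alpha": alpha, "beta": beta, "gamma": gamma}

# A small basic field with 6 intersections joined by 7 street segments.
m = connectivity_indices(num_nodes=6, num_edges=7)
print(m)
```

Because all three indices depend on the counts of nodes and edges falling inside a field, cutting the network into smaller fields changes those counts, which is exactly the boundary effect the study quantifies.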

]]>Computation doi: 10.3390/computation8020053

Authors: Zhen Qiao Hongtao Zhang Hai-Feng Ji Qian Chen

Since the outbreak of the 2019 novel coronavirus disease (COVID-19), the medical research community is vigorously seeking a treatment to control the infection and save the lives of severely infected patients. The main potential candidates for the control of viruses are virally targeted agents. In this short letter, we report our calculations on the inhibitors for the SARS-CoV-2 3CL protease and the spike protein for the potential treatment of COVID-19. The results show that the most potent inhibitors of the SARS-CoV-2 3CL protease include saquinavir, tadalafil, rivaroxaban, sildenafil, dasatinib, etc. Ergotamine, amphotericin b, and vancomycin are most promising to block the interaction of the SARS-CoV-2 S-protein with human ACE-2.

]]>Computation doi: 10.3390/computation8020052

Authors: Jerwin Jay E. Taping Junie B. Billones Voltaire G. Organo

Nickel(II) complexes of mono-functionalized pyridine-tetraazamacrocycles (PyMACs) are a new class of catalysts that possess promising activity similar to biological peroxidases. Experimental studies with ABTS (2,2&prime;-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid), substrate) and H2O2 (oxidant) proposed that hydrogen-bonding and proton-transfer reactions facilitated by their pendant arm were responsible for their catalytic activity. In this work, density functional theory calculations were performed to unravel the influence of pendant arm functionalization on the catalytic performance of Ni(II)&ndash;PyMACs. Generated frontier orbitals suggested that Ni(II)&ndash;PyMACs activate H2O2 by satisfying two requirements: (1) the deprotonation of H2O2 to form the highly nucleophilic HOO&minus;, and (2) the generation of low-spin, singlet state Ni(II)&ndash;PyMACs to allow the binding of HOO&minus;. COSMO solvation-based energies revealed that the O&ndash;O Ni(II)&ndash;hydroperoxo bond, regardless of pendant arm type, ruptures favorably via heterolysis to produce high-spin (S = 1) [(L)Ni3+&ndash;O&middot;]2+ and HO&minus;. Aqueous solvation was found crucial in the stabilization of charged species, thereby favoring the heterolytic process over homolytic. The redox reaction of [(L)Ni3+&ndash;O&middot;]2+ with ABTS obeyed a 1:2 stoichiometric ratio, followed by proton transfer to produce the final intermediate. The regeneration of Ni(II)&ndash;PyMACs at the final step involved the liberation of HO&minus;, which was highly favorable when protons were readily available or when the pKa of the pendant arm was low.

]]>Computation doi: 10.3390/computation8020051

Authors: Evgenia Ishchukova Ekaterina Maro Pavel Pristalov

In January 2016, a new standard for symmetric block encryption was established in the Russian Federation. The standard contains two encryption algorithms: Magma and Kuznyechik. In this paper we propose to consider the possibility of applying the algebraic analysis method to these ciphers. To do this, we use the simplified algorithms Magma &oplus; and S-KN2. To solve sets of nonlinear Boolean equations, we choose two different approaches: a reduction and solving of the Boolean satisfiability problem (by using the CryptoMiniSat solver) and an extended linearization method (XL). In our research, we suggest using a security assessment approach that identifies the resistance of block ciphers to algebraic cryptanalysis. The algebraic analysis of an eight-round Magma (68 key bits were fixed) with the CryptoMiniSat solver demanded four known text pairs and took 3029.56 s to complete (the search took 416.31 s). The algebraic analysis of a five-round Magma cipher with weakened S-boxes required seven known text pairs and took 1135.61 s (the search took 3.36 s). The algebraic analysis of a five-round Magma cipher with disabled S-boxes (equivalent value substitution) led to getting only one solution for five known text pairs in 501.18 s (the search took 4.92 s). The XL algebraic analysis of a four-round S-KN2 cipher with three text pairs took 236.33 s (and required 1.191 GB of RAM).

]]>Computation doi: 10.3390/computation8020050

Authors: Stephan Lenz Martin Geier Manfred Krafczyk

The simulation of fire is a challenging task due to its occurrence on multiple space-time scales and the non-linear interaction of multiple physical processes. Current state-of-the-art software such as the Fire Dynamics Simulator (FDS) implements most of the required physics, yet a significant drawback of this implementation is its limited scalability on modern massively parallel hardware. The current paper presents a massively parallel implementation of a Gas Kinetic Scheme (GKS) on General Purpose Graphics Processing Units (GPGPUs) as a potential alternative modeling and simulation approach. The implementation is validated for turbulent natural convection against experimental data. Subsequently, it is validated for two simulations of fire plumes, including a small-scale table top setup and a fire on the scale of a few meters. We show that the present GKS achieves comparable accuracy to the results obtained by FDS. Yet, due to the parallel efficiency on dedicated hardware, our GKS implementation delivers a reduction of wall-clock times of more than an order of magnitude. This paper demonstrates the potential of explicit local schemes in massively parallel environments for the simulation of fire.

]]>Computation doi: 10.3390/computation8020049

Authors: Khalid Hattaf

This paper proposes a new definition of fractional derivative with non-singular kernel in the sense of Caputo which generalizes various forms existing in the literature. Furthermore, the version in the sense of Riemann&ndash;Liouville is defined. Moreover, fundamental properties of the new generalized fractional derivatives in the sense of Caputo and Riemann&ndash;Liouville are rigorously studied. Finally, an application in epidemiology as well as in virology is presented.

]]>Computation doi: 10.3390/computation8020048

Authors: Stefano Quer Andrea Marcelli Giovanni Squillero

The maximum common subgraph of two graphs is the largest possible common subgraph, i.e., the common subgraph with as many vertices as possible. Even though this problem has long been proven NP-hard, its countless practical applications still motivate the search for exact solutions. This work discusses the possibility of extending an existing, very effective branch-and-bound procedure to parallel multi-core and many-core architectures. We analyze a parallel multi-core implementation that exploits a divide-and-conquer approach based on a thread pool, which does not deteriorate the original algorithmic efficiency and minimizes data structure repetitions. We also extend the original algorithm to parallel many-core GPU architectures adopting the CUDA programming framework, and we show how to handle the heavy workload unbalance and the massive data dependencies. Then, we suggest new heuristics to reorder the adjacency matrix, to deal with &ldquo;dead-ends&rdquo;, and to randomize the search with automatic restarts. These heuristics can achieve significant speed-ups on specific instances, even if they may not be competitive with the original strategy on average. Finally, we propose a portfolio approach, which integrates all the different local search algorithms as component tools; such a portfolio, rather than choosing the best tool for a given instance up-front, takes the decision on-line. The proposed approach drastically limits memory bandwidth constraints and avoids the typical portfolio fragility, as the CPU and GPU versions often show complementary efficiency and run on separate platforms. Experimental results support these claims and motivate further research to better exploit GPUs in embedded task-intensive and multi-engine parallel applications.
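
For readers unfamiliar with the underlying problem, a minimal serial branch-and-bound sketch for the maximum common induced subgraph is given below. It is our simplification for illustration only; the solver discussed in the paper uses far stronger bounds, vertex orderings, and the parallel machinery described above:

```python
def mcs_size(adj1, adj2):
    """Size (vertex count) of a maximum common induced subgraph of two
    graphs given as 0/1 adjacency matrices, via branch and bound."""
    n1, n2 = len(adj1), len(adj2)
    best = 0

    def extend(i, mapping, used):
        nonlocal best
        best = max(best, len(mapping))
        # Bound: even mapping every remaining G1 vertex cannot beat `best`.
        if i == n1 or len(mapping) + (n1 - i) <= best:
            return
        # Branch 1: leave vertex i of G1 unmapped.
        extend(i + 1, mapping, used)
        # Branch 2: map vertex i to an unused vertex w of G2,
        # preserving adjacency and non-adjacency with the current mapping.
        for w in range(n2):
            if w in used:
                continue
            if all(adj1[i][v] == adj2[w][u] for v, u in mapping.items()):
                mapping[i] = w
                used.add(w)
                extend(i + 1, mapping, used)
                del mapping[i]
                used.discard(w)

    extend(0, {}, set())
    return best

triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
path = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
print(mcs_size(triangle, path))  # prints: 2 (a single edge is the largest common part)
```

The recursion tree produced by the two branches is exactly the kind of irregular, data-dependent workload that makes the GPU parallelization discussed above challenging.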

]]>Computation doi: 10.3390/computation8020047

Authors: Kirkil Lin

A high-resolution large eddy simulation (LES) of wind flow over the Oklahoma City downtown area was performed to explain the effect of building height on wind flow over the city. Wind flow over cities is vital for pedestrian and traffic comfort as well as urban heat effects. An average southerly wind speed of eight meters per second was used in the inflow section. It was found that the heights and distribution of the buildings have the greatest impact on the wind flow patterns. The complexity of the flow field mainly depended on the location of buildings relative to each other and their heights. Strong up- and downflows in the wake of tall buildings, as well as large-scale coherent eddies between the low-rise buildings, were observed. High-rise buildings were found to have the highest impact on the urban wind patterns. Other characteristics of urban canopy flows, such as wind shadows and channeling effects, were also successfully captured by the LES. The LES solver was shown to be a powerful tool for understanding urban canopy flows; therefore, it can be used in similar studies (e.g., other cities, dispersion studies, etc.) in the future.

]]>Computation doi: 10.3390/computation8020046

Authors: Ashis Kumar Mandal M. N. M. Kahar Graham Kendall

The paper investigates a partial exam assignment approach for solving the examination timetabling problem. Current approaches involve scheduling all of the exams into time slots and rooms (i.e., producing an initial solution) and then improving the initial solution over a predetermined number of iterations. We propose a modification of this process that schedules a partially selected set of exams into time slots and rooms and then improves the solution vector of partial exams. The process continues with the next batch of exams until all exams are scheduled. The partial exam assignment approach utilises partial graph heuristic orderings with a modified great deluge algorithm (PGH-mGD). The PGH-mGD approach is tested on two benchmark datasets: a capacitated examination dataset from the 2nd international timetable competition (ITC2007) and an un-capacitated Toronto examination dataset. Experimental results show that PGH-mGD is able to produce quality solutions that are competitive with those of the previous approaches reported in the scientific literature.
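
The great deluge metaheuristic at the core of PGH-mGD accepts any candidate whose cost stays below a slowly falling &ldquo;water level&rdquo;. A generic sketch on a toy minimization problem is given below; this is our illustration of the classic algorithm, while the paper&rsquo;s modified great deluge differs in its level schedule and operates on exam timetables:

```python
import random

def great_deluge(cost, neighbour, start, level_decay=0.999, iters=5000, seed=1):
    """Generic great deluge search for a minimisation problem: a candidate
    move is accepted whenever its cost stays below the falling water level."""
    rng = random.Random(seed)
    current, best = start, start
    level = cost(start)
    for _ in range(iters):
        cand = neighbour(current, rng)
        if cost(cand) <= level:       # below the water level: accept the move
            current = cand
            if cost(cand) < cost(best):
                best = cand
        level *= level_decay          # the water level slowly drops
    return best

# Toy problem: minimise a bumpy 1-D function over the integers.
f = lambda x: (x - 17) ** 2 + 5 * (x % 3)
step = lambda x, rng: x + rng.choice([-2, -1, 1, 2])
result = great_deluge(f, step, start=90)
print(f(result) <= f(90))  # prints: True (never worse than the start)
```

Unlike simulated annealing, acceptance here is deterministic given the level, which makes the decay rate the single main tuning knob.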

]]>Computation doi: 10.3390/computation8020045

Authors: Yulia Shichkina Muon Ha

In this article, we describe a new formalized method for constructing a MongoDB NoSQL document database, taking into account the structure of the queries planned for execution against the database. The method is based on set theory. The initial data are the properties of the objects whose information is stored in the database, and the set of queries that are executed most often or whose execution speed should be maximal. In order to determine the need to create embedded documents, our method uses the type of relationship between tables in a relational database. Our studies have shown that this method complements the method of creating collections without embedded documents. In the article, we also describe a methodology for determining in which cases each method should be used to make working with databases more efficient. It should be noted that this approach can be used both for translating data from MySQL to MongoDB and for the consolidation of these databases.
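
A hypothetical illustration of the kind of rule such a method can encode is sketched below; the names and conditions are ours, chosen for clarity, and do not reproduce the paper&rsquo;s set-theoretic formalism:

```python
def placement(relationship, child_queried_independently, child_grows_unbounded=False):
    """Hypothetical rule of thumb for mapping a relational link to a MongoDB
    document design: embed the child documents, or keep a separate collection
    with a reference. (Illustrative only; the paper derives the decision
    set-theoretically from the expected query workload.)"""
    if child_queried_independently or child_grows_unbounded:
        return "reference"   # separate collection, joined in the application
    if relationship in ("1:1", "1:N"):
        return "embed"       # nest child documents inside the parent document
    return "reference"       # N:M links are modelled with references

# Order items read only through their order: embed; otherwise reference.
assert placement("1:N", child_queried_independently=False) == "embed"
assert placement("1:N", child_queried_independently=True) == "reference"
assert placement("N:M", child_queried_independently=False) == "reference"
```

The point of the paper&rsquo;s formalization is precisely that such decisions can be computed from the query set rather than made ad hoc.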

]]>Computation doi: 10.3390/computation8020044

Authors: Ivan Girotto Sebastiano Fabio Schifano Enrico Calore Gianluca Di Staso Federico Toschi

This paper presents the performance analysis for both the computing performance and the energy efficiency of a Lattice Boltzmann Method (LBM) based application, used to simulate three-dimensional multicomponent turbulent systems on massively parallel architectures for high-performance computing. Extending results reported in previous works, the analysis is meant to demonstrate the impact of using optimized data layouts designed for LBM based applications on high-end computer platforms. A particular focus is given to the Intel Skylake processor and to comparing the target architecture with other models of the Intel processor family. We introduce the main motivations of the presented work as well as the relevance of its scientific application. We analyse the measured performance of the implemented data layouts on the Skylake processor while scaling the number of threads per socket. We compare the results obtained on several CPU generations of the Intel processor family, and we analyse the energy efficiency of the Skylake processor compared with the Intel Xeon Phi processor, finally adding our interpretation of the presented results.

]]>Computation doi: 10.3390/computation8020043

Authors: Marc Haussmann Florian Ries Jonathan B. Jeppener-Haltenhoff Yongxiang Li Marius Schmidt Cooper Welch Lars Illmann Benjamin Böhm Hermann Nirschl Mathias J. Krause Amsini Sadiki

In this paper, we compare the capabilities of two open source near-wall-modeled large eddy simulation (NWM-LES) approaches regarding prediction accuracy, computational costs and ease of use to predict complex turbulent flows relevant to internal combustion (IC) engines. The applied open source tools are the commonly used OpenFOAM, based on the finite volume method (FVM), and OpenLB, an implementation of the lattice Boltzmann method (LBM). The near-wall region is modeled by the Musker equation coupled to a van Driest damped Smagorinsky-Lilly sub-grid scale model to decrease the required mesh resolution. The results of both frameworks are compared to a stationary engine flow bench experiment by means of particle image velocimetry (PIV). The validation covers a detailed error analysis using time-averaged and root mean square (RMS) velocity fields. Grid studies are performed to examine the performance of the two solvers. In addition, the differences in the processes of grid generation are highlighted. The performance results show that the OpenLB approach is on average 32 times faster than the OpenFOAM implementation for the tested configurations. This indicates the potential of LBM for the simulation of IC engine-relevant complex turbulent flows using NWM-LES with computationally economic costs.

]]>Computation doi: 10.3390/computation8020042

Authors: Boris Avdeev Roman Dema Sergei Chernyi

The magnetic field distribution along the radius and height in the working chamber of a hydrocyclone with a radial magnetic field is studied. One of the most important parameters of magnetic hydrocyclones is the magnetic field distribution along the radius and height of the working chamber. It is necessary for calculating the coagulation forces and the magnetic force affecting the particle or flocculus. The magnetic field strength was calculated through magnetic induction, measured by a teslameter at equal intervals and at different values of the supply DC current. The obtained values for the magnetic field strength are presented in the form of graphs. The field distribution curves produced from the dependences found earlier were constructed. The correlation coefficients were calculated. It was proven that the analyzed dependences could be used in further calculations of coagulation forces and magnetic force, because theoretical and experimental data compared favourably with each other. The distribution along the radius and height in the cylindrical part of the magnetic hydrocyclone was consistent with data published in the scientific literature.

]]>Computation doi: 10.3390/computation8020041

Authors: Felicia Anisoara Damian Simona Moldovanu Nilanjan Dey Amira S. Ashour Luminita Moraru

(1) Background: In this research, we aimed to identify and validate a set of relevant features to distinguish between benign nevi and melanoma lesions. (2) Methods: Two datasets with 70 melanomas and 100 nevi were investigated. The first one contained raw images. The second dataset contained images preprocessed for noise removal and uneven illumination reduction. Further, the images belonging to both datasets were segmented, followed by extracting features considered in terms of form/shape and color, such as asymmetry, eccentricity, circularity, asymmetry of color distribution, quadrant asymmetry, fast Fourier transform (FFT) normalization amplitude, and the 6th and 7th Hu&rsquo;s moments. The FFT normalization amplitude is an atypical feature that is computed as a Fourier transform descriptor and focuses on geometric signatures of skin lesions using the frequency domain information. The receiver operating characteristic (ROC) curve and area under the curve (AUC) were employed to ascertain the relevance of the selected features and their capability to differentiate between nevi and melanoma. (3) Results: The ROC curves and AUC were employed for all experiments and selected features. A comparison in terms of the accuracy and AUC was performed, and an evaluation of the performance of the analyzed features was carried out. (4) Conclusions: The asymmetry index and eccentricity, together with the F6 Hu&rsquo;s invariant moment, were fairly competent in providing a good separation between malignant melanoma and benign lesions. The FFT normalization amplitude feature should also be exploited, as it shows potential for classification.
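
As an example of one of the shape features mentioned, eccentricity can be computed from the second-order central moments of a segmented lesion mask. The following is a generic formulation of that standard computation, not necessarily the authors&rsquo; exact implementation:

```python
import math

def eccentricity(mask):
    """Eccentricity of a binary mask from the eigenvalues of its
    second-order central moments: 0 for a circle or square,
    approaching 1 for strongly elongated shapes."""
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    n = len(pts)
    cx = sum(x for x, _ in pts) / n
    cy = sum(y for _, y in pts) / n
    mu20 = sum((x - cx) ** 2 for x, _ in pts) / n
    mu02 = sum((y - cy) ** 2 for _, y in pts) / n
    mu11 = sum((x - cx) * (y - cy) for x, y in pts) / n
    spread = math.sqrt((mu20 - mu02) ** 2 + 4 * mu11 ** 2)
    lam_max = (mu20 + mu02 + spread) / 2
    lam_min = (mu20 + mu02 - spread) / 2
    return math.sqrt(1 - lam_min / lam_max)

square = [[1] * 10 for _ in range(10)]  # symmetric shape: eccentricity ~ 0
stripe = [[1] * 20 for _ in range(4)]   # elongated shape: eccentricity -> 1
print(round(eccentricity(square), 3), round(eccentricity(stripe), 3))
# prints: 0.0 0.981
```

Irregular, elongated lesions score higher on this feature, which is one reason it helps separate melanoma from benign nevi.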

]]>Computation doi: 10.3390/computation8020040

Authors: Valery Ochkov Inna Vasileva Massimiliano Nori Konstantin Orlov Evgeny Nikulchev

In this article, we examine the use of symmetry groups for modeling applied problems through computer symbolic calculus. We consider the problem of solving radical equations symbolically using computer mathematical packages. We propose some methods to obtain a correct analytical solution for this class of equations by means of the Mathcad package. The application of symmetric polynomials is proposed to ensure a correct approach to the solution. Solvability issues based on the physical sense of a problem are discussed. Common errors in solving radical equations related to the specifics of computer usage are analyzed. Worked electrical and geometrical problems are given as examples.

]]>Computation doi: 10.3390/computation8020039

Authors: Varadarajan Rengaraj Michael Lass Christian Plessl Thomas D. Kühne

In scientific computing, the acceleration of atomistic computer simulations by means of custom hardware is finding ever-growing application. A major limitation, however, is that the high efficiency in terms of performance and low power consumption entails the massive usage of low-precision computing units. Here, based on the approximate computing paradigm, we present an algorithmic method to compensate rigorously for the numerical inaccuracies due to low-accuracy arithmetic operations, while still obtaining exact expectation values using a properly modified Langevin-type equation.

]]>Computation doi: 10.3390/computation8020038

Authors: Andreas G. Fotopoulos Dionissios P. Margaris

Our study presents the computational implementation of an air lubrication system on a commercial ship with a 154,800 m3 Liquefied Natural Gas capacity. The air lubrication reduces the skin friction between the ship&rsquo;s wetted area and the sea water. We analyze the real operating conditions as well as the assumptions that allow the problem to be approached as accurately as possible. The computational analysis is performed with the ANSYS FLUENT software. Two separate geometries (two different models) are drawn for the ship&rsquo;s hull: with and without an air lubrication system. Our aim is to extract two different skin friction coefficients, which affect the fuel consumption and the CO2 emissions of the ship. To our knowledge, a ship&rsquo;s hull has never before been modeled at full scale with air lubrication injectors fitted in a computational environment in order to simulate the operation of an air lubrication system. The system&rsquo;s impact on the minimization of the LNG transfer cost and on the reduction in fuel consumption and CO2 emissions is also examined. The study demonstrates how to install the entire system on a newly built ship. Fuel consumption can be reduced by up to 8%, and daily savings could reach up to EUR 8000 per travelling day.

]]>Computation doi: 10.3390/computation8020037

Authors: Kaijie Fan Biagio Cosenza Ben Juurlink

Energy optimization is an increasingly important aspect of today&rsquo;s high-performance computing applications. In particular, dynamic voltage and frequency scaling (DVFS) has become a widely adopted solution to balance performance and energy consumption, and hardware vendors provide management libraries that allow the programmer to change both memory and core frequencies manually to minimize energy consumption while maximizing performance. This article focuses on modeling the energy consumption and speedup of GPU applications while using different frequency configurations. The task is not straightforward, because of the large set of possible and uniformly distributed configurations and because of the multi-objective nature of the problem, which minimizes energy consumption and maximizes performance. This article proposes a machine learning-based method to predict the best core and memory frequency configurations on GPUs for an input OpenCL kernel. The method is based on two models for speedup and normalized energy predictions over the default frequency configuration. Those are later combined into a multi-objective approach that predicts a Pareto set of frequency configurations. Results show that our approach is very accurate at predicting extrema and the Pareto set, and finds frequency configurations that dominate the default configuration in either energy or performance.
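
Once energy and speedup predictions are available per frequency configuration, extracting the Pareto set itself is straightforward. A minimal sketch with hypothetical (normalised energy, speedup) pairs, where energy is minimised and speedup maximised:

```python
def pareto_front(configs):
    """Keep only the non-dominated (energy, speedup) points: a point is
    dominated if another uses no more energy and is at least as fast,
    and is strictly better on one of the two objectives."""
    def dominates(a, b):
        return (a[0] <= b[0] and a[1] >= b[1]) and (a[0] < b[0] or a[1] > b[1])
    return [p for p in configs
            if not any(dominates(q, p) for q in configs)]

# Hypothetical (normalised energy, speedup) per core/memory frequency pair;
# (1.00, 1.00) is the default configuration.
measured = [(1.00, 1.00), (0.80, 0.95), (0.70, 0.70), (0.85, 1.10), (0.90, 0.90)]
front = pareto_front(measured)
print(sorted(front))  # prints: [(0.7, 0.7), (0.8, 0.95), (0.85, 1.1)]
```

Note that the default configuration itself is dominated here, which mirrors the article&rsquo;s finding that non-default frequency pairs can beat the default on energy or performance.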

]]>Computation doi: 10.3390/computation8020036

Authors: Alessio Fuoco Giorgio De Luca Elena Tocci Johannes Carolus Jansen

Computational modelling and simulation form a consolidated branch in the multidisciplinary field of membrane science and technology [...]

]]>Computation doi: 10.3390/computation8020035

Authors: Christos Manopoulos Giannis Savva Achilleas Tsoukalis Georgios Vasileiou Nikolaos Rogkas Vasilios Spitas Sokrates Tsangaris

In this study, an infusion roller pump comprising two separate innovative resilient tube designs is presented. The first incorporates a flexible tubing cross-section that is lenticular in its relaxed state, for power reduction reasons. The second keeps the same lenticular cross-section along its length, while additionally incorporating an inflating portion that creates a momentary positive flow pulse to balance the void generated by the roller disengagement. Fluid&ndash;Structure Interaction (FSI) simulations cannot provide quantitatively realistic results, due to the limitation of full compression of the tube, and are only used qualitatively to reveal how to set the inflated portion along the tube length in order to suppress backflow and achieve a constant flow rate. Finally, indirect lumen volume measurements were performed numerically, and an optimum design was found by testing eight design approaches. These indirect fluid volume measurements assess the optimum inflated tube portion, leading to the elimination of backflow and pulsation. The optimum design has an inflation portion of 75 degrees, covering almost 42% of the curved part of the tube, with a constant zone at the maximum value of the inflated lenticular cross-section of 55 degrees within that portion, covering about 73% of the inflation portion.

]]>Computation doi: 10.3390/computation8020034

Authors: David Goz Georgios Ieronymakis Vassilis Papaefstathiou Nikolaos Dimou Sara Bertocco Francesco Simula Antonio Ragagnin Luca Tornatore Igor Coretti Giuliano Taffoni

New challenges in Astronomy and Astrophysics (AA) call for many exceptionally computationally intensive simulations. &ldquo;Exascale&rdquo; (and beyond) computational facilities are mandatory to address the size of theoretical problems and data coming from the new generation of observational facilities in AA. Currently, the High-Performance Computing (HPC) sector is undergoing a profound phase of innovation, in which the primary challenge to the achievement of the &ldquo;Exascale&rdquo; is the power consumption. The goal of this work is to give some insight into the performance and energy footprint of contemporary architectures for a real astrophysical application in an HPC context. We use a state-of-the-art N-body application that we re-engineered and optimized to exploit the heterogeneous underlying hardware fully. We quantitatively evaluate the impact of computation on energy consumption when running on four different platforms. Two of them represent current HPC systems (Intel-based and equipped with NVIDIA GPUs), one is a micro-cluster based on ARM-MPSoC, and one is a &ldquo;prototype towards Exascale&rdquo; equipped with ARM-MPSoCs tightly coupled with FPGAs. We investigate the behavior of the different devices, where the high-end GPUs excel in terms of time-to-solution while MPSoC-FPGA systems outperform GPUs in power consumption. Our experience reveals that considering FPGAs for computationally intensive applications seems very promising, as their performance is improving to meet the requirements of scientific applications. This work can be a reference for future platform development for astrophysics applications where computationally intensive calculations are required.

]]>Computation doi: 10.3390/computation8020033

Authors: Dimitrios Koulocheris Clio Vossou

Tank vehicles are widely used for the road transportation of dangerous goods and especially flammable liquid fuels. Existing gross weight limitations in such transportations render the self-weight of the tank structure a crucial parameter of the design. For the design and construction of metallic tank vehicles carrying dangerous goods, the European Standard EN13094:2015 is applied. This Standard specifies a minimum shell thickness for the tank construction according to the mechanical properties of the construction material. In the present paper, primarily, the proposed design was investigated and a weight minimization of such a tank vehicle with respect to its structural integrity was performed. As a test case, a tank vehicle with a box-shaped cross-section and low gross weight was considered. For the evaluation of the structural integrity of the tank construction, the mechanical analysis software ANSYS&reg; 2019R1 was used. The boundary values and the suitable computation for structural integrity were applied as they are defined in the corresponding Standards. The thickness of the construction material was decreased to a minimum lower than that posed by the standards, indicating that the limit they pose is by no means restrictive in terms of structural integrity.

]]>Computation doi: 10.3390/computation8020032

Authors: Musa Demba Poom Kumam Wiboonsak Watthayu Pawicha Phairatchatniyom

In this work, a pair of embedded explicit exponentially-fitted Runge–Kutta–Nyström methods is formulated for solving special second-order ordinary differential equations (ODEs) with periodic solutions. A variable step-size technique is used for the derivation of the 5(3) embedded pair, which provides a cheap local error estimation. The numerical results obtained signify that the new adapted method is more efficient and accurate compared with the existing methods.

]]>Computation doi: 10.3390/computation8020030

Authors: Jose Aguilar Camilo Salazar Henry Velasco Julian Monsalve-Pulido Edwin Montoya

This paper analyses the capabilities of different techniques to build a semantic representation of educational digital resources. Educational digital resources are modeled using the Learning Object Metadata (LOM) standard, and these semantic representations can be obtained from different LOM fields, like the title and description, among others, in order to extract the features/characteristics from the digital resources. The feature extraction methods used in this paper are Best Matching 25 (BM25), Latent Semantic Analysis (LSA), Doc2Vec, and Latent Dirichlet Allocation (LDA). The features/descriptors generated by them are tested on three types of educational digital resources (scientific publications, learning objects, patents), a paraphrase corpus, and two use cases: an information retrieval context and an educational recommendation system. For this analysis, unsupervised metrics are used to determine the quality of the features proposed by each method: two similarity functions and the entropy. In addition, the paper presents tests of the techniques for the classification of paraphrases. The experiments show that, according to the type of content and metric, the performance of the feature extraction methods is very different; in some cases one method is better than the others, and in other cases the inverse holds.

]]>Computation doi: 10.3390/computation8020031

Authors: Lenuta Pana Simona Moldovanu Nilanjan Dey Amira S. Ashour Luminita Moraru

Background: The purpose of this article is to provide a new evaluation tool based on skeleton maps to assess the tumoral and non-tumoral regions of the 2D MRI in PD-weighted (proton density) and T2w (T2-weighted type) brain images. Methods: The proposed method investigated inter-hemisphere brain tissue similarity using a mask in the right hemisphere and its mirror reflection in the left one. At the hemisphere level and for each ROI (region of interest), a morphological skeleton algorithm was used to efficiently investigate the similarity between hemispheres. Two datasets with 88 T2w and PD images belonging to healthy patients and patients diagnosed with glioma were investigated: D1 contains the original raw images affected by Rician noise and D2 consists of the same images pre-processed for noise removal. Results: The investigation was based on structural similarity assessment by using the Structural Similarity Index (SSIM) and a modified Jaccard metric. A novel S-Jaccard (Skeleton Jaccard) metric was proposed. Cluster accuracy was estimated based on the Silhouette method (SV). The Silhouette coefficient (SC) indicates the quality of the clustering process for the SSIM and S-Jaccard. To assess the overall classification accuracy, an ROC curve analysis was carried out. Conclusions: Consistent results were obtained for healthy patients and for PD images of glioma. We demonstrated that the S-Jaccard metric based on skeletal similarity is an efficient tool for an inter-hemisphere brain similarity evaluation. The accuracy of the proposed skeletonization method was smaller for the original images affected by Rician noise (AUC = 0.883 (T2w) and 0.904 (PD)) but increased for denoised images (AUC = 0.951 (T2w) and 0.969 (PD)).
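The Jaccard comparison underlying the S-Jaccard metric reduces, for binary images, to an intersection-over-union of foreground pixels. A minimal sketch (operating on pixel coordinate sets, standing in for actual skeletonized MRI masks):

```python
def jaccard(a, b):
    """Intersection-over-union of two sets of foreground pixel coordinates."""
    union = a | b
    return len(a & b) / len(union) if union else 1.0
```

In the paper the two inputs would be the skeleton of a right-hemisphere ROI and the skeleton of its mirrored left-hemisphere counterpart; a value near 1 indicates high inter-hemisphere similarity.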

]]>Computation doi: 10.3390/computation8020029

Authors: Tommaso Civitarese Giuseppe Zollo

According to the recent literature, it has been demonstrated that the atomistic-scale recognition of amino acids and peptide bonds in polypeptides and proteins is in principle possible by measuring the tunneling current flowing across a narrow nano-gap in graphene nanoribbons during the peptide translocation. In this paper, we concentrate on the properties of the tunneling current signal measured for nano-gaps of different sizes. Using the non-equilibrium Green&rsquo;s function method based on density functional theory, we have studied the tunneling current for larger gap sizes that can actually be realized with present state-of-the-art sub-nanometer nano-pore and nano-gap technology. Even in these cases the peptide bond can still be recognized, the obtained signal being well within the measurable range of the current. The signal shapes undergo a change from a double-peak feature per peptide bond for narrow gaps to a structured single-peak signal per peptide bond for wider gaps. The reason is related to the different orbital overlap range of the two contributions giving rise to the original double-peak signal for narrow gaps.

]]>Computation doi: 10.3390/computation8020028

Authors: Alessio Fuoco Marcello Monteleone Elisa Esposito Rosaria Bruno Jesús Ferrando-Soria Emilio Pardo Donatella Armentano Johannes Carolus Jansen

The most widely used method to measure the transport properties of dense polymeric membranes is the time lag method in a constant volume/pressure increase instrument. Although simple and quick, this method provides only relatively superficial, averaged data of the permeability, diffusivity, and solubility of gas or vapor species in the membrane. The present manuscript discusses a more sophisticated computational method to determine the transport properties on the basis of a fit of the entire permeation curve, including the transient period. The traditional tangent method and the fitting procedure were compared for the transport of six light gases (H2, He, O2, N2, CH4, and CO2) and ethane and ethylene in mixed matrix membranes (MMM) based on Pebax&reg;1657 and the metal&ndash;organic framework (MOF) CuII2(S,S)-hismox&middot;5H2O. Deviations of the experimental data from the theoretical curve could be attributed to the particular MOF structure, with cavities of different sizes. The fitting procedure revealed two different effective diffusion coefficients for the same gas in the case of methane and ethylene, due to the unusual void morphology in the MOFs. The method was furthermore applied to mixed gas permeation in an innovative constant-pressure/variable-volume setup with continuous analysis of the permeate composition by an on-line mass-spectrometric residual gas analyzer. This method can provide the diffusion coefficient of individual gas species in a mixture, during mixed gas permeation experiments. Such information was previously inaccessible, and it will greatly enhance insight into the mixed gas transport in polymeric or mixed matrix membranes.
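For reference, the classical tangent (time-lag) method that the fitting procedure is compared against extracts the diffusion coefficient from the steady-state asymptote of the permeation curve via D = l&sup2;/(6&theta;), where &theta; is the intercept of the asymptote with the time axis. A minimal sketch on synthetic data; the `tail_fraction` heuristic for picking the linear region is an illustrative assumption, not the paper&rsquo;s procedure:

```python
def time_lag_diffusivity(t, p, thickness, tail_fraction=0.3):
    """Estimate D from a downstream pressure-rise curve by the time-lag method.

    Fits a least-squares line to the steady-state tail of p(t); the intercept
    with the time axis is the time lag theta, and D = thickness**2 / (6 * theta).
    """
    n = len(t)
    k = max(2, int(n * tail_fraction))
    ts, ps = t[-k:], p[-k:]
    mt = sum(ts) / k
    mp = sum(ps) / k
    slope = sum((x - mt) * (y - mp) for x, y in zip(ts, ps)) / sum((x - mt) ** 2 for x in ts)
    intercept = mp - slope * mt
    theta = -intercept / slope  # time-axis crossing of the asymptote
    return thickness ** 2 / (6.0 * theta)
```

On a synthetic curve whose asymptote crosses the time axis at &theta; = 10 s, the routine recovers D = l&sup2;/60 exactly; the fitting procedure of the paper instead fits the entire curve, including the transient, and can thereby resolve two effective diffusion coefficients where this single-asymptote estimate cannot.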

]]>Computation doi: 10.3390/computation8020027

Authors: Dina U. Abdullina Elena A. Korznikova Volodymyr I. Dubinko Denis V. Laptev Alexey A. Kudreyko Elvira G. Soboleva Sergey V. Dmitriev Kun Zhou

The structure evolution and mechanical response of a carbon nanotube (CNT) bundle under lateral biaxial compression is investigated in plane strain conditions using the chain model. In this model, the tensile and bending rigidity of CNT walls, and the van der Waals interactions between them, are taken into account. Initially, the bundle cross section is a triangular lattice of circular zigzag CNTs. Under increasing strain-controlled compression, several structure transformations are observed. Firstly, a second-order phase transition leads to a crystalline structure with a doubled translational cell. Then a first-order phase transition takes place with the appearance of collapsed CNTs. Further compression results in an increase of the fraction of collapsed CNTs at nearly constant compressive stress, and eventually all CNTs collapse. It is found that the potential energy of the CNT bundle during deformation changes mainly due to bending of the CNT walls, while the contributions from the wall tension-compression and van der Waals energies are considerably smaller.

]]>Computation doi: 10.3390/computation8020026

Authors: Eduardo C. da Silva Paulo H. R. Gabriel

The multiprocessor task scheduling problem has received considerable attention over the last three decades. In this context, a wide range of studies focuses on the design of evolutionary algorithms. These papers deal with many topics, such as task characteristics, environmental heterogeneity, and optimization criteria. To classify the academic production in this research field, we present here a systematic literature review of directed acyclic graph (DAG) scheduling, that is, when tasks are modeled through a directed acyclic graph. Based on a survey of 56 works, we provide a panorama of the last 30 years of research in this field. From the analysis of the selected studies, we found a diversity of application domains and mapped their main contributions.
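As a concrete instance of the problem the surveyed algorithms address, here is a simple greedy list scheduler for a DAG on m identical processors. The evolutionary algorithms in the surveyed papers search over far richer schedule spaces; this baseline is illustrative only:

```python
from collections import deque

def list_schedule(tasks, deps, costs, m):
    """Schedule a DAG on m identical processors; returns finish time per task.

    tasks: list of task ids; deps: dict task -> set of predecessor ids;
    costs: dict task -> execution time.
    """
    indeg = {t: len(deps.get(t, ())) for t in tasks}
    succ = {t: [] for t in tasks}
    for t, preds in deps.items():
        for p in preds:
            succ[p].append(t)
    ready = deque(t for t in tasks if indeg[t] == 0)  # Kahn's algorithm
    proc_free = [0.0] * m  # time at which each processor becomes idle
    finish = {}
    while ready:
        t = ready.popleft()
        est = max((finish[p] for p in deps.get(t, ())), default=0.0)
        i = min(range(m), key=lambda j: max(proc_free[j], est))  # earliest-start processor
        start = max(proc_free[i], est)
        finish[t] = start + costs[t]
        proc_free[i] = finish[t]
        for s in succ[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return finish
```

The makespan, the usual optimization criterion in the surveyed works, is simply `max(finish.values())`; for a unit-cost diamond DAG on two processors it is 3, since the two middle tasks run in parallel.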

]]>Computation doi: 10.3390/computation8020025

Authors: Konstantin Grishakov Konstantin Katin Mikhail Maslov

Quantum&ndash;chemical calculations of the electronic characteristics of carbon- and boron-doped silicon polyprismanes were carried out, and the atomic hydrogen adsorption on these structures was analyzed. It was established that silicon polyprismanes doped with boron and carbon retained the metallicity predicted earlier. It was shown that the doping of polyprismanes made them more thermodynamically stable. For the silicon prismanes doped with boron or carbon, hydrogen adsorption was found to be energetically favorable. In the case of boron-doped prismanes, adsorption on the boron impurity was much more advantageous than on the neighboring silicon nodes. For carbon doping, the adsorption energy of polyprismanes with a small diameter weakly depended on the position of the hydrogen atom near the impurity center. However, for the C-doped polyprismanes with a larger diameter, hydrogen adsorption on the silicon atom belonging to the ring with the impurity was more energetically favorable than adsorption on the silicon atom from the adjacent ring.

]]>Computation doi: 10.3390/computation8020024

Authors: Dimitriοs Stamatelos George Labeas

In the pursuit of a lighter composite wing design, fast and effective methodologies for sizing and validating the wing members (e.g., spars, ribs, skins, etc.) are required. In the present paper, the preliminary design methodology of an airliner main composite wing, which has an innovative multispar configuration instead of the conventional two-spar design, is investigated. The investigated aircraft wing is a large-scale composite component, requiring an efficient analysis methodology; for this purpose, the initial wing sizing is mostly based on simplified Finite Element (FE) stress analysis combined with analytically formulated design criteria. The proposed methodology comprises three basic modules, namely, computational stress analysis of the wing structure, comparison of the stress&ndash;strain results to specific design allowables, and a suitable resizing procedure, repeated until all design requirements are satisfied. The design constraints include strain allowables for the entire wing structure, stability constraints for the upper skin and spar webs, as well as bearing bypass analysis of the riveted/bolted joints of the spar flanges/skins connection. A comparison between a conventional (2-spar) and an innovative 4-spar wing configuration is presented, showing that under certain conditions the multispar configuration has significant advantages over the conventional design.

]]>Computation doi: 10.3390/computation8020023

Authors: Tooran Tavangar Hesam Tofighian Ali Tarokh

Particle-laden jet flows can be observed in many industrial applications. In this investigation, the horizontal motion of particle-laden jets is simulated using the Eulerian&ndash;Lagrangian framework. Two-way coupling is applied to the model to simulate the interaction between the discrete and continuum phases. In order to track the continuum phase, a passive scalar equation is added to the solver. Eddy Life Time (ELT) is employed as a dispersion model. The influences of different non-dimensional parameters, such as the Stokes number, jet Reynolds number, and mass loading ratio, on the flow characteristics are studied. The results of the simulations are verified with the available experimental data. It is revealed that more gravitational force is exerted on the jet as a result of the increase in mass loading, which deflects it more. Moreover, with an increase in the Reynolds number, the speed of the jet rises, and consequently, the gravitational force becomes less capable of deviating the jet. In addition, it is observed that by increasing the Stokes number, the particles leave the jet at higher speed, which causes a lower deviation of the jet.
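The Stokes number referenced above compares the particle relaxation time with a flow time scale. A minimal sketch; the choice of the jet time scale (nozzle diameter over exit velocity) and the numerical values below are illustrative assumptions, not taken from the paper:

```python
def stokes_number(rho_p, d_p, mu, u_jet, d_jet):
    """Stokes number for a small particle in a jet.

    rho_p: particle density [kg/m^3], d_p: particle diameter [m],
    mu: fluid dynamic viscosity [Pa s], u_jet: jet exit velocity [m/s],
    d_jet: nozzle diameter [m].
    """
    tau_p = rho_p * d_p ** 2 / (18.0 * mu)  # Stokes drag relaxation time
    tau_f = d_jet / u_jet                   # convective flow time scale
    return tau_p / tau_f
```

For a 50 &mu;m sand-like particle (density 2500 kg/m&sup3;) in air issuing from a 10 mm nozzle at 10 m/s, this gives a Stokes number of about 19, i.e., particles that respond slowly to the carrier flow, which is consistent with the observed tendency of high-Stokes particles to leave the jet.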

]]>Computation doi: 10.3390/computation8020022

Authors: Michele Ferraiuolo Concetta Palumbo Andrea Sellitto Aniello Riccio

The thermo-structural design of the wing leading edge of hypersonic vehicles is a very challenging task, as high gradients in the thermal field, and hence high thermal stresses, are expected. Indeed, when employing passive hot-structure-based thermal protection systems, very high temperatures (e.g., 1400 &deg;C) are expected on the external surface of the wing leading edge, while the internal structural components are required to not exceed a few hundred degrees Celsius (e.g., 400 &deg;C) at the interface with the internal cold structure. Hence, ceramic matrix composites (CMC) are usually adopted for the manufacturing of the external surface of the wing leading edge, since they are characterized by good mechanical properties at very high temperatures (up to 1900 &deg;C) together with an excellent thermal shock resistance. Furthermore, the orthotropic behavior of these materials, together with the possibility to tailor their lamination sequence to minimize the heat transferred to internal components, makes them very attractive for hot-structure-based thermal protection system applications. However, the numerical predictions of the thermo-mechanical behavior of such materials, taking into account the influence of each ply (whose thickness generally ranges between 0.2 and 0.3 mm), can be very expensive from a computational point of view. To overcome this limitation, sub-models are usually adopted, able to focus on specific and critical areas of the structure where very detailed thermo-mechanical analyses can be performed without significantly affecting the computational efficiency of the global model. In the present work, sub-modeling numerical approaches have been adopted for the analysis of the thermo-mechanical behavior of a ceramic matrix composite wing leading edge of a hypersonic vehicle. The main aim is to investigate the feasibility, in terms of computational efficiency and accuracy of results, of using sub-models for dimensioning complex ceramic matrix components. Hence, a comprehensive study on the size of sub-models and on the choice of their boundaries has been carried out in order to assess the advantages and the limitations in approximating the thermo-mechanical behavior of the investigated global ceramic matrix composite component.

]]>Computation doi: 10.3390/computation8010021

Authors: Nikolaos Rogkas Christos Vakouftsis Georgios Vasileiou Christos Manopoulos Vasilios Spitas

In recent years, multidisc wet friction clutches have been used in demanding powertrains of automatic and dual clutch transmissions, targeting high efficiency and smoothness during gearshift. However, the flow pattern that develops between the clutch discs is significantly complex, and the Computational Fluid Dynamics (CFD) methods employed are quite demanding in terms of computational cost. To deal with this issue, semi-analytical solutions have been derived; although limited to specific problems, they yield handy expressions while also providing insight into the wet clutch physics. This lack of global validity is counterbalanced by the fact that the governing equations become analytically solvable at specific operational conditions with satisfactory accuracy, provided that the simplifications rendering the truncated terms inactive hold true. In this work, a quantitative way of determining the relative weight of each term of the Navier&ndash;Stokes (NS) equation set is presented, based on the post-processing of CFD results using the Buckingham &ldquo;&pi;-theorem&rdquo;. The sets of nondimensional numbers created were used to describe and model the physics of the wet clutch.

]]>Computation doi: 10.3390/computation8010020

Authors: Enrico Calore Alessandro Gabbana Sebastiano Fabio Schifano Raffaele Tripiccione

In recent years, the energy efficiency of HPC systems has increasingly become of paramount importance for environmental, technical, and economic reasons. Several projects have investigated the use of different processors and accelerators in the quest to build systems able to achieve high energy efficiency levels for data centers and HPC installations. In this context, the Arm CPU architecture has received a lot of attention given its wide use in low-power and energy-limited applications, but server-grade processors have appeared on the market only recently. In this study, we targeted the Marvell ThunderX2, one of the latest Arm-based processors developed to fit the requirements of high performance computing applications. Our interest is mainly focused on the assessment in the context of large HPC installations, and thus we evaluated both computing performance and energy efficiency, using the ERT benchmark and two production-ready HPC applications. We finally compared the results with other processors commonly used in large parallel systems and highlighted the characteristics of applications which could benefit from the ThunderX2 architecture, in terms of both computing performance and energy efficiency. Pursuing this aim, we also describe how ERT has been modified and optimized for the ThunderX2, and how to monitor power drain while running applications on this processor.

]]>Computation doi: 10.3390/computation8010019

Authors: Zouridaki Vasileiou

Asymmetric helical gears have been under investigation for more than two decades due to their inherent ability to handle greater bending loads than their spur counterparts of comparable size (i.e., the number of teeth and module). For this type of gear, only one side of each gear tooth in a geared mechanism is usually loaded (driving/driven side), whereas the other remains mostly unloaded (coast side). Due to the asymmetry of the tooth, a nonlinear model is used. For that reason, a numerical design procedure involving the geometrical and structural modelling of conjugate helical gear sets is introduced to the scientific literature. This is accomplished with the tool of Finite Element Analysis (FEA). The basic geometry is initially generated in 2D and thereafter converted to a 3D shape using Boolean operations. The rigid body required by the FEA software is produced in Computer Aided Design (CAD) software (SolidWorks). This paper focuses on analyzing the effect of different geometry characteristics on bending loads. The effect on bending stress plays a significant role in gear design, wherein its magnitude is controlled by the nominal bending stress and the stress concentration due to the geometrical shape of the teeth. The analysis of this effect of the different geometrical characteristics on the load is presented in detail. Moreover, a comparison of the stresses developed between pairs with asymmetrical helical teeth, keeping one geometric parameter constant and modifying the values of the other two, is presented.

]]>Computation doi: 10.3390/computation8010018

Authors: Abdel-Rahman Hedar Amira Allam Alaa Abdel-Hakim

With the rapid growth of simulation software packages, generating practical tools for simulation-based optimization has attracted a lot of interest over the last decades. In this paper, a modified method of Estimation of Distribution Algorithms (EDAs) is constructed by combining them with variable-sample techniques to deal with simulation-based optimization problems. Moreover, a new variable-sample technique is introduced to support the search process whenever the sample sizes are small, especially at the beginning of the search process. The proposed method shows efficient results on several simulated numerical experiments.
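To illustrate the idea of combining an EDA with variable sample sizes, here is a hedged sketch: a simple Gaussian EDA minimizing a noisy objective, where the number of repeated evaluations per candidate grows over the iterations. The linear sampling schedule and all parameter values are illustrative assumptions, not the paper&rsquo;s technique:

```python
import random
import statistics

def eda_minimize(noisy_f, dim=2, pop=40, elite=10, iters=80, seed=1):
    """Gaussian EDA on a noisy objective with a growing per-candidate sample size."""
    rng = random.Random(seed)
    mean = [rng.uniform(-3, 3) for _ in range(dim)]
    std = [2.0] * dim
    for it in range(iters):
        samples = it + 1  # variable-sample schedule: average out more noise late in the search
        candidates = [[rng.gauss(m, s) for m, s in zip(mean, std)] for _ in range(pop)]
        candidates.sort(key=lambda x: sum(noisy_f(x) for _ in range(samples)) / samples)
        best = candidates[:elite]
        # re-estimate the sampling distribution from the elite candidates
        mean = [statistics.mean(b[i] for b in best) for i in range(dim)]
        std = [max(statistics.pstdev([b[i] for b in best]), 1e-3) for i in range(dim)]
    return mean
```

Early iterations use few noisy evaluations per candidate (cheap, coarse ranking), while later iterations average many evaluations so that the noise no longer masks the small fitness differences near the optimum.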

]]>Computation doi: 10.3390/computation8010017

Authors: Naeem Saleem Iqra Habib Manuel De la Sen

In this paper, we introduce Suzuki-type (&alpha;, &beta;, &gamma;g)-generalized and modified proximal contractive mappings. We establish some coincidence and best proximity point results in fairly complete spaces. Also, we provide coincidence and best proximity point results in partially ordered complete metric spaces for Suzuki-type (&alpha;, &beta;, &gamma;g)-generalized and modified proximal contractive mappings. Furthermore, some examples are presented in each section to elaborate and explain the usability of the obtained results. As an application, we obtain fixed-point results in metric spaces and in partially ordered metric spaces. The results obtained in this article further extend, modify and generalize the various results in the literature.

]]>Computation doi: 10.3390/computation8010016

Authors: Mutti-Ur Rehman Jehad Alzabut

The numerical approximation of the &mu;-value is key towards the measurement of instability, stability analysis, robustness, and the performance of linear feedback systems in system theory. The MATLAB function mussv available in the MATLAB Control Toolbox efficiently computes both lower and upper bounds of the &mu;-value. This article deals with the numerical approximations of the lower bounds of &mu;-values by means of low-rank ordinary differential equation (ODE)-based techniques. The numerical simulation shows that approximated lower bounds of &mu;-values are much tighter when compared to those obtained by the MATLAB function mussv.

]]>Computation doi: 10.3390/computation8010015

Authors: Michael Frank Dimitris Drikakis Vassilis Charissis

The re-kindled fascination in machine learning (ML), observed over the last few decades, has also percolated into natural sciences and engineering. ML algorithms are now used in scientific computing, as well as in data-mining and processing. In this paper, we provide a review of the state-of-the-art in ML for computational science and engineering. We discuss ways of using ML to speed up or improve the quality of simulation techniques such as computational fluid dynamics, molecular dynamics, and structural analysis. We explore the ability of ML to produce computationally efficient surrogate models of physical applications that circumvent the need for the more expensive simulation techniques entirely. We also discuss how ML can be used to process large amounts of data, using as examples many different scientific fields, such as engineering, medicine, astronomy and computing. Finally, we review how ML has been used to create more realistic and responsive virtual reality applications.

]]>Computation doi: 10.3390/computation8010014

Authors: Azza Hassan Soliman Mohamed Abdelsabour Fahmy

In this work, the Finite Element Method (FEM) is applied to obtain the condition at the boundary of the interface between a channel and a porous medium. The boundary conditions that should be applied to the inhomogeneous interface zone between the two homogeneous regions of free fluid and porous medium are derived. A comparison has been performed for porous material characterizations to provide the velocity at the inhomogeneous interface zone with variable permeability between the two homogeneous regions of free fluid and porous medium. Also, the dependence of the slip coefficient on the thickness of the transition zone is established, and the values of the thickness are adjusted so that the numerical results of the proposed technique are found to be in good agreement with experimental results in the literature.

]]>Computation doi: 10.3390/computation8010013

Authors: Yi Zhou Zhong Xiao Shichao Liu Ping Chen Hua Pang Yong Xin Yongjun Jiao Shixin Gao Kun Zhang Wenjie Li Junchong Yu

The effect of non-fuel part size on the thermal-mechanical performance of fully ceramic microencapsulated (FCM&trade;) fuel was investigated, and the non-fuel part size was selected according to maintaining the integrity of the non-fuel part and silicon carbide (SiC) layers. The non-fuel part size can greatly affect the FCM&trade; temperature and stress distribution by changing the distance between tristructural isotropic (TRISO) particles. The maximum temperature of the SiC matrix increased from 1220 K to 1450 K with the non-fuel part size increasing from 100 &mu;m to 500 &mu;m, and the matrix temperature of all the samples was lower than the decomposition point of SiC ceramics. The maximum hoop stress decreased with non-fuel part size, but the inner part exhibited a crosscurrent trend. The inner part of the SiC matrix lost structural integrity because of the large hoop stress caused by the deformation of TRISO particles; however, the non-fuel parts of the FCM&trade; pellet may maintain their integrity when the non-fuel part size is larger than 300 &mu;m. The hoop stress of the SiC layers increased with non-fuel part size, and the failure probability of the SiC layer was lower than 2.2 &times; 10&minus;4 for the FCM&trade; pellet with small non-fuel part size. The integrity of the non-fuel part and SiC layers can be maintained for the FCM&trade; pellet with a non-fuel part size from 300 &mu;m to 400 &mu;m.

]]>Computation doi: 10.3390/computation8010012

Authors: Lorenzo Agosta Erik G. Brandt Alexander Lyubartsev

Atomistic simulations can complement the scarce experimental data on free energies of molecules at bio-inorganic interfaces. In molecular simulations, adsorption free energy landscapes are efficiently explored with advanced sampling methods, but classical dynamics is unable to capture charge transfer and polarization at the solid&ndash;liquid interface. Ab initio simulations do not suffer from this flaw, but only at the expense of an overwhelming computational cost. Here, we introduce a protocol for adsorption free energy calculations that improves sampling on the timescales relevant to ab initio simulations. As a case study, we calculate adsorption free energies of the charged amino acids Lysine and Aspartate on the fully hydrated anatase (101) TiO2 surface using tight-binding forces. We find that the first-principle description of the system significantly contributes to the adsorption free energies, which is overlooked by calculations with previous methods.

]]>Computation doi: 10.3390/computation8010011

Authors: Natalia Menshutina Evgeniy Lebedev Andrey Kolnoochenko Pavel Tsygankov Igor Lebedev

The presented work is dedicated to the modeling of catalytic reactors using a multiscale approach, based on the combination of cellular automata and Computational Fluid Dynamics (CFD). This work describes the first step in the development of a complex model of catalytic reactors and considers the diffusion of components inside a porous structure of an aluminosilicate catalyst. Various cellular automata were used to generate virtual porous structures of catalysts with specific surface areas equal to 250, 500, and 700 m2/g and to calculate the effective diffusion coefficient for the substance transfer inside the catalysts. The obtained effective diffusion coefficient was included in the CFD model of a laboratory scale reactor simulating extraction of aniline from the catalyst with methanol. Results of numerical experiments carried out using the CFD model were compared with the corresponding experimental data. It is shown that the proposed approach is suitable for describing macroscopic and microscopic mass transfer phenomena on consideration of the catalyst&rsquo;s structure.
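A toy version of the first step above, estimating an effective diffusion coefficient inside a generated porous structure: random walkers on a periodic lattice with blocked sites, with D_eff recovered from the mean squared displacement via the 2D Einstein relation. The uncorrelated site-blocking lattice generator and all parameters are illustrative assumptions, not the paper&rsquo;s cellular automata:

```python
import random

def effective_diffusion(porosity=0.8, size=50, walkers=200, steps=400, seed=7):
    """Estimate D_eff (lattice units^2 per step) by random walks in a porous lattice."""
    rng = random.Random(seed)
    # Periodic square lattice; each site is open with probability `porosity`.
    open_site = [[rng.random() < porosity for _ in range(size)] for _ in range(size)]
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    msd = 0.0
    for _ in range(walkers):
        while True:  # start each walker on an open site
            x, y = rng.randrange(size), rng.randrange(size)
            if open_site[x][y]:
                break
        dx = dy = 0  # unwrapped displacement across periodic boundaries
        for _ in range(steps):
            mx, my = rng.choice(moves)
            if open_site[(x + mx) % size][(y + my) % size]:
                x, y = (x + mx) % size, (y + my) % size
                dx, dy = dx + mx, dy + my
        msd += dx * dx + dy * dy
    msd /= walkers
    return msd / (4.0 * steps)  # 2D Einstein relation: MSD = 4 * D * t
```

On a free lattice the walk gives D = 1/4 in these units; blocked sites reduce the estimate below that, mirroring how the effective diffusion coefficient obtained from the virtual catalyst structure feeds the reactor-scale CFD model.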

]]>Computation doi: 10.3390/computation8010010

Authors: Anusmriti Ghosh Mohammad S. Islam Suvash C. Saha

Aerosolized drug inhalation plays an important role in the treatment of respiratory diseases. The published in silico, in vivo, and in vitro studies have improved the knowledge of aerosol delivery in the human respiratory system. However, aerosolized magnetic nano-particle (MNP) transport and deposition (TD) for a specific position of the human lung is still unavailable in the literature. Therefore, this study aims to provide an understanding of magnetic nano-particle TD in the targeted region by imposing an external magnetic field, for the development of future therapeutics. Uniform aerosolized nano-particle TD in a specific position of the lung airways is modelled by adopting a turbulence k&ndash;&omega; low Reynolds number simulation. The Euler&ndash;Lagrange (E&ndash;L) approach and the magnetohydrodynamics (MHD) model are incorporated in the ANSYS Fluent (18.0) solver to investigate the targeted nano-particle TD. The human physical activity conditions of sleeping, resting, light activity, and fast breathing are considered in this study. The aerosolized drug particles are navigated to the targeted position under the influence of an external magnetic force (EMF), which is applied at two different positions of the two-generation lung airways. A numerical particle tracing model is also developed to predict the magnetic drug targeting behavior in the lung. The numerical results reveal that nano-particle deposition efficiency (DE) at the two different magnetic field positions differs for various physical activities, which could be helpful for targeted drug delivery to a specific region of the lung after extensive clinical trials. This process will also be cost-effective and will minimize unwanted side effects due to systemic drug distribution in the lung.

]]>Computation doi: 10.3390/computation8010009

Authors: Cornelius Demuth Andrés Fabián Lasagni

Functional surfaces characterised by periodic microstructures are sought in numerous technological applications. Direct laser interference patterning (DLIP) is a technique that allows the fabrication of microscopic periodic features on different materials, e.g., metals. The mechanisms at work during nanosecond pulsed DLIP of metal surfaces are not yet fully understood. In the present investigation, the heat transfer and fluid flow occurring in the metal substrate during the DLIP process are simulated using a smoothed particle hydrodynamics (SPH) methodology. The melt pool convection, driven by surface tension gradients constituting shear stresses according to the Marangoni boundary condition, is solved by an incompressible SPH (ISPH) method. The DLIP simulations reveal a distinct behaviour of the considered substrate materials, stainless steel and high-purity aluminium. In particular, the aluminium substrate exhibits a considerably deeper melt pool and remarkable velocity magnitudes of the thermocapillary flow during the patterning process. On the other hand, convection is less pronounced in the processing of stainless steel, whereas the surface temperature is consistently higher. Marangoni convection is therefore a conceivable effective mechanism in the structuring of aluminium at moderate fluences. The different character of the melt pool flow during DLIP of stainless steel and aluminium is confirmed by experimental observations.

]]>Computation doi: 10.3390/computation8010008

Authors: Ioannis K. Argyros Stepan Shakhno Halyna Yarmola

The technique of using a restricted convergence region is applied to study the semilocal convergence of the Newton&ndash;Kurchatov method. The analysis is provided under weak conditions on the derivatives and the first-order divided differences. Consequently, weaker sufficient convergence criteria and more accurate error estimates are obtained. A special case of the weak conditions is also considered.
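As a concrete illustration of the iteration being analyzed, the sketch below applies a scalar Newton&ndash;Kurchatov step &mdash; Newton&rsquo;s derivative for the differentiable part plus Kurchatov&rsquo;s divided difference for the nondifferentiable part &mdash; to a toy equation. The splitting, starting points, and test function are our own assumptions, not taken from the paper.

```python
# Scalar Newton-Kurchatov sketch: solve f(x) + g(x) = 0, where f is
# differentiable and g is merely continuous. The linearization combines
# f'(x_n) with Kurchatov's divided difference of g at the points
# 2*x_n - x_prev and x_prev. (Illustrative only, not the paper's scheme.)

def newton_kurchatov(f, df, g, x_prev, x, tol=1e-10, max_iter=50):
    """Two starting points x_prev, x; returns an approximate root of f + g."""
    for _ in range(max_iter):
        h = 2.0 * x - x_prev                        # Kurchatov's auxiliary point
        dd_g = (g(h) - g(x_prev)) / (h - x_prev)    # divided difference of g
        a = df(x) + dd_g                            # combined "derivative"
        x_prev, x = x, x - (f(x) + g(x)) / a
        if abs(f(x) + g(x)) < tol:
            break
    return x

# Example with a nondifferentiable part: solve x**2 - 3 + |x| = 0 (x > 0),
# whose positive root is (-1 + sqrt(13)) / 2.
root = newton_kurchatov(lambda t: t * t - 3.0, lambda t: 2.0 * t, abs, 0.5, 1.0)
```

A plain Newton step would need the derivative of |x|, which does not exist at 0; the divided difference sidesteps that while keeping fast convergence near the root.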

]]>Computation doi: 10.3390/computation8010007

Authors: Computation Editorial Office

The editorial team greatly appreciates the reviewers who have dedicated their considerable time and expertise to the journal’s rigorous editorial process over the past 12 months, regardless of whether the papers are finally published or not[...]

]]>Computation doi: 10.3390/computation8010006

Authors: Muhammad Anwar Ma’sum Hadaiq Rolis Sanabila Petrus Mursanto Wisnu Jatmiko

One of the challenges in machine learning is classification of multi-modal data, in which the samples of a class spread across several regions of the feature space, so the problem calls for a customized method. This study proposes a multi-codebook fuzzy neural network classifier that uses clustering and incremental learning to deal with multi-modal data classification. The clustering methods used are K-Means and GMM clustering. In the experiments, the proposed method achieved the highest performance on a synthetic dataset with 84.76% accuracy, and the highest performance on the benchmark dataset with 79.94% accuracy. Compared to the original version, the proposed method shows improvements of 24.9% and 4.7% on the synthetic and benchmark datasets, respectively. The proposed classifier also has better accuracy than a popular neural network, with margins of 10% and 4.7% on the synthetic and benchmark datasets, respectively.
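To make the multi-codebook idea concrete, the sketch below builds one K-Means codebook per class and classifies samples by their nearest codebook vector. This is a plain crisp nearest-prototype sketch, not the authors&rsquo; fuzzy neural network, and the dataset, cluster positions, and value of k are invented for illustration.

```python
import numpy as np

def init_centers(X, k):
    """Greedy farthest-point initialization (deterministic, k-means++-like)."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    return np.array(centers)

def kmeans(X, k, iters=30):
    """Plain K-Means returning k codebook vectors (centroids)."""
    centers = init_centers(X, k)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(axis=-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def fit_codebooks(X, y, k=2):
    """One codebook (set of centroids) per class, capturing multi-modality."""
    return {c: kmeans(X[y == c], k) for c in np.unique(y)}

def predict(codebooks, X):
    """Assign each sample to the class of its nearest codebook vector."""
    classes = sorted(codebooks)
    dists = np.stack([((X[:, None] - codebooks[c]) ** 2).sum(axis=-1).min(axis=1)
                      for c in classes], axis=1)
    return np.array(classes)[np.argmin(dists, axis=1)]

# Two-class bimodal synthetic data: class 0 clusters at (0,0) and (6,6),
# class 1 at (0,6) and (6,0) -- a single prototype per class would fail here.
rng = np.random.default_rng(1)
X0 = np.vstack([rng.normal((0, 0), 0.4, (50, 2)), rng.normal((6, 6), 0.4, (50, 2))])
X1 = np.vstack([rng.normal((0, 6), 0.4, (50, 2)), rng.normal((6, 0), 0.4, (50, 2))])
X, y = np.vstack([X0, X1]), np.array([0] * 100 + [1] * 100)
acc = float((predict(fit_codebooks(X, y), X) == y).mean())
```

The XOR-like layout shows why multiple codebook vectors per class matter: a single class centroid would sit between the two modes and misclassify half the points.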

]]>Computation doi: 10.3390/computation8010005

Authors: Antonio Gómez-Mompeán Rafael Lahoz-Beltra

Evolution of an individual within another individual is known as within-host dynamics (WHD). The most common modeling technique to study WHD involves ordinary differential equations (ODEs). In the field of biology, models of this kind assume, for example, that both the number of viruses and the number of mouse cells susceptible to being infected change according to their interaction as stated in the ODE model. However, viruses can undergo mutations and, consequently, evolve inside the mouse, whereas the mouse, in turn, displays evolutionary mechanisms through its immune system (e.g., clonal selection), defending against the invading virus. In this work, as the main novelty, we propose an evolutionary WHD model simulating the coexistence of an evolving invader within a host. In addition, instead of using ODEs, we developed an alternative methodology consisting of the hybridization of a genetic algorithm with an artificial immune system. Aside from its interest in biology and its potential clinical use, the proposed WHD model may be useful in those cases where the invader exhibits evolutionary changes, for instance, in the design of anti-virus software, intrusion detection algorithms in a corporation&rsquo;s computer systems, etc. The model successfully simulates two intruder detection paradigms (i.e., humoral detection and danger detection) in which the intruder represents an evolving invader or guest (e.g., virus, computer program) that infects a host (e.g., mouse, computer memory). The obtained results open up the possibility of simulating environments in which two entities (guest versus host) compete evolutionarily with each other when occupying the same space (e.g., organ cells, computer memory, network).

]]>Computation doi: 10.3390/computation8010004

Authors: Håvard H. Holm André R. Brodtkorb Martin L. Sætra

In this work, we examine the performance, energy efficiency, and usability when using Python for developing high-performance computing codes running on the graphics processing unit (GPU). We investigate the portability of performance and energy efficiency between Compute Unified Device Architecture (CUDA) and Open Compute Language (OpenCL); between GPU generations; and between low-end, mid-range, and high-end GPUs. Our findings showed that the impact of using Python is negligible for our applications, and furthermore, CUDA and OpenCL applications tuned to an equivalent level can in many cases obtain the same computational performance. Our experiments showed that performance in general varies more between different GPUs than between using CUDA and OpenCL. We also show that tuning for performance is a good way of tuning for energy efficiency, but that specific tuning is needed to obtain optimal energy efficiency.

]]>Computation doi: 10.3390/computation8010003

Authors: Gonzalo Fernandez Mariana Mendina Gabriel Usera

The use of Computational Fluid Dynamics (CFD) to assist in air quality studies in urban environments can provide accurate results for the dispersion of pollutants. However, due to the computational resources needed, simulation domain sizes tend to be limited. This study aims to improve the computational efficiency of an emission and dispersion model implemented in a CPU-based solver by migrating it to a CPU&ndash;GPU-based one. The migration of the functions that handle boundary conditions and source terms for the pollutants is explained, as well as the main differences present in the solvers used. Once implemented, the model was used to run simulations with both engines on different platforms, enabling the comparison between them and reaching promising time improvements in favor of the use of GPUs.

]]>Computation doi: 10.3390/computation8010002

Authors: Ikramullah Andri Afrizal Syifaul Huzni Sulaiman Thalib H. P. S. Abdul Khalil Samsul Rizal

The microbond test for natural fibers is difficult to conduct experimentally due to several challenges, including controlling the gap distance of the blade, the meniscus shape, and the large data spread. In this study, a finite element simulation was performed to investigate the effects of the bonding characteristics of the interface between the fiber and matrix on the Typha fiber/epoxy microbond test. Our aim was to obtain accurate mesh and cohesive properties via simulation of the Typha fiber/epoxy microbond test using the cohesive zone model technique. An axisymmetric model was generated to represent the microbond test specimen with a cohesive layer between the fiber and matrix. The cohesive parameters and mesh type were varied to determine the appropriate cohesive properties and mesh type. The fine mesh with 61,016 elements and cohesive properties comprising stiffness coefficients Knn = 2700 N/mm3, Ktt = 2700 N/mm3, and Kss = 2700 N/mm3; fracture energy of 15.15 N/mm; and damage initiation tnn = 270 N/mm2, ttt = 270 N/mm2, and tss = 270 N/mm2 were the most suitable. The cohesive zone model can describe the debonding process in the simulation of the Typha fiber/epoxy microbond test. Therefore, the results of the Typha fiber/epoxy microbond simulation can be used in the simulation of Typha fiber reinforced composites at the macro-scale.

]]>Computation doi: 10.3390/computation8010001

Authors: Ikha Magdalena Novry Erwina

In this paper, we study the maximum run-up of solitary waves on a sloping beach and over a reef through a non-hydrostatic model. We modify the non-hydrostatic model derived by Stelling and Zijlema, approximating the vertical structure by resolving the fluid depth into a two-layer system. In contrast to the two-layer model proposed by Stelling, here we have a block-tridiagonal matrix for the hydrodynamic pressure. The equations are then solved by applying a staggered finite volume method with a predictor&ndash;corrector step. For validation, several test cases are presented. The first test simulates the propagation of solitary waves over a flat bottom, for which good results in amplitude and shape preservation are obtained. Furthermore, run-up simulations are conducted for solitary waves climbing up a sloping beach, following the experimental set-up of Synolakis. In this case, two simulations are performed, with solitary waves of small and large amplitude. Again, good agreement is obtained, especially for the prediction of run-up height. Moreover, we validate our numerical scheme for wave run-up simulation over a reef, and the result confirms the experimental data.

]]>Computation doi: 10.3390/computation7040069

Authors: Ali Yousef Hosny Hamdy

This paper sequentially estimates the inverse coefficient of variation of the normal distribution using Hall&rsquo;s three-stage procedure. We derive theorems that facilitate finding a confidence interval for the inverse coefficient of variation with pre-determined width and coverage probability. We also discuss the sensitivity of the constructed confidence interval for detecting a possible shift in the inverse coefficient of variation. Finally, we find the asymptotic regret encountered in point estimation of the inverse coefficient of variation under the squared-error loss function with linear sampling cost. The asymptotic regret takes negative values, which indicates that three-stage sampling does better than the optimal fixed sample size had the population inverse coefficient of variation been known.
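The flavor of fixed-width sequential estimation can be sketched with a simplified two-stage (Stein-type) procedure: a pilot sample estimates &theta; = &mu;/&sigma;, the standard asymptotic variance approximation Var(&theta;&#770;) &asymp; (1 + &theta;&sup2;/2)/n fixes the sample size needed for a half-width-d interval, and the sample is topped up accordingly. This is an illustration of the idea only &mdash; Hall&rsquo;s three-stage procedure adds a fine-tuning stage and different design constants, and all numbers below are invented.

```python
import numpy as np

def inverse_cv_ci(draw, d=0.1, z=1.959964, n_pilot=30):
    """Fixed-width CI sketch for theta = mu/sigma; draw(n) samples n points.
    z is the standard-normal quantile for 95% coverage."""
    x = draw(n_pilot)
    theta = x.mean() / x.std(ddof=1)
    # asymptotic Var(theta_hat) ~ (1 + theta**2 / 2) / n  =>  required n
    n_req = int(np.ceil((z / d) ** 2 * (1.0 + theta ** 2 / 2.0)))
    if n_req > n_pilot:                        # second stage: top up the sample
        x = np.concatenate([x, draw(n_req - n_pilot)])
    theta = float(x.mean() / x.std(ddof=1))
    return theta, (theta - d, theta + d), len(x)

rng = np.random.default_rng(7)
draw = lambda n: rng.normal(10.0, 2.0, n)      # true theta = 10 / 2 = 5
theta, (lo, hi), n = inverse_cv_ci(draw)
```

The interval width is fixed at 2d by construction; what the procedure adapts is the sample size, which grows with the (unknown) magnitude of &theta; itself.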

]]>Computation doi: 10.3390/computation7040068

Authors: Canola Dai Negri

Conjugated singlet ground-state diradicals have received remarkable attention owing to their potential applications in optoelectronic devices. A distinctive character of these systems is the location of the double-exciton state, a low-lying excited state dominated by the doubly excited HOMO,HOMO&rarr;LUMO,LUMO configuration (where HOMO = highest occupied molecular orbital and LUMO = lowest unoccupied molecular orbital), which may influence optical and other photophysical properties. In this contribution, we investigate this specific excited state for a series of recently synthesized conjugated diradicals, employing time-dependent density functional theory (TDDFT) based on the unrestricted parallel-spin reference configuration in the spin-flip formulation (SF-TDDFT) and standard TD calculations based on the unrestricted antiparallel-spin reference configuration (TDUDFT). The quality of the computed results is assessed considering diradical and multiradical descriptors and the excited-state wavefunction composition.

]]>Computation doi: 10.3390/computation7040067

Authors: Salvatore Carta Andrea Corriga Anselmo Ferreira Diego Reforgiato Recupero Roberto Saia

Financial market forecasting is a challenging task for several reasons, such as the irregularity, high fluctuation, and noise of the involved data, as well as the peculiarly high unpredictability of the financial domain. Moreover, the literature does not offer a proper methodology to systematically identify the intrinsic parameters, hyper-parameters, input features, and base algorithms of a forecasting strategy so that it can automatically adapt itself to the chosen market. To tackle these issues, this paper introduces a fully automated optimized ensemble approach, in which an optimized feature selection process is combined with an automatic ensemble machine learning strategy, created by a set of classifiers whose intrinsic parameters and hyper-parameters are learned in each market under consideration. A series of experiments performed on different real-world futures markets demonstrate the effectiveness of the approach with regard both to the Buy and Hold baseline strategy and to several canonical state-of-the-art solutions.

]]>Computation doi: 10.3390/computation7040066

Authors: T. S. Müller W. Töws G. M. Pastor

Recent developments in the density-functional theory of electron correlations in many-body lattice models are reviewed. The theoretical framework of lattice density-functional theory (LDFT) is briefly recalled, giving emphasis to its universality and to the central role played by the single-particle density matrix &gamma;. The Hubbard model and the Anderson single-impurity model are considered as relevant explicit problems for the applications. Real-space and reciprocal-space approximations to the fundamental interaction-energy functional W[&gamma;] are introduced, in the framework of which the most important ground-state properties are derived. The predictions of LDFT are contrasted with available exact analytical results and state-of-the-art numerical calculations. Thus, the goals and limitations of the method are discussed.

]]>Computation doi: 10.3390/computation7040065

Authors: Szymon Śmiga Lucian A. Constantin Fabio Della Sala Eduardo Fabiano

The Laplacian of the electronic density diverges at the nuclear cusp, which complicates the development of Laplacian-level meta-GGA (LLMGGA) kinetic energy functionals for all-electron calculations. Here, we investigate some Laplacian renormalization methods, which avoid this divergence. We developed two different LLMGGA functionals, which improve the kinetic energy or the kinetic potential. We test these KE functionals in the context of Frozen-Density-Embedding (FDE), for a large palette of non-covalently interacting molecular systems. These functionals improve over the present state-of-the-art LLMGGA functionals for the FDE calculations.

]]>Computation doi: 10.3390/computation7040064

Authors: Rafael Lopez Frank Martinez Ignacio Ema Jose Manuel Garcia de la Vega Guillermo Ramirez

An algorithm for the efficient computation of the molecular electrostatic potential is reported. It is based on the partition/expansion of the density into (pseudo) atomic fragments with the method of Deformed Atoms in Molecules, which makes it possible to compute the potential as a sum of atomic contributions. These contributions are expressed as a series of irregular spherical harmonics times effective multipole moments and inverse multipole moments, including short-range terms. The problem is split into two steps. The first one consists of the partition/expansion of the density accompanied by the computation of multipole moments, and its cost depends on the size of the basis set used in the computation of the electron density within the Linear Combination of Atomic Orbitals framework. The second one is the actual computation of the electrostatic potential from the quantities calculated in the first step, and its cost depends on the number of computation points. For a precision in the electrostatic potential of six decimal figures, the algorithm leads to a dramatic reduction of the computation time with respect to calculation from the electron density matrix and integrals involving basis set functions.
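The payoff of the moment-based evaluation can be seen in a toy version: once a fragment&rsquo;s multipole moments are computed, the far-field potential follows from a few moments rather than from every charge. The point charges, expansion center, and truncation at the dipole term below are our own illustrative choices, with no connection to the paper&rsquo;s DAM partition or basis-set machinery.

```python
import numpy as np

rng = np.random.default_rng(0)
q = rng.uniform(-1.0, 1.0, 8)                # fragment "charges" (arbitrary units)
pos = rng.uniform(-0.5, 0.5, (8, 3))         # positions near the expansion center

def phi_exact(r):
    """Direct sum over every point charge."""
    return float(np.sum(q / np.linalg.norm(r - pos, axis=1)))

def phi_multipole(r):
    """Monopole + dipole terms of the expansion about the origin."""
    rn = np.linalg.norm(r)
    mono = q.sum() / rn                      # total charge / distance
    dip = q @ pos                            # dipole moment vector
    return float(mono + dip @ r / rn ** 3)

r_far = np.array([8.0, 3.0, 5.0])            # evaluation point far from the fragment
err = abs(phi_exact(r_far) - phi_multipole(r_far))
```

At a distance r much larger than the fragment size a, the truncation error falls off like (a/r)&sup2; relative to the leading term, which is why a short moment series suffices at many evaluation points.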

]]>Computation doi: 10.3390/computation7040063

Authors: Juan Ruiz-Rosero Gustavo Ramirez-Gonzalez Rahul Khanna

Field Programmable Gate Arrays (FPGAs) are general-purpose programmable logic devices that can be configured by a customer after manufacturing to perform anything from simple logic gate operations to complex systems on chip or even artificial intelligence systems. Scientific publications related to FPGAs started in 1992 and, up to now, we found more than 70,000 documents in the two leading scientific databases (Scopus and Clarivate Web of Science). These publications show the vast range of applications based on FPGAs, from the new mechanism that enables the magnetic suspension system for the kilogram redefinition to the Mars rovers&rsquo; navigation systems. This paper reviews the top FPGA applications through a scientometric analysis in ScientoPy, covering publications related to FPGAs from 1992 to 2018. We found the top 150 applications, which we divided into the following categories: digital control, communication interfaces, networking, computer security, cryptography techniques, machine learning, digital signal processing, image and video processing, big data, computer algorithms, and other applications. We also present an evolution and trend analysis of the related applications.

]]>Computation doi: 10.3390/computation7040062

Authors: Emilio San-Fabián Maroto Juan-Carlos Sancho-García

We briefly present some of the most modern and outstanding non-conventional density-functional theory (DFT) methods, which have largely broadened the field of applications with respect to more traditional calculations. The results of these ongoing efforts reveal that a DFT-inspired solution always exists even for pathological cases. Among the set of emerging methods, we specifically mention FT-DFT, OO-DFT, RSX-DFT, MC-PDFT, and FLOSIC-DFT, complementing the last generation of existing density functionals, such as local hybrid and double-hybrid expressions.

]]>Computation doi: 10.3390/computation7040061

Authors: Ali Reza Soltani Mohammad T. Baei

Density functional theory calculations were carried out to investigate the adsorption behavior of dopamine (DPM) on BN nanostructures in the gas and solvent phases. Our results revealed that the adsorption of DPM on BN nano-cages was stronger than on BN nanotubes. It was found that the adsorption of two DPM molecules (&minus;1.30 eV) upon B12N12 was weaker than that of a single DPM (&minus;1.41 eV). The Ga-doped B12N12 offered better conditions for the detection of DPM than the Al-doped B12N12 nano-cage. The solvation effects for the most stable systems were calculated and showed a positive impact on the adsorption behavior of the studied systems compared with the gas phase. The available results are expected to provide useful guidance for the adsorption of DPM and the generation of new hybrid compounds.

]]>Computation doi: 10.3390/computation7040060

Authors: Alexander Galashev Ksenia Ivanichkina Konstantin Katin Mikhail Maslov

Silicene is considered to be the most promising anode material for lithium-ion batteries. In this work, we show that transmutation doping makes silicene substantially more suitable for use as an anode material. Pristine and modified bilayer silicene was simulated on a graphite substrate using the classical molecular dynamics method. The parameters of the Morse potentials for the alloying elements were determined using quantum mechanical calculations. The main advantages of modified silicene are its low deformability during lithium intercalation and the possibility of obtaining a significantly higher battery charge capacity. Horizontal and vertical profiles of the lithium density, as well as distributions of the most significant stresses in the walls of the channels, were calculated in both undoped and doped systems with different gaps in the silicene channels. The energies of lithium adsorption on silicene, including phosphorus-doped silicene, were determined. High values of the self-diffusion coefficient of lithium atoms in the silicene channels were obtained, which ensures a high cycling rate. The calculations showed that such doping increased the normal stress on the walls of the lithium-filled channel by up to 67% but did not provoke a loss of mechanical strength. In addition, doping achieved a greater battery capacity and higher charging/discharging rates.

]]>Computation doi: 10.3390/computation7040059

Authors: Ágnes Nagy

A time-independent density functional theory for excited states of Coulomb systems has recently been proposed in a series of papers. It has been revealed that the Coulomb density determines not only its Hamiltonian, but the degree of excitation as well. A universal functional valid for any excited state has been constructed. The excited-state Kohn&ndash;Sham equations bear resemblance to those of the ground-state theory. In this paper, it is studied how the excited-state functionals behave under coordinate scaling. A few relations for the scaled exchange, correlation, exchange-correlation, and kinetic functionals are presented. These relations are expected to be advantageous for designing approximate functionals.

]]>Computation doi: 10.3390/computation7040058

Authors: Mou Zhang Dimitris Drikakis Lei Li Xiu Yan

Due to the complex physics of underwater explosion problems, it is difficult to derive analytical solutions with accurate results. In this study, a machine-learning method that trains a back-propagation neural network for parameter prediction is presented for the first time in the literature. The specific problem is the response of a structure submerged in water and subjected to shock loads produced by an underwater explosion, with the detonation point far enough from the structure that the loading wave can be regarded as a planar shock wave. The structure is represented by two rigid parallel plates connected by a linear spring and a linear dashpot, which simulate structural stiffness and damping, respectively. The simplified problem is analyzed theoretically by taking the Laplace transform of the governing equations, solving the resulting equations, and then taking the inverse Laplace transform. The coupled ordinary differential equations governing the motion of the system are also solved numerically by the fourth-order Runge&ndash;Kutta method and then verified by a finite element method using Ansys/LS-DYNA. Parametric training with the back-propagation neural network algorithm was conducted to delineate the effects of structural stiffness and damping on the attenuation of shock waves, the cavitation time, and the time of maximum momentum transfer. The prediction results agree well with the validation and test sample results.
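The time-marching half of that pipeline can be sketched in a few lines: a classical fourth-order Runge&ndash;Kutta step applied to a single-degree-of-freedom spring&ndash;dashpot surrogate driven by a decaying-exponential pulse. The mass, stiffness, damping, and pulse parameters below are hypothetical round numbers, and the scalar oscillator is only a stand-in for the paper&rsquo;s two-plate fluid&ndash;structure system.

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Surrogate system m*x'' + c*x' + k*x = p(t), with p(t) a decaying-exponential
# pressure pulse standing in for the planar shock load (all values hypothetical).
m, c, k = 1.0, 0.4, 25.0                     # mass, damping, stiffness
p = lambda t: 100.0 * np.exp(-t / 0.05)      # pulse: peak 100, 50 ms decay time

def rhs(t, y):                               # state y = [displacement, velocity]
    x, v = y
    return np.array([v, (p(t) - c * v - k * x) / m])

h, t, y = 1e-3, 0.0, np.array([0.0, 0.0])
peak = 0.0
for _ in range(2000):                        # march from rest to t = 2 s
    y = rk4_step(rhs, t, y, h)
    t += h
    peak = max(peak, y[0])                   # record maximum displacement
```

Because the pulse is short compared with the oscillator period, the response is nearly impulsive: the load delivers its momentum quickly and the plate then rings down under damping, which is exactly the regime where quantities like maximum momentum transfer are meaningful.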

]]>Computation doi: 10.3390/computation7040057

Authors: Fang Yu Yu Liu

A first-principles calculation program is used to investigate the structural, mechanical, and electronic properties of the cubic NiTi shape-memory alloy (SMA) with the B2 phase under high pressure. Physical parameters including the dimensionless ratio, elastic constants, Young&rsquo;s modulus, bulk modulus, shear modulus, ductile&ndash;brittle transition, elastic anisotropy, and Poisson&rsquo;s ratio are computed under different pressures. Results indicate that high pressure enhances the ability to resist volume deformation along with the ductility and metallic bonding, but the greatest resistance to elastic and shear deformation occurs at P = 35 GPa for the B2-phase NiTi SMA. Meanwhile, the strong anisotropy produced by the high pressure will promote the cross-slip of screw dislocations, thereby improving the plasticity of the B2-phase NiTi SMA. Additionally, the density of states (DOS) results reveal that the B2-phase NiTi SMA is essentially metallic, and that it is hard to induce a structural phase transition in the B2-phase NiTi SMA under high pressure, which provides valuable guidance for designing and applying the NiTi SMA under high pressure.
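Several of the quoted quantities follow from the three independent cubic elastic constants by standard closed-form (Voigt&ndash;Reuss&ndash;Hill) averages, collected in the snippet below. The input constants are illustrative values of roughly the right magnitude for B2 NiTi, not the pressure-dependent constants computed in the paper.

```python
def cubic_moduli(c11, c12, c44):
    """Voigt-Reuss-Hill polycrystal averages for a cubic crystal (GPa in/out)."""
    bulk = (c11 + 2.0 * c12) / 3.0                       # Voigt = Reuss for cubic B
    g_voigt = (c11 - c12 + 3.0 * c44) / 5.0
    g_reuss = 5.0 * (c11 - c12) * c44 / (4.0 * c44 + 3.0 * (c11 - c12))
    shear = 0.5 * (g_voigt + g_reuss)                    # Hill average
    young = 9.0 * bulk * shear / (3.0 * bulk + shear)
    poisson = (3.0 * bulk - 2.0 * shear) / (2.0 * (3.0 * bulk + shear))
    pugh = bulk / shear                                  # B/G > 1.75 suggests ductility
    zener = 2.0 * c44 / (c11 - c12)                      # elastic anisotropy index
    return dict(B=bulk, G=shear, E=young, nu=poisson, pugh=pugh, A=zener)

# Illustrative numbers only (not the paper's computed constants):
mod = cubic_moduli(c11=183.0, c12=146.0, c44=46.0)
```

The Pugh ratio B/G and the Zener anisotropy 2C44/(C11 &minus; C12) are the two scalars behind the abstract&rsquo;s ductility and cross-slip arguments: a ratio well above 1.75 signals ductile, metallic-bonded behavior, while a large anisotropy index eases cross-slip of screw dislocations.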

]]>Computation doi: 10.3390/computation7040056

Authors: Francesco Petrosino Stefano Curcio Sudip Chakraborty Giorgio De Luca

Adsorption of the phosphotriesterase on a polysulfone membrane surface was investigated in this paper through a double-scale computational approach. Surface charges of the enzyme, as well as membrane, were calculated at sub and nanoscale while protein adsorption was simulated at larger scale. Adsorption energies were calculated as a function of the enzyme&ndash;surface distance, and for each distance, several protein rotations were tested to find the most stable orientations of the macromolecule. The results of this model were useful in obtaining information about the adhesion of the enzyme and to give indications on the orientations of its binding site. Adsorption energies agreed with the literature data. Furthermore, the binding site of the immobilized phosphotriesterase was less accessible with respect to native enzymes due to the steric hindrance of the polymer surface; thus, a reduction of its efficiency is expected. The proposed methodology made use of fundamental quantities, calculated without resorting to adjustable or empirical parameters, providing basic outputs useful for ascertaining enzymatic catalysis rate.

]]>Computation doi: 10.3390/computation7040055

Authors: Mohammad Nasucha Josaphat T. Sri Sumantyo Cahya E. Santosa Peberlin Sitompul Agus H. Wahyudi Yang Yu Joko Widodo

Electromagnetic wave backscattering by corner reflectors in an anechoic chamber is studied using our developed computational tool. The tool applies the Finite-Difference Time-Domain (FDTD) method to simulate the propagation of the wave&rsquo;s electric and magnetic fields. Experimental measurement in an anechoic chamber is also carried out for comparison. The two results show agreement, including the finding that the backscatter intensity variation amongst the four circularly polarized modes is significantly smaller than the variation amongst the four linearly polarized modes.
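The field-update core of the FDTD method such a tool is built on can be sketched in one dimension: electric and magnetic fields live on a staggered (Yee) grid and are leapfrogged in time through the curl equations. This is a minimal vacuum sketch in normalized units with reflecting ends &mdash; the chamber simulations themselves are 3D and include absorbing boundaries and the corner-reflector geometry.

```python
import numpy as np

nz, nt = 400, 250
ez = np.zeros(nz)              # electric field samples (normalized units, c*dt = dz)
hy = np.zeros(nz - 1)          # magnetic field, staggered half a cell from ez

for n in range(nt):
    hy += np.diff(ez)                          # H update from the curl of E
    ez[1:-1] += np.diff(hy)                    # E update from the curl of H
    ez[50] += np.exp(-((n - 30) / 8.0) ** 2)   # soft Gaussian source at cell 50

peak_pos = int(np.argmax(np.abs(ez)))          # pulse has travelled away from cell 50
```

With the normalized "magic" time step (Courant number 1 in 1D), the injected Gaussian splits into left- and right-going pulses that advance exactly one cell per step, so after 250 steps the field maximum sits far from the source cell.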

]]>