Open Access Article
Multiresolution Modeling of Semidilute Polymer Solutions: Coarse-Graining Using Wavelet-Accelerated Monte Carlo
Computation 2017, 5(4), 44; doi:10.3390/computation5040044
Abstract
We present a hierarchical coarse-graining framework for modeling semidilute polymer solutions, based on the wavelet-accelerated Monte Carlo (WAMC) method. This framework forms a hierarchy of resolutions to model polymers at length scales that cannot be reached via atomistic or even standard coarse-grained simulations. Previously, it was applied to simulations examining the structure of individual polymer chains in solution using up to four levels of coarse-graining (Ismail et al., J. Chem. Phys., 2005, 122, 234901 and Ismail et al., J. Chem. Phys., 2005, 122, 234902), recovering the correct scaling behavior in the coarse-grained representation. In the present work, we extend this method to the study of polymer solutions, deriving the bonded and non-bonded potentials between coarse-grained superatoms from the single-chain statistics. A universal scaling function is obtained, which does not require recalculation of the potentials as the scale of the system is changed. To model semidilute polymer solutions, we assume the intermolecular potential between the coarse-grained beads to be equal to the non-bonded potential, which is a reasonable approximation in the case of semidilute systems. Thus, a minimal input of microscopic data is required for simulating the systems at the mesoscopic scale. We show that coarse-grained polymer solutions can reproduce results obtained from the more detailed atomistic system without a significant loss of accuracy. Full article
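For readers who want a concrete starting point, the sketch below shows a generic Boltzmann-inversion step for turning sampled superatom-superatom distance statistics into an effective pair potential. It is only an illustration under simplifying assumptions (reduced units, a uniform reference distribution, placeholder sample data) and is not the WAMC derivation used in the paper.

```python
import numpy as np

kB_T = 1.0  # reduced units; illustrative assumption

def boltzmann_invert(distances, bins=80, r_max=6.0):
    """Estimate an effective pair potential U(r) = -kT ln g(r) from sampled
    superatom-superatom distances. A generic Boltzmann-inversion sketch,
    not the exact WAMC derivation described in the paper."""
    p, edges = np.histogram(distances, bins=bins, range=(0.0, r_max), density=True)
    r = 0.5 * (edges[1:] + edges[:-1])
    ideal = 3.0 * r**2 / r_max**3          # distance density for uniform points in a sphere
    g = p / ideal                          # crude pair-correlation estimate
    U = np.where(g > 0, -kB_T * np.log(np.where(g > 0, g, 1.0)), np.inf)
    return r, U

# usage with placeholder "sampled" distances standing in for single-chain statistics
samples = np.random.default_rng(0).rayleigh(scale=1.5, size=20000)
r, U = boltzmann_invert(samples)
```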
Open Access Review
Self-Organizing Map for Characterizing Heterogeneous Nucleotide and Amino Acid Sequence Motifs
Computation 2017, 5(4), 43; doi:10.3390/computation5040043
Abstract
A self-organizing map (SOM) is an artificial neural network algorithm that can learn from training data consisting of objects expressed as vectors and perform non-hierarchical clustering to group input vectors into discretized clusters, with vectors assigned to the same cluster sharing similar numeric or alphanumeric features. SOM has been used widely in transcriptomics to identify co-expressed genes as candidates for co-regulated genes. I envision SOM to have great potential in characterizing heterogeneous sequence motifs, and aim to illustrate this potential by a parallel presentation of SOM with a set of numerical vectors and a set of equal-length sequence motifs. While there are numerous biological applications of SOM involving numerical vectors, few studies have used SOM for heterogeneous sequence motif characterization. This paper is intended to encourage (1) researchers to study SOM in this new domain and (2) computer programmers to develop user-friendly motif-characterization SOM tools for biologists. Full article
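As an illustration of how equal-length motifs can be fed to a SOM, the sketch below one-hot encodes nucleotide motifs and trains a bare-bones rectangular SOM in NumPy. The grid size, learning-rate schedule and toy motifs are arbitrary choices for demonstration, not those used in the paper.

```python
import numpy as np

ALPHABET = "ACGT"

def one_hot(motif):
    """Encode an equal-length nucleotide motif as a flat binary vector."""
    v = np.zeros((len(motif), len(ALPHABET)))
    for i, ch in enumerate(motif):
        v[i, ALPHABET.index(ch)] = 1.0
    return v.ravel()

def train_som(vectors, grid=(4, 4), epochs=200, lr0=0.5, sigma0=1.5, seed=0):
    """Bare-bones rectangular SOM: each node holds a weight vector the same
    length as the inputs; nodes near the best-matching unit are pulled
    toward each presented vector."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    w = rng.random((rows, cols, vectors.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)
        sigma = sigma0 * np.exp(-t / epochs)
        for x in vectors[rng.permutation(len(vectors))]:
            d = np.linalg.norm(w - x, axis=2)           # distance of x to every node
            bmu = np.unravel_index(np.argmin(d), d.shape)
            h = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=2) / (2 * sigma**2))
            w += lr * h[..., None] * (x - w)            # neighborhood-weighted update
    return w

motifs = ["ACGTAC", "ACGTTC", "TTGACA", "TTGCCA"]       # toy equal-length motifs
W = train_som(np.array([one_hot(m) for m in motifs]))
```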
Open Access Feature Paper Article
A Diagonally Updated Limited-Memory Quasi-Newton Method for the Weighted Density Approximation
Computation 2017, 5(4), 42; doi:10.3390/computation5040042
Abstract
We propose a limited-memory quasi-Newton method using the bad Broyden update and apply it to the nonlinear equations that must be solved to determine the effective Fermi momentum in the weighted density approximation for the exchange energy density functional. This algorithm has advantages for nonlinear systems of equations with diagonally dominant Jacobians, because it is easy to generalize the method to allow for periodic updates of the diagonal of the Jacobian. Systematic tests of the method for atoms show that one can determine the effective Fermi momentum at thousands of points in less than fifteen iterations. Full article
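The core update is easy to state: Broyden's "bad" (second) method revises an approximate inverse Jacobian directly. The sketch below is a dense, full-matrix illustration with a diagonal initial guess; the limited-memory storage and periodic diagonal refresh described in the paper are not reproduced here, and the test system is hypothetical.

```python
import numpy as np

def broyden_bad(F, x0, tol=1e-10, max_iter=50, diag0=None):
    """Solve F(x) = 0 with Broyden's 'bad' (second) update, which revises an
    approximate inverse Jacobian H directly:
        H_{k+1} = H_k + (s_k - H_k y_k) y_k^T / (y_k^T y_k),
    with s_k = x_{k+1} - x_k and y_k = F(x_{k+1}) - F(x_k).
    A dense illustration only; the paper's method is limited-memory and
    periodically refreshes the diagonal of the Jacobian."""
    x = np.asarray(x0, dtype=float)
    f = F(x)
    H = np.diag(1.0 / diag0) if diag0 is not None else np.eye(len(x))
    for _ in range(max_iter):
        if np.linalg.norm(f) < tol:
            break
        s = -H @ f
        x_new = x + s
        f_new = F(x_new)
        y = f_new - f
        H += np.outer(s - H @ y, y) / (y @ y)
        x, f = x_new, f_new
    return x

# usage on a small, diagonally dominant toy system
F = lambda x: np.array([4 * x[0] + np.sin(x[1]) - 1.0,
                        np.cos(x[0]) + 5 * x[1] - 2.0])
root = broyden_bad(F, np.zeros(2), diag0=np.array([4.0, 5.0]))
```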
Open Access Article
Modified Equation of Shock Wave Parameters
Computation 2017, 5(3), 41; doi:10.3390/computation5030041
Abstract
Among the various blast load equations, the Kingery-Bulmash equation is applicable to both free-air bursts and surface bursts, and enables calculation of the parameters of a pressure-time history curve. However, this equation is quite complicated. This paper proposes a modified equation that may replace the conventional Kingery-Bulmash equation. The proposed modified equation, constructed by curve-fitting the original equation, requires a briefer calculation process and has a simpler form than the original. The modified equation is also applicable to both types of bursts and covers the same calculable scaled-distance range as the conventional equation. The calculation results obtained using the modified equation were similar to those obtained from the original equation, with less than 1% difference. Full article
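For orientation, the conventional Kingery-Bulmash parameters are expressed as polynomials in the logarithm of the Hopkinson-Cranz scaled distance Z = R/W^(1/3). The sketch below shows only this generic functional form with placeholder coefficients; the actual fitted coefficients of the proposed modified equation are given in the paper, not here.

```python
import numpy as np

def scaled_distance(standoff_m, charge_kg_tnt):
    """Hopkinson-Cranz scaled distance Z = R / W**(1/3), in m/kg^(1/3)."""
    return standoff_m / charge_kg_tnt ** (1.0 / 3.0)

def log_poly_parameter(Z, coeffs):
    """Generic Kingery-Bulmash-style fit: a blast parameter is modeled as
    10**P(log10 Z) with P a polynomial. The coefficients passed in below are
    placeholders for illustration, not the values fitted in the paper."""
    logZ = np.log10(Z)
    return 10.0 ** np.polyval(coeffs, logZ)

# hypothetical coefficients: P(u) = 0.1*u**2 - 1.4*u + 2.0
demo_coeffs = [0.1, -1.4, 2.0]
Z = scaled_distance(standoff_m=10.0, charge_kg_tnt=100.0)
peak_overpressure_kpa = log_poly_parameter(Z, demo_coeffs)
```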
Open Access Article
Performance Comparison of Feed-Forward Neural Networks Trained with Different Learning Algorithms for Recommender Systems
Computation 2017, 5(3), 40; doi:10.3390/computation5030040
Abstract
Accuracy improvement is among the primary research focuses in the area of recommender systems. Traditionally, recommender systems work on two sets of entities, Users and Items, to estimate a single rating that represents a user’s acceptance of an item. This technique was later extended to multi-criteria recommender systems, which derive an overall rating from multiple criteria ratings to estimate the degree of acceptance of items by users. The primary concern that remains open in the recommender systems community is finding suitable optimization algorithms that can explore the relationships between multiple ratings to compute an overall rating. One of the approaches for doing this is to treat the overall rating as an aggregation of the multiple criteria ratings. Given this assumption, this paper proposes using feed-forward neural networks to predict the overall rating. Five powerful training algorithms have been tested, and the results of their performance are analyzed and presented in this paper. Full article
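A minimal sketch of the aggregation idea: a small feed-forward network maps the criteria ratings of a (user, item) pair to an overall rating. It assumes scikit-learn is available, uses synthetic data, and uses a single stand-in training algorithm rather than the five compared in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# toy data: 3 criteria ratings per (user, item) pair and a synthetic overall rating
X = rng.integers(1, 6, size=(500, 3)).astype(float)
y = X @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 0.2, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# one hidden layer; the default 'adam' solver stands in for the algorithms compared in the paper
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("test R^2:", model.score(X_te, y_te))
```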
Open Access Review
Time-Dependent Density-Functional Theory and Excitons in Bulk and Two-Dimensional Semiconductors
Computation 2017, 5(3), 39; doi:10.3390/computation5030039
Abstract
In this work, we summarize the recent progress made in constructing time-dependent density-functional theory (TDDFT) exchange-correlation (XC) kernels capable of describing excitonic effects in semiconductors, and apply these kernels in two important cases: a “classic” bulk semiconductor, GaAs, with weakly-bound excitons and a novel two-dimensional material, MoS2, with very strongly-bound excitonic states. Namely, after a brief review of the standard many-body semiconductor Bloch and Bethe-Salpeter equations (SBE and BSE) and the combined TDDFT+BSE approach, we proceed with details of the proposed pure TDDFT XC kernels for excitons. We analyze the reasons for the successes and failures of these kernels in describing the excitons in bulk GaAs and monolayer MoS2, and conclude with a discussion of possible alternative kernels capable of accurately describing the bound electron-hole states in both bulk and two-dimensional materials. Full article
Open Access Article
CFD-PBM Approach with Different Inlet Locations for the Gas-Liquid Flow in a Laboratory-Scale Bubble Column with Activated Sludge/Water
Computation 2017, 5(3), 38; doi:10.3390/computation5030038
Abstract
A novel computational fluid dynamics-population balance model (CFD-PBM) for the simulation of gas mixing in activated sludge (i.e., an opaque non-Newtonian liquid) in a bubble column is developed and described, addressing the difficulty of measuring the hydrodynamic behavior of opaque non-Newtonian liquid-gas two-phase flow. We study the effects of the inlet position and liquid-phase properties (water/activated sludge) on various characteristics, such as the liquid flow field, gas hold-up, liquid dynamic viscosity, and volume-averaged bubble diameter. As the inlet position changed, two symmetric vortices gradually became a single main vortex in the flow field of the bubble column. In the simulations, the global gas hold-up in the bubble column was higher when water was the liquid phase than when activated sludge was the liquid phase, and the flow field varied dynamically with time. Additionally, when activated sludge was used as the liquid phase, no periodic velocity changes were found. When the inlet position was varied, the non-Newtonian liquid phase had different peak values and distributions of (dynamic) liquid viscosity in the bubble column, which were related to the gas hold-up: the high gas hold-up zone corresponded to the low dynamic viscosity zone. Finally, when activated sludge was the liquid phase, the volume-averaged bubble diameter was much larger than when water was the liquid phase. Full article
Open Access Article
A Non-Isothermal Chemical Lattice Boltzmann Model Incorporating Thermal Reaction Kinetics and Enthalpy Changes
Computation 2017, 5(3), 37; doi:10.3390/computation5030037
Abstract
The lattice Boltzmann method is an efficient computational fluid dynamics technique that can accurately model a broad range of complex systems. As well as single-phase fluids, it can simulate thermohydrodynamic systems and passive scalar advection. In recent years, it has also gained attention as a means of simulating chemical phenomena, as interest in self-organization processes has increased. This paper will present a widely-used and versatile lattice Boltzmann model that can simultaneously incorporate fluid dynamics, heat transfer, buoyancy-driven convection, passive scalar advection, chemical reactions and enthalpy changes. All of these effects interact in a physically accurate framework that is simple to code and readily parallelizable. As well as a complete description of the model equations, several example systems will be presented in order to demonstrate the accuracy and versatility of the method. New simulations, which analyzed the effect of a reversible reaction on the transport properties of a convecting fluid, will also be described in detail. This extra chemical degree of freedom was utilized by the system to augment its net heat flux. The numerical method outlined in this paper can be readily deployed for a vast range of complex flow problems, spanning a variety of scientific disciplines. Full article
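To make the basic machinery concrete, the sketch below implements a minimal D2Q9 BGK lattice Boltzmann loop with a toy first-order reactive sink added to the collision step. It is a generic single-field illustration on a periodic domain, not the fully coupled thermal/chemical model presented in the paper.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

NX, NY, TAU, RATE = 64, 64, 0.8, 0.01
f = np.ones((9, NX, NY)) * w[:, None, None]           # uniform initial concentration, zero velocity

def equilibrium(rho, ux, uy):
    """Standard second-order BGK equilibrium distribution."""
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)

for step in range(100):
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    feq = equilibrium(rho, ux, uy)
    source = -RATE * rho                               # toy first-order reaction sink
    f += -(f - feq) / TAU + w[:, None, None] * source  # BGK collision + reactive source
    for i in range(9):                                 # streaming along each lattice velocity
        f[i] = np.roll(f[i], shift=(c[i, 0], c[i, 1]), axis=(0, 1))
```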
Open Access Article
TFF (v.4.1): A Mathematica Notebook for the Calculation of One- and Two-Neutron Stripping and Pick-Up Nuclear Reactions
Computation 2017, 5(3), 36; doi:10.3390/computation5030036
Abstract
The program TFF calculates stripping single-particle form factors for one-neutron transfer in prior representation with appropriate perturbative treatment of recoil. Coupled equations are then integrated along a semiclassical trajectory to obtain one- and two-neutron transfer amplitudes and probabilities within first- and second-order perturbation theory. Total and differential cross-sections are then calculated by folding with a transmission function (obtained from a phenomenological imaginary absorption potential). The program description, user instructions and examples are discussed. Full article
Open Access Article
Using an Interactive Lattice Boltzmann Solver in Fluid Mechanics Instruction
Computation 2017, 5(3), 35; doi:10.3390/computation5030035
Abstract
This article gives an overview of the diverse range of teaching applications that can be realized using an interactive lattice Boltzmann simulation tool in fluid mechanics instruction and outreach. In an inquiry-based learning framework, examples are given of learning scenarios that address instruction on scientific results, scientific methods or the scientific process at varying levels of student activity, from consuming to applying to researching. Interactive live demonstrations on portable hardware enable new and innovative teaching concepts for fluid mechanics, including for large audiences and in the early stages of university education. Moreover, selected examples successfully demonstrate that the integration of high-fidelity CFD methods into fluid mechanics teaching facilitates high-quality student research work within reach of the current state of the art in the respective field of research. Full article
Open Access Article
Tensor-Based Semantically-Aware Topic Clustering of Biomedical Documents
Computation 2017, 5(3), 34; doi:10.3390/computation5030034
Abstract
Biomedicine is a pillar of the collective, scientific effort of human self-discovery, as well as a major source of humanistic data codified primarily in biomedical documents. Despite their rigid structure, maintaining and updating a considerably-sized collection of such documents is a task of overwhelming complexity, mandating efficient information retrieval through the integration of clustering schemes. The latter should work natively with inherently multidimensional data and higher-order interdependencies. Additionally, past experience indicates that clustering should be semantically enhanced. Tensor algebra is the key to extending the current term-document model to more dimensions. In this article, an alternative keyword-term-document strategy is proposed, based on the scientometric observation that keywords typically possess more expressive power than ordinary text terms; its algorithmic cornerstones are third-order tensors and MeSH ontological functions. This strategy has been compared against a baseline using two different biomedical datasets, the TREC (Text REtrieval Conference) genomics benchmark and a large custom set of cognitive science articles from PubMed. Full article
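A minimal sketch of the keyword-term-document idea: build a third-order co-occurrence tensor and unfold it so that each document becomes a row vector ready for clustering. The toy documents are hypothetical, and the MeSH-based semantic enrichment described in the article is omitted.

```python
import numpy as np

docs = [
    {"id": 0, "terms": ["neuron", "memory", "cortex"], "keywords": ["cognition"]},
    {"id": 1, "terms": ["gene", "memory", "expression"], "keywords": ["genomics"]},
]
terms = sorted({t for d in docs for t in d["terms"]})
keywords = sorted({k for d in docs for k in d["keywords"]})

# third-order keyword x term x document co-occurrence tensor
T = np.zeros((len(keywords), len(terms), len(docs)))
for d in docs:
    for k in d["keywords"]:
        for t in d["terms"]:
            T[keywords.index(k), terms.index(t), d["id"]] += 1.0

# document-mode unfolding: one row per document, ready for any clustering routine
doc_matrix = T.reshape(len(keywords) * len(terms), len(docs)).T
```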
Open Access Article
A Discrete Approach to Meshless Lagrangian Solid Modeling
Computation 2017, 5(3), 33; doi:10.3390/computation5030033
Abstract
The author demonstrates a stable Lagrangian solid modeling method, tracking the interactions of solid mass particles rather than using a meshed grid. This numerical method avoids the problem of tensile instability often seen with smooth particle applied mechanics by having the solid particles apply stresses expected with Hooke’s law, as opposed to using a smoothing function for neighboring solid particles. This method has been tested successfully with a bar in tension, compression, and shear, as well as a disk compressed into a flat plate, and the numerical model consistently matched the analytical Hooke’s law as well as Hertz contact theory for all examples. The solid modeling numerical method was then built into a 2-D model of a pressure vessel, which was tested with liquid water particles under pressure and simulated with smoothed particle hydrodynamics. This simulation was stable, and demonstrated the feasibility of Lagrangian specification modeling for fluid–solid interactions. Full article
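The central idea, pairwise Hooke's-law interactions between solid particles instead of SPH smoothing kernels, can be sketched as follows. The neighbor list, stiffness and explicit time-stepping scheme here are simplified placeholders, not the stress formulation used in the paper.

```python
import numpy as np

def hooke_pair_forces(pos, neighbors, rest_lengths, k=100.0):
    """Sum pairwise spring (Hooke's law) forces on each particle.
    `neighbors` is a list of (i, j) particle pairs fixed at initialization;
    a simplified stand-in for the stress interactions described in the paper."""
    forces = np.zeros_like(pos)
    for (i, j), L0 in zip(neighbors, rest_lengths):
        d = pos[j] - pos[i]
        L = np.linalg.norm(d)
        fij = k * (L - L0) * d / L        # attractive if stretched, repulsive if compressed
        forces[i] += fij
        forces[j] -= fij
    return forces

# usage: explicit time stepping of a two-particle "bar" initially in tension
pos = np.array([[0.0, 0.0], [1.1, 0.0]])
vel = np.zeros_like(pos)
pairs, rest = [(0, 1)], [1.0]
dt, mass = 1e-3, 1.0
for _ in range(1000):
    vel += dt * hooke_pair_forces(pos, pairs, rest) / mass
    pos += dt * vel
```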
Open Access Article
Anomalous Diffusion within the Transcriptome as a Bio-Inspired Computing Framework for Resilience
Computation 2017, 5(3), 32; doi:10.3390/computation5030032
Abstract
Much of biology-inspired computer science is based on the Central Dogma, as implemented with genetic algorithms or evolutionary computation. That 60-year-old biological principle, based on the genome, transcriptome and proteome, is becoming overshadowed by a new paradigm of complex ordered associations and connections between layers of biological entities, such as interactomes, metabolomes, etc. We define a new hierarchical concept, the “Connectosome”, and propose new avenues for computational data structures based on a conceptual framework called the “Grand Ensemble”, which contains the Central Dogma as a subset. Connectedness and communication within and between living or biology-inspired systems comprise ensembles from which a physical computing system can be conceived. In this framework, the delivery of messages is filtered by size and by a simple and rapid semantic analysis of their content. This work aims to initiate discussion on the Grand Ensemble in network biology as a representation of a Persistent Turing Machine. This framework, which adds interaction and persistence to the classic Turing-machine model, uses resilience-based metrics that have applications to dynamic optimization problem solving in Genetic Programming. Full article
Open Access Article
Artificial Immune Classifier Based on ELLipsoidal Regions (AICELL)
Computation 2017, 5(2), 31; doi:10.3390/computation5020031
Abstract
Pattern classification is a central problem in machine learning, with a wide array of applications, and rule-based classifiers are one of the most prominent approaches. Among these classifiers, Incremental Rule Learning algorithms combine the advantages of the classic Pittsburgh and Michigan approaches, while classifiers using fuzzy membership functions often result in systems with fewer rules and better generalization ability. To discover an optimal set of rules, learning classifier systems have always relied on bio-inspired models, mainly genetic algorithms. In this paper we propose a classification algorithm based on an efficient bio-inspired approach, Artificial Immune Networks. The proposed algorithm encodes the patterns as antigens, and evolves a set of antibodies, representing fuzzy classification rules with ellipsoidal surfaces, to cover the problem space. The innate immune mechanisms of affinity maturation and diversity preservation are modified and adapted to the classification context, resulting in a classifier that combines the advantages of both incremental rule learning and fuzzy classifier systems. The algorithm is compared to a number of state-of-the-art rule-based classifiers, as well as Support Vector Machines (SVM), producing very satisfying results, particularly in problems with a large number of attributes and classes. Full article
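The notion of a fuzzy classification rule with an ellipsoidal surface can be illustrated with a quadratic-form membership function, as in the sketch below. The rule centers, shape matrices and class labels are hypothetical; the immune-inspired affinity maturation and diversity mechanisms of AICELL are not shown.

```python
import numpy as np

def ellipsoidal_membership(x, center, shape):
    """Fuzzy membership of pattern x in an ellipsoidal rule: 1 at the center,
    decaying with the quadratic-form distance (x - c)^T A (x - c).
    A generic illustration of fuzzy ellipsoidal rules, not the AICELL update rules."""
    d = x - center
    return float(np.exp(-d @ shape @ d))

def classify(x, rules):
    """Assign the class of the rule (antibody) with the highest membership."""
    label, _ = max(((lbl, ellipsoidal_membership(x, c, A)) for lbl, c, A in rules),
                   key=lambda t: t[1])
    return label

# two hypothetical rules in a 2-D feature space
rules = [("class_A", np.array([0.0, 0.0]), np.diag([4.0, 1.0])),
         ("class_B", np.array([3.0, 3.0]), np.diag([1.0, 1.0]))]
print(classify(np.array([0.5, -0.2]), rules))
```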
Open Access Article
Theoretical Prediction of Electronic Structures and Phonon Dispersion of Ce2XN2 (X = S, Se, and Te) Ternary
Computation 2017, 5(2), 29; doi:10.3390/computation5020029
Abstract
A systematic study of the structural, electronic, and vibrational properties of the new ternary dicerium selenide dinitride, Ce2SeN2, and the predicted compounds Ce2SN2 and Ce2TeN2 is performed using first-principles calculations within the Perdew–Burke–Ernzerhof functional with Hubbard correction. Our calculated structural parameters agree well with the experimental measurements. We predict that all ternary dicerium chalcogenide nitrides are thermodynamically stable. The predicted elastic constants and related mechanical properties demonstrate their profound mechanical stability as well. Moreover, our results show that the Ce2XN2 compounds are insulating materials. Trends in the structural parameters, electronic structures, and phonon dispersion are discussed in terms of the characteristics of the Ce (4f) states. Full article
Open Access Article
Levy-Lieb-Based Monte Carlo Study of the Dimensionality Behaviour of the Electronic Kinetic Functional
Computation 2017, 5(2), 30; doi:10.3390/computation5020030
Abstract
We consider a gas of interacting electrons in the limit of nearly uniform density and treat the one-dimensional (1D), two-dimensional (2D) and three-dimensional (3D) cases. We focus on the determination of the correlation part of the kinetic functional by employing a Monte Carlo sampling technique of electrons in space, based on an analytic derivation via the Levy-Lieb constrained search principle. Of particular interest is the question of the behaviour of the functional as one passes from 1D to 3D; according to the basic principles of Density Functional Theory (DFT), the form of the universal functional should be independent of the dimensionality. However, in practice, the straightforward use of current approximate functionals in different dimensions is problematic. Here, we show that going from the 3D to the 2D case the functional form is consistent (a concave function), but in 1D it becomes convex; such a drastic difference is peculiar to 1D electron systems, as it is for other quantities. Given the interesting behaviour of the functional, this study represents a basic first-principles approach to the problem and suggests further investigations using highly accurate (though expensive) many-electron computational techniques, such as Quantum Monte Carlo. Full article
Open Access Article
Geometric Derivation of the Stress Tensor of the Homogeneous Electron Gas
Computation 2017, 5(2), 28; doi:10.3390/computation5020028
Abstract
The foundation of many approximations in time-dependent density functional theory (TDDFT) lies in the theory of the homogeneous electron gas. However, unlike ground-state DFT, in which the exchange-correlation potential of the homogeneous electron gas is known exactly via quantum Monte Carlo calculations, the time-dependent or frequency-dependent dynamical potential of the homogeneous electron gas is not known exactly, due to the absence of a similar variational principle for excited states. In this work, we present a simple geometric derivation of the time-dependent dynamical exchange-correlation potential for the homogeneous system. With this derivation, the dynamical potential can be expressed in terms of the stress tensor, offering an alternative way to calculate the bulk and shear moduli, two key input quantities in TDDFT. Full article
Open Access Article
Energetic Study of Clusters and Reaction Barrier Heights from Efficient Semilocal Density Functionals
Computation 2017, 5(2), 27; doi:10.3390/computation5020027
Abstract
The accurate first-principles prediction of the energetic properties of molecules and clusters from efficient semilocal density functionals is of broad interest. Here we study the performance of the non-empirical Tao-Mo (TM) density functional on the binding energies and excitation energies of titanium dioxide and water clusters, as well as on reaction barrier heights. To make a comparison, a combination of the TM exchange part with the TPSS (Tao–Perdew–Staroverov–Scuseria) correlation functional—called TMTPSS—is also included in this study. Our calculations show that the best binding energies of titanium dioxide are predicted by PBE0 (the Perdew–Burke–Ernzerhof hybrid functional), TM, and TMTPSS with nearly the same accuracy, while B3LYP (Becke's three-parameter exchange with Lee-Yang-Parr correlation), TPSS, and PBE (Perdew–Burke–Ernzerhof) yield larger mean absolute errors. For the excitation energies of titanium dioxide and water clusters, PBE0 and B3LYP are the most accurate functionals, outperforming the semilocal functionals due to the nonlocality problem suffered by the latter. Nevertheless, the TMTPSS and TM functionals are still accurate semilocal methods, improving upon the commonly-used TPSS and PBE functionals. We also find that the best reaction barrier heights are predicted by PBE0 and B3LYP, thanks to the nonlocality incorporated into these two hybrid functionals, but TMTPSS and TM are clearly more accurate than SCAN (Strongly Constrained and Appropriately Normed), TPSS, and PBE, suggesting the good performance of TM and TMTPSS for physically different systems and properties. Full article
Open Access Article
Deep Visual Attributes vs. Hand-Crafted Audio Features on Multidomain Speech Emotion Recognition
Computation 2017, 5(2), 26; doi:10.3390/computation5020026
Abstract
Emotion recognition from speech may play a crucial role in many applications related to human–computer interaction or to understanding the affective state of users in certain tasks, where other modalities such as video or physiological parameters are unavailable. In general, a human’s emotions may be recognized using several modalities, such as analyzing facial expressions, speech, or physiological parameters (e.g., electroencephalograms, electrocardiograms). However, measuring these modalities may be difficult, obtrusive or require expensive hardware. In that context, speech may be the best alternative modality in many practical applications. In this work we present an approach that uses a Convolutional Neural Network (CNN) functioning as a visual feature extractor and trained using raw speech information. In contrast to traditional machine learning approaches, CNNs are responsible for identifying the important features of the input, thus making hand-crafted feature engineering optional in many tasks. In this paper, no extra features are required other than the spectrogram representations; hand-crafted features were extracted only to validate our method. Moreover, the approach does not require any linguistic model and is not specific to any particular language. We compare the proposed approach using cross-language datasets and demonstrate that it is able to provide superior results compared with traditional approaches that use hand-crafted features. Full article
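A minimal sketch of the pipeline: convert raw speech to a log-power spectrogram and treat it as a single-channel image for a small CNN classifier. It assumes SciPy and TensorFlow/Keras are available; the layer sizes, the four emotion classes and the dummy waveform are placeholders rather than the architecture or data used in the paper.

```python
import numpy as np
from scipy.signal import spectrogram
import tensorflow as tf

def to_log_spectrogram(waveform, fs=16000):
    """Raw speech -> log-power spectrogram used as a single-channel 'image'."""
    f, t, S = spectrogram(waveform, fs=fs, nperseg=400, noverlap=240)
    return np.log(S + 1e-10)[..., np.newaxis]

# a small CNN in the spirit of "visual" feature extraction from spectrograms
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(201, 128, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalMaxPooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),    # placeholder emotion classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# usage with a dummy 1 s waveform, padded/cropped to a fixed number of frames
wave = np.random.default_rng(0).normal(size=16000)
spec = to_log_spectrogram(wave)
spec = np.pad(spec, ((0, 0), (0, max(0, 128 - spec.shape[1])), (0, 0)))[:, :128, :]
pred = model.predict(spec[np.newaxis])                 # batch of one spectrogram "image"
```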
Open Access Article
Numerical Simulation of the Laminar Forced Convective Heat Transfer between Two Concentric Cylinders
Computation 2017, 5(2), 25; doi:10.3390/computation5020025
Abstract
The dual reciprocity method (DRM) is a highly efficient numerical method for transforming the domain integrals arising from the non-homogeneous term of the Poisson equation into equivalent boundary integrals. In this paper, the velocity and temperature fields of laminar forced heat convection in a concentric annular tube, with constant heat flux boundary conditions, have been studied using numerical simulations. The DRM has been used to solve the governing equation, which is expressed in the form of a Poisson equation. A test problem is employed to verify the DRM solutions with different boundary element discretizations and numbers of internal points. The results of the numerical simulations are discussed and compared with exact analytical solutions. Good agreement between the numerical results and exact solutions is evident, as the maximum relative errors are less than 5-6%, and the R2 values are greater than 0.999 in all cases. These results confirm the effectiveness and accuracy of the proposed numerical model, which is based on the DRM. Full article
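The dual reciprocity step itself is compact: the non-homogeneous term is expanded in radial basis functions whose particular solutions are known in closed form, so the domain integral can be shifted to the boundary. The sketch below shows only this expansion for the common choice f = 1 + r in 2-D, with hypothetical collocation nodes; the paper's full annular-tube solver is not reproduced.

```python
import numpy as np

def drm_expand(nodes, b_values):
    """Dual reciprocity step: approximate the non-homogeneous term b(x) by
    radial basis functions f_j(x) = 1 + r_j, with r_j = |x - x_j|, by
    collocation at the boundary and internal nodes. For this choice of f the
    2-D particular solution is u_hat_j = r_j**2/4 + r_j**3/9, so that
    Laplacian(u_hat_j) = f_j. A generic DRM building block only, not the
    paper's full annulus solver."""
    r = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=2)
    F = 1.0 + r                        # interpolation matrix f_j(x_i)
    alpha = np.linalg.solve(F, b_values)
    U_hat = r**2 / 4.0 + r**3 / 9.0    # particular solutions evaluated at the nodes
    return alpha, U_hat

# usage: expand a simple source term over a few collocation nodes
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
b = np.ones(len(nodes))                # e.g., constant forcing in the Poisson equation
alpha, U_hat = drm_expand(nodes, b)
```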