
Mathematics, Volume 8, Issue 11 (November 2020) – 244 articles

Cover Story: In Snowdrift-type games, Cooperators and Defectors coexist at a stable equilibrium. However, if the equilibrium is disrupted, individuals may have to re-learn their effective strategies. We characterise optimal learning paths for such settings. Surprisingly, the least advantageous strategy should be learnt first to correct for behavioural mistakes.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Article
Discovering Correlation Indices for Link Prediction Using Differential Evolution
Mathematics 2020, 8(11), 2097; https://doi.org/10.3390/math8112097 - 23 Nov 2020
Cited by 1 | Viewed by 695
Abstract
Binary correlation indices are crucial for forecasting and modelling tasks in many areas of scientific research. Choosing sound binary correlations and similarity measures is a long and mostly empirical, iterative process: researchers start from experimental correlations in one domain, which usually prove effective in other similar fields, and then progressively evaluate and modify those correlations to adapt their predictive power to the specific characteristics of the domain under examination. In research on link prediction in complex networks, it has been found that no single correlation index always obtains excellent results, even across similar domains. The search for domain-specific correlation indices, or the adaptation of known ones, is therefore a problem of critical concern. This paper presents a solution to the problem of designing new binary correlation indices that perform efficiently on specific network domains. The proposed solution is based on Differential Evolution, which evolves the coefficient vectors of meta-correlations: structures that describe classes of binary similarity indices and subsume the best-known correlation indices for link prediction. Experiments show that the proposed evolutionary approach always improves performance, in some cases significantly, compared to the best correlation indices available in the link prediction literature, effectively exploring the correlation space and exploiting its self-adaptability to the given domain to improve over generations. Full article
(This article belongs to the Special Issue Evolutionary Algorithms in Artificial Intelligent Systems)
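The idea of evolving a parameterized family of similarity indices can be sketched in a few lines. The graph, the single-parameter "meta-correlation" family, and the two-pair fitness below are illustrative assumptions, not the paper's benchmarks or its actual meta-correlation structures:

```python
import numpy as np

# Toy undirected graph (a hypothetical stand-in for a real network domain).
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (2, 5), (3, 4), (4, 5)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
deg = A.sum(axis=0)

def scores(alpha):
    """One-parameter index family: S[u,v] = sum over common neighbours z
    of deg(z)**(-alpha).  alpha=0 gives Common Neighbours, alpha=1 gives
    Resource Allocation -- a minimal example of a meta-correlation class."""
    return (A * deg ** (-alpha)) @ A

def fitness(alpha):
    # Illustrative objective: a held-out edge (0,3) should outscore the
    # true non-edge (1,5).  Real experiments would use AUC over many pairs.
    S = scores(alpha)
    return S[0, 3] - S[1, 5]

# Minimal Differential Evolution (DE/rand/1) over alpha in [0, 3].
rng = np.random.default_rng(0)
pop = rng.uniform(0.0, 3.0, size=10)
fit = np.array([fitness(a) for a in pop])
for _ in range(60):
    for i in range(len(pop)):
        r1, r2, r3 = rng.choice(np.delete(np.arange(len(pop)), i), 3, replace=False)
        trial = np.clip(pop[r1] + 0.8 * (pop[r2] - pop[r3]), 0.0, 3.0)
        f = fitness(trial)
        if f > fit[i]:          # greedy selection keeps the better candidate
            pop[i], fit[i] = trial, f
best_alpha = pop[np.argmax(fit)]
```

The paper evolves full coefficient vectors rather than a single exponent, but the loop structure (mutation, selection, domain-specific fitness) is the same.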
Article
Estimating General Parameters from Non-Probability Surveys Using Propensity Score Adjustment
Mathematics 2020, 8(11), 2096; https://doi.org/10.3390/math8112096 - 23 Nov 2020
Cited by 1 | Viewed by 606
Abstract
This study introduces a general framework for inference on a general parameter using nonprobability survey data, when a probability sample with auxiliary variables common to both samples is available. The proposed framework covers parameters from inequality measures and distribution function estimates, but the scope of the paper is broader. We develop a rigorous framework for general parameter estimation by solving survey-weighted estimating equations which involve propensity score estimation for units in the non-probability sample. This development includes the expression of the variance estimator, as well as some alternatives which are discussed under the proposed framework. We carried out a simulation study using data from a real-world survey, in which the application of the estimation methods showed the effectiveness of the proposed design-based inference on several general parameters. Full article
(This article belongs to the Special Issue Stochastic Statistics and Modeling)
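The core mechanism (estimate a propensity of belonging to the non-probability sample from the combined sample, then weight by inverse odds) can be sketched on synthetic data. The data-generating process, the plain gradient-descent logistic fit, and the (1-p)/p weight choice are illustrative assumptions, not the paper's exact estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic population (a stand-in for real survey data).
N = 20000
x = rng.normal(0.0, 1.0, N)
y = 2.0 + x + rng.normal(0.0, 0.5, N)            # target; true mean is ~2

# Non-probability sample: self-selection probability increases with x.
sel = rng.random(N) < 1.0 / (1.0 + np.exp(-(x - 1.0)))
xA, yA = x[sel], y[sel]

# Reference probability sample (SRS): x observed, y not observed.
xB = x[rng.choice(N, size=4000, replace=False)]

# Logistic regression for sample membership, fit by gradient descent.
xc = np.concatenate([xA, xB])
zc = np.concatenate([np.ones(len(xA)), np.zeros(len(xB))])
X = np.column_stack([np.ones_like(xc), xc])
beta = np.zeros(2)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta -= 0.5 * X.T @ (p - zc) / len(zc)       # minimize log-loss

# Inverse-odds weights (1-p)/p for non-probability units -- one common
# propensity-score-adjustment choice.
pA = 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * xA)))
w = (1.0 - pA) / pA

naive = yA.mean()                                # biased: over-samples high x
adjusted = np.sum(w * yA) / np.sum(w)            # PSA-weighted estimate
```

The adjusted mean should be closer to the population mean than the naive sample mean, since the weights undo the selection tilt toward large x.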
Article
Lorenz Surfaces Based on the Sarmanov–Lee Distribution with Applications to Multidimensional Inequality in Well-Being
Mathematics 2020, 8(11), 2095; https://doi.org/10.3390/math8112095 - 23 Nov 2020
Cited by 1 | Viewed by 675
Abstract
The purpose of this paper is to derive analytic expressions for the multivariate Lorenz surface for a relevant type of models based on the class of distributions with given marginals described by Sarmanov and Lee. The expression of the bivariate Lorenz surface can be conveniently interpreted as the convex linear combination of products of classical and concentrated univariate Lorenz curves. Thus, the generalized Gini index associated with this surface is expressed as a function of marginal Gini indices and concentration indices. This measure is additively decomposable in two factors, corresponding to inequality within and between variables. We present different parametric models using several marginal distributions including the classical Beta, the GB1, the Gamma, the lognormal distributions and others. We illustrate the use of these models to measure multidimensional inequality using data on two dimensions of well-being, wealth and health, in five developing countries. Full article
(This article belongs to the Special Issue Multivariate Sarmanov Distributions and Applications)
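Since the generalized Gini index of the Lorenz surface is built from marginal Gini and concentration indices, a minimal building block is the empirical univariate Lorenz curve and Gini index. This is a standard textbook computation, not the paper's bivariate Sarmanov–Lee construction:

```python
import numpy as np

def gini(x):
    """Empirical Gini index via the mean absolute difference:
    G = sum_{i,j} |x_i - x_j| / (2 n^2 mu)."""
    x = np.asarray(x, dtype=float)
    n, mu = len(x), x.mean()
    return np.abs(x[:, None] - x[None, :]).sum() / (2 * n * n * mu)

def lorenz(x):
    """Points of the empirical Lorenz curve L(k/n), starting at L(0)=0."""
    xs = np.sort(np.asarray(x, dtype=float))
    cum = np.cumsum(xs) / xs.sum()
    return np.insert(cum, 0, 0.0)
```

For perfectly equal values the Gini index is 0; concentrating everything on one unit of n gives the maximal empirical value (n-1)/n.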
Article
Heating a 2D Thermoelastic Half-Space Induced by Volumetric Absorption of a Laser Radiation
Mathematics 2020, 8(11), 2094; https://doi.org/10.3390/math8112094 - 23 Nov 2020
Cited by 3 | Viewed by 707
Abstract
In this work, the generalized dual-phase-lag theory of thermoelasticity is employed to study the influences induced by absorbing penetrating laser radiation inside a 2D thermoelastic semi-infinite medium. The medium’s surface is presumed to be exposed to temperature-dependent heat losses and to be traction-free. The problem is solved using the integral transform technique, applying a double Laplace–Hankel transformation. A numerical scheme is applied to invert the Laplace transform. The results are presented graphically for the studied fields. Full article
(This article belongs to the Section Mathematical Physics)
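Numerical Laplace inversion of the kind used here is commonly done with the Gaver–Stehfest algorithm. The abstract does not name the paper's specific inversion scheme, so this is one standard possibility, shown on transforms with known inverses:

```python
import math

def stehfest_inverse(F, t, N=14):
    """Gaver-Stehfest numerical inversion of a Laplace transform F(s) at t.
    N must be even; N around 12-16 is typical in double precision."""
    ln2 = math.log(2.0)
    total = 0.0
    for i in range(1, N + 1):
        v = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            v += (k ** (N // 2) * math.factorial(2 * k)
                  / (math.factorial(N // 2 - k) * math.factorial(k)
                     * math.factorial(k - 1) * math.factorial(i - k)
                     * math.factorial(2 * k - i)))
        v *= (-1) ** (N // 2 + i)
        total += v * F(i * ln2 / t)      # sample F along the real axis
    return total * ln2 / t
```

For example, F(s) = 1/(s+1) inverts to e^{-t} and F(s) = 1/s² inverts to t, both recovered to high accuracy for smooth transforms like these.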
Article
Multivariate Control Chart and Lee–Carter Models to Study Mortality Changes
Mathematics 2020, 8(11), 2093; https://doi.org/10.3390/math8112093 - 23 Nov 2020
Cited by 3 | Viewed by 723
Abstract
The mortality structure of a population usually reflects the economic and social development of the country. The purpose of this study was to identify moments in time, and age intervals, at which the observed probability of death differs substantially from the pattern of mortality for the studied period. To this end, a mortality model was fitted to decompose the historical pattern of mortality. The model residuals were monitored with the T2 multivariate control chart to detect substantial changes in mortality not captured by the model. The abridged life tables for Colombia in the period 1973–2005 were used as a case study. The Lee–Carter model captures information regarding violence in Colombia; accordingly, the years identified as out-of-control in the charts are associated with very early or quite advanced ages of death, which are inversely related to the violence, as it did not claim as many victims at those ages. The mortality changes identified in the control charts reflect changes in the population’s health conditions or new causes of death, such as COVID-19 in the coming years. The proposed methodology is generalizable to other countries, especially developing countries. Full article
(This article belongs to the Special Issue Advances in Statistical Process Control and Their Applications)
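Monitoring multivariate residuals with a Hotelling T² chart reduces to computing a quadratic form per observation and comparing it to a control limit. The synthetic residuals and the chi-square limit below are illustrative assumptions (the paper monitors Lee–Carter residuals with limits appropriate to its setting):

```python
import numpy as np

rng = np.random.default_rng(4)
resid = rng.normal(size=(100, 2))     # stand-in for model residual vectors
resid[50] = [8.0, 8.0]                # an injected out-of-control observation

mu = resid.mean(axis=0)
S_inv = np.linalg.inv(np.cov(resid, rowvar=False))
d = resid - mu
# Hotelling T^2 statistic per observation: d_i' S^{-1} d_i
t2 = np.einsum('ij,jk,ik->i', d, S_inv, d)

# Approximate upper control limit: chi-square(0.99, df=2); for df=2 the
# quantile has the closed form -2 ln(1 - 0.99) ~= 9.21.
ucl = -2.0 * np.log(1.0 - 0.99)
out_of_control = np.where(t2 > ucl)[0]
```

The injected point at index 50 dominates the chart, exactly the kind of signal the chart is designed to flag on top of the fitted mortality model.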
Article
On the Normalization of Interval Data
Mathematics 2020, 8(11), 2092; https://doi.org/10.3390/math8112092 - 23 Nov 2020
Cited by 1 | Viewed by 590
Abstract
The impreciseness of numeric input data can be expressed by intervals. On the other hand, the normalization of numeric data is a usual process in many applications. How do we match the normalization with impreciseness on numeric data? A straightforward answer is that it is enough to apply a correct interval arithmetic, since the normalized exact value will be enclosed in the resulting “normalized” interval. This paper shows that this approach is not enough since the resulting “normalized” interval can be even wider than the input intervals. So, we propose a pair of axioms that must be satisfied by an interval arithmetic in order to be applied in the normalization of intervals. We show how some known interval arithmetics behave with respect to these axioms. The paper ends with a discussion about the current paradigm of interval computations. Full article
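The dependency problem behind this paper's observation is easy to demonstrate: normalizing x by x + y with standard interval arithmetic treats the two occurrences of x as independent, so the resulting "normalized" interval overestimates the true range. The specific intervals below are an illustrative choice:

```python
def i_add(x, y):
    """Standard interval addition."""
    return (x[0] + y[0], x[1] + y[1])

def i_div(x, y):
    """Standard interval division, assuming all endpoints are positive."""
    return (x[0] / y[1], x[1] / y[0])

X, Y = (1.0, 2.0), (1.0, 2.0)
naive = i_div(X, i_add(X, Y))        # "normalized" interval for x/(x+y)

# True range of x/(x+y) over the same boxes: min at (x=1, y=2) is 1/3,
# max at (x=2, y=1) is 2/3 -- computed here by a small grid sweep.
vals = [a / (a + b) for a in (1.0, 1.25, 1.5, 1.75, 2.0)
                     for b in (1.0, 1.25, 1.5, 1.75, 2.0)]
true_range = (min(vals), max(vals))
```

Here the naive enclosure is [0.25, 1.0], more than twice as wide as the true range [1/3, 2/3], and its upper bound 1.0 wrongly suggests one component could absorb the entire normalized mass. This is the kind of pathology the paper's axioms are designed to rule out.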
Article
Valuation of Exchange Option with Credit Risk in a Hybrid Model
Mathematics 2020, 8(11), 2091; https://doi.org/10.3390/math8112091 - 23 Nov 2020
Viewed by 618
Abstract
In this paper, the valuation of the exchange option with credit risk under a hybrid credit risk model is investigated. To build the hybrid model, we consider both the reduced-form model and the structural model. We adopt a probabilistic approach to derive the closed-form formula for the price of an exchange option with credit risk under the proposed model. Specifically, the change-of-measure technique is used repeatedly, and the pricing formula is expressed in terms of standard normal cumulative distribution functions. Full article
(This article belongs to the Special Issue Financial Mathematics II)
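The credit-risk-free baseline for this problem is the classical Margrabe exchange-option formula, which the paper's vulnerable-option formula extends. A minimal implementation (the paper's own credit-risk adjustment is not reproduced here):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def margrabe(S1, S2, sigma1, sigma2, rho, T):
    """Margrabe price of the option to exchange asset 2 for asset 1,
    i.e. payoff max(S1(T) - S2(T), 0), with no credit risk."""
    sigma = math.sqrt(sigma1 ** 2 + sigma2 ** 2 - 2 * rho * sigma1 * sigma2)
    d1 = (math.log(S1 / S2) + 0.5 * sigma ** 2 * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S1 * norm_cdf(d1) - S2 * norm_cdf(d2)
```

With S1 = S2 = 100, an effective volatility of 0.2, and T = 1, the price is about 7.97; raising either asset's volatility (at zero correlation) raises the price, as expected for an option on the spread of two lognormal assets.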
Article
WINFRA: A Web-Based Platform for Semantic Data Retrieval and Data Analytics
Mathematics 2020, 8(11), 2090; https://doi.org/10.3390/math8112090 - 23 Nov 2020
Cited by 1 | Viewed by 968
Abstract
The huge amount of heterogeneous data stored in different locations needs to be federated and semantically interconnected for further use. This paper introduces WINFRA, a comprehensive open-access platform for semantic web data and advanced analytics based on natural language processing (NLP) and data mining techniques (e.g., association rules, clustering, and classification based on associations). The system is designed to facilitate federated data analysis, knowledge discovery, information retrieval, and new techniques for dealing with semantic web and knowledge graph representation. The processing step integrates data from multiple sources virtually by creating virtual databases. Afterwards, the developed RDF Generator produces RDF files for the different data sources, together with SPARQL queries, to support semantic data search and knowledge graph representation. Furthermore, several application cases demonstrate how the platform facilitates advanced data analytics over semantic data and showcase our proposed approach to semantic association rules. Full article
(This article belongs to the Special Issue Applied Data Analytics)
Article
An Extended Theory of Planned Behavior for the Modelling of Chinese Secondary School Students’ Intention to Learn Artificial Intelligence
Mathematics 2020, 8(11), 2089; https://doi.org/10.3390/math8112089 - 23 Nov 2020
Cited by 4 | Viewed by 1292
Abstract
Artificial Intelligence (AI) is currently changing how people live and work. Its importance has prompted educators to begin teaching AI in secondary schools. This study examined how Chinese secondary school students’ intention to learn AI was associated with eight other relevant psychological factors. Five hundred and forty-five secondary school students who had completed at least one cycle of an AI course were recruited for this study. Based on the theory of planned behavior, the students’ AI literacy, subjective norms, and anxiety were identified as background factors. These background factors were hypothesized to influence the students’ attitudes towards AI, their perceived behavioral control, and their intention to learn AI. To provide a more nuanced understanding, the students’ attitude towards AI was further delineated as constituted by their perception of the usefulness of AI, the potential of AI technology to promote social good, and their attitude towards using AI technology. Similarly, perceived behavioral control was operationalized as students’ confidence in learning AI knowledge and an optimistic outlook on an AI-infused world. Relationships between the factors were theoretically illustrated as a model that depicts how students’ intention to learn AI is constituted. Two research questions were then formulated. Confirmatory factor analysis was employed to validate the multi-factor survey, followed by structural equation modelling to ascertain the significant associations between the factors. The confirmatory factor analysis supports the construct validity of the questionnaire. Twenty-five of the thirty-three hypotheses were supported through structural equation modelling. The model helps researchers and educators to understand the factors that shape students’ intention to learn AI. These factors should be considered in the design of AI curricula. Full article
(This article belongs to the Special Issue Artificial Intelligence in Education)
Article
Cognitive Emotional Embedded Representations of Text to Predict Suicidal Ideation and Psychiatric Symptoms
Mathematics 2020, 8(11), 2088; https://doi.org/10.3390/math8112088 - 23 Nov 2020
Cited by 1 | Viewed by 980
Abstract
Mathematical modeling of language in Artificial Intelligence is of the utmost importance for many research areas and technological applications. Over the last decade, research on text representation has been directed towards the investigation of dense vectors popularly known as word embeddings. In this paper, we propose a cognitive-emotional scoring and representation framework for text based on word embeddings. This framework aims to mathematically model the emotional content of words in short free-form text messages produced by adults under follow-up for a mental health condition at the outpatient facilities of the Psychiatry Department of Hospital Fundación Jiménez Díaz in Madrid, Spain. Our contribution is a geometrical-topological framework for Sentiment Analysis that includes a hybrid method using a cognitively based lexicon together with word embeddings to generate graded sentiment scores for words, and a new topological method for clustering dense vector representations in high-dimensional spaces where points are very sparsely distributed. Our framework is useful for detecting word-association topics, emotional scoring patterns, and the geometrical behavior of the embedded vectors, which may aid in understanding language use in this kind of text. The proposed scoring system and representation framework may be helpful in studying relations between language and behavior, and their use may have predictive potential for suicide prevention. Full article
(This article belongs to the Special Issue Recent Advances in Data Science)
Article
A Nonlinear Model Predictive Control with Enlarged Region of Attraction via the Union of Invariant Sets
Mathematics 2020, 8(11), 2087; https://doi.org/10.3390/math8112087 - 22 Nov 2020
Cited by 1 | Viewed by 998
Abstract
In the dual-mode model predictive control (MPC) framework, the size of the stabilizable set, which is also the region of attraction, depends on the terminal constraint set. This paper aims to formulate a larger terminal set for enlarging the region of attraction in a nonlinear MPC. Given several control laws and their corresponding terminal invariant sets, a convex combination of the given sets is used to construct a time-varying terminal set. The resulting region of attraction is the union of the regions of attraction from each invariant set. Simulation results show that the proposed MPC has a larger stabilizable initial set than the one obtained when a fixed terminal set is used. Full article
(This article belongs to the Special Issue Dynamical Systems and Optimal Control)
Article
Fractional Diffusion–Wave Equation with Application in Electrodynamics
Mathematics 2020, 8(11), 2086; https://doi.org/10.3390/math8112086 - 22 Nov 2020
Cited by 1 | Viewed by 784
Abstract
We consider a diffusion–wave equation with a fractional derivative with respect to the time variable, defined on an infinite interval with the starting point at minus infinity. For this equation, we solve an asymptotic boundary value problem without initial conditions, construct a representation of its solution, find sufficient conditions ensuring solvability and uniqueness of the solution, and give some applications in fractional electrodynamics. Full article
(This article belongs to the Special Issue Mathematical Modeling of Hereditarity Oscillatory Systems)
Article
New Modeling Approaches Based on Varimax Rotation of Functional Principal Components
Mathematics 2020, 8(11), 2085; https://doi.org/10.3390/math8112085 - 22 Nov 2020
Cited by 5 | Viewed by 1118
Abstract
Functional Principal Component Analysis (FPCA) is an important dimension reduction technique for interpreting the main modes of functional data variation in terms of a small set of uncorrelated variables. The principal components cannot always be simply interpreted, and rotation is one of the main solutions for improving interpretation. In this paper, two new functional Varimax rotation approaches are introduced. They are based on the equivalence between FPCA of the basis expansion of the sample curves and Principal Component Analysis (PCA) of a transformation of the matrix of basis coefficients. The first approach consists of a rotation of the eigenvectors that preserves the orthogonality between the eigenfunctions, but the rotated principal component scores are not uncorrelated. The second approach is based on rotation of the loadings of the standardized principal component scores, which provides uncorrelated rotated scores but non-orthogonal eigenfunctions. A simulation study and an application to data on COVID-19 infection curves in Spain are developed to study the performance of these methods by comparing the results with other existing approaches. Full article
(This article belongs to the Special Issue Stochastic Statistics and Modeling)
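The multivariate building block behind both functional approaches is Varimax rotation of a loading matrix. A compact implementation of the classical Kaiser Varimax criterion via iterated SVDs, applied to PCA loadings of random data (the functional basis-expansion machinery of the paper is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(3)
Xd = rng.normal(size=(40, 6))
Xd -= Xd.mean(axis=0)
U, s, Vt = np.linalg.svd(Xd, full_matrices=False)
L = Vt[:3].T * (s[:3] / np.sqrt(len(Xd)))     # loadings of the top 3 PCs

def varimax(L, n_iter=200, tol=1e-10):
    """Classical Varimax rotation: maximizes the sum over components of
    the variance of squared loadings, via the standard SVD iteration."""
    p, k = L.shape
    R = np.eye(k)
    crit_old = 0.0
    for _ in range(n_iter):
        LR = L @ R
        G = L.T @ (LR ** 3 - LR * (LR ** 2).sum(axis=0) / p)
        u, sv, vt = np.linalg.svd(G)
        R = u @ vt                            # orthogonal Procrustes update
        crit = sv.sum()
        if crit - crit_old < tol:
            break
        crit_old = crit
    return L @ R, R

L_rot, R = varimax(L)
```

The rotation matrix stays orthogonal, so (as in the paper's first approach) orthogonality of the rotated basis is preserved while the loadings become more interpretable.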
Article
An Integral Equation Approach to the Irreversible Investment Problem with a Finite Horizon
Mathematics 2020, 8(11), 2084; https://doi.org/10.3390/math8112084 - 22 Nov 2020
Viewed by 567
Abstract
This paper studies an irreversible investment problem under a finite horizon. The firm expands its production capacity in irreversible investments by purchasing capital to increase productivity. This problem is a singular stochastic control problem and its associated Hamilton–Jacobi–Bellman equation is derived. By using a Mellin transform, we obtain the integral equation satisfied by the free boundary of this investment problem. Furthermore, we solve the integral equation numerically using the recursive integration method and present the graph for the free boundary. Full article
Article
A Note on the Asymptotic Normality Theory of the Least Squares Estimates in Multivariate HAR-RV Models
Mathematics 2020, 8(11), 2083; https://doi.org/10.3390/math8112083 - 22 Nov 2020
Cited by 1 | Viewed by 778
Abstract
In this work, multivariate heterogeneous autoregressive realized-volatility (HAR-RV) models are discussed together with their least squares estimation. We consider multivariate HAR models of order p with q assets to explore the relationships between the volatilities of two or more assets. The strictly stationary solution of the HAR(p,q) model is investigated, and the asymptotic normality of the least squares estimates is established in the cases of i.i.d. and correlated errors. In addition, an exponentially weighted multivariate HAR model with a common decay rate on the coefficients is discussed, together with estimation of the common rate. A Monte Carlo simulation is conducted to validate the estimation: the sample mean and standard error of the estimates, as well as the empirical coverage and average length of confidence intervals, are calculated. Lastly, real volatility data on the Gold spot price and the S&P index are applied to the model, and it is shown that the bivariate HAR model, fitted with selected optimal lags and estimated coefficients, matches the volatility of the financial data well. Full article
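In the univariate case, least squares estimation of a HAR-RV model is an OLS regression of tomorrow's realized volatility on daily, weekly (5-day), and monthly (22-day) averages. A small simulation sketch with illustrative coefficients (the paper's multivariate HAR(p,q) setting generalizes this to several assets):

```python
import numpy as np

rng = np.random.default_rng(2)
T = 3000
b0, bd, bw, bm = 0.1, 0.4, 0.3, 0.2           # illustrative true coefficients
rv = np.ones(T)
for t in range(22, T):                        # simulate from the HAR recursion
    rv[t] = (b0 + bd * rv[t - 1] + bw * rv[t - 5:t].mean()
             + bm * rv[t - 22:t].mean() + rng.normal(0.0, 0.05))

# Design matrix: intercept, daily lag, weekly average, monthly average.
rows = range(22, T)
Xd = np.array([rv[t - 1] for t in rows])
Xw = np.array([rv[t - 5:t].mean() for t in rows])
Xm = np.array([rv[t - 22:t].mean() for t in rows])
X = np.column_stack([np.ones(len(Xd)), Xd, Xw, Xm])
yv = rv[22:]

beta_hat, *_ = np.linalg.lstsq(X, yv, rcond=None)   # least squares estimates
```

With a long enough sample, the estimates cluster around the true coefficients, which is the finite-sample face of the asymptotic normality results the paper establishes.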
Article
Coalgebras on Digital Images
Mathematics 2020, 8(11), 2082; https://doi.org/10.3390/math8112082 - 22 Nov 2020
Viewed by 527
Abstract
In this article, we investigate the fundamental properties of coalgebras with coalgebra comultiplications, counits, and coalgebra homomorphisms of coalgebras over a commutative ring R with identity 1R based on digital images with adjacency relations. We also investigate a contravariant functor from the category of digital images and digital continuous functions to the category of coalgebras and coalgebra homomorphisms based on digital images via the category of unitary R-modules and R-module homomorphisms. Full article
Article
Accumulative Pension Schemes with Various Decrement Factors
Mathematics 2020, 8(11), 2081; https://doi.org/10.3390/math8112081 - 22 Nov 2020
Viewed by 2108
Abstract
We consider accumulative defined contribution pension schemes with a lump sum payment on retirement. These schemes differ in relation to inheritance and provide various decrement factors. For each scheme, we construct the balance equation and obtain an expression for calculation of gross premium. Payments are made at the end of the insurance event period (survival to retirement age or death or retirement for disability within the accumulation interval). A simulation model was developed to analyze the constructed schemes. Full article
(This article belongs to the Special Issue Stability Problems for Stochastic Models: Theory and Applications)
Article
Risk Analysis through the Half-Normal Distribution
Mathematics 2020, 8(11), 2080; https://doi.org/10.3390/math8112080 - 21 Nov 2020
Cited by 2 | Viewed by 985
Abstract
We study the applicability of the half-normal distribution to the probability–severity risk analysis traditionally performed through risk matrices and continuous probability–consequence diagrams (CPCDs). To this end, we develop a model that adapts the financial risk measures Value-at-Risk (VaR) and Conditional Value at Risk (CVaR) to risky scenarios that face only negative impacts. This model leads to three risk indicators: The Hazards Index-at-Risk (HIaR), the Expected Hazards Damage (EHD), and the Conditional HIaR (CHIaR). HIaR measures the expected highest hazards impact under a certain probability, while EHD consists of the expected impact that stems from truncating the half-normal distribution at the HIaR point. CHIaR, in turn, measures the expected damage in the case it exceeds the HIaR. Therefore, the Truncated Risk Model that we develop generates a measure for hazards expectations (EHD) and another measure for hazards surprises (CHIaR). Our analysis includes deduction of the mathematical functions that relate HIaR, EHD, and CHIaR to one another as well as the expected loss estimated by risk matrices. By extending the model to the generalised half-normal distribution, we incorporate a shape parameter into the model that can be interpreted as a hazard aversion coefficient. Full article
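For the half-normal distribution, the VaR-type and CVaR-type quantities described here have closed forms: the α-quantile of |Z|σ is σΦ⁻¹((1+α)/2), and the tail expectation follows from the normal density. A minimal sketch of HIaR- and CHIaR-style indicators in this spirit (the names map onto the paper's definitions only loosely; the paper's exact EHD truncation is not reproduced):

```python
import math
from statistics import NormalDist

def hiar(sigma, alpha):
    """Quantile of a half-normal(sigma): P(X <= HIaR) = alpha.
    Uses P(|Z| <= q) = 2*Phi(q) - 1 for standard normal Z."""
    return sigma * NormalDist().inv_cdf((1.0 + alpha) / 2.0)

def chiar(sigma, alpha):
    """Conditional tail expectation E[X | X > HIaR] for half-normal(sigma).
    E[X 1{X>v}] = sigma * sqrt(2/pi) * exp(-v^2 / (2 sigma^2))."""
    q = hiar(sigma, alpha) / sigma
    tail = 2.0 * (1.0 - NormalDist().cdf(q))          # equals 1 - alpha
    return sigma * math.sqrt(2.0 / math.pi) * math.exp(-q * q / 2.0) / tail
```

At σ = 1 and α = 0.95 the quantile is Φ⁻¹(0.975) ≈ 1.96, and the conditional tail mean matches the familiar normal-tail value ≈ 2.34, since the half-normal tail beyond a positive threshold coincides with the folded normal tail.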
Article
Lifting Dual Connections with the Riemann Extension
Mathematics 2020, 8(11), 2079; https://doi.org/10.3390/math8112079 - 21 Nov 2020
Viewed by 595
Abstract
Let (M,g) be a Riemannian manifold equipped with a pair of dual connections (∇, ∇*). Such a structure is known as a statistical manifold, since it was defined in the context of information geometry. This paper aims at defining the complete lift of such a structure to the cotangent bundle T*M using the Riemannian extension of the Levi-Civita connection of M. In the first section, common tensors are associated with pairs of dual connections, emphasizing the cyclic symmetry property of the so-called skewness tensor. In the second section, the complete lift of this tensor is obtained, allowing the definition of dual connections on TT*M with respect to the Riemannian extension. This work was motivated by the general problem of finding the projective limit of a sequence of finite-dimensional statistical manifolds. Full article
(This article belongs to the Special Issue Geometry and Topology in Statistics)
Article
Non-Linear Macroeconomic Models of Growth with Memory
Mathematics 2020, 8(11), 2078; https://doi.org/10.3390/math8112078 - 21 Nov 2020
Cited by 5 | Viewed by 767
Abstract
In this article, two well-known standard models with continuous time, proposed by two Nobel laureates in economics, Robert M. Solow and Robert E. Lucas, are generalized. The standard continuous-time models of economic growth do not account for memory effects. Mathematically, this is because these models are described by equations with derivatives of integer orders, which are determined by the properties of the function in an infinitely small neighborhood of the considered time. In this article, we propose two non-linear models of economic growth with memory, for which equations are derived and solutions of these equations are obtained. In the differential equations of these models, instead of derivatives of integer order, fractional derivatives of non-integer order are used, which allow describing long memory with power-law fading. Exact solutions for these non-linear fractional differential equations are obtained. The purpose of this article is to study the influence of memory effects on the rate of economic growth, using the proposed simple models with memory as examples. As the method of this study, exact solutions of the fractional differential equations of the proposed models are used. We prove that memory effects can significantly (several times over) change the growth rate while the other parameters of the model are unchanged. Full article
(This article belongs to the Section Financial Mathematics)
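For orientation (a generic sketch, not the paper's models): linear growth equations with a Caputo fractional derivative, D^α Y(t) = λ Y(t), have solutions Y(t) = Y₀ E_α(λ t^α), where E_α is the one-parameter Mittag-Leffler function; α = 1 recovers ordinary exponential growth. The function and parameter names below are ours.

```python
from math import gamma, exp

def mittag_leffler(alpha: float, z: float) -> float:
    """One-parameter Mittag-Leffler function E_alpha(z) = sum_k z^k / Gamma(alpha*k + 1),
    summed until math.gamma would overflow (argument > ~170)."""
    total, k = 0.0, 0
    while alpha * k + 1.0 <= 170.0:
        total += z ** k / gamma(alpha * k + 1.0)
        k += 1
    return total

def growth_with_memory(y0: float, rate: float, t: float, alpha: float) -> float:
    """Solution Y(t) = Y0 * E_alpha(rate * t^alpha) of D^alpha Y = rate * Y."""
    return y0 * mittag_leffler(alpha, rate * t ** alpha)
```

Comparing `growth_with_memory(y0, r, t, 0.8)` against the memoryless case `alpha=1` illustrates how the fading-memory parameter alone changes the growth trajectory.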
Article
Use of Correlated Data for Nonparametric Prediction of a Spatial Target Variable
Mathematics 2020, 8(11), 2077; https://doi.org/10.3390/math8112077 - 20 Nov 2020
Viewed by 614
Abstract
The kriging methodology can be applied to predict the value of a spatial variable at an unsampled location from the available spatial data. Furthermore, additional information from secondary variables, correlated with the target one, can be included in the resulting predictor by using cokriging techniques. The latter procedures require a prior specification of the multivariate dependence structure, which is difficult to characterize appropriately in practice. To simplify this task, the current work introduces a nonparametric kernel approach for prediction, which satisfies good properties, such as asymptotic unbiasedness and the convergence to zero of the mean squared prediction error. The selection of the bandwidth parameters involved is also addressed, as well as the estimation of the remaining unknown terms in the kernel predictor. The performance of the new methodology is illustrated through numerical studies with simulated data, carried out in different scenarios. In addition, the proposed nonparametric approach is applied to predict the concentration of cadmium, a pollutant that represents a risk to human health, in the floodplain of the Meuse river (Netherlands), incorporating the lead level as an auxiliary variable. Full article
(This article belongs to the Special Issue Probability, Statistics and Their Applications)
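To illustrate the flavor of kernel-based spatial prediction (a minimal Nadaraya–Watson-style sketch with a Gaussian kernel; this is not the authors' predictor, and the bandwidth choice is the hard part the paper addresses):

```python
import math

def gaussian_kernel(u: float) -> float:
    return math.exp(-0.5 * u * u)

def kernel_predict(sites, values, target, bandwidth):
    """Predict the value at `target` as a kernel-weighted average of the
    observations, with weights decaying in spatial distance to the target."""
    weights = [gaussian_kernel(math.dist(s, target) / bandwidth) for s in sites]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)
```

A secondary correlated variable (such as the lead level in the cadmium application) would enter as extra regressors in the weighted average; here only the spatial coordinates are used.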
Article
Latent Class Regression Utilizing Fuzzy Clusterwise Generalized Structured Component Analysis
Mathematics 2020, 8(11), 2076; https://doi.org/10.3390/math8112076 - 20 Nov 2020
Cited by 1 | Viewed by 701
Abstract
Latent class analysis (LCA) has been applied in many research areas to disentangle the heterogeneity of a population. Despite its popularity, its estimation is limited to maximum likelihood estimation (MLE), which requires large samples to satisfy both the multivariate normality assumption and the local independence assumption. Although many suggestions regarding adequate sample sizes have been proposed, researchers continue to apply LCA with relatively small samples. When covariates are involved, these estimation issues become more pronounced. In this study, we suggest a different estimation approach for LCA with covariates, also known as latent class regression (LCR), using a fuzzy clustering method and generalized structured component analysis (GSCA). This new approach is free from the distributional assumption and is stable in estimating parameters. Parallel to the three-step approach used in MLE-based LCA, we extend an algorithm of fuzzy clusterwise GSCA to LCR. The proposed algorithm is demonstrated on empirical data with both categorical and continuous covariates. Because the proposed algorithm can be used with relatively small samples in LCR without requiring a multivariate normality assumption, it is more applicable to the social, behavioral, and health sciences. Full article
(This article belongs to the Special Issue Operations Research Using Fuzzy Sets Theory)
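The fuzzy clustering ingredient can be sketched with the standard fuzzy c-means membership update (a 1-D toy version with our own names; the paper's fuzzy clusterwise GSCA algorithm is considerably richer):

```python
def fcm_memberships(points, centers, m=2.0, eps=1e-12):
    """One fuzzy c-means membership update (1-D sketch):
    u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)), so each point gets a soft
    (fractional) membership in every cluster rather than a hard label."""
    u = []
    for x in points:
        d = [abs(x - c) + eps for c in centers]       # eps avoids 0/0 at a center
        row = [1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                         for j in range(len(centers)))
               for i in range(len(centers))]
        u.append(row)
    return u
```

The fuzzifier `m > 1` controls how soft the partition is; each membership row sums to one, which is what lets class-specific regressions be weighted by membership.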
Article
Comparing Deep-Learning Architectures and Traditional Machine-Learning Approaches for Satire Identification in Spanish Tweets
Mathematics 2020, 8(11), 2075; https://doi.org/10.3390/math8112075 - 20 Nov 2020
Cited by 1 | Viewed by 842
Abstract
Automatic satire identification can help to identify texts in which the intended meaning differs from the literal meaning, improving tasks such as sentiment analysis, fake news detection or natural-language user interfaces. Typically, satire identification is performed by training a supervised classifier to find linguistic clues that can determine whether a text is satirical or not. For this, the state of the art relies on neural networks fed with word embeddings, which are capable of learning interesting characteristics regarding the way humans communicate. However, to the best of our knowledge, there are no comprehensive studies that evaluate these techniques for satire identification in Spanish. Consequently, in this work we evaluate several deep-learning architectures with Spanish pre-trained word embeddings and compare the results with strong baselines based on term-counting features. This evaluation is performed with two datasets that contain satirical and non-satirical tweets written in two Spanish variants: European Spanish and Mexican Spanish. Our experimentation revealed that term-counting features achieved results similar to those of deep-learning approaches based on word embeddings, both outperforming previous results based on linguistic features. Our results suggest that term-counting features and traditional machine-learning models provide competitive results for automatic satire identification, slightly outperforming state-of-the-art models. Full article
(This article belongs to the Special Issue Recent Advances in Deep Learning)
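"Term-counting features" simply means bag-of-words frequency vectors. A toy illustration (our own similarity-based classifier, far simpler than the trained models compared in the paper):

```python
from collections import Counter
import math

def term_counts(text: str) -> Counter:
    """Bag-of-words term-counting features: token -> frequency."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(text, satirical_docs, regular_docs):
    """Label by the class whose training documents are, on average,
    most cosine-similar to the input (toy baseline only)."""
    x = term_counts(text)
    sat = sum(cosine(x, term_counts(d)) for d in satirical_docs) / len(satirical_docs)
    reg = sum(cosine(x, term_counts(d)) for d in regular_docs) / len(regular_docs)
    return "satirical" if sat > reg else "regular"
```

In practice the paper's term-counting baselines feed such count vectors into standard classifiers; the point of this sketch is only how little machinery the features themselves require.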
Article
Solutions of Sturm-Liouville Problems
Mathematics 2020, 8(11), 2074; https://doi.org/10.3390/math8112074 - 20 Nov 2020
Cited by 1 | Viewed by 732
Abstract
This paper further improves the Lie group method with Magnus expansion proposed in a previous paper by the authors, to solve some types of direct singular Sturm–Liouville problems. Next, a concrete implementation to the inverse Sturm–Liouville problem algorithm proposed by Barcilon (1974) is provided. Furthermore, computational feasibility and applicability of this algorithm to solve inverse Sturm–Liouville problems of higher order (for n=2,4) are verified successfully. It is observed that the method is successful even in the presence of significant noise, provided that the assumptions of the algorithm are satisfied. In conclusion, this work provides a method that can be adapted successfully for solving a direct (regular/singular) or inverse Sturm–Liouville problem (SLP) of an arbitrary order with arbitrary boundary conditions. Full article
(This article belongs to the Special Issue Mathematical Analysis and Boundary Value Problems)
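A minimal sketch of the direct problem the method targets (our own shooting code, using the simplest piecewise-constant-coefficient propagation, i.e. a first-order Magnus / exponential-midpoint step rather than the paper's higher-order Magnus expansion):

```python
import math

def propagate(q, lam, a, b, n=2000):
    """Shooting for -y'' + q(x) y = lam*y with y(a)=0, y'(a)=1: freeze q on
    each subinterval and apply the exact matrix exponential of the frozen
    system, then return y(b); y(b)=0 iff lam is a Dirichlet eigenvalue."""
    h = (b - a) / n
    y, yp = 0.0, 1.0
    for i in range(n):
        k = lam - q(a + (i + 0.5) * h)        # local equation: y'' = -k*y
        if k > 0:                             # oscillatory subinterval
            w = math.sqrt(k)
            c, s = math.cos(w * h), math.sin(w * h) / w
        elif k < 0:                           # exponential subinterval
            w = math.sqrt(-k)
            c, s = math.cosh(w * h), math.sinh(w * h) / w
        else:                                 # locally y'' = 0
            c, s = 1.0, h
        y, yp = c * y + s * yp, -k * s * y + c * yp
    return y
```

For q ≡ 0 on [0, π] with Dirichlet conditions, the eigenvalues are 1, 4, 9, …, so the miss distance `propagate(q, lam, 0, pi)` vanishes exactly at those values.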
Article
US Policy Uncertainty and Stock Market Nexus Revisited through Dynamic ARDL Simulation and Threshold Modelling
Mathematics 2020, 8(11), 2073; https://doi.org/10.3390/math8112073 - 20 Nov 2020
Cited by 2 | Viewed by 902
Abstract
Since the introduction of the measure of economic policy uncertainty, businesses, policymakers, and academic scholars have closely monitored its momentum due to its expected economic implications. The US is the world’s top-ranked equity market by size, and the prior literature on policy uncertainty and stock prices for the US is conflicting. In this study, we reexamine the policy uncertainty and stock price nexus from the US perspective, using a novel dynamically simulated autoregressive distributed lag (ARDL) setting introduced in 2018, which appears superior to traditional models. The empirical findings document a negative response of stock prices to a 10% positive/negative shock in policy uncertainty in the short run, while in the long run, a 10% increase in policy uncertainty reduces stock prices, and a decrease of the same magnitude increases them. Moreover, we empirically identified two significant thresholds: (1) a policy score of 4.89 (original score 132.39), which explains stock prices negatively with a high magnitude, and (2) a policy score of 4.48 (original score 87.98), which explains stock prices negatively with a relatively low magnitude; interestingly, policy changes below the second threshold become irrelevant for explaining stock prices in the United States. It is worth noting that not all indices are equally exposed to unfavorable policy changes. The overall findings are robust to alternative measures of policy uncertainty and stock prices and offer useful policy input. The limitations of the study and future lines of research are also highlighted. All in all, policy uncertainty is an indicator that shall remain ever-important due to its nature and implications for various sectors of the economy (the equity market in particular). Full article
(This article belongs to the Special Issue Quantitative Methods for Economics and Finance)
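For readers unfamiliar with the ARDL framework, the distinction between short-run and long-run responses comes from the model's lag structure. In an ARDL(1, q) model y_t = c + φ·y_{t−1} + β₀x_t + … + β_q x_{t−q} + ε_t, the contemporaneous coefficient β₀ is the short-run effect, while the steady-state (long-run) multiplier is Σβ / (1 − φ). A minimal sketch (the coefficient values in the test are hypothetical, not estimates from the paper):

```python
def long_run_multiplier(phi: float, betas) -> float:
    """Long-run effect of x on y implied by an ARDL(1, q) model
    y_t = c + phi*y_{t-1} + beta_0*x_t + ... + beta_q*x_{t-q} + e_t:
    setting y_t = y_{t-1} = y* and x_t = x* gives dy*/dx* = sum(betas)/(1 - phi)."""
    assert abs(phi) < 1.0, "requires a stationary adjustment, |phi| < 1"
    return sum(betas) / (1.0 - phi)
```

The dynamically simulated ARDL approach used in the paper goes further, tracing the full response path of y to a counterfactual ±10% shock in x rather than reporting only this summary multiplier.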
Article
TOPSIS Decision on Approximate Pareto Fronts by Using Evolutionary Algorithms: Application to an Engineering Design Problem
Mathematics 2020, 8(11), 2072; https://doi.org/10.3390/math8112072 - 20 Nov 2020
Cited by 4 | Viewed by 661
Abstract
A common technique used to solve multi-objective optimization problems consists of first generating the set of all Pareto-optimal solutions and then ranking and/or choosing the most interesting solution for a human decision maker (DM). This technique is sometimes referred to as generate first–choose later. In this context, this paper proposes a two-stage methodology: a first stage that uses a multi-objective evolutionary algorithm (MOEA) to generate an approximate Pareto-optimal front of non-dominated solutions, and a second stage that uses the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) to rank the potential solutions to be proposed to the DM. The novelty of this paper lies in the fact that the TOPSIS method does not need to know the ideal and nadir solutions of the problem in order to determine the ranking of solutions. To show the utility of the proposed methodology, several original experiments and comparisons between different recognized MOEAs were carried out on a welded beam engineering design benchmark problem. The problem was solved with two and three objectives and is characterized by a lack of knowledge about the ideal and nadir values. Full article
(This article belongs to the Section Mathematics and Computer Science)
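A compact sketch of standard TOPSIS (our simplification: the ideal and anti-ideal points are taken from the candidate set itself, which is consistent with the paper's point that problem-wide ideal/nadir values need not be known, but is not the paper's exact variant):

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) by relative closeness to the ideal point.
    `benefit[j]` is True for criteria to maximise, False to minimise."""
    n = len(matrix[0])
    # vector-normalise each criterion column, then apply the criterion weight
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n)] for row in matrix]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    nadir = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos, d_neg = math.dist(row, ideal), math.dist(row, nadir)
        scores.append(d_neg / (d_pos + d_neg))   # 1 = ideal, 0 = anti-ideal
    return scores
```

In the generate first–choose later pipeline, `matrix` would hold the objective values of the MOEA's non-dominated front, and the DM would be shown the highest-scoring rows.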
Article
Advances in Tracking Control for Piezoelectric Actuators Using Fuzzy Logic and Hammerstein-Wiener Compensation
Mathematics 2020, 8(11), 2071; https://doi.org/10.3390/math8112071 - 20 Nov 2020
Cited by 8 | Viewed by 751
Abstract
Piezoelectric actuators (PEA) are devices used for nano- and micro-displacement due to their high precision, but one of their major issues is the non-linearity caused by the hysteresis effect, which diminishes the positioning performance. This study presents a novel control structure that reduces the hysteresis effect and increases the PEA performance by using fuzzy logic control (FLC) combined with a Hammerstein–Wiener (HW) black-box mapping as feedforward (FF) compensation. In this research, a proportional-integral-derivative (PID) controller was contrasted with an FLC. The more accurate of the two was then combined with the HW-FF structure to verify whether the added complexity improves the accuracy. All of the structures were implemented on a dSpace platform to control a commercial Thorlabs PEA. The tests showed that the FLC combined with HW was the most accurate, since the FF compensates for the hysteresis and the FLC reduces the errors; the integral of the absolute error (IAE), the root-mean-square error (RMSE), and the relative root-mean-square error (RRMSE) were in this case reduced by several orders of magnitude compared to the feedback structures. In conclusion, a complex structure with a novel combination of FLC and HW-FF provides increased accuracy for a high-precision PEA. Full article
(This article belongs to the Special Issue Fuzzy Applications in Industrial Engineering)
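The Hammerstein–Wiener block structure itself is simple to state: a static input nonlinearity, followed by linear dynamics, followed by a static output nonlinearity. A minimal sketch (our own FIR-based toy, not the identified model from the paper):

```python
def hammerstein_wiener(u, input_nl, b, output_nl):
    """Hammerstein-Wiener structure: static input nonlinearity ->
    linear FIR block with coefficients b -> static output nonlinearity."""
    w = [input_nl(x) for x in u]                               # Hammerstein stage
    y_lin = [sum(b[k] * w[t - k] for k in range(len(b)) if t - k >= 0)
             for t in range(len(w))]                           # linear dynamics
    return [output_nl(y) for y in y_lin]                       # Wiener stage
```

Used as feedforward compensation, such a black-box map is identified from input/output data of the PEA and inverted or pre-applied so that the feedback controller (here the FLC) only has to correct the residual error.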
Article
Two-Agent Pareto-Scheduling of Minimizing Total Weighted Completion Time and Total Weighted Late Work
Mathematics 2020, 8(11), 2070; https://doi.org/10.3390/math8112070 - 20 Nov 2020
Cited by 8 | Viewed by 589
Abstract
We investigate the Pareto-scheduling problem with two competing agents on a single machine to minimize the total weighted completion time of agent A’s jobs and the total weighted late work of agent B’s jobs, the B-jobs having a common due date. Since this problem is known to be NP-hard, we present two pseudo-polynomial-time exact algorithms to generate the Pareto frontier and an approximation algorithm to generate a (1+ϵ)-approximate Pareto frontier. In addition, some numerical tests are undertaken to evaluate the effectiveness of our algorithms. Full article
(This article belongs to the Special Issue Theoretical and Computational Research in Various Scheduling Models)
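To make the two objectives concrete (a brute-force sketch for tiny instances only; the job-tuple format is our own, and the paper's pseudo-polynomial algorithms avoid this factorial enumeration):

```python
from itertools import permutations

def pareto_frontier(jobs, due_date):
    """Enumerate all single-machine sequences of jobs (agent, p_j, w_j) and
    keep the non-dominated (f_A, f_B) pairs, where f_A is the total weighted
    completion time of A-jobs and f_B the total weighted late work of B-jobs
    with respect to their common due date."""
    points = set()
    for order in permutations(jobs):
        t = f_a = f_b = 0
        for agent, p, w in order:
            t += p
            if agent == "A":
                f_a += w * t
            else:
                # late work: the part of the job processed after the due date
                f_b += w * min(p, max(0, t - due_date))
        points.add((f_a, f_b))
    return sorted(p for p in points
                  if not any(q != p and q[0] <= p[0] and q[1] <= p[1]
                             for q in points))
```

Even this two-job example shows the trade-off the Pareto frontier captures: sequencing the A-job first minimises f_A but creates late work for agent B, and vice versa.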
Article
Non-Spiking Laser Controlled by a Delayed Feedback
Mathematics 2020, 8(11), 2069; https://doi.org/10.3390/math8112069 - 20 Nov 2020
Viewed by 521
Abstract
In 1965, Statz et al. (J. Appl. Phys. 36, 1510 (1965)) investigated theoretically and experimentally the conditions under which spiking in the laser output can be completely suppressed by using a delayed optical feedback. In order to explore its effects, they formulated a delay differential equation model within the framework of laser rate equations. From their numerical simulations, they concluded that the feedback is effective in controlling the intensity of the laser pulses provided the delay is short enough. Ten years later, Krivoshchekov et al. (Sov. J. Quantum Electron. 5, 394 (1975)) reconsidered the Statz et al. delay differential equation and analyzed the limit of small delays. The stability conditions for arbitrary delays, however, were not determined. In this paper, we revisit Statz et al.’s delay differential equation model using modern mathematical tools. We determine an asymptotic approximation of both the domains of stable steady states and a sub-domain of purely exponential transients. Full article
(This article belongs to the Special Issue Recent Advances in Delay Differential and Difference Equations)
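Numerically, delay differential equations differ from ODEs in that the state is a whole history segment. A generic fixed-step Euler integrator with a history buffer (our own sketch of the technique, not the Statz et al. laser model):

```python
def integrate_dde(f, history, tau, t_end, dt):
    """Euler integration of x'(t) = f(x(t), x(t - tau)) on [0, t_end].
    `history` supplies x on [-tau, 0]; the delayed value is read from the
    sample buffer n_delay steps back."""
    n_delay = round(tau / dt)
    xs = [history(-tau + i * dt) for i in range(n_delay + 1)]  # samples up to t=0
    for _ in range(round(t_end / dt)):
        x, x_delayed = xs[-1], xs[-1 - n_delay]
        xs.append(x + dt * f(x, x_delayed))
    return xs
```

The test integrates x'(t) = −x(t − τ) from constant history 1, a classic example whose solutions decay (with damped oscillation) when the delay is short enough, mirroring the short-delay stability theme of the abstract.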
Article
Multiple Solutions for a Class of New p(x)-Kirchhoff Problem without the Ambrosetti-Rabinowitz Conditions
Mathematics 2020, 8(11), 2068; https://doi.org/10.3390/math8112068 - 19 Nov 2020
Viewed by 489
Abstract
In this paper, we consider a nonlocal p(x)-Kirchhoff problem with a p+-superlinear subcritical Carathéodory reaction term that does not satisfy the Ambrosetti–Rabinowitz condition. Under certain assumptions, we prove the existence of nontrivial solutions and of infinitely many solutions. Our results improve and generalize the corresponding results obtained by Hamdani et al. (2020). Full article
(This article belongs to the Special Issue New Trends in Variational Methods in Nonlinear Analysis)
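For context, the Ambrosetti–Rabinowitz condition that the reaction term is allowed to violate is usually stated as follows (standard form, with F the primitive of f; not quoted from the paper):

```latex
\exists\, \mu > p^{+},\ M > 0 \ \text{such that}\quad
0 < \mu F(x,t) \le t\, f(x,t) \quad \text{for } |t| \ge M,
\qquad F(x,t) := \int_0^{t} f(x,s)\, ds .
```

This superlinearity condition is the classical route to verifying the Palais–Smale compactness condition in variational arguments; dispensing with it is what requires the alternative assumptions mentioned in the abstract.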