Mathematics doi: 10.3390/math9182335

Authors: Elena Niculina Dragoi Vlad Dafinescu

The search for powerful optimizers has led to the development of a multitude of metaheuristic algorithms inspired by all areas. This work focuses on the animal kingdom as a source of inspiration and performs an extensive, yet not exhaustive, review of the animal-inspired metaheuristics proposed in the 2006–2021 period. The review is organized according to the biological classification of living things, with a breakdown of the simulated behavior mechanisms. The centralized data indicate that 61.6% of the animal-based algorithms are inspired by vertebrates and 38.4% by invertebrates. In addition, an analysis of the mechanisms used to ensure diversity was performed. The results show that the most frequently used mechanisms belong to the niching category.

Mathematics doi: 10.3390/math9182334

Authors: Ángel Luis Muñoz Castañeda Noemí De Castro-García David Escudero García

This work proposes a new algorithm, RHOASo, for optimizing the hyper-parameters of a machine learning algorithm, based on conditional optimization of concave asymptotic functions. A comparative analysis of the algorithm is presented, with particular emphasis on two important properties: its ability to work efficiently with a small part of a dataset and to finish the tuning process automatically, that is, without the user having to specify the number of iterations the algorithm must perform. Statistical analyses over 16 public benchmark datasets comparing the performance of seven hyper-parameter optimization algorithms with RHOASo were carried out. The efficiency of RHOASo shows statistically significant positive differences with respect to the other hyper-parameter optimization algorithms considered in the experiments. Furthermore, it is shown that, on average, the algorithm needs around 70% of the iterations needed by other algorithms to achieve competitive performance. The results show that the algorithm exhibits significant stability regarding the size of the dataset partition used.

Mathematics doi: 10.3390/math9182333

Authors: Ruslan Yanbarisov Yuri Efremov Nastasia Kosheleva Peter Timashev Yuri Vassilevski

Parallel-plate compression of multicellular spheroids (MCSs) is a promising and popular technique to quantify the viscoelastic properties of living tissues. This work presents two different approaches to the simulation of MCS compression, based on viscoelastic solid and viscoelastic fluid models. The first is the standard linear solid model implemented in ABAQUS/CAE. The second is a new model for 3D viscoelastic free surface fluid flow, which combines the Oldroyd-B incompressible fluid model and the incompressible neo-Hookean solid model via the incorporation of an additional elastic tensor and a dynamic equation for it. The simulation results indicate that either approach can model MCS compression with reasonable accuracy. A future application of the viscoelastic free surface fluid model is MCS fusion, which is in high demand in bioprinting.

Mathematics doi: 10.3390/math9182332

Authors: Quintero Ordóñez González López Reche Urbano Fuentes Esparrell

The progress experienced by society as a result of the ready availability of information through technology highlights the need to develop specific learning related to informational competences (IC) in educational settings where future professionals are trained to educate others, specifically in university degrees in the social sciences. This study seeks to ascertain, through the practical application of exploratory factor analysis, the opinions of students enrolled in these degrees at the Universidad de Córdoba (Spain) regarding the knowledge they consider they possess about IC for their future professional development. The methodology is based on a descriptive, non-experimental, correlational survey. The results show that factor analysis is a fundamental tool for capturing students’ perception of their knowledge of IC: its psychometric value confirmed construct validity and enabled us to break down the items that made up the four initial dimensions of IC into eight factors, improving the understanding and explanation of these IC.

Mathematics doi: 10.3390/math9182330

Authors: Iskandar Waini Anuar Ishak Ioan Pop

This paper examines the impact of hybrid nanoparticles on the stagnation point flow towards a curved surface. Silica (SiO2) and alumina (Al2O3) nanoparticles are added to water to form a SiO2-Al2O3/water hybrid nanofluid. Both buoyancy-opposing and -assisting flows are considered. The governing partial differential equations are reduced to a set of ordinary differential equations, which are then solved numerically in MATLAB. Findings show that the solutions are not unique: two solutions are obtained for both the buoyancy-assisting and -opposing flow cases. The local Nusselt number increases in the presence of the hybrid nanoparticles. The temporal stability analysis shows that only one of the solutions is stable over time.

Mathematics doi: 10.3390/math9182331

Authors: Osca-Guadalajara Díaz-Carnicero González-de-Julián Vivas-Consuelo

Osteoporosis is frequent in elderly people, causing bone fractures and lowering their quality of life. The costs incurred by these fractures constitute a problem for public health. Markov chains were used to carry out an incremental cost-utility analysis of the four main drugs used in Spain to treat osteoporosis (alendronate, risedronate, denosumab and teriparatide). We considered 14 clinical transition states, from starting osteoporotic treatment at the age of 50 until death or the age of 100. Cost-effectiveness was measured in quality-adjusted life years (QALYs). The values used in the Markov model were obtained from the literature. Teriparatide is the cost-effective alternative for treating osteoporosis in patients with fractures from the age of 50, given a payment threshold of 20,000 EUR/QALY. However, it is the most expensive therapy and is not cost-effective in patients without fracture or in patients over 80 years of age with fracture. Alendronate and denosumab are cost-effective treatment alternatives depending on the age of onset and the duration of treatment. From the cost-effectiveness perspective, with a payment threshold of 20,000 EUR/QALY, teriparatide is the cost-effective alternative in patients with fracture from 50 to 70 years of age in Spain.
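
The cost-utility logic described above can be sketched with a toy Markov cohort model. The states, transition probabilities, costs, and utilities below are hypothetical placeholders for the paper's 14-state model; only the general mechanics (discounted cost and QALY accumulation, ICER against a willingness-to-pay threshold) are illustrated.

```python
import numpy as np

# Toy 3-state Markov cohort model (well / fracture / dead), a hypothetical
# stand-in for the paper's 14-state model; all inputs are illustrative.
P = np.array([[0.90, 0.07, 0.03],    # annual transition probabilities
              [0.20, 0.70, 0.10],
              [0.00, 0.00, 1.00]])
utility = np.array([0.85, 0.60, 0.00])    # QALY weight per state-year
cost = np.array([500.0, 4000.0, 0.0])     # annual cost per state (EUR)

def cohort_run(P, utility, cost, years=50, discount=0.03):
    """Accumulate discounted cost and QALYs for a cohort starting well."""
    state = np.array([1.0, 0.0, 0.0])
    total_cost = total_qaly = 0.0
    for t in range(years):
        d = 1.0 / (1.0 + discount) ** t
        total_cost += d * (state @ cost)
        total_qaly += d * (state @ utility)
        state = state @ P
    return total_cost, total_qaly

c0, q0 = cohort_run(P, utility, cost)              # no-treatment comparator
P_tx = P.copy()
P_tx[0, 0], P_tx[0, 1] = 0.93, 0.04                # treatment lowers fracture risk
cost_tx = cost + np.array([800.0, 800.0, 0.0])     # annual drug cost while alive
c1, q1 = cohort_run(P_tx, utility, cost_tx)
icer = (c1 - c0) / (q1 - q0)                       # EUR per QALY gained
print(f"Incremental cost-effectiveness ratio: {icer:,.0f} EUR/QALY")
```

A therapy is deemed cost-effective when this ratio falls below the chosen threshold, here 20,000 EUR/QALY.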

Mathematics doi: 10.3390/math9182328

Authors: Zeyu Lin Hamdi Ayed Belgacem Bouallegue Hana Tomaskova Saeid Jafarzadeh Ghoushchi Gholamreza Haseli

Nowadays, because of the energy crisis, combined heat and power systems offer notable benefits. One of the best such devices is the SOFC (Solid Oxide Fuel Cell), which combines heat and power frameworks. Several considerable failure modes can affect these devices’ productivity. Generally, failure mode evaluation requires a team of experts to handle the uncertainties inherent in the risk assessment procedure. To improve the efficiency of the routine FMEA methodology and to present a suitable hybrid fuzzy MCDM approach for FMEA, in this work the fully fuzzy best-worst method (FF-BWM) is employed to obtain the weights of the risk factors, and the fuzzy weighted aggregated sum product assessment (F-WASPAS) approach is then used to prioritize the failure modes. Finally, sensitivity analyses demonstrate that the proposed framework is verified and can produce applicable data for risk management decision-making.
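
The WASPAS aggregation underlying the fuzzy variant above can be sketched in its crisp form. The score matrix and weights below are hypothetical risk ratings, not the paper's data; the paper replaces the crisp values with fuzzy numbers and derives the weights with FF-BWM.

```python
import numpy as np

# Crisp WASPAS sketch: hypothetical normalized risk ratings of 4 failure
# modes (rows) on 3 criteria (columns), with criterion weights w.
X = np.array([[0.9, 0.6, 0.8],
              [0.5, 0.9, 0.7],
              [0.7, 0.7, 0.9],
              [0.6, 0.8, 0.6]])
w = np.array([0.5, 0.3, 0.2])

def waspas(X, w, lam=0.5):
    """Joint criterion: lam * weighted sum + (1 - lam) * weighted product."""
    wsm = X @ w                       # weighted sum model score
    wpm = np.prod(X ** w, axis=1)     # weighted product model score
    return lam * wsm + (1.0 - lam) * wpm

scores = waspas(X, w)
ranking = np.argsort(-scores)         # highest score = highest priority
print(scores.round(4), ranking)
```

The parameter `lam` balances the sum and product models; `lam = 0.5` weights them equally, which is the common default.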

Mathematics doi: 10.3390/math9182329

Authors: Se Hyun Nam Yu Hwan Kim Jiho Choi Seung Baek Hong Muhammad Owais Kang Ryoung Park

Age estimation is applicable in various fields, and among them, research on age estimation using human facial images, which are the easiest to acquire, is being actively conducted. Since the emergence of deep learning, studies on age estimation using various types of convolutional neural networks (CNNs) have been conducted and have achieved good performance, as clear images with high illumination were typically used in these studies. However, human facial images are often captured in low-light environments. Age information can be lost in facial images captured under low illumination, where the noise and blur introduced by the camera reduce age estimation performance. No study has yet been conducted on age estimation using facial images captured under low light. To overcome this problem, this study proposes a new generative adversarial network for low-light age estimation (LAE-GAN), which compensates for the brightness of human facial images captured in low-light environments, and a CNN-based age estimation method that takes the compensated images as input. In experiments on the open MORPH, AFAD, and FG-NET databases, the proposed method exhibited more accurate age estimation and better brightness compensation in low-light images compared with state-of-the-art methods.

Mathematics doi: 10.3390/math9182327

Authors: Taiwo Olubunmi Sangodapo Babatunde Oluwaseun Onasanya Sarka Mayerova-Hoskova

In this paper, we study the matrix representation of fuzzy soft sets, the complement of fuzzy soft sets, the product of fuzzy soft matrices, and the application of fuzzy soft matrices in medical diagnosis presented by Lavanya and Akila. Additionally, a new method (max-min average) based on the fuzzy reference function is introduced in place of the max-product method of Lavanya and Akila to extend Sanchez’s technique for decision-making problems in medical diagnosis. Using the same data as Lavanya and Akila, the results show that the new method gives more information about the medical status of the patients under consideration in relation to a set of diseases.

Mathematics doi: 10.3390/math9182323

Authors: Antonio Francisco Roldán López de Hierro Miguel Sánchez Daniel Puente-Fernández Rafael Montoya-Juárez Concepción Roldán

The multi-round Delphi survey is a procedure that has been widely and successfully used to aggregate experts’ opinions about previously established statements or questions. Such opinions are usually expressed as real numbers and some commentaries. The evolution of the consensus can be shown by an increase in the agreement percentages and a decrease in the number of comments made. A consensus is reached when this percentage exceeds a certain previously set threshold. If this threshold has not been reached, the moderator modifies the questionnaire according to the comments he/she has collected, and the following round begins. In this paper, a new fuzzy Delphi method is introduced. On the one hand, the experts’ subjective judgments are collected as fuzzy numbers, enriching the approach. On the other hand, such opinions are collected through a computerized application that is able to interpret the experts’ opinions as fuzzy numbers. Finally, we employ a recently introduced fuzzy ranking methodology, satisfying many properties in accordance with human intuition, to determine whether an expert’s fuzzy opinion is favorable enough (by comparison with a fixed fuzzy number that indicates Agree or Strongly Agree). A cross-cultural validation was performed to illustrate the applicability of the proposed method. The proposed approach is simple for two reasons: it does not need a defuzzification step for the experts’ answers, and it can consider a wide range of fuzzy numbers, not only triangular or trapezoidal ones.

Mathematics doi: 10.3390/math9182325

Authors: Wang Peng Xu

To identify the impact of low-carbon policies on the location-routing problem (LRP) with cargo splitting (LRPCS), this paper first constructs the bi-level programming model of the LRPCS. On this basis, bi-level programming models of the LRPCS under four low-carbon policies are constructed. The upper-level model takes the engineering construction department as the decision-maker for the location of the distribution center. The lower-level model takes the logistics and distribution department as the decision-maker for the vehicle distribution routing scheme. Secondly, a hybrid algorithm combining Ant Colony Optimization and Tabu Search (ACO-TS) is designed, and an example is introduced to verify the effectiveness of the model and the algorithm. Finally, multiple sets of experiments are designed to explore the impact of various low-carbon policies on decision-making for the LRPCS. The experimental results show that the carbon tax policy has the greatest influence, the carbon trading and carbon offset policies have a certain impact on decision-making for the LRPCS, and the emission cap policy has the least influence. Based on this, we provide relevant low-carbon policy advice and management implications.

Mathematics doi: 10.3390/math9182326

Authors: Noufe H. Aljahdaly Ravi P. Agarwal Rasool Shah Thongchai Botmart

In this article, we investigate the fractional-order Burgers equation via the Natural decomposition method with nonsingular kernel derivatives. Two types of fractional derivatives are used: the Caputo–Fabrizio and the Atangana–Baleanu derivatives. We apply the Natural transform to the fractional-order Burgers equation, followed by the inverse Natural transform, to obtain the solution of the equation. To validate the method, we consider two examples and compare the results with the exact solutions.

Mathematics doi: 10.3390/math9182324

Authors: Căruntu Paşca

We present a relatively new and very efficient method to find approximate analytical solutions for a very general class of nonlinear fractional Volterra and Fredholm integro-differential equations. The test problems included and the comparison with previous results by other methods clearly illustrate the simplicity and accuracy of the method.

Mathematics doi: 10.3390/math9182322

Authors: Sun Li Yao

In this paper, a dynamic model of cytosolic calcium concentration oscillations is established for mast cells (MCs). This model includes the cytoplasm (Cyt), endoplasmic reticulum (ER), mitochondria (Mt), and the functional region (μd) formed by the ER and Mt, together with the channels in these cellular compartments. With this model, we calculate calcium oscillations driven by distinct mechanisms at varying degradation and production coefficients of inositol 1,4,5-trisphosphate, as well as at different distances between the ER and Mt (ER–Mt distance). The model predicts that (i) the Mt and μd compartments can reduce the amplitude of the oscillations and cause the ER to release less calcium during the oscillations; (ii) with increasing cytosolic calcium concentration, the amplitude of the oscillations increases (from 0.1 μM to several μM), but the frequency decreases; (iii) the frequency of the oscillations decreases as the ER–Mt distance increases. Moreover, when the ER–Mt distance is greater than 65 nm, the μd compartment has less effect on the oscillations. These results suggest that Mt, μd, and the cytosolic calcium concentration can all affect the amplitude and frequency of the oscillations, but through different mechanisms. The model provides a comprehensive mechanism for predicting cytosolic calcium concentration oscillations in mast cells and a theoretical basis for the calcium oscillations observed in mast cells, so as to better understand the regulation of calcium signaling in these cells.

Mathematics doi: 10.3390/math9182321

Authors: Ahmed A. Ewees Mohammed A. A. Al-qaness Laith Abualigah Diego Oliva Zakariya Yahya Algamal Ahmed M. Anter Rehab Ali Ibrahim Rania M. Ghoniem Mohamed Abd Elaziz

Feature selection is a well-known preprocessing procedure, and it is considered a challenging problem in many domains, such as data mining, text mining, medicine, biology, public health, image processing, data clustering, and others. This paper proposes a novel feature selection method, called AOAGA, using an improved metaheuristic optimization method that combines the conventional Arithmetic Optimization Algorithm (AOA) with Genetic Algorithm (GA) operators. The AOA is a recently proposed optimizer; it has been employed to solve several benchmark and engineering problems and has shown promising performance. The main aim behind the modification of the AOA is to enhance its search strategies, as the conventional version suffers from weaknesses in its local search strategy and in the trade-off between its search strategies; the GA operators can overcome these shortcomings. The proposed AOAGA was evaluated on several well-known benchmark datasets using several standard evaluation criteria, namely accuracy, the number of selected features, and the fitness function. The results were compared with state-of-the-art techniques to demonstrate the performance of the proposed AOAGA method. Moreover, to further assess its performance, two real-world problems containing gene datasets were used. The findings of this paper illustrate that the proposed AOAGA method finds new best solutions for several test cases and achieves promising results compared with other methods published in the literature.
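
A fitness function of the kind mentioned above is commonly defined, in wrapper-style metaheuristic feature selection, as a trade-off between classification error and the fraction of features kept. The exact formulation in the paper may differ; this is a generic sketch with a hypothetical weighting `alpha`.

```python
import numpy as np

# Common wrapper-style fitness for binary feature selection: balance the
# classification error against the fraction of features retained, with
# alpha close to 1 favouring accuracy (lower fitness is better).
def fitness(mask, error_rate, alpha=0.99):
    n_selected = int(np.sum(mask))
    n_total = len(mask)
    if n_selected == 0:            # empty feature subsets are invalid
        return float("inf")
    return alpha * error_rate + (1 - alpha) * n_selected / n_total

mask = np.array([1, 0, 1, 1, 0, 0, 0, 1])   # toy 8-feature chromosome
print(fitness(mask, error_rate=0.12))       # 0.99*0.12 + 0.01*4/8
```

In AOAGA-style methods, each candidate solution is such a binary mask, the error rate comes from a classifier trained on the selected columns, and the optimizer minimizes this fitness.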

Mathematics doi: 10.3390/math9182319

Authors: Efthalitsidou Zafeiriou Spinthiropoulos Betsas Sariannidis

Wagner’s Law and the Keynesian approach are the two fundamental theories of public finance. The aim of this study is to assess empirical evidence for the public spending–national income relationship at a disaggregated level for the period 1995–2019. The sectoral public expenditures include education, health, and defense. The data employed were derived from EUROSTAT and the OECD. Based on our findings, a single relationship among the variables was validated, while the causality of the relationship yields conflicting results depending on whether a bivariate or multivariate methodology is employed. In the multivariate framework, which outperforms the bivariate approach in terms of information, the causality runs from government expenses to the GDP level, validating the Keynesian approach in the long run as well as in the short run. On the other hand, the results validate Wagner’s Law based on the pairwise Granger causality test. A potential interpretation of these findings relates to the measures imposed by the Memorandum, since the disproportionate cuts in public expenses during the crisis period determined the evolution of national income. The scientific value of the present study lies in suggesting potentially effective measures aimed at limiting the shrinkage of national income in periods of severe economic crises worldwide.
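
The pairwise Granger causality test mentioned above can be sketched as a restricted-vs-unrestricted regression F-test: lagged x Granger-causes y if adding x's lags significantly reduces the residual sum of squares of an autoregression of y. The data below are synthetic (x deliberately leads y), not the expenditure/GDP series of the study.

```python
import numpy as np

# Minimal bivariate Granger-style F-test on synthetic data where x leads y.
rng = np.random.default_rng(0)
n, p = 300, 2                      # sample size and lag order
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(p, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

def lagmat(series, p, n, start):
    """Columns of lags 1..p of `series`, aligned with observations start..n-1."""
    return np.column_stack([series[start - k : n - k] for k in range(1, p + 1)])

Y = y[p:]
Z_r = np.column_stack([np.ones(n - p), lagmat(y, p, n, p)])   # restricted: y lags only
Z_u = np.column_stack([Z_r, lagmat(x, p, n, p)])              # unrestricted: + x lags
rss = lambda Z: np.sum((Y - Z @ np.linalg.lstsq(Z, Y, rcond=None)[0]) ** 2)
F = ((rss(Z_r) - rss(Z_u)) / p) / (rss(Z_u) / (n - p - Z_u.shape[1]))
print(f"F statistic = {F:.1f}")    # large F => x Granger-causes y
```

In practice one would compare F against the appropriate F-distribution critical value (or use a packaged implementation such as `statsmodels.tsa.stattools.grangercausalitytests`).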

Mathematics doi: 10.3390/math9182320

Authors: Vincent F. Yu Putu A. Y. Indrakarna Anak Agung Ngurah Perwira Redi Shih-Wei Lin

The Share-a-Ride Problem with Flexible Compartments (SARPFC) is an extension of the Share-a-Ride Problem (SARP) in which both passenger and freight transport are serviced by a single taxi network. The aim of the SARPFC is to increase profit by introducing flexible compartments into the SARP model. The SARPFC allows taxis to adjust their compartment size within lower and upper bounds while maintaining the same total capacity, permitting them to service more parcels while simultaneously serving at most one passenger. The main contribution of this study is that we formulate a new mathematical model for the problem and propose a new variant of the Simulated Annealing (SA) algorithm, called Simulated Annealing with Mutation Strategy (SAMS), to solve the SARPFC. The mutation strategy is an intensification approach that improves the solution based on slack time and is activated in the later stage of the algorithm. The proposed SAMS was tested on SARP benchmark instances, and the results show that it outperforms existing algorithms. Several computational studies were also conducted on the SARPFC instances. The analysis of the effects of compartment size and of the proportion of package requests on total profit showed that, on average, utilizing flexible compartments as in the SARPFC brings in more profit than using a fixed-size compartment as in the SARP.
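
The core of any SA variant, including SAMS, is the Metropolis acceptance rule: improving moves are always accepted, while worsening moves are accepted with a probability that shrinks as the temperature cools. This is a generic sketch; the paper's slack-time mutation strategy is not reproduced here.

```python
import math
import random

# Metropolis acceptance rule at the heart of Simulated Annealing.
def accept(delta, temperature):
    """delta = candidate_cost - current_cost, for a minimization problem."""
    if delta <= 0:
        return True                                   # always take improvements
    return random.random() < math.exp(-delta / temperature)

random.seed(1)
trials = 1000
accepted_hot = sum(accept(5.0, 100.0) for _ in range(trials))
accepted_cold = sum(accept(5.0, 1.0) for _ in range(trials))
print(accepted_hot, accepted_cold)   # worse moves are accepted less often as T falls
```

Early on (high temperature) the search explores freely; late in the run almost only improvements pass, which is why intensification mechanisms such as the paper's mutation strategy are typically activated in the later stage.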

Mathematics doi: 10.3390/math9182318

Authors: Islem Snoussi Nadia Hamani Nassim Mrabti Lyes Kermad

In this paper, we propose robust optimisation models for the distribution network design problem (DNDP) to deal with uncertainty cases in a collaborative context. The studied network consists of collaborative suppliers who satisfy their customers’ needs by delivering their products through common platforms. Several parameters—namely, demands, unit transportation costs, the maximum number of vehicles in use, etc.—are subject to interval uncertainty. Mixed-integer linear programming formulations are presented for each of these cases, in which the economic and environmental dimensions of the sustainability are studied and applied to minimise the logistical costs and the CO2 emissions, respectively. These formulations are solved using CPLEX. In this study, we propose a case study of a distribution network in France to validate our models. The obtained results show the impacts of considering uncertainty by comparing the robust model to the deterministic one. We also address the impacts of the uncertainty level and uncertainty budget on logistical costs and CO2 emissions.

Mathematics doi: 10.3390/math9182316

Authors: Laura Río-Martín Saray Busto Michael Dumbser

In this paper, we propose a novel family of semi-implicit hybrid finite volume/finite element schemes for computational fluid dynamics (CFD), in particular for the approximate solution of the incompressible and compressible Navier-Stokes equations, as well as for the shallow water equations on staggered unstructured meshes in two and three space dimensions. The key features of the method are the use of an edge-based/face-based staggered dual mesh for the discretization of the nonlinear convective terms with the aid of explicit high-resolution Godunov-type finite volume schemes, while pressure terms are discretized implicitly using classical continuous Lagrange finite elements on the primal simplex mesh. The resulting pressure system is symmetric positive definite and can thus be very efficiently solved with the aid of classical Krylov subspace methods, such as a matrix-free conjugate gradient method. For the compressible Navier-Stokes equations, the schemes are by construction asymptotic preserving in the low Mach number limit of the equations, hence a consistent hybrid FV/FE method for the incompressible equations is retrieved. All parts of the algorithm can be efficiently parallelized, i.e., the explicit finite volume step as well as the matrix-vector product in the implicit pressure solver. Concerning parallel implementation, we employ the Message-Passing Interface (MPI) standard in combination with spatial domain decomposition based on the free software package METIS. To show the versatility of the proposed schemes, we present a wide range of applications, starting from environmental and geophysical flows, such as dam-break problems and natural convection, over direct numerical simulations of turbulent incompressible flows to high Mach number compressible flows with shock waves. An excellent agreement with exact analytical, numerical or experimental reference solutions is achieved in all cases.
Most of the simulations are run with millions of degrees of freedom on thousands of CPU cores. We show strong scaling results for the hybrid FV/FE scheme applied to the 3D incompressible Navier-Stokes equations, using millions of degrees of freedom and up to 4096 CPU cores. The largest simulation shown in this paper is the well-known 3D Taylor-Green vortex benchmark run on 671 million tetrahedral elements on 32,768 CPU cores, showing clearly the suitability of the presented algorithm for the solution of large CFD problems on modern massively parallel distributed memory supercomputers.

Mathematics doi: 10.3390/math9182317

Authors: Xue Li Xiao-Ting He Jie-Chuan Ai Jun-Yi Sun

In this study, the large deformation problem of a functionally-graded thin circular plate subjected to a transversely uniformly-distributed load and with different moduli in tension and compression (bimodular property) is theoretically analyzed, in which the small-rotation-angle assumption, commonly used in the classical Föppl–von Kármán equations of large deflection problems, is abandoned. First, based on the mechanical model on the neutral layer, the bimodular functionally-graded property of the materials is modeled as two different exponential functions in the tensile and compressive zones. Thus, the governing equations of the large deformation problem are established and improved, in which the equation of equilibrium is derived without the common small-rotation-angle assumption. Taking the central deflection as a perturbation parameter, the perturbation method is used to solve the governing equations; thus, the perturbation solutions of deflection and stress are obtained under different boundary constraints and the regression of the solution is satisfied. Results indicate that the perturbation solutions presented in this study have higher computational accuracy in comparison with the existing perturbation solutions under the small-rotation-angle assumption. Specifically, the computational accuracies of the external load and yield stress are improved by up to 17.22% and 28.79%, respectively, in the numerical examples. In addition, the small-rotation-angle assumption has a great influence on the yield stress at the center of the bimodular functionally-graded circular plate.

Mathematics doi: 10.3390/math9182315

Authors: Vinh Q. Mai Martin Meere

In this paper, we develop a comprehensive mathematical model to describe the phosphorylation of glucose by the enzyme hexokinase I. Glucose phosphorylation is the first step of the glycolytic pathway, and as such, it is carefully regulated in cells. Hexokinase I phosphorylates glucose to produce glucose-6-phosphate, and the cell regulates the phosphorylation rate by inhibiting the action of this enzyme. The cell uses three inhibitory processes to regulate the enzyme: an allosteric product inhibitory process, a competitive product inhibitory process, and a competitive inhibitory process. Surprisingly, the cellular regulation of hexokinase I is not yet fully resolved, and so, in this study, we developed a detailed mathematical model to help unpack the behaviour. Numerical simulations of the model produced results that were consistent with the experimentally determined behaviour of hexokinase I. In addition, the simulations provided biological insights into the abstruse enzymatic behaviour, such as the dependence of the phosphorylation rate on the concentration of inorganic phosphate or the concentration of the product glucose-6-phosphate. A global sensitivity analysis of the model was implemented to help identify the key mechanisms of hexokinase I regulation. The sensitivity analysis also enabled the development of a simpler model that produced an output that was very close to that of the full model. Finally, the potential utility of the model in assisting experimental studies is briefly indicated.

Mathematics doi: 10.3390/math9182309

Authors: Wajid Ullah Jan Muhammad Farooq Rehan Ali Shah Aamir Khan M S Zobaer Rashid Jan

This paper explores the time-dependent squeezing flow of a viscous fluid between parallel plates with internal heat generation and homogeneous/heterogeneous reactions. The motivation of the present effort is to improve the heat transfer rate, together with the chemical reaction rate, for engineering and industrial purposes. To this end, the equations for the conservation of mass, momentum, and energy and for the homogeneous/heterogeneous reactions are transformed into a system of coupled equations using a similarity transformation. Using the homotopy analysis method (HAM), with proper initial guesses and auxiliary parameters, a similarity solution is obtained. To verify the validity and correctness of the HAM findings, we compare the HAM solution with results from the numerical solver bvp4c. The results of a parametric study are summarized and presented graphically.

Mathematics doi: 10.3390/math9182314

Authors: Jing Cheng

To analyze the leasing behavior of residential land in Beijing, mathematical models of the price and total area of leased residential land are presented. The model variables are proposed by analyzing the factors influencing the district government’s leasing behavior for residential land, based on the leasing right for residential land in Beijing, China. The regression formulae of the models are estimated with the ordinary least squares method. By introducing data for the districts of Beijing from 2004 to 2015, the numerical values of the model coefficients are obtained by solving the equations of the regression formulae. After discussing the numerical results for the influencing factors, the district government’s leasing behavior for residential land in Beijing, China, is investigated. The numerical results show which factors matter to the government and how these factors influence the leased price and the total leased area of residential land in this large Chinese city. Finally, policy implications for the district government regarding residential land leasing in Beijing are proposed.
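
The ordinary least squares estimation described above can be sketched on synthetic data. The two regressors here are hypothetical district-level factors, not the study's actual variables; the point is only the mechanics of fitting the regression coefficients.

```python
import numpy as np

# OLS sketch on synthetic panel-style data: a price-like response explained
# by two hypothetical district-level factors plus an intercept.
rng = np.random.default_rng(42)
n = 120                                      # e.g. districts x years
X = np.column_stack([np.ones(n),             # intercept column
                     rng.normal(10, 2, n),   # hypothetical factor 1
                     rng.normal(5, 1, n)])   # hypothetical factor 2
beta_true = np.array([2.0, 1.5, -0.8])
y = X @ beta_true + rng.normal(0, 0.5, n)    # response with noise

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares fit
print(beta_hat.round(2))
```

With enough observations the estimated coefficients recover the generating values, which is the same logic by which the study reads off the influence of each factor from its fitted coefficient.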

Mathematics doi: 10.3390/math9182313

Authors: Hassan Shaban Essam H. Houssein Marco Pérez-Cisneros Diego Oliva Amir Y. Hassan Alaa A. K. Ismaeel Diaa Salama AbdElminaam Sanchari Deb Mokhtar Said

Recently, renewable energy resources have been in intensive use due to their environmental and technical merits. The identification of unknown parameters in photovoltaic (PV) models is one of the main issues in the simulation and modeling of renewable energy sources. Due to the random behavior of weather, the change in output current from a PV model is nonlinear. In this regard, a new optimization algorithm called the Runge–Kutta optimizer (RUN) is applied to estimate the parameters of three PV models. The RUN algorithm is applied to the R.T.C. France solar cell as a case study. Moreover, the root mean square error (RMSE) between the calculated and measured current is used as the objective function for identifying the solar cell parameters. The proposed RUN algorithm is superior compared with the Hunger Games Search (HGS) algorithm, the Chameleon Swarm Algorithm (CSA), the Tunicate Swarm Algorithm (TSA), Harris Hawks Optimization (HHO), the Sine–Cosine Algorithm (SCA) and the Grey Wolf Optimization (GWO) algorithm. Three solar cell models—the single diode, double diode and triple diode solar cell models (SDSCM, DDSCM and TDSCM)—are used to check the performance of the RUN algorithm in extracting the parameters. The best RMSE obtained by the RUN algorithm is 0.00098624, 0.00098717 and 0.000989133 for the SDSCM, DDSCM and TDSCM, respectively.
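
The RMSE objective for the single-diode model can be sketched as follows. The current equation is implicit in I, so it is solved here by simple fixed-point iteration; the parameter values and the (V, I) data are illustrative, not the R.T.C. France measurements.

```python
import numpy as np

# RMSE objective for the single-diode solar cell model (SDSCM):
# I = Iph - I0*(exp((V + I*Rs)/(a*Vt)) - 1) - (V + I*Rs)/Rsh
Vt = 0.0257                        # thermal voltage at ~25 C (volts)

def diode_current(V, Iph, I0, Rs, Rsh, a, iters=60):
    """Solve the implicit current equation by fixed-point iteration."""
    I = np.full_like(V, Iph)       # initial guess: the photocurrent
    for _ in range(iters):
        I = Iph - I0 * (np.exp((V + I * Rs) / (a * Vt)) - 1) - (V + I * Rs) / Rsh
    return I

def rmse(params, V_meas, I_meas):
    """Objective a metaheuristic like RUN would minimize over `params`."""
    Iph, I0, Rs, Rsh, a = params
    I_pred = diode_current(V_meas, Iph, I0, Rs, Rsh, a)
    return np.sqrt(np.mean((I_pred - I_meas) ** 2))

V = np.linspace(0.0, 0.55, 12)
true = (0.76, 3e-7, 0.036, 53.0, 1.48)     # illustrative parameter set
I_meas = diode_current(V, *true)           # synthetic "measurements"
print(rmse(true, V, I_meas))               # ~0 at the generating parameters
```

An optimizer searches the five-dimensional parameter space (Iph, I0, Rs, Rsh, a) for the minimum of this RMSE; the double- and triple-diode models simply add further diode terms and parameters.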

Mathematics doi: 10.3390/math9182312

Authors: Harish Garg Gia Sirbiladze Zeeshan Ali Tahir Mahmood

To capture the interrelationships among any number of attributes, the Hamy mean (HM) operator is one of the more general, flexible, and dominant tools for handling problematic and inconsistent information in real-life dilemmas. Furthermore, to more effectively portray complicated fuzzy uncertainty data, complex q-rung orthopair fuzzy sets can powerfully adjust the range of expressible decision information by changing a parameter q, depending on the different hesitation degrees of the decision-makers, where q ≥ 1, so they outperform the conventional complex intuitionistic and complex Pythagorean fuzzy sets. In real decision-making problems, there are frequently interactions between attributes. The goal of this study is to introduce HM operators in the flexible complex q-rung orthopair fuzzy (Cq-ROF) setting, namely the Cq-ROF Hamy mean (Cq-ROFHM) operator and the Cq-ROF weighted Hamy mean (Cq-ROFWHM) operator, and some of their desirable properties are investigated in detail. A multi-attribute decision-making (MADM) dilemma for investigating decision-making problems under the Cq-ROF setting is explored with certain examples. Finally, a practical example of enterprise resource planning system selection is provided to verify the developed approach and to demonstrate its practicality and effectiveness. The experimental results show that the proposed MADM strategy outperforms existing MADM techniques for managing MADM issues.
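
The crisp Hamy mean underlying the fuzzy operators above averages, over all k-element subsets of the inputs, the geometric mean of each subset, which is how it captures interactions among any k attributes. A minimal sketch (the paper lifts this to complex q-rung orthopair fuzzy values):

```python
from itertools import combinations
from math import comb

# Crisp Hamy mean of order k:
#   HM_k(x_1..x_n) = ( sum over k-subsets of (prod of subset)^(1/k) ) / C(n, k)
def hamy_mean(values, k):
    n = len(values)
    total = 0.0
    for subset in combinations(values, k):
        prod = 1.0
        for v in subset:
            prod *= v
        total += prod ** (1.0 / k)
    return total / comb(n, k)

x = [0.2, 0.4, 0.6, 0.8]
print(hamy_mean(x, 1))              # k = 1 reduces to the arithmetic mean
print(round(hamy_mean(x, 2), 4))    # k = 2 averages pairwise geometric means
```

For k = 1 the HM is the arithmetic mean, for k = n the geometric mean, and it decreases monotonically in k in between, which gives the decision-maker a tunable degree of attribute interaction.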

Mathematics doi: 10.3390/math9182311

Authors: Kun-Jen Chung Jui-Jung Liao Hari Mohan Srivastava Shih-Fang Lee Shy-Der Lin

We observed that, in general, some optimization methods lack mathematical rigor and others rely on intuitive arguments, which makes their solution procedures questionable from the logical viewpoint of mathematical analysis, as in the work by Ouyang et al. (2009). They consider an economic order quantity model for deteriorating items with partially permissible delays in payments linked to order quantity. Their inventory models are interesting; however, they ignore the interrelations of the functional behaviors (continuity, monotonicity, differentiability, et cetera) of the total cost function when locating the optimal solution, and these shortcomings naturally affect the implementation of the considered inventory model. Consequently, the main purpose of this paper is to provide accurate and reliable mathematical analytic solution procedures for different scenarios that overcome the shortcomings of Ouyang et al.

Mathematics doi: 10.3390/math9182310

Authors: Qiao-Ping Zhang Ngai-Ying Wong

The topic of similarity plays an essential role in developing students' deductive reasoning. However, knowing how to teach similarity and how to incorporate deductive reasoning and proof into plane geometry remains a challenge for both school curriculum developers and teachers. This study identified the problems and characteristics of how similarity has been treated in secondary mathematics textbooks in Hong Kong over the past half century. The content analysis method was used to analyze six secondary mathematics textbook series published in different periods. From an epistemological perspective on the textbook contents, our analysis shows the historical context and learning trajectories of how similarity was treated in the school curriculum. The natural axiomatic geometry paradigm receives little emphasis at any stage, and most of the textbooks did not provide formal proofs of similarity. The intuitive idea was gradually consolidated into a formal definition of similarity. Furthermore, the way that rigorous geometric deduction can proceed from intuitive concepts and experimental geometry to the idea of proofs and formal proofs is also discussed.

Mathematics doi: 10.3390/math9182308

Authors: Adrian Marius Deaconu Luciana Majercsik

The network expansion problem is a very important practical optimization problem that arises when the flow through an existing network of transportation, electricity, water, gas, etc. needs to be increased. In this problem, the flow augmentation can be achieved either by increasing the capacities of the existing arcs or by adding new arcs to the network. Both operations come at an expansion cost. In this paper, the problem of finding the minimum network expansion cost so that the modified network can transport a given amount of flow from the source node to the sink node is studied. A strongly polynomial algorithm is derived to solve the problem.

Mathematics doi: 10.3390/math9182307

Authors: Xiaojin Xie Kangyang Luo Zhixiang Yin Guoqiang Wang

The outbreak of coronavirus disease 2019 (COVID-19) has caused a global disaster, seriously endangering human health and the stability of the social order. The purpose of this study is to construct a nonlinear combinational dynamic transmission rate model with automatic selection, based on a forecasting effective measure (FEM) and support vector regression (SVR), to overcome the difficulty of accurately estimating the basic reproduction number R0 and the low accuracy of single-model predictions. We apply the model to analyze and predict the COVID-19 outbreak in different countries. First, the discrete values of the dynamic transmission rate are calculated. Second, the prediction abilities of all single models are comprehensively considered, and the best sliding window period is derived. Then, based on the FEM, the optimal sub-models are selected and their prediction results are nonlinearly combined. Finally, the nonlinear combinational dynamic transmission rate model is used to analyze and predict the COVID-19 epidemic in the United States, Canada, Germany, Italy, France, Spain, South Korea, and Iran during the global pandemic. The experimental results show that our model achieved an average out-of-sample forecasting error rate below 10.07%, and its predictions of the COVID-19 epidemic inflection points in most countries show good agreement with the real data. In addition, our model has good anti-noise ability and stability when dealing with data fluctuations.

Mathematics doi: 10.3390/math9182306

Authors: Vladislav N. Kovalnogov Ruslan V. Fedorov Andrey V. Chukalin Theodore E. Simos Charalampos Tsitouras

The purpose of the present work is to construct a new Runge–Kutta pair of orders five and four that outperforms the state of the art in this class of methods when addressing problems with periodic solutions. We consider the family of such pairs to which the celebrated Dormand–Prince pair also belongs. The coefficients of the chosen family all depend on five free parameters. These parameters are tuned so as to furnish a new method that performs best on a couple of oscillators. We then observe that this trained pair outperforms other well-known methods from the relevant literature on a standard set of problems with periodic solutions. This is remarkable since no special property holds, such as high phase-lag order or an extended interval of periodicity.
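For readers unfamiliar with embedded Runge–Kutta pairs, the mechanism such papers build on (a higher-order advance plus a lower-order companion whose difference drives step-size control) can be illustrated with the shorter Bogacki–Shampine 3(2) pair. This is only an illustrative sketch: the paper's pairs are of orders five and four with many more coefficients.

```python
import math

def rk32_step(f, t, y, h):
    """One Bogacki-Shampine 3(2) step: 3rd-order solution plus an error estimate."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + 0.75 * h, y + 0.75 * h * k2)
    y3 = y + h * (2 * k1 + 3 * k2 + 4 * k3) / 9          # 3rd-order advance
    k4 = f(t + h, y3)
    y2 = y + h * (7 * k1 / 24 + k2 / 4 + k3 / 3 + k4 / 8)  # embedded 2nd-order solution
    return y3, abs(y3 - y2)                               # difference drives step control

def integrate(f, t0, tf, y0, tol=1e-6):
    """Adaptive integration: accept a step if the pair's error estimate is below tol."""
    t, y, h = t0, y0, (tf - t0) / 100
    while t < tf:
        h = min(h, tf - t)
        y_new, err = rk32_step(f, t, y, h)
        if err <= tol:
            t, y = t + h, y_new
        h *= min(2.0, max(0.2, 0.9 * (tol / (err + 1e-16)) ** (1 / 3)))
    return y
```

On the oscillator y' = iy (so |y| should stay 1), the adaptive pair tracks the periodic solution over a full period.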

Mathematics doi: 10.3390/math9182305

Authors: Wadim Strielkowski Aida Guliyeva Ulviyya Rzayeva Elena Korneeva Anna Sherstobitova

Our paper aims at testing the impact of separate elements of intellectual capital (IC), represented for instance by human, structural, and customer capital, on the functioning and performance of small and medium-sized enterprises (SMEs) using mathematical modeling. We assess intellectual capital with respect to the resource-based view theory. Our study is based on data obtained from 206 surveys of representatives of small and medium-sized enterprises from Commonwealth of Independent States (CIS) countries. We employed a mathematical modeling approach as well as the SPSS application package in order to test our hypotheses about the influence of intellectual capital on an enterprise's efficiency. Our results helped us to determine that the concept of intellectual capital is practically not used in the management of small and medium-sized enterprises in CIS countries. It becomes apparent that individual techniques for managing intellectual resources can only be identified intuitively, based on an in-depth analysis of the current tasks facing managers. These findings confirmed the positive impact of intellectual capital on the performance of small and medium-sized enterprises in economies in transition, represented hereinafter in our paper by CIS countries, but only with the availability of financial resources and with some important reservations.

Mathematics doi: 10.3390/math9182304

Authors: Jordi Mill Victor Agudelo Andy L. Olivares Maria Isabel Pons Etelvino Silva Marta Nuñez-Garcia Xabier Morales Dabit Arzamendi Xavier Freixa Jérôme Noailly Oscar Camara

Atrial fibrillation (AF) is nowadays the most common human arrhythmia and it is considered a marker of an increased risk of embolic stroke. It is known that 99% of AF-related thrombi are generated in the left atrial appendage (LAA), an anatomical structure located within the left atrium (LA). Left atrial appendage occlusion (LAAO) has become a good alternative for nonvalvular AF patients with contraindications to anticoagulants. However, there is a non-negligible number of device-related thrombus (DRT) events, created next to the device surface. In silico fluid simulations can be a powerful tool to better understand the relation between LA anatomy, haemodynamics, and the process of thrombus formation. Despite the increasing literature in LA fluid modelling, a consensus has not been reached yet in the community on the optimal modelling choices and boundary conditions for generating realistic simulations. In this line, we have performed a sensitivity analysis of several boundary conditions scenarios, varying inlet/outlet and LA wall movement configurations, using patient-specific imaging data of six LAAO patients (three of them with DRT at follow-up). Mesh and cardiac cycle convergence were also analysed. The boundary conditions scenario that better predicted DRT cases had echocardiography-based velocities at the mitral valve outlet, a generic pressure wave from an AF patient at the pulmonary vein inlets, and a dynamic mesh approach for LA wall deformation, emphasizing the need for patient-specific data for realistic simulations. The obtained promising results need to be further validated with larger cohorts, ideally with ground truth data, but they already offer unique insights on thrombogenic risk in the left atria.

Mathematics doi: 10.3390/math9182303

Authors: Eabhnat Ní Fhloinn Olivia Fitzmaurice

In this paper, we consider the experiences of mathematics lecturers in higher education and how they moved to emergency remote teaching during the initial university closures due to the COVID-19 pandemic. An online survey was conducted in May–June 2020, which received 257 replies from respondents based in 29 countries. We report on the particular challenges mathematics lecturers perceive there to be in teaching mathematics remotely, as well as any advantages or disadvantages of teaching mathematics online that they report. Over 90% of respondents had little or no prior experience teaching mathematics online; initially, 72% found it stressful and 88% thought it time-consuming, and 88% felt there was a difference between teaching mathematics in this way compared with other disciplines. Four main types of challenges were associated with emergency remote teaching of mathematics: technical challenges, student challenges, teaching challenges, and the nature of mathematics. Respondents identified flexibility as the main advantage of online teaching, with lack of interaction featuring strongly as a disadvantage. We also consider respondents' personal circumstances during this time, in terms of working conditions and caring responsibilities, and conclude by summarizing the impact they perceive this experience may have upon their future teaching. Forty-six percent of respondents self-identified as having caring responsibilities, and 61% felt the experience would affect their future teaching.

Mathematics doi: 10.3390/math9182302

Authors: Lee Zhuo

The accurate localization of the rolling element failure is very important to ensure the reliability of rotating machinery. This paper proposes an efficient and anti-noise fault diagnosis model for rolling elements. The proposed model is composed of feature extraction, feature selection and fault classification. Feature extraction is composed of signal processing and signal noise reduction. Signal processing is carried out by local mean decomposition (LMD), and signal noise reduction is performed by product function (PF) selection and wavelet packet decomposition (WPD). Through the steps of signal noise reduction, high-frequency noise can be effectively removed, and the fault information hidden under the noise can be extracted. To further improve the effectiveness of the diagnostic model, an improved binary particle swarm optimization (IBPSO) is proposed to find the most important features from the feature space. In IBPSO, cycling time-varying inertia weight is introduced to balance exploitation and exploration and improve the capability to escape from local solutions, and crossover and mutation operations are also introduced to improve exploration and exploitation capabilities, respectively. The main contributions of this research are briefly described as follows: (1) The feature extraction process applied in this research can effectively remove noise and establish a high-accuracy feature set. (2) The proposed feature selection algorithm has higher accuracy than the other state-of-the-art feature selection algorithms. (3) In a strong noise environment, the proposed rolling element fault diagnosis model is compared with the state-of-the-art fault diagnosis model in terms of classification accuracy. Experimental results show that the model can maintain high classification accuracy in a strong noise environment. Therefore, it can be proved that the fault diagnosis model proposed in this paper can be effectively applied to the fault diagnosis of rotating machinery.
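The core of a binary PSO with a time-varying inertia weight can be sketched compactly. This is a generic illustration of the idea, not the paper's IBPSO: the crossover and mutation operators and the fault-diagnosis pipeline are omitted, and the cyclic inertia schedule below is a hypothetical choice.

```python
import math, random

def binary_pso(fitness, dim, n_particles=20, iters=60, seed=1):
    """Minimal binary PSO (minimization) with a cyclic time-varying inertia weight."""
    rng = random.Random(seed)
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))   # sigmoid transfer to bit probabilities
    X = [[rng.randint(0, 1) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pval = [fitness(x) for x in X]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for t in range(iters):
        # inertia cycles between 0.4 and 0.9 to alternate exploration and exploitation
        w = 0.65 + 0.25 * math.cos(2 * math.pi * t / 20)
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + 2.0 * r1 * (pbest[i][d] - X[i][d])
                           + 2.0 * r2 * (gbest[d] - X[i][d]))
                X[i][d] = 1 if rng.random() < sig(V[i][d]) else 0
            val = fitness(X[i])
            if val < pval[i]:
                pbest[i], pval[i] = X[i][:], val
                if val < gval:
                    gbest, gval = X[i][:], val
    return gbest, gval
```

For feature selection, each bit marks whether a feature is kept and the fitness would combine classification error with the number of selected features.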

Mathematics doi: 10.3390/math9182301

Authors: Andrey V. Orekhov

This paper considers approximation-estimation tests for decision-making by machine-learning methods and defines integral-estimation tests, which generalize them to the continuous case. Approximation-estimation tests are measurable sampling functions (statistics) that estimate the approximation error of monotonically increasing number sequences in different classes of functions. These tests make it possible to determine the Markov moments of a qualitative change in the growth of such sequences, from linear to nonlinear type. If these sequences are trajectories of discrete quasi-deterministic random processes, then the moments of change in the nature of their growth and of the qualitative change in the process coincide. For example, in cluster analysis, approximation-estimation tests are a formal generalization of the "elbow method" heuristic. In solid mechanics, they can be used to determine the proportionality limit on the stress–strain curve (the boundary of application of Hooke's law). In molecular biology, approximation-estimation tests make it possible to determine the beginning of the exponential phase and the transition to the plateau phase of the fluorescence accumulation curves of real-time polymerase chain reaction, etc.
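A crude numerical analogue of such a test can be sketched as follows: fit a least-squares line to a sliding window of the sequence and flag the first moment at which the fit error stops being negligible. This is only an illustrative stand-in for the paper's statistics; the window size and tolerance are hypothetical choices.

```python
def linfit_sse(y):
    """Sum of squared residuals of the least-squares line through (0..n-1, y)."""
    n = len(y)
    xm, ym = (n - 1) / 2, sum(y) / n
    sxx = sum((x - xm) ** 2 for x in range(n))
    sxy = sum((x - xm) * (yi - ym) for x, yi in enumerate(y))
    b = sxy / sxx
    a = ym - b * xm
    return sum((yi - (a + b * x)) ** 2 for x, yi in enumerate(y))

def change_moment(seq, window=6, tol=1e-6):
    """First index at which the windowed linear-fit error stops being ~zero."""
    for k in range(window, len(seq) + 1):
        if linfit_sse(seq[k - window:k]) > tol:
            return k - 1
    return None
```

On a sequence that grows linearly and then switches to quadratic growth, the flagged index marks the switch, which is the elbow-type moment the paper formalizes.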

Mathematics doi: 10.3390/math9182300

Authors: José Luis Díaz Díaz Palencia

The aim of this work is to characterize Traveling Wave (TW) solutions for a coupled system with KPP–Fisher nonlinearity and weak advection. The heterogeneous diffusion introduces certain instabilities in the TW heteroclinic connections, which are explored. In addition, the weak advection gives rise to a critical combined TW speed for which solutions are purely monotone. This study follows purely analytical techniques together with numerical exercises used to validate or extend the analytical results. The main concepts treated are related to positivity conditions, TW propagation speed, and homotopy representations used to characterize the asymptotic behaviour of the TWs.
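For orientation, the classical scalar case underlying such coupled systems is standard material (the paper's heterogeneous, coupled model generalizes it):

```latex
% Scalar Fisher--KPP equation, traveling-wave ansatz, and the resulting wave ODE.
\begin{align}
  u_t &= d\,u_{xx} + u(1-u), \\
  u(x,t) &= \varphi(\xi), \qquad \xi = x - ct, \\
  d\,\varphi'' + c\,\varphi' + \varphi(1-\varphi) &= 0, \\
  c^* &= 2\sqrt{d} \quad \text{(minimal speed for monotone fronts)}.
\end{align}
```

For speeds $c \ge c^*$ the heteroclinic connection between the states $\varphi = 1$ and $\varphi = 0$ is monotone; below $c^*$ it oscillates, which is the kind of monotonicity threshold the abstract refers to for the combined TW speed.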

Mathematics doi: 10.3390/math9182299

Authors: Saleh Mousa Alzahrani Xavier Antoine Chokri Chniti

The aim of this paper is to introduce an original coupling procedure between surface integral equation formulations and on-surface radiation condition (OSRC) methods for solving two-dimensional scattering problems for non-convex structures. The key point is that the use of the OSRC introduces a sparse block in the surface operator representation of the wave field, while the integral part leads to an improved accuracy of the OSRC method in the non-convex part of the scattering structure. The procedure is given for both the Dirichlet and Neumann scattering problems. Some numerical simulations show the improvement induced by the coupling method.

Mathematics doi: 10.3390/math9182298

Authors: Mohammed K. A. Kaabar Mehdi Shabibi Jehad Alzabut Sina Etemad Weerawat Sudsutad Francisco Martínez Shahram Rezapour

Our main purpose in this paper is to prove the existence of solutions for the fractional strongly singular thermostat model under some generalized boundary conditions. In this way, we use some recent nonlinear fixed-point techniques involving α-ψ-contractions and α-admissible maps. Further, we establish the similar results for the hybrid version of the given fractional strongly singular thermostat control model. Some examples are studied to illustrate the consistency of our results.

Mathematics doi: 10.3390/math9182297

Authors: Habib Benbouhenni Nicu Bizon

In this work, a third-order sliding mode controller-based direct flux and torque control (DFTC-TOSMC) strategy for an asynchronous generator (AG)-based single-rotor wind turbine (SRWT) is proposed. The traditional direct flux and torque control (DFTC) technology, or direct torque control (DTC) with a proportional–integral (PI) regulator (DFTC-PI), has been widely used for asynchronous generators in recent years due to its higher efficiency compared with the traditional DFTC switching strategy. At the same time, one of its main disadvantages is the significant magnetic flux and torque ripple produced by the classical PI regulators. In order to overcome these drawbacks, this work improves the strategy by removing these regulators. The designed strategy is based on replacing the PI regulators with a TOSMC method that has the same inputs as these regulators. The numerical simulation was carried out in MATLAB, and the results obtained demonstrate the effectiveness of the designed strategy relative to the traditional one.

Mathematics doi: 10.3390/math9182296

Authors: Oscar Molina Vicenç Font Luis Pino-Fan

This paper aims to illustrate how a teacher instilled norms that regulate the theorem construction process in a three-dimensional geometry course. The course was part of a preservice mathematics teacher program and was characterized by promoting inquiry and argumentation. We analyze class excerpts in which students address tasks that require formulating conjectures that emerge as solutions to a problem, and proving such conjectures, while the teacher leads whole-class activities in which students' productions are discussed. For this, we used elements of the didactical analysis proposed by the onto-semiotic approach and Toulmin's model for argumentation. The teacher's professional actions that promoted reiterative actions in students' mathematical practices were identified; we illustrate how these professional actions impelled students' actions to become norms concerning the legitimacy of different types of arguments (e.g., analogical and abductive) in the theorem construction process.

Mathematics doi: 10.3390/math9182293

Authors: Yumo Zhang

This paper considers an optimal investment problem with mispricing in the family of 4/2 stochastic volatility models under mean–variance criterion. The financial market consists of a risk-free asset, a market index and a pair of mispriced stocks. By applying the linear–quadratic stochastic control theory and solving the corresponding Hamilton–Jacobi–Bellman equation, explicit expressions for the statically optimal (pre-commitment) strategy and the corresponding optimal value function are derived. Moreover, a necessary verification theorem was provided based on an assumption of the model parameters with the investment horizon. Due to the time-inconsistency under mean–variance criterion, we give a dynamic formulation of the problem and obtain the closed-form expression of the dynamically optimal (time-consistent) strategy. This strategy is shown to keep the wealth process strictly below the target (expected terminal wealth) before the terminal time. Results on the special case without mispricing are included. Finally, some numerical examples are given to illustrate the effects of model parameters on the efficient frontier and the difference between static and dynamic optimality.

Mathematics doi: 10.3390/math9182294

Authors: Attila Mester Andrei Pop Bogdan-Eduard-Mădălin Mursa Horea Greblă Laura Dioşan Camelia Chira

The stability and robustness of a complex network can be significantly improved by determining important nodes and by analyzing their tendency to group into clusters. Several centrality measures for evaluating the importance of a node in a complex network exist in the literature, each one focusing on a different perspective. Community detection algorithms can be used to determine clusters of nodes based on the network structure. This paper shows by empirical means that node importance can be evaluated from a dual perspective: by combining the traditional centrality measures, which regard the whole network as one unit, and by analyzing the node clusters yielded by community detection. These approaches offer not only overlapping results but also complementary information regarding the most important nodes. To confirm this mechanism, we performed experiments on synthetic and real-world networks, and the results indicate an interesting relation between important nodes at the community and network levels.
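The contrast between centrality measures is easy to reproduce on a toy graph: in two triangles joined by a bridge node, degree centrality picks a hub inside a community while closeness centrality picks the bridge. The sketch below is a stdlib illustration of two classical measures, not the paper's experimental setup.

```python
from collections import deque

def degree_centrality(adj):
    """Degree centrality: fraction of the other nodes each node touches."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def closeness_centrality(adj):
    """Closeness centrality via BFS shortest paths (assumes a connected graph)."""
    n = len(adj)
    out = {}
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        out[s] = (n - 1) / sum(dist.values())
    return out
```

The two rankings disagree on the toy graph, which is exactly the complementarity of perspectives the abstract argues for.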

Mathematics doi: 10.3390/math9182295

Authors: Nurul Aityqah Yaacob Jamil J. Jaber Dharini Pathmanathan Sadam Alwadi Ibrahim Mohamed

This study implements various maximal overlap discrete wavelet transform (MODWT) filters to model and forecast the time-dependent mortality index of the Lee-Carter model. The choice of appropriate wavelet filters is essential for effectively capturing the dynamics in a period, which cannot be accomplished by using the ARIMA model alone. In this paper, the ARIMA model is enhanced with the integration of various MODWT filters, such as the least asymmetric, best-localized, and Coiflet filters. These models are then applied to the mortality data of Australia, England, France, Japan, and the USA. The accuracy of the projected log of death rates of the MODWT-ARIMA model with the aforementioned wavelet filters is assessed using the mean absolute error, mean absolute percentage error, and mean absolute scaled error. The MODWT-ARIMA(5,1,0) model with the BL14 filter gives the best fit to the log of death rates for males, females, and the total population for all five countries studied. Implementing the MODWT leads to an improvement in the performance of the standard Lee-Carter framework in forecasting mortality rates.
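For context, the baseline that such models enhance is the classical random-walk-with-drift projection of the Lee-Carter time index k_t, which fits in a few lines. This is an illustrative sketch of the standard baseline, not the paper's MODWT-ARIMA procedure.

```python
def rw_drift_forecast(k, horizon):
    """Random-walk-with-drift projection of the Lee-Carter time index k_t:
    the drift MLE is the average first difference, i.e. (k_T - k_1)/(T - 1)."""
    drift = (k[-1] - k[0]) / (len(k) - 1)
    return [k[-1] + drift * h for h in range((1), horizon + 1)]
```

The forecast log death rates then follow from the Lee-Carter identity log m(x,t) = a(x) + b(x) k(t); the paper replaces this drift model with wavelet-filtered ARIMA dynamics.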

Mathematics doi: 10.3390/math9182292

Authors: Mujahid Abbas Rizwan Anjum Vasile Berinde

The aim of this paper is twofold: first, to define two new classes of mappings and show the existence and iterative approximation of their fixed points; second, to show that the Ishikawa, Mann, and Krasnoselskij iteration methods defined for such classes of mappings are equivalent. An application of the main results to solving split feasibility and variational inequality problems is also given.

Mathematics doi: 10.3390/math9182291

Authors: Tatiana Pedraza Jesús Rodríguez-López

It is a natural question if a Cartesian product of objects produces an object of the same type. For example, it is well known that a countable Cartesian product of metrizable topological spaces is metrizable. Related to this question, Borsík and Doboš characterized those functions that allow obtaining a metric in the Cartesian product of metric spaces by means of the aggregation of the metrics of each factor space. This question was also studied for norms by Herburt and Moszyńska. This aggregation procedure can be modified in order to construct a metric or a norm on a certain set by means of a family of metrics or norms, respectively. In this paper, we characterize the functions that allow merging an arbitrary collection of (asymmetric) norms defined over a vector space into a single norm (aggregation on sets). We see that these functions are different from those that allow the construction of a norm in a Cartesian product (aggregation on products). Moreover, we study a related topological problem that was considered in the context of metric spaces by Borsík and Doboš. Concretely, we analyze under which conditions the aggregated norm is compatible with the product topology or the supremum topology in each case.
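A concrete instance of aggregation on sets: the pointwise maximum of a family of norms on the same space is again a norm, which a numeric spot-check of the axioms illustrates. The two norms below are hypothetical examples chosen so that neither dominates the other.

```python
def n_a(v):
    """A weighted 1-norm on R^2."""
    return 2 * abs(v[0]) + abs(v[1])

def n_b(v):
    """The same weights, swapped."""
    return abs(v[0]) + 2 * abs(v[1])

def agg(v):
    """max{n_a, n_b}: the maximum aggregation of a family of norms is a norm,
    since each axiom (triangle inequality, homogeneity, definiteness) passes
    through the pointwise maximum."""
    return max(n_a(v), n_b(v))
```

The interesting content of the paper is the converse direction: characterizing exactly which aggregation functions (beyond the maximum) always return a norm.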

Mathematics doi: 10.3390/math9182290

Authors: Ján Perháč Valerie Novitzká William Steingartner Zuzana Bilanová

Computer network security is an important aspect of computer science. Many researchers are trying to increase security using different methods, technologies, or tools. One of the most common practices is the deployment of an Intrusion Detection System (IDS). The current state of IDS brings only passive protection from network intrusions, i.e., IDS can only detect possible intrusions. Due to that, the manual intervention of an administrator is needed. In our paper, we present a logical model of an active IDS based on category theory, coalgebras, linear logic, and Belief–Desire–Intention (BDI) logic. Such an IDS can not only detect intrusions but also autonomously react to them according to a defined security policy. We demonstrate our approach on a motivating example with real network intrusions.

Mathematics doi: 10.3390/math9182289

Authors: Octav Olteanu

Firstly, we recall the classical moment problem and some basic results related to it. By its formulation, this is an inverse problem: given a sequence (y_j)_{j∈ℕ^n} of real numbers and a closed subset F ⊆ ℝ^n, n ∈ {1,2,…}, find a positive regular Borel measure μ on F such that ∫_F t^j dμ = y_j, j ∈ ℕ^n. This is the full moment problem. The existence, uniqueness, and construction of the unknown solution μ are the focus of attention. The numbers y_j, j ∈ ℕ^n, are called the moments of the measure μ. When a sandwich condition on the solution is required, we have a Markov moment problem. Secondly, we study the existence and uniqueness of the solutions to some full Markov moment problems. If the moments y_j are self-adjoint operators, we have an operator-valued moment problem. Related results are the subject of attention. The truncated moment problem is also discussed, constituting the third aim of this work.

Mathematics doi: 10.3390/math9182288

Authors: Rohan Tahir Allah Bux Sargano Zulfiqar Habib

In recent years, learning-based approaches to 3D reconstruction have gained much popularity due to their encouraging results. However, unlike 2D images, 3D shapes have no canonical representation that is both computationally lean and memory-efficient. Moreover, generating a 3D model directly from a single 2D image is even more challenging due to the limited details available in the image for 3D reconstruction. Existing learning-based techniques still lack the resolution, efficiency, and smoothness of the 3D models required for many practical applications. In this paper, we propose voxel-based 3D object reconstruction (V3DOR) from a single 2D image for better accuracy, one model using autoencoders (AE) and another using variational autoencoders (VAE). The encoder part of both models learns a suitable compressed latent representation from a single 2D image, and a decoder generates the corresponding 3D model. Our contribution is twofold. First, to the best of the authors' knowledge, it is the first time that variational autoencoders (VAE) have been employed for the 3D reconstruction problem. Second, the proposed models extract a discriminative set of features and generate smoother, higher-resolution 3D models. To evaluate the efficacy of the proposed method, experiments have been conducted on the benchmark ShapeNet dataset. The results confirm that the proposed method outperforms state-of-the-art methods.

Mathematics doi: 10.3390/math9182287

Authors: Jorge Munoz-Minjares Osbaldo Vite-Chavez Jorge Flores-Troncoso Jorge M. Cruz-Duarte

Object segmentation is a widely studied topic in digital image processing, as it can be used for countless applications in several fields. This process is traditionally achieved by computing an optimal threshold from the image intensity histogram. Several algorithms have been proposed to find this threshold based on different statistical principles. However, the results generated by these algorithms contradict one another due to the many variables that can disturb an image. An accepted strategy for achieving the optimal histogram threshold, distinguishing between the object and the background, is to estimate two data distributions and find their intersection. This work proposes a strategy based on the Cuckoo Search Algorithm (CSA) and the Generalized Gaussian (GG) distribution to assess the optimal threshold. To test this methodology, we carried out several experiments in synthetic and practical scenarios and compared our results against other well-known algorithms from the literature. The practical cases comprise a medical image database and our own generated database. The results in a simulated environment show an evident advantage of the proposed strategy over the other algorithms. In a real environment, the proposed strategy ranks among the best algorithms, making it a reliable alternative.
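The "intersection of two distributions" step is easy to make concrete for the plain Gaussian case: equating two weighted normal densities gives a quadratic whose root between the means is the threshold. This is a sketch under that simplifying assumption (the paper fits Generalized Gaussian distributions via CSA, and the sketch assumes the two densities actually cross between the means).

```python
import math

def gaussian_intersection(m1, s1, m2, s2, w1=0.5, w2=0.5):
    """Threshold where two weighted Gaussian pdfs cross, taken between the means."""
    # Equate log-densities: a*x^2 + b*x + c = 0
    a = 1.0 / (2 * s1 ** 2) - 1.0 / (2 * s2 ** 2)
    b = m2 / (s2 ** 2) - m1 / (s1 ** 2)
    c = (m1 ** 2 / (2 * s1 ** 2) - m2 ** 2 / (2 * s2 ** 2)
         + math.log((w2 * s1) / (w1 * s2)))
    if abs(a) < 1e-15:                    # equal variances: the equation is linear
        return -c / b
    disc = math.sqrt(b * b - 4 * a * c)   # assumes the densities do intersect
    roots = ((-b + disc) / (2 * a), (-b - disc) / (2 * a))
    lo, hi = sorted((m1, m2))
    return next((r for r in roots if lo <= r <= hi), roots[0])
```

With equal variances and weights the formula collapses to the midpoint of the means, which is the intuitive binarization threshold.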

Mathematics doi: 10.3390/math9182286

Authors: Ingrid Semanišinová

In this paper, we present a study devoted to the use of multiple-solution tasks (MSTs) in combinatorics as part of a pre-service teachers' course on the didactics of mathematics, viewed through the mathematics teachers' specialized knowledge (MTSK) theoretical framework. The study was carried out over a standard summer semester course in 2021. The course was attended by 13 pre-service teachers (PSTs) and was conducted online due to COVID-19 restrictions. Ten combinatorial multiple-solution tasks were assigned to the PSTs. Analyzing the pre-service teachers' solutions to these tasks, we sought to describe and better understand their combinatorial knowledge of the topic from the perspective of MTSK. The results revealed some critical aspects of mathematical knowledge in combinatorics that pre-service teacher education should focus on.

Mathematics doi: 10.3390/math9182285

Authors: Jiang Ma Wei Zhao Yanguo Jia Xiumin Shen Haiyang Jiang

Linear complexity is an important property for measuring the unpredictability of pseudo-random sequences. Trace representation is helpful for analyzing the cryptographic properties of pseudo-random sequences. In this paper, a class of new Ding generalized cyclotomic binary sequences of order two with period pq is constructed based on a new segmentation of the Ding–Helleseth generalized cyclotomy. Firstly, the linear complexity and minimal polynomial of the sequences are investigated. Then, their trace representation is given. It is proved that the sequences have large linear complexity and can resist attacks by the Berlekamp–Massey algorithm. This paper also confirms that generalized cyclotomic sequences with good randomness may be obtained by modifying the characteristic set of the generalized cyclotomy.
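The attack the sequences are designed to resist can be made concrete: the Berlekamp–Massey algorithm recovers the linear complexity (the length of the shortest LFSR generating a binary sequence), as in this standard sketch. A sequence is cryptographically weak if its linear complexity is small, because 2L bits then suffice to reconstruct the generator.

```python
def berlekamp_massey(s):
    """Linear complexity of a binary sequence s over GF(2) (canonical algorithm)."""
    n = len(s)
    c, b = [0] * n, [0] * n    # current and previous connection polynomials
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # discrepancy between the sequence and the current LFSR's prediction
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:
            t = c[:]
            for j in range(n - i + m):   # c(x) += x^(i-m) * b(x)
                c[i - m + j] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L
```

An m-sequence from a degree-3 primitive polynomial has linear complexity exactly 3, so the algorithm reconstructs its tiny LFSR from a handful of bits; the paper's sequences are built so that L stays large.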

Mathematics doi: 10.3390/math9182284

Authors: Endre Kovács Ádám Nagy Mahmoud Saleh

This paper introduces a set of new fully explicit numerical algorithms to solve the spatially discretized heat or diffusion equation. After discretizing the space and the time variables according to conventional finite difference methods, these new methods do not approximate the time derivatives by finite differences, but use a combined two-stage constant-neighbour approximation to decouple the ordinary differential equations and solve them analytically. In the final expression for the new values of the variable, the time step size appears not in polynomial or rational, but in exponential form with negative coefficients, which can guarantee stability. The two-stage scheme contains a free parameter p and we analytically prove that the convergence is second order in the time step size for all values of p and the algorithm is unconditionally stable if p is at least 0.5, not only for the linear heat equation, but for the nonlinear Fisher’s equation as well. We compare the performance of the new methods with analytical and numerical solutions. The results suggest that the new algorithms can be significantly faster than the widely used explicit or implicit methods, particularly in the case of extremely large stiff systems.
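The constant-neighbour idea can be sketched for the 1D heat equation: freezing each node's neighbours turns its ODE into a linear relaxation that is solved exactly, so the time step enters through a negative exponential and the update cannot blow up for any step size. This is a one-stage illustrative sketch; the paper's methods are two-stage with a free parameter p.

```python
import math

def cn_step(u, dt, h):
    """One constant-neighbour step for u_t = u_xx with fixed (Dirichlet) ends.
    With neighbours frozen, du_i/dt = 2*(avg - u_i)/h^2 has the exact solution
    u_i(dt) = avg + (u_i - avg) * exp(-2*dt/h^2): a convex combination, hence
    unconditionally stable and positivity-preserving for any dt."""
    r = math.exp(-2.0 * dt / (h * h))
    new = u[:]
    for i in range(1, len(u) - 1):
        avg = 0.5 * (u[i - 1] + u[i + 1])
        new[i] = avg + (u[i] - avg) * r
    return new
```

Even with a time step hundreds of times beyond the explicit limit h²/2, the update stays within the bounds of the initial data instead of oscillating and diverging as a forward-Euler step would.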

Mathematics doi: 10.3390/math9182283

Authors: Mian Bahadur Zada Muhammad Sarwar Thabet Abdeljawad Aiman Mukheimer

The aim of this work is to discuss the existence of solutions to the system of fractional variable order hybrid differential equations. For this reason, we establish coupled fixed point results in Banach spaces.

Mathematics doi: 10.3390/math9182279

Authors: Andres El-Fakdi Josep Lluis de la Rosa

Digital preservation is a research area devoted to keeping digital assets preserved and usable for many years. Out of the many approaches to digital preservation, the present research article follows a new object-centered digital preservation paradigm where digital objects share part of the responsibility for preservation: they can move, replicate, and evolve to a higher-quality format inside a digital ecosystem. In the new framework, the behavior of digital objects needs to be modeled in order to obtain the best preservation strategy. Thus, digital objects are programmed with the mission of their own long-term self-preservation, which entails being accessible and reproducible by users at any time in the future regardless of frequent technological changes due to software and hardware upgrades. Three nature-inspired computational intelligence algorithms, based on the collective behavior of decentralized and self-organized systems, were selected for the modeling approach: multipopulation genetic algorithm, ant colony optimization, and a virus-based algorithm. TiM, a simulated environment for running distributed digital ecosystems, was used to perform the experiments. The results map the relation between the models and the expected object diversity obtained in short- and mid-term digital preservation scenarios. Comparing the results, the best performance corresponded to the multipopulation genetic algorithm. The article aims to be a first step in the digital self-preservation field. Building nature-inspired model behaviors is a good approach and opens the door to future tests with other AI-based methods.

Mathematics doi: 10.3390/math9182282

Authors: Saulius Minkevičius Igor Katin Joana Katina Irina Vinogradova-Zinkevič

The structure of this work in the field of queuing theory consists of two stages. The first stage presents Little’s Law in Multiphase Systems (MSs). To obtain this result, the Strong Law of Large Numbers (SLLN)-type theorems for the most important MS probability characteristics (i.e., queue length of jobs and virtual waiting time of a job) are proven. The next stage of the work is to verify the result obtained in the first stage.
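As a single-station reminder of the identity being generalized to multiphase systems, Little's law L = λW holds exactly over any window in which the system starts and ends empty. The four-job trace below is invented for the illustration.

```python
# Arrival and FIFO departure times of four jobs at a single station;
# the system is empty before t = 0 and after t = 8 (illustrative data).
arrivals   = [0.0, 1.0, 2.0, 6.0]
departures = [3.0, 4.0, 5.0, 8.0]

T = max(departures) - min(arrivals)                       # window length
n = len(arrivals)
sojourn = sum(d - a for a, d in zip(arrivals, departures))

lam = n / T          # arrival rate
W = sojourn / n      # mean time a job spends in the system
L = sojourn / T      # time-average number of jobs in the system

# Little's law L = lam * W is exact here: lam*W = (n/T)*(S/n) = S/T = L.
```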

Mathematics doi: 10.3390/math9182281

Authors: Karime Montes Escobar José Luis Vicente-Villardon Javier de la Hoz-M Lelly María Useche-Castro Daniel Fabricio Alarcón Cano Aline Siteneski

Background: Neuroendocrine tumors (NETs) are severe and relatively rare and may affect any organ of the human body. The prevalence of NETs has increased in recent years; however, the available data concentrate on particular types, and, despite the efforts of different guidelines, there is no consensus on how to identify the different types of NETs. In this review, we investigated the countries that published the most articles about NETs, the organs most frequently affected, and the most common related topics. Methods: This work used the Latent Dirichlet Allocation (LDA) method to identify and interpret scientific information in relation to the categories in a set of documents. The HJ-Biplot method was also used to determine the relationship between the analyzed topics, by taking into consideration the years under study. Results: In this study, a literature review was conducted, from which a total of 7658 abstracts of scientific articles published between 1981 and 2020 were extracted. The United States, Germany, United Kingdom, France, and Italy published the majority of studies on NETs, of which pancreatic tumors were the most studied. The five most frequent topics were t_21 (clinical benefit), t_11 (pancreatic neuroendocrine tumors), t_13 (patients one year after treatment), t_17 (prognosis of survival before and after resection), and t_3 (markers for carcinomas). Finally, the results were put through a two-way multivariate analysis (HJ-Biplot), which generated a new interpretation: we grouped topics by year and discovered which NETs were the most relevant for which years. Conclusions: The most frequent topics found in our review highlighted the severity of NETs: patients have a poor prognosis of survival and a high probability of tumor recurrence.

Mathematics doi: 10.3390/math9182280

Authors: Chi-Yo Huang Min-Jen Yang Jeen-Fong Li Hueiling Chen

The industry–academic collaboration (IAC) in developed and developing countries enables these economies to gain momentum in continuous innovation and, thus, economic growth. Patent commercialization is one major channel of knowledge flow in IAC. However, very few studies consider the flow of knowledge between industrial firms and universities. Moreover, ways that the patent commercialization performance of IACs can be evaluated are rarely discussed. Therefore, defining an analytic framework to evaluate the performance of IAC from the aspect of patent commercialization is critical. Traditionally, data envelopment analysis (DEA) models have widely been adopted in performance evaluation. However, traditional DEA models cannot accurately evaluate the performance of IACs with complex university–industry interconnections, the internal linkages, or linking activities of knowledge-flow within the decision-making units (DMUs), i.e., the IACs. In order to solve the abovementioned problems, this study defines a multiple objective programming (MOP)-based network DEA (NDEA), with weighting derived from the decision-making trial and evaluation laboratory (DEMATEL)-based analytic network process (ANP), or the DANP. The proposed analytic framework can evaluate the efficiency of DMUs with a network structure (e.g., supply chains, strategic alliances, etc.) based on weights derived from experts' opinions. An empirical study based on the performance of the patent commercialization of Taiwanese IACs was used to demonstrate the feasibility of the proposed framework. The results of the empirical research can serve as a basis for improving the performance of IAC.

Mathematics doi: 10.3390/math9182277

Authors: Mahmoud El-Morshedy Hassan M. Aljohani Mohamed S. Eliwa Mazen Nassar Mohammed K. Shakhatreh Ahmed Z. Afify

Continuous and discrete distributions are essential to model both continuous and discrete lifetime data in several applied sciences. This article introduces two extended versions of the Burr–Hatke model to improve its applicability. The first continuous version is called the exponentiated Burr–Hatke (EBuH) distribution. We also propose a new discrete analog, namely the discrete exponentiated Burr–Hatke (DEBuH) distribution. The probability density and the hazard rate functions exhibit decreasing or upside-down shapes, and the shape of the reversed hazard rate function is also characterized. Some statistical and reliability properties of the EBuH distribution are calculated. The EBuH parameters are estimated using some classical estimation techniques. Simulations are conducted to explore the behavior of the proposed estimators for small and large samples. The applicability of the EBuH and DEBuH models is studied using two real-life data sets. Moreover, the maximum likelihood approach is adopted to estimate the parameters of the EBuH distribution under constant-stress accelerated life-tests (CSALTs). Furthermore, a real data set is analyzed to validate our results under the CSALT model.

Mathematics doi: 10.3390/math9182275

Authors: Radwan Abu-Gdairi Mostafa A. El-Gayar Mostafa K. El-Bably Kamel K. Fleifel

Rough set philosophy is a significant methodology in the knowledge discovery of databases. In the present paper, we suggest new sorts of rough set approximations using a multi-knowledge base, that is, a family of a finite number of general binary relations, via different methods. The proposed methods depend basically on a new neighborhood (called a basic-neighborhood). Generalized rough approximations (so-called basic-approximations) represent a generalization of Pawlak's rough sets and some of their extensions, as confirmed in the present paper. We prove that the suggested approximations achieve the highest accuracy among the compared approaches. Many comparisons between these approaches and the previous methods are introduced. The main goal of the suggested techniques was to study multi-information systems in order to extend the application field of rough set models. Thus, two important real-life applications are discussed to illustrate the importance of these methods. We applied the introduced approximations in a set-valued ordered information system in order to provide accurate tools for decision-making. To illustrate our methods, we applied them to find the key foods that are healthy in nutrition modeling, as well as in the medical field to make a good decision regarding the heart attack problem.

Mathematics doi: 10.3390/math9182278

Authors: Claudiu-Ionuţ Popîrlan Irina-Valentina Tudor Constantin-Cristian Dinu Gabriel Stoian Cristina Popîrlan Daniela Dănciulescu

In this paper, we want to examine how unemployment impacts social life, and, by using datasets from six European countries, we analyze the effect of unemployment on two of the main aspects of social life: social exclusion and life satisfaction. First, we predict unemployment rates using the Auto Regressive Integrated Moving Average (ARIMA) model and the results are further used in a linear regression model alongside social exclusion and life satisfaction data, thus obtaining the hybrid model. With the help of the point prediction method, we use the hybrid model to predict new values for the two aspects of social life for the upcoming three years and we analyze the results obtained in order to better understand their interconnection. The results suggest that unemployment has particularly adverse effects on the subjective perception of life satisfaction, furthermore increasing the social exclusion percentage.
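The two-stage hybrid can be caricatured in a few lines: forecast unemployment with a time-series model, then push those forecasts through a regression on the social indicator. Everything below is an assumption of the sketch: the ten-point series are invented, an OLS-fitted AR(1) stands in for the ARIMA stage, and the three-year horizon mirrors the text.

```python
import numpy as np

# Hypothetical yearly unemployment rates (%) and life-satisfaction scores;
# the study itself uses official series for six European countries.
unemp  = np.array([7.1, 7.4, 8.0, 8.6, 8.2, 7.9, 7.5, 7.2, 6.9, 6.6])
satisf = np.array([7.3, 7.2, 7.0, 6.8, 6.9, 7.0, 7.1, 7.2, 7.3, 7.4])

# Stage 1 - a minimal AR(1) stand-in for ARIMA:
# fit u_t = c + phi * u_{t-1} by ordinary least squares.
X = np.column_stack([np.ones(len(unemp) - 1), unemp[:-1]])
c, phi = np.linalg.lstsq(X, unemp[1:], rcond=None)[0]

# Iterate the fitted recursion to obtain 3 years of point forecasts.
forecasts, last = [], unemp[-1]
for _ in range(3):
    last = c + phi * last
    forecasts.append(last)

# Stage 2 - linear regression of life satisfaction on unemployment,
# then feed the unemployment forecasts into it (the "hybrid" step).
Z = np.column_stack([np.ones(len(unemp)), unemp])
a, b = np.linalg.lstsq(Z, satisf, rcond=None)[0]
satisf_forecasts = [a + b * f for f in forecasts]
```

With these made-up data the fitted slope b is negative, matching the paper's finding that higher unemployment goes with lower subjective life satisfaction.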

Mathematics doi: 10.3390/math9182276

Authors: Salah Eddargani María José Ibáñez Abdellah Lamnii Mohamed Lamnii Domingo Barrera

In this work, we study quasi-interpolation in a space of sextic splines defined over Powell–Sabin triangulations. These spline functions are of class C2 on the whole domain but fourth-order regularity is required at vertices and C3 regularity is imposed across the edges of the refined triangulation and also at the interior point chosen to define the refinement. An algorithm is proposed to define the Powell–Sabin triangles with a small area and diameter needed to construct a normalized basis. Quasi-interpolation operators which reproduce sextic polynomials are constructed after deriving Marsden’s identity from a more explicit version of the control polynomials introduced some years ago in the literature. Finally, some tests show the good performance of these operators.

Mathematics doi: 10.3390/math9182274

Authors: Lvyang Qiu Shuyu Li Yunsick Sung

With unlabeled music data widely available, it is necessary to build an unsupervised latent music representation extractor to improve the performance of classification models. This paper proposes an unsupervised latent music representation learning method based on a deep 3D convolutional denoising autoencoder (3D-DCDAE) for music genre classification, which aims to learn common representations from a large amount of unlabeled data. Specifically, unlabeled MIDI files are applied to the 3D-DCDAE to extract latent representations by denoising and reconstructing input data. Next, a decoder is utilized to assist the 3D-DCDAE in training. After 3D-DCDAE training, the decoder is replaced by a multilayer perceptron (MLP) classifier for music genre classification. Through the unsupervised latent representation learning method, unlabeled data can be applied to classification tasks, so that the problem of classification performance being limited by insufficient labeled data can be alleviated. In addition, the unsupervised 3D-DCDAE can consider the musicological structure to expand the understanding of the music field and improve performance in music genre classification. In the experiments, which utilized the Lakh MIDI dataset, a large amount of unlabeled data was utilized to train the 3D-DCDAE, obtaining a denoising and reconstruction accuracy of approximately 98%. A small amount of labeled data was utilized for training a classification model consisting of the trained 3D-DCDAE and the MLP classifier, which achieved a classification accuracy of approximately 88%. The experimental results show that the model achieves state-of-the-art performance and significantly outperforms other methods for music genre classification with only a small amount of labeled data.

Mathematics doi: 10.3390/math9182273

Authors: Alexandra Saviuc Manuela Gîrțu Liliana Topliceanu Tudor-Cristian Petrescu Maricel Agop

Assimilating a complex fluid with a fractal object, non-differentiable behaviors in its dynamics are analyzed. Complex fluid dynamics in the form of hydrodynamic-type fractal regimes imply “holographic implementations” through velocity fields at non-differentiable scale resolution, via fractal solitons, fractal solitons–fractal kinks, and fractal minimal vortices. Complex fluid dynamics in the form of Schrödinger type fractal regimes imply “holographic implementations”, through the formalism of Airy functions of fractal type. Then, the in-phase coherence of the dynamics of the complex fluid structural units induces various operational procedures in the description of such dynamics: special cubics with SL(2R)-type group invariance, special differential geometry of Riemann type associated to such cubics, special apolar transport of cubics, special harmonic mapping principle, etc. In such a manner, a possible scenario toward chaos (a period-doubling scenario), without concluding in chaos (nonmanifest chaos), can be mimed.

Mathematics doi: 10.3390/math9182272

Authors: Metod Saniga Henri de Boutray Frédéric Holweck Alain Giorgetti

We study certain physically-relevant subgeometries of binary symplectic polar spaces W(2N−1,2) of small rank N, when the points of these spaces canonically encode N-qubit observables. Key characteristics of a subspace of such a space W(2N−1,2) are: the number of its negative lines, the distribution of types of observables, the character of the geometric hyperplane the subspace shares with the distinguished (non-singular) quadric of W(2N−1,2) and the structure of its Veldkamp space. In particular, we classify and count polar subspaces of W(2N−1,2) whose rank is N−1. W(3,2) features three negative lines of the same type and its W(1,2)’s are of five different types. W(5,2) is endowed with 90 negative lines of two types and its W(3,2)’s split into 13 types. A total of 279 out of 480 W(3,2)’s with three negative lines are composite, i.e., they all originate from the two-qubit W(3,2). Given a three-qubit W(3,2) and any of its geometric hyperplanes, there are three other W(3,2)’s possessing the same hyperplane. The same holds if a geometric hyperplane is replaced by a ‘planar’ tricentric triad. A hyperbolic quadric of W(5,2) is found to host particular sets of seven W(3,2)’s, each of them being uniquely tied to a Conwell heptad with respect to the quadric. There is also a particular type of W(3,2)’s, a representative of which features a point each line through which is negative. Finally, W(7,2) is found to possess 1908 negative lines of five types and its W(5,2)’s fall into as many as 29 types. A total of 1524 out of 1560 W(5,2)’s with 90 negative lines originate from the three-qubit W(5,2). Remarkably, the difference in the number of negative lines for any two distinct types of four-qubit W(5,2)’s is a multiple of four.

Mathematics doi: 10.3390/math9182271

Authors: Daniel Solow Joseph Szmerekovsky Sukumarakurup Krishnakumar

The value and importance of leadership is evident by its prevalence throughout human societies and organizations. Based on an evolutionary argument, models are presented here that provide a mathematical justification as to how and why leadership arose in the first place and then persisted. In this setting, by a leader is meant a person whose overall actions are ultimately responsible for the well-being and survival of the group. The proposed models contain parameters whose values reflect group size, harshness of the environment, diversity of actions taken by individuals, and the amount of group cohesion. Mathematical analysis and computer simulations are used to identify conditions on these parameters under which leadership results in an increased survival probability for the community.

Mathematics doi: 10.3390/math9182270

Authors: Andreas H. Hamel Frank Heyde

A theory is developed for set-valued functions that are translative with respect to a linear operator. It is shown that such functions cover a wide range of applications, from projections in Hilbert spaces and set-valued quantiles for vector-valued random variables to scalar or set-valued risk measures in finance with defaultable or nondefaultable securities. Primal, dual, and scalar representation results are given, among them an infimal convolution representation, which is not so well known even in the scalar case. Along the way, new concepts of set-valued lower/upper expectations are introduced and dual representation results are formulated using such expectations. An extension to random sets is discussed at the end. The principal methodology consists of applying the complete lattice framework of set optimization.

Mathematics doi: 10.3390/math9182269

Authors: Xiao-Ting He Xue Li Bin-Bin Shi Jun-Yi Sun

The closed-form solution of circular membranes subjected to gas pressure loading plays an extremely important role in technical applications such as characterization of mechanical properties for freestanding thin films or thin-film/substrate systems based on pressured bulge or blister tests. However, the only two relevant closed-form solutions available in the literature are suitable only for the case where the rotation angle of membrane is relatively small, because they are derived with the small-rotation-angle assumption of membrane, that is, the rotation angle θ of membrane is assumed to be small so that "sinθ = 1/(1 + 1/tan²θ)^(1/2)" can be approximated by "sinθ = tanθ". Therefore, the two closed-form solutions with small-rotation-angle assumption cannot meet the requirements of these technical applications. Such a bottleneck to these technical applications is solved in this study, and a new and more refined closed-form solution without small-rotation-angle assumption is given in power series form, which is derived with "sinθ = 1/(1 + 1/tan²θ)^(1/2)", rather than "sinθ = tanθ", thus being suitable for the case where the rotation angle of membrane is relatively large. This closed-form solution without small-rotation-angle assumption can naturally satisfy the remaining unused boundary condition, and numerically shows satisfactory convergence, agrees well with the closed-form solution with small-rotation-angle assumption for lightly loaded membranes with small rotation angles, and diverges distinctly for heavily loaded membranes with large rotation angles. The confirmatory experiment conducted shows that the closed-form solution without small-rotation-angle assumption is reliable and has a satisfactory calculation accuracy in comparison with the closed-form solution with small-rotation-angle assumption, particularly for heavily loaded membranes with large rotation angles.

Mathematics doi: 10.3390/math9182268

Authors: Xue Jin Shiwei Zhou Kedong Yin Mingzhen Li

This paper analyzes the price correlation effect between domestic and foreign copper futures contracts. The VAR-BEKK-GARCH (1,1) spillover effect model and the BN-S class non-parametric model based on the jumping perspective are used. The co-integration test shows a long-term equilibrium relationship between the three copper futures markets, and the Granger causality test shows that copper futures contracts have significant two-way spillover effects between different periods in Shanghai for New York copper and unidirectional mean spillover effects for London copper. The BEKK model shows significant bidirectional fluctuation spillover effects between the futures contracts of the Shanghai, London, and New York copper markets before the stock market crash. After the crash, Shanghai and New York copper have significant one-way fluctuation spillover effects on London copper futures contracts. There are jumps within a single market, and the number of joint jumps between markets increases with the significance level.

Mathematics doi: 10.3390/math9182266

Authors: Marko Petkovšek Mitja Nemec Peter Zajec

This paper addresses the challenges of selecting a suitable method for negative temperature coefficient (NTC) thermistor-based temperature measurement in electronic devices. Although measurement accuracy is of great importance, the temperature calculation time represents an even greater challenge since it is inherently constrained by the control algorithm executed in the microcontroller (MCU). Firstly, a simple signal conditioning circuit with the NTC thermistor is introduced, resulting in a temperature-dependent voltage UT being connected to the MCU’s analog input. Next, a simulation-based approximation of the actual temperature vs. voltage curve is derived, resulting in four temperature notations: for a look-up table principle, polynomial approximation, B equation and Steinhart–Hart equation. Within the simulation results, the expected temperature error of individual methods is calculated, whereas in the experimental part, performed on a DC/DC converter prototype, required prework and available MCU resources are evaluated. In terms of expected accuracy, the look-up table and the Steinhart–Hart equation offer superior results over the polynomial approximation and B equation, especially in the nominal temperature range of the NTC thermistor. However, in terms of required prework, the look-up table is inferior compared to the Steinhart–Hart equation, despite the latter having far more complex mathematical functions, affecting the overall MCU algorithm execution time significantly.
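Of the four notations compared, the Steinhart–Hart conversion is the easiest to show compactly. The coefficients below are commonly quoted generic values for a 10 kΩ NTC, and the pull-up divider mentioned in the comment is an assumed circuit, not necessarily the paper's signal conditioning stage.

```python
import math

def steinhart_hart(r_ohm, a=1.009249522e-03, b=2.378405444e-04,
                   c=2.019202697e-07):
    """Temperature (deg C) from NTC resistance via the Steinhart-Hart
    equation 1/T = A + B*ln(R) + C*ln(R)^3, with T in kelvin.

    The default coefficients are illustrative values for a generic
    10 kOhm NTC, not those of the sensor used in the paper.
    """
    ln_r = math.log(r_ohm)
    return 1.0 / (a + b * ln_r + c * ln_r ** 3) - 273.15

# For a simple pull-up divider (an assumed circuit), the resistance would
# first be recovered from the measured voltage as
#     R = R_pullup * U_T / (Vcc - U_T).
t25 = steinhart_hart(10_000.0)   # near the 25 deg C nominal point
```

On an MCU the three floating-point terms and the logarithm are exactly the cost the paper weighs against the near-free lookup of a precomputed table.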

Mathematics doi: 10.3390/math9182267

Authors: Hasanen A. Hammad Manuel De la Sen

The objective of this paper is to present new tripled fixed point (TFP) results obtained by virtue of a control function in the framework of fuzzy cone metric spaces (FCM-spaces). This function is a continuous one-to-one self-map that is subsequentially convergent (SC) in FCM-spaces. Moreover, by using the triangular property of a FCM, some unique TFP results are shown under modified contractive-type conditions. Additionally, two examples are discussed to support our work. Ultimately, to examine and support the theoretical results, the existence and uniqueness of a solution to a system of Volterra integral equations (VIEs) are obtained.

Mathematics doi: 10.3390/math9182264

Authors: Jin Tao Dachun Yang Wen Yuan

In this systematic review, the authors give a survey on the recent developments of both the John–Nirenberg space JN_p and the space BMO, as well as their vanishing subspaces such as VMO, XMO, CMO, VJN_p, and CJN_p on R^n or a given cube Q_0 ⊂ R^n with finite side length. In addition, some related open questions are also presented.

Mathematics doi: 10.3390/math9182265

Authors: Francisco Rubio Carlos Llopis-Albert

A wind turbine can act as an energy recovery system (ERS) in a way comparable to regenerative braking. When the velocity of a vehicle decreases, part of its kinetic energy tends to dissipate; historically, this dissipated energy has been ignored. For example, during the braking process, the kinetic energy of the vehicle is converted into heat. In recent years, society's greater awareness of climate change, pollution, and environmental issues has led to a great deal of interest in developing energy recovery systems, which allow the recovery of kinetic energy from braking (KERS), resulting in consumption reductions (efficiency gains) of up to 45%. The usefulness of installing a wind turbine as an energy recovery device is analysed by evaluating the savings that can be achieved with its two possible working modes: as an energy recovery device and as a system for utilizing aerodynamic force. The wind turbine has a horizontal axis and a diameter of 50 cm and is installed on the front of a vehicle. This vehicle undergoes three particular driving schemes, operating under different experimental conditions and operational parameters characterized by speeds, accelerations, stops, and driving time. The results clearly show the advantages of using the proposed technology.

Mathematics doi: 10.3390/math9182263

Authors: Pablo Pereira Álvarez Pierre Kerfriden David Ryckelynck Vincent Robin

Welding operations may be subjected to different types of defects when the process is not properly controlled and most defect detection is done a posteriori. The mechanical variables that are at the origin of these imperfections are often not observable in situ. We propose an offline/online data assimilation approach that allows for joint parameter and state estimations based on local probabilistic surrogate models and thermal imaging in real-time. Offline, the surrogate models are built from a high-fidelity thermomechanical Finite Element parametric study of the weld. The online estimations are obtained by conditioning the local models by the observed temperature and known operational parameters, thus fusing high-fidelity simulation data and experimental measurements.

Mathematics doi: 10.3390/math9182262

Authors: Emilio Defez Javier Ibáñez José M. Alonso Michael M. Tung Teresa Real-Herráiz

Matrix differential equations are at the heart of many science and engineering problems. In this paper, a procedure based on higher-order matrix splines is proposed to provide the approximated numerical solution of special nonlinear third-order matrix differential equations of the form Y^(3)(x) = f(x, Y(x)). Some numerical test problems are also included, whose solutions are computed by our method.

Mathematics doi: 10.3390/math9182261

Authors: Dinh-Tu Nguyen Jeng-Rong Ho Pi-Cheng Tung Chih-Kuang Lin

Kerf width is one of the most important quality items in cutting of thin metallic sheets. The aim of this study was to develop a convolutional neural network (CNN) model for analysis and prediction of kerf width in laser cutting of thin non-oriented electrical steel sheets. Three input process parameters were considered, namely, laser power, cutting speed, and pulse frequency, while one output parameter, kerf width, was evaluated. In total, 40 sets of experimental data were obtained for development of the CNN model, including 36 sets for training with k-fold cross-validation and four sets for testing. Compared with a deep neural network (DNN) model and an extreme learning machine (ELM) model, the developed CNN model had the lowest mean absolute percentage error (MAPE) of 4.76% for the final test dataset in predicting kerf width. This indicates that the proposed CNN model is an appropriate model for kerf width prediction in laser cutting of thin non-oriented electrical steel sheets.

Mathematics doi: 10.3390/math9182260

Authors: Alex Isakson Simone Krummaker María Dolores Martínez-Miranda Ben Rickayzen

In this paper, we apply and further illustrate a recently developed extended continuous chain ladder model to forecast mesothelioma deaths. Making such a forecast has always been a challenge for insurance companies as exposure is difficult or impossible to measure, and the latency of the disease usually lasts several decades. While we compare three approaches to this problem, we show that the extended continuous chain ladder model is a promising benchmark candidate for asbestosis mortality forecasting due to its flexible and simple forecasting strategy. Furthermore, we demonstrate how the model can be used to provide an update for the forecast of the number of deaths due to mesothelioma in Great Britain using recent Health and Safety Executive (HSE) data.

Mathematics doi: 10.3390/math9182259

Authors: Kai An Sim Kok Bin Wong

In 1977, Davis et al. proposed a method to generate an arrangement of [n]={1,2,…,n} that avoids three-term monotone arithmetic progressions. Consequently, this arrangement avoids k-term monotone arithmetic progressions in [n] for k≥3. Hence, we are interested in finding an arrangement of [n] that avoids k-term monotone arithmetic progressions, but allows (k−1)-term monotone arithmetic progressions. In this paper, we propose a method to rearrange the rows of a magic square of order 2k−3 and show that this arrangement does not contain a k-term monotone arithmetic progression. Consequently, we show that there exists an arrangement of n consecutive integers such that it does not contain a k-term monotone arithmetic progression, but it contains a (k−1)-term monotone arithmetic progression.
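The avoidance property can be checked directly from the definition with a small brute-force routine (ours, not the paper's): an arrangement of [n] contains a k-term monotone AP if some k values in arithmetic progression occupy monotonically increasing or decreasing positions.

```python
def has_monotone_ap(arrangement, k):
    """True if some k-term arithmetic progression of values appears at
    monotonically increasing or decreasing positions in the arrangement
    of 1..n (positions need not be consecutive)."""
    pos = {v: i for i, v in enumerate(arrangement)}
    n = len(arrangement)
    for d in range(1, n):                         # common difference
        for a in range(1, n - (k - 1) * d + 1):   # first term
            idx = [pos[a + j * d] for j in range(k)]
            if idx == sorted(idx) or idx == sorted(idx, reverse=True):
                return True
    return False

# The natural order trivially contains 3-term monotone APs, while the
# interleaved arrangement 2, 4, 1, 3 avoids them.
```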

Mathematics doi: 10.3390/math9182258

Authors: Nazaria Solferino Maria Elisabetta Tessitore

We devise a theoretical model to shed light on the dynamics leading to toxic relationships. We investigate what intervention policy people could advocate to protect themselves and to reduce suffocating addiction in order to escape from physical or psychological abuse, either inside the family or at work. Assuming that the toxic partner's behavior is exogenous and that the main source of addiction is income or wealth, we find that an asymptotically stable equilibrium with positive love is always possible. The existence of a third, unconditionally reciprocating party as a benchmark, i.e., the presence of another partner or support from family, friends, or private organizations in helping victims, plays an important role in reducing the toxic partner's appeal. Analyzing our model, we outline the conditions for the best policy to heal from a toxic relationship.

Mathematics doi: 10.3390/math9182257

Authors: Julia Eisenberg Stefan Kremsner Alexander Steinicke

We investigate a dividend maximization problem under stochastic interest rates with Ornstein-Uhlenbeck dynamics. This setup also takes negative rates into account. First a deterministic time is considered, where an explicit separating curve α(t) can be found to determine the optimal strategy at time t. In a second setting, we introduce a strategy-independent stopping time. The properties and behavior of these optimal control problems in both settings are analyzed in an analytical HJB-driven approach, and we also use backward stochastic differential equations.

Mathematics doi: 10.3390/math9182256

Authors: Xiaowu Chen Guozhang Jiang Yongmao Xiao Gongfa Li Feng Xiang

Intelligent manufacturing is the trend in the steel industry. A cyber-physical-system-oriented steel production scheduling framework is proposed to address the difficulty of dynamic scheduling of steel production in a complex environment and to provide a path for developing steel production toward intelligent manufacturing. The characteristics of the dynamic steel production scheduling model are studied, and an ontology-based production scheduling knowledge model for the steel cyber-physical system, together with its ontology-attribute knowledge representation method, is proposed. For dynamic scheduling, heuristic scheduling rules are established, and on this basis a hyper-heuristic algorithm based on genetic programming is presented. A learning-based high-level selection strategy is adopted to manage the low-level heuristics. An automatic scheduling rule generation framework based on genetic programming is designed to manage and generate effective heuristic rules and to solve scheduling problems under different production disturbances. Finally, the performance of the algorithm is verified in a simulation case.

]]>Mathematics doi: 10.3390/math9182255

Authors: Xuemin Xue Xiangtuan Xiong

In this paper, the numerical analytic continuation problem is addressed and a fractional Tikhonov regularization method is proposed. The fractional Tikhonov regularization not only overcomes the difficulty of analyzing the ill-posedness of the continuation problem but also yields a more accurate numerical result when the solution is discontinuous. This article mainly discusses a posteriori parameter selection rules for the fractional Tikhonov regularization method, and an error estimate is given. Furthermore, numerical results show that the proposed method works effectively.
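The abstract does not spell out the filter form used. As a hedged illustration only, one common SVD formulation of fractional Tikhonov uses filter factors f_i = s_i^(γ+1)/(s_i^(γ+1) + α), which reduce to the classical Tikhonov factors s_i²/(s_i² + α) at γ = 1; the test problem and parameter values below are assumptions, not the authors' exact method.

```python
import numpy as np

def fractional_tikhonov(A, b, alpha, gamma):
    """SVD-filter form of fractional Tikhonov regularization.

    Filter factors f_i = s_i**(gamma+1) / (s_i**(gamma+1) + alpha);
    gamma = 1 recovers classical Tikhonov.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**(gamma + 1) / (s**(gamma + 1) + alpha)
    return Vt.T @ (f * (U.T @ b) / s)

# Mildly ill-conditioned test problem (Vandermonde design matrix)
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0, 1, 20), 8, increasing=True)
x_true = rng.standard_normal(8)
b = A @ x_true + 1e-4 * rng.standard_normal(20)

x_frac = fractional_tikhonov(A, b, alpha=1e-6, gamma=0.5)
x_tik = fractional_tikhonov(A, b, alpha=1e-6, gamma=1.0)
```

At γ = 1 this filter coincides with the direct normal-equations solution of classical Tikhonov, which gives a quick correctness check.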

]]>Mathematics doi: 10.3390/math9182254

Authors: Pincheira Hardy Muñoz

In this paper, we present a new asymptotically normal test for out-of-sample evaluation in nested models. Our approach is a simple modification of a traditional encompassing test commonly known as the Clark and West (CW) test. The key point of our strategy is to introduce an independent random variable that prevents the traditional CW test from becoming degenerate under the null hypothesis of equal predictive ability. Using the approach developed by West (1996), we show that in our test the impact of parameter estimation uncertainty vanishes asymptotically. Using a variety of Monte Carlo simulations of iterated multi-step-ahead forecasts, we evaluated our test and CW in terms of size and power. These simulations reveal that our approach is reasonably well sized, even at long horizons where CW may present severe size distortions. In terms of power, results were mixed, but CW has an edge over our approach. Finally, we illustrate the use of our test with an empirical application in the context of the commodity currencies literature.

]]>Mathematics doi: 10.3390/math9182253

Authors: Dalibor Martišek Karel Mikulášek

Shape-from-Focus (SFF) methods have been developed for about twenty years. They are able to obtain the shape of 3D objects from a series of partially focused images. The plane to which the microscope or camera is focused intersects the 3D object in a contour line. Due to the wave properties of light and the finite resolution of the output device, the image can be considered sharp not only on this contour line but also within a certain interval of height, the zone of sharpness. SFF methods are able to identify these focused parts to compose a fully focused 2D image and to reconstruct a 3D profile of the observed surface.

]]>Mathematics doi: 10.3390/math9182252

Authors: Aleksei Solodov

We study the asymptotic behavior in a neighborhood of zero of the sum of a sine series g(b,x) = ∑_{k=1}^∞ b_k sin kx whose coefficients constitute a convex slowly varying sequence b. The main term of the asymptotics of the sum of such a series was obtained by Aljančić, Bojanić, and Tomić. To estimate the deviation of g(b,x) from the main term of its asymptotics b_{m(x)}/x, m(x) = [π/x], Telyakovskiĭ used the piecewise-continuous function σ(b,x) = x ∑_{k=1}^{m(x)−1} k² (b_k − b_{k+1}). He showed that the difference g(b,x) − b_{m(x)}/x in some neighborhood of zero admits a two-sided estimate in terms of the function σ(b,x) with absolute constants independent of b. Earlier, the author found the sharp values of these constants. In the present paper, the asymptotics of the function g(b,x) on the class of convex slowly varying sequences in the regular case is obtained.
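Restated in display form, the quantities appearing in this abstract are:

```latex
g(b,x)=\sum_{k=1}^{\infty} b_k \sin kx, \qquad
m(x)=\left[\frac{\pi}{x}\right], \qquad
\sigma(b,x)=x\sum_{k=1}^{m(x)-1} k^{2}\,\bigl(b_k-b_{k+1}\bigr),
```

and Telyakovskiĭ's two-sided estimate has, schematically, the form

```latex
c_1\,\sigma(b,x)\;\le\; g(b,x)-\frac{b_{m(x)}}{x}\;\le\; c_2\,\sigma(b,x),
```

where c_1, c_2 are the absolute constants (independent of b) whose sharp values the author found earlier.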

]]>Mathematics doi: 10.3390/math9182251

Authors: Amir Masoud Rahmani Saqib Ali Mohammad Sadegh Yousefpoor Efat Yousefpoor Rizwan Ali Naqvi Kamran Siddique Mehdi Hosseinzadeh

Coverage is a fundamental issue in wireless sensor networks (WSNs). It plays an important role in network efficiency and performance. When sensor nodes are randomly scattered in the network environment, an ON/OFF scheduling mechanism can be designed for these nodes to ensure network coverage and increase the network lifetime. In this paper, we propose an appropriate and optimal area coverage method. The proposed area coverage scheme includes four phases: (1) Calculating the overlap between the sensing ranges of sensor nodes in the network. In this phase, we present a novel, distributed, and efficient method based on the digital matrix so that each sensor node can estimate the overlap between its sensing range and other neighboring nodes. (2) Designing a fuzzy scheduling mechanism. In this phase, an ON/OFF scheduling mechanism is designed using fuzzy logic. In this fuzzy system, if a sensor node has a high energy level, a low distance to the base station, and a low overlap between its sensing range and other neighboring nodes, then this node will be in the ON state for more time. (3) Predicting the node replacement time. In this phase, we seek to provide a suitable method to estimate the death time of sensor nodes and prevent possible holes in the network, so that the data transmission process is not disturbed. (4) Reconstructing and covering the holes created in the network. In this phase, the goal is to find the best replacement strategy of mobile nodes to maximize the coverage rate and minimize the number of mobile sensor nodes used for covering the hole. For this purpose, we apply the shuffled frog-leaping algorithm (SFLA) and propose an appropriate multi-objective fitness function. To evaluate the performance of the proposed scheme, we simulate it using the NS2 simulator and compare our scheme with three methods, including CCM-RL, CCA, and PCLA.
The simulation results show that our proposed scheme outperformed the other methods in terms of the average number of active sensor nodes, coverage rate, energy consumption, and network lifetime.
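Phase (4) combines two objectives: maximize the coverage rate while minimizing the number of mobile nodes used. A minimal sketch of a weighted fitness of that kind follows; the weights, the grid-based coverage estimate, and all parameter values are illustrative assumptions, not the paper's exact function.

```python
import itertools
import math

def coverage_rate(nodes, radius, area=100.0, step=5.0):
    """Fraction of grid points covered by at least one node's sensing disk."""
    pts = list(itertools.product(
        [i * step for i in range(int(area / step))], repeat=2))
    covered = sum(
        any(math.hypot(px - x, py - y) <= radius for x, y in nodes)
        for px, py in pts)
    return covered / len(pts)

def fitness(candidate, radius, w_cov=0.8, w_cost=0.2, max_nodes=10):
    """Weighted multi-objective fitness: reward coverage, penalize node count."""
    return w_cov * coverage_rate(candidate, radius) \
        - w_cost * len(candidate) / max_nodes

# Candidate placement of five mobile nodes on a 100 x 100 field
full = [(25.0, 25.0), (25.0, 75.0), (75.0, 25.0), (75.0, 75.0), (50.0, 50.0)]
print(fitness(full, radius=30.0))
```

In an SFLA-style search, each frog would encode one such candidate placement and this score would rank the memeplexes.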

]]>Mathematics doi: 10.3390/math9182250

Authors: Mei Li Gai-Ge Wang Helong Yu

In this era of unprecedented economic and social prosperity, problems such as energy shortages and environmental pollution are gradually coming to the fore and seriously restrict economic and social development. To address these problems, green shop scheduling, a key aspect of the manufacturing industry, has attracted the attention of researchers, and the widely used hybrid flow shop scheduling problem (HFSP) has become a hot research topic. In this paper, we study the fuzzy hybrid flow shop green scheduling problem (FHFGSP) with fuzzy processing times, which is more in line with real-life situations, with the objective of minimizing makespan and total energy consumption. The non-linear integer programming model of the FHFGSP is built by expressing job processing times as triangular fuzzy numbers (TFNs) and considering the machine setup times incurred when processing different jobs. To address the FHFGSP, a discrete artificial bee colony (DABC) algorithm based on similarity and non-dominated solution ordering is proposed, which allows individuals to explore their neighbors to different degrees in the employed bee phase according to a position sequence, increasing the diversity of the algorithm. During the onlooker bee phase, individuals at the front of the sequence have a higher chance of being tracked, increasing the convergence rate of the colony. In addition, a mutation strategy is proposed to prevent the population from falling into a local optimum. To verify the effectiveness of the algorithm, 400 test cases were generated, comparing the proposed strategies and the overall algorithm with each other and evaluating them using three different metrics. The experimental results show that the proposed algorithm outperforms other algorithms in terms of quantity, quality, convergence and diversity.

]]>Mathematics doi: 10.3390/math9182249

Authors: Osmin Ferrer Arley Sierra José Sanabria

In this paper, we use soft linear operators to introduce the notion of discrete frames on soft Hilbert spaces, which extends the classical notion of frames on Hilbert spaces to the context of algebraic structures on soft sets. Among other results, we show that the frame operator associated to a soft discrete frame is bounded, self-adjoint, invertible and with a bounded inverse. Furthermore, we prove that every element in a soft Hilbert space satisfies the frame decomposition theorem. This theoretical framework is potentially applicable in signal processing because the frame coefficients serve to model the data packets to be transmitted in communication networks.

]]>Mathematics doi: 10.3390/math9182248

Authors: Gaku Ishii Yusaku Yamamoto Takeshi Takaishi

We aim to accelerate the linear equation solver for crack growth simulation based on the phase field model. As a first step, we analyze the properties of the coefficient matrices and prove that they are symmetric positive definite. This justifies the use of the conjugate gradient method with the efficient incomplete Cholesky preconditioner. We then parallelize this preconditioner using so-called block multi-color ordering and evaluate its performance on multicore processors. The experimental results show that our solver scales well and achieves a speedup of several times over the original solver based on the diagonally scaled CG method.

]]>Mathematics doi: 10.3390/math9182247

Authors: Amparo Baíllo Aurea Grané

The distance-based linear model (DB-LM) extends the classical linear regression to the framework of mixed-type predictors or when the only available information is a distance matrix between regressors (as it sometimes happens with big data). The main drawback of these DB methods is their computational cost, particularly due to the eigendecomposition of the Gram matrix. In this context, ensemble regression techniques provide a useful alternative to fitting the model to the whole sample. This work analyzes the performance of three subsampling and aggregation techniques in DB regression on two specific large, real datasets. We also analyze, via simulations, the performance of bagging and DB logistic regression in the classification problem with mixed-type features and large sample sizes.
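The eigendecomposition the abstract refers to acts on the Gram matrix recovered from the distance matrix by double centering (Torgerson's classical construction). A minimal sketch of that core step only, not the authors' full DB-LM or ensemble pipeline:

```python
import numpy as np

def gram_from_distances(D):
    """Double-centered Gram matrix G = -1/2 * J D^2 J (Torgerson)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    return -0.5 * J @ (D**2) @ J

def principal_coordinates(D, k):
    """Latent Euclidean coordinates from the top-k eigenpairs of G."""
    G = gram_from_distances(D)
    w, V = np.linalg.eigh(G)                  # eigenvalues in ascending order
    w, V = w[::-1][:k], V[:, ::-1][:, :k]     # keep the k largest
    return V * np.sqrt(np.clip(w, 0, None))

rng = np.random.default_rng(1)
X = rng.standard_normal((30, 3))
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
Z = principal_coordinates(D, 3)               # regressors recovered from D alone
```

In DB regression the columns of `Z` play the role of the regressors; the cost of the `eigh` call on the n-by-n Gram matrix is the computational bottleneck that the subsampling-and-aggregation techniques mitigate.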

]]>Mathematics doi: 10.3390/math9182246

Authors: Ricardo J. Jesus Mário L. Antunes Rui A. da Costa Sergey N. Dorogovtsev José F. F. Mendes Rui L. Aguiar

The function and performance of neural networks are largely determined by the evolution of their weights and biases in the process of training, starting from the initial configuration of these parameters to one of the local minima of the loss function. We perform the quantitative statistical characterization of the deviation of the weights of two-hidden-layer feedforward ReLU networks of various sizes trained via Stochastic Gradient Descent (SGD) from their initial random configuration. We compare the evolution of the distribution function of this deviation with the evolution of the loss during training. We observed that successful training via SGD leaves the network in the close neighborhood of the initial configuration of its weights. For each initial weight of a link we measured the distribution function of the deviation from this value after training and found how the moments of this distribution and its peak depend on the initial weight. We explored the evolution of these deviations during training and observed an abrupt increase within the overfitting region. This jump occurs simultaneously with a similarly abrupt increase recorded in the evolution of the loss function. Our results suggest that SGD’s ability to efficiently find local minima is restricted to the vicinity of the random initial configuration of weights.

]]>Mathematics doi: 10.3390/math9182245

Authors: Po Li Xiang Li Jinghui Li Yimin You Zhongqing Sang

Harmonic interference is a major hazard affecting power quality in the current power system. Extracting harmonics quickly and accurately is a prerequisite for the sustainable operation of the power system and is particularly important in the field of new-energy power generation. In this paper, a harmonic extraction method based on a time-varying observer is proposed. Firstly, a frequency estimation algorithm is used to estimate the power grid current frequency in real time. Then, a zero-crossing detection method is applied to convert the frequency into a phase variable. Finally, using the phase variable and the integral current signal as inputs, an observer is modeled to extract each order harmonic component. The proposed method is evaluated on an FPGA test platform, which shows that the method can extract the harmonic components of the grid current and converge within 80 ms even in the presence of grid distortions. In the verification case, the relative errors of the 1st, 5th, 7th and 11th harmonics are 0.005%, −0.003%, 0.251% and 0.620%, respectively, which are sufficiently small.
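As a hedged illustration of the zero-crossing idea only (the paper's frequency estimator and time-varying observer are not reproduced), a naive sign-change frequency estimate on an ideal 50 Hz waveform might look like:

```python
import numpy as np

def zero_crossing_frequency(x, fs):
    """Estimate fundamental frequency from sign changes of the waveform."""
    crossings = np.nonzero(np.diff(np.sign(x)) != 0)[0]
    if len(crossings) < 2:
        return 0.0
    # time spanned by the observed crossings; two crossings per period
    span = (crossings[-1] - crossings[0]) / fs
    return (len(crossings) - 1) / (2.0 * span)

fs = 10_000.0
t = (np.arange(10_000) + 0.5) / fs    # half-sample offset avoids exact zeros
x = np.sin(2 * np.pi * 50.0 * t)      # ideal 50 Hz grid current
print(zero_crossing_frequency(x, fs))
```

The crossing instants also directly give the phase variable: between two consecutive upward crossings the phase advances linearly from 0 to 2π.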

]]>Mathematics doi: 10.3390/math9182244

Authors: Mohamed M. Mousa Fahad Alsharari

In this work, the main concept of the homotopy perturbation method (HPM) was outlined and convergence theorems of the HPM for solving some classes of nonlinear integral, integro-differential and differential equations were proved. A theorem for estimating the error in the approximate solution was proved as well. The proposed HPM convergence theorems were confirmed and the efficiency of the technique was explored by applying the HPM for solving several classes of nonlinear integral/integro-differential equations.

]]>Mathematics doi: 10.3390/math9182243

Authors: Deepak Rai Hiren Kumar Thakkar Shyam Singh Rajput Jose Santamaria Chintan Bhatt Francisco Roca

In recent years, cardiovascular diseases have been on the rise, and they entail enormous health burdens on global economies. Cardiac vibrations yield a wide and rich spectrum of essential information regarding the functioning of the heart, and thus it is necessary to take advantage of this data to better monitor cardiac health by way of prevention in early stages. Specifically, seismocardiography (SCG) is a noninvasive technique that can record cardiac vibrations by using new cutting-edge devices such as accelerometers. Therefore, providing new and reliable data regarding advancements in the field of SCG, i.e., new devices and tools, is necessary to advance the current understanding of the State-of-the-Art (SoTA). This paper reviews the SoTA on SCG and concentrates on three critical aspects of the SCG approach, i.e., its acquisition, its annotation, and its current applications. Moreover, this comprehensive overview also presents a detailed summary of recent advancements in SCG, such as the adoption of new techniques from the field of artificial intelligence, e.g., machine learning, deep learning, artificial neural networks, and fuzzy logic. Finally, a discussion on the open issues and future investigations regarding the topic is included.

]]>Mathematics doi: 10.3390/math9182242

Authors: Ali Rehman Zabidin Salleh

The present research paper explains the influence of Marangoni convection on magnetohydrodynamic viscous dissipation and heat transfer in hybrid nanofluids in a rotating system between two surfaces. The properties of heat and mass transfer are then analysed. With the similarity transformation, the governing equations of the defined flow problem are converted into nonlinear ordinary differential equations. These compact equations are solved approximately and analytically using the optimal homotopy analysis method. The impact of different parameters is interpreted through graphs in the form of velocity and temperature profiles. The influence of the skin friction coefficient and Nusselt number is presented in the form of tables. The comparison of the present research with published works is also presented in a table.

]]>Mathematics doi: 10.3390/math9182241

Authors: Maximo Camacho María Dolores Gadea Ana Gómez-Loscos

This paper provides an accurate chronology of the Spanish reference business cycle by adapting a multiple change-point model. In this approach, each combination of peaks and troughs dated in a set of economic indicators is assumed to be a realization of a mixture of bivariate Gaussian distributions, whose number of components is estimated from the data. The means of each of these components refer to the dates of the reference turning points. The transitions across the components of the mixture are governed by a Markov chain that is restricted to force left-to-right transition dynamics. In the empirical application, seven recessions in the period from February 1970 to February 2020 are identified, which are in high concordance with the timing of the turning point dates established by the Spanish Business Cycle Dating Committee (SBCDC).

]]>Mathematics doi: 10.3390/math9182238

Authors: Paulo Afonso Vishad Vyas Ana Antunes Sérgio Silva Boris P. J. Bret

Nowadays, manufacturing companies are characterized by complex systems with multiple products being manufactured in multiple assembly lines. In such situations, traditional costing systems based on deterministic cost models cannot be used. This paper focuses on developing a stochastic approach to costing systems that considers the variability in the process cycle time of the different workstations in the assembly line. This approach provides a range of values for the product costs, allowing for a better perception of the risk associated with these costs instead of providing a single cost value. The confidence interval for the mean and the use of quartiles one and three as lower and upper estimates are proposed to include variability and risk in costing systems. The analysis of outliers and some statistical tests are included in the proposed approach, which was applied in a tier 1 company in the automotive industry. The probability distribution of the possible range of values for the bottleneck's cycle time showcases all possible values of product cost considering process variability and uncertainty. A stochastic cost model allows a better analysis of the margins and optimization opportunities as well as investment appraisal and quotation activities.
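The interval estimates proposed here (a confidence interval for the mean, plus quartiles one and three as lower and upper estimates) can be sketched with the standard library alone; the lognormal cycle-time data and the 1.96 normal-approximation factor are illustrative assumptions, not the paper's data or exact procedure.

```python
import math
import random
import statistics

random.seed(7)
# Simulated bottleneck cycle times (seconds); lognormal mimics right-skewed
# process-time data of the kind measured at a workstation
cycle_times = [random.lognormvariate(math.log(30), 0.15) for _ in range(500)]

n = len(cycle_times)
mean = statistics.fmean(cycle_times)
sem = statistics.stdev(cycle_times) / math.sqrt(n)   # standard error of the mean
ci_low, ci_high = mean - 1.96 * sem, mean + 1.96 * sem   # ~95% normal approx.

q1, _, q3 = statistics.quantiles(cycle_times, n=4)   # lower/upper estimates

print(f"Q1={q1:.2f}  CI=[{ci_low:.2f}, {ci_high:.2f}]  Q3={q3:.2f}")
```

Multiplying each of these cycle-time estimates by a cost rate then yields a range of product costs rather than a single deterministic value, which is the risk-aware output the stochastic costing approach aims for.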

]]>Mathematics doi: 10.3390/math9182240

Authors: Radi Romansky

The main goal of dispatching strategies is to minimize the total time for processing tasks at maximum performance of the computer system, which requires strict regulation of the workload of the processing units. To achieve this, it is necessary to conduct a preliminary study of the applied planning model. The purpose of this article is to present an approach for automating the investigation and optimization of processes in a computer environment for task planning and processing. A stochastic input flow of incoming tasks is considered, and a mathematical formalization of some probabilistic characteristics related to the complexity of its servicing has been made. On this basis, a software module has been developed in the programming language APL2 to conduct experiments for analytical study and to obtain estimates of the stochastic parameters of computer processing and dispatching. The proposed model is part of a generalized environment for program investigation of the computer processing organization and expands its field of application with additional research possibilities.

]]>Mathematics doi: 10.3390/math9182239

Authors: Chun-Yueh Lin Yi-Hsien Wang

During enterprise foundation and development, internal finance and debt finance are of vital importance to start-up entrepreneurs. Therefore, the purpose of this study is to examine how start-ups can make the optimal choice among different external equity crowdfunding solutions and to establish a network decision support model, grounded in decision science and network architecture, that evaluates the optimal external equity crowdfunding plan for start-ups. The Lending Company in the Financial Technology Industry (LCFTI) was taken as an example. The results indicate that equity crowdfunding is the optimal financing plan for LCFTI. Academically, the results of this study not only help propose a network decision support model that uses decision science methods and network analysis to establish an architecture for evaluating the optimal financing plans of start-ups for external equity crowdfunding; they also fill a gap concerning the optimal financing plans of entrepreneurs or start-ups for external equity financing, which has not been specified in POT theory in the past. Practically, this study provides a useful tool for the entrepreneur of LCFTI to understand the key factors affecting the optimal financing plans for external equity financing and enables LCFTI to assess these plans so as to improve the success rate of financing.

]]>Mathematics doi: 10.3390/math9182236

Authors: Hsien-Pin Hsu Chia-Nan Wang Hsin-Pin Fu Thanh-Tuan Dang

The joint scheduling of quay cranes (QCs), yard cranes (YCs), and yard trucks (YTs) is critical to achieving good overall performance for a container terminal. However, there are only a few such integrated studies, and those that have taken the vessel stowage plan (VSP) into consideration are very rare. The VSP is a plan assigning each container a stowage position in a vessel. It affects the QC operations directly and considerably. Neglecting this plan will cause problems when loading/unloading containers into/from a ship and may even congest the YT and YC operations upstream. In this research, a framework of simulation-based optimization methods is first proposed. Then, four kinds of heuristics/metaheuristics have been employed within this framework, namely sort-by-bay (SBB), genetic algorithm (GA), particle swarm optimization (PSO), and multiple-groups particle swarm optimization (MGPSO), to deal with the yard crane scheduling problem (YCSP), the yard truck scheduling problem (YTSP), and the quay crane scheduling problem (QCSP) simultaneously for export containers, taking operational constraints into consideration. The objective is to minimize the makespan. Each of the simulation-based optimization methods includes three components: a load-balancing heuristic, a sequencing method, and a simulation model. Experiments have been conducted to investigate the effectiveness of the different simulation-based optimization methods. The results show that the MGPSO outperforms the others.

]]>Mathematics doi: 10.3390/math9182237

Authors: José E. Valdez-Rodríguez Edgardo M. Felipe-Riverón Hiram Calvo

Glaucoma detection is an important task, as this disease can affect the optic nerve and may lead to blindness. With early diagnosis, periodic check-ups, and treatment, its progression can be stopped and visual loss prevented. Usually, the detection of glaucoma is carried out through various examinations such as tonometry, gonioscopy, pachymetry, etc. In this work, we carry out this detection using images obtained through retinal cameras, in which the state of the optic nerve can be observed. This work presents an accurate diagnostic methodology based on Convolutional Neural Networks (CNNs) to classify these images. Most works require a large number of images to train their CNN architectures, and most of them use the whole image to perform the classification. We use a small dataset containing 366 examples to train the proposed CNN architecture and focus only on the analysis of the optic disc, extracting it from the full image, as this is the element that provides the most information about glaucoma. We experiment with different RGB channels of the optic disc and their combinations, and we additionally extract depth information. We obtain accuracy values of 0.945 using the GB and full RGB combinations, and 0.934 with the grayscale transformation. Depth information did not help, as it limited the best accuracy value to 0.934.

]]>