Mathematical and Computational Applications doi: 10.3390/mca28050097

Authors: Adel M. Al-Mahdi

Total fractional-order variation (TFOV) in image deblurring problems can reduce or remove the staircase artifacts observed when deblurring with the standard total variation (TV) model. However, discretizing the Euler–Lagrange equations associated with the TFOV model generates a saddle point system whose coefficient matrix is dense and ill conditioned (it has a huge condition number). This ill conditioning slows the convergence of any iterative method, such as Krylov subspace methods. One remedy for this slow convergence is preconditioning. In this paper, we propose a block triangular preconditioner because the exact triangular preconditioner yields a preconditioned matrix with exactly two distinct eigenvalues, which means that at most two iterations are needed to converge to the exact solution. However, we cannot use the exact preconditioner because the Schur complement of our system has the form S = K*K + λL_α, which is a huge, dense matrix. The first matrix, K*K, comes from the blurring operator, while the second comes from the TFOV regularization model. To overcome this difficulty, we propose two preconditioners based on circulant and standard TV matrices. In our algorithm, we use the flexible preconditioned GMRES method for the outer iterations, the preconditioned conjugate gradient (PCG) method for the inner iterations, and the fixed point iteration (FPI) method to handle the nonlinearity. The numerical results show fast convergence with the proposed preconditioners.
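The preconditioned outer solve described above can be sketched in miniature. The block below is a hypothetical 1-D analogue (not the paper's 2-D deblurring system): a normal-equations matrix K*K + λL with a circulant blur K and a TV-like Laplacian L, preconditioned by a circulant approximation that is inverted through the FFT and handed to SciPy's GMRES. The kernel, size, and λ are assumed illustrative values.

```python
import numpy as np
from scipy.linalg import circulant
from scipy.sparse.linalg import gmres, LinearOperator

# Hypothetical small 1-D analogue of the deblurring normal equations
# (K*K + lam*L) x = b, with K a circulant blur and L a TV-like Laplacian.
n = 64
kernel = np.zeros(n)
kernel[:3] = [0.5, 0.25, 0.25]                      # simple blur kernel
K = circulant(kernel)
L = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # discrete Laplacian
lam = 0.1
A = K.T @ K + lam * L
b = A @ np.ones(n)                                  # manufactured solution x = 1

# Circulant preconditioner: K*K is exactly circulant, and L is approximated
# by its periodic version, so M is diagonalized by the FFT and applied in
# O(n log n) per matrix-vector product.
k_hat = np.fft.fft(kernel)
l_hat = np.fft.fft(np.r_[2.0, -1.0, np.zeros(n - 3), -1.0]).real
m_hat = np.abs(k_hat)**2 + lam * l_hat
M = LinearOperator((n, n),
                   matvec=lambda v: np.real(np.fft.ifft(np.fft.fft(v) / m_hat)))

x, info = gmres(A, b, M=M)   # info == 0 signals convergence
```

With the circulant preconditioner, the iteration count stays small and essentially independent of n, which is the behavior the paper seeks for the TFOV system.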

Mathematical and Computational Applications doi: 10.3390/mca28050096

Authors: Fiazuddin D. Zaman Fazal M. Mahomed Faiza Arif

We used the classical Lie symmetry method to study the damped Klein–Gordon equation (KGE) with power-law nonlinearity, u_tt + α(u)u_t = (u^β u_x)_x + f(u). We carried out a complete Lie symmetry classification by finding the forms of α(u) and f(u), which led to various cases. For each case, we obtained one-dimensional optimal systems of subalgebras. Using these subalgebras, we reduced the KGE to ordinary differential equations and determined some invariant solutions. Furthermore, we obtained conservation laws using the partial Lagrangian approach.

Mathematical and Computational Applications doi: 10.3390/mca28050095

Authors: Guilmer Ferdinand González Flores Pablo Barrera Sánchez

In this paper, we review some grid quality metrics and define some new quality measures for quadrilateral elements. The curved elements are not discussed. Usually, the maximum value of a quality measure corresponds to the minimum value of the energy density over the grid. We also define new discrete functionals, which are implemented as objective functions in an optimization-based method for quadrilateral grid generation and improvement. These functionals are linearly combined with a discrete functional whose domain has an infinite barrier at the boundary of the set of unfolded grids to preserve convex grid cells in each step of the optimization process.

Mathematical and Computational Applications doi: 10.3390/mca28050094

Authors: Mohammad M. Kafini Mohammed M. Al-Gharabli Adel M. Al-Mahdi

In this research work, we investigate the asymptotic behavior of a nonlinear swelling (also called expansive) soil system with a time delay and nonlinear damping of variable exponents. We should note here that swelling soils contain clay minerals that absorb water, which may lead to increases in pressure. In architectural and civil engineering, swelling soils are considered sources of problems and harm. The presence of the delay is used to create more realistic models since many processes depend on past history, and the delays are frequently added by sensors, actuators, and field networks that travel through feedback loops. The appearance of variable exponents in the delay and damping terms in this system allows for a more flexible and accurate modeling of this physical phenomenon. This can lead to more realistic and precise descriptions of the behavior of fluids in different media. In fact, with the advancements of science and technology, many physical and engineering models require more sophisticated mathematical tools to study and understand. The Lebesgue and Sobolev spaces with variable exponents proved to be efficient tools for studying such problems. By constructing a suitable Lyapunov functional, we establish exponential and polynomial decay results. We noticed that the energy decay of the system depends on the value of the variable exponent. These results improve on some existing results in the literature.

Mathematical and Computational Applications doi: 10.3390/mca28050093

Authors: Carlos-Iván Páez-Rueda Arturo Fajardo Manuel Pérez German Yamhure Gabriel Perilla

This paper studies and analyzes the approximation of one-dimensional smooth closed-form functions with compact support using a mixed Fourier series (i.e., a combination of a partial Fourier series and other forms of partial series). To explore the potential of this approach, we discuss and revisit its application in signal processing, especially because it allows us to control the decreasing rate of the Fourier coefficients and avoids the Gibbs phenomenon. Therefore, this method improves signal processing performance in a wide range of scenarios, such as function approximation, interpolation, increased convergence with quasi-spectral accuracy in the time domain or the frequency domain, numerical integration, and the solution of inverse problems such as ordinary differential equations. Moreover, the paper provides comprehensive examples of one-dimensional problems to showcase the advantages of this approach.
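The mechanism for controlling the coefficient decay can be illustrated in one dimension: peel off a low-degree polynomial that absorbs the endpoint mismatch, then expand only the periodic remainder. This is an illustrative reduction of the mixed-series idea, not the authors' exact construction; the function and grid are assumed.

```python
import numpy as np

# A smooth but non-periodic function on [0, 1]: its raw Fourier series
# suffers the Gibbs phenomenon because f(0) != f(1).
f = lambda x: np.exp(x)
n = 1024
x = np.arange(n) / n
fx = f(x)

# Mixed-series idea (illustrative): subtract a linear bridge through the
# endpoints so the remainder extends periodically and continuously.
p = fx[0] + (f(1.0) - fx[0]) * x
r = fx - p                       # remainder with r(0) = r(1) = 0

c_raw = np.fft.rfft(fx) / n
c_mix = np.fft.rfft(r) / n

# High-frequency coefficients of the remainder decay like O(k^-2) instead
# of O(k^-1), so truncation error and Gibbs ringing shrink.
tail_raw = np.abs(c_raw[100:200]).max()
tail_mix = np.abs(c_mix[100:200]).max()
```

Subtracting higher-degree polynomials (matching endpoint derivatives as well) pushes the decay rate further, which is how quasi-spectral accuracy becomes reachable.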

Mathematical and Computational Applications doi: 10.3390/mca28050092

Authors: Molahlehi Charles Kakuli Winter Sinkala Phetogo Masemola

This study investigates, via Lie symmetry analysis, the Hunter–Saxton equation, an equation relevant to the theoretical analysis of nematic liquid crystals. We employ the multiplier method to obtain conservation laws of the equation that arise from first-order multipliers. The conservation laws of the equation, combined with its admitted Lie point symmetries, enable us to perform symmetry reductions using the double reduction method. The method exploits the relationship between symmetries and conservation laws to reduce both the number of variables and the order of the equation. Five nontrivial conservation laws of the Hunter–Saxton equation are derived, four of which are found to have associated Lie point symmetries. Applying the double reduction method to the equation results in a set of first-order ordinary differential equations whose solutions represent invariant solutions of the equation. While the double reduction method may be more complex to implement than the classical method, since it involves finding Lie point symmetries and deriving conservation laws, it has some advantages over the classical method of reducing PDEs. Firstly, it is more efficient in that it can reduce the number of variables and the order of the equation in a single step. Secondly, by incorporating conservation laws, physically meaningful solutions that satisfy important physical constraints can be obtained.

Mathematical and Computational Applications doi: 10.3390/mca28040091

Authors: Hamidreza Eivazi Jendrik-Alexander Tröger Stefan Wittek Stefan Hartmann Andreas Rausch

Multiscale FE2 computations enable the consideration of the micro-mechanical material structure in macroscopic simulations. However, these computations are very time-consuming because of the numerous evaluations of a representative volume element (RVE), which represents the microstructure. In contrast, neural networks, as machine learning methods, are very fast to evaluate once they are trained. Although the DNN-FE2 approach, in which deep neural networks (DNNs) serve as a surrogate model of the representative volume element, is already a known procedure, this contribution explains in detail the algorithmic FE2 structure and the particular integration of deep neural networks. This comprises a suitable training strategy in which particular knowledge of the material behavior is considered to reduce the required amount of training data, a study of the amount of training data required for reliable FE2 simulations, with special focus on the errors compared to conventional FE2 simulations, and the implementation aspects needed to gain considerable speed-up. As is known, Sobolev training and automatic differentiation increase data efficiency, prediction accuracy, and speed-up in comparison to using two different neural networks for stress and tangent matrix predictions. To gain a significant speed-up of the FE2 computations, an efficient implementation of the trained neural network in a finite element code is provided. This is achieved by drawing on state-of-the-art high-performance computing libraries and just-in-time compilation, yielding a maximum speed-up by a factor of more than 5000 compared to a reference FE2 computation. Moreover, the deep neural network surrogate model is able to overcome load-step size limitations of the RVE computations in step-size-controlled computations.
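The Sobolev-training idea, fitting a surrogate to both function values and their derivatives (as the paper does for stress and tangent), can be shown in miniature with a linear model. The target function, basis, and grid below are assumed for illustration; real DNN-FE2 surrogates replace the polynomial basis with a network.

```python
import numpy as np

# Sobolev-style fitting in miniature: fit a polynomial surrogate to both the
# function values and the derivatives via one stacked least-squares system.
f = lambda x: np.sin(x)
df = lambda x: np.cos(x)

x = np.linspace(0, np.pi, 15)
deg = 6
V = np.vander(x, deg + 1, increasing=True)            # basis x^k
dV = np.column_stack([k * x**(k - 1) if k else np.zeros_like(x)
                      for k in range(deg + 1)])       # d/dx of each basis term

A = np.vstack([V, dV])                                # stack value + derivative rows
b = np.concatenate([f(x), df(x)])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

xt = np.linspace(0, np.pi, 101)
err = np.abs(np.vander(xt, deg + 1, increasing=True) @ coef - f(xt)).max()
```

Including the derivative rows doubles the information extracted from each sample point, which is the data-efficiency gain the abstract attributes to Sobolev training.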

Mathematical and Computational Applications doi: 10.3390/mca28040090

Authors: Rohan Singla Shubham Gupta Arnab Chanda

A cerebral aneurysm is a medical condition in which a cerebral artery can burst under adverse pressure conditions. A 20% mortality rate and an additional 30 to 40% morbidity rate have been reported for patients suffering from the rupture of aneurysms. In addition to wall shear stress, input jets, induced pressure, and complicated and unstable flow patterns are other important parameters associated with a clinical history of aneurysm ruptures. In this study, the anterior cerebral artery (ACA) was modeled using image segmentation and then rebuilt with aneurysms at locations vulnerable to aneurysm growth. To simulate various aneurysm growth stages, five aneurysm sizes and two wall thicknesses were considered. Inlet velocity and outlet pressure were used to simulate realistic pressure loading conditions for the anterior cerebral arteries. The pressure, wall shear stress, and flow velocity distributions were then evaluated in order to predict the risk of rupture. Smaller aneurysms with thinner walls produced a low-wall-shear-stress rupture scenario with elevated pressure, shear stress, and flow velocity. Additionally, aneurysms with a 4 mm diameter and a thin wall had increased rupture risks, particularly under specific boundary conditions. It is believed that the findings of this study will help physicians predict rupture risk according to aneurysm diameter and make early treatment decisions.

Mathematical and Computational Applications doi: 10.3390/mca28040089

Authors: Vivek Gupta Arnab Chanda

Burn injuries are very common due to heat, accidents, and fire. The split-thickness skin grafting technique is the one most commonly used to cover burn sites. In this technique, the complete epidermis and a partial dermis layer of the skin are used to make grafts. A small amount of skin is passed through a mesher to create an incision pattern that allows higher expansion. These grafts are transplanted onto the burn sites with the help of sutures to cover large burn areas. At present, the maximum expansion possible with skin grafting is very low (<3), which is insufficient for covering a large burn area with a small amount of healthy skin. This study aimed to determine the feasibility of employing innovative auxetic skin graft patterns and traditional skin graft patterns with three levels of hierarchy. Six different hierarchical skin graft designs were tested to characterize their biomechanical properties. The meshing ratio, Poisson's ratio, expansion, and induced stresses were quantified for each graft model. The computational results indicated that the expansion potential of the third-order auxetic skin graft was the highest across all the models. These results are expected to improve burn surgeries and promote skin transplantation research.

Mathematical and Computational Applications doi: 10.3390/mca28040088

Authors: Soumyadip Pal Fahad Al Basir Santanu Ray

The main objective of this study is to determine how cooperation and intra-specific competition in the prey population influence escape from predation through refuge, and how these two intra-specific interactions affect the dynamics of prey–predator systems. For this purpose, two mathematical models with Holling type II functional responses were proposed and analyzed. The first model includes cooperation among the prey population, whereas the second incorporates intra-specific competition. The existence conditions and stability of the different equilibrium points of both models were analyzed to determine the qualitative behaviors of the systems. Refuge through intra-specific competition has a stabilizing role, whereas cooperation has a destabilizing role in the system dynamics. Periodic oscillations were observed in both systems through Hopf bifurcation. From the analytical and numerical findings, we conclude that intra-specific competition affects the prey population and continuously keeps the refuge class below a critical value, so that it never becomes large enough to cause predator extinction due to food scarcity. Conversely, cooperation leads the maximal number of individuals to escape predation through the refuge, so that predators suffer from low predation success.
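A baseline version of such a prey–predator system, with a Holling type II response and a constant refuge fraction but without the cooperation or competition terms, can be integrated in a few lines. All parameter values below are assumed for illustration, not taken from the paper.

```python
from scipy.integrate import solve_ivp

# Hypothetical parameter values; the functional response is Holling type II
# and a constant refuge fraction m shelters part of the prey from predation.
r, Kc, a, h, e, d, m = 1.0, 3.0, 1.2, 0.5, 0.6, 0.3, 0.2

def rhs(t, z):
    x, y = z                            # prey, predator densities
    xa = (1 - m) * x                    # prey available outside the refuge
    fr = a * xa / (1 + a * h * xa)      # Holling type II response
    return [r * x * (1 - x / Kc) - fr * y,
            e * fr * y - d * y]

sol = solve_ivp(rhs, (0, 200), [5.0, 2.0], rtol=1e-8)
x_end, y_end = sol.y[:, -1]             # long-run densities
```

Adding a prey cooperation term or making the refuge depend on intra-specific competition, as in the two models of the paper, only changes `rhs`, so the same scaffold can be reused to reproduce the stabilizing/destabilizing comparison numerically.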

Mathematical and Computational Applications doi: 10.3390/mca28040087

Authors: José Antonio Loya Carlos Santiuste Josué Aranda-Ruiz Ramón Zaera

This work analyses the buckling behaviour of cracked Euler–Bernoulli columns immersed in a Winkler elastic medium, obtaining their buckling loads. For this purpose, the beam is modelled as two segments connected at the cracked section by a massless rotational spring, whose rotation is proportional to the bending moment transmitted through the cracked section, accounting for the discontinuity of the rotation due to bending. The differential equations for the buckling behaviour are solved by applying the corresponding boundary conditions, as well as the compatibility and jump conditions at the cracked section. The proposed methodology allows the buckling load to be calculated as a function of the type of support, the parameter defining the elastic soil, the crack position, and the initial crack length. The results obtained are compared with those published by other authors in works that address the problem partially, showing the interaction and importance of the parameters considered in the buckling loads of the system.
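The special case without a crack, an intact pinned-pinned column on a Winkler foundation, has a closed-form buckling load and makes a convenient numerical sanity check for this class of problem. The sketch below uses assumed unit values for EI and the length and an assumed soil modulus k, discretizes EI u'''' + k u = -P u'' by finite differences, and compares against P_cr = min_m [EI(mπ/L)² + k(L/(mπ))²].

```python
import numpy as np
from scipy.linalg import eigh

# Intact pinned-pinned column on a Winkler foundation (a crack-free special
# case of the paper's model): EI u'''' + k u = -P u'' as a generalized
# eigenproblem for the buckling load P. All values are assumed.
EI, k, Lc, n = 1.0, 50.0, 1.0, 200
h = Lc / n

# Second-difference matrix on interior nodes; u = 0 at the ends, and the
# simply supported condition u'' = 0 is built in by taking D4 = D2 @ D2.
main = np.full(n - 1, -2.0)
off = np.ones(n - 2)
D2 = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2
D4 = D2 @ D2

A = EI * D4 + k * np.eye(n - 1)
B = -D2                                   # symmetric positive definite
P = eigh(A, B, eigvals_only=True)[0]      # smallest eigenvalue = buckling load

# Closed-form check over the first few buckling modes.
m = np.arange(1, 10)
P_exact = (EI * (m * np.pi / Lc)**2 + k * (Lc / (m * np.pi))**2).min()
```

The cracked case of the paper replaces this single discretization by two segments coupled through the rotational-spring jump condition, but the eigenvalue structure of the problem is the same.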

Mathematical and Computational Applications doi: 10.3390/mca28040085

Authors: Frédéric Ouimet

The negative multinomial distribution appears in many areas of application, such as polarimetric image processing and the analysis of longitudinal count data. In previous studies, general formulas for the falling factorial moments and cumulants of the negative multinomial distribution were obtained. However, despite the availability of the moment generating function, no comprehensive formulas for the moments have been calculated thus far. This paper addresses this gap by presenting general formulas for both the central and non-central moments of the negative multinomial distribution. These formulas are expressed in terms of binomial coefficients and Stirling numbers of the second kind. Using these formulas, we provide explicit expressions for all central moments up to the fourth order and all non-central moments up to the eighth order.
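The first- and second-order special cases of such moment formulas are classical and easy to verify by Monte Carlo through the Poisson-Gamma mixture representation of the negative multinomial. The sketch below checks the known mean and covariance only (the paper's contribution concerns the higher orders); parameter values are assumed.

```python
import numpy as np

# Monte-Carlo check of the first two moments of the negative multinomial:
# given G ~ Gamma(r, 1), the counts X_i are independent Poisson(G * p_i / p0).
rng = np.random.default_rng(0)
r = 3.0
p = np.array([0.2, 0.3])                 # success-category probabilities
p0 = 1.0 - p.sum()                       # stopping-category probability

n = 200_000
G = rng.gamma(r, 1.0, size=n)
X = rng.poisson(G[:, None] * (p / p0))   # shape (n, 2)

mean_mc = X.mean(axis=0)
cov_mc = np.cov(X.T)

# Known closed forms:
# E[X_i] = r p_i / p0,  Cov[X_i, X_j] = r p_i p_j / p0^2 + delta_ij r p_i / p0.
mean_th = r * p / p0
cov_th = r * np.outer(p, p) / p0**2 + np.diag(r * p / p0)
```

The same simulation scaffold extends to empirical checks of the third- and fourth-order central moment formulas derived in the paper.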

Mathematical and Computational Applications doi: 10.3390/mca28040086

Authors: Daniel Maposa Amon Masache Precious Mdlongwa

Exploration of solar irradiance can greatly assist in understanding how renewable energy can be better harnessed. It helps in establishing the solar irradiance climate of a particular region for effective and efficient harvesting of solar energy, and understanding this climate provides planners, designers and investors in the solar power generation sector with critical information. However, a detailed exploration of these climatic characteristics has not yet been carried out for Southern African data. What little exploration exists relies on measures of centrality only, and these descriptive statistics may be misleading. We therefore overcome the limitations of the currently used deterministic models by applying distributional modelling through quantile functions (QFs). Deterministic and stochastic elements in the data were combined and analysed simultaneously when fitting quantile distributional function models. The fitted models were then used to find population means as explorative parameters that capture both the deterministic and stochastic properties of the data. The application of QFs proved to be practical and gives more information than approaches that focus separately on either measures of central tendency or empirical distributions. Seasonal effects were detected in the data from the whole region and can be attributed to the cyclical behaviour exhibited. Daily maximum solar irradiation occurs within two hours of midday, and monthly totals accumulate in the summer months. Windhoek receives the highest daily total mean, while the highest monthly accumulated total mean occurs in Durban. Developing separate solar irradiation models for summer and winter is highly recommended. Though robust and rigorous, quantile distributional function modelling enables exploration and understanding of all components of the behaviour of the data being studied. Therefore, this study provides a starting base for understanding Southern Africa's solar climate.
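The quantile-function workflow, fit Q(p) to empirical quantiles, then read population summaries off the fitted model, can be sketched with a simple parametric QF. The logistic family and all values below are assumed for illustration; the paper's QF models for irradiance are more elaborate.

```python
import numpy as np

# Quantile-function (QF) modelling in miniature: assume a logistic quantile
# function Q(p) = mu + s*ln(p/(1-p)), fit (mu, s) to empirical quantiles by
# least squares, then recover the population mean as the average of Q over p.
rng = np.random.default_rng(2)
mu_true, s_true = 5.0, 1.5
data = rng.logistic(mu_true, s_true, size=10_000)   # synthetic sample

p = np.arange(1, 200) / 200.0                       # probability grid
emp_q = np.quantile(data, p)

X = np.column_stack([np.ones_like(p), np.log(p / (1.0 - p))])
coef, *_ = np.linalg.lstsq(X, emp_q, rcond=None)
mu_fit, s_fit = coef

# E[X] = integral of Q(p) dp over (0, 1); on a symmetric equispaced grid the
# logit terms cancel, so this reduces to mu for the logistic model.
mean_from_qf = float(np.mean(X @ coef))
```

Fitting the whole quantile curve, rather than summarizing by a sample mean alone, is what lets the deterministic and stochastic components be analysed simultaneously, as the abstract describes.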

Mathematical and Computational Applications doi: 10.3390/mca28040084

Authors: Carlos Enrique Valencia Murillo Miguel Ernesto Gutierrez Rivera Luis David Celaya Garcia

In this work, a finite element model to perform the thermal–structural analysis of beams made of functionally graded material (FGM) is presented. The formulation is based on the third-order shear deformation theory. The constituents of the FGM are considered to vary only in the thickness direction, and the effective material properties are evaluated by means of the rule of mixtures. The volume distribution of the top constituent is modeled using the power law form. A comparison of the present finite element model with the numerical results available in the literature reveals that they are in good agreement. In addition, a routine to study functionally graded plane models in a commercial finite element code is used to verify the performance of the proposed model. In the present work, displacements for different values of the power law exponent and surface temperatures are presented. Furthermore, the normal stress variation along the thickness is shown for several power law exponents of functionally graded beams subjected to thermal and mechanical loads.
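The rule of mixtures with a power-law volume fraction, the grading scheme described above, is compact enough to sketch directly. The constituent moduli and thickness below are assumed illustrative values (e.g. a ceramic top and metal bottom), not the paper's data.

```python
import numpy as np

# Through-thickness effective modulus of an FGM beam via the rule of mixtures
# with a power-law volume fraction of the top constituent:
# V_t(z) = (z/h + 1/2)^n,  E(z) = E_t V_t + E_b (1 - V_t),  z in [-h/2, h/2].
E_top, E_bot = 380e9, 70e9     # assumed ceramic/metal moduli [Pa]
h = 0.02                        # beam thickness [m]

def modulus(z, nexp):
    Vt = (z / h + 0.5) ** nexp  # z measured from the mid-plane
    return E_top * Vt + E_bot * (1.0 - Vt)

z = np.linspace(-h / 2, h / 2, 5)
E1 = modulus(z, 1.0)            # linear grading
E5 = modulus(z, 5.0)            # grading rich in the bottom constituent
```

Raising the power-law exponent shifts the profile toward the bottom constituent everywhere except at the top surface, which is why the displacements and stresses reported in the paper vary systematically with the exponent.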

Mathematical and Computational Applications doi: 10.3390/mca28040083

Authors: SidAhmed Benchiha Laxmi Prasad Sapkota Aned Al Mutairi Vijay Kumar Rana H. Khashab Ahmed M. Gemeay Mohammed Elgarhy Said G. Nassr

In this article, we extensively study a family of distributions based on the trigonometric sine function. We add an extra parameter to the sine transformation family and name it the alpha-sine-G family of distributions. Some important functional forms and properties of the family are provided in general form. A specific sub-model of this family, the alpha-sine Weibull, is also introduced using the Weibull distribution as the parent distribution and is studied in depth. The statistical properties of this new distribution are investigated, and its parameters are estimated using the maximum likelihood, maximum product of spacings, least squares, weighted least squares, and minimum distance methods. For further justification of these estimates, a simulation experiment is carried out. Two real data sets are analyzed to demonstrate the suggested model's applicability. The suggested model performed well compared to some existing models considered in the study.
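The sine transformation underlying this family is short enough to sketch. The block below shows the α-free sine-G base, F(x) = sin((π/2)·G(x)) with a Weibull parent G; the paper's alpha-sine-G adds a further shape parameter on top of this. The Weibull parameters are assumed.

```python
import numpy as np

# Sine-G transform: for any baseline CDF G, F(x) = sin((pi/2) * G(x)) is
# again a valid CDF (monotone, F(-inf)=0, F(inf)=1). Here G is Weibull.
def weibull_cdf(x, k, lam):
    return 1.0 - np.exp(-(x / lam) ** k)

def sine_weibull_cdf(x, k, lam):
    return np.sin(0.5 * np.pi * weibull_cdf(x, k, lam))

x = np.linspace(0.0, 10.0, 101)
F = sine_weibull_cdf(x, k=2.0, lam=3.0)   # assumed shape/scale values
```

Because the transform reshapes the parent CDF without adding support constraints, the same two lines give a sine-G version of any parent distribution, which is what makes the family easy to extend.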

Mathematical and Computational Applications doi: 10.3390/mca28040082

Authors: Mustafa Kemal Apalak Junuthula N. Reddy

This study investigates the strain and stress states in an aluminum single lap joint bonded with a functionally graded Al2O3 micro-particle-reinforced adhesive layer subjected to a uniform temperature field. The Navier equations of elasticity theory were formulated by considering the spatial derivatives of the Lamé constants and the coefficient of thermal expansion for the local material composition. The set of partial differential equations and mechanical boundary conditions for a two-dimensional model was reduced to a set of linear equations by means of the central finite difference approximation at each grid point of a discretized joint. The through-thickness Al2O3-adhesive composition was tailored by the functional grading concept, and the mechanical and thermal properties of the local adhesive composition were predicted by the Mori–Tanaka homogenization approach. The adherend–adhesive interfaces exhibited sharp discontinuous thermal stresses, whereas the discontinuous nature of thermal strains along the bi-material interfaces can be moderated by the gradient power index, which controls the through-thickness variation of the particle amount in the local adhesive composition. The free edges of the adhesive layer were also critical due to the occurrence of high normal and shear strains and stresses. The gradient power index can influence the distribution and levels of the strain and stress components only for a sufficiently high volume fraction of particles. The grading direction of the particles in the adhesive layer was not influential because the temperature field is uniform; it merely interchanges the low- and high-strain and stress regions, so that the neat adhesive–adherend interface and the particle-rich adhesive–adherend interface are relocated.

Mathematical and Computational Applications doi: 10.3390/mca28040081

Authors: Meicong Li Zheng Zhang Yangyang Li Qiang Zhao Mei Huang Xiaoping Ouyang

Tungsten is a promising material for nuclear fusion reactors, but its performance can be degraded by the accumulation of hydrogen (H) and helium (He) isotopes produced by nuclear reactions. This study investigates the effect of chromium (Cr) and vanadium (V) on the behavior of hydrogen and helium in tungsten (W) using first-principles calculations. The results show that W becomes easier to process after adding Cr or V, and that its stability improves after adding V. Adding Cr hinders H and He diffusion in W, while V promotes it. There is attraction between H and Cr or V at separations above 1.769 Å but repulsion below 1.583 Å, whereas there is always attraction between He and Cr or V. The attraction between vacancies and He is stronger than that between He and Cr or V. There is no clear effect on H when vacancies and Cr or V coexist in W, and vacancies can dilute the effects of Cr and V on H and He in W.

Mathematical and Computational Applications doi: 10.3390/mca28040080

Authors: Diana-Itzel Vázquez-Santiago Héctor-Gabriel Acosta-Mesa Efrén Mezura-Montes

One of the main limitations of traditional neural-network-based classifiers is the assumption that all query data are well represented within their training set. Unfortunately, in real-life scenarios, this is often not the case, and unknown class data may appear during testing, which drastically weakens the robustness of the algorithms. For this type of problem, open-set recognition (OSR) proposes a new approach in which the world knowledge of algorithms is assumed to be incomplete, so they must be prepared to detect and reject objects of unknown classes. However, the goal of this approach does not include the detection of new classes hidden within the rejected instances, which would be beneficial for increasing the model's knowledge and classification capability, even after training. This paper proposes an OSR strategy with an extension for new class discovery aimed at vehicle make and model recognition (VMMR). We use a neuroevolution technique and the contrastive loss function to design a domain-specific CNN that generates a consistent distribution, in terms of cosine similarity, of feature vectors belonging to the same class within the embedded space, and that maintains this behavior for unknown classes. This distribution serves as the main guide for a probabilistic model and a clustering algorithm to simultaneously detect objects of new classes and discover their classes. The results show that the presented strategy effectively addresses the VMMR problem as an OSR problem and, furthermore, is able to simultaneously recognize the new classes hidden within the rejected objects. OSR research has focused on demonstrating effectiveness on benchmark databases that are not domain-specific, while VMMR research has focused on improving classification accuracy; however, since VMMR is a real-world recognition problem, it requires strategies for dealing with unknown data, which has not been extensively addressed and, to the best of our knowledge, has never been considered from an OSR perspective. This work therefore also contributes a benchmark for future domain-specific OSR.
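The rejection rule at the core of such an OSR pipeline can be sketched without any trained network: compare a query embedding by cosine similarity to per-class prototypes and reject as "unknown" below a threshold. The vectors, class names, and threshold below are synthetic stand-ins, not the paper's learned embeddings.

```python
import numpy as np

# Minimal open-set rejection sketch: nearest prototype by cosine similarity,
# with a threshold gating "known" vs "unknown". All values are synthetic.
def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Stand-ins for mean embeddings of known vehicle make/model classes.
prototypes = {"classA": np.array([1.0, 0.0, 0.0]),
              "classB": np.array([0.0, 1.0, 0.0])}
THRESH = 0.8

def classify(feat):
    scores = {c: cosine(feat, p) for c, p in prototypes.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= THRESH else "unknown"

known_query = np.array([0.95, 0.05, 0.0])   # close to classA's prototype
novel_query = np.array([0.1, 0.1, 0.99])    # far from every prototype
```

In the paper's extension, the rejected embeddings are not discarded: a clustering step groups them so that recurring "unknowns" can be promoted to new classes.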

Mathematical and Computational Applications doi: 10.3390/mca28040079

Authors: Xiaowen Shi Xiangyu Zhang Renwu Tang Juan Yang

Reflected partial differential equations (PDEs) have important applications in financial mathematics, stochastic control, physics, and engineering. This paper presents a numerical method for solving high-dimensional reflected PDEs, where the main challenges are overcoming the “curse of dimensionality” and approximating the reflection term; some recently developed neural-network-based numerical algorithms fail to solve high-dimensional reflected PDEs. To address these problems, the reflected PDEs are first transformed into reflected backward stochastic differential equations (BSDEs) using the reflected Feynman–Kac formula. Secondly, the reflection term of the reflected BSDEs is approximated using the penalization method. Next, the BSDEs are discretized using a strategy that combines the Euler and Crank–Nicolson schemes. Finally, a deep neural network model is employed to simulate the solution of the BSDEs. The effectiveness of the proposed method is tested in two numerical experiments, and the model shows high stability and accuracy in solving reflected PDEs of up to 100 dimensions.
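The penalization step can be isolated in a deterministic one-dimensional toy: replace the constraint y ≥ g by a penalty term λ·max(g − y, 0) in the dynamics and let λ grow. This mirrors the role penalization plays for the reflection term of the reflected BSDE, although the paper applies it in the stochastic, high-dimensional setting; the obstacle, horizon, and λ values here are assumed.

```python
# Penalized approximation of the reflected dynamics "y' = -1 subject to
# y >= g": integrate y' = -1 + lam * max(g - y, 0) by explicit Euler.
g = 0.0                       # flat obstacle
dt, T = 1e-3, 2.0
steps = int(T / dt)

def solve(lam):
    y = 1.0
    for _ in range(steps):
        y += dt * (-1.0 + lam * max(g - y, 0.0))
    return y

y_small, y_large = solve(10.0), solve(1000.0)
# Exact reflected solution at T = 2: y = 0 (the path hits the obstacle at
# t = 1 and is held there); the penalized solution undershoots by ~1/lam.
```

The O(1/λ) penetration below the obstacle is the systematic error of the penalization method; the paper's scheme makes λ large enough that this error is dominated by the discretization and network approximation errors.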

Mathematical and Computational Applications doi: 10.3390/mca28040078

Authors: John Dean Van Tonder Martin Philip Venter Gerhard Venter

The inverse finite element method is a technique that can be used for material model parameter characterization. The literature shows that this approach may get caught in local minima of the design space. These local minimum solutions often fit the material test data with small errors and are often mistaken for the optimal solution. The problem with these sub-optimal solutions becomes apparent when they are applied to different loading conditions, where significant errors can be observed. This paper presents a new method that resolves this issue for the Mooney–Rivlin model, building on a previous paper that used flat planes, referred to as hyperplanes, to map the error functions and isolate the unique optimal solution. The new method instead uses a constrained optimization approach, utilizing equality constraints to evaluate the error functions. As a result, the curvature of the design space is taken into account, which significantly reduces the variation between predicted parameters, from a maximum of 1.934% in the previous paper down to 0.1882% in the results presented here. The results of this study demonstrate that the new method not only isolates the unique optimal solution but also drastically reduces the variation in the predicted parameters. The paper concludes that the presented characterization method significantly contributes to the existing literature.
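The forward model at the heart of such a characterization can be sketched for the simplest load case: the two-parameter incompressible Mooney–Rivlin response under uniaxial tension, fitted by least squares to (here synthetic, noiseless) test data. This is only the direct-fit baseline; the paper's constrained inverse-FE approach addresses the non-uniqueness that appears with real, multi-mode data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Incompressible Mooney-Rivlin model under uniaxial tension:
# nominal stress P(l) = 2 (l - l^-2) (C10 + C01 / l), with stretch l.
def mr_stress(l, C10, C01):
    return 2.0 * (l - l**-2) * (C10 + C01 / l)

# Synthetic "test data" from assumed parameters (a stand-in for a real
# uniaxial experiment), then recover the parameters by least squares.
C10_true, C01_true = 0.3, 0.05               # MPa, assumed values
stretch = np.linspace(1.05, 3.0, 30)
stress = mr_stress(stretch, C10_true, C01_true)

(C10_fit, C01_fit), _ = curve_fit(mr_stress, stretch, stress, p0=[0.1, 0.1])
```

With noiseless single-mode data the fit is exact; it is precisely when several deformation modes and noisy FE-based objectives are combined that the local-minimum problem described in the abstract arises.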

Mathematical and Computational Applications doi: 10.3390/mca28030077

Authors: Julianne Blignaut Martin Venter David van den Heever Mark Solms Ivan Crockart

Binocular rivalry is the perceptual dominance of one visual stimulus over another. Conventionally, binocular rivalry is induced using a mirror stereoscope, a setup involving mirrors oriented at an angle to a display. The respective mirror planes fuse competing visual stimuli in the observer's visual field by projecting the stimuli through the stereoscope to the observed visual field. Since virtual-reality head-mounted displays fuse dichoptic vision in a similar way, and since they are more versatile and more readily available than mirror stereoscopes, this study investigated the efficacy of using a virtual-reality headset (Oculus Rift-S) as an alternative to a mirror stereoscope for studying binocular rivalry. To evaluate the validity of using virtual-reality headsets to induce visual dominance/suppression, two identical experimental sequences, one using a conventional mirror stereoscope and one using a virtual-reality headset, were compared and evaluated. The study used Gabor patches at different orientations to induce binocular rivalry and to evaluate the efficacy of the two experiments. Participants were asked to record all instances of perceptual dominance (complete suppression) and non-dominance (incomplete suppression). Independent-sample t-tests confirmed that binocular rivalry with stable vergence was successfully induced for the mirror-stereoscope experiment (t = −4.86; p ≤ 0.0001) and the virtual-reality experiment (t = −9.41; p ≤ 0.0001).
Using ANOVA to compare Gabor patch pairs of gratings at +45°/−45° orientations presented in both visual fields, gratings at 0°/90° orientations presented in both visual fields, and mixed gratings (i.e., unconventional grating pairs) presented in both visual fields, the performance of the two experiments was evaluated by comparing observation duration in seconds (F = 0.12; p = 0.91) and the alternation rate per trial (F = 8.1; p = 0.0005). The differences between the stimulus groups were not statistically significant for observation duration but were significant for the alternation rates per trial. Moreover, ANOVA also showed that the dominance durations (F = 114.1; p < 0.0001) and the alternation rates (F = 91.6; p < 0.0001) per trial differed significantly between the mirror-stereoscope and the virtual-reality experiments, with the virtual-reality experiment showing an increase in alternation rate and a decrease in observation duration. The study was able to show that a virtual-reality head-mounted display can be used as an effective and novel alternative for inducing binocular rivalry, although there were some differences in visual bi-stability between the two methods. This paper discusses the experimental measures taken to minimise piecemeal rivalry and to evaluate perceptual dominance between the two experimental designs.
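The between-apparatus comparisons above rest on standard independent-sample tests. As a minimal sketch of that analysis step, the block below runs an independent-sample t-test on synthetic dominance-duration samples; the group means, spread, and sample sizes are made-up values, not the study's data.

```python
import numpy as np
from scipy.stats import ttest_ind

# Synthetic dominance durations (seconds) for the two apparatus conditions;
# the virtual-reality condition is given shorter durations, mimicking the
# direction of the effect reported in the study (values are invented).
rng = np.random.default_rng(3)
mirror = rng.normal(2.5, 0.5, size=40)   # stand-in mirror-stereoscope data
vr = rng.normal(1.8, 0.5, size=40)       # stand-in virtual-reality data

t_stat, p_val = ttest_ind(mirror, vr)
```

The multi-group comparisons in the study (stimulus orientation pairs) use one-way ANOVA instead, for which `scipy.stats.f_oneway` follows the same call pattern.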

Mathematical and Computational Applications doi: 10.3390/mca28030076

Authors: Mohammad Khodabakhshi Soureshjani Richard G. Zytner

Bioventing is a widely recognized technique for the remediation of petroleum hydrocarbon-contaminated soil. In this study, the objective was to identify an optimal mathematical model that balances accuracy and ease of implementation. A comprehensive review of various models developed for bioventing was conducted, wherein the advantages and disadvantages of each model were evaluated and compared with regard to the different numerical methods used to solve the relevant bioventing equations. After investigating the various assumptions and methods from the literature, an improved foundational bioventing model was developed that characterizes gas flow in unsaturated zones where water and non-aqueous phase liquid (NAPL) are present and immobile, accounting for interphase mass transfer and biodegradation and incorporating soil properties through a rate constant correlation. The proposed model was solved using the finite volume method in OpenFOAM, an open-source computational toolbox. Preliminary simulation results for a simple case show good agreement with the exact analytical solution of the same equations. This improved bioventing model has the potential to enhance predictions of the remediation process and support the development of efficient remediation strategies for petroleum hydrocarbon-contaminated soil.

]]>Mathematical and Computational Applications doi: 10.3390/mca28030075

Authors: María Concepción Salvador-González Juana Canul-Reich Rafael Rivera-López Efrén Mezura-Montes Erick de la Cruz-Hernandez

Bacterial vaginosis (BV) is a common disease and a recurring public health problem; this infection can also trigger other sexually transmitted diseases. In the medical field, not all possible combinations among the pathogens of a possible case of BV are known well enough to allow a diagnosis at the onset of the disease. To contribute to this line of research, this study uses a dataset with information from sexually active women between 18 and 50 years old, including 17 numerical attributes of microorganisms and bacteria with positive and negative results for BV. These values were semantically categorized so that the Apriori algorithm could create association rules, using support, confidence, and lift as statistical metrics to evaluate rule quality, and those results were incorporated into the objective function of the differential evolution (DE) algorithm. To guide the evolutionary process, we also incorporated the knowledge of a human expert, represented as a set of biologically meaningful constraints. We then compared the performance of the rand/1/bin and best/1/bin DE variants over 30 independent executions. The experimental results allowed a reduced subset of biologically meaningful association rules to be selected by execution, dimension, and DE version.
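As a concrete illustration of the rule metrics mentioned above, the following minimal Python sketch computes support, confidence, and lift for a single association rule over a toy set of categorized records. The attribute names and data are hypothetical, not taken from the study's dataset.

```python
def rule_metrics(transactions, antecedent, consequent):
    """Compute support, confidence, and lift for the rule antecedent -> consequent."""
    n = len(transactions)
    a = frozenset(antecedent)
    c = frozenset(consequent)
    both = sum(1 for t in transactions if a <= t and c <= t)
    only_a = sum(1 for t in transactions if a <= t)
    only_c = sum(1 for t in transactions if c <= t)
    support = both / n
    confidence = both / only_a if only_a else 0.0
    lift = confidence / (only_c / n) if only_c else 0.0
    return support, confidence, lift

# Hypothetical categorized records: each transaction is a set of attribute=level items.
data = [
    frozenset({"gardnerella=high", "ph=high", "bv=positive"}),
    frozenset({"gardnerella=high", "ph=high", "bv=positive"}),
    frozenset({"gardnerella=low", "ph=normal", "bv=negative"}),
    frozenset({"gardnerella=high", "ph=normal", "bv=negative"}),
]
s, c, l = rule_metrics(data, {"gardnerella=high", "ph=high"}, {"bv=positive"})
```

In an Apriori-style workflow, a rule is retained when these three metrics exceed thresholds fixed by the practitioner; here the toy rule has support 0.5, confidence 1.0, and lift 2.0.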

]]>Mathematical and Computational Applications doi: 10.3390/mca28030074

Authors: Vijay Arya Kumar Bedabrata Chand

A class of zipper fractal functions is more versatile than corresponding classes of traditional and fractal interpolants due to a binary vector called a signature. A zipper fractal function constructed through a zipper iterated function system (IFS) allows one to use negative and positive horizontal scalings. In contrast, a fractal function constructed with an IFS uses positive horizontal scalings only. This article introduces some novel classes of continuously differentiable convexity-preserving zipper fractal interpolation curves and surfaces. First, we construct zipper fractal interpolation curves for the given univariate Hermite interpolation data. Then, we generate zipper fractal interpolation surfaces over a rectangular grid without using any additional knots. These surface interpolants converge uniformly to a continuously differentiable bivariate data-generating function. For a given Hermite bivariate dataset and a fixed choice of scaling and shape parameters, one can obtain a wide variety of zipper fractal surfaces by varying signature vectors in both the x direction and y direction. Some numerical illustrations are given to verify the theoretical convexity results.

]]>Mathematical and Computational Applications doi: 10.3390/mca28030073

Authors: Nico Heizmann

Internal diffusion limited aggregation (IDLA) is a random aggregation model on a graph G, whose clusters are formed by random walks started at the origin (some fixed vertex) and stopped upon visiting a previously unvisited site. On the Sierpinski gasket graph, the asymptotic shape is known to be a ball in the graph metric. In this paper, we improve the known sublinear bounds for the fluctuations around this asymptotic shape by establishing bounds for the odometer function of a divisible sandpile model.
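For readers unfamiliar with the model, the aggregation rule can be sketched in a few lines of Python. Note that this sketch grows the cluster on the square lattice Z&sup2; rather than on the Sierpinski gasket graph studied in the paper; it only illustrates the walk-and-settle dynamics.

```python
import random

def idla_cluster(n_particles, seed=0):
    """Grow an IDLA cluster on Z^2: each particle performs a simple random walk
    from the origin and settles at the first previously unvisited site."""
    rng = random.Random(seed)
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    cluster = set()
    for _ in range(n_particles):
        x, y = 0, 0
        while (x, y) in cluster:       # walk until an unvisited site is found
            dx, dy = rng.choice(steps)
            x, y = x + dx, y + dy
        cluster.add((x, y))            # the particle settles here
    return cluster

cluster = idla_cluster(200)
```

For large particle counts the resulting cluster is close to a Euclidean ball; the fluctuations around that shape are exactly what the bounds in the paper control (on the gasket, in the graph metric).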

]]>Mathematical and Computational Applications doi: 10.3390/mca28030072

Authors: José-Luis Llaguno-Roque Rocio-Erandi Barrientos-Martínez Héctor-Gabriel Acosta-Mesa Tania Romo-González Efrén Mezura-Montes

Breast cancer has become a global health problem, ranking first in incidence and fifth in mortality among women around the world. In Mexico, breast cancer is the leading cause of death in women. This work uses deep learning techniques to discriminate between healthy patients and breast cancer patients, based on the banding patterns obtained from Western Blot strip images of the autoantibody response to antigens of the T47D tumor line. The reaction of antibodies to tumor antigens occurs early in the process of tumorigenesis, years before clinical symptoms. One of the main challenges in deep learning is the design of the convolutional neural network (CNN) architecture. Neuroevolution has been used to support this task and has produced highly competitive results. We propose neuroevolving CNNs to find an optimal architecture that achieves competitive classification performance, taking Western Blot images as input. The resulting CNN reached 90.67% accuracy, 90.71% recall, 95.34% specificity, and 90.69% precision in classifying three different classes (healthy, benign breast pathology, and breast cancer).

]]>Mathematical and Computational Applications doi: 10.3390/mca28030071

Authors: Marcela Quiroz-Castellanos Luis Gerardo de la Fraga Adriana Lara Leonardo Trujillo Oliver Schütze

This Special Issue was inspired by the 9th International Workshop on Numerical and Evolutionary Optimization (NEO 2021) held&mdash;due to the COVID-19 pandemic&mdash;as an online-only event from 8 to 10 September 2021 [...]

]]>Mathematical and Computational Applications doi: 10.3390/mca28030070

Authors: Mark Pollicott Julia Slipantschuk

We establish rigorous estimates for the Hausdorff dimension of the spectra of Laplacians associated with Sierpi&#324;ski lattices and infinite Sierpi&#324;ski gaskets and other post-critically finite self-similar sets.

]]>Mathematical and Computational Applications doi: 10.3390/mca28030069

Authors: Sebastian Stark

Robust and computationally efficient numeric algorithms are required to simulate the sintering process of complex ceramic components by means of the finite element method. This work focuses on a thermodynamically consistent sintering model capturing the effects of both viscosity and elasticity within the standard dissipative framework. In particular, the temporal integration of the model by means of several implicit first- and second-order accurate one-step time integration methods is discussed. Numerical experiments on the material point level show that the first-order schemes exhibit poor performance compared to the second-order schemes. Further numerical experiments indicate that these results translate directly to finite element simulations.

]]>Mathematical and Computational Applications doi: 10.3390/mca28030068

Authors: Martin Philip Venter Naudé Thomas Conradie

This paper introduces a comparison method for three explicitly defined intermediate encoding methods in generative design for two-dimensional soft robotic units. This study evaluates a conventional genetic algorithm with full access to removing elements from the design domain using an implicit random encoding layer, a Lindenmayer system encoding mimicking biological growth patterns, and a compositional pattern-producing network encoding for 2D pattern generation. The objective of the optimisation problem is to match the deformation of a single actuator unit with a desired target shape, specifically uni-axial elongation, under internal pressure. The study results suggest that the Lindenmayer system encoding generates candidate units with fewer function evaluations than the traditional implicitly encoded genetic algorithm. However, its distribution of constraint and internal energy is similar to that of the random encoding, and it produces a less diverse population of candidate units. In contrast, despite requiring more function evaluations than the Lindenmayer system encoding, the compositional pattern-producing network encoding produces a similar diversity of candidate units. Overall, the compositional pattern-producing network encoding results in a proportionally higher number of high-performing units than the random or Lindenmayer system encoding, making it a viable alternative to a conventional monolithic approach. These results suggest that the compositional pattern-producing network encoding may be a promising approach for designing soft robotic actuators with desirable performance characteristics.

]]>Mathematical and Computational Applications doi: 10.3390/mca28030067

Authors: Luis Víctor Maidana Benítez Melisa María Rosa Villamayor Paredes José Colbes César F. Bogado-Martínez Benjamin Barán Diego P. Pinto-Roa

This paper addresses serialized approaches to the routing, modulation level, and spectrum assignment (RMLSA) problem in elastic optical networks, using multiple sequential sub-sets of requests, under integer linear programming (ILP). The literature has reported two-stage serial optimization methods, referred to as RML+SA, which retain computational efficiency as the problem grows, compared to the classical one-stage RMLSA optimization approach. However, their spectrum usage can still be improved relative to the RMLSA solution. Consequently, this paper proposes RML+SA solutions considering multiple sequential sub-sets of requests, split traffic flow, and both path-oriented and link-oriented routing models. Simulation results on different test scenarios show that: (a) the models based on multiple sequential sub-sets of requests improve computation time without worsening spectrum usage compared to single-set optimization approaches; (b) divisible traffic flow approaches show promise when the number of request sub-sets is low compared to the non-divisible counterpart; and (c) path-oriented routing improves the used spectrum by increasing the number of candidate routes compared to link-oriented routing.

]]>Mathematical and Computational Applications doi: 10.3390/mca28030066

Authors: Adam Aharony Ron Hindi Maor Valdman Shai Gul

Images or paintings with homogeneous colors may appear dull to the naked eye; however, there may be numerous details in the image that are expressed through subtle changes in color. This manuscript introduces a novel approach that can uncover these concealed details via a transformation that increases the distance between adjacent pixels, ultimately leading to a newly modified version of the input image. We chose the artworks of Mark Rothko&mdash;famous for their simplicity and limited color palette&mdash;as a case study. Our approach offers a different perspective, leading to the discovery of either accidental or deliberate clusters of colors. Our method is based on the quaternion ring, wherein a suitable multiplication can be used to boost the color difference between neighboring pixels, thereby unveiling new details in the image. The quality of the transformation between the original image and the resultant versions can be measured by the ratio between the number of connected components in the output versions (n) and the number of connected components in the original image (m), which usually satisfies n/m &#8811; 1. Although this procedure has been employed as a case study for artworks, it can be applied to any type of image with a similar simplicity and limited color palette.
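The quaternion arithmetic underlying such color manipulations can be sketched as follows. This Python example treats an RGB triple as a pure quaternion and rotates it about the gray axis via the Hamilton product; it is only an illustration of quaternion multiplication acting on colors, not the specific transformation used in the manuscript.

```python
import math

def qmult(p, q):
    """Hamilton product of quaternions p = (w, x, y, z) and q."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def rotate_rgb(rgb, angle):
    """Rotate an RGB triple (embedded as the pure quaternion (0, r, g, b))
    about the gray axis (1, 1, 1)/sqrt(3) by the given angle, via q v q*."""
    s, c = math.sin(angle / 2), math.cos(angle / 2)
    a = s / math.sqrt(3)
    q = (c, a, a, a)            # unit rotor
    qc = (c, -a, -a, -a)        # its conjugate
    v = (0.0,) + tuple(rgb)
    _, r, g, b = qmult(qmult(q, v), qc)
    return (r, g, b)
```

A rotation by 120&deg; about the gray axis cyclically permutes the color channels (pure red becomes pure green), which hints at how a well-chosen multiplication can push nearly identical neighboring colors apart.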

]]>Mathematical and Computational Applications doi: 10.3390/mca28030065

Authors: Boris Solomyak

This is a brief survey of selected results obtained using the &ldquo;transversality method&rdquo; developed for studying parametrized families of fractal sets and measures. We mostly focus on the early development of the theory, restricting ourselves to self-similar and self-conformal iterated function systems.

]]>Mathematical and Computational Applications doi: 10.3390/mca28030064

Authors: Khanyisani Mhlangano Makhanya Simon Connell Muaaz Bhamjee Neil Martinson

Pulmonary diseases are a leading cause of illness and disability globally. While access to hospitals or specialist clinics for investigations is currently the usual way to characterize a patient&rsquo;s condition, access to medical services is restricted in less resourced settings. We posit that pulmonary disease may impact vocalization, which could aid in characterizing a pulmonary condition. We therefore propose a new method to diagnose pulmonary disease by analyzing the vocal and cough changes of a patient. Computational fluid dynamics holds immense potential for assessing the flow-induced acoustics in the lungs. The aim of this study is to investigate the potential of flow-induced vocal-, cough-, and lung-generated acoustics to diagnose lung conditions using computational fluid dynamics methods. Pneumonia is used as the model disease in this study. The hypothesis is that a computational fluid dynamics model can accurately represent the flow-induced acoustics of healthy and infected lungs, and that modeled differences in fluid and acoustic behavior between these conditions can be tested and described. Computational fluid dynamics and a lung geometry are used to simulate the flow distribution and obtain the acoustics for the different scenarios. The results suggest that it is possible to determine the difference in vocalization between healthy lungs and those with pneumonia using computational fluid dynamics, as the flow patterns and acoustics differ. Our results suggest there is potential for computational fluid dynamics to enhance understanding of flow-induced acoustics that could be characteristic of different lung pathologies. Such simulations could be combined with machine learning, with the final objective of using telemedicine to triage or diagnose patients with respiratory illness remotely.

]]>Mathematical and Computational Applications doi: 10.3390/mca28030063

Authors: Marc Girondot Jon Barry

The distribution of the sum of negative binomial random variables has a special role in insurance mathematics, actuarial science, and ecology. Two methods to estimate this distribution have been published: a finite-sum exact expression and a series expression by convolution. We compare both methods, as well as a new normalized saddlepoint approximation and normal and single-distribution negative binomial approximations. We show that the exact finite-sum expression requires a large amount of memory when the number of random variables is high (&gt;7). The normalized saddlepoint approximation gives an output with a high relative error (around 3&ndash;5%), which can be a problem in some situations. The convolution method is a good compromise for applied practitioners, considering the amount of memory used, the computing time, and the precision of the estimates. However, a simplistic implementation of the algorithm could produce incorrect results due to the non-monotony of the convergence rate. The tolerance limit must be chosen depending on the expected order of magnitude of the estimate, for which we used the answer generated by the saddlepoint approximation. Finally, the normal and negative binomial approximations should not be used, as they produce outputs with very low accuracy.
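A minimal Python sketch of the convolution idea follows. It assumes integer-valued r parameters and a fixed truncation point, both simplifications; the paper's method additionally has to manage the tolerance limit and convergence issues discussed above.

```python
from math import comb

def nb_pmf(k, r, p):
    """Negative binomial PMF: probability of k failures before the r-th success,
    with success probability p (r assumed to be a positive integer here)."""
    return comb(k + r - 1, k) * (p ** r) * ((1 - p) ** k)

def convolve_nb(params, kmax):
    """PMF of the sum of independent negative binomials, truncated at kmax.
    params is a list of (r, p) pairs; the PMFs are convolved one at a time."""
    pmf = [1.0] + [0.0] * kmax                      # point mass at 0
    for r, p in params:
        comp = [nb_pmf(k, r, p) for k in range(kmax + 1)]
        pmf = [sum(pmf[j] * comp[k - j] for j in range(k + 1))
               for k in range(kmax + 1)]
    return pmf

# Sum of NB(r=2, p=0.5) and NB(r=3, p=0.4); means 2 and 4.5, so the sum has mean 6.5.
pmf = convolve_nb([(2, 0.5), (3, 0.4)], kmax=200)
total = sum(pmf)
```

With a generous truncation point the truncated PMF sums to essentially 1; choosing that truncation (or, equivalently, a tolerance limit) too aggressively is exactly the kind of pitfall the abstract warns about.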

]]>Mathematical and Computational Applications doi: 10.3390/mca28020062

Authors: Jacques Francois Du Toit Ryno Laubscher

Physics-Informed Neural Networks (PINNs) are a new class of machine learning algorithms that are capable of accurately solving complex partial differential equations (PDEs) without training data. By introducing a new methodology for fluid simulation, PINNs provide the opportunity to address challenges that were previously intractable, such as PDE problems that are ill-posed. PINNs can also solve parameterized problems in a parallel manner, which results in favorable scaling of the associated computational cost. The full potential of the application of PINNs to solving fluid dynamics problems is still unknown, as the method is still in early development: many issues remain to be addressed, such as the numerical stiffness of the training dynamics, the shortage of methods for simulating turbulent flows and the uncertainty surrounding what model hyperparameters perform best. In this paper, we investigated the accuracy and efficiency of PINNs for modeling aortic transvalvular blood flow in the laminar and turbulent regimes, using various techniques from the literature to improve the simulation accuracy of PINNs. Almost no work has been published, to date, on solving turbulent flows using PINNs without training data, as this regime has proved difficult. This paper aims to address this gap in the literature, by providing an illustrative example of such an application. The simulation results are discussed, and compared to results from the Finite Volume Method (FVM). It is shown that PINNs can closely match the FVM solution for laminar flow, with normalized maximum velocity and normalized maximum pressure errors as low as 5.74% and 9.29%, respectively. The simulation of turbulent flow is shown to be a greater challenge, with normalized maximum velocity and normalized maximum pressure errors only as low as 41.8% and 113%, respectively.

]]>Mathematical and Computational Applications doi: 10.3390/mca28020061

Authors: Fernando Camarena Miguel Gonzalez-Mendoza Leonardo Chang Ricardo Cuevas-Ascencio

Artificial intelligence&rsquo;s rapid advancement has enabled various applications, including intelligent video surveillance systems, assisted living, and human&ndash;computer interaction. These applications often require one core task: video-based human action recognition. Research in video-based human action recognition is vast and ongoing, making it difficult to assess the full scope of available methods and current trends. This survey concisely explores the vision-based human action recognition field and defines core concepts, including definitions and explanations of the common challenges and the most used datasets. Additionally, we provide an accessible account of the literature's approaches and their evolution over time, emphasizing intuitive notions. Finally, we explore current research directions and potential future paths. The core goal of this work is to provide future works with a shared understanding of fundamental ideas, clear intuitions about current works, and new research opportunities.

]]>Mathematical and Computational Applications doi: 10.3390/mca28020060

Authors: Quinn G. Reynolds Thokozile P. Kekana Buhle S. Xakalashe

The application of direct-current plasma arc furnace technology to the problem of coal gasification is investigated using computational multiphysics models of the plasma arc inside such units. An integrated modelling workflow for the study of DC plasma arc discharges in synthesis gas atmospheres is presented. The thermodynamic and transport properties of the plasma are estimated using statistical mechanics calculations and are shown to have highly non-linear dependencies on the gas composition and temperature. A computational magnetohydrodynamic solver for electromagnetically coupled flows is developed and implemented in the OpenFOAM&reg; framework, and the behaviour of three-dimensional transient simulations of arc formation and dynamics is studied in response to different plasma gas compositions and furnace operating conditions. To demonstrate the utility of the methods presented, practical engineering results are obtained from an ensemble of simulation results for a pilot-scale furnace design. These include the stability of the arc under different operating conditions and the dependence of voltage&ndash;current relationships on the arc length, which are relevant in understanding the industrial operability of plasma arc furnaces used for waste coal gasification.

]]>Mathematical and Computational Applications doi: 10.3390/mca28020059

Authors: Daniele Boffi Fabio Credali Lucia Gastaldi Simone Scacchi

We present and analyze a parallel solver for the solution of fluid structure interaction problems described by a fictitious domain approach. In particular, the fluid is modeled by the non-stationary incompressible Navier&ndash;Stokes equations, while the solid evolution is represented by the elasticity equations. The parallel implementation is based on the PETSc library and the solver has been tested in terms of robustness with respect to mesh refinement and weak scalability by running simulations on a Linux cluster.

]]>Mathematical and Computational Applications doi: 10.3390/mca28020058

Authors: Dineo A. Ramatlo Daniel N. Wilke Philip W. Loveday

Guided wave ultrasound (GWU) systems have been widely used for monitoring structures such as rails, pipelines, and plates. In railway tracks, the monitoring process involves the complicated propagation of waves over several hundred meters. The propagating waves are multi-modal and interact with discontinuities differently, increasing complexity and leading to different response signals. When the researcher wants to gain insight into the behavior of guided waves, predicting response signals for different combinations of modes becomes necessary. However, the task can become computationally costly when physics-based models are used. Digital twins can enable a practitioner to deal systematically with the complexities of guided wave monitoring in practical or user-specified settings. This paper investigates the use of a hybrid digital model of an operational rail track to predict response signals for varying user-specified settings, specifically, the prediction of response signals for various combinations of modes of propagation in the rail. The digital twin hybrid model employs a physics-based model and a data-driven model. The physics-based model simulates the wave propagation response using techniques developed from the traditional 3D finite element method and the 2D semi-analytical finite element method (FEM). The physics-based model is used to generate virtual experimental signals containing different combinations of modes of propagation. These response signals are used to train the data-driven model based on a variational auto-encoder (VAE). Given an input baseline signal containing only the most dominant mode excited by a transducer, the VAE is trained to predict an inspection signal with increased complexity according to the specified combination of modes. 
The results show that, once the VAE has been trained, it can be used to predict inspection signals for different combinations of propagating modes, thus replacing the physics-based model, which is computationally costly. In the future, the VAE architecture will be adapted to predict response signals for varying environmental and operational conditions.

]]>Mathematical and Computational Applications doi: 10.3390/mca28020057

Authors: Johann M. Bouwer Daniel N. Wilke Schalk Kok

This research compares the performance of space-time surrogate models (STSMs) and network surrogate models (NSMs). Specifically, the surrogates must predict a system response that varies over time (or pseudo-time). A surrogate model is used to approximate the response of computationally expensive spatial and temporal fields resulting from computational mechanics simulations. Within a design context, a surrogate takes a vector of design variables that describes a current design and returns an approximation of the design&rsquo;s response through a pseudo-time variable. To compare various radial basis function (RBF) surrogate modeling approaches, the prediction of the load displacement path of a snap-through structure is used as an example numerical problem. This work specifically considers the scenario where analytical sensitivities are available directly from the computational mechanics solver, and therefore gradient-enhanced surrogates are constructed. In addition, the gradients are used to perform a domain transformation preprocessing step to construct surrogate models in a more isotropic domain, which is conducive to RBFs. This work demonstrates that although the gradient-based domain transformation scheme offers a significant improvement to the performance of STSMs, the NSM is far more robust. This research offers explanations for the improved performance of NSMs over STSMs and recommends future research to improve the performance of STSMs.
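To make the RBF ingredient concrete, here is a minimal, pure-Python Gaussian RBF interpolant fitted to a one-dimensional pseudo-time response. It is not gradient-enhanced, and the sample data are hypothetical; it only illustrates the fit-then-evaluate structure that any of the surrogate variants share.

```python
import math

def rbf_fit(xs, ys, eps=1.0):
    """Fit Gaussian RBF interpolant weights by solving the dense kernel system."""
    n = len(xs)
    A = [[math.exp(-(eps * (xs[i] - xs[j])) ** 2) for j in range(n)] for i in range(n)]
    b = list(ys)
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, n))) / A[r][r]
    return w

def rbf_eval(xs, w, x, eps=1.0):
    """Evaluate the fitted interpolant at a new point x."""
    return sum(wi * math.exp(-(eps * (x - xi)) ** 2) for wi, xi in zip(w, xs))

# Hypothetical pseudo-time response sampled at a few points.
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [math.sin(2 * math.pi * x) for x in xs]
w = rbf_fit(xs, ys, eps=2.0)
```

By construction the interpolant reproduces the training samples exactly; the shape parameter eps controls how isotropic the basis is, which is why the domain transformation step in the paper matters for RBF performance.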

]]>Mathematical and Computational Applications doi: 10.3390/mca28020056

Authors: Henk Pijls Le Phuong Quan

In this paper, we propose two Maple procedures and some related utilities to determine the maximum curvature of a cubic B&eacute;zier-spline curve that interpolates an ordered set of points in R&sup2; or R&sup3;. The procedures are designed from closed-form formulas for such open and closed curves.
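The curvature being maximized can be illustrated in Python (rather than Maple) using the standard formula &kappa;(t) = |B&prime;(t) &times; B&Prime;(t)| / |B&prime;(t)|&sup3; for a planar cubic B&eacute;zier segment. The control points and the dense-sampling search below are illustrative only; the paper works from closed-form formulas instead.

```python
def cubic_bezier_curvature(p0, p1, p2, p3, t):
    """Curvature kappa(t) = |B'(t) x B''(t)| / |B'(t)|^3 of a planar cubic Bezier."""
    # First derivative: B'(t) = 3[(1-t)^2 (p1-p0) + 2(1-t)t (p2-p1) + t^2 (p3-p2)]
    d1 = tuple(3 * ((1 - t) ** 2 * (p1[i] - p0[i])
                    + 2 * (1 - t) * t * (p2[i] - p1[i])
                    + t ** 2 * (p3[i] - p2[i])) for i in range(2))
    # Second derivative: B''(t) = 6[(1-t)(p2 - 2 p1 + p0) + t (p3 - 2 p2 + p1)]
    d2 = tuple(6 * ((1 - t) * (p2[i] - 2 * p1[i] + p0[i])
                    + t * (p3[i] - 2 * p2[i] + p1[i])) for i in range(2))
    cross = d1[0] * d2[1] - d1[1] * d2[0]   # scalar cross product in 2D
    speed = (d1[0] ** 2 + d1[1] ** 2) ** 0.5
    return abs(cross) / speed ** 3

# Brute-force estimate of the maximum curvature over a dense sample of t in [0, 1].
pts = [(0.0, 0.0), (1.0, 2.0), (2.0, 2.0), (3.0, 0.0)]
kmax = max(cubic_bezier_curvature(*pts, t=i / 1000) for i in range(1001))
```

Sampling gives only an approximation of the maximum; the appeal of closed-form formulas, as in the paper, is that they locate the true extremum without such a search.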

]]>Mathematical and Computational Applications doi: 10.3390/mca28020055

Authors: Johannes C. Joubert Daniel N. Wilke Patrick Pizette

This work describes a post-processing scheme for multiphase flow systems to characterize primary atomization. The scheme relies on the 2D fast Fourier transform (FFT) to separate the inherently multi-scale features present in the flow results. Emphasis is placed on the robust quantitative analysis enabled by this scheme, with this work specifically focusing on comparing atomizer nozzle designs. The generalized finite difference (GFD) method is used to simulate a high-pressure gas injected into a viscous liquid stream. The proposed scheme is applied exclusively to time-averaged results and is used to evaluate both the surface and volume features of the fluid system. Because the proposed scheme better recovers small-scale features, post-processing multiphase surface information was shown to be more beneficial than post-processing fluid volume information. While the volume information lacks the fine-scale details of the surface information, the duality between interfaces and fluid volumes leads to similar trends in the large-scale spatial structure recovered from both surface- and volume-based data sets.

]]>Mathematical and Computational Applications doi: 10.3390/mca28020054

Authors: Abayomi Adewale Akinwande Dimitry Moskovskikh Elena Romanovskaia Oluwatosin Abiodun Balogun J. Pradeep Kumar Valentin Romanovski

Recent studies have shown the benefits of utilizing ceramic particles as reinforcement in metal alloys; nevertheless, certain drawbacks, including loss of ductility, embrittlement, and decreased toughness, have been noted. To obtain balanced performance, experts have suggested the addition of metal particles as a supplement to the ceramic reinforcement. Consequently, high-performance metal hybrid composites have been developed. However, achieving the optimal mix of the reinforcement combination with regard to the optimal performance of the developed composite remains a challenge. This research aimed to determine the optimal mixture of Al50Cu10Sn5Mg20Zn10Ti5 lightweight high-entropy alloy (LHEA), B4C, and ZrO2 for the fabrication of trihybrid titanium composites via direct laser deposition. The experimental design involved a mixture design, and the experimental data were modeled and optimized to achieve the optimal performance of the trihybrid composite. The ANOVA, response surface plot, and ternary map analyses of the experimental results revealed that various combinations of reinforcement particles displayed a variety of response trends. Moreover, the analysis showed that these reinforcements contributed significantly to the magnitudes and trends of the responses. The generated models were competent for response prediction, and the best formulation consisted of 8.4% LHEA, 1.2% B4C, and 2.4% ZrO2.

]]>Mathematical and Computational Applications doi: 10.3390/mca28020053

Authors: Martin Philip Venter Izak Johannes Joubert

Soft robotics is an emerging field that leverages the compliant nature of materials to control shape and behaviour. However, designing soft robots presents a challenge, as they do not have discrete points of articulation and instead articulate through deformation of whole regions of the robot. This results in a vast, unexplored design space with few established design methods. This paper presents a practical generative design process that combines the Encapsulation, Syllabus, and Pandemonium method with a reduced-order model to produce results comparable to the existing state-of-the-art in reduced design time, while including the human designer meaningfully in the design process and facilitating the inclusion of other numerical techniques such as Markov chain Monte Carlo (MCMC) methods. Using a combination of reduced-order models, L-systems, MCMC, curve matching, and optimisation, we demonstrate that our method can produce functional 2D articulating soft robot designs in less than 1 s, a significant reduction in design time compared to monolithic methods, which can take several days. Additionally, we qualitatively show how to extend our approach to produce more complex 3D robots, such as an articulating tentacle with multiple grippers.

]]>Mathematical and Computational Applications doi: 10.3390/mca28020052

Authors: Kristina Laugksch Pieter Rousseau Ryno Laubscher

Physics-informed neural networks (PINNs) were developed to overcome the limitations associated with the acquisition of large training data sets that are commonly encountered when using purely data-driven machine learning methods. This paper proposes a PINN surrogate modeling methodology for steady-state integrated thermofluid systems modeling based on the mass, energy, and momentum balance equations, combined with the relevant component characteristics and fluid property relationships. The methodology is applied to two thermofluid systems that encapsulate the important phenomena typically encountered, namely: (i) a heat exchanger network with two different fluid streams and components linked in series and parallel; and (ii) a recuperated closed Brayton cycle with various turbomachines and heat exchangers. The results generated with the PINN models were compared to benchmark solutions generated via conventional, physics-based thermofluid process models. The largest average relative errors are 0.17% and 0.93% for the heat exchanger network and Brayton cycle, respectively. It was shown that the use of a hybrid Adam-TNC optimizer requires between 180 and 690 fewer iterations during the training process, thus providing a significant computational advantage over a pure Adam optimization approach. The resulting PINN models can make predictions 75 to 88 times faster than their respective conventional process models. This highlights the potential for PINN surrogate models as a valuable engineering tool in component and system design and optimization, as well as in real-time simulation for anomaly detection, diagnosis, and forecasting.

]]>Mathematical and Computational Applications doi: 10.3390/mca28020051

Authors: Johannes C. Joubert Daniel N. Wilke Patrick Pizette

This paper presents a GPU-based, incompressible, multiphase generalized finite difference solver for simulating multiphase flow. The method includes a dampening scheme that allows for large density ratio cases to be simulated. Two verification studies are performed by simulating the relaxation of a square droplet surrounded by a light fluid and a bubble rising in a denser fluid. The scheme is also used to simulate the collision of binary droplets at moderate Reynolds numbers (250&ndash;550). The effects of the surface tension and density ratio are explored in this work by considering cases with Weber numbers of 8 and 180 and density ratios of 2:1 and 1000:1. The robustness of the multiphase scheme is highlighted when resolving thin fluid structures arising in both high and low density ratio cases at We = 180.

]]>Mathematical and Computational Applications doi: 10.3390/mca28020050

Authors: Vuyo T. Hashe Thokozani J. Kunene

Hydrocyclones are devices used in numerous areas of the chemical, food, and mineral industries to separate fine particles. A hydrocyclone with a diameter of 50 mm was modeled using the commercial Simcenter STAR-CCM+ 13 computational fluid dynamics (CFD) simulation package. The numerical methods confirmed the results for the different parameters, such as the volume fraction properties, based on the CFD simulations. The Reynolds stress model (RSM) and the combined volume-of-fluid (VOF) and discrete element model (DEM) technique for the water and air phases were selected to evaluate semi-implicit pressure-linked equations and to combine the momentum and continuity laws to obtain derivatives of the pressure. The targeted particle sizes were in a range of 8&ndash;100 microns for a dewatering application. The depth of the vortex finder was varied between 20 mm, 30 mm, and 35 mm to observe the effects on pressure drop and separation efficiency. The water split ratio increased toward a 50% split of overflow and underflow rates as the length of the vortex finder increased, resulting in better particle separation when there is a high injection rate at the inlet. The tangential and axial velocities increased as the vortex finder length increased. As the depth of the vortex finder increased, the time for particle re-entrainment into the underflow stream increased, and the separation efficiency improved.

]]>Mathematical and Computational Applications doi: 10.3390/mca28020049

Authors: Pertti Mattila

Let A and B be Borel subsets of the Euclidean n-space with dim A + dim B &gt; n. This is a survey on the following question: what can we say about the Hausdorff dimension of the intersections A&cap;(g(B)+z) for generic orthogonal transformations g and translations by z?

]]>Mathematical and Computational Applications doi: 10.3390/mca28020048

Authors: Himani Sharma Munish Kansal Ramandeep Behl

We propose a new optimal derivative-free iterative scheme without memory for solving non-linear equations. Many iterative schemes in the literature either diverge or fail to work when f&prime;(x)=0; our proposed scheme, however, works even in these cases. In addition, we extended the same idea to iterative methods with memory with the help of self-accelerating parameters estimated from the current and previous approximations. As a result, the order of convergence increased from four to seven without any additional functional evaluations. To confirm the theoretical results, numerical examples and comparisons with some existing methods are included, which reveal that our scheme is more efficient than the existing ones. Furthermore, basins of attraction are also included to give a clear picture of the convergence of the proposed method as well as some of the existing methods.
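The fourth-order scheme itself is given in the paper; as a minimal sketch of the derivative-free idea, the classical Steffensen iteration replaces the derivative in Newton's method with a divided difference, so only evaluations of f are needed (an illustrative standard method, not the authors' proposed scheme):

```python
def steffensen(f, x0, tol=1e-12, max_iter=100):
    """Derivative-free root finding (classical Steffensen's method).

    Replaces f'(x) in Newton's method by the divided difference
    (f(x + f(x)) - f(x)) / f(x), so no derivative is evaluated."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        denom = f(x + fx) - fx
        if denom == 0:  # divided difference degenerates; stop
            break
        x = x - fx * fx / denom
    return x

# find the positive root of x^2 - 2 starting from x0 = 1
root = steffensen(lambda x: x * x - 2.0, 1.0)
```

Like the schemes discussed in the abstract, the iteration above uses two function evaluations per step and no derivative.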

]]>Mathematical and Computational Applications doi: 10.3390/mca28020047

Authors: Philip Frederik Ligthart Martin Philip Venter

This paper demonstrates the effectiveness of a hierarchical design framework in developing environment-specific behaviour for fluid-actuated soft robots. Our proposed framework employs multi-step optimisation and reduced-order modelling to reduce the computational expense associated with simulating non-linear materials used in the design process. Specifically, our framework requires the designer to make high-level decisions to simplify the optimisations, targeting simple objectives in earlier steps and more complex objectives in later steps. We present a case study, where our proposed framework is compared to a conventional direct design approach for a simple 2D design. A soft pneumatic bending actuator was designed that is able to perform asymmetrical motion when actuated cyclically. Our results show that the hierarchical framework can find almost 2.5 times better solutions in less than 3% of the time when compared to a direct design approach.

]]>Mathematical and Computational Applications doi: 10.3390/mca28020046

Authors: Rhoda Ngira Aduke Martin P. Venter Corné J. Coetzee

Corrugated paperboard is a sandwich structure composed of wavy paper (fluting) bonded between two flat paper sheets (liners). The analysis of an entire package using three-dimensional numerical finite element models is computationally expensive due to the waved geometry of the board, which requires the use of a relatively large number of elements in a simulation. Because of this, homogenisation approaches are used to derive equivalent homogeneous models with similar material properties. These techniques have been successfully implemented by various researchers to evaluate the strength of corrugated paperboard. However, studies analysing the various homogenisation techniques and their ranges of applicability are limited. This study analyses the application of three homogenisation techniques: classical laminate plate theory, first-order shear deformation theory, and the deformation energy equivalence method, in the evaluation of effective elastic material properties. In addition, inverse analysis has been applied to determine the effective properties of the board. Finite element models have been used to evaluate the accuracy of the three homogenisation techniques in comparison to the inverse method in modelling four-point bending tests, and the results are reported.

]]>Mathematical and Computational Applications doi: 10.3390/mca28020045

Authors: Anku Mona Narang Vinay Kanwar

In this paper, a new one-parameter class of fixed-point iterative methods is proposed to approximate the fixed points of contractive-type mappings. The presence of an arbitrary parameter in the proposed family increases its interval of convergence. Further, we also propose new two-step and three-step fixed-point iterative schemes. We also discuss the stability, strong convergence, and speed of convergence of the proposed methods. Furthermore, numerical experiments are performed to check the applicability of the new methods, which are compared with well-known similar methods existing in the literature.
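A one-parameter fixed-point family of this general flavor can be illustrated with the classical Mann iteration, where the parameter &alpha; blends the current iterate with the mapped point (a standard textbook scheme, not the class proposed in the paper; the test mapping cos(x) is a hypothetical example):

```python
import math

def mann_iteration(T, x0, alpha=0.5, tol=1e-10, max_iter=1000):
    """One-parameter fixed-point scheme (classical Mann iteration):
    x_{n+1} = (1 - alpha) * x_n + alpha * T(x_n).

    For a contractive mapping T, the averaging parameter alpha in (0, 1]
    controls the step and hence the interval of convergence."""
    x = x0
    for _ in range(max_iter):
        x_new = (1 - alpha) * x + alpha * T(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# fixed point of the contractive map T(x) = cos(x) (the "Dottie number")
fp = mann_iteration(math.cos, 1.0, alpha=0.7)
```

Setting alpha = 1 recovers the plain Picard iteration x_{n+1} = T(x_n).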

]]>Mathematical and Computational Applications doi: 10.3390/mca28020044

Authors: Desejo Filipeson Sozinando Bernard Xavier Tchomeni Alfayo Anyika Alugongo

Diagnosis of faults in a rotor system operating in a fluid is a complex task in the field of rotating machinery. In an ideal scenario, a forced shutdown due to rotor-stator contact failure would necessitate the replacement of the rotor or stator. However, factors such as time constraints, economic considerations, and the aging of infrastructure make it imprudent to abruptly shut down machinery that can still be operated safely. The purpose of this paper is to present an experimental study that validates the theoretical results on the dynamic behavior and friction detection using the wavelet synchrosqueezing transform (WSST) method for recurrent rotor-stator contacts in a fluid environment, as presented in a previous study. The investigation focused on the analysis of whirl orbits, shaft deflection, and frequency fluctuation during passage through critical speeds. The WSST method was used to decompose the dynamic responses of the rotor in the supercritical speed zone into several components. The variation of the high-frequency component was studied based on the instantaneous frequency (IF) fluctuation technique. Additionally, the fast Fourier transform (FFT) method, in conjunction with the WSST technique, was used to calculate the variation in the amplitude of high-order frequencies in the vibration signal spectrum. The experimental study revealed that the split in resonance caused by rubbing effects is reduced when the rotor and stator interact with an inviscid fluid. However, despite the effects of elasticity and fluid boundaries generating self-excitation at low frequencies and uneven motion due to stator clearance, the experimental results were consistent with the theoretical analysis, demonstrating the effectiveness of the WSST-based contact detection method.

]]>Mathematical and Computational Applications doi: 10.3390/mca28020043

Authors: Mopeli Khama Quinn Reynolds

Metallurgical processes are characterized by a complex interplay of heat and mass transfer, momentum transfer, and reaction kinetics, and these interactions play a crucial role in reactor performance. Integrating chemistry and transport results in stiff, non-linear equations spanning wide ranges of time and length scales, which ultimately leads to a high computational expense. The current study employs an OpenFOAM solver based on a fictitious domain method to analyze gas-solid reactions in a porous medium using hydrogen as a reducing agent. The reduction of oxides with hydrogen involves hierarchical phenomena that influence the reaction rates at various temporal and spatial scales; thus, multi-scale models are needed to accurately bridge the length scales from micro-scale to macro-scale. As a first step towards developing such capabilities, the current study analyses OpenFOAM reacting flow methods in cases related to the hydrogen reduction of iron and manganese oxides. Since reduction of the oxides of interest with hydrogen requires significant modifications to the current industrial processes, this model can aid in their design and optimization. The model was verified against experimental data, and the dynamic features of the porous medium observed as the reaction progresses are well captured by the model.

]]>Mathematical and Computational Applications doi: 10.3390/mca28020042

Authors: Kwanda Mercury Dlamini Vuyo Terrence Hashe Thokozani Justin Kunene

The study numerically investigated the noise dissipation, cavitation, output power, and energy produced by marine propellers. A Ffowcs Williams&ndash;Hawkings (FW&ndash;H) model was used to determine the effects of three different marine propellers with three to five blades and a fixed advance ratio. The large-eddy simulation (LES) model best predicted the spatial and temporal variation of the turbulent structures, better illustrating the flow physics. It was found that a high angle of incidence between the blade&rsquo;s leading edge and the water flow direction typically causes the hub vortex to cavitate. The roll-up of the cavitating tip vortex was closely related to propeller noise. The five-blade propeller was quieter under the same dynamic conditions, such as the advance ratio, than the three- or four-blade propellers.

]]>Mathematical and Computational Applications doi: 10.3390/mca28020041

Authors: Anshika Garg Shubham Gupta Nitesh Tewari Sukeshana Srivastav Arnab Chanda

Traumatic dental injuries (TDI) are frequent among individuals of all ages, with a prevalence ranging from 12% to 22%, with crown and crown&ndash;root fractures being the most common. Fragment reattachment using light-cured nanocomposites is the recommended method for the management of these fractures. Although several clinical studies have assessed the efficacy of such materials, an in-silico characterization of the effects of traumatic forces on the reattached fragments has never been performed. Hence, this study aimed to evaluate the efficacy of various adhesive materials in crown and crown&ndash;root reattachments through computational modelling. A full-scale permanent maxillary anterior tooth model was developed by precisely segmenting 3D-scanned cone beam computed tomography (CBCT) images of the pulp, root, and enamel. The full-scale 3D tooth model was then subjected to a novel numerical cutting operation to describe the crown and crown&ndash;root fractures. The fractured tooth models were then filled computationally with three commonly used filler (or adhesive) materials, namely flowable composite, resin cement, and resin adhesive, and subjected to masticatory and traumatic loading conditions. The flowable composite demonstrated a statistically significant difference and the lowest produced stresses under masticatory loading. Resin cement demonstrated reduced stress values for crown&ndash;root fractures under masticatory loading after reattachment with adhesive materials. Under traumatic loading, resin cement demonstrated lower displacements and stress values across both fracture types. The novel findings reported in this study are anticipated to assist dentists in selecting the most appropriate adhesive materials that induce the least stress on the reattached tooth when subjected to a second trauma, for both crown and crown&ndash;root fractures.

]]>Mathematical and Computational Applications doi: 10.3390/mca28020040

Authors: Carl-Hein Visser Gerhard Venter Melody Neaves

When performing a digital image correlation (DIC) measurement, multi-camera stereo-DIC is generally preferred over single-camera 2D-DIC. Unlike 2D-DIC, stereo-DIC is able to minimise the in-plane strain error that results from out-of-plane motion. This makes 2D-DIC a less viable alternative for strain measurements than stereo-DIC, despite being less financially and computationally expensive. This work, therefore, proposes a strain-gauge-based method for the compensation of errors from out-of-plane motion in 2D-DIC strain measurements on planar specimens. The method was first developed using equations for the theoretical strain error from out-of-plane motions in 2D-DIC and was then applied experimentally in tensile tests to two different dog-bone specimen geometries. The compensation method resulted in a clear reduction in the strain error in 2D-DIC. The strain-gauge-based method thus improves the accuracy of a 2D-DIC measurement, making it a more viable option for performing full-field strain measurements and providing a possible alternative in cases where stereo-DIC is not practical or is unavailable.

]]>Mathematical and Computational Applications doi: 10.3390/mca28020039

Authors: Jahnavi Merupula V. S. Vaidyanathan Christophe Chesneau

Regression models in which the response variable has a compound distribution have applications in actuarial science. For example, the aggregate claim amount in a vehicle insurance portfolio can be modeled using a compound Poisson distribution. In this paper, we propose a regression model wherein the response variable is assumed to have a compound Conway&ndash;Maxwell&ndash;Poisson (CMP) distribution. The CMP distribution is a parsimonious two-parameter generalization of the Poisson distribution that accounts for both over- and under-dispersed count data, making it more suitable for application in various fields. A two-part methodology in the framework of a generalized linear model is proposed to estimate the parameters. Additionally, a method to obtain the prediction interval of the response variable is developed. The workings of the proposed methodology are illustrated through simulated data. An application of the compound CMP regression model to real-life vehicle insurance claims data is presented.

]]>Mathematical and Computational Applications doi: 10.3390/mca28020038

Authors: Manuel Vargas-Martínez Nelson Rangel-Valdez Eduardo Fernández Claudia Gómez-Santillán María Lucila Morales-Rodríguez

Simulated annealing is a metaheuristic that balances exploration and exploitation to solve global optimization problems. However, to deal with multi- and many-objective optimization problems, this balance needs to be improved due to diverse factors such as the number of objectives. To deal with this issue, this work proposes MOSA/D, a hybrid framework for multi-objective simulated annealing based on decomposition and evolutionary perturbation functions. According to the literature, the decomposition strategy allows diversity in a population while evolutionary perturbations add convergence toward the Pareto front; however, a question should be asked: What is the effect of such components when included as part of a multi-objective simulated annealing design? Hence, this work studies the performance of the MOSA/D framework considering in its implementation two widely used perturbation operators: classical genetic operators and differential evolution. The proposed algorithms are MOSA/D-CGO, based on classical genetic operators, and MOSA/D-DE, based on differential evolution operators. The main contribution of this work is the performance analysis of MOSA/D using both perturbation operators and identifying the one most suitable for the framework. The approaches were tested using DTLZ on two and three objectives and CEC2009 benchmarks on two, three, five, and ten objectives; the performance analysis considered diversity and convergence measured through the hypervolume (HV) and inverted generational distance (IGD) indicators. The results pointed out that there is a promising improvement in performance in favor of MOSA/D-DE.

]]>Mathematical and Computational Applications doi: 10.3390/mca28020037

Authors: Aleksandr N. Rozhkov Vera V. Galishnikova

Building information systems use topological tables to implement the transition from two-dimensional line drawings of the geometry of buildings to digital three-dimensional models of linear complexes. The topological elements of the complex are named, and the topological relations of the complex are described by arranging the element names in topological tables. The efficient construction and modification of topological tables for complete buildings is investigated. The topology of a linear complex with nodes, edges, faces, and cells is described with 12 tables. Three of the tables of a complex are independent of each other and form a basis for the construction of the other tables. A highly efficient construction algorithm with complexity O(number of cells) is presented for typical buildings with an approximately constant number of edges per face and faces per cell. In practice, building designs and their digital models are frequently modified. A modification algorithm is presented whose complexity equals that of the construction algorithm. Examples illustrate that the efficient algorithms permit the replacement of the conventional focus on the topology of building components by a focus on the topology of the entire building. A set of properties of the original, which are not explicitly described by the topological tables, for example, the orientation of surfaces and multiply connected domains, are analyzed in the paper. An overview of the research dealing with the topological attributes that are not contained in topological tables concludes the paper.

]]>Mathematical and Computational Applications doi: 10.3390/mca28020036

Authors: Hector Ascencion-Mestiza Serguei Maximov Efrén Mezura-Montes Juan Carlos Olivares-Galvan Rodrigo Ocon-Valdez Rafael Escarela-Perez

The conventional methods of parameter estimation in transformers, such as the open-circuit and short-circuit tests, are not always available, especially when the transformer is already in operation and its disconnection is impossible. Therefore, alternative (non-interruptive) methods of parameter estimation have become of great importance. In this work, non-interruptive estimation of the transformer equivalent circuit parameters is presented using the following metaheuristic optimization methods: the genetic algorithm (GA), particle swarm optimization (PSO), and the gravitational search algorithm (GSA). These algorithms provide a maximum average error of 12%, which is twice as good as the results found in the literature for the estimation of the equivalent circuit parameters of transformers at a frequency of 50 Hz. This demonstrates that the proposed GA, PSO, and GSA metaheuristic optimization methods can be applied to estimate the equivalent circuit parameters of single-phase distribution and power transformers with a reasonable degree of accuracy.
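As a generic illustration of one of the metaheuristics named above, a minimal particle swarm optimization loop can be sketched in a few lines (a textbook PSO applied to a toy parameter-fit objective; the `err` function, bounds, and "measured" value are hypothetical stand-ins, not the paper's transformer formulation):

```python
import random

def pso(f, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization minimizing f over box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(a, b) for a, b in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # move and clip to the search box
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# hypothetical fit: find (R, X) whose impedance magnitude matches a measurement
measured = 13.0 ** 0.5
def err(p):
    R, X = p
    return abs((R * R + X * X) ** 0.5 - measured)

best, best_val = pso(err, [(0.0, 10.0), (0.0, 10.0)])
```

In the paper's setting the objective would instead measure the mismatch between the equivalent-circuit model and the transformer measurements.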

]]>Mathematical and Computational Applications doi: 10.3390/mca28020035

Authors: Enrique Naredo Candelaria Sansores Flaviano Godinez Francisco López Paulo Urbano Leonardo Trujillo Conor Ryan

Robotics technology has made significant advancements in various fields in industry and society. It is clear how robotics has transformed manufacturing processes and increased productivity. Additionally, navigation robotics has also been impacted by these advancements, with investors now investing in autonomous transportation for both public and private use. This research aims to explore how training scenarios affect the learning process for autonomous navigation tasks. The primary objective is to address whether the initial conditions (learning cases) have a positive or negative impact on the ability to develop general controllers. By examining this research question, the study seeks to provide insights into how to optimize the training process for autonomous navigation tasks, ultimately improving the quality of the controllers that are developed. Through this investigation, the study aims to contribute to the broader goal of advancing the field of autonomous navigation and developing more sophisticated and effective autonomous systems. Specifically, we conducted a comprehensive analysis of a particular navigation environment using evolutionary computing to develop controllers for a robot starting from different locations and aiming to reach a specific target. The final controller was then tested on a large number of unseen test cases. Experimental results provide strong evidence that the initial selection of the learning cases plays a role in evolving general controllers. This work includes a preliminary analysis of a specific set of small learning cases chosen manually, provides an in-depth analysis of learning cases in a particular navigation task, and develops a tool that shows the impact of the selected learning cases on the overall behavior of a robot&rsquo;s controller.

]]>Mathematical and Computational Applications doi: 10.3390/mca28020034

Authors: Carlos Coello Erik Goodman Kaisa Miettinen Dhish Saxena Oliver Schütze Lothar Thiele

Kalyanmoy Deb was born in Udaipur, Tripura, the smallest state of India at the time, in 1963 [...]

]]>Mathematical and Computational Applications doi: 10.3390/mca28020033

Authors: Li Dai Mi-Da Cui Xiao-Xiang Cheng

To rigorously evaluate the health of a steel bridge subjected to vehicle-induced fatigue, both a detailed numerical model and effective fatigue analysis methods are needed. In this paper, the process for establishing the structural health monitoring (SHM)-oriented finite element (FE) model and assessing the vehicle-induced fatigue damage is presented for a large, specially shaped steel arch bridge. First, the bridge is meticulously modeled using multiple FEs to facilitate the exploration of the local structural behavior. Second, manual tuning and model updating are conducted according to the modal parameters measured at the bridge&rsquo;s location. Since the numerical model comprises a large number of FEs, two surrogate-model-based methods are employed to update the model. Third, the established models are validated by using them to predict the structure&rsquo;s mode shapes and the actual structural behavior for the case in which the whole bridge is subjected to static vehicle loads. Fourth, using the numerical model, a new fatigue analysis method based on the high-cycle fatigue damage accumulation theory is employed to further analyze the vehicle-induced fatigue damage to the bridge. The results indicate that manual tuning and model updating are indispensable for SHM-oriented FE models with erroneous configurations, and one surrogate-model-based model updating method is effective. In addition, it is shown that the fatigue analysis method based on the high-cycle fatigue damage accumulation theory is applicable to real-world engineering cases.

]]>Mathematical and Computational Applications doi: 10.3390/mca28020032

Authors: Bevan I. Smith Charles Chimedza Jacoba H. Bührmann

This study critically evaluates a recent machine learning method called the X-Learner, which aims to estimate treatment effects by predicting counterfactual quantities. It uses information from the treated group to predict counterfactuals for the control group and vice versa. The problem is that previous studies have either applied it only to real-world data without knowing the ground-truth treatment effects, or have not compared it with the traditional regression methods for estimating treatment effects. This study therefore critically evaluates the method by simulating various scenarios that include observed confounding and non-linearity in the data. Although the regression-based X-Learner performs just as well as the traditional regression model, the other base learners performed worse. Additionally, when non-linearity was introduced into the data, the results of the X-Learner became inaccurate.

]]>Mathematical and Computational Applications doi: 10.3390/mca28020031

Authors: Sandra Ferreira

The rapid advances in modeling research have created new challenges and opportunities for statisticians [...]

]]>Mathematical and Computational Applications doi: 10.3390/mca28020030

Authors: Yang Zhou Xiaofu Ji

This paper is concerned with the problem of static output feedback control for a class of continuous-time nonlinear time-delay semi-Markov jump systems with incremental quadratic constraints. For a class of time-delay semi-Markov jump systems satisfying incremental quadratic constrained nonlinearity, an appropriate mode-dependent Lyapunov&ndash;Krasovskii functional is constructed. Based on the matrix transformation, projection theorem and convex set principle, the mode-dependent static output feedback control laws are designed. The feedback control law is given in the form of a linear matrix inequality, which is convenient for a numerical solution. Finally, two practical examples are given to illustrate the effectiveness and superiority of the proposed method.

]]>Mathematical and Computational Applications doi: 10.3390/mca28020029

Authors: José Alfredo Brambila-Hernández Miguel Ángel García-Morales Héctor Joaquín Fraire-Huacuja Eduardo Villegas-Huerta Armando Becerra-del-Ángel

This paper proposes a hybrid harmony search algorithm that incorporates a method of reinitializing the harmony memory using a particle swarm optimization algorithm with an improved opposition-based learning (IOBL) method to solve continuous optimization problems. This method allows the algorithm to obtain better results by increasing the search space of the solutions. This approach has been validated by comparing the performance of the proposed algorithm with that of a state-of-the-art harmony search algorithm, solving fifteen standard mathematical functions and applying the Wilcoxon non-parametric test at a 5% significance level. The state-of-the-art algorithm uses the improved opposition-based learning (IOBL) method. Computational experiments show that the proposed algorithm outperforms the state-of-the-art algorithm: in quality, it is better in fourteen of the fifteen instances, and in efficiency, it is better in seven of the fifteen instances.
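The opposition-based idea behind the reinitialization step can be illustrated generically: for each random candidate x in a box [a, b], the opposite point a + b - x is also evaluated, and the better of the pair is kept (a plain OBL sketch, not the paper's improved IOBL variant; the objective and bounds are hypothetical):

```python
import random

def obl_init(f, bounds, pop_size=20, seed=0):
    """Population initialization with opposition-based learning (OBL).

    For each random candidate x drawn in [a, b] per coordinate, the
    opposite point a + b - x is also evaluated; the better of the two
    (lower f) enters the initial population."""
    rng = random.Random(seed)
    pop = []
    for _ in range(pop_size):
        x = [rng.uniform(a, b) for a, b in bounds]
        opp = [a + b - xi for (a, b), xi in zip(bounds, x)]
        pop.append(min((x, opp), key=f))
    return pop

# hypothetical objective: shifted sphere function on [0, 10]^3
objective = lambda v: sum((c - 3.0) ** 2 for c in v)
pop = obl_init(objective, [(0.0, 10.0)] * 3)
```

Evaluating the opposite point costs one extra function evaluation per candidate but, on average, starts the search closer to the optimum.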

]]>Mathematical and Computational Applications doi: 10.3390/mca28020028

Authors: Gurjeet Singh Sonia Bhalla Ramandeep Behl

Problems such as population growth, continuous stirred tank reactors (CSTR), and ideal gases have been studied over the last four decades in the fields of medical science, engineering, and applied science, respectively. Some of the main motivations were to understand the patterns of such problems and how to obtain their solutions. With the help of applied mathematics, these problems can be converted into or modeled by nonlinear expressions with similar properties. Then, the required solution can be obtained by means of iterative techniques. In this manuscript, we propose a new iterative scheme for computing multiple roots (without prior knowledge of the multiplicity m) based on multiplicative calculus rather than standard calculus. The structure of our scheme builds on the well-known Schr&ouml;der method and retains the same convergence order. Some numerical examples are tested to find the roots of nonlinear equations, and the results are found to be competitive compared with ordinary derivative methods. Finally, the new scheme is also analyzed by the basins of attraction, which also support the theoretical aspects.
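The classical Schr&ouml;der method on which the scheme stands converges quadratically to a multiple root without knowing its multiplicity; a minimal sketch of the standard-calculus form only (not the paper's multiplicative-calculus variant):

```python
def schroder(f, df, d2f, x0, tol=1e-12, max_iter=100):
    """Classical Schröder iteration for multiple roots:

        x_{n+1} = x_n - f f' / (f'^2 - f f'')

    which keeps quadratic convergence at a root of any multiplicity m,
    with no prior knowledge of m."""
    x = x0
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        denom = dfx * dfx - fx * d2fx
        if abs(fx) < tol or denom == 0:
            break
        x = x - fx * dfx / denom
    return x

# (x - 1)^3 has a triple root at x = 1; plain Newton would crawl here
root = schroder(lambda x: (x - 1) ** 3,
                lambda x: 3 * (x - 1) ** 2,
                lambda x: 6 * (x - 1),
                x0=2.0)
```

For this particular cubic the Schröder step is exact, whereas plain Newton's method converges only linearly at a multiple root.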

]]>Mathematical and Computational Applications doi: 10.3390/mca28010027

Authors: Ricardo Pérez-Rodríguez Sergio Frausto-Hernández

The truck and trailer routing problem (TTRP) has been widely studied under different approaches due to its practical characteristics, which make its research interesting. The TTRP continues to be attractive for developing new evolutionary algorithms. This research details a new estimation of distribution algorithm coupled with a radial probability function from the hydrogen atom. Continuous values are used in the solution representation, and every value indicates, as in a hydrogen atom, the distance between the electron and the nucleus. The key point is to exploit the radial probability distribution to construct offspring and to tackle the drawbacks of estimation of distribution algorithms. Various instances and numerical experiments are presented to illustrate and validate this novel research. Based on the performance of the proposed scheme, we conclude that incorporating radial probability distributions helps to improve estimation of distribution algorithms.

]]>Mathematical and Computational Applications doi: 10.3390/mca28010026

Authors: Raktim Biswas Deepak Sharma

Multi-objective reliability-based design optimization (MORBDO) is an efficient tool for generating reliable Pareto-optimal (PO) solutions. However, generating such PO solutions requires many function evaluations for reliability analysis, thereby increasing the computational cost. In this paper, a single-loop multi-objective reliability-based design optimization formulation is proposed that approximates the reliability analysis using the Karush&ndash;Kuhn&ndash;Tucker (KKT) optimality conditions. Further, chaos control theory is used for updating the point estimated through the KKT conditions to avoid any convergence issues. In order to generate the reliable point in the feasible region, the proposed formulation also incorporates the shifting vector approach. The proposed MORBDO formulation is solved using differential evolution (DE) that uses a heuristic convergence parameter based on the hypervolume indicator for performing different mutation operators. DE incorporating the proposed formulation is tested on two mathematical examples and one engineering example. The results demonstrate the generation of a better set of reliable PO solutions using the proposed method over the double-loop variant of multi-objective DE. Moreover, the proposed method requires 6&times;&ndash;377&times; fewer function evaluations than the double-loop-based DE.

]]>Mathematical and Computational Applications doi: 10.3390/mca28010025

Authors: Suleman Nasiru Abdul Ghaniyyu Abubakari Christophe Chesneau

The usefulness of (probability) distributions in the field of biomedical science cannot be overstated. Hence, several distributions have been used in this field to perform statistical analyses and make inferences. In this study, we develop the arctan power (AP) distribution and illustrate its application using biomedical data. The distribution is flexible in the sense that its probability density function exhibits characteristics such as left-skewness, right-skewness, and J and reversed-J shapes. The characteristics of the corresponding hazard rate function also suggest that the distribution is capable of modeling data with monotonic and non-monotonic failure rates. A bivariate extension of the AP distribution is also created to model the interdependence of two random variables or pairs of data. The application reveals that the AP distribution provides a better fit to the biomedical data than other existing distributions. The parameters of the distribution can also be fairly accurately estimated using a Bayesian approach, which is also elaborated. To end the study, the quantile and modal regression models based on the AP distribution provided better fits to the biomedical data than other existing regression models.

]]>Mathematical and Computational Applications doi: 10.3390/mca28010024

Authors: Michael O. Opoku Eric N. Wiah Eric Okyere Albert L. Sackitey Emmanuel K. Essel Stephen E. Moore

We present a Caputo fractional-order mathematical model that describes the cellular infection dynamics of the Hepatitis B virus and the immune response of the body with a Holling type II functional response. We study the existence of unique positive solutions and the local and global stability of the virus-free and endemic equilibria. Finally, we present numerical results using the Adams-type predictor&ndash;corrector iterative scheme.
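For a scalar Caputo problem, the Adams-type predictor&ndash;corrector method can be sketched in its generic textbook form (the Diethelm&ndash;Ford&ndash;Freed scheme; the test equation below is a hypothetical stand-in, not the paper's Hepatitis B model):

```python
import math

def caputo_abm(f, y0, alpha, h, n_steps):
    """Adams–Bashforth–Moulton predictor–corrector for the scalar Caputo
    fractional ODE  D^alpha y = f(t, y),  y(0) = y0,  0 < alpha <= 1."""
    g1 = math.gamma(alpha + 1)
    g2 = math.gamma(alpha + 2)
    t = [j * h for j in range(n_steps + 1)]
    y = [y0]
    fv = [f(t[0], y0)]          # history of f values (memory term)
    for n in range(n_steps):
        # predictor: fractional rectangle rule over the full history
        pred = y0 + (h ** alpha / g1) * sum(
            ((n + 1 - j) ** alpha - (n - j) ** alpha) * fv[j]
            for j in range(n + 1))
        # corrector: fractional trapezoidal rule with the predicted endpoint
        a0 = n ** (alpha + 1) - (n - alpha) * (n + 1) ** alpha
        s = a0 * fv[0] + sum(
            ((n - j + 2) ** (alpha + 1) + (n - j) ** (alpha + 1)
             - 2 * (n - j + 1) ** (alpha + 1)) * fv[j]
            for j in range(1, n + 1))
        y_next = y0 + (h ** alpha / g2) * (f(t[n + 1], pred) + s)
        y.append(y_next)
        fv.append(f(t[n + 1], y_next))
    return t, y

# sanity check: alpha = 1 recovers the classical ODE y' = -y, so y(1) ~ e^{-1}
t, y = caputo_abm(lambda t, y: -y, 1.0, 1.0, 0.01, 100)
```

Note the full-history sums: the Caputo operator is non-local, so each step revisits all previous f values, which is what makes fractional models expensive to integrate.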

]]>Mathematical and Computational Applications doi: 10.3390/mca28010023

Authors: Gurjeet Singh Sonia Bhalla Ramandeep Behl

Grossman and Katz (five decades ago) suggested a new definition of differential and integral calculus which utilizes the multiplication and division operators instead of addition and subtraction. Multiplicative calculus is a vital part of applied mathematics because of its applications in areas such as biology, finance, biomedicine, and economics. Therefore, we used a multiplicative calculus approach to develop a new fourth-order iterative scheme for multiple roots based on the well-known King&rsquo;s method. In addition, we also propose a detailed convergence analysis of our scheme with the help of a multiplicative calculus approach rather than the standard one. Different kinds of numerical comparisons have been performed and analyzed. The obtained results (from line graphs, bar graphs, and tables) are very impressive compared to earlier iterative methods of the same order with the ordinary derivative. Finally, the convergence of our technique is also analyzed by the basins of attraction, which also support the theoretical aspects.

]]>Mathematical and Computational Applications doi: 10.3390/mca28010022

Authors: Junya Sato

Many approaches have been developed to solve the hand&ndash;eye calibration problem. The traditional approach involves a precise mathematical model, which has advantages and disadvantages. For example, mathematical representations can provide numerical and quantitative results to users and researchers. Thus, it is possible to explain and understand the calibration results. However, information about the end-effector, such as its position on the robot and its dimensions, is not considered in the calibration process. If there is no CAD model, additional calibration is required for accurate manipulation, especially for a handmade end-effector. A neural network-based method can be used to solve this problem. By training a neural network model using data created via the attached end-effector, additional calibration can be avoided. Moreover, it is not necessary to develop a precise and complex mathematical model. However, it is difficult to provide quantitative information because a neural network is a black box. Hence, a method that combines both advantages is proposed in this study. A mathematical model was developed and optimized using the data created by the attached end-effector. To acquire accurate data and evaluate the calibration results, a tablet computer was utilized. The established method achieved a mean positioning error of 1.0 mm.

]]>Mathematical and Computational Applications doi: 10.3390/mca28010021

Authors: Faisal Salah Abdelmgid O. M. Sidahmed K. K. Viswanathan

In this paper, numerical solutions for magneto-hydrodynamic Hiemenz fluid flow over a nonlinear stretching sheet and the Brownian motion effects of nanoparticles through a porous medium with chemical reaction and radiation are studied. The effects of thermophoresis and mass transfer at the stagnation-point flow are discussed. The plate moves in the opposite direction or in the free-stream orientation. The underlying PDEs are reshaped into a set of ordinary differential equations by employing suitable transformations. They are solved numerically using the successive linearization method, an efficient systematic process. The main goal of this study is to compare the solutions obtained using the successive linearization method for the velocity and temperature equations as the parameter m varies, thereby demonstrating its accuracy and suitability for solving nonlinear differential equations. Tables containing the results are presented for comparison. This contrast is significant because it demonstrates the accuracy with which a set of nonlinear differential equations can be solved using the successive linearization method. The resulting solution is examined and discussed with respect to a number of engineering parameters. Graphs illustrate the simulation of the distinct parameters that govern the motion factors.

]]>Mathematical and Computational Applications doi: 10.3390/mca28010020

Authors: S. Divya S. Eswaramoorthi Karuppusamy Loganathan

The main goal of the current research is to investigate the numerical computation of Ag/Al2O3 nanofluid over a Riga plate with injection/suction. The energy equation is formulated using the Cattaneo&ndash;Christov heat flux, non-linear thermal radiation, and a heat sink/source. The leading equations are non-dimensionalized by employing suitable transformations, and the numerical results are obtained using the MATLAB bvp4c technique. The variations of fluid flow and heat transfer with porosity, the Forchheimer number, radiation, suction/injection, velocity slip, and nanoparticle volume fraction are investigated. Furthermore, the local skin friction coefficient (SFC) and local Nusselt number (LNN) are also addressed. Our computational results coincide with the outcomes of previously reported studies. We noticed that the Forchheimer number, suction/injection, slip, and nanoparticle volume fraction factors slow the velocity profile. We also noted that the heat transfer gradient decreases with increasing rates of thermal radiation and convective heating. A 40% increase in the Hartmann number improves the drag force by 14% and the heat transfer gradient by 0.5%. A 20% nanoparticle volume fraction decreases the heat transfer gradient by 21% for Ag nanoparticles and by 18% for Al2O3 nanoparticles.

]]>Mathematical and Computational Applications doi: 10.3390/mca28010019

Authors: Nat Promma Nawinda Chutsagulprom

The primary objective of this article is to present an adaptive parameter VAR-KF technique (APVAR-KF) to forecast stock market performance and macroeconomic factors. The method exploits a vector autoregressive model as a system identification technique, and the Kalman filter serves as a recursive state parameter estimation tool. A further development incorporates the GARCH model to automatically quantify the observation covariance matrix in the Kalman filter step. To verify the efficiency of our proposed method, we conducted an experimental simulation applied to the main stock exchange index, real effective exchange rate and consumer price index of Thailand and Indonesia from January 1997 to May 2021. The APVAR-KF method is generally shown to have a superior performance relative to the conventional VAR(1) model and the VAR-KF model with constant parameters.
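
A minimal sketch of the recursive predict/update cycle at the heart of such methods, reduced to a scalar AR(1) state (the paper's APVAR-KF uses full VAR dynamics and a GARCH-driven observation covariance; the model, parameters, and function name below are hypothetical simplifications):

```python
import random

def kalman_filter_ar1(obs, phi, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a latent AR(1) state x_t = phi*x_{t-1} + w_t
    observed as y_t = x_t + v_t (process variance q, observation variance r)."""
    x, p = x0, p0
    out = []
    for y in obs:
        # Predict: propagate mean and variance through the AR(1) dynamics.
        x_pred, p_pred = phi * x, phi * phi * p + q
        # Update: blend the prediction with the new observation.
        k = p_pred / (p_pred + r)            # Kalman gain
        x = x_pred + k * (y - x_pred)
        p = (1.0 - k) * p_pred
        out.append(x)
    return out

# Synthetic AR(1) series observed through noise, then filtered.
rng = random.Random(0)
truth = [0.0]
for _ in range(199):
    truth.append(0.9 * truth[-1] + rng.gauss(0, 0.1))
y = [t + rng.gauss(0, 0.5) for t in truth]
xhat = kalman_filter_ar1(y, phi=0.9, q=0.01, r=0.25)
```

In the VAR case the scalars become vectors and matrices, and APVAR-KF additionally lets the GARCH model set `r` adaptively at each step.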

]]>Mathematical and Computational Applications doi: 10.3390/mca28010018

Authors: Maria Immaculate Joyce Jagan Kandasamy Sivasankaran Sivanandam

Currently, the efficiency of heat exchange is not only determined by enhancements in the rate of heat transfer but also by economic and accompanying considerations. Responding to this demand, many scientists have been involved in improving heat transfer performance, which is referred to as heat transfer enhancement, augmentation, or intensification. This study deals with the influence of velocity slip, convective boundary conditions, Joule heating, and chemical reactions on hybrid Cu&ndash;Al2O3/water nanofluid flows over a porous stretched sheet, using an adapted Tiwari&ndash;Das model. Nonlinear fundamental equations, such as those of continuity, momentum, energy, and concentration, are transmuted into non-dimensional ordinary nonlinear differential equations by similarity transformations. Numerical calculations are performed using HAM, and the outcomes are traced on graphs of velocity, temperature, and concentration. Temperature and concentration profiles are elevated as porosity increases, whereas velocity decreases. The Biot number increases the temperature profile. The rate of entropy is enhanced as the Brinkman number is raised. A decrease in the velocity is seen as the slip increases.

]]>Mathematical and Computational Applications doi: 10.3390/mca28010017

Authors: António Gaspar-Cunha Paulo Costa Francisco Monaco Alexandre Delbem

Solving real-world multi-objective optimization problems using multi-objective optimization algorithms becomes difficult when the number of objectives is high, since the types of algorithms generally used to solve these problems are based on the concept of non-dominance, which ceases to work as the number of objectives grows. This problem is known as the curse of dimensionality. Simultaneously, the existence of many objectives, a characteristic of practical optimization problems, makes choosing a solution to the problem very difficult. Different approaches are being used in the literature to reduce the number of objectives required for optimization. This work proposes a machine learning methodology, designated FS-OPA, to tackle this problem. The proposed methodology was assessed using the DTLZ benchmark problems suggested in the literature and compared with similar algorithms, showing good performance. Finally, the methodology was applied to a difficult real problem in polymer processing, demonstrating its effectiveness. The proposed algorithm has some advantages over a similar machine-learning-based algorithm in the literature (NL-MVU-PCA), namely, the possibility of establishing variable&ndash;variable and objective&ndash;variable relations (not only objective&ndash;objective), and the elimination of the need to define/choose a kernel or to optimize algorithm parameters. The collaboration with the DM(s) allows explainable solutions to be obtained.

]]>Mathematical and Computational Applications doi: 10.3390/mca28010016

Authors: Gianluigi Rozza Oliver Schütze Nicholas Fantuzzi

This Special Issue comprises the first collection of papers submitted by the Editorial Board Members (EBMs) of the journal Mathematical and Computational Applications (MCA), as well as outstanding scholars working in the core research fields of MCA [...]

]]>Mathematical and Computational Applications doi: 10.3390/mca28010015

Authors: MCA Editorial Office MCA Editorial Office

High-quality academic publishing is built on rigorous peer review [...]

]]>Mathematical and Computational Applications doi: 10.3390/mca28010014

Authors: Xilu Wang Yaochu Jin

Particle filters, also known as sequential Monte Carlo (SMC) methods, constitute a class of importance sampling and resampling techniques designed to use simulations to perform on-line filtering. Recently, particle filters have been extended for optimization by utilizing the ability to track a sequence of distributions. In this work, we incorporate transfer learning capabilities into the optimizer by using particle filters. To achieve this, we propose a novel particle-filter-based multi-objective optimization algorithm (PF-MOA) by transferring knowledge acquired from the search experience. The key insight adopted here is that, if we can construct a sequence of target distributions that can balance the multiple objectives and make the degree of the balance controllable, we can approximate the Pareto optimal solutions by simulating each target distribution via particle filters. As the importance weight updating step takes the previous target distribution as the proposal distribution and takes the current target distribution as the target distribution, the knowledge acquired from the previous run can be utilized in the current run by carefully designing the set of target distributions. The experimental results on the DTLZ and WFG test suites show that the proposed PF-MOA achieves competitive performance compared with state-of-the-art multi-objective evolutionary algorithms on most test instances.
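
The underlying idea — reweighting and resampling particles under a sequence of increasingly peaked target distributions — can be sketched for a toy single-objective case (the paper's PF-MOA designs target sequences that balance multiple objectives and transfer knowledge between runs; the function below is a hedged illustration, not the authors' algorithm):

```python
import math
import random

def pf_minimize(f, n_particles=500, n_steps=30, seed=1):
    """Minimise a 2-D function with a particle-filter-style loop:
    reweight particles under tempered targets pi_k(x) ~ exp(-beta_k * f(x)),
    resample, then jitter with a shrinking Gaussian move step."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(n_particles)]
    beta_step, sigma = 0.5, 1.0
    for _ in range(n_steps):
        # Incremental importance weights between successive targets
        # (shifted by the minimum for numerical stability).
        fvals = [f(x) for x in xs]
        fmin = min(fvals)
        weights = [math.exp(-beta_step * (v - fmin)) for v in fvals]
        # Multinomial resampling followed by a Gaussian jitter move.
        xs = [[xi + rng.gauss(0, sigma) for xi in x]
              for x in rng.choices(xs, weights=weights, k=n_particles)]
        sigma *= 0.8
    return min(xs, key=f)

best = pf_minimize(lambda v: v[0] ** 2 + v[1] ** 2)  # sphere, minimum at 0
```

PF-MOA replaces the single tempered target with a controllable family of targets whose balance between objectives traces out the Pareto front.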

]]>Mathematical and Computational Applications doi: 10.3390/mca28010013

Authors: Amar Debbouche Bhaskar Sundara Vadivoo Vladimir E. Fedorov Valery Antonov

We establish a class of nonlinear fractional differential systems with distributed time delays in the controls and impulse effects. We discuss the controllability criteria for both linear and nonlinear systems. The main results rely on a suitable Gramian matrix defined via the Mittag&ndash;Leffler function, together with the standard Laplace transform and Schauder fixed-point techniques. Further, we provide an illustrative example, supported by graphical representations, to show the validity of the obtained abstract results.

]]>Mathematical and Computational Applications doi: 10.3390/mca28010012

Authors: Santiago Sinisterra-Sierra Salvador Godoy-Calderón Miriam Pescador-Rojas

Association rule mining plays a crucial role in the medical area in discovering interesting relationships among the attributes of a data set. Traditional association rule mining algorithms such as Apriori, FP-Growth, or Eclat require considerable computational resources and generate large volumes of rules. Moreover, these techniques depend on user-defined thresholds, which can inadvertently cause the algorithm to omit some interesting rules. To address these challenges, we propose an evolutionary multi-objective algorithm based on NSGA-II to guide the mining process in a data set composed of 15.5 million records of official data describing the COVID-19 pandemic in Mexico. We tested different scenarios optimizing classical and causal estimation measures in four waves, defined as the periods of time in which the number of people with COVID-19 increased. The proposed approach generates, recombines, and evaluates patterns, focusing on recovering promising high-quality rules with actionable cause&ndash;effect relationships among the attributes, to identify which groups are more susceptible to disease or which combinations of conditions are necessary to receive certain types of medical care.
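
For reference, the classical support/confidence objectives and the Pareto filtering that a multi-objective miner approximates can be sketched as follows (the paper's algorithm evolves rules with NSGA-II and also optimizes causal estimation measures; the toy data and function names here are hypothetical):

```python
def support_confidence(rule, transactions):
    """Support and confidence of an antecedent -> consequent rule,
    with antecedent and consequent given as item sets."""
    antecedent, consequent = rule
    n_ant = sum(antecedent <= t for t in transactions)
    n_both = sum((antecedent | consequent) <= t for t in transactions)
    support = n_both / len(transactions)
    confidence = n_both / n_ant if n_ant else 0.0
    return support, confidence

def non_dominated(rules, transactions):
    """Keep rules not Pareto-dominated on (support, confidence) --
    the kind of front a multi-objective rule miner approximates."""
    scored = [(r, support_confidence(r, transactions)) for r in rules]
    return [r for r, (s, c) in scored
            if not any(s2 >= s and c2 >= c and (s2, c2) != (s, c)
                       for _, (s2, c2) in scored)]

# Toy transactions: sets of observed attributes per patient record.
transactions = [{"fever", "cough", "hospital"}, {"fever", "hospital"},
                {"cough"}, {"fever", "cough"}]
rules = [({"fever"}, {"hospital"}), ({"cough"}, {"hospital"})]
front = non_dominated(rules, transactions)
```

NSGA-II replaces the exhaustive scoring above with selection, crossover, and mutation over a rule population, which is what makes the 15.5-million-record scale tractable.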

]]>Mathematical and Computational Applications doi: 10.3390/mca28010011

Authors: Barry C. Arnold Bangalore G. Manjunath

It has been argued in Arnold and Manjunath (2021) that the bivariate pseudo-Poisson distribution will be the model of choice for bivariate data with one equidispersed marginal and the other marginal over-dispersed. This is due to its simple structure, straightforward parameter estimation and fast computation. In the current note, we introduce the effects of concomitant variables on the bivariate pseudo-Poisson parameters and explore the distributional and inferential aspects of the augmented models. We also include a small simulation study and an example of application to real-life data.

]]>Mathematical and Computational Applications doi: 10.3390/mca28010010

Authors: Hao Wang Michael Emmerich André Deutz Víctor Adrián Sosa Hernández Oliver Schütze

Recently, the Hypervolume Newton Method (HVN) has been proposed as a fast and precise indicator-based method for solving unconstrained bi-objective optimization problems with sufficiently smooth objective functions. The HVN is defined on the space of (vectorized) fixed-cardinality sets of decision space vectors for a given multi-objective optimization problem (MOP) and seeks to maximize the hypervolume indicator by adopting the Newton&ndash;Raphson method for deterministic numerical optimization. To extend its scope to non-convex optimization problems, the HVN method was hybridized with a multi-objective evolutionary algorithm (MOEA), which resulted in a competitive solver for continuous unconstrained bi-objective optimization problems. In this paper, we extend the HVN to constrained MOPs with, in principle, any number of objectives. As in the original variant, the first- and second-order derivatives of the involved functions have to be given either analytically or numerically. We demonstrate the applicability of the extended HVN on a set of challenging benchmark problems and show that the new method can readily handle equality constraints with high precision and, to some extent, inequalities as well. Finally, we use the HVN as a local search engine within an MOEA and show the benefit of this hybrid method on several benchmark problems.
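
The indicator being maximized can be computed exactly in the bi-objective case by a simple sweep; this sketch only evaluates the hypervolume under minimization and does not implement the Newton steps themselves:

```python
def hypervolume_2d(points, ref):
    """Hypervolume (dominated area) of a set of 2-D objective vectors under
    minimisation, with respect to the reference point ref."""
    # Keep points that strictly dominate ref, sweep them in ascending f1.
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                     # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

# Front {(1,3), (2,2), (3,1)} with reference (4,4): area 3 + 2 + 1 = 6.
hv = hypervolume_2d([(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)], (4.0, 4.0))
```

The HVN differentiates this quantity twice with respect to the concatenated decision vectors of the whole set and takes Newton&ndash;Raphson steps on the result.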

]]>Mathematical and Computational Applications doi: 10.3390/mca28010009

Authors: Zakaria Yaagoub Karam Allali

A three-strain SEIR epidemic model with a vaccination strategy is suggested and studied in this work. This model is represented by a system of nine nonlinear ordinary differential equations that describe the interaction between susceptible, strain-1-vaccinated, strain-1-exposed, strain-2-exposed, strain-3-exposed, strain-1-infected, strain-2-infected, strain-3-infected, and recovered individuals. We start our analysis of this model by establishing the existence, positivity, and boundedness of all solutions. The model has five equilibrium points: the disease-free equilibrium, the strain-1 endemic equilibrium, the strain-2 endemic equilibrium, the strain-3 endemic equilibrium, and the total endemic equilibrium. We establish the global stability of each equilibrium point using suitable Lyapunov functions. This stability depends on the strain-1 basic reproduction number R01, the strain-2 basic reproduction number R02, and the strain-3 basic reproduction number R03. Numerical simulations are given to confirm our theoretical results. It is shown that, in order to eradicate the infection, the basic reproduction numbers of all the strains must be less than unity.
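
The per-strain structure of such models can be illustrated by forward-Euler integration of a single-strain SEIR slice, showing how the outcome hinges on the basic reproduction number (parameter values below are hypothetical, not taken from the paper):

```python
def simulate_seir(beta, sigma, gamma, days=400, dt=0.1, i0=1e-3):
    """Forward-Euler integration of a single-strain SEIR model with
    transmission rate beta, progression rate sigma, recovery rate gamma."""
    s, e, i, r = 1.0 - i0, 0.0, i0, 0.0
    for _ in range(int(days / dt)):
        ds = -beta * s * i               # new exposures leave S
        de = beta * s * i - sigma * e    # exposed progress to infectious
        di = sigma * e - gamma * i       # infectious recover
        dr = gamma * i
        s, e, i, r = s + dt * ds, e + dt * de, i + dt * di, r + dt * dr
    return s, e, i, r

# For this slice the basic reproduction number is R0 = beta / gamma.
dies_out = simulate_seir(beta=0.1, sigma=0.2, gamma=0.2)   # R0 = 0.5 < 1
epidemic = simulate_seir(beta=0.5, sigma=0.2, gamma=0.2)   # R0 = 2.5 > 1
```

With R0 below unity the infectious fraction decays to zero, matching the eradication condition the paper proves for all three strains simultaneously.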

]]>Mathematical and Computational Applications doi: 10.3390/mca28010008

Authors: Guilherme Duarte Ana Neves António Ramos Silva

Thermography techniques are gaining popularity in structural integrity monitoring and in the analysis of mechanical systems&rsquo; behavior because they are contactless, non-intrusive, rapidly deployable, applicable to structures in harsh environments, and can be performed on-site. Moreover, the use of optical imaging techniques has grown quickly over the past several decades due to progress in digital cameras, infrared cameras, and computational power. This work focuses on thermoelastic stress analysis (TSA), and its main goal was to create a computational model based on the finite element method that simulates this technique, in order to evaluate and quantify how changes in material properties, including orthotropic ones, affect the stresses obtained with TSA. The numerical simulations were performed for two samples: a compact sample and a single lap joint. When the developed numerical model was compared with previous laboratory tests, the results showed a good representation of the stress test for both samples. The created model is applicable to various materials, including fiber-reinforced composites. This work also highlights the need to perform laboratory tests using anisotropic materials to better understand the potential of TSA and improve the developed models.

]]>Mathematical and Computational Applications doi: 10.3390/mca28010007

Authors: Juan F. Giraldo Victor M. Calo

We construct a stabilized finite element method for linear and nonlinear unsteady advection&ndash;diffusion&ndash;reaction equations using the method of lines. We propose a residual minimization strategy that uses an ad hoc modified discrete system coupling a time-marching scheme and a semi-discrete discontinuous Galerkin formulation in space. This combination delivers a stable continuous solution and an on-the-fly error estimate that robustly guides adaptivity at every discrete time. We demonstrate the method&rsquo;s performance on advection-dominated problems, showing stability in the solution and efficiency in the adaptivity strategy. We also demonstrate the method&rsquo;s robustness on the nonlinear Bratu equation in two dimensions.

]]>Mathematical and Computational Applications doi: 10.3390/mca28010006

Authors: Octavio Ramos-Figueroa Marcela Quiroz-Castellanos Efrén Mezura-Montes Nicandro Cruz-Ramírez

The Grouping Genetic Algorithm (GGA) is an extension to the standard Genetic Algorithm that uses a group-based representation scheme and variation operators that work at the group-level. This metaheuristic is one of the most used to solve combinatorial optimization grouping problems. Its optimization process consists of different components, although the crossover and mutation operators are the most recurrent. This article aims to highlight the impact that a well-designed operator can have on the final performance of a GGA. We present a comparative experimental study of different mutation operators for a GGA designed to solve the Parallel-Machine scheduling problem with unrelated machines and makespan minimization, which comprises scheduling a collection of jobs in a set of machines. The proposed approach is focused on identifying the strategies involved in the mutation operations and adapting them to the characteristics of the studied problem. As a result of this experimental study, knowledge of the problem-domain was gained and used to design a new mutation operator called 2-Items Reinsertion. Experimental results indicate that the state-of-the-art GGA performance considerably improves by replacing the original mutation operator with the new one, achieving better results, with an improvement rate of 52%.
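
One plausible reading of a reinsertion-style mutation for this problem — remove two jobs and greedily reinsert each where the machine load grows least — can be sketched as follows (the paper's exact 2-Items Reinsertion operator may differ; this interpretation, and the toy data, are assumptions for illustration):

```python
import random

def two_items_reinsertion(schedule, proc_time, rng=None):
    """Remove two random jobs and greedily reinsert each on the machine
    whose completion time grows the least.
    schedule: list of job lists, one per machine; proc_time[job][machine]."""
    rng = rng or random.Random(0)
    machines = [list(group) for group in schedule]     # copy the groups
    jobs = [j for group in machines for j in group]
    removed = rng.sample(jobs, 2)
    for group in machines:
        for j in removed:
            if j in group:
                group.remove(j)
    loads = [sum(proc_time[j][m] for j in group)
             for m, group in enumerate(machines)]
    for j in removed:
        # Pick the machine with the smallest resulting load (unrelated
        # machines: each job's time depends on where it runs).
        m = min(range(len(machines)), key=lambda k: loads[k] + proc_time[j][k])
        machines[m].append(j)
        loads[m] += proc_time[j][m]
    return machines

proc = [[2, 4], [3, 1], [5, 5], [1, 2]]   # proc[j][m]: time of job j on machine m
mutated = two_items_reinsertion([[0, 2], [1, 3]], proc)
```

A group-level operator like this preserves the grouping encoding: jobs are moved between groups as whole items, never duplicated or lost.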

]]>Mathematical and Computational Applications doi: 10.3390/mca28010005

Authors: Adel M. Al-Mahdi Mohammad M. Al-Gharabli Maher Noor Johnson D. Audu

In this paper, we study the long-time behavior of a weakly dissipative viscoelastic equation with variable exponent nonlinearity of the form u_{tt} + &Delta;^2 u &minus; &int;_0^t g(t&minus;s) &Delta;u(s) ds + a |u_t|^{n(&middot;)&minus;2} u_t &minus; &Delta;u_t = 0, where n(&middot;) is a continuous function satisfying some assumptions and g is a general relaxation function such that g&prime;(t) &le; &minus;&xi;(t) G(g(t)), where &xi; and G are functions satisfying some specific properties that will be mentioned in the paper. Depending on the nature of the decay rate of g and the variable exponent n(&middot;), we establish explicit and general decay results for the energy functional. We give some numerical illustrations to support our theoretical results. Our results improve some earlier works in the literature.

]]>Mathematical and Computational Applications doi: 10.3390/mca28010004

Authors: Ram Krishna Agbotiname Lucky Imoize Rajveer Singh Yaduvanshi Harendra Singh Arun Kumar Rana Subhendu Kumar Pani

The dielectric resonator antenna (DRA) can be modeled as a series and parallel combination of electrical networks consisting of a resistor (R), inductor (L), and capacitor (C) to address the peculiar challenges of antennas for emerging wireless communication systems operating at higher frequency ranges. In this paper, a multi-stacked DRA is proposed. The performance and characteristic features of the DRA have been analyzed by deriving the mathematical formulations for the dynamic impedance, input impedance, admittance, bandwidth, and quality factor for the fundamental and higher-order resonant modes. Specifically, the performance of the proposed multi-stacked DRA was analyzed in MATLAB and a high-frequency structure simulator (HFSS). Generally, the results indicate that high and low substrate permittivities can potentially increase and decrease the quality factor, respectively. In particular, the impedance, radiation fields, and power flow have been demonstrated using the proposed multi-stacked electrical network of R, L, and C components coupled with a suitable transformer. Overall, the proposed multi-stacked DRA network shows an improved quality factor and selectivity, while the bandwidth is reduced reasonably. The multi-stacked DRA network would find useful applications in radio-frequency wireless communication systems. Additionally, to enhance the impedance bandwidth of the DRA, the multi-stacked DRA employs ground-plane techniques with slots, a dual segment, and stacked DRAs. The performance of the multi-stacked DRA is improved by 10% compared to existing models in terms of flexibility, moderate gain, compact size, bandwidth, quality factor, resonant frequency, impedance at the resonance frequency, and the radiation pattern in the terahertz frequency range.

]]>Mathematical and Computational Applications doi: 10.3390/mca28010003

Authors: B. Rushi Kumar R. Vijayakumar A. Jancy Rani

This work analyses the effect of electromagnetic fields on cartilaginous cells in human joints and the nutrients that flow from the synovial fluid to the cartilage. The perturbation approach and the generalised dispersion model are used to solve the governing equations of momentum and mass transfer. The dispersion coefficient increases with dimensionless time, which aids in grasping the level of nutritional transport to the synovial joint. Low-molecular-weight solutes have a lower concentration distribution at the same depth in articular cartilage than high-molecular-weight solutes. Thus, diffusion dominates nutrition transport for low-molecular-weight solutes, whereas a mechanical pumping action dominates nutrition transport for high-molecular-weight solutes. The results indicate that the cells in the centre of the cartilage surface receive more nutrients during imbibition and exudation than the cells on the periphery, and that the earliest indications of cartilage degradation emerge in the uninflected regions. As a result, cartilage nutrition is considered essential to joint mobility. The model also predicts that, as the viscoelastic parameter increases, the concentration in the articular cartilage diminishes, so the cartilage cells receive less nutrition, which might lead to harmful effects. The dispersion coefficient and mean concentration for distinct factors, such as the Hartmann number, the porous parameter, and the viscoelastic parameters of gel formation, have been computed and illustrated graphically.

]]>Mathematical and Computational Applications doi: 10.3390/mca28010002

Authors: Vinodh Srinivasa Reddy Jagan Kandasamy Sivasankaran Sivanandam

The current study used a novel Casson model to investigate hybrid Al2O3-Cu/ethylene glycol nanofluid flow over a moving thin needle under MHD, Dufour&ndash;Soret effects, and thermal radiation. By utilizing the appropriate transformations, the governing partial differential equations are transformed into ordinary differential equations, which are then solved analytically using HAM. Furthermore, we discuss the velocity, temperature, and concentration profiles for various values of the governing parameters. The skin friction coefficient increases by up to 45% as the Casson parameter is raised by up to 20%, and the heat transfer rate also increases with the inclusion of nanoparticles. Additionally, the local skin friction, local Nusselt number, and local Sherwood number for many parameters are examined in this article.

]]>Mathematical and Computational Applications doi: 10.3390/mca28010001

Authors: Xiaofu Ji Xueqing Yan

The problem of finite-time static output feedback H&infin; control for a class of discrete-time singular Markov jump systems is studied in this paper. With consideration of network transmission delays and event-triggered schemes, a closed-loop model of the discrete-time singular Markov jump system is established under the static output feedback control law, and a sufficient condition is given to guarantee that this system is regular, causal, and finite-time bounded and satisfies the given H&infin; performance. Based on a matrix decomposition algorithm, the design of the output feedback controller is reduced to finding a feasible solution of a set of strict matrix inequalities. A numerical example is presented to show the effectiveness of the presented method.

]]>Mathematical and Computational Applications doi: 10.3390/mca27060112

Authors: Ankur Sinha Jyrki Wallenius

Most of the practical applications that require optimization often involve multiple objectives. These objectives, when conflicting in nature, pose both optimization as well as decision-making challenges. An optimization procedure for such a multi-objective problem requires computing (computer-based search) and decision making to identify the most preferred solution. Researchers and practitioners working in various domains have integrated computing and decision-making tasks in several ways, giving rise to a variety of algorithms to handle multi-objective optimization problems. For instance, an a priori approach requires formulating (or eliciting) a decision maker&rsquo;s value function and then performing a one-shot optimization of the value function, whereas an a posteriori decision-making approach requires a large number of diverse Pareto-optimal solutions to be available before a final decision is made. Alternatively, an interactive approach involves interactions with the decision maker to guide the search towards better solutions (or the most preferred solution). In our tutorial and survey paper, we first review the fundamental concepts of multi-objective optimization. Second, we discuss the classic interactive approaches from the field of Multi-Criteria Decision Making (MCDM), followed by the underlying idea and methods in the field of Evolutionary Multi-Objective Optimization (EMO). Third, we consider several promising MCDM and EMO hybrid approaches that aim to capitalize on the strengths of the two domains. We conclude with discussions on important behavioral considerations related to the use of such approaches and future work.

]]>Mathematical and Computational Applications doi: 10.3390/mca27060110

Authors: Manoj Kumar Narayanaswamy Jagan Kandasamy Sivasankaran Sivanandam

The impacts of Stefan blowing along with slip and Joule heating on hybrid nanofluid (HNF) flow past a shrinking cylinder are investigated in the presence of thermal radiation. Using suitable transformations, the governing equations are converted into ODEs, and the MATLAB tool bvp4c is used to solve the resulting equations. As Stefan blowing increases, the temperature and concentration profiles are accelerated but the velocity profile diminishes; the heat transfer rate improves by up to 25% as thermal radiation increases. The mass transfer rate diminishes as Stefan blowing increases. The Sherwood number, the Nusselt number, and the skin friction coefficient are tabulated, and graphs are also plotted. The outcomes are thoroughly discussed.

]]>Mathematical and Computational Applications doi: 10.3390/mca27060109

Authors: Xiaoqing Zhao Lei Pang Lianming Wang Sen Men Lei Yan

This paper aimed to combine hyperspectral imaging (378&ndash;1042 nm) and a deep convolutional neural network (DCNN) to rapidly and non-destructively detect and predict the viability of waxy corn seeds. Different viability levels were set by artificial aging (0, 3, 6, and 9 days), and spectral data for the first 10 h of seed germination were continuously collected. Bands that were significantly correlated (SC) with the moisture, protein, starch, and fat content of the seeds were selected, and another optimal combination was extracted using a successive projection algorithm (SPA). The support vector machine (SVM), k-nearest neighbor (KNN), random forest (RF), and deep convolutional neural network (DCNN) approaches were used to establish the viability detection and prediction models. During detection, as more aging levels were added, the recognition performance of the first three methods decreased, while that of the DCNN method remained relatively stable (always above 95%). When using only the first 2.5 h of data, the prediction accuracy was generally higher than that of the detection model. Among the methods, SVM + full band improved the most, while DCNN + full band was the highest, reaching 98.83% accuracy. These results indicate that the combined use of hyperspectral imaging technology and the DCNN method is conducive to the rapid detection and prediction of seed viability.
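
The correlation-based band screening step can be sketched as follows (the threshold, synthetic data, and function names are hypothetical; the paper's SC selection applies significance tests to real spectra):

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    sy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy)

def select_correlated_bands(spectra, target, threshold=0.6):
    """Indices of bands whose |Pearson r| with a reference property
    (e.g. moisture content) meets the threshold. spectra: rows = samples."""
    n_bands = len(spectra[0])
    return [b for b in range(n_bands)
            if abs(pearson([row[b] for row in spectra], target)) >= threshold]

# Synthetic check: bands 0 and 2 track the property, band 1 is noise.
rng = random.Random(7)
target = [rng.gauss(0, 1) for _ in range(100)]
spectra = [[2 * t + rng.gauss(0, 0.01),   # band 0: tracks the property
            rng.gauss(0, 1),              # band 1: unrelated noise
            -t + rng.gauss(0, 0.01)]      # band 2: inversely related
           for t in target]
bands = select_correlated_bands(spectra, target)
```

In the paper this screening is one of two routes to a reduced band set, the other being the successive projection algorithm.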

]]>