Mathematics doi: 10.3390/math12060900
Authors: J. De Curtò I. De Zarzà
The advent of space exploration missions, especially those aimed at establishing a sustainable presence on the Moon and beyond, necessitates the development of efficient propulsion and mission planning techniques. This study presents a comprehensive analysis of chemical and electric propulsion systems for spacecraft, focusing on optimizing propellant distribution for missions involving transfers from Low-Earth Orbit (LEO) to Geostationary Orbit (GEO) and the Lunar surface. Using mathematical modeling and optimization algorithms, we calculate the delta-v requirements for key mission segments and determine the propellant mass required for each propulsion method. The results highlight the trade-offs between the high thrust of chemical propulsion and the high specific impulse of electric propulsion. An optimization model is developed to minimize the total propellant mass, considering a hybrid approach that leverages the advantages of both propulsion types. This research contributes to the field of aerospace engineering by providing insights into propulsion system selection and mission planning for future exploration missions to the Moon, Mars, and Venus.
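The propellant trade-off described in this abstract follows from the Tsiolkovsky rocket equation; a minimal sketch is given below (the delta-v, masses, and specific impulses are illustrative assumptions, not the paper's mission figures):

```python
import math

def propellant_mass(m_final, delta_v, isp, g0=9.80665):
    """Propellant needed for a burn of delta_v (m/s), from the Tsiolkovsky
    rocket equation dv = Isp * g0 * ln(m0 / mf), solved for m0 - mf."""
    return m_final * (math.exp(delta_v / (isp * g0)) - 1.0)

# Illustrative figures only: a ~3.9 km/s LEO-to-GEO transfer for a
# spacecraft of 2000 kg final (dry + payload) mass.
chem = propellant_mass(2000.0, 3900.0, isp=320.0)   # chemical, Isp ~ 320 s
elec = propellant_mass(2000.0, 3900.0, isp=3000.0)  # electric, Isp ~ 3000 s
```

The electric option needs roughly an order of magnitude less propellant for the same delta-v, at the cost of far lower thrust and longer transfer times — exactly the trade-off the hybrid optimization model weighs.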
Mathematics doi: 10.3390/math12060899
Authors: Zhengwei Shen Ran Xu Yongquan Zhang Feiwei Qin Ruiquan Ge Changmiao Wang Masahiro Toyoura
The advent of deep learning has provided solutions to many challenges posed by the Internet. However, efficient localization and recognition of vulgar segments within videos remain formidable tasks. This difficulty arises from the blurring of spatial features in vulgar actions, which can render them indistinguishable from general actions. Furthermore, issues of boundary ambiguity and over-segmentation complicate the segmentation of vulgar actions. To address these issues, we present the Boundary-Match U-shaped Temporal Convolutional Network (BMUTCN), a novel approach for the segmentation of vulgar actions. The BMUTCN employs a U-shaped architecture within an encoder–decoder temporal convolutional network to bolster feature recognition by leveraging the context of the video. Additionally, we introduce a boundary-match map that fuses action boundary information with greater precision for frames that exhibit ambiguous boundaries. Moreover, we propose an adaptive internal block suppression technique, which substantially mitigates over-segmentation errors while preserving accuracy. Our methodology, tested across several public datasets as well as a bespoke vulgar dataset, has demonstrated state-of-the-art performance on the latter.
Mathematics doi: 10.3390/math12060898
Authors: Bobo Li Enze Gao Jun Yin Xiaodan Li Guang Yang Qi Liu
Laser deposition manufacturing (LDM) has drawn unprecedented attention for its advantages in manufacturing large-scale and complex metal components. During the LDM process, a large thermal gradient is generated due to thermal cycling and heat accumulation. As a result, large residual stresses and deformations form in LDM metal components, degrading their dimensional accuracy. To control deformation and increase dimensional accuracy, deformation prediction for metal components is highly valuable and provides practical guidance. However, the traditional thermoelastic–plastic method can only predict deformation for small-scale LDM metal components; because of its low computational efficiency, it can hardly meet the deformation prediction demands of large-scale metal components. Based on feature partitioning and the inherent strain method, a rapid deformation prediction method for large-scale metal components is proposed in this manuscript. First, to solve the problem of poor formation-quality consistency caused by the randomness of the partition process, the partitioning process was established according to typical geometric features. Second, the inherent strain values for different partitions were obtained by considering the effects of the extraction method, mesh size, equivalent value layer, and partition size on the inherent strain values. Then, using the inherent strain method, the deformation of large-scale components was predicted rapidly. Comparing the simulation results with the experimental results shows that the deformation predicted by the proposed method is consistent with the deformations predicted by the traditional thermoelastic–plastic method and observed experimentally. Notably, when the proposed method is applied to predict the deformation of LDM metal components, computational efficiency improves by a factor of 27.25 compared with the conventional thermoelastic–plastic method.
Mathematics doi: 10.3390/math12060897
Authors: Vera Mirović Branimir Kalaš Nada Milenković Jelena Andrašić Miloš Đaković
The aim of this study is to analyze which factors affect the profitability of banks in the eurozone and to make recommendations to help banks in particular eurozone countries achieve higher profitability. Banks operating in the eurozone are distinctive in that they operate under a single monetary policy. The main purpose of the profitability analysis is to identify the main bank-specific and macroeconomic determinants and to help bank management better understand their influence when measuring and evaluating bank profitability. For this research, we analyze the impact of bank-specific determinants (NPL, CIR, NIM, NIF and NIT) and macroeconomic determinants (GDP, INF, UNM and DEBT) on bank profitability in the eurozone over the period 2015–2020 using a random effects model, a fixed effects model, and the generalized method of moments (GMM). The empirical analysis uses quarterly data series from Eurostat for eighteen eurozone countries. We find that, at the eurozone level, the NPL ratio and the cost-to-income ratio have a negative impact on banks' profitability, while the net interest income to operating income, the net income from trading assets to operating income, and the net fee and commission income to operating income have a positive impact. Among the macroeconomic variables, only GDP shows a positive impact, while the inflation rate, the unemployment rate, and gross government debt show a negative impact on banks' profitability. The main contribution of this study lies in applying different panel techniques with two uncommonly used macroeconomic variables, the unemployment rate and the debt ratio. The country-level results differ from country to country, and these findings can guide national policy makers on how to enhance banks' profitability.
Mathematics doi: 10.3390/math12060896
Authors: Charles Castaing Christiane Godet-Thobie Manuel D. P. Monteiro Marques
This paper deals with the study of bounded variation (BV) solutions to evolution inclusions coupled with a time- and state-dependent maximal monotone operator. Several problems are studied: existence of solutions, uniqueness of the solution, and existence of periodic and bounded variation right continuous (BVRC) solutions. Second-order evolution inclusions and fractional (Caputo and Riemann–Liouville) differential inclusions are also considered. A result for the Skorohod problem driven by a time- and space-dependent operator under a rough signal with a Volterra integral perturbation in the BVRC setting is given. The paper finishes with some results on fractional differential inclusions under rough signals and Young integrals. Many of the given results are novel.
Mathematics doi: 10.3390/math12060895
Authors: Andrey B. Muravnik Grigorii L. Rossovskii
We study the Cauchy problem for differential–difference parabolic equations with potentials undergoing translations with respect to the spatial independent variable. Such equations are used to model various phenomena not covered by the classical theory of differential equations (such as nonlinear optics, nonclassical diffusion, multilayer plates and envelopes, and others). From the viewpoint of pure theory, they are important due to crucially new effects that do not arise in the case of differential equations and due to the fact that a number of classical methods, tools, and approaches turn out to be inapplicable in the nonlocal theory. The qualitative novelty of our investigation is that the initial-value function is assumed to be summable; earlier, only the case of bounded (essentially bounded) initial-value functions was investigated. For the prototype problem (with a single spatial variable and a single nonlocal term in the equation), we construct the integral representation of a solution and show its smoothness in the open half-plane. Further, we find a condition binding the coefficient of the nonlocal potential and the length of its translation that guarantees the uniform decay (weighted decay) of the constructed solution as time increases unboundedly. The rate of this decay (weighted decay) is estimated as well.
Mathematics doi: 10.3390/math12060894
Authors: Yange Wang Xixian Bai
This study provides an efficient linearized difference algorithm for a diffusive Sel'kov–Schnakenberg system. The algorithm is developed using a finite difference method that relies on a three-level linearization approach. The boundedness, existence, and uniqueness of the solution of the proposed algorithm are proved. Numerical experiments not only validate the accuracy of the algorithm but also show that it preserves the Turing patterns.
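For readers unfamiliar with the model, a minimal explicit finite-difference step for the one-dimensional, periodic Schnakenberg reaction–diffusion system is sketched below. This is a plain explicit Euler scheme for illustration — not the paper's three-level linearized algorithm — and the parameter values are assumptions:

```python
import numpy as np

def schnakenberg_step(u, v, dt, dx, Du, Dv, a, b):
    """One explicit Euler step of u_t = Du*u_xx + a - u + u^2 v,
    v_t = Dv*v_xx + b - u^2 v on a periodic 1D grid."""
    lap = lambda w: (np.roll(w, 1) + np.roll(w, -1) - 2.0 * w) / dx**2
    un = u + dt * (Du * lap(u) + a - u + u**2 * v)
    vn = v + dt * (Dv * lap(v) + b - u**2 * v)
    return un, vn

# The homogeneous steady state is u = a + b, v = b / (a + b)**2; Turing
# patterns emerge when it is perturbed and Dv is sufficiently larger than Du.
```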
Mathematics doi: 10.3390/math12060893
Authors: Lucía Inglada-Pérez Sandra González y Gil
The existence of chaos is particularly relevant, as the identification of a chaotic behavior in a time series could lead to reliable short-term forecasting. This paper evaluates the existence of nonlinearity and chaos in the underlying process of the spot prices of the Spanish electricity market. To this end, we used daily data spanning from 1 January 2013, to 31 March 2021 and we applied a comprehensive framework that encompassed a wide range of techniques. Nonlinearity was analyzed using the BDS method, while the existence of a chaotic structure was studied through Lyapunov exponents, recurrence plots, and quantitative recurrence analysis. While nonlinearity was detected in the underlying process, conclusive evidence supporting chaos was not found. In addition, the generalized autoregressive conditional heteroscedastic (GARCH) model accounts for part of the nonlinear structure that is unveiled in the electricity market. These findings hold substantial value for electricity market forecasters, traders, producers, and market regulators.
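The largest Lyapunov exponent is the standard chaos diagnostic used in such studies; a toy estimate on the logistic map (a stand-in series chosen for illustration, not electricity price data) shows how a positive exponent separates chaotic from periodic regimes:

```python
import math

def lyapunov_logistic(r, x0=0.4, n=5000, burn=1000):
    """Estimate the Lyapunov exponent of the logistic map f(x) = r*x*(1-x)
    as the orbit average of log|f'(x)| after a burn-in transient."""
    x = x0
    for _ in range(burn):
        x = r * x * (1 - x)
    s = 0.0
    for _ in range(n):
        s += math.log(abs(r * (1 - 2 * x)))  # log of local stretching rate
        x = r * x * (1 - x)
    return s / n
```

A positive value (e.g. at r = 3.9) signals sensitive dependence on initial conditions, while a negative value (e.g. at r = 3.2, a period-2 regime) signals regular dynamics.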
Mathematics doi: 10.3390/math12060892
Authors: Dmitriy Kvitko Vyacheslav Rybin Oleg Bayazitov Artur Karimov Timur Karimov Denis Butusov
Developing efficient path-planning algorithms is an essential topic in modern robotics and control theory. Autonomous rovers and wheeled and tracked robots require path generators that can efficiently cover the explorable space with minimal redundancy. In this paper, we present a new path-planning algorithm based on the chaotic behavior of the Courbage–Nekorkin neuron model with a coverage control parameter. Our study aims to reduce the number of iterations required to cover the chosen investigated area, which is a typical efficiency criterion for this class of algorithms. To achieve this goal, we implemented a pseudorandom bit generator (PRBG) based on a Courbage–Nekorkin chaotic map, which demonstrates chaotic behavior and successfully passes all statistical tests for randomness. The proposed PRBG generates a bit sequence that can be used to move the tracked robot in four or eight directions in an operation area of arbitrary size. Several statistical metrics were applied to evaluate the algorithm’s performance, including the percentage of coverage of the study area and the uniformity of coverage. The performance of several competing path-planning algorithms was analyzed using the chosen metrics when exploring two test areas of the sizes 50 × 50 cells and 100 × 100 cells, respectively, in four and eight directions. The experimental results indicate that the proposed algorithm is superior compared to known chaotic path-planning methods, providing more rapid and uniform coverage with the possibility of controlling the covered area using tunable parameters. In addition, this study revealed the high dependence of the coverage rate on the starting point. To investigate how the coverage rate depends on the choice of chaotic map, we implemented six different PRBGs using various chaotic maps. The obtained results can be efficiently used for solving path-planning tasks in both real-life and virtual (e.g., video games) applications.
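A minimal sketch of the PRBG-driven coverage idea follows, with a logistic map standing in for the Courbage–Nekorkin neuron model; the map, area size, and bit-to-move encoding are all assumptions for illustration:

```python
def chaotic_bits(n, r=3.99, x0=0.613):
    """Threshold-based PRBG: iterate a chaotic logistic map and emit
    one bit per iteration depending on which half of [0,1] x falls in."""
    x, bits = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        bits.append(1 if x >= 0.5 else 0)
    return bits

def walk(bits, size=50):
    """Decode bit pairs into 4-direction moves on a size x size torus
    and return the fraction of cells visited (the coverage metric)."""
    moves = {0: (0, 1), 1: (0, -1), 2: (1, 0), 3: (-1, 0)}
    pos, visited = (size // 2, size // 2), set()
    for i in range(0, len(bits) - 1, 2):
        visited.add(pos)
        d = moves[bits[i] * 2 + bits[i + 1]]
        pos = ((pos[0] + d[0]) % size, (pos[1] + d[1]) % size)
    return len(visited) / size**2
```

Each bit pair selects one of four directions; the coverage fraction returned by `walk` is the kind of metric the paper optimizes, and swapping in a different chaotic map changes the coverage behavior, as the study's six-map comparison investigates.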
Mathematics doi: 10.3390/math12060891
Authors: Ferenc Szidarovszky Akio Matsumoto
An n-person cooperative oligopoly is considered without product differentiation. It is assumed that the firms know the unit price function but have no access to the cost functions of their competitors. From market data, they have information about the industry output. The firms want to find the output levels that guarantee maximum industry profit. First, the existence of a unique maximizer is proven; however, the firms cannot determine it directly because they lack knowledge of the cost functions. Instead, a dynamic model is constructed that is asymptotically stable under realistic conditions, with state trajectories converging to the optimum output levels of the firms. Three models are constructed: in the first, no time delay is assumed; in the second, the firms face an information delay on the industry output; and in the third, an information delay is additionally assumed on the firms' own output levels. The stability of the resulting no-delay, one-delay, and two-delay dynamics is examined.
Mathematics doi: 10.3390/math12060890
Authors: Bogar Díaz Eduardo J. S. Villaseñor Diana Zomeño Salas
We perform a Hamiltonian analysis of unimodular gravity in its first-order formulation, specifically a modification of the Holst action. In order to simplify the analysis, prior studies on this theory have introduced (for several reasons) additional elements, such as parametrization, complex fields, or considering the Barbero–Immirzi parameter as imaginary. We show that, by using a geometric implementation of the Dirac algorithm, a comprehensive analysis of the theory can be conducted without relying on these additional ingredients. The resulting theory reproduces the behavior of metric unimodular gravity.
Mathematics doi: 10.3390/math12060889
Authors: Trond Steihaug
We study the square root computations of Leonardo Fibonacci (Leonardo of Pisa) in his manuscripts Liber Abaci from c. 1202 and c. 1228 and De Practica Geometrie from c. 1220. In these manuscripts, Fibonacci systematically describes finding the integer part of the square root of an integer in numerous examples with three to seven decimal digits. The results of these examples are summarized in a table in the paper. Liber Abaci also describes in detail finding an approximation to the fractional part of the square root. However, in other examples in Liber Abaci and De Practica Geometrie, only the approximate values of the fractional part of the square roots are stated. This paper explores these approximate values further, using techniques such as reverse engineering. Contrary to many claims that Fibonacci also used other methods or approximations, we show that all examples can be explained using one digit-by-digit method to compute the integer part of the square root and one approximation scheme for the fractional part. Further, it is shown that the approximation scheme is tied to the method used to compute the integer part of the square root.
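A digit-by-digit computation of the integer part can be restated in modern form as follows; this is a generic digit-by-digit scheme of the same family, not a transcription of Fibonacci's exact procedure:

```python
def isqrt_digit_by_digit(n):
    """Integer square root computed one decimal digit at a time.
    Returns (root, remainder) with root**2 + remainder == n."""
    digits = str(n)
    if len(digits) % 2:            # pad so digit pairs align
        digits = "0" + digits
    root, rem = 0, 0
    for i in range(0, len(digits), 2):
        rem = rem * 100 + int(digits[i:i + 2])
        d = 9                      # largest d with (20*root + d) * d <= rem
        while (20 * root + d) * d > rem:
            d -= 1
        rem -= (20 * root + d) * d
        root = root * 10 + d
    return root, rem

# The fractional part can then be approximated from the remainder, e.g.
# sqrt(n) ~ root + rem / (2 * root), a classical first-order correction.
```

For example, `isqrt_digit_by_digit(1234)` yields root 35 with remainder 9, since 35² = 1225 and 1234 − 1225 = 9.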
Mathematics doi: 10.3390/math12060888
Authors: Yu Ni Shufen Dai Shuaipeng Yuan Bailin Wang Zhuolun Zhang
This paper investigates a two-machine group scheduling problem with sequence-independent setup times and round-trip transportation times, which is derived from the production management requirements of modern steel manufacturing enterprises. The objective is to minimize the makespan. Addressing limitations in prior studies, we consider a critical but largely ignored transportation method, namely round-trip transportation, and restricted transporter capacity between machines. To solve this problem, a mixed-integer programming model is first developed. Then, the problem complexity is analyzed for situations with both single and unlimited transporters. For the NP-hard case of a single transporter, we design an efficient two-stage heuristic algorithm with proven acceptable solution quality bounds. Extensive computational experiments based on steel plant data demonstrate the effectiveness of our approach in providing near-optimal solutions, and the maximum deviation between our algorithm and the optimal solution is 1.38%. This research can provide an operable optimization method that is valuable for group scheduling and transportation scheduling.
Mathematics doi: 10.3390/math12060887
Authors: Xu Tian Weisheng Wang Liang Zou Shuo Zhai Bin Hai Rui Wang
Distributed renewable sources are among the most promising contributors to DC microgrids for reducing carbon emissions and fuel consumption. Although the battery energy storage system (BESS) is widely applied to compensate for the power imbalance between distributed generators (DGs) and loads, the impacts of disturbances, DGs, constant power loads (CPLs) and cyber attacks on this system have not been considered simultaneously. Motivated by this, a distributed fuzzy output consensus control strategy is proposed to realize accurate current sharing and normal operation in the presence of denial-of-service (DoS) attacks and false data injection (FDI) attacks. First, the full model of the BESS in DC microgrids, incorporating disturbance terms, DGs, CPLs and resistive loads, is built. This model can be transformed into a standard linear heterogeneous multi-agent system with disturbances, which lays the foundation for the control strategy that follows. Then, models of FDI and DoS attacks are built. Meanwhile, a fuzzy logic controller (FLC) is applied to reduce the communication burden among batteries. On this basis, a distributed output consensus fuzzy control strategy is proposed to realize accurate current sharing among batteries. Moreover, the behavior of the system under the proposed control is analyzed in different cases. Finally, the feasibility of the proposed control strategy is verified by numerical simulation and experimental results.
Mathematics doi: 10.3390/math12060886
Authors: Yajun Zeng Jun Wang Shaoming Wei Chi Zhang Xuan Zhou Yingbin Lin
Spatial registration is a prerequisite for data fusion. Existing methods primarily focus on similar sensor scenarios and rely on accurate data association assumptions. To address the heterogeneous sensor registration in complex data association scenarios, this paper proposes a Gaussian mixture probability hypothesis density (GM-PHD)-based algorithm for heterogeneous sensor bias registration, accompanied by an adaptive measurement iterative update algorithm. Firstly, by constructing augmented target state motion and measurement models, a closed-form expression for prediction is derived based on Gaussian mixture (GM). In the subsequent update, a two-level Kalman filter is used to achieve an approximate decoupled estimation of the target state and measurement bias, taking into account the coupling between them through pseudo-likelihood. Notably, for heterogeneous sensors that cannot directly use sequential update techniques, sequential updates are first performed on sensors that can obtain complete measurements, followed by filtering updates using extended Kalman filter (EKF) sequential update techniques for incomplete measurements. When there are differences in sensor quality, the GM-PHD fusion filter based on measurement iteration update is sequence-sensitive. Therefore, the optimal subpattern assignment (OSPA) metric is used to optimize the fusion order and enhance registration performance. The proposed algorithms extend the multi-target information-based spatial registration algorithm to heterogeneous sensor scenarios and address the impact of different sensor-filtering orders on registration performance. Our proposed algorithms significantly improve the accuracy of bias estimation compared to the registration algorithm based on significant targets. Under different detection probabilities and clutter intensities, the average root mean square error (RMSE) of distance and angular biases decreased by 11.8% and 8.6%, respectively.
Mathematics doi: 10.3390/math12060885
Authors: Shujin Cao Peng Chen Guangyin Lu Yihuai Deng Dongxin Zhang Xinyue Chen
For regional or even global geophysical problems, the geophysical model cannot be approximated as a plane; its curvature must be considered. Tesseroids can fit the curvature, but their shapes vary from almost rectangular at the equator to almost triangular at the poles, i.e., degradation phenomena. Unlike other spherical discrete grids (e.g., square, triangular, and rhombic grids) that can fit the curvature, the Discrete Global Grid System (DGGS) grid can not only fit the curvature but also effectively avoid degradation phenomena at the poles. In addition, since it has only edge-adjacent grids, the DGGS grid has consistent adjacency and excellent angular resolution. Hence, DGGS grids are the best choice for discretizing the sphere into cells with an approximate shape and continuous scale. Compared with the tesseroid, which has no analytical solution but a well-defined integral limit, the DGGS cell (a prism obtained from DGGS grids) has neither an analytical solution nor a fixed integral limit. Therefore, based on the isoparametric transformation, the non-regular DGGS cell in the system coordinate system is transformed into a regular hexagonal prism in the local coordinate system, and the DGGS-based forward modeling algorithm for the gravitational field is realized in the spherical coordinate system. Different coordinate systems differ in the integral kernels of the gravity fields. In the current literature, forward modeling research on polyhedra (the DGGS cell is a polyhedral cell) is mostly concentrated in the Cartesian coordinate system. Therefore, the reliability of the DGGS-based forward modeling algorithm is verified using the tetrahedron-based forward modeling algorithm and the tesseroid-based forward modeling algorithm with tiny tesseroids. The numerical results show that if the distance from observations to sources is too small, the corresponding gravity field forward modeling results may have ambiguous values. Therefore, such minimal observation distances are not recommended in practical applications.
Mathematics doi: 10.3390/math12060882
Authors: Ying-Ying Yu Xin Li Ye Ji
Cubic Bézier curves are widely used in computer graphics and geometric modeling, favored for their intuitive design and ease of implementation. However, self-intersections within these curves can pose significant challenges in both geometric modeling and analysis. This paper presents a comprehensive approach to detecting and computing self-intersections of cubic Bézier curves. We introduce an efficient algorithm that leverages both the geometric properties of Bézier curves and numerical methods to accurately identify intersection points. The self-intersection problem of cubic Bézier curves is first transformed into a quadratic problem by eliminating trivial solutions. Subsequently, this quadratic system is converted into a linear system that can be easily analyzed and solved. Finally, the parameter values corresponding to the self-intersection points are computed through the solution of the linear system. The proposed method is designed to be robust and computationally efficient, making it suitable for real-time applications.
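For contrast with the algebraic reduction described above, a brute-force numerical baseline for locating a self-intersection is sketched below; the sampling density and the example control points are assumptions, and the paper's method avoids this quadratic search entirely:

```python
import numpy as np

def bezier(P, t):
    """Evaluate a cubic Bezier with control points P (shape 4x2) at t."""
    t = np.asarray(t, dtype=float)[..., None]
    return ((1 - t) ** 3 * P[0] + 3 * (1 - t) ** 2 * t * P[1]
            + 3 * (1 - t) * t ** 2 * P[2] + t ** 3 * P[3])

def self_intersection(P, n=800):
    """Find s < t with B(s) ~ B(t) by dense sampling, skipping
    near-diagonal pairs so the trivial s == t solutions are excluded."""
    ts = np.linspace(0.0, 1.0, n)
    pts = bezier(P, ts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    iu, ju = np.triu_indices(n, k=n // 20)   # require |s - t| >~ 0.05
    k = np.argmin(d[iu, ju])
    return ts[iu[k]], ts[ju[k]], d[iu[k], ju[k]]
```

For the looping curve with control points (0,0), (2,3), (−1,3), (1,0), the exact crossing parameters are (5 ∓ √15)/10 ≈ 0.1127 and 0.8873, which the search recovers to grid accuracy.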
Mathematics doi: 10.3390/math12060884
Authors: Lívia Mariana Lopes de Souza Torres Francisco S. Ramos
Higher education evaluation is a worldwide trend. It aims to improve performance, given its importance for economic and personal growth. Graduate activities are essential for Brazilian research and innovation systems. However, previous studies have disregarded the importance of this educational level and have evaluated efficiency by jointly considering teaching and research or only undergraduate courses. Therefore, this study contributes to the Brazilian context by providing a national efficiency evaluation of graduate activities that treats them as a two-stage system (formative and scientific production stages). The study makes three main methodological contributions. First, it presents a new centralized two-stage dynamic network data envelopment analysis (DNDEA) model with shared resources. Second, besides measuring efficiency, an efficiency decomposition based on a leader–follower assumption shows managers how much efficiency can change when one of the stages must be prioritized. Finally, a new framework based on modified virtual inputs and outputs provides a bi-dimensional representation of the efficiency frontier. Results indicate the usefulness of the approach for ranking universities and the need to improve scientific production, highlighting the negative impacts of COVID-19 on formative process efficiency and showing no significant regional discrepancies in performance.
Mathematics doi: 10.3390/math12060883
Authors: Yolanda M. Gómez John L. Santibañez Vinicius F. Calsavara Héctor W. Gómez Diego I. Gallardo
A novel cure rate model is introduced by considering, for the number of concurrent causes, the modified power series distribution and, for the time to event, the recently proposed power piecewise exponential distribution. This model includes a wide variety of cure rate models, such as binomial, Poisson, negative binomial, Haight, Borel, logarithmic, and restricted generalized Poisson. Some characteristics of the model are examined, and the estimation of parameters is performed using the Expectation–Maximization algorithm. A simulation study is presented to evaluate the performance of the estimators in finite samples. Finally, an application in a real medical dataset from a population-based study of incident cases of lobular carcinoma diagnosed in the state of São Paulo, Brazil, illustrates the advantages of the proposed model compared to other common cure rate models in the literature, particularly regarding the underestimation of the cure rate in other proposals and the improved precision in estimating the cure rate of our proposal.
Mathematics doi: 10.3390/math12060881
Authors: Nawras H. Sabbry Alla B. Levina
Elliptic curve cryptography (ECC) is widely acknowledged as a method for implementing public key cryptography on devices with limited resources thanks to its use of small keys. A crucial and complex operation in ECC calculations is scalar point multiplication. To improve its execution time and computational complexity in low-power devices, such as embedded systems, several algorithms have been suggested for scalar point multiplication, with each featuring different techniques and mathematical formulas. In this research, we focused on combining some techniques to produce a scalar point multiplication algorithm for elliptic curves over finite fields. The employed methodology involved mathematical analysis to investigate commonly used point multiplication methods. The aim was to propose an efficient algorithm that combined the best computational techniques, resulting in lower computational requirements. The findings show that the proposed method can overcome certain implementation issues found in other multiplication algorithms. In certain scenarios, the proposed method offers a more efficient approach by reducing the number of point doubling and point addition operations on elliptic curves using the inverse of the targeted point.
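One of the baseline techniques such scalar multiplication algorithms build on is left-to-right double-and-add; a minimal sketch over a toy curve follows (the curve, prime, and base point are textbook illustration values, not the paper's setting, and a production implementation would need constant-time arithmetic):

```python
def ec_add(P, Q, a, p):
    """Add affine points on y^2 = x^3 + a*x + b over F_p (None = infinity)."""
    if P is None: return Q
    if Q is None: return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                   # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mult(k, P, a, p):
    """Left-to-right double-and-add: one doubling per bit of k,
    plus one addition per set bit."""
    R = None
    for bit in bin(k)[2:]:
        R = ec_add(R, R, a, p)          # double
        if bit == "1":
            R = ec_add(R, P, a, p)      # add
    return R
```

On y² = x³ + 2x + 2 over F₁₇, the point (5, 1) has order 19, so multiplying it by 19 returns the point at infinity. (The three-argument `pow(x, -1, p)` modular inverse requires Python 3.8+.)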
Mathematics doi: 10.3390/math12060880
Authors: Mengxin Zhang Xiaojun Zhang
Causal structure learning is one of the major fields in causal inference. Only the Markov equivalence class (MEC) can be learned from observational data; to fully orient unoriented edges, experimental data need to be introduced from external intervention experiments to improve the identifiability of causal graphs. Finding suitable intervention targets is key to intervention experiments. We propose a causal structure active learning strategy based on graph structures. In the context of randomized experiments, the central nodes of the directed acyclic graph (DAG) are considered as the alternative intervention targets. In each stage of the experiment, we decompose the chain graph by removing the existing directed edges; then, each connected component is oriented separately through intervention experiments. Finally, all connected components are merged to obtain a complete causal graph. We compare our algorithm with previous work in terms of the number of intervention variables, convergence rate and model accuracy. The experimental results show that the performance of the proposed method in restoring the causal structure is comparable to that of previous works. The strategy of finding the optimal intervention target is simplified, which improves the speed of the algorithm while maintaining the accuracy.
Mathematics doi: 10.3390/math12060879
Authors: Sergey V. Meleshko Evgeniy I. Kaptsov
This article analyzes the symmetry of two-dimensional stationary gas dynamics equations in Lagrangian coordinates, including the search for equivalence transformations, the group classification of equations, the derivation of group foliations, and the construction of conservation laws. The consideration of equations in Lagrangian coordinates significantly simplifies the procedure for obtaining conservation laws, which are derived using the Noether theorem. The final part of the work is devoted to group foliations of the gas dynamics equations, including for the nonstationary isentropic case. The group foliations approach is usually employed for equations that admit infinite-dimensional groups of transformations (which is exactly the case for the gas dynamics equations in Lagrangian coordinates) and may make it possible to simplify their further analysis. The results obtained in this regard generalize previously known results for the two-dimensional shallow water equations in Lagrangian coordinates.
Mathematics doi: 10.3390/math12060878
Authors: Youngjin Seol Seunghyun Lee Jiho Lee Chang-Wan Kim Hyun Su Bak Youngchul Byun Janghyeok Yoon
Considering the pivotal role of ferroalloys in the steel industry and the escalating global emphasis on sustainability (e.g., zero emissions and carbon neutrality), the demand for ferroalloys is anticipated to increase. However, the electric arc furnace (EAF) of ferroalloy plants generates substantial amounts of nitrogen oxides (NOx) because of the high-temperature combustion processes. Despite the substantial contributions of many studies on NOx prediction from various industrial facilities, there is a lack of studies considering the environmental conditions of the EAF in ferroalloy plants. Therefore, this study presents a deep learning model for predicting NOx emissions from ferroalloy plants and can further provide guidelines for predicting NOx at industrial sites equipped with electric furnaces. In this study, we collected various historical data from the manufacturing execution system of electric furnaces and exhaust gas systems to develop a prediction model. Additionally, an interpretable artificial intelligence method was employed to track the effects of each variable on the NOx emissions. The proposed prediction model can provide decision support to reduce NOx emissions. Furthermore, the interpretation of the model contributes to a better understanding of the factors influencing NOx emissions and the development of effective strategies for emission reduction in ferroalloy EAF plants.
]]>Mathematics doi: 10.3390/math12060877
Authors: Kyriaki Tsilika
This work surveys the use of directed weighted graphs in conducting comparative static analyses. The paper discusses the implementation of a computer-aided process for building spreadsheet-based graph models for inter- and intra-regional financial flows. The graph-theoretic techniques are programmed to enable the interactive visualization and analysis of financial data using Wolfram technologies (i.e., Mathematica software v. 11.3 or later, Wolfram player v. 12 or later). The paper describes the workflow for several interactive visualizations applicable to financial networks. The author provides four programs, written in the Wolfram language, that customize input–output financial models by combining the Manipulate command with built-in Mathematica functions and functions of the IGraph package (IGraph/M). The study is a tutorial article for the generation of a suite of visual schemes that provide patterns, practices, and roadmaps of the financial markets across the globe.
]]>Mathematics doi: 10.3390/math12060875
Authors: Bailey Meche Venu Poruri Sat Gupta Sadia Khalil
This study examines the effect of measurement error on binary Randomized Response Technique models. We discuss a method for estimating and accounting for measurement error and untruthfulness in two basic models and one comprehensive model. Both theoretical and empirical results show that not accounting for measurement error leads to inaccurate estimates. We introduce estimators that account for the effect of measurement error. Furthermore, we introduce a new measure of model privacy using an odds ratio statistic, which offers better interpretability than traditional methods.
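For readers unfamiliar with binary RRT designs, the classical Warner model — one plausible instance of the "basic models" mentioned above; the measurement-error adjustments and the odds-ratio privacy measure are the paper's contribution and are not reproduced here — can be sketched as follows:

```python
import random

def warner_estimate(responses, P):
    """Classical Warner (1965) estimator: each respondent truthfully answers
    'Do you belong to group A?' with probability P, or the complementary
    question with probability 1 - P, so that the observed 'yes' probability is
    lam = P*pi + (1 - P)*(1 - pi), giving pi = (lam - (1 - P)) / (2P - 1)."""
    lam_hat = sum(responses) / len(responses)   # observed 'yes' proportion
    return (lam_hat - (1 - P)) / (2 * P - 1)

def simulate(pi, P, n, seed=0):
    """Simulate n randomized responses for true prevalence pi."""
    rng = random.Random(seed)
    responses = []
    for _ in range(n):
        sensitive = rng.random() < pi           # true trait status
        direct_q = rng.random() < P             # which question the device picks
        responses.append(int(sensitive if direct_q else not sensitive))
    return responses

resp = simulate(pi=0.30, P=0.70, n=200_000)
print(round(warner_estimate(resp, P=0.70), 3))  # close to 0.30
```

The paper's point is that when responses are additionally corrupted by measurement error, this naive estimator becomes biased and must be corrected.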
]]>Mathematics doi: 10.3390/math12060876
Authors: Chunsong Bai Jun Du
This paper deals with the p-maxian problem on cycles with an upper bound on the distances between facilities. We consider the case of p=2 and show that, in the worst case, the optimal solution contains at least one vertex of the underlying cycle, which helps to develop an efficient algorithm for the constrained 2-maxian problem. Based on this property, we develop a linear time algorithm for the constrained 2-maxian problem on a cycle. We also discuss the relations between the constrained and unconstrained 2-maxian problems when the underlying graphs are cycles.
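A brute-force baseline helps fix ideas. The sketch below assumes a vertex-weighted maxian objective (maximize the total weighted distance to the nearer of the two facilities) and reads the constraint as a bound on the distance between the two facilities — both assumptions, since the abstract does not spell out the formulation. It also restricts facilities to vertices, whereas the paper's structural result concerns general points on the cycle; the linear-time algorithm avoids this quadratic enumeration:

```python
from itertools import combinations

def cyc_dist(i, j, n):
    # shortest-path distance between vertices i and j on a cycle
    # with n unit-length edges
    d = abs(i - j) % n
    return min(d, n - d)

def constrained_2_maxian_brute(n, weights, bound):
    """Brute-force baseline (hypothetical formulation): choose two vertices
    x, y with d(x, y) <= bound maximizing sum_v w_v * min(d(v,x), d(v,y))."""
    best = None
    for x, y in combinations(range(n), 2):
        if cyc_dist(x, y, n) > bound:
            continue
        val = sum(w * min(cyc_dist(v, x, n), cyc_dist(v, y, n))
                  for v, w in enumerate(weights))
        if best is None or val > best[0]:
            best = (val, (x, y))
    return best

print(constrained_2_maxian_brute(8, [1] * 8, bound=2))  # → (12, (0, 1))
```

On this symmetric instance every adjacent pair is optimal among the feasible pairs, consistent with the intuition that obnoxious facilities cluster so that their joint "coverage" of near vertices is minimized.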
]]>Mathematics doi: 10.3390/math12060874
Authors: Kalanka P. Jayalath
This work focuses on making Bayesian inferences for the two-parameter Birnbaum–Saunders (BS) distribution in the presence of right-censored data. A flexible Gibbs sampler is employed to handle the censored BS data in this Bayesian work that relies on Jeffreys’ and Achcar’s reference priors. A comprehensive simulation study is conducted to compare estimates under various parameter settings, sample sizes, and levels of censoring. Further comparisons are drawn with real-world examples involving Type-II, progressively Type-II, and randomly right-censored data. The study concludes that the suggested Gibbs sampler enhances the accuracy of Bayesian inferences, and both the amount of censoring and the sample size are identified as influential factors in such analyses.
]]>Mathematics doi: 10.3390/math12060873
Authors: Egor A. Efremenkov Nikita V. Martyushev Svetlana K. Efremenkova Egor S. Chavrov
Currently, mechanical gears with cycloid engagement are increasingly used in mechanisms alongside involute ones. In modern drive mechanisms, pin gears and gears with intermediate rolling elements (IRE), both of which use cycloid engagement, are widespread. Pin gears are now being investigated to a greater extent, but IRE gears have their own undeniable advantages. Many works are devoted to the study of cycloid toothing for certain gears, but the efficiency, especially that of IRE gears, has remained practically uninvestigated. Therefore, the analysis of power losses in the engagement of a gear with IRE and a free cage (IREFC) is relevant. In this analysis, the authors have used the laws of mechanics, the method of energy flows and a secant normal. Mathematical expressions have been obtained to estimate slip speeds and friction power losses in the engagement of a gear with IREFC, and a formula has been derived to determine the efficiency of the mentioned mechanical transmission. The calculation of slip speeds and power losses at the points of contact of a rolling element with the cycloid profiles of the wheels for selected initial parameters of a gear with IREFC is presented. The friction power and the overall efficiency of the entire gear engagement have also been calculated. This work shows that friction power losses at the points of contact of a rolling element with the cycloid profiles of the toothed wheels of a gear with IREFC are not the same. The friction power in the contact of a rolling element with the cycloid profile of the cam is an order of magnitude higher than the friction power in the contact of a rolling element with the cycloid profile of the crown.
]]>Mathematics doi: 10.3390/math12060872
Authors: Hao Guan Waheed Ahmad Khan Amna Fida Khadija Ali Jana Shafi Aysha Khan
In this manuscript, we introduce a few new types of dominations in intuitionistic fuzzy directed graphs (IFDGs) based on different types of strong arcs (SAs). Our work is not only a direct extension of domination in directed fuzzy graphs (DFGs) but also fills the gap that exists in the literature regarding the dominations in different extended forms of fuzzy graphs (FGs). In the beginning, we introduce several types of strong arcs in IFDGs, like semi-β strong arcs, semi-δ strong arcs, etc. Then, we introduce the concepts of domination in IFDGs based on these strong arcs and discuss its various useful characteristics. Moreover, the dominating set (DS), minimal dominating set (MDS), etc., are described with some fascinating results. We also introduce the concept of an independent set in IFDGs and investigate its relations with the DS, minimal independent set (MIS) and MDS. We also provide numerous important characterizations of domination in IFDGs based on minimal and maximal dominating sets. In this context, we discuss the lower and upper dominations of some IFDGs. In addition, we introduce the terms status and structurally equivalent and examine a few relationships with the dominations in IFDGs. Finally, we identify the most influential (expert) person in an organization by utilizing the concepts of domination in IFDGs.
]]>Mathematics doi: 10.3390/math12060871
Authors: Donal O’Regan Safoura Rezaei Aderyani Reza Saadati
The present paper considers a fractional-order smoke epidemic model. We apply fuzzy systems and probability theory to make the best decision on the stability of the smoking epidemic model by using a new class of controllers powered by special functions to effectively generalize Ulam-type stability problems. The evaluation of optimal controllability and maximal stability is a new issue. This different concept of stability not only covers the older concepts but also addresses the optimization of the problem. Finally, we apply a new optimal method to the governing model with the Atangana–Baleanu–Caputo fractional derivative to obtain stability results in Banach spaces.
]]>Mathematics doi: 10.3390/math12060870
Authors: Xinyu Sha Fucai Qian Hongli He
The calibration of cameras plays a critical role in close-range photogrammetry because the precision of calibration has a direct effect on the quality of the results. When calibrating a camera from captured images, traditional swarm intelligence algorithms such as genetic algorithms and particle swarm optimization, used in conjunction with Zhang’s calibration method, frequently face difficulties with local optima and sluggish convergence. This study presents an enhanced hybrid optimization approach combining the principles of differential evolution and particle swarm optimization, which is then employed for camera calibration. Initially, we establish a measurement model specific to the camera in close-range photogrammetry and determine its interior orientation parameters. Subsequently, employing these parameters as initial values, we perform global optimization and iteration using the improved hybrid optimization algorithm. The effectiveness of the proposed approach is then validated through simulation and comparative experiments. Compared to alternative approaches, the proposed algorithm enhances both the accuracy of camera calibration and the convergence speed. It effectively addresses the issue of other algorithms becoming trapped in local optima due to image distortion. These research findings provide theoretical support for practical engineering applications in the field of control theory and optimization.
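The flavor of such a hybrid can be sketched as a PSO velocity step followed by a DE/rand/1/bin trial vector with greedy selection — an illustrative combination, not the authors' exact update rules — here exercised on the sphere function rather than a reprojection-error objective:

```python
import random

def hybrid_de_pso(f, dim, bounds, n_particles=30, iters=200, seed=1):
    """Illustrative DE/PSO hybrid: each particle takes a PSO move, then a
    DE/rand/1/bin trial replaces it only if the trial is strictly better."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                       # personal bests
    Pf = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: Pf[i])
    G, Gf = P[g][:], Pf[g]                      # global best
    w, c1, c2, F, CR = 0.7, 1.5, 1.5, 0.5, 0.9
    for _ in range(iters):
        for i in range(n_particles):
            # PSO velocity/position update
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            # DE/rand/1/bin trial with greedy selection
            a, b, c = rng.sample([j for j in range(n_particles) if j != i], 3)
            jr = rng.randrange(dim)
            trial = [X[a][d] + F * (X[b][d] - X[c][d])
                     if (rng.random() < CR or d == jr) else X[i][d]
                     for d in range(dim)]
            trial = [min(hi, max(lo, t)) for t in trial]
            if f(trial) < f(X[i]):
                X[i] = trial
            fx = f(X[i])
            if fx < Pf[i]:
                P[i], Pf[i] = X[i][:], fx
                if fx < Gf:
                    G, Gf = X[i][:], fx
    return G, Gf

sphere = lambda x: sum(t * t for t in x)
best, best_f = hybrid_de_pso(sphere, dim=5, bounds=(-5.0, 5.0))
print(best_f)  # should approach 0
```

The DE step gives each particle an escape route when the PSO move stalls in a local basin, which is the qualitative behavior the abstract attributes to the proposed hybrid.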
]]>Mathematics doi: 10.3390/math12060869
Authors: Elías Berriochoa Alicia Cachafeiro Héctor García Rábade José Manuel García-Amor
In this paper, we study a method of polynomial interpolation that lies in-between Lagrange and Hermite methods. The novelty is that we use very general nodal systems on the unit circle as well as on the bounded interval only characterized by a separation property. The way in which we interpolate consists in considering all the nodes for the prescribed values and only half for the derivatives. Firstly, we develop the theory on the unit circle, obtaining the main properties of the nodal polynomials and studying the convergence of the interpolation polynomials corresponding to continuous functions with some kind of modulus of continuity and with general conditions on the prescribed values for half of the derivatives. We complete this first part of the paper with the study of the convergence for smooth functions obtaining the rate of convergence, which is slightly slower than that when equidistributed nodal points are considered. The second part of the paper is devoted to solving a similar problem on the bounded interval by using nodal systems having good properties of separation, generalizing the Chebyshev–Lobatto system, and well related to the nodal systems on the unit circle studied before. We obtain an expression of the interpolation polynomials as well as results about their convergence in the case of continuous functions with a convenient modulus of continuity and, particularly, for differentiable functions. Finally, we present some numerical experiments related to the application of the method with the nodal systems dealt with.
]]>Mathematics doi: 10.3390/math12060868
Authors: Petr Iakovlevitch Ekel Matheus Pereira Libório Laura Cozzi Ribeiro Mateus Alberto Dorna de Oliveira Ferreira Joel Gomes Pereira Junior
This research addresses the problem of multi-objective resource allocation or resource deficits, offering robust answers to planning decisions that involve the elementary question: “How is it done?”. The solution to the problem is realized using the general scheme of multi-criteria decision-making in uncertain conditions. The bases of the proposed scheme are associated with the possibilistic approach, which involves the generalization of fuzzy sets from the classical approach to process the uncertainty of information to produce robust (non-dominated) solutions in multi-criteria analysis. Applying this general scheme makes it possible to reduce regions of decision uncertainty through the maximum use of available quantitative information. In the case where quantitative information analysis is insufficient to obtain a unique solution, the proposed approach presupposes the appropriation of qualitative data extracted from experts, who express their opinions considering their knowledge, experience, and intuition. The information on the qualitative character can be represented in diverse preference formats processed by transformation functions to provide homogeneous information for decision procedures used at the final decision stage. The presented results have been implemented within the system of multi-criteria decision-making under uncertain conditions described in the paper. Its functioning is illustrated by solving the typical problem in investment planning activities.
]]>Mathematics doi: 10.3390/math12060867
Authors: Pius W. M. Chin Claude R. B. Moutsinga Khadijo R. Adem
The Huxley equation, which is a nonlinear partial differential equation, is used to describe the ionic mechanisms underlying the initiation and propagation of action potentials in the squid giant axon. This equation, just like many other nonlinear equations, is often very difficult to analyze because of the presence of the nonlinearity term, which is always very difficult to approximate. This paper aims to design a reliable scheme that consists of a combination of the nonstandard finite difference in time method, the Galerkin method and the compactness methods in space variables. This method is used to show that the solution of the problem exists uniquely. The a priori estimate from the existence process is applied to the scheme to show that the numerical solution from the scheme converges optimally in the L2 as well as the H1 norms. We proceed to show that the scheme preserves the decaying properties of the exact solution. Numerical experiments are introduced with a chosen example to validate the proposed theory.
]]>Mathematics doi: 10.3390/math12060866
Authors: Adel Alahmadi Malak Altaiary Patrick Solé
We study cyclic codes over the ring H of order 4 and characteristic 2 defined by generators and relations as H=⟨a,b∣2a=2b=0,a2=0,b2=b,ab=ba=0⟩. This is the first time that cyclic codes over a non-unitary ring have been studied. Every cyclic code of length n over H is uniquely determined by an ordered pair of binary cyclic codes of length n. We characterize self-dual, quasi-self-dual, and linear complementary dual cyclic codes over H. We classify cyclic codes of length at most 7 up to equivalence. A Gray map between cyclic codes of length n over H and quasi-cyclic codes of length 2n over F2 is studied.
]]>Mathematics doi: 10.3390/math12060865
Authors: Laith H. Baniata Sangwoo Kang
Transformer models have emerged as frontrunners in the field of natural language processing, primarily due to their adept use of self-attention mechanisms to grasp the semantic linkages between words in sequences. Despite their strengths, these models often face challenges in single-task learning scenarios, particularly when it comes to delivering top-notch performance and crafting strong latent feature representations. This challenge is more pronounced in the context of smaller datasets and is particularly acute for under-resourced languages such as Arabic. In light of these challenges, this study introduces a novel methodology for text classification of Arabic texts. This method harnesses the newly developed Reverse Positional Encoding (RPE) technique. It adopts an inductive-transfer learning (ITL) framework combined with a switching self-attention shared encoder, thereby increasing the model’s adaptability and improving its sentence representation accuracy. The integration of Mixture of Experts (MoE) and RPE techniques empowers the model to process longer sequences more effectively. This enhancement is notably beneficial for Arabic text classification, adeptly supporting both the intricate five-point and the simpler ternary classification tasks. The empirical evidence points to its outstanding performance, achieving accuracy rates of 87.20% for the HARD dataset, 72.17% for the BRAD dataset, and 86.89% for the LABR dataset, as evidenced by the assessments conducted on these datasets.
]]>Mathematics doi: 10.3390/math12060864
Authors: Tim Leung Theodore Zhao
We analyze the multiscale behaviors of high-frequency intraday prices, with a focus on how asset prices are correlated over different timescales. The multiscale approach proposed in this paper is designed for the analysis of high-frequency intraday prices. It incorporates microstructure noise into the stochastic price process. We consider a noisy fractional Brownian motion model and illustrate its various statistical properties. This leads us to introduce new latent correlation and noise estimators. New numerical algorithms are developed for model estimation using empirical high-frequency data. For a collection of stocks and exchange-traded funds, examples are provided to illustrate the relationship between multiscale correlation and sampling frequency as well as the evolution of multiscale correlation over time.
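The dependence of correlation on sampling frequency under microstructure noise can be illustrated with a toy version of the model — ordinary Brownian increments (H = 1/2) instead of fractional ones, which is an intentional simplification of the paper's noisy fractional Brownian motion. Two latent random walks share a common component, each is observed with iid noise, and the correlation of k-step returns rises toward the latent correlation as k grows:

```python
import math, random

def simulate_noisy_prices(n=100_000, rho=0.8, noise_sd=2.0, seed=7):
    """Two latent random-walk log-prices with increment correlation rho,
    each observed with iid microstructure noise of std. dev. noise_sd."""
    rng = random.Random(seed)
    p1 = p2 = 0.0
    y1, y2 = [], []
    a, b = math.sqrt(rho), math.sqrt(1 - rho)
    for _ in range(n):
        z, e1, e2 = rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)
        p1 += a * z + b * e1                    # common + idiosyncratic shock
        p2 += a * z + b * e2
        y1.append(p1 + rng.gauss(0, noise_sd))  # observed = latent + noise
        y2.append(p2 + rng.gauss(0, noise_sd))
    return y1, y2

def return_corr(y1, y2, k):
    """Pearson correlation of non-overlapping k-step returns."""
    r1 = [y1[i + k] - y1[i] for i in range(0, len(y1) - k, k)]
    r2 = [y2[i + k] - y2[i] for i in range(0, len(y2) - k, k)]
    m1, m2 = sum(r1) / len(r1), sum(r2) / len(r2)
    cov = sum((a - m1) * (b - m2) for a, b in zip(r1, r2))
    v1 = sum((a - m1) ** 2 for a in r1)
    v2 = sum((b - m2) ** 2 for b in r2)
    return cov / math.sqrt(v1 * v2)

y1, y2 = simulate_noisy_prices()
for k in (1, 10, 100):
    print(k, round(return_corr(y1, y2, k), 3))
```

At the finest scale the noise (variance 2·noise_sd² per return, versus k for the signal) swamps the latent correlation; at coarser scales the signal dominates, the attenuation the paper's latent correlation and noise estimators are designed to undo.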
]]>Mathematics doi: 10.3390/math12060863
Authors: Hannan Lu Zixian Guo Wangmeng Zuo
Existing video object segmentation (VOS) methods based on matching techniques commonly employ a reference set comprising historical segmented frames, referred to as ‘memory frames’, to facilitate the segmentation process. However, these methods suffer from the following limitations: (i) Inherent segmentation errors in memory frames can propagate and accumulate errors when utilized as templates for subsequent segmentation. (ii) The non-local matching technique employed in top-leading solutions often fails to incorporate positional information, potentially leading to incorrect matching. In this paper, we introduce the Modulated Memory Network (MMN) for VOS. Our MMN enhances matching-based VOS methods in the following ways: (i) Introducing an Importance Modulator, which adjusts memory frames using adaptive weight maps generated based on the segmentation confidence associated with each frame. (ii) Incorporating a Position Modulator that encodes spatial and temporal positional information for both memory frames and the current frame. The proposed modulator improves matching accuracy by embedding positional information. Meanwhile, the Importance Modulator mitigates error propagation and accumulation by incorporating confidence-based modulation. Through extensive experimentation, we demonstrate the effectiveness of our proposed MMN, which also achieves promising performance on VOS benchmarks.
]]>Mathematics doi: 10.3390/math12060862
Authors: Adel Alahmadi Altaf Alshuhail Rowena Alma Betty Lucky Galvez Patrick Solé
In this paper, we derive a mass formula for the self-orthogonal codes and self-dual codes over a non-commutative non-unitary ring, namely, Ep=⟨a,b∣pa=pb=0,a2=a,b2=b,ab=a,ba=b⟩, where a≠b and p is any odd prime. We also give a classification of self-orthogonal codes and self-dual codes over Ep, where p=3,5, and 7, in short lengths.
]]>Mathematics doi: 10.3390/math12060861
Authors: Intesar F. El Ramley Saleha M. AlZhrani Nada M. Bedaiwi Yas Al-Hadeethi Abeer Z. Barasheed
This paper introduces a new simple moment-generating function (MGF) design modelling method to conclude an optimum filter to maximize the Q-factor and increase the link communication span. This approach mitigates pulse temporal dispersion, particularly in underwater wireless optical communication (UWOC) systems. Hence, some form of equalizing filter design is highly desirable. The model solution environment includes a Double Gamma Function (DGF) water channel impulse response, intersymbol interference (ISI), a stochastic Poisson process, and additive Gaussian thermal noise (AGTN). The optimal filters exhibit temporal profiles comparable to those derived by published works based on complex Chernoff Bound (CB) and Modified Chernoff Bound (MCB) methods. The results show the impact of the optimum filter at the signal level and the optical receiver level utilizing Eye-Diagrams and BER vs. Q-Factor, respectively. The computation involves four different UWOC propagation channel models for Coastal and Harbor waters. One of the main conclusions indicates that the optimum filter manages the temporal dispersion due to the ISI impairment correctly. Also, the proposed optimum filter reduces eye-opening and the corresponding Q-Factor by less than 15% for a five-times increase in pulse width for the same transmitted optical power level.
]]>Mathematics doi: 10.3390/math12060860
Authors: Adel Alahmadi Tamador Alihia Rowena Alma Betty Lucky Galvez Patrick Solé
The build-up method is a powerful class of propagation rules that generate self-dual codes over finite fields and unitary rings. Recently, it was extended to non-unitary rings of order 4, to generate quasi self-dual codes. In the present paper, we introduce three such propagation rules to generate self-orthogonal, self-dual and quasi self-dual codes over a special non-unitary ring of order 9. As an application, we classify the three categories of codes completely in length at most 3, and partially in lengths 4 and 5, up to monomial equivalence.
]]>Mathematics doi: 10.3390/math12060859
Authors: Sami Barmada Paolo Di Barba Nunzia Fontana Maria Evelina Mognaschi Mauro Tucci
In this study, a deep learning-based approach is used to address inverse problems involving the inversion of a magnetic field and the identification of the relevant source, given the field data within a specific subdomain. Three different techniques are proposed: the first one is characterized by the use of a conditional variational autoencoder (CVAE) and a convolutional neural network (CNN); the second one employs the CVAE (its decoder, more specifically) and a fully connected deep artificial neural network; while the third one (mainly used as a comparison) uses a CNN directly operating on the available data without the use of the CVAE. These methods are applied to the magnetostatic problem outlined in the TEAM 35 benchmark problem, and a comparative analysis between them is conducted.
]]>Mathematics doi: 10.3390/math12060858
Authors: Jesús Carrillo-Pacheco
The Plücker matrix BL(n,E) of the Lagrangian Grassmannian L(n,E) is determined by the linear envelope ⟨L(n,E)⟩ of the Lagrangian Grassmannian. The linear envelope ⟨L(n,E)⟩ is the intersection of the Plücker linear relations of the Lagrangian Grassmannian, defined here. The Plücker matrix BL(n,E) is a direct sum of incidence matrices of configurations of subsets. These matrices determine the isotropy index rn and the rn-atlas, which are invariants associated with the symplectic vector space E.
]]>Mathematics doi: 10.3390/math12060857
Authors: Abdelbaki Choucha Djamel Ouchenane Safa M. Mirgani Eltigan I. Hassan A. H. A. Alfedeel Khaled Zennir
In this work, we consider the one-dimensional thermoelastic Bresse system, addressing nonlinear damping and a distributed delay term acting on the first and second equations. We prove a stability result without the common assumption regarding wave speeds under Neumann boundary conditions. We discover a new relationship between the decay rate of the solution and the growth of ϖ at infinity. Our results were achieved using the multiplier method and perturbed modified energy (Lyapunov) functions, together with some properties of convex functions.
]]>Mathematics doi: 10.3390/math12060856
Authors: Vladimir Kodkin Ekaterina Kuznetsova Alexander Anikin Alexander A. Baldenkov
The article proposes a criterion for the existence of sliding processes based on the frequency characteristics of the control device and the control object. It is shown that the conditions for the existence of slip are equivalent to the conditions for the absolute stability of equivalent circuits of the original systems with a variable structure. This approach is proposed by the authors as an alternative to the method of phase trajectories and state-space equations used by other researchers. Frequency criteria make it possible to formulate several practical engineering provisions that are very important for the implementation of this control method in real electric drives. The main conclusions are confirmed by simulations both of processes in systems with a variable structure and in equivalent circuits.
]]>Mathematics doi: 10.3390/math12060855
Authors: John Martin Sona Taheri Mali Abdollahian
Credit risk scorecard models are utilized by lending institutions to optimize decisions on credit approvals. In recent years, ensemble learning has often been deployed to reduce misclassification costs in credit risk scorecards. In this paper, we compared the risk estimation of 26 widely used machine learning algorithms based on commonly used statistical metrics. The best-performing algorithms were then used for model selection in ensemble learning. For the first time, we proposed financial criteria that assess the impact of losses associated with both false positive and false negative predictions to identify optimal ensemble learning. The German Credit Dataset (GCD) is augmented with simulated financial information according to a hypothetical mortgage portfolio observed in UK, European and Australian banks to enable the assessment of losses arising from misclassification costs. The experimental results using the simulated GCD show that the best predictive individual algorithm, with an accuracy of 0.87, a Gini of 0.88 and an Area Under the Receiver Operating Curve of 0.94, was the Generalized Additive Model (GAM). The ensemble learning method with the lowest misclassification cost was the combination of Random Forest (RF) and K-Nearest Neighbors (KNN), totaling USD 417 million in costs (USD 230 million in default costs and USD 187 million in opportunity costs) compared to the costs of the GAM (USD 487 million in total: USD 287 million and USD 200 million, respectively). Implementing the proposed financial criteria has led to a significant USD 70 million reduction in misclassification costs derived from a small sample. Thus, the lending institutions’ profit would rise considerably as the number of credit applications submitted for approval increases.
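The financial criterion described above amounts to weighting the two error types by their monetary consequences. A minimal sketch with hypothetical per-loan figures (the portfolio simulation in the paper is far richer):

```python
def misclassification_cost(y_true, y_pred, default_loss, opportunity_loss):
    """Financial criterion sketched in the abstract, with illustrative figures:
    a false negative (approved loan that later defaults) costs default_loss,
    a false positive (rejected good applicant) costs opportunity_loss.
    Labels: 1 = defaulter, 0 = good applicant; prediction 1 = reject."""
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return fn * default_loss + fp * opportunity_loss

y_true  = [1, 1, 0, 0, 0, 1, 0, 0]
model_a = [1, 0, 0, 1, 0, 1, 0, 0]   # 1 FN, 1 FP
model_b = [0, 0, 0, 0, 0, 1, 1, 1]   # 2 FN, 2 FP
cost_a = misclassification_cost(y_true, model_a, 50_000, 8_000)
cost_b = misclassification_cost(y_true, model_b, 50_000, 8_000)
print(cost_a, cost_b)  # → 58000 116000: model A is the cheaper choice
```

Because the two error costs are asymmetric, the cheapest model by this criterion need not be the one with the highest accuracy — which is exactly why the paper's financial criteria can rank ensembles differently from statistical metrics.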
]]>Mathematics doi: 10.3390/math12060854
Authors: Yuxue Dong Mengxia Li Mengxiang Zhou
To address the problems of the dragonfly algorithm, namely that it easily falls into local optima and its optimization accuracy is low, an improved Dragonfly Algorithm (IDA) is proposed and applied to Otsu multi-threshold image segmentation. Firstly, elite-opposition-based learning is utilized to enhance the diversity of the initial population of dragonflies, laying the foundation for subsequent algorithm iterations. Secondly, an enhanced sine cosine strategy is introduced to prevent the algorithm from falling into local optima, thereby improving its ability to escape from them. Then, an adaptive t-distribution strategy is incorporated to improve the balance between global exploration and local search, thereby improving the convergence speed of the algorithm. To evaluate the performance of this algorithm, we use eight international benchmark functions to test the IDA algorithm and compare it with the sparrow search algorithm (SSA), sine cosine algorithm (SCA) and dragonfly algorithm (DA). The experiments show that the algorithm performs better in terms of convergence speed and accuracy. At the same time, the Otsu method is employed to determine the optimal thresholds, a series of experiments is carried out on six images provided by Berkeley University, and the results are compared with those of the other three algorithms. From the experimental results, the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) of the IDA-based method are better than those of the other optimization algorithms. The experimental results indicate that Otsu multi-threshold segmentation based on the IDA algorithm is promising and meaningful.
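For reference, the exhaustive single-threshold Otsu criterion that the IDA generalizes to several thresholds can be sketched as follows; the metaheuristic becomes attractive precisely because exhaustive enumeration scales poorly as the number of thresholds grows:

```python
def otsu_threshold(hist):
    """Exhaustive single-threshold Otsu: maximize the between-class variance
    sigma_b^2(t) = w0 * w1 * (mu0 - mu1)**2 over all gray levels t, where
    w0, w1 are class weights and mu0, mu1 are class means."""
    total = sum(hist)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = cum = 0.0
    for t in range(len(hist) - 1):
        w0 += hist[t]                 # class 0 = levels <= t
        cum += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0, mu1 = cum / w0, (total_sum - cum) / w1
        var = (w0 / total) * (w1 / total) * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# bimodal toy histogram over 16 gray levels:
# dark peak centered at 3, bright peak centered at 12
hist = [1, 4, 9, 12, 9, 4, 1, 0, 0, 1, 4, 9, 12, 9, 4, 1]
print(otsu_threshold(hist))  # → 6, in the valley between the peaks
```

With m thresholds the search space grows combinatorially (choosing m cut points among L gray levels), which is where a population-based optimizer such as the IDA replaces enumeration.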
]]>Mathematics doi: 10.3390/math12060853
Authors: Jorge Delgado Héctor Orera J. M. Peña
In this work, block checkerboard sign pattern matrices are introduced and analyzed. They satisfy the generalized Perron–Frobenius theorem. We study the case related to total positive matrices in order to guarantee bidiagonal decompositions and some linear algebra computations with high relative accuracy. A result on intervals of checkerboard matrices is included. Some numerical examples illustrate the theoretical results.
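A checkerboard sign pattern can be verified directly, and conjugating by the signature matrix D = diag(1, −1, 1, …) turns a checkerboard matrix into an entrywise nonnegative one — the standard route to Perron–Frobenius-type statements. This is a minimal sketch of the scalar sign pattern only, not of the block case or the high-relative-accuracy computations studied in the paper:

```python
def is_checkerboard(A, strict=False):
    """Checkerboard sign pattern: (-1)**(i + j) * a_ij >= 0 for every entry
    (strictly positive when strict=True)."""
    for i, row in enumerate(A):
        for j, a in enumerate(row):
            s = (-1) ** (i + j) * a
            if s < 0 or (strict and s == 0):
                return False
    return True

def signature_conjugate(A):
    """D @ A @ D with D = diag(1, -1, 1, ...): entry (i, j) becomes
    (-1)**(i + j) * a_ij, which is nonnegative for a checkerboard matrix,
    so the classical Perron-Frobenius theorem applies to the conjugate."""
    return [[(-1) ** (i + j) * a for j, a in enumerate(row)]
            for i, row in enumerate(A)]

A = [[ 2, -1,  3],
     [-4,  5, -1],
     [ 1, -2,  2]]
print(is_checkerboard(A))        # True
print(signature_conjugate(A))    # all entries nonnegative
```

Since D is its own inverse, A and DAD share eigenvalues, which is why spectral conclusions for nonnegative matrices transfer back to checkerboard ones.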
]]>Mathematics doi: 10.3390/math12060852
Authors: Raju Bhakta Pradip Kundu Suchandan Kayal Morad Alizadeh
In this paper, we consider two finite mixture models (FMMs) with inverted-Kumaraswamy distributed components’ lifetimes. Several stochastic ordering results between the FMMs are obtained. Mainly, we focus on three different cases in terms of the heterogeneity of parameters. The usual stochastic order between the FMMs is established when heterogeneity presents in one parameter as well as two parameters. In addition, we also study ageing faster order in terms of the reversed hazard rate between two FMMs when heterogeneity is in two parameters. For the case of heterogeneity in three parameters, we obtain the comparison results based on reversed hazard rate and likelihood ratio orders. The theoretical developments are illustrated using several examples and counterexamples.
]]>Mathematics doi: 10.3390/math12060850
Authors: Giorgia Franchini
This paper discusses the challenges of the hyperparameter tuning in deep learning models and proposes a green approach to the neural architecture search process that minimizes its environmental impact. The traditional approach of neural architecture search involves sweeping the entire space of possible architectures, which is computationally expensive and time-consuming. Recently, to address this issue, performance predictors have been proposed to estimate the performance of different architectures, thereby reducing the search space and speeding up the exploration process. The proposed approach aims to develop a performance predictor by training only a small percentage of the possible hyperparameter configurations. The suggested predictor can be queried to find the best configurations without training them on the dataset. Numerical examples of image denoising and classification enable us to evaluate the performance of the proposed approach in terms of performance and time complexity.
]]>Mathematics doi: 10.3390/math12060851
Authors: Daewon Chung Byeongseon Jeong
In this paper, we present an effective method for analyzing patterns in the Russia–Ukraine war based on the Lanchester model. Due to the limited availability of information on combat powers of engaging forces, we utilize the loss of armored equipment as the primary data source. To capture the intricate dynamics of modern warfare, we partition the combat loss data into disjoint subsets by examining their geometric properties. Separate systems of ordinary differential equations for these subsets are then identified using the Sparse Identification of Nonlinear Dynamics (SINDy) algorithm under a generalized formulation of the historical Lanchester model. We provide simulations of our method to demonstrate its effectiveness and performance in analyzing contemporary warfare dynamics.
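The identification idea can be illustrated on the classical Lanchester square law with a one-term regression library — a deliberately reduced stand-in for the SINDy sparse regression used in the paper, applied to simulated rather than battlefield data:

```python
def simulate_lanchester(a, b, x0, y0, dt, steps):
    """Euler simulation of the Lanchester square law:
    dx/dt = -b*y (Blue losses caused by Red fire), dy/dt = -a*x."""
    xs, ys = [x0], [y0]
    for _ in range(steps):
        x, y = xs[-1], ys[-1]
        xs.append(x - b * y * dt)
        ys.append(y - a * x * dt)
    return xs, ys

def fit_rate(series, driver, dt):
    """Least-squares fit of the coefficient c in dX/dt = -c * Y from finite
    differences: a one-term 'library' version of SINDy-style regression."""
    pairs = list(zip(driver, series, series[1:]))
    num = sum(d * (-(s2 - s1) / dt) for d, s1, s2 in pairs)
    den = sum(d * d for d, _, _ in pairs)
    return num / den

xs, ys = simulate_lanchester(a=0.02, b=0.01, x0=1000.0, y0=800.0,
                             dt=0.01, steps=500)
b_hat = fit_rate(xs, ys, dt=0.01)   # Blue's losses regressed on Red strength
a_hat = fit_rate(ys, xs, dt=0.01)   # Red's losses regressed on Blue strength
print(round(a_hat, 4), round(b_hat, 4))  # → 0.02 0.01
```

The paper's approach differs in two essential ways: the candidate library contains several Lanchester-type terms and SINDy selects a sparse subset, and the loss data are first partitioned into geometrically distinct phases, each fitted with its own system.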
]]>Mathematics doi: 10.3390/math12060849
Authors: Daniel Maroncelli
In this work, we provide conditions for the existence of periodic solutions to nonlinear, second-order difference equations of the form y(t+2)+by(t+1)+cy(t)=g(y(t)), where b and c are real parameters, c≠0, and g:R→R is continuous.
]]>Mathematics doi: 10.3390/math12060848
Authors: Yan Zhou Haiying Zhou
The technology options for sustainable development are explored with customer low-carbon preference in a port supply chain consisting of one ship and one port. Port supply chains can opt for either shore power (SP) or low-sulfur fuel oil (LSFO) to cut down emissions. We set up game models considering three power structures: the port dominant (port-led Stackelberg game), the ship dominant (ship-led Stackelberg game), and the port and ship on an equal footing (Nash game). We compare the performances of the different technologies. It is shown that, when customer low-carbon preference and the carbon tax are both low, LSFO is the appropriate choice from the supply chain’s profit perspective, SP is preferred from the emission control perspective, and LSFO is preferred from the social welfare perspective. However, when customers’ low-carbon preferences, the carbon tax, and environmental concerns are all low or all high, LSFO should be adopted from the view of social welfare. The profits and carbon emissions of the supply chain in the Nash game are higher than those in the Stackelberg games. When environmental concern is low, the social welfare of the supply chain in the Nash game is greater than that in the Stackelberg games; otherwise, it is less. The obtained results can help governments formulate policies and ships make emission reduction technology decisions according to their own interests.
]]>Mathematics doi: 10.3390/math12060847
Authors: Ivan Arraut Wilson Rosado Victor Leong
The Revenue Management (RM) problem in airlines for a fixed capacity, single resource and two classes has been solved before by using a standard formalism. In this paper we propose a model for RM by using the semi-classical approach of the Quantum Harmonic Oscillator. We then extend the model to include external factors affecting the people’s decisions, particularly those where collective decisions emerge.
]]>Mathematics doi: 10.3390/math12060846
Authors: Wenfa Qi Xinquan Yu Xiaolong Li Shuangyong Kang
Screen photos often suffer from moiré patterns, which significantly affect their visual quality. Although many deep learning-based methods for removing moiré patterns have been proposed, they fail to recover images with complex textures and heavy moiré patterns. Here, we focus on text images with heavy moiré patterns and propose a new demoiré approach, incorporating frequency-domain peak filtering and spatial-domain visual quality enhancement. We find that the content of the text image mainly lies in the central region, whereas the moiré pattern lies in the peak region, in the frequency domain. Based on this observation, a peak-filtering algorithm and a central region recovery strategy are proposed to accurately locate and remove moiré patterns while preserving the text parts. In addition, to further remove the noisy background and paint the missing text parts, an image enhancement algorithm utilising the Otsu method is developed. Extensive experimental results show that the proposed method significantly removes severe moiré patterns from images with better visual quality and lower time cost compared to the state-of-the-art methods.
]]>Mathematics doi: 10.3390/math12060845
Authors: Jorge De Andrés-Sánchez
Estimating loss reserves is a crucial activity for non-life insurance companies. It involves adjusting the expected evolution of claims over different periods of active policies and their fluctuations. The chain-ladder (CL) technique is recognized as one of the most effective methods for calculating claim reserves in this context. It has become a benchmark within the insurance sector for predicting loss reserves and has been adapted to estimate variability margins. This variability has been addressed through both stochastic and possibilistic analyses. This study adopts the latter approach, proposing the use of the CL framework combined with intuitionistic fuzzy numbers (IFNs). While modeling with fuzzy numbers (FNs) introduces only epistemic uncertainty, employing IFNs allows for the representation of bipolar data regarding the feasible and infeasible values of loss reserves. In short, this paper presents an extension of the chain-ladder technique that estimates the parameters governing claim development through intuitionistic fuzzy regression with symmetric triangular IFNs. Additionally, it compares the results obtained with this method with those derived from the stochastic chain ladder of England and Verrall.
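For readers unfamiliar with the crisp baseline, the classical chain-ladder computation (volume-weighted development factors, projection of the lower triangle, and the resulting reserve) can be sketched on a hypothetical run-off triangle; the intuitionistic fuzzy regression proposed in the paper replaces these point estimates, not this overall structure.

```python
import numpy as np

# Hypothetical 4x4 cumulative claims run-off triangle (rows: accident years,
# columns: development years; NaN marks the unobserved lower-right part).
tri = np.array([
    [100.0, 150.0, 180.0, 190.0],
    [110.0, 168.0, 200.0, np.nan],
    [120.0, 175.0, np.nan, np.nan],
    [130.0, np.nan, np.nan, np.nan],
])
n = tri.shape[0]

# Volume-weighted development factors f_j = sum_i C_{i,j+1} / sum_i C_{i,j}
f = []
for j in range(n - 1):
    rows = n - 1 - j   # accident years observed in both columns j and j+1
    f.append(tri[:rows, j + 1].sum() / tri[:rows, j].sum())

# Project the lower triangle and read off the total loss reserve
full = tri.copy()
for i in range(n):
    for j in range(n - 1):
        if np.isnan(full[i, j + 1]):
            full[i, j + 1] = full[i, j] * f[j]

latest = sum(tri[i, n - 1 - i] for i in range(n))   # latest observed diagonal
reserve = full[:, -1].sum() - latest
print(round(reserve, 1))  # 171.8
```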
]]>Mathematics doi: 10.3390/math12060844
Authors: Ming Wu Xin Li Yuhao Chen
As economic fluctuations and market uncertainty intensify, supply chain members face enormous challenges. To explore the role of revenue-sharing contracts in supply chain members with different risk preferences, we study the risk-averse two-stage supply chain coordination in a revenue-sharing contract under three different scenarios: the supplier is risk-averse and the retailer is risk-neutral, or the retailer is risk-averse and the supplier is risk-neutral, or both are risk-averse. We find that the revenue-sharing contract mechanism allows the supplier to offer a lower wholesale price, effectively bearing part of the retailer’s cost risk. In return, the retailer compensates the supplier with a larger portion of their revenue, and the lower wholesale price also stimulates the retailer’s desire to order more products. In addition, risk aversion always reduces the optimal order quantity in the supply chain. Interestingly, when the retailer’s risk aversion level is low, the supplier charges a higher wholesale price under the risk-averse supply chain than that under the risk-neutral supply chain. However, if the retailer’s risk aversion level is high enough, the supplier should charge a lower price to stimulate the retailer under the risk-averse supply chain to retain the order size to maintain the channel profit.
]]>Mathematics doi: 10.3390/math12060843
Authors: Konstantin E. Starkov Alexander P. Krishchenko
We examine the ultimate dynamics of the four-dimensional model describing interactions between host cells, immune cells, tumor cells, and immune-tumor conjugate cells proposed by Abernethy and Gooding in 2018. In our paper, the ultimate upper bounds for all variables of this model are obtained. Formulas for positively invariant sets are deduced. Using these results, we establish conditions for the existence of the global attractor, derive formulas for its location, and present conditions under which immune and immune-tumor conjugate cells asymptotically die out. Next, we study the equilibrium points, including the stability properties of most of them. We discuss the existence of very low cancer-burden equilibrium points. Then, parametric conditions are derived under which the derivative of the density of the immune-tumor conjugate cell population eventually tends to zero; this mathematically rigorously confirms the correctness of the application of model reduction for this model in studies of its ultimate dynamics. In the final section, we summarize the results of this work and outline how to continue this study.
]]>Mathematics doi: 10.3390/math12060842
Authors: Norah Alshehri Mohammed Guediri
In this article, we investigate Ricci solitons occurring on spacelike hypersurfaces of Einstein Lorentzian manifolds. We give the necessary and sufficient conditions for a spacelike hypersurface of a Lorentzian manifold, equipped with a closed conformal timelike vector field ξ¯, to be a gradient Ricci soliton having its potential function as the inner product of ξ¯ and the timelike unit normal vector field to the hypersurface. Moreover, when the ambient manifold is Einstein and the hypersurface is compact, we establish that, under certain straightforward conditions, the hypersurface is an extrinsic sphere, that is, a totally umbilical hypersurface with a non-zero constant mean curvature. In particular, if the ambient Lorentzian manifold has a constant sectional curvature, we show that the compact spacelike hypersurface is essentially a round sphere.
]]>Mathematics doi: 10.3390/math12060841
Authors: István Mihálcz Zsolt T. Kosztyán
Large, powerful corporations were formerly solely and exclusively responsible for supplies, manufacturing, and distribution; however, the supply chain has undergone significant transformations over the last half-century. Almost all supply chain processes are currently outsourced, owing to the initiatives of cutting-edge, contemporary businesses. According to a compilation of studies, analysts, and news sources, the level of risk associated with modern supply chains is considerably higher than the majority of supply chain managers believe. Supply chain vulnerabilities continue to pose a substantial obstacle for a great number of organizations. Neglecting to adequately address these risks—encompassing natural disasters, cyber assaults, acts of terrorism, the credit crisis, pandemic scenarios, and war—could result in substantial reductions in metrics such as profitability, productivity, revenue, and competitive advantage. Unresolved concerns persist with respect to the risk assessment of the supply chain. The purpose of this article is to propose a framework for risk evaluation that can be efficiently applied to the evaluation of hazards within the supply chain. This research study significantly enhances the existing knowledge base by offering supply chain managers a pragmatic tool to evaluate their processes, regardless of the mathematical foundations or the variety of variables utilized in risk assessment. The outcomes of multiple aggregation methods are compared using a case study from an automotive EMS production; the conclusions are validated by risk and FMEA specialists from the same factory.
]]>Mathematics doi: 10.3390/math12060839
Authors: Manuela Panoiu Caius Panoiu Petru Ivascanu
The power supply is crucial in the present day due to the negative impacts of poor power quality on the electric grid. In this research, we employed deep learning methods to investigate the power factor, which is a significant indicator of power quality. A multi-step forecast, extending beyond the immediate horizon, was developed for the power factor in the power supply installation of a hot rolling mill. This was conducted using data obtained from the respective electrical supply system. The forecast was developed via hybrid RNNs (recurrent neural networks) incorporating LSTM (long short-term memory) and GRU (gated recurrent unit) layers. This research utilized hybrid recurrent neural network designs with deep learning methods to build several power factor models. These layers have advantages for time series forecasting. After conducting time series forecasting, qualitative indicators of the prediction were identified, including the sMAPE (Symmetric Mean Absolute Percentage Error) and the regression coefficient. In this paper, the authors examined the quality of the applied models and forecasts utilizing these indicators, both in the short term and the long term.
]]>Mathematics doi: 10.3390/math12060840
Authors: Yao Ma Zhong-Min Wang
In this study, the Kelvin–Voigt viscoelastic constitutive relationship is used to investigate the vibration characteristics and stability of a functionally graded viscoelastic (FGV) fluid-conveying pipe with initial geometric defects under thermal–magnetic coupling fields. First, the nonlinear dimensionless differential equations of motion are derived by applying Timoshenko beam theory. Second, by solving the equilibrium position of the system, the nonlinear term in the differential equations of motion is approximated as the sum of the longitudinal displacement at the current time and the longitudinal displacement relative to the equilibrium position, and the equations are linearized. Third, these equations are discretized using the Galerkin method and are numerically solved under simply supported conditions. Finally, the effects of dimensionless temperature field parameters, dimensionless magnetic field parameters, thermal–magnetic coupling, initial geometric defect types, and the power-law exponent on the complex frequency of the pipe are examined. Results show that increasing the magnetic field intensity enhances the critical velocity of first-order mode instability, whereas a heightened temperature variation reduces the critical velocity of first-order divergence instability. Under thermal–magnetic fields, when the magnetic field intensity and temperature difference are increased simultaneously, their effects on the complex frequency can partially offset each other. Increasing the initial geometric defect amplitude increases the imaginary parts of the complex frequencies; however, for pipes with different types of initial geometric defects, the influence is most pronounced only for a certain mode order.
]]>Mathematics doi: 10.3390/math12060838
Authors: Rashad M. Asharabi Muaadh Almahalebi
In this paper, we deduce some hyperstability results for a generalized class of homogeneous Pexiderized functional equations, expressed as ∑ρ∈Γ f(xρ.y) = ℓf(x) + ℓg(y), x,y∈M, which is inspired by the concept of Ulam stability. Indeed, we prove that a function f that approximately satisfies an equation can, under certain conditions, be considered an exact solution. The domain M is a monoid (a semigroup with a neutral element), Γ is a finite subgroup of the automorphism group of M, ℓ is the cardinality of Γ, and f,g:M→G, where (G,+) denotes an ℓ-cancellative commutative group. We also examine the hyperstability of the given equation in its inhomogeneous version ∑ρ∈Γ f(xρ.y) = ℓf(x) + ℓg(y) + ψ(x,y), x,y∈M, where ψ:M×M→G. Additionally, we apply the main results to elucidate the hyperstability of various functional equations with involutions.
]]>Mathematics doi: 10.3390/math12060837
Authors: Denis Byzov Petr Martyshko
In this paper, the derivation of a concise closed form for the gravitational field of a polyhedron is presented. This formula forms the basis of an algorithm for calculating the gravitational field of a body of arbitrary shape with high accuracy. Based on this algorithm, a method for gravity data inversion (creating density models of the Earth’s crust) has been developed. The algorithm can accept either regular or irregular polyhedron discretization for density model creation. The models are approximated with dense irregular grids, the elements of which are polyhedrons. When performing gravity data inversion, we face three problems: topography with large amplitude, the sphericity of the planet, and a long computation time because of the large amount of data. In our previous works, we have already considered these problems separately, but without explaining the details of the computation of the closed-form solution for a polyhedron. In this paper, we present for the first time a computationally efficient numerical method for the inversion of gravity data based on topography. The method is based on a closed-form expression for the gravity field of a spherical density model of the Earth’s crust with the upper topography layer, and provides great accuracy and speed of calculation. There are no restrictions on the model’s geometry or the gravity data grid. As a case study, a spherical density model of the Earth’s crust of the Urals is created.
]]>Mathematics doi: 10.3390/math12060836
Authors: Shujin Qin Xinkai Xie Jiacun Wang Xiwang Guo Liang Qi Weibiao Cai Ying Tang Qurra Tul Ann Talukder
The growing emphasis on ecological preservation and natural resource conservation has significantly advanced resource recycling, facilitating the realization of a sustainable green economy. Essential to resource recycling is the pivotal stage of disassembly, wherein the efficacy of disassembly tools plays a critical role. This work investigates the impact of disassembly tools on disassembly duration and formulates a mathematical model aimed at minimizing workstation cycle time. To solve this model, we employ an optimized advantage actor-critic algorithm within reinforcement learning. Furthermore, we utilize the CPLEX solver to validate the model’s accuracy. The experimental results obtained from CPLEX not only confirm the algorithm’s viability but also enable a comparative analysis against both the original advantage actor-critic algorithm and the actor-critic algorithm. This comparison verifies the superiority of the proposed algorithm.
]]>Mathematics doi: 10.3390/math12060835
Authors: Rodrigo Matos Carnier Yue Li Yasutaka Fujimoto Junji Shikata
The advent of the Internet of Things brought a new age of interconnected device functionality, ranging from personal devices and smart houses to industrial control systems. However, increased security risks have emerged in its wake, in particular self-replicating malware that exploits weak device security. Studies modeling malware epidemics aim to predict malware behavior in essential ways, usually assuming a number of simplifications, but they invariably simplify the single most important subdynamics of malware: random propagation. In our previous work, we derived and presented the first exact mathematical model of random propagation, defined as the subdynamics of propagation of a malware model. The propagation dynamics were derived for the SIS model in discrete form. In this work, we generalize the methodology of derivation and extend it to any Markov chain model of malware based on random propagation. We also propose a second method of derivation based on modifying the simplest form of the model and adjusting it for more complex models. We validated the two methodologies on three malware models, using simulations to confirm the exactness of the propagation dynamics. Stochastic errors of less than 0.2% were found in all simulations. In comparison, the standard nonlinear model of propagation (present in ∼95% of studies) has an average error of 5% and a maximum of 9.88% against simulations. Moreover, our model has a low mathematical trade-off of only two additional operations, making it a proper substitute for the standard literature model whenever the dynamical equations are solved numerically.
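The gap between an exact treatment of random propagation and the standard mass-action term can already be seen in a single step of a toy discrete SIS model in which each infected node contacts one uniformly random node. Under that assumption the expected number of newly infected nodes is S(1 − (1 − 1/N)^I) exactly, whereas the standard nonlinear model uses I·S/N. The sketch below is an illustrative toy under these stated assumptions, not the authors' derivation.

```python
import random

def simulate_step(S, I, N, trials=20000, seed=0):
    """Monte Carlo estimate of E[new infections] in one step: each of I infected
    nodes contacts one uniformly random node; the first S indices are susceptible."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        hit = set()
        for _ in range(I):
            target = rng.randrange(N)
            if target < S:
                hit.add(target)
        total += len(hit)
    return total / trials

N, S, I = 100, 60, 40
exact = S * (1 - (1 - 1 / N) ** I)   # exact expectation of distinct susceptibles hit
standard = I * S / N                 # standard mass-action approximation
mc = simulate_step(S, I, N)
print(round(exact, 2), round(standard, 2), round(mc, 2))
```

The Monte Carlo estimate tracks the exact expectation closely, while the mass-action term overshoots because it double-counts collisions of contacts on the same susceptible node.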
]]>Mathematics doi: 10.3390/math12060834
Authors: Fuyong Zhang Kuan Li Ziliang Ren
Learning-based classifiers are found to be vulnerable to attacks by adversarial samples. Some works have suggested that ensemble classifiers tend to be more robust than single classifiers against evasion attacks. However, recent studies have shown that this is not necessarily the case under more realistic settings of black-box attacks. In this paper, we propose a novel ensemble approach to improve the robustness of classifiers against evasion attacks by using diversified feature selection and a stochastic aggregation strategy. Our proposed scheme includes three stages. Firstly, the adversarial feature selection algorithm is used to select, one at a time, features that trade off classification accuracy against robustness, adding each to the feature vector bank. Secondly, each feature vector in the bank is used to train a base classifier, which is added to the base classifier bank. Finally, m classifiers from the classifier bank are randomly selected for decision-making. In this way, each classifier in the base classifier bank has good performance in terms of classification accuracy and robustness, and it is also difficult to estimate the gradients of the ensemble accurately. Thus, the robustness of classifiers can be improved without reducing the classification accuracy. Experiments performed using both Linear and Kernel SVMs on genuine datasets for spam filtering, malware detection, and handwritten digit recognition demonstrate that our proposed approach significantly improves the classifiers’ robustness against evasion attacks.
]]>Mathematics doi: 10.3390/math12060833
Authors: Sana Naz Scott Uk-Jin Lee
In a blockchain network, a rule set called the consensus mechanism is used to create and finalize a block. In a proof-of-stake (PoS) consensus-based blockchain network, nodes become validators, minters, or stakeholder nodes to complete the consensus mechanism. In these networks, when a node becomes a validator node, its details need to be saved, because the details of the validators are used in the network for many important decisions, such as selecting block proposers for the consensus process. In this paper, we present Sea Shield, which uses a validator chain to save a node’s information when it becomes a validator or relinquishes its responsibility as a validator in a PoS-based blockchain network. The validator chain is a blockchain that can run alongside the main chain of a PoS-based blockchain. The internal features of the validator chain are similar to those of the blockchain. We designed and simulated a consensus mechanism to create and finalize the blocks for the validator chain with no forks. We present a process by which a node may join or withdraw as a validator in a PoS-based blockchain network to improve the overall security of the main chain’s consensus process.
]]>Mathematics doi: 10.3390/math12060832
Authors: Junyan Cui Wei Pan Haipeng Wang
In this paper, a Toeplitz construction method based on eigenvalues and eigenvectors is proposed in combination with traditional denoising algorithms, including the fractional low-order moment (FLOM), phased fractional low-order moment (PFLOM), and correntropy-based correlation (CRCO) methods. It can improve the direction of arrival (DOA) estimation of signals in impulsive noise. Firstly, the algorithm performs eigenvalue decomposition on the received covariance matrix to obtain eigenvectors and eigenvalues, and then the Toeplitz matrix is created according to the eigenvectors corresponding to its eigenvalues. Secondly, the spatial averaging method is used to obtain an unbiased estimate of the Toeplitz matrix, which is then weighted and summed based on the corresponding eigenvalues. Next, the noise subspace of the Toeplitz matrix is reconstructed so that it retains less angle information. Finally, the DOA of the coherent signal is estimated using the Multiple Signal Classification (MUSIC) algorithm. The improved method based on the Toeplitz matrix can not only suppress the effect of impulsive noise but also solve the problem of aperture loss through its decoherence. A series of simulations shows that the proposed methods perform better than other algorithms.
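The final MUSIC stage that these preprocessing variants feed into can be sketched for a uniform linear array as follows. The Toeplitz reconstruction, spatial averaging, and impulsive-noise suppression that constitute the paper's contribution are omitted here; the array size, source directions, and Gaussian noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
M, d, snapshots = 8, 0.5, 200          # sensors, spacing (wavelengths), samples
angles_true = [-20.0, 30.0]            # illustrative source directions (degrees)

def steering(theta_deg):
    k = 2 * np.pi * d * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(M))

A = np.column_stack([steering(t) for t in angles_true])
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
noise = 0.1 * (rng.standard_normal((M, snapshots)) + 1j * rng.standard_normal((M, snapshots)))
X = A @ S + noise

R = X @ X.conj().T / snapshots         # sample covariance matrix
w, V = np.linalg.eigh(R)               # eigenvalues in ascending order
En = V[:, :-2]                         # noise subspace (number of sources assumed known)

# MUSIC pseudo-spectrum: peaks where steering vectors are orthogonal to En
grid = np.arange(-90.0, 90.0, 0.5)
p = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid])
loc = [i for i in range(1, len(p) - 1) if p[i] > p[i - 1] and p[i] > p[i + 1]]
top2 = sorted(loc, key=lambda i: p[i])[-2:]
doa = sorted(grid[i] for i in top2)
print(doa)   # close to the true directions
```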
]]>Mathematics doi: 10.3390/math12060831
Authors: Alexander Bulinski
The suboptimal procedure under consideration, based on the MDR-EFE algorithm, provides sequential selection of relevant (in a specified sense) factors affecting the studied random response, which in general is non-binary. The model is not assumed to be linear, and the joint distribution of the factor vector and the response is unknown. The set of relevant factors has a specified cardinality. It is proved that, under certain conditions, the mentioned forward selection procedure gives a random set of factors that asymptotically (with probability tending to one as the number of observations grows to infinity) coincides with the “oracle” one. The latter means that the random set obtained with this algorithm approximates the collection of features that would be identified if the joint distribution of the feature vector and the response were known. For this purpose, statistical estimators of the prediction error functional of the studied response are proposed. They involve a new version of regularization. This permits us to guarantee not only the central limit theorem for the normalized estimators, but also to find the convergence rate of their first two moments to the corresponding moments of the limiting Gaussian variable.
]]>Mathematics doi: 10.3390/math12060830
Authors: Jihwan Kim Pyung Kim Younho Lee Daeseon Choi
This paper delves into the advantages of authentication algorithms employing self-sovereign identity, highlighting a reduced communication overhead and the elimination of single points of failure. However, it acknowledges the vulnerability of digital wallets to real-world issues like loss or theft. To address these challenges, we propose an efficient key backup and recovery protocol based on the FROST threshold signature algorithm. This protocol involves trusted third parties and backup devices, ensuring secure secret key sharing and rapid key recovery. Performance evaluations, including key recovery time, demonstrate the protocol’s efficiency and reliability, bolstering the overall robustness of self-sovereign identity systems.
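FROST-style key backup ultimately rests on Shamir t-of-n secret sharing: the wallet secret is split into shares held by trusted third parties and backup devices, and any t of them suffice for recovery. The sketch below shows plain Shamir sharing over a prime field; actual FROST operates over an elliptic-curve group and never reconstructs the full key at a single party, so this is a conceptual illustration only.

```python
import random

P = 2**127 - 1  # a Mersenne prime; real deployments use an elliptic-curve group order

def split(secret, n, t, seed=42):
    """Shamir t-of-n sharing: shares are points on a random degree-(t-1) polynomial."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):   # Horner evaluation modulo P
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

key = 123456789
shares = split(key, n=5, t=3)
print(recover(shares[:3]) == key, recover(shares[1:4]) == key)  # True True
```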
]]>Mathematics doi: 10.3390/math12060829
Authors: Himanshu Singh
Gaussian Radial Basis Function Kernels are the most often employed kernel functions in artificial intelligence, typically providing optimal results in contrast to their respective counterparts. However, our understanding of the utilization of the Generalized Gaussian Radial Basis Function across different machine learning algorithms, such as kernel regression, support vector machines, and pattern recognition via neural networks, is incomplete. The results delivered by the Generalized Gaussian Radial Basis Function Kernel in the previously mentioned applications remarkably outperform those of the Gaussian Radial Basis Function Kernel, the Sigmoid function, and the ReLU function in terms of accuracy and misclassification. This article provides a concrete illustration of the utilization of the Generalized Gaussian Radial Basis Function Kernel as mentioned earlier. We also provide an explicit description of the reproducing kernel Hilbert space by embedding the Generalized Gaussian Radial Basis Function as an L2-measure, which is utilized in implementing the support vector machine analysis. Finally, we present the conclusions drawn from the empirical experiments considered in the manuscript, along with possible future directions in terms of the spectral decomposition of the Generalized Gaussian Radial Basis Function.
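The abstract does not spell out the kernel's functional form, but a commonly used generalization replaces the squared exponent of the Gaussian RBF by a tunable power β, i.e., k(x, y) = exp(−(‖x − y‖/σ)^β), which recovers the ordinary Gaussian kernel at β = 2. The sketch below fits a small kernel ridge regression with two values of β; the form, σ, and β here are assumptions for illustration, not necessarily the paper's exact kernel.

```python
import numpy as np

def gen_gauss_kernel(X, Y, sigma=1.0, beta=2.0):
    """k(x, y) = exp(-(||x - y|| / sigma)**beta); beta = 2 is the ordinary Gaussian RBF."""
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return np.exp(-(d / sigma) ** beta)

x = np.linspace(0.0, 2.0 * np.pi, 30)[:, None]
y = np.sin(x).ravel()
errs = {}
for beta in (2.0, 1.0):                                      # Gaussian vs. Laplacian-type exponent
    K = gen_gauss_kernel(x, x, sigma=1.0, beta=beta)
    alpha = np.linalg.solve(K + 1e-6 * np.eye(len(x)), y)    # kernel ridge fit
    errs[beta] = np.abs(K @ alpha - y).max()
print({k: round(v, 6) for k, v in errs.items()})             # both fits are near-exact
```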
]]>Mathematics doi: 10.3390/math12060828
Authors: Valerio Antonio Pamplona Salomon Luiz Flavio Autran Monteiro Gomes
Consistency checking is one of the reasons for the Analytic Hierarchy Process (AHP) leadership in publications on multiple criteria decision-making (MCDM). Consistency is a measure of the quality of data input in the AHP. The theory of AHP provides indicators for the consistency of data. When an indicator is out of the desired interval, the data must be reviewed. This article presents a method for improving consistency when reviewing the data input in an AHP application. First, a conventional literature review on the theme is presented. Then, an innovative artificial intelligence tool is used to confirm the main result of the conventional review: this topic is still attracting interest from AHP and MCDM researchers. Finally, a simple technique for consistency improvement is presented and illustrated with a practical case of MCDM: supplier selection by a company.
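The consistency indicator referred to here is Saaty's consistency ratio: for an n×n pairwise comparison matrix, CI = (λ_max − n)/(n − 1) and CR = CI/RI(n), with CR > 0.1 signalling that the judgments should be reviewed. A minimal sketch on a hypothetical 3×3 supplier-comparison matrix:

```python
import numpy as np

# Saaty's random indices RI(n) for small matrix orders
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41}

def consistency_ratio(A):
    """CR = CI / RI(n), where CI = (lambda_max - n) / (n - 1)."""
    n = A.shape[0]
    lam = np.linalg.eigvals(A).real.max()   # principal eigenvalue (>= n)
    ci = (lam - n) / (n - 1)
    return ci / RI[n]

# Hypothetical pairwise comparison matrix for three candidate suppliers
A = np.array([
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
])
cr = consistency_ratio(A)
print(round(cr, 3), cr < 0.1)   # CR well below Saaty's 0.1 threshold
```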
]]>Mathematics doi: 10.3390/math12060827
Authors: Meiyan Wang Leilei Han Yuting Ding
Forest pests and diseases can diminish forest biodiversity, damage forest ecosystem functions, and have an impact on water conservation. Therefore, it is necessary to analyze the interaction mechanism between plants and pests. In this paper, the prevention and control of a specific pest—namely the larva of Paranthrene tabaniformis (Rott.) (hereinafter referred to as larva)—are studied. Based on the invasion mechanism of the larva in poplar, we establish a delayed differential equation and analyze the existence and stability of equilibria. Next, we assess the existence of a Hopf bifurcation to determine the range of parameters that ensures that the equilibria are stable. Then, we select a set of parameters to verify the results of the stability analysis. Finally, we provide biological explanations and effective theoretical control methods for poplar pests and diseases.
]]>Mathematics doi: 10.3390/math12060826
Authors: Dušan P. Nikezić Dušan S. Radivojević Ivan M. Lazović Nikola S. Mirkov Zoran J. Marković
In order to better predict the high aerosol concentrations associated with air pollution and climate change, a machine learning model was developed using transfer learning and the segmentation process of global satellite images. The main concept of transfer learning lies in convolutional neural networks and works by initializing the network with already trained model weights so that the weights adapt better when the network is trained on a different dataset. The transfer learning technique was tested with the ResNet3D-101 model pre-trained on a 2D ImageNet dataset. This model has performed well for contrail detection to assess climate impact. Aerosol distributions can be monitored via satellite remote sensing. Satellites can monitor some aerosol optical properties, like aerosol optical thickness. Aerosol optical thickness snapshots were the input dataset for the model and were obtained from NASA’s Terra-MODIS satellite; the output images were segmented by comparing the pixel values with a threshold value of 0.8 for aerosol optical thickness. Hyperparameter optimization finds a tuple of hyperparameters that yields an optimal model that minimizes a predefined loss function on given independent data. The model structure was adjusted to improve the performance of the model by applying methods and hyperparameter optimization techniques such as grid search, batch size, threshold, and input length. According to the criteria defined by the authors, the distance domain criterion and the time domain criterion, the developed model is capable of generating adequate data and finding patterns in the time domain. As observed from the comparison of the relative coefficients for the criteria metrics proposed by the authors, ddc and dtc, the deep learning model based on ConvLSTM layers developed in our previous studies has better performance than the model developed in this study with transfer learning.
]]>Mathematics doi: 10.3390/math12060825
Authors: Cristiano Maria Verrelli Fabio Della Rossa
The COVID-19 epidemic has required countries to implement different containment strategies to limit its spread, like strict or weakened national lockdown rules and the application of age-stratified vaccine prioritization strategies. These interventions have in turn modified the age-dependent patterns of social contacts. In our recent paper, starting from the available age-structured real data at the national level, we identified, for the Italian case, specific virulence parameters for a two-age-structured COVID-19 epidemic compartmental model (under 60, and 60 years and over) in six different diseases transmission scenarios under concurrently adopted feedback interventions. An interpretation of how each external scenario modifies the age-dependent patterns of social contacts and the spread of COVID-19 disease has been accordingly provided. In this paper, which can be viewed as a sequel to the previous one, we mainly apply the same general methodology therein (involving the same dynamic model) to new data covering the three subsequent additional scenarios: (i) a mitigated coordinated intermittent regional action in conjunction with the II vaccination phase; (ii) a super-attenuated coordinated intermittent regional action in conjunction with the II vaccination phase; and (iii) a last step towards normality in conjunction with the start of the III vaccination phase. As a new contribution, we show how meaningful updated information can be drawn out, once the identification of virulence parameters, characterizing the two age groups within the latest three different phases, is successfully carried out. Nevertheless, differently from our previous paper, the global optimization procedure is carried out here with the number of susceptible individuals in each scenario being left free to change, to account for reinfection and immunity due to vaccination. 
Not only do the slightly different estimates we obtain for the previous scenarios not impact any of the previous considerations (and thus illustrate the robustness of the procedure), but also, and mainly, the new results provide a meaningful picture of the evolution of social behaviors, along with the goodness of strategic interventions.
]]>Mathematics doi: 10.3390/math12060824
Authors: Taher S. Hassan Rami Ahmad El-Nabulsi Naveed Iqbal Amir Abdel Menaem
In this study, novel criteria are derived to ensure the oscillation of solutions in nonlinear advanced noncanonical dynamic equations. The obtained results are reminiscent of the criteria proposed by Hille and Ohriska for canonical dynamic equations. Additionally, this paper addresses a previously unresolved issue found in numerous existing works in the literature on advanced dynamic equations. This study provides a range of illustrative examples to showcase the precision, practicality, and adaptability of the obtained findings.
]]>Mathematics doi: 10.3390/math12060822
Authors: Dmitry B. Rokhlin Gennady A. Ougolnitsky
We consider an investor whose capital is divided into an industrial investment xt and cash yt, which together satisfy a nonlinear deterministic dynamical system. The investor fixes the fractions of capital to be invested, withdrawn, and consumed, as well as the production factor parameter. The government fixes a subsidy fraction for industrial investments and a tax fraction for the capital outflow. We study a Stackelberg game corresponding to the asymptotically stable equilibrium (x∗,y∗) of the mentioned dynamical system. In this game, the government (the leader) uses subsidies to create incentives for the investor (the follower) to maintain the desired level of x∗, and uses taxes to achieve this at minimal cost. The investor’s aim is to maximize the difference between the consumption and the price of the production factor at equilibrium. We present an explicit analytical solution of the specified Stackelberg game. Based on this solution, we introduce the notion of a fair industrial investment level, which is costless for the government, and show that it can produce realistic results using a case study of water production in Lahore.
]]>Mathematics doi: 10.3390/math12060823
Authors: Saad Al Ahmadi Farah Muhammad Haya Al Dawsari
In a world where communication is essential for human connection, the deaf community encounters distinct barriers. Sign language, their main communication method, is rich in hand gestures but not widely understood outside their community, necessitating interpreters. Existing solutions for sign language recognition depend on extensive datasets for model training, risking overfitting with complex models. The scarcity of details on dataset sizes and model specifics in studies complicates the scalability and verification of these technologies. Furthermore, the omission of precise accuracy metrics in some research leaves the effectiveness of gesture recognition by these models in question. The key phases of this study are data collection, data preprocessing, feature extraction using a CNN, and finally transfer-learning-based classification. The purpose of utilizing a CNN with transfer learning is to tap into pre-trained neural networks, reusing learned patterns to optimize performance on new, related tasks, thus accelerating development and improving accuracy. Data preprocessing involves resizing of images, normalization, standardization, color space conversion, augmentation, and noise reduction. This phase prunes the image dataset and improves the efficiency of the classifier. In the subsequent phase, feature extraction is performed using the convolution layer, feature mapping, pooling layer, and dropout layer to obtain refined features from the images. These refined features are used for classification with ResNet. Three different datasets are utilized for the assessment of the proposed model. The ASL-DS-I dataset includes a total of 5832 images of hand gestures, whereas ASL-DS-II contains 54,049 images and ASL-DS-III includes 7857 images adopted from specified web links. The obtained results have been evaluated using standard metrics, including the ROC curve, precision, recall, and F-measure.
Meticulous experimental analysis and comparison with three standard baseline methods demonstrate that the proposed model achieves impressive recognition accuracies of 96.25%, 95.85%, and 97.02% on ASL-DS-I, ASL-DS-II, and ASL-DS-III, respectively.
]]>Mathematics doi: 10.3390/math12060821
Authors: Jiejie Fan Xiaojuan Ban Manman Yuan Wenxing Zhang
To reduce the communication and computation overhead of neural networks, a novel pinning event-triggered scheme (PETS) is developed in this paper, which enables pinning synchronization of uncertain coupled memristive neural networks (CMNNs) under limited resources. Time-varying delays, uncertainties, and mismatched parameters are all considered, which makes the system more realistic. In addition, from the low-energy-cost point of view, an algorithm for pinned-node selection is designed to further investigate the new event-triggered function under limited communication resources. Meanwhile, based on the PETS and following the Lyapunov functional method, sufficient conditions for the pinning exponential stability of the proposed coupled error system are formulated, and the analysis of the self-triggered method shows that our method can efficiently avoid Zeno behavior under the newly determined triggering conditions, which contributes to better PETS performance. Extensive experiments demonstrate that the PETS significantly outperforms existing schemes in terms of solution quality.
]]>Mathematics doi: 10.3390/math12060820
Authors: Kyeongjun Lee
In recent years, various forms of progressive hybrid censoring schemes (PHCS) have gained significant traction in survival and reliability analysis due to their versatility. However, these PHCS variants are often characterized by complexity stemming from the multitude of parameters involved in their specification. Consequently, the primary objective of this paper is to propose a unified approach, termed the combined type II progressive hybrid censoring scheme (ComT2PHCS), capable of encompassing several existing PHCS variations. Our analysis focuses specifically on the exponential distribution (ExDist). Bayesian inference techniques are employed to estimate the parameters of the ExDist under the ComT2PHCS. Additionally, we conduct fundamental distributional analyses and likelihood inference procedures. We derive the conditional moment-generating function (CondMGF) of the maximum likelihood estimator (MLE) for the parameters of the ExDist under the ComT2PHCS, and use it to obtain the distribution of the MLE. Finally, we provide an illustrative example to elucidate the inference methods derived in this paper.
]]>Mathematics doi: 10.3390/math12060819
Authors: Kalaiarasi Kalaichelvan Soundaria Ramalingam Prasantha Bharathi Dhandapani Víctor Leiva Cecilia Castro
In this article, we present a novel methodology for inventory management in the pharmaceutical industry, considering the nature of its supply chain. Traditional inventory models often fail to capture the particularities of the pharmaceutical sector, characterized by limited storage space, product degradation, and trade credits. To address these particularities, using fuzzy logic, we propose models that are adaptable to real-world scenarios. The proposed models are designed to reduce total costs for both vendors and clients, a gap not explored in the existing literature. Our methodology employs pentagonal fuzzy number (PFN) arithmetic and Kuhn–Tucker optimization. Additionally, the integration of the naive Bayes (NB) classifier and the use of the Weka artificial intelligence suite increase the effectiveness of our model in complex decision-making environments. A key finding is the high classification accuracy of the model, with the NB classifier correctly categorizing approximately 95.9% of the scenarios, indicating operational efficiency. This finding is complemented by the model's capability to determine the optimal production quantity, considering cost factors related to manufacturing and transportation, which is essential for minimizing overall inventory costs. Our methodology, based on machine learning and fuzzy logic, enhances inventory management in dynamic sectors like the pharmaceutical industry. While our focus is on a single-product scenario between suppliers and buyers, future research will extend it to wider contexts, such as epidemic conditions and other applications.
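Entropy weighting and VIKOR ranking, as named in this abstract, follow standard formulas and can be sketched compactly. The decision matrix below is purely illustrative (it is not data from the paper), all criteria are treated as benefit criteria, and the compromise parameter v = 0.5 and non-degenerate columns are assumptions of the sketch:

```python
import math

def entropy_weights(X):
    """Objective criterion weights via Shannon entropy: criteria with more
    dispersion across alternatives receive larger weights."""
    m, n = len(X), len(X[0])
    k = 1.0 / math.log(m)
    w = []
    for j in range(n):
        col = [X[i][j] for i in range(m)]
        s = sum(col)
        p = [x / s for x in col]
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        w.append(1.0 - e)          # divergence degree
    total = sum(w)
    return [wj / total for wj in w]

def vikor(X, w, v=0.5):
    """VIKOR compromise index Q per alternative (lower Q = better).
    Assumes benefit criteria and non-constant S and R across alternatives."""
    m, n = len(X), len(X[0])
    best = [max(X[i][j] for i in range(m)) for j in range(n)]
    worst = [min(X[i][j] for i in range(m)) for j in range(n)]
    S, R = [], []
    for i in range(m):
        terms = [w[j] * (best[j] - X[i][j]) / (best[j] - worst[j])
                 for j in range(n)]
        S.append(sum(terms))       # group utility
        R.append(max(terms))       # individual regret
    Sb, Sw, Rb, Rw = min(S), max(S), min(R), max(R)
    return [v * (S[i] - Sb) / (Sw - Sb) + (1 - v) * (R[i] - Rb) / (Rw - Rb)
            for i in range(m)]
```

A row that scores best on every criterion receives Q = 0 and ranks first; the fuzzy (PFN) arithmetic of the paper would enter upstream, when the crisp decision matrix is produced.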
]]>Mathematics doi: 10.3390/math12060818
Authors: Yuxiang He Baisong Yang Chiawei Chu
Terrorism poses a significant threat to international peace and stability. The ability to predict potential casualties resulting from terrorist attacks, based on specific attack characteristics, is vital for protecting the safety of innocent civilians. However, conventional data sampling methods struggle to effectively address the challenge of data imbalance in textual features. To tackle this issue, we introduce a novel algorithm, GA-CatBoost-Weight, designed to predict whether terrorist attacks will lead to casualties among innocent civilians. Our approach begins with feature selection using the RF-RFE method, followed by leveraging the CatBoost algorithm to handle diverse modal features comprehensively and to mitigate data imbalance. Additionally, we employ a Genetic Algorithm (GA) to fine-tune hyperparameters. Experimental validation demonstrates the superior performance of our method, which achieves a sensitivity of 92.68% and an F1 score of 90.99% with fewer iterations. To the best of our knowledge, this is the first study to apply CatBoost to the prediction of terrorist attack outcomes.
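The GA hyperparameter-tuning step has the following generic shape. This is a sketch, not the paper's GA-CatBoost-Weight: the `fitness` function below is a hypothetical quadratic surrogate standing in for a cross-validated CatBoost score (e.g., F1 on the imbalanced data), and the gene encoding (learning rate, tree depth), operators, and rates are illustrative assumptions:

```python
import random

random.seed(7)

def fitness(lr, depth):
    """Hypothetical surrogate for a cross-validated CatBoost score; in the
    paper's pipeline this would train CatBoost with (lr, depth) and return,
    e.g., the F1 score. Peaks at lr = 0.1, depth = 6 by construction."""
    return -((lr - 0.1) ** 2) - 0.01 * (depth - 6) ** 2

def evolve(generations=30, pop_size=12):
    """Simple GA: truncation selection, uniform crossover, Gaussian mutation."""
    pop = [(random.uniform(0.01, 0.5), random.randint(2, 10))
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda g: fitness(*g), reverse=True)
        parents = scored[: pop_size // 2]            # elitist truncation
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)         # uniform crossover
            lr = random.choice([a[0], b[0]])
            depth = random.choice([a[1], b[1]])
            if random.random() < 0.3:                # mutation, clamped to bounds
                lr = min(0.5, max(0.01, lr + random.gauss(0, 0.05)))
                depth = min(10, max(2, depth + random.choice([-1, 1])))
            children.append((lr, depth))
        pop = parents + children
    return max(pop, key=lambda g: fitness(*g))
```

Because the top half of each generation survives unchanged, the best fitness found is monotone non-decreasing over generations.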
]]>Mathematics doi: 10.3390/math12060817
Authors: Konstantin Gorbunov Vassily Lyubetsky
The mathematical side of applied problems in multiple subject areas (biology, pattern recognition, etc.) reduces to the following discrete optimization problem: given a network and graphs at its leaves, find an assignment of graphs to the non-leaf nodes at which a given functional reaches its minimum. Even in the simplest case, such a problem is NP-hard, which forces unavoidable restrictions on the network, the graphs, or the functional. In this publication, the problem is addressed in the case where all graphs are so-called “structures”, i.e., directed loaded graphs consisting of paths and cycles, and the functional is the sum (over all edges of the network) of the distances between the structures at the endpoints of each edge. The distance itself equals the minimal length of a sequence of operations, drawn from a fixed list, whose composition transforms the structure at one endpoint of the edge into the structure at the other. The list of operations (and their costs) on such graphs is fixed. Under these conditions, the given discrete optimization problem is called the reconstruction problem. This paper presents novel algorithms for solving the reconstruction problem, along with full proofs of their low error and low polynomial complexity. For example, for a network, the problem is solved by a zero-error algorithm with linear computational complexity; for a tree, it is solved by an algorithm with a multiplicative error of at most two and second-order polynomial complexity.
]]>Mathematics doi: 10.3390/math12060816
Authors: Qi Zhang Bin Xu Man Yao Jiacun Wang Xiwang Guo Shujin Qin Liang Qi Fayang Lu
In the context of sustainable development strategies, the recycling of discarded products has become increasingly important with the development of electronic technology. Choosing the human–robot collaborative disassembly mode is the key to optimizing the disassembly process and ensuring maximum efficiency and benefits. To solve the human–robot cooperative parallel disassembly line balancing problem, a mixed integer programming model is established and verified with CPLEX. An improved Moth-Flame Optimization (IMFO) algorithm is proposed to speed up convergence and optimize the disassembly process of various products. The effectiveness of IMFO is evaluated on multiple cases and compared with other heuristics. The results of these comparisons provide insight into whether IMFO is the most appropriate algorithm for the problem presented.
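For orientation, the classic Moth-Flame Optimization update that IMFO builds on can be sketched as follows. This is the standard continuous MFO spiral update on a toy sphere function, not the paper's improved variant, which additionally requires an encoding/decoding step for the combinatorial disassembly-line problem; population size, iteration count, and bounds are illustrative:

```python
import math, random

random.seed(0)

def mfo(obj, dim=2, n_moths=20, iters=100, lb=-5.0, ub=5.0):
    """Minimal Moth-Flame Optimization: each moth spirals around a flame
    (a frozen copy of a ranked solution); the flame count shrinks linearly
    so the swarm shifts from exploration to exploitation."""
    moths = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(n_moths)]
    best, best_val = None, float("inf")
    for it in range(iters):
        ranked = sorted(moths, key=obj)
        if obj(ranked[0]) < best_val:
            best, best_val = ranked[0][:], obj(ranked[0])
        n_flames = round(n_moths - it * (n_moths - 1) / iters)
        flames = [m[:] for m in ranked[:n_flames]]   # freeze flames this iteration
        b = 1.0                                      # spiral shape constant
        for i, m in enumerate(moths):
            f = flames[min(i, n_flames - 1)]         # surplus moths share last flame
            for j in range(dim):
                d = abs(f[j] - m[j])
                t = random.uniform(-1.0, 1.0)
                m[j] = min(ub, max(lb,
                           d * math.exp(b * t) * math.cos(2 * math.pi * t) + f[j]))
    return best, best_val
```

On the sphere function `sum(v*v for v in x)` the swarm contracts toward the origin; the best value found is tracked monotonically across iterations.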
]]>Mathematics doi: 10.3390/math12060815
Authors: Licui Zheng Tao Wu Jinwang Liu
Matrix equivalence plays a pivotal role in multidimensional systems, which are typically represented by multivariate polynomial matrices. The Smith form of matrices is one of the important research topics in polynomial matrices. This article mainly investigates the Smith forms of several types of bivariate polynomial matrices and has successfully derived several necessary and sufficient conditions for matrix equivalence.
]]>Mathematics doi: 10.3390/math12060814
Authors: Renata Avros Mor Ben Haim Almog Madar Elena Ravve Zeev Volkovich
The study introduces a novel approach to identify potential citation manipulation within academic papers. This method utilizes perturbations of a deep embedding model, integrating Graph-Masked Autoencoders to merge textual information with evidence of graph connectivity. Consequently, it yields a more intricate model of citation distribution. By training a deep network with partial data and reconstructing masked connections, the approach capitalizes on the inherent characteristics of central connections amidst network perturbations. It demonstrates its ability to pinpoint trustworthy citations within the analyzed dataset through comprehensive quantitative evaluations. Additionally, it raises concerns regarding the reliability of specific references, which may be subject to manipulation.
]]>Mathematics doi: 10.3390/math12060813
Authors: Daniel Fernández Álvaro Rodríguez-Prieto Ana María Camacho
The selection of the most suitable material is one of the key decisions to be made during the design stage of a manufacturing process. Traditional approaches, such as Ashby maps based on material properties, are widely used in industry. However, in the production of multi-material components, the selection criteria can be antagonistic. The aim of this work is to implement a methodology based on the results of process simulations for several materials and to classify them by applying an advanced data analytics method based on machine learning (ML)—in this case, support vector regression (SVR)—or a multi-criteria decision-making (MCDM) methodology. Specifically, the multi-criteria optimization and compromise solution (VIKOR) method was combined with entropy weighting. To achieve this, a finite element model (FEM) was built to evaluate the extrusion force and the die wear during the multi-material co-extrusion of bimetallic Ti6Al4V-AZ31B billets. After applying SVR and VIKOR in combination with entropy weighting, a comparison was established based on the material selected and the complexity of the methodology used. The results show that the material chosen by the two methodologies is very similar, but the MCDM method is easier to implement because there is no need to evaluate the error of a prediction model, and the time required for data preprocessing is less than when applying SVR. This new methodology proves effective as an alternative to traditional approaches and is aligned with new industry trends based on simulation and data analytics.
]]>Mathematics doi: 10.3390/math12060812
Authors: Libero Nigro Franco Cicirelli
Modeling and verification of the correct behavior of embedded real-time systems with strict timing constraints is a well-known and important problem. Failing to fulfill a deadline in system operation can have severe consequences in the practical case. This paper proposes an approach to formal modeling and schedulability analysis. A novel extension of Petri Nets named Constraint Time Petri Nets (C-TPN) is developed, which enables the modeling of a collection of interdependent real-time tasks whose execution is constrained by the use of priority and shared resources like processors and memory data. A C-TPN model is reduced to a network of Timed Automata in the context of the popular Uppaal toolbox. Both functional and, most importantly, temporal properties can be assessed by exhaustive model checking and/or statistical model checking based on simulations. This paper first describes and motivates the proposed C-TPN modeling language and its formal semantics. Then, a Uppaal translation is shown. Finally, three models of embedded real-time systems are considered, and their properties are thoroughly verified.
]]>Mathematics doi: 10.3390/math12060811
Authors: Jingwen Zhang Lili Rong Yufan Gong
This paper enhances the agent model of ordinary individuals by incorporating the role of places in the transmission, prevention, and control process, establishing a fundamental connection between these two types of agents through individual travel rules. The impact of real-world prevention and control measures on regional epidemic transmission is studied based on this model. Firstly, based on an analysis of place elements in typical cases of major infectious diseases, we present a classification of places. Secondly, the resident agent and the place agent are constructed and rules are established, forming an agent-based major infectious disease transmission model that accounts for places. Then, the simulation process is established and an ideal regional environment is constructed to simulate overall changes in major infectious disease scale and spread speed. Additionally, evolutionary simulations are conducted for individual isolation intensity and place control time during emergency management stages to compare infection outcomes, and we analyze the different roles and application scenarios of specific prevention and control measures. Finally, simulation results from real major infectious disease cases in Yangzhou, China, validate the effectiveness of this model in predicting major infectious disease development trends as well as evaluating the effects of prevention and control measures.
]]>Mathematics doi: 10.3390/math12060810
Authors: Artem E. Konkov Yuri V. Mitrishkin
This paper presents a methodology for synthesizing discrete MIMO PID controllers through iterative solutions of linear matrix inequalities (LMIs). It justifies the necessity of direct synthesis of discrete controllers in digital control systems. The proposed methodology allows for synthesizing an LTI controller for an LTV plant model, ensuring the robust stability of the synthesized system. Robustness is further ensured by the small-gain theorem and a novel approach to loop shaping, enabling the use of arbitrary shape functions. As a result, this methodology provides a broad statement of the problem with performance criteria necessary for practical application in control systems. Numerical examples are used to illustrate the methodology, which is implemented as a MATLAB toolbox, freely available for use.
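As background, the discrete PID control law that such synthesized controllers implement has a simple scalar form. This sketch shows only that law in closed loop with a first-order plant; the plant coefficients, gains, and sampling time below are hypothetical, and the paper's contribution — LMI-based synthesis of the MIMO gains with robustness guarantees — is not reproduced here:

```python
class DiscretePID:
    """Positional discrete PID at sampling time ts:
    u[k] = Kp*e[k] + Ki*ts*sum(e[0..k]) + Kd*(e[k] - e[k-1])/ts."""
    def __init__(self, kp, ki, kd, ts):
        self.kp, self.ki, self.kd, self.ts = kp, ki, kd, ts
        self.acc = 0.0    # integral accumulator
        self.prev = 0.0   # previous error, for the difference derivative

    def step(self, error):
        self.acc += error * self.ts
        deriv = (error - self.prev) / self.ts
        self.prev = error
        return self.kp * error + self.ki * self.acc + self.kd * deriv

def simulate(pid, a=0.9, b=0.1, setpoint=1.0, steps=200):
    """Closed loop with a hypothetical stable first-order plant
    x[k+1] = a*x[k] + b*u[k]; returns the final state."""
    x = 0.0
    for _ in range(steps):
        u = pid.step(setpoint - x)
        x = a * x + b * u
    return x
```

With the integral term active, the loop drives the steady-state error to zero for a constant setpoint; the MIMO case couples several such channels through matrix gains.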
]]>Mathematics doi: 10.3390/math12060809
Authors: Adela Elisabeta Capătă
The aim of this paper is to present new existence results for solutions to a generalized quasi-equilibrium problem with set-valued mappings and moving cones. The key to this approach is a new Browder-type fixed point theorem, which permits working in a new direction with the milder condition of transfer open-valued mappings and considering weaker assumptions on the moving cone. These results are applied to some generalized vector quasi-equilibrium problems with trifunctions and to a vector quasi-equilibrium problem with fuzzy mappings in a fuzzy environment.
]]>Mathematics doi: 10.3390/math12060808
Authors: Imran Shabir Chuhan Jing Li Muhammad Shafiq Ahmed Inna Samuilik Muhammad Aqib Aslam Malik Abdul Manan
Purpose—This study aims to perform an in-depth analysis of double-diffusive natural convection (DDNC) in an irregularly shaped porous cavity. We investigate the convective heat transfer process induced by the lower wall treated as a heat source while the side walls of the enclosure are maintained at a lower temperature and concentration, and the remaining wall is adiabatic. Various factors, such as the Rayleigh number, Darcy effects, Hartmann number, Lewis number and effects of magnetic inclination are evaluated for their influence on flow dynamics and heat distribution. Design/methodology/approach—After validating the results, the FEM (finite element method) is used to simulate the flow pattern, temperature variations, and concentration by solving the nonlinear partial differential equations with the modified Rayleigh number (10^4 ≤ Ra ≤ 10^7), Darcy number (10^{-4} ≤ Da ≤ 10^{-1}), Lewis number (0.1 ≤ Le ≤ 10), and Hartmann number (0 ≤ Ha ≤ 40) as the dimensionless operating parameters. Findings—The findings show that the patterns of convection and the shape of the isotherms within porous enclosures are notably affected by the angle of the applied magnetic field. This study enhances our understanding of how double-diffusive natural convection (DDNC) operates in these enclosures, which helps improve heating and cooling technologies in various engineering fields. Research limitations/implications—Numerical and experimental extensions of the present study make it possible to investigate differences in thermal performance as a result of various curvatures, orientations, boundary conditions, and the use of three-dimensional analysis and other working fluids. Practical implications—The geometry configurations used in this study have wide-ranging applications in engineering fields, such as in heat exchangers, crystallization, microelectronics, energy storage, mixing, food processing, and biomedical systems.
Originality/value—This study shows how an inclined magnetic field affects double-diffusive natural convection (DDNC) within a porous system featuring an irregularly shaped cavity, considering various multiphysical conditions.
]]>Mathematics doi: 10.3390/math12060807
Authors: Yan Tian Guidong Liu
This study explores efficient methods for computing eigenvalues and function values associated with Chebyshev-type prolate spheroidal wave functions (CPSWFs). Applying the expansion of the factor e^{icxy} and the inherent properties of Chebyshev polynomials, we present an exact and stable numerical approximation for the exact eigenvalues of the integral operator associated with CPSWFs. Additionally, we illustrate the efficiency of employing the fast Fourier transform and barycentric interpolation techniques for computing CPSWF values and related quantities, which are essential for various numerical applications based on these functions. The analysis is supported by numerical examples, providing validation for the accuracy and reliability of our proposed approach.
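The barycentric interpolation technique mentioned here is standard and compact enough to sketch. The following uses Chebyshev points of the second kind with their known closed-form barycentric weights; it illustrates the generic scheme on a smooth test function, not the paper's CPSWF-specific computations:

```python
import math

def chebyshev_barycentric(f, n):
    """Barycentric Lagrange interpolation of f at the n+1 Chebyshev points
    of the second kind on [-1, 1]; returns the degree-n interpolant as a
    callable. Evaluation is O(n) per point and numerically stable."""
    xs = [math.cos(k * math.pi / n) for k in range(n + 1)]
    ys = [f(x) for x in xs]
    # Weights for second-kind Chebyshev points: (-1)^k, halved at endpoints.
    ws = [((-1) ** k) * (0.5 if k in (0, n) else 1.0) for k in range(n + 1)]

    def p(x):
        num = den = 0.0
        for xk, yk, wk in zip(xs, ys, ws):
            if x == xk:            # exact node: return the data value
                return yk
            c = wk / (x - xk)
            num += c * yk
            den += c
        return num / den

    return p
```

For analytic functions such as exp, the interpolant converges geometrically in n, which is why a modest degree already reaches near machine precision.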
]]>Mathematics doi: 10.3390/math12060806
Authors: Tamás Katona Gábor Tóth Mátyás Petró Balázs Harangi
Chest X-ray evaluation is challenging due to its high demand and the complexity of diagnoses. In this study, we propose an optimized deep learning model for the multi-label classification of chest X-ray images. We leverage pretrained convolutional neural networks (CNNs) such as VGG16, ResNet-50, and DenseNet-121, modifying their output layers and fine-tuning the models. We employ a novel optimization strategy using the Hyperband algorithm to efficiently search the hyperparameter space while adjusting the fully connected layers of the CNNs. The effectiveness of our approach is evaluated using the Area Under the Receiver Operating Characteristic Curve (AUC-ROC) metric. Our proposed methodology could assist in automated chest radiograph interpretation, offering a valuable tool for clinicians in the future.
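Hyperband's core subroutine, successive halving, can be sketched in a few lines: evaluate many configurations on a small budget, keep the best 1/η fraction, and retry the survivors on an η-times larger budget. The `val_loss` function below is a toy stand-in for the validation loss of a partially trained CNN head (not the paper's model), and the sketch shows one bracket, whereas full Hyperband sweeps several (n, budget) brackets:

```python
import random

random.seed(1)

def val_loss(lr, budget):
    """Hypothetical stand-in for 'validation loss after training `budget`
    epochs with learning rate lr': lower is better, improves with budget."""
    return abs(lr - 0.01) + 1.0 / budget

def successive_halving(n=27, eta=3, min_budget=1, max_budget=27):
    """One successive-halving bracket over log-uniform learning rates."""
    configs = [10 ** random.uniform(-4, 0) for _ in range(n)]
    budget = min_budget
    while len(configs) > 1 and budget <= max_budget:
        scored = sorted(configs, key=lambda lr: val_loss(lr, budget))
        configs = scored[: max(1, len(configs) // eta)]  # keep top 1/eta
        budget *= eta                                    # promote survivors
    return configs[0]
```

With n = 27 and η = 3, the bracket spends three rounds (27, 9, and 3 configurations) before a single survivor remains, so most of the budget goes to promising configurations.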
]]>Mathematics doi: 10.3390/math12060805
Authors: Nurehemaiti Yiming
In this paper, we consider the M/G/1 stochastic clearing queueing model in a three-phase environment, which is described by integro-partial differential equations (IPDEs). Our first result is the semigroup well-posedness of the dynamic system. Utilizing C0-semigroup theory, we prove that the system has a unique positive time-dependent solution (TDS) that satisfies the probability condition. As our second result, we prove that the TDS of the system strongly converges to its steady-state solution (SSS) if the service rates of the servers are constant. For this asymptotic behavior, we analyze the spectrum of the system operator associated with the system. Additionally, the stability of the semigroup generated by the system operator is discussed.
]]>Mathematics doi: 10.3390/math12060804
Authors: Zhenyu Bai Chuanzhi Bai
In this paper, we explore the stability of a new class of Caputo-type fractional stochastic delay differential systems with Poisson jumps. We prove the Hyers–Ulam stability of the solution by utilizing a version of the fixed point theorem, fractional calculus, the Cauchy–Schwarz inequality, Jensen's inequality, and some stochastic analysis techniques. Finally, an example is provided to illustrate the effectiveness of the results.
]]>Mathematics doi: 10.3390/math12060803
Authors: Shuo Liu Xu Chen Xuan Di
This paper proposes a scalable learning framework to solve a system of coupled forward–backward partial differential equations (PDEs) arising from mean field games (MFGs). The MFG system incorporates a forward PDE to model the propagation of population dynamics and a backward PDE for a representative agent’s optimal control. Existing work mainly focuses on solving for the mean field game equilibrium (MFE) of the MFG system given fixed boundary conditions, including the initial population state and terminal cost. To obtain the MFE efficiently, particularly when the initial population density and terminal cost vary, we utilize a physics-informed neural operator (PINO) to tackle the forward–backward PDEs. A learning algorithm is devised, and its performance is evaluated on one application domain: autonomous driving velocity control. Numerical experiments show that our method obtains the MFE accurately for different initial distributions of vehicles. The PINO exhibits both memory efficiency and generalization capabilities compared to physics-informed neural networks (PINNs).
]]>Mathematics doi: 10.3390/math12060802
Authors: Xuelian Guo Ivan Kaygorodov Liming Tang
Using the first cohomology from the mirror Heisenberg–Virasoro algebra to the twisted Heisenberg algebra (as the mirror Heisenberg–Virasoro algebra module), in this paper, we determined the derivations on the mirror Heisenberg–Virasoro algebra. Based on this result, we proved that any two-local derivation on the mirror Heisenberg–Virasoro algebra is a derivation. All half-derivations are described, and as corollaries, we have descriptions of transposed Poisson structures and local (two-local) half-derivations on the mirror Heisenberg–Virasoro algebra.
]]>Mathematics doi: 10.3390/math12060801
Authors: Jae-Mo Kang Dong-Woo Lim
In this paper, we consider a new design problem of optimizing a linear transceiver for correlated multiple-input multiple-output (MIMO) interference channels in the presence of channel state information (CSI) errors, which is a more realistic and practical scenario than those considered in the previous studies on uncorrelated MIMO interference channels. By taking CSI errors into account, the optimization problem is initially formulated to minimize the average mean square error (MSE) under the general power constraints. Since the objective function is not jointly convex in precoders and receive filters, we split the original problem into two convex subproblems, and then linear precoders and receive filters are obtained by solving two subproblems iteratively. It is shown that the proposed algorithm is guaranteed to converge to a local minimum. The numerical results show that the proposed algorithm can significantly reduce the sensitivity to CSI errors compared with the existing robust schemes in the correlated MIMO interference channel.
]]>