Mathematics, Volume 12, Issue 5 (March-1 2024) – 166 articles

Cover Story: Causality has become a powerful tool for addressing the out-of-distribution (OOD) generalization problem, but existing methods for learning invariant features are based on optimization, which can fail to converge to the optimal solution. Obtaining the variables that cause the target outcome through causal inference is therefore more effective. This paper presents a new approach to invariant feature learning based on causal inference (IFCI). IFCI detects causal variables unaffected by the environment through causal inference and focuses on partial causal relationships. The results show that IFCI can detect and filter out variables affected by the environment; after these environmental variables are filtered out, even a model with a simple structure and a common loss function can achieve strong OOD generalization.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
15 pages, 281 KiB  
Article
Anisotropic Moser–Trudinger-Type Inequality with Logarithmic Weight
by Tao Zhang and Jie Liu
Mathematics 2024, 12(5), 785; https://doi.org/10.3390/math12050785 - 06 Mar 2024
Viewed by 541
Abstract
Our main purpose in this paper is to study anisotropic Moser–Trudinger-type inequalities with the logarithmic weight ω_β(x) = |ln F^o(x)|^((n−1)β). This can be seen as a generalization of the isotropic Moser–Trudinger inequality with logarithmic weight. Furthermore, we obtain the existence of an extremal function when β is small. Finally, we give Lions' concentration–compactness principle, which is an improvement of the anisotropic Moser–Trudinger-type inequality. Full article
13 pages, 690 KiB  
Article
Fractional-Order Model-Free Adaptive Control with High Order Estimation
by Zhuo-Xuan Lv and Jian Liao
Mathematics 2024, 12(5), 784; https://doi.org/10.3390/math12050784 - 06 Mar 2024
Viewed by 518
Abstract
This paper concerns an improved model-free adaptive fractional-order control scheme with a high-order pseudo-partial derivative for uncertain discrete-time nonlinear systems. Firstly, a new equivalent model is obtained by employing the Grünwald–Letnikov (G-L) fractional-order difference of the input in a compact-form dynamic linearization. Then, the pseudo-partial derivative (PPD) is derived using a high-order estimation algorithm, which exploits more PPD information from previous time instants. A discrete-time model-free adaptive fractional-order controller is proposed, which utilizes more past input–output data. The ultimate uniform boundedness of the tracking errors is demonstrated through formal analysis. Finally, simulation results demonstrate the effectiveness of the proposed method. Full article
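The Grünwald–Letnikov difference at the heart of the linearization step is easy to sketch. A minimal illustration only, not the authors' control scheme; the test signal and the orders used below are arbitrary:

```python
import numpy as np

def gl_fractional_difference(x, alpha):
    """Grünwald-Letnikov fractional difference of a sampled signal.

    Builds the weights w_j = (-1)^j * binom(alpha, j) by the standard
    recursion and returns Delta^alpha x at every sample index.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    # Delta^alpha x[k] = sum_{j<=k} w_j * x[k-j]  (a causal convolution)
    return np.array([w[: k + 1] @ x[k::-1] for k in range(n)])

x = np.array([1.0, 2.0, 4.0, 7.0])
print(gl_fractional_difference(x, 1.0))  # alpha=1: ordinary first difference, [1. 1. 2. 3.]
print(gl_fractional_difference(x, 0.5))  # a genuinely fractional order
```

For alpha = 1 the weights reduce to (1, −1, 0, …), recovering the usual backward difference, which is a quick sanity check on the recursion.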
23 pages, 1126 KiB  
Article
Bayesian Feature Extraction for Two-Part Latent Variable Model with Polytomous Manifestations
by Qi Zhang, Yihui Zhang and Yemao Xia
Mathematics 2024, 12(5), 783; https://doi.org/10.3390/math12050783 - 06 Mar 2024
Viewed by 525
Abstract
Semi-continuous data are very common in social sciences and economics. In this paper, a Bayesian variable selection procedure is developed to assess the influence of observed and/or unobserved exogenous factors on semi-continuous data. Our formulation is based on a two-part latent variable model with polytomous responses. We consider two schemes for the penalties of regression coefficients and factor loadings: a Bayesian spike and slab bimodal prior and a Bayesian lasso prior. Within the Bayesian framework, we implement a Markov chain Monte Carlo sampling method to conduct posterior inference. To facilitate posterior sampling, we recast the logistic model from Part One as a norm-type mixture model. A Gibbs sampler is designed to draw observations from the posterior. Our empirical results show that with suitable values of hyperparameters, the spike and slab bimodal method slightly outperforms the Bayesian lasso in the current analysis. Finally, a real example related to the Chinese Household Financial Survey is analyzed to illustrate the application of the methodology. Full article
(This article belongs to the Special Issue Multivariate Statistical Analysis and Application)
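The spike-and-slab idea compared against the Bayesian lasso above can be made concrete with a single prior draw. This is a sketch only: the hyperparameters are hypothetical, and a narrow continuous spike stands in for whatever exact bimodal form the paper uses:

```python
import numpy as np

rng = np.random.default_rng(1)

def spike_slab_draw(p, inclusion_prob, slab_sd, spike_sd=0.01):
    """Draw p coefficients from a spike-and-slab prior.

    Each coefficient is 'active' (wide slab normal) with probability
    inclusion_prob; otherwise it sits in the narrow spike around zero,
    which is what shrinks irrelevant effects out of the model.
    """
    active = rng.random(p) < inclusion_prob
    sd = np.where(active, slab_sd, spike_sd)
    return rng.normal(0.0, sd), active

beta, active = spike_slab_draw(8, inclusion_prob=0.3, slab_sd=2.0)
print(active.sum(), "of 8 coefficients drawn from the slab")
```

In a full Gibbs sampler like the paper's, the `active` indicators would themselves be updated from their conditional posterior each sweep; here they are drawn once just to show the prior's two-component shape.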
35 pages, 7945 KiB  
Article
Mathematical Patterns in Fuzzy Logic and Artificial Intelligence for Financial Analysis: A Bibliometric Study
by Ionuț Nica, Camelia Delcea and Nora Chiriță
Mathematics 2024, 12(5), 782; https://doi.org/10.3390/math12050782 - 06 Mar 2024
Cited by 1 | Viewed by 759
Abstract
In this study, we explored the dynamic field of fuzzy logic and artificial intelligence (AI) in financial analysis from 1990 to 2023. Utilizing the bibliometrix package in RStudio and data from the Web of Science, we focused on identifying mathematical models and the evolving role of fuzzy information granulation in this domain. The research addresses the urgent need to understand the development and impact of fuzzy logic and AI within the broader scope of evolving technological and analytical methodologies, particularly concentrating on their application in financial and banking contexts. The bibliometric analysis involved an extensive review of the literature published during this period. We examined key metrics such as the annual growth rate, international collaboration, and average citations per document, which highlighted the field’s expansion and collaborative nature. The results revealed a significant annual growth rate of 19.54%, international collaboration of 21.16%, and an average citation per document of 25.52. Major journals such as IEEE Transactions on Fuzzy Systems, Fuzzy Sets and Systems, the Journal of Intelligent & Fuzzy Systems, and Information Sciences emerged as significant contributors, aligning with Bradford’s Law’s Zone 1. Notably, post-2020, IEEE Transactions on Fuzzy Systems showed a substantial increase in publications. A significant finding was the high citation rate of seminal research on fuzzy information granulation, emphasizing its mathematical importance and practical relevance in financial analysis. Keywords like “design”, “model”, “algorithm”, “optimization”, “stabilization”, and terms such as “fuzzy logic controller”, “adaptive fuzzy controller”, and “fuzzy logic approach” were prevalent. The Countries’ Collaboration World Map indicated a strong pattern of global interconnections, suggesting a robust framework of international collaboration. 
Our study highlights the escalating influence of fuzzy logic and AI in financial analysis, marked by a growth in research outputs and global collaborations. It underscores the crucial role of fuzzy information granulation as a mathematical model and sets the stage for further investigation into how fuzzy logic and AI-driven models are transforming financial and banking analysis practices worldwide. Full article
16 pages, 6908 KiB  
Article
Multiclass Sentiment Prediction of Airport Service Online Reviews Using Aspect-Based Sentimental Analysis and Machine Learning
by Mohammed Saad M. Alanazi, Jun Li and Karl W. Jenkins
Mathematics 2024, 12(5), 781; https://doi.org/10.3390/math12050781 - 06 Mar 2024
Viewed by 601
Abstract
Airport service quality ratings found on social media such as Airline Quality and Google Maps offer invaluable insights for airport management to improve their quality of services. However, there is currently a lack of research analysing these reviews by airport services using sentimental analysis approaches. This research applies multiclass models based on Aspect-Based Sentimental Analysis to conduct a comprehensive analysis of travellers’ reviews, in which the major airport services are tagged by positive, negative, and non-existent sentiments. Seven airport services commonly utilised in previous studies are also introduced. Subsequently, various Deep Learning architectures and Machine Learning classification algorithms are developed, tested, and compared using data collected from Twitter, Google Maps, and Airline Quality, encompassing travellers’ feedback on airport service quality. The results show that the traditional Machine Learning algorithms such as the Random Forest algorithm outperform Deep Learning models in the multiclass prediction of airport service quality using travellers’ feedback. The findings of this study offer concrete justifications for utilising multiclass Machine Learning models to understand the travellers’ sentiments and therefore identify airport services required for improvement. Full article
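As a rough picture of the kind of classical machine learning baseline the paper favours, here is a toy TF-IDF-plus-Random-Forest pipeline. The reviews, labels, and aspect encoding are invented for the sketch and are not the authors' data or feature design:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# toy aspect-tagged reviews: 1 = positive, -1 = negative, 0 = aspect absent
reviews = [
    "security check was fast and friendly",
    "security queue was terrible and slow",
    "great food in the terminal restaurants",
    "no mention of wifi at all here",
]
labels = [1, -1, 1, 0]

# TF-IDF turns text into sparse features; the forest does multiclass voting
model = make_pipeline(TfidfVectorizer(), RandomForestClassifier(random_state=0))
model.fit(reviews, labels)
print(model.predict(["the security staff were fast and friendly"]))
```

A real version would need one such three-class target per airport service aspect, which is what makes the labeling scheme described above multiclass rather than binary.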
21 pages, 4039 KiB  
Article
Enhanced Genetic-Algorithm-Driven Triple Barrier Labeling Method and Machine Learning Approach for Pair Trading Strategy in Cryptocurrency Markets
by Ning Fu, Mingu Kang, Joongi Hong and Suntae Kim
Mathematics 2024, 12(5), 780; https://doi.org/10.3390/math12050780 - 06 Mar 2024
Viewed by 708
Abstract
In the dynamic world of finance, the application of Artificial Intelligence (AI) in pair trading strategies is gaining significant interest among scholars. Current AI research largely concentrates on regression analyses of prices or spreads between paired assets for formulating trading strategies. However, AI models typically exhibit less precision in regression tasks compared to classification tasks, presenting a challenge in refining the accuracy of pair trading strategies. In pursuit of high-performance labels to elevate the precision of classification models, this study advanced the Triple Barrier Labeling Method for enhanced compatibility with pair trading strategies. This refinement enables the creation of diverse label sets, each tailored to distinct barrier configurations. Focusing on achieving maximal profit or minimizing the Maximum Drawdown (MDD), Genetic Algorithms (GAs) were employed for the optimization of these labels. After optimization, the labels were classified into two distinct types: High Risk and High Profit (HRHP) and Low Risk and Low Profit (LRLP). These labels then serve as the foundation for training machine learning models, which are designed to predict future trading activities in the cryptocurrency market. Our approach, employing cryptocurrency price data from 9 November 2017 to 31 August 2022 for training and 1 September 2022 to 1 December 2023 for testing, demonstrates a substantial improvement over traditional pair trading strategies. In particular, models trained with HRHP signals realized a 51.42% surge in profitability, while those trained with LRLP signals significantly mitigated risk, marked by a 73.24% reduction in the MDD. This innovative method marks a significant advancement in cryptocurrency pair trading strategies, offering traders a powerful and refined tool for optimizing their trading decisions. Full article
(This article belongs to the Special Issue Advances in Financial Mathematics and Risk Management)
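The plain (un-optimized) triple barrier rule the authors start from can be sketched directly. The barrier widths and horizon below are hypothetical stand-ins for the configurations their genetic algorithm tunes:

```python
import numpy as np

def triple_barrier_label(prices, entry, upper_pct, lower_pct, horizon):
    """Label a single trade entered at index `entry`.

    Returns +1 if the upper (profit-taking) barrier is hit first, -1 if the
    lower (stop-loss) barrier is hit first, and 0 if the vertical (time)
    barrier expires before either horizontal barrier is touched.
    """
    p0 = prices[entry]
    upper = p0 * (1 + upper_pct)
    lower = p0 * (1 - lower_pct)
    end = min(entry + horizon, len(prices) - 1)
    for t in range(entry + 1, end + 1):
        if prices[t] >= upper:
            return 1
        if prices[t] <= lower:
            return -1
    return 0

prices = np.array([100.0, 101.0, 99.5, 103.1, 97.0])
print(triple_barrier_label(prices, 0, 0.03, 0.02, 4))  # 1: hits 103 before 98
```

In a pair trading setting the "price" series would be the spread between the two legs; varying (`upper_pct`, `lower_pct`, `horizon`) is exactly what yields the different label sets the GA then scores by profit or Maximum Drawdown.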
11 pages, 3500 KiB  
Article
Non-Newtonian Pressure-Governed Rivulet Flows on Inclined Surface
by Sergey V. Ershkov and Dmytro D. Leshchenko
Mathematics 2024, 12(5), 779; https://doi.org/10.3390/math12050779 - 06 Mar 2024
Viewed by 562
Abstract
We have generalized, in the current study, the results of research presented earlier with the aim of obtaining an approximate solution for the creeping, plane-parallel flow of viscoplastic non-Newtonian fluid where the focus is on the study of rivulet fluid flows on an inclined surface. Namely, profiles of velocity of flow have been considered to be given in the same form as previously (i.e., Gaussian-like, non-stationary solutions) but with a novel type of pressure field p. The latter has been chosen for solutions correlated explicitly with the critical maximal non-zero level of stress τs in the shared plane layer of rivulet flow, when it begins to move as viscous flow (therefore, we have considered here the purely non-Newtonian case of viscoplastic flow). Correlating phenomena such as the above stem from the equations of motion of viscoplastic non-Newtonian fluid considered along with the continuity equation. We have obtained a governing sub-system of two partial differential equations of the first order for two functions, p and τs. As a result, a set of new semi-analytical solutions are presented and graphically plotted. Full article
(This article belongs to the Section Engineering Mathematics)
16 pages, 2369 KiB  
Article
On Stochastic Representations of the Zero–One-Inflated Poisson Lindley Distribution
by Razik Ridzuan Mohd Tajuddin and Noriszura Ismail
Mathematics 2024, 12(5), 778; https://doi.org/10.3390/math12050778 - 06 Mar 2024
Viewed by 461
Abstract
A zero–one-inflated Poisson Lindley distribution has been introduced recently as an alternative to the zero–one-inflated Poisson distribution for describing count data with a substantial number of zeros and ones. Several stochastic representations of the zero–one-inflated Poisson Lindley distribution, and their equivalence to some well-known distributions under certain conditions, are presented. Using these stochastic representations, distributional properties such as the nth moments and the conditional distributions are discussed. These stochastic representations can be used to explain the relationship between two or more distributions. Several likelihood ratio tests are developed and examined for the presence of one-inflation and fixed rate parameters. The likelihood ratio tests are found to be powerful and able to control the error rates as the sample size increases. A sample size of 1000 is acceptable and sufficient for the likelihood ratio tests to be useful. Full article
(This article belongs to the Special Issue New Advances in Distribution Theory and Its Applications)
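One stochastic representation is easy to make concrete: a zero–one-inflated Poisson–Lindley draw as a three-part mixture, using the standard fact that a Lindley(θ) variable is itself a two-component gamma mixture. The parameter values here are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

def rzoipl(n, p0, p1, theta):
    """Sample from a zero-one-inflated Poisson-Lindley distribution.

    With probability p0 emit an extra 0, with probability p1 an extra 1,
    otherwise draw from the Poisson-Lindley(theta) baseline: a Poisson whose
    mean follows a Lindley(theta) law (a two-component gamma mixture).
    """
    u = rng.random(n)
    out = np.zeros(n, dtype=int)
    base = u >= p0 + p1                 # indices drawn from the baseline
    out[(u >= p0) & ~base] = 1          # inflated ones
    m = base.sum()
    # Lindley(theta) = Exp(theta) w.p. theta/(theta+1), else Gamma(2, theta)
    shape = np.where(rng.random(m) < theta / (theta + 1), 1, 2)
    lam = rng.gamma(shape, 1.0 / theta)
    out[base] = rng.poisson(lam)
    return out

x = rzoipl(10000, p0=0.2, p1=0.1, theta=2.0)
# baseline PL mean is (theta+2)/(theta*(theta+1)) ~ 0.667,
# so the overall mean should be near 0.2*0 + 0.1*1 + 0.7*0.667 ~ 0.57
print(x.mean())
```

The mixture decomposition in the comments is exactly the kind of representation the paper formalizes; the sampler is just one operational consequence of it.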
33 pages, 2077 KiB  
Article
Handling Overlapping Asymmetric Data Sets—A Twice Penalized P-Spline Approach
by Matthew McTeer, Robin Henderson, Quentin M. Anstee and Paolo Missier
Mathematics 2024, 12(5), 777; https://doi.org/10.3390/math12050777 - 05 Mar 2024
Viewed by 590
Abstract
Aims: Overlapping asymmetric data sets are those where a large cohort of observations has a small amount of information recorded, and within this group there exists a smaller cohort with extensive further information available. Missing-data imputation is unwise if cohort sizes differ substantially; therefore, we aim to develop a way of modelling the smaller cohort whilst considering the larger one. Methods: Building on traditionally once penalized P-Spline approximations, we create a second penalty term by observing discrepancies in the marginal value of covariates that exist in both cohorts. Our now twice penalized P-Spline is designed firstly to prevent over/under-fitting of the smaller cohort and secondly to consider the larger cohort. Results: Through a series of data simulations, penalty parameter tunings, and model adaptations, our twice penalized model offers up to a 58% and 46% improvement in model fit upon a continuous and binary response, respectively, against existing B-Spline and once penalized P-Spline methods. Applying our model to an individual's risk of developing steatohepatitis, we report an over 65% improvement over existing methods. Conclusions: We propose a twice penalized P-Spline method which can vastly improve the model fit of overlapping asymmetric data sets upon a common predictive endpoint, without the need for missing data imputation. Full article
(This article belongs to the Special Issue Machine Learning Theory and Applications)
22 pages, 4582 KiB  
Article
Strategic Warehouse Location Selection in Business Logistics: A Novel Approach Using IMF SWARA–MARCOS—A Case Study of a Serbian Logistics Service Provider
by Vukašin Pajić, Milan Andrejić, Marijana Jolović and Milorad Kilibarda
Mathematics 2024, 12(5), 776; https://doi.org/10.3390/math12050776 - 05 Mar 2024
Cited by 1 | Viewed by 1043
Abstract
Business logistics encompasses the intricate planning, seamless implementation, and precise control of the efficient and effective movement and storage of goods, services, and associated information from their origin to their final consumption point. The strategic placement of facilities is intricately intertwined with business logistics, exerting a direct influence on the efficiency and cost-effectiveness of supply chain operations. In the realm of business logistics, decisions regarding the location of facilities, including warehouses, distribution centers, and manufacturing plants, assume a pivotal role in shaping the overarching logistics strategy. Warehouses, serving as pivotal nodes in the supply chain network, establish crucial links in both local and global markets. They serve as the nexus connecting suppliers and customers across the entire supply chain, thus constituting indispensable elements that significantly impact the overall performance of the supply chain. The optimal location of warehouses is paramount for efficient supply chains, ensuring minimized costs and higher profits. The decision on warehouse location exerts a profound influence on investment costs, operational expenses, and the distribution strategy of a company, thereby playing a substantial role in elevating customer service levels. Hence, the primary objective of this paper is to propose a novel methodology grounded in the application of the Improved Fuzzy Stepwise Weight Assessment Ratio Analysis (SWARA)-Measurement of Alternatives and Ranking according to Compromise Solution (MARCOS) methods for determining warehouse locations tailored to a logistics service provider (LSP) operating in the Serbian market. Through the definition of seven evaluation criteria based on a comprehensive literature review and expert insights, this study aims to assess five potential locations. The findings suggest that the proposed model offers strong decision support for effectively addressing challenges akin to the one presented in this study. Full article
(This article belongs to the Special Issue Fuzzy Logic Applications in Traffic and Transportation Engineering)
15 pages, 2654 KiB  
Article
An EOQ Model for Temperature-Sensitive Deteriorating Items in Cold Chain Operations
by Ming-Fang Yang, Pei-Fang Tsai, Meng-Ru Tu and Yu-Fang Yuan
Mathematics 2024, 12(5), 775; https://doi.org/10.3390/math12050775 - 05 Mar 2024
Viewed by 556
Abstract
To improve the inventory management of cold chain logistics, we propose an economic order quantity (EOQ) inventory model for temperature-sensitive deteriorating products. Considering that the products are temperature-sensitive, the deterioration rate of the proposed model is a function of the temperature. In addition, the transportation cost, which is a function of the quantity ordered, is considered in this study. This article aims to find the optimal value of the total profit, selling price, and the length of the ordering cycle. Numerical examples are provided; the sensitivity analysis shows that the total profit is much more sensitive to transportation costs, compared with ordering and holding costs. Full article
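The structure of such a model can be sketched numerically. The functional forms below (exponential deterioration at a fixed temperature, a linear freight schedule, a grid search over the cycle length) are assumptions for illustration, not the paper's model:

```python
import numpy as np

def profit_per_unit_time(T_cycle, demand, price, unit_cost,
                         order_cost, hold_cost, theta):
    """Illustrative EOQ-style profit rate with deterioration rate theta.

    Inventory decays as I'(t) = -demand - theta*I(t); the order quantity is
    the stock needed to last one cycle, Q = (demand/theta)*(exp(theta*T)-1),
    and integrating I(t) over the cycle gives the holding term (Q - d*T)/theta.
    Transportation cost is modelled as a linear function of Q, as the abstract
    suggests it depends on the quantity ordered.
    """
    Q = demand / theta * (np.exp(theta * T_cycle) - 1.0)
    transport = 50.0 + 0.5 * Q                            # assumed freight schedule
    revenue = price * demand * T_cycle
    holding = hold_cost * (Q - demand * T_cycle) / theta  # integral of I(t)
    cost = unit_cost * Q + order_cost + transport + holding
    return (revenue - cost) / T_cycle

# grid-search the cycle length that maximizes the profit rate
grid = np.linspace(0.05, 2.0, 400)
rates = [profit_per_unit_time(T, demand=100, price=12, unit_cost=5,
                              order_cost=80, hold_cost=1.5, theta=0.3)
         for T in grid]
best = grid[int(np.argmax(rates))]
print(round(best, 2))
```

In the paper, theta would itself be a function of storage temperature, so the same search would be repeated (or solved analytically) per temperature setting.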
18 pages, 2468 KiB  
Article
Operation Assessment of a Hybrid Distribution Transformer Compensating for Voltage and Power Factor Using Predictive Control
by Esteban I. Marciel, Carlos R. Baier, Roberto O. Ramírez, Carlos A. Muñoz, Marcelo A. Pérez and Mauricio Arevalo
Mathematics 2024, 12(5), 774; https://doi.org/10.3390/math12050774 - 05 Mar 2024
Viewed by 642
Abstract
Hybrid Distribution Transformers (HDTs) offer a compelling alternative to traditional low-frequency transformers (LFTs), providing auxiliary services in addition to standard functionalities. By integrating LFTs with power converters, HDTs enhance the operational capabilities of the system. The specific configuration in which converters are connected to the transformer allows for the provision of multiple services. This can not only prevent network failures but also extend the lifespan of its components, an outcome that is highly desirable in a distribution grid. This article discusses an HDT developed to mitigate voltage fluctuations in the grid and to decrease the reactive power drawn from the secondary side of traditional LFTs. A finite-control-set model predictive control (FCS-MPC), in conjunction with linear controllers, is utilized for the effective management of the HDT converters. Two separate control loops are established to regulate voltage and reactive power on the secondary side of the transformer. Results from Hardware-in-the-Loop (HIL) testing affirm the proficiency of HDT in reducing grid voltage variations by 15% and in cutting reactive power consumption by up to 94%. The adopted control strategy and topology are demonstrated to be effective in stabilizing voltage and reactive power fluctuations while concurrently facilitating the charging of the converters’ DC link directly from the grid. Full article
(This article belongs to the Topic Intelligent Control in Smart Energy Systems)
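The FCS-MPC idea used above (enumerate the converter's finite switching set, predict one step ahead, apply the cost-minimizing input) can be shown on a toy RL load. The parameters and single-term cost function are illustrative and far simpler than the HDT controller described here:

```python
import numpy as np

# finite set of candidate converter output voltages (two-level leg, +/- Vdc/2)
CANDIDATES = np.array([-200.0, 0.0, 200.0])

def fcs_mpc_step(i_now, i_ref, R, L, dt):
    """One finite-control-set MPC step for a series RL load.

    Predicts the next current i+ = i + (dt/L)*(v - R*i) for every candidate
    voltage and returns the one minimizing the squared tracking error.
    """
    i_pred = i_now + dt / L * (CANDIDATES - R * i_now)
    cost = (i_pred - i_ref) ** 2
    return CANDIDATES[int(np.argmin(cost))]

v = fcs_mpc_step(i_now=0.0, i_ref=5.0, R=1.0, L=0.01, dt=1e-4)
print(v)  # picks the +200 V vector to drive the current upward
```

A practical controller like the one assessed here would add terms to the cost (reactive power, switching effort) and run this selection every sampling period.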
16 pages, 997 KiB  
Article
Distributed Traffic Signal Optimization at V2X Intersections
by Li Zhang and Lei Zhang
Mathematics 2024, 12(5), 773; https://doi.org/10.3390/math12050773 - 05 Mar 2024
Viewed by 523
Abstract
This paper presents our research on a traffic signal control system (TSCS) at V2X intersections. The overall objective of the study is to create an implementable TSCS. The specific objective of this paper is to investigate a distributed system towards implementation. The objective function of minimizing queue delay is formulated as the integral of queue lengths. The discrete queueing estimation mixes macroscopic and microscopic traffic flow models. The proposed architecture alleviates the communication network bandwidth constraint by processing Basic Safety Messages (BSMs) and computing queue lengths at the local intersection. In addition, a two-stage distributed system is designed to optimize offsets, splits, and cycle length simultaneously and in real time. The paper advances TSCS theory by contributing a novel analytic formulation of the delay functions and their first-order derivatives for a two-stage optimization model. The open-source traffic simulation engine Enhanced Transportation Flow Open-Source Microscopic Model (ETFOMM version 1.2) was selected as the simulation environment to develop, debug, and evaluate the models and the system. The control delay of the major direction, the minor direction, and the total network was collected to assess the system performance. Compared with the TSCS timing plan optimized by the Virginia Department of Transportation, the system generated a 21% control delay reduction in the major direction and a 7% control delay reduction in the minor direction at just a 10% penetration rate of connected vehicles. Finally, the proposed distributed and centralized systems show similar performance in the case study. Full article
(This article belongs to the Special Issue Simulation and Mathematical Programming Based Optimization)
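The "delay as the integral of queue length" objective can be made concrete with a short numerical sketch (invented queue samples, trapezoidal integration):

```python
import numpy as np

def queue_delay(times, queue_lengths):
    """Total delay as the area under the queue-length curve q(t).

    If q(t) vehicles are waiting at time t, integrating q over the interval
    gives total vehicle-seconds of delay; the trapezoidal rule approximates
    it from discrete samples.
    """
    dt = np.diff(times)
    q = np.asarray(queue_lengths, dtype=float)
    return float(0.5 * np.sum((q[1:] + q[:-1]) * dt))

t = np.array([0.0, 10.0, 20.0, 30.0, 40.0])   # seconds
q = [0.0, 4.0, 8.0, 3.0, 0.0]                 # queued vehicles at each instant
print(queue_delay(t, q))  # 150.0 vehicle-seconds of delay
```

The optimization models in the paper work with analytic forms of this integral (and its derivatives with respect to offsets, splits, and cycle length) rather than sampled sums, but the quantity being minimized is the same area.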
20 pages, 306 KiB  
Article
Multiplicity of Normalized Solutions for the Fractional Schrödinger Equation with Potentials
by Xue Zhang, Marco Squassina and Jianjun Zhang
Mathematics 2024, 12(5), 772; https://doi.org/10.3390/math12050772 - 05 Mar 2024
Viewed by 454
Abstract
We are concerned with the existence and multiplicity of normalized solutions to the fractional Schrödinger equation (−Δ)^s u + V(εx)u = λu + h(εx)f(u) in R^N, subject to the constraint ∫_{R^N} |u|^2 dx = a, where (−Δ)^s is the fractional Laplacian, s ∈ (0,1), a, ε > 0, λ ∈ R is an unknown parameter that appears as a Lagrange multiplier, h : R^N → [0, +∞) is bounded and continuous, and f is L^2-subcritical. Under some assumptions on the potential V, we show that the existence of normalized solutions depends on the global maximum points of h when ε is small enough. Full article
(This article belongs to the Special Issue Problems and Methods in Nonlinear Analysis)
21 pages, 15265 KiB  
Article
Navigating the Dynamics of Squeeze Film Dampers: Unraveling Stiffness and Damping Using a Dual Lens of Reynolds Equation and Neural Network Models for Sensitivity Analysis and Predictive Insights
by Haobo Wang, Yulai Zhao, Tongguang Yang, Zhong Luo and Qingkai Han
Mathematics 2024, 12(5), 771; https://doi.org/10.3390/math12050771 - 05 Mar 2024
Viewed by 592
Abstract
The squeeze film damper (SFD) is proven to be highly effective in mitigating rotor vibration as it traverses the critical speed, thus making it extensively utilized in the aeroengine domain. In this paper, we investigate the stiffness and damping of SFD using the Reynolds equation and neural network models. Our specific focus includes examining the structural and operating parameters of SFDs, such as clearance, feed pressure of oil, rotor whirl, and rotational speed. Firstly, the pressure distribution analytical model of the oil film inside the SFD based on the hydrodynamic lubrication theory is established, as described by the Reynolds equation. It obtained oil film forces, pressure, stiffness, and damping values under various sets of structural, lubrication, and operating parameters, including length, clearance, boundary pressure at both sides, rotational speed, and whirling motion, by applying difference computations to the Reynolds equation. Secondly, according to the significant analyses of the obtained oil film stiffness and damping, the following three parameters of the most significance are found: clearance, rotational speed, and rotor whirl. Furthermore, neural network models, including GA-BP and decision tree models, are established based on the obtained results of difference computation. The numerical simulation and calculation of these models are then applied to show their validity with all given parameters and the three significant parameters separately as two sets of model input. Regardless of either set of model inputs, these established neural network models are capable of predicting the nonlinear stiffness and damping of the oil film inside an SFD. These sensitive parameters merely require measurement, followed by the utilization of a neural network to predict stiffness and damping instead of the Reynolds equation. 
This process serves structural enhancement, facilitates parameter optimization in SFDs, and provides crucial support for refining the design parameters of SFDs. Full article
(This article belongs to the Section Dynamical Systems)
16 pages, 977 KiB  
Article
Examining Differences of Invariance Alignment in the Mplus Software and the R Package Sirt
by Alexander Robitzsch
Mathematics 2024, 12(5), 770; https://doi.org/10.3390/math12050770 - 05 Mar 2024
Viewed by 553
Abstract
Invariance alignment (IA) is a multivariate statistical technique for comparing the means and standard deviations of a factor variable in a one-dimensional factor model across multiple groups. To date, IA is most frequently estimated using the commercial Mplus software; it has also been implemented in the R package sirt. In this article, the performance of IA in Mplus and in sirt is compared. It is argued, and shown empirically in a simulation study and an empirical example, that differences between the software packages are primarily caused by different identification constraints in IA. With a change of the identification constraint through an argument of the IA function in sirt, Mplus and sirt performed comparably. Moreover, in line with previous work, the simulation study also highlighted that the tuning parameter ε=0.001 in IA is preferable to ε=0.01. Furthermore, the empirical example raises the question of whether IA, in its current implementations, behaves as expected in the case of many groups. Full article
11 pages, 281 KiB  
Article
Boundedness of Vector Liénard Equation with Multiple Variable Delays
by Melek Gözen
Mathematics 2024, 12(5), 769; https://doi.org/10.3390/math12050769 - 04 Mar 2024
Viewed by 410
Abstract
In this article, we consider a system of ordinary differential equations (ODEs) of second order with two variable time delays. We obtain new conditions for the uniform ultimate boundedness (UUB) of solutions of the considered system. The technique of proof is based on the Lyapunov–Krasovskii functional (LKF) method, using a new LKF. The main result of this article extends and improves a recent result for second-order ODEs with a constant delay to a more general system of second-order ODEs with two variable time delays. We also give a numerical example to illustrate the application of the main result. Full article

13 pages, 670 KiB  
Article
An Efficient Limited Memory Multi-Step Quasi-Newton Method
by Issam A. R. Moghrabi and Basim A. Hassan
Mathematics 2024, 12(5), 768; https://doi.org/10.3390/math12050768 - 04 Mar 2024
Viewed by 484
Abstract
This paper is dedicated to the development of a novel class of quasi-Newton techniques tailored to address the computational challenges posed by memory constraints. Such methodologies are commonly referred to as “limited” memory methods. The method proposed herein is adaptable, introducing a customizable memory parameter that governs how much historical data is retained in constructing the Hessian estimate at each iteration. The search directions generated by this approach are derived from a modified version of the full-memory multi-step BFGS update, incorporating limited-memory computation for a single term to approximate a matrix–vector multiplication. Results from numerical experiments exploring various parameter configurations substantiate the enhanced efficiency of the proposed algorithm within the realm of limited-memory quasi-Newton methods. Full article

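For orientation, the classic two-loop recursion underlying limited-memory quasi-Newton methods can be sketched as follows. This is the standard L-BFGS recursion, not the multi-step variant proposed in the paper; the function name is illustrative:

```python
import numpy as np

def two_loop_direction(grad, s_list, y_list):
    """Classic L-BFGS two-loop recursion: computes the search direction
    -H_k @ grad using only the stored curvature pairs (s_i, y_i),
    never forming a dense Hessian estimate."""
    q = grad.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    # first loop: newest pair to oldest
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    # initial Hessian scaling gamma = s^T y / y^T y from the newest pair
    s, y = s_list[-1], y_list[-1]
    q *= (s @ y) / (y @ y)
    # second loop: oldest pair to newest
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * (y @ q)
        q += (a - b) * s
    return -q
```

With a single pair satisfying y = s (an identity-Hessian step), the recursion reproduces the steepest-descent direction, a standard correctness check.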
31 pages, 30631 KiB  
Article
John von Neumann’s Space-Frequency Orthogonal Transforms
by Dan Stefanoiu and Janetta Culita
Mathematics 2024, 12(5), 767; https://doi.org/10.3390/math12050767 - 04 Mar 2024
Viewed by 533
Abstract
Among the invertible orthogonal transforms employed to perform the analysis and synthesis of 2D signals (especially images), the ones defined by means of John von Neumann’s cardinal sinus are extremely interesting. Their definitions rely on transforms similar to those employed to process time-varying 1D signals. This article deals with the extension of John von Neumann’s transforms from 1D to 2D. The approach follows the manner in which the 2D Discrete Fourier Transform was obtained and has the great advantage of preserving the orthogonality property as well as the invertibility. As an important consequence, the numerical procedures to compute the direct and inverse John von Neumann 2D transforms can be designed to be efficient thanks to the corresponding 1D algorithms. After describing the two numerical procedures, this article focuses on the analysis of their performance when run on real-life images. One black-and-white image and one colored image were selected to prove the transforms’ effectiveness. The results show that the 2D John von Neumann transforms are good competitors for other orthogonal transforms in terms of intrinsic compression capacity and image recovery. Full article
(This article belongs to the Section Engineering Mathematics)

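The separable construction described in the abstract, i.e. building a 2D transform from a 1D orthogonal one in the same way the 2D DFT is built from the 1D DFT, can be sketched as follows. A random orthogonal matrix stands in for the 1D John von Neumann transform matrix, which is not reproduced here:

```python
import numpy as np

def transform_2d(image, U):
    """Separable 2D transform built from a 1D orthogonal transform U:
    apply U along one axis, then along the other (U x U^T sandwich)."""
    return U @ image @ U.T

def inverse_2d(coeffs, U):
    # orthogonality of U makes the inversion exact: U^T U = I
    return U.T @ coeffs @ U

# Illustration: any orthogonal U yields an invertible 2D transform.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((8, 8)))
img = rng.standard_normal((8, 8))
assert np.allclose(inverse_2d(transform_2d(img, U), U), img)
```

The sandwich form makes the efficiency claim concrete: the 2D transform costs two 1D matrix applications per axis rather than one large dense operator.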
10 pages, 247 KiB  
Review
On the Geometry and Topology of Discrete Groups: An Overview
by Renata Grimaldi
Mathematics 2024, 12(5), 766; https://doi.org/10.3390/math12050766 - 04 Mar 2024
Viewed by 580
Abstract
In this paper, we provide a brief introduction to the main notions of geometric group theory and of the asymptotic topology of finitely generated groups. We start by presenting the basics of discrete groups and of topology at infinity, and then state some of the main theorems in these fields. Our aim is to give a sample of how the presence of a group action may affect the geometry of the underlying space, and how in many cases topological methods may help determine solutions to algebraic problems which may appear unrelated. Full article
(This article belongs to the Special Issue Geometry and Topology with Applications)
24 pages, 14284 KiB  
Article
Mask2Former with Improved Query for Semantic Segmentation in Remote-Sensing Images
by Shichen Guo, Qi Yang, Shiming Xiang, Shuwen Wang and Xuezhi Wang
Mathematics 2024, 12(5), 765; https://doi.org/10.3390/math12050765 - 04 Mar 2024
Viewed by 989
Abstract
Semantic segmentation of remote sensing (RS) images is vital in various practical applications, including urban construction planning, natural disaster monitoring, and land resources investigation. However, RS images are captured by airplanes or satellites at high altitudes and long distances, resulting in ground objects of the same category being scattered in various corners of the image. Moreover, objects of different sizes appear simultaneously in RS images: some objects occupy a large area in urban scenes, while others only cover small regions. Technically, these two universal situations pose significant challenges to high-quality segmentation of RS images. Based on these observations, this paper proposes a Mask2Former with an improved query (IQ2Former) for this task. The fundamental motivation behind IQ2Former is to enhance the capability of the query of Mask2Former by exploiting the characteristics of RS images well. First, we propose the Query Scenario Module (QSM), which aims to learn and group the queries from feature maps, allowing the selection of distinct scenarios such as urban and rural areas, building clusters, and parking lots. Second, we design the Query Position Module (QPM), which assigns image position information to each query without increasing the number of parameters, thereby enhancing the model’s sensitivity to small targets in complex scenarios. Finally, we propose the Query Attention Module (QAM), which is constructed to leverage the characteristics of query attention to extract valuable features from the preceding queries. Positioned between the duplicated transformer decoder layers, QAM ensures the comprehensive utilization of the supervisory information and the exploitation of fine-grained details. Architecturally, the QSM, QPM, and QAM are assembled into an end-to-end model to achieve high-quality semantic segmentation.
In comparison to classical and state-of-the-art models (FCN, PSPNet, DeepLabV3+, OCRNet, UPerNet, MaskFormer, Mask2Former), IQ2Former demonstrates exceptional performance across three publicly available, challenging remote-sensing datasets, achieving 83.59 mIoU on the Vaihingen dataset, 87.89 mIoU on the Potsdam dataset, and 56.31 mIoU on the LoveDA dataset. Additionally, overall accuracy, ablation experiments, and visualized segmentation results all indicate the validity of IQ2Former. Full article
(This article belongs to the Special Issue Advanced Research in Data-Centric AI)

18 pages, 1040 KiB  
Article
Gaussian Mixture Estimation from Lower-Dimensional Data with Application to PET Imaging
by Azra Tafro and Damir Seršić
Mathematics 2024, 12(5), 764; https://doi.org/10.3390/math12050764 - 04 Mar 2024
Viewed by 506
Abstract
In positron emission tomography (PET), the original points of emission are unknown; the scanners record pairs of photons emitted from those origins, creating lines of response (LORs) in random directions. This presents a latent variable problem, since at least one dimension of relevant information is lost. It can be solved by a statistical approach to image reconstruction: modeling the image as a Gaussian mixture model (GMM). This allows us to obtain a high-quality continuous model that is not computationally demanding and does not require postprocessing. In this paper, we propose a novel method of GMM estimation in the PET setting, directly from lines of response. The approach utilizes some well-known and convenient properties of the Gaussian distribution and the fact that the random slopes of the lines are independent of the points of origin. The expectation–maximization (EM) algorithm, most commonly used to estimate GMMs in the traditional setting, is here adapted to lower-dimensional data. The proposed estimation method is unbiased, and simulations and experiments show that accurate reconstruction on synthetic data is possible from relatively small samples. Full article
(This article belongs to the Special Issue Inverse Problems and Imaging: Theory and Applications)

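As a baseline for the adaptation described above, the standard EM updates for a one-dimensional GMM can be sketched as follows. This is a textbook sketch with an illustrative quantile-based initialization, not the paper's LOR-based estimator:

```python
import numpy as np

def em_gmm(x, k, iters=100):
    """Standard EM for a 1D Gaussian mixture -- the traditional-setting
    baseline that the paper adapts to lower-dimensional LOR data."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))  # spread-out init
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities r[n, j] ∝ pi_j N(x_n | mu_j, var_j)
        # (the 1/sqrt(2*pi) constant cancels in the normalization)
        r = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(var)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood updates
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return pi, mu, var
```

On two well-separated synthetic components, the recovered means land close to the true ones, which mirrors the small-sample accuracy claim for the full method.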
20 pages, 4980 KiB  
Article
Improved Swarm Intelligence-Based Logistics Distribution Optimizer: Decision Support for Multimodal Transportation of Cross-Border E-Commerce
by Jiayi Xu, Mario Di Nardo and Shi Yin
Mathematics 2024, 12(5), 763; https://doi.org/10.3390/math12050763 - 04 Mar 2024
Viewed by 717
Abstract
Cross-border e-commerce logistics activities increasingly use multimodal transportation. In this mode, high-performance optimizers that provide decision support for cross-border multimodal transportation deserve attention. This study constructs a logistics distribution optimization model for cross-border e-commerce multimodal transportation. The mathematical model takes minimizing distribution costs, minimizing carbon emissions during distribution, and maximizing customer satisfaction as its objective functions, and considers constraints across multiple dimensions, such as cargo aircraft and vehicle load limitations. Meanwhile, improvement strategies were designed for the Sand Cat Swarm Optimization (SCSO) algorithm, and an optimizer based on the resulting improved swarm intelligence algorithm was developed for model solving. The effectiveness of the proposed mathematical model and improved algorithm was verified on a real-world case of cross-border e-commerce logistics transportation. The results indicate that the proposed solution reduces delivery costs and carbon emissions while improving customer satisfaction. Full article
(This article belongs to the Special Issue Advanced Methods in Intelligent Transportation Systems)

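For context, the swarm-intelligence family to which SCSO belongs shares a common iterative structure: candidate solutions move under the attraction of their personal bests and a global best. The sketch below is a generic particle-swarm loop with conventional coefficients, not the paper's improved SCSO:

```python
import numpy as np

def swarm_minimize(f, dim, n=30, iters=200, seed=0):
    """Generic particle-swarm minimization loop -- illustrates the shared
    structure of swarm optimizers, not the improved SCSO of the paper."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n, dim))   # particle positions
    v = np.zeros((n, dim))                 # particle velocities
    pbest = x.copy()                       # personal bests
    pbest_f = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_f.argmin()].copy() # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        # inertia + attraction toward personal and global bests
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()
```

A multi-objective logistics model such as the paper's would replace the single scalar objective with a weighted or Pareto-based fitness, but the move-evaluate-update loop is the same skeleton.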
18 pages, 1022 KiB  
Article
Delay-Embedding Spatio-Temporal Dynamic Mode Decomposition
by Gyurhan Nedzhibov
Mathematics 2024, 12(5), 762; https://doi.org/10.3390/math12050762 - 04 Mar 2024
Viewed by 652
Abstract
Spatio-temporal dynamic mode decomposition (STDMD) is an extension of dynamic mode decomposition (DMD) designed to handle spatio-temporal datasets. It extends the framework so that it can analyze data that have both spatial and temporal variations, facilitating the extraction of spatial structures along with their temporal evolution. The STDMD method extracts temporal and spatial development information simultaneously, including wavenumbers, frequencies, and growth rates, which are essential in complex dynamic systems. We provide a comprehensive mathematical framework for sequential and parallel STDMD approaches. To widen the range of applications of the presented techniques, we also introduce a generalization of delay coordinates. The extension, labeled delay-embedding STDMD, allows the use of delayed data, which can be both time-delayed and space-delayed. An explicit matrix-form expression of the presented algorithms is also provided, making theoretical analysis easier and providing a solid foundation for further research and development. The novel approach is demonstrated on some illustrative model dynamics. Full article

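The DMD building block that STDMD extends can be sketched compactly: stack snapshots into matrices, fit a low-rank linear operator between consecutive snapshots, and read off modes and growth rates from its eigendecomposition. A minimal sketch of exact DMD:

```python
import numpy as np

def dmd(X, r):
    """Basic (exact) DMD: given snapshot columns x_0..x_m, fit a rank-r
    linear operator A with x_{t+1} ≈ A x_t and return its eigenvalues
    (growth rates / frequencies) and spatial modes."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    # project A onto the leading POD subspace: Atilde = U* X2 V Sigma^-1
    Atilde = U.conj().T @ X2 @ Vh.conj().T / s
    eigvals, W = np.linalg.eig(Atilde)
    # exact DMD modes: Phi = X2 V Sigma^-1 W
    modes = (X2 @ Vh.conj().T / s) @ W
    return eigvals, modes
```

On data generated by a known linear map, the recovered eigenvalues match the map's spectrum, which is the basic consistency property the spatio-temporal and delay-embedding extensions inherit.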
12 pages, 271 KiB  
Article
The Role of Data on the Regularity of Solutions to Some Evolution Equations
by Maria Michaela Porzio
Mathematics 2024, 12(5), 761; https://doi.org/10.3390/math12050761 - 04 Mar 2024
Viewed by 471
Abstract
In this paper, we study the influence of the initial data and the forcing terms on the regularity of solutions to a class of evolution equations, including linear and semilinear parabolic equations as the model cases, together with the nonlinear p-Laplacian equation. We focus our study on the regularity (in terms of belonging to appropriate Lebesgue spaces) of the gradient of the solutions. We prove that there are cases where, as soon as t>0, the regularity of the solutions is not influenced at all by the initial data. We also derive estimates for the gradient of these solutions that are independent of the initial data and reveal, once again, that for this class of evolution problems, the real “actors of the regularity” are the forcing terms. Full article
11 pages, 809 KiB  
Article
Eigenvalue Problem Describing Magnetorotational Instability in Outer Regions of Galaxies
by Evgeny Mikhailov and Tatiana Khasaeva
Mathematics 2024, 12(5), 760; https://doi.org/10.3390/math12050760 - 04 Mar 2024
Viewed by 569
Abstract
The existence of magnetic fields in spiral galaxies is beyond doubt and is confirmed by both observational data and theoretical models. Their generation occurs through the dynamo mechanism, which is associated with the properties of turbulence. Most studies consider magnetic fields at moderate distances from the center of the disk, since the dynamo number is small in the marginal regions and field growth should be suppressed. At the same time, computational results demonstrate the possibility of magnetic field penetration into the marginal regions of galaxies. Besides the dynamo, magnetorotational instability (MRI) can serve as one of the mechanisms of field generation. This research investigates the impact of MRI on galactic magnetic field generation by solving the resulting eigenvalue problems, formulated under the assumption that perturbations may grow. We consider an eigenvalue problem describing the main field characteristics in the case of MRI, where the eigenvalues are closely connected with the average vertical scale of the galaxy, in order to find out whether MRI takes place in the outer regions of the galaxy. The problem cannot be solved exactly; thus, it is treated with perturbation theory for self-adjoint operators, expressing the eigenvalues as series in parameters characterizing the properties of the interstellar medium. We obtain linear and, since these are not sufficiently accurate, quadratic approximations and compare them with numerical results; the approximations give proper precision and are relatively close to the numerical values for the largest eigenvalue. Full article
(This article belongs to the Special Issue Mathematical Analysis and Its Application in Astrophysics)

24 pages, 887 KiB  
Article
Searching by Topological Complexity: Lightweight Neural Architecture Search for Coal and Gangue Classification
by Wenbo Zhu, Yongcong Hu, Zhengjun Zhu, Wei-Chang Yeh, Haibing Li, Zhongbo Zhang and Weijie Fu
Mathematics 2024, 12(5), 759; https://doi.org/10.3390/math12050759 - 04 Mar 2024
Viewed by 685
Abstract
Lightweight design and adaptive adjustment are key research directions for deep neural networks (DNNs). In coal mining, frequent changes in raw coal sources and production batches can cause uneven distributions of appearance features, leading to concept drift problems. The network architecture and parameters must then be adjusted frequently to avoid a decline in model accuracy, which poses a significant challenge for those without specialist expertise. Although Neural Architecture Search (NAS) has a strong ability to generate networks automatically, enabling the automatic design of highly accurate networks, it often produces complex internal topological connections. These redundant architectures do not always improve network performance effectively, especially in resource-constrained environments, where their computational efficiency is significantly reduced. In this paper, we propose a method called Topology Complexity Neural Architecture Search (TCNAS). TCNAS introduces a new way of evaluating the topological complexity of neural networks and uses both topological complexity and accuracy to guide the search, effectively obtaining lightweight and efficient networks. TCNAS employs an adaptive shrinking search space optimization method, which gradually eliminates poorly performing cells to reduce the search space, thereby improving search efficiency and addressing the problem of space explosion. In classification experiments on coal and gangue, the optimal network designed by TCNAS achieves an accuracy of 83.3%, and its structure is much simpler, with roughly 1/53 of the parameters of the network dedicated to coal and gangue recognition. Experiments show that TCNAS is able to generate networks that are both efficient and simple for resource-constrained industrial applications. Full article
(This article belongs to the Special Issue Deep Learning and Adaptive Control, 2nd Edition)

42 pages, 9098 KiB  
Review
Consequential Advancements of Self-Supervised Learning (SSL) in Deep Learning Contexts
by Mohammed Majid Abdulrazzaq, Nehad T. A. Ramaha, Alaa Ali Hameed, Mohammad Salman, Dong Keon Yon, Norma Latif Fitriyani, Muhammad Syafrudin and Seung Won Lee
Mathematics 2024, 12(5), 758; https://doi.org/10.3390/math12050758 - 03 Mar 2024
Viewed by 1115
Abstract
Self-supervised learning (SSL) is a promising deep learning (DL) technique that uses massive volumes of unlabeled data to train neural networks. SSL techniques have evolved in response to the poor classification performance of conventional and even modern machine learning (ML) and DL models on the enormous amounts of unlabeled data produced periodically in different disciplines. However, the literature does not fully address the practical aspects of SSL necessary for industrial engineering and medicine. Accordingly, this thorough review is conducted to identify these prominent possibilities for prediction, focusing on the industrial and medical fields. This extensive survey, with its pivotal outcomes, could support industrial engineers and medical personnel in efficiently predicting machinery faults and patients’ ailments without referring to traditional numerical models that require massive computational budgets, time, storage, and effort for data annotation. Additionally, the review’s numerous addressed ideas could encourage industry and healthcare actors to put SSL principles into agile application to achieve precise maintenance prognostics and illness diagnosis with remarkable levels of accuracy and feasibility, simulating functional human thinking and cognition without compromising prediction efficacy. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Decision Making)

15 pages, 3158 KiB  
Article
Inferencing Space Travel Pricing from Mathematics of General Relativity Theory, Accounting Equation, and Economic Functions
by Kang-Lin Peng, Xunyue Xue, Liqiong Yu and Yixin Ren
Mathematics 2024, 12(5), 757; https://doi.org/10.3390/math12050757 - 03 Mar 2024
Viewed by 681
Abstract
This study derives space travel pricing from a Walrasian equilibrium, reasoning logically from the general relativity theory (GRT), the accounting equation, and economic supply and demand functions. Cobb–Douglas functions embed the endogenous space factor as new capital to form the space travel firm’s production function, which is also transformed into the consumer’s utility function. Market equilibrium thus occurs at the equivalence of the supply and demand functions, analogous to the GRT, which presents the equivalence between the spatial geometric tensor and the energy–momentum tensor, explaining the principles of gravity and the motion of matter in the spacetime framework. The axiomatic set theory of the accounting equation explains the equity premium effect, which causes a short-term inequality in the accounting equation that reaches equivalence through suppliers’ incremental equity in the closing-accounts process of the accounting cycle. On the demand side, the consumption of space travel can be treated as a value-at-risk (VaR) investment to attain a specific spacetime curvature in an expected orbit. Spacetime market equilibrium is then achieved to construct the space travel pricing model. The methodology of econophysics and the analogy method were applied to infer space travel pricing with models of profit maximization, single-mindedness, and envy-free pricing in unit-demand markets. A case study with simulation was conducted to verify the mathematical models and algorithm empirically. The results show that space travel pricing remains associated with the principle of market equilibrium but needs to be extended to the spacetime tensor of GRT. Full article

15 pages, 337 KiB  
Article
Generalized Boussinesq System with Energy Dissipation: Existence of Stationary Solutions
by Evgenii S. Baranovskii and Olga Yu. Shishkina
Mathematics 2024, 12(5), 756; https://doi.org/10.3390/math12050756 - 03 Mar 2024
Viewed by 447
Abstract
In this paper, we investigate the solvability of a boundary value problem for a heat and mass transfer model with the spatially averaged Rayleigh function. The considered model describes the 3D steady-state non-isothermal flow of a generalized Newtonian fluid (with shear-dependent viscosity) in a bounded domain with Lipschitz boundary. The main novelty of our work is that, in contrast to the classical Boussinesq approximation, we do not neglect the viscous dissipation effect and hence deal with a system of strongly nonlinear partial differential equations. Using the properties of the averaging operation and d-monotone operators, as well as the Leray–Schauder alternative for completely continuous mappings, we prove the existence of weak solutions without any smallness assumptions on the model data. Moreover, it is shown that the set of all weak solutions is compact, and each solution from this set satisfies certain energy equalities. Full article
(This article belongs to the Special Issue Modeling of Multiphase Flow Phenomena)