Search Results (66)

Search Parameters:
Keywords = homogeneous markov chains

18 pages, 3189 KB  
Article
Continuous-Time Markov Chain Modelling for Service Life Prediction of Building Elements
by Artur Zbiciak, Dariusz Walasek, Vazgen Bagdasaryan and Eugeniusz Koda
Appl. Sci. 2026, 16(7), 3555; https://doi.org/10.3390/app16073555 - 5 Apr 2026
Viewed by 399
Abstract
A continuous-time Markov chain framework is developed for service life prediction of building assets, and three formulations are compared: a homogeneous generator, a time-varying generator, and a fractional model. The framework delivers survival, density of absorption time, hazard, and mean time to absorption. For the homogeneous case, state trajectories are computed using matrix exponentials. The time-varying case is solved both by local exponential propagation on a time grid and by direct integration of the Kolmogorov equation. The fractional case is implemented in two independent ways, via a truncated series expansion and via an in-house routine for the Mittag-Leffler function, which also allows the direct evaluation of survival and hazard from the standard fractional relations while avoiding singular behaviour at the origin. This study shows that non-homogeneous rates accelerate deterioration relative to the homogeneous benchmark, whereas fractional dynamics reproduce early-time acceleration followed by a slow decline of the hazard, which is consistent with heavy-tailed survival and longer effective service life. The two fractional solvers provide mutually consistent outputs, which supports the numerical robustness of the approach. The framework is readily applicable to sparse inspection data and short observation windows and provides a transparent basis for comparing modelling assumptions that affect life cycle forecasts used in asset management and maintenance planning. Full article
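The homogeneous formulation described above reduces to matrix exponentials of a constant generator. A minimal sketch with an invented 4-state deterioration chain (good, fair, poor, and an absorbing "end of service life" state); all rates are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 4-state deterioration chain; state 3 is absorbing.
Q = np.array([
    [-0.10,  0.10,  0.00,  0.00],
    [ 0.00, -0.20,  0.20,  0.00],
    [ 0.00,  0.00, -0.30,  0.30],
    [ 0.00,  0.00,  0.00,  0.00],
])
p0 = np.array([1.0, 0.0, 0.0, 0.0])   # start in the "good" state

def survival(t):
    """P(not yet absorbed by time t): mass left in the transient states."""
    return (p0 @ expm(Q * t))[:-1].sum()

# Mean time to absorption from state 0: solve -Q_T m = 1 on the transient block.
QT = Q[:-1, :-1]
mtta = np.linalg.solve(-QT, np.ones(3))[0]
print(survival(5.0), mtta)   # mtta = 1/0.1 + 1/0.2 + 1/0.3
```

The hazard and the density of the absorption time follow from `survival` by differentiation; the time-varying and fractional cases replace the single `expm` with step-wise propagation or Mittag-Leffler functions.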

25 pages, 26945 KB  
Article
Addressing Compatibility Challenges in Multi-Cloud Services: A Markov Chain-Based Service Recommendation Framework
by Shiyang Ma, Lingtao Xue, Xiaojie Guo, Zesong Dong and Xuewen Dong
Computers 2026, 15(2), 85; https://doi.org/10.3390/computers15020085 - 1 Feb 2026
Viewed by 395
Abstract
Service recommendation aims to assist users in selecting appropriate services according to their requirements while ensuring seamless compatibility in modern cloud and edge computing environments. In dynamic multi-cloud scenarios, services are typically deployed across heterogeneous cloud platforms and are frequently reconfigured. Most existing service recommendation approaches, however, focus primarily on static compatibility aspects, such as service interfaces or communication protocols, while overlooking the dynamic characteristics of service interactions. Several limitations can be identified. First, the lack of effective mechanisms for quantifying service compatibility in dynamic cloud environments often leads to degraded system efficiency. Second, the absence of dedicated multi-cloud service compatibility quantification methodologies restricts recommendation accuracy. Third, insufficient mathematical analysis with respect to uniqueness, feasibility, and correctness may result in unstable evaluation outcomes and additional computational overhead. To overcome these limitations, this paper presents McCom, a multi-cloud service recommendation framework designed to quantify service compatibility performance and address the aforementioned challenges. First, a novel Markov chain-based compatibility quantification model is developed to characterize service interactions in dynamic multi-cloud environments. By exploiting the homogeneity, irreducibility, and convergence properties of Markov chains, the proposed model enables stable and reliable compatibility assessment. Second, a multi-cloud compatibility quantification strategy is introduced to mitigate interference arising from complex service pools through refined filtering and sketching mechanisms. Third, a series of mathematical proofs rigorously demonstrates the feasibility, correctness, and uniqueness of the proposed quantification method. Extensive simulation results indicate that the proposed framework achieves significant performance improvements over existing state-of-the-art approaches, including a 14.44% gain in F1 score, a 40.68% reduction in latency, and a 50.85% increase in accuracy. Full article
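The stability the abstract attributes to homogeneity, irreducibility, and convergence can be illustrated with a plain left power iteration; the 3-service interaction matrix below is invented for illustration and is not the McCom model itself:

```python
import numpy as np

# Row-stochastic interaction matrix for three hypothetical services.
# For an irreducible, aperiodic chain the iteration converges to a
# unique stationary vector, which could serve as a stable score.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.3, 0.3, 0.4],
])

pi = np.full(3, 1.0 / 3.0)
for _ in range(200):      # left power iteration: pi_{k+1} = pi_k P
    pi = pi @ P
pi /= pi.sum()            # guard against rounding drift
print(pi)                 # unique stationary distribution
```

Uniqueness of the limit (independence from the starting vector) is exactly what makes such a score reproducible across repeated evaluations.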
(This article belongs to the Special Issue Edge and Fog Computing for Internet of Things Systems (3rd Edition))

24 pages, 2583 KB  
Article
Hybrid Demand Forecasting in Fuel Supply Chains: ARIMA with Non-Homogeneous Markov Chains and Feature-Conditioned Evaluation
by Daniel Kubek and Paweł Więcek
Energies 2025, 18(22), 6044; https://doi.org/10.3390/en18226044 - 19 Nov 2025
Cited by 1 | Viewed by 1131
Abstract
In the context of growing data availability and increasing complexity of demand patterns in retail fuel distribution, selecting effective forecasting models for large collections of time series is becoming a key operational challenge. This study investigates the effectiveness of a hybrid forecasting approach combining ARIMA models with dynamically updated Markov chains. Unlike many existing studies that focus on isolated or small-scale experiments, this research evaluates the hybrid model across a full set of approximately 150 time series collected from multiple petrol stations, without pre-clustering or manual selection. A comprehensive set of statistical and structural features is extracted from each time series to analyze their relation to forecast performance. The results show that the hybrid ARIMA–Markov approach significantly outperforms both individual statistical models and commonly applied machine learning methods in many cases, particularly for non-stationary or regime-shifting series. In 100% of cases, the hybrid model reduced the error compared to both baseline models: the median RMSE improvement was 13.03% over ARIMA and 15.64% over the Markov model, with statistical significance confirmed by the Wilcoxon signed-rank test. The analysis also highlights specific time series features—such as entropy, regime shift frequency, and autocorrelation structure—as strong indicators of whether hybrid modeling yields performance gains. Feature-conditioning analyses (e.g., lag-1 autocorrelation, volatility, entropy) explain when hybridization helps, enabling a feature-aware workflow that selectively deploys model components and narrows parameter searches. The greatest benefits of applying the hybrid model were observed for time series characterized by high variability, moderate entropy of differences, and a well-defined temporal dependency structure; the correlation values between these features and the improvement in hybrid performance relative to the ARIMA and Markov models reached 0.55–0.58, ensuring adequate statistical significance. Such approaches are particularly valuable in enterprise environments dealing with thousands of time series, where automated model configuration becomes essential. The findings position interpretable, adaptive hybrids as a practical default for short-horizon demand forecasting in fuel supply chains and, more broadly, in energy-use applications characterized by heterogeneous profiles and evolving regimes. Full article
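A toy version of the hybrid idea can be sketched as follows: an AR(1) baseline stands in for ARIMA, and a static two-state Markov chain on residual signs stands in for the paper's dynamically updated chain. The series and every parameter below are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0.1, 1.0, 300))    # synthetic demand series

# 1) AR(1) baseline fitted by least squares.
X, y1 = y[:-1], y[1:]
phi = (X @ y1) / (X @ X)
resid = y1 - phi * X

# 2) Two-state chain on the residual sign, estimated from counts
#    (Laplace-smoothed so every transition has positive probability).
s = (resid > 0).astype(int)
P = np.ones((2, 2))
for a, b in zip(s[:-1], s[1:]):
    P[a, b] += 1
P /= P.sum(axis=1, keepdims=True)

# 3) Hybrid one-step forecast: AR(1) point forecast plus the
#    state-conditional mean residual weighted by transition probabilities.
mean_res = np.array([resid[s == k].mean() for k in (0, 1)])
forecast = phi * y[-1] + P[s[-1]] @ mean_res
print(forecast)
```

The Markov component corrects the systematic part of the baseline's residuals, which is precisely where the paper reports the largest gains (regime-shifting, non-stationary series).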
(This article belongs to the Section A: Sustainable Energy)

14 pages, 882 KB  
Article
Discrete-Time Markov Chain Method for Predicting Probability of Crop Yield Variability
by László Huzsvai, Elza Kovács, Géza Tuba, Csaba Juhász, Danijel Jug and József Zsembeli
Earth 2025, 6(4), 142; https://doi.org/10.3390/earth6040142 - 6 Nov 2025
Viewed by 1587
Abstract
Agricultural crop yield prediction is vital for ensuring global food security and optimizing resource management amid the increasing challenges posed by climate change and extreme weather variability. This study investigates the use of discrete-time, finite-state, time-homogeneous Markov chains to model crop failure and yield fluctuation probability. Maize yields in Hungary during 1921–1960 and 1980–2023 were analyzed. Yield distribution was assumed to depend only on the yield of the previous year. The Olympic average was computed for 5-year periods, excluding the highest and lowest values. Annual yield was divided by the value of the moving average and expressed as a percentage. According to our estimates, a higher degree of yield fluctuation is associated with an increased frequency of years with yields close to the long-term average. Considering the long-time trend during 1925–1960, the probability of having average maize yield, yield failure, and high yield would be 73.5%, 11.8%, and 14.7%, respectively. For the period of 1985–2023, the probability of failure was calculated to be at least 15% higher, while that of the high yield was found to be lower than for the first period. Taking the second period’s trend into account, the probabilities of average harvest, crop failure, and high harvest would be 66%, 21%, and 13%, respectively. Our findings confirm that the probability of yield variability can be modeled using the discrete-time Markov chain method, providing a new mathematical approach for crop yield prediction. Full article
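The discrete-time, finite-state, time-homogeneous machinery used above can be sketched in a few lines: estimate a transition probability matrix from categorized yields and read off long-run category probabilities. The category labels follow the paper (failure, average, high), but the sequence itself is invented for illustration:

```python
import numpy as np

# Hypothetical yearly yield categories relative to the Olympic moving
# average: 0 = failure, 1 = average, 2 = high.
seq = [1, 1, 0, 1, 2, 1, 1, 0, 1, 1, 2, 1, 1, 1, 0, 1, 1, 2, 1, 1]

counts = np.zeros((3, 3))
for a, b in zip(seq[:-1], seq[1:]):
    counts[a, b] += 1
P = counts / counts.sum(axis=1, keepdims=True)   # row-stochastic TPM

# Long-run category probabilities: left eigenvector of P at eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()
print(P)
print(pi)   # long-term failure / average / high probabilities
```

The stationary vector plays the role of the long-term probability split (e.g., the paper's 73.5% / 11.8% / 14.7% figures for 1925–1960), computed here from made-up data.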
(This article belongs to the Topic Advances in Crop Simulation Modelling)

19 pages, 362 KB  
Article
An Approach to Obtain Upper Ergodicity Bounds for Some QBDs with Countable State Space
by Yacov Satin, Rostislav Razumchik and Alexander Zeifman
Mathematics 2025, 13(16), 2604; https://doi.org/10.3390/math13162604 - 14 Aug 2025
Viewed by 667
Abstract
Usually, when the computation of limiting distributions of (in)homogeneous (in)finite continuous-time Markov chains (CTMCs) has to be performed numerically, the algorithm has to be told when to stop the computation. Such an instruction can be constructed from available ergodicity bounds. One of the analytical methods for obtaining ergodicity bounds for CTMCs is the logarithmic norm method. It can be applied to any CTMC; however, since the method requires a guessing step (the search for proper Lyapunov functions), which may not be successful, the obtained bounds are not always meaningful. Moreover, the guessing step cannot be eliminated or automated and has to be performed in each new use-case, i.e., for each new structure of the infinitesimal matrix. Nevertheless, the simplicity of the method makes attempts to expand its scope tempting. In this paper, such an attempt is made. We present a new technique that allows one to apply, in one unified way, the logarithmic norm method to QBDs with countable state spaces. The technique involves preprocessing the infinitesimal matrix of the QBD, finding bounds for its blocks, and then merging them into a single explicit upper bound. The applicability of the technique is demonstrated through a series of examples. Full article
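The logarithmic norm method mentioned above can be sketched on the simplest possible QBD, a three-state birth-death chain with invented rates. After eliminating one probability via the normalization condition, the logarithmic norm of the reduced coefficient matrix gives an explicit decay-rate bound:

```python
import numpy as np
from scipy.linalg import expm

# Toy birth-death CTMC: births lam0, lam1 and deaths mu1, mu2 on {0, 1, 2}.
lam0, lam1, mu1, mu2 = 1.0, 1.0, 2.0, 3.0
Q = np.array([
    [-lam0,          lam0,   0.0],
    [  mu1, -(mu1 + lam1),  lam1],
    [  0.0,           mu2,  -mu2],
])

# Eliminating p0 = 1 - p1 - p2 from dp/dt = pQ leaves dz/dt = c + Az
# with z = (p1, p2)^T; the logarithmic norm of A bounds the decay rate.
A = np.array([
    [-(lam0 + lam1 + mu1), mu2 - lam0],
    [                lam1,       -mu2],
])

def log_norm_l1(M):
    """Logarithmic norm induced by the l1 norm:
    max over columns j of m_jj + sum_{i != j} |m_ij|."""
    d = np.diag(M)
    return (d + np.abs(M).sum(axis=0) - np.abs(d)).max()

alpha = -log_norm_l1(A)   # ergodicity bound: solutions coalesce like exp(-alpha t)
t = 2.0
p, q = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
gap = np.abs((p - q) @ expm(Q * t)).sum()
print(alpha, gap)         # gap stays below ||p - q||_1 * exp(-alpha t)
```

The paper's contribution is, roughly, doing this block by block for level-structured generators with countably many levels, where no single reduced matrix is available.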
(This article belongs to the Special Issue Advances in Queueing Theory and Applications)

21 pages, 5559 KB  
Article
The Use of Minimization Solvers for Optimizing Time-Varying Autoregressive Models and Their Applications in Finance
by Zhixuan Jia, Wang Li, Yunlong Jiang and Xingshen Liu
Mathematics 2025, 13(14), 2230; https://doi.org/10.3390/math13142230 - 9 Jul 2025
Cited by 2 | Viewed by 1255
Abstract
Time series data are fundamental for analyzing temporal dynamics and patterns, enabling researchers and practitioners to model, forecast, and support decision-making across a wide range of domains, such as finance, climate science, environmental studies, and signal processing. In the context of high-dimensional time series, the Vector Autoregressive model (VAR) is widely used, wherein each variable is modeled as a linear combination of lagged values of all variables in the system. However, the traditional VAR framework relies on the assumption of stationarity, which states that the autoregressive coefficients remain constant over time. Unfortunately, this assumption often fails in practice, especially in systems subject to structural breaks or evolving temporal dynamics. The Time-Varying Vector Autoregressive (TV-VAR) model has been developed to address this limitation, allowing model parameters to vary over time and thereby offering greater flexibility in capturing non-stationary behavior. In this study, we propose an enhanced modeling approach for the TV-VAR framework by incorporating minimization solvers in generalized additive models and one-sided kernel smoothing techniques. The effectiveness of the proposed methodology is assessed using simulations based on non-homogeneous Markov chains, accompanied by a detailed discussion of its advantages and limitations. Finally, we illustrate the practical utility of our approach using an application to real-world financial data. Full article
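The one-sided kernel smoothing mentioned above can be sketched in the scalar case: a time-varying AR(1) coefficient fitted at each time point by weighted least squares over past observations only. This is a simplified stand-in for the paper's solver-based TV-VAR estimation; all data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
T, h = 400, 30.0
phi_true = np.linspace(0.2, 0.8, T)          # slowly drifting coefficient
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi_true[t] * y[t - 1] + rng.normal()

def phi_hat(t, y, h):
    """Weighted least-squares AR(1) slope using data strictly before t."""
    s = np.arange(1, t)
    w = np.exp(-(t - s) / h)                  # one-sided exponential kernel
    return np.sum(w * y[s] * y[s - 1]) / np.sum(w * y[s - 1] ** 2)

print(phi_hat(100, y, h), phi_hat(399, y, h))
```

Using only past data keeps the estimate causal (usable for forecasting), at the cost of a lag in tracking the drifting coefficient; the bandwidth `h` trades variance against that lag.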
(This article belongs to the Section E5: Financial Mathematics)

25 pages, 657 KB  
Article
Bitcoin Price Regime Shifts: A Bayesian MCMC and Hidden Markov Model Analysis of Macroeconomic Influence
by Vaiva Pakštaitė, Ernestas Filatovas, Mindaugas Juodis and Remigijus Paulavičius
Mathematics 2025, 13(10), 1577; https://doi.org/10.3390/math13101577 - 10 May 2025
Cited by 3 | Viewed by 13828
Abstract
Bitcoin’s role in global finance has rapidly expanded with increasing institutional participation, prompting new questions about its linkage to macroeconomic variables. This study thoughtfully integrates a Bayesian Markov Chain Monte Carlo (MCMC) covariate selection process within homogeneous and non-homogeneous Hidden Markov Models (HMMs) to analyze 16 macroeconomic and Bitcoin-specific factors from 2016 to 2024. The proposed method integrates likelihood penalties to refine variable selection and employs a rolling-window bootstrap procedure for 1-, 5-, and 30-step-ahead forecasting. Results indicate a fundamental shift: while early Bitcoin pricing was primarily driven by technical and supply-side factors (e.g., halving cycles, trading volume), later periods exhibit stronger ties to macroeconomic indicators such as exchange rates and major stock indices. Heightened volatility aligns with significant events—including regulatory changes and institutional announcements—underscoring Bitcoin’s evolving market structure. These findings demonstrate that integrating Bayesian MCMC within a regime-switching model provides robust insights into Bitcoin’s deepening connection with traditional financial forces. Full article

17 pages, 360 KB  
Review
Statistics for Continuous Time Markov Chains, a Short Review
by Manuel L. Esquível and Nadezhda P. Krasii
Axioms 2025, 14(4), 283; https://doi.org/10.3390/axioms14040283 - 8 Apr 2025
Cited by 3 | Viewed by 3372
Abstract
This review article provides a global context for several works on the fitting of continuous-time nonhomogeneous Markov chains with finite state space, and points out selected aspects of two previously introduced techniques—estimation and calibration—that are relevant for applications and are used to fit a continuous-time Markov chain model to data through the adequate selection of parameters. The term estimation suits the procedure better when statistical techniques—e.g., maximum likelihood estimators—are employed, while calibration covers the case where, for instance, some optimisation technique finds a best approximating parameter to ensure good model fitting. For completeness, we provide a short summary of well-known important notions and results formulated for nonhomogeneous Markov chains that, in general, can be transferred to the homogeneous case. Then, as an illustration of the homogeneous case, we present a selected result of Billingsley on parameter estimation for irreducible chains with finite state space. In the nonhomogeneous case, we quote two recent results, one of the calibration type and the other with more of a statistical flavour. We provide an ample set of bibliographic references so that readers wanting to pursue their studies will be able to do so more easily and productively. Full article
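The textbook estimator in the homogeneous setting the review discusses is easy to sketch: the off-diagonal rate q_ij is estimated by N_ij / T_i, the number of observed i-to-j jumps divided by the total time spent in state i. The sample path below is invented:

```python
import numpy as np

# Observed path as (state, holding time) pairs for a 3-state chain.
path = [(0, 1.5), (1, 0.5), (0, 2.0), (2, 1.0), (0, 1.5), (1, 1.0)]

n = 3
N = np.zeros((n, n))   # jump counts
T = np.zeros(n)        # occupation times
for (a, ta), (b, _) in zip(path[:-1], path[1:]):
    N[a, b] += 1
    T[a] += ta
T[path[-1][0]] += path[-1][1]   # last (censored) sojourn still adds exposure

Q_hat = np.divide(N, T[:, None], out=np.zeros_like(N), where=T[:, None] > 0)
np.fill_diagonal(Q_hat, 0.0)
np.fill_diagonal(Q_hat, -Q_hat.sum(axis=1))
print(Q_hat)   # rows sum to zero; off-diagonal entries are nonnegative
```

Calibration, by contrast, would pick the generator minimizing some discrepancy to data rather than maximizing a likelihood; in the nonhomogeneous case the counts and exposures above become time-window-specific.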

19 pages, 3021 KB  
Article
Time-Lag Transiograms and Their Implications for Landscape Change Characterization
by Xinba Li, Weidong Li and Chuanrong Zhang
Stats 2024, 7(4), 1454-1472; https://doi.org/10.3390/stats7040085 - 6 Dec 2024
Viewed by 1602
Abstract
Markov chain transition probability matrices (TPMs) have traditionally been used to characterize land use and land cover (LULC) changes and species succession. However, previous studies relied solely on TPMs or transition area matrices to describe overall class area/proportion changes, overlooking important time correlation features. This study introduces the concept of idealized time-lag transiograms and demonstrates how they can be computed from temporal TPMs, using illustrative examples. The primary objective is to explore the potential value and utility of idealized time-lag transiograms in revealing additional characteristics of landscape change. Specifically, we focus on computing idealized time-lag transiograms with a fixed starting point and highlighting their fundamental features, such as sills, practical correlation ranges, and curve shapes, along with peak positions and peak height ratios of peaked cross-transiograms. These features are identified and discussed in terms of their potential implications for characterizing LULC changes. While idealized time-lag transiograms with a fixed starting point may not precisely predict future LULC changes due to the assumptions of the Markov property and time homogeneity (i.e., stationarity), they provide new insights into future LULC dynamics, revealing aspects that traditional Markov chain analysis has overlooked. Full article
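Under the Markov and time-homogeneity assumptions stated above, an idealized lag-h transiogram is simply the (i, j) entry of the h-step matrix P^h traced over h. The 3-class TPM below (e.g., forest / farmland / urban) is invented for illustration:

```python
import numpy as np

P = np.array([
    [0.90, 0.07, 0.03],
    [0.10, 0.80, 0.10],
    [0.02, 0.03, 0.95],
])

def transiogram(P, i, j, max_lag):
    """p_ij(h) for h = 1..max_lag, i.e., entries of successive powers of P."""
    out, Ph = [], np.eye(len(P))
    for _ in range(max_lag):
        Ph = Ph @ P
        out.append(Ph[i, j])
    return np.array(out)

auto = transiogram(P, 0, 0, 50)    # auto-transiogram: decays toward the sill
cross = transiogram(P, 0, 2, 50)   # cross-transiogram: rises toward the sill
# The sill equals the stationary proportion of the target class.
```

Features such as the sill, the practical correlation range (the lag where the curve flattens), and peaked cross-transiogram shapes are read directly off these curves.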
(This article belongs to the Section Statistical Methods)

12 pages, 323 KB  
Article
On One Approach to Obtaining Estimates of the Rate of Convergence to the Limiting Regime of Markov Chains
by Yacov Satin, Rostislav Razumchik, Alexander Zeifman and Ilya Usov
Mathematics 2024, 12(17), 2763; https://doi.org/10.3390/math12172763 - 6 Sep 2024
Cited by 2 | Viewed by 1482
Abstract
We revisit the problem of the computation of the limiting characteristics of (in)homogeneous continuous-time Markov chains with finite state space. In general, it can be performed only numerically. The common rule of thumb is to interrupt calculations after quite some time, hoping that the values at some distant time interval will represent the sought-after solution. Convergence or ergodicity bounds, when available, can be used to answer such questions more accurately; i.e., they can indicate how to choose the position and the length of that distant time interval. The logarithmic norm method is a general technique that may allow one to obtain such bounds. Although it can handle continuous-time Markov chains with both finite and countable state spaces, its downside is the need to guess the proper similarity transformations, which may not exist. In this paper, we introduce a new technique, which broadens the scope of the logarithmic norm method. This is achieved by first splitting the generator of a Markov chain and then merging the convergence bounds of each block into a single bound. The proof of concept is illustrated by simple examples from queueing theory. Full article

16 pages, 2495 KB  
Article
Discrete Homogeneous and Non-Homogeneous Markov Chains Enhance Predictive Modelling for Dairy Cow Diseases
by Jan Saro, Jaromir Ducháček, Helena Brožová, Luděk Stádník, Petra Bláhová, Tereza Horáková and Robert Hlavatý
Animals 2024, 14(17), 2542; https://doi.org/10.3390/ani14172542 - 1 Sep 2024
Cited by 1 | Viewed by 2670
Abstract
Modelling and predicting dairy cow diseases empowers farmers with valuable information for herd health management, thereby decreasing costs and increasing profits. For this purpose, predictive models were developed based on machine learning algorithms. However, machine-learning based approaches require the development of a specific model for each disease, and their consistency is limited by low farm data availability. To overcome this lack of complete and accurate data, we developed a predictive model based on discrete Homogeneous and Non-homogeneous Markov chains. After aggregating data into categories, we developed a method for defining the adequate number of Markov chain states. Subsequently, we selected the best prediction model through Chebyshev distance minimization. For 14 of 19 diseases, less than 15% maximum differences were measured between the last month of actual and predicted disease data. This model can be easily implemented in low-tech dairy farms to project costs with antibiotics and other treatments. Furthermore, the model’s adaptability allows it to be extended to other disease types or conditions with minimal adjustments. Therefore, including this predictive model for dairy cow diseases in decision support systems may enhance herd health management and streamline the design of evidence-based farming strategies. Full article
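The Chebyshev-distance selection step described above can be sketched as follows: predict next month's state distribution under each candidate chain and keep the model whose prediction minimizes the maximum absolute deviation from the observed distribution. Both matrices and the observations below are invented:

```python
import numpy as np

obs_prev = np.array([0.70, 0.20, 0.10])     # state shares last month
obs_now  = np.array([0.60, 0.25, 0.15])     # actual shares this month

P_hom = np.array([                          # single time-homogeneous TPM
    [0.85, 0.10, 0.05],
    [0.30, 0.60, 0.10],
    [0.20, 0.20, 0.60],
])
P_month = np.array([                        # month-specific TPM (non-homogeneous)
    [0.80, 0.12, 0.08],
    [0.25, 0.60, 0.15],
    [0.15, 0.20, 0.65],
])

def chebyshev(p, q):
    return np.abs(p - q).max()

preds = {"homogeneous": obs_prev @ P_hom, "non-homogeneous": obs_prev @ P_month}
best = min(preds, key=lambda k: chebyshev(preds[k], obs_now))
print(best, {k: round(chebyshev(v, obs_now), 4) for k, v in preds.items()})
```

With these made-up numbers the month-specific chain wins; the paper's "less than 15% maximum difference" criterion corresponds to thresholding exactly this Chebyshev distance.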

17 pages, 731 KB  
Article
New Computer Experiment Designs with Area-Interaction Point Processes
by Ahmed Ait Ameur, Hichem Elmossaoui and Nadia Oukid
Mathematics 2024, 12(15), 2397; https://doi.org/10.3390/math12152397 - 31 Jul 2024
Cited by 2 | Viewed by 1815
Abstract
This article presents a novel method for constructing computer experiment designs based on the theory of area-interaction point processes. This method is essential for capturing the interactions between different elements within a modeled system, offering a more flexible and adaptable approach compared with traditional mathematical modeling. Unlike conventional rough models that rely on simplified equations, our method employs the Markov Chain Monte Carlo (MCMC) method and the Metropolis–Hastings algorithm combined with Voronoi tessellations. It uses a new dynamic called homogeneous birth and death dynamics of a set of points to generate the designs. This approach does not require the development of specific mathematical models for each system under study, making it universally applicable while achieving comparable results. Furthermore, we provide an in-depth analysis of the convergence properties of the Markov Chain to ensure the reliability of the generated designs. An expanded literature review situates our work within the context of existing research, highlighting its unique contributions and advancements. A comparison between our approach and other existing computer experiment designs has been performed. Full article
(This article belongs to the Special Issue Stochastic Processes: Theory, Simulation and Applications)

14 pages, 8040 KB  
Article
Application of a Bayesian-Based Integrated Approach for Groundwater Contamination Sources Parameter Identification Considering Observation Error
by Xueman Yan and Yongkai An
Water 2024, 16(11), 1618; https://doi.org/10.3390/w16111618 - 5 Jun 2024
Cited by 2 | Viewed by 1740
Abstract
Groundwater contamination source (GCS) parameter identification can help with controlling groundwater contamination. It is well known that errors in groundwater contamination concentration observations have a significant impact on identification results, but few studies have adequately quantified this impact. For this reason, this study developed a Bayesian-based integrated approach, which integrated Markov chain Monte Carlo (MCMC), relative entropy (RE), Multi-Layer Perceptron (MLP), and a surrogate model, to identify the unknown GCS parameters while quantifying the specific impact of the observation errors on identification results. Firstly, different contamination concentration observation error situations were set for subsequent research. Then, the Bayesian inversion approach based on MCMC was used for GCS parameter identification in each error situation. Finally, RE was applied to quantify the differences in the identification results of each GCS parameter under different error situations. Meanwhile, MLP was utilized to build a surrogate model to replace the original groundwater numerical simulation model in the GCS parameter identification processes of these error situations, in order to reduce computational time and load. The developed approach was applied to two hypothetical numerical case studies involving homogeneous and heterogeneous cases. The results showed that RE could effectively quantify the differences caused by contamination concentration observation errors, and the changing trends of the RE values for GCS parameters were directly related to their sensitivity. The established MLP surrogate model could significantly reduce the computational load and time for GCS parameter identification. Overall, this study highlights that the developed approach represents a promising solution for GCS parameter identification considering observation errors. Full article
(This article belongs to the Section Water Quality and Contamination)

15 pages, 342 KB  
Article
The Arsenal of Perturbation Bounds for Finite Continuous-Time Markov Chains: A Perspective
by Alexander Y. Mitrophanov
Mathematics 2024, 12(11), 1608; https://doi.org/10.3390/math12111608 - 21 May 2024
Cited by 7 | Viewed by 8523
Abstract
Perturbation bounds are powerful tools for investigating the phenomenon of insensitivity to perturbations, also referred to as stability, for stochastic and deterministic systems. This perspective article presents a focused account of some of the main concepts and results in inequality-based perturbation theory for finite state-space, time-homogeneous, continuous-time Markov chains. The diversity of perturbation bounds and the logical relationships between them highlight the essential stability properties and factors for this class of stochastic processes. We discuss the linear time dependence of general perturbation bounds for Markov chains, as well as time-independent (i.e., time-uniform) perturbation bounds for chains whose stationary distribution is unique. Moreover, we prove some new results characterizing the absolute and relative tightness of time-uniform perturbation bounds. Specifically, we show that, in some of them, an equality is achieved. Furthermore, we analytically compare two types of time-uniform bounds known from the literature. Possibilities for generalizing Markov-chain stability results, as well as connections with stability analysis for other systems and processes, are also discussed. Full article
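The phenomenon these bounds quantify can be illustrated numerically: perturb a CTMC generator slightly and compare stationary distributions. The rates and the perturbation below are invented, and the check shows the qualitative effect, not a specific inequality from the article:

```python
import numpy as np

def stationary(Q):
    """Stationary distribution: pi Q = 0 with pi summing to one."""
    n = len(Q)
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

Q = np.array([[-1.0,  1.0,  0.0],
              [ 0.5, -1.5,  1.0],
              [ 0.0,  2.0, -2.0]])
E = 0.01 * np.array([[-1.0, 0.5,  0.5],
                     [ 0.0, 0.0,  0.0],
                     [ 0.5, 0.5, -1.0]])   # small generator-preserving perturbation

gap = np.abs(stationary(Q) - stationary(Q + E)).sum()
print(stationary(Q), gap)   # a small ||E|| produces a small shift in pi
```

Time-uniform perturbation bounds of the kind the article surveys make this observation quantitative: they bound the stationary-distribution shift by a constant (depending on the chain's ergodicity properties) times the norm of E.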
14 pages, 325 KB  
Article
Attainability for Markov and Semi-Markov Chains
by Brecht Verbeken and Marie-Anne Guerry
Mathematics 2024, 12(8), 1227; https://doi.org/10.3390/math12081227 - 19 Apr 2024
Cited by 6 | Viewed by 1544
Abstract
When studying Markov chain models and semi-Markov chain models, it is useful to know which state vectors n, where each component n_i represents the number of entities in the state S_i, can be maintained or attained. This question leads to the definitions of maintainability and attainability for (time-homogeneous) Markov chain models. Recently, the definition of maintainability was extended to the concept of state reunion maintainability (SR-maintainability) for semi-Markov chains. Within the framework of semi-Markov chains, the states are subdivided further into seniority-based states. State reunion maintainability assesses the maintainability of the distribution across states. Following this idea, we introduce the concept of state reunion attainability, which encompasses the potential of a system to attain a specific distribution across the states after uniting the seniority-based states into the underlying states. In this paper, we start by extending the concept of attainability for constant-sized Markov chain models to systems that are subject to growth or contraction. Afterwards, we introduce the concepts of attainability and state reunion attainability for semi-Markov chain models, using SR-maintainability as a starting point. The attainable region and the state reunion attainable region are each described as the convex hull of their vertices, and properties of these regions are investigated. Full article
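The convex-hull characterization above suggests a simple one-step attainability check for a time-homogeneous chain: a distribution y is attainable in one step iff y = xP for some distribution x, i.e., iff y lies in the convex hull of the rows of P. The TPM below is invented, and the check is a feasibility linear program:

```python
import numpy as np
from scipy.optimize import linprog

P = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.2, 0.3, 0.5],
])

def attainable_one_step(y, P):
    """Is there a distribution x with x P = y? Feasibility LP."""
    n = len(P)
    A_eq = np.vstack([P.T, np.ones(n)])   # x P = y and sum(x) = 1
    b_eq = np.append(y, 1.0)
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
    return bool(res.success)

print(attainable_one_step(np.array([0.3, 0.5, 0.2]), P))   # inside the hull
print(attainable_one_step(np.array([0.0, 0.0, 1.0]), P))   # outside: 3rd coord capped at 0.5
```

The paper's growth/contraction and seniority-based extensions enlarge or reshape this feasible region, but the vertex description (convex hull of transformed extreme points) is the same geometric object.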
