Search Results (279)

Search Parameters:
Keywords = Markov stochastic process

19 pages, 509 KB  
Article
Zero-Inflated Distributions of Lifetime Reproductive Output
by Hal Caswell
Populations 2025, 1(3), 19; https://doi.org/10.3390/populations1030019 - 23 Aug 2025
Viewed by 88
Abstract
Lifetime reproductive output (LRO), also called lifetime reproductive success (LRS), is often described by its mean (total fertility rate or net reproductive rate), but it is in fact highly variable among individuals and often positively skewed. Several approaches exist for calculating the variance and skewness of LRO. These studies have noted that a major factor contributing to skewness is the fraction of the population that dies before reaching a reproductive age or stage. The existence of that fraction means that LRO has a zero-inflated distribution. This paper shows how to calculate that fraction and how to fit a zero-inflated Poisson or zero-inflated negative binomial distribution to the LRO. We present a series of applications to populations before and after demographic transitions, to populations with particularly high probabilities of death before reproduction, and to a couple of large mammal populations for good measure. The zero-inflated distribution also provides extinction probabilities from a Galton-Watson branching process. We compare the zero-inflated analysis with a recently developed analysis using convolution methods that provides exact distributions of LRO. The agreement is strikingly good. Full article
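The zero-inflated fit described in this abstract can be sketched with a simple moment-matching estimator (an illustrative approach, not necessarily the paper's method): the zero fraction satisfies z = pi + (1 - pi)·exp(-lam) and the mean satisfies m = (1 - pi)·lam, which together pin down both parameters.

```python
import numpy as np

def fit_zip_moments(counts, hi=50.0, tol=1e-10):
    """Moment-matching fit of a zero-inflated Poisson: match the sample
    mean m = (1 - pi) * lam and the observed zero fraction
    z = pi + (1 - pi) * exp(-lam), solving for lam by bisection."""
    counts = np.asarray(counts)
    m = float(counts.mean())
    z = float(np.mean(counts == 0))
    f = lambda lam: 1.0 - m / lam + (m / lam) * np.exp(-lam) - z
    lo = m                      # lam >= m keeps pi >= 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    lam = 0.5 * (lo + hi)
    return 1.0 - m / lam, lam   # (pi, lam)
```

On synthetic zero-inflated data the estimator recovers the inflation fraction and the Poisson rate from the mean and zero count alone.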
14 pages, 460 KB  
Article
Modeling Local Search Metaheuristics Using Markov Decision Processes
by Rubén Ruiz-Torrubiano, Deepak Dhungana, Sarita Paudel and Himanshu Buckchash
Algorithms 2025, 18(8), 512; https://doi.org/10.3390/a18080512 - 14 Aug 2025
Viewed by 154
Abstract
Local search metaheuristics like tabu search or simulated annealing are popular heuristic optimization algorithms for finding near-optimal solutions to combinatorial optimization problems. However, it is still challenging for researchers and practitioners to analyze their behavior and systematically choose one metaheuristic over the vast set of possible alternatives for the particular problem at hand. In this paper, we introduce a theoretical framework based on Markov Decision Processes (MDPs) for analyzing local search metaheuristics. This framework not only helps in providing convergence results for individual algorithms but also provides an explicit characterization of the exploration–exploitation tradeoff and theory-grounded guidance for practitioners in choosing an appropriate metaheuristic for a given problem. We present this framework in detail and show how to apply it to hill climbing and simulated annealing, including computational experiments. Full article
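The MDP view of local search can be illustrated with a minimal simulated-annealing loop (a generic sketch, not the paper's framework): the state is the incumbent solution, a proposal picks a neighbor, and the Metropolis acceptance rule plays the role of a temperature-dependent stochastic policy.

```python
import math
import random

def simulated_annealing(f, x0, neighbor, T0=1.0, alpha=0.99, steps=2000, seed=0):
    """State = incumbent solution; the Metropolis rule acts as a
    temperature-dependent stochastic policy over proposed moves."""
    rng = random.Random(seed)
    x, fx, T = x0, f(x0), T0
    best, fbest = x, fx
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = f(y)
        # accept improvements always, worse moves with prob exp(-delta/T)
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        T *= alpha                # geometric cooling: exploration -> exploitation
    return best, fbest

# Toy problem: minimize (x - 3)^2 over the integers with +/-1 moves
best, fbest = simulated_annealing(lambda x: (x - 3) ** 2, 50,
                                  lambda x, rng: x + rng.choice((-1, 1)))
```

The cooling schedule makes the acceptance policy increasingly greedy, which is exactly the exploration–exploitation tradeoff the framework characterizes.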
14 pages, 1957 KB  
Article
Reliability and Availability Analysis of a Two-Unit Cold Standby System with Imperfect Switching
by Nariman M. Ragheb, Emad Solouma, Abdullah A. Alahmari and Sayed Saber
Axioms 2025, 14(8), 589; https://doi.org/10.3390/axioms14080589 - 29 Jul 2025
Viewed by 308
Abstract
This paper presents a stochastic analysis of a two-unit cold standby system incorporating imperfect switching mechanisms. Each unit operates in one of three states: normal, partial failure, or total failure. Employing Markov processes, the study evaluates system reliability by examining the mean time to failure (MTTF) and steady-state availability metrics. Failure and repair times are assumed to follow exponential distributions, while the switching mechanism is modeled as either perfect or imperfect. The results highlight the significant influence of switching reliability on both MTTF and system availability. This analysis is crucial for optimizing the performance of complex systems, such as thermal power plants, where continuous and reliable operation is imperative. The study also aligns with recent research trends emphasizing the integration of preventive maintenance and advanced reliability modeling approaches to enhance overall system resilience. Full article
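The kind of Markov analysis this abstract describes can be sketched on a simplified three-state model (the states, rates, and switch probability below are illustrative, not the paper's): steady-state availability comes from solving pi·Q = 0, and MTTF from the transient part of the generator.

```python
import numpy as np

# States: 0 = primary up, 1 = standby carrying the load, 2 = down.
# lam = failure rate, mu = repair rate, p = switch-success probability
# (all values illustrative).
lam, mu, p = 0.01, 0.5, 0.95
Q = np.array([
    [-lam,      lam * p,      lam * (1 - p)],  # imperfect switch on demand
    [mu,       -(mu + lam),   lam],
    [0.0,       mu,          -mu],
])

# Steady state: solve pi @ Q = 0 with the probabilities summing to 1
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi_ss, *_ = np.linalg.lstsq(A, b, rcond=None)
availability = pi_ss[0] + pi_ss[1]            # fraction of time in up states

# MTTF: treat state 2 as absorbing and solve (-Q_uu) t = 1
t = np.linalg.solve(-Q[:2, :2], np.ones(2))
mttf = t[0]                                    # starting from state 0
```

Lowering the switch-success probability p visibly degrades both quantities, which is the sensitivity the paper analyzes.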
20 pages, 2538 KB  
Article
Research on Long-Term Scheduling Optimization of Water–Wind–Solar Multi-Energy Complementary System Based on DDPG
by Zixing Wan, Wenwu Li, Mu He, Taotao Zhang, Shengzhe Chen, Weiwei Guan, Xiaojun Hua and Shang Zheng
Energies 2025, 18(15), 3983; https://doi.org/10.3390/en18153983 - 25 Jul 2025
Viewed by 347
Abstract
To address the challenges of high complexity in modeling the correlation of multi-dimensional stochastic variables and the difficulty of solving long-term scheduling models in continuous action spaces in multi-energy complementary systems, this paper proposes a long-term optimization scheduling method based on Deep Deterministic Policy Gradient (DDPG). First, an improved C-Vine Copula model is used to construct the multi-dimensional joint probability distribution of water, wind, and solar energy, and Latin Hypercube Sampling (LHS) is employed to generate a large number of water–wind–solar coupling scenarios, effectively reducing the model’s complexity. Then, a long-term optimization scheduling model is established with the goal of maximizing the absorption of clean energy, and it is converted into a Markov Decision Process (MDP). Next, the DDPG algorithm is employed with a noise dynamic adjustment mechanism to optimize the policy in continuous action spaces, yielding the optimal long-term scheduling strategy for the water–wind–solar multi-energy complementary system. Finally, using a water–wind–solar integrated energy base as a case study, comparative analysis demonstrates that the proposed method can improve the renewable energy absorption capacity and the system’s power generation efficiency by accurately quantifying the uncertainties of water, wind, and solar energy and precisely controlling the continuous action space during the scheduling process. Full article
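Of the pipeline above, the Latin Hypercube Sampling step is easy to sketch (a generic implementation, independent of the paper's Copula model): each dimension of the unit cube is stratified into one bin per scenario, one point is drawn per bin, and the bins are shuffled independently across dimensions.

```python
import numpy as np

def latin_hypercube(n_scenarios, n_dims, rng):
    """Stratified sampling on [0, 1)^n_dims: exactly one sample per bin
    and dimension; bins are shuffled independently per dimension."""
    u = (rng.random((n_scenarios, n_dims))
         + np.arange(n_scenarios)[:, None]) / n_scenarios
    for d in range(n_dims):
        rng.shuffle(u[:, d])    # in-place shuffle of this column
    return u
```

Each column then covers every stratum exactly once, which is what gives LHS better coverage than plain Monte Carlo for the same number of scenarios.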
(This article belongs to the Section B: Energy and Environment)
30 pages, 1981 KB  
Article
Stochastic Control for Sustainable Hydrogen Generation in Standalone PV–Battery–PEM Electrolyzer Systems
by Mohamed Aatabe, Wissam Jenkal, Mohamed I. Mosaad and Shimaa A. Hussien
Energies 2025, 18(15), 3899; https://doi.org/10.3390/en18153899 - 22 Jul 2025
Viewed by 545
Abstract
Standalone photovoltaic (PV) systems offer a viable path to decentralized energy access but face limitations during periods of low solar irradiance. While batteries provide short-term storage, their capacity constraints often restrict the use of surplus energy, highlighting the need for long-duration solutions. Green hydrogen, generated via proton exchange membrane (PEM) electrolyzers, offers a scalable alternative. This study proposes a stochastic energy management framework that leverages a Markov decision process (MDP) to coordinate PV generation, battery storage, and hydrogen production under variable irradiance and uncertain load demand. The strategy dynamically allocates power flows, ensuring system stability and efficient energy utilization. Real-time weather data from Goiás, Brazil, is used to simulate system behavior under realistic conditions. Compared to the conventional perturb and observe (P&O) technique, the proposed method significantly improves system performance, achieving a 99.9% average efficiency (vs. 98.64%) and a drastically lower average tracking error of 0.3125 (vs. 9.8836). This enhanced tracking accuracy ensures faster convergence to the maximum power point, even during abrupt load changes, thereby increasing the effective use of solar energy. As a direct consequence, green hydrogen production is maximized while energy curtailment is minimized. The results confirm the robustness of the MDP-based control, demonstrating improved responsiveness, reduced downtime, and enhanced hydrogen yield, thus supporting sustainable energy conversion in off-grid environments. Full article
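An MDP-based energy policy of this kind is typically solved by dynamic programming; below is a minimal value-iteration sketch on a toy two-state battery/electrolyzer MDP (all states, actions, and rewards are invented for illustration, not the paper's model).

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """P[a] is the transition matrix under action a; R[s, a] the reward.
    Returns the optimal state values and the greedy policy."""
    n_s, n_a = R.shape
    V = np.zeros(n_s)
    while True:
        Q = R + gamma * np.stack([P[a] @ V for a in range(n_a)], axis=1)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

# Toy MDP: state 0 = battery empty, 1 = battery charged;
# action 0 = store PV power, 1 = run the electrolyzer.
P = [np.array([[0.0, 1.0], [0.0, 1.0]]),   # storing charges the battery
     np.array([[1.0, 0.0], [1.0, 0.0]])]   # electrolyzing drains it
R = np.array([[0.0, -1.0],                  # can't electrolyze when empty
              [0.0,  1.0]])                 # hydrogen produced when charged
V, policy = value_iteration(P, R)
```

The greedy policy charges first and electrolyzes only when charged, the qualitative behavior a stochastic energy-management MDP is meant to produce.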
(This article belongs to the Section A2: Solar Energy and Photovoltaic Systems)
14 pages, 1614 KB  
Article
Neural Networks and Markov Categories
by Sebastian Pardo-Guerra, Johnny Jingze Li, Kalyan Basu and Gabriel A. Silva
AppliedMath 2025, 5(3), 93; https://doi.org/10.3390/appliedmath5030093 - 18 Jul 2025
Viewed by 488
Abstract
We present a formal framework for modeling neural network dynamics using Category Theory, specifically through Markov categories. In this setting, neural states are represented as objects and state transitions as Markov kernels, i.e., morphisms in the category. This categorical perspective offers an algebraic alternative to traditional approaches based on stochastic differential equations, enabling a rigorous and structured approach to studying neural dynamics as a stochastic process with topological insights. By abstracting neural states as submeasurable spaces and transitions as kernels, our framework bridges biological complexity with formal mathematical structure, providing a foundation for analyzing emergent behavior. As part of this approach, we incorporate concepts from Interacting Particle Systems and employ mean-field approximations to construct Markov kernels, which are then used to simulate neural dynamics via the Ising model. Our simulations reveal a shift from unimodal to multimodal transition distributions near critical temperatures, reinforcing the connection between emergent behavior and abrupt changes in system dynamics. Full article
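For finite state spaces, the categorical picture is concrete: a Markov kernel is a row-stochastic matrix, and composition of morphisms is matrix multiplication (a generic illustration of the Chapman-Kolmogorov identity, not the paper's construction).

```python
import numpy as np

def compose(k1, k2):
    """Kernel composition (Chapman-Kolmogorov): for finite spaces,
    row-stochastic matrices compose by matrix multiplication."""
    return k1 @ k2

def pushforward(p, k):
    """Image of a distribution p under the kernel k."""
    return p @ k

k = np.array([[0.9, 0.1],
              [0.4, 0.6]])
k2 = compose(k, k)          # the two-step transition kernel
```

Composition of kernels is again a kernel (rows still sum to 1), which is what makes these morphisms in a Markov category.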
36 pages, 1465 KB  
Article
USV-Affine Models Without Derivatives: A Bayesian Time-Series Approach
by Malefane Molibeli and Gary van Vuuren
J. Risk Financial Manag. 2025, 18(7), 395; https://doi.org/10.3390/jrfm18070395 - 17 Jul 2025
Viewed by 326
Abstract
We investigate affine term structure models (ATSMs) with unspanned stochastic volatility (USV). Our aim is to test their ability to generate accurate cross-sectional behavior and time-series dynamics of bond yields. Comparing the restricted models and those with USV, we test whether they produce both reasonable estimates for the short rate variance and a good cross-sectional fit. In principle, risk-neutral dynamics in ATSMs should be estimated jointly from both time-series and options data. Due to the scarcity of derivative data in emerging markets, we estimate the model using only the time series of bond yields. A Bayesian estimation approach combining Markov Chain Monte Carlo (MCMC) and the Kalman filter is employed to recover the model parameters and filter out latent state variables. We further incorporate macroeconomic indicators and GARCH-based volatility as external validation of the filtered latent volatility process. The A1(4) USV model performs better both in and out of sample, even though the tension between time series and cross-section remains unresolved. Our findings suggest that even without derivative instruments, it is possible to identify and interpret risk-neutral dynamics and volatility risk using observable time-series data. Full article
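The filtering half of such an estimation scheme can be sketched for a scalar linear-Gaussian state-space model (a textbook Kalman filter, not the paper's full MCMC setup; all parameter names are illustrative).

```python
import numpy as np

def kalman_filter(y, a, c, q, r, x0=0.0, p0=1.0):
    """Scalar linear-Gaussian filter for x_t = a x_{t-1} + w_t (var q),
    y_t = c x_t + v_t (var r); returns the filtered state means."""
    x, p = x0, p0
    means = []
    for obs in y:
        x, p = a * x, a * a * p + q          # predict
        k = p * c / (c * c * p + r)          # Kalman gain
        x = x + k * (obs - c * x)            # update with the observation
        p = (1.0 - k * c) * p
        means.append(x)
    return np.array(means)
```

In a Bayesian ATSM estimation, a filter of this form runs inside each MCMC sweep to integrate out the latent factors given a parameter draw.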
(This article belongs to the Section Financial Markets)
24 pages, 1195 KB  
Article
A Reinforcement Learning-Based Double Layer Controller for Mobile Robot in Human-Shared Environments
by Jian Mi, Jianwen Liu, Yue Xu, Zhongjie Long, Jun Wang, Wei Xu and Tao Ji
Appl. Sci. 2025, 15(14), 7812; https://doi.org/10.3390/app15147812 - 11 Jul 2025
Viewed by 303
Abstract
Various approaches have been explored to address the path planning problem for mobile robots. However, it remains a significant challenge, particularly in environments where a multi-tasking mobile robot operates alongside stochastically moving humans. This paper focuses on path planning for a mobile robot executing multiple pickup and delivery tasks in an environment shared with humans. To plan a safe path and achieve a high task success rate, a Reinforcement Learning (RL)-based double layer controller is proposed in which a double-layer learning algorithm is developed. The high-level layer integrates a Finite-State Automaton (FSA) with RL to perform global strategy learning and task-level decision-making. The low-level layer handles local path planning by incorporating a Markov Decision Process (MDP) that accounts for environmental uncertainties. We verify the proposed double layer algorithm under different configurations and evaluate its performance on several metrics, including task success rate and reward. The proposed method outperforms conventional RL in terms of reward (+63.1%) and task success rate (+113.0%). The simulation results demonstrate the effectiveness of the proposed algorithm in solving the path planning problem with stochastic human uncertainties. Full article
29 pages, 870 KB  
Article
Deep Reinforcement Learning for Optimal Replenishment in Stochastic Assembly Systems
by Lativa Sid Ahmed Abdellahi, Zeinebou Zoubeir, Yahya Mohamed, Ahmedou Haouba and Sidi Hmetty
Mathematics 2025, 13(14), 2229; https://doi.org/10.3390/math13142229 - 9 Jul 2025
Viewed by 708
Abstract
This study presents a reinforcement learning–based approach to optimize replenishment policies in the presence of uncertainty, with the objective of minimizing total costs, including inventory holding, shortage, and ordering costs. The focus is on single-level assembly systems, where both component delivery lead times and finished product demand are subject to randomness. The problem is formulated as a Markov decision process (MDP), in which an agent determines optimal order quantities for each component by accounting for stochastic lead times and demand variability. The Deep Q-Network (DQN) algorithm is adapted and employed to learn optimal replenishment policies over a fixed planning horizon. To enhance learning performance, we develop a tailored simulation environment that captures multi-component interactions, random lead times, and variable demand, along with a modular and realistic cost structure. The environment enables dynamic state transitions, lead time sampling, and flexible order reception modeling, providing a high-fidelity training ground for the agent. To further improve convergence and policy quality, we incorporate local search mechanisms and multiple action space discretizations per component. Simulation results show that the proposed method converges to stable ordering policies after approximately 100 episodes. The agent achieves an average service level of 96.93%, and stockout events are reduced by over 100% relative to early training phases. The system maintains component inventories within operationally feasible ranges, and cost components—holding, shortage, and ordering—are consistently minimized across 500 training episodes. These findings highlight the potential of deep reinforcement learning as a data-driven and adaptive approach to inventory management in complex and uncertain supply chains. Full article
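The environment such an agent learns in can be approximated by a much simpler hand-coded baseline, e.g. an order-up-to policy under random demand and random lead times (a toy sketch with invented costs and distributions, not the paper's simulator).

```python
import random

def simulate_base_stock(S, horizon=10000, h=1.0, b=5.0, seed=0):
    """Order-up-to-S policy: demand is uniform on 0..4 per period,
    replenishment lead times are random in 1..3 periods
    (all values illustrative)."""
    rng = random.Random(seed)
    on_hand, pipeline, cost = S, [], 0.0
    for _ in range(horizon):
        pipeline = [(q, t - 1) for q, t in pipeline]       # age open orders
        on_hand += sum(q for q, t in pipeline if t <= 0)   # receive arrivals
        pipeline = [(q, t) for q, t in pipeline if t > 0]
        on_hand -= rng.randint(0, 4)                       # stochastic demand
        cost += h * max(on_hand, 0) + b * max(-on_hand, 0) # holding + shortage
        position = on_hand + sum(q for q, _ in pipeline)
        if S - position > 0:
            pipeline.append((S - position, rng.randint(1, 3)))  # random lead
    return cost / horizon
```

A learned replenishment policy is judged against exactly this kind of cost signal; here a sensible base-stock level already dominates ordering nothing.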
17 pages, 1101 KB  
Article
Ship Scheduling Algorithm Based on Markov-Modulated Fluid Priority Queues
by Jianzhi Deng, Shuilian Lv, Yun Li, Liping Luo, Yishan Su, Xiaolin Wang and Xinzhi Liu
Algorithms 2025, 18(7), 421; https://doi.org/10.3390/a18070421 - 8 Jul 2025
Viewed by 255
Abstract
As a key node in port logistics systems, ship anchorage is often faced with congestion caused by ship flow fluctuations, multi-priority scheduling imbalances and the poor adaptability of scheduling models to complex environments. To solve the above problems, this paper constructs a ship scheduling algorithm based on a Markov-modulated fluid priority queue, which describes the stochastic evolution of the anchorage operation state via a continuous-time Markov chain and abstracts the arrival and service processes of ships into a continuous fluid input and output mechanism modulated by the state. The algorithm introduces a multi-priority service strategy to achieve the differentiated scheduling of different types of ships and improves the computational efficiency and scalability based on a matrix analysis method. Simulation results show that the proposed model reduces the average waiting time of ships by more than 90% compared with the M/G/1/1 and RL strategies and improves the utilization of anchorage resources by about 20% through dynamic service rate adjustment, showing significant advantages over traditional scheduling methods in multi-priority scenarios. Full article
38 pages, 518 KB  
Article
Credit Risk Assessment Using Fuzzy Inhomogeneous Markov Chains Within a Fuzzy Market
by P.-C.G. Vassiliou
Risks 2025, 13(7), 125; https://doi.org/10.3390/risks13070125 - 28 Jun 2025
Viewed by 413
Abstract
In the present study, we model the migration process and the changes in the market environment. The migration process is modeled as an F-inhomogeneous semi-Markov process with fuzzy states. The evolution of the migration process takes place within a stochastic market environment with fuzzy states, the transitions of which are likewise modeled as an F-inhomogeneous semi-Markov process. We prove a recursive relation from which the survival probabilities of the bonds or debts can be found as functions of the basic parameters of the two F-inhomogeneous semi-Markov processes. The asymptotic behavior of the survival probabilities is obtained in closed analytic form under certain easily met conditions. Finally, we provide maximum likelihood estimators for the basic parameters of the proposed models. Full article
32 pages, 4694 KB  
Article
Visualization of Hazardous Substance Emission Zones During a Fire at an Industrial Enterprise Using Cellular Automaton Method
by Yuri Matveev, Fares Abu-Abed, Leonid Chernishev and Sergey Zhironkin
Fire 2025, 8(7), 250; https://doi.org/10.3390/fire8070250 - 27 Jun 2025
Cited by 1 | Viewed by 380
Abstract
This article discusses and compares approaches to the visualization of the danger zone formed as a result of spreading toxic substances during a fire at an industrial enterprise, to create predictive models and scenarios for evacuation and environmental protection measures. The purpose of this study is to analyze the features and conditions for the application of algorithms for predicting the spread of a danger zone, based on the Gauss equation and the probabilistic algorithm of a cellular automaton. The research is also aimed at the analysis of the consequences of a fire at an industrial enterprise, taking into account natural and climatic conditions, the development of the area, and the scale of the fire. The subject of this study is the development of software and algorithmic support for the visualization of the danger zone and analysis of the consequences of a fire, which can be confirmed by comparing a computational experiment and actual measurements of toxic substance concentrations. The main research methods include a Gaussian model and probabilistic, frontal, and empirical cellular automata. The results of the study represent the development of algorithms for a cellular automaton model for the visual forecasting of a dangerous zone. They are characterized by taking into consideration the rules for filling the dispersion ellipse, as well as determining the effects of interaction with obstacles, which allows for a more accurate mathematical description of the spread of a cloud of toxic combustion products in densely built-up areas. Since the main problems of the cellular automaton approach to modeling the dispersion of pollutants are speed and numerical diffusion, this article uses a frontal cellular automaton algorithm with a 16-point neighborhood pattern, which takes into account the features of the calculation scheme for finding the shortest path. Software and algorithmic support for an integrated system for the visualization and analysis of fire consequences at an industrial enterprise has been developed; the efficiency of the system has been confirmed by computational analysis and actual measurement. It has been shown that the future development of the visualization of dangerous zones during fires is associated with the integration of the Bayesian approach and stochastic forecasting algorithms based on Markov chains into the simulation model of a dangerous zone for the efficient assessment of uncertainties associated with complex atmospheric processes. Full article
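The cellular-automaton idea can be sketched on a toy grid (a generic diffusion-advection step with periodic boundaries, not the paper's 16-point frontal scheme): each cell keeps part of its mass, leaks to its four neighbours, and the field is then shifted one cell along the wind.

```python
import numpy as np

def ca_step(c, spread=0.1, wind=(0, 1)):
    """One CA update: cells keep (1 - 4*spread) of their mass, leak
    `spread` to each 4-neighbour, then the field advects along `wind`.
    Periodic boundaries (np.roll) keep the total mass conserved."""
    neigh = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
             np.roll(c, 1, 1) + np.roll(c, -1, 1))
    diffused = (1.0 - 4.0 * spread) * c + spread * neigh
    return np.roll(diffused, wind, axis=(0, 1))

c = np.zeros((16, 16))
c[8, 8] = 1.0                      # point release of pollutant
for _ in range(5):
    c = ca_step(c)                 # plume spreads and drifts downwind
```

The numerical-diffusion problem mentioned in the abstract is visible even here: the point release smears isotropically rather than forming a sharp dispersion ellipse.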
(This article belongs to the Special Issue Advances in Industrial Fire and Urban Fire Research: 2nd Edition)
21 pages, 4080 KB  
Article
M-Learning: Heuristic Approach for Delayed Rewards in Reinforcement Learning
by Cesar Andrey Perdomo Charry, Marlon Sneider Mora Cortes and Oscar J. Perdomo
Mathematics 2025, 13(13), 2108; https://doi.org/10.3390/math13132108 - 27 Jun 2025
Viewed by 411
Abstract
The current design of reinforcement learning methods requires extensive computational resources. Algorithms such as Deep Q-Network (DQN) have obtained outstanding results in advancing the field. However, the need to tune thousands of parameters and run millions of training episodes remains a significant challenge. This document proposes a comparative analysis between the Q-Learning algorithm, which laid the foundations for Deep Q-Learning, and our proposed method, termed M-Learning. The comparison is conducted using Markov Decision Processes with the delayed reward as a general test bench framework. Firstly, this document provides a full description of the main challenges related to implementing Q-Learning, particularly concerning its multiple parameters. Then, the foundations of our proposed heuristic are presented, including its formulation, and the algorithm is described in detail. The methodology used to compare both algorithms involved training them in the Frozen Lake environment. The experimental results, along with an analysis of the best solutions, demonstrate that our proposal requires fewer episodes and exhibits reduced variability in the outcomes. Specifically, M-Learning trains agents 30.7% faster in the deterministic environment and 61.66% faster in the stochastic environment. Additionally, it achieves greater consistency, reducing the standard deviation of scores by 58.37% and 49.75% in the deterministic and stochastic settings, respectively. The code will be made available in a GitHub repository upon this paper’s publication. Full article
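The delayed-reward test bench used in this comparison can be illustrated with tabular Q-learning on a one-dimensional chain whose only reward sits at the far end (a generic sketch, not the paper's M-Learning or its Frozen Lake setup).

```python
import random

def q_learning(n=6, episodes=2000, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a chain 0..n-1; actions 0=left, 1=right.
    The only reward is received on entering the goal state n-1."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n)]
    for _ in range(episodes):
        s = 0
        while s != n - 1:
            if rng.random() < eps or Q[s][0] == Q[s][1]:
                a = rng.randrange(2)          # explore / break ties randomly
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            s2 = min(s + 1, n - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n - 1 else 0.0   # delayed reward at the goal
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

The reward must propagate backwards one state per visit, which is why delayed-reward settings are slow for vanilla Q-learning and a natural benchmark for alternatives.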
(This article belongs to the Special Issue Metaheuristic Algorithms, 2nd Edition)
20 pages, 992 KB  
Review
Markov-Chain Perturbation and Approximation Bounds in Stochastic Biochemical Kinetics
by Alexander Y. Mitrophanov
Mathematics 2025, 13(13), 2059; https://doi.org/10.3390/math13132059 - 21 Jun 2025
Viewed by 1130
Abstract
Markov chain perturbation theory is a rapidly developing subfield of the theory of stochastic processes. This review outlines emerging applications of this theory in the analysis of stochastic models of chemical reactions, with a particular focus on biochemistry and molecular biology. We begin by discussing the general problem of approximate modeling in stochastic chemical kinetics. We then briefly review some essential mathematical results pertaining to perturbation bounds for continuous-time Markov chains, emphasizing the relationship between robustness under perturbations and the rate of exponential convergence to the stationary distribution. We illustrate the use of these results to analyze stochastic models of biochemical reactions by providing concrete examples. Particular attention is given to fundamental problems related to approximation accuracy in model reduction. These include the partial thermodynamic limit, the irreversible-reaction limit, parametric uncertainty analysis, and model reduction for linear reaction networks. We conclude by discussing generalizations and future developments of these methodologies, such as the need for time-inhomogeneous Markov models. Full article
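The flavour of these perturbation results is easy to demonstrate numerically (a toy two-state generator, not one of the review's biochemical models): a small perturbation of the generator produces a comparably small shift in the stationary distribution.

```python
import numpy as np

def stationary(Q):
    """Stationary distribution of a CTMC generator: pi @ Q = 0, sum = 1."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

Q = np.array([[-1.0, 1.0],
              [2.0, -2.0]])
E = np.array([[-0.01, 0.01],
              [0.0,   0.0]])       # small perturbation of one rate
shift = np.abs(stationary(Q + E) - stationary(Q)).sum()
```

Perturbation bounds of the kind reviewed here control `shift` in terms of the perturbation size and the chain's rate of exponential convergence to stationarity.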
(This article belongs to the Section D1: Probability and Statistics)
26 pages, 1906 KB  
Article
Context-Aware Markov Sensors and Finite Mixture Models for Adaptive Stochastic Dynamics Analysis of Tourist Behavior
by Xiaolong Chen, Hongfeng Zhang, Cora Un In Wong and Zhengchun Song
Mathematics 2025, 13(12), 2028; https://doi.org/10.3390/math13122028 - 19 Jun 2025
Viewed by 507
Abstract
We propose a novel framework for adaptive stochastic dynamics analysis of tourist behavior by integrating context-aware Markov models with finite mixture models (FMMs). Conventional Markov models often fail to capture abrupt changes induced by external shocks, such as event announcements or weather disruptions, leading to inaccurate predictions. The proposed method addresses this limitation by introducing virtual sensors that dynamically detect contextual anomalies and trigger regime switches in real-time. These sensors process streaming data to identify shocks, which are then used to reweight the probabilities of pre-learned behavioral regimes represented by FMMs. The system employs expectation maximization to train distinct Markov sub-models for each regime, enabling seamless transitions between them when contextual thresholds are exceeded. Furthermore, the framework leverages edge computing and probabilistic programming for efficient, low-latency implementation. The key contribution lies in the explicit modeling of contextual shocks and the dynamic adaptation of stochastic processes, which significantly improves robustness in volatile tourism scenarios. Experimental results demonstrate that the proposed approach outperforms traditional Markov models in accuracy and adaptability, particularly under rapidly changing conditions. Quantitative results show a 13.6% improvement in transition accuracy (0.742 vs. 0.653) compared to conventional context-aware Markov models, with an 89.2% true positive rate in shock detection and a median response latency of 47 min for regime switching. This work advances the state-of-the-art in tourist behavior analysis by providing a scalable, real-time solution for capturing complex, context-dependent dynamics. The integration of virtual sensors and FMMs offers a generalizable paradigm for stochastic modeling in other domains where external shocks play a critical role. Full article
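The regime-reweighting step can be sketched directly (toy transition matrices; the "virtual sensor" is reduced to a single detected shock probability): the effective transition law is a convex mixture of pre-learned regime kernels.

```python
import numpy as np

# Two pre-learned behavioural regimes (row-stochastic transition matrices,
# values invented for illustration)
P_normal = np.array([[0.8, 0.2],
                     [0.3, 0.7]])
P_shock = np.array([[0.2, 0.8],
                    [0.1, 0.9]])

def next_state_dist(state, shock_prob):
    """Blend the regime kernels by the sensor's detected shock
    probability; the result is again a valid transition distribution."""
    P = (1.0 - shock_prob) * P_normal + shock_prob * P_shock
    return P[state]
```

As the detected shock probability rises, transition mass shifts smoothly from the normal regime toward the shock regime, which is the adaptive behavior the framework targets.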