Search Results (4,037)

Search Parameters:
Keywords = Markov modeling

31 pages, 878 KB  
Article
A Class of Causal 2D Markov-Switching ARMA Models: Probabilistic Properties and Variational Estimation
by Khudhayr A. Rashedi, Soumia Kharfouchi, Abdullah H. Alenezy and Tariq S. Alshammari
Axioms 2026, 15(5), 302; https://doi.org/10.3390/axioms15050302 - 22 Apr 2026
Abstract
This paper introduces a rigorous class of two-dimensional Markov-switching autoregressive moving-average (2D MS-ARMA) models for spatial lattice data exhibiting regime-dependent dynamics. The switching mechanism is governed by a latent causal Markov random field that drives spatial transitions between regime-specific autoregressive and moving-average structures. We provide sufficient conditions for the existence of a strictly stationary solution through the top Lyapunov exponent associated with a sequence of random matrices obtained from a state-space representation constructed along the lexicographic order. For the first-order bidirectional specification, we derive explicit spectral conditions linking stationarity to the regime-dependent spectral radii. Sufficient conditions ensuring the existence of finite second-order moments are also provided. Parameter estimation is carried out using a variational expectation–maximization (VEM) algorithm based on a mean-field approximation of the posterior distribution of the hidden regimes. The E-step yields closed-form coordinate ascent updates, while the M-step relies on gradient-based numerical optimization with derivatives computed via recursive differentiation. Under increasing-domain asymptotics, we discuss the consistency and asymptotic behavior of the variational estimator. The proposed framework fills a methodological gap between classical one-dimensional Markov-switching ARMA models and spatial autoregressive structures by extending regime-switching theory to multi-indexed processes with rigorous probabilistic foundations. It provides a comprehensive basis for statistical inference, model diagnostics, and prediction in spatially heterogeneous environments. Full article
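The abstract above concerns two-dimensional Markov-switching ARMA processes. As a much-simplified illustration of the switching mechanism only, here is a one-dimensional two-regime MS-AR(1) simulator; all coefficients and transition probabilities below are hypothetical and not taken from the paper:

```python
import random

def simulate_ms_ar1(n, phi, sigma, trans, seed=0):
    """Simulate a two-regime Markov-switching AR(1):
    x_t = phi[s_t]*x_{t-1} + sigma[s_t]*e_t,  e_t ~ N(0, 1),
    where the regime s_t follows a two-state Markov chain with
    trans[i][j] = P(s_t = j | s_{t-1} = i)."""
    rng = random.Random(seed)
    s, x = 0, 0.0
    states, xs = [], []
    for _ in range(n):
        s = 0 if rng.random() < trans[s][0] else 1  # draw next regime
        x = phi[s] * x + sigma[s] * rng.gauss(0.0, 1.0)
        states.append(s)
        xs.append(x)
    return states, xs

states, xs = simulate_ms_ar1(
    n=500,
    phi=[0.9, -0.5],       # regime-specific AR coefficients (|phi| < 1)
    sigma=[1.0, 0.3],      # regime-specific innovation scales
    trans=[[0.95, 0.05],   # persistent regimes
           [0.10, 0.90]],
)
```

Each regime being individually stationary is not by itself sufficient in general; the paper's Lyapunov-exponent condition governs stationarity of the switching process as a whole.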

23 pages, 384 KB  
Article
Cues for a Grammar of Potentials in Markov Field Models of Computer Vision
by Luigi Burigana
Appl. Sci. 2026, 16(8), 4030; https://doi.org/10.3390/app16084030 - 21 Apr 2026
Abstract
Several well-known models in present-day computer vision take the form of Markov random fields. Any model of this kind amounts to a network of soft constraints, which are called potentials. These are the subject of this study. First, three kinds of information that are involved in any computer vision inference task are identified, namely, evidence, target, and principled information, and the concept of a variable as applied in this context is discussed. The general meaning of a potential is then described, which is a local soft constraint that aims to promote a corresponding desired condition. Following this, the formal structure of a potential is highlighted, which includes a set of parameters and an analytic frame, with this being a hierarchy of operations by which the value of the potential can be computed. The possible presence of a core in the analytic frame is considered, and two salient kinds of cores are distinguished and illustrated using examples from the literature: one involving a distance function and the other given by a probabilistic conditional. In summary, this contribution highlights substantial aspects of the semantics and syntax of potentials in Markov field models of computer vision, and constructs a framework within which these aspects may be consistently arranged and explained. Full article

17 pages, 939 KB  
Article
Solar Flare Detection from Sudden Ionospheric Disturbances in VLF Signals via a CNN–HMM Framework
by Yuliyan Velchev, Boncho Bonev, Ilia Iliev, Peter Gallagher, Peter Z. Petkov and Ivaylo Nachev
Sensors 2026, 26(8), 2548; https://doi.org/10.3390/s26082548 - 21 Apr 2026
Abstract
In this paper we present a hybrid convolutional neural network–hidden Markov model framework for detecting solar flare events of intensity greater than or equal to M1.0 from very low frequency signals via their induced sudden ionospheric disturbances. The convolutional neural network processes fixed-length windows of raw very low frequency signals and their temporal derivatives to produce probabilistic flare estimates, which serve as emission probabilities for a two-state hidden Markov model. Viterbi decoding enforces temporal consistency, suppressing spurious fluctuations and yielding physically plausible event sequences. The approach is specifically designed to detect the onset-to-peak interval of flare events and, with further development, could operate in real time for early flare warning. The model was trained and evaluated on very low frequency data from the DHO38 transmitter in Germany to a receiver near Birr, Ireland. Sample-level evaluation achieved a balanced accuracy of 0.819 and a Matthews correlation coefficient of 0.529, while event-level detection reached a peak F1-score of 0.558 for moderate-to-strong flares of intensity greater than or equal to C6.0. These results demonstrate automated, physically consistent detection of solar flares based on sudden ionospheric disturbances, indicating the potential of the proposed approach, when combined across multiple receivers, to act as a low-cost complement to satellite-based monitoring. Full article
(This article belongs to the Special Issue Advanced Sensing Technologies for Space Electromagnetic Environments)
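The CNN–HMM abstract above uses Viterbi decoding to enforce temporal consistency on per-window flare probabilities. A minimal two-state sketch of that smoothing step follows; treating the network's probability directly as the emission likelihood, and all numbers, are illustrative assumptions, not the authors' implementation:

```python
import math

def viterbi_two_state(probs, trans, prior=(0.5, 0.5)):
    """Smooth per-window flare probabilities with a two-state HMM.
    probs[t] is P(flare) for window t; states: 0 = quiet, 1 = flare;
    trans[i][j] = P(state j | state i). Returns the Viterbi path."""
    def emit(t, s):
        p = probs[t]          # use the CNN output as emission likelihood
        return p if s == 1 else 1.0 - p

    log = lambda x: math.log(max(x, 1e-300))  # log domain avoids underflow
    dp = [[log(prior[s]) + log(emit(0, s)) for s in (0, 1)]]
    back = []
    for t in range(1, len(probs)):
        row, ptr = [], []
        for s in (0, 1):
            cands = [dp[-1][r] + log(trans[r][s]) for r in (0, 1)]
            best = 0 if cands[0] >= cands[1] else 1
            row.append(cands[best] + log(emit(t, s)))
            ptr.append(best)
        dp.append(row)
        back.append(ptr)
    path = [0 if dp[-1][0] >= dp[-1][1] else 1]  # backtrack from best final state
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    path.reverse()
    return path

# A noisy probability trace: one isolated spike versus a sustained rise.
probs = [0.1, 0.9, 0.1, 0.1, 0.8, 0.85, 0.9, 0.8, 0.2, 0.1]
trans = [[0.9, 0.1], [0.2, 0.8]]  # sticky states enforce temporal consistency
path = viterbi_two_state(probs, trans)
# path == [0, 0, 0, 0, 1, 1, 1, 1, 0, 0]: the lone spike at t=1 is
# suppressed, while the sustained run t=4..7 is decoded as a flare.
```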

19 pages, 1426 KB  
Article
Lung Cancer Screening in a Population from Northeast Italy Exposed to Both Asbestos and Smoking: A Cost-Effectiveness Analysis
by Rami Cosulich, Chloe Thomas, Fabiano Barbiero, Duncan Gillespie, Ettore Bidoli, Maria Assunta Cova, Stefano Lovadina, Alessandra Guglielmi, Luigino Dal Maso, Barbara Alessandrini, Francesca Larese Filon, Fabio Barbone and Elisa Baratella
J. Clin. Med. 2026, 15(8), 3136; https://doi.org/10.3390/jcm15083136 - 20 Apr 2026
Abstract
Background: Past workplace exposure to asbestos in combination with tobacco smoking has increased the risk of lung cancer for some residents in an area within the Friuli Venezia Giulia region, Northeast Italy. In light of studies showing that lung cancer screening (LCS) with low-dose computed tomography (LDCT) can reduce mortality, local stakeholders and decision-makers decided to assess the potential benefits, harms and cost-effectiveness of a single round of LCS with LDCT versus standard care among people aged 55 to 80 who were formerly exposed to asbestos and with at least 10 pack-years of smoking. Methods: An economic model was developed using a decision tree connected to a Markov cohort model. The primary outcome was the incremental cost per additional quality-adjusted life year (QALY). Other outcomes included the number of life years saved, the number of deaths averted and overdiagnosis. Results: Per 10,000 people screened, the intervention led to 395 additional QALYs (95% credible interval: 129 to 831) and incremental total costs of EUR 1,086,345 (95% credible interval: −852,607 to 2,155,826). The incremental cost per QALY gained was EUR 2750. There was a probability of cost-effectiveness of 99.5% relative to a threshold of EUR 25,000. Conclusions: The model estimated that the intervention was cost-effective. The model’s simplifications and limitations should be considered when interpreting the findings in relation to policy-making decisions. Further research could include the costs and benefits of incidental findings and could assess the cost-effectiveness of repeated rounds of screening for the same population. Full article
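The economic model above couples a decision tree to a Markov cohort model. As a generic illustration of how such a cohort model accrues discounted QALYs and costs per cycle (the states, probabilities, utilities, and costs below are hypothetical, not the study's inputs):

```python
def run_markov_cohort(trans, utilities, costs, cycles, discount=0.03):
    """Run a simple Markov cohort model.
    trans[i][j]: per-cycle probability of moving from state i to j;
    utilities[i]: QALY weight per cycle in state i;
    costs[i]: cost per cycle in state i.
    Returns total discounted (QALYs, costs) per person."""
    n = len(trans)
    dist = [0.0] * n
    dist[0] = 1.0  # whole cohort starts in the first state
    total_qaly = total_cost = 0.0
    for t in range(cycles):
        d = 1.0 / (1.0 + discount) ** t  # discount factor for cycle t
        total_qaly += d * sum(dist[i] * utilities[i] for i in range(n))
        total_cost += d * sum(dist[i] * costs[i] for i in range(n))
        # Advance the cohort distribution one cycle.
        dist = [sum(dist[i] * trans[i][j] for i in range(n)) for j in range(n)]
    return total_qaly, total_cost

# Hypothetical 3-state model: well -> lung cancer -> dead (absorbing).
trans = [[0.97, 0.02, 0.01],
         [0.00, 0.80, 0.20],
         [0.00, 0.00, 1.00]]
qalys, cost = run_markov_cohort(trans,
                                utilities=[0.85, 0.60, 0.0],
                                costs=[200.0, 15000.0, 0.0],
                                cycles=25)
```

Running two arms (e.g., screened versus standard care) with different transition probabilities and dividing the cost difference by the QALY difference yields the incremental cost per QALY reported in such analyses.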

23 pages, 1321 KB  
Article
Potential Public Health and Economic Impact of the Next-Generation COVID-19 Vaccine mRNA-1283 in The Netherlands
by Simon van der Pol, Ekkehard Beck, Tjalke Westra, Maarten Postma and Cornelis Boersma
Vaccines 2026, 14(4), 364; https://doi.org/10.3390/vaccines14040364 - 20 Apr 2026
Abstract
Background: COVID-19 remains a substantial public health challenge in the Netherlands. Next-generation COVID-19 vaccine, mRNA-1283, is approved in the European Union, with potential for higher relative vaccine efficacy compared with originally licensed COVID-19 vaccines. Methods: The potential public health and economic impact of mRNA-1283 in adults ≥ 60 years and high-risk adults aged 18–59 years was modeled versus no vaccination and originally licensed mRNA-1273 and BNT162b2, adapting a published static Markov model with a 1-year time horizon. COVID-19 burden reflected two full post-pandemic seasons. Vaccine efficacy versus mRNA-1273 was based on pivotal phase 3 NextCOVE trial data; efficacy versus BNT162b2 was derived from an indirect treatment comparison. The economically justifiable price (EJP) of mRNA-1283 versus no vaccination and price premiums over existing vaccines were determined at a willingness-to-pay threshold of €50,000/quality-adjusted life-year (QALY) gained. Results: Without COVID-19 vaccination, an estimated 460,000 infections, 23,800 hospitalizations, and 5300 deaths would occur. With current coverage, mRNA-1283 was estimated to prevent 68,000 infections, 5400 hospitalizations, and 1200 deaths, saving 9667 QALYs and over €66.5 million in treatment costs. The EJP was €238 versus no vaccination. Compared with mRNA-1273 and BNT162b2, mRNA-1283 was estimated to prevent additional burden (e.g., 1309 and 1679 hospitalizations, respectively) and was cost-effective at an incremental EJP of €62 versus mRNA-1273 and €80 versus BNT162b2. Conclusions: The results support continued COVID-19 vaccination to mitigate the ongoing health and societal burden of SARS-CoV-2 in the Netherlands. The comparative analyses indicate that mRNA-1283 may be associated with substantial health benefits over originally licensed mRNA vaccines; consequently, its use may further improve health outcomes and economic efficiency within COVID-19 vaccination programs. Full article
(This article belongs to the Section COVID-19 Vaccines and Vaccination)

14 pages, 2371 KB  
Article
Multimodal Phase-Space Dynamics Fusion for Robust Ischemia Screening: An Edge-AI Paradigm with SERF Magnetocardiography
by Keyi Li, Xiangyang Zhou, Yifan Jia, Ruizhe Wang, Yidi Cao, Jiaojiao Pang, Rui Shang, Yadan Zhang, Yangyang Cui, Dong Xu and Min Xiang
Biosensors 2026, 16(4), 228; https://doi.org/10.3390/bios16040228 - 20 Apr 2026
Abstract
Background: Myocardial ischemia (MI) is a major cause of morbidity and mortality worldwide and requires timely and reliable detection. Although Spin-Exchange Relaxation-Free (SERF) magnetocardiography (MCG) provides femtotesla-level sensitivity for identifying non-linear cardiac repolarization anomalies, its clinical deployment is currently impeded by the computational bottlenecks inherent to portable edge platforms. Methods: We propose a “Sensor-to-Image” Edge-AI framework that links quantum sensing with computer vision. Single-channel SERF-MCG signals from a large cohort of 2118 subjects (1135 Healthy, 983 Ischemia) were transformed into phase-space images using three distinct encoding modalities: Recurrence Plots (RP), Gramian Angular Summation Fields (GASF), and Markov Transition Fields (MTF). These visual representations were subsequently analyzed by a streamlined MobileNetV3-Small architecture, optimized for low-latency inference. To maximize diagnostic precision, an adaptive weighted fusion mechanism was engineered to combine the chaotic specificity captured by RP with the morphological sensitivity of GASF through a validation-optimized fixed global weighting strategy. Results: In our experiments, the fusion model achieved an Area Under the Curve (AUC) of 0.865, which was higher than the 1D-CNN baseline (AUC 0.857) and the single-modality models. Notably, the fusion strategy significantly elevated sensitivity to 88.3% while maintaining a specificity of 66.5%. Although specificity is moderate, this trade-off prioritizes high sensitivity to minimize false negatives in pre-hospital screening scenarios. The average inference time was 4.7 ms per sample on a standard CPU, suggesting suitability for real-time Point-of-Care (PoC) scenarios under further on-device validation. Conclusions: The results suggest that multi-view phase-space fusion can capture subtle spatio-temporal changes associated with ischemia. 
The proposed lightweight framework may support the development of portable SERF-MCG systems with embedded AI screening. Full article
(This article belongs to the Section Biosensor and Bioelectronic Devices)

25 pages, 639 KB  
Article
Observational Diagnostics of a Parametrized Deceleration Parameter in FLRW Cosmology
by Bhupendra Kumar Shukla, Deger Sofuoğlu, Aroonkumar Beesham, Rishi Kumar Tiwari and Mfanafuthi Siyabonga Msweli
Particles 2026, 9(2), 41; https://doi.org/10.3390/particles9020041 - 20 Apr 2026
Abstract
The evolution of the deceleration parameter q(z) plays a crucial role in understanding the dynamics of dark energy within the framework of modern cosmology. In this study, we perform a parametric reconstruction of q(z) in a spatially flat Friedmann–Lemaître–Robertson–Walker (FLRW) Universe composed of radiation, pressureless dark matter, and dark energy. We consider a physically motivated form of q(z) that effectively describes the transition of the Universe from a decelerating to an accelerating expansion phase. This parametrization is incorporated into the Friedmann equations to derive the corresponding Hubble parameter, which is then confronted with a comprehensive set of observational data, including Hubble parameter measurements H(z), Type Ia supernovae (SNIa), and Baryon Acoustic Oscillations (BAO) data. Employing the Markov Chain Monte Carlo (MCMC) approach, we constrain the model parameters using the combined H(z)+SNIa+BAO dataset. The best-fit parameters are subsequently used to reconstruct the cosmographic quantities, such as the deceleration, jerk, and snap parameters, which provide deeper insight into the expansion history of the Universe. Finally, a comparative analysis with the standard ΛCDM model is carried out to assess the compatibility and effectiveness of the proposed parametrization. Full article
(This article belongs to the Section Astroparticle Physics and Cosmology)
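The abstract above constrains model parameters via MCMC. A toy random-walk Metropolis sampler on mock one-parameter data illustrates the core of such an approach; the data, Gaussian likelihood, flat prior, and step size below are invented for illustration, and the authors' actual pipeline uses real H(z), SNIa, and BAO likelihoods:

```python
import math
import random

def metropolis(logpost, x0, step, n, seed=0):
    """Random-walk Metropolis: propose x' = x + N(0, step); accept
    with probability min(1, exp(logpost(x') - logpost(x)))."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    chain = []
    for _ in range(n):
        prop = x + rng.gauss(0.0, step)
        lp_prop = logpost(prop)
        # Uphill moves always accepted; downhill with exp(delta) probability.
        if lp_prop >= lp or rng.random() < math.exp(lp_prop - lp):
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

# Mock data: noisy measurements of one parameter (an H0-like quantity),
# flat prior, Gaussian likelihood with sigma = 5.
data = [68.1, 71.3, 69.8, 70.2, 72.0, 69.1]
sigma = 5.0
logpost = lambda h: -0.5 * sum((d - h) ** 2 for d in data) / sigma ** 2

chain = metropolis(logpost, x0=50.0, step=2.0, n=5000)
burned = chain[1000:]  # discard burn-in
posterior_mean = sum(burned) / len(burned)  # near the sample mean (~70)
```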

13 pages, 825 KB  
Article
Cost-Effectiveness of a Lifestyle and Behavioral Care Model Targeting Cardiometabolic Disease Progression
by Michelle Alencar, Rachel Sauls and Justin Whetten
Int. J. Environ. Res. Public Health 2026, 23(4), 526; https://doi.org/10.3390/ijerph23040526 - 18 Apr 2026
Abstract
Chronic diseases drive healthcare costs, and employers seek scalable strategies to improve health outcomes and control expenses. Telehealth behavioral care shows promise for managing chronic conditions, but its long-term economic value in employer populations is still unclear. We assessed the cost-effectiveness and ROI of a lifestyle and behavioral care (LBC) model using a Markov model in a custom analytic tool. The model simulated disease progression, healthcare utilization, and QALYs over five years from the employer perspective. Transition probabilities, costs, and mortality risks were obtained from the InHealth program, national sources, and published literature. Employees in the behavioral care model were compared with a control group receiving usual care. Among 4461 employees aged 40, intervention participants had five-year costs of $41,431, versus $47,834 for controls, saving $6403 per member and $28.6 million overall. Treated members gained 4.7 QALYs compared to 4.6 in controls, equivalent to 36.5 extra days of full health. The program had an ROI of 6.53, showing significant cost savings. Telehealth behavioral care is a cost-effective way to improve health outcomes and provide financial benefits to employers. These results support incorporating behavioral care into value-based benefits and highlight potential long-term savings through prevention and management of lifestyle-related chronic diseases. Full article

24 pages, 11332 KB  
Article
Intelligent Optimization Methods for Cloud–Edge Collaborative Vehicular Networks via the Integration of Bayesian Decision-Making and Reinforcement Learning
by Youjian Yu, Zhaowei Song, Sifeng Zhu and Qinghua Zhang
Future Internet 2026, 18(4), 215; https://doi.org/10.3390/fi18040215 - 17 Apr 2026
Abstract
To improve vehicle user service quality and address data privacy and security issues in intelligent transportation vehicle networking systems, a three-tier communication architecture with cloud-edge-end collaboration was designed in this paper. A Bayesian decision criterion was utilized to divide user data segments into fine-grained slices based on their privacy levels, and differential privacy techniques were applied to protect the offloaded data. To achieve multi-objective optimization between user service quality and data privacy and security, the problem was formulated as a constrained Markov decision process. A communication model, a caching model, a latency model, an energy consumption model, and a data-fragment privacy protection model were designed. Additionally, a deep reinforcement learning algorithm based on the actor–critic approach was proposed for the collaborative and centralized training of multiple intelligent agents (CTMA-AC), enabling multi-objective optimization decision-making for the protection of offloaded private user data. Simulation experiments demonstrate that the proposed multi-agent collaborative privacy data offloading protection strategy can effectively safeguard private user data while ensuring high service quality. Full article
(This article belongs to the Section Network Virtualization and Edge/Fog Computing)
19 pages, 3326 KB  
Article
Energy-Harvesting-Assisted UAV Swarm Anti-Jamming Communication Based on Multi-Agent Reinforcement Learning
by Yongfang Li, Tianyu Zhao, Zhijuan Wu, Yan Lin and Yijin Zhang
Drones 2026, 10(4), 294; https://doi.org/10.3390/drones10040294 - 16 Apr 2026
Abstract
Considering that the unmanned aerial vehicles (UAVs) are susceptible to both co-channel interference and malicious jamming with limited onboard battery energy, this paper proposes an energy-harvesting-assisted anti-jamming communication framework for UAV swarm networks. Specifically, we first model the problem as a decentralized partially observable Markov decision process (Dec-POMDP), aiming to achieve a long-term trade-off between data transmission success rate and energy consumption. Then we propose a multi-agent independent advantage actor–critic (IA2C)-based energy-harvesting-assisted anti-jamming communication solution, which enables each cluster head (CH) to learn its transmit channel, power, and energy harvesting time policy independently. By constructing a time-space-based extended Dec-POMDP, the spatiotemporal correlations among neighboring nodes are learned by allowing adjacent agents to share discounted local observations. Extensive simulations show that, compared with the benchmark schemes, the proposed scheme improves the average cumulative reward and average cumulative success rate by 17.26% and 10.37%, respectively, while achieving a higher transmission success rate with lower energy consumption under different numbers of available channels. Full article
(This article belongs to the Special Issue Intelligent Spectrum Management in UAV Communication)
21 pages, 2881 KB  
Article
Risk-Sensitive Reinforcement Learning for Portfolio Optimization Under Stochastic Market Dynamics
by Binod Kumar Mishra, Munish Kumar, Hashmat Fida and Branimir Kalaš
Mathematics 2026, 14(8), 1334; https://doi.org/10.3390/math14081334 - 16 Apr 2026
Abstract
Portfolio optimization is one of the most difficult sequential decision problems, as uncertainty and the non-stationary nature of financial markets hinder the development of robust strategies. Reinforcement learning is an attractive framework for addressing this problem, as it allows agents to learn market-adaptive strategies through data-driven interactions. However, existing risk-neutral reinforcement learning solutions for portfolio management are oblivious to downside risk and are mainly concerned with maximizing returns. To address this limitation, this paper proposes a novel risk-sensitive reinforcement learning framework for risk-aware portfolio optimization based on a conditional value-at-risk-based learning objective that explicitly controls extreme loss events. It formulates the portfolio optimization problem as a Markov decision process and solves it using a linearized actor–critic architecture. It also develops theoretical results to analyze important aspects of the learning process, specifically proving that the convexity of the conditional value-at-risk-based formulation and convergence of learning hold under standard assumptions. The proposed algorithm is applied in a realistic investment setting using NIFTY 50 market data. Quantitative results from a rolling window backtesting methodology show that the proposed model achieves the best risk-adjusted portfolio performance, i.e., a Sharpe ratio (0.610), while significantly reducing tail risk, as measured by the conditional value-at-risk (−0.121) and maximum drawdown (−0.198), compared to classical strategies and risk-neutral reinforcement learning solutions. Overall, the results demonstrate that integrating coherent risk measures into reinforcement learning provides an effective approach for developing robust and risk-aware portfolio optimization strategies in dynamic financial environments. Full article
(This article belongs to the Special Issue Portfolio Optimization and Risk Management In Financial Markets )
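The abstract above builds its learning objective around conditional value-at-risk (expected shortfall). The empirical estimator is simple: average the worst (1 − alpha) fraction of a return sample. A minimal sketch with an invented return series, not the paper's NIFTY 50 data:

```python
def cvar(returns, alpha=0.95):
    """Empirical conditional value-at-risk (expected shortfall):
    the mean of the worst (1 - alpha) fraction of returns."""
    ordered = sorted(returns)  # ascending: worst returns first
    k = max(1, int(round(len(ordered) * (1.0 - alpha))))
    tail = ordered[:k]
    return sum(tail) / len(tail)

# Hypothetical daily returns (20 observations).
rets = [0.01, -0.02, 0.015, -0.08, 0.005, 0.02, -0.01, 0.03, -0.05, 0.0,
        0.012, -0.003, 0.007, -0.015, 0.025, 0.018, -0.006, 0.009, 0.004, -0.02]
print(cvar(rets, alpha=0.95))  # -0.08: mean of the single worst return (k = 1)
print(cvar(rets, alpha=0.90))  # -0.065: mean of the two worst returns
```

Unlike value-at-risk, CVaR is a coherent risk measure and, as the paper notes, yields a convex objective suitable for gradient-based actor–critic training.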

16 pages, 351 KB  
Article
A Black-Box Multiobjective Optimization Method for Discrete Markov Chains
by Julio B. Clempner
Math. Comput. Appl. 2026, 31(2), 63; https://doi.org/10.3390/mca31020063 - 16 Apr 2026
Abstract
In this paper, we propose a Newton-inspired black-box optimization algorithm for multiobjective optimization in constrained ergodic Markov chain environments. The method is motivated by challenges in application areas, where decision-making under uncertainty and limited access to structural information is pervasive. A central contribution of the proposed algorithm is the complexity analysis, which yields substantial computational advantages over conventional optimization approaches. Operating in a purely black-box setting, the algorithm relies exclusively on function evaluations and derivative approximations, without requiring explicit knowledge of the objective function’s internal structure. To approximate system dynamics, we employ an Euler-based scheme that enhances the scalability and adaptability of convex optimization problems. While Markov chains are seldom leveraged in black-box optimization, we demonstrate that constrained ergodic Markov chains constitute a powerful and underexplored modeling framework for learning and decision-making under structural constraints. We provide a complexity analysis and illustrate the effectiveness of the proposed method through a numerical example, highlighting its potential to advance applications in multiobjective optimization and decision-making. Full article
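An ergodic Markov chain has a unique stationary distribution pi satisfying pi = pi P, and long-run average objectives of the kind a black-box optimizer evaluates are expectations under pi. A minimal fixed-point iteration (the example chain is hypothetical):

```python
def stationary_distribution(P, tol=1e-12, max_iter=10000):
    """Stationary distribution pi of an ergodic Markov chain with
    row-stochastic transition matrix P (pi = pi P), found by
    repeatedly applying P until the distribution stops changing."""
    n = len(P)
    pi = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(max_iter):
        nxt = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(pi, nxt)) < tol:
            return nxt
        pi = nxt
    return pi

# A small ergodic chain; any long-run objective is an average under pi.
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]
pi = stationary_distribution(P)
# pi sums to 1 and is invariant: applying P to pi returns pi.
```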

30 pages, 1499 KB  
Article
Environment-Aware Optimal Placement and Dynamic Reconfiguration of Underwater Robotic Sonar Networks Using Deep Reinforcement Learning
by Qiming Sang, Yu Tian, Jin Zhang, Yuyang Xiao, Zhiduo Tan, Jiancheng Yu and Fumin Zhang
J. Mar. Sci. Eng. 2026, 14(8), 733; https://doi.org/10.3390/jmse14080733 - 15 Apr 2026
Abstract
Underwater dynamic target detection, classification, localization, and tracking (DCLT) is central to maritime surveillance and monitoring and increasingly relies on distributed AUV-based robotic sonar networks operating in passive listening and, when required, cooperative multistatic modes. Achieving a robust performance in realistic oceans remains challenging, because sensor placement must adapt to time-varying acoustic conditions and target priors while preserving acoustic communication connectivity, and because frequent reconfiguration under dynamic currents makes classical large-scale planning computationally expensive. This paper presents an integrated deep reinforcement learning (DRL)-based framework for passive-stage sonar placement and dynamic reconfiguration in distributed AUV networks. First, we cast placement as a constructive finite-horizon Markov decision process (MDP) and train a Proximal Policy Optimization (PPO) agent to sequentially build a collision-free layout on a discretized surveillance grid. The terminal reward is formulated to jointly optimize the environment-aware detection performance, computed from BELLHOP-based transmission loss models, and global network connectivity, quantified using algebraic connectivity. Second, to enable time-critical reconfiguration, we estimate flow-aware motion costs for all AUV–destination pairs using a PPO with a Long Short-Term Memory (LSTM) trajectory policy trained for partial observability. The learned policy can be deployed onboard, allowing each AUV to refine its path online using locally sensed currents, improving robustness to ocean-model uncertainty. The resulting cost matrix is solved via an efficient zero-element assignment method to obtain the optimal one-to-one reassignment. 
In the reported simulation studies, the proposed Sequential PPO placement method achieves a final reward 16–21% higher than Particle Swarm Optimization (PSO) and 2–3.7% higher than the Genetic Algorithm (GA), while the proposed PPO + LSTM planner reduces average travel time by 30.44% compared with A*. The proposed closed-loop architecture supports frequent re-optimization, scalable fleet operation, and a seamless transition to communication-supported cooperative multistatic tracking after detection, enabling efficient, adaptive DCLT in dynamic marine environments. Full article
(This article belongs to the Section Ocean Engineering)
25 pages, 3975 KB  
Article
Landscape Ecological Risk Assessment and Multi-Scenario Simulation of Land Use Based on the Markov-FLUS Model: A Case Study of the Hexi Corridor
by Zaijie Zhang and Xiaoxiao Song
Sustainability 2026, 18(8), 3892; https://doi.org/10.3390/su18083892 - 15 Apr 2026
Abstract
As a major ecological safeguard in northwestern China and an important corridor for the Belt and Road Initiative, the Hexi Corridor holds strategic significance for improving landscape structure and enhancing regional ecological security. Focusing on the Hexi Corridor, this study develops a landscape [...] Read more.
As a major ecological safeguard in northwestern China and an important corridor for the Belt and Road Initiative, the Hexi Corridor holds strategic significance for improving landscape structure and enhancing regional ecological security. Focusing on the Hexi Corridor, this study develops a landscape ecological risk (LER) index based on land use (LU) data from 2000, 2010, and 2020. The study employs ArcGIS spatial analysis and XGBoost-SHAP, an interpretable machine learning method, to analyze the spatiotemporal evolution of LU and LER, as well as their driving factors. Furthermore, the Markov-FLUS model is utilized to simulate and predict LU and LER spatial patterns under multiple scenarios for 2030. The results show that: (1) The dominant land type in the Hexi Corridor is unused land, accounting for 67.33%. During the study period, the extents of unused land, grassland, and forestland declined steadily, while built-up land and cropland increased. (2) LER is categorized into five levels, with high risk being the most prevalent, accounting for 52.02%. Between 2000 and 2020, the combined area of the higher- and high-risk levels decreased by 4312 km², indicating an overall decrease in LER across the region. (3) LER is primarily influenced by annual rainfall, population density, distance to main roads, and distance to rivers. (4) Marked variations in LU patterns and LER are observed across the different development scenarios projected for 2030. Full article
(This article belongs to the Special Issue Evaluation of Landscape Ecology and Urban Ecosystems)
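The Markov component of the Markov-FLUS model projects land-use class demand forward with a transition probability matrix; FLUS then allocates those demands spatially. A minimal sketch of the demand-projection step, with hypothetical classes, shares, and transition probabilities (not the paper's values):

```python
def project_land_use(shares, transition, steps=1):
    """Project land-use class shares forward with a first-order Markov
    chain: next[j] = sum_i shares[i] * transition[i][j].
    transition[i][j] is the probability that a cell of class i in one
    decade belongs to class j in the next."""
    for _ in range(steps):
        n = len(shares)
        shares = [sum(shares[i] * transition[i][j] for i in range(n))
                  for j in range(n)]
    return shares

# Three toy classes: unused land, grassland, built-up (hypothetical shares).
shares_2020 = [0.67, 0.25, 0.08]
P = [[0.95, 0.03, 0.02],   # unused land mostly persists
     [0.05, 0.90, 0.05],   # some grassland converts either way
     [0.00, 0.00, 1.00]]   # built-up land is absorbing here
shares_2030 = project_land_use(shares_2020, P)
```

Different development scenarios correspond to different transition matrices, which is how the multi-scenario 2030 simulations diverge.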
16 pages, 13345 KB  
Article
Amortized Parameter Inference for the Arbitrary-Order Hidden Markov Model
by Sixiang Zhang and Liming Cai
Axioms 2026, 15(4), 289; https://doi.org/10.3390/axioms15040289 - 14 Apr 2026
Abstract
The arbitrary-order hidden Markov model (α-HMM) is a nontrivial generalization of the standard HMM, designed to model stochastic processes with higher-order dependences among arbitrarily distant random events. The α-HMM admits an efficient Viterbi-style optimal decoding algorithm, making it feasible to [...] Read more.
The arbitrary-order hidden Markov model (α-HMM) is a nontrivial generalization of the standard HMM, designed to model stochastic processes with higher-order dependencies among arbitrarily distant random events. The α-HMM admits an efficient Viterbi-style optimal decoding algorithm, making it feasible to discover higher-order dependencies among data objects in observed sequential data. Because the α-HMM exceeds the expressive power of standard HMMs, fixed kth-order HMMs, and stochastic context-free grammars, effective probabilistic parameter estimation approaches are required to translate this theoretical expressiveness into practical utility. This paper introduces a principled methodology for effectively estimating the probabilistic parameters of the α-HMM from observed data. In large-scale sequential datasets, higher-order dependencies can vary widely across instances, so a single global parameter set may be inadequate. Instead, an amortized parameter inference approach is proposed for the α-HMM, in which an input-conditioned parameter estimator is learned from data and used to infer instance-specific parameters for each input instance to the decoding algorithm. Specifically, the neural parameter estimator is trained using a composite learning objective that is partially enabled by the optimal decoding algorithm. The effectiveness of the proposed parameter estimation method is demonstrated through empirical results from applying the α-HMM to biomolecular structure modeling and prediction. Full article
(This article belongs to the Special Issue Stochastic Modeling and Optimization Techniques)
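The α-HMM's decoder generalizes the classic Viterbi recurrence to arbitrarily distant dependencies. A minimal sketch of the first-order base case on a hypothetical two-state model, in log space; the arbitrary-order extension from the paper is not reproduced here:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Standard first-order Viterbi decoding: return the most probable
    hidden-state path for the observation sequence. Scores are kept in
    log space to avoid underflow on long sequences."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]])
          for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            # Best predecessor state for s at time t.
            prev, score = max(
                ((p, V[t - 1][p] + math.log(trans_p[p][s])) for p in states),
                key=lambda x: x[1])
            V[t][s] = score + math.log(emit_p[s][obs[t]])
            back[t][s] = prev
    # Trace the best path backwards from the best final state.
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

# Hypothetical two-state toy model.
states = ("H", "L")
start = {"H": 0.5, "L": 0.5}
trans = {"H": {"H": 0.7, "L": 0.3}, "L": {"H": 0.4, "L": 0.6}}
emit = {"H": {"A": 0.6, "B": 0.4}, "L": {"A": 0.2, "B": 0.8}}
path = viterbi(["A", "B", "B"], states, start, trans, emit)
```

In the amortized setting described in the abstract, the fixed `start`, `trans`, and `emit` tables would instead be produced per input instance by the learned parameter estimator before decoding.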
