Search Results (147)

Search Parameters:
Keywords = Markov random field model

24 pages, 443 KB  
Article
Consistent Markov Edge Processes and Random Graphs
by Donatas Surgailis
Mathematics 2025, 13(21), 3368; https://doi.org/10.3390/math13213368 - 22 Oct 2025
Abstract
We discuss Markov edge processes {Y_e; e ∈ E} defined on edges of a directed acyclic graph (V, E) with the consistency property P_{E′}(Y_e; e ∈ E′) = P_E(Y_e; e ∈ E′) for a large class of subgraphs (V′, E′) of (V, E) obtained through a mesh dismantling algorithm. The probability distribution P_E of such an edge process is a discrete version of consistent polygonal Markov graphs. The class of Markov edge processes is related to the class of Bayesian networks and may be of interest to causal inference and decision theory. On regular ν-dimensional lattices, consistent Markov edge processes have similar properties to Pickard random fields on Z^2, representing a far-reaching extension of the latter class. A particular case of a binary consistent edge process on Z^3 was disclosed by Arak in a private communication. We prove that the symmetric binary Pickard model generates the Arak model on Z^2 as a contour model. Full article
(This article belongs to the Special Issue Modeling and Data Analysis of Complex Networks)

21 pages, 2630 KB  
Article
Hierarchical Markov Chain Monte Carlo Framework for Spatiotemporal EV Charging Load Forecasting
by Xuehan Zheng, Yalun Zhu, Ming Wang, Bo Lv and Yisheng Lv
Appl. Sci. 2025, 15(20), 11094; https://doi.org/10.3390/app152011094 - 16 Oct 2025
Viewed by 132
Abstract
With the advancement of battery technology and the promotion of the “dual carbon” policy, electric vehicles (EVs) have been widely used in industrial, commercial, and civil fields, and the charging infrastructure of highway service areas across the country has also shown a rapid development trend. However, the charging load of electric vehicles in highway scenarios exhibits strong randomness and uncertainty. It is affected by multiple factors such as traffic flow, state of charge (SOC), and user charging behavior, making it difficult to model accurately with traditional mathematical models. This paper proposes a hierarchical Markov chain Monte Carlo (HMMC) simulation method to construct a charging load prediction model with spatiotemporal coupling characteristics. The model organizes features such as traffic flow, SOC, and charging behavior into separate layers to reduce interference between dimensions; by constructing a Markov chain that converges to the target distribution and an inter-layer transfer mechanism, the load change process is deduced layer by layer, thereby achieving a more accurate charging load prediction. Comparative experiments with mainstream methods such as ARIMA, BP neural networks, random forests, and LSTM show that the HMMC model has higher prediction accuracy in highway scenarios, significantly reduces prediction errors, and improves model stability and interpretability. Full article
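The layered sampling idea behind an HMMC-style simulator can be sketched very simply: an upper Markov chain drives a latent regime (here, traffic flow), and a lower layer draws charging load conditioned on that regime. The transition matrix, regime names, and load levels below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical upper-layer chain over traffic regimes (low/medium/high);
# rows of the transition matrix sum to 1. Values are made up for illustration.
P_traffic = np.array([[0.70, 0.25, 0.05],
                      [0.20, 0.60, 0.20],
                      [0.05, 0.35, 0.60]])
load_mean = np.array([20.0, 60.0, 120.0])   # assumed kW per regime
load_sd = np.array([5.0, 10.0, 20.0])

def simulate(hours, state=0):
    """Walk the regime chain and draw a conditional load at each step."""
    loads = np.empty(hours)
    for t in range(hours):
        state = rng.choice(3, p=P_traffic[state])
        loads[t] = rng.normal(load_mean[state], load_sd[state])
    return loads

profile = simulate(24 * 7)   # one simulated week of hourly charging load
print(profile.shape)
```

A real model would add the SOC and charging-behavior layers and an inter-layer transfer mechanism; this sketch only shows the regime-conditioned sampling structure.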

25 pages, 3025 KB  
Article
QiGSAN: A Novel Probability-Informed Approach for Small Object Segmentation in the Case of Limited Image Datasets
by Andrey Gorshenin and Anastasia Dostovalova
Big Data Cogn. Comput. 2025, 9(9), 239; https://doi.org/10.3390/bdcc9090239 - 18 Sep 2025
Viewed by 500
Abstract
The paper presents a novel probability-informed approach to improving the accuracy of small object semantic segmentation in high-resolution imagery datasets with imbalanced classes and a limited volume of samples. Small objects imply having a small pixel footprint on the input image, for example, ships in the ocean. Informing in this context means using mathematical models to represent data in the layers of deep neural networks. Thus, the ensemble Quadtree-informed Graph Self-Attention Networks (QiGSANs) are proposed. New architectural blocks, informed by types of Markov random fields such as quadtrees, have been introduced to capture the interconnections between features in images at different spatial resolutions during the graph convolution of superpixel subregions. It has been analytically proven that quadtree-informed graph convolutional neural networks, a part of QiGSAN, tend to achieve faster loss reduction compared to convolutional architectures. This justifies the effectiveness of probability-informed modifications based on quadtrees. To empirically demonstrate the processing of real small data with imbalanced object classes using QiGSAN, two open datasets of synthetic aperture radar (SAR) imagery (up to 0.5 m per pixel) are used: the High Resolution SAR Images Dataset (HRSID) and the SAR Ship Detection Dataset (SSDD). The results of QiGSAN are compared to those of the transformers SegFormer and LWGANet, which constitute a new state-of-the-art model for UAV (Unmanned Aerial Vehicles) and SAR image processing. They are also compared to convolutional neural networks and several ensemble implementations using other graph neural networks. QiGSAN significantly increases the F1-score values by up to 63.93%, 48.57%, and 9.84% compared to transformers, convolutional neural networks, and other ensemble architectures, respectively. QiGSAN outperformed the base segmentors with the mIOU (mean intersection-over-union) metric too: the highest increase was 35.79%. Therefore, our approach to knowledge extraction using mathematical models allows us to significantly improve modern computer vision techniques for imbalanced data. Full article

25 pages, 28048 KB  
Article
Simulation of Non-Stationary Mobile Underwater Acoustic Communication Channels Based on a Multi-Scale Time-Varying Multipath Model
by Honglu Yan, Songzuo Liu, Chenyu Pan, Biao Kuang, Siyu Wang and Gang Qiao
J. Mar. Sci. Eng. 2025, 13(9), 1765; https://doi.org/10.3390/jmse13091765 - 12 Sep 2025
Viewed by 487
Abstract
Traditional Underwater Acoustic Communication (UAC) typically assumes static or slowly varying channels over short observation periods and models multipath amplitude fluctuations with single-state statistical distributions. However, field measurements in shallow-water high-speed mobile scenarios reveal that the combined effects of rapid platform motion and dynamic environments induce multi-scale time-varying amplitude characteristics. These include distance-dependent attenuation, fluctuations in average energy, and rapid random variations. This observation directly challenges traditional single-state models and wide-sense stationary assumptions. To address this, we propose a multi-scale time-varying multipath amplitude model. Using singular spectrum analysis, we decompose amplitude sequences into hierarchical components: large-scale components modeled via acoustic propagation physics; medium-scale components characterized by Hidden Markov Models; and small-scale components described by zero-mean Gaussian distributions. Building on this model, we further develop a time-varying impulse response simulation framework validated with experimental data. The results demonstrate superior performance over conventional single-state distribution and autoregressive models in statistical distribution matching, temporal dynamics representation, and communication performance testing. The model effectively characterizes non-stationary time-varying channels, supporting high-precision modeling and simulation for mobile UAC systems. Full article
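The multi-scale decomposition step above rests on singular spectrum analysis (SSA): embed the amplitude sequence into a trajectory matrix, take its SVD, and group components by scale. A minimal numpy sketch, on a synthetic series (window length and component grouping are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic multipath-amplitude stand-in: slow trend + oscillation + noise,
# mimicking large/medium/small-scale behavior.
n = 200
t = np.arange(n)
x = 0.01 * t + 0.5 * np.sin(2 * np.pi * t / 25) + 0.1 * rng.standard_normal(n)

L = 40                                                 # assumed window length
K = n - L + 1
X = np.column_stack([x[i:i + L] for i in range(K)])    # trajectory matrix, L x K
U, s, Vt = np.linalg.svd(X, full_matrices=False)

def reconstruct(idx):
    """Diagonal-average (Hankelize) the selected rank-1 terms back to a series."""
    Y = (U[:, idx] * s[idx]) @ Vt[idx, :]
    out = np.zeros(n)
    cnt = np.zeros(n)
    for i in range(L):
        out[i:i + K] += Y[i]
        cnt[i:i + K] += 1
    return out / cnt

large = reconstruct([0])        # dominant trend component
medium = reconstruct([1, 2])    # oscillatory pair
small = x - large - medium      # residual small-scale fluctuations
```

The paper then models each scale separately (propagation physics, hidden Markov models, Gaussian noise); the grouping of singular components into scales is the part that requires judgment in practice.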

22 pages, 6134 KB  
Article
The Evaluation of Small-Scale Field Maize Transpiration Rate from UAV Thermal Infrared Images Using Improved Three-Temperature Model
by Xiaofei Yang, Zhitao Zhang, Qi Xu, Ning Dong, Xuqian Bai and Yanfu Liu
Plants 2025, 14(14), 2209; https://doi.org/10.3390/plants14142209 - 17 Jul 2025
Viewed by 596
Abstract
Transpiration is the dominant process driving water loss in crops, significantly influencing their growth, development, and yield. Efficient monitoring of transpiration rate (Tr) is crucial for evaluating crop physiological status and optimizing water management strategies. The three-temperature (3T) model has potential for rapid estimation of transpiration rates, but its application to low-altitude remote sensing has not yet been thoroughly investigated. To evaluate the performance of the 3T model based on land surface temperature (LST) and canopy temperature (TC) in estimating transpiration rate, this study utilized an unmanned aerial vehicle (UAV) equipped with a thermal infrared (TIR) camera to capture TIR images of summer maize during the nodulation-irrigation stage under four different moisture treatments, from which LST was extracted. The Gaussian Hidden Markov Random Field (GHMRF) model was applied to segment the TIR images, facilitating the extraction of TC. Finally, an improved 3T model incorporating fractional vegetation coverage (FVC) was proposed. The findings of the study demonstrate that: (1) The GHMRF model offers an effective approach for TIR image segmentation. The mechanism of TIR segmentation implemented by the GHMRF model is explored. The results indicate that the model performs best when the potential energy function parameter β is 0.1. (2) The feasibility of utilizing UAV-based TIR remote sensing in conjunction with the 3T model for estimating Tr has been demonstrated, showing a significant correlation between the measured and the estimated transpiration rate (Tr-3TC), derived from TC data obtained through the segmentation and processing of TIR imagery. The correlation coefficients (r) were 0.946 in 2022 and 0.872 in 2023. (3) The improved 3T model has demonstrated its ability to enhance the estimation accuracy of crop Tr rapidly and effectively, exhibiting a robust correlation with Tr-3TC. The correlation coefficients for the two observed years are 0.991 and 0.989, respectively, while the model maintains low RMSE of 0.756 mmol H2O m−2 s−1 and 0.555 mmol H2O m−2 s−1 for the respective years, indicating strong interannual stability. Full article
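The MRF-regularized segmentation idea can be illustrated with an iterated conditional modes (ICM) sweep: each pixel's label trades off a Gaussian likelihood term against a Potts smoothness penalty weighted by β. This is a generic GHMRF-flavored sketch on synthetic data, not the paper's implementation; the class means, noise level, and image are assumptions (the paper's reported optimum β = 0.1 is reused).

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-region synthetic "temperature" image with Gaussian noise.
H = W = 32
truth = np.zeros((H, W), dtype=int)
truth[:, W // 2:] = 1
img = rng.normal(truth * 2.0, 0.8)

means = np.array([0.0, 2.0])        # assumed class means
sigma, beta = 0.8, 0.1              # beta = 0.1 as reported optimal in the paper
labels = (img > 1.0).astype(int)    # crude threshold initialization

for _ in range(5):                  # ICM sweeps
    for i in range(H):
        for j in range(W):
            best, best_e = labels[i, j], np.inf
            for c in (0, 1):
                # Gaussian data term
                e = (img[i, j] - means[c]) ** 2 / (2 * sigma ** 2)
                # Potts prior: penalize disagreement with 4-neighbors
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W and labels[ni, nj] != c:
                        e += beta
                if e < best_e:
                    best, best_e = c, e
            labels[i, j] = best

accuracy = (labels == truth).mean()
```

In a full GHMRF the means and variances would be estimated (e.g., by EM) rather than fixed, but the energy trade-off shown here is the core of the β-controlled smoothing.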

33 pages, 2048 KB  
Article
Multimodal Hidden Markov Models for Real-Time Human Proficiency Assessment in Industry 5.0: Integrating Physiological, Behavioral, and Subjective Metrics
by Mowffq M. Alsanousi and Vittaldas V. Prabhu
Appl. Sci. 2025, 15(14), 7739; https://doi.org/10.3390/app15147739 - 10 Jul 2025
Viewed by 970
Abstract
This paper presents a Multimodal Hidden Markov Model (MHMM) framework specifically designed for real-time human proficiency assessment, integrating physiological (Heart Rate Variability (HRV)), behavioral (Task Completion Time (TCT)), and subjective (NASA Task Load Index (NASA-TLX)) data streams to infer latent human proficiency states in industrial settings. Using published empirical data from the surgical training literature, a comprehensive simulation study was conducted, with the MHMM (Trained) achieving 92.5% classification accuracy, significantly outperforming unimodal Hidden Markov Model (HMM) variants (61–63.9%) and demonstrating competitive performance with advanced models such as Long Short-Term Memory (LSTM) networks (90%) and Conditional Random Fields (CRFs) (88.5%). The framework exhibited robustness across stress-test scenarios, including sensor noise, missing data, and imbalanced class distributions. A key advantage of the MHMM over black-box approaches is its interpretability: it provides quantifiable transition probabilities that reveal learning rates, forgetting patterns, and contextual influences on proficiency dynamics. The model successfully captures context-dependent effects, including task complexity and cumulative fatigue, through dynamic transition matrices. When demonstrated through simulation, this framework establishes a foundation for developing adaptive operator-AI collaboration systems in Industry 5.0 environments. The MHMM’s combination of high accuracy, robustness, and interpretability makes it a promising candidate for future empirical validation in real-world industrial, healthcare, and training applications in which it is critical to understand and support human proficiency development. Full article
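One common way to fuse multiple observation streams in an HMM, plausibly close to the multimodal idea above, is to multiply per-modality emission likelihoods under a conditional-independence assumption. A toy forward-filter with two proficiency states and two discrete modalities (all matrices invented for illustration, not the paper's fitted values):

```python
import numpy as np

# Two latent states: 0 = novice, 1 = proficient. Illustrative parameters.
A = np.array([[0.8, 0.2],        # state transition probabilities
              [0.1, 0.9]])
pi = np.array([0.6, 0.4])        # initial state distribution
B_hrv = np.array([[0.7, 0.3],    # P(HRV symbol | state)
                  [0.3, 0.7]])
B_tct = np.array([[0.6, 0.4],    # P(TCT symbol | state)
                  [0.2, 0.8]])

obs_hrv = [0, 0, 1, 1]           # toy discretized observation sequences
obs_tct = [0, 1, 1, 1]

# Forward recursion; modalities fused by multiplying emission likelihoods
# (a conditional-independence assumption).
alpha = pi * B_hrv[:, obs_hrv[0]] * B_tct[:, obs_tct[0]]
for o1, o2 in zip(obs_hrv[1:], obs_tct[1:]):
    alpha = (alpha @ A) * B_hrv[:, o1] * B_tct[:, o2]

posterior = alpha / alpha.sum()  # filtered state distribution at the last step
print(posterior)
```

With these made-up parameters, the run of "proficient-like" symbols pulls the filtered posterior toward state 1; the interpretability claim in the abstract comes from reading matrices like `A` directly as learning/forgetting rates.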
(This article belongs to the Special Issue Applications of Artificial Intelligence in Industrial Engineering)

27 pages, 7591 KB  
Article
Advancing Land Use Modeling with Rice Cropping Intensity: A Geospatial Study on the Shrinking Paddy Fields in Indonesia
by Laju Gandharum, Djoko Mulyo Hartono, Heri Sadmono, Hartanto Sanjaya, Lena Sumargana, Anindita Diah Kusumawardhani, Fauziah Alhasanah, Dionysius Bryan Sencaki and Nugraheni Setyaningrum
Geographies 2025, 5(3), 31; https://doi.org/10.3390/geographies5030031 - 2 Jul 2025
Viewed by 3295
Abstract
Indonesia faces significant challenges in meeting food security targets due to rapid agricultural land loss, with approximately 1.22 million hectares of rice fields converted between 1990 and 2022. Therefore, this study developed a prediction model for the loss of rice fields by 2030, incorporating land productivity attributes, specifically rice cropping intensity (RCI), using geospatial technology—a novel method with a resolution of approximately 10 m for quantifying ecosystem service (ES) impacts. Land use/land cover data from Landsat images (2013, 2020, 2024) were classified using the Random Forest algorithm on Google Earth Engine. The prediction model was developed using Multi-Layer Perceptron Neural Network and Markov Cellular Automata (MLP-NN Markov-CA) algorithms. Additionally, time series Sentinel-1A satellite imagery was processed using K-means and a hierarchical clustering analysis to map rice fields and their RCI. The validation process confirmed high model robustness, with an MLP-NN Markov-CA accuracy and Kappa coefficient of 83.90% and 0.91, respectively. The present study, which was conducted in Indramayu Regency (West Java), predicted that 1602.73 hectares of paddy fields would be lost between 2020 and 2030, specifically 980.54 hectares (61.18%) and 622.19 hectares (38.82%) with 2 RCI and 1 RCI, respectively. This land conversion directly threatens ES, resulting in a projected loss of 83,697.95 tons of rice production, which indicates a critical degradation of service provisioning. The findings provide actionable insights for land use planning to reduce agricultural land conversion while outlining the urgency of safeguarding ES values. The adopted method is applicable to regions with similar characteristics. Full article
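The Markov half of a Markov-CA model is just a transition matrix estimated from two co-registered land-use maps, which can then project class shares forward. A minimal sketch on a toy 3x3 raster (class codes and maps are invented; a real CA step would also apply spatial suitability rules):

```python
import numpy as np

# Toy co-registered rasters; illustrative classes: 0 = paddy, 1 = built-up, 2 = other.
map_2013 = np.array([[0, 0, 1],
                     [0, 2, 1],
                     [0, 0, 2]])
map_2020 = np.array([[0, 1, 1],
                     [0, 2, 1],
                     [1, 0, 2]])

k = 3
counts = np.zeros((k, k))
for a, b in zip(map_2013.ravel(), map_2020.ravel()):
    counts[a, b] += 1                                 # cross-tabulate transitions
P = counts / counts.sum(axis=1, keepdims=True)        # row-stochastic transition matrix

share_2020 = np.bincount(map_2020.ravel(), minlength=k) / map_2020.size
share_2027 = share_2020 @ P    # naive one-step Markov projection of class shares
print(share_2027)
```

The cellular-automata component then allocates those projected quantities spatially (e.g., with neighborhood and suitability constraints), which is where the MLP-NN enters in the paper.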

25 pages, 33376 KB  
Article
Spatial-Spectral Linear Extrapolation for Cross-Scene Hyperspectral Image Classification
by Lianlei Lin, Hanqing Zhao, Sheng Gao, Junkai Wang and Zongwei Zhang
Remote Sens. 2025, 17(11), 1816; https://doi.org/10.3390/rs17111816 - 22 May 2025
Cited by 1 | Viewed by 766
Abstract
In realistic hyperspectral image (HSI) cross-scene classification tasks, it is rarely possible to obtain target domain samples during the training phase. Therefore, a model needs to be trained on one or more source domains (SD) and achieve robust domain generalization (DG) performance on an unknown target domain (TD). Popular DG strategies constrain the model’s predictive behavior in synthetic space through deep, nonlinear source expansion, and an HSI generation model is usually adopted to enrich the diversity of training samples. However, recent studies have shown that the activation functions of neurons in a network exhibit asymmetry for different categories, which results in the learning of task-irrelevant features while attempting to learn task-related features (called “feature contamination”). For example, even if some intrinsic features of HSIs (lighting conditions, atmospheric environment, etc.) are irrelevant to the label, the neural network still tends to learn them, resulting in features that make the classification related to these spurious components. To alleviate this problem, this study replaces the common nonlinear generative network with a specific linear projection transformation, to reduce the number of neurons activated nonlinearly during training and alleviate the learning of contaminated features. Specifically, this study proposes a dimensionally decoupled spatial spectral linear extrapolation (SSLE) strategy to achieve sample augmentation. Inspired by the weakening effect of water vapor absorption and Rayleigh scattering on band reflectivity, we simulate a common spectral drift based on Markov random fields to achieve linear spectral augmentation. Further considering the common co-occurrence phenomenon of patch images in space, we design spatial weights combined with label determinism of the center pixel to construct linear spatial enhancement. Finally, to ensure the cognitive unity of the high-level features of the discriminator in the sample space, we use inter-class contrastive learning to align the back-end feature representation. Extensive experiments were conducted on four datasets, an ablation study showed the effectiveness of the proposed modules, and a comparative analysis with advanced DG algorithms showed the superiority of our model in the face of various spectral and category shifts. In particular, on the Houston18/Shanghai datasets, its overall accuracy was 0.51%/0.83% higher than the best results of the other methods, and its Kappa coefficient was 0.78%/2.07% higher, respectively. Full article
(This article belongs to the Section Remote Sensing Image Processing)
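The core contrast drawn above, purely linear augmentation instead of a nonlinear generator, can be sketched in a few lines: push each pixel's spectrum linearly away from (or toward) a reference spectrum to synthesize new source-domain samples. This is a simplified stand-in for the SSLE idea; the extrapolation range and the use of the class mean as reference are assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy one-class set of spectra: 50 pixels x 30 bands.
n_samples, n_bands = 50, 30
X = rng.normal(1.0, 0.2, size=(n_samples, n_bands))
mean_spec = X.mean(axis=0)                 # reference spectrum (assumed choice)

# Per-sample linear extrapolation scale; range is an illustrative assumption.
lam = rng.uniform(-0.5, 0.5, size=(n_samples, 1))
X_aug = X + lam * (X - mean_spec)          # purely linear shift, no nonlinearity

print(X_aug.shape)
```

Because the transform is linear, no additional nonlinearly-activated neurons are involved in generating the augmented samples, which is the mechanism the paper credits for reducing feature contamination.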

23 pages, 11864 KB  
Article
Utilizing Remote Sensing and Random Forests to Identify Optimal Land Use Scenarios and Address the Increase in Landslide Susceptibility
by Aditya Nugraha Putra, Jaenudin, Novandi Rizky Prasetya, Michelle Talisia Sugiarto, Sudarto, Cahyo Prayogo, Febrian Maritimo and Fandy Tri Admajaya
Sustainability 2025, 17(9), 4227; https://doi.org/10.3390/su17094227 - 7 May 2025
Cited by 2 | Viewed by 2063
Abstract
Massive land use changes in Indonesia driven by deforestation, agricultural expansion, and urbanization have significantly increased landslide susceptibility in upper watersheds. This study focuses on the Sumber Brantas and Kali Konto sub-watersheds where rapid land conversion has destabilized slopes and disrupted ecological balance. By integrating remote sensing, Cellular Automata-Markov (CA-Markov), and Random Forest (RF) models, the research aims to identify optimal land use scenarios for mitigating landslide hazards. Three scenarios were analyzed: business as usual (BAU), land capability classification (LCC), and regional spatial planning (RSP), using 400 field-validated landslide data points alongside 22 topographic, geological, environmental, and anthropogenic parameters. Land use analysis from 2017 to 2022 revealed a 1% decline in natural forest cover, which corresponded to a 1% increase in high and very high landslide hazard areas. From 2017 to 2022, landslide risk increased as the “High” category rose from 33.95% to 37.59% and “Very High” from 10.24% to 12.18%; under BAU 2025, they reached 40.89% and 12.48%, while RSP and LCC reduced the “High” category to 44.12% and 34.44%, respectively. These findings highlight the critical role of integrating geospatial analysis and machine learning in regional planning to promote sustainable land use, reduce landslide hazards, and enhance watershed resilience with high model accuracy (>81%). Full article
(This article belongs to the Topic Natural Hazards and Disaster Risks Reduction, 2nd Edition)

27 pages, 42566 KB  
Article
Unsupervised Rural Flood Mapping from Bi-Temporal Sentinel-1 Images Using an Improved Wavelet-Fusion Flood-Change Index (IWFCI) and an Uncertainty-Sensitive Markov Random Field (USMRF) Model
by Amin Mohsenifar, Ali Mohammadzadeh and Sadegh Jamali
Remote Sens. 2025, 17(6), 1024; https://doi.org/10.3390/rs17061024 - 14 Mar 2025
Cited by 2 | Viewed by 1561
Abstract
Synthetic aperture radar (SAR) remote sensing (RS) technology is an ideal tool to map flooded areas on account of its all-time, all-weather imaging capability. Existing SAR data-based change detection approaches lack well-discriminant change indices for reliable floodwater mapping. To resolve this issue, an unsupervised change detection approach, made up of two main steps, is proposed for detecting floodwaters from bi-temporal SAR data. In the first step, an improved wavelet-fusion flood-change index (IWFCI) is proposed. The IWFCI modifies the mean-ratio change index (CI) to fuse it with the log-ratio CI using the discrete wavelet transform (DWT). The IWFCI also employs a discriminant feature derived from the co-flood image to enhance the separability between the non-flood and flood areas. In the second step, an uncertainty-sensitive Markov random field (USMRF) model is proposed to diminish the over-smoothness issue in the areas with high uncertainty based on a new Gaussian uncertainty term. To appraise the efficacy of the floodwater detection approach proposed in this study, comparative experiments were conducted in two stages on four datasets, each including a normalized difference water index (NDWI) and pre-and co-flood Sentinel-1 data. In the first stage, the proposed IWFCI was compared to a number of state-of-the-art (SOTA) CIs, and the second stage compared USMRF to the SOTA change detection algorithms. From the experimental results in the first stage, the proposed IWFCI, yielding an average F-score of 86.20%, performed better than SOTA CIs. Likewise, according to the experimental results obtained in the second stage, the USMRF model with an average F-score of 89.27% outperformed the comparative methods in classifying non-flood and flood classes. Accordingly, the proposed floodwater detection approach, combining IWFCI and USMRF, can serve as a reliable tool for detecting flooded areas in SAR data. Full article
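The fusion step described above can be illustrated end-to-end: build mean-ratio and log-ratio change indices from pre- and co-flood intensities, then combine one index's wavelet approximation with the other's details. The sketch below is a simplification of the IWFCI (the exact fusion rule and the co-flood discriminant feature are not reproduced); the Haar transform is hand-rolled to stay dependency-free, and the flood scene is synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy SAR intensities: flooding darkens a central patch of the co-flood image.
pre = rng.gamma(4.0, 0.05, size=(64, 64))
co = pre.copy()
co[16:48, 16:48] *= 0.2

eps = 1e-6
mean_ratio = 1 - np.minimum(pre, co) / (np.maximum(pre, co) + eps)
log_ratio = np.abs(np.log((co + eps) / (pre + eps)))

def haar2(a):
    """One-level 2-D Haar transform: approximation + H/V/D details."""
    s = (a[0::2, 0::2] + a[0::2, 1::2] + a[1::2, 0::2] + a[1::2, 1::2]) / 4
    h = (a[0::2, 0::2] + a[0::2, 1::2] - a[1::2, 0::2] - a[1::2, 1::2]) / 4
    v = (a[0::2, 0::2] - a[0::2, 1::2] + a[1::2, 0::2] - a[1::2, 1::2]) / 4
    d = (a[0::2, 0::2] - a[0::2, 1::2] - a[1::2, 0::2] + a[1::2, 1::2]) / 4
    return s, h, v, d

def ihaar2(s, h, v, d):
    """Exact inverse of haar2."""
    a = np.empty((2 * s.shape[0], 2 * s.shape[1]))
    a[0::2, 0::2] = s + h + v + d
    a[0::2, 1::2] = s + h - v - d
    a[1::2, 0::2] = s - h + v - d
    a[1::2, 1::2] = s - h - v + d
    return a

# Fuse: approximation from the mean-ratio index, details from the log-ratio index.
s_m, *_ = haar2(mean_ratio)
_, h_l, v_l, d_l = haar2(log_ratio)
fused = ihaar2(s_m, h_l, v_l, d_l)
flood_mask = fused > fused.mean()      # crude global threshold for illustration
```

In the paper the thresholding is replaced by the uncertainty-sensitive MRF (USMRF), which regularizes exactly this kind of fused index instead of cutting it globally.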

24 pages, 534 KB  
Article
Inference for Two-Parameter Birnbaum–Saunders Distribution Based on Type-II Censored Data with Application to the Fatigue Life of Aluminum Coupon Cuts
by Omar M. Bdair
Mathematics 2025, 13(4), 590; https://doi.org/10.3390/math13040590 - 11 Feb 2025
Cited by 1 | Viewed by 934
Abstract
This study addresses the problem of parameter estimation and prediction for type-II censored data from the two-parameter Birnbaum–Saunders (BS) distribution. The BS distribution is commonly used in reliability analysis, particularly in modeling fatigue life. Accurate estimation and prediction are crucial in many fields where censored data frequently appear, such as material science, medical studies and industrial applications. This paper presents both frequentist and Bayesian approaches to estimate the shape and scale parameters of the BS distribution, along with the prediction of unobserved failure times. Random data are generated from the BS distribution under type-II censoring, where a pre-specified number of failures (m) is observed. The generated data are used to calculate the Maximum Likelihood Estimation (MLE) and Bayesian inference and evaluate their performances. The Bayesian method employs Markov Chain Monte Carlo (MCMC) sampling for point predictions and credible intervals. We apply the methods to both datasets generated under type-II censoring and real-world data on the fatigue life of 6061-T6 aluminum coupons. Although the results show that the two methods yield similar parameter estimates, the Bayesian approach offers more flexible and reliable prediction intervals. Extensive R codes are used to explain the practical application of these methods. Our findings confirm the advantages of Bayesian inference in handling censored data, especially when prior information is available for estimation. This work not only supports the theoretical understanding of the BS distribution under type-II censoring but also provides practical tools for analyzing real data in reliability and survival studies. Future research will discuss extensions of these methods to the multi-sample progressive censoring model with larger datasets and the integration of degradation models commonly encountered in industrial applications. Full article
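The BS distribution is easy to work with numerically because of its normal representation: if Z ~ N(0, 1), then T = β(αZ/2 + √((αZ/2)² + 1))² is BS(α, β). A sanity-check sketch using that representation together with the classical modified-moment estimators (arithmetic and harmonic sample means); this is complete-sample estimation, not the paper's censored-data analysis, and the true parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)

# Sample from BS(alpha, beta) via the normal representation.
alpha, beta, n = 0.5, 100.0, 5000
z = rng.standard_normal(n)
t = beta * (alpha * z / 2 + np.sqrt((alpha * z / 2) ** 2 + 1)) ** 2

# Modified-moment estimators: s = arithmetic mean, r = harmonic mean.
s = t.mean()
r = 1.0 / np.mean(1.0 / t)
alpha_hat = np.sqrt(2 * (np.sqrt(s / r) - 1))
beta_hat = np.sqrt(s * r)
print(alpha_hat, beta_hat)
```

These closed-form estimates make good starting values for the MLE under type-II censoring, and the same sampler is the natural data-generation step for the simulation study the abstract describes.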

28 pages, 3873 KB  
Article
Bayesian Inference for Long Memory Stochastic Volatility Models
by Pedro Chaim and Márcio Poletti Laurini
Econometrics 2024, 12(4), 35; https://doi.org/10.3390/econometrics12040035 - 27 Nov 2024
Cited by 2 | Viewed by 1962
Abstract
We explore the application of integrated nested Laplace approximations for the Bayesian estimation of stochastic volatility models characterized by long memory. The logarithmic variance persistence in these models is represented by a Fractional Gaussian Noise process, which we approximate as a linear combination of independent first-order autoregressive processes, lending itself to a Gaussian Markov Random Field representation. Our results from Monte Carlo experiments indicate that this approach exhibits small sample properties akin to those of Markov Chain Monte Carlo estimators. Additionally, it offers the advantages of reduced computational complexity and the mitigation of posterior convergence issues. We employ this methodology to estimate volatility dependency patterns for both the S&P 500 index and major cryptocurrencies. We thoroughly assess the in-sample fit and extend our analysis to the construction of out-of-sample forecasts. Furthermore, we propose multi-factor extensions and apply this method to estimate volatility measurements from high-frequency data, underscoring its exceptional computational efficiency. Our simulation results demonstrate that the INLA methodology achieves comparable accuracy to traditional MCMC methods for estimating latent parameters and volatilities in LMSV models. The proposed model extensions show strong in-sample fit and out-of-sample forecast performance, highlighting the versatility of the INLA approach. This method is particularly advantageous in high-frequency contexts, where the computational demands of traditional posterior simulations are often prohibitive. Full article
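The approximation at the heart of the abstract, long memory built from a linear combination of independent AR(1) processes, can be demonstrated directly: a superposition of components with increasingly strong persistence produces autocorrelation that decays far more slowly than any single short-memory AR(1). The coefficients and weights below are ad hoc illustrations, not a fitted FGN approximation.

```python
import numpy as np

rng = np.random.default_rng(6)

n = 20000
phis = np.array([0.3, 0.9, 0.99])     # assumed AR(1) coefficients, fast to slow
weights = np.array([1.0, 1.0, 1.0])   # equal weights, purely illustrative

# Simulate the independent AR(1) components.
comps = np.zeros((len(phis), n))
eps = rng.standard_normal((len(phis), n))
for k, phi in enumerate(phis):
    for t in range(1, n):
        comps[k, t] = phi * comps[k, t - 1] + eps[k, t]

x = weights @ comps                   # the superposed, long-memory-like series

def acf(series, lag):
    """Sample autocorrelation at the given lag."""
    c = series - series.mean()
    return np.dot(c[:-lag], c[lag:]) / np.dot(c, c)

print(acf(x, 1), acf(x, 100))         # correlation persists even at lag 100
```

Because each component is Markov, the stacked state vector has a sparse precision structure, which is what makes the Gaussian Markov random field representation (and hence INLA) applicable.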

11 pages, 1942 KB  
Article
Environmental Quality, Extreme Heat, and Healthcare Expenditures
by Douglas A. Becker
Int. J. Environ. Res. Public Health 2024, 21(10), 1322; https://doi.org/10.3390/ijerph21101322 - 5 Oct 2024
Viewed by 1776
Abstract
Although the effects of the environment on human health are well-established, the literature on the relationship between the quality of the environment and expenditures on healthcare is relatively sparse and disjointed. In this study, the Environmental Quality Index developed by the Environmental Protection Agency and heatwave days were compared against per capita Medicare spending at the county level. A generalized additive model with a Markov random field smoothing term was used for the analysis to ensure that spatial dependence did not undermine model results. The Environmental Quality Index was found to hold a statistically significant (p < 0.05), multifaceted nonlinear association with spending, as was the average seasonal maximum heat index. The same was not true of heatwave days, however. In a secondary analysis on the individual domains of the index, the social and built environment components were significantly related to spending, but the air, water, and land domains were not. These results provide initial support for the simultaneous benefits of healthcare financing systems to mitigate some dimensions of poor environmental quality and consistently high air temperatures. Full article
(This article belongs to the Special Issue Health Geography’s Contribution to Environmental Health Research)
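The Markov Random Field smoothing term used in this analysis penalizes each areal unit's effect toward those of its neighbors. A minimal NumPy sketch of that idea, with a chain graph standing in for the county adjacency structure and illustrative data and smoothing parameter (the study itself fit a full generalized additive model):

```python
import numpy as np

# Toy Markov random field smoother: regional effects are shrunk toward their
# neighbors via the graph Laplacian penalty Q = D - A. "Counties" here are
# nodes on a simple chain; the outcome is a noisy smooth spatial signal.
rng = np.random.default_rng(0)
n = 50
A = np.zeros((n, n))
for i in range(n - 1):               # chain adjacency (hypothetical graph)
    A[i, i + 1] = A[i + 1, i] = 1
Q = np.diag(A.sum(axis=1)) - A       # intrinsic CAR / MRF penalty matrix

truth = np.sin(np.linspace(0, 3 * np.pi, n))    # smooth spatial signal
y = truth + rng.normal(0, 0.5, n)               # noisy per-county outcome

lam = 10.0                                       # smoothing parameter
beta = np.linalg.solve(np.eye(n) + lam * Q, y)   # penalized least squares fit

print(np.mean((y - truth) ** 2), np.mean((beta - truth) ** 2))
```

The smoothed estimates track the underlying spatial signal much more closely than the raw values, which is how the MRF term keeps spatial dependence from contaminating the regression coefficients of interest.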

29 pages, 9774 KB  
Article
High-Resolution Spatiotemporal Forecasting with Missing Observations Including an Application to Daily Particulate Matter 2.5 Concentrations in Jakarta Province, Indonesia
by I Gede Nyoman Mindra Jaya and Henk Folmer
Mathematics 2024, 12(18), 2899; https://doi.org/10.3390/math12182899 - 17 Sep 2024
Cited by 2 | Viewed by 1791
Abstract
Accurate forecasting of high-resolution particulate matter 2.5 (PM2.5) levels is essential for the development of public health policy. However, datasets used for this purpose often contain missing observations. This study presents a two-stage approach to handle this problem. The first stage is a multivariate spatial time series (MSTS) model, used to generate forecasts for the sampled spatial units and to impute missing observations. The MSTS model utilizes the similarities between the temporal patterns of the time series of the spatial units to impute the missing data across space. The second stage is the high-resolution prediction model, which generates predictions that cover the entire study domain. The second stage faces the big N problem, which gives rise to severe memory and computational burdens. As a solution to the big N problem, we propose a Gaussian Markov random field (GMRF) for innovations with the Matérn covariance matrix obtained from the corresponding Gaussian field (GF) matrix by means of the stochastic partial differential equation (SPDE) method and the finite element method (FEM). For inference, we propose Bayesian statistics and integrated nested Laplace approximation (INLA) in the R-INLA package. The above approach is demonstrated using daily data collected from 13 PM2.5 monitoring stations in Jakarta Province, Indonesia, for 1 January–31 December 2022. The first stage of the model generates PM2.5 forecasts for the 13 monitoring stations for the period 1–31 January 2023, imputing missing data by means of the MSTS model. To capture temporal trends in the PM2.5 concentrations, the model applies a first-order autoregressive process and a seasonal process. The second stage involves creating a high-resolution map for the period 1–31 January 2023, for sampled and non-sampled spatiotemporal units. It uses the MSTS-generated PM2.5 predictions for the sampled spatiotemporal units and observations of the covariates altitude, population density, and rainfall for sampled and non-sampled spatiotemporal units. For the spatially correlated random effects, we apply a first-order random walk process. The validation of out-of-sample forecasts indicates a strong model fit with low mean squared error (0.001), mean absolute error (0.037), and mean absolute percentage error (0.041), and a high R² value (0.855). The analysis reveals that altitude and precipitation negatively impact PM2.5 concentrations, while population density has a positive effect. Specifically, a one-meter increase in altitude is linked to a 7.8% decrease in PM2.5, while a one-person increase in population density leads to a 7.0% rise in PM2.5. Additionally, a one-millimeter increase in rainfall corresponds to a 3.9% decrease in PM2.5. The paper makes a valuable contribution to the field of forecasting high-resolution PM2.5 levels, which is essential for providing detailed, accurate information for public health policy. The approach presents a new and innovative method for addressing the problem of missing data and high-resolution forecasting. Full article
(This article belongs to the Special Issue Advanced Statistical Application for Realistic Problems)
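The GF-to-GMRF link that makes the big N problem tractable can be illustrated in one dimension, where a Matérn field with smoothness 1/2 (exponential covariance) on a regular grid is an AR(1) process and therefore has an exactly tridiagonal, sparse precision matrix. A NumPy sketch with illustrative range and variance parameters (the paper itself uses the SPDE/FEM construction in R-INLA, in two spatial dimensions):

```python
import numpy as np

# 1-D illustration of the GF-GMRF link: a Gaussian field with a Matern
# covariance of smoothness 1/2 (exponential decay) on a regular grid is an
# AR(1) process, so its precision matrix is tridiagonal. Sparse precision,
# not dense covariance, is what defeats the big N problem.
n = 200
rho, sigma2 = 0.9, 1.0                          # hypothetical range/variance
idx = np.arange(n)
Sigma = sigma2 * rho ** np.abs(idx[:, None] - idx[None, :])  # dense covariance

Q = np.linalg.inv(Sigma)                        # precision matrix
# Entries more than one step off the diagonal vanish (up to rounding error):
off_band = np.triu(np.abs(Q), k=2).max()
print(off_band)                                 # numerically zero
```

The dense covariance matrix has n² nonzero entries, while the corresponding precision matrix has only O(n), so Gaussian computations (solves, sampling, marginal variances) scale with the sparsity pattern rather than with n².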

17 pages, 4687 KB  
Article
Research on LSTM-Based Maneuvering Motion Prediction for USVs
by Rong Guo, Yunsheng Mao, Zuquan Xiang, Le Hao, Dingkun Wu and Lifei Song
J. Mar. Sci. Eng. 2024, 12(9), 1661; https://doi.org/10.3390/jmse12091661 - 16 Sep 2024
Cited by 2 | Viewed by 1665
Abstract
Maneuvering motion prediction is central to the control and operation of ships, and the application of machine learning algorithms in this field is increasingly prevalent. However, challenges such as extensive training time, complex parameter tuning processes, and heavy reliance on mathematical models pose substantial obstacles to their application. To address these challenges, this paper proposes an LSTM-based modeling algorithm. First, a maneuvering motion model based on a real USV model was constructed, and typical operating conditions were simulated to obtain data. The Ornstein–Uhlenbeck process and the Hidden Markov Model were applied to the simulation data to generate noise and random data loss, respectively, thereby constructing a sample set that reflects real experiment characteristics. The sample data were then pre-processed for training, employing the MaxAbsScaler strategy for data normalization, Kalman filtering and RRF for data smoothing and noise reduction, and Lagrange interpolation for data resampling to enhance the robustness of the training data. Subsequently, based on the USV maneuvering motion model, an LSTM-based black-box motion prediction model was established. An in-depth comparative analysis and discussion of the model’s network structure and parameters were conducted, followed by the training of the ship maneuvering motion model using the optimized LSTM model. Generalization tests were then performed on a generalization set under Zigzag and turning conditions to validate the accuracy and generalization performance of the prediction model. Full article
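The data-corruption steps described in this abstract (Ornstein–Uhlenbeck noise plus Markov-modulated random data loss) can be sketched as follows; all parameter values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Sketch of the sample-set construction: add Ornstein-Uhlenbeck noise to a
# clean simulated signal, then drop samples according to a two-state Markov
# chain (an HMM-style "observed/lost" process), mimicking real experiments.
rng = np.random.default_rng(42)

def ou_noise(n, theta=0.5, sigma=0.1, dt=0.1):
    """Euler-Maruyama simulation of dX = -theta*X dt + sigma dW."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = x[t - 1] - theta * x[t - 1] * dt + sigma * np.sqrt(dt) * rng.normal()
    return x

n = 500
clean = np.sin(0.05 * np.arange(n))            # stand-in for simulated USV motion
noisy = clean + ou_noise(n)                    # correlated measurement noise

# Two-state Markov chain over time: True = observed, False = lost.
p_drop, p_recover = 0.05, 0.5                  # illustrative transition probs
observed = np.ones(n, dtype=bool)
for t in range(1, n):
    if observed[t - 1]:
        observed[t] = rng.random() > p_drop
    else:
        observed[t] = rng.random() < p_recover
corrupted = np.where(observed, noisy, np.nan)  # NaN marks lost samples

print(np.isnan(corrupted).mean())              # fraction of lost samples
```

The Markov structure makes dropouts arrive in bursts rather than independently, which is the realistic failure mode the gap-filling steps (Lagrange interpolation, Kalman filtering) in the pre-processing pipeline are meant to handle.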
