Search Results (206)

Search Parameters:
Keywords = Markov random field

19 pages, 1885 KB  
Article
A Hierarchical Multi-Resolution Self-Supervised Framework for High-Fidelity 3D Face Reconstruction Using Learnable Gabor-Aware Texture Modeling
by Pichet Mareo and Rerkchai Fooprateepsiri
J. Imaging 2026, 12(1), 26; https://doi.org/10.3390/jimaging12010026 - 5 Jan 2026
Viewed by 204
Abstract
High-fidelity 3D face reconstruction from a single image is challenging, owing to the inherently ambiguous depth cues and the strong entanglement of multi-scale facial textures. In this regard, we propose a hierarchical multi-resolution self-supervised framework (HMR-Framework), which reconstructs coarse-, medium-, and fine-scale facial geometry progressively through a unified pipeline. A coarse geometric prior is first estimated via 3D morphable model regression, followed by medium-scale refinement using a vertex deformation map constrained by a global–local Markov random field loss to preserve structural coherence. To improve fine-scale fidelity, a learnable Gabor-aware texture enhancement module is proposed to decouple spatial–frequency information and thus improve sensitivity to high-frequency facial attributes. Additionally, we employ a wavelet-based detail perception loss to preserve edge-aware texture features while mitigating noise commonly observed in in-the-wild images. Extensive qualitative and quantitative evaluations on benchmark datasets indicate that the proposed framework provides better fine-detail reconstruction than existing state-of-the-art methods, while maintaining robustness across pose variations. Notably, the hierarchical design increases semantic consistency across multiple geometric scales, providing a functional solution for high-fidelity 3D face reconstruction from monocular images.
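A global–local MRF-style regularizer on a vertex deformation map can be sketched in a few lines. The potentials below (a 4-connected local smoothness term plus a mean-anchored global term, with weights alpha_local and alpha_global) are illustrative assumptions; the abstract does not specify the paper's exact cliques or weights.

```python
import numpy as np

def global_local_mrf_loss(deformation, alpha_local=1.0, alpha_global=0.1):
    """Sketch of a global-local MRF loss on a per-vertex deformation map
    (H x W x 3 offsets in UV space). The local term penalizes disagreement
    between 4-connected neighbors; the global term pulls the field toward
    its mean so large-scale structure stays coherent."""
    dx = deformation[:, 1:] - deformation[:, :-1]      # horizontal cliques
    dy = deformation[1:, :] - deformation[:-1, :]      # vertical cliques
    local = (dx ** 2).mean() + (dy ** 2).mean()
    global_ = ((deformation - deformation.mean(axis=(0, 1))) ** 2).mean()
    return alpha_local * local + alpha_global * global_

loss = global_local_mrf_loss(np.random.randn(256, 256, 3) * 0.01)
```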

25 pages, 4363 KB  
Article
Demand Response Potential Evaluation Based on Multivariate Heterogeneous Features and Stacking Mechanism
by Chong Gao, Zhiheng Xu, Ran Cheng, Junxiao Zhang, Xinghang Weng, Huahui Zhang, Tao Yu and Wencong Xiao
Energies 2026, 19(1), 194; https://doi.org/10.3390/en19010194 - 30 Dec 2025
Viewed by 214
Abstract
Accurate evaluation of demand response (DR) potential at the individual user level is critical for the effective implementation and optimization of demand response programs. However, existing data-driven methods often suffer from insufficient feature representation, limited characterization of load profile dynamics, and ineffective fusion of heterogeneous features, leading to suboptimal evaluation performance. To address these challenges, this paper proposes a novel demand response potential evaluation method based on multivariate heterogeneous features and a Stacking-based ensemble mechanism. First, multidimensional indicator features are extracted from historical electricity consumption data and external factors (e.g., weather, time-of-use pricing), capturing load shape, variability, and correlation characteristics. Second, to enrich the information space and preserve temporal dynamics, typical daily load profiles are transformed into two-dimensional image features using the Gramian Angular Difference Field (GADF), the Markov Transition Field (MTF), and an Improved Recurrence Plot (IRP), which are then fused into a single RGB image. Third, a differentiated modeling strategy is adopted: scalar indicator features are processed by classical machine learning models (Support Vector Machine, Random Forest, XGBoost), while image features are fed into a deep convolutional neural network (SE-ResNet-20). Finally, a Stacking ensemble learning framework is employed to intelligently integrate the outputs of base learners, with a Decision Tree as the meta-learner, thereby enhancing overall evaluation accuracy and robustness. Experimental results on a real-world dataset demonstrate that the proposed method achieves superior performance compared to individual models and conventional fusion approaches, effectively leveraging both structured indicators and unstructured image representations for high-precision demand response potential evaluation.
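The image-encoding step can be prototyped with the pyts library by stacking the three encodings as RGB channels. A standard recurrence plot stands in here for the paper's improved variant (IRP), and the load profile is synthetic:

```python
import numpy as np
from pyts.image import GramianAngularField, MarkovTransitionField, RecurrencePlot

# One typical daily load profile, sampled every 15 minutes (96 points).
profile = np.sin(np.linspace(0, 2 * np.pi, 96)) + 0.1 * np.random.randn(96)
X = profile.reshape(1, -1)           # pyts expects (n_samples, n_timestamps)

gadf = GramianAngularField(method="difference").fit_transform(X)[0]     # 96x96
mtf = MarkovTransitionField().fit_transform(X)[0]                       # 96x96
rp = RecurrencePlot(threshold="point", percentage=20).fit_transform(X)[0]

def norm01(a):
    return (a - a.min()) / (a.max() - a.min() + 1e-9)

# Fuse the three encodings into a single RGB image, one encoding per channel.
rgb = np.stack([norm01(gadf), norm01(mtf), norm01(rp)], axis=-1)
```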
(This article belongs to the Section F1: Electrical Power System)

24 pages, 443 KB  
Article
Consistent Markov Edge Processes and Random Graphs
by Donatas Surgailis
Mathematics 2025, 13(21), 3368; https://doi.org/10.3390/math13213368 - 22 Oct 2025
Viewed by 398
Abstract
We discuss Markov edge processes {Y_e; e ∈ E} defined on the edges of a directed acyclic graph (V, E) with the consistency property P_{E'}(Y_e; e ∈ E') = P_E(Y_e; e ∈ E') for a large class of subgraphs (V', E') of (V, E) obtained through a mesh dismantling algorithm. The probability distribution P_E of such an edge process is a discrete version of consistent polygonal Markov graphs. The class of Markov edge processes is related to the class of Bayesian networks and may be of interest for causal inference and decision theory. On regular ν-dimensional lattices, consistent Markov edge processes have properties similar to Pickard random fields on ℤ², representing a far-reaching extension of the latter class. A particular case of a binary consistent edge process on ℤ³ was disclosed by Arak in a private communication. We prove that the symmetric binary Pickard model generates the Arak model on ℤ² as a contour model.
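Written out in LaTeX, the consistency property says the law assigned to a dismantled subgraph coincides with the marginal of the full-graph law on that subgraph's edges; this marginalization reading is my interpretation of the abstract:

```latex
\[
  P_{\mathcal{E}'}\!\left(Y_e;\, e \in \mathcal{E}'\right)
  \;=\;
  \sum_{y_f \,:\, f \in \mathcal{E}\setminus\mathcal{E}'}
  P_{\mathcal{E}}\!\left(Y_e;\, e \in \mathcal{E}';\;
                          Y_f = y_f,\ f \in \mathcal{E}\setminus\mathcal{E}'\right)
\]
```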
(This article belongs to the Special Issue Modeling and Data Analysis of Complex Networks)

21 pages, 2630 KB  
Article
Hierarchical Markov Chain Monte Carlo Framework for Spatiotemporal EV Charging Load Forecasting
by Xuehan Zheng, Yalun Zhu, Ming Wang, Bo Lv and Yisheng Lv
Appl. Sci. 2025, 15(20), 11094; https://doi.org/10.3390/app152011094 - 16 Oct 2025
Viewed by 548
Abstract
With the advancement of battery technology and the promotion of the “dual carbon” policy, electric vehicles (EVs) have been widely used in industrial, commercial, and civil fields, and the charging infrastructure of highway service areas across the country has shown rapid development. However, the charging load of EVs in highway scenarios exhibits strong randomness and uncertainty. It is affected by multiple factors such as traffic flow, state of charge (SOC), and user charging behavior, making it difficult to model accurately with traditional mathematical models. This paper proposes a hierarchical Markov chain Monte Carlo (HMMC) simulation method to construct a charging load prediction model with spatiotemporal coupling characteristics. The model captures traffic flow, SOC, and charging behavior in separate layers to reduce interference between dimensions; by constructing a Markov chain that converges to the target distribution and an inter-layer transfer mechanism, the load change process is deduced layer by layer, thereby achieving more accurate charging load prediction. Comparative experiments with mainstream methods such as ARIMA, BP neural networks, random forests, and LSTM show that the HMMC model has higher prediction accuracy in highway scenarios, significantly reduces prediction errors, and improves model stability and interpretability.
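A minimal sketch of the core building block: a random-walk Metropolis sampler whose chain converges to a target distribution, with one layer's samples conditioning the next layer's target. The distributions and the inter-layer coupling below are illustrative assumptions, not the paper's calibrated models:

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis(log_target, x0, n_steps=5000, step=0.5):
    """Random-walk Metropolis: the chain's stationary law is the target."""
    x, lp = x0, log_target(x0)
    samples = np.empty(n_steps)
    for k in range(n_steps):
        prop = x + rng.normal(0.0, step)
        lp_prop = log_target(prop)
        if np.log(rng.uniform()) < lp_prop - lp:       # accept/reject
            x, lp = prop, lp_prop
        samples[k] = x
    return samples

# Layer 1: hourly traffic flow at a service area, assumed log-normal.
def log_flow(x):
    return -np.inf if x <= 0 else -0.5 * ((np.log(x) - 5.0) / 0.4) ** 2

flow = metropolis(log_flow, x0=150.0, step=20.0)

# Layer 2: arrival SOC conditioned on sampled flow (illustrative inter-layer
# transfer: heavier traffic -> slightly lower mean SOC on arrival).
mean_soc = 0.6 - 0.1 * min(flow.mean() / 300.0, 1.0)
soc = metropolis(lambda s: -0.5 * ((s - mean_soc) / 0.1) ** 2, x0=0.5, step=0.05)

# Charging load per arrival: energy to recharge from sampled SOC to full.
battery_kwh = 60.0
load_kwh = battery_kwh * (1.0 - np.clip(soc, 0.0, 1.0))
```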

25 pages, 3025 KB  
Article
QiGSAN: A Novel Probability-Informed Approach for Small Object Segmentation in the Case of Limited Image Datasets
by Andrey Gorshenin and Anastasia Dostovalova
Big Data Cogn. Comput. 2025, 9(9), 239; https://doi.org/10.3390/bdcc9090239 - 18 Sep 2025
Viewed by 1082
Abstract
The paper presents a novel probability-informed approach to improving the accuracy of small object semantic segmentation in high-resolution imagery datasets with imbalanced classes and a limited volume of samples. Small objects are those with a small pixel footprint in the input image, for example, ships in the ocean. Informing in this context means using mathematical models to represent data in the layers of deep neural networks. Thus, the ensemble Quadtree-informed Graph Self-Attention Networks (QiGSANs) are proposed. New architectural blocks, informed by types of Markov random fields such as quadtrees, are introduced to capture the interconnections between features in images at different spatial resolutions during the graph convolution of superpixel subregions. It is analytically proven that quadtree-informed graph convolutional neural networks, a part of QiGSAN, tend to achieve faster loss reduction than convolutional architectures, which justifies the effectiveness of probability-informed modifications based on quadtrees. To empirically demonstrate the processing of real small data with imbalanced object classes using QiGSAN, two open datasets of synthetic aperture radar (SAR) imagery (up to 0.5 m per pixel) are used: the High Resolution SAR Images Dataset (HRSID) and the SAR Ship Detection Dataset (SSDD). The results of QiGSAN are compared to those of the transformers SegFormer and LWGANet, the latter a recent state-of-the-art model for UAV (unmanned aerial vehicle) and SAR image processing, as well as to convolutional neural networks and several ensemble implementations using other graph neural networks. QiGSAN increases F1-score values by up to 63.93%, 48.57%, and 9.84% compared to transformers, convolutional neural networks, and other ensemble architectures, respectively. QiGSAN also outperformed the base segmentors on the mIoU (mean intersection-over-union) metric, with a highest increase of 35.79%. Therefore, our approach to knowledge extraction using mathematical models allows us to significantly improve modern computer vision techniques for imbalanced data.
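The quadtree structure underlying the informed blocks is easy to sketch: each level halves the spatial resolution and every coarse cell links to its four children, giving the cross-resolution edges a graph convolution can operate over. This toy construction only illustrates the data structure, not the QiGSAN block itself:

```python
import numpy as np

def quadtree_nodes(image, depth=3):
    """Build a quadtree over an image: each level halves the resolution by
    2x2 averaging, and every coarse cell is linked to its four children one
    level down. Returns per-level arrays plus parent->child edges."""
    levels, edges = [image.astype(float)], []
    for d in range(depth):
        prev = levels[-1]
        h, w = prev.shape[0] // 2, prev.shape[1] // 2
        coarse = prev[:2 * h, :2 * w].reshape(h, 2, w, 2).mean(axis=(1, 3))
        levels.append(coarse)
        # Edge between coarse cell (i, j) at level d+1 and its 4 children at level d.
        for i in range(h):
            for j in range(w):
                for di in (0, 1):
                    for dj in (0, 1):
                        edges.append(((d + 1, i, j), (d, 2 * i + di, 2 * j + dj)))
    return levels, edges

levels, edges = quadtree_nodes(np.random.rand(64, 64), depth=3)
```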

25 pages, 28048 KB  
Article
Simulation of Non-Stationary Mobile Underwater Acoustic Communication Channels Based on a Multi-Scale Time-Varying Multipath Model
by Honglu Yan, Songzuo Liu, Chenyu Pan, Biao Kuang, Siyu Wang and Gang Qiao
J. Mar. Sci. Eng. 2025, 13(9), 1765; https://doi.org/10.3390/jmse13091765 - 12 Sep 2025
Cited by 2 | Viewed by 1430
Abstract
Traditional underwater acoustic communication (UAC) channel models typically assume static or slowly varying channels over short observation periods and model multipath amplitude fluctuations with single-state statistical distributions. However, field measurements in shallow-water high-speed mobile scenarios reveal that the combined effects of rapid platform motion and dynamic environments induce multi-scale time-varying amplitude characteristics. These include distance-dependent attenuation, fluctuations in average energy, and rapid random variations. This observation directly challenges traditional single-state models and wide-sense stationary assumptions. To address this, we propose a multi-scale time-varying multipath amplitude model. Using singular spectrum analysis, we decompose amplitude sequences into hierarchical components: large-scale components modeled via acoustic propagation physics; medium-scale components characterized by hidden Markov models; and small-scale components described by zero-mean Gaussian distributions. Building on this model, we further develop a time-varying impulse response simulation framework validated with experimental data. The results demonstrate superior performance over conventional single-state distribution and autoregressive models in statistical distribution matching, temporal dynamics representation, and communication performance testing. The model effectively characterizes non-stationary time-varying channels, supporting high-precision modeling and simulation for mobile UAC systems.
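A bare-bones singular spectrum analysis of this kind (embed, decompose, regroup) fits in a few lines; the window length and the grouping of singular triples into large- and medium-scale parts below are arbitrary choices for illustration:

```python
import numpy as np

def ssa_decompose(x, window=50, groups=((0, 1), (2, 3, 4))):
    """Basic singular spectrum analysis: embed the series in a Hankel
    trajectory matrix, take its SVD, and reconstruct grouped components by
    anti-diagonal averaging. Ungrouped components form the residual."""
    n = len(x)
    k = n - window + 1
    traj = np.column_stack([x[i:i + window] for i in range(k)])   # window x k
    u, s, vt = np.linalg.svd(traj, full_matrices=False)

    def reconstruct(idx):
        mat = sum(s[i] * np.outer(u[:, i], vt[i]) for i in idx)
        comp, counts = np.zeros(n), np.zeros(n)
        for r in range(window):            # anti-diagonal averaging
            for c in range(k):
                comp[r + c] += mat[r, c]
                counts[r + c] += 1
        return comp / counts

    parts = [reconstruct(g) for g in groups]
    return parts, x - sum(parts)           # grouped components + residual

t = np.linspace(0, 10, 500)
amp = np.exp(-0.05 * t) + 0.3 * np.sin(2 * np.pi * 0.8 * t) + 0.05 * np.random.randn(500)
(large_scale, medium_scale), small_scale = ssa_decompose(amp)
```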

24 pages, 23437 KB  
Article
Fusing Direct and Indirect Visual Odometry for SLAM: An ICM-Based Framework
by Jeremias Gaia, Javier Gimenez, Eugenio Orosco, Francisco Rossomando, Carlos Soria and Fernando Ulloa-Vásquez
World Electr. Veh. J. 2025, 16(9), 510; https://doi.org/10.3390/wevj16090510 - 10 Sep 2025
Viewed by 1089
Abstract
The loss of localization in robots navigating GNSS-denied environments poses a critical challenge that can compromise mission success and safe operation. This article presents a method that fuses visual odometry outputs from both direct and feature-based (indirect) methods using Iterated Conditional Modes (ICM), an efficient iterative optimization algorithm that maximizes the posterior probability in Markov random fields, combined with uncertainty-aware gain adjustment to perform pose estimation and mapping. The proposed method enhances the performance of visual localization and mapping algorithms in low-texture or visually degraded scenarios. The method was validated using the TUM RGB-D benchmark dataset and through real-world tests in both indoor and outdoor environments. Outdoor experiments were conducted on an electric vehicle, where the method maintained stable tracking. These initial results suggest that the technique could be transferable to electric vehicle platforms and applicable in a variety of real-world conditions.
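ICM itself is simple to demonstrate on the classic grid-MRF test case, binary image restoration: each site is greedily set to the label minimizing a data cost plus a β-weighted neighbor-disagreement cost, which never decreases the posterior. This stand-in shows the mechanics only; the paper applies ICM to fusing odometry estimates, not to images:

```python
import numpy as np

def icm_denoise(obs, n_labels=2, beta=1.5, noise_sigma=0.8, n_iters=5):
    """Iterated Conditional Modes on a grid MRF: greedy coordinate-wise
    MAP updates with a Gaussian data term and Potts smoothness term."""
    labels = obs.round().clip(0, n_labels - 1).astype(int)   # init from data
    h, w = obs.shape
    for _ in range(n_iters):
        for i in range(h):
            for j in range(w):
                costs = []
                for lab in range(n_labels):
                    data = (obs[i, j] - lab) ** 2 / (2 * noise_sigma ** 2)
                    smooth = sum(labels[i + di, j + dj] != lab
                                 for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                                 if 0 <= i + di < h and 0 <= j + dj < w)
                    costs.append(data + beta * smooth)
                labels[i, j] = int(np.argmin(costs))
    return labels

clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
noisy = clean + 0.8 * np.random.randn(32, 32)
restored = icm_denoise(noisy)
```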

22 pages, 6134 KB  
Article
The Evaluation of Small-Scale Field Maize Transpiration Rate from UAV Thermal Infrared Images Using Improved Three-Temperature Model
by Xiaofei Yang, Zhitao Zhang, Qi Xu, Ning Dong, Xuqian Bai and Yanfu Liu
Plants 2025, 14(14), 2209; https://doi.org/10.3390/plants14142209 - 17 Jul 2025
Viewed by 962
Abstract
Transpiration is the dominant process driving water loss in crops, significantly influencing their growth, development, and yield. Efficient monitoring of transpiration rate (Tr) is crucial for evaluating crop physiological status and optimizing water management strategies. The three-temperature (3T) model has potential for rapid estimation of transpiration rates, but its application to low-altitude remote sensing has not yet been investigated further. To evaluate the performance of the 3T model based on land surface temperature (LST) and canopy temperature (TC) in estimating transpiration rate, this study used an unmanned aerial vehicle (UAV) equipped with a thermal infrared (TIR) camera to capture TIR images of summer maize during the nodulation-irrigation stage under four different moisture treatments, from which LST was extracted. The Gaussian hidden Markov random field (GHMRF) model was applied to segment the TIR images, facilitating the extraction of TC. Finally, an improved 3T model incorporating fractional vegetation coverage (FVC) was proposed. The findings demonstrate that: (1) The GHMRF model offers an effective approach to TIR image segmentation, and its segmentation mechanism is explored; the results indicate that a potential energy function parameter β of 0.1 provides the optimal performance. (2) UAV-based TIR remote sensing combined with the 3T model is feasible for estimating Tr, showing a significant correlation between the measured transpiration rate and the estimate (Tr-3TC) derived from TC data obtained through segmentation of the TIR imagery; the correlation coefficients (r) were 0.946 in 2022 and 0.872 in 2023. (3) The improved 3T model enhances the estimation accuracy of crop Tr rapidly and effectively, exhibiting a robust correlation with Tr-3TC: the correlation coefficients for the two observed years are 0.991 and 0.989, respectively, while the model maintains low RMSEs of 0.756 mmol H₂O m⁻² s⁻¹ and 0.555 mmol H₂O m⁻² s⁻¹, indicating strong interannual stability.
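For reference, a generic Gaussian hidden-MRF segmentation energy with a Potts-type smoothness weight β reads as follows; this is the standard textbook form, and the paper's exact potentials may differ:

```latex
% Label field x over pixels, observed temperatures y; minimizing U is MAP
% segmentation. The second sum runs over 4-connected neighbor pairs.
\[
  U(x \mid y) \;=\; \sum_{i}
    \left[ \frac{(y_i - \mu_{x_i})^2}{2\sigma_{x_i}^2}
           + \log \sigma_{x_i} \right]
  \;+\; \beta \sum_{\langle i,j \rangle} \mathbf{1}\{x_i \neq x_j\},
\]
% with beta = 0.1 reported as the best-performing smoothness weight.
```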

33 pages, 2048 KB  
Article
Multimodal Hidden Markov Models for Real-Time Human Proficiency Assessment in Industry 5.0: Integrating Physiological, Behavioral, and Subjective Metrics
by Mowffq M. Alsanousi and Vittaldas V. Prabhu
Appl. Sci. 2025, 15(14), 7739; https://doi.org/10.3390/app15147739 - 10 Jul 2025
Viewed by 2019
Abstract
This paper presents a Multimodal Hidden Markov Model (MHMM) framework specifically designed for real-time human proficiency assessment, integrating physiological (heart rate variability, HRV), behavioral (task completion time, TCT), and subjective (NASA Task Load Index, NASA-TLX) data streams to infer latent human proficiency states in industrial settings. Using published empirical data from the surgical training literature, a comprehensive simulation study was conducted, with the trained MHMM achieving 92.5% classification accuracy, significantly outperforming unimodal hidden Markov model (HMM) variants (61–63.9%) and demonstrating performance competitive with advanced models such as Long Short-Term Memory (LSTM) networks (90%) and Conditional Random Fields (CRF) (88.5%). The framework exhibited robustness across stress-test scenarios, including sensor noise, missing data, and imbalanced class distributions. A key advantage of the MHMM over black-box approaches is its interpretability: it provides quantifiable transition probabilities that reveal learning rates, forgetting patterns, and contextual influences on proficiency dynamics. The model successfully captures context-dependent effects, including task complexity and cumulative fatigue, through dynamic transition matrices. Demonstrated through simulation, this framework establishes a foundation for developing adaptive operator-AI collaboration systems in Industry 5.0 environments. The MHMM's combination of high accuracy, robustness, and interpretability makes it a promising candidate for future empirical validation in real-world industrial, healthcare, and training applications in which it is critical to understand and support human proficiency development.
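A multimodal HMM of this shape can be prototyped with hmmlearn by stacking the three streams as one observation vector per time step. The synthetic numbers and the three-state choice below are placeholders, not the paper's fitted model:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Stack the three observation streams per time step:
# HRV (ms), task completion time (s), NASA-TLX workload score.
rng = np.random.default_rng(1)
T = 300
hrv = rng.normal(55, 8, T)
tct = rng.normal(40, 6, T)
tlx = rng.normal(50, 10, T)
X = np.column_stack([hrv, tct, tlx])

# Three latent proficiency states (e.g., novice / intermediate / proficient).
model = GaussianHMM(n_components=3, covariance_type="full", n_iter=100,
                    random_state=0)
model.fit(X)
states = model.predict(X)      # decoded proficiency trajectory
print(model.transmat_)         # interpretable learning/forgetting rates
```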
(This article belongs to the Special Issue Applications of Artificial Intelligence in Industrial Engineering)

27 pages, 7591 KB  
Article
Advancing Land Use Modeling with Rice Cropping Intensity: A Geospatial Study on the Shrinking Paddy Fields in Indonesia
by Laju Gandharum, Djoko Mulyo Hartono, Heri Sadmono, Hartanto Sanjaya, Lena Sumargana, Anindita Diah Kusumawardhani, Fauziah Alhasanah, Dionysius Bryan Sencaki and Nugraheni Setyaningrum
Geographies 2025, 5(3), 31; https://doi.org/10.3390/geographies5030031 - 2 Jul 2025
Cited by 1 | Viewed by 4994
Abstract
Indonesia faces significant challenges in meeting food security targets due to rapid agricultural land loss, with approximately 1.22 million hectares of rice fields converted between 1990 and 2022. Therefore, this study developed a prediction model for the loss of rice fields by 2030, incorporating land productivity attributes, specifically rice cropping intensity (RCI), using geospatial technology: a novel method with a resolution of approximately 10 m for quantifying ecosystem service (ES) impacts. Land use/land cover data from Landsat images (2013, 2020, 2024) were classified using the Random Forest algorithm on Google Earth Engine. The prediction model was developed using Multi-Layer Perceptron Neural Network and Markov Cellular Automata (MLP-NN Markov-CA) algorithms. Additionally, time series Sentinel-1A satellite imagery was processed using K-means and hierarchical clustering analysis to map rice fields and their RCI. The validation process confirmed high model robustness, with an MLP-NN Markov-CA accuracy and Kappa coefficient of 83.90% and 0.91, respectively. The study, conducted in Indramayu Regency (West Java), predicted that 1602.73 hectares of paddy fields would be lost between 2020 and 2030, specifically 980.54 hectares (61.18%) with an RCI of 2 and 622.19 hectares (38.82%) with an RCI of 1. This land conversion directly threatens ES, resulting in a projected loss of 83,697.95 tons of rice production, which indicates a critical degradation of service provisioning. The findings provide actionable insights for land use planning to reduce agricultural land conversion while underlining the urgency of safeguarding ES values. The adopted method is applicable to regions with similar characteristics.
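The Markov half of an MLP-NN Markov-CA model rests on a land-use transition matrix estimated from two classified dates. A minimal sketch, with hypothetical class codes and conversion rates:

```python
import numpy as np

def transition_matrix(lc_t0, lc_t1, n_classes):
    """Estimate the Markov land-use transition matrix from two co-registered
    classified rasters: P[i, j] = share of class-i pixels at t0 that became
    class j at t1. Rows sum to 1 by construction."""
    counts = np.zeros((n_classes, n_classes))
    for i, j in zip(lc_t0.ravel(), lc_t1.ravel()):
        counts[i, j] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# Hypothetical 3-class maps (0 = paddy, 1 = built-up, 2 = other vegetation).
rng = np.random.default_rng(0)
lc_2013 = rng.integers(0, 3, size=(100, 100))
lc_2020 = np.where(rng.random((100, 100)) < 0.1, 1, lc_2013)  # some conversion
P = transition_matrix(lc_2013, lc_2020, n_classes=3)
# A Markov projection multiplies class shares by P per step: shares_2027 = shares_2020 @ P;
# the CA part then allocates those quantities spatially.
```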

25 pages, 33376 KB  
Article
Spatial-Spectral Linear Extrapolation for Cross-Scene Hyperspectral Image Classification
by Lianlei Lin, Hanqing Zhao, Sheng Gao, Junkai Wang and Zongwei Zhang
Remote Sens. 2025, 17(11), 1816; https://doi.org/10.3390/rs17111816 - 22 May 2025
Cited by 1 | Viewed by 1299
Abstract
In realistic hyperspectral image (HSI) cross-scene classification tasks, target domain samples are rarely available during the training phase. Therefore, a model needs to be trained on one or more source domains (SD) and achieve robust domain generalization (DG) performance on an unknown target domain (TD). Popular DG strategies constrain the model's predictive behavior in synthetic space through deep, nonlinear source expansion, and an HSI generation model is usually adopted to enrich the diversity of training samples. However, recent studies have shown that the activation functions of neurons in a network exhibit asymmetry for different categories, which results in the learning of task-irrelevant features alongside task-related ones (so-called “feature contamination”). For example, even if some intrinsic properties of HSIs (lighting conditions, atmospheric environment, etc.) are irrelevant to the label, the neural network still tends to learn them, producing classifications that depend on these spurious components. To alleviate this problem, this study replaces the common nonlinear generative network with a specific linear projection transformation, reducing the number of nonlinearly activated neurons during training and thereby the learning of contaminated features. Specifically, this study proposes a dimensionally decoupled spatial-spectral linear extrapolation (SSLE) strategy for sample augmentation. Inspired by the weakening effect of water vapor absorption and Rayleigh scattering on band reflectivity, we simulate a common spectral drift based on Markov random fields to achieve linear spectral augmentation. Further considering the common co-occurrence phenomenon of patch images in space, we design spatial weights combined with the label determinism of the center pixel to construct linear spatial enhancement. Finally, to keep the discriminator's high-level feature representations consistent across the sample space, we use inter-class contrastive learning to align the back-end feature representation. Extensive experiments were conducted on four datasets; an ablation study showed the effectiveness of the proposed modules, and a comparative analysis with advanced DG algorithms showed the superiority of our model under various spectral and category shifts. In particular, on the Houston18/Shanghai datasets, its overall accuracy was 0.51%/0.83% higher than the best results of the other methods, and its Kappa coefficient was 0.78%/2.07% higher, respectively.
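The linear core of such augmentation can be sketched as extrapolation along a physically motivated spectral drift. The wavelength grid, the 1/λ⁴-style decay (standing in for Rayleigh scattering), and the extrapolation factor below are illustrative assumptions; the paper derives its drift from a Markov random field rather than from this closed form:

```python
import numpy as np

def spectral_extrapolate(x, strength=0.2, rng=None):
    """Linear spectral augmentation sketch: scale each band by a smooth
    random profile mimicking wavelength-dependent attenuation, then
    extrapolate away from the original: x_aug = x + lam * (x_shifted - x)."""
    rng = rng or np.random.default_rng()
    n_bands = x.shape[-1]
    wavelengths = np.linspace(0.4, 2.5, n_bands)      # micrometres, illustrative
    # Rayleigh-like multiplicative drift, stronger at short wavelengths.
    drift = 1.0 + strength * rng.uniform(-1, 1) / wavelengths ** 4
    x_shifted = x * drift
    lam = rng.uniform(0.5, 1.5)                        # extrapolation factor
    return x + lam * (x_shifted - x)

pixel = np.random.rand(200)        # a 200-band spectrum
augmented = spectral_extrapolate(pixel)
```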
(This article belongs to the Section Remote Sensing Image Processing)

23 pages, 11864 KB  
Article
Utilizing Remote Sensing and Random Forests to Identify Optimal Land Use Scenarios and Address the Increase in Landslide Susceptibility
by Aditya Nugraha Putra, Jaenudin, Novandi Rizky Prasetya, Michelle Talisia Sugiarto, Sudarto, Cahyo Prayogo, Febrian Maritimo and Fandy Tri Admajaya
Sustainability 2025, 17(9), 4227; https://doi.org/10.3390/su17094227 - 7 May 2025
Cited by 7 | Viewed by 3344
Abstract
Massive land use changes in Indonesia, driven by deforestation, agricultural expansion, and urbanization, have significantly increased landslide susceptibility in upper watersheds. This study focuses on the Sumber Brantas and Kali Konto sub-watersheds, where rapid land conversion has destabilized slopes and disrupted ecological balance. By integrating remote sensing, Cellular Automata-Markov (CA-Markov), and Random Forest (RF) models, the research aims to identify optimal land use scenarios for mitigating landslide hazards. Three scenarios were analyzed, business as usual (BAU), land capability classification (LCC), and regional spatial planning (RSP), using 400 field-validated landslide data points alongside 22 topographic, geological, environmental, and anthropogenic parameters. Land use analysis from 2017 to 2022 revealed a 1% decline in natural forest cover, which corresponded to a 1% increase in high and very high landslide hazard areas. Over that period, landslide risk increased as the “High” category rose from 33.95% to 37.59% and “Very High” from 10.24% to 12.18%; under BAU 2025 they reached 40.89% and 12.48%, while RSP and LCC brought the “High” category to 44.12% and 34.44%, respectively. These findings highlight the critical role of integrating geospatial analysis and machine learning in regional planning to promote sustainable land use, reduce landslide hazards, and enhance watershed resilience with high model accuracy (>81%).
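The RF susceptibility step reduces to fitting a classifier on the landslide inventory and reading the landslide-class probability as the susceptibility score. A sketch on hypothetical stand-in data (the real study uses 400 field points and 22 named parameters):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical stand-in: 400 points described by 22 terrain, geology,
# environment, and land-use parameters; label 1 = landslide occurrence.
rng = np.random.default_rng(0)
X = rng.random((400, 22))
y = rng.integers(0, 2, 400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("accuracy:", rf.score(X_te, y_te))

# Susceptibility = per-location probability of the landslide class,
# later binned into classes such as "High" and "Very High".
susceptibility = rf.predict_proba(X_te)[:, 1]
```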
(This article belongs to the Topic Natural Hazards and Disaster Risks Reduction, 2nd Edition)

27 pages, 42566 KB  
Article
Unsupervised Rural Flood Mapping from Bi-Temporal Sentinel-1 Images Using an Improved Wavelet-Fusion Flood-Change Index (IWFCI) and an Uncertainty-Sensitive Markov Random Field (USMRF) Model
by Amin Mohsenifar, Ali Mohammadzadeh and Sadegh Jamali
Remote Sens. 2025, 17(6), 1024; https://doi.org/10.3390/rs17061024 - 14 Mar 2025
Cited by 4 | Viewed by 2094
Abstract
Synthetic aperture radar (SAR) remote sensing (RS) technology is an ideal tool for mapping flooded areas on account of its all-time, all-weather imaging capability. Existing SAR data-based change detection approaches lack well-discriminating change indices for reliable floodwater mapping. To resolve this issue, an unsupervised change detection approach, made up of two main steps, is proposed for detecting floodwaters from bi-temporal SAR data. In the first step, an improved wavelet-fusion flood-change index (IWFCI) is proposed. The IWFCI modifies the mean-ratio change index (CI) to fuse it with the log-ratio CI using the discrete wavelet transform (DWT). The IWFCI also employs a discriminant feature derived from the co-flood image to enhance the separability between non-flood and flood areas. In the second step, an uncertainty-sensitive Markov random field (USMRF) model is proposed to diminish the over-smoothness issue in areas of high uncertainty, based on a new Gaussian uncertainty term. To appraise the efficacy of the proposed floodwater detection approach, comparative experiments were conducted in two stages on four datasets, each including a normalized difference water index (NDWI) and pre- and co-flood Sentinel-1 data. In the first stage, the proposed IWFCI, yielding an average F-score of 86.20%, performed better than state-of-the-art (SOTA) CIs. In the second stage, the USMRF model, with an average F-score of 89.27%, outperformed the comparative methods in classifying non-flood and flood classes. Accordingly, the proposed floodwater detection approach, combining IWFCI and USMRF, can serve as a reliable tool for detecting flooded areas in SAR data.
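The DWT-domain fusion of a mean-ratio and a log-ratio index can be sketched with PyWavelets. The specific fusion rule below (averaged approximation coefficients, log-ratio detail coefficients) and the box-filter mean-ratio are placeholder choices, not the published IWFCI:

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def wavelet_fused_change_index(pre, post, wavelet="db2", eps=1e-6):
    """Fuse log-ratio and mean-ratio change indices in the DWT domain."""
    log_ratio = np.abs(np.log((post + eps) / (pre + eps)))
    m_pre, m_post = uniform_filter(pre, 3), uniform_filter(post, 3)
    mean_ratio = 1.0 - np.minimum(m_pre, m_post) / (np.maximum(m_pre, m_post) + eps)

    cA_lr, details_lr = pywt.dwt2(log_ratio, wavelet)
    cA_mr, _ = pywt.dwt2(mean_ratio, wavelet)
    # Assumed rule: average the approximations, keep log-ratio details.
    fused = pywt.idwt2(((cA_lr + cA_mr) / 2.0, details_lr), wavelet)
    return fused[: pre.shape[0], : pre.shape[1]]

pre_flood = np.random.rand(128, 128)
co_flood = pre_flood + 0.5 * (np.random.rand(128, 128) > 0.9)  # change patches
change_index = wavelet_fused_change_index(pre_flood, co_flood)
```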

24 pages, 534 KB  
Article
Inference for Two-Parameter Birnbaum–Saunders Distribution Based on Type-II Censored Data with Application to the Fatigue Life of Aluminum Coupon Cuts
by Omar M. Bdair
Mathematics 2025, 13(4), 590; https://doi.org/10.3390/math13040590 - 11 Feb 2025
Cited by 4 | Viewed by 1248
Abstract
This study addresses the problem of parameter estimation and prediction for type-II censored data from the two-parameter Birnbaum–Saunders (BS) distribution. The BS distribution is commonly used in reliability analysis, particularly in modeling fatigue life. Accurate estimation and prediction are crucial in many fields where censored data frequently appear, such as material science, medical studies, and industrial applications. This paper presents both frequentist and Bayesian approaches to estimating the shape and scale parameters of the BS distribution, along with the prediction of unobserved failure times. Random data are generated from the BS distribution under type-II censoring, where a pre-specified number of failures (m) is observed. The generated data are used to compute maximum likelihood estimates (MLEs) and Bayesian estimates and to evaluate their performance. The Bayesian method employs Markov chain Monte Carlo (MCMC) sampling for point predictions and credible intervals. We apply the methods both to datasets generated under type-II censoring and to real-world data on the fatigue life of 6061-T6 aluminum coupons. Although the two methods yield similar parameter estimates, the Bayesian approach offers more flexible and reliable prediction intervals. Extensive R code illustrates the practical application of these methods. Our findings confirm the advantages of Bayesian inference in handling censored data, especially when prior information is available for estimation. This work not only supports the theoretical understanding of the BS distribution under type-II censoring but also provides practical tools for analyzing real data in reliability and survival studies. Future research will consider extensions of these methods to the multi-sample progressive censoring model with larger datasets and the integration of degradation models commonly encountered in industrial applications.
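The BS likelihood is straightforward to write down from the standard definition F(t) = Φ((√(t/β) − √(β/t))/α). The paper works in R; the sketch below uses Python and handles only complete samples (type-II censoring would add a survival term log(1 − F(t₍ₘ₎)) for each of the n − m unobserved units):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def bs_logpdf(t, alpha, beta):
    """Log-density of the Birnbaum-Saunders distribution."""
    z = (np.sqrt(t / beta) - np.sqrt(beta / t)) / alpha
    jac = (np.sqrt(t / beta) + np.sqrt(beta / t)) / (2 * alpha * t)  # dz/dt
    return norm.logpdf(z) + np.log(jac)

def bs_mle(t):
    """Complete-sample MLE; parameters optimized on the log scale so the
    search stays in alpha, beta > 0."""
    nll = lambda p: -np.sum(bs_logpdf(t, np.exp(p[0]), np.exp(p[1])))
    res = minimize(nll, x0=[0.0, np.log(np.median(t))], method="Nelder-Mead")
    return np.exp(res.x)                    # (alpha_hat, beta_hat)

# Simulate BS data via the standard normal representation:
# T = beta * (alpha*Z/2 + sqrt((alpha*Z/2)^2 + 1))^2, Z ~ N(0, 1).
rng = np.random.default_rng(0)
z = rng.normal(size=500)
alpha, beta = 0.3, 100.0
t = beta * (alpha * z / 2 + np.sqrt((alpha * z / 2) ** 2 + 1)) ** 2
print(bs_mle(t))
```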

23 pages, 29165 KB  
Article
Parallax-Tolerant Weakly-Supervised Pixel-Wise Deep Color Correction for Image Stitching of Pinhole Camera Arrays
by Yanzheng Zhang, Kun Gao, Zhijia Yang, Chenrui Li, Mingfeng Cai, Yuexin Tian, Haobo Cheng and Zhenyu Zhu
Sensors 2025, 25(3), 732; https://doi.org/10.3390/s25030732 - 25 Jan 2025
Viewed by 1061
Abstract
Camera arrays typically use image-stitching algorithms to generate wide field-of-view panoramas, but parallax and color differences caused by varying viewing angles often produce noticeable artifacts in the stitching result. Existing solutions address only specific color-difference issues and are ineffective for pinhole images with parallax. To overcome these limitations, we propose a parallax-tolerant, weakly supervised, pixel-wise deep color correction framework for image stitching of pinhole camera arrays. The framework consists of two stages. In the first stage, based on the differences between high-dimensional feature vectors extracted by a convolutional module, a parallax-tolerant color correction network with dynamic loss weights adaptively compensates for color differences in overlapping regions. In the second stage, we introduce a gradient-based Markov random field inference strategy for the correction coefficients of non-overlapping regions, harmonizing them with the overlapping regions. Additionally, we propose an evaluation metric, Color Differences Across the Seam, to quantitatively measure the naturalness of transitions across the composition seam. Comparative experiments on popular datasets and authentic images demonstrate that our approach outperforms existing solutions in both qualitative and quantitative evaluations, effectively eliminating visible artifacts and producing natural-looking composite images.
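One plausible reading of a seam color-difference metric is the mean absolute RGB gap between pixel strips on either side of the composition seam; the definition below is hypothetical and the paper's exact formula may differ:

```python
import numpy as np

def color_diff_across_seam(panorama, seam_cols, width=2):
    """Hypothetical seam metric: mean absolute RGB difference between the
    strips immediately left and right of the seam. seam_cols[i] gives the
    seam's column index in row i; lower values mean a smoother transition."""
    h = panorama.shape[0]
    diffs = []
    for i in range(h):
        c = seam_cols[i]
        left = panorama[i, max(c - width, 0):c].reshape(-1, 3)
        right = panorama[i, c:c + width].reshape(-1, 3)
        if len(left) and len(right):
            diffs.append(np.abs(left.mean(axis=0) - right.mean(axis=0)).mean())
    return float(np.mean(diffs))

pano = np.random.rand(100, 200, 3)
seam = np.full(100, 100)            # a vertical seam at column 100
print(color_diff_across_seam(pano, seam))
```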
(This article belongs to the Section Sensing and Imaging)
