Search Results (37)

Search Parameters:
Keywords = MCMC sampler

24 pages, 1391 KB  
Article
Analysis and Forecasting of Cryptocurrency Markets Using Bayesian and LSTM-Based Deep Learning Models
by Bidesh Biswas Biki, Makoto Sakamoto, Amane Takei, Md. Jubirul Alam, Md. Riajuliislam and Showaibuzzaman Showaibuzzaman
Informatics 2025, 12(3), 87; https://doi.org/10.3390/informatics12030087 (registering DOI) - 30 Aug 2025
Abstract
The rapid rise in cryptocurrency prices has intensified the need for robust forecasting models that can capture their irregular and volatile patterns. This study aims to forecast Bitcoin prices over a 15-day horizon by evaluating and comparing two distinct predictive modeling approaches: the Bayesian state-space model and Long Short-Term Memory (LSTM) neural networks. Historical price data from January 2024 to April 2025 are used for model training and testing. The Bayesian model provided probabilistic insights, achieving a Mean Squared Error (MSE) of 0.0000 and a Mean Absolute Error (MAE) of 0.0026 on training data, and an MSE of 0.0013 and MAE of 0.0307 on testing data. The LSTM model, in turn, captured temporal dependencies and performed strongly on training data, achieving an MSE of 0.0004, an MAE of 0.0160, an RMSE of 0.0212, and an R² of 0.9924; on testing data it achieved an MSE of 0.0007 with an R² of 0.3505. These results indicate that while the LSTM model excels in training performance, the Bayesian model offers better interpretability and lower error margins in testing, highlighting the trade-off between model accuracy and probabilistic forecasting in cryptocurrency markets. Full article
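The abstract does not specify the form of the Bayesian state-space model, so the following is only an illustrative sketch of the general idea: a local-level (random-walk-plus-noise) model filtered with Kalman recursions. The function names, the noise variances `q` and `r`, and the toy price series are all assumptions, not details from the paper.

```python
# Minimal local-level state-space filter:
#   level_t = level_{t-1} + w_t   (process noise variance q)
#   y_t     = level_t + v_t       (observation noise variance r)
# q and r are illustrative values, not the paper's settings.

def local_level_filter(prices, q=0.1, r=1.0):
    """Return one-step-ahead predictions for a price series."""
    mean, var = prices[0], 1.0            # initial state estimate
    one_step_preds = []
    for y in prices:
        # predict step: random-walk dynamics only inflate the variance
        pred_mean, pred_var = mean, var + q
        one_step_preds.append(pred_mean)
        # update step: fold in the new observation via the Kalman gain
        k = pred_var / (pred_var + r)
        mean = pred_mean + k * (y - pred_mean)
        var = (1 - k) * pred_var
    return one_step_preds

def mae(actual, preds):
    """Mean Absolute Error, one of the metrics reported above."""
    return sum(abs(a - p) for a, p in zip(actual, preds)) / len(actual)
```

On a trending series the filtered predictions track the level with a slight lag, which is the behavior such probabilistic baselines trade against the LSTM's fitted accuracy.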
18 pages, 774 KB  
Article
Bayesian Inertia Estimation via Parallel MCMC Hammer in Power Systems
by Weidong Zhong, Chun Li, Minghua Chu, Yuanhong Che, Shuyang Zhou, Zhi Wu and Kai Liu
Energies 2025, 18(15), 3905; https://doi.org/10.3390/en18153905 - 22 Jul 2025
Viewed by 223
Abstract
The stability of modern power systems has become critically dependent on precise inertia estimation of synchronous generators, particularly as renewable energy integration fundamentally transforms grid dynamics. Increasing penetration of converter-interfaced renewable resources reduces system inertia, heightening the grid’s susceptibility to transient disturbances and creating significant technical challenges in maintaining operational reliability. This paper addresses these challenges through a novel Bayesian inference framework that synergistically integrates PMU data with an advanced MCMC sampling technique, specifically employing the Affine-Invariant Ensemble Sampler. The proposed methodology establishes a probabilistic estimation paradigm that systematically combines prior engineering knowledge with real-time measurements, while the Affine-Invariant Ensemble Sampler mechanism overcomes high-dimensional computational barriers through its unique ensemble-based exploration strategy featuring stretch moves and parallel walker coordination. The framework’s ability to provide full posterior distributions of inertia parameters, rather than single-point estimates, supports stability assessment in renewable-dominated grids. Simulation results on the IEEE 39-bus and 68-bus benchmark systems validate the effectiveness and scalability of the proposed method, with inertia estimation errors consistently maintained below 1% across all generators. Moreover, the parallelized implementation of the algorithm significantly outperforms the conventional M-H method in computational efficiency. Specifically, the proposed approach reduces execution time by approximately 52% in the 39-bus system and by 57% in the 68-bus system, demonstrating its suitability for real-time and large-scale power system applications. Full article
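The stretch move at the core of the Affine-Invariant Ensemble Sampler (Goodman and Weare's method, popularized by the emcee library) fits in a few lines of plain Python. This is a serial toy version on a standard 2-D Gaussian target, not the paper's parallelized, PMU-driven implementation; the scale parameter `a=2.0` is the conventional default.

```python
import math
import random

def stretch_move_step(walkers, log_prob, a=2.0, rng=random):
    """One sweep of the affine-invariant 'stretch move': each walker is
    proposed along the line through itself and a randomly chosen
    complementary walker, with stretch factor z ~ g(z) on [1/a, a]."""
    n, dim = len(walkers), len(walkers[0])
    for i in range(n):
        j = rng.randrange(n - 1)
        if j >= i:
            j += 1                                   # pick j != i
        z = ((a - 1.0) * rng.random() + 1.0) ** 2 / a  # z ~ g(z) ∝ 1/sqrt(z)
        proposal = [walkers[j][d] + z * (walkers[i][d] - walkers[j][d])
                    for d in range(dim)]
        # acceptance includes the z^(dim-1) volume factor of the move
        log_accept = ((dim - 1) * math.log(z)
                      + log_prob(proposal) - log_prob(walkers[i]))
        if math.log(rng.random()) < log_accept:
            walkers[i] = proposal
    return walkers

def log_gauss(x):
    """Toy target: standard 2-D Gaussian log-density (up to a constant)."""
    return -0.5 * (x[0] ** 2 + x[1] ** 2)
```

Because proposals are affine transformations of walker pairs, the sampler is invariant to linear correlations in the target, which is what lets it cope with the high-dimensional posteriors the abstract mentions.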

12 pages, 308 KB  
Article
Projected Gradient Descent Method for Tropical Principal Component Analysis over Tree Space
by Ruriko Yoshida
Mathematics 2025, 13(11), 1776; https://doi.org/10.3390/math13111776 - 27 May 2025
Viewed by 415
Abstract
Tropical Principal Component Analysis (PCA) is an analogue of classical PCA in the setting of tropical geometry, and it has been applied to visualize a set of gene trees over a space of phylogenetic trees, which is a union of lower-dimensional polyhedral cones in a Euclidean space of dimension m(m−1)/2, where m is the number of leaves. In this paper, we introduce a projected gradient descent method to estimate the tropical principal polytope over the space of phylogenetic trees, and we apply it to an Apicomplexa dataset. With computational experiments against Markov Chain Monte Carlo (MCMC) samplers, we show that our projected gradient descent method yields a lower sum of tropical distances between observations and their projections onto the estimated best-fit tropical polytope, compared with the MCMC-based approach. Full article
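For readers unfamiliar with the tropical setting, the tropical metric and the standard projection formula onto a tropically spanned polytope (due to Develin and Sturmfels) are short to state in code. The sketch below only illustrates the objective being compared, the sum of tropical distances to projections; it is not the paper's projected gradient descent implementation, and the vertex sets are toy examples.

```python
def trop_dist(u, v):
    """Tropical (generalized Hilbert projective) metric:
    max_i(u_i - v_i) - min_i(u_i - v_i). Points are taken modulo
    adding a constant to all coordinates."""
    diffs = [a - b for a, b in zip(u, v)]
    return max(diffs) - min(diffs)

def trop_project(x, vertices):
    """Project x onto the tropical polytope spanned by `vertices`:
    pi(x)_i = max_j (lam_j + v_j[i]) with lam_j = min_i (x_i - v_j[i])."""
    lam = [min(xi - vi for xi, vi in zip(x, v)) for v in vertices]
    return [max(l + v[i] for l, v in zip(lam, vertices))
            for i in range(len(x))]

def objective(points, vertices):
    """Sum of tropical distances between observations and their
    projections -- the quantity both methods try to minimize."""
    return sum(trop_dist(p, trop_project(p, vertices)) for p in points)
```

A point already on the tropical polytope projects to itself (objective contribution zero), so the objective directly measures how well the estimated polytope fits the observed trees.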

16 pages, 808 KB  
Article
Modern Bayesian Sampling Methods for Cosmological Inference: A Comparative Study
by Denitsa Staicova
Universe 2025, 11(2), 68; https://doi.org/10.3390/universe11020068 - 17 Feb 2025
Cited by 1 | Viewed by 558
Abstract
We present a comprehensive comparison of different Markov chain Monte Carlo (MCMC) sampling methods, evaluating their performance on both standard test problems and cosmological parameter estimation. Our analysis includes traditional Metropolis–Hastings MCMC, Hamiltonian Monte Carlo (HMC), slice sampling, nested sampling as implemented in dynesty, and PolyChord. We examine samplers through multiple metrics including runtime, memory usage, effective sample size, and parameter accuracy, testing their scaling with dimension and response to different probability distributions. While all samplers perform well with simple Gaussian distributions, we find that HMC and nested sampling show advantages for more complex distributions typical of cosmological problems. Traditional MCMC and slice sampling become less efficient in higher dimensions, while nested methods maintain accuracy but at higher computational cost. In cosmological applications using BAO data, we observe similar patterns, with particular challenges arising from parameter degeneracies and poorly constrained parameters. Full article
(This article belongs to the Section Cosmology)
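As a point of reference for the comparison, the simplest of the samplers studied, random-walk Metropolis–Hastings, fits in a dozen lines. This is a generic 1-D sketch with an arbitrary Gaussian proposal scale, not code from the study:

```python
import math
import random

def metropolis_hastings(log_prob, x0, n_steps, step=1.0, rng=random):
    """Random-walk Metropolis-Hastings on a 1-D target.
    Returns the chain and the empirical acceptance rate."""
    chain, x, lp = [], x0, log_prob(x0)
    accepted = 0
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)          # symmetric Gaussian proposal
        lp_prop = log_prob(prop)
        if math.log(rng.random()) < lp_prop - lp:  # MH acceptance rule
            x, lp = prop, lp_prop
            accepted += 1
        chain.append(x)                          # rejected moves repeat x
    return chain, accepted / n_steps
```

The random-walk proposal is exactly why, as the abstract notes, this sampler loses efficiency in higher dimensions: the step size must shrink with dimension to keep acceptance reasonable, inflating autocorrelation relative to gradient-based HMC.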

26 pages, 1149 KB  
Article
A Massively Parallel SMC Sampler for Decision Trees
by Efthyvoulos Drousiotis, Alessandro Varsi, Alexander M. Phillips, Simon Maskell and Paul G. Spirakis
Algorithms 2025, 18(1), 14; https://doi.org/10.3390/a18010014 - 2 Jan 2025
Viewed by 1085
Abstract
Bayesian approaches to decision trees (DTs) using Markov Chain Monte Carlo (MCMC) samplers have recently demonstrated state-of-the-art accuracy when training DTs to solve classification problems. Despite the competitive classification accuracy, MCMC requires a potentially long runtime to converge. A widely used approach to reducing an algorithm’s runtime is to employ modern multi-core computer architectures, either with shared memory (SM) or distributed memory (DM), and use parallel computing to accelerate the algorithm. However, the inherent sequential nature of MCMC makes it unsuitable for parallel implementation unless the accuracy is sacrificed. This issue is particularly evident in DM architectures, which normally provide access to larger numbers of cores than SM. Sequential Monte Carlo (SMC) samplers are a parallel alternative to MCMC, which do not trade off accuracy for parallelism. However, the performance of SMC samplers in the context of DTs is underexplored, and the parallelization is complicated by the challenges in parallelizing its bottleneck, namely redistribution, especially on variable-size data types such as DTs. In this work, we study the problem of parallelizing SMC in the context of DTs both on SM and DM. On both memory architectures, we show that the proposed parallelization strategies achieve asymptotically optimal O(log₂ N) time complexity. Numerical results are presented for a 32-core SM machine and a 256-core DM cluster. For both computer architectures, the experimental results show that our approach has comparable or better accuracy than MCMC but runs up to 51 times faster on SM and 640 times faster on DM. In this paper, we share the GitHub link to the source code. Full article
(This article belongs to the Collection Parallel and Distributed Computing: Algorithms and Applications)
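The redistribution (resampling) bottleneck discussed above is, in its sequential form, a single O(N) pass over the cumulative weights. The sketch below shows the standard systematic resampling variant for fixed-size particles; the paper's contribution, parallelizing this step and handling variable-size decision trees, is not reproduced here.

```python
import random

def systematic_resample(weights, rng=random):
    """Systematic resampling: given (possibly unnormalized) particle
    weights, return how many copies of each particle survive. This is
    the sequential O(N) redistribution step."""
    n = len(weights)
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:                    # cumulative normalized weights
        acc += w / total
        cdf.append(acc)
    u = rng.random() / n                 # one uniform offset in [0, 1/n)
    counts, i = [0] * n, 0
    for k in range(n):
        point = u + k / n                # n evenly spaced points
        while i < n - 1 and cdf[i] < point:
            i += 1
        counts[i] += 1
    return counts
```

With equal weights every particle survives exactly once, while a dominant weight captures almost all the slots; the low variance of this scheme is why it is a common default in SMC implementations.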

27 pages, 699 KB  
Article
Estimating the Lifetime Parameters of the Odd-Generalized-Exponential–Inverse-Weibull Distribution Using Progressive First-Failure Censoring: A Methodology with an Application
by Mahmoud M. Ramadan, Rashad M. EL-Sagheer and Amel Abd-El-Monem
Axioms 2024, 13(12), 822; https://doi.org/10.3390/axioms13120822 - 25 Nov 2024
Cited by 2 | Viewed by 1039
Abstract
This paper investigates statistical methods for estimating unknown lifetime parameters using a progressive first-failure censoring dataset. The failure mode’s lifetime distribution is modeled by the odd-generalized-exponential–inverse-Weibull distribution. Maximum-likelihood estimators for the model parameters, including the survival, hazard, and inverse hazard rate functions, are obtained, though they lack closed-form expressions. The Newton–Raphson method is used to compute these estimates. Confidence intervals for the parameters are approximated via the normal distribution of the maximum-likelihood estimation. The Fisher information matrix is derived using the missing information principle, and the delta method is applied to approximate the confidence intervals for the survival, hazard rate, and inverse hazard rate functions. Bayes estimators are calculated under the squared-error, linear-exponential, and general entropy loss functions, utilizing independent gamma distributions as informative priors. Markov-chain Monte Carlo sampling provides the highest-posterior-density credible intervals and Bayesian point estimates for the parameters and reliability characteristics. This study evaluates these methods through Monte Carlo simulations, comparing Bayes and maximum-likelihood estimates based on mean squared errors for point estimates, average interval widths, and coverage probabilities for interval estimators. A real dataset is also analyzed to illustrate the proposed methods. Full article
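The highest-posterior-density credible intervals mentioned above are typically computed from MCMC draws by the shortest-window method over sorted samples, which is valid for unimodal posteriors. A minimal sketch, independent of the paper's specific model:

```python
import math

def hpd_interval(samples, mass=0.95):
    """Highest-posterior-density interval from MCMC draws: the shortest
    window of consecutive order statistics containing `mass` of the
    samples (shortest-interval method for a unimodal posterior)."""
    s = sorted(samples)
    n = len(s)
    k = max(1, math.ceil(mass * n))      # draws the window must contain
    # slide a window of k consecutive sorted draws; keep the shortest
    best = min(range(n - k + 1), key=lambda i: s[i + k - 1] - s[i])
    return s[best], s[best + k - 1]
```

Unlike an equal-tailed interval, the HPD window shifts toward the high-density region of a skewed posterior, which is why it is the usual companion to Bayes point estimates in lifetime analyses like this one.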

23 pages, 14253 KB  
Article
Optimal Estimation of Reliability Parameters for Modified Frechet-Exponential Distribution Using Progressive Type-II Censored Samples with Mechanical and Medical Data
by Dina A. Ramadan, Ahmed T. Farhat, M. E. Bakr, Oluwafemi Samson Balogun and Mustafa M. Hasaballah
Symmetry 2024, 16(11), 1476; https://doi.org/10.3390/sym16111476 - 6 Nov 2024
Cited by 1 | Viewed by 1363
Abstract
The aim of this research is to estimate the parameters of the modified Frechet-exponential (MFE) distribution using different methods when applied to progressive type-II censored samples. These methods include the maximum likelihood technique and the Bayesian approach, which are used to estimate the parameter values and to calculate the reliability and failure functions at time t. The approximate confidence intervals (ACIs) and credible intervals (CRIs) are derived for these parameters. Two bootstrap techniques of parametric type are provided to compute the bootstrap confidence intervals. Both symmetric loss functions such as the squared error loss (SEL) and asymmetric loss functions such as the linear-exponential (LINEX) loss are used in the Bayesian method to obtain the estimates. A Markov Chain Monte Carlo (MCMC) technique based on the Metropolis–Hastings sampler is utilized to obtain the Bayes estimates of the unknown parameters. Two actual datasets are utilized to examine the various progressive schemes and different estimation methods considered in this paper. Additionally, a simulation study is performed to compare the schemes and estimation techniques. Full article
(This article belongs to the Section Mathematics)

17 pages, 7430 KB  
Article
Bayesian Inference and Condition Assessment Based on the Deflection of Aging Reinforced Concrete Hollow Slab Bridges
by Xuliang Yan, Siyi Jia, Shuyang Jia, Jian Gao and Jiayu Peng
Buildings 2024, 14(9), 2920; https://doi.org/10.3390/buildings14092920 - 15 Sep 2024
Viewed by 987
Abstract
This paper presents a Bayesian inference framework for updating the structural rigidity ratio of aging hollow slab RC bridges using deflection measurements. The framework models the structural rigidity ratio as a stochastic field along the hollow RC slabs, using the Karhunen–Loeve (KL) transform to capture spatial correlation and variation. Bayesian inference is then applied using deflection data from static loading tests, supported by a finite element model (FEM) and a Kriging surrogate model to enhance computational efficiency. The posterior distribution of the structural rigidity ratio is derived using a Markov chain Monte Carlo (MCMC) sampler. The proposed method was tested on an RC bridge with hollow slabs, using deflection measurements taken before and after reinforcement. The Bayesian updates indicated increased structural rigidity ratios after reinforcement, validating the effectiveness of the reinforcement. The deflection predictions from the updated models closely matched the measurements, with the 95% confidence bounds encompassing most of the data. This demonstrates the method’s validity and robustness in capturing the structural improvements post-reinforcement. Full article
(This article belongs to the Section Building Structures)

27 pages, 5652 KB  
Article
Robust Inference of Dynamic Covariance Using Wishart Processes and Sequential Monte Carlo
by Hester Huijsdens, David Leeftink, Linda Geerligs and Max Hinne
Entropy 2024, 26(8), 695; https://doi.org/10.3390/e26080695 - 16 Aug 2024
Cited by 1 | Viewed by 1600
Abstract
Several disciplines, such as econometrics, neuroscience, and computational psychology, study the dynamic interactions between variables over time. A Bayesian nonparametric model known as the Wishart process has been shown to be effective in this situation, but its inference remains highly challenging. In this work, we introduce a Sequential Monte Carlo (SMC) sampler for the Wishart process, and show how it compares to conventional inference approaches, namely MCMC and variational inference. Using simulations, we show that SMC sampling results in the most robust estimates and out-of-sample predictions of dynamic covariance. SMC especially outperforms the alternative approaches when using composite covariance functions with correlated parameters. We further demonstrate the practical applicability of our proposed approach on a dataset of clinical depression (n=1), and show how using an accurate representation of the posterior distribution can be used to test for dynamics in covariance. Full article

26 pages, 13370 KB  
Article
Statistical Analysis of Type-II Generalized Progressively Hybrid Alpha-PIE Censored Data and Applications in Electronic Tubes and Vinyl Chloride
by Ahmed Elshahhat, Osama E. Abo-Kasem and Heba S. Mohammed
Axioms 2023, 12(6), 601; https://doi.org/10.3390/axioms12060601 - 16 Jun 2023
Cited by 5 | Viewed by 1504
Abstract
A new Type-II generalized progressively hybrid censoring strategy, in which the experiment is ensured to stop at a specified time, is explored when the lifetime model of the test subjects follows a two-parameter alpha-power inverted exponential (Alpha-PIE) distribution. Alpha-PIE’s parameters and reliability indices, such as reliability and hazard rate functions, are estimated via maximum likelihood and Bayes estimation methodologies in the presence of the proposed censored data. The estimated confidence intervals of the unknown quantities are created using the normal approximation of the acquired classical estimators. The Bayesian estimators are also produced using independent gamma density priors under symmetrical (squared-error) loss. The Bayes estimators and their associated highest posterior density intervals cannot be calculated theoretically since the joint likelihood function is derived in a complicated form, but they can potentially be assessed using Markov chain Monte Carlo (MCMC) algorithms. We next go through four optimality criteria for identifying the best progressive design. The effectiveness of the suggested estimation procedures is assessed using Monte Carlo comparisons, and certain recommendations are offered. Ultimately, two different applications, one focused on the failure times of electronic tubes and the other on vinyl chloride, are analyzed to illustrate the effectiveness of the proposed techniques that may be employed in real-world scenarios. Full article
(This article belongs to the Special Issue Stochastic and Statistical Analysis in Natural Sciences)

21 pages, 7864 KB  
Article
Variational Hybrid Monte Carlo for Efficient Multi-Modal Data Sampling
by Shiliang Sun, Jing Zhao, Minghao Gu and Shanhu Wang
Entropy 2023, 25(4), 560; https://doi.org/10.3390/e25040560 - 24 Mar 2023
Cited by 4 | Viewed by 2421
Abstract
The Hamiltonian Monte Carlo (HMC) sampling algorithm exploits Hamiltonian dynamics to construct efficient Markov Chain Monte Carlo (MCMC), which has become increasingly popular in machine learning and statistics. Since HMC uses the gradient information of the target distribution, it can explore the state space much more efficiently than random-walk proposals, but may suffer from high autocorrelation. In this paper, we propose Langevin Hamiltonian Monte Carlo (LHMC) to reduce the autocorrelation of the samples. Probabilistic inference involving multi-modal distributions is very difficult for dynamics-based MCMC samplers, which are easily trapped in a single mode far away from the others. To tackle this issue, we further propose a variational hybrid Monte Carlo (VHMC) method, which uses a variational distribution to explore the phase space and find new modes, and is thus capable of sampling from multi-modal distributions effectively. A formal proof is provided showing that the proposed method converges to the target distribution. Both synthetic and real datasets are used to evaluate its properties and performance. The experimental results verify the theory and show superior performance in multi-modal sampling. Full article
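The HMC building block that LHMC and VHMC extend is a leapfrog trajectory followed by a Metropolis correction. The following 1-D sketch shows plain HMC only; the proposed Langevin and variational modifications are not reproduced here, and the step size and trajectory length are arbitrary choices for a toy Gaussian target.

```python
import math
import random

def hmc_step(x, log_prob, grad_log_prob, step=0.2, n_leap=10, rng=random):
    """One Hamiltonian Monte Carlo transition on a 1-D target: resample
    momentum, integrate Hamiltonian dynamics with the leapfrog scheme,
    then accept or reject on the change in the Hamiltonian."""
    p = rng.gauss(0.0, 1.0)                      # fresh momentum
    x_new, p_new = x, p
    p_new += 0.5 * step * grad_log_prob(x_new)   # initial half-step
    for _ in range(n_leap - 1):
        x_new += step * p_new                    # full position step
        p_new += step * grad_log_prob(x_new)     # full momentum step
    x_new += step * p_new
    p_new += 0.5 * step * grad_log_prob(x_new)   # final half-step
    # Hamiltonian = potential (-log_prob) + kinetic energy
    h_old = -log_prob(x) + 0.5 * p * p
    h_new = -log_prob(x_new) + 0.5 * p_new * p_new
    if math.log(rng.random()) < h_old - h_new:
        return x_new                             # accept
    return x                                     # reject: stay put
```

Because the leapfrog integrator nearly conserves the Hamiltonian, acceptance stays high even for long trajectories, giving the gradient-driven exploration that the abstract contrasts with random-walk proposals.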

28 pages, 2172 KB  
Article
Survival Analysis and Applications of Weighted NH Parameters Using Progressively Censored Data
by Ahmed Elshahhat and Heba S. Mohammed
Symmetry 2023, 15(3), 735; https://doi.org/10.3390/sym15030735 - 15 Mar 2023
Cited by 2 | Viewed by 1633
Abstract
A new weighted Nadarajah–Haghighi (WNH) distribution, as an alternative competitor model to gamma, standard half-logistic, generalized-exponential, Weibull, and other distributions, is considered. This paper explores both maximum likelihood and Bayesian estimation approaches for estimating the parameters, reliability, and hazard rate functions of the WNH distribution when the sample type is Type-II progressive censored order statistics. In the classical interval setup, both asymptotic and bootstrap intervals of each unknown parameter are constructed. Using independent gamma priors and symmetric squared-error loss, the Bayes estimators cannot be obtained theoretically. Thus, two approximation techniques, namely: Lindley and Markov-Chain Monte Carlo (MCMC) methods, are used. From MCMC variates, the Bayes credible and highest posterior density intervals of all unknown parameters are also created. Extensive Monte Carlo simulations are implemented to compare the performance of the proposed methodologies. Numerical evaluations showed that the estimates developed by the MCMC sampler performed better than the Lindley estimates, and both behaved significantly better than the frequentist estimates. To choose the optimal censoring scheme, several optimality criteria are considered. Three engineering applications, including vehicle fatalities, electronic devices, and electronic components data sets, are provided. These applications demonstrated how the proposed methodologies could be applied in real practice and showed that the proposed model provides a satisfactory fit compared to three new weighted models, namely: weighted exponential, weighted Gompertz, and new weighted Lindley distributions. Full article
(This article belongs to the Special Issue Mathematical Models and Methods in Various Sciences)

19 pages, 337 KB  
Article
A Bayesian Variable Selection Method for Spatial Autoregressive Quantile Models
by Yuanying Zhao and Dengke Xu
Mathematics 2023, 11(4), 987; https://doi.org/10.3390/math11040987 - 15 Feb 2023
Cited by 3 | Viewed by 2100
Abstract
In this paper, a Bayesian variable selection method for spatial autoregressive (SAR) quantile models is proposed on the basis of a spike-and-slab prior for the regression parameters. The SAR quantile models, which are more general than SAR models and quantile regression models, are specified by adopting the asymmetric Laplace distribution for the error term in the classical SAR models. The proposed approach can simultaneously perform robust parameter estimation and variable selection in the context of SAR quantile models. Bayesian statistical inferences are implemented by a detailed Markov chain Monte Carlo (MCMC) procedure that combines Gibbs samplers with a probability integral transformation (PIT) algorithm. In the end, empirical numerical examples including several simulation studies and a Boston housing price data analysis are employed to demonstrate the newly developed methodologies. Full article
(This article belongs to the Special Issue Recent Advances in Computational Statistics)
23 pages, 489 KB  
Article
Bayesian Logistic Regression Model for Sub-Areas
by Lu Chen and Balgobin Nandram
Stats 2023, 6(1), 209-231; https://doi.org/10.3390/stats6010013 - 29 Jan 2023
Cited by 1 | Viewed by 2223
Abstract
Many population-based surveys have binary responses from a large number of individuals in each household within small areas. One example is the Nepal Living Standards Survey (NLSS II), in which health status binary data (good versus poor) for each individual from sampled households (sub-areas) are available in the sampled wards (small areas). To make an inference for the finite population proportion of individuals in each household, we use the sub-area logistic regression model with reliable auxiliary information. The contribution of this model is twofold. First, we extend an area-level model to a sub-area level model. Second, because there are numerous sub-areas, standard Markov chain Monte Carlo (MCMC) methods to find the joint posterior density are very time-consuming. Therefore, we provide a sampling-based method, the integrated nested normal approximation (INNA), which permits fast computation. Our main goal is to describe this hierarchical Bayesian logistic regression model and to show that the computation is much faster than the exact MCMC method and also reasonably accurate. The performance of our method is studied by using NLSS II data. Our model can borrow strength from both areas and sub-areas to obtain more efficient and precise estimates. The hierarchical structure of our model captures the variation in the binary data reasonably well. Full article

14 pages, 797 KB  
Article
High-Resolution Through-the-Wall Radar Imaging with Exploitation of Target Structure
by Chendong Xu and Qisong Wu
Appl. Sci. 2022, 12(22), 11684; https://doi.org/10.3390/app122211684 - 17 Nov 2022
Cited by 3 | Viewed by 1946
Abstract
It is quite challenging for through-the-wall radar imaging (TWRI) to achieve high-resolution ghost-free imaging with limited measurements in an indoor multipath scenario. In this paper, a novel high-resolution TWRI algorithm with the exploitation of the target clustered structure in a hierarchical Bayesian framework is proposed. More specifically, an extended spike-and-slab clustered prior is imposed to statistically encourage cluster formations in both the downrange and crossrange domains of the target region, and a generative model of the proposed approach is provided. Then, a Markov Chain Monte Carlo (MCMC) sampler is used to implement the posterior inference. Compared to other state-of-the-art algorithms, the proposed nonparametric Bayesian algorithm can preserve underlying target clustered properties and effectively suppress isolated spurious scatterers without any prior information on the targets themselves, such as sizes, shapes, and numbers. Full article
(This article belongs to the Special Issue Through-the-Wall Radar Imaging Based on Deep Learning)
