Search Results (14)

Search Parameters:
Keywords = posteriori gradients

26 pages, 12288 KiB  
Article
Bayesian Distributed Target Detectors in Compound-Gaussian Clutter Against Subspace Interference with Limited Training Data
by Kun Xing, Zhiwen Cao, Weijian Liu, Ning Cui, Zhiyu Wang, Zhongjun Yu and Faxin Yu
Remote Sens. 2025, 17(5), 926; https://doi.org/10.3390/rs17050926 - 5 Mar 2025
Viewed by 705
Abstract
In this article, the problem of Bayesian detection of rank-one distributed targets under subspace interference and compound-Gaussian clutter with inverse Gaussian texture is investigated. Because of clutter heterogeneity, the training data may be insufficient. To tackle this problem, the clutter speckle covariance matrix (CM) is assumed to follow the complex inverse Wishart distribution, and Bayesian theory is used to obtain an effective estimate. The target echo is assumed to have a known steering vector and unknown amplitudes across range cells, while the interference is modeled by a steering matrix that is linearly independent of the target steering vector. Using the generalized likelihood ratio test (GLRT), a Bayesian interference-canceling detector that can work in the absence of training data is derived. Moreover, five interference-canceling detectors based on the maximum a posteriori (MAP) estimate of the speckle CM are proposed with the two-step GLRT and the Rao, Wald, Gradient, and Durbin tests. Experiments with simulated and measured sea clutter data indicate that the Bayesian interference-canceling detectors outperform the competing detector in scenarios with limited training data.
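The Bayesian speckle-CM estimate at the heart of these detectors can be illustrated with a minimal sketch. It assumes a complex inverse Wishart prior with scale matrix nu*R0 and one common normalization for the posterior mode; the paper's exact constants may differ, and all names here are hypothetical:

```python
import numpy as np

def map_speckle_cov(X, R0, nu):
    # MAP covariance under a complex inverse Wishart prior with scale nu * R0;
    # the normalizing constants follow one common convention (an assumption).
    N, K = X.shape
    S = X @ X.conj().T          # scatter matrix of the K training cells
    return (nu * R0 + S) / (nu + K + N)

rng = np.random.default_rng(3)
X = rng.standard_normal((4, 32)) + 1j * rng.standard_normal((4, 32))
R = map_speckle_cov(X, np.eye(4), nu=8.0)
```

With no training cells (K = 0) the estimate falls back to a scaled prior, which is what allows the derived detector to operate without training data.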

34 pages, 30049 KiB  
Article
Blind Infrared Remote-Sensing Image Deblurring Algorithm via Edge Composite-Gradient Feature Prior and Detail Maintenance
by Xiaohang Zhao, Mingxuan Li, Ting Nie, Chengshan Han and Liang Huang
Remote Sens. 2024, 16(24), 4697; https://doi.org/10.3390/rs16244697 - 16 Dec 2024
Viewed by 1074
Abstract
The problem of blind image deblurring remains a challenging inverse problem, due to the ill-posed nature of estimating unknown blur kernels and latent images within the Maximum A Posteriori (MAP) framework. To address this challenge, traditional methods often rely on sparse regularization priors to mitigate the uncertainty inherent in the problem. In this paper, we propose a novel blind deblurring model based on the MAP framework that leverages Composite-Gradient Feature (CGF) variations in edge regions after image blurring. This prior term is specifically designed to exploit the high sparsity of sharp edge regions in clear images, thereby effectively alleviating the ill-posedness of the problem. Unlike existing methods that focus on local gradient information, our approach focuses on the aggregation of edge regions, enabling better detection of both sharp and smoothed edges in blurred images. In the blur kernel estimation process, we enhance the accuracy of the kernel by assigning effective edge information from the blurred image to the smoothed intermediate latent image, preserving critical structural details lost during the blurring process. To further improve the edge-preserving restoration, we introduce an adaptive regularizer that outperforms traditional total variation regularization by better maintaining edge integrity in both clear and blurred images. The proposed variational model is efficiently implemented using alternating iterative techniques. Extensive numerical experiments and comparisons with state-of-the-art methods demonstrate the superior performance of our approach, highlighting its effectiveness and real-world applicability in diverse image-restoration tasks.

19 pages, 2523 KiB  
Article
Hyperspectral Image Denoising by Pixel-Wise Noise Modeling and TV-Oriented Deep Image Prior
by Lixuan Yi, Qian Zhao and Zongben Xu
Remote Sens. 2024, 16(15), 2694; https://doi.org/10.3390/rs16152694 - 23 Jul 2024
Cited by 4 | Viewed by 2290
Abstract
Model-based hyperspectral image (HSI) denoising methods have attracted continuous attention over the past decades due to their effectiveness and interpretability. In this work, we aim to advance model-based HSI denoising through a careful treatment of both the fidelity and regularization terms, or correspondingly the noise and the prior, by virtue of several recently developed techniques. Specifically, we formulate a novel unified probabilistic model for the HSI denoising task, in which the noise is assumed to be pixel-wise non-independent and identically distributed (non-i.i.d.) Gaussian, predicted by a pre-trained neural network, and the prior for the HSI is designed by incorporating the deep image prior (DIP) with total variation (TV) and spatio-spectral TV. To solve the resulting maximum a posteriori (MAP) estimation problem, we design a Monte Carlo Expectation–Maximization (MCEM) algorithm, in which the stochastic gradient Langevin dynamics (SGLD) method is used for the E-step, and the alternating direction method of multipliers (ADMM) is adopted for the optimization in the M-step. Experiments on both synthetic and real noisy HSI datasets verify the effectiveness of the proposed method.
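The SGLD sampler used for the E-step can be illustrated on a toy one-dimensional posterior. This is a sketch assuming a fixed step size and a known log-posterior gradient; the function names are hypothetical:

```python
import numpy as np

def sgld_sample(grad_log_post, theta0, step, n_steps, rng):
    # Langevin update: theta += (step/2) * grad log p(theta) + sqrt(step) * noise
    theta = theta0
    samples = np.empty(n_steps)
    for t in range(n_steps):
        theta = theta + 0.5 * step * grad_log_post(theta) \
                + np.sqrt(step) * rng.standard_normal()
        samples[t] = theta
    return samples

# Toy posterior N(2, 1): grad log p(theta) = -(theta - 2)
rng = np.random.default_rng(0)
draws = sgld_sample(lambda t: -(t - 2.0), 0.0, 0.1, 5000, rng)
```

After burn-in the draws concentrate around the posterior mean; in the paper's setting the same update runs on image variables with stochastic (mini-batched) gradients.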

23 pages, 9585 KiB  
Article
An Interpretable Deep Learning Approach for Detecting Marine Heatwaves Patterns
by Qi He, Zihang Zhu, Danfeng Zhao, Wei Song and Dongmei Huang
Appl. Sci. 2024, 14(2), 601; https://doi.org/10.3390/app14020601 - 10 Jan 2024
Cited by 5 | Viewed by 3079
Abstract
Marine heatwaves (MHWs) are a phenomenon in which the sea surface temperature is significantly higher than the historical average for a region over a period of time. They typically result from the combined effects of climate change and local meteorological conditions, and can alter marine ecosystems and increase the incidence of extreme weather events. MHWs have significant impacts on the marine environment, ecosystems, and economic livelihoods. In recent years, global warming has intensified MHWs, and research on them has rapidly developed into an important frontier. Deep learning models have demonstrated remarkable performance in predicting sea surface temperature, which is instrumental in identifying and anticipating MHWs. However, the complexity of deep learning models makes it difficult for users to understand how predictions are made, posing a challenge for scientists and decision-makers who rely on interpretable results to manage the risks associated with MHWs. In this study, we propose an interpretable model for discovering MHWs. We first input variables relevant to the occurrence of MHWs into an LSTM model and use an a posteriori explanation method called Expected Gradients to quantify how strongly different variables affect the predictions. Additionally, we decompose the LSTM model to examine the information flow within it. Our method can be used to understand which features the deep learning model focuses on and how those features affect its predictions. The experimental results provide a new perspective for understanding the causes of MHWs and demonstrate the promise of artificial-intelligence-assisted scientific discovery.
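Expected Gradients can be sketched generically: it averages the integrated-gradients integrand over baselines drawn from the data and interpolation coefficients drawn uniformly. The toy linear model below (not the paper's LSTM) makes the completeness property easy to check:

```python
import numpy as np

def expected_gradients(grad_f, x, baselines, n_samples, rng):
    # Monte Carlo estimate of E_{x'~D, a~U(0,1)}[(x - x') * grad_f(x' + a*(x - x'))]
    attr = np.zeros_like(x)
    for _ in range(n_samples):
        xp = baselines[rng.integers(len(baselines))]
        a = rng.random()
        attr += (x - xp) * grad_f(xp + a * (x - xp))
    return attr / n_samples

# Toy linear model f(z) = w @ z, so grad_f is constant.
w = np.array([1.0, -2.0, 0.5])
rng = np.random.default_rng(1)
baselines = rng.standard_normal((64, 3))
x = np.ones(3)
attr = expected_gradients(lambda z: w, x, baselines, 4000, rng)
# Completeness: attributions sum to f(x) - E[f(x')] up to Monte Carlo error.
gap = w @ x - (baselines @ w).mean()
```

For a real network, `grad_f` would come from automatic differentiation; everything else stays the same.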
(This article belongs to the Special Issue Environmental Monitoring and Analysis for Hydrology)

19 pages, 5354 KiB  
Article
Walking Trajectory Estimation Using Multi-Sensor Fusion and a Probabilistic Step Model
by Ethan Rabb and John Josiah Steckenrider
Sensors 2023, 23(14), 6494; https://doi.org/10.3390/s23146494 - 18 Jul 2023
Cited by 3 | Viewed by 1893
Abstract
This paper presents a framework for accurately and efficiently estimating a walking human's trajectory using a computationally inexpensive non-Gaussian recursive Bayesian estimator. The proposed framework fuses global and inertial measurements with predictions from a kinematically driven step model to provide robustness in localization. A maximum a posteriori-type filter is trained on typical human kinematic parameters and updated based on live measurements. Local step-size estimates are generated from inertial measurement units using the zero-velocity update (ZUPT) algorithm, while global measurements come from a wearable GPS. After each fusion event, a gradient-ascent optimizer efficiently locates the highest-likelihood estimate of the individual's location, which then triggers the next estimator iteration. The proposed estimator was compared to a state-of-the-art particle filter in several Monte Carlo simulation scenarios and was found to be comparable in accuracy and more efficient at higher resolutions. It is anticipated that the methods proposed in this work could be more useful than the traditional particle filter in general real-time estimation (beyond just personal navigation), especially when the state is many-dimensional. Applications of this research include, but are not limited to, in natura biomechanics measurement, human safety in manual fieldwork environments, and human/robot teaming.
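The fusion step described — gradient ascent to the posterior peak — can be sketched with two Gaussian position cues. This is a hypothetical, simplified setup (the paper's filter is non-Gaussian and kinematically informed):

```python
import numpy as np

def map_gradient_ascent(log_post_grad, x0, lr=0.1, n_iter=500):
    # Climb the log-posterior until (approximately) at its peak.
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = x + lr * log_post_grad(x)
    return x

# Hypothetical fusion of a GPS fix and a step-model prediction, both Gaussian.
z_gps, var_gps = np.array([3.0, 1.0]), 4.0
z_step, var_step = np.array([2.0, 2.0]), 1.0
grad = lambda x: (z_gps - x) / var_gps + (z_step - x) / var_step
x_map = map_gradient_ascent(grad, np.zeros(2))
# Analytic MAP here is the precision-weighted mean: [2.2, 1.8]
```

For Gaussian cues the peak has a closed form; the gradient-ascent route pays off precisely when the posterior is non-Gaussian, as in the paper.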
(This article belongs to the Section Navigation and Positioning)

15 pages, 3875 KiB  
Article
Physico-Chemical and Microbiological Control of the Composting Process of the Organic Fraction of Municipal Solid Waste: A Pilot-Scale Experience
by Natividad Miguel, Andrea López, Sindy D. Jojoa-Sierra, Julen Fernández, Jairo Gómez and María P. Ormad
Int. J. Environ. Res. Public Health 2022, 19(23), 15449; https://doi.org/10.3390/ijerph192315449 - 22 Nov 2022
Cited by 4 | Viewed by 1845
Abstract
The aim of this work was to carry out a pilot experiment to monitor OFMSW (organic fraction of municipal solid waste) composting processes using different types of installations (automatic reactor, aerated static pile, and turned pile). Pruning waste was used as structuring material (SM), in 1:1 and 1:2 (v:v) OFMSW:SM ratios. Monitoring was carried out through the control of physico-chemical and microbiological parameters, such as temperature, pH, humidity, Rottegrade and Solvita tests, the presence of Salmonella sp. and Escherichia coli, and total coliform and Enterococcus sp. concentrations. After the tests, it can be affirmed that all three types of installations worked correctly in terms of the monitored physico-chemical parameters, giving rise to a compost of sufficient stability and maturity to be applied to agricultural soil. In all cases the bacterial concentrations in the final compost were lower than those detected in the initial component mixture, thus complying with the requirements established in RD 506/2013 and RD 999/2017 on fertilizer products. However, it cannot be affirmed that any one of the three types of installation produces greater bacterial inactivation than the others. When composting with different types of facilities, it is of interest to optimize the irrigation and aeration systems in order to better control the process, and to study possible temperature gradients in the piles to ensure good sanitization without the risk of bacterial proliferation a posteriori. Finally, the different initial mixtures of OFMSW and SM used in this study did not have a significant influence on the functioning of the composting process or on the microbiological quality during the process. The irrigation water can provide a bacterial contribution that can lead to increases in concentration during the composting process.
This study is part of the Life-NADAPTA project (LIFE16 IPC/ES/000001), an integrated strategy for adaptation to climate change in Navarra, in which NILSA participates in the water action and collaborates in the agricultural action, whose objectives include the development of new soil amendments from different organic wastes.
(This article belongs to the Special Issue Feature Papers in Environmental Microbiology Research)

21 pages, 5582 KiB  
Article
Framework of Methodology to Assess the Link between A Posteriori Dietary Patterns and Nutritional Adequacy: Application to Pregnancy
by Foteini Tsakoumaki, Charikleia Kyrkou, Maria Fotiou, Aristea Dimitropoulou, Costas G. Biliaderis, Apostolos P. Athanasiadis, Georgios Menexes and Alexandra-Maria Michaelidou
Metabolites 2022, 12(5), 395; https://doi.org/10.3390/metabo12050395 - 27 Apr 2022
Cited by 3 | Viewed by 2997
Abstract
This study aimed to explore the nutritional profile of 608 women during the second trimester of pregnancy, in terms of nutrient patterns, dietary quality and nutritional adequacy. Dietary data were collected using a validated Mediterranean-oriented, culture-specific FFQ. Principal component analysis was performed on 18 energy-adjusted nutrients. Two main nutrient patterns, “plant-origin” (PLO) and “animal-origin” (ANO), were extracted. Six homogenous clusters (C) relative to nutrient patterns were obtained and analyzed through a multidimensional methodological approach. C1, C5 and C6 scored positively on PLO, while C1, C2 and C3 scored positively on ANO. When dietary quality was mapped on food choices and dietary indexes, C6 unveiled a group with a distinct image resembling the Mediterranean-type diet (MedDiet Score = 33.8). Although C1–C5 shared common dietary characteristics, their diet quality differed as reflected in the HEI-2010 (C1:79.7; C2:73.3; C3:70.9; C4:63.2; C5:76.6). The appraisal of nutritional adequacy mirrored a “nutritional-quality gradient”. A total of 50% of participants in C6 had almost 100% adequate magnesium intake, while 50% of participants in C4 had a probability of adequacy of ≤10%. Our methodological framework is efficient for assessing the link between a posteriori dietary patterns and nutritional adequacy during pregnancy. Given that macro- and micronutrient distributions may induce metabolic modifications of potential relevance to offspring’s health, public health strategies should be implemented.
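The principal-component extraction of nutrient patterns can be sketched with a plain SVD-based PCA; the data here are random stand-ins, not the study's FFQ intakes:

```python
import numpy as np

def nutrient_patterns(X, n_patterns=2):
    # Standardize each nutrient, then take leading right-singular vectors as loadings.
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    loadings = Vt[:n_patterns]
    scores = Z @ loadings.T
    return loadings, scores

rng = np.random.default_rng(2)
X = rng.random((100, 6))        # hypothetical intakes of 6 energy-adjusted nutrients
loadings, scores = nutrient_patterns(X)
```

In the study, participants' scores on the two retained components define the PLO and ANO patterns, and clustering on those scores yields the six groups C1–C6.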
(This article belongs to the Special Issue Nutrition during Pregnancy and Offspring Growth and Metabolism)

16 pages, 5819 KiB  
Article
L2-Norm Based a Posteriori Error Estimates of Compressible and Nearly-Incompressible Elastic Finite Element Solutions
by Mohd. Ahmed, Devinder Singh, Saeed AlQadhi and Nabil Ben Kahla
Appl. Sci. 2022, 12(8), 3999; https://doi.org/10.3390/app12083999 - 15 Apr 2022
Cited by 1 | Viewed by 2126
Abstract
Displacement- and stress-based error estimates for a posteriori error recovery of compressible and nearly incompressible elastic finite element solutions are investigated in the present study. The errors in the finite element solutions, i.e., in displacement and stress, at local and global levels are computed in the L2-norm of the quantities of interest, namely displacements and gradients. The error estimation techniques are based on least-squares fitting of higher-order polynomials to stress and displacement over a patch comprising the nodes/elements surrounding and including the node/element under consideration. Benchmark examples of compressible and incompressible elastic bodies with known solutions, employing triangular discretization schemes, are implemented to measure the finite element errors in displacements and gradients. A mixed formulation involving displacement and pressure is used for the incompressible elastic analysis. The performance of the error estimation is measured in terms of convergence properties, effectivity, and the mesh required for a predefined precision. For the self-loaded elastic plate benchmark with higher-order triangular elements, the error convergence rates of the original FEM solution, the displacement-recovery-based solution, and the stress-based recovery solution are (1.9714, 2.8999, 2.5018) in the compressible case and (0.9818, 1.7805, 1.4952) in the incompressible case. It is concluded from the study that the displacement-fitting technique for extracting higher-order derivatives is a very effective technique for recovering errors in compressible and nearly incompressible finite element analysis.
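The patch-wise least-squares recovery idea — fitting a higher-order polynomial to field values on a patch and differentiating the fit — can be sketched in one dimension; the function names are illustrative:

```python
import numpy as np

def recover_gradient(xs, us, degree=2, at=0.0):
    # Least-squares polynomial fit over the patch, then differentiate the fit.
    coeffs = np.polyfit(xs, us, degree)
    return np.polyval(np.polyder(coeffs), at)

xs = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # patch node coordinates
us = xs**2 + 3.0 * xs                         # nodal values of u(x) = x^2 + 3x
g = recover_gradient(xs, us)                  # u'(0) = 3, exact for a quadratic fit
```

The recovered gradient is generally more accurate than the raw finite element gradient, which is what drives the superconvergent rates reported above.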

17 pages, 3061 KiB  
Article
Recovery-Based Error Estimator for Natural Convection Equations Based on Defect-Correction Methods
by Lulu Li, Haiyan Su and Xinlong Feng
Entropy 2022, 24(2), 255; https://doi.org/10.3390/e24020255 - 9 Feb 2022
Cited by 2 | Viewed by 1738
Abstract
In this paper, we propose an adaptive defect-correction method for natural convection (NC) equations. A defect-correction method (DCM) is proposed for solving NC equations to overcome the convection-dominance problem caused by a high Rayleigh number. To reduce the large amount of computation and handle the discontinuity of the gradient of the numerical solution, we combine the DCM with a new recovery-type a posteriori error estimator based on gradient recovery and superconvergence theory. The presented reliability and efficiency analysis shows that the true error can be effectively bounded by the recovery-based error estimator. Finally, the stability, accuracy, and efficiency of the proposed method are confirmed by several numerical investigations.

15 pages, 549 KiB  
Article
Gradient Regularization as Approximate Variational Inference
by Ali Unlu and Laurence Aitchison
Entropy 2021, 23(12), 1629; https://doi.org/10.3390/e23121629 - 3 Dec 2021
Cited by 5 | Viewed by 2861
Abstract
We developed Variational Laplace for Bayesian neural networks (BNNs), which exploits a local approximation of the curvature of the likelihood to estimate the ELBO without the need for stochastic sampling of the neural-network weights. The Variational Laplace objective is simple to evaluate, as it is the log-likelihood plus weight-decay, plus a squared-gradient regularizer. Variational Laplace gave better test performance and expected calibration errors than maximum a posteriori inference and standard sampling-based variational inference, despite using the same variational approximate posterior. Finally, we emphasize the care needed in benchmarking standard VI, as there is a risk of stopping before the variance parameters have converged. We show that early-stopping can be avoided by increasing the learning rate for the variance parameters.
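The stated structure of the objective — "log-likelihood plus weight-decay, plus a squared-gradient regularizer" — can be sketched for a linear-Gaussian toy model. The sign and scaling of the regularizer here are assumptions for illustration, not the paper's derivation:

```python
import numpy as np

def variational_laplace_objective(w, X, y, prior_var, post_var):
    # Log-likelihood + weight decay + squared-gradient term; the sign and
    # scaling of the last term are assumed, not taken from the paper.
    resid = y - X @ w
    log_lik = -0.5 * np.sum(resid**2)
    weight_decay = -0.5 * np.sum(w**2) / prior_var
    grad_log_lik = X.T @ resid
    sq_grad_reg = 0.5 * post_var * np.sum(grad_log_lik**2)
    return log_lik + weight_decay + sq_grad_reg

X = np.eye(2)
w = np.array([2.0, -1.0])
obj = variational_laplace_objective(w, X, X @ w, prior_var=10.0, post_var=0.1)
# With zero residual only the weight-decay term survives: -0.5 * 5 / 10 = -0.25
```

The point of the construction is that all three terms are cheap deterministic functions of the weights, so no Monte Carlo sampling of the weights is needed.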
(This article belongs to the Special Issue Probabilistic Methods for Deep Learning)

20 pages, 2934 KiB  
Article
Adaptive Refinement in Advection–Diffusion Problems by Anomaly Detection: A Numerical Study
by Antonella Falini and Maria Lucia Sampoli
Algorithms 2021, 14(11), 328; https://doi.org/10.3390/a14110328 - 7 Nov 2021
Cited by 1 | Viewed by 2687
Abstract
We consider advection–diffusion–reaction problems in which the advective or reactive term dominates the diffusive term. The solutions of these problems are characterized by so-called layers: localized regions where the gradients of the solution are very large or subject to abrupt changes. To improve the accuracy of the computed solution, it is essential to locally increase the number of degrees of freedom while limiting the computational cost; thus, adaptive refinement driven by a posteriori error estimators is employed. The error estimators are processed by an anomaly detection algorithm to identify those regions of the computational domain that should be marked and, hence, refined. The anomaly detection task is performed in an unsupervised fashion, and the proposed strategy is tested on typical benchmarks. The present work is a numerical study that highlights promising results obtained by bridging standard techniques, i.e., the error estimators, with approaches typical of machine learning and artificial intelligence, such as anomaly detection.
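A minimal stand-in for the marking step — flagging elements whose a posteriori error indicator is anomalously large — using a simple z-score rule rather than the paper's anomaly detector:

```python
import numpy as np

def mark_anomalous(eta, k=2.0):
    # Flag elements whose error indicator lies more than k standard deviations
    # above the mean -- a simple unsupervised anomaly rule.
    return eta > eta.mean() + k * eta.std()

eta = np.array([0.01, 0.02, 0.015, 0.9, 0.012, 0.018])   # per-element indicators
marked = mark_anomalous(eta)     # only the fourth element is flagged for refinement
```

Classical marking strategies (e.g., Dörfler marking) use a fixed fraction of the total error; treating the indicators as an anomaly-detection input replaces that hand-tuned threshold with a data-driven one.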

24 pages, 61071 KiB  
Article
Multi-Spectral Fusion and Denoising of Color and Near-Infrared Images Using Multi-Scale Wavelet Analysis
by Haonan Su, Cheolkon Jung and Long Yu
Sensors 2021, 21(11), 3610; https://doi.org/10.3390/s21113610 - 22 May 2021
Cited by 8 | Viewed by 3837
Abstract
We formulate multi-spectral fusion and denoising for the luminance channel as a maximum a posteriori estimation problem in the wavelet domain. To deal with the discrepancy between RGB and near infrared (NIR) data in fusion, we build a discrepancy model and introduce the wavelet scale map. The scale map adjusts the wavelet coefficients of NIR data to have the same distribution as the RGB data. We use the priors of the wavelet scale map and its gradient as the contrast preservation term and gradient denoising term, respectively. Specifically, we utilize the local contrast and visibility measurements in the contrast preservation term to transfer the selected NIR data to the fusion result. We also use the gradient of NIR wavelet coefficients as the weight for the gradient denoising term in the wavelet scale map. Based on the wavelet scale map, we perform fusion of the RGB and NIR wavelet coefficients in the base and detail layers. To remove noise, we model the prior of the fused wavelet coefficients using NIR-guided Laplacian distributions. In the chrominance channels, we remove noise guided by the fused luminance channel. Based on the luminance variation after fusion, we further enhance the color of the fused image. Our experimental results demonstrated that the proposed method successfully performed the fusion of RGB and NIR images with noise reduction, detail preservation, and color enhancement.
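A generic single-level Haar fusion rule (average the approximation coefficients, keep the larger-magnitude detail) conveys the flavor of wavelet-domain fusion, though the paper's MAP formulation with a wavelet scale map is considerably richer:

```python
import numpy as np

def haar_fuse(a, b):
    # One-level Haar analysis, fuse (average approximations, max-magnitude
    # details), then exact Haar synthesis.
    ca_a, cd_a = (a[0::2] + a[1::2]) / 2, (a[0::2] - a[1::2]) / 2
    ca_b, cd_b = (b[0::2] + b[1::2]) / 2, (b[0::2] - b[1::2]) / 2
    ca = (ca_a + ca_b) / 2
    cd = np.where(np.abs(cd_a) >= np.abs(cd_b), cd_a, cd_b)
    out = np.empty_like(a, dtype=float)
    out[0::2], out[1::2] = ca + cd, ca - cd
    return out

x = np.array([1.0, 2.0, 3.0, 4.0])
fused = haar_fuse(x, x)     # fusing a signal with itself reproduces it
```

The max-magnitude detail rule is a classical heuristic; the paper instead weights NIR details through a learned scale map so their distribution matches the RGB channel before fusing.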
(This article belongs to the Section Intelligent Sensors)

13 pages, 6424 KiB  
Article
Weights-Based Image Demosaicking Using Posteriori Gradients and the Correlation of R–B Channels in High Frequency
by Meidong Xia, Chengyou Wang and Wenhan Ge
Symmetry 2019, 11(5), 600; https://doi.org/10.3390/sym11050600 - 26 Apr 2019
Cited by 2 | Viewed by 4248
Abstract
In this paper, we propose a weights-based image demosaicking algorithm for the Bayer-pattern color filter array (CFA). When reconstructing the missing G components, the proposed algorithm uses weights based on posteriori gradients to mitigate color artifacts and distortions. Furthermore, it makes full use of the correlation of the R–B channels in high frequency when interpolating R/B values at B/R positions. Experimental results show that the proposed algorithm is superior to previous similar algorithms in composite peak signal-to-noise ratio (CPSNR) and subjective visual quality. The biggest advantage of the proposed algorithm is its use of posteriori gradients and the high-frequency correlation of the R–B channels.
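The gradient-based direction weighting for a missing G sample can be sketched as follows; the weight formula is a simplified stand-in for the paper's posteriori-gradient weights:

```python
import numpy as np

def weighted_green(cfa, i, j):
    # Interpolate the missing G sample from its four G neighbours, weighting the
    # vertical/horizontal directions inversely with the local gradient.
    g_n, g_s = cfa[i - 1, j], cfa[i + 1, j]
    g_w, g_e = cfa[i, j - 1], cfa[i, j + 1]
    wv = 1.0 / (abs(g_n - g_s) + 1.0)   # vertical weight
    wh = 1.0 / (abs(g_w - g_e) + 1.0)   # horizontal weight
    return (wv * (g_n + g_s) + wh * (g_w + g_e)) / (2.0 * (wv + wh))

cfa = np.full((5, 5), 0.5)              # flat patch: all directions agree
g = weighted_green(cfa, 2, 2)
```

Across an edge, one directional gradient is large and its weight small, so the interpolation follows the edge rather than crossing it — the mechanism that suppresses zipper artifacts.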

22 pages, 6868 KiB  
Article
A High-Order Numerical Manifold Method for Darcy Flow in Heterogeneous Porous Media
by Lingfeng Zhou, Yuan Wang and Di Feng
Processes 2018, 6(8), 111; https://doi.org/10.3390/pr6080111 - 1 Aug 2018
Cited by 10 | Viewed by 4266
Abstract
One major challenge in modeling Darcy flow in heterogeneous porous media is simulating the material interfaces accurately. To overcome this difficulty, the refraction law is fully introduced into the numerical manifold method (NMM) as an a posteriori condition. To achieve better accuracy of the Darcy velocity and a continuous nodal velocity, a high-order weight function with a continuous nodal gradient is adopted. NMM is an advanced method with two independent cover systems, which can solve both continuous and discontinuous problems in a unified form. Moreover, a regular mathematical mesh, independent of the physical domain, is used in the NMM model; compared to the conforming meshes of other numerical methods, it is more efficient and flexible. A number of numerical examples were simulated with the new NMM model, and the results were compared with the original NMM model and analytical solutions. The proposed method is shown to be accurate, efficient, and robust for modeling Darcy flow in heterogeneous porous media, while the refraction law is satisfied rigorously.
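The refraction law imposed as the a posteriori condition relates the flow angles on either side of a material interface (measured from the interface normal) to the hydraulic conductivities: tan θ1 / tan θ2 = k1 / k2. A small checker, assuming that convention:

```python
import math

def refraction_angle(theta1, k1, k2):
    # tan(theta1) / tan(theta2) = k1 / k2 at a material interface,
    # with angles measured from the interface normal.
    return math.atan(math.tan(theta1) * k2 / k1)

theta2 = refraction_angle(math.radians(30.0), 1.0, 2.0)
```

Flow entering a medium of higher conductivity bends away from the normal, analogous to optical refraction; enforcing this relation at interfaces is what keeps the computed Darcy velocity physically consistent.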
(This article belongs to the Special Issue Fluid Flow in Fractured Porous Media)
