
Maximum Entropy and Bayesian Methods

A special issue of Entropy (ISSN 1099-4300).

Deadline for manuscript submissions: closed (30 June 2018) | Viewed by 82689

Special Issue Editors

Department of Physics, University at Albany, 1400 Washington Avenue, Albany, NY 12222, USA
Interests: Bayesian data analysis; entropy; probability theory; signal processing; machine learning; robotics; foundations of physics; quantum information; exoplanet detection and characterization
Department of Statistics, The University of Auckland, Private Bag 92019, Auckland 1142, New Zealand
Interests: Bayesian inference; Markov chain Monte Carlo; nested sampling; astrostatistics

Special Issue Information

Dear Colleagues,

Whereas Bayesian inference has now achieved mainstream acceptance and is widely used throughout the sciences, associated ideas such as the principle of maximum entropy (implicit in the work of Gibbs, and developed further by Ed Jaynes and others) have not. There are strong arguments that the principle (and variations, such as maximum relative entropy) is of fundamental importance, but the literature also contains many misguided attempts at applying it, leading to much confusion.

This Special Issue will focus on Bayesian inference and MaxEnt. Some open questions that spring to mind are: Which proposed ways of using entropy (and its maximisation) in inference are legitimate, which are not, and why? Where can we obtain constraints on probability assignments, the input needed by the MaxEnt procedure?
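
For readers new to the procedure, the core calculation is short enough to state. Maximizing the entropy subject to normalization and a single expectation constraint,

$$
\max_{p}\; H[p] = -\sum_i p_i \ln p_i
\quad\text{subject to}\quad
\sum_i p_i = 1, \qquad \sum_i p_i f_i = F,
$$

yields the exponential-family assignment

$$
p_i = \frac{e^{-\lambda f_i}}{Z(\lambda)}, \qquad
Z(\lambda) = \sum_i e^{-\lambda f_i},
$$

with the multiplier $\lambda$ fixed by $-\partial \ln Z/\partial \lambda = F$. The open question above is precisely where constraints such as $F$ should come from.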

More generally, papers exploring any interesting connections between probabilistic inference and information theory will be considered. Papers presenting high quality applications, or discussing computational methods in these areas, are also welcome.

Dr. Brendon J. Brewer
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Bayesian inference
  • uncertainty
  • maximum entropy
  • maximum relative entropy
  • prior distributions
  • principle of indifference
  • symmetry
  • relevance

Published Papers (15 papers)

Research

20 pages, 5107 KiB  
Article
A Hybrid Structure Learning Algorithm for Bayesian Network Using Experts’ Knowledge
by Hongru Li and Huiping Guo
Entropy 2018, 20(8), 620; https://doi.org/10.3390/e20080620 - 20 Aug 2018
Cited by 13 | Viewed by 4043
Abstract
Bayesian network structure learning from data has been proved to be an NP-hard (non-deterministic polynomial-time hard) problem. An effective way to improve the accuracy of a learned Bayesian network structure is to use experts’ knowledge rather than data alone. Some experts’ knowledge (named here explicit knowledge) can make the causal relationships between nodes in a Bayesian network (BN) clear, while the rest (named here vague knowledge) cannot. Previous BN structure learning algorithms used only the explicit knowledge; the vague knowledge, which was ignored, is also valuable and often exists in the real world. We therefore propose a new method of using more comprehensive experts’ knowledge based on a hybrid (two-stage) structure learning algorithm. Two types of experts’ knowledge are defined and incorporated into the hybrid algorithm. We formulate rules to generate a better initial network structure and improve the scoring function. Furthermore, we take differences in expert level and conflicts of opinion into account. Experimental results show that our proposed method can improve structure learning performance.
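
As a rough illustration of how the two kinds of knowledge can enter the score-based stage, here is a minimal sketch; the variable names, the confidence-to-prior mapping, and the numbers are invented for illustration and are not taken from the paper:

```python
import math

# Hypothetical expert inputs (illustrative only):
# explicit knowledge fixes edges as present or absent;
# vague knowledge gives soft beliefs weighted by expert confidence.
EXPLICIT_PRESENT = {("Smoking", "Cancer")}
EXPLICIT_ABSENT = {("Cancer", "Smoking")}
VAGUE = {("Pollution", "Cancer"): 0.7}  # confidence that the edge exists

def knowledge_score(edges, base_score):
    """Combine a data-driven score (e.g., BIC) with experts' knowledge.

    Explicit knowledge acts as a hard constraint; vague knowledge shifts
    the score by a confidence-weighted log-prior on each edge.
    """
    if EXPLICIT_ABSENT & edges or not EXPLICIT_PRESENT <= edges:
        return -math.inf  # candidate structure violates explicit knowledge
    bonus = 0.0
    for edge, conf in VAGUE.items():
        p = 0.5 + 0.45 * conf  # map confidence to an edge prior in (0.5, 0.95]
        bonus += math.log(p) if edge in edges else math.log(1.0 - p)
    return base_score + bonus

candidate = {("Smoking", "Cancer"), ("Pollution", "Cancer")}
print(knowledge_score(candidate, base_score=-1234.5))
```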

27 pages, 1767 KiB  
Article
Large Deviations Properties of Maximum Entropy Markov Chains from Spike Trains
by Rodrigo Cofré, Cesar Maldonado and Fernando Rosas
Entropy 2018, 20(8), 573; https://doi.org/10.3390/e20080573 - 03 Aug 2018
Cited by 5 | Viewed by 4536
Abstract
We consider the maximum entropy Markov chain inference approach to characterizing the collective statistics of neuronal spike trains, focusing on the statistical properties of the inferred model. To find the maximum entropy Markov chain, we use the thermodynamic formalism, which provides insightful connections with statistical physics and thermodynamics from which large deviations properties arise naturally. We provide an accessible introduction to the maximum entropy Markov chain inference problem and large deviations theory for the computational neuroscience community, avoiding some technicalities while preserving the core ideas and intuitions. We review large deviations techniques useful in spike train statistics for describing properties of accuracy and convergence in terms of sampling size. We use these results to study the statistical fluctuations of correlations, distinguishability, and irreversibility of maximum entropy Markov chains. We illustrate these applications using simple examples where the large deviation rate function is obtained explicitly for maximum entropy models of relevance in this field.
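
The tilted-operator computation at the heart of this machinery fits in a few lines; the sketch below uses a toy two-state chain and an invented observable, not anything from the paper:

```python
import numpy as np

# Toy two-state chain standing in for an inferred maximum entropy chain
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])
f = np.array([0.0, 1.0])  # observable: indicator of being in state 1

def scgf(k):
    """Scaled cumulant generating function: log of the Perron eigenvalue
    of the tilted matrix P_ij * exp(k * f_j)."""
    tilted = P * np.exp(k * f)[None, :]
    return np.log(np.max(np.linalg.eigvals(tilted).real))

# Legendre transform of the SCGF gives the large-deviation rate function
ks = np.linspace(-5.0, 5.0, 2001)
lams = np.array([scgf(k) for k in ks])

def rate(a):
    return np.max(ks * a - lams)

# The stationary fraction of time in state 1 is 0.25, so rate(0.25) ~ 0,
# while atypical empirical averages carry a positive exponential cost.
print(rate(0.25), rate(0.5))
```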

14 pages, 2303 KiB  
Article
Evaluating Flight Crew Performance by a Bayesian Network Model
by Wei Chen and Shuping Huang
Entropy 2018, 20(3), 178; https://doi.org/10.3390/e20030178 - 08 Mar 2018
Cited by 15 | Viewed by 4965 | Correction
Abstract
Flight crew performance is of great significance in keeping flights safe and sound. When evaluating crew performance, quantitative detailed behavioral information may not be available. This paper introduces the Bayesian network (BN) for flight crew performance evaluation, which permits the use of multidisciplinary sources of objective and subjective information despite sparse behavioral data. The causal factors are selected based on an analysis of 484 aviation accidents caused by human factors, and a network termed the Flight Crew Performance Model is constructed. The Delphi technique helps to gather subjective data as a supplement to objective data from accident reports, and the conditional probabilities are elicited with the leaky noisy-MAX model. Two modes of BN inference, probability prediction and probabilistic diagnosis, are used, and some interesting conclusions are drawn that could provide data support for interventions in human error management in aviation safety.
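
For readers unfamiliar with the elicitation model: the binary special case of the leaky noisy-MAX (the leaky noisy-OR) is easy to write down; the probabilities below are illustrative, not the paper's elicited values:

```python
import numpy as np

def leaky_noisy_or(p_leak, p_causes, x):
    """P(Y=1 | x) = 1 - (1 - p_leak) * prod_i (1 - p_i)^x_i,
    the binary special case of the leaky noisy-MAX model."""
    p_causes = np.asarray(p_causes, dtype=float)
    x = np.asarray(x, dtype=float)
    return 1.0 - (1.0 - p_leak) * np.prod((1.0 - p_causes) ** x)

# Two causal factors, only the first active (illustrative numbers)
print(leaky_noisy_or(0.05, [0.6, 0.3], [1, 0]))  # -> 0.62
```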

1112 KiB  
Article
Hypothesis Tests for Bernoulli Experiments: Ordering the Sample Space by Bayes Factors and Using Adaptive Significance Levels for Decisions
by Carlos A. de B. Pereira, Eduardo Y. Nakano, Victor Fossaluza, Luís Gustavo Esteves, Mark A. Gannon and Adriano Polpo
Entropy 2017, 19(12), 696; https://doi.org/10.3390/e19120696 - 20 Dec 2017
Cited by 7 | Viewed by 5827
Abstract
The main objective of this paper is to find the relation between the adaptive significance level presented here and the sample size. We statisticians know of the inconsistency, or paradox, in current classical significance tests based on p-value statistics compared against the canonical significance levels (10%, 5%, and 1%): “Raise the sample to reject the null hypothesis” is the recommendation of some ill-advised scientists! This paper shows that it is possible to eliminate this problem with significance tests. We present here the beginning of a larger research project, with the intention of extending its use to more complex applications such as survival analysis, reliability tests, and other areas. The main tools used here are the Bayes factor and the extended Neyman–Pearson Lemma.
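
The central quantity is the Bayes factor for the Bernoulli case, which is straightforward to compute; a sketch under a point null and a conjugate Beta alternative (the priors, and the adaptive-cutoff idea in the comment, are illustrative assumptions):

```python
import math
from scipy.special import betaln

def log_bf01(s, f, theta0=0.5, a=1.0, b=1.0):
    """Log Bayes factor for H0: theta = theta0 against H1: theta ~ Beta(a, b),
    after s successes and f failures."""
    log_m0 = s * math.log(theta0) + f * math.log(1.0 - theta0)
    log_m1 = betaln(a + s, b + f) - betaln(a, b)  # log marginal under H1
    return log_m0 - log_m1

# An adaptive significance level amounts to letting the rejection cutoff
# for the Bayes factor depend on the sample size n = s + f.
print(math.exp(log_bf01(60, 40)))
```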

333 KiB  
Article
Choosing between Higher Moment Maximum Entropy Models and Its Application to Homogeneous Point Processes with Random Effects
by Lotfi Khribi, Brenda MacGibbon and Marc Fredette
Entropy 2017, 19(12), 687; https://doi.org/10.3390/e19120687 - 14 Dec 2017
Viewed by 3422
Abstract
In the Bayesian framework, the usual choice of prior for predicting homogeneous Poisson processes with random effects is the gamma distribution. Here, we propose the use of higher-order maximum entropy priors. Their advantage is illustrated in a simulation study, and the choice of the best order is established by two goodness-of-fit criteria: the Kullback–Leibler divergence and a discrepancy measure. The procedure is illustrated on a warranty data set from the automobile industry.
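
Numerically, a maximum entropy prior matching a set of moments can be found by minimizing the convex dual of the MaxEnt problem; a minimal sketch on a grid, where the support, target moments, and optimizer are illustrative choices (the paper's higher-order models add further moment features):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

x = np.linspace(0.0, 10.0, 2001)
dx = x[1] - x[0]
feats = np.vstack([x, x**2])    # moment features f(x) = (x, x^2)
targets = np.array([2.0, 5.5])  # desired E[x], E[x^2] (illustrative)

def dual(lmbda):
    """Convex dual of the MaxEnt problem: log Z(lambda) + lambda . targets."""
    logp = -feats.T @ lmbda
    return logsumexp(logp) + np.log(dx) + lmbda @ targets

res = minimize(dual, x0=np.zeros(2), method="Nelder-Mead")
p = np.exp(-feats.T @ res.x)
p /= np.trapz(p, x)             # MaxEnt density matching the moments
print(np.trapz(x * p, x), np.trapz(x**2 * p, x))  # ~2.0, ~5.5
```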

2049 KiB  
Article
Bayesian Inference of Ecological Interactions from Spatial Data
by Christopher R. Stephens, Victor Sánchez-Cordero and Constantino González Salazar
Entropy 2017, 19(12), 547; https://doi.org/10.3390/e19120547 - 25 Nov 2017
Cited by 15 | Viewed by 4741
Abstract
The characterization and quantification of ecological interactions and the construction of species’ distributions and their associated ecological niches are of fundamental theoretical and practical importance. In this paper, we discuss a Bayesian inference framework which, using spatial data, offers a general formalism within which ecological interactions may be characterized and quantified. Interactions are identified through deviations of the spatial distribution of co-occurrences of spatial variables relative to a benchmark for the non-interacting system, based on a statistical ensemble of spatial cells. The formalism allows for the integration of both biotic and abiotic factors of arbitrary resolution. We concentrate on the conceptual and mathematical underpinnings of the formalism, showing how, using the naive Bayes approximation, it can be used not only to compare and contrast the relative contribution from each variable, but also to construct species’ distributions and ecological niches based on an arbitrary variable type. We also show how non-linear interactions between distinct niche variables can be identified and the degree of confounding between variables accounted for.
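
The null-model comparison at the core of such a formalism reduces to a binomial z-score for co-occurrence counts; a hedged sketch with invented counts (the paper's benchmark and ensemble construction are richer than this):

```python
import math

def epsilon(n_xy, n_x, n_y, n):
    """Deviation of observed co-occurrences n_xy from the non-interacting
    expectation, in binomial standard-deviation units: under the null,
    n_xy ~ Binomial(n_x, n_y / n) over the ensemble of n spatial cells."""
    p = n_y / n
    mean, sd = n_x * p, math.sqrt(n_x * p * (1.0 - p))
    return (n_xy - mean) / sd

# e.g., species C found in 30 of the 100 cells containing variable X,
# while C occupies 200 of the 1000 cells overall (illustrative counts)
print(epsilon(30, 100, 200, 1000))  # positive -> attraction, negative -> repulsion
```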

1015 KiB  
Article
Inquiry Calculus and the Issue of Negative Higher Order Informations
by H. R. Noel Van Erp, Ronald O. Linger and Pieter H. A. J. M. Van Gelder
Entropy 2017, 19(11), 622; https://doi.org/10.3390/e19110622 - 18 Nov 2017
Cited by 1 | Viewed by 3218
Abstract
In this paper, we give the derivation of an inquiry calculus or, equivalently, a Bayesian information theory. From simple orderings follow lattices or, equivalently, algebras. Lattices admit a quantification or, equivalently, algebras may be extended to calculi. The general rules of quantification are the sum and chain rules. Probability theory follows from a quantification on the specific lattice of statements that has an upper context. Inquiry calculus follows from a quantification on the specific lattice of questions that has a lower context. We give here a relevance measure and a product rule for relevances which, taken together with the sum rule of relevances, allow us to perform inquiry analyses in an algorithmic manner.
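
For orientation, the sum and chain rules referred to here take their familiar probabilistic form on the lattice of statements (the relevance calculus of the paper mirrors them on the lattice of questions):

$$
p(x \lor y \mid c) = p(x \mid c) + p(y \mid c) - p(x \land y \mid c),
\qquad
p(x \land y \mid c) = p(x \mid c)\, p(y \mid x \land c).
$$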

256 KiB  
Article
The Prior Can Often Only Be Understood in the Context of the Likelihood
by Andrew Gelman, Daniel Simpson and Michael Betancourt
Entropy 2017, 19(10), 555; https://doi.org/10.3390/e19100555 - 19 Oct 2017
Cited by 282 | Viewed by 15456
Abstract
A key sticking point of Bayesian analysis is the choice of prior distribution, and there is a vast literature on potential defaults including uniform priors, Jeffreys’ priors, reference priors, maximum entropy priors, and weakly informative priors. These methods, however, often manifest a key conceptual tension in prior modeling: a model encoding true prior information should be chosen without reference to the model of the measurement process, but almost all common prior modeling techniques are implicitly motivated by a reference likelihood. In this paper we resolve this apparent paradox by placing the choice of prior into the context of the entire Bayesian analysis, from inference to prediction to model evaluation.
8487 KiB  
Article
Use of the Principles of Maximum Entropy and Maximum Relative Entropy for the Determination of Uncertain Parameter Distributions in Engineering Applications
by José-Luis Muñoz-Cobo, Rafael Mendizábal, Arturo Miquel, Cesar Berna and Alberto Escrivá
Entropy 2017, 19(9), 486; https://doi.org/10.3390/e19090486 - 12 Sep 2017
Cited by 27 | Viewed by 6056
Abstract
The determination of the probability distribution function (PDF) of uncertain input and model parameters in engineering application codes is an important issue for uncertainty quantification methods. One approach that can be used for PDF determination is the application of methods based on the maximum entropy principle (MEP) and the maximum relative entropy principle (MREP). These methods determine the PDF that maximizes the information entropy when only partial information about the parameter distribution is known, such as some moments of the distribution and its support. In addition, this paper shows the application of the MREP to updating the PDF when the parameter must fulfill technical specifications (TS) imposed by the regulations. Three computer programs have been developed: GEDIPA, which provides the parameter PDF using empirical distribution function (EDF) methods; UNTHERCO, which performs Monte Carlo sampling on the parameter distribution; and DCP, which updates the PDF considering the TS and the MREP. Finally, the paper presents several applications and examples of PDF determination using the MEP and the MREP, and examines the influence of several factors on the PDF.
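
As a small illustration of an MREP update under a hard technical specification: minimizing relative entropy to the prior subject to a support constraint simply truncates and renormalizes it. The prior and the bound below are invented for the sketch:

```python
import numpy as np

x = np.linspace(0.0, 5.0, 2001)
prior = np.exp(-0.5 * ((x - 2.0) / 0.8) ** 2)  # illustrative prior PDF
prior /= np.trapz(prior, x)

# MREP update with a hard technical-specification bound x <= 3:
# the minimizer of relative entropy to the prior, subject to
# supp(p) in [0, 3], is the truncated, renormalized prior.
post = np.where(x <= 3.0, prior, 0.0)
post /= np.trapz(post, x)
print(np.trapz(x * post, x))  # updated mean respecting the TS
```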

1755 KiB  
Article
Disproportionate Allocation of Indirect Costs at Individual-Farm Level Using Maximum Entropy
by Markus Lips
Entropy 2017, 19(9), 453; https://doi.org/10.3390/e19090453 - 29 Aug 2017
Cited by 9 | Viewed by 4787
Abstract
This paper addresses the allocation of indirect or joint costs among farm enterprises, and elaborates two maximum entropy models, the basic CoreModel and the InequalityModel, the latter of which additionally includes inequality restrictions in order to incorporate knowledge from production technology. Representing the indirect costing approach, both models address the individual-farm level and use standard costs from the farm-management literature as allocation bases. They provide a disproportionate allocation, with the distinctive feature that enterprises with large allocation bases face stronger adjustments than enterprises with small ones, bringing indirect costing closer to reality. Based on crop-farm observations from the Swiss Farm Accountancy Data Network (FADN), including up to 36 observations per enterprise, both models are compared with a proportional allocation as the reference base. The mean differences in the enterprises’ allocated labour inputs and machinery costs range up to ±35% and ±20% for the CoreModel and InequalityModel, respectively. We conclude that the choice of allocation method has a strong influence on the resulting indirect costs, and that applying inequality restrictions is a precondition for making the merits of the maximum entropy principle accessible for the allocation of indirect costs.
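
A toy version of the inequality-restricted idea can be phrased as minimizing relative entropy to the proportional shares subject to the restriction; the bases, cost total, and the 30% cap below are invented for illustration, and the paper's models differ in detail:

```python
import numpy as np
from scipy.optimize import minimize

base = np.array([400.0, 250.0, 100.0, 50.0])  # standard-cost allocation bases
q = base / base.sum()                          # proportional reference shares
T = 120.0                                      # indirect costs to allocate

def rel_entropy(p):
    return np.sum(p * np.log(p / q))           # KL divergence to the reference

cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},
        # illustrative inequality restriction from production technology:
        # enterprise 0 may carry at most 30% of the indirect costs
        {"type": "ineq", "fun": lambda p: 0.30 - p[0]}]

res = minimize(rel_entropy, q, bounds=[(1e-9, 1.0)] * 4, constraints=cons)
print(res.x * T)                               # disproportionate allocation
```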

782 KiB  
Article
Computing Entropies with Nested Sampling
by Brendon J. Brewer
Entropy 2017, 19(8), 422; https://doi.org/10.3390/e19080422 - 18 Aug 2017
Cited by 7 | Viewed by 5697
Abstract
The Shannon entropy, and related quantities such as mutual information, can be used to quantify uncertainty and relevance. However, in practice, it can be difficult to compute these quantities for arbitrary probability distributions, particularly if the probability mass functions or densities cannot be evaluated. This paper introduces a computational approach, based on Nested Sampling, to evaluate entropies of probability distributions that can only be sampled. I demonstrate the method on three examples: (i) a simple Gaussian example where the key quantities are available analytically; (ii) an experimental design example about scheduling observations in order to measure the period of an oscillating signal; and (iii) predicting the future from the past in a heavy-tailed scenario.
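
For the analytic Gaussian case mentioned first, the plain Monte Carlo estimator already works and makes a useful sanity check; Nested Sampling is needed precisely when log p cannot be evaluated directly:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0
xs = rng.normal(0.0, sigma, size=100_000)

# Monte Carlo entropy estimate H = -E[log p(x)] when the density is known
logp = -0.5 * np.log(2 * np.pi * sigma**2) - xs**2 / (2 * sigma**2)
print(-logp.mean())                            # estimate
print(0.5 * np.log(2 * np.pi * np.e * sigma**2))  # analytic value
```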

306 KiB  
Article
Statistical Process Control for Unimodal Distribution Based on Maximum Entropy Distribution Approximation
by Xinghua Fang, Mingshun Song and Yizeng Chen
Entropy 2017, 19(8), 406; https://doi.org/10.3390/e19080406 - 07 Aug 2017
Cited by 1 | Viewed by 5998
Abstract
In statistical process control, control charts based on maximum entropy distribution density level sets have been proven to perform well for monitoring quantities with multimodal distributions. However, they are too complicated to implement for quantities with unimodal distributions. This article proposes a simplified control chart design method based on maximum entropy for monitored quantities with unimodal distributions. First, we use the maximum entropy distribution to approximate the unknown distribution of the monitored quantity. Then we directly take the value of the quantity as the monitoring statistic. Finally, the Lebesgue measure is applied to estimate the acceptance regions, and the one with minimum volume is chosen as the optimal in-control region of the monitored quantity. The results from two cases show that the proposed method has a higher detection capability than conventional control chart techniques when the monitored quantity has an asymmetric unimodal distribution.
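
The minimum-volume (highest-density) acceptance region is easy to extract once a density approximation is in hand; a sketch on a grid with a stand-in skewed density and a 3-sigma-equivalent level, both illustrative:

```python
import numpy as np

x = np.linspace(0.0, 20.0, 4001)
p = x**2 * np.exp(-x)                 # stand-in for a fitted MaxEnt density
p /= np.trapz(p, x)

# Minimum-volume acceptance region at level 1 - alpha: keep grid cells in
# order of decreasing density until they accumulate the target probability.
alpha = 0.0027
order = np.argsort(p)[::-1]
dx = x[1] - x[0]
mass = np.cumsum(p[order]) * dx
keep = order[mass <= 1.0 - alpha]
print(x[keep].min(), x[keep].max())   # in-control region of the chart
```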

1116 KiB  
Article
Optimal Detection under the Restricted Bayesian Criterion
by Shujun Liu, Ting Yang and Hongqing Liu
Entropy 2017, 19(7), 370; https://doi.org/10.3390/e19070370 - 19 Jul 2017
Cited by 1 | Viewed by 3903
Abstract
This paper aims to find a suitable decision rule for a binary composite hypothesis-testing problem with a partial or coarse prior distribution. To alleviate the negative impact of the information uncertainty, a constraint is imposed: the maximum conditional risk cannot be greater than a predefined value. The objective thus becomes finding the optimal decision rule that minimizes the Bayes risk under this constraint. By applying Lagrange duality, the constrained optimization problem is transformed into an unconstrained one, and the restricted Bayesian decision rule is obtained as a classical Bayesian decision rule corresponding to a modified prior distribution. Based on this transformation, the optimal restricted Bayesian decision rule is analyzed and the corresponding algorithm is developed. Furthermore, the relation between the Bayes risk and the predefined constraint value is discussed: the Bayes risk obtained via the restricted Bayesian decision rule is a strictly decreasing and convex function of the constraint on the maximum conditional risk. Finally, numerical results, including a detection example, are presented and agree with the theoretical results.
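
The modified-prior view suggests a direct numerical sketch: scan the prior weight defining a classical Bayes rule and keep the feasible rule with the smallest Bayes risk. The Gaussian pair, true prior, and risk cap below are invented:

```python
import numpy as np
from scipy.stats import norm

# H0: X ~ N(0,1) vs. H1: X ~ N(2,1), 0-1 loss, true prior pi0 = 0.8,
# restriction: max conditional risk <= 0.25 (all numbers illustrative).
pi0, mu, cap = 0.8, 2.0, 0.25

best = None
for w in np.linspace(0.01, 0.99, 981):         # candidate modified prior on H0
    t = np.log(w / (1.0 - w)) / mu + mu / 2.0  # Bayes rule: decide H1 if x > t
    r0 = 1.0 - norm.cdf(t)                     # P(decide H1 | H0)
    r1 = norm.cdf(t - mu)                      # P(decide H0 | H1)
    if max(r0, r1) <= cap:
        risk = pi0 * r0 + (1.0 - pi0) * r1     # Bayes risk under the true prior
        if best is None or risk < best[0]:
            best = (risk, w)
print(best)
```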

1338 KiB  
Article
A Bayesian Optimal Design for Sequential Accelerated Degradation Testing
by Xiaoyang Li, Yuqing Hu, Fuqiang Sun and Rui Kang
Entropy 2017, 19(7), 325; https://doi.org/10.3390/e19070325 - 01 Jul 2017
Cited by 8 | Viewed by 4393
Abstract
When optimizing an accelerated degradation testing (ADT) plan, the initial values of unknown model parameters must be pre-specified. However, it is usually difficult to obtain exact values, since many uncertainties are embedded in these parameters. Bayesian ADT optimal design has been presented to address this problem by using prior distributions to capture these uncertainties. Nevertheless, when the difference between a prior distribution and the actual situation is large, the existing Bayesian optimal design might cause over-testing or under-testing issues: for example, the ADT implemented according to the optimal plan consumes too many testing resources, or too few accelerated degradation data are obtained during the ADT. To overcome these obstacles, a Bayesian sequential step-down-stress ADT design is proposed in this article. During the sequential ADT, the test under the highest stress level is conducted first, based on the initial prior information, to quickly generate degradation data. Then, the data collected under higher stress levels are employed to construct the prior distributions for the test design under lower stress levels by using Bayesian inference. In the optimization, the inverse Gaussian (IG) process is assumed to describe the degradation paths, and Bayesian D-optimality is selected as the objective. A case study on an electrical connector’s ADT plan illustrates the application of the proposed Bayesian sequential ADT design method. Compared with the results from a typical static Bayesian ADT plan, the proposed design could guarantee more stable and precise estimations of different reliability measures.

1954 KiB  
Article
Bayesian Hierarchical Scale Mixtures of Log-Normal Models for Inference in Reliability with Stochastic Constraint
by Hea-Jung Kim
Entropy 2017, 19(6), 274; https://doi.org/10.3390/e19060274 - 13 Jun 2017
Cited by 1 | Viewed by 4253
Abstract
This paper develops Bayesian inference for the reliability of a class of scale mixtures of log-normal failure time (SMLNFT) models with stochastic (or uncertain) constraints on their reliability measures. The class is comprehensive and includes existing failure time (FT) models (such as log-normal, log-Cauchy, and log-logistic FT models) as well as new models that are robust to heavy-tailed FT observations. Since classical frequentist approaches to reliability analysis based on the SMLNFT model with stochastic constraint are intractable, a Bayesian method is pursued, utilizing a Markov chain Monte Carlo (MCMC) sampling-based approach. The paper introduces a two-stage maximum entropy (MaxEnt) prior, which elicits the a priori uncertain constraint, and develops a Bayesian hierarchical SMLNFT model using this prior. The paper also proposes an MCMC method for Bayesian inference in the SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for the method development. Finally, two data sets are used to illustrate how the proposed methodology works.
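
To unpack the model class: on the log scale, a scale mixture of normals with, for example, Gamma mixing gives a heavy-tailed (log-Student-t) failure time model whose reliability is a one-line Monte Carlo average; the parameter values below are illustrative, not the paper's:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def reliability(t, mu=1.0, sigma=0.5, nu=4.0, n=100_000):
    """R(t) = P(T > t) under a scale mixture of log-normals:
    log T | kappa ~ N(mu, sigma^2 / kappa), with Gamma(nu/2, 2/nu)
    mixing (a log-Student-t failure time model)."""
    kappa = rng.gamma(nu / 2.0, 2.0 / nu, size=n)
    z = (np.log(t) - mu) * np.sqrt(kappa) / sigma
    return np.mean(1.0 - norm.cdf(z))

print(reliability(5.0))
```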
