
Bayesian Methods of Representative Values of Variable Actions

School of Science, Xi’an University of Architecture and Technology, Xi’an 710055, China
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(3), 346; https://doi.org/10.3390/sym11030346
Submission received: 14 January 2019 / Revised: 15 February 2019 / Accepted: 15 February 2019 / Published: 7 March 2019
(This article belongs to the Special Issue New Trends in Dynamics)

Abstract
In engineering practice, it is sometimes necessary to infer the representative values of a variable action when the test data are insufficient, but the classical statistical methods adopted at present do not take into account the influence of statistical uncertainty, so the inferred results tend to be on the low (unconservative) side, especially when the characteristic and frequent values are inferred. Variable actions usually obey a type I maximum distribution, so the linear regression estimation of the tantile of the type I minimum distribution can be employed to infer their characteristic and frequent values; however, it is inconvenient to apply and cannot totally meet the demands of such inference. Applying the Jeffreys non-informative prior distribution, Bayesian methods for inferring the characteristic and frequent values of variable actions are put forward, including a method for the case of a known standard deviation, which yields still better results. The proposed methods are convenient and flexible and possess good precision.

1. Introduction

The inference of the representative values of variable actions, including the characteristic value, frequent value, and quasi-permanent value, is fundamental to establishing methods of structural design and assessment [1,2]. When the sample size is large enough (the test data are sufficient), we usually use classical statistical methods such as moment estimation and maximum likelihood estimation. However, the observed data are often insufficient in engineering, and the classical statistical methods adopted at present do not take into account the influence of statistical uncertainty, so the inferred results are always on the aggressive side, especially when the characteristic and frequent values are inferred. Therefore, we need to choose an appropriate method which is applicable to the case of a minor sample.
The values at any time point of variable actions usually obey a type I maximum distribution [3,4], and the representative values of variable actions are usually expressed as tantiles of that distribution. The type I maximum distribution and the type I minimum distribution belong to the same extreme value distribution family [5,6,7] and can be converted into each other [8,9]; therefore, the linear regression estimation of the tantile of the type I minimum distribution can be employed to infer the characteristic and frequent values. This method is applicable to the case of a minor sample and takes into account the influence of statistical uncertainty at different confidence degrees; it is therefore widely used in machinery, electronics, and other fields to infer the service life of products [10,11]. However, it is inconvenient because a large amount of tabulated data must be sought, and the available numerical tables do not totally meet the demands of variable actions inference. Additionally, it is very difficult to establish a new numerical table, since that would require a tedious numerical or Monte Carlo simulation.
Among the representative values of variable actions, the quasi-permanent value is generally taken as the average value of the distribution of the variable action at any time point [1], so the classical statistical methods can be used to infer it, since the result is less affected by statistical uncertainty. In this paper, we mainly focus on methods for inferring the characteristic and frequent values of variable actions and, from a practical point of view, put forward Bayesian methods [12,13,14,15] which are applicable to the case of a minor sample.

2. Linear Regression Estimation

Generally, we suppose that the values at any time point of variable actions obey a type I maximum distribution [4], whose probability density function is:

$$ f_X(x) = \frac{1}{\alpha} e^{-\frac{x-\mu}{\alpha}} \exp\left\{ -e^{-\frac{x-\mu}{\alpha}} \right\} $$

where $\mu, \alpha$ are the distribution parameters, $-\infty < \mu < \infty$, $0 < \alpha < \infty$. The characteristic value and frequent value of variable actions are usually expressed as a down tantile with $p$ calibration of the random variable $X$, written $x_p$; then

$$ P\{X \le x_p\} = \exp\left\{ -e^{-\frac{x_p-\mu}{\alpha}} \right\} = p $$

$$ x_p = \mu + k\alpha $$

where $p$ is the guarantee rate of the characteristic value or frequent value, and $k = -\ln(-\ln p)$.
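As a quick numerical check, the coefficient $k$ for the guarantee rates used later in this paper can be computed directly (a minimal sketch in Python):

```python
import math

def gumbel_k(p):
    """Coefficient k in x_p = mu + k*alpha for a type I maximum (Gumbel)
    distribution, from P{X <= x_p} = exp(-exp(-k)) = p."""
    return -math.log(-math.log(p))

for p in (0.90, 0.95, 0.98, 0.99):
    print(f"p = {p:.2f}  ->  k = {gumbel_k(p):.3f}")
```

The values for $p$ = 0.90, 0.95, and 0.99 (2.250, 2.970, 4.600) reappear as the moment-method coefficients in Table 2.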
It is assumed that the sample $X$ has a capacity of $n$ and is arranged in order from small to large: $X_{(1)}, X_{(2)}, \ldots, X_{(n)}$; the test values are $x_{(1)}, x_{(2)}, \ldots, x_{(n)}$, respectively. Let

$$ Y = -X $$

Then $Y$ obeys a type I minimum distribution with the two parameters $-\mu, \alpha$; its order statistics and the up tantile with $p$ calibration are:

$$ Y_{(j)} = -X_{(n-j+1)}, \quad j = 1, 2, \ldots, n $$

$$ y_p = -\mu - k\alpha = -x_p $$
Let

$$ V = \frac{\tilde{\mu} - y_p}{\tilde{\alpha}} $$

$$ \tilde{\mu} = \sum_{j=1}^{n} D_I(n,n,j)\, Y_{(j)} = \sum_{j=1}^{n} D_I(n,n,j)\, [-X_{(n-j+1)}] $$

$$ \tilde{\alpha} = \sum_{j=1}^{n} C_I(n,n,j)\, Y_{(j)} = \sum_{j=1}^{n} C_I(n,n,j)\, [-X_{(n-j+1)}] $$

where $\tilde{\mu}, \tilde{\alpha}$ are the best linear invariant estimators of the parameters $-\mu, \alpha$ of $Y$, respectively, and $D_I(n,n,j), C_I(n,n,j)$ are coefficients that depend on the sort order $j$ and the sample capacity $n$ and can be looked up in [16]. The probability distribution of the random variable $V$ is then unrelated to the unknown parameters $\mu, \alpha$, and it can be confirmed through numerical simulation [17]. Since
$$ P\{V \le v_{p,C}\} = P\{y_p \ge \tilde{\mu} - v_{p,C}\tilde{\alpha}\} = P\{x_p \le -\tilde{\mu} + v_{p,C}\tilde{\alpha}\} = C $$

we can substitute the test values into the formula, and the characteristic value or frequent value of variable actions is inferred as the upper limit estimate, that is,

$$ x_p = -\tilde{\mu} + v_{p,C}\tilde{\alpha} = \sum_{j=1}^{n} D_I(n,n,j)\, x_{(n-j+1)} + v_{p,C} \sum_{j=1}^{n} \left[ -C_I(n,n,j)\, x_{(n-j+1)} \right] $$

where $v_{p,C}$ is the down tantile with $C$ calibration of the random variable $V$, whose numerical table can be looked up in [16], and $C$ is the confidence degree.
The linear regression estimation takes into account both the sample capacity and the sample order, adequately utilizing the information in the sample; it can be used with minor samples and considers the influence of statistical uncertainty at different confidence degrees. However, it is inconvenient, because a number of data such as $D_I(n,n,j)$, $C_I(n,n,j)$, and $v_{p,C}$ must be looked up, and the present numerical tables only give values for $p$ = 0.90, 0.95, 0.99 and $n \le 25$ [16], which does not totally meet the demands of inferring the characteristic and frequent values of variable actions. Furthermore, it is very difficult to establish a new numerical table, since that would require a tedious numerical simulation.
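To make the procedure concrete, the following sketch (Python) evaluates the linear regression estimate on the $n$ = 10 sample of Section 4, using the $C_I(10,10,j)$, $D_I(10,10,j)$ coefficients reproduced in Table 1 and the tabulated tantile $v_{p,C}$ = 2.720 for $p$ = 0.90, $C$ = 0.60:

```python
# x_(n-j+1) for j = 1..10 (sample of Section 4, kN/m^2), and the
# coefficients C_I(10,10,j), D_I(10,10,j) from Table 1.
x_desc = [2.05, 1.86, 1.72, 1.47, 1.39, 1.29, 1.22, 1.11, 1.05, 1.02]
C_I = [-0.0727, -0.0780, -0.0772, -0.0719, -0.0617,
       -0.0454, -0.0207, 0.0179, 0.0851, 0.3246]
D_I = [0.0273, 0.0400, 0.0525, 0.0654, 0.0793,
       0.0946, 0.1124, 0.1342, 0.1642, 0.2300]

mu_t = sum(d * (-x) for d, x in zip(D_I, x_desc))     # best linear invariant estimate from Y = -X
alpha_t = sum(c * (-x) for c, x in zip(C_I, x_desc))
v_pC = 2.720                                          # tabulated tantile for p = 0.90, C = 0.60
x_p = -mu_t + v_pC * alpha_t                          # upper limit estimate of x_p
print(f"x_p = {x_p:.3f} kN/m^2")                      # cf. Table 2: 1.954
```

The result reproduces the Equation (11) entry of Table 2 for $p$ = 0.90, $C$ = 0.60.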

3. Bayesian Inference Method

In this section, we consider Bayesian inference methods. Firstly, we discuss the Jeffreys non-informative prior distribution of a type I maximum distribution and obtain the specific formulas. Then, Bayesian methods for inferring the characteristic and frequent values of variable actions are put forward, including a method for the case of a known standard deviation, which yields still better results.

3.1. Jeffreys Non-Informative Prior Distribution of Type I Maximum Distribution

As is well known, in Bayesian analysis the posterior distribution used for statistical inference and decision is based on the prior distribution [5]; therefore, how to obtain the prior distribution is the key problem of the Bayesian method. This section uses Fisher's information matrix to construct the prior distribution given by Jeffreys and provides the corresponding prior distributions for a type I maximum distribution, laying the foundation for the establishment of the Bayesian inference method.

3.1.1. Jeffreys Principle

In 1961, Jeffreys proposed a selection method for a non-informative prior distribution based on the information function, known as the Jeffreys principle [18]. Jeffreys required that the selection of the prior distribution obey an invariance principle: if $\pi(\theta)$ is a prior distribution of the parameter $\theta$ and $\pi_g(\eta)$ is a prior distribution of the parameter $\eta = g(\theta)$, then the following formula should hold:

$$ \pi(\theta) = \pi_g(g(\theta))\, |g'(\theta)| $$

If the $\pi(\theta)$ selected by the Jeffreys principle satisfies Equation (12), then the prior distributions determined through $\theta$ and through $g(\theta)$ are always consistent and do not contradict each other. The difficulty is how to find a $\pi(\theta)$ which satisfies the condition in Equation (12); Jeffreys found one by cleverly using the properties of the Fisher information matrix.
Let $g(\theta)$ be a function of $\theta$ such that $\eta = g(\theta)$ and $\theta$ have the same dimension. Then,

$$ |I(\theta)|^{\frac{1}{2}} = \left| \frac{\partial g(\theta)}{\partial \theta} \right| |I(\eta)|^{\frac{1}{2}} $$

where $|I(\theta)|^{\frac{1}{2}}, |I(\eta)|^{\frac{1}{2}}$ denote the square roots of the determinants $|I(\theta)|, |I(\eta)|$, respectively.
Firstly, we denote $\ln p = \ln p(x_1, \ldots, x_n; \theta)$. It is clear that

$$ \left( \frac{\partial \ln p}{\partial \theta} \right) = \left( \frac{\partial \ln p}{\partial \eta} \right) \left( \frac{\partial \eta}{\partial \theta} \right) = \left( \frac{\partial \ln p}{\partial \eta} \right) \left( \frac{\partial g(\theta)}{\partial \theta} \right) $$

Hence,

$$ I(\theta) = E\left( \frac{\partial \ln p}{\partial \theta} \right)\left( \frac{\partial \ln p}{\partial \theta} \right)' = \left( \frac{\partial g(\theta)}{\partial \theta} \right)' \left[ E\left( \frac{\partial \ln p}{\partial \eta} \right)\left( \frac{\partial \ln p}{\partial \eta} \right)' \right] \left( \frac{\partial g(\theta)}{\partial \theta} \right) $$

Taking determinants and square roots of both sides proves the proposition.
In conclusion, Jeffreys simply used $|I(\theta)|^{\frac{1}{2}}$ as the kernel of the non-informative prior distribution.

3.1.2. The Steps for Searching Jeffreys Prior Distribution

In the last section, we obtained a result, that is, Jeffreys simply used the square root of the Fisher information matrix determinant as the non-informative prior distribution.
Let $X = (X_1, X_2, \ldots, X_n)$ be a sample from the probability density function $p(x|\theta)$, where $\theta = (\theta_1, \theta_2, \ldots, \theta_p)$ is a $p$-dimensional parameter vector. The steps for finding the Jeffreys prior distribution when we have no prior information about $\theta$ are as follows:
Step 1: Find the log-likelihood function of the sample:
$$ l(\theta|x) = \ln\left[ \prod_{i=1}^{n} p(x_i|\theta) \right] = \sum_{i=1}^{n} \ln p(x_i|\theta) $$
Step 2: Find the information matrix of the sample:
$$ I(\theta) = E_{x|\theta}\left( -\frac{\partial^2 l}{\partial \theta_i \partial \theta_j} \right), \quad i, j = 1, 2, \ldots, p $$
In particular, when there is a single parameter ($p = 1$),
$$ I(\theta) = E_{x|\theta}\left( -\frac{\partial^2 l}{\partial \theta^2} \right) $$
Step 3: The non-informative prior density of θ is:
$$ \pi(\theta) \propto [\det I(\theta)]^{1/2} $$
where $\det I(\theta)$ is the determinant of the $p \times p$ matrix $I(\theta)$. In particular, when there is a single parameter ($p = 1$),
$$ \pi(\theta) \propto [I(\theta)]^{1/2} $$
The above Equation (20) means that $\pi(\theta)$ is proportional to $[I(\theta)]^{1/2}$; the proportionality coefficient can be determined from $\int_\theta \pi(\theta)\, d\theta = 1$. In the Bayesian formula, however, the proportionality coefficient cancels, so this step can be omitted.
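The three steps can be checked numerically on a simple single-parameter family. The sketch below is not from the paper: the exponential density $p(x|\theta) = \theta e^{-\theta x}$ is chosen purely for illustration. Step 2 is approximated by a central second difference in $\theta$ and midpoint quadrature in $x$, recovering the known result $I(\theta) = 1/\theta^2$, whose square root gives the Jeffreys prior $\pi(\theta) \propto 1/\theta$:

```python
import math

def fisher_info_exponential(theta, n_grid=20000, x_max=50.0):
    """Approximate I(theta) = E[-d^2/dtheta^2 ln p(x|theta)] for the
    (illustrative) exponential density p(x|theta) = theta*exp(-theta*x)."""
    h = 1e-3
    def loglik(x, t):
        return math.log(t) - t * x
    total, dx = 0.0, x_max / n_grid
    for i in range(n_grid):
        x = (i + 0.5) * dx
        # central second difference of the log-likelihood in theta
        d2 = (loglik(x, theta + h) - 2.0 * loglik(x, theta) + loglik(x, theta - h)) / h**2
        total += -d2 * theta * math.exp(-theta * x) * dx   # expectation under p(x|theta)
    return total

# Step 3: the Jeffreys kernel is [I(theta)]^{1/2}, i.e. proportional to 1/theta
print(fisher_info_exponential(2.0))   # ~ 1/2^2 = 0.25
```

The same three steps, applied analytically to the type I maximum distribution, give the priors derived in Section 3.1.3.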

3.1.3. The Formulas of the Jeffreys Non-Informative Prior Distribution of a Type I Maximum Distribution

Let the random variable $X = (X_1, X_2, \ldots, X_n)$ obey the type I maximum distribution, denoted $Max(\mu, \alpha)$, whose probability density function is $f(x_i|\mu, \alpha) = \frac{1}{\alpha} e^{-\frac{x_i-\mu}{\alpha}} \exp\left\{ -e^{-\frac{x_i-\mu}{\alpha}} \right\}$, where $-\infty < \mu < +\infty$, $0 < \alpha < +\infty$, $\mu = \mu_X - C_E \frac{\sqrt{6}}{\pi}\sigma_X$, $\alpha = \frac{\sqrt{6}}{\pi}\sigma_X$, and $C_E$ is Euler's constant.
It is clear that the log-likelihood function of the sample is:

$$ L(\mu, \alpha|x) = \ln\left\{ \frac{1}{\alpha^n}\, e^{-\frac{n(\bar{x}-\mu)}{\alpha}}\, e^{-\sum_{i=1}^{n} e^{-\frac{x_i-\mu}{\alpha}}} \right\} = -n\ln\alpha - \frac{n(\bar{x}-\mu)}{\alpha} - \sum_{i=1}^{n} e^{-\frac{x_i-\mu}{\alpha}} $$
When $\mu$ is unknown and $\alpha$ is known:

$$ \frac{\partial L}{\partial \mu} = \frac{n}{\alpha} - \sum_{i=1}^{n} \frac{1}{\alpha} e^{-\frac{x_i-\mu}{\alpha}} $$

$$ \frac{\partial^2 L}{\partial \mu^2} = -\sum_{i=1}^{n} \frac{1}{\alpha^2} e^{-\frac{x_i-\mu}{\alpha}} $$

Let $e^{-\frac{x-\mu}{\alpha}} = y$, so that $x = \mu - \alpha\ln y$ and $dx = -\frac{\alpha}{y}\,dy$. Thus,

$$ E\left( e^{-\frac{x-\mu}{\alpha}} \right) = \int_{-\infty}^{+\infty} e^{-\frac{x-\mu}{\alpha}}\, \frac{1}{\alpha} e^{-\frac{x-\mu}{\alpha}} e^{-e^{-\frac{x-\mu}{\alpha}}}\, dx = \int_{0}^{+\infty} y e^{-y}\, dy = 1 $$

Then the information matrix of the sample is

$$ I(\mu) = E_{x|\mu}\left( -\frac{\partial^2 L}{\partial \mu^2} \right) = \sum_{i=1}^{n} \frac{1}{\alpha^2} E\left( e^{-\frac{x_i-\mu}{\alpha}} \right) = \frac{n}{\alpha^2} $$

The non-informative prior density of $\mu$ is:

$$ \pi(\mu) = [\det I(\mu)]^{\frac{1}{2}} = \frac{\sqrt{n}}{\alpha} $$

Hence,

$$ \pi(\mu) \propto 1 $$
When $\alpha$ is unknown and $\mu$ is known:

$$ \frac{\partial L}{\partial \alpha} = -\frac{n}{\alpha} + \frac{n(\bar{x}-\mu)}{\alpha^2} - \sum_{i=1}^{n} \frac{x_i-\mu}{\alpha^2} e^{-\frac{x_i-\mu}{\alpha}} $$

$$ \frac{\partial^2 L}{\partial \alpha^2} = \frac{n}{\alpha^2} - \frac{2n(\bar{x}-\mu)}{\alpha^3} + \sum_{i=1}^{n} \frac{2(x_i-\mu)}{\alpha^3} e^{-\frac{x_i-\mu}{\alpha}} - \sum_{i=1}^{n} \frac{(x_i-\mu)^2}{\alpha^4} e^{-\frac{x_i-\mu}{\alpha}} $$

With the same substitution $e^{-\frac{x-\mu}{\alpha}} = y$,

$$ E\left( \frac{x-\mu}{\alpha} e^{-\frac{x-\mu}{\alpha}} \right) = \int_{-\infty}^{+\infty} \frac{x-\mu}{\alpha}\, e^{-\frac{x-\mu}{\alpha}}\, \frac{1}{\alpha} e^{-\frac{x-\mu}{\alpha}} e^{-e^{-\frac{x-\mu}{\alpha}}}\, dx = -\int_{0}^{+\infty} y\ln y\, e^{-y}\, dy = C_E - 1 $$

and

$$ E\left[ \left( \frac{x-\mu}{\alpha} \right)^2 e^{-\frac{x-\mu}{\alpha}} \right] = \int_{-\infty}^{+\infty} \left( \frac{x-\mu}{\alpha} \right)^2 e^{-\frac{x-\mu}{\alpha}}\, \frac{1}{\alpha} e^{-\frac{x-\mu}{\alpha}} e^{-e^{-\frac{x-\mu}{\alpha}}}\, dx = \int_{0}^{+\infty} y(\ln y)^2 e^{-y}\, dy = \frac{\pi^2}{6} + C_E^2 - 2C_E $$

Then the information matrix of the sample is

$$ I(\alpha) = E_{x|\alpha}\left( -\frac{\partial^2 L}{\partial \alpha^2} \right) = -\frac{n}{\alpha^2} + \frac{2n[E(\bar{x})-\mu]}{\alpha^3} - \sum_{i=1}^{n} \frac{2}{\alpha^2} E\left[ \frac{x_i-\mu}{\alpha} e^{-\frac{x_i-\mu}{\alpha}} \right] + \sum_{i=1}^{n} \frac{1}{\alpha^2} E\left[ \left( \frac{x_i-\mu}{\alpha} \right)^2 e^{-\frac{x_i-\mu}{\alpha}} \right] = \frac{n\left( 1 + \frac{\pi^2}{6} + C_E^2 - 2C_E \right)}{\alpha^2} $$

The non-informative prior density of $\alpha$ is

$$ \pi(\alpha) = [\det I(\alpha)]^{\frac{1}{2}} = \frac{\sqrt{n\left( 1 + \frac{\pi^2}{6} + C_E^2 - 2C_E \right)}}{\alpha} $$

Hence,

$$ \pi(\alpha) \propto \frac{1}{\alpha} $$
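The two expectation integrals above reduce to standard Gamma-type integrals, $\int_0^{\infty} y\ln y\, e^{-y}\,dy = 1 - C_E$ and $\int_0^{\infty} y(\ln y)^2 e^{-y}\,dy = \pi^2/6 + C_E^2 - 2C_E$, which can be checked by direct quadrature (a sketch in Python):

```python
import math

C_E = 0.577216                      # Euler's constant
dx, I1, I2 = 1e-3, 0.0, 0.0
for i in range(50000):              # midpoint rule over (0, 50]
    y = (i + 0.5) * dx
    w = y * math.exp(-y) * dx
    I1 += w * math.log(y)
    I2 += w * math.log(y) ** 2

# E[z e^{-z}] = -I1 = C_E - 1;  E[z^2 e^{-z}] = I2 = pi^2/6 + C_E^2 - 2*C_E
print(-I1, I2)
```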
When $\mu$ and $\alpha$ are both unknown and mutually independent, the non-informative prior density of $(\mu, \alpha)$ is:

$$ \pi(\mu, \alpha) = \pi(\mu)\pi(\alpha) \propto \frac{1}{\alpha} $$
When $\mu$ and $\alpha$ are both unknown and not mutually independent:

$$ \frac{\partial^2 L}{\partial \mu \partial \alpha} = -\frac{n}{\alpha^2} + \sum_{i=1}^{n} \frac{1}{\alpha^2} e^{-\frac{x_i-\mu}{\alpha}} - \sum_{i=1}^{n} \frac{x_i-\mu}{\alpha^3} e^{-\frac{x_i-\mu}{\alpha}} $$

$$ E\left( -\frac{\partial^2 L}{\partial \mu \partial \alpha} \right) = E\left( \frac{n}{\alpha^2} - \sum_{i=1}^{n} \frac{1}{\alpha^2} e^{-\frac{x_i-\mu}{\alpha}} + \sum_{i=1}^{n} \frac{x_i-\mu}{\alpha^3} e^{-\frac{x_i-\mu}{\alpha}} \right) = \frac{n(C_E - 1)}{\alpha^2} $$
By combining Equation (25) with (32), we derive that the information matrix of the sample is

$$ I(\mu, \alpha) = \begin{pmatrix} \dfrac{n}{\alpha^2} & \dfrac{n(C_E-1)}{\alpha^2} \\[2mm] \dfrac{n(C_E-1)}{\alpha^2} & \dfrac{n\left( 1 + \frac{\pi^2}{6} + C_E^2 - 2C_E \right)}{\alpha^2} \end{pmatrix} $$

The non-informative prior density of $(\mu, \alpha)$ is

$$ \pi(\mu, \alpha) = [\det I(\mu, \alpha)]^{\frac{1}{2}} = \frac{n\pi}{\sqrt{6}\,\alpha^2} $$

Hence,

$$ \pi(\mu, \alpha) \propto \frac{1}{\alpha^2} $$

3.2. The Establishment of Bayesian Inference Method

In this section, we elaborate on the process of establishing the Bayesian inference method, including the case of a known standard deviation. The specific methods for inferring the characteristic and frequent values of variable actions are put forward using the non-informative prior distributions obtained in Section 3.1.

3.2.1. In the Condition of a Known Standard Deviation $\sigma_X$

When the standard deviation $\sigma_X$ of the random variable $X$ (the value at any time point of a variable action) is known, the distribution parameter is $\alpha = \frac{\sqrt{6}}{\pi}\sigma_X = 0.780\sigma_X$. It is assumed that the test values of the sample $X$ are $x_1, \ldots, x_n$; then the joint probability density function is

$$ f_{X_1,\ldots,X_n}(x_1, \ldots, x_n|\mu) = \frac{1}{\alpha^n}\, e^{-\frac{n(\bar{x}-\mu)}{\alpha}} \exp\left\{ -\sum_{i=1}^{n} e^{-\frac{x_i-\mu}{\alpha}} \right\} $$
where $\bar{x}$ is the sample mean. In Bayesian analysis, we usually select the Jeffreys non-informative prior distribution as the prior distribution of the unknown parameter $\mu$ [18]; using the above Equation (27), we know that

$$ \pi_\mu(\theta) = 1 $$

After a series of calculations, we obtain

$$ \pi_\mu(\theta|x_1, \ldots, x_n) = \frac{\pi_\mu(\theta)\, f_{X_1,\ldots,X_n}(x_1, \ldots, x_n|\theta)}{\int \pi_\mu(\theta)\, f_{X_1,\ldots,X_n}(x_1, \ldots, x_n|\theta)\, d\theta} \propto \frac{1}{\alpha^n}\, e^{-\frac{n(\bar{x}-\theta)}{\alpha}} \exp\left\{ -\sum_{i=1}^{n} e^{-\frac{x_i-\theta}{\alpha}} \right\} $$

where the sign "$\propto$" denotes "is proportional to". With a variable substitution like Equation (3), we can figure out the posterior distribution of the tantile $x_p$, that is,

$$ \pi_{x_p}(z|x_1, \ldots, x_n) \propto \frac{1}{\alpha^n}\, e^{-\frac{n(\bar{x}-z+k\alpha)}{\alpha}} \exp\left\{ -\sum_{i=1}^{n} e^{-\frac{x_i-z+k\alpha}{\alpha}} \right\} \propto \left( e^{\frac{z-k\alpha}{\alpha}} \right)^n \exp\left\{ -e^{\frac{z-k\alpha}{\alpha}} \sum_{i=1}^{n} e^{-\frac{x_i}{\alpha}} \right\} $$
It is assumed that

$$ U = e^{\frac{x_p-k\alpha}{\alpha}} \sum_{i=1}^{n} e^{-\frac{x_i}{\alpha}} $$

Then the distribution of $U$ is

$$ \pi_U(u|x_1, \ldots, x_n) \propto u^{n-1} e^{-u} $$

Hence, $U$ obeys the standard Gamma distribution $Ga(n, 1)$ [9] with parameter $n$. According to Equation (44), by using the upper limit of an interval estimate, we obtain the characteristic value or frequent value of variable actions after some calculation, that is,

$$ x_p = \left( k + \ln\frac{\gamma(n,1,C)/n}{\bar{y}} \right)\alpha = \left( k + \ln\frac{k_1}{\bar{y}} \right)\alpha $$

$$ \bar{y} = \frac{1}{n} \sum_{i=1}^{n} e^{-\frac{x_i}{\alpha}} $$

where $\gamma(n,1,C)$ is the down tantile with $C$ calibration of the standard Gamma distribution $Ga(n, 1)$, $C$ is the confidence degree, and $k_1 = \gamma(n,1,C)/n$.
When the coefficient of variation $\delta_X$ of $X$ is known instead of $\sigma_X$ itself, the sample mean is only slightly affected by statistical uncertainty, so we can take $\alpha = 0.780\bar{x}\delta_X$ approximately and still infer the characteristic or frequent value of variable actions using Equation (47).
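As an illustration of the known-$\sigma_X$ method, the sketch below (Python) reproduces the Equation (47) inference for the $n$ = 10 sample of Section 4, with $\alpha$ = 0.277 kN/m², $p$ = 0.90, and $C$ = 0.75; the Gamma down tantile $\gamma(n,1,C)$ is found by bisection on the closed-form CDF for integer $n$:

```python
import math

def gamma_down_tantile(n, C):
    """Down tantile gamma(n,1,C) of the standard Gamma distribution Ga(n,1),
    by bisection on the closed-form CDF 1 - e^{-x} * sum_{i<n} x^i/i!
    (valid for integer shape n)."""
    def cdf(x):
        return 1.0 - math.exp(-x) * sum(x**i / math.factorial(i) for i in range(n))
    lo, hi = 0.0, 10.0 * n
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if cdf(mid) < C:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Sample of Section 4 (kN/m^2); alpha = 0.780*s = 0.277 treated as known
x = [2.05, 1.86, 1.72, 1.47, 1.39, 1.29, 1.22, 1.11, 1.05, 1.02]
n, alpha, p, C = len(x), 0.277, 0.90, 0.75

k = -math.log(-math.log(p))                        # k = -ln(-ln p)
k1 = gamma_down_tantile(n, C) / n                  # k1 = gamma(n,1,C)/n
y_bar = sum(math.exp(-xi / alpha) for xi in x) / n
x_p = (k + math.log(k1 / y_bar)) * alpha           # upper limit estimate of x_p
print(k1, x_p)                                     # cf. Table 2: 1.191 and 1.938
```

The computed $k_1$ and $x_p$ agree with the Equation (47) entries of Tables 2 and 3 for $n$ = 10, $C$ = 0.75.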

3.2.2. In the Condition of Unknown Parameter Information

When we have no information about the parameters $\mu, \alpha$, we usually select the Jeffreys non-informative prior distribution as the prior distribution of the unknown parameters [18]; by the above Equation (34), we know that

$$ \pi_{\mu,\alpha}(\theta_1, \theta_2) = \frac{1}{\theta_2} $$
Similarly, we can obtain the joint posterior density of $(x_p, \alpha)$ after some calculation, that is,

$$ \pi_{x_p,\alpha}(z, \theta_2|x_1, \ldots, x_n) \propto \frac{1}{\theta_2^{n+1}}\, e^{-\frac{n(\bar{x}-z+k\theta_2)}{\theta_2}} \exp\left\{ -\sum_{i=1}^{n} e^{-\frac{x_i-z+k\theta_2}{\theta_2}} \right\} $$

Since the distribution (50) is rather complicated, we approximately replace the exponential function by the first terms of its Taylor series. That is,

$$ e^{-\frac{x_i-z+k\theta_2}{\theta_2}} = e^{-\frac{x_i-z+k\theta_2-C_E\theta_2}{\theta_2}-C_E} \approx e^{-C_E}\left[ 1 - \frac{x_i-z+k\theta_2-C_E\theta_2}{\theta_2} + \frac{1}{2}\left( \frac{x_i-z+k\theta_2-C_E\theta_2}{\theta_2} \right)^2 \right] $$

where $C_E$ is Euler's constant, $C_E \approx 0.5772$, and the expansion point is $\frac{x_i-x_p+k\alpha-C_E\alpha}{\alpha} = 0$. Since $\mu_X = \mu + C_E\alpha$, where $\mu_X$ is the mean of $X$, it is clear that $x_i - \mu_X = 0$ at the expansion point. Therefore, we can obtain a relatively simple and accurate solution after substituting Equation (51) into Equation (50). Through calculation, we obtain the joint posterior distribution of $(x_p, \alpha)$, that is,

$$ \pi_{x_p,\alpha}(z, \theta_2|x_1, \ldots, x_n) \propto \frac{1}{\theta_2^{n+1}}\, e^{-\frac{1}{2}\frac{e^{-C_E}\left[ (n-1)s^2 + n(\bar{x}-z)^2 \right]}{\theta_2^2}}\, e^{\frac{\left[ e^{-C_E}(1-k+C_E)-1 \right] n(\bar{x}-z)}{\theta_2}} $$
where $s$ is the sample standard deviation. Expand the second exponential factor as a power series:

$$ e^{\frac{\left[ e^{-C_E}(1-k+C_E)-1 \right] n(\bar{x}-z)}{\theta_2}} = \sum_{m=0}^{\infty} \frac{\left\{ \left[ e^{-C_E}(1-k+C_E)-1 \right] n(\bar{x}-z) \right\}^m}{m!} \left( \frac{1}{\theta_2^2} \right)^{\frac{m}{2}} $$

Let

$$ U = \frac{x_p - \bar{x}}{s/\sqrt{n}} $$

$$ V = \frac{e^{-C_E}\left[ (n-1)s^2 + n(x_p-\bar{x})^2 \right]}{\alpha^2} $$

We can obtain the joint distribution of $(U, V)$ as follows:

$$ \pi_{U,V}(u, v|x_1, \ldots, x_n) \propto \frac{1}{(n-1+u^2)^{\frac{n}{2}}} \sum_{m=0}^{\infty} \frac{\Gamma\left( \frac{n+m}{2} \right)}{m!} \left( \frac{k+e^{C_E}-C_E-1}{e^{C_E/2}/\sqrt{n}} \right)^m \left[ \frac{\sqrt{2}\,u}{\sqrt{(n-1)+u^2}} \right]^m \cdot \frac{e^{-\frac{v}{2}}\, v^{\frac{n+m}{2}-1}}{2^{\frac{n+m}{2}}\, \Gamma\left( \frac{n+m}{2} \right)} $$
where the last fraction in Equation (56) is the probability density function of the $\chi^2$ distribution [9] with $n+m$ degrees of freedom. The marginal distribution of $U$ is then obtained by integrating the above formula over $v$, that is,

$$ \pi_U(u|x_1, \ldots, x_n) \propto \frac{1}{\left[ (n-1)+u^2 \right]^{\frac{(n-1)+1}{2}}} \sum_{m=0}^{\infty} \frac{\Gamma\left[ \frac{(n-1)+m+1}{2} \right]}{m!} \left( \frac{k+e^{C_E}-C_E-1}{e^{C_E/2}/\sqrt{n}} \right)^m \left[ \frac{\sqrt{2}\,u}{\sqrt{(n-1)+u^2}} \right]^m $$

Hence, $U$ obeys the noncentral $t$ distribution [9] with noncentrality parameter $\lambda$ and $n-1$ degrees of freedom, where

$$ \lambda = \frac{k+e^{C_E}-C_E-1}{e^{C_E/2}/\sqrt{n}} = (0.152751 + 0.749306\,k)\sqrt{n} $$
Then, we can obtain the characteristic value or frequent value of variable actions by using the upper limit of an interval estimate, that is,

$$ x_p = \bar{x} + \frac{t(n-1, \lambda, 1-C)}{\sqrt{n}}\, s = \bar{x} + k_2 s $$

where $t(n-1, \lambda, 1-C)$ is the up tantile with $1-C$ calibration of the noncentral $t$ distribution with noncentrality parameter $\lambda$ and $n-1$ degrees of freedom, $C$ is the confidence degree, and $k_2 = t(n-1, \lambda, 1-C)/\sqrt{n}$. Since the parameter $\lambda$ in the available $t(n-1, \lambda, 1-C)$ numerical tables does not totally meet the demands of variable actions inference, and it is very difficult to establish a new numerical table, we use a relatively simple and accurate approximation to calculate $t(n-1, \lambda, 1-C)$ [19], that is,

$$ t(n-1, \lambda, 1-C) = \frac{\lambda + z_{1-C}\sqrt{1 + \dfrac{\lambda^2 - z_{1-C}^2}{2(n-1)}}}{1 - \dfrac{z_{1-C}^2}{2(n-1)}} $$

where $z_{1-C}$ is the up tantile with $1-C$ calibration of the standard normal distribution.
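This closed-form approximation makes $k_2$ easy to evaluate without noncentral-$t$ tables. A sketch in Python, with the standard normal up tantile obtained by bisection on `math.erf`:

```python
import math

def z_up(q):
    """Up tantile z_q of the standard normal distribution, P{Z > z_q} = q,
    found by bisection on the CDF 0.5*(1 + erf(z/sqrt(2)))."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < 1.0 - q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def k2(n, p, C):
    """k2 = t(n-1, lambda, 1-C)/sqrt(n), using the closed-form
    approximation for the noncentral t tantile."""
    k = -math.log(-math.log(p))
    lam = (0.152751 + 0.749306 * k) * math.sqrt(n)
    z = z_up(1.0 - C)
    nu = n - 1
    t = (lam + z * math.sqrt(1.0 + (lam**2 - z**2) / (2.0 * nu))) / (1.0 - z**2 / (2.0 * nu))
    return t / math.sqrt(n)

print(k2(10, 0.90, 0.75))   # ~2.256, cf. Table 3
```

For $n$ = 10, $p$ = 0.90, $C$ = 0.75 this reproduces the tabulated $k_2$ = 2.256, and $x_p = \bar{x} + k_2 s = 1.418 + 2.256 \times 0.355 \approx 2.219$ kN/m², matching Table 2.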
The guarantee rate $p$ of the characteristic value or frequent value of variable actions is no less than 0.90 [1]; for wind load and snow load, for example, $p$ = 0.98 (characteristic value) and $p$ = 0.90 (frequent value) [20,21,22,23]. In the next section, we present, through contrastive analysis, the accuracy of the Bayesian inference method for $p \ge 0.90$ [24,25,26].

4. Contrastive Analysis

It is assumed that the sample $X$ has a capacity of 10 and is arranged in order from small to large; the test values are $x_{(1)}, x_{(2)}, \ldots, x_{(n)}$, respectively (see Table 1; measurements in kN/m²). Through calculation, we obtain the following statistics:

$\bar{x}$ = 1.418 kN/m², $s$ = 0.355 kN/m², $\delta = s/\bar{x}$ = 0.250, $\alpha = 0.780s$ = 0.277 kN/m², $\mu = \bar{x} - 0.5772\alpha$ = 1.258 kN/m²
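These statistics follow directly from the Table 1 test values (a sketch using Python's standard `statistics` module):

```python
import statistics

x = [2.05, 1.86, 1.72, 1.47, 1.39, 1.29, 1.22, 1.11, 1.05, 1.02]
x_bar = statistics.mean(x)        # sample mean, 1.418
s = statistics.stdev(x)           # sample standard deviation (n-1 divisor), 0.355
delta = s / x_bar                 # coefficient of variation, 0.250
alpha = 0.780 * s                 # alpha = (sqrt(6)/pi)*s, 0.277
mu = x_bar - 0.5772 * alpha       # mu = x_bar - C_E*alpha, 1.258
print(x_bar, s, delta, alpha, mu)
```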
In order to perform a contrastive analysis of the accuracy of the different inference methods, we select the guarantee rate $p$ of $x_p$ as 0.90, 0.95, and 0.99 from the available $v_{p,C}$ numerical tables. The results inferred at different confidence degrees and guarantee rates are listed in Table 2, including the result inferred using Equation (47) with $\sigma_X = s$; there, "coefficient" refers to $v_{p,C}$ for Equation (11), $k_1$ for Equation (47), $k_2$ for Equation (59), and $k$ for $x_p = \mu + k\alpha$ (the inference formula of the moment method), respectively. The values of $D_I(10,10,j)$, $C_I(10,10,j)$ and the corresponding numerical results are listed in Table 1.
Comparing the inferred results of the different methods, the value from the moment method is the lowest, and the higher the confidence degree $C$, the larger the divergence among the other inferred results. This is mainly because the moment method does not take into account the statistical uncertainty produced by a small sample, so its inferred results are always on the aggressive side. With the linear regression estimation results of Equation (11) as reference, the Bayesian results inferred using Equation (59) have good precision, and the higher the values of $p$ and $C$, the lower the relative error. Since the inferred results are conservative, the method can be applied in any case in which the confidence degree $C$ is no less than 0.90. Because more parameter information is used, the Bayesian results inferred using Equation (47) are obviously better than both the linear regression estimation results and the Bayesian results inferred with no parameter information; these advantages are more evident when the values of $p$ and $C$ are higher. Hence, when the standard deviation $\sigma_X$ is unknown, we can also select a larger value and infer the result using Equation (47). We also compared the inferred results for different $p$ and $C$ with sample sizes $n = 5$ and $n = 20$; the same conclusions were obtained.
The error of the Bayesian inference method with no parameter information mainly comes from the approximation used in Equation (51). On the basis of Equation (51), we can show that the joint posterior distribution of $(\mu, \alpha)$ is

$$ \pi_{\mu,\alpha}(\theta_1, \theta_2|x_1, \ldots, x_n) \propto \frac{1}{\left( e^{C_E/2}\theta_2 \right)^{n+1}}\, e^{-\frac{1}{2}\sum_{i=1}^{n} \frac{\left\{ x_i - \left[ \theta_1 + (1+C_E-e^{C_E})\theta_2 \right] \right\}^2}{\left( e^{C_E/2}\theta_2 \right)^2}} $$

In fact, this is also the joint posterior distribution of the parameters $(\mu', \sigma')$ of the normal distribution $N(\mu', \sigma'^2)$, where

$$ \mu' = \mu + (1 + C_E - e^{C_E})\alpha $$

$$ \sigma' = e^{C_E/2}\alpha $$
This is equivalent to replacing the original type I maximum distribution by the normal distribution $N(\mu', \sigma'^2)$. Because the probability density curves of the normal distribution and the type I maximum distribution are closer on the right-hand side, we obtain a relatively accurate result when the guarantee rate $p$ is high.
From the inferred results we can also see that the results obtained with $C$ = 0.90 and $C$ = 0.95 are considerably higher than those of the moment method, and changes of $C$ have a great influence on them; in contrast, the results obtained with $C$ = 0.60 and $C$ = 0.75 are more suitable, and there is little difference between them. Selecting $C$ = 0.75 takes the influence of statistical uncertainty into account more fully while keeping the relative error small; in this article, we therefore suggest selecting the confidence degree $C$ = 0.75. For convenient application, the numerical tables of $k_1$, $k_2$ for $C$ = 0.75 are listed in Table 3.

5. Conclusions

  • When the test data are insufficient, statistical uncertainty has a great influence on the inference of the representative values of variable actions, especially the characteristic value and frequent value. The moment method adopted at present does not take this uncertainty into account, and its inferred results are always on the aggressive side.
  • The linear regression estimation is applicable to inferring the characteristic and frequent values of variable actions in the case of a minor sample; however, it is inconvenient because of the amount of tabulated data that must be sought, and the available numerical tables do not totally meet the demands of such inference.
  • The Bayesian inference method presented in this paper is applicable to inferring the characteristic and frequent values of variable actions in the case of a minor sample, and it is more convenient than the linear regression estimation. The method covering the condition of a known standard deviation yields a still better inference result.
  • The Bayesian inference method with no parameter information presented in this paper is convenient and flexible and has good precision; it can be applied in any case in which the confidence degree is no less than 0.90.
  • We suggest selecting the confidence degree C = 0.75 to infer the characteristic and frequent values of variable actions.

Author Contributions

Research ideas, J.Y.; methodology, X.W.; formal analysis, X.W.; data curation, X.W.; writing—original draft preparation, X.W.; writing—review and editing, X.W.; supervision, J.Y.; project administration, J.Y.; funding acquisition, J.Y.

Funding

This research was funded by the National Natural Science Foundation of China (Nos. 50678143 and 51278401) and the education department of Shaanxi (No.17JK0440).

Conflicts of Interest

The authors declare no conflict of interest regarding the publication of this paper.

References

  1. National Standard of the People’s Republic of China. Unified Standard for Reliability Design of Engineering Structures (GB50153-2008); China Architecture and Building Press: Beijing, China, 2009. [Google Scholar]
  2. Yao, J.T. Statistical inference of material strength of existing structures. J. Xi’an Univ. Archit. Technol. 2003, 35, 307–311. [Google Scholar]
  3. Feng, Y.F.; Gong, J.X.; Wang, J.C. Determination of frequent value and quasi-permanent value of floor live load and wind load. Ind. Constr. 2012, 42, 74–78. [Google Scholar]
  4. National Standard of the People’s Republic of China. Unified Standard for Design of Building Structures (GBJ68-84); China Architecture and Building Press: Beijing, China, 1984. [Google Scholar]
  5. Peng, X.Y.; Yan, Z.Z. Bayesian Estimation for Generalized Exponential Distribution Based on Progressive Type-I Interval Censoring. Acta Math. Appl. Sin. 2013, 29, 391–402. [Google Scholar] [CrossRef]
  6. Lin, Y.J.; Lio, Y.L. Bayesian inference under progressive type-I interval censoring. J. Appl. Stat. 2012, 39, 1811–1824. [Google Scholar] [CrossRef]
  7. Lin, C.T.; Wu, S.; Balakrishnan, D. Planning life tests with progressively Type-I interval censored data from the lognormal distribution. J. Stat. Plan. Inference 2009, 139, 54–61. [Google Scholar] [CrossRef]
  8. Gupta, R.D.; Kundu, D. Exponentiated exponential family: An alternative to Gamma and Weibull distributions. Biom. J. 2001, 43, 117–130. [Google Scholar] [CrossRef]
  9. Mao, S.S. Statistics Handbook; Beijing Science Press: Beijing, China, 2003. [Google Scholar]
  10. Aggarwala, R.P. Interval censoring: Some mathematical results application to inference. Commun. Stat. Theory Methods 2001, 30, 1921–1935. [Google Scholar] [CrossRef]
  11. Zhou, Y.Q. The Quality of Reliability Growth and Reliability Evaluation Method; Beihang University Press: Beijing, China, 1997. [Google Scholar]
  12. Kundu, D.; Biswabrata, P. Bayesian inference and life testing plans for generalized exponential distribution. Sci. China Ser. A Math. 2009, 52, 1373–1388. [Google Scholar] [CrossRef] [Green Version]
  13. Efthymios, G.T. Bayesian inference for multivariate gamma distributions. Stat. Comput. 2004, 14, 223–233. [Google Scholar]
  14. Chansoo, K.; Keunhee, H. Estimation of the scale parameter of the half-logistic distribution under progressively type II censored sample. Stat. Pap. 2010, 51, 375–387. [Google Scholar]
  15. Chopin, N.; Lelièvre, T.; Stoltz, G. Free energy methods for Bayesian inference: Efficient exploration of univariate Gaussian mixture posteriors. Stat. Comput. 2012, 22, 897–916. [Google Scholar] [CrossRef]
  16. Research Department of Machinery Industry Standard Fourth. Table for Reliability Test; National Defence Industry Press: Beijing, China, 1979. [Google Scholar]
  17. Dai, S.S.; Fei, H.L. Reliability Test and Statistical Analysis (First Book); National Defence Industry Press: Beijing, China, 1983. [Google Scholar]
  18. Mao, S.S. Bayes Statistics; China Statistics Press: Beijing, China, 1999. [Google Scholar]
  19. Yao, J.T. Reliability Assessment of Existing Structure Based on Uncertainty Reasoning Theory; Science Press: Beijing, China, 2001. [Google Scholar]
  20. National Standard of the People’s Republic of China. Load Code for the Design of Building Structures (GB50009-2001); China Architecture and Building Press: Beijing, China, 2001. [Google Scholar]
  21. Cruz Campas, M.E.; Gomez Alvarez, A.; Ramirez Leal, R.; Villalba Villalba, A.G.; Monge Amaya, O.; Varela Salazar, J.; Quiroz Castillo, J.M.; Duarte Tagles, H.F. Air quality regarding metals (pb, cd, ni, cu, cr) and relationship with respiratory health: Caso sonora, mexico. Rev. Int. Contam. Ambient. 2017, 33, 23–34. [Google Scholar] [CrossRef]
  22. Gil-Ramirez, A.; Morales, D.; Soler-Rivas, C. Molecular actions of hypocholesterolaemic compounds from edible mushrooms. Food Funct. 2018, 9, 53–69. [Google Scholar] [CrossRef] [PubMed]
  23. Liu, L.; Liu, X.; Liu, G. The risk management of perishable supply chain based on coloured petri net modeling. Inf. Process. Agric. 2018, 5, 47–59. [Google Scholar] [CrossRef]
  24. Liu, Z. What is the future of solar energy? Economic and policy barriers. Energy Sources Part B Econ. Plan. Policy 2018, 13, 169–172. [Google Scholar] [CrossRef]
  25. Milewski, S.; Zabek, K.; Antoszkiewicz, Z.; Tanski, Z.; Sobczak, A. Impact of production season on the chemical composition and health properties of goat milk and rennet cheese. Emir. J. Food Agric. 2018, 30, 107–114. [Google Scholar]
  26. Wang, L.; Ge, S.; Liu, Z.; Zhou, Y.; Yang, X.; Yang, W.; Li, D.; Peng, W. Properties of antibacterial bioboard from bamboo macromolecule by hot press. Saudi J. Biol. Sci. 2018, 25, 465–468. [Google Scholar] [CrossRef] [PubMed]
Table 1. Test value of sample and numerical tables of C I ( 10 , 10 , j ) , D I ( 10 , 10 , j ) .
| $j$ | $x_{(10-j+1)}$ | $C_I(10,10,j)$ | $D_I(10,10,j)$ | $-C_I(10,10,j)\,x_{(10-j+1)}$ | $D_I(10,10,j)\,x_{(10-j+1)}$ |
|---|------|---------|--------|---------|--------|
| 1 | 2.05 | −0.0727 | 0.0273 | 0.1490 | 0.0560 |
| 2 | 1.86 | −0.0780 | 0.0400 | 0.1451 | 0.0744 |
| 3 | 1.72 | −0.0772 | 0.0525 | 0.1328 | 0.0903 |
| 4 | 1.47 | −0.0719 | 0.0654 | 0.1057 | 0.0961 |
| 5 | 1.39 | −0.0617 | 0.0793 | 0.0858 | 0.1102 |
| 6 | 1.29 | −0.0454 | 0.0946 | 0.0586 | 0.1220 |
| 7 | 1.22 | −0.0207 | 0.1124 | 0.0253 | 0.1371 |
| 8 | 1.11 | 0.0179 | 0.1342 | −0.0199 | 0.1490 |
| 9 | 1.05 | 0.0851 | 0.1642 | −0.0894 | 0.1724 |
| 10 | 1.02 | 0.3246 | 0.2300 | −0.3311 | 0.2346 |
| Sum | — | 0 | 1 | 0.2619 | 1.2422 |
Table 2. Inferred results of x p .
| Guarantee rate $p$ | Inferring method | $C$ = 0.60 coeff. | $x_p$ | $C$ = 0.75 coeff. | $x_p$ | $C$ = 0.90 coeff. | $x_p$ | $C$ = 0.95 coeff. | $x_p$ | Moment coeff. | $x_p$ |
|------|----------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| 0.90 | Equation (11)  | 2.720 | 1.954 | 3.130 | 2.062 | 3.860 | 2.253 | 4.410 | 2.397 | 2.250 | 1.881 |
| 0.90 | Equation (59)  | 1.982 | 2.121 | 2.256 | 2.219 | 2.768 | 2.400 | 3.175 | 2.545 | — | — |
| 0.90 | relative error | — | 0.085 | — | 0.076 | — | 0.065 | — | 0.062 | — | — |
| 0.90 | Equation (47)  | 1.048 | 1.903 | 1.191 | 1.938 | 1.421 | 1.987 | 1.571 | 2.015 | — | — |
| 0.95 | Equation (11)  | 3.560 | 2.174 | 4.080 | 2.311 | 4.980 | 2.546 | 5.670 | 2.727 | 2.970 | 2.080 |
| 0.95 | Equation (59)  | 2.550 | 2.323 | 2.884 | 2.442 | 3.515 | 2.665 | 4.022 | 2.846 | — | — |
| 0.95 | relative error | — | 0.068 | — | 0.057 | — | 0.047 | — | 0.044 | — | — |
| 0.95 | Equation (47)  | 1.048 | 2.102 | 1.191 | 2.138 | 1.421 | 2.186 | 1.571 | 2.214 | — | — |
| 0.99 | Equation (11)  | 5.480 | 2.677 | 6.230 | 2.874 | 7.570 | 3.224 | 8.570 | 3.486 | 4.600 | 2.531 |
| 0.99 | Equation (59)  | 3.843 | 2.782 | 4.319 | 2.951 | 5.231 | 3.275 | 5.973 | 3.538 | — | — |
| 0.99 | relative error | — | 0.039 | — | 0.027 | — | 0.016 | — | 0.015 | — | — |
| 0.99 | Equation (47)  | 1.048 | 2.553 | 1.191 | 2.589 | 1.421 | 2.637 | 1.571 | 2.665 | — | — |
Table 3. Numerical tables of k 1 , k 2 ( C = 0.75).
Note that $k_1 = \gamma(n,1,C)/n$ does not depend on $p$, while $k_2$ is listed for each guarantee rate $p$.

| $n$ | $k_1$ | $k_2$, $p$ = 0.90 | 0.91 | 0.92 | 0.93 | 0.94 | 0.95 | 0.96 | 0.97 | 0.98 | 0.99 | 0.999 |
|----|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| 5  | 1.255 | 2.509 | 2.615 | 2.732 | 2.865 | 3.018 | 3.199 | 3.419 | 3.702 | 4.100 | 4.778 | 7.032 |
| 6  | 1.237 | 2.425 | 2.527 | 2.641 | 2.770 | 2.918 | 3.093 | 3.307 | 3.581 | 3.966 | 4.624 | 6.806 |
| 7  | 1.223 | 2.365 | 2.466 | 2.577 | 2.703 | 2.848 | 3.019 | 3.228 | 3.496 | 3.872 | 4.515 | 6.648 |
| 8  | 1.211 | 2.320 | 2.419 | 2.529 | 2.653 | 2.795 | 2.963 | 3.168 | 3.432 | 3.802 | 4.434 | 6.529 |
| 9  | 1.200 | 2.285 | 2.382 | 2.491 | 2.613 | 2.754 | 2.920 | 3.122 | 3.382 | 3.747 | 4.370 | 6.437 |
| 10 | 1.191 | 2.256 | 2.353 | 2.460 | 2.581 | 2.720 | 2.884 | 3.084 | 3.341 | 3.703 | 4.319 | 6.362 |
| 11 | 1.184 | 2.232 | 2.328 | 2.434 | 2.554 | 2.692 | 2.854 | 3.053 | 3.308 | 3.666 | 4.276 | 6.300 |
| 12 | 1.177 | 2.212 | 2.307 | 2.412 | 2.531 | 2.668 | 2.829 | 3.026 | 3.279 | 3.634 | 4.240 | 6.248 |
| 13 | 1.171 | 2.194 | 2.288 | 2.393 | 2.511 | 2.648 | 2.808 | 3.003 | 3.254 | 3.607 | 4.209 | 6.203 |
| 14 | 1.165 | 2.179 | 2.272 | 2.377 | 2.494 | 2.630 | 2.789 | 2.983 | 3.233 | 3.584 | 4.181 | 6.163 |
| 15 | 1.160 | 2.165 | 2.258 | 2.362 | 2.479 | 2.614 | 2.772 | 2.966 | 3.214 | 3.563 | 4.157 | 6.129 |
| 16 | 1.155 | 2.153 | 2.246 | 2.349 | 2.466 | 2.600 | 2.757 | 2.950 | 3.197 | 3.544 | 4.136 | 6.098 |
| 17 | 1.151 | 2.142 | 2.235 | 2.337 | 2.453 | 2.587 | 2.744 | 2.936 | 3.182 | 3.528 | 4.117 | 6.070 |
| 18 | 1.147 | 2.132 | 2.224 | 2.327 | 2.443 | 2.575 | 2.732 | 2.923 | 3.168 | 3.513 | 4.100 | 6.046 |
| 19 | 1.144 | 2.123 | 2.215 | 2.317 | 2.433 | 2.565 | 2.721 | 2.911 | 3.156 | 3.499 | 4.084 | 6.023 |
| 20 | 1.140 | 2.115 | 2.207 | 2.308 | 2.423 | 2.555 | 2.711 | 2.901 | 3.145 | 3.487 | 4.070 | 6.002 |
| 21 | 1.137 | 2.108 | 2.199 | 2.300 | 2.415 | 2.547 | 2.702 | 2.891 | 3.134 | 3.475 | 4.057 | 5.983 |
| 22 | 1.134 | 2.101 | 2.192 | 2.293 | 2.407 | 2.539 | 2.693 | 2.882 | 3.124 | 3.465 | 4.044 | 5.966 |
| 23 | 1.132 | 2.094 | 2.185 | 2.286 | 2.400 | 2.531 | 2.685 | 2.874 | 3.115 | 3.455 | 4.033 | 5.949 |
| 24 | 1.129 | 2.088 | 2.179 | 2.279 | 2.393 | 2.524 | 2.678 | 2.866 | 3.107 | 3.446 | 4.023 | 5.934 |
| 25 | 1.127 | 2.082 | 2.173 | 2.273 | 2.387 | 2.518 | 2.671 | 2.859 | 3.099 | 3.437 | 4.013 | 5.920 |
| 26 | 1.124 | 2.077 | 2.167 | 2.268 | 2.381 | 2.511 | 2.665 | 2.852 | 3.092 | 3.429 | 4.004 | 5.907 |
| 27 | 1.122 | 2.072 | 2.162 | 2.263 | 2.376 | 2.506 | 2.659 | 2.846 | 3.085 | 3.422 | 3.995 | 5.895 |
| 28 | 1.120 | 2.068 | 2.157 | 2.258 | 2.370 | 2.500 | 2.653 | 2.840 | 3.079 | 3.415 | 3.987 | 5.883 |
| 29 | 1.118 | 2.063 | 2.153 | 2.253 | 2.366 | 2.495 | 2.648 | 2.834 | 3.073 | 3.408 | 3.980 | 5.872 |
| 30 | 1.116 | 2.059 | 2.149 | 2.248 | 2.361 | 2.490 | 2.643 | 2.829 | 3.067 | 3.402 | 3.972 | 5.862 |
| 35 | 1.108 | 2.041 | 2.130 | 2.229 | 2.341 | 2.470 | 2.621 | 2.806 | 3.043 | 3.375 | 3.942 | 5.818 |
| 40 | 1.102 | 2.027 | 2.116 | 2.214 | 2.325 | 2.453 | 2.604 | 2.788 | 3.023 | 3.354 | 3.917 | 5.783 |
| 45 | 1.096 | 2.015 | 2.104 | 2.202 | 2.313 | 2.440 | 2.590 | 2.773 | 3.007 | 3.337 | 3.897 | 5.754 |
| 50 | 1.091 | 2.006 | 2.094 | 2.191 | 2.302 | 2.429 | 2.578 | 2.760 | 2.994 | 3.322 | 3.881 | 5.730 |
