Global Sensitivity Analysis Based on Entropy: From Differential Entropy to Alternative Measures

Differential entropy can be negative, while discrete entropy is always non-negative. This article shows that negative entropy is a significant flaw when entropy is used as a sensitivity measure in global sensitivity analysis. Global sensitivity analysis based on entropy should not operate with negative entropy, just as Sobol sensitivity analysis does not operate with negative variance: entropy is similar to variance but does not share all of its properties. An alternative sensitivity measure based on the approximation of the differential entropy using dome-shaped functionals with non-negative values is proposed in this article. Case studies show that the new sensitivity measures lead to a rational structure of sensitivity indices, with a significantly lower proportion of higher-order sensitivity indices compared to other types of distributional sensitivity analysis. In terms of the concept of sensitivity analysis, a decrease of variance to zero means a transition from differential to discrete entropy. The form of this transition is an open question, which may be studied using other scientific disciplines. The search for new functionals for distributional sensitivity analysis is not closed, and other suitable sensitivity measures may yet be found.


Introduction
Sensitivity analysis (SA) based on entropy uses entropy to quantify uncertainty as Sobol SA [1,2] uses variance. Probability distributions with low variance have low entropy, while probability distributions with high variance have high entropy.
From a mathematical point of view, entropy is a certain additive functional on the probability distributions of possible states of a given system [3]. Entropy-based SA belongs to the category of distributional SA, which includes, for example, the methods in [4–9]. In these SAs, uncertainty is characterized by examining the entire distribution of model outputs, not just its variance.
Two popular entropy-based indices have been used for SA. The first is entropy-based SA [10], which is based on the definition of Shannon's entropy [11]. The second [12] is based on the Kullback-Leibler entropy, which measures the difference between two probability distributions.
The use of entropy instead of variance is usually justified by the need to analyze an output random variable with heavy tails or outliers [13]. SA based on entropy was used to study, for example, traffic flow [13], limit states of load-bearing structures [14,15], the seismic demand of concrete structures [16], and groundwater level [17].
Another group of tasks uses entropy to examine the state of a system in combination with certain types of SA, which may not be based on entropy. This group includes, for example, SA of the working process of heat exchangers [18], the hydraulic reliability of water distribution systems [19], shear stress distribution in a rectangular channel [20], creep of soft marine soil [21], air energy storage systems in coal-fired power plants [22], uncertainties of mathematical decision-making models [23–27], and many others.

Entropy of a Random Variable
The concept of entropy for a discrete random variable was introduced by Claude Shannon [11] as a useful benchmark in information theory. The entropy of a discrete random variable Y with probability mass function P(yi) can be written using the equation:

H(Y) = −∑ P(yi)·logb P(yi), i = 1, …, n.  (1)

A valued characteristic of discrete entropy is that the entropy of a discrete random variable Y is zero or positive, because the probabilities P(yi) in Equation (1) lie in [0, 1]. This is also an important difference from differential entropy.
The differential (continuous) entropy can be defined using the following formula:

H(R) = −∫ f(r)·logb f(r) dr,  (2)

where R is a continuous random variable with the probability density function (pdf) f(r) on the real line. The differential entropy is not a limit of the Shannon entropy for n → ∞, although Equation (2) resembles an intuitive extension of Equation (1). In particular, it may be problematic that the differential entropy may be negative if f(r) > 1 over part of the domain. For a Gauss pdf, this occurs when the standard deviation σR of f(r) is very small, which is illustrated by the example with mean value μR = 0, where σR is the parameter of the graph (see Figure 1).
Entropy is a measure of uncertainty similar to variance. Higher entropy indicates higher uncertainty or higher variance. An important difference occurs with small uncertainties. If R has a Gauss pdf, then the negative values of the differential entropy H(R) decrease when σR decreases, with the limit H(R) → −∞ when σR → 0 (see Figure 1). The same is true for other classic pdfs. Unlike the discrete case, the entropy of a continuous system does not remain invariant under transformation of the coordinate system [65].
The right part of Figure 1 shows that the dependence of H(R) vs. ln(σR) is linear; similarly, the dependence of H(R) vs. ln(σR·σR) is also linear. It can be noted that the linear dependence of H(R) vs. ln(σR) is observed for the Gauss pdf of R but does not occur for every pdf in general. For example, the dependence of ln(σR) vs. H(R) is linear for a log-normal pdf of R if the variation coefficient is constant.
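The behavior described above is easy to verify numerically. The sketch below (plain Python, trapezoidal integration; the grid width of ±10σ and the step count are arbitrary choices) integrates −∫ f(r) ln f(r) dr for a Gauss pdf and compares it with the closed form H = 0.5·ln(2πe·σ²), which drops to negative values for small σR.

```python
import math

def gauss_pdf(r, mu, sigma):
    return math.exp(-0.5*((r - mu)/sigma)**2)/(sigma*math.sqrt(2*math.pi))

def differential_entropy(mu, sigma, n=20001, span=10.0):
    """Numerically integrate H(R) = -int f(r) ln f(r) dr (trapezoidal rule)."""
    a, b = mu - span*sigma, mu + span*sigma
    h = (b - a)/(n - 1)
    total = 0.0
    for i in range(n):
        f = gauss_pdf(a + i*h, mu, sigma)
        if f > 0.0:
            w = 0.5 if i in (0, n - 1) else 1.0
            total += w*(-f*math.log(f))
    return total*h

for sigma in (2.0, 1.0, 0.1):
    closed = 0.5*math.log(2*math.pi*math.e*sigma**2)   # closed form for Gauss pdf
    print(sigma, round(differential_entropy(0.0, sigma), 4), round(closed, 4))
```

The closed form H = ln σR + 0.5·ln(2πe) also makes the linear dependence of H(R) on ln(σR) in Figure 1 explicit: doubling σR raises the entropy by exactly ln 2.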

Entropy-Based Sensitivity Analysis
In SA, entropy is used as a measure of uncertainty for two types of sensitivity indices [10,12]. Both SAs analyze changes in the probability density function (pdf) of the model output under the condition that one or more input random variables are fixed. The first concept of SA [10] is based on conditional entropy, which directly uses the definition of Shannon's entropy. The second concept of SA [12] is based on the conditional relative entropy, called the Kullback-Leibler entropy.
This article builds on the first concept [10], but with the implementation of differential entropy according to Equation (2) and with the introduction of new alternative sensitivity measures.
Any computational model may be regarded as a function R = g(X), where R is a scalar model output and X is a vector of M uncertain model inputs {X1, X2, …, XM}; statistical independence is assumed between the inputs.

Sensitivity Indices based on Differential Entropy H(R)
Global sensitivity indices based on entropy can be formulated analogously to Sobol sensitivity indices [1,2], with the difference that variance is replaced by entropy [10]. Can global sensitivity indices be formulated using Equation (2), despite the shortcoming that the differential entropy can be negative when the variance is small? The answer can be obtained by analyzing the sensitivity indices of the first and higher orders.

The first-order entropy-based sensitivity index Ti can be written as:

Ti = (H(R) − E[H(R|Xi)]) / H(R),  (3)

where the mean value E[·] is taken across all likely values of Xi. H(R|Xi) is the conditional differential entropy, which represents the average loss of random variability of model output R when the input value of Xi is known. The values of H(R) and H(R|Xi) must be such that Ti ∈ [0, 1]. In the limit case, if R|Xi loses all random variability (σR|Xi = 0), then the expected influence of Xi on R is 100%, which means Ti = 1. Therefore, H(R|Xi) must be equal to zero and not −∞, as given by Equation (2) (see Figure 1). Equation (2) has the drawback that it can give negative entropy, which allows a sensitivity greater than 100%, Ti > 1, which is not desired in the SA concept. On the other hand, Equation (2) satisfies the second extreme H(R) = H(R|Xi), Ti = 0, where fixing Xi does not influence the pdf of output R. From the point of view of the SA concept, the problematic cases are those where the variance of the output decreases to zero.

The variance and entropy of R decrease further if two or more input variables are fixed. The second-order entropy-based sensitivity index Tij is computed by fixing the pairs Xi, Xj:

Tij = (H(R) − E[H(R|Xi, Xj)]) / H(R) − Ti − Tj,  (4)

where the mean value E[·] is taken across all likely values of Xi and Xj. The third-order sensitivity index Tijk is computed analogously:

Tijk = (H(R) − E[H(R|Xi, Xj, Xk)]) / H(R) − Tij − Tik − Tjk − Ti − Tj − Tk.  (5)

All input random variables are assumed to be statistically independent.
The sum of all indices must be equal to one:

T1 + T2 + … + TM + T12 + T13 + … + T(M−1)M + … + T12…M = 1.  (6)

The total entropy-based sensitivity index TTi can be written as:

TTi = 1 − (H(R) − E[H(R|X1, X2, …, Xi−1, Xi+1, …, XM)]) / H(R),  (7)

where the second term in the numerator contains the conditional entropy evaluated for the input random variable Xi and fixed variables (X1, X2, …, Xi−1, Xi+1, …, XM).
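For a model with closed-form conditional entropies, Equations (3) to (7) can be evaluated directly. The sketch below uses a hypothetical additive model Y = X1 + X2 with independent Gaussian inputs (σ1 = 2, σ2 = 1, values chosen purely for illustration), applies the convention H(Y|X1, X2) = 0 at the last order, and checks that the indices sum to one.

```python
import math

def h_gauss(sigma):
    # differential entropy of a Gaussian: H = 0.5*ln(2*pi*e*sigma^2)
    return 0.5*math.log(2*math.pi*math.e*sigma**2)

# Hypothetical additive model Y = X1 + X2 with independent Gaussian inputs;
# every conditional entropy is then available in closed form.
s1, s2 = 2.0, 1.0
h_y = h_gauss(math.hypot(s1, s2))      # sigma_Y = sqrt(s1^2 + s2^2)

t1 = (h_y - h_gauss(s2)) / h_y         # Eq. (3): fixing X1 leaves sigma = s2
t2 = (h_y - h_gauss(s1)) / h_y         # Eq. (3): fixing X2 leaves sigma = s1
# Eq. (4)-style index of the pair, with H(Y|X1, X2) set to 0 (not -inf):
t12 = (h_y - 0.0) / h_y - t1 - t2
print(round(t1, 3), round(t2, 3), round(t12, 3))
```

Even for this additive model the last-order index carries most of the weight, which illustrates why the convention H(R|all inputs fixed) = 0, rather than −∞, is essential for the decomposition to close at one.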
The higher the order of the sensitivity index, the more input variables are fixed and the lower the entropy of the output R, which can include negative values. Each additional order of the sensitivity index has one additional random variable fixed, until all inputs are fixed and the output becomes deterministic. During this process, both the entropy and the variance of R decrease. While the variance decreases to zero, the entropy decreases to negative values, which is particularly problematic when estimating higher-order sensitivity indices.
If all input random variables are fixed, then the output R is deterministic and the variance of the output R is zero. This occurs when the last-order sensitivity index is computed. Although the deterministic value of the output should have zero entropy according to Equation (1), the differential entropy according to Equation (2) tends to minus infinity. From the point of view of the SA concept, the entropy needs to decrease to zero when the variance of the output reaches zero.
This article seeks new forms of functionals as alternatives to Equation (2), rather than the application of the Kullback-Leibler (K-L) (relative) entropy [66,67]. In terms of the concept of global SA, alternative forms of the functional are sought so that the entropy is not negative when the variance of the output goes to zero.

Approximation of Differential Entropy by the Functional H̄(R) for Sensitivity Indices
From an SA point of view, Equation (2) is a functional that assigns a real value to the function f(r). The modified Equation (2) should transform the input of the logarithm into the interval from zero to one, so that the output value (entropy) cannot be negative for small σR, while differing as little as possible from the differential entropy if f(r) < 1. The function should increase and decrease approximately according to Equation (2), so as to fit Equation (2) well in the unproblematic regions. One such function, which is useful in modifying Equation (2), is based on the hyperbolic tangent:

g(z) = tanh(z^t)^(1/t),  (8)

where z = f(r). In Equation (8), the larger the exponent t, the better g(z) fits the bilinear function (see the left graph in Figure 2). The function g(z) does not provide an output greater than one, with the limit case in the form g(z) = z, z ∈ [0, 1); g(z) = 1, z ∈ [1, ∞) (see Figure 2). Substituting Equation (8) into Equation (2), we obtain Equation (9) in the form:

H̄(R) = −∫ f(r)·ln[g(f(r))] dr,  (9)

where H̄(R) ≥ 0 and H̄(R) → 0 if σR → 0. The right side of Figure 2 shows examples of the plots of the natural logarithm functions that are used in Equation (9).
From the point of view of SA, Equation (9) is a functional that has the properties required in the decomposition of sensitivity indices. In the limit case t → ∞, the positive values of the logarithm are simply replaced by zero in the integral in Equation (9); Equation (9) thus prevents the logarithm from returning a positive value within the integral when f(r) > 1. Let f(r) be a Gauss pdf and let the natural logarithm be used in Equation (9). The values of the functional H̄(R) are then plotted in Figure 3, including a comparison with the differential entropy H(R) computed according to Equation (2). It is apparent from this example that the functional from Equation (9) differs from the differential entropy from Equation (2) only for small values of σR; Equation (9) approximates the differential entropy from Equation (2) practically perfectly if large values of σR are used. It can be noted that there is not much difference between the values plotted from Equation (9) for t = 4 (red curve) and t = ∞ (black curve) (see Figure 3).
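A squashing function with the limit behavior described above can be prototyped as follows. The exact form of Equation (8) should be taken from the paper; g(z) = tanh(z^t)^(1/t) is used here as an assumed candidate that reproduces the stated properties (g(z) ≤ 1 and g(z) → min(z, 1) as t → ∞):

```python
import math

def g(z, t):
    # assumed tanh-based squashing; reproduces the limit g(z) = z for z in [0, 1)
    # and g(z) = 1 for z >= 1 as the exponent t grows
    return math.tanh(z**t)**(1.0/t)

for t in (1, 4, 16):
    # g(0.5, t) approaches 0.5 and g(2.0, t) approaches 1 with growing t
    print(t, round(g(0.5, t), 4), round(g(2.0, t), 4))
```

Because g maps (0, ∞) into (0, 1], the logarithm in Equation (9) is never positive, so the resulting functional cannot be negative.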
The defect in Equation (2) could also be remedied by an additional condition that replaces negative entropy values with zero whenever they occur. However, this solution can assign zero differential entropy even in cases where the random variable does not yet have zero standard deviation, σR > 0, so a gradual decrease to zero is more appropriate.
The sensitivity indices based on Equation (9) are computed according to Equations (3) to (7), with the difference that the differential entropy is replaced by the functional H̄(R) according to Equation (9). Sensitivity indices based on H̄(R) are denoted as T̄i, T̄ij, T̄ijk; each index is in the interval [0, 1], and the sum of all sensitivity indices is equal to one.

Approximation of Differential Entropy by the Functional Ĥ(R) and Sensitivity Indices
Let us consider another functional Ĥ(R) approximating Equation (2) such that Ĥ(R) ≥ 0 (Equation (10)). In order to approach Equation (2), the constant c1 can be computed from the condition in Equation (11), where the maximum can be found in z ∈ [0, ∞) for the arguments z1, z2. Upon substituting z1, z2 into Equation (11), c1 can be computed. The left part of Figure 4 shows the variants of the functions that are integrated in Equation (10). Figure 4 shows that Equation (10) does not approximate the differential entropy as closely as Equation (9); however, this does not present a problem. Although the approximation of the differential entropy using Equation (10) is not as accurate as that of Equation (9), this is not a shortcoming in the evaluation of the sensitivity indices, as shown in the case studies below.
Sensitivity indices based on Ĥ(R) are computed according to Equations (3) to (7), with the difference that the differential entropy is replaced by the functional Ĥ(R) according to Equation (10). Sensitivity indices based on Ĥ(R) are denoted as T̂i, T̂ij, T̂ijk. The functional gives a non-negative output and is equal to zero when σR = 0; each index is in the interval [0, 1], and the sum of all sensitivity indices is equal to one.

Standard Distribution-Based Sensitivity Analyses
The sensitivity analysis described in the previous chapter is based on the probability distribution of all possible outcomes of the random phenomenon R being observed. This type of SA can be categorized as distribution-based SA, because the result of SA depends on the whole probability distribution of random variable R and not only on one moment, as is the case, for example, with Sobol SA. Other types of distribution-based sensitivity analysis that are relevant for comparison are Cramér-von Mises SA and Borgonovo moment-independent SA.

Cramér-von Mises Sensitivity Indices
Let ΦR be the distribution function of R, where R is a model output and X is a vector of M uncertain model inputs {X1, X2, …, XM} with the assumption of statistical independence.
Let Φ^i_R be the conditional distribution function of R conditional on Xi. The first-order Cramér-von Mises index Gi is determined by measuring the distance between the probability ΦR(t) and the conditional probability Φ^i_R(t) when an input is fixed [68]:

Gi = ∫ E[(ΦR(t) − Φ^i_R(t))²] dΦR(t) / ∫ ΦR(t)·(1 − ΦR(t)) dΦR(t).  (16)
The second-order Cramér-von Mises index Gij can be written analogously, by fixing the pair Xi, Xj [68]. Since the sensitivity indices Gi, Gij, etc., are based on the Hoeffding decomposition, the sum of all sensitivity indices is one [68]. Other characteristics of Cramér-von Mises indices and their behavior in engineering applications are discussed in [39,69].
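The first-order Cramér-von Mises index can be estimated by quadrature when the conditional distributions are known. The sketch below uses a hypothetical additive Gaussian model Y = X1 + X2 (so both the marginal and the conditional distribution functions are normal CDFs) and the normalizing constant ∫Φ(1 − Φ)dΦ = 1/6; the grid sizes are arbitrary choices.

```python
from statistics import NormalDist

def cvm_first_order(s_fix, s_other, n_x=200, n_t=200):
    """First-order Cramer-von Mises index for Y = X1 + X2 with independent
    Gaussian inputs; s_fix is the std-dev of the input being fixed."""
    s_y = (s_fix**2 + s_other**2)**0.5
    fy = NormalDist(0.0, s_y)
    fx = NormalDist(0.0, s_fix)
    num = 0.0
    for j in range(n_t):
        t = fy.inv_cdf((j + 0.5)/n_t)      # integrate d(Phi_R) via quantiles
        base = fy.cdf(t)
        m = sum((base - NormalDist(x, s_other).cdf(t))**2
                for x in (fx.inv_cdf((k + 0.5)/n_x) for k in range(n_x)))
        num += m/n_x
    return (num/n_t)/(1.0/6.0)             # denominator: int F(1-F) dF = 1/6

g1 = cvm_first_order(2.0, 1.0)   # fixing the more influential input X1
g2 = cvm_first_order(1.0, 2.0)   # fixing X2
print(round(g1, 3), round(g2, 3))
```

Fixing the input with the larger variance shifts the conditional distribution function further from the marginal, so g1 exceeds g2, as expected.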

Borgonovo Moment-Independent Sensitivity Indices
Borgonovo moment-independent sensitivity indices [70] examine the whole distribution of inputs and outputs. The first-order index Bi can be written as:

Bi = (1/2)·E[∫ |f(r) − fR|Xi(r)| dr],  (18)

where f(r) is the pdf of R and fR|Xi(r) is the conditional pdf of R with fixed parameter Xi [70]. Upon fixing pairs Xi, Xj, we obtain the second-order index Bij, where i < j. Upon fixing triplets Xi, Xj, Xk, we obtain the third-order index Bijk, where i < j < k, etc. The higher the order of the index, the greater its value, and the index of the last order is equal to one: 0 ≤ Bi ≤ Bij ≤ … ≤ B12…M = 1 [70]. Compared to the SA types whose indices sum to one, Borgonovo indices are less practical, especially the last index with all inputs fixed, which is always equal to one and does not provide any new information. Identification of the influence of Xi using total indices is not possible for Borgonovo indices. The advantage of Borgonovo indices is that they evaluate the whole distribution of the output in a more transparent way than presented in [68] (compare Equation (18) with Equation (16)). A second advantage is that the input variables can be statistically correlated, which is difficult to ensure for other types of indices based on decomposition, such as Sobol indices.
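Equation (18) can likewise be evaluated numerically for the same hypothetical additive Gaussian model used above: the inner integral ∫|f(r) − fR|Xi(r)|dr is computed on a grid, and the outer mean over Xi uses mid-point quantiles (all grid sizes are arbitrary choices).

```python
from statistics import NormalDist

def borgonovo_delta(s_fix, s_other, n_x=100, n_r=1501):
    """B_i = 0.5 * E[ int |f(r) - f_{R|Xi}(r)| dr ] for Y = X1 + X2 (Gaussian)."""
    s_y = (s_fix**2 + s_other**2)**0.5
    f_y = NormalDist(0.0, s_y)
    f_x = NormalDist(0.0, s_fix)
    lo, hi = -8.0*s_y, 8.0*s_y
    h = (hi - lo)/(n_r - 1)
    rs = [lo + i*h for i in range(n_r)]
    acc = 0.0
    for k in range(n_x):
        x = f_x.inv_cdf((k + 0.5)/n_x)       # outer mean over the fixed input
        f_c = NormalDist(x, s_other)         # conditional pdf of the output
        acc += 0.5*sum(abs(f_y.pdf(r) - f_c.pdf(r)) for r in rs)*h
    return acc/n_x

b1 = borgonovo_delta(2.0, 1.0)   # fixing X1 (larger variance)
b2 = borgonovo_delta(1.0, 2.0)   # fixing X2
print(round(b1, 3), round(b2, 3))
```

Both indices stay in (0, 1), since half of the L1 distance between two pdfs is at most one; fixing the input with the larger variance changes the output pdf more.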

Variance-Based Sensitivity Analysis
Sobol SA [1,2] is a variance-based SA, which decomposes the variance of the model output into segments that can be attributed to inputs or sets of inputs. Sobol SA is a classical method, which is an important part of the research of computational models in stochastic structural mechanics-see, e.g., [71][72][73]. Although Sobol SA is of a different type than the SA in the previous chapters, supplementing the case study with Sobol indices is useful for comparison.
Although the differential entropy depends on the entire shape of the pdf, variance is an important characteristic for the computation of H(R). Entropy is a measure of system uncertainty similar to variance: entropy increases when variance increases. This variance dependency makes Equation (3) similar to Sobol's first-order sensitivity index:

Si = (V(R) − E[V(R|Xi)]) / V(R),  (19)

where V(R) is the variance and V(R|Xi) is the conditional variance of the model output R.
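For a product of independent inputs, Equation (19) has a closed form: E[R|Xi] = Xi·∏μj, so V(E[R|Xi]) = σi²·∏μj², while V(R) follows the product-variance formula. The sketch below uses hypothetical means and standard deviations (illustrative stand-ins, not the Table 1 data):

```python
# Hypothetical input statistics (illustrative stand-ins, not the Table 1 data)
mu = [355.0, 10.0, 100.0]     # means of the three inputs
sd = [20.0, 0.5, 2.0]         # standard deviations

# V(R) for a product of independent variables: prod(mu^2 + sd^2) - prod(mu)^2
v_r = 1.0
for m, s in zip(mu, sd):
    v_r *= m*m + s*s
mu_r = mu[0]*mu[1]*mu[2]
v_r -= mu_r**2

# First-order Sobol indices: S_i = V(E[R|X_i])/V(R) = sd_i^2 * prod_{j!=i} mu_j^2 / V(R)
s_first = []
for i in range(3):
    prod_mu = 1.0
    for j in range(3):
        if j != i:
            prod_mu *= mu[j]
    s_first.append(sd[i]**2 * prod_mu**2 / v_r)
print([round(s, 3) for s in s_first], round(sum(s_first), 3))
```

With stand-in values of this kind, the first-order indices nearly sum to one, a pattern similar to the one the case study later reports for Sobol SA of the product model.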

The Case Studies
The resistance R of a steel member of rectangular cross-section in tension is studied using six SA methods. Two new types of sensitivity indices, which are based on the functionals H̄(R) and Ĥ(R), are compared with four classical types of sensitivity indices in the case study.

Computational Model
The resistance R is the product of three random variables: yield strength fy, thickness t2, and width b. Statistical characteristics of fy, t2, and b are taken from the results of experimental research [75,76], where steel grade S355 was studied for selected steel products (see Table 1). The cross-sectional area is expressed as the product of the thickness and width. The resistance R is the product of the yield strength fy and the cross-sectional area t2·b:

R = fy·t2·b.  (20)
Entropy 2021, 23, 778

The mean value μR of the product R can be obtained from Equation (21):

μR = μfy·μt2·μb.  (21)

The variance of the product R can be obtained from Equation (22):

σR² = (μfy² + σfy²)·(μt2² + σt2²)·(μb² + σb²) − μfy²·μt2²·μb²,  (22)

where σR is the standard deviation. Although all input random variables have a Gauss pdf, their product has non-zero skewness aR (see Equation (23)).
The following statistical characteristics of output R are obtained upon substituting the statistical characteristics of the input random variables from Table 1: μR = 495.216 kN, σR = 40.822 kN, aR = 0.11 (see Figure 5).
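Equations (21) and (22) can be cross-checked by simulation. The sketch below uses hypothetical input statistics (stand-ins, not the Table 1 values, which are not reproduced here) and compares the closed-form mean and variance of the product with Monte Carlo estimates.

```python
import random

# Hypothetical input statistics (stand-ins, not the Table 1 values)
mu = {"fy": 355.0, "t2": 10.0, "b": 100.0}
sd = {"fy": 20.0, "t2": 0.5, "b": 2.0}

# Eq. (21): mean of a product of independent variables
mu_r = mu["fy"]*mu["t2"]*mu["b"]
# Eq. (22): variance of a product of independent variables
v_r = 1.0
for k in mu:
    v_r *= mu[k]**2 + sd[k]**2
v_r -= mu_r**2

random.seed(1)
n = 200_000
samples = [random.gauss(mu["fy"], sd["fy"])
           * random.gauss(mu["t2"], sd["t2"])
           * random.gauss(mu["b"], sd["b"]) for _ in range(n)]
mc_mean = sum(samples)/n
mc_var = sum((s - mc_mean)**2 for s in samples)/(n - 1)
print(round(mu_r, 1), round(mc_mean, 1), round(v_r, 1), round(mc_var, 1))
```

The simulated product also exhibits the small positive skewness mentioned in the text, even though all three inputs are Gaussian.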

To evaluate global SA, the pdf of product R can be reliably approximated using a log-normal pdf with three parameters: μR, σR, and aR [69]. The three-parameter log-normal pdf can be used to estimate the sensitivity index even if one of the three variables in Equation (20) is fixed as deterministic [32]. If two input variables are fixed, aR = 0 and the product R has a Gauss pdf.

The Results of the Case Studies
The sensitivity indices are estimated using the Latin Hypercube Sampling (LHS) method [77,78] in combination with numerical integration. In equations where the arithmetic mean E(·) is used, its value is estimated using 1000 LHS runs. The three-parameter log-normal pdf of f(r) is used [69]. In all cases, integration is performed numerically by Simpson's rule, using more than 10,000 integration steps over the interval [μR − 10σR, μR + 10σR]. If the lower bound of the domain of f(r) is greater than μR − 10σR, then integration is performed from the lower bound of f(r). Numerical integration is not used for the Sobol sensitivity indices, which are computed analytically using Equation (22). Cramér-von Mises sensitivity indices are estimated using the algorithm described in [39,69], with the difference that three input random variables are used in this article. Borgonovo sensitivity indices were estimated according to the procedure in [39]. Further details of the numerical estimates of the sensitivity indices can be found in [39,69,72]. The results of the sensitivity analyses are shown in Figures 6–11.
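The integration step can be sketched as follows: composite Simpson's rule over [μR − 10σR, μR + 10σR], here applied to a Gauss pdf (rather than the three-parameter log-normal used in the article) so that the result can be checked against the closed form 0.5·ln(2πe·σR²); the step count mirrors the 10,000 mentioned in the text.

```python
import math

def simpson(f, a, b, n=10000):
    # composite Simpson's rule with n (even) subintervals
    if n % 2:
        n += 1
    h = (b - a)/n
    s = f(a) + f(b)
    s += 4*sum(f(a + (2*k - 1)*h) for k in range(1, n//2 + 1))  # odd nodes
    s += 2*sum(f(a + 2*k*h) for k in range(1, n//2))            # even interior nodes
    return s*h/3

mu_r, s_r = 495.216, 40.822    # output statistics reported in the case study

def integrand(r):
    f = math.exp(-0.5*((r - mu_r)/s_r)**2)/(s_r*math.sqrt(2*math.pi))
    return -f*math.log(f) if f > 0.0 else 0.0

h_num = simpson(integrand, mu_r - 10*s_r, mu_r + 10*s_r)
h_exact = 0.5*math.log(2*math.pi*math.e*s_r**2)
print(round(h_num, 6), round(h_exact, 6))
```

With 10,000 subintervals over ±10σR, Simpson's rule reproduces the closed-form entropy to many decimal places, so the quadrature error is negligible next to the LHS sampling error.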
Sensitivity indices based on the differential entropy H(R) are computed for b = e, but the same values are also obtained for b = 2 or 10 (see Figure 6). The sensitivity index of the last (third) order was computed using the formal assumption that H(R|X1, X2, X3) = 0.

Sensitivity indices based on the functional H̄(R) are computed for b = e and t = 4, but practically the same values were obtained for t = 1, 2, 6 (see Figure 7). The sensitivity index of the last (third) order was computed using H̄(R|X1, X2, X3) = 0.
The sensitivity indices based on the functional Ĥ(R) are computed for b = e, but practically the same values were obtained for b = 2, 10 (see Figure 8). The sensitivity index of the last (third) order was computed using Ĥ(R|X1, X2, X3) = 0.

Discussion
The case study presented the results of several types of SA using the input random variables listed in Table 1, which are typical in structural mechanics. For SA based on the differential entropy, the values of relative frequency are especially important (see Figure 12). The combination of fixed and random input variables changes the variance of output variable R, with the peak of the pdf changing from 0.01 to infinity (see Figure 12). Fixing all three inputs leads to zero variance of R and a theoretically infinite pdf value (see the red line in Figure 12). The pdf of R with all random inputs (full variance) is depicted in pink (see Figure 5). Figure 12 shows the pdfs, which have the inputs fixed at the mean values.

All of the utilized SA types identically identified the sensitivity of the output R to the inputs in the following descending order: fy, t2, b. This sensitivity order was determined using total indices, except for Borgonovo SA, where total indices do not exist. The large value of the sensitivity index of the last order causes the difference between the total indices of certain SA types to be very small (see Figures 6 and 7). In contrast, Sobol SA (Figure 11) and SA based on Ĥ(R) (see Figure 8), which clearly identify the strong influence of fy, provide clear identification of the influential and non-influential inputs.
Although the sensitivity order is the same, the sizes of the sensitivity indices of the same order differ between the SA types. The SA types based on the functionals H̄(R) and Ĥ(R) have a smaller value of the index of the last (third) order, which does not provide any new useful information for determining the sensitivity order of the input variables. Gamboa SA also has a large share of the sensitivity index of the last order. Borgonovo SA has a last-order sensitivity index implicitly equal to one. With a bit of exaggeration, the sensitivity index of the last order can be described as a "ballast" index, which does not provide useful information for determining the sensitivity order, either directly or using total indices.
The largest sum of first-order sensitivity indices (sum of all Si is 0.998) and very small higher-order sensitivity indices is given by Sobol SA-see Figure 11. This has also been observed for other tasks [31,47,71]. If the sum of all Si is equal to one, then the sensitivity order can be determined using only the Si indices, which are the same as the total indices, and higher order sensitivity indices do not have to be calculated. Easy interpretation of SA results, often carried out only with Si, is one of the features that makes Sobol SA popular.
A relatively large sum of all first-order sensitivity indices (0.34) was also obtained using SA based on ( ) R Ĥ -see Figure 8. Gamboa sensitivity indices have the sum of all first-order sensitivity indices equal to 0.3; however, the last-order sensitivity index has a relatively high value of 0.65-see Figure 9.
The case study also showed how zero entropy replaces infinity when estimating the sensitivity index of the last (third) order; see the red line in Figure 12. The sum of all sensitivity indices is equal to one only if E(H(R|X1, X2, X3)) = 0; see Equation (5). This means that the entropy of the last-order sensitivity index (deterministic variables) must be calculated as the discrete entropy according to Equation (1).
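The role of zero entropy can be illustrated with a minimal sketch (not the paper's Equations (1) and (5) themselves): the discrete Shannon entropy of a deterministic variable, with all probability mass on one point, is exactly zero and is always non-negative, which is why it can replace the divergent differential entropy of a zero-variance output.

```python
import math

def discrete_entropy(probs):
    """Discrete Shannon entropy in nats: H = sum(-p * ln p), always >= 0."""
    return sum(-p * math.log(p) for p in probs if p > 0)

# A deterministic variable (all mass on one point) has exactly zero entropy,
# so 0 stands in for the divergent differential entropy of the last-order index.
print(discrete_entropy([1.0]))        # deterministic outcome
print(discrete_entropy([0.5, 0.5]))   # fair coin: ln(2)
```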
From the perspective of the concept of sensitivity analysis, differential and discrete entropy are two related concepts: the decrease of variance to zero (occurring gradually as the input variables are fixed in all combinations) means a transition from differential to discrete entropy. The study suggests that global sensitivity analysis can help elucidate the nature of this transition.
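This transition can be sketched numerically. Assuming a hypothetical additive model R = X1 + X2 + X3 with independent normal inputs (an illustrative stand-in, not the case-study limit state function), fixing inputs removes their variance contributions, and the closed-form differential entropy of a normal output, 0.5 * ln(2*pi*e*Var(R)), falls toward minus infinity as Var(R) approaches zero:

```python
import math

def h_normal(variance):
    """Differential entropy (nats) of a normal variable: 0.5 * ln(2*pi*e*var)."""
    return 0.5 * math.log(2 * math.pi * math.e * variance)

# Hypothetical additive model R = X1 + X2 + X3 with independent normal inputs.
input_variances = {"X1": 4.0, "X2": 1.0, "X3": 0.25}

var_r = sum(input_variances.values())
print("all inputs random: H =", round(h_normal(var_r), 4))

# Fix the inputs one by one: Var(R) decreases and H(R) drops; at zero
# variance the differential entropy diverges to -inf, while the discrete
# entropy of the now-deterministic R is simply 0.
for name, v in input_variances.items():
    var_r -= v
    if var_r > 0:
        print(f"{name} fixed: H =", round(h_normal(var_r), 4))
    else:
        print(f"{name} fixed: Var(R) = 0, differential entropy -> -inf (discrete entropy = 0)")
```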
The results depicted in Figures 6 and 7 are practically the same because the estimates of the sensitivity indices in both cases are obtained with f(r) < 0.08 (relatively small values), i.e., in the region where Ĥ(R) is very precisely equal to H(R). Although the values of f(r) are relatively small in this case, using the differential entropy without further modifications does not provide a sufficiently general solution. The use of Ĥ(R) remains valid even when f(r) > 1 is attained, not only in the last-order sensitivity index but also during the computation of sensitivity indices of lower orders.
All of the utilized SA types identically identified the sensitivity of the output R to the inputs in the following descending order: fy, t2, b. This sensitivity order was determined using total indices, except for Borgonovo SA, where total indices do not exist. The large value of the sensitivity index of the last order causes the difference between the total indices of certain SA types to be very small; see Figures 6 and 7. In contrast, Sobol SA (Figure 11) and SA based on Ĥ(R) (Figure 8), which clearly identify a strong influence of fy, provide clear identification of the influential and non-influential inputs.
Although the sensitivity order is the same, the sizes of sensitivity indices of the same order based on Ĥ(R) differ from the sizes of indices based on the differential entropy H(R). The results of SA based on Ĥ(R) have a smaller value of the index of the last (third) order, which does not provide any new useful information for determining the sensitivity order of input variables. Gamboa SA also has a large share of the sensitivity index of the last order, and Borgonovo SA has a last-order sensitivity index implicitly equal to one. With a bit of exaggeration, the sensitivity index of the last order can be described as a "ballast" index, which does not provide useful information for determining the sensitivity order, either directly or using total indices.
The largest sum of first-order sensitivity indices (the sum of all Si is 0.998) and very small higher-order sensitivity indices are given by Sobol SA; see Figure 11. This has also been observed for other tasks [31,47,71]. If the sum of all Si is equal to one, then the sensitivity order can be determined using only the Si indices, which are the same as the total indices, and higher-order sensitivity indices do not have to be calculated. Easy interpretation of SA results, often carried out only with Si, is one of the features that makes Sobol SA popular.
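For comparison, first-order Sobol indices of this kind can be estimated with a standard pick-freeze Monte Carlo scheme. A minimal sketch follows, using a hypothetical additive model with independent standard normal inputs (an illustrative assumption, not the case-study model); for an additive model the Si sum to one up to sampling error:

```python
import numpy as np

def sobol_first_order(model, n_inputs, n=100_000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices.

    Inputs are sampled i.i.d. standard normal; `model` maps an (n, d)
    array of input samples to an (n,) array of outputs."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n_inputs))
    B = rng.standard_normal((n, n_inputs))
    y_a = model(A)
    var = y_a.var()
    indices = []
    for i in range(n_inputs):
        # Freeze column i from A, resample the rest from B.
        ab_i = B.copy()
        ab_i[:, i] = A[:, i]
        y_ab = model(ab_i)
        # S_i = Cov(y_A, y_AB_i) / Var(y)
        indices.append(np.mean(y_a * y_ab) - y_a.mean() * y_ab.mean())
    return np.array(indices) / var

# Hypothetical additive model: exact indices are 4/5.25, 1/5.25, 0.25/5.25.
model = lambda X: 2.0 * X[:, 0] + 1.0 * X[:, 1] + 0.5 * X[:, 2]
print(sobol_first_order(model, 3).round(3))  # sums to approximately 1
```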
A relatively large sum of all first-order sensitivity indices (0.34) was also obtained using SA based on Ĥ(R); see Figure 8. Gamboa sensitivity indices have the sum of all first-order sensitivity indices equal to 0.3; however, the last-order sensitivity index has a relatively high value of 0.65; see Figure 9.
New distribution-oriented sensitivity indices, which are an alternative to other types of distributional SA such as Cramér-von Mises SA [68] or Borgonovo moment-independent SA [70], have been proposed using the functional Ĥ(R). The case study showed that the sensitivity indices based on the functional Ĥ(R) have a good structure, which provides clear information about the sensitivity order of input variables; see Figure 8. The properties of these indices are mainly influenced by the beginnings of the curves integrated in Equation (10), shown on the left side of Figure 4. Virtually the same indices were obtained when the curves integrated in Equation (10) were replaced by a semicircle for z ∈ [0, 1] and zero for z > 1; see the blue curve on the left side of Figure 4.
In addition to the semicircle, it is also possible to experiment with other dome-shaped curves, which can suitably replace the function integrated in Equation (10). Further qualitative research could study the influence of dome shapes on the structure of sensitivity indices in order to find suitable curves for specific types of tasks. In general, one can aim to maximize the share of first- and low-order sensitivity indices, similar to Sobol indices. The acquired knowledge can then be applied to similar cases, and new functionals with properties usable in SA can be sought.
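As a sketch of such experimentation, a dome-shaped functional of the pdf can be written as the integral of g(f(r)) over r, where g is a semicircle over [0, 1] and zero above one. This is only an illustration of the general idea under assumed forms, not the exact Equation (10); by construction the measure is non-negative and stays finite even where the pdf peak exceeds 1:

```python
import math

def dome(z):
    """Semicircle dome on [0, 1]: sqrt(0.25 - (z - 0.5)^2); zero outside [0, 1]."""
    return math.sqrt(max(0.25 - (z - 0.5) ** 2, 0.0))

def dome_functional(pdf, lo, hi, n=20_000):
    """Approximate the integral of dome(f(r)) dr by the midpoint rule."""
    h = (hi - lo) / n
    return sum(dome(pdf(lo + (k + 0.5) * h)) for k in range(n)) * h

# Standard normal pdf as an example output distribution.
phi = lambda r: math.exp(-0.5 * r * r) / math.sqrt(2 * math.pi)

wide = dome_functional(phi, -8, 8)                           # sigma = 1
narrow = dome_functional(lambda r: 10 * phi(10 * r), -8, 8)  # sigma = 0.1, pdf peak > 1
print(wide, narrow)  # the narrower (less uncertain) output scores lower
```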

Conclusions
The presented article compared several types of sensitivity analyses and presented new distribution-oriented sensitivity indices, which were formulated based on differential entropy research. The comparative studies have shown the rationality of the new sensitivity measures and their advantages and disadvantages in contrast to other types of SA.
The search for functionals suitable for distributional sensitivity analysis (SA) is not closed, and other suitable sensitivity measures can be found. Sensitivity measures reflecting clear contrasts between sensitivity indices, with clear identification of influential and non-influential input variables, ought to be sought. Preference may be given to a large proportion of sensitivity indices of the first and lower orders, similar to Sobol SA.
Entropy is an alternative measure to variance, but other similar measures are possible. From the point of view of the SA concept, a decrease in variance to zero means a transition from differential to discrete entropy; differential entropy alone is not enough. The basic ways of making this transition were formulated in this article. Further research may proceed with the analysis of the links between differential and discrete entropy in specific applications of sensitivity analysis.
Funding: The work has been supported and prepared within the project "probability-oriented global sensitivity measures of structural reliability" of The Czech Science Foundation (GACR, https://gacr.cz/) no. 20-01734S, Czechia.
Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.