A Review on Local Failure Probability Sensitivity Analysis

When assessing the reliability of a system, a mathematical model is often defined to replicate the system's behavior. The inputs of the system are then gathered into two categories: random inputs and deterministic inputs. The failure of the system depends on both categories, and here we focus on the influence of the deterministic inputs. Local failure probability sensitivity analysis consists in computing the derivatives of the failure probability with respect to these deterministic parameters and is a fundamental step in reliability-based design optimization. These sensitivities also provide valuable insights into how specific model parameters affect the failure probability, allowing engineers and designers to make informed decisions about adjusting those parameters to enhance reliability. This article explores various techniques developed in the literature for assessing the sensitivity of the failure probability with respect to distribution or design parameters. Depending on the nature of the deterministic parameters and the selected input space, different methods are available. The statistical characteristics of the resulting estimates, as well as their computational cost, are discussed here for comparison purposes.


Introduction
In order to derive the reliability of a system, a limit state function g is often defined depending on different identified inputs of the system. The scalar output of the limit state function is then called the response of the system. This limit state function makes it possible to assess the state of the system thanks to the value of the response: if the value is positive, the system is safe; otherwise, it is failing. We assume here that the inputs of the system are divided into two categories: deterministic inputs s ∈ R^p and random inputs X ∈ R^d. Consequently, the response of the system is random and is denoted Y = g(s, X). The deterministic inputs s of the system can be of different natures and, for the rest of this paper, a specific notation will be used depending on their nature. Physical variables like design parameters of the system are thus denoted δ ∈ R^n, whereas distribution parameters of the probabilistic framework selected to model the random inputs X are denoted θ ∈ R^m. Therefore, the deterministic input vector s is written s = [θ, δ], with m + n = p. It is assumed that the limit state function g is numerically expensive (it can require the resolution of partial differential equations, for instance). Accordingly, the reliability analysis of the system must require as few evaluations of g as possible and the use of advanced reliability techniques [1].
Understanding how the deterministic parameters s impact the system's failure is highly valuable, especially in the context of Reliability-Based Design Optimization (RBDO), as discussed in [2]. RBDO is a field of research for engineers tasked with designing structures in the face of uncertainties. The RBDO problem usually revolves around striking a balance between the structure's cost and its reliability. Therefore, the optimization problem can be written as the minimization of an objective function describing the cost of the structure [3], under some probabilistic reliability constraints. Those constraints typically involve the estimation of the failure probability P_f = P(g(s, X) ≤ 0). The derivatives of P_f with respect to s are then needed for the gradient-based algorithms used to solve RBDO problems [4]. By integrating these derivatives with gradient-based optimization techniques, reliability-based design optimization becomes a data-driven and systematic approach to ensuring the safety and performance of critical systems.
The estimation of these derivatives with respect to s is part of the local reliability sensitivity analysis (RSA) [5] of the system. The derivatives, also called sensitivities, make it possible to optimize the reliability of the system. Indeed, these sensitivities provide valuable insights into how specific model parameters affect the failure probability, allowing engineers and designers to make informed decisions about adjusting those parameters to enhance reliability [6]. RSA [5] is in fact a specific form of sensitivity analysis (SA) [7] where, instead of directly studying the influence of the system's inputs on the output Y, one examines their influence on other quantities such as the failure probability.
In this article, we propose to review different approaches for local RSA with regard to deterministic inputs. The quantity of interest is thus the vector of derivatives of P_f(s) with respect to s for a fixed value of s. In practice, the estimation of these derivatives usually follows the computation of P_f(s). Therefore, the evaluations of g needed to estimate P_f(s) are repurposed for the computation of those derivatives to reduce the additional numerical cost [8]. Depending on the nature of s, different local RSA methods are available. The efficiency of those methods relies on the input space type (original or standard) chosen by the practitioner, along with the reliability method selected (Monte Carlo methods, approximation methods, etc.). The statistical characteristics of the resulting estimates, as well as their computational cost, are discussed here for each local RSA method.
This paper is organized as follows. In the next section, the expressions of the local sensitivities are presented in detail, with a particular focus on the influence of the nature of the deterministic parameters. Section 3 describes the available local RSA methods with respect to distribution parameters θ, performed in the original input space, with a focus on sampling methods. Section 4 presents local RSA methods derived in the standardized input space with respect to any deterministic inputs s. Finally, a conclusion briefly summarizes the main outcomes of the review.

Local RSA with Respect to Deterministic Inputs
It is assumed here that the random inputs X follow a standard normal distribution. However, this may be the result of an isoprobabilistic transformation: let Z be the original inputs of the system, then X = T(Z), where T is a diffeomorphism. The interested reader may find more information about the various isoprobabilistic transformations T available (not detailed here) in [9,10]. We denote g_Z as the limit state function in the original input space and g as the transformed limit state function in the standard space. The joint probability density function (pdf) of Z is written f_Z, while the joint pdf of X is written f_X.

Expression of the Local RSA with Respect to Distribution Parameters
We describe here the framework of the local RSA with respect to θ, the distribution parameters of the original inputs Z. As mentioned before, the derivatives of P_f with regard to θ give valuable information about the influence of the probabilistic model selected for the random inputs on the system's failure occurrence. This sensitivity also allows us to identify which random inputs should be focused on [11,12]. In order to compute the derivatives of P_f(θ), two options are possible.
The first option is to evaluate both the failure probability P_f and its derivatives in the original input space. The dependence of P_f on θ is then uniquely contained in the joint distribution f_Z of the original inputs Z. It is assumed here that f_Z is continuously differentiable with respect to θ. Furthermore, we suppose that the integration domain of Z does not depend on θ. The failure probability is thus written

P_f(θ) = ∫_{D_Z} f_Z(z; θ) dz,    (1)

where D_Z = {z ∈ R^d : g_Z(z) ≤ 0} is the failure domain in the original space. The derivatives of P_f with respect to θ_ℓ for ℓ ∈ [1, . . ., m] are straightforward:

∂P_f(θ)/∂θ_ℓ = ∫_{D_Z} ∂f_Z(z; θ)/∂θ_ℓ dz.    (2)

The resulting expression of the derivative of P_f in Equation (2) is thus a domain integral, which depends on the derivatives of f_Z with respect to θ_ℓ and is defined on the failure domain D_Z.

The second option is to evaluate both the failure probability P_f and its derivatives in the standard space. The dependence of P_f on θ is then contained in the transformed limit state function g through the isoprobabilistic transformation, while the standardized joint distribution f_X is parameter-free. Therefore, the failure probability is expressed as

P_f(θ) = ∫_{D_f} f_X(x) dx,    (3)

where D_f = {x ∈ R^d : g(θ, x) ≤ 0} is the failure domain in the standard normal space. Assuming the gradient ∇_x g(θ, x) ≠ 0 for all x and θ on the limit state surface {g(θ, x) = 0}, the derivatives of P_f(θ) with respect to θ_ℓ for ℓ ∈ [1, . . ., m] are defined by the following surface integral [13]:

∂P_f(θ)/∂θ_ℓ = −∫_{{g(θ,x)=0}} [∂g(θ, x)/∂θ_ℓ] / ‖∇_x g(θ, x)‖ f_X(x) ds(x),    (4)

where ds(x) stands for surface integration over the limit state surface {g(θ, x) = 0}. The resulting expression of the derivative of P_f in Equation (4) is thus a surface integral, which depends on the derivative of g with respect to θ_ℓ and its gradient ∇_x g.

Expression of the Local RSA with Respect to Design Parameters
Computing the derivatives of P_f with respect to the design parameters δ is required for the gradient-based algorithms used to solve a RBDO problem [2], as previously mentioned. The dependence of P_f on the design parameters δ is necessarily contained in the limit state function. Therefore, no matter the input space selected, the expression of the derivatives of P_f is a surface integral as in Equation (4), where one adapts the integral to f_Z or f_X. In the rest of this article, we assume that the sensitivities with regard to design parameters are computed in the standard space of the inputs X. The surface integral is thus given by

∂P_f(δ)/∂δ_ℓ = −∫_{{g(δ,x)=0}} [∂g(δ, x)/∂δ_ℓ] / ‖∇_x g(δ, x)‖ f_X(x) ds(x),    (5)

with the same assumptions and notations as in Equation (4), for ℓ ∈ [1, . . ., n]. This sensitivity then necessarily depends on the derivative of g with respect to δ_ℓ and its gradient ∇_x g.

Estimation of the Local Sensitivities
The main challenge of the estimation of the derivatives with respect to s, no matter its nature, is to increase as little as possible the simulation budget needed for the estimation of the probability of failure P_f(s). Consequently, it is assumed that the computation of this sensitivity reuses as much as possible the evaluations of the limit state function needed for the computation of P_f, as previously mentioned.
In other words, it is uncommon to estimate P f and its derivatives in different input spaces, as the limit state functions differ.
According to the input space, the derivatives of P_f with regard to distribution parameters θ are either domain integrals, Equation (2), or surface integrals, Equation (4). Domain integrals are much easier to handle than surface integrals. However, the literature on failure probability estimation in standardized spaces is more abundant than in the original input space. For instance, the methods that depend on the location of the so-called design points of the system [14] require a standardized input space (FORM/SORM approximations, radial-based importance sampling, line sampling, etc.) [15]. Therefore, both options present advantages and drawbacks.
The next section details methods available to derive the sensitivity with regard to θ in the original input space.

Local RSA in the Original Input Space
In this section, the vector s contains only distribution parameters θ. We denote Z as the original random inputs of the system and f_Z as its joint pdf, as before. The dependence of P_f on θ is then expressed in the joint pdf f_Z, as previously mentioned. The joint pdf f_Z is supposed to be continuously differentiable with respect to θ. We further assume that the integration domain of Z does not depend on θ. The derivatives of P_f are written as a domain integral, Equation (2), and can be estimated with the various sampling algorithms presented in the following.

Score Function Method for Sensitivity Analysis
In order to estimate the derivatives of P_f, the score function method was first presented in [16] and later in [11,17]. It consists in introducing a density function in the domain integral, as in the importance sampling framework [18]. The density is simply taken as the joint pdf of the inputs Z, and the sensitivity is written

∂P_f(θ)/∂θ_ℓ = ∫_{D_Z} [∂f_Z(z; θ)/∂θ_ℓ / f_Z(z; θ)] f_Z(z; θ) dz,

for ℓ = 1, . . ., m. Noticing that the equality ∂ln(f_Z(z; θ))/∂θ_ℓ = (1/f_Z(z; θ)) ∂f_Z(z; θ)/∂θ_ℓ holds for all z ∈ R^d, the derivative is thus equal to

∂P_f(θ)/∂θ_ℓ = E_{f_Z}[ I_{D_Z}(Z) Q_ℓ(Z) ],

where the derivative Q_ℓ(·) = ∂ln(f_Z(·; θ))/∂θ_ℓ is called the score function. In order to compute the sensitivity with regard to the distribution parameters of Z, the expected value E_{f_Z}[ I_{D_Z}(Z) Q_ℓ(Z) ] must thus be evaluated. This expected value can be estimated with a crude Monte Carlo (MC) method [19], and the MC estimate of the derivatives of P_f is then

∂P̂_f(θ)/∂θ_ℓ = (1/N) Σ_{j=1}^{N} I_{D_Z}(Z^{(j)}) Q_ℓ(Z^{(j)}),    (6)

where the observations Z^{(j)} are independent and identically distributed (iid), generated from f_Z. Therefore, assuming the failure probability is estimated with a crude MC method and written P̂_f^MC, the same observations can be entirely reused here. The simulation budget is thus kept constant, as the score function does not depend on the costly limit state function g.
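A minimal sketch of the score function estimate of Equation (6), assuming a hypothetical toy model (a single input Z ~ N(µ, σ²) with a linear limit state, none of which comes from the reviewed references) so the result can be checked against the analytic derivative:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical toy model: Z ~ N(mu, sigma^2) and g_Z(z) = t - z, so failure
# occurs when z >= t. The exact sensitivity dP_f/dmu = phi((t - mu)/sigma)/sigma
# is known and serves as a reference.
rng = np.random.default_rng(0)
mu, sigma, t, N = 0.0, 1.0, 3.0, 1_000_000

z = rng.normal(mu, sigma, size=N)      # iid sample from f_Z
failing = z >= t                       # indicator of the failure domain D_Z

# Score function for the mean parameter: Q(z) = d ln f_Z / d mu = (z - mu)/sigma^2
q = (z - mu) / sigma**2

pf_mc = failing.mean()                 # crude MC failure probability
dpf_mc = (failing * q).mean()          # Equation (6): same sample, no extra g calls

print(pf_mc, dpf_mc, norm.pdf((t - mu) / sigma) / sigma)
```

Note that the sensitivity estimate costs nothing beyond the P_f sample: only the indicator values (already computed) and the score function (independent of g) are needed.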

Statistical Properties of the Score Function Estimate with Crude MC
In this section, the bias, variance and mean square error of the MC sensitivity estimate are derived. As noted in [20], the crude MC estimate of Equation (6) is unbiased, and its variance is written

Var[∂P̂_f(θ)/∂θ_ℓ] = (1/N) ( E_{f_Z}[ I_{D_Z}(Z) Q_ℓ(Z)² ] − (∂P_f(θ)/∂θ_ℓ)² ),    (7)

since the observations are iid. As the MC sensitivity estimate is unbiased, its mean square error (mse) E_mse^MC is equal to its variance. The error estimate is then easy to derive, as the quantity E_{f_Z}[ I_{D_Z}(Z) Q_ℓ(Z)² ] can be evaluated with a crude MC estimate using the same observations generated to compute both P_f and ∂P_f(θ)/∂θ_ℓ with the MC method. In the literature, several studies have focused on the expression of the mean square error of estimates obtained with the score function method [21,22], not necessarily in the failure probability framework. Here, we derive the expression presented in [20], which is fitted to our particular case of E_mse^MC. The mean square error is bounded with the following inequalities:

(1/N) (∂P_f/∂θ_ℓ)² (1/P_f − 1) ≤ E_mse^MC ≤ (1/N) [ √(P_f E[Q_ℓ(Z)⁴]) − (∂P_f/∂θ_ℓ)² ].    (8)

The lower bound of Equation (8) is an expression depending on the failure probability and its derivatives; thus, it can be easily derived in practice. The upper bound depends on another quantity, E[Q_ℓ(Z)⁴], in addition to P_f and the sensitivity. This quantity can have an analytical expression in very simple cases. Suppose Z is a univariate random variable which follows a normal distribution of mean µ_Z and variance σ_Z². We are interested in the derivative of P_f with respect to the mean value µ_Z. In this scenario, E[Q(Z)⁴] = 3/σ_Z⁴ [20] and the right bound becomes (1/N)[ √(3 P_f)/σ_Z² − (∂P_f/∂µ_Z)² ]. In the other cases, the analytical expression of E[Q(Z)⁴] can be more cumbersome, and the quantity is thus estimated with the Monte Carlo method, reusing the same sample once more.
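The closed-form value E[Q(Z)⁴] = 3/σ⁴ in the univariate normal case can be checked numerically by quadrature; the sketch below does so with arbitrary hypothetical values of µ and σ:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

# Check E[Q(Z)^4] = 3/sigma^4 for Z ~ N(mu, sigma^2), where the score function
# for the mean is Q(z) = (z - mu)/sigma^2. The values of mu and sigma are
# assumptions of this toy check, not taken from the reviewed methods.
mu, sigma = 1.0, 0.7
q4 = quad(lambda z: ((z - mu) / sigma**2) ** 4 * norm.pdf(z, mu, sigma),
          -np.inf, np.inf)[0]
print(q4, 3 / sigma**4)  # the two values should agree
```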
From the inequality in Equation (8), we can conclude that the error, and therefore the variance of the estimate, is proportional to 1/N. Consequently, the same 1/N variance convergence rate applies to the Monte Carlo estimates of both the failure probability P̂_f^MC and its derivatives with respect to the distribution parameters.

Score Function Method with Advanced Sampling Techniques
Various advanced sampling techniques have been developed, based on the MC method [15], to estimate the failure probability more efficiently. Some of these techniques can be applied in the context of the score function method, to estimate the expected value E_{f_Z}[ I_{D_Z}(Z) Q_ℓ(Z) ] in addition to P_f. Therefore, these methods make it possible to evaluate more effectively the derivatives of P_f with respect to the distribution parameters θ of Z. We describe some of these methods here.

Score Function Method with Subset Sampling
We recall here that the failure probability estimate with subset sampling (SS) [24,25] has the following expression, with the dependence on θ underlined:

P̂_f^SS(θ) = Π_{i=0}^{q−1} P̂_i(θ),    (9)

with P_0(θ) = P(F_0) and P_i(θ) = P(F_i | F_{i−1}) for i > 0, where the F_i are the intermediate failure domains. It is recalled that these domains are nested, F_0 ⊃ F_1 ⊃ . . . ⊃ F_{q−1} = D_Z. We assume here that the intermediate failure domains F_0, F_1, . . ., F_{q−1} are independent of θ and are chosen a priori. As underlined in [27], taking the derivatives of Equation (9) with respect to θ_ℓ, for ℓ = 1, . . ., m, gives

∂P_f(θ)/∂θ_ℓ = Σ_{i=0}^{q−1} [∂P_i(θ)/∂θ_ℓ] Π_{k≠i} P_k(θ),    (10)

where the terms of the sum are defined by

∂P_0(θ)/∂θ_ℓ = ∫_{F_0} ∂f_Z(z; θ)/∂θ_ℓ dz,    (11)

and, for i > 0, since P_i(θ) = P(F_i)/P(F_{i−1}) for nested domains,

∂P_i(θ)/∂θ_ℓ = [∂P(F_i)/∂θ_ℓ]/P(F_{i−1}) − [P(F_i)/P(F_{i−1})²] ∂P(F_{i−1})/∂θ_ℓ.    (12)

It should be noted that each P(F_{i−1}) depends on θ as well, even if the notation is not explicit. Applying the score function method to Equations (11) and (12) leads to the final sensitivity estimate, in which expected values of the form E_{f_Z}[ I_{F_i}(Z) Q_ℓ(Z) ] = P(F_{i−1}) E[ I_{F_i}(Z) Q_ℓ(Z) | F_{i−1} ] appear. Those conditional expectations can be evaluated with crude Monte Carlo estimates, reusing the exact same samples generated at each level of the SS procedure; see [27] for more details. The simulation budget is therefore constant, as the score functions do not depend on the costly limit state function g.
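A sketch of this subset-style decomposition on a hypothetical 1D toy case with a single intermediate level. For the demo, the conditional samples Z | F_0 are drawn exactly from a truncated normal instead of the Metropolis chains used in practice (an assumption that keeps the sketch iid and unbiased); all thresholds and distributions are illustrative choices, not from [27]:

```python
import numpy as np
from scipy.stats import norm, truncnorm

# Toy case: Z ~ N(mu, 1), failure when z >= 3, one intermediate level F0 = {z >= 2}.
rng = np.random.default_rng(1)
mu, t0, t1, N = 0.0, 2.0, 3.0, 200_000
q = lambda z: z - mu                       # score function for the mean (sigma = 1)

# Level 0: crude MC from f_Z.
z0 = rng.normal(mu, 1.0, size=N)
p0 = (z0 >= t0).mean()                     # P(F0)
dp0 = ((z0 >= t0) * q(z0)).mean()          # dP(F0)/dmu, Equation (11) by score function

# Level 1: conditional samples Z | F0 (truncated normal stands in for MCMC here).
z1 = truncnorm.rvs(t0 - mu, np.inf, loc=mu, size=N, random_state=rng)
p10 = (z1 >= t1).mean()                    # P(F1 | F0)
e_q_f0 = q(z1).mean()                      # E[Q | F0]
e_iq_f0 = ((z1 >= t1) * q(z1)).mean()      # E[I_F1 Q | F0]
dp10 = e_iq_f0 - p10 * e_q_f0              # Equation (12) rewritten with conditionals

pf = p0 * p10                              # two-level SS failure probability
dpf = dp0 * p10 + p0 * dp10                # Equation (10) with q = 2 levels
print(pf, dpf, norm.pdf(t1 - mu))          # analytic reference: phi(3)
```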

Statistical Properties of the Subset Sampling Sensitivity Estimate
If the samples generated at each level were iid, the intermediate estimates entering Equations (11) and (12) would each be unbiased. However, a small bias is induced by the dependence between the samples generated at each step when one uses Metropolis algorithms for the Markov chain MC procedure. This bias affects P̂_f^SS and the SS sensitivity estimate [27]. The dependence also makes the variance of the sensitivity estimate, along with the mean square error, hard to derive. Local sensitivity computation has also been studied in a special case of subset sampling called the moving particles method [28].

Score Function Method with Importance Sampling
The score function method is, by definition, already an importance sampling (IS) technique, since a density is added in the sensitivity integral. Nevertheless, if this density is selected as f_Z, then the resulting MC estimate is not very efficient, as most of the observations will not be in the failure domain. Suppose the failure probability is estimated in the IS context with an auxiliary density h. The resulting failure probability estimate is written P̂_f^IS. This auxiliary density h is supposed to be capable of generating more observations in the failure domain. Using the same density h, the sensitivity of the failure probability can be written

∂P_f(θ)/∂θ_ℓ = E_h[ I_{D_Z}(Z) (f_Z(Z; θ)/h(Z)) Q_ℓ(Z) ].

The IS sensitivity estimate is then

∂P̂_f^IS(θ)/∂θ_ℓ = (1/N) Σ_{j=1}^{N} I_{D_Z}(Z^{(j)}) (f_Z(Z^{(j)}; θ)/h(Z^{(j)})) Q_ℓ(Z^{(j)}),

where the random variables Z^{(j)} are iid with density h and are the same observations used for the estimation of P̂_f^IS. The simulation budget is kept constant, as the score function Q_ℓ does not depend on the costly limit state function g. The IS sensitivity estimate is then unbiased, and its variance, equal to the mean square error, is written

E_mse^IS = (1/N) ( E_h[ I_{D_Z}(Z) (f_Z(Z; θ)/h(Z))² Q_ℓ(Z)² ] − (∂P_f(θ)/∂θ_ℓ)² ).

Using the same computations as in [20], it is possible to derive an upper bound for E_mse^IS such that

E_mse^IS ≤ (1/N) [ √( E_h[ I_{D_Z}(Z) (f_Z(Z; θ)/h(Z))³ ] E[Q_ℓ(Z)⁴] ) − (∂P_f(θ)/∂θ_ℓ)² ].

Therefore, we notice that, compared to Equation (8), the likelihood ratio f_Z/h raised to the power 3 is introduced in the expected value. The convergence of E_mse^IS is thus greatly affected by the choice of the auxiliary density h.
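A sketch of the score function method under IS on the same hypothetical toy case as before; the auxiliary density h = N(3, 1), centered at the failure threshold, is a hand-picked choice for the demo rather than an adaptive scheme from the references:

```python
import numpy as np
from scipy.stats import norm

# Toy case: Z ~ N(0, 1), failure when z >= 3, auxiliary density h = N(3, 1).
rng = np.random.default_rng(2)
t, N = 3.0, 200_000

z = rng.normal(t, 1.0, size=N)             # iid sample from h
w = norm.pdf(z) / norm.pdf(z, loc=t)       # likelihood ratio f_Z / h
failing = z >= t
q = z                                      # score for the mean of f_Z (mu = 0, sigma = 1)

pf_is = (failing * w).mean()               # IS failure probability estimate
dpf_is = (failing * w * q).mean()          # IS sensitivity estimate, same sample
print(pf_is, dpf_is, norm.pdf(t))          # analytic reference: phi(3)
```

With roughly half the sample falling in the failure domain, the variance is far smaller than with the crude MC version at the same budget.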
The use of an IS auxiliary density other than f_Z in the score function framework was initiated by Wu in [11,17], where an adaptive importance sampling scheme is presented and the resulting auxiliary density h is close to the optimal density h_opt = I_{D_Z} f_Z / P_f. Later, in [27], the authors developed a simulation method combining the SS framework and the IS technique to build the failure probability estimate, along with the sensitivity estimate obtained with the score function method. Indeed, instead of estimating each intermediate failure probability of the SS framework with crude MC estimates, they employ the IS technique. The samples drawn by IS are then iid, and the variance of both the final failure probability estimate and its sensitivity can be derived.
In short, local RSA in the original input space, for distribution parameters θ, does not require any additional evaluation of the limit state function (lsf). The simulation budget is kept constant, no matter the simulation method selected. Furthermore, the sensitivity estimates are unbiased (except in the SS framework with Metropolis algorithms). In the next section, we present local RSA performed in the standard normal space, for any deterministic inputs s.

Local RSA in the Standard Normal Space
In this section, we assume that, after an isoprobabilistic transformation T, the random inputs Z have been transformed into standard normal inputs X, such that X = T(Z), using the same notations as before. The dependence of P_f on s is then necessarily expressed in the (transformed) limit state function g. According to the nature of the parameters s, this dependence can be further derived. Suppose the vector s contains distribution parameters only, s = θ. Consequently, these parameters have no influence on the limit state function in the Z-space, and their influence on the limit state function in the X-space is actually solely contained in the transformation T, denoted T_θ, such that

g(θ, x) = g_Z(T_θ^{−1}(x)),

with T_θ^{−1} the inverse isoprobabilistic transformation with fixed θ. Suppose the vector s contains design parameters only, s = δ. Accordingly, the parameters δ influence the definition of the limit state functions in both spaces. The relationship between the two limit state functions g and g_Z can then be written

g(δ, x) = g_Z(δ, T^{−1}(x)).

Independently of the nature of s, the sensitivity of P_f is expressed with a surface integral: Equation (4) for θ and Equation (5) for δ. This surface integral can be estimated with various methods [29,30] for RBDO purposes. In this article, we notably focus on methods which directly come from frameworks employed to compute failure probability estimates, as the goal is to increase the simulation budget as little as possible. In the FORM and SORM contexts, the limit state function g is approximated with Taylor series expansions in the neighborhood of the design points in the standard normal space [14]. The design points of the system are the closest points to the origin in each failure region of the failure domain D_f. The reliability analysis is then performed using the built approximations of g. We derive here the sensitivities of P_f with the FORM approximation, first in a context where there is a unique design point P*, of norm β = ‖P*‖. The failure probability is estimated with the equation

P̂_f^FORM(s) = F_{X_1}(−β(s)),

where F_{X_1} is the cumulative distribution function (cdf) of the univariate standard normal variable X_1. The quantity β is equal to β(s) = α^T P*, where

α = −∇_x g(s, P*) / ‖∇_x g(s, P*)‖.

Therefore, the value of β is a function of s. The derivative of P_f with respect to s_ℓ in the FORM context is then simply

∂P̂_f^FORM(s)/∂s_ℓ = −f_{X_1}(−β(s)) ∂β(s)/∂s_ℓ,

where f_{X_1} is the pdf of the random variable X_1 and ∂β(s)/∂s_ℓ is the derivative of β with respect to s_ℓ. After some calculus, see [31], this derivative is equal to

∂β(s)/∂s_ℓ = [∂g(s, P*)/∂s_ℓ] / ‖∇_x g(s, P*)‖.    (13)

Consequently, the sensitivity of P_f with respect to s_ℓ, with ℓ ∈ [1, . . ., p], is equal to

∂P̂_f^FORM(s)/∂s_ℓ = −f_{X_1}(−β(s)) [∂g(s, P*)/∂s_ℓ] / ‖∇_x g(s, P*)‖

in the FORM approximation context. Therefore, one needs the value of the gradient ∇_x g evaluated at [s, P*], as well as the derivative of g with respect to s evaluated at [s, P*].
Apart from these two quantities, no additional evaluation of the costly lsf is required to estimate the sensitivities. Depending on the nature of the deterministic inputs s, this derivative can be further detailed [31]. If the vector s represents distribution parameters only, s = θ, then

∂g(θ, P*)/∂θ_ℓ = −∇_x g(θ, P*)^T ∂T_θ(z*)/∂θ_ℓ,

where z* = T_θ^{−1}(P*). Therefore, the derivative of P_f is approximated with

∂P̂_f^FORM(θ)/∂θ_ℓ = −f_{X_1}(−β(θ)) α^T ∂T_θ(z*)/∂θ_ℓ.

If the vector s represents design parameters only, s = δ, it follows that ∂g(δ, P*)/∂δ_ℓ = ∂g_Z(δ, z*)/∂δ_ℓ, where z* = T^{−1}(P*).
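A minimal FORM sensitivity sketch on a hypothetical linear toy lsf, for which FORM is exact so the formula above can be checked analytically (all numerical values are assumptions of the demo):

```python
import numpy as np
from scipy.stats import norm

# Toy lsf: g_Z(z) = a - z1 - z2 with Z_i ~ N(mu_i, sigma_i^2). In the standard
# space, g(theta, x) = a - (mu1 + s1*x1) - (mu2 + s2*x2), which is linear, so
# the FORM result P_f = F_X1(-beta) is exact here.
a, mu, sig = 6.0, np.array([1.0, 1.0]), np.array([1.0, 2.0])

grad_x = -sig                              # gradient of g in the standard space
norm_grad = np.linalg.norm(grad_x)
beta = (a - mu.sum()) / norm_grad          # reliability index
alpha = -grad_x / norm_grad
p_star = beta * alpha                      # design point (closest failure point)

# Sensitivity w.r.t. mu1: dg/dmu1 = -1, then Equation (13) and the FORM formula.
dg_dmu1 = -1.0
dbeta_dmu1 = dg_dmu1 / norm_grad
dpf_dmu1 = -norm.pdf(-beta) * dbeta_dmu1

# Analytic reference: P_f = Phi(-(a - mu1 - mu2)/sqrt(s1^2 + s2^2)),
# hence dP_f/dmu1 = phi(beta)/sqrt(s1^2 + s2^2).
dpf_exact = norm.pdf(beta) / norm_grad
print(dpf_dmu1, dpf_exact)
```

Increasing µ₁ pushes the mean response towards failure, so the sensitivity is positive, as expected.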
It should be noted that the study of the bias and the error of the sensitivity estimate can be particularly challenging in a black-box model context. As with the failure probability estimation in the FORM framework, the method requires a very small simulation budget, but the statistical properties of the resulting estimates are arduous to obtain. This remark is also valid when there are multiple design points. The computation of the sensitivities in this context is detailed next.

Sensitivity with Multiple Design Points in the Standard Normal Space
For systems with multiple design points, an accurate formula for P_f is difficult to derive [32]. However, in the standard normal space, with the approximation introduced in [33], the derivatives can be easily derived in the FORM context. Suppose K design points P*_1, P*_2, . . ., P*_K have been identified; we recall here that the failure probability of a series system is approximated with the formula [31]

P̂_f^FORM(s) = 1 − F_K(β(s); C(s)),

where F_K is the cdf of a centered normal vector of dimension K with covariance matrix C(s), and β(s) is the vector of the norms of all the design points, β(s) = (β_k)_{k=1,...,K} with β_k = ‖P*_k‖. The covariance matrix C(s) is defined by C_{i,j} = α_i^T α_j, where α_i = −∇_x g(s, P*_i)/‖∇_x g(s, P*_i)‖. The derivative of the failure probability with respect to s_ℓ is then obtained by the chain rule through the β_k(s) and the C_{k,j}(s), where the derivative of each β_k can be derived from Equation (13). The interested reader can find the derivatives of C_{k,j} in [31]. Another approach to compute the sensitivity of P_f in a series system is presented in [12]. First, a MC sampling is performed using the various approximated lsfs built at the different design points, in a FORM/SORM context, to estimate P_f. Then, the derivatives are obtained with the score function method with IS, using, in the Z-space, the failing observations obtained from the computation of P_f; cf. Section 3.3.2. This special approach is restricted to distribution parameters θ only.
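A sketch of the multiple-design-point chain rule on a hypothetical two-mode series system whose α directions are orthogonal, so C = I and the K-variate normal cdf factorizes, giving an analytic cross-check; the parameterization β_k(δ) = δ·c_k and all values are assumptions of the demo:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

# Two linear failure modes with orthogonal alpha directions, so C = I.
c = np.array([2.0, 2.5])                   # assumed dbeta_k/ddelta
delta = 1.0
beta = delta * c
C = np.eye(2)

F2 = lambda b: multivariate_normal.cdf(b, mean=np.zeros(2), cov=C)
pf = 1.0 - F2(beta)                        # series-system FORM estimate

# Chain rule: dPf/ddelta = -sum_k dF2/dbeta_k * dbeta_k/ddelta,
# with dF2/dbeta_k evaluated by central finite differences here.
h = 0.05
dF_dbeta = np.array([(F2(beta + h * e) - F2(beta - h * e)) / (2 * h)
                     for e in np.eye(2)])
dpf_ddelta = -(dF_dbeta * c).sum()

# Analytic reference in this orthogonal case: F2 = Phi(b1)*Phi(b2).
ref = -(norm.pdf(beta[0]) * norm.cdf(beta[1]) * c[0]
        + norm.cdf(beta[0]) * norm.pdf(beta[1]) * c[1])
print(pf, dpf_ddelta, ref)
```

In the general correlated case, F_K no longer factorizes and the ∂F_K/∂C_{k,j} terms from [31] must be added.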

Sensitivity with Other Approximation Methods Inspired by FORM
The FORM approximation is accurate for linear lsfs only. Sensitivities derived in a SORM context are proposed in [34], when s are distribution parameters, s = θ. The SORM context makes it possible to build quadratic approximated lsfs, which can be more accurate when g is nonlinear. The authors in [34] take advantage of the radial distribution of the standard normal inputs to derive another quadratic approximation of the limit state surface.
The sensitivities are then estimated with the differentiation of this novel approximation.This method is tested for systems of various dimensions (up to d = 20).
Other approximation methods inspired by the FORM/SORM framework have been derived in the reliability literature. These approximation methods aim to give better results than FORM and SORM in a nonlinear lsf context. We mention here the multi-hyperplane combination method (MHCM), first introduced in [35]. In the MHCM framework, several hyperplanes are employed to approximate the lsf in the vicinity of a design point, in the standard normal space. The sensitivities with respect to the distribution parameters θ of the original inputs Z in the MHCM context are presented in [36]. Another approximation method, presented in [37], is completely independent of the design points' position, as the approximation of the lsf is a hyperplane constructed from a Monte Carlo sample. The sensitivities with respect to the distribution parameters are then computed with this hyperplane.
In a broader context, the sensitivities computed with these approximation methods generally require few to no additional lsf (or gradient) evaluations. However, the error of the resulting sensitivity estimate is usually difficult to measure without a reference value [34,36].

Sensitivity with Direct Simulation Method
Surface integrals cannot be directly estimated with Monte Carlo methods. Nevertheless, there are special contexts in which the derivative of the domain integral P_f is still a domain integral, in the standard normal space. Indeed, if the failure probability is estimated in the directional sampling framework or in the line sampling framework, the failure domain indicator function is no longer explicit in the definition of the failure probability integral. Consequently, the differentiation of the probability integral results in a domain integral as well, which can be estimated with MC methods.
The sensitivity analysis in both sampling frameworks is detailed in the following sections, where no assumption on the nature of s is made.In both frameworks, the observations generated to compute the failure probability estimate can be reused to compute the failure probability sensitivity estimate.

Sensitivity Estimation with Directional Sampling
Directional sampling (DS) is a sampling technique first introduced in [38] and derived for structural reliability problems in [39,40]. We focus here on the case where the limit state function is star-shaped. It is recalled here that the random normal inputs X can be written X = R T [41], where R is a positive random variable and T is a d-variate random vector, independent of R, uniformly distributed on the unit sphere T_d = {(t_1, . . ., t_d) ∈ R^d : t_1² + . . . + t_d² = 1}. The probability of failure, with the dependency on s included, is written

P_f(s) = ∫_{T_d} [1 − F_R(r(s, t))] f_T(t) dt,    (14)

where t → r(s, t) is the function which assigns to every direction t its root (with the dependency on s underlined), f_T is the density of the random vector T, and F_R is the cdf of the radial component R of the standard normal inputs X. In the DS framework, the root r(s, t) is the distance between the limit state surface {g(s, x) = 0} and the origin, in the direction t. The sensitivity of P_f can be simply derived as

∂P_f(s)/∂s_ℓ = −∫_{T_d} f_R(r(s, t)) [∂r(s, t)/∂s_ℓ] f_T(t) dt,

where f_R is the pdf of R. The derivatives of r for ℓ ∈ [1, . . ., p] are equal to [42]

∀t ∈ T_d,  ∂r(s, t)/∂s_ℓ = −[1 / (∇_x g(s, r(s, t) t)^T t)] ∂g(s, r(s, t) t)/∂s_ℓ.
Therefore, the failure probability sensitivity can be estimated with

∂P̂_f^DS(s)/∂s_ℓ = (1/N) Σ_{j=1}^{N} f_R(r(s, T^{(j)})) [1 / (∇_x g(s, r(s, T^{(j)}) T^{(j)})^T T^{(j)})] ∂g(s, r(s, T^{(j)}) T^{(j)})/∂s_ℓ,    (15)

where the observations T^{(j)} are iid from f_T. Introducing the IS framework, the sensitivity estimate in the directional IS (DIS) context is written

∂P̂_f^DIS(s)/∂s_ℓ = (1/N) Σ_{j=1}^{N} [f_T(T^{(j)})/h_T(T^{(j)})] f_R(r(s, T^{(j)})) [1 / (∇_x g(s, r(s, T^{(j)}) T^{(j)})^T T^{(j)})] ∂g(s, r(s, T^{(j)}) T^{(j)})/∂s_ℓ,    (16)

where the observations T^{(j)} are iid from the auxiliary density h_T. The estimates of Equations (15) and (16) are unbiased. Since all the observations are iid, the variance and mean square error are derived with the classical formulas. The observations are the same as the ones used to compute the failure probability estimate. However, it should be noted that the denominator ∇_x g^T t in both estimates is, up to the gradient norm, the cosine between the sampling direction and the gradient of the lsf evaluated at the intersection between the sampling direction and the limit state surface. Consequently, an evaluation of the gradient ∇_x g is required for each observation T^{(j)}, in addition to the evaluation of the derivative of g with respect to s_ℓ. Therefore, the simulation budget becomes substantially heavier.
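A sketch of the DS sensitivity estimate of Equation (15) on a hypothetical 2D toy case with a linear lsf, chosen so the roots and the reference derivative are available in closed form (all choices are assumptions of the demo):

```python
import numpy as np
from scipy.stats import norm

# Toy case: standard normal inputs, g(delta, x) = delta - x1 - x2. The root
# along a direction t is r = delta/(t1 + t2) when t1 + t2 > 0; other directions
# never reach the failure domain and contribute zero. Analytic reference:
# P_f = Phi(-delta/sqrt(2)), so dPf/ddelta = -phi(delta/sqrt(2))/sqrt(2).
rng = np.random.default_rng(3)
delta, N = 2.0, 200_000

t = rng.normal(size=(N, 2))
t /= np.linalg.norm(t, axis=1, keepdims=True)   # uniform directions on the circle
u = t.sum(axis=1)                               # t1 + t2
hit = u > 0                                     # directions that reach the surface

r = delta / u[hit]                              # roots r(delta, t)
f_R = r * np.exp(-r**2 / 2)                     # chi pdf of R = ||X|| with d = 2
grad_dot_t = -u[hit]                            # grad_x g = (-1, -1), dotted with t
dg_ddelta = 1.0

contrib = np.zeros(N)
contrib[hit] = f_R * dg_ddelta / grad_dot_t     # Equation (15) summands
dpf_ds = contrib.mean()
print(dpf_ds, -norm.pdf(delta / np.sqrt(2)) / np.sqrt(2))
```

Here the roots are analytic; with a black-box lsf, each root would require a line search and each summand an extra gradient evaluation, which is exactly the budget increase noted above.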
In [43], the sensitivities with respect to distribution parameters only, s = θ, are further detailed. Furthermore, a link between the root function r and the norm β of the design point in the FORM framework is underlined. Indeed, for each point r(s, T^{(j)}) T^{(j)} at the intersection between the sampling direction and the limit state surface, a hyperplane can be built as in the FORM context to locally approximate the lsf. There is consequently an equivalence between the derivatives of r and those of β; cf. Section 4.1.1. The resulting sensitivity estimate can then be seen as the mean value, with a correction factor, of all the local sensitivities computed on the approximated lsfs [43].

Sensitivity Estimation with Line Sampling
Line sampling was first introduced in [44] and aims to correct the FORM/SORM approximations with sampling. Suppose the design point P* of the system is unique. The idea is to sample on the hyperplane, denoted V_1, perpendicular to the direction pointing towards the design point, in the standard normal space X. In order to do so, the sampling space is rotated and reduced by one dimension: the standard normal vector is rewritten V = RX, where the d-th coordinate V_d is parallel to the direction of the design point P* and R is a rotation matrix. The rotation matrix verifies R⊤R = I_d and its d-th row is the unit direction a = P*/‖P*‖; see [42]. In the LS framework, the root b(v_1) represents the distance to the limit state surface in a direction orthogonal to the hyperplane {v_d = 0} at v_1. We focus here on the case where the root of the limit state function for the vector v_1 is unique. The probability of failure, with the dependency on s, is expressed as

P_f(s) = ∫ Ψ(−b(s, v_1)) f_{V_1}(v_1) dv_1,

where v_1 ↦ b(s, v_1) is the function which assigns to every vector v_1 its root, with the dependency on s underlined, Ψ denotes the cdf of a univariate standard normal variable and f_{V_1} is the pdf of the (d − 1)-variate standard normal random vector on the hyperplane {v_d = 0}. The sensitivity of P_f can simply be derived as

∂P_f(s)/∂s_ℓ = −∫ φ(b(s, v_1)) (∂b(s, v_1)/∂s_ℓ) f_{V_1}(v_1) dv_1,

where φ denotes the pdf of a univariate standard normal variable (which is symmetric), and the derivatives of b are equal to [42]

∂b(s, v_1)/∂s_ℓ = −(∂g/∂s_ℓ)(s, x_b(v_1)) / (a⊤ ∇_x g(s, x_b(v_1))), ∀v_1,

where a = P*/‖P*‖ and x_b(v_1) is the intersection between the line parallel to a at v_1 and the limit state surface. Consequently, the failure probability sensitivity can be estimated with

∂P̂_f(s)/∂s_ℓ = −(1/N) Σ_{j=1}^{N} φ(b(s, V_1^(j))) ∂b(s, V_1^(j))/∂s_ℓ, (17)

where the observations V_1^(j) are iid from f_{V_1}. The sensitivity estimate is unbiased. Since all the observations are iid, the variance and mean square error are derived with the classical formulas; the coefficient of variation (CV) is displayed in [45] for the interested reader. The observations are the same as the ones used to compute the failure probability estimate. However, it should be noted that the denominator in the derivative of b is proportional to the cosine of the angle between the direction a towards the design point and the gradient of the lsf evaluated at the intersection between the line parallel to a at V_1^(j) and the limit state surface. Consequently, as in the DS framework, the evaluation of the gradient ∇_x g is required for each observation V_1^(j), in addition to the evaluation of the derivative of g with respect to s_ℓ. Therefore, the simulation budget also becomes substantially heavier with this sampling method.
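To make the procedure concrete, the LS estimates of the failure probability and of its sensitivity can be sketched as follows. This is a minimal illustration, not the implementation of [44]: it assumes the root along each line is unique and bracketed by [c_lo, c_hi], and the function names are hypothetical.

```python
import numpy as np
from math import erf, sqrt, exp, pi

def Phi(y):   # standard normal cdf
    return 0.5 * (1.0 + erf(y / sqrt(2.0)))

def phi(y):   # standard normal pdf
    return exp(-0.5 * y * y) / sqrt(2.0 * pi)

def line_sampling_sensitivity(g, dg_ds, grad_x_g, a, s, d, N, rng,
                              c_lo=-10.0, c_hi=10.0):
    """LS estimates of P_f and dP_f/ds for failure {g(s, x) <= 0}.

    a : unit vector pointing towards the (assumed unique) design point.
    Each observation lies in the hyperplane orthogonal to a; the root b of
    c -> g(s, v + c*a) is found by bisection (assumed unique and bracketed).
    """
    pf_terms, sens_terms = [], []
    for _ in range(N):
        v = rng.standard_normal(d)
        v -= (v @ a) * a                    # project onto the hyperplane {a.v = 0}
        lo, hi = c_lo, c_hi
        for _ in range(80):                 # bisection along the line v + c*a
            mid = 0.5 * (lo + hi)
            if g(s, v + mid * a) > 0.0:     # still in the safe domain
                lo = mid
            else:
                hi = mid
        b = 0.5 * (lo + hi)                 # root: distance to the limit state surface
        x_b = v + b * a                     # intersection point on the surface
        pf_terms.append(Phi(-b))
        # db/ds = -(dg/ds) / (a . grad_x g), evaluated at the intersection point
        db_ds = -dg_ds(s, x_b) / (a @ grad_x_g(s, x_b))
        sens_terms.append(-phi(b) * db_ds)
    return float(np.mean(pf_terms)), float(np.mean(sens_terms))
```

For a hypothetical linear lsf g(s, x) = s − x_1, every line has root b = s, so the estimates reproduce the exact values P_f = Ψ(−s) and ∂P_f/∂s = −φ(s); note that ∇_x g must be supplied for every observation, which is precisely the budget overhead discussed above.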
The case of systems with multiple design points P*_1, P*_2, … is studied in [46], where the sensitivities with regard to distribution parameters only, s = θ, are further detailed. A small modification of the line sampling scheme is required to obtain non-overlapping failure regions [47]. Once the failure regions are non-overlapping, the sensitivity of the failure probability with regard to a deterministic parameter is simply the sum of the failure probability sensitivities associated with each design point. Here, we state the formula for any deterministic input s_ℓ, as the logic presented in [46] is the same for design parameters. Supposing there are K design points, the derivative of P_f is written

∂P_f(s)/∂s_ℓ = Σ_{k=1}^{K} ∂P_{f,k}(s)/∂s_ℓ,

where P_{f,k} is the failure probability associated with the k-th (non-overlapping) failure region, and each term in the sum is equal to Equation (17) adapted to the design point P*_k. The resulting sensitivity estimate is still unbiased and, since all the samples are independent, its variance can be computed with the classical formula.
As in the DS framework, a link between the differentiation of the root function b and the differentiation of the norm β in the FORM framework can be derived [45,46,48]. In [45], a second approach is presented to compute the sensitivity with regard to distribution parameters only, s = θ, in the LS framework. Indeed, in the particular case where the original inputs Z are independent, the expression of the sensitivity with regard to the distribution parameters is first derived in the original input space Z with the score function method (cf. Section 3.1). Next, this domain integral is recast in the standard normal space X, as the transformation T is quite simple since Z is already an independent vector. The LS method is then applied to the resulting domain integral in the standard normal space; see [45] for more details. A novel method that combines Monte Carlo simulation with LS in the case of a high dimension d is proposed in [49].
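For reference, the score function identity from Section 3.1 invoked in this second approach can be written as follows. This is a standard sketch, valid under regularity conditions allowing differentiation under the integral sign and assuming the limit state function itself does not depend on θ, with F(s) = {z : g(s, z) ≤ 0} denoting the failure domain:

```latex
\frac{\partial P_f}{\partial \theta_\ell}(s)
 = \frac{\partial}{\partial \theta_\ell}\int_{F(s)} f_Z(z;\theta)\,\mathrm{d}z
 = \int_{F(s)} \frac{\partial \ln f_Z(z;\theta)}{\partial \theta_\ell}\,f_Z(z;\theta)\,\mathrm{d}z
 = \mathbb{E}\!\left[\mathbb{1}_{F(s)}(Z)\,\frac{\partial \ln f_Z(Z;\theta)}{\partial \theta_\ell}\right].
```

The last expectation is the domain integral that is then recast in the standard normal space and estimated with LS.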

Sensitivity through an Approximation of the Failure Domain Indicator Function
As previously mentioned, surface integrals cannot be directly computed with Monte Carlo methods. However, it is possible to obtain a close estimate of the surface integrals of Equations (4) and (5) through a domain integral, by approximating the failure domain indicator function. This approach is called the Weak approach by Torii in [20,50], as the sensitivity is evaluated in the sense of distributions. The Weak approach is presented here and the statistical properties of the sensitivity estimate are derived, for any deterministic input s_ℓ, with ℓ = 1, …, p.

Weak Approach: Approximation of the Indicator Function
The failure domain indicator function makes the direct differentiation of the failure probability integral of Equation (3) impossible. By replacing the failure domain indicator function with a smoother function, the differentiation can be performed and results in a domain integral [42]. Several smooth approximations of the indicator function have been derived in the literature [51]; they are typically cdfs of continuous univariate variables [50].
Here we derive the approximation chosen in [42,52], which comes from the following limit:

I_{y>0} = lim_{σ→0⁺} Ψ(y/σ),

where Ψ is the standard normal univariate cdf and σ > 0; the failure domain indicator function is thus approximated by Ψ(−g(s, x)/σ). An illustration of this approximation of the indicator function is displayed in Figure 1. It should be noted, however, that the cdf of the uniform distribution defined on [−σ/2, σ/2], with σ > 0, is another univariate cdf often considered for the approximation of the indicator function [20,50]. In the sense of distributions, the derivative of P_f with respect to s_ℓ can then be defined as [50]

∂P_f(s)/∂s_ℓ = lim_{σ→0⁺} −∫ (1/σ) φ(g(s, x)/σ) (∂g(s, x)/∂s_ℓ) f_X(x) dx,

where φ is the univariate standard normal pdf. The convergence is proven in [42] under some regularity conditions. Consequently, an approximation of the failure probability sensitivity can be obtained with the following domain integral, for σ a fixed positive value:

∂P_f(s, σ)/∂s_ℓ = −∫ (1/σ) φ(g(s, x)/σ) (∂g(s, x)/∂s_ℓ) f_X(x) dx.

One should notice that the following convergence also holds: P_f(s, σ) converges to P_f(s) as σ → 0. The domain integral ∂P_f(s, σ)/∂s_ℓ can be estimated with the Monte Carlo method through the estimate (19)

−(1/N) Σ_{j=1}^{N} (1/σ) φ(g(s, X^(j))/σ) ∂g(s, X^(j))/∂s_ℓ,

where the X^(j) are iid from f_X. The same sample can be reused to compute both the failure probability estimate with MC and this sensitivity estimate. However, for each observation X^(j), the derivative of the lsf with respect to s_ℓ has to be evaluated; therefore, the simulation budget increases. This estimate is biased, since σ ≠ 0, and its statistical properties are detailed in the next section.
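This estimator is straightforward to sketch. Below is a minimal, hypothetical illustration, with a one-dimensional setup X ~ N(0, 1) and g(s, x) = s − x, for which P_f(s) = Ψ(−s) and ∂P_f(s)/∂s = −φ(s); the function name is illustrative:

```python
import numpy as np

def weak_mc_sensitivity(g, dg_ds, s, x, sigma):
    """Weak-approach MC estimate of dP_f/ds for failure {g(s, x) <= 0}.

    The failure indicator 1{y <= 0} is approximated by Psi(-y/sigma), whose
    derivative with respect to s is -(1/sigma) * phi(g/sigma) * dg/ds.
    """
    y = g(s, x)
    phi_vals = np.exp(-0.5 * (y / sigma) ** 2) / np.sqrt(2.0 * np.pi)
    return float(np.mean(-(phi_vals / sigma) * dg_ds(s, x)))
```

With, e.g., s = 1.5, σ = 0.1 and N = 2·10^5 standard normal samples, the estimate lands close to −φ(1.5) ≈ −0.1295, up to the bias induced by σ > 0 and the sampling noise.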

Statistical Properties of the Weak Approach with Crude MC
The statistical properties of this sensitivity estimate have been studied in numerous papers [20,42,50], as the choice of σ is crucial and influences both the bias and the variance of the estimate. As the estimate is biased, it is interesting to use the expression of the mean square error, denoted E^MC_mse(σ) and derived in [50] as a function of σ > 0:

E^MC_mse(σ) = e(σ)² + Var[∂P̂_f(s, σ)/∂s_ℓ],

where e is the bias. Intuitively, one would want the value of σ to be as small as possible to reduce the bias. However, for very small σ, the domain integral ∂P_f(s, σ)/∂s_ℓ converges to a surface integral over the limit state surface {g(s, x) = 0}. Consequently, most observations would then have a negligible weight in the sample mean of Equation (19) defining the MC sensitivity estimate, as only a few of them are close to the limit state surface.
In [50], analytical formulas are derived for the one-dimensional case (d = 1), when the uniform cdf defined on the interval [−σ/2, σ/2] is chosen as the approximation of the failure domain indicator function. The resulting bias e is proportional to σ², thus e² ∝ σ⁴, and the variance is proportional to 1/(Nσ). This proportionality of the bias is extended to the multidimensional case in [20]. Consequently, decreasing the parameter σ greatly reduces the bias. Nevertheless, past a certain point, the variance of the estimate increases for smaller σ values, if N is kept constant. This is schematically illustrated in Figure 2. The horizontal axis is separated into two stages: the bias stage, where the error is mainly driven by the bias of the estimate, and the variance stage, where the error is mainly driven by its variance. Therefore, the optimal value of σ should be selected at the intersection between the bias stage and the variance stage. This intersection value is challenging to obtain in practice and is problem dependent [51]. An analytical formula for the optimal σ is detailed in [20] for the one-dimensional case, with the uniform cdf approximation mentioned above. This optimal value is proportional to 1/N^{1/5} and depends on the quantity of interest ∂P_f(s)/∂s_ℓ as well as on the derivative of order 3 of P_f with respect to s_ℓ; thus, its computation is intractable in practice. With this optimal σ value, the mean square error satisfies E^MC_mse(σ̂) ∝ 1/N^{4/5}. This convergence rate is slower than the error convergence rate of the MC estimate obtained with the score function method, which is 1/N; see Section 3.2. Consequently, even with the best σ value, the score function method is more efficient. However, the score function method is applicable only when the parameters s are distribution parameters.
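The trade-off can be observed numerically. The sketch below assumes a hypothetical one-dimensional setting (X standard normal, g(s, x) = s − x, exact sensitivity −φ(s)) and simply reports, at fixed N, the sample mean and standard deviation of the estimator terms for decreasing σ:

```python
import numpy as np

# Hypothetical setup: X ~ N(0,1), g(s, x) = s - x, failure {x >= s}, dg/ds = 1.
rng = np.random.default_rng(1)
x = rng.standard_normal(200_000)   # fixed sample size N
s = 1.5

def weak_terms(sigma):
    """Per-sample terms of the Weak-approach MC estimator for this lsf."""
    y = s - x
    return -np.exp(-0.5 * (y / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)

for sigma in (0.5, 0.1, 0.02):
    t = weak_terms(sigma)
    print(f"sigma={sigma:<4}: mean={t.mean():+.4f}  std={t.std():.3f}")
```

The sample mean drifts towards the exact value −φ(1.5) ≈ −0.1295 as σ decreases (bias stage), while the per-sample standard deviation blows up roughly like 1/√σ (variance stage at fixed N), in line with the 1/(Nσ) variance scaling quoted above.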
Finally, it should be underlined that the bias and the variance of the sensitivity estimate are independent of P_f(s), as noticed in [50]; they depend on the derivatives of P_f of order 1 and above. In comparison, the mean square error of the MC estimate obtained with the score function method depends on P_f, as shown in Equation (8). Consequently, the sensitivities of P_f(s) computed with the Weak approach and the failure probability estimate do not depend on the same quantities. In other words, in the Weak approach framework, improvements on the failure probability estimation will not necessarily lead to improvements on the failure probability sensitivity estimation, and vice versa.

Weak Approach with Advanced Sampling Techniques
The Weak approach has been associated with the sequential importance sampling (SIS) framework in [52], to obtain a sensitivity estimate more efficient than the MC one. The selected value of σ is the smallest value, no smaller than σ_L, such that the coefficient of variation of the sensitivity estimate remains close to a target CV, where σ_L is the final value of the decreasing sequence (σ_i) of the SIS framework (see [53]). This selection does not require any additional evaluation of the lsf. The resulting sensitivity estimate has a controlled CV; however, its bias cannot be measured, other than with the approximation mentioned above, e ∝ σ². Nevertheless, this method is efficient for multiple failure regions [52] and the SIS framework is also quite robust in high-dimensional spaces [53].
The Weak approach has been combined with the subset sampling framework in [51], for sensitivities with respect to design parameters. Equation (10), presented in Section 3.3.1, is still valid (with δ instead of θ); however, the dependence on δ in each P_i(δ) is now contained in the intermediate failure domain indicator function, as well as in P(F_{i−1}). Each intermediate sensitivity ∂P_i(δ)/∂δ is then estimated with the Weak approach. The selected value of σ is obtained with a formula that depends on the number of failing observations in the sampling procedure, on P_f(s), and on the order of magnitude of the lsf g. More precisely, σ is chosen as a quantile of the ordered values Y^(j) = g(s, X^(j)), for j = 1, …, N. This choice results in a number N_r of response values within ±σ, where N_r depends on N and P_f(s) and is set a priori. As previously mentioned, the bias of this estimate cannot be controlled other than with the approximation e ∝ σ².
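The quantile-based selection of σ can be sketched as follows. This is a simplified sketch of the idea only: the actual rule in [51] also accounts for P_f(s) and N when setting N_r, and the function name is hypothetical.

```python
import numpy as np

def sigma_from_responses(y, n_r):
    """Choose sigma so that exactly n_r of the responses Y_j = g(s, X_j)
    fall within the band [-sigma, sigma] around the limit state surface."""
    y = np.asarray(y, dtype=float)
    return np.sort(np.abs(y))[n_r - 1]   # n_r-th smallest |Y_j|
```

For instance, with responses [−3.0, −0.5, 0.2, 1.0, 2.5] and N_r = 2, the selected σ is 0.5, so that two response values lie within ±σ.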

Sensitivity with Finite Difference Schemes
All the methods presented so far require the evaluation of the gradient of the limit state function with regard to s. We briefly present here approaches that do not rely on these derivatives of g to compute the sensitivity of P_f. These approaches are applicable to many kinds of systems, such as high-dimensional systems or systems whose failure domain encompasses several failure regions.

Expression of the Finite Difference Schemes
The derivatives of P_f with respect to s_ℓ can be directly based on finite difference schemes [54,55]. Denoting h_ℓ the p-vector whose components all equal zero except the ℓ-th component, which equals h > 0, we recall here the forward formula

∂P_f(s)/∂s_ℓ ≈ [P_f(s + h_ℓ) − P_f(s)]/h,

and the central formula

∂P_f(s)/∂s_ℓ ≈ [P_f(s + h_ℓ) − P_f(s − h_ℓ)]/(2h).

The failure probabilities appearing in these formulas can then be estimated with sampling methods. Here, we use the same formulations as in [50] to present the different approaches. The Direct approach consists in employing independent samples for the estimations of P_f(s + h_ℓ), P_f(s) and P_f(s − h_ℓ). Therefore, in order to obtain the failure probability sensitivity, the simulation budget is multiplied by two with the forward formula and by three with the central formula. In contrast, the Common Random Variable (CRV) approach employs the same sample for the estimations of P_f(s + h_ℓ), P_f(s) and P_f(s − h_ℓ). However, the limit state functions with parameters s + h_ℓ and s − h_ℓ differ from g(s, ·); consequently, the simulation budget is still multiplied by two with the forward formula and by three with the central formula. Using the same sample makes it possible to cancel part of the noise of the estimates of the expected values, which justifies the CRV approach. Improvements of the finite difference schemes have recently been proposed in [56].
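The two approaches can be contrasted on a hypothetical example where the derivative is known in closed form: with X = θ + Z, Z standard normal, and failure {x ≥ c}, one has P_f(θ) = Ψ(θ − c) and ∂P_f/∂θ = φ(θ − c). The setup below (names and numerical values) is purely illustrative:

```python
import numpy as np
from math import erf, sqrt, exp, pi

Phi = lambda y: 0.5 * (1.0 + erf(y / sqrt(2.0)))   # standard normal cdf
phi = lambda y: exp(-0.5 * y * y) / sqrt(2.0 * pi) # standard normal pdf

# Hypothetical setup: X = theta + Z with Z ~ N(0,1); failure {x >= c},
# so P_f(theta) = Phi(theta - c) and dP_f/dtheta = phi(theta - c).
c, theta, h, N = 2.0, 0.0, 0.1, 200_000
rng = np.random.default_rng(2)

def pf_hat(t, z):
    """MC failure probability estimate for parameter t from standard sample z."""
    return np.mean(t + z >= c)

# Direct approach: independent samples for each perturbed parameter value.
z_plus, z_minus = rng.standard_normal(N), rng.standard_normal(N)
direct = (pf_hat(theta + h, z_plus) - pf_hat(theta - h, z_minus)) / (2 * h)

# CRV approach: the same sample is reused for both perturbed parameters.
z = rng.standard_normal(N)
crv = (pf_hat(theta + h, z) - pf_hat(theta - h, z)) / (2 * h)

print(f"exact {phi(theta - c):.4f}  direct {direct:.4f}  crv {crv:.4f}")
```

Both central-difference estimates are close to the exact derivative φ(−2) ≈ 0.0540, but over repeated runs the CRV estimate fluctuates less, reflecting its 1/(Nh) variance against 1/(Nh²) for the Direct approach.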

Statistical Properties of the Finite Difference Scheme Estimates
Both approaches are biased since h ≠ 0. The expressions of the bias and variance of both approaches, for both finite difference schemes, can be found in [50]. The bias e of both approaches is proportional to h with the forward formula and to h² with the central formula. The variance of the Direct approach is proportional to 1/(Nh²), whereas the variance of the CRV approach is proportional to 1/(Nh). Consequently, the CRV approach is theoretically superior to the Direct approach. Furthermore, among the two formulas, the central formula is theoretically the best, as its bias converges more rapidly for small h. The evolution of the mean square error as a function of h can be separated into two stages for both methods: the variance stage for small values of h and the bias stage for greater values of h, as presented for the Weak approach in Section 4.3.2.
An optimal value of h is displayed in [20] for the central formula in the CRV approach. This value is proportional to 1/N^{1/5} and depends on the quantity of interest ∂P_f(s)/∂s_ℓ as well as on the derivative of order 3 of P_f with respect to s_ℓ; thus, its computation is intractable in practice. With this optimal h, the mean square error satisfies E^MC_mse(ĥ) ∝ 1/N^{4/5}, which is the same convergence rate as the Weak approach with the optimal σ. The same conclusion can be drawn: this convergence rate is slower than the error convergence rate of the MC estimate obtained with the score function method, which is 1/N; see Section 3.2. Consequently, for sensitivities with respect to distribution parameters, the score function method is more efficient.

Conclusions
In this article, we reviewed different methods to compute the derivatives of the failure probability with respect to the deterministic inputs s of the system, for RBDO and local reliability-based sensitivity analysis purposes. For each method, the additional simulation budget needed is underlined, compared to the simulation budget necessary for the estimation of the failure probability itself.
If the deterministic inputs are distribution parameters, denoted θ, two different frameworks are possible. The first framework relies on the score function method, and the estimation is performed in the original input space. The simulation budget is then kept constant and the sensitivity estimates are typically unbiased. However, few efficient techniques are available to compute the failure probability in the original space. In the second framework, the original inputs are transformed into standard normal inputs with an isoprobabilistic transformation. In this input space, there is a greater number of efficient methods to compute the failure probability. Nevertheless, the sensitivity estimation entails a heavier simulation budget, as the derivatives of the limit state function with respect to θ must be evaluated. Some approaches result in unbiased estimates, as in the DS and LS contexts, but require an even greater simulation budget, as evaluations of the gradient of g with respect to the random inputs are needed. The other approaches lead to biased estimates, whose bias depends on a scalar parameter (σ or h) or remains unknown (in the FORM context); the resulting bias cannot be properly controlled.
If the deterministic parameters are design parameters, denoted δ, the evaluation of the derivatives of g with respect to δ is mandatory, except in the finite difference context (where the simulation budget is then doubled or tripled). The sensitivity analysis is performed in the standard normal space, and the same conclusions can be drawn concerning the simulation budget and the bias as with distribution parameters. Thus, the simulation budget substantially increases with LS or DS, and the bias cannot be properly managed with the other approaches.


Figure 1. Approximation of the indicator function I_{y>0} with the function y ↦ Ψ(y/σ) for three different values of σ, where Ψ is the cdf of a standard normal univariate variable.

Figure 2. Illustration of the relation between E^MC_mse and σ in log-log scale, for a fixed sample size N, with the approximations derived in [50] in a one-dimensional case.