Probabilistic Analysis of Composite Materials with Hyper-Elastic Components

This work is a comprehensive literature overview in the area of probabilistic methods related to composite materials with components exhibiting hyper-elastic constitutive behavior. A practical area of potential applications is seen in rubber, rubber-like, or even rubber-based heterogeneous media, which are of huge importance in civil, mechanical, environmental, and aerospace engineering. The proposed overview and the related discussion start with some general introductory remarks and a general survey of the theories and methods of hyper-elastic materials, with a special emphasis on recent progress. Further, a detailed review of the current trends in probabilistic methods is provided, followed by a literature perspective on the theoretical, experimental, and numerical treatments of interphase composites. The most important part of this work is a discussion of up-to-date methods and works that use the homogenization method and effective medium analysis. There is a specific focus on random composites with and without interface defects, but the approaches recalled here may serve as well in sensitivity analysis and optimization studies. This discussion may be especially helpful in all engineering analyses and models related to the reliability of elastomers, whose applicability range, which includes energy absorbers, automotive parts, sportswear, and elements of water supply networks, is still increasing, as well as in areas where a stochastic response forms the basis of limit functions that are fundamental for such composites in structural health monitoring.


Introductory Remarks
The computational analysis of materials is now standard and decisive for the design of contemporary appliances, mechanical parts, and bearing systems that are made of both homogeneous and heterogeneous (composite) materials. It is especially remarkable in the area of polymeric materials [1] that are reinforced or filled with some other specific micro-inclusions or nano-particles [2][3][4][5]. Modeling such materials needs a specific approach that is most frequently based upon macro-modeling linked with a calculation of the overall properties [6]; nevertheless, it may demand precise, experimentally verified knowledge concerning the nano-mechanics of the reinforcements [7]. There is no doubt that numerical simulation changed the entire design process by supplementing the traditional cycle of conceptualizing and laboratory testing with various modeling tools, including sensitivity analyses [8], stochastic models [9] that include various nonlinearities [10], accounting for the anisotropy of fillers [11], atomistic modeling [12], and multiscale approaches [13], finally leading to reliability assessments [14]. It cuts down the design time and provides tools for highly optimized or complex solutions that are unavailable when using even a very advanced analytical approach. All computations are based on constitutive models of materials, which involve defining their behavior under different mechanical, thermal [15], electrical [16], magnetic, and coupled [17] conditions. They are well defined for relatively simple materials, whereas rubber-like media require dedicated hyper-elastic formulations, which are briefly classified below.

Phenomenological models are generally based on invariants, the stretch ratio, or both. Their parameters do not bear any physical interpretation. Invariant-based phenomenological models can be further divided into models that leverage a series function, include a limitation of chain extensibility, or are based on a logarithmic function, power law, or exponential function. They were developed in the mid- and late twentieth century and provide analytical solutions for an isotropic single-phase medium. Micromechanical models are based on the analysis of networks of cross-linked long-chain molecules. They can be subdivided into those based on a Gaussian or non-Gaussian network model, or a mixture of the two. Artificial neural networks can be classical, bearing no mechanical constraints, or constitutive, including such constraints in the network. A comparison of various constitutive models is provided below, and the notation applied in the relevant formulas is the following: W (with various superscripts) denotes the deformation energy density function, σ_11 denotes the true Cauchy stress under uniaxial tension, µ is the shear modulus (the second Lamé constant), ν is the Poisson ratio, λ_i denote the principal stretches of the material (λ in the uniaxial state), and J is their product.
Moreover, I_1 is the first invariant of the right Cauchy-Green deformation tensor, F is the deformation gradient, λ_L is the first Lamé constant, while C_i and D_i, as well as A_p, B_p, α_p, and β_p, are different material constants in the constitutive theories presented below. Readers are encouraged to look into the given references to find a specific physical or mathematical explanation of the other parameters appearing below.
The first proposed hyper-elastic models were invariant-based, specifically the series-function ones. The first theoretical approach was that of Mooney and Rivlin [53,54]; their model includes the first two invariants in their first powers. This model was simplified to include only the first invariant and is widely known as the neo-Hookean model [55][56][57]. A third, second-power term was included by Isihara [58], and next, a third term for the first invariant was added by Yeoh [59]. Carroll's model drops the conventional (I_i − 3) form [60] to give W_C = β_1 I_1 + β_2 I_1^4 + β_3 √I_2. It violates the restriction introduced by Ogden and Treloar that U(I_1, I_2, I_3) = 0 in the reference configuration, and the following modification was proposed to overcome this peculiarity: a modified Carroll's model, presented here in the incompressible form [61]. The last series-function model was introduced by Zhao [62]. It adds terms with mixed invariants in different powers governed by the same constant. This concept was also used before by Bahreman and Darijani.
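To make the series-function family above more tangible, the following short sketch (in Python, assuming an incompressible material under uniaxial stretch and using illustrative parameter names c10, c01, c20, and c30) evaluates the neo-Hookean, Mooney-Rivlin, and Yeoh potentials from the invariants; it only illustrates the functional forms recalled above and does not reproduce any specific implementation from the cited works.

def invariants_uniaxial(lam):
    # First and second invariants of the right Cauchy-Green tensor for an
    # incompressible material under uniaxial stretch lam (lam2 = lam3 = lam**-0.5).
    I1 = lam**2 + 2.0 / lam
    I2 = 2.0 * lam + 1.0 / lam**2
    return I1, I2

def W_neo_hookean(lam, c10):
    I1, _ = invariants_uniaxial(lam)
    return c10 * (I1 - 3.0)

def W_mooney_rivlin(lam, c10, c01):
    I1, I2 = invariants_uniaxial(lam)
    return c10 * (I1 - 3.0) + c01 * (I2 - 3.0)

def W_yeoh(lam, c10, c20, c30):
    I1, _ = invariants_uniaxial(lam)
    return c10 * (I1 - 3.0) + c20 * (I1 - 3.0)**2 + c30 * (I1 - 3.0)**3

print(W_mooney_rivlin(2.0, 0.39, 0.015))  # strain energy density at lam = 2 (illustrative constants)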
The second most common variants of phenomenological models are the power, exponential, or logarithmic ones. The first was proposed by Knowles [63], followed by the models of Swanson [64] and Gregory [65]. A further modification was proposed recently by Hong et al. and called the modified Gregory model [47]. Another variant of a power-law-based function was put forward by Lopez-Pamies [66]. This type of function was also used by Yeoh in his modified form by adding an exponential term [67]; this was generalized in 2019 by Travis et al. to include arbitrary powers of all three terms and is called the generalized Yeoh model [68].
Gent and Thomas moved away from using a power function and instead proposed a linear term combined with a relatively simple logarithmic term that allowed for an easy stress formulation [69]. Such logarithmic or (later) exponential terms are incorporated into the energy functions to allow for a better estimation of deformation in uniaxial and equibiaxial tension.
The first proposal of a combined exponential and logarithmic function was put forward by Hart-Smith [70]: W_HS = c_10 ∫ exp[c_1 (I_1 − 3)^2] dI_1 + c_01 ln(I_2 / 3). Another variant of such a combination was proposed by Alexander [71], and some other variations were put forward by Veronda and Westmann, Vito, Humphrey and Yin, and much later by Mansouri and Darijani.
Another combination of such a phenomenological model was introduced by Hoss and Marczak [72]; it consists of three terms, namely, power, exponential, and logarithmic ones. Next is the exponential-linear model [73], which, instead of the summation of the logarithmic and linear terms, proposes their multiplication. The most recent form of this type of model is the Anssari-Benam-Bucchi model [74].

Another type of phenomenological model limits chain extensibility. This type introduces a limit I_m for the stretch ratio of the macromolecular chain of rubber and provides other parameters for small strains. The first such model was introduced by Warner [75]. A further proposal with a much more elaborate logarithmic term is the van der Waals model [76], which includes the limit stretch λ_m. A small difference in the quotient was then proposed by Gent [77], allowing for better accuracy at small stretches. In the early 2000s, Pucci and Saccomandi [78] proposed a model with two logarithmic terms, and the last variant of this kind of phenomenological model was put forward by Horgan and Murphy [79].

Phenomenological models based on the stretch ratio are simple, yet can be used for large strain ratios, especially the famous Ogden variant. The first one was proposed by Valanis and Landel [80]; its compressible form, additional proofs, and its experimental validation are provided in [81]. Further proposals were prepared by Ogden [14], Shariff [13], Attard and Hunt [25], and Arman and Narooei [82]. Mixed phenomenological models use invariants and stretch ratios and can capture small and high strains well. They emerged in the early 2000s; some examples are the continuum hybrid model [83] and the WFB model [84].

Micromechanical network models aim to determine the microstructural mechanisms of the material that relate to its mechanical properties. They can be categorized by the presence of a Gaussian character in the chain network. Gaussian chain network models characterize small and moderate stretches well but are not as accurate for large ones with hardening. The first Gaussian model was proposed by Treloar [85]; other variants include the slip-link model [86] and the tube model [43]. The tube model potential consists of two parts that characterize chain cross-linking, W_T,c, and chain entanglement, W_T,e. The most recent Gaussian-type model is the nonaffine tube model [87]. Non-Gaussian chain network models apply the mechanical property of a non-Gaussian single chain to the molecular chains of the rubber network. The first three-chain model assumed a distribution of molecular chains along the principal directions and was efficient only for small stretches. This limitation was overcome by Arruda and Boyce by introducing limited chain extensibility. Furthermore, a more general distribution of molecular chains was allowed in further theories that also introduced chain entanglement and cross-linking.
The three-chain model is defined in [88], the well-known and frequently used Arruda-Boyce model in [89], and the eight-chain model in [90]. A systematic consideration of the affine, three-chain, eight-chain, and micro-sphere models is provided in [91].
The Flory-Erman approach, which is an extension of the nonaffine tube model and consists of the phantom energy function W_FE,pe and a part describing the micromechanics of chain molecules, W_FE,ce, is provided in [92]. The Gaussian nature of the phantom part of this model causes a deviation at high strains. This is overcome in its modified version proposed by Boyce and Arruda, which replaces the neo-Hookean part with an eight-chain model. In its simplified version, the micro-sphere model is given in [93]. One of the modern formulations for a hyper-elastic material is the extended tube model [94], which includes five parameters and focuses on the molecular-statistical approach for polymer networks. Related formulations include the network averaging tube model [95], the SpT model [96], the Wu-van der Giessen (full-network) model [97], and the Bechir and Chevalier model [99].

Only recently have some efforts been made to define the hyper-elastic behavior of multi-phase materials. This was because of the high complexity of the numerical simulations and the unavailability of analytical solutions. These problems were overcome by a rapid increase in computational resources, allowing for an iterative solution. Some examples of contemporary studies include [100][101][102][103][104][105][106][107][108], as well as [109] for the visco-elastic regime. Demand for advanced hyper-elastic models is stimulated by the biotechnological industry, where the mechanics of biological tissue [110] and its interaction with artificial appliances were studied [21]. They are also required in the aerial, textile, and automotive industries, and are used, for example, in tires or various shock absorbers. It needs to be mentioned that a Poisson ratio close to 0.5 for rubber-like materials may break the incremental FEM solution process (in both the deterministic and probabilistic contexts) due to negative diagonal components within the stiffness matrix or a lack of numerical convergence.
Unlike in linear small-strain elasticity, each of the above models carries its own set of assumptions prescribed during its derivation, and these are not necessarily easily interchangeable with others. This introduces challenges in the description of their stochastic nature because the material parameters are not a good choice for the overall stochastic unknowns. Such a choice would bind the analysis to a specific model, which is highly undesirable. Instead, some more generic parameters or variables should be preferred, for example, the defect volume fraction, which is independent of the particular model; random (effective) material coefficients should then be derived from such random sources.
Stochastic or probabilistic studies are quite well documented in the reversible elastic regime of composites [111,112]. This is not true for the inelastic regime. One of the main reasons is the abovementioned lack of generality of the various laws and the complexity of their numerical solution, even in the deterministic case. The stochastic hyper-elasticity or visco-elasticity of composites is considered in only a limited number of studies; some examples include [113][114][115][116]. Much attention was focused on a particular hyper-elasticity potential, for example, the Ogden [117][118][119], neo-Hookean [66,120], or van der Waals models [105]; some of these also compare the results of several potentials, as in [121], or propose a specific solution technique [122]. However, there is a lack of studies concerning composites that include interface defects in the nonlinear regime. This is especially true for a joint deterministic and stochastic analysis coupled with the verification of multiple hyper-elastic potentials. Unlike the majority of stochastic considerations, such analyses are additionally based on a set of laboratory tests performed specifically for the computational part, in which the numerical response of the matrix is fitted with the use of the least squares method. The proposed approach tackles the problem of the lack of generality of hyper-elastic laws via the introduction of a probabilistic homogenization algorithm. It enables the computation of random effective material parameters for an arbitrary linear hyper-elastic potential with a specified source of the input uncertainty.
Please note that the engineering stress σ_i for the above models can be obtained for all the principal directions i from the strain energy density; thus, for uniaxial tension, σ_UT,1 = 2(∂W/∂I_1 + (1/λ)·∂W/∂I_2)(λ − 1/λ^2) with the transverse components equal to zero, and for equibiaxial tension, σ_BT,1 = 2(∂W/∂I_1 + λ^2·∂W/∂I_2)(λ − 1/λ^5) with σ_BT,3 = 0; the pure shear (planar tension) case is analogous, with σ_PS,3 = 0. Artificial neural networks emerged in constitutive model determination 30 years ago [123]. Instead of first selecting a closed form and then tuning its parameters, one proposes a family of artificial neural networks and then learns their weights and parameters; these may be physical, phenomenological, or mixed. Until recently, classical neural networks remained a black box in terms of the morphology of the parameters and partially or totally disregarded the kinematic, thermodynamic, and physical constraints. They also bypassed constitutive modeling altogether, which raised well-founded concerns about their formulation. These concerns were addressed recently via the direct inclusion of the physical and mechanical constraints into the neural network [124][125][126].
A very interesting innovation in this regard is constituted by artificial neural networks that take an a priori closed set of constitutive models whose weighting in the final formula is defined during the learning process [127,128]; such a formulation overcomes the above concerns. Such a trained network includes a weighted sum of closed-form models for each loading scheme. Thus, it is not strictly a new model in itself but rather a novel mixture of already created ones; its successful utilization requires prior knowledge of the classical models and cannot work without them.
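Before moving on, a minimal sketch is given of how the uniaxial stress relation quoted above can be combined with the least-squares identification mentioned earlier; the Mooney-Rivlin form is assumed, and the stretch-stress data and starting values are invented solely for demonstration.

import numpy as np
from scipy.optimize import least_squares

def sigma_ut_mooney_rivlin(lam, c10, c01):
    # Engineering stress in uniaxial tension for an incompressible Mooney-Rivlin
    # solid: sigma = 2*(dW/dI1 + (1/lam)*dW/dI2)*(lam - 1/lam**2),
    # with dW/dI1 = c10 and dW/dI2 = c01.
    return 2.0 * (c10 + c01 / lam) * (lam - 1.0 / lam**2)

# hypothetical stretch-stress pairs from a uniaxial tension test (illustrative only)
lam_exp = np.array([1.1, 1.5, 2.0, 3.0, 4.0])
sig_exp = np.array([0.42, 1.30, 2.10, 3.60, 5.20])  # MPa

def residuals(p):
    c10, c01 = p
    return sigma_ut_mooney_rivlin(lam_exp, c10, c01) - sig_exp

fit = least_squares(residuals, x0=[0.2, 0.02])
print("fitted c10, c01 [MPa]:", fit.x)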

Probabilistic Methods
The structural behavior of materials and structures in civil and mechanical engineering design is defined by models with specific assumptions. Traditionally they have a certain set of parameters to fit them to the response of the material in well-known tests. These parameters are always exposed to some scatter, which comes from various sources of randomness, such as the morphology of the material, tolerances in manufacturing and measurements, accuracy of conversions, or other unknown origins that are impossible to quantify during measurement.
Probabilistic analysis is an approach that tackles this problem directly by augmenting deterministic mathematical models with random parameters or variables that represent sources of uncertainty. It allows for a much more precise estimation of the behavior of the analyzed system. In addition to the mean, characteristic, or design values obtained in most engineering calculations, it also outputs the expected value, coefficient of variation, and other specific information about the uncertainty of the results. It further allows us to directly compute and optimize the structural safety margin in certain load conditions. This margin may be represented, for example, as a reliability index or a probability of survival. Probabilistic analysis may be applied in almost all structural (and other) analyses at various levels of design, i.e., the level of the material, the structural element, or even the entire structure; recent examples include heat transfer [129,130], fatigue [131,132], stiffness [133], failure [134][135][136], or system response under uncertainty [137].
The probabilistic approach encompasses all the methods based on probability calculus, leading to the calculation of the material or structural response with input uncertainties. This response is commonly represented as random moments or characteristics, such as the expected value, coefficient of variation, skewness, or kurtosis [138]. The most precise representation of statistical distribution is the probability density function (PDF), but it is also the most demanding in terms of computational resources. This is why in most cases, probabilistic characteristics are preferable, as in the study of hyper-elastic materials. The most widespread methods that allow for probabilistic design are direct derivation methods, simulation methods, spectral methods, and perturbation methods.
Direct derivation methods [14,138] use integral calculus to derive random characteristics of the response with a known PDF of the input random variables. In its classical form, this approach uses a direct relation between the structural response and the random parameter; it is commonly called an analytical method (AM). In many cases, such a relation is not known, cannot be derived analytically, or a symbolic solution for the known relation simply does not exist. An approximation of this relation is obtained with various numerical techniques that are commonly called a response function or a response surface. Such an approach is called a semi-analytical method (SAM) [138,139] and is applied in the study of hyper-elastic materials; alternatively, some approximate integration may be employed.
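A minimal sketch of the direct derivation idea is given below, assuming a closed-form (arbitrarily chosen polynomial) response function and a Gaussian input variable; the symbolic integration simply carries out the moment integrals on which the AM and SAM rely.

import sympy as sp

x, mu = sp.symbols("x mu", real=True)
s = sp.symbols("s", positive=True)

# response function assumed in closed form and the Gaussian PDF of the input variable
g = 1 + 2 * x + 3 * x**2
pdf = sp.exp(-(x - mu)**2 / (2 * s**2)) / (s * sp.sqrt(2 * sp.pi))

# direct derivation of the first two probabilistic moments by integration
E_g = sp.simplify(sp.integrate(g * pdf, (x, -sp.oo, sp.oo)))
E_g2 = sp.simplify(sp.integrate(g**2 * pdf, (x, -sp.oo, sp.oo)))
Var_g = sp.simplify(E_g2 - E_g**2)

print(E_g)    # expected: 3*mu**2 + 2*mu + 3*s**2 + 1
print(Var_g)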
Computer simulation methods [140][141][142][143][144][145] substitute integration with a finite number of deterministic realizations, which are then subjected to statistical estimation. Realizations are made with sets of parameters obtained from random or pseudo-random generation according to their predefined PDFs; these are all referred to as Monte Carlo simulation (MCS) methods [140,146,147]. They have some advantages, i.e., ease of implementation and avoidance of integration, but they all also have important disadvantages. The major drawback of this approach is the vast number of realizations required to reach a satisfactory convergence of the result, which is guaranteed only with their infinite number [140,148]. This problem is partially alleviated in more modern approaches, such as importance sampling [149], stratified sampling [150], or Latin hypercube sampling [151] techniques; all of these approaches attempt to lower the required number of samples. A second disadvantage is poor scaling for an increasing number of unknowns. The last and deciding disadvantage is that the inherent discrete character of the MCS prevents the retrieval of continuous probabilistic characteristics or measures of reliability.
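The following sketch contrasts crude Monte Carlo sampling with Latin hypercube sampling for estimating the expected value and the coefficient of variation of a simple placeholder response; the response function, the Gaussian input, and all numerical values are assumptions made only for this illustration.

import numpy as np
from scipy.stats import norm, qmc

def response(E):
    # placeholder structural response, e.g., a deflection proportional to 1/E
    return 1.0 / E

rng = np.random.default_rng(0)

# crude Monte Carlo: Gaussian Young's modulus with a 10% coefficient of variation
E_mc = rng.normal(loc=210.0e9, scale=21.0e9, size=100_000)
u_mc = response(E_mc)
print("MCS mean:", u_mc.mean(), "CoV:", u_mc.std() / u_mc.mean())

# Latin hypercube sampling aims at comparable accuracy with far fewer realizations
u01 = qmc.LatinHypercube(d=1, seed=0).random(n=1_000)[:, 0]
E_lhs = norm.ppf(u01, loc=210.0e9, scale=21.0e9)
u_lhs = response(E_lhs)
print("LHS mean:", u_lhs.mean(), "CoV:", u_lhs.std() / u_lhs.mean())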
The spectral method [130,[152][153][154] describes the Gaussian random field with the use of the Karhunen-Loève expansion. The structural response is emulated by expansion into polynomial chaos [155], but the biggest problem of this methodology is the number of terms in the expansion series necessary to achieve satisfactory accuracy [156]. Because of this, the result may be inadequate and quite far from the exact solution, especially when a second-order polynomial is used (which is commonly done). For this expansion to obtain satisfactory accuracy, a high number of elements may oftentimes be required. An additional problem is connected to the Karhunen-Loève expansion itself, which does not have a solution for certain problems [157,158], with an especially important example being a reliable calculation of higher-order probabilistic characteristics [159].
Perturbation methods [138,159,160] describe the structural response as spread around its mean value with a given small perturbation. They expand the response function into a Taylor series around the expected values of the random variables. The order of this method depends on the number of terms in the applied expansion, namely, one term in the first order, two terms in the second order, etc. One of its issues is the convergence of the Taylor series, which is not always guaranteed. This problem is even more complex when the response function is not known and must be approximated, which is the case for the study of hyper-elastic materials. The response function is then selected from a set of polynomial functions, for which the theoretical convergence of the specific Taylor series can be mathematically proven. The major advantages of this method are swift execution, relative ease of implementation, and the continuous character of the results. It also substitutes integrals with derivatives and overcomes the problem of the lack of an analytical solution. The biggest drawbacks are the probabilistic convergence for certain types of functions and the requirement of a higher-order expansion to reach high accuracy for input random variables with a high coefficient of variation. A relation between the result and the input random variable is usually unknown before the probabilistic analysis. Its analytical derivation is available only for relatively simple, well-known problems. In other cases, this relation is sought with the use of numerical methods, such as the finite element method, boundary element method, and finite difference method; joining their output with probabilistic analysis results in the stochastic finite element method (SFEM) [153,[161][162][163][164], stochastic boundary element method (SBEM) [165,166], and stochastic finite difference method (SFDM) [138,167]. Such a fusion creates a powerful tool for probabilistic design and stochastic computations, but it does not provide a new probabilistic method itself. The FEM-based variant is usually preferred because of the convenience of solving the homogenization problem and the availability of software for FEM computations; readers looking for a comprehensive introduction to the finite element method may refer to [168].
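A minimal sketch of the second-order perturbation idea for a single random variable is given below; the derivatives at the mean are approximated by central finite differences, and the test function and its parameters are chosen arbitrarily so that the result can be checked against a Monte Carlo estimate.

import numpy as np

def second_order_perturbation(f, mu, sigma, h=1.0e-4):
    # Second-order stochastic perturbation estimates of the mean and variance of f(X)
    # for a single random variable X with mean mu and standard deviation sigma;
    # the derivatives at mu are approximated by central finite differences.
    f0 = f(mu)
    df = (f(mu + h) - f(mu - h)) / (2.0 * h)
    d2f = (f(mu + h) - 2.0 * f0 + f(mu - h)) / h**2
    mean = f0 + 0.5 * d2f * sigma**2   # E[f] ~ f(mu) + 0.5 * f''(mu) * Var[X]
    var = (df * sigma)**2              # leading-order variance estimate
    return mean, var

mu, sigma = 2.0, 0.1
print("perturbation:", second_order_perturbation(lambda x: 1.0 / x, mu, sigma))

x = np.random.default_rng(1).normal(mu, sigma, 1_000_000)
print("Monte Carlo :", (1.0 / x).mean(), (1.0 / x).var())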

Interphase and Interface Defects
An interphase is an additional phase of the composite formed during its manufacturing process or exploitation. It is introduced (artificially in modeling or during manufacturing) in between two phases of a composite, commonly between the matrix and the filler (reinforcement). Its mechanical [169], thermal, electrical, thermo-mechanical [170], and physical characteristics differ from those of the two surrounding phases. Its volume is much lower than that of the other phases, yet it highly influences the behavior of the entire composite. This is because the interphase effectively encapsulates the filler and prevents a direct interaction between the composite constituents. As documented, an interphase significantly affects the effective material properties of multiple isotropic, cubic, and anisotropic composites in a deterministic [171][172][173][174][175] and also a stochastic context [8,11,176,177]. It either decreases them in the presence of defects [178] or increases them when the two phases are chemically bound [179][180][181][182]. Its influence is so high that considerable research attention has been put toward its tuning and tweaking to improve key properties of composites or adjust their performance for special purposes [183,184].
An interphase is extremely difficult to localize and further analyze in laboratory tests. This is because its thickness is very small (on the order of micrometers) and its characteristics differ for each particle or fiber in the same sample of the composite. Even when surface agents are applied to the phases, the interphase around each filler particle or fiber forms under unique thermo-electro-mechanical conditions; this effectively causes its geometry, thickness, and properties to be random (see [185]).
Interface defects encompass all the inclusions, voids, discontinuities, and inaccuracies that exist in the transition between the two phases in a composite. They reflect frequent manufacturing imperfections, for instance, those following significant residual thermal stresses, and can be treated during a numerical simulation as geometrical imperfections in composite materials [186]. Their occurrence greatly affects the response and properties of the composite despite their extremely small volume, which is a fraction of the volume of the interphase. Interface defects were shown to be crucial for many properties of a composite, including its durability [187], reliability [188], thermal conductivity [189], and even failure [180,[190][191][192]. The defects and inclusions also cause a high microscopic stress concentration [193]. Some studies were devoted to the defects only [194].
An interphase with interface defects forms an imperfect interface. Its usage in the realistic prediction of composites' behavior dates back many years [195][196][197][198][199]. The analysis of composites with imperfect interfaces is performed primarily with the use of three techniques: (1) insertion of the interphase in between the main composite constituents [200][201][202][203][204], (2) usage of special contact finite elements [205,206], and (3) application of a system of springs [207] that may also be supplemented with dampers. Very interesting strategies belonging to the first group are based upon a geometrical idealization of such defects, e.g., with the use of semi-circular or semi-spherical shapes, which follows the well-known cavitation phenomenon for a variety of matrices [190,208].

Multiscale Models and Numerical Simulation
Multiscale analysis represents some trends in current numerical modeling in which the given heterogeneous system is described simultaneously by multiple models at different scales of resolution. Models at each scale may originate from physical laws of different natures, for example, continuum mechanics at the macroscale [209] and molecular dynamics at the atomistic scale. They are required because certain phenomena visible at one scale cannot be described accurately without supplementary information from a different scale from which they originate. Some examples include (1) brittle failure of the reinforced concrete beam, which is caused by micro-cracks in concrete microstructure in between the cement and grains; (2) plastic elongation of steel caused by a slip between its grains; or (3) macroscopic properties of composites affected by microstructural defects. Alternatively, they are used because purely macroscale models are not accurate enough, while lower-scale models offer too much information or require too high computational power to be executed. The multiscale approach aims at achieving a compromise between accuracy and efficiency and frequently provides solutions to problems that are otherwise unsolvable in a reasonable period. The multiscale analysis comprises three major components: multiscale models, multiscale analysis tools, and multiscale algorithms. Multiscale algorithms involve using multiscale analysis tools to bridge the scales and connect different multiscale models at different levels.
The most used scales depend mainly upon the dimensions and are the following: the macroscale, mesoscale (level of the microstructure), atomistic scale, and electronic scale. Interestingly, each of these scales falls into a different discipline, for example, the macroscale falls into civil and mechanical engineering, the mesoscale falls into materials science, and the electronic scale falls into physics. Thus, multiscale design frequently requires bridging different disciplines to be solved. The connection between the models is ensured either analytically or numerically [210][211][212], and bridging between different scales is done either sequentially or concurrently. In the sequential approach, certain characteristics used in macroscopic models are precomputed. Usually, only a limited number of parameters (or variables) is passed to a different scale, but in some sparse representations, even as many as six variables can be passed effectively [213,214]. In the concurrent alternative, the macroscale model variables are computed on the fly during the simulation. Its major advantage is that a much smaller domain of macroscopic variables must be computed on a different scale. On the other hand, the complexity of the numerical algorithm solving the macroscale increases. Concurrent methods are not especially well suited to problems in which parameters are passed to an FEM-based code, where each element would potentially require a separate set of parameters.
Traditional structural analysis preferred in civil engineering does not involve a multiscale approach because it tends to limit calculations to the linear range of material deformation, where simple empirical constitutive laws are sufficient. The additional margin of a nonlinear structural response is frequently used as a safety margin. A good example is the design of structures made from constructional steel, for which the engineering codes, such as the Eurocodes, prefer the usage of linear elastic models. Calculations become much more involved (even for steel) when localized phenomena, such as strain localization [215] or crystal plasticity [216], must be taken into consideration. For these phenomena, a macroscale model must be supplemented with, or interchanged for, lower-scale models. Macroscopic stress in an FEM crash test, for instance, is calculated with supplementary information from the mesoscale (the scale of crystals and phases), as well as lower scales, where defects originate. Unlike in statics, in structural dynamics, a scale is defined not only for space but also for time. The smaller the scale, the lower the dimensions and the smaller the timespan. In multiscale design, information may be passed from a higher scale to a lower scale (top-down method) or from a lower to a higher scale (bottom-up method).
The process of crossing a certain scale is called scale-bridging; an example of top-down bridging is passing boundary conditions for each element of the FEM analysis into a mesoscale model, while a bottom-up bridging example involves intra-grain bonding conditions coming from the atomistic scale. A multiscale approach is already used, albeit indirectly, at the structural design level for reinforced concrete. Verification of the ultimate limit state for this composite already requires some supplementary information apart from the scale of the structural element. Specifically, precise information on the steel rebar positioning is required even when all the macroscopic properties of concrete and steel are already known. These properties, in turn, depend on information coming from four lower scales, i.e., the C-S-H, cement paste, mortar, and concrete mesoscale levels [217].
There exist multiple methods and algorithms that allow for solving multiscale problems. They focus on algebraic, numerical, or hybrid solutions and are usually aimed at certain problem domains. Their comprehensive review is available in [218]. Some examples include the multi-grid methods aimed at solving large systems of algebraic equations [219], with some alterations in the equation-free method [220], and the heterogeneous multiscale method, where a preconceived macroscale model with missing components is assembled and the missing data are found with the use of microscale models or a matched asymptotics approach [221][222][223]. Some other methods include averaging and tolerance-averaging methods [224,225], hydrodynamic limit methods [226], the Mori-Zwanzig formalism [227], renormalization group methods [228], variational methods [229], and homogenization methods [230][231][232][233][234][235][236]. The multiscale approach is very frequently used for the determination of the macroscopic (or effective) properties of composites, also in the stochastic context [237,238]. Some introduction to their theory is available, for example, in [239,240]. The homogenization method is often chosen because of its relatively easy application in FEM systems.

Homogenization Method and Effective Medium Response
The homogenization method is frequently referred to as a process of replacing an equation with highly oscillatory coefficients by one with homogeneous (effective) coefficients. Initially used in studying partial differential equations (PDEs) [230,241], this concept was found to be efficient at solving the problem of the non-homogeneous microstructure of continua, such as composites. This is because the macroscale boundary conditions, such as forces, loadings, or supports, act on a length scale much larger than that of the microstructure. A classical boundary value problem is given in the following way: ∇·(C(x/τ)∇u_τ) = f, with τ being a very small parameter and C(g) being a periodic coefficient, C(g + e_j) = C(g), j = 1, ..., n. It may be replaced by the homogenized form ∇·(C*∇u) = f, where C* is the effective constitutive tensor with constant coefficients representing a homogenized material. This property can be computed as C*_ij = ∫_[0,1]^n C(y)(e_j + ∇_y w_j)·e_i dy_1 ... dy_n, i, j = 1, ..., n, where the periodic function w_j satisfies ∇_y·(C(y)(∇_y w_j + e_j)) = 0. One equation may be replaced by the other if τ is small enough to satisfy u_τ ≈ u, i.e., when u_τ → u as τ → 0. Regarding the continuum concept, an analog of the differential element is the representative volume element (RVE) in 3D problems or the representative surface element (RSE) in 2D problems. This should be selected in such a way as to contain all the relevant statistical information about the inhomogeneous medium. With such an assumption, averaging over this element results in an effective property of the medium, defined as C* above. A key problem in such a formulation is the assumption that such an RVE is solvable and contains as much information about the microstructure as possible. This also holds for stochastic calculations where, in addition to the RVE selection, uncertain parameters must also be selected in a way that captures the most important randomness sources while remaining simple enough to be solved.
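As a simple illustration of the averaging idea behind C*, the following sketch computes the classical Voigt and Reuss estimates for a two-phase layered medium; the phase stiffnesses and volume fractions are purely illustrative, and the Reuss (harmonic) value coincides with the exact effective coefficient of the one-dimensional cell problem for loading normal to the layers.

import numpy as np

def laminate_bounds(C, v):
    # Voigt (arithmetic) and Reuss (harmonic) estimates of the effective stiffness of
    # a layered medium with phase stiffnesses C and volume fractions v (sum(v) = 1).
    C = np.asarray(C, dtype=float)
    v = np.asarray(v, dtype=float)
    voigt = np.sum(v * C)        # uniform-strain assumption (upper bound)
    reuss = 1.0 / np.sum(v / C)  # uniform-stress assumption (lower bound)
    return voigt, reuss

# illustrative two-phase cell: a stiff filler (3 GPa) in a soft rubber-like matrix (3 MPa)
print(laminate_bounds(C=[3.0e9, 3.0e6], v=[0.2, 0.8]))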
In its early approaches, homogenization was used in a purely analytical [233,242] way, and thus, the microstructure was very simple and the range of applications was limited. This obstacle was overcome with the incorporation of the FEM, where the RVE was modeled and solved. The rapid evolution of the academic and commercial FEM software that started at the end of the twentieth century supported researchers in the discretization and visualization of the RVE. A simultaneous revolution in the computational power of personal computers made solutions to more complex problems accessible. All of this allowed homogenization to become one of the most widespread methods used to solve multiscale problems in materials science, especially those connected to the meso- and microscale of inhomogeneous materials (see, e.g., [243][244][245]). The topic of a correct RVE is so important that some studies treat it as a research problem connected with its generation [246], size or scale effect [247][248][249][250][251], and the validity of the applied random microstructure [252]. Some other works reviewed existing stochastic boundary conditions and introduced new ones [253] or proposed a formulation of finite elements for the hyper-elastic case [254]. The FEM is commonly included in approaches that are aimed at decreasing the problem size or computation time, which is done at the expense of accuracy. These propose the usage of artificial neural networks and machine learning approaches [255][256][257], divide the heterogeneous medium into several subdomains [117], use a manifold-learning method to reduce the dimensions of microscopic strain fields [258], utilize the orthogonal decomposition (R3M) [259], or aim to apply a reduced database model [260], to name a few. The FEM is, of course, not the only possibility. There also exist some alternative methods, such as mesh-free formulations [261], a gradient approach [262], fast Fourier transform (FFT)-based methods [263], a transformation field [264], or the discrete element method [265], which is perfect for densely packed solids. Sometimes, a nonlinear response of the material is studied, for which some approaches to homogenization exist, such as the ones in [266][267][268][269]. In many cases, they include specific a priori assumptions related to the stress or strain fields (see, e.g., [270][271][272]).

Probabilistic homogenization adds yet one more level of abstraction to homogenization. It introduces uncertain parameters and variables in the lower scale of homogenization, usually in the material microstructure; some examples were presented and discussed in [273][274][275][276][277][278][279]. Similarly to probabilistic analysis in macroscopic calculations, it quantifies the influence of input uncertainty on the response of the medium. The difference is the source of uncertainty, which cannot be included at a macroscopic level, yet causes randomness in engineering structures. Common problems for which probabilistic homogenization is applied include uncertain phase properties, interface defects, geometric uncertainty, or inclusions. Uncertainty may be included in one phase or in the various characteristics of the RVE, for example, the reinforcement positioning. These all result in an uncertain stiffness tensor or parameters leading to the material constitutive relation.
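A minimal sketch of probabilistic homogenization in the Monte Carlo setting is given below; a cheap Reuss-type closed form stands in for an FEM-based solution of the RVE, and the defect-driven random volume fraction and its statistics are assumptions made only for this illustration.

import numpy as np

def effective_modulus(v_filler, E_filler=3.0e9, E_matrix=3.0e6):
    # Reuss-type effective modulus of a two-phase cell; a closed-form stand-in for
    # an FEM-based homogenization of the RVE.
    return 1.0 / (v_filler / E_filler + (1.0 - v_filler) / E_matrix)

rng = np.random.default_rng(2)

# uncertain filler volume fraction, e.g., reduced by random interface defects
v = np.clip(rng.normal(loc=0.20, scale=0.02, size=50_000), 0.0, 1.0)
E_eff = effective_modulus(v)

print("E[E_eff]:", E_eff.mean())
print("CoV     :", E_eff.std() / E_eff.mean())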
Probabilistic homogenization of composites is especially interesting when it is coupled with the problem of interface defects. An analytical solution to this problem can only be obtained for elastic composites and a simple RVE, which was proposed in [280]. A more in-depth analysis of the stiffness tensor, even in the elastic regime, requires the usage of numerical solvers. The author proposed such an approach [204], verified it with an analytical solution, and studied a fully anisotropic response of the homogenized composite [11]. The results demonstrated that particle clustering and an uneven particle distribution affect the anisotropy of the composite and have a high influence on the components of its stiffness tensor. In his other work [8], he verified, against an analytical solution, a numerical model of a composite with an uncertain reinforcing particle radius and studied the influence of an uncertain aspect ratio on the effective stiffness tensor of a composite. One may extend this model toward a hyper-elastic regime of the composite, where an uncertain hyper-elastic response of a homogeneous medium can be taken into account [115], which may be further extended toward the stochastic hyper-elastic response of composites with hysteresis [99] and with stochastic interface defects [281,282].
As is well known, the effective response of a medium is a relation between the objective function and uncertain parameters or variables. In structural design, the objective function could be defined as a limit function. In homogenization, it usually is an effective property of a medium, such as the stiffness tensor, bulk modulus, effective stress, or strain energy. In the majority of homogenization problems (and also in most structural engineering problems), such a relation cannot be determined analytically. This is the reason why the objective function is commonly computed with the use of discrete numerical procedures. In the SPT, this is frequently done using the response function method (RFM) [138] or the response surface method (RSM) [283,284] when more variables are considered. The alternative, the direct differentiation method (DDM), is rarely selected because it requires at least an intervention into the source code of a discrete numerical solver or even the introduction of a dedicated solver. This is because the deterministic values used by these programs must be substituted with their stochastic counterparts.
The response function method and the response surface method both aim to approximate the real relation of the objective function to an uncertain variable with the use of a surrogate model (also called a meta-model [285,286]). This is done based on a carefully selected set of discrete numerical (or laboratory) experiments performed for different values of the input variables [287,288]. Their major advantages are the ease of application and the decoupling of the metamodel fitting from the stochastic procedure, which allows for a simple analysis of the fitting errors. Surrogate models are commonly selected from a subset of polynomial functions, as well as their ratios or other rational functions [289,290]; a little less common is the application of B-spline, logarithmic, exponential, or hyperbolic functions.
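The following sketch illustrates the response function idea: a low-order polynomial is fitted by least squares to a handful of hypothetical deterministic homogenization results and is then used to propagate the uncertainty of the input (here, a defect volume fraction with assumed statistics).

import numpy as np

# hypothetical deterministic homogenization results: the effective modulus obtained
# for several trial values of the uncertain input (a defect volume fraction)
d_trial = np.array([0.00, 0.02, 0.04, 0.06, 0.08, 0.10])
E_trial = np.array([14.9, 13.6, 12.4, 11.3, 10.3, 9.4])  # MPa, illustrative only

# response function: a low-order polynomial fitted by least squares
response = np.poly1d(np.polyfit(d_trial, E_trial, deg=2))

# probabilistic moments of the response for an assumed random defect fraction
rng = np.random.default_rng(3)
d = rng.normal(loc=0.05, scale=0.01, size=200_000)
E_eff = response(d)
print("E[E_eff]:", E_eff.mean(), "CoV:", E_eff.std() / E_eff.mean())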
Searching for the response functions or surfaces is an optimization problem and can be solved using linear programming (LP), quadratic programming (QP), and nonlinear programming (NP) methods. A linear programming problem is one where the objective function and all of the constraints are linear functions of the decision variables. It always has either (a) one or more equivalent globally optimal solutions, (b) an unbounded objective, or (c) no feasible solution. It is convex and has at most one feasible region with "flat faces" (i.e., no curves) on its outer surface. Its optimal solution (if available) lies at a "corner point" of this surface that is determined by the constraints. A solver may work pointwise, and the solution is fast. Common solvers include families of the simplex technique in its primal [291] or dual [292] version and the interior point technique [293], where the fitting procedure is completed with a linear or (sometimes) nonlinear version of the least squares method.
As is well known, a quadratic programming problem has an objective function that is a quadratic function of the design variables and constraints that are all linear functions of these variables. Such problems have only one feasible region with "flat faces" on its surface (due to the linear constraints), but the optimal solution may be found anywhere within this region or on its surface. The objective function may be convex or non-convex; convex functions have either a positive definite or positive semi-definite Hessian, while non-convex functions have an indefinite Hessian and a saddle shape, which is usually out of the scope of QP solvers. Typical solvers include a simplex extension to the QP, active-set and working-set method variations [294,295], interior point [296][297][298], or Newton barrier methods [299][300][301][302]. A nonlinear programming problem, such as the one arising in the hyper-elastic stochastic analysis of composites and their effective characteristics, is no doubt one of the most difficult issues in optimization theory. Its objective function is generally a nonlinear function of the decision variables and may have many locally optimal solutions. A global minimum is very difficult to find [303], and common solvers include augmented Lagrangian methods [304], sequential quadratic programming [305,306], and reduced gradient methods [307,308].

Concluding Remarks
As demonstrated in this review work, the application of probabilistic and stochastic methods in the analysis of hyper-elastic solids still attracts many researchers and has a large audience. Such an approach is especially convenient for engineering practice with polymer-based composites, specifically elastomers. Experimental statistics included in new theoretical and computer models allow for the further optimization of such materials. This may be crucial for the optimal design of structural dampers, whose reliability, durability, and structural health monitoring may bring huge qualitative and quantitative savings. Stochastic multiscale models, especially those including the atomistic scale of the solid, are indeed still very scarce, mainly because uncertainty at this scale requires relatively expensive Monte Carlo simulations to obtain reliable stochastic analyses. The time and computer power consumption in this case are decreasing only slowly, which is the main reason to continuously look for concurrent techniques. This is also why various homogenization methods remain very attractive in materials engineering, especially those accounting for material and geometrical imperfections of a random nature, and most probably will remain so in the future.