Mechanical Identification of Materials and Structures with Optical Methods and Metaheuristic Optimization

This study presents a hybrid framework for mechanical identification of materials and structures. The inverse problem is solved by combining experimental measurements performed by optical methods and non-linear optimization using metaheuristic algorithms. In particular, we develop three advanced formulations of Simulated Annealing (SA), Harmony Search (HS) and Big Bang-Big Crunch (BBBC) including enhanced approximate line search and computationally cheap gradient evaluation strategies. The rationale behind the new algorithms—denoted as Hybrid Fast Simulated Annealing (HFSA), Hybrid Fast Harmony Search (HFHS) and Hybrid Fast Big Bang-Big Crunch (HFBBBC)—is to generate high-quality trial designs lying on a properly selected set of descent directions. Besides hybridizing the SA/HS/BBBC metaheuristic search engines with gradient information and approximate line search, HS and BBBC are also hybridized with an enhanced 1-D probabilistic search derived from SA. The results obtained in three inverse problems regarding composite and transversely isotropic hyperelastic materials/structures with up to 17 unknown properties clearly demonstrate the validity of the proposed approach, which significantly reduces the number of structural analyses with respect to previous SA/HS/BBBC formulations and improves the robustness of metaheuristic search engines.


Introduction and Theoretical Background
An important class of inverse problems is the identification of material properties involved in constitutive equations or of stiffness properties that drive the mechanical response to applied loads. Since displacements represent the direct solution of the general mechanics problem for a body subject to some loads and kinematic constraints, the inverse solution of the problem is to identify the structural properties corresponding to a given displacement field {u(x, y, z), v(x, y, z), w(x, y, z)}. The term "structural properties" covers material parameters (e.g., Young's modulus, hyperelastic constants, viscosity etc.) and stiffness terms including details on material constituents (e.g., tension/shear/bending terms in composite laminates, fiber orientation and ply thickness etc.) evaluated for the region of the body under investigation. The finite element model updating technique (FEMU) [1][2][3] and the virtual fields method (VFM) [2][3][4] are the most common approaches adopted in the literature for solving mechanical characterization problems. In general, FEMU is computationally more expensive than VFM, but the latter method may require special care in selecting specimen shape, virtual displacement fields and kinematic boundary conditions to simplify the computations entailed by the identification process and obtain realistic results.
In the FEMU method, experimentally measured displacement fields are compared with their counterparts predicted by a finite element model simulating the experiment. This comparison is made at a given set of control points. If boundary conditions and loads are properly simulated by the FE model, computed displacements match experimental data only when the actual structural properties are given in input to the numerical model. The difference between computed displacement values and measured target values may be expressed as an error functional Ω that depends on the unknown material/structural properties to be identified. Hence, the inverse problem of identifying NMP unknown mechanical properties may be stated as an optimization problem where the goal is to minimize the error functional Ω. That is:

Min Ω(X PROP ) = Σ j=1,NCNT [(δ FEM j − δ j )/δ j ]² subject to G p (X PROP ) ≤ 0 (p = 1, . . . ,NC) and X L ≤ X PROP ≤ X U (1)

where: X PROP (X 1 ,X 2 , . . . ,X NMP ) is the design vector containing the NMP unknown properties ranging between the lower bounds "L" and the upper bounds "U"; N CNT is the number of control points at which FE results are compared with experimental data; δ FEM j and δ j , respectively, are the computed and target displacement values at the j th control point; G p (X PROP ) define a set of NC constraint functions depending on the unknown properties that must be satisfied in order to guarantee the existence of a solution for the inverse problem. Buckling loads or natural frequencies can also be taken as target quantities in the optimization process: in this case, the displacement field of the structure is described by the corresponding normalized mode shape. Non-contact optical techniques [5][6][7] such as moiré, holography, speckle and digital image correlation are naturally suited for material/structure identification because they can accurately measure displacements in real time and gather full-field information without altering specimen conditions.
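The FEMU comparison described above can be sketched in a few lines of code. The function names, the relative-difference normalization of the residuals and the constraint convention G_p(X) ≤ 0 are illustrative assumptions consistent with the definitions given for problem (1), not the authors' implementation:

```python
import numpy as np

def error_functional(delta_fem, delta_exp):
    """Relative least-squares discrepancy between FE-computed and measured
    displacements at the N_CNT control points (sketch of the functional Omega)."""
    delta_fem = np.asarray(delta_fem, dtype=float)
    delta_exp = np.asarray(delta_exp, dtype=float)
    return float(np.sum(((delta_fem - delta_exp) / delta_exp) ** 2))

def is_feasible(x, lower, upper, constraints=()):
    """Side constraints X_L <= X <= X_U plus the G_p(X) <= 0 conditions."""
    x = np.asarray(x, dtype=float)
    in_bounds = np.all(x >= lower) and np.all(x <= upper)
    return bool(in_bounds and all(g(x) <= 0.0 for g in constraints))
```

In a FEMU loop, `delta_fem` would come from a new FE analysis at each candidate design, which is exactly why every evaluation of Ω is expensive.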
The full-field capability of optical techniques makes it possible to select the amount of experimental data needed to make the results of the identification process reliable. Furthermore, their versatility allows the best experimental set-up to be chosen for the inverse problem at hand. Depending on the type of illumination, which may vary from coherent or partially coherent light to white light, the magnitude of measured displacements may range from fractions of a micron (using, for example, lasers and interferometry) to some millimetres (using, for example, grating projection, image correlation and white light), thus covering a wide spectrum of material and structural identification problems.
Regardless of the way displacement information is extracted from recorded images, all optical methods share a common basic principle. The light wave fronts hitting the specimen surface are modulated by the deformations undergone by the tested body. By comparing the wave fronts modulated by the body surface and recorded by a sensor before and after deformation, a system of fringes forms on the specimen surface; each fringe represents the locus of an iso-displacement region. The spatial frequency distribution of fringes can be used for recovering strain fields. Material anisotropy, presence of local defects (e.g., dislocations in crystalline structures) and/or damage (e.g., delamination or cracks) produce fringe distortions or changes in the spatial frequency of fringe patterns.
The inverse problem (1) is in general highly nonlinear and in all likelihood non-convex, especially if there are many parameters to be identified. Furthermore, the error functional Ω is not explicitly defined and each new evaluation of Ω entails a new finite element analysis. Such a non-smooth optimization problem cannot be handled efficiently by gradient-based algorithms. In fact, their utilization has continuously been decreasing in the last 10 years. For example, in the case of soft materials (a rather complicated subject) just a few studies using Levenberg-Marquardt or Sequential Quadratic Programming techniques (see, for example, [8][9][10][11][12][13][14][15][16]) have earned at least one or two citations per year.
Global optimization methods can explore larger fractions of the design space than gradient-based algorithms. This results in better trial solutions and a higher probability of avoiding premature convergence of the population over the optimization process. Genetic algorithms (GA) and particle swarm optimization (PSO), instead, may suffer from premature convergence, stagnation, and sensitivity to problem formulation. Furthermore, GA and PSO include more internal parameters than SA, HS and BBBC, which increases the amount of heuristics in the optimization process. The same arguments apply to differential evolution (DE), whose performance is strongly dependent on the crossover/mutation scheme implemented in the algorithm. Based on these considerations, we decided to develop advanced formulations of SA, HS and BBBC for material/structural identification problems.
Lamberti et al. attempted to improve the convergence speed of SA (e.g., [77,81,83,84,162,163]), HS (e.g., [164][165][166]) and BBBC (e.g., [165,166]) in inverse and structural optimization problems. While these SA/HS/BBBC variants clearly outperformed the reference algorithms in weight minimization of skeletal structures, improvements in computational cost were less significant for inverse problems because those variants evaluated gradients of the error functional Ω using a "brute-force" approach based on finite differences. This occurred in spite of having enriched the metaheuristic search with gradient information. In order to overcome this limitation, this study presents new hybrid formulations of SA, HS and BBBC that are significantly more efficient and robust than the algorithms currently available in the literature. For that purpose, low-cost gradient evaluation and approximate line search strategies are introduced in order to generate higher quality trial designs and a very large number of descent directions. Populations of candidate designs are renewed very dynamically by replacing as many designs as possible. All algorithms use a very fast 1-D probabilistic search derived from simulated annealing.
The new algorithms-denoted as Hybrid Fast Simulated Annealing (HFSA), Hybrid Fast Harmony Search (HFHS) and Hybrid Fast Big Bang-Big Crunch (HFBBBC)-are tested in three inverse elasticity problems: (i) mechanical characterization of a composite laminate used as substrate in electronic boards (four unknown elastic constants); (ii) mechanical characterization and layup identification of a composite unstiffened panel for aeronautical use (four unknown elastic constants and three unknown layup angles); (iii) mechanical characterization of bovine pericardium patches used in biomedical applications (sixteen unknown hyperelastic constants and the fiber orientation). Sensitivity of inverse problem solutions and convergence behavior to population size and initial design/population is evaluated in statistical terms.
The rest of this article is structured as follows: Sections 2-4, respectively, describe the new SA, HS and BBBC formulations developed here trying to point out the theoretical aspects behind the proposed enhancements and critically compare the new formulations with currently available SA/HS/BBBC variants including those developed in [77,81,83,84,162,166]. Section 5 presents the results obtained in the inverse problems. Finally, Section 6 discusses the main findings of this study.

Hybrid Fast Simulated Annealing
The flow chart of the new HFSA algorithm developed in this study is shown in Figure 1. HFSA is a multi-level and multi-point formulation combining global and local annealing, evaluation of multiple trial points, and line search strategies based on fast gradient computation. The hybrid nature of HFSA derives from the fact that the metaheuristic search is enriched by approximate line searches. Similar to classical SA, the proposed algorithm starts by setting an initial design vector X 0 as the current best record X OPT . The corresponding cost function value Ω OPT = Ω(X OPT ) is computed. The counter of cooling cycles is set to K = 1 and the maximum number of cooling cycles to K MAX = 100. The initial temperature T 0 is set equal to 0.1, close to the target value of 0 for the error functional Ω.
 Figure 1. Flow chart of the HFSA algorithm developed in this research.

Step 1: Generate A New Trial Design with "Global" Annealing by Perturbing All Design Variables
Since determination of the sensitivities ∂Ω/∂x j entails new structural analyses, the material parameters taken as optimization variables are perturbed as follows: where X OPT l and X OPT l−1 are the best records for the last two iterations; Ω(X OPT l ) and Ω(X OPT l−1 ) are the corresponding values of the error functional; N RND,j is a random number in the interval (0,1). The vector (X OPT l − X OPT l−1 ) is a descent direction with respect to the previous best record X OPT l−1 , while −(X OPT l − X OPT l−1 ) may be a descent direction with respect to the current best record X OPT l . The approximate gradient of Ω is computed from the cost function variation between the last two best records; the absolute value accounts for the "−" sign included in Equation (2). The N RND,j random number preserves the heuristic character of the SA search, while the Ω OPT,l−1 /Ω OPT,l ratio forces the optimizer to take a large step along a potentially descent direction. Using approximate gradient evaluation allows the computational cost of the inverse problem to be drastically reduced with respect to other SA applications [76,77,81,83,84]. A trial design X TR (x OPT,1 + ∆x 1 , x OPT,2 + ∆x 2 , . . . , x OPT,NMP−1 + ∆x NMP−1 , x OPT,NMP + ∆x NMP ) is hence formed.
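The "global" annealing perturbation of Step 1 can be sketched as follows. The function name and the exact combination of factors are assumptions based on the stated rationale (random factor for the heuristic character, cost-ratio scaling, step along the direction joining the last two best records), since the display form of Equation (2) is not reproduced here:

```python
import numpy as np

def global_annealing_step(x_best, x_prev, omega_best, omega_prev, rng):
    """Perturb all variables along the direction joining the last two best
    records, scaled by the cost-reduction ratio (illustrative sketch)."""
    n_rnd = rng.random(x_best.shape)          # preserves the heuristic character
    ratio = omega_prev / omega_best           # > 1 while the search keeps improving
    dx = n_rnd * ratio * (x_best - x_prev)    # step along a potentially descent direction
    return x_best + dx
```

Each call produces one trial design X_TR; its evaluation (Step 2) costs one FE analysis.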

Step 2: Evaluation of the New Trial Design
If Ω(X TR ) < Ω(X OPT ), X TR is set as the new best record X OPT .
Step 5 is executed in order to check for convergence and reset parameters K and T K .
If Ω(X TR ) > Ω(X OPT ), a "mirroring strategy" is used to perturb the design along a descent direction. In fact, since Ω(X TR ) > Ω(X OPT ) yields (X TR − X OPT ) T ∇Ω(X OPT ) > 0, the −(X TR − X OPT ) T ∇Ω(X OPT ) < 0 condition is expected to be satisfied, thus defining the new descent direction (X TR new − X OPT ) ≡ −(X TR − X OPT ). The new candidate design X TR new is defined as: If the mirror trial point X TR new still does not improve X OPT (i.e., if Ω(X TR new ) > Ω(X OPT )), the cost function Ω(X) is approximated by a 4th order polynomial that passes through the five trial points X TR , X INT ', X OPT , X INT " and X TR new , where X INT ' is randomly generated on the segment limited by X TR and X OPT while X INT " is randomly generated on the segment limited by X OPT and X TR new . A local 1-D coordinate system is set for the segment limited by X TR and X TR new : the origin is located at X OPT and coordinates are normalized with respect to the distance from the origin. The trial point X OPT * at which the approximate error functional Ω APP (X) takes its minimum value is determined. An exact analysis is performed at X OPT * and the real value of the error functional Ω(X OPT *) is computed. The following cases may occur.
If Ω(X OPT *) < Ω(X OPT ), X OPT * is reset as the current best record. Hence, Step 5 is executed in order to check for convergence and reset K and T K .
If Ω(X OPT *) > Ω(X OPT ), trial designs X TR , X INT ', X OPT *, X INT " and X TR new are evaluated with the Metropolis' criterion. The cost function variation ∆Ω s = [Ω(X s ) − Ω(X OPT )] is computed for these designs (the s subscript denotes TR, INT', OPT*, INT" and TR new , respectively). For the trial design yielding the smallest increment ∆Ω s > 0 (in all likelihood X OPT *), the Metropolis' probability function is defined as:

P(Ω s ) = exp{−∆Ω s /[T K (Σ r=1,NDW ∆Ω r /NDW)]} (4)

where NDW is the number of trial points at which the error functional was higher than the previously found best records up to the current iteration. The ∆Ω r terms are the corresponding cost penalties. The ratio Σ r=1,NDW ∆Ω r /NDW accounts for the overall history of all previous trial designs and normalizes the probability function with respect to cost function changes. The design X s is provisionally accepted or definitively rejected according to the Metropolis' criterion:

P(Ω s ) > N RD,s ⇒ Accept; P(Ω s ) < N RD,s ⇒ Reject (5)

where N RD,s is a random number defined in the interval (0,1). If X s is accepted by Equation (5), it is added to the database Π, which includes all trial designs that could not improve the current best record. Hence, Step 4 is executed.
If all of the X s points are rejected by Equation (5), Step 3 is executed.
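The quartic approximation of Step 2 reduces to a 1-D interpolation problem on the normalized coordinate spanning X_TR, X_INT', X_OPT, X_INT" and X_TR_new. A minimal sketch with NumPy, where the helper name is hypothetical:

```python
import numpy as np

def quartic_line_search(t, omega):
    """Fit a 4th-order polynomial through five (t, Omega) samples on the
    normalized 1-D coordinate and return the in-interval minimizer X_OPT*."""
    coeffs = np.polyfit(t, omega, deg=4)        # 5 points, degree 4: exact interpolation
    poly = np.poly1d(coeffs)
    crit = poly.deriv().roots                   # stationary points of the quartic
    crit = crit[np.isreal(crit)].real
    crit = crit[(crit >= t.min()) & (crit <= t.max())]
    candidates = np.concatenate([crit, [t.min(), t.max()]])
    return float(candidates[np.argmin(poly(candidates))])
```

Only the minimizer of the approximation is evaluated with a new FE analysis, which is how the line search stays computationally cheap.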

Step 3: Generate New Designs with "Local" Annealing by Perturbing One Variable at A Time

In classical SA, M ann annealing cycles are completed and a total of M ann ·NMP analyses are performed. Here, the derivatives ∂Ω/∂x j (j = 1,2, . . . ,NMP) computed at X OPT are sorted in ascending order from the minimum value to the maximum value. Following this order, design variables are perturbed one by one as: where the sign "+" is used if ∂Ω/∂x j < 0 while the sign "−" is used if ∂Ω/∂x j > 0. If x j TR violates side constraints, it is reset as follows:

x j TR > x j U ⇒ x j TR = x OPT,j + (x j U − x OPT,j )·N RND,j (j = 1,2, . . . ,NMP)
x j TR < x j L ⇒ x j TR = x j L + (x OPT,j − x j L )·N RND,j (7)

Each new design X TR j (x OPT,1 ,x OPT,2 , . . . ,x j TR , . . . ,x OPT,NMP−1 ,x OPT,NMP ) defined with Equations (6) and (7) is evaluated and the current best record is updated if Ω(X TR j ) < Ω(X OPT ). Conversely, if Ω(X TR j ) > Ω(X OPT ), the mirror trial point X TR j,mirr (x OPT,1 ,x OPT,2 , . . . ,2x OPT,j −x j TR , . . . ,x OPT,NMP−1 ,x OPT,NMP ) is evaluated. Two scenarios may occur: (i) if Ω(X TR j,mirr ) < Ω(X OPT ), X TR j,mirr is set as the new best record X OPT ; (ii) if also Ω(X TR j,mirr ) > Ω(X OPT ), X TR j and X TR j,mirr are evaluated with the Metropolis criterion (5) and the best of them is eventually set as the current best record. The 1-D search lasts until no improvement in design is achieved over two consecutive cycles. Similar to the "global" annealing strategy, the 1-D probabilistic search attempts to generate trial designs lying on descent directions. However, perturbation starts from the most sensitive variables in order to capture the effect of each single variable more efficiently. In fact, in the global annealing search, the vector (X TR − X OPT ) was defined so as to have (X TR − X OPT ) T ∇Ω(X OPT ) < 0, thus forming a descent direction. However, the non-linearity of the cost function meant that such a condition was not sufficient for improving the design. In view of this, "local" annealing selects the most important terms (X TR − X OPT ) j ∂Ω/∂x j that form the cost function variation and promptly corrects them should they not contribute effectively to the reduction of the cost function.
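The one-variable-at-a-time loop of Step 3 can be sketched as follows. The step form for the perturbation itself is an assumption (a random fraction of the variable range, moved opposite to the gradient sign), while the side-constraint reset follows Equation (7); the function name is hypothetical:

```python
import numpy as np

def local_annealing_trials(x_opt, grad, lower, upper, rng):
    """One-variable-at-a-time perturbations, ordered by ascending derivative;
    out-of-bounds trials are reset as in Eq. (7)."""
    order = np.argsort(grad)                   # most negative derivative first
    trials = []
    for j in order:
        step = rng.random() * (upper[j] - lower[j])   # assumed step magnitude
        x_trial = x_opt.copy()
        x_trial[j] += step if grad[j] < 0 else -step  # move opposite to gradient sign
        if x_trial[j] > upper[j]:
            x_trial[j] = x_opt[j] + (upper[j] - x_opt[j]) * rng.random()
        elif x_trial[j] < lower[j]:
            x_trial[j] = lower[j] + (x_opt[j] - lower[j]) * rng.random()
        trials.append(x_trial)
    return trials
```

Each trial perturbs a single coordinate of X_OPT, so rejected moves can be mirrored or discarded without disturbing the other variables.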

Step 4: Evaluation of Trial Designs that Satisfy Metropolis' Criterion

If there are no trial designs for which the cost function decreases, HFSA extracts from the database Π (including the designs that satisfied the Metropolis' criterion) the design X j BEST with the lowest cost function value, and sets this design as the current best record. Hence, the increase in cost is minimized each time all improvement routines have failed and the 1-D local annealing search could not improve the design.
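Steps 2-4 repeatedly apply the Metropolis acceptance test of Equation (5). A minimal sketch, where the function name is hypothetical and the normalization by the average of the previous cost penalties follows the description of Equation (4):

```python
import math
import random

def metropolis_accept(delta_omega, penalty_history, temperature, rng=random):
    """Probabilistic acceptance of a worsened design; the average of the
    previous penalties normalizes the acceptance probability."""
    if delta_omega <= 0.0:
        return True                            # improvements are always kept
    if penalty_history:
        avg_penalty = sum(penalty_history) / len(penalty_history)
    else:
        avg_penalty = delta_omega              # first worsening: no history yet
    p = math.exp(-delta_omega / (temperature * avg_penalty))
    return p > rng.random()                    # criterion (5)
```

Accepted-but-worse designs would be appended to the database Π and to `penalty_history`, so the acceptance threshold adapts to the search history.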

Step 5: Check for Convergence and Eventually Reset Parameters for A New Cooling Cycle

If the annealing cycles counter K > 3, HFSA utilizes the following convergence criterion: where Ω OPT,K and X OPT,K , respectively, are the best record and corresponding design vector obtained in the K th cooling cycle. The convergence parameter ε CONV is set equal to 10 −7 .
If the criterion (8) is satisfied or K = K MAX , go to Step 6. Conversely, if K < 3 or the stopping criterion is not satisfied (also for K ≥ 3), the counter of cooling cycles is reset as K = K + 1. The temperature is adaptively reduced as T K+1 = β K T K , where Ω INIT,K−1 and Ω FIN,K−1 , respectively, are the cost function values at the beginning and at the end of the current annealing cycle, and N REJE is the number of trial designs rejected out of the total number of trial designs N TRIA generated in the current cooling cycle.

Step 6: End Optimization Process
HFSA terminates the optimization process and writes the output data in the results file.

Hybrid Fast Harmony Search
The new HFHS algorithm developed in this research is now described in detail. Like HFSA, HFHS enriches the metaheuristic search with gradient information and approximate line searches. This is done at low computational cost. Furthermore, the HS engine is enhanced by a 1-D probabilistic search based on simulated annealing. Like many state-of-the-art HS variants, internal parameters such as the harmony memory considering rate (HMCR) and the pitch adjusting rate (PAR) are adaptively changed by HFHS during the optimization process based on convergence history. Last, the classical harmony refinement process based on the bandwidth parameter (bw) is replaced by another random movement that forces HFHS to refine the new harmony by moving along a descent direction. The flow chart of the new algorithm is presented in Figure 2.
The initial population of N POP solutions is randomly generated using the following equation:

x j k = x j L + ρ j k (x j U − x j L ) (j = 1,2, . . . ,NMP; k = 1,2, . . . ,N POP )

where ρ j k is a random number uniformly generated in the (0,1) interval.
These designs are sorted in ascending order according to the values taken by the error functional Ω and stored in the harmony memory matrix [HM]. As mentioned above, the present HFHS algorithm does not require initialization of the internal parameters HMCR, PAR and bw.
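The harmony memory initialization just described can be sketched as follows; the function name and the callable interface for Ω are illustrative assumptions:

```python
import numpy as np

def initialize_harmony_memory(n_pop, lower, upper, omega, rng):
    """Random initial population between the side constraints, sorted by
    ascending error functional: the [HM] matrix and its cost vector."""
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    rho = rng.random((n_pop, lower.size))      # rho_j^k uniform in (0,1)
    hm = lower + rho * (upper - lower)
    costs = np.array([omega(x) for x in hm])   # one FE analysis per design
    order = np.argsort(costs)
    return hm[order], costs[order]
```

Keeping [HM] sorted makes the best record X_OPT simply the first row, which the later steps rely on.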

Step 1: Generation and Adjustment of a New Harmony with Adaptive Parameter Selection
Let X OPT = {x OPT,1 ,x OPT,2 , . . . ,x OPT,NMP } be the best design stored in the population corresponding to Ω OPT . The gradient of error functional with respect to design variables ∇Ω(X OPT ) is computed at X OPT . For each variable, a random number N RND,j is extracted from the (0,1) interval.
If N RND,j > HMCR, the new value x TR,j assigned to the j th optimization variable (j = 1,2, . . . ,NMP) currently perturbed is:

where ∂Ω/∂x j is the cost function sensitivity for the j th design variable currently perturbed and µ j = (∂Ω/∂x j )/||∇Ω(X OPT )|| is the sensitivity coefficient normalized with respect to the gradient vector modulus. Sensitivities ∂Ω/∂x j are computed with Equation (22), which will be described later in this section. Using the '+' sign if ∂Ω/∂x j < 0 and the '−' sign if ∂Ω/∂x j > 0 allows trial points lying on descent directions to be generated. If HMCR is small, Equation (12) is more likely to be used. The (x TR,j − x OPT,j ) perturbations given to each design variable are weighted by the sensitivities to form the cost function variation ∆Ω TR for the new harmony X TR . This variation is expressed by the scalar product ∆Ω TR between the gradient ∇Ω(X OPT ) and the search direction S TR = (X TR − X OPT ) formed by the new harmony and the current best record. If ∆Ω TR < 0, S TR is a descent direction. In order to make S TR a descent direction, all increments (x TR,j − x OPT,j ) ∂Ω/∂x j must hence be negative. The following strategy is adopted to retain or adjust the perturbation given to the current design variable (j = 1,2, . . . ,NMP):
(∂Ω/∂x j )(x TR,j − x OPT,j ) < 0 ⇒ leave x TR,j unchanged
(∂Ω/∂x j )(x TR,j − x OPT,j ) > 0 ⇒ reset x TR,j as x TR,j ' = (1 + N RND,j )x OPT,j − N RND,j x TR,j (13)

Hence, Equations (12) and (13) randomly generate new trial designs that must lie on descent directions. Sensitivities are computed at the current best record to improve convergence speed. The mirroring strategy implemented by the second relationship of Equation (13) attempts to transform the non-descent direction S TR into the descent direction −S TR by perturbing the design in the opposite direction.
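The retain-or-mirror rule of Equation (13) can be sketched per variable as follows; the function name is hypothetical:

```python
import numpy as np

def enforce_descent(x_trial, x_opt, grad, rng):
    """Retain a variable's perturbation when it contributes to a cost decrease,
    otherwise mirror it to the other side of x_OPT (Eq. (13))."""
    x_adj = x_trial.copy()
    for j in range(x_opt.size):
        if grad[j] * (x_trial[j] - x_opt[j]) > 0.0:       # non-descent contribution
            n_rnd = rng.random()
            x_adj[j] = (1.0 + n_rnd) * x_opt[j] - n_rnd * x_trial[j]
    return x_adj
```

The mirrored value equals x_OPT,j − N_RND,j (x_TR,j − x_OPT,j), i.e., the offending perturbation is flipped across the best record and randomly rescaled.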
The effect of the distance of the current best record X OPT from the side constraint boundaries is taken into account by perturbing design variables by the largest possible step (i.e., (x OPT,j − x j L ) or (x j U − x OPT,j )) along the currently defined descent direction. This maximizes the improvement in the cost function. If N RND,j < HMCR, the new value x TR,j assigned to the j th variable is defined as (j = 1,2, . . . ,NMP): Unlike classical HS and advanced formulations [163,164,167], HFHS does not select the value x TR,j from the j th column of the harmony memory storing the values of the corresponding variable for each design of the population. This enhances the diversity of the optimization process and allows stagnation to be avoided. By considering the difference (N RND,j − 0.5), it is possible to increase or reduce the x̄ TR,j HM value. The x TR,j value is then adjusted with Equation (13) so that the step (x TR,j − x OPT,j ) also lies on a descent direction. Conversely, in classical HS and [164][165][166], the pitch adjusting operation did not include any information on how much the design may be sensitive to the currently analyzed variable.
If it holds also N RND,j < Min(HMCR,PAR), the x TR,j value is finally pitch adjusted as: where NG pitch,adj is the number of previously pitch adjusted trial designs; NG tot is the total number of trial designs generated in the optimization search. The NG pitch,adj parameter is reset as (NG pitch,adj + 1) if the number of pitch adjusted variables included in a new harmony is larger than the number of design variables perturbed with Equation (12) using gradient information.
The scale parameter λ scale is set as: Equations (14)-(16) replace the bandwidth parameter bw usually used in many HS variants. The new harmony X TR (x TR,1 ,x TR,2 , . . . ,x TR,NMP ) can be decomposed into NMP sub-harmonies X TR,j (x OPT,1 ,x OPT,2 , . . . ,x TR,j , . . . ,x OPT,NMP ), each obtained by perturbing only one design variable at a time and lying on a descent direction. These movements are amplified by the scale factor defined by Equation (16). Furthermore, Equation (15) also accounts for the optimization history. In fact, since NG pitch,adj /NG tot decreases as the optimization progresses, the perturbation step defined to pitch adjust each design variable gets finer as the optimum is approached.
The case N RND,j < HMCR and N RND,j > PAR is dealt with by Equation (13), possibly including the mirroring strategy. Hence, the present algorithm intrinsically pitch adjusts design variables and in any case tries to improve the current design.
As mentioned above, new values of HMCR and PAR parameters are randomly generated in each new iteration and adapted based on optimization history. In the q th iteration, HMCR and PAR are set as: In Equations (17) and (18), Ω aver,init q−1 and Ω aver,end q−1 , respectively, are the average values of cost function for the trial designs included in the harmony memory at the beginning and the end of the previous optimization iteration (the Ω aver,end q−1 /Ω aver,init q−1 ratio should always be smaller than 1). X OPT,init and X WORST,init , X OPT,end and X WORST,end , respectively, denote the best and worst designs at the beginning and the end of the previous iteration. NG gradient is the number of trial designs generated by including gradient information: this parameter is reset to (NG gradient + 1) if the number of design variables perturbed with Equation (12) is greater than NMP/2. Random values HMCR extracted q and PAR extracted q are defined as: where ξ HMCR and ξ PAR are two random numbers in the interval (0,1). The bounds of 0.01 and 0.99 set in Equation (19) allow all possible values of internal parameters to be covered [168].
In the first iteration (q = 1), it obviously holds HMCR q = HMCR extracted q and PAR q = PAR extracted q .
Equations (17) and (18) rely on the following rationale: the error functional may decrease more rapidly if large perturbations are given to many variables. This is more likely to happen when gradient information is directly utilized, that is, when N RND,j > HMCR. In order to increase the probability of using Equation (12) for many design variables, the randomly generated HMCR value is scaled by the (Ω aver,end q−1 /Ω aver,init q−1 ) ratio. The generation process of new harmonies is hence forced to be consistent with the current rate of reduction of Ω. Furthermore, HMCR q is scaled by the NG pitch,adj /NG gradient ratio. If the number of new harmonies generated via pitch adjusting tends to be smaller than the number of new harmonies directly generated including gradient information (i.e., if NG pitch,adj /NG gradient < 1), it is more logical to keep following such a trend.
Similar arguments hold for the PAR q parameter. Pitch adjustment is performed if N RND,j < Min(HMCR,PAR).
Besides information on cost function reduction rate (Ω aver,end q−1 /Ω aver,init q−1 ), Equation (18) accounts for population diversity. In fact, pitch adjusting is less effective as population becomes less sparse, that is when the ||X OPT,end − X WORST,end ||/||X OPT,init − X WORST,init || ratio decreases. Again, the NG pitch,adj /NG gradient ratio preserves the current trend of variation of the pitch adjusting rate parameter.
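Since Equations (17)-(19) are not reproduced here, the adaptation rationale can be sketched as follows; the specific products and the clipping to [0.01, 0.99] are assumptions consistent with the text above, not the published formulas:

```python
def adapt_hmcr_par(hmcr_rand, par_rand, cost_ratio, diversity_ratio, ng_ratio):
    """Illustrative sketch of the HMCR/PAR adaptation rationale.

    cost_ratio      ~ Omega_aver,end(q-1) / Omega_aver,init(q-1), smaller
                      than 1 when the error functional is dropping fast;
    diversity_ratio ~ ||X_OPT,end - X_WORST,end|| / ||X_OPT,init - X_WORST,init||;
    ng_ratio        ~ NG_pitch,adj / NG_gradient.

    Scaling the randomly extracted values by these ratios and clipping to
    [0.01, 0.99] mimics the behavior described in the text; the exact
    Equations (17)-(18) are not reproduced here.
    """
    hmcr = min(0.99, max(0.01, hmcr_rand * cost_ratio * ng_ratio))
    par = min(0.99, max(0.01, par_rand * cost_ratio * diversity_ratio * ng_ratio))
    return hmcr, par
```

A fast-decreasing cost (small cost_ratio) lowers HMCR, so the gradient-based branch N RND,j > HMCR becomes more likely, as argued above.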

Determination of Sensitivities of Ω
Since the error functional is implicitly defined, gradients are determined by approximate line search. The cost function variation ∆Ω k = [Ω(X k ) − Ω(X OPT )] that occurs by moving from the best design X OPT to the k th design X k stored in the harmony memory is determined for all designs, together with the corresponding distance ∆S k = ||X k − X OPT ||. The approximate (i.e., "average") gradient along each direction S k = (X k − X OPT ) is computed as ∆Ω k /∆S k . Since the new harmony must lie on a descent direction, the S k vectors must be transformed into descent directions S desc k = (X OPT − X k ) = −S k . Three descent directions are considered: (i) the best direction S BEST corresponding to the largest cost variation between candidate designs (this is the opposite direction to (X WORST − X OPT )); (ii) the steepest descent direction S FAST corresponding to the largest gradient ∆Ω k /∆S k ; (iii) the second best direction S 2ndBEST corresponding to the second largest cost variation between candidate designs (this is the opposite direction to (X 2ndWORST − X OPT )). Figure 3 illustrates the formation of the descent directions and their mutual positions with respect to the gradient of the cost functional. Any descent direction should lie within the region limited by S BEST , S 2ndBEST and S FAST . However, if the problem is highly nonlinear, the cost function may oscillate along these directions and exceed the current optimum cost. For this reason, the steps taken along S BEST , S 2ndBEST and S FAST are rescaled: the scale factors β BEST , β 2ndBEST and β FAST are defined so that S BEST unit , S 2ndBEST unit and S FAST unit are unit vectors. If ||S BEST || < 1 or ||S 2ndBEST || < 1 or ||S FAST || < 1, the corresponding unit direction coincides with the original direction, which remains a descent direction.
In order to check if the unit directions are descent directions, three new trial designs X GR (1) , X GR (2) and X GR (3) are considered. The S BEST unit , S 2ndBEST unit and S FAST unit unit vectors are classified as descent directions if the following conditions hold true, respectively: At this point, it is very likely that there will be between one and three unit descent directions in the neighborhood of the current best record. Sensitivities are hence defined as follows (j = 1,2, . . . ,NMP): Equation (22) shows that the directional derivative along a unit direction is scaled by the direction cosines in order to obtain sensitivities with respect to the optimization variables. The minimum in Equation (22) accounts for the possibility of having non-descent unit directions.
In the limit case of three non-descent directions, sensitivity is set equal to the minimum positive value so as to minimize the cost function increment in the neighborhood of the current best record. The approximate gradient evaluation strategy implemented by HFHS allows computational cost of the identification process to be significantly reduced with respect to previously developed HS variants. Once derivatives are computed, Step 1 is completed in the same way as described before.
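The gradient-free selection of descent directions described above can be sketched as follows; the array layout and function names are illustrative, and no new evaluations of Ω are performed, since only already-computed costs of stored designs are used:

```python
import numpy as np

def descent_directions(population, costs):
    """Sketch of the approximate line-search sensitivities described above.

    `population` is an (N_POP, NMP) array of stored designs and `costs` the
    corresponding error-functional values. Only differences between
    already-evaluated designs are used, so no structural analysis is run.
    """
    best = int(np.argmin(costs))
    x_opt, omega_opt = population[best], costs[best]
    others = [k for k in range(len(costs)) if k != best]

    # Average gradient dOmega_k / dS_k along each direction S_k = X_k - X_OPT.
    steps = np.array([np.linalg.norm(population[k] - x_opt) for k in others])
    d_omega = np.array([costs[k] - omega_opt for k in others])
    grads = d_omega / steps

    order = np.argsort(d_omega)             # ascending cost variation
    k_worst = others[order[-1]]             # largest variation  -> S_BEST
    k_2nd = others[order[-2]]               # second largest     -> S_2ndBEST
    k_fast = others[int(np.argmax(grads))]  # largest avg. gradient -> S_FAST

    # Descent directions point from the candidate designs toward X_OPT.
    s_best = x_opt - population[k_worst]
    s_2nd = x_opt - population[k_2nd]
    s_fast = x_opt - population[k_fast]
    return s_best, s_2nd, s_fast
```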

Step 2: Evaluation of The New Trial Design
The quality of the new harmony X TR defined in Step 1 is evaluated in this step. In classical HS, if the new trial design X TR is better than the worst design X WORST currently stored in the harmony memory, it replaces the worst design in [HM]. The sophisticated generation mechanism developed in this research gives the new trial design a high probability of improving also the current best record. The following cases may occur. If Ω(X TR ) < Ω OPT , the worst design is removed and the new harmony X TR is set as the current best record; the former optimum design becomes the second best design stored in the population. The remaining (N POP − 2) designs are then analyzed. Let (X NPOP−2 ) r be a generic harmony of these (N POP − 2) designs. For each remaining harmony (X NPOP−2 ) r , the approximate gradient with respect to the current optimum is determined as [Ω((X NPOP−2 ) r ) − Ω OPT ]/||(X NPOP−2 ) r − X OPT ||; let (X NPOP−2 ) FAST be the harmony corresponding to the largest approximate gradient. Each (X NPOP−2 ) r harmony is tentatively updated using Equation (23), with r∈(N POP − 2), where η BEST , η 2ndBEST and η FAST are three random numbers extracted in the (0,1) interval.
If Ω((X NPOP−2 ) r,new ) < Ω((X NPOP−2 ) r ), the new harmony (X NPOP−2 ) r,new replaces the old harmony (X NPOP−2 ) r . Otherwise, the new harmony is discarded and the old harmony is kept in the population. The population is reordered based on the cost of each harmony. Equation (23) introduces a sort of social behavior that induces harmonies to approach the two best designs stored in the population and to reduce the cost function as fast as possible.
If Ω(X TR ) > Ω OPT , the new harmony X TR is compared with the rest of the population. Let us assume that X TR ranks p th in the population of N POP designs. The former worst design is removed from the population and the former second worst design becomes the new worst design. Hence, there are (p − 1) better designs than X TR and (N POP − p) worse designs than X TR .
The (N POP − p) designs are analyzed similarly to what is done for Ω(X TR ) < Ω OPT . New harmonies are defined using Equation (24), with r∈(N POP −p): where η BEST , η 2ndBEST and η FAST are three random numbers in the (0,1) interval. The new harmony (X NPOP−p ) r,new replaces the old harmony (X NPOP−p ) r if it yields a lower value of error functional. The population is reordered based on the new values of Ω.
This strategy has the following rationale: although X TR could not replace the optimum, it has a higher quality than the other designs of the population. Hence, the other individuals try to imitate its behavior, at least approaching the optimum and improving their positions.

Step 3: Perform 1-D "Local" Annealing Search
If Step 2 could not improve X OPT , the 1-D "local" annealing search mechanism described in Section 2 is utilized. Variables are perturbed in the neighborhood of X OPT based on the magnitude of sensitivities ∂Ω/∂x j . Trial designs that yield a positive increment ∆Ω s > 0 with respect to Ω OPT and satisfy the Metropolis' criterion replace the worst designs stored in the harmony memory [HM].
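The Metropolis acceptance rule used by the 1-D "local" annealing search can be sketched as follows; the temperature argument is left generic here, since the cooling schedule belongs to the SA formulation of Section 2 and is not reproduced:

```python
import math
import random

def metropolis_accept(delta_omega, temperature, rng=random.random):
    """Sketch of the Metropolis criterion: a trial design worsening the
    error functional by delta_omega > 0 is still accepted with probability
    exp(-delta_omega / T). How T is cooled is not shown here."""
    if delta_omega <= 0.0:
        return True  # improvements are always accepted
    return rng() < math.exp(-delta_omega / temperature)
```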

Step 4: Check for Convergence
As the optimization process proceeds towards the global optimum, population sparsity must decrease. For this reason, the "average" design is defined as X aver = (1/N POP )∑ k X k . The termination criterion of Equation (25) is utilized in this research, where the convergence limit ε CONV is set equal to 10 −15 , close to the accuracy limit of double precision arithmetic. Steps 1 to 4 are repeated until the HFHS algorithm converges to the global optimum.

Step 5: End Optimization Process
The present HFHS algorithm terminates the optimization process and writes the output data in the results file.

Hybrid Fast Big Bang-Big Crunch
The new HFBBBC algorithm developed in this research is described in detail in this section. The strength points of the present formulation with respect to state-of-the-art BBBC variants can be summarized as follows. First, similar to HFSA and HFHS, computation of sensitivities does not entail new structural analyses. Second, the descent directions from which X TR is generated are more accurately selected. Third, the population is dynamically updated so as to simulate an explosion about the center of mass, but with all new trial designs lying on potentially descent directions. The hybrid nature of the HFBBBC algorithm comes from the combination of the explosion/contraction process with the 1-D local annealing and line search mechanisms. The flow chart of the algorithm is shown in Figure 4. Like HFHS, the initial population of N POP designs used by HFBBBC is generated with Equation (10). The present algorithm does not require any setting of internal parameters except for the population size N POP . The error functional is evaluated for all candidate solutions and the best design X OPT , corresponding to the lowest value of error functional Ω OPT , is determined.

Step 1: Definition of The Center of Mass
The coordinates of the center of mass of the population X CM (x CM,1 , x CM,2 , . . . , x CM,NMP ) are defined by Equation (26): x CM,j = (∑ k x j,k /Ω k )/(∑ k 1/Ω k ), where x j,k is the value of the j th optimization variable stored in the k th trial design and Ω k is the error functional value for the k th trial design. Penalty functions can be used to sort designs. The weighting coefficients 1/Ω k make the position of the center of mass more sensitive to the best designs stored in the population.
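A minimal sketch of the weighted center of mass, assuming the classical BBBC weighting by 1/Ω k stated above (array layout illustrative):

```python
import numpy as np

def center_of_mass(population, costs):
    """Weighted average of the stored designs: each X_k contributes with
    weight 1/Omega_k, so the center of mass is pulled toward the best
    (lowest-cost) designs, as described in the text."""
    w = 1.0 / np.asarray(costs, dtype=float)       # weights 1/Omega_k
    return (w[:, None] * population).sum(axis=0) / w.sum()
```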

Step 2: Evaluation of The Center of Mass and Progressive Update of X CM As Current Best Record
Error functional is evaluated at X CM . As mentioned above, BBBC formulations usually converge to the optimum design by updating the position of X CM . However, there is no guarantee that the new X CM may be the center of a better population. Since X CM represents a weighted average of candidate designs, its quality will be somewhere in between the worst and best individuals included in the population. The present HFBBBC algorithm considers two cases: (i) X CM is better than X OPT ; (ii) X CM is worse than X OPT .
In [164,165,167], X CM was reset as X OPT if case (i) occurred. The worst design included in the population was replaced by X CM and a new center of mass was defined. The same was done until case (ii) occurred. That approach allows one to avoid performing a new explosion about each new center of mass, thus saving N POP structural analyses with respect to classical BBBC. However, only one design at a time was replaced, while classical BBBC renews the whole population each time X CM is updated. In order to overcome this limitation without increasing computational cost, the following strategy has been implemented in this study.
If X CM is better than X OPT , it is reset as X OPT . The former best record becomes the second best design. The worst design is removed from the population. Any direction defined as (X OPT − X k ) is a descent direction with respect to X k because Ω(X k ) > Ω(X OPT ): the design improves as we move away from X k . However, (X OPT − X k ) is also opposite to (X k − X OPT ), which is a non-descent direction with respect to X OPT . If the cost functional gradient changes smoothly, a direction which was descent for X k may remain descent also for X OPT . In view of this, HFBBBC tentatively updates designs with Equation (27): X k tentative = X OPT + ξ k (X OPT − X k ), where ξ k is a random number in the interval (−1,1). If ξ k ∈(−1,0), X k tentative lies between X k and X OPT ; if ξ k ∈(0,1), X k tentative lies beyond X OPT .
Since the X k tentative designs are potentially better than the X k designs, as they have been defined by moving towards the current best record or trying to improve X OPT itself, a population including the (N POP − 1) designs X k tentative and the current best record X OPT should be of higher quality than the current population. Consequently, the new center of mass X CM tentative should be better than the former center of mass X CM defined for the original population and further improve X OPT . In order to reduce computational cost, approximate values of the error functional Ω APP (X k tentative ) are determined with Equation (28), thus avoiding new structural analyses. The approximate position of the center of mass X CM tentative is determined with Equation (26) using the X k tentative vectors and the approximate values of error functional Ω APP (X k tentative ). The real value of the error functional is evaluated at X CM tentative . If X CM tentative is better than X OPT , it is reset as X OPT .
The X k tentative designs replace the original designs X k and a new loop is performed using Equations (27) and (28). If X CM tentative no longer improves the current best record X OPT , a new center of mass (X CM tentative )' is defined by changing only the weights of the designs that lie between X k and X OPT : that is, Equation (27) is used only for ξ k ∈(−1,0). This is done because the X k tentative designs lying beyond X OPT could violate side constraints because of the very large perturbations given to the variables.
If (X CM tentative )' improves the current best record, it is reset as X OPT . The X k tentative designs generated for ξ k ∈(−1,0) replace the corresponding X k designs. A new loop is performed using Equations (27) and (28). This process is repeated until a new center of mass improves the current best record. If neither X CM tentative nor (X CM tentative )' improves X OPT , the X k tentative designs are moved back to the corresponding X k designs and Step 3 is executed. Similar to classical BBBC, the population is renewed each time the position of the center of mass is updated. However, the present algorithm requires only one or two structural analyses to evaluate X CM tentative and (X CM tentative )', vs. the 0.1N POP to N POP analyses (a rather broad range, often sensitive to the optimization problem at hand) required by state-of-the-art BBBC algorithms (see for example [169]).
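The tentative population renewal described above can be sketched as follows; the update and the cost estimate are assumptions reconstructed from the stated behavior (ξ k ∈(−1,0) places the design between X k and X OPT , ξ k ∈(0,1) beyond X OPT , and no new structural analysis is run), not necessarily the exact Equations (27) and (28):

```python
import numpy as np

def tentative_population(population, costs, rng):
    """Sketch of the renewal loop performed when X_CM improves X_OPT.

    Each design X_k is moved toward or beyond the current best record;
    the cost of the moved design is estimated by linear interpolation
    between Omega(X_OPT) and Omega(X_k), so no structural analysis is
    needed (the estimate can of course be rough for strong nonlinearity).
    """
    best = int(np.argmin(costs))
    x_opt, omega_opt = population[best], costs[best]
    new_pop = population.astype(float)
    omega_app = np.asarray(costs, dtype=float).copy()
    for k in range(len(costs)):
        if k == best:
            continue
        xi = rng.uniform(-1.0, 1.0)
        new_pop[k] = x_opt + xi * (x_opt - population[k])        # assumed Eq. (27)
        omega_app[k] = omega_opt + xi * (omega_opt - costs[k])   # assumed Eq. (28)
    return new_pop, omega_app
```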

Step 3: Evaluation of The Center of Mass and Formation of new Trial Designs Different from X CM
The case Ω(X CM ) > Ω(X OPT ) (i.e., X CM is worse than X OPT ) is the most likely to occur because the center of mass averages the properties of the N POP designs included in the population and hence should rank between X WORST and X OPT . The present algorithm utilizes a computationally inexpensive approach. A new trial design X TR mirr is defined with the mirroring strategy of Equation (29): X TR mirr = X OPT + η MIRR (X OPT − X CM ), where η MIRR is a random number in the interval (0,1). The mirroring strategy attempts to turn the non-descent direction (X CM − X OPT ) into the descent direction (X TR mirr − X OPT ). Using a random number smaller than one limits the search to the neighborhood of the current best record.
If Ω (X TR mirr ) < Ω (X OPT ), this trial design replaces the current best record which becomes the second best design of the population. The worst design is removed from the population. The optimization process is continued with Step 2 to generate (N POP − 1) X k tentative designs, renew population and update position of X CM ; convergence is checked in Step 5.
If Ω(X TR mirr ) > Ω(X OPT ), the mirroring strategy of Equation (29) is judged not effective and a new trial design must be generated by combining a set of descent directions. Since Ω(X CM ) > Ω(X OPT ), the (X CM − X OPT ) vector is a non-descent direction with respect to the current best record. However, the opposite direction S OPT-CM = −(X CM − X OPT ) ≡ (X OPT − X CM ) may be a descent direction, especially if the gradient of the cost function is smooth. Similar to the HFHS algorithm described in Section 3, the approximate gradient of Ω(X) is determined also for HFBBBC. All (X OPT − X k ) vectors are opposite to the directions (X k − X OPT ) that were non-descent with respect to X OPT . For each design X k the approximate gradient ∇Ω k appr = [Ω(X k ) − Ω(X OPT )]/||X k − X OPT || is calculated. The S BEST = (X OPT − X WORST ) direction corresponding to the largest variation of cost function between two candidate designs, the S FAST = (X OPT − X FAST ) direction corresponding to the largest ∇Ω k appr , and the S 2ndBEST = (X OPT − X 2ndBEST ) direction corresponding to the second best design are considered. A new trial design X TR is defined by Equation (30): X TR = X OPT + η OPT-CM S OPT-CM + η BEST S BEST + η 2ndBEST S 2ndBEST + η FAST S FAST , where η OPT-CM , η BEST , η 2ndBEST and η FAST are four random numbers generated in the (0,1) interval.
The generation of a new trial solution X TR with Equation (30) is illustrated in Figure 5 for an inverse problem with two variables. If the error functional gradient is smooth enough, a descent direction S will satisfy the condition S T ∇Ω(X OPT ) < 0, as appears to be the case for the S OPT-CM , S BEST , S FAST and S 2ndBEST directions in the figure. By summing up the steps taken on S OPT-CM , S BEST , S FAST and S 2ndBEST , a trial design X TR lying on a descent direction can be obtained. The present HFBBBC algorithm directly perturbs variables with respect to the current best record and hence generates higher quality designs. The effect of the average properties of the population, described by X CM , is now taken into account by considering the S OPT-CM direction.
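The combined move of this step can be sketched as follows; treating the new trial design as a plain sum of randomly weighted steps along the four directions is an assumption based on the description above, not necessarily the exact form of Equation (30):

```python
import numpy as np

def trial_design(x_opt, x_cm, s_best, s_2nd_best, s_fast, rng):
    """Sketch of the combined move: random steps along S_OPT-CM and the
    three approximate descent directions are summed to obtain X_TR.
    Using unscaled random coefficients in (0,1) is an assumption."""
    eta = rng.uniform(0.0, 1.0, size=4)  # eta_OPT-CM, eta_BEST, eta_2ndBEST, eta_FAST
    s_opt_cm = x_opt - x_cm              # opposite of the non-descent (X_CM - X_OPT)
    return (x_opt + eta[0] * s_opt_cm + eta[1] * s_best
            + eta[2] * s_2nd_best + eta[3] * s_fast)
```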
The quality of X TR is evaluated as usual. If Ω(X TR ) < Ω(X OPT ), the trial design replaces the current best record and the worst design is removed from the population. Step 2 is then performed to eventually renew the population and update the position of X CM ; convergence is checked in Step 5. Otherwise, the 1-D "local" annealing search of Step 4 is executed.

Step 4: Perform 1-D "Local" Annealing Search.
HFBBBC utilizes the same probabilistic search mechanism implemented in HFSA and HFHS. However, the position of the center of mass is updated each time a trial design improves the current best record. The new center of mass and its mirror point with respect to the current best record are also evaluated to check for further improvements in design or to define additional points satisfying the Metropolis criterion.

Step 5: Check for Convergence and Perform A New Explosion If Necessary
HFBBBC checks if the best design of the population has been improved in the current optimization cycle. The convergence check is performed after the operations entailed by Steps 3 and 4. Let (X OPT ) init and (X OPT ) final be the best designs at the beginning and at the end of the current optimization cycle. If (X OPT ) final is better than (X OPT ) init , HFBBBC checks for convergence using the same criterion, Equation (25), adopted for HFHS. If convergence is reached, Step 6 is executed. Otherwise, Steps 1 to 4 are repeated until HFBBBC converges to the global optimum.
If (X OPT ) init is equal to (X OPT ) final , the current optimization cycle did not improve the design in spite of the numerous improvement routines available in HFBBBC. For this reason, a new explosion is performed about X OPT trying to generate a higher quality population. Equation (31) is utilized: x j k = x OPT,j + ρ j k (x OPT,j − x CM,j ), where ρ j k is a random number in the interval (0,2) generated for the j th variable of the k th design. The interval (0,2) is large enough to avoid stagnation near the current best record. If x j k < x j L or x j k > x j U , the variable is brought back within the side constraints. The new population is generated by perturbing the current best record X OPT along the direction (X OPT − X CM ), opposite to the non-descent direction (X CM − X OPT ). The rationale of Equation (31) is to search for descent directions with respect to X OPT by decomposing a potentially descent direction into its components.
The new designs are compared with the previous population and only the best N POP designs are retained in the new population. This elitist strategy allows the X OPT design to be kept in the population passed to the next optimization iteration, should all of the new N POP designs generated with Equation (31) be worse than X OPT . The optimization process is then resumed from Step 1.
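The elitist retention described above can be sketched as follows (array layout illustrative):

```python
import numpy as np

def elitist_merge(old_pop, old_costs, new_pop, new_costs, n_pop):
    """Sketch of the elitist strategy: old and new designs are pooled and
    only the best n_pop survive, so X_OPT can never be lost even if the
    whole explosion is unsuccessful."""
    pool = np.vstack([old_pop, new_pop])
    costs = np.concatenate([old_costs, new_costs])
    keep = np.argsort(costs)[:n_pop]  # lowest error functional first
    return pool[keep], costs[keep]
```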

Step 6: End Optimization Process
The HFBBBC algorithm terminates the optimization process and writes the output data in the results file.

Test Problems and Results
The HFSA, HFHS and HFBBBC algorithms developed in this study for mechanical identification problems were tested on two composite structures and a hyperelastic biological membrane. They were compared with other SA/HS/BBBC variants (e.g., [77,81,83,84] and their successive enhancements [162][163][164][165][166]) including gradient information in the optimization search, as well as with adaptive harmony search [170,171], big bang-big crunch with upper bound strategy (BBBC-UBS) [172], JAYA [35], MATLAB Sequential Quadratic Programming (MATLAB-SQP) [173] and ANSYS built-in optimization routines [174]. The ANSYS built-in optimization routines (e.g., gradient-based, zero order and response surface approximation) were run in cascade or alternated in order to maximize their efficiency.
The abovementioned comparison should be considered highly meaningful for the following reasons: • Adaptive HS [170,171] and BBBC-UBS [172] represent state-of-the-art formulations of harmony search and big bang-big crunch, which have been successfully utilized in many optimization problems. In particular, the adaptive HS algorithm adaptively changes internal parameters without any intervention by the user: this approach is very similar to what is done by HFHS.
The BBBC-UBS algorithm [172] immediately discards trial designs that certainly would not improve the current best design included in the population, thus saving computational cost; this elitist strategy is somehow consistent with the rationale followed by HFSA, HFHS and HFBBBC, which always try to generate trial designs lying on descent directions. • JAYA [35] is one of the most recently developed metaheuristic algorithms; it has soon emerged as a very powerful method and has attracted great consideration from optimization experts. The basic idea of JAYA is very simple yet very effective: the search process tries to move toward the best design and to avoid the worst design of the population. Besides this, JAYA is very easy to implement and does not have internal parameters to be tuned. The basic formulation of JAYA was successfully used in the damage detection problems solved in [153,154]. In [175,176], JAYA's computational efficiency was improved by adding an elitist strategy conceptually similar to that used by BBBC-UBS. However, such a strategy may become computationally ineffective for inverse problems as it entails a new finite element analysis each time a design of the population is updated. Nevertheless, it is interesting to compare HFSA, HFHS and HFBBBC with JAYA as well. • SQP is universally reputed by optimization experts to be the best gradient-based method available in the literature. The method is globally convergent, does not require the setting of move limits and does not suffer from premature convergence. The successful use of MATLAB-SQP in highly nonlinear inverse problems taken from very different fields (e.g., optical super-resolution with evanescent illumination, visco-hyperelasticity of cell membranes, damage detection etc.) is well documented in the literature (see, for example, [11,12,14,177-179]).
The structural analyses entailed by the optimization process to evaluate the error functional Ω were performed with the commercial finite element program ANSYS ® [174]. Each metaheuristic search engine and MATLAB-SQP were properly interfaced with the finite element solver. Since the gradient of the error functional is not explicitly available and evaluating Ω entails structural analyses, the present algorithms computed approximate gradients as described in Sections 2-4. The partial derivatives ∂Ω/∂X i (i = 1, . . . ,NMP) and ∂δ FEM j /∂X i required by the previously developed SA variants [77,81,83,84] were instead evaluated with centered finite differences (δX i = X OPT,i /10,000); the weighting coefficients µ i = (∂Ω/∂X i )/||∇Ω(X OPT )|| were then determined accordingly. Before running optimizations with the new algorithms HFSA, HFHS and HFBBBC, we tried to simplify the previously developed SA/HS/BBBC formulations [77,81,83,84,162-166] by adapting them to inverse problems. The goal was to drastically reduce the number of structural analyses required by the identification process. For example, in the case of the SA variants used in [77,81,83,84], the global annealing search equation involving sensitivities ∂Ω/∂X i is replaced by Equation (32). The new trial design X TR thus obtained is evaluated with respect to the current best record X OPT . If Ω(X TR ) < Ω(X OPT ), X TR is reset as the current best record and a new trial point is defined with Equation (32). Conversely, if Ω(X TR ) > Ω(X OPT ), a new trial point X TR new = 2X OPT − X TR is defined via the mirroring strategy and evaluated with respect to X OPT .
If Ω(X TR new ) < Ω(X OPT ), X TR new is reset as X OPT and a new trial design is generated with Equation (32). Otherwise, the error functional is approximated along the search direction with a 4th order polynomial Ω APP (X) in the attempt of locating a better design. If Ω APP (X) does not have any minima, the 1-D local annealing search is performed until the current best record is updated. The above described SA strategy, denoted as SA-NGR in the rest of this article, does not require any exact gradient evaluation but it includes a rather simple line search strategy, which does not ensure that trial designs lie on descent directions. The (X i U − X i L ) step used in Equation (32) may result in larger perturbations and hence fewer optimization cycles. However, the total number of structural analyses required by the identification process does not change substantially with respect to the SA variants including gradient evaluations [77,81,83,84].
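The centered finite-difference sensitivities used by the earlier SA variants (with relative step δX i = X OPT,i /10,000) can be sketched as follows; in the actual identification each evaluation of Ω is a finite element analysis, while here it is any callable:

```python
import numpy as np

def weighting_coefficients(omega, x_opt):
    """Centered finite-difference sensitivities and weighting coefficients
    mu_i = (dOmega/dX_i) / ||grad Omega(X_OPT)||.

    Assumes nonzero components of x_opt, since the relative step
    delta_X_i = X_OPT,i / 10,000 would otherwise vanish."""
    x_opt = np.asarray(x_opt, dtype=float)
    grad = np.empty_like(x_opt)
    for i, xi in enumerate(x_opt):
        dx = xi / 10_000.0                  # relative perturbation step
        e = np.zeros_like(x_opt)
        e[i] = dx
        # each of these two calls would be one finite element analysis
        grad[i] = (omega(x_opt + e) - omega(x_opt - e)) / (2.0 * dx)
    return grad / np.linalg.norm(grad)
```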
In the case of the HS algorithm, any trial design X TR is defined by Equation (33), where S FAST and S BEST , respectively, are the steepest descent and the best directions moving from the population designs towards the current best record X OPT (the same nomenclature used for the derivation of Equation (22) applies also in this case), while ρ FAST and ρ BEST are two random numbers in the interval (0,1) generated for S FAST and S BEST , respectively. Unlike the HS variants [164,166,167], Equation (33) directly utilizes approximate line search to define descent directions. If the new trial design X TR is better than the worst design X WORST included in the harmony memory matrix [HM], it replaces it and the updated population is re-ordered to determine the new current best record. Conversely, if Ω(X TR ) > Ω(X WORST ), a new trial point X TR new = 2·X OPT − X TR is defined via the mirroring strategy. If again Ω(X TR new ) > Ω(X WORST ) (this may be due to nonlinearity and non-convexity of the inverse problem), the 4th order polynomial approximation of Ω described above is performed to find a point of minimum X* yielding Ω(X*) < Ω(X WORST ). Should this search be unsuccessful, mirroring strategy and approximate line search are repeated until a trial design better than the worst design stored in the harmony memory is found. Similar to SA-NGR, this simplified HS formulation, denoted as HS-NGR in the rest of this article, does not evaluate the gradient of the error functional. However, it considers only two potentially descent directions (yet defined from approximate line search) and hence it is forced to repeatedly perform mirroring of trial designs, polynomial approximation of the error functional and 1-D annealing search. While the classical HS strategy of replacing only the worst design is adopted also by HS-NGR without following any elitist criterion, the main architecture of HS based on the use of HMCR and bandwidth parameters is not retained.
In fact, HS-NGR always uses the same Equation (33) to generate new trial designs regardless of the fact that the current trend of variation of the error functional would suggest performing exploitation rather than exploration. Consequently, HS-NGR may be unsuccessful in global search and it attempts to correct this problem by carrying out a local search, which usually entails many structural analyses. Furthermore, replacing only the worst design results in an extra number of FE analyses. This counterbalances the reduction in the number of analyses achieved by not computing gradients via finite differences.
In the case of the BBBC algorithm, any trial design X TR is always defined by Equation (34), where S FAST,CM and S BEST,CM , respectively, are the steepest descent and the best directions moving towards the center of mass of the population (the definitions are the same as for Equation (22) but X OPT is replaced by X CM ), while ρ FAST and ρ BEST are two random numbers in the interval (0,1) generated for S FAST,CM and S BEST,CM , respectively. Unlike the BBBC variants [165,166] and similar to Equation (33) used for HS-NGR, Equation (34) directly utilizes approximate line search to define descent directions.
The new trial design X TR is evaluated in the same way as in the SA-NGR algorithm and a new explosion in the neighborhood of the center of mass is performed only if the current X OPT could not be improved.
Since the above described simplified BBBC variant is a gradient-free algorithm, it will be denoted as BBBC-NGR in the rest of the article. At first glance, BBBC-NGR has to deal with two critical aspects: (i) the solution is perturbed with respect to the center of mass of the population rather than with respect to the current best record; (ii) a limited number of potentially descent directions are considered in the formation of a new trial solution. Consequently, the reduction of computational cost granted by the smaller number of explosions may be counterbalanced by the additional structural analyses performed in the attempt of improving the current best record with the mirroring strategy and the 4th order approximation of the error functional Ω.

Mathematical Optimization Benchmark: Random Minimum Square Problem
The inverse problem (1) is basically a least square problem. A randomized version of the problem can be stated in the general form for NMP design variables as:

Ω(X) = Σ_{i=1,…,NMP} (x_i − η_i)²    (35)

where η_i (i = 1, . . ., NMP) are random numbers generated in the (−1,1) interval. The cost function of this problem is unimodal and has a global minimum located at X TRG (η 1, η 2, . . ., η NMP), leading to Ω MIN = 0. The random numbers η_i in Equation (35) introduce noise in the least square optimization process similar to the noise that may be caused by optical measurements. Here, the target vector X TRG (η 1, η 2, . . ., η NMP) was selected by averaging five randomly generated vectors.
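A minimal sketch of this benchmark, assuming the cost is the sum of squared deviations from the noisy target and building the target by averaging five random vectors as stated in the text:

```python
import numpy as np

NMP = 100
rng = np.random.default_rng(42)

# Target vector X_TRG: average of five random vectors with components
# in (-1, 1), as described in the text.
x_trg = np.mean(rng.uniform(-1.0, 1.0, size=(5, NMP)), axis=0)

def omega(x):
    """Assumed least-square form of the benchmark cost: sum of squared
    deviations from X_TRG, so that Omega_MIN = 0 at the noisy target."""
    return np.sum((x - x_trg) ** 2)

print(omega(x_trg))               # 0.0 at the global minimum
print(omega(np.zeros(NMP)) > 0.0)  # True anywhere else
```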
In order to carry out a preliminary comparison between the present algorithms and other optimizers, problem (35) was solved with NMP = 100 or NMP = 500 using HFSA, HFHS, HFBBBC, adaptive HS [170,171], BBBC-UBS [172], JAYA [35,175,176] and SQP-MATLAB. Using NMP = 500 variables made it possible to simulate the use of a fairly large number of control points at which the optically measured displacements are compared with finite element results.
The population size for HFHS, HFBBBC and JAYA was set as 20, 200, 500 and 1000 in order to analyze sensitivity of convergence behavior to N POP . Because of the random nature of metaheuristic search engines, 20 independent optimization runs were carried out for each setting of N POP and NMP. HFSA and SQP-MATLAB runs were started from the best point and center of mass of each initial population defined for HFHS, HFBBBC and JAYA. These points were very far from the target solution: in fact, initial values of Ω ranged between 26.33 and 34.15 with an average percent error on variables ranging between 541.4% and 1392.7%.
The present algorithms found designs very competitive with those of SQP-MATLAB but required more function evaluations to complete the optimization process: on average, 3196 (HFSA), 3562 (HFHS) and 3813 (HFBBBC) vs. 1650 (SQP-MATLAB). However, the average optimized cost and the standard deviation on optimized cost were significantly smaller for the present algorithms, which converged to more precise solutions than SQP-MATLAB: in particular, (3.798 ± 1.149) × 10^−12, (2.899 ± 2.623) × 10^−12 and (7.816 ± 4.241) × 10^−13, respectively, for HFSA, HFHS and HFBBBC vs. (1.044 ± 0.948) × 10^−10 obtained by SQP-MATLAB. The higher precision of HFSA, HFHS and HFBBBC is confirmed by the larger deviation of optimized designs from the target solution X TRG seen in the case of SQP-MATLAB. It should be noted that, since the target optimum X TRG contains some very small values (for example, of the order of 7 × 10^−4), even a small difference between some component of X OPT and X TRG may increase the average deviation by a large extent.
While the convergence behavior of the present SA/HS/BBBC variants was rather insensitive to population size, the efficiency of the gradient-based optimizer decreased for increasing population size due to the larger sparsity of design variable values. The adaptive HS variants [170,171] were outperformed by HFSA, HFHS and HFBBBC as they only found intermediate designs with an average cost of 0.174 after 10000 function evaluations. BBBC-UBS [172] was much more efficient than adaptive HS and its convergence speed improved with population size. However, the cost function evaluated after 10000 analyses for BBBC-UBS was still 1.467 × 10^−9, three orders of magnitude higher than for the present HS/BBBC/SA algorithms. JAYA [35,175,176] was slightly more efficient than BBBC-UBS and arrived at a cost function value of 1.297 × 10^−9 after 9850 analyses. However, its computational speed significantly decreased with population size.
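The paper's gradient-based baseline is SQP-MATLAB, which is not reproducible here; as an illustrative stand-in, SciPy's SLSQP (a sequential least squares programming method) can be run on the same assumed benchmark cost. This is a sketch under the assumption that the cost has the least-square form discussed above, not the paper's actual experimental setup.

```python
import numpy as np
from scipy.optimize import minimize

NMP = 100
rng = np.random.default_rng(1)

# Assumed benchmark: noisy least-square target (average of five random vectors).
x_trg = np.mean(rng.uniform(-1.0, 1.0, size=(5, NMP)), axis=0)

def omega(x):
    return np.sum((x - x_trg) ** 2)

# Start far from the target, as in the reported runs.
x0 = rng.uniform(-10.0, 10.0, size=NMP)

# SLSQP as an SQP-style gradient-based baseline (finite-difference gradients).
res = minimize(omega, x0, method="SLSQP",
               options={"maxiter": 500, "ftol": 1e-14})
print(res.fun < 1e-4)  # True: the unimodal quadratic cost is driven near zero
```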
Results gathered in this preliminary test confirmed the ability of the present algorithms to solve least square type problems including random noise. This conclusion will be proven true in the next sections also for the three identification problems solved in this study.

Woven Composite Laminate
The first inverse problem solved in this study regards the mechanical characterization of an 8-ply woven-reinforced fiberglass-epoxy composite laminate used as substrate for printed circuit boards (see Figure 6a). The error functional Ω to be minimized depends on four unknown elastic constants, E x, E y, G xy and ν xy. The target values of material properties were provided by the industrial partner involved in the project: E x = 25000 MPa, E y = 22000 MPa, G xy = 5000 MPa and ν xy = 0.280.
The optimization process entailed by this identification problem attempts to match the in-plane displacements u generated by a vertical load of 140 N that produces 3-point bending. A 46 mm long, 13 mm tall and 1.2 mm thick specimen was cut from the laminate and submitted to 3-point bending.
Target displacements u included in the error functional Ω were measured with Phase Shifting Electronic Speckle Pattern Interferometry (PS-ESPI) [5][6][7]. The double-illumination interferometer used in the speckle measurements is schematized in Figure 6b; the symmetric illumination beams make the setup sensitive to u-displacements. Illumination is realized with a 35 mW He-Ne laser (λ = 632.8 nm). The angle of illumination θ is 20°. Hence, the sensitivity of the optical setup is λ/(2sinθ) = 925.1 nm. Fringe patterns were processed following the guidelines illustrated in [180]. More details on the ESPI measurements carried out for this identification problem can be found in [76,77].
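The quoted sensitivity follows directly from the double-illumination relation λ/(2sinθ); a quick numerical check with the stated wavelength and illumination angle:

```python
import math

lam_nm = 632.8               # He-Ne laser wavelength, nm
theta = math.radians(20.0)   # illumination angle

# Double-illumination ESPI sensitivity: displacement per fringe.
sensitivity_nm = lam_nm / (2.0 * math.sin(theta))
print(round(sensitivity_nm, 1))  # 925.1 nm, as quoted in the text
```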
The ESPI phase pattern containing displacement information is shown in Figure 6c, while Figure 6d shows the finite element model including control paths parallel to the Y-axis of symmetry of the specimen. The specimen was modelled in ANSYS with 4-node plane elements under the assumption of plane stress. Element size was selected so as to obtain mesh-independent solutions and nodes located in correspondence with the control points defined on the recorded image. The error functional Ω was built by comparing FE results and experimental data at 78 control points. The following bounds were taken for material parameters in the optimization process: 3000 ≤ E x ≤ 50000 MPa, 2000 ≤ E y ≤ 50000 MPa, 1000 ≤ G xy ≤ 50000 MPa and 0.01 ≤ ν xy ≤ 0.45. These bounds are large enough not to have any effect on the results of the identification problem.
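The explicit form of Ω is not reproduced in this section; a common FEMU-style choice, used here purely as an illustrative sketch, is a dimensionless sum of squared residuals between the FE and measured displacements at the control points. The function name and normalization below are assumptions, not the paper's definition.

```python
import numpy as np

def error_functional(u_fem, u_exp):
    """Hypothetical least-square error functional comparing FE and measured
    displacements at the control points; the exact form used in the paper
    is not reproduced here."""
    u_fem = np.asarray(u_fem, dtype=float)
    u_exp = np.asarray(u_exp, dtype=float)
    # Normalize by the measured field so Omega is dimensionless.
    return np.sum((u_fem - u_exp) ** 2) / np.sum(u_exp ** 2)

rng = np.random.default_rng(3)
u_exp = rng.normal(scale=1e-3, size=78)   # 78 control points, as in the text
u_fem = u_exp * 1.02                      # a trial FE solution uniformly 2% off
print(error_functional(u_fem, u_exp))     # (0.02)^2 = 4e-4
```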
The population size of all HS and BBBC variants considered in this study was set equal to 10, hence 2.5 times as large as the number of unknown material parameters. The same was done for JAYA. Values of Ω corresponding to the best design, worst design and center of mass of the initial population are 0.180, 0.862 and 0.365, respectively. The corresponding average (maximum) deviations from target properties are 33.8% (56.1%), 37.4% (61.1%) and 41% (53%), respectively. HFSA, ANSYS and MATLAB-SQP optimizations were started from each of the three points mentioned above. Thirty independent optimization runs were carried out starting from different initial populations (yet keeping N POP = 10) to statistically evaluate algorithms' performance.
The results of the identification process are summarized in Table 1. The "SA-Grad" notation refers to the ISA algorithm developed in [77], which combined global and local annealing search strategies based on finite difference evaluation of ∇Ω(X OPT) and was successfully applied to this test case. All HS/BBBC/SA variants determined material properties with a great deal of accuracy. In fact, the largest error, made on the Poisson's ratio, never exceeded 0.941%. The optimized solutions of HFSA, HFHS and HFBBBC correspond to the lowest errors on material properties. SA-Grad [77] also was very accurate but required up to 85% more FE analyses than the present algorithms. The maximum residual error on displacements was always lower than 3%, localized on the control path closest to the applied load. This happened because the u-displacement field is symmetric about the Y-axis (i.e., the loading direction) and hence u-displacements approach zero near this axis. The average error on displacements evaluated for the identified material properties was about 0.6% for all algorithms. Table 1 shows that HFHS and HFBBBC were faster than HFSA as they required, respectively, 210 and 222 structural analyses to complete the optimization process vs. 257 analyses required by HFSA. The proposed algorithms were between 15% and 23% faster than the simplified SA/HS/BBBC-NGR algorithms, and the number of explosions and 1-D local annealing searches was substantially reduced by the present formulations. The very small number of optimization variables (only four unknown parameters) defined for this inverse problem somehow limited the ability of HFHS and HFBBBC to build a large number of descent directions.
Remarkably, the statistical dispersion on identified material properties, residual error on displacements and required number of finite element analyses evaluated over the thirty independent runs was less than 0.11%, thus proving the robustness of the proposed algorithms.
For the sake of brevity, Table 1 does not report the results obtained by AHS [170,171], BBBC-UBS [172], JAYA [35,175,176], MATLAB-SQP [173] and ANSYS [174], which required additional structural analyses; although the elastic moduli were identified very precisely by these optimizers, the error on the Poisson's ratio increased to 9.2%.
The above listed data confirm the superiority of the proposed SA/HS/BBBC formulations over metaheuristic algorithms that do not use line search to generate trial solutions belonging to descent directions. The elitist strategies of BBBC-UBS and JAYA are more heuristic and, unlike HFHS and HFBBBC, do not form descent directions in a direct way. The convergence curves obtained for the best optimization runs of the HS/BBBC/SA variants, JAYA and the gradient-based optimizers are compared in Figure 7. HFBBBC was the fastest algorithm to significantly reduce the error functional but it then had a fairly long step with little improvement in the Ω value. This allowed HFSA and HFHS to reach an average convergence rate similar to HFBBBC. The small number of design variables made it difficult for HFSA to recover the initial gap in cost function (i.e., 0.180 vs. 0.365). HS-NGR, BBBC-NGR, SA-NGR and SA-Grad showed oscillatory behavior because they formed trial designs considering only one or two potentially descent directions at a time. JAYA's best run started from a population including better designs than those of the other algorithms, but the search process of this method clearly suffered from the lack of a direct generation of descent directions. In fact, JAYA achieved the last 30% of its total reduction in error functional with respect to the best initial value of Ω = 0.136 over 22 iterations out of a total number of 25 iterations. MATLAB-SQP was faster than SA-NGR and SA-Grad, comparable in convergence rate with HFSA and HS-NGR for some iterations, but definitely slower than HFHS and HFBBBC. Furthermore, its convergence curve became very similar to that of JAYA after 11 iterations. The extra iterations required by JAYA, ANSYS and MATLAB-SQP for completing the optimization process were due to their difficulty in converging to the correct value of the Poisson's ratio.

Axially Compressed Composite Panel for Aeronautical Use
The goal of the second inverse problem solved in this study was to identify mechanical properties and ply orientations of an IM7/977-2 graphite-epoxy composite laminate (43 cm long, 16.5 cm tall and 3 mm thick) for aeronautical use. The panel, subject to axial compression, was not reinforced by any stiffener. According to the manufacturer, the laminate included 1/3 of the layers oriented at 0° (i.e., in the axial direction Y), 1/3 oriented at 90° (i.e., in the transverse direction X) and 1/3 oriented at ±45°. The error functional Ω to be minimized depends on seven unknown structural parameters: four elastic constants E x, E y, G xy and ν xy and three ply orientations θ 0, θ 90 and θ 45 of the laminate (corresponding, respectively, to nominal angles 0°, 90° and ±45°). The target values of elastic constants indicated by the industrial partner involved in the project are very typical for the IM7/977-2 material: E x = 148300 MPa, E y = 7450 MPa, G xy = 4140 MPa and ν xy = 0.01510.
The optimization process entailed by this identification problem attempts to match the fundamental buckling mode shape of the axially compressed composite panel. The mode shape is normalized with respect to the maximum out-of-plane displacement w max occurring at the onset of buckling. Hence, the target quantity of the optimization process is the normalized out-of-plane displacement w norm defined as w/w max. Figure 8a shows the experimental set-up used for this test case. The axial load is applied to the specimen by imposing a given end-shortening to the panel top edge while the bottom edge is fixed. The figure shows the MTS Alliance TM RT/30 testing machine and the grips that realize loading and constraint conditions. In the experiments, end-shortening was progressively increased to 1.5 mm by moving downwards the testing machine cross-bar.
The buckling shape of the panel was measured with a white light double illumination projection moiré set-up [6,181,182]. The experimental setup included two slide projectors (Kodak Ektalite® 500, USA) and a standard digital camera (CANON® Eos 350, 8 Mpix CMOS sensor, Japan) mounted on a tripod; the optical axis of the camera is orthogonal to the panel surface. The illumination angle θ limited by the optical axis of each projector and the optical axis of the camera (i.e., the angle between the direction of illumination and the viewing direction) is 18° while the nominal pitch of the grating is 317.5 µm (80 lines/inch). It can be seen from the figure that the projectors are placed symmetrically about the optical axis of the camera. Each projector projects a system of lines and the wave fronts carrying these lines interfere in space to form an equivalent grating, which is then modulated by the specimen surface. This condition is equivalent to projecting a grating from infinity. The optical set-up is sensitive to out-of-plane displacements which modulate the projected lines making them curve.
The pitch of the projected grating (p j) measured on the reference plane (i.e., the surface of the undeformed panel) was 3623.3 µm with a magnification factor of 10.85. Therefore, the sensitivity of the optical setup, p j/(2tanθ), was 5575.7 µm. The double illumination, together with the subtraction of the phases of the two systems of lines operated via software, produced a phase distribution on both the reference plane and the observed surface that is equivalent to the case of projection from infinity. The phase pattern corresponding to the onset of buckling is shown in Figure 8b. Image processing was done with the HoloStrain software developed by Sciammarella et al. [183].
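As for the ESPI setup, the projection moiré sensitivity follows from the stated relation p j/(2tanθ); a quick check with the quoted pitch and illumination angle:

```python
import math

p_j_um = 3623.3              # projected grating pitch on the reference plane, um
theta = math.radians(18.0)   # illumination angle of each projector

# Projection moire sensitivity: out-of-plane displacement per fringe.
sensitivity_um = p_j_um / (2.0 * math.tan(theta))
print(round(sensitivity_um, 1))  # 5575.7 um, as quoted in the text
```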
The experiment was simulated by a finite element model including 8-node shell elements (Figure 8c): again, the selected mesh size guarantees mesh independent solutions and the correspondence between control nodes and image pixels. An eigenvalue buckling analysis was performed in order to determine the critical load of the panel and the corresponding buckled shape.
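Since an eigenvalue buckling analysis returns a mode shape with arbitrary amplitude, the normalization w norm = w/w max described earlier removes that arbitrariness before comparison with the measured shape. The sketch below illustrates this on a synthetic one-half-wave Euler-like mode sampled at 86 control points; the function name and test data are assumptions, not the paper's implementation.

```python
import numpy as np

def normalized_mode(w):
    """Normalize an out-of-plane buckling mode by its peak amplitude,
    mirroring w_norm = w / w_max as defined in the text."""
    w = np.asarray(w, dtype=float)
    return w / np.abs(w).max()

# Synthetic one-half-wave mode sampled at 86 control points along the path.
y = np.linspace(0.0, 1.0, 86)
w_exp = np.sin(np.pi * y)          # "measured" shape
w_fem = 1.7 * np.sin(np.pi * y)    # FE eigenvector with arbitrary scaling

# After normalization the arbitrary eigenvector scaling drops out.
residual = np.max(np.abs(normalized_mode(w_fem) - normalized_mode(w_exp)))
print(residual < 1e-12)  # True
```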
The error functional Ω was built by comparing finite element results and experimental data at 86 control points along the AB path sketched in Figure 8c. Such a path was chosen in view of the observed symmetry of the buckling mode, which resembles a typical Euler mode with one half-wave. The maximum out-of-plane displacement occurs at the center of the panel where the origin of the coordinate system X-Y is placed. The following bounds were taken for material parameters in the optimization process: 10000 ≤ E x ≤ 1,000,000 MPa, 1000 ≤ E y ≤ 1,000,000 MPa, 500 ≤ G xy ≤ 20000 MPa and 0.001 ≤ ν xy ≤ 0.1. Ply orientations were made to vary as follows: −50° ≤ θ 0 ≤ 50°, 0° ≤ θ 90 ≤ 91° and 10° ≤ θ 45 ≤ 70°. Similar to the woven composite laminate problem, the bounds imposed on the unknown structural properties are large enough not to affect the results of the identification process.
The population size of all HS and BBBC variants considered in this study was set equal to 15, slightly more than two times the number of unknown structural parameters. JAYA also was executed with N POP = 15. Values of Ω corresponding to the best design, worst design and center of mass of the initial population are 0.891, 5.149 and 2.303, respectively. The corresponding maximum deviations from target elastic properties are 130.8%, 521% and 730.6%, respectively. Similar to the previous test problem, HFSA, ANSYS and MATLAB-SQP optimization runs were started from the best and worst points as well as from the center of mass of the initial population generated for HS, BBBC and JAYA. Thirty independent optimization runs (keeping N POP = 15) were carried out to statistically evaluate the algorithms' performance. Table 2 presents the results obtained for this inverse problem. The "SA-Grad" notation now refers to the SA algorithm of Refs. [83,84], which evaluated gradients of the error functional via finite differences and used up to [(2·NMP + 1) + 2·NMP] descent directions per iteration, out of a total of (2^NMP − 1) + 2·NMP potentially available directions, to form new trial designs. It can be seen that structural properties were identified more accurately by HFSA, HFHS and HFBBBC, which obtained an average error on properties ranging between 0.197% (HFSA) and 0.266% (HFBBBC). The maximum residual error on buckling mode shape evaluated for the identified structural properties never exceeded 2.75% for the present algorithms while it was about 3.5% for SA/HS/BBBC-NGR and SA-Grad. The average error on mode shape was always lower than 1.9% for the present algorithms vs. about 2.1% for the other algorithms. The largest errors on w-displacements were localized near the control path boundaries A and B, that is where displacements tend to zero and numerical noise may occur.
HFHS and HFBBBC were again faster than HFSA as they required, respectively, 478 and 431 structural analyses to complete the optimization process vs. 596 analyses required by HFSA. The proposed algorithms were between 23% and 36% faster than the SA/HS/BBBC-NGR variants and even up to 117% faster than SA-Grad. Furthermore, they required many fewer explosions and 1-D local annealing searches. The fact that HFHS, HFBBBC and HFSA could reduce the number of finite element analyses with respect to HS/BBBC/SA-NGR and SA-Grad more significantly than for the woven composite problem confirms that increasing the number of design variables may allow the present algorithms to generate more descent directions and speed up the optimization search. Interestingly, the number of structural analyses required in the axially compressed panel identification problem was on average about two times as large as that required in the woven composite laminate identification problem, hence close to the ratio of 7 to 4 existing between the numbers of unknown parameters.
The convergence curves relative to the best optimization runs of the SA/HS/BBBC variants, JAYA, ANSYS and MATLAB-SQP optimizers are compared in Figure 9. The present algorithms were definitely faster than SA/HS/BBBC-NGR and SA-Grad. In particular, HFSA and HFHS significantly reduced the error functional in the very first iterations but then conducted a fairly long exploitation phase. HFBBBC showed the most regular rate of reduction of Ω and finally required the lowest number of structural analyses overall. Although the HFSA optimization was started from an initial design corresponding to a much higher value of Ω than the best design included in the initial population of HFHS and HFBBBC (i.e., Ω = 2.303 vs. Ω = 0.891), it immediately recovered the gap in cost function and the optimization histories of the present SA/HS/BBBC variants became very similar after about 12 iterations. Similar to the previous test problem, JAYA's best optimization run started from a better population than those generated for the other algorithms: in fact, the best value of the error functional was only 0.361. HFHS, HFSA and HFBBBC recovered the initial gap from JAYA within only 3, 6 and 8 iterations, respectively. Furthermore, after the 7th iteration, JAYA's solutions improved slowly. The higher computational complexity of this test case hence made more evident the inherent limitation of JAYA's formulation: the absence of a mechanism for directly defining descent directions.
JAYA [35,175,176] converged to the following solution: E x = 149703 MPa; E y = 7451 MPa; G xy = 4999 MPa; ν xy = 0.01525; θ 0 = 0.01586°; θ 90 = 85.102°; θ 45 = 46.171° after 65 iterations and 975 structural analyses. Elastic moduli and Poisson's ratio were identified very precisely (less than 1% error) but the largest error on layup angles was about 5.8%. MATLAB-SQP and ANSYS had the slowest convergence rate, with a marked oscillatory behavior in the latter case. The response surface approximation strategy implemented by ANSYS to build sub-problems (this includes a random selection of the response surface base points) was more efficient than the first-order method used in the woven composite laminate problem. However, it suffered from the noise introduced in the response surface fitting by the very different scales of ply orientations and Poisson's ratio with respect to elastic moduli.

Bovine Pericardium Patch
The last identification problem solved in this study regarded the mechanical characterization of a glutaraldehyde treated bovine pericardium (GTBP) patch subject to inflation. The intensive experimental campaign conducted in [83,84] confirmed the indications of the industrial partner involved in the project that the GTBP patch behaves as a transversely isotropic hyperelastic material. The same conclusion was achieved from both in-plane equibiaxial tension [83] and 3D inflation [84] tests. Since the bovine pericardium patch can be considered a fibrous hyperelastic material, the error functional Ω to be minimized depends on 17 unknown material parameters: 16 hyperelastic constants (a 1, a 2, a 3 and b 1, b 2, b 3 for the "isotropic" deviatoric term associated with matrix properties; c 2, c 3, c 4, c 5, c 6 and d 2, d 3, d 4, d 5, d 6 for the "anisotropic" deviatoric term associated with fiber properties) and the fiber orientation direction cosine cosθ. More details on the transversely isotropic hyperelastic constitutive model are given in [174].
The average values of material parameters and their corresponding standard deviations were determined in [83,84]. Since the standard deviations are very small, the average values of material properties can be taken as the target result of the identification process. A further proof of the validity of this assumption is that cosθ = 0.6837 matches well with the angle of rotation of the iso-displacement contours seen experimentally with respect to the coordinate axes X and Y.
The optimization process entailed by the identification problem attempts to match the total displacements u tot = √(u² + v² + w²) of a circular membrane of diameter 40 mm and thickness 0.5 mm progressively inflated up to the maximum pressure of 12.22 kPa. An assembly view of the experimental set-up used in the inflation test is shown in Figure 10a. 3D displacement components were simultaneously measured by combining intrinsic moiré (IM) and projection moiré (PM) [5][6][7]: IM is sensitive to in-plane displacements while PM is sensitive to out-of-plane displacements.
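The matched quantity combines the three measured displacement components into a single total displacement; a minimal sketch, evaluated with the orders of magnitude quoted in the text (in-plane about 0.5 mm, out-of-plane about 5.3 mm):

```python
import numpy as np

def total_displacement(u, v, w):
    """u_tot = sqrt(u^2 + v^2 + w^2), the quantity matched in the
    inflation-test identification."""
    return np.sqrt(np.square(u) + np.square(v) + np.square(w))

# Displacement magnitudes of the order quoted in the text (mm).
u_tot = float(total_displacement(0.5, 0.5, 5.3))
print(u_tot)  # the out-of-plane component dominates u_tot
```

Since w is roughly an order of magnitude larger than u and v here, u tot is dominated by the out-of-plane component, consistent with the inflation kinematics described in the text.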
The error functional  of this test problem was built by comparing ANSYS results and moiré data at 81 control points located on the horizontal control path hc along the X-axis and the vertical control path vc along the Y-axis. All hyperelastic constants were made to vary between 10 kPa and 1 MPa while coscould range between 0.65 and 0.75. As for the previous two test problems, the bounds imposed on material properties were large enough not to affect the solution of the identification process.  Figure 10b shows a typical image of the inflated membrane illuminated by white light with the two modulated gratings by the deformed specimen: the printed square-dots grating (1 mm pitch) follows the evolution of the in-plane displacements u and v while the projected vertical lines grating (2 mm pitch) follows the evolution of the out-of-plane displacement w. The origin of the reference system X-Y is put in the center of the tested membrane. Images were processed with the HoloStrain software [181]. The largest in-plane displacement measured in the experiments was about 0.5 mm while the largest out-of-plane displacement was about 5.3 mm. Each displacement component was extracted from the FFT pattern of the recorded image by properly selecting spatial frequencies. More details on the experimental tests performed for this identification problem are given in [83,84].
The inflation test was simulated by the finite element model shown in Figure 10c, including 1200 quadratic solid hyperelastic elements and 8603 nodes. The FE model shows the zero-displacement boundary condition imposed at the circular edge of the membrane as well as the uniformly distributed inflation pressure acting on the membrane. Mesh size was determined via convergence analysis again taking care to match control nodes and pixels of the recorded images. In the FE analysis, the NLGEOM geometric nonlinearity option was activated in order to account for the large deformations experienced by the hyperelastic membrane.
The error functional Ω of this test problem was built by comparing ANSYS results and moiré data at 81 control points located on the horizontal control path h c along the X-axis and the vertical control path v c along the Y-axis. All hyperelastic constants were made to vary between 10 kPa and 1 MPa while cosθ could range between 0.65 and 0.75. As for the previous two test problems, the bounds imposed on material properties were large enough not to affect the solution of the identification process.
The population size of all HS and BBBC variants considered in this study was set equal to 30, hence about two times the number of unknown material parameters. JAYA's optimization also was run with N POP = 30. The values of the Ω functional corresponding to the best design, worst design and center of mass of the initial population are 0.125, 0.570 and 0.340, respectively. The corresponding average deviation from target properties ranges between 223% and 566%. The high nonlinearity of this identification problem is confirmed by the fact that the best four designs of the population present a larger deviation from target properties than the worst design: respectively, 417%, 345%, 350% and 432% vs. 282%. HFSA, ANSYS and MATLAB-SQP optimization runs were started from the best and worst points as well as from the center of mass of the initial population generated for HS and BBBC. Thirty independent runs were executed with different initial populations (keeping N POP = 30) to statistically evaluate the performance of the different optimizers. Table 3 presents the results obtained for this inverse problem. The "SA-Grad" notation again refers to the SA algorithm derived from [83,84]. The present SA/HS/BBBC variants were once again more accurate than the SA/HS/BBBC-NGR algorithms. In fact, average and maximum errors on identified material properties ranged, respectively, between 0.196% (HFHS) and 0.256% (HFSA), and between 0.594% (HFBBBC) and 0.692% (HFHS), vs. about 0.49% (average) and 1.24% (maximum) errors seen for SA/HS/BBBC-NGR. The largest residual error on u tot-displacements evaluated for the present algorithms never exceeded 2.9% and average residual errors were about 30% lower than for SA/HS/BBBC-NGR. The analysis of error maps revealed that the largest errors are localized at X = ±11.5 mm and Y = ±8.5 mm, that is where the three displacement components become comparable in magnitude.
The convergence curves obtained for the best optimization runs of the HS/BBBC/SA variants, MATLAB-SQP and the ANSYS optimizer are compared in Figure 11. The curve for JAYA's best run is not shown in the figure, as the values of Ω recorded in the first 30 iterations are off-scale. Because of the high nonlinearity of this problem, intermediate designs were sorted also in terms of deviation from target properties. This explains why the convergence curves of the BBBC and HS variants start from higher values of Ω than those of the SA variants, ANSYS and the SQP optimizer, which start from the Ω = 0.340 value corresponding to the center of mass of the population. HFBBBC was definitely the fastest algorithm throughout the optimization process, followed by HFSA and HFHS. However, the convergence curve of HFHS was almost monotonic, which explains why HFHS finally required fewer finite element analyses than HFBBBC and HFSA, which instead showed fairly long steps with small improvements in the solution. The optimization histories of HFSA, HFHS and HFBBBC practically coincided after 20 iterations. The present algorithms were definitely faster than the SA/HS/BBBC-NGR variants and SA-Grad. The "NGR" algorithms showed steps and oscillatory behavior in the optimization history because the number of descent directions (i.e., one or two) involved in the formation of new trial solutions was not large enough to deal with the high nonlinearity of the GTBP patch identification problem. Interestingly, SA-Grad considered a set of 69 (i.e., 4·NMP + 1) descent directions in each iteration to update the design. This allowed the oscillatory behavior to be limited, but about one half of these 69 directions were formed by perturbing one material parameter at a time and hence yielded little improvement in the error functional.
ANSYS again showed the slowest convergence rate and a marked oscillatory behavior. This happened because the high nonlinearity of this identification problem reduced the efficiency of the response surface approach used by ANSYS to build the approximate sub-problem in each optimization iteration. MATLAB-SQP was competitive with SA-Grad for about 20 iterations, but it then started to cycle between intermediate designs characterized by Ω = 0.01 while trying to find proper values for the hyperelastic constants a_3 and d_6. Conversely, SA-Grad used its inherent exploitation capability to improve the solution in the final part of the optimization history.
JAYA (whose convergence curve is not shown in the figure) started its best optimization run from a population with Ω_OPT = 0.476 and could not reduce the error functional value below 0.35 in the first 30 iterations. Such behavior confirms that, as problem size increases, it becomes more important to update the population according to the rank held in the population by the currently perturbed design. This requirement is certainly satisfied by HFHS and HFBBBC, which generate search directions S_FAST, S_BEST, S_2ndBEST, S_OPT−CM etc. (or perform low cost evaluations of the sensitivities of the error functional), while JAYA simply perturbs designs following the initial order assigned to the N_POP individuals. For a given design, it is also important to tailor the perturbations of each variable to the sensitivities of the error functional. Whilst this is intrinsically done by the 1-D probabilistic search utilized by the present algorithms, JAYA updates variables simply following the classical 1st-to-NMPth variable sequence. The latter may miss "good" values of some variable that would potentially improve a given solution more than the values selected for other variables. The probability of missing good variable values clearly increases with problem dimension, as it becomes more difficult to reconstruct in the search space the path leading to the global optimum. This explains the "inertia" effect observed for JAYA, which became slower as the test problem size increased. Since the ratio between population size and number of optimization variables was very similar for all test cases, the "inertia" effect logically occurred also for larger population sizes (see the discussion of JAYA's results in Section 5.1).
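For reference, the canonical JAYA update (Rao, 2016) that the above comparison refers to can be sketched as follows; `evaluate` is a placeholder for the FE-based computation of Ω, here replaced by an arbitrary test function:

```python
import numpy as np

def jaya_step(pop, omega, evaluate, rng):
    """One iteration of the canonical JAYA algorithm: each design moves
    toward the current best and away from the current worst; the trial
    replaces its parent only if it lowers the error functional.

    pop      : array (N_POP, n_vars) of candidate designs (modified in place)
    omega    : array (N_POP,) of error-functional values (modified in place)
    evaluate : callable returning Omega for a design (FE analysis stand-in)
    rng      : numpy random Generator
    """
    best = pop[np.argmin(omega)].copy()
    worst = pop[np.argmax(omega)].copy()
    for k in range(pop.shape[0]):
        r1 = rng.random(pop.shape[1])
        r2 = rng.random(pop.shape[1])
        # Canonical rule: x' = x + r1*(best - |x|) - r2*(worst - |x|)
        trial = pop[k] + r1 * (best - np.abs(pop[k])) \
                       - r2 * (worst - np.abs(pop[k]))
        f = evaluate(trial)
        if f < omega[k]:               # greedy acceptance, no ranking
            pop[k], omega[k] = trial, f
    return pop, omega
```

Note that every design is perturbed in its stored order and every variable in its stored order, with no ranking or sensitivity information: exactly the features whose absence is argued above to cause the "inertia" effect on large problems.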
In order to evaluate the sensitivity of the convergence behavior of the HFSA, HFHS and HFBBBC algorithms to the initial population and initial design, the bovine pericardium patch identification problem was also solved with a population of 90 candidate solutions. The new population of HS and BBBC included 60 additional "low quality" candidate solutions characterized by higher values of the error functional: the new worst design had Ω = 1.352, while the new center of mass had Ω = 0.709. Consequently, the average deviation from target properties varied between 223% and 1030%. The candidate solutions yielding the lowest values of Ω again did not show the smallest deviations from target properties.
Results of the sensitivity analysis to population size and initial solutions are presented in Table 4. All of the present algorithms practically converged to the same material properties regardless of population size/initial design. Deviations from target material properties were slightly higher for N_POP = 90 but remained below 0.27% (average error) and 0.78% (maximum error). Residual errors on displacements also changed marginally with respect to those evaluated for N_POP = 30. HFHS and HFBBBC performed one more iteration than in the N_POP = 30 case, while HFSA could eliminate one optimization cycle. The number of explosions and 1-D local annealing searches remained the same for the two populations. The number of finite element analyses required in the optimization process changed at most by 15%. This increase was due to the fact that the 90-design population included lower quality solutions. The robustness of the present algorithms is confirmed by the convergence curves shown in Figure 12. It appears that each pair of optimization histories relative to a given algorithm practically coincided in the last 4-5 iterations. Figure 12. Sensitivity of HFSA, HFHS and HFBBBC to initial population/starting point for the bovine pericardium patch identification problem.

Discussion and Conclusions
This study presented a hybrid framework for mechanical identification of materials and structures. The framework combined full-field measurements performed with optical methods and global optimization based on metaheuristic algorithms. This choice was motivated by the fact that metaheuristic algorithms make it possible to deal efficiently with the inherent non-linearity and non-convexity of inverse problems. From the experimental point of view, using optical methods is the best approach to identification problems because these techniques provide full-field information, do not alter the state of the investigated specimen, and can precisely detect material anisotropy and the presence of local defects and/or damage.
However, the "no free lunch" theorem states that no metaheuristic algorithm can outperform all other algorithms in all optimization problems. Unlike gradient-based optimizers, which are still implemented in commercial software although their formulations have not changed much in the last 25-30 years, most newly developed metaheuristic algorithms added very little to optimization practice and their appeal quickly vanished after a few years. This suggests that, rather than proposing a new metaheuristic algorithm that improves on available methods only marginally, it is better to significantly improve the most powerful algorithms. For this reason, we developed three advanced versions of simulated annealing (SA), harmony search (HS) and big bang-big crunch (BBBC). These algorithms were selected because they are very well-established metaheuristic optimization methods for which many successful applications to inverse problems have been documented in the technical literature. Furthermore, these algorithms possess important features that are very desirable in global optimization. In fact, SA is inherently able to bypass local optima, HS has a memory in which good solutions may be stored, and BBBC employs the concept of a center of mass, which includes information on the average quality of the population of trial solutions.
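The ability of SA to bypass local optima mentioned above rests on the classical Metropolis acceptance criterion, sketched here in minimal form (the temperature schedule and neighborhood moves of the actual HFSA formulation are not reproduced):

```python
import math
import random

def sa_accept(delta_omega, temperature, rng=random):
    """Metropolis acceptance rule used by simulated annealing.

    delta_omega : change in the error functional (trial minus current)
    temperature : current annealing temperature T > 0

    Improving moves are always accepted; worsening moves are accepted
    with probability exp(-delta_omega / T), which lets the search climb
    out of local optima while T is still high.
    """
    if delta_omega <= 0:
        return True
    return rng.random() < math.exp(-delta_omega / temperature)
```

As T decreases during the annealing schedule, the acceptance probability for worsening moves shrinks and the search gradually turns from exploration to exploitation.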
The rationale behind the new algorithms developed in this study, denoted as Hybrid Fast Simulated Annealing (HFSA), Hybrid Fast Harmony Search (HFHS) and Hybrid Fast Big Bang-Big Crunch (HFBBBC), was to generate high quality trial designs lying on a properly selected set of descent directions. For that purpose, enhanced approximate line search and computationally cheap gradient evaluation strategies were developed. Besides hybridizing the SA/HS/BBBC metaheuristic search engines with gradient information and approximate line search, HS and BBBC were also hybridized with an enhanced 1-D probabilistic search derived from SA.
The optimization framework was tested in three inverse elasticity problems: (i) mechanical characterization of a composite laminate used as a substrate in electronic boards; (ii) mechanical characterization and layup identification of an axially compressed composite panel for aeronautical use; (iii) mechanical characterization of bovine pericardium patches used in biomedical applications. The largest test case (iii) included 17 unknown parameters. The sensitivity of inverse problem solutions and convergence behavior to population size and initial design/population was statistically evaluated. A preliminary mathematical optimization problem was solved in order to train the algorithms. Remarkably, HFSA, HFHS and HFBBBC were very efficient and outperformed the other SA, HS and BBBC formulations, the JAYA algorithm, as well as a state-of-the-art gradient-based optimizer such as MATLAB-SQP. Furthermore, the present algorithms were always very robust.
An interesting fact observed from Tables 1-4 is that the maximum residual error on displacements was about 3% for all identification problems. In order to check whether this is an inherent limitation of the HFSA, HFHS and HFBBBC algorithms, an in silico identification was carried out by computing displacement fields for the target material/structural properties provided by manufacturers (first two identification problems) or determined by averaging the results of [83,84] (last identification problem). Optimization runs were then performed to reconstruct the numerically generated target displacement fields. Remarkably, the present algorithms were always able to converge to the target material/structural properties, reproducing the displacement field with zero residual errors. The observed 3% maximum error hence falls within the level of uncertainty normally entailed by a FEMU-based characterization process, a very complicated task that attempts to match experimental data and finite element results. Optically measured displacements certainly are a good target because the accuracy of speckle and moiré techniques may go down to a very small fraction of the sensitivity of the experimental setup. However, in this study, control points were selected right at critical locations (i.e., near boundaries where displacements tend to zero or in transition regions where all displacement components are comparable in magnitude) where even small drifts in measured quantities may have an impact on the success of the identification process. Furthermore, a rather small set of control points was selected for building the error functional Ω. This was done in order to make the matching of experimental data and numerical results more difficult, thus testing the real ability of HFSA, HFHS and HFBBBC to find the global optimum or get very close to it. In view of this, the results presented in this article should be considered very satisfactory.
Based on the arguments discussed above, it can be concluded that the proposed hybrid framework is a powerful tool for solving inverse mechanical problems.