On the Design of Innovative Heterogeneous Sheet Metal Tests Using a Shape Optimization Approach †

The development of full-field measurement methods has enabled a new trend of heterogeneous mechanical tests. The inhomogeneous strain fields retrieved from these tests are widely used in the calibration of constitutive models for sheet metals. However, to date, no mechanical test is able to characterize the material in a large range of strain states. The aim of this work is to present a heterogeneous mechanical test with an innovative tool/specimen shape, capable of producing rich heterogeneous strain paths and thus providing extensive information on material behavior. The proposed specimen is found using a shape optimization process guided by an index that evaluates the richness of strain information. In this work, the methodology and results are extended to reduce the dependence on the specimen geometry and on its parametrization through the use of the Ritz method for boundary value problems. Different curve models, such as splines, B-splines, and NURBS, are used, and C1 continuity throughout the specimen is guaranteed. Moreover, several deterministic and stochastic optimization methods are used in order to find the method, or combination of methods, able to minimize the cost function effectively. The results demonstrate that the solution depends on the geometry definition, as well as on the optimization methodology. Nevertheless, the obtained solutions provide a wider spectrum of strain states than standard tests.


Introduction
Industry is increasingly demanding and becoming more competitive. Due to efficiency and cost reduction requirements, numerical simulation has been applied to sheet metal-forming processes since the 1960s [1]. Currently, numerical simulations can accurately replicate the forming process due to the ability to perform complex iterative calculations using material properties, surface properties, and process conditions [2]. However, to this end, the finite element models must rely on accurate constitutive models and well-defined material parameters. Due to the rolling process, sheet metals exhibit orthotropic behavior. This anisotropy influences the results of sheet metal-forming operations, implying the use of complex constitutive mathematical models capable of describing the material behavior [3,4].
Constitutive models consist of functions that accurately describe the material behavior. The complexity of these models determines the number of material parameters to identify. Usually, Finite Element Model Updating (FEMU) or equilibrium techniques are used to determine the models' input data. FEMU is an iterative inverse technique that begins with a known solution and identifies the constitutive parameters by minimizing the difference between known and predicted forces and/or displacement fields [5]. In contrast, methods based on equilibrium satisfaction, such as the Virtual Fields Method (VFM) or the Equilibrium Gap Method (EGM), search for the parameters that satisfy the equilibrium equations (in the local or weak form) using a known (experimental) kinematic field (displacement or strain).
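The FEMU residual-minimization idea can be illustrated with a toy problem. In the sketch below, a closed-form Hollomon hardening law stands in for the finite element model, and synthetic "measured" stresses replace the experimental data; all names and values are illustrative placeholders, not the paper's actual identification setup.

```python
import numpy as np

# Hypothetical stand-in for FEMU: identify the parameters (K, n) of a
# Hollomon hardening law sigma = K * eps**n from "measured" stresses.
# In real FEMU the predicted response comes from a finite element model;
# here a closed-form law replaces the FE solver for illustration.

def predicted_stress(eps, K, n):
    return K * eps ** n

# synthetic experimental data (truth: K = 500 MPa, n = 0.22) with 1% noise
rng = np.random.default_rng(0)
eps_exp = np.linspace(0.01, 0.2, 30)
sig_exp = predicted_stress(eps_exp, 500.0, 0.22) * (1 + 0.01 * rng.standard_normal(30))

# FEMU-style identification: minimize the squared stress residual.
# For this particular law the problem is linear in log space, so a
# least-squares fit suffices instead of an iterative update loop.
n_id, logK_id = np.polyfit(np.log(eps_exp), np.log(sig_exp), 1)
K_id = np.exp(logK_id)

residual = np.sum((predicted_stress(eps_exp, K_id, n_id) - sig_exp) ** 2)
print(K_id, n_id, residual)
```

For a genuine FE-based model, the log-linear fit would be replaced by an iterative minimization of the same residual.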
Regardless of the method, the parameter identification depends on the amount and quality of experimental data. There are two approaches related to experimental data: homogeneous and non-homogeneous tests [4]. The first approach uses classical tests, i.e., the tensile test, shear test, etc. Each test usually produces homogeneous strain fields that allow identifying only a limited number of parameters, requiring a large number of tests to identify the constitutive parameters properly [6]. The second approach is related to Full-Field Measurement (FFM) strategies. Due to the development of FFM methods and Digital Image Correlation (DIC), it is possible to obtain strain or displacement field data on the surface of the specimen. Theoretically, with this extensive amount of information, it should be possible to identify a large set of parameters using only one test. This leads to a reduction of the experimental test set used to build the experimental database [7,8]. As a result of these innovative techniques, this field of research has been growing. Some proposals for non-standard uniaxial tests [9] and for non-standard biaxial tensile specimens [10] have already been published. In these works, the specimens can have different cross-sections and geometries (holes, cuts, etc.), promoting heterogeneous displacement fields throughout the specimen. Moreover, some original tests introduced new designs, such as the geometry presented by Pottier [11]. A review of the mechanical tests commonly used for parameter identification can be found in [12,13]. However, none of the presented tests can promote a very wide spectrum of strain states with a large magnitude of plastic strain.
The main goal of this work is to design a tool/specimen capable of maximizing the number of heterogeneous strain states, as well as the plastic strain range. To this end, a numerical iterative process that optimizes the test geometry is used. In each iteration, an indicator capable of quantifying and rating the mechanical information provided by the test is evaluated. Although this approach is completely numerical, recent works [14] have experimentally validated this design-by-optimization methodology. The approach used is similar to the ones used in recent works [4,7,15]. However, previous works only considered a constant number of design variables, without analyzing the influence of the number of variables on the solution. Additionally, it is known [15,16] that the curve model that defines the geometry, such as splines, B-splines, or NURBS, influences the shape optimization process. Moreover, design processes are dependent on the initial solution when using gradient-based or direct search optimization algorithms, such as the ones used in the mentioned previous works.
This work focuses on analyzing and overcoming some of these limitations. Therefore, a Ritz-type design methodology is used to minimize the dependence on the geometry definition model, as well as on the number of design control points. Additionally, several optimization algorithms are used, including a global search and a metaheuristic algorithm, to avoid initial solution dependence and to analyze the performance of the design-by-optimization process.

Constitutive Model
An appropriate constitutive model is a prerequisite for accurate simulations and design. In this work, a phenomenological model composed of the non-quadratic Yld2004-18p yield criterion combined with a mixed isotropic-kinematic hardening law is used. Additionally, the rupture behavior of DC04 is characterized by the calibration of the macroscopic Cockcroft and Latham (CL) rupture criterion [15]. Table 1 sums up the constitutive equations that describe the material behavior. A detailed description of the model and a parameter identification procedure for a DC04 mild steel can be found in previous works [4,17]. The values of the isotropic and kinematic hardening coefficients used in this work, as well as the Yld2004-18p yield anisotropy coefficients, are listed in Table 2.
Table 1. Constitutive equations that describe the mechanical behavior of the material. Data from [4].

Indicator and Cost Function
To design a heterogeneous test, it is necessary to quantify the richness of information obtained by the test. In this work, an indicator I_T [12] capable of quantifying the strain state range and the deformation heterogeneity is used. This indicator also quantifies the strain level achieved in the test over the entire specimen surface, using a continuous evaluation of the strain field up to rupture. The indicator is defined as a weighted combination of the terms listed in Table 3, where W_ri and W_ai are the relative and absolute weighting factors whose values are given in Table 4. These values equally balance the impact of each term in the indicator I_T [12]. ε2/ε1 is the ratio between the minor and major principal strains, and εP represents the equivalent plastic strain of the specimen. The value of the indicator depends on the strain state range. Therefore, and due to the anisotropic behavior of the materials, the different strain states considered were calibrated for the DC04 using classical tests [4]. Table 5 depicts the values used to define each strain state. These value ranges are used to calculate the maximum strain achieved in each state. Due to the anisotropy of the material, the uniaxial tension and uniaxial compression states are obtained for ε2/ε1 = −0.627 and ε2/ε1 = −1.595, respectively. However, the ratio of principal strains in the simple shear state is always equal to −1, as verified experimentally and analytically in [18]. It should be highlighted that the aim of this work is to design a specimen that maximizes the value of I_T.
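The structure of such an indicator can be sketched numerically. The toy score below combines a strain-state range term, a heterogeneity term, and a strain-level term; the weights and the equivalent-strain expression are illustrative placeholders, not the calibrated terms of Tables 3 and 4.

```python
import numpy as np

# Illustrative sketch of a heterogeneity indicator in the spirit of I_T.
# The exact terms and weights (Tables 3-4 of the paper) are NOT reproduced;
# the weights w below are placeholders, not the calibrated values from [12].

def indicator(eps1, eps2, w=(1.0, 1.0, 1.0)):
    r = eps2 / eps1                              # minor/major principal strain ratio
    eps_eq = np.sqrt(2.0 / 3.0 * (eps1 ** 2 + eps2 ** 2))  # rough equivalent strain
    range_term = r.max() - r.min()               # strain-state range covered
    het_term = np.std(r)                         # deformation heterogeneity
    level_term = eps_eq.mean()                   # average strain level
    return w[0] * range_term + w[1] * het_term + w[2] * level_term

# a heterogeneous field (ratios spanning shear to near-biaxial) should score
# higher than a homogeneous, uniaxial-like field of similar strain level
eps1_het = np.array([0.10, 0.15, 0.20, 0.25])
eps2_het = np.array([-0.10, -0.05, 0.00, 0.10])
eps1_hom = np.full(4, 0.15)
eps2_hom = np.full(4, -0.075)
print(indicator(eps1_het, eps2_het), indicator(eps1_hom, eps2_hom))
```

Maximizing such a score over candidate geometries is what drives the shape optimization described below.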

Geometry Definition Using a Curve Model
Different curve models are used to describe, parameterize, and control the specimen geometry during the iterative process:

1. Cubic splines: These define a curve as a set of connected piecewise simple polynomial functions. The control points are interpolated, and no local control of the geometry is available, only global. Therefore, abrupt variations can occur in consecutive control points' positions during the optimization, leading to unrealistic specimen shapes and to premature strain localization and rupture [4]. To avoid this drawback, a penalty function Res that constrains these variations was created. This penalty function is expressed in terms of α = 1/(U_bound − L_bound)², where the parameter a_v defines the admissible variation and X_i represents the control points.

2. B-splines: A single piecewise curve defined as a linear combination of control points X_i and B-spline basis functions N_{i,k}(t):

P(t) = Σ_{i=0}^{n} X_i N_{i,k}(t).

For the ith normalized B-spline basis function of order k, the Cox-de Boor recursion defines:

N_{i,1}(t) = 1 if v_i ≤ t < v_{i+1}, and 0 otherwise,

and:

N_{i,k}(t) = [(t − v_i)/(v_{i+k−1} − v_i)] N_{i,k−1}(t) + [(v_{i+k} − t)/(v_{i+k} − v_{i+1})] N_{i+1,k−1}(t),

where the v_i are elements of a knot vector. In this work, B-splines of fourth order are used. Notice that, when using fourth order B-splines, the penalty function is unnecessary because the curve becomes smoother. The knot vector assumes the following form: V = (0, 0, 0, 0, t_k, . . ., t_i, 1, 1, 1, 1), where n + 1 is the number of control points.

3. NURBS: These functions have the same properties as B-splines and are capable of representing a wider class of geometries. A NURBS curve is represented as a combination of control points and rational basis functions R_{i,k}(t):

C(t) = Σ_{i=0}^{n} X_i R_{i,k}(t), with R_{i,k}(t) = W_i N_{i,k}(t) / Σ_{j=0}^{n} W_j N_{j,k}(t),

where W_i > 0 is a weighting factor for the control points. If all the weights are equal to one, the resulting curve is a B-spline. For more details concerning curve models, see [19].
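The Cox-de Boor recursion for the B-spline basis can be evaluated directly. The sketch below builds a clamped fourth-order (cubic) curve; the control point coordinates are made-up placeholders, not the optimized specimen geometry.

```python
import numpy as np

# Minimal Cox-de Boor evaluation of a clamped B-spline curve, matching the
# recursion above. Order k = 4 (cubic), as used in this work.

def basis(i, k, t, v):
    """i-th normalized B-spline basis function of order k at parameter t."""
    if k == 1:
        return 1.0 if v[i] <= t < v[i + 1] else 0.0
    a = b = 0.0
    if v[i + k - 1] > v[i]:                      # skip 0/0 terms of repeated knots
        a = (t - v[i]) / (v[i + k - 1] - v[i]) * basis(i, k - 1, t, v)
    if v[i + k] > v[i + 1]:
        b = (v[i + k] - t) / (v[i + k] - v[i + 1]) * basis(i + 1, k - 1, t, v)
    return a + b

def bspline_point(t, ctrl, k=4):
    """Evaluate P(t) = sum_i X_i N_{i,k}(t) on a clamped knot vector."""
    n1 = len(ctrl)                               # n + 1 control points
    interior = np.linspace(0.0, 1.0, n1 - k + 2)[1:-1]
    v = np.concatenate([np.zeros(k), interior, np.ones(k)])
    t = min(t, 1.0 - 1e-12)                      # keep t inside the half-open span
    return sum(basis(i, k, t, v) * np.asarray(ctrl[i], float) for i in range(n1))

# 7 radially spaced control points (placeholder values, not the real specimen)
ctrl = [(10, 0), (20, 15), (35, 25), (40, 40), (25, 55), (15, 60), (0, 62)]
p0 = bspline_point(0.0, ctrl)
print(p0)  # a clamped curve interpolates the first control point
```

Setting per-point weights and dividing by their weighted basis sum, as in the NURBS formula above, extends this evaluator to rational curves.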

Numerical Method and Test Design
The design of the heterogeneous test is formulated as a shape optimization problem. In order to balance the loads in the transversal directions, the design methodology uses a symmetric model (Figure 1a). In this model, the design variables are the n control points' positions X_i = [X_1, . . ., X_n] that define the specimen's initial shape (before deformation), the weights W_i = [W_1, . . ., W_n], and the rigid tool length L_tool. The tool length defines only the size of the tool; the shape of the tool is defined by the boundary of the specimen. As shown in Figure 1a, each control point is spaced radially by 90°/(n − 1).
It was defined that the control points only change their radial position. In each design iteration, the specimen geometry and loading path are allowed to change freely, limited only by the design variables' boundaries. The search universe of the control points' positions is given by X_i ∈ [10, 60] mm. The optimization of the loading path depends on the design of the rigid tool. To avoid symmetry conflicts, the tool needs to be coincident with the specimen shape, and the starting point P_tool must be at 90°, which means that the orientation of the tool displacement θ is 90° due to the symmetry. The loading path is applied up to rupture, established by W_f^CL. During the process, the optimization algorithm automatically modifies the specimen shape until a stopping criterion is reached. In each FEA evaluation, the mesh is automatically adapted to the specimen; however, whatever the specimen size, the number of elements remains constant. This iterative process was launched in a MATLAB-Python-Abaqus environment. In MATLAB, the initial coordinates R_0 = {X_0, W_0, P_tool} of the design variables, the optimization algorithm, and the curve model definition are specified. A script developed in Python reads the defined geometry and updates it. The numerical simulation is carried out up to rupture by Abaqus [20]. The output is also post-processed using a Python script, enabling the calculation of the indicator I_T and of the cost function. Based on the result, the process either ends, if a stopping criterion is reached, or continues to update the design variables' coordinates iteratively. This iterative design optimization problem is formulated as the minimization of the cost function, subject to 10 ≤ X_i ≤ 60 mm. Following the established numerical procedure (Figure 1b), the outcome revealed to be dependent on:

• The geometry definition and discretization: This feature is related to the numerical simulation, the number of elements, the number of control points used, the curve model, the parametrization used, etc.
• The optimization method: This defines the robustness of the process and can influence the final result.
• The cost function parameters: A modification of these parameters can lead to different results. A very robust cost function can provide better results. However, this is not the object of study in this work.
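The structure of this design loop can be sketched in pure Python. In the sketch below, a toy analytical function stands in for the Abaqus FEA and the I_T-based cost function, and a crude random search stands in for the optimization algorithm; every name, bound, and value is an illustrative placeholder.

```python
import numpy as np

# Skeleton of the iterative design loop (MATLAB-Python-Abaqus in the paper),
# with a toy analytical function standing in for the Abaqus FEA and the
# I_T-based cost function. All names and values are illustrative placeholders.

def run_fea_and_cost(design):
    """Stand-in for: write geometry, run the FEA to rupture, post-process I_T.
    The real cost would be based on -I_T (the test maximizes I_T)."""
    x, w = design
    return (x - 35.0) ** 2 / 100.0 + (w - 2.0) ** 2   # smooth toy surrogate

bounds = np.array([[10.0, 60.0],   # control point radial position X_i [mm]
                   [0.5, 5.0]])    # control point weight W_i (assumed range)

rng = np.random.default_rng(1)
best_design = np.array([10.0, 0.5])
best_cf = run_fea_and_cost(best_design)

for it in range(200):                        # random search as a placeholder
    trial = rng.uniform(bounds[:, 0], bounds[:, 1])
    cf = run_fea_and_cost(trial)
    if cf < best_cf:                         # keep improving designs
        best_cf, best_design = cf, trial

print(best_cf, best_design)
```

In the actual pipeline, `run_fea_and_cost` would write the B-spline/NURBS geometry, launch the Abaqus job, and post-process the strain fields, and the random search would be replaced by the algorithms discussed below.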

Number of Control Points and Sensitivity Analysis
The geometry of the specimen is defined by the control points' positions, which means that the number of control points used has an influence on the design process. To obtain a non-linear design process independent of the number of control points, the optimum control parametrization Ritz method was used [21]. In this method, the number of control points is progressively increased until convergence to an optimum value. In this work, the iterative process was executed for each curve model starting with three control points, using the SDBOX [22] optimization technique.
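The Ritz-type enrichment strategy can be sketched on a toy problem: start with few control points, re-solve, and stop when adding points no longer improves the optimum. Here a 1D curve-fitting error stands in for the FEA-based cost; the target shape and tolerance are illustrative placeholders.

```python
import numpy as np

# Sketch of the Ritz-type strategy: progressively add control points until
# the optimized cost stops improving. A 1D curve-fitting problem stands in
# for the FEA-based specimen design.

target = lambda s: np.sin(2 * np.pi * s)     # placeholder shape to represent
s_eval = np.linspace(0.0, 1.0, 200)

def best_cost(n_pts):
    """Optimal cost for n_pts control points: here, the piecewise-linear
    interpolation error plays the role of the converged CF value."""
    s_ctrl = np.linspace(0.0, 1.0, n_pts)
    approx = np.interp(s_eval, s_ctrl, target(s_ctrl))
    return np.max(np.abs(approx - target(s_eval)))

n, tol = 3, 1e-3                             # start with three control points
history = [(n, best_cost(n))]
while True:
    n += 1
    cf = best_cost(n)
    history.append((n, cf))
    if history[-2][1] - cf < tol:            # enrichment no longer pays off
        break

print(history[-1])  # (number of control points retained, final cost)
```

In the paper's setting, `best_cost` corresponds to a full SDBOX optimization of the specimen for a given number of control points.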
Due to the black-box nature of this problem, the derivatives are not available. A sensitivity analysis of the cost function allows evaluating the contribution of each variable to the cost function, showing how it behaves when the variables' values change. Derivative information can be obtained using approximation methods, such as finite differences, when the cost function behaves as a smooth continuous function; this would widen the range of applicable optimization algorithms. However, the value of the objective function depends on a FEA. As the mesh generation is random, even when applying fine meshes, the output will always exhibit small discrete variations, which can lead to noisy data. In this work, the One-At-a-Time (OAT/OFAT) method was used to perform a sensitivity analysis of the cost function [1,23], where the contributions were determined by varying one parameter while keeping all the others constant. The initial configuration shown in Figure 1 and Control Point 4 were selected for the sensitivity analysis.
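An OAT sweep is simple to express in code: freeze all variables at the baseline design, vary one across its bounds, and record the cost response. The cost function, baseline, and bounds below are analytical placeholders for the FEA-based CF.

```python
import numpy as np

# One-At-a-Time (OAT) sensitivity sweep: vary one design variable across its
# range while freezing the others, recording the cost function response.
# cost() is an analytical placeholder for the paper's FEA-based CF.

def cost(x):
    return float(np.sum((x - np.array([35.0, 2.0, 40.0])) ** 2))

baseline = np.array([30.0, 1.5, 45.0])       # current design point (placeholder)
bounds = [(10.0, 60.0), (0.5, 5.0), (10.0, 60.0)]

def oat_sweep(var_index, n_samples=21):
    values = np.linspace(*bounds[var_index], n_samples)
    responses = []
    for v in values:
        x = baseline.copy()
        x[var_index] = v                     # perturb only one variable
        responses.append(cost(x))
    return values, np.array(responses)

values, responses = oat_sweep(0)             # e.g. sweep control point X_1
print(values[np.argmin(responses)])          # most favorable value of X_1
```

Plotting `responses` against `values` for each variable gives curves analogous to Figure 4, from which smoothness (and hence the applicability of finite differences) can be judged.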

Optimization Strategy
One of the goals of this work is to find an optimization algorithm capable of robustly minimizing the objective function. The performance of an optimization algorithm is evaluated by the time and the number of function evaluations required to converge. According to the sensitivity analysis results (see Section 4.1), the cost function is non-smooth and (possibly) multi-modal. Therefore, the solution lies in finding suitable Derivative-Free Optimization (DFO) methods to solve the optimization problem [22,24]. The recently developed DFO methods are either deterministic or stochastic. Figure 2 classifies the optimization methods according to their categories.
Direct search methods are deterministic and can be described as a continuous examination of the outcomes generated by the optimization strategy. Local direct search algorithms are dependent on the initial parameters and conditions and are susceptible to local minima [25]; however, they converge quickly. Global search optimization methods, such as Lipschitzian-based partitioning methods, provide theoretical guarantees of finding the global minimum of an optimization problem within a certain tolerance. In stochastic optimization, randomness is part of the search procedure for the global optimum. Most of these methods are based on population/swarm intelligence and mimic the behavior of natural systems. Such algorithms are less prone to getting trapped in local minima; however, their convergence is commonly much slower [26,27]. In this work, seven optimization strategies were tried; however, only four were able to converge to a solution: Nelder-Mead, pattern search, SDBOX, and CMA-ES.

CMA-ES
The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is an evolutionary algorithm proposed by Nikolaus Hansen and Andreas Ostermeier [28]. CMA-ES is a second-order approach that builds a Gaussian probabilistic model of promising solutions. New candidate solutions are sampled according to a multivariate normal distribution, and the mutation acts as a perturbation with zero mean. In this particular algorithm, pairwise dependencies between the variables of the distribution are represented by a covariance matrix that is updated progressively. This process then influences the next generation [24,28].

SDBOX
To guarantee both global convergence and robustness, this optimization strategy has roots in gradient-based methods. The algorithm detects the coordinate direction that minimizes the cost function by investigating the local behavior of the function. To create a new point, it performs a line search along the found coordinate direction. The information gathered over the iterations can be used to build an approximation of the cost function [22].
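A derivative-free coordinate line search in this spirit can be sketched as follows. This is an illustrative scheme, not the actual SDBOX implementation of [22]: it probes each coordinate direction and, when a descent direction is found, extends the step along it while the cost keeps decreasing.

```python
import numpy as np

# Illustrative coordinate line search in the spirit of SDBOX (not the actual
# implementation of [22]): probe each coordinate direction; on a descent
# direction, expand the step along it while it keeps improving.

def coordinate_search(f, x0, step=1.0, tol=1e-6, max_iter=500):
    x = np.asarray(x0, float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for direction in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += direction * step
                ft = f(trial)
                while ft < fx:               # expanding line search
                    x, fx = trial, ft
                    trial = x.copy()
                    trial[i] += direction * step
                    ft = f(trial)
                    improved = True
        if not improved:
            step *= 0.5                      # no descent direction: refine step
            if step < tol:
                break
    return x, fx

x_opt, f_opt = coordinate_search(lambda x: (x[0] - 3) ** 2 + (x[1] + 1) ** 2,
                                 [0.0, 0.0])
print(x_opt, f_opt)
```

The successful and failed probes accumulated here are the kind of iteration history that, in SDBOX, can feed an approximation of the cost function.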

Nelder-Mead
Nelder-Mead is a direct search algorithm based on a spatial approximation. It relies on a non-degenerate simplex [24]: for an objective function of r variables, the algorithm maintains the convex hull of r + 1 vertices. In each iteration, it removes the vertex with the worst objective function value and replaces it with a better one by reflecting, expanding, or contracting the simplex along the line joining that vertex with the centroid of the remaining vertices. This algorithm usually requires a low number of function evaluations to find a minimum. However, if the function is multi-modal, it becomes dependent on the initial shape of the specimen, as it gets trapped in local minima.
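The reflect/expand/contract/shrink operations above can be written compactly. The sketch below uses the standard coefficients (1, 2, 0.5, 0.5) and is a generic illustration, not the specific implementation used in this work.

```python
import numpy as np

# Minimal Nelder-Mead sketch: reflect, expand, contract, or shrink a simplex
# of r + 1 vertices for an r-variable objective (standard coefficients).

def nelder_mead(f, x0, step=1.0, max_iter=300):
    r = len(x0)
    simplex = [np.asarray(x0, float)]
    for i in range(r):                       # initial non-degenerate simplex
        v = np.asarray(x0, float)
        v[i] += step
        simplex.append(v)
    for _ in range(max_iter):
        simplex.sort(key=f)                  # best vertex first, worst last
        best, worst = simplex[0], simplex[-1]
        centroid = np.mean(simplex[:-1], axis=0)
        xr = centroid + (centroid - worst)   # reflection of the worst vertex
        if f(best) <= f(xr) < f(simplex[-2]):
            simplex[-1] = xr
        elif f(xr) < f(best):                # expansion
            xe = centroid + 2.0 * (centroid - worst)
            simplex[-1] = xe if f(xe) < f(xr) else xr
        else:                                # contraction
            xc = centroid + 0.5 * (worst - centroid)
            if f(xc) < f(worst):
                simplex[-1] = xc
            else:                            # shrink toward the best vertex
                simplex = [best + 0.5 * (v - best) for v in simplex]
    simplex.sort(key=f)
    return simplex[0], f(simplex[0])

x_opt, f_opt = nelder_mead(lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2, [0.0, 0.0])
print(x_opt, f_opt)
```

On a multi-modal cost, the result depends on `x0` and `step` (the initial simplex), which is exactly the initial-shape dependence noted above.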

Pattern Search
Pattern search is a direct search method proposed by Hooke and Jeeves. It involves two types of moves: (i) the exploratory search, a local search step that looks for an improving direction, and (ii) the pattern move, which, once an improving direction is found, attempts larger moves in that direction. The implemented pattern search uses Mesh Adaptive Direct Search (MADS), which removes the restriction of general pattern search to finitely many selected directions, allowing local analysis in a dense set of directions in the space of design variables [29].
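The two moves can be sketched in the classical Hooke-Jeeves form; note this illustrates the basic scheme, not the MADS variant used in the paper.

```python
import numpy as np

# Hooke-Jeeves-style pattern search sketch: an exploratory move around the
# current base point, then pattern moves along the improving direction.
# (The paper's implementation uses MADS; this shows the classical scheme.)

def explore(f, x, fx, step):
    """Exploratory move: perturb each coordinate and keep any improvement."""
    for i in range(len(x)):
        for d in (+step, -step):
            trial = x.copy()
            trial[i] += d
            ft = f(trial)
            if ft < fx:
                x, fx = trial, ft
                break
    return x, fx

def hooke_jeeves(f, x0, step=1.0, tol=1e-6):
    base = np.asarray(x0, float)
    f_base = f(base)
    while step > tol:
        x, fx = explore(f, base.copy(), f_base, step)
        if fx < f_base:
            while True:                      # pattern moves: extrapolate further
                pattern = x + (x - base)     # larger move along the direction
                base, f_base = x, fx
                x, fx = explore(f, pattern.copy(), f(pattern), step)
                if fx >= f_base:
                    break
        else:
            step *= 0.5                      # exploration failed: refine the mesh
    return base, f_base

x_opt, f_opt = hooke_jeeves(lambda x: (x[0] + 2) ** 2 + (x[1] - 1) ** 2, [0.0, 0.0])
print(x_opt, f_opt)
```

MADS replaces the fixed ± coordinate probes in `explore` with polling directions that become asymptotically dense, which is what enables convergence results on non-smooth costs.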

Ritz Method and Sensitivity Analysis
Figure 3 shows the outcomes of the Ritz method. Contrary to expectations, full convergence with the increase of control points was not observed. The use of a higher number of control points allows the representation of a wider class of geometries. However, the number of local minima of the cost function can also increase and, consequently, prevent the full convergence of the optimization method. Furthermore, the higher number of design variables may have led to unrealistic geometries, resulting in simulations that did not converge to a solution. Even without full convergence, Figure 3 indicates the number of control points that should be used for each curve model definition: seven control points for B-splines and NURBS and six for splines. Figure 4 shows the behavior of the cost function in response to small perturbations of the position and weight values. The function exhibits a non-smooth and discontinuous behavior, demonstrating that gradient-based methods cannot be used.

Optimization Algorithms Performance Evaluation
The results were obtained by combining the optimization algorithms with the curve models. The aim was to find a combination capable of robustly minimizing the cost function and, consequently, providing a specimen design. The optimization algorithms' performance was assessed by two criteria: the best CF value and the convergence rate. Figure 5 shows the convergence rate of the cost function and the evolution of two design variables, allowing us to observe the differences between their behaviors. The direct search algorithms were faster to converge to a solution. Nelder-Mead had a smoother convergence curve because it promotes small modifications of the design variables between iterations. Pattern search performed more abrupt modifications, which led to an irregular cost function minimization. SDBOX revealed a good compromise between efficiency and convergence properties. As expected for stochastic algorithms, CMA-ES presented large oscillations; however, it reached the lowest cost function value.
Concerning the design variables, large oscillations were observed in the first evaluations. The design variables reached the search universe boundaries, and with the reduction and convergence of the cost function, their convergence was attained, even if only to a local minimum.
CMA-ES was, along with SDBOX, the algorithm that reached the best results; however, a much higher number of iterations was required.Notice that after 1000 function evaluations, the design variables were still converging.

Test Design: Optimization Results
Table 6 shows the results for the curve-model/optimization-algorithm sets. The local direct search algorithms presented similar values for the different models, showing fast convergence to a local minimum. The global optimization algorithms showed a good ability to decrease the cost function value, and their solutions revealed to be lower (better) than those obtained by local search. However, this was not observed with splines: for this curve model, the values of the cost function were very similar for all optimization algorithms, evidencing proximity between the global and the local minima of the function. Both NURBS and B-splines presented clear differences when using global or local optimization algorithms. It is also important to highlight that more than one global minimum can exist. The final values of each term of the cost function obtained using the evolutionary algorithm CMA-ES and the direct search algorithm SDBOX were different, yet the cost function values were very close.
When analyzing the results, particularly the final cost functions for each test, four candidate solutions were identified: (a) CMA-ES with B-splines, (b) SDBOX with B-splines, (c) CMA-ES with NURBS, and (d) SDBOX with NURBS. Figure 6 shows the deformed state and the strain distributions over the deformed specimen. Regarding the strain state range, the solution provided by SDBOX with NURBS revealed to be limited, due to its inability to reach some of the relevant strain states. However, it presented the best value for the overall plastic deformation, represented by Av(εP). The mean deformation heterogeneity Mean[std(ε2/ε1)] and the maximum strain achieved, εP_Max, were identical in these tests. Nonetheless, none of the tests was capable of producing the biaxial strain state.
Considering that the solutions identified by the global optimization algorithms could be exploratory values and not minima, a local search algorithm was applied to each candidate solution. However, no improvements were observed, demonstrating that the results attained by the global optimization algorithms were indeed minima. Figure 7 presents the strain state range offered by each solution represented in Figure 6. Although all solutions widened the strain states provided by classical homogeneous tests, most strain states fall in the neighborhood of uniaxial tension. This was expected, due to the tensile load conditions of the tests, and constitutes a limitation of the presented methodology. Other load conditions would produce different strain states, but could also require a more complex apparatus than a standard uniaxial testing machine. Solutions (b) and (c) presented shear and uniaxial compression states, but only for small values of strain. Taking into account these analyses and the CF value, the solution provided by (c) CMA-ES with NURBS revealed to be the most promising, representing a state-of-the-art improvement. However, a unique test presenting both the overall range of stress/strain states and large levels of plastic strain has not yet been reached. This may be due to limitations in the loading conditions and in the shape definition, opening challenges for future works.

Figure 2. Categories of the optimization algorithms.

Figure 3. The final value of the cost function using different numbers of control points.

Figure 4. Sensitivity analysis of the Cost Function (CF) value with respect to: (a) the position of control point X4; (b) the weight of control point W4; and (c) both sensitivities. Sensitivity analysis using a B-spline model with seven control points.

Table 4. Absolute and relative weights of the design indicator.

Table 6. Results for the mechanical test design using different optimization strategies and boundary curve models.