1. Introduction
Composite materials are one of the pillars of modern structural design. The exquisite combination of their light weight, high stiffness and strength, high corrosion resistance, possibility of tailoring material properties, durability, and design flexibility makes them the material of choice in many contemporary structures [1]. The selection of an appropriate composite material and its configuration results in distinct advantages compared to other materials. With such enticing properties, composite materials have found application in important fields of engineering, including the automotive industry [2,3], aerospace engineering [4,5], transportation [6,7], medical applications [8,9], civil engineering [10,11], and defense, security, and ballistics [12,13], to name but a few. They have also significantly contributed to the development of novel structural concepts, such as lightweight active/smart structures [14,15]. The diversity of composite materials provides opportunities that should be explored to enhance their utilization in various applications.
To make appropriate use of the extraordinary versatility of composite materials, it is necessary to determine and suitably characterize their mechanical properties and behaviors. Although experimental tests are the ultimate approach to assessing material characteristics and comprehending the mechanical behavior of composites, they are also quite costly, especially when dealing with numerous structural design variations. Therefore, it is of crucial importance to develop numerical tools for modelling and simulating their mechanical behavior [16]. The Finite Element Method (FEM) has established itself as a preferred technique in structural analysis, which is especially true for more intricate, anisotropic materials like composites. The method allows for highly reliable, efficient, and cost-effective numerical analyses of a vast spectrum of scenarios, including coupled-field effects, e.g., thermo-mechanical [17] or electro-mechanical (piezoelectric) problems [18]. Composite materials are frequently utilized as fiber-reinforced composite laminates to create thin-walled structures, often referred to as the "primadonna" of the structural hierarchy [19]. The choice of shell form for structures is motivated by the desire to achieve a high load-bearing capacity, i.e., a high strength-to-weight ratio. The major workhorse theories used to describe the global behavior of composite laminates are the so-called first-order theories: the classical laminate theory [20] and the first-order shear deformation theory (FSDT) [21]. The latter is more often implemented in plate and shell finite elements due to the simpler requirement of C0-continuity of the shape functions [22].
The previously mentioned exceptional qualities of composite materials also make them a suitable choice for producing lightweight products aimed to serve as ballistic protection. In that case, the material design prioritizes damage tolerance by increasing the impact resistance. A number of researchers have dedicated their work to different aspects of this and other related problems. Abtew et al. [23] presented various ballistic textiles as well as composites used for ballistic applications in the form of body armor. Their focus was on the ballistic impact mechanism and the analysis of parameters such as the thickness, strength, ductility, toughness, and density of the target material. Yang et al. [24] applied multi-scale finite element modeling to simulate the behavior of woven fabrics involving fiber bundles and yarns exposed to ballistic impact. The study by Meliande et al. [25] evaluated the applications of a laminated hybrid composite with aramid woven fabric and a curaua non-woven mat in military ballistic helmets. The main advantage of their approach lies in the fact that hybrid composites involve several reinforcement materials, thus providing a wider range of physical and mechanical properties with improved elasticity, strength, and toughness [26]. Furthermore, Zochowski et al. [27] conducted FE analysis of composite material matrices reinforced with aramid fibers and subjected to projectile impact conditions. Bajya et al. [28] proposed an interesting soft armor panel design based on shear thickening fluid (STF)-reinforced Kevlar fabric. Zhang et al. [29] investigated preload effects on the high-velocity impact behavior of fiber metal laminates by means of finite element analysis. Similarly, Gregori et al. [30] developed both analytical and numerical models to simulate the perforation of ceramic-composite targets by small-caliber projectiles and validated their results by performing impact tests. Ranaweera et al. [31] showed that protection in the form of tri-metallic steel–titanium–aluminum armor is superior to the protection provided by a monolithic armor.
High-hardness ceramics are often used in lightweight armor systems to protect against the intrusion of high-speed armor-piercing (AP) projectiles. Gour et al. [32] worked to improve the design performance of ceramic armor for combat vehicles. They applied FEM for numerical modeling and simulation of a bi-layer ceramic and metal structure, which was followed by experimental validation. Biswas and Datta [33] evaluated the ballistic resistance of a multilayer ceramic-backed fiber-reinforced composite target plate by means of FEM.
Other types of composites have also been considered as protection against ballistic impact. Ansari et al. [34,35] investigated the ballistic performance of aluminum matrix composite armor with ceramic ball reinforcement under high-velocity impact. Batra and Pydah [36] used explicit analysis in ABAQUS to perform nonlinear, large deformation impact analysis of polyether ether ketone (PEEK)–ceramic–gelatin composites and, thus, study behind-armor ballistic trauma. Guo et al. [37] investigated how a Kevlar-29 composite cover layer would improve the performance of a ceramic armor system against penetration of a projectile. Osnes et al. [38] conducted a study involving experimental tests and numerical simulations of a double-laminated glass plate under ballistic impact. The experimental tests were used to determine the ballistic limit velocity and curve for the laminated glass targets and to create a basis for comparison with numerical simulations. Sandwich structures also belong to this group; they typically include cores, i.e., complex three-dimensional additions designed to increase the strength and stiffness under bending and shear loading. Cui et al. [39] investigated the ballistic limit of sandwich plates with a metal foam core using FE simulations, with experimental work performed to validate the numerical results. In the work of Beppu et al. [40], the failure characteristics of ultra-high-performance fiber-reinforced concrete (UHPFRC) panels, with a thickness of 60–120 mm, were investigated experimentally.
Research in this field also covers investigations related to the influence of projectile shape on the impact on the composite material [41,42]. Several types of projectiles were used to obtain ballistic curves, such as conical, ogival, spherical, hemispherical, and flat. Another important aspect is the impact angle of the projectile when it hits the composite, and such a study was carried out by Titira and Muntenit [43], whereby angles were varied from 0° to 70°.
While FEM is regarded as the primary method in structural analysis, it is important to note that the problem of a bullet penetrating a plate has also been simulated and examined using mesh-free methods, such as Smoothed-Particle Hydrodynamics (SPH), Multi-Material Arbitrary Lagrangian–Eulerian (MM-ALE), and Lagrangian formulations with material erosion [44].
Based on lamina failure theories, several methods predict damage development; they are classified as non-interactive, interactive, and partially interactive. Limited, or non-interactive, methods compare individual lamina stresses or strains with the corresponding strengths or ultimate strains; these form the maximum stress and maximum strain criteria [45]. In interactive methods, all stress components are included in a single expression, as in the Tsai–Wu [46], Tsai–Hill [47], and Azzi–Tsai–Hill [48] criteria. Partially interactive, or failure-mode-based, methods provide separate criteria for fiber and matrix failures, such as the Hashin criterion [49].
The contribution of this work is to propose an effective way to optimize a composite plate exposed to ballistic impact loading. A suitable choice of composite material for the requirement of high impact resistance and energy-absorbent layers is a prerequisite for the problem at hand. The favorable properties of Kevlar fibers and a resilient epoxy matrix render their combination, namely Kevlar 49/epoxy, an adequate selection. This material is well known for its applications in ballistic armor, bulletproof vests, helmets, and other impact-resistant structures [50]. As this work represents the authors' first step in this direction, it is assumed that the material, number, and thickness of layers are predefined in the problem, and the optimization task is to determine the fiber orientation in the layers that is most favorable regarding the impact resistance [51]. The optimal solution for the laminated composite plate was determined using failure theory based on the criterion of maximum stress. This was achieved by implementing a genetic algorithm as the search method to systematically explore the design space and identify the optimal solution. To facilitate the optimization process, a simplified two-dimensional (2D) finite element model of the composite plate was developed. Optimization was performed by varying simulation parameters within this 2D framework to identify the optimal solution. Subsequently, the results were validated using a detailed and more accurate three-dimensional (3D) model of the plate based on established verification criteria. Integrating finite element analysis with a genetic algorithm enhances optimization efficiency while maintaining physically accurate results. Abaqus VUMAT subroutine-based material models and parallelized computations reduce computational cost and improve the robustness of the solution. This framework is generalizable and applicable to the design of advanced composites and impact-resistant structures.
2. Theoretical Background and Methodology
This section presents two distinct theoretical frameworks related to the modeling of the impactor and the composite laminate. The impactor consists of typical metallic structures, and its behavior is described using the Johnson–Cook plasticity model. In accordance with the software requirements, a complete definition of the Johnson–Cook model (including hardening) was implemented, enabling the reproducibility of the analyses by interested readers. For the composite laminate, two modeling approaches were employed: the first to prepare the model for optimization, and the second to validate the optimal solution. The Maximum Stress Criterion method was chosen for the laminate optimization process, as it is suitable for defining the objective function and monitoring the overall failure index according to the specified criterion. Additionally, the Hashin failure model was introduced to describe a more complex laminate model, which serves both to confirm the results obtained by the previous method and to provide a more detailed insight into the damage mechanisms. The Hashin model allows simultaneous tracking of internal damage in the composite’s fibers and matrix. Finally, the theoretical foundations of the optimization methods used to search the design space for the optimal solution are presented. These include the grid search method and the genetic algorithm, both of which were applied to explore and identify the most effective configuration of the composite laminate.
2.1. Johnson–Cook Plasticity
The Johnson–Cook constitutive model was applied to the impactor material to accurately represent its metallic behavior and capture the structural response in the plastic deformation domain. The Johnson–Cook plasticity model is a particular type of von Mises plasticity model with analytical forms of the hardening law and rate dependence. It is suitable for high strain-rate deformation of many materials, including most metals. It can be used in conjunction with progressive damage and failure models in FEM to specify different damage initiation criteria and damage evolution laws that allow for the progressive degradation of the material stiffness and the removal of elements from the mesh. It is also used in conjunction with either a linear elastic material model or an equation of state material model [52].
2.1.1. Johnson–Cook Hardening
Johnson–Cook hardening is briefly presented here because it is a constituent part of Johnson–Cook plasticity, and its parameters need to be provided in ABAQUS, regardless of the fact that hardening effects play no role in the analysis conducted here. The Johnson–Cook hardening model represents a specific form of isotropic hardening, in which the static yield stress σ0 is expressed as follows:

$$\sigma^0 = \left[A + B\left(\bar{\varepsilon}^{pl}\right)^{n}\right]\left(1 - \hat{\theta}^{\,m}\right) \quad (1)$$

where $\bar{\varepsilon}^{pl}$ is the equivalent plastic strain and A, B, n, and m are material parameters measured at or below the transition temperature, Ttransition, and $\hat{\theta}$ is the nondimensional temperature defined as follows:

$$\hat{\theta} = \begin{cases} 0 & \text{for } T < T_{transition} \\ \dfrac{T - T_{transition}}{T_{melt} - T_{transition}} & \text{for } T_{transition} \le T \le T_{melt} \\ 1 & \text{for } T > T_{melt} \end{cases} \quad (2)$$

Here, T denotes the current temperature, Tmelt the melting temperature, and Ttransition the transition temperature, defined as the temperature at or below which the yield stress becomes independent of temperature. The material parameters should be determined at or below this transition temperature.
According to the equations given above, one needs to provide the values of A, B, n, m, Tmelt, and Ttransition as part of the Johnson–Cook plasticity definition. Furthermore, the Johnson–Cook strain rate dependence assumes the following:

$$\bar{\sigma} = \sigma^0\!\left(\bar{\varepsilon}^{pl}, T\right) R\!\left(\dot{\bar{\varepsilon}}^{pl}\right), \qquad \dot{\bar{\varepsilon}}^{pl} = \dot{\varepsilon}_0 \exp\!\left[\frac{1}{C}\left(R - 1\right)\right] \ \text{for } \bar{\sigma} \ge \sigma^0 \quad (3)$$

where $\bar{\sigma}$ is the yield stress at nonzero strain rate, $\dot{\bar{\varepsilon}}^{pl}$ is the equivalent plastic strain rate, $\dot{\varepsilon}_0$ is the reference strain rate, C is the strain rate sensitivity, $\sigma^0$ is the static yield stress, and $R$ is the ratio of the yield stress at nonzero strain rate to the static yield stress (so that $R(\dot{\varepsilon}_0) = 1.0$). Hence, the yield stress is expressed as follows:

$$\bar{\sigma} = \left[A + B\left(\bar{\varepsilon}^{pl}\right)^{n}\right]\left[1 + C \ln\!\left(\frac{\dot{\bar{\varepsilon}}^{pl}}{\dot{\varepsilon}_0}\right)\right]\left(1 - \hat{\theta}^{\,m}\right) \quad (4)$$
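The multiplicative structure of the flow stress maps directly to code. The following minimal Python sketch (an illustration, not the authors' implementation; the parameter values used below are placeholders, not material data from the paper) evaluates the Johnson–Cook flow stress from its hardening, rate-sensitivity, and thermal-softening factors:

```python
import math

def jc_yield_stress(eps_pl, eps_rate, T,
                    A, B, n, C, m,
                    eps_rate_ref, T_transition, T_melt):
    """Johnson-Cook flow stress: hardening x rate x thermal softening."""
    # Nondimensional temperature theta_hat, clamped to [0, 1]
    if T <= T_transition:
        theta = 0.0
    elif T >= T_melt:
        theta = 1.0
    else:
        theta = (T - T_transition) / (T_melt - T_transition)
    static = A + B * eps_pl ** n                   # strain hardening term
    # Rate term clamped to R >= 1 below the reference rate (modeling assumption)
    rate = 1.0 + C * math.log(max(eps_rate / eps_rate_ref, 1.0))
    thermal = 1.0 - theta ** m                     # thermal softening term
    return static * rate * thermal
```

At zero plastic strain, reference strain rate, and the transition temperature, the expression reduces to the parameter A, which is a quick sanity check on any implementation.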
2.1.2. Johnson–Cook Damage Criterion
The Johnson–Cook criterion is a special case of the ductile damage initiation criterion, in which the equivalent plastic strain at the onset of damage, $\bar{\varepsilon}^{pl}_D$, is assumed to be of the following form:

$$\bar{\varepsilon}^{pl}_D = \left[d_1 + d_2 \exp\left(-d_3 \eta\right)\right]\left[1 + d_4 \ln\!\left(\frac{\dot{\bar{\varepsilon}}^{pl}}{\dot{\varepsilon}_0}\right)\right]\left(1 + d_5 \hat{\theta}\right) \quad (5)$$

where d1–d5 are the failure parameters and η is the stress triaxiality.
The Johnson–Cook criterion can be used in conjunction with the Mises, Johnson–Cook, Hill, and Drucker–Prager plasticity models. When used in conjunction with the Johnson–Cook plasticity model, the specified values of the melting and transition temperatures should be consistent with the values specified in the plasticity definition. The Johnson–Cook damage initiation criterion can also be specified together with any other initiation criteria, including the ductile criteria.
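In a progressive damage framework, the failure strain above is typically combined with an accumulated damage indicator that sums plastic strain increments against the current failure strain. A minimal sketch (an illustration of the criterion, not the paper's code; the parameter values in the comments are hypothetical):

```python
import math

def jc_failure_strain(triaxiality, eps_rate, theta_hat,
                      d1, d2, d3, d4, d5, eps_rate_ref=1.0):
    """Equivalent plastic strain at damage onset (Johnson-Cook criterion)."""
    return ((d1 + d2 * math.exp(-d3 * triaxiality))
            * (1.0 + d4 * math.log(max(eps_rate / eps_rate_ref, 1.0)))
            * (1.0 + d5 * theta_hat))

def damage_onset(strain_increments, triaxiality, eps_rate, theta_hat, params):
    """Accumulate omega = sum(d_eps / eps_f); damage initiates when omega >= 1.

    `params` is the tuple (d1, d2, d3, d4, d5); triaxiality, rate, and
    temperature are held fixed here for simplicity.
    """
    eps_f = jc_failure_strain(triaxiality, eps_rate, theta_hat, *params)
    omega = sum(d / eps_f for d in strain_increments)
    return omega >= 1.0
```

In an explicit FE code, the accumulation would of course be performed incrementally per integration point, with the failure strain re-evaluated at the current triaxiality, rate, and temperature.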
2.2. Failure Criteria
Failure of composite materials is predicted by means of failure criteria, the implementation of which into the FEM framework is relatively straightforward. The failure criterion can be expressed using the failure index IF as follows [53]:

$$I_F = \frac{\sigma}{\sigma_F} \quad (6)$$

where σ is the applied stress and σF is the material strength in the loading direction. It can also be expressed via the strength ratio R, which is the inverse of the failure index:

$$R = \frac{1}{I_F} = \frac{\sigma_F}{\sigma} \quad (7)$$
Failure will occur when IF ≥ 1, that is, R ≤ 1.
2.2.1. Maximum Stress Theory
The maximum stress theory [54] is applied to composite materials to predict the failure strength of unidirectional laminates. Damage prediction is performed based on the stresses of an individual composite layer. The damage index IF according to the maximum stress criterion is

$$I_F = \max\left(\frac{\left|\sigma_1\right|}{F_1}, \frac{\left|\sigma_2\right|}{F_2}, \frac{\left|\sigma_{12}\right|}{F_6}\right) \quad (8)$$

where σ1 and σ2 are the normal stresses, and σ12 is the shear stress. The dependence of the reference strengths on the sign of the normal stresses is given by the following expressions:

$$F_1 = \begin{cases} F_{1t} & \text{for } \sigma_1 > 0 \\ F_{1c} & \text{for } \sigma_1 < 0 \end{cases} \quad (9) \qquad F_2 = \begin{cases} F_{2t} & \text{for } \sigma_2 > 0 \\ F_{2c} & \text{for } \sigma_2 < 0 \end{cases} \quad (10)$$

where F1t is the longitudinal tensile strength, F1c the longitudinal compressive strength, F2t the transverse tensile strength, F2c the transverse compressive strength, and F6 the in-plane shear strength (see Figure 1).
Damage in the laminate is avoided when the maximum stress failure index does not exceed unity:

$$I_F < 1 \quad (11)$$
Under these conditions, micro-damage to the matrix and fibers, imperceptible to the naked eye, does not occur.
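The criterion is simple enough to state in a few lines of code. The following Python sketch (an illustration; the strength values in the test are hypothetical, not the Kevlar 49/epoxy data of the paper) returns the maximum stress failure index for a single ply:

```python
def max_stress_index(s1, s2, s12, F1t, F1c, F2t, F2c, F6):
    """Maximum stress failure index I_F for one ply; failure when I_F >= 1.

    Tensile and compressive strengths are entered as positive numbers;
    the sign of the normal stresses selects the relevant strength.
    """
    i1 = s1 / F1t if s1 >= 0 else -s1 / F1c   # longitudinal direction
    i2 = s2 / F2t if s2 >= 0 else -s2 / F2c   # transverse direction
    i6 = abs(s12) / F6                        # in-plane shear
    return max(i1, i2, i6)
```

Because the criterion is non-interactive, the returned index also identifies which single stress component governs, which is convenient when used as an objective function in optimization.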
2.2.2. Hashin Failure Criterion
The Hashin Failure Criterion (HFC) offers a more refined assessment of damage in composite plates compared to traditional methods, as it explicitly differentiates between fiber and matrix failure mechanisms. This distinction enhances its reliability in evaluating material degradation and makes it particularly suitable for validating earlier failure models. Moreover, its compatibility with numerical implementation in three-dimensional finite element simulations further supports its application in advanced composite damage analysis. HFC distinguishes four failure modes, and each of them is described by a suitable equation, as listed below:

Fiber tension ($\hat{\sigma}_{11} \ge 0$):
$$F_f^t = \left(\frac{\hat{\sigma}_{11}}{X^T}\right)^2 + \alpha\left(\frac{\hat{\sigma}_{12}}{S^L}\right)^2 \quad (12)$$

Fiber compression ($\hat{\sigma}_{11} < 0$):
$$F_f^c = \left(\frac{\hat{\sigma}_{11}}{X^C}\right)^2 \quad (13)$$

Matrix tension ($\hat{\sigma}_{22} \ge 0$):
$$F_m^t = \left(\frac{\hat{\sigma}_{22}}{Y^T}\right)^2 + \left(\frac{\hat{\sigma}_{12}}{S^L}\right)^2 \quad (14)$$

Matrix compression ($\hat{\sigma}_{22} < 0$):
$$F_m^c = \left(\frac{\hat{\sigma}_{22}}{2S^T}\right)^2 + \left[\left(\frac{Y^C}{2S^T}\right)^2 - 1\right]\frac{\hat{\sigma}_{22}}{Y^C} + \left(\frac{\hat{\sigma}_{12}}{S^L}\right)^2 \quad (15)$$

where X^T and X^C are the longitudinal tensile and compressive strengths, Y^T and Y^C the transverse tensile and compressive strengths, S^L and S^T the longitudinal and transverse shear strengths, $\hat{\sigma}_{ij}$ the components of the effective stress tensor, and α is the weight factor that affects fiber shear damage.
It should be noted that, according to HFC, Equations (12)–(15) define the squares of failure indices. When any of these failure indices surpasses 1, it indicates that the corresponding damage mode has initiated in the composite ply.
The stresses σ1, σ2, and σ12 are the components of the stress tensor σ, from which the effective stresses used to evaluate the failure criteria are obtained. The following expressions are used for this purpose:

$$\hat{\sigma} = M\sigma \quad (16)$$

where σ is the true stress and M is the damage operator:

$$M = \begin{bmatrix} \dfrac{1}{1-d_f} & 0 & 0 \\ 0 & \dfrac{1}{1-d_m} & 0 \\ 0 & 0 & \dfrac{1}{1-d_s} \end{bmatrix} \quad (17)$$

The internal damage variables df, dm, and ds characterize fiber, matrix, and shear damage. These variables are derived from the damage variables $d_f^t$, $d_f^c$, $d_m^t$, and $d_m^c$, corresponding to the aforementioned four modes:

$$d_f = \begin{cases} d_f^t & \text{if } \hat{\sigma}_{11} \ge 0 \\ d_f^c & \text{if } \hat{\sigma}_{11} < 0 \end{cases} \qquad d_m = \begin{cases} d_m^t & \text{if } \hat{\sigma}_{22} \ge 0 \\ d_m^c & \text{if } \hat{\sigma}_{22} < 0 \end{cases} \qquad d_s = 1 - \left(1-d_f^t\right)\left(1-d_f^c\right)\left(1-d_m^t\right)\left(1-d_m^c\right) \quad (18)$$
Each damage variable ranges from 0, representing an undamaged state, to 1, indicating complete material failure. The damage operator M modifies the nominal stress components to reflect the progressive degradation of material stiffness associated with damage evolution in each failure mode.
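The four mode equations and the damage operator can be sketched compactly in code. The fragment below (an illustrative reimplementation, not the VUMAT used in the paper; strength values in the test are hypothetical) evaluates the squared Hashin indices and builds the diagonal damage operator:

```python
import numpy as np

def hashin_indices(s11, s22, s12, XT, XC, YT, YC, SL, ST, alpha=0.0):
    """Squared Hashin failure indices (fiber, matrix) for a plane-stress ply.

    The sign of the normal stresses selects the tension or compression mode;
    each index reaching 1 signals initiation of that damage mode.
    """
    if s11 >= 0:
        Ff = (s11 / XT) ** 2 + alpha * (s12 / SL) ** 2   # fiber tension
    else:
        Ff = (s11 / XC) ** 2                              # fiber compression
    if s22 >= 0:
        Fm = (s22 / YT) ** 2 + (s12 / SL) ** 2            # matrix tension
    else:
        Fm = ((s22 / (2 * ST)) ** 2
              + ((YC / (2 * ST)) ** 2 - 1) * s22 / YC
              + (s12 / SL) ** 2)                          # matrix compression
    return Ff, Fm

def damage_operator(df, dm, ds):
    """M maps nominal stress to effective stress: sigma_hat = M @ sigma."""
    return np.diag([1 / (1 - df), 1 / (1 - dm), 1 / (1 - ds)])
```

In the undamaged state all damage variables are zero and M reduces to the identity, so effective and nominal stresses coincide, which is the expected starting point of the progressive degradation.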
2.3. Procedure for Determining the Failure Index Based on the Maximum Stress Criterion
A flowchart for computing the maximum stress failure index of a laminate exposed to mechanical loading [55] is presented in Figure 2.
The procedure for determining the failure index using the Maximum Stress Criterion begins with the preparation of input data. This includes the basic lamina properties such as the longitudinal modulus (E1), transverse modulus (E2), Poisson’s ratio (ν12), and in-plane shear modulus (G12). Additionally, the laminate stacking sequence must be defined, including the orientation angles of each ply relative to the reference direction, along with the thickness of each ply. External loading conditions are also required in the form of in-plane forces (Nₓ, Nᵧ) and moments (Mₓ, Mᵧ).
With the material and geometric data available, the next step is to compute the stiffness matrix. The ply stiffness matrix [Q]1,2, referred to the principal material axes, is computed using the following relation:

$$[Q]_{1,2} = \begin{bmatrix} \dfrac{E_1}{1-\nu_{12}\nu_{21}} & \dfrac{\nu_{12}E_2}{1-\nu_{12}\nu_{21}} & 0 \\ \dfrac{\nu_{12}E_2}{1-\nu_{12}\nu_{21}} & \dfrac{E_2}{1-\nu_{12}\nu_{21}} & 0 \\ 0 & 0 & G_{12} \end{bmatrix}, \qquad \nu_{21} = \nu_{12}\frac{E_2}{E_1} \quad (19)$$
Once the local stiffness matrix is calculated, it must be transformed to the global laminate coordinate system (x, y). Transforming the stiffness $[\bar{Q}]^k_{xy}$ of layer k to the laminate coordinate system (x, y) proceeds according to the following:

$$[\bar{Q}]^k_{xy} = [T]^{-1}[Q]_{1,2}[T]^{-\mathsf{T}} \quad (20)$$

where [T] is the standard stress transformation matrix for the orientation angle θk of layer k.
Following this, the vertical coordinates of each ply's top and bottom surfaces (zk and zk−1) are computed relative to the laminate mid-plane. These values are used in the calculation of the laminate stiffness matrices: the extensional matrix [A], the coupling matrix [B], and the bending matrix [D]:

$$[A] = \sum_{k=1}^{N} [\bar{Q}]^k_{xy}\left(z_k - z_{k-1}\right) \quad (21)$$

$$[B] = \frac{1}{2}\sum_{k=1}^{N} [\bar{Q}]^k_{xy}\left(z_k^2 - z_{k-1}^2\right) \quad (22)$$

$$[D] = \frac{1}{3}\sum_{k=1}^{N} [\bar{Q}]^k_{xy}\left(z_k^3 - z_{k-1}^3\right) \quad (23)$$

These matrices are obtained by summing the contributions of each ply's stiffness over its thickness, weighted by the appropriate powers of the z-coordinates.
Once the stiffness matrices are determined, the overall laminate compliance matrix is calculated by inverting the combined matrix formed by [A], [B], and [D]:

$$\begin{bmatrix} [a] & [b] \\ [b]^{\mathsf{T}} & [d] \end{bmatrix} = \begin{bmatrix} [A] & [B] \\ [B] & [D] \end{bmatrix}^{-1} \quad (24)$$

From this, the mid-plane strains [ε0]x,y and curvatures [κ]x,y are computed using the applied loads (N and M):

$$\begin{Bmatrix} [\varepsilon^0]_{x,y} \\ [\kappa]_{x,y} \end{Bmatrix} = \begin{bmatrix} [a] & [b] \\ [b]^{\mathsf{T}} & [d] \end{bmatrix} \begin{Bmatrix} N \\ M \end{Bmatrix} \quad (25)$$
To evaluate the stress and strain in a specific ply, the through-thickness coordinate z at the point of interest must be selected. For laminates with many thin layers or symmetric laminates under in-plane loading, the mid-plane of the ply is typically used. However, for laminates with few, thick layers under bending or asymmetry, it is often necessary to evaluate strains and stresses at the top and bottom surfaces of each ply to capture maximum values. The total strain at the chosen point in the global coordinate system is calculated by adding the mid-plane strain to the product of curvature and distance z from the mid-plane using Equation (26):

$$[\varepsilon]_{x,y} = [\varepsilon^0]_{x,y} + z\,[\kappa]_{x,y} \quad (26)$$
These global strains are then transformed to the local material axes (1, 2) using a standard transformation matrix:

$$[\varepsilon]^k_{1,2} = [T_{\varepsilon}]\,[\varepsilon]_{x,y} \quad (27)$$

where $[T_{\varepsilon}]$ is the strain transformation matrix for the orientation angle θk of ply k.
The stresses in each ply $[\sigma]^k_{1,2}$ are then obtained by multiplying the local strain vector by the ply stiffness matrix in the (1, 2) coordinate system using Equation (28):

$$[\sigma]^k_{1,2} = [Q]_{1,2}\,[\varepsilon]^k_{1,2} \quad (28)$$
To complete the failure analysis, material strength data [F]1,2 must be input, including the allowable tensile and compressive strengths in both the longitudinal and transverse directions, as well as the shear strength. The Maximum Stress Criterion is then applied by comparing each stress component to its corresponding allowable strength. A failure index is calculated as the maximum ratio between the actual stress and the material strength. If any of these ratios exceed 1, the ply is considered to have failed under the given loading conditions.
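The stiffness-assembly part of the procedure above can be condensed into a short numerical sketch. The following Python fragment (an illustrative reimplementation of classical lamination theory, not the authors' code; the elastic constants in the test are hypothetical and equal ply thicknesses are assumed) computes the ply stiffness, its transformation to laminate axes, and the [A], [B], [D] matrices:

```python
import numpy as np

def ply_stiffness(E1, E2, nu12, G12):
    """Reduced plane-stress stiffness [Q] in principal material axes."""
    nu21 = nu12 * E2 / E1
    d = 1.0 - nu12 * nu21
    return np.array([[E1 / d, nu12 * E2 / d, 0.0],
                     [nu12 * E2 / d, E2 / d, 0.0],
                     [0.0, 0.0, G12]])

def transformed_stiffness(Q, theta_deg):
    """Rotate [Q] to laminate axes: Qbar = T^-1 Q R T R^-1 (Reuter form)."""
    c, s = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
    T = np.array([[c * c, s * s, 2 * c * s],
                  [s * s, c * c, -2 * c * s],
                  [-c * s, c * s, c * c - s * s]])
    R = np.diag([1.0, 1.0, 2.0])  # Reuter matrix (engineering shear strain)
    return np.linalg.inv(T) @ Q @ R @ T @ np.linalg.inv(R)

def abd_matrices(Q, angles, t_ply):
    """Assemble [A], [B], [D] for equal-thickness plies stacked bottom-up."""
    n = len(angles)
    z = np.linspace(-n * t_ply / 2, n * t_ply / 2, n + 1)  # interface coords
    A = np.zeros((3, 3)); B = np.zeros((3, 3)); D = np.zeros((3, 3))
    for k, th in enumerate(angles):
        Qb = transformed_stiffness(Q, th)
        A += Qb * (z[k + 1] - z[k])
        B += Qb * (z[k + 1] ** 2 - z[k] ** 2) / 2.0
        D += Qb * (z[k + 1] ** 3 - z[k] ** 3) / 3.0
    return A, B, D
```

A useful check is that a symmetric lay-up such as [0/90/90/0] produces a vanishing coupling matrix [B], so membrane and bending responses decouple as the theory predicts.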
2.4. Optimization Methods
2.4.1. Grid Search Method
The grid search method, also known as the scanning method, presents a way to minimize the objective function when solving problems in technical and other sciences [56]. It is characterized by searching for the value of the objective function within the allowed area, where an extremum of the objective function can be found. The larger the density of points where the function is examined, or the smaller the scanning step, the greater the accuracy of the results will be. This increases the likelihood of discovering the global extremum among the local ones. In practice, finding the global optimum can be ensured by maintaining a sufficiently high density of points, which is the main practical feature of the scanning method. Scanning methods are divided based on the type of search pattern used within the allowed area. The pattern can take the form of a grid (Figure 3), a spiral, or other configurations, with either constant or variable steps.
The grid search method used in this paper employs a grid with a constant step. This method is characterized by simplicity and certainty in identifying the lowest function value. Its simplicity comes from using mathematical equations that do not require derivatives. Finding the extremum is based on a straightforward and concise mathematical formulation. Constraint functions do not complicate the procedure; they are often expressed as inequalities. The simplicity of the scanning method offers an advantage over other optimization techniques.
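A constant-step grid search over discrete ply angles amounts to exhaustive enumeration of the design space. The sketch below (illustrative only; the objective function is a placeholder surrogate, not the FE-based failure index used in the paper) shows the structure of such a search:

```python
import itertools

def grid_search(objective, angle_set, n_plies):
    """Exhaustively evaluate `objective` on a regular grid of ply angles.

    `angle_set` is the discrete set of candidate orientations per ply;
    the full grid has len(angle_set) ** n_plies points.
    """
    best_layup, best_val = None, float("inf")
    for layup in itertools.product(angle_set, repeat=n_plies):
        val = objective(layup)
        if val < best_val:
            best_layup, best_val = layup, val
    return best_layup, best_val
```

The exponential growth of the grid with the number of plies is exactly what motivates pairing this method with a genetic algorithm for larger design spaces.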
2.4.2. Genetic Algorithm
A genetic algorithm is a metaheuristic optimization technique inspired by the principles of natural selection and evolution, designed to efficiently explore complex search spaces and identify optimal solutions [57,58]. By applying specialized evolutionary techniques, effective formulas for predicting certain events can be developed. In a genetic algorithm, each potential solution is encoded as a sequence of genes called a chromosome. These chromosomes collectively form a population, often called a generation, at a specific point in time. After defining the objective function, an initial population is generated. Each chromosome is then evaluated and scored based on its performance. If the desired criteria are not met, the algorithm proceeds through multiple cycles of selecting, combining, and mutating chromosomes until an optimal or acceptable solution is achieved. At each stage, a genetic algorithm applies three types of rules to produce the next generation: selection, crossover, and mutation. The process starts with the selection principle, where individuals are chosen to serve as parents for the next population. These selected parents undergo crossover, producing offspring that form the new generation. To maintain diversity and explore new options, mutations are introduced randomly, helping the population evolve.
In a genetic algorithm, key terms include gene, representing a specific property or variable; chromosome (or organism), which is a combination of genes and represents a potential solution; and population, consisting of multiple chromosomes. Within this population, parents are selected to generate the next generation. Reproduction involves applying crossover and mutation to these parents, creating new chromosomes and evolving the population over time. Initially, chromosomes are created from existing variables, and the algorithm uses a diverse initial population. Each chromosome is tested, with better ones having a higher chance of survival and reproduction, while weaker ones die out. The next step is to create the second generation from this initial pool.
A new generation is produced by crossing suitable pairs of individuals, resulting in chromosomes that are more fit than those in the previous generation. This process repeats until the algorithm's termination condition is met. The algorithm can be stopped under several conditions, such as reaching a predefined number of generations or finding a satisfactory or optimal solution. It may also end if the population becomes uniform, with little to no variation remaining, or manually, based on visual inspection or user judgment. Due to the randomness involved, the final solutions can differ across multiple runs, although they tend to be similar. These differences are caused by various internal factors within the algorithm. The entire process of multi-criteria optimization with genetic algorithms is illustrated in Figure 4.
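The selection, crossover, and mutation loop described above can be sketched as follows. This is a deliberately simplified GA (truncation selection with elitism, one-point crossover, random gene mutation), not the implementation used in the study; the objective and parameter values in the test are placeholders:

```python
import random

def genetic_algorithm(objective, angle_set, n_plies,
                      pop_size=30, generations=50,
                      crossover_p=0.8, mutation_p=0.1, seed=0):
    """Minimize `objective` over chromosomes of discrete ply angles."""
    rng = random.Random(seed)
    # Initial population: random chromosomes (lists of ply angles)
    pop = [[rng.choice(angle_set) for _ in range(n_plies)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)                 # rank by fitness (lower = better)
        survivors = pop[:pop_size // 2]         # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(survivors):
            p1, p2 = rng.sample(survivors, 2)
            if rng.random() < crossover_p:      # one-point crossover
                cut = rng.randrange(1, n_plies)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            for i in range(n_plies):            # random mutation per gene
                if rng.random() < mutation_p:
                    child[i] = rng.choice(angle_set)
            children.append(child)
        pop = survivors + children
    best = min(pop, key=objective)
    return best, objective(best)
```

Because the sorted survivors always include the current best chromosome, the best objective value is non-increasing from generation to generation; the random seed makes individual runs reproducible while different seeds illustrate the run-to-run variability mentioned above.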
5. Conclusions
In this study, the optimal choice of composite plate was determined based on the impact load. The impactor model, with a specified kinetic energy, was represented using the Johnson–Cook plasticity model combined with a material damage law. This approach proved effective for accurately simulating the impact of a deformable impactor on the target. The initial setup of the composite plate involved defining its geometry, the number of layers, their thicknesses, and the constituent materials. These parameters were kept fixed, while optimization was performed for the fiber orientations within the layers. The material selection was based on Kevlar 49, owing to its high energy-absorption capability and resistance to impact loads.
Optimization was carried out using both the grid search method and a genetic algorithm based on the maximum stress criterion. Each method provided acceptable solutions with specific advantages, while combining the two approaches yielded more accurate results. The success of the optimization was evaluated based on the condition of the composite plate after impact, which remained undamaged. This finding was validated through a three-dimensional simulation of the optimal solution, conducted according to the Hashin criterion, which confirmed the absence of penetration.
This study represents the authors’ initial step toward developing suitable approaches for optimizing composite material structures subjected to impact loading. The present work is limited to low-speed impact, ensuring the design of a composite plate that can withstand the specified impact without damage. Additional constraints include the use of a relatively simple structural geometry, a limited number of layers with constant thicknesses and a predefined material system. A further limitation of the study lies in the definition of general contact, which was modeled using a surface-to-surface formulation with a linear geometric approximation. The normal contact behavior was modeled as hard contact, while the tangential behavior was assumed to be frictionless. The two-dimensional discretized mesh model employed for the optimization was assigned only elastic properties to enable monitoring of the objective function parameters. However, this approach allows tracking only up to the onset of damage, which is insufficient to capture the model’s final failure state. Future research will address these limitations by considering more complex geometries, variable lay-up configurations, and a broader range of materials. It will also incorporate experimental verification. Expanding the set of optimization parameters will inevitably increase the numerical effort required, making computational efficiency and solution accuracy even more critical. Hence, future work will also investigate the application of advanced metaheuristic optimization methods and their tuning with the aim of improved computational efficiency and solution accuracy.