1. Introduction
The past two decades have witnessed a surge in technological progress, fuelled by the rise of numerical methods as a key driver of innovation across diverse engineering disciplines, particularly mechanical engineering. This progress has been further supported by the impressive growth in computational power since the late 20th century. The development of increasingly fast, efficient, adaptable, and user-friendly computational tools has led to the widespread adoption of complex numerical methods in engineering simulations [
1]. Notably, their application in various computational mechanics techniques has revolutionised the entire mechanical construction industry, allowing for the design of optimised components that respect the constraints imposed by the manufacturer and enabling accurate predictions of how these components will behave under real-world operating conditions. In the automotive industry, for instance, the structural optimisation of various parts plays a crucial role in the performance and behaviour of a vehicle during driving. This approach has produced successful results, considerably reducing a component's weight while maintaining the strength necessary to meet all its mechanical requirements [
2].
In the automotive manufacturing sector, reducing weight across various components offers a pathway for manufacturers to improve profitability. Lighter vehicles often require less material, leading to a direct decrease in production costs [
3]. By integrating weight reduction strategies with routine optimisation processes during the part design phase, manufacturers can achieve substantial financial benefits. Beyond this financial incentive, even modest weight reductions throughout the vehicle can significantly improve overall vehicle performance. Growing public demand and stricter regulations within the automotive industry are driving the need to develop lighter, safer, and more production-efficient components across all vehicle functionality areas [
4]. For some components, weight reduction also contributes to enhanced comfort. To meet these evolving requirements, the application of structural optimisation methods in the automotive field has surged in recent years, keeping pace with advancements and refinements in computational techniques. The rise of electrification and increased programming intelligence in automotive electronics has led to a trend of increasing overall mass in passenger cars [
5]. This trend further emphasises the need for weight reduction and, consequently, a decrease in inertia. Reducing weight offers a multitude of benefits for the vehicle itself [
4], including the following:
Reduced fuel/energy consumption (particularly crucial for electric vehicles with inherent weight and range limitations);
Lower emissions of greenhouse gases and air pollutants;
Decreased wear and tear on public roads and various vehicle components, such as tyres, brakes, suspension, engines, and transmissions;
Improved acceleration, deceleration, and overall drivability;
Enhanced safety for pedestrians, drivers, and passengers.
Computer-aided structural assessment techniques equip engineers with powerful tools to analyse the behaviour of components and systems under various loading and stress conditions. These techniques complement the overall optimisation process for the studied structure. Through computer simulations, engineers can predict stresses, strains, and displacements, allowing for the identification of critical failure points and efficient design optimisation. The Finite Element Method (FEM) has historically been the dominant discretisation technique in research, development, and education within computational mechanics [
6]. However, new techniques have emerged to address limitations of FEM, such as its reduced accuracy in large-deformation analyses. These techniques aim to improve the accuracy and efficiency of numerical simulations.
Meshless methods have proven to be a more accurate alternative to FEM in demanding non-linear problems, such as fracture and impact mechanics, where problems involve transient domain boundaries and meshing becomes a challenge. Unlike FEM, meshless methods utilise arbitrarily distributed nodes and approximate field functions based on an influence domain rather than elements. Additionally, the rule in FEM that elements cannot overlap does not apply to meshless methods: their influence domains can and should overlap [
7].
Pioneering the application of meshless methods in computational mechanics was the Diffuse Element Method (DEM), developed by Nayroles et al. [
8]. This method leveraged the approximation functions of the Moving Least Squares (MLS), introduced by Lancaster and Salkauskas [
9], to construct the required approximations [
10,
11]. Belytschko et al. [
12] further enhanced the DEM concept and established one of the most prominent and widely used meshless methods, the Element Free Galerkin Method (EFGM). Over time, additional methods emerged, including the Meshless Local Petrov–Galerkin (MLPG) method by [
13], the Finite Point Method (FPM) by [
14], and the Finite Sphere Method (FSM) by [
15].
While these methods have been successfully applied to various problems in computational mechanics, they all share limitations, primarily stemming from the use of approximation functions instead of interpolation functions. The Point Interpolation Method (PIM) offers an alternative, effectively addressing the challenge of directly imposing essential boundary conditions [
16]. This is achieved by constructing shape functions with the Kronecker delta property, making them simpler to handle than those of methods such as the EFGM. Later, the PIM was enhanced with the incorporation of radial basis functions, leading to the Radial Point Interpolation Method (RPIM) [
7,
17]. Other efficient interpolation meshless methods were developed in parallel, such as the Natural Element Method (NEM) [
18], the Meshless Finite Element Method (MFEM) [
19], and the Natural Radial Element Method (NREM) [
20]. These methods offer the advantage of potentially reusing FEM pre-processing workflows (allowing them to be integrated into most FEM software), which helps mitigate the generally higher computational cost associated with these numerical techniques. Among these methods, RPIM stands out as the most popular and well-represented interpolating meshless approach in the research literature [
7].
The objective of the present study is to employ a bio-inspired bi-evolutionary optimisation algorithm for a suspension control arm. The control arm design is based on the geometry of existing industry-standard suspension arms. In the field of automotive mechanical construction, product development heavily relies on design philosophies informed by engineers’ empirical knowledge. By applying automated techniques to selectively remove material from specific stressed components, it becomes feasible to achieve designs that align with the manufacturer’s requirements while significantly reducing mass. Additionally, recent literature indicates a preference for meshless methods, which not only serve as a viable alternative to FEM but also offer potential advantages [
21,
22].
The paper begins with a review and brief explanation of the studied numerical method, the RPIM. Next, the numerical procedures of a traditional bi-evolutionary algorithm are examined, and a parallel is drawn between these procedures and the concept of bone remodelling. The article then presents a numerical study in which the optimisation algorithm is applied to the control arm, with topologies designed following industry-standard philosophies for this specific component. A linear-static analysis follows, comparing the stiffness, specific stiffness, displacement, and maximum von Mises stress values among the different designed models. Finally, the last section summarises the findings and revisits the conclusions drawn throughout this numerical study.
2. Radial Point Interpolation Method
Similar to most node-dependent discretisation methods, the majority of meshless methods follow a common procedure. The main differences between meshless methods and the standard, well-known FEM are found in the pre-processing phase [
7]. In contrast to FEM, which discretises the problem domain with elements (containing nodes and integration points), meshless methods discretise the problem domain using only a nodal set. In meshless methods, there are no pre-established connections between the discretisation nodes; such connectivity is enforced through a mathematical concept (radial search, natural neighbours, etc.) [
7]. Also, the background integration points of meshless formulations can be constructed using node-independent or node-dependent rules, allowing greater flexibility in their distribution across the problem domain [
7]. Regarding shape function construction, in contrast with FEM, meshless methods can employ distinct techniques to build their shape functions, such as Moving Least Squares, Point Interpolation, Polynomial Interpolation, Sibson Interpolation, etc. [
7]. Thus, a general overview of the meshless procedure is provided next.
In meshless formulations, first, the solid domain is discretised with a nodal set, and the essential and natural boundaries are established. Following discretisation, it is necessary to construct a cloud of background integration points covering the complete solid domain. The integration points can be obtained using the same procedure as in FEM: first, integration cells discretising the complete solid domain are constructed, and then the integration points are obtained following the Gauss–Legendre quadrature integration technique [
7]. Each cell is subsequently transformed into a unit square using a parametric transformation (
Figure 1). This transformation allows for the distribution of integration points within the resulting isoparametric square [
7,
17].
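As a purely illustrative sketch of this step (the function and variable names below are hypothetical and not taken from the original implementation), 2 × 2 Gauss–Legendre points defined on the isoparametric square can be mapped to a quadrilateral integration cell as follows:

```python
import numpy as np

def gauss_points_in_cell(cell_xy, n_gp=2):
    """Map Gauss-Legendre points from the isoparametric square [-1,1]^2
    to a quadrilateral integration cell defined by its 4 corner coordinates."""
    xi_1d, w_1d = np.polynomial.legendre.leggauss(n_gp)   # 1D points and weights
    pts, wgt = [], []
    for xi, wx in zip(xi_1d, w_1d):
        for eta, wy in zip(xi_1d, w_1d):
            # bilinear shape functions of the isoparametric square and their derivatives
            N = 0.25 * np.array([(1 - xi) * (1 - eta), (1 + xi) * (1 - eta),
                                 (1 + xi) * (1 + eta), (1 - xi) * (1 + eta)])
            dN_dxi = 0.25 * np.array([-(1 - eta), (1 - eta), (1 + eta), -(1 + eta)])
            dN_deta = 0.25 * np.array([-(1 - xi), -(1 + xi), (1 + xi), (1 - xi)])
            J = np.array([dN_dxi @ cell_xy, dN_deta @ cell_xy])   # 2x2 Jacobian
            pts.append(N @ cell_xy)                  # physical coordinates of the point
            wgt.append(wx * wy * np.linalg.det(J))   # corresponding integration weight
    return np.array(pts), np.array(wgt)

# example: one square integration cell with unit side length
cell = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
xg, wg = gauss_points_in_cell(cell)
print(xg, wg.sum())   # the weights sum to the cell area (1.0)
```

The integration weights returned by this mapping sum to the cell area, which is a quick consistency check for the quadrature construction.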
Once the integration mesh is established, the next step involves defining the nodal connectivity. In FEM, element definitions dictate nodal connectivity. However, meshless methods, as mentioned earlier, rely on influence domains rather than elements. Nodal connectivity is established by overlapping these influence domains. Finding these domains involves searching for nodes within a specific area or volume (depending on the problem’s spatial dimension). This approach, due to its simplicity and low computational cost, is widely used in various meshless methods like RPIM, EFGM, MLPG, and RKPM. It is important to consider the impact of domain size and shape on the method’s performance to achieve accurate and satisfactory solutions. While the literature suggests a single optimal size for influence domains is not feasible, a fixed reference dimension is often adopted. Each integration point, x_I, searches for its n closest nodes, forming its own influence domain. For the RPIM, the literature recommends between nine and sixteen nodes inside each influence domain [7].
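A minimal sketch of this radial search is given below, assuming a simple brute-force nearest-neighbour query; the routine name and the fixed number of neighbours are illustrative choices, not the authors' code.

```python
import numpy as np

def influence_domains(nodes, integration_points, n_neighbours=16):
    """For each integration point, return the indices of its closest nodes.
    The RPIM literature recommends 9 to 16 nodes per influence domain [7]."""
    domains = []
    for xq in integration_points:
        dist = np.linalg.norm(nodes - xq, axis=1)    # distance to every node
        domains.append(np.argsort(dist)[:n_neighbours])
    return domains
```

In practice, spatial data structures (e.g., k-d trees) can replace the brute-force search for large nodal sets without changing the resulting connectivity.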
In order to establish the system of equations, it is necessary to construct the shape functions. To obtain them, RPIM uses the radial point interpolating technique [
7,
17]. Consider a function, u(x), defined within the domain Ω. The domain is discretised by a set of N nodes, denoted by X = {x_1, x_2, ..., x_N}. It can be further assumed that only the n nodes located within the influence domain of a specific interest point, x_I, have an impact on the value of the function, u(x_I), at that point:

u(\mathbf{x}_I) = \sum_{i=1}^{n} R_i(\mathbf{x}_I)\, a_i(\mathbf{x}_I) + \sum_{j=1}^{m} p_j(\mathbf{x}_I)\, b_j(\mathbf{x}_I) = \mathbf{r}^{T}(\mathbf{x}_I)\,\mathbf{a}(\mathbf{x}_I) + \mathbf{p}^{T}(\mathbf{x}_I)\,\mathbf{b}(\mathbf{x}_I)    (1)

where R_i(x_I) represents a radial basis function (RBF), n and m stand for the number of nodes within the influence domain of x_I and the number of monomials of the polynomial basis function, respectively, and a_i(x_I) and b_j(x_I) represent non-constant coefficients of R_i(x_I) and p_j(x_I), respectively. Several radial basis functions are available in the literature [7], and many have been explored and developed over time. This work utilises the multiquadrics (MQ) RBF, first introduced by Hardy [23]. This function is widely recognised as yielding superior results and is often paired with the RPI technique [7]. The MQ RBF can be described as follows:

R_i(\mathbf{x}_I) = \left( r_{iI}^{2} + c^{2} \right)^{p}    (2)
for which,

r_{iI} = \sqrt{(x_i - x_I)^{2} + (y_i - y_I)^{2} + (z_i - z_I)^{2}}    (3)

with r_{iI} being the distance between the interest point x_I and the node x_i.
The literature shows that the shape parameters c and p considerably influence the shape of the final interpolation function and therefore the accuracy of the final results [
7,
17]. Therefore, following the literature recommendation, in this work, the following shape parameters are assumed: c = 0.0001 and p = 0.9999 [
7,
17]. These values ensure that the constructed shape functions actually possess the Kronecker delta property. To ensure a unique approximation, the following additional system of equations is introduced:

\sum_{i=1}^{n} p_j(\mathbf{x}_i)\, a_i(\mathbf{x}_I) = 0, \qquad j = 1, 2, \ldots, m    (4)
With Equations (1) and (4), it is possible to write

\begin{bmatrix} \mathbf{R} & \mathbf{P} \\ \mathbf{P}^{T} & \mathbf{0} \end{bmatrix} \begin{Bmatrix} \mathbf{a}(\mathbf{x}_I) \\ \mathbf{b}(\mathbf{x}_I) \end{Bmatrix} = \mathbf{G} \begin{Bmatrix} \mathbf{a}(\mathbf{x}_I) \\ \mathbf{b}(\mathbf{x}_I) \end{Bmatrix} = \begin{Bmatrix} \mathbf{u}_s \\ \mathbf{0} \end{Bmatrix}    (5)

where R is the RBF moment matrix, P the polynomial moment matrix, and u_s the vector of nodal function values within the influence domain, which allows us to obtain the non-constant coefficients a(x_I) and b(x_I):

\begin{Bmatrix} \mathbf{a}(\mathbf{x}_I) \\ \mathbf{b}(\mathbf{x}_I) \end{Bmatrix} = \mathbf{G}^{-1} \begin{Bmatrix} \mathbf{u}_s \\ \mathbf{0} \end{Bmatrix}    (6)

Incorporating a(x_I) and b(x_I) into Equation (1), the following interpolation is achieved:

u(\mathbf{x}_I) = \left\{ \mathbf{r}^{T}(\mathbf{x}_I) \;\; \mathbf{p}^{T}(\mathbf{x}_I) \right\} \mathbf{G}^{-1} \begin{Bmatrix} \mathbf{u}_s \\ \mathbf{0} \end{Bmatrix} = \boldsymbol{\varphi}(\mathbf{x}_I)\, \mathbf{u}_s    (7)

being the shape function vector defined as

\boldsymbol{\varphi}(\mathbf{x}_I) = \left\{ \varphi_1(\mathbf{x}_I) \;\; \varphi_2(\mathbf{x}_I) \;\; \cdots \;\; \varphi_n(\mathbf{x}_I) \right\}    (8)
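Purely as an illustrative sketch of the construction above (assuming the MQ RBF with the shape parameters indicated, a linear polynomial basis, and hypothetical routine names, not the authors' code), the shape function vector can be obtained by assembling and solving the local system involving the matrix G:

```python
import numpy as np

def rpim_shape_functions(x_I, nodes, c=0.0001, p=0.9999):
    """RPIM shape functions at the interest point x_I (2D), using the
    multiquadrics RBF R(r) = (r^2 + c^2)^p and a linear polynomial basis."""
    n = nodes.shape[0]
    poly = lambda x: np.array([1.0, x[0], x[1]])                  # m = 3 monomials
    rbf = lambda a, b: (np.sum((a - b) ** 2) + c ** 2) ** p

    # moment matrices: R (n x n) and P (n x 3)
    R = np.array([[rbf(nodes[i], nodes[j]) for j in range(n)] for i in range(n)])
    P = np.array([poly(nodes[i]) for i in range(n)])

    G = np.zeros((n + 3, n + 3))
    G[:n, :n], G[:n, n:], G[n:, :n] = R, P, P.T

    # evaluate the basis at the interest point and solve for the shape functions
    rhs = np.concatenate([np.array([rbf(x_I, nodes[i]) for i in range(n)]), poly(x_I)])
    phi_full = np.linalg.solve(G, rhs)     # G is symmetric, so this equals [r^T p^T] G^{-1}
    return phi_full[:n]                    # keep only the n nodal shape functions

# small check with an arbitrary node cloud
nodes = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]], float)
print(np.round(rpim_shape_functions(nodes[0], nodes), 4))
```

Evaluating the routine at a node of the influence domain returns approximately one at that node and zero at the others, illustrating the Kronecker delta property discussed above.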
In elasticity, the equilibrium conditions can be represented by the following system of partial differential equations:

\nabla \cdot \boldsymbol{\sigma} + \mathbf{b} = \mathbf{0} \quad \text{in } \Omega    (9)

where ∇ represents the nabla operator, σ the Cauchy stress tensor, and b the body force vector. The boundary surface can be divided into two types of boundaries:
Natural boundaries (Γ_t), where \boldsymbol{\sigma}\,\mathbf{n} = \bar{\mathbf{t}}.
Essential boundaries (Γ_u), where \mathbf{u} = \bar{\mathbf{u}}.
where \bar{\mathbf{u}} represents the prescribed displacement vector on the essential boundary, Γ_u. Similarly, \bar{\mathbf{t}} denotes the traction force applied on the natural boundary, Γ_t, and n represents its outward normal unit vector.
The Cauchy stress tensor σ, usually represented in the Voigt notation (Equation (10)), and the strain ε (Equation (11)) can be related with Hooke’s law: \boldsymbol{\sigma} = \mathbf{c}\,\boldsymbol{\varepsilon}, being the strain state obtained from the displacement field: \boldsymbol{\varepsilon} = \mathbf{L}\,\mathbf{u}.

\boldsymbol{\sigma} = \left\{ \sigma_{xx} \;\; \sigma_{yy} \;\; \sigma_{zz} \;\; \tau_{xy} \;\; \tau_{yz} \;\; \tau_{zx} \right\}^{T}    (10)

\boldsymbol{\varepsilon} = \left\{ \varepsilon_{xx} \;\; \varepsilon_{yy} \;\; \varepsilon_{zz} \;\; \gamma_{xy} \;\; \gamma_{yz} \;\; \gamma_{zx} \right\}^{T}    (11)

In a full 3D problem, the differential operator L and the material constitutive matrix c are represented as shown in Equations (12) and (13), respectively:

\mathbf{L} = \begin{bmatrix} \partial/\partial x & 0 & 0 \\ 0 & \partial/\partial y & 0 \\ 0 & 0 & \partial/\partial z \\ \partial/\partial y & \partial/\partial x & 0 \\ 0 & \partial/\partial z & \partial/\partial y \\ \partial/\partial z & 0 & \partial/\partial x \end{bmatrix}    (12)

\mathbf{c} = \frac{E}{(1+\nu)(1-2\nu)} \begin{bmatrix} 1-\nu & \nu & \nu & 0 & 0 & 0 \\ \nu & 1-\nu & \nu & 0 & 0 & 0 \\ \nu & \nu & 1-\nu & 0 & 0 & 0 \\ 0 & 0 & 0 & \tfrac{1-2\nu}{2} & 0 & 0 \\ 0 & 0 & 0 & 0 & \tfrac{1-2\nu}{2} & 0 \\ 0 & 0 & 0 & 0 & 0 & \tfrac{1-2\nu}{2} \end{bmatrix}    (13)

where E is the Young’s modulus and ν the Poisson’s ratio. The selection of a methodology for approximating solutions to partial differential equations hinges on the intended application. Within computational mechanics, both strong and weak formulations are widely used approximation methods. While the strong form offers a potentially more direct and accurate approach in certain cases, it can become challenging to implement for problems of greater complexity. In these scenarios, due to its ability to generate a more robust system of equations, using the weak form is more advantageous. Thus, assuming the virtual work principle, the energy conservation is imposed:

\int_{\Omega} \delta\boldsymbol{\varepsilon}^{T} \boldsymbol{\sigma}\, d\Omega = \int_{\Omega} \delta\mathbf{u}^{T} \mathbf{b}\, d\Omega + \int_{\Gamma_t} \delta\mathbf{u}^{T} \bar{\mathbf{t}}\, d\Gamma    (14)
which allows, after standard manipulation [7], to obtain the following expression:

\int_{\Omega} (\mathbf{L}\,\delta\mathbf{u})^{T}\, \mathbf{c}\, (\mathbf{L}\,\mathbf{u})\, d\Omega = \int_{\Omega} \delta\mathbf{u}^{T} \mathbf{b}\, d\Omega + \int_{\Gamma_t} \delta\mathbf{u}^{T} \bar{\mathbf{t}}\, d\Gamma

leading to the simplified expression: \mathbf{K}\,\mathbf{u} = \mathbf{f}. In this equation, K represents the global stiffness matrix, which can be expressed as follows:

\mathbf{K} = \sum_{I=1}^{n_Q} w_I\, \mathbf{B}^{T}(\mathbf{x}_I)\, \mathbf{c}\, \mathbf{B}(\mathbf{x}_I)

being n_Q the number of integration points discretising the domain and w_I their corresponding integration weight. The deformability matrix B(x_I) can be defined as follows:

\mathbf{B}(\mathbf{x}_I) = \begin{bmatrix} \frac{\partial \varphi_1}{\partial x} & 0 & 0 & \cdots & \frac{\partial \varphi_n}{\partial x} & 0 & 0 \\ 0 & \frac{\partial \varphi_1}{\partial y} & 0 & \cdots & 0 & \frac{\partial \varphi_n}{\partial y} & 0 \\ 0 & 0 & \frac{\partial \varphi_1}{\partial z} & \cdots & 0 & 0 & \frac{\partial \varphi_n}{\partial z} \\ \frac{\partial \varphi_1}{\partial y} & \frac{\partial \varphi_1}{\partial x} & 0 & \cdots & \frac{\partial \varphi_n}{\partial y} & \frac{\partial \varphi_n}{\partial x} & 0 \\ 0 & \frac{\partial \varphi_1}{\partial z} & \frac{\partial \varphi_1}{\partial y} & \cdots & 0 & \frac{\partial \varphi_n}{\partial z} & \frac{\partial \varphi_n}{\partial y} \\ \frac{\partial \varphi_1}{\partial z} & 0 & \frac{\partial \varphi_1}{\partial x} & \cdots & \frac{\partial \varphi_n}{\partial z} & 0 & \frac{\partial \varphi_n}{\partial x} \end{bmatrix}
From Equation (14), it is possible to express the body force vector (denoted by f_b) and the external force vector (denoted by f_t) as follows:

\mathbf{f}_b = \sum_{I=1}^{n_Q} w_I\, \mathbf{H}^{T}(\mathbf{x}_I)\, \mathbf{b}

\mathbf{f}_t = \sum_{I=1}^{n_t} w_I\, \mathbf{H}^{T}(\mathbf{x}_I)\, \bar{\mathbf{t}}

where n_t denotes the number of integration points discretising the boundary where this force is applied, and w_I their corresponding integration weight. Finally, H(x_I) represents the interpolation matrix:

\mathbf{H}(\mathbf{x}_I) = \begin{bmatrix} \varphi_1 & 0 & 0 & \cdots & \varphi_n & 0 & 0 \\ 0 & \varphi_1 & 0 & \cdots & 0 & \varphi_n & 0 \\ 0 & 0 & \varphi_1 & \cdots & 0 & 0 & \varphi_n \end{bmatrix}
The Kronecker delta property inherent to RPI shape functions allows for the direct imposition of essential boundary conditions on the stiffness matrix, K. Thus, the same numerical techniques applied to standard FEM can be used with RPIM. If plane stress is assumed, the problem reduces to a 2D analysis. The presented 3D formulation is still valid; however, all components associated with the Oz direction are removed. Thus, regarding the stress and strain vectors,

\boldsymbol{\sigma} = \left\{ \sigma_{xx} \;\; \sigma_{yy} \;\; \tau_{xy} \right\}^{T}, \qquad \boldsymbol{\varepsilon} = \left\{ \varepsilon_{xx} \;\; \varepsilon_{yy} \;\; \gamma_{xy} \right\}^{T}

The constitutive matrix is reduced to

\mathbf{c} = \frac{E}{1-\nu^{2}} \begin{bmatrix} 1 & \nu & 0 \\ \nu & 1 & 0 \\ 0 & 0 & \tfrac{1-\nu}{2} \end{bmatrix}

and the deformability and interpolation matrices become

\mathbf{B}(\mathbf{x}_I) = \begin{bmatrix} \frac{\partial \varphi_1}{\partial x} & 0 & \cdots & \frac{\partial \varphi_n}{\partial x} & 0 \\ 0 & \frac{\partial \varphi_1}{\partial y} & \cdots & 0 & \frac{\partial \varphi_n}{\partial y} \\ \frac{\partial \varphi_1}{\partial y} & \frac{\partial \varphi_1}{\partial x} & \cdots & \frac{\partial \varphi_n}{\partial y} & \frac{\partial \varphi_n}{\partial x} \end{bmatrix}, \qquad \mathbf{H}(\mathbf{x}_I) = \begin{bmatrix} \varphi_1 & 0 & \cdots & \varphi_n & 0 \\ 0 & \varphi_1 & \cdots & 0 & \varphi_n \end{bmatrix}
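To make the assembly described above more concrete, the sketch below (illustrative only; it assumes the 2D plane stress matrices just presented and the hypothetical shape-function and influence-domain helpers outlined earlier) accumulates the global stiffness matrix integration point by integration point:

```python
import numpy as np

def assemble_stiffness(nodes, int_points, int_weights, c_matrix,
                       shape_fn_grad, domains):
    """Assemble K = sum_I w_I * B(x_I)^T c B(x_I) for a 2D plane stress problem.
    shape_fn_grad(x_I, local_nodes) is assumed to return (dphi_dx, dphi_dy)."""
    n_dof = 2 * nodes.shape[0]
    K = np.zeros((n_dof, n_dof))
    for x_I, w_I, dom in zip(int_points, int_weights, domains):
        dphi_dx, dphi_dy = shape_fn_grad(x_I, nodes[dom])
        n = len(dom)
        B = np.zeros((3, 2 * n))          # deformability matrix at x_I
        B[0, 0::2] = dphi_dx              # eps_xx row
        B[1, 1::2] = dphi_dy              # eps_yy row
        B[2, 0::2] = dphi_dy              # gamma_xy row
        B[2, 1::2] = dphi_dx
        dofs = np.ravel(np.column_stack((2 * dom, 2 * dom + 1)))
        K[np.ix_(dofs, dofs)] += w_I * B.T @ c_matrix @ B
    return K
```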
3. Topology Optimisation with a Bi-Directional Bone Remodelling Algorithm
Within computational mechanics, topological optimisation stands as a prominent optimisation method due to its capacity to generate innovative and highly efficient designs. This approach strategically distributes material within a structure, resulting in optimised shapes and layouts that meet specific performance requirements. These requirements can encompass factors such as strength, stiffness, or other mechanically relevant metrics. Consider a common topological optimisation problem applied to a structure, where the goal is to achieve a design with maximum rigidity while adhering to a constraint on the structure’s mass. Mathematically, this translates to minimising the average compliance subject to a constraint on the material’s weight, which can be expressed as
\underset{\boldsymbol{\rho}}{\min}\; C(\boldsymbol{\rho}) \quad \text{subject to} \quad M = \sum_{i=1}^{N} \rho_i\, m_i \leq M^{*}, \qquad \rho_i \in \{0, 1\}

where C represents the average compliance of the structure, M the mass of the selected structure, m_i the mass of node i, and M* the prescribed mass limit. The design variable ρ_i indicates the presence (ρ_i = 1) or absence (ρ_i = 0) of a node in the layout of the defined domain.
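As a minimal numerical illustration of the two quantities involved in this formulation (hypothetical variable names; the displacement and force vectors would come from the elasto-static analysis), the objective and the mass constraint can be evaluated as follows:

```python
import numpy as np

def compliance_and_mass_ratio(u, f, rho, nodal_mass):
    """Average compliance and mass fraction of the current design.
    rho holds the design variables (1 = material kept, ~0 = removed)."""
    C = float(f @ u)                            # compliance (external work of the loads)
    mass = float(np.sum(rho * nodal_mass))      # current structural mass
    mass_full = float(np.sum(nodal_mass))       # mass of the fully solid domain
    return C, mass / mass_full
```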
3.1. Bi-Directional Evolutionary Structural Optimisation
Evolutionary computation, inspired by biological evolution, utilises selection, reproduction, and variation mechanisms to find optimal solutions for complex problems [
24]. Compared with traditional optimisation techniques, evolutionary methods offer greater robustness, exploration capabilities, and flexibility. These attributes make them well-suited for tackling intricate problems with challenging cost functions. In recent years, methods like Evolutionary Structural Optimisation (ESO), developed by Xie and Steven [
25], have gained widespread application in structural optimisation problems. This iterative technique removes material deemed inefficient or redundant from a specific domain to achieve an optimal design. While the ESO method demonstrates widespread application and popularity, it does possess limitations that have motivated the exploration of alternative techniques [
26]. To enhance the solution viability achieved through optimisation, the concept of a bidirectional algorithm emerged. This approach would not only remove material in low-stress areas but also introduce material to reinforce high-stress regions. This led to the development of the Bi-Directional Evolutionary Structural Optimisation (BESO) method, drawing inspiration from both ESO’s material removal capabilities and the additive material functionalities of the Additive Evolutionary Structural Optimisation (AESO) method [
27]. BESO facilitates a more comprehensive exploration of the design domain, potentially leading to superior identification of the global minimum.
The elasto-static analysis step initiates each optimisation iteration and returns the displacement, strain, and stress fields. This allows for the calculation of the equivalent von Mises stress for each point, along with the cubic average of the entire von Mises stress field. This average serves as a reference to identify potential locations of abrupt stress variations. High stress values influence the cubic average, while low stress values have minimal impact.
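For clarity, a small sketch of this step is given below, assuming 2D plane stress components at each integration point; the cubic average is interpreted here as the cubic mean of the von Mises field, and all names are illustrative.

```python
import numpy as np

def von_mises_and_cubic_average(stress):
    """stress: array (n_points, 3) with [s_xx, s_yy, t_xy] per integration point.
    Returns the equivalent von Mises stress field and its cubic average."""
    sxx, syy, txy = stress[:, 0], stress[:, 1], stress[:, 2]
    vm = np.sqrt(sxx**2 - sxx * syy + syy**2 + 3.0 * txy**2)
    cubic_avg = (np.mean(vm**3)) ** (1.0 / 3.0)   # high stresses dominate this average
    return vm, cubic_avg
```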
A penalty system is implemented to designate a specific parameter, mass density in this case, to each integration point. The penalty values typically range between a value close to 0 and 1, where 1 signifies rewarded domains (solid material) and the near-zero value signifies penalised domains (removed material). The performance of the BESO procedure hinges on the reward ratio, RR, and the penalty ratio, PR.
The procedure identifies the RR·n_Q integration points with the highest equivalent (von Mises) stress and the PR·n_Q integration points with the lowest, where n_Q denotes the total number of integration points. The former are assigned a reward value of 1, while the latter receive a penalty value close to 0. A penalty parameter, α_i, is then assigned to each node. For each integration point, x_I, the closest nodes update their penalty values accordingly. Once all nodes are updated, the penalty parameters for each integration point are recalculated to filter and smooth the selected values. This is achieved using the interpolation α(x_I) = Σ_{i=1}^{n} φ_i(x_I) α_i, where n represents the number of nodes within the analysed influence domain of x_I, and φ(x_I) represents the shape function vector of the integration point x_I. After the first iteration, some α values might deviate from one, indicating the absence of material. This prompts the process to proceed to the next iteration. In iteration j, the penalty parameters, α^j(x_I), are used to modify the material constitutive matrix. Consequently, the penalised constitutive material matrix, c^j(x_I), is calculated.
In the subsequent iteration, j + 1, the stiffness matrix is calculated using the penalised constitutive matrix, c^j(x_I), instead of the original material matrix, c. The same steps are repeated, leading to a new equivalent von Mises stress field. This new field is used to calculate the updated cubic average stress, which is then compared with the value obtained in the previous iteration. If the condition holds true, the integration points whose von Mises stress exceeds the cubic average are rewarded with the reward value. The penalty parameters are then recalculated, updating the material domain for the next iteration.
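The reward/penalty pass described in the two previous paragraphs can be sketched as follows (an illustrative simplification with hypothetical names and an assumed near-zero penalty value, not the authors' implementation):

```python
import numpy as np

def beso_update(vm, alpha, RR, PR, reward=1.0, penalty=0.01):
    """One reward/penalty pass over the integration points.
    vm     : von Mises stress at each integration point
    alpha  : current penalty parameter of each integration point
    RR, PR : reward and penalty ratios (fractions of the total point count)"""
    nQ = vm.size
    order = np.argsort(vm)                  # points sorted from lowest to highest stress
    n_pen = int(PR * nQ)
    n_rew = int(RR * nQ)
    alpha = alpha.copy()
    alpha[order[:n_pen]] = penalty          # lowest-stressed points: material removed
    alpha[order[nQ - n_rew:nQ]] = reward    # highest-stressed points: solid material
    return alpha
```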
3.2. BESO-Inspired Bone Remodelling
This work employs a BESO-inspired bone remodelling analysis aiming at structural optimisation. Bone remodelling is a natural process where the bone’s shape adapts to applied stresses, improving its resistance to the imposed load. To predict this behaviour, researchers have developed stress-strain laws based on observations and experiments. These laws predict bone behaviour under various loading conditions and serve as the foundation for computational bone analysis. The selected model can considerably influence simulation results. Established models include Pauwels’ model [
28], along with more recent models that incorporate additional factors or different approaches, such as Cowin’s [
29] and Carter’s [
30] models.
Similar to Pauwels’ model, Carter’s model necessitates a mechanical stimulus to trigger bone remodelling. This stimulus is computed based on the effective stress, which incorporates both the local stress and bone density, along with the number of load cycles experienced by the bone (represented by the exponent
k). The magnitude of the stress has a direct impact on the remodelling stimulus; a higher stress level translates to a stronger stimulus for remodelling.
The model offers the flexibility to use either stress or strain energy as the optimisation criterion. Strain energy prioritises maximising the bone’s stiffness, its resistance to deformation, while stress prioritises optimising the material’s strength. When using strain energy, the model relates the apparent bone density to the local strain energy it experiences. This allows for an estimation of the bone’s density at remodelling equilibrium, a state where bone resorption and formation are balanced.
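As a rough, hedged illustration of this kind of stimulus (a simplified form of Carter's daily stress stimulus; the exponent value and the names below are assumptions for demonstration, not values taken from the original work):

```python
import numpy as np

def carter_stimulus(effective_stress, load_cycles, k=4.0):
    """Mechanical remodelling stimulus combining several load cases.
    effective_stress : effective stress of each load case
    load_cycles      : number of cycles of each load case
    k                : stress exponent weighting high-stress load cases"""
    return float(np.sum(load_cycles * np.asarray(effective_stress) ** k) ** (1.0 / k))
```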
In cases where multi-directional stress is applied to the bone, the model combines the effects of each stress pattern into a single, unified direction. This direction, known as the normal vector, signifies the ideal alignment for the bone’s internal support structures (trabeculae) to achieve optimal strength. To determine this ideal direction, the model incorporates the normal stress acting on the entire bone, as described in Equation (31).
For the present work, an original algorithm was developed and programmed. The algorithm was incorporated into the codes previously developed by the research team, which employ an adaptation of Carter’s model for meshless methods developed by Belinha et al. [
31]. This adaptation assumes that a mechanical stimulus, primarily represented by stress and potentially incorporating strain metrics, is the key driver of bone tissue remodelling. A comprehensive description of the entire model can be found in the work of Belinha [
7]. Through the bone remodelling procedure described above, the process itself acts as a topological optimisation algorithm. At each iteration, only the points with high or low strain energy density undergo density optimisation based on the mechanical stimulus. In order to update the critical variables during the optimisation process, the algorithm assumes a phenomenological law of bone tissue developed by Belinha [
7]. More information and a detailed description of the algorithm can be found in the literature [
7].
Figure 2 illustrates a schematic representation of the topology optimisation algorithm inspired by the bone remodelling process, as implemented in the RPIM method. As illustrated in the flowchart, the structural optimisation algorithm used in this work is iterative. In each iteration, for a given material distribution, the stiffness matrix is computed and the resulting field variables (displacements, strains, and stresses) are determined. Next, the PR·n_Q integration points with the lowest stress levels have their material densities reduced, consequently decreasing their mechanical properties. Here, n_Q represents the total number of integration points, and PR is the penalisation ratio defined at the start of the analysis. A similar procedure is applied to the RR·n_Q integration points with the highest stress levels, increasing their material densities and, in turn, their mechanical properties. The value RR is the reward ratio, also set at the beginning of the analysis.
After these updates, the material distribution changes, and a new set of field variables is obtained, leading to a fresh remodelling scenario. This iterative remodelling continues until the average density of the model falls below a user-defined threshold.
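Putting the pieces together, the iterative procedure of Figure 2 can be summarised by the sketch below; every routine name is a placeholder standing for the steps described above, not the authors' implementation.

```python
def remodelling_optimisation(model, RR, PR, rho_min):
    """Iterate the BESO-inspired bone remodelling procedure until the
    average apparent density falls below the user-defined threshold rho_min."""
    while model.average_density() > rho_min:
        u = model.solve_elastostatic()          # RPIM analysis with current densities
        vm = model.von_mises(u)                 # stress field at the integration points
        model.penalise_lowest(vm, PR)           # reduce density of low-stress points
        model.reward_highest(vm, RR)            # increase density of high-stress points
        model.update_constitutive_matrices()    # apply the phenomenological bone law
    return model
```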
Because the FEM and RPIM formulations differ mathematically, they produce similar but not identical field variables when applied to the same material model. Moreover, given that the iterative optimisation algorithm’s solution at each stage depends on the previous iteration’s results, there is no straightforward guarantee that FEM and RPIM analyses will converge to the same solution.