1. Introduction
The cost of calculation is fundamental in computational modelling and simulation, as it directly affects the efficiency of solving complex problems. Longer calculation times can yield higher precision using finer meshes, smaller time steps, or more detailed physics, but they also demand more computational resources and can become impractical. Striking an optimal balance between accuracy and computational cost is essential: overly simplified models may miss critical behaviour, while precise simulations can consume time and resources disproportionate to their added value. Identifying this optimum often involves evaluating the sensitivity of results to model resolution and determining the point where additional precision yields diminishing returns relative to the increased calculation time.
Solving complex, mathematically intractable problems requires balancing the asymmetric demands between the problem’s expectations (e.g., accuracy) and the computational resources available (e.g., time and space), and this balancing act creates a form of symmetry in the solution [
1]. Soft computing methods help achieve this symmetry by providing flexible, approximate approaches that trade off some precision for practical feasibility, enabling effective solutions where exact methods fail [
2,
3,
4].
Computational fluid dynamics (CFD) simulations, while highly accurate in modelling fluid flows, come with significant computational costs due to the complexity of solving the governing equations. Data-driven regression models provide an alternative by acting as surrogate models, capable of approximating CFD outcomes without the need for full-scale simulation. This makes them highly valuable for tasks such as sensitivity analysis, design optimisation, and uncertainty quantification, where numerous CFD simulations might otherwise be required. By learning the input–output relationships from pre-existing CFD datasets, these regression models can provide rapid predictions, enabling real-time decision-making in engineering applications. Additionally, regression models are particularly useful for exploring the impact of varying boundary conditions, geometric configurations, or material properties on fluid dynamics outcomes, thus significantly reducing computational expenses while retaining acceptable levels of accuracy [
5,
6].
These methods can be utilised in medical research [
7] or surgery planning. Our previous research [
8] is a good example, where renal artery branching was studied using CFD. During that research, computation time was the main obstacle; producing enough data points with adequate accuracy in a reasonable time was a considerable challenge. We assume that many researchers face the same problem and that a methodological guideline would be highly useful. This paper focuses on renal artery branching, but these methods can be applied to any circulatory problem.
Circulatory diseases are a major cause of death in developed countries, with hypertension greatly impacting patient outcomes. Hypertension is frequently associated with renal conditions and renal artery stenosis. Additionally, renal infarction—a critical complication—often arises from circulatory disturbances [
9]. The rejection of transplanted kidneys poses serious risks and is associated with elevated mortality rates. Importantly, renal perfusion pressure directly affects renal function, thereby serving as a crucial factor in transplant rejection. The relationship between blood pressure and renal artery branching angles and other geometric features has been demonstrated [
10,
11].
In our previous research [
8], we analysed the effect of the angle at which the renal artery branches from the aorta on the haemodynamic characteristics that influence renal function, utilising numerical flow simulations. The results indicated a range of optimal renal artery branching angles correlating with favourable haemodynamic values: a range of 58° to 88° was deemed optimal for pressure, flow velocity, and volumetric flow rates, while angles between 55° and 78° were optimal for turbulent kinetic energy.
Notably, our approach diverges from traditional mathematical models by identifying a range of angles rather than a specific optimal angle as a single number. A comparison with established models revealed that, according to the Murray model [
12], the optimal angle is identified as 81°. In comparison, the HK model [
13] suggests 59°, both values aligning closely within the identified optimal range. This suggests that the ideal renal artery branching angle could indeed be best conceptualised as a range, especially relevant in assessing the remaining kidney in living donors.
There are several modelling techniques that greatly decrease computation time with only a small loss of precision. A promising approach is utilising GPUs for calculations [
14] and using voxel-based segmentation of the geometry instead of a finite element mesh. Another simplifying step is using a 2D model, which retains adequate precision if the fluid dynamics variables do not depend strongly on one of the spatial axes.
A different and elegant solution would be using data-driven regression models, substituting the entire finite element analysis and avoiding the need to solve partial differential equations entirely. Data-driven regression models are computational tools designed to capture relationships between input (descriptor) and output (dependent) variables. Unlike analytical regression methods that rely on predefined equations based on physical or theoretical laws, data-driven regression models are built purely on observed data, allowing them to identify complex, non-linear, and multivariate relationships that may be difficult to find or express analytically. These models rely on statistical or machine learning methods to analyse and learn patterns within data, creating predictive frameworks for understanding input–output relationships. Their versatility lies in their ability to handle diverse datasets, whether structured or unstructured, while delivering insights that are interpretable or optimised for prediction accuracy [
15,
16].
Recent advances highlight the growing role of machine learning as surrogates for CFD models to overcome computational barriers. Rygiel et al. introduced an active learning strategy for cardiovascular flows that reduced the required number of CFD simulations by nearly 50% [
17], while Jnini et al. developed a physics-informed DeepONet surrogate enforcing mass conservation and achieving high accuracy with limited training data [
18]. A systematic overview by Wang et al. categorised ML-based CFD approaches into data-driven surrogates, physics-informed frameworks, and solver accelerations [
19]. Surrogate models have also been applied to external aerodynamics using graph-based architectures [
20], and polypropylene reactors through hybrid CFD–ML workflows [
21]. These recent studies show the usefulness of surrogate modelling in reducing computational costs while maintaining predictive reliability, aligning with the goals of this research.
The application of data-driven regression models in CFD begins with the generation of training data, which involves running a series of CFD simulations across a well-defined range of input conditions. These inputs could include variables such as flow velocities, geometric dimensions, or thermal properties, while outputs may involve key performance indicators like drag coefficients, pressure distributions, or temperature fields. The data is then pre-processed, normalised, and sometimes subjected to dimensionality reduction techniques to manage complexity and improve model efficiency. Once a dataset is prepared, machine learning algorithms are employed to train regression models, mapping the inputs to desired outputs. After validation on test datasets, these models can predict CFD outcomes for new conditions almost instantaneously. This predictive capability makes them ideal for iterative processes like optimisation and real-time system monitoring. Furthermore, as new CFD simulation data becomes available, regression models can be retrained to maintain their relevance and accuracy, ensuring their effectiveness in a dynamic simulation environment [
22].
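To make this pipeline concrete, the following minimal sketch illustrates the workflow on a synthetic stand-in dataset; the input ranges, model choice, and hyperparameters are illustrative assumptions, not the setup used in this study.

```python
# Minimal surrogate-modelling workflow sketch (illustrative only; the dataset
# and model choice are assumptions, not the study's actual setup).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Stand-in for a DOE table of CFD runs: inputs (angle [deg], pressure [mmHg],
# time [s]) and one output (e.g., outlet velocity). In practice these rows
# would come from CFD simulations.
X = rng.uniform([30.0, 80.0, 0.0], [120.0, 180.0, 0.6], size=(2000, 3))
y = 0.4 * np.sin(np.pi * X[:, 2] / 0.6) * (X[:, 0] / 120.0) \
    + 0.01 * rng.standard_normal(2000)

# Pre-process: hold out a test set, then normalise the inputs.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
scaler = MinMaxScaler().fit(X_train)

# Train a regression surrogate mapping inputs to the CFD output.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(scaler.transform(X_train), y_train)

# Near-instant predictions for unseen conditions, validated on the test set.
print("test R^2:", r2_score(y_test, model.predict(scaler.transform(X_test))))
```

As new simulation data arrive, the same `fit` step can simply be rerun on the extended table, which is the retraining loop described above.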
Polynomial regression, decision tree regression, support vector regression (SVR), and neural network regression are versatile techniques widely used for predictive modelling, including applications in computational fluid dynamics (CFD) and other complex systems. Polynomial regression extends the concept of linear regression by fitting non-linear relationships through polynomial terms, making it suitable for datasets with smooth, curved trends. However, it is prone to overfitting when higher-degree polynomials are used, particularly with noisy or sparse data [
23,
24,
25]. Decision tree regressors (DTRs), on the other hand, split the data into decision nodes based on feature thresholds, recursively partitioning the space to minimise variance within each region. Despite their use in regression, the outputs of DTRs are technically discrete values derived from the partitioning, where each region (leaf) corresponds to a specific output value. These models are interpretable and effective at capturing non-linear dependencies but may overfit [
26,
27,
28]. SVR offers a different approach by employing a margin of tolerance around the predicted values and minimising errors outside this margin. Using kernel functions such as radial basis functions or polynomials, SVR can handle non-linear relationships effectively, making it a robust choice for high-dimensional, noisy datasets with limited samples [
29,
30,
31]. Neural network regressors represent the most advanced option among these methods, utilising layers of interconnected neurons to learn complex, high-dimensional relationships between inputs and outputs. Neural networks, particularly deep learning models, excel in capturing intricate patterns in large datasets and are increasingly used for tasks like CFD simulations, where physics-informed neural networks can approximate solutions to fluid mechanics problems [
6,
32,
33]. Each of these regression methods has distinct strengths, with polynomial regression offering simplicity and interpretability; decision trees excelling in structured datasets; SVR balancing robustness and precision; and neural networks providing unmatched flexibility for high-dimensional, non-linear problems. Their choice depends on the specific dataset, the complexity of the relationships being modelled, and computational constraints.
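As an illustration of these trade-offs, the sketch below fits all four regressor families to the same toy dataset; the hyperparameters are arbitrary demonstration values, not the configurations used later in this paper.

```python
# Sketch comparing the four regression families on a common 1D toy problem.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(300, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(300)

models = {
    "polynomial (degree 5)": make_pipeline(PolynomialFeatures(degree=5),
                                           LinearRegression()),
    "decision tree": DecisionTreeRegressor(max_depth=6),
    "SVR (RBF kernel)": make_pipeline(StandardScaler(),
                                      SVR(kernel="rbf", C=10.0)),
    "neural network": make_pipeline(StandardScaler(),
                                    MLPRegressor((64, 64), max_iter=5000,
                                                 random_state=1)),
}
# Training-set R^2 as a rough indicator of how each family fits the curve.
for name, model in models.items():
    print(name, "R^2 =", round(model.fit(X, y).score(X, y), 3))
```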
This paper provides an aid for researchers building surrogate models and optimising simulations with computation time in focus, while keeping precision in mind. Our results suggest that DNN surrogates may reproduce CFD outputs with reasonable accuracy and with orders-of-magnitude reductions in computational time, supporting efficient analysis across large design spaces.
2. Materials and Methods
In this study, we present a comparison of haemodynamic simulations in renal artery bifurcations. We investigate high-fidelity CFD models implemented in Ansys Fluent, COMSOL (2D and 3D), and GPU-accelerated voxel simulations in Ansys Discovery. To overcome computational costs for extensive parametric sweeps, we train deep neural network (DNN) surrogate models using both PyTorch 2.8 and COMSOL Multiphysics 6.3’s built-in DNN tools. The models predict time-dependent outputs—velocity, pressure, and turbulent kinetic energy—as functions of branching angles, blood pressure, and time steps.
In our previous work [
8], we utilised ANSYS 2020 R2 Fluent software, conducting simulations based on a model of the renal artery and its environment. The models were idealised based on measurements. Several geometric differences could distort the results in real models based on radiology imaging [
34]. Moreover, the model was created so that its dimensions can be used as variable parameters.
In our previous research, the left renal artery angle was varied in ten-degree increments while the right renal artery angle was kept constant. The simulation series focused on determining the proposed optimal angulation range. The methodology shows ample potential for development through advancements in technology, new software applications, and mathematical methodologies. The previously published data are used as a reference in this study: the results obtained after modelling simplifications and with surrogate models are compared against the results of our previous work.
The examined methods were the voxel-based GPU-accelerated calculation of Ansys Discovery, 2D model calculation in COMSOL Multiphysics, and two different neural networks used as surrogate models. One model was COMSOL’s native deep neural network, and another was developed by our research team.
2.1. Simulation in ANSYS Discovery
ANSYS Discovery uses a GPU-accelerated simulation engine that leverages voxel-based meshing to rapidly compute physics simulations. Instead of generating traditional finite element meshes, Discovery discretises the geometry into a regular 3D grid of voxels (volumetric pixels), which simplifies and speeds up the setup and computation process. This voxel-based approach is highly parallelisable, making it ideal for execution on modern GPUs. By using GPU computing, Discovery achieves near real-time simulation feedback, enabling users to quickly explore multiple design scenarios and iterate faster with immediate visual insights into structural, thermal, and fluid behaviour.
In this study, a parametric model of the aorta and the renal arteries was used in Ansys Discovery. The branching angles were given as variables; thus, there was no need to build new models for every branching angle, and the simulation series could be run in one batch. As Discovery does not allow transient simulations, the inlet velocity could not be given as a curve, as was done in Ansys Fluent. Since the maximum outlet values were inspected in every analysis, the peak velocity value of 0.4 m/s was used at the inlet. The aorta and renal artery outlets were pressure outlets.
2.2. Simulation in COMSOL Multiphysics
To reduce calculation time, a good practice is to simplify the geometry. A very effective method is to use a 2D model. In a planar (2D) model, COMSOL Multiphysics discretises the geometry into a mesh of 2D elements. COMSOL supports laminar and turbulent flows in 2D, using appropriate turbulence models. This planar approach allows for efficient simulation of flow behaviour in geometries where the third dimension is negligible or symmetrical.
A spatial simulation would require computational resources that are a multiple of those needed for a planar simulation. Due to these constraints, we estimate its expected runtime based on geometric and numerical considerations. In a 2D simulation, each variable is defined over a surface using two coordinates, whereas in 3D, the same variables span a volume with three coordinates; consequently, the number of degrees of freedom increases significantly. If a mesh of characteristic resolution $N$ contains on the order of $N^2$ elements in 2D and $N^3$ elements in 3D, and the runtime in 2D is denoted by $x$, the corresponding 3D runtime can be approximated by multiplying $x$ by a factor between $x^{1/2}$ and $x$, depending on the dimensional scaling model used. The estimated runtime for a 3D simulation can therefore be roughly expressed as a function of the 2D runtime $x$, for example $t_{3\mathrm{D}} \approx x^{3/2}$, or in a more general form as $t_{3\mathrm{D}} \approx C\,x^{\alpha}$ with $3/2 \le \alpha \le 2$.
This calculation method is highly speculative; thus, it was validated by running a 3D and a 2D simulation with a single set of parameters and identical settings and comparing the computational times. Moreover, the element count is much higher in 3D than in 2D, further increasing the calculation time. The ratio of these computational times allows us to generalise to simulation series, whether by parametric sweep or by design-of-experiment table generation for surrogate model training.
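As a quick numerical check of this scaling argument, the single-case runtimes reported in the Discussion (425 s in 2D, 68,383 s in 3D) imply an effective exponent between the two estimates above:

```python
import math

t2d = 425.0      # measured single-case 2D runtime [s] (see Discussion)
t3d = 68383.0    # measured single-case 3D runtime [s]

# Effective exponent alpha in t_3D ~ t_2D^alpha implied by the measurements.
alpha = math.log(t3d) / math.log(t2d)
print(f"ratio: {t3d / t2d:.1f}x, effective exponent: {alpha:.2f}")
# -> ratio ~ 160.9x, alpha ~ 1.84, between the x^(3/2) and x^2 estimates.
```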
It is important to note, however, that while 2D simulations can yield accurate results in scenarios where the third spatial dimension is negligible, they become insufficient when three-dimensional effects are significant. In such cases, generating a data file with similar resolution becomes unrealistic due to prohibitive computational demands.
The ratio between the artery opening lengths in the 2D model was the same as the opening surface ratios in the 3D model. In the COMSOL simulations, a blood model with 1060 kg/m³ density and 0.003 Pa·s viscosity was used; the material model was simplified to be Newtonian, as in every simulation in this study. This simplification is justified, as non-Newtonian behaviour is not pronounced at high shear rates [
35]. However, in simulations that focus on vascular wall shear stress, non-Newtonian models will be more appropriate [
36].
The simulation was time-dependent, with a fixed time step of 0.025 s over a 0.6 s window (25 stored time points). Finer time discretisation was tested in our previous research, where 25 time steps proved sufficiently precise; using more time steps would heavily increase the calculation time without adding notable precision.
The vascular wall was treated as a no-slip boundary. The aorta and renal artery outlets were pressure outlets. The inlet boundary on the aorta was a velocity–time curve of a cardiac cycle [
11,
37].
Figure 1 shows the inlet velocity curve.
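The exact waveform is taken from the literature [11,37]; the snippet below generates only a hypothetical stand-in pulse, anchored to the 0.4 m/s peak velocity mentioned earlier, to illustrate how such a curve can be sampled at the solver's 0.025 s time steps.

```python
# Hypothetical inlet waveform stand-in (the actual curve is a measured
# cardiac-cycle profile; only the 0.4 m/s peak is grounded in the text).
import numpy as np

t = np.arange(25) * 0.025            # solver time points over the cycle [s]
v_peak = 0.4                         # peak systolic inlet velocity [m/s]
t_sys = 0.12                         # assumed timing of the systolic peak [s]
v_inlet = v_peak * np.exp(-((t - t_sys) / 0.07) ** 2)  # smooth systolic pulse
print(np.round(v_inlet, 3))
```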
The blood flow was modelled as incompressible and governed by the Reynolds-Averaged Navier–Stokes (RANS) equations. Conservation of mass was enforced through the continuity equation, while momentum conservation followed the Navier–Stokes equations in which the stress tensor included both molecular and turbulent viscosity. Turbulence was modelled using the realisable k-ε model, which solves additional transport equations for the turbulent kinetic energy (
k) and its dissipation rate (
ε) [
38,
39]. The model relates turbulent viscosity to k and ε, with coefficients that adapt dynamically to the local strain rate. This improves stability and predictive accuracy compared to the standard k-ε model, especially in flows with strong curvature and separation.
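For reference, this closure relation takes the standard textbook form (quoted here as background, not transcribed from the study's solver settings):

$$ \mu_t = \rho\, C_\mu \frac{k^2}{\varepsilon}, $$

where $C_\mu \approx 0.09$ is a constant in the standard k-ε model, whereas the realisable variant computes $C_\mu$ from the local strain and rotation rates.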
Different numerical solvers were applied depending on the platform. In COMSOL Multiphysics, the generalised-α time-stepping scheme was employed with second-order accuracy, and the linear systems were solved using the Generalised Minimum Residual (GMRES) iterative method. In Ansys Fluent, a second-order implicit time integration method was used together with the coupled pressure–velocity scheme. Ansys Discovery employed a GPU-accelerated lattice-Boltzmann-based solver, which enables rapid voxel-based flow computations but at reduced fidelity compared with finite element or finite volume approaches.
This combination allowed for benchmarking across platforms: Fluent provided the reference solution with high accuracy, COMSOL offered flexible finite element formulations in both 2D and 3D, and Discovery enabled accelerated prototyping.
For both the 2D and 3D COMSOL simulations, the fluid discretisation was increased to second order to facilitate convergence.
In the 2D model, an unstructured triangular mesh was generated with element sizes ranging from 0.063 mm to 2.23 mm, a maximum growth rate of 1.2, and a curvature factor of 0.3. Edge refinement was applied along the walls and boundaries, with element sizes between 0.00955 mm and 0.828 mm, while corner refinement enforced a minimum angle of 240° between boundaries to maintain the smoothness of curved walls. Five inflation layers were added at the walls with a stretching factor of 1.2. The resulting finite element mesh contained 6288 elements.
For the 3D model, an unstructured tetrahedral mesh was also used, with element sizes ranging from 0.137 mm to 1.27 mm, a maximum growth rate of 1.2, and a curvature factor of 0.3. Face refinement was applied to boundaries and walls, with element sizes between 0.00488 mm and 1.27 mm at the boundaries, and between 0.00685 mm and 0.446 mm at the walls. Corner refinement again enforced a minimum angle of 240° between surfaces, ensuring smooth curvature representation. Five boundary layers were applied with a stretching factor of 1.2. The final mesh consisted of 2,228,651 elements, which is approximately 354 times larger than the 2D case.
Figure 2 shows the dimensioned 3D and 2D models and the 3D and 2D meshes on panels (a), (b), (c), and (d), respectively.
2.3. Grid Study
For both 2D and 3D meshes, a mesh refinement study was performed following the Richardson extrapolation and Grid Convergence Index (GCI) methodology recommended by ASME. Three systematically refined meshes (M1–M3) with a refinement ratio r = 1.4 were employed. The quantity of interest was the peak velocity magnitude at the renal outlet.
The observed order of convergence $p$ was computed as follows:

$$ p = \frac{\ln\left|\dfrac{u_3 - u_2}{u_2 - u_1}\right|}{\ln r}, $$

where $u_3$, $u_2$, and $u_1$ denote the velocity values obtained on the coarse, medium, and fine meshes, respectively.

The Richardson extrapolated value was estimated as follows:

$$ u_{\mathrm{ext}} = u_1 + \frac{u_1 - u_2}{r^{p} - 1}. $$

The relative difference between the fine and medium meshes was calculated as follows:

$$ e_{21} = \left|\frac{u_1 - u_2}{u_1}\right|. $$

Finally, the Grid Convergence Index between the fine and medium meshes was evaluated as follows:

$$ \mathrm{GCI}_{21} = \frac{F_s \, e_{21}}{r^{p} - 1}, $$

with a safety factor of $F_s = 1.25$.
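These four formulas can be evaluated directly; the sketch below implements them in Python, with hypothetical peak-velocity values standing in for the actual mesh-study data.

```python
import math

def grid_convergence(u_fine, u_medium, u_coarse, r=1.4, fs=1.25):
    """Richardson extrapolation and GCI for three meshes, refinement ratio r."""
    # Observed order of convergence p.
    p = math.log(abs((u_coarse - u_medium) / (u_medium - u_fine))) / math.log(r)
    # Richardson-extrapolated value.
    u_ext = u_fine + (u_fine - u_medium) / (r**p - 1.0)
    # Relative fine-to-medium difference.
    e21 = abs((u_fine - u_medium) / u_fine)
    # Grid Convergence Index with safety factor fs.
    gci = fs * e21 / (r**p - 1.0)
    return p, u_ext, e21, gci

# Hypothetical peak velocities on the fine, medium, and coarse meshes:
p, u_ext, e21, gci = grid_convergence(0.4220, 0.4217, 0.4208)
print(f"p = {p:.2f}, u_ext = {u_ext:.4f}, "
      f"e21 = {100 * e21:.3f}%, GCI = {100 * gci:.3f}%")
```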
A mesh refinement study was carried out for both 2D and 3D simulations. In 2D, the observed order of convergence was p = 3.09, while in 3D it was p = 3.17, both consistent with the theoretical expectation for quadratic elements. Richardson extrapolation gave extrapolated peak velocities of $u_{\mathrm{ext}}$ = … (2D) and $u_{\mathrm{ext}}$ = 0.4221 (3D). The fine-to-medium mesh differences were small (0.364% and 0.076%, respectively), with corresponding GCI values of 0.25% and 0.049%. These results confirm that both the 2D and 3D models are sufficiently refined, and the peak velocity magnitudes can be considered mesh independent.
In our previous study, volume flow rate, velocity magnitude, turbulent kinetic energy, and total pressure were measured. Since the primary focus of this study is not the flow characteristics of the renal artery but the differences and similarities of the calculation methods, the results are demonstrated through velocity curves only, to keep the paper concise.
2.4. Deep Neural Network Training in PyTorch
Surrogate modelling with deep neural networks (DNNs) was introduced to overcome the computational difficulties of repeated CFD evaluations. Five sets of design of experiment (DOE) data were produced with an increasing number of parameter combinations in order to evaluate the data requirements of the DNN functions. The DOE tables consist of 1300, 2925, 6500, 13,975, and 83,075 lines. The calculation times were measured and extrapolated for 3D, so the cost of calculation for the DNN training could be evaluated.
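A DOE table of this kind can be assembled by expanding every input combination over the stored time points of one cardiac cycle (for example, 52 angle/pressure combinations times 25 time points gives the 1300-line table). The grids below are illustrative placeholders, not the exact levels used in this study.

```python
# Sketch of DOE table assembly: each input combination is expanded over the
# 25 stored time points of one cardiac cycle. Grids are illustrative.
from itertools import product
import numpy as np

ang_l = np.arange(30.0, 121.0, 10.0)   # left branching angle [deg]
ang_r = [70.0]                          # right branching angle [deg] (fixed)
sys_p = [110.0, 130.0]                  # systolic pressure [mmHg]
dia_p = [70.0, 90.0]                    # diastolic pressure [mmHg]
t = np.arange(25) * 0.025               # time points [s]

rows = [(al, ar, sp, dp, ti)
        for al, ar, sp, dp in product(ang_l, ang_r, sys_p, dia_p)
        for ti in t]
print(len(rows), "DOE lines")           # 10 * 1 * 2 * 2 * 25 = 1000 here
```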
As an aspect of comparison, a fully connected Deep Feedforward Neural Network (DFNN) [
33] was used as a supervised learning technique with the Rectified Linear Unit (ReLU) activation function. The DFNN was tuned to obtain reliable predictions while avoiding overfitting; one representative configuration is shown below.
Depending on the dataset, training was carried out separately to determine the most accurate values of optimal pressure, flow velocity, and turbulent kinetic energy on the given interval of renal artery branching angles. Simulations were performed for each of these target values individually, and the DFNN was trained on each unique dataset one by one, so the output layer consists of a single neuron.
To create the DFNN, the PyTorch 2.8 package for Python 3 was applied, which is widely used to define neural networks. Based on the simulations, the input layer had 5 nodes: left-side renal artery angulation (Ang_L [°]); right-side renal artery angulation (Ang_R [°]); systolic blood pressure (Sys [mmHg]); diastolic blood pressure (Dia [mmHg]); and time (t [s]). The input data were normalised by Min–Max scaling in the preprocessing phase. The dataset was segmented into training (80%) and testing (20%) subsets, and 10% of the training set was set aside as validation data.
The model was trained with a batch size of 64 and a learning rate of 0.001. To reduce the training time, an early-stopping criterion was also tested; although it seemed at first sight an advantageous extension of the network, it worked well on only a few of the datasets, so it was removed from the final code. The widely used Adaptive Moment Estimation (Adam) optimiser was chosen, and the loss function was defined as the Mean Squared Error (MSE).
During tuning, several DFNN structures were tested to find the one that best fits the evaluations. The modifications concerned the number of neurons, the number of hidden layers, the dropout rate, and the learning rate. These results were evaluated based on the average value of the coefficient of determination ($R^2$); the lowest average $R^2$ resulted from the dataset containing the velocity simulations. The average $R^2$ was calculated over groups within the dataset: based on the renal artery branching angles (the Ang_L [°] column of the dataset), groups were formed that each represent one cardiac cycle at a given angle. The grouped simulation and prediction values were compared to obtain a unique $R^2$ value for each group. Each model was trained through 100 epochs for equal comparison.
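A minimal PyTorch sketch of this setup is given below, with random stand-in data in place of the DOE/CFD table; the layer widths follow the 5-256-128-64-64-1 structure reported in Section 2.5, and everything else matches the hyperparameters stated above.

```python
# Minimal PyTorch sketch of the DFNN setup (5 inputs, 1 output, ReLU, Adam,
# MSE loss, batch size 64, learning rate 0.001). Random data stands in for
# the DOE/CFD table.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset, random_split

torch.manual_seed(0)

# Stand-in dataset: [Ang_L, Ang_R, Sys, Dia, t] -> one output quantity,
# already Min-Max scaled to [0, 1].
X = torch.rand(6500, 5)
y = torch.rand(6500, 1)

dataset = TensorDataset(X, y)
n_test = int(0.2 * len(dataset))            # 20% test split
n_val = int(0.1 * (len(dataset) - n_test))  # 10% of training as validation
n_train = len(dataset) - n_test - n_val
train_set, val_set, test_set = random_split(dataset, [n_train, n_val, n_test])

# Fully connected feedforward network, 5-256-128-64-64-1 with ReLU.
model = nn.Sequential(
    nn.Linear(5, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
val_loader = DataLoader(val_set, batch_size=64)

for epoch in range(100):
    model.train()
    for xb, yb in train_loader:
        optimiser.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimiser.step()
    # Monitor validation loss each epoch to guard against overfitting.
    model.eval()
    with torch.no_grad():
        val_loss = sum(loss_fn(model(xb), yb).item() for xb, yb in val_loader)
    if (epoch + 1) % 20 == 0:
        print(f"epoch {epoch + 1}: validation loss {val_loss:.4f}")
```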
2.5. Deep Neural Network Training in COMSOL Multiphysics
COMSOL Multiphysics 6.3 has a built-in DNN training feature, which was used to train the models based on the design of experiment (DOE) data produced by simulation series. For each output parameter, a different DOE table was created; thus, a separate DNN model was trained for each parameter. The structure of the models was the same as the previous method, with the same input and output nodes. In the DOE table, the quantities of interest were velocity magnitude, pressure, and turbulent kinetic energy for both left- and right-side renal artery outlets, giving a total of 6 quantities of interest. The input parameters were the same as in the previous method: branching angles of the left and right renal arteries; the systolic and diastolic blood pressures; and time, which was determined by time steps taken by the solver.
Just like in the previous method, 10% of the training set was set aside as validation data. The model was trained with a batch size of 128 and a learning rate of 0.001. In COMSOL, a lower batch size seriously slowed the calculation and produced higher losses, while the default batch size of 512 gave very high fluctuations in the loss rates, so 128 was selected as an optimal value. The optimiser (Adam) and loss function (MSE) were also the same. The structure of the networks was 5-256-128-64-64-1, based on the results of the previous method. The most important difference between the methods was the epoch count: while the PyTorch method was functional with only 200 epochs, COMSOL required 5000 epochs. This difference affects the calculation times, as shown in the results.
4. Discussion
This study presents a comparative analysis of various computational approaches for simulating haemodynamics in the renal arteries, focusing on balancing simulation accuracy and computational cost. Three main computational fluid dynamics (CFD) frameworks—ANSYS Fluent, COMSOL Multiphysics (2D and 3D), and ANSYS Discovery—along with data-driven deep neural network (DNN) surrogate models, were investigated.
Among the physics-based simulations, ANSYS Fluent yielded results most consistent with those found in the literature [
44], particularly during the systolic phase of the cardiac cycle. Despite lower absolute velocity values—which may be attributed to inlet conditions or inherent differences in solver settings—Fluent’s high fidelity supports its validity in assessing the haemodynamic effects of renal artery geometry. Thus, in this study, the Fluent results are used as the baseline, as they were previously published and shown to be sufficiently accurate and consistent with measured data [
44].
Comparatively, COMSOL’s 2D model demonstrated reasonable agreement during the systolic phase, although inaccuracy was observed in the diastolic phase. This discrepancy is tolerable given the diastolic phase’s reduced significance in assessing peak flow phenomena.
As seen in
Figure 5, panel (a), the COMSOL 3D model revealed unexpectedly higher velocity predictions than Fluent despite identical geometries and settings, likely originating from mesh density differences or solver-specific turbulence modelling. Importantly, computational costs in 3D were dramatically higher, with runtimes exceeding those in 2D by a factor of over 160. This finding underscores the critical importance of model dimensionality in simulation planning.
To address the high computational demands, surrogate models using DNNs were implemented. With adequate training (≥6500 samples), the DNNs predicted velocity and pressure with high $R^2$ values (>0.98), providing rapid inference at negligible computational cost. Interestingly, turbulent kinetic energy (TKE) predictions showed low loss values but inconsistent reliability, indicating that larger datasets or more tailored network architectures might be required.
It is important to note that the inaccuracy of the FEA or DNN models is within measurement inaccuracy limits. For example, ultrasonic Doppler measurements [
45] produce fuzzy diagrams rather than a precise curve, and even MR measurements have some deviation [
46].
The results suggest that 6500 samples represent a reasonable compromise, achieving reliability similar to the 13,975-sample models while requiring less training time and fewer computational resources. Oddly, the losses increased when training an 83,075-sample model. Figure 11 demonstrates the time cost–quality relations. These results suggest that increasing the sample count beyond 6500 does not yield better results; for the COMSOL DNN, the losses are even higher. The velocity losses are similar for the COMSOL and PyTorch DNNs, pressure is markedly better with the COMSOL DNN, and TKE is more precise with the PyTorch DNN. In terms of the prediction curve fitting the original, PyTorch appears better, as most $R^2$ values are close to 1 above a certain sample count, whereas COMSOL $R^2$ values are around 1 only for the velocity diagrams.
In addition to velocity, turbulent kinetic energy (TKE) was also compared between CFD and DNN predictions. The results indicate that while the DNN initially struggled to reproduce TKE with limited training data, its accuracy improved as the dataset size increased. This behaviour is consistent with the higher complexity of turbulent flow fields, which typically require larger datasets to capture small-scale variations.
The inclusion of TKE highlights both the potential and the current limitations of surrogate modelling: whereas velocity can be predicted accurately even with smaller training sets, more data are required to obtain robust predictions for turbulence-related quantities. Another solution for accurate turbulence prediction would be to use physics-informed DNN models, as Wang et al. and Raissi et al. did [
5,
6].
In extrapolated 3D simulations, where each sample requires orders of magnitude more time, this reduction translates to significant savings. Accordingly, surrogate models become particularly valuable when exploring extensive parameter spaces, enabling surgical planning, real-time assessment, or app-based predictive tools using COMSOL App Builder.
An important question arises: after how many simulations does training a neural network-based surrogate model become more cost-effective than performing additional FEA analyses? The total simulation-plus-training time for the 6500-sample DNN model covering all six parameters was 32,085 s for PyTorch and 39,560 s for COMSOL. Compared with the 3D simulation, which took 68,383 s for a single case, the cost-effectiveness is obvious. For the 2D simulation, the cost-effectiveness calculation is more meaningful: a single case took 425 s without preparation time. Counting computation time only, the DNNs become cost-effective after 93 simulations for COMSOL and 76 for PyTorch. These models are therefore worth using if at least that number of cases, for example surgical planning scenarios, is expected. However, preparation times add to the single-case times, so these break-even counts are probably lower in reality.
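The break-even arithmetic can be verified directly from the quoted runtimes:

```python
# Break-even check using the runtimes reported above (seconds).
import math

train_total = {"PyTorch": 32085.0, "COMSOL": 39560.0}  # simulate + train
t_2d_case = 425.0     # single 2D CFD case
t_3d_case = 68383.0   # single 3D CFD case

for name, t_train in train_total.items():
    ratio_2d = t_train / t_2d_case
    ratio_3d = t_train / t_3d_case
    print(f"{name}: {ratio_2d:.1f} 2D cases "
          f"(breaks even after ~{math.ceil(ratio_2d)}), "
          f"{ratio_3d:.2f} 3D cases")
# -> PyTorch ~75.5 2D cases (~76), COMSOL ~93.1 (~93-94);
#    both well under the cost of a single 3D case.
```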