1. Introduction
The study of boiling is relevant to many applications that require efficient heat exchange, where boiling most often serves as a substantial energy sink. The principal quantity of interest is the heat transfer between the solid and the fluid, and theoretical models and empirical laws have been developed to assess it. In 1934, using a heated wire immersed in a bath of water, Nukiyama [1] was the first to describe the different modes of boiling by plotting heat flux versus temperature: nucleation, transition boiling, and film boiling were observed. A great number of experiments in various fluids and conditions were then performed to improve the understanding of each mode. Pioneering works [2,3,4,5,6,7], and more recent ones [8,9], have proposed different correlations based on physical arguments to estimate heat flux laws. Some analytic models have also been proposed [10]. All of these works provide useful analytical laws; however, their precision remains limited when confronted with industrial configurations, as they are often case-dependent.
Film boiling is of particular interest in the configuration of quenching in liquid quenchants, where boiling modes are generally not fully controlled. Different quenching techniques exist. The gentler ones use forced convection of neutral gases, such as nitrogen. More vigorous ones use liquid quenchants, such as oil or even water. Metallic parts are either placed on grids plunged inside a still pool, or they are dropped by gravity and collected at the bottom. Forced convection can also be employed to further increase the cooling speed by removing the vapor from the surface of the metallic part (see [11,12] for more information on the techniques and challenges of the quenching process). When the vaporization temperature of the liquid quenchant is below the initial temperature of the metallic part, that temperature is usually also higher than the Leidenfrost temperature, above which a vapor film entirely covers the solid surface. This film boiling mode governs the first stage of cooling during quenching, and it is thus important to understand. The vapor film is, in reality, never truly stable, and interface waves exist whatever the orientation of the heater. A pure conduction model in the vapor film is usually taken as a first estimation of the film boiling heat transfer; it has been shown that this estimation underestimates the heat flux by about 25% for vertical film boiling [13]. This demonstrates that convection inside the film plays a role, but that its influence is moderate compared to conduction. The vapor film thickness is therefore a key parameter to evaluate, as it is directly linked to the conductive fluxes.
For horizontal film boiling, Berenson formulated a theory based on Rayleigh–Taylor instabilities, which yields a heat flux proportional to the overheating raised to the power of 3/4 [3]. Klimenko extended these notions [14], introducing analogous considerations but also incorporating a finite vapor film thickness into the Rayleigh–Taylor instability framework. This modification gives rise to a distinct heat flux scaling, with a different power of the overheating.
For vertical film boiling, boundary layer theories with pure conduction inside the vapor film have been developed for saturated conditions [10], with subcooling [13,15,16,17,18,19], and with forced convection [9,20]. They predict a film thickness that scales with the height to the power of 1/4, which leads to a heat flux that scales with the height to the power of −1/4. These models are valuable as a first approach; however, their reliability is limited, as they are based on very simplified scenarios.
Furthermore, the mean film thickness is not a convenient parameter, as it is not necessarily an increasing function of the heat flux [21]. Kelvin–Helmholtz instabilities are generally present at the liquid–vapor interface. These waves create locally very small film thicknesses that drastically enhance the heat transfer. Meduri et al. [9], inspired by Berenson's model for the horizontal configuration, suggested a correlation that accounts for these instabilities through a modified power-law dependence on the overheating. Subcooling and forced convection were also taken into account as multiplicative factors, as they happen to be important factors in the heat transfer magnitude. Honda et al. [22] applied this vision to the configuration of film boiling around a sphere. By combining the resolution of an integrated form of the boundary layer equations in the vapor with that of a wave equation in the liquid, they obtained a more accurate estimation of the temperature profiles and heat fluxes over the sphere.
In addition to the solid temperature, the control parameters of the film boiling mode around an immersed solid are its material properties and geometry, as well as the subcooling and the fluid characteristics. The Leidenfrost temperature is also influenced by factors such as wettability and surface roughness. The scope of this study was narrowed to the impact of solid temperature and subcooling.
The influence of the temperature of the liquid far from the heater is studied through the subcooling, i.e., the difference between the saturation temperature and the bath temperature. Large subcooling tends to increase the heat flux, whatever the boiling mode. This is not surprising, as the liquid is expected to consume part of the heat delivered by the heater. The larger impact, however, lies in the values of the heat flux and of the temperature at the minimum heat flux (MHF), which both increase with subcooling [18]. In the context of quenching, this implies that the lower the bath temperature, the quicker the vapor film rupture occurs. For example, in the experiment of Ebrahim et al. [23], where an Inconel-600 cylinder was quenched, the overheating at which the vapor film broke increased from 250 to 450 K as the subcooling was increased from 2 K. This observation has been confirmed by other authors [19,24], and Ikkene et al. [25] have reported Leidenfrost temperatures above 800 °C for very high subcoolings. Jouhara et al. [19] observed that the nature of the vapor film breakage differed with subcooling: wetting front propagation occurred exclusively under moderate subcooling, whereas, at exceptionally high subcooling, a rapid and explosive wetting phenomenon was observed, leading to instantaneous wetting of the entire heater surface.
The diversity of the mechanisms involved in boiling led us to narrow the scope of our work to an initial benchmark. The rationale was to avoid tackling the full complexity of boiling at first, while still capturing some fundamental features to be validated. Consequently, the choice was made to experimentally study the quenching in water of a small nickel sphere of 1 cm diameter. Such a small scale allows for enhanced precision in describing the interface, and it helps sustain the film boiling mode through capillary forces. Moreover, the spherical geometry does not present any sharp edges that would offer preferential wetting points. Nickel was chosen as the material for its large thermal diffusivity, which ensures temperature homogeneity; its large specific heat capacity, which makes boiling last long; its moderate oxidation at high temperatures under air; its melting point above the maximum studied temperatures; and the absence of phase transformation over this temperature range. Water was chosen because the majority of industrial boiling processes use it, making it the most extensively studied medium in boiling research. High bath temperatures were also considered to improve the vapor film lifetime. At such a small scale, and using nickel, significant temperature gradients within the sphere were avoided; as a result, the assumption of a uniform temperature distribution remained valid, effectively sidestepping intricate thermal variations within the solid material (with a Biot number reaching a maximum of approximately 0.2). Finally, this choice was also motivated by more practical experimental reasons, as small scales involve lower energies that are easier to handle.
The quenching of small spheres has already been studied in the literature, and some conclusions are known. Dhir et al. [18] studied the quenching of 19 and 24 mm spheres of various materials. They showed that the larger the subcooling, the higher the Leidenfrost temperature (the latter was measured around 200 °C and above for low subcooling). The associated heat fluxes before the vapor film collapse were around 5 × 10⁴ W m⁻², and they also improved with subcooling. They showed that the flux varied as a power law of the overheating in the absence of subcooling and forced convection. Radiation effects were shown to represent, at most, 10% of the total heat flux. Finally, they observed that the Leidenfrost temperature was sensitive to the surface condition, which was modified by oxidation during the heating. Jouhara et al. [19] reached the same conclusions in their study of copper parts of different shapes. The larger the sphere diameter, the higher the heat fluxes. They observed high-frequency transient wetting phenomena at the top of the spheres during film boiling. In both studies, the film boiling was turbulent for low subcooling, with mean vapor thicknesses of the order of a millimeter. Both authors confirmed that the temperature was mostly homogeneous and that the temperature variation was proportional to the heat fluxes (this was also shown by Burns et al. [26]). Other studies have provided measurements of vapor film thicknesses and arguments for Leidenfrost temperature prediction, as summarized by Honda et al. [22].
The aim of this work was to investigate sphere quenching experiments at different bath temperatures in order to capture the main features of such systems and to analyze them with physical arguments. Those arguments were then challenged with the help of a numerical simulation. Given the challenging characteristics of a large density ratio, low viscosity in both media, and a large latent heat of vaporization, simulating the phenomenon is complex. To address this, a dedicated numerical framework, implemented in an in-house stabilized finite element solver, was used. This framework not only supports the validation of our physical interpretation, but it also assesses the suitability of the model and numerical code for such simulations. To the best of the authors' knowledge, this approach has never been reported in the literature, and it is expected to provide valuable insights into the overall hydrodynamics of the system for the low subcoolings considered here.
3. Experimental Method
3.1. Estimation of the Cooling Curve
Since no thermocouple was available to directly measure the sphere's instantaneous temperature, this temperature was estimated from measurements of the quenching time. The idea was to consider that, whatever the initial condition, the sphere temperature could be recovered backward from the remaining quench time. This estimation was based on two assumptions.
The first was that the sphere temperature was uniform. This hypothesis can be validated by computing the diffusion time $\tau_d$ inside the sphere associated with half of its diameter $D$:

$$\tau_d = \frac{(D/2)^2}{\alpha_s}, \qquad (1)$$

where $\alpha_s$ is the thermal diffusivity of the sphere, whose value for nickel is 2.39 × 10⁻⁵ m² s⁻¹. This leads to a characteristic time of the order of one second, which is short compared to the experimental quenching times. Another way to evaluate this assumption is by considering the Biot number. In our configuration, it was approximately 10⁻², indicating that conduction within the solid largely dominates the external heat transfer. Consequently, it was reasonable to assume a uniform temperature distribution within the sphere. This assumption further enabled us to formulate the following energy conservation equation for the entire sphere:
$$\rho_s\, c_{p,s}\, V\, \frac{dT_s}{dt} = -\,\bar q\, S, \qquad (2)$$

where $dT_s/dt$ is the cooling rate of the sphere, $\bar q$ is the mean heat flux around the sphere, $S$ is its surface, $V$ is its volume, and $\rho_s$ and $c_{p,s}$ are the density and specific heat capacity of the solid.
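As a quick numerical check of these two assumptions, the following MATLAB sketch evaluates the diffusion time and the Biot number; the nickel conductivity and the film-boiling heat transfer coefficient are assumed order-of-magnitude values, not taken from this work:

```matlab
% Lumped-capacitance check for the nickel sphere.
D     = 1e-2;       % sphere diameter [m]
alpha = 2.39e-5;    % thermal diffusivity of nickel [m^2/s]
k_s   = 80;         % thermal conductivity of nickel [W/(m K)] (assumed)
h_fb  = 200;        % film-boiling heat transfer coefficient [W/(m^2 K)] (assumed)

tau_d = (D/2)^2 / alpha;       % Equation (1): about 1 s
Bi    = h_fb * (D/2) / k_s;    % about 1e-2: conduction dominates inside the solid
fprintf('tau_d = %.2f s, Bi = %.1e\n', tau_d, Bi);
```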
The second hypothesis was that the cooling rate depends solely on the sphere temperature, the bath temperature, and the physical properties of the distinct phases. This entails that the previous states of the sphere do not influence its current behavior. Mathematically, this hypothesis leads to the existence of a monotonic function, denoted $f$, which links the cooling rate and the current state as follows:

$$\frac{dT_s}{dt} = f(T_s, T_l). \qquad (3)$$

If we know this function $f$, we can integrate this equation to obtain the current temperature as a function of time $t$. The following strategy explains how to capture this function: the mathematical justification is first given, and then the practical method to assess the temperature on the basis of a heat transfer model is presented.
3.1.1. General Theory
Following this hypothesis, and for a fixed bath temperature, we can write the following:

$$\frac{dT_s}{f(T_s)} = dt. \qquad (4)$$

Integrating this equation, there is a bijective function $F$ (a primitive of $1/f$) that only depends on $T_l$ (as well as on the material properties) and that respects the following equality for every two couples $(t_1, T_1)$ and $(t_2, T_2)$ of the same cooling curve:

$$F(T_2) - F(T_1) = t_2 - t_1. \qquad (5)$$

In other words, at a given time $t$, and for every couple $(t_0, T_0)$ encountered by the cooling curve $T_s(t)$, the temperature reads as follows:

$$T_s(t) = F^{-1}\big(F(T_0) + t - t_0\big). \qquad (6)$$
This property entails the existence of a master cooling curve: given a bath temperature, all cooling curves are linked together and follow the same path. For example, let us consider two cooling curves with initial temperatures $T_a^0$ and $T_b^0$, as shown in Figure 2. If we define the two associated cooling functions $T_a(t)$ and $T_b(t)$, and if we define $t^*$ such that $T_b(t^*) = T_a(0)$, then, thanks to (6), we have the following:

$$T_b(t + t^*) = T_a(t). \qquad (7)$$

If we synchronize the two cooling times, denoting $t' = t + t^*$, we can establish the following:

$$T_b(t') = T_a(t' - t^*), \qquad (8)$$

which means that the two cooling curves coincide, forming a single master curve.
This curve is the same as the one obtained by plotting the initial temperature of the sphere versus the total cooling time $t_f$, as shown in Figure 3. It can easily be estimated from several quenching tests with different initial temperatures. If we denote by $\Theta$ the function giving the initial temperature as a function of the total cooling time, then the cooling curve $T_s(t)$ of a given couple ($t_f$, $T_0$) reads as follows:

$$T_s(t) = \Theta(t_f - t) \qquad (9)$$

and

$$f\big(T_s(t)\big) = \frac{dT_s}{dt}(t) = -\Theta'(t_f - t). \qquad (10)$$

Thus, if we can find the relationship between the total cooling time and the initial temperature, we can obtain the function $f$.
To sum up, the following was performed:
Several quenching tests were conducted at different initial sphere temperatures for a fixed bath temperature.
The quenching time was measured for each experiment.
This process resulted in an estimation of the function $\Theta$ and, consequently, of its derivative $\Theta'$.
A relationship between the initial temperature and the total cooling time was deduced.
The state function $f$ was deduced thanks to (9) and (10).
3.1.2. Application in the Case of a Sphere
Dhir et al. [18] developed a boundary layer model of a vapor film around a sphere with subcooling. This resulted in a correlation between the cooling rate and the degree of overheating, yielding a power-law relationship of order $n$. They challenged this model with experimental results, demonstrating favorable agreement. This result motivated us to write the following:

$$\frac{d\theta}{dt} = -A\,\theta^{\,n}, \qquad (11)$$

where $\theta = T_s - T_{sat}$ denotes the overheating. With this consideration, the problem simplifies to finding the proportionality coefficient $A$. Denoting $\theta_f$ the overheating at the collapse of the vapor film, we integrated (11) between $0$ and $t_f$:

$$\theta_0^{\,1-n} - \theta_f^{\,1-n} = A\,(1-n)\,t_f. \qquad (12)$$
Thus, the function $\Theta$ reads as follows:

$$\Theta(t_f) = T_{sat} + \big(\theta_f^{\,1-n} + A(1-n)\,t_f\big)^{\frac{1}{1-n}}. \qquad (13)$$

Values of $\theta_f$ and $A$ were then iteratively estimated to minimize the interpolation error with respect to the experimental values. The obtained parameters were used to assess the sphere temperature at any time $t$ of the quench by integrating (11) between $t$ and $t_f$:

$$T_s(t) = T_{sat} + \big(\theta_f^{\,1-n} + A(1-n)(t_f - t)\big)^{\frac{1}{1-n}}. \qquad (14)$$
Thanks to (2), the estimated cooling curve allowed us to estimate the associated mean heat flux $\bar q$.
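As an illustration of the whole procedure, the following MATLAB sketch fits $A$ and $\theta_f$ on quench-time measurements and reconstructs a cooling curve from (11)–(14). The exponent $n = 3/4$ and the data points are illustrative assumptions, not values from this work:

```matlab
n       = 3/4;                          % assumed order of the power law (11)
theta0  = [300 400 500 600 700];        % initial overheatings [K] (made-up data)
tf_meas = [18  26  33  39  44];         % measured quench times [s] (made-up data)

% Equation (13) inverted: quench time as a function of theta0, with p = [A, theta_f].
tf_model = @(p, th) (th.^(1-n) - p(2)^(1-n)) ./ (p(1)*(1-n));

% Iterative least-squares estimation of A and theta_f (Section 3.1.2).
err  = @(p) sum((tf_model(p, theta0) - tf_meas).^2);
popt = fminsearch(err, [1, 150]);       % initial guess [A, theta_f]
A = popt(1);  theta_f = popt(2);

% Equation (14): reconstructed cooling curve for one quench.
th0 = 500;  tf = tf_model(popt, th0);
t   = linspace(0, tf, 200);
theta_t = (th0^(1-n) - A*(1-n)*t).^(1/(1-n));   % overheating at any time t
```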
3.2. Measurement of the Thermo-Hydrodynamic Quantities
The camera setup allowed us to observe the hydrodynamics of the vapor. Three parameters were measured: the vapor film thickness at the mid-height of the sphere, the volume of the vapor bubbles that detached from the vapor film, and the frequency of bubble creation. To achieve this, we used a high-speed camera to capture sequences of 1200 frames at different time steps of the experiments, with the resulting precision stated in Section 3.3. According to the literature, the liquid–vapor interface movements captured during the film boiling mode are within the range of precision of this setup (see, for example, Vijaykumar et al. [17]), with film wave velocities of at most 1 m s⁻¹ vertically and 0.05 m s⁻¹ normal to the wall, as well as film boiling maximum thicknesses of, at most, 1 mm (see also Jouhara et al. [19], with minimum vapor film thicknesses of at most 1 mm).
Images of the vapor film around the hot nickel sphere immersed in water were captured with a Nikon camera at 60 fps. The post-treatment of those images was conducted with a MATLAB code. The camera pictures were binarized to obtain matrices of 0s and 1s using the imbinarize MATLAB function. A very high contrast was required for this application. It was set up so that the liquid phase was identified with 0 (black pixels), whereas the vapor and solid phases were identified with 1 (white pixels). A correction was also implemented, using the imfill MATLAB function, to fill the holes that appeared inside the vapor bubbles due to light reflection effects. This is illustrated in Figure 4.
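A minimal sketch of this pre-processing, using the two MATLAB functions named above (the file name and the default Otsu threshold of imbinarize are illustrative assumptions):

```matlab
raw = imread('frame_0001.png');    % hypothetical frame extracted from a video
bw  = imbinarize(im2gray(raw));    % liquid -> 0 (black); vapor and solid -> 1 (white)
bw  = imfill(bw, 'holes');         % fill the reflection holes inside the bubbles
```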
Once these preliminary steps were completed, the pixel size was converted to a physical length. This was achieved thanks to the pictures of the last video sequence: in these last snapshots, the boiling was over, and only the sphere remained, with no vapor around it. The scale could thus be computed from the measurement of the sphere diameter, which was known. The number of pixels covered by the sphere was counted, and, from this, we deduced the length of one pixel, denoted $l_{px}$ hereafter.
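A possible implementation of this calibration, assuming the sphere is the only white object in a boiling-free snapshot and that its projection is a disc:

```matlab
bw_dry = imfill(imbinarize(im2gray(imread('frame_last.png'))), 'holes');
n_px   = nnz(bw_dry);                % white pixels covered by the sphere
D      = 1e-2;                       % known sphere diameter [m]
l_px   = (D/2) * sqrt(pi / n_px);    % pixel length from the disc area [m]
```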
These pictures were analyzed in different ways to capture the desired quantities. Though the pictures only offered a 2D view of the experiment, the analysis was based on the assumption that the problem was axisymmetric; 3D estimations were thus extracted from this post-treatment. Moreover, thanks to the estimation of the cooling curve explained in the previous section, we could recover an estimation of the cooling rate. This was performed by measuring the time difference between the snapshot and the collapse of the vapor film.
3.2.1. Vapor Film Thickness
The height of the sphere equator was obtained from the snapshots of the sphere without boiling. At this height, the sphere and the vapor film on both sides were gathered in one horizontal line of white pixels, whose length corresponded to the sum of the sphere diameter and the two vapor film thicknesses (see Figure 5). Thanks to this consideration, and to the pixel size $l_{px}$, the current vapor film thickness was measured. This was performed for all 1200 pictures of every shot, and the mean value was then computed as an estimation of the film thickness $\delta$.
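In code, this measurement reduces to reading the white run on the equator pixel row; the sketch below assumes the equator row index i_eq was identified beforehand on the boiling-free snapshots:

```matlab
row     = bw(i_eq, :);                    % equator pixel row of a binarized frame
w       = find(row);                      % indices of the white pixels
L_white = (w(end) - w(1) + 1) * l_px;     % sphere diameter plus both films [m]
delta_i = (L_white - D) / 2;              % film thickness on one side [m]
% delta is then taken as the mean of delta_i over the 1200 frames of the shot.
```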
3.2.2. Vapor Bubble Volume
Under the assumption that the bubbles were axisymmetric, the bubble volume was defined as the solid of revolution generated by the measured cross section. On each frame, each pixel line of the bubble was considered as a cylinder whose diameter is the physical length of the line and whose height is one pixel length $l_{px}$. The sum of these infinitesimal cylinders is an estimation of the bubble volume. This estimation is described in Figure 6 and was conducted for all 1200 pictures of each video, considering only the largest volume in each picture. The recovered volume was, however, not constant: the development of the bubble resulted in an increase in volume before its detachment from the vapor film, and condensation effects then caused the bubble to shrink as it ascended. The volume variations resulting from these effects are represented in Figure 7.
The volume of the bubble was then taken as the mean value of all the peaks of this quasi-periodic function (see Figure 7a). As the signal was close to being periodic with a single frequency, it could be decomposed into a constant (the mean of the signal) and a sinusoidal signal whose amplitude expresses the deviation from the mean. The volume of the bubble was then the sum of this constant and the amplitude of the sinusoid. By employing a Fast Fourier Transform, the signal was transformed from the time domain to the frequency domain. This transformation separated the signal into its constituent frequencies, enabling a clearer identification of the dominant frequency components (see Figure 7b). Adding the amplitude of the first non-zero peak of the Fast Fourier Transform to the mean value of the signal gave the estimation of the vapor bubble volume, denoted $V_b$.
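The sketch below illustrates both steps, per-frame volume by stacked cylinders and the FFT post-processing, assuming the sphere region has already been masked out of the binarized frames and that the per-frame volumes have been collected in a vector V_series sampled at the high-speed frame rate:

```matlab
% Per frame: volume of the largest white blob as a solid of revolution.
cc      = bwconncomp(bw);                           % connected white regions
[~, ib] = max(cellfun(@numel, cc.PixelIdxList));    % keep the largest blob
blob    = false(size(bw)); blob(cc.PixelIdxList{ib}) = true;
widths  = sum(blob, 2);                             % width of each pixel line [px]
V       = sum(pi * (widths * l_px / 2).^2 * l_px);  % stacked cylinders [m^3]

% Over a sequence: mean value plus the first FFT peak amplitude.
fps = 400;                                          % high-speed frame rate [1/s]
N   = numel(V_series);
Y   = abs(fft(V_series - mean(V_series))) / N;      % one-sided amplitude is 2*Y
f   = (0:N-1) * fps / N;
[pk, ipk] = max(Y(2:floor(N/2)));                   % first (dominant) non-zero peak
V_b = mean(V_series) + 2*pk;                        % bubble volume estimate [m^3]
f_b = f(ipk + 1);                                   % detachment frequency [Hz]
```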
3.2.3. Vapor Bubble Detachment Frequency
The period of the volume signal was directly linked to the detachment frequency: the first non-zero peak frequency of the Fourier transform was identified as the detachment frequency $f_b$. This procedure was, however, hampered by a phenomenon occurring at high sphere temperatures and low subcooling, when the vaporization was strong: the bubbles were then so large that some merged with the previous one, leading to measurements of doubled bubble volumes along with halved detachment frequencies. These pathological cases were solved through manual correction.
3.2.4. Equivalent Heat Fluxes
The measurement of the vapor film thickness allowed us to recover an estimation of the heat flux at the surface of the sphere. Considering pure conduction inside the vapor film, the associated heat flux reads as follows:

$$q_{cond} = \frac{k_v\,(T_s - T_{sat})}{\delta},$$

where $k_v$ is the thermal conductivity of the vapor. This estimation is based on the assumption that the vapor film thickness at the equator of the sphere is a reliable value for the global vapor film thickness around the sphere. The vapor film model of [18] predicts a thickness that is close to constant in the southern hemisphere of the sphere. However, the area of detachment of the bubbles undergoes a great deal of perturbation and might deviate from this estimation.

The measurement of the vapor bubble volumes and detachment frequencies allowed us to estimate the energy used to vaporize the liquid. We defined an equivalent heat flux as follows:

$$q_{vap} = \frac{\rho_v\, V_b\, f_b\, L}{S},$$

where $\rho_v$ is the density of the vapor, $S$ is the surface of the sphere, and $L$ is the latent heat of vaporization of water. This estimation needs careful consideration, as it overlooks the potentially significant condensation effects occurring over the vapor film and during the bubble's development.
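As a worked example under assumed property values (the vapor properties at the film temperature are not specified here, and the overheating is arbitrary), the two equivalent fluxes can be evaluated as follows:

```matlab
k_v   = 0.04;                  % vapor thermal conductivity [W/(m K)] (assumed)
rho_v = 0.55;                  % vapor density [kg/m^3] (assumed)
L     = 2.26e6;                % latent heat of vaporization of water [J/kg]
S     = pi * D^2;              % sphere surface [m^2]
dT    = 400;                   % example overheating T_s - T_sat [K]

q_cond = k_v * dT / delta;             % conduction across the film [W/m^2]
q_vap  = rho_v * V_b * f_b * L / S;    % flux equivalent to the vaporized mass [W/m^2]
```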
3.3. Measurement Uncertainties
Regarding the bath temperature, the thermocouple was placed at one corner of the bath; the bath temperature inhomogeneities were therefore not captured. Furthermore, at elevated temperatures, the bath temperature gradually declined by over 1 K/min, introducing a 1 K uncertainty in the liquid temperature assessment.
Several uncertainties were noticed regarding the sphere temperature:
Oven temperature inhomogeneity compromised the reliability of the oven temperature as an indicator, and proximity constraints limited the pyrometer precision (the nominal pyrometer precision is thus not relevant here to define the uncertainties).
Sphere’s transfer from oven to bath caused temperature loss. Convective effects predicted by the Whitaker correlation [
27] were negligible in our system. Radiation effects, with associated radiative heat flux
, contributed at higher temperatures. Considering this radiative heat flux
, the associated temperature drop was around
at the highest temperature (900 °C), and a black body model led to a cooling rate of 10 K s
−1. The sphere’s transfer took about
, leading to a potential overestimation of
. However the real emissivity of the nickel should be lower than 1. Another
uncertainty was then considered to be conservative.
Bath temperature variations caused fluctuations in the estimation, magnifying the temperature uncertainty.
Conduction effects at the contact point with the aluminum base were considered negligible, primarily due to the minimal ratio between the spike and sphere diameters. Despite this, their presence could potentially contribute to enhanced wetting at this point.
The spheres used for the experiments underwent oxidation during the oven heating. This could be another potential source of error for the estimation of the Leidenfrost temperature.
Regarding the uncertainties of the measurements deduced from optical visualization (film thickness, bubble volume, and bubble detachment frequency), they were linked to the resolution of the camera setup. As two types of shooting were used (60 frames/s or 400 frames/s), the temporal precision was about 17 ms and 2.5 ms, respectively. Spatially, the precision depends on the camera placement and image resolution (which drops in the high-speed mode). As the high-speed mode was used for the measurements, the translation of one pixel to a physical distance defines the resolution of the camera setup. While this influence may not be trivial, it is unlikely to substantially affect the bubble volume estimations. However, a notable uncertainty arises from the inherent non-sphericity of the bubbles, which is a challenge to quantify; our estimation indicates the potential for a significant 30% error due to this factor. Furthermore, the window frame occasionally truncated the edges of larger bubbles, introducing heightened uncertainty, which is particularly relevant in high overheating experiments. The frequency measurement stood out as highly reliable, driven by the consistent periodicity of the data, as evidenced by the distinctive amplitude peaks recorded at the localized frequencies within the Fourier transform. The uncertainty was mostly due to the width of the peak, which overlapped a small interval of frequencies.
We must also mention the experiments conducted at low initial temperatures. We tried to directly estimate the Leidenfrost temperature with such trials by considering the lowest initial temperature at which the vapor film did not appear. However, the hydrodynamical effects seemed to have an important impact on the vapor film stability: instability occurred at temperatures higher than the expected threshold, leading to inconclusive results.
5. Comparison with Numerical Simulations
A numerical study was carried out to capture some of the features of the film boiling regime, in order to determine which mechanisms are well represented by the theory and which ones are not yet understood. As the simulation of boiling phenomena is time-consuming, a quasi-steady-state configuration was used: the characteristic time of the hydrodynamics is, indeed, two orders of magnitude smaller than the characteristic cooling time of the sphere. It was therefore acceptable to simulate the hydrodynamics of the cooling fluid at a fixed sphere temperature, over a period of time that is small but long enough to limit the influence of the transition regime.
Among the existing numerical approaches to simulate boiling, significant results have been obtained by several methods. The work of Sato and Ničeno, who proposed a sharp interface approach with a projection method based on a staggered finite-volume algorithm on Cartesian grids and computed proper bubble nucleation and growth [32], is particularly relevant. Still with a sharp interface, Gibou et al., Tanguy et al., and, more recently, Sahut et al. have developed a Ghost Fluid method to properly compute discontinuities along with a Level Set function in finite difference [33,34,35]. Lastly, Giustini and Issa have implemented, in finite volume, an interface capturing method with volume of fluid, without the need to introduce smearing of the mass transfer term [36].
A choice was made to work with finite elements. The Level Set method was used to keep track of the liquid–vapor interface, along with a Continuous Surface Force (CSF) approach. Working with immersed interfaces, the computation was conducted on one single domain to address both phases and the surface terms simultaneously. This requires a smaller mesh size for the same precision in comparison with the above-mentioned approaches, but it has the great advantage of being simple to implement (details of the numerical implementation, with comparisons to numerical benchmarks, are available in [37]). A brief overview of the numerical model is recalled hereafter.
5.1. Monolithic Approach
The Level Set (LS) method consists in considering a signed distance function $\phi$ to a given interface $\Gamma$; in the present context, $\Gamma$ is the liquid/vapor interface. The LS method is used to discriminate the two phases and to determine all the considered properties: the density $\rho$, the dynamic viscosity $\mu$, the specific heat capacity $c_p$, and the thermal conductivity $k$. To do so, a Heaviside function $H$ related to the interface $\Gamma$ is considered:

$$H(\phi) = \begin{cases} 0 & \text{if } \phi < 0, \\ 1 & \text{if } \phi > 0. \end{cases}$$

Each property is then mixed across the interface; for instance, the density reads $\rho = H\rho_l + (1 - H)\rho_v$ if $\phi > 0$ denotes the liquid. Since $\rho$ is mixed using an arithmetic law, $H$ is comparable to a volume fraction. $c_p$ is then mixed accordingly, as it is a specific (per-unit-mass) quantity. For $k$, a geometric mixing law has been shown to improve the continuity of the fluxes at the interface [38,39].
The classical Heaviside function is discontinuous at the interface, and using it would create discontinuous properties inside the domain. For example, in the case of water and water vapor, the density would jump from around 1000 kg m⁻³ to around 1 kg m⁻³ between two mesh nodes, which is known to create stability problems. A solution is then to regularize the transition between the two phases using a sine function over an interface half-thickness $\epsilon$:

$$H_\epsilon(\phi) = \begin{cases} 0 & \text{if } \phi < -\epsilon, \\[2pt] \dfrac{1}{2}\left(1 + \dfrac{\phi}{\epsilon} + \dfrac{1}{\pi}\sin\!\left(\dfrac{\pi\phi}{\epsilon}\right)\right) & \text{if } |\phi| \leq \epsilon, \\[2pt] 1 & \text{if } \phi > \epsilon. \end{cases}$$

The interface itself is located at the zero values of the level set function. In the case of first-order interpolation on tetrahedral meshes, the interface is thus shaped by a simplex mesh of hyperplanes (a set of segments in 2D and a set of triangles in 3D). The sign of this function then allows one to discriminate each phase and to attribute the associated material properties.
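A minimal MATLAB sketch of the smoothed Heaviside and of the three mixing laws, for a vector phi of level set values; the sign convention ($\phi > 0$ in the liquid), the interface thickness, and the mass-weighted mixing of $c_p$ are assumptions used for illustration:

```matlab
h_mesh = 50e-6;                          % local mesh size [m] (assumed)
eps_i  = 2*h_mesh;                       % interface half-thickness (assumed choice)
H = 0.5 * (1 + phi/eps_i + sin(pi*phi/eps_i)/pi);   % smoothed Heaviside
H(phi >  eps_i) = 1;                     % pure liquid
H(phi < -eps_i) = 0;                     % pure vapor

rho = H*rho_l + (1-H)*rho_v;                        % arithmetic law (volume-fraction-like)
cp  = (H*rho_l*cp_l + (1-H)*rho_v*cp_v) ./ rho;     % cp mixed as a specific quantity
k   = k_l.^H .* k_v.^(1-H);                         % geometric law for flux continuity

delta_eps = (1 + cos(pi*phi/eps_i)) / (2*eps_i);    % smoothed Dirac = dH/dphi (used below)
delta_eps(abs(phi) > eps_i) = 0;
```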
Immersed methods also need a dedicated solution to include surface terms, such as surface tension and phase change. Working with a smoothed interface, the natural choice was to use the Continuous Surface Force approach. The idea is to spread the surface terms over the smoothed interface to turn them into volume terms with the help of the LS method. In doing so, a proper monolithic formulation can be solved at once to account for both phases as well as for the surface terms. This approach has been shown to bring stability to multiphase flow solvers with a simple implementation since the pioneering works [40,41,42], even though it comes at the price of some precision loss [34]. The pre-existing implementation and improvements of this method from the work of Khalloufi et al. [43] were used.
Surface terms were turned into volume terms with the help of the smoothed Dirac function $\delta_\epsilon$ (see Figure 15a), defined as the derivative of $H_\epsilon$ with respect to $\phi$:

$$\delta_\epsilon(\phi) = \frac{dH_\epsilon}{d\phi} = \begin{cases} \dfrac{1}{2\epsilon}\left(1 + \cos\!\left(\dfrac{\pi\phi}{\epsilon}\right)\right) & \text{if } |\phi| \leq \epsilon, \\[2pt] 0 & \text{otherwise.} \end{cases}$$
5.2. Phase Change Model
A phase change process has two different effects on the interface: it creates a movement of the interface, and it leads to a velocity jump when mass is transferred between two phases of different densities. This can result in stability issues. In the framework of the CSF approach, it is natural to also regularize the velocity jump by considering a volume distribution of the mass transfer, expressed with the smoothed Dirac function $\delta_\epsilon$ (see Figure 15a). This approach leads to the emergence of a source term in the velocity divergence equation of the classical incompressible Navier–Stokes equations:

$$\nabla \cdot \mathbf{u} = \left[\!\left[\frac{1}{\rho}\right]\!\right] \dot{\mathbf{m}} \cdot \mathbf{n}\; \delta_\epsilon(\phi),$$

where $\mathbf{u}$ is the velocity field, $[\![x]\!]$ represents the jump of the quantity $x$ across the interface, $\mathbf{n}$ is the unit normal to the interface, and $\dot{\mathbf{m}}$ represents the mass transfer rate vector. The latter also points toward the phase that receives the quantity of mass, and it is locally constant across the width of the interface. This expression is consistent with the velocity jump expressed by the sharp interface formulation, and it leads to a regularized velocity (see Figure 15b).
This velocity and density jump influences the advection velocity of the interface. The LS advection equation thus reads as follows:

$$\frac{\partial \phi}{\partial t} + \mathbf{v}_\Gamma \cdot \nabla \phi = 0,$$

where the convection velocity $\mathbf{v}_\Gamma$ is the fluid velocity corrected by a term that accounts for the relative velocity of the interface with respect to the fluid due to phase change (details of the derivation of this equation are available in the work of Brissot et al. [37]).
5.3. Modified Navier–Stokes Equations
All these ingredients allowed us to build a monolithic multiphase mechanical solver with phase change. The only remaining effect to add was the surface tension. This term only depends on the interface geometry and, thus, on $\phi$. In the context of the CSF approach, the surface tension contribution reads as follows:

$$\mathbf{f}_{ST} = \sigma\,\kappa\,\mathbf{n}\;\delta_\epsilon(\phi),$$

where $\sigma$ is the surface tension coefficient, $\kappa = \nabla\cdot\mathbf{n}$ is the curvature, and $\mathbf{n}$ is the normal vector of the interface.
We simply added this expression as a source term on the right-hand side of the Navier–Stokes equations to include surface tension in the momentum conservation. However, if the surface tension is implemented explicitly (meaning that it is considered as a constant term in the time discretization), then the resolution is limited by the very constraining BZK condition:

$$\Delta t \leq \sqrt{\frac{(\rho_l + \rho_v)\,h^3}{4\pi\sigma}},$$

where $\Delta t$ is the time step of the simulation, and $h$ is the mesh size. BZK stands for the surnames of the original authors who first expressed this stability condition [44]. This is the reason why semi-implicit techniques have been developed to overcome this difficulty [45,46]. The idea is to extrapolate a prediction of the diffusion of velocity induced by surface tension at the interface. In addition to the explicit source term $\mathbf{f}_{ST}$, a velocity term is then integrated implicitly in the time discretization scheme. Denner et al. [47] explained that this implicit term performs well, as it includes the dissipation of the surface energy of capillary waves with short wavelengths. Details of the numerical implementation used for the present work can be found in the work of Khalloufi et al. [48].
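To give an order of magnitude, the following sketch evaluates the explicit capillary time-step limit for water/steam properties near saturation and an assumed local mesh size:

```matlab
sigma  = 0.059;                % surface tension of water near saturation [N/m]
rho_l  = 958;  rho_v = 0.60;   % densities at saturation [kg/m^3]
h_mesh = 50e-6;                % local mesh size [m] (assumed)
dt_bzk = sqrt((rho_l + rho_v) * h_mesh^3 / (4*pi*sigma));   % ~1e-5 s here
```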
Including this term in the momentum conservation, and coupling the pseudo-compressible Navier–Stokes equations with the LS convection equation, leads to the following system:

$$\begin{cases} \rho\left(\dfrac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right) - \nabla\cdot\big(2\mu\,\varepsilon(\mathbf{u})\big) + \nabla p = \rho\mathbf{g} + \mathbf{f}_{ST}, \\[4pt] \nabla \cdot \mathbf{u} = \left[\!\left[\dfrac{1}{\rho}\right]\!\right] \dot{\mathbf{m}} \cdot \mathbf{n}\;\delta_\epsilon(\phi), \\[4pt] \dfrac{\partial \phi}{\partial t} + \mathbf{v}_\Gamma \cdot \nabla\phi = 0, \end{cases}$$

where $p$ is the pressure field, $\varepsilon(\mathbf{u})$ is the strain rate tensor, and $\mathbf{g}$ is the gravity field.
5.4. Energy Conservation
The energy conservation equation was solved under the assumption of negligible pressure effects. Thus, the specific internal energy variation was replaced by the specific thermal energy variation $c_p\,dT$, where $T$ is the temperature. A convection–diffusion–reaction equation was then solved, with the phase change effects integrated through an interfacial term accounting for the latent heat of vaporization or condensation:

$$\rho c_p \left(\frac{\partial T}{\partial t} + \mathbf{u}\cdot\nabla T\right) - \nabla\cdot(k\nabla T) + C\,(T - T_{sat})\,\delta_\epsilon(\phi) = -\,\|\dot{\mathbf{m}}\|\,L\;\delta_\epsilon(\phi), \qquad (31)$$

where $L$ is the latent heat of vaporization.
One last equation was required to close the system, as the mass transfer rate $\dot{\mathbf{m}}$ had not been determined thus far. It was obtained from the energy conservation at the interface, along with the assumption of thermodynamic equilibrium. The latter was guaranteed by the reaction term in the energy conservation Equation (31), with $C$ an arbitrary coefficient that is large enough; in our study, it was set to 10⁶ W m⁻² K⁻¹. In this context, the energy conservation at the interface was expressed through the balance of the heat fluxes on both sides of the interface:

$$\|\dot{\mathbf{m}}\|\,L = [\![-k\nabla T]\!]\cdot\mathbf{n} + q_r,$$

where $[\![-k\nabla T]\!]\cdot\mathbf{n}$ is the balance of the diffusive heat fluxes, and $q_r$ is the radiative contribution from the sphere. One difficulty of this modeling is, however, to properly compute the temperature gradients on both sides of the interface. A possible approach is to capture every quantity directly at the interface while preserving the discontinuities thanks to appropriate methods. In our case, the combination of the Continuous Surface Force approach and of the heat flux jump computation had to be consistent. To do so, a complete fictitious interface was considered for the phase change distribution and for the temperature profile. Thus, the jump was not computed on the single surface that usually represents the interface but on the borders of this thick interface, as shown in Figure 15a. The temperature gradient was explicitly computed on each element (P0) and then recovered at each node (P1) thanks to an interpolation algorithm. Then, the values of the temperature gradient at the borders of the interface were extended through the interface by solving a convection equation with a velocity normal to the interface. This gave access to the temperature gradients on both sides of the interface and, thus, to $\dot{\mathbf{m}}$ on the whole interface. It is worth noting that, by doing so, these values are constant along the direction normal to the interface (more details on this method can be found in Brissot et al. [37]).
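To illustrate the first step of this procedure only, here is a one-dimensional MATLAB sketch of the P0-to-P1 gradient recovery (the actual solver performs this on tetrahedral meshes; the temperature field is an arbitrary example):

```matlab
% Element-wise gradients (P0) are averaged to the nodes (P1).
x  = linspace(0, 1, 11);                 % node coordinates
T  = sin(pi*x);                          % nodal temperature field (example)
gE = diff(T) ./ diff(x);                 % P0: one gradient value per element
gN = ([gE(1), gE] + [gE, gE(end)]) / 2;  % P1: average of the neighboring elements
```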