3.1. Thermodynamic Processes Involving Phase Transitions
The most relevant class of examples for FTT processes involving systems with complex energy landscapes concerns the use of materials as working fluids that can exhibit more than one phase during operation of the engine along its path in thermodynamic space. In a more general sense, this also includes the use of such materials in building the device, as already discussed in Section 2.3, where possible complications due to wear and tear of the real device were noted together with the limitations on the speed with which the thermodynamic intensities T and p can spread throughout the working fluid. Such issues can affect the efficiency and capability of the device, e.g., as limitations on the speed at which the thermodynamic cycles can be performed.
We note that the pure “mathematical thermodynamicist” who wants to focus on analytically solvable problems might prefer to consider issues concerning metastability, instability, irreversibility, etc., as an external mathematical side-condition on the optimization problem, such as adding heat conduction between hot and cold reservoirs leading to leakage of heat from the system proportional to the time spent in the overall cycle. Still, even such a researcher needs to first formulate a mathematical model of this feature that represents real-life engines and working fluids, before such real-life aspects can afterwards be reduced to a formal overall factor encapsulating a high entropy production rate or loss of work at certain parts of the cycle.
3.1.1. No Competing Crystalline/Amorphous Phases
The simplest case is a system with a working fluid that consists of a homogeneous (infinitely large) material that can exist in several equilibrium phases but where neither metastable modifications nor amorphous/glassy states are present. For each point (T, p) in thermodynamic space, only one phase is present, i.e., on all observational time scales of interest, the energy/enthalpy landscape exhibits only one locally ergodic region, which is identical with the globally ergodic equilibrium phase for this temperature and pressure. For simplicity, we assume that only two equilibrium phases A and B exist in the region of thermodynamic space where the cyclic or line-type path resides; here, we denote a path as line-type if the starting and end points are different and the path does not intersect itself.
Figure 2 shows a simple cyclic path in (T, p) space, where for low and high temperatures the thermodynamic equilibrium phases are A and B, respectively.
The goal of the thermodynamic process is to perform some work in a finite time, where the excess entropy/heat production or loss of work due to the finite-time requirement is minimized or the efficiency of the process is maximized. For simplicity, we assume a cyclic process with a prescribed path in (T, p) space, where the time spent in each leg of the path and the way in which this time is allocated along the legs are subject to the optimization. Along this cyclic path, the material switches between two (or more) equilibrium phases, such that at least two phase transitions will occur. We note that a path with only one transition, which can occur when cycling around a second-order phase transition point in (T, p) space—e.g., in some cycles involving a gas–liquid transition—also works, and might be of interest in certain kinds of refrigeration cycles employing the gas–liquid phase transition. In this context, we recall that the phase transitions occur along lines or at points in thermodynamic space; depending on which value is varied, one can speak of transitions driven mainly by pressure changes or mainly by temperature changes. In the kinds of materials we are discussing here, most of the transitions are first-order, although second-order transitions can also occur. In solids, the latter are mostly associated with small displacements of the atoms changing the symmetry group of the crystal to a subgroup, such that no nucleation of the new phase followed by growth of the new phase from this nucleus is needed. As a consequence, for second-order phase transitions there is no free energy barrier for the atom rearrangement, and the amount of thermodynamic work needed is small. Of course, if the transformation takes place at points in (T, p) space that are some distance away from the second-order transition point or line, then we are dealing with a first-order transition.
Upon a change in temperature and/or pressure at the transition temperature/pressure along the path, the material has to re-organize on the atomic level in order to transform into the new equilibrium phase. To achieve this process in finite time, it is usually necessary to move the system to temperatures/pressures deviating from the equilibrium transition values such that the material enters a (potentially massively) non-equilibrium state. In this state, the first phase is no longer stable, but the new equilibrium phase is not yet established; in order for the system to settle into this new phase within a finite time, excess entropy/heat generation or extra work is required.
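The size of the deviation required can be estimated from a standard textbook relation (not specific to this source): close to the equilibrium transition temperature T_t, the Gibbs energy difference between the metastable and the stable phase grows approximately linearly with the distance from the transition,

```latex
\Delta g(T) \;\approx\; \Delta s_t \,(T_t - T) \;=\; \frac{\Delta h_t}{T_t}\,(T_t - T),
```

where Δh_t and Δs_t = Δh_t/T_t are the transition enthalpy and entropy. This Δg is the driving force that is bought by moving the path past the coexistence line, and it controls how quickly the atomic reorganization into the new phase can proceed.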
Figure 3 shows another schematic cycle, where we now indicate the ranges of metastability of phases A and B in the respective regions where phases B and A are the equilibrium phases.
In order to design models of this process for the purpose of time-allocation optimization, we note the relevant time scales involved. Obviously, there is the available time t_tot for the whole cycle, which distinguishes finite-time thermodynamics from classical thermodynamics. Next, we discuss various equilibration time scales; note that in the following we do not worry about the choice of observational time scale or about the observables used to classify ergodicity and equilibrium. The first time scale of interest is the equilibration time τ_eq(i) for phase i when starting from a generic point inside phase i, for general values of T and p for which phase i is locally ergodic. The second time scale is the relaxation time τ_relax(i) for phase i after a (small) perturbation of the system away from equilibrium. Here, we are starting from a point on the landscape belonging to the locally ergodic region of the landscape associated with the system in equilibrium at (T, p), except that now the thermodynamic parameters have been shifted to (nearby) values (T′, p′), placing this point at the rim or even slightly outside the new locally ergodic region associated with the thermodynamic point (T′, p′). Although strictly speaking an equilibration time, τ_relax(i) corresponds to the usual relaxation time to equilibrium that is employed when modeling and optimizing the time step allocation for a standard working fluid that needs to be kept in, or at least very close to, equilibrium. Typically, one assumes that some simplified (phenomenological) model can be used for the excess entropy production as long as the time step associated with moving from (T, p) to (T′, p′) is larger than τ_relax(i).
The third group of time scales concerns those associated with the phase transition A→B. The first is the equilibration time inside the locally ergodic region associated with phase B, τ_eq(B). Next, there is the escape time from inside the unstable/metastable phase A, τ_esc(A). However, for complex energy landscapes it is not obvious that leaving region A will automatically transfer the system at once into the neighborhood of region B; instead, the system enters a general transition region that connects regions A and B. This region can be quite large, and in general may border on many other (locally ergodic or marginally ergodic) regions of the landscape. In particular, reaching the right “exit” from such a complex transition region crossing all of the generalized barriers involved can require quite a long time [38]. Thus, the escape from region A together with the movement of the system inside the transition region into the neighborhood of the (stable) target phase B, at the point (T, p) in thermodynamic space where phase B is the equilibrium phase, can take considerably longer than τ_esc(A). Here, we note that during the subsequent equilibration “into” phase B when starting from the region of the landscape formerly associated with phase A or the transition region connecting A and B, the system does not start from a generic point inside phase B, as is usually assumed in the definition of τ_eq(B); instead, we start from a very specific point on the landscape inside the region associated with the former locally ergodic phase A or the transition region surrounding it. Thus, we denote this time scale by τ_eq*(B). Therefore, τ_eq(B) will usually be smaller than the time required to move from the exit of the transition region into the locally ergodic region B and equilibrate there, i.e., τ_eq*(B) > τ_eq(B). These two time scales τ_esc(A) and τ_eq*(B) constitute lower bounds on the time scale τ_trans(A→B) within which a successful transition from the newly unstable phase A into the new equilibrium phase B takes place on the energy landscape.
If the system in phase A still has a certain degree of stability, as is often the case in first-order phase transitions, it can be considered locally ergodic on some small time scale; thus, the quantity of interest would usually be this escape time τ_esc(A). On the other hand, if the system can no longer equilibrate on essentially any time scale of interest in phase A at the point (T, p) in thermodynamic space, then the equilibration time of the new phase would often be the relevant quantity, i.e., τ_eq*(B).
A special, very important case in this context consists of nucleation-and-growth processes, which proceed in two stages: the formation of a stable nucleus of phase B inside the metastable phase A, with its time scale τ_nuc (a special case of τ_esc(A)), and the growth process of this nucleus into the macroscopic phase B on the time scale τ_grow (a special case of τ_eq*(B)). We note that the system might well have left the locally ergodic region A before the nucleus of phase B has come into existence, i.e., τ_esc(A) < τ_nuc. Furthermore, during first-order phase transitions, τ_grow can far exceed τ_eq(B); thus, the total time needed for such a process can greatly exceed the sum of τ_esc(A) and τ_eq(B), τ_trans(A→B) > τ_esc(A) + τ_eq(B).
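For a first-order transition, the nucleation stage can be made quantitative with classical nucleation theory, in which the barrier ΔG* = 16πσ³/(3Δg_v²) is crossed at a rate proportional to exp(−ΔG*/k_BT). The sketch below (all material parameters and the kinetic prefactor are illustrative assumptions, not values from the source) estimates the expected nucleation waiting time as a function of undercooling, showing why the path must move well past the coexistence line for the transition to complete in finite time:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K

# Illustrative material parameters -- assumed values, not from the source
SIGMA = 0.1          # interfacial free energy, J/m^2
DH_V  = 1.0e9        # volumetric heat of fusion, J/m^3
T_M   = 1000.0       # equilibrium transition temperature, K
J_0   = 1.0e35       # kinetic prefactor of the nucleation rate, 1/(m^3 s)
V     = 1.0e-6       # sample volume, m^3

def tau_nuc(T):
    """Expected nucleation waiting time at T < T_M (classical nucleation theory)."""
    dg_v = DH_V * (T_M - T) / T_M                          # driving force per volume
    dG_star = 16.0 * math.pi * SIGMA**3 / (3.0 * dg_v**2)  # barrier height, J
    J = J_0 * math.exp(-dG_star / (K_B * T))               # nucleation rate density
    rate = J * V
    return math.inf if rate == 0.0 else 1.0 / rate         # infinite if rate underflows

for dT in (10, 50, 100, 200):
    print(f"undercooling {dT:3d} K -> tau_nuc ~ {tau_nuc(T_M - dT):.3e} s")
```

Because the barrier diverges as the undercooling vanishes, τ_nuc changes by hundreds of orders of magnitude over this range, which is the quantitative core of the time-allocation problem discussed here.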
Of course, we assume that a phase transition occurs in the first place: if τ_esc(A) > t_tot for the starting phase A for all values of (T, p) along the path, then no transition happens during the cycle and we stay in the same locally ergodic region A as far as the material is concerned. Analogously, if τ_esc(B) > t_tot along the return leg, then the transition back to the initial phase will not take place, and we are stuck in the second phase at the end of the cycle. In this case, it is necessary to add a long waiting time before restarting the cycle in order to allow the system to re-initialize and reach equilibrium in phase A. For a cycle that is performed periodically, this waiting time clearly poses a problem. An alternative way to deal with either of these events within the available time can be the choice of a different cycle, e.g., one extending to higher or lower temperatures and/or different pressures, where the escape times from phases A and B are much shorter and the cycle can be performed as desired, though at the expense of a much higher excess entropy production or loss of work.
A related problem can occur if τ_trans(A→B) exceeds the time available for the whole region of the path in (T, p) space where phase B is the equilibrium phase. In this instance, the working fluid remains in a non-equilibrium state for the whole duration of this stage of the path, never quite leaving the transition region connecting regions A and B, and we need to explicitly model the thermodynamic processes during this stage on a non-equilibrium basis. After entering the stage of the path where A is again the equilibrium phase, the problem might be alleviated, since it is to be expected that the relaxation back to phase A might be rather fast and within the time limits of the process. Again, we can try to address this issue by choosing a different thermodynamic path along which the two phase transitions occur quickly enough.
In this context, we recall the probabilistic aspect of the escape time definition, incorporated in the characteristic constant appearing in it: if we attempt the transition n times, each time running the simulation/experiment for a length of τ_esc, then we expect that we will make a transition into the other phase at least once. In order to take this feature into account, a probabilistic approach can be employed to model the behavior of the system along the cycle, as outlined below in Section 3.1.5.
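This expectation follows from elementary waiting-time statistics. A minimal sketch (assuming exponentially distributed escape times, an idealization not made explicitly in the source): the probability of escaping at least once in n independent runs of length τ_esc is 1 − e^(−n), which a short Monte Carlo check reproduces:

```python
import math
import random

random.seed(0)

TAU_ESC = 1.0       # escape time in arbitrary units (assumed Poisson escape statistics)
N_TRIALS = 100_000  # Monte Carlo repetitions per estimate

def p_escape(n_attempts, run_length=TAU_ESC):
    """Fraction of trials in which at least one of n_attempts independent
    runs of the given length contains an escape event (exponential waiting)."""
    hits = 0
    for _ in range(N_TRIALS):
        if any(random.expovariate(1.0 / TAU_ESC) <= run_length
               for _ in range(n_attempts)):
            hits += 1
    return hits / N_TRIALS

for n in (1, 3, 10):
    print(f"n = {n:2d}: Monte Carlo {p_escape(n):.3f}"
          f"  vs  1 - exp(-n) = {1.0 - math.exp(-n):.3f}")
```

A single run of length τ_esc thus succeeds only with probability 1 − 1/e ≈ 0.63, so "escape within τ_esc" is a statement about expectations, not a guarantee for any individual cycle.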
Assuming that neither of these serious problem cases occurs, it is nevertheless clear that we need to spend enough time in the phase transition region to reach the new phase j from i, i.e., along these legs of the path, the times τ_trans(i→j) need to be included and subtracted from the total available time t_tot, and the excess heat or entropy produced during the nucleation-and-growth stage must also be accounted for. As mentioned above, in order to speed up the phase transformation process, we might have to employ values of T and p that are (possibly considerably) larger/smaller than the values at which the two phases are in equilibrium. Depending on how much time we can afford to spend on the phase transition, we can reduce the excess entropy production by staying closer to the transition values of pressure and temperature. On the other hand, spending too much time at the phase transition forces us to be “too fast” on the rest of the path, again producing excess entropy. One critical issue in the modeling of such rather fast movements along the cycle is that we might move the system so far out of equilibrium that the linear approximation-based models for the entropy production during relaxation to equilibrium (frequently used when computing or estimating optimal schedules [29]) are no longer applicable, considerably complicating the modeling of the thermodynamic cycle. This usual (excess) entropy production along the path in thermodynamic space and its finite-time optimization is not discussed here; we assume that we know or can model the excess entropy production due to the usual finite-time movement along the path for a given material once it is essentially equilibrated inside one phase, since this constitutes the standard FTT problem.
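The trade-off between time spent at the transition and time spent on the rest of the path can be illustrated with the simplest phenomenological allocation model (a sketch under strong assumptions: the excess entropy of leg i is taken as ε_i/Δt_i, valid only in the linear-response regime mentioned above, and the ε_i values are made up). Minimizing Σ ε_i/Δt_i under the constraint Σ Δt_i = t_total gives, via Lagrange multipliers, Δt_i ∝ √ε_i:

```python
import math

def allocate(eps, t_total):
    """Split t_total over legs to minimize sum(eps_i / dt_i).
    Lagrange multipliers give dt_i proportional to sqrt(eps_i)."""
    weights = [math.sqrt(e) for e in eps]
    norm = sum(weights)
    return [t_total * w / norm for w in weights]

def excess_entropy(eps, dts):
    """Phenomenological excess entropy of the whole cycle."""
    return sum(e / dt for e, dt in zip(eps, dts))

eps = [1.0, 4.0, 0.25]   # dissipation coefficients per leg (illustrative)
t_total = 10.0           # time left after subtracting the transition times
opt = allocate(eps, t_total)
print("optimal allocation:", [round(t, 3) for t in opt])
print("minimal excess entropy:", round(excess_entropy(eps, opt), 4))

# sanity check: the even split is never better than the optimal one
even = [t_total / len(eps)] * len(eps)
assert excess_entropy(eps, opt) <= excess_entropy(eps, even)
```

The minimal value is (Σ√ε_i)²/t_total, so shortening t_total by the time consumed at the phase transition raises the unavoidable dissipation on the remaining legs in inverse proportion, which is exactly the tension described above.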
A detailed prototypical example of modeling a first-order phase transition using a finite-time optimal control analysis can be found in the study of the liquid–gas transition by Santoro et al. [127]. There, the authors minimized the excess work needed to perform this phase transition in a finite time slightly away from the phase transition point in thermodynamic space. Their study included explicit models for the nucleation and growth processes. Such models would also be necessary for the study of any thermodynamic cycle that includes phase transitions if one desires to obtain quantitative results for the optimization of a specific phase transition in a given chemical or physical system.
3.1.2. Presence of Amorphous Precursor or Glassy States
In the preceding Section 3.1.1, we have considered the case of a system with only two phases, which transform into each other along a certain line in thermodynamic (T, p) space. For first-order phase transitions, the transition is not instantaneous but commonly takes place through a nucleation-and-growth process on a time scale τ_trans(A→B) that depends on the thermodynamic conditions (T, p). For such processes, we deal with free energy barriers that require a certain time to cross in order to create a nucleus of critical size large enough to grow into the new phase [129,130,131]. In order to achieve this in finite time, we must move beyond the transition line in (T, p) space deeper into the region where phase B is the equilibrium phase, which requires excess work or excess entropy. For simplicity, we can visualize the nucleation process as being characterized by the above-mentioned escape time τ_esc(A) from phase A, while the time spent in the growth stage of the transition is related to the equilibration time scale τ_eq(B) in phase B. We have noted already that τ_eq(B) is only a lower limit on the time that this growth process can take, as we do not start from a generic point inside the locally ergodic region B on the energy landscape. As a consequence, as already mentioned earlier, the overall nucleation-plus-growth time scale usually exceeds the sum of the above two time scales, τ_trans(A→B) > τ_esc(A) + τ_eq(B).
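Collecting these relations (with τ_esc(A) the escape time from phase A, τ_eq(B) the equilibration time of phase B from a generic starting point, τ_eq*(B) the corresponding time when starting from the transition region, and τ_nuc, τ_grow the nucleation and growth times), the ordering discussed in this and the preceding subsection can be summarized schematically as

```latex
\tau_{\mathrm{esc}}(A) \le \tau_{\mathrm{nuc}}, \qquad
\tau_{\mathrm{eq}}(B) \le \tau_{\mathrm{eq}}^{*}(B), \qquad
\tau_{\mathrm{trans}}(A \to B) \approx \tau_{\mathrm{nuc}} + \tau_{\mathrm{grow}}
  \;>\; \tau_{\mathrm{esc}}(A) + \tau_{\mathrm{eq}}(B),
```

all of which must ultimately be compared with the total time t_tot available for the cycle.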
In practice, the situation is more complex, since the nucleation process takes place via the generation of a multitude of nuclei throughout the macroscopic material. The precise number depends on the specific values of the temperature and pressure applied, quite frequently leading to the formation of a polycrystalline material with less well-defined macroscopic properties instead of a single crystal. Furthermore, during a general nucleation process the system resides in a transition region on the complex energy landscape, from which in principle several alternative (metastable) phases can be reached, assuming that such additional phases exist on the landscape in the first place.
In particular, in many materials such a transition region—while never being locally ergodic in itself, and as such not corresponding to a thermodynamically (meta)stable phase—is structurally realized in solid form as an amorphous precursor state or a glassy state G, which can persist in some fashion for quite a long time at low enough temperatures. We note that there are of course many amorphous solids with surprisingly high stability that can be created employing a variety of alternative routes; the generation of amorphous Si3B3N7 via the sol–gel process [132,133] constitutes such a case. However, here we only focus on those amorphous materials that are generated along a thermodynamic path, i.e., via quenching from a melt or the gas phase, or via amorphization through high-pressure treatments.
In the general amorphous state, the material is far from global thermodynamic equilibrium and serves as the matrix within which thermodynamically (meta)stable crystalline phases i can nucleate. This synthesis approach of crystallization from an amorphous precursor is frequently employed to access unusual crystalline modifications from, e.g., an amorphous film deposited from the gas phase, or to reach a high-pressure phase from a material that is amorphized as an intermediary stage via application of high pressures and temperatures. The corresponding time scale for the specific nucleation-and-growth process now takes the place of the time scales mentioned earlier—τ_esc(A), τ_eq(B), and τ_trans(A→B) during first-order phase transitions—and must be compared with the total time t_tot available for the cyclic or line-type thermodynamic path. We note that the ability to access multiple metastable phases from such a precursor can be an advantage if one wants to reach valuable or just interesting phases that might not be accessible otherwise [134,135,136]. However, this adds uncertainty to the process with respect to its outcome, in addition to the excess work/entropy associated with the phase transition in finite time. To deal with this uncertainty, we need to introduce a probabilistic scheme into the cycle description, which is described in more detail in Section 3.1.5.
In many regards, the glassy state G can be treated as another instance of the amorphous state, although it exhibits some features that are of particular relevance for the thermodynamic cycles including phase transitions that we are discussing here. Similar to the amorphous state, the glassy material will also eventually transform into the thermodynamically more stable equilibrium phase via some kind of nucleation-and-growth process. However, the glassy state tends to be quite stable on long time scales as far as its macroscopic properties are concerned. This also often applies to its general structural features; in particular, both the time scale of nucleation and the time scale for growth of a crystalline phase i inside the glassy state G, τ_nuc(i) and τ_grow(i), respectively, can be very large, and may often exceed the total time available for the cycle, τ_nuc(i), τ_grow(i) > t_tot.
This is related to the fact that in many materials of interest, glassy states appear when quenching a melt (i.e., cooling the material too fast to a temperature below the freezing temperature for nucleation of the crystalline equilibrium phase to occur). As a consequence, the glassy material resembles a liquid with extremely high viscosity in many aspects. Here, we note that the glassy state should not be confused with a metastable super-cooled liquid; the moment the super-cooled liquid experiences a (local) disturbance above a certain (small) strength, a spontaneous nucleation of many nuclei of the solid equilibrium phase takes place and these nuclei very rapidly grow to produce a polycrystalline material. In contrast, the glassy material is very stable, and even a strong disturbance like being hit with a hammer only results in mechanical damage, not in the initiation of a nucleation-and-growth process. As a consequence of being a quasi-equilibrium continuation of the melt in the solid state, such glassy systems typically exhibit marginal ergodicity and show aging, as discussed in Section 2.4.
For the thermodynamic cycles in finite time that we are interested in, such glassy states can, e.g., occur if the two phases that are visited along the trajectory in (T, p) space correspond to the solid crystalline modification (A) and the melt (B). Melting a solid crystalline phase is usually quite straightforward as long as only one solid phase exists in the system close to the melting temperature, and this melting transition usually takes place relatively quickly even for temperatures close to the melting point. We note that some complications can occur for unusual systems such as gallium [137]; however, here we focus on the standard case. Furthermore, the formation of the thermodynamically stable crystalline solid phase from the melt when cooling the system usually requires a non-negligible amount of time even if the material is not a glass former. As mentioned above, if glassy states G in the system are possible or likely, then the time to reach the crystalline modification A can be very large, far exceeding the equilibration time τ_eq(A) of the crystalline modification, and might require careful tuning of the thermodynamic conditions.
Figure 4 shows the simple cycle already discussed in Figure 2 and Figure 3, except that now phase B is the melt (denoted by M) and we encounter the added complication that the system can enter a glassy state from the melt instead of nucleating into the equilibrium phase A.
For the purposes of our discussion, this implies that if the time needed to nucleate the crystalline phase A exceeds the time available in this leg of the cycle, then we will remain in the glassy state of the material for the rest of the trajectory. In particular, if the cycle is run with a starting point in the crystalline phase A, then we do not reach the original starting point of the cycle, and not only in a small fraction of instances; i.e., being stranded in the glassy state is not a low-probability occurrence. If the working fluid at the start of the cycle had been the melt M, i.e., if the cycle were to have started at a point where the melt is the equilibrium phase, and if the cycle included a transformation into the solid phase A before returning back to the melt, then the situation is problematic but not completely lost. In this latter case, the solid phase that we access and perform some of the work and heat transfer with is just the glassy state of the material instead of the crystalline equilibrium phase. As long as the properties of the glassy material are such that all relevant tasks can be performed—although perhaps not with the same efficiency compared to using the crystalline state as the working fluid—the thermodynamic cycle can still be completed.
In principle, we could choose the glassy state and the melt as the two “phases” of interest, since then we can complete the cycle, seemingly returning the working fluid to its original state at the starting point. Nevertheless, even in this case we have to deal with the aging of the glass that will occur while the material of the working fluid is in the glassy state. Depending on the length of time for which the material remains a glass, it will slowly evolve; at each point of the trajectory in (T, p) space, the system “quickly” leaves the marginal quasi-equilibrium state reached for these thermodynamic conditions and continues to respectively “emit” or “collect” (configurational) entropy into or from the universe as time goes on, while approaching (though not reaching) the crystalline equilibrium phase. Here, we note that this change in configurational entropy occurs in addition to the usual excess entropy associated with the usual “equilibration” that we observe in the equilibrium solid (crystalline) phase when perturbed (slightly) out of equilibrium. As a consequence, if we want to use the glassy state as the starting state of the cycle together with the melt as the second phase—a cycle which can be achieved, since melting a glass is no more problematic than melting a crystal—then we have to realize that the final glassy state is unlikely to be in exactly the same thermodynamic state as the starting one, which presumably had been relaxed for a long time before the engine was started. Nevertheless, if we immediately restart the engine after finishing one cycle, then after a few cycles the material will be in the “same” (evolving) glassy state of the same age at the starting point from one cycle to the next.
Here, we note that such a very slow approach to the equilibrium solid state can also appear in systems where the high-temperature phase corresponds to a solid solution instead of the melt and where the solid equilibrium phase at lower temperatures corresponds to a separation into two solids with different concentrations (possibly realized in a polycrystalline fashion). In particular, if the solid solution state is quenched to a temperature much below the critical point of the miscibility gap in the phase diagram of the material, where the thermodynamically stable phase would consist of two separated solid solutions with different compositions [2,138], then it can occur that the system evolves only very slowly into the final two-solid phase. As a consequence, the material exhibits a state between a simple homogeneous metastable solid solution and the equilibrium state consisting of two segregated solids with different compositions, each of which is in thermodynamic equilibrium by itself. Conversely, the same can happen if we raise the temperature of the two-solid phase and have to wait a long time before the thermodynamically stable homogeneous solid solution equilibrium phase has been able to form at the higher temperature. Here, one should keep in mind that the solid solution phase is also a solid; thus, the internal atom diffusion needed to establish the equilibrated solid solution phase is likely to proceed quite slowly. Hence, we observe that returning to the original state of the material can become extremely difficult. We need to spend a great deal of time at the right transformation conditions (temperature, pressure, etc.) in order to nucleate the thermodynamically stable phase out of the glassy, amorphous, or solid solution state. This might force us to add a whole thermodynamic cycle or cycles after our original work cycle in order to re-establish the original starting phase of the working fluid material.
In this context, we note that even if we do not transform the crystalline starting material into the liquid state (from which the subsequent transformation into the glassy state would occur) because we stay below the melting temperature along the whole path, softening (or hardening) can still take place when approaching the melting temperatures/transformation pressures, with cyclic fatigue phenomena appearing [139]. In this case, high amounts of disorder can arise in the single-crystal material, which can result in both microscopic and mesoscopic structural changes. Similarly, high pressures, even when quite far below the actual transition pressures needed for the formation of a high-pressure phase, can change the amount and distribution of equilibrium and non-equilibrium defects, and create domain changes which take considerable effort and time to reverse. This is particularly critical if the properties of the material (electronic, mechanical, etc.) actually depend on the number and distribution of the (equilibrium and non-equilibrium) “defects” in the solid.
Quite generally, in many materials we find slow weakening or other changes that can occur during thermodynamic cycles and need to be accounted for when computing the efficiency of the cycle and the resulting excess entropy or loss of available work. These are especially important if they concern the mesoscopic structure (crystallite size, domain size, grain boundary and dislocation distribution, interfaces of composite materials), as such changes tend to be more difficult to reverse. This applies even if no competing metastable phases are present. In particular, we acquire large numbers of defects or other changes which are not eliminated upon return to the starting material, even if it is still a single-crystal or polycrystalline material of the only solid phase that is locally ergodic in the system. One cannot magically restore the original state without spending enormous amounts of effort and work on essentially “re-forming”/“transforming” the material at the end of the cycle back into the original starting equilibrium material. We might treat these large re-initializations of the working fluid as somewhat external features one would prefer to ignore when designing and optimizing a work cycle. Nevertheless, such additional work and entropy production needs to be accounted for when analyzing the finite-time thermodynamics of systems that employ materials exhibiting long-lived amorphous or glassy states as working fluids.
3.1.3. Existence of Multiple Metastable Phases in Parallel
Clearly, the situation is even more critical if the material possesses several competing moderately long-lived metastable phases i at the (T, p) values along the path of the thermodynamic cycle. In contrast to the case of the glassy state, which is expected to vary slowly while inexorably transforming more and more into the equilibrium phase, we now deal with well-defined phases that are in local equilibrium on time scales of interest along the cycle, i.e., once we are in such a phase i, equilibrium thermodynamics holds on time scales up to the escape time τ_esc(i). In the following, we distinguish two main cases: where we start the cycle with the working fluid in the equilibrium phase (case 1), and where we start with it in a metastable phase (case 2).
First, we discuss the case where the cycle starts with the material in the globally ergodic equilibrium phase A. At some point along the trajectory in thermodynamic space, we find that the system can/does switch into a different phase B—which can be either the new equilibrium phase or a metastable phase—since at and beyond this point, A has become metastable/unstable on the time scale of observation t_obs. Here, t_obs can refer to the total time the system spends in the leg of the path in (T, p) space where A is only metastable at best, as well as to the time allocated to a given point in (T, p) space, i.e., the point where the phase transition takes place or should take place. As far as the possible transition from phase A to phase B is concerned, we are again dealing with the balance between the time scales for the nucleation-and-growth process discussed in the two previous subsections, that is, the time available at any given point along the trajectory and the total time spent in the leg where phase A is metastable. Let us now assume that the time allocated to this leg exceeds the transition time τ_trans(A→B); then, the system will be in phase B starting at some point along this leg of the cycle. Of course, we again have to optimize the time allocation to this leg as a whole, and in particular to the transition stage from A to B.
At the end of this portion of the cycle in thermodynamic space, phase B becomes metastable and phase A is again the equilibrium phase. However, we note that B does not necessarily transform nicely back to A; the system might evolve into competing phases analogous to the crystallization of alternative modifications from the glassy state or an amorphous precursor mentioned in Section 3.1.2. Instead, several alternatives can occur. First, B may be a long-lived metastable phase for the (T, p) values of the rest of the cycle; in that case, we will only reach A with a low probability, on the order of the ratio of the available time to the escape time from B. Alternatively, B could transform into another metastable phase C (with a certain probability ranging from very small to 100%), which is a long-lived metastable competitor to phase A. Finally, some kind of glassy or amorphous state might appear, as discussed above in Section 3.1.2. Fine-tuning the path and the time spent in various regions of (T, p)-space along the cycle trajectory might be required in order to avoid any of these three outcomes if we want to return the working fluid to the starting phase A at the end of the cycle.
Second, we start in a metastable state C (perhaps because its properties are just perfect for our purposes). Now, we want to return to this state after finishing the cycle. As long as we spend less time at each point on the path than the escape time of C, we might be able to perform the cycle avoiding any phase transition and return to C at the end of the cycle. However, running through the dangerous parts of the cycle, where C is highly unstable, at high speed will presumably exact a considerable price in excess entropy production and lost work compared to the ideal values obtained without such a fast schedule. If we instead accept the transition to a different phase B at some point along the cycle, we are again faced with four possible outcomes upon return to the starting point of the cycle in thermodynamic space: we remain in the new metastable phase B, end up in the equilibrium phase A, reach the desired metastable phase C, or are stuck in a glassy state. Considering that at least some of the undesired options can occur with a non-vanishing probability, it appears that again a probabilistic approach to the optimization problem of (a set of) cycles will be needed (see Section 3.1.5).
Figure 5 shows how the possibility of several outcomes when dealing with phase transitions in finite times may result in different (meta)stable phases along the path in thermodynamic space. Starting with the equilibrium phase A at the starting point of the cycle, two possible transitions can occur at the first phase transition point. First, phase A can continue while being metastable until transforming into the new equilibrium phase B, where the system then remains until the second phase transition point is reached; at that point, phase B becomes metastable and subsequently transforms (again after some delay) either into the equilibrium phase A or into the metastable phase C, where it is assumed that no glassy state is present in the system. Alternatively, at the first transition point we can switch into the metastable phase C and remain in this phase until the second transition point is reached; there, we either stay in the metastable phase C until we reach the starting point again, or we transform back into the equilibrium phase A. Of course, there are other possibilities, such as the system staying in phase B (now metastable) until the end. Keeping track of these possible bifurcations results in a decision tree, as discussed in Section 3.1.5.
We recall that the material can enter the analysis of the thermodynamic cycles in several ways when employed as a working fluid. From a modeling point of view, this can pose an external source of problems that, for one, impose restrictions on the speed along the path or on the choice of path as such. For example, one might like to avoid the region where the unstable phase C appears at the end of the cycle (case 1 above), or perhaps not want to transform the working fluid into phase B at all if possible. To achieve this, one could take a worse (e.g., more inefficient) path in (T,p)-space, for instance by staying below the phase transition line that separates phases A and B while still performing the desired work, or alternatively by using a different range of temperatures, with the disadvantage of producing less work than one would like or generating more excess entropy. If this avoids the large amount of extra thermodynamic work needed to recreate the original material, it might be worth the thermodynamic price. Here, we note that if the transformation to phase B is an integral part of the functioning of the thermodynamic engine, then it of course cannot be excluded from the thermodynamic cycle.
As we have seen already, a second issue is that the choice of material can impose an additional fixed cost, e.g., of transforming the material back to A from C. This might occur in case 1 discussed above, and the extra cost needs to be added on top of the otherwise optimized route. We note that the phase transition is assumed to be already included in the optimization procedure, since it belongs to the “ideal” cycle we are planning to follow.
A third concern might be that we can, e.g., achieve certain goals more easily when the material has the properties of modification B while running the engine, but phase B is environmentally unstable (susceptible to rusting) or unsafe (poisonous), and as such should be transformed back to A after the process is finished and the engine is at rest. An example might be a switch from a stable insulating material A to an unstable but conducting material B, where the electrical conductivity is important for the performance of the cycle, e.g., in a battery.
3.1.4. Challenges of Cyclic Processes
A number of the issues discussed in the previous examples apply to both cyclic and line-type processes (where the starting point and end point of the path are different and the path does not intersect itself), since any phase transition with inconclusive outcomes will require the introduction of a probabilistic setup for the optimization of the finite-time process. Similarly, any phase transition encountered along the path can drive the system into strongly non-equilibrium states. It then becomes problematic to use the straightforward intuitive (linear-response-type) models of the relaxation towards equilibrium which underlie our ability to perform, e.g., analytical calculations when optimizing the finite-time process. These aspects are present for both cyclic and line-type paths in thermodynamic space. Cyclic processes employing real materials instead of abstract thermodynamically perfect ones exhibit their own challenges when compared with a given line-type process in thermodynamic space, even if the line-type process also involves various kinds of phase transitions. As pointed out earlier, the major issue for cyclic paths is the ability to return to the original starting point with the properties of the material that serves as the working fluid intact and identical to its state at the beginning of the cycle.
Several detrimental outcomes have already been discussed, such as the material being in a different (meta)stable phase when returning to the starting point of the cycle in thermodynamic space. Examples we have considered are a metastable phase B or C instead of the original equilibrium phase A, the equilibrium phase A instead of the original metastable phase C, the metastable phase B instead of the metastable phase C, or some kind of glassy state. We have also noted that if we were to start with the material in a glassy state, then aging processes during the cycle can result in the material being in a slightly different aged (or rejuvenated) glassy state when returning to the starting point at the end of the cycle, since the glassy material is not in a locally ergodic equilibrium state but in an ever-evolving marginally ergodic quasi-equilibrium state.
In fact, this issue, raised by the somewhat ill-defined glassy state where the material is permanently in a slowly evolving quasi-equilibrium, can also appear when employing metastable and thermodynamically stable crystalline modifications of the material as a working fluid. The reason is that in these systems we are often dealing with the accumulation of defects in each cycle, which are not eliminated or brought back to their equilibrium concentrations and spatial distributions after the cycle has ended: the material is back in its original modification (or has never left it), but many long-lived defects have been created while undergoing the cyclic thermodynamic process. If these defects affect the physical or chemical properties of the working fluid, then subsequent cycles will yield slightly different outcomes even for the same path in thermodynamic space with the same allocation of time along this path. Of particular importance are long-lasting non-equilibrium defects on the atomic or mesoscopic level, such as defect clusters, dislocations, domain boundaries, or grain boundaries. In the last case, we have usually already crossed the line to essentially permanent changes on most time scales and for most systems of interest, since now we are dealing with a polycrystalline material instead of the single-crystal material in thermodynamic equilibrium that we started with.
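The cycle-to-cycle accumulation of long-lived defects can be made concrete with a toy recursion; the creation and annealing rates below are purely illustrative assumptions, not measured values:

```python
def defect_density_after_cycles(n_cycles, created_per_cycle=1e-4,
                                annealed_fraction=0.02, d0=0.0):
    """Toy model: each cycle creates a fixed density of long-lived defects,
    and only a small fraction of the accumulated defects anneals out per
    cycle.  The density then saturates at created_per_cycle/annealed_fraction
    instead of returning to the starting value d0 after every cycle."""
    d = d0
    for _ in range(n_cycles):
        d = (1.0 - annealed_fraction) * d + created_per_cycle
    return d
```

In this caricature the working fluid is never quite the same material twice: the defect density drifts upward cycle after cycle until it saturates, and the outcomes of nominally identical cycles drift with it.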
Here, we remark that sometimes these non-equilibrium states are preferable as working fluids because of their specific macroscopic properties. For example, if a glass is replaced by a poly- or single-crystalline material as the working fluid, it might no longer have the desired property spectrum, e.g., it might be more brittle, less transparent, etc. In addition to glasses, an extreme case of such materials with a desirable high “defect” density on the atomic level might be so-called “high-entropy” materials [
140,
141], where we aim for a state with a high degree of so-called “controlled disorder” [
7], such as a super-sized version of a solid solution. As mentioned earlier, solid solutions are equilibrium or long-lived near-equilibrium states; often, a material called a “solid solution” is not yet fully relaxed into thermodynamic equilibrium, and “emits”/“absorbs” heat while settling into the maximum entropy state. However, these high-entropy solids (solutions) do not necessarily correspond (and actually are unlikely to correspond) to thermodynamic equilibrium phases/states at the temperatures and pressures at which the material is being used in devices. As a consequence,
cycles performed using such a material as a working fluid can very well lead to a change in structure (on the atomic and the mesoscopic level), in particular in the compositional atom distribution throughout the material, such that the properties of the material can differ after the cycle. Another example of quasi-equilibrium materials whose properties are strongly affected by (finite-time) thermodynamic cycles are battery materials, since each cycle adds a certain number of more or less permanent “defects”. Here, each discharge–recharge cycle (at constant temperature and pressure but at different externally applied voltages) leaves the material slightly changed (in a thermodynamically lower metastable state), until no further cycles are possible and the battery needs to be replaced and the material recycled.
The accumulation of defects raises issues that are also present when we consider the finite size of the system or when we are dealing with an inhomogeneous material. In order for an inhomogeneous material to be the equilibrium phase, the material usually needs to be a composite material or agglomeration; otherwise, the system will slowly evolve towards a homogeneous new (!) material. If we are dealing with such a composite, then a contact boundary is present, with its own physics and chemistry; in particular, we encounter possible mixing on the atomic level via diffusion of atoms between the phases in the boundary zone. In finite times, we cannot achieve a smooth distribution of the “foreign” atoms as minority “defect”/“solid solution”-type atoms inside the other phase. Thus, in each of the separate phases there is a gradient of the minority atoms from the “other” phase, violating the condition of homogeneity in that subset of the composite material. The equilibrium amount of minority atoms depends on the (T,p) values via the phase diagram; thus, at least locally in the interface zone, we deal with changes in this concentration as a function of (T,p). Further “irreversibilities” can appear in the quasi-permanence of cracks and grain boundaries, which are typical mesoscopic features of a material.
This brings up the issue of the finite size of the material employed as a working fluid: if we are down to mesoscopic sizes, e.g., employing single-crystalline grains of a material, then this might allow the system to fully equilibrate without appealing to the self-averaging principle. In particular, we could have single crystal-to-single crystal transformations, i.e., we do not end up with some of the essentially irreversible changes mentioned earlier, such as crystal-to-powder-like structure changes on the mesoscopic level, which destroy the macroscopic reversibility and prohibit the return of the material to its state at the beginning, thereby violating the assumption underlying every thermodynamic cycle. Another basic advantage of mesoscopic-size working fluids is that we do not need to worry too much about the speed with which external changes in the thermodynamic conditions propagate and spread through the material serving as the working fluid. On the other hand, the surface is quite large compared to the volume of the material; thus, surface terms should be taken into account when computing thermodynamic functions for the system. Furthermore, for mesoscopic-size materials every phase becomes metastable, in principle including the thermodynamically stable one, and the number of metastable phases in the system can increase considerably. Finally, many thermodynamic engines must employ working fluids of macroscopic size in order to be useful in real-life situations.
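How quickly surface terms become non-negligible at mesoscopic sizes can be estimated by simple atom counting; the cubic-crystallite geometry below is an illustrative assumption:

```python
def surface_atom_fraction(n_edge):
    """Fraction of atoms lying on the surface of a cubic crystallite with
    n_edge atoms along each edge (simple cubic counting): the interior atoms
    form an (n_edge - 2)^3 cube, and everything else is surface."""
    if n_edge < 3:
        return 1.0  # no interior atoms at all
    return 1.0 - ((n_edge - 2) ** 3) / n_edge ** 3

# a grain of ~10 atoms per edge is roughly half surface,
# while a grain of ~10^4 atoms per edge is almost all bulk
f_small = surface_atom_fraction(10)
f_large = surface_atom_fraction(10000)
```

Already at a few tens of atoms per edge, roughly half of all atoms sit at the surface, which is why surface contributions cannot be dropped from the thermodynamic functions of mesoscopic working fluids.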
3.1.5. Probabilistic Optimizations
In the previous subsections, we have already encountered one general way to deal with those cycles where the system ends up with a final material that is different from the original one: a certain amount of work is added after the actual work cycle has been performed in order to rejuvenate the material and transform it back to its original state. This work, and in principle the time required to perform it, needs to be taken into account when formulating the finite-time optimization problem, e.g., by subtracting the time needed for restoring the original phase from the total time available for the cycle, such that we can start the next cycle directly after the material has been restored.
As mentioned in the previous subsections, when dealing with competing metastable phases, one obtains a probability (as a function of temperature and pressure) that indicates the likelihood of the metastable phase i leaving its locally ergodic region on the energy landscape within the observation time spent at this point in thermodynamic space. In the definition of the escape time, this probability enters as a characteristic parameter: an atomistic trajectory whose length equals the escape time will leave the region with a probability exceeding this parameter. The parameter is usually assumed to be very small, such that even a small likelihood of leaving the region counts when defining the escape time scale. Conversely, for a given time scale of interest which we want to spend at a point in thermodynamic space, we can set this time equal to the escape time, allowing us to compute the corresponding escape probability. As a consequence, on average once in a number of cycles equal to the inverse of this probability, the system will leave phase i and end up in the correct (if we wanted to leave phase i) or incorrect (if we wanted to stay in phase i) phase.
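The relation between the time allotted to a point in (T,p)-space, the escape time, and the expected number of cycles before the phase is left can be sketched as follows, again assuming memoryless escape statistics; all numbers are placeholders:

```python
import math

def leave_probability(t_spent_s, tau_esc_s):
    """Probability that the system leaves the locally ergodic region of
    phase i during the time t_spent_s allotted to this point in (T,p)-space
    (exponential escape statistics)."""
    return 1.0 - math.exp(-t_spent_s / tau_esc_s)

def expected_cycles_until_escape(t_spent_s, tau_esc_s):
    """On average, an escape occurs once in 1/p cycles."""
    return 1.0 / leave_probability(t_spent_s, tau_esc_s)

# spending 1 s at a point where the escape time is ~1000 s:
p = leave_probability(1.0, 1000.0)
n_cycles = expected_cycles_until_escape(1.0, 1000.0)
```

Spending one thousandth of the escape time at the critical point thus means losing the phase roughly once per thousand cycles, which is the kind of number a probabilistic cycle optimization has to weigh against the cost of moving faster.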
Now, we can construct a “decision”-like tree of outcomes for the cycle as far as the phases that occur during the trajectory in thermodynamic space are concerned, where each decision (i.e., whether a transition occurs or not along the path) is noted. For each transition that takes place, we can include the times needed for the transition in the list of legs of the path and also add the amount of excess entropy production/loss of work associated with this transition to the cost function of the finite-time optimization problem. For each path on the decision tree, we can compute the probability that the system will follow this decision tree path. Furthermore, we can treat each such path of the decision tree as a separate finite-time optimization problem that needs to be analyzed, taking into account the constraints due to the transitions that are defined to occur.
In the next step, we optimize the time allocation/distribution for every given path on the decision tree with respect to the thermodynamic cost function of the cycle. Here, we must decide whether, for a decision path where the “wrong” state of the material is found after the regular thermodynamic cycle has been finished, we are willing to “rejuvenate” the material to its original phase as part of the cycle or not. In the former case, we need to subtract this rejuvenation time from the total time available for the cycle and add the associated thermodynamic cost to the cost of this decision tree path. Alternatively, we could instead “replace” the material that ended up in the undesired phase with material in the correct original phase obtained from a large reservoir/storage, such as that provided by, e.g., a recycling company. In the latter case, it is not quite clear which cost and time should be associated with such a replacement; a compromise might be to not subtract the replacement time from the total cycle time (after all, the recycling company will rejuvenate enormous amounts of spent material in one fell swoop), yet still add the thermodynamic work needed in this external recycling process to the cost of this path.
Furthermore, we note that when optimizing the cycle following such a decision tree path we might end up influencing the occurrence probability of this path by varying the time we spend within the region in thermodynamic space where this transition occurs. This kind of feedback should be automatically included in the complete optimization problem until a self-consistent solution has been reached. In this discussion, we assume that the probabilities of the transitions occurring in the first place do not change by very much when we optimize the cycle for a given sequence of transitions assumed to occur along this decision tree path.
In a second step, we would now consider an ensemble of cycles representing all the decision tree paths and assign them their appropriate probabilities. Because these probabilities will change depending on the choice of path in thermodynamic (T,p)-space, we can now perform an optimization over the path the cycle should traverse in thermodynamic space. Of course, for each choice of cycle path we need to perform the time allocation optimizations for all the decision tree paths, together with a re-determination of the probabilities with which such decision tree paths will occur.
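The decision-tree bookkeeping described above can be sketched in a few lines; the two transition points, their branch probabilities, and the excess-work costs are entirely hypothetical placeholders:

```python
from itertools import product

# Hypothetical data: at each of two decision points along the cycle the
# system either undergoes the transition (True) or stays (False).
branch_prob = {0: {True: 0.8, False: 0.2},   # e.g., A -> B at the first point
               1: {True: 0.6, False: 0.4}}   # e.g., B -> A at the second point
branch_cost = {0: {True: 1.0, False: 3.0},   # excess work per branch (arb. units)
               1: {True: 0.5, False: 2.5}}

def enumerate_paths():
    """All decision-tree paths with their probability and summed cost."""
    paths = []
    for choices in product([True, False], repeat=2):
        prob, cost = 1.0, 0.0
        for point, taken in enumerate(choices):
            prob *= branch_prob[point][taken]
            cost += branch_cost[point][taken]
        paths.append((choices, prob, cost))
    return paths

def expected_cost():
    """Probability-weighted cost over the whole ensemble of outcomes."""
    return sum(prob * cost for _, prob, cost in enumerate_paths())
```

Each decision-tree path would, in the full problem, carry its own time-allocation optimization; the ensemble average above is then the quantity to be minimized over the choice of cycle path in (T,p)-space.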
3.2. Optimal Schedule of Processes Aimed at Synthesis/Production of Materials
In the previous Section 3.1, we discussed many aspects of thermodynamic cycles in finite time that involve a working fluid with a complex energy landscape, where multiple locally ergodic and/or glassy regions are present on the energy landscape in the region of thermodynamic (T,p)-space in which the process takes place. The typical targets of the optimization are, e.g., the minimization of the excess entropy/heat production, or of the loss of work/extra work required due to the finite time available for completion of the cycle. Another important class of optimization problems associated with thermodynamic processes following some trajectory in thermodynamic space concerns the production of chemical compounds and materials in finite time. Here, the goal is the maximization of the desired product, where the target compound does not necessarily constitute the thermodynamic equilibrium phase of the system. Alternatively, for a given amount of product, the goal might be to minimize the amount of work or heat production if the product needs to be generated within a certain finite time.
Of course, the efficient production of chemicals has a long tradition in chemistry and chemical engineering, with many processes having been analyzed and optimized in the past [
142,
143,
144,
145]. We also find examples where such analyses have been performed by employing the viewpoint of finite-time thermodynamics, such as studies of distillation processes [
146,
147], various chemical reactions [
148,
149], or maximization of the desired product obtained upon cooling from a melt by a nucleation and growth process [
128]. However, such analyses have usually not taken the complex energy landscape of the underlying chemical system into account.
Designing syntheses that target specific molecules which would never form by themselves from a given initial set of atoms or small molecules has been highly successful [
150,
151,
152], with molecular chemists developing a plethora of so-called elementary name reactions [
153] that describe specific reaction steps in the multi-step reaction paths starting from very simple and widely available educt molecules. We note that many of these target molecules correspond to high-lying minima on the energy landscape of the chemical system that is defined by the set of atoms making up the molecule. Due to the high degree of kinetic control of the chemical reactions involved in building up these specific molecules, moving from one minimum to the next on the energy landscape can be done in a controlled fashion.
In contrast, aiming for the synthesis of specific solid phases and designing efficient synthesis routes for this purpose is a much greater challenge for the experimentalist, even if the goal is only to obtain the thermodynamically stable phase [
154,
155]. One problem is that for many compositions in a chemical system even the equilibrium phase is not known with certainty. Trying to solve the task of identifying all stable solid phases of interest in a chemical system spawned the field of crystal structure prediction over thirty years ago (for reviews, see, e.g., [
154,
156,
157]), which is now in the process of adding machine learning to its toolbox [
158,
159]. However, the major reason for the difficulty in systematically synthesizing such metastable solid compounds in the experiment is probably the lack of atom-level kinetic control of the chemical processes. Instead, thermodynamics-based tools such as variations of temperature, pressure, and attempts to (locally) vary the concentration of starting atoms or precursor materials (crystalline, amorphous, or layers of films) are mainly employed. Trying to alleviate this issue has led to various film- or atom layer-based methods [
134,
160,
161], where systematic quantitative analyses of the outcome of the growth of various crystalline phases (e.g., from an amorphous precursor) have been performed as function of the applied (thermodynamic parameter-based) synthesis schedule [
135,
136]. However, there is clearly still a long way to go in this regard.
From the optimization point of view, the major challenge is to guide the system into the right metastable target phase within finite time, where we assume that we have full information about the set of relevant local minima and locally ergodic regions on the energy landscape together with information about the generalized barrier structure including both energetic and entropic barriers. The issues that arise are in many ways quite analogous to those discussed in
Section 3.1: the many time scales associated with the movement on the energy landscape when exiting from or equilibrating in (intermediary) metastable phases or amorphous precursors, the amount of excess entropy/loss of work when trying to accelerate nucleation and growth processes, the persistence of quasi-equilibrium states such as glasses, and the probabilistic nature of the outcome of the synthesis even for identical schedules in
(T,p)-space when metastable phases are targeted or need to be passed through on the route to the final solid compound of interest.
To address these issues, it is possible to employ empirical theoretical models for the various stages of the envisioned synthesis route. For example, the maximization of the amount of a crystalline solid phase generated via cooling from the melt within a finite time
has been investigated [
128], where the temperature served as the thermodynamic control parameter. Using standard elementary models to describe the nucleation and growth stages of the process, it was found that the solution to the optimization problem consisted of a bang–bang-type schedule, with the temperature abruptly switching between a minimum value where the nucleation rate is high for the super-cooled melt and a maximum value where the growth rate of these nuclei is maximized (note that this maximum temperature must stay below the melting temperature of the material to avoid a re-melting). The same study investigated the competition between two different metastable phases, where the optimization goal was to preferentially generate one of these two phases during their growth from the super-cooled melt. Here, a bang–bang-type solution for the control parameter (in this case the temperature) was again obtained.
In the above study of crystallization from a melt, it was implicitly assumed that empirical models for both the nucleation and growth processes were available, together with values of the parameters in these models for the chemical system of interest. Furthermore, it was assumed that all relevant metastable phases in the system were known, such that the predictions would be able to guide the experimentalist in their syntheses. However, for many systems no such experiments have yet been performed, or none of the metastable phases that compete with the known equilibrium phase have yet been synthesized; thus, the empirical laws guiding the processes involved are not available for performing the optimization.
While distressing on the one hand, it is on the other hand clear that even some qualitative guidance in thermodynamic space would already be of great help to the experimentalist in finding ways to synthesize any of the metastable solid phases that have been predicted or intuited to exist in the system, thereby spurring theoretical work in the field even when only very rough approximations are available. To address this challenge of synthesis route prediction, it is necessary to perform the finite-time optimization directly on the energy landscape of the system of interest [
7]. A prerequisite is detailed knowledge of the structure of this landscape, including all local minima of relevance that individually (in the case of crystalline modifications) or in structurally related groups (in the case of solid solution phases) are at the center of locally ergodic regions corresponding to these (meta)stable phases, the (effective) local densities of states for these minima and associated locally ergodic regions, and the energetic and entropic barriers separating these regions. These generalized barriers are captured in the form of probability flows between the locally ergodic regions as a function of energy during the global landscape explorations [
21,
94,
109]. Such flows, together with the local densities of states, can be determined using the so-called threshold algorithm [
38,
124,
162], which explores the regions of the energy landscape which are accessible from all minima of relevance below a sequence of energy lids.
This information can be used to construct a Markov model description of the dynamics of the system on the level of the locally ergodic regions [
7,
163], i.e., we model the probabilistic dynamics on the energy landscape in a coarsened picture that considers the movement of the walker between locally ergodic regions instead of from microstate to microstate, as would be done in, e.g., a molecular dynamics simulation. The thermodynamic parameters such as temperature and pressure influence the transition probability entries in these Markov matrices via the Boltzmann factors and shifts in the enthalpy levels; we recall that a change in pressure from p1 to p2 corresponds to a shift of (p2 − p1)V in the potential enthalpy of the microstates of the system. As a result of the finite-time optimization, we find an optimal temperature–pressure schedule in thermodynamic space for each target phase; these schedules can then serve as a rough guideline for experimentalists to design their synthesis routes.
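A minimal version of such a coarse-grained Markov description can be sketched as follows; the two-region landscape, its enthalpy and barrier values, and the per-step hop probabilities are invented placeholders, not actual landscape data for any real system:

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def markov_matrix(enthalpies_ev, barriers_ev, temperature_k):
    """Transition matrix over locally ergodic regions: the probability to hop
    from region i to region j per step is a Boltzmann factor of the
    generalized barrier height above the enthalpy of region i."""
    n = len(enthalpies_ev)
    m = [[0.0] * n for _ in range(n)]
    for i in range(n):
        out = 0.0
        for j in range(n):
            if j != i and barriers_ev[i][j] is not None:
                m[i][j] = math.exp(-(barriers_ev[i][j] - enthalpies_ev[i])
                                   / (K_B * temperature_k))
                out += m[i][j]
        m[i][i] = 1.0 - out  # probability of staying in region i
    return m

def evolve(occupation, schedule, enthalpies_ev, barriers_ev, steps_per_t=100):
    """Propagate the occupation probabilities along a temperature schedule."""
    n = len(occupation)
    for t_k in schedule:
        m = markov_matrix(enthalpies_ev, barriers_ev, t_k)
        for _ in range(steps_per_t):
            occupation = [sum(occupation[i] * m[i][j] for i in range(n))
                          for j in range(n)]
    return occupation

# region 0: stable phase (0.0 eV); region 1: metastable phase (0.1 eV);
# a common generalized barrier top at 0.5 eV separates them
enthalpies = [0.0, 0.1]
barriers = [[None, 0.5], [0.5, None]]
occ = evolve([0.0, 1.0], [1000.0] * 10, enthalpies, barriers)
```

Annealing at high temperature lets the occupation flow from the metastable into the stable region; making the temperature (and pressure) schedule itself the optimization variable recovers the finite-time synthesis-planning setting described above.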
Examples of such thermodynamic optimizations on the energy landscape level include studies by Hoffmann et al. [
163,
164,
165], who employed the results of energy landscape investigations of periodic approximants of the magnesium difluoride system [
94,
166,
167] to construct such Markov matrices for the probability flows as a function of temperature and pressure. Starting the Markovian time evolution from the system at very high temperatures, the optimal temperature–pressure schedules are computed, enabling not only the experimentally known rutile structure but also predicted metastable alternative phases, such as the anatase and CdI2 modifications, to be obtained with a certain probability.
Such in-principle studies constitute only the beginning of the applicability of such synthesis optimization to realistic systems. The main problem when constructing simulations on the level of the metagraph of the locally ergodic regions for solid-state chemical systems is the number of atoms involved. Thus far, it is only possible to obtain the detailed landscape information required for the construction of the Markov model for small periodic approximants; single molecules or clusters are clearly much easier to deal with in this regard. The problem is that unless the transformation between the metastable phases occurs via, e.g., a second-order phase transition (as discussed above), real solid materials typically undergo nucleation and growth processes or grow from glassy or amorphous precursors. Thus, obtaining the appropriate time scales for the probability flows that allow us to model the Markovian evolution on the metagraph of the locally ergodic regions in a quantitatively realistic fashion requires information from landscape explorations for state spaces consisting of hundreds or thousands of atoms per variable periodic simulation cell, ideally on the ab initio level of energy. Nevertheless, such explorations are expected to become feasible with the availability of machine learning (ML) potentials for multi-atom systems, as the ML energies of multi-atom configurations in solids are reasonable approximations of the ab initio energies for the same configurations but can be computed orders of magnitude faster [
168].
In this context, we comment on the issue of computing free energy differences and free energy barriers on the atomic level via molecular dynamics or Monte Carlo simulations of single walkers (or ensembles thereof) for systems that exhibit several locally ergodic regions which might compete with each other along the path in thermodynamic space. Classic approaches to computing free energy differences, e.g., between two systems that can be transformed into each other by some change in characteristic parameters (such as the strength of the atom–atom interactions) or between the same system at two different points in thermodynamic space, include the thermodynamic integration method and the thermodynamic perturbation method [
169,
170]. The basic assumption behind such approaches is the observation that the work needed to perform such a transformation/movement on the energy landscape constitutes a lower or upper bound on the free energy difference [
171]. The closer the system can stay to (local) equilibrium during the procedure, the tighter these bounds become. Usually, the transformation is performed in both the forward and the backward direction, at least as long as we are moving between two global equilibrium states. The same considerations apply when the transfer is supposed to take place between two (meta)stable modifications A and B at the same location or at different locations in thermodynamic space.
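To make the thermodynamic integration idea concrete, the following sketch estimates the free energy difference between two one-dimensional harmonic "phases" with stiffnesses k_a and k_b, for which the exact answer (T/2) ln(k_b/k_a) is known analytically (Boltzmann constant set to 1). The potential, sampler, and all numerical settings are illustrative choices, not taken from the references:

```python
import math
import random

def mc_mean_x2(k, T=1.0, n_steps=30000, n_burn=2000, step=0.5, seed=0):
    """Metropolis estimate of <x^2> for U(x) = k x^2 / 2 at temperature T."""
    rng = random.Random(seed)
    x, total = 0.0, 0.0
    for i in range(n_steps + n_burn):
        x_new = x + rng.uniform(-step, step)
        # Metropolis acceptance for the energy change dU = k (x_new^2 - x^2) / 2
        if rng.random() < math.exp(min(0.0, -k * (x_new**2 - x**2) / (2.0 * T))):
            x = x_new
        if i >= n_burn:
            total += x * x
    return total / n_steps

def delta_f_ti(k_a, k_b, T=1.0, n_lambda=21):
    """Thermodynamic integration along k(lam) = k_a + lam (k_b - k_a).

    Since dU/dlam = (k_b - k_a) x^2 / 2, we have
    Delta F = integral_0^1 <dU/dlam> dlam, evaluated here with the
    trapezoidal rule over equilibrium averages at each lambda.
    """
    lams = [i / (n_lambda - 1) for i in range(n_lambda)]
    du = [0.5 * (k_b - k_a) * mc_mean_x2(k_a + lam * (k_b - k_a), T, seed=i)
          for i, lam in enumerate(lams)]
    return sum(0.5 * (du[i] + du[i + 1]) * (lams[i + 1] - lams[i])
               for i in range(n_lambda - 1))

estimate = delta_f_ti(1.0, 4.0)      # MC estimate of Delta F
exact = 0.5 * math.log(4.0 / 1.0)    # (T/2) ln(k_b / k_a) with T = 1
```

For a real working fluid, the role of the single stiffness parameter would be played by a coupling parameter interpolating the full interatomic potential, but the structure of a quadrature over equilibrium averages is the same.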
In a practical realization of such a computation, one would employ an ensemble of walkers on the energy landscape. The movement of these walkers as the thermodynamic parameters are changed to drive the system from phase A to phase B then corresponds to a moving ensemble average along the path in thermodynamic space. If the transformation takes place in finite time, either extra work needs to be expended or excess heat is generated, as the system will always be slightly out of equilibrium. As a consequence, we are again dealing with a finite-time optimization problem, i.e., attempting to find the optimal path in thermodynamic space to move from phase A to phase B while keeping the ensemble representing the system close to equilibrium everywhere along the path. In addition, we need to identify the optimal allocation of the available time along the path.
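The finite-time penalty can be illustrated with the same kind of toy harmonic system, switched from stiffness k_a to k_b in a fixed number of increments (all parameters here are invented for this sketch; Boltzmann constant set to 1): the ensemble-averaged work exceeds the quasi-static free energy difference, and the excess shrinks as more relaxation time is allotted between increments.

```python
import math
import random

def switching_work(m_relax, n_inc=20, k_a=1.0, k_b=4.0, T=1.0,
                   n_traj=300, step=0.5, seed=1):
    """Average work to switch U = k x^2 / 2 from k_a to k_b in finite time.

    The stiffness is raised in n_inc equal increments; after each increment
    the walker relaxes for m_relax Metropolis steps. Larger m_relax means a
    slower, more nearly reversible protocol.
    """
    rng = random.Random(seed)
    dk = (k_b - k_a) / n_inc
    w_sum = 0.0
    for _ in range(n_traj):
        k = k_a
        x = rng.gauss(0.0, math.sqrt(T / k))   # equilibrium start at k_a
        w = 0.0
        for _ in range(n_inc):
            w += 0.5 * dk * x * x              # instantaneous work of the jump
            k += dk
            for _ in range(m_relax):           # partial re-equilibration at new k
                x_new = x + rng.uniform(-step, step)
                if rng.random() < math.exp(min(0.0, -k * (x_new**2 - x**2) / (2.0 * T))):
                    x = x_new
        w_sum += w
    return w_sum / n_traj

delta_f = 0.5 * math.log(4.0)      # exact quasi-static limit, (T/2) ln(k_b/k_a)
w_fast = switching_work(m_relax=1)     # hurried protocol, large excess work
w_slow = switching_work(m_relax=50)    # slower protocol, closer to delta_f
```

The choice of protocol (here, simply the number of relaxation steps per increment) is exactly the object of the finite-time optimization problem described above.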
The general question of optimally moving a system in thermodynamic space on the level of the energy landscape, and thereby computing the free energy differences, has been addressed in the literature [
30]; however, those derivations assume that no bifurcations will be encountered. If other locally ergodic regions or marginally ergodic regions—corresponding to metastable phases or glassy solids, respectively—can be accessed along with the target phase
B with non-vanishing probability, then the ensemble will no longer stay in or close to thermodynamic equilibrium, since some of the walkers will visit or even end up in other locally ergodic or marginally ergodic regions of the landscape. Ensuring that all walkers reach phase
B requires additional work performed on the system, adding to the uncertainty in the free energy calculation already present due to the finite-time effects along the “correct” route through the energy landscape.
As an alternative to accepting this handicap and paying the price of extra work or entropy/heat production to force the walkers to stay on the direct route between phases
A and
B, one can in principle again employ a decision-tree approach, as mentioned in
Section 3.1.5, where a probabilistic formulation of the many possible outcomes of the cyclic thermodynamic paths was presented. The advantage of such an approach is that one does not add essentially uncontrollable forcing terms into the algorithm; furthermore, as a positive side-benefit, estimates for free energy differences among many metastable phases of the system can be obtained. The disadvantage is the enormous number of walkers needed to probe the energy landscape in a locally equilibrated way along many possible pathways through the landscape between the locally ergodic regions. The possible appearance of glassy states is another serious handicap of an unbiased approach, since such non- or at best quasi-equilibrium states might not be exited on realistic simulation time scales, possibly requiring the system to be heated close to the melting point. Such a deviation from the original thermodynamic path will quite likely result in many additional uncertainties in the calculation of the free energy. Nevertheless, addressing such issues is an important task in the context of free energy computations for systems with complex energy landscapes.
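The splitting of an unbiased walker ensemble between competing locally ergodic regions can be caricatured on a tilted double-well potential (the potential and all parameters here are invented for illustration and are not taken from the text): walkers quenched at the barrier top fall into either basin, and simply counting the arrivals yields the branching probabilities that a decision-tree bookkeeping would record, without any forcing terms.

```python
import math
import random

def U(x):
    """Tilted double well: global minimum near x = +1, metastable one near x = -1."""
    return (x * x - 1.0) ** 2 - 0.25 * x

def quench_ensemble(n_walkers=200, n_steps=500, T=0.05, step=0.2, seed=2):
    """Quench walkers from the barrier top and count arrivals in each basin."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_walkers):
        x = 0.0                      # start at the bifurcation point
        for _ in range(n_steps):
            x_new = x + rng.uniform(-step, step)
            # Metropolis step at low temperature: barrier recrossing is rare
            if rng.random() < math.exp(min(0.0, -(U(x_new) - U(x)) / T)):
                x = x_new
        finals.append(x)
    n_stable = sum(1 for x in finals if x > 0)   # walkers in the deeper basin
    n_meta = n_walkers - n_stable                # walkers trapped in the other one
    return finals, n_stable, n_meta

finals, n_stable, n_meta = quench_ensemble()
```

The walkers ending in the shallower basin are precisely the "lost" ensemble members that would otherwise require extra forcing work to redirect toward the target phase.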
3.3. Systems with Complex Energy Landscapes Outside Physics and Chemistry
Systems with complex energy landscapes are also found outside the fields of chemistry and physics, ranging from mathematics and biology through engineering and economics to the humanities; for an overview, we refer to [8]. In these fields, the high-dimensional scalar- or vector-valued function over a large space of microstates is often no longer called an energy function; instead, we speak of a generalized cost function. To give some specific examples, this function is called the fitness function (which is to be maximized) when discussing evolution in biological systems; the welfare function in the context of thermo-economics for multi-agent systems; the happiness function for social systems; the cost function for planning problems in business-level economics; and the objective function in abstract or practical combinatorial and global optimization problems in mathematics.
In many of these systems, one is mostly interested in identifying the local and global minima and maxima of the generalized cost function, i.e., most of the effort is devoted to developing or applying suitable global optimization techniques and algorithms. Because these algorithms must explore the landscape efficiently, many of them are inspired by the way in which physical and chemical systems naturally explore their energy landscapes on the way to their thermodynamically stable phases. Examples of such algorithms are the simulated annealing method [
172] and genetic and evolutionary algorithms [
173], which have spawned a multitude of variants. Here, the picture drawn from classical mechanics of a system rolling downhill under the force of gravity to reach a state of lower potential energy leads to the deterministic gradient descent approach, while stochastic methods involving random walks on the cost function landscape reflect the statistical nature of the approach to low-energy minima associated with equilibrium phases in statistical thermodynamics. Such algorithms have been analyzed by employing the analogy of a glass transition or a glassy intermediary region on the energy landscape, which must be passed through before the desired low-energy cost function minima can be identified [
174,
175].
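As a minimal sketch of such a physically inspired algorithm (the cost function, cooling schedule, and all parameters are invented for illustration), consider simulated annealing on a one-dimensional two-minimum cost function: at high "temperature" the walker hops freely between basins, and as the temperature is lowered it settles, with high probability, into the deeper one.

```python
import math
import random

def cost(x):
    """Two-minimum cost function; the deeper (global) minimum lies at x < 0."""
    return x ** 4 - 3.0 * x ** 2 + 0.5 * x

def simulated_annealing(t_start=2.0, t_end=0.01, n_steps=5000,
                        step=0.5, seed=3):
    """Metropolis walk with a geometric cooling schedule; tracks the best state seen."""
    rng = random.Random(seed)
    cool = (t_end / t_start) ** (1.0 / n_steps)   # per-step cooling factor
    x, t = rng.uniform(-2.0, 2.0), t_start
    best_x, best_c = x, cost(x)
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        dc = cost(x_new) - cost(x)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if dc <= 0 or rng.random() < math.exp(-dc / t):
            x = x_new
            if cost(x) < best_c:
                best_x, best_c = x, cost(x)
        t *= cool
    return best_x, best_c

best_x, best_c = simulated_annealing()
```

Genetic and evolutionary algorithms replace the single walker by a population, in the same spirit as the walker ensembles discussed in the preceding subsection.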
The challenges faced in the design and optimization of global optimization algorithms that explore such multi-minima cost function landscapes with a limited amount of computational resources are very similar to those discussed above for moving in thermodynamic space from a phase that is stable at high temperatures to the thermodynamically stable equilibrium phase at low temperatures. In this context, we note that the energy landscapes of such combinatorial optimization problems frequently do not exhibit a well-separated ground state basin that would be analogous to the well-defined crystalline thermodynamic equilibrium phase at low temperatures. Instead, many of these landscapes resemble those of spin glasses, which by definition or construction do not have a well-defined locally ergodic region surrounding the global energy minimum; rather, the complex energy landscape contains many minima with nearly the same energy as the global minimum that are located far away from it in state space. This constitutes a qualitative difference from the energy landscape of a crystalline material modeled with a realistically large periodic approximant such that isolated defect configurations can be included; in the latter case, all low-energy minima with nearly the same energy as the global minimum correspond to equilibrium defect configurations of the thermodynamically stable zero-temperature equilibrium phase, and as such belong to the same locally ergodic region.
Quite generally, employing a whole ensemble of walkers on the cost function landscape as opposed to only a single walker allows us to compare the evolution of a thermodynamic system toward a (meta)stable phase with the gradual establishment of (meta)stable states of biological, ecological, social, or economic systems, which can reach an equilibrium-like state regarding the exchange of (biological or economic) goods and resources with a hypothetical external environment. In particular, when moving from one metastable biological, ecological, or economic state to another, we encounter problems similar to those we have discussed for the movement between two metastable solid phases: we need to invest a large amount of “work” or resources to accomplish the transformation into the desired biological, ecological, or economic target state. Doing this while minimizing the extra work or loss of resources within a finite time is clearly analogous to a finite-time optimization problem. We also note that chaotic and nearly unpredictable changes between two stable biological, ecological, or economic states can occur in such systems. This can result in the system being in a non-equilibrium or quasi-equilibrium situation which can persist for very long times, in analogy to the glassy states of chemical materials.
More concretely, in biological systems we can consider scenarios such as the attempt to breed certain traits into farm animals, or to change the resistance of plants against “pests”, within as few generations as possible or with a minimal number of intermediary breeding animals; these are analogues of the scenario in which a given solid phase is transformed into a different metastable material in as efficient a manner as possible. Similarly, we can consider the recovery of an ecosystem [
176,
177] after, e.g., a destructive volcanic eruption. Typically, the ecosystem observed in the region
V around the volcano must proceed through a series of metastable ecosystems featuring pioneering and other intermediary plant and animal generations before a stable ecosystem is reestablished. Achieving this with a minimal amount of effort within a finite time, perhaps measured in a few decades, requires careful fine-tuning of the environmental conditions experienced by the region
V as a function of time, which can strongly influence the types of plants and animals that will grow and settle in region
V after the disturbance. This recovery process is quite analogous to a series of phase transitions when moving from, e.g., the melt to the low-temperature equilibrium phase via several intermediary (high-temperature) phases after the system has experienced an abrupt change in its thermodynamic environment, such as a quench in temperature and/or exposure to a cycle of high and low pressures.
Such environmental boundary conditions can include the general climate of the region, the plants or animals introduced by humans in region V after the volcanic eruption, and/or the (fixed) distribution of plants and animals in the geographic region G surrounding the region of interest near the volcano V. While the local climate or weather are difficult to influence by human intervention, the plant and animal populations in the surrounding region G can be controlled by humans. For example, if region G exhibits an agricultural monoculture or if predatory animals (wolves, bears, etc.) are systematically eliminated in G, then this will have a different influence on the final ecological state in region V compared to if region G were a wild forest. Note that the ecosystem reached in the long-time limit may be different from the original one before the volcano erupted; there can be many feasible metastable ecosystems that are stable on long time scales in region V, and the one the system settles into might depend on the environmental boundary conditions. We remark that there is actually no reason to assume that these boundary conditions completely determine the final ecosystem in region V; in principle, many long-time metastable ecosystems could exist for the same set of environmental boundary conditions.
While it is clear that there are many fascinating examples of (thermodynamic-like) processes for systems that have complex energy landscapes in fields of science outside the realm of physics and chemistry, we do not want to go into greater detail here as far as these biological, ecological, social, and economic systems are concerned. Considering, e.g., the mathematical formulation of thermo-economics and expounding the correspondence of its variables with those of thermodynamics would require a lengthy presentation which is beyond the purview of this perspective. Nevertheless, it should be clear that the concepts of finite-time thermodynamics are applicable to many of these non-physical systems, and can provide guidance about the optimal route toward the establishment of the desired states in these systems. Conversely, insights gained from dealing with systems in biology, ecology, or economics may inspire new work in the finite-time thermodynamics of chemical and physical systems.