Adaptive Algorithm for the Generation of Superconfigurations in Hot-Plasma Opacity Calculations

In hot plasmas, such as the ones encountered in astrophysics or laser-fusion studies, the number of ionic excited states may become huge, and the relevant electron configurations cannot always be handled individually. The Super Transition Array approach enables one to calculate the mass photo-absorption cross-section (or radiative opacity) in a statistical manner, by grouping configurations that are close in energy into superconfigurations. One of the main issues of the method, beyond its spectral resolution, is the determination of the most relevant configurations contributing to the opacity. In this work, we discuss different aspects of the generation of superconfigurations in a hot plasma and propose a new adaptive algorithm.


Introduction
The radiative opacity is an essential quantity governing the structure and evolution of stars. In plasmas of medium to high-Z elements, some of the electrons remain bound to the ions even at very high temperature (this is for instance the case of iron in the center of the sun). Among the different contributions to opacity of plasmas at local thermodynamic equilibrium (LTE), the photo-excitation process (also called sometimes "line absorption" or bound-bound opacity), consisting of transitions between electronic states of the ions, is of particular importance.
The methods for the evaluation of the line absorption coefficient in hot dense plasmas, even in non-LTE conditions, involve a compromise between the needed spectral resolution and the available computer resources. At one extreme lies the uncompromising method of detailed line accounting (DLA), yielding a highly resolved (or detailed) spectrum, including all photo-absorption lines of all configuration-to-configuration transition arrays. For complex configurations, the numerical cost resulting from the huge number of transitions becomes prohibitive. In this case, one may turn to the unresolved transition arrays (UTA) method [1], which assumes that all lines in the spectrum of each configuration-to-configuration excitation merge into a single effective (super-) line, which can be represented by a Gaussian function. The efficiency of the UTA method comes from the fact that compact formulas are available for the three lowest energy moments (orders 0, 1 and 2) of the line-strength-weighted line energies of a transition array. Such moments are expressed in terms of reduced matrix elements of the dipole operator, Slater integrals, and subshell occupation numbers. The effect of higher-order moments was also investigated, using more sophisticated distributions [2,3]. Sometimes, the number of configurations is so high that the UTA approach and its relativistic counterpart, the SOSA (Spin-Orbit Split Array) approach [4], are not tractable. For that reason, Bar-Shalom et al. developed the STA model, consisting of grouping the huge number of transition lines between an enormous number of configurations into large assemblies, called Super Transition Arrays (STAs) [5]. The STA moments are calculated analytically using a canonical partition function algebra, and the STAs can be split into smaller ones until convergence (of the Rosseland mean, for instance) is achieved.
The method was implemented in the SCO code, and combined with DLA calculations in the SCO-RCG code [6][7][8][9], which is used to interpret high-resolution spectra (see for instance Ref. [10]). The SCO code was developed in two versions, the first one based on the Local Density Approximation [11], the second one on the statistical Hartree-Fock theory [12][13][14].
The STA formalism was recently improved by Kurzweil and Hazak within the framework of the CRSTA (Configurationally Resolved Super Transition Array) model [44][45][46], which enables one to refine the STAs to reveal the underlying structure. The basic idea is an adaptation of the Liouville evolution operator technique to the photo-absorption spectra.
Whatever the variant of the method considered, the choice of the superconfigurations to be included in the calculation is an important issue. In order to obtain converged Rosseland mean opacities, the best possible completeness has to be ensured. This is a difficult point, because each opacity code has its own balance between accuracy (using models as accurate as possible) and completeness (including as many excited states as possible [7,47]). Since it is not realistic to calculate a full-range opacity spectrum for a trial list of superconfigurations, then add a superconfiguration and recompute the spectrum, and so on until convergence of the latter, one has to be predictive. In the present work, we discuss different ways of generating superconfigurations at a given temperature and plasma density, and propose in particular an adaptive "divide and conquer" algorithm.
Estimates of configuration probabilities, based on the average-atom model and/or involving correlated or uncorrelated binomial distributions, are presented in Section 2. The superconfiguration approach is described in Section 3, and the way the superconfigurations can be populated by electrons is described in Section 4. Our adaptive algorithm is presented in Section 5, and results are analyzed in Section 6. Finally, two alternative methods are briefly mentioned and discussed in Section 7: the first consists of generating superconfigurations using the correlated Gaussian approximation for configurations, and the second is the algorithm used in the SCROLL model. This comparison emphasizes the novelty of our approach.

Configuration Probabilities Based on the Average-Atom Model
The real ions in the plasma are in various multi-electron configurations, each one corresponding to a specific set of integer occupation numbers in the bound subshells. Starting from the average-atom picture, it is possible to build a law for the abundance of such ionic species, assuming that

• The configurations are statistically independent,
• A configuration C is entirely defined by the integer occupation numbers q_i in each subshell,
• The total energy of C in terms of the occupation numbers can be expressed using quantities provided by the average-atom calculation.
In the grand canonical ensemble, knowing the form of the configuration energy E(q_1, \ldots, q_N) with 0 ≤ q_i ≤ g_i, 1 ≤ i ≤ N (g_i is the degeneracy of subshell i), the probability law of the configurations reads:

P(q_1, \ldots, q_N) = \frac{1}{U} \prod_{i=1}^{N} \binom{g_i}{q_i} \, e^{-\beta \left[ E(q_1, \ldots, q_N) - \mu \sum_{i=1}^{N} q_i \right]},   (1)

where U is the partition function and µ the chemical potential; we also set β = 1/(k_B T), where k_B denotes Boltzmann's constant and T the temperature. \binom{g_i}{q_i} represents the usual binomial coefficient. The direct summation over all the configurations can become intractable when the number of subshells increases, in particular for large atomic numbers. It is, therefore, essential to make approximations to perform such summations. The energy can be formulated as a quadratic form of the subshell occupation numbers:

E(q_1, \ldots, q_N) = E_0 + \sum_{i=1}^{N} \epsilon_i x_i + \frac{1}{2} \sum_{i,j} V_{ij} x_i x_j,   (2)

with x_i = q_i − ⟨q_i⟩ the deviation of the integer q_i with respect to its average-atom average value ⟨q_i⟩ = g_i f_i, where f_i denotes the Fermi-Dirac factor:

f_i = \frac{1}{1 + e^{\beta(\epsilon_i - \mu)}}.   (3)

The quantity V_{ij} is an interaction matrix element including the dominant Coulomb interaction between states i and j and an exchange-correlation contribution [48]. All the quantities entering Equation (2), except the occupation numbers q_i, are given by the average-atom calculation.
The modeling of occupation-number fluctuations in a hot plasma, based on average-atom results and determination of correlations, has a long history [49][50][51][52][53]. An interesting approach consists of handling the sums over all configurations at once [48,54]. It allows one to calculate the statistical properties of a whole transition array representing a monoelectronic jump. The determination of such statistical properties relies on knowing the correlations between the subshell occupation numbers. In this method, the Gaussian approximation to the binomial coefficients of the configuration degeneracy is used to gather the binomial coefficients and the Boltzmann factor in a single exponential. Using elementary matrix algebra on the argument of this exponential, the problem of finding the correlations reduces to the inversion of a matrix. This boils down to expressions of the configuration free energy, which are quadratic forms of the occupation numbers.

Uncorrelated Probability Law
The simplest possible approximation amounts to neglecting the interaction terms in the energy. The probability then reduces to a product of independent binomial laws, which presents the following advantages:
• The average occupation number of each subshell is a direct result of the average-atom calculation,
• The probability can be factorized shell by shell, making the calculation of averages simple. For instance, the variance of the total number of bound electrons reads

V_B = \sum_{i=1}^{N} V_i,   (5)

V_i = g_i f_i (1 - f_i)   (6)

being the uncorrelated variance of the number of electrons in subshell i.
The latter approximation (Equations (5) and (6)) tends to overestimate the variance V_B, leading to erroneous distribution widths, with a dramatic impact on Rosseland mean opacities. Better approximations must account for the quadratic interaction terms (correlations) in the energy.
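The uncorrelated binomial law and the resulting variance V_B can be sketched in a few lines of Python; the subshell data (degeneracies, energies, temperature and chemical potential) below are hypothetical placeholders, not actual average-atom output:

```python
from math import comb, exp

def fermi(eps, mu, beta):
    """Fermi-Dirac factor f = 1 / (1 + exp(beta * (eps - mu))), Equation (3)."""
    return 1.0 / (1.0 + exp(beta * (eps - mu)))

def shell_binomial(q, g, f):
    """Probability of q electrons in a subshell of degeneracy g with
    average fractional occupation f (independent binomial law)."""
    return comb(g, q) * f**q * (1.0 - f)**(g - q)

# Hypothetical average-atom output: (degeneracy, one-electron energy) pairs
shells = [(2, -10.0), (6, -4.0), (10, -1.5)]
beta, mu = 1.0 / 3.0, -2.0          # k_B T = 3 in arbitrary energy units

fs = [fermi(eps, mu, beta) for g, eps in shells]

# Uncorrelated variance of the total number of bound electrons:
# V_B = sum_i g_i f_i (1 - f_i)
V_B = sum(g * f * (1.0 - f) for (g, _), f in zip(shells, fs))

# Each shell's binomial law is normalized to unity:
for (g, _), f in zip(shells, fs):
    assert abs(sum(shell_binomial(q, g, f) for q in range(g + 1)) - 1.0) < 1e-12
```

The mean of each binomial factor is g_i f_i, so the average occupation numbers of the average atom are recovered shell by shell.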

Correlated Probability Law: Integral Transformation and High Temperature Limit
To deal with the statistical mechanics of systems associated with the quadratic energy in Equation (2), a clever method was proposed, making use of an integral transformation [55,56]. The quadratic term in the Boltzmann factor in Equation (1) is replaced with a linear one, at the cost of a multiple integration over auxiliary variables y_i, 1 ≤ i ≤ N. The consequence is that the Boltzmann factor has the same form as in the uncorrelated case, and the probability can be written as a product of factors, each one corresponding to one subshell. The method is thus useful only if the multiple integral is easily approximated, but it reaches its numerical limits when accurate results are needed. Green proposed a treatment of the interacting system applicable at high temperatures, when βV_ij ≪ 1 [57,58], consisting of computing the correlation factors up to second order in βV_ij. The technique, rather cumbersome to implement, can lead to serious practical issues, since the accuracy of the obtained expressions decreases as the electron correlations increase. If the temperature is too low, the occupation-number variances can even become negative, which of course makes the method inapplicable.

Correlated Gaussian Approximations
The correlated Gaussian approximations avoid the difficulties of the previous approaches. They treat the occupation-number deviations x_i as continuous variables, which is equivalent to replacing the binomial law by its Gaussian continuous limit, an approximation justified when the degeneracies of the electron subshells are large and when the number of electrons in a subshell is close neither to zero nor to its degeneracy (by virtue of the central-limit theorem [59]). In that case, the probability law of Equation (1) becomes

P(\vec{x}) \propto e^{-\frac{1}{2} \vec{x}^{\,T} M \vec{x}},   (7)

where \vec{x} is the vector with components x_i and M the N × N correlation matrix. The elements of the positive definite matrix M are [48,56]:

M_{ij} = \frac{\delta_{ij}}{V_i} + \beta V_{ij},   (8)

and its eigenvalues are noted ω_i^{-2}. Equation (7) becomes

P(\vec{y}) \propto \prod_{i=1}^{N} e^{-y_i^2 / (2 \omega_i^2)},   (9)

where \vec{y} is defined as \vec{x} = S \vec{y}, S being the unitary transformation diagonalizing M. The equivalent of Equation (5) then reads

V_B = \sum_{j=1}^{N} \omega_j^2 \left( \sum_{i=1}^{N} S_{ij} \right)^2.   (10)

The correlated Gaussian approximation described above is very easy to implement. Its results are in excellent agreement with the exact calculation. For instance, in the case of a plasma at an electronic temperature T_e = 100 eV and solid density, Wilson found that it is accurate to better than 1% [55]. We confirmed such an estimate, and found that in most usual situations encountered in laser-fusion experiments or stellar physics, the accuracy is usually of the order of a few percent. The method can be improved by combining the discrete variation of integer occupation numbers in a small number of subshells (the active ones and spectator subshells strongly coupled to them) and the continuous variation of occupation numbers in the other (spectator) subshells.
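A minimal two-subshell sketch shows how the correlated Gaussian approximation modifies the variance of the bound-electron number: the variance of the total is the sum of all elements of the inverse of the matrix M. The matrix form M_ij = δ_ij/V_i + βV_ij and all numerical values below are assumptions chosen for illustration, not output of an actual average-atom calculation:

```python
# Two-subshell illustration of the correlated Gaussian approximation:
# Var(N_B) = u^T M^{-1} u with u = (1, 1), assuming M_ij = delta_ij / V_i + beta * V_ij.
beta = 1.0 / 3.0
V = [1.2, 2.1]               # uncorrelated subshell variances g_i f_i (1 - f_i)
Vij = [[0.0, 0.5],           # hypothetical interaction matrix elements V_ij
       [0.5, 0.0]]

M = [[(1.0 / V[i] if i == j else 0.0) + beta * Vij[i][j]
      for j in range(2)] for i in range(2)]

# Explicit 2x2 inverse
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
Minv = [[ M[1][1] / det, -M[0][1] / det],
        [-M[1][0] / det,  M[0][0] / det]]

V_B_corr = sum(Minv[i][j] for i in range(2) for j in range(2))
V_B_unc = sum(V)             # uncorrelated estimate, Equation (5)

# Repulsive correlations (V_ij > 0) narrow the distribution:
assert V_B_corr < V_B_unc
```

Diagonalizing M (instead of inverting it) gives the ω_i² of Equation (9); for two subshells the explicit inverse is simpler and equivalent.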

Subshells, Configurations
Let us consider the ensemble E of potential bound subshells (n_i, ℓ_i) for electronic configurations of ions in LTE conditions. The occupation numbers of the subshells satisfy 0 ≤ q_i ≤ g_i and ∑_{i∈E} q_i ≤ Z. We define

Z^* = Z - \sum_{i \in E} q_i,   (11)

the ionization corresponding to that configuration.

Supershells, Superconfigurations
Let B = {σ} be a partition of the ensemble of bound states E into n_B supershells:

E = \bigcup_{\sigma \in B} \sigma.   (12)

A supershell σ ∈ B gathers an ensemble of subshells satisfying a given criterion. For instance, two subshells i and j belong to the same supershell σ if their energies are sufficiently (in a sense to be precisely defined) close:

\beta \left| \epsilon_i - \epsilon_j \right| \leq \eta_\sigma,   (13)

where ε_i and ε_j are the energies of subshells i and j, and η_σ is the "width" (in reduced energy, i.e., energy divided by k_B T or multiplied by β) of the supershell σ of median energy ε_σ. A superconfiguration is defined by a set of integer occupation numbers {Q_σ} of the supershells. The occupation numbers satisfy 0 ≤ Q_σ ≤ G_σ, where G_σ = ∑_{i∈σ} g_i represents the degeneracy of supershell σ, and

\sum_{\sigma \in \Xi} Q_\sigma \leq Z.   (14)

We write Z^* = Z − ∑_{σ∈Ξ} Q_σ, the ionization corresponding to that superconfiguration Ξ. A superconfiguration is then an ensemble of configurations that are close in energy. In practice, a good choice is to group configurations C with energies differing by less than k_B T. The superconfigurations are made of supershells σ containing several ordinary shells, and the total occupation number of each supershell is fixed. For instance,

1s^2 \, 2s^2 \, 2p^5 \, (3s, 3p, 3d)^7 \, (4s, 4p, 4d)^3 \, (4f, 5s, 5p)^1   (15)

is a superconfiguration for which the first three supershells coincide with the normal lowest shells, while each one of the others contains three shells supposed to be close in energy. The choice of the shells to group in a supershell is made on the basis of the average-atom eigenvalues. In each supershell of Ξ, the total number of electrons is fixed. For instance, (3s, 3p, 3d)^7 contains 20 configurations, corresponding to the number of possibilities to distribute seven electrons in subshells 3s, 3p and 3d, i.e.,

\sum_{q_1=0}^{2} \sum_{q_2=0}^{6} \sum_{q_3=0}^{10} \delta_{q_1 + q_2 + q_3,\, 7} = 20.   (16)
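The 20 configurations of (3s, 3p, 3d)^7 quoted above can be checked by brute-force enumeration, as in the following Python sketch (practical only for small supershells):

```python
from itertools import product

def configurations(degeneracies, Q):
    """All integer occupation vectors (q_1, ..., q_N) with 0 <= q_i <= g_i
    and sum q_i = Q, i.e. the configurations making up a supershell
    population (sigma)^Q."""
    return [q for q in product(*(range(g + 1) for g in degeneracies))
            if sum(q) == Q]

# (3s, 3p, 3d)^7: degeneracies 2, 6 and 10, seven electrons
confs = configurations([2, 6, 10], 7)
assert len(confs) == 20     # the 20 configurations quoted in the text
```

For realistic supershells, this Cartesian-product enumeration explodes combinatorially, which is precisely why the explicit formulas and recurrence relations discussed below are needed.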

Linearization of the Energy of a Superconfiguration
In Section 2, the energy of any configuration of the plasma was written as a quadratic form of the occupation numbers, its coefficients being obtained from a single self-consistent average-atom calculation. The difficulty relied on the statistical description of the system, while the electronic structure was obtained at a low numerical cost. The superconfiguration method relies on another choice: instead of keeping a unique quadratic expression for the total energy of any configuration, it uses several linear expressions, each one being valid for a superconfiguration (an ensemble of configurations), for which the electronic structure is computed independently. With this assumption, the statistical mechanics of the problem is simpler, thanks to the partition function algebra, but calculating the electronic structure becomes much more expensive, especially if a self-consistent calculation (similar to the average-atom one but with integer occupation numbers) is performed for each superconfiguration. An advantage of the method is that the refinement of the electronic structure description also improves the accuracy of the spectral opacity.

Number of Configurations, Weight of a Superconfiguration
The total number of micro-states (i.e., nondegenerate ion states), noted M_σ(Q_σ, {g}_σ), satisfies, for each supershell σ (using the Vandermonde identity for the second equality):

M_\sigma(Q_\sigma, \{g\}_\sigma) = \sum_{q_1 + \cdots + q_{N_\sigma} = Q_\sigma} \prod_{i \in \sigma} \binom{g_i}{q_i} = \binom{G_\sigma}{Q_\sigma},   (17)

where {g}_σ represents the list of degeneracies of the subshells belonging to the supershell σ. Therefore, the total number of micro-states in that superconfiguration is equal to

M_\Xi = \prod_{\sigma \in \Xi} \binom{G_\sigma}{Q_\sigma}.   (18)

The probability of a superconfiguration Ξ, which takes into account the thermodynamics at LTE, can be expressed through the Boltzmann factors X_i = e^{−βε_i} of the subshells. Its statistical weight is proportional to the quantity

\prod_{\sigma \in \Xi} U_{Q_\sigma}(\sigma),   (19)

where the partition functions are defined by the relation

U_{Q_\sigma}(\sigma) = \sum_{q_1 + \cdots + q_{N_\sigma} = Q_\sigma} \prod_{i \in \sigma} \binom{g_i}{q_i} X_i^{q_i}.   (20)

The number of configurations contained in (σ)^{Q_σ}, where σ contains N_σ subshells of respective degeneracies g_1, g_2, . . . , g_{N_σ}, is (see Ref. [60]):

N\left[(\sigma)^{Q_\sigma}\right] = \sum_{S \subseteq \sigma} (-1)^{|S|} \binom{Q_\sigma - \sum_{i \in S} (g_i + 1) + N_\sigma - 1}{N_\sigma - 1},   (21)

the sum running over the subsets S of σ for which the upper argument of the binomial coefficient is non-negative. Let us consider the case where σ is made of n_1 orbitals with the same degeneracy g_1 and n_2 orbitals with the same degeneracy g_2, with N_σ = n_1 + n_2. For instance, (2p3p4p3d4d)^{10} corresponds to g_1 = 6, g_2 = 10, n_1 = 3, n_2 = 2 and Q_σ = 10. One has

N\left[(\sigma)^{Q_\sigma}\right] = \sum_{p=0}^{Q_\sigma} A(n_1, g_1, p) \, A(n_2, g_2, Q_\sigma - p),   (22)

where A(n, g, p) denotes the number of ways of distributing p electrons among n subshells of common degeneracy g. If we generalize and gather the n_1 subshells of degeneracy g_1, the n_2 subshells of degeneracy g_2, . . . , the n_s subshells of degeneracy g_s (with therefore n_1 + n_2 + · · · + n_s = N_σ) [60], we obtain

N\left[(\sigma)^{Q_\sigma}\right] = \sum_{p_1 + p_2 + \cdots + p_s = Q_\sigma} \prod_{t=1}^{s} A(n_t, g_t, p_t).   (23)

The latter Formulas (21)-(23) show that it is possible to resort to an explicit formula for the number of configurations, avoiding the multiple nested summations (see Equation (16)), which become numerically costly for a large number of supershells containing many subshells. The most efficient way to compute the number of configurations consists of resorting to recurrence relations (see for instance Ref. [60]). Although the combinatoric number of configurations can be huge even for mid-Z elements, the number of configurations with non-negligible populations is much lower and depends on temperature and density.
Krief recently published two useful methods for the estimation of the number of populated configurations: the first relies on an exact calculation of the total combinatoric number of configurations within superconfigurations in a converged STA calculation, and the second involves an estimate of the multidimensional width of the probability distribution of the electronic occupation numbers over bound subshells [61]. The latter work presents a detailed discussion of the effects of temperature and density on the number of populated configurations.
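The two routes mentioned above, recurrence relations and an explicit inclusion-exclusion formula, can be cross-checked on the (2p3p4p3d4d)^10 example. The function names below are ours, and the inclusion-exclusion variant is written for arbitrary (not necessarily equal) degeneracies:

```python
from itertools import product
from math import comb

def n_configs_recurrence(degeneracies, Q):
    """Number of configurations in (sigma)^Q, adding one subshell at a
    time: N_k(q) = sum_{j=0}^{g_k} N_{k-1}(q - j)."""
    counts = {0: 1}                  # zero subshells: only the empty configuration
    for g in degeneracies:
        new = {}
        for q, n in counts.items():
            for j in range(g + 1):
                new[q + j] = new.get(q + j, 0) + n
        counts = new
    return counts.get(Q, 0)

def n_configs_inclusion_exclusion(degeneracies, Q):
    """Same count from the explicit inclusion-exclusion sum over subsets
    of subshells whose upper occupation bound is violated."""
    N = len(degeneracies)
    total = 0
    for mask in product((0, 1), repeat=N):
        shift = sum(m * (g + 1) for m, g in zip(mask, degeneracies))
        if Q - shift >= 0:
            total += (-1) ** sum(mask) * comb(Q - shift + N - 1, N - 1)
    return total

# (2p 3p 4p 3d 4d)^10: three subshells of degeneracy 6, two of degeneracy 10
degs = [6, 6, 6, 10, 10]
assert n_configs_recurrence(degs, 10) == n_configs_inclusion_exclusion(degs, 10)
```

The recurrence touches each subshell once and is the more economical route for supershells with many subshells, as noted in the text.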

Number of Superconfigurations Associated with a Partition B
For a given partition B in fixed supershells, the number of possible superconfigurations is

N_B = \prod_{\sigma \in B} (G_\sigma + 1).   (24)

Populating a Partition in Superconfigurations Ξ from the Average-Atom Results
For a fixed partition B = {σ}, we estimate N_B from the average-atom results, obtained a priori. We assume that the ensemble E of possible subshells for any configuration is included in the average-atom one. The average-atom calculation consists of a self-consistent computation of the energies ε_i of the subshells, the average (fractional) occupation numbers ⟨q_i⟩ of the subshells and their associated variances V_i, and the chemical potential µ, connected to each other by the relations

\langle q_i \rangle = g_i f_i, \qquad V_i = g_i f_i (1 - f_i),   (25)

where f_i depends only on ε_i and µ (see Equation (3)).

Calculation of N B
For each supershell σ of the partition B, we define, for the average atom, the degeneracy, the average occupation number and the variance, by summing over the subshells it contains:

G_\sigma = \sum_{i \in \sigma} g_i, \qquad \bar{Q}_\sigma = \sum_{i \in \sigma} \langle q_i \rangle, \qquad V_\sigma = \sum_{i \in \sigma} V_i.   (26)

We then generate the possible (integer) occupation numbers Q_σ under the constraints 0 ≤ Q_σ ≤ G_σ and ∑_{σ∈Ξ} Q_σ ≤ Z. Since the superconfigurations to generate are at local thermodynamic equilibrium, one can restrict the first constraint to an interval around the average occupation number \bar{Q}_\sigma of the average atom, characterized by an integer number n_σ of standard deviations:

\bar{Q}_\sigma - n_\sigma \sqrt{V_\sigma} \leq Q_\sigma \leq \bar{Q}_\sigma + n_\sigma \sqrt{V_\sigma}.   (27)

In practice, one defines

m_\sigma = \max\left(0, \left\lceil \bar{Q}_\sigma - n_\sigma \sqrt{V_\sigma} \right\rceil\right), \qquad M_\sigma = \min\left(G_\sigma, \left\lfloor \bar{Q}_\sigma + n_\sigma \sqrt{V_\sigma} \right\rfloor\right),   (28)

where Υ_σ = M_σ − m_σ + 1 is the total number of ways to choose an integer Q_σ satisfying m_σ ≤ Q_σ ≤ M_σ, and the total number of possible superconfigurations for the partition B is

N_B = \prod_{\sigma \in B} \Upsilon_\sigma.   (29)

The quantity N_B in Equation (29) represents the number of essentially populated superconfigurations, while the quantity N_B in Equation (24) denotes the total number of superconfigurations, irrespective of their populations. It is important to note that the number of superconfigurations counted according to Equations (28) and (29) is an overestimate.
If the occupation numbers of several supershells are simultaneously close to m σ or M σ determined from Equation (28), the populations of corresponding superconfigurations are negligible and the latter can be safely disregarded.
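A sketch of this enumeration, assuming hypothetical average-atom values (G_σ, Q̄_σ, V_σ) for three supershells and a window of n_σ = 3 standard deviations:

```python
from itertools import product
from math import sqrt, ceil, floor

def supershell_windows(supershells, n_sigma):
    """Occupation windows [m_sigma, M_sigma] per supershell, built from the
    average-atom degeneracy G, average occupation Qbar and variance V."""
    windows = []
    for G, Qbar, V in supershells:
        m = max(0, ceil(Qbar - n_sigma * sqrt(V)))
        M = min(G, floor(Qbar + n_sigma * sqrt(V)))
        windows.append(range(m, M + 1))
    return windows

def generate_superconfigurations(supershells, Z, n_sigma=3):
    """Enumerate {Q_sigma} with m_sigma <= Q_sigma <= M_sigma and
    sum Q_sigma <= Z (the charge constraint)."""
    return [Q for Q in product(*supershell_windows(supershells, n_sigma))
            if sum(Q) <= Z]

# Hypothetical average-atom summary for three supershells: (G, Qbar, V)
shells = [(10, 9.6, 0.35), (18, 6.2, 2.8), (32, 1.1, 1.0)]
scs = generate_superconfigurations(shells, Z=26)
assert all(sum(Q) <= 26 for Q in scs)
```

The product of the window lengths reproduces the count of Equation (29); the charge constraint then prunes the list, which is one reason that count is an overestimate.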

Estimating Statistical Weights
In each superconfiguration calculated self-consistently, the energy is accurately approximated by the linear form

E_\Xi(C) = E_\Xi^{(0)} + \sum_i q_i \, \epsilon_i^{(\Xi)},   (30)

where E_\Xi^{(0)} is an average energy independent of the occupation numbers, and ε_i^{(Ξ)} is the self-consistent one-electron energy in subshell i of Ξ. In the grand canonical ensemble, the probability law takes the form

P_\Xi(q_1, \ldots, q_N) = \frac{1}{U_\Xi} \prod_i \binom{g_i}{q_i} e^{-\beta \left( \epsilon_i^{(\Xi)} - \mu \right) q_i},   (31)

where P_Ξ is an uncorrelated probability relative to a specific superconfiguration Ξ. In analogy with the ideal-electron-gas partition functions, Bar-Shalom et al. derived recurrence relations for calculating analytically all the sums required in the needed averages [5]. Hole counting is used instead of electron counting in order to avoid loss of accuracy when the number of electrons is larger than half the total degeneracy of the supershell [12].
One of the limitations of the superconfiguration method is that every non-linear term in the configuration free energy has to be averaged over all configurations belonging to the same superconfiguration. Another limitation is that depending on the supershells that are considered, the superconfiguration set resulting from that choice can still be large.
The N_B superconfigurations are generated by covering all the possible partitions of the electrons in the supershells. For each superconfiguration Ξ = {Q_σ}, its statistical weight is estimated using

W_\Xi = U_\Xi \, e^{-\beta \Lambda_\Xi},   (32)

where

U_\Xi = \prod_{\sigma \in \Xi} U_{Q_\sigma}(\sigma)   (33)

is the partition function for the set of occupation numbers of that superconfiguration, and Λ_Ξ is an estimate of the free energy of that superconfiguration. A rigorous calculation of the statistical weight implies a self-consistent computation of the electronic structure of each superconfiguration, which is precisely what we wish to avoid at this stage. A satisfactory estimate of the statistical weight is obtained with the following assumptions:
• In the calculation of the statistical sum U_Ξ, the one-electron energies (and consequently the Boltzmann factors X_i = e^{−βε_i}) are approximated by those of the average atom.
• The free energy of the superconfiguration is estimated by the relation Λ_Ξ ≈ µZ*_Ξ, Z*_Ξ = Z − ∑_{σ∈Ξ} Q_σ being its ionization and µ the average-atom chemical potential.
The N_B superconfigurations are then sorted by decreasing weights. Finally, the list can be truncated by eliminating the superconfigurations with a weak weight, using a criterion based either on a maximum number or on a minimum weight.
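The supershell partition functions entering the weight estimate obey the same kind of subshell-by-subshell recurrence as the configuration counts. A minimal sketch of our own (not the hole-counting implementation of Ref. [5]):

```python
from math import comb

def supershell_partition_function(degs, Xs, Q):
    """U_Q(sigma) = sum over {q_i : sum q_i = Q} of
    prod_i C(g_i, q_i) X_i^{q_i}, built with the recurrence
    U^{(n)}_Q = sum_q C(g_n, q) X_n^q U^{(n-1)}_{Q-q}."""
    U = {0: 1.0}                       # zero subshells: empty product
    for g, X in zip(degs, Xs):
        new = {}
        for Qp, u in U.items():
            for q in range(g + 1):
                new[Qp + q] = new.get(Qp + q, 0.0) + comb(g, q) * X**q * u
        U = new
    return U.get(Q, 0.0)

# Sanity check against the Vandermonde limit: for equal Boltzmann factors
# X_i = X, the sum collapses to C(G, Q) X^Q.
degs, X, Q = [2, 6, 10], 0.3, 7
G = sum(degs)
U = supershell_partition_function(degs, [X] * len(degs), Q)
assert abs(U - comb(G, Q) * X**Q) < 1e-9 * U
```

In a production code, hole counting and careful scaling are needed to avoid accuracy loss for nearly full supershells, as the text points out.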

Algorithm "Divide and Conquer"
We are looking for a partition B of the ensemble of bound states into supershells, providing a compromise between constraints on the energies of the subshells of the kind β|ε_i − ε_j| ≤ η for i, j ∈ σ, and the generation of a reasonable number of superconfigurations. We seek B such that η is minimal under the constraint N_B ≤ N_max, where N_max is the maximum number of superconfigurations that we allow. For a fixed partition B = {σ}, let us define

\Delta_\sigma = \max_{i \in \sigma} \epsilon_i - \min_{i \in \sigma} \epsilon_i,   (34)

where the ε_i are the energies of the average-atom model. For a fixed value of η, we try to find a partition B such that

\max_{\sigma \in B} \beta \Delta_\sigma \leq \eta.   (35)

For i, j ∈ σ, we therefore have β|ε_i − ε_j| ≤ η. It is worth mentioning that the condition

\left| \frac{X_i - X_j}{X_j} \right| \leq \eta, \qquad X_i = e^{-\beta \epsilon_i},   (36)

is equivalent, at first order, to β|ε_i − ε_j| ≤ η. We set up two mechanisms:
• A splitting procedure ("Split") of a supershell σ into two supershells σ_1 and σ_2:

\sigma \rightarrow (\sigma_1, \sigma_2).   (37)

Such a mechanism is activated if β∆_σ > η (violated energy criterion). Its effect is to reduce the energy differences, but it increases the size of the partition by one. The number of superconfigurations to generate is multiplied by the factor

\frac{\Upsilon_{\sigma_1} \Upsilon_{\sigma_2}}{\Upsilon_\sigma}.   (38)

• A concatenation (gathering) mechanism of two consecutive supershells σ_1 and σ_2 ("Merge"):

(\sigma_1, \sigma_2) \rightarrow \sigma.

Such a mechanism is activated if N_B > N_max. Its effect is to reduce the size of the partition by one; the number of superconfigurations to generate is divided by the factor (38), but the energy differences inside the resulting supershell increase:

\Delta_\sigma \geq \Delta_{\sigma_1} + \Delta_{\sigma_2}.

Algorithm "Divide and Conquer": General Procedure
• Phase 1: gathering the supershells. As long as the number of superconfigurations to generate N_B is larger than an imposed maximal value N_max, we try to reduce the number of supershells n_B of the partition. More precisely, we look for the two consecutive supershells σ and σ + 1 such that the combined energy dispersion (spread) ∆_σ + ∆_{σ+1} is minimal. If β(∆_σ + ∆_{σ+1}) ≤ η, we gather the states of these two supershells into a unique supershell σ̃, with characteristics

G_{\tilde{\sigma}} = G_\sigma + G_{\sigma+1}, \qquad \bar{Q}_{\tilde{\sigma}} = \bar{Q}_\sigma + \bar{Q}_{\sigma+1}, \qquad V_{\tilde{\sigma}} = V_\sigma + V_{\sigma+1}.

Otherwise, phase 1 ends.
• Phase 2: splitting the supershells. As long as there are supershells such that β∆_σ > η, and as long as the number of supershells n_B remains smaller than an imposed maximum value n_max, we look for the supershell whose energy dispersion ∆_σ is maximal. That supershell is then split into two supershells, each one containing half of the states.
• Phase 3: relaxation of the criterion: η → η + δη. This phase is necessary if no partition of the subshells yields several superconfigurations while remaining below the critical value N_max.
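The procedure above can be illustrated by a greedy sketch; the version below implements only the splitting phase (Phase 2), splitting each supershell at its midpoint as described, and all average-atom eigenvalues are hypothetical:

```python
def partition_supershells(energies, beta, eta, n_max):
    """Greedy sketch of Phase 2: start from one supershell containing
    all (sorted) subshell energies, and split the supershell of largest
    reduced spread beta * Delta while it exceeds eta and fewer than
    n_max supershells exist."""
    shells = [sorted(energies)]

    def spread(s):
        return beta * (s[-1] - s[0])

    while len(shells) < n_max:
        worst = max(shells, key=spread)
        if spread(worst) <= eta:
            break                      # energy criterion satisfied everywhere
        shells.remove(worst)
        mid = len(worst) // 2          # each half keeps half of the states
        shells.append(worst[:mid])
        shells.append(worst[mid:])
    return sorted(shells)

# Hypothetical average-atom eigenvalues (in units of k_B T, so beta = 1)
eps = [-90.0, -12.0, -11.0, -10.0, -3.1, -2.9, -2.7, -0.4, -0.3]
parts = partition_supershells(eps, beta=1.0, eta=1.0, n_max=8)
assert all(p[-1] - p[0] <= 1.0 for p in parts)
```

Phase 1 (merging) and Phase 3 (relaxing η) would wrap this loop to also enforce the bound N_B ≤ N_max on the number of generated superconfigurations.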

Energy Criterion
It is possible to define an alternative criterion to replace Equation (35). The partition function reads

U_{Q_\sigma}(\sigma) = \sum_{q_1 + \cdots + q_{N_\sigma} = Q_\sigma} \prod_{i \in \sigma} \binom{g_i}{q_i} X_i^{q_i}.

Linearizing the Boltzmann factors around an arbitrary energy ε_σ of the supershell, X_i = e^{-\beta \epsilon_\sigma} \left[ 1 - \beta (\epsilon_i - \epsilon_\sigma) + \cdots \right], we obtain

U_{Q_\sigma}(\sigma) \approx e^{-\beta Q_\sigma \epsilon_\sigma} \sum_{q_1 + \cdots + q_{N_\sigma} = Q_\sigma} \prod_{i \in \sigma} \binom{g_i}{q_i} \left[ 1 - \beta \sum_{i \in \sigma} q_i (\epsilon_i - \epsilon_\sigma) \right],

which can be put in the form

U_{Q_\sigma}(\sigma) \approx e^{-\beta Q_\sigma \epsilon_\sigma} \, M_\sigma(Q_\sigma, \{g\}_\sigma) \left[ 1 - \beta \frac{Q_\sigma}{G_\sigma} \sum_{i \in \sigma} g_i (\epsilon_i - \epsilon_\sigma) \right],

where M_σ(Q_σ, {g}_σ) represents the number of micro-states introduced in Equation (17), the second expression following from the relation

\sum_{q_1 + \cdots + q_{N_\sigma} = Q_\sigma} q_i \prod_{j \in \sigma} \binom{g_j}{q_j} = \frac{g_i Q_\sigma}{G_\sigma} \binom{G_\sigma}{Q_\sigma}.

The condition therefore becomes

\beta \frac{Q_\sigma}{G_\sigma} \left| \sum_{i \in \sigma} g_i (\epsilon_i - \epsilon_\sigma) \right| \leq \eta.

Such a formulation is interesting, since it provides an insight into the way the partition function is involved in the criterion. It is also of practical interest, since ε_σ is arbitrary and can therefore help avoid numerical problems. Finally, contrary to Equation (35), the subshells have to be scanned only once.

Link to the Master Theorem
The above-mentioned procedure belongs to the class of "divide and conquer" algorithms, whose structure and complexity are governed by the so-called "master theorem" [62]. Consider a problem that can be solved by a recursive algorithm dividing the problem into several sub-problems, each of size n/b. The master theorem yields asymptotically tight bounds for recurrences of divide-and-conquer algorithms that partition an input into smaller sub-problems of equal size, solve the sub-problems recursively, and then combine the sub-problem solutions into a solution of the original problem. The master theorem states that the running time of such an algorithm is the work performed at the top level of the recursion (dividing the problem into sub-problems and combining the sub-problem solutions), plus the time spent in the recursive calls.
If T(n) denotes the total time for the algorithm on an input of size n, and f(n) denotes the amount of time taken at the top level of the recurrence, then the time can be expressed by a recurrence relation of the form

T(n) = a \, T\!\left( \frac{n}{b} \right) + f(n).

Here n is the size of the input problem, a is the number of sub-problems in the recursion, and b > 1 is the factor by which the sub-problem size is reduced in each recursive call. This equation can be substituted into itself successively and expanded to obtain an expression for the total amount of work done.
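A toy numerical check of the recurrence, assuming unit work at the leaves (T(1) = 1) and linear top-level work:

```python
def T(n, a, b, f):
    """Total work for the divide-and-conquer recurrence
    T(n) = a * T(n / b) + f(n), unrolled down to T(1) = 1."""
    if n <= 1:
        return 1
    return a * T(n // b, a, b, f) + f(n)

# a = b = 2 with f(n) = n gives the familiar Theta(n log n) behaviour:
work = T(1024, 2, 2, lambda n: n)
assert work == 1024 * 10 + 1024   # n * log2(n) recursion work + n leaves
```

For the supershell partition, a = 2 and b = 2 correspond to the midpoint split of Phase 2 above.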

Generating Superconfigurations Using the Correlated Gaussian Approximation for Configurations
The following algorithm was suggested by Faussurier ([36] and private communication). The charge state distribution can be put in the form [48,64,65]

P(Z^*) \approx \frac{1}{\sqrt{2 \pi V_B}} \, e^{-(Z^* - \bar{Z}^*)^2 / (2 V_B)}.   (62)

The idea is to keep the ions between ⌊Z̄* − α√V_B⌋ and min(Z, ⌊Z̄* + α√V_B⌋), where ⌊x⌋ represents the integer part of x. For each supershell, the occupation number is allowed to vary between Q_{σ,min} and Q_{σ,max}, defined as

Q_{\sigma, \min} = \max\left( 0, \left\lfloor \bar{Q}_\sigma - \alpha \sqrt{V_\sigma} \right\rfloor \right) \quad \text{and} \quad Q_{\sigma, \max} = \min\left( G_\sigma, \left\lceil \bar{Q}_\sigma + \alpha \sqrt{V_\sigma} \right\rceil \right),

where \bar{Q}_\sigma and V_\sigma are the supershell average occupation number and variance built from the average-atom quantities. To define the supershells, one has to find the intersection point between the modeled energy distributions of the subshells i and i + 1:

D_i(\epsilon) \propto e^{-(\epsilon - \epsilon_i)^2 / (2 V_i)}   (66)

and

D_{i+1}(\epsilon) \propto e^{-(\epsilon - \epsilon_{i+1})^2 / (2 V_{i+1})},   (67)

the overlap ξ_i of the two distributions being measured by the weight lying beyond their intersection point. The physical meaning of Equations (66) and (67) may seem questionable, since they are energy distributions while the variances V_i and V_{i+1} are occupation-number variances. This approximation, however, can be justified by considering the continuous variation of the energy of the occupied subshell as proportional to its occupation number. If ξ_i → 0, the overlap tends to be weak, and if ξ_i → 1 it becomes difficult to distinguish between the two distributions D_i and D_{i+1}. The chain is broken in increasing order of the ξ_i. Let us consider that the length of link i is αξ_i. At each step, the supershell containing the largest link is cut (see Figure 3).
The advantage of such an approach is that the average-atom model and the fluctuation theory enable one to know a priori the order in which the supershells will be built, as well as the number of superconfigurations obtained at each step. A proper estimate of the electronic correlations is therefore required, in order to avoid an underestimate of the energy separation between subshells and an overestimate of the number of superconfigurations. The parameter α (typically between 1 and 10) must be chosen by the user. The relative population of a superconfiguration can be estimated by the product of Gaussian distributions such as the one of Equation (62), attributed to the different supershells. If the population is smaller than a given small parameter, it can be disregarded, although it belongs to the initial set resulting from Equations (28) and (29).
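A sketch of the chain-breaking procedure; the Gaussian-overlap measure and the intersection-point estimate below are our simplified stand-ins for Equations (66) and (67), and all subshell data are hypothetical:

```python
from math import erf, sqrt

def overlap(eps1, V1, eps2, V2):
    """Simplified overlap measure xi between the modeled Gaussian energy
    distributions of two consecutive subshells (eps1 < eps2): the total
    weight of both Gaussians lying beyond their intersection point.
    xi -> 0: well separated; xi -> 1: indistinguishable."""
    s1, s2 = sqrt(V1), sqrt(V2)
    # crude intersection estimate (exact midpoint in the equal-variance case)
    x = (eps1 * s2 + eps2 * s1) / (s1 + s2)
    tail1 = 0.5 * (1.0 - erf((x - eps1) / (s1 * sqrt(2.0))))
    tail2 = 0.5 * (1.0 - erf((eps2 - x) / (s2 * sqrt(2.0))))
    return tail1 + tail2

def build_supershells(subshells, n_cuts):
    """Break the chain of consecutive subshells at the n_cuts links of
    smallest overlap, i.e., in increasing order of the xi_i."""
    xis = [overlap(subshells[i][0], subshells[i][1],
                   subshells[i + 1][0], subshells[i + 1][1])
           for i in range(len(subshells) - 1)]
    cuts = sorted(sorted(range(len(xis)), key=lambda i: xis[i])[:n_cuts])
    groups, start = [], 0
    for i in cuts:
        groups.append(subshells[start:i + 1])
        start = i + 1
    groups.append(subshells[start:])
    return groups

# Hypothetical subshells as (energy, occupation-number variance) pairs
subshells = [(-50.0, 0.2), (-12.0, 0.5), (-11.5, 0.6), (-3.0, 0.9), (-2.8, 1.0)]
groups = build_supershells(subshells, n_cuts=2)
# The two widest energy gaps become supershell boundaries:
assert [len(g) for g in groups] == [1, 2, 2]
```

Cutting links one at a time, as in the text, would reproduce the same grouping while exposing the intermediate supershell structures.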

The Scroll Algorithm
The following algorithm was proposed by Bar-Shalom et al. It was designed for the collisional-radiative superconfiguration code SCROLL (Super Configuration Radiative cOLLisional) [66][67][68]. The code takes into account the numerous excited and autoionizing states using superconfigurations, which are refined in a stepwise, systematic manner until some convergence is reached. When the superconfiguration structure is fixed, the occupation numbers of the superconfigurations are obtained by solving the collisional-radiative-model rate equations:

\frac{dN_\Xi}{dt} = \sum_{\Xi'} \left( N_{\Xi'} R_{\Xi' \Xi} - N_\Xi R_{\Xi \Xi'} \right) = 0,   (70)

where N_Ξ are the superconfiguration populations and R_{ΞΞ'} are the superconfiguration transition rates, averaged over the initial configurations C ∈ Ξ and summed over all final configurations C' ∈ Ξ'. A basic initial assumption (relaxed in the course of the computation) in this model is that, within a superconfiguration, the configuration populations N_C are estimated by a Boltzmann factor. The superconfiguration-averaged rates can thus be written as

R_{\Xi \Xi'} = \frac{\sum_{C \in \Xi} U_C \sum_{C' \in \Xi'} R_{C C'}}{\sum_{C \in \Xi} U_C},   (71)

where R_{CC'} are the configuration-to-configuration rates and U_C the configuration partition functions. In the case of one-electron jumps, the rates involve the radial integral D_{αβ} associated with the transition and computed with HULLAC (Hebrew University Lawrence Livermore Atomic Code) [69]. The assumption of LTE within superconfigurations is relaxed by a convergence procedure in which, at each step, supershells are split, giving rise to a new set of superconfigurations in Equation (70). This splitting is done in a systematic manner following a "binary tree" algorithm. The initial occupation numbers of these new superconfigurations are taken as proportional to their partition functions. All the necessary new rates are then computed. Actually, only the occupation-number factor needs to be re-evaluated, since the D_{αβ} are taken from a precomputed database.
The steady-state collisional-radiative-model matrix is then rebuilt and solved. A sparse-matrix algorithm is used, since superconfigurations belonging to charge states differing by more than one unit are not connected. The resulting occupation numbers are then compared to the initial ones. The superconfigurations are split in this way, repeatedly, until all the N_Ξ converge. This convergence process gradually eliminates the explicit dependence of the rates on the LTE Boltzmann factors in U_Q; in the configuration limit it disappears completely. In order to accelerate the convergence of this process, the temperature assumed for the LTE occupation numbers within a superconfiguration is not the electron temperature T_e, but an effective ionization temperature T_z defined by Busquet [70].
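The steady-state solve of Equation (70), with the normalization Σ_Ξ N_Ξ = 1, can be sketched as follows. A dense pure-Python solver is used here for illustration only; the actual SCROLL code uses a sparse-matrix algorithm, the sparsity coming from the fact that charge states differing by more than one unit are uncoupled:

```python
def steady_state_populations(rates, n):
    """Solve sum_{j} (N_j R_{ji} - N_i R_{ij}) = 0 for i = 0..n-1,
    with the normalization sum_i N_i = 1. `rates` maps (i, j) to the
    rate from state i to state j."""
    # Build the rate matrix (dense here; SCROLL exploits sparsity).
    A = [[0.0] * n for _ in range(n)]
    for (i, j), r in rates.items():
        A[j][i] += r      # gain of state j from state i
        A[i][i] -= r      # loss of state i
    # Replace the first (redundant) equation by the normalization.
    A[0] = [1.0] * n
    b = [0.0] * n
    b[0] = 1.0
    # Gaussian elimination with partial pivoting.
    for k in range(n):
        p = max(range(k, n), key=lambda row: abs(A[row][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for row in range(k + 1, n):
            f = A[row][k] / A[k][k]
            for col in range(k, n):
                A[row][col] -= f * A[k][col]
            b[row] -= f * b[k]
    N = [0.0] * n
    for k in reversed(range(n)):
        N[k] = (b[k] - sum(A[k][c] * N[c] for c in range(k + 1, n))) / A[k][k]
    return N
```

For a two-state system with forward rate 2 and backward rate 1, detailed balance of the fluxes gives N = [1/3, 2/3], which the sketch reproduces.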

Effective (Ionization) Temperature
The algorithm starts with a guess for the ionization temperature T_z. The idea behind such a concept is that the non-LTE spectrum at T_e is very close to an LTE calculation at T_z. For that guess, Busquet's model can be used [70][71][72], or one can simply start with T_e. With this temperature, an average-atom calculation is performed to obtain Z* and an initial potential. The latter is used in a version of the HULLAC code with hydrogenic configurations [69] to obtain the orbital rates. At this point, a crude supershell structure is defined. A loop is then performed over pairs of superconfigurations, and for each pair all the average rates are added to the rate matrix. A sparse-matrix method is used to solve the rate equations for the occupation numbers (with the internal convergence iterations splitting supershells, as mentioned above) and to calculate Z*.
The STA code is then run to find the temperature T_z that produces this average charge state Z* in LTE. At this point, one goes back to the calculation of the rates, and so on, until convergence [68].
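The search for T_z can be sketched as a one-dimensional root find. Here zbar_lte is a hypothetical stand-in for the full LTE (STA) calculation of the average charge state, assumed to be monotonically increasing with temperature:

```python
def find_ionization_temperature(zbar_lte, z_target, t_lo, t_hi, tol=1e-6):
    """Find the temperature T_z at which the LTE average charge state
    zbar_lte(T) matches the non-LTE value z_target, by bisection on
    the bracket [t_lo, t_hi]. zbar_lte stands in for a full STA or
    average-atom calculation and is assumed monotone in T."""
    while t_hi - t_lo > tol:
        t_mid = 0.5 * (t_lo + t_hi)
        if zbar_lte(t_mid) < z_target:
            t_lo = t_mid
        else:
            t_hi = t_mid
    return 0.5 * (t_lo + t_hi)
```

In the full scheme, each evaluation of zbar_lte is itself expensive, which is why the rates and the T_z search are iterated together until joint convergence.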

Binary Supershell Split Algorithm
A central point in the splitting procedure is that unlike the LTE case, it is crucial to work with the minimal matrix size. This restricts the number of supershells. The authors have therefore adopted an algorithm based on a binary tree approach.
A node represents a supershell. Starting with a single node containing all the bound shells, one splits nodes into two daughter nodes. For each node indexed by n, the left and right daughters are indexed by 2n and 2n + 1, respectively. One goes down the tree splitting the left daughters, while keeping all the other nodes fixed, until convergence of that route. At this point, one must go back up the tree to the lowest unconverged parent node, take its right daughter and go down again, splitting left daughters. One proceeds in this way until all the nodes are locally converged with all other nodes fixed. At this point, the final supershell structure is constructed from all the converged nodes, and a single splitting of each final node is performed separately to check the total convergence. This algorithm, together with the one described in Section 7.1, proceeds in a unidirectional way (although the SCROLL model offers the possibility to go back up the tree, once a branch comes to its end it is not modified anymore, and other branches of the tree are explored). In other words, such algorithms consist of splitting supershells, but not of gathering them. We do not claim that our adaptive algorithm is more efficient than the above-mentioned ones. The most important points are that it is designed to ensure an optimal representation of atomic states for a given maximum authorized number of superconfigurations, and that its specificity is to rely on successive splittings and groupings.
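The binary-tree descent can be sketched in Python. The node indexing (daughters 2n and 2n + 1) follows the text; the convergence test is a caller-supplied stand-in for the local convergence of the rate-equation solution, and the equal-halves split is a hypothetical choice of where to cut a supershell:

```python
def binary_split(root_shells, is_converged):
    """Depth-first, left-first refinement of a supershell structure.
    Nodes are indexed as in the SCROLL binary tree: node n has
    daughters 2n (left) and 2n + 1 (right). `is_converged` decides
    whether a node needs further splitting. Returns the leaf
    supershells, i.e. the final supershell structure."""
    tree = {1: list(root_shells)}  # node index -> list of subshells
    leaves = []
    stack = [1]
    while stack:
        n = stack.pop()
        shells = tree[n]
        if len(shells) == 1 or is_converged(shells):
            leaves.append(shells)  # this route has converged
            continue
        mid = len(shells) // 2     # hypothetical split point
        tree[2 * n] = shells[:mid]       # left daughter
        tree[2 * n + 1] = shells[mid:]   # right daughter
        # Push the right daughter first so the left one is explored first.
        stack.append(2 * n + 1)
        stack.append(2 * n)
    return leaves
```

Once a node is declared converged it is never revisited, which mirrors the unidirectional character of the algorithm: supershells are split, never regrouped.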

Conclusions
We presented different algorithms for the generation of superconfigurations, which constitutes the first step of an opacity calculation (for local-thermodynamic-equilibrium or non-local-thermodynamic-equilibrium plasmas) relying on the Super Transition Array approach. This is a critical issue of opacity calculations, which necessarily involve a compromise between accuracy and completeness, a compromise that often turns out to be a real dilemma. It is difficult to assert without hesitation which algorithm is the best. Among the ones discussed above, the adaptive "divide and conquer" algorithm, which constitutes the main new result of the present work, is probably the most efficient, since it provides the best sampling for a fixed maximal number of superconfigurations. However, the "ideal" algorithm would probably take into account the most important transitions in the plasma, at the given temperature and density, i.e., the transitions that contribute most to the Rosseland mean.
Funding: This research received no external funding.

Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.

Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy and ethical restrictions.