Construction of Ground-State Solutions of the Gross–Pitaevskii–Poisson System Using Genetic Algorithms

Abstract: We present the construction of the ground state of the Gross–Pitaevskii–Poisson equations using genetic algorithms. From the numerical solutions, we develop an empirical formula for the density that works within the considered parameter space. Through the analysis of both numerical and empirical solutions, we investigate the stability of these ground-state solutions. Our findings reveal that while the numerical solution outperforms the empirical formula, both lead to similar oscillation modes. We observe that the stability of the solutions depends on specific values of the central density and the nonlinear self-interaction term, and we establish an empirical criterion delineating the conditions under which the solutions are stable or unstable.


Introduction
The study of solitonic cores in physical systems has attracted significant interest due to their relevance in various fields, including cosmology, astrophysics, and condensed matter physics. These localized, stable structures arise from the balance between attractive and repulsive forces, leading to unique properties and behaviors.
One specific field where this system has become particularly interesting is that of bosonic dark matter. This model assumes that the dark matter particle is a spinless ultralight boson with a mass of order 10⁻¹⁹–10⁻²³ eV/c², and there are various reviews that draw a general panorama of the subject [1][2][3][4][5]. The dynamics of this type of matter in the mean-field approximation is ruled by the Gross–Pitaevskii–Poisson (GPP) system of equations, and the system can be seen as a Bose gas trapped by the gravitational field sourced by itself [6]. When the gas has no self-interaction, the model is called the Fuzzy Dark Matter (FDM) regime [7]; this is the mainstream of the subject and has led to studies of both local dynamics (e.g., [8,9]) and structure formation [10][11][12].
What makes this model interesting is that for such an ultralight particle the de Broglie wavelength is large, and thus the structures have a minimum size. In fact, it has been known since the breakthrough simulation in [10] that cores are an essential part of structures and act as an attractor profile surrounded by an envelope with high kinetic energy. These cores can be fitted by an empirical density profile that coincides numerically with the ground-state solutions of the GPP system [13]. Ever since, these cores have been considered literally the keystone of structures in FDM. Bounds on the values of self-interactions arise from local and cosmological constraints (see, e.g., [14][15][16][17][18]).
The generalization to include self-interacting bosons is a natural extension, and the limits of the self-interaction regime are a matter of interest, because the dark matter distribution differs from that of FDM [19], the time scales of saturation and relaxation change [20], and ultimately, there is a Thomas–Fermi regime with rather simple density profiles [21].
This background leads us to revisit some properties and the stability of solitonic cores with self-interaction that prove to be useful. We solve the well-known eigenvalue problem for ground-state solutions of the GPP system with a rather unusual but efficient method based on genetic algorithms. We then propose an empirical formula that describes the resulting profile. Next, we study the stability of the numerical solutions and of those associated with the empirical formula, comparing the response of both to truncation-error perturbations in terms of the amplitudes and frequencies of the oscillation modes triggered. We point out interesting differences and potential implications within the dark-matter context. This paper is organized as follows: Section 2 provides an overview of the theoretical framework and equations governing solitonic cores. Section 3 presents the methodology used to construct the ground-state solutions. Section 4 contains a comparison with empirical formulas, while in Section 5, we compare the evolution of solutions. Finally, Section 6 draws some conclusions.

Model and Equations
In a Bose–Einstein Condensate (BEC), a significant portion of the particles occupy the lowest quantum state, resulting in overlapping wave functions. Consequently, the state of a BEC can be effectively described by a collective wave function, known as the order parameter, Ψ(t, ⃗x). Due to interactions between bosons, which induce nonlinear effects, this collective wave function satisfies the Gross–Pitaevskii (GP) equation:

iħ ∂Ψ/∂t = −(ħ²/2m)∇²Ψ + VΨ + g|Ψ|²Ψ, (1)

where ħ is the reduced Planck constant, m is the mass of the boson, V is the external potential acting as a gas trap, and g = 4πħ²a_s/m is the nonlinear coefficient, with a_s the scattering length of two interacting bosons.
The concept of BEC Dark Matter (BEC-DM) hypothesizes that dark matter exists in the form of a BEC, where the trapping potential is self-generated by the bosonic ensemble through the Poisson equation:

∇²V = 4πGm²|Ψ|². (2)

The nonlinear system (1) and (2) is known as the Gross–Pitaevskii–Poisson (GPP) system and exhibits scale invariance, described by the transformation [13]:

(t, ⃗x, Ψ, V, g) → (λ⁻²t, λ⁻¹⃗x, λ²Ψ, λ²V, λ⁻²g), (3)

where λ is a scaling factor. An alternative description of the GPP system (1) and (2) can be obtained through the Madelung transformation Ψ = √(ρ/m) e^{iS/ħ}, where ρ = m|Ψ|² is the mass density and S represents the phase of the wave function. By separating the real and imaginary parts and defining the fluid velocity as ⃗v = ∇S/m, it is possible to rewrite the GPP system as a continuity equation and an Euler-type equation [22,23]:

∂ρ/∂t + ∇·(ρ⃗v) = 0,
∂⃗v/∂t + (⃗v·∇)⃗v = −(1/m)∇V − (1/ρ)∇p_SI − ∇Q,

where p_SI = (g/2)ρ² is the self-interaction pressure and Q = −(ħ²/2m²)(∇²√ρ)/√ρ is the quantum potential. In this framework, the GPP system is known as the quantum hydrodynamic formulation.
If we take the limit ħ/m → 0, the quantum potential vanishes, and the classical hydrodynamic equations are recovered with a polytropic equation of state with polytropic constant K = g/2 and polytropic index n = 1. In this limit, we can see the classical effect of the nonlinear term in the GP equation, which results from two-body scattering interactions between bosons:

• Repulsive self-interaction (g > 0): the gas has positive pressure opposing the gravitational force, allowing stable structures.

• Attractive self-interaction (g < 0): no classical behavior is possible, since negative pressure is not physically acceptable.

• Without self-interaction (g = 0): the limit of positive self-interaction when the polytropic constant tends to zero corresponds to a dustlike state.
When ħ/m > 0, the quantum potential reintroduces the possibility of structure formation regardless of the value of g; the quantum potential Q is crucial for this behavior. As in any macroscopic system, global quantities can be measured, among them the total mass M_T, the potential energy W, the total kinetic energy K (with K_v from classical contributions and K_ρ from quantum effects), and the self-interaction energy I. The total energy is defined as E = K + W + I, and the virial function is Q = 2K + W + 3I (see, e.g., [24]). All these quantities are helpful for the diagnostics of any solution of the system (1) and (2), and here, we use them for equilibrium solutions.

Adimensionalization of the System
Transforming the system into a dimensionless coordinate system ensures the uniformity of units and prevents issues arising from disparate scales when performing numerical calculations. To achieve this, we perform the following transformations: t = t̃t₀, ⃗x = ⃗x̃x₀, g = g̃g₀, Ψ = Ψ̃Ψ₀, V = ṼV₀, and ρ = ρ̃ρ₀, where tilde variables are dimensionless and are said to be in code units, while nontilded ones are physical. With appropriate scale factors for the GPP system, the system effectively possesses a single degree of freedom, which we express in terms of a scaling factor x₀, equivalent to the transformation (3). With these new variables, the GPP system can be rewritten in dimensionless units as Equations (12) and (13), where we omit the tilde in all variables. Our objective now is to determine the ground state of the stationary version of the GPP system (12) and (13), since excited states are unstable [6].

The Stationary GPP System
The stationary GPP equations are constructed by assuming spherical symmetry and that the order parameter can be rewritten as Ψ(t, ⃗x) = ψ(r)e^{−iωt}, with ω an eigenfrequency and ψ(r) a real function of the radial coordinate r. With these assumptions, the GPP system is written as follows, according to [13,25]:

−(1/2)(1/r²)(r²ψ′)′ + Vψ + gψ³ = ωψ, (14)
(1/r²)(r²V′)′ = ψ². (15)

To ensure physically acceptable solutions, we impose certain boundary conditions. For the stationary order parameter ψ, we require that ψ(0) = ψ_c, ψ′(0) = 0, and lim_{r→∞} ψ = lim_{r→∞} ψ′ = 0.
For the gravitational potential V, we set V(0) = V_c and V′(0) = 0. The choice of V_c can be arbitrary, since shifting this condition to V_c + V_a is equivalent to finding an eigenvalue ω + V_a for some arbitrary value V_a. These boundary conditions ensure physically meaningful solutions that satisfy the requirements of regularity and isolation.
Since this set of equations is solved numerically, it is convenient to write it as a first-order system by defining the variables ϕ = r²ψ′ and M = r²V′, where ′ = d/dr is the derivative operator. The above system is then rewritten as:

ψ′ = ϕ/r², (16)
ϕ′ = 2r²[(V − ω)ψ + gψ³], (17)
V′ = M/r², (18)
M′ = r²ψ², (19)

with the boundary conditions ψ(0) = ψ_c, ϕ(0) = 0, V(0) = V_c, M(0) = 0, and lim_{r→∞} ψ(r) = lim_{r→∞} ψ′(r) = 0. This set of equations, along with the boundary conditions, defines an eigenvalue problem, where the eigenvalue is ω.
For ease, it is convenient to define the vector u = (ψ, ϕ, V, M) and the right-hand side of the system as f(r, u). The system can then be written compactly as:

du/dr = f(r, u), u(0) = u₀, (20)

where u₀ = (ψ_c, 0, V_c, 0).
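The compact system (20) can be integrated outwards with a standard fourth-order Runge–Kutta scheme. The sketch below assumes the dimensionless first-order form ψ′ = ϕ/r², ϕ′ = 2r²[(V − ω)ψ + gψ³], V′ = M/r², M′ = r²ψ²; the normalization of the Poisson source is an assumption, so this is illustrative rather than the paper's exact implementation.

```python
import numpy as np

def rhs(r, u, omega, g):
    """RHS f(r, u) of system (20), with phi = r^2 psi' and M = r^2 V'.
    Assumed dimensionless form (a sketch, not the paper's exact
    normalization)."""
    psi, phi, V, M = u
    r2 = r * r
    dpsi = phi / r2
    dphi = 2.0 * r2 * ((V - omega) * psi + g * psi**3)
    dV = M / r2
    dM = r2 * psi**2
    return np.array([dpsi, dphi, dV, dM])

def integrate(psi_c, V_c, omega, g, r_max, N=2000):
    """Fourth-order Runge-Kutta integration outwards from r ~ 0."""
    dr = r_max / N
    r = 1e-8                     # avoid the coordinate singularity at r = 0
    u = np.array([psi_c, 0.0, V_c, 0.0])
    for _ in range(N):
        k1 = rhs(r, u, omega, g)
        k2 = rhs(r + 0.5*dr, u + 0.5*dr*k1, omega, g)
        k3 = rhs(r + 0.5*dr, u + 0.5*dr*k2, omega, g)
        k4 = rhs(r + dr, u + dr*k3, omega, g)
        u = u + (dr/6.0) * (k1 + 2*k2 + 2*k3 + k4)
        r += dr
    return u  # (psi, phi, V, M) at r_max
```

An optimizer can then score a candidate (ω, r_max) by how close ψ and ψ′ = ϕ/r² are to zero at the outer boundary.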

Numerical Methods
Different strategies can be employed to numerically solve the systems of equations above. Some of them solve the system on a discrete domain, traditionally using a shooting routine, as in the flagship reference [25] and most follow-up papers. We use a discrete domain but not a shooting method.

Stationary System
We construct the solution on a finite domain D = [0, r_max], where the boundary conditions are redefined approximately as ψ(r_max) ≈ ψ′(r_max) ≈ 0; that is, we use a finite value r_max at which we seek to satisfy the boundary conditions approximately.
We define the discrete domain as D_d = {r_i ∈ D | i = 0, . . ., N_r}, where N_r is the number of points. The simplest way to construct D_d is by employing a uniform partition, where the points are chosen as r_i = i∆r, with ∆r = r_max/N_r the resolution of the discrete domain.
Note that in order to integrate the system (16)–(19), we must set the parameters ψ_c, V_c, g, ω, and r_max, among which the eigenvalue ω is unknown, as is, for reasons of numerical precision, the outer radius r_max. Therefore, it is necessary to find these values in such a way that the boundary conditions are approximately satisfied at the outer boundary r_max.
Instead of using the shooting method to search for the eigenvalue ω of the problem (16)–(19), as is traditional (e.g., [6,25]), here we propose an alternative method based on optimization.

Description of the Eigenvalue Search Method
We search for the eigenvalue ω that satisfies the boundary conditions at r_max. To accomplish this, we employ a genetic algorithm (GA), which is rooted in the theory of evolution. In a GA, an initial population exists within a defined environment. Each individual in this population is assigned a fitness level, representing its suitability for survival in the environment. This fitness level is determined solely by the DNA of each individual in the population.
Better-adapted individuals have a higher likelihood of reproducing and passing on their genetic material to subsequent generations. Offspring are generated from two parents, each contributing approximately 50% of their genetic material. However, in nature, offspring may adapt better to the environment than their parents due to mutations in their DNA. This iterative process continues for many generations until no significant changes are observed in the population.
Based on this understanding of evolution, we outline our GA as follows:

1. Define the problem: In our context, each individual represents a potential solution to the eigenvalue problem of system (20). The DNA chain determining each individual is represented as DNA = (ω, r_max), where the components of this vector are called genes.

2. Initialize the population: the population is generated randomly, with a constant size N_population maintained throughout the evolution.

3. Fitness function: Define the fitness function as

f(DNA) = {[ψ(r_max)² + ψ′(r_max)²]/2}⁻¹, (21)

where ψ(r) is the solution of the system (20) associated with the value ω. The choice of the form of Equation (21) is due to the fact that both the wave function and its derivative must vanish in the limit r → r_max, and we would like the violation of the condition on ψ and that on ψ′ to be of the same order; we therefore define the fitness as the inverse of the mean squared violation of the two conditions.

4. Selection: the fitness of each element of the population is evaluated, and the individuals are sorted so that those with higher fitness are first in the list; an elitist method then selects the best N_best individuals.

By applying this algorithm, it is possible to find a solution to our problem by specifying only the amplitude of the order parameter ψ_c at the origin and the coefficient of the nonlinear term g. Let us remember that the choice of V_c can be made arbitrarily; however, once the solution is found, we can rescale the gravitational potential and the eigenvalue so that the gravitational potential satisfies monopolar boundary conditions. Finally, the specific parameters of the GA for the solution of the eigenvalue problem are N_population = 500, with half of the population, N_best = 250, selected from each generation to survive and crossover.
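The evolutionary loop described above can be condensed into a short sketch. The fitness below is a stand-in surrogate with a known optimum (in the paper, the fitness scores the boundary-condition violation of the integrated solution); the crossover rule and all numerical constants other than the elitist split and the 0.1 differential-mutation factor are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(dna):
    # Surrogate objective with optimum at dna = (0.5, 10.0) (an assumption);
    # in the paper's setting this would integrate system (20) for the
    # candidate (omega, r_max) and return the inverse boundary violation.
    target = np.array([0.5, 10.0])
    return 1.0 / (np.mean((dna - target)**2) + 1e-12)

def evolve(n_pop=500, n_best=250, n_gen=100, tol=1e8):
    # 2. Random initial population: DNA = (omega, r_max)
    pop = rng.uniform([-1.0, 1.0], [1.0, 20.0], size=(n_pop, 2))
    for _ in range(n_gen):
        f = np.array([fitness(d) for d in pop])
        order = np.argsort(f)[::-1]          # 4. elitist selection
        pop = pop[order]
        if f[order[0]] > tol:                # 9. stop condition
            break
        # 5-7. refill by averaging two random parents plus a small mutation
        # (the crossover details here are a sketch, not the paper's rule)
        parents = pop[:n_best]
        for i in range(n_best, n_pop):
            a, b = parents[rng.integers(n_best, size=2)]
            pop[i] = 0.5 * (a + b) + rng.normal(0.0, 0.01, 2)
        # 8. differential mutation over the whole population
        for i in range(n_pop):
            j, k = rng.integers(n_pop, size=2)
            cand = pop[i] + 0.1 * (pop[j] - pop[k])
            if fitness(cand) > fitness(pop[i]):
                pop[i] = cand
    return pop[0]

best = evolve(n_pop=60, n_best=30, n_gen=60)
```

The greedy acceptance rule in the differential mutation guarantees that the best individual never worsens from one generation to the next.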

Case g = 0
The λ-invariance (3) for g = 0 implies that the stationary solution is uniquely determined for ψ_c = 1. The inverse fitness of the best individual in the GA for (ψ_c, g) = (1, 0) is depicted on the left of Figure 1, illustrating convergence after 120 generations. The fitness reaches an optimal value of approximately 10⁸, corresponding to an error of 10⁻⁸; this tolerance is used in subsequent solutions. Once the GA finds an optimal solution, the resulting genes yield (r_max, ω) ≈ (12.24, −0.6922), consistent with previous well-known results (e.g., [13]).
The ground state for g = 0 can be approximated by the following empirical formula, found from structure formation simulations of Fuzzy Dark Matter [12]:

ρ(r) = ρ_c [1 + 0.091 (r/r_c)²]⁻⁸, (22)

which matches the density profile of the ground-state solution of the GPP system for g = 0. The core radius r_c is defined as the radius where the density ρ(r_c) is half of the central density ρ_c. Setting r_c ≈ 1.306 matches Equation (22) with ρ_c = 1, specific to this case, as illustrated on the right of Figure 1. The relation between r_c and ρ_c is established using the λ-scaling (3), leading to ρ_c ≈ (1.306/r_c)⁴ for either arbitrary central density or core radius. In physical units, with m = m₂₂ × 10⁻²² eV/c², this formula tightly relates the core radius and the central density [9].
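Assuming Formula (22) is the widely used core profile from [12], its defining property ρ(r_c) = ρ_c/2 and the λ-scaling relation ρ_c ≈ (1.306/r_c)⁴ can be checked directly:

```python
def soliton_density(r, rho_c, r_c):
    """Empirical FDM core profile, Formula (22) (code units):
    rho(r) = rho_c / (1 + 0.091 (r/r_c)^2)^8."""
    return rho_c / (1.0 + 0.091 * (r / r_c)**2)**8

# r_c is defined by rho(r_c) = rho_c / 2; the coefficient 0.091 encodes this:
half = soliton_density(1.306, 1.0, 1.306)   # → approximately 0.5

# lambda-scaling relation between central density and core radius,
# evaluated for a hypothetical core with r_c = 2 in code units:
rho_c = (1.306 / 2.0)**4
```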

Case g ̸ = 0
For solutions with non-zero values of g, we propose a generalization of empirical density profile (22), ρ_core(r), in which r_c = r_c(ρ_c, g) and β = β(ρ_c, g) determine the core radius and the exponent, respectively. This profile can be integrated to give the total mass of the core, which we identify with M(r_max).
Solving for the core radius r_c, one obtains a closed expression. In the isolated case, where no atmosphere of bosons surrounds the core, unlike in BEC dark-matter collapse simulations, the core mass is also the total mass from the numerical solution, M_core = M(r_max), and β is the only free parameter of ρ_core(r). Consequently, β is the only fitting parameter needed to match the ansatz (24) to the numerical solution of the eigenvalue problem.
The optimal β value is found via a genetic algorithm with DNA = (β) and a fitness function defined as the inverse of the integrated squared difference between the numerical density and the ansatz, with a tolerance of 10⁶ for the best individual.
We propose an empirical function, similar to (26) but for g ≠ 0, for the core radius and the exponent, subject to the following constraint: the transformation (3) implies that the coefficient α = gρ_c^{1/2} is λ-invariant; therefore, β should depend only on α. Based on these conditions, functions (27) and (28) are proposed, where the fitting parameters are a₁ = 0.3681 ± 24.50 ×. To see that the general empirical formula is a good approximation to the ground state, we show in Figure 2 a plot of r_c as a function of ρ_c in Equation (27) for different values of g, using both the solution of the eigenvalue problem and empirical Formula (26). Notice that r_c decreases with increasing ρ_c and increases with increasing g, as expected.
The plots show how well the empirical formula reproduces the properties of the solution to the eigenvalue problem. We also show β vs. r_c, implicit from Equations (27) and (28). Additionally, β changes sign when g changes sign.
There is an empirical formula in [20], similar to (24), constructed from core-formation simulations. This formula, labeled (29), has two parameters, and its core radius has a more complex expression. We now compare our profile with this one. Figure 3 shows the solutions at the corners of our parameter space, at the points (ψ_c, g) = (1, −0.5), (1, 0.5), (2, −0.5), and (2, 0.5). For each of these cases, we show with continuous lines the density calculated with empirical Formula (24), with dashed lines the empirical formula found from the core collapse of dark matter in [20], and with points the numerical solution, which is the correct solution of the eigenvalue problem. While our formula works well for these extreme cases of the parameter space, the formula found in [20] deviates slightly from the numerical solution in some cases, probably due to the residual dynamics present in dynamical simulations. These plots show that our results are consistent with previous ones.

Evolution
While the solutions found with the GA serve as good approximations to the exact solutions, they are not exact. Let us denote the exact solution of the eigenvalue problem as ψ_exact and the numerical solution as ψ_exact + δψ, where δψ represents the error associated with the numerical truncation introduced by constructing the solution on a discrete domain.
Stability can be tested by monitoring the behavior of this error over time when the numerical solution, that is, the exact solution plus the perturbation, is used as the initial condition. Specifically, we analyze the dynamics triggered by such a perturbation. The stationary solution is deemed stable if the perturbation remains bounded during the evolution and unstable if the perturbation grows over time. This error analysis is commonly used to test the convergence of numerical solutions of stable systems [6] and to verify that the errors converge to zero as the numerical resolution is increased.
Thus, the evolution of the numerical solution of the initial value problem has an error that does not diverge (see, e.g., [6,13]). We would also like to monitor the error when the empirical density profile is used as the initial condition and see how it behaves.
For this, we programmed a code that evolved these initial conditions by solving the time-dependent system (1) and (2) in spherical symmetry. The evolution took place on the same numerical domain D_d used to solve the eigenvalue problem. The code used the method of lines to solve GP Equation (1) with a fourth-order Runge–Kutta integrator and second-order accurate stencils for spatial derivatives. At the origin, the order parameter was extrapolated with a second-order accurate approximation. Simultaneously, at each intermediate step of the Runge–Kutta method, we solved Poisson Equation (2) outwards from the origin to r_max.
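A minimal sketch of one such evolution step follows, under the assumption that the dimensionless GP equation reads i∂ψ/∂t = −½∇²ψ + (V + g|ψ|²)ψ. The boundary treatment here is simplified, and V is frozen during the step, whereas the paper extrapolates at the origin and re-solves Poisson at every Runge–Kutta substep.

```python
import numpy as np

def gp_rhs(psi, V, g, r, dr):
    """RHS of i dpsi/dt = -(1/2) lap(psi) + (V + g|psi|^2) psi (assumed
    dimensionless form).  Second-order centered stencils; in spherical
    symmetry lap = d2/dr2 + (2/r) d/dr."""
    lap = np.zeros_like(psi)
    lap[1:-1] = ((psi[2:] - 2*psi[1:-1] + psi[:-2]) / dr**2
                 + (2.0 / r[1:-1]) * (psi[2:] - psi[:-2]) / (2*dr))
    lap[0] = 6.0 * (psi[1] - psi[0]) / dr**2   # regularity at r=0: psi'(0)=0
    lap[-1] = 0.0                               # crude outer boundary
    return -1j * (-0.5*lap + (V + g*np.abs(psi)**2) * psi)

def rk4_step(psi, V, g, r, dr, dt):
    """One classical Runge-Kutta step of the method of lines (V frozen
    during the step for simplicity)."""
    k1 = gp_rhs(psi, V, g, r, dr)
    k2 = gp_rhs(psi + 0.5*dt*k1, V, g, r, dr)
    k3 = gp_rhs(psi + 0.5*dt*k2, V, g, r, dr)
    k4 = gp_rhs(psi + dt*k3, V, g, r, dr)
    return psi + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)
```

Being explicit, the scheme is stable only for time steps of order ∆r², which is why the evolution is far more expensive than the stationary integration.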
The diagnostic we used to monitor the growth of perturbations was the central density at the origin. The results are shown in Figure 4 for the evolution of the solution to the eigenvalue problem, the empirical profile (24), and Formula (29) obtained from simulations in [20], for various combinations of ψ_c and g. In the left column, we show the time series of the central density, while the right column shows its Fourier transform, which illustrates the triggered oscillation modes.
The first case, (ψ_c, g) = (2, −0.5), is an unstable solution that collapses due to the attractive nature of the self-interaction (negative g). In this case, the central density blows up at a finite time when the initial condition is the numerical solution, whereas when evolving the empirical profile, the central density also goes out of bounds, again indicating instability. In this case, the Fourier spectrum says little about the oscillation modes and is not shown.
There are also three stable cases, with (ψ_c, g) = (1, −0.5), (2, ±0.5), whose central densities oscillate with different amplitudes and frequency modes. As expected, the density of the solution of the eigenvalue problem is closer to the exact solution than that of empirical Formula (24). An implication is that in the first case, the amplitude of the oscillations triggered by the truncation error is smaller than in the second case, where the difference between the numerical solution of the eigenproblem and the empirical formula adds an extra perturbation; the amplitudes differ by approximately an order of magnitude. Notably, the excited oscillation frequencies coincide and are independent of the oscillation amplitudes.
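The frequency content shown in the right column of Figure 4 can be read off from a discrete Fourier transform of the central-density time series. The numbers below are synthetic and purely illustrative:

```python
import numpy as np

# Synthetic central-density time series with a single oscillation mode
# (assumed numbers, for illustration only):
dt = 0.1
t = np.arange(0.0, 200.0, dt)
f0 = 0.045                                   # hypothetical mode frequency
rho_central = 1.0 + 0.01 * np.cos(2*np.pi*f0*t)

# Remove the mean so the zero-frequency bin does not dominate the spectrum
spec = np.abs(np.fft.rfft(rho_central - rho_central.mean()))
freqs = np.fft.rfftfreq(len(t), d=dt)
f_peak = freqs[np.argmax(spec)]              # recovers ~f0
```

The frequency resolution is 1/T, with T the total integration time, so long evolutions are needed to separate nearby modes.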
As a particular case, we show the differences between the evolution of the ground-state solution and its empirical formula for the case g = 0 corresponding to FDM. The magnitude of the oscillations is particularly important in this case, because initial conditions for mergers and rotation-curve studies use cores as workhorse configurations, and it is interesting to note how these profiles carry an intrinsic oscillation. The results are shown in Figure 5.
Finally, we carried out an analysis similar to that in [20], where a value of α was found that separates the stable and unstable branches of solutions for g < 0, derived from empirical Formula (26), considering the quantity |g|M_core as a function of the invariant α. We show the result in Figure 6. The critical value was found where the quantity |g|M_core reaches its maximum, at α_c ≈ −0.7140, which is similar to the value α ≈ −√0.52 found by Chan et al. [20]. This result shows the consistency of our one-parameter formula with the formula obtained from simulations of dark-matter collapse.
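The closeness of the two critical values can be quantified directly; they differ by about 1%:

```python
import math

# Critical values of the lambda-invariant alpha = g * sqrt(rho_c) that
# separate stable from unstable branches for g < 0:
alpha_ours = -0.7140               # where |g| M_core peaks for our formula
alpha_chan = -math.sqrt(0.52)      # value reported by Chan et al. [20]

rel_diff = abs(alpha_ours - alpha_chan) / abs(alpha_chan)   # ~1%
```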

Conclusions
We constructed the well-known ground-state solutions of the GPP system of equations, this time using a genetic algorithm. The motivation for implementing this type of method is that it can be used in cases with many parameters or, equivalently, DNA made of the coefficients of a multimode wave function. In this sense, this paper is a proof of concept for the usage of this method in core-plus-halo FDM configurations that we plan to analyze in the near future.
One of the contributions of this work is the construction of a one-parameter empirical formula that describes the density profile of ground-state solutions with self-interaction. Moreover, this formula works for arbitrary g, which is a small but probably important step with respect to the very general formula for core profiles obtained from simulations in [20].
We also evolved the ground-state solutions and the density profiles given by our empirical formula and, as a control case, the profiles obtained in [20]. We found that the evolution of the empirical formulas was different, even though they produce very similar density profiles. The fact that the empirical formulas are an approximate version of the solution of the eigenvalue problem, which is itself an approximate solution, produces higher-amplitude perturbations. Specifically, we found that the amplitude of the oscillations of stable solutions was larger by more than an order of magnitude than that of the ground-state solutions. This is relevant because empirical formulas are commonly used as initial conditions for binary and multiple mergers of ground-state solutions, which can thus be improved.
Finally, we verified that the evolution of certain configurations with negative self-interaction can collapse, and we found that the threshold between stable and unstable solutions obtained from our empirical formula is consistent with the one found by the analysis in [20].

Figure 1 .
Figure 1. On the left, we show the inverse of the best individual's fitness as a function of generations for the case (ψ_c, g) = (1, 0), demonstrating that the GA method provides solutions approaching the satisfaction of the stationary GPP equations very rapidly. On the right, we compare the solution found by the GA with empirical Formula (22), for the case with ρ_c = 1 and r_c ≈ 1.306.

Figure 2 .
Figure 2. We show the plots of r_c vs. ρ_c for the empirical formula and the numerical solution in order to compare them. We also show the parameter β as a function of ρ_c, which indicates how the configuration compacts for different values of g.

Figure 4 .
Figure 4. On the left, we show the central density as a function of time for the evolution of initial conditions obtained from the eigenvalue problem (in blue) and those using empirical Formula (24) (yellow). We also evolve the configurations constructed in [20], drawn in green. On the right, we show the Fourier transform of the time series on the left. The first row corresponds to the unstable case with (ψ_c, g) = (2, −0.5), whose central density diverges, indicating the collapse. The subsequent rows correspond to the stable cases with (ψ_c, g) = (1, −0.5), (ψ_c, g) = (2, −0.5), and (ψ_c, g) = (2, 0.5), respectively.

Figure 5 .
Figure 5. On the left is shown the central density of the evolution of the system using the initial conditions (ψ_c, g) = (1.0, 0.0) up to t = 250, for the eigensolution and empirical Formula (24). On the right, the Fourier transform shows that the fundamental frequency is the same for both the density of the eigensolution and the empirical density profile.

Figure 6 .
Figure 6. The total mass of the core as a function of the λ-invariant α = g√ρ_c, represented by the black line, continuous for the unstable branch and dash-dotted for the stable one. The vertical red dotted line indicates the critical value α_c ≈ −0.7140 at which the mass reaches its maximum value; this value divides the stable and unstable branches of solutions. The red dotted line is the result in Chan et al. [20].
7. Replacement: repeat steps 5 and 6 N_population − N_best times to generate the remaining individuals.

8. Second mutation: Apply a differential mutation to the entire population. For each individual with DNA_i, select two other individuals, with DNA_1 and DNA_2, at random. Generate a new individual with DNA_new = DNA_i + 0.1(DNA_2 − DNA_1). If f(DNA_new) > f(DNA_i), replace the ith individual with the new one.

9. Stop condition: repeat steps 4–8 for multiple generations until f(DNA) > 10⁸ for at least one individual in the population.