The d-Dimensional Cosmological Constant and the Holographic Horizons

This article is dedicated to establishing a novel approach to the cosmological constant, in which it is treated as an eigenvalue of a certain Sturm--Liouville problem. The key to this approach lies in the proper formulation of physically relevant boundary conditions. Our suggestion in this regard is to utilize the ``holographic boundary condition'', under which the cosmological horizon can only bear a natural (i.e., non-fractional) number of bits of information. Within this framework we study the general $d$-dimensional problem and derive a general formula for the discrete spectrum of a positive vacuum energy density. In the particular case of two dimensions, the resulting problem can be solved analytically in terms of degenerate hypergeometric functions, so it is possible to define explicitly the self-action potential that determines the matter fields of the model. We conclude the article by taking a look at the $d$-dimensional model of a fractal horizon, where Bekenstein's formula for the entropy gets replaced by the Barrow entropy. This gives us a chance to discuss the recently noticed problem of the possible existence of naked singularities in the $d \neq 3$ models.

between those two types of horizons: the dS horizon is necessarily observer-dependent, whereas the black hole horizon is not. This distinction immediately casts doubt on a simple, straightforward transfer of the properties of black hole horizons onto a de Sitter horizon. Nevertheless, we would argue that these two different types of horizons should indeed share their properties, thanks to the following remarkable fact: the gravitational radius (calculated according to the Schwarzschild formula) of a de Sitter region filled with a positive cosmological constant exactly coincides with the dS radius, regardless of the choice of $d$ (see Formula (16)). In other words, the two different radii appear to be naturally intertwined with each other, so that whenever we discuss the radius of the dS horizon, we may invoke properties specific to the gravitational radius, such as the Bekenstein--Hawking constraint. Admittedly, this is not a rigorous proof, but it serves its purpose as a solid foundation for the subsequent calculations.
Before we delve deeper into the problem, let us briefly discuss the structure of this paper. In Section II we show how, using a very simple heuristic method, one can derive the general formula for the informational capacity of $d$-dimensional black holes; then, by identifying 1 bit of information recorded on the black hole's horizon with one unit $d$-dimensional Planck surface area, we find the exact formula for the latter. In our next step, in Section III, we introduce a useful formalism based on the results of [7], generalized to the $d$-dimensional case. This formalism is a huge aid in studying the dynamics of multidimensional Friedmann models, because it allows us to effectively reduce the problem to a single linear Schrödinger equation, with the positive cosmological constant taking the guise of an eigenvalue of that equation. However, in order to properly define this eigenvalue, a correct boundary condition must be introduced. This we do in Section IV, using the aforementioned Main Hypothesis: that the surface area of any physically feasible cosmological horizon shall contain a natural number of bits of information. Under this assumption the Friedmann model severely limits the possible cosmological constants to a countable set parameterized by a single integer. And if the remaining fields of matter are modeled by a single minimally coupled scalar field, these data are sufficient for calculating the self-action potential. This will be aptly demonstrated in the simple case $d = 2$, where all the calculations can be performed in an explicitly analytic manner with the aid of the hypergeometric function. In the next section, Section V, we turn our attention to the generalization of the "Barrow entropy" [8,9] to $d$ dimensions.
In particular, we demonstrate that even in the Barrow framework, where the structure of the horizon ends up being essentially fractal, this fractality changes our model only a little and predicts no significant alterations in the set of permissible cosmological constants. Although in the fractal case the vacuum energy density depends not on one but on two natural numbers, their values are so large that the second of them essentially ceases to influence the results in any meaningful manner. However, in the conclusion (see Section VI) we discuss one possible exception to this rule: it is possible that for cosmologies with $d > 3$ an extreme case of fractality must be taken into account, namely an infinite horizon encompassing a finite volume. The properties of such a pocket universe would be most intriguing; however, it must be admitted that Barrow himself deems any such model "unphysical" [8]. Finally, in Appendix A we demonstrate that in the general case of a $d$-dimensional closed universe, where the matter's equation of state is $p/c^2 = w\rho$ with $w = \mathrm{const}$, the resulting Friedmann equations can be completely integrated for any $w$.

II. THE INFORMATIONAL CAPACITY OF A D-DIMENSIONAL BLACK HOLE
While the concept of the informational capacity of a black hole still manages to astonish the mind as something alien to common sense, the actual value of this capacity can be derived without much ado, in a process that has even been featured in the popular literature; see, for example, the beautiful book [10].
Let us go over this derivation in the case of $d$ dimensions. To this end, consider the Schwarzschild solution describing a $d$-dimensional black hole of mass $m$ and gravitational radius $R_{(d)}$ [11], where $\Omega$ denotes the surface of a $(d-1)$-dimensional unit sphere, whose total surface area is $\Omega_{d-1} = 2\pi^{d/2}/\Gamma(d/2)$ (here $\Gamma(x)$ is the Gamma function, $\Gamma(x) = \int_0^\infty t^{x-1} e^{-t}\,dt$), the gravitational radius $R_{(d)}$ has the form (2), and $G_{(d)}$ denotes the $d$-dimensional gravitational constant. Note that in the international system of units the dimension of $G_{(d)}$ would be $\mathrm{m}^d/(\mathrm{kg}\cdot\mathrm{s}^2)$. Once we know this, it is straightforward to check that the $d$-dimensional Planck length is related to $G_{(d)}$, the Planck constant and the speed of light $c$ by (3), where $\alpha$ is a dimensionless proportionality coefficient which will be defined precisely a bit later. Now we need to know how many bits of information can be recorded on the $(d-1)$-dimensional surface of the black hole in question. In order to figure this out, we need two crucial pieces of information. First, we know the total surface area of this black hole. Second, if a photon of wavelength $\lambda = 2R_{(d)}$ gets swallowed by this black hole, its horizon gains exactly one bit of information (photons with larger wavelengths will simply bypass the black hole without being consumed, see [10]). During this process the mass of our black hole increases by $m_\gamma$ while its surface area grows accordingly, where $\bar{R}_{(d)}$ can be found from (2) by changing $m \to m + m_\gamma$. Since $m_\gamma \ll m$, we can expand the resulting expression into a Taylor series in the parameter $m_\gamma/m$. Neglecting all higher-order terms yields Formula (4). According to (4), by defining the hitherto unknown parameter $\alpha$ as in (5), we ensure that one bit of information corresponds to exactly one $d$-dimensional Planck area $A_{(d)PL} = \Omega_{d-1} L_{(d)PL}^{d-1}$.
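As a quick numeric sanity check (a side computation of ours, not part of the original derivation), the unit-sphere area formula quoted above, $\Omega_{d-1} = 2\pi^{d/2}/\Gamma(d/2)$, reproduces the familiar low-dimensional values:

```python
from math import pi, gamma

def unit_sphere_area(d):
    """Surface area Omega_{d-1} of the (d-1)-dimensional unit sphere,
    Omega_{d-1} = 2 * pi^(d/2) / Gamma(d/2), as quoted in the text."""
    return 2 * pi ** (d / 2) / gamma(d / 2)

# Familiar checks:
# d = 2: the unit circle has circumference 2*pi
# d = 3: the unit 2-sphere has area 4*pi
print(unit_sphere_area(2))  # ~6.283
print(unit_sphere_area(3))  # ~12.566
```

For $d = 2$ this is the circumference $2\pi$ of the unit circle; for $d = 3$, the area $4\pi$ of the unit 2-sphere.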
We would like to emphasize that the appearance of the factor $r^{d-1}$ is a direct outcome of the assumption that the increment $\Delta A_{(d)}$ does not depend on the gravitational radius $R_{(d)}$, but is instead a universal (dimensional) constant.

III. THE LINEARIZATION OF D-DIMENSIONAL FRIEDMANN EQUATIONS
In this paper, for simplicity's sake, we will mostly stick to Friedmann models with flat geometries (closed Friedmann models with radiation dominance have previously been studied in [12], and more general models including a multicomponent coupled fluid were investigated in [13]); however, we will take a look at some closed models in Appendix A. Now, let $H$ denote the Hubble parameter, $\rho$ the density, and $p$ the pressure. Then the dynamics of a flat $d$-dimensional isotropic and homogeneous universe obeys the Friedmann--Robertson--Walker--Lemaître Equations (6) and (7) (see [12]). If the universe is filled primarily with radiation, then due to the conformal symmetry the energy-momentum tensor is traceless. This means that the equation of state reduces to the simple relationship $p = c^2\rho/d$. The model shall also acknowledge the presence of a positive vacuum energy (i.e., of a cosmological constant $\Lambda$) with density $\rho_\Lambda = \mathrm{const} > 0$. The remaining fields will be modeled by a minimally coupled scalar field $\phi$ with the potential $V(\phi)$; for convenience, the $\rho_\Lambda$ and $\phi$ contributions have been absorbed into the general expressions for the density $\rho$ and the pressure $p$ (with the factor $c^2$ included in the redefined $p$). Equations (6) and (7) taken together produce the continuity Equation (8), from which the field equation in the Friedmann metric can be easily derived. As a next step, let us introduce an arbitrary real number $n$ and a new function $\psi_n = a^n$. Using (6)--(8), it is short work to prove that $\psi_n$ must satisfy a Schrödinger-type equation. Note that while we can choose different values of $n$, the choice $n = d$ simplifies the equation even further, turning it into (13) with $\psi_d \equiv \psi$. In this way we have constructed a $d$-dimensional generalization of the results of Chervon. What should be the next logical step?
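The mechanism behind the linearization can be illustrated in the simplest limit. For a pure-$\Lambda$ (de Sitter) universe with $a(t) = e^{Ht}$, the function $\psi_n = a^n$ satisfies $\ddot{\psi}_n = (nH)^2 \psi_n$ exactly: a linear equation whose constant "energy" is set by the vacuum density. The following sketch is ours (the coefficients of the full Equation (12) are not reproduced here) and checks this with a finite difference:

```python
import math

H = 0.7   # Hubble rate of the de Sitter phase (arbitrary sample value)
n = 3     # exponent in psi_n = a^n (n = d is the convenient choice)
h = 1e-4  # step for the finite-difference second derivative

def a(t):
    """De Sitter scale factor."""
    return math.exp(H * t)

def psi(t):
    """psi_n = a^n."""
    return a(t) ** n

t = 1.0
# central finite difference for the second derivative of psi
psi_dd = (psi(t + h) - 2 * psi(t) + psi(t - h)) / h**2
ratio = psi_dd / psi(t)
# for a = exp(H t) we expect psi'' / psi = (n H)^2 exactly
print(ratio, (n * H) ** 2)
```

The two printed numbers agree to numerical accuracy, illustrating how the cosmological constant plays the role of a (constant) eigenvalue in the linearized equation.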
Since we are already dealing with the Schrödinger Equation (13), it would be natural to treat it as a full-fledged Sturm--Liouville problem for the eigenvalue $E$. However, in order to do that, we have to introduce a boundary condition. It has to be reasonable and to follow logically from the most fundamental properties of our model. We have a suggestion in this regard, and it is this: the boundary condition naturally arises when we count the total amount of information that can fit on the $d$-dimensional surface area of the event horizon. Let us discuss this in detail in the next section.
IV. "QUANTIZING" THE $\rho_\Lambda$

Let us take a look at a flat universe filled with both the fields of matter and the vacuum energy. During the later stages of the evolution of such a universe, the contribution of the matter fields gradually vanishes and the cosmological dynamics becomes determined by the cosmological constant. The asymptotic scale factor during this phase takes the form $a = \exp(H_{(d)} t)$, where $H_{(d)}$ is given by (14). Such dynamics generates an event horizon of radius $R_{(d)H}$ equal to (15) (note that if the lower limit is chosen to be $t = 0$, we instead get the radius of the particle horizon). Now it is easy to show that the gravitational radius corresponding to the "vacuum mass" confined within the dS horizon is exactly equal to the radius of the cosmological event horizon, i.e., Equation (16).
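The statement that the late-time de Sitter event horizon has radius $c/H_{(d)}$ follows from $R_{(d)H} = a(t)\int_t^\infty c\,dt'/a(t')$ with $a = e^{H_{(d)}t}$. A small numeric check (ours, with arbitrary sample values):

```python
import math

c, H = 1.0, 0.5  # speed of light and Hubble rate (arbitrary units)
t0 = 2.0         # moment at which the horizon radius is evaluated

def a(t):
    """De Sitter scale factor."""
    return math.exp(H * t)

# R_H = a(t0) * integral_{t0}^{infinity} c dt / a(t);
# truncate the integral far out and use the midpoint rule
T, N = 80.0, 200000
dt = (T - t0) / N
integral = sum(c * dt / a(t0 + (i + 0.5) * dt) for i in range(N))
R_H = a(t0) * integral
print(R_H, c / H)  # both ~2.0
```

The numeric horizon radius matches the analytic value $c/H$ regardless of the chosen $t_0$, as expected for an exponential scale factor.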
In fact, the "vacuum mass" is given by (17). We can then express the density $\rho_\Lambda$ in terms of the radius $R_{(d)H}$ using (14) and (15), substitute it into (17) and then calculate the gravitational radius by Formula (2). It is easy to check that the result will be Expression (16). Now we are ready for the main hypothesis.

Hypothesis 1
The $d$-dimensional "surface" of the cosmological horizon must contain a natural number of Planck areas, thus recording a natural number of bits of information. In other words, using (5), our Hypothesis implies (18). Before moving on, we should make a few comments. First, while Expression (18) shares a certain similarity with the formulas arising in loop quantum gravity, it has little to do with them. Indeed, the loop approach does demonstrate a discreteness of area and volume [14], but its formulas are expressed in terms of the "colors" of the links adjacent to the corresponding nodes of the spin network, and they differ from Formula (18) (see, however, [15]). Secondly, although at first glance Equation (18) might look too strong "to believe", it expresses a very simple statement: the smallest physically meaningful $d$-dimensional area is the $d$-dimensional Planck area. Due to this, any real $d$-dimensional area must, obviously, be a multiple of a $d$-dimensional Planck area, which is what Equation (18) expresses. This reasoning is rather similar to the idea of Bekenstein and Mukhanov that the event horizon of a black hole shall be quantized in integers [16][17][18]. Thirdly, a condition akin to (18) is apparently necessary for the quantum theory of gravity in de Sitter space to make sense at all. The point here is that the quantum Hilbert space in de Sitter space is finite-dimensional, and therefore the Einstein Lagrangian with a positive $\Lambda$-term cannot be quantized in a consistent manner; it must instead be derived from a more fundamental theory that would determine the possible values of $G_{(d)} \rho_\Lambda^{(d-2)/d}$ [19], because the dimension of the quantum Hilbert space must be a nontrivial function of $G_{(d)} \rho_\Lambda^{(d-2)/d}$ and at the same time an integer. This is possible if the admissible values of $\rho_\Lambda$ are discrete, that is, parameterized by an integer (or integers). We call this a "quantization" of the vacuum energy. Equation (18) is but a straightforward realization of this idea.
In fact, after a few straightforward calculations this formula yields condition (19) for the "quantization" of the vacuum energy density in a $d$-dimensional space, where $\tau_{(d)PL} = L_{(d)PL}/c$. If we substitute (19) into (12), the result is the Schrödinger equation with the "energy" $E$ defined by (20). Notably, the rate of change of the potential $u(t)$ in the aforementioned Schrödinger equation satisfies a beautiful formula for $\dot{u}$. In order to demonstrate the benefits of this method, let us consider a simple two-dimensional model (remember that from the standpoint of the string landscape, there ought to be many more versions of two-dimensional pocket universes than those with $d = 3$; after all, there are more ways to compactify seven extra dimensions than six!). When $d = 2$, the "energy condition" (20) produces the well-known "Coulomb" spectrum (21). The potential $u = u(t)$ from the Schrödinger Equation (12) turns into (22), where $E = -\kappa^2$, $z = 2\kappa t$, $l = 1, 2, \ldots$, and we took the absolute value of $n$ to account for either sign. Using a suitable substitution, we reduce (12) to a degenerate hypergeometric equation. The degenerate hypergeometric function can be expanded into a series which terminates (reducing to a polynomial) when $\alpha = -m$, where $m$ is a non-negative integer. It is this fact that allows us to retrieve the spectrum (21) with $N = m + l$.
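The termination of the degenerate (confluent) hypergeometric series at $\alpha = -m$, which underlies the relation $N = m + l$, can be seen directly from the series $M(\alpha,\beta,z) = \sum_k \frac{(\alpha)_k}{(\beta)_k}\frac{z^k}{k!}$: the Pochhammer symbol $(-m)_k$ vanishes for $k > m$. A minimal pure-Python sketch (ours):

```python
def kummer_M(alpha, beta, z, kmax=60):
    """Truncated series for the degenerate hypergeometric function
    M(alpha, beta, z) = sum_k (alpha)_k / (beta)_k * z^k / k!."""
    total, term = 1.0, 1.0
    for k in range(kmax):
        term *= (alpha + k) / (beta + k) * z / (k + 1)
        total += term
    return total

# alpha = -m makes the series terminate: (alpha)_k = 0 for k > m,
# so M becomes a polynomial of degree m.
# m = 0: M(0, b, z) = 1;  m = 1: M(-1, 2, z) = 1 - z/2.
print(kummer_M(0, 2.0, 1.5))    # 1.0
print(kummer_M(-1, 2.0, 1.5))   # 0.25
# generic alpha gives an infinite series, e.g. M(1, 1, z) = e^z
print(kummer_M(1, 1.0, 1.0))    # ~2.71828
```

Only the terminating (polynomial) solutions are normalizable in the eigenvalue problem, which is what selects the discrete spectrum.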
For example, consider the simple case $l = 1$, $m = 0$. Here the solution takes a particularly simple form, and the entire cosmological dynamics hinges on the sign of $n$. If $n > 0$, the universe arises from the initial singularity at $z = 0$ (here it is handy to use $z$ as a temporary variable), expands while $z \in (0, 2)$, and then collapses in an asymptotic manner. The Hubble parameter and the acceleration ($a' = da/dz$) of this two-dimensional universe involve the critical values $z_\pm = 2(1 \pm \sqrt{n})$. If $0 < n < 1$, we end up with the following dynamics: after the initial singularity at $z = 0$, the universe undergoes a period of accelerated expansion for $z \in (0, z_-)$, followed by a period of decelerated expansion for $z \in (z_-, 2)$. When the parameter $z$ reaches its critical value $z = 2$, the expansion turns into a collapse: a decelerated one at first (when $z \in (2, z_+)$), but the pace then picks up, and for $z > z_+$ we have a phase of accelerated collapse which satisfies the condition $\lim_{z \to +\infty} a''/a = 1/(4n^2)$.
This type of dynamics is mostly preserved for $n \geq 1$, but with a caveat: such a universe experiences acceleration only once, during the final phase of collapse.
Using (6)--(9), one can easily work out the dynamics of the scalar field $\phi(t)$ and derive the exact form of the self-action potential $V(\phi)$. Indeed, the vacuum energy density for the aforementioned values of $l$ and $m$ follows from the spectrum above, and since the collapse phase starts at $t_{\rm STOP} = \sqrt{2}\, n^{-1} \tau_{(2)PL}$, we conclude that the universe avoids an immediate collapse only if $n \gg 1$. A simple integration yields (23). We choose the negative sign in (23) from physical considerations (this way the field decreases with time instead of increasing). It also proves useful to redefine the field variable, after which the self-action potential takes the form (24). However, what about the case $n < 0$? Here the picture changes drastically: the universe in this case arises not from an initial singularity but from a Big Rip singularity. This happens at $z = 0$; the universe then contracts while $z \in (0, 2)$, until at $z = 2$ it experiences a "rebound" (with a finite scale factor!), after which it undergoes a period of exponential expansion in a quasi-de Sitter regime. The corresponding formulas are rather cumbersome, so we omit them here.

V. A FRACTAL HORIZON?
In a recent article [8] John Barrow proposed a very interesting idea. According to him, while studying various micrographs of the SARS-CoV-2 coronavirus, he was struck by a sudden realization: an event horizon might theoretically have a fractal structure. After all, we know that any type of interaction of a black hole (BH) with its surroundings deforms the surface of its event horizon; even the vacuum outside of a BH causes a slight decay in the radius of the event horizon, producing the famous Hawking radiation along the way. If we go all the way down from the macroscopic scale to the Planck scale, we will see the BH surrounded by the spacetime foam [20,21], which might produce interesting foam-like fractal (or quasi-fractal) perturbations of the event horizon of that BH [8]. Following this idea, Barrow considered the simplest generalization of a Koch snowflake [22], where the standard spherical horizon of radius $r_0$ is extended by $M \geq 1$ additional "spheres" attached to it. However, this is just the first iteration. On the second iteration, every one of those "spheres" is further extended by $M$ more "spheres" of progressively smaller radius, etc. (see [23] for a 3D visualisation). By "progressively smaller" we mean that the radius of the new "spheres" on the $n$-th iteration is $r_n = \lambda^n r_0$, where $\lambda < 1$. After the $n$-th iteration, the total $d$-dimensional volume of the fractal horizon and its surface area are given by (25). Note that, unlike the classical Koch snowflake, which carries infinitely many "spheres" of progressively infinitesimal volume, within our framework there exists a natural lower limit on the size of new "spheres": the Planck volume (so we are dealing with a quasi-fractal structure). In order to implement this limit, let us assume that the volume of the smallest $d$-dimensional "sphere" is indeed equal to the $d$-dimensional Planck volume,
i.e., that $n$ in (25) satisfies condition (26). If we substitute (26) into (25) and use our Main Hypothesis that the $d$-dimensional surface area of the horizon of radius $r_0$ can only contain a whole number $N$ of $d$-dimensional Planck areas, we arrive at (27). Note that (27) explicitly depends on the product of two parameters: $M$ and $\lambda$. We can estimate this product by the following physical reasoning: if it is too small (i.e., $M\lambda \ll 1$), then the result will be essentially identical to the original, non-fractal formula (indeed, in that case even the largest of the additional "spheres" will be smaller than the Planck-length cutoff). On the other hand, if $M\lambda \gg 1$, the "spheres" will begin to overlap, which is also unphysical. Thus, the product of $M$ and $\lambda$ shall be of order 1; following this train of thought, let us assume that $\lambda M = 1$ (we could choose any other number, as long as it is of the same order of magnitude as 1). This assumption seriously simplifies (27), turning it into (28). Denoting $r_0/L_{(d)PL} = x$ and $M^{d-2} = \mu$, we can then rewrite (28) as a relatively simple polynomial Equation (29). If $\mu \gg 1$ (which happens when $\lambda \ll 1$), the solution of (29) is nicely approximated by $x \sim N^{1/(d-1)}$, which essentially reproduces the old result (19). For example, in three dimensions (29) is a quadratic equation with a single positive root, while the $d = 4$ case produces a cubic equation with two complex conjugate roots (unimportant for us) and one real root satisfying $\lim_{\mu \to \infty} x = N^{1/3}$. In other words, it appears that the procedure of "quantization" proposed in this paper remains stable even in the hypothetical case of fractal event horizons.
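To make the geometric bookkeeping concrete: assuming level $k$ of the construction carries $M^k$ "spheres" of radius $\lambda^k r_0$ (prefactors and overlap corrections suppressed; the function name and sample values are ours), the area and volume after the Planck cutoff are finite geometric sums:

```python
from math import log, floor

def fractal_sums(M, lam, d, r0, L_pl):
    """Geometric-sum sketch of the quasi-fractal horizon.
    Level k: M^k spheres of radius lam^k * r0.
    Cutoff: the smallest sphere is roughly Planck-sized, lam^n r0 ~ L_pl."""
    n = floor(log(r0 / L_pl) / log(1.0 / lam))   # deepest allowed level
    area   = sum(M**k * (lam**k * r0) ** (d - 1) for k in range(n + 1))
    volume = sum(M**k * (lam**k * r0) ** d       for k in range(n + 1))
    return n, area, volume

n, A, V = fractal_sums(M=3, lam=1/3, d=3, r0=1.0, L_pl=1e-6)
print(n, A, V)
```

With $M\lambda^{d-1} < 1$ the area sum converges even for $n \to \infty$; the opposite regime $M\lambda^{d-1} \geq 1 > M\lambda^{d}$ (divergent area, finite volume) is exactly the exotic case discussed in Section VI.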

VI. IN CONCLUSION: A FEW MORE WORDS ON THE KOCH SNOWFLAKES
Before we wrap up, let us briefly discuss the results we have gained throughout our trek, and the avenues of research that remain untrodden. We have seen that higher-dimensional cosmological methods and models in more ways than one remain similar to the familiar three-dimensional models and methods. In particular, many analytic methods of integration originally designed for the three-dimensional Friedmann equations can easily be generalized to the $d$-dimensional case. This is illustrated in Appendix A, where we show that the general method invented by Tipler in [24] to evaluate the lifetime of a closed three-dimensional universe filled with matter obeying a barotropic equation of state with $w = \mathrm{const}$ can, with very slight alterations, be made to work for a universe of any dimension, and even yield an exact analytic expression for the total lifespan of a closed universe filled with phantom energy.
Nevertheless, there do exist at least two points of divergence from the well-known results. First, our Main Hypothesis predicts that universes with different numbers of dimensions will have different spectra of the vacuum density (19). We have seen that for $d = 2$ the spectrum is of a simple Coulomb type (20), with $E_{(2)} \approx 1/N^2$ (similar to the spectrum of a hydrogen atom). However, this is no longer true in higher dimensions: say, for $d = 3$ the spectrum behaves as $E_{(3)} \approx 1/N$, and for $d = 4$ as $E_{(4)} \approx 1/N^{2/3}$. Thus, if the Main Hypothesis is correct, the number of dimensions of a universe is indelibly imprinted in its possible values of the vacuum energy and, therefore, in the ultimate dynamics of that universe.
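The three scalings just quoted fit a single pattern: condition (18) gives $R_H \propto N^{1/(d-1)}$, while $E \propto \rho_\Lambda \propto H^2 \propto R_H^{-2}$, hence $E_{(d)} \propto N^{-2/(d-1)}$. A one-line check of the exponents (ours):

```python
from fractions import Fraction

def spectrum_exponent(d):
    """Exponent p in E_(d) ~ N^(-p), following from R_H ~ N^(1/(d-1))
    and E ~ R_H^(-2)."""
    return Fraction(2, d - 1)

for d in (2, 3, 4):
    print(d, spectrum_exponent(d))
# d=2 -> 2 (Coulomb, 1/N^2), d=3 -> 1 (1/N), d=4 -> 2/3 (1/N^(2/3))
```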
Another interesting point can be made about the limits on the "fractality" of the horizons discussed in Section V. In that section we considered Barrow's hypothesis of extending the horizons in a manner similar to the Koch snowflake [22]. The Koch snowflake, like its counterparts the Sierpiński gasket [25] and the Menger sponge [26], has the following fascinating property: while its volume is finite, its surface area might be infinite (in the case of the proper, two-dimensional Koch snowflake, this means a finite area with an infinite perimeter). In Section V we restricted ourselves to fractal horizons of finite $d$-volume and finite surface area. However, if the numbers $M$ and $\lambda$ in (25) happen to satisfy the double inequality (30), we will instead have a universe of finite volume (contained within the horizon) but infinite $d$-dimensional horizon surface area. And since it is the latter that defines the Bekenstein bound [3,27], condition (30) actually leads to an infinite entropy.
Granted, since an infinite entropy is tantamount to an infinite amount of "concealed" information, this raises a question: how can this have any physical meaning? In fact, it is most probable that in three dimensions it does not. However, this is no longer so certain in the case of higher dimensions! In order to understand why, let us recall a recent paper [28] by Barrow, where he demonstrated that the event horizons of $d$-dimensional black holes cease to effectively shield the central singularities when $d > 3$. In other words, in 4- and higher-dimensional universes the cosmic censorship can be violated, letting loose the monstrous naked singularities [28]. The reader will agree with us on the harsh epithet "monstrous" once she recalls that near a singularity all local physical laws must crumble. Of course, it ought to be said that most scientists hold the view that all classical singularities will eventually be entirely eliminated by quantum effects or, at the very least, will be localized as exotic points with extreme yet finite values of the scalar curvature, the energy-momentum tensor, etc. Unfortunately, a very simple counter-argument to this position exists: the absence of singularities as space-time boundaries [29] has the unfortunate side effect of the cosmological action becoming singular [30,31]. And this is an even more disastrous phenomenon, since it renders any attempt to calculate, say, a Feynman integral moot and meaningless.
Subsequently, Barrow and Tipler proposed the idea that the cosmological singularities, as well as those hidden inside black holes, all have to be treated as a "lesser evil": dangerous, but ultimately beneficial, since they protect us from a much greater "evil", a singular action. If their position is correct, then we should not expect the long-awaited theory of "quantum gravity" to help us get rid of cosmological singularities (for example, via some kind of "rebound", etc.). On the other hand, in order to accept the Barrow and Tipler position, we must know for certain that there exists some version of "cosmic censorship" which shrouds the singularities from outside observers. All this makes the mere possibility of "nakedness" for singularities in higher dimensions, announced by Barrow in [28], all the more devastating. And so, we have arrived at an impasse.
There might be two possible solutions to our predicament. The first one would be to do away with the concept of higher dimensions altogether. Of course, this also means dispensing with string theory (at least in its current form). This is hardly an improvement! The second way, while not as radical, is still rather exotic: we would have to assume that the surface area of a cosmological event horizon at $d > 3$ must be infinitely large. The reasoning behind this assumption is actually straightforward: after all, a finite surface area leads to a finite entropy and, thus, to a finite number of possible microstates within. Any such theory will fold under the infinite pressure of a singularity. If, however, the entropy is allowed to grow indefinitely, we get infinitely many distinct possible microstates. And then we at least get a fighting chance to associate some of them with the divergent processes that accompany the singularity. Granted, what we have adumbrated here is at this stage merely a hypothesis, and a speculative one at that; but it at least gives some glimmer of hope in our struggle with the seemingly insurmountable problem noticed by Barrow in [28].

APPENDIX A

In [24] a very elegant method was proposed by Frank Tipler, which allows one to easily integrate the Friedmann equations for a closed universe filled with matter whose equation of state is (A1), where $\gamma = \mathrm{const}$ is an arbitrary adiabatic index. In fact, the method is so robust that it can be applied to both closed and open universes (the flat case is trivial anyway). However, Tipler was mostly interested in the closed case, because his method allowed him to analytically calculate the total lifetime of such a universe, from the initial singularity to the final one. And, as we shall see below, Tipler's method remains quite applicable even in the $d$-dimensional case.
We will consider two examples: a closed $d$-dimensional Friedmann universe whose constituent matter satisfies certain energy conditions (see below), and a similarly closed universe filled with a phantom field. As we shall see, in both of these examples the life of the universe is finite and is sandwiched between two singularities. Of particular interest here is the phantom example (with $\gamma < 0$, $\gamma = \mathrm{const}$), since its boundary singularities are none other than Big Rips, which in and of itself appears to be very unusual (see, however, the earlier article [32], where this type of solution was first obtained, albeit stemming from a more complex equation of state).
Let us integrate the continuity equation, taking into account (A1) and using the density formula from (6). Temporarily abstaining from choosing the curvature $k$ of the universe, we get (A2), where $R^2$ is an integration constant and $c$ is the speed of light. Next, let us introduce a new variable $a = y^{2/(\gamma d - 2)}$ and switch to a new, conformal time $\tau$, related to the standard time $t$ via $dt = a(\tau)\,d\tau$. For brevity, we will from now on denote the derivative with respect to $\tau$ by a prime, so that $a\dot{a} = a'$. It is then straightforward to show that with the new variable, Equation (A2) takes the form (A3). For a closed universe with $k = +1$ this equation is the energy integral of a harmonic oscillator with frequency $\omega = c|\gamma d - 2|/2$. Hence, it is easy to derive the shapes of $y(\tau)$ and $a(t)$ in parametric form (A4). If $\gamma > 0$, the fields of matter satisfy the weak energy condition. Let us strengthen this a bit further by choosing $\gamma > 2/d$. At this point we should also point out that for cold matter ($\gamma = 1$) this condition is satisfied when $d > 2$, while for a radiation-filled universe (with $\gamma = (1 + d)/d$) the condition relaxes to $d > 1$. In other words, the condition we have just imposed is a very mild one. In any case, (A4) with this condition describes a universe which begins at $\tau = 0$, expands until the scale factor reaches its maximal value $a_{\max} = a(\tau = \frac{\pi}{2\omega})$, and then recollapses. Note that the critical density grows with the number of dimensions (as $d(d-1)$, to be precise), so by adding new dimensions we effectively pump up the critical density. On the other hand, the density of a closed universe must by definition exceed the critical density, or it will not be closed at all. And, of course, the bigger the density gets, the sooner the closed universe stops expanding and begins to collapse, severely reducing its own lifespan in the process. Finally, let us take a look at the more exotic case of phantom fields with $\gamma < 0$. We have already performed all the preliminary calculations, so we can jump straight to the results.
In conformal time the Friedmann solution looks like $a(\tau) = a_{\min}\left[\sin\frac{c(|\gamma|d + 2)\tau}{2}\right]^{-2/(|\gamma|d+2)}$, and we have a universe that coalesces from a Big Rip singularity at $\tau = 0$, collapses to its minimal size $a = a_{\min}$ at $\tau = \pi/\big(c(|\gamma|d + 2)\big)$, and then undergoes a rapid expansion which concludes in a second, and final, Big Rip singularity. The total lifespan in ordinary time can, as usual, be calculated via integration, and is given by (A10). In an interesting article [33] the following problem has been studied: the possibility for a universe to alter its topology while experiencing the Big Rip singularity. Before we conclude this article, we would like to point out that this problem might lead to even more curious results in $d$-dimensional cosmologies. After all, more dimensions usually imply more complex topologies, so it would be only natural to expect some rather peculiar scenarios there.
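For completeness, here is a numeric sketch (ours) of the phantom solution, with the exponent restored from $a = y^{2/(\gamma d - 2)}$ and $\omega = c(|\gamma|d + 2)/2$ for $\gamma < 0$, i.e., $a(\tau) = a_{\min}\big[\sin\frac{c(|\gamma|d+2)\tau}{2}\big]^{-2/(|\gamma|d+2)}$. The sample values of $c$, $d$, $\gamma$ are arbitrary; the check confirms the minimum at $\tau = \pi/\big(c(|\gamma|d+2)\big)$ and the divergence at the Rips:

```python
import math

c, d, gam = 1.0, 3, -0.5   # phantom case: gamma < 0 (sample values)
a_min = 1.0
w = c * (abs(gam) * d + 2)  # equals 2*omega in the notation of the text

def a(tau):
    """Reconstructed phantom scale factor in conformal time."""
    return a_min * math.sin(w * tau / 2) ** (-2.0 / (abs(gam) * d + 2))

tau_min = math.pi / w   # expected location of the minimal size
print(a(tau_min))       # = a_min
print(a(1e-3))          # large: the Big Rip as tau -> 0
```

The scale factor diverges at both $\tau \to 0$ and $\tau \to 2\pi/\big(c(|\gamma|d+2)\big)$, reproducing the "Rip-to-Rip" evolution described above.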