Cosmographic Constraints and Cosmic Fluids

The problem of reproducing dark energy effects is reviewed here with particular interest devoted to cosmography. We summarize some of the most relevant cosmological models, based on the assumption that the corresponding barotropic equations of state evolve as the universe expands, giving rise to the accelerated expansion. We describe in detail the $\Lambda$CDM ($\Lambda$-Cold Dark Matter) and $\omega$CDM models, considering also some specific examples, e.g., Chevallier-Polarski-Linder, the Chaplygin gas and the Dvali-Gabadadze-Porrati cosmological model. Finally, we consider the cosmological consequences of $f(\mathcal{R})$ and $f(\mathcal{T})$ gravities and their impact on the framework of cosmography. Keeping these considerations in mind, we point out the \emph{model-independent} procedure related to cosmography, showing how to match the series of cosmological observables to the free parameters of each model. We critically discuss the role played by cosmography, as a \emph{selection criterion} to check whether a particular model passes the cosmological constraints or not. In so doing, we find cosmological bounds by fitting the luminosity distance, expanded as a series in the redshift, z, adopting the recent Union 2.1 dataset of supernovae, combined with the baryonic acoustic oscillation and the cosmic microwave background measurements. We perform cosmographic analyses, imposing different priors on the present value of the Hubble rate. In addition, we compare our results with recent PLANCK limits, showing that the $\Lambda$CDM and $\omega$CDM models seem to be favored over other dark energy models. However, we show that cosmographic constraints on $f(\mathcal{R})$ and $f(\mathcal{T})$ cannot discriminate between extensions of General Relativity and dark energy models, leading to a disadvantageous degeneracy problem.

mass [33,34]. The obtained vacuum energy contribution is different from zero, and it is usually interpreted as the cosmological constant, Λ. In principle, simple calculations show that Λ may be responsible for the cosmic speed up [35], whereas the cosmological constant fluid behaves as a non-interacting perfect fluid, which does not cluster, as confirmed by several observations. Moreover, the equation of state (EoS) of DE, namely the ratio between the total pressure and density, ω ≡ P/ρ, is equal to ω = −1 at any epoch of the universe's evolution. This is a consequence of the fact that we consider a constant density and, consequently, a constant pressure. At late times, the DE dominates over matter, starting from a redshift known as the transition redshift [36,37].
It follows that such a vacuum energy cosmological constant [38] could be compatible with the early time epoch, and the corresponding model, namely ΛCDM (Λ-Cold Dark Matter), also fits the cosmological data at the present epoch [39]. Despite its experimental success, the ΛCDM model cannot be interpreted as the final cosmological paradigm. The model may represent, rather, a limiting case of a more general theory, which does not necessarily reduce to vacuum energy in terms of a cosmological constant. The reason lies in the fact that the ΛCDM model is jeopardized by several theoretical shortcomings. In particular, it is plagued by the coincidence and fine-tuning problems. These two issues concern, respectively, the unexplained coincidence between the present orders of magnitude of the matter density and the cosmological constant, and the strong discrepancy between the theoretical and observed magnitudes of Λ [40]. Thus, a plethora of different models has been proposed in order to address these shortcomings [41].
The simplest extension of ΛCDM is the ωCDM model [42], which relies on assuming a quintessence DE term, described by introducing a scalar field, φ [43], within GR, providing a constant EoS [44], i.e., ω ≡ P/ρ = const. On the one hand, the ωCDM model is a phenomenological coarse-grained extension of the standard cosmological model and naturally reduces to ΛCDM when ω = −1. On the other hand, the origin of the scalar field and its physical meaning in terms of a thermodynamical fluid is not completely clear. In particular, its meaning and behavior at late epochs should be addressed. In turn, we cannot consider the ωCDM model as the definitive paradigm able to describe the dynamics of DE. An immediate and simple alternative is offered by accounting for an evolving EoS for the DE fluid, originally named varying quintessence fluid [45], which leads to a time-dependent barotropic factor, i.e., ω = ω(z).
The aim is to reconstruct the EoS of DE, by demanding that the barotropic factor evolves as the universe expands and comparing it with cosmological data at different stages of the universe's evolution. This prescription is due to the fact that the form of ω(z) is not defined a priori, and so, one may reconstruct its evolution by bounding ω(z) and its derivatives, through the use of current observations only [46][47][48].
Several reconstruction methods have been implemented recently. Probably the most relevant approach is offered by the Chevallier-Polarski-Linder (CPL) parametrization of the EoS [49,50], in which the barotropic factor is Taylor expanded around the present value of the cosmological scale factor, a(t) = a_0 = 1, i.e., ω(a) = ω_0 + ω_1 (1 − a).
A relevant advantage of the CPL parameterization is the fact that it predicts an early matter-dominated universe, which agrees with current early time bounds. This represents an improvement over a linear approximation of ω(z), which diverges as the redshift increases. Rephrasing it, since a ≡ (1 + z)^{−1}, we have ω(z) = ω_0 + ω_1 z/(1 + z), showing, at low and at high redshifts, ω(z → 0) = ω_0 and ω(z → ∞) = ω_0 + ω_1, respectively.
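The limiting behavior quoted above can be verified with a minimal numerical sketch (the values of ω_0 and ω_1 below are illustrative, not fitted):

```python
# CPL equation of state: w(z) = w0 + w1 * z/(1+z).
# Limits: w(z -> 0) = w0 and w(z -> infinity) = w0 + w1.
def w_cpl(z, w0, w1):
    return w0 + w1 * z / (1.0 + z)

w0, w1 = -1.0, 0.3   # illustrative values, not fitted ones
print(w_cpl(0.0, w0, w1))   # -1.0
print(w_cpl(1e8, w0, w1))   # very close to w0 + w1 = -0.7
```

The divergence of a plain linear parametrization ω(z) = ω_0 + ω_1 z at large z is avoided precisely because z/(1 + z) is bounded.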
However, the degeneracy between ω_0 and ω_1 and the corresponding experimental difficulties of reconstructing the DE evolution plague the CPL parametrization. Hence, even this approach cannot be considered as the standard paradigm, capable of describing DE at all epochs.
Other viable alternatives rely on physical postulates rather than phenomenological approaches; one such possibility is offered by extending the standard Einstein-Hilbert action. Under this scheme, introducing curvature corrections or quantum gravity modifications permits us to reproduce the DE effects as a natural consequence of a more general theory of gravitation. In this regime, the possibilities of extending GR span from f(R) gravity to Hořava-Lifshitz corrections, and so forth [51,52]. The main disadvantage is, however, the fact that the correct modification is not known a priori. Hence, the need for model-independent techniques and experimental tests, able to rule out cosmological models, is probably one of the most important prerogatives of modern cosmology.
In other words, a procedure that directly compares cosmological observables in terms of data, instead of cosmological models with data, should guarantee improved constraints, being capable of discarding the models that do not match the corresponding experimental limits. In this regard, the well-known approach of cosmography has increased its importance during this time, becoming a relevant model-independent technique to fix cosmological constraints on late time cosmology. In particular, cosmography represents the part of cosmology that studies the universe's dynamics by requiring the validity of the cosmological principle only. Cosmography does not account for any model a priori, and it is probably the most powerful technique to derive cosmological bounds directly from data surveys, without passing through the assumption of a cosmological model. In so doing, the cosmography prescription consists in expanding the cosmological observables of interest around present time, obtaining Taylor series that can be matched directly with data. The degeneracy problem among cosmological models, which leads to the issue that different models fit data with high precision [53,54], is therefore naturally alleviated.
To better understand this statement, let us recall the fact that a single experiment is influenced by a strong model dependence on the cosmological parameters involved in the analysis. This is a consequence of the basic assumption of any cosmological test, i.e., that the model under examination is statistically assumed to be the best one.
This leads to the thorny degeneracy of fitting cosmological data, since we cannot distinguish which model is really favored over others.
As a possible way out, one can combine observational tests, i.e., supernovae Ia (SNeIa), baryonic acoustic oscillation (BAO), cosmic microwave background (CMB) and so on, reducing the allowed region in the parameter space of cosmological data.
In other words, by combining different cosmological tests, it would be possible to reduce the best fit phase space region, healing the degeneracy between cosmological models. For these reasons, cosmography may represent a technique for limiting degeneracy, without necessarily invoking several datasets and without assuming any cosmological model a priori [55][56][57]. The main purpose of this review is to point out the connections between cosmography and cosmological models, showing that it is possible to alleviate the cosmological degeneracy, by adopting the cosmographic model-independent tests. We will discuss in detail the main prescriptions of cosmography, and we will describe the most relevant cosmological models, showing how it is possible to relate such models with cosmographic bounds. In addition, we will take a look at the most common cosmological fits, showing constraints on cosmographic parameters through the use of SNeIa.
This review paper is organized as follows: in Section 2, we discuss cosmography as a selection criterion for determining present time constraints without the need for a cosmological model assumed a priori. In Section 3, we illustrate the main disadvantages of cosmography and the problems associated with expanding the cosmological observables in Taylor series. In Section 4, we emphasize how to relate cosmography to the EoS of the universe. In Section 5, we review a few cosmological models of particular interest in the framework of modern cosmology. In particular, we summarize the main features of ΛCDM (Section 6), ωCDM (Section 7), the CPL parametrization (Section 8), the Chaplygin gas (Section 9), the DGP model (Section 10) and finally, the corresponding phenomenological extension, i.e., the αDGP model (Section 10.1). In Section 11, we briefly review the so-called Extended Theories of Gravity, focusing on the case of f (R) theories (Section 11.1) and f (T ) (Section 11.2). Further, in Section 12, we introduce the concept of experimental procedures in cosmology, shedding light on the observational problem (Section 12.1), BAO, SNeIa and CMB measurements. Finally, Section 13 is devoted to conclusions and the perspectives of this review.

Cosmography: A Selection Criterion for Cosmological Models
We describe in detail the role played by cosmography and its consequences on observational cosmology. Cosmography represents a selection criterion to discriminate which model behaves better than others in comparison with cosmological data [58]. In fact, all numerical tests depend on the choice of the particular cosmological model under examination, leading to a strong degeneracy problem. In order to alleviate the degeneracy between cosmological models, one needs to introduce model-independent procedures for constraining cosmological scenarios, discriminating the statistically favored paradigm. An intriguing approach to the degeneracy problem is to take into account the cosmological quantities that can be fitted without the need for postulating a model a priori. Cosmography was first widely discussed by Weinberg [25] and then extended in [59,60]. The core of cosmography is to assume the validity of the cosmological principle only, without any further assumptions on Einstein's equations. In other words, once the Friedmann-Robertson-Walker (FRW) metric is involved, cosmography teaches us how to get bounds on the observable universe, through direct measurements of the observable expansions in terms of a(t). Therefore, if we write the FRW metric as: ds² = −dt² + a(t)² [dr²/(1 − kr²) + r²(dθ² + sin²θ dφ²)], with a(t) the scale factor and k the standard spatial curvature, then cosmography permits us to infer how much DE or alternative components are required to guarantee current observations, without postulating any cosmological model at the beginning of our analyses. The idea is to relate the cosmographic expansions to the free parameters of a given model.
Hence, one can appraise which models behave fairly well and which are disfavored as a consequence of not satisfying the basic demands introduced by cosmography. Usually, we refer to cosmography as the part of cosmology that tries to infer kinematical quantities by keeping the FRW geometry only. Rephrasing it, the framework of cosmography is represented by assuming Taylor expansions of the scale factor as the basic postulate. In so doing, the standard Hubble law becomes a Taylor series expansion for low z. Starting from the Hubble law, v = H_0 D, it is easy to obtain the following expression: z ≈ H_0 D/c, where D is the causal distance traveled by photons from an emitter to an observer. Since 1 + z = 1 + v/c, it follows: 1 + z = a(t_0)/a(t), which is the redshift definition in terms of the scale factor, a(t). Afterwards, we can evaluate a Taylor series expansion of this quantity, obtaining an expression of the redshift in terms of the physical distance, z(D). These descriptions represent the basic ingredient to expand the scale factor into a series, yielding: a(t) = a_0 [1 + H_0 Δt − (q_0/2) H_0² Δt² + (j_0/6) H_0³ Δt³ + (s_0/24) H_0⁴ Δt⁴ + O(Δt⁵)], where we conventionally truncated the series at the fourth order in Δt ≡ t − t_0. Here, we assume that t − t_0 > 0, in order to preserve the causality of cosmological observations. By simply assuming that the constant, a_0, i.e., the scale factor value today, is a_0 = 1, we have: a(t) = 1 + H_0 Δt − (q_0/2) H_0² Δt² + (j_0/6) H_0³ Δt³ + (s_0/24) H_0⁴ Δt⁴ + O(Δt⁵), with the definition of the cosmographic series (CS) as: H ≡ (1/a) da/dt, q ≡ −(1/(aH²)) d²a/dt², j ≡ (1/(aH³)) d³a/dt³, s ≡ (1/(aH⁴)) d⁴a/dt⁴. The Hubble rate represents the first derivative with respect to the cosmic time of the logarithm of the scale factor, a, whereas the acceleration parameter, q, indicates whether the universe is currently accelerating or not. A currently accelerating universe provides −1 ≤ q_0 ≤ 0, where the subscript, 0, points out that q (and all the cosmographic parameters) are evaluated at present time. The variation of the acceleration, namely the jerk parameter, j, if positive, would indicate that q changed sign in the past, at the transition redshift, corresponding to q = 0. Finally, s_0 indicates whether j changed sign as the universe expands.
If negative, the jerk parameter keeps the same sign as its present measured value.
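The CS definitions can be checked with a small numerical sketch, evaluating H, q and j by finite differences for a toy power-law scale factor, a(t) = t^n (the power-law form is an assumption of this sketch, not a fitted cosmology; for n = 2/3, matter domination, one expects q = 1/2 and j = 1):

```python
# Evaluate the cosmographic series H, q, j for a(t) = t**n via
# centered finite differences of the scale factor.
def derivs(f, t, h=1e-3):
    # first three centered finite-difference derivatives of f at t
    d1 = (f(t + h) - f(t - h)) / (2 * h)
    d2 = (f(t + h) - 2 * f(t) + f(t - h)) / h**2
    d3 = (f(t + 2*h) - 2*f(t + h) + 2*f(t - h) - f(t - 2*h)) / (2 * h**3)
    return d1, d2, d3

def cs(f, t):
    a = f(t)
    a1, a2, a3 = derivs(f, t)
    H = a1 / a                 # H = (1/a) da/dt
    q = -a2 * a / a1**2        # q = -(1/(a H^2)) d2a/dt2
    j = a3 * a**2 / a1**3      # j =  (1/(a H^3)) d3a/dt3
    return H, q, j

H, q, j = cs(lambda t: t**(2.0 / 3.0), 1.0)
print(round(q, 3), round(j, 3))   # 0.5 1.0
```

For a = t^n one has, analytically, q = −(n − 1)/n and j = (n − 1)(n − 2)/n², so matter domination (n = 2/3) gives the expected deceleration q = 1/2.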
After straightforward calculations, we can obtain an expression for the luminosity distance, d_L, expanded in series around z ≈ 0. In particular, considering the expression for the causal distance and the definition of d_L in terms of it, we have: d_L(z) = d_H z [1 + (1 − q_0) z/2 − (1 − q_0 − 3q_0² + j_0 + Ω_0 − 1) z²/6 + O(z³)], where d_H = 1/H_0 is the Hubble radius and Ω_0 the total energy density of the universe, which can be related to the spatial curvature density, Ω_k ≡ k/(a_0 H_0)², in terms of Ω_k = Ω_0 − 1. As we will clarify later, the closure relation, i.e., Ω_k = 0, would provide a total density fixed to Ω_0 = 1.
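As a rough numerical illustration, a sketch (assuming flat ΛCDM with Ω_m = 0.3 purely as a test case) compares the second-order truncation d_L ≈ (z/H_0)[1 + (1 − q_0)z/2] with the exact luminosity distance integral:

```python
# Compare the cosmographic truncation of d_L with the exact flat-LCDM
# integral d_L = (1+z)/H0 * Integral dz'/E(z').  Units of d_H = 1/H0.
def E(z, Om=0.3):
    return (Om * (1 + z)**3 + 1 - Om) ** 0.5

def dl_exact(z, Om=0.3, n=1000):
    # simple trapezoidal integration of the comoving distance
    dz = z / n
    s = sum(0.5 * (1 / E(i * dz, Om) + 1 / E((i + 1) * dz, Om)) * dz
            for i in range(n))
    return (1 + z) * s

def dl_series(z, q0):
    # second-order cosmographic truncation
    return z * (1 + 0.5 * (1 - q0) * z)

q0 = 1.5 * 0.3 - 1            # LCDM value q0 = 3*Om/2 - 1 = -0.55
z = 0.05
print(abs(dl_series(z, q0) / dl_exact(z) - 1) < 1e-2)   # True
```

At z = 0.05 the truncation error is at the sub-percent level, while at z of order unity the series degrades, which anticipates the convergence issues discussed below.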
For the sake of clarity, we want to underline the fact that cosmography can be extended to the framework of an inhomogeneous universe. In that case, extensions of the CS are required in order to define a new set of cosmographic parameters. The role played by the acceleration parameter, the jerk parameter, and so forth, must be inferred by starting from a different metric. An example of inhomogeneous space-time is provided by the Lemaître-Tolman-Bondi metric [61]. However, a self-consistent scheme for inhomogeneous cosmography is not, so far, completely understood (for additional details, see [62]). Inhomogeneous cosmographic investigations are extremely important in order to measure possible departures from the standard homogeneous and isotropic universe. However, in this review, we do not treat the case of inhomogeneous cosmography extensively.

Issues Related to Cosmography
Although cosmography relates cosmological parameters to power series expansions, permitting one to directly measure cosmological observables, the possibility of a truly model-independent procedure depends on Ω_0. In fact, the measurement of j_0 cannot be performed alone: one can only measure the sum j_0 + Ω_0, instead of j_0. This prescription entails further difficulties for the numerical procedure, since Taylor expansions cannot alleviate the degeneracy between j_0 and Ω_0. In other words, to reduce this problem, one needs geometrical ways to constrain the universe's scalar curvature, i.e., finding out limits that permit one to fix the scalar curvature. In particular, if the DE term were completely time independent, the bounds inferred from CMB measurements would indicate a universe compatible with spatial flatness. On the contrary, if the EoS of DE is a function of the cosmic time, limits on Ω_k are not so restrictive, and it is not completely clear whether the spatial curvature is really negligible. However, at present times, the universe can be fairly well approximated through a vanishing scalar curvature, and so, we will not lose generality by imposing hereafter Ω_k = 0. In turn, cosmography becomes, therefore, a model-independent procedure to constrain cosmological data. On the one hand, cosmography constrains cosmological models without the need for a particular theoretical framework; on the other hand, it approximates the original function at a certain order of the truncated series. Drawbacks associated with those approximations are therefore involved. In fact, the exact expansion of cosmological parameters will clearly be different from a truncated series, with an error that becomes smaller as the series is extended to higher orders. If one truncates the series at low orders, the corresponding errors would propagate into the analyses, considerably complicating the resulting constraints.
Even if the analyses are plagued by a truncated series problem, it is desirable to keep the number of fitting parameters as low as possible, in order to reduce the broadening of the posterior distributions for each parameter, derived from extending the parameter space. In other words, a low order of the Taylor expansion would propagate numerical departures within the error bars of the cosmographic parameter. However, a high order may increase the phase space, enlarging the allowed statistical regions of cosmic parameters. To alleviate these shortcomings, one can consider different datasets, combining cosmological data with more than one survey compilation. Afterwards, one may truncate at an intermediate order of the Taylor series all the observable quantities of interest in order to reduce systematics on measurements.
In addition to the above problems, although cosmography represents a viable technique to perform model-independent fits, the numerical procedure of expanding around our time is plagued by problems related to divergences in the Taylor series of the CS. The CS may diverge at z > 1, since we are expanding around z ∼ 0, as a consequence of the particular convergence radius associated with each expansion. In other words, when one exceeds the approximation of low redshift, problems due to convergence may occur, and so, finite truncations could introduce systematics in the measurements of cosmological quantities, providing possibly misleading results.
One well consolidated way to escape the convergence problem is represented by re-parameterizing the redshift variable, through the use of auxiliary variables, circumscribed in a low redshift domain, in which any possible parameterization of the redshift, z, namely y_i, may be limited to y_i < 1. An example is the following well-known variable: y_1 = z/(1 + z), which has the advantage of converging for past and future times, respectively: z ∈ (0, ∞) ⇒ y_1 ∈ (0, 1); z ∈ (−1, 0) ⇒ y_1 ∈ (−∞, 0). One can rewrite the luminosity distance in terms of the auxiliary variable, expanding it in a Taylor series around y_1 = 0. The corresponding discrepancies between the two procedures do not influence the systematics of the cosmographic analysis, and so, the luminosity distance of Equation (9) can be rewritten as a series in y_1 [63], where Ω_0 is again the net energy density of the universe. On the other side, other interesting variables have been introduced [60]: y_2 = arctan[z/(1 + z)], y_3 = z/(1 + z²), y_4 = arctan z, whose limits are, for z ∈ [0, ∞]: y_2 ∈ [0, π/4], y_3 ∈ [0, 0], y_4 ∈ [0, π/2], and, for z ∈ [−1, 0]: y_2 ∈ [−π/2, 0], y_3 ∈ [−1/2, 0], y_4 ∈ [−π/4, 0]. Such variables respect the condition of finite intervals, in which the redshift is rescaled to values that do not allow bad convergence and divergence in the Taylor expansions.
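A minimal sketch of the auxiliary-variable idea, assuming the definitions y_1 = z/(1 + z) and y_4 = arctan z, shows how the unbounded redshift is mapped onto finite intervals:

```python
# Auxiliary redshift variables: map z in [0, infinity) onto a bounded range,
# so Taylor expansions around the present epoch keep a finite domain.
import math

def y1(z):
    return z / (1.0 + z)      # y1 in (0, 1) for z in (0, infinity)

def y4(z):
    return math.atan(z)       # y4 in (0, pi/2) for z in (0, infinity)

print(y1(0.0), y1(1e9) < 1.0)              # 0.0 True
print(abs(y4(1e9) - math.pi / 2) < 1e-6)   # True
```

Whatever the chosen variable, the requirement is the same: the new variable must stay bounded over the redshift range probed by the data, so the series never leaves its convergence radius.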
Some interesting steps to contrive viable redshift parameterizations are based on the reconstruction of the redshift variable and on its invertibility with respect to z. In particular, one may require that the luminosity distance curve should not behave too steeply in the interval z < 1, providing a smooth profile, without inflection points. Rephrasing it mathematically, the curve should be one-to-one invertible in terms of the auxiliary variables, providing tighter intervals for the redshifts. For the sake of completeness, the corresponding luminosity distances can be rewritten as Taylor expansions in y_2, y_3 and y_4, respectively. Note again that we adopted Ω_0 = 1 in the above expansions. The need for accurate cosmographic analyses relies on another important fact. Indeed, we notice that DE need not be interpreted in terms of a mysterious substance. The interpretation of DE in terms of a mysterious fluid is a consequence of assuming a further fluid within Einstein's field equations, to drive the observed speed up of the universe. On the contrary, carrying out accurate cosmographic analyses would permit cosmologists to describe DE through possible different interpretations. This philosophy can open new insights for discovering the physics behind DE (for example, the anthropic principle, the holographic principle, accelerating frames, and so on [64]). In other words, since cosmography could discriminate the real nature of DE, improved cosmographic analyses would refine our theoretical knowledge of the observable universe, as well.

Cosmography and the Equation of State of the Universe
The CS represents a resource for determining cosmological bounds without the need for a cosmological model postulated a priori. The subscript "0" indicates that the CS is evaluated at redshift z = 0, which represents our time. It is useful to combine the CS among themselves, obtaining, at a generic time, t: q = −1 − Ḣ/H², j = Ḧ/H³ − 3q − 2, and analogously for s, where we expressed q, j and s in terms of the Hubble rate, i.e., H ≡ d(ln a)/dt. The dots indicate the derivatives with respect to the cosmic time. Analogously, one can expand another important quantity of cosmological interest, i.e., the total pressure as a function of the whole energy density, P = P(ρ), as: P ≈ P_0 + Σ_{n=1}^{2} (1/n!) (dⁿP/dρⁿ)|_0 (ρ − ρ_0)ⁿ, with the derivatives (dⁿP/dρⁿ)|_0 evaluated at z = 0 and the series truncated at n = 2. Simple algebra leads to: dP/dρ = (dP/dz)(dρ/dz)^{−1}, where we used the fact that the total density of the universe is a function of the redshift, z, i.e., ρ = ρ(z).
Simple calculations permit one to relate the EoS of the universe to the cosmographic coefficients. For a spatially flat universe, we have: ω = (2q − 1)/3, which represents the total EoS of all the species that enter the standard energy-momentum tensor. At present time, Equation (20) can be expressed in terms of standard matter and DE. We therefore obtain: ω_0 = P_DE/(ρ_m + ρ_DE) = ω_DE ρ_DE/ρ, where we have used the fact that ρ = ρ_m + ρ_DE and that the matter pressure vanishes, P_m = 0.
With each species, we can associate a corresponding EoS, which is generally different from that of Equation (21).
We briefly report below some examples of the EoS of single species: ω_m = 0, ω_k = −1/3, ω_r = 1/3, respectively, for standard pressureless matter (baryons and CDM), scalar curvature and radiation. In general, the expression of the EoS of DE reads: ω_DE = P_DE/ρ_DE, in which we neglected radiation and scalar curvature, to be consistent with the current bounds on the observable universe. Note that, generally, ω_DE ≠ ω, and so, the corresponding CS is different if the universe is characterized by one, two or more fluids and, further, that ρ > ρ_DE.
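For a spatially flat universe, the total EoS and the deceleration parameter obey ω = (2q − 1)/3; a quick consistency sketch against ΛCDM, for which the total EoS today is ω = −Ω_Λ (Ω_m = 0.3 is illustrative):

```python
# Check (flat universe): total EoS omega = (2q - 1)/3.
# For LCDM today: P_tot = -rho_Lambda, so omega_tot = -Omega_Lambda,
# while the deceleration parameter is q0 = 3*Om/2 - 1.
Om = 0.3
q0 = 1.5 * Om - 1
w_from_q = (2 * q0 - 1) / 3
w_direct = -(1 - Om)      # omega_tot = -rho_Lambda/rho_tot = -Omega_Lambda
print(abs(w_from_q - w_direct) < 1e-12)   # True
```

The agreement simply reflects that both expressions descend from the same Friedmann equations; the cosmographic route, however, needs only q_0, not the model.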

Dark Energy Cosmological Models
We discussed the role of cosmography in the present status of cosmology. We described how to relate the derivatives of the pressure and the EoS to the CS. In this section, we wonder whether it is possible to relate the total matter density of a particular cosmological model to the CS. This prescription is actually important, since the allowed limits on the matter density can be inferred from measuring the CS. A model that does not confirm cosmographic bounds is disfavored over the ones that pass those constraints. This methodology is sometimes referred to as the cosmographic test, and it is similar to the statefinder diagnostic. To perform a cosmographic test, we briefly sketch some cosmological scenarios among the most relevant ones. We can therefore investigate the numerical predictions of any model through the use of cosmography.
As already stressed in the Introduction, we focus on the simplest assumption of a homogeneous and isotropic universe. The corresponding Friedmann equations read: H² = (8πG/3) ρ − k/a², ä/a = −(4πG/3)(ρ + 3P), which have been written in general, leaving the spatial curvature, k, free. The energy-momentum conservation for a barotropic fluid, i.e., ∇_µ T^{µν} = 0, provides: ρ̇ + 3H(ρ + P) = 0, and, with the definition of the cosmic time in terms of z, i.e., dt = −dz/[(1 + z)H], we can recast a convenient expression for the continuity equation, rewriting Equation (25) as follows: dρ/dz = 3(1 + ω) ρ/(1 + z). From Equation (27), it is easy to prove: ρ = ρ_0 exp[3 ∫_0^z (1 + ω(z'))/(1 + z') dz'], which teaches us how to relate the total energy density to the barotropic factor, ω(z). Depending on the particular cosmological model or, alternatively, on ω(z), one gets different evolutions of the total density, ρ.
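The continuity equation above can be integrated numerically; a minimal sketch, checking the closed form ρ/ρ_0 = (1 + z)^{3(1+ω)}, valid for constant ω (dust and a cosmological constant as test cases):

```python
# Integrate d(ln rho)/dz = 3 (1 + w(z)) / (1 + z) from 0 to z by the
# trapezoidal rule, and compare with the constant-w closed form.
import math

def ln_rho_ratio(z, w_of_z, n=2000):
    dz = z / n
    f = lambda x: 3 * (1 + w_of_z(x)) / (1 + x)
    return sum(0.5 * (f(i * dz) + f((i + 1) * dz)) * dz for i in range(n))

# dust (w = 0): rho/rho0 = (1+z)^3 ; cosmological constant (w = -1): constant
r_dust = math.exp(ln_rho_ratio(2.0, lambda z: 0.0))
r_cc   = math.exp(ln_rho_ratio(2.0, lambda z: -1.0))
print(abs(r_dust - 27.0) < 1e-2, abs(r_cc - 1.0) < 1e-12)   # True True
```

The same integral, evaluated with the CPL form of ω(z), is exactly what produces the DE factor entering the Hubble rates quoted in the following sections.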
For barotropic fluids, ω degenerates with the spatial curvature density, Ω k , and as previously stressed, one should include Ω k as an additional parameter. Current data are not accurate enough to distinguish between ω(z) and Ω k alone, owing to a strong degeneracy problem between them.
As discussed, this condition is transferred to the measurements of the jerk parameter, j, which cannot be constrained alone if the scalar curvature is not fixed by some geometrical bound. Here, for simplicity, we assume a negligible spatial curvature, Ω k [42], as already discussed in previous sections.
To relate cosmography with cosmological models, we take into account the following paradigms: 1. The ΛCDM model; 2. DE models with a constant equation of state, derived from a scalar field coupling with curvature (ωCDM or quintessence); 3. DE models with an EoS parameterized in terms of powers of a(t) (e.g., the CPL parametrization); 4. DE models interacting with CDM (e.g., the Chaplygin gas); 5. DE from quantum effects (e.g., the Dvali-Gabadadze-Porrati (DGP) model and its phenomenological extension); 6. f (R)-gravity models; 7. f (T )-gravity models.
In the upcoming sections, we are going to briefly review each of them, before discussing the connections among them and cosmography.

The ΛCDM Model
The cosmological constant was originally introduced by Einstein himself, in order to reproduce a static universe. After the discovery that the universe is not static, the cosmological constant was discarded, due to its original misinterpretation. Recently, its reintroduction has been proposed, in order to guarantee that the universe accelerates. The physical meaning of Λ is, however, rather different from the one originally proposed by Einstein. In particular, given the Einstein tensor, G_µν = R_µν − (1/2) g_µν R, and the corresponding energy-momentum tensor, T_µν: T_µν = (ρ + P) u_µ u_ν − P g_µν, both satisfying the Bianchi identities, it would be natural to include a constant term, Λ, which makes the Einstein equations the most general possible. Indeed, an integration constant does not influence the validity of the Bianchi identities, and in the context of standard GR, quantum field theory interprets the cosmological constant as a vacuum energy contribution, which enters the right side of Einstein's equations. In a homogeneous and isotropic background, the Friedmann equations are rewritten analogously to Equation (24), by only replacing P and ρ with: ρ → ρ + ρ_Λ, P → P − ρ_Λ, with ρ_Λ ≡ Λ/(8πG). Despite its theoretical interpretation, the meaning of Λ is nowadays the object of strong debate. Even if particle physics interprets Λ as a vacuum energy density, the role of Λ is plagued by a severe fine-tuning problem. In particular, the vacuum energy density is naively evaluated by summing the zero-point energies of quantum fields with mass m: ρ_vac = (1/(4π²)) ∫ dκ κ² √(κ² + m²), which exhibits an ultraviolet divergence, overcome by assuming a cut-off scale, κ_max, proportional to the Planck mass, as already discussed in the Introduction. As a consequence: ρ_vac ≈ κ_max⁴/(16π²), which is a factor of about 10^120 larger than the value measured by cosmological observations. This shortcoming is known in the literature as the fine-tuning problem. In addition, a coincidence problem plagues the model, i.e., the present value of the Λ density is comparable with the matter density, Ω_m, today.
If we define the cosmological constant density as Ω_Λ ≡ Λ/(3H_0²), we find that Ω_Λ is of the same order as Ω_m today, which appears as a strange coincidence, since Ω_Λ does not evolve as the universe expands. Hence, it is not clear why we would live exactly in an era in which the two orders of magnitude are so close to each other. By considering Equations (24) and (30), the normalized Hubble rate, E(z) ≡ H(z)/H_0, easily reads: E(z) = √(Ω_m(z) + Ω_Λ), where, hereafter, Ω_m(z) ≡ Ω_m (1 + z)³ and, to guarantee E(z = 0) = 1, Ω_Λ = 1 − Ω_m. The ΛCDM model depends on one parameter only, i.e., Ω_m, and provides the following expressions for the CS: q_0 = (3/2) Ω_m − 1, j_0 = 1, and: s_0 = 1 − (9/2) Ω_m. Inverting Equation (35), we get: Ω_m = (2/3)(1 + q_0), which represents the matter density as a function of the acceleration parameter. It is easy to show that if a model depends on one parameter, we expect that it depends on q_0 only. If a model depends on more than one parameter, then we need to relate its Ω_m also to j_0 and s_0.
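The ΛCDM relations q_0 = (3/2)Ω_m − 1 and Ω_m = (2/3)(1 + q_0) can be checked numerically through q(z) = (1 + z) E′(z)/E(z) − 1, with a finite difference (Ω_m = 0.3 is illustrative):

```python
# Finite-difference check of the LCDM cosmographic relations.
def E(z, Om=0.3):
    return (Om * (1 + z)**3 + 1 - Om) ** 0.5

def q(z, Om=0.3, h=1e-5):
    dE = (E(z + h, Om) - E(z - h, Om)) / (2 * h)
    return (1 + z) * dE / E(z, Om) - 1

Om = 0.3
print(abs(q(0.0, Om) - (1.5 * Om - 1)) < 1e-6)      # True: q0 = 3*Om/2 - 1
print(abs(2 * (1 + q(0.0, Om)) / 3 - Om) < 1e-6)    # True: inversion recovers Om
```

Since j_0 = 1 identically in ΛCDM, any measured departure of j_0 from unity would, by itself, disfavor the model.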

The Quintessence Model
In analogy to the ΛCDM model, we wonder whether the origin of the negative barotropic factor may be derived from first principles. Probably the simplest approach generalizing the ΛCDM model deals with assuming that an unknown component, namely quintessence, is derived, for example, from a slowly-rolling dynamical scalar field with a non-canonical kinetic term in the Lagrangian. The corresponding paradigm predicts a constant EoS, in which the barotropic factor is consistent with modern observations. For the sake of clarity, the cosmological model is named ωCDM, due to its analogy with the ΛCDM paradigm. Here, the role of the cosmological constant is replaced by a constant barotropic factor. The normalized Hubble rate, E(z), then reads: E(z) = √(Ω_m(1 + z)³ + Ω_Q (1 + z)^{3(1+ω)}). The ωCDM model is a two-parameter model, with parameters Ω_m and ω, and Ω_Q = 1 − Ω_m. The ωCDM model leads to the following CS: q_0 = 1/2 + (3/2) ω (1 − Ω_m), and: j_0 = 1 + (9/2) ω (1 + ω)(1 − Ω_m). By inverting the coefficients, we have: 1 + ω = 2(j_0 − 1)/[3(2q_0 − 1)], and: 1 − Ω_m = (2q_0 − 1)²/(2j_0 − 6q_0 + 1). Alternatively, the ω parameter reads: ω = (2j_0 − 6q_0 + 1)/[3(2q_0 − 1)], and so, combining Equations (43) and (45), we have: Ω_m = 1 − (2q_0 − 1)²/(2j_0 − 6q_0 + 1), which represents the value of Ω_m in terms of q_0 and j_0, as expected, since ωCDM depends on two parameters.
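A sketch of the ωCDM cosmographic round trip: compute q_0 = 1/2 + (3/2)ω(1 − Ω_m) and j_0 = 1 + (9/2)ω(1 + ω)(1 − Ω_m) from (Ω_m, ω), then invert them back algebraically (these standard ωCDM relations are assumed here; the input values are illustrative):

```python
# Forward map (Om, w) -> (q0, j0) and its algebraic inversion.
def cs_wcdm(Om, w):
    q0 = 0.5 + 1.5 * w * (1 - Om)
    j0 = 1.0 + 4.5 * w * (1 + w) * (1 - Om)
    return q0, j0

def invert(q0, j0):
    # solve the two relations above for (Om, w)
    w = (2 * j0 - 6 * q0 + 1) / (3 * (2 * q0 - 1))
    Om = 1 - (2 * q0 - 1)**2 / (2 * j0 - 6 * q0 + 1)
    return Om, w

q0, j0 = cs_wcdm(0.3, -0.9)
Om, w = invert(q0, j0)
print(abs(Om - 0.3) < 1e-9, abs(w + 0.9) < 1e-9)   # True True
```

Setting ω = −1 in the forward map gives q_0 = 1/2 − (3/2)(1 − Ω_m) = (3/2)Ω_m − 1 and j_0 = 1, i.e., the ΛCDM limit, as it should.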

The Chevalier-Polarsky-Linder Parameterization
Differently from the two mentioned approaches, we may include a rolling scalar field, φ, in order to establish a direct parametrization of the EoS of DE.
In general, to characterize the DE density, one needs to know the form of ω = ω(z), obtaining the corresponding normalized Hubble rate: E(z) = √(Ω_m(1 + z)³ + (1 − Ω_m) f(z)), where f(z) is a generic function of z, providing the evolving DE term as a function of the redshift, z.
We have: f(z) = exp[3 ∫_0^z (1 + ω(z'))/(1 + z') dz']. If the evolution of ω(z) is the result of a rolling scalar field, a simple choice is to parameterize ω(z) in terms of a Taylor expansion around z = 0: ω(z) = Σ_n ω_n z^n, where the coefficients, ω_n, are related to the derivatives of ω evaluated at z = 0. Assuming a direct Taylor expansion leads to the problem of truncating the series and may provide significant discrepancies with observations at higher redshifts. This drawback can be alleviated by matching Taylor expansions directly to data. Many approaches have been developed in the literature, and here, we discuss the CPL parametrization, given by: ω(z) = ω_0 + ω_1 z/(1 + z), where ω_0 and ω_1 are constants. One can interpret the CPL parametrization as a first-order Taylor series in powers of a(t), around a = 1. The corresponding normalized Hubble rate is: E(z) = √(Ω_m(1 + z)³ + (1 − Ω_m)(1 + z)^{3(1+ω_0+ω_1)} exp[−3ω_1 z/(1 + z)]). In this case, the model depends on three parameters, Ω_m, ω_0 and ω_1, and the CS reads: q_0 = 1/2 + (3/2) ω_0 (1 − Ω_m), and: j_0 = 1 + (3/2)(1 − Ω_m)[3ω_0(1 + ω_0) + ω_1], while the expression for s_0 is lengthy and we do not report it here. In order to constrain Ω_m, one needs to know with high precision q_0, j_0 and s_0, since the CPL parametrization is a three-parameter-dependent model. Hence, we expect a broadening of the systematics in evaluating the coefficients, since the numerical values of the CS, as we will see later, are not well constrained by present data.
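A sketch of the CPL normalized Hubble rate, with a finite-difference check that q_0 = 1/2 + (3/2)ω_0(1 − Ω_m), which follows since ω(z = 0) = ω_0 (the parameter values are illustrative):

```python
# CPL Hubble rate and a numerical check of its deceleration parameter.
import math

def E_cpl(z, Om, w0, w1):
    de = ((1 - Om) * (1 + z)**(3 * (1 + w0 + w1))
          * math.exp(-3 * w1 * z / (1 + z)))
    return (Om * (1 + z)**3 + de) ** 0.5

def q0_num(Om, w0, w1, h=1e-5):
    # q(z) = (1+z) E'(z)/E(z) - 1, evaluated at z = 0 where E = 1
    dE = (E_cpl(h, Om, w0, w1) - E_cpl(-h, Om, w0, w1)) / (2 * h)
    return dE / E_cpl(0.0, Om, w0, w1) - 1

Om, w0, w1 = 0.3, -1.0, 0.2
print(abs(q0_num(Om, w0, w1) - (0.5 + 1.5 * w0 * (1 - Om))) < 1e-6)  # True
```

Note that q_0 is blind to ω_1: the evolving part of the EoS first appears at the jerk level, which is one reason why CPL fits need the higher-order CS.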

The Chaplygin Gas
Now, we fix our attention on the Chaplygin gas model. Chaplygin first introduced his equation for a perfect fluid in 1904 [65], proposing the pressure P = −A/ρ, where A is a positive constant. In particular, ρ and P are the (total) density and pressure in a comoving reference domain, with ρ > 0. Chaplygin treated the particles of his fluid as dust, obtaining a vanishing pressure contribution to standard matter. By combining Equations (27) and (60), we get ρ = (A + B a⁻⁶)^{1/2}, where B is an integration constant and a = 1/(1 + z). The total EoS follows, where Ω_b here is the baryon matter density, normalized in units of the critical density; analogously, ω_DE can be written in terms of the same quantities. Finally, we write down the normalized Hubble parameter, with Ω_DE = 1 − Ω_b the DE density in units of the critical density. For the Chaplygin gas, the total matter density is not given by the sum of baryons and CDM. In fact, the Chaplygin gas is known as the first example of a unified dark energy model, in which CDM is generated by assuming baryons only within the energy-momentum tensor. The cosmographic parameters follow, and their inversion gives the model parameters in terms of the CS.
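The Chaplygin density evolution and its effective EoS can be sketched as follows; the values of A and B are illustrative, not fitted:

```python
import numpy as np

def rho_chap(a, A=0.75, B=0.25):
    """Chaplygin-gas density from the continuity equation with P = -A/rho:
    rho(a) = sqrt(A + B / a^6), B being the integration constant."""
    return np.sqrt(A + B / a**6)

def w_chap(a, A=0.75, B=0.25):
    """Effective EoS w = P/rho = -A / rho^2."""
    return -A / (A + B / a**6)

# Dust-like at early times (a << 1), Lambda-like at late times (a >> 1)
print(w_chap(1e-3), w_chap(100.0))
```

The two limits printed above are the defining feature of the model: w → 0 in the past (CDM-like clustering) and w → −1 in the future (cosmological-constant-like acceleration).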

The Dvali-Gabadadze-Porrati Model
Here, we discuss the DGP model, which attempts to give an alternative to a positive cosmological constant through a modification of gravity at large distances [87]. The scale length is regularized by the typical crossover radius, r_c: below it, Einstein gravity is preserved; above it, the gravitational force follows a five-dimensional 1/r³ behavior. The DGP model represents an infrared modification of standard gravity. The approach is inspired by Braneworld constructions with infinite-volume extra dimensions [88]. In other words, these extra dimensions are infinite or, more likely, cosmologically large. It follows that the graviton cannot be massless as in standard GR, providing, instead, massive states, whose characteristic width is r_c⁻¹. In particular, by reducing our description to a four-dimensional space-time, we obtain modified Friedmann equations involving the crossover scale. Considering the normalized Hubble rate, E(z) ≡ H(z)/H_0, we infer E(z) = √Ω_rc + [Ω_rc + Ω_m (1+z)³]^{1/2}, where Ω_rc ≡ 1/(4 r_c² H_0²) may be interpreted as the density of the crossover factor. It is clear that the condition E(0) = 1 fixes Ω_rc = (1 − Ω_m)²/4; thus, the model depends on one parameter only. The corresponding CS follow and, since the model is one-parameter-dependent, their inversion provides Ω_m directly.
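A minimal sketch of the flat DGP expansion rate; the closure Ω_rc = (1 − Ω_m)²/4 follows from requiring E(0) = 1, which is why the model has a single free parameter:

```python
import numpy as np

def E_dgp(z, Om=0.3):
    """Flat DGP normalized Hubble rate:
    E(z) = sqrt(Orc) + sqrt(Orc + Om (1+z)^3),
    with Orc = (1 - Om)^2 / 4 fixed by the flatness condition E(0) = 1."""
    Orc = (1 - Om)**2 / 4
    return np.sqrt(Orc) + np.sqrt(Orc + Om * (1 + z)**3)

print(E_dgp(0.0))  # 1.0 by construction
```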

An Extension of DGP: The α Dark Energy Model
A possible phenomenological extension of DGP is the α dark energy model, hereafter αDE [89]. Since Braneworld theory provides, at large scales, that gravity leaks out into the bulk, an accelerated expansion of the universe may result, starting from a modified Friedmann equation. Phenomenological motivations can extend Equations (69) and (71); in particular, the αDE model has been proposed in [89] as an extension of the DGP model, in which the Friedmann equations are modified by means of a further parameter, α, leading to an implicit expression for E(z). It is clear that, instead of Equation (71), this model accounts for two parameters, i.e., Ω_m and α. The limiting cases α = 1 and α = 0 correspond to the DGP and ΛCDM cases, respectively. Even though the physical meaning of α is only phenomenological, it may hint at hidden and unknown quantum processes. Thus, its introduction relies on the lack of knowledge of the quantum processes occurring at early epochs of the universe's evolution.
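Since the αDE expansion rate is only defined implicitly, it has to be extracted numerically at each redshift. As a rough sketch, we assume the commonly quoted implicit form E² = Ω_m (1+z)³ + (1 − Ω_m) E^α, which reproduces the α = 0 and α = 1 limits above and satisfies E(0) = 1; the review's exact equation may differ, so this is illustrative only:

```python
def E_alpha(z, Om=0.3, alpha=0.5, tol=1e-10):
    """Solve the assumed implicit alphaDE relation
    E^2 = Om (1+z)^3 + (1 - Om) E^alpha by bisection.
    alpha = 0 recovers LCDM; alpha = 1 recovers the DGP-like case."""
    g = lambda E: E**2 - (1 - Om) * E**alpha - Om * (1 + z)**3
    lo, hi = 1e-3, 1e6  # g(lo) < 0 < g(hi) on the physical branch
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Both limiting cases satisfy E(0) = 1
print(E_alpha(0.0, alpha=0.0), E_alpha(0.0, alpha=1.0))
```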
The CS for the αDE model follow accordingly, while the explicit forms of Ω_m as a function of the CS cannot be inferred from analytical expressions. We therefore numerically solve for the corresponding values of Ω_m in terms of the cosmographic coefficients, as we report in Table 1. The fits in Table 1 have been obtained by minimizing the total chi-squared of Equation (125), and we performed standard Bayesian analyses.
In particular, the first fit leaves the coefficients free to vary; the second one assumes H_0 fixed to the PLANCK value. The third fit uses H_0 fixed in terms of the Hubble Space Telescope (HST) measurement, whereas the last fit fixes the present value of the Hubble rate through the fit of a first-order d_L within the interval z < 0.36, by using SNeIa measurements only. The values of Ω_m, f_0, f_z0 and f⁽ⁱᵛ⁾(T_0) have been obtained by using standard error propagation.
The value of f(T_0) for fit 3 is in units of 10³, while the value of f⁽ⁱᵛ⁾(T_0) is in units of 10⁸.

Dark Energy from Extended Theories of Gravity
In the previous Sections, we treated DE in terms of evolving fluids within the energy-momentum tensor of Einstein's gravity. Here, we consider an alternative point of view, based on extensions of GR. In so doing, the DE fluid naturally arises as further corrections to the Hilbert-Einstein action. In particular, alternative theories to standard Einstein gravity were first developed to alleviate the problems of standard cosmology, in which, by simply introducing a vacuum-energy cosmological constant, it is not possible to characterize the universe's dynamics from first principles [90]. This class of theories, on the contrary, constitutes a plausible attempt to determine a semi-classical scheme, in which one deems GR a limiting case of a more general paradigm, described by extensions of the Einstein-Hilbert action. One of the most commonly used approaches is that of Extended Theories of Gravity [91][92][93][94][95]. In these theories, Einstein's Lagrangian is extended and corrected through higher-order curvature invariants and minimally or non-minimally coupled scalar fields (e.g., R², R_µν R^µν, R_µναβ R^µναβ, R□R, R□^k R and φ²R). Other interesting motivations to extend General Relativity come from understanding the role played by the Mach principle in the framework of a theory of gravity, leading one to consider a varying gravitational coupling. A well-known example is represented by the Brans-Dicke theory [96], which includes a variable gravitational coupling. The corresponding dynamics is influenced by a single scalar field, non-minimally coupled to the geometry, so that the above-mentioned Mach principle is recovered [97][98][99].
In addition, one may imagine extending GR in view of any unification scheme of the fundamental interactions [100]. Similar approaches have been developed in quantum field theory on curved spacetime, where interactions between quantum scalar fields and the background geometry, or gravitational self-interactions, actually provide this kind of correction to the Einstein-Hilbert Lagrangian [101,102]. For the sake of clarity, a self-consistent quantum gravity scheme is, however, not derived from these approaches, which do not constitute a full quantum gravity framework. One of the simplest extensions of GR is f(R) gravity, in which the Lagrangian density is an arbitrary function of the Ricci scalar, R [103]. The first example, Starobinsky's model for inflation, f(R) = R + αR² (α > 0), could lead to the accelerated expansion of the universe, due to the presence of the αR² term [104]. Furthermore, this model's results are consistent with the temperature anisotropies observed in the CMB, and so it seems to be a viable alternative to the scalar-field models of inflation [103]. f(R) models describing the DE behavior have been studied in [105][106][107][108][109][110][111][112][113][114][115][116]. Furthermore, models that satisfy both Solar System and cosmological constraints have been proposed in [117][118][119][120][121][122][123][124][125][126][127][128][129][130][131][132][133]. In particular, since the Newtonian law undergoes changes on large scales in f(R) gravity, several interesting observational signatures arise, such as modifications of galaxy rotation curves, galaxy clustering spectra [134][135][136][137][138][139], the CMB [118,140] and weak lensing [141,142]. Another alternative approach is based on teleparallel gravity [143]. This theory corresponds to a gauge theory of the translation group, based on Weitzenböck geometry. In teleparallel gravity, gravitation is attributed to torsion, which plays the role of a force [144].
In teleparallel gravity, this means that there are no geodesics; the force equations are analogous to the Lorentz force equations of electrodynamics and describe the trajectories of particles submitted to a gauge gravitational field. This means that the gravitational interaction can be described alternatively in terms of curvature, as usually done in GR, or in terms of torsion, as in teleparallel gravity. One of the first models of f(T) gravity was proposed to avoid the Big Bang singularity and to recover inflation without the need for an inflaton [145]. Several cosmological applications of f(T) gravity to describe the accelerated expansion of the universe are studied in [146][147][148][149][150][151].
In what follows, we limit our attention to the case of f (R) and f (T ) gravities, and we discuss their roles in terms of cosmography.

Cosmography of f (R) Gravity
In this subsection, we briefly show how to relate f(R) gravity to cosmography. We start by assuming that the Hilbert-Einstein action is replaced by an action built from a generic f(R), where L_m is the standard matter term and g is the determinant of the metric tensor. By varying Equation (81) with respect to g_µν, we get the field equations [91][92][93][94][95], featuring a curvature DE fluid associated with f(R) gravities. Here, the prime indicates the derivative of f(R) with respect to the Ricci scalar, and T_µν is the energy-momentum tensor of matter. In particular, one can assume that the f(R) class should reduce to ΛCDM at z ≪ 1, to guarantee the validity of cosmography. By using the FRW metric within Equation (82), we get the modified Friedmann equations, where ρ_curv is an effective curvature fluid giving a DE term expressed in terms of curvature, with its own barotropic factor and pressure. By means of the above equations, it is clear that the DE problem could, in principle, be addressed by curvature. Further, once f(R) and its derivatives are bounded by cosmography, the corresponding DE term is naturally constrained, and one can describe the DE effects in terms of observable quantities. Hence, in order to relate f(R) and its derivatives to cosmography, we can write down R and its derivatives in terms of the redshift, z. Thus, since R = R(z), we obtain the cosmographic constraints on f(z) = f(R(z)).
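The mapping z → R(z) can be sketched numerically. The block below assumes a flat ΛCDM H(z) as the low-redshift reference history and the convention R = 6(Ḣ + 2H²), with Ḣ = −(1+z)HH′(z); signature conventions vary, so the overall sign is illustrative only:

```python
import numpy as np

H0, Om = 70.0, 0.3  # illustrative values, H0 in km/s/Mpc

def H(z):
    """LCDM Hubble rate, used here as the reference expansion history."""
    return H0 * np.sqrt(Om * (1 + z)**3 + 1 - Om)

def ricci(z, h=1e-5):
    """Ricci scalar R(z) = 6 (Hdot + 2 H^2), with Hdot = -(1+z) H H'(z);
    H'(z) is taken by central finite differences."""
    dH = (H(z + h) - H(z - h)) / (2 * h)
    return 6 * (-(1 + z) * H(z) * dH + 2 * H(z)**2)

# Cosmographic check: R(0) = 6 H0^2 (1 - q0), with q0 = 3 Om / 2 - 1 for LCDM
print(ricci(0.0), 6 * H0**2 * (1 - (1.5 * Om - 1)))
```

The printed cross-check is the kind of relation used to trade f(R) derivatives for CS values: R_0 depends on H_0 and q_0 only.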

Cosmography of f (T ) Gravity
As we described above, an alternative view can be achieved by considering torsion. In particular, models based on f(T) gravity include torsion as the source of DE [143][152][153][154][155][156]. The general action is built from e_A(x^µ), an orthonormal basis for the tangent space at each point, x^µ, of a generic manifold, with A = 0, 1, 2, 3. Moreover, the metric is given by g_µν = η_AB e^A_µ e^B_ν, where µ and ν are the coordinate indices on the manifold. The torsion and co-torsion tensors follow, while one infers the torsion scalar, T ≡ S_ρ^{µν} T^ρ_{µν}, where S_ρ^{µν} is the superpotential. The corresponding field equations follow, where T^{(M)ν}_ρ represents the energy-momentum tensor. The modified Friedmann equations can be rewritten with a DE density and a barotropic factor given in terms of f(T). This result could, in principle, be related to the observed acceleration of the universe. Finally, one finds f′(T_0) = 1, to guarantee the Solar System constraints. Summing up, the case of extended theories of gravity is conceptually different from the usual DE models: here, the role played by the introduction of DE is taken over by additional terms proportional to curvature, torsion, quantum effects and so forth. However, as we pointed out in previous sections, instead of considering the matter density, we focus our attention on constraining the additional terms that extend the Einstein-Hilbert action. In the next sections, we discuss the role of experimental procedures in cosmology, and we highlight the possibility of relating those techniques to cosmographic analyses. We thereby get constraints on the observable universe, deriving the cosmographic coefficients and the matter densities for each model in terms of the CS. As concerns f(R) and f(T) gravities, we will constrain their values and the corresponding derivatives in terms of the CS.

Experimental Procedures
Now, we want to emphasize the role played by three of the most relevant standard candles in cosmology. After giving information on the experimental techniques based on supernovae, the CMB and BAO, we relate those methods to the cosmographic analysis, and we infer present numerical values by directly minimizing the corresponding chi-squared. Let us first notice that one of the main challenges of modern cosmology is to reconstruct the behavior of a(t) in different eras of the universe's evolution. Indeed, reproducing the universe's expansion history at different epochs may represent a way of discriminating which models behave better than others. If we limit our attention to present times, another interesting point is to find out at which redshift the passage between the matter- and the DE-dominated epochs occurred.
The corresponding redshift is known in the literature as the transition redshift, as we previously stressed. The transition redshift and the corresponding observational consequences depend on the specific form of DE, and so alleviating the degeneracy problem may represent a viable procedure to fix tighter constraints on the expansion history of the universe. One of the possibilities, as mentioned in Section 1, is to perform a superposition of different cosmological tests, reducing the allowed phase-space diagrams and, thus, finding more accurate bounds. Unfortunately, small- and high-redshift data dominate over intermediate surveys, meaning that at redshifts 2 < z < 1000, we cannot fix relevant constraints on the expansion history of the universe with the same precision as at small and high redshifts. This problem may be alleviated by assuming that non-conventional standard candles, such as gamma-ray bursts, galaxy cluster luminosities and so forth, may be thought of as non-standard indicators, albeit ones able to characterize intermediate epochs of the universe's expansion history. Even though this argument is the object of intensive study, a complete and self-consistent way to characterize data from non-conventional standard candles remains an open question. Our main purpose is to summarize, here and in what follows, three of the major techniques of data fitting, i.e., SNeIa, BAO and CMB. While the first two tests are limited to intervals z ≤ 1.4, the third test is associated with z ∼ 1100, allowing for an accurate matching between small and high redshifts. However, the discussion of possible non-conventional datasets, and the understanding of whether they can be used as standard candles, is beyond the purposes of this review.

The Observational Problem
The use of cosmological distances as indicators relies on assuming the validity of a causal connection between emitted and measured photons. To deduce the universe's expansion history, several observational procedures exist: examples are SNeIa observations, the study of BAO, large-scale structures, weak lensing, the abundances and evolution of clusters, CMB measurements and so forth [157][158][159]. It is a matter of fact that SNeIa represent standard candles to check the cosmic expansion at low redshift, showing the existence of a phase transition between a decelerating and an accelerating universe [4,9,160]. A direct observational redshift-distance relation follows, where dτ(z) = dt/a represents the conformal time and, from now on, a* = a(z*) is the scale factor at which a photon (event) is emitted. Invoking the validity of the Friedmann equations and the fact that −1 < ω < 0 (excluding particular cases in which ω < −1 [161]), the expansion function, a(t), follows [50]. By looking at the above equations, it is clear that, once a scalar field is determined for a particular model, we could extract the corresponding EoS, although this does not allow one to consider a completely model-independent parameter space.
Taking into account observational data, it is possible to characterize the different constraints within which the EoS can be framed. By considering the conformal time, τ, we have d = (1 + z)τ, and no foreknowledge or local approximation is required in order to relate the conformal time to the scale factor. The error estimation on the observed magnitudes follows, whereas the logarithmic derivative of a(τ) gives the proper-time evolution of the scale factor and the conformal horizon scale, (aH)⁻¹.
The main advantage of the cosmic time is to relate the redshift in terms of a time variable, giving a possible match between distance and time measurements. In what follows, we are going to describe, in detail, the datasets commonly used in the literature, in terms of the redshift, z, and cosmic time, a(τ ).

Baryonic Acoustic Oscillations
Baryonic acoustic oscillations are oscillations with a characteristic wavenumber of ∼0.06 h Mpc⁻¹ (where h is the normalized Hubble parameter, H_0 = 100 h km s⁻¹ Mpc⁻¹), occurring in the power spectrum of matter fluctuations after the recombination epoch. Such a scale is related to the comoving sound horizon at the baryon dragging epoch. In the early universe, beyond z ≃ 1090, matter and energy have high density values, and hydrogen recombination is not allowed. In the relativistic plasma, Thomson scattering between free electrons and photons occurred on timescales shorter than the Hubble time. Hence, while radiation pressure tends to separate baryonic matter, gravity tends to cluster it. Oscillations propagated as sound waves in the baryon-photon fluid, due to the fact that the radiation pressure was larger than the gravitational forces [162]. Signatures of BAO survive after recombination: in this phase, the density decreases, and baryonic perturbations can follow the gravitational instability, as well as DM perturbations. Since BAO can be considered sound-wave perturbations, it is possible to investigate the corresponding effects through different linear basis sets [163][164][165][166][167][168][169][170][171][172][173].

Observations of Baryonic Acoustic Oscillations
Measurements of BAO are based on the spectroscopy of the Sloan Digital Sky Survey (SDSS) galaxy sample. In particular, SDSS observes different slices in redshift, constraining the distance-redshift relation at different epochs. To determine BAO, we introduce the dilation-scale distance, D_V(z) [167,174], where D(z) ≡ d_L(z)/(1+z) = ∫₀ᶻ dz′/H(z′). When a flat geometry is assumed, we can relate d_L to the angular diameter distance through d_L(z) = (1+z)² D_A(z). In particular, the position of the BAO peak approximately constrains d(z), i.e., the ratio of the sound horizon at the decoupling epoch to the dilation-scale distance. The comoving sound horizon, r_s(z_ls), is given by an integral of the sound speed, c_s, in the early plasma up to z_ls, the redshift of the dragging epoch (last scattering surface). Typically, to estimate D_V(z) and D_A(z), the use of cosmic priors is required, indicated by the vector Y; the parameters depending on the particular cosmological model are indicated by P. Percival et al. [175] found an almost model-independent constraint on the ratio of distances, D_V(0.35)/D_V(0.2), which is consistent at the 1.1σ level with the best fit of the ΛCDM model. A correlation covariance matrix, C, is introduced to define the chi-squared function of BAO. Equation (104) allows one to connect the Hubble parameter to D_V and D(z), and Equation (110) permits us to measure d_L and D_V in order to obtain H(z) [176]. Particularly, Shafieloo et al. [177] showed that the ratio between the Hubble parameter at different epochs plays an important role in cosmological analyses; the BAO acoustic parameter, A(z), and the ratios among D_V, A and d(z) are also related to the ratio of the Hubble parameters at different z.
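The dilation-scale distance and the acoustic parameter, A(z), can be sketched numerically in c/H_0 units; the flat ΛCDM background and Ω_m value below are illustrative assumptions, and the standard forms D_V = [D² z/E]^{1/3} and A = √Ω_m E^{-1/3} (D/z)^{2/3} are used:

```python
import numpy as np

def E(z, Om=0.3):
    """Flat LCDM normalized Hubble rate (illustrative background)."""
    return np.sqrt(Om * (1 + z)**3 + 1 - Om)

def comoving_D(z, Om=0.3, n=2000):
    """Comoving distance D(z) = int_0^z dz'/E(z'), in c/H0 units (trapezoid rule)."""
    zs = np.linspace(0.0, z, n)
    y = 1.0 / E(zs, Om)
    dz = zs[1] - zs[0]
    return dz * (y.sum() - 0.5 * (y[0] + y[-1]))

def D_V(z, Om=0.3):
    """Dilation-scale distance D_V(z) = [D(z)^2 z / E(z)]^{1/3} (c/H0 units)."""
    return (comoving_D(z, Om)**2 * z / E(z, Om))**(1.0 / 3.0)

def A_bao(z, Om=0.3):
    """BAO acoustic parameter A(z) = sqrt(Om) E(z)^{-1/3} [D(z)/z]^{2/3},
    which is independent of H0 and r_s."""
    return np.sqrt(Om) * E(z, Om)**(-1.0 / 3.0) * (comoving_D(z, Om) / z)**(2.0 / 3.0)

print(D_V(0.35), A_bao(0.35))
```

For Ω_m ≈ 0.3, A(0.35) comes out close to 0.49, of the order of the SDSS measurement, which illustrates why A(z) is a convenient H_0-free BAO observable.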
The importance of Equation (112) is that H(z_i; z_j) depends neither on H_0 √(Ω_{0m} h²) nor on r_s(z_ls).

The Supernova Ia Measurements
Supernovae of type Ia may be considered the most fruitful standard candles known in the literature so far. The mechanism behind supernova explosions is complex and particular to each event, and although the luminosity is different for each supernova [178], the Phillips relation relates the B-magnitude peak to the luminosity decline after 15 days (however, some exceptions exist, e.g., SN2003fg by Howell [179]). Supernovae are distributed at different z, crossing two eras of the universe in which it is possible to distinguish between decelerating and accelerating phases; in fact, the whole interval spans from z = 0 to z ≃ 2. They are classified as "Type Ia" due to the absence of hydrogen and the presence of singly-ionized silicon (SiII) in their early-time spectra [180]. The most accredited explanation is that they arise from thermonuclear explosions of white dwarfs, once the Chandrasekhar limit, ∼1.4 M⊙, is exceeded. Even though a self-consistent explanation of the internal process is not completely clarified, supernovae can be found in all galaxies [181], except in the arms of spiral galaxies [182].

Supernova Ia Observations
SNeIa catalogs are widely used and frequently improved; recent examples are the Union 1 [183], Union 2 [184] and SNLS3 [185][186][187] compilations. We rewrite the luminosity distance accordingly, and the distance modulus reads [50,161,188] µ = 5 log₁₀ d_L + µ_0, where µ_0 is linked to the absolute magnitude. The error on z is usually negligibly small, while the error on µ is δµ. The best fit is obtained by minimizing the χ² function, where the sums run over the number of objects in the sample, N_s, the index i labels the generic supernova, µ_obs,i are the supernova measurements and σ_µ,i the measurement variances.
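A minimal sketch of this chi-squared, using a hypothetical second-order cosmographic d_L as the theoretical modulus; the fiducial H_0 and q_0 values are illustrative, not fitted results:

```python
import numpy as np

C_KMS = 299792.458  # speed of light, km/s

def mu_theory(z, H0=70.0, q0=-0.55):
    """Distance modulus from a second-order cosmographic d_L (in Mpc):
    d_L = (c z / H0) [1 + (1 - q0) z / 2], mu = 5 log10(d_L) + 25."""
    d_L = (C_KMS * z / H0) * (1 + 0.5 * (1 - q0) * z)
    return 5 * np.log10(d_L) + 25

def chi2_sn(z, mu_obs, sigma_mu, **kw):
    """SNeIa chi-squared: sum_i (mu_obs,i - mu_th(z_i))^2 / sigma_i^2."""
    return np.sum((mu_obs - mu_theory(z, **kw))**2 / sigma_mu**2)
```

In a real analysis, `mu_theory` would be replaced by the modulus of the model under test, and the minimization performed over its free parameters.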

The Cosmic Microwaves Background Measurement
The CMB radiation was discovered by Penzias and Wilson in 1965 [189]. The source of this radiation is placed at the time of the last scattering surface. At higher z, matter and radiation are mixed together, giving a plasma of photons and electrons with short mean free paths; radiation is trapped, and matter is highly ionized [190]. The present temperature is around T ∼ 2.73 K, with small fluctuations depending on baryon-photon oscillations, which on comoving scales below ∼8 h⁻¹ Mpc are suppressed by Silk damping [191].
This leads to an anisotropy of the matter density, and the last scattering surface consequently emits photons at different temperatures, because of the Sachs-Wolfe effect [192]. The Sachs-Wolfe effect is limited to large-scale perturbations, and the presence of DE, dominating the expansion history, modifies the positions of the acoustic peaks in the CMB. Given such density anisotropies, which are translated into temperature fluctuations, it is possible to perform a spherical harmonic expansion, due to the fact that the CMB is emitted from a spherical surface: ΔT/T(θ, φ) = Σ_{l,m} a_{l,m} Y_{l,m}(θ, φ), where the Y_{l,m}(θ, φ) are spherical harmonics, while the coefficients a_{l,m} = α_{l,m} + iβ_{l,m} obey Gaussian statistics. The expectation value of the angular power spectrum is given by C_l = ⟨|a_{l,m}|²⟩. The corresponding signal is decomposed through the multipole number, l, which sets the angular scale on the sphere.
We conclude this section by underlining that recent CMB data can be found either in several works [193][194][195] or in the Planck Legacy Archive [196], which collects all of PLANCK's results. In particular, an important result of the PLANCK mission is an anomaly in the observations, indicating that a part of the sky really contains the Cold Spot [197]. This anomaly had, however, already been observed by previous WMAP missions [198], and the PLANCK results thus obtained corroborate the previous discoveries. For the sake of clarity, it was not clear whether the WMAP measurements were reliable [199,200], and PLANCK therefore played the role of referee in order to understand whether this spot really exists or not.

Cosmic Microwaves Background Observations
The importance of the CMB is also correlated to DE constraints, since it is possible to provide relevant constraints at high redshifts. The locations and amplitudes of the acoustic peaks in the CMB angular power spectrum depend sensitively on the matter density. A possible test is performed by using the CMB shift parameter, R_CMB = √(Ω_b + Ω_DM) ∫₀^{z_ls} dz/E(z), where Ω_b and Ω_DM are, respectively, the baryonic and DM densities and z_ls is the redshift at the last scattering surface. Possible drawbacks would be associated with Equation (119) when one tests unified cosmological models [201], since the corresponding Hubble flow can be significantly modified when DE is not negligible at high redshifts. Moreover, in the case of unified models, the sum Ω_b + Ω_DM in Equation (119) reduces to Ω_b, due to the fact that DM contributions naturally emerge. The fluid that permits the universe to accelerate at late times is able, in fact, to contribute to the total matter density at early times. In other words, such a fluid behaves as a cosmological constant at present times and as a pressureless matter term at early times. This is the special case of the Chaplygin gas, which we treated in previous sections. Hence, instead of considering Equation (119), to be consistent with all the cosmological paradigms we illustrated, we replace R_CMB with the ratio of l_1 to l̄_1. Here, l_1 is the position of the first peak of the CMB TT power spectrum, while l̄_1 is the first peak in an Einstein-de Sitter universe with Ω_b = 1 − Ω_DM. As a matter of fact, in the ΛCDM case, the shift parameter naturally reduces to Equation (119), since the cosmological constant contribution is negligible at early times.
We can write l_1 in terms of D_A(z_ls), known in the literature as the comoving angular distance, and of r_s(z_ls), the sound horizon of Equation (106). Moreover, the sound speed depends on the baryon and photon densities, with ρ_γ the photon density. It is possible to minimize the corresponding chi-squared function, finding constraints complementary to SNeIa and BAO at higher redshifts. The importance of the CMB relies on the fact that a given model can be checked at the last scattering surface, showing whether the DE effects are really relevant or not at early times. In the next section, we find numerical constraints by using cosmography, combining the three datasets, i.e., SNeIa, BAO and CMB.
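The shift-parameter part of this test can be sketched numerically. The block assumes a flat ΛCDM background, neglects the radiation contribution to E(z), and takes z_ls ≃ 1090, so the number is indicative only:

```python
import numpy as np

def shift_parameter(Om=0.3, z_ls=1090.0, n=20000):
    """CMB shift parameter R = sqrt(Om) * int_0^z_ls dz / E(z) for flat LCDM
    without radiation; here Om stands for Omega_b + Omega_DM."""
    zs = np.linspace(0.0, z_ls, n)
    y = 1.0 / np.sqrt(Om * (1 + zs)**3 + 1 - Om)
    dz = zs[1] - zs[0]
    return np.sqrt(Om) * dz * (y.sum() - 0.5 * (y[0] + y[-1]))

print(shift_parameter())
```

For Ω_m ≈ 0.3, R comes out of order 1.7, close to the values quoted for the concordance model, illustrating how a single high-redshift number anchors the low-redshift fits.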

Cosmographic Fits
The possibility of performing direct fits in order to infer the CS by means of the SNeIa, BAO and CMB measurements is presented here. In so doing, we highlight the experimental procedures we follow and the role played by error propagations. The underlying philosophy of performing the cosmographic analysis through the use of the aforementioned cosmological data is intimately related to the idea of building up a sort of cosmographic test.
First, let us notice that matching cosmography with observables could be performed by ordering the CS in a hierarchical way. This prescription leads to the choice of fixing parameters and determining a maximum order among them. We therefore consider two sets of coefficients, hereafter sets A and B, and we fit the CS in terms of the above-cited data, by using the luminosity distance, d_L, expressed as a function of the redshift, z. In this review, we used only the redshift, z, instead of the parametric variables y_1, y_2, y_3 and y_4; we performed the analysis through z only, for brevity. A more complete and accurate analysis can be found in [60].
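As an illustration of matching the CS hierarchy to d_L(z), one can fit the second-order cosmographic luminosity distance to synthetic low-redshift data; the fiducial values below are illustrative assumptions, not the review's results:

```python
import numpy as np

C_KMS = 299792.458  # km/s
H0_TRUE, Q0_TRUE = 70.0, -0.55  # fiducial (illustrative) values

def d_L(z, H0, q0):
    """Second-order cosmographic luminosity distance (flat case), in Mpc:
    d_L = (c z / H0) [1 + (1 - q0) z / 2]."""
    return (C_KMS * z / H0) * (1 + 0.5 * (1 - q0) * z)

# Synthetic noise-free low-z data; d_L/z is linear in z, with
# intercept = c/H0 and slope = (c/H0)(1 - q0)/2, so a linear fit recovers both.
z = np.linspace(0.02, 0.36, 50)
y = d_L(z, H0_TRUE, Q0_TRUE) / z
slope, intercept = np.polyfit(z, y, 1)
H0_fit = C_KMS / intercept
q0_fit = 1 - 2 * slope / intercept
print(H0_fit, q0_fit)
```

With real data, the same hierarchy is extended to j_0 and s_0 by adding higher powers of z, which is exactly why the higher coefficients inherit larger uncertainties.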
Our fits have been carried out by combining the three datasets, SNeIa, BAO and CMB, and minimizing on a grid the corresponding total chi-squared, given by the sum of the three contributions. Our first fit leaves the coefficients free to vary, as reported in the first of Equations (124). In the second fit, we fixed the present Hubble rate to the PLANCK value, h = 0.68, with H_0 = 100 h km s⁻¹ Mpc⁻¹.
Afterwards, we imposed the Hubble Space Telescope (HST) measurement [202] for H_0. Finally, we performed a fit by considering the first order of the luminosity distance, d_L = z/H_0, within the interval z < 0.36, using SNeIa only and following the technique developed in [203]. The results are reported in Table 1.
Once the numerical bounds have been found, another advantage of cosmography is that one can constrain a given model by inverting the results of Table 1, obtaining the free coefficients of the model in terms of the CS. In other words, let us consider a model leading to a vector of free parameters, ξ_i, with i = 1, …, N. Then, by means of Equations (17), it is possible to write down the conditions relating the ξ_i to the CS and, since the luminosity curves are thought to be one-to-one invertible, to invert them; if i ≤ 3, it is possible to relate ξ_i to the whole series. For the sake of clarity, once the CS are known, to derive the corresponding errors on the observables, one can invoke the use of logarithmic error propagation, discarding systematics, which have already been considered in the Union 2.1 compilation. In particular, assuming Ψ ∈ {ξ_i} is a generic parameter of a certain model, with Ψ = Ψ(α_1, …, α_N), one obtains Ψ ± δΨ, with δα_i the errors associated with the i-th variable. It is worth noticing that a complete estimation of the total error associated with each parameter suffers from systematic uncertainties, which are notoriously hard to treat. However, recent PLANCK surveys and the improved Union 2.1 compilation are able to significantly reduce systematics on measurements. It follows that overly optimistic estimates of systematic uncertainties do not modify the cosmographic analyses we have performed, and so we do not account for a full description of systematic errors in this review.
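The propagation rule δΨ = [Σ_i (∂Ψ/∂α_i)² δα_i²]^{1/2} can be sketched numerically, with the partial derivatives taken by finite differences; the ΛCDM relation Ω_m = 2(1 + q_0)/3 is used purely as an illustrative Ψ:

```python
import numpy as np

def propagate(Psi, alphas, dalphas, h=1e-6):
    """Gaussian error propagation dPsi = sqrt(sum_i (dPsi/dalpha_i)^2 dalpha_i^2),
    with the partials estimated by central finite differences."""
    alphas = np.asarray(alphas, dtype=float)
    var = 0.0
    for i, da in enumerate(dalphas):
        ap, am = alphas.copy(), alphas.copy()
        ap[i] += h
        am[i] -= h
        dPsi_da = (Psi(ap) - Psi(am)) / (2 * h)
        var += dPsi_da**2 * da**2
    return np.sqrt(var)

# Illustrative Psi: Omega_m(q0) = 2 (1 + q0) / 3 in flat LCDM
Om = lambda a: 2 * (1 + a[0]) / 3
print(propagate(Om, [-0.55], [0.06]))  # dOm = 2 dq0 / 3 = 0.04
```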
The thus-obtained experimental results turn out to depend on the particular priors imposed on the analyses. In particular, as is evident from Table 1 and Figure 1, the first fit, in which no priors have been used, provides a more negative acceleration parameter than fits 2-4, showing an excellent agreement with theoretical predictions. A much smaller jerk parameter is also involved, and it is also present when H_0 is fixed through the redshift interval z < 0.36. This implies that the choice of a higher value for H_0 than the one inferred from the PLANCK mission seems to be more suitable in the framework of cosmography. This does not, however, contradict PLANCK measurements, since the cosmological results shown in [204] depend on the choice of the cosmological model, as claimed in that work. Moreover, the cosmological parameters have also been fixed at a 68% confidence level, showing that future incoming PLANCK results will be able to better match the cosmographic intervals. The outcomes of fit 3 predict a slightly smaller acceleration parameter than fit 1 and a greater value for j_0. Overall, our results resemble much more closely the ΛCDM predictions and agree with a higher value of the Hubble parameter than the one compatible with the ΛCDM model. Figure 1. Two-dimensional marginalized contour plots for sets A and B, using the combined tests with SNeIa, BAO and CMB. We used the luminosity distance as a function of the redshift, z, only. The last two figures below are based on set B with h = 0.6996. We show in the plots the 1σ, 2σ and 3σ confidence levels, evaluated by means of standard Bayesian analyses on a grid.
The HST value seems to be disfavored, because of its higher H_0 compared to the cosmographic value, whereas the PLANCK Hubble rate is also disfavored, providing a lower value, which is incompatible with the theoretical predictions on the CS. However, the acceleration parameter values are rather close to the theoretical values for fits 1, 2 and 4, while both jerk and snap depend significantly on the choice of H_0, giving rise to non-conclusive results. The imposition of priors leads to rather disadvantageous outcomes, even for the Ω_m evaluations. The predicted values of j_0 and s_0 do not constrain the Chaplygin gas fairly well, while fit 3 cannot constrain CPL. Further, the DGP and αDE models are not completely well bounded by cosmography, whereas ΛCDM and ωCDM are excellently constrained by the observations and are compatible with recent limits [205][206][207]. Our results are also in close accordance with previous analyses and are able to exclude that the jerk parameter may be negative, showing that the concordance model predicts a transition at redshift z ≤ 1. In addition, we cannot exclude a priori that the jerk parameter is nearly identical to j_0 = 1, as predicted by the ΛCDM model, although it is apparently larger than the ΛCDM value. It follows that f(R) and f(T) are also well constrained by cosmography, as one can see from Table 1. It turns out that a degeneracy problem occurs between modified theories of gravity and DE models, and cosmography is unable to remove these degeneracies using present data. More likely, the use of future surveys will permit us to discriminate between modifications of GR and DE models. To do so, a possibility is offered by better bounding the snap parameter, s_0. To get improved constraints on s_0, we need additional high-redshift data: in fact, s_0 is the coefficient that enters at fourth order in the cosmographic Taylor expansions.
Thus, future surveys with additional data will likely better constrain s_0, reducing the degeneracy problems between cosmological models.
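To make the role of s_0 explicit, the luminosity distance series can be sketched as below, using the flat-universe form of the standard fourth-order cosmographic expansion; s_0 only enters at the highest order retained, which is why it is the hardest parameter to bound with low-redshift data. All numerical values are illustrative, not the paper's best fits.

```python
def d_L(z, H0, q0, j0, s0, c=299792.458):
    """Fourth-order cosmographic Taylor expansion of the luminosity
    distance for a spatially flat universe (Mpc, with H0 in km/s/Mpc).
    Note that s0 first appears in the z**3 correction term."""
    return (c * z / H0) * (
        1.0
        + 0.5 * (1.0 - q0) * z
        - (1.0 - q0 - 3.0 * q0**2 + j0) * z**2 / 6.0
        + (2.0 - 2.0 * q0 - 15.0 * q0**2 - 15.0 * q0**3
           + 5.0 * j0 + 10.0 * q0 * j0 + s0) * z**3 / 24.0
    )
```

At very low redshift the series reduces to the Hubble law d_L ≈ cz/H_0, so any change in s_0 is invisible there; only high-redshift points weight the z³ term appreciably.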
Finally, it is worth noticing that, looking at Figure 1, our contours show that the cosmographic constraints are well bounded at the 68% confidence level only. The additional 95% and 99.7% confidence levels provide error bars that do not lead to accurate bounds on the CS. This is a consequence of the paucity of current data. Again, future extended datasets would improve the quality of our fits and reduce the error bars, permitting us to determine precise limits on the cosmographic coefficients.
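As an illustration of how such confidence regions are drawn on a grid, the sketch below marks the 1σ, 2σ and 3σ regions via the standard two-parameter Δχ² thresholds; the chi-squared surface here is a toy Gaussian paraboloid, not the actual combined SNeIa + BAO + CMB statistic, and the grid ranges and widths are made up for illustration.

```python
import numpy as np

# Toy chi-squared surface over a (q0, j0) grid; in the real analysis this
# would be the combined SNeIa + BAO + CMB statistic evaluated at each node.
q0_grid = np.linspace(-1.0, 0.0, 201)
j0_grid = np.linspace(0.0, 2.0, 201)
Q0, J0 = np.meshgrid(q0_grid, j0_grid)
chi2 = ((Q0 + 0.55) / 0.05) ** 2 + ((J0 - 1.0) / 0.30) ** 2  # illustrative

# Standard Delta-chi2 thresholds for joint two-parameter confidence regions
delta_chi2 = chi2 - chi2.min()
levels = {"1sigma": 2.30, "2sigma": 6.17, "3sigma": 11.83}
regions = {name: delta_chi2 <= thr for name, thr in levels.items()}
```

The nesting of the three boolean masks reproduces the nested contours of Figure 1: the 1σ region is the tightest, while the outer regions grow quickly when the surface is shallow, which is exactly the behavior lamented above for the higher-order coefficients.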

Conclusions and Perspectives
We reviewed the problem of DE through the use of the cosmography of the universe. In particular, we considered cosmography as a tool to select among competing models, showing that cosmography is, at most, a selection criterion capable of discriminating which models work better than others.
In so doing, we reviewed several popular DE cosmological models and constrained them through cosmographic analyses, using a combined test involving SNeIa, BAO and CMB measurements.
In particular, we investigated the main features of ΛCDM, ωCDM, the CPL parametrization, the DGP model and its extension αDGP, the Chaplygin gas, and f(R) and f(T) gravities. For each class of models, we reported the corresponding interpretation of the cosmological parameters in terms of the CS, dealing with the current values of the cosmographic parameters. With these considerations in mind, it is possible to recast any model parameter in terms of the CS. This is an essential step toward a cosmographic test of a particular class of models. In principle, the degeneracy problem among cosmological DE models can be alleviated by fixing the cosmological observables in terms of the CS, while the degeneracy between extended theories of gravity and DE models is not healed by cosmography.
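As a concrete example of such a recasting, for the flat ΛCDM model the standard relations q_0 = (3/2)Ω_m − 1, j_0 = 1 and s_0 = 1 − (9/2)Ω_m express the CS through the single matter density parameter; a minimal sketch:

```python
def lcdm_cosmographic_set(omega_m):
    """Flat LCDM: recast the present matter density Omega_m into the
    present-day cosmographic series (q0, j0, s0)."""
    q0 = 1.5 * omega_m - 1.0   # q0 = Omega_m/2 - Omega_Lambda, flat case
    j0 = 1.0                   # exactly 1 for any flat LCDM model
    s0 = 1.0 - 4.5 * omega_m
    return q0, j0, s0
```

For instance, Ω_m = 0.3 gives q_0 = −0.55 and s_0 = −0.35, so a measured CS can be inverted to test whether the concordance value of Ω_m is consistent; analogous (more involved) mappings hold for the other models discussed above.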
It is also important to note that cosmography is almost model independent once the scalar curvature is fixed or considered constant. In so doing, once the CS is fixed, it is possible to reconstruct the dynamics of any cosmological model in terms of quantities that do not depend on any particular hypothesis.
To this end, standard cosmological datasets, SNeIa, BAO and CMB, have been adopted through the use of a Bayesian analysis, minimizing the sum of the individual chi-squared terms. We finally showed that the Hubble fluid is currently accelerating, with an acceleration parameter in the range −0.60 < q_0 < −0.50 and a jerk parameter j_0 > 0. Unfortunately, no definitive results can be inferred for the snap parameter, s_0, due to convergence problems of the truncated cosmographic series. The cosmographic results are also plagued by a severe dependence on the present-day Hubble rate, so more precise measurements of H_0 would improve the quality of the cosmographic fits.
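The combined fit described above amounts to minimizing the sum of the independent chi-squared contributions on a grid. A one-parameter toy version, with made-up Gaussian likelihood centers and widths standing in for the real SNeIa, BAO and CMB statistics, can be sketched as:

```python
import numpy as np

def chi2_total(params, chi2_terms):
    """Sum of independent chi-squared contributions at one parameter point."""
    return sum(chi2(params) for chi2 in chi2_terms)

# Illustrative Gaussian chi-squared terms in q0 only (centers/widths invented)
chi2_sn  = lambda p: ((p[0] + 0.56) / 0.04) ** 2
chi2_bao = lambda p: ((p[0] + 0.52) / 0.08) ** 2
chi2_cmb = lambda p: ((p[0] + 0.55) / 0.03) ** 2

# Brute-force minimization on a grid, mirroring the Bayesian grid analysis
grid = np.linspace(-1.0, 0.0, 2001)
values = [chi2_total([q0], [chi2_sn, chi2_bao, chi2_cmb]) for q0 in grid]
best_q0 = float(grid[int(np.argmin(values))])
```

Because the terms are independent, the combined minimum sits at the inverse-variance-weighted mean of the individual centers, which is why adding BAO and CMB to SNeIa tightens the bound on q_0.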
Future developments will aim at extending the cosmographic test to wider observational datasets. In particular, the PLANCK measurements would better discriminate the confidence levels on the CS, fixing improved bounds on the observable universe in the near future [193][194][195].