The Dimension of Phaseless Near-Field Data by Asymptotic Investigation of the Lifting Operator

In this paper, the problem of evaluating the dimension of the data space in an inverse source problem from near-field phaseless data is addressed. The study is developed for a 2D scalar geometry consisting of a magnetic current strip whose radiated field's square magnitude is observed in the near non-reactive zone on multiple lines parallel to the source. With the aim of estimating the dimension of the data space, the lifting technique is first exploited to recast the quadratic model as a linear one. Then, the singular value decomposition of such a linear operator is introduced. Finally, the dimension of the data space is evaluated by quantifying the number of "relevant" singular values. In the last part of the article, numerical simulations that corroborate the analytical estimation of the data-space dimension are shown.


Introduction
Antenna testing is a relevant step in the characterization of radiating systems; it consists in determining the far-field pattern of the antenna under test. Over the years, different approaches to antenna testing have been developed, which can be divided into direct and indirect methods. The first class evaluates the radiation pattern of the antenna under test by exploiting field measurements taken directly in the far zone. Indirect methods, instead, start from near-field measurements and estimate the far-field pattern by a near-field to far-field transformation (NFFFT) [1][2][3][4][5][6][7].
Since indirect testing methods are based on near-field measurements, they require limited space and allow collecting the field measurements in controlled environments like anechoic chambers. For these reasons, indirect testing methods are usually preferred over their direct counterpart.
However, especially at high frequencies, a stable phase measurement of the radiated field may be difficult to perform; hence, researchers have been led to investigate phaseless near-field to far-field transformations [8][9][10][11][12][13][14][15], which allow reconstructing the far-field pattern from the knowledge of the near-field magnitude only.
In this framework, a typical way to retrieve the radiation pattern consists of two steps. The first is the recovery of the current distribution that generates the radiated field, starting from the measurements of the near-field intensity. Then, the far-field pattern can be easily computed by addressing a classical radiation problem.
From the mathematical point of view, the first step of such a process requires addressing a phase retrieval problem which, for a scalar configuration, consists in recovering the current density J from the model

|E_i|^2 = |T_i J|^2,  (1)

where T_i is the radiation operator linking the current density J to the electric field E_i collected on the i-th scanning line. Since the problem is nonlinear, the corresponding least-squares problem involves a quartic cost functional [15] which, in general, is non-convex. Accordingly, the objective functional may contain trap points like local minima and saddle points with null gradient. The presence of trap points does not ensure convergence to the global minimum: the optimization procedure converges to the stationary point closest to the starting point. Hence, when the objective functional is non-convex, the quality of the solution is strongly affected by the choice of the starting point of the iterative minimization procedure.
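The quartic structure of the least-squares functional, and its invariance to a global phase (one source of stationary points), can be sketched numerically. The following is a minimal toy model: T is a random stand-in for the radiation operator, not the paper's near-field kernel, and the gradient is the standard Wirtinger-calculus expression, not a formula taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 64, 16                                               # data and unknown dimensions
T = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))  # stand-in radiation operator
J_true = rng.normal(size=n) + 1j * rng.normal(size=n)       # "true" current
d = np.abs(T @ J_true) ** 2                                 # phaseless data |E|^2

def cost(J):
    """Least-squares functional for phaseless data: quartic in J."""
    r = np.abs(T @ J) ** 2 - d
    return np.sum(r ** 2)

def wirtinger_grad(J):
    """Gradient of cost with respect to conj(J) (Wirtinger calculus)."""
    E = T @ J
    return 2 * T.conj().T @ ((np.abs(E) ** 2 - d) * E)

# The global phase is unobservable: J and exp(j*theta)*J give the same cost,
# so the minimizer is never unique.
assert np.isclose(cost(J_true), 0.0)
assert np.isclose(cost(np.exp(0.7j) * J_true), cost(J_true))
assert np.allclose(wirtinger_grad(J_true), 0.0)
```

The vanishing gradient at every point of the phase-shifted family illustrates why stationary points abound in such functionals.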
Over the years, several studies have addressed the question of traps in non-convex optimization [16][17][18][19][20] but, to date, a deterministic procedure that escapes from traps and works for a generic objective function is still not available.
To overcome the issue of traps in phase retrieval, in the last decade new methods like PhaseLift [21] and PhaseCut [22] have been introduced. Such methods exploit the lifting technique which, through a redefinition of the unknown space, allows recasting the original quadratic problem as a linear one. However, since the number of unknowns of the linear problem is the square of the original one, lifting-based methods are suitable only for problems with a small number of unknowns [23].
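The lifting step can be illustrated with a small numerical sketch. The operator T below is again a random stand-in: the point is only that the quadratic map J ↦ |T J|^2 becomes linear in the rank-one unknown F = J J^H, at the price of squaring the number of unknowns.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 32, 8
T = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))  # stand-in operator
J = rng.normal(size=n) + 1j * rng.normal(size=n)

# Lifted unknown: F = J J^H has n^2 entries instead of n.
F = np.outer(J, J.conj())

# Lifted (linear) operator: row i is the flattened rank-one matrix t_i t_i^H.
A = np.stack([np.outer(T[i], T[i].conj()).ravel() for i in range(m)])

lhs = A @ F.ravel()          # linear model applied to the lifted unknown
rhs = np.abs(T @ J) ** 2     # original quadratic model
assert np.allclose(lhs.real, rhs) and np.allclose(lhs.imag, 0, atol=1e-9)
print(A.shape)  # (32, 64): the unknown dimension went from 8 to 64
```

The shape of A makes the scalability limit concrete: n unknowns in the quadratic model become n^2 in the linear one.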
The failure of lifting-based methods in tackling large-scale problems has shifted attention back to non-convex approaches. In particular, the mathematical conditions guaranteeing recovery for the least-squares method associated to Equation (1) have been studied. In this framework, two lines of research can be distinguished, which concern respectively:
• the choice of the starting point in the attraction basin of the objective functional;
• the conditions under which the objective functional is free from traps.
From the studies on the starting point [24][25][26][27], it emerges that some special initializations provide a starting point in the attraction basin if the dimension of data space M is larger than a prescribed value.
At the same time, from the studies on the absence of trap points [28][29][30][31], it emerges that the objective functional is free from traps if the value of M is larger than a prescribed value depending on the mathematical model (the relationship between the phaseless data and the unknown function), the kind of unknown function, and the dimension of the unknown space.
In light of the previous discussion, it is clear that the dimension of the data space M plays a key role in the convergence guarantees of the least-squares approach; hence, it is worth investigating how to evaluate it from an analytical point of view.
Although lifting-based methods are not always suitable for finding a solution of the problem, the lifting process represents a key mathematical tool in the estimation of the data-space dimension. Indeed, once a linear model has been obtained by means of the lifting technique, the singular value decomposition can be exploited to estimate a good upper bound on the data-space dimension.
In this paper, such quantity is analytically evaluated with reference to the square magnitude of the field radiated by a magnetic current when it is observed on multiple lines in near non-reactive zone.
Let us remark that in the case of data in amplitude and phase, an analytical estimation of the dimension of data space has been provided for several configurations [32][33][34]; instead, in the case of phaseless measurements, it has been evaluated only for a strip source observed in Fresnel region [35]. Hence, this work represents an extension of [35] to the case of near-field data.
The paper is structured as follows. In Section 2, the geometry of the problem and some preliminary results for the case of data in amplitude and phase are recalled. In Section 3, the dimension of the data space for the case of one observation domain is analyzed, whereas in Section 4 the case of multiple observation lines is studied. In Section 5, numerical simulations that corroborate the analytical results of the previous sections are presented. A section of conclusions follows.

Geometry of the Problem and Preliminary Results on the Radiation Operator
In this article, the 2D scalar geometry depicted in Figure 1 is considered. A magnetic current J(x) = J(x) î_y, directed along the y-axis and supported on the set SD = [−a, a] of the x-axis, radiates within a homogeneous medium. The wavenumber of such a medium is denoted by β = 2π/λ, where λ stands for the wavelength. The electric field radiated by such a strip source has two components, one along the x-axis and another along the z-axis.
The square amplitude of the x component of the electric field, |E(x, z)|^2, is observed in the near non-reactive zone over one or multiple bounded observation domains that are parallel to the source. The i-th observation domain, OD_i, is located along the subset [−X_i, X_i] of the line z = z_i.
Before investigating the dimension of the square magnitude of the radiated field, let us focus our attention on the radiated field itself.
For the configuration at hand, the x component of the electric field on OD_i is given by E_i = T_i J where, in the near non-reactive zone (z_i ≥ λ), the radiation operator T_i admits the integral representation recalled below.

With the aim of simplifying the discussion of the next sections, it is worth recalling from [36] some useful results on the operator T_i T_i^†, where T_i^† denotes the adjoint of T_i. The kernel K_ii of such an operator can be written as an oscillatory integral over the source variable x′ (Equations (5) and (6)). For x ≠ x̄, no stationary point appears in the phase function; hence, under the hypothesis that βa is large enough, the integral in (6) can be asymptotically evaluated by taking into account only the contributions of the endpoints. Accordingly, for each x ≠ x̄, K_ii can be rewritten as in (7), where φ′_ii denotes the partial derivative of φ_ii(x′, x, x̄) with respect to the variable x′. If instead x = x̄, the integral in (6) can be evaluated through the integration-by-parts method. It is worth noting, however, that the asymptotic evaluation of K_ii(x, x̄) provided by (7) connects continuously with the value of K_ii at the point x = x̄.

As can be seen from (7), the kernel of T_i T_i^† is space variant with respect to the variables (x, x̄). To recast it in a form similar to a convolution operator, it is useful to introduce the elliptic coordinate (9) and to adopt the variables η = η(x, z_i), η̄ = η(x̄, z_i) in place of (x, x̄). Accordingly, the kernel of T_i T_i^† becomes K_ii(η, η̄), whose explicit expression involves the Jacobian term dx̄/dη̄.

Because of the term dx̄/dη̄, the kernel function K_ii(η, η̄) is singular as η̄ → ±1. However, if X_i ≤ a, or in other words if the observation domain is no larger than the source domain, then η̄(X_i, z_i) < 1. In such a circumstance, K_ii(η, η̄) can be recast in the simple and nice form

K_ii(η, η̄) ≈ 2a e^{−jβa (γ(η, z_i) − γ(η̄, z_i))} sinc(βa (η − η̄))  (13)

with γ(η, z_i) the corresponding phase term.
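The endpoint-only asymptotic evaluation behind (7) can be illustrated on a generic oscillatory integral without stationary points. The phase and amplitude below are arbitrary choices, not the paper's φ_ii and f_ii: integration by parts gives I ≈ [f(x) e^{jΛφ(x)} / (jΛ φ′(x))] evaluated between the endpoints, with relative error O(1/Λ).

```python
import numpy as np

# Toy oscillatory integral I = ∫_0^1 f(x) exp(j*L*phi(x)) dx, with phi' != 0 on [0, 1]
phi = lambda x: x**2 + x           # phase: phi'(x) = 2x + 1 > 0, no stationary point
dphi = lambda x: 2 * x + 1
f = lambda x: 1.0 / (1.0 + x)      # smooth, slowly varying amplitude

L = 300.0                          # large parameter (plays the role of beta*a)
x = np.linspace(0.0, 1.0, 200001)
vals = f(x) * np.exp(1j * L * phi(x))
I_num = np.sum((vals[:-1] + vals[1:]) / 2) * (x[1] - x[0])   # brute-force trapezoid

# Endpoint (integration-by-parts) approximation: boundary terms only.
ep = lambda x0: f(x0) * np.exp(1j * L * phi(x0)) / (1j * L * dphi(x0))
I_asy = ep(1.0) - ep(0.0)

rel_err = abs(I_num - I_asy) / abs(I_num)
assert rel_err < 5e-2              # leading term captures the O(1/L) integral
```

Increasing L tightens the agreement, mirroring the role of the hypothesis that βa is large.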

Dimension of Data Space for a Single Scanning Line
In this section, the question of evaluating the dimension of data space in the case of a single observation line in near non-reactive zone is addressed.
To tackle this issue, at first, a linear representation of |E_1|^2 (the square amplitude of the x component of the electric field over OD_1) is introduced. After that, the singular values of such a linear operator are investigated with the aim of evaluating the dimension of the data space.
A linear model is obtained in two steps. The first consists in rewriting the quadratic model |E_1|^2 = |T_1 J|^2 in expanded form. The second step consists in redefining the unknown space by considering as unknown the function F(x′, x″) = J(x′) J*(x″). This allows defining a linear operator A_1, called the lifting operator, which maps F into |E_1|^2; its adjoint is denoted by A_1^†. Thanks to the introduction of the lifting operator, it is possible to recast the square amplitude distribution |E_1|^2 under the linear model |E_1|^2 = A_1 F.

Since the operator A_1 is linear and compact, its singular system can be introduced. It consists of the triple {v_m, σ_m, u_m}, where {v_m} and {u_m} are the singular functions that represent |E_1|^2 and F, respectively, while {σ_m} are the singular values. As is well known, the singular functions v_m and u_m satisfy the usual coupled equations [37]. The introduction of the singular system of the lifting operator A_1 allows expanding the square amplitude distribution |E_1(x)|^2 in the series (19).

Although, from a theoretical point of view, the number of singular functions in the expansion (19) is infinite, such an expansion can be truncated. In particular, since the operator A_1 is compact, its singular values approach zero as the index m increases. Moreover, in the near non-reactive zone, the kernel of the lifting operator behaves like an entire function of exponential type; accordingly, the singular values of the lifting operator become negligible beyond a critical index M_1. This implies that the expansion of |E_1|^2 can be truncated with a negligible representation error by using a finite number of terms; hence, |E_1|^2 can be approximated by the first M_1 terms of (19), where M_1 is equal to the number of relevant singular values of the lifting operator. From this discussion, it follows that the dimension of the data space is related to the value of the scalar M_1, which represents the object of our investigation.
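The truncation argument can be mimicked numerically on any compact operator with a rapidly decaying spectrum. The matrix below is a smooth-kernel stand-in, not the actual lifting operator: keeping only the singular values above a small relative threshold leaves a representation error at the threshold level.

```python
import numpy as np

# Discretized stand-in for a compact operator: a smooth (Gaussian) kernel on a grid.
N = 200
t = np.linspace(-1.0, 1.0, N)
A1 = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.02) * (t[1] - t[0])

U, s, Vh = np.linalg.svd(A1)

# "Relevant" singular values: those above a relative threshold.
thr = 1e-6
M1 = int(np.sum(s > thr * s[0]))

# Truncating the singular-value expansion of the data y = A1 @ g keeps the
# representation error at the threshold level.
g = np.random.default_rng(2).normal(size=N)
y = A1 @ g
y_trunc = (U[:, :M1] * s[:M1]) @ (Vh[:M1] @ g)
assert np.linalg.norm(y - y_trunc) <= 10 * thr * s[0] * np.linalg.norm(g)
assert M1 < N   # fast spectral decay: far fewer relevant terms than samples
```

The count M1 plays the role of the critical index: the data effectively live in an M1-dimensional subspace.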
To compute the singular values σ_m of the lifting operator A_1, the eigenvalues λ_m of the operator A_1 A_1^† will be studied, since σ_m = √λ_m. By comparing (5) and (22), it is evident that the kernel of A_1 A_1^† is not of convolution type; for such a reason, the estimation of its eigenvalues is a difficult task. With the aim of recasting A_1 A_1^† in a form more similar to a convolution operator, let us pass from the couple of variables (x, x̄) to the variables η = η(x, z_1) and η̄ = η(x̄, z_1) defined by (9). In these new variables, the operator A_1 A_1^† can be expressed through the kernel H_11(η, η̄), where the last equality has been obtained by considering Equation (10). Taking (7) into account, the expression of H_11 in (25) comes out.

As highlighted in Section 2, the derivative term dx̄/dη̄ exhibits a singularity as η̄ → ±1; in particular, for η̄ → ±1, dx̄/dη̄ = O(1/(η̄ ∓ 1)^{3/2}). Despite this, the singularities of the derivative term are perfectly balanced by the second factor of Equation (25). In fact, for η̄ → ±1, the terms f_11(a, η, η̄)/φ′_11(a, η, η̄) and f_11(−a, η, η̄)/φ′_11(−a, η, η̄) are O((η̄ ∓ 1)^{3/4}); hence, the second factor in (25) vanishes as (η̄ ∓ 1)^{3/2} when η̄ → ±1. Accordingly, differently from the kernel of T_1 T_1^†, the kernel of A_1 A_1^† is free from singularities.

The expression of H_11(η, η̄) provided by (25) does not allow evaluating the eigenvalues of A_1 A_1^† in closed form. In order to succeed in this task, let us approximate the amplitude terms f_11(a, η, η̄)/φ′_11(a, η, η̄) and f_11(−a, η, η̄)/φ′_11(−a, η, η̄) as in (26). Such an approximation is derived in Appendix A, and it allows expressing the kernel H_11 in the sinc-squared form (27); hence, the operator A_1 A_1^† can be expressed as in (28). In Figures 2 and 3, the actual kernel of A_1 A_1^† (i.e., the H_11(η, η̄) function) and the approximation of such a kernel provided by (27) are sketched.
Figures 2 and 3. Actual kernel H_11(η, η̄) of A_1 A_1^† and its approximation (27), in dB, for the configuration a = 20λ.
As can be seen from the figures, the sinc-squared kernel represents a very good approximation of the actual kernel of A_1 A_1^† at the center of the plot, whereas at the edges (for η̄ approaching ±1) the approximation is slightly less accurate.
Under the approximation (28), the eigenvalues of A_1 A_1^† can be analytically evaluated. Indeed, according to [38], the eigenvalues of an operator with a sinc-squared kernel are given by Equation (29), with

M_1 = [ (4/π) βa η(X_1, z_1) ]  (30)

where [·] denotes the nearest integer. Accordingly, the singular values of the operator A_1 are given by σ_m = √λ_m (31). From Equation (31), it is evident that the integer M_1 provides the number of relevant singular values of the lifting operator in the case of a single observation line in the near zone.
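Assuming M_1 = [(4/π) βa η(X_1, z_1)] (the same expression given for M_i in the multi-line analysis of Section 6), with β = 2π/λ the count reduces to the nearest integer of 8 (a/λ) η, and the values quoted in the numerical section can be checked directly:

```python
import numpy as np

def relevant_singular_values(a_over_lambda, eta):
    """M_1 = [(4/pi) * beta*a * eta] with beta = 2*pi/lambda, i.e. [8 * (a/lambda) * eta]."""
    beta_a = 2 * np.pi * a_over_lambda      # beta * a
    return int(round(4 / np.pi * beta_a * eta))

# Configurations used in the numerical section (a = 20*lambda):
assert relevant_singular_values(20, 0.90) == 144   # Figure 4 case
assert relevant_singular_values(20, 0.99) == 158   # Figure 5 case
```

Both counts match the values reported for the single-line experiments.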
If the function F(x′, x″) were a generic function of the space L^2(SD × SD), such a number would be exactly equal to the dimension of the data space. Since, in our problem, the square magnitude of the radiated field is generated by the rank-one function F(x′, x″) = J(x′) J*(x″), the scalar M_1 represents an upper bound on the dimension of the data space.

Dimension of Data Space for Multiple Scanning Lines
In this section, the question of providing the dimension of the data space in the case of P observation lines in the near non-reactive zone is tackled. In such a case, the square amplitude distributions |E_1|^2, ..., |E_P|^2, supported respectively on the sets OD_1, ..., OD_P, are related to the current distribution J by the quadratic model

|E_i|^2 = |T_i J|^2,  i = 1, ..., P.  (32)

With the aim of evaluating the dimension of the data space, at first, a linear operator representing the square amplitude distributions |E_1|^2, ..., |E_P|^2 is defined. After that, a closed-form approximation of the singular values of such an operator is derived. Finally, a good upper bound on the dimension of the data space is estimated by counting the number of relevant singular values.
To obtain a linear representation of the phaseless data, also in this case the lifting technique is exploited. Hence, by considering as unknown the function F(x′, x″) = J(x′) J*(x″), the lifting operator A of (33) is introduced. The latter can be expressed, ∀i ∈ {1, ..., P}, in terms of the operators A_i (Equations (34) and (35)). Now, the quadratic model (32) can be recast as a linear model in F. By virtue of (34) and (35), the adjoint operator A^† can be expressed in terms of the operators A_i^†, defined ∀i ∈ {1, ..., P}.

Since the singular values of the lifting operator A are the square roots of the eigenvalues of AA^†, the need of studying the operator AA^† arises. The latter is composed, ∀i, j ∈ {1, ..., P}, of the operators A_i A_j^† with kernels H_ij, whose defining integral can be recast in the form (41). At this juncture, in order to provide an explicit expression of H_ij, let us distinguish the cases i = j and i ≠ j.

Kernel Evaluation of the Main Diagonal Terms of AA †
If i = j, from the comparison between (41) and (5), it follows that H_ii(x, x̄) = |K_ii(x, x̄)|^2. Since the kernel H_ii is not of convolution type, an analytical study of its eigenvalues may be difficult. To recast it like a convolution kernel, let us pass from the variables (x, x̄) to the variables η = η(x, z_i) and η̄ = η(x̄, z_i). By doing this, it results that A_i A_i^† can be expressed in the new variables and, taking (11) into account, the corresponding expression of H_ii comes out. As shown in Appendix A, the terms f_ii(−a, η, η̄)/φ′_ii(−a, η, η̄) and f_ii(a, η, η̄)/φ′_ii(a, η, η̄) can be approximated in the same way as in the single-line case. Accordingly, ∀i ∈ {1, ..., P}, the kernel of A_i A_i^† can be rewritten as the sinc-squared kernel (48).

Kernel Evaluation of the Off-Diagonal Terms of AA †
If i ≠ j and βa ≫ 1, the kernel H_ij can still be evaluated by an asymptotic technique but, in such a case, the phase function also contains a stationary point, represented by the solution x′_s of the equation φ′_ij(x′, x, x̄) = 0 with respect to the variable x′. For such a reason, the asymptotic evaluation of H_ij contains not only the contributions due to the endpoints x′ = a and x′ = −a, but also the contribution of the stationary point x′ = x′_s. By passing from the variables (x, x̄) to the variables η = η(x, z_i) and η̄ = η(x̄, z_j), the kernel of A_i A_j^† can be expressed as in (51). Although Equation (51) provides a closed-form expression of the kernel of A_i A_j^†, a study of such an operator from an analytical point of view is difficult to perform. However, as will be seen in the next sections, some useful considerations can be made to understand the eigenvalue behavior of the operator AA^†.

The Role of the Distance between the Scanning Lines
In order to predict the eigenvalues of AA^†, the role played by the distance between the i-th and j-th scanning lines must be analyzed. From (41), it is evident that if the distance |z_i − z_j| is small, then the operator A_i A_j^† is very similar to the operators A_i A_i^† and A_j A_j^† which, in turn, are similar to each other. Hence, if |z_i − z_j| is small ∀i, j ∈ {1, 2, ..., P} with i ≠ j, then all the operators composing AA^† are similar, and the number of relevant eigenvalues of AA^† is essentially equal to that of the case of a single observation line.
On the contrary, when the distance |z_i − z_j| increases, the operator A_i A_j^† starts to differ from A_i A_i^† and A_j A_j^†, and such a difference becomes more and more evident as the distance grows. In particular, when the distance between the i-th and the j-th scanning lines increases, the norm of the off-diagonal operators A_i A_j^† strongly decreases. Accordingly, there will exist a minimum value of the distance |z_i − z_j| for which the norm of A_i A_j^† is negligible with respect to those of A_i A_i^† and A_j A_j^†. To quantify the value of such a distance, let us introduce the auxiliary operator T_z linking the magnetic current density J(x′) ∈ L^2([−a, a]) to E(x_o, z) ∈ L^2([z_min, z_max]), i.e., the x-component of the electric field evaluated along the axis x = x_o for z_min ≤ z ≤ z_max.
The introduction of such an operator allows considering the associated point spread function in the observation domain, which represents the most focused field, with its maximum at the point z = z_o, that the source is able to radiate along the axis x = x_o.
From the mathematical point of view, the point spread function in the observation domain is the impulse response of the system made up of the cascade of the regularized inverse operator T_z^{−1} and the forward one; hence, it is defined by applying the cascade T_z T_z^{−1} to the impulse function δ(z − z_o). In our discussion, the point spread function in the observation domain is used to estimate the optimal points where the radiated electric field E(x_o, z) must be sampled so as not to lose relevant information in the discretization process. Next, from the knowledge of the optimal sampling points in the case of data in amplitude and phase, a useful criterion for the spacing between adjacent observation domains in the case of phaseless measurements is derived.
To do this, let us derive an explicit expression of the point spread function. Since the inverse operator T_z^{−1} can be approximated by a weighted adjoint operator T_zw^† [39], the point spread function can be approximated through the operator T_z T_zw^†. A closed-form expression of the operator T_z T_zw^† is provided in [40]; hence, on the basis of such a result, an approximation of the point spread function in the observation domain follows, where the sign (+) holds for ν while (−) holds for ζ, respectively.

As shown in [41], the zeros of the point spread function in the observation domain represent the optimal sampling points of the radiated field; for such a reason, E(x_o, z) must be sampled at the points {z_{i+1}} satisfying the corresponding zero-crossing equation. However, our main aim is that of finding a sampling strategy for |E(x_o, z)|^2. This can easily be done by remembering that the bandwidth of the square magnitude of a function is twice that of the function itself. For such a reason, the optimal sampling points of |E(x_o, z)|^2 are given by Equation (57), which provides the minimum distance for which two samples of |E(x_o, z)|^2 collected at different values of z are independent of each other. Hence, it gives a guideline for the choice of the spacing ∆z_i = z_{i+1} − z_i between two adjacent scanning lines in the case of phaseless measurements.
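The bandwidth-doubling argument invoked above is elementary and easy to verify numerically: for a band-limited signal f, the spectrum of |f|^2 extends to twice the band. The test signal below is an arbitrary band-limited construction, unrelated to the paper's fields.

```python
import numpy as np

N = 256
n = np.arange(N)
k = 10                                       # highest harmonic in f: band [-k, k]
c = (np.arange(k + 1) + 1) * (1 + 0.5j)      # arbitrary nonzero spectral coefficients

# Band-limited signal: spectrum confined to bins -k..k; adding the conjugate makes it real.
f = np.sum([c[m] * np.exp(2j * np.pi * m * n / N) for m in range(k + 1)], axis=0)
f = f + f.conj()

F2 = np.fft.fft(np.abs(f) ** 2)
occupied = np.nonzero(np.abs(F2) > 1e-8 * np.abs(F2).max())[0]
freqs = np.minimum(occupied, N - occupied)   # map FFT bins to |frequency|
assert freqs.max() == 2 * k                  # |f|^2 occupies exactly twice the band
```

This is the reason the sampling step adequate for E(x_o, z) must be halved when only |E(x_o, z)|^2 is measured.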
However, from Equation (57), it is evident that the distance between the samples of |E(x_o, z)|^2 also depends on x_o; in other words, the value of z_{i+1} obtained by solving (57) is a function of x_o. For such a reason, to ensure that the data collected on the (i+1)-th observation domain are independent of those collected on the i-th observation domain, the value of z_{i+1} must be chosen at least equal to the maximum value of the function z_{i+1}(x_o) derived by inverting (57). Hence, the spacing ∆z_i = z_{i+1} − z_i between two adjacent scanning lines is dictated by the maximum value of z_{i+1}(x_o), which is achieved at x_o = 0 or x_o = X_i. From the previous discussion, it follows that a reasonable criterion for the choice of z_{i+1} is

z_{i+1} = max { z_{i+1}(0), z_{i+1}(X_i) }  (58)

where z_{i+1}(0) and z_{i+1}(X_i) are the solutions of Equations (59) and (60), respectively. Hence, if the distance between two adjacent scanning lines is chosen according to the criterion (58), the norm of all the off-diagonal operators A_i A_j^† is negligible with respect to the norm of the operators on the main diagonal. In such a condition, the operator AA^† can be expressed in the block-diagonal form (61).
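The block-diagonal approximation (61) can be probed with a small matrix analogue: when the off-diagonal blocks of a Hermitian block operator are weak, its spectrum is close to the union of the spectra of the diagonal blocks, with a deviation bounded by the coupling norm (Weyl's inequality). The matrices below are random Hermitian stand-ins for A_i A_i^†, not discretizations of the paper's operators.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 30

def hermitian(n):
    """Random Hermitian positive semidefinite block."""
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return M @ M.conj().T / n

D1, D2 = hermitian(n), hermitian(n)   # stand-ins for A1 A1^+, A2 A2^+
B = 1e-3 * (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))  # weak coupling

# Hermitian block operator [[D1, B], [B^H, D2]] mimics AA^+ for two lines.
AAh = np.block([[D1, B], [B.conj().T, D2]])

eig_full = np.sort(np.linalg.eigvalsh(AAh))
eig_union = np.sort(np.concatenate([np.linalg.eigvalsh(D1), np.linalg.eigvalsh(D2)]))

# Weyl's inequality: the eigenvalue deviation is bounded by the coupling norm.
Z = np.zeros_like(B)
coupling = np.linalg.norm(AAh - np.block([[D1, Z], [Z, D2]]), 2)
assert np.max(np.abs(eig_full - eig_union)) <= coupling
```

With coupling of order 1e-3, the full spectrum is indistinguishable from the union of the two block spectra, which is exactly the regime that criterion (58) aims to enforce.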

The Dimension of Data Space
Under the approximations (61) and (48), the eigenvalues of AA^† can be analytically computed; in particular, they are given by the union of the eigenvalues of the operators on the main diagonal, where, ∀i ∈ {1, ..., P}, the eigenvalues of A_i A_i^† follow the law (63) with

M_i = [ (4/π) βa η(X_i, z_i) ].

Accordingly, the singular values of the lifting operator A are given by (64) where, ∀i ∈ {1, ..., P}, the singular values associated with the i-th line are the square roots of the corresponding eigenvalues (65). From the previous analysis, it comes out that the total number of relevant singular values of the lifting operator A can be expressed as

M = Σ_{i=1}^{P} M_i.  (66)

If the extensions of the observation lines X_1, ..., X_P are such that η(X_1, z_1) = ... = η(X_P, z_P) = η_max, the previous equation particularizes as

M = P [ (4/π) βa η_max ].  (67)

The scalar M represents the desired upper bound on the dimension of the data space in the case of P observation lines.
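Using the expression M_i = [(4/π) βa η(X_i, z_i)] quoted above, the summation (66) and its equal-η specialization can be checked against the two-line configurations of the numerical section:

```python
import numpy as np

def M_i(a_over_lambda, eta_i):
    """Relevant singular values for the i-th line: [(4/pi)*beta*a*eta_i] = [8*(a/lambda)*eta_i]."""
    return int(round(4 / np.pi * 2 * np.pi * a_over_lambda * eta_i))

def M_total(a_over_lambda, etas):
    """Upper bound on the data-space dimension for P scanning lines, eq. (66)."""
    return sum(M_i(a_over_lambda, e) for e in etas)

# Two lines, a = 20*lambda, eta_max = 0.9 on both: M = 2 * 144 = 288
assert M_i(20, 0.9) == 144
assert M_total(20, [0.9, 0.9]) == 288
```

The value 288 is the count reported for the two-line experiments with η_max = 0.9.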
It is worth highlighting that Equation (67) is very similar to Equation (46) of [35], i.e., M ≈ (4P/π) βa u_max. The latter represents the number of relevant singular values of the lifting operator when the observation domain is an ensemble of P observation arcs in the Fresnel zone subtending the angular sector [−θ_max, θ_max]. As can be seen, the relevant difference between Equation (67) of this article and Equation (46) of [35] is that η_max is replaced by sin θ_max. This is perfectly consistent with the fact that, in the Fresnel zone, the variable η can be approximated by sin θ.

Numerical Simulations
In this section, some numerical simulations that confirm the analytical results of Sections 3 and 4 are shown.
The numerical tests are performed by considering a source with semi-length a = 20λ, whose radiated field's square magnitude is observed on one or two truncated observation lines in the near zone.

Numerical Simulations for a Single Observation Line
In this section, numerical simulations for the case of a single observation line are provided. As a first example, an observation domain with z_1 = 5λ and X_1 = 20.75λ (η(X_1, z_1) = 0.9) is considered.
In order to check the theoretical result provided by (30), in Figure 4 the singular values of the lifting operator A_1 and their analytical estimation are compared. As can be seen from Figure 4, the two diagrams overlap exactly up to the index M_1 = 144, which represents an upper bound on the dimension of the data space. This means that the number of relevant singular values is exactly predicted by (30), whereas their values are well estimated by (31).
A second test has been performed for a configuration with z_1 = 5λ and X_1 = 37λ (η(X_1, z_1) = 0.99). Hence, in this case, the observation domain has been significantly extended along the x-axis.
With reference to such a configuration, in Figure 5 the singular values of the lifting operator and their analytical estimation provided by (31) are sketched. Since in this case the approximation of the kernel of A_1 A_1^† with a sinc-squared function is slightly less accurate, the approximation of the singular values provided by (31) is also slightly less accurate than in the previous test case.
As concerns the number of relevant singular values, it is exactly the same for both diagrams and is correctly predicted by Equation (30), which returns M_1 = 158.

Numerical Results for Two Observation Lines
In this section, with reference to the case of two observation domains, some key numerical experiments are shown. In all the test cases, the semi-extensions of the observation domains X_1 and X_2 are chosen in such a way that η(X_1, z_1) = η(X_2, z_2) = η_max.
As a first test case, a configuration with z_1 = 5λ and z_2 = 5.25λ is considered. The extension of the observation domains is such that η varies over the set [−0.9, 0.9]; hence, X_1 = 20.75λ and X_2 = 21.01λ.
Since z_1 and z_2 are very close, the operators A_1 A_1^†, A_2 A_2^†, A_1 A_2^†, and A_2 A_1^† do not differ significantly from each other. Hence, their eigenvalues (sketched in Figure 6) also exhibit the same decay. Accordingly, in such a case, the second scanning line does not significantly increase the number of relevant singular values, and the dimension of the data space remains essentially equal to that of the case of one scanning line. This fact is also confirmed by Figure 7, in which the singular values of A_1 are sketched in blue.
In the second test case, a configuration with z_1 = 5λ and z_2 = 5.67λ is analyzed. Also in this case, the extension of the observation lines is such that η ∈ [−0.9, 0.9]; hence, X_1 = 20.75λ and X_2 = 21.47λ.
The main difference between this configuration and the previous one is the choice of the distance z_2 which, in this experiment, has been increased and chosen according to the criterion (58). As stated in Section 4, when the distance |z_2 − z_1| increases, the norm of the off-diagonal terms A_1 A_2^† and A_2 A_1^† decreases with respect to the norms of A_1 A_1^† and A_2 A_2^†. For each i, j ∈ {1, 2}, the norm of the operator A_i A_j^† can be evaluated from its eigenvalues; hence, the relation between the norms of the four operators can be understood from a simple plot of their eigenvalues, which is sketched in Figure 8.
As can be seen from Figure 8, the eigenvalues of the off-diagonal terms (A_1 A_2^†, A_2 A_1^†) decay considerably faster than the eigenvalues of the terms on the main diagonal (A_1 A_1^†, A_2 A_2^†). Therefore, ||A_1 A_1^†|| and ||A_2 A_2^†|| are greater than ||A_1 A_2^†|| and ||A_2 A_1^†||. Accordingly, at least the number of significant singular values of A can be approximated by (66).
In Figure 9, the singular values of A numerically computed are compared with their analytical estimation provided by (65).
As shown by such a figure, the estimation of the number of relevant singular values M is quite good and equal to 288, as predicted by (66); however, the approximation of the singular values is not so accurate. This happens because the norm of the off-diagonal terms of AA^† (although negligible with respect to the terms on the main diagonal) does not approach zero. For such a reason, there is still a small effect brought by the off-diagonal terms of AA^† on its eigenvalues and, consequently, on the singular values of A.
Hence, it is possible to conclude that the choice of the minimum distance satisfying the criterion (58) is sufficient to ensure that the data collected on the second scanning line are independent of those collected on the first one, but it does not allow reaching the minimum dynamics of the singular values of the lifting operator.
Figure 9. Singular values of A numerically computed, and their analytical estimation for the case a = 20λ, z_1 = 5λ, z_2 = 5.67λ, X_1 = 20.75λ (η(X_1, z_1) = 0.9), X_2 = 21.47λ (η(X_2, z_2) = 0.9).
A third test case concerns the configuration z_1 = 5λ, z_2 = 7.5λ, X_1 = 20.75λ (η(X_1, z_1) = 0.9), X_2 = 23.75λ (η(X_2, z_2) = 0.9). Hence, the distance |z_2 − z_1| has been further increased with respect to the previous case. In this test case, the eigenvalues of A_1 A_2^† and A_2 A_1^† decay faster than in the previous case (see Figure 10); hence, the ratio between the norms of the off-diagonal terms and the norms of the main diagonal terms is smaller than before. This implies that the approximation of the singular values provided by (65) is more accurate.
In Figure 11, the singular values of A are compared with their theoretical approximation. From such a plot, it is evident that, in the considered case, our theoretical estimation accurately predicts not only the dimension of the data space (M = 288) but also the values of the singular values themselves.
A final test case has been performed for the configuration z_1 = 5λ, z_2 = 7.5λ, X_1 = 37.03λ (η(X_1, z_1) = 0.99), X_2 = 50.97λ (η(X_2, z_2) = 0.99). Hence, with respect to the previous case, the extension of the observation lines has been enlarged while the distance between them is unchanged.
With reference to such a configuration, in Figure 12 the singular values of A numerically computed are compared with the analytical estimation provided by (65). From such a plot, it is evident that, although the distance |z_2 − z_1| is large enough, the number of relevant singular values is well predicted by (66), but the estimation of the singular values is less accurate than in the example of Figure 11. This aspect can be easily understood by remembering that the approximations (47) and (48) work well when the maximum values of η and η̄ do not approach 1. In the considered example, the maximum value of such variables is equal to 0.99. For such a reason, the approximation of the kernels of A_1 A_1^† and A_2 A_2^† with the corresponding sinc-squared functions is not so accurate and, consequently, the estimation of the singular values provided by (65) does not match perfectly its numerical evaluation.
Figure 12. Singular values of A numerically computed, and their analytical estimation for the case a = 20λ, z_1 = 5λ, z_2 = 7.5λ, X_1 = 37.03λ (η(X_1, z_1) = 0.99), X_2 = 50.97λ (η(X_2, z_2) = 0.99).

Conclusions
In this article, the question of evaluating the dimension of the data space in the quadratic formulation of the phase retrieval problem has been addressed. In particular, a good upper bound on the dimension of the functional space containing all the possible square magnitudes of the near field radiated by a magnetic current strip, observed on one or multiple lines parallel to the source in the near non-reactive zone, has been estimated.
Such an analysis has been developed in multiple steps. In particular, after having introduced the linear lifting operator representing the phaseless data, the upper bound on the dimension of the data space has been defined as the number of relevant singular values of such an operator.
With the aim of estimating the number of relevant singular values of the lifting operator, the kernel of the related eigenvalue problem has been first approximated by asymptotic reasoning and then recast as a convolution kernel by exploiting a proper change of variables. Finally, the eigenvalues of such an operator have been analytically computed and, from these, the values and the number of the relevant singular values have been derived.
Let us remark that our analytical results allow showing explicitly how the geometrical parameters of the configuration affect the dimension of the data space.
Moreover, before concluding, it is worth noting that the adopted methodology for estimating an upper bound on the dimension of the data space is quite general; hence, it can be extended to other geometries as well.