A New Hybrid Inversion Method for 2D Nuclear Magnetic Resonance Combining TSVD and Tikhonov Regularization

This paper is concerned with the reconstruction of relaxation time distributions in Nuclear Magnetic Resonance (NMR) relaxometry. This is a large-scale, ill-posed inverse problem with many potential applications in biology, medicine, chemistry, and other disciplines. However, the large amount of data, the consequently long inversion times, and the high sensitivity of the solution to the value of the regularization parameter still represent a major obstacle to the applicability of NMR relaxometry. We present a method for two-dimensional data inversion (2DNMR) which combines Truncated Singular Value Decomposition (TSVD) and Tikhonov regularization in order to reduce the inversion time and the sensitivity of the solution to the value of the regularization parameter. The Discrete Picard Condition is used to jointly select the SVD truncation and Tikhonov regularization parameters. We evaluate the performance of the proposed method on both simulated and real NMR measurements.


Introduction
Nuclear Magnetic Resonance (NMR) relaxometry has become an important tool to study the molecular structure and properties of materials. A typical NMR experiment consists of measuring the relaxation process due to the re-establishment of the nuclear system into its equilibrium state, after the application of a short magnetic pulse parameterized with a predefined flip angle. The relaxation process is described by longitudinal and transversal dynamics, characterized by distributions of longitudinal (T_1) and transversal (T_2) relaxation times [1]. The computation of the relaxation time distribution requires the numerical solution of a Fredholm integral equation with separable Laplace-type kernels. In particular, we focus on the inversion of 2D NMR relaxation data, acquired using a conventional Inversion-Recovery (IR) experiment detected by a Carr-Purcell-Meiboom-Gill (CPMG) pulse train [2]. Then, the evolution time t_1 in IR and the evolution time t_2 in CPMG are two independent variables, and the 2D NMR relaxation data S(t_1, t_2) can be expressed as:

S(t_1, t_2) = ∫_0^∞ ∫_0^∞ k_1(t_1, T_1) k_2(t_2, T_2) F(T_1, T_2) dT_1 dT_2 + e(t_1, t_2)    (1)

where the unknown F(T_1, T_2) is the distribution of the T_1 and T_2 relaxation times, e(t_1, t_2) represents Gaussian additive noise, and k_1(t_1, T_1), k_2(t_2, T_2) are Laplace-type kernels given by:

k_1(t_1, T_1) = 1 − 2 exp(−t_1/T_1),   k_2(t_2, T_2) = exp(−t_2/T_2)    (2)

whose singular values quickly decay to zero. Since the unknown function F is a distribution of the values of the relaxation times T_1 and T_2, we can assume F(T_1, T_2) ≥ 0 for all T_1 and T_2.
Experimental data are usually collected at discrete values of the times; therefore, by considering M_1 × M_2 samples of the times t_1 and t_2, and N_1 × N_2 samples of the relaxation times T_1 and T_2, problem (1) is discretized as:

s = Kf + e,   K = K_2 ⊗ K_1    (3)

where K_1 ∈ R^{M_1×N_1}, K_2 ∈ R^{M_2×N_2} are the discretized exponential kernels, s ∈ R^M, M = M_1 · M_2, is the vector of the measured noisy data (which can have negative values), f ∈ R^N, N = N_1 · N_2, is the vector reordering of the unknown distribution, and e ∈ R^M is the vector of the discretized noise. The inversion of (3) is severely ill-conditioned and several direct and iterative methods have been thoroughly discussed in both the NMR [3][4][5] and the mathematical [6][7][8] literature. Direct regularization includes Truncated Singular Value Decomposition (TSVD) and Tikhonov methods. TSVD replaces K by a matrix of reduced rank, thus avoiding noise amplification in the inversion procedure. This is usually obtained by removing the singular values smaller than a given tolerance τ > 0. Tikhonov-like regularization substitutes (3) with a more stable optimization problem whose objective function includes a data-fitting term and a regularization term imposing some a priori knowledge about the solution. In NMR, Tikhonov-like regularization is usually employed. It requires the solution of a non-negative least squares (NNLS) problem of the form

min_{f≥0} ‖Kf − s‖² + α‖f‖²    (4)

where α is the regularization parameter. In the sequel, ‖·‖ will denote the Euclidean norm of a vector or the Frobenius norm of a matrix. It is well known that α controls the smoothness of f and makes the inversion less ill-conditioned but, if wrongly chosen, it may bias the result. For this reason, many parameter selection techniques have been proposed, such as Generalized Cross Validation (GCV), the L-curve and U-curve methods, and methods based on the Discrepancy Principle (DP) (see [6] and references therein).
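As a concrete illustration of this discretization, the following Python sketch builds K_1 and K_2 from the kernel expressions above and checks the Kronecker-product identity underlying model (3). This is not the authors' Matlab code; the grid sizes and time ranges are illustrative assumptions.

```python
# Sketch of the discretized 2DNMR model s = K f + e with K = K2 ⊗ K1.
# Sizes and time ranges are illustrative, not the paper's experimental values.
import numpy as np

M1, M2 = 32, 256          # samples of t1 (IR) and t2 (CPMG)
N1, N2 = 20, 20           # samples of T1 and T2
t1 = np.logspace(-3, 0.5, M1)          # log-spaced IR evolution times
t2 = np.linspace(1e-3, 1.0, M2)        # linearly spaced CPMG echo times
T1 = np.logspace(-3, 1, N1)
T2 = np.logspace(-3, 1, N2)

K1 = 1.0 - 2.0 * np.exp(-np.outer(t1, 1.0 / T1))   # IR kernel, M1 x N1
K2 = np.exp(-np.outer(t2, 1.0 / T2))               # CPMG kernel, M2 x N2

F = np.random.default_rng(0).uniform(size=(N1, N2))  # placeholder distribution
# Column-major vec identity: (K2 ⊗ K1) vec(F) = vec(K1 F K2^T),
# so the big matrix-vector product reduces to two small matrix products.
s_noiseless = (K1 @ F @ K2.T).flatten(order="F")
K = np.kron(K2, K1)      # explicit M x N matrix, formed here only for checking
```

In practice K is never formed explicitly; the identity above is what makes the large-scale problem tractable.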
In NMR, the Butler, Reed, and Dawson (BRD) method [9] or the S-curve method [10] is usually applied for the selection of the regularization parameter. Another critical issue when solving NMR problems is that the amount of data collected for one measurement can be very large, with typical sizes M_1 = 30-150 and M_2 = 1000-50,000. If the grid used for the estimated distribution has 100 × 100 elements, then the matrix K can have up to 5 · 10^9 elements and the computational cost of the data inversion may be extremely high.
The separable kernel structure of K has been widely exploited in the NMR literature to perform data compression using independent SVDs of K_1 and K_2 and thus reduce the computational complexity. The method proposed in [4], in the sequel referred to as the Venkataramanan-Song-Hürlimann (VSH) algorithm, is considered a reference method for NMR data inversion and is widely used both in academia and in industry. Given the TSVDs of K_1 and K_2:

K_{k_i,i} = U_{k_i,i} Σ_{k_i,i} V_{k_i,i}^T,   i = 1, 2,    (5)

where k_i is the number of considered singular values, the VSH method approximates the original problem (4) with the better conditioned one

min_{f≥0} ‖K_{k_1,k_2} f − s‖² + α‖f‖²    (6)

where K_{k_1,k_2} = K_{k_2,2} ⊗ K_{k_1,1}. The data s is then projected onto the column subspace of K_{k_1,k_2} in order to obtain an NNLS problem with compressed data and kernel of significantly smaller dimensions. A method adapted from the BRD algorithm is then used to solve the optimization problem.
Other methods have been proposed in the literature for the inversion of 2DNMR data. The algorithm of Chouzenoux et al. [11] uses a primal-dual optimization strategy coupled with an iterative minimization in order to jointly account for the non-negativity constraint in (4) and introduce a regularization term. A preconditioning strategy is used to reduce the computational cost of the algorithm. In [12], an algorithm for 2DNMR inversion from limited data is presented. This algorithm uses compressive-sensing-like techniques to fill in missing measurements, and the resulting regularized minimization problem is solved using the method of [4]. The Discrete Picard Condition (DPC) [13] is used to choose the truncation parameters. The 2DUPEN algorithm [14] uses multiparameter Tikhonov regularization with automatic choice of the regularization parameters. Without any a priori information about the noise norm, 2DUPEN automatically computes the locally adapted regularization parameters and the distribution of the unknown NMR parameters by using variable smoothing. In [15], an improvement of 2DUPEN (I2DUPEN), obtained by applying data windowing and SVD filtering, is presented. The SVD filtering is applied to reduce the problem size as in [4].
Since the matrices K_1 and K_2 typically have small sizes, it is possible to compute the exact SVD of K by exploiting its Kronecker product structure. In fact, if K_i = U_i Σ_i V_i^T, i = 1, 2, are the SVDs of the kernel factors, then

K = UΣV^T  with  U = U_2 ⊗ U_1,  Σ = Σ_2 ⊗ Σ_1,  V = V_2 ⊗ V_1    (7)

(up to a permutation sorting the diagonal entries of Σ in non-increasing order). However, the TSVD of K cannot be computed as a Kronecker product. For this reason, different approximations of the TSVD can be found in the literature, such as the matrix K_{k_1,k_2} of the VSH method or the randomized SVD (RSVD) [16,17], where randomized algorithms are used to approximate the dominant singular values of K. In this work, we show that it is possible to compute the exact TSVD of K efficiently, avoiding approximations, by suitably using the properties of the Kronecker product structure. Let K_k = U_k Σ_k V_k^T be the TSVD of K, where k is the number of considered singular values; we propose to solve the following Tikhonov-like problem:

min_{f≥0} ‖K_k f − s‖² + α‖f‖²    (8)

Moreover, we propose and test an automatic rule, based on the DPC, for the selection of both the TSVD truncation index and the Tikhonov parameter. Finally, we analyze the filtering properties of our method compared to the TSVD, Tikhonov and VSH methods. Therefore, our approach can be considered to be a hybrid inversion method that combines TSVD and Tikhonov regularization: Tikhonov regularization prevents discontinuities and artificial peaks, while TSVD acts as a preconditioning strategy and reduces the computational cost. Other approaches combining RSVD and Tikhonov regularization do exist in the literature [16,17]. Such techniques apply randomized algorithms to reduce large-scale problems to much smaller-scale ones and find a regularized solution by applying some regularization method to the small-scale problems in combination with some parameter choice rule. The key point of these randomized algorithms is that the SVD of the original linear operator K is never computed.
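The key observation, that the singular values of K = K_2 ⊗ K_1 are exactly the pairwise products of the factor singular values, can be sketched as follows. This is a hedged Python illustration with toy sizes (variable names are ours, not the paper's); the direct SVD of the full matrix appears only to verify the claim.

```python
# Sketch: exact TSVD of K = K2 ⊗ K1 from the two small factor SVDs,
# without decomposing the large matrix K itself. Toy sizes for checking.
import numpy as np

rng = np.random.default_rng(1)
K1 = rng.standard_normal((6, 4))
K2 = rng.standard_normal((5, 3))

U1, s1, V1t = np.linalg.svd(K1, full_matrices=False)
U2, s2, V2t = np.linalg.svd(K2, full_matrices=False)

# Singular values of K2 ⊗ K1 are all products s2[j] * s1[i]; sort decreasingly.
sig = np.kron(s2, s1)
perm = np.argsort(sig)[::-1]
sig_sorted = sig[perm]

k = 5            # truncation index: perm[:k] are the Kronecker pairs kept by K_k

# Verification against a direct SVD of the full matrix (affordable at toy size):
K = np.kron(K2, K1)
sig_direct = np.linalg.svd(K, compute_uv=False)
```

The corresponding singular vectors are the matching Kronecker products of columns of U_2, U_1 and V_2, V_1, so the exact TSVD never requires the SVD of the large matrix.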
Concerning the solution of (8), in the present paper we apply the Newton Projection (NP) method [18], where the Conjugate Gradient (CG) method is used to solve the inner linear systems. This technique guarantees the desired high accuracy and has been successfully applied in the NMR context [14,15]. Gradient projection methods with acceleration techniques, such as those discussed in [19], required more computational effort to solve (8) with the required accuracy. However, the typical sparse structure of the solution, represented by nonzero peaks over flat regions, could possibly be taken into account in solving (8), for example by adding L1 penalties [20].
Summarizing, the contribution of this paper is twofold. First, the paper shows that relaxation time distributions of improved quality can be obtained by using K_k instead of the separate TSVDs of K_1 and K_2, without a significant increase in the computational complexity. In fact, the computational cost of our approach is slightly greater than that of VSH and considerably smaller than that of RSVD. Second, the paper describes an efficient method for jointly selecting the SVD truncation index k and the Tikhonov regularization parameter α and for solving the NMR data inversion problem.
The remainder of this paper is organized as follows. In Section 2 we introduce our method and, in Section 3, we analyze its filtering properties. In Section 4, we provide implementation details and discuss numerical results. Some conclusions are reported in Section 5. Finally, the VSH method is described in Appendix C.

The Proposed Hybrid Algorithm
In this section we illustrate the details of our approach and formalize the proposed Hybrid Algorithm 1. We first propose to approximate problem (4) with

min_{f≥0} ‖K_k f − s‖² + α‖f‖²    (9)

where K_k = U_k Σ_k V_k^T is the TSVD of K. By projecting the data vector s onto the column subspace of K_k and by neglecting constant terms, we obtain the equivalent formulation:

min_{f≥0} ‖U_k^T (K_k f − s)‖² + α‖f‖²    (10)

which can be written as:

min_{f≥0} ‖Σ_k V_k^T f − U_k^T s‖² + α‖f‖²    (11)

with compressed data U_k^T s ∈ R^k and kernel Σ_k V_k^T. We observe that the solution of (11) lies in a subspace of the column space of V_k. In the following paragraphs we develop the solution steps of problem (11) and the rules for the choice of the TSVD truncation index k and of the Tikhonov parameter α. Finally, we illustrate the filtering properties of our new method and compare them to those of the Tikhonov, TSVD and VSH methods.
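The equivalence between the full and compressed formulations can be checked numerically: projecting onto the column space of U_k changes the objective of (9) only by the constant ‖(I − U_k U_k^T)s‖², which does not depend on f. The following Python sketch verifies this on random toy data (an illustration, not the authors' code):

```python
# Sketch: the objectives of problems (9) and (11) differ by a constant
# independent of f, so they share the same minimizers. Toy sizes, random data.
import numpy as np

rng = np.random.default_rng(2)
M, N, k = 40, 25, 8
K = rng.standard_normal((M, N))
s = rng.standard_normal(M)
alpha = 0.1

U, sig, Vt = np.linalg.svd(K, full_matrices=False)
Uk, sk, Vkt = U[:, :k], sig[:k], Vt[:k, :]
Kk = Uk @ np.diag(sk) @ Vkt               # TSVD approximation K_k

def obj_full(f):                          # objective of (9)
    return np.sum((Kk @ f - s) ** 2) + alpha * np.sum(f ** 2)

def obj_compressed(f):                    # objective of (11)
    return np.sum((np.diag(sk) @ Vkt @ f - Uk.T @ s) ** 2) + alpha * np.sum(f ** 2)

f_a, f_b = rng.uniform(size=N), rng.uniform(size=N)
const_a = obj_full(f_a) - obj_compressed(f_a)
const_b = obj_full(f_b) - obj_compressed(f_b)   # same constant for any f
```

The constant is exactly the energy of s outside the column space of U_k, which is why it can be neglected in the minimization.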

Solution of the Minimization Problem
We use the Newton Projection (NP) method [18] to solve the constrained minimization problem (11); for a detailed description of NP we refer the reader to Appendix B. We apply the Conjugate Gradient (CG) method to solve the inner linear systems. To this purpose, it is necessary to perform several matrix-vector products involving the matrices U k and V k . Although U k and V k are parts of the matrices U and V, they do not inherit the Kronecker product structure. However, we can show that matrix-vector products can be performed efficiently by exploiting the structure of U and V.
Given a vector x ∈ R^M, let zero(x, k) be the vector obtained by zeroing its last M − k components:

zero(x, k) = (x_1, . . . , x_k, 0, . . . , 0)^T.

Using the Kronecker product property

(A ⊗ B) vec(X) = vec(B X A^T),

from Equation (7) we have

U^T s = (U_2 ⊗ U_1)^T vec(S) = vec(U_1^T S U_2)

and

V^T f = (V_2 ⊗ V_1)^T vec(F) = vec(V_1^T F V_2),

where S ∈ R^{M_1×M_2} is the matrix of the measured data such that s = vec(S), and F ∈ R^{N_1×N_2} represents the computed distribution such that f = vec(F) (the entries being ordered consistently with the non-increasing ordering of the singular values of K). Thus, matrix-vector products involving U_k and V_k can be efficiently performed by using the Kronecker product structure of U and V and by setting to zero the last components of the resulting vector; for instance, U_k^T s consists of the first k components of U^T s, and V_k y = V zero(y, k) for y ∈ R^k.
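A minimal Python sketch of this fast product follows (toy sizes; the explicit handling of the sorting permutation is our reconstruction of the reordering implied by Equation (7), and the explicit Kronecker matrix appears only to verify the result):

```python
# Sketch: compute U_k^T s via vec(U1^T S U2) and the singular-value ordering,
# without ever forming U = U2 ⊗ U1 (which has M x M entries).
import numpy as np

rng = np.random.default_rng(3)
M1, N1, M2, N2, k = 6, 4, 5, 3, 7
K1 = rng.standard_normal((M1, N1))
K2 = rng.standard_normal((M2, N2))
S = rng.standard_normal((M1, M2))          # data matrix, s = vec(S)
s = S.flatten(order="F")

U1, s1, _ = np.linalg.svd(K1, full_matrices=False)
U2, s2, _ = np.linalg.svd(K2, full_matrices=False)
perm = np.argsort(np.kron(s2, s1))[::-1]   # sort Kronecker singular values

# Fast path: two small matrix products, then reorder and keep the first k entries.
Uts_fast = (U1.T @ S @ U2).flatten(order="F")[perm][:k]

# Explicit construction, affordable only at toy size, used for checking:
U = np.kron(U2, U1)[:, perm]
```

Products with V_k and V_k^T are handled in the same way through F, V_1 and V_2.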

Choice of the Parameters k and α
The quality of the restored distribution f strongly depends on the values of both the truncation parameter k and the Tikhonov parameter α. We propose to choose both values by using the Discrete Picard Condition (DPC) [6]. For ill-conditioned inverse problems, this condition guarantees that a good solution is obtained by keeping, in the SVD expansion of the solution

f = Σ_i (u_i^T s / σ_i) v_i,

only the coefficients u_i^T s/σ_i such that the Fourier coefficients u_i^T s decrease on average faster than the singular values σ_i. A truncation parameter k satisfying the DPC can be selected by visual inspection of the so-called Picard plot (i.e., a plot of the quantities |u_i^T s| and σ_i versus i). Alternatively, an index k satisfying the DPC can be selected by using automatic techniques such as those described in [21,22].
Once the value of k has been fixed by using the DPC, the value of α is set as

α = σ_{k+1}²

since, as explained in [6], the value α = σ_i² represents the breakpoint at which the ith filter factor of the Tikhonov method changes nature. Therefore, this choice is motivated by the low-pass filtering properties of both the TSVD and Tikhonov methods.
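A simplified automatic stand-in for this selection rule can be sketched as follows. The paper relies on Picard-plot inspection or on the criteria of [21,22]; the noise-floor estimate and the factor 2 below are illustrative assumptions of ours, and the singular values and Picard coefficients are synthetic.

```python
# Simplified DPC-style selection of k and alpha on a synthetic Picard plot:
# coefficients |u_i^T s| decay with sigma_i until they hit a noise plateau.
import numpy as np

sigma = 0.9 ** np.arange(100)            # synthetic decreasing singular values
coeff = np.maximum(sigma, 1e-3)          # synthetic |u_i^T s|: decay, then plateau

noise_floor = np.median(coeff[-20:])     # estimate the plateau level from the tail
keep = coeff > 2.0 * noise_floor         # DPC: keep coefficients still decaying
k = int(np.count_nonzero(keep))          # truncation index
alpha = sigma[k] ** 2                    # first discarded singular value, squared
```

On this synthetic example the rule keeps the 59 coefficients that lie above the plateau and sets α to the square of the first discarded singular value.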
Summarizing the previous considerations, we outline the steps of our proposed Hybrid Algorithm 1.

Analysis of the Filtering Properties
In this section we prove that our Hybrid algorithm acts as a low-pass filtering method, similarly to the TSVD and Tikhonov methods, and we compare its filtering properties to those of VSH. Let us first report some basic properties of the solution of the NNLS problem:

min_{f≥0} ‖Kf − s‖².    (17)

If f* is a solution of (17), then it satisfies [23]:

K^T(Kf* − s) ≥ 0,  f* ≥ 0,  (f*)^T K^T(Kf* − s) = 0.    (18)

Once the active set of f* is defined by

A* = { i : f*_i = 0 },    (19)

we can define the diagonal matrix D* as

(D*)_{ii} = 0 if i ∈ A*,  (D*)_{ii} = 1 otherwise.    (20)

Then, using the relation D* f* = f*, we obtain from (18)

(KD*)^T (KD* f* − s) = 0,    (21)

which are the normal equations of the following LS problem

min_f ‖KD* f − s‖².    (22)

The following result characterizes the minimum norm solution of (22).

Theorem 1. Let K ∈ R^{M×N}, s ∈ R^M and let D* be the diagonal matrix defined in (20), where A* is the active set of a local solution of (17). Then, the minimum norm solution of the least squares problem (22) has the following expression:

f* = Σ_i (u_i^T s / σ_i) ṽ_i,  with ṽ_i = D* v_i.    (23)

To prove this result, we first need the following lemma, whose proof is given, for the reader's convenience, in Appendix A.

Lemma 1. Let K = UΣV^T be the SVD of K ∈ R^{M×N} and let D* ∈ R^{N×N} be the diagonal matrix defined in (20). Then, the pseudo-inverse of UΣV^T D* ∈ R^{M×N} is given by

(UΣV^T D*)^+ = D* VΣ^+ U^T.    (24)

We are now in the position to prove Theorem 1.
Proof (Theorem 1). Using the pseudo-inverse notation, we can write the minimum norm solution of the LS problem (22) as:

f* = (KD*)^+ s

and, using (24), we have:

f* = D* VΣ^+ U^T s = Σ_i (u_i^T s / σ_i) ṽ_i,  ṽ_i = D* v_i.  □

Theorem 1 shows that the solution of the constrained problem (17) can be written in terms of the SVD of the matrix K. Obviously, the vectors ṽ_i may be linearly dependent if f* lies in a proper subspace of R^N. It is well known that the TSVD and Tikhonov methods both compute a filtered solution f_filt of problem (22) with different filter functions φ_i(·) [6]. Using the result of Theorem 1, we can express the nonnegatively constrained TSVD solution as

f_TSVD = D*_TSVD Σ_i φ_i^(k) (u_i^T s / σ_i) v_i,    (25)

where D*_TSVD is the diagonal matrix (20) and the filter factors are

φ_i^(k) = 1 if i ≤ k,  φ_i^(k) = 0 if i > k.    (26)

Analogously, the nonnegative Tikhonov solution is given by

f_TIKH = D*_TIKH Σ_i φ_i^(α) (u_i^T s / σ_i) v_i,    (27)

where D*_TIKH is the diagonal matrix (20) defined with respect to the active set A* of a local solution of (4) and the filter factors are

φ_i^(α) = σ_i² / (σ_i² + α).    (28)

Equations (25) and (27) define a low-pass filter with vectors ṽ_i = D* v_i. The solution provided by the hybrid method is still a filtered solution, whose filter factors depend on two parameters; i.e.,

f_HYBRID = D*_HYBRID Σ_i φ_i^(k,α) (u_i^T s / σ_i) v_i,    (29)

where D*_HYBRID is the diagonal matrix (20) defined with respect to the active set A* of a local solution of (11) and the filter factors are

φ_i^(k,α) = σ_i² / (σ_i² + α) if i ≤ k,  φ_i^(k,α) = 0 if i > k,    (30)

with k ∈ {1, . . . , N} and α ∈ R^+. By properly choosing the parameters k and α, the filter factors for the low-frequency components can be set close to one, while the filter factors for the high-frequency components can be set close to zero. Figure 1 shows the filter factors of the compared methods. We observe that the VSH solution can also be expressed in terms of the SVD of K (see the algorithm details in Appendix C). We define the index subset I(k_1, k_2) including the indices of the singular values of K which correspond to singular values of K_{k_2,2} ⊗ K_{k_1,1}:

I(k_1, k_2) = { i : σ_i is a diagonal entry of Σ_{k_2,2} ⊗ Σ_{k_1,1} }.    (31)

Thus, we have:

f_VSH = D*_VSH Σ_i φ_i^(k_1,k_2,α) (u_i^T s / σ_i) v_i,    (32)

where D*_VSH is the diagonal matrix (20) defined with respect to the active set A* of a local solution of (6) and the filter factors are

φ_i^(k_1,k_2,α) = σ_i² / (σ_i² + α) if i ∈ I(k_1, k_2),  φ_i^(k_1,k_2,α) = 0 otherwise,    (33)

with k_1 ∈ {1, . . . , N_1}, k_2 ∈ {1, . . . , N_2} and α ∈ R^+.
However, it is almost impossible to determine values of the truncation parameters k_1 and k_2 such that the vectors ṽ_i, for i ∈ I(k_1, k_2), only correspond to low-frequency components carrying meaningful information about the unknown solution. An explanatory example is discussed in Section 4. For this reason, the VSH method cannot be considered a low-pass filtering method.
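The low-pass behaviour discussed in this section can be checked numerically by evaluating the TSVD, Tikhonov and hybrid filter factors on a synthetic spectrum of singular values (a sketch with our own toy values, not the paper's spectra):

```python
# Sketch: filter factors of TSVD, Tikhonov and the hybrid method on a
# synthetic decreasing spectrum of singular values.
import numpy as np

sigma = np.logspace(0, -4, 50)       # synthetic decreasing singular values
k = 20                               # truncation index
alpha = sigma[k] ** 2                # breakpoint choice for the Tikhonov parameter

phi_tsvd = (np.arange(50) < k).astype(float)          # 1 up to k, then 0
phi_tikh = sigma ** 2 / (sigma ** 2 + alpha)          # smooth roll-off
phi_hybrid = np.where(np.arange(50) < k, phi_tikh, 0.0)  # roll-off, then hard cut
```

The hybrid factors inherit the smooth damping of Tikhonov below the cutoff and the exact suppression of TSVD beyond it, which is the low-pass behaviour claimed above.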

Numerical Results
In this section, we present results obtained from numerical tests with both simulated and real 2DNMR measurements. We have compared the proposed Hybrid method with the VSH method, which is a reference method for 2DNMR data inversion. Moreover, in the case of real data, the 2DUPEN method has been considered as a benchmark. The considered methods have been implemented in Matlab R2018b on Windows 10 (64-bit edition), configured with an Intel Core i5 3470 CPU running at 3.2 GHz.
The relative error, computed as ‖F_ex − F‖/‖F_ex‖, is used to measure the effectiveness of the compared methods, while the execution time in seconds is used to evaluate their efficiency (here, F_ex and F denote the exact and restored distributions, respectively).
The reported values of the execution time are the mean over ten runs. Since both methods are iterative, in the following, we give some details about the implemented stopping criteria and parameters.

Hybrid Method
The NP method is used for the solution of problem (11); it is described in detail in Appendix B. The constant distribution with values equal to one is chosen as the initial iterate. The NP iterations have been stopped on the basis of the relative decrease in the objective function F(f) of (11); i.e., the method is stopped when an iterate f^(k) has been determined such that

|F(f^(k)) − F(f^(k−1))| ≤ Tol_NP · |F(f^(k))|.

The inner linear system for the search direction computation is solved by the CG method with relative tolerance Tol_CG. The values Tol_NP = 10^−3 and Tol_CG = 10^−7 have been fixed in all the numerical experiments. A maximum of Kmax_NP = 500 and Kmax_CG = 10^4 iterations have been allowed for NP and CG, respectively.
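The stopping rule can be illustrated with a much simpler solver. The following Python sketch applies it inside a projected-gradient loop on a small NNLS-with-Tikhonov problem; this is a stand-in of ours (the paper uses Newton Projection with inner CG), meant only to show the relative-decrease test and the tolerances in action.

```python
# Illustration of the relative-decrease stopping rule (Tol_NP = 1e-3,
# Kmax_NP = 500) inside a projected-gradient solver for min_{f>=0} ||Af-b||^2 + a||f||^2.
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
alpha = 0.1

def obj(f):
    return np.sum((A @ f - b) ** 2) + alpha * np.sum(f ** 2)

L = 2.0 * (np.linalg.norm(A, 2) ** 2 + alpha)  # Lipschitz constant of the gradient
f = np.ones(10)                                # constant initial iterate, as in the paper
F_old = obj(f)
for it in range(500):                          # Kmax_NP
    grad = 2.0 * A.T @ (A @ f - b) + 2.0 * alpha * f
    f = np.maximum(f - grad / L, 0.0)          # projection onto f >= 0
    F_new = obj(f)
    if abs(F_old - F_new) <= 1e-3 * abs(F_new):  # relative decrease test (Tol_NP)
        break
    F_old = F_new
```

With the 1/L step the objective decreases monotonically, so the relative-decrease test eventually triggers well before the iteration cap.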

VSH Method
The VSH method consists of three main steps. In the first step, the data is compressed using the SVDs of the kernels; in the second step, the constrained optimization problem is transformed into an unconstrained one in a lower dimensional space, using a method adapted from the BRD algorithm. In the third step, the optimal value of α is selected by iteratively finding a solution whose fit error is consistent with the known noise variance. The VSH method is described in detail in Appendix C. Here we just report the values of the parameters required in our Matlab implementation of the VSH method.
Each iteration of the VSH method needs, for a fixed value of α, the solution of a reduced-size optimization problem obtained from (6) by projecting the data s onto the column subspace of K_{k_1,k_2}. The Newton method is used to solve the subproblems; its iterations have been stopped on the basis of the relative decrease in the objective function. The values Tol_N = 10^−3 and Kmax_N = 500 have been fixed. The values Tol_CG = 10^−7 and Kmax_CG = 10^4 have been set for the CG method solving the inner linear systems. The outer iterations of VSH have been terminated when two sufficiently close approximations of the unknown distribution have been determined, or after a maximum of 100 iterations; the relative tolerance value Tol_VSH = 0.1 has been set. As the initial choice for the regularization parameter, the value α_0 = 1 has been chosen.

Simulated NMR Data
The considered simulated distribution F(T_1, T_2), shown in Figure 2, is a mixture of three Gaussian functions located at (T_1, T_2) coordinates (81.86, 12.84), (8.59, 6.66) and (433.27, 59.4) ms, with standard deviations (0.1, 0.1), (0.1, 0.25) and (0.25, 0.1) ms. We have considered the Laplace-type kernels K_1 and K_2 of (2); we have sampled t_1 logarithmically between 0.001 and 3, and t_2 linearly between 0.001 and 1, with N_1 = 100, N_2 = 100. The 2D data have been obtained according to the 2D observation model (3) with M_1 = 2048, M_2 = 128. White Gaussian noise has been added to obtain a signal-to-noise ratio (SNR) equal to 23 dB. We remark that this setting corresponds to a realistic T_1 -T_2 measurement.
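The simulated experiment can be reproduced in outline as follows. This is a hedged Python sketch: the grid sizes are reduced with respect to the paper's, and the peak widths are treated as standard deviations in log10(T) units, which is our assumption.

```python
# Sketch of the simulated T1-T2 experiment: three Gaussian peaks on a (T1, T2)
# grid, forward-projected through model (3), with noise scaled to a 23 dB SNR.
import numpy as np

rng = np.random.default_rng(5)
N1 = N2 = 40
M1, M2 = 256, 64
t1 = np.logspace(-3, np.log10(3.0), M1)
t2 = np.linspace(1e-3, 1.0, M2)
T1 = np.logspace(-3, 0, N1)              # seconds
T2 = np.logspace(-3, 0, N2)

# Peaks at (T1, T2) = (81.86, 12.84), (8.59, 6.66), (433.27, 59.4) ms;
# widths interpreted here (our assumption) in log10(T) units.
lg1, lg2 = np.log10(T1)[:, None], np.log10(T2)[None, :]
F = np.zeros((N1, N2))
for (c1, c2, w1, w2) in [(0.08186, 0.01284, 0.1, 0.1),
                         (0.00859, 0.00666, 0.1, 0.25),
                         (0.43327, 0.0594, 0.25, 0.1)]:
    F += np.exp(-((lg1 - np.log10(c1)) / w1) ** 2
                - ((lg2 - np.log10(c2)) / w2) ** 2)

K1 = 1.0 - 2.0 * np.exp(-np.outer(t1, 1.0 / T1))   # IR kernel
K2 = np.exp(-np.outer(t2, 1.0 / T2))               # CPMG kernel
signal = (K1 @ F @ K2.T).flatten(order="F")

snr_db = 23.0
e = rng.standard_normal(signal.size)
e *= np.linalg.norm(signal) / (np.linalg.norm(e) * 10 ** (snr_db / 20))
s = signal + e

snr_check = 20 * np.log10(np.linalg.norm(signal) / np.linalg.norm(e))
```

Scaling the noise vector by the norm ratio makes the realized SNR hit the 23 dB target exactly by construction.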

Models Comparison
We first compare the proposed hybrid inversion model (9) with the VSH model (6) and with the classic Tikhonov model (4). Figure 3 depicts the singular values of K, K_1 and K_2. Clearly, the singular values of K are obtained by reordering the diagonal elements of Σ = Σ_2 ⊗ Σ_1 in non-increasing order, and they are different from the diagonal elements of Σ_{k_2,2} ⊗ Σ_{k_1,1}. That is, some unwanted small singular values of K may be included in K_{k_2,2} ⊗ K_{k_1,1}, while some large singular values may not be considered. Figure 4 shows the singular values of K_k obtained for τ = 1 (blue line) and the singular values of K_{k_2,2} ⊗ K_{k_1,1} for τ_1 = τ_2 = 0.5 (blue dotted line), τ_1 = τ_2 = 1 (black dashed line) and τ_1 = τ_2 = 5 (red dash-dotted line). Observe that the singular values of K_k are always different from those of K_{k_2,2} ⊗ K_{k_1,1}. Considering the threshold value τ = τ_1 = τ_2 = 1, the number k = 82 of singular values of K_k is larger than that of K_{k_1,k_2}, given by k_1 × k_2 = 8 × 6. Comparing, in Figure 4, the blue line and the black dashed one (obtained for τ_1 = τ_2 = 1), we observe that the singular values of K_{k_1,k_2} include a few elements smaller than τ and miss some larger terms that should be included. Moreover, if τ_1 = τ_2 = 0.5, the number of singular values of K_{k_1,k_2} is k_1 × k_2 = 9 × 7, which is closer to k = 82, but there are many singular values smaller than τ. Finally, in the case τ_1 = τ_2 = 5, K_{k_1,k_2} has only a few large singular values (k_1 = 5, k_2 = 4) and many relevant solution coefficients are missing. The plots in Figure 4 indicate that, when considering problem (6), it is highly probable either to include in the solution components dominated by noise (if τ is slightly too small) or to discard components dominated by the contributions from the exact right-hand side (if τ is too large).
A visual inspection of the Picard plot (Figure 5) indicates, for the Hybrid method, the choice τ = 1 for the truncation tolerance, giving the value k = 82 of the truncation parameter. The previous considerations suggest choosing the values τ_1 = τ_2 = 1 for the VSH method. Once the projection subspaces for the VSH and Hybrid methods are defined, we solve the optimization problems (6) and (11) by applying the same numerical solver (the Newton Projection method NP) and changing the values of the regularization parameter α. In this way, we investigate how the different subspace selections affect the solutions computed by models (6) and (11) for different values of α. Moreover, we apply the Tikhonov method (4), in which no subspace projection is applied. We use NP to solve all the optimization problems, since we aim at comparing the three data inversion models independently of the numerical method used for their solution.
For each model and for ten values of α, logarithmically distributed between 10^−4 and 10^2, Table 1 reports the relative error values (columns labeled Err), the number of NP iterations (columns labeled It), and the time in seconds required for solving the three inversion models (columns labeled Time). The numerical results of Table 1 are summarized in Figure 6, where the relative error behaviour (left) and the computational time in seconds (right) are shown versus the regularization parameter values. Figure 7 shows the distributions estimated from the three models for α = 1 (left column), α = 10^−2 (middle column) and α = 10^−4 (right column). Figures 8 and 9 report the corresponding projections along the T_1 and T_2 dimensions. Figures 7-9 show the importance of the proper choice of the subspace where the computed solution is represented. Indeed, the distribution computed by our method lies in a subspace of the column space of V_k, while the distribution determined by the VSH method belongs to a subspace of the space spanned by the column vectors of V_{k_2,2} ⊗ V_{k_1,1}.
The results of Table 1 and the plots of Figures 6 and 7 show that the Hybrid model (9) is less sensitive to small values of α than the Tikhonov one, because the matrix K_k is better conditioned than K. When the value of α is properly chosen, the quality of the distributions given by these two models is comparable, but the solution of (9) requires less computational effort. The VSH model is also less sensitive to small values of α, because K_{k_1,k_2} is better conditioned than K. Its solution has the least computational cost, but the computed distributions exhibit artifacts due to the subspace where they are represented (see Figure 7).

Table 1. Numerical results for the models comparison on simulated NMR data. Results obtained by applying the NP method to (6), (4) and (11) with τ = τ_1 = τ_2 = 1.

Methods Comparison
We now compare the Hybrid and VSH algorithms on the simulated NMR data. Table 2 reports the relative error values and the required time in seconds for several values of the truncation parameter τ (here, τ_1 = τ_2 = τ). Moreover, the table shows the computed value of the regularization parameter α and the number of performed Newton (or Newton Projection) and CG iterations (columns labeled It N and It CG, respectively); for VSH only, it also reports the number of outer iterations (column labeled It). The numerical results show that the VSH method has the lowest computational complexity, while the Hybrid method gives the most accurate distributions. The execution time of the Hybrid method is very low, although VSH is less time consuming. Figures 10 and 11 depict the best distributions estimated by the Hybrid and VSH methods, i.e., the distributions obtained with τ = 0.05 and τ_1 = τ_2 = 0.1, respectively. By visually comparing Figures 10 and 11, we observe some spurious small oscillations in the VSH distribution, both at the boundary and in the flat region, while the distribution computed by the Hybrid method is less affected by such artefacts.

Real NMR data
In this section we compare the Hybrid and VSH methods using real NMR measurements obtained from the analysis of an egg yolk sample. The sample was prepared in the NMR Laboratory at the Department of Physics and Astronomy of the University of Bologna, by filling a 10 mm external diameter glass NMR tube with 6 mm of egg yolk. The tube was sealed with Parafilm and then measured immediately. NMR measurements were performed at 25 °C with a homebuilt relaxometer based on a PC-NMR portable NMR console (Stelar, Mede, Italy) and a 0.47 T JEOL electromagnet. All relaxation curves were acquired using phase-cycling procedures. The π/2 pulse width was 3.8 µs and the relaxation delay (RD) was set to a value greater than 4 times the maximum T_1 of the sample; in all experiments RD was equal to 3.5 s. For the 2D measurements, the longitudinal-transverse relaxation curve (T_1 -T_2) was acquired by an IR-CPMG pulse sequence (RD-π_x -T_I -(π/2)_x -TE/2-[π_y -TE/2-echo acquisition-TE/2]_NE). The T_1 relaxation signal was acquired with 128 inversion times (T_I) chosen in geometric progression from 1 ms up to 2.8 s, with NE = 1024 (number of acquired echoes, echo time TE = 500 µs) on each CPMG, and a number of scans equal to 4. The data acquisition step produces an ASCII structured file (in the STELAR data format) including the M_1 · M_2 relaxation data s in (3), where M_1 = 128 and M_2 = 1024, and the vectors t_1 ∈ R^{M_1} (T_I inversion times) and t_2 ∈ R^{M_2} (CPMG echo times). The file is freely available upon email request to the authors. For the data inversion, in order to approximately respect the ratio existing between M_1 and M_2, we choose the values N_1 = 64, N_2 = 73 and compute the vectors T_1, T_2 in geometric progression within predefined intervals obtained from the minimum and maximum values of the vectors t_1 and t_2, respectively. Finally, using (2), we compute the matrices K_1 and K_2.
We use the relaxation time distribution restored by the 2DUPEN method [14], shown in Figure 12 (top row), as the benchmark distribution, since 2DUPEN uses multiparameter regularization and is known to provide accurate results [3]. Obviously, 2DUPEN requires more computational effort, since it involves the estimation of one regularization parameter for each pixel of the distribution.
By visual inspection of the Picard plot (Figure 13), the value τ = 1 has been chosen for the Hybrid method; the same value is fixed for the VSH method. Figure 13 shows the singular values of K, K_1 and K_2. For the VSH method, we report the results obtained at the first iteration since, in this case, they worsen as the iteration number increases. Table 3 reports the T_2 -T_1 coordinates (in ms) where a peak is located, its height in a.u. (arbitrary units), and the required computational time in seconds. Finally, Figures 12 and 14 illustrate the distribution maps, the restored surfaces and the projections along the T_1 and T_2 dimensions; the results of the Hybrid method are more similar to those obtained by 2DUPEN, while the distribution provided by the VSH method exhibits larger boundary artifacts.

Conclusions
In this paper, we have presented a hybrid approach to the inversion of 2DNMR measurements. This approach combines the main features of Tikhonov and TSVD regularization in order to jointly select a suitable approximation subspace for the restored distribution and to reduce the computational cost of the inversion. The Discrete Picard Condition is used to select, at the same time, the Tikhonov regularization parameter and the SVD truncation parameter. The numerical results show that the proposed hybrid method is effective, efficient and robust. In our future work, we intend to integrate the presented hybrid approach into the 2DUPEN method.

Appendix B. The Newton Projection Method
Let us now denote by Q^(k)(f) the objective function of (11):

Q^(k)(f) = ‖Σ_k V_k^T f − U_k^T s‖² + α‖f‖²

and by A(f) the set of indices defined as [24]

A(f) = { i : 0 ≤ f_i ≤ ε and ∂Q^(k)(f)/∂f_i > 0 }

where ε is a small positive parameter and [·]_+ denotes the projection onto the nonnegative orthant. Finally, let E(f) and F(f) denote the diagonal matrices [24] such that

E(f)_{ii} = 1 if i ∈ A(f), 0 otherwise,   F(f) = I − E(f).

The NPCG method for the minimization of Q^(k)(f) under nonnegativity constraints is formally stated in Algorithm A1.