Analytical Solutions to Minimum-Norm Problems

For G ∈ R^{m×n} and g ∈ R^m, the minimization min ‖Gψ − g‖₂, with ψ ∈ R^n, is known as the Tykhonov regularization. We transport the Tykhonov regularization to an infinite-dimensional setting, that is, min ‖T(h) − k‖, where T : H → K is a continuous linear operator between Hilbert spaces H, K and h ∈ H, k ∈ K. In order to avoid an unbounded set of solutions for the Tykhonov regularization, we transform the infinite-dimensional Tykhonov regularization into a multiobjective optimization problem: min ‖T(h) − k‖ and min ‖h‖. We call it the bounded Tykhonov regularization. A Pareto-optimal solution of the bounded Tykhonov regularization is found. Finally, the bounded Tykhonov regularization is modified to introduce the precise Tykhonov regularization: min ‖T(h) − k‖ with ‖h‖ = α. The precise Tykhonov regularization is also optimally solved. All of these mathematical solutions are optimal for the design of Magnetic Resonance Imaging (MRI) coils.

By means of optimization problems, it is possible to model many real-life situations with precision, even though, in the case of multiobjective optimization problems, the existence of global optimal solutions (optimizing all the objective functions at once) is not guaranteed. This is why Pareto-optimal solutions were defined and introduced in the literature of Optimization Theory. Informally, a Pareto-optimal solution is a feasible solution such that, if another feasible solution improves on one objective function, then it must be strictly worse on another objective function.
As shown in [14,16–19], the design of optimal MRI coils is modeled by means of a particular case of optimization problems called minimum-norm problems [20], such as min ‖ψ‖₂ subject to Gψ = g; min ‖ψ‖₂ subject to ‖Gψ − g‖∞ ≤ D; and min ‖Gψ − g‖₂, for ψ ∈ R^n, G ∈ R^{m×n}, g ∈ R^m, and D ≥ 0. Notice that the last of the three above problems is precisely the finite-dimensional Tykhonov regularization. A theoretical treatment in the framework of Operator Theory and Functional Analysis will be given to the above problems, transporting them to an infinite-dimensional setting, together with a MATLAB encoding of their finite-dimensional versions in Appendices A–D.
We also introduce in the literature of Optimization Theory the following multiobjective optimization problem: min ‖T(h) − k‖ and min ‖h‖, where T : H → K is a continuous linear operator between Hilbert spaces H, K and h ∈ H, k ∈ K, which we call the bounded Tykhonov regularization. We provide a nontrivial Pareto-optimal solution of the bounded Tykhonov regularization. Sometimes, the bounded Tykhonov regularization might produce a nontrivial Pareto-optimal solution of an excessively small norm. This is why we introduce what we name the precise Tykhonov regularization: min ‖T(h) − k‖ subject to ‖h‖ = α, which we fully and optimally solve.

Materials and Methods
In this Methodology Section, we properly define the optimization problems that we will deal with. We also gather all the necessary concepts, notions, techniques, and results needed to accomplish our analytical solutions for the previously mentioned optimization problems.

Mathematical Formulation of the Optimization Problems
The optimization problems that we will deal with in this manuscript are described next. As mentioned before, these three problems arise from the optimal design of MRI coils.

Problem 1.
Let G ∈ R^{m×n} and g ∈ R^m in the range of G. Solve

min ‖ψ‖₂ subject to Gψ = g, for ψ ∈ R^n.

Observe that, under the settings of Problem 1, g must be in the range of G, that is, there must exist at least one ψ ∈ R^n for which Gψ = g. Otherwise, the feasible region of Problem 1 is empty. Furthermore, if g = 0, then 0 is trivially the unique solution of Problem 1. The infinite-dimensional or abstract version of Problem 1 is displayed next.

Problem 2. Let X, Y be Banach Spaces, T : X → Y a continuous linear operator, and y ∈ T(X). Solve

min ‖x‖ subject to T(x) = y, for x ∈ X.
Under the settings of Problem 2, x = 0 is the (unique) solution of Problem 2 if and only if y = 0.
Problem 3. Let G ∈ R^{m×n}, g ∈ R^m \ {0}, and 0 < D < ‖g‖∞. Solve

min ‖ψ‖₂ subject to ‖Gψ − g‖∞ ≤ D, for ψ ∈ R^n.

Notice that, under the settings of Problem 3, g does not necessarily need to be in the range of G. However, if the condition 0 < D < ‖g‖∞ is not imposed, then 0 ≤ ‖g‖∞ ≤ D allows 0 to be the unique solution of Problem 3. The infinite-dimensional or abstract version of Problem 3 is as follows.
Problem 4. Let X, Y be Banach Spaces, T : X → Y a continuous linear operator, y ∈ Y \ {0}, and 0 < D < ‖y‖. Solve

min ‖x‖ subject to ‖T(x) − y‖ ≤ D, for x ∈ X.
Under the settings of Problem 4, the reason for D to lie in the open interval (0, ‖y‖) is again to avoid the trivial solution x = 0. Indeed, x = 0 is the (unique) solution of Problem 4 if and only if D ≥ ‖y‖. The following technical lemma ensures that any solution of Problem 4 must lie in the boundary of the feasible region. This lemma will be useful later on.

Lemma 1. Let X, Y be Banach Spaces, T : X → Y a continuous linear operator, y ∈ Y \ {0}, and 0 < D < ‖y‖. If x₀ is a solution of Problem 4, then ‖T(x₀) − y‖ = D.

The third problem that we will deal with is the Tykhonov regularization.
Problem 5 (Finite-dimensional Tykhonov regularization). Let G ∈ R^{m×n} and g ∈ R^m. Solve

min ‖Gψ − g‖₂, for ψ ∈ R^n.

The infinite-dimensional or abstract version of Problem 5 follows next.
Problem 6 (Infinite-dimensional Tykhonov regularization). Let X, Y be Banach Spaces, T : X → Y a continuous linear operator, and y ∈ Y. Solve

min ‖T(x) − y‖, for x ∈ X.

Notice that, under the settings of Problem 6, when y = 0, we obtain the trivial set of solutions given by ker(T).

Supporting Vectors
If X, Y are Banach Spaces and T : X → Y is a continuous linear operator, then the operator norm of T is defined as

‖T‖ := sup{‖T(x)‖ : x ∈ X, ‖x‖ ≤ 1}.

This norm turns the vector space of continuous linear operators from X to Y, CL(X, Y), into a Banach Space. When X = Y, we will simply denote it by CL(X). When Y = K (R or C), we will simply denote it by X*.
The concept of the supporting vector was first introduced in [1], although it appeared implicitly and scattered throughout the literature of Banach Space Theory, as, for instance, in [3,4,9,10].

Definition 1 (Supporting vector). Let X, Y be Banach Spaces. Let T : X → Y be a continuous linear operator. The set of supporting vectors of T is defined as

suppv(T) := {x ∈ X : ‖x‖ = 1 and ‖T(x)‖ = ‖T‖}.
We refer the reader to [2,21,22] for a topological and geometrical study of the set of supporting vectors of a continuous linear operator. Supporting vectors have been successfully applied to solve multiobjective optimization problems that typically arise in Bioengineering, Physics, and Statistics [14,15,23–25], considerably improving the results obtained by means of other techniques, such as heuristic methods [16,18,19].

Definition 2 (1-Supporting vector). Let X be a Banach Space. Let f ∈ X* be a continuous linear functional. The set of 1-supporting vectors of f is defined as

suppv₁(f) := {x ∈ X : ‖x‖ = 1 and f(x) = ‖f‖}.

The 1-supporting vectors are special cases of supporting vectors, that is, suppv₁(f) ⊆ suppv(f). We will strongly rely on 1-supporting vectors later on. A standard geometrical property of 1-supporting vectors is shown in the next remark.
Remark 1. Let X be a Banach Space and f ∈ X* \ {0}. If x belongs to the closed unit ball B_X and f(x) = ‖f‖, then ‖f‖ = f(x) ≤ ‖f‖‖x‖ forces ‖x‖ = 1; hence, suppv₁(f) = f⁻¹({‖f‖}) ∩ B_X, the intersection of a hyperplane with the closed unit ball. In other words, suppv₁(f) is a convex subset of the unit sphere S_X of X.

Riesz Representation Theorem on Hilbert Spaces
The Riesz Representation Theorem is one of the most important results in Functional Analysis and is crucial for working with self-adjoint operators on Hilbert spaces.
Theorem 1 (Riesz Representation Theorem). Let H be a Hilbert space. The dual map of H,

J_H : H → H*, J_H(h) := (·|h),

where J_H(h)(x) = (x|h) for every x ∈ H, is a surjective linear isometry between H and H*.
In the frame of the Geometry of Banach Spaces, J_H is called the duality mapping. In Quantum Mechanics, the dual map J_H has a different name and notation. Under the settings of the Riesz Representation Theorem and by relying on certain techniques of the Geometry of Banach Spaces and on Remark 1, it can be proven that, if h ∈ H \ {0}, then h/‖h‖ is the only 1-supporting vector of J_H(h), that is, suppv₁(J_H(h)) = {h/‖h‖}.

Let H be a Hilbert space. For every closed subspace M of H, the orthogonal subspace of M is denoted by M⊥ and the orthogonal projection of H onto M is denoted by p_M. Notice that H = M ⊕₂ M⊥; in other words, for all x ∈ H,

x = p_M(x) + p_{M⊥}(x).

If H, K are Hilbert spaces and T : H → K is a continuous linear operator, then there exists a unique continuous linear operator T* : K → H such that (T(h)|k) = (h|T*(k)) for all h ∈ H and all k ∈ K. This operator T* is called the adjoint of T. The following technical lemma is well known in the literature of Functional Analysis and Operator Theory, and it will be used later on.

Lemma 2. Let H, K be Hilbert spaces. Let T : H → K be a continuous linear operator. Then, T(H)⊥ = ker(T*) and the closure of T(H) equals ker(T*)⊥.
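The identities of Lemma 2 are easy to check numerically in the finite-dimensional case, where T is a matrix G and T* = Gᵗ. The following sketch is ours, written in Python with NumPy rather than the MATLAB of the paper's appendices; it builds orthonormal bases of col(G) and ker(Gᵗ) from the SVD and verifies their orthogonality:

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((4, 6))
G[3] = G[0] + G[1]          # force a dependent row, so ker(G^t) is nontrivial

# From the SVD, the first r left-singular vectors span col(G) = T(H),
# and the remaining ones span ker(G^t) = ker(T*).
U, s, Vt = np.linalg.svd(G)
r = int(np.sum(s > 1e-10))
range_G = U[:, :r]          # orthonormal basis of T(H)
ker_Gt = U[:, r:]           # orthonormal basis of ker(T*)

# Lemma 2: T(H)^perp = ker(T*), so the two bases are mutually orthogonal.
gram = range_G.T @ ker_Gt
print(np.max(np.abs(gram)))  # numerically zero
```

The complementary dimensions (r plus the dimension of ker(Gᵗ) equal m) reflect the decomposition K = T(H) ⊕₂ ker(T*) in this finite-dimensional setting.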
The finite-dimensional Hilbert spaces involved in the finite-dimensional problems previously mentioned will be denoted by ℓ²_n := (R^n, ‖·‖₂), where ‖·‖₂ denotes the Euclidean norm.

Results
This section is aimed at providing analytical solutions for Problems 2 and 4, which will automatically work for Problems 1 and 3, respectively, since these last two problems are particular cases of the first two.

Analytical Solution of Problems 1 and 2 in the Hilbert Space Context
Problem 2 will actually be tackled, and solved completely, in the Hilbert space context. The reformulation of Problem 2 in the previously mentioned setting follows next.

Problem 7. Let H, K be Hilbert spaces, T : H → K a continuous linear operator, and k ∈ T(H). Solve

min ‖h‖ subject to T(h) = k, for h ∈ H.
Observe that Problem 1 is still a particular case of Problem 7, which is itself a particular case of Problem 2.

Lemma 3.
Let H be a Hilbert space. Let M be a closed subspace of H. For every x ∈ H,

min{‖y‖ : y ∈ x + M} = ‖p_{M⊥}(x)‖,

and the min is attained at p_{M⊥}(x).

Proof. We will show first that (x + M) ∩ M⊥ = {p_{M⊥}(x)}. Indeed, on the one hand, it is clear that p_{M⊥}(x) ∈ M⊥ and, by the orthogonal decomposition x = p_M(x) + p_{M⊥}(x), we have that p_{M⊥}(x) = x − p_M(x) ∈ x + M. On the other hand, take arbitrary elements m ∈ M and m⊥ ∈ M⊥ such that x + m = m⊥. By using the orthogonal decomposition again, p_M(x) + m = m⊥ − p_{M⊥}(x) ∈ M ∩ M⊥ = {0}. As a consequence, m⊥ = p_{M⊥}(x), and so, m = −p_M(x). Next, fix an arbitrary m ∈ M. By virtue of the orthogonal decomposition, note that

‖x + m‖² = ‖p_M(x) + m‖² + ‖p_{M⊥}(x)‖² ≥ ‖p_{M⊥}(x)‖².

Since p_{M⊥}(x) ∈ x + M, as we have just proven, we finally conclude that min{‖y‖ : y ∈ x + M} = ‖p_{M⊥}(x)‖, attained at p_{M⊥}(x).

Remark 2. Under the settings of Lemma 3, for every y ∈ x + M, it is clear that y + M = x + M and p_{M⊥}(x) = p_{M⊥}(y).
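Lemma 3 is straightforward to illustrate numerically. The following minimal sketch (ours, in Python with NumPy, not from the paper's MATLAB appendices) takes M as the span of two random vectors in R⁵ and checks that the minimum-norm element of the affine set x + M is the projection of x onto M⊥:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(5)
A = rng.standard_normal((5, 2))          # M = span of the columns of A
Q, _ = np.linalg.qr(A)                   # orthonormal basis of M

p_M = Q @ (Q.T @ x)                      # orthogonal projection of x onto M
p_Mperp = x - p_M                        # projection of x onto M^perp

# Lemma 3: every element x + m of the affine set has norm >= ||p_Mperp(x)||.
for _ in range(1000):
    m = A @ rng.standard_normal(2)
    assert np.linalg.norm(x + m) >= np.linalg.norm(p_Mperp) - 1e-10
print(np.linalg.norm(p_Mperp))
```

The sampled inequality is exactly the Pythagorean bound ‖x + m‖² = ‖p_M(x) + m‖² + ‖p_{M⊥}(x)‖² from the proof.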
The following theorem solves Problem 7 completely.
Theorem 2. Let H, K be Hilbert spaces. Let T : H → K be a continuous linear operator. For every k₀ ∈ T(H) and every h₀ ∈ T⁻¹(k₀), we have the following:

1. min{‖h‖ : T(h) = k₀} = ‖p_{ker(T)⊥}(h₀)‖.
2. The above min is attained at p_{ker(T)⊥}(h₀).
3. p_{ker(T)⊥}(h₀) = p_{ker(T)⊥}(h₁) for every h₁ ∈ T⁻¹(k₀); in other words, p_{ker(T)⊥}(T⁻¹(k₀)) is a singleton.

Proof. In the first place, observe that {h ∈ H : T(h) = k₀} = h₀ + ker(T). By relying on Lemma 3,

min{‖h‖ : T(h) = k₀} = min{‖h‖ : h ∈ h₀ + ker(T)} = ‖p_{ker(T)⊥}(h₀)‖,

and the min is attained at p_{ker(T)⊥}(h₀). Finally, by taking into consideration Lemma 3 together with Remark 2, we see that p_{ker(T)⊥}(h₀) ∈ T⁻¹(k₀) and p_{ker(T)⊥}(h₀) = p_{ker(T)⊥}(h₁) for each h₁ ∈ T⁻¹(k₀).

Now, we are in the right position to provide a full solution to Problem 1.

Corollary 1.
Let G ∈ R^{m×n} and g ∈ R^m in the range of G. The solution of Problem 1 is given by ‖p_{ker(G)⊥}(ψ₀)‖₂ for any ψ₀ ∈ R^n such that Gψ₀ = g, and it is attained at p_{ker(G)⊥}(ψ₀). A MATLAB encoding for Corollary 1 is available in Appendix A.
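Corollary 1 can be sketched in a few lines of Python with NumPy (an illustrative translation of the idea, ours and not the MATLAB code of Appendix A): for an underdetermined consistent system, the minimum-norm solution of Gψ = g is the Moore–Penrose pseudoinverse applied to g, which coincides with the projection of any particular solution onto ker(G)⊥ = row(G):

```python
import numpy as np

rng = np.random.default_rng(2)
G = rng.standard_normal((3, 6))           # underdetermined: many psi with G psi = g
psi0 = rng.standard_normal(6)
g = G @ psi0                              # guarantees g lies in the range of G

# Minimum-norm solution via the Moore-Penrose pseudoinverse.
psi_star = np.linalg.pinv(G) @ g

# p_{ker(G)^perp}(psi0) computed directly: ker(G)^perp = row(G) = col(G^t).
Q, _ = np.linalg.qr(G.T)                  # orthonormal basis of the row space
proj = Q @ (Q.T @ psi0)

print(np.linalg.norm(psi_star - proj))    # the two agree
```

The agreement of `psi_star` and `proj` is exactly Corollary 1: the pseudoinverse solution is p_{ker(G)⊥}(ψ₀), and its norm never exceeds that of the particular solution ψ₀.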

Analytical Solution of Problem 7 When K := K (the Scalar Field)
If we take K := K in Problem 7, then its solution can also be computed in terms of 1-supporting vectors.

Theorem 3. Let H be a Hilbert space, h₀* ∈ H* \ {0}, and λ ∈ K. Write h₀ := J_H⁻¹(h₀*). Then,

min{‖h‖ : h₀*(h) = λ} = |λ| / ‖h₀*‖.

Even more, the previous min is attained at (λ/‖h₀‖²) h₀.

Proof. Note first that h₀*((λ/‖h₀‖²) h₀) = (λ/‖h₀‖²)(h₀|h₀) = λ, so (λ/‖h₀‖²) h₀ is a feasible solution, and its norm is |λ|/‖h₀‖ = |λ|/‖h₀*‖. This proves the inequality min{‖h‖ : h₀*(h) = λ} ≤ |λ|/‖h₀*‖. In order to prove the reverse inequality, take any feasible h ∈ H, that is, h₀*(h) = λ. Then, |λ| = |h₀*(h)| ≤ ‖h₀*‖‖h‖; hence, ‖h‖ ≥ |λ|/‖h₀*‖. This finally shows that min{‖h‖ : h₀*(h) = λ} = |λ|/‖h₀*‖, attained at (λ/‖h₀‖²) h₀.

As an immediate corollary, if we take m = 1 in Problem 1, then its solution can also be computed in terms of 1-supporting vectors.
Corollary 2. Let G ∈ R^{1×n}, G ≠ 0, and g ∈ R. The solution of Problem 1 is given by |g| / ‖Gᵗ‖₂, and it is attained at ψ := (g / ‖Gᵗ‖₂²) Gᵗ.

Proof. It only suffices to call on Theorem 3 by taking H := ℓ²_n, h₀* := G, λ := g, and h₀ := Gᵗ.
A MATLAB encoding for Corollary 2 is available in Appendix B.
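Corollary 2 in action, as a small sketch of ours in Python with NumPy (the paper's Appendix B is in MATLAB). With G = (3, 0, 4) and g = 10, the optimal value is |g|/‖Gᵗ‖₂ = 10/5 = 2:

```python
import numpy as np

G = np.array([[3.0, 0.0, 4.0]])           # single-row matrix, ||G^t||_2 = 5
g = 10.0

# Corollary 2: optimal point psi = (g / ||G^t||_2^2) G^t, optimal norm |g|/||G^t||_2.
psi = (g / np.linalg.norm(G) ** 2) * G.ravel()
print(psi, np.linalg.norm(psi))           # -> [1.2 0.  1.6] 2.0
```

The point ψ = (1.2, 0, 1.6) satisfies Gψ = 10 and has norm 2, and no shorter vector can reach the value 10 since |Gψ| ≤ ‖G‖₂‖ψ‖₂.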

Partial Solution of Problem 3
A particular version of Problem 3 was partially solved in [20] (Corollary 13). Here, we will follow a completely different approach. Before tackling Problem 3, we will first solve particular cases of it.

Problem 8. Let X be a Banach Space, f ∈ X* \ {0}, and 0 < c ≤ d. Solve

min ‖x‖ subject to c ≤ f(x) ≤ d, for x ∈ X. (11)

We will first solve Problem 8 by relying on 1-supporting vectors.

Lemma 4. Let X be a Banach Space, f ∈ X* \ {0}, and 0 < c ≤ d. Then,

sol(11) = {(c/‖f‖) x : x ∈ suppv₁(f)}.

Proof. ⊆: Fix an arbitrary y ∈ sol(11). We will show that x := (‖f‖/c) y ∈ suppv₁(f). For every z ∈ B_X with f(z) > 0, (c/f(z)) z is in the feasible region of (11) since f((c/f(z)) z) = c; therefore, ‖y‖ ≤ (c/f(z))‖z‖ ≤ c/f(z). Now, if we take a sequence (z_n)_{n∈N} ⊆ B_X with f(z_n) → ‖f‖, we obtain ‖y‖ ≤ c/‖f‖. On the other hand, since y is feasible, c ≤ f(y) ≤ ‖f‖‖y‖, so ‖y‖ ≥ c/‖f‖. As a consequence, ‖y‖ = c/‖f‖ and f(y) = c, which means that ‖x‖ = 1 and f(x) = ‖f‖, that is, x ∈ suppv₁(f).

⊇: Conversely, fix an arbitrary x ∈ suppv₁(f). Then, f((c/‖f‖) x) = c, so (c/‖f‖) x is in the feasible region of problem (11), that is, (c/‖f‖) x is a feasible solution. Next, take y as another feasible solution of (11). Then, c ≤ f(y) ≤ ‖f‖‖y‖; hence, ‖y‖ ≥ c/‖f‖ = ‖(c/‖f‖) x‖. This shows that (c/‖f‖) x ∈ sol(11); in other words, (c/‖f‖) x is an optimal solution.

Notice that the feasible region of Problem 8, c ≤ f(x) ≤ d, can be rewritten as |f(x) − (c+d)/2| ≤ (d−c)/2; hence, Problem 8 is of the same form as Problem 3 whenever c < d. In the case that c = d, Problem 8 is a particular case of Problem 1. In fact, Problem 3 can be rewritten as follows.
Problem 9. Let G ∈ R^{m×n} with rows G₁, …, G_m, g ∈ R^m \ {0}, and 0 < D < ‖g‖∞. Solve

min ‖ψ‖₂ subject to g_i − D ≤ G_i ψ ≤ g_i + D, i = 1, …, m, for ψ ∈ R^n.

If, under the settings of Problem 9, we assume that g₁ = ⋯ = g_m > 0, which is consistent with the design of optimal MRI coils according to [14,16,18,19], then each constraint g_i − D ≤ G_i ψ ≤ g_i + D is of the same form as the feasible region of (11). Observe that, under this assumption, G_i ≠ 0 for all i = 1, …, m (otherwise, the feasible region would be empty). In this situation, the infinite-dimensional generalization of Problem 9 is given as follows.
Problem 10. Let X be a Banach Space, f_i ∈ X* \ {0}, i = 1, …, m, and 0 < c ≤ d. Solve

min ‖x‖ subject to c ≤ f_i(x) ≤ d for all i = 1, …, m, for x ∈ X.

Notice that, in the case c < d, Problem 10 is a particular case of Problem 4 since one can define the following continuous linear operator:

T : X → (K^m, ‖·‖∞), T(x) := (f₁(x), …, f_m(x)). (15)

Then, the feasible region of Problem 10 is precisely {x ∈ X : ‖T(x) − g‖∞ ≤ D}, where g := ((c+d)/2, …, (c+d)/2) and D := (d−c)/2. If c = d, then, by using the same operator T given in (15), it can be seen that Problem 10 is a particular case of Problem 2. We will strongly rely on Lemma 4 to approach the optimal solutions of Problem 10.

Theorem 4. Let X be a Banach Space, f_i ∈ X* \ {0}, i = 1, …, m, and 0 < c ≤ d. If there exist i ∈ {1, …, m} and x ∈ suppv₁(f_i) such that (c/‖f_i‖) x belongs to the feasible region of Problem 10, then (c/‖f_i‖) x is an optimal solution of Problem 10.
Proof. In the first place, notice that (c/‖f_i‖) x is a feasible solution of Problem 10, that is, it belongs to the feasible region, simply because, by hypothesis, c ≤ f_k((c/‖f_i‖) x) ≤ d for all k = 1, …, m. Let z ∈ X be another feasible solution of Problem 10. In particular, c ≤ f_i(z) ≤ d; therefore, if we consider Problem 8 for f_i, we have that (c/‖f_i‖) x is an optimal solution of such a problem in view of Lemma 4, so ‖(c/‖f_i‖) x‖ ≤ ‖z‖. As a consequence, (c/‖f_i‖) x is an optimal solution of Problem 10.
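In the finite-dimensional Hilbert setting, suppv₁(f_i) = {f_i/‖f_i‖₂}, so Theorem 4 suggests a simple candidate search: test, for each i, whether (c/‖f_i‖²) f_i is feasible. The data below are hypothetical, and the Python/NumPy sketch (ours) only illustrates the theorem rather than providing a general solver; when no candidate is feasible, the theorem simply does not apply:

```python
import numpy as np

# Hypothetical functionals f_i (rows of F) and box constraints c <= f_i(x) <= d.
F = np.array([[1.0, 0.5],
              [0.9, 0.6],
              [1.1, 0.4]])
c, d = 1.0, 2.0

# Theorem 4 candidate for each i: x_i = (c / ||f_i||^2) f_i, the scaled
# 1-supporting vector of f_i, which satisfies f_i(x_i) = c.
best = None
for f in F:
    x = (c / (f @ f)) * f
    vals = F @ x
    if np.all(vals >= c - 1e-12) and np.all(vals <= d + 1e-12):
        if best is None or np.linalg.norm(x) < np.linalg.norm(best):
            best = x
print(best)
```

For these data only the candidate built from the second row is feasible, and Theorem 4 certifies it as an optimal solution of the box-constrained minimum-norm problem.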

Analytical Solution of Problems 5 and 6 in the Hilbert Space Context
The solution of the finite-dimensional Tykhonov regularization is well known in the literature of Optimization Theory. Here, we present a fine argument to solve the infinite-dimensional Tykhonov regularization in the Hilbert space context, whose formulation follows.
Problem 11 (Infinite-dimensional Tykhonov regularization). Let H, K be Hilbert spaces, T : H → K a continuous linear operator, and k ∈ K. Solve

min ‖T(h) − k‖, for h ∈ H.

We will rely on the basic techniques of Hilbert spaces and Operator Theory.
Proposition 1. Let H, K be Hilbert spaces, T : H → K a continuous linear operator, and k ∈ K. Then:

1. If k ∈ T(H), then min{‖T(h) − k‖ : h ∈ H} = 0, and it is attained at any element of T⁻¹(k).

2.
If T has dense range, then inf{‖T(h) − k‖ : h ∈ H} = 0. Hence, Problem 11 has a solution if and only if k ∈ T(H).

Proof.
1. This is a straightforward exercise.

2.
Suppose that T has dense range, that is, the closure of T(H) is K. There exists a sequence (h_n)_{n∈N} in H such that (T(h_n))_{n∈N} converges to k. This means that inf{‖T(h) − k‖ : h ∈ H} = 0. Therefore, if Problem 11 has a solution h₀, then ‖T(h₀) − k‖ = 0, so k = T(h₀) ∈ T(H); the converse follows from the first item.

The following theorem is a refinement of Lemma 3.
Theorem 5. Let H be a Hilbert space. Let M be a closed subspace of H. Let h ∈ H. Then, min{‖h − m‖ : m ∈ M} = ‖p_{M⊥}(h)‖. Even more, the previous min is attained at p_M(h).

Proof. We can write, for every m ∈ M,

h − m = p_{M⊥}(h) + (p_M(h) − m),

with p_{M⊥}(h) ∈ M⊥ and p_M(h) − m ∈ M; hence,

‖h − m‖² = ‖p_{M⊥}(h)‖² + ‖p_M(h) − m‖² ≥ ‖p_{M⊥}(h)‖²,

with equality if and only if m = p_M(h). This shows that min{‖h − m‖ : m ∈ M} = ‖p_{M⊥}(h)‖, and the min is attained at p_M(h).
As an immediate consequence of Theorem 5, we obtain the following corollary, which fully solves Problem 11.
Corollary 3. Let H, K be Hilbert spaces. Let T : H → K be a continuous linear operator of closed range. For every k₀ ∈ K, we have the following:

1. min{‖T(h) − k₀‖ : h ∈ H} = ‖p_{T(H)⊥}(k₀)‖.
2. The above min is attained at any element of T⁻¹(p_{T(H)}(k₀)).
3. arg min{‖T(h) − k₀‖ : h ∈ H} is bounded if and only if ker(T) = {0}.
Proof. The first two items are a direct application of Theorem 5, so let us simply take care of the third item. Note that arg min{‖T(h) − k₀‖ : h ∈ H} = T⁻¹(p_{T(H)}(k₀)) = h₀ + ker(T) for any h₀ ∈ T⁻¹(p_{T(H)}(k₀)). As a consequence, arg min{‖T(h) − k₀‖ : h ∈ H} is bounded if and only if ker(T) = {0}.
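Corollary 3(1) is easy to check numerically: for an overdetermined system, the least-squares residual equals the distance from k₀ to the range of T. A sketch of ours in Python with NumPy (the paper itself works in MATLAB):

```python
import numpy as np

rng = np.random.default_rng(3)
G = rng.standard_normal((6, 3))           # overdetermined: g need not lie in range(G)
g = rng.standard_normal(6)

# Least-squares (Tykhonov) solution: minimizes ||G psi - g||_2.
psi, *_ = np.linalg.lstsq(G, g, rcond=None)

# Corollary 3(1): the minimal residual equals || g - p_{range(G)}(g) ||_2.
Q, _ = np.linalg.qr(G)                    # orthonormal basis of range(G)
residual_norm = np.linalg.norm(g - Q @ (Q.T @ g))
print(np.linalg.norm(G @ psi - g), residual_norm)   # the two coincide
```

Since G here has full column rank, ker(G) = {0} and the minimizer is unique and bounded, in agreement with Corollary 3(3).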

Discussion
We discuss in this section several aspects of the obtained results.

Bounded Tykhonov Regularization
The bounded Tykhonov regularization is a novel concept conceived in this manuscript, describing a different way to tackle the classical Tykhonov regularization that allows designing efficient MRI coils. The bounded Tykhonov regularization is described as the following multiobjective optimization problem.

Problem 12 (Finite-dimensional bounded Tykhonov regularization). Let G ∈ R^{m×n} and g ∈ R^m. Solve

min ‖Gψ − g‖₂, min ‖ψ‖₂, (17)

for ψ ∈ R^n.
The infinite-dimensional version of the bounded Tykhonov regularization follows now.
Problem 13 (Infinite-dimensional bounded Tykhonov regularization). Let X, Y be Banach Spaces, T : X → Y a continuous linear operator, and y ∈ Y. Solve

min ‖T(x) − y‖, min ‖x‖, (18)

for x ∈ X.
The infinite-dimensional bounded Tykhonov regularization in the Hilbert space context is described next.

Problem 14. Let H, K be Hilbert spaces, T : H → K a continuous linear operator, and k ∈ K. Solve

min ‖T(h) − k‖, min ‖h‖, (19)

for h ∈ H.
We will find a nontrivial Pareto-optimal solution of Problem 14.
Theorem 6. Let H, K be Hilbert spaces. Let T : H → K be a continuous linear operator of closed range. Let k ∈ K. Then, p_{ker(T)⊥}(T⁻¹(p_{T(H)}(k))) is a Pareto-optimal solution of (19).
Proof. Bear in mind that Theorem 2(3) ensures that p_{ker(T)⊥}(T⁻¹(p_{T(H)}(k))) is a singleton; denote by h₀ its unique element. By Corollary 3, ‖T(h₀) − k‖ ≤ ‖T(h) − k‖ for all h ∈ H. Suppose on the contrary that h₀ is not a Pareto-optimal solution of (19). There exists h₁ ∈ H satisfying one of the following two possibilities: either ‖T(h₁) − k‖ ≤ ‖T(h₀) − k‖ and ‖h₁‖ < ‖h₀‖, or ‖T(h₁) − k‖ < ‖T(h₀) − k‖ and ‖h₁‖ ≤ ‖h₀‖. In the first case, since ‖T(h₀) − k‖ is the minimum of ‖T(h) − k‖ over H, we obtain ‖T(h₁) − k‖ = ‖T(h₀) − k‖, so h₁ ∈ T⁻¹(p_{T(H)}(k)); however, h₀ is the minimum-norm element of T⁻¹(p_{T(H)}(k)) by Theorem 2, which contradicts ‖h₁‖ < ‖h₀‖. The second case is impossible because we have already proven that ‖T(h₀) − k‖ ≤ ‖T(h₁) − k‖. As a consequence of the two previous contradictions, we deduce that the singleton {h₀} is a Pareto-optimal solution of (19).
Under the settings of Theorem 6 and by taking into consideration Lemma 2, the reader should notice that ker(T*)⊥ = T(H) (recall that T has closed range); thus, p_{ker(T)⊥}(T⁻¹(p_{ker(T*)⊥}(k))) = p_{ker(T)⊥}(T⁻¹(p_{T(H)}(k))). Hence, p_{ker(T)⊥}(T⁻¹(p_{ker(T*)⊥}(k))) is a Pareto-optimal solution of (19). A more intuitive way of understanding Theorem 6 is the following. It is not hard to see, by keeping in mind Theorem 2, that p_{ker(T)⊥}(T⁻¹(p_{ker(T*)⊥}(k))) is the solution of

min ‖h‖ subject to T(h) = p_{ker(T*)⊥}(k).
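In finite dimensions, the Pareto-optimal point of Theorem 6 is once again delivered by the Moore–Penrose pseudoinverse. The sketch below (ours, in Python with NumPy) computes it and samples random perturbations, checking that none of them dominates it with respect to the two objectives ‖Gψ − g‖₂ and ‖ψ‖₂:

```python
import numpy as np

rng = np.random.default_rng(4)
G = rng.standard_normal((4, 6))
g = rng.standard_normal(4)

# Pareto-optimal point of Theorem 6 / finite-dimensional case:
# the minimum-norm least-squares solution, i.e. pinv(G) applied to g.
h_pareto = np.linalg.pinv(G) @ g

res0, nrm0 = np.linalg.norm(G @ h_pareto - g), np.linalg.norm(h_pareto)
for _ in range(1000):
    h = h_pareto + 0.1 * rng.standard_normal(6)
    res, nrm = np.linalg.norm(G @ h - g), np.linalg.norm(h)
    # No sampled point may weakly improve both objectives and strictly improve one.
    assert not (res <= res0 and nrm <= nrm0 and (res < res0 or nrm < nrm0))
print(res0, nrm0)
```

Random sampling is of course not a proof of Pareto optimality; the proof is Theorem 6 itself, and the loop merely illustrates that domination never occurs in practice.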

Precise Tykhonov Regularization
The bounded Tykhonov regularization might produce a solution of an excessively small norm. Sometimes, it is necessary to obtain a solution of the Tykhonov regularization with a certain predetermined norm. This is what we call the precise Tykhonov regularization.
The infinite-dimensional precise Tykhonov regularization in the Hilbert space context is described next. Observe that, in accordance with Corollary 3(2), the elements of T⁻¹(p_{T(H)}(k)) are precisely the solutions of the Tykhonov regularization (Problem 11).

Problem 17. Let H, K be Hilbert spaces, T : H → K a nonzero continuous linear operator with ker(T) ≠ {0}, k ∈ K, and α ≥ dist(0, T⁻¹(p_{T(H)}(k))). Solve

min ‖T(h) − k‖ subject to ‖h‖ = α, for h ∈ H.
Notice that Problem 17 is a single-objective optimization problem. We will find an optimal solution of Problem 17. For this, we will make use of several technical results from Banach Space Theory and Operator Theory. Recall that, in a vector space, a linear manifold is a translation of a subspace. The dimension of a linear manifold is, by definition, the dimension of the underlying subspace.

Theorem 7. Let X be a Banach Space. Let M ⊆ X be a linear manifold with dim(M) ≥ 1. If α > dist(0, M), then there exists m₀ ∈ M such that ‖m₀‖ = α.
Proof. On the one hand, since dist(0, M) := inf{‖m‖ : m ∈ M}, there exists m₁ ∈ M with dist(0, M) ≤ ‖m₁‖ < α. On the other hand, since dim(M) ≥ 1, we have that M is unbounded; thus, there exists m₂ ∈ M with ‖m₂‖ > α. Consider the continuous function

φ : [0, 1] → R, φ(t) := ‖(1 − t)m₁ + t m₂‖.

Notice that φ(0) = ‖m₁‖ < α and φ(1) = ‖m₂‖ > α. As a consequence, Bolzano's Theorem allows the existence of t₀ ∈ (0, 1) such that φ(t₀) = α. Finally, it only suffices to take m₀ := (1 − t₀)m₁ + t₀m₂.

Observe that Theorem 7 works with the exact same proof if we replace "linear manifold with dimension ≥ 1" with "unbounded convex subset". Theorem 7 can actually be accomplished in the Hilbert space setting with much less effort.

Remark 3. Let H, K be Hilbert spaces. Let T : H → K be a nonzero continuous linear operator such that ker(T) ≠ {0}. Fix k ∈ K, h₀ ∈ T⁻¹(p_{T(H)}(k)), and h₁ ∈ ker(T) \ {0}. For every t ∈ R, the element h₂ := p_{ker(T)⊥}(h₀) + t h₁ satisfies that h₂ − p_{ker(T)⊥}(h₀) = t h₁ ∈ ker(T), and, by virtue of Theorem 2,

‖h₂‖² = ‖p_{ker(T)⊥}(h₀)‖² + t²‖h₁‖² = dist(0, T⁻¹(p_{T(H)}(k)))² + t²‖h₁‖².

Remark 3 allows easily solving the infinite-dimensional precise Tykhonov regularization in the Hilbert space context, that is, Problem 17.

Theorem 8. Let H, K be Hilbert spaces, T : H → K a nonzero continuous linear operator with ker(T) ≠ {0}, k ∈ K, and α ≥ δ := dist(0, T⁻¹(p_{T(H)}(k))). For every h₀ ∈ T⁻¹(p_{T(H)}(k)) and every h₁ ∈ ker(T) \ {0}, an optimal solution of Problem 17 is given by

p_{ker(T)⊥}(h₀) + (√(α² − δ²)/‖h₁‖) h₁.

Proof. First off, notice that, according to Remark 3, the element h₂ := p_{ker(T)⊥}(h₀) + (√(α² − δ²)/‖h₁‖) h₁ satisfies ‖h₂‖² = δ² + (α² − δ²) = α², so h₂ is a feasible solution of Problem 17. Moreover, T(h₂) = T(h₀) = p_{T(H)}(k); thus, in view of Corollary 3, h₂ attains the unconstrained minimum of ‖T(h) − k‖ over the whole of H. This means that p_{ker(T)⊥}(h₀) + (√(α² − δ²)/‖h₁‖) h₁ is an optimal solution of Problem 17.
Corollary 5. Let G ∈ R^{m×n} with ker(G) ≠ {0} and g ∈ R^m, and let α ≥ δ := dist(0, G⁻¹(p_{ker(Gᵗ)⊥}(g))). For every ψ₀ ∈ G⁻¹(p_{ker(Gᵗ)⊥}(g)) and every ψ₁ ∈ ker(G) \ {0}, an optimal solution of the finite-dimensional precise Tykhonov regularization is given by p_{ker(G)⊥}(ψ₀) + (√(α² − δ²)/‖ψ₁‖₂) ψ₁.

A MATLAB encoding for Corollary 5 is available in Appendix D.
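Theorem 8 translates into a short recipe: obtain the minimum-norm Tykhonov minimizer via the pseudoinverse, then pad it with a unit kernel direction until the prescribed norm α is reached. A sketch of this idea (ours, in Python with NumPy; Appendix D contains the paper's MATLAB version):

```python
import numpy as np

rng = np.random.default_rng(5)
G = rng.standard_normal((3, 6))            # wide matrix: ker(G) is nontrivial
g = rng.standard_normal(3)

p = np.linalg.pinv(G) @ g                  # p = p_{ker(G)perp}(psi_0), minimum-norm minimizer
delta = np.linalg.norm(p)                  # delta = dist(0, G^{-1}(p_{ker(G^t)perp}(g)))
alpha = delta + 1.0                        # any prescribed norm alpha >= delta works

# Theorem 8: pad p with a unit kernel direction psi_1 until the norm reaches alpha.
_, s, Vt = np.linalg.svd(G)
psi1 = Vt[-1]                              # unit vector of ker(G) (last right-singular vector)
h = p + np.sqrt(alpha**2 - delta**2) * psi1

print(np.linalg.norm(h), np.linalg.norm(G @ h - g))  # norm alpha; residual unchanged
```

Since p ⟂ ker(G), the Pythagorean identity ‖h‖² = δ² + (α² − δ²) = α² holds exactly, while the kernel component leaves the residual ‖Gh − g‖₂ untouched.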

A Generalization of Theorem 5
As the reader may notice, both Lemma 3 and Theorem 5 are technical results crucial for the development of this manuscript. The following theorem generalizes them.

Theorem 9. Let X be a Banach Space. Let P : X → X be a continuous linear projection such that ‖I − P‖ = 1. Let x₀ ∈ X. Then, dist(x₀, P(X)) = min{‖x₀ − y‖ : y ∈ P(X)} = ‖x₀ − P(x₀)‖. Even more, the previous min is attained at P(x₀).

Conclusions
Let us summarize all the optimization problems we have dealt with throughout this manuscript.The "inclusion" symbol means that the "contained" problem is a particular case of the "continent" problem.The "equal" symbol means that the two involved problems are equivalent in the sense that they have the same set of optimal solutions:

Corollary 4. Let G ∈ R^{m×n} and g ∈ R^m. Then, p_{ker(G)⊥}(G⁻¹(p_{ker(Gᵗ)⊥}(g))) is a Pareto-optimal solution of (17).

Proof. It only suffices to call on Theorem 6 by taking H := ℓ²_n, K := ℓ²_m, T := G, and k := g.

A MATLAB encoding for Corollary 4 is available in Appendix C.

Finally, recall that what Corollary 3(2) is saying is that the set of constraints of the problem min ‖h‖ subject to T(h) = p_{ker(T*)⊥}(k), namely {h ∈ H : T(h) = p_{ker(T*)⊥}(k)}, is indeed the set of solutions of min ‖T(h) − k‖, h ∈ H.