Exact Solutions to the Maxmin Problem max ∥Ax∥ Subject to ∥Bx∥ ⩽ 1

In this manuscript we provide an exact solution to the maxmin problem max ∥Ax∥ subject to ∥Bx∥ ⩽ 1, where A and B are real matrices. This problem arises from a remodeling of max ∥Ax∥ subject to min ∥Bx∥, because the latter problem has no solution. Our mathematical method comes from Abstract Operator Theory, whose strong machinery allows us to reduce the first problem to max ∥Cx∥ subject to ∥x∥ ⩽ 1, which can be solved exactly by relying on supporting vectors. Finally, as appendices, we provide two applications of our solution: first, we construct a truly optimal minimum stored-energy Transcranial Magnetic Stimulation (TMS) coil, and second, we find an optimal geolocation involving statistical variables.

1. Introduction

1.1. Scope. Many problems in different disciplines, like Physics, Statistics, Economics or Engineering, can be modeled by using matrices and their norms (see for instance [7,11]).
Here in this article we make use of supporting vectors to reformulate and solve problems and situations that commonly appear in the previously mentioned disciplines.
Supporting vectors are widely known in the literature of Geometry of Banach Spaces and Operator Theory. They are commonly known as the unit vectors at which an operator attains its norm. In the matrix setting, the supporting vectors of a matrix A are the solutions of max ∥Ax∥ subject to ∥x∥ = 1, and the generalized supporting vectors of a sequence of matrices (A_i)_{i∈N} are the solutions of max Σ_i ∥A_i x∥² subject to ∥x∥ = 1. This optimization problem clearly generalizes the previous one.
A first application of supporting vectors was given in [2], where a TMS coil was truly optimally designed. In that paper a three-component problem is stated, but only the case of one component was solved. In [5] the three-component case was stated and solved by means of the generalized supporting vectors. Moreover, an optimal location problem using Principal Component Analysis was solved by means of generalized supporting vectors.
For other perspectives on supporting vectors and generalized supporting vectors, we refer the reader to [6].
1.2. Novelties. In this subsection we enumerate the novelties provided by this work: (1) We provide an exact solution of an optimization problem, not a heuristic method for approaching it. Specifically, we solve the maxmin problem (1.1) max ∥Ax∥ subject to ∥Bx∥ ⩽ 1. (2) A MATLAB code is provided for computing the solution to the maxmin problem.
(3) Our solution applies to the design of truly optimal minimum stored-energy TMS coils and to finding optimal geolocations involving statistical variables. (4) This is an interdisciplinary work that combines pure abstract nontrivial theorems, together with their proofs, and programming codes, together with their results, to directly apply them to real-life situations.

1.3. Preliminaries. A multiobjective optimization problem has the form

(P)  min f_j(x), 1 ⩽ j ⩽ l,  subject to g_i(x) ⩽ 0, 1 ⩽ i ⩽ k,

where f_1, ..., f_l, g_1, ..., g_k : X → R are functions defined on a nonempty set X. Two special sets are associated to P: the set of feasible solutions of P,

fea(P) := {x ∈ X : g_i(x) ⩽ 0 ∀ 1 ⩽ i ⩽ k},

and the set of optimal solutions of P,

sol(P) := {x ∈ fea(P) : f_j(x) ⩽ f_j(y) ∀ y ∈ fea(P) ∀ 1 ⩽ j ⩽ l}.
Any multiobjective optimization problem can be rewritten as the intersection of single-objective optimization problems. That is, if, for 1 ⩽ j ⩽ l,

(P_j)  min f_j(x)  subject to g_i(x) ⩽ 0, 1 ⩽ i ⩽ k,

then
• fea(P) = fea(P_j) for all 1 ⩽ j ⩽ l, and
• sol(P) = sol(P_1) ∩ ⋯ ∩ sol(P_l).
It commonly happens with multiobjective optimization problems that sol(P) = sol(P_1) ∩ ⋯ ∩ sol(P_l) = ∅. In this situation, we have to search for another multiobjective optimization problem which has a solution and still models accurately the real-life situation from which Problem P comes. In order to avoid the lack of solutions, it is a common practice to reduce the multiobjective optimization problem to a single-objective optimization problem (increasing the number of constraints). Two typical reformulations are the following:

(1.2)  min f_j(x)  subject to g_i(x) ⩽ 0, 1 ⩽ i ⩽ k, and f_p(x) ⩽ a_p for p ≠ j,

(1.3)  min h(f_1(x), ..., f_l(x))  subject to g_i(x) ⩽ 0, 1 ⩽ i ⩽ k,

where the a_p are appropriately chosen constants and h : R^l → R is a function conveniently chosen (usually an increasing function on each coordinate).
On the other hand, observe that if ϕ : Y → X is a bijection, then it is easy to check that fea(P) = ϕ(fea(Q)) and sol(P) = ϕ(sol(Q)), where

(1.4)  (Q)  min f_j(ϕ(y)), 1 ⩽ j ⩽ l,  subject to g_i(ϕ(y)) ⩽ 0, 1 ⩽ i ⩽ k.

Also note that fea(P) = fea(R) and sol(P) = sol(R), where

(1.5)  (R)  min ϕ_j(f_j(x)), 1 ⩽ j ⩽ l,  subject to χ_i(g_i(x)) ⩽ χ_i(0), 1 ⩽ i ⩽ k,

and ϕ_j, χ_i : R → R are strictly increasing for 1 ⩽ j ⩽ l and 1 ⩽ i ⩽ k.
The original maxmin optimization problem has the form

(M)  max g(x) & min f(x),

where f, g : X → (0, ∞) are functions and X is a nonempty set. Notice that sol(M) = arg max g(x) ∩ arg min f(x).
Many real-life problems can be mathematically modeled as a maxmin. However, this kind of multiobjective optimization problem may have the inconvenience of lacking a solution. If this occurs, then we need to remodel the real-life problem with another mathematical optimization problem that has a solution and still models the real-life problem very accurately.
In case sol(M) = ∅, [2, Theorem 5.1] suggests that the following optimization problems are good alternatives to keep modeling the real-life problem accurately:
• max g(x)/f(x) subject to f(x) ≠ 0, and min f(x)/g(x) subject to g(x) ≠ 0 (here we have used the second typical reformulation (1.3));
• max g(x) subject to f(x) ⩽ a, where a is an appropriately chosen constant (here we have used the first typical reformulation (1.2));
• min f(x) subject to g(x) ⩾ b, where b is an appropriately chosen constant (here we have used the first typical reformulation (1.2)).
We will prove in the third section that all four previous reformulations are equivalent for the original maxmin max ∥Ax∥ & min ∥Bx∥. In the fourth section, we will solve the reformulation max ∥Ax∥ subject to ∥Bx∥ ⩽ 1.

2. Characterizations of operators with null kernel
Kernels will play a fundamental role towards solving the general reformulated maxmin (3.2) as shown in the next section.This is why we first study the operators with null kernel.
Throughout this section, all monoid actions considered will be left, all rngs will be associative, all rings will be unitary rngs, all absolute semi-values and all semi-norms will be non-zero, all modules over rings will be unital, all normed spaces will be real or complex and all algebras will be unitary and complex.
Recall that an element p of a monoid is called involutive if p 2 = 1.Given a rng R, an involution is an additive, antimultiplicative, composition-involutive map * : R → R. A * -rng is a rng endowed with an involution.
The categorical concept of monomorphism will be very present in this work. Recall that a morphism f ∈ hom_C(A, B) between objects A and B in a category C is called a monomorphism provided that, for every C_0 ∈ ob(C) and every g, h ∈ hom_C(C_0, A), the equality f ∘ g = f ∘ h implies g = h. In particular, if f ∈ hom_C(A, B) is a section, that is, there exists g ∈ hom_C(B, A) such that g ∘ f = I_A, then f is a monomorphism. As a consequence, the elements of hom_C(A, A) that have a left inverse are monomorphisms. In some categories, the last condition suffices to characterize monomorphisms. This is the case, for instance, of the category of vector spaces over a division ring.
Recall that CL(X, Y) stands for the space of continuous linear operators from X to Y.

Theorem 2.2. Let X and Y be normed spaces and T ∈ CL(X, Y).
(1) If T is a section, then ker(T) = {0}.
(2) If X and Y are Banach spaces, T(X) is complemented in Y and ker(T) = {0}, then T is a section.

Proof.
(2) Consider T : X → T(X). Since T(X) is complemented in Y, it is closed in Y, and thus it is a Banach space. Therefore, the Open Mapping Theorem assures that T : X → T(X) is an isomorphism. Let T^{-1} : T(X) → X be the inverse of T : X → T(X). Now consider P : Y → Y to be a continuous linear projection such that P(Y) = T(X). Finally, it suffices to define the left inverse T^{-1} ∘ P ∈ CL(Y, X), since (T^{-1} ∘ P) ∘ T = I_X.

□
We will finalize this section with a trivial example: any matrix A with ker(A) = {(0, 0)} is left-invertible by Theorem 2.2(2). In the next section we study the maxmin problem for matrices A, B ∈ R^{m×n}.
Recall that B(X, Y) stands for the space of bounded operators from X to Y.

Theorem 3.2. Let X and Y be Banach spaces and T, S ∈ B(X, Y). If the first general reformulated maxmin problem max ∥T(x)∥ subject to ∥S(x)∥ ⩽ 1 has a solution, then ker(S) ⊆ ker(T).

Proof. If ker(S) \ ker(T) ≠ ∅, then it suffices to consider the sequence (nx_0)_{n∈N} for x_0 ∈ ker(S) \ ker(T), since ∥S(nx_0)∥ = 0 ⩽ 1 for all n ∈ N and ∥T(nx_0)∥ = n∥T(x_0)∥ → ∞ as n → ∞. □

The general maxmin (3.1) can also be reformulated by using the second typical reformulation (1.3). This way we obtain max ∥T(x)∥/∥S(x)∥ subject to ∥S(x)∥ ≠ 0.

Theorem 3.3. Let X and Y be Banach spaces and T, S ∈ B(X, Y). If the second general reformulated maxmin problem max ∥T(x)∥/∥S(x)∥ subject to ∥S(x)∥ ≠ 0 has a solution, then ker(S) ⊆ ker(T).
Proof. Suppose there exists x_0 ∈ ker(S) \ ker(T). Then fix an arbitrary x_1 ∈ X \ ker(S). Notice that ∥S(x_1 + nx_0)∥ = ∥S(x_1)∥ ≠ 0 for all n ∈ N, while ∥T(x_1 + nx_0)∥ ⩾ n∥T(x_0)∥ − ∥T(x_1)∥ → ∞ as n → ∞, so the quotient ∥T(x)∥/∥S(x)∥ is unbounded and the problem has no solution. □

The next theorem shows that the previous two reformulations are in fact equivalent.
Theorem 3.4.Let X and Y be Banach spaces and T, S ∈ B(X, Y ).
Notice that x_0 ∉ ker(S) in virtue of Theorem 3.2. We spare the reader the details of the proof of the previous theorem. Notice that if ker(S) \ ker(T) ≠ ∅, then arg min_{∥T(x)∥⩾1} ∥S(x)∥ = ker(S) \ {x ∈ X : ∥T(x)∥ < 1}. However, if ker(S) ⊆ ker(T), then all four reformulations are equivalent, as shown in the next theorem, whose proof's details we again spare the reader.
4.1. First case: S is an isomorphism over its image. Bearing in mind Theorem 3.6, we can focus on the first reformulation proposed at the beginning of the previous section: max ∥T(x)∥ subject to ∥S(x)∥ ⩽ 1. The idea we propose to solve this reformulation is to make use of supporting vectors (see [2,3,5,6]). Recall that if R : X → Y is a continuous linear operator between Banach spaces, then the set of supporting vectors of R is defined by suppv(R) := arg max_{∥x∥⩽1} ∥R(x)∥.
The idea of using supporting vectors is that the optimization problem max ∥R(x)∥ ∥x∥ ⩽ 1 whose solutions are by definition the supporting vectors of R, can be easily solved theoretically and computationally (see [5]).
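For the Euclidean norm, suppv(R) consists of the unit eigenvectors of RᵀR associated with its largest eigenvalue. The following sketch illustrates this (illustrative Python, not the MATLAB code provided with the paper; the function name and test matrix are assumptions):

```python
import numpy as np

def supporting_vectors(R):
    """Euclidean-norm supporting vectors of a matrix R:
    unit eigenvectors of R^T R for its largest eigenvalue."""
    w, V = np.linalg.eigh(R.T @ R)   # eigenvalues in ascending order
    lam = w[-1]                      # largest eigenvalue = ||R||_2^2
    vecs = V[:, np.isclose(w, lam)]  # eigenvectors attaining the norm (up to sign)
    return np.sqrt(lam), vecs

# Small example: for R = diag(3, 1), the norm is 3, attained on the first axis.
R = np.array([[3.0, 0.0], [0.0, 1.0]])
norm_R, sv = supporting_vectors(R)
```

Here `norm_R` equals the operator norm of R and the columns of `sv` span the supporting vectors (each determined only up to sign).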
Our first result in this direction considers the case where S is an isomorphism over its image.
Theorem 4.1. Let X and Y be Banach spaces and T, S ∈ B(X, Y). Suppose that S is an isomorphism over its image and S^{-1} : S(X) → X denotes its inverse. Suppose also that S(X) is complemented in Y, being p : Y → Y a continuous linear projection onto S(X). Then

max_{∥S(x)∥⩽1} ∥T(x)∥ ⩽ max_{∥y∥⩽1} ∥(T ∘ S^{-1} ∘ p)(y)∥.

If, in addition, ∥p∥ = 1, then

S(arg max_{∥S(x)∥⩽1} ∥T(x)∥) = arg max_{∥y∥⩽1} ∥(T ∘ S^{-1} ∘ p)(y)∥ ∩ S(X).

Proof. We will show first that max_{∥S(x)∥⩽1} ∥T(x)∥ ⩽ max_{∥y∥⩽1} ∥(T ∘ S^{-1} ∘ p)(y)∥. Indeed, let x ∈ X with ∥S(x)∥ ⩽ 1 and set y_0 := S(x). Since ∥S(x)∥ = ∥y_0∥ ⩽ 1 and p(y_0) = y_0, we obtain (T ∘ S^{-1} ∘ p)(y_0) = T(x). Now assume that ∥p∥ = 1. We will show that S(arg max_{∥S(x)∥⩽1} ∥T(x)∥) = arg max_{∥y∥⩽1} ∥(T ∘ S^{-1} ∘ p)(y)∥ ∩ S(X).

□
Notice that, in the settings of Theorem 4.1, S^{-1} ∘ p is a left inverse of S; in other words, S is a section, as in Theorem 2.2(2).
Taking into consideration that every closed subspace of a Hilbert space is 1-complemented (see [1,8] to realize that this fact characterizes Hilbert spaces of dimension ⩾ 3), we directly obtain the following corollary.
Corollary 4.2. Let X be a Banach space, Y a Hilbert space and T, S ∈ B(X, Y) such that S is an isomorphism over its image and S^{-1} : S(X) → X its inverse. Then the conclusions of Theorem 4.1 hold with p : Y → Y the orthogonal projection onto S(X), which satisfies ∥p∥ = 1.

4.2. The Moore-Penrose inverse. If B ∈ K^{m×n}, then the Moore-Penrose inverse of B, denoted by B⁺, is the only matrix B⁺ ∈ K^{n×m} which verifies the following: BB⁺B = B, B⁺BB⁺ = B⁺, (BB⁺)* = BB⁺ and (B⁺B)* = B⁺B. If ker(B) = {0}, then B⁺ is a left inverse of B. Even more, BB⁺ is the orthogonal projection onto the range of B, thus we have the following scholium of Corollary 4.2.
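The four defining conditions can be checked numerically with `numpy.linalg.pinv` (an illustrative sketch; the random full-column-rank test matrix is an assumption, not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 3))   # full column rank a.s., so ker(B) = {0}
Bp = np.linalg.pinv(B)            # Moore-Penrose inverse B^+

# The four defining Penrose conditions:
assert np.allclose(B @ Bp @ B, B)
assert np.allclose(Bp @ B @ Bp, Bp)
assert np.allclose((B @ Bp).T, B @ Bp)    # BB^+ is self-adjoint
assert np.allclose((Bp @ B).T, Bp @ B)    # B^+B is self-adjoint

# ker(B) = {0}  =>  B^+ is a left inverse, and BB^+ projects onto range(B)
assert np.allclose(Bp @ B, np.eye(3))
```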
According to the previous scholium, in its settings, if y_0 ∈ arg max_{∥y∥₂⩽1} ∥AB⁺y∥₂ and there exists x_0 ∈ R^n such that y_0 = Bx_0, then x_0 ∈ arg max_{∥Bx∥₂⩽1} ∥Ax∥₂ and x_0 can be computed as x_0 = B⁺y_0.
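A minimal numerical sketch of this computation (assuming Euclidean norms and ker(B) = {0}; the function name and test matrices are illustrative, not from the paper):

```python
import numpy as np

def maxmin_full_rank(A, B):
    """Sketch of the scholium: for ker(B) = {0}, approach
    max ||Ax||_2 s.t. ||Bx||_2 <= 1 through C = A B^+."""
    Bp = np.linalg.pinv(B)
    C = A @ Bp
    w, V = np.linalg.eigh(C.T @ C)   # ascending eigenvalues of C^T C
    y0 = V[:, -1]                    # unit maximizer of ||Cy|| over ||y|| <= 1
    x0 = Bp @ y0                     # candidate solution; valid when y0 ∈ range(B)
    return x0, float(np.sqrt(max(w[-1], 0.0)))

# Sanity check: with B = I the problem is max ||Ax|| over ||x|| <= 1.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
B = np.eye(2)
x0, val = maxmin_full_rank(A, B)
```

Note the caveat from the scholium: the computed x_0 is guaranteed to be optimal only when the maximizing y_0 lies in the range of B (automatic here, since B is square and invertible).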

4.3. Second case: S is not an isomorphism over its image. What happens if S is not an isomorphism over its image? The next theorem answers this question.

Proof. Let x_0 ∈ arg max_{∥S(x)∥⩽1} ∥T(x)∥. Fix an arbitrary y ∈ X with ∥S(y)∥ ⩽ 1. Then ∥S(π(y))∥ = ∥S(y)∥ ⩽ 1, so the maximality of x_0 applies to π(y). This shows that x_0 ∈ arg max_{∥S(x)∥⩽1} ∥T(x)∥. □

Notice that, in the settings of Theorem 4.4, if S(X) is closed in Y, then S is an isomorphism over its image S(X), and thus in this case Theorem 4.4 reduces the reformulated maxmin to Theorem 4.1.

4.4. Characterizing when the finite dimensional reformulated maxmin has a solution. The final part of this section is aimed at characterizing when the finite dimensional reformulated maxmin has a solution.
Lemma 4.5.Let S : X → Y be a linear operator between finite dimensional Banach spaces X and Y .If (x n ) n∈N is a sequence in {x ∈ X : ∥S(x)∥ ⩽ 1}, then there exists a sequence (z n ) n∈N in ker(S) such that (x n + z n ) n∈N is bounded.

Note that

∥S̃(x_n + ker(S))∥ = ∥S(x_n)∥ ⩽ 1 for all n ∈ N,

therefore the sequence (x_n + ker(S))_{n∈N} is bounded in X/ker(S), because X/ker(S) is finite dimensional and the induced operator S̃ has null kernel, so its inverse is continuous. Finally, choose z_n ∈ ker(S) such that ∥x_n + z_n∥ < ∥x_n + ker(S)∥ + 1/n for all n ∈ N; then (x_n + z_n)_{n∈N} is bounded. □

Let us now return to the general reformulated maxmin max ∥T(x)∥ subject to ∥S(x)∥ ⩽ 1, where X and Y are Banach spaces and T, S ∈ B(X, Y) with ker(S) ⊆ ker(T). Notice that if (e_i)_{i∈I} is a Hamel basis of X, then (e_i + ker(S))_{i∈I} is a generator system of X/ker(S). By making use of Zorn's Lemma, it can be shown that (e_i + ker(S))_{i∈I} contains a Hamel basis of X/ker(S). Observe that a subset C of X/ker(S) is linearly independent if and only if S̃(C) is a linearly independent subset of Y.
In the finite dimensional case, we have B : R^n → R^m. If {e_1, ..., e_n} denotes the canonical basis of R^n, then {e_1 + ker(B), ..., e_n + ker(B)} is a generator system of R^n/ker(B). This generator system contains a basis of R^n/ker(B), so let {e_{j_1} + ker(B), ..., e_{j_l} + ker(B)} be a basis of R^n/ker(B). Note that the induced maps satisfy B̃(e_{j_k} + ker(B)) = Be_{j_k} and Ã(e_{j_k} + ker(B)) = Ae_{j_k} for every k ∈ {1, ..., l}. Therefore, the matrix associated to the linear map induced by B can be obtained from the matrix B by removing the columns corresponding to the indices {1, ..., n} \ {j_1, ..., j_l}, in other words, the matrix [Be_{j_1} ⋯ Be_{j_l}]. Similarly, the matrix associated to the linear map induced by A is [Ae_{j_1} ⋯ Ae_{j_l}]. A subset C of R^n/ker(B) is linearly independent if and only if B̃(C) is a linearly independent subset of R^m. As a consequence, in order to obtain the basis {e_{j_1} + ker(B), ..., e_{j_l} + ker(B)}, it suffices to look at the rank of B and consider the columns of B that realize that rank, which automatically gives us the matrix associated to B̃, that is, [Be_{j_1} ⋯ Be_{j_l}]. Finally, if x = (x_1, ..., x_l) solves the reduced problem, then the vector z ∈ R^n supported on the indices {j_1, ..., j_l} with entries x_1, ..., x_l satisfies z + ker(B) = Σ_{k=1}^l x_k (e_{j_k} + ker(B)).
To simplify the notation, we can define the map taking x = (x_1, ..., x_l) to the vector z described right above.
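The column-selection step described above can be sketched as follows (illustrative Python; the greedy rank test is one possible way to pick columns realizing the rank of B, and the example matrices are assumptions):

```python
import numpy as np

def reduce_columns(A, B):
    """Pick indices j_1 < ... < j_l so that the corresponding columns of B
    represent a basis of R^n / ker(B); the reduced problem keeps only those
    columns. (The theory requires ker(B) ⊆ ker(A).)"""
    idx, rank = [], 0
    for j in range(B.shape[1]):
        if np.linalg.matrix_rank(B[:, idx + [j]]) > rank:
            idx.append(j)
            rank += 1
    return A[:, idx], B[:, idx], idx

# Column 1 of B is a multiple of column 0, so only columns 0 and 2 survive.
A = np.array([[1.0, 2.0, 3.0]])
B = np.array([[1.0, 2.0, 3.0],
              [0.0, 0.0, 1.0]])
Ared, Bred, idx = reduce_columns(A, B)
```

Here ker(B) is spanned by (-2, 1, 0), which also lies in ker(A), so the reduction is legitimate for this pair.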
4.6.Conclusions: schematic summary.This subsection compiles all the results from the previous subsections and defines the structure of the algorithm that solves the maxmin.
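The overall scheme can be sketched as follows (an illustrative Python transcription under Euclidean norms, not the paper's MATLAB code; tolerances and names are assumptions):

```python
import numpy as np

def solve_maxmin(A, B):
    """Schematic algorithm for max ||Ax||_2 s.t. ||Bx||_2 <= 1:
    1) check ker(B) ⊆ ker(A), otherwise the maximum does not exist;
    2) pass to C = A B^+ (the problem factors through the quotient by ker(B));
    3) take a supporting vector of C and pull it back with B^+."""
    # 1) kernel inclusion test via an orthonormal basis of ker(B)
    _, s, Vt = np.linalg.svd(B)
    rank = int(np.sum(s > 1e-12 * max(s[0], 1.0)))
    kerB = Vt[rank:].T                     # columns span ker(B)
    if kerB.size and not np.allclose(A @ kerB, 0):
        raise ValueError("ker(B) not contained in ker(A): no solution")
    # 2)-3) reduced problem through C = A B^+
    Bp = np.linalg.pinv(B)
    C = A @ Bp
    w, V = np.linalg.eigh(C.T @ C)
    y0 = V[:, -1]                          # supporting vector of C
    return Bp @ y0, float(np.sqrt(max(w[-1], 0.0)))

# Example with nontrivial kernel: ker(B) = span{(1, -1)} ⊆ ker(A),
# and max |x1 + x2| subject to |x1 + x2| <= 1 equals 1.
x0, val = solve_maxmin(np.array([[1.0, 1.0]]), np.array([[1.0, 1.0]]))
```

For feasible x one has ∥Cy∥ = ∥Ax∥ with y = Bx, since x − B⁺Bx ∈ ker(B) ⊆ ker(A); this is why the pullback x_0 = B⁺y_0 (defined up to ker(B) and sign) attains the maximum.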

5. Maxmin involving more operators
Let X and Y be Banach spaces and (T_n)_{n∈N} and (S_n)_{n∈N} sequences of continuous linear operators from X to Y. The maxmin max Σ_{n=1}^∞ ∥T_n(x)∥² & min Σ_{n=1}^∞ ∥S_n(x)∥² can be reformulated (recall the second typical reformulation) as max Σ_{n=1}^∞ ∥T_n(x)∥² / Σ_{n=1}^∞ ∥S_n(x)∥², which can be transformed into a regular maxmin like in (3.1) by considering the operators T(x) := (T_n(x))_{n∈N} and S(x) := (S_n(x))_{n∈N}. Observe that for the operators T and S to be well defined it is sufficient that (∥T_n∥)_{n∈N} and (∥S_n∥)_{n∈N} be in ℓ₂.
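For finitely many matrices this construction amounts to vertical stacking, since ∥[T_1; …; T_n]x∥₂² = Σ_i ∥T_i x∥₂². A quick check (illustrative matrices, assumed for the example):

```python
import numpy as np

# Two operators T_1, T_2 combined into a single stacked operator T.
T1 = np.array([[1.0, 0.0]])
T2 = np.array([[0.0, 2.0]])
T = np.vstack([T1, T2])          # T x = (T_1 x, T_2 x)

x = np.array([3.0, 4.0])
lhs = np.linalg.norm(T @ x) ** 2
rhs = np.linalg.norm(T1 @ x) ** 2 + np.linalg.norm(T2 @ x) ** 2
# the stacked squared norm equals the sum of the individual squared norms
```

The same stacking applies to (S_n), so the multi-operator maxmin reduces to the two-matrix case already solved.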
Appendix A. Applications to optimal TMS coils

A.1. Introduction to TMS coils. Transcranial Magnetic Stimulation (TMS) is a noninvasive technique to stimulate the brain, which is applied to psychiatric and medical conditions such as major depressive disorder, schizophrenia, bipolar depression, post-traumatic stress disorder and obsessive-compulsive disorder, amongst others [4]. In TMS, strong current pulses driven through a coil are used to induce an electric field stimulating neurons in the cortex.
The goal in TMS coil design is to find optimal positions for the multiple windings of coils (or equivalently the current density) so as to produce fields with the desired spatial characteristics and properties [2] (high focality, field penetration depth, low inductance, etc.), where this design problem has been frequently posed as a constrained optimization problem.
Moreover, an important safety issue in TMS is the minimization of the stimulation of nontarget areas. Therefore, the development of TMS as a medical tool would benefit from the design of TMS stimulators capable of inducing a maximum electric field in the region of interest, while minimizing the undesired stimulation in other prescribed regions.
A.2. Minimum stored-energy TMS coil. In the following, in order to illustrate an application of the theoretical model developed in this manuscript, we tackle the design of a minimum stored-energy hemispherical TMS coil of radius 9 cm, constructed to stimulate only one cerebral hemisphere. To this end, the coil must produce an E-field which is both maximum in a spherical region of interest (ROI) and minimum in a second region (ROI2). Both volumes of interest are of 1 cm radius and formed by 400 points, where ROI is shifted by 5 cm in the positive z-direction and by 2 cm in the positive y-direction, and ROI2 is shifted by 5 cm in the positive z-direction and by 2 cm in the negative y-direction, as shown in Figure 1(a). By using the formalism presented in [2], this TMS coil design problem can be posed as the following optimization problem:

(A.1)  max ∥E_{x1}ψ∥₂ & min ∥E_{x2}ψ∥₂ & min ψᵀLψ,

where ψ is the stream function (the optimization variable), M = 400 is the number of points in the ROI and ROI2, N = 2122 is the number of mesh nodes, L ∈ R^{N×N} is the inductance matrix, and E_{x1} ∈ R^{M×N} and E_{x2} ∈ R^{M×N} are the E-field matrices in the prescribed x-direction. In order to evaluate the stimulation of the coil we resort to the direct BEM [9], which allows calculation of the electric field induced by coils in conducting systems. Figure 2(a) shows a simple human head model made of two compartments, scalp and brain, used to evaluate the performance of the designed stimulator. As can be seen from Figure 2(b), the TMS coil fulfils the initial requirement of stimulating only one hemisphere of the brain (the one where ROI is found), whereas the electric field induced in the other cerebral hemisphere (where ROI2 can be found) is minimum.
A.3. Reformulation of Problem (A.1) to turn it into a maxmin. We now reformulate the multiobjective optimization problem given in (A.1) in order to transform it into a maxmin problem like (1.1), so that we can apply the theoretical model described in Subsection 4.6. First, taking into consideration that raising to the square is a strictly increasing function on [0, ∞), we can apply Equation (1.5) to pass to the squared norms. Next, we apply the Cholesky decomposition to L to obtain L = CᵀC, so that ψᵀLψ = (Cψ)ᵀ(Cψ) = ∥Cψ∥₂². Since C is an invertible square matrix, arg min ∥Cψ∥₂² = {0}, so the previous multiobjective optimization problem has no solution. Therefore it must be reformulated; we then call on Section 5 to obtain a maxmin of the form (1.1).

Appendix B. Optimal geolocation involving statistical variables

In the resulting maxmin problem of the form (1.1), A and B are real 16×3 matrices with the values of the three variables (m_1, m_2, m_3) taken into account (highest temperature, radiation and evapotranspiration) in January and July respectively. To avoid unit effects, we standardized the variables (µ = 0 and σ = 1). The vector x is the solution of the multiobjective problem.
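The standardization mentioned above is an ordinary column-wise z-score; a sketch (with made-up data, not the paper's Table 1 values):

```python
import numpy as np

def standardize(M):
    """Column-wise z-score: each variable gets mean 0 and (population) std 1."""
    return (M - M.mean(axis=0)) / M.std(axis=0)

# Illustrative data: 4 sites x 3 climate variables (temperature, radiation,
# evapotranspiration); the numbers are invented for the example.
M = np.array([[30.0, 20.0, 5.0],
              [28.0, 22.0, 4.0],
              [35.0, 25.0, 6.0],
              [25.0, 18.0, 3.0]])
Z = standardize(M)
# each column of Z now has mean ~0 and standard deviation ~1
```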
This question can be reformulated, as we showed in Section 5, by the following: the solution of (B.3) allows us to draw the sites in a 2D plot, taking Ax on the X axis and Bx on the Y axis. We observe that better places have high values of Ax and low values of Bx. Hence, we can sort the sites in order to achieve the objectives in a similar way as factor analysis works (two factors, the maximum and the minimum, instead of m variables).
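The sorting of sites via the line y = −x (cf. Figure 5) can be sketched by projecting each site's point ((Ax)_i, (Bx)_i) onto the direction (1, −1)/√2; the exact scaling of the score is an assumption of this sketch:

```python
import numpy as np

def site_scores(Ax, Bx):
    """Project each site's point (Ax_i, Bx_i) onto the direction (1, -1)/sqrt(2),
    i.e. onto the line y = -x: high Ax and low Bx give a high score."""
    return (np.asarray(Ax) - np.asarray(Bx)) / np.sqrt(2.0)

# Illustrative values for three sites (not the paper's data).
Ax = [2.0, 0.5, -1.0]
Bx = [-1.0, 0.5, 2.0]
scores = site_scores(Ax, Bx)
order = np.argsort(scores)[::-1]   # best site first
```

With these values, site 0 (high Ax, low Bx) ranks first and site 2 last, matching the criterion described above.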
Finally, we provide the code to compute the solution of the optimal geolocation problem (B.

3.1. The original maxmin problem has no solutions. This subsection begins with the following theorem.

Theorem 3.1. If T, S : X → Y are nonzero continuous linear operators between Banach spaces X and Y, then the maxmin problem

(3.1)  max ∥T(x)∥ & min ∥S(x)∥

has trivially no solution.

Proof. Observe that arg min ∥S(x)∥ = ker(S) and arg max ∥T(x)∥ = ∅ because T ≠ 0. Then the set of solutions of Problem (3.1) is arg min ∥S(x)∥ ∩ arg max ∥T(x)∥ = ker(S) ∩ ∅ = ∅. □

As a consequence, Problem (3.1) must be reformulated or remodeled.

3.2. Equivalent reformulations for the original maxmin problem. Following the first typical reformulation, given in Equation (1.2), we obtain max ∥T(x)∥ subject to ∥S(x)∥ ⩽ 1.

Figure 1(b) shows the coil solution of the problem in Eq. (A.1), computed by using the theoretical model proposed in this manuscript (see Subsections 4.6 and A.3); as expected, the wire arrangements are remarkably concentrated over the region of stimulation.

Figure 1. a) Description of the hemispherical surface where the optimal ψ must be found, along with the spherical regions of interest ROI and ROI2 where the electric field must be maximized and minimized, respectively. b) Wirepaths with 18 turns of the TMS coil solution (red wires indicate reversed current flow with respect to blue).

Figure 2. a) Description of the two-compartment scalp-brain model. b) E-field modulus induced at the surface of the brain by the designed TMS coil.

Figure 3. Geographic distribution of the sites considered in the study. 11 places are on the coastline of the region and 5 in the inner part.

Figure 4. Locations considering the Ax and Bx axes. The group named A represents the best places for the rural tourism inn, near Costa Tropical (Granada province). Sites in B are also on the coastline of the region. Sites in C are the worst locations considering the multiobjective problem; they are situated in the inner part of the region.

Figure 5. a) Sites considering Ax and Bx and the function y = −x. The places with high values of Ax (max) and low values of Bx (min) are the best locations for the solution of the multiobjective problem (round markers). b) Multiobjective score values obtained for each site by projecting the point onto the function y = −x. High values of this score indicate better places to locate the rural tourism inn.

Figure 6. Distribution of the three areas described in Figure 4. Areas A and B are on the coastline and C in the inner part.
Lemma 4.6. Let A, B ∈ R^{m×n}. If ker(B) ⊆ ker(A), then A is bounded on {x ∈ R^n : ∥Bx∥ ⩽ 1} and attains its maximum on that set.

Proof. Let (x_n)_{n∈N} be a sequence in {x ∈ R^n : ∥Bx∥ ⩽ 1}. In accordance with Lemma 4.5, there exists a sequence (z_n)_{n∈N} in ker(B) such that (x_n + z_n)_{n∈N} is bounded. Since A(x_n) = A(x_n + z_n) by hypothesis (recall that ker(B) ⊆ ker(A)), we conclude that A is bounded on {x ∈ R^n : ∥Bx∥ ⩽ 1}. Finally, let (x_n)_{n∈N} be a sequence in {x ∈ R^n : ∥Bx∥ ⩽ 1} such that ∥Ax_n∥ → max_{∥Bx∥⩽1} ∥Ax∥ as n → ∞. By the previous argument, the sequence (x_n + ker(B))_{n∈N} is bounded in R^n/ker(B). Fix b_n ∈ ker(B) such that ∥x_n + b_n∥ < ∥x_n + ker(B)∥ + 1/n for all n ∈ N. This means that (x_n + b_n)_{n∈N} is bounded, so it admits a convergent subsequence whose limit is feasible and attains the maximum.

Table 1. Mean values of high temperature (T) in Celsius degrees, radiation (R) in MJ/m², and evapotranspiration (E) in mm/day, measured in January (winter time) and July (summer time) between 2013 and 2018.

To show another application of maxmin multiobjective problems, we consider here the issue of optimal geolocation. In particular, we focus on the best location for a rural tourism inn considering several measured climate variables. Locations with low highest temperature m_1, radiation m_2 and evapotranspiration m_3 in summer time, and high values in winter time, are sites with climatic characteristics desirable for potential visitors. To solve this problem, we chose 11 locations on the Andalusian coastline and 5 in the inner part, near the mountains. We collected the data from the official Andalusian government webpage [10], evaluating the mean values of these variables over the last 5 years (2013-2019). The referred months of the study were January and July. To find the optimal location we evaluate the site where the variables' mean values are maximum in January and minimum in July. Here we have a typical multiobjective problem with two data matrices that can be formulated as follows: