Article

Dual Variational Formulations for a Large Class of Non-Convex Models in the Calculus of Variations

by
Fabio Silva Botelho
Department of Mathematics, Federal University of Santa Catarina, Florianópolis 88040-900, SC, Brazil
Mathematics 2023, 11(1), 63; https://doi.org/10.3390/math11010063
Submission received: 2 November 2022 / Revised: 19 December 2022 / Accepted: 22 December 2022 / Published: 24 December 2022
(This article belongs to the Special Issue Numerical Analysis and Scientific Computing II)

Abstract: This article develops dual variational formulations for a large class of models in variational optimization. The results are established through basic tools of functional analysis, convex analysis and duality theory. The main duality principle is developed as an application to a Ginzburg–Landau-type system in superconductivity in the absence of a magnetic field. In the first sections, we develop new general dual convex variational formulations, more specifically, dual formulations with a large region of convexity around the critical points, which are suitable for non-convex optimization in a large class of models in physics and engineering. Finally, in the last section, we present some numerical results concerning the generalized method of lines applied to a Ginzburg–Landau-type equation.

1. Introduction

In this section, we establish a dual formulation for a large class of models in non-convex optimization.
The main duality principle is applied to the Ginzburg–Landau system in superconductivity in the absence of a magnetic field.
Such results are based on the works of J.J. Telega and W.R. Bielski [1,2,3,4] and on a D.C. optimization approach developed in Toland [5].
Regarding the remaining references, details on the Sobolev spaces involved may be found in [6]. Related results on convex analysis and duality theory are addressed in [7,8,9,10]. Finally, similar models in superconductivity physics may be found in [11,12].
Remark 1. 
It is worth highlighting that we may generically denote
$$\int_\Omega \left[ \left( -\gamma \nabla^2 + K\, I_d \right)^{-1} v^* \right] v^* \, dx$$
simply by
$$\int_\Omega \frac{(v^*)^2}{-\gamma \nabla^2 + K} \, dx,$$
where $I_d$ denotes a concerning identity operator.
Other similar notations may be used along this text, as their indicated meanings are sufficiently clear.
Additionally, $\nabla^2$ denotes the Laplace operator and, for real constants $K_2 > 0$ and $K_1 > 0$, the notation $K_2 \gg K_1$ means that $K_2$ is much larger than $K_1$.
Finally, we adopt the standard Einstein convention of summing up repeated indices, unless otherwise indicated.
In order to clarify the notation, here, we introduce the definition of topological dual space.
Definition 1 
(Topological dual spaces). Let U be a Banach space. We define its topological dual space as the set of all linear continuous functionals defined on U. We suppose that such a dual space of U may be represented by another Banach space $U^*$, through a bilinear form $\langle \cdot, \cdot \rangle_U : U \times U^* \to \mathbb{R}$ (here, we are referring to standard representations of dual spaces of Sobolev and Lebesgue spaces). Thus, given $f : U \to \mathbb{R}$ linear and continuous, we assume the existence of a unique $u^* \in U^*$ such that
$$f(u) = \langle u, u^* \rangle_U, \quad \forall u \in U.$$
The norm of f, denoted by $\|f\|_{U^*}$, is defined as
$$\|f\|_{U^*} = \sup_{u \in U} \{ |\langle u, u^* \rangle_U| \,:\, \|u\|_U \le 1 \} \equiv \|u^*\|_{U^*}.$$
At this point, we start to describe the primal and dual variational formulations.
Let $\Omega \subset \mathbb{R}^3$ be an open, bounded, connected set with a regular (Lipschitzian) boundary denoted by $\partial\Omega$.
First, we emphasize that, for the Banach space $Y = Y^* = L^2(\Omega)$, we have
$$\langle v, v^* \rangle_{L^2} = \int_\Omega v\, v^* \, dx, \quad \forall v, v^* \in L^2(\Omega).$$
For the primal formulation, we consider the functional $J : U \to \mathbb{R}$, where
$$J(u) = \frac{\gamma}{2} \int_\Omega \nabla u \cdot \nabla u \, dx + \frac{\alpha}{2} \int_\Omega (u^2 - \beta)^2 \, dx - \langle u, f \rangle_{L^2}.$$
Here, we assume $\alpha > 0$, $\beta > 0$, $\gamma > 0$, $U = W_0^{1,2}(\Omega)$ and $f \in L^2(\Omega)$. Moreover, we denote
$$Y = Y^* = L^2(\Omega).$$
Define also $G_1 : U \to \mathbb{R}$ by
$$G_1(u) = \frac{\gamma}{2} \int_\Omega \nabla u \cdot \nabla u \, dx,$$
$G_2 : U \times Y \to \mathbb{R}$ by
$$G_2(u, v) = \frac{\alpha}{2} \int_\Omega (u^2 - \beta + v)^2 \, dx + \frac{K}{2} \int_\Omega u^2 \, dx,$$
and $F : U \to \mathbb{R}$ by
$$F(u) = \frac{K}{2} \int_\Omega u^2 \, dx,$$
where $K \gg \gamma$.
It is worth highlighting that in such a case,
$$J(u) = G_1(u) + G_2(u, 0) - F(u) - \langle u, f \rangle_{L^2}, \quad \forall u \in U.$$
Furthermore, define the following polar (conjugate) functionals, namely, $G_1^* : [Y^*]^2 \to \mathbb{R}$ by
$$G_1^*(v_1^* + z^*) = \sup_{u \in U} \left\{ \langle u, v_1^* + z^* \rangle_{L^2} - G_1(u) \right\} = \frac{1}{2} \int_\Omega \left[ (-\gamma \nabla^2)^{-1} (v_1^* + z^*) \right] (v_1^* + z^*) \, dx,$$
and $G_2^* : [Y^*]^2 \to \mathbb{R}$ by
$$G_2^*(v_2^*, v_0^*) = \sup_{(u,v) \in U \times Y} \left\{ \langle u, v_2^* \rangle_{L^2} + \langle v, v_0^* \rangle_{L^2} - G_2(u, v) \right\} = \frac{1}{2} \int_\Omega \frac{(v_2^*)^2}{2 v_0^* + K} \, dx + \frac{1}{2\alpha} \int_\Omega (v_0^*)^2 \, dx + \beta \int_\Omega v_0^* \, dx,$$
if $v_0^* \in B^*$, where
$$B^* = \{ v_0^* \in Y^* \,:\, 2 v_0^* + K > K/2 \text{ in } \Omega \}.$$
At this point, we give more details about this calculation.
Observe that
$$G_2^*(v_2^*, v_0^*) = \sup_{(u,v) \in U \times Y} \left\{ \langle u, v_2^* \rangle_{L^2} + \langle v, v_0^* \rangle_{L^2} - \frac{\alpha}{2} \int_\Omega (u^2 - \beta + v)^2 \, dx - \frac{K}{2} \int_\Omega u^2 \, dx \right\}.$$
Defining $w = u^2 - \beta + v$, we have $v = w - u^2 + \beta$, so that
$$G_2^*(v_2^*, v_0^*) = \sup_{(u,w) \in U \times Y} \left\{ \langle u, v_2^* \rangle_{L^2} + \langle w - u^2 + \beta, v_0^* \rangle_{L^2} - \frac{\alpha}{2} \int_\Omega w^2 \, dx - \frac{K}{2} \int_\Omega u^2 \, dx \right\} = \langle \tilde{u}, v_2^* \rangle_{L^2} + \langle \tilde{w} - \tilde{u}^2 + \beta, v_0^* \rangle_{L^2} - \frac{\alpha}{2} \int_\Omega \tilde{w}^2 \, dx - \frac{K}{2} \int_\Omega \tilde{u}^2 \, dx,$$
where $(\tilde{u}, \tilde{w})$ is the solution of the equations (the optimality conditions for such a quadratic optimization problem)
$$v_0^* - \alpha \tilde{w} = 0$$
and
$$v_2^* - (2 v_0^* + K) \tilde{u} = 0,$$
and therefore,
$$\tilde{w} = \frac{v_0^*}{\alpha}$$
and
$$\tilde{u} = \frac{v_2^*}{2 v_0^* + K}.$$
Substituting such results into the expression above, we obtain
$$G_2^*(v_2^*, v_0^*) = \frac{1}{2} \int_\Omega \frac{(v_2^*)^2}{2 v_0^* + K} \, dx + \frac{1}{2\alpha} \int_\Omega (v_0^*)^2 \, dx + \beta \int_\Omega v_0^* \, dx,$$
if $v_0^* \in B^*$.
Finally, $F^* : Y^* \to \mathbb{R}$ is defined by
$$F^*(z^*) = \sup_{u \in U} \left\{ \langle u, z^* \rangle_{L^2} - F(u) \right\} = \frac{1}{2K} \int_\Omega (z^*)^2 \, dx.$$
Define also
$$A^* = \{ v^* = (v_1^*, v_2^*, v_0^*) \in [Y^*]^2 \times B^* \,:\, v_1^* + v_2^* - f = 0 \text{ in } \Omega \},$$
$J^* : [Y^*]^4 \to \mathbb{R}$ by
$$J^*(v^*, z^*) = -G_1^*(v_1^* + z^*) - G_2^*(v_2^*, v_0^*) + F^*(z^*),$$
and $J_1^* : [Y^*]^4 \times U \to \mathbb{R}$ by
$$J_1^*(v^*, z^*, u) = J^*(v^*, z^*) + \langle u, v_1^* + v_2^* - f \rangle_{L^2}.$$
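As a pointwise sanity check of the conjugate formulas above, the suprema defining $G_2^*$ and $F^*$ can be approximated by a brute-force grid search in the scalar case. The following sketch compares the grid supremum with the closed-form expressions; the parameter values are illustrative choices, not taken from the text:

```python
import numpy as np

# Pointwise (scalar) check of the conjugate formulas by brute-force grid
# search. The parameter values below are illustrative, not from the text.
alpha, beta, K = 2.0, 1.0, 50.0

def G2(u, v):
    return 0.5 * alpha * (u**2 - beta + v)**2 + 0.5 * K * u**2

u = np.linspace(-3.0, 3.0, 1201)
v = np.linspace(-5.0, 5.0, 1201)
U, V = np.meshgrid(u, v, indexing="ij")

v2s, v0s = 1.5, 0.8                 # sample dual point with 2*v0s + K > K/2
num = np.max(U * v2s + V * v0s - G2(U, V))
closed = v2s**2 / (2 * (2 * v0s + K)) + v0s**2 / (2 * alpha) + beta * v0s
print(num, closed)                   # agree to grid accuracy

# F(u) = (K/2) u^2 has conjugate F*(z*) = (z*)^2 / (2K)
zs = 3.0
numF = np.max(u * zs - 0.5 * K * u**2)
print(numF, zs**2 / (2 * K))
```

The grid supremum slightly underestimates the true supremum, so the agreement is only up to the grid spacing.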

2. The Main Duality Principle, a Convex Dual Formulation, and the Concerning Proximal Primal Functional

Our main result is summarized by the following theorem.
Theorem 1. 
Considering the definitions and statements in the last section, suppose also that $(\hat{v}^*, \hat{z}^*, u_0) \in [Y^*]^2 \times B^* \times Y^* \times U$ is such that
$$\delta J_1^*(\hat{v}^*, \hat{z}^*, u_0) = 0.$$
Under such hypotheses, we have
$$\delta J(u_0) = 0,$$
$$\hat{v}^* \in A^*,$$
and
$$J(u_0) = \inf_{u \in U} \left\{ J(u) + \frac{K}{2} \int_\Omega |u - u_0|^2 \, dx \right\} = J^*(\hat{v}^*, \hat{z}^*) = \sup_{v^* \in A^*} J^*(v^*, \hat{z}^*).$$
Proof. 
Since
$$\delta J_1^*(\hat{v}^*, \hat{z}^*, u_0) = 0,$$
from the variation in $v_1^*$, we obtain
$$-\frac{\hat{v}_1^* + \hat{z}^*}{-\gamma \nabla^2} + u_0 = 0 \text{ in } \Omega,$$
so that
$$\hat{v}_1^* + \hat{z}^* = -\gamma \nabla^2 u_0.$$
From the variation in $v_2^*$, we obtain
$$-\frac{\hat{v}_2^*}{2 \hat{v}_0^* + K} + u_0 = 0 \text{ in } \Omega.$$
From the variation in $v_0^*$, we also obtain
$$\frac{(\hat{v}_2^*)^2}{(2 \hat{v}_0^* + K)^2} - \frac{\hat{v}_0^*}{\alpha} - \beta = 0,$$
and therefore,
$$\hat{v}_0^* = \alpha (u_0^2 - \beta).$$
From the variation in u, we have
$$\hat{v}_1^* + \hat{v}_2^* - f = 0 \text{ in } \Omega,$$
and, thus,
$$\hat{v}^* \in A^*.$$
Finally, from the variation in $z^*$, we obtain
$$-\frac{\hat{v}_1^* + \hat{z}^*}{-\gamma \nabla^2} + \frac{\hat{z}^*}{K} = 0 \text{ in } \Omega,$$
so that
$$-u_0 + \frac{\hat{z}^*}{K} = 0,$$
that is,
$$\hat{z}^* = K u_0 \text{ in } \Omega.$$
From such results and $\hat{v}^* \in A^*$, we have
$$0 = \hat{v}_1^* + \hat{v}_2^* - f = -\gamma \nabla^2 u_0 - \hat{z}^* + 2 \hat{v}_0^* u_0 + K u_0 - f = -\gamma \nabla^2 u_0 + 2 \alpha (u_0^2 - \beta) u_0 - f,$$
so that
$$\delta J(u_0) = 0.$$
Additionally, from this and from the Legendre transform properties, we have
$$G_1^*(\hat{v}_1^* + \hat{z}^*) = \langle u_0, \hat{v}_1^* + \hat{z}^* \rangle_{L^2} - G_1(u_0),$$
$$G_2^*(\hat{v}_2^*, \hat{v}_0^*) = \langle u_0, \hat{v}_2^* \rangle_{L^2} + \langle 0, \hat{v}_0^* \rangle_{L^2} - G_2(u_0, 0),$$
$$F^*(\hat{z}^*) = \langle u_0, \hat{z}^* \rangle_{L^2} - F(u_0),$$
and thus, we obtain
$$J^*(\hat{v}^*, \hat{z}^*) = -G_1^*(\hat{v}_1^* + \hat{z}^*) - G_2^*(\hat{v}_2^*, \hat{v}_0^*) + F^*(\hat{z}^*) = -\langle u_0, \hat{v}_1^* + \hat{v}_2^* \rangle_{L^2} + G_1(u_0) + G_2(u_0, 0) - F(u_0) = -\langle u_0, f \rangle_{L^2} + G_1(u_0) + G_2(u_0, 0) - F(u_0) = J(u_0).$$
Summarizing, we have
$$J^*(\hat{v}^*, \hat{z}^*) = J(u_0).$$
On the other hand,
$$J^*(\hat{v}^*, \hat{z}^*) = -G_1^*(\hat{v}_1^* + \hat{z}^*) - G_2^*(\hat{v}_2^*, \hat{v}_0^*) + F^*(\hat{z}^*) \le -\langle u, \hat{v}_1^* + \hat{z}^* \rangle_{L^2} - \langle u, \hat{v}_2^* \rangle_{L^2} - \langle 0, \hat{v}_0^* \rangle_{L^2} + G_1(u) + G_2(u, 0) + F^*(\hat{z}^*) = -\langle u, f \rangle_{L^2} + G_1(u) + G_2(u, 0) - \langle u, \hat{z}^* \rangle_{L^2} + F^*(\hat{z}^*) = -\langle u, f \rangle_{L^2} + G_1(u) + G_2(u, 0) - F(u) + F(u) - \langle u, \hat{z}^* \rangle_{L^2} + F^*(\hat{z}^*) = J(u) + \frac{K}{2} \int_\Omega u^2 \, dx - K \langle u, u_0 \rangle_{L^2} + \frac{K}{2} \int_\Omega u_0^2 \, dx = J(u) + \frac{K}{2} \int_\Omega |u - u_0|^2 \, dx, \quad \forall u \in U.$$
Finally, by a simple computation, we may obtain that the Hessian satisfies
$$\frac{\partial^2 J^*(v^*, z^*)}{\partial (v^*)^2} < 0$$
in $[Y^*]^2 \times B^* \times Y^*$, so that we may infer that $J^*$ is concave in $v^*$ on $[Y^*]^2 \times B^* \times Y^*$.
Therefore, from this and the previous results, we have
$$J(u_0) = \inf_{u \in U} \left\{ J(u) + \frac{K}{2} \int_\Omega |u - u_0|^2 \, dx \right\} = J^*(\hat{v}^*, \hat{z}^*) = \sup_{v^* \in A^*} J^*(v^*, \hat{z}^*).$$
The proof is complete. □

3. A Primal Dual Variational Formulation

In this section, we develop a more general primal dual variational formulation suitable for a large class of models in non-convex optimization.
Consider again $U = W_0^{1,2}(\Omega)$, and let $G : U \to \mathbb{R}$ and $F : U \to \mathbb{R}$ be three times Fréchet differentiable functionals. Let $J : U \to \mathbb{R}$ be defined by
$$J(u) = G(u) - F(u), \quad \forall u \in U.$$
Assume that $u_0 \in U$ is such that
$$\delta J(u_0) = 0$$
and
$$\delta^2 J(u_0) > 0.$$
Denoting $v^* = (v_1^*, v_2^*)$, define $J^* : U \times Y^* \times Y^* \to \mathbb{R}$ by
$$J^*(u, v^*) = \frac{1}{2} \| v_1^* - G'(u) \|_2^2 + \frac{1}{2} \| v_2^* - F'(u) \|_2^2 + \frac{1}{2} \| v_1^* - v_2^* \|_2^2.$$
Denoting $L_1^*(u, v^*) = v_1^* - G'(u)$ and $L_2^*(u, v^*) = v_2^* - F'(u)$, define also
$$C^* = \left\{ (u, v^*) \in U \times Y^* \times Y^* \,:\, \| L_1^*(u, v^*) \|_\infty \le \frac{1}{K} \text{ and } \| L_2^*(u, v^*) \|_\infty \le \frac{1}{K} \right\},$$
for an appropriate $K > 0$ to be specified.
Observe that in $C^*$, the Hessian of $J^*$ is given by
$$\{\delta^2 J^*(u, v^*)\} = \begin{bmatrix} G''(u)^2 + F''(u)^2 + O(1/K) & -G''(u) & -F''(u) \\ -G''(u) & 2 & -1 \\ -F''(u) & -1 & 2 \end{bmatrix}.$$
Observe also that the determinant of the block in the dual variables is
$$\det \left[ \frac{\partial^2 J^*(u, v^*)}{\partial v_1^* \, \partial v_2^*} \right] = 3,$$
and
$$\det \{\delta^2 J^*(u, v^*)\} = (G''(u) - F''(u))^2 + O(1/K) = (\delta^2 J(u))^2 + O(1/K).$$
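The structure of this Hessian can be checked numerically in the scalar case. In the sketch below, gpp and fpp are arbitrary stand-ins for $G''(u)$ and $F''(u)$, and the $O(1/K)$ correction is dropped:

```python
import numpy as np

# Numerical check of the Hessian structure in the scalar case.
# gpp, fpp are arbitrary stand-ins for G''(u), F''(u); O(1/K) dropped.
gpp, fpp = 3.0, 1.2

H = np.array([
    [gpp**2 + fpp**2, -gpp, -fpp],
    [-gpp,             2.0, -1.0],
    [-fpp,            -1.0,  2.0],
])

# the 2x2 block in the dual variables has determinant 3, and the full
# determinant reduces to (G''(u) - F''(u))^2 = (delta^2 J(u))^2
print(np.linalg.det(H[1:, 1:]), np.linalg.det(H), (gpp - fpp)**2)
```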
Define now
$$\hat{v}_1^* = G'(u_0)$$
and
$$\hat{v}_2^* = F'(u_0),$$
so that
$$\hat{v}_1^* - \hat{v}_2^* = 0.$$
From this, we may infer that $(u_0, \hat{v}_1^*, \hat{v}_2^*) \in C^*$ and
$$J^*(u_0, \hat{v}^*) = 0 = \min_{(u, v^*) \in C^*} J^*(u, v^*).$$
Moreover, for $K > 0$ sufficiently big, $J^*$ is convex in a neighborhood of $(u_0, \hat{v}^*)$.
Therefore, in the last lines, we have proven the following theorem.
Theorem 2. 
Under the statements and definitions of the last lines, there exist $r_0 > 0$ and $r_1 > 0$ such that
$$J(u_0) = \min_{u \in B_{r_0}(u_0)} J(u),$$
and $(u_0, \hat{v}_1^*, \hat{v}_2^*) \in C^*$ is such that
$$J^*(u_0, \hat{v}^*) = 0 = \min_{(u, v^*) \in U \times [Y^*]^2} J^*(u, v^*).$$
Moreover, $J^*$ is convex in $B_{r_1}(u_0, \hat{v}^*)$.
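A minimal scalar illustration of Theorem 2, with the hypothetical choices $G(u) = u^4/4$ and $F(u) = u^2/2$ (so that $u_0 = 1$ is a critical point of $J = G - F$ with $\delta^2 J(u_0) = 2 > 0$; these are not the functionals of this article):

```python
def Gp(u): return u**3        # G'(u) for the illustrative G(u) = u^4/4
def Fp(u): return u           # F'(u) for the illustrative F(u) = u^2/2

def J_star(u, v1, v2):
    # scalar version of the primal dual functional J*(u, v*) above
    return 0.5*(v1 - Gp(u))**2 + 0.5*(v2 - Fp(u))**2 + 0.5*(v1 - v2)**2

u0 = 1.0                      # critical point: G'(u0) - F'(u0) = 0
v1, v2 = Gp(u0), Fp(u0)       # the choices v1* = G'(u0), v2* = F'(u0)

print(J_star(u0, v1, v2))     # 0.0, the global minimum of J* >= 0
```

Since $J^*$ is a sum of squares, it is nonnegative, and it vanishes exactly when $v_1^* = G'(u)$, $v_2^* = F'(u)$ and $v_1^* = v_2^*$, i.e., at critical points of J.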

4. One More Duality Principle and a Concerning Primal Dual Variational Formulation

In this section, we establish a new duality principle and a related primal dual formulation.
The results are based on the approach of Toland [5].

4.1. Introduction

Let $\Omega \subset \mathbb{R}^3$ be an open, bounded, connected set with a regular (Lipschitzian) boundary denoted by $\partial\Omega$.
Let $J : V \to \mathbb{R}$ be a functional such that
$$J(u) = G(u) - F(u), \quad \forall u \in V,$$
where $V = W_0^{1,2}(\Omega)$.
Suppose G and F are both three times Fréchet differentiable convex functionals such that
$$\frac{\partial^2 G(u)}{\partial u^2} > 0$$
and
$$\frac{\partial^2 F(u)}{\partial u^2} > 0,$$
$\forall u \in V$.
Assume also that there exists $\alpha_1 \in \mathbb{R}$ such that
$$\alpha_1 = \inf_{u \in V} J(u).$$
Moreover, suppose that if $\{u_n\} \subset V$ is such that
$$\| u_n \|_V \to \infty,$$
then
$$J(u_n) \to +\infty, \text{ as } n \to \infty.$$
At this point, we define $J^{**} : V \to \mathbb{R}$ by
$$J^{**}(u) = \sup_{(v^*, \alpha) \in H^*} \{ \langle u, v^* \rangle_V + \alpha \},$$
where
$$H^* = \{ (v^*, \alpha) \in V^* \times \mathbb{R} \,:\, \langle v, v^* \rangle_V + \alpha \le J(v), \ \forall v \in V \}.$$
Observe that $(0, \alpha_1) \in H^*$, so that
$$J^{**}(u) \ge \alpha_1 = \inf_{u \in V} J(u).$$
On the other hand, clearly, we have
$$J^{**}(u) \le J(u), \quad \forall u \in V,$$
so that we have
$$\alpha_1 = \inf_{u \in V} J(u) = \inf_{u \in V} J^{**}(u).$$
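The bidual $J^{**}$ is the convex envelope of J, and, as above, the two infima coincide. This can be visualized with a discrete double conjugation on a grid; the double-well J below is an illustrative stand-in, not the functional of this section:

```python
import numpy as np

# Discrete double conjugation: J** is the convex envelope of J and shares
# its infimum. The double-well J here is an illustrative stand-in.
u = np.linspace(-2.0, 2.0, 401)
J = (u**2 - 1.0)**2

s = np.linspace(-20.0, 20.0, 801)                              # dual grid
Jc  = np.max(s[:, None] * u[None, :] - J[None, :], axis=1)     # J*(s)
Jcc = np.max(u[:, None] * s[None, :] - Jc[None, :], axis=1)    # J**(u)

print(J.min(), Jcc.min())        # the two infima coincide (both are 0)
```

Here $J^{**}$ flattens the non-convex well of J to the value 0 on $[-1, 1]$ while agreeing with J outside, so the infima are equal even though the minimizing sets differ.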
Let $u \in V$. Since J is strongly continuous, there exist $\delta > 0$ and $A > 0$ such that
$$\alpha_1 \le J^{**}(v) \le J(v) \le A, \quad \forall v \in B_\delta(u).$$
From this, considering that $J^{**}$ is convex on V, we may infer that $J^{**}$ is continuous at u, $\forall u \in V$.
Hence, $J^{**}$ is strongly lower semi-continuous on V, and since $J^{**}$ is convex, we may infer that $J^{**}$ is weakly lower semi-continuous on V.
Let $\{u_n\} \subset V$ be a sequence such that
$$\alpha_1 \le J(u_n) < \alpha_1 + \frac{1}{n}, \quad \forall n \in \mathbb{N}.$$
Hence,
$$\alpha_1 = \lim_{n \to \infty} J(u_n) = \inf_{u \in V} J(u) = \inf_{u \in V} J^{**}(u).$$
Suppose that there exists a subsequence $\{u_{n_k}\}$ of $\{u_n\}$ such that
$$\| u_{n_k} \|_V \to \infty, \text{ as } k \to \infty.$$
From the hypotheses, we have
$$J(u_{n_k}) \to +\infty, \text{ as } k \to \infty,$$
which contradicts $\alpha_1 \in \mathbb{R}$.
Therefore, there exists $K > 0$ such that
$$\| u_n \|_V \le K, \quad \forall n \in \mathbb{N}.$$
Since V is reflexive, from this and the Kakutani Theorem, there exist a subsequence $\{u_{n_k}\}$ of $\{u_n\}$ and $u_0 \in V$ such that
$$u_{n_k} \rightharpoonup u_0, \text{ weakly in } V.$$
Consequently, from this and considering that $J^{**}$ is weakly lower semi-continuous, we have
$$\alpha_1 = \liminf_{k \to \infty} J^{**}(u_{n_k}) \ge J^{**}(u_0),$$
so that
$$J^{**}(u_0) = \min_{u \in V} J^{**}(u).$$
Define $G^*, F^* : V^* \to \mathbb{R}$ by
$$G^*(v^*) = \sup_{u \in V} \{ \langle u, v^* \rangle_V - G(u) \}$$
and
$$F^*(v^*) = \sup_{u \in V} \{ \langle u, v^* \rangle_V - F(u) \}.$$
Defining also $J^* : V^* \to \mathbb{R}$ by
$$J^*(v^*) = F^*(v^*) - G^*(v^*),$$
from the results in [5], we may obtain
$$\inf_{u \in V} J(u) = \inf_{v^* \in V^*} J^*(v^*),$$
so that
$$J^{**}(u_0) = \inf_{u \in V} J^{**}(u) = \inf_{u \in V} J(u) = \inf_{v^* \in V^*} J^*(v^*).$$
Suppose now that there exists $\hat{u} \in V$ such that
$$J(\hat{u}) = \inf_{u \in V} J(u).$$
From the standard necessary conditions, we have
$$\delta J(\hat{u}) = 0,$$
so that
$$\frac{\partial G(\hat{u})}{\partial u} - \frac{\partial F(\hat{u})}{\partial u} = 0.$$
Define now
$$v_0^* = \frac{\partial F(\hat{u})}{\partial u}.$$
From these last two equations, we obtain
$$v_0^* = \frac{\partial G(\hat{u})}{\partial u}.$$
From such results and the Legendre transform properties, we have
$$\hat{u} = \frac{\partial F^*(v_0^*)}{\partial v^*},$$
$$\hat{u} = \frac{\partial G^*(v_0^*)}{\partial v^*},$$
so that
$$\delta J^*(v_0^*) = \frac{\partial F^*(v_0^*)}{\partial v^*} - \frac{\partial G^*(v_0^*)}{\partial v^*} = \hat{u} - \hat{u} = 0,$$
$$G^*(v_0^*) = \langle \hat{u}, v_0^* \rangle_V - G(\hat{u})$$
and
$$F^*(v_0^*) = \langle \hat{u}, v_0^* \rangle_V - F(\hat{u}),$$
so that
$$\inf_{u \in V} J(u) = J(\hat{u}) = G(\hat{u}) - F(\hat{u}) = \inf_{v^* \in V^*} J^*(v^*) = F^*(v_0^*) - G^*(v_0^*) = J^*(v_0^*).$$
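Toland's identity $\inf_V (G - F) = \inf_{V^*} (F^* - G^*)$ can also be checked numerically in the scalar case by computing the conjugates on a grid. In the sketch below, $G(u) = u^4/4$ and $F(u) = u^2$ are illustrative convex choices (not from the text) for which both infima equal $-1$, attained at $u = \sqrt{2}$ and $v^* = 2\sqrt{2}$, respectively:

```python
import numpy as np

# Grid check of Toland's identity inf(G - F) = inf(F* - G*) for the
# illustrative convex pair G(u) = u^4/4, F(u) = u^2.
u = np.linspace(-4.0, 4.0, 2001)
G = u**4 / 4
F = u**2

primal = np.min(G - F)

s  = np.linspace(-12.0, 12.0, 2001)
Gc = np.max(s[:, None] * u[None, :] - G[None, :], axis=1)      # G*(s) by grid
Fc = s**2 / 4                                                  # F*(s) exactly
dual = np.min(Fc - Gc)

print(primal, dual)              # both approximately -1
```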

4.2. The Main Duality Principle and a Related Primal Dual Variational Formulation

Considering these last statements and results, we may prove the following theorem.
Theorem 3. 
Let $\Omega \subset \mathbb{R}^3$ be an open, bounded, connected set with a regular (Lipschitzian) boundary denoted by $\partial\Omega$.
Let $J : V \to \mathbb{R}$ be a functional such that
$$J(u) = G(u) - F(u), \quad \forall u \in V,$$
where $V = W_0^{1,2}(\Omega)$.
Suppose G and F are both three times Fréchet differentiable functionals such that there exists $K > 0$ such that
$$\frac{\partial^2 G(u)}{\partial u^2} + K > 0$$
and
$$\frac{\partial^2 F(u)}{\partial u^2} + K > 0,$$
$\forall u \in V$.
Assume also that there exist $u_0 \in V$ and $\alpha_1 \in \mathbb{R}$ such that
$$\alpha_1 = \inf_{u \in V} J(u) = J(u_0).$$
Assume that $K_3 > 0$ is such that
$$\| u_0 \|_\infty < K_3.$$
Define
$$\tilde{V} = \{ u \in V \,:\, \| u \|_\infty \le K_3 \}.$$
Assume that $K_1 > 0$ is such that if $u \in \tilde{V}$, then
$$\max \left\{ \| F(u) \|_\infty, \| G(u) \|_\infty, \| F'(u) \|_\infty, \| F''(u) \|_\infty, \| G'(u) \|_\infty, \| G''(u) \|_\infty \right\} \le K_1.$$
Suppose also
$$K \gg \max \{ K_1, K_3 \}.$$
Define $F_K, G_K : V \to \mathbb{R}$ by
$$F_K(u) = F(u) + \frac{K}{2} \int_\Omega u^2 \, dx$$
and
$$G_K(u) = G(u) + \frac{K}{2} \int_\Omega u^2 \, dx,$$
$\forall u \in V$.
Define also $G_K^*, F_K^* : V^* \to \mathbb{R}$ by
$$G_K^*(v^*) = \sup_{u \in V} \{ \langle u, v^* \rangle_V - G_K(u) \}$$
and
$$F_K^*(v^*) = \sup_{u \in V} \{ \langle u, v^* \rangle_V - F_K(u) \}.$$
Observe that since $u_0 \in V$ is such that
$$J(u_0) = \inf_{u \in V} J(u),$$
we have
$$\delta J(u_0) = 0.$$
Let $\varepsilon > 0$ be a small constant.
Define
$$v_0^* = \frac{\partial F_K(u_0)}{\partial u} \in V^*.$$
Under such hypotheses, defining $J_1^* : V \times V^* \to \mathbb{R}$ by
$$J_1^*(u, v^*) = F_K^*(v^*) - G_K^*(v^*) + \frac{1}{2\varepsilon} \left\| \frac{\partial G_K^*(v^*)}{\partial v^*} - u \right\|_2^2 + \frac{1}{2\varepsilon} \left\| \frac{\partial F_K^*(v^*)}{\partial v^*} - u \right\|_2^2 + \frac{1}{2\varepsilon} \left\| \frac{\partial G_K^*(v^*)}{\partial v^*} - \frac{\partial F_K^*(v^*)}{\partial v^*} \right\|_2^2,$$
we have
$$J(u_0) = \inf_{u \in V} J(u) = \inf_{(u, v^*) \in V \times V^*} J_1^*(u, v^*) = J_1^*(u_0, v_0^*).$$
Proof. 
Observe that, from the hypotheses and the results and statements of the last subsection,
$$J(u_0) = \inf_{u \in V} J(u) = \inf_{v^* \in V^*} J_K^*(v^*) = J_K^*(v_0^*),$$
where
$$J_K^*(v^*) = F_K^*(v^*) - G_K^*(v^*), \quad \forall v^* \in V^*.$$
Moreover, we have
$$J_1^*(u, v^*) \ge J_K^*(v^*), \quad \forall u \in V, \ v^* \in V^*.$$
Additionally, from the hypotheses and the results in the last subsection,
$$u_0 = \frac{\partial F_K^*(v_0^*)}{\partial v^*} = \frac{\partial G_K^*(v_0^*)}{\partial v^*},$$
so that clearly, we have
$$J_1^*(u_0, v_0^*) = J_K^*(v_0^*).$$
From these results, we may infer that
$$J(u_0) = \inf_{u \in V} J(u) = \inf_{v^* \in V^*} J_K^*(v^*) = J_K^*(v_0^*) = \inf_{(u, v^*) \in V \times V^*} J_1^*(u, v^*) = J_1^*(u_0, v_0^*).$$
The proof is complete. □
Remark 2. 
At this point, we highlight that $J_1^*$ has a large region of convexity around the optimal point $(u_0, v_0^*)$, for $K > 0$ sufficiently large and a corresponding $\varepsilon > 0$ sufficiently small.
Indeed, observe that for $v^* \in V^*$,
$$G_K^*(v^*) = \sup_{u \in V} \{ \langle u, v^* \rangle_V - G_K(u) \} = \langle \hat{u}, v^* \rangle_V - G_K(\hat{u}),$$
where $\hat{u} \in V$ is such that
$$v^* = \frac{\partial G_K(\hat{u})}{\partial u} = G'(\hat{u}) + K \hat{u}.$$
Taking the variation in $v^*$ in this last equation, we obtain
$$1 = G''(\hat{u}) \frac{\partial \hat{u}}{\partial v^*} + K \frac{\partial \hat{u}}{\partial v^*},$$
so that
$$\frac{\partial \hat{u}}{\partial v^*} = \frac{1}{G''(\hat{u}) + K} = O\!\left(\frac{1}{K}\right).$$
From this, we have
$$\frac{\partial^2 \hat{u}}{\partial (v^*)^2} = -\frac{1}{(G''(\hat{u}) + K)^2}\, G'''(\hat{u}) \frac{\partial \hat{u}}{\partial v^*} = -\frac{1}{(G''(\hat{u}) + K)^3}\, G'''(\hat{u}) = O\!\left(\frac{1}{K^3}\right).$$
On the other hand, from the implicit function theorem,
$$\frac{\partial G_K^*(v^*)}{\partial v^*} = \hat{u} + \left[ v^* - \frac{\partial G_K(\hat{u})}{\partial u} \right] \frac{\partial \hat{u}}{\partial v^*} = \hat{u},$$
so that
$$\frac{\partial^2 G_K^*(v^*)}{\partial (v^*)^2} = \frac{\partial \hat{u}}{\partial v^*} = O\!\left(\frac{1}{K}\right)$$
and
$$\frac{\partial^3 G_K^*(v^*)}{\partial (v^*)^3} = \frac{\partial^2 \hat{u}}{\partial (v^*)^2} = O\!\left(\frac{1}{K^3}\right).$$
Similarly, we may obtain
$$\frac{\partial^2 F_K^*(v^*)}{\partial (v^*)^2} = O\!\left(\frac{1}{K}\right)$$
and
$$\frac{\partial^3 F_K^*(v^*)}{\partial (v^*)^3} = O\!\left(\frac{1}{K^3}\right).$$
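The $O(1/K)$ estimate for $\partial^2 G_K^*/\partial (v^*)^2$ can be illustrated numerically in the scalar case. Here, the convex choice $G(u) = u^4/4$ is hypothetical, so that $(G_K^*)''(v^*) = 1/(G''(\hat{u}) + K)$ is approximated by a finite difference of grid-computed conjugate values:

```python
import numpy as np

# Scalar illustration of the O(1/K) estimate: for the hypothetical convex
# choice G(u) = u^4/4, the conjugate of G_K(u) = G(u) + K u^2/2 satisfies
# (G_K*)''(v*) = 1/(G''(u_hat) + K) = O(1/K).
K = 100.0

def GK(u):
    return u**4 / 4 + K * u**2 / 2

def GKc(s):
    # conjugate value by dense grid search (the maximizer is interior here)
    u = np.linspace(-2.0, 2.0, 400001)
    return np.max(s * u - GK(u))

s0, h = 3.0, 0.05
second = (GKc(s0 + h) - 2 * GKc(s0) + GKc(s0 - h)) / h**2
print(second, 1.0 / K)                # both approximately 0.01
```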
Denoting
$$A = \frac{\partial^2 F_K^*(v_0^*)}{\partial (v^*)^2}$$
and
$$B = \frac{\partial^2 G_K^*(v_0^*)}{\partial (v^*)^2},$$
we have
$$\frac{\partial^2 J_1^*(u_0, v_0^*)}{\partial (v^*)^2} = A - B + \frac{1}{\varepsilon} (2A^2 + 2B^2 - 2AB),$$
$$\frac{\partial^2 J_1^*(u_0, v_0^*)}{\partial u^2} = \frac{2}{\varepsilon},$$
and
$$\frac{\partial^2 J_1^*(u_0, v_0^*)}{\partial v^* \, \partial u} = -\frac{1}{\varepsilon}(A + B).$$
From this, we have
$$\det(\delta^2 J_1^*(u_0, v_0^*)) = \frac{\partial^2 J_1^*(u_0, v_0^*)}{\partial (v^*)^2} \frac{\partial^2 J_1^*(u_0, v_0^*)}{\partial u^2} - \left( \frac{\partial^2 J_1^*(u_0, v_0^*)}{\partial v^* \, \partial u} \right)^2 = \frac{2(A - B)}{\varepsilon} + \frac{3(A - B)^2}{\varepsilon^2} = O\!\left(\frac{1}{\varepsilon^2}\right) \ge 0$$
about the optimal point $(u_0, v_0^*)$.

5. A Convex Dual Variational Formulation

In this section, again for $\Omega \subset \mathbb{R}^3$, an open, bounded, connected set with a regular (Lipschitzian) boundary $\partial\Omega$, and $\gamma > 0$, $\alpha > 0$, $\beta > 0$ and $f \in L^2(\Omega)$, we define $F_1 : V \times Y \to \mathbb{R}$, $F_2 : V \to \mathbb{R}$ and $G : V \times Y \to \mathbb{R}$ by
$$F_1(u, v_0^*) = \frac{\gamma}{2} \int_\Omega \nabla u \cdot \nabla u \, dx - \frac{K}{2} \int_\Omega u^2 \, dx + \frac{K_1}{2} \int_\Omega (-\gamma \nabla^2 u + 2 v_0^* u - f)^2 \, dx + \frac{K_2}{2} \int_\Omega u^2 \, dx,$$
$$F_2(u) = \frac{K_2}{2} \int_\Omega u^2 \, dx + \langle u, f \rangle_{L^2},$$
and
$$G(u, v) = \frac{\alpha}{2} \int_\Omega (u^2 - \beta + v)^2 \, dx + \frac{K}{2} \int_\Omega u^2 \, dx.$$
We also define
$$J_1(u, v_0^*) = F_1(u, v_0^*) - F_2(u) + G(u, 0),$$
$$J(u) = \frac{\gamma}{2} \int_\Omega \nabla u \cdot \nabla u \, dx + \frac{\alpha}{2} \int_\Omega (u^2 - \beta)^2 \, dx - \langle u, f \rangle_{L^2},$$
and $F_1^* : [Y^*]^3 \to \mathbb{R}$, $F_2^* : Y^* \to \mathbb{R}$ and $G^* : [Y^*]^2 \to \mathbb{R}$ by
$$F_1^*(v_2^*, v_1^*, v_0^*) = \sup_{u \in V} \{ \langle u, v_1^* + v_2^* \rangle_{L^2} - F_1(u, v_0^*) \} = \frac{1}{2} \int_\Omega \frac{\left( v_1^* + v_2^* + K_1 (-\gamma \nabla^2 + 2 v_0^*) f \right)^2}{-\gamma \nabla^2 - K + K_2 + K_1 (-\gamma \nabla^2 + 2 v_0^*)^2} \, dx - \frac{K_1}{2} \int_\Omega f^2 \, dx,$$
$$F_2^*(v_2^*) = \sup_{u \in V} \{ \langle u, v_2^* \rangle_{L^2} - F_2(u) \} = \frac{1}{2 K_2} \int_\Omega (v_2^*)^2 \, dx,$$
and
$$G^*(v_1^*, v_0^*) = \sup_{(u,v) \in V \times Y} \{ \langle u, v_1^* \rangle_{L^2} - \langle v, v_0^* \rangle_{L^2} - G(u, v) \} = \frac{1}{2} \int_\Omega \frac{(v_1^*)^2}{2 v_0^* + K} \, dx + \frac{1}{2\alpha} \int_\Omega (v_0^*)^2 \, dx + \beta \int_\Omega v_0^* \, dx,$$
if $v_0^* \in B^*$, where
$$B^* = \{ v_0^* \in Y^* \,:\, \| v_0^* \|_\infty \le K/2 \text{ and } -\gamma \nabla^2 + 2 v_0^* < -\varepsilon\, I_d \},$$
for some small real parameter $\varepsilon > 0$, and where $I_d$ denotes a concerning identity operator.
Finally, we also define $J_1^* : [Y^*]^2 \times B^* \to \mathbb{R}$ by
$$J_1^*(v_2^*, v_1^*, v_0^*) = F_1^*(v_2^*, v_1^*, v_0^*) + F_2^*(v_2^*) - G^*(v_1^*, v_0^*).$$
Assuming
$$K_2 \gg K_1 \gg K \gg \max \{ 1/\varepsilon^2, 1, \gamma, \alpha \},$$
and directly computing $\delta^2 J_1^*(v_2^*, v_1^*, v_0^*)$, we may obtain that, for such specified real constants, $J_1^*$ is convex in $v_2^*$ and concave in $(v_1^*, v_0^*)$ on $Y^* \times Y^* \times B^*$.
Considering such statements and definitions, we may prove the following theorem.
Theorem 4. 
Let $(\hat{v}_2^*, \hat{v}_1^*, \hat{v}_0^*) \in Y^* \times Y^* \times B^*$ be such that
$$\delta J_1^*(\hat{v}_2^*, \hat{v}_1^*, \hat{v}_0^*) = 0,$$
and let $u_0 \in V$ be such that
$$u_0 = \frac{\hat{v}_1^* + \hat{v}_2^* + K_1 (-\gamma \nabla^2 + 2 \hat{v}_0^*) f}{K_2 - K - \gamma \nabla^2 + K_1 (-\gamma \nabla^2 + 2 \hat{v}_0^*)^2}.$$
Under such hypotheses, we have
$$\delta J(u_0) = 0,$$
so that
$$J(u_0) = \inf_{u \in V} \left\{ J(u) + \frac{K_1}{2} \int_\Omega (-\gamma \nabla^2 u + 2 \hat{v}_0^* u - f)^2 \, dx \right\} = \inf_{v_2^* \in Y^*} \sup_{(v_1^*, v_0^*) \in Y^* \times B^*} J_1^*(v_2^*, v_1^*, v_0^*) = J_1^*(\hat{v}_2^*, \hat{v}_1^*, \hat{v}_0^*).$$
Proof. 
Observe that $\delta J_1^*(\hat{v}_2^*, \hat{v}_1^*, \hat{v}_0^*) = 0$, so that, since $J_1^*$ is convex in $v_2^*$ and concave in $(v_1^*, v_0^*)$ on $Y^* \times Y^* \times B^*$, we obtain
$$J_1^*(\hat{v}_2^*, \hat{v}_1^*, \hat{v}_0^*) = \inf_{v_2^* \in Y^*} \sup_{(v_1^*, v_0^*) \in Y^* \times B^*} J_1^*(v_2^*, v_1^*, v_0^*).$$
Now, we show that
$$\delta J(u_0) = 0.$$
From
$$\frac{\partial J_1^*(\hat{v}_2^*, \hat{v}_1^*, \hat{v}_0^*)}{\partial v_2^*} = 0,$$
we have
$$u_0 + \frac{\hat{v}_2^*}{K_2} = 0,$$
and thus,
$$\hat{v}_2^* = -K_2 u_0.$$
From
$$\frac{\partial J_1^*(\hat{v}_2^*, \hat{v}_1^*, \hat{v}_0^*)}{\partial v_1^*} = 0,$$
we obtain
$$u_0 - \frac{\hat{v}_1^* - f}{2 \hat{v}_0^* + K} = 0,$$
and thus,
$$\hat{v}_1^* = 2 \hat{v}_0^* u_0 + K u_0 + f.$$
Finally, denoting
$$D = -\gamma \nabla^2 u_0 + 2 \hat{v}_0^* u_0 - f,$$
from
$$\frac{\partial J_1^*(\hat{v}_2^*, \hat{v}_1^*, \hat{v}_0^*)}{\partial v_0^*} = 0,$$
we have
$$-2 D u_0 + u_0^2 - \frac{\hat{v}_0^*}{\alpha} - \beta = 0,$$
so that
$$\hat{v}_0^* = \alpha (u_0^2 - \beta - 2 D u_0).$$
Observe now that
$$\hat{v}_1^* + \hat{v}_2^* + K_1 (-\gamma \nabla^2 + 2 \hat{v}_0^*) f = \left( K_2 - K - \gamma \nabla^2 + K_1 (-\gamma \nabla^2 + 2 \hat{v}_0^*)^2 \right) u_0,$$
so that
$$-K_2 u_0 + 2 \hat{v}_0^* u_0 + K u_0 + f = K_2 u_0 - K u_0 - \gamma \nabla^2 u_0 + K_1 (-\gamma \nabla^2 + 2 \hat{v}_0^*)(-\gamma \nabla^2 u_0 + 2 \hat{v}_0^* u_0 - f).$$
The solution of this last system of equations is obtained through the relations
$$\hat{v}_0^* = \alpha (u_0^2 - \beta)$$
and
$$-\gamma \nabla^2 u_0 + 2 \hat{v}_0^* u_0 - f = D = 0,$$
so that
$$\delta J(u_0) = -\gamma \nabla^2 u_0 + 2 \alpha (u_0^2 - \beta) u_0 - f = 0$$
and
$$\delta \left\{ J(u_0) + \frac{K_1}{2} \int_\Omega (-\gamma \nabla^2 u_0 + 2 \hat{v}_0^* u_0 - f)^2 \, dx \right\} = 0,$$
and hence, from the concerning convexity in u on V,
$$J(u_0) = \min_{u \in V} \left\{ J(u) + \frac{K_1}{2} \int_\Omega (-\gamma \nabla^2 u + 2 \hat{v}_0^* u - f)^2 \, dx \right\}.$$
Moreover, from the Legendre transform properties,
$$F_1^*(\hat{v}_2^*, \hat{v}_1^*, \hat{v}_0^*) = \langle u_0, \hat{v}_2^* + \hat{v}_1^* \rangle_{L^2} - F_1(u_0, \hat{v}_0^*),$$
$$F_2^*(\hat{v}_2^*) = \langle u_0, \hat{v}_2^* \rangle_{L^2} - F_2(u_0),$$
$$G^*(\hat{v}_1^*, \hat{v}_0^*) = \langle u_0, \hat{v}_1^* \rangle_{L^2} - \langle 0, \hat{v}_0^* \rangle_{L^2} - G(u_0, 0),$$
so that
$$J_1^*(\hat{v}_2^*, \hat{v}_1^*, \hat{v}_0^*) = F_1^*(\hat{v}_2^*, \hat{v}_1^*, \hat{v}_0^*) + F_2^*(\hat{v}_2^*) - G^*(\hat{v}_1^*, \hat{v}_0^*) = F_1(u_0, \hat{v}_0^*) - F_2(u_0) + G(u_0, 0) = J(u_0).$$
Joining the pieces, we have
$$J(u_0) = \inf_{u \in V} \left\{ J(u) + \frac{K_1}{2} \int_\Omega (-\gamma \nabla^2 u + 2 \hat{v}_0^* u - f)^2 \, dx \right\} = \inf_{v_2^* \in Y^*} \sup_{(v_1^*, v_0^*) \in Y^* \times B^*} J_1^*(v_2^*, v_1^*, v_0^*) = J_1^*(\hat{v}_2^*, \hat{v}_1^*, \hat{v}_0^*).$$
The proof is complete. □
Remark 3. 
We could have also defined
$$B^* = \{ v_0^* \in Y^* \,:\, \| v_0^* \|_\infty \le K/2 \text{ and } -\gamma \nabla^2 + 2 v_0^* > \varepsilon\, I_d \},$$
for some small real parameter $\varepsilon > 0$. In this case, $-\gamma \nabla^2 + 2 v_0^*$ is positive definite, whereas in the previous case, $-\gamma \nabla^2 + 2 v_0^*$ is negative definite.

6. Another Convex Dual Variational Formulation

In this section, again for $\Omega \subset \mathbb{R}^3$, an open, bounded, connected set with a regular (Lipschitzian) boundary $\partial\Omega$, and $\gamma > 0$, $\alpha > 0$, $\beta > 0$ and $f \in L^2(\Omega)$, we define $F_1 : V \times Y \to \mathbb{R}$, $F_2 : V \to \mathbb{R}$ and $G : Y \to \mathbb{R}$ by
$$F_1(u, v_0^*) = \frac{\gamma}{2} \int_\Omega \nabla u \cdot \nabla u \, dx + \langle u^2, v_0^* \rangle_{L^2} + \frac{K_1}{2} \int_\Omega (-\gamma \nabla^2 u + 2 v_0^* u - f)^2 \, dx + \frac{K_2}{2} \int_\Omega u^2 \, dx,$$
$$F_2(u) = \frac{K_2}{2} \int_\Omega u^2 \, dx + \langle u, f \rangle_{L^2},$$
and
$$G(u^2) = \frac{\alpha}{2} \int_\Omega (u^2 - \beta)^2 \, dx.$$
We also define
$$J_1(u, v_0^*) = F_1(u, v_0^*) - F_2(u) - \langle u^2, v_0^* \rangle_{L^2} + G(u^2),$$
$$J(u) = \frac{\gamma}{2} \int_\Omega \nabla u \cdot \nabla u \, dx + \frac{\alpha}{2} \int_\Omega (u^2 - \beta)^2 \, dx - \langle u, f \rangle_{L^2},$$
$$A^+ = \{ u \in V \,:\, u f > 0, \text{ a.e. in } \Omega \},$$
$$V_2 = \{ u \in V \,:\, \| u \|_\infty \le K_3 \},$$
$$V_1 = A^+ \cap V_2,$$
and $F_1^* : [Y^*]^2 \to \mathbb{R}$, $F_2^* : Y^* \to \mathbb{R}$, and $G^* : Y^* \to \mathbb{R}$ by
$$F_1^*(v_2^*, v_0^*) = \sup_{u \in V} \{ \langle u, v_2^* \rangle_{L^2} - F_1(u, v_0^*) \} = \frac{1}{2} \int_\Omega \frac{\left( v_2^* + K_1 (-\gamma \nabla^2 + 2 v_0^*) f \right)^2}{-\gamma \nabla^2 + 2 v_0^* + K_2 + K_1 (-\gamma \nabla^2 + 2 v_0^*)^2} \, dx - \frac{K_1}{2} \int_\Omega f^2 \, dx,$$
$$F_2^*(v_2^*) = \sup_{u \in V} \{ \langle u, v_2^* \rangle_{L^2} - F_2(u) \} = \frac{1}{2 K_2} \int_\Omega (v_2^* - f)^2 \, dx,$$
and
$$G^*(v_0^*) = \sup_{v \in Y} \{ \langle v, v_0^* \rangle_{L^2} - G(v) \} = \frac{1}{2\alpha} \int_\Omega (v_0^*)^2 \, dx + \beta \int_\Omega v_0^* \, dx.$$
At this point, we define
$$B_1^* = \{ v_0^* \in Y^* \,:\, \| v_0^* \|_\infty \le K/2 \},$$
$$B_2^* = \{ v_0^* \in Y^* \,:\, -\gamma \nabla^2 + 2 v_0^* + K_1 (-\gamma \nabla^2 + 2 v_0^*)^2 > 0 \},$$
$$B_3^* = \{ v_0^* \in Y^* \,:\, -1/\alpha + 4 K_1 [u(v_2^*, v_0^*)^2] + 100/K_2 \le 0, \ \forall v_2^* \in E_1^* \},$$
where
$$u(v_2^*, v_0^*) = \frac{\varphi_1}{\varphi},$$
$$\varphi_1 = v_2^* + K_1 (-\gamma \nabla^2 + 2 v_0^*) f$$
and
$$\varphi = -\gamma \nabla^2 + 2 v_0^* + K_1 (-\gamma \nabla^2 + 2 v_0^*)^2 + K_2.$$
Finally, we also define
$$E_1^* = \{ v_2^* \in Y^* \,:\, \| v_2^* \|_\infty \le (5/4) K_2 \},$$
$$E_2^* = \{ v_2^* \in Y^* \,:\, f\, v_2^* > 0, \text{ a.e. in } \Omega \},$$
$$E^* = E_1^* \cap E_2^*,$$
$$B^* = B_1^* \cap B_3^*,$$
and $J_1^* : E^* \times B^* \to \mathbb{R}$ by
$$J_1^*(v_2^*, v_0^*) = F_1^*(v_2^*, v_0^*) + F_2^*(v_2^*) - G^*(v_0^*).$$
Moreover, assume
$$K_2 \gg K_1 \gg K \gg K_3 \gg \max \{ 1, \gamma, \alpha \}.$$
By directly computing $\delta^2 J_1^*(v_2^*, v_0^*)$, we may obtain that, for such specified real constants, $J_1^*$ is concave in $v_0^*$ on $E^* \times B^*$.
Indeed, recalling that
$$\varphi = -\gamma \nabla^2 + 2 v_0^* + K_1 (-\gamma \nabla^2 + 2 v_0^*)^2 + K_2,$$
$$\varphi_1 = v_2^* + K_1 (-\gamma \nabla^2 + 2 v_0^*) f,$$
and
$$u = \frac{\varphi_1}{\varphi},$$
we obtain
$$\frac{\partial^2 J_1^*(v_2^*, v_0^*)}{\partial (v_2^*)^2} = \frac{1}{K_2} - \frac{1}{\varphi} > 0$$
in $E^* \times B_3^*$, and
$$\frac{\partial^2 J_1^*(v_2^*, v_0^*)}{\partial (v_0^*)^2} = 4 u^2 K_1 - \frac{1}{\alpha} + O(1/K_2) < 0$$
in $E^* \times B^*$.
Considering such statements and definitions, we may prove the following theorem.
Theorem 5. 
Let $(\hat{v}_2^*, \hat{v}_0^*) \in E^* \times (B^* \cap B_2^*)$ be such that
$$\delta J_1^*(\hat{v}_2^*, \hat{v}_0^*) = 0,$$
and let $u_0 \in V_1$ be such that
$$u_0 = \frac{\hat{v}_2^* + K_1 (-\gamma \nabla^2 + 2 \hat{v}_0^*) f}{K_2 + 2 \hat{v}_0^* - \gamma \nabla^2 + K_1 (-\gamma \nabla^2 + 2 \hat{v}_0^*)^2}.$$
Under such hypotheses, we have
$$\delta J(u_0) = 0,$$
so that
$$J(u_0) = \inf_{u \in V_1} \left\{ J(u) + \frac{K_1}{2} \int_\Omega (-\gamma \nabla^2 u + 2 \hat{v}_0^* u - f)^2 \, dx \right\} = \inf_{v_2^* \in E^*} \sup_{v_0^* \in B^*} J_1^*(v_2^*, v_0^*) = J_1^*(\hat{v}_2^*, \hat{v}_0^*).$$
Proof. 
Observe that $\delta J_1^*(\hat{v}_2^*, \hat{v}_0^*) = 0$, so that, since $J_1^*$ is concave in $v_0^*$ on $E^* \times B^*$, $\hat{v}_0^* \in B_2^*$ and $J_1^*$ is quadratic in $v_2^*$, we have
$$\sup_{v_0^* \in B^*} J_1^*(\hat{v}_2^*, v_0^*) = J_1^*(\hat{v}_2^*, \hat{v}_0^*) = \inf_{v_2^* \in E^*} J_1^*(v_2^*, \hat{v}_0^*).$$
Consequently, from this and the Min–Max Theorem, we obtain
$$J_1^*(\hat{v}_2^*, \hat{v}_0^*) = \inf_{v_2^* \in E^*} \sup_{v_0^* \in B^*} J_1^*(v_2^*, v_0^*) = \sup_{v_0^* \in B^*} \inf_{v_2^* \in E^*} J_1^*(v_2^*, v_0^*).$$
Now, we show that
$$\delta J(u_0) = 0.$$
From
$$\frac{\partial J_1^*(\hat{v}_2^*, \hat{v}_0^*)}{\partial v_2^*} = 0,$$
we have
$$u_0 + \frac{\hat{v}_2^*}{K_2} = 0,$$
and thus,
$$\hat{v}_2^* = -K_2 u_0.$$
Finally, denoting
$$D = -\gamma \nabla^2 u_0 + 2 \hat{v}_0^* u_0 - f,$$
from
$$\frac{\partial J_1^*(\hat{v}_2^*, \hat{v}_0^*)}{\partial v_0^*} = 0,$$
we have
$$-2 D u_0 + u_0^2 - \frac{\hat{v}_0^*}{\alpha} - \beta = 0,$$
so that
$$\hat{v}_0^* = \alpha (u_0^2 - \beta - 2 D u_0).$$
Observe now that
$$\hat{v}_2^* + K_1 (-\gamma \nabla^2 + 2 \hat{v}_0^*) f = \left( K_2 - \gamma \nabla^2 + 2 \hat{v}_0^* + K_1 (-\gamma \nabla^2 + 2 \hat{v}_0^*)^2 \right) u_0,$$
so that
$$-K_2 u_0 = K_2 u_0 - \gamma \nabla^2 u_0 + 2 \hat{v}_0^* u_0 + K_1 (-\gamma \nabla^2 + 2 \hat{v}_0^*)(-\gamma \nabla^2 u_0 + 2 \hat{v}_0^* u_0 - f).$$
The solution of this last equation is obtained through the relation
$$-\gamma \nabla^2 u_0 + 2 \hat{v}_0^* u_0 - f = D = 0,$$
so that, from this and the previous relation for $\hat{v}_0^*$, we have
$$\hat{v}_0^* = \alpha (u_0^2 - \beta).$$
Thus,
$$\delta J(u_0) = -\gamma \nabla^2 u_0 + 2 \alpha (u_0^2 - \beta) u_0 - f = 0$$
and
$$\delta \left\{ J(u_0) + \frac{K_1}{2} \int_\Omega (-\gamma \nabla^2 u_0 + 2 \hat{v}_0^* u_0 - f)^2 \, dx \right\} = 0,$$
and hence, from the concerning convexity in u on V,
$$J(u_0) = \min_{u \in V} \left\{ J(u) + \frac{K_1}{2} \int_\Omega (-\gamma \nabla^2 u + 2 \hat{v}_0^* u - f)^2 \, dx \right\}.$$
Moreover, from the Legendre transform properties,
$$F_1^*(\hat{v}_2^*, \hat{v}_0^*) = \langle u_0, \hat{v}_2^* \rangle_{L^2} - F_1(u_0, \hat{v}_0^*),$$
$$F_2^*(\hat{v}_2^*) = \langle u_0, \hat{v}_2^* \rangle_{L^2} - F_2(u_0),$$
$$G^*(\hat{v}_0^*) = \langle u_0^2, \hat{v}_0^* \rangle_{L^2} - G(u_0^2),$$
so that
$$J_1^*(\hat{v}_2^*, \hat{v}_0^*) = F_1^*(\hat{v}_2^*, \hat{v}_0^*) + F_2^*(\hat{v}_2^*) - G^*(\hat{v}_0^*) = F_1(u_0, \hat{v}_0^*) - F_2(u_0) - \langle u_0^2, \hat{v}_0^* \rangle_{L^2} + G(u_0^2) = J(u_0).$$
Joining the pieces, we have
$$J(u_0) = \inf_{u \in V_1} \left\{ J(u) + \frac{K_1}{2} \int_\Omega (-\gamma \nabla^2 u + 2 \hat{v}_0^* u - f)^2 \, dx \right\} = \inf_{v_2^* \in E^*} \sup_{v_0^* \in B^*} J_1^*(v_2^*, v_0^*) = J_1^*(\hat{v}_2^*, \hat{v}_0^*).$$
The proof is complete. □

7. A Related Numerical Computation through the Generalized Method of Lines

In the next few lines, we present some improvements concerning the initial conception of the generalized method of lines, originally published in the book entitled “Topics on Functional Analysis, Calculus of Variations and Duality” [9], 2011.
Concerning such a method, other important results may be found in articles and books such as [7,9,13].
Specifically, regarding the improvement previously mentioned, we have changed the way we truncate the series solution obtained through an application of the Banach fixed point theorem to find the relation between two adjacent lines. The results obtained are very good even when the typical parameter $\varepsilon > 0$ is very small.
In the next few lines and sections, we develop such a numerical procedure in detail.

7.1. About a Concerning Improvement to the Generalized Method of Lines

Let $\Omega \subset \mathbb{R}^2$ be given by
$$\Omega = \{ (r, \theta) \in \mathbb{R}^2 \,:\, 1 \le r \le 2,\ 0 \le \theta \le 2\pi \}.$$
Consider the problem of solving the partial differential equation
$$\varepsilon \left( \frac{\partial^2 u}{\partial r^2} + \frac{1}{r} \frac{\partial u}{\partial r} + \frac{1}{r^2} \frac{\partial^2 u}{\partial \theta^2} \right) + \alpha u^3 - \beta u = f \text{ in } \Omega, \quad u = u_0(\theta) \text{ on } \partial\Omega_1, \quad u = u_f(\theta) \text{ on } \partial\Omega_2.$$
Here,
$$\partial\Omega_1 = \{ (1, \theta) \in \mathbb{R}^2 \,:\, 0 \le \theta \le 2\pi \},$$
$$\partial\Omega_2 = \{ (2, \theta) \in \mathbb{R}^2 \,:\, 0 \le \theta \le 2\pi \},$$
$\varepsilon > 0$, $\alpha > 0$, $\beta > 0$, and $f \equiv 1$ on $\Omega$.
In a partial finite differences scheme (for the standard finite differences method, please see [14]), such a system stands for
$$\varepsilon \left( \frac{u_{n+1} - 2 u_n + u_{n-1}}{d^2} + \frac{1}{t_n} \frac{u_n - u_{n-1}}{d} + \frac{1}{t_n^2} \frac{\partial^2 u_n}{\partial \theta^2} \right) + \alpha u_n^3 - \beta u_n = f_n,$$
$\forall n \in \{1, \dots, N-1\}$, with the boundary conditions
$$u_0 = u_0(\theta)$$
and
$$u_N = u_f(\theta).$$
Here, N is the number of lines, $t_n = 1 + n d$, and $d = 1/N$.
In particular, for n = 1, we have
$$\varepsilon \left( \frac{u_2 - 2 u_1 + u_0}{d^2} + \frac{1}{t_1} \frac{u_1 - u_0}{d} + \frac{1}{t_1^2} \frac{\partial^2 u_1}{\partial \theta^2} \right) + \alpha u_1^3 - \beta u_1 = f_1,$$
so that
$$u_1 = \left( u_2 + u_1 + u_0 + \frac{1}{t_1} (u_1 - u_0) d + \frac{1}{t_1^2} \frac{\partial^2 u_1}{\partial \theta^2} d^2 + (\alpha u_1^3 - \beta u_1 - f_1) \frac{d^2}{\varepsilon} \right) / 3.0.$$
We solve this last equation through the Banach fixed point theorem, obtaining $u_1$ as a function of $u_2$ (and of the boundary data $u_0$).
Indeed, we may set
$$u_1^0 = u_2$$
and
$$u_1^{k+1} = \left( u_2 + u_1^k + u_0 + \frac{1}{t_1} (u_1^k - u_0) d + \frac{1}{t_1^2} \frac{\partial^2 u_1^k}{\partial \theta^2} d^2 + (\alpha (u_1^k)^3 - \beta u_1^k - f_1) \frac{d^2}{\varepsilon} \right) / 3.0,$$
$\forall k \in \mathbb{N}$.
Thus, we may obtain
$$u_1 = \lim_{k \to \infty} u_1^k \equiv H_1(u_2, u_0).$$
Similarly, for n = 2, we have
$$u_2 = \left( u_3 + u_2 + H_1(u_2, u_0) + \frac{1}{t_2} (u_2 - H_1(u_2, u_0)) d + \frac{1}{t_2^2} \frac{\partial^2 u_2}{\partial \theta^2} d^2 + (\alpha u_2^3 - \beta u_2 - f_2) \frac{d^2}{\varepsilon} \right) / 3.0.$$
We solve this last equation through the Banach fixed point theorem, obtaining $u_2$ as a function of $u_3$ and $u_0$.
Indeed, we may set
$$u_2^0 = u_3$$
and
$$u_2^{k+1} = \left( u_3 + u_2^k + H_1(u_2^k, u_0) + \frac{1}{t_2} (u_2^k - H_1(u_2^k, u_0)) d + \frac{1}{t_2^2} \frac{\partial^2 u_2^k}{\partial \theta^2} d^2 + (\alpha (u_2^k)^3 - \beta u_2^k - f_2) \frac{d^2}{\varepsilon} \right) / 3.0,$$
$\forall k \in \mathbb{N}$.
Thus, we may obtain
$$u_2 = \lim_{k \to \infty} u_2^k \equiv H_2(u_3, u_0).$$
Now reasoning inductively, having
$$u_{n-1} = H_{n-1}(u_n, u_0),$$
we may obtain
$$u_n = \left( u_{n+1} + u_n + H_{n-1}(u_n, u_0) + \frac{1}{t_n} (u_n - H_{n-1}(u_n, u_0)) d + \frac{1}{t_n^2} \frac{\partial^2 u_n}{\partial \theta^2} d^2 + (\alpha u_n^3 - \beta u_n - f_n) \frac{d^2}{\varepsilon} \right) / 3.0.$$
We solve this last equation through the Banach fixed point theorem, obtaining $u_n$ as a function of $u_{n+1}$ and $u_0$.
Indeed, we may set
$$u_n^0 = u_{n+1}$$
and
$$u_n^{k+1} = \left( u_{n+1} + u_n^k + H_{n-1}(u_n^k, u_0) + \frac{1}{t_n} (u_n^k - H_{n-1}(u_n^k, u_0)) d + \frac{1}{t_n^2} \frac{\partial^2 u_n^k}{\partial \theta^2} d^2 + (\alpha (u_n^k)^3 - \beta u_n^k - f_n) \frac{d^2}{\varepsilon} \right) / 3.0,$$
$\forall k \in \mathbb{N}$.
Thus, we may obtain
$$u_n = \lim_{k \to \infty} u_n^k \equiv H_n(u_{n+1}, u_0).$$
We have thus obtained $u_n = H_n(u_{n+1}, u_0)$, $\forall n \in \{1, \dots, N-1\}$.
In particular, $u_N = u_f(\theta)$, so that we may obtain
$$u_{N-1} = H_{N-1}(u_N, u_0) \equiv F_{N-1}(u_N, u_0) = F_{N-1}(u_f(\theta), u_0(\theta)).$$
Similarly,
$$u_{N-2} = H_{N-2}(u_{N-1}, u_0) = H_{N-2}(H_{N-1}(u_N, u_0), u_0) \equiv F_{N-2}(u_N, u_0) = F_{N-2}(u_f(\theta), u_0(\theta)),$$
and so on, until the following is obtained:
$$u_1 = H_1(u_2, u_0) \equiv F_1(u_N, u_0) = F_1(u_f(\theta), u_0(\theta)).$$
The problem is then approximately solved.
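The line-by-line elimination above can be sketched numerically in a simplified setting. The following Python sketch treats the LINEAR, theta-independent case (the cubic term dropped, boundary data constant in theta), in which the relation $u_n = H_n(u_{n+1})$ between adjacent lines is affine, $u_n = a_n + b_n u_{n+1}$, and each per-line equation can be solved in closed form instead of via the Banach fixed-point iteration used for the cubic term in the text. All parameter values are illustrative:

```python
import numpy as np

# Generalized-method-of-lines sketch for the linear, theta-independent case:
# eps*(u'' + u'/r) - beta*u = f on 1 < r < 2, u(1) = 0, u(2) = uf.
# Each line value satisfies u_n = a[n] + b[n] * u_{n+1} (an affine H_n).
N, eps, beta, f, uf = 10, 0.1, 1.0, 1.0, 0.2
d = 1.0 / N

a = np.zeros(N + 1)          # u_n = a[n] + b[n] * u_{n+1}
b = np.zeros(N + 1)          # u_0 = 0 gives a[0] = b[0] = 0

for n in range(1, N):
    t  = 1.0 + n * d
    c1 = eps / d**2
    c2 = eps / (t * d)
    # eps*(u_{n+1} - 2 u_n + u_{n-1})/d^2 + eps*(u_n - u_{n-1})/(t d)
    #   - beta*u_n = f, with u_{n-1} = a[n-1] + b[n-1]*u_n eliminated
    den  = 2.0 * c1 - c2 + beta - (c1 - c2) * b[n - 1]
    b[n] = c1 / den
    a[n] = ((c1 - c2) * a[n - 1] - f) / den

# back-substitution from the outer boundary value u_N = uf
u = np.zeros(N + 1)
u[N] = uf
for n in range(N - 1, 0, -1):
    u[n] = a[n] + b[n] * u[n + 1]

# residual of the finite-difference equation on every interior line
res = [eps * (u[n + 1] - 2 * u[n] + u[n - 1]) / d**2
       + eps * (u[n] - u[n - 1]) / ((1.0 + n * d) * d)
       - beta * u[n] - f
       for n in range(1, N)]
print(max(abs(r) for r in res))      # essentially zero
```

In the nonlinear case of the text, $H_n$ is no longer affine, and the fixed-point iteration with a truncated series expansion (as in the Mathematica listing below) replaces the closed-form solve.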

7.2. Software in Mathematica for Solving Such an Equation

We recall that the equation to be solved is a Ginzburg–Landau-type one, where
ε 2 u r 2 + 1 r u r + 1 r 2 2 u θ 2 + α u 3 β u = f , in Ω , u = 0 , on Ω 1 , u = u f ( θ ) , on Ω 2 .
Here,
Ω = { ( r , θ ) R 2 : 1 r 2 , 0 θ 2 π } ,
Ω 1 = { ( 1 , θ ) R 2 : 0 θ 2 π } ,
Ω 2 = { ( 2 , θ ) R 2 : 0 θ 2 π } ,
ε > 0 , α > 0 , β > 0 , and f 1 , on Ω . In a partial finite differences scheme, such a system stands for
$$-\varepsilon\left(\frac{u_{n+1} - 2u_n + u_{n-1}}{d^2} + \frac{1}{t_n}\,\frac{u_n - u_{n-1}}{d} + \frac{1}{t_n^2}\,\frac{\partial^2 u_n}{\partial \theta^2}\right) + \alpha u_n^3 - \beta u_n = f_n,$$
$\forall n \in \{1, \ldots, N-1\}$, with the boundary conditions
u 0 = 0 ,
and
u N = u f [ x ] .
Here, $N$ is the number of lines, $d = 1/N$, and $t_n = 1 + n\,d$, $n \in \{1, \ldots, N-1\}$.
At this point, we present the corresponding software for obtaining an approximate solution.
The software is written for $N = 10$ (10 lines) and $u_0[x] = 0$.
*************************************
  • m8 = 10;   (* N = 10 lines *)
  • d = 1/m8;
  • e1 = 0.1;   (* ε = 0.1 *)
  • A = 1.0;
  • B = 1.0;
  • For[i = 1, i < m8, i++, f[i] = 1.0];   (* f ≡ 1 on Ω *)
  • a = 0.0;
  • For[i = 1, i < m8, i++,
    Clear[b, u];
    t[i] = 1 + i*d;
    b[x] = u[i + 1][x];
  • For[k = 1, k < 30, k++,   (* we have fixed the number of iterations *)
    z = (u[i + 1][x] + b[x] + a + (1/t[i])*(b[x] - a)*d + (1/t[i]^2)*D[b[x], {x, 2}]*d^2 + (-A*b[x]^3 + B*b[x] + f[i])*d^2/e1)/3.0;
    z = Series[z, {u[i + 1][x], 0, 3}, {u[i + 1]'[x], 0, 1}, {u[i + 1]''[x], 0, 1}, {u[i + 1]'''[x], 0, 0}, {u[i + 1]''''[x], 0, 0}];
    z = Normal[z];
    z = Expand[z];
    b[x] = z];
  • a1[i] = z;
  • Clear[b];
  • u[i + 1][x] = b[x];
  • a = a1[i]];
  • b[x] = uf[x];
  • For[i = 1, i < m8, i++,
    A1 = a1[m8 - i];
    A1 = Series[A1, {uf[x], 0, 3}, {uf'[x], 0, 1}, {uf''[x], 0, 1}, {uf'''[x], 0, 0}, {uf''''[x], 0, 0}];
    A1 = Normal[A1];
    A1 = Expand[A1];
    u[m8 - i][x] = A1;
    b[x] = A1];
    Print[u[m8/2][x]];
*************************************
The numerical expressions for the $N = 10$ line solutions are given by:
u[1][x] = 0.47352 + 0.00691 uf[x] − 0.00459 uf[x]^2 + 0.00265 uf[x]^3 + 0.00039 (uf)''[x] − 0.00058 uf[x] (uf)''[x] + 0.00050 uf[x]^2 (uf)''[x] − 0.000181213 uf[x]^3 (uf)''[x]
u[2][x] = 0.76763 + 0.01301 uf[x] − 0.00863 uf[x]^2 + 0.00497 uf[x]^3 + 0.00068 (uf)''[x] − 0.00103 uf[x] (uf)''[x] + 0.00088 uf[x]^2 (uf)''[x] − 0.00034 uf[x]^3 (uf)''[x]
u[3][x] = 0.91329 + 0.02034 uf[x] − 0.01342 uf[x]^2 + 0.00768 uf[x]^3 + 0.00095 (uf)''[x] − 0.00144 uf[x] (uf)''[x] + 0.00122 uf[x]^2 (uf)''[x] − 0.00051 uf[x]^3 (uf)''[x]
u[4][x] = 0.97125 + 0.03623 uf[x] − 0.02328 uf[x]^2 + 0.01289 uf[x]^3 + 0.00147331 (uf)''[x] − 0.00223 uf[x] (uf)''[x] + 0.00182 uf[x]^2 (uf)''[x] − 0.00074 uf[x]^3 (uf)''[x]
u[5][x] = 1.01736 + 0.09242 uf[x] − 0.05110 uf[x]^2 + 0.02387 uf[x]^3 + 0.00211 (uf)''[x] − 0.00378 uf[x] (uf)''[x] + 0.00292 uf[x]^2 (uf)''[x] − 0.00132 uf[x]^3 (uf)''[x]
u[6][x] = 1.02549 + 0.21039 uf[x] − 0.09374 uf[x]^2 + 0.03422 uf[x]^3 + 0.00147 (uf)''[x] − 0.00634 uf[x] (uf)''[x] + 0.00467 uf[x]^2 (uf)''[x] − 0.00200 uf[x]^3 (uf)''[x]
u[7][x] = 0.93854 + 0.36459 uf[x] − 0.14232 uf[x]^2 + 0.04058 uf[x]^3 + 0.00259 (uf)''[x] − 0.00747373 uf[x] (uf)''[x] + 0.0047969 uf[x]^2 (uf)''[x] − 0.00194 uf[x]^3 (uf)''[x]
u[8][x] = 0.74649 + 0.57201 uf[x] − 0.17293 uf[x]^2 + 0.02791 uf[x]^3 + 0.00353 (uf)''[x] − 0.00658 uf[x] (uf)''[x] + 0.00407 uf[x]^2 (uf)''[x] − 0.00172 uf[x]^3 (uf)''[x]
u[9][x] = 0.43257 + 0.81004 uf[x] − 0.13080 uf[x]^2 + 0.00042 uf[x]^3 + 0.00294 (uf)''[x] − 0.00398 uf[x] (uf)''[x] + 0.00222 uf[x]^2 (uf)''[x] − 0.00066 uf[x]^3 (uf)''[x]
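Each line is thus an explicit function of the boundary data $u_f$ and its second derivative. As an illustrative check (our own sketch; the coefficients of line 5 are read from the printed output above, with $(u_f)''$ the second derivative), one may evaluate $u[5][x]$ for $u_f(x) = \sin(x)$, for which $(u_f)''(x) = -\sin(x)$:

```python
# Evaluate line 5 of the printed output for uf(x) = sin(x).
# Coefficients as listed above; (uf)'' is taken as the second
# derivative of the boundary data, so (uf)''(x) = -sin(x).

import math

def u5(x):
    s = math.sin(x)        # uf[x]
    s2 = -math.sin(x)      # (uf)''[x] for uf = sin
    return (1.01736 + 0.09242 * s - 0.05110 * s ** 2 + 0.02387 * s ** 3
            + 0.00211 * s2 - 0.00378 * s * s2
            + 0.00292 * s ** 2 * s2 - 0.00132 * s ** 3 * s2)

print(u5(0.0))             # 1.01736: at x = 0 the boundary data vanish
```

At $x = \pi/2$, where $u_f = 1$ and $(u_f)'' = -1$, the same expression gives approximately $1.0826$.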

7.3. Some Plots Concerning the Numerical Results

In this section, we present the lines 2, 4, 6 and 8 related to the results obtained in the last section.
Indeed, we present such lines, in a first step, for the results obtained through the generalized method of lines and, in a second step, through a numerical method which is a combination of the Newton method and the generalized method of lines. In a third step, we also present the graphs obtained by considering the same expressions for the lines as those given by the generalized method of lines, up to the numerical coefficients of each function term, which are obtained by the numerical minimization of the functional J specified below. We consider the case in which $u_0(x) = 0$ and $u_f(x) = \sin(x)$.
For the procedure mentioned above as the third step, recalling that $N = 10$ lines and considering that $(u_f)''(x) = -u_f(x)$, we may approximately assume the following general line expressions:
$$u_n(x) = a_{(1,n)} + a_{(2,n)}\, u_f(x) + a_{(3,n)}\, u_f(x)^2 + a_{(4,n)}\, u_f(x)^3, \quad \forall n \in \{1, \ldots, N-1\}.$$
Defining
$$W_n = -\varepsilon\,\frac{u_{n+1}(x) - 2u_n(x) + u_{n-1}(x)}{d^2} - \frac{\varepsilon}{t_n}\,\frac{u_n(x) - u_{n-1}(x)}{d} - \frac{\varepsilon}{t_n^2}\, u_n''(x) + u_n(x)^3 - u_n(x) - 1,$$
and
$$J(\{a_{(j,n)}\}) = \sum_{n=1}^{N-1} \int_0^{2\pi} W_n^2 \, dx,$$
we obtain { a ( j , n ) } by numerically minimizing J.
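A minimal sketch of this fitting step is given below (our own; the accept-if-better random search, the function names, and the use of 60 $x$-nodes instead of the 300 used for the paper's graphs are all assumptions made to keep the sketch fast and self-contained). It evaluates $J$ for the ansatz with $u_f(x) = \sin(x)$, $u_0 = 0$, $\alpha = \beta = f = 1$, and improves the coefficients $a_{(j,n)}$ only when a trial move lowers $J$:

```python
# Fit the ansatz coefficients a(j, n) by minimizing the discretized
# functional J = sum_n integral of W_n^2 dx (trapezoid-like sum on a
# periodic x-grid).  The search strategy and grid size are our own.

import math, random

N, eps, M = 10, 0.1, 60
d, h = 1.0 / N, 2 * math.pi / 60
uf = [math.sin(k * h) for k in range(M)]

def line(a, n, k):
    """u_n at node k: boundary lines for n = 0, N; ansatz otherwise."""
    if n == 0:
        return 0.0
    if n == N:
        return uf[k]
    c, s = a[n - 1], uf[k]
    return c[0] + c[1] * s + c[2] * s * s + c[3] * s ** 3

def J(a):
    total = 0.0
    for n in range(1, N):
        t = 1 + n * d
        for k in range(M):
            km, kp = (k - 1) % M, (k + 1) % M      # periodic in x
            u = line(a, n, k)
            uxx = (line(a, n, kp) - 2 * u + line(a, n, km)) / h ** 2
            W = (-eps * ((line(a, n + 1, k) - 2 * u + line(a, n - 1, k)) / d ** 2
                         + (u - line(a, n - 1, k)) / (t * d)
                         + uxx / t ** 2)
                 + u ** 3 - u - 1.0)
            total += W * W * h
    return total

random.seed(0)
a = [[0.5, 0.1, 0.0, 0.0] for _ in range(N - 1)]   # rough starting guess
start = best = J(a)
for _ in range(300):                               # accept-if-better search
    i, j = random.randrange(N - 1), random.randrange(4)
    old = a[i][j]
    a[i][j] = old + random.uniform(-0.05, 0.05)
    trial = J(a)
    if trial < best:
        best = trial
    else:
        a[i][j] = old                              # revert worsening moves
```

In practice one would use a proper least-squares or quasi-Newton routine; the point here is only the shape of the objective being minimized.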
Hence, we have obtained the following lines for these cases. For the graphs, we have considered 300 nodes in x, with a step of $2\pi/300$, for $x \in [0, 2\pi]$.
For the line 2, please see Figure 1, Figure 2 and Figure 3, obtained through the generalized method of lines, through a combination of the Newton and generalized methods of lines, and through the minimization of the functional J, respectively.
For the line 4, please see Figure 4, Figure 5 and Figure 6, obtained through the generalized method of lines, through a combination of the Newton and generalized methods of lines, and through the minimization of the functional J, respectively.
For the line 6, please see Figure 7, Figure 8 and Figure 9, obtained through the generalized method of lines, through a combination of the Newton and generalized methods of lines, and through the minimization of the functional J, respectively.
For the line 8, please see Figure 10, Figure 11 and Figure 12, obtained through the generalized method of lines, through a combination of the Newton and generalized methods of lines, and through the minimization of the functional J, respectively.

8. Conclusions

In the first part of this article, we developed duality principles for non-convex variational optimization. In the subsequent sections, we proposed dual convex formulations suitable for a large class of models in physics and engineering. In the previous section, we presented an advance concerning the computation of a solution of a partial differential equation through the generalized method of lines. In particular, in its previous versions, we truncated the series in $d^2$; however, we have realized that the results are much better when the line solutions are expanded in series in $u_f[x]$ and its derivatives, as indicated in the present software.
This is a small difference from the previous procedure, but it results in great improvements when the parameter $\varepsilon > 0$ is small.
Indeed, with a sufficiently large $N$ (number of lines), we may obtain very good qualitative results even when $\varepsilon > 0$ is very small.

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Bielski, W.R.; Galka, A.; Telega, J.J. The Complementary Energy Principle and Duality for Geometrically Nonlinear Elastic Shells. I. Simple case of moderate rotations around a tangent to the middle surface. Bull. Pol. Acad. Sci. Tech. Sci. 1988, 38, 7–9.
  2. Bielski, W.R.; Telega, J.J. A Contribution to Contact Problems for a Class of Solids and Structures. Arch. Mech. 1985, 37, 303–320.
  3. Telega, J.J. On the Complementary Energy Principle in Non-Linear Elasticity. Part I: Von Karman Plates and Three Dimensional Solids. C. R. Acad. Sci. Paris Ser. II 1989, 308, 1193–1198.
  4. Galka, A.; Telega, J.J. Duality and the Complementary Energy Principle for a Class of Geometrically Non-Linear Structures. Part I. Five parameter shell model; Part II. Anomalous dual variational principles for compressed elastic beams. Arch. Mech. 1995, 47, 677–724.
  5. Toland, J.F. A duality principle for non-convex optimisation and the calculus of variations. Arch. Rat. Mech. Anal. 1979, 71, 41–61.
  6. Adams, R.A.; Fournier, J.J.F. Sobolev Spaces, 2nd ed.; Elsevier: New York, NY, USA, 2003.
  7. Botelho, F.S. Functional Analysis, Calculus of Variations and Numerical Methods in Physics and Engineering; CRC Taylor and Francis: Boca Raton, FL, USA, 2020.
  8. Botelho, F.S. Variational Convex Analysis. Ph.D. Thesis, Virginia Tech, Blacksburg, VA, USA, 2009.
  9. Botelho, F. Topics on Functional Analysis, Calculus of Variations and Duality; Academic Publications: Sofia, Bulgaria, 2011.
  10. Rockafellar, R.T. Convex Analysis; Princeton University Press: Princeton, NJ, USA, 1970.
  11. Annett, J.F. Superconductivity, Superfluids and Condensates, 2nd ed.; Oxford Master Series in Condensed Matter Physics; Oxford University Press: Oxford, UK, 2010.
  12. Landau, L.D.; Lifshitz, E.M. Course of Theoretical Physics; Volume 5—Statistical Physics, Part 1; Butterworth-Heinemann, Elsevier: Amsterdam, The Netherlands, 2008.
  13. Botelho, F. Existence of solution for the Ginzburg–Landau system, a related optimal control problem and its computation by the generalized method of lines. Appl. Math. Comput. 2012, 218, 11976–11989.
  14. Strikwerda, J.C. Finite Difference Schemes and Partial Differential Equations, 2nd ed.; SIAM: Philadelphia, PA, USA, 2004.
Figure 1. Line 2, solution $u_2(x)$ through the generalized method of lines.
Figure 2. Line 2, solution $u_2(x)$ through Newton's Method.
Figure 3. Line 2, solution $u_2(x)$ through the minimization of functional J.
Figure 4. Line 4, solution $u_4(x)$ through the generalized method of lines.
Figure 5. Line 4, solution $u_4(x)$ through Newton's Method.
Figure 6. Line 4, solution $u_4(x)$ through the minimization of functional J.
Figure 7. Line 6, solution $u_6(x)$ through the generalized method of lines.
Figure 8. Line 6, solution $u_6(x)$ through Newton's Method.
Figure 9. Line 6, solution $u_6(x)$ through the minimization of functional J.
Figure 10. Line 8, solution $u_8(x)$ through the generalized method of lines.
Figure 11. Line 8, solution $u_8(x)$ through Newton's Method.
Figure 12. Line 8, solution $u_8(x)$ through the minimization of functional J.