
Dual Variational Formulations for a Large Class of Non-Convex Models in the Calculus of Variations

Submitted: 23 January 2023; Posted: 25 January 2023
Abstract
This article develops dual variational formulations for a large class of models in variational optimization. The results are established through basic tools of functional analysis, convex analysis and duality theory. The main duality principle is developed as an application to a Ginzburg-Landau type system in superconductivity in the absence of a magnetic field. In the first sections, we develop new general dual convex variational formulations, more specifically, dual formulations with a large region of convexity around the critical points, suitable for the non-convex optimization of a large class of models in physics and engineering. Finally, in the last section we present some numerical results concerning the generalized method of lines applied to a Ginzburg-Landau type equation.
Subject: Computer Science and Mathematics  -   Applied Mathematics

1. Introduction

In this section we establish a dual formulation for a large class of models in non-convex optimization.
The main duality principle is applied to the Ginzburg-Landau system in superconductivity in the absence of a magnetic field.
Such results are based on the works of J.J. Telega and W.R. Bielski [2,3,13,14] and on a D.C. optimization approach developed in Toland [15].
Concerning the other references, details on the Sobolev spaces involved may be found in [1]. Related results on convex analysis and duality theory are addressed in [5,6,7,9,12]. Finally, similar models in superconductivity physics may be found in [4,11].
Remark 1. 
It is worth highlighting that we may generically denote
\[ \int_\Omega \left[\left(-\gamma\nabla^2 + K\,I_d\right)^{-1} v^*\right] v^*\,dx \]
simply by
\[ \int_\Omega \frac{(v^*)^2}{-\gamma\nabla^2 + K}\,dx, \]
where $I_d$ denotes the identity operator.
Other similar notations may be used along this text, as their meaning is sufficiently clear.
Also, $\nabla^2$ denotes the Laplace operator and, for real constants $K_2 > 0$ and $K_1 > 0$, the notation $K_2 \gg K_1$ means that $K_2$ is much larger than $K_1$.
Finally, we adopt the standard Einstein convention of summing up repeated indices, unless otherwise indicated.
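To make this shorthand concrete: under homogeneous Dirichlet boundary conditions (consistent with $U = W_0^{1,2}(\Omega)$), the quotient notation above simply means
\[ \int_\Omega \frac{(v^*)^2}{-\gamma\nabla^2 + K}\,dx = \int_\Omega w\,v^*\,dx, \quad \text{where } w \text{ solves } (-\gamma\nabla^2 + K)\,w = v^* \text{ in } \Omega. \]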
In order to clarify the notation, here we introduce the definition of topological dual space.
Definition 1 
(Topological dual spaces). Let U be a Banach space. We define its topological dual space as the set of all continuous linear functionals defined on U. We assume such a dual space of U may be represented by another Banach space $U^*$, through a bilinear form $\langle \cdot,\cdot\rangle_U : U\times U^* \to \mathbb{R}$ (here we refer to the standard representations of the dual spaces of Sobolev and Lebesgue spaces). Thus, given $f: U \to \mathbb{R}$ linear and continuous, we assume the existence of a unique $u^* \in U^*$ such that
\[ f(u) = \langle u, u^*\rangle_U, \;\forall u \in U. \]
The norm of f, denoted by $\|f\|_{U^*}$, is defined as
\[ \|f\|_{U^*} = \sup_{u\in U}\{|\langle u, u^*\rangle_U| \;:\; \|u\|_U \le 1\} \equiv \|u^*\|_{U^*}. \]
At this point we start to describe the primal and dual variational formulations.
Let $\Omega \subset \mathbb{R}^3$ be an open, bounded, connected set with a regular (Lipschitzian) boundary denoted by $\partial\Omega$.
Firstly we emphasize that, for the Banach space $Y = Y^* = L^2(\Omega)$, we have
\[ \langle v, v^*\rangle_{L^2} = \int_\Omega v\,v^*\,dx, \;\forall v, v^* \in L^2(\Omega). \]
For the primal formulation we consider the functional $J: U \to \mathbb{R}$, where
\[ J(u) = \frac{\gamma}{2}\int_\Omega \nabla u\cdot\nabla u\,dx + \frac{\alpha}{2}\int_\Omega (u^2-\beta)^2\,dx - \langle u, f\rangle_{L^2}. \]
Here we assume $\alpha > 0$, $\beta > 0$, $\gamma > 0$, $U = W_0^{1,2}(\Omega)$ and $f \in L^2(\Omega)$. Moreover, we denote
\[ Y = Y^* = L^2(\Omega). \]
Define also $G_1: U \to \mathbb{R}$ by
\[ G_1(u) = \frac{\gamma}{2}\int_\Omega \nabla u\cdot\nabla u\,dx, \]
$G_2: U\times Y \to \mathbb{R}$ by
\[ G_2(u,v) = \frac{\alpha}{2}\int_\Omega (u^2-\beta+v)^2\,dx + \frac{K}{2}\int_\Omega u^2\,dx, \]
and $F: U \to \mathbb{R}$ by
\[ F(u) = \frac{K}{2}\int_\Omega u^2\,dx, \]
where $K \gg \gamma$.
It is worth highlighting that in such a case
\[ J(u) = G_1(u) + G_2(u,0) - F(u) - \langle u, f\rangle_{L^2}, \;\forall u \in U. \]
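Indeed, the K-terms cancel, so this decomposition is a one-line check using only the definitions above:
\[ G_1(u) + G_2(u,0) - F(u) - \langle u, f\rangle_{L^2} = \frac{\gamma}{2}\int_\Omega \nabla u\cdot\nabla u\,dx + \frac{\alpha}{2}\int_\Omega (u^2-\beta)^2\,dx + \frac{K}{2}\int_\Omega u^2\,dx - \frac{K}{2}\int_\Omega u^2\,dx - \langle u, f\rangle_{L^2} = J(u). \]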
Furthermore, define the following polar functionals, namely $G_1^*: [Y^*]^2 \to \mathbb{R}$ by
\[ G_1^*(v_1^*+z^*) = \sup_{u\in U}\left\{\langle u, v_1^*+z^*\rangle_{L^2} - G_1(u)\right\} = \frac{1}{2}\int_\Omega \left[(-\gamma\nabla^2)^{-1}(v_1^*+z^*)\right](v_1^*+z^*)\,dx, \]
$G_2^*: [Y^*]^2 \to \mathbb{R}$ by
\[ G_2^*(v_2^*,v_0^*) = \sup_{(u,v)\in U\times Y}\left\{\langle u, v_2^*\rangle_{L^2} + \langle v, v_0^*\rangle_{L^2} - G_2(u,v)\right\} = \frac{1}{2}\int_\Omega \frac{(v_2^*)^2}{2v_0^*+K}\,dx + \frac{1}{2\alpha}\int_\Omega (v_0^*)^2\,dx + \beta\int_\Omega v_0^*\,dx, \]
if $v_0^* \in B^*$, where
\[ B^* = \{v_0^* \in Y^* \;:\; 2v_0^* + K > K/2 \text{ in } \Omega\}. \]
At this point, we give more details about this calculation.
Observe that
\[ G_2^*(v_2^*,v_0^*) = \sup_{(u,v)\in U\times Y}\left\{\langle u, v_2^*\rangle_{L^2} + \langle v, v_0^*\rangle_{L^2} - \frac{\alpha}{2}\int_\Omega (u^2-\beta+v)^2\,dx - \frac{K}{2}\int_\Omega u^2\,dx\right\}. \]
Defining $w = u^2-\beta+v$, we have $v = w - u^2 + \beta$, so that
\[ \begin{aligned} G_2^*(v_2^*,v_0^*) &= \sup_{(u,w)\in U\times Y}\left\{\langle u, v_2^*\rangle_{L^2} + \langle w - u^2 + \beta, v_0^*\rangle_{L^2} - \frac{\alpha}{2}\int_\Omega w^2\,dx - \frac{K}{2}\int_\Omega u^2\,dx\right\} \\ &= \langle \tilde{u}, v_2^*\rangle_{L^2} + \langle \tilde{w} - \tilde{u}^2 + \beta, v_0^*\rangle_{L^2} - \frac{\alpha}{2}\int_\Omega \tilde{w}^2\,dx - \frac{K}{2}\int_\Omega \tilde{u}^2\,dx, \end{aligned} \]
where $(\tilde{u}, \tilde{w})$ is the solution of the equations (the optimality conditions for this quadratic optimization problem)
\[ v_0^* - \alpha \tilde{w} = 0 \]
and
\[ v_2^* - (2v_0^* + K)\tilde{u} = 0, \]
and therefore
\[ \tilde{w} = \frac{v_0^*}{\alpha} \]
and
\[ \tilde{u} = \frac{v_2^*}{2v_0^* + K}. \]
Substituting these results back into the previous expression, we obtain
\[ G_2^*(v_2^*,v_0^*) = \frac{1}{2}\int_\Omega \frac{(v_2^*)^2}{2v_0^*+K}\,dx + \frac{1}{2\alpha}\int_\Omega (v_0^*)^2\,dx + \beta\int_\Omega v_0^*\,dx, \]
if $v_0^* \in B^*$.
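For completeness, a short check that this substitution indeed yields the stated expression (using only the formulas above, together with the term $\beta\int_\Omega v_0^*\,dx$ coming from $\langle \beta, v_0^*\rangle_{L^2}$):
\[ \langle \tilde{u}, v_2^*\rangle_{L^2} - \langle \tilde{u}^2, v_0^*\rangle_{L^2} - \frac{K}{2}\int_\Omega \tilde{u}^2\,dx = \int_\Omega \frac{(v_2^*)^2}{2v_0^*+K}\,dx - \int_\Omega \left(v_0^* + \frac{K}{2}\right)\frac{(v_2^*)^2}{(2v_0^*+K)^2}\,dx = \frac{1}{2}\int_\Omega \frac{(v_2^*)^2}{2v_0^*+K}\,dx, \]
\[ \langle \tilde{w}, v_0^*\rangle_{L^2} - \frac{\alpha}{2}\int_\Omega \tilde{w}^2\,dx = \frac{1}{\alpha}\int_\Omega (v_0^*)^2\,dx - \frac{1}{2\alpha}\int_\Omega (v_0^*)^2\,dx = \frac{1}{2\alpha}\int_\Omega (v_0^*)^2\,dx. \]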
Finally, $F^*: Y^* \to \mathbb{R}$ is defined by
\[ F^*(z^*) = \sup_{u\in U}\left\{\langle u, z^*\rangle_{L^2} - F(u)\right\} = \frac{1}{2K}\int_\Omega (z^*)^2\,dx. \]
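Here the supremum is formally attained at $u = z^*/K$ (by density of U in $L^2(\Omega)$), which gives the closed form directly:
\[ \sup_{u\in U}\left\{ \int_\Omega u\,z^*\,dx - \frac{K}{2}\int_\Omega u^2\,dx \right\} = \int_\Omega \frac{(z^*)^2}{K}\,dx - \frac{K}{2}\int_\Omega \frac{(z^*)^2}{K^2}\,dx = \frac{1}{2K}\int_\Omega (z^*)^2\,dx. \]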
Define also
\[ A^* = \left\{ v^* = (v_1^*, v_2^*, v_0^*) \in [Y^*]^2\times B^* \;:\; v_1^* + v_2^* - f = 0 \text{ in } \Omega \right\}, \]
$J^*: [Y^*]^4 \to \mathbb{R}$ by
\[ J^*(v^*, z^*) = -G_1^*(v_1^*+z^*) - G_2^*(v_2^*, v_0^*) + F^*(z^*), \]
and $J_1^*: [Y^*]^4\times U \to \mathbb{R}$ by
\[ J_1^*(v^*, z^*, u) = J^*(v^*, z^*) + \langle u, v_1^* + v_2^* - f\rangle_{L^2}. \]

2. The main duality principle, a convex dual formulation and the concerning proximal primal functional

Our main result is summarized by the following theorem.
Theorem 1. 
Considering the definitions and statements in the last section, suppose also that $(\hat{v}^*, \hat{z}^*, u_0) \in [Y^*]^2\times B^*\times Y^*\times U$ is such that
\[ \delta J_1^*(\hat{v}^*, \hat{z}^*, u_0) = 0. \]
Under such hypotheses, we have
\[ \delta J(u_0) = 0, \]
\[ \hat{v}^* \in A^* \]
and
\[ J(u_0) = \inf_{u\in U}\left\{ J(u) + \frac{K}{2}\int_\Omega |u-u_0|^2\,dx \right\} = J^*(\hat{v}^*, \hat{z}^*) = \sup_{v^*\in A^*} J^*(v^*, \hat{z}^*). \]
Proof. 
Since
\[ \delta J_1^*(\hat{v}^*, \hat{z}^*, u_0) = 0, \]
from the variation in $v_1^*$ we obtain
\[ -(-\gamma\nabla^2)^{-1}(\hat{v}_1^* + \hat{z}^*) + u_0 = 0, \text{ in } \Omega, \]
so that
\[ \hat{v}_1^* + \hat{z}^* = -\gamma\nabla^2 u_0. \]
From the variation in $v_2^*$ we obtain
\[ -\frac{\hat{v}_2^*}{2\hat{v}_0^* + K} + u_0 = 0, \text{ in } \Omega, \]
so that $\hat{v}_2^* = (2\hat{v}_0^* + K)u_0$.
From the variation in $v_0^*$ we also obtain
\[ \frac{(\hat{v}_2^*)^2}{(2\hat{v}_0^* + K)^2} - \frac{\hat{v}_0^*}{\alpha} - \beta = 0, \]
so that, since $\hat{v}_2^* = (2\hat{v}_0^* + K)u_0$, we have $u_0^2 - \hat{v}_0^*/\alpha - \beta = 0$, and therefore
\[ \hat{v}_0^* = \alpha(u_0^2 - \beta). \]
From the variation in u we get
\[ \hat{v}_1^* + \hat{v}_2^* - f = 0, \text{ in } \Omega, \]
and thus
\[ \hat{v}^* \in A^*. \]
Finally, from the variation in $z^*$, we obtain
\[ -(-\gamma\nabla^2)^{-1}(\hat{v}_1^* + \hat{z}^*) + \frac{\hat{z}^*}{K} = 0, \text{ in } \Omega, \]
so that
\[ -u_0 + \frac{\hat{z}^*}{K} = 0, \]
that is,
\[ \hat{z}^* = K u_0, \text{ in } \Omega. \]
From such results and $\hat{v}^* \in A^*$ we get
\[ 0 = \hat{v}_1^* + \hat{v}_2^* - f = -\gamma\nabla^2 u_0 - \hat{z}^* + 2\hat{v}_0^* u_0 + K u_0 - f = -\gamma\nabla^2 u_0 + 2\alpha(u_0^2-\beta)u_0 - f, \]
so that
\[ \delta J(u_0) = 0. \]
Also, from this and from the Legendre transform properties, we have
\[ G_1^*(\hat{v}_1^* + \hat{z}^*) = \langle u_0, \hat{v}_1^* + \hat{z}^*\rangle_{L^2} - G_1(u_0), \]
\[ G_2^*(\hat{v}_2^*, \hat{v}_0^*) = \langle u_0, \hat{v}_2^*\rangle_{L^2} + \langle 0, \hat{v}_0^*\rangle_{L^2} - G_2(u_0, 0), \]
\[ F^*(\hat{z}^*) = \langle u_0, \hat{z}^*\rangle_{L^2} - F(u_0), \]
and thus we obtain
\[ \begin{aligned} J^*(\hat{v}^*, \hat{z}^*) &= -G_1^*(\hat{v}_1^* + \hat{z}^*) - G_2^*(\hat{v}_2^*, \hat{v}_0^*) + F^*(\hat{z}^*) \\ &= -\langle u_0, \hat{v}_1^* + \hat{v}_2^*\rangle_{L^2} + G_1(u_0) + G_2(u_0, 0) - F(u_0) \\ &= -\langle u_0, f\rangle_{L^2} + G_1(u_0) + G_2(u_0, 0) - F(u_0) \\ &= J(u_0). \end{aligned} \]
Summarizing, we have got
\[ J^*(\hat{v}^*, \hat{z}^*) = J(u_0). \]
On the other hand,
\[ \begin{aligned} J^*(\hat{v}^*, \hat{z}^*) &= -G_1^*(\hat{v}_1^* + \hat{z}^*) - G_2^*(\hat{v}_2^*, \hat{v}_0^*) + F^*(\hat{z}^*) \\ &\le -\langle u, \hat{v}_1^* + \hat{z}^*\rangle_{L^2} - \langle u, \hat{v}_2^*\rangle_{L^2} - \langle 0, \hat{v}_0^*\rangle_{L^2} + G_1(u) + G_2(u, 0) + F^*(\hat{z}^*) \\ &= -\langle u, f\rangle_{L^2} + G_1(u) + G_2(u, 0) - \langle u, \hat{z}^*\rangle_{L^2} + F^*(\hat{z}^*) \\ &= -\langle u, f\rangle_{L^2} + G_1(u) + G_2(u, 0) - F(u) + F(u) - \langle u, \hat{z}^*\rangle_{L^2} + F^*(\hat{z}^*) \\ &= J(u) + \frac{K}{2}\int_\Omega u^2\,dx - K\langle u, u_0\rangle_{L^2} + \frac{K}{2}\int_\Omega u_0^2\,dx \\ &= J(u) + \frac{K}{2}\int_\Omega |u-u_0|^2\,dx, \;\forall u \in U. \end{aligned} \]
Finally, by a direct computation we may obtain that the Hessian satisfies
\[ \frac{\partial^2 J^*(v^*, z^*)}{\partial (v^*)^2} < 0 \]
in $[Y^*]^2\times B^*\times Y^*$, so that we may infer that $J^*$ is concave in $v^*$ on $[Y^*]^2\times B^*\times Y^*$.
Therefore, from this and the previous results, we have
\[ J(u_0) = \inf_{u\in U}\left\{ J(u) + \frac{K}{2}\int_\Omega |u-u_0|^2\,dx \right\} = J^*(\hat{v}^*, \hat{z}^*) = \sup_{v^*\in A^*} J^*(v^*, \hat{z}^*). \]
The proof is complete. □

3. A primal dual variational formulation

In this section we develop a more general primal dual variational formulation suitable for a large class of models in non-convex optimization.
Consider again $U = W_0^{1,2}(\Omega)$ and let $G: U \to \mathbb{R}$ and $F: U \to \mathbb{R}$ be three times Fréchet differentiable functionals. Let $J: U \to \mathbb{R}$ be defined by
\[ J(u) = G(u) - F(u), \;\forall u \in U. \]
Assume $u_0 \in U$ is such that
\[ \delta J(u_0) = 0 \]
and
\[ \delta^2 J(u_0) > 0. \]
Denoting $v^* = (v_1^*, v_2^*)$, define $J^*: U\times Y^*\times Y^* \to \mathbb{R}$ by
\[ J^*(u, v^*) = \frac{1}{2}\|v_1^* - G'(u)\|_2^2 + \frac{1}{2}\|v_2^* - F'(u)\|_2^2 + \frac{1}{2}\|v_1^* - v_2^*\|_2^2. \]
Denoting $L_1^*(u, v^*) = v_1^* - G'(u)$ and $L_2^*(u, v^*) = v_2^* - F'(u)$, define also
\[ C^* = \left\{ (u, v^*) \in U\times Y^*\times Y^* \;:\; \|L_1^*(u, v_1^*)\| \le \frac{1}{K} \text{ and } \|L_2^*(u, v_2^*)\| \le \frac{1}{K} \right\}, \]
for an appropriate $K > 0$ to be specified.
Observe that in $C^*$ the Hessian of $J^*$ is given by
\[ \{\delta^2 J^*(u, v^*)\} = \left[\begin{array}{ccc} G''(u)^2 + F''(u)^2 + \mathcal{O}(1/K) & -G''(u) & -F''(u) \\ -G''(u) & 2 & -1 \\ -F''(u) & -1 & 2 \end{array}\right]. \]
Observe also that the determinant of the Hessian block with respect to $(v_1^*, v_2^*)$ satisfies
\[ \det\left\{ \frac{\partial^2 J^*(u, v^*)}{\partial v_1^*\,\partial v_2^*} \right\} = \det\left[\begin{array}{cc} 2 & -1 \\ -1 & 2 \end{array}\right] = 3, \]
and
\[ \det\{\delta^2 J^*(u, v^*)\} = (G''(u) - F''(u))^2 + \mathcal{O}(1/K) = (\delta^2 J(u))^2 + \mathcal{O}(1/K). \]
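As a check, a direct cofactor expansion along the first row of the Hessian above, carrying the $\mathcal{O}(1/K)$ terms along, gives
\[ \det\{\delta^2 J^*(u, v^*)\} = 3\left(G''(u)^2 + F''(u)^2\right) - G''(u)\left(2G''(u) + F''(u)\right) - F''(u)\left(G''(u) + 2F''(u)\right) + \mathcal{O}(1/K) = \left(G''(u) - F''(u)\right)^2 + \mathcal{O}(1/K). \]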
Define now
\[ \hat{v}_1^* = G'(u_0), \]
\[ \hat{v}_2^* = F'(u_0), \]
so that
\[ \hat{v}_1^* - \hat{v}_2^* = 0. \]
From this we may infer that $(u_0, \hat{v}_1^*, \hat{v}_2^*) \in C^*$ and
\[ J^*(u_0, \hat{v}^*) = 0 = \min_{(u, v^*)\in C^*} J^*(u, v^*). \]
Moreover, for $K > 0$ sufficiently big, $J^*$ is convex in a neighborhood of $(u_0, \hat{v}^*)$.
Therefore, in the last lines, we have proven the following theorem.
Theorem 2. 
Under the statements and definitions of the last lines, there exist $r_0 > 0$ and $r_1 > 0$ such that
\[ J(u_0) = \min_{u\in B_{r_0}(u_0)} J(u) \]
and $(u_0, \hat{v}_1^*, \hat{v}_2^*) \in C^*$ is such that
\[ J^*(u_0, \hat{v}^*) = 0 = \min_{(u, v^*)\in U\times[Y^*]^2} J^*(u, v^*). \]
Moreover, $J^*$ is convex in
\[ B_{r_1}(u_0, \hat{v}^*). \]

4. One more duality principle and a concerning primal dual variational formulation

In this section we establish a new duality principle and a related primal dual formulation.
The results are based on the approach of Toland, [15].

4.1. Introduction

Let $\Omega \subset \mathbb{R}^3$ be an open, bounded, connected set with a regular (Lipschitzian) boundary denoted by $\partial\Omega$.
Let $J: V \to \mathbb{R}$ be a functional such that
\[ J(u) = G(u) - F(u), \;\forall u \in V, \]
where $V = W_0^{1,2}(\Omega)$.
Suppose G, F are both three times Fréchet differentiable convex functionals such that
\[ \frac{\partial^2 G(u)}{\partial u^2} > 0 \]
and
\[ \frac{\partial^2 F(u)}{\partial u^2} > 0, \]
$\forall u \in V$.
Assume also there exists $\alpha_1 \in \mathbb{R}$ such that
\[ \alpha_1 = \inf_{u\in V} J(u). \]
Moreover, suppose that if $\{u_n\} \subset V$ is such that
\[ \|u_n\|_V \to \infty, \]
then
\[ J(u_n) \to +\infty, \text{ as } n \to \infty. \]
At this point we define $J^{**}: V \to \mathbb{R}$ by
\[ J^{**}(u) = \sup_{(v^*, \alpha)\in H^*}\{\langle u, v^*\rangle_V + \alpha\}, \]
where
\[ H^* = \{(v^*, \alpha) \in V^*\times\mathbb{R} \;:\; \langle v, v^*\rangle_V + \alpha \le J(v), \;\forall v \in V\}. \]
Observe that $(0, \alpha_1) \in H^*$, so that
\[ J^{**}(u) \ge \alpha_1 = \inf_{u\in V} J(u). \]
On the other hand, clearly we have
\[ J^{**}(u) \le J(u), \;\forall u \in V, \]
so that we have got
\[ \alpha_1 = \inf_{u\in V} J(u) = \inf_{u\in V} J^{**}(u). \]
Let $u \in V$.
Since J is strongly continuous, there exist $\delta > 0$ and $A > 0$ such that
\[ \alpha_1 \le J^{**}(v) \le J(v) \le A, \;\forall v \in B_\delta(u). \]
From this, considering that $J^{**}$ is convex on V, we may infer that $J^{**}$ is continuous at u, $\forall u \in V$.
Hence $J^{**}$ is strongly lower semi-continuous on V and, since $J^{**}$ is convex, we may infer that $J^{**}$ is also weakly lower semi-continuous on V.
Let $\{u_n\} \subset V$ be a sequence such that
\[ \alpha_1 \le J(u_n) < \alpha_1 + \frac{1}{n}, \;\forall n \in \mathbb{N}. \]
Hence
\[ \alpha_1 = \lim_{n\to\infty} J(u_n) = \inf_{u\in V} J(u) = \inf_{u\in V} J^{**}(u). \]
Suppose there exists a subsequence $\{u_{n_k}\}$ of $\{u_n\}$ such that
\[ \|u_{n_k}\|_V \to \infty, \text{ as } k \to \infty. \]
From the hypotheses we would have
\[ J(u_{n_k}) \to +\infty, \text{ as } k \to \infty, \]
which contradicts
\[ \alpha_1 \in \mathbb{R}. \]
Therefore there exists $K > 0$ such that
\[ \|u_n\|_V \le K, \;\forall n \in \mathbb{N}. \]
Since V is reflexive, from this and the Kakutani Theorem, there exist a subsequence $\{u_{n_k}\}$ of $\{u_n\}$ and $u_0 \in V$ such that
\[ u_{n_k} \rightharpoonup u_0, \text{ weakly in } V. \]
Consequently, from this and considering that $J^{**}$ is weakly lower semi-continuous, we have got
\[ \alpha_1 = \liminf_{k\to\infty} J^{**}(u_{n_k}) \ge J^{**}(u_0), \]
so that
\[ J^{**}(u_0) = \min_{u\in V} J^{**}(u). \]
Define $G^*, F^*: V^* \to \mathbb{R}$ by
\[ G^*(v^*) = \sup_{u\in V}\{\langle u, v^*\rangle_V - G(u)\} \]
and
\[ F^*(v^*) = \sup_{u\in V}\{\langle u, v^*\rangle_V - F(u)\}. \]
Defining also $J^*: V^* \to \mathbb{R}$ by
\[ J^*(v^*) = F^*(v^*) - G^*(v^*), \]
from the results in [15] we may obtain
\[ \inf_{u\in V} J(u) = \inf_{v^*\in V^*} J^*(v^*), \]
so that
\[ J^{**}(u_0) = \inf_{u\in V} J^{**}(u) = \inf_{u\in V} J(u) = \inf_{v^*\in V^*} J^*(v^*). \]
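For the reader's convenience, here is a brief sketch of the Toland identity used above, under the convention $F = F^{**}$ (which holds here since F is convex and lower semi-continuous):
\[ \inf_{u\in V}\{G(u) - F(u)\} = \inf_{u\in V}\left\{ G(u) - \sup_{v^*\in V^*}\{\langle u, v^*\rangle_V - F^*(v^*)\} \right\} = \inf_{u\in V}\,\inf_{v^*\in V^*}\left\{ F^*(v^*) - \left(\langle u, v^*\rangle_V - G(u)\right) \right\} = \inf_{v^*\in V^*}\{F^*(v^*) - G^*(v^*)\}. \]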
Suppose now there exists $\hat{u} \in V$ such that
\[ J(\hat{u}) = \inf_{u\in V} J(u). \]
From the standard necessary conditions, we have
\[ \delta J(\hat{u}) = 0, \]
so that
\[ \frac{\partial G(\hat{u})}{\partial u} - \frac{\partial F(\hat{u})}{\partial u} = 0. \]
Define now
\[ v_0^* = \frac{\partial F(\hat{u})}{\partial u}. \]
From these last two equations we obtain
\[ v_0^* = \frac{\partial G(\hat{u})}{\partial u}. \]
From such results and the Legendre transform properties, we have
\[ \hat{u} = \frac{\partial F^*(v_0^*)}{\partial v^*}, \]
\[ \hat{u} = \frac{\partial G^*(v_0^*)}{\partial v^*}, \]
so that
\[ \delta J^*(v_0^*) = \frac{\partial F^*(v_0^*)}{\partial v^*} - \frac{\partial G^*(v_0^*)}{\partial v^*} = \hat{u} - \hat{u} = 0, \]
\[ G^*(v_0^*) = \langle \hat{u}, v_0^*\rangle_V - G(\hat{u}) \]
and
\[ F^*(v_0^*) = \langle \hat{u}, v_0^*\rangle_V - F(\hat{u}), \]
so that
\[ \inf_{u\in V} J(u) = J(\hat{u}) = G(\hat{u}) - F(\hat{u}) = \inf_{v^*\in V^*} J^*(v^*) = F^*(v_0^*) - G^*(v_0^*) = J^*(v_0^*). \]

4.2. The main duality principle and a related primal dual variational formulation

Considering these last statements and results, we may prove the following theorem.
Theorem 3. 
Let Ω R 3 be an open, bounded, connected set with a regular (Lipschitzian) boundary denoted by Ω .
Let J : V R be a functional such that
\[ J(u) = G(u) - F(u), \;\forall u \in V, \]
where $V = W_0^{1,2}(\Omega)$.
Suppose G, F are both three times Fréchet differentiable functionals such that there exists $K > 0$ such that
\[ \frac{\partial^2 G(u)}{\partial u^2} + K > 0 \]
and
\[ \frac{\partial^2 F(u)}{\partial u^2} + K > 0, \]
$\forall u \in V$.
Assume also there exist $u_0 \in V$ and $\alpha_1 \in \mathbb{R}$ such that
\[ \alpha_1 = \inf_{u\in V} J(u) = J(u_0). \]
Assume $K_3 > 0$ is such that
\[ \|u_0\|_\infty < K_3. \]
Define
\[ \tilde{V} = \{u \in V \;:\; \|u\|_\infty \le K_3\}. \]
Assume $K_1 > 0$ is such that, if $u \in \tilde{V}$, then
\[ \max\left\{ |F(u)|,\, |G(u)|,\, \|F'(u)\|_\infty,\, \|F''(u)\|_\infty,\, \|G'(u)\|_\infty,\, \|G''(u)\|_\infty \right\} \le K_1. \]
Suppose also
\[ K \gg \max\{K_1, K_3\}. \]
Define $F_K, G_K: V \to \mathbb{R}$ by
\[ F_K(u) = F(u) + \frac{K}{2}\int_\Omega u^2\,dx \]
and
\[ G_K(u) = G(u) + \frac{K}{2}\int_\Omega u^2\,dx, \]
$\forall u \in V$.
Define also $G_K^*, F_K^*: V^* \to \mathbb{R}$ by
\[ G_K^*(v^*) = \sup_{u\in V}\{\langle u, v^*\rangle_V - G_K(u)\} \]
and
\[ F_K^*(v^*) = \sup_{u\in V}\{\langle u, v^*\rangle_V - F_K(u)\}. \]
Observe that, since $u_0 \in V$ is such that
\[ J(u_0) = \inf_{u\in V} J(u), \]
we have
\[ \delta J(u_0) = 0. \]
Let $\varepsilon > 0$ be a small constant.
Define
\[ v_0^* = \frac{\partial F_K(u_0)}{\partial u} \in V^*. \]
Under such hypotheses, defining $J_1^*: V\times V^* \to \mathbb{R}$ by
\[ J_1^*(u, v^*) = F_K^*(v^*) - G_K^*(v^*) + \frac{1}{2\varepsilon}\left\| \frac{\partial G_K^*(v^*)}{\partial v^*} - u \right\|_2^2 + \frac{1}{2\varepsilon}\left\| \frac{\partial F_K^*(v^*)}{\partial v^*} - u \right\|_2^2 + \frac{1}{2\varepsilon}\left\| \frac{\partial G_K^*(v^*)}{\partial v^*} - \frac{\partial F_K^*(v^*)}{\partial v^*} \right\|_2^2, \]
we have
\[ J(u_0) = \inf_{u\in V} J(u) = \inf_{(u, v^*)\in V\times V^*} J_1^*(u, v^*) = J_1^*(u_0, v_0^*). \]
Proof. 
Observe that, from the hypotheses and the results and statements of the last subsection,
\[ J(u_0) = \inf_{u\in V} J(u) = \inf_{v^*\in V^*} J_K^*(v^*) = J_K^*(v_0^*), \]
where
\[ J_K^*(v^*) = F_K^*(v^*) - G_K^*(v^*), \;\forall v^* \in V^*. \]
Moreover, we have
\[ J_1^*(u, v^*) \ge J_K^*(v^*), \;\forall u \in V,\; v^* \in V^*. \]
Also, from the hypotheses and the last subsection results,
\[ u_0 = \frac{\partial F_K^*(v_0^*)}{\partial v^*} = \frac{\partial G_K^*(v_0^*)}{\partial v^*}, \]
so that clearly we have
\[ J_1^*(u_0, v_0^*) = J_K^*(v_0^*). \]
From these last results, we may infer that
\[ J(u_0) = \inf_{u\in V} J(u) = \inf_{v^*\in V^*} J_K^*(v^*) = J_K^*(v_0^*) = \inf_{(u, v^*)\in V\times V^*} J_1^*(u, v^*) = J_1^*(u_0, v_0^*). \]
The proof is complete. □
Remark 2. 
At this point we highlight that J 1 * has a large region of convexity around the optimal point ( u 0 , v 0 * ) , for K > 0 sufficiently large and corresponding ε > 0 sufficiently small.
Indeed, observe that, for $v^* \in V^*$,
\[ G_K^*(v^*) = \sup_{u\in V}\{\langle u, v^*\rangle_V - G_K(u)\} = \langle \hat{u}, v^*\rangle_V - G_K(\hat{u}), \]
where $\hat{u} \in V$ is such that
\[ v^* = \frac{\partial G_K(\hat{u})}{\partial u} = G'(\hat{u}) + K\hat{u}. \]
Taking the variation in $v^*$ in this last equation, we obtain
\[ 1 = G''(\hat{u})\frac{\partial \hat{u}}{\partial v^*} + K\frac{\partial \hat{u}}{\partial v^*}, \]
so that
\[ \frac{\partial \hat{u}}{\partial v^*} = \frac{1}{G''(\hat{u}) + K} = \mathcal{O}\left(\frac{1}{K}\right). \]
From this we get
\[ \frac{\partial^2 \hat{u}}{\partial (v^*)^2} = -\frac{1}{(G''(\hat{u}) + K)^2}\,G'''(\hat{u})\frac{\partial \hat{u}}{\partial v^*} = -\frac{G'''(\hat{u})}{(G''(\hat{u}) + K)^3} = \mathcal{O}\left(\frac{1}{K^3}\right). \]
On the other hand, from the implicit function theorem,
\[ \frac{\partial G_K^*(v^*)}{\partial v^*} = \hat{u} + \left[v^* - \frac{\partial G_K(\hat{u})}{\partial u}\right]\frac{\partial \hat{u}}{\partial v^*} = \hat{u}, \]
so that
\[ \frac{\partial^2 G_K^*(v^*)}{\partial (v^*)^2} = \frac{\partial \hat{u}}{\partial v^*} = \mathcal{O}\left(\frac{1}{K}\right) \]
and
\[ \frac{\partial^3 G_K^*(v^*)}{\partial (v^*)^3} = \frac{\partial^2 \hat{u}}{\partial (v^*)^2} = \mathcal{O}\left(\frac{1}{K^3}\right). \]
Similarly, we may obtain
\[ \frac{\partial^2 F_K^*(v^*)}{\partial (v^*)^2} = \mathcal{O}\left(\frac{1}{K}\right) \]
and
\[ \frac{\partial^3 F_K^*(v^*)}{\partial (v^*)^3} = \mathcal{O}\left(\frac{1}{K^3}\right). \]
Denoting
\[ A = \frac{\partial^2 F_K^*(v_0^*)}{\partial (v^*)^2} \]
and
\[ B = \frac{\partial^2 G_K^*(v_0^*)}{\partial (v^*)^2}, \]
we have
\[ \frac{\partial^2 J_1^*(u_0, v_0^*)}{\partial (v^*)^2} = A - B + \frac{1}{\varepsilon}\left(2A^2 + 2B^2 - 2AB\right), \]
\[ \frac{\partial^2 J_1^*(u_0, v_0^*)}{\partial u^2} = \frac{2}{\varepsilon}, \]
and
\[ \frac{\partial^2 J_1^*(u_0, v_0^*)}{\partial v^*\,\partial u} = -\frac{1}{\varepsilon}(A + B). \]
From this we get
\[ \det(\delta^2 J_1^*(u_0, v_0^*)) = \frac{\partial^2 J_1^*(u_0, v_0^*)}{\partial (v^*)^2}\,\frac{\partial^2 J_1^*(u_0, v_0^*)}{\partial u^2} - \left(\frac{\partial^2 J_1^*(u_0, v_0^*)}{\partial v^*\,\partial u}\right)^2 = \frac{2AB}{\varepsilon} + \frac{2(A-B)^2}{\varepsilon^2} = \mathcal{O}\left(\frac{1}{\varepsilon^2}\right) \ge 0 \]
about the optimal point $(u_0, v_0^*)$.

5. A convex dual variational formulation

In this section, again for $\Omega \subset \mathbb{R}^3$ an open, bounded, connected set with a regular (Lipschitzian) boundary $\partial\Omega$, $\gamma > 0$, $\alpha > 0$, $\beta > 0$ and $f \in L^2(\Omega)$, we denote $F_1: V\times Y \to \mathbb{R}$, $F_2: V \to \mathbb{R}$ and $G: V\times Y \to \mathbb{R}$ by
\[ F_1(u, v_0^*) = \frac{\gamma}{2}\int_\Omega \nabla u\cdot\nabla u\,dx - \frac{K}{2}\int_\Omega u^2\,dx + \frac{K_1}{2}\int_\Omega (-\gamma\nabla^2 u + 2v_0^* u - f)^2\,dx + \frac{K_2}{2}\int_\Omega u^2\,dx, \]
\[ F_2(u) = \frac{K_2}{2}\int_\Omega u^2\,dx + \langle u, f\rangle_{L^2}, \]
and
\[ G(u, v) = \frac{\alpha}{2}\int_\Omega (u^2-\beta+v)^2\,dx + \frac{K}{2}\int_\Omega u^2\,dx. \]
We define also
\[ J_1(u, v_0^*) = F_1(u, v_0^*) - F_2(u) + G(u, 0), \]
\[ J(u) = \frac{\gamma}{2}\int_\Omega \nabla u\cdot\nabla u\,dx + \frac{\alpha}{2}\int_\Omega (u^2-\beta)^2\,dx - \langle u, f\rangle_{L^2}, \]
and $F_1^*: [Y^*]^3 \to \mathbb{R}$, $F_2^*: Y^* \to \mathbb{R}$, and $G^*: [Y^*]^2 \to \mathbb{R}$ by
\[ \begin{aligned} F_1^*(v_2^*, v_1^*, v_0^*) &= \sup_{u\in V}\{\langle u, v_1^* + v_2^*\rangle_{L^2} - F_1(u, v_0^*)\} \\ &= \frac{1}{2}\int_\Omega \frac{\left[v_1^* + v_2^* + K_1(-\gamma\nabla^2 + 2v_0^*)f\right]^2}{-\gamma\nabla^2 - K + K_2 + K_1(-\gamma\nabla^2 + 2v_0^*)^2}\,dx - \frac{K_1}{2}\int_\Omega f^2\,dx, \end{aligned} \]
\[ F_2^*(v_2^*) = \sup_{u\in V}\{\langle u, v_2^*\rangle_{L^2} - F_2(u)\} = \frac{1}{2K_2}\int_\Omega (v_2^* - f)^2\,dx, \]
and
\[ \begin{aligned} G^*(v_1^*, v_0^*) &= \sup_{(u, v)\in V\times Y}\{\langle u, v_1^*\rangle_{L^2} + \langle v, v_0^*\rangle_{L^2} - G(u, v)\} \\ &= \frac{1}{2}\int_\Omega \frac{(v_1^*)^2}{2v_0^* + K}\,dx + \frac{1}{2\alpha}\int_\Omega (v_0^*)^2\,dx + \beta\int_\Omega v_0^*\,dx \end{aligned} \]
if $v_0^* \in B^*$, where
\[ B^* = \{v_0^* \in Y^* \;:\; \|v_0^*\|_\infty \le K/2 \text{ and } -\gamma\nabla^2 + 2v_0^* \le -\varepsilon\, I_d\}, \]
Finally, we also define $J_1^*: [Y^*]^2\times B^* \to \mathbb{R}$ by
\[ J_1^*(v_2^*, v_1^*, v_0^*) = F_1^*(v_2^*, v_1^*, v_0^*) + F_2^*(v_2^*) - G^*(v_1^*, v_0^*). \]
Assuming
\[ K_2 \gg K_1 \gg K \gg \max\{1/\varepsilon^2, 1, \gamma, \alpha\}, \]
by directly computing $\delta^2 J_1^*(v_2^*, v_1^*, v_0^*)$ we may obtain that, for such specified real constants, $J_1^*$ is convex in $v_2^*$ and concave in $(v_1^*, v_0^*)$ on $Y^*\times Y^*\times B^*$.
Considering such statements and definitions, we may prove the following theorem.
Theorem 4. 
Let $(\hat{v}_2^*, \hat{v}_1^*, \hat{v}_0^*) \in Y^*\times Y^*\times B^*$ be such that
\[ \delta J_1^*(\hat{v}_2^*, \hat{v}_1^*, \hat{v}_0^*) = 0 \]
and let $u_0 \in V$ be such that
\[ u_0 = \frac{\hat{v}_1^* + \hat{v}_2^* + K_1(-\gamma\nabla^2 + 2\hat{v}_0^*)f}{K_2 - K - \gamma\nabla^2 + K_1(-\gamma\nabla^2 + 2\hat{v}_0^*)^2}. \]
Under such hypotheses, we have
\[ \delta J(u_0) = 0, \]
so that
\[ J(u_0) = \inf_{u\in V}\left\{ J(u) + \frac{K_1}{2}\int_\Omega (-\gamma\nabla^2 u + 2\hat{v}_0^* u - f)^2\,dx \right\} = \inf_{v_2^*\in Y^*}\;\sup_{(v_1^*, v_0^*)\in Y^*\times B^*} J_1^*(v_2^*, v_1^*, v_0^*) = J_1^*(\hat{v}_2^*, \hat{v}_1^*, \hat{v}_0^*). \]
Proof. 
Observe that δ J 1 * ( v ^ 2 * , v ^ 1 * , v ^ 0 * ) = 0 so that, since J 1 * is convex in v 2 * and concave in ( v 1 * , v 0 * ) on Y * × Y * × B * , we obtain
J 1 * ( v ^ 2 * , v ^ 1 * , v ^ 0 * ) = inf v 2 * Y * sup ( v 1 * , v 0 * ) Y * × B * J 1 * ( v 2 * , v 1 * , v 0 * ) .
Now we are going to show that
δ J ( u 0 ) = 0 .
From
J 1 * ( v ^ 2 * , v ^ 1 * , v ^ 0 * ) v 2 * = 0 ,
we have
u 0 + v ^ 2 * K 2 = 0 ,
and thus
v ^ 2 * = K 2 u 0 .
From
J 1 * ( v ^ 2 * , v ^ 1 * , v ^ 0 * ) v 1 * = 0 ,
we obtain
u 0 v ^ 1 * f 2 v ^ 0 * + K = 0 ,
and thus
v ^ 1 * = 2 v ^ 0 * u 0 K u 0 + f .
Finally, denoting
\[ D = -\gamma\nabla^2 u_0 + 2\hat{v}_0^* u_0 - f, \]
from
J 1 * ( v ^ 2 * , v ^ 1 * , v ^ 0 * ) v 0 * = 0 ,
we have
2 D u 0 + u 0 2 v ^ 0 * α β = 0 ,
so that
v ^ 0 * = α ( u 0 2 β 2 D u 0 ) .
Observe now that
v ^ 1 * + v ^ 2 * + K 1 ( γ 2 + 2 v ^ 0 * ) f = ( K 2 K γ 2 + K 1 ( γ 2 + 2 v ^ 0 * ) 2 ) u 0
so that
K 2 u 0 2 v ^ 0 u 0 K u 0 + f = K 2 u 0 K u 0 γ 2 u 0 + K 1 ( γ 2 + 2 v ^ 0 * ) ( γ 2 u 0 + 2 v ^ 0 * u 0 f ) .
The solution of this last system of equations is obtained through the relations
\[ \hat{v}_0^* = \alpha(u_0^2 - \beta) \]
and
\[ -\gamma\nabla^2 u_0 + 2\hat{v}_0^* u_0 - f = D = 0, \]
so that
\[ \delta J(u_0) = -\gamma\nabla^2 u_0 + 2\alpha(u_0^2 - \beta)u_0 - f = 0 \]
and
δ J ( u 0 ) + K 1 2 Ω ( γ 2 u 0 + 2 v ^ 0 * u 0 f ) 2 d x = 0 ,
and hence, from the concerning convexity in u on V,
J ( u 0 ) = min u V J ( u ) + K 1 2 Ω ( γ 2 u + 2 v ^ 0 * u f ) 2 d x .
Moreover, from the Legendre transform properties
F 1 * ( v ^ 2 * , v ^ 1 * , v ^ 0 * ) = u 0 , v ^ 2 * + v ^ 1 * L 2 F 1 ( u 0 , v ^ 0 * ) ,
F 2 * ( v ^ 2 * ) = u 0 , v ^ 2 * L 2 F 2 ( u 0 ) ,
G * ( v ^ 1 * , v ^ 0 * ) = u 0 , v ^ 1 * L 2 0 , v ^ 0 * L 2 G ( u 0 , 0 ) ,
so that
J 1 * ( v ^ 2 * , v ^ 1 * , v ^ 0 * ) = F 1 * ( v ^ 2 * , v ^ 1 * , v ^ 0 * ) + F 2 * ( v ^ 2 * ) G * ( v ^ 1 * , v ^ 0 * ) = F 1 ( u 0 , v ^ 0 * ) F 2 ( u 0 ) + G ( u 0 , 0 ) = J ( u 0 ) .
Joining the pieces, we have got
J ( u 0 ) = inf u V J ( u ) + K 1 2 Ω ( γ 2 u + 2 v ^ 0 * u f ) 2 d x = inf v 2 * Y * sup ( v 1 * , v 0 * ) Y * × B * J 1 * ( v 2 * , v 1 * , v 0 * ) = J 1 * ( v ^ 2 * , v ^ 1 * , v ^ 0 * ) .
The proof is complete.
Remark 3. 
We could also have defined
\[ B^* = \{v_0^* \in Y^* \;:\; \|v_0^*\|_\infty \le K/2 \text{ and } -\gamma\nabla^2 + 2v_0^* \ge \varepsilon\, I_d\}, \]
for some small real parameter $\varepsilon > 0$. In this case, $-\gamma\nabla^2 + 2v_0^*$ is positive definite, whereas in the previous case $-\gamma\nabla^2 + 2v_0^*$ is negative definite.

6. Another convex dual variational formulation

In this section, again for Ω R 3 an open, bounded, connected set with a regular (Lipschitzian) boundary Ω , γ > 0 , α > 0 , β > 0 and f L 2 ( Ω ) , we denote F 1 : V × Y R , F 2 : V R and G : Y R by
F 1 ( u , v 0 * ) = γ 2 Ω u · u d x + u 2 , v 0 * L 2 + K 1 2 Ω ( γ 2 u + 2 v 0 * u f ) 2 d x + K 2 2 Ω u 2 d x ,
F 2 ( u ) = K 2 2 Ω u 2 d x + u , f L 2 ,
and
G ( u 2 ) = α 2 Ω ( u 2 β ) 2 d x .
We define also
J 1 ( u , v 0 * ) = F 1 ( u , v 0 * ) F 2 ( u ) u 2 , v 0 * L 2 + G ( u 2 ) ,
J ( u ) = γ 2 Ω u · u d x + α 2 Ω ( u 2 β ) 2 d x u , f L 2 ,
A + = { u V : u f > 0 , a . e . in Ω } ,
V 2 = { u V : u K 3 } ,
\[ V_1 = A_+ \cap V_2, \]
and F 1 * : [ Y * ] 2 R , F 2 * : Y * R , and G * : Y * R , by
F 1 * ( v 2 * , v 0 * ) = sup u V { u , v 2 * L 2 F 1 ( u , v 0 * ) } = 1 2 Ω v 2 * + K 1 ( γ 2 + 2 v 0 * ) f 2 ( γ 2 + 2 v 0 * + K 2 + K 1 ( γ 2 + 2 v 0 * ) 2 ) d x K 1 2 Ω f 2 d x ,
F 2 * ( v 2 * ) = sup u V { u , v 2 * L 2 F 2 ( u ) } = 1 2 K 2 Ω ( v 2 * + f ) 2 d x ,
and
G * ( v 0 * ) = sup v Y { v , v 0 * L 2 G ( v ) } = 1 2 α Ω ( v 0 * ) 2 d x + β Ω v 0 * d x
At this point we define
B 1 * = { v 0 * Y * : v 0 * K / 2 } ,
B 2 * = { v 0 * Y * : γ 2 + 2 v 0 * + K 1 ( γ 2 + 2 v 0 * ) 2 > 0 } ,
B 3 * = { v 0 * Y * : 1 / α + 4 K 1 [ u ( v 2 * , v 0 * ) 2 ] + 100 / K 2 0 , v 2 * E 1 * } ,
where
u ( v 2 * , v 0 * ) = φ 1 φ ,
φ 1 = ( v 2 * + K 1 ( γ 2 + 2 v 0 * ) f )
and
φ = ( γ 2 + 2 v 0 * + K 1 ( γ 2 + 2 v 0 * ) 2 + K 2 ) ,
Finally, we also define
E 1 * = { v 2 * Y * : v 2 * ( 5 / 4 ) K 2 } .
E 2 * = { v 2 * Y * : f v 2 * > 0 , a . e . in Ω } ,
E * = E 1 * E 2 * ,
B * = B 1 * B 3 * ,
and J 1 * : E * × B * R , by
J 1 * ( v 2 * , v 0 * ) = F 1 * ( v 2 * , v 0 * ) + F 2 * ( v 2 * ) G * ( v 0 * ) .
Moreover, assume
K 2 K 1 K K 3 max { 1 , γ , α } .
By directly computing δ 2 J 1 * ( v 2 * , v 0 * ) we may obtain that for such specified real constants, J 1 * is concave in v 0 * on E * × B * .
Indeed, recalling that
φ = ( γ 2 + 2 v 0 * + K 1 ( γ 2 + 2 v 0 * ) 2 + K 2 ) ,
φ 1 = ( v 2 * + K 1 ( γ 2 + 2 v 0 * ) f ) ,
and
u = φ 1 φ ,
we obtain
2 J 1 * ( v 2 * , v 0 * ) ( v 2 * ) 2 = 1 / K 2 1 / φ > 0 ,
in E * × B 3 * and
2 J 1 * ( v 2 * , v 0 * ) ( v 0 * ) 2 = 4 u 2 K 1 1 / α + O ( 1 / K 2 ) < 0 ,
in E * × B * .
Considering such statements and definitions, we may prove the following theorem.
Theorem 5. 
Let ( v ^ 2 * , v ^ 0 * ) E * × ( B * B 2 * ) be such that
δ J 1 * ( v ^ 2 * , v ^ 0 * ) = 0
and u 0 V 1 be such that
u 0 = v ^ 2 * + K 1 ( γ 2 + 2 v ^ 0 * ) f K 2 + 2 v ^ 0 * γ 2 + K 1 ( γ 2 + 2 v ^ 0 * ) 2 .
Under such hypotheses, we have
δ J ( u 0 ) = 0 ,
so that
J ( u 0 ) = inf u V 1 J ( u ) + K 1 2 Ω ( γ 2 u + 2 v ^ 0 * u f ) 2 d x = inf v 2 * E * sup v 0 * B * J 1 * ( v 2 * , v 0 * ) = J 1 * ( v ^ 2 * , v ^ 0 * ) .
Proof. 
Observe that $\delta J_1^*(\hat{v}_2^*, \hat{v}_0^*) = 0$, so that, since $J_1^*$ is concave in $v_0^*$ on $E^*\times B^*$, $\hat{v}_0^* \in B_2^*$ and $J_1^*$ is quadratic in $v_2^*$, we get
sup v 0 * B * J 1 * ( v ^ 2 * , v 0 * ) = J 1 * ( v ^ 2 * , v ^ 0 * ) = inf v 2 * E * J 1 * ( v 2 * , v ^ 0 * ) .
Consequently, from this and the Min-Max Theorem, we obtain
J 1 * ( v ^ 2 * , v ^ 0 * ) = inf v 2 * E * sup v 0 * B * J 1 * ( v 2 * , v 0 * ) = sup v 0 * B * inf v 2 * E * J 1 * ( v 2 * , v 0 * ) .
Now we are going to show that
δ J ( u 0 ) = 0 .
From
J 1 * ( v ^ 2 * , v ^ 0 * ) v 2 * = 0 ,
we have
u 0 + v ^ 2 * K 2 = 0 ,
and thus
v ^ 2 * = K 2 u 0 .
Finally, denoting
D = γ 2 u 0 + 2 v ^ 0 * u 0 f ,
from
J 1 * ( v ^ 2 * , v ^ 0 * ) v 0 * = 0 ,
we have
2 D u 0 + u 0 2 v ^ 0 * α β = 0 ,
so that
v ^ 0 * = α ( u 0 2 β 2 D u 0 ) .
Observe now that
v ^ 2 * + K 1 ( γ 2 + 2 v ^ 0 * ) f = ( K 2 γ 2 + 2 v ^ 0 * + K 1 ( γ 2 + 2 v ^ 0 * ) 2 ) u 0
so that
K 2 u 0 2 v ^ 0 u 0 K u 0 + f = K 2 u 0 K u 0 γ 2 u 0 + K 1 ( γ 2 + 2 v ^ 0 * ) ( γ 2 u 0 + 2 v ^ 0 * u 0 f ) .
The solution for this last equation is obtained through the relation
γ 2 u 0 + 2 v ^ 0 * u 0 f = D = 0 ,
so that, from this and the previous expression for $\hat{v}_0^*$, we get
v ^ 0 * = α ( u 0 2 β ) .
Thus,
δ J ( u 0 ) = γ 2 u 0 + 2 α ( u 0 2 β ) u 0 f = 0
and
δ J ( u 0 ) + K 1 2 Ω ( γ 2 u 0 + 2 v ^ 0 * u 0 f ) 2 d x = 0 ,
and hence, from the concerning convexity in u on V,
J ( u 0 ) = min u V J ( u ) + K 1 2 Ω ( γ 2 u + 2 v ^ 0 * u f ) 2 d x .
Moreover, from the Legendre transform properties
F 1 * ( v ^ 2 * , v ^ 0 * ) = u 0 , v ^ 2 * L 2 F 1 ( u 0 , v ^ 0 * ) ,
F 2 * ( v ^ 2 * ) = u 0 , v ^ 2 * L 2 F 2 ( u 0 ) ,
G * ( v ^ 0 * ) = u 0 2 , v ^ 0 * L 2 G ( u 0 2 ) ,
so that
J 1 * ( v ^ 2 * , v ^ 0 * ) = F 1 * ( v ^ 2 * , v ^ 0 * ) + F 2 * ( v ^ 2 * ) G * ( v ^ 0 * ) = F 1 ( u 0 , v ^ 0 * ) F 2 ( u 0 ) u 0 2 , v ^ 0 * L 2 + G ( u 0 2 ) = J ( u 0 ) .
Joining the pieces, we have got
J ( u 0 ) = inf u V 1 J ( u ) + K 1 2 Ω ( γ 2 u + 2 v ^ 0 * u f ) 2 d x = inf v 2 * E * sup v 0 * B * J 1 * ( v 2 * , v 0 * ) = J 1 * ( v ^ 2 * , v ^ 0 * ) .
The proof is complete.

7. A third duality principle and related convex dual variational formulation

In this section, we assume a finite dimensional version for the model in question, in a finite differences or finite elements context, although the concerning spaces and operators have not been relabeled.
Again, for Ω R 3 an open, bounded, connected set with a regular (Lipschitzian) boundary Ω , γ < 0 , α < 0 , β > 0 , K 2 < 0 and f L 2 ( Ω ) , we denote F 1 : V × Y R , F 2 : V R and G : Y R by
F 1 ( u , v 0 * ) = γ 2 Ω u · u d x + u 2 , v 0 * L 2 + K 1 2 Ω ( γ 2 u + 2 v 0 * u + f ) 2 d x + K 2 2 Ω u 2 d x ,
F 2 ( u ) = K 2 2 Ω u 2 d x u , f L 2 ,
and
G ( u 2 ) = α 2 Ω ( u 2 β ) 2 d x .
We define also
J 1 ( u , v 0 * ) = F 1 ( u , v 0 * ) F 2 ( u ) u 2 , v 0 * L 2 + G ( u 2 ) ,
J ( u ) = γ 2 Ω u · u d x + α 2 Ω ( u 2 β ) 2 d x + u , f L 2 ,
A = { u V : u f < 0 , a . e . in Ω } ,
V 2 = { u V : u K 3 } ,
\[ V_1 = A_- \cap V_2, \]
and F 1 * : [ Y * ] 2 R , F 2 * : Y * R , and G * : Y * R , by
F 1 * ( v 2 * , v 0 * ) = inf u V { u , v 2 * L 2 F 1 ( u , v 0 * ) } = 1 2 Ω v 2 * K 1 ( γ 2 + 2 v 0 * ) f 2 ( γ 2 + 2 v 0 * + K 2 + K 1 ( γ 2 + 2 v 0 * ) 2 ) d x K 1 2 Ω f 2 d x ,
F 2 * ( v 2 * ) = inf u V { u , v 2 * L 2 F 2 ( u ) } = 1 2 K 2 Ω ( v 2 * f ) 2 d x ,
and
G * ( v 0 * ) = inf v Y { v , v 0 * L 2 G ( v ) } = 1 2 α Ω ( v 0 * ) 2 d x + β Ω v 0 * d x
At this point we define
B 1 * = { v 0 * Y * : v 0 * K / 2 } ,
B 2 * = { v 0 * Y * : γ 2 + 2 v 0 * + K 1 ( γ 2 + 2 v 0 * ) 2 > 0 } ,
B 3 * = { v 0 * Y * : 1 / α + 4 K 1 [ u ( v 2 * , v 0 * ) 2 ] + 100 / | K 2 | > 0 , v 2 * E 1 * } ,
where
u ( v 2 * , v 0 * ) = φ 1 φ ,
φ 1 = ( v 2 * + K 1 ( γ 2 + 2 v 0 * ) f )
and
φ = ( γ 2 + 2 v 0 * + K 1 ( γ 2 + 2 v 0 * ) 2 + K 2 ) ,
By direct computation we may obtain
2 [ u ( v 2 * , v 0 * ) 2 ] ( v 0 * ) 2 0 , v 0 * B * , v 2 * E 1 *
so that B 3 * is convex.
Finally, we also define
B 4 * = { v 0 * Y * : [ γ 2 + 2 v 0 * ] ε I d } ,
E 1 * = { v 2 * Y * : v 2 * ( 5 / 4 ) | K 2 | } ,
E 2 * = { v 2 * Y * : f v 2 * > 0 , a . e . in Ω } ,
E * = E 1 * E 2 * ,
And J 1 * by
J 1 * ( v 2 * , v 0 * ) = F 1 * ( v 2 * , v 0 * ) + F 2 * ( v 2 * ) G * ( v 0 * ) .
We assume K 2 < 0 , and
| K 2 | K 1 K K 3 max { | α | , | γ | , β , 1 / ε 2 } .
Recalling that
φ = ( γ 2 + 2 v 0 * + K 1 ( γ 2 + 2 v 0 * ) 2 + K 2 ) ,
φ 1 = ( v 2 * + K 1 ( γ 2 + 2 v 0 * ) f ) ,
and
u = φ 1 φ ,
we obtain
2 J 1 * ( v 2 * , v 0 * ) ( v 2 * ) 2 = 1 / K 2 1 / φ > 0 ,
in E * × B 3 * and
2 J 1 * ( v 2 * , v 0 * ) ( v 0 * ) 2 = 4 u 2 K 1 1 / α + O ( 1 / | K 2 | ) .
Also,
2 J 1 * ( v 2 * , v 0 * ) ( v 2 * ) 2 2 J 1 * ( v 2 * , v 0 * ) ( v 0 * ) 2 2 J 1 * ( v 2 * , v 0 * ) v 2 * v 0 * = O K 1 2 A 3 + K 1 A 4 α K 2 φ ,
where
A 3 = 8 α ( f 2 + 4 f ( γ 2 + 2 v 0 * ) u + 3 [ ( γ 2 + 2 v 0 * ) u ] 2
and
A 4 = ( γ 2 + 2 v 0 * ) 2 12 α [ ( γ 2 + 2 v 0 * ) u ] u .
Observe that at a critical point A 3 = 0 and A 4 > 0 so that, for the dual formulation, we set the restrictions A 3 0 and A 4 ε I d .
Thus, we define
B 5 * = { v 0 * Y * : A 3 0 and A 4 ε I d } ,
and
B * = B 1 * B 2 * B 3 * B 4 * B 5 * .
Observe also that
2 J 1 * ( v 2 * , v 0 * ) ( v 2 * ) 2 2 J 1 * ( v 2 * , v 0 * ) ( v 0 * ) 2 2 J 1 * ( v 2 * , v 0 * ) v 2 * v 0 * > 0 ,
on B * × E * , so that J 1 * is convex on E * × B *
Considering such statements and definitions, we may prove the following theorem.
Theorem 6. 
Let ( v ^ 2 * , v ^ 0 * ) E * × B * be such that
δ J 1 * ( v ^ 2 * , v ^ 0 * ) = 0
and u 0 V 1 be such that
u 0 = v ^ 2 * K 1 ( γ 2 + 2 v ^ 0 * ) f K 2 + 2 v ^ 0 * γ 2 + K 1 ( γ 2 + 2 v ^ 0 * ) 2 .
Under such hypotheses, we have
δ J ( u 0 ) = 0 ,
so that
J ( u 0 ) = sup u V 1 J ( u ) + K 2 2 Ω ( u u 0 ) 2 d x = inf ( v 2 * , v 0 * ) E * × B * J 1 * ( v 2 * , v 0 * ) = J 1 * ( v ^ 2 * , v ^ 0 * ) .
Proof. 
Observe that $\delta J_1^*(\hat{v}_2^*, \hat{v}_0^*) = 0$, so that, since $J_1^*$ is convex on the convex set $E^*\times B^*$, we have that
J 1 * ( v ^ 2 * , v ^ 0 * ) = inf ( v 2 * , v 0 * ) E * × B * J 1 * ( v 2 * , v 0 * ) .
Now we are going to show that
δ J ( u 0 ) = 0 .
From
J 1 * ( v ^ 2 * , v ^ 0 * ) v 2 * = 0 ,
we have
u 0 + v ^ 2 * K 2 = 0 ,
and thus
v ^ 2 * = K 2 u 0 .
Finally, denoting
D = γ 2 u 0 + 2 v ^ 0 * u 0 f ,
from
J 1 * ( v ^ 2 * , v ^ 0 * ) v 0 * = 0 ,
we have
2 D u 0 + u 0 2 v ^ 0 * α β = 0 ,
so that
v ^ 0 * = α ( u 0 2 β 2 D u 0 ) .
Observe now that
v ^ 2 * + K 1 ( γ 2 + 2 v ^ 0 * ) f = ( K 2 γ 2 + 2 v ^ 0 * + K 1 ( γ 2 + 2 v ^ 0 * ) 2 ) u 0
so that
K 2 u 0 2 v ^ 0 u 0 K u 0 + f = K 2 u 0 K u 0 γ 2 u 0 + K 1 ( γ 2 + 2 v ^ 0 * ) ( γ 2 u 0 + 2 v ^ 0 * u 0 f ) .
The solution for this last equation is obtained through the relation
γ 2 u 0 + 2 v ^ 0 * u 0 f = D = 0 ,
so that, from this and the previous expression for $\hat{v}_0^*$, we get
v ^ 0 * = α ( u 0 2 β ) .
Thus,
δ J ( u 0 ) = γ 2 u 0 + 2 α ( u 0 2 β ) u 0 f = 0
and
δ J ( u 0 ) + K 2 2 Ω ( u u 0 ) 2 d x = 0 ,
and hence, from the concerning concavity in u on V,
\[ J(u_0) = \max_{u\in V_1}\left\{ J(u) + \frac{K_2}{2}\int_\Omega (u - u_0)^2\,dx \right\}. \]
Moreover, from the Legendre transform properties
F 1 * ( v ^ 2 * , v ^ 0 * ) = u 0 , v ^ 2 * L 2 F 1 ( u 0 , v ^ 0 * ) ,
F 2 * ( v ^ 2 * ) = u 0 , v ^ 2 * L 2 F 2 ( u 0 ) ,
G * ( v ^ 0 * ) = u 0 2 , v ^ 0 * L 2 G ( u 0 2 ) ,
so that
J 1 * ( v ^ 2 * , v ^ 0 * ) = F 1 * ( v ^ 2 * , v ^ 0 * ) + F 2 * ( v ^ 2 * ) G * ( v ^ 0 * ) = F 1 ( u 0 , v ^ 0 * ) F 2 ( u 0 ) u 0 2 , v ^ 0 * L 2 + G ( u 0 2 ) = J ( u 0 ) .
Finally, observe that
J 1 * ( v ^ 2 * , v ^ 0 * ) F 1 ( u , v ^ 0 * ) u , v ^ 2 * L 2 + F 2 * ( v ^ 2 * ) G * ( v ^ 0 * ) inf v 0 * Y * γ 2 Ω u · u d x + K 2 2 Ω u 2 d x u , v ^ 2 * L 2 + F 2 * ( v ^ 2 * ) + u 2 , v 0 * L 2 α 2 Ω ( v 0 * ) 2 d x β Ω v 0 * d x = γ 2 Ω u · u d x + α 2 Ω ( u 2 β ) 2 d x + u , f L 2 + K 2 2 Ω u 2 d x u , K 2 u 0 L 2 + K 2 2 Ω u 0 2 d x = J ( u ) + K 2 2 Ω ( u u 0 ) 2 d x ,
$\forall u \in V_1$, where we recall that $K_2 < 0$.
Joining the pieces, we have got
J ( u 0 ) = sup u V 1 J ( u ) + K 2 2 Ω ( u u 0 ) 2 d x = inf ( v 2 * , v 0 * ) E * × B * J 1 * ( v 2 * , v 0 * ) = J 1 * ( v ^ 2 * , v ^ 0 * ) .
The proof is complete.

8. Closely related primal-dual variational formulations

Consider again the functional $J: V \to \mathbb{R}$, where
\[ J(u) = \frac{\gamma}{2}\int_\Omega \nabla u\cdot\nabla u\,dx + \frac{\alpha}{2}\int_\Omega (u^2-\beta)^2\,dx - \langle u, f\rangle_{L^2}, \]
where $\alpha > 0$, $\gamma > 0$, $\beta > 0$, $f \in L^2(\Omega)$ and $V = W_0^{1,2}(\Omega)$.
Observe that
\[ \begin{aligned} J(u) &= J(u) + \langle u^2, v_0^*\rangle_{L^2} - \langle u^2, v_0^*\rangle_{L^2} \\ &= \frac{\gamma}{2}\int_\Omega \nabla u\cdot\nabla u\,dx + \langle u^2, v_0^*\rangle_{L^2} - \langle u, f\rangle_{L^2} - \langle u^2, v_0^*\rangle_{L^2} + \frac{\alpha}{2}\int_\Omega (u^2-\beta)^2\,dx \\ &\ge \frac{\gamma}{2}\int_\Omega \nabla u\cdot\nabla u\,dx + \langle u^2, v_0^*\rangle_{L^2} - \langle u, f\rangle_{L^2} + \inf_{v\in Y}\left\{ -\langle v, v_0^*\rangle_{L^2} + \frac{\alpha}{2}\int_\Omega (v-\beta)^2\,dx \right\} \\ &= \frac{\gamma}{2}\int_\Omega \nabla u\cdot\nabla u\,dx + \langle u^2, v_0^*\rangle_{L^2} - \langle u, f\rangle_{L^2} - \frac{1}{2\alpha}\int_\Omega (v_0^*)^2\,dx - \beta\int_\Omega v_0^*\,dx \\ &= J_1(u, v_0^*). \end{aligned} \]
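The infimum in v used in the chain above may be computed pointwise; a short verification: the integrand $-v\,v_0^* + \frac{\alpha}{2}(v-\beta)^2$ is minimized at $v = \beta + v_0^*/\alpha$, so that
\[ \inf_{v\in Y}\left\{ -\langle v, v_0^*\rangle_{L^2} + \frac{\alpha}{2}\int_\Omega (v-\beta)^2\,dx \right\} = -\beta\int_\Omega v_0^*\,dx - \frac{1}{2\alpha}\int_\Omega (v_0^*)^2\,dx. \]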
Having obtained $J_1(u, v_0^*)$, we propose the following exactly penalized primal-dual formulation $J_2^*(u, v_0^*)$, where
\[ J_2^*(u, v_0^*) = J_1(u, v_0^*) + \frac{K_1}{2}\int_\Omega (-\gamma\nabla^2 u + 2v_0^* u - f)^2\,dx, \]
so that
\[ J_2^*(u, v_0^*) = \frac{\gamma}{2}\int_\Omega \nabla u\cdot\nabla u\,dx + \langle u^2, v_0^*\rangle_{L^2} - \langle u, f\rangle_{L^2} - \frac{1}{2\alpha}\int_\Omega (v_0^*)^2\,dx - \beta\int_\Omega v_0^*\,dx + \frac{K_1}{2}\int_\Omega (-\gamma\nabla^2 u + 2v_0^* u - f)^2\,dx. \]
In particular, if we set
\[ V_1 = \{u \in V \;:\; \|u\|_\infty \le K_3\} \]
and
\[ K_1 = \frac{1}{8K_3^2\,\alpha}, \]
we may also define
\[ J_3(u) = \sup_{v_0^*\in B^*} J_2^*(u, v_0^*), \]
where
\[ B^* = \{v_0^* \in Y^* \;:\; \|v_0^*\|_\infty \le K/8\}, \]
for appropriate $K, K_3 > 0$.
Here we highlight that $J_2^*$ is concave in $v_0^*$ on $B^*$ (indeed, for $u \in V_1$ it is concave in $v_0^*$ on the whole of $Y^*$) and that the parameter $K_1 > 0$, multiplying a positive definite quadratic functional in u, improves the convexity conditions of $J_3$.
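To make the choice $K_1 = 1/(8K_3^2\alpha)$ concrete, a short check of the concavity claim, assuming $\|u\|_\infty \le K_3$ as in the definition of $V_1$:
\[ \frac{\partial^2 J_2^*(u, v_0^*)}{\partial (v_0^*)^2} = -\frac{1}{\alpha} + 4K_1 u^2 \le -\frac{1}{\alpha} + \frac{4K_3^2}{8K_3^2\alpha} = -\frac{1}{2\alpha} < 0. \]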

9. One more duality principle suitable for the primal formulation global optimization

In this section we establish one more duality principle and related convex dual formulation suitable for a global optimization of the primal variational formulation.
Let $\Omega \subset \mathbb{R}^3$ be an open, bounded, connected set with a regular (Lipschitzian) boundary denoted by $\partial\Omega$.
For the primal formulation, we define $V = W_0^{1,2}(\Omega)$ and consider a functional $J: V \to \mathbb{R}$, where
\[ J(u) = \frac{\gamma}{2}\int_\Omega \nabla u\cdot\nabla u\,dx + \frac{\alpha}{2}\int_\Omega (u^2-\beta)^2\,dx - \langle u, f\rangle_{L^2}. \]
Here we assume $f \in L^2(\Omega)$, and define $Y = Y^* = L^2(\Omega)$,
\[ V_2 = \{u \in V \;:\; \|u\|_\infty \le K_3\}, \]
\[ A_+ = \{u \in V \;:\; u\,f > 0, \text{ a.e. in } \Omega\} \]
and
\[ V_1 = A_+ \cap V_2, \]
for an appropriate constant $K_3 > 0$ to be specified.
Define also the functionals F 1 : V R , F 2 : V × Y R and G : Y R by
F 1 ( u ) = K 2 2 Ω ( 2 u ) 2 d x u , f L 2 ,
F 2 ( u , v 3 * , v 0 * ) = γ 2 Ω u · u d x u 2 , v 0 * L 2 + K 2 2 Ω ( 2 u ) 2 d x K 1 2 Ω ( γ 1 2 u + 2 v 3 * u h 1 ) 2 d x ,
and
G ( u 2 ) = α 2 Ω ( u 2 β ) 2 d x ,
for appropriate positive constants K 1 , K 2 , K 3 to be specified.
Moreover, define F 1 * : Y * R , and F 2 * : [ Y * ] 2 R and G * : Y * R , by
F 1 * ( v 2 * ) = sup u V { u , v 2 * L 2 F 1 ( u ) } = 1 2 K 2 Ω ( v 2 * + f ) 2 4 d x ,
and
F 2 * ( v 2 * , v 3 * , v 0 * ) = sup u V { u , v 2 * L 2 F 2 ( u , v 3 * , v 0 * ) } = 1 2 Ω ( v 2 * + K 1 ( γ 1 2 + 2 v 3 * ) h 1 ) 2 K 2 4 + γ 2 2 v 0 * K 1 ( γ 1 2 + 2 v 3 * ) 2 K 1 2 Ω h 1 2 d x
for appropriate γ 1 > 0 and H 1 L 2 ( Ω ) , and
G * ( v 0 * ) = sup v Y { v , v 0 * L 2 G ( v ) } = 1 2 α Ω ( v 0 * ) 2 d x + β Ω v 0 * d x .
Furthermore, we define
D * = { v 2 * Y * : v 2 * ( 3 / 2 ) K 2 } ,
B * = v 3 * Y * : v 3 * K 4 ,
for an appropriate constant K 4 > 0 to be specified.
Define also
C 1 * = { v 0 * Y * : v 0 * K 4 } .
and J 1 * : D * × C 1 * R by
J 1 * ( v 2 * , v 3 * , v 0 * ) = F 1 * ( v 2 * ) + F 2 * ( v 2 * , v 3 * , v 0 * ) G * ( v 0 * ) .
Moreover, assume
\[ K_2 \gg K_1 \gg K_4 \gg \max\{1, K_3, \alpha, \beta, \gamma, \gamma_1, \|f\|_\infty, \|h_1\|_\infty\}. \]
By directly computing $\delta^2 J_1^*(v_2^*, v_3^*, v_0^*)$ and denoting
A = 2 K 1 h 1 ,
B = 4 K 1 ( γ 1 2 + 2 v 3 * ) ,
φ = K 2 4 γ 2 + 2 v 0 * + K 1 ( γ 1 2 + 2 v 3 * ) 2 ) ,
φ 1 = v 2 * K 1 ( γ 1 2 + 2 v 3 * ) h 1 ,
u = φ 1 φ ,
we may obtain, considering that φ < 0
2 J 1 * ( v 2 * , v 3 * , v 0 * ) ( v 3 * ) 2 = 4 K 1 u 2 ( A u B ) 2 φ > 0
on D * × B * .
Moreover,
2 J 1 * ( v 2 * , v 3 * , v 0 * ) ( v 2 * ) 2 2 J 1 * ( v 2 * , v 3 * , v 0 * ) ( v 3 * ) 2 2 J 1 * ( v 2 * , v 3 * , v 0 * ) v 2 * v 3 * 2 = O K 1 2 H 1 + K 1 H 2 K 2 φ ,
where
H 1 = ( 8 ( h 1 2 + 4 h 1 ( γ 1 2 + 2 v 3 * ) u 3 [ ( γ 1 2 + 2 v 3 * ) 2 u ] u ) ,
and
H 2 = [ ( γ 2 + 2 v 0 * ) u ] u .
At a critical point we have H 1 = 0 and
\[ H_2 = f\,u_0 > 0, \text{ a.e. in } \Omega. \]
With such results, we may define the restrictions
C 2 * = { v 0 * Y * : H 1 ( v 2 * , v 3 * , v 0 * ) 0 , v 2 * D * , v 3 * B * } .
C 3 * = { v 0 * Y * : H 2 ( v 2 * , v 3 * , v 0 * ) 0 , v 2 * D * , v 3 * B * } .
Here, we define C * = C 1 * C 2 * C 3 * .
On the other hand, clearly we have
\[ \frac{\partial^2 J_1^*(v_2^*, v_3^*, v_0^*)}{\partial (v_0^*)^2} < 0. \]
From such results, we may obtain that $J_1^*$ is convex in $(v_2^*, v_3^*)$ and concave in $v_0^*$ on $D^*\times B^*\times C^*$.

9.1. The main duality principle and a related convex dual formulation

Considering the statements and definitions presented in the previous section, we may prove the following theorem.
Theorem 7. 
Let ( v ^ 2 * , v ^ 3 * v ^ 0 * ) D * × B * × C * be such that
δ J 1 * ( v ^ 2 * , v ^ 3 * , v ^ 0 * ) = 0
and u 0 V 1 be such that
u 0 = F 1 * ( v ^ 2 * ) v 2 * .
Assume also
u 0 0 , a . e . in Ω .
Under such hypotheses, we have
δ J ( u 0 ) = 0 ,
γ 1 2 u 0 + 2 v ^ 3 * u 0 h 1 = 0 ,
and
J ( u 0 ) = inf u V 1 J ( u ) = inf ( v 2 * , v 3 * ) D * × B * sup v 0 * C * J 1 * ( v 2 * , v 3 * , v 0 * ) = J 1 * ( v ^ 2 * , v ^ 3 * , v ^ 0 * ) .
Proof. 
Observe that δ J 1 * ( v ^ 2 * , v ^ 3 * , v ^ 0 * ) = 0 so that, since J 1 * is convex in ( v 2 * , v 3 * ) D * × B * × C * and
2 J 1 * ( v ^ 2 * , v ^ 3 * , v 0 * ) ( v 0 * ) 2 > 0 , v 0 * C 1 * ,
we obtain
J 1 * ( v ^ 2 * , v ^ 3 * , v ^ 0 * ) = inf ( v 2 * , v 3 * ) D * × B * J 1 * ( v 2 * , v 3 * , v ^ 0 * ) = sup v 0 * C * J 1 * ( v ^ 2 * , v ^ 3 * , v 0 * ) .
Consequently, from this and the Saddle Point Theorem, we obtain
J 1 * ( v ^ 2 * , v ^ 3 * , v ^ 0 * ) = inf ( v 2 * , v 3 * ) D * × B * sup v 0 * C * J 1 * ( v 2 * , v 3 * , v 0 * ) .
Now we are going to show that
δ J ( u 0 ) = 0 .
From
J 1 * ( v ^ 2 * , v ^ 3 * , v ^ 0 * ) v 2 * = 0 ,
and
F 1 * ( v ^ 2 * ) v 2 * = u 0
we have
F 2 * ( v ^ 2 * , v ^ 3 * , v ^ 0 * ) v 2 * u 0 = 0
and
v ^ 2 * = K 2 4 u 0 f .
Observe now that
F 2 * ( v ^ 2 * , v ^ 3 * , v 0 * ) = sup u V { u , v 2 * L 2 F 2 ( u , v 3 * , v 0 * ) } .
Denoting
\[ H(v_2^*, v_3^*, v_0^*, u) = \langle u, v_2^*\rangle_{L^2} - F_2(u, v_3^*, v_0^*), \]
there exists u ^ V such that
H ( v ^ 2 * , v ^ 3 * , v ^ 0 * , u ^ ) u = 0 ,
and
F 2 * ( v ^ 2 * , v ^ 3 * , v ^ 0 * ) = H ( v ^ 2 * , v ^ 3 * , v ^ 0 * , u ^ ) ,
so that
F 2 * ( v ^ 2 * , v ^ 3 * , v ^ 0 * ) v 2 * = H ( v ^ 2 * , v ^ 3 * , v ^ 0 * , u ^ ) v 2 * + H ( v ^ 2 * , v ^ 3 * , v ^ 0 * , u ^ ) u u ^ v 2 * = u ^ .
Summarizing, we have got
u 0 = F 2 * ( v ^ 2 * , v ^ 3 * , v ^ 0 * ) v 2 * = u ^ .
From such results and the Legendre transform properties we get
v 2 * = F 1 ( u 0 ) u
and
v 2 * = F 2 ( u 0 , v ^ 3 * , v ^ 0 * ) u .
On the other hand, from the variation of J 1 * in v 3 * , we have
F 2 * ( v ^ 2 * , v ^ 3 * , v ^ 0 * ) v 3 * = K 1 ( γ 1 2 u 0 + 2 v ^ 3 * u 0 h 1 ) 2 u 0 + H ( v ^ 2 * , v ^ 3 * , v ^ 0 * , u ^ ) u u ^ v 3 * = K 1 ( γ 1 2 u 0 + 2 v ^ 3 * u 0 h 1 ) 2 u 0 = 0 .
From such results, since
u 0 0 , a . e . in Ω ,
we get
γ 1 2 u 0 + 2 v ^ 3 * u 0 h 1 = 0 , a . e . in Ω .
Finally, from the variation of J 1 * in v 0 * we obtain
F 2 * ( v ^ 2 * , v ^ 3 * , v ^ 0 * ) v 0 * G * ( v 0 * ) v 0 * = 0 ,
so that
u 0 2 + H ( v ^ 2 * , v ^ 3 * , v ^ 0 * , u ^ ) u u ^ v 0 * v 0 * α β = 0 .
Thus,
v 0 * = α ( u 0 2 β ) .
Consequently, from such last results, we have
0 = v ^ 2 * v ^ 2 * = F 1 ( u 0 ) u F 2 ( u 0 , v ^ 3 * , v ^ 0 * ) u = K 2 4 u 0 f K 2 4 u 0 γ 2 u 0 + 2 v 0 * u 0 = γ 2 u 0 + 2 α ( u 0 2 β ) u 0 f = δ J ( u 0 ) .
Summarizing,
δ J ( u 0 ) = 0 .
Furthermore, also from such last results and the Legendre transform properties, we have
F 1 * ( v ^ 2 * ) = u 0 , v ^ 2 * L 2 F 1 ( u 0 ) ,
F 2 * ( v ^ 2 * , v ^ 3 * v ^ 0 * ) = u 0 , v ^ 2 * L 2 F 2 ( u 0 , v ^ 3 * , v ^ 0 * ) ,
G * ( v ^ 0 * ) = u 0 2 , v 0 * L 2 G ( u 0 2 ) ,
so that
J 1 * ( v ^ 2 * , v ^ 3 * , v ^ 0 * ) = F 1 * ( v ^ 2 * ) + F 2 * ( v ^ 2 * , v ^ 3 * , v ^ 0 * ) G * ( v ^ 0 * ) = J ( u 0 ) .
Finally, observe that
J 1 * ( v 2 * , v 3 * , v 0 * ) F 1 ( u ) u , v 2 * L 2 + F 2 * ( v 2 * , v 3 * , v 0 * ) G * ( v 0 * ) ,
u V 1 , v 2 * D * , v 3 * B * , v 0 * C * .
Therefore,
sup v 0 * C * J 1 * ( v 2 * , v 3 * , v 0 * ) sup v 0 * C 1 * { u , v 2 * L 2 + F 1 ( u ) + F 2 * ( v 2 * , v 3 * , v 0 * ) G * ( v 0 * ) } ,
so that
inf ( v 2 * , v 3 * ) D * × B * sup v 0 * C * J 1 * ( v 2 * , v 3 * , v 0 * ) inf ( v 2 * , v 3 * ) D * × B * sup v 0 * C 1 * { u , v 2 * L 2 + F 1 ( u ) + F 2 * ( v 2 * , v 3 * , v 0 * ) G * ( v 0 * ) } = J ( u ) , u V 1 .
Summarizing, we have got
J 1 * ( v ^ 2 * , v ^ 3 * , v ^ 0 * ) = inf ( v 2 * , v 3 * ) D * × B * sup v 0 * C * J 1 * ( v 2 * , v 3 * , v 0 * ) inf u V 1 J ( u ) .
Joining the pieces, we have got
J ( u 0 ) = inf u V 1 J ( u ) = inf ( v 2 * , v 3 * ) D * × B * sup v 0 * C * J 1 * ( v 2 * , v 3 * , v 0 * ) = J 1 * ( v ^ 2 * , v ^ 3 * , v ^ 0 * ) .
The proof is complete.

10. Another duality principle for a related model in phase transition

In this section we present another duality principle for a related model in phase transition.
Let $\Omega = [0, 1] \subset \mathbb{R}$ and consider a functional $J: V \to \mathbb{R}$, where
\[ J(u) = \frac{1}{2}\int_\Omega ((u')^2 - 1)^2\,dx + \frac{1}{2}\int_\Omega u^2\,dx - \langle u, f\rangle_{L^2}, \]
and where
\[ V = \{u \in W^{1,4}(\Omega) \;:\; u(0) = 0 \text{ and } u(1) = 1/2\} \]
and $f \in L^2(\Omega)$.
A global optimum is not attained for J, so that the problem of finding a global minimum of J has no solution.
Nevertheless, one question remains: how do minimizing sequences behave close to the infimum of J?
We intend to use duality theory to approximately solve such a global optimization problem.
Denoting $V_0 = W_0^{1,4}(\Omega)$, at this point we define $F: V \to \mathbb{R}$ and $F_1: V\times V_0 \to \mathbb{R}$ by
\[ F(u) = \frac{1}{2}\int_\Omega ((u')^2 - 1)^2\,dx \]
and
\[ F_1(u, \phi) = \frac{1}{2}\int_\Omega ((u' + \phi')^2 - 1)^2\,dx. \]
Observe that
\[ F(u) \ge \inf_{\phi\in V_0} F_1(u, \phi) \ge QF(u), \;\forall u \in V, \]
where $QF(u)$ refers to a quasi-convex regularization of F.
We also define $F_2: V\times V_0 \to \mathbb{R}$, $F_3: V\times V_0 \to \mathbb{R}$ and $G: V\times V_0 \to \mathbb{R}$ by
\[ F_2(u, \phi) = \frac{1}{2}\int_\Omega ((u' + \phi')^2 - 1)^2\,dx + \frac{1}{2}\int_\Omega u^2\,dx - \langle u, f\rangle_{L^2}, \]
\[ F_3(u, \phi) = F_2(u, \phi) + \frac{K}{2}\int_\Omega (u')^2\,dx + \frac{K_1}{2}\int_\Omega (\phi')^2\,dx \]
and
\[ G(u, \phi) = \frac{K}{2}\int_\Omega (u')^2\,dx + \frac{K_1}{2}\int_\Omega (\phi')^2\,dx. \]
Observe that, if $K > 0$ and $K_1 > 0$ are large enough, both $F_3$ and G are convex.
Denoting $Y = Y^* = L^2(\Omega)$, we also define the polar functional $G^*: Y^*\times Y^* \to \mathbb{R}$ by
\[ G^*(v^*, v_0^*) = \sup_{(u, \phi)\in V\times V_0}\{\langle u, v^*\rangle_{L^2} + \langle \phi, v_0^*\rangle_{L^2} - G(u, \phi)\}. \]
Observe that
\[ \inf_{u\in V} J(u) \ge \inf_{((u, \phi), (v^*, v_0^*))\in V\times V_0\times[Y^*]^2}\left\{ G^*(v^*, v_0^*) - \langle u, v^*\rangle_{L^2} - \langle \phi, v_0^*\rangle_{L^2} + F_3(u, \phi) \right\}. \]
With such results in mind, we define a relaxed primal-dual variational formulation for the primal problem, represented by $J_1^*: V\times V_0\times[Y^*]^2 \to \mathbb{R}$, where
\[ J_1^*(u, \phi, v^*, v_0^*) = G^*(v^*, v_0^*) - \langle u, v^*\rangle_{L^2} - \langle \phi, v_0^*\rangle_{L^2} + F_3(u, \phi). \]
Having defined such a functional, we may obtain numerical results by solving a sequence of convex auxiliary sub-problems, through the following algorithm.
  • Set $K \approx 150$, $K_1 = K/20$ and $0 < \varepsilon \ll 1$.
  • Choose $(u_1, \phi_1) \in V\times V_0$ such that $\|u_1\|_{1,\infty} \le K/4$ and $\|\phi_1\|_{1,\infty} \le K/4$.
  • Set $n = 1$.
  • Calculate $(v_n^*, (v_0^*)_n)$ as the solution of the system of equations
\[ \frac{\partial J_1^*(u_n, \phi_n, v_n^*, (v_0^*)_n)}{\partial v^*} = 0 \]
and
\[ \frac{\partial J_1^*(u_n, \phi_n, v_n^*, (v_0^*)_n)}{\partial v_0^*} = 0, \]
that is,
\[ \frac{\partial G^*(v_n^*, (v_0^*)_n)}{\partial v^*} - u_n = 0 \]
and
\[ \frac{\partial G^*(v_n^*, (v_0^*)_n)}{\partial v_0^*} - \phi_n = 0, \]
so that
\[ v_n^* = \frac{\partial G(u_n, \phi_n)}{\partial u} \]
and
\[ (v_0^*)_n = \frac{\partial G(u_n, \phi_n)}{\partial \phi}. \]
  • Calculate $(u_{n+1}, \phi_{n+1})$ by solving the system of equations
\[ \frac{\partial J_1^*(u_{n+1}, \phi_{n+1}, v_n^*, (v_0^*)_n)}{\partial u} = 0 \]
and
\[ \frac{\partial J_1^*(u_{n+1}, \phi_{n+1}, v_n^*, (v_0^*)_n)}{\partial \phi} = 0, \]
that is,
\[ -v_n^* + \frac{\partial F_3(u_{n+1}, \phi_{n+1})}{\partial u} = 0 \]
and
\[ -(v_0^*)_n + \frac{\partial F_3(u_{n+1}, \phi_{n+1})}{\partial \phi} = 0. \]
  • If $\max\{\|u_n - u_{n+1}\|_\infty,\; \|\phi_{n+1} - \phi_n\|_\infty\} \le \varepsilon$, then stop; otherwise set $n := n+1$ and go to item 4.
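For the G and $F_3$ defined above, the equations in items 4 and 5 take an explicit form; the following is a sketch of those optimality systems (assuming enough regularity to write them in strong form, with the boundary behavior inherited from $u \in V$ and $\phi \in V_0$):
\[ v_n^* = -K u_n'', \qquad (v_0^*)_n = -K_1 \phi_n'', \]
and, denoting $w = u_{n+1}' + \phi_{n+1}'$, item 5 corresponds to the coupled system
\[ -2\left[(w^2 - 1)\,w\right]' + u_{n+1} - f - K u_{n+1}'' = v_n^*, \]
\[ -2\left[(w^2 - 1)\,w\right]' - K_1 \phi_{n+1}'' = (v_0^*)_n. \]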
For the case in which $f(x) = 0$, we have obtained numerical results for $K = 1500$ and $K_1 = K/20$; for the corresponding solution $u_0$ obtained, please see Figure 1. For the case in which $f(x) = \sin(\pi x)/2$, we have obtained numerical results for $K = 100$ and $K_1 = K/20$; for the corresponding solution $u_0$ obtained, please see Figure 2.
Remark 4. 
Observe that the solutions obtained are approximate critical points. They are not, in a classical sense, the global solutions of the related optimization problems. Indeed, such solutions reflect the average behavior of weak cluster points of the corresponding minimizing sequences.

11. A related numerical computation through the generalized method of lines

We start by recalling that the generalized method of lines was originally introduced in the book entitled "Topics on Functional Analysis, Calculus of Variations and Duality" [7], published in 2011.
Indeed, the present results are extensions and applications of previous ones which have been published since 2011, in books and articles such as [5,7,8,9]. About the Sobolev spaces involved we would mention [1]. Concerning the applications, related models in physics are addressed in [4,11].
We also emphasize that, in such a method, the domain of the partial differential equation in question is discretized in lines (or more generally, in curves) and the concerning solution is written on these lines as functions of boundary conditions and the domain boundary shape.
In fact, in its previous format, this method consists of the application of a partial finite differences procedure combined with the Banach fixed point theorem to obtain the relation between two adjacent lines (or curves).
In the present article, we propose an improvement concerning the way we truncate the series solution obtained through an application of the Banach fixed point theorem to find the relation between two adjacent lines. The results obtained are very good even when a typical parameter $\varepsilon > 0$ is very small.
In the next lines and sections we develop such a numerical procedure in detail.

11.1. About a concerning improvement for the generalized method of lines

Let $\Omega \subset \mathbb{R}^2$, where
\[ \Omega = \{(r, \theta) \in \mathbb{R}^2 \;:\; 1 \le r \le 2,\; 0 \le \theta \le 2\pi\}. \]
Consider the problem of solving the partial differential equation
\[ \begin{cases} \varepsilon\left( \dfrac{\partial^2 u}{\partial r^2} + \dfrac{1}{r}\dfrac{\partial u}{\partial r} + \dfrac{1}{r^2}\dfrac{\partial^2 u}{\partial \theta^2} \right) + \alpha u^3 - \beta u = f, & \text{in } \Omega, \\ u = u_0(\theta), & \text{on } \partial\Omega_1, \\ u = u_f(\theta), & \text{on } \partial\Omega_2. \end{cases} \]
Here
\[ \partial\Omega_1 = \{(1, \theta) \in \mathbb{R}^2 \;:\; 0 \le \theta \le 2\pi\}, \]
\[ \partial\Omega_2 = \{(2, \theta) \in \mathbb{R}^2 \;:\; 0 \le \theta \le 2\pi\}, \]
$\varepsilon > 0$, $\alpha > 0$, $\beta > 0$, and $f \equiv 1$ on $\Omega$.
In a partial finite differences scheme, such a system stands for
\[ \varepsilon\left( \frac{u_{n+1} - 2u_n + u_{n-1}}{d^2} + \frac{1}{t_n}\,\frac{u_n - u_{n-1}}{d} + \frac{1}{t_n^2}\,\frac{\partial^2 u_n}{\partial \theta^2} \right) + \alpha u_n^3 - \beta u_n = f_n, \]
$\forall n \in \{1, \ldots, N-1\}$, with the boundary conditions
\[ u_0 = 0 \]
and
\[ u_N = 0. \]
Here N is the number of lines, $d = 1/N$ and $t_n = 1 + n\,d$.
In particular, for $n = 1$ we have
\[ \varepsilon\left( \frac{u_2 - 2u_1 + u_0}{d^2} + \frac{1}{t_1}\,\frac{u_1 - u_0}{d} + \frac{1}{t_1^2}\,\frac{\partial^2 u_1}{\partial \theta^2} \right) + \alpha u_1^3 - \beta u_1 = f_1, \]
so that
\[ u_1 = \left( u_2 + u_1 + u_0 + \frac{1}{t_1}(u_1 - u_0)\,d + \frac{1}{t_1^2}\frac{\partial^2 u_1}{\partial \theta^2}\,d^2 + (\alpha u_1^3 - \beta u_1 - f_1)\frac{d^2}{\varepsilon} \right)\Big/\,3.0. \]
We solve this last equation through the Banach fixed point theorem, obtaining $u_1$ as a function of $u_2$ and $u_0$.
Indeed, we may set
\[ u_1^0 = u_2 \]
and
\[ u_1^{k+1} = \left( u_2 + u_1^k + u_0 + \frac{1}{t_1}(u_1^k - u_0)\,d + \frac{1}{t_1^2}\frac{\partial^2 u_1^k}{\partial \theta^2}\,d^2 + \left(\alpha (u_1^k)^3 - \beta u_1^k - f_1\right)\frac{d^2}{\varepsilon} \right)\Big/\,3.0, \]
$\forall k \in \mathbb{N}$.
Thus, we may obtain
\[ u_1 = \lim_{k\to\infty} u_1^k \equiv H_1(u_2, u_0). \]
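Informally, and disregarding the $\theta$-derivative term, the map defining $u_1^{k+1}$ has a pointwise derivative with respect to $u_1^k$ bounded by
\[ \frac{1}{3}\left( 1 + \frac{d}{t_1} + \left(3\alpha (u_1^k)^2 + \beta\right)\frac{d^2}{\varepsilon} \right), \]
so the iteration behaves as a contraction whenever this quantity stays below 1, which is the case for d small relative to $\varepsilon$ and $u_1^k$ bounded. This is only a heuristic sufficient condition, not a sharp estimate.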
Similarly, for $n = 2$, we have
\[ u_2 = \left( u_3 + u_2 + H_1(u_2, u_0) + \frac{1}{t_2}(u_2 - H_1(u_2, u_0))\,d + \frac{1}{t_2^2}\frac{\partial^2 u_2}{\partial \theta^2}\,d^2 + (\alpha u_2^3 - \beta u_2 - f_2)\frac{d^2}{\varepsilon} \right)\Big/\,3.0. \]
We solve this last equation through the Banach fixed point theorem, obtaining $u_2$ as a function of $u_3$ and $u_0$.
Indeed, we may set
\[ u_2^0 = u_3 \]
and
\[ u_2^{k+1} = \left( u_3 + u_2^k + H_1(u_2^k, u_0) + \frac{1}{t_2}(u_2^k - H_1(u_2^k, u_0))\,d + \frac{1}{t_2^2}\frac{\partial^2 u_2^k}{\partial \theta^2}\,d^2 + \left(\alpha (u_2^k)^3 - \beta u_2^k - f_2\right)\frac{d^2}{\varepsilon} \right)\Big/\,3.0, \]
$\forall k \in \mathbb{N}$.
Thus, we may obtain
\[ u_2 = \lim_{k\to\infty} u_2^k \equiv H_2(u_3, u_0). \]
Now reasoning inductively, having
\[ u_{n-1} = H_{n-1}(u_n, u_0), \]
we may get
\[ u_n = \left( u_{n+1} + u_n + H_{n-1}(u_n, u_0) + \frac{1}{t_n}(u_n - H_{n-1}(u_n, u_0))\,d + \frac{1}{t_n^2}\frac{\partial^2 u_n}{\partial \theta^2}\,d^2 + (\alpha u_n^3 - \beta u_n - f_n)\frac{d^2}{\varepsilon} \right)\Big/\,3.0. \]
We solve this last equation through the Banach fixed point theorem, obtaining $u_n$ as a function of $u_{n+1}$ and $u_0$.
Indeed, we may set
\[ u_n^0 = u_{n+1} \]
and
\[ u_n^{k+1} = \left( u_{n+1} + u_n^k + H_{n-1}(u_n^k, u_0) + \frac{1}{t_n}(u_n^k - H_{n-1}(u_n^k, u_0))\,d + \frac{1}{t_n^2}\frac{\partial^2 u_n^k}{\partial \theta^2}\,d^2 + \left(\alpha (u_n^k)^3 - \beta u_n^k - f_n\right)\frac{d^2}{\varepsilon} \right)\Big/\,3.0, \]
$\forall k \in \mathbb{N}$.
Thus, we may obtain
\[ u_n = \lim_{k\to\infty} u_n^k \equiv H_n(u_{n+1}, u_0). \]
We have thus obtained $u_n = H_n(u_{n+1}, u_0)$, $\forall n \in \{1, \ldots, N-1\}$.
In particular, $u_N = u_f(\theta)$, so that we may obtain
\[ u_{N-1} = H_{N-1}(u_N, u_0) \equiv F_{N-1}(u_N, u_0) = F_{N-1}(u_f(\theta), u_0(\theta)). \]
Similarly,
\[ u_{N-2} = H_{N-2}(u_{N-1}, u_0) = H_{N-2}(H_{N-1}(u_N, u_0), u_0) \equiv F_{N-2}(u_N, u_0) = F_{N-2}(u_f(\theta), u_0(\theta)), \]
and so on, up to obtaining
\[ u_1 = H_1(u_2, u_0) \equiv F_1(u_N, u_0) = F_1(u_f(\theta), u_0(\theta)). \]
The problem is then approximately solved.

11.2. Software in Mathematica for solving such an equation

We recall that the equation to be solved is of Ginzburg-Landau type, namely
\[ \begin{cases} \varepsilon\left( \dfrac{\partial^2 u}{\partial r^2} + \dfrac{1}{r}\dfrac{\partial u}{\partial r} + \dfrac{1}{r^2}\dfrac{\partial^2 u}{\partial \theta^2} \right) + \alpha u^3 - \beta u = f, & \text{in } \Omega, \\ u = 0, & \text{on } \partial\Omega_1, \\ u = u_f(\theta), & \text{on } \partial\Omega_2. \end{cases} \]
Here
\[ \Omega = \{(r, \theta) \in \mathbb{R}^2 \;:\; 1 \le r \le 2,\; 0 \le \theta \le 2\pi\}, \]
\[ \partial\Omega_1 = \{(1, \theta) \;:\; 0 \le \theta \le 2\pi\}, \qquad \partial\Omega_2 = \{(2, \theta) \;:\; 0 \le \theta \le 2\pi\}, \]
$\varepsilon > 0$, $\alpha > 0$, $\beta > 0$, and $f \equiv 1$ on $\Omega$. In a partial finite differences scheme, such a system stands for
\[ \varepsilon\left( \frac{u_{n+1} - 2u_n + u_{n-1}}{d^2} + \frac{1}{t_n}\,\frac{u_n - u_{n-1}}{d} + \frac{1}{t_n^2}\,\frac{\partial^2 u_n}{\partial \theta^2} \right) + \alpha u_n^3 - \beta u_n = f_n, \]
$\forall n \in \{1, \ldots, N-1\}$, with the boundary conditions
\[ u_0 = 0 \]
and
\[ u_N = u_f(x). \]
Here N is the number of lines and $d = 1/N$.
At this point we present the concerning software for computing an approximate solution.
The software below is for $N = 10$ lines and $u_0(x) \equiv 0$.
************************************
m8 = 10; (* N = 10 lines *)
d = 1/m8;
e1 = 0.1; (* eps = 0.1 *)
A = 1.0;
B = 1.0;
For[i = 1, i < m8, i++, f[i] = 1.0]; (* f = 1 on Omega *)
a = 0.0;
For[i = 1, i < m8, i++,
  Clear[b, u];
  t[i] = 1 + i*d;
  b[x_] = u[i + 1][x];
  For[k = 1, k < 30, k++, (* we have fixed the number of iterations *)
    z = (u[i + 1][x] + b[x] + a + (1/t[i])*(b[x] - a)*d
         + (1/t[i]^2)*D[b[x], {x, 2}]*d^2
         + (A*b[x]^3 - B*b[x] - f[i])*d^2/e1)/3.0;
    z = Series[z, {u[i + 1][x], 0, 3}, {u[i + 1]'[x], 0, 1},
         {u[i + 1]''[x], 0, 1}, {u[i + 1]'''[x], 0, 0}, {u[i + 1]''''[x], 0, 0}];
    z = Normal[z];
    z = Expand[z];
    b[x_] = z];
  a1[i] = z;
  Clear[b];
  u[i + 1][x_] = b[x];
  a = a1[i]];
b[x_] = uf[x];
For[i = 1, i < m8, i++,
  A1 = a1[m8 - i];
  A1 = Series[A1, {uf[x], 0, 3}, {uf'[x], 0, 1}, {uf''[x], 0, 1},
       {uf'''[x], 0, 0}, {uf''''[x], 0, 0}];
  A1 = Normal[A1];
  A1 = Expand[A1];
  u[m8 - i][x_] = A1;
  b[x_] = A1];
Print[u[m8/2][x]];
*************************************
The numerical expressions for the solutions of the concerning N = 10 lines are given by
u[1][x] = 0.47352 + 0.00691 uf[x] - 0.00459 uf[x]^2 + 0.00265 uf[x]^3 + 0.00039 uf''[x] - 0.00058 uf[x] uf''[x] + 0.00050 uf[x]^2 uf''[x] - 0.000181213 uf[x]^3 uf''[x]
u[2][x] = 0.76763 + 0.01301 uf[x] - 0.00863 uf[x]^2 + 0.00497 uf[x]^3 + 0.00068 uf''[x] - 0.00103 uf[x] uf''[x] + 0.00088 uf[x]^2 uf''[x] - 0.00034 uf[x]^3 uf''[x]
u[3][x] = 0.91329 + 0.02034 uf[x] - 0.01342 uf[x]^2 + 0.00768 uf[x]^3 + 0.00095 uf''[x] - 0.00144 uf[x] uf''[x] + 0.00122 uf[x]^2 uf''[x] - 0.00051 uf[x]^3 uf''[x]
u[4][x] = 0.97125 + 0.03623 uf[x] - 0.02328 uf[x]^2 + 0.01289 uf[x]^3 + 0.00147331 uf''[x] - 0.00223 uf[x] uf''[x] + 0.00182 uf[x]^2 uf''[x] - 0.00074 uf[x]^3 uf''[x]
u[5][x] = 1.01736 + 0.09242 uf[x] - 0.05110 uf[x]^2 + 0.02387 uf[x]^3 + 0.00211 uf''[x] - 0.00378 uf[x] uf''[x] + 0.00292 uf[x]^2 uf''[x] - 0.00132 uf[x]^3 uf''[x]
u[6][x] = 1.02549 + 0.21039 uf[x] - 0.09374 uf[x]^2 + 0.03422 uf[x]^3 + 0.00147 uf''[x] - 0.00634 uf[x] uf''[x] + 0.00467 uf[x]^2 uf''[x] - 0.00200 uf[x]^3 uf''[x]
u[7][x] = 0.93854 + 0.36459 uf[x] - 0.14232 uf[x]^2 + 0.04058 uf[x]^3 + 0.00259 uf''[x] - 0.00747373 uf[x] uf''[x] + 0.0047969 uf[x]^2 uf''[x] - 0.00194 uf[x]^3 uf''[x]
u[8][x] = 0.74649 + 0.57201 uf[x] - 0.17293 uf[x]^2 + 0.02791 uf[x]^3 + 0.00353 uf''[x] - 0.00658 uf[x] uf''[x] + 0.00407 uf[x]^2 uf''[x] - 0.00172 uf[x]^3 uf''[x]
u[9][x] = 0.43257 + 0.81004 uf[x] - 0.13080 uf[x]^2 + 0.00042 uf[x]^3 + 0.00294 uf''[x] - 0.00398 uf[x] uf''[x] + 0.00222 uf[x]^2 uf''[x] - 0.00066 uf[x]^3 uf''[x]

11.3. Some plots concerning the numerical results

In this section we present the lines 2, 4, 6, 8 related to the results obtained in the last section.
Indeed, we present such lines, in a first step, for the previous results obtained through the generalized method of lines and, in a second step, through a numerical method which is a combination of Newton's method and the generalized method of lines. In a third step, we also present the graphs by considering the expressions of the lines as those obtained through the generalized method of lines, up to the numerical coefficients of each function term, which are obtained by the numerical minimization of the functional J specified below. We consider the case in which $u_0(x) = 0$ and $u_f(x) = \sin(x)$.
For the procedure mentioned above as the third step, recalling that $N = 10$ lines and considering that $u_f''(x) = -u_f(x)$, we may approximately assume the following general line expressions:
\[ u_n(x) = a(1, n) + a(2, n)\,u_f(x) + a(3, n)\,u_f(x)^2 + a(4, n)\,u_f(x)^3, \;\forall n \in \{1, \ldots, N-1\}. \]
Defining
\[ W_n = \varepsilon\left( \frac{u_{n+1}(x) - 2u_n(x) + u_{n-1}(x)}{d^2} + \frac{1}{t_n}\,\frac{u_n(x) - u_{n-1}(x)}{d} + \frac{1}{t_n^2}\,u_n''(x) \right) + u_n(x)^3 - u_n(x) - 1 \]
and
\[ J(\{a(j, n)\}) = \sum_{n=1}^{N-1} \int_0^{2\pi} (W_n)^2\,dx, \]
we obtain $\{a(j, n)\}$ by numerically minimizing J.
Hence, we have obtained the following lines for these cases. For such graphs, we have considered 300 nodes in x, with mesh size $2\pi/300$ for $x \in [0, 2\pi]$.
For Lines 2, 4, 6 and 8 obtained through the generalized method of lines, please see Figure 3, Figure 6, Figure 9 and Figure 12.
For Lines 2, 4, 6 and 8 obtained through a combination of Newton's method and the generalized method of lines, please see Figure 4, Figure 7, Figure 10 and Figure 13.
Finally, for Lines 2, 4, 6 and 8 obtained through the minimization of the functional J, please see Figure 5, Figure 8, Figure 11 and Figure 14.
Figure 3. Line 2, solution $u_2(x)$ through the generalized method of lines.
Figure 4. Line 2, solution $u_2(x)$ through Newton's method.
Figure 5. Line 2, solution $u_2(x)$ through the minimization of the functional J.
Figure 6. Line 4, solution $u_4(x)$ through the generalized method of lines.
Figure 7. Line 4, solution $u_4(x)$ through Newton's method.
Figure 8. Line 4, solution $u_4(x)$ through the minimization of the functional J.
Figure 9. Line 6, solution $u_6(x)$ through the generalized method of lines.
Figure 10. Line 6, solution $u_6(x)$ through Newton's method.
Figure 11. Line 6, solution $u_6(x)$ through the minimization of the functional J.
Figure 12. Line 8, solution $u_8(x)$ through the generalized method of lines.
Figure 13. Line 8, solution $u_8(x)$ through Newton's method.
Figure 14. Line 8, solution $u_8(x)$ through the minimization of the functional J.

12. Conclusion

In the first part of this article we develop duality principles for non-convex variational optimization. In the subsequent sections we propose dual convex formulations suitable for a large class of models in physics and engineering. In the last section, we present an advance concerning the computation of a solution of a partial differential equation through the generalized method of lines. In particular, in its previous versions we used to truncate the series in $d^2$; however, we have realized that the results are much better when the line solutions are taken as series in $u_f(x)$ and its derivatives, as indicated in the present software.
This is a small difference with respect to the previous procedure, but it yields a substantial improvement in the results when the parameter $\varepsilon > 0$ is small.
Indeed, with a sufficiently large N (number of lines), we may obtain very good qualitative results even when $\varepsilon > 0$ is very small.

References

  1. R.A. Adams and J.F. Fournier, Sobolev Spaces, 2nd edn. (Elsevier, New York, 2003).
  2. W.R. Bielski, A. Galka, J.J. Telega, The Complementary Energy Principle and Duality for Geometrically Nonlinear Elastic Shells. I. Simple case of moderate rotations around a tangent to the middle surface. Bulletin of the Polish Academy of Sciences, Technical Sciences, Vol. 38, No. 7-9, 1988.
  3. W.R. Bielski and J.J. Telega, A Contribution to Contact Problems for a Class of Solids and Structures, Arch. Mech., 37, 4-5, pp. 303-320, Warszawa 1985.
  4. J.F. Annet, Superconductivity, Superfluids and Condensates, 2nd edn. ( Oxford Master Series in Condensed Matter Physics, Oxford University Press, Reprint, 2010).
  5. F.S. Botelho, Functional Analysis, Calculus of Variations and Numerical Methods in Physics and Engineering, CRC Taylor and Francis, Florida, 2020.
  6. F.S. Botelho, Variational Convex Analysis, Ph.D. thesis, Virginia Tech, Blacksburg, VA -USA, (2009).
  7. F. Botelho, Topics on Functional Analysis, Calculus of Variations and Duality, Academic Publications, Sofia, (2011).
  8. F. Botelho, Existence of solution for the Ginzburg-Landau system, a related optimal control problem and its computation by the generalized method of lines, Applied Mathematics and Computation, 218, 11976-11989, (2012). [CrossRef]
  9. F. Botelho, Functional Analysis and Applied Optimization in Banach Spaces, Springer Switzerland, 2014.
  10. J.C. Strikwerda, Finite Difference Schemes and Partial Differential Equations, SIAM, second edition (Philadelphia, 2004).
  11. L.D. Landau and E.M. Lifschits, Course of Theoretical Physics, Vol. 5- Statistical Physics, part 1. (Butterworth-Heinemann, Elsevier, reprint 2008).
  12. R.T. Rockafellar, Convex Analysis, Princeton Univ. Press, (1970).
  13. J.J. Telega, On the complementary energy principle in non-linear elasticity. Part I: Von Karman plates and three dimensional solids, C.R. Acad. Sci. Paris, Serie II, 308, 1193-1198; Part II: Linear elastic solid and non-convex boundary condition. Minimax approach, ibid, pp. 1313-1317 (1989).
  14. A. Galka and J.J. Telega, Duality and the complementary energy principle for a class of geometrically non-linear structures. Part I. Five parameter shell model; Part II. Anomalous dual variational principles for compressed elastic beams, Arch. Mech. 47 (1995), 677-698, 699-724.
  15. J.F. Toland, A duality principle for non-convex optimisation and the calculus of variations, Arch. Rat. Mech. Anal., 71, No. 1 (1979), 41-61. [CrossRef]
Figure 1. Solution $u_0(x)$ for the case $f(x) = 0$.
Figure 2. Solution $u_0(x)$ for the case $f(x) = \sin(\pi x)/2$.