Preprint
Article

An Exposition on Critical Point Theory with Applications

This version is not peer-reviewed

Submitted: 14 May 2024
Posted: 16 May 2024


Abstract
The notion of Critical Point Theory has extensive application in the field of Partial Differential Equations, one of which is studying the existence, uniqueness and multiplicity (when the solution is not unique) of weak solutions to Elliptic PDEs under certain boundary conditions. This article provides a brief survey of specific concepts such as differentiation on Banach Spaces, maxima and minima, and their applications to PDEs. Furthermore, we discuss the notion of the weak topology on Banach Spaces before introducing the Variational Principle and its applications. The epilogue of the article primarily deals with results on the existence of weak solutions to Dirichlet Boundary Value Problems under specific conditions. The most notable aspect of this article is the proof of Rabinowitz's Saddle Point Theorem using the method of the Brouwer Degree. Readers who are motivated to pursue research in any of the topics relevant to the contents of this paper will find the References section to be extremely resourceful.
Subject: Computer Science and Mathematics  -   Analysis

1. Introduction

1.1. Preliminaries

A priori, given a partial differential equation on a bounded domain, be it linear or non-linear, we may obtain solutions which also happen to be critical points of certain functionals defined on an appropriate Sobolev Space.
Suppose we consider the following boundary value problem on some bounded domain Ω ⊂ R^n:
−Δu = f in Ω,  u = 0 on ∂Ω. (1)
Here ∂Ω denotes the boundary of Ω in R^n. In view of the boundary condition, it suffices to search for weak solutions of (1) in the Sobolev Space H_0^1(Ω); this notion is needed because not every function in H_0^1(Ω) is smooth (i.e., C^2).
Definition 1. 
(Weak Solution) A function u ∈ H_0^1(Ω) is defined to be a weak solution of (1) if
∫_Ω ∇u·∇ϕ = ∫_Ω f ϕ   ∀ ϕ ∈ H_0^1(Ω). (2)
In case u is a classical solution, i.e., u ∈ C^2(Ω) ∩ C(Ω̄) and satisfies (1) point-wise, then we can indeed obtain (2) by multiplying (1) by ϕ and integrating by parts.
Remark 1. 
(2) is valid whenever f ∈ H^{−1}(Ω) = H_0^1(Ω)*.
Define a functional J : H_0^1(Ω) → R as
J(u) := (1/2) ∫_Ω |∇u|^2 − ∫_Ω f u. (3)
It can be deduced that J is in fact a C^1-function, and J′(u) : H_0^1(Ω) → R is a bounded linear functional having the following expression,
J′(u)ϕ = ∫_Ω ∇u·∇ϕ − ∫_Ω f ϕ   ∀ ϕ ∈ H_0^1(Ω). (4)
Hence, we can infer that u_0 ∈ H_0^1(Ω) is a weak solution of (1) ⇔ J′(u_0) = 0.
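To make this correspondence concrete, the following minimal numerical sketch (not part of the original argument; the grid, the choice f ≡ 1 and all variable names are ad hoc) discretises the one-dimensional model problem −u″ = f on (0, 1), u(0) = u(1) = 0. The critical point of the discretised energy J_h(u) = ½ uᵀAu − bᵀu is exactly the solution of the linear system Au = b, the finite-difference analogue of (2).

# illustrative 1D sketch: the minimiser of the discretised J solves the weak problem
import numpy as np

n = 99                      # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
f = np.ones(n)              # take f = 1, so the exact solution is x(1 - x)/2

# stiffness matrix of -d^2/dx^2 with zero boundary values
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
b = f

u = np.linalg.solve(A, b)   # critical point of J_h: grad J_h(u) = A u - b = 0
u_exact = x * (1 - x) / 2
print("max error vs exact solution:", np.abs(u - u_exact).max())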

1.2. Conditions on a C 1 -Function Defined on a Real Banach Space

Given a real Banach Space X and a C^1-function I : X → R, our primary objective in this section shall be to obtain certain conditions on I which ensure that ∃ x_0 ∈ X satisfying I′(x_0) = 0.
The answer is quite simple in the one-dimensional case. Consider X = R and any C^1-function I : R → R. Then, for every x_1, x_2 ∈ R with x_1 < x_2, if ∃ x_3 ∈ (x_1, x_2) satisfying I(x_3) > max{I(x_1), I(x_2)}, it follows that ∃ x_0 ∈ (x_1, x_2) such that I′(x_0) = 0.
The higher dimensional case is rather more delicate. For example, consider I : R^2 → R defined as I(x, y) := e^x − y^2. For any x_1 ∈ R we can choose y_1 > 0 so large that I(x_1, ±y_1) = e^{x_1} − y_1^2 < 0, whereas I(x, 0) = e^x > 0 for every x. Thus the line y = 0 separates the points (x_1, ±y_1), and
I(x, 0) > max{ I(x_1, y_1), I(x_1, −y_1) }, ∀ x ∈ R.
But the simple observation that ∇I(x, y) = (e^x, −2y) never vanishes helps us conclude that I has no critical point at all.
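A quick numerical sanity check of this example (illustrative only; the grid is an arbitrary choice) confirms that the gradient stays bounded away from zero on any bounded region:

# |grad I| = sqrt(exp(2x) + 4y^2) never vanishes, since exp(x) > 0
import numpy as np

xs, ys = np.meshgrid(np.linspace(-5, 5, 201), np.linspace(-5, 5, 201))
grad_norm = np.sqrt(np.exp(xs)**2 + (2 * ys)**2)
print("smallest |grad I| on the grid:", grad_norm.min())   # bounded below by exp(-5) > 0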

1.3. Palais-Smale Condition

Definition 2. 
(Palais-Smale Condition) Given a Banach Space X and I ∈ C^1(X; R), we say that the functional I satisfies the Palais-Smale Condition if every sequence {x_n}_{n∈N} in X for which {I(x_n)} is bounded and I′(x_n) → 0 contains a convergent subsequence.
The question raised in Section 1.2 can be answered for an arbitrary Banach Space using a famous result by Ambrosetti and Rabinowitz.
Theorem 1. 
(Mountain-Pass Theorem) Given a Banach Space X and I ∈ C^1(X; R), assume that I satisfies the Palais-Smale Condition. Furthermore, suppose ∃ R > 0 and e ∈ X satisfying ‖e‖ > R and b = inf_{x ∈ ∂B_R(0)} I(x) > max{ I(0), I(e) }. Then ∃ x_0 ∈ X such that I′(x_0) = 0 and I(x_0) ≥ b.
Remark 2. 
In other words, the theorem implies that if a pair of points in the graph of I are separated by a mountain range over ∂B_R(0), then I has a critical point.
Remark 3. 
One of our aims in this article is to present a proof of the Mountain-Pass Theorem (Theorem 1), as well as to apply it in order to look for a non-negative weak solution u ∈ H_0^1(Ω) of the non-linear analogue of the boundary value problem (1), with right-hand side f(u) for a continuous f : R → R, over a bounded domain Ω ⊂ R^n.

1.4. Differentiation in Banach Spaces

Suppose, we choose any two real Banach Spaces X and Y, and, B ( X ; Y ) denotes the space of all bounded linear operators from X to Y. Let us further denote, X * : = B ( X ; R ) . Moreover, let A X be open.
Definition 3. 
Let x_0 ∈ A. A function I : A → Y is defined to be Fréchet Differentiable (or just differentiable) at x_0 if ∃ a bounded linear operator DI(x_0) : X → Y, in other words DI(x_0) ∈ B(X; Y), satisfying
lim_{h→0} ‖I(x_0 + h) − I(x_0) − DI(x_0)h‖_Y / ‖h‖_X = 0. (5)
Example 1. 
For any function f : R → R differentiable at some x_0 ∈ R, we have f′(x_0) ∈ R. Correspondingly, the Fréchet Derivative Df(x_0) of f at the point x_0 has the following expression: Df(x_0)(x) = f′(x_0)·x, ∀ x ∈ R.
In general, in case when, X = R , A = ( a , b ) and, I : A Y be differentiable at x 0 ( a , b ) , then we define, I ( x 0 ) : = D I ( x 0 ) ( 1 ) Y via the canonical isomorphism, i : B ( R ; Y ) Y given by, i ( T ) : = T ( 1 ) .
For arbitrary real Banach Spaces X and Y, if I : A Y be Frechet Differentiable at some x 0 A , then, corresponding to the Frechet Derivative of I at x 0 defined as, D I ( x 0 ) satisfying (5), we write I ( x 0 ) to be the derivative of I at x 0 .
Definition 4. 
A function, I : A Y is defined to be differentiable on A if it is differentiable at every point of A. Furthermore, I C 1 ( A ; Y ) if, I : A B ( X ; Y ) is continuous.
Example 2. 
  • Suppose (X, ⟨·,·⟩) is a Hilbert Space and I : X → R is defined as I(x) = ⟨x, x⟩. Then I ∈ C^1(X; R) with
    I′(x)y = 2⟨x, y⟩, ∀ x, y ∈ X.
  • Suppose f ∈ H^{−1}(Ω). Define J : H_0^1(Ω) → R as
    J(u) := (1/2) ∫_Ω |∇u|^2 − ∫_Ω f u.
    We can in fact conclude that J ∈ C^1(H_0^1(Ω); R). Furthermore,
    J′(u)ϕ = ∫_Ω ∇u·∇ϕ − ∫_Ω f ϕ, ∀ u, ϕ ∈ H_0^1(Ω).
We can comment on some important properties of the derivative as follows.
  • (Chain Rule) A priori given Banach Spaces X, Y and Z, and non-empty open sets A ⊆ X, B ⊆ Y, suppose I : A → Y and J : B → Z are functions satisfying I(A) ⊆ B. If I and J are differentiable at x_0 ∈ A and at I(x_0) respectively, then J∘I : A → Z is differentiable at x_0, and
    (J∘I)′(x_0) = J′(I(x_0)) ∘ I′(x_0).
  • (Mean Value Theorem) A priori given a differentiable function I : A → Y and x_0, x_1 ∈ A, let us define
    [x_0, x_1] = { λx_0 + (1 − λ)x_1 : 0 ≤ λ ≤ 1 }
    to be a line segment contained in A. Then,
    ‖I(x_1) − I(x_0)‖ ≤ sup_{x ∈ [x_0, x_1]} ‖I′(x)‖ · ‖x_1 − x_0‖.
  • (Taylor's Formula) A priori given a function I : A → Y differentiable at some x ∈ A, let us define
    R(h) := I(x + h) − I(x) − I′(x)(h).
    Then, by (5), the Remainder Term R(h) satisfies
    lim_{h→0} ‖R(h)‖ / ‖h‖ = 0, i.e., R(h) = o(‖h‖).
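As a concrete, purely illustrative check of the last property (the map, the base point and the direction below are my own choices), the quotient ‖R(h)‖/‖h‖ can be observed to decay linearly in ‖h‖ for a smooth map on R^2:

# Taylor remainder check for I(x, y) = x^2 + sin(y) at (1, 0), where DI(1,0)(h1, h2) = 2 h1 + h2
import numpy as np

def I(p):
    return p[0]**2 + np.sin(p[1])

def DI(p, h):
    return 2 * p[0] * h[0] + np.cos(p[1]) * h[1]

p0 = np.array([1.0, 0.0])
direction = np.array([0.6, 0.8])          # a fixed unit vector
for t in [1e-1, 1e-2, 1e-3, 1e-4]:
    h = t * direction
    R = I(p0 + h) - I(p0) - DI(p0, h)
    print(f"||h|| = {t:.0e},  |R(h)| / ||h|| = {abs(R) / t:.3e}")   # ratio -> 0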
Theorem 2. 
Suppose Ω ⊂ R^n is bounded and open, and p > 1. Moreover, let g : R → R be a C^1-function satisfying
  • |g(t)| ≤ a + b|t|^p,
  • |g′(t)| ≤ a + b|t|^{p−1}
for some constants a, b. Also, define
I(u) := ∫_Ω g(u(x)) dx.
Then I ∈ C^1(L^p(Ω); R), and for every u ∈ L^p(Ω),
I′(u)ϕ = ∫_Ω g′(u)ϕ, ∀ ϕ ∈ L^p(Ω).
Corollary 1. 
Suppose f : R → R is a continuous function satisfying
|f(t)| ≤ a + b|t|^p
for some 1 < p ≤ (n + 2)/(n − 2). Moreover, let F(t) = ∫_0^t f(s) ds be the primitive of f. If I : H_0^1(Ω) → R is defined by
I(u) := ∫_Ω F(u(x)) dx,
then I ∈ C^1(H_0^1(Ω); R), and
I′(u)ϕ = ∫_Ω f(u)ϕ   ∀ ϕ ∈ H_0^1(Ω).
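The identity I′(u)ϕ = ∫_Ω f(u)ϕ can be checked numerically in a simple setting. The sketch below (purely illustrative; Ω = (0, 1), f(t) = t^3 and the particular u, ϕ and trapezoidal quadrature are my own choices) compares a central difference quotient of t ↦ I(u + tϕ) with the integral ∫ f(u)ϕ:

# directional-derivative check of I(u) = \int F(u) with f(t) = t^3, F(t) = t^4 / 4
import numpy as np

x = np.linspace(0.0, 1.0, 2001)
u = np.sin(np.pi * x)                  # a sample element (vanishes at 0 and 1)
phi = x * (1 - x)                      # a sample test function

def trap(g):                           # simple trapezoidal rule on [0, 1]
    return np.sum((g[1:] + g[:-1]) / 2) * (x[1] - x[0])

def I(v):
    return trap(v**4 / 4)

t = 1e-6
numerical = (I(u + t * phi) - I(u - t * phi)) / (2 * t)   # central difference
analytic = trap(u**3 * phi)                               # I'(u)(phi) per Corollary 1
print(numerical, analytic)   # the two numbers should agree to several digits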

2. Critical Points

In this section, we assume everywhere unless mentioned otherwise, that, X is a Banach Space, and A X .
Definition 5. 
(Minima of a Function) Given a function I : A → R, we define a point x_0 ∈ A to be a local minimum (resp. strict local minimum) of I if ∃ a neighbourhood U(x_0) of x_0 in X satisfying
I(x_0) ≤ I(x)   ∀ x ∈ U(x_0) ∩ A,
resp.,
I(x_0) < I(x)   ∀ x ∈ U(x_0) ∩ A ∖ {x_0}.
On the other hand, x_0 is defined to be a global minimum of I if
I(x_0) ≤ I(x)   ∀ x ∈ A.
In that case, we write I(x_0) = min_{x∈A} I(x).
Definition 6. 
(Maxima of a Function) Given a function I : A → R, we define a point x_0 ∈ A to be a local maximum (resp. strict local maximum) of I if ∃ a neighbourhood U(x_0) of x_0 in X satisfying
I(x_0) ≥ I(x)   ∀ x ∈ U(x_0) ∩ A,
resp.,
I(x_0) > I(x)   ∀ x ∈ U(x_0) ∩ A ∖ {x_0}.
On the other hand, x_0 is defined to be a global maximum of I if
I(x_0) ≥ I(x)   ∀ x ∈ A.
In that case, we write I(x_0) = max_{x∈A} I(x).
Remark 4. 
Assume A ⊆ X to be open, and let I : A → R be differentiable on A. Then
I′(x_0) = 0
provided x_0 ∈ A is a local minimum (or a local maximum) of I.
Definition 7. 
(Critical Point) Suppose A ⊆ X is open and I : A → R is differentiable on A. A point x_0 ∈ A is defined to be a critical point of I if I′(x_0) = 0. Subsequently, I(x_0) ∈ R is called the critical value.
It is evident from the definition that local minima and local maxima of a differentiable function I on an open subset A ⊆ X are indeed critical points of I. However, the converse of this result is not true.
For example, take X = R^2 and I(x, y) = x^2 − y^2. Then ∇I(x, y) = (2x, −2y), and ∇I(0, 0) = 0. Thus (0, 0) is a critical point of I, although it is neither a local minimum nor a local maximum of I. This follows from the fact that
I(0, 0) = 0,   I(x, 0) = x^2 > 0 for x ≠ 0,
I(0, y) = −y^2 < 0 for y ≠ 0.
Remark 5. 
Critical points exhibiting this behaviour are termed Saddle Points.
Definition 8. 
(Saddle Point) Suppose, A X be open, and I : A R be differentiable on A. We define a critical point x 0 A of I to be a saddle point of I if, for every neighbourhood U ( x 0 ) of x 0 , ∃ x 1 , x 2 U ( x 0 ) satisfying,
I ( x 1 ) < I ( x 0 ) < I ( x 2 )
Definition 9. 
(Convex Function) A function I : X → R is defined to be convex if, ∀ x, y ∈ X,
I(tx + (1 − t)y) ≤ tI(x) + (1 − t)I(y), ∀ t ∈ (0, 1).
Example 3. 
Consider, X = R . Then, the function, f : R R defined as, f ( x ) = e x is convex everywhere on R .
Remark 6. 
In case when, I is convex on X, the critical points of I are in fact global minima.
Proposition 1. 
For a convex and differentiable function I : X → R, if x_0 ∈ X is a critical point of I, then
I(x_0) ≤ I(x)   ∀ x ∈ X,
i.e., I(x_0) = min_{x∈X} I(x).
Proposition 2. 
Suppose I : X → R is a differentiable function satisfying the following condition,
I(y) ≥ I(x) + I′(x)(y − x)   ∀ x, y ∈ X.
Then, I is convex.
Proof. 
To prove the result, we'll use the definition (9) of convexity.
Fix x, y ∈ X and λ ∈ [0, 1], and consider z = λx + (1 − λ)y.
Applying the given condition with base point z,
I(x) ≥ I(z) + I′(z)(x − z)   and   I(y) ≥ I(z) + I′(z)(y − z).
Multiplying the first inequality by λ, the second by (1 − λ), and adding,
λI(x) + (1 − λ)I(y) ≥ I(z) + I′(z)( λ(x − z) + (1 − λ)(y − z) ) = I(z),
since λ(x − z) + (1 − λ)(y − z) = λx + (1 − λ)y − z = 0.
Hence I(λx + (1 − λ)y) ≤ λI(x) + (1 − λ)I(y) for every λ ∈ [0, 1], i.e., I is convex. □
Before we investigate the existence of critical points of a function, one needs to justify that, the definitions (5) and (6) of minima and maxima respectively are well-defined.
Definition 10. 
A priori, given X to be a Hausdorff topological space, a function I : X → R is defined to be lower semi-continuous if, ∀ c ∈ R, the set { x ∈ X | I(x) ≤ c } is closed.
Proposition 3. 
(i) 
Every continuous function is lower semi-continuous.
(ii) 
If A X is open, then, χ A is lower semi-continuous.
Proof. 
(i) 
We intend to show that for any continuous function f : X R , where X is a topological space, the set { x X : f ( x ) > α } = A (say) is open for every α R . This implies that for any point x 0 in X, there exists a neighborhood U of x 0 such that f ( x ) > α for all x in U.
Let α R be arbitrarily chosen. For any point x 0 A , since f ( x 0 ) > α , by the continuity of f, ∃ a neighborhood U x 0 of x 0 such that f ( x ) > α x U x 0 .
Thus, for any x 0 A , there exists a neighborhood U x 0 of x 0 contained in A, implying A is open.
Therefore, every continuous function f : X R is lower semi-continuous.
(ii) 
To prove that the indicator function χ_A of an open set A ⊆ X is lower semi-continuous, it suffices to show that the set { x ∈ X : χ_A(x) > α } is open for every α ∈ R.
Formally, let χ_A be the indicator function of A, defined as:
χ_A(x) = 1 if x ∈ A, 0 if x ∉ A.
For α < 0 the set { χ_A > α } is all of X; for 0 ≤ α < 1 it equals A, which is open by assumption; and for α ≥ 1 it is empty. In every case the set is open.
Therefore, χ_A of an open set A ⊆ X is indeed lower semi-continuous.
Remark 7. 
A function which is lower semi-continuous need not be continuous. For example, we can consider X = R and A = (a, b) ⊂ R for any a, b ∈ R with a < b. Then χ_A is indeed lower semi-continuous on R (by the above proposition), although it is not continuous at the points a and b in R.
Theorem 3. 
A priori, let X be a compact Hausdorff topological space, and let I : X → R be lower semi-continuous. Then I is bounded below, and ∃ x_0 ∈ X such that
I(x_0) = min_{x∈X} I(x).
Proof. 
I being lower semi-continuous implies that the set A_n = { x ∈ X | I(x) > −n } is open ∀ n ∈ N. Moreover, X = ∪_{n=1}^∞ A_n, and X is compact. Hence we must have X = ∪_{n=1}^{n_0} A_n = A_{n_0} for some n_0 ∈ N, the sets A_n being increasing.
As a consequence, I(x) > −n_0 ∀ x ∈ X. Thus I is bounded below.
Let us write l := inf_X I (> −∞). We claim that this infimum is attained. If not, then I(x) > l for every x ∈ X, so the collection {B_n}_{n=1}^∞, where B_n := { x ∈ X | I(x) > l + 1/n }, is an open cover of X:
X = ∪_{n=1}^∞ B_n.
A priori from the compactness of X, ∃ n_1 ∈ N satisfying X = ∪_{n=1}^{n_1} B_n = B_{n_1}, the sets B_n being increasing.
Subsequently, I(x) > l + 1/n_1 ∀ x ∈ X, a contradiction to the fact that l = inf_X I. □
For the ease of computation, we introduce the notion of sequentially lower semi-continuous.
Definition 11. 
Given a Hausdorff topological space X, a function, I : X R is defined to be sequentially lower semi-continuous if, for every sequence { x n } tending to x in X,
I(x) ≤ lim inf_{n→∞} I(x_n).
Proposition 4. 
For a Hausdorff topological space X and a function I : X R , the following holds true:
(1)
I is lower semi-continuous ⇒I is sequentially lower semi-continuous.
(2)
The converse holds true when X is a metric space.
Proof. 
(1)
Given I : X → R lower semi-continuous, suppose (x_n) is a sequence in X converging to x. We wish to show that I(x) ≤ lim inf_{n→∞} I(x_n).
Since I is lower semi-continuous, for any ϵ > 0 , ∃ a neighborhood U of x such that I ( y ) > I ( x ) ϵ y U .
Since x n x , ∃ N N such that ∀ n N , x n U .
Thus, for every n N , I ( x n ) > I ( x ) ϵ .
Taking n ,
lim inf n I ( x n ) I ( x ) ϵ
Since ϵ was arbitrary, we conclude:
I ( x ) lim inf n I ( x n )
Therefore, I is sequentially lower-semi continuous, and the proof is thus complete.
(2)
Assume X is a metric space and I : X → R is sequentially lower semi-continuous, and let α ∈ R be arbitrary. It suffices to establish that the set A := { x ∈ X : I(x) ≤ α } is closed in X.
Let x_0 be any point in the closure of A. Since X is a metric space, ∃ a sequence (x_n) in A with x_n → x_0.
Since I is sequentially lower semi-continuous, I(x_0) ≤ lim inf_{n→∞} I(x_n).
Since x_n ∈ A for every n, we have I(x_n) ≤ α, and hence lim inf_{n→∞} I(x_n) ≤ α.
As a result, I(x_0) ≤ α, i.e., x_0 ∈ A, which implies that A is closed in X.
Since α was arbitrary, this holds for all α ∈ R. Therefore, we can conclude that I is lower semi-continuous.
Remark 8. 
A sequentially lower semi-continuous function on a non-metrizable space may not be lower semi-continuous. Consider the example where X = R is equipped with the co-countable topology, whose open sets are ∅ and the sets with countable complement.
Define the function f : X → R as f = χ_Q, the indicator function of the rationals.
Consider any sequence (x_n) in X converging to x. The set U = X ∖ { x_n : x_n ≠ x } has countable complement, hence is an open neighbourhood of x, so x_n = x for all sufficiently large n. Therefore lim inf_{n→∞} f(x_n) = f(x), which makes every function on X, and in particular f, sequentially lower semi-continuous.
On the other hand, the set { x ∈ X : f(x) ≤ 0 } = R ∖ Q is neither countable nor all of R, hence is not closed in the co-countable topology. Therefore f is not lower semi-continuous.
Proposition 5. 
Given a Hausdorff topological space X and I : X → R, suppose that, for every c ∈ R,
the set { x ∈ X : I(x) ≤ c } is compact.
Then, I is bounded below, and, ∃ x 0 X satisfying, I ( x 0 ) = inf x X I ( x ) .
Proof. 
For each c ∈ R, the sub-level set A_c := { x ∈ X : I(x) ≤ c } is compact, and hence closed, X being Hausdorff; moreover, the family {A_c} decreases as c decreases.
Suppose I is not bounded below. Then A_{−n} ≠ ∅ for each n ∈ N, and {A_{−n}}_{n∈N} is a decreasing family of non-empty compact sets. By the finite intersection property, ∩_{n∈N} A_{−n} ≠ ∅, and any point x in this intersection satisfies I(x) ≤ −n for every n, which is impossible since I is real-valued. Hence I must be bounded below.
Let c = inf_{x∈X} I(x). For each n ∈ N, the set A_{c+1/n} is non-empty (by the definition of the infimum) and compact, and the family {A_{c+1/n}}_{n∈N} is decreasing. Again by the finite intersection property, ∃ x_0 ∈ ∩_{n∈N} A_{c+1/n}. Then I(x_0) ≤ c + 1/n for every n, so I(x_0) ≤ c; and since c is the infimum, I(x_0) = c. Therefore x_0 is the desired point. □
Remark 9. 
Consider X = R and define a function I : R → R as I(x) = e^x. We can observe that I is indeed smooth and bounded below, but I does not achieve its infimum. In view of the preceding results, some compactness condition, either on the space or on the function, is required in order to guarantee that the infimum is attained.
For an infinite dimensional Banach Space X, such compactness is not available in the norm topology. However, a sufficient amount of compactness to ensure the attainment of the infimum can often be recovered in a weaker topology on X.
In the next section, we shall learn more about a weaker topology called weak topology on X as compared to the norm topology defined on it.

3. Weak Topology on Banach Spaces

3.1. Weak Convergence

Suppose we consider ( X , | | . | | ) to be a Banach Space with X * as its dual. Moreover, τ be the metric topology on X induced by the norm | | . | | , having the following definition, d ( x , y ) = | | x y | | , x , y X .
We intend to define the weakest topology  τ w on X as follows : “Every functional f X * is continuous on X with respect to the topology τ w on X”. The topology τ w thus formed is defined as the Weak Topology on X. As for convergence of any sequence under weak topology, we provide the following definition.
Definition 12. 
(Weak Convergence) A sequence { x n } n = 1 in X is said to converge to x X weakly, and is denoted by, x n x , if,
f ( x n ) f ( x ) f X *
Remark 10. 
If a sequence { x n } n = 1 converges to x with respect to the norm | | . | | (i.e., | | x n x | | 0 as n ), we assert that, x n converges to x strongly in X, in other words, x n x .
Proposition 6. 
We can accumulate some important properties of weak convergence of sequences as follows:
  • x_n ⇀ x in X ⟹ the weak limit x is unique.
  • x_n → x in X ⟹ x_n ⇀ x. The converse is not true in general.
  • x_n ⇀ x ⟹ {‖x_n‖}_{n=1}^∞ is bounded, and ‖x‖ ≤ lim inf_{n→∞} ‖x_n‖.
    This implies that ‖·‖ : X → R is weakly sequentially lower semi-continuous.
  • Suppose X is reflexive and ‖x_n‖ ≤ M ∀ n ∈ N for some M > 0. Then ∃ x_0 ∈ X and a subsequence {x_{n_k}}_{k=1}^∞ of {x_n}_{n=1}^∞ in X such that x_{n_k} ⇀ x_0.
  • Let Y be another Banach Space and T ∈ B(X, Y). Then,
    • x_n ⇀ x ⟹ Tx_n ⇀ Tx.
    • If T is compact and x_n ⇀ x, then Tx_n → Tx in Y.
  • If X is a Hilbert Space, then x_n ⇀ x and ‖x_n‖ → ‖x‖ ⟹ x_n → x.
Proof. 
  • Suppose x_n ⇀ x and x_n ⇀ y. Then, for every f ∈ X*, f(x_n) → f(x) and f(x_n) → f(y). Since {f(x_n)} is a sequence of numbers, its limit is unique, i.e., f(x) = f(y), i.e., for every f ∈ X* we have f(x − y) = 0.
    Therefore, using Corollary (4.3.4) [ref. [24], Pg 223], we conclude x − y = 0, and thus the weak limit is indeed unique.
  • By definition, x_n → x means ‖x_n − x‖ → 0. For every f ∈ X*,
    |f(x_n) − f(x)| = |f(x_n − x)| ≤ ‖f‖·‖x_n − x‖ → 0,
    implying that x_n ⇀ x.
    The converse is not true in general.
    Consider the sequence {x_n} with x_n = e_n, n ∈ N, in ℓ^2, where e_n is the sequence whose n-th term is 1 and all other terms are 0.
    Let f be any bounded linear functional on ℓ^2. By the Riesz Representation Theorem, ∃ y = (y_k) ∈ ℓ^2 such that f(x) = ⟨x, y⟩ for every x ∈ ℓ^2. Hence f(e_n) = y_n → 0 as n → ∞, since Σ_k |y_k|^2 < ∞. Therefore e_n ⇀ 0.
    For strong convergence we would need ‖e_n − 0‖ → 0; however, ‖e_n‖ = 1 for all n.
    Therefore, the sequence {e_n} in ℓ^2 converges weakly to 0 but does not converge strongly (a small numerical illustration of this sequence is sketched after the proof).
  • Given, x n x . Thus, f ( x n ) f ( x ) f X * { f ( x n ) } is a convergent sequence of numbers, hence is bounded.
    Let, | f ( x n ) | c f n N , where, c f is a constant depending on f, but not on n. Using the canonical mapping, C : X X * * (ref. ( 5 ) of Sec. 4.6 [24]), where X * * denotes the double dual of X, we can in fact define, g n X * * by,
    g n ( f ) = f ( x n ) , f X *
    Then,
    | g n ( f ) | = | f ( x n ) | c f n N .
    Implying that, the sequence, { g n ( f ) } is bounded for every f X * . Since, X * is complete, by ( 2.10 . 4 ) [Ref. [24], we can apply the Uniform Boundedness Theorem [Ref. [24] to conclude that, { | | g n | | } is bounded.
    Now, | | g n | | = | | x n | | by ( 4.6 . 1 ) (Ref. [24]) helps us conclude that, { | | x n | | } is bounded.
    As for the second part, if, x = 0 , then, | | x | | = 0 , and the statement is obviously true. Now, we assume, | | x | | 0 . By Theorem ( 4.3 . 3 ) (Ref. [24]), ∃ some f X * such that,
    | | f | | = 1 , f ( x ) = | | x | |
    Since, { x n } converges weakly to x, and, f is indeed continuous, we have,
    lim n f ( x n ) = f ( x ) = | | x | | .
    But, f ( x n ) | f ( x n ) | | | f | | . | | x n | | = | | x n | | . Hence,
    lim inf n | | x n | | lim n f ( x n ) = | | x | | .
  • To prove this statement, we can utilize the Eberlein–Šmulian Theorem, which states that in a reflexive Banach space, every bounded sequence has a weakly convergent subsequence.
    Since X is reflexive, every bounded sequence in X has a weakly convergent subsequence.
    Let { x n } n = 1 be a bounded sequence in X, i.e., x n M for all n N . By the Eberlein–Šmulian Theorem, ∃ a subsequence { x n k } k = 1 of { x n } n = 1 such that x n k x 0 in X.
    Therefore, x n k x 0 weakly, where x 0 X and { x n k } k = 1 is a subsequence of { x n } n = 1 . The proof is thus complete.
    • Given, x n x in X, thus, f ( x n ) f ( x ) for every f X * .We intend to show that,
      φ ( T ( x n ) ) φ ( T ( x ) ) φ Y *
      In other words,
      ( φ T ) ( x n ) ( φ T ) ( x ) φ Y *
      Although, φ T X * , therefore, our hypothesis guarantees our desired conclusion.
    • Suppose T is compact and x_n ⇀ x in X; we show Tx_n → Tx in Y. Since x_n ⇀ x, the sequence {x_n} is bounded, and by the previous item Tx_n ⇀ Tx in Y.
      Suppose, for contradiction, that Tx_n does not converge strongly to Tx. Then ∃ ε > 0 and a subsequence {x_{n_k}} with ‖Tx_{n_k} − Tx‖ ≥ ε for every k. Since {x_{n_k}} is bounded and T is compact, {Tx_{n_k}} has a further subsequence {Tx_{n_{k_j}}} converging strongly to some y ∈ Y.
      Strong convergence implies weak convergence, and Tx_{n_{k_j}} ⇀ Tx, so by the uniqueness of weak limits y = Tx. This contradicts ‖Tx_{n_{k_j}} − Tx‖ ≥ ε. Hence Tx_n → Tx in Y, and the statement holds true.
  • To prove this statement, let X be a Hilbert space, and suppose x n x weakly in X. Also, assume that x n x .
    Since x n x weakly, for any y X , we have x n , y x , y .
    Now, consider the sequence y n = x n x . We have:
    y n 2 = y n , y n = x n x , x n x = x n 2 2 x n , x + x 2
    Given that x n x , and x n , y x , y for any y, it follows that x n , x x , x .
    Thus, y n 2 = x n 2 2 x n , x + x 2 x 2 2 x , x + x 2 = 0 as n .
    This implies that y n 0 . But y n = x n x , so we have x n x in X.
    Therefore, if X is a Hilbert space, x n x weakly and x n x , then x n x strongly in X.
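The weakly-but-not-strongly convergent sequence {e_n} from the second item can be visualised with a short computation (illustrative only; the truncation level N and the fixed element y = (1, 1/2, 1/3, …) are arbitrary choices):

# e_n ⇀ 0 in l^2: pairings with a fixed y tend to 0 while the norm stays 1
import numpy as np

N = 10_000
y = 1.0 / np.arange(1, N + 1)          # a fixed element of l^2 (truncated)

for n in [1, 10, 100, 1000]:
    e_n = np.zeros(N)
    e_n[n - 1] = 1.0
    print(f"n = {n:5d}:  <e_n, y> = {np.dot(e_n, y):.4f},  ||e_n|| = {np.linalg.norm(e_n):.1f}")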

3.2. Existence of Minima

Theorem 4. 
Given areflexiveBanach Space ( X , | | . | | ) , suppose, A be a weakly sequentially closed subset of X. Define, I : A R satisfying the following:
(I) 
(Compactness Condition) I is coercive on A, i.e.,
I ( u ) a s | | u | | , u A .
(II) 
I is weakly sequentially lower semi-continuous on A, i.e., if u n , u A with u n u in X, then,
I ( u ) lim inf n I ( u n ) .
Then, I is indeed bounded below, and, u 0 A such that,
I ( u 0 ) = min u A I ( u ) .
Proof. 
Let us denote, l : = i n f I ( u ) | u A . We intend on proving that, l > and, I ( u 0 ) = l for some u 0 A . Suppose, { u n } A satisfying, I ( u n ) l .
A priori using the fact that, I is coercive on A, M > 0 such that, | | u n | | M n N .
Since, X is reflexive, ∃ a subsequence { u n k } k = 1 of { u n } n = 1 with u n k u 0 in X for some u 0 X .
Moreover, A being weakly sequentially closed, we thus conclude that, u 0 A . Also, I is given to be weakly sequentially lower semi-continuous on A, which yields,
l I ( u 0 ) lim inf n I ( u n k ) = l = inf u A I ( u ) .
And the proof is complete. □

3.3. Applications of the Existence of Minima

3.3.1. Application in Linear PDEs

We can in fact utilize Theorem (4) in order to find solutions to linear partial differential equations in the following manner.
Theorem 5. 
Suppose Ω ⊂ R^n is a bounded domain. For every f ∈ L^2(Ω) (more generally, f ∈ H^{−1}(Ω)), ∃ a weak solution u_0 ∈ H_0^1(Ω) of the following problem,
−Δu = f in Ω,  u = 0 on ∂Ω.
In other words, for every ϕ ∈ H_0^1(Ω),
∫_Ω ∇u_0·∇ϕ = ∫_Ω f ϕ.
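As a sketch of how Theorem 5 can be realised in practice (under the same ad hoc one-dimensional discretisation used earlier, with f ≡ 1; the step size and iteration count below are arbitrary choices), plain gradient descent on the discretised energy J_h converges to the same vector as the direct linear solve:

# minimising J_h(u) = 1/2 u^T A u - b^T u recovers the weak solution
import numpy as np

n, h = 99, 1.0 / 100
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
b = np.ones(n)

u = np.zeros(n)
step = 0.4 * h**2            # safe step size: the largest eigenvalue of A is below 4 / h**2
for _ in range(50_000):
    u -= step * (A @ u - b)  # gradient of J_h

u_direct = np.linalg.solve(A, b)
print("difference between the two solutions:", np.abs(u - u_direct).max())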

3.3.2. Constrained Minimization

A priori given a real Hilbert Space H and f, g ∈ C^1(H, R), let us define G := { u ∈ H : g(u) = 0 }. Moreover, let g′(u) ≠ 0 ∀ u ∈ G (this implies that G is a manifold of co-dimension 1).
An important observation is that the gradient of g, denoted ∇g(u), is in fact normal to G at u. Correspondingly, the tangent space T_u at u ∈ G is defined as,
T_u := { v ∈ H : ⟨∇g(u), v⟩ = 0 }.
Definition 13. 
A point u_0 ∈ G is defined to be a critical point of f|_G, i.e., (f|_G)′(u_0) = 0, if
f′(u_0)(v) = 0   ∀ v ∈ T_{u_0}.
T_{u_0} being of co-dimension 1, we conclude that f′(u_0) = μ g′(u_0) for some μ ∈ R.
The number μ above is defined to be the Lagrange Multiplier.
Proposition 7. 
If u 0 G and, f ( u 0 ) = min u G { f ( u ) } G , then, we have,
( f | G ) ( u 0 ) = 0 .

3.3.3. Application in Non-Linear PDEs

Assume Ω ⊂ R^n is a bounded domain. We choose λ ∈ R and 1 < p ≤ (n + 2)/(n − 2). We intend to obtain a weak solution of the following non-linear Dirichlet boundary value problem,
−Δu = |u|^{p−1}u + λu in Ω,  u ≥ 0, u ≢ 0 in Ω,  u = 0 on ∂Ω. (21)
In other words, ∀ ϕ ∈ H_0^1(Ω),
∫_Ω ∇u_0·∇ϕ = ∫_Ω ( |u_0|^{p−1}u_0 ϕ + λ u_0 ϕ ). (22)
Taking ϕ = u_0 in (22),
∫_Ω |∇u_0|^2 = ∫_Ω ( |u_0|^{p+1} + λ u_0^2 ).
Applying the Sobolev Embedding Theorem, we obtain the embedding H_0^1(Ω) ↪ L^q(Ω) for every q ≤ 2n/(n − 2). Choosing q = p + 1 shows that every term appearing in (22) is well defined for 1 < p ≤ (n + 2)/(n − 2).
Suppose ϕ ∈ H_0^1(Ω) with ϕ ≥ 0 is the eigenfunction corresponding to the first Dirichlet Eigenvalue λ_1(Ω) of −Δ, which has the following variational characterisation,
λ_1(Ω) := inf_{ϕ ∈ H_0^1(Ω) ∖ {0}} ( ∫_Ω |∇ϕ|^2 / ∫_Ω ϕ^2 ).
The eigenfunction then satisfies
−Δϕ = λ_1(Ω) ϕ in Ω,  ϕ = 0 on ∂Ω.
Proposition 8. 
For 1 < p ≤ (n + 2)/(n − 2) and λ ≥ λ_1(Ω), the problem (21) admits no solution.
The Sobolev Embedding Theorem ensures the existence of an embedding H_0^1(Ω) ↪ L^{p+1}(Ω), which is compact for every p < (n + 2)/(n − 2). We shall comment later on the case p = (n + 2)/(n − 2), n ≥ 3.
Theorem 6. 
Assume 1 < p < (n + 2)/(n − 2). For every λ ∈ (−∞, λ_1(Ω)), the Dirichlet boundary value problem (21) admits a weak solution. In other words, ∃ u_0 ∈ H_0^1(Ω) satisfying
∫_Ω ∇u_0·∇ϕ = ∫_Ω ( |u_0|^{p−1}u_0 ϕ + λ u_0 ϕ ) (25)
for every ϕ ∈ H_0^1(Ω).
Proof. 
Proof of the above result primarily hinges on two claims. First, we shall introduce some notations.
We define, J : H 0 1 ( Ω ) R as,
J ( u ) : = 1 2 Ω | u | 2 1 p + 1 Ω | u | p + 1 λ 2 Ω u 2 .
Moreover, for g C 1 ( R ) given by, g ( t ) : = | t | p 1 t + λ t , and, G ( t ) : = 0 t g ( s ) d s , we can in fact infer that,
G ( u ) = 1 p + 1 Ω | u | p + 1 + λ 2 Ω u 2
and, also, G C 1 ( H 0 1 ( Ω ) , R ) . Thus, J C 1 ( H 0 1 ( Ω ) , R ) as well and also for every u H 0 1 ( Ω ) and ϕ H 0 1 ( Ω ) ,
J ( u ) ϕ = Ω u ϕ Ω | u | p 1 u ϕ λ Ω u ϕ .
This suggests that if u_0 ∈ H_0^1(Ω) with J′(u_0) = 0, then u_0 satisfies (25). Hence it suffices to look for critical points of J.
Choose 0 ≢ u_1 ∈ H_0^1(Ω). Then, for any t ∈ R, tu_1 ∈ H_0^1(Ω) as well, and
J(tu_1) = (t^2/2) ∫_Ω |∇u_1|^2 − (t^{p+1}/(p + 1)) ∫_Ω |u_1|^{p+1} − (λt^2/2) ∫_Ω u_1^2.
A priori from the fact that J is unbounded below on H_0^1(Ω), in other words J(tu_1) → −∞ as t → ∞, we conclude that J does not admit a minimum on H_0^1(Ω).
To find critical points of J, we denote,
A : = u H 0 1 ( Ω ) : Ω | u | p + 1 = 1
and, I : A R as,
I ( u ) : = 1 2 Ω | u | 2 λ 2 Ω u 2 .
As a result, using Theorem (4), we state our first claim.
Lemma 1. 
For any λ ∈ (−∞, λ_1(Ω)), I is bounded below on A. Further, ∃ u_λ ∈ A satisfying
I(u_λ) = min_{u∈A} I(u).
On the other hand, considering
g(u) := ∫_Ω |u|^{p+1} − 1,
we have A = { u ∈ H_0^1(Ω) : g(u) = 0 }. Hence, applying the concept of constrained minimization as mentioned in Definition (13), we can in fact establish our second claim.
Lemma 2. 
For every λ ∈ (−∞, λ_1(Ω)), ∃ c_λ > 0 such that
J′(c_λ u_λ) = 0.
In this case, we can in fact deduce that c_λ = ( μ(p + 1) )^{1/(p−1)} for some μ ∈ R^+.
Therefore, ũ_λ := c_λ u_λ is a weak solution of (21). Furthermore, replacing u_λ by |u_λ| if necessary (which changes neither I(u_λ) nor the constraint), we may take u_λ ≥ 0, u_λ ≢ 0, implying ũ_λ ≥ 0, ũ_λ ≢ 0, and the proof is complete. □
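For the reader's convenience, here is a short computation (a sketch, assuming, as in Lemma 2, that u_λ is the constrained minimiser with associated Lagrange multiplier μ > 0) indicating where the value of c_λ comes from. Constrained minimisation gives I′(u_λ) = μ g′(u_λ), i.e.,
\[
\int_\Omega \nabla u_\lambda\cdot\nabla\phi - \lambda\int_\Omega u_\lambda\phi \;=\; \mu\,(p+1)\int_\Omega |u_\lambda|^{p-1}u_\lambda\,\phi \qquad \forall\,\phi\in H_0^1(\Omega),
\]
so that, for any c > 0,
\[
J'(c\,u_\lambda)\phi \;=\; c\Big(\int_\Omega \nabla u_\lambda\cdot\nabla\phi - \lambda\int_\Omega u_\lambda\phi\Big) - c^{p}\int_\Omega |u_\lambda|^{p-1}u_\lambda\,\phi \;=\; \big(c\,\mu(p+1) - c^{p}\big)\int_\Omega |u_\lambda|^{p-1}u_\lambda\,\phi .
\]
This vanishes for every test function precisely when c^{p−1} = μ(p+1), i.e., for c = c_λ = (μ(p+1))^{1/(p−1)}.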
Remark 11. 
For p = (n + 2)/(n − 2), the embedding H_0^1(Ω) ↪ L^{p+1}(Ω) is not compact. In this scenario, the corresponding Dirichlet boundary value problem (21) is termed a problem with lack of compactness, or a critical exponent problem. As a result, the set A defined above in our second claim need not be weakly sequentially closed, and thus the desired result is not achieved. In fact, the existence of a weak solution in this case depends strictly on the choice of λ and on the geometry of Ω.
Theorem 7. 
(Pohozaev's Identity) A priori given a locally Lipschitz function f : R → R and a bounded domain Ω ⊂ R^n with smooth boundary, let u ∈ C^2(Ω) ∩ C^1(Ω̄) satisfy
−Δu = f(u) in Ω,  u = 0 on ∂Ω.
Define F(t) := ∫_0^t f(s) ds. Suppose also that ν(x) is the unit outward normal at x ∈ ∂Ω and u_ν = ∇u·ν. Under these assumptions, we have the following identity,
2n ∫_Ω F(u) − (n − 2) ∫_Ω f(u) u = ∫_{∂Ω} (x·ν) u_ν^2.
As an important derivation from the above identity, we might infer the following.
Corollary 2. 
Assuming Ω = B(R), a ball of radius R, the following Dirichlet problem,
−Δu = u^{(n+2)/(n−2)} + λu in B(R),  u ≥ 0, u ≢ 0 in B(R),  u = 0 on ∂B(R),
doesn't admit any solution in C^2(Ω) ∩ C^1(Ω̄) for any λ ≤ 0.
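Pohozaev's identity can be verified directly in the simplest one-dimensional setting (an illustrative check only; the choices n = 1, Ω = (0, 1) and f ≡ 1 are mine). With f(u) = 1 we have F(t) = t, the solution of −u″ = 1, u(0) = u(1) = 0 is u(x) = x(1 − x)/2, and the identity reduces to 2∫u + ∫u = u′(1)^2, since x·ν vanishes at x = 0:

# numerical check of Pohozaev's identity for n = 1, f = 1
import numpy as np

n_dim = 1
x = np.linspace(0.0, 1.0, 100_001)
u = x * (1 - x) / 2
F_u = u                    # F(t) = t
f_u_times_u = u            # f(u) * u = u

def trap(g):
    return np.sum((g[1:] + g[:-1]) / 2) * (x[1] - x[0])

lhs = 2 * n_dim * trap(F_u) - (n_dim - 2) * trap(f_u_times_u)
u_prime_at_1 = 0.5 - 1.0   # u'(x) = 1/2 - x
rhs = 1.0 * u_prime_at_1**2
print(lhs, rhs)            # both should be 0.25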
Theorem 8. 
∃ λ_1(Ω) > 0 and ϕ_0 ∈ H_0^1(Ω), ϕ_0 ≥ 0, ϕ_0 ≢ 0, satisfying
λ_1(Ω) = ∫_Ω |∇ϕ_0|^2
and
−Δϕ_0 = λ_1(Ω) ϕ_0 in Ω,  ϕ_0 ≥ 0, ϕ_0 ≢ 0 in Ω,  ϕ_0 = 0 on ∂Ω. (30)
Furthermore, ∀ ϕ ∈ H_0^1(Ω), we must have
λ_1(Ω) ∫_Ω ϕ^2 ≤ ∫_Ω |∇ϕ|^2.
Proof. 
We define,
S := { ϕ ∈ H_0^1(Ω) : ∫_Ω ϕ^2 = 1 }
Hence,
λ 1 ( Ω ) = inf ϕ S Ω ϕ 2
By the Poincaré inequality, λ_1(Ω) > 0. We choose a minimizing sequence {ϕ_n} ⊂ S such that
∫_Ω |∇ϕ_n|^2 → λ_1(Ω),
implying that {ϕ_n} is bounded in H_0^1(Ω). Thus ∃ a subsequence {ϕ_{n_k}}_{k=1}^∞ of {ϕ_n}_{n=1}^∞ with ϕ_{n_k} ⇀ ϕ_0 in H_0^1(Ω). From Rellich's Theorem, the embedding H_0^1(Ω) ↪ L^2(Ω) is indeed compact. Hence ϕ_{n_k} → ϕ_0 in L^2(Ω), and
lim_{k→∞} ∫_Ω ϕ_{n_k}^2 = ∫_Ω ϕ_0^2.
lim k Ω ϕ n k 2 = Ω ϕ 0 2 .
Now, ϕ n k S and, Ω ϕ 0 2 = 1 ϕ 0 S .
Furthermore, | | . | | H 0 1 ( Ω ) being weakly sequentially lower semi-continuous implies,
λ 1 ( Ω ) Ω ϕ 0 2 lim inf k Ω ϕ n k 2 = λ 1 ( Ω ) .
Therefore, λ 1 ( Ω ) = Ω ϕ 0 2 .
Moreover, we may assume ϕ_0 ≥ 0 (replacing ϕ_0 by |ϕ_0| if necessary). Setting f(u) = ∫_Ω |∇u|^2 and g(u) = ∫_Ω u^2 − 1, we obtain, for u ∈ S,
g′(u)u = 2 ∫_Ω u^2 = 2 ≠ 0.
A priori from the concepts discussed in the section on Constrained Minimization, we get
2 ∫_Ω ∇ϕ_0·∇ϕ = 2μ ∫_Ω ϕ_0 ϕ   ∀ ϕ ∈ H_0^1(Ω),
μ being the Lagrange Multiplier. Setting ϕ = ϕ_0 ∈ S, we obtain λ_1(Ω) = μ. Therefore,
∫_Ω ∇ϕ_0·∇ϕ = λ_1(Ω) ∫_Ω ϕ_0 ϕ,   ∀ ϕ ∈ H_0^1(Ω).
We thus conclude that λ_1(Ω) and ϕ_0 indeed solve (30) in the weak sense.
For any, u H 0 1 ( Ω ) , u ¬ 0 , defining, v = u Ω u 2 1 / 2 , it can be observed that, v S as well, so by definition of λ 1 ( Ω ) ,
λ 1 ( Ω ) Ω v 2 = Ω | u | 2 Ω u 2 .
i.e.,
λ 1 ( Ω ) Ω u 2 Ω u 2 .
And, hence the proof is complete.
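In the simplest case Ω = (0, 1), where λ_1 = π^2 and ϕ_0(x) = sin(πx), the quantities in Theorem 8 can be approximated numerically (an illustrative sketch; the grid size is an arbitrary choice). The smallest eigenvalue of the finite-difference Laplacian approximates the infimum of the Rayleigh quotient:

# first Dirichlet eigenvalue of -d^2/dx^2 on (0, 1) via finite differences
import numpy as np

n, h = 999, 1.0 / 1000
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
eigenvalues = np.linalg.eigvalsh(A)
print("smallest discrete eigenvalue  :", eigenvalues[0])
print("pi^2                          :", np.pi**2)

# Rayleigh quotient of the exact eigenfunction sin(pi x) on the same grid
x = np.linspace(h, 1 - h, n)
phi = np.sin(np.pi * x)
print("Rayleigh quotient of sin(pi x):", phi @ (A @ phi) / (phi @ phi))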

4. Ekeland Variational Principle and its Applications

4.1. Variational Principle

Assume I ∈ C^1(X, R) to be bounded below. The Ekeland Variational Principle yields a minimizing sequence for I over X with additional properties, which in various situations enables us to conclude that the infimum of I over X is attained. This principle is especially beneficial for problems of minimax type.
Theorem 9. 
(Ekeland Variational Principle-Strong Form) Given a complete metric space (X, d) and a lower semi-continuous function I : X → R which is bounded below, for every ϵ > 0, λ > 0 and x_0 ∈ X satisfying
I(x_0) ≤ inf_X I + ϵ,
∃ x̄ ∈ X such that the following conditions hold true:
(1)
I(x̄) + (ϵ/λ) d(x̄, x_0) ≤ I(x_0).
(2)
d(x_0, x̄) ≤ λ.
(3)
I(x̄) < I(x) + (ϵ/λ) d(x, x̄), ∀ x ∈ X, x ≠ x̄.
Proof. 
Suppose, d 1 ( x , y ) : = ϵ λ d ( x , y ) . Then, ( X , d 1 ) is complete. Moreover, for x X , we define,
G ( x ) : = y X : I ( y ) + d 1 ( x , y ) I ( x ) .
The above definition yields the following:
(1)
x G ( x ) and, G ( x ) is in fact closed.
(2)
G ( y ) G ( x ) if, y G ( x ) .
(3)
For y G ( x ) , we have, d 1 ( x , y ) I ( x ) v ( x ) , where, v ( x ) : = inf z G ( x ) I ( z ) .
A priori from the fact that, I ( . ) + d 1 ( x , . ) being lower semi-continuous, and by definition, we can assert that, G ( x ) is indeed closed and, x G ( x ) .
Let, z G ( y ) I ( z ) + d 1 ( z , y ) I ( y ) I ( y ) + d 1 ( y , x ) I ( x ) . Hence, ( 2 ) follows from the triangle inequality. Subsequently, we can deduce ( 3 ) using definitions of G ( x ) and v ( x ) .
Starting from x 0 , our objective is to construct a sequence, { x n } X such that,
x n + 1 G ( x n ) such   that   , I ( x n + 1 ) v ( x n ) + 1 2 n for   n 0 .
Since, v ( x 0 ) = inf x G ( x 0 ) I ( x ) , thus, x 1 with,
x 1 G ( x 0 ) such   that , I ( x 1 ) v ( x 0 ) + 1 2 .
Now, considering v ( x 1 ) , we obtain x 2 and so on. Using the fact that, x n + 1 G ( x n ) for every n 0 , and ( 2 ) ,
G ( x n ) G ( x n + 1 ) n 0 .
We can derive that,
d i a m G ( x n + 1 ) = sup d 1 ( x , y ) : x , y G ( x n + 1 ) 0 a s n .
Now, { G ( x n ) } n 0 being a decreasing sequence of closed sets with diameter tending to 0, by Cantor’s Intersection Theorem, we get,
n = 0 G ( x n ) = { x ¯ } f o r s o m e x ¯ X .
We claim that, x ¯ is the required point satisfying all the conditions as described in the statement of the Theorem.
x ¯ G ( x 0 ) I ( x ¯ ) + d 1 ( x 0 , x ¯ ) I ( x 0 ) . Furthermore,
d 1 ( x ¯ , x 0 ) I ( x 0 ) v ( x 0 ) ϵ .
Important to observe that, G ( x ¯ ) = { x ¯ } . Then, for every x X , x x ¯ , we must have, x G ( x ¯ ) . Consequently,
I ( x ) + d 1 ( x , x ¯ ) > I ( x ¯ ) .
That completes the proof, a priori using definition, d 1 = ϵ λ d . □
For the case, λ = 1 , we can have a weaker version as follows.
Theorem 10. 
(Ekeland Variational Principle-Weak Form) Assume (X, d) to be a complete metric space, and let I : X → R be lower semi-continuous and bounded below. For every chosen ϵ > 0, ∃ x_ϵ ∈ X satisfying,
(I) 
I(x_ϵ) ≤ inf_{x∈X} I(x) + ϵ.
(II) 
I(x_ϵ) < I(x) + ϵ d(x, x_ϵ)   ∀ x ∈ X, x ≠ x_ϵ.
As a corollary, one can deduce the following.
Corollary 3. 
Suppose, X be a Banach Space, and, I C 1 ( X , R ) be bounded below. Then, ∃ a sequence, { x n } n 0 in X satisfying,
I ( x n ) inf x X I ( x ) a n d , I ( x n ) 0 i n X * .
Proof. 
Applying the weak form of Ekeland Variational Principle (Theorem (10)), a priori given any ϵ > 0 , x ϵ X satisfying,
I ( x ϵ ) inf X I + ϵ
I ( x ϵ ) < I ( x ) + ϵ | | x x ϵ | | , x X , x x ϵ .
Choose x = x_ϵ + ty, with t > 0, y ∈ X, y ≠ 0.
(31) yields,
I(x_ϵ) − I(x_ϵ + ty) < ϵ t ‖y‖.
Thus,
lim_{t→0^+} ( I(x_ϵ) − I(x_ϵ + ty) ) / t ≤ ϵ ‖y‖
⟹ −I′(x_ϵ)(y) ≤ ϵ ‖y‖, ∀ y ∈ X
⟹ |I′(x_ϵ)(y)| ≤ ϵ ‖y‖, ∀ y ∈ X (changing y to −y)
⟹ ‖I′(x_ϵ)‖ = sup_{y∈X, y≠0} |I′(x_ϵ)y| / ‖y‖ ≤ ϵ.
Now, taking ϵ = 1/n and x_n := x_{1/n}, we thus have,
inf_X I ≤ I(x_n) ≤ inf_X I + 1/n,   ‖I′(x_n)‖ ≤ 1/n.
Hence, we conclude our desired result using definition of strong convergence. □

4.2. Palais-Smale Condition

We begin with a proper definition of the above.
Definition 14. 
Assume X to be a Banach Space, and, I C 1 ( X , R ) . For any c R , we say that, I satisfies the Palais-Smale Condition at c ( ( P S ) c as abbreviation) if, every sequence { x n } n 0 X with, I ( x n ) c , I ( x n ) 0 in X * , has a convergent subsequence.
If in fact, I satisfies ( P S ) c at every c R , then, we conclude that, I indeed satisfies the Palais-Smale Condition ( ( P S ) as in short).
Theorem 11. 
X be a Banach Space, and, I C 1 ( X , R ) is bounded below. Furthermore, let, I satisfy ( P S ) c , where, c = inf X I . Then, x 0 X satisfying,
I ( x 0 ) = inf x X I ( x ) a n d , I ( x 0 ) = 0 .
Proof. 
Corollary (3) implies that, ∃ a sequence { x n } n 0 X such that, I ( x n ) inf X I = c (say), and, I ( x n ) 0 as n . Since, I satisfies the ( P S ) c , ∃ a subsequence, { x n k } k 0 of { x n } n 0 and x 0 X satisfying, x n x 0 in X.
Now, I C 1 ( X , R ) and I C ( X , X * ) both being continuous, we shall obtain,
I ( x 0 ) = lim k I ( x n k ) = c ,
and,
I ( x 0 ) = lim k I ( x n k ) = 0 .
and the result holds true. □
Corollary 4. 
Suppose, Ω R n be bounded, and, 1 q < p < ( n + 2 ) ( n 2 ) . For every λ R , we define, I : H 0 1 ( Ω ) R as,
I ( u ) : = 1 2 Ω u 2 1 p + 1 Ω | u | p + 1 λ 2 Ω | u | q + 1
Then, I satisfies the ( P S ) condition.
Corollary 5. 
Let p = (n + 2)/(n − 2), and let I be defined as in (32). Then I satisfies (PS)_c for every c < (1/n) S^{n/2}, where
S = inf { ∫_Ω |∇u|^2 : u ∈ H_0^1(Ω) and ∫_Ω |u|^{2n/(n−2)} = 1 }.
The constant S defined above is termed the Best Sobolev Constant. Moreover, I does not satisfy (PS)_c for c = (1/n) S^{n/2}.
Proof. 
  • We only sketch the argument for c < (1/n) S^{n/2}; complete details may be found in Brezis and Nirenberg [28].
    Let {u_m} ⊂ H_0^1(Ω) be a Palais-Smale sequence at level c, i.e., I(u_m) → c and I′(u_m) → 0 in H^{−1}(Ω). A standard computation involving I(u_m) − (1/2) I′(u_m)u_m shows that {u_m} is bounded in H_0^1(Ω). Hence, up to a subsequence, u_m ⇀ u in H_0^1(Ω), u_m → u in L^{q+1}(Ω) (by Rellich's Theorem, since q + 1 < 2n/(n − 2)), and u_m → u a.e. in Ω. Passing to the limit in I′(u_m)ϕ → 0 gives I′(u) = 0.
    Writing v_m := u_m − u, the Brezis-Lieb Lemma yields
    ∫_Ω |∇u_m|^2 = ∫_Ω |∇u|^2 + ∫_Ω |∇v_m|^2 + o(1),   ∫_Ω |u_m|^{p+1} = ∫_Ω |u|^{p+1} + ∫_Ω |v_m|^{p+1} + o(1).
    Combining these expansions with I(u_m) → c, I′(u_m)u_m → 0 and the definition of S, one finds that if b := lim_m ∫_Ω |∇v_m|^2 > 0, then b ≥ S^{n/2}, and consequently c ≥ (1/n) S^{n/2}, contradicting our assumption on c. Hence b = 0, i.e., u_m → u strongly in H_0^1(Ω), so (PS)_c holds for every c < (1/n) S^{n/2}.
  • At the level c = (1/n) S^{n/2} the condition fails. For a fixed x_0 ∈ Ω and ε > 0, consider the (suitably truncated) Aubin-Talenti functions
    u_ε(x) = [n(n − 2)ε^2]^{(n−2)/4} / ( ε^2 + |x − x_0|^2 )^{(n−2)/2},
    which realise the best constant S in R^n. Suitable multiples of u_ε form a Palais-Smale sequence at the level (1/n) S^{n/2} which concentrates at the point x_0 as ε → 0 and converges weakly, but not strongly, to 0; such a sequence admits no strongly convergent subsequence, so (PS)_c fails at c = (1/n) S^{n/2}. Again, we refer to [28] for the detailed computation. □
Proposition 9. 
Suppose, λ 1 ( Ω ) be the first Dirichlet Eigenvalue of Δ . Define, I : H 0 1 ( Ω ) R as,
I ( u ) : = 1 2 Ω u 2 λ 1 ( Ω ) 2 Ω u 2 .
Then, I does not satisfy the Palais-Smale Condition at the level c = 0.
Proof. 
Let ϕ_1 ∈ H_0^1(Ω) denote a first Dirichlet eigenfunction, i.e., −Δϕ_1 = λ_1(Ω)ϕ_1 in Ω, ϕ_1 = 0 on ∂Ω, normalised so that ‖ϕ_1‖_{H_0^1(Ω)} = 1. Consider the sequence u_n := n ϕ_1. We justify the following properties:
  • I(u_n) → 0: substituting u_n into I(u), we have
    I(u_n) = (n^2/2) ( ∫_Ω |∇ϕ_1|^2 − λ_1(Ω) ∫_Ω ϕ_1^2 ) = 0 for every n,
    since ϕ_1 attains the infimum defining λ_1(Ω).
  • ‖u_n‖ → ∞: the norm of u_n is ‖u_n‖ = n ‖ϕ_1‖ = n → ∞ as n → ∞, so {u_n} admits no convergent subsequence.
  • I′(u_n) → 0: for every ϕ ∈ H_0^1(Ω),
    I′(u_n)ϕ = n ( ∫_Ω ∇ϕ_1·∇ϕ − λ_1(Ω) ∫_Ω ϕ_1 ϕ ) = 0,
    because ϕ_1 is a weak solution of the eigenvalue problem. Hence I′(u_n) = 0 for every n.
Therefore {u_n} is a Palais-Smale sequence at the level 0 without any convergent subsequence, and the functional I does not satisfy (PS)_0. □
Lemma 3. 
Consider a Banach Space X and, I C 1 ( X , R ) . Assume, α = lim | | u | | I ( u ) R . Then, ∃ a sequence, { u n } n 0 X satisfying, | | u n | | , I ( u n ) α and, I ( u n ) 0 .
Proof. 
A priori, it is given that I ∈ C^1(X, R) on the Banach space X, with α = lim_{‖u‖→∞} I(u) ∈ R.
Fix n ∈ N. Since α = lim_{‖u‖→∞} I(u), we may choose R_n ≥ n such that
α − 1/n^2 ≤ I(u) whenever ‖u‖ ≥ R_n,
and we may also pick v_n ∈ X with ‖v_n‖ ≥ R_n + 2 and I(v_n) ≤ α + 1/n^2.
The set M_n := { u ∈ X : ‖u‖ ≥ R_n } is a complete metric space (being closed in X), I is lower semi-continuous and bounded below on M_n, and I(v_n) ≤ inf_{M_n} I + 2/n^2. Applying Ekeland's Variational Principle (Theorem 9) on M_n with ϵ = 2/n^2 and λ = 1/n, we obtain u_n ∈ M_n such that:
  • I(u_n) ≤ I(v_n) ≤ α + 1/n^2,
  • ‖u_n − v_n‖ ≤ 1/n,
  • I(u_n) < I(u) + (2/n)‖u − u_n‖ for every u ∈ M_n, u ≠ u_n.
From the second point, ‖u_n‖ ≥ ‖v_n‖ − 1/n ≥ R_n + 1, so u_n lies in the interior of M_n and ‖u_n‖ → ∞. Moreover, α − 1/n^2 ≤ I(u_n) ≤ α + 1/n^2, so I(u_n) → α.
Finally, since u_n is an interior point of M_n, for every y ∈ X with ‖y‖ = 1 the points u_n + ty belong to M_n for all small t > 0, and the third point gives I(u_n) − I(u_n + ty) < (2/n)t. Dividing by t and letting t → 0^+ yields −I′(u_n)(y) ≤ 2/n, and replacing y by −y, |I′(u_n)(y)| ≤ 2/n. Hence ‖I′(u_n)‖ ≤ 2/n → 0.
Therefore, we have constructed a sequence {u_n} satisfying all the desired properties, and the proof is complete.
As a consequence of Lemma (3), we can conclude the following.
Proposition 10. 
If I ∈ C^1(X, R) is bounded below and satisfies the (PS) condition, then I is coercive.
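The contrapositive is easy to visualise numerically. In the following illustrative sketch (the choice I(x) = e^{−x} on X = R is mine), I is C^1 and bounded below but not coercive, and the sequence x_n = n is a Palais-Smale sequence at the level 0 = inf I with no convergent subsequence, so I cannot satisfy (PS):

# a bounded-below, non-coercive functional fails (PS)
import numpy as np

for n in [1, 5, 10, 20, 40]:
    x_n = float(n)
    I = np.exp(-x_n)           # I(x_n) -> 0 = inf I
    dI = -np.exp(-x_n)         # I'(x_n) -> 0
    print(f"x_n = {x_n:4.0f}   I(x_n) = {I:.2e}   I'(x_n) = {dI:.2e}")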

4.3. Applications

We can in fact observe an application of Ekeland Variational Principle to derive critical point(s) at the minimax level.
Theorem 12. 
(Brezis Theorem) Given a Banach Space X and, I C 1 ( X , R ) , assume K to be a compact metric space. Moreover, let, K 0 K be a closed set, and, p 0 : K 0 X be a continuous function.
Define,
Γ : = p C ( K , X ) : p | K 0 = p 0
and,
c : = inf p Γ max ξ K I ( p ( ξ ) ) = inf p Γ max x p ( K ) I ( x ) .
Also, suppose that,
max ξ K I ( p ( ξ ) ) > max ξ K 0 I ( p 0 ( ξ ) ) p Γ .
Then, ∃ a sequence, { x n } n 0 X satisfying, I ( x n ) c and, I ( x n ) 0 in X * as n .
Furthermore, if I satisfies ( P S ) c , then, x 0 X with, I ( x 0 ) = c and, I ( x 0 ) = 0 .
Important to mention that, the proof mainly hinges upon the following result.
Lemma 4. 
(Pseudo-Gradient Lemma) Assume any metric space Y, and let X be a Banach Space with F ∈ C(Y, X*). Given any σ > 0, ∃ a locally Lipschitz function h : Y → X such that, ∀ y ∈ Y,
  • ‖h(y)‖_X ≤ 1,
  • ⟨F(y), h(y)⟩ ≥ ‖F(y)‖_{X*} − σ.
Brezis Theorem can also be considered as a generalization of the Mountain-Pass Theorem introduced by Ambrosetti and Rabinowitz.
Theorem 13. 
(Mountain Pass Theorem) A priori, assume that X is a Banach Space and I ∈ C^1(X, R) satisfies (PS). Moreover, we consider the following condition,
∃ R > 0 and e ∈ X such that ‖e‖ > R and b = inf_{x ∈ ∂B_R(0)} I(x) > max{ I(0), I(e) }.
Then ∃ x_0 ∈ X satisfying I′(x_0) = 0 and I(x_0) = c ≥ b, where we can in fact characterize c as
c = inf_{p∈Γ} max_{t∈[0,1]} I(p(t)),
such that
Γ = { p ∈ C([0, 1], X) : p(0) = 0, p(1) = e }.
Proof. 
A priori from the definition, we have c < ∞. Moreover, for every p ∈ Γ, p([0, 1]) ∩ ∂B_R(0) ≠ ∅. As a result,
max_{t∈[0,1]} I(p(t)) = max_{x∈p([0,1])} I(x) ≥ inf_{x∈∂B_R(0)} I(x) = b.
Therefore c ≥ b. To establish the existence of a critical point, we need to utilize the hypothesis of Brezis Theorem. Let K = [0, 1], K_0 = {0, 1}, p_0(0) = 0 and p_0(1) = e.
It then remains for us to verify the condition (34). However, from the estimate above we obtain
max_{t∈[0,1]} I(p(t)) ≥ inf_{x∈∂B_R(0)} I(x) = b > max{ I(0), I(e) }.
Applying Brezis Theorem, we conclude that ∃ x_0 ∈ X satisfying I(x_0) = c and I′(x_0) = 0. This completes the proof. □
Remark 12. 
The statement of the Mountain Pass Theorem need not be true in case if I does not satisfies ( P S ) .
For example, we can look into the function I ∈ C^1(R^2, R) defined as,
I(x, y) := x^2 − (x − 1)^3 y^2.
Since I(0) = 0 and 0 is in fact a strict local minimum, we can choose R > 0 small enough such that b = inf_{x∈∂B_R(0)} I(x) > 0. Furthermore, there exists e ∈ R^2 with |e| > R and I(e) < 0 (for instance e = (2, 3)). It can be observed that all the conditions of the Mountain Pass Theorem are satisfied, except that I does not satisfy (PS).
Also, since ∇I vanishes only at the origin and I(0) = 0, there does not exist any x_0 ∈ R^2 satisfying I(x_0) ≥ b > 0 and ∇I(x_0) = 0.
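These claims are easy to confirm numerically; the following sketch (the grid, the radius R = 0.5 and the sample point e = (2, 3) are my own choices) locates the critical points of I on a coarse grid and exhibits the mountain-pass geometry:

# Remark 12 counterexample: only critical point at the origin, yet I takes negative values
import numpy as np

def I(x, y):
    return x**2 - (x - 1)**3 * y**2

# gradient: (2x - 3(x-1)^2 y^2, -2(x-1)^3 y)
pts = np.linspace(-5, 5, 401)
X, Y = np.meshgrid(pts, pts)
G = np.hypot(2*X - 3*(X - 1)**2 * Y**2, -2*(X - 1)**3 * Y)
print("grid points with |grad I| < 1e-2:",
      [(round(a, 2), round(b, 2)) for a, b in zip(X[G < 1e-2], Y[G < 1e-2])])

R = 0.5
theta = np.linspace(0, 2*np.pi, 10_000)
print("b = min I on ||.|| = R :", I(R*np.cos(theta), R*np.sin(theta)).min())  # > 0
print("I(2, 3) =", I(2.0, 3.0))   # = -5 < 0, a valid choice of e with ||e|| > R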
Remark 13. 
Geometrically speaking, Mountain Pass Theorem implies that, if a pair of points in the graph of a function I are indeed separated by a mountain range, then ∃ a mountain pass containing a critical point of I.
Another version of the famous Mountain Pass Theorem can be found in [15].
Theorem 14. 
Given a Banach Space X and a C^1 functional I : X → R which satisfies the (PS) condition, suppose S is a closed subset of X which disconnects X. Furthermore, let x_0, x_1 ∈ X belong to distinct connected components of X ∖ S. Suppose I is bounded below on S and the following condition is verified,
inf_S I ≥ b and max{ I(x_0), I(x_1) } < b.
Also, let
Γ = { f ∈ C([0, 1], X) : f(0) = x_0, f(1) = x_1 }.
Then
c = inf_{f∈Γ} max_{t∈[0,1]} I(f(t)) > −∞
will be a critical value. In other words, ∃ x̄ ∈ X satisfying
I(x̄) = c,  I′(x̄) = 0.
Remark 14. 
The connectedness referred to above is in fact arc-wise connectedness. Thus X ∖ S is indeed a union of open arcwise connected components (ref. [16]). Hence, x_0 and x_1 being in distinct components implies that any arc in X connecting x_0 and x_1 intersects S.
For example, one can in fact consider S to be a hyperplane in X or the boundary of an open set [in particular, the boundary of a ball].

5. Applications to the Critical Point Theory

Theorem 15. 
Suppose 1 < q ≤ p < (n + 2)/(n − 2), and λ ∈ R. Also, we consider Ω ⊂ R^n to be a bounded domain. Then ∃ u_0 ∈ H_0^1(Ω) which is in fact a weak solution of the problem,
−Δu = |u|^{p−1}u + λ|u|^{q−1}u in Ω,  u = 0 on ∂Ω,  u ≢ 0 in Ω,
in the sense that,
∫_Ω ∇u_0·∇ϕ = ∫_Ω |u_0|^{p−1}u_0 ϕ + λ ∫_Ω |u_0|^{q−1}u_0 ϕ   ∀ ϕ ∈ D(Ω).
Proof. 
We define, I C 1 ( H 0 1 ( Ω ) , R ) as,
I ( u ) : = 1 2 Ω u 2 1 p + 1 Ω | u | p + 1 λ q + 1 Ω | u | q + 1 .
Clearly, we can verify that, I satisfies ( P S ) . Next we check the conditions for the Mountain Pass Theorem. We obtain,
I ( 0 ) = 0 .
Since H_0^1(Ω) ↪ L^{p+1}(Ω) and H_0^1(Ω) ↪ L^{q+1}(Ω), hence,
Ω | u | p + 1 c | | u | | p + 1 , Ω | u | q + 1 c | | u | | q + 1
for some c > 0 . Therefore,
I ( u ) 1 2 | | u | | 2 c p + 1 | | u | | p + 1 c q + 1 λ | | u | | q + 1
= 1 2 | | u | | c 1 | | u | | p c 2 λ | | u | | q | | u | |
For | | u | | = R , we have,
I ( u ) 1 2 R c 1 R p c 2 R q R
Since, p , q > 1 , thus choosing R small enough such that,
I ( u ) > a , on   | | u | | = R for   some   a > 0 .
Let us take any u_1 ∈ H_0^1(Ω) with u_1 ≢ 0. Then tu_1 ∈ H_0^1(Ω) for any t ∈ R, and
I(tu_1) → −∞ as t → ∞.
Choosing t_0 large enough such that ‖t_0 u_1‖ = |t_0|·‖u_1‖ > R and I(t_0 u_1) < 0, setting e = t_0 u_1, and applying the Mountain Pass Theorem, we can assert that ∃ u_0 ∈ H_0^1(Ω) with I′(u_0) = 0 and I(u_0) ≥ a.
It can also be derived that u_0 ≢ 0, since I(u_0) ≥ a > 0 = I(0), and the proof is thus complete.
On the other hand, Brezis and Nirenberg [28] have developed significant results concerning the model problem:
−Δu = u^p + λu in Ω,  u = 0 on ∂Ω,  u > 0 in Ω, (41)
when p = (n + 2)/(n − 2), n ≥ 3, and λ is any real constant. We can consider the following cases.
Theorem 16. 
In case n ≥ 4, the problem (41) indeed has a solution for every λ ∈ (0, λ_1), where λ_1 denotes the first eigenvalue of −Δ.
Moreover, the problem does not admit any solution if λ ∉ (0, λ_1) and Ω is star-shaped.
Theorem 17. 
For n = 3, the problem (41) turns out to be much more delicate. In this scenario, a complete answer is known only when Ω is a ball. In that case, (41) has a solution iff λ ∈ (λ_1/4, λ_1), λ_1 being the first eigenvalue of −Δ.
Remark 15. 
In case when p > ( n + 2 ) ( n 2 ) , Brezis and Nirenberg [28] discusses the concept of commenting on the results related to existence of solutions to (41) using the notion of general Bifurcation Theory. For example, as mentioned by Rabinowitz [29], the problem (41) possesses a component C of solutions ( λ , u ) , which meets ( λ 1 , 0 ) and which is unbounded in R × L ( Ω ) . Furthermore, if p = ( n + 2 ) ( n 2 ) and n 4 , applying the result in Theorem (16), we can conclude that, the projection of C on the λ-axis does in fact contain the interval ( 0 , λ 1 ) (with appropriate modifications being done when n = 3 , p = 5 ).
As in another scenario when, p > ( n + 2 ) ( n 2 ) and Ω is star-shaped, then the problem (41) has no solution for λ λ * , λ * being some positive constant depending on Ω and p. This was explicitly derived by Rabinowitz [30] for the case when, n = 3 , p = 7 . One can in fact use similar argument in the general case by applying Pohozaev’s Identity.
In greater generality as compared to the Dirichlet boundary value problem (41), Brezis and Nirenberg [28] also have dealt with the following problem in detail:
−Δu = u^p + λu^q in Ω,  u = 0 on ∂Ω,  u > 0 in Ω, (42)
where p = (n + 2)/(n − 2), 1 < q < p, and λ > 0 is a constant. Considering different cases for n, we can describe the existence of solutions (if any) of (42) as follows.
Theorem 18. 
For n 4 , the problem (42) indeed admits of a solution for every λ > 0 .
Theorem 19. 
In case n = 3 and consequently p = 5, we can assert the following about the existence of solutions to the Dirichlet problem (42):
(i) 
If 3 < q < 5 , ∃ solution to the problem for every λ > 0 .
(ii) 
For 1 < q ≤ 3, a solution exists only for sufficiently large values of λ.
Brezis Theorem allows us to conclude that, for any function I C 1 ( X , R ) which is bounded below, and satisfy ( P S ) , u 0 X satisfying, I ( u 0 ) = 0 .
As for another application of Theorem (12), we next justify the existence of a critical point for I in case it is bounded below on a finite dimensional subspace of X.
Theorem 20. 
(Saddle Point Theorem (Rabinowitz) [1]) Assume X to be a Banach Space and, I C 1 ( X , R ) satisfies ( P S ) . Also suppose, V 0 be a finite dimensional subspace of X and, X = V E . Furthermore, we consider that, R > 0 , α , β R such that,
max x B R V ( 0 ) I ( x ) α < β inf x E I ( x ) .
Where,
B R V ( 0 ) = x V : | | x | | R a n d , B R V = x V : | | x | | = R .
Then, x 0 X satisfying, I ( x 0 ) = 0 . Moreover, the critical value, c = I ( x 0 ) β , which in fact, can be characterized as,
c = inf_{S ∈ Γ} max_{u ∈ S} I(u).    (44)
Where,
Γ := { S = ϕ(B̄_R^V(0)) : ϕ ∈ C(B̄_R^V(0), X)  and  ϕ|_{∂B_R^V(0)} = id }.
The proof of the above Rabinowitz Theorem requires an application of the Topological Degree in ℝⁿ.
Definition 15. 
(Topological Degree) Suppose Ω ⊂ ℝⁿ is a bounded open set. Given φ ∈ C(Ω̄, ℝⁿ) and p ∈ ℝⁿ ∖ φ(∂Ω), the topological degree (or Brouwer degree) d(φ, Ω, p) is defined to be an integer satisfying the following properties:
(I) 
d(id, Ω, p) = 1 if p ∈ Ω, and d(id, Ω, p) = 0 if p ∉ Ω̄.
(II) 
d(φ, Ω, p) ≠ 0 ⟹ there exists q ∈ Ω such that φ(q) = p.
(III) 
d(φ, Ω, p) = 0 if p ∉ φ(Ω̄).
(IV) 
(Addition–Excision Property) If Ω₁, Ω₂ ⊂ Ω are open with Ω₁ ∩ Ω₂ = ∅ and p ∉ φ(Ω̄ ∖ (Ω₁ ∪ Ω₂)), then
d ( φ , Ω , p ) = d ( φ , Ω 1 , p ) + d ( φ , Ω 2 , p ) .
(V) 
If φ : [0, 1] × Ω̄ → ℝⁿ and p : [0, 1] → ℝⁿ are continuous, and moreover p(t) ∉ φ(t, ∂Ω) for all t ∈ [0, 1], then d(φ(t, ·), Ω, p(t)) is independent of t.
(VI) 
d(φ₁, Ω, p) = d(φ₂, Ω, p) whenever φ₁|_{∂Ω} = φ₂|_{∂Ω}.
(VII) 
(Product Property) If Ω_j is a bounded open set in ℝ^{n_j} for j = 1, 2, and φ_j, p_j are such that p_j ∈ ℝ^{n_j} ∖ φ_j(∂Ω_j) for j = 1, 2, then
d(φ₁ × φ₂, Ω₁ × Ω₂, (p₁, p₂)) = d(φ₁, Ω₁, p₁) · d(φ₂, Ω₂, p₂).
Remark 16. 
Given a bounded open set Ω ⊂ ℝⁿ, φ ∈ C¹(Ω̄, ℝⁿ), and p ∈ ℝⁿ ∖ φ(∂Ω), we can relate the theory of the Brouwer degree to the existence and multiplicity of solutions of the equation
φ(q) = p.    (45)
Assume that φ′(q) is non-singular whenever (45) holds. Then the Inverse Function Theorem implies that (45) can have only finitely many solutions in Ω. In this so-called "nice" case, the Brouwer degree of φ with respect to Ω and p, denoted by d(φ, Ω, p), has the following expression:
d(φ, Ω, p) = Σ_{q ∈ φ⁻¹(p) ∩ Ω} sign{ det φ′(q) },
where, det A denotes the determinant of a square matrix A.
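As a quick one-dimensional illustration (not taken from the references), take Ω = (−1, 1), p = 0 and φ(x) = x² − 1/4, so that 0 ∉ φ(∂Ω) since φ(±1) = 3/4. The zeros of φ in Ω are ±1/2 with φ′(±1/2) = ±1, and the above formula gives
\[
d(\varphi, \Omega, 0) = \operatorname{sign}\varphi'(-\tfrac{1}{2}) + \operatorname{sign}\varphi'(\tfrac{1}{2}) = -1 + 1 = 0,
\]
which is consistent with homotopy invariance (property (V)): along φ_t(x) = x² − 1/4 + t we have 0 ∉ φ_t(∂Ω) for every t ∈ [0, 1], and φ₁ > 0 on Ω̄, so the degree must agree with the value 0 given by property (III).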
Remark 17. 
The notion of the Brouwer degree can also be extended from "regular" to "singular" values of C²-functions, and then to continuous functions on ℝⁿ (ref. [27]).
Remark 18. 
The definition of the topological degree as provided above, together with its properties, can in fact be extended to infinite-dimensional spaces (for compact perturbations of the identity), in which case it is known as the Leray–Schauder degree (ref. [27]).
Proof of Theorem (20):
Proof. 
For any c ∈ ℝ and I ∈ C¹(X, ℝ), we define the following sets,
K := { x ∈ X : I′(x) = 0 }
and
K_c := { x ∈ K : I(x) = c }.
First, we intend to prove that K_c ≠ ∅ for c as characterized in (44). Assume, if possible, that K_c = ∅. Choose ε such that
0 < ε < (1/4)(β − α)    (46)
and consider S ∈ Γ such that
max_{x ∈ S} I(x) < c + ε.    (47)
Note that the (PS)_c condition is satisfied under these assumptions. Let η : [0, 1] × X → X be an I-decreasing homotopy (ref. Corollary (1.7), [25]) satisfying the following conditions:
  • η(t, x) = x if |I(x) − c| ≥ 2ε;
  • η(1, I^{c+ε}) ⊂ I^{c−ε}, where I^{c} := { x ∈ X : I(x) ≤ c }.
We denote S₁ = η(1, S). If S₁ ∈ Γ, then by (47) we derive
max_{x ∈ S₁} I(x) ≤ c − ε,
which contradicts the definition of c.
Let us establish that S₁ ∈ Γ. Consider ϕ ∈ C(B̄_R^V(0), X) such that ϕ|_{∂B_R^V(0)} = id and S = ϕ(B̄_R^V(0)). We thus have ϕ₁ := η(1, ϕ(·)) ∈ C(B̄_R^V(0), X) and S₁ = ϕ₁(B̄_R^V(0)). Hence S₁ ∈ Γ provided ϕ₁(x) = x for all x ∈ ∂B_R^V(0).
We claim that,
c ≥ β.    (48)
If (48) holds true, then, for x ∈ ∂B_R^V(0), using (43), (46) and (48),
I(x) ≤ α < α + 2ε < β − 2ε ≤ c − 2ε,
so that |I(x) − c| ≥ 2ε and hence, by the first property of η, ϕ₁(x) = η(1, ϕ(x)) = η(1, x) = x. This shows S₁ ∈ Γ, yielding the contradiction obtained above.
Therefore, it only remains to show (48). We denote by P₁ and P₂ the projections of X onto V and E respectively, and identify V with ℝⁿ for some n. For ϕ ∈ C(B̄_R^V(0), X) with ϕ = id on ∂B_R^V(0), we have P₁ ∘ ϕ = id on ∂B_R^V(0). Hence, by properties (VI) and (I) in Definition (15) of the Topological Degree, we obtain
d(P₁ ∘ ϕ, B_R^V(0), 0) = d(id, B_R^V(0), 0) = 1.
Applying property (II) in Definition (15), there exists x₀ ∈ B_R^V(0) satisfying P₁ ∘ ϕ(x₀) = 0. Consequently, for each S = ϕ(B̄_R^V(0)) ∈ Γ, there exists x₀ ∈ B_R^V(0) such that
ϕ(x₀) = P₂ ∘ ϕ(x₀) ∈ E.
On the other hand, from (43), we can conclude,
max_{x ∈ B̄_R^V(0)} I(ϕ(x)) ≥ I(ϕ(x₀)) ≥ β.
Using (44), it follows that c ≥ β, and the proof is thus complete. □
Remark 19. 
Heuristically speaking, in the above Theorem (20), c is the minimax of I over all surfaces modelled on B̄_R^V(0) which share the same boundary. Unlike the Mountain Pass Theorem, in applications of the Saddle Point Theorem no critical points of I are, in general, known initially. It is important to note that condition (43) is satisfied if I is convex on E, concave on V, and appropriately coercive; a toy example is given below.
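As a simple finite-dimensional illustration (not taken from [1]), take X = ℝ² with V = ℝ × {0}, E = {0} × ℝ, and I(x, y) = ½(y² − x²). Then I satisfies (PS), is concave on V and convex on E, and for any R > 0,
\[
\max_{\|(x,0)\| = R} I = -\tfrac{R^{2}}{2} =: \alpha \;<\; 0 =: \beta \;=\; \inf_{(0,y) \in E} I,
\]
so condition (43) holds; the critical point produced by Theorem (20) is the saddle (0, 0), with critical value c = 0 ≥ β.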
Another version of the Rabinowitz Saddle Point Theorem (Theorem (20)) can be found in [26].
Theorem 21. 
Given a real Banach space X with the direct sum decomposition X = V ⊕ E, where V and E are closed subspaces with dim V < ∞. Suppose I ∈ C¹(X, ℝ) satisfies the (PS) condition, and
I(x) → −∞  as  ||x|| → ∞,  for x ∈ V,
and,
inf_{x ∈ E} I(x) = d > −∞.
Define D := { x ∈ V : ||x|| ≤ R }, where R is chosen so large that x ∈ V and ||x|| = R imply I(x) < d.
If,
Γ := { ϕ ∈ C(D, X) : ϕ|_{∂D} = id },
Then,
c = inf_{ϕ ∈ Γ} max_{x ∈ D} I(ϕ(x)) > −∞,
and there exists u₀ ∈ X satisfying I(u₀) = c and I′(u₀) = 0.
Remark 20. 
For other versions of the proof of Theorem (20), and for other important applications of the Mountain Pass Theorem, one can refer to [1,11,15].
Remark 21. 
Theorem (20) essentially states that, under certain conditions, a functional (a real-valued function on a function space) has at least one critical point of saddle type: a point at which the derivative of the functional vanishes, representing a sort of equilibrium.
To put it in a more formal geometric context, consider a functional defined on an infinite-dimensional space. The space can be thought of as a “landscape” of all possible functions. The functional assigns a “height” (or value) to each function in this landscape. Rabinowitz’s theorem guarantees that there’s at least one function in this landscape that has a saddle point: it’s not the highest or lowest point, but it’s a point of balance between different “directions” in the function space.
This geometric interpretation is quite abstract because we’re dealing with spaces that are not easy to visualize. However, the concept of a saddle point as a point of equilibrium remains a powerful image to understand the essence of the theorem.

Data Availability Statement

I as the sole author of this article confirm that the data supporting the findings of this study are available within the article and/or its supplementary materials.

Conflicts of Interest

I as the sole author of this article certify that I have no affiliations with or involvement in any organization or entity with any financial interest (such as honoraria; educational grants; participation in speakers’ bureaus; membership, employment, consultancies, stock ownership, or other equity interest; and expert testimony or patent-licensing arrangements), or non-financial interest (such as personal or professional relationships, affiliations, knowledge or beliefs) in the subject matter or materials discussed in this manuscript.

References

  1. P. H. Rabinowitz, Minimax Methods in Critical Point Theory with Applications to Differential Equations, CBMS Regional Conference Series in Mathematics, Vol. 65, American Mathematical Society, Providence, RI, 1986.
  2. A. Ambrosetti, P. H. Rabinowitz, Dual variational methods in critical point theory and applications, J. Funct. Anal. 14 (1973), 349-381.
  3. D. Bahuguna, V. Raghavendra, B.V. Rathish Kumar, Topics in Sobolev Space and Applications.
  4. Elias M. Stein, Rami Shakarchi, Fourier Analysis: An Introduction, Princeton Lectures in Analysis, Princeton University Press, Princeton and Oxford, ISBN 978-0-691-11384-5.
  5. L. Hörmander, Linear Partial Differential Operators, Springer-Verlag, 1969.
  6. R. A. Adams, Sobolev Spaces, Academic Press, New York, 1975.
  7. L. C. Evans, Partial Differential Equations, Vol. 1 & Vol. 2, Berkeley Mathematics Lecture Notes, 1994.
  8. V. G. Mazya, Sobolev Spaces, Springer-Verlag, Springer Series in Soviet Mathematics, 1985.
  9. S. L. Sobolev, On some estimates relating to families of functions having derivatives that are square integrable, Dokl. Akad. Nauk SSSR (1936), 267-270.
  10. S. L. Sobolev, On a theorem of functional analysis, Mat. Sb. 46 (1938), 471-497.
  11. M. Struwe, Variational Methods : Applications to Nonlinear Partial Differential Equations and Hamiltonian Systems, Springer-Verlag, 1990.
  12. W. P. Ziemer, Weakly Differentiable Functions, Sobolev Spaces and Functions of Bounded Variation, Springer-Verlag, 1980.
  13. H. Brezis, L. Nirenberg, Remarks on finding critical points, Comm. Pure Appl. Math 44 (1991), 939-963.
  14. S. Shuzhong, Convex Analysis and Nonsmooth Analysis, ICTP Notes, 1996.
  15. D. G. de Figueiredo, The Ekeland Variational Principle with Applications and Detours, TIFR Lecture Notes, Springer-Verlag, 1989.
  16. J. Dugundji, Topology, Allyn & Bacon, 1966.
  17. Wenming Zou, Martin Schechter, Critical Point Theory and Its Applications, Springer Science & Business Media, 2006.
  18. Daniela Kraus, Oliver Roth, Critical points of inner functions, nonlinear partial differential equations, and an extension of Liouville’s theorem, Journal of the London Mathematical Society 77.1 (2008): 183-202.
  19. Jean Mawhin, Critical point theory and Hamiltonian systems, Vol. 74. Springer Science & Business Media, 2013.
  20. Abbas Bahri, Henri Berestycki, A perturbation method in critical point theory and applications, Transactions of the American Mathematical Society 267.1 (1981): 1-32.
  21. Abbas Bahri, Henri Berestycki, A perturbation method in critical point theory and applications, Transactions of the American Mathematical Society 267.1 (1981): 1-32.
  22. Nassif Ghoussoub, Duality and perturbation methods in critical point theory, No. 107. Cambridge University Press, 1993.
  23. Richard S. Palais, Chuu-Lian Terng, Critical Point Theory and Submanifold Geometry, Vol. 1353, Springer, 2006.
  24. Erwin Kreyszig, Introductory Functional Analysis with Applications, Wiley Classics Library, 1978.
  25. Maria do Rosário Grossinho, Stepan Agop Tersian, An Introduction to Minimax Theorems and Their Applications to Differential Equations, Springer Science+Business Media, B.V., 2001.
  26. P. Rabinowitz, A note on a semilinear elliptic equation on Rⁿ, Scuola Norm. Sup. Pisa, 1991.
  27. K. Deimling, Nonlinear Functional Analysis, Springer, Berlin, Heidelberg, 1985.
  28. H. Brezis, L. Nirenberg, Positive Solutions of Nonlinear Elliptic Equations Involving Critical Sobolev Exponents, Communications on Pure and Applied Mathematics, Vol. XXXVI, 437-477 (1983).
  29. P. Rabinowitz, Some global results for nonlinear eigenvalue problems, J. Funct. Anal. 7 (1971), 487-513.
  30. P. Rabinowitz, Variational methods for nonlinear elliptic eigenvalue problems, Indiana Univ. Math. J. 23, 1974, pp. 729-754.