Preprint
Article

On the Sub Convexlike Optimization Problems


A peer-reviewed article of this preprint also exists.

This version is not peer-reviewed

Submitted: 14 June 2023
Posted: 15 June 2023

Abstract
In this paper, we prove that the sub convexlikeness introduced by V. Jeyakumar [1] and the subconvexlikeness defined by V. Jeyakumar [2] are equivalent in locally convex topological spaces. We then work with set-valued vector optimization problems and obtain some vector saddle-point theorems and vector Lagrangian theorems.
Keywords: 
Subject: Computer Science and Mathematics  -   Applied Mathematics

0. Introduction

Generalized convex optimization is a well-studied branch of mathematics, and many meaningful and useful generalizations of convexity have been introduced. Let X be a normed space and X+ a convex cone of X. K. Fan [3] introduced the definition of X+-convexlike functions, Jeyakumar [1] introduced X+-sub convexlike functions, and Jeyakumar [2] defined X+-subconvexlike functions. There are plenty of research articles discussing subconvexlike optimization problems, e.g., [4,5,6,7,8,9,10,11,12]. In this paper, we show that the sub convexlikeness introduced in [1] and the subconvexlikeness in [2] are actually equivalent in locally convex topological spaces (including normed linear spaces).
Most papers in set-valued optimization study problems with inequality and abstract constraints. In this paper we consider set-valued optimization problems with equality constraints in addition to inequality and abstract constraints. The explicit statement of the equality constraint is very convenient in applications. For example, mathematical programs with equilibrium constraints have recently received a lot of attention from the optimization community. Mathematical programs with equilibrium constraints are a class of optimization problems with variational inequality constraints. By representing the variational inequality as a generalized equation, e.g., [8,13,14], a mathematical program with equilibrium constraints can be reformulated as an optimization problem with an equality constraint. This paper deals with set-valued optimization problems with inequality, equality, and abstract constraints, and obtains some vector saddle-point theorems and vector Lagrangian theorems.

1. Preliminary

Let X be a real topological vector space. A subset $X_+$ of X is said to be a convex cone if
$$\alpha x_1 + \beta x_2 \in X_+, \quad \forall x_1, x_2 \in X_+,\ \forall \alpha, \beta \geq 0.$$
We denote by $0_X$ the zero element of the topological vector space X, or simply by 0 if there is no confusion.
A convex cone $X_+$ of X is called a pointed cone if $X_+ \cap (-X_+) = \{0\}$.
A real topological vector space X with a pointed cone is said to be an ordered topological linear space. We denote by $\operatorname{int} X_+$ the topological interior of $X_+$. The partial order on X is defined by
$$x_1 \geq_{X_+} x_2,\ \text{ if } x_1 - x_2 \in X_+; \qquad x_1 >_{X_+} x_2,\ \text{ if } x_1 - x_2 \in \operatorname{int} X_+.$$
Or, if there is no confusion, they may simply be denoted by
$$x_1 \geq x_2,\ \text{ if } x_1 - x_2 \in X_+; \qquad x_1 > x_2,\ \text{ if } x_1 - x_2 \in \operatorname{int} X_+.$$
If $A, B \subseteq X$, we write
$$A \geq_{X_+} B,\ \text{ if } x \geq_{X_+} y,\ \forall x \in A,\ \forall y \in B; \qquad A >_{X_+} B,\ \text{ if } x >_{X_+} y,\ \forall x \in A,\ \forall y \in B,$$
or simply
$$A \geq B,\ \text{ if } x \geq y,\ \forall x \in A,\ \forall y \in B; \qquad A > B,\ \text{ if } x > y,\ \forall x \in A,\ \forall y \in B.$$
A linear functional on X is a continuous linear function from X to R (the 1-dimensional Euclidean space). The set $X^*$ of all linear functionals on X is the dual space of X. The subset
$$X_+^* = \{\xi \in X^* : \langle x, \xi \rangle \geq 0,\ \forall x \in X_+\}$$
of $X^*$ is said to be the dual cone of the cone $X_+$, where $\langle x, \xi \rangle = \xi(x)$.
Suppose that X and Y are two real topological vector spaces. Let f: X→2Y be a set-valued function, where 2Y denotes the power set of Y.
Let D be a nonempty subset of X. Set $f(D) = \bigcup_{x \in D} f(x)$, and
$$\langle f(x), \eta \rangle = \{\langle y, \eta \rangle : y \in f(x)\}, \qquad \langle f(D), \eta \rangle = \bigcup_{x \in D} \langle f(x), \eta \rangle.$$
For $x \in D$, $\eta \in Y^*$, we write
$$\langle f(x), \eta \rangle \geq 0,\ \text{ if } \langle y, \eta \rangle \geq 0,\ \forall y \in f(x); \qquad \langle f(D), \eta \rangle \geq 0,\ \text{ if } \langle f(x), \eta \rangle \geq 0,\ \forall x \in D.$$
The following Definitions 1.1 and 1.2 can be found in [15].
Definition 1.1 (convex, balanced, and absorbing) A subset M of X is said to be convex if $x_1, x_2 \in M$ and $0 < \alpha < 1$ imply $\alpha x_1 + (1-\alpha) x_2 \in M$; M is said to be balanced if $x \in M$ and $|\alpha| \leq 1$ imply $\alpha x \in M$; M is said to be absorbing if for any given neighbourhood U of 0 there exists a positive scalar $\beta$ such that $\beta^{-1} M \subseteq U$, where $\beta^{-1} M = \{x \in X : x = \beta^{-1} v,\ v \in M\}$.
Definition 1.2 (locally convex topological space) A topological vector space X is called a locally convex topological space if any neighborhood of $0_X$ contains a convex, balanced, and absorbing open set.
By [15, Theorem on p. 26 and Definition 1 on p. 33], a normed linear space is a locally convex topological space.
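The cone machinery above can be illustrated in a small hypothetical finite-dimensional setting (our own illustration, not from the paper): take X = R^2 ordered by the nonnegative orthant, so the partial order and the dual cone reduce to componentwise tests.

```python
# A small hypothetical illustration (ours, not from the paper): X = R^2
# ordered by the convex cone X_+ = {(a, b) : a >= 0, b >= 0}.

def in_cone(v):
    """Membership test for the nonnegative-orthant cone X_+."""
    return all(c >= 0 for c in v)

def geq(x1, x2):
    """x1 >=_{X_+} x2  iff  x1 - x2 in X_+."""
    return in_cone(tuple(a - b for a, b in zip(x1, x2)))

def pair(x, xi):
    """<x, xi> = xi(x), a linear functional represented by a vector."""
    return sum(a * b for a, b in zip(x, xi))

# The dual cone X_+^* = {xi : <x, xi> >= 0 for all x in X_+} is again the
# nonnegative orthant here; spot-check one functional inside it, one outside.
probes = [(1, 0), (0, 1), (1, 1), (2, 0.5)]            # sample points of X_+
dual_in = all(pair(x, (1.0, 2.0)) >= 0 for x in probes)
dual_out = all(pair(x, (1.0, -1.0)) >= 0 for x in probes)
print(dual_in, dual_out, geq((2, 3), (1, 1)))  # True False True
```

The functional (1, 2) acts nonnegatively on the whole cone, while (1, -1) fails already at the probe point (0, 1).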

2. The Sub Convexlikeness

This section shows that the definitions of sub convexlikeness and subconvexlikeness given by Jeyakumar [1,2] are actually equivalent.
A set-valued function $f: X \to 2^Y$ is said to be $Y_+$-convex on D if $\forall x_1, x_2 \in D$, $\forall \alpha \in [0,1]$, one has
$$\alpha f(x_1) + (1-\alpha) f(x_2) \geq_{Y_+} f(\alpha x_1 + (1-\alpha) x_2).$$
The following definition of convexlikeness was introduced by Ky Fan [3].
A set-valued function $f: X \to 2^Y$ is said to be $Y_+$-convexlike on D if $\forall x_1, x_2 \in D$, $\forall \alpha \in [0,1]$, $\exists x_3 \in D$ such that
$$\alpha f(x_1) + (1-\alpha) f(x_2) \geq_{Y_+} f(x_3).$$
Jeyakumar [2] introduced the following subconvexlikeness.
Definition 2.1 (subconvexlike) Let Y be a topological vector space, $D \subseteq X$ a nonempty set, and $Y_+$ a convex cone in Y. A set-valued map $f: D \to 2^Y$ is said to be $Y_+$-subconvexlike on D if $\exists \theta \in \operatorname{int} Y_+$ such that $\forall x_1, x_2 \in D$, $\forall \varepsilon > 0$, $\forall \alpha \in [0,1]$, $\exists x_3 \in D$ there holds
$$\varepsilon \theta + \alpha f(x_1) + (1-\alpha) f(x_2) \geq_{Y_+} f(x_3).$$
Lemma 2.1 below is [16, Lemma 2.3].
Lemma 2.1 Let Y be a topological vector space, $D \subseteq X$ a nonempty set, and $Y_+$ a convex cone in Y. A set-valued map $f: D \to 2^Y$ is $Y_+$-subconvexlike on D if and only if $\forall \theta \in \operatorname{int} Y_+$, $\forall x_1, x_2 \in D$, $\forall \alpha \in [0,1]$, $\exists x_3 \in D$ such that
$$\theta + \alpha f(x_1) + (1-\alpha) f(x_2) \geq_{Y_+} f(x_3).$$
A bounded set in a topological space can be defined as in the following Definition 2.2 (e.g., see Yosida [15]).
Definition 2.2 (bounded set-valued map) A subset M of a real topological vector space Y is said to be a bounded subset if for any given neighbourhood U of 0 there exists a positive scalar $\beta$ such that $\beta^{-1} M \subseteq U$, where $\beta^{-1} M = \{y \in Y : y = \beta^{-1} v,\ v \in M\}$. A set-valued map $f: D \to Y$ is said to be a bounded map if f(D) is a bounded subset of Y.
Jeyakumar [1] introduced the following sub convexlikeness.
Definition 2.3 (sub convexlike) Let Y be a topological vector space and $D \subseteq X$ a nonempty set. A set-valued map $f: D \to 2^Y$ is said to be $Y_+$-sub convexlike on D if $\exists$ a bounded set-valued map $u: D \to Y$ such that $\forall x_1, x_2 \in D$, $\forall \varepsilon > 0$, $\forall \alpha \in [0,1]$, $\exists x_3 \in D$ with
$$\varepsilon u + \alpha f(x_1) + (1-\alpha) f(x_2) \geq_{Y_+} f(x_3).$$
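As a quick sanity check of these definitions in the simplest setting (a sketch under our own assumptions: single-valued $f(x) = x^2$, $D = R$, $Y = R$, $Y_+ = [0, \infty)$, with the constant 1 playing the role of both $\theta$ and the bounded map u), convexity supplies the witness $x_3$, so the required inequality holds at every sampled point:

```python
import random

# Numerical spot-check (our sketch, not part of the paper): f(x) = x^2 on
# D = R, Y = R, Y_+ = [0, inf), theta = 1 in int Y_+.
# Convexity supplies the witness x3 = a*x1 + (1-a)*x2, so
#   eps*theta + a*f(x1) + (1-a)*f(x2) - f(x3) >= eps > 0
# at every sampled point.
f = lambda x: x * x
theta = 1.0
rng = random.Random(1)
ok = True
for _ in range(10_000):
    x1, x2 = rng.uniform(-5, 5), rng.uniform(-5, 5)
    a, eps = rng.random(), rng.uniform(1e-6, 1.0)
    x3 = a * x1 + (1 - a) * x2                 # witness supplied by convexity
    ok = ok and eps * theta + a * f(x1) + (1 - a) * f(x2) - f(x3) >= 0
print(ok)  # True
```

Any convex (hence convexlike) map passes this test; the interesting members of the class are of course the maps that satisfy the inequality without being convex.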
The following Lemma 2.2 is from Li and Wang [16, Lemma 2.3].
Lemma 2.2 Let Y be a locally convex topological space, $D \subseteq X$ a nonempty set, and $Y_+$ a convex cone in Y. A set-valued map $f: D \to 2^Y$ is $Y_+$-subconvexlike on D if and only if $f(D) + \operatorname{int} Y_+$ is convex.
Theorem 2.1 Let Y be a locally convex topological space, $D \subseteq X$ a nonempty set, and $Y_+$ a convex cone in Y. A set-valued map $f: D \to 2^Y$ is $Y_+$-sub convexlike on D if and only if $f(D) + \operatorname{int} Y_+$ is convex.
Proof. The necessity. Suppose that f is $Y_+$-sub convexlike.
$\forall z_1 = y_1 + y_+^1,\ z_2 = y_2 + y_+^2 \in f(D) + \operatorname{int} Y_+$, $\exists x_1, x_2 \in D$ such that $y_1 \in f(x_1)$, $y_2 \in f(x_2)$. $\forall \alpha \in [0,1]$, let
$$y_+^0 = \alpha y_+^1 + (1-\alpha) y_+^2,$$
then $y_+^0 \in \operatorname{int} Y_+$. Therefore, $\exists$ a neighbourhood U of 0 such that $y_+^0 + U$ is a neighbourhood of $y_+^0$ and
$$y_+^0 + U \subseteq \operatorname{int} Y_+.$$
By Definition 1.2, we may assume that U is convex, balanced, and absorbing.
From the assumption of sub convexlikeness, $\exists$ a bounded set-valued map u such that $\forall x_1, x_2 \in D$, $\forall \varepsilon > 0$, $\forall \alpha \in [0,1]$, $\exists x_3 \in D$ with
$$\varepsilon u + \alpha f(x_1) + (1-\alpha) f(x_2) \subseteq f(x_3) + Y_+.$$
Therefore
$$\alpha z_1 + (1-\alpha) z_2 = \alpha y_1 + (1-\alpha) y_2 + \alpha y_+^1 + (1-\alpha) y_+^2 \in f(x_3) - \varepsilon u + Y_+ + y_+^0.$$
Since U is convex, balanced, and absorbing, we may take $\varepsilon > 0$ small enough such that
$$\varepsilon u \subseteq U.$$
Therefore
$$-\varepsilon u + y_+^0 \subseteq y_+^0 + U \subseteq \operatorname{int} Y_+,$$
and then, using $\operatorname{int} Y_+ + Y_+ \subseteq \operatorname{int} Y_+$,
$$\alpha z_1 + (1-\alpha) z_2 \in f(x_3) + \operatorname{int} Y_+ \subseteq f(D) + \operatorname{int} Y_+.$$
Hence, $f(D) + \operatorname{int} Y_+$ is a convex set.
The sufficiency. If $f(D) + \operatorname{int} Y_+$ is convex, then, by Lemma 2.2, f is $Y_+$-subconvexlike. And it is clear that $Y_+$-subconvexlikeness implies $Y_+$-sub convexlikeness.
From Lemma 2.2 and Theorem 2.1 one has Theorem 2.2.
Theorem 2.2 Let Y be a locally convex topological space, $D \subseteq X$ a nonempty set, and $Y_+$ a convex cone in Y. A set-valued map $f: D \to 2^Y$ is $Y_+$-subconvexlike on D if and only if f is $Y_+$-sub convexlike on D.
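In the scalar case the characterization behind Theorem 2.2 is easy to visualize (our own illustration, not from the paper): with $Y = R$ and $Y_+ = [0, \infty)$, the set $f(D) + \operatorname{int} Y_+$ is the open half-line above $\inf f(D)$, hence convex no matter how nonconvex f itself is, so every bounded-below scalar map is sub convexlike (equivalently, subconvexlike).

```python
import math

# Scalar illustration of Theorem 2.2 (ours, not from the paper): with Y = R
# and Y_+ = [0, inf), f(D) + int Y_+ is the open half-line (m, inf) with
# m = inf f(D), so it is automatically convex even for a nonconvex f.
f = lambda x: math.sin(3 * x) + 0.1 * x * x      # a nonconvex sample function
D = [-5 + i / 100 for i in range(1001)]          # grid approximating D = [-5, 5]
m = min(f(x) for x in D)                         # inf f(D) on the grid

def in_set(z):
    """Membership in f(D) + int Y_+, i.e. the half-line (m, inf)."""
    return z > m

# Convex combinations of members stay in the set:
members = [m + 0.5, m + 1e-3, m + 4.0, m + 10.0]
convex_ok = all(in_set(a * z1 + (1 - a) * z2)
                for z1 in members for z2 in members for a in (0.1, 0.5, 0.9))
print(convex_ok)  # True
```

The genuinely nontrivial content of Theorem 2.2 lives in infinite-dimensional Y, where the bounded map u of Definition 2.3 cannot simply be rescaled into a single interior point.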

3. Vector Saddle-Point Theorems

This section establishes vector saddle-point theorems for set-valued optimization problems.
A set-valued map $f: D \to 2^Y$ is said to be affine on D if $\forall x_1, x_2 \in D$, $\forall \beta \in R$, there holds
$$\beta f(x_1) + (1-\beta) f(x_2) = f(\beta x_1 + (1-\beta) x_2).$$
We introduce the notion of sub affinelike functions as follows.
Definition 3.1 (sub affinelike) A set-valued map $f: D \to 2^Y$ is said to be $Y_+$-sub affinelike on D if $\forall x_1, x_2 \in D$, $\forall \alpha \in (0,1)$, $\exists v \in \operatorname{int} Y_+$, $\exists x_3 \in D$ there holds
$$v + \alpha f(x_1) + (1-\alpha) f(x_2) = f(x_3).$$
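For a concrete feel for Definition 3.1, here is a toy single-valued example of our own (not from the paper): every affine map $h(x) = 2x + 1$ on $D = R$ is sub affinelike, because given $x_1, x_2, \alpha$ and any $v > 0$, the point $x_3 = \alpha x_1 + (1-\alpha) x_2 + v/2$ lies in D and satisfies $v + \alpha h(x_1) + (1-\alpha) h(x_2) = h(x_3)$ exactly:

```python
import random

# Check of Definition 3.1 on a toy single-valued map (our example, not from
# the paper): h(x) = 2x + 1 on D = R with Y_+ = [0, inf).  The witness
# x3 = a*x1 + (1-a)*x2 + v/2 gives v + a*h(x1) + (1-a)*h(x2) = h(x3).
h = lambda x: 2 * x + 1
rng = random.Random(7)
ok = True
for _ in range(1000):
    x1, x2 = rng.uniform(-10, 10), rng.uniform(-10, 10)
    a, v = rng.random(), rng.uniform(1e-3, 5)
    x3 = a * x1 + (1 - a) * x2 + v / 2           # witness point in D
    ok = ok and abs(v + a * h(x1) + (1 - a) * h(x2) - h(x3)) < 1e-9
print(ok)  # True
```

The point of the definition is that D need not be all of R: sub affinelikeness only asks for some witness $x_3 \in D$ and some interior perturbation v, which is weaker than exact affineness.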
Theorem 3.1 Let X, Y, Z and W be real topological vector spaces, $D \subseteq X$, and $Y_+, Z_+, W_+$ pointed convex cones of Y, Z and W, respectively. Assume that the functions $f: D \to Y$, $g: D \to Z$, $h: D \to W$ satisfy:
(a)
f and g are sub convexlike maps on D, i.e., $\exists u_1 \in \operatorname{int} Y_+$, $\exists u_2 \in \operatorname{int} Z_+$ such that $\forall \alpha \in (0,1)$, $\forall x_1, x_2 \in D$, $\exists x', x'' \in D$ with
$$u_1 + \alpha f(x_1) + (1-\alpha) f(x_2) \geq f(x'), \qquad u_2 + \alpha g(x_1) + (1-\alpha) g(x_2) \geq g(x'');$$
(b)
h is a sub affinelike map on D, i.e., $\forall \alpha \in (0,1)$, $\forall x_1, x_2 \in D$, $\exists x''' \in D$, $\exists v \in \operatorname{int} W_+$ such that
$$v + \alpha h(x_1) + (1-\alpha) h(x_2) = h(x''');$$
(c)
$\operatorname{int} h(D) \neq \emptyset$.
Let (i) and (ii) denote the systems
(i)
$\exists x \in D$ such that $f(x) < 0$, $g(x) \leq 0$, $h(x) = 0$;
(ii)
$\exists (\xi, \eta, \varsigma) \in (Y_+^* \times Z_+^* \times W^*) \setminus \{(0_{Y^*}, 0_{Z^*}, 0_{W^*})\}$ such that
$$\xi(f(x)) + \eta(g(x)) + \varsigma(h(x)) \geq 0, \quad \forall x \in D.$$
If (i) has no solution, then (ii) has a solution.
Moreover, if (ii) has a solution $(\xi, \eta, \varsigma)$ with $\xi \neq 0_{Y^*}$, then (i) has no solution.
Proof. $\forall w_1, w_2 \in \bigcup_{t>0} t\, h(D) + \operatorname{int} W_+$, $\forall \alpha \in (0,1)$, $\exists x_1, x_2 \in D$, $b_1, b_2 \in \operatorname{int} W_+$, $t_1, t_2 > 0$ such that
$$\alpha w_1 + (1-\alpha) w_2 = \alpha t_1 h(x_1) + (1-\alpha) t_2 h(x_2) + \alpha b_1 + (1-\alpha) b_2 = (\alpha t_1 + (1-\alpha) t_2) \Big[ \tfrac{\alpha t_1}{\alpha t_1 + (1-\alpha) t_2} h(x_1) + \tfrac{(1-\alpha) t_2}{\alpha t_1 + (1-\alpha) t_2} h(x_2) \Big] + \alpha b_1 + (1-\alpha) b_2.$$
By assumption (b), $\exists x_3 \in D$, $v \in \operatorname{int} W_+$, $\varepsilon > 0$ such that
$$\tfrac{\alpha t_1}{\alpha t_1 + (1-\alpha) t_2} h(x_1) + \tfrac{(1-\alpha) t_2}{\alpha t_1 + (1-\alpha) t_2} h(x_2) = h(x_3) - \varepsilon v.$$
Since $\alpha b_1 + (1-\alpha) b_2 \in \operatorname{int} W_+$ and $\operatorname{int} W_+$ is open, $\exists$ a neighbourhood U of 0 in W for which $V = \alpha b_1 + (1-\alpha) b_2 + U$ is a neighbourhood of $\alpha b_1 + (1-\alpha) b_2$ contained in $\operatorname{int} W_+$.
By Definition 1.2, we may take $\varepsilon > 0$ small enough such that
$$\varepsilon (\alpha t_1 + (1-\alpha) t_2) v \in U.$$
Then,
$$\alpha b_1 + (1-\alpha) b_2 - \varepsilon (\alpha t_1 + (1-\alpha) t_2) v \in V \subseteq \operatorname{int} W_+.$$
Therefore,
$$\alpha w_1 + (1-\alpha) w_2 = (\alpha t_1 + (1-\alpha) t_2) h(x_3) + \alpha b_1 + (1-\alpha) b_2 - \varepsilon (\alpha t_1 + (1-\alpha) t_2) v \in \bigcup_{t>0} t\, h(D) + \operatorname{int} W_+.$$
So, $\bigcup_{t>0} t\, h(D) + \operatorname{int} W_+$ is a convex set.
Similarly, $\bigcup_{t>0} t\, f(D) + \operatorname{int} Y_+$ and $\bigcup_{t>0} t\, g(D) + \operatorname{int} Z_+$ are also convex. Therefore, the set
$$C = \Big( \bigcup_{t>0} t\, f(D) + \operatorname{int} Y_+ \Big) \times \Big( \bigcup_{t>0} t\, g(D) + \operatorname{int} Z_+ \Big) \times \Big( \bigcup_{t>0} t\, h(D) + \operatorname{int} W_+ \Big)$$
is convex.
From assumption (c), $\operatorname{int} C \neq \emptyset$. We also have $(0_Y, 0_Z, 0_W) \notin C$ since (i) has no solution. Therefore, according to the separation theorem for convex sets in topological vector spaces, $\exists$ a nonzero vector $(\xi, \eta, \varsigma) \in Y^* \times Z^* \times W^*$ such that
$$\xi(t_1 f(x) + y_0) + \eta(t_2 g(x) + z_0) + \varsigma(t_3 h(x) + w_0) \geq 0,$$
for all $t_1, t_2, t_3 > 0$, $x \in D$, $y_0 \in \operatorname{int} Y_+$, $z_0 \in \operatorname{int} Z_+$, $w_0 \in \operatorname{int} W_+$.
Since $\operatorname{int} Y_+$, $\operatorname{int} Z_+$ and $\operatorname{int} W_+$ are convex cones, one gets
$$\xi(t_1 f(x) + \lambda_1 y_0) + \eta(t_2 g(x) + \lambda_2 z_0) + \varsigma(t_3 h(x) + \lambda_3 w_0) \geq 0,$$
$\forall x \in D$, $y_0 \in \operatorname{int} Y_+$, $z_0 \in \operatorname{int} Z_+$, $w_0 \in \operatorname{int} W_+$, $\lambda_i > 0$ and $t_i > 0$ $(i = 1, 2, 3)$.
Letting $\lambda_i \to 0$ $(i = 2, 3)$ and $t_i \to 0$ $(i = 1, 2, 3)$, one has
$$\xi(y_0) \geq 0, \quad \forall y_0 \in \operatorname{int} Y_+.$$
Therefore $\xi(y) \geq 0$, $\forall y \in Y_+$; hence $\xi \in Y_+^*$. Similarly, $\eta \in Z_+^*$ and $\varsigma \in W^*$.
Thus
$$(\xi, \eta, \varsigma) \in Y_+^* \times Z_+^* \times W^*.$$
Therefore,
$$\xi(f(x)) + \eta(g(x)) + \varsigma(h(x)) \geq 0, \quad \forall x \in D,$$
which means that (ii) has a solution.
On the other hand, suppose that (ii) has a solution $(\xi, \eta, \varsigma)$ with $\xi \neq 0_{Y^*}$, i.e.,
$$\xi(f(x)) + \eta(g(x)) + \varsigma(h(x)) \geq 0, \quad \forall x \in D.$$
We are going to prove that (i) has no solution. Otherwise, if (i) had a solution $\tilde{x} \in D$, then $f(\tilde{x}) < 0$, $g(\tilde{x}) \leq 0$, $h(\tilde{x}) = 0$. Hence, since $\xi \in Y_+^* \setminus \{0_{Y^*}\}$, one would have
$$\xi(f(\tilde{x})) + \eta(g(\tilde{x})) + \varsigma(h(\tilde{x})) < 0,$$
which is a contradiction. The proof is completed.
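A minimal scalar instance of Theorem 3.1 (a toy example of ours, not from the paper): $D = R$ with the usual cones, $f(x) = x^2 + 1$, $g(x) = x - 2$, $h(x) = 0$. System (i) has no solution because f is strictly positive everywhere, and $(\xi, \eta, \varsigma) = (1, 0, 0)$ solves system (ii):

```python
# Toy scalar instance of Theorem 3.1 (ours, not from the paper):
# f(x) = x^2 + 1, g(x) = x - 2, h(x) = 0 on D = R.  System (i) --
# f(x) < 0, g(x) <= 0, h(x) = 0 -- is infeasible since f > 0 everywhere,
# so by the theorem system (ii) has a solution; (xi, eta, sigma) = (1, 0, 0)
# works because xi*f + eta*g + sigma*h = x^2 + 1 >= 0 on all of D.
f = lambda x: x * x + 1
g = lambda x: x - 2
h = lambda x: 0.0
xi, eta, sigma = 1.0, 0.0, 0.0
grid = [-50 + i / 100 for i in range(10001)]     # sample of D
sys_i = any(f(x) < 0 and g(x) <= 0 and h(x) == 0 for x in grid)
sys_ii = all(xi * f(x) + eta * g(x) + sigma * h(x) >= 0 for x in grid)
print(sys_i, sys_ii)  # False True
```

Note that here $\xi \neq 0$, so the "moreover" direction of the theorem also applies and certifies the infeasibility of (i).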
We consider the following optimization problem with set-valued maps:
$$(\mathrm{VP}) \quad Y_+\text{-}\min f(x) \quad \text{s.t.} \quad g_i(x) \cap (-Z_{i+}) \neq \emptyset,\ i = 1, 2, \ldots, m; \quad 0 \in h_j(x),\ j = 1, 2, \ldots, n; \quad x \in D,$$
where $f: X \to 2^Y$, $g_i: X \to 2^{Z_i}$, $h_j: X \to 2^{W_j}$ are set-valued maps, $Z_{i+}$ is a closed convex cone in $Z_i$, and D is a nonempty subset of X.
Definition 3.2 (weakly efficient solution) A point $\bar{x} \in F$ is said to be a weakly efficient solution of (VP) if there exists no $x \in D$ satisfying $f(\bar{x}) > f(x)$, where $F := \{x \in D : g(x) \leq 0,\ h(x) = 0\}$ is the feasible set.
Let
$$P_{\min}[A, Y_+] = \{y \in A : (y - A) \cap \operatorname{int} Y_+ = \emptyset\}, \qquad P_{\max}[A, Y_+] = \{y \in A : (A - y) \cap \operatorname{int} Y_+ = \emptyset\}.$$
In the sequel, B(W, Y) denotes the set of all continuous linear mappings T from W to Y, and $B_+(Z, Y)$ denotes the set of all non-negative continuous linear mappings S from Z to Y, where non-negative means that $S(z) \in Y_+$, $\forall z \in Z_+$.
Let $L(x, S, T) = f(x) + S(g(x)) + T(h(x))$.
Definition 3.3 (vector saddle-point) $(\bar{x}, \bar{S}, \bar{T}) \in X \times B_+(Z, Y) \times B(W, Y)$ is said to be a vector saddle-point of $L(x, S, T)$ if
$$L(\bar{x}, \bar{S}, \bar{T}) \cap P_{\min}[L(X, \bar{S}, \bar{T}), Y_+] \cap P_{\max}[L(\bar{x}, B_+(Z, Y), B(W, Y)), Y_+] \neq \emptyset,$$
where
$$P_{\max}[L(\bar{x}, B_+(Z, Y), B(W, Y)), Y_+] = \bigcup \{P_{\max}[L(\bar{x}, S, T), Y_+] : (S, T) \in B_+(Z, Y) \times B(W, Y)\}.$$
Theorem 3.2 $(\bar{x}, \bar{S}, \bar{T}) \in X \times B_+(Z, Y) \times B(W, Y)$ is a vector saddle-point of $L(x, S, T)$ if and only if $\exists \bar{y} \in f(\bar{x})$, $\bar{z} \in g(\bar{x})$ such that
(i)
$\bar{y} \in P_{\min}[L(X, \bar{S}, \bar{T}), Y_+]$;
(ii)
$g(\bar{x}) \subseteq -Z_+$, $h(\bar{x}) = \{0\}$;
(iii)
$(f(\bar{x}) - \bar{y} - \bar{S}(\bar{z})) \cap \operatorname{int} Y_+ = \emptyset$.
Proof. The sufficiency. Suppose that conditions (i)-(iii) are satisfied. Since $h(\bar{x}) = \{0\}$, write $\bar{w} = 0$, so $\bar{T}(\bar{w}) = 0$. Note that $g(\bar{x}) \subseteq -Z_+$ and $h(\bar{x}) = \{0\}$ imply
$$S(g(\bar{x})) \subseteq -Y_+, \quad T(h(\bar{x})) = \{0\}, \quad \forall (S, T) \in B_+(Z, Y) \times B(W, Y), \qquad (3.1)$$
and condition (i) states that
$$\{\bar{y} - [f(X) + \bar{S}(g(X)) + \bar{T}(h(X))]\} \cap \operatorname{int} Y_+ = \emptyset.$$
So $Y_+ + \operatorname{int} Y_+ \subseteq \operatorname{int} Y_+$ and $-\bar{S}(\bar{z}) \in Y_+$ together imply
$$\{\bar{y} + \bar{S}(\bar{z}) + \bar{T}(\bar{w}) - [f(X) + \bar{S}(g(X)) + \bar{T}(h(X))]\} \cap \operatorname{int} Y_+ = \emptyset.$$
Hence
$$\bar{y} + \bar{S}(\bar{z}) + \bar{T}(\bar{w}) \in P_{\min}[L(X, \bar{S}, \bar{T}), Y_+].$$
On the other hand, since $(f(\bar{x}) - [\bar{y} + \bar{S}(\bar{z})]) \cap \operatorname{int} Y_+ = \emptyset$ by (iii), from (3.1) and from $\operatorname{int} Y_+ + Y_+ \subseteq \operatorname{int} Y_+$ we conclude that
$$\Big\{ \bigcup_{(S, T) \in B_+(Z, Y) \times B(W, Y)} [f(\bar{x}) + S(g(\bar{x})) + T(h(\bar{x}))] - [\bar{y} + \bar{S}(\bar{z}) + \bar{T}(\bar{w})] \Big\} \cap \operatorname{int} Y_+ = \emptyset.$$
Hence
$$\bar{y} + \bar{S}(\bar{z}) + \bar{T}(\bar{w}) \in P_{\max}[L(\bar{x}, B_+(Z, Y), B(W, Y)), Y_+].$$
Consequently,
$$L(\bar{x}, \bar{S}, \bar{T}) \cap P_{\min}[L(X, \bar{S}, \bar{T}), Y_+] \cap P_{\max}[L(\bar{x}, B_+(Z, Y), B(W, Y)), Y_+] \neq \emptyset.$$
Therefore $(\bar{x}, \bar{S}, \bar{T})$ is a vector saddle-point of $L(x, S, T)$.
The necessity. Assume that $(\bar{x}, \bar{S}, \bar{T}) \in X \times B_+(Z, Y) \times B(W, Y)$ is a vector saddle-point of $L(x, S, T)$. From Definition 3.3 one has
$$L(\bar{x}, \bar{S}, \bar{T}) \cap P_{\min}[L(X, \bar{S}, \bar{T}), Y_+] \cap P_{\max}[L(\bar{x}, B_+(Z, Y), B(W, Y)), Y_+] \neq \emptyset.$$
So $\exists \bar{y} \in f(\bar{x})$, $\bar{z} \in g(\bar{x})$, $\bar{w} \in h(\bar{x})$ with
$$\bar{y} + \bar{S}(\bar{z}) + \bar{T}(\bar{w}) \in L(\bar{x}, \bar{S}, \bar{T}) = f(\bar{x}) + \bar{S}(g(\bar{x})) + \bar{T}(h(\bar{x})),$$
such that
$$\{f(\bar{x}) + S(g(\bar{x})) + T(h(\bar{x})) - [\bar{y} + \bar{S}(\bar{z}) + \bar{T}(\bar{w})]\} \cap \operatorname{int} Y_+ = \emptyset, \quad \forall (S, T) \in B_+(Z, Y) \times B(W, Y), \qquad (3.2)$$
and
$$(\bar{y} + \bar{S}(\bar{z}) + \bar{T}(\bar{w}) - [f(X) + \bar{S}(g(X)) + \bar{T}(h(X))]) \cap \operatorname{int} Y_+ = \emptyset. \qquad (3.3)$$
Taking $T = \bar{T}$ in (3.2) we get
$$S(z) - \bar{S}(\bar{z}) \notin \operatorname{int} Y_+, \quad \forall z \in g(\bar{x}),\ \forall S \in B_+(Z, Y). \qquad (3.4)$$
We first show that $\bar{z} \in -Z_+$. Otherwise, since $0 \in -Z_+$, if $\bar{z} \notin -Z_+$ we would have $\bar{z} \neq 0$. Because $-Z_+$ is a closed convex set, by the separation theorem $\exists \eta \in Z^* \setminus \{0\}$ such that
$$\eta(\bar{z}) > \eta(-t z), \quad \forall z \in Z_+,\ \forall t > 0, \qquad (3.5)$$
i.e.,
$$\eta(z) > -\tfrac{1}{t} \eta(\bar{z}), \quad \forall z \in Z_+,\ \forall t > 0.$$
Letting $t \to \infty$ we obtain $\eta(z) \geq 0$, $\forall z \in Z_+$, which means that $\eta \in Z_+^* \setminus \{0\}$. Meanwhile, $0 \in -Z_+$ and (3.5) yield $\eta(\bar{z}) > 0$. Given $\tilde{y} \in \operatorname{int} Y_+$, let
$$S_1(z) = \frac{\eta(z)}{\eta(\bar{z})} \tilde{y} + \bar{S}(z).$$
Then $S_1 \in B_+(Z, Y)$ and
$$S_1(\bar{z}) - \bar{S}(\bar{z}) = \tilde{y} \in \operatorname{int} Y_+,$$
contradicting (3.4). Therefore
$$\bar{z} \in -Z_+.$$
Next, we prove that $g(\bar{x}) \subseteq -Z_+$. Otherwise, $\exists z_0 \in g(\bar{x})$ such that $z_0 \notin -Z_+$. Similar to the above, $\exists \eta_0 \in Z_+^* \setminus \{0\}$ with $\eta_0(z_0) > 0$. Given $\tilde{y} \in \operatorname{int} Y_+$, let
$$S_0(z) = \frac{\eta_0(z)}{\eta_0(z_0)} \tilde{y}.$$
Then $S_0 \in B_+(Z, Y)$ and $S_0(z_0) = \tilde{y} \in \operatorname{int} Y_+$. And we have proved that $\bar{z} \in -Z_+$, so $\bar{S}(\bar{z}) \in -Y_+$. Therefore
$$S_0(z_0) - \bar{S}(\bar{z}) \in \operatorname{int} Y_+ + Y_+ \subseteq \operatorname{int} Y_+,$$
again contradicting (3.4). Therefore $g(\bar{x}) \subseteq -Z_+$. Similarly, one has $h(\bar{x}) \subseteq W_+$. From (3.2) we get
$$[T(h(\bar{x})) - \bar{T}(\bar{w})] \cap \operatorname{int} Y_+ = \emptyset.$$
Hence
$$T(\bar{w}) - \bar{T}(\bar{w}) \notin \operatorname{int} Y_+, \quad \forall T \in B(W, Y). \qquad (3.6)$$
Similarly, from (3.2) again we have
$$T(w) - \bar{T}(\bar{w}) \notin \operatorname{int} Y_+, \quad \forall w \in h(\bar{x}),\ \forall T \in B(W, Y). \qquad (3.7)$$
If $\bar{w} \neq 0$, since $h(\bar{x}) \subseteq W_+$ and $W_+$ is a pointed cone, we have $-\bar{w} \notin W_+$. Because $W_+$ is a closed convex set, by the separation theorem $\exists \varsigma \in W^*$ such that
$$\varsigma(w) < \varsigma(\bar{w}), \quad \forall w \in W_+. \qquad (3.8)$$
So $\varsigma(\bar{w}) > 0$ since $0 \in W_+$. Taking $y_0 \in \operatorname{int} Y_+$, define $T_0 \in B(W, Y)$ by
$$T_0(w) = \frac{\varsigma(w)}{\varsigma(\bar{w})} y_0 + \bar{T}(w).$$
Then
$$T_0(\bar{w}) - \bar{T}(\bar{w}) = y_0 \in \operatorname{int} Y_+,$$
contradicting (3.6). Therefore $\bar{w} = 0$; thus
$$0 \in h(\bar{x}).$$
Now, we prove that $h(\bar{x}) = \{0\}$. Otherwise, $\exists w_0 \in h(\bar{x})$ with $w_0 \neq 0$; similar to (3.8), $\exists \varsigma_0 \in W^*$ such that $\varsigma_0(w) < \varsigma_0(w_0)$, $\forall w \in W_+$, so $\varsigma_0(w_0) > 0$. Given $y_0 \in \operatorname{int} Y_+$, define $T_0 \in B(W, Y)$ by
$$T_0(w) = \frac{\varsigma_0(w)}{\varsigma_0(w_0)} y_0.$$
Then $T_0(w_0) = y_0 \in \operatorname{int} Y_+$, i.e., noting that $\bar{w} = 0$,
$$T_0(w_0) - \bar{T}(\bar{w}) \in \operatorname{int} Y_+,$$
contradicting (3.7). Therefore we must have
$$h(\bar{x}) = \{0\}. \qquad (3.9)$$
Combining (3.2), (3.3) and (3.9), we conclude that
$$\bar{y} \in P_{\min}[L(X, \bar{S}, \bar{T}), Y_+]$$
and
$$(f(\bar{x}) - \bar{y} - \bar{S}(\bar{z})) \cap \operatorname{int} Y_+ = \emptyset.$$
We have proved that, if $(\bar{x}, \bar{S}, \bar{T}) \in X \times B_+(Z, Y) \times B(W, Y)$ is a vector saddle-point of $L(x, S, T)$, then conditions (i)-(iii) hold.
Theorem 3.3 If $(\bar{x}, \bar{S}, \bar{T}) \in X \times B_+(Z, Y) \times B(W, Y)$ is a vector saddle-point of $L(x, S, T)$, and if $0 \in \bar{S}(g(\bar{x}))$, then $\bar{x}$ is a weakly efficient solution of (VP).
Proof. Assume that $(\bar{x}, \bar{S}, \bar{T}) \in D \times B_+(Z, Y) \times B(W, Y)$ is a vector saddle-point of $L(x, S, T)$. From Theorem 3.2 we have
$$g(\bar{x}) \subseteq -Z_+, \quad h(\bar{x}) = \{0\}.$$
So $\bar{x} \in F$ (i.e., $\bar{x}$ is a feasible solution of (VP)), and $\exists \bar{y} \in f(\bar{x})$ such that $\bar{y} \in P_{\min}[L(X, \bar{S}, \bar{T}), Y_+]$, i.e.,
$$(\bar{y} - [f(X) + \bar{S}(g(X)) + \bar{T}(h(X))]) \cap \operatorname{int} Y_+ = \emptyset.$$
Thus
$$(\bar{y} - [f(D) + \bar{S}(g(\bar{x})) + \bar{T}(h(\bar{x}))]) \cap \operatorname{int} Y_+ = \emptyset. \qquad (3.11)$$
Since $0 \in \bar{S}(g(\bar{x}))$ and $\bar{T}(h(\bar{x})) = \{0\}$, by (3.11) one has
$$(\bar{y} - f(D)) \cap \operatorname{int} Y_+ = \emptyset.$$
Therefore, $\bar{x}$ is a weakly efficient solution of (VP).
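The saddle-point notion of Definition 3.3 and Theorem 3.3 can be tested on a classical single-valued scalar program (our toy example, not from the paper, with no equality constraint): minimize $f(x) = x^2$ subject to $g(x) = 1 - x \leq 0$ over $D = R$. The minimizer is $\bar{x} = 1$ with multiplier $\bar{S} = 2$, and $(\bar{x}, \bar{S})$ is a saddle point of $L(x, S) = f(x) + S\, g(x)$:

```python
# Scalar saddle-point check (our toy example, not from the paper):
# minimize f(x) = x^2 subject to g(x) = 1 - x <= 0 over D = R.  The optimum
# is xbar = 1 with multiplier Sbar = 2, and L(x, S) = x^2 + S*(1 - x)
# satisfies L(xbar, S) <= L(xbar, Sbar) <= L(x, Sbar) for all S >= 0, x in D.
L = lambda x, S: x * x + S * (1 - x)
xbar, Sbar = 1.0, 2.0
xs = [-50 + i / 100 for i in range(10001)]   # sample of D
Ss = [i / 10 for i in range(1001)]           # sample of multipliers S >= 0
min_side = all(L(xbar, Sbar) <= L(x, Sbar) for x in xs)   # xbar minimizes L(., Sbar)
max_side = all(L(xbar, S) <= L(xbar, Sbar) for S in Ss)   # Sbar maximizes L(xbar, .)
print(min_side, max_side)  # True True
```

Here $\bar{S}\, g(\bar{x}) = 2 \cdot 0 = 0$, so the hypothesis $0 \in \bar{S}(g(\bar{x}))$ of Theorem 3.3 holds, and indeed $\bar{x} = 1$ is the (weakly) efficient solution of the program.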

4. Vector Lagrangian Theorems

Definition 4.1 (vector Lagrangian map) The vector Lagrangian map $L: X \times B_+(Z, Y) \times B(W, Y) \to 2^Y$ of (VP) is defined by the set-valued map
$$L(x, S, T) = f(x) + S(g(x)) + T(h(x)).$$
Given $(S, T) \in B_+(Z, Y) \times B(W, Y)$, we consider the minimization problem induced by (VP):
$$(\mathrm{VP}_{ST}) \quad Y_+\text{-}\min L(x, S, T) \quad \text{s.t.} \quad x \in D.$$
Definition 4.2 (Slater Constraint Qualification (SC)) Let $\bar{x} \in F$. We say that (VP) satisfies the Slater Constraint Qualification at $\bar{x}$ if the following conditions hold:
(1)
$\exists x \in D$ such that $h_j(x) = 0$ and $g_i(x) < 0$;
(2)
$0 \in \operatorname{int} h_j(D)$ for all j.
According to the following Theorem 4.1, $(\mathrm{VP}_{ST})$ can also be considered as a dual problem of (VP).
Theorem 4.1 Let $\bar{x} \in D$. Assume that $f(x) - f(\bar{x})$, $g(x)$, $h(x)$ satisfy the generalized convexity condition (a), the generalized affineness condition (b), and the inner point condition (c) of Theorem 3.1, and that (VP) satisfies the Slater Constraint Qualification (SC). Then $\bar{x}$ is a weakly efficient solution of (VP) if and only if $\exists (S, T) \in B_+(Z, Y) \times B(W, Y)$ such that $\bar{x}$ is a weakly efficient solution of $(\mathrm{VP}_{ST})$.
Proof. Assume $\exists (S, T) \in B_+(Z, Y) \times B(W, Y)$ such that $\bar{x} \in D$ is a weakly efficient solution of $(\mathrm{VP}_{ST})$. Then there exist $\bar{y} \in f(\bar{x})$, $\bar{z} \in g(\bar{x})$, $\bar{w} \in h(\bar{x})$ such that
$$(\bar{y} + S(\bar{z}) + T(\bar{w}) - [f(D) + S(g(D)) + T(h(D))]) \cap \operatorname{int} Y_+ = \emptyset.$$
If $(\bar{y} - f(D)) \cap \operatorname{int} Y_+ \neq \emptyset$, then $\exists y \in f(D)$ such that $\bar{y} - y \in \operatorname{int} Y_+$, i.e.,
$$\bar{y} + S(\bar{z}) + T(\bar{w}) - [y + S(\bar{z}) + T(\bar{w})] \in \operatorname{int} Y_+,$$
which means that
$$(\bar{y} + S(\bar{z}) + T(\bar{w}) - [f(D) + S(g(D)) + T(h(D))]) \cap \operatorname{int} Y_+ \neq \emptyset,$$
which is a contradiction. Therefore
$$(\bar{y} - f(D)) \cap \operatorname{int} Y_+ = \emptyset.$$
Hence, $\bar{x} \in D$ is a weakly efficient solution of (VP).
Conversely, suppose that $\bar{x} \in D$ is a weakly efficient solution of (VP). So $\exists \bar{y} \in f(\bar{x})$ such that there is no $x \in D$ for which
$$f(x) - \bar{y} \subseteq -\operatorname{int} Y_+, \quad g(x) \subseteq -Z_+, \quad 0_W \in h(x).$$
By Theorem 3.1, $\exists (\xi, \eta, \varsigma) \in (Y_+^* \times Z_+^* \times W^*) \setminus \{(0_{Y^*}, 0_{Z^*}, 0_{W^*})\}$ such that
$$\xi(f(x) - \bar{y}) + \eta(g(x)) + \varsigma(h(x)) \geq 0, \quad \forall x \in D. \qquad (4.1)$$
Since $\bar{y} \in f(\bar{x})$ and $0_W \in h(\bar{x})$, taking $x = \bar{x}$ in (4.1) we obtain
$$\eta(g(\bar{x})) \geq 0.$$
But $\bar{x} \in F$ and $\eta \in Z_+^*$ imply that $\exists \bar{z} \in g(\bar{x}) \cap (-Z_+)$ for which
$$\eta(\bar{z}) \leq 0.$$
Hence $\eta(\bar{z}) = 0$, which means
$$0 \in \eta(g(\bar{x})). \qquad (4.2)$$
Since $x \in F$ implies $0_W \in h(x)$, and $g(x) \cap (-Z_+) \neq \emptyset$ implies $\exists z \in g(x) \cap (-Z_+)$ such that $\eta(z) \leq 0$, we have
$$\xi(f(x) - \bar{y}) \geq 0, \quad \forall x \in D.$$
Because the Slater Constraint Qualification is satisfied, similar to the proof of Theorem 3.2, we have $\xi \neq 0_{Y^*}$. So we may take $y_0 \in \operatorname{int} Y_+$ such that
$$\xi(y_0) = 1. \qquad (4.3)$$
Define the operators $S: Z \to Y$ and $T: W \to Y$ by
$$S(z) = \eta(z) y_0, \qquad T(w) = \varsigma(w) y_0.$$
It is easy to see that
$$S \in B_+(Z, Y), \quad S(Z_+) = \eta(Z_+) y_0 \subseteq Y_+, \quad T \in B(W, Y).$$
And (4.2) implies
$$0_Y = 0 \cdot y_0 \in \eta(g(\bar{x})) y_0 = S(g(\bar{x})). \qquad (4.4)$$
Since $\bar{x} \in F$, we have $0_W \in h(\bar{x})$; hence
$$0_Y \in T(h(\bar{x})). \qquad (4.5)$$
Therefore, by (4.4) and (4.5) one gets
$$\bar{y} \in f(\bar{x}) \subseteq f(\bar{x}) + S(g(\bar{x})) + T(h(\bar{x})).$$
From (4.1) and (4.3),
$$\xi[f(x) + S(g(x)) + T(h(x))] = \xi(f(x)) + \eta(g(x)) \xi(y_0) + \varsigma(h(x)) \xi(y_0) = \xi(f(x)) + \eta(g(x)) + \varsigma(h(x)) \geq \xi(\bar{y}), \quad \forall x \in D,$$
i.e.,
$$\xi[f(x) - \bar{y} + S(g(x)) + T(h(x))] \geq 0, \quad \forall x \in D. \qquad (4.6)$$
Taking $F(x) = f(x) + S(g(x)) + T(h(x))$, $G(x) = \{0_Z\}$ and $H(x) = \{0_W\}$, and applying Theorem 3.1 to the functions $F(x) - \bar{y}$, $G(x)$, $H(x)$, (4.6) deduces that
$$(\bar{y} - [f(D) + S(g(D)) + T(h(D))]) \cap \operatorname{int} Y_+ = \emptyset,$$
and
$$\bar{y} \in F(\bar{x}) = f(\bar{x}) + S(g(\bar{x})) + T(h(\bar{x})),$$
since $0_Y \in S(g(\bar{x}))$ and $0_Y \in T(h(\bar{x}))$.
Consequently, $\bar{x} \in D$ is a weakly efficient solution of $(\mathrm{VP}_{ST})$. This completes the proof.
Definition 4.3 (NNAMCQ) Let $\bar{x} \in F$. We say that (VP) satisfies the No Nonzero Abnormal Multiplier Constraint Qualification (NNAMCQ) at $\bar{x}$ if there is no nonzero vector $(\eta, \varsigma) \in \Pi_{i=1}^m Z_i^* \times \Pi_{j=1}^n W_j^*$ satisfying the system
$$\min_{x \in D \cap U(\bar{x})} \Big[ \sum_{i=1}^m \eta_i(g_i(x)) + \sum_{j=1}^n \varsigma_j(h_j(x)) \Big] = 0, \qquad \sum_{i=1}^m \eta_i(g_i(\bar{x})) = 0,$$
where $U(\bar{x})$ is some neighborhood of $\bar{x}$.
Similar to the proof of Theorem 4.1, one has Theorem 4.2.
Theorem 4.2 Let $\bar{x} \in D$. Assume that $f(x) - f(\bar{x})$, $g(x)$, $h(x)$ satisfy the generalized convexity condition (a), the generalized affineness condition (b), and the inner point condition (c). If $\bar{x}$ is a weakly efficient solution of (VP), then there exists a vector Lagrangian multiplier $(S, T) \in B_+(Z, Y) \times B(W, Y)$ such that $\bar{x}$ is a weakly efficient solution of $(\mathrm{VP}_{ST})$. Conversely, if (NNAMCQ) holds at $\bar{x} \in D$ and there exists a vector Lagrangian multiplier $(S, T) \in B_+(Z, Y) \times B(W, Y)$ such that $\bar{x}$ is a weakly efficient solution of $(\mathrm{VP}_{ST})$, then $\bar{x}$ is a weakly efficient solution of (VP).

5. Conclusions

Jeyakumar [1] introduced the following definition of sub convexlike functions (stated here for set-valued maps).
Let Y be a topological vector space and $D \subseteq X$ a nonempty set. A set-valued map $f: D \to 2^Y$ is said to be $Y_+$-sub convexlike on D if $\exists$ a bounded set-valued map $u: D \to Y$ such that $\forall x_1, x_2 \in D$, $\forall \varepsilon > 0$, $\forall \alpha \in [0,1]$, $\exists x_3 \in D$ with
$$\varepsilon u + \alpha f(x_1) + (1-\alpha) f(x_2) \geq_{Y_+} f(x_3),$$
where the partial order is induced by a convex cone $Y_+$ of Y.
Jeyakumar [2] introduced the following subconvexlikeness.
A set-valued map $f: D \to 2^Y$ is said to be $Y_+$-subconvexlike on D if $\exists \theta \in \operatorname{int} Y_+$ such that $\forall x_1, x_2 \in D$, $\forall \varepsilon > 0$, $\forall \alpha \in [0,1]$, $\exists x_3 \in D$ there holds
$$\varepsilon \theta + \alpha f(x_1) + (1-\alpha) f(x_2) \geq_{Y_+} f(x_3).$$
In this paper, we prove that the above two generalized convexities are equivalent.
A set-valued map $f: D \to 2^Y$ is said to be affine on D if $\forall x_1, x_2 \in D$, $\forall \beta \in R$, there holds
$$\beta f(x_1) + (1-\beta) f(x_2) = f(\beta x_1 + (1-\beta) x_2).$$
We defined sub affinelike maps as follows, in order to weaken the "equality constraints" condition for optimization problems.
A set-valued map $f: D \to 2^Y$ is said to be $Y_+$-sub affinelike on D if $\forall x_1, x_2 \in D$, $\forall \alpha \in (0,1)$, $\exists v \in \operatorname{int} Y_+$, $\exists x_3 \in D$ there holds
$$v + \alpha f(x_1) + (1-\alpha) f(x_2) = f(x_3).$$
We then considered the following optimization problem with set-valued maps:
$$(\mathrm{VP}) \quad Y_+\text{-}\min f(x) \quad \text{s.t.} \quad g_i(x) \cap (-Z_{i+}) \neq \emptyset,\ i = 1, 2, \ldots, m; \quad 0 \in h_j(x),\ j = 1, 2, \ldots, n; \quad x \in D,$$
where $f: X \to 2^Y$ and $g_i: X \to 2^{Z_i}$ are sub convexlike, and $h_j: X \to 2^{W_j}$ are sub affinelike.
In the single-valued situation, the above optimization problem (VP) may be written as follows:
$$Y_+\text{-}\min f(x) \quad \text{s.t.} \quad g_i(x) \leq 0,\ i = 1, 2, \ldots, m; \quad h_j(x) = 0,\ j = 1, 2, \ldots, n; \quad x \in D.$$
We obtained some vector saddle-point theorems and some vector Lagrangian theorems for the set-valued optimization problem (VP). Our Theorem 3.1 is a generalization of the theorems of the alternative in [1,2] and a modification of the theorems of the alternative in [5,10,11,17,18]. Our saddle-point theorems (Theorems 3.2 and 3.3) are generalizations of the saddle-point theorems in [16,20] and modifications of the saddle-point theorems in [4]. Our Lagrangian theorems (Theorems 4.1 and 4.2) are generalizations of the Lagrangian theorems in [16] and modifications of those in [19].

References

  1. V. Jeyakumar, Convexlike Alternative Theorems and Mathematical Programming, Optimization, 16(1985), pp. 643-652. [CrossRef]
  2. V. Jeyakumar, A Generalization of a Minimax Theorem of Fan via a Theorem of the Alternative, J. Optim. Theory Appl., 48 (1986), pp. 525-533. [CrossRef]
  3. K. Fan, Minimax theorems, Proc. Nat. Acad. Sci., 39(1953), pp. 42-47. [CrossRef]
  4. C. Gutiérrez, and L. Huerga, Scalarization and Saddle Points of Approximate Proper Solutions in Nearly Subconvexlike Vector Optimization Problems, J. Math. Anal. Appl., 2(2012)389, pp. 1046-1058. [CrossRef]
  5. Z.-A. Zhou, and J.-W. Peng, A Generalized Alternative Theorem of Partial and Generalized Cone Subconvexlike Set-Valued Maps and Its Applications in Linear Spaces, J. Appl. Math., Volume 2012, Article ID 370654. [CrossRef]
  6. Y. D. Xu and S. J. Li. Tightly Proper Efficiency in Vector Optimization with Nearly Cone-Subconvexlike Set-Valued Maps, J. Inequal. Appl., 2011, 839679. [CrossRef]
  7. L. Y. Xia, and J. H. Qiu, Superefficiency in Vector Optimization with Nearly Subconvexlike Set-Valued Maps, J. Optim. Theory Appl., 1(2008)136, pp. 125-137. [CrossRef]
  8. J. Chen, Y. Xu, and K. Zhang, Approximate Weakly Efficient Solutions of Set-Valued Vector Equilibrium Problems, J. Inequal. Appl., 2018(181). [CrossRef]
  9. Z.-A. Zhou, X.-M. Yang, and J.-W. Peng, Optimality Conditions of Set-Valued Optimization Problem Involving Relative Algebraic Interior in Ordered Linear Spaces, Optimization, 3(2014)63, pp. 433-446. [CrossRef]
  10. E. Hernández, B. Jiménez, and V. Novo, Weak and Proper Efficiency in Set-Valued Optimization on Real Linear Spaces, J. Convex Anal., 2(2007)14, pp. 275-296.
  11. C. H. Yuan, and W. D. Rong, 𝜀-Properly Efficiency of Multiobjective Semidefinite Programming with Set Valued Functions, Math. Probl. Eng., Volume 2017, Article ID 5978130. [CrossRef]
  12. Z. Li, The Optimality Conditions of Differentiable Vector Optimization Problems, J. Math. Anal. Appl., (1996)201, pp.35-43. [CrossRef]
  13. J. J. Ye and X. Y. Ye, Necessary Optimality Conditions for Optimization Problems with Variational Inequality Constraints, Math. Oper. Res., 22(1997), pp. 977-997. [CrossRef]
  14. J. J. Ye and Q. J. Zhu, Multiobjective Optimization Problem with Variational Inequality Constraints, Math. Program. Ser. A, 96(2003), pp. 139-160. [CrossRef]
  15. K. Yosida, Functional Analysis, Springer-Verlag, Berlin, 1978.
  16. Li Z. F., and Wang S. Y., Lagrange Multipliers and Saddle Points in Multiobjective Programming, J. Optim. Theo. Appl., 83(1994)1, pp.63-81. [CrossRef]
  17. Y. W. Huang, A Farkas-Minkowski Type Alternative Theorem and Its Applications to Set-Valued Equilibrium Problems, J. Nonlinear and Con. Anal., 1(2002)3, pp.100-118.
  18. M. R. Galán, A Theorem of the Alternative with an Arbitrary Number of Inequalities and Quadratic Programming, J. Glob. Optim., 69(2017)2, pp.427–442. [CrossRef]
  19. Y. Zhou, J. C. Zhou, X. Q. Yang, Existence of Augmented Lagrange Multipliers for Cone Constrained Optimization Problems, J. Glob. Optim., (2014)58, pp.243–260. [CrossRef]
  20. C. Gutiérrez, L. Huerga, and V. Novo, Scalarization and Saddle Points of Approximate Proper Solutions in Nearly Subconvexlike Vector Optimization Problems, J. Math. Anal. Appl. 389 (2012), pp. 1046-1058. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.