1. Introduction
Let $F: H \to H$ be a mapping that is Lipschitz continuous and strongly monotone, and let $\mathrm{Fix}(T)$ denote the fixed point set of a mapping $T$. For $1 \le i \le t$, suppose that $S_i: H \to H$ are nonlinear mappings such that $\Omega := \bigcap_{i=1}^{t} \mathrm{Fix}(S_i) \neq \emptyset$. The variational inequality problem defined on the solution set of a common fixed point problem, also known as the hierarchical variational inequality problem (HVIP, see also [1]), is defined as follows: find $x^* \in \Omega$ such that
$$\langle F x^*, x - x^* \rangle \ge 0 \quad \text{for all } x \in \Omega,$$
where $\langle \cdot, \cdot \rangle$ denotes the inner product in the Hilbert space $H$ and $\Omega$ represents the common fixed point set. This type of HVIP is pivotal in various practical applications, such as signal recovery, power control, bandwidth allocation, optimal control, network localization, beamforming, and machine learning [2,3]. The broad applicability of the HVIP has sparked significant research interest in recent years, leading to numerous studies exploring its various facets. For more detail, one can refer to [1,2,3] and the references cited therein.
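To make the HVIP concrete, the following is a minimal numerical sketch (our own illustration, not the algorithm proposed in this paper): the common fixed point set is taken to be the closed unit ball, i.e. the fixed point set of its metric projection, and $F(x) = x - b$, which is 1-Lipschitz continuous and 1-strongly monotone; a Halpern-type hybrid steepest-descent iteration with diminishing steps then converges to the unique solution of the variational inequality over the ball.

```python
import numpy as np

def proj_unit_ball(x):
    """Metric projection onto the closed unit ball; its fixed point set is the ball."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

b = np.array([3.0, 4.0])
F = lambda x: x - b        # 1-Lipschitz and 1-strongly monotone

x = np.zeros(2)
for n in range(20000):
    lam = 1.0 / (n + 2)    # diminishing steps: lam -> 0 and sum(lam) = inf
    y = proj_unit_ball(x)  # enforce the fixed point constraint
    x = y - lam * F(y)     # hybrid steepest-descent step

# x is now within about 2e-4 of the solution (0.6, 0.8)
```

With $b = (3, 4)$, the unique HVIP solution over the unit ball is $b/\|b\| = (0.6, 0.8)$, and in this example the error decays like $4/(n+2)$, the $O(1/n)$ rate typical of diminishing step sizes.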
As is well known, the inverse problem is a fundamental challenge in computational mathematics, with a wide range of applications. Over the past few decades, the study of inverse problems has developed rapidly, finding applications in diverse fields such as computer vision, tomography, machine learning, physics, medical imaging, remote sensing, statistics, ocean acoustics, aviation, and geography. In 1994, Censor et al. [4] introduced the split feasibility problem (SFP) as a model for certain types of inverse problems. Specifically, let $H_1$ and $H_2$ be two real Hilbert spaces. Then the SFP can be formulated as: find $x \in C$ such that $Ax \in Q$, where $C \subseteq H_1$ and $Q \subseteq H_2$ are nonempty closed convex subsets, and $A: H_1 \to H_2$ is a bounded linear operator. The SFP has become a crucial tool in a range of applications, such as computed tomography, image restoration, signal processing, and intensity-modulated radiation therapy (IMRT) [5,6,7,8,9,10,11,12,13,14,15,16,17]. Motivated by challenges in inverse problems related to IMRT, Censor et al. [6] then generalized the SFP to the solution set of a common fixed point problem and proposed the following multiple-sets split feasibility problem (MSFP): find a point $x^*$ such that
$$x^* \in \bigcap_{i=1}^{p} C_i \quad \text{and} \quad Ax^* \in \bigcap_{j=1}^{r} Q_j, \tag{1}$$
where $p$ and $r$ are integers with $p, r \ge 1$, and $C_i$ ($1 \le i \le p$) and $Q_j$ ($1 \le j \le r$) represent nonempty, closed, convex subsets of the Hilbert spaces $H_1$ and $H_2$, respectively. When $p = r = 1$, the MSFP (1) reduces to the SFP. As an extension of the convex feasibility problem, the SFP, and the MSFP, the split common fixed point problem (SCFP) was later introduced by Censor and Segal [8] in 2009. The SCFP seeks to find $x^* \in \mathrm{Fix}(U)$ such that $Ax^* \in \mathrm{Fix}(T)$, where $A: H_1 \to H_2$ is a bounded linear operator, and $U: H_1 \to H_1$ and $T: H_2 \to H_2$ are two nonlinear mappings. The SCFP has attracted significant research interest owing to its wide array of applications, such as signal processing, image reconstruction, IMRT, inverse problem modeling, and electron microscopy [8,15].
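For intuition, the classical CQ algorithm of Byrne, a standard solver for the SFP (and not the method proposed in this paper), iterates $x_{n+1} = P_C\big(x_n - \gamma A^*(I - P_Q)Ax_n\big)$ with $\gamma \in (0, 2/\|A\|^2)$. A minimal sketch with illustrative choices of $C$, $Q$, and $A$ of our own:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 2.0]])                # bounded linear operator, ||A||^2 = 4
proj_C = lambda x: np.maximum(x, 0.0)     # C = nonnegative orthant
proj_Q = lambda y: np.minimum(y, 1.0)     # Q = {y : y <= 1 componentwise}

gamma = 0.4                               # gamma in (0, 2/||A||^2) = (0, 0.5)
x = np.array([3.0, 3.0])
for _ in range(500):
    r = A @ x - proj_Q(A @ x)             # residual (I - P_Q) A x
    x = proj_C(x - gamma * A.T @ r)       # CQ step

# x now lies in C with A x in Q (up to numerical tolerance)
```

Note that the step size here requires knowing $\|A\|$; the self-adaptive rules discussed later remove exactly this requirement.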
Next, we turn our attention to a class of multiple-sets split common fixed point problems (MSCFPs), which generalize SCFPs and are recognized for their extensive applications in fields such as image reconstruction, computed tomography, and radiotherapy treatment planning [4,5,6,7]. In recent years, the MSCFP has attracted considerable interest from researchers because of its broad applicability (see [1,18,19]). Formally, the MSCFP seeks to find $x^*$ such that
$$x^* \in \bigcap_{i=1}^{p} \mathrm{Fix}(U_i) \quad \text{and} \quad Ax^* \in \bigcap_{j=1}^{r} \mathrm{Fix}(T_j), \tag{2}$$
where $A: H_1 \to H_2$ is a bounded linear operator, and for $1 \le i \le p$ and $1 \le j \le r$, $\mathrm{Fix}(U_i)$ and $\mathrm{Fix}(T_j)$ denote the fixed point sets of the nonlinear mappings $U_i: H_1 \to H_1$ and $T_j: H_2 \to H_2$, respectively. Notably, when $p = r = 1$, the MSCFP reduces to the SCFP. Furthermore, if $U_i$ and $T_j$ are, respectively, the projection operators onto nonempty closed convex subsets $C_i$ and $Q_j$, the MSCFP (2) simplifies to the MSFP (1).
On the other hand, in order to address the MSCFP (2) and circumvent the aforementioned challenges, Wang and Xu [20] introduced a cyclic iterative algorithm designed to solve the MSCFP for directed operators, in which the operators $U_i$ and $T_j$ are activated cyclically and the step size is chosen in terms of the operator norm $\|A\|$. They established the weak convergence of the generated sequence $\{x_n\}$.
It is important to note that many existing algorithms rely on the calculation of operator norms to determine an appropriate step size. In practical applications, however, obtaining the operator norm is often challenging because of its complexity and high computational cost. While such a step size may be theoretically optimal, it poses significant difficulties in practice and complicates the implementation of these algorithms. To overcome this difficulty, Gupta et al. [18] and Zhao et al. [19] introduced cyclic iterative algorithms whose step sizes do not depend on the operator norm. In particular, in the algorithm proposed by Zhao et al. [19], the step size is computed adaptively from quantities available at the current iterate, with $U_i$ and $T_j$ being quasi-nonexpansive operators. Under suitable conditions, the sequence $\{x_n\}$ produced by this algorithm converges weakly to a solution of the MSCFP, offering a practical advantage by eliminating the need to compute the operator norm.
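The norm-free idea can be sketched as follows; the step-size rule below is a generic self-adaptive choice of the kind used in this literature, not necessarily the exact formula of Zhao et al. [19]: the step is computed from the current residual, so $\|A\|$ never enters the iteration.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 1.0]])                  # ||A|| is never needed below
proj_C = lambda x: np.maximum(x, 0.0)       # C = nonnegative orthant
proj_Q = lambda y: np.minimum(y, 1.0)       # Q = {y : y <= 1 componentwise}

x = np.array([2.0, 2.0])
for _ in range(5000):
    r = A @ x - proj_Q(A @ x)               # residual (I - P_Q) A x
    g = A.T @ r
    if np.linalg.norm(g) < 1e-12:           # A x already (numerically) in Q
        break
    tau = 0.5 * np.linalg.norm(r)**2 / np.linalg.norm(g)**2   # self-adaptive step
    x = proj_C(x - tau * g)
```

The ratio $\|r\|^2/\|g\|^2$ plays the role that $1/\|A\|^2$ plays in norm-dependent schemes, but is available for free at each iterate.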
Moreover, over the past few years, there has been significant activity and progress in the field of MSCFP, driven by the need to broaden its applicability to more general operators. Researchers have extended the framework to include various types of operators, such as quasi-nonexpansive mappings, firmly quasi-nonexpansive mappings [19], demicontractive mappings [18], and directed mappings [21]. These advances have allowed for more robust and versatile solutions to the MSCFP across different application domains. In this paper, we focus on quasi-pseudocontractive operators, which further expand the scope of the MSCFP to a broader class of nonlinear mappings, including demicontractive mappings, directed mappings, and firmly quasi-nonexpansive mappings (see [22]). By examining quasi-pseudocontractive operators, we aim to provide new insights and extend the applicability of the MSCFP to a wider range of practical problems.
Recently, there has been an increasing focus on constructing iterative schemes that achieve faster convergence rates, particularly in the context of solving fixed point and optimization problems. Inertial techniques, which discretely simulate second-order dissipative dynamical systems, have gained recognition for their ability to accelerate the convergence of iterative methods. The most prevalent method in this category is the single-step inertial extrapolation, given by $w_n = x_n + \theta_n(x_n - x_{n-1})$. Since the introduction of inertial-type algorithms, many researchers have incorporated the inertial term $\theta_n(x_n - x_{n-1})$ into various iterative schemes, such as the Mann, Krasnoselskii, Halpern, and viscosity methods, to approximate solutions of fixed point and optimization problems. While most studies have established weak convergence results, achieving strong convergence remains challenging and relatively rare. Recently, Kim [23] and Maingé [24] explored the inertial extrapolation technique, incorporating an extra correction term, in the context of optimization problems. This correction term plays a crucial role in enhancing the acceleration rate of the algorithm, and their studies have demonstrated promising weak convergence results, paving the way for further exploration in this direction.
Inspired by recent advancements in iterative methods and motivated by the need for more efficient algorithms, the purpose of this paper is to explore a class of novel self-adaptive cyclic iterative algorithms for approximating solutions of HVIPs for quasi-pseudocontractive mappings. Our iterative scheme integrates the inertial approach, combined with correction terms, and the primal-dual approach, enhancing the convergence rate of the iterative process. A key feature of the algorithms presented in this paper is their use of a self-adaptive step size strategy, which can be implemented without prior knowledge of the operator norm, making them more practical for real-world applications. By imposing appropriate control conditions on the relevant parameters, we establish that the iterates converge strongly to the unique solution of the hierarchical variational inequality problem under consideration. We further apply our theoretical results to the multiple-sets split feasibility problem, demonstrating the broad applicability of our approach. To confirm the effectiveness of the proposed algorithm, we provide numerical examples that illustrate its superior performance in comparison with existing methods.
2. Preliminaries
In this paper, the inner product is denoted by $\langle \cdot, \cdot \rangle$ and the norm by $\|\cdot\|$. The identity operator on a Hilbert space $H$ is denoted by $I$. We denote the fixed point set of an operator $T$ by $\mathrm{Fix}(T)$. Strong convergence is indicated by $\to$, while weak convergence is represented by $\rightharpoonup$. The weak $\omega$-limit set of the sequence $\{x_n\}$ is denoted by $\omega_w(x_n)$.
Definition 2.1. Let $H$ be a real Hilbert space. Then for all $x, y \in H$ and $\lambda \in [0, 1]$, one has
(i) $\|x + y\|^2 = \|x\|^2 + 2\langle x, y \rangle + \|y\|^2$;
(ii) $\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y \rangle$;
(iii) $\|\lambda x + (1 - \lambda)y\|^2 = \lambda\|x\|^2 + (1 - \lambda)\|y\|^2 - \lambda(1 - \lambda)\|x - y\|^2$.
Definition 2.2.
A mapping $F: H \to H$ is termed $l$-Lipschitz continuous provided that there exists a constant $l > 0$ such that
$$\|Fx - Fy\| \le l\|x - y\| \quad \text{for all } x, y \in H.$$
The mapping $F$ is referred to as $\tau$-strongly monotone if a constant $\tau > 0$ can be found such that
$$\langle Fx - Fy, x - y \rangle \ge \tau\|x - y\|^2 \quad \text{for all } x, y \in H.$$
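For a concrete instance (our own illustration): the affine map $F(x) = Mx$ with a symmetric positive definite matrix $M$ is $l$-Lipschitz continuous with $l = \lambda_{\max}(M)$ and $\tau$-strongly monotone with $\tau = \lambda_{\min}(M)$. A quick numerical check:

```python
import numpy as np

M = np.array([[2.0, 0.0],
              [0.0, 1.0]])          # symmetric positive definite
F = lambda x: M @ x
l, tau = 2.0, 1.0                   # largest and smallest eigenvalues of M

rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.normal(size=2), rng.normal(size=2)
    d = x - y
    # l-Lipschitz continuity: ||Fx - Fy|| <= l ||x - y||
    assert np.linalg.norm(F(x) - F(y)) <= l * np.linalg.norm(d) + 1e-12
    # tau-strong monotonicity: <Fx - Fy, x - y> >= tau ||x - y||^2
    assert (F(x) - F(y)) @ d >= tau * d @ d - 1e-12
```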
Lemma 2.1. ([25])
Let $T: H \to H$ be an $l$-Lipschitzian mapping with $\mathrm{Fix}(T) \neq \emptyset$. Denote
$$K := (1 - \xi)I + \xi T\big((1 - \eta)I + \eta T\big).$$
If $0 < \xi < \eta < \frac{1}{1 + \sqrt{1 + l^2}}$, then the following conclusions hold:
(i) $\mathrm{Fix}(T) = \mathrm{Fix}\big(T((1 - \eta)I + \eta T)\big) = \mathrm{Fix}(K)$;
(ii) if $I - T$ is demiclosed at 0, then $I - K$ is also demiclosed at 0;
(iii) in addition, if $T$ is quasi-pseudocontractive, then the mapping $K$ is quasi-nonexpansive, that is,
$$\|Kx - p\| \le \|x - p\| \quad \text{for all } x \in H,\ p \in \mathrm{Fix}(T).$$
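As a numerical sanity check of the last assertion, assume the averaged composition $K = (1 - \xi)I + \xi T((1 - \eta)I + \eta T)$ that is standard in this line of work, and take $Tx = -2x$, which is 2-Lipschitz and quasi-pseudocontractive with $\mathrm{Fix}(T) = \{0\}$; the parameter bound reads $0 < \xi < \eta < 1/(1 + \sqrt{1 + l^2}) = 1/(1 + \sqrt{5}) \approx 0.309$.

```python
import numpy as np

T = lambda x: -2.0 * x      # 2-Lipschitz, quasi-pseudocontractive, Fix(T) = {0}
xi, eta = 0.1, 0.2          # 0 < xi < eta < 1/(1 + sqrt(5)) ~ 0.309

def K(x):
    """K = (1 - xi) I + xi * T((1 - eta) I + eta T)."""
    inner = (1.0 - eta) * x + eta * T(x)
    return (1.0 - xi) * x + xi * T(inner)

rng = np.random.default_rng(1)
for _ in range(100):
    x = rng.normal(size=3)
    # quasi-nonexpansiveness with respect to p = 0: ||Kx - 0|| <= ||x - 0||
    assert np.linalg.norm(K(x)) <= np.linalg.norm(x) + 1e-12
```

Here $K$ reduces to $Kx = 0.82x$, so $K$ is in fact a contraction with the same fixed point set as $T$.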
Lemma 2.2. Let be a real Hilbert space, be both l-Lipschitzian and quasi-pseudocontractive with coefficient , , where . Setting for , then for all , , the following results hold:
(i) and ,
(ii) ,
(iii) .
Proof. By Lemma 2.1, it is known that is quasi-nonexpansive and satisfies , for all and .
(i) Combining the classical identity, we have
Similarly, we can obtain
.
(ii) It follows from (i) that
(iii) By (i) we have
□
Remark 2.1.
Let for , where with , be both l-Lipschitzian and quasi-pseudocontractive with . We have and . It follows from (ii) of Lemma 2.2 that , which implies that is firmly quasi-nonexpansive when . On the other hand, if is a firmly quasi-nonexpansive operator, we can easily obtain , where is a quasi-nonexpansive operator.
Lemma 2.3. Let be both l-Lipschitzian and quasi-pseudocontractive with and . If with , , then for all .
Proof. By Lemma 2.1, it is known that is quasi-nonexpansive and satisfies . It follows easily from (iii) of Lemma 2.2. □
Lemma 2.4. ([26])
Let the operator $F: H \to H$ be $l$-Lipschitz continuous and $\tau$-strongly monotone with constants $l, \tau > 0$. Assume that $0 < \mu < 2\tau/l^2$. Define $T_\lambda x := x - \lambda\mu Fx$ for $\lambda \in (0, 1)$. Then for all $x, y \in H$,
$$\|T_\lambda x - T_\lambda y\| \le (1 - \lambda\nu)\|x - y\|$$
holds, where $\nu = 1 - \sqrt{1 - \mu(2\tau - \mu l^2)} \in (0, 1]$.
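A quick numerical check of this contraction property (assuming the standard mapping $T_\lambda x = x - \lambda\mu Fx$ and modulus $\nu = 1 - \sqrt{1 - \mu(2\tau - \mu l^2)}$ from this classical lemma), using the affine example $F(x) = Mx$ with $l = 2$ and $\tau = 1$:

```python
import numpy as np

M = np.array([[2.0, 0.0],
              [0.0, 1.0]])                 # F(x) = Mx: l = 2, tau = 1
F = lambda x: M @ x
l, tau = 2.0, 1.0

mu = 0.4                                   # 0 < mu < 2 * tau / l**2 = 0.5
nu = 1.0 - np.sqrt(1.0 - mu * (2.0 * tau - mu * l**2))   # contraction modulus

lam = 0.5                                  # lambda in (0, 1)
T_lam = lambda x: x - lam * mu * F(x)

rng = np.random.default_rng(2)
for _ in range(100):
    x, y = rng.normal(size=2), rng.normal(size=2)
    lhs = np.linalg.norm(T_lam(x) - T_lam(y))
    rhs = (1.0 - lam * nu) * np.linalg.norm(x - y)
    assert lhs <= rhs + 1e-12   # ||T_lam x - T_lam y|| <= (1 - lam * nu)||x - y||
```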
Lemma 2.5. ([1])
Suppose that $\{a_n\}$ is a sequence of nonnegative real numbers satisfying the condition that
$$a_{n+1} \le (1 - \gamma_n)a_n + \gamma_n b_n + c_n, \quad n \ge 0,$$
where $\{\gamma_n\} \subset (0, 1)$, $c_n \ge 0$, and $\{b_n\}$ and $\{c_n\}$ are two sequences in $\mathbb{R}$ such that
(i) $\sum_{n=0}^{\infty} \gamma_n = \infty$; (ii) $\sum_{n=0}^{\infty} c_n < \infty$;
(iii) $\liminf_{k \to \infty}(a_{n_k+1} - a_{n_k}) \ge 0$ implies $\limsup_{k \to \infty} b_{n_k} \le 0$ for any subsequence $\{a_{n_k}\}$ of $\{a_n\}$.
Then $\lim_{n \to \infty} a_n = 0$.
3. Main Results
In this section, we present our novel accelerated cyclic iterative approximation algorithm and convergence analysis. We begin by outlining the assumptions necessary for achieving strong convergence.
Assumption 3.1. Let and be real Hilbert spaces. Additionally, we assume that the following conditions are satisfied:
(i) is a nonzero bounded linear operator with an adjoint operator denoted by , is an -Lipschitzian quasi-pseudocontractive operator with , and is an -Lipschitzian quasi-pseudocontractive operator with , such that and are demiclosed at 0 for all and , where and are, respectively, the identity operators on and , and is l-Lipschitz continuous and δ-strongly monotone.
(ii) .
(iii)
For all , and , where with and with , , , and for and , and with
for .
(iv) and are two positive sequences such that , , , where , satisfies and , and .
Next, we shall propose the following novel iterative scheme (i.e., Algorithm 3.1) to solve the HVIP governed by the MSCFP (2).
Remark 3.1.
Since Ω in Assumption 3.1 (ii) is a nonempty closed convex set, the variational inequality (4) has a unique solution by Assumption 3.1 (i).
Remark 3.2.
It can be seen that the newly proposed adaptive cyclic iterative algorithm (i.e., Algorithm 3.1), distinct from that of Zhao et al. [19], combines the inertial approach with a correction term and the primal-dual idea, without requiring prior knowledge of the operator norm. This makes the algorithm easier to implement in practice and helps to improve the convergence speed of the iterative process. In addition, we have conducted our research using the more general class of quasi-pseudocontractive operators, which expands the scope to various categories of nonlinear mappings, including demicontractive, directed, and firmly quasi-nonexpansive mappings [22].
Theorem 3.1.
Suppose that $\{x_n\}$ is the sequence generated by Algorithm 3.1 under Assumption 3.1. Then $\{x_n\}$ converges strongly to $x^*$, the unique solution of the following HVIP:
$$\langle Fx^*, x - x^* \rangle \ge 0 \quad \text{for all } x \in \Omega, \tag{4}$$
where Ω is defined in Assumption 3.1 (ii). This implies that $x^*$ is also a solution of the MSCFP (2).
Proof. Firstly, we show that the sequence is bounded.
Taking , we have such that . For each , , it follows from the definitions of and , Assumption 3.1, Lemma 2.1 and Remark 2.1 that and .
Algorithm 3.1 |
Let . Choose sequences , , and such that Assumption 3.1 is satisfied, and give the initial points .
For , based on the iterates and , perform the following computations for :
Step 1: Calculate
where and with the condition that
and
Step 2: Evaluate
where and the step size is selected in such a manner that
Step 3: Set $n := n + 1$ and go to Step 1.
|
According to Lemma 2.1 and Lemma 2.3, we have
From Algorithm 3.1 and (
8), it follows that
For the case
, one obtains
Otherwise, we deduce from (
7) and (
9) that
By Assumption 3.1 (iv), (
10) and (
11), we see that
By Lemma 2.3, we have
then we obtain
It follows from Algorithm 3.1 and the triangle inequality that
From (
5) we have
for all
n. This, combined with Assumption 3.1 (iv), leads to
Following a similar reasoning from (
6), we determine that
From (
12)–(
14), we have
where
.
Consider
. Given that
, there exists
such that for all
,
. Consequently,
. According to Lemma 2.4 for all
, one can deduce that
where
. By applying the inequalities (
15) and (
16), we know that
thus, the sequence
is bounded. As a result, the sequences
,
and
are also bounded.
According to the definition of
, together with the Cauchy-Schwarz inequality, we obtain
Since
, and
are bounded, there are constants
and
such that
From (
12), one gets
By Algorithm 3.1 and (
18), we have
Utilizing (
16) and (
17), one arrives at for all
,
where
It is evident that
. Given that the sequence
is bounded, there exists a constant
such that
Hence, by (
16), we obtain
from the preceding inequality, as well as (
9) and (
19), we have for all
that
Then for some constant
,
Combining (
21) and (
22), we get that for all
,
where
.
Thus, (
23) can be reformulated as follows
It is easy to see that
. To establish that
, according to Lemma 2.5 and taking into account (
20) and (
24)), it is enough to demonstrate that for any subsequence
, if
, then
We suppose that
. When
, it is clear that
Otherwise, as indicated by (
7), it follows that
By our assumption we get
which implies that
Since
is a bounded linear operator, we can conclude
Hence, we have
by considering that
. Using (
25)- (
27), it follows that
Since
and the sequence
is bounded, it follows that
It is evident that as
, we have
Further, it follows from (
26) and (
7) that
From the above inequalities, we arrive at
According to the arbitrariness of
, we have
By Algorithm 3.1, we show that
then we get
which means that
and
Given that is bounded, there exists a subsequence that converges weakly to . Without loss of generality, we assume that the entire sequence converges weakly to .
At the same time, it follows from (
28) that
and
as
. By (
28)-(
31), we have
as
. Noting that the set of indices is finite and
is asymptotically regular, for any
, we can select a subsequence
such that
,
as
, with
for all
m. Consequently, we find that
Similarly, for any
, we can select a subsequence
such that
as
, and
for all
s. It turns out that
Since the mappings involved are demiclosed at 0, Lemma 2.1 ensures that the associated composed mappings are also demiclosed at 0. It then follows from (32) and (33) that the weak limit point belongs to all of the relevant fixed point sets, and hence lies in Ω.
Subsequently, we demonstrate that
To demonstrate this inequality, we select a subsequence
from
such that
Given that
is the unique solution to the variational inequality (
4) and
converges weakly to
, we can deduce that
Therefore, all the conditions specified in Lemma 2.5 are met. As a result, we can directly conclude that
. This implies that the sequence
converges strongly to
, which serves as the unique solution to the variational inequality (
4). □
4. Application to MSFP and a Numerical Example
It is widely recognized that the projection operator $P_Q$ onto a nonempty closed convex subset $Q$ is firmly nonexpansive, which implies that $I - P_Q$ is demiclosed at 0. Now, as an application, we solve the MSFP (1).
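For the half-space constraint sets used in Example 4.1 below, the projection has the standard closed form $P(x) = x - \max(0, \langle a, x\rangle - b)/\|a\|^2 \cdot a$, and its firm nonexpansiveness, $\|Px - Py\|^2 \le \langle Px - Py, x - y\rangle$, can be checked numerically:

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Projection onto {x : <a, x> <= b}: x - max(0, <a, x> - b) / ||a||^2 * a."""
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

a, b = np.array([1.0, 2.0, -1.0]), 0.5
rng = np.random.default_rng(3)
for _ in range(100):
    x, y = rng.normal(size=3), rng.normal(size=3)
    Px, Py = proj_halfspace(x, a, b), proj_halfspace(y, a, b)
    # firmly nonexpansive: ||Px - Py||^2 <= <Px - Py, x - y>
    assert np.linalg.norm(Px - Py)**2 <= (Px - Py) @ (x - y) + 1e-10
```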
Theorem 4.1.
Let and be real Hilbert spaces, and let for all and for each be nonempty closed convex subsets. Suppose that , , , , , , , and are the same as in Assumption 3.1. Additionally, if , , , and for and , and with , then the sequence generated by Algorithm 3.1 converges strongly to a point ; that is to say, it is a solution of the MSFP (1), which is also the unique solution of the following HVIP:
Proof. Let , for all and for each in Assumption 3.1 (i). The conclusions then follow directly from Theorem 3.1. □
In the sequel, we showcase the effectiveness of the proposed Algorithm 3.1 by using it to solve the MSFP (
1). The algorithm was implemented in MATLAB R2020a and executed on a laptop with an Intel(R) Core(TM) i5-8300H CPU @ 2.30 GHz and 8.00 GB of RAM.
Example 4.1.
For and , we select the subsets and in the MSFP (1), defined by and , respectively, where , , and , . For these ranges, the components of and are randomly selected from the closed interval , and and are randomly chosen from the closed interval . Additionally, we define as a bounded linear operator whose entries are randomly generated within the closed interval . Further, we define the function by
and use the stopping rule . Set , , , , and , , , , and for each .
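To connect with this setting, here is a simplified stand-in of our own (not Algorithm 3.1, whose inertial and correction terms are specified above): cyclic half-space projections combined with a self-adaptive step for the $A$-constraint, applied to a small randomly generated MSFP instance with $p = r = 2$ for which the origin is strictly feasible.

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Projection onto the half-space {x : <a, x> <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

rng = np.random.default_rng(4)
A = rng.uniform(-1.0, 1.0, size=(3, 3))              # bounded linear operator
Cs = [(rng.normal(size=3), 1.0) for _ in range(2)]   # C_i = {x : <a_i, x> <= 1}
Qs = [(rng.normal(size=3), 1.0) for _ in range(2)]   # Q_j = {y : <c_j, y> <= 1}

x = np.full(3, 5.0)
for n in range(20000):
    a_q, b_q = Qs[n % 2]                             # cyclic choice of Q_j
    y = A @ x
    r = y - proj_halfspace(y, a_q, b_q)              # residual for current Q_j
    g = A.T @ r
    if np.linalg.norm(g) > 1e-12:
        x = x - 0.5 * (np.linalg.norm(r)**2 / np.linalg.norm(g)**2) * g
    a_c, b_c = Cs[n % 2]
    x = proj_halfspace(x, a_c, b_c)                  # cyclic projection onto C_i

viol_C = max(max(0.0, a @ x - b) for a, b in Cs)
viol_Q = max(max(0.0, a @ (A @ x) - b) for a, b in Qs)
```

Since 0 strictly satisfies every constraint, this cyclic relaxed-projection scheme drives both violation measures to zero, mirroring on a small scale the feasibility behavior reported for Algorithm 3.1 in the experiments below.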
By applying Algorithm 3.1 to solve Example 4.1, we can select the inertial extrapolation factor
within the interval
. Specifically, by setting
, we can vary the inertial extrapolation factors by adjusting the parameter
within the range
. Setting
, and
, where
, we compare different inertial extrapolation factors of Algorithm 3.1 across various dimensional spaces.
Table 1 and
Table 2 present the iteration numbers and CPU time of Algorithm 3.1 for
, and 1 when the dimensions are
and
, respectively.
Based on the data presented in
Table 1 and
Table 2, it is evident that Algorithm 3.1 outperforms the algorithm (1.3) in terms of convergence speed across various dimensions. The computational results indicate that Algorithm 3.1, with the adjustment parameter
, demonstrates superior performance across different dimensions.