Preprint
Article

Novel Accelerated Cyclic Iterative Approximation for Hierarchical Variational Inequalities Constrained by Multiple-Set Split Common Fixed Point Problems


A peer-reviewed article of this preprint also exists.

This version is not peer-reviewed

Submitted: 27 August 2024

Posted: 05 September 2024


Abstract
In this paper, we investigate a class of hierarchical variational inequality problems (HVIPs, i.e., strongly monotone variational inequality problems defined on the solution set of multiple-set split common fixed point problems) with quasi-pseudocontractive mappings in real Hilbert spaces, special cases of which arise in many important engineering applications such as image recognition, signal processing, and machine learning. In order to solve HVIPs of potential application value, inspired by the primal-dual algorithm, we propose a novel accelerated cyclic iterative algorithm that combines the inertial method with a correction term and a self-adaptive step-size technique. Our approach eliminates the need for prior knowledge of the norm of the bounded linear operator. Under appropriate assumptions, we establish strong convergence of the algorithm. Finally, we apply our novel iterative approximation to solve multiple-set split feasibility problems and verify the effectiveness of the proposed iterative algorithm through numerical results.
Subject: Computer Science and Mathematics  -   Computational Mathematics

MSC:  65K15; 47J20; 47H10; 47H09

1. Introduction

Let F : H → H be a mapping that is Lipschitz continuous and strongly monotone, and denote by Fix(T) the fixed point set of a mapping T. For i ∈ {1, 2, …, p}, suppose that S_i : H → H are nonlinear mappings such that Γ := ∩_{i=1}^p Fix(S_i) ≠ ∅. The variational inequality problem defined on the solution set of a common fixed point problem, also known as the hierarchical variational inequality problem (HVIP, see also [1]), is defined as follows:
Find ϑ* ∈ ∩_{i=1}^p Fix(S_i) such that ⟨F ϑ*, ς − ϑ*⟩ ≥ 0, ∀ ς ∈ Γ,
where ⟨·, ·⟩ denotes the inner product in the Hilbert space H and Γ represents the common fixed point set. This type of HVIP is pivotal in various real-world applications, such as signal reconstruction, power control, bandwidth allocation, optimal control, network localization, beamforming, and machine learning [2,3]. The broad applicability of the HVIP has sparked significant research interest in recent years, leading to numerous studies exploring its various facets. For more details, one can refer to [1,2,3] and the references therein.
As is well known, the inverse problem is a fundamental challenge in computational mathematics, with a wide range of applications. Over the past few decades, the study of inverse problems has developed rapidly, finding applications in diverse fields such as computer vision, tomography, machine learning, physics, medical imaging, remote sensing, statistics, ocean acoustics, aviation, and geography. In 1994, Censor et al. [4] introduced the split feasibility problem (SFP) as a model for certain types of inverse problems. Specifically, let H₁ and H₂ be two real Hilbert spaces. Then the SFP can be formulated as
Find ϑ* ∈ C such that A ϑ* ∈ Q,
where C ⊆ H₁ and Q ⊆ H₂ are nonempty closed convex subsets, and A : H₁ → H₂ represents a bounded linear operator. The SFP has become a crucial tool in a range of applications, such as computed tomography, image restoration, signal processing, and intensity-modulated radiation therapy (IMRT) [5,6,7,8,9,10,11,12,13,14,15,16,17]. Later, motivated by challenges in inverse problems related to IMRT, Censor et al. [6] generalized the SFP and proposed the following multiple-sets split feasibility problem (MSFP): Find a point ϑ* such that
ϑ* ∈ ∩_{i=1}^p C_i and A ϑ* ∈ ∩_{j=1}^r Q_j, (1)
where p and r are integers with p, r ≥ 1, and {C_i}_{i=1}^p and {Q_j}_{j=1}^r represent nonempty closed convex subsets of the Hilbert spaces H₁ and H₂, respectively. When p = r = 1, the MSFP (1) reduces to the SFP. As an extension of the convex feasibility problem, the SFP, and the MSFP, the split common fixed point problem (SCFP) was later introduced by Censor and Segal [8] in 2009. The SCFP seeks to find ϑ* ∈ H₁ such that
ϑ* ∈ Fix(U) and B ϑ* ∈ Fix(T),
where B : H₁ → H₂ is a bounded linear operator, and U : H₁ → H₁ and T : H₂ → H₂ are two nonlinear mappings. The SCFP has attracted significant research interest owing to its wide array of applications, such as signal processing, image reconstruction, IMRT, inverse problem modeling, and electron microscopy [8,15].
Next, we turn our attention to a class of multiple-set split common fixed point problems (MSCFPs), which generalize SCFPs and are recognized for their extensive applications in fields such as image reconstruction, computed tomography, and radiotherapy treatment planning [4,5,6,7]. In recent years, the MSCFP has attracted considerable interest from researchers because of its broad applicability (see [1,18,19]). Formally, the MSCFP seeks ϑ* ∈ H₁ such that
ϑ* ∈ ∩_{i=1}^p Fix(S_i) and A ϑ* ∈ ∩_{j=1}^r Fix(T_j), (2)
where A : H₁ → H₂ is a bounded linear operator, and for 1 ≤ i ≤ p and 1 ≤ j ≤ r, Fix(S_i) and Fix(T_j) denote the fixed point sets of the nonlinear mappings S_i : H₁ → H₁ and T_j : H₂ → H₂, respectively. Notably, when p = r = 1, the MSCFP reduces to the SCFP. Furthermore, if S_i and T_j are, respectively, the projection operators onto nonempty closed convex subsets C_i and Q_j, the MSCFP (2) simplifies to the MSFP (1).
On the other hand, in order to address the MSCFP (2), Wang and Xu [20] introduced a cyclic iterative algorithm designed to solve the MSCFP for directed operators:
ϑ_{n+1} = U_{[n]_1}( ϑ_n + γ A*( T_{[n]_2} − I ) A ϑ_n ),
where 0 < γ < 2/ρ(A*A) with ρ(A*A) the spectral radius of A*A, [n]_1 := n (mod p), and [n]_2 := n (mod r). They established the weak convergence of the sequence {ϑ_n}.
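As an illustration, this cyclic scheme admits a direct numerical rendering. In the NumPy sketch below, the function name and the use of box projections as stand-ins for the directed operators U_i and T_j are our own illustrative assumptions; the index cycling follows [n]_1 and [n]_2 exactly.

```python
import numpy as np

def cyclic_wang_xu(x0, A, U_list, T_list, gamma, iters=200):
    """Wang-Xu cyclic iteration x_{n+1} = U_{[n]_1}(x_n + gamma*A^T(T_{[n]_2} - I)(A x_n)),
    with [n]_1 = n mod p and [n]_2 = n mod r; requires 0 < gamma < 2/rho(A^T A)."""
    p, r = len(U_list), len(T_list)
    x = np.asarray(x0, dtype=float)
    for n in range(iters):
        Ax = A @ x
        # the correction term pulls A x toward Fix(T_{[n]_2}) before applying U_{[n]_1}
        x = U_list[n % p](x + gamma * (A.T @ (T_list[n % r](Ax) - Ax)))
    return x
```

With A = I and projections onto boxes (which are firmly nonexpansive, hence directed), the iterates settle in the intersection of the fixed point sets; γ = 1 then satisfies γ < 2/ρ(A*A) = 2.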
It is important to note that many existing algorithms rely on the calculation of operator norms to determine an appropriate step size. In practical applications, however, obtaining the operator norm is often challenging due to its complexity and high computational cost. While such a step size may be theoretically optimal, it poses significant difficulties in practice, complicating the implementation of these algorithms. To overcome this difficulty, Gupta et al. [18] and Zhao et al. [19] introduced cyclic iterative algorithms whose step sizes do not depend on the operator norm. In particular, the algorithm proposed by Zhao et al. [19] is defined as follows:
ς_n = ϑ_n + a_n(ϑ_n − ϑ_{n−1}), v_n = ς_n − γ_n A*(I − T_n)A ς_n, ω_{n+1} = (I − U_n)v_n + (1 − λ)ω_n, ϑ_{n+1} = v_n − λ ω_{n+1},
where the step size γ n is determined by
γ_n := ρ_n ‖(I − T_n)A ς_n‖² / ‖A*(I − T_n)A ς_n‖² if (I − T_n)A ς_n ≠ 0, and γ_n := γ if (I − T_n)A ς_n = 0,
with γ > 0, and U_n and T_n being quasi-nonexpansive operators. Under suitable conditions, the sequence {ϑ_n} produced by this algorithm converges weakly to a solution of the MSCFP, offering a practical advantage by eliminating the need to compute the operator norm.
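The self-adaptive rule itself is one line of arithmetic; a minimal sketch (helper name ours):

```python
import numpy as np

def adaptive_step(residual, adj_residual, rho_n, gamma):
    """Self-adaptive step size: gamma_n = rho_n*||(I - T_n)A s||^2 / ||A*(I - T_n)A s||^2
    when the residual is nonzero, and an arbitrary fixed gamma > 0 otherwise;
    no knowledge of ||A|| is needed."""
    den = float(np.dot(adj_residual, adj_residual))  # ||A*(I - T_n) A s||^2
    if den == 0.0:                                   # covers the zero-residual branch
        return gamma
    return rho_n * float(np.dot(residual, residual)) / den
```

Here `residual` stands for (I − T_n)A ς_n and `adj_residual` for A*(I − T_n)A ς_n, both computed by the caller.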
Moreover, over the past few years, there has been a significant increase in activity and significant progress in the field of MSCFP, driven by the need to broaden its applicability to more generalized operators. Researchers have extended the framework to include various types of operators, such as quasi-nonexpansive mappings, firmly quasi-nonexpansive mappings [19], demicontractive mappings [18], and directed mappings [21]. These advancements have allowed for more robust and versatile solutions to MSCFP across different application domains. In this paper, we focus on quasi-pseudocontractive operators, which further expand the scope of MSCFP to encompass a broader class of nonlinear mappings, including demicontractive mappings, directed mappings, and firmly quasi-nonexpansive mappings (see [22]). By examining quasi-pseudocontractive operators, we aim to provide new insights and extend the applicability of MSCFP to a wider range of practical problems.
Recently, there has been an increasing focus on constructing iterative schemes that achieve faster convergence rates, particularly in the context of solving fixed point and optimization problems. Inertial techniques, which discretely simulate second-order dissipative dynamical systems, have gained recognition for their ability to accelerate the convergence of iterative methods. The most prevalent method in this category is the single-step inertial extrapolation, given by ϑ_n + Θ_n(ϑ_n − ϑ_{n−1}). Since the introduction of inertial-type algorithms, many researchers have incorporated the inertial term Θ_n(ϑ_n − ϑ_{n−1}) into various iterative schemes, such as the Mann, Krasnoselskii, Halpern, and viscosity methods, to approximate solutions of fixed point and optimization problems. While most studies have established weak convergence results, achieving strong convergence remains challenging and relatively rare. Recently, Kim [23] and Maingé [24] explored the inertial extrapolation technique, incorporating an extra correction term, in the context of optimization problems. This correction term plays a crucial role in enhancing the acceleration rate of the algorithm, and their studies have demonstrated promising weak convergence results, paving the way for further exploration in this direction.
Inspired by recent advancements in iterative methods and motivated by the need for more efficient algorithms, the purpose of this paper is to explore a class of novel self-adaptive cyclic iterative algorithms for approximating solutions of HVIPs for quasi-pseudocontractive mappings. Our iterative approximation integrates the inertial approach combined with correction terms and the primal-dual approach, enhancing the rate of convergence of the iterative process. A key feature of the iterative approximation algorithms presented in this paper is the use of a self-adaptive step-size strategy, which can be implemented without requiring prior knowledge of the operator norm, thus making it more practical for real-world applications. By imposing appropriate control conditions on the relevant parameters, we establish that the iterates converge strongly to the unique solution of the hierarchical variational inequality problem under consideration. We further apply our theoretical results to the multiple-set split feasibility problem, demonstrating the broad applicability of our approach. To confirm the effectiveness of the proposed algorithm, we provide numerical examples that illustrate its superior performance in comparison to existing methods.

2. Preliminaries

In this paper, the inner product is denoted by ⟨·, ·⟩ and the norm by ‖·‖. The identity operator on the Hilbert space H is denoted by I. We represent the fixed point set of an operator T as Fix(T). Strong convergence is indicated by →, while weak convergence is represented by ⇀. The weak ω-limit set of the sequence {ϑ_n} is denoted by ω_w(ϑ_n).
Definition 2.1.
Let H be a real Hilbert space. Then for all ϑ, ς ∈ H and α ∈ (0, 1), one has
(i) 2⟨ϑ, ς⟩ = ‖ϑ‖² + ‖ς‖² − ‖ϑ − ς‖² = ‖ϑ + ς‖² − ‖ϑ‖² − ‖ς‖²,
(ii) ‖α ϑ + (1 − α)ς‖² = α‖ϑ‖² + (1 − α)‖ς‖² − α(1 − α)‖ϑ − ς‖²,
(iii) ‖ϑ + ς‖² ≤ ‖ϑ‖² + 2⟨ς, ϑ + ς⟩.
Definition 2.2.
A mapping F : H → H is termed l-Lipschitz continuous provided that there exists a constant l > 0 such that
‖F ϑ − F ς‖ ≤ l‖ϑ − ς‖, ∀ ϑ, ς ∈ H.
The mapping F is referred to as τ-strongly monotone if a constant τ > 0 can be found such that
⟨F ϑ − F ς, ϑ − ς⟩ ≥ τ‖ϑ − ς‖², ∀ ϑ, ς ∈ H.
Lemma 2.1. ([25]) Let T : H → H be an l-Lipschitzian mapping with l ≥ 1. Denote
K := (1 − ξ)I + ξ T((1 − η)I + η T).
If 0 < ξ < η < 1/(1 + √(1 + l²)), then the following conclusions hold:
(i) Fix(T) = Fix(T((1 − η)I + η T)) = Fix(K);
(ii) If I − T is demiclosed at 0, then I − K is also demiclosed at 0;
(iii) In addition, if T : H → H is quasi-pseudocontractive, then the mapping K is quasi-nonexpansive, that is,
‖K ϑ − ϑ*‖ ≤ ‖ϑ − ϑ*‖, ∀ ϑ ∈ H, ϑ* ∈ Fix(T) = Fix(K).
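Programmatically, the transformed operator K is a two-stage relaxation of T. Below is a small NumPy sketch; the helper name make_K and the test map T(ϑ) = −ϑ (which is 1-Lipschitz and quasi-pseudocontractive with Fix(T) = {0}) are our own illustrative choices.

```python
import numpy as np

def make_K(T, xi, eta):
    """Build K = (1 - xi)*I + xi*T((1 - eta)*I + eta*T); for an l-Lipschitz
    quasi-pseudocontractive T with 0 < xi < eta < 1/(1 + sqrt(1 + l^2)),
    Lemma 2.1 says K is quasi-nonexpansive with Fix(K) = Fix(T)."""
    def K(x):
        y = (1 - eta) * x + eta * T(x)    # inner averaged step (1 - eta)I + eta*T
        return (1 - xi) * x + xi * T(y)   # outer relaxation with parameter xi
    return K
```

For T(ϑ) = −ϑ the bound is 1/(1 + √2) ≈ 0.414, so ξ = 0.2 and η = 0.4 are admissible; a short computation gives K(ϑ) = 0.76ϑ, which is indeed quasi-nonexpansive around the fixed point 0.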
Lemma 2.2.
Let H be a real Hilbert space, and let T : H → H be both l-Lipschitzian and quasi-pseudocontractive with coefficient l ≥ 1. Let K := (1 − ξ)I + ξ T((1 − η)I + η T), where 0 < ξ < η < 1/(1 + √(1 + l²)), and set K_μ = (1 − μ)I + μ K for μ ∈ (0, 1). Then for all ϑ ∈ H and ϑ* ∈ Fix(T), the following results hold:
(i) ⟨ϑ − K ϑ, ϑ − ϑ*⟩ ≥ (1/2)‖ϑ − K ϑ‖² and ⟨ϑ − K ϑ, ϑ* − K ϑ⟩ ≤ (1/2)‖ϑ − K ϑ‖²,
(ii) ‖K_μ ϑ − ϑ*‖² ≤ ‖ϑ − ϑ*‖² − μ(1 − μ)‖K ϑ − ϑ‖²,
(iii) ⟨ϑ − K_μ ϑ, ϑ − ϑ*⟩ ≥ (μ/2)‖K ϑ − ϑ‖².
Proof. 
By Lemma 2.1, it is known that K is quasi-nonexpansive and satisfies ‖K ϑ − ϑ*‖ ≤ ‖ϑ − ϑ*‖ for all ϑ ∈ H and ϑ* ∈ Fix(T) = Fix(K).
(i) Combining the classical identity ⟨ϑ, ς⟩ = (1/2)‖ϑ‖² + (1/2)‖ς‖² − (1/2)‖ϑ − ς‖², we have
⟨ϑ − K ϑ, ϑ − ϑ*⟩ = (1/2)‖ϑ − K ϑ‖² + (1/2)‖ϑ − ϑ*‖² − (1/2)‖K ϑ − ϑ*‖² ≥ (1/2)‖ϑ − K ϑ‖² + (1/2)‖ϑ − ϑ*‖² − (1/2)‖ϑ − ϑ*‖² = (1/2)‖ϑ − K ϑ‖².
Similarly, we can obtain ⟨ϑ − K ϑ, ϑ* − K ϑ⟩ ≤ (1/2)‖ϑ − K ϑ‖².
(ii) From (i), we have
‖K_μ ϑ − ϑ*‖² = ‖[(1 − μ)I + μ K]ϑ − ϑ*‖² = ‖ϑ − ϑ*‖² − 2μ⟨ϑ − ϑ*, ϑ − K ϑ⟩ + μ²‖K ϑ − ϑ‖² ≤ ‖ϑ − ϑ*‖² − μ(1 − μ)‖K ϑ − ϑ‖².
(iii) By (i), we have
⟨ϑ − K_μ ϑ, ϑ − ϑ*⟩ = μ⟨ϑ − K ϑ, ϑ − ϑ*⟩ ≥ (μ/2)‖K ϑ − ϑ‖².
   □
Remark 2.1.
Let K_μ = (1 − μ)I + μ K for μ ∈ (0, 1), where K := (1 − ξ)I + ξ T((1 − η)I + η T) with 0 < ξ < η < 1/(1 + √(1 + l²)), and let T : H → H be both l-Lipschitzian and quasi-pseudocontractive with l ≥ 1. We have Fix(K_μ) = Fix(K) and ‖K_μ ϑ − ϑ‖² = μ²‖K ϑ − ϑ‖². It follows from (ii) of Lemma 2.2 that ‖K_μ ϑ − ϑ*‖² ≤ ‖ϑ − ϑ*‖² − ((1 − μ)/μ)‖K_μ ϑ − ϑ‖², which implies that K_μ is firmly quasi-nonexpansive when μ = 1/2. On the other hand, if K̂ is a firmly quasi-nonexpansive operator, we can easily write K̂ = (1/2)I + (1/2)K, where K is a quasi-nonexpansive operator.
Lemma 2.3.
Let T : H → H be both l-Lipschitzian and quasi-pseudocontractive with l ≥ 1, and let μ ∈ (0, 1). If K := (1 − ξ)I + ξ T((1 − η)I + η T) with 0 < ξ < η < 1/(1 + √(1 + l²)) and K_μ = (1 − μ)I + μ K, then ‖(I − K_μ)ϑ‖² ≤ 2μ⟨ϑ − ϑ*, (I − K_μ)ϑ⟩ for all ϑ ∈ H, ϑ* ∈ Fix(T).
Proof. 
By Lemma 2.1, K is quasi-nonexpansive and Fix(T) = Fix(K). The claim then follows easily from (iii) of Lemma 2.2.    □
Lemma 2.4. ([26]) Let the operator F : H → H be l-Lipschitz continuous and δ-strongly monotone with constants l > 0, δ > 0. Assume that ϵ ∈ (0, 2δ/l²). Define G_μ = I − μϵF for μ ∈ (0, 1). Then for all ϑ, ς ∈ H,
‖G_μ ϑ − G_μ ς‖ ≤ (1 − μχ)‖ϑ − ς‖
holds, where χ = 1 − √(1 − ϵ(2δ − ϵl²)) ∈ (0, 1).
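The contraction estimate can be checked numerically on a concrete example. In the sketch below (our illustrative choice, a sanity check rather than a proof), F(ϑ) = Mϑ with M = diag(2, 3) is δ = 2 strongly monotone and l = 3 Lipschitz, so any ϵ ∈ (0, 2δ/l²) = (0, 4/9) is admissible.

```python
import numpy as np

# Numerical check of Lemma 2.4 on a linear strongly monotone map.
M = np.diag([2.0, 3.0])                    # F(x) = M x: delta = 2, l = 3
delta, l, eps, mu = 2.0, 3.0, 0.4, 0.5     # eps in (0, 2*delta/l^2) = (0, 4/9)
chi = 1.0 - np.sqrt(1.0 - eps * (2.0 * delta - eps * l**2))  # contraction modulus
G = lambda x: x - mu * eps * (M @ x)                         # G_mu = I - mu*eps*F
rng = np.random.default_rng(0)
x, y = rng.standard_normal(2), rng.standard_normal(2)
lhs = np.linalg.norm(G(x) - G(y))
rhs = (1.0 - mu * chi) * np.linalg.norm(x - y)
```

For this M the map G_μ is diag(0.6, 0.4), so the Lipschitz factor 0.6 sits comfortably below the guaranteed bound 1 − μχ ≈ 0.958.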
Lemma 2.5. ([1]) Suppose that {ϑ_n} is a sequence of nonnegative real numbers satisfying
ϑ_{n+1} ≤ (1 − τ_n)ϑ_n + τ_n ψ_n, n ≥ 0, and ϑ_{n+1} ≤ ϑ_n − ϱ_n + θ_n, n ≥ 0,
where {τ_n} ⊂ (0, 1), ϱ_n ≥ 0, and {ψ_n} and {θ_n} are two sequences in ℝ such that
(i) ∑_{n=1}^∞ τ_n = ∞, (ii) lim_{n→∞} θ_n = 0,
(iii) lim_{l→∞} ϱ_{n_l} = 0 implies lim sup_{l→∞} ψ_{n_l} ≤ 0 for any subsequence {n_l} ⊂ {n}.
Then lim_{n→∞} ϑ_n = 0.

3. Main Results

In this section, we present our novel accelerated cyclic iterative approximation algorithm and convergence analysis. We begin by outlining the assumptions necessary for achieving strong convergence.
Assumption 3.1.
Let H 1 and H 2 be real Hilbert spaces. Additionally, we assume that the following conditions are satisfied:
(i) A : H₁ → H₂ is a nonzero bounded linear operator with adjoint operator A*; S_i : H₁ → H₁ is an l_{1i}-Lipschitzian quasi-pseudocontractive operator with 1 < l_{1i} ≤ l₁ and T_j : H₂ → H₂ is an l_{2j}-Lipschitzian quasi-pseudocontractive operator with 1 < l_{2j} ≤ l₂ such that I₁ − S_i and I₂ − T_j are demiclosed at 0 for all 1 ≤ i ≤ p and 1 ≤ j ≤ r, where I₁ and I₂ are, respectively, the identity operators on H₁ and H₂; and F : H₁ → H₁ is l-Lipschitz continuous and δ-strongly monotone.
(ii) Ω := {υ ∈ H₁ : υ ∈ ∩_{i=1}^p Fix(S_i) and A υ ∈ ∩_{j=1}^r Fix(T_j)} ≠ ∅.
(iii) For all n ≥ 1, J_n := (1 − κ_n)I₁ + κ_n J_{n,[n]_1} and D_n := (1 − ι_n)I₂ + ι_n D_{n,[n]_2}, where {κ_n} ⊂ (0, 1/2] with κ = sup_{n≥1}{κ_n}, {ι_n} ⊂ (0, 1) with ι = sup_{n≥1}{ι_n}, [n]_1 := n (mod p) + 1, [n]_2 := n (mod r) + 1, and for i ∈ {1, 2, …, p} and j ∈ {1, 2, …, r}, J_{n,i} := (1 − ξ_n)I₁ + ξ_n S_i((1 − η_n)I₁ + η_n S_i) and D_{n,j} := (1 − ξ_n)I₂ + ξ_n T_j((1 − η_n)I₂ + η_n T_j) with
0 < ξ_n < η_n < 1/(1 + √(1 + l₃²))
for l₃ = max{l₁, l₂}.
(iv) {ε_n} and {ρ_n} are two positive sequences such that lim_{n→∞} ϕ_n/σ_n = 0, lim_{n→∞} ε_n/σ_n = 0, and lim_{n→∞} ρ_n/σ_n = 0, where {ϕ_n} ⊂ [0, 1] and {σ_n} ⊂ (0, 1) satisfies lim_{n→∞} σ_n = 0 and ∑_{n=0}^∞ σ_n = ∞, and 0 ≤ lim inf_{n→∞} ρ_n ≤ lim sup_{n→∞} ρ_n < 1/ι.
Next, we shall propose the following novel iterative approximation (i.e., Algorithm 3.1) to solve the HVIP constrained by the MSCFP (2).
Remark 3.1.
Since Ω in Assumption 3.1 (ii) is a nonempty closed convex set, the variational inequality
⟨F ϑ*, z − ϑ*⟩ ≥ 0, ∀ z ∈ Ω, (4)
has a unique solution by Assumption 3.1 (i).
Remark 3.2.
It can be seen that the newly proposed adaptive cyclic iterative algorithm (i.e., Algorithm 3.1), distinct from that of Zhao et al. [19], combines the inertial approach with a correction term and the primal-dual idea, without requiring prior knowledge of the operator norm. This makes the algorithm easier to implement in practice and helps to improve the convergence speed of the iterative process. In addition, we have conducted our research using the more general class of quasi-pseudocontractive operators, which expands the scope to various categories of nonlinear mappings, including demicontractive, directed, and firmly quasi-nonexpansive mappings [22].
Theorem 3.1.
Suppose that {ϑ_n} is a sequence generated by Algorithm 3.1 under Assumption 3.1. Then the sequence {ϑ_n} converges strongly to ϑ* ∈ Ω, which is the unique solution of the following HVIP:
⟨F ϑ*, z − ϑ*⟩ ≥ 0, ∀ z ∈ Ω,
where Ω is defined in Assumption 3.1 (ii). This implies that ϑ* is also a solution of the MSCFP (2).
Proof. 
Firstly, we show that the sequence {ϑ_n} is bounded.
Taking ϑ* ∈ Ω, we have ϑ* ∈ ∩_{i=1}^p Fix(S_i) and A ϑ* ∈ ∩_{j=1}^r Fix(T_j). For each i ∈ {1, 2, …, p} and j ∈ {1, 2, …, r}, it follows from the definitions of J_n and D_n, Assumption 3.1, Lemma 2.1, and Remark 2.1 that ϑ* ∈ ∩_{n=1}^∞ Fix(J_n) and A ϑ* ∈ ∩_{n=1}^∞ Fix(D_n).
Algorithm 3.1
  • Initialization: Let α, β, γ > 0. Pick sequences {ε_n}, {ρ_n}, {ϕ_n}, and {σ_n} such that Assumption 3.1 is satisfied, and choose initial points ϑ₀, ϑ₁, ν₀, ω₀ ∈ H₁.
  • Iterative steps: For n ≥ 1, based on the iterates ϑ_{n−1} and ϑ_n, compute ϑ_{n+1} as follows.
  • Step 1: Calculate
    ν_n = ϑ_n + α_n(ϑ_n − ϑ_{n−1}) + β_n(ν_{n−1} − ϑ_{n−1}),
    where α_n ∈ (0, ᾱ_n] and β_n ∈ [0, β̄_n] with
    ᾱ_n = min{ε_n/(‖ϑ_n − ϑ_{n−1}‖ + ‖ω_n‖), α} if ϑ_n ≠ ϑ_{n−1} or ω_n ≠ 0, and ᾱ_n = α otherwise, (5)
    and
    β̄_n = min{ε_n/(‖ν_{n−1} − ϑ_{n−1}‖ + ‖ω_n‖), β} if ν_{n−1} ≠ ϑ_{n−1} or ω_n ≠ 0, and β̄_n = β otherwise. (6)
  • Step 2: Evaluate
    z_n = ν_n − γ_n A*(I₂ − D_n)A ν_n, ω_{n+1} = (I₁ − J_n)(z_n + λ_n ω_n),
    where λ_n = α_n + β_n and the step size γ_n is selected as
    γ_n := ρ_n ‖(I₂ − D_n)A ν_n‖²/‖A*(I₂ − D_n)A ν_n‖² if (I₂ − D_n)A ν_n ≠ 0, and γ_n := γ otherwise. (7)
  • Step 3: Ascertain
    ς_n = (1 − ϕ_n)z_n + ϕ_n ω_{n+1}.
  • Step 4: Compute
    ϑ_{n+1} = (I₁ − σ_n F)ς_n.
  • Set n := n + 1 and go to Step 1.
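For intuition, the four steps translate line by line into a finite-dimensional sketch. In the code below, J(n, ·) and D(n, ·) are caller-supplied callables standing in for the cyclic operators J_n and D_n, F is the strongly monotone mapping, and the parameter schedules are illustrative choices in the spirit of Assumption 3.1; all names are ours.

```python
import numpy as np

def algorithm_3_1(theta0, theta1, omega0, A, J, D, F, n_iters,
                  alpha=0.9, beta=0.5, gamma=1.0,
                  eps_n=lambda n: 1.0 / n**2,
                  rho_n=lambda n: 1.0 / (n + 1)**0.7,
                  sigma_n=lambda n: 1.0 / (n + 1),
                  phi_n=lambda n: 1.0 / (n + 1)**2):
    """Sketch of Algorithm 3.1 (inertial term + correction + adaptive step)."""
    th_prev, th = np.asarray(theta0, float), np.asarray(theta1, float)
    nu_prev, om = th_prev.copy(), np.asarray(omega0, float)   # take nu_0 = theta_0
    for n in range(1, n_iters + 1):
        e = eps_n(n)
        # Step 1: inertial extrapolation with correction term.
        d1 = np.linalg.norm(th - th_prev) + np.linalg.norm(om)
        a = min(e / d1, alpha) if d1 > 0 else alpha      # alpha_n in (0, alpha_bar_n]
        d2 = np.linalg.norm(nu_prev - th_prev) + np.linalg.norm(om)
        b = min(e / d2, beta) if d2 > 0 else beta        # beta_n in [0, beta_bar_n]
        nu = th + a * (th - th_prev) + b * (nu_prev - th_prev)
        # Step 2: self-adaptive step size, no operator norm required.
        Anu = A @ nu
        res = Anu - D(n, Anu)                            # (I_2 - D_n) A nu_n
        Ares = A.T @ res
        den = Ares @ Ares
        g = rho_n(n) * (res @ res) / den if den > 0 else gamma
        z = nu - g * Ares
        lam = a + b
        w = z + lam * om
        om_next = w - J(n, w)                            # omega_{n+1} = (I_1 - J_n)(z_n + lam_n*omega_n)
        # Step 3: convex combination; Step 4: Halpern-type step (I - sigma_n F).
        vs = (1 - phi_n(n)) * z + phi_n(n) * om_next
        th_prev, th, nu_prev, om = th, vs - sigma_n(n) * F(vs), nu, om_next
    return th
```

With H₁ = H₂ = ℝ², A = I, F = I, and box projections standing in for J_n and D_n, the iterates approach the minimum-norm point of the intersection, which is what Theorem 3.1 predicts for F = I.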
According to Lemma 2.1 and Lemma 2.3, we have
⟨ν_n − ϑ*, A*(I₂ − D_n)A ν_n⟩ = ⟨A ν_n − A ϑ*, (I₂ − D_n)A ν_n⟩ ≥ (1/(2ι_n))‖(I₂ − D_n)A ν_n‖² ≥ (1/(2ι))‖(I₂ − D_n)A ν_n‖². (8)
From Algorithm 3.1 and (8), it follows that
‖z_n − ϑ*‖² = ‖ν_n − γ_n A*(I₂ − D_n)A ν_n − ϑ*‖² = ‖ν_n − ϑ*‖² − 2γ_n⟨ν_n − ϑ*, A*(I₂ − D_n)A ν_n⟩ + γ_n²‖A*(I₂ − D_n)A ν_n‖² ≤ ‖ν_n − ϑ*‖² − (γ_n/ι)‖(I₂ − D_n)A ν_n‖² + γ_n²‖A*(I₂ − D_n)A ν_n‖². (9)
For the case (I₂ − D_n)A ν_n = 0, one obtains
‖z_n − ϑ*‖² ≤ ‖ν_n − ϑ*‖² − γ_n[(1/ι)‖(I₂ − D_n)A ν_n‖² − γ_n‖A*(I₂ − D_n)A ν_n‖²] = ‖ν_n − ϑ*‖². (10)
Otherwise, we deduce from (7) and (9) that
‖z_n − ϑ*‖² ≤ ‖ν_n − ϑ*‖² − ρ_n(1/ι − ρ_n)‖(I₂ − D_n)A ν_n‖⁴/‖A*(I₂ − D_n)A ν_n‖². (11)
By Assumption 3.1 (iv), (10), and (11), we see that
‖z_n − ϑ*‖ ≤ ‖ν_n − ϑ*‖. (12)
By Lemma 2.3, we have
‖ω_{n+1}‖² = ‖(I₁ − J_n)(z_n + λ_n ω_n) − (I₁ − J_n)ϑ*‖² ≤ 2κ_n⟨ω_{n+1}, z_n − ϑ* + λ_n ω_n⟩ ≤ ‖ω_{n+1}‖ · ‖z_n − ϑ* + λ_n ω_n‖,
then we obtain
‖ω_{n+1}‖ ≤ ‖z_n − ϑ* + λ_n ω_n‖ ≤ ‖z_n − ϑ*‖ + λ_n‖ω_n‖. (13)
It follows from Algorithm 3.1 and the triangle inequality that
‖ν_n − ϑ*‖ = ‖ϑ_n + α_n(ϑ_n − ϑ_{n−1}) + β_n(ν_{n−1} − ϑ_{n−1}) − ϑ*‖ ≤ ‖ϑ_n − ϑ*‖ + α_n‖ϑ_n − ϑ_{n−1}‖ + β_n‖ν_{n−1} − ϑ_{n−1}‖. (14)
From (5) we have α_n(‖ϑ_n − ϑ_{n−1}‖ + ‖ω_n‖) ≤ ε_n for all n. This, combined with Assumption 3.1 (iv), leads to
lim_{n→∞} (α_n/σ_n)(‖ϑ_n − ϑ_{n−1}‖ + ‖ω_n‖) = 0.
Following similar reasoning from (6), we determine that
lim_{n→∞} (β_n/σ_n)(‖ν_{n−1} − ϑ_{n−1}‖ + ‖ω_n‖) = 0.
From (12)–(14), we have
‖ς_n − ϑ*‖ = ‖(1 − ϕ_n)z_n + ϕ_n ω_{n+1} − ϑ*‖ ≤ (1 − ϕ_n)‖z_n − ϑ*‖ + ϕ_n‖ω_{n+1}‖ + ϕ_n‖ϑ*‖ ≤ (1 − ϕ_n)‖z_n − ϑ*‖ + ϕ_n(‖z_n − ϑ*‖ + λ_n‖ω_n‖) + ϕ_n‖ϑ*‖ ≤ ‖ν_n − ϑ*‖ + λ_n‖ω_n‖ + ϕ_n‖ϑ*‖ ≤ ‖ϑ_n − ϑ*‖ + α_n(‖ϑ_n − ϑ_{n−1}‖ + ‖ω_n‖) + β_n(‖ν_{n−1} − ϑ_{n−1}‖ + ‖ω_n‖) + ϕ_n‖ϑ*‖ = ‖ϑ_n − ϑ*‖ + σ_n(α_n/σ_n)(‖ϑ_n − ϑ_{n−1}‖ + ‖ω_n‖) + σ_n(β_n/σ_n)(‖ν_{n−1} − ϑ_{n−1}‖ + ‖ω_n‖) + ϕ_n‖ϑ*‖ ≤ ‖ϑ_n − ϑ*‖ + 2σ_n M₁, (15)
where M₁ = sup_{n∈ℕ} max{(α_n/σ_n)(‖ϑ_n − ϑ_{n−1}‖ + ‖ω_n‖) + (β_n/σ_n)(‖ν_{n−1} − ϑ_{n−1}‖ + ‖ω_n‖), (ϕ_n/σ_n)‖ϑ*‖} < ∞.
Consider ϵ ∈ (0, 2δ/l²). Given that lim_{n→∞} σ_n = 0, there exists n₀ ∈ ℕ such that σ_n < ϵ for all n > n₀, and consequently σ_n/ϵ ∈ (0, 1). According to Lemma 2.4, for all n > n₀ one can deduce that
‖(I₁ − σ_n F)ς_n − (I₁ − σ_n F)ϑ*‖ = ‖(I₁ − (σ_n/ϵ)·ϵF)ς_n − (I₁ − (σ_n/ϵ)·ϵF)ϑ*‖ ≤ (1 − (σ_n/ϵ)χ)‖ς_n − ϑ*‖, (16)
where χ = 1 − √(1 − ϵ(2δ − ϵl²)) ∈ (0, 1). By applying the inequalities (15) and (16), we know that
‖ϑ_{n+1} − ϑ*‖ = ‖(I₁ − σ_n F)ς_n − ϑ*‖ = ‖(I₁ − σ_n F)ς_n − (I₁ − σ_n F)ϑ* − σ_n F ϑ*‖ ≤ ‖(I₁ − σ_n F)ς_n − (I₁ − σ_n F)ϑ*‖ + σ_n‖F ϑ*‖ ≤ (1 − (σ_n/ϵ)χ)‖ς_n − ϑ*‖ + σ_n‖F ϑ*‖ ≤ (1 − (σ_n/ϵ)χ)(‖ϑ_n − ϑ*‖ + 2σ_n M₁) + σ_n‖F ϑ*‖ ≤ (1 − (σ_n/ϵ)χ)‖ϑ_n − ϑ*‖ + (σ_n/ϵ)χ · ϵ(2M₁ + ‖F ϑ*‖)/χ ≤ max{‖ϑ_n − ϑ*‖, ϵ(2M₁ + ‖F ϑ*‖)/χ} ≤ ⋯ ≤ max{‖ϑ_{n₀} − ϑ*‖, ϵ(2M₁ + ‖F ϑ*‖)/χ};
thus, the sequence {ϑ_n} is bounded. As a result, the sequences {ς_n}, {ω_{n+1}}, and {ν_n} are also bounded.
According to the definition of ν_n, together with the Cauchy–Schwarz inequality, we obtain
‖ν_n − ϑ*‖² = ‖ϑ_n + α_n(ϑ_n − ϑ_{n−1}) + β_n(ν_{n−1} − ϑ_{n−1}) − ϑ*‖² = ‖ϑ_n − ϑ*‖² + α_n²‖ϑ_n − ϑ_{n−1}‖² + β_n²‖ν_{n−1} − ϑ_{n−1}‖² + 2α_nβ_n⟨ϑ_n − ϑ_{n−1}, ν_{n−1} − ϑ_{n−1}⟩ + 2α_n⟨ϑ_n − ϑ*, ϑ_n − ϑ_{n−1}⟩ + 2β_n⟨ϑ_n − ϑ*, ν_{n−1} − ϑ_{n−1}⟩ ≤ ‖ϑ_n − ϑ*‖² + α_n²‖ϑ_n − ϑ_{n−1}‖² + β_n²‖ν_{n−1} − ϑ_{n−1}‖² + 2α_nβ_n‖ϑ_n − ϑ_{n−1}‖·‖ν_{n−1} − ϑ_{n−1}‖ + 2α_n‖ϑ_n − ϑ*‖·‖ϑ_n − ϑ_{n−1}‖ + 2β_n‖ϑ_n − ϑ*‖·‖ν_{n−1} − ϑ_{n−1}‖ = ‖ϑ_n − ϑ*‖² + β_n‖ν_{n−1} − ϑ_{n−1}‖(2‖ϑ_n − ϑ*‖ + β_n‖ν_{n−1} − ϑ_{n−1}‖) + α_n‖ϑ_n − ϑ_{n−1}‖(α_n‖ϑ_n − ϑ_{n−1}‖ + 2β_n‖ν_{n−1} − ϑ_{n−1}‖ + 2‖ϑ_n − ϑ*‖).
Since {ϑ_n}, {α_n}, {β_n}, and {ν_n} are bounded, there are constants M₂ > 0 and M₃ > 0 such that
‖ν_n − ϑ*‖² ≤ ‖ϑ_n − ϑ*‖² + M₂ α_n‖ϑ_n − ϑ_{n−1}‖ + M₃ β_n‖ν_{n−1} − ϑ_{n−1}‖. (17)
From (12), one gets
‖z_n − ϑ*‖² ≤ ‖ν_n − ϑ*‖². (18)
By Algorithm 3.1 and (18), we have
‖ς_n − ϑ*‖² = ‖(1 − ϕ_n)z_n + ϕ_n ω_{n+1} − ϑ*‖² ≤ ϕ_n‖ω_{n+1} − ϑ*‖² + (1 − ϕ_n)‖z_n − ϑ*‖² ≤ ‖z_n − ϑ*‖² + ϕ_n‖ω_{n+1} − ϑ*‖² ≤ ‖ν_n − ϑ*‖² + ϕ_n‖ω_{n+1} − ϑ*‖². (19)
Utilizing (16) and (17), one arrives at, for all n > n₀,
‖ϑ_{n+1} − ϑ*‖² = ‖(I₁ − σ_n F)ς_n − (I₁ − σ_n F)ϑ* − σ_n F ϑ*‖² ≤ ‖(I₁ − σ_n F)ς_n − (I₁ − σ_n F)ϑ*‖² − 2σ_n⟨F ϑ*, ϑ_{n+1} − ϑ*⟩ ≤ (1 − (σ_n/ϵ)χ)²‖ς_n − ϑ*‖² + 2σ_n⟨F ϑ*, ϑ* − ϑ_{n+1}⟩ ≤ (1 − (σ_n/ϵ)χ)(‖ν_n − ϑ*‖² + ϕ_n‖ω_{n+1} − ϑ*‖²) + 2σ_n⟨F ϑ*, ϑ* − ϑ_{n+1}⟩ ≤ (1 − (σ_n/ϵ)χ)‖ϑ_n − ϑ*‖² + 2σ_n⟨F ϑ*, ϑ* − ϑ_{n+1}⟩ + M₂ α_n‖ϑ_n − ϑ_{n−1}‖ + M₃ β_n‖ν_{n−1} − ϑ_{n−1}‖ + ϕ_n‖ω_{n+1} − ϑ*‖² = (1 − (σ_n/ϵ)χ)‖ϑ_n − ϑ*‖² + (σ_n/ϵ)χ[(2ϵ/χ)⟨F ϑ*, ϑ* − ϑ_{n+1}⟩ + (ϵM₂/χ)(α_n/σ_n)‖ϑ_n − ϑ_{n−1}‖ + (ϵM₃/χ)(β_n/σ_n)‖ν_{n−1} − ϑ_{n−1}‖ + (ϵ/χ)(ϕ_n/σ_n)‖ω_{n+1} − ϑ*‖²] = (1 − τ_n)‖ϑ_n − ϑ*‖² + τ_n ψ_n, (20)
where
τ_n = (σ_n/ϵ)χ, ψ_n = (2ϵ/χ)⟨F ϑ*, ϑ* − ϑ_{n+1}⟩ + (ϵM₂/χ)(α_n/σ_n)‖ϑ_n − ϑ_{n−1}‖ + (ϵM₃/χ)(β_n/σ_n)‖ν_{n−1} − ϑ_{n−1}‖ + (ϵ/χ)(ϕ_n/σ_n)‖ω_{n+1} − ϑ*‖².
It is evident that τ_n → 0 and ∑_{n=1}^∞ τ_n = ∞. Given that the sequence {ϑ_n} is bounded, there exists a constant M₄ > 0 such that
2⟨F ϑ*, ϑ* − ϑ_{n+1}⟩ ≤ M₄.
Hence, by (16), we obtain
‖ϑ_{n+1} − ϑ*‖² = ‖(I₁ − σ_n F)ς_n − ϑ*‖² = ‖(I₁ − σ_n F)ς_n − (I₁ − σ_n F)ϑ* − σ_n F ϑ*‖² ≤ ‖(I₁ − σ_n F)ς_n − (I₁ − σ_n F)ϑ*‖² − 2σ_n⟨F ϑ*, ϑ_{n+1} − ϑ*⟩ ≤ (1 − (σ_n/ϵ)χ)²‖ς_n − ϑ*‖² + 2σ_n⟨F ϑ*, ϑ* − ϑ_{n+1}⟩ ≤ ‖ς_n − ϑ*‖² + σ_n M₄, n > n₀;
from the preceding inequality, as well as (9) and (19), we have for all n > n₀ that
‖ϑ_{n+1} − ϑ*‖² ≤ ‖z_n − ϑ*‖² + ϕ_n‖ω_{n+1} − ϑ*‖² + σ_n M₄ ≤ ‖ν_n − ϑ*‖² − (γ_n/ι)‖(I₂ − D_n)A ν_n‖² + γ_n²‖A*(I₂ − D_n)A ν_n‖² + ϕ_n‖ω_{n+1} − ϑ*‖² + σ_n M₄. (21)
We know from (14) that
‖ν_n − ϑ*‖ ≤ ‖ϑ_n − ϑ*‖ + α_n‖ϑ_n − ϑ_{n−1}‖ + β_n‖ν_{n−1} − ϑ_{n−1}‖ = ‖ϑ_n − ϑ*‖ + α_n(‖ϑ_n − ϑ_{n−1}‖ + ‖ω_n‖) + β_n(‖ν_{n−1} − ϑ_{n−1}‖ + ‖ω_n‖) − λ_n‖ω_n‖ ≤ ‖ϑ_n − ϑ*‖ + σ_n(α_n/σ_n)(‖ϑ_n − ϑ_{n−1}‖ + ‖ω_n‖) + σ_n(β_n/σ_n)(‖ν_{n−1} − ϑ_{n−1}‖ + ‖ω_n‖) ≤ ‖ϑ_n − ϑ*‖ + 2σ_n M₁.
Then, for some constant M₅ > 0,
‖ν_n − ϑ*‖² ≤ (‖ϑ_n − ϑ*‖ + 2σ_n M₁)² = ‖ϑ_n − ϑ*‖² + σ_n(4M₁‖ϑ_n − ϑ*‖ + 4σ_n M₁²) ≤ ‖ϑ_n − ϑ*‖² + σ_n M₅. (22)
Combining (21) and (22), we get for all n > n₀ that
‖ϑ_{n+1} − ϑ*‖² ≤ ‖ϑ_n − ϑ*‖² + σ_n M₅ + γ_n²‖A*(I₂ − D_n)A ν_n‖² + σ_n M₄ + ϕ_n‖ω_{n+1} − ϑ*‖² − (γ_n/ι)‖(I₂ − D_n)A ν_n‖² = ‖ϑ_n − ϑ*‖² + σ_n M₅ + σ_n(γ_n²/σ_n)‖A*(I₂ − D_n)A ν_n‖² + σ_n M₄ + σ_n(ϕ_n/σ_n)‖ω_{n+1} − ϑ*‖² − (γ_n/ι)‖(I₂ − D_n)A ν_n‖² ≤ ‖ϑ_n − ϑ*‖² + σ_n(M₄ + M₅ + M₆) − (γ_n/ι)‖(I₂ − D_n)A ν_n‖², (23)
where M₆ = sup_{n∈ℕ} {(γ_n²/σ_n)‖A*(I₂ − D_n)A ν_n‖² + (ϕ_n/σ_n)‖ω_{n+1} − ϑ*‖²} < ∞.
Now we set
ϱ_n = (γ_n/ι)‖(I₂ − D_n)A ν_n‖²
and
θ_n = σ_n(M₄ + M₅ + M₆), Γ_n = ‖ϑ_n − ϑ*‖².
Thus, (23) can be reformulated as
Γ_{n+1} ≤ Γ_n − ϱ_n + θ_n. (24)
It is easy to see that lim_{n→∞} θ_n = 0. To establish that Γ_n → 0 according to Lemma 2.5 (taking into account (20) and (24)), it is enough to demonstrate that, for any subsequence {n_k} ⊂ {n}, lim_{k→∞} ϱ_{n_k} = 0 implies
lim sup_{k→∞} ψ_{n_k} ≤ 0.
We suppose that lim_{k→∞} ϱ_{n_k} = 0. When (I₂ − D_{n_k})A ν_{n_k} = 0, it is clear that
lim_{k→∞} (γ/ι)‖(I₂ − D_{n_k})A ν_{n_k}‖² = 0.
Otherwise, as indicated by (7), it follows that
lim_{k→∞} (γ_{n_k}/ι)‖(I₂ − D_{n_k})A ν_{n_k}‖² = 0.
By our assumption we get
lim_{k→∞} (γ_{n_k}/ι)‖(I₂ − D_{n_k})A ν_{n_k}‖² = lim_{k→∞} (ρ_{n_k}/ι)·‖(I₂ − D_{n_k})A ν_{n_k}‖⁴/‖A*(I₂ − D_{n_k})A ν_{n_k}‖²,
which implies that
lim_{k→∞} ‖(I₂ − D_{n_k})A ν_{n_k}‖⁴/‖A*(I₂ − D_{n_k})A ν_{n_k}‖² = 0.
Further, we obtain
lim_{k→∞} ‖(I₂ − D_{n_k})A ν_{n_k}‖²/‖A*(I₂ − D_{n_k})A ν_{n_k}‖ = 0.
Since A is a bounded linear operator, we can conclude that
‖A*(I₂ − D_{n_k})A ν_{n_k}‖ ≤ ‖A‖·‖(I₂ − D_{n_k})A ν_{n_k}‖.
Hence, considering that A ≠ 0, we have
(1/‖A‖)‖(I₂ − D_{n_k})A ν_{n_k}‖ = ‖(I₂ − D_{n_k})A ν_{n_k}‖²/(‖A‖·‖(I₂ − D_{n_k})A ν_{n_k}‖) ≤ ‖(I₂ − D_{n_k})A ν_{n_k}‖²/‖A*(I₂ − D_{n_k})A ν_{n_k}‖.
Using the relations above, it follows that
lim_{k→∞} ‖(I₂ − D_{n_k})A ν_{n_k}‖ = 0.
Since lim_{n→∞} σ_n = 0 and the sequence {F ς_n} is bounded, it follows that
‖ϑ_{n_k+1} − ς_{n_k}‖ = σ_{n_k}‖F ς_{n_k}‖ → 0 as k → ∞.
It is evident that, as n → ∞,
‖ν_n − ϑ_n‖ ≤ α_n(‖ϑ_n − ϑ_{n−1}‖ + ‖ω_n‖) + β_n(‖ν_{n−1} − ϑ_{n−1}‖ + ‖ω_n‖) = σ_n(α_n/σ_n)(‖ϑ_n − ϑ_{n−1}‖ + ‖ω_n‖) + σ_n(β_n/σ_n)(‖ν_{n−1} − ϑ_{n−1}‖ + ‖ω_n‖) → 0.
Further, according to (7) and the limit above, we have
lim_{k→∞} ‖ν_{n_k} − z_{n_k}‖ = lim_{k→∞} γ_{n_k}‖A*(I₂ − D_{n_k})A ν_{n_k}‖ = lim_{k→∞} ρ_{n_k}‖(I₂ − D_{n_k})A ν_{n_k}‖²/‖A*(I₂ − D_{n_k})A ν_{n_k}‖ = 0.
From the above inequalities we arrive at
‖ϑ_{n_k+1} − ϑ_{n_k}‖ ≤ ‖ϑ_{n_k+1} − ς_{n_k}‖ + ‖ς_{n_k} − ν_{n_k}‖ + ‖ν_{n_k} − ϑ_{n_k}‖ → 0 as k → ∞.
According to the arbitrariness of {n_k}, we have
‖(I₂ − D_n)A ν_n‖ → 0 and ‖ϑ_{n+1} − ϑ_n‖ → 0 as n → ∞.
By Algorithm 3.1, we see that
‖ν_n − ϑ_n‖ ≤ α_n‖ϑ_n − ϑ_{n−1}‖ + β_n‖ν_{n−1} − ϑ_{n−1}‖ = α_n(‖ϑ_n − ϑ_{n−1}‖ + ‖ω_n‖) + β_n(‖ν_{n−1} − ϑ_{n−1}‖ + ‖ω_n‖) − λ_n‖ω_n‖ ≤ σ_n(α_n/σ_n)(‖ϑ_n − ϑ_{n−1}‖ + ‖ω_n‖) + σ_n(β_n/σ_n)(‖ν_{n−1} − ϑ_{n−1}‖ + ‖ω_n‖) − λ_n‖ω_n‖ ≤ 2σ_n M₁ − λ_n‖ω_n‖;
then we get
λ_n‖ω_n‖ ≤ 2σ_n M₁ − ‖ν_n − ϑ_n‖ ≤ 2σ_n M₁,
which means that
lim_{n→∞} λ_n‖ω_n‖ ≤ lim_{n→∞} 2σ_n M₁ = 0,
and consequently
lim_{n→∞} ‖ω_n‖ = 0.
Given that {ϑ_{n_k}} is bounded, there exists a subsequence {ϑ_{n_{k_l}}} that converges weakly to some ϑ̂. Without loss of generality, we assume that the entire sequence {ϑ_{n_k}} converges weakly to ϑ̂.
At the same time, since ‖ν_n − ϑ_n‖ → 0, it follows that ν_{n_k} ⇀ ϑ̂ and A ν_{n_k} ⇀ A ϑ̂ as k → ∞. Moreover, since ‖ν_n − z_n‖ → 0 and λ_n‖ω_n‖ → 0, we have z_{n_k} + λ_{n_k} ω_{n_k} ⇀ ϑ̂ as k → ∞. Noting that the pool of indices is finite and {ϑ_n} is asymptotically regular, for any 1 ≤ i ≤ p we can select a subsequence {n_{i_m}} ⊂ {n} such that ϑ_{n_{i_m}} ⇀ ϑ̂ and z_{n_{i_m}} + λ_{n_{i_m}} ω_{n_{i_m}} ⇀ ϑ̂ as m → ∞, with [n_{i_m}]_1 = i for all m. Consequently, we find that
lim_{m→∞} ‖(I₁ − J_{n_{i_m},i})(z_{n_{i_m}} + λ_{n_{i_m}} ω_{n_{i_m}})‖ = lim_{m→∞} ‖(I₁ − J_{n_{i_m},[n_{i_m}]_1})(z_{n_{i_m}} + λ_{n_{i_m}} ω_{n_{i_m}})‖ = lim_{m→∞} (1/κ_{n_{i_m}})‖ω_{n_{i_m}+1}‖ = 0. (32)
Similarly, for any 1 ≤ j ≤ r, we can select a subsequence {n_{j_s}} ⊂ {n} such that A ν_{n_{j_s}} ⇀ A ϑ̂ as s → ∞ and [n_{j_s}]_2 = j for all s. It turns out that
lim_{s→∞} ‖(I₂ − D_{n_{j_s},j})A ν_{n_{j_s}}‖ = lim_{s→∞} ‖(I₂ − D_{n_{j_s},[n_{j_s}]_2})A ν_{n_{j_s}}‖ = lim_{s→∞} (1/ι_{n_{j_s}})‖(I₂ − D_{n_{j_s}})A ν_{n_{j_s}}‖ = 0. (33)
Since I₁ − S_i (1 ≤ i ≤ p) and I₂ − T_j (1 ≤ j ≤ r) are demiclosed at 0, it follows from Lemma 2.1 that I₁ − J_{n,i} (1 ≤ i ≤ p) and I₂ − D_{n,j} (1 ≤ j ≤ r) are also demiclosed at 0. Hence, (32) and (33) yield ϑ̂ ∈ ∩_{i=1}^p Fix(J_{n,i}) and A ϑ̂ ∈ ∩_{j=1}^r Fix(D_{n,j}), and then, by Lemma 2.1, ϑ̂ ∈ ∩_{i=1}^p Fix(S_i) and A ϑ̂ ∈ ∩_{j=1}^r Fix(T_j); that is, ϑ̂ ∈ Ω.
Subsequently, we demonstrate that
lim sup_{k→∞} ⟨F ϑ*, ϑ* − ϑ_{n_k}⟩ ≤ 0.
To demonstrate this inequality, we select a subsequence {ϑ_{n_{k_l}}} from {ϑ_{n_k}} such that
lim_{l→∞} ⟨F ϑ*, ϑ* − ϑ_{n_{k_l}}⟩ = lim sup_{k→∞} ⟨F ϑ*, ϑ* − ϑ_{n_k}⟩.
Given that ϑ* is the unique solution of the variational inequality (4) and ϑ_{n_{k_l}} converges weakly to ϑ̂ ∈ Ω, we can deduce that
lim sup_{k→∞} ⟨F ϑ*, ϑ* − ϑ_{n_k}⟩ = lim_{l→∞} ⟨F ϑ*, ϑ* − ϑ_{n_{k_l}}⟩ = ⟨F ϑ*, ϑ* − ϑ̂⟩ ≤ 0.
Since ‖ϑ_{n_k+1} − ϑ_{n_k}‖ → 0, we get
lim sup_{k→∞} ⟨F ϑ*, ϑ* − ϑ_{n_k+1}⟩ = lim sup_{k→∞} ⟨F ϑ*, ϑ* − ϑ_{n_k}⟩ ≤ 0
and therefore
lim sup_{k→∞} ψ_{n_k} ≤ 0.
Therefore, all the conditions specified in Lemma 2.5 are met. As a result, we can directly conclude that lim_{n→∞} Γ_n = lim_{n→∞} ‖ϑ_n − ϑ*‖² = 0. This implies that the sequence {ϑ_n} converges strongly to ϑ*, the unique solution of the variational inequality (4). □

4. Application to MSFP and a Numerical Example

It is widely recognized that the projection operator P_Q onto a nonempty closed convex subset Q is firmly nonexpansive, which implies that I − P_Q is demiclosed at 0. Now, as an application, we will solve the MSFP (1).
Theorem 4.1.
Let H₁ and H₂ be real Hilbert spaces, and for all i = 1, 2, …, p and j = 1, 2, …, r, let C_i ⊆ H₁ and Q_j ⊆ H₂ be nonempty closed convex subsets. Suppose that A, F, [n]_1, [n]_2, J_n, D_n, {ϕ_n}, {ε_n}, and {σ_n} are the same as in Assumption 3.1. Additionally, if Ω̄ := {ῡ ∈ H₁ : ῡ ∈ ∩_{i=1}^p C_i and A ῡ ∈ ∩_{j=1}^r Q_j} ≠ ∅, 0 ≤ lim inf_{n→∞} ρ_n ≤ lim sup_{n→∞} ρ_n < 2, lim_{n→∞} ρ_n/σ_n = 0, and, for i ∈ {1, 2, …, p} and j ∈ {1, 2, …, r}, J_{n,i} = (1 − ξ_n)I₁ + ξ_n P_{C_i}((1 − η_n)I₁ + η_n P_{C_i}) and D_{n,j} = (1 − ξ_n)I₂ + ξ_n P_{Q_j}((1 − η_n)I₂ + η_n P_{Q_j}) with 0 < ξ_n < η_n < 1/(1 + √2), then the sequence {ϑ_n} generated by Algorithm 3.1 converges strongly to a point ϑ* ∈ Ω̄; that is, ϑ* is a solution of the MSFP (1), which is also the unique solution of the following HVIP:
⟨F ϑ*, z − ϑ*⟩ ≥ 0, ∀ z ∈ Ω̄.
Proof. 
Let l₁ = l₂ = 1, S_i = P_{C_i} for all i = 1, 2, …, p, and T_j = P_{Q_j} for each j = 1, 2, …, r in Assumption 3.1 (i). The conclusions then follow easily from Theorem 3.1. □
In the sequel, we showcase the effectiveness of the proposed Algorithm 3.1 by using it to solve the MSFP (1). The algorithm was implemented using MATLAB R2020a and executed on a laptop with an Intel(R) Core(TM) i5-8300H CPU @ 2.30GHz and 8.00GB of RAM.
Example 4.1.
For 1 ≤ i ≤ p and 1 ≤ j ≤ r, we select subsets C_i ⊆ ℝ^M and Q_j ⊆ ℝ^N in the MSFP (1), defined by C_i := {ϑ ∈ ℝ^M : ⟨a_i^C, ϑ⟩ ≤ b_i^C} and Q_j := {ϑ ∈ ℝ^N : ⟨a_j^Q, ϑ⟩ ≤ b_j^Q}, respectively, where a_i^C ∈ ℝ^M, a_j^Q ∈ ℝ^N, and b_i^C, b_j^Q ∈ ℝ. The components of a_i^C and a_j^Q are randomly selected from the closed interval [1, 3], and b_i^C and b_j^Q are randomly chosen from the closed interval [2, 4]. Additionally, we define A = (â_{kl})_{N×M} as a bounded linear operator with entries â_{kl} randomly generated within the closed interval [20, 120]. Further, we define the function Φ : ℝ^M → ℝ by
Φ(ϑ) = ∑_{i=1}^p (1/p)‖ϑ − P_{C_i}(ϑ)‖² + ∑_{j=1}^r (1/r)‖A ϑ − P_{Q_j}(A ϑ)‖²,
and use the stopping rule Φ(ϑ) < ε = 10⁻²⁰. Set p = r = 10, α = 0.9, β = 0, F = I, and κ_n = ι_n = 1/2, ε_n = 1/n², ρ_n = 1/(n + 1)^{0.7}, σ_n = 1/log(n + 2), ϕ_n = 1/(log(n + 2))^{1.1}, and ρ_n = 1.95 for each n ≥ 1.
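For half-spaces the projections appearing in Φ have a closed form, which makes the stopping rule cheap to evaluate. The following NumPy sketch (helper names ours) computes the projection and the proximity function for sets given as (a, b) pairs:

```python
import numpy as np

def proj_halfspace(a, b, x):
    """Projection onto the half-space {x : <a, x> <= b}: subtract the positive
    part of the scaled constraint violation along a; firmly nonexpansive."""
    s = float(np.dot(a, x)) - b
    return x if s <= 0.0 else x - (s / float(np.dot(a, a))) * a

def Phi(x, A, Cs, Qs):
    """Proximity function of Example 4.1: averaged squared distances of x to
    the half-spaces C_i plus averaged squared distances of A x to the Q_j;
    Phi(x) = 0 exactly when x solves the MSFP."""
    Ax = A @ x
    t1 = np.mean([np.linalg.norm(x - proj_halfspace(a, b, x))**2 for a, b in Cs])
    t2 = np.mean([np.linalg.norm(Ax - proj_halfspace(a, b, Ax))**2 for a, b in Qs])
    return t1 + t2
```

Each `Cs`/`Qs` entry is a pair (a, b) describing one half-space; with p = r = 10 the lists simply hold the randomly generated data of the example.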
By applying Algorithm 3.1 to solve Example 4.1, we can select the inertial extrapolation factor α_n within the interval (0, ᾱ_n]. Specifically, by setting α_n = ϖ ᾱ_n, we can vary the inertial extrapolation factor by adjusting the parameter ϖ within the range (0, 1]. Setting ϑ₀ = 5e₁, ϑ₁ = 10e₁, and ω₀ = 10e₁, where e₁ = (1, 1, …, 1)^T, we compare different inertial extrapolation factors of Algorithm 3.1 across spaces of various dimensions. Table 1 and Table 2 present the iteration numbers and CPU times of Algorithm 3.1 for ϖ = 0.1, 0.2, 0.3, 0.4, 0.5, and 1 when the dimensions are (N, M) = (10, 15) and (N, M) = (50, 50), respectively.
Based on the data presented in Table 1 and Table 2, Algorithm 3.1 converges considerably faster than algorithm (1.3) across the tested dimensions. The computational results also indicate that, among the tested values of the adjustment parameter, ϖ = 1 yields the best performance of Algorithm 3.1 in every case.

5. Conclusions

In this paper, we introduced a novel algorithm for approximating solutions of strongly monotone variational inequality problems over the solution set of the split common fixed point problem with quasi-pseudocontractive mappings. Our method integrates a self-adaptive step size, which removes the need for prior knowledge of the operator norm, and incorporates an inertial method with a correction term to accelerate convergence. We proved a strong convergence theorem under reasonable conditions and applied our results to solve a multiple-set split feasibility problem; numerical experiments further validated the effectiveness of the proposed algorithm. The problem addressed here has significant potential for practical applications, including image recognition, signal processing, and machine learning, and our findings contribute to the development of more efficient and practical iterative methods in these fields.

Author Contributions

Conceptualization, Y.Y. and H.-Y.L.; methodology, Y.Y.; software, Y.Y.; validation, Y.Y. and H.-Y.L.; formal analysis, Y.Y.; investigation, Y.Y.; resources, Y.Y.; data curation, Y.Y.; writing-original draft preparation, Y.Y.; writing-review and editing, H.-Y.L.; visualization, H.-Y.X.; supervision, H.-Y.L.; project administration, H.-Y.L.; funding acquisition, H.-Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Teaching Construction Project of Postgraduate, Sichuan University of Science & Engineering (Y2023331) and the Scientific Research and Innovation Team Program of Sichuan University of Science and Engineering (SUSE652B002).

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

Abbreviations

The following abbreviations are used in this manuscript:
HVIPs hierarchical variational inequality problems
SFP split feasibility problem
IMRT intensity-modulated radiation therapy
MSFP multiple-sets split feasibility problem
SCFPs split common fixed point problems
MSCFPs multiple-sets split common fixed point problems

References

  1. Eslamian, M.; Kamandi, A. A novel method for hierarchical variational inequality with split common fixed point constraint. J. Appl. Math. Comput. 2024, 70, 1837–1857.
  2. Eslamian, M.; Kamandi, A. Hierarchical variational inequality problem and split common fixed point of averaged operators. J. Comput. Appl. Math. 2024, 437, 17.
  3. Jiang, B.N.; Wang, Y.H.; Yao, J.C. Two new multi-step inertial regularized algorithms for the hierarchical variational inequality problem with a generalized Lipschitzian mapping. J. Nonlinear Convex Anal. 2024, 25, 99–121.
  4. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
  5. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120.
  6. Censor, Y.; Elfving, T.; Kopf, N.; Bortfeld, T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21, 2071–2084.
  7. Censor, Y.; Bortfeld, T.; Martin, B.; Trofimov, A. A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51, 2353–2365.
  8. Censor, Y.; Segal, A. The split common fixed point problem for directed operators. J. Convex Anal. 2009, 16, 587–600.
  9. Moudafi, A. Split monotone variational inclusions. J. Optim. Theory Appl. 2011, 150, 275–283.
  10. Byrne, C.; Censor, Y.; Gibali, A.; Reich, S. Weak and strong convergence of algorithms for the split common null point problem. J. Nonlinear Convex Anal. 2012, 13, 759–775.
  11. Censor, Y.; Gibali, A.; Reich, S. Algorithms for the split variational inequality problem. Numer. Algorithms 2012, 59, 301–323.
  12. He, Z. The split equilibrium problem and its convergence algorithms. J. Inequal. Appl. 2012, 2012, 162.
  13. Lorenz, D.A.; Schöpfer, F.; Wenger, S. The linearized Bregman method via split feasibility problems: Analysis and generalizations. SIAM J. Imaging Sci. 2014, 7, 1237–1262.
  14. He, H.J.; Ling, C.; Xu, H.K. An implementable splitting algorithm for the ℓ1-norm regularized split feasibility problem. J. Sci. Comput. 2016, 67, 281–298.
  15. Jirakitpuwapat, W.; Kumam, P.; Cho, Y.J.; Sitthithakerngkiet, K. A general algorithm for the split common fixed point problem with its applications to signal processing. Mathematics 2019, 7.
  16. Sahu, D.R.; Pitea, A.; Verma, M. A new iteration technique for nonlinear operators as concerns convex programming and feasibility problems. Numer. Algorithms 2020, 83, 421–449.
  17. Usurelu, G.I. Split feasibility handled by a single-projection three-step iteration with comparative analysis. J. Nonlinear Convex Anal. 2021, 22, 543–557.
  18. Gupta, N.; Postolache, M.; Nandal, A.; Chugh, R. A cyclic iterative algorithm for multiple-sets split common fixed point problem of demicontractive mappings without prior knowledge of operator norm. Mathematics 2021, 9, 19.
  19. Zhao, J.; Wang, H.; Zhao, N. Accelerated cyclic iterative algorithms for the multiple-set split common fixed-point problem of quasi-nonexpansive operators. J. Nonlinear Var. Anal. 2023, 7, 1–22.
  20. Wang, F.H.; Xu, H.K. Cyclic algorithms for split feasibility problems in Hilbert spaces. Nonlinear Anal. 2011, 74, 4105–4111.
  21. Zhao, J.; Zhao, N.; Hou, D. Inertial accelerated algorithms for the split common fixed-point problem of directed operators. Optimization 2021, 70, 1375–1407.
  22. Chang, S.S.; Wang, L.; Zhao, Y.H.; et al. Split common fixed point problem for quasi-pseudocontractive mapping in Hilbert spaces. Bull. Malays. Math. Sci. Soc. 2021, 44, 1155–1166.
  23. Kim, D. Accelerated proximal point method for maximally monotone operators. Math. Program. 2021, 190, 57–87.
  24. Maingé, P.E. Accelerated proximal algorithms with a correction term for monotone inclusions. Appl. Math. Optim. 2021, 84, 2027–2061.
  25. Taiwo, A.; Jolaoso, L.O.; Mewomo, O.T. Viscosity approximation method for solving the multiple-set split equality common fixed-point problems for quasi-pseudocontractive mappings in Hilbert spaces. J. Ind. Manag. Optim. 2021, 17, 2733–2759.
  26. Yamada, I. The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications; Butnariu, D., Censor, Y., Reich, S., Eds.; Elsevier: Amsterdam, The Netherlands, 2001.
Table 1. Numerical results with different α n , where α n = ϖ α ¯ n , N = 10 , M = 15 .

                                    ϖ = 0.1   ϖ = 0.2   ϖ = 0.3   ϖ = 0.4   ϖ = 1
algorithm (1.3), λ = 0.93   Iter        292       252       212       171      16
                          CPU(s)     0.0116    0.0109    0.0179    0.0069   0.0010
Algorithm 3.1               Iter          6         6         6         6       5
                          CPU(s)     0.0012    0.0012    0.0008    0.0006   0.0005
Table 2. Numerical results with different α n , where α n = ϖ α ¯ n , N = 50 , M = 50 .

                                    ϖ = 0.1   ϖ = 0.2   ϖ = 0.3   ϖ = 0.4   ϖ = 1
algorithm (1.3), λ = 0.98   Iter        267       227       187       139       8
                          CPU(s)     0.0226    0.0131    0.0115    0.0070   0.0007
Algorithm 3.1               Iter          8         7         6         6       6
                          CPU(s)     0.0023    0.0013    0.0012    0.0004   0.0004