Preprint
Article

High Order Difference Schemes for Space Riesz Variable-Order Nonlinear Fractional Diffusion Equations

This version is not peer-reviewed.
Submitted: 25 September 2024; Posted: 26 September 2024
Abstract
This article is aimed at studying new finite difference methods for one-dimensional (1D) and two-dimensional (2D) space Riesz variable-order (VO) nonlinear fractional diffusion equations (SRVONFDEs). In the presented model, fractional derivatives are defined of Riemann-Liouville type. Based on 4-point weighted-shifted-Grünwald-difference (4WSGD) operators for Riemann-Liouville constant-order (CO) fractional derivatives, which have a free parameter and at least third order accuracy, we derive 4WSGD operators for space Riesz VO fractional derivatives. In order that the fully discrete schemes have good stability and can handle the nonlinear term efficiently, we apply the implicit Euler (IE) method to discretize the time derivative, which leads to IE-4WSGD schemes for SRVONFDEs. The stability and convergence of the IE-4WSGD schemes are analysed theoretically. In addition, a parameter selection strategy is derived for the 4WSGD schemes, and banded preconditioners are put forward to accelerate the GMRES methods for solving the discretized linear systems. Numerical results demonstrate the effectiveness of the proposed schemes and preconditioners.
Keywords: 
Subject: Computer Science and Mathematics  -   Computational Mathematics

MSC:  65M06; 26A33; 65M12

1. Introduction

Fractional differential equations have been successfully applied in many fields [2,3,4,36] and have attracted the attention of many scientists, and many researchers have investigated different numerical methods for them. Commonly used discretization methods include the finite difference method, the finite element method, and the finite volume method [14,20,34]. In general, the resulting discrete algebraic systems are solved by suitable preconditioned iteration methods; see for instance [18,35].
Since constant-order (CO) fractional differential equations fail to describe some complex transport diffusion processes [6,13], variable-order fractional derivatives have been proposed and variable-order fractional differential equations have received more and more attention [7,15,19,28,37,43]. Samko and Ross first gave the definition of variable-order (VO) fractional derivatives [32]. Lorenzo and Hartley derived different types of VO fractional calculus [21]. For applications of VO fractional differential equations, we refer readers to [15,37,43].
In recent years, many scholars have been studying approximation schemes for variable-order fractional differential equations. In [19], the relationship between the Riemann-Liouville VO fractional derivative and the Grünwald-Letnikov expansion was developed, and in [45], first order explicit and implicit Euler methods were used to discretize variable-order fractional differential equations. Du, Alikhanov and Sun proposed a second order approximation scheme for the time Caputo VO derivative by using special points on each time interval [10]. By utilizing the relationship between VO and CO fractional derivatives, VO-WSGD operators for Riemann-Liouville VO fractional derivatives were derived in [17], and the theoretical results show that the proposed schemes have at least second order accuracy. Here WSGD is the abbreviation of weighted-shifted-Grünwald-difference. Kheirkhah, Hajipour and Baleanu proposed a third-order WSGD formula for the Caputo derivative for one-dimensional (1D) and two-dimensional (2D) time-fractional advection-reaction-subdiffusion equations with variable order α ( x , t ) ∈ ( 0 , 1 ) [16]. Wang, She, Lao and Lin proposed second-order FCD and fourth-order WSFCD operators for space Riesz VO nonlinear fractional diffusion equations (SRVONFDEs) [42]. However, the fourth-order WSFCD operator may cause stability problems when
$\alpha(x,t) \in \left(1,\; -3 + \sqrt[3]{48 + \sqrt{\tfrac{62207}{27}}} + \sqrt[3]{48 - \sqrt{\tfrac{62207}{27}}}\right) \approx (1, 1.6516).$
The aim of this work is to develop new difference schemes for the Riesz VO fractional derivative by 4-point weighted-shifted-Grünwald-difference (4WSGD) operators. Similar to [17,42], by applying the relationship between the Riesz CO and VO fractional derivatives, we obtain VO-4WSGD operators for Riesz VO fractional derivative. Then we employ the VO-4WSGD operators to SRVONFDEs.
In this paper, we first consider the following initial-boundary value (IBV) problem for a 1D SRVONFDE:
u ( x , t ) t = K ( x , t ) α ( x , t ) u ( x , t ) | x | α ( x , t ) + f ( u , x , t ) , ( x , t ) ( a , b ) × ( 0 , T ] , u ( x , 0 ) = u 0 ( x ) , x ( a , b ) , u ( a , t ) = u a ( t ) = 0 , u ( b , t ) = u b ( t ) = 0 , t [ 0 , T ] ,
where 0 < K C ̲ K C ( x , t ) = K ( x , t ) C α ( x , t ) K C ¯ for certain positive constants K C ̲ and K C ¯ , f ( u , x , t ) is a nonlinear source term satisfying the Lipschitz condition
| f ( u 1 , x , t ) f ( u 2 , x , t ) | L | u 1 u 2 |
for a certain positive constant L, α ( x , t ) u ( x , t ) | x | α ( x , t ) denotes the α ( x , t ) th order Riesz VO fractional derivative (with regard to x) with 1 < α ̲ α ( x , t ) α ¯ < 2 . Here, the Riesz VO fractional derivative is defined by [30,45]
α ( x , t ) u ( x , t ) | x | α ( x , t ) = C α ( x , t ) D x α ( x , t ) a R + x R D b α ( x , t ) u ( x , t ) , C α ( x , t ) = 1 2 cos π α ( x , t ) 2 ,
where D x α ( x , t ) a R and D b α ( x , t ) x R are the left-sided and right-sided Riemann-Liouville VO fractional derivatives defined by
D x α ( x , t ) a R u ( x , t ) = 1 Γ ( 2 α ( x , t ) ) d 2 d ξ 2 a ξ u ( η , t ) ( ξ η ) α ( x , t ) 1 d η ξ = x , x ( a , b ) ,
D b α ( x , t ) x R u ( x , t ) = ( 1 ) 2 Γ ( 2 α ( x , t ) ) d 2 d ξ 2 ξ b u ( η , t ) ( η ξ ) α ( x , t ) 1 d η ξ = x , x ( a , b ) .
The 2D problem will be discussed in Section 5.
The rest of the paper is organised as follows. In Section 2, we derive VO-4WSGD operators for Riemann-Liouville VO fractional derivatives and analyze their accuracy. In Section 3, we obtain IE-4WSGD schemes for the 1D SRVONFDE and derive their stability and convergence. In Section 4, two banded preconditioners are proposed for the discrete systems of linear equations, and the spectral radius and the condition number of the preconditioned systems are estimated. Extension of the IE-4WSGD scheme to the 2D SRVONFDE is discussed in Section 5. Numerical experiments are carried out in Section 6 to verify our theoretical results, and concluding remarks are given in Section 7.

2. 4WSGD Operators for Riemann-Liouville VO Fractional Derivatives

Let α > 0 and p ∈ N * . The shifted Grünwald operator for the left-sided fractional derivative D x α R u ( x ) is
A h , p α L u ( x ) = 1 h α k = 0 g k ( α ) u ( x ( k p ) h ) ,
where $g_k^{(\alpha)} = (-1)^k \binom{\alpha}{k}$, i.e.,
g 0 ( α ) = 1 , g k ( α ) = ( 1 ) k k ! j = 0 k 1 ( α j ) , k = 1 , 2 , .
As a matter of fact, g k ( α ) satisfies
( 1 z ) α = k = 0 g k ( α ) z k , | z | < 1 .
To approximate the left-sided Riemann-Liouville fractional derivative, the A h , p α L has first-order accuracy:
A h , p α L u ( x ) = R D x α u ( x ) + O ( h ) ;
see [22]. Similarly, for the right-sided Riemann-Liouville fractional derivative, the shifted Grünwald operator is
A h , p α R u ( x ) = 1 h α k = 0 g k ( α ) u ( x + ( k p ) h )
and
A h , p α R u ( x ) = D α x R u ( x ) + O ( h ) .
In [34], a class of third-order 4WSGD operators is derived, where γ p ( p = − 1 , 0 , 1 , 2 ) denote the weights, which satisfy
$\gamma_2 = \gamma, \quad \gamma_1 = \frac{3\alpha^2 + 5\alpha}{24} - 3\gamma, \quad \gamma_0 = 1 + 3\gamma + \frac{\alpha - 3\alpha^2}{12}, \quad \gamma_{-1} = \frac{3\alpha^2 - 7\alpha}{24} - \gamma,$
where γ is a free parameter. The left 4WSGD operator is defined by
D h , γ α L u ( x ) = γ 2 · A h , 2 α L u ( x ) + γ 1 · A h , 1 α L u ( x ) + γ 0 · A h , 0 α L u ( x ) + γ 1 · A h , 1 α L u ( x ) = 1 h α k = 0 ω k , γ ( α ) u ( x ( k 2 ) h ) ,
where
ω k , γ ( α ) = γ 2 g k ( α ) + γ 1 g k 1 ( α ) + γ 0 g k 2 ( α ) + γ 1 g k 3 ( α )
with g 1 ( α ) = g 2 ( α ) = g 3 ( α ) = 0 and g k ( α ) for k 0 being defined by (2). Similarly, the right 4WSGD operator is defined by
D h , γ α R u ( x ) = 1 h α k = 0 ω k , γ ( α ) u ( x + ( k 2 ) h ) .
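For concreteness, the following short NumPy sketch (not part of the original paper; function names are illustrative) evaluates the Grünwald coefficients $g_k^{(\alpha)}$ via the recursion implied by (2) and the 4WSGD weights $\omega_{k,\gamma}^{(\alpha)}$ of (6) with the weights (4) as reconstructed above:

```python
import numpy as np

def grunwald_coeffs(alpha, K):
    """g_k^{(alpha)} = (-1)^k * C(alpha, k), k = 0..K, via the recursion
    g_k = (1 - (alpha + 1)/k) * g_{k-1} implied by Eq. (2)."""
    g = np.empty(K + 1)
    g[0] = 1.0
    for k in range(1, K + 1):
        g[k] = (1.0 - (alpha + 1.0) / k) * g[k - 1]
    return g

def wsgd4_weights(alpha, gamma, K):
    """omega_{k,gamma}^{(alpha)}, k = 0..K, from Eq. (6) with the weights (4)."""
    a = alpha
    c2 = gamma
    c1 = (3*a**2 + 5*a) / 24.0 - 3.0*gamma
    c0 = 1.0 + 3.0*gamma + (a - 3*a**2) / 12.0
    cm1 = (3*a**2 - 7*a) / 24.0 - gamma
    g = grunwald_coeffs(alpha, K)
    gpad = np.concatenate(([0.0, 0.0, 0.0], g))   # g_{-3} = g_{-2} = g_{-1} = 0
    # omega_k = c2*g_k + c1*g_{k-1} + c0*g_{k-2} + cm1*g_{k-3}
    return c2*gpad[3:] + c1*gpad[2:-1] + c0*gpad[1:-2] + cm1*gpad[:-3]

if __name__ == "__main__":
    alpha = 1.5
    gamma = (alpha**3 - alpha**2 - 4*alpha) / 48.0   # the fourth-order choice of Theorem 6
    w = wsgd4_weights(alpha, gamma, 2000)
    print(abs(w.sum()))   # close to 0; the truncation tail decays like K^{-alpha}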
The following theorem indicates that if the weights are given by (4) and u ( x ) is sufficiently smooth, then the corresponding 4WSGD scheme has at least third-order accuracy.
Theorem 1.
[34] Assume u ( x ) L 1 ( R ) C 2 ( R ) , and D x α + 4 R u ( x ) , D α + 4 x R u ( x ) , F [ D x α + 4 u ( x ) ] ( R ) , and F [ D α + 4 x R u ( x ) ] ( R ) L 1 ( R ) , and γ 1 , γ 0 , γ 1 , γ 2 are given by (4), then
D x α R u ( x ) = D h , γ α L u ( x ) + η L , D α x R u ( x ) = D h , γ α R u ( x ) + η R ,
where
| η L | C L | γ α 3 α 2 4 α 48 | h 3 + O ( h 4 ) , | η R | C R | γ α 3 α 2 4 α 48 | h 3 + O ( h 4 )
with C L and C R being positive constants.
Remark 1.
From the inequalities for | η L | and | η R | , we know that the 4WSGD operators corresponding to $\gamma = \frac{\alpha^3 - \alpha^2 - 4\alpha}{48}$ have fourth order accuracy. But the 4WSGD operator with fourth order accuracy may not be suitable for variable-order fractional differential equations, since it often causes stability problems.
Now we present the 4WSGD operator for the Riemann-Liouville VO fractional derivatives. According to the concept of Riemann-Liouville VO fractional derivatives, for each α ( x n , t m ) , we can rewrite the definition as
D x α ( x n , t m ) a R u ( x , t m ) | x = x n = D ξ α ( x n , t m ) a R u ( ξ , t m ) ξ = x n ,
D b α ( x n , t m ) x R u ( x , t m ) | x = x n = D b α ( x n , t m ) ξ R u ( ξ , t m ) ξ = x n .
The formulas (7)–(8) inspire an idea to develop 4WSGD operators for the Riemann-Liouville VO fractional derivatives. From (5) and (7), we get 4WSGD operators for D x α ( x n , t m ) a R u ( x , t m ) | x = x n :
D h , γ α ( x n , t m ) L u ( x n , t m ) = D h , γ α ( x n , t m ) L u ( ξ , t m ) ξ = x n = 1 h α ( x n , t m ) k = 0 ω k , γ ( α ( x n , t m ) ) u ( ξ ( k 2 ) h , t m ) ξ = x n = 1 h α ( x n , t m ) k = 0 ω k , γ ( α ( x n , t m ) ) u ( x n ( k 2 ) h , t m ) ,
where ω k ( α ( x n , t m ) ) are given by
ω k , γ ( α ( x n , t m ) ) = γ 2 g k ( α ( x n , t m ) ) + γ 1 g k 1 ( α ( x n , t m ) ) + γ 0 g k 2 ( α ( x n , t m ) ) + γ 1 g k 3 ( α ( x n , t m ) ) ,
with
$\gamma_2 = \gamma, \quad \gamma_1 = -3\gamma + \frac{3(\alpha(x_n, t_m))^2 + 5\alpha(x_n, t_m)}{24}, \quad \gamma_0 = 3\gamma + \frac{\alpha(x_n, t_m) - 3(\alpha(x_n, t_m))^2}{12} + 1, \quad \gamma_{-1} = -\gamma + \frac{3(\alpha(x_n, t_m))^2 - 7\alpha(x_n, t_m)}{24},$
for k = 0 , 1 , 2 , . Here
g 1 ( α ( x n , t m ) ) = g 2 ( α ( x n , t m ) ) = g 3 ( α ( x n , t m ) ) = 0 , g 0 ( α ( x n , t m ) ) = 1 ,
g k ( α ( x n , t m ) ) = ( 1 ) k k ! j = 0 k 1 ( α ( x n , t m ) j ) , k = 1 , 2 , .
We refer readers to [17] for the idea of deriving formula (9). Similarly, 4WSGD operators for D α ( x n , t m ) x R u ( x , t m ) | x = x n are
D h , γ α ( x n , t m ) R u ( x n , t m ) = 1 h α ( x n , t m ) k = 0 ω k , γ ( α ( x n , t m ) ) u ( x n + ( k 2 ) h , t m ) .
We call the formulas (9) and (14) left and right VO-4WSGD operator, respectively.
Define the zero extension of u ( x , t ) to R × [ 0 , T ] by
u ˜ ( x , t ) = u ( x , t ) , ( x , t ) ( a , b ) × [ 0 , T ] , 0 , else .
Theorem 2.
Let 1 < α ̲ α ( x , t ) α ¯ < 2 and u ( x , t ) L 1 ( R ) C 2 ( R ) , and D x α ¯ + 4 R u ˜ ( x , t ) , D α ¯ + 4 x R u ˜ ( x , t ) and their Fourier transforms belong to L 1 ( R ) . Let D h , γ α ( x n , t m ) L u ˜ ( x n , t m ) and D h , γ α ( x n , t m ) R u ˜ ( x n , t m ) be the VO-4WSGD operators given by (9) and (14), respectively. Then
D x α ( x n , t m ) R u ˜ ( x , t m ) | x = x n = D h , γ α ( x n , t m ) L u ˜ ( x n , t m ) + η L ( x n , t m ) , D α ( x n , t m ) x R u ˜ ( x , t m ) | x = x n = D h , γ α ( x n , t m ) ) R u ˜ ( x n , t m ) ) + η R ( x n , t m ) ) ,
where
| η L | C L | γ α ( x n , t m ) 3 α ( x n , t m ) 2 4 α ( x n , t m ) 48 | h 3 + O ( h 4 ) ,
| η R | C R | γ α ( x n , t m ) 3 α ( x n , t m ) 2 4 α ( x n , t m ) 48 | h 3 + O ( h 4 )
with C L and C R being positive constants.
Proof. 
Since α ( x , t ) [ α ̲ , α ¯ ] ( 1 , 2 ) , for given x and t, we have that D ξ α ( x n , t m ) + 4 R u ˜ ( ξ , t m ) , D α ( x n , t m ) + 4 x R u ˜ ( ξ , t ) and their Fourier transforms belong to L 1 ( R ) . Thus, for α ( x n , t m ) , one gets by Theorem 1,
D ξ α ( x n , t m ) R u ˜ ( ξ , t m ) = D h , γ α ( x n , t m ) L u ˜ ( ξ , t m ) + η L ( ξ , t m ) .
In particular, by (7) we have
D x α ( x n , t m ) R u ˜ ( x , t m ) | x = x n = D ξ α ( x n , t m ) R u ˜ ( ξ , t m ) ξ = x n = D h , γ α ( x , t ) L u ˜ ( ξ , t m ) + η L ( ξ , t ) ξ = x n = D h , γ α ( x n , t m ) L u ˜ ( x n , t m ) + η L ( x n , t m ) ,
where | η L ( x n , t m ) | C L | γ α ( x n , t m ) 3 α ( x n , t m ) 2 4 α ( x n , t m ) 48 | h 3 + O ( h 4 ) .
Similarly, the relation between the right-sided Riemann-Liouville VO fractional derivative and the right-sided VO-4WSGD operator is obtained by using Theorem 1 and (8).□
Remark 2.
(i) Let u ( x , t ) be a function defined in [ a , b ] × [ 0 , T ] with u ( a , t ) = u ( b , t ) = 0 for t [ 0 , T ] , then D x α ( x n , t m ) a R u ( x , t m ) | x = x n can be approximated by
D h , γ α ( x n , t m ) L u ( x n , t m ) = 1 h α ( x n , t m ) k = 0 ω k , γ ( α ( x n , t m ) ) u ( x n ( k 2 ) h , t m ) = 1 h α ( x n , t m ) k = 0 ( x n a ) / h + 2 ω k , γ ( α ( x n , t m ) ) u ( x n ( k 2 ) h , t m ) ,
where x denotes the largest integer that is not greater than x. Similarly, D b α ( x n , t m ) x R u ( x , t m ) | x = x n can be approximated by
D h , γ α ( x n , t m ) R u ( x n , t m ) = 1 h α ( x n , t m ) k = 0 ( b x n ) / h + 2 ω k , γ ( α ( x n , t m ) ) u ( x n + ( k 2 ) h , t m ) .
(ii) Typically, for a discretization scheme with good properties, the free parameter γ , which depends on the value of α ( x , t ) , is of great importance.

3. IE-4WSGD Scheme for 1D SRVONFDE

3.1. IE-4WSGD Scheme

In this subsection, we derive IE-4WSGD schemes for the IBV problem of 1D SRVONFDE Eq (1). Let N , M N * , and let h = ( b a ) / N and τ = T / M , which represent the space grid size and the time step length, respectively. And denote
x n = a + n h , n = 0 , 1 , 2 , N , t m = m τ , m = 0 , 1 , 2 , M .
The following symbols are utilized
u n m = u ( x n , t m ) , α n m = α ( x n , t m ) , f e , n m = f ( u ( x n , t m ) , x n , t m ) , d n m = τ K C , n m h α n m , ( a R D x α n m u ) n m = D x α ( x n , t m ) a R u ( x , t m ) | x = x n , ( x R D b α n m u ) n m = D b α ( x n , t m ) x R u ( x , t m ) | x = x n .
Applying the implicit Euler (IE) formula
u ( x n , t m ) t = u ( x n , t m ) u ( x n , t m 1 ) τ + O ( τ )
to discretize u ( x , t ) t , one gains
$\frac{u_n^m - u_n^{m-1}}{\tau} = K_{C,n}^m \left[ ({}_a^R D_x^{\alpha_n^m} u)_n^m + ({}_x^R D_b^{\alpha_n^m} u)_n^m \right] + f_{e,n}^m + O(\tau).$
Approximating ( a R D x α n m u ) n m and ( x R D b α n m u ) n m by the left- and right-4WSGD operators respectively, and approximating f ( u n m , x n , t m ) by the Taylor expansion, we get the following systems:
u n m = u n m 1 + τ K C , n m h α n m k = 0 n + 2 ω k , γ ( α n m ) u n k + 2 m + k = 0 N n + 3 ω k , γ ( α n m ) u n + k 2 m + τ f e , n m 1 + τ R n m ,
where n = 1 , 2 , , N 1 , m = 1 , 2 , , M , | R n m | C ( τ + h l ) for a certain constant C > 0 with l { 3 , 4 } .
Let the numerical approximation of u n m by the IE-4WSGD scheme be denoted by U n m and let f n m = f ( U n m , x n , t m ) . By using u 0 m = u N m = 0 , u 1 m = u N + 1 m = 0 , we obtain the linear equations about { U n m : n = 1 , 2 , , N 1 , m = 1 , 2 , , M } :
U n m d n m k = 0 n + 2 ω k , γ ( α n m ) U n k + 2 m + k = 0 N n + 3 ω k , γ ( α n m ) U n + k 2 m = U n m 1 + τ f n m 1 .
Let
W L m = ω 2 , γ ( α 1 m ) ω 1 , γ ( α 1 m ) ω 0 , γ ( α 1 m ) 0 0 ω 3 , γ ( α 2 m ) ω 2 , γ ( α 2 m ) ω 1 , γ ( α 2 m ) ω 0 , γ ( α 2 m ) 0 0 0 ω N 1 , γ ( α N 2 m ) ω N 2 , γ ( α N 2 m ) ω N 3 , γ ( α N 2 m ) ω N 4 , γ ( α N 2 m ) ω 2 , γ ( α N 2 m ) ω 1 , γ ( α N 2 m ) ω N , γ ( α N 1 m ) ω N 1 , γ ( α N 1 m ) ω N 2 , γ ( α N 1 m ) ω N 3 , γ ( α N 1 m ) ω 3 , γ ( α N 1 m ) ω 2 , γ ( α N 1 m ) ,
W R m = ω 2 , γ ( α 1 m ) ω 3 , γ ( α 1 m ) ω 4 , γ ( α 1 m ) ω N 1 , γ ( α 1 m ) ω N , γ ( α 1 m ) ω 1 , γ ( α 2 m ) ω 2 , γ ( α 2 m ) ω 3 , γ ( α 2 m ) ω N 2 , γ ( α 2 m ) ω N 1 , γ ( α 2 m ) ω 0 , γ ( α 3 m ) ω 1 , γ ( α 3 m ) ω 2 , γ ( α 3 m ) ω N 3 , γ ( α 3 m ) ω N 2 , γ ( α 3 m ) 0 ω 0 , γ ( α 4 m ) ω 1 , γ ( α 4 m ) ω N 4 , γ ( α 4 m ) ω N 3 , γ ( α 4 m ) ω 2 , γ N 2 m ( α N 2 m ) ω 3 , γ N 2 m ( α N 2 m ) 0 0 ω 0 , γ ( α N 1 m ) ω 1 , γ ( α N 1 m ) ω 2 , γ ( α N 1 m ) ,
where
ω 0 , γ ( α j m ) = γ j m , ω 1 , γ ( α j m ) = ( 3 + α j m ) γ j m + 3 ( α j m ) 2 + 5 α j m 24 , ω 2 , γ ( α j m ) = ( α j m ) 2 + 5 α j m + 6 2 γ j m α j m ( 3 ( α j m ) 2 + 11 α j m 2 ) 24 , ω k , γ ( α j m ) = γ j m g k ( α j m ) + ( 3 γ j m + 3 ( α j m ) 2 + 5 α j m 24 ) g k 1 ( α j m ) + ( 3 γ j m + α j m 3 ( α j m ) 3 12 + 1 ) g k 2 ( α j m ) + ( γ j m + 3 ( α j m ) 2 7 α j m 24 ) g k 3 ( α j m ) , k 3 .
Denote
D m = diag d 1 m , d 2 m , , d N 1 m , U m = ( U 1 m , U 2 m , , U N 1 m ) T , F m = ( f 1 m , f 2 m , , f N 1 m 1 ) T .
We get the matrix form of Eq. (15):
A m U m = U m 1 + τ F m 1 , m = 1 , 2 , , M ,
where
A m = I D m W m ,
where W m = W L m + W R m . Obviously, W m = [ w i j m ] i , j = 1 N 1 satisfy
w i j m = 2 ω 2 , γ ( α i m ) , j = i , ω 1 , γ ( α i m ) + ω 3 , γ ( α i m ) , j = i ± 1 , ω 0 , γ ( α i m ) + ω 4 , γ ( α i m ) , j = i ± 2 , ω | i j | + 2 , γ ( α i m ) , | i j | > 2 .
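As an illustration of the linear system (19) with A m = I − D m W m as in (20)–(21), the following sketch assembles the dense matrix row by row. It is only meant to show the index pattern of (21), not the authors' implementation; it reuses wsgd4_weights from the sketch in Section 2.

```python
import numpy as np
# assumes wsgd4_weights(alpha, gamma, K) from the sketch in Section 2 is in scope

def assemble_1d(alpha_row, gamma_row, K_C_row, h, tau):
    """Dense assembly of A^m = I - D^m W^m, Eqs. (20)-(21); the arrays hold
    alpha_i^m, gamma_i^m, K_{C,i}^m at the interior nodes i = 1,...,N-1."""
    alpha_row = np.asarray(alpha_row, dtype=float)
    K_C_row = np.asarray(K_C_row, dtype=float)
    n = alpha_row.size                       # n = N - 1
    W = np.zeros((n, n))
    for i in range(n):
        w = wsgd4_weights(alpha_row[i], gamma_row[i], n + 2)
        for j in range(n):
            s = 0.0
            if i - j + 2 >= 0:               # contribution of the left operator W_L^m
                s += w[i - j + 2]
            if j - i + 2 >= 0:               # contribution of the right operator W_R^m
                s += w[j - i + 2]
            W[i, j] = s
    d = tau * K_C_row / h ** alpha_row       # d_i^m = tau K_{C,i}^m / h^{alpha_i^m}
    A = np.eye(n) - d[:, None] * W
    return A, W, d
```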

3.2. Stability and Convergence Analysis

In this subsection, we analyze the stability and convergence of the IE-4WSGD scheme (15). We note that W L m , W R m and W m are not Toeplitz matrices since the variable order α ( x , t ) depends on the space variable x (and the time variable t). Recalling that D m are diagonal matrices with nonnegative entries, we put forward conditions for W m to be diagonally dominant with negative diagonal entries, which is sufficient for the IE-4WSGD scheme to be stable.
Lemma 1.
[27,41] For n = 0 , 1 , 2 , . . . , N , m = 1 , 2 , . . . , M and α n m ( 1 , 2 ) , the coefficient { g k ( α n m ) } k 0 satisfy
(i)
g 0 ( α n m ) = 1 , g 1 ( α n m ) = α n m , g k ( α n m ) > 0 k 2 ;
(ii)
k = 0 g k ( α n m ) = 0 , k = 0 q g k ( α n m ) < 0 , q 1 ;
(iii)
α n m ( α n m 1 ) ( 2 α n m ) ( 3 α n m ) 180 4 k α n m + 1 < g k ( α n m ) α n m ( α n m 1 ) 2 3 k + 1 α n m + 1 , k 2 ;
(iv)
( α n m 1 ) ( 2 α n m ) ( 3 α n m ) 45 4 k α n m < q = k g q ( α n m ) ( α n m 1 ) 3 α n m + 1 2 k α n m , k 2 ;
(v)
g k ( α n m ) = O ( k ( α n m + 1 ) ) , k 2 .
Lemma 2.
Assume 1 < α n m < 2 and ω k , γ ( α n m ) , n = 0 , 1 , 2 , . . . , N , m = 1 , 2 , . . . , M , are given by (18). Then we have
ω 0 , γ ( α n m ) = γ n m , ω 1 , γ ( α n m ) = ( 3 + α n m ) γ n m + 3 ( α n m ) 2 + 5 α n m 24 , ω 2 , γ ( α n m ) = ( α n m ) 2 + 5 α n m + 6 2 γ n m α n m ( 3 ( α n m ) 2 + 11 α n m 2 ) 24 , ω k , γ ( α n m ) = ( ( α n m ) 3 6 ( α n m ) 2 11 α n m 6 k ( k 1 ) ( k 2 ) γ n m + 24 k 2 + ( 12 ( α n m ) 2 36 α n m 96 ) k + ( 3 ( α n m ) 4 + 14 ( α n m ) 3 + 33 ( α n m ) 2 + 46 α n m + 72 ) 24 ( k 1 ) ( k 2 ) ) g k 3 ( α n m ) , k 3 , k = 0 ω k , γ ( α n m ) = 0 .
Proof. 
The formulas for ω 0 , γ ( α i m ) , ω 1 , γ ( α i m ) , and ω 2 , γ ( α i m ) can be easily obtained from (11) and (18).
For k 3 , utilizing (13), we get
g k ( α n m ) = k 1 α n m k g k 1 ( α n m ) = ( k 1 α n m ) ( k 2 α n m ) k ( k 1 ) g k 2 ( α n m ) = ( k 1 α n m ) ( k 2 α n m ) ( k 3 α n m ) k ( k 1 ) ( k 2 ) g k 3 ( α n m ) .
Furthermore, by (10), we get
ω k , γ ( α n m ) = γ n m g k ( α n m ) + 3 γ n m + 3 ( α n m ) 2 + 5 α n m 24 g k 1 ( α n m ) + 3 γ n m + α n m 3 ( α n m ) 3 12 + 1 g k 2 ( α n m ) + γ n m + 3 ( α n m ) 2 7 α n m 24 g k 3 ( α n m ) = ( γ n m ( k 1 α n m ) ( k 2 α n m ) ( k 3 α n m ) k ( k 1 ) ( k 2 ) + 3 γ n m + 3 ( α n m ) 2 + 5 α n m 24 ( k 2 α n m ) ( k 3 α n m ) ( k 1 ) ( k 2 ) ) g k 3 ( α n m ) + 3 γ n m + α n m 3 ( α n m ) 3 12 + 1 k 3 α n m k 2 + γ n m + 3 ( α n m ) 2 7 α n m 24 g k 3 ( α n m ) = ( ( α n m ) 3 6 ( α n m ) 2 11 α n m 6 k ( k 1 ) ( k 2 ) γ n m + 24 k 2 + ( 12 ( α n m ) 2 36 α n m 96 ) k + ( 3 ( α n m ) 4 + 14 ( α n m ) 3 + 33 ( α n m ) 2 + 46 α n m + 72 ) 24 ( k 1 ) ( k 2 ) ) g k 3 ( α n m ) .
Finally, by Lemma 1 and Eq. (12), we get
k = 0 ω k , γ ( α n m ) = γ n m k = 0 g k ( α n m ) + 3 γ n m + 3 ( α n m ) 2 + 5 α n m 24 k = 0 g k 1 ( α n m ) + 3 γ n m + α n m 3 ( α n m ) 3 12 + 1 k = 0 g k 2 ( α n m ) + γ n m + 3 ( α n m ) 2 7 α n m 24 k = 0 g k 3 ( α n m ) = γ n m k = 0 g k ( α n m ) + 3 γ n m + 3 ( α n m ) 2 + 5 α n m 24 k = 0 g k ( α n m ) + 3 γ n m + α n m 3 ( α n m ) 3 12 + 1 k = 0 g k ( α n m ) + γ n m + 3 ( α n m ) 2 7 α n m 24 k = 0 g k ( α n m ) = 0 .
Lemma 3.
Let γ i m be chosen such that ω k , γ ( α i m ) , k = 0 , 1 , 2 , , satisfy
ω 2 , γ ( α i m ) 0 , ω 1 , γ ( α i m ) + ω 3 , γ ( α i m ) 0 , ω 0 , γ ( α i m ) + ω 4 , γ ( α i m ) 0 , ω k , γ ( α i m ) 0 , k 5 .
Then W m are diagonally dominant with negative diagonal entries.
Proof. 
Consider the ith row of W m . From (21) and (23), it can be seen that all diagonal entries satisfy w i i m ≤ 0 and all off-diagonal entries satisfy w i j m ≥ 0 , i ≠ j . Thus, by Lemma 2, it is easily checked that
j = 1 N 1 ( W m ) i , j = j = 1 N 1 w i j m 2 ω 2 , γ ( α i m ) + 2 ( ω 1 , γ ( α i m ) + ω 3 , γ ( α i m ) ) + 2 ( ω 0 , γ ( α i m ) + ω 4 , γ ( α i m ) ) + 2 j = 5 ω j , γ ( α i m ) = 2 j = 0 ω j , γ ( α i m ) = 0 .
Equivalently,
j = 1 , j i w i j m w i i m = | w i i m | .
Therefore, W m is diagonally dominant with negative diagonal entries. □
Theorem 3.
Let α i m ( 1 , 2 ) , i = 1 , 2 , . . . , N 1 , m = 1 , 2 , . . . , M . If γ i m satisfies
γ ˜ 3 ( α i m ) γ i m min { γ ˜ 2 ( α i m ) , γ ˜ 4 ( α i m ) }
where
$\tilde{\gamma}_2(\alpha_i^m) = \frac{3(\alpha_i^m)^4 + 14(\alpha_i^m)^3 + 3(\alpha_i^m)^2 - 52\alpha_i^m}{8\big((\alpha_i^m)^3 + 6(\alpha_i^m)^2 + 17\alpha_i^m + 24\big)}, \quad \tilde{\gamma}_3(\alpha_i^m) = \frac{\alpha_i^m\big[3(\alpha_i^m)^4 + 14(\alpha_i^m)^3 - 15(\alpha_i^m)^2 - 98\alpha_i^m + 72\big]}{6\big((\alpha_i^m)^4 + 6(\alpha_i^m)^3 + 11(\alpha_i^m)^2 + 6\alpha_i^m + 24\big)}, \quad \tilde{\gamma}_4(\alpha_i^m) = \frac{5\big[3(\alpha_i^m)^4 + 14(\alpha_i^m)^3 - 27(\alpha_i^m)^2 - 134\alpha_i^m + 192\big]}{24\big((\alpha_i^m)^3 + 6(\alpha_i^m)^2 + 11\alpha_i^m + 6\big)}.$
Then (23) holds, i.e., ω 2 , γ ( α i m ) 0 , ω 1 , γ ( α i m ) + ω 3 , γ ( α i m ) 0 , ω 0 , γ ( α i m ) + ω 4 , γ ( α i m ) 0 , ω k , γ ( α i m ) 0 , k 5 .
Moreover, Eq. (24) has a solution for α i m ( 1 , α * ) , where
α * 1.83384978
is the unique solution of
$3\mu^8 + 32\mu^7 + 42\mu^6 - 556\mu^5 - 1377\mu^4 + 2204\mu^3 + 2484\mu^2 - 12048\mu + 23040 = 0$
in the interval $(1, 2)$.
Proof. 
For the sake of simplicity, we denote α i m by μ . According to Lemma 2, it can be seen that
ω 2 , γ ( μ ) = μ 2 + 5 μ + 6 2 γ i m μ ( 3 μ 2 + 11 μ 2 ) 24 0 ,
if and only if
γ i m μ ( 3 μ 2 + 11 μ 2 ) 12 ( μ + 2 ) ( μ + 3 ) γ ˜ 1 ( μ ) .
Notice that
ω 1 , γ ( μ ) + ω 3 , γ ( μ ) = ( 3 + μ ) γ i m + 3 μ 2 + 5 μ 24 + μ 3 6 μ 2 11 μ 6 6 γ i m + 3 μ 4 + 14 μ 3 3 μ 2 62 μ 48 = μ 3 + 6 μ 2 + 17 μ + 24 6 γ i m + 3 μ 4 + 14 μ 3 + 3 μ 2 52 μ 48 ,
it follows that ω 1 , γ ( μ ) + ω 3 , γ ( μ ) 0 if and only if
γ i m 3 μ 4 + 14 μ 3 + 3 μ 2 52 μ 8 ( μ 3 + 6 μ 2 + 17 μ + 24 ) γ ˜ 2 ( μ ) .
By
ω 0 , γ ( μ ) + ω 4 , γ ( μ ) = γ i m + μ ( μ 3 + 6 μ 2 + 11 μ + 6 ) 24 γ i m μ ( 3 μ 4 + 14 μ 3 15 μ 2 98 μ + 72 ) 144 = μ 4 + 6 μ 3 + 11 μ 2 + 6 μ + 24 24 γ i m μ ( 3 μ 4 + 14 μ 3 15 μ 2 98 μ + 72 ) 144 ,
we see that ω 0 , γ ( μ ) + ω 4 , γ ( μ ) 0 if and only if
γ i m μ ( 3 μ 4 + 14 μ 3 15 μ 2 98 μ + 72 ) 6 ( μ 4 + 6 μ 3 + 11 μ 2 + 6 μ + 24 ) γ ˜ 3 ( μ ) .
For k 5 , we have
ω k , γ ( μ ) = ( μ 3 6 μ 2 11 μ 6 k ( k 1 ) ( k 2 ) γ i m + 24 k 2 + ( 12 μ 2 36 μ 96 ) k + 3 μ 4 + 14 μ 3 + 33 μ 2 + 46 μ + 72 24 ( k 1 ) ( k 2 ) ) g k 3 ( μ ) .
Notice that g j ( μ ) > 0 for j 2 , we have that w k , γ ( μ ) 0 if and only if
γ i m k [ 24 k 2 + ( 12 μ 2 36 μ 96 ) k + 3 μ 4 + 14 μ 3 + 33 μ 2 + 46 μ + 72 ] 24 ( μ 3 + 6 μ 2 + 11 μ + 6 ) .
Let
g ( x ) = 24 x 3 + ( 12 μ 2 36 μ 96 ) x 2 + ( 3 μ 4 + 14 μ 3 + 33 μ 2 + 46 μ + 72 ) x .
It is not difficult to verify that g ( x ) increases monotonically. Thus g ( k ) ≥ g ( 5 ) for k ≥ 5 . Hence, for k ≥ 5 , it suffices that
γ i m 5 [ 600 5 ( 12 μ 2 + 36 μ + 96 ) + 3 μ 4 + 14 μ 3 + 33 μ 2 + 46 μ + 72 ] 24 ( μ 3 + 6 μ 2 + 11 μ + 6 ) = 5 [ 3 μ 4 + 14 μ 3 27 μ 2 134 μ + 192 ] 24 ( μ 3 + 6 μ 2 + 11 μ + 6 ) γ ˜ 4 ( μ ) .
In the following, we will show that
γ ˜ 1 ( μ ) γ ˜ 2 ( μ ) γ ˜ 3 ( μ )
and derive the condition under which γ ˜ 3 ( μ ) γ ˜ 4 ( μ ) (and therefore (24) holds).
We have
γ ˜ 1 ( μ ) γ ˜ 2 ( μ ) = μ ( 3 μ 2 + 11 μ 2 ) 12 ( μ + 2 ) ( μ + 3 ) 3 μ 4 + 14 μ 3 + 3 μ 2 52 μ 8 ( μ 3 + 6 μ 2 + 17 μ + 24 ) = 2 μ ( 3 μ 2 + 11 μ 2 ) ( μ 3 + 6 μ 2 + 17 μ + 24 ) 24 ( μ + 2 ) ( μ + 3 ) ( μ 3 + 6 μ 2 + 17 μ + 24 ) 3 ( 3 μ 4 + 14 μ 3 + 3 μ 2 52 μ ) ( μ 2 + 5 μ + 6 ) 24 ( μ + 2 ) ( μ + 3 ) ( μ 3 + 6 μ 2 + 17 μ + 24 ) = 3 μ 6 29 μ 5 43 μ 4 + 353 μ 3 + 1186 μ 2 + 840 μ 24 ( μ 4 + 8 μ 3 + 29 μ 2 + 58 μ + 48 ) ( μ + 3 ) = ( 3 μ 5 20 μ 4 + 17 μ 3 + 302 μ 2 + 280 μ ) ( μ + 3 ) 24 ( μ 4 + 8 μ 3 + 29 μ 2 + 58 μ + 48 ) ( μ + 3 ) = 3 μ 5 20 μ 4 + 17 μ 3 + 302 μ 2 + 280 μ 24 ( μ 4 + 8 μ 3 + 29 μ 2 + 58 μ + 48 ) .
Let
γ ˜ 12 ( μ ) = 3 μ 5 20 μ 4 + 17 μ 3 + 302 μ 2 + 280 μ , μ ( 1 , 2 ) .
Then
γ ˜ 12 ( μ ) = 15 μ 4 80 μ 3 + 51 μ 2 + 604 μ + 280 .
It’s easy to check that γ ˜ 12 ( μ ) 0 for μ ( 1 , 2 ) . Thus γ ˜ 12 ( μ ) monotonically increases. It follows that γ ˜ 12 ( μ ) γ ˜ 12 ( 1 ) = 576 > 0 for μ ( 1 , 2 ) , and therefore γ ˜ 1 ( μ ) γ ˜ 2 ( μ ) .
Next, we prove that γ ˜ 2 ( μ ) γ ˜ 3 ( μ ) . We have
γ ˜ 2 ( μ ) γ ˜ 3 ( μ ) = 3 μ 4 + 14 μ 3 + 3 μ 2 52 μ 8 ( μ 3 + 6 μ 2 + 17 μ + 24 ) μ ( 3 μ 4 + 14 μ 3 15 μ 2 98 μ + 72 ) 6 ( μ 4 + 6 μ 3 + 11 μ 2 + 6 μ + 24 ) = γ ˜ 23 ( μ ) 24 ( μ 7 + 12 μ 6 + 64 μ 5 + 198 μ 4 + 391 μ 3 + 510 μ 2 + 552 μ + 576 ) ,
where
γ ˜ 23 ( μ ) = 3 μ 8 32 μ 7 120 μ 6 74 μ 5 + 1371 μ 4 + 5722 μ 3 + 3792 μ 2 10656 μ .
The derivative of γ ˜ 23 ( μ ) is
γ ˜ 23 ( μ ) = 24 μ 7 224 μ 6 720 μ 5 370 μ 4 + 5484 μ 3 + 17166 μ 2 + 7584 μ 10656 .
It’s easy to check that γ ˜ 23 ( μ ) 0 ,   μ ( 1 , 2 ) . Therefore,
γ ˜ 23 ( μ ) γ ˜ 23 ( 1 ) = 18240 > 0 , μ ( 1 , 2 ) .
Hence, γ ˜ 2 ( μ ) γ ˜ 3 ( μ ) .
In order that (24) has a solution, it is required that γ ˜ 4 ( μ ) γ ˜ 3 ( μ ) . By some computation, we get
γ ˜ 4 ( μ ) γ ˜ 3 ( μ ) = 5 ( 3 μ 4 + 14 μ 3 27 μ 2 134 μ + 192 ) 24 ( μ 3 + 6 μ 2 + 11 μ + 6 ) μ ( 3 μ 4 + 14 μ 3 15 μ 2 98 μ + 72 ) 6 ( μ 4 + 6 μ 3 + 11 μ 2 + 6 μ + 24 ) = γ ˜ 34 ( μ ) 24 ( μ 7 + 12 μ 6 + 58 μ 5 + 144 μ 4 + 217 μ 3 + 276 μ 2 + 300 μ + 144 ) ,
where
$\tilde{\gamma}_{34}(\mu) = 3\mu^8 + 32\mu^7 + 42\mu^6 - 556\mu^5 - 1377\mu^4 + 2204\mu^3 + 2484\mu^2 - 12048\mu + 23040.$
It is easy to get
$\tilde{\gamma}_{34}'(\mu) = 24\mu^7 + 224\mu^6 + 252\mu^5 - 2780\mu^4 - 5508\mu^3 + 6612\mu^2 + 4968\mu - 12048.$
We can verify that $\tilde{\gamma}_{34}'(\mu)$ decreases for $\mu \in (1,2)$, and it follows that $\tilde{\gamma}_{34}'(\mu) \le \tilde{\gamma}_{34}'(1) = -8256 < 0$. Therefore, $\tilde{\gamma}_{34}(\mu)$ also decreases. Notice that
$\tilde{\gamma}_{34}(1) = 13824 > 0, \qquad \tilde{\gamma}_{34}(2) = -5760 < 0,$
so there exists a unique $\alpha^* \in (1,2)$ such that $\tilde{\gamma}_{34}(\alpha^*) = 0$. By using Matlab, we get $\alpha^* \approx 1.83384978$. Thus, $\tilde{\gamma}_4(\mu) \ge \tilde{\gamma}_3(\mu)$ for $\mu \in (1, \alpha^*)$, and (24) has solution(s). □
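The critical value $\alpha^*$ can be checked numerically; the snippet below (an illustration, not part of the paper) extracts the real root of the degree-eight polynomial in (26) lying in $(1,2)$:

```python
import numpy as np

# coefficients of 3*mu^8 + 32*mu^7 + 42*mu^6 - 556*mu^5 - 1377*mu^4
#                + 2204*mu^3 + 2484*mu^2 - 12048*mu + 23040, cf. (26)
coeffs = [3, 32, 42, -556, -1377, 2204, 2484, -12048, 23040]
roots = np.roots(coeffs)
real_roots = roots[np.abs(roots.imag) < 1e-10].real
alpha_star = real_roots[(real_roots > 1.0) & (real_roots < 2.0)]
print(alpha_star)   # expected: approximately 1.83384978
```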
Lemma 4.
[1] (The barrier lemma) Assume $M_A \in \mathbb{R}^{n\times n}$ is a monotone matrix. If there is a unit vector $\varrho$, i.e., $\|\varrho\| = 1$, such that $\min_{1\le i\le n}(M_A \varrho)_i \ge a_M$ for a positive constant $a_M > 0$, then $\|M_A^{-1}\| \le 1/a_M$.
Define e = ( 1 , 1 , , 1 ) T . For any vectors x , y R n , denote x ( or > ) y if [ x y ] i 0 (or [ x y ] i > 0 ) i = 1 , 2 , , n .
Lemma 5.
Suppose 1 < α i m α * with α * being given by (26) and γ i m is chosen such that γ ˜ 3 ( α i m ) γ i m min { γ ˜ 2 ( α i m ) , γ ˜ 4 ( α i m ) } , i = 1 , 2 , , N 1 , where γ ˜ 2 ( α i m ) , γ ˜ 3 ( α i m ) and γ ˜ 4 ( α i m ) are given by (25). Then ( A m ) 1 1 .
Proof. 
Noting that A m = I − D m W m with W m given by (21), we see that A m is an L-matrix. By Lemmas 2 and 3, we have
j = 1 N 1 [ A m ] i j = 1 d i m j = 1 N 1 w i j ( α i m ) 1 2 ω 2 , γ ( α i m ) + 2 ( ω 1 , γ ( α i m ) + ω 3 , γ ( α i m ) ) + 2 ( ω 0 , γ ( α i m ) + ω 4 , γ ( α i m ) ) + 2 j = 5 ω j , γ ( α i m ) 1 2 j = 0 ω j , γ ( α i m ) 1 , i = 1 , 2 , , N 1 .
Hence, $A^m e \ge e > 0$. It follows that $A^m$ is an M-matrix [12]. Furthermore, $\|(A^m)^{-1}\| \le 1/\min_{1\le i\le N-1}[e]_i = 1$ by Lemma 4. □
Lemma 6.
[45] (Discrete Gronwall Inequality) Let { ζ n | n 0 } be a sequence with ζ n > 0 , and ϱ n 0 . If ζ n + 1 ( 1 + c τ ) ζ n + τ ϱ n , n = 0 , 1 , 2 , , then ζ n satisfies
ζ n + 1 e c n τ k = 0 n τ ϱ k ,
where ζ 0 = 0 , c is a nonnegative constant.
Theorem 4.
If 1 < α i m α * with α * being given by (26) and γ i m is chosen such that γ ˜ 3 ( α i m ) γ i m min { γ ˜ 2 ( α i m ) , γ ˜ 4 ( α i m ) } ,   i = 1 , 2 , , N 1 , m = 1 , 2 , M , where γ ˜ 2 ( α i m ) , γ ˜ 3 ( α i m ) and γ ˜ 4 ( α i m ) are given by (25). Then the IE-4WSGD scheme (19) is stable.
Proof. 
Suppose U ˜ m is the approximation solution of U m and F ˜ m = ( f ˜ 1 m , f ˜ 2 m , , f ˜ N 1 m 1 ) T . Let E m = U m U ˜ m . Then we have
A m E m = E m 1 + τ ( F m 1 F ˜ m 1 ) , m = 1 , 2 , , M ,
that is,
E m = ( A m ) 1 E m 1 + τ ( A m ) 1 ( F m 1 F ˜ m 1 ) .
From Lemma 5, we get
E m ( A m ) 1 E m 1 + τ L ( A m ) 1 E m 1 ( 1 + τ L ) E m 1 ( 1 + τ L ) m E 0 e m τ L E 0 e T L E 0 .
This can lead to the conclusion. □
Theorem 5.
Denote u n m the exact solution of IBV problem (1) and U n m the solution of (19). If 1 < α i m α * and γ ˜ 3 ( α i m ) γ i m min { γ ˜ 2 ( α i m ) , γ ˜ 4 ( α i m ) } ,   i = 1 , 2 , , N 1 , m = 1 , 2 , , M . Then for 1 m M , it holds
U m u m C e T L T ( τ + h l ) ,
where C > 0 and l { 3 , 4 } .
Proof. 
Denote e m = U m u m . From Eqs. (1) and (19), we get
A m e m = e m 1 + τ ( F e m 1 F m 1 ) + τ R m , m = 1 , 2 , , M ,
where R m C ( τ + h l ) , l { 3 , 4 } . That is,
e m = ( A m ) 1 e m 1 + τ ( A m ) 1 ( F e m 1 F m 1 ) + τ ( A m ) 1 R m .
Notice that e 0 = 0 , from Lemmas 5 and 6, we get
e m ( A m ) 1 e m 1 + τ L ( A m ) 1 e m 1 + τ ( A m ) 1 R m ( 1 + τ L ) e m 1 + τ R m e m τ L C m τ ( τ + h l ) C e T L T ( τ + h l ) .
This can lead to the conclusion. □
In the following, we discuss how to choose the parameters γ i m such that the IE-4WSGD scheme is stable and has the optimal error bound.
Theorem 6.
Let $\gamma_{obj}(\alpha_i^m) = \frac{(\alpha_i^m)^3 - (\alpha_i^m)^2 - 4\alpha_i^m}{48}$, $i = 1, 2, \ldots, N-1$, $m = 1, 2, \ldots, M$, and let
$\alpha^{**} \approx 1.79968326$
be the unique solution of $-\mu^6 - 5\mu^5 + 29\mu^4 + 169\mu^3 - 220\mu^2 - 1316\mu + 1920 = 0$ in the interval $(1, 2)$. Then for $\alpha_i^m \in (1, \alpha^{**}]$, it holds
$\tilde{\gamma}_3(\alpha_i^m) \le \gamma_{obj}(\alpha_i^m) \le \min\{\tilde{\gamma}_2(\alpha_i^m), \tilde{\gamma}_4(\alpha_i^m)\},$
where $\tilde{\gamma}_2(\alpha_i^m)$, $\tilde{\gamma}_3(\alpha_i^m)$ and $\tilde{\gamma}_4(\alpha_i^m)$ are given by (25). That is, if $\alpha(x,t) \in (1, \alpha^{**}]$, then we can select the free parameter of the 4WSGD scheme so that the corresponding scheme has fourth order accuracy.
Proof. 
We first compare γ o b j ( μ ) and γ ˜ 3 ( μ ) . We have
γ o b j ( μ ) γ ˜ 3 ( μ ) = μ 3 μ 2 4 μ 48 μ ( 3 μ 4 + 14 μ 3 15 μ 2 98 μ + 72 ) 6 ( μ 4 + 6 μ 3 + 11 μ 2 + 6 μ + 24 ) = γ o b j 3 ( μ ) 48 ( μ 4 + 6 μ 3 + 11 μ 2 + 6 μ + 24 ) ,
where
γ o b j 3 ( μ ) = μ 7 + 5 μ 6 23 μ 5 141 μ 4 + 94 μ 3 + 736 μ 2 672 μ .
It follows that
γ o b j 3 ( μ ) = 7 μ 6 + 30 μ 5 115 μ 4 564 μ 3 + 282 μ 2 + 1472 μ 672 .
It is easily checked that γ o b j 3 ( μ ) increases first and then decreases with γ o b j 3 ( 1 ) > 0 and γ o b j 3 ( α * ) < 0 for ( 1 , α * ) . Thus γ o b j 3 ( μ ) also increases first and then decreases and there exists a minimum at μ = 1 or μ = α * . Since γ o b j 3 ( 1 ) = 0 ,   γ o b j 3 ( α * ) > 0 , therefore γ o b j 3 ( μ ) 0 , that is γ o b j ( μ ) γ ˜ 3 ( μ ) for ( 1 , α * ) .
Next we compare γ ˜ 2 ( μ ) and γ o b j ( μ ) . We have
γ ˜ 2 ( μ ) γ o b j ( μ ) = 3 μ 4 + 14 μ 3 + 3 μ 2 52 μ 8 ( μ 3 + 6 μ 2 + 17 μ + 24 ) μ 3 μ 2 4 μ 48 = γ o b j 2 ( μ ) 48 ( μ 3 + 6 μ 2 + 17 μ + 24 ) ,
where
γ o b j 2 ( μ ) = μ 6 5 μ 5 + 11 μ 4 + 101 μ 3 + 110 μ 2 216 μ .
Then it is easy to get
γ o b j 2 ( μ ) = 6 μ 5 25 μ 4 + 44 μ 3 + 303 μ 2 + 220 μ 216 ,
and one can verify that γ o b j 2 ( μ ) increases monotonically for μ ( 1 , α * ) . Notice that γ o b j 2 ( 1 ) = 320 > 0 , we have that γ o b j 2 ( μ ) γ o b j 2 ( 1 ) 0 . Equivalently, γ ˜ 2 ( μ ) γ o b j ( μ ) for ( 1 , α * ) .
Finally, we compare γ ˜ 4 ( μ ) and γ o b j ( μ ) . We have
γ ˜ 4 ( μ ) γ o b j ( μ ) = 5 ( 3 μ 4 + 14 μ 3 27 μ 2 134 μ + 192 ) 24 ( μ 3 + 6 μ 2 + 11 μ + 6 ) μ 3 μ 2 4 μ 48 = γ o b j 4 ( μ ) 48 ( μ 3 + 6 μ 2 + 11 μ + 6 ) ,
where
γ o b j 4 ( μ ) = μ 6 5 μ 5 + 29 μ 4 + 169 μ 3 220 μ 2 1316 μ + 1920 .
It can be checked that
γ o b j 4 ( μ ) = 6 μ 5 25 μ 4 + 116 μ 3 + 507 μ 2 440 μ 1316
increases monotonically and γ o b j 4 ( α * ) < 0 . Therefore, γ o b j 4 ( μ ) decreases monotonically. Notice that
γ o b j 4 ( 1 ) > 0 and γ o b j 4 ( α * ) 0 ,
we see that there exists a unique α * * ( 1 , α * ) such that γ o b j 4 ( α * * ) = 0 . By using Matlab, we get an approximate value of α * * , i.e., α * * 1.79968326 . That is γ o b j 4 ( μ ) > 0 for α i m ( 1 , α * * ) , and γ o b j 4 ( α * ) < 0 for α i m ( α * * , α * ) .
In summary,
γ ˜ 3 ( α i m ) γ o b j ( α i m ) min { γ ˜ 2 ( α i m ) , γ ˜ 4 ( α i m ) } α i m 1 , α * * .
To end this section, we discuss how to choose the parameter γ i m for α * * < α i m ≤ α * , where α * and α * * are given by (26) and (27), respectively. We consider the following minimization problem:
min | γ i m ( α i m ) 3 ( α i m ) 2 4 α i m 48 | s . t . γ ˜ 3 ( α i m ) γ i m min { γ ˜ 2 ( α i m ) , γ ˜ 4 ( α i m ) } , α * * α i m α * .
To solve the above-mentioned problem, we plot the following curves (see Figure 1):
  • Objective: γ o b j ( α i m ) ,
  • Lower bound: γ ˜ 3 ( α i m ) ,
  • Upper bound: min { γ ˜ 2 ( α i m ) , γ ˜ 4 ( α i m ) } .
Combining Theorem 6 and the curves in Figure 1, it can easily be seen that γ ˜ 4 ( α i m ) ≤ γ o b j ( α i m ) and γ ˜ 3 ( α i m ) ≤ γ o b j ( α i m ) for α i m ∈ [ α * * , α * ) , and γ ˜ 4 ( α i m ) is closer to the objective γ o b j ( α i m ) . Therefore, we can choose γ i m = γ ˜ 4 ( α i m ) for α i m ∈ ( α * * , α * ] . In summary, we select γ i m as follows:
$\gamma_i^m = \begin{cases} \gamma_{obj}(\alpha_i^m), & \alpha_i^m \in (1, \alpha^{**}], \\ \tilde{\gamma}_4(\alpha_i^m), & \alpha_i^m \in (\alpha^{**}, \alpha^*]. \end{cases}$
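The selection strategy (29) is summarized in the following sketch (illustrative code, not from the paper), with $\gamma_{obj}$ and $\tilde{\gamma}_4$ taken from Theorem 6 and (25), and the constants $\alpha^*$, $\alpha^{**}$ from (26)–(27):

```python
ALPHA_STAR = 1.83384978    # root of (26) in (1, 2)
ALPHA_2STAR = 1.79968326   # root of (27) in (1, 2)

def gamma_obj(a):
    """Fourth-order choice (a^3 - a^2 - 4a)/48 from Theorems 1 and 6."""
    return (a**3 - a**2 - 4.0*a) / 48.0

def gamma_tilde4(a):
    """Upper bound gamma_tilde_4 from (25)."""
    return 5.0*(3*a**4 + 14*a**3 - 27*a**2 - 134*a + 192) / (24.0*(a**3 + 6*a**2 + 11*a + 6))

def choose_gamma(a):
    """Parameter selection (29)."""
    if not 1.0 < a <= ALPHA_STAR:
        raise ValueError("alpha outside (1, alpha*]: not covered by Theorems 4 and 6")
    return gamma_obj(a) if a <= ALPHA_2STAR else gamma_tilde4(a)
```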

4. Preconditioning Techniques

In this section, based on Section 3, we consider applying preconditioned GMRES methods to solve the linear systems in Eq. (19). Although the coefficient matrices are dense and do not have a Toeplitz-like structure, each W m has the off-diagonal decay property; hence we employ banded matrices as preconditioners for the linear systems to speed up the relevant GMRES methods. Similar discussions can be found in the literature; see for instance [18,42].
Firstly, a banded preconditioner with diagonal compensation is taken into account. Letting Q = ( q i j ) i , j = 1 N , we denote diag ( j = 1 N q 1 j , j = 1 N q 2 j , , j = 1 N q N j ) by diag ( sum ( Q , 2 ) ) . Let k b be an integer that is much less than N. Split W m as
W m = W k b m + ( W m W k b m ) ,
where
$W_{k_b}^m = \widetilde{W}_{k_b}^m + \mathrm{diag}\big(\mathrm{sum}(W^m - \widetilde{W}_{k_b}^m, 2)\big)$
with
W ˜ k b m = w 1 , 1 ( m ) w 1 , k b ( m ) 0 0 w k b , 1 ( m ) w k b , k b ( m ) 0 0 w N k b , N 1 ( m ) 0 0 w N 1 , N k b ( m ) w N 1 , N 1 ( m ) .
Then we use
P k b m = I D m W k b m
as preconditioner.
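A minimal sketch of this construction (not the authors' code) is given below; it takes W m and the diagonal of D m as produced, e.g., by the 1D assembly sketch, and setting compensate=False yields the variant without diagonal compensation introduced later in this section:

```python
import numpy as np
from scipy.sparse import csr_matrix

def banded_preconditioner(W, d, kb, compensate=True):
    """P_{kb}^m = I - D^m W_{kb}^m, cf. (30); with compensate=False this is
    the variant without diagonal compensation."""
    n = W.shape[0]
    band = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) < kb
    W_band = np.where(band, W, 0.0)                          # banded part of W^m
    if compensate:
        W_band = W_band + np.diag((W - W_band).sum(axis=1))  # diag(sum(W - W_band, 2))
    return csr_matrix(np.eye(n) - d[:, None] * W_band)       # sparse banded preconditioner
```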
Lemma 7.
For k b 3 , we have
W m W ˜ k b m r α , k b m , W m W k b m 2 r α , k b m ,
where $r_{\alpha,k_b}^m = 4 \max_{1\le n\le N-1}\big(|\gamma|, |\gamma_1|, |\gamma_0|, |\gamma_{-1}|\big) \max_{0\le n\le N} (\alpha_n^m - 1)\, 3^{\alpha_n^m+1} (k_b - 1)^{-\alpha_n^m}$.
Proof. 
From (21) and Lemma 1, we get
W m W ˜ k b m 2 max 1 n N 1 q = k b + 2 ω q , γ ( α n m ) 8 max 1 n N 1 ( | γ | , | γ 1 | , | γ 0 | , | γ 1 | ) max 1 i N 1 q = k b 1 g q ( α n m ) 8 max 1 n N 1 ( | γ | , | γ 1 | , | γ 0 | , | γ 1 | ) max 1 n N 1 ( α n m 1 ) 3 α n m + 1 2 ( k b 1 ) α n m r α , k b m .
Similarly, we can get
W m W k b m 2 r α , k b m .
Theorem 7.
Let d max m = max 1 n N 1 ( d n m ) and r α , k b m be the same as in Lemma 7. For k b 3 , we have following conclusions:
(i)
The spectrum radius of ( P k b m ) 1 A m I is bounded as follows:
ρ ( P k b m ) 1 A m I 2 d max m r α , k b m ;
(ii)
The condition number of the preconditioned matrix is bounded as follows:
cond ( P k b m ) 1 ( A m ) ( 1 + 2 d max m r α , k b m ) 2 , m = 1 , 2 , , M .
Proof. 
Similar to Lemma 5, we have $\|(I - D^m W_{k_b}^m)^{-1}\| \le 1$ by using Lemma 4.
(i) Utilizing Lemma 7, we get
( P k b m ) 1 A m I = ( I D m W k b m ) 1 ( I D m W m ) I = ( I D m W k b m ) 1 I D m W k b m + ( W m W k b m ) I = ( I D m W k b m ) 1 D m ( W m W k b m ) ( I D m W k b m ) 1 D m ( W m W k b m ) max 1 n N 1 ( d n m ) W m W k b m 2 d max m r α , k b m .
Therefore, we get ρ ( P k b m ) 1 ( A m ) I ( P k b m ) 1 ( A m ) I 2 d max m r α , k b m .
(ii) We have
( P k b m ) 1 A m = ( P k b m ) 1 A m I + I 1 + ( P k b m ) 1 A m I 1 + 2 d max m r α , k b m .
Similarly, we have
( P k b m ) 1 A m 1 = ( I D m W m ) 1 ( I D m W k b m ) = I + ( I D m W m ) 1 D m ( W m W k b m ) 1 + ( I D m W m ) 1 D m ( W m W k b m ) 1 + 2 d max m r α , k b m .
The conclusion of the theorem follows. □
Now, a banded preconditioner without diagonal compensation is taken into account:
P ˜ k b m = I D m W ˜ α , k b m .
Theorem 8.
Let d max m = max 1 n N 1 ( d n m ) and r α , k b m be the same as in Lemma 7. For k b 3 , we have following conclusions:
(i)
The spectrum radius of ( P ˜ k b m ) 1 A m I is bounded as follows:
ρ ( P ˜ k b m ) 1 A m I d max m r α , k b m .
(ii)
The condition number of the preconditioned matrix is bounded as follows:
cond ( P ˜ k b m ) 1 ( A m ) ( 1 + d max m r α , k b m ) 2 , m = 1 , 2 , , M .
Proof. 
By using Lemma 7, the theorem can be proved similar to the proof of Theorem 7. □

5. IE-4WSGD Scheme for 2D SRVONFDE

Now, we take into account the IBV problem for a 2D SRVONFDE:
u ( x , y , t ) t = K α ( x , y , t ) α ( x , y , t ) u ( x , y , t ) | x | α ( x , y , t ) + K β ( x , y , t ) β ( x , y , t ) u ( x , y , t ) | y | β ( x , y , t ) + f ( u , x , y , t ) , ( x , y ) Ω = ( x L , x R ) × ( y L , y R ) , t ( 0 , T ] , u ( x , y , 0 ) = u 0 ( x , y ) , ( x , y ) Ω ¯ , u ( x , y , t ) = 0 , ( x , y ) Ω , t [ 0 , T ] ,
where α ( x , y , t ) u ( x , y , t ) | x | α ( x , y , t ) and β ( x , y , t ) u ( x , y , t ) | y | β ( x , y , t ) are Riesz VO fractional derivatives, 0 < K C α ̲ K C α ( x , y , t ) = K α ( x , y , t ) C α ( x , y , t ) K α ¯ for certain positive constants K C α ̲ and K C α ¯ , 0 < K C β ̲ K C β ( x , y , t ) = K β ( x , y , t ) C β ( x , y , t ) K C β ¯ for certain positive constants K C β ̲ and K C β ¯ , f ( u , x , y , t ) is the source term that satisfies the Lipschitz condition.
We first discuss the discretization of the IBV problem. Let N x , N y , M ∈ N * , and let h x = ( x R − x L ) / N x , h y = ( y R − y L ) / N y and τ = T / M . Let x i = x L + i h x , 0 ≤ i ≤ N x , y j = y L + j h y , 0 ≤ j ≤ N y , and t m = m τ , m = 0 , 1 , 2 , , M . For a given function g ( x , y , t ) , let g i j m or g i , j m represent g ( x i , y j , t m ) . The IE-4WSGD scheme presented in Section 3 can also be applied to the 2D SRVONFDE, i.e., Eq. (31). The IE-4WSGD scheme for Eq. (31) is
u i j m = u i j m 1 + τ K C α , i , j m h x α i , j m 1 k = 0 i + 2 ω k , γ ( α i , j m ) u i k + 2 , j m + k = 0 N x i + 3 ω k , γ ( α i , j m ) u i + k 2 , j m + τ K C β , i , j m h y β i , j m 1 k = 0 j + 2 ω k , γ ( α i , j m ) u i , j k + 2 m + k = 0 N y j + 3 ω k , γ ( α i , j m ) u i , j + k 2 m + τ f e , i j m 1 + τ R i j m ,
where f e , i j m = f ( u i j m , x i , y j , t m ) and | R i j m | C x y ( τ + h x l + h y l ) , l { 3 , 4 } . Let U i j m be an approximation of u i j m , we have the following systems of linear equations:
U i j m τ K C α , i , j m h x α i , j m 1 k = 0 i + 2 ω k , γ ( α i , j m ) U i k + 2 , j m + k = 0 N x i + 3 ω k , γ ( α i , j m ) U i + k 2 , j m τ K C β , i , j m h y β i , j m 1 k = 0 j + 2 ω k , γ ( α i , j m ) U i , j k + 2 m + k = 0 N y j + 3 ω k , γ ( α i , j m ) U i , j + k 2 m = U i j m 1 + τ f i j m 1 .
Now, we take into account the matrix form of the discretization linear system (33). Denote
U m = ( U 1 , 1 m , U 2 , 1 m , , U N x 1 , 1 m , , U 1 , N y 1 m , , U N x 1 , N y 1 m ) T .
For i = 1 , 2 , , N x 1 , j = 1 , 2 , N y 1 , let f i j m = f ( U i j m , x i , y j , t m ) and then define
F m = ( f 1 , 1 m , f 2 , 1 m , , f N x 1 , 1 m , , f 1 , N y 1 m , , f N x 1 , N y 1 m ) T .
For j = 1 , 2 , , N y 1 , denote
D x , j m = diag τ K C α , 1 , j m h x α 1 , j m , τ K C α , 2 , j m h x α 2 , j m , , τ K C α , N x 1 , j m h x α N x 1 , j m ,
D y , j m = diag τ K C β , 1 , j m h y β 1 , j m , τ K C β , 2 , j m h y β 2 , j m , , τ K C β , N x 1 , j m h y β N x 1 , j m .
Then define
D x m = diag D x , 1 m , D x , 2 m , , D x , N y 1 m , D y m = diag D y , 1 m , D y , 2 m , , D y , N y 1 m .
Similar to Eq. (21), let W x , j m = [ w x , i 1 , i 2 m , j ] i 1 , i 2 = 1 N x 1 , where
w x , i 1 , i 2 m , j = 2 ω 2 , γ ( α i 1 , j m ) , i 2 = i 1 , ω 1 , γ ( α i 1 , j m ) + ω 3 , γ ( α i 1 , j m ) , i 2 = i 1 ± 1 , ω 0 , γ ( α i 1 , j m ) + ω 4 , γ ( α i 1 , j m ) , i 2 = i 1 ± 2 , ω | i 1 i 2 | + 2 , γ ( α i 1 , j m ) , | i 1 i 2 | > 2 .
Then let
W x m = diag W x , 1 m , W x , 2 m , , W x , N y 1 m .
Let W y , i m = [ w y , j 1 , j 2 m , i ] j 1 , j 2 = 1 N y 1 , where
w y , j 1 , j 2 m , i = 2 ω 2 , γ ( β i , j 1 m ) , j 2 = j 1 , ω 1 , γ ( β i , j 1 m ) + ω 3 , γ ( β i , j 1 m ) , j 2 = j 1 ± 1 , ω 0 , γ ( β i , j 1 m ) + ω 4 , γ ( β i , j 1 m ) , j 2 = j 1 ± 2 , ω | j 1 j 2 | + 2 , γ ( β i , j 1 m ) , | j 1 j 2 | > 2 .
Let
P = 1 , 1 + ( N y 1 ) , , 1 + ( N x 2 ) ( N y 1 ) , , N y 1 , 2 ( N y 1 ) , , ( N x 1 ) ( N y 1 ) ,
where e j denotes the jth column of the identity matrix of order ( N x − 1 ) ( N y − 1 ) . Then let
W y m = P T diag W y , 1 m , W y , 2 m , , W y , N x 1 m P .
Using the above notations, we obtain the linear system of (33):
I D x m W x m D y m W y m U m = U m 1 + τ F m 1 , m = 1 , 2 , , M .
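The following sketch assembles the 2D coefficient matrix of (36) with SciPy sparse matrices. It assumes the blocks W x , j m and W y , i m have already been built (e.g., with the 1D routine above); the permutation is written directly through index maps and coincides with P in (35) up to the ordering convention. It is an illustration, not the authors' implementation.

```python
import numpy as np
from scipy.sparse import block_diag, identity, diags, csr_matrix

def assemble_2d(Wx_blocks, Wy_blocks, dx, dy):
    """A^m = I - D_x^m W_x^m - D_y^m W_y^m, Eq. (36), in the x-fastest
    ordering (34): unknown (i, j) sits at position i + j*(Nx-1) (0-based)."""
    Nx1, Ny1 = Wx_blocks[0].shape[0], Wy_blocks[0].shape[0]
    n = Nx1 * Ny1
    Wx = block_diag(Wx_blocks, format="csr")      # one W_{x,j}^m block per row index j
    # permutation: (P v)[j + i*(Ny-1)] = v[i + j*(Nx-1)], i.e. regroup the
    # unknowns by their x-index i so the block diagonal of W_{y,i}^m acts along y
    i_idx, j_idx = np.meshgrid(np.arange(Nx1), np.arange(Ny1), indexing="ij")
    rows = (j_idx + i_idx * Ny1).ravel()
    cols = (i_idx + j_idx * Nx1).ravel()
    P = csr_matrix((np.ones(n), (rows, cols)), shape=(n, n))
    Wy = P.T @ block_diag(Wy_blocks, format="csr") @ P
    return identity(n, format="csr") - diags(dx) @ Wx - diags(dy) @ Wy
```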
The following theoretical results require the conditions:
1 < α i j m α * , γ ˜ 3 ( α i j m ) γ x , i j m min { γ ˜ 2 ( α i j m ) , γ ˜ 4 ( α i j m ) } ; 1 < β i j m α * , γ ˜ 3 ( β i j m ) γ y , i j m min { γ ˜ 2 ( β i j m ) , γ ˜ 4 ( β i j m ) } ;
for i = 1 , 2 , , N x 1 , j = 1 , 2 , , N y 1 , where α * 1.83384978 is given by (26), and γ ˜ 2 ( · ) , γ ˜ 3 ( · ) and γ ˜ 4 ( · ) are given by (25).
Lemma 8.
Suppose (37) holds, then
I D x m W x m D y m W y m 1 1 .
Proof. 
Based on the given conditions, I D x m W x m D y m W y m is an L-matrix. We get the sum of ( ( j 1 ) ( N x 1 ) + i ) th row of I D x m W x m D y m W y m is
1 τ K C α , i , j m h x α i , j m j = 1 ( N x 1 ) ( N y 1 ) [ W x m ] i j τ K C β , i , j m h y β i , j m j = 1 ( N x 1 ) ( N y 1 ) [ W y m ] i j 1 τ K C α , i , j m h x α i , j m 2 ω 2 , γ ( α i , j m ) + 2 ( ω 1 , γ ( α i , j m ) + ω 3 , γ ( α i , j m ) ) + 2 ( ω 0 , γ ( α i , j m ) + ω 4 , γ ( α i , j m ) ) + 2 j = 5 ω j , γ ( α i , j m ) τ K C β , i , j m h x β i , j m 2 ω 2 , γ ( β i , j m ) + 2 ( ω 1 , γ ( β i , j m ) + ω 3 , γ ( β i , j m ) ) + 2 ( ω 0 , γ ( β i , j m ) + ω 4 , γ ( β i , j m ) ) + 2 j = 5 ω j , γ ( β i , j m ) 1 2 τ K C α , i , j m h x α i , j m j = 0 ω j , γ ( α i , j m ) 2 τ K C β , i , j m h x β i , j m j = 0 ω j , γ ( β i , j m ) 1 .
Therefore, ( I D x m W x m D y m W y m ) e e > 0 . It follows that I D x m W x m D y m W y m is an M-matrix. Furthermore, I D x m W x m D y m W y m 1 1 by Lemma 4. □
Theorem 9.
Suppose (37) holds, then the difference scheme (36) is stable.
Proof. 
Define E m = U m U ˜ m . Then we have
( I D x m W x m D y m W y m ) E m = E m 1 + τ ( F m 1 F ˜ m 1 ) , m = 1 , 2 , , M .
That is
E m = ( I D x m W x m D y m W y m ) 1 E m 1 + τ ( I D x m W x m D y m W y m ) 1 ( F m 1 F ˜ m 1 ) .
By Lemma 8, we have
E m ( 1 + τ L ) E m 1 ( 1 + τ L ) m E 0 e m τ L E 0 e T L E 0 .
The conclusion of the theorem follows. □
Theorem 10.
Let the exact solution to IBV problem (31) at the grid points and the solution of (36) be denoted by u i j m and U i j m , respectively. Suppose (37) holds, then for m = 1 , 2 , , M , it holds
u m U m C T ( τ + h x l + h y l ) ,
where C > 0 ,   l { 3 , 4 } .
Proof. 
Denote e m = u m U m . Utilizing Eqs. (31) and (36), we get
( I D x m W x m D y m W y m ) e m = e m 1 + τ ( F e m 1 F m 1 ) + τ R m , m = 1 , 2 , , M ,
where R m C ( τ + h x l + h y l ) , l { 3 , 4 } .
That is,
e m = ( I D x m W x m D y m W y m ) 1 e m 1 + τ ( I D x m W x m D y m W y m ) 1 ( ( F e m 1 F m 1 ) + R m ) .
Notice that e 0 = 0 , and from Lemmas 6 and 8, we get
e m ( 1 + τ L ) e m 1 + τ R m e m τ L C m τ ( τ + h x l + h y l ) C e T L T ( τ + h x l + h y l ) .
The conclusion of the theorem follows. □
Remark 3.
For the preconditioners, we extend the 1D constructions to the 2D case. Let
W x , j m = W x , j , k b m + ( W x , j m W x , j , k b m ) , j = 1 , 2 , , N y 1 ,
W y , i m = W y , i , k b m + ( W y , i m W y , i , k b m ) , i = 1 , 2 , , N x 1 ,
where
$W_{x,j,k_b}^m = \widetilde{W}_{x,j,k_b}^m + \mathrm{diag}\big(\mathrm{sum}(W_{x,j}^m - \widetilde{W}_{x,j,k_b}^m, 2)\big), \quad W_{y,i,k_b}^m = \widetilde{W}_{y,i,k_b}^m + \mathrm{diag}\big(\mathrm{sum}(W_{y,i}^m - \widetilde{W}_{y,i,k_b}^m, 2)\big),$
with W ˜ x , j , k b m and W ˜ y , i , k b m being defined similarly to (30). Let
W x , k b m = diag W x , 1 , k b m , W x , 2 , k b m , , W x , N y 1 , k b m .
W y , k b m = P T diag W y , 1 , k b m , W y , 2 , k b m , , W y , N x 1 , k b m P .
Then we use
P k b m = I D x m W x , k b m D y m W y , k b m
or
P ˜ k b m = I D x m W ˜ x , k b m D y m W ˜ y , k b m
as a preconditioner.
The spectral analysis of the preconditioners is similar to that in Section 4.

6. Numerical Results

We verify the validity of the developed IE-4WSGD schemes with two examples through numerical experiments in this section. We utilize Gauss elimination (GE) and the preconditioned GMRES (PGMRES) method to solve the relevant linear systems, and compare their CPU time. The CPU time covers the whole solving process, including forming the coefficient matrix, constructing the corresponding preconditioner, and solving the linear systems. For the restarted PGMRES method, we take the initial guess:
$U^{m+1,0} = \begin{cases} U^m, & m = 0, \\ 2\widehat{U}^m - \widehat{U}^{m-1}, & m > 0. \end{cases}$
The stopping criterion is
$\frac{\|F_r^j\|_2}{\|F_b^{m+1}\|_2} < 10^{-5},$
where $F_r^j$ represents the residual vector after j iterations and $F_b^{m+1}$ represents the right-hand side vector of the corresponding linear system.
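A possible realization of one preconditioned time step is sketched below. It is an illustration only (not the authors' Matlab code), relying on SciPy's splu, LinearOperator and gmres; the restart length is an arbitrary illustrative choice, and the default relative tolerance of gmres (1e-5) matches the stopping criterion above.

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu, LinearOperator, gmres

def solve_step(A, P, rhs, U_prev, U_prev2=None):
    """One time step of (19) by restarted PGMRES with banded preconditioner P.
    Initial guess: U^m for the first step, 2U^m - U^{m-1} afterwards."""
    x0 = U_prev if U_prev2 is None else 2.0 * U_prev - U_prev2
    lu = splu(csc_matrix(P))                          # banded LU of the preconditioner
    M = LinearOperator(A.shape, matvec=lu.solve)      # applies P^{-1}
    x, info = gmres(A, rhs, x0=x0, M=M, restart=20)   # default relative tolerance 1e-5
    return x
```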
For the 2D problem, we take N x = N y = N . We use a pair (Iter, CPU time) to represent the average number of iterations for solving the discrete linear systems (the first component) and the CPU time in seconds (the second component). For the Gauss elimination (GE) method, the first component is simply replaced by “–”. The order of spatial accuracy is denoted by “rate h ”, which is given by
$\mathrm{rate}_h = \frac{\log_2\big(\mathrm{Error}(N_1, M_1)/\mathrm{Error}(N_2, M_2)\big)}{\log_2(N_2/N_1)}$
with M i = O ( N i 4 ) and
$\mathrm{Error}(N, M) = \|\tilde{u}^M - u_T\|,$
where $\tilde{u}^M$ denotes the approximate solution vector at the final time step and $u_T$ denotes the exact solution vector at t = T . All computations are performed via Matlab version 2021a on a DESKTOP-1VC6MHV (Intel(R) Core(TM) i7-10510U CPU @ 1.80GHz 2.30GHz, 12.0GB RAM).
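For completeness, the spatial order in the tables can be reproduced from two consecutive errors with the formula above; a small illustrative helper (not from the paper) is:

```python
import numpy as np

def rate_h(err1, err2, N1, N2):
    """Spatial order estimate from two runs with M_i = O(N_i^4)."""
    return np.log2(err1 / err2) / np.log2(N2 / N1)
```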
Example 1.
Consider the IBV problem for a 1D SRVONFDE with a = 0 , b = 2 , T = 1 , α ( x , t ) = 7 5 + 1 2 e x 2 t 2 1 or 545 ( x t ) 4 300 , and K ( x , t ) = 2 ( x + 1 ) 2 e x cos π α ( x , t ) 2 . Let u ( x , t ) = e t x 4 ( 2 x ) 4 , we can get
u ( x , t ) t = e t x 4 ( 2 x ) 4 , α ( x , t ) u ( x , t ) | x | α ( x , t ) = h ( x , t ) ,
where
h ( x , t ) = e t sec ( π α ( x , t ) 2 ) 2 i = 5 9 ( 1 ) i 1 2 9 i C 4 i 5 Γ ( i ) Γ ( i α ( x , t ) ) x i 1 α ( x , t ) + ( 2 x ) i 1 α ( x , t ) .
Let
f ( u , x , t ) = e t x 4 ( 2 x ) 4 K ( x , t ) h ( x , t ) + u 3 e 3 t x 12 ( 2 x ) 12 .
It can be checked that u ( x , t ) = e t x 4 ( 2 x ) 4 is the exact solution of the relevant IBV problem.
Table 1 and Table 2 demonstrate the spatial convergence rate for Example 1. We take different variable order functions $\alpha_1(x,t) = 1.4 + 0.5 e^{-(xt)^2-1} \in (1, \alpha^{**})$ and $\alpha_2(x,t) = \frac{545 - (xt)^4}{300} \in (1, \alpha^*)$, where $\alpha^* \approx 1.83384978$ and $\alpha^{**} \approx 1.79968326$ are given by (26) and (27), respectively. Table 1 shows that the IE-4WSGD scheme has fourth order spatial accuracy. For $\alpha_2(x,t)$, we choose γ i m by using (29). From Table 2, it can also be seen that the IE-4WSGD scheme achieves fourth order spatial accuracy for $\alpha_2(x,t)$. In fact, although some values of $\alpha_2(x,t)$ are larger than $\alpha^{**}$, the difference $\alpha_2(x,t) - \alpha^{**}$ is small, which leads to high order spatial accuracy. For the PGMRES method, we choose k b = 5 . We can see that the PGMRES methods require less CPU time than the GE method for sufficiently large N. Although the average numbers of iterations for the preconditioners P k b m and P ˜ k b m differ very little for large N, the former requires more CPU time than the latter because of the higher cost of constructing P k b m at each time step. Note that for N = 2 6 , M = 2 2 and N = 2 7 , M = 2 6 , the PGMRES method with P ˜ k b m needs many more iterations than the one with P k b m , which indicates that P k b m is more efficient in the sense of the number of iterations.
Example 2.
Consider the IBV problem for 2D SRVONFDE (31) with following conditions:
( x , y , t ) ( 1 , 3 ) × ( 2 , 4 ) × ( 0 , 1 ) , u ( x , y , 0 ) = 0 , K α ( x , y , t ) = 10 e x y + 1 cos ( π α ( x , y , t ) 2 ) , K β ( x , y , t ) = 16 e x y + 2 cos ( π β ( x , y , t ) 2 ) ,
and
f ( u , x , y , t ) = cos ( t ) ( x 1 ) 4 ( 3 x ) 4 ( y 2 ) 4 ( 4 y ) 4 + ( e sin ( t ) ( x 1 ) 4 ( 3 x ) 4 ( y 2 ) 4 ( 4 y ) 4 e u ) + h ( x , y , t )
with
h ( x , y , t ) = sin ( t ) K α ( x , y , t ) 2 cos ( π α ( x , y , t ) / 2 ) × i = 5 9 ( 1 ) i 1 C 4 i 5 Γ ( i ) Γ ( i α ( x , y , t ) ) ( x 1 ) i 1 α ( x , y , t ) + ( 3 x ) i 1 α ( x , y , t ) ( y 2 ) 4 ( 4 y ) 4 + sin ( t ) K β ( x , y , t ) 2 cos ( π β ( x , y , t ) / 2 ) × i = 5 9 ( 1 ) i 1 C 4 i 5 Γ ( i ) Γ ( i β ( x , y , t ) ) ( y 2 ) i 1 β ( x , y , t ) + ( 4 y ) i 1 β ( x , y , t ) ( x 1 ) 4 ( 3 x ) 4 .
Then the exact solution is
u ( x , y , t ) = sin ( t ) ( x 1 ) 4 ( 3 x ) 4 ( y 2 ) 4 ( 4 y ) 4 .
The fractional orders α ( x , y , t ) and β ( x , y , t ) are specified in Table 3 and Table 4.
We observe the spatial accuracy of the IE-4WSGD scheme for the 2D problem (31). In Table 3 and Table 5, the orders α ( x , y , t ) , β ( x , y , t ) are in ( 1 , α * * ) , so we can choose the corresponding parameters to achieve fourth order spatial accuracy. The numerical results in Table 3 and Table 4 verify that the derived scheme has fourth order accuracy. In Table 4 and Table 6, the orders α ( x , y , t ) , β ( x , y , t ) are in ( 1 , α * ) . Similar to the one-dimensional example, the numerical results illustrate that the proposed scheme can achieve the theoretical accuracy presented in Section 5 for α ( x , y , t ) and β ( x , y , t ) that satisfy the conditions. For the PGMRES method, we choose k b = 3 in this example. Similar to Example 1, the PGMRES methods are more efficient than the GE method for large N. Moreover, the PGMRES method with P ˜ k b m needs slightly more iterations on average than the one with P k b m , while the former requires more CPU time.

7. Concluding Remarks

In this paper, we proposed 4WSGD schemes for the Riesz VO fractional derivative, and derived the convergence and stability of IE-4WSGD schemes for 1D and 2D SRVONFDEs. Numerical experiments show that the presented schemes and the PGMRES methods are very efficient. In the future, we will study the application of the schemes and algorithms for solving other SRVONFDEs, e.g., the movement of groundwater pollution and control.

Author Contributions

Conceptualization, Q.W. and F.L.; methodology, Q.W. and F.L.; writing—original draft preparation, Q.W.; writing—review and editing, F.L.; visualization, F.L.; supervision, F.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Doctoral Start-up Foundation of Shandong Jiaotong University (BS2023060).

Data Availability Statement

Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.

Acknowledgments

We thank the anonymous referees for providing detailed and valuable comments and suggestions, which are very helpful for improving our paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Axelsson, O.; Kolotilina, L. Monotonicity and discretization error estimates. SIAM J. Numer. Anal. 1990, 27, 1591–1611. [Google Scholar] [CrossRef]
  2. Bai, J.; Feng, X.C. Fractional-order anisotropic diffusion for image denoising. IEEE Trans. Image Proc. 2007, 16, 2492–2502. [Google Scholar] [CrossRef] [PubMed]
  3. Benson, D.A.; Wheatcraft, S.W.; Meerschaert, M.M. Application of a fractional advection-dispersion equation. Water Resour. Res. 2000, 36, 1403–1412. [Google Scholar] [CrossRef]
  4. Benson, D.A.; Wheatcraft, S.W.; Meerschaert, M.M. The fractional-order governing equation of Lévy motion. Water Resour. Res. 2000, 36, 1413–1423. [Google Scholar] [CrossRef]
  5. Çelik, C.; Duman, M. Crank-Nicolson method for the fractional diffusion equation with the Riesz fractional derivative. J. Comput. Phys. 2012, 231, 1743–1750. [Google Scholar] [CrossRef]
  6. Chechkin, A.V.; Gorenflo, R.; Sokolov, I.M. Fractional diffusion in inhomogeneous media. J. Phys. A 2005, 38, L679–L684. [Google Scholar] [CrossRef]
  7. Coimbra, C. F. Mechanics with variable-order differential operators. Ann. Phys. 2003, 515, 692–703. [Google Scholar]
  8. Diethelm, K. The Analysis of Fractional Differential Equations; Springer-Verlag, Berlin and Heidelberg, 2010.
  9. Ding, H.F.; Li, C.P.; Chen, Y.Q. High-order algorithms for Riesz derivative and their applications (I). Abstr. Appl. Anal. 2014, 2014, 653797. [Google Scholar] [CrossRef]
  10. Du, R.; Alikhanov, A.A.; Sun, Z.Z. Temporal second order difference schemes for the multi-dimensional variable-order time fractional sub-diffusion equations. Comput. Math. Appl. 2020, 79, 2952–2972. [Google Scholar] [CrossRef]
  11. Henry, B.I.; Wearne, S.L. Fractional reaction-diffusion. Physica A 2000, 276, 448–455. [Google Scholar] [CrossRef]
  12. Horn, R.A.; Johnson, C.R. Topics in Matrix Analysis; Academic Press, Cambridge, 1994.
  13. Ingman, D.; Suzdalnitsky, J. Control of damping oscillations by fractional differential operator with time-dependent order. Comput. Methods Appl. Mech. Engrg. 2004, 193, 5585–5595. [Google Scholar] [CrossRef]
  14. Jin, B.T.; Lazarov, R.; Pasciak, J.; Zhou, Z. Error analysis of a finite element method for the space-fractional parabolic equation. SIAM J. Numer. Anal. 2014, 52, 2272–2294. [Google Scholar] [CrossRef]
  15. Kameni, S.N.; Djida, J.D.; Atangana, A. Modelling the movement of groundwater pollution with variable order derivative. J. Nonlinear Sci. Appl. 2017, 10, 5422–5432. [Google Scholar] [CrossRef]
  16. Kheirkhah, F.; Hajipour, M.; Baleanu, D. The performance of a numerical scheme on the variable-order time-fractional advection-reaction-subdiffusion equations. Appl. Numer. Math. 2022, 178, 25–40. [Google Scholar] [CrossRef]
  17. Lin, F.R.; Wang, Q.Y.; Jin, X.Q. Crank-Nicolson-weighted-shifted-Grünwald difference schemes for space Riesz variable-order fractional diffusion equations. Numer. Algor. 2021, 87, 601–631. [Google Scholar] [CrossRef]
  18. Lin, F.R.; Yang, S.W.; Jin, X.Q. Preconditioned iterative methods for fractional diffusion equation. J. Comput. Phys. 2014, 256, 109–117. [Google Scholar] [CrossRef]
  19. Lin, R.; Liu, F.; Anh, V.; Turner, I. Stability and convergence of a new explicit finite-difference approximation for the variable-order nonlinear fractional diffusion equation. Appl. Math. Comput. 2009, 212, 435–445. [Google Scholar] [CrossRef]
  20. Liu, F.; Zhuang, P.; Turner, I.; Burrage, K.; Anh, V. A new fractional finite volume method for solving the fractional diffusion equation. Appl. Math. Model. 2014, 38, 3871–3878. [Google Scholar] [CrossRef]
  21. Lorenzo, C.F.; Hartley, T.T. Variable-order and distributed order fractional operators. Nonlinear Dyn. 2002, 29, 57–98. [Google Scholar] [CrossRef]
  22. Meerschaert, M.M.; Tadjeran, C. Finite difference approximations for fractional advection-dispersion flow equations. J. Comput. Appl. Math. 2004, 172, 65–77. [Google Scholar] [CrossRef]
  23. Meerschaert, M.M.; Tadjeran, C. Finite difference approximations for two-sided space-fractional partial differential equations. Appl. Numer. Math. 2006, 56, 80–90. [Google Scholar] [CrossRef]
  24. Miller, K.S.; Ross, B. An Introduction to the Fractional Calculus and Fractional Differential Equations; John Wiley, New York, 1993.
  25. Oldham, K.B.; Spanier, J. The Fractional Calculus; Academic Press, New York, 1974.
  26. Ortigueira, M.D. Riesz potential operators and inverses via fractional centred derivatives. Int. J. Math. Math. Sci. 2006, 2006, 048391. [Google Scholar] [CrossRef]
  27. Pang, H.K.; Sun, H.W. A fast algorithm for the variable-order spatial fractional advection-diffusion equation. J. Sci. Comput. 2021, 87, 15. [Google Scholar] [CrossRef]
  28. Patnaik, S.; Hollkamp, J.P.; Semperlotti, F. Applications of variable-order fractional operators: a review. Proc. R. Soc. A 2020, 87476, 20190498. [Google Scholar] [CrossRef]
  29. Podlubny, I. Fractional Differential Equations; Cambridge University Press, New York, 1999.
  30. Ruiz-Medina, M.D.; Anh, V.; Angulo, J.M. Fractional generalized random fields of variable order. Stochastic Anal. Appl. 2004, 22, 775–799. [Google Scholar] [CrossRef]
  31. Samko, S.G.; Kilbas, A.A.; Marichev, O.I. Fractional Integerals and Derivatives: Theory and Applications; Gordon and Breach Science Publishers, 1993.
  32. Samko, S.G.; Ross, B. Integration and differentiation to a variable fractional order. Integr. Transf. Special Funct. 1993, 1, 277–300. [Google Scholar] [CrossRef]
  33. Seki, K.; Wojcik, M.; Tachiya, M. Fractional reaction-diffusion equation. J. Chem. Phys. 2003, 119, 2165–2174. [Google Scholar] [CrossRef]
  34. She, Z.H. A class of unconditioned stable 4-point WSGD schemes and fast iteration methods for space fractional diffusion equations. J. Sci. Comput. 2022, 92, 18. [Google Scholar] [CrossRef]
  35. She, Z.H.; Lao, C.X.; Yang, H.; Lin, F.R. Banded preconditioners for Riesz space fractional diffusion equations. J. Sci. Comput. 2021, 86, 31. [Google Scholar] [CrossRef]
  36. Sokolov, I.M.; Klafter, J.; Blumen, A. Fractional kinetics. Phys. Today 2002, 55, 48–54. [Google Scholar] [CrossRef]
  37. Sun, H.; Chang, A.; Zhang, Y.; Chen, W. A review on variable-order fractional differential equations: mathematical foundations, physical models, numerical methods and applications. Fract. Calc. Appl. Anal. 2019, 22(1), 27–59. [Google Scholar] [CrossRef]
  38. Sun, Z.Z.; Gao, G.H. Finite Difference Method for Fractional Differential Equations (Chinese version). Science Press, Beijing, 2015.
  39. Tian, W.; Zhou, H.; Deng, W. A class of second order difference approximations for solving space fractional diffusion equations. Math. Comput. 2015, 84, 1703–1727. [Google Scholar] [CrossRef]
  40. Wang, D.L.; Xiao, A.G.; Yang, W. Maximum-norm error analysis of a difference scheme for the space fractional CNLS. Appl. Math. Comput. 2015, 257, 241–251. [Google Scholar] [CrossRef]
  41. Wang, Q.Y.; Lin, F.R. Banded preconditioners for two-sided space variable-order fractional diffusion equations with a nonlinear source term. Accepted by Commun. Appl. Math. Comput.
  42. Wang, Q.Y.; She, Z.H.; Lao, C.X.; Lin, F.R. Fractional centered difference schemes and banded preconditioners for nonlinear Riesz space variable-order fractional diffusion equations. Numer. Algor. 2023, 95, 859–895. [Google Scholar] [CrossRef]
  43. Zeng, F.H.; Zhang, Z.Q.; Karniadakis, G.E. A generalized spectral collocation method with tunable accuracy for variable-order fractional differential equations. SIAM J. Sci. Comput. 2015, 37, A2710–A2732. [Google Scholar] [CrossRef]
  44. Zhao, X.; Sun, Z.Z.; Karniadakis, G.E. Second-order approximations for variable order fractional derivatives: Algorithms and applications. J. Comput. Phys. 2015, 293, 184–200. [Google Scholar] [CrossRef]
  45. Zhuang, P.; Liu, F.; Anh, V.; Turner, I. Numerical methods for the variable-order fractional advection-diffusion equation with a nonlinear source term SIAM J. Numer. Anal. 2009, 47, 1760–1781. [Google Scholar] [CrossRef]
Figure 1. The upper bound, lower bound and objective curves.
Table 1. The CPU time, the average number of PGMRES iterations and the errors of the numerical solutions of the IE-4WSGD scheme with α(x,t) = 1.4 + 0.5e^{(xt)^2−1}. Each entry under GE, P_k^{bm} and P̃_k^{bm} is a pair (average number of iterations, CPU time); the rate_h column can be reproduced from consecutive errors as shown after the table.

N      M       Error         rate_h   GE              P_k^{bm}           P̃_k^{bm}
2^6    2^2     1.6776e-01    –        (–, 0.0216)     (6.00, 0.0369)     (26.00, 0.0394)
2^7    2^6     8.6713e-03    4.26     (–, 0.0895)     (3.92, 0.1379)     (13.81, 0.1398)
2^8    2^10    5.3568e-04    4.02     (–, 2.3517)     (1.01, 2.3228)     (1.01, 2.2034)
2^9    2^14    3.3459e-05    4.00     (–, 175.5498)   (1.00, 156.8173)   (1.00, 149.2442)
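As a quick check of the rate_h column, the observed spatial order can be recovered from consecutive errors of Table 1, assuming that rate_h = log2(Error(N)/Error(2N)), i.e. the base-2 logarithm of the error ratio as the mesh size h is halved. The following minimal Python sketch is illustrative only and is not code from the paper; the small deviation from the tabulated 4.26 in the first entry comes from the rounding of the reported errors.

# A minimal sketch (assumption: rate_h = log2(Error(N) / Error(2N)), the observed
# order when the spatial mesh size h is halved); errors are copied from Table 1.
import math

errors = [1.6776e-01, 8.6713e-03, 5.3568e-04, 3.3459e-05]  # N = 2^6, 2^7, 2^8, 2^9
rates = [math.log2(coarse / fine) for coarse, fine in zip(errors, errors[1:])]
print([round(r, 2) for r in rates])  # [4.27, 4.02, 4.0], matching the table up to rounding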
Table 2. The CPU time, the average number of PGMRES iterations and the errors of the numerical solutions of the IE-4WSGD scheme with α(x,t) = (545 − (xt)^4)/300. Each entry under GE, P_k^{bm} and P̃_k^{bm} is a pair (average number of iterations, CPU time); a generic sketch of banded-preconditioned GMRES follows the table.

N      M       Error         rate_h   GE              P_k^{bm}           P̃_k^{bm}
2^6    2^2     1.0610e-01    –        (–, 0.0212)     (4.00, 0.0365)     (28.50, 0.0391)
2^7    2^6     5.7878e-03    4.20     (–, 0.0918)     (2.94, 0.1334)     (22.73, 0.1488)
2^8    2^10    3.5884e-04    4.01     (–, 2.4540)     (1.01, 2.4022)     (1.22, 2.3063)
2^9    2^14    2.2415e-05    4.00     (–, 177.6686)   (1.00, 161.0472)   (1.00, 152.0569)
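To make the comparison between direct solution and the preconditioned GMRES columns concrete, the following is a minimal, generic Python (SciPy) sketch of banded-preconditioned GMRES: a banded part of the coefficient matrix is factorized once and supplied to GMRES as the preconditioner M. The test matrix, the bandwidth and the helper banded_preconditioner are illustrative assumptions; this is not the paper's P_k^{bm} or P̃_k^{bm} construction.

# A generic sketch of banded-preconditioned GMRES (not the paper's exact preconditioner):
# keep a few diagonals of A, factorize that band, and use the banded solve as M.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla
from scipy.linalg import toeplitz

def banded_preconditioner(A, bandwidth):
    # Extract the diagonals of A within the given bandwidth and factorize the band.
    offsets = list(range(-bandwidth, bandwidth + 1))
    B = sp.diags([A.diagonal(k) for k in offsets], offsets, format="csc")
    lu = spla.splu(B)
    # The preconditioner acts as an approximate inverse of A via banded solves.
    return spla.LinearOperator(A.shape, matvec=lu.solve)

# Illustrative Toeplitz-like test system standing in for a discretization matrix (assumption).
n = 512
col = 1.0 / (1.0 + np.arange(n)) ** 2.5      # slowly decaying off-diagonal weights
A = sp.csc_matrix(np.eye(n) + 0.5 * toeplitz(col))
b = np.ones(n)

M = banded_preconditioner(A, bandwidth=5)

count = {"it": 0}
def callback(residual_norm):
    count["it"] += 1

x, info = spla.gmres(A, b, M=M, callback=callback, callback_type="pr_norm")
print("converged" if info == 0 else "not converged", "after", count["it"], "iterations")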
Table 3. The CPU time, the average number of PGMRES iterations and the errors of the numerical solutions of the IE-4WSGD scheme with α(x,y,t) = 1 + 1.6e^{−xy}/2, β(x,y,t) = 1 + sin(πxy/48). Each entry under GE, P_k^{bm} and P̃_k^{bm} is a pair (average number of iterations, CPU time).

N      M       Error         rate_h   GE                P_k^{bm}            P̃_k^{bm}
2^3    2^2     1.4136e-01    –        (–, 0.1455)       (7.33, 0.1474)      (6.67, 0.1569)
2^4    2^6     7.8665e-03    4.17     (–, 0.2134)       (2.60, 0.2876)      (3.03, 0.3827)
2^5    2^10    4.8806e-04    4.01     (–, 126.3812)     (1.05, 11.7302)     (1.10, 13.8425)
2^6    2^14    3.0296e-05    4.01     (–, 29107.2033)   (1.00, 748.1152)    (1.02, 753.8003)
Table 4. The CPU time, the average number of PGMRES iterations and the errors of the numerical solutions of the IE-4WSGD scheme with α(x,y,t) = 1.72 − 0.1(sin(x)cos(x)sin(y)cos(y)), β(x,y,t) = 1 + e^{−xy}/3 + 0.3cos(xy/32). Each entry under GE, P_k^{bm} and P̃_k^{bm} is a pair (average number of iterations, CPU time).

N      M       Error         rate_h   GE                P_k^{bm}            P̃_k^{bm}
2^3    2^2     1.3983e-01    –        (–, 0.1073)       (16.00, 0.1375)     (16.00, 0.1886)
2^4    2^6     7.6815e-03    4.19     (–, 0.2687)       (4.68, 0.4383)      (5.59, 0.4450)
2^5    2^10    4.7459e-04    4.02     (–, 139.7812)     (1.09, 13.7007)     (1.17, 12.0109)
2^6    2^14    2.9450e-05    4.01     (–, 13772.3810)   (1.02, 1010.3408)   (1.07, 1216.1459)
Table 5. Errors of the numerical solutions of the IE-4WSGD scheme with α(x,y,t) = 1 + e^{−xy}/e^t, β(x,y,t) = 1 + sin(πxy/48)/e^t.

N      M       Error         rate_h
2^3    2^2     5.6644e-02    –
2^4    2^6     6.1470e-03    3.20
2^5    2^10    4.0610e-04    3.92
2^6    2^14    2.5891e-05    3.97
Table 6. Errors of the numerical solutions of the IE-4WSGD scheme with α(x,y,t) = 1.72 − 0.1(sin(x)cos(xt)sin(y)cos(yt)), β(x,y,t) = 1 + e^t e^{−xy}/11.

N      M       Error         rate_h
2^3    2^2     6.7791e-02    –
2^4    2^6     3.9463e-03    4.10
2^5    2^10    2.5590e-04    3.95
2^6    2^14    1.6331e-05    3.97
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.