
Derivative Free Conformable Iterative Methods for Solving Nonlinear Equations


Submitted: 22 June 2023
Posted: 22 June 2023

Abstract
In recent years, a growing line of research has developed, with fruitful results, the use of fractional and conformable derivatives in the iterative procedures of classical methods for solving nonlinear equations. In this context, conformable derivatives have shown better behavior than fractional ones, both in theory and in practice. In this work, we adapt the approximation of conformable derivatives in order to design the first conformable derivative-free iterative schemes for solving nonlinear equations: a Steffensen-type method and a Secant-type method, the latter with memory. The convergence analysis preserves the order of the classical cases, and the numerical performance is studied in order to confirm the theoretical results. It is shown that these methods can offer some numerical advantages over their classical partners, with wide sets of converging initial estimations.
Keywords: 
Subject: Computer Science and Mathematics  -   Applied Mathematics

1. Introduction

Fractional calculus establishes a generalization of classical calculus in which derivatives and integrals are of noninteger order. Its tools can be used to model many applied problems, because of the higher degree of freedom they offer compared with classical calculus [1,2,3]. Several iterative procedures with Riemann-Liouville and Caputo fractional derivatives (see, for example, [4,5,6,7,8,9]) have been proposed for solving nonlinear equations, but these methods do not hold the order of convergence of their classical versions.
On the other hand, another approach is conformable calculus [10,11], whose low computational cost constitutes an advantage over fractional calculus, since special functions such as the Gamma or Mittag-Leffler functions need not be evaluated [2]. In this sense, several conformable iterative schemes (see [12,13,14]) were designed for finding $\bar{x} \in \mathbb{R}$, a solution of the nonlinear equation $f(x) = 0$, where $f: I \subseteq \mathbb{R} \to \mathbb{R}$. The theoretical convergence order of these procedures is preserved in practice. Indeed, these methods show good qualitative behavior, even improving on their respective classical cases in some numerical aspects.
These fractional and conformable schemes need the evaluation of fractional and conformable derivatives, respectively. Since conformable procedures have presented many advantages over fractional ones, in this manuscript we focus on the approximation of conformable derivatives in order to design, to the best of our knowledge, the first conformable derivative-free iterative methods for solving nonlinear equations: a Steffensen-type method and a Secant-type method; we also compare them with their classical partners.
Let us recall some basic definitions from conformable calculus. Given a function $f: [a, \infty) \to \mathbb{R}$, its left conformable derivative starting from $a$, of order $\alpha \in (0, 1]$, where $a, \alpha, x \in \mathbb{R}$ with $x > a$, is defined as [10,11]
$(T_\alpha^a f)(x) = \lim_{\varepsilon \to 0} \frac{f\left(x + \varepsilon (x-a)^{1-\alpha}\right) - f(x)}{\varepsilon}. \qquad (1)$
If this limit exists, then $f$ is $\alpha$-differentiable. Let us suppose that $f$ is differentiable; then $(T_\alpha^a f)(x) = f'(x)\,(x-a)^{1-\alpha}$. Given $b \in \mathbb{R}$ such that $f$ is $\alpha$-differentiable in $(a, b)$, then $(T_\alpha^a f)(a) = \lim_{x \to a^+} (T_\alpha^a f)(x)$.
This derivative preserves the property of non-fractional derivatives: $T_\alpha^a K = 0$, $K$ being a constant. As mentioned before, this kind of derivative does not require the evaluation of any special function.
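As a quick numerical illustration (our own sketch, with an arbitrary sample function and arbitrary values of $a$, $\alpha$ and $x$, none of which come from the paper), the limit in (1) can be approximated with a small ε and compared with the identity $(T_\alpha^a f)(x) = f'(x)(x-a)^{1-\alpha}$:

```python
# Minimal sketch (not from the paper): approximate the conformable derivative by its
# limit definition and compare with f'(x)*(x - a)^(1 - alpha) for a differentiable f.
import math

def conformable_derivative(f, x, a, alpha, eps=1e-8):
    # [f(x + eps*(x - a)^(1 - alpha)) - f(x)] / eps, a forward-difference version of (1)
    return (f(x + eps * (x - a) ** (1.0 - alpha)) - f(x)) / eps

f, fprime = math.sin, math.cos          # sample function and its classical derivative
a, alpha, x = -10.0, 0.7, 1.3           # arbitrary choices with x > a

print(conformable_derivative(f, x, a, alpha))   # value from the limit definition
print(fprime(x) * (x - a) ** (1.0 - alpha))     # value from the identity above
```

The two printed values should agree up to the forward-difference error in ε.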
In [15], an appropriate conformable Taylor series is provided, as shown in the following result.
Theorem 1
(Theorem 4.1, [15]). Let $f(x)$ be an infinitely α-differentiable function, $\alpha \in (0,1]$, about $a_1$, where the conformable derivatives start at $a$. Then, the conformable Taylor series of $f(x)$ can be given by
$f(x) = f(a_1) + \frac{(T_\alpha^a f)(a_1)}{\alpha}\,\delta_1 + \frac{(T_\alpha^a f)^{(2)}(a_1)}{2\alpha^2}\,\delta_2 + R_2(x, a_1, a), \qquad (2)$
being $L = a_1 - a$, $H = x - a$, $\delta_1 = H^{\alpha} - L^{\alpha}$, $\delta_2 = H^{2\alpha} - L^{2\alpha} - 2L^{\alpha}\delta_1$, ....
We can easily prove that $\delta_2 = \delta_1^2$, $\delta_3 = \delta_1^3$, and so on. So, (2) can be expressed as
$f(x) = f(a_1) + \frac{1}{\alpha}(T_\alpha^a f)(a_1)\,\delta_1 + \frac{1}{2\alpha^2}(T_\alpha^a f)^{(2)}(a_1)\,\delta_1^2 + R_2(x, a_1, a).$
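Indeed, the first of these identities follows directly from the definitions of $\delta_1$ and $\delta_2$ in Theorem 1:
$\delta_2 = H^{2\alpha} - L^{2\alpha} - 2L^{\alpha}\delta_1 = H^{2\alpha} - L^{2\alpha} - 2L^{\alpha}\left(H^{\alpha} - L^{\alpha}\right) = H^{2\alpha} - 2H^{\alpha}L^{\alpha} + L^{2\alpha} = \left(H^{\alpha} - L^{\alpha}\right)^2 = \delta_1^2.$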
Since the Secant-type method we propose includes memory, we need to introduce a generalization of the order of convergence (the R-order [16,17]); but first, let us recall the concept of R-factor:
Definition 1.
Let $\phi$ be an iterative method converging to some limit $\beta$, and let $\{x_k\}$ be an arbitrary sequence in $\mathbb{R}^n$ converging to $\beta$. Then, the R-factor of the sequence $\{x_k\}$ is
$R_m(x) = \begin{cases} \limsup_{k \to \infty} \|x_k - \beta\|^{1/k}, & \text{for } m = 1, \\ \limsup_{k \to \infty} \|x_k - \beta\|^{1/m^k}, & \text{for } m > 1. \end{cases}$
We can define now the R-order:
Definition 2.
The R-order of convergence of an iterative method $\phi$ at the point $\beta$ is
$O_R(\phi, \beta) = \begin{cases} +\infty, & \text{if } R_m(\phi, \beta) = 0 \ \ \forall m \in [1, +\infty), \\ \inf\{m \in [1, +\infty) : R_m(\phi, \beta) = 1\}, & \text{otherwise}. \end{cases}$
The following result states a relation between the roots of a characteristic polynomial and the R-order of an iterative procedure with memory:
Theorem 2.
Let $\phi$ be an iterative method with memory generating the sequence $\{x_k\}$ of approximations of the root $\bar{x}$, and let us suppose that the sequence $\{x_k\}$ converges to $\bar{x}$. If there exist a nonzero constant $\eta$ and nonnegative numbers $t_i$, $i = 0, 1, \ldots, m$, such that
$|e_{k+1}| \le \eta \prod_{i=0}^{m} |e_{k-i}|^{t_i}$
is fulfilled, then the R-order of the iterative scheme $\phi$ satisfies
$O_R(\phi, \bar{x}) \ge s^*,$
being $s^*$ the only positive root of the polynomial
$s^{m+1} - \sum_{i=0}^{m} t_i\, s^{m-i} = 0.$
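For instance (anticipating the analysis of the Secant-type scheme in Section 3), if a method with memory satisfies $|e_{k+1}| \le \eta\, |e_k|\, |e_{k-1}|$, then $m = 1$ and $t_0 = t_1 = 1$, so the polynomial of Theorem 2 is
$s^2 - s - 1 = 0,$
whose only positive root is $s^* = \frac{1+\sqrt{5}}{2} \approx 1.618$, and Theorem 2 yields $O_R(\phi, \bar{x}) \ge 1.618$.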
Finally, if we take into account that, given an iteration function $\phi(x)$ of order $p$, its asymptotic error constant $C$ is defined as [18]
$C = \lim_{x \to \bar{x}} \frac{\bar{x} - \phi(x)}{(\bar{x} - x)^p},$
then the next result permits the calculation of the asymptotic error constant of an iterative scheme with order of convergence $p$, knowing that of another iterative method with the same order.
Theorem 3. ([18], Theorem 2-8) Let us consider iteration functions $\phi_1(x)$ and $\phi_2(x)$ of order $p$ with fixed point $\bar{x}$ of multiplicity $m$. If we define
$G(x) = \frac{\phi_2(x) - \phi_1(x)}{(x - \bar{x})^p}, \quad x \ne \bar{x},$
and $C_1$ and $C_2$ are the asymptotic error constants of $\phi_1$ and $\phi_2$, respectively, then
$C_2 = C_1 + \lim_{x \to \bar{x}} G(x).$
Later, we will rely on Theorem 3 for the conformable schemes proposed in this work, using $m = 1$ for our purposes.
In the next section, we design the Steffensen-type and Secant-type procedures; the convergence of these methods is analyzed in Section 3; in Section 4 we study their numerical performance; and the concluding remarks are provided in Section 5.

2. Deduction of the methods

As in [19], we consider the approximation of (1) by the following conformable finite divided difference of linear order:
$(T_\alpha^a f)(x) \approx \frac{f\left(x + \varepsilon (x-a)^{1-\alpha}\right) - f(x)}{\varepsilon}, \quad \varepsilon \approx 0. \qquad (10)$
In [12,13,14], we can see that the conformable schemes preserve the theoretical order of their classical versions (when α = 1), no matter whether these procedures are scalar or vectorial, one-point or multipoint. Now, we wonder whether the conformable versions of derivative-free methods (with or without memory) also hold the order of convergence of their classical versions. For this aim, we use the general technique proposed in [14], which is useful for finding the conformable partner of any known procedure, and show that these procedures preserve the order of convergence of their classical versions.
The general technique given in [14] states that the classical method
$\phi(x) = x - g(x) f(x) \qquad (11)$
has the conformable version
$\phi(x) = a + \left[(x-a)^{\alpha} - \alpha\, g_\alpha(x) f(x)\right]^{1/\alpha}. \qquad (12)$
If $g(x)$ in (11) includes classical derivatives of $f(x)$, then $g_\alpha(x)$ in (12) includes conformable derivatives of $f(x)$. So, given a classical scheme, we only need to identify the analytical expression of $g(x)$ to obtain its conformable version.
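For instance (this example is ours, not part of the derivation below), Newton's method corresponds to $g(x) = \frac{1}{f'(x)}$; replacing the classical derivative by the conformable one gives $g_\alpha(x) = \frac{1}{(T_\alpha^a f)(x)}$, and (12) becomes a conformable Newton-type scheme of the kind discussed in [12,13,14]:
$\phi(x) = a + \left[(x-a)^{\alpha} - \frac{\alpha\, f(x)}{(T_\alpha^a f)(x)}\right]^{1/\alpha}.$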
In the case of Steffensen's procedure [17,18,20]:
$\phi_1(x) = x - \frac{f(x)}{\dfrac{f(x + f(x)) - f(x)}{f(x)}},$
where $\dfrac{f(x + f(x)) - f(x)}{f(x)}$ is an approximation of the classical derivative $f'(x)$. So,
$g(x) = \frac{f(x)}{f(x + f(x)) - f(x)}.$
Regarding (10),
$g_\alpha(x) = \frac{f(x)}{f\left(x + f(x)(x-a)^{1-\alpha}\right) - f(x)},$
being $\varepsilon = f(x)$ in (10). Hence, the conformable version of Steffensen's method is
$\phi_2(x) = a + \left[(x-a)^{\alpha} - \frac{\alpha f(x)^2}{f\left(x + f(x)(x-a)^{1-\alpha}\right) - f(x)}\right]^{1/\alpha},$
and we denote it by SeCO; note that when α = 1 the classical Steffensen scheme is obtained.
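A minimal Python sketch of the SeCO iteration follows (our own illustration; the stopping criteria mimic those used later in Section 4, and the default values of a, α and the tolerance are merely assumed choices):

```python
# Sketch of the conformable Steffensen-type method SeCO (illustrative parameters).
def seco(f, x0, a=-10.0, alpha=0.8, tol=1e-8, max_iter=500):
    x = x0
    for k in range(1, max_iter + 1):
        if x <= a:                         # the scheme requires x_k > a
            break
        fx = f(x)
        denom = f(x + fx * (x - a) ** (1.0 - alpha)) - fx
        if denom == 0.0:
            break
        radicand = (x - a) ** alpha - alpha * fx ** 2 / denom
        if radicand <= 0.0:                # keep the 1/alpha-th root real
            break
        x_new = a + radicand ** (1.0 / alpha)
        if abs(x_new - x) < tol or abs(f(x_new)) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

# Example with the test function f_2(x) = sin(x) - x^2 + 1 used in Section 4;
# alpha = 1 recovers the classical Steffensen method.
import math
root, iters = seco(lambda t: math.sin(t) - t * t + 1.0, x0=2.0, alpha=1.0)
```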
In the case of the Secant procedure [17,18]:
$\phi_3(x; y) = x - \frac{(y - x) f(x)}{f(y) - f(x)} = x - \frac{f(x)}{\dfrac{f(y) - f(x)}{y - x}},$
where $\dfrac{f(y) - f(x)}{y - x}$ is an approximation of the classical derivative $f'(x)$. Then,
$g(x) = \frac{y - x}{f(y) - f(x)}.$
Considering (10),
$g_\alpha(x) = \frac{y - x}{f\left(x + (y - x)(x-a)^{1-\alpha}\right) - f(x)},$
being $\varepsilon = y - x$ in (10). Therefore, the conformable version of the Secant method is
$\phi_4(x; y) = a + \left[(x-a)^{\alpha} - \frac{\alpha (y - x) f(x)}{f\left(x + (y - x)(x-a)^{1-\alpha}\right) - f(x)}\right]^{1/\alpha},$
and we denote it by EeCO; note that when α = 1 the classical Secant scheme is obtained.
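Analogously, a sketch of the EeCO iteration with memory (again our own illustration with assumed default parameters; the two starting guesses play the roles of $y$ and $x$ above):

```python
# Sketch of the conformable Secant-type method with memory, EeCO (illustrative parameters).
def eeco(f, x_prev, x0, a=-10.0, alpha=0.8, tol=1e-8, max_iter=500):
    xm, x = x_prev, x0                     # starting points x_{-1} and x_0
    for k in range(1, max_iter + 1):
        if x <= a:                         # the scheme requires x_k > a
            break
        fx = f(x)
        denom = f(x + (xm - x) * (x - a) ** (1.0 - alpha)) - fx
        if denom == 0.0:
            break
        radicand = (x - a) ** alpha - alpha * (xm - x) * fx / denom
        if radicand <= 0.0:
            break
        x_new = a + radicand ** (1.0 / alpha)
        if abs(x_new - x) < tol or abs(f(x_new)) < tol:
            return x_new, k
        xm, x = x, x_new                   # shift the memory: (x_{k-1}, x_k) <- (x_k, x_{k+1})
    return x, max_iter
```

With α = 1 the radicand reduces to $(x - a) - \frac{(y - x) f(x)}{f(y) - f(x)}$, so the routine reproduces the classical Secant iteration.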

3. Convergence analysis

The next result establishes the conditions for the quadratic order of convergence of SeCO. We use the notation $x = x_k$ and $\phi(x) = x_{k+1}$.
Theorem 4.
Let us consider a sufficiently differentiable function $f: I \subseteq \mathbb{R} \to \mathbb{R}$ in the open interval $I$, containing a zero $\bar{x}$ of $f(x)$. Assume that the seed $x_0$ is close enough to $\bar{x}$. Then, the local order of convergence of the conformable Steffensen scheme (SeCO), defined by
$x_{k+1} = a + \left[(x_k - a)^{\alpha} - \frac{\alpha f(x_k)^2}{f\left(x_k + f(x_k)(x_k - a)^{1-\alpha}\right) - f(x_k)}\right]^{1/\alpha}, \quad k = 0, 1, 2, \ldots,$
is at least 2, being $\alpha \in (0, 1]$, and its error equation is
$e_{k+1} = \left[\left(1 + f'(\bar{x})(\bar{x}-a)^{1-\alpha}\right) C_2 + \frac{1}{2}\,\frac{1-\alpha}{\bar{x}-a}\right] e_k^2 + O(e_k^3),$
where $C_q = \frac{f^{(q)}(\bar{x})}{q!\, f'(\bar{x})}$, $q \ge 2$, provided that $x_k > a$, $k = 0, 1, 2, \ldots$
Proof. 
Knowing that $x_k = e_k + \bar{x}$, the Taylor expansions of $f(x_k)$ and $f(x_k)^2$ about $\bar{x}$ are
$f(x_k) = f'(\bar{x})\left[e_k + C_2 e_k^2 + C_3 e_k^3\right] + O(e_k^4)$
and
$f(x_k)^2 = f'(\bar{x})^2\left[e_k^2 + 2 C_2 e_k^3\right] + O(e_k^4),$
respectively, being $C_q = \frac{f^{(q)}(\bar{x})}{q!\, f'(\bar{x})}$, for $q \ge 2$.
The generalized binomial theorem states that [21]
$(x + y)^r = \sum_{k=0}^{\infty} \binom{r}{k} x^{r-k} y^k, \quad k \in \{0\} \cup \mathbb{N},$
where (see [22])
$\binom{r}{k} = \frac{\Gamma(r+1)}{k!\, \Gamma(r-k+1)}, \quad k \in \{0\} \cup \mathbb{N},$
and $\Gamma(\cdot)$ is the Gamma function. So, using this result,
$x_k + f(x_k)(x_k - a)^{1-\alpha} = \bar{x} + \left(1 + f'(\bar{x})(\bar{x}-a)^{1-\alpha}\right) e_k + f'(\bar{x})\,\frac{1 - \alpha + (\bar{x}-a) C_2}{(\bar{x}-a)^{\alpha}}\, e_k^2 + O(e_k^3).$
Then,
$f\left(x_k + f(x_k)(x_k - a)^{1-\alpha}\right) = f'(\bar{x})\left[\left(1 + f'(\bar{x})(\bar{x}-a)^{1-\alpha}\right) e_k + \left(\left(1 + f'(\bar{x})(\bar{x}-a)^{1-\alpha}\right)^2 C_2 + f'(\bar{x})\,\frac{1 - \alpha + (\bar{x}-a) C_2}{(\bar{x}-a)^{\alpha}}\right) e_k^2\right] + O(e_k^3),$
and
$f\left(x_k + f(x_k)(x_k - a)^{1-\alpha}\right) - f(x_k) = f'(\bar{x})^2\left[(\bar{x}-a)^{1-\alpha} e_k + \left(\frac{1-\alpha}{(\bar{x}-a)^{\alpha}} + (\bar{x}-a)^{1-2\alpha}\left(f'(\bar{x})(\bar{x}-a) + 3(\bar{x}-a)^{\alpha}\right) C_2\right) e_k^2\right] + O(e_k^3).$
The quotient $\dfrac{\alpha f(x_k)^2}{f\left(x_k + f(x_k)(x_k - a)^{1-\alpha}\right) - f(x_k)}$ results in
$\frac{\alpha f(x_k)^2}{f\left(x_k + f(x_k)(x_k - a)^{1-\alpha}\right) - f(x_k)} = \alpha (\bar{x}-a)^{\alpha-1} e_k + \alpha \left(\frac{\alpha - 1}{(\bar{x}-a)^{2-\alpha}} - \frac{\left(f'(\bar{x})(\bar{x}-a) + (\bar{x}-a)^{\alpha}\right) C_2}{\bar{x}-a}\right) e_k^2 + O(e_k^3).$
Again,
$(x_k - a)^{\alpha} = (\bar{x}-a)^{\alpha} + \alpha (\bar{x}-a)^{\alpha-1} e_k + \frac{1}{2}\alpha(\alpha-1)(\bar{x}-a)^{\alpha-2} e_k^2 + O(e_k^3),$
hence,
$(x_k - a)^{\alpha} - \frac{\alpha f(x_k)^2}{f\left(x_k + f(x_k)(x_k - a)^{1-\alpha}\right) - f(x_k)} = (\bar{x}-a)^{\alpha} + \alpha\left(\frac{1-\alpha}{2(\bar{x}-a)^{2-\alpha}} + \frac{\left(f'(\bar{x})(\bar{x}-a) + (\bar{x}-a)^{\alpha}\right) C_2}{\bar{x}-a}\right) e_k^2 + O(e_k^3).$
By the generalized binomial result,
$\left[(x_k - a)^{\alpha} - \frac{\alpha f(x_k)^2}{f\left(x_k + f(x_k)(x_k - a)^{1-\alpha}\right) - f(x_k)}\right]^{1/\alpha} = \bar{x} - a + \left[\left(1 + f'(\bar{x})(\bar{x}-a)^{1-\alpha}\right) C_2 + \frac{1}{2}\,\frac{1-\alpha}{\bar{x}-a}\right] e_k^2 + O(e_k^3).$
Finally,
$e_{k+1} = \left[\left(1 + f'(\bar{x})(\bar{x}-a)^{1-\alpha}\right) C_2 + \frac{1}{2}\,\frac{1-\alpha}{\bar{x}-a}\right] e_k^2 + O(e_k^3),$
and the proof is finished. □
Remark 1.
The classical Steffensen method has the error equation
$e_{k+1} = \left(1 + f'(\bar{x})\right) C_2\, e_k^2 + O(e_k^3).$
Thus, the relation between the asymptotic error constants stated in Theorem 3 is confirmed.
Remark 2.
SeCO is, to the best of our knowledge, the first optimal conformable derivative-free scheme according to the Kung-Traub conjecture (see [23]).
The following result states the conditions for the superlinear order of convergence of EeCO. We use the notation $x = x_k$, $y = x_{k-1}$ and $\phi(x; y) = x_{k+1}$.
Theorem 5.
Let us consider a sufficiently differentiable function $f: I \subseteq \mathbb{R} \to \mathbb{R}$ in the open interval $I$, holding a zero $\bar{x}$ of $f(x)$. If the initial approximations $x_0$ and $x_{-1}$ are close enough to $\bar{x}$, then the conformable Secant procedure (EeCO)
$x_{k+1} = a + \left[(x_k - a)^{\alpha} - \frac{\alpha (x_{k-1} - x_k) f(x_k)}{f\left(x_k + (x_{k-1} - x_k)(x_k - a)^{1-\alpha}\right) - f(x_k)}\right]^{1/\alpha}, \quad k = 0, 1, 2, \ldots,$
has local order of convergence at least 1.618, being $0 < \alpha \le 1$, and its error equation is
$e_{k+1} = (\bar{x}-a)^{1-\alpha} C_2\, e_k e_{k-1} + O(e_k e_{k-1}^2),$
where $C_q = \frac{f^{(q)}(\bar{x})}{q!\, f'(\bar{x})}$, for $q \ge 2$, provided that $a < x_k$ and $a < x_{k-1}$ for all $k$.
Proof. 
Knowing that $x_k = e_k + \bar{x}$, the Taylor expansion of $f(x_k)$ about $\bar{x}$ is
$f(x_k) = f'(\bar{x})\left[e_k + C_2 e_k^2 + C_3 e_k^3\right] + O(e_k^4),$
being $C_j = \frac{f^{(j)}(\bar{x})}{j!\, f'(\bar{x})}$, for $j \ge 2$.
Knowing that $x_{k-1} = e_{k-1} + \bar{x}$, then $x_{k-1} - x_k = e_{k-1} - e_k$. So, using the generalized binomial theorem,
$x_k + (x_{k-1} - x_k)(x_k - a)^{1-\alpha} = \bar{x} + (\bar{x}-a)^{1-\alpha} e_{k-1} + \left(1 - (\bar{x}-a)^{1-\alpha}\right) e_k + (1-\alpha)(\bar{x}-a)^{-\alpha} e_k e_{k-1} + (\alpha - 1)(\bar{x}-a)^{-\alpha} e_k^2 + O(e_k^2 e_{k-1}).$
Thus, the expansion of $f\left(x_k + (x_{k-1} - x_k)(x_k - a)^{1-\alpha}\right)$ is
$f\left(x_k + (x_{k-1} - x_k)(x_k - a)^{1-\alpha}\right) = f'(\bar{x})\left[(\bar{x}-a)^{1-\alpha} e_{k-1} + \left(1 - (\bar{x}-a)^{1-\alpha}\right) e_k + (\bar{x}-a)^{2-2\alpha} C_2 e_{k-1}^2 + \left((1-\alpha)(\bar{x}-a)^{-\alpha} + 2(\bar{x}-a)^{1-\alpha} C_2 - 2(\bar{x}-a)^{2-2\alpha} C_2\right) e_k e_{k-1}\right] + O(e_k e_{k-1}^2),$
and
$f\left(x_k + (x_{k-1} - x_k)(x_k - a)^{1-\alpha}\right) - f(x_k) = f'(\bar{x})\left[(\bar{x}-a)^{1-\alpha} e_{k-1} - (\bar{x}-a)^{1-\alpha} e_k + (\bar{x}-a)^{2-2\alpha} C_2 e_{k-1}^2 + \left((1-\alpha)(\bar{x}-a)^{-\alpha} + 2(\bar{x}-a)^{1-\alpha} C_2 - 2(\bar{x}-a)^{2-2\alpha} C_2\right) e_k e_{k-1}\right] + O(e_k e_{k-1}^2).$
Therefore, knowing that $x_{k-1} - x_k = e_{k-1} - e_k$, the quotient $\dfrac{\alpha (x_{k-1} - x_k) f(x_k)}{f\left(x_k + (x_{k-1} - x_k)(x_k - a)^{1-\alpha}\right) - f(x_k)}$ results in
$\frac{\alpha (x_{k-1} - x_k) f(x_k)}{f\left(x_k + (x_{k-1} - x_k)(x_k - a)^{1-\alpha}\right) - f(x_k)} = \alpha (\bar{x}-a)^{\alpha-1} e_k - \alpha C_2\, e_k e_{k-1} + O(e_k e_{k-1}^2).$
Hence,
$(x_k - a)^{\alpha} - \frac{\alpha (x_{k-1} - x_k) f(x_k)}{f\left(x_k + (x_{k-1} - x_k)(x_k - a)^{1-\alpha}\right) - f(x_k)} = (\bar{x}-a)^{\alpha} + \alpha C_2\, e_k e_{k-1} + O(e_k e_{k-1}^2),$
and
$\left[(x_k - a)^{\alpha} - \frac{\alpha (x_{k-1} - x_k) f(x_k)}{f\left(x_k + (x_{k-1} - x_k)(x_k - a)^{1-\alpha}\right) - f(x_k)}\right]^{1/\alpha} = \bar{x} - a + (\bar{x}-a)^{1-\alpha} C_2\, e_k e_{k-1} + O(e_k e_{k-1}^2).$
Finally,
$e_{k+1} = (\bar{x}-a)^{1-\alpha} C_2\, e_k e_{k-1} + O(e_k e_{k-1}^2).$
Using Theorem 2, the characteristic polynomial obtained is $s^2 - s - 1 = 0$, whose only positive root is $s \approx 1.618$. So, the convergence of the conformable Secant method is superlinear. □
Remark 3.
Since the error equation of the classical Secant scheme is
$e_{k+1} = C_2\, e_k e_{k-1} + O(e_k e_{k-1}^2),$
the relation between the asymptotic error constants stated in Theorem 3 is checked.
Remark 4.
EeCO is the first conformable iterative procedure with memory.
The theoretical order of convergence of derivative-free classical schemes (with or without memory) is thus also preserved in their conformable versions. In the next section, some numerical tests are performed on several nonlinear functions, and the stability of these methods is studied.

4. Numerical results

To obtain the results shown in this section, we have used Matlab R2020a with double precision arithmetic, $|x_{k+1} - x_k| < 10^{-8}$ or $|f(x_{k+1})| < 10^{-8}$ as stopping criteria, and a maximum of 500 iterations. The Approximate Computational Order of Convergence (ACOC)
$ACOC = \rho = \frac{\ln\left(|x_{k+1} - x_k| / |x_k - x_{k-1}|\right)}{\ln\left(|x_k - x_{k-1}| / |x_{k-1} - x_{k-2}|\right)}, \quad k = 2, 3, 4, \ldots,$
defined in [24], is used to confirm that the theoretical order of convergence is also preserved in practice.
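For completeness, a small helper (our own, not the authors' Matlab code) that evaluates this quantity from a stored list of iterates; it only assumes that at least four consecutive iterates are available:

```python
import math

def acoc(xs):
    # xs: iterates x_0, x_1, ..., x_{k+1} (at least 4 values); returns the last ACOC estimate.
    d = [abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
    return math.log(d[-1] / d[-2]) / math.log(d[-2] / d[-3])
```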
Now, we test six nonlinear functions with the methods designed in the previous section; in this sense, we compare each scheme with its classical version (when α = 1). For EeCO we choose $x_{-1} = x_0 + 1$ to perform the first iteration, we fix $a = -10$ for each method, and $\alpha \in (0, 1]$.
In each table we show the results obtained for each test function using the two schemes designed in the previous section (SeCO and EeCO), where $x_0$ coincides in both procedures.
The first test function is $f_1(x) = -12.84x^6 - 25.6x^5 + 16.55x^4 + 2.21x^3 + 26.71x^2 + 4.29x - 15.21$, with real and complex roots $\bar{x}_1 \approx -0.82366 + 0.24769i$, $\bar{x}_2 \approx -0.82366 - 0.24769i$, $\bar{x}_3 \approx 2.62297$, $\bar{x}_4 \approx 0.584$, $\bar{x}_5 \approx 0.21705 + 0.99911i$ and $\bar{x}_6 \approx 0.21705 - 0.99911i$.
In Table 1 we can see that SeCO can require the same number of iterations as the classical Steffensen method (when α = 1), and ρ can be slightly higher than 2 when α ≠ 1. Note that SeCO needs the initial estimate $x_0$ to be very close to $\bar{x}_4$ in order to converge for any α. We observe that EeCO requires in some cases fewer iterations than the Secant scheme for most values of α, and the ACOC can be slightly higher than 1.618.
Our second test function is $f_2(x) = \sin x - x^2 + 1$, with real roots $\bar{x}_1 \approx -0.6367$ and $\bar{x}_2 \approx 1.4096$.
In Table 2 we note that SeCO can converge in fewer iterations than its classical partner, and that a different root can be found; the case α = 0.8 is not shown because the method converges to some point which is not a root of $f_2(x)$ (one of the stopping criteria is satisfied while the other quantity remains far from zero); likewise, no results are shown when more than 500 iterations are required. We can see that EeCO can require the same number of iterations as its classical partner, and ρ can be slightly higher than 1.618.
The next test function is $f_3(x) = \left(\sin x - \frac{x}{2}\right)^2$, with double roots $\bar{x}_1 \approx -1.8956$, $\bar{x}_2 = 0$ and $\bar{x}_3 \approx 1.8956$.
In Table 3 we observe that SeCO often requires a lower number of iterations than Steffensen's procedure, and that the ACOC is 1 for all α, because the multiplicity of all the roots is m = 2. We note that the number of iterations increases when EeCO is used, that a distinct root can be found when a different value of α is chosen and, again, that ρ is 1 for any α, because the multiplicity of these roots is 2; we show no results for α = 0.8, 0.4, as the scheme converges to some point which is not a root of $f_3(x)$ (one of the stopping criteria is satisfied while the other quantity remains far from zero).
The fourth nonlinear function is f 4 ( x ) = x 4 + 8 sin π x 2 + 2 + x 3 x 4 + 1 6 + 8 17 , with roots x ¯ 1 = 2 and x ¯ 2 1.1492 .
In Table 4 we see that SeCO needs fewer iterations than its classical partner in many cases; ρ is not provided for α = 0.5 because at least 3 iterations are needed to compute it, and we do not show results for α = 0.1, since the procedure does not converge to a root of $f_4(x)$. We can observe that the classical Secant method fails, whereas the conformable version can find a solution for some values of α, and the ACOC can be slightly higher than 1.618; again, no results are shown for α = 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, because the procedure does not converge to a root of $f_4(x)$.
The fifth test function is f 5 ( x ) = x 4 + 8 sin π x 2 + 2 + x 3 x 4 + 1 6 + 8 17 , with real root x ¯ 1 0.8541 . Also, the complex root x ¯ 2 0.1498 + 0.8244 i can be obtained.
In Table 5 we note that SeCO can require fewer iterations than its classical version in some cases, that a different root can be found when choosing a distinct value of α, that a complex root can be obtained starting from a real initial estimate, and that ρ can be slightly higher than 2. We can see that EeCO can converge in a lower number of iterations than its classical version, and the ACOC can be slightly higher than 1.618. No results are shown for either method when more than 500 iterations are required.
Finally, our sixth test function is $f_6(x) = \sin(10x) - 0.5x + 0.2$, with real roots $\bar{x}_1 = -1.4523$, $\bar{x}_2 = -1.3647$, $\bar{x}_3 = -0.87345$, $\bar{x}_4 = -0.6857$, $\bar{x}_5 = -0.27949$, $\bar{x}_6 = -0.021219$, $\bar{x}_7 = 0.31824$, $\bar{x}_8 = 0.64036$, $\bar{x}_9 = 0.91636$, $\bar{x}_{10} = 1.3035$, $\bar{x}_{11} = 1.5118$, $\bar{x}_{12} = 1.9756$ and $\bar{x}_{13} = 2.0977$.
In Table 6 we observe that SeCO and EeCO need a lower number of iterations than their respective classical partners for some values of α, and that ρ is similar to the classical one in each case; no results are shown for α = 0.3 because the scheme converges to some point which is not a root of $f_6(x)$ (one of the stopping criteria is satisfied while the other quantity remains far from zero). Neither are results shown for EeCO when α = 0.1, since it requires more than 500 iterations to converge. We note that with each procedure different roots are obtained by modifying the value of α.

4.1. Qualitative performance

Now, we analyze the dependence of the schemes designed in this manuscript on the initial estimations; for this, the convergence planes defined in [25] are used. In them, the abscissa axis corresponds to the initial estimate $x_0$ (for both schemes), and the ordinate axis corresponds to the order of the derivative α. We define a mesh of 400 × 400 points, represented in different colors. The points not represented in black correspond to pairs $(x_0, \alpha)$ converging to one of the roots, with a tolerance of $10^{-3}$; different roots have different associated colors. Therefore, a point represented in black means that the iterative process does not converge to any root within a maximum of 500 iterations.
For all convergence planes we calculate the percentage of convergent pairs $(x_0, \alpha)$, in order to compare the efficiency of these procedures. For each method we set $a = -10$ in every plane, $x_0 \in [-5, 5]$, and $\alpha \in (0, 1]$. In the case of EeCO we select $x_{-1} = x_0 + 1$ to perform the first iteration, just as in the numerical tests above.
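The following sketch (ours, not the authors' Matlab code) shows how such a plane can be generated for SeCO with the `seco` routine sketched in Section 2; the grid construction, the list of reference roots and the handling of non-convergent points are assumptions consistent with the description above:

```python
import numpy as np

def convergence_plane(f, roots, method, x0_min=-5.0, x0_max=5.0, n=400,
                      a=-10.0, root_tol=1e-3, max_iter=500):
    # Each entry stores the index of the root reached by the pair (x0, alpha),
    # or -1 (plotted in black) when no listed root is approached.
    x0s = np.linspace(x0_min, x0_max, n)
    alphas = np.linspace(1.0 / n, 1.0, n)          # alpha in (0, 1]
    plane = -np.ones((n, n), dtype=int)
    for i, alpha in enumerate(alphas):
        for j, x0 in enumerate(x0s):
            x, _ = method(f, x0, a=a, alpha=alpha, max_iter=max_iter)
            for r, root in enumerate(roots):
                if abs(x - root) < root_tol:
                    plane[i, j] = r
                    break
    return plane

# Percentage of convergent pairs: 100.0 * (plane >= 0).mean()
```

For EeCO, one would wrap the routine so that the second starting point is taken as $x_{-1} = x_0 + 1$, as in the tests above.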
In Figure 1, we can observe that SeCO achieves only about 0.2% of convergence, and only one root is obtained, whereas EeCO reaches approximately 43% of convergence, with all the roots being reached.
In Figure 2, SeCO attains around 67% of convergence, and EeCO gets around 97% of convergence. Both roots are obtained on each plane.
In Figure 3, we see that SeCO achieves about 81% of convergence, but EeCO reaches only about 1% of convergence. The three roots are found.
In Figure 4, we observe that SeCO attains approximately 41% of convergence, whereas EeCO gets approximately 24% of convergence. Both roots are obtained on each plane.
In Figure 5, we note that SeCO achieves around 71% of convergence, and EeCO reaches around 66% of convergence. Both roots (the real and the complex one) are found on each plane.
In Figure 6, we can see that SeCO attains about 93% of convergence, but EeCO gets about 74% of convergence. All roots are obtained on each plane.
We point out that $f_2(x)$, $f_3(x)$, $f_4(x)$, $f_5(x)$ and $f_6(x)$ have infinitely many roots, so a higher percentage of converging pairs $(x_0, \alpha)$ could be obtained in Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6 if some of the complex roots of these functions were also considered.

5. Concluding remarks

In this work, the first derivative-free conformable iterative methods have been designed (SeCO and EeCO), where SeCO is the first optimal derivative-free conformable method and EeCO is the first conformable scheme with memory in the literature. The convergence of these procedures was analyzed, and they preserve the order of convergence of their classical versions (when α = 1). The numerical performance of these schemes was studied, and we found that they can converge when the classical versions fail, that roots can be found in fewer iterations, that a different solution can be obtained by selecting different values of α, that complex roots can be reached from real seeds, and that the approximated computational order of convergence can be slightly higher. We also visualized the dependence on the initial estimates by using convergence planes; these methods showed good stability, given the percentage of converging points $(x_0, \alpha)$, and, furthermore, all the roots are obtained in most of the planes.

Author Contributions

Conceptualization, G.C. ; methodology, M.P.V.; software, A.C.; formal analysis, J.R.T.; investigation, A.C. ; writing—original draft preparation, G.C. and M.P.V.; writing—review and editing, A.C. and J.R.T. All authors have read and agreed to the published version of the manuscript.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. K.S. Miller, An Introduction to Fractional Calculus and Fractional Differential Equations, J. Wiley and Sons, New York (1993).
  2. I. Podlubny, Fractional Differential Equations, Academic Press, New York (1999).
  3. A.M. Mathai, H.J. Haubold, Fractional and multivariable calculus, model building and optimization problems, Springer Optimization and Its Applications, Berlin (2017).
  4. A. Akgül, A. Cordero, J.R. Torregrosa, A fractional Newton method with 2αth-order of convergence and its stability, Applied Mathematics Letters 98 (2019) 344–351.
  5. G. Candelario, A. Cordero, J.R. Torregrosa, Multipoint Fractional Iterative Methods with (2α+1)th-Order of Convergence for Solving Nonlinear Problems, Mathematics 8(3), 452 (2020). [CrossRef]
  6. K. Gdawiec, W. Kotarski, A. Lisowska, Newton’s method with fractional derivatives and various iteration processes via visual analysis, Numerical Algorithms 86 (2021) 953–1010. [CrossRef]
  7. A. Torres-Hernandez, F. Brambila-Paz, U. Iturrarán-Viveros, R. Caballero-Cruz, Fractional Newton–Raphson Method Accelerated with Aitken’s Method, Axioms 10, 47 (2021). [CrossRef]
  8. S.K. Nayak, P.K. Parida, The dynamical analysis of a low computational cost family of higher-order fractional iterative method, International Journal of Computer Mathematics 100:6 (2023) 1395-1417. [CrossRef]
  9. M.A. Bayrak, A. Demir, E. Ozbilge, On Fractional Newton-Type Method for Nonlinear Problems, Journal of Mathematics, 2022 (2022). [CrossRef]
  10. R. Khalil, M. Al Horani, A. Yousef, M. Sababheh, A new definition of fractional derivative, Journal of Computational and Applied Mathematics 264 (2014) 65–70.
  11. T. Abdeljawad, On conformable fractional calculus, Journal of Computational and Applied Mathematics 279 (2014) 57–66.
  12. G. Candelario, A. Cordero, J.R. Torregrosa, M.P. Vassileva, An optimal and low computational cost fractional Newton-type method for solving nonlinear equations, Applied Mathematics Letters 124, 107650 (2022). [CrossRef]
  13. G. Candelario, A. Cordero, J.R. Torregrosa, M.P. Vassileva, Generalized conformable fractional Newton-type method for solving nonlinear systems, Numerical Algorithms, (2023). [CrossRef]
  14. G. Candelario, A. Cordero, J.R. Torregrosa, M.P. Vassileva, Solving Nonlinear Transcendental Equations by Iterative Methods with Conformable Derivatives: A General Approach, Mathematics 11, 2568 (2023). [CrossRef]
  15. S. Toprakseven, Numerical Solutions of Conformable Fractional Differential Equations by Taylor and Finite Difference Methods, Journal of Natural and Applied Sciences 23 (2019) 850–863.
  16. J.M. Ortega, W.C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York (1970).
  17. M.S. Petković, B. Neta, L.D. Petković, J. Džunić, Multipoint Methods for Solving Nonlinear Equations, Elsevier, USA (2013).
  18. J.F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, New Jersey (1964).
  19. G. Candelario, Métodos iterativos fraccionarios para la resolución de ecuaciones y sistemas no lineales: Diseño, Análisis y Estabilidad, Doctoral thesis, Universitat Politècnica de València (2023), http://hdl.handle.net/10251/194270.
  20. J.F. Steffensen, Remarks on iteration, Scandinavian Actuarial Journal 1 (1933) 64–72.
  21. R.L. Graham, D.E. Knuth, O. Patashnik, Concrete Mathematics, Addison-Wesley Longman Publishing, Boston MA (1994).
  22. M. Abramowitz, I.A. Stegun, Handbook of Mathematical Functions, Dover Publications, New York (1970).
  23. H.T. Kung, J.F. Traub, Optimal Order of One-Point and Multipoint Iteration, Journal of the Association for Computing Machinery 21 (1974) 643–651.
  24. A. Cordero, J.R. Torregrosa, Variants of Newton’s method using fifth order quadrature formulas, Applied Mathematics and Computation 190 (2007) 686–698.
  25. A.Á. Magreñán, A new tool to study real dynamics: The convergence plane, Applied Mathematics and Computation 248 (2014) 215–224.
Figure 1. Convergence planes for $f_1(x)$.
Figure 2. Convergence planes for $f_2(x)$.
Figure 3. Convergence planes for $f_3(x)$.
Figure 4. Convergence planes for $f_4(x)$.
Figure 5. Convergence planes for $f_5(x)$.
Figure 6. Convergence planes for $f_6(x)$.
Table 1. Results for $f_1(x)$, with initial estimates $x_0 = 0.58$ for SeCO, and $x_{-1} = 0.42$ and $x_0 = 0.58$ for EeCO.
SeCO Method | EeCO Method
α | $\bar{x}$ | $|f_1(x_{k+1})|$ | $|x_{k+1}-x_k|$ | iter | ρ | $\bar{x}$ | $|f_1(x_{k+1})|$ | $|x_{k+1}-x_k|$ | iter | ρ
1 | $\bar{x}_4$ | 1.07 × 10^{-14} | 2.18 × 10^{-10} | 5 | 2.00 | $\bar{x}_4$ | 9.95 × 10^{-14} | 3.48 × 10^{-10} | 5 | 2.38
0.9 | $\bar{x}_4$ | 2.06 × 10^{-13} | 4.04 × 10^{-9} | 5 | 2.01 | $\bar{x}_4$ | 3.60 × 10^{-10} | 6.53 × 10^{-8} | 5 | 2.47
0.8 | $\bar{x}_4$ | 3.48 × 10^{-11} | 6.31 × 10^{-8} | 5 | 2.03 | $\bar{x}_4$ | 1.50 × 10^{-10} | 3.71 × 10^{-8} | 5 | 2.23
0.7 | $\bar{x}_4$ | 6.78 × 10^{-9} | 7.87 × 10^{-7} | 5 | 2.07 | $\bar{x}_4$ | 6.02 × 10^{-12} | 6.64 × 10^{-9} | 4 | 1.48
0.6 | $\bar{x}_4$ | 2.04 × 10^{-12} | 1.22 × 10^{-8} | 6 | 2.02 | $\bar{x}_4$ | 1.88 × 10^{-9} | 3.22 × 10^{-7} | 4 | 0.90
0.5 | $\bar{x}_4$ | 6.98 × 10^{-9} | 6.37 × 10^{-7} | 6 | 2.07 | $\bar{x}_4$ | 3.39 × 10^{-9} | 4.64 × 10^{-7} | 4 | 0.80
0.4 | $\bar{x}_4$ | 9.73 × 10^{-11} | 6.72 × 10^{-8} | 7 | 2.04 | $\bar{x}_4$ | 5.25 × 10^{-9} | 5.85 × 10^{-7} | 4 | 0.74
0.3 | $\bar{x}_4$ | 5.16 × 10^{-12} | 1.39 × 10^{-8} | 8 | 2.02 | $\bar{x}_4$ | 8.13 × 10^{-9} | 7.30 × 10^{-7} | 4 | 0.70
0.2 | $\bar{x}_4$ | 6.36 × 10^{-13} | 4.01 × 10^{-9} | 9 | 2.02 | $\bar{x}_4$ | 2.24 × 10^{-13} | 2.09 × 10^{-10} | 5 | 2.53
0.1 | $\bar{x}_4$ | 2.16 × 10^{-11} | 2.24 × 10^{-8} | 9 | 2.03 | $\bar{x}_4$ | 6.54 × 10^{-13} | 3.29 × 10^{-10} | 5 | 2.63
Table 2. Results for $f_2(x)$, with initial estimates $x_0 = 2$ for SeCO, and $x_{-1} = 3$ and $x_0 = 2$ for EeCO.
SeCO Method | EeCO Method
α | $\bar{x}$ | $|f_2(x_{k+1})|$ | $|x_{k+1}-x_k|$ | iter | ρ | $\bar{x}$ | $|f_2(x_{k+1})|$ | $|x_{k+1}-x_k|$ | iter | ρ
1 | $\bar{x}_2$ | 6.30 × 10^{-12} | 1.60 × 10^{-6} | 6 | 2.01 | $\bar{x}_2$ | 1.26 × 10^{-10} | 5.21 × 10^{-7} | 6 | 1.62
0.9 | $\bar{x}_1$ | 2.93 × 10^{-14} | 1.20 × 10^{-7} | 5 | 2.00 | $\bar{x}_2$ | 1.59 × 10^{-9} | 2.28 × 10^{-6} | 6 | 1.62
0.8 | - | - | - | - | - | $\bar{x}_2$ | 1.47 × 10^{-13} | 6.57 × 10^{-7} | 7 | 1.63
0.7 | - | - | - | > 500 | - | $\bar{x}_2$ | 6.32 × 10^{-12} | 6.13 × 10^{-8} | 7 | 1.63
0.6 | - | - | - | > 500 | - | $\bar{x}_2$ | 2.04 × 10^{-10} | 4.78 × 10^{-7} | 7 | 1.64
0.5 | - | - | - | > 500 | - | $\bar{x}_2$ | 4.93 × 10^{-9} | 3.11 × 10^{-6} | 7 | 1.65
0.4 | - | - | - | > 500 | - | $\bar{x}_2$ | 3.84 × 10^{-12} | 3.46 × 10^{-8} | 8 | 1.61
0.3 | - | - | - | > 500 | - | $\bar{x}_2$ | 3.46 × 10^{-10} | 5.10 × 10^{-7} | 8 | 1.60
0.2 | - | - | - | > 500 | - | $\bar{x}_2$ | 3.97 × 10^{-13} | 6.88 × 10^{-9} | 9 | 1.64
0.1 | - | - | - | > 500 | - | $\bar{x}_2$ | 9.16 × 10^{-11} | 1.82 × 10^{-7} | 9 | 1.65
Table 3. Results for $f_3(x)$, with initial estimates $x_0 = 1$ for SeCO, and $x_{-1} = 2$ and $x_0 = 1$ for EeCO.
SeCO Method | EeCO Method
α | $\bar{x}$ | $|f_3(x_{k+1})|$ | $|x_{k+1}-x_k|$ | iter | ρ | $\bar{x}$ | $|f_3(x_{k+1})|$ | $|x_{k+1}-x_k|$ | iter | ρ
1 | $\bar{x}_3$ | 7.62 × 10^{-9} | 1.07 × 10^{-4} | 18 | 1.00 | $\bar{x}_3$ | 5.84 × 10^{-9} | 5.77 × 10^{-5} | 19 | 1.00
0.9 | $\bar{x}_3$ | 4.88 × 10^{-9} | 8.53 × 10^{-5} | 20 | 1.00 | $\bar{x}_3$ | 4.65 × 10^{-9} | 4.79 × 10^{-5} | 24 | 1.00
0.8 | $\bar{x}_3$ | 3.27 × 10^{-9} | 6.98 × 10^{-5} | 19 | 1.00 | - | - | - | - | -
0.7 | $\bar{x}_3$ | 6.76 × 10^{-9} | 1.00 × 10^{-4} | 19 | 1.00 | $\bar{x}_2$ | 8.88 × 10^{-9} | 9.43 × 10^{-5} | 23 | 1.00
0.6 | $\bar{x}_3$ | 3.17 × 10^{-9} | 6.88 × 10^{-5} | 12 | 1.00 | $\bar{x}_2$ | 6.28 × 10^{-9} | 7.33 × 10^{-5} | 26 | 1.00
0.5 | $\bar{x}_3$ | 3.03 × 10^{-9} | 6.72 × 10^{-5} | 18 | 1.00 | $\bar{x}_3$ | 7.23 × 10^{-9} | 4.28 × 10^{-5} | 53 | 1.00
0.4 | $\bar{x}_3$ | 3.37 × 10^{-9} | 7.08 × 10^{-5} | 14 | 1.00 | - | - | - | - | -
0.3 | $\bar{x}_3$ | 4.45 × 10^{-9} | 8.14 × 10^{-5} | 9 | 1.00 | $\bar{x}_3$ | 6.04 × 10^{-9} | 3.24 × 10^{-5} | 57 | 1.00
0.2 | $\bar{x}_3$ | 7.05 × 10^{-9} | 1.03 × 10^{-6} | 9 | 1.00 | $\bar{x}_3$ | 9.30 × 10^{-9} | 3.64 × 10^{-5} | 29 | 1.00
0.1 | $\bar{x}_3$ | 7.57 × 10^{-9} | 1.06 × 10^{-4} | 16 | 1.00 | $\bar{x}_3$ | 9.72 × 10^{-9} | 3.35 × 10^{-5} | 55 | 1.01
Table 4. Results for $f_4(x)$, with initial estimates $x_0 = 3$ for SeCO, and $x_{-1} = 2$ and $x_0 = 3$ for EeCO.
SeCO Method | EeCO Method
α | $\bar{x}$ | $|f_4(x_{k+1})|$ | $|x_{k+1}-x_k|$ | iter | ρ | $\bar{x}$ | $|f_4(x_{k+1})|$ | $|x_{k+1}-x_k|$ | iter | ρ
1 | $\bar{x}_1$ | 1.37 × 10^{-14} | 4.18 × 10^{-7} | 4 | 2.04 | - | - | - | - | -
0.9 | $\bar{x}_1$ | 1.67 × 10^{-15} | 1.67 × 10^{-7} | 4 | 2.04 | $\bar{x}_1$ | 4.88 × 10^{-9} | 2.41 × 10^{-5} | 4 | 1.18
0.8 | $\bar{x}_1$ | 1.00 × 10^{-16} | 4.57 × 10^{-8} | 4 | 2.03 | $\bar{x}_1$ | 1.21 × 10^{-9} | 5.24 × 10^{-6} | 5 | 2.75
0.7 | $\bar{x}_1$ | 2.47 × 10^{-9} | 2.35 × 10^{-4} | 3 | 1.56 | $\bar{x}_1$ | 2.53 × 10^{-9} | 1.14 × 10^{-5} | 9 | 1.49
0.6 | $\bar{x}_1$ | 1.75 × 10^{-10} | 8.66 × 10^{-5} | 3 | 1.72 | - | - | - | - | -
0.5 | $\bar{x}_1$ | 7.05 × 10^{-9} | 0.0335 | 2 | - | - | - | - | - | -
0.4 | $\bar{x}_1$ | 3.04 × 10^{-9} | 3.02 × 10^{-4} | 3 | 1.78 | - | - | - | - | -
0.3 | $\bar{x}_1$ | 9.20 × 10^{-13} | 3.60 × 10^{-6} | 4 | 2.01 | - | - | - | - | -
0.2 | $\bar{x}_1$ | 6.99 × 10^{-15} | 2.40 × 10^{-7} | 5 | 2.00 | - | - | - | - | -
0.1 | - | - | - | - | - | - | - | - | - | -
Table 5. Results for $f_5(x)$, with initial estimates $x_0 = 3$ for SeCO, and $x_{-1} = 2$ and $x_0 = 3$ for EeCO.
SeCO Method | EeCO Method
α | $\bar{x}$ | $|f_5(x_{k+1})|$ | $|x_{k+1}-x_k|$ | iter | ρ | $\bar{x}$ | $|f_5(x_{k+1})|$ | $|x_{k+1}-x_k|$ | iter | ρ
1 | $\bar{x}_1$ | 3.84 × 10^{-10} | 1.37 × 10^{-5} | 11 | 2.09 | $\bar{x}_1$ | 8.84 × 10^{-13} | 2.31 × 10^{-8} | 11 | 2.51
0.9 | $\bar{x}_2$ | 7.17 × 10^{-12} | 2.86 × 10^{-7} | 131 | 2.00 | $\bar{x}_1$ | 2.03 × 10^{-9} | 4.21 × 10^{-6} | 9 | 1.06
0.8 | $\bar{x}_1$ | 1.40 × 10^{-9} | 2.22 × 10^{-5} | 15 | 3.01 | $\bar{x}_1$ | 1.31 × 10^{-12} | 4.45 × 10^{-8} | 11 | 1.36
0.7 | $\bar{x}_1$ | 3.05 × 10^{-10} | 9.44 × 10^{-6} | 33 | 2.67 | $\bar{x}_1$ | 2.99 × 10^{-14} | 3.67 × 10^{-9} | 8 | 1.49
0.6 | $\bar{x}_1$ | 1.46 × 10^{-12} | 5.94 × 10^{-7} | 17 | 1.97 | $\bar{x}_1$ | 9.44 × 10^{-11} | 3.79 × 10^{-7} | 7 | 1.89
0.5 | $\bar{x}_1$ | 4.63 × 10^{-11} | 3.03 × 10^{-6} | 7 | 2.04 | $\bar{x}_1$ | 2.73 × 10^{-9} | 1.89 × 10^{-6} | 6 | 3.60
0.4 | $\bar{x}_1$ | 1.22 × 10^{-15} | 5.75 × 10^{-9} | 9 | 1.76 | - | - | - | > 500 | -
0.3 | - | - | - | > 500 | - | - | - | - | > 500 | -
0.2 | - | - | - | > 500 | - | - | - | - | > 500 | -
0.1 | $\bar{x}_1$ | 4.59 × 10^{-11} | 2.00 × 10^{-6} | 7 | 2.04 | - | - | - | > 500 | -
Table 6. Results for $f_6(x)$, with initial estimates $x_0 = 3$ for SeCO, and $x_{-1} = 4$ and $x_0 = 3$ for EeCO.
SeCO Method | EeCO Method
α | $\bar{x}$ | $|f_6(x_{k+1})|$ | $|x_{k+1}-x_k|$ | iter | ρ | $\bar{x}$ | $|f_6(x_{k+1})|$ | $|x_{k+1}-x_k|$ | iter | ρ
1 | $\bar{x}_3$ | 6.38 × 10^{-15} | 5.41 × 10^{-9} | 20 | 2.00 | $\bar{x}_{10}$ | 4.48 × 10^{-12} | 2.41 × 10^{-8} | 17 | 1.20
0.9 | $\bar{x}_{13}$ | 1.57 × 10^{-12} | 7.60 × 10^{-8} | 16 | 2.01 | $\bar{x}_7$ | 1.46 × 10^{-10} | 9.06 × 10^{-7} | 10 | 0.74
0.8 | $\bar{x}_4$ | 1.85 × 10^{-12} | 7.12 × 10^{-8} | 18 | 1.99 | $\bar{x}_2$ | 1.60 × 10^{-11} | 3.61 × 10^{-8} | 11 | 1.31
0.7 | $\bar{x}_1$ | 6.41 × 10^{-12} | 1.39 × 10^{-7} | 28 | 2.02 | $\bar{x}_9$ | 6.89 × 10^{-10} | 4.16 × 10^{-7} | 24 | 1.38
0.6 | $\bar{x}_1$ | 6.33 × 10^{-13} | 3.88 × 10^{-8} | 23 | 2.01 | $\bar{x}_9$ | 6.14 × 10^{-11} | 1.03 × 10^{-7} | 23 | 1.14
0.5 | $\bar{x}_{11}$ | 1.09 × 10^{-9} | 1.17 × 10^{-6} | 21 | 1.95 | $\bar{x}_{10}$ | 2.32 × 10^{-12} | 6.76 × 10^{-9} | 20 | 1.81
0.4 | $\bar{x}_1$ | 8.15 × 10^{-9} | 3.49 × 10^{-6} | 95 | 2.05 | $\bar{x}_5$ | 4.45 × 10^{-13} | 2.23 × 10^{-9} | 14 | 1.98
0.3 | - | - | - | - | - | $\bar{x}_4$ | 8.39 × 10^{-13} | 3.87 × 10^{-9} | 104 | 1.37
0.2 | $\bar{x}_5$ | 7.17 × 10^{-12} | 8.35 × 10^{-8} | 66 | 1.47 | $\bar{x}_{13}$ | 2.58 × 10^{-10} | 8.57 × 10^{-8} | 175 | 1.67
0.1 | $\bar{x}_3$ | 7.92 × 10^{-14} | 4.44 × 10^{-9} | 58 | 2.00 | - | - | - | > 500 | -
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.