Preprint
Article

Two-Step Fifth-Order Efficient Jacobian-Free Iterative Method for Solving Nonlinear Systems

A peer-reviewed article of this preprint also exists.

Submitted: 10 September 2024; Posted: 11 September 2024

Abstract
This article introduces a novel two-step fifth-order Jacobian-free iterative method aimed at efficiently solving systems of nonlinear equations. The method leverages the benefits of Jacobian-free approaches, using divided differences to circumvent the computationally intensive calculation of Jacobian matrices. This adaptation significantly reduces computational overhead and simplifies the implementation process while maintaining high convergence rates. We demonstrate that this method achieves fifth-order convergence under specific parameter settings, with broad applicability across various types of nonlinear systems. The effectiveness of the proposed method is validated through a series of numerical experiments, which confirm its superior performance in terms of accuracy and computational efficiency compared to existing methods.
Keywords: 
Subject: Computer Science and Mathematics  -   Computational Mathematics

1. Introduction

Let $F(x) = 0$ be a nonlinear system of equations, where $F: D \subseteq \mathbb{R}^n \longrightarrow \mathbb{R}^n$ and the functions $f_i$, $i = 1, 2, \ldots, n$, are the coordinate components of $F$, expressed as $F(x) = \left( f_1(x), f_2(x), \ldots, f_n(x) \right)^T$. Solving nonlinear systems is generally challenging, and solutions $\xi$ are typically found by linearizing the problem or employing a fixed-point iteration function $G: D \subseteq \mathbb{R}^n \longrightarrow \mathbb{R}^n$, leading to an iterative fixed-point method. Among the various root-finding techniques for nonlinear systems, Newton's method is the most well-known, which follows the second-order iterative procedure
$$x^{(k+1)} = x^{(k)} - \left[ F'(x^{(k)}) \right]^{-1} F(x^{(k)}), \quad k = 0, 1, \ldots,$$
where $F'(x^{(k)})$ is the Jacobian matrix of $F$ at the $k$-th iterate.
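As a point of reference, one Newton iteration amounts to evaluating $F$ and its Jacobian and solving one linear system. A minimal Python sketch follows (the callables `F` and `J`, the tolerance and the iteration cap are our illustrative assumptions; the experiments reported later in the paper were run in Matlab):

```python
import numpy as np

def newton(F, J, x0, tol=1e-12, max_iter=50):
    # Newton's method: solve F'(x) d = -F(x), then update x <- x + d.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        d = np.linalg.solve(J(x), -F(x))
        x = x + d
        if np.linalg.norm(d) < tol:
            break
    return x
```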
Recently, many researchers have focused on developing iterative methods that outperform Newton's method in terms of both efficiency and order of convergence. Numerous approaches require the computation of $F'$ at different points along each iteration. Nevertheless, calculating the Jacobian poses significant challenges, particularly in high-dimensional problems, where its computation can be costly or even impractical. In some instances, the Jacobian may not exist at all.
To address this issue, alternative approaches have been proposed, such as replacing the Jacobian matrix with a divided difference operator. One of the simplest alternatives is the multidimensional version of Steffensen’s method, attributed to Samanskii [1,2], which substitutes the Jacobian in Newton’s procedure with a first-order operator of divided differences:
$$x^{(k+1)} = x^{(k)} - \left[ x^{(k)}, z^{(k)}; F \right]^{-1} F(x^{(k)}), \quad k = 0, 1, \ldots,$$
where $z^{(k)} = x^{(k)} + F(x^{(k)})$ and $[\cdot, \cdot; F]: \Omega \times \Omega \subseteq \mathbb{R}^n \times \mathbb{R}^n \longrightarrow \mathcal{L}(\mathbb{R}^n)$ is the divided difference operator associated with $F$ [3],
$$[y, x; F](y - x) = F(y) - F(x) \quad \text{for any } x, y \in \Omega.$$
This substitution retains the second order of convergence while bypassing the calculation of $F'$.
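The schemes in this paper only require the defining identity $[y, x; F](y - x) = F(y) - F(x)$. One standard componentwise construction satisfying it exactly (see, e.g., Ortega and Rheinboldt [3]) is sketched below, together with one Steffensen step; the helper names are ours, and $y_j \neq x_j$ is assumed for every component:

```python
import numpy as np

def divided_difference(F, y, x):
    # Componentwise first-order divided difference [y, x; F]:
    # column j compares F at (y_1..y_j, x_{j+1}..x_n) and at
    # (y_1..y_{j-1}, x_j..x_n); the columns telescope, so that
    # [y, x; F](y - x) = F(y) - F(x) holds exactly.
    n = len(x)
    dd = np.empty((n, n))
    for j in range(n):
        z_hi = np.concatenate((y[:j + 1], x[j + 1:]))
        z_lo = np.concatenate((y[:j], x[j:]))
        dd[:, j] = (F(z_hi) - F(z_lo)) / (y[j] - x[j])
    return dd

def steffensen_step(F, x):
    # One step of the multidimensional Steffensen (Samanskii) method.
    z = x + F(x)
    return x - np.linalg.solve(divided_difference(F, z, x), F(x))
```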
Although both the Steffensen and Newton methods exhibit quadratic convergence, it has been shown that Steffensen's scheme is less stable than Newton's method, with stability depending more strongly on the initial guess. This has been thoroughly analyzed in [4,5], where it was found that, for scalar equations $f(x) = 0$, derivative-free iterative methods become more stable when selecting $z = x + \alpha f(x)$ for small real values of $\alpha$.
However, substituting the Jacobian with divided differences can result in a lower convergence order for some iterative methods. For example, the multidimensional version of Ostrowski's fourth-order method (see [6,13]),
$$\begin{aligned} y^{(k)} &= x^{(k)} - \left[ F'(x^{(k)}) \right]^{-1} F(x^{(k)}), \\ x^{(k+1)} &= y^{(k)} - \left( 2\left[ y^{(k)}, x^{(k)}; F \right] - F'(x^{(k)}) \right)^{-1} F(y^{(k)}), \quad k = 0, 1, \ldots, \end{aligned}$$
achieves only cubic convergence if $F'(x^{(k)})$ is replaced by the divided difference $\left[ x^{(k)} + F(x^{(k)}), x^{(k)}; F \right]$, as follows:
$$\begin{aligned} y^{(k)} &= x^{(k)} - \left[ x^{(k)} + F(x^{(k)}), x^{(k)}; F \right]^{-1} F(x^{(k)}), \quad k = 0, 1, \ldots, \\ x^{(k+1)} &= y^{(k)} - \left( 2\left[ y^{(k)}, x^{(k)}; F \right] - \left[ x^{(k)} + F(x^{(k)}), x^{(k)}; F \right] \right)^{-1} F(y^{(k)}). \end{aligned}$$
Other fourth-order methods also lose their convergence order when the Jacobian is replaced with divided differences, such as Jarratt's scheme [7], Sharma's method [8], Montazeri's method [9], Ostrowski's vectorial extension ([13,15]), and Sharma-Arora's fifth-order scheme [10]. In all these cases, the Jacobian-free versions of the methods reduce to lower orders of convergence.
Nevertheless, Amiri et al. [11] demonstrated that using a specialized divided difference operator of the form $[x, x + G(x); F]$, where $G(x) = \left( f_1(x)^m, f_2(x)^m, \ldots, f_n(x)^m \right)^T$, $m \in \mathbb{N}$, as an approximation of the Jacobian matrix may preserve the convergence order. By selecting an appropriate parameter $m$, the original fourth-order convergence of these methods can be maintained.
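In code, this seed is a one-line elementwise power of the residual; a sketch under the same conventions as above (the function name is ours):

```python
def amiri_seed(F, x, m=2):
    # G(x) = (f_1(x)^m, ..., f_n(x)^m)^T, used in [x, x + G(x); F].
    return F(x) ** m
```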
Despite the reduction in performance observed in some Jacobian-free methods, it is important to highlight that there are iterative methods whose iterative expressions can be successfully modified to preserve the order of convergence, even after fully transitioning to Jacobian-free formulations. This is the case of a combination of the Traub-Steffensen family of methods with a second step based on divided difference operators, proposed by Behl et al. in [12]:
$$\begin{aligned} y^{(k)} &= x^{(k)} - \left[ u^{(k)}, x^{(k)}; F \right]^{-1} F(x^{(k)}), \quad k = 0, 1, 2, \ldots, \\ x^{(k+1)} &= y^{(k)} - \left[ y^{(k)}, x^{(k)}; F \right]^{-1} \left[ u^{(k)}, x^{(k)}; F \right] \left[ u^{(k)}, y^{(k)}; F \right]^{-1} F(y^{(k)}), \end{aligned} \tag{1}$$
where $u^{(k)} = x^{(k)} + \beta F(x^{(k)})$, $\beta \in \mathbb{R}$. These iterative schemes have fourth order of convergence for every $\beta \neq 0$; for our purposes we choose $\beta = 1$, and the resulting method is called TraubSte.
Now, we consider several efficient vectorial iterative schemes existing in the literature and transform them into their Jacobian-free versions following the idea of Amiri et al. [11]. In the following sections, we compare these schemes with our proposed procedure in terms of efficiency and numerical performance. The first one is the vectorial extension of Ostrowski's scheme (see [13-15], for instance),
$$\begin{aligned} y^{(k)} &= x^{(k)} - F'(x^{(k)})^{-1} F(x^{(k)}), \\ x^{(k+1)} &= y^{(k)} - \left( 2\left[ y^{(k)}, x^{(k)}; F \right] - F'(x^{(k)}) \right)^{-1} F(y^{(k)}), \end{aligned} \tag{2}$$
whose Jacobian-free version, obtained by substituting the Jacobian matrix by the divided difference operator (with the approach of Amiri et al. [11], $m = 2$), is
$$\begin{aligned} y^{(k)} &= x^{(k)} - \left[ u^{(k)}, x^{(k)}; F \right]^{-1} F(x^{(k)}), \\ x^{(k+1)} &= y^{(k)} - \left( 2\left[ y^{(k)}, x^{(k)}; F \right] - \left[ u^{(k)}, x^{(k)}; F \right] \right)^{-1} F(y^{(k)}), \end{aligned} \tag{3}$$
where $\left[ u^{(k)}, x^{(k)}; F \right] \approx F'(x^{(k)})$ and $u^{(k)} = x^{(k)} + \alpha G(x^{(k)})$ (with $m = 2$), $\alpha \in \mathbb{R}$. We denote this method as Ostro01.
Another fourth-order method, proposed by Sharma in [16] using Jacobian matrices, is
$$\begin{aligned} y^{(k)} &= x^{(k)} - F'(x^{(k)})^{-1} F(x^{(k)}), \\ x^{(k+1)} &= y^{(k)} - \left( 3I - 2 F'(x^{(k)})^{-1} \left[ y^{(k)}, x^{(k)}; F \right] \right) F'(x^{(k)})^{-1} F(y^{(k)}), \end{aligned} \tag{4}$$
to which we apply the same procedure of Amiri et al. performed for (2), getting its Jacobian-free partner
$$\begin{aligned} y^{(k)} &= x^{(k)} - \left[ u^{(k)}, x^{(k)}; F \right]^{-1} F(x^{(k)}), \\ x^{(k+1)} &= y^{(k)} - \left( 3I - 2 \left[ u^{(k)}, x^{(k)}; F \right]^{-1} \left[ y^{(k)}, x^{(k)}; F \right] \right) \left[ u^{(k)}, x^{(k)}; F \right]^{-1} F(y^{(k)}), \end{aligned} \tag{5}$$
which we denote by $M_{4,3}$, where $\left[ u^{(k)}, x^{(k)}; F \right] \approx F'(x^{(k)})$ and $u^{(k)} = x^{(k)} + \alpha G(x^{(k)})$, $m = 2$; it first appeared in [11].
We finish with a sixth-order scheme [16], which is obtained by adding a step to the previous method (4),
$$\begin{aligned} y^{(k)} &= x^{(k)} - F'(x^{(k)})^{-1} F(x^{(k)}), \quad k = 0, 1, \ldots, \\ z^{(k)} &= y^{(k)} - \left( 3I - 2 F'(x^{(k)})^{-1} \left[ y^{(k)}, x^{(k)}; F \right] \right) F'(x^{(k)})^{-1} F(y^{(k)}), \\ x^{(k+1)} &= z^{(k)} - \left( 3I - 2 F'(x^{(k)})^{-1} \left[ y^{(k)}, x^{(k)}; F \right] \right) F'(x^{(k)})^{-1} F(z^{(k)}). \end{aligned} \tag{6}$$
Similarly, its Jacobian-free version was constructed in [11] and denoted by $M_{6,3}$:
$$\begin{aligned} y^{(k)} &= x^{(k)} - \left[ u^{(k)}, x^{(k)}; F \right]^{-1} F(x^{(k)}), \quad k = 0, 1, \ldots, \\ z^{(k)} &= y^{(k)} - \left( 3I - 2 \left[ u^{(k)}, x^{(k)}; F \right]^{-1} \left[ y^{(k)}, x^{(k)}; F \right] \right) \left[ u^{(k)}, x^{(k)}; F \right]^{-1} F(y^{(k)}), \\ x^{(k+1)} &= z^{(k)} - \left( 3I - 2 \left[ u^{(k)}, x^{(k)}; F \right]^{-1} \left[ y^{(k)}, x^{(k)}; F \right] \right) \left[ u^{(k)}, x^{(k)}; F \right]^{-1} F(z^{(k)}), \end{aligned} \tag{7}$$
where again $\left[ u^{(k)}, x^{(k)}; F \right] \approx F'(x^{(k)})$ and $u^{(k)} = x^{(k)} + \alpha G(x^{(k)})$, $m = 2$. It should be noticed that in schemes (3), (5) and (7) we employed a quadratic element-by-element power of $F(x^{(k)})$ in the divided differences. This adjustment was essential for preserving the convergence order of the original methods (see [11]). In our proposal, however, the order of convergence of the original scheme is held while avoiding the computational cost of this element-by-element power.
Therefore, to avoid the calculation of Jacobian matrices, which can be a bottleneck in terms of computational efficiency, especially for large systems, this article presents a two-step fifth-order efficient Jacobian-free iterative method that eliminates the need for direct Jacobian computation. Our approach is grounded in the use of divided differences and the scalar accelerators recently developed in some very efficient schemes (using Jacobian matrices) [17,18]. This not only reduces the computational cost, but also accelerates convergence with simpler iterative expressions. The proposed method's design and theoretical underpinnings are discussed, emphasizing its ability to achieve high-order convergence without the Jacobian calculations typically required.
In Section 2, we develop a new parametric class of Jacobian-free iterative methods using scalar accelerators and demonstrate its theoretical order of convergence, depending on the values of the parameters involved. Subsequently, in Section 3 we carry out an efficiency analysis in which we compare our proposed method with the Jacobian-free versions of others previously cited in the literature. Finally, Section 4 presents practical results of these iterative methods applied to different nonlinear systems of equations.

2. Construction and Convergence of New Jacobian-Free Iterative Method

In 2023, Singh, Sharma and Kumar [18] proposed a family of iterative methods,
$$\begin{aligned} w^{(k)} &= x^{(k)} - F'(x^{(k)})^{-1} F(x^{(k)}), \quad k = 0, 1, 2, \ldots, \\ x^{(k+1)} &= w^{(k)} - \left( p_1 + p_2 \frac{F(w^{(k)})^T F(w^{(k)})}{F(x^{(k)})^T F(x^{(k)})} \right) F'(w^{(k)})^{-1} F(w^{(k)}), \end{aligned} \tag{8}$$
where $\dfrac{F(w^{(k)})^T F(w^{(k)})}{F(x^{(k)})^T F(x^{(k)})}$ is a scalar accelerator that can be interpreted through the squared norms $F(w^{(k)})^T F(w^{(k)}) = \left\| F(w^{(k)}) \right\|^2$ and $F(x^{(k)})^T F(x^{(k)}) = \left\| F(x^{(k)}) \right\|^2$. The real parameters $p_1$ and $p_2$ make the order of convergence of the method five if $p_1 = p_2 = 1$, four if $p_1 = 1$ and $p_2 \neq 1$, and two if $p_1 \neq 1$ and $p_2$ arbitrary. It is known that in many practical applications computing the Jacobian matrix can be very resource-intensive and time-consuming; therefore, Jacobian-free methods are often preferred.
Making a modification to the scheme (8) by replacing the Jacobian matrices by specific divided differences, we obtain the following family:
$$\begin{aligned} y^{(k)} &= x^{(k)} - \left[ u_x^{(k)}, x^{(k)}; F \right]^{-1} F(x^{(k)}), \quad k = 0, 1, 2, \ldots, \\ x^{(k+1)} &= y^{(k)} - \left( p_1 + p_2 \frac{F(y^{(k)})^T F(y^{(k)})}{F(x^{(k)})^T F(x^{(k)})} \right) \left[ u_y^{(k)}, y^{(k)}; F \right]^{-1} F(y^{(k)}), \end{aligned} \tag{9}$$
where $\left[ u_x^{(k)}, x^{(k)}; F \right] \approx F'(x^{(k)})$, $\left[ u_y^{(k)}, y^{(k)}; F \right] \approx F'(y^{(k)})$, with $u_x^{(k)} = x^{(k)} + \alpha F(x^{(k)})$ and $u_y^{(k)} = y^{(k)} + \alpha F(y^{(k)})$. From now on, we will refer to our modified scheme as $MS(p_1, p_2)$.
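A minimal sketch of one $MS(p_1, p_2)$ iteration, reusing the `divided_difference` helper sketched in the introduction (the defaults $p_1 = p_2 = \alpha = 1$ are illustrative choices, and nonzero residuals are assumed in the quotient):

```python
import numpy as np

def ms_step(F, x, p1=1.0, p2=1.0, alpha=1.0):
    # One iteration of scheme (9); p1 = p2 = 1 gives the fifth-order
    # member, p1 = 1, p2 = -1 the fourth-order partner.
    Fx = F(x)
    ux = x + alpha * Fx
    y = x - np.linalg.solve(divided_difference(F, ux, x), Fx)

    Fy = F(y)
    uy = y + alpha * Fy
    acc = p1 + p2 * (Fy @ Fy) / (Fx @ Fx)   # scalar accelerator
    return y - acc * np.linalg.solve(divided_difference(F, uy, y), Fy)
```

Note that the two linear systems have different coefficient matrices, so each iteration performs two factorizations but no Jacobian evaluation.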
The following result shows the error equations arising from method (9) for the possible parameter values, thereby demonstrating that the convergence results of the family (8) hold.
Theorem 1.
Let $F: \Omega \subseteq \mathbb{R}^n \longrightarrow \mathbb{R}^n$ be a sufficiently differentiable function defined in an open convex neighbourhood $\Omega$ of $\xi$, a solution of $F(x) = 0$. Let us also consider an initial guess $x^{(0)}$ close enough to $\xi$, and let $F'(x)$ be continuous and invertible at $\xi$. Then, the parametric class of iterative schemes presented in (9) converges locally for all $\alpha \in \mathbb{R}$, with the order of convergence given by:
(a) Fifth-order convergence if $p_1 = p_2 = 1$, being the corresponding error equation
$$x^{(k+1)} - \xi = \left( M_4 - A_2(M_2M_1 + M_1M_2) - (C_1 + C_2)M_2 - \left( (M_1^2 + M_1)M_2 + M_2M_1 - P^{-1}M_1^2 Q \right) M_1 \right) \left(e^{(k)}\right)^5 + O\!\left( \left(e^{(k)}\right)^6 \right). \tag{10}$$
(b) Fourth-order convergence if $p_1 = 1$, $p_2 \neq 1$, being the corresponding error equation
$$\begin{aligned} x^{(k+1)} - \xi = {} & \left( -A_2M_1^2 + C_1M_1 - p_2M_1^3 \right)\left(e^{(k)}\right)^4 \\ & + \left( M_4 - p_1\left( A_2(M_2M_1 + M_1M_2) - (C_1 + C_2)M_2 \right) - p_2\left( (M_1^2 + M_1)M_2 + M_2M_1 - P^{-1}M_1^2 Q \right) M_1 \right)\left(e^{(k)}\right)^5 + O\!\left( \left(e^{(k)}\right)^6 \right). \end{aligned} \tag{11}$$
(c) Second-order convergence if $p_1 \neq 1$, $p_2$ arbitrary, being the corresponding error equation
$$\begin{aligned} x^{(k+1)} - \xi = {} & M_1(1 - p_1)\left(e^{(k)}\right)^2 + M_2(1 - p_1)\left(e^{(k)}\right)^3 + \left( M_3(1 - p_1) - p_1A_2M_1^2 + p_1C_1M_1 - p_2M_1^3 \right)\left(e^{(k)}\right)^4 \\ & + \left( M_4 - p_1\left( A_2(M_2M_1 + M_1M_2) - (C_1 + C_2)M_2 \right) - p_2\left( (M_1^2 + M_1)M_2 + M_2M_1 - P^{-1}M_1^2 Q \right) M_1 \right)\left(e^{(k)}\right)^5 + O\!\left( \left(e^{(k)}\right)^6 \right), \end{aligned} \tag{12}$$
being $A_j = \frac{1}{j!} F'(\xi)^{-1} F^{(j)}(\xi)$, $j = 2, 3, \ldots$, while $C_i$ and $M_i$, $i = 1, 2, \ldots$, are combinations of the $A_j$ introduced along the proof, and the error at iteration $k$ is denoted by $e^{(k)} = x^{(k)} - \xi$.
Proof. 
Let $e^{(k)} = x^{(k)} - \xi$ be the error at the $k$-th iteration, where $\xi \in \mathbb{R}^n$ is a solution of $F(x) = 0$. Then, expanding $F(x^{(k)})$ and its derivatives in a neighborhood of $\xi$, we have
$$\begin{aligned} F(x^{(k)}) &= F'(\xi)\left[ e^{(k)} + A_2 \left(e^{(k)}\right)^2 + A_3 \left(e^{(k)}\right)^3 + A_4 \left(e^{(k)}\right)^4 + A_5 \left(e^{(k)}\right)^5 + A_6 \left(e^{(k)}\right)^6 \right] + O\!\left( \left(e^{(k)}\right)^7 \right), \\ F'(x^{(k)}) &= F'(\xi)\left[ I + 2A_2 e^{(k)} + 3A_3 \left(e^{(k)}\right)^2 + 4A_4 \left(e^{(k)}\right)^3 + 5A_5 \left(e^{(k)}\right)^4 + 6A_6 \left(e^{(k)}\right)^5 \right] + O\!\left( \left(e^{(k)}\right)^6 \right), \\ F''(x^{(k)}) &= F'(\xi)\left[ 2A_2 + 6A_3 e^{(k)} + 12A_4 \left(e^{(k)}\right)^2 + 20A_5 \left(e^{(k)}\right)^3 + 30A_6 \left(e^{(k)}\right)^4 \right] + O\!\left( \left(e^{(k)}\right)^5 \right), \\ F'''(x^{(k)}) &= F'(\xi)\left[ 6A_3 + 24A_4 e^{(k)} + 60A_5 \left(e^{(k)}\right)^2 + 120A_6 \left(e^{(k)}\right)^3 \right] + O\!\left( \left(e^{(k)}\right)^4 \right), \\ F^{(iv)}(x^{(k)}) &= F'(\xi)\left[ 24A_4 + 120A_5 e^{(k)} + 360A_6 \left(e^{(k)}\right)^2 \right] + O\!\left( \left(e^{(k)}\right)^3 \right), \\ F^{(v)}(x^{(k)}) &= F'(\xi)\left[ 120A_5 + 720A_6 e^{(k)} \right] + O\!\left( \left(e^{(k)}\right)^2 \right), \\ F^{(vi)}(x^{(k)}) &= F'(\xi)\, 720A_6 + O\!\left( e^{(k)} \right). \end{aligned} \tag{13}$$
Then, based on the Genocchi-Hermite formula (see [3]), we have
$$\begin{aligned} \left[ u_x^{(k)}, x^{(k)}; F \right] = {} & F'(x^{(k)}) + \frac{1}{2!} F''(x^{(k)}) \left( u_x^{(k)} - x^{(k)} \right) + \frac{1}{3!} F'''(x^{(k)}) \left( u_x^{(k)} - x^{(k)} \right)^2 + \frac{1}{4!} F^{(iv)}(x^{(k)}) \left( u_x^{(k)} - x^{(k)} \right)^3 \\ & + \frac{1}{5!} F^{(v)}(x^{(k)}) \left( u_x^{(k)} - x^{(k)} \right)^4 + \frac{1}{6!} F^{(vi)}(x^{(k)}) \left( u_x^{(k)} - x^{(k)} \right)^5 + O\!\left( \left( u_x^{(k)} - x^{(k)} \right)^6 \right). \end{aligned} \tag{14}$$
Taking into account that $u_x^{(k)} - x^{(k)} = \alpha F(x^{(k)})$, and performing series expansions up to fifth order, we get
$$\begin{aligned} \alpha^2 \left(F(x^{(k)})\right)^2 = {} & \alpha^2 (F'(\xi))^2 \left(e^{(k)}\right)^2 + \alpha^2 \left[ (F'(\xi))^2 A_2 + F'(\xi)A_2F'(\xi) \right] \left(e^{(k)}\right)^3 \\ & + \alpha^2 \left[ (F'(\xi))^2 A_3 + F'(\xi)A_2F'(\xi)A_2 + F'(\xi)A_3F'(\xi) \right] \left(e^{(k)}\right)^4 \\ & + \alpha^2 \left[ (F'(\xi))^2 A_4 + F'(\xi)A_2F'(\xi)A_3 + F'(\xi)A_3F'(\xi)A_2 + F'(\xi)A_4F'(\xi) \right] \left(e^{(k)}\right)^5 + O\!\left( \left(e^{(k)}\right)^6 \right), \\ \alpha^3 \left(F(x^{(k)})\right)^3 = {} & \alpha^3 (F'(\xi))^3 \left(e^{(k)}\right)^3 + \alpha^3 \left[ (F'(\xi))^3 A_2 + (F'(\xi))^2 A_2F'(\xi) + F'(\xi)A_2(F'(\xi))^2 \right] \left(e^{(k)}\right)^4 \\ & + \alpha^3 \left[ (F'(\xi))^3 A_3 + (F'(\xi))^2 A_2F'(\xi)A_2 + F'(\xi)A_2(F'(\xi))^2 A_2 + (F'(\xi))^2 A_3F'(\xi) \right. \\ & \left. \quad + F'(\xi)A_2F'(\xi)A_2F'(\xi) + F'(\xi)A_3(F'(\xi))^2 \right] \left(e^{(k)}\right)^5 + O\!\left( \left(e^{(k)}\right)^6 \right), \\ \alpha^4 \left(F(x^{(k)})\right)^4 = {} & \alpha^4 (F'(\xi))^4 \left(e^{(k)}\right)^4 + \alpha^4 \left[ (F'(\xi))^4 A_2 + (F'(\xi))^3 A_2F'(\xi) + (F'(\xi))^2 A_2(F'(\xi))^2 + F'(\xi)A_2(F'(\xi))^3 \right] \left(e^{(k)}\right)^5 + O\!\left( \left(e^{(k)}\right)^6 \right), \\ \alpha^5 \left(F(x^{(k)})\right)^5 = {} & \alpha^5 (F'(\xi))^5 \left(e^{(k)}\right)^5 + O\!\left( \left(e^{(k)}\right)^6 \right). \end{aligned} \tag{15}$$
By combining formulas (13) and (15) in the expansion (14), we obtain
$$\left[ u_x^{(k)}, x^{(k)}; F \right] = F'(\xi)\left[ I + B_1 e^{(k)} + B_2 \left(e^{(k)}\right)^2 + B_3 \left(e^{(k)}\right)^3 + B_4 \left(e^{(k)}\right)^4 + B_5 \left(e^{(k)}\right)^5 \right] + O\!\left( \left(e^{(k)}\right)^6 \right), \tag{16}$$
where
$$\begin{aligned} B_1 = {} & 2A_2 + \alpha A_2 F'(\xi), \\ B_2 = {} & 3A_3 + \alpha A_2 F'(\xi)A_2 + 3\alpha A_3 F'(\xi) + \alpha^2 A_3 (F'(\xi))^2, \\ B_3 = {} & 4A_4 + \alpha A_2 F'(\xi)A_3 + 3\alpha A_3 F'(\xi)A_2 + 6\alpha A_4 F'(\xi) + \alpha^2 A_3 (F'(\xi))^2 A_2 + \alpha^2 A_3 F'(\xi)A_2 F'(\xi) \\ & + 4\alpha^2 A_4 (F'(\xi))^2 + \alpha^3 A_4 (F'(\xi))^3, \\ B_4 = {} & 5A_5 + \alpha A_2 F'(\xi)A_4 + 3\alpha A_3 F'(\xi)A_3 + 6\alpha A_4 F'(\xi)A_2 + 10\alpha A_5 F'(\xi) + \alpha^2 A_3 (F'(\xi))^2 A_3 \\ & + \alpha^2 A_3 F'(\xi)A_2 F'(\xi)A_2 + \alpha^2 A_3 F'(\xi)A_3 F'(\xi) + 4\alpha^2 A_4 (F'(\xi))^2 A_2 + 4\alpha^2 A_4 F'(\xi)A_2 F'(\xi) \\ & + 10\alpha^2 A_5 (F'(\xi))^2 + \alpha^3 A_4 (F'(\xi))^3 A_2 + \alpha^3 A_4 (F'(\xi))^2 A_2 F'(\xi) + \alpha^3 A_4 F'(\xi)A_2 (F'(\xi))^2 \\ & + 5\alpha^3 A_5 (F'(\xi))^3 + \alpha^4 A_5 (F'(\xi))^4, \quad \text{and} \\ B_5 = {} & 6A_6 + \alpha A_2 F'(\xi)A_5 + 3\alpha A_3 F'(\xi)A_4 + 6\alpha A_4 F'(\xi)A_3 + 10\alpha A_5 F'(\xi)A_2 + 15\alpha A_6 F'(\xi) \\ & + \alpha^2 A_3 (F'(\xi))^2 A_4 + \alpha^2 A_3 F'(\xi)A_2 F'(\xi)A_3 + \alpha^2 A_3 F'(\xi)A_3 F'(\xi)A_2 + \alpha^2 A_3 F'(\xi)A_4 F'(\xi) \\ & + 4\alpha^2 A_4 (F'(\xi))^2 A_3 + 4\alpha^2 A_4 F'(\xi)A_2 F'(\xi)A_2 + 4\alpha^2 A_4 F'(\xi)A_3 F'(\xi) + 10\alpha^2 A_5 (F'(\xi))^2 A_2 \\ & + 10\alpha^2 A_5 F'(\xi)A_2 F'(\xi) + 20\alpha^2 A_6 (F'(\xi))^2 + \alpha^3 A_4 (F'(\xi))^3 A_3 + \alpha^3 A_4 (F'(\xi))^2 A_2 F'(\xi)A_2 \\ & + \alpha^3 A_4 F'(\xi)A_2 (F'(\xi))^2 A_2 + \alpha^3 A_4 (F'(\xi))^2 A_3 F'(\xi) + \alpha^3 A_4 F'(\xi)A_2 F'(\xi)A_2 F'(\xi) \\ & + \alpha^3 A_4 F'(\xi)A_3 (F'(\xi))^2 + 5\alpha^3 A_5 (F'(\xi))^3 A_2 + 5\alpha^3 A_5 (F'(\xi))^2 A_2 F'(\xi) + 5\alpha^3 A_5 F'(\xi)A_2 (F'(\xi))^2 \\ & + 15\alpha^3 A_6 (F'(\xi))^3 + \alpha^4 A_5 (F'(\xi))^4 A_2 + \alpha^4 A_5 (F'(\xi))^3 A_2 F'(\xi) + \alpha^4 A_5 (F'(\xi))^2 A_2 (F'(\xi))^2 \\ & + \alpha^4 A_5 F'(\xi)A_2 (F'(\xi))^3 + 6\alpha^4 A_6 (F'(\xi))^4 + \alpha^5 A_6 (F'(\xi))^5. \end{aligned}$$
Next, we expand the inverse of the divided difference operator $\left[ u_x^{(k)}, x^{(k)}; F \right]$, forcing it to satisfy $\left[ u_x^{(k)}, x^{(k)}; F \right]^{-1} \left[ u_x^{(k)}, x^{(k)}; F \right] = I$:
$$\left[ u_x^{(k)}, x^{(k)}; F \right]^{-1} = \left[ I + X_2 e^{(k)} + X_3 \left(e^{(k)}\right)^2 + X_4 \left(e^{(k)}\right)^3 + X_5 \left(e^{(k)}\right)^4 + O\!\left( \left(e^{(k)}\right)^5 \right) \right] F'(\xi)^{-1}, \tag{17}$$
where
$$\begin{aligned} X_2 &= -B_1, \\ X_3 &= B_1^2 - B_2, \\ X_4 &= B_1B_2 + B_2B_1 - B_1^3 - B_3, \\ X_5 &= B_1B_3 + B_3B_1 + B_1^4 - B_4 - B_1^2B_2 + B_2^2 - B_1B_2B_1 - B_2B_1^2. \end{aligned} \tag{18}$$
By using the Taylor expansion of $F(x^{(k)})$ defined in (13) and that of $\left[ u_x^{(k)}, x^{(k)}; F \right]^{-1}$ obtained in (17), we get the error equation for the first step:
$$y^{(k)} - \xi = M_1 \left(e^{(k)}\right)^2 + M_2 \left(e^{(k)}\right)^3 + M_3 \left(e^{(k)}\right)^4 + M_4 \left(e^{(k)}\right)^5 + O\!\left( \left(e^{(k)}\right)^6 \right), \tag{19}$$
where
$$\begin{aligned} M_1 &= -(X_2 + A_2), \\ M_2 &= -(A_3 + X_2A_2 + X_3), \\ M_3 &= -(A_4 + X_2A_3 + X_3A_2 + X_4), \\ M_4 &= -(A_5 + X_2A_4 + X_3A_3 + X_4A_2 + X_5). \end{aligned} \tag{20}$$
Now, we find the error equation for the second step. Denoting $e_y^{(k)} = y^{(k)} - \xi$, we have
$$\begin{aligned} F(y^{(k)}) &= F'(\xi)\left[ e_y^{(k)} + A_2 \left(e_y^{(k)}\right)^2 \right] + O\!\left( \left(e_y^{(k)}\right)^3 \right), \\ F'(y^{(k)}) &= F'(\xi)\left[ I + 2A_2 e_y^{(k)} + 3A_3 \left(e_y^{(k)}\right)^2 \right] + O\!\left( \left(e_y^{(k)}\right)^3 \right), \\ F''(y^{(k)}) &= F'(\xi)\left[ 2A_2 + 6A_3 e_y^{(k)} + 12A_4 \left(e_y^{(k)}\right)^2 \right] + O\!\left( \left(e_y^{(k)}\right)^3 \right), \\ F'''(y^{(k)}) &= F'(\xi)\, 6A_3 + O\!\left( e_y^{(k)} \right), \end{aligned} \tag{21}$$
from which arises
$$\begin{aligned} F(y^{(k)}) &= F'(\xi)\left[ M_1 \left(e^{(k)}\right)^2 + M_2 \left(e^{(k)}\right)^3 + \left( M_3 + A_2M_1^2 \right)\left(e^{(k)}\right)^4 + A_2(M_2M_1 + M_1M_2)\left(e^{(k)}\right)^5 \right] + O\!\left( \left(e^{(k)}\right)^6 \right), \\ F'(y^{(k)}) &= F'(\xi)\left[ I + 2A_2M_1 \left(e^{(k)}\right)^2 + 2A_2M_2 \left(e^{(k)}\right)^3 + \left( 2A_2M_3 + 3A_3M_1^2 \right)\left(e^{(k)}\right)^4 + \left( 2A_2M_4 + 3A_3(M_1M_2 + M_2M_1) \right)\left(e^{(k)}\right)^5 \right] + O\!\left( \left(e^{(k)}\right)^6 \right), \\ F''(y^{(k)}) &= F'(\xi)\left[ 2A_2 + 6A_3M_1 \left(e^{(k)}\right)^2 + 6A_3M_2 \left(e^{(k)}\right)^3 \right] + O\!\left( \left(e^{(k)}\right)^4 \right), \\ F'''(y^{(k)}) &= F'(\xi)\, 6A_3 + O\!\left( e^{(k)} \right). \end{aligned} \tag{22}$$
Then, following the process seen in (14), the expansion of the second divided difference operator is given by
$$\begin{aligned} \left[ u_y^{(k)}, y^{(k)}; F \right] = F'(\xi)\Big[ I & + \left( 2A_2M_1 + \alpha A_2F'(\xi)M_1 \right)\left(e^{(k)}\right)^2 + \left( 2A_2M_2 + \alpha A_2F'(\xi)M_2 \right)\left(e^{(k)}\right)^3 \\ & + \left( 2A_2M_3 + 3A_3M_1^2 + \alpha A_2F'(\xi)\left( M_3 + A_2M_1^2 \right) + 3\alpha A_3M_1F'(\xi)M_1 + \alpha^2 A_3F'(\xi)M_1F'(\xi)M_1 \right)\left(e^{(k)}\right)^4 \\ & + \Big( 2A_2M_4 + 3A_3(M_1M_2 + M_2M_1) + \alpha A_2F'(\xi)A_2(M_2M_1 + M_1M_2) + 3\alpha A_3M_1F'(\xi)(M_2 + M_1) \\ & \qquad + \alpha^2 A_3F'(\xi)\left( M_1F'(\xi)M_2 + M_2F'(\xi)M_1 \right) \Big)\left(e^{(k)}\right)^5 \Big] + O\!\left( \left(e^{(k)}\right)^6 \right), \end{aligned} \tag{23}$$
that can be expressed as
$$\left[ u_y^{(k)}, y^{(k)}; F \right] = F'(\xi)\left[ I + C_1 \left(e^{(k)}\right)^2 + C_2 \left(e^{(k)}\right)^3 + C_3 \left(e^{(k)}\right)^4 + C_4 \left(e^{(k)}\right)^5 \right] + O\!\left( \left(e^{(k)}\right)^6 \right), \tag{24}$$
being
$$\begin{aligned} C_1 &= 2A_2M_1 + \alpha A_2F'(\xi)M_1, \\ C_2 &= 2A_2M_2 + \alpha A_2F'(\xi)M_2, \\ C_3 &= 2A_2M_3 + 3A_3M_1^2 + \alpha A_2F'(\xi)\left( M_3 + A_2M_1^2 \right) + 3\alpha A_3M_1F'(\xi)M_1 + \alpha^2 A_3F'(\xi)M_1F'(\xi)M_1, \\ C_4 &= 2A_2M_4 + 3A_3(M_1M_2 + M_2M_1) + \alpha A_2F'(\xi)A_2(M_2M_1 + M_1M_2) + 3\alpha A_3M_1F'(\xi)(M_2 + M_1) \\ & \quad + \alpha^2 A_3F'(\xi)\left( M_1F'(\xi)M_2 + M_2F'(\xi)M_1 \right). \end{aligned} \tag{25}$$
Again, in a similar way as in (17), we get
$$\left[ u_y^{(k)}, y^{(k)}; F \right]^{-1} = \left[ I - C_1 \left(e^{(k)}\right)^2 - C_2 \left(e^{(k)}\right)^3 + O\!\left( \left(e^{(k)}\right)^4 \right) \right] F'(\xi)^{-1}. \tag{26}$$
Now, we proceed to calculate the expansion of $\left[ u_y^{(k)}, y^{(k)}; F \right]^{-1} F(y^{(k)})$, obtaining
$$\left[ u_y^{(k)}, y^{(k)}; F \right]^{-1} F(y^{(k)}) = M_1 \left(e^{(k)}\right)^2 + M_2 \left(e^{(k)}\right)^3 + \left( M_3 + A_2M_1^2 - C_1M_1 \right)\left(e^{(k)}\right)^4 + \left( A_2(M_2M_1 + M_1M_2) - (C_1 + C_2)M_2 \right)\left(e^{(k)}\right)^5 + O\!\left( \left(e^{(k)}\right)^6 \right). \tag{27}$$
According to Theorem 1, proven by Singh, Sharma and Kumar in [18], we have
$$\frac{F(y^{(k)})^T F(y^{(k)})}{F(x^{(k)})^T F(x^{(k)})} = \frac{P \left(e_y^{(k)}\right)^2 + O\!\left( \left(e_y^{(k)}\right)^3 \right)}{P \left(e^{(k)}\right)^2 + Q \left(e^{(k)}\right)^3 + O\!\left( \left(e^{(k)}\right)^4 \right)}, \tag{28}$$
where
$$P = \sum_{i=1}^{n} m_i P_i, \ \text{with } P_i = R_i^T R_i, \qquad Q = \sum_{i=1}^{n} m_i Q_i, \ \text{with } Q_i = R_i^T H_i + H_i^T R_i,$$
and
$$R_i = \left( \frac{\partial f_i}{\partial x_1}, \frac{\partial f_i}{\partial x_2}, \ldots, \frac{\partial f_i}{\partial x_n} \right), \qquad H_i = \frac{1}{2}\left( \frac{\partial^2 f_i}{\partial x_j \partial x_r} \right)_{n \times n}.$$
Using (19), we get
$$\left(e_y^{(k)}\right)^2 = M_1^2 \left(e^{(k)}\right)^4 + (M_1M_2 + M_2M_1)\left(e^{(k)}\right)^5 + O\!\left( \left(e^{(k)}\right)^6 \right). \tag{29}$$
After substituting (29) in (28) and performing the quotient, we obtain:
$$\frac{F(y^{(k)})^T F(y^{(k)})}{F(x^{(k)})^T F(x^{(k)})} = M_1^2 \left(e^{(k)}\right)^2 + \left( M_1M_2 + M_2M_1 - P^{-1}M_1^2 Q \right)\left(e^{(k)}\right)^3 + O\!\left( \left(e^{(k)}\right)^4 \right). \tag{30}$$
Finally, combining (19), (27) and the last result obtained in (30), we have
$$\begin{aligned} x^{(k+1)} - \xi = {} & M_1(1 - p_1)\left(e^{(k)}\right)^2 + M_2(1 - p_1)\left(e^{(k)}\right)^3 + \left( M_3(1 - p_1) - p_1A_2M_1^2 + p_1C_1M_1 - p_2M_1^3 \right)\left(e^{(k)}\right)^4 \\ & + \left( M_4 - p_1\left( A_2(M_2M_1 + M_1M_2) - (C_1 + C_2)M_2 \right) - p_2\left( (M_1^2 + M_1)M_2 + M_2M_1 - P^{-1}M_1^2 Q \right) M_1 \right)\left(e^{(k)}\right)^5 \\ & + O\!\left( \left(e^{(k)}\right)^6 \right). \end{aligned} \tag{31}$$
From this last result, it is easy to observe that when $p_1$ is different from one ($p_1 \neq 1$) the iterative method has order of convergence equal to two, since the term in $\left(e^{(k)}\right)^2$ does not cancel out. However, when $p_1 = p_2 = 1$, both the terms in $\left(e^{(k)}\right)^2$ and $\left(e^{(k)}\right)^3$ cancel out, while the coefficient of $\left(e^{(k)}\right)^4$ is
$$-A_2M_1^2 + C_1M_1 - M_1^3. \tag{32}$$
Let us remember that $C_1 = 2A_2M_1 + \alpha A_2F'(\xi)M_1$, so that (32) becomes
$$A_2M_1^2 + \alpha A_2F'(\xi)M_1^2 - M_1^3;$$
but since $M_1 = -(X_2 + A_2)$ and $X_2 = -B_1$, then $M_1 = B_1 - A_2$, with $B_1 = 2A_2 + \alpha A_2F'(\xi)$, that is, $M_1 = A_2 + \alpha A_2F'(\xi)$. Replacing $M_1$ in equation (32), we get
$$\left( A_2 + \alpha A_2F'(\xi) \right)M_1^2 - M_1^3 = M_1^3 - M_1^3 = 0,$$
resulting in the error equation
$$x^{(k+1)} - \xi = \left( M_4 - A_2(M_2M_1 + M_1M_2) - (C_1 + C_2)M_2 - \left( (M_1^2 + M_1)M_2 + M_2M_1 - P^{-1}M_1^2 Q \right) M_1 \right)\left(e^{(k)}\right)^5 + O\!\left( \left(e^{(k)}\right)^6 \right).$$
From the above, it is clear that if $p_1 = 1$ but $p_2 \neq 1$, only order four is reached. □

3. Efficiency Analysis

We have demonstrated the order of convergence of the proposed class of iterative methods for the different values of the parameters $p_1$, $p_2$. In this section, we perform a computational effort study, considering the cost of solving the linear systems involved in each iteration together with the remaining computational cost (functional evaluations, number of products/quotients, ...), not only for the proposed class but also for the Jacobian-free schemes presented in the introductory section.
In order to achieve this aim, recall that the number of operations (products/quotients) needed for solving an $n \times n$ linear system is
$$\frac{1}{3}n^3 + n^2 - \frac{1}{3}n.$$
However, if other linear systems with the same coefficient matrix are solved, the cost increases by only $n^2$ operations for each of them. In addition, each divided difference requires $n^2$ quotients; each functional evaluation of $F$ at a point costs $n$ scalar evaluations; each evaluation of a divided difference operator involves $n^2 - n$ scalar evaluations; and a matrix-vector product needs $n^2$ products/quotients. Based on the above, the computational cost of each method appears in Table 1. From family (9), which we call $MS(p_1, p_2)$, we consider the fifth-order member $p_1 = p_2 = 1$ and its fourth-order partner $p_1 = 1$, $p_2 = -1$. They have the same computational cost, which is reflected, along with that of the other schemes, in Table 1.
The results presented in Table 1 show that the method with the highest computational cost is TraubSte, while those with intermediate costs are Ostro01 and $MS(p_1, p_2)$, the latter being slightly better. The ones that offer the lowest cost are $M_{4,3}$ and $M_{6,3}$, although the latter has sixth-order convergence, which is a factor to consider when obtaining the efficiency index.
In order to show more clearly how the computational cost influences the efficiency index $I = p^{1/C}$, where $p$ is the convergence order of the corresponding scheme and $C$ its computational cost (see [20]), we present Figure 1 and Figure 2 for different sizes of the nonlinear system to be solved.
In Figure 1, we observe that for systems with dimensions $n = 2, 3, \ldots, 10$, the proposed class of vectorial iterative methods (9) shows better computational efficiency than the other schemes, for both members of the family. On the other hand, as the system grows to sizes $n = 10, 11, \ldots, 50$, it becomes apparent that methods $M_{4,3}$ and $M_{6,3}$ yield better results. Let us notice that for sizes $n < 11$ our method has better efficiency than $M_{6,3}$, and for sizes $n < 12$ it has better efficiency than $M_{4,3}$ (see Figure 2).
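The rankings visible in Figures 1 and 2 can be reproduced directly from Table 1; a small sketch evaluating $I = p^{1/C}$ (the scheme labels and printed sizes are our choices):

```python
# Orders p and complexities C(n), both taken from Table 1.
schemes = {
    "MS(1,1)":  (5, lambda n: 2 * n**3 / 3 + 6 * n**2 - 2 * n / 3),
    "MS(1,-1)": (4, lambda n: 2 * n**3 / 3 + 6 * n**2 - 2 * n / 3),
    "TraubSte": (4, lambda n: n**3 + 10 * n**2 - 2 * n),
    "Ostro01":  (4, lambda n: 2 * n**3 / 3 + 6 * n**2 + n / 3),
    "M43":      (4, lambda n: n**3 / 3 + 8 * n**2 + 2 * n / 3),
    "M63":      (6, lambda n: n**3 / 3 + 11 * n**2 + 5 * n / 3),
}

for n in (5, 10, 12, 25, 50):
    index = {name: p ** (1.0 / C(n)) for name, (p, C) in schemes.items()}
    best = max(index, key=index.get)
    print(f"n = {n:2d}: best efficiency index -> {best}")
```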
In the next section, we check the theoretical convergence order of our proposed method and assess the efficiency of different schemes on nonlinear systems of various sizes.

4. Numerical Results

In this section, we test numerically that the theoretical order holds for practical purposes in the proposed Jacobian-free class (9). Moreover, we compare it with the methods appearing in the efficiency section to show their accuracy and computational performance.
Below we show some nonlinear problems, including one whose related nonlinear function is not differentiable, in order to test the applicability of the methods on different kinds of problems; a small double-precision driver for the first of them is sketched after the list.
• We consider $F_1(x) = \left( f_1^1(x), f_2^1(x), \ldots, f_{25}^1(x) \right)^T$, where
$$f_i^1(x) = x_i^2 x_{i+1} - 1, \quad i = 1, 2, \ldots, 24, \qquad f_{25}^1(x) = x_{25}^2 x_1 - 1,$$
whose solution is $\bar{\xi} = (1, 1, 1, \ldots, 1)$.
• We also have $F_2(x) = \left( f_1^2(x), f_2^2(x), \ldots, f_8^2(x) \right)^T$, where
$$f_i^2(x) = x_i - \cos\left( 2x_i - \sum_{k=1}^{8} x_k \right), \quad i = 1, 2, \ldots, 8,$$
being $\bar{\xi} \approx (0.5149, 0.5149, \ldots, 0.5149)$ its solution.
• We also consider $F_3(x) = \left( f_1^3(x), f_2^3(x), \ldots, f_5^3(x) \right)^T$, where
$$f_i^3(x) = \sum_{k=1}^{5} x_k - x_i - e^{-x_i}, \quad i = 1, 2, 3, 4, 5,$$
where the solution is $\bar{\xi} \approx (0.20389, 0.20389, 0.20389, \ldots, 0.20389)$.
• We test also $F_4(x) = \left( f_1^4(x), f_2^4(x), \ldots, f_5^4(x) \right)^T$, where
$$f_i^4(x) = \sum_{k=1}^{5} x_k - x_i - e^{-x_i} x_i, \quad i = 1, 2, 3, 4, 5,$$
whose solution is $\bar{\xi} = (0, 0, 0, \ldots, 0)$.
• $F_5(x) = \left( f_1^5(x), f_2^5(x), \ldots, f_{10}^5(x) \right)^T$, where
$$f_i^5(x) = x_i + 1 - 2\log\left( 1 - x_i + \sum_{k=1}^{10} x_k \right), \quad i = 1, 2, 3, \ldots, 10,$$
being its solution $\bar{\xi} \approx (7.4370, 7.4370, \ldots, 7.4370)$.
• $F_6(x) = \left( f_1^6(x), f_2^6(x), \ldots, f_5^6(x) \right)^T$, where
$$f_i^6(x) = x_i + 1.5 \sin\left( \sum_{k=1}^{5} x_k - x_i \right), \quad i = 1, 2, 3, 4, 5,$$
and there exist two solutions, $\bar{\xi}_1 \approx (0.3004, 0.3004, \ldots, 0.3004)$ and $\bar{\xi}_2 \approx (0.4579, 0.4579, \ldots, 0.4579)$.
• $F_7(x) = \left( f_1^7(x), f_2^7(x) \right)^T$, where
$$f_i^7(x) = \arctan(x_i) + 1 - 2\left( \sum_{k=1}^{2} x_k^2 - x_i^2 \right) = 0, \quad i = 1, 2,$$
being $\bar{\xi} \approx (0.936, 0.936)$.
• We also test $F_8(x) = \left( f_1^8(x), f_2^8(x) \right)^T$, where
$$f_1^8(x) = \log(|x_1|) + |x_2|, \qquad f_2^8(x) = e^{x_1} + x_2 - 1,$$
with solutions $\bar{\xi}_1 \approx (-0.6275, 0.4661)$ and $\bar{\xi}_2 \approx (0.5122, -0.669)$.
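As an illustration, the following sketch encodes $F_1$ and iterates the $MS(1,1)$ step from Section 2 (plain double precision, so the attainable errors are far looser than in the 8000-digit experiments reported below; `ms_step` and `divided_difference` are the sketches given earlier):

```python
import numpy as np

def F1(x):
    # f_i = x_i^2 * x_{i+1} - 1, with the cyclic closing equation
    # f_25 = x_25^2 * x_1 - 1 handled at once by np.roll.
    return x**2 * np.roll(x, -1) - 1.0

x = np.full(25, 1.5)          # seed used in Table 2
for _ in range(10):
    x_new = ms_step(F1, x, p1=1.0, p2=1.0, alpha=1.0)
    converged = np.linalg.norm(x_new - x) < 1e-13
    x = x_new
    if converged:
        break
print(np.allclose(x, 1.0))    # expected: convergence to (1, ..., 1)
```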
Numerical results have been obtained with Matlab R2022b, using 8000 digits in variable precision arithmetic, on an AMD A12-9720P processor (Radeon R7, 12 compute cores 4C+8G) with 8 GB of RAM at 2.70 GHz. These results are shown in Tables 2-13, including the following information, where all the norms appearing are Euclidean:
• $k$: number of iterations needed ("-" appears when the scheme does not converge or it needs more iterations than the maximum allowed).
• $\bar{\xi}$: obtained solution.
• Cpu-time: average time, in seconds, required by the iterative method to reach the solution of the problem when executed ten times.
• $\rho$: approximated computational order of convergence (ACOC), first introduced in [19] and implemented in the sketch after the stopping criteria below,
$$\rho = \frac{\ln\left( \left\| x^{(k+1)} - x^{(k)} \right\| / \left\| x^{(k)} - x^{(k-1)} \right\| \right)}{\ln\left( \left\| x^{(k)} - x^{(k-1)} \right\| / \left\| x^{(k-1)} - x^{(k-2)} \right\| \right)}, \quad k = 2, 3, \ldots$$
(if $\rho$ is not stable, then "-" appears in the table).
• $\epsilon_{aprox} = \left\| x^{(k+1)} - x^{(k)} \right\|$.
• $\epsilon_f = \left\| F(x^{(k+1)}) \right\|$. If $\epsilon_f$ or $\epsilon_{aprox}$ are very far from zero, or we get infinity or NaN, then "-" appears in the table.
Regarding the stopping criterion, the iterative process ends when one of the following conditions is fulfilled (a driver implementing them, together with the ACOC, is sketched after the list):
(i) $\left\| F(x^{(k+1)}) \right\| < 10^{-100}$,
(ii) $\left\| x^{(k+1)} - x^{(k)} \right\| < 10^{-100}$,
(iii) 50 iterations are reached.
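A compact driver implementing criteria (i)-(iii) together with the ACOC estimate could look as follows; in double precision the $10^{-100}$ tolerance must be relaxed (e.g. to $10^{-12}$), and the function names are ours:

```python
import numpy as np

def acoc(xs):
    # ACOC from the last four iterates x^(k-2), ..., x^(k+1).
    d1 = np.linalg.norm(xs[-1] - xs[-2])
    d2 = np.linalg.norm(xs[-2] - xs[-3])
    d3 = np.linalg.norm(xs[-3] - xs[-4])
    return np.log(d1 / d2) / np.log(d2 / d3)

def solve(F, x0, step, tol=1e-12, max_iter=50):
    # Stop when (i) ||F(x)|| < tol, (ii) ||x_{k+1} - x_k|| < tol,
    # or (iii) max_iter iterations are reached.
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(max_iter):
        xs.append(step(F, xs[-1]))
        if (np.linalg.norm(F(xs[-1])) < tol
                or np.linalg.norm(xs[-1] - xs[-2]) < tol):
            break
    rho = acoc(xs) if len(xs) >= 4 else float("nan")
    return xs[-1], rho

# e.g.: sol, rho = solve(F1, np.full(25, 1.5), ms_step)
```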
In Table 2, the Ostro01 method reached the maximum number of iterations without converging to the solution, while the most notable scheme is $M_{6,3}$ in almost all aspects, such as errors, iterations and computational time. On the other hand, although $MS(1,1)$ exhibits a lower computational time than $M_{4,3}$, the latter takes an additional iteration in a relatively similar time and attains better errors, suggesting that it might even be superior in terms of efficiency.
In Table 3, we observe that our proposed schemes yield the best computational times, with the $MS(1,1)$ method standing out for having the smallest error norms among all schemes. The $M_{4,3}$ method, while showing good overall performance in terms of errors, is computationally more expensive: although it needs one iteration fewer to reach the solution of the system, it takes slightly more time than the $MS(p_1, p_2)$ methods, indicating that each of its iterations is costlier to generate.
The results in Tables 4 and 5 are very balanced, with the $MS(1,1)$ method showing the best errors, while the shortest computational time is achieved by the $M_{6,3}$ method. However, the latter takes more effort to generate each iteration because, despite needing one iteration fewer than the other iterative methods, its execution time is very similar.
For Table 6, we note that the best computational times are obtained by the $MS(p_1, p_2)$ methods. On the other hand, the best errors are obtained by the Ostro01 scheme, which has a competitive computational time considering that it requires four iterations, similar to TraubSte. The latter has shown to be the one requiring the most time to converge.
In Table 7, the best errors were obtained by $MS(1,-1)$ and TraubSte, which converged to different solutions, while in terms of performance, $MS(1,1)$ and Ostro01 appeared to be better.
In Table 8, the Ostro01 method could not converge to the solution because it reached the maximum number of iterations, whereas the $MS(p_1, p_2)$ and $M_{4,3}$ methods showed the best overall behavior.
In Table 9, the $MS(1,1)$ method stands out with the lowest computational time, being more efficient than the others despite requiring a similar number of iterations, and showing a convergence order of 4.20, close to that of the other methods. The TraubSte method is less efficient, with a computational time of 3.1954 seconds and larger errors. Ostro01 is competitive, falling only slightly behind in computational time. The methods $M_{4,3}$ and $M_{6,3}$ are the most expensive in terms of computational time, requiring 7 and 6 iterations, respectively, with the highest time recorded for $M_{4,3}$. Despite their higher orders of convergence, both methods show lower overall efficiency compared to the $MS$ and Ostro01 methods.
In Table 10, method $MS(1,1)$ stands out as the one with the lowest computational time, requiring fewer iterations compared to the other methods and showing a convergence order of 4.05. The TraubSte method, despite having the same convergence order as $MS(1,1)$, shows a slightly higher computational time and larger errors in the difference between iterates. Ostro01, which requires more iterations, falls only slightly behind in computational time but remains competitive overall. Methods $M_{4,3}$ and $M_{6,3}$ do not converge to the solution, as a division by zero occurred when calculating the divided differences.
In Table 11, method $MS(1,1)$ stands out as the one with the lowest computational time, requiring fewer iterations compared to the other methods and showing a convergence order of 4.28. The TraubSte scheme, although it has the same convergence order as $MS(1,1)$, shows a significantly higher computational time but better results in terms of errors. $MS(1,-1)$, while requiring more iterations than $MS(1,1)$, still maintains a competitive time compared to TraubSte. Division by zero is the reason why the other methods could not reach the solution.
In Table 12, it is observed that only the members of our proposed class converge, and they do so to different solutions. $MS(1,1)$ stands out as the most efficient in terms of time and iterations, while $MS(1,-1)$ yields the best errors. The Ostro01 method does not converge because it exceeded the maximum number of iterations, while the other methods could not reach the solution due to a division by zero during the computation of the divided differences.
The observations for Table 13 are the same as those for the previous one, except that now the Ostro01 method does not converge to the solution due to a division by zero, just like the other non-converging methods.

5. Conclusions

The fifth-order Jacobian-free iterative method with scalar accelerators developed in this study (along with its fourth-order partners) has proven to be an efficient tool for solving nonlinear systems of equations. It preserves the convergence order of the original scheme, despite the elimination of Jacobian matrix calculations, with low computational cost. The substitution of these matrices with divided differences not only reduces computational complexity but also facilitates implementation, maintaining high precision and efficiency. The numerical results highlight its superiority in terms of performance compared to conventional methods, especially for not-so-large systems.

Author Contributions

Conceptualization, J.R.T. and A.C.; software, A.R.-C.; methodology, J.G.M.; validation, J.G.M., A.C. and J.R.T.; investigation, A.C.; formal analysis, J.R.; resources, A.R.-C.; writing—original draft preparation, A.R.-C.; visualization, J.G.M.; writing—review and editing, J.R.T. and A.C.; supervision, J.R.T. and A.C. All the authors have read and agreed to the published version of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Samanskii, V.: On a modification of the Newton method. Ukr. Math. J. 19, 133–138 (1967).
  2. Steffensen, J.F.: Remarks on iteration. Skand. Aktuarietidskr. 1, 64–72 (1933). [CrossRef]
  3. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: Cambridge, MA, USA, 1970. [CrossRef]
  4. Chicharro, F.I., Cordero, A., Gutiérrez, J.M., Torregrosa, J.R.: Complex dynamics of derivative-free methods for nonlinear equations. Appl. Math. Comput. 219, 7023–7035 (2013). [CrossRef]
  5. Cordero, A., Soleymani, F., Torregrosa, J.R., Shateyi, S.: Basins of attraction for various Steffensen-type methods. Appl. Math. 2014, Article ID 539707 1–17 (2014). [CrossRef]
  6. Cordero, A., García-Maimó, J., Torregrosa, J.R., Vassileva, M.P.: Solving nonlinear problems by Ostrowski–Chun type parametric families. Math. Chem. 53, 430–449 (2015). [CrossRef]
  7. Jarratt, P.: Some fourth order multipoint iterative methods for solving equations. Math. Comp. 20, 434–437 (1966). [CrossRef]
  8. Sharma, J.R., Arora, H.: On efficient weighted-Newton methods for solving systems of nonlinear equations. Appl. Math. Comput. 222, 497–506 (2013). [CrossRef]
  9. Montazeri, H., Soleymani, F., Shateyi, S., Motsa, S.S.: On a new method for computing the numerical solution of systems of nonlinear equations. Appl. Math. 2012 ID. 751975, 1–15 (2012). [CrossRef]
  10. Sharma, J.R., Arora, H.: Efficient derivative-free numerical methods for solving systems of nonlinear equations. Comp. Appl. Math. 35, 269–284 (2016). [CrossRef]
  11. Amiri, A.R., Cordero, A., Darvishi, M.T., Torregrosa, J.R.: Preserving the order of convergence: Low-complexity Jacobian-free iterative schemes for solving nonlinear systems. Comput. Appl. Math. 337, 87–97 (2018). [CrossRef]
  12. Behl, R., Cordero, A., Torregrosa, J. R., Bhalla, S., A New High-Order Jacobian-Free Iterative Method with Memory for Solving Nonlinear Systems. Mathematics 9(17), 2122 (2021). [CrossRef]
  13. Ostrowski, A.M., Solutions of Equations and Systems of Equations, Academic Press, New York, London, 1966.
  14. Abad, M.F., Cordero, A., Torregrosa, J.R., A family of seventh-order schemes for solving nonlinear systems, Bull. Math. Soc. Sci. Math. Roumanie 57(105)(2), 133–145 (2014).
  15. Cordero, A., Maimó, J.G., Torregrosa, J.R., Vassileva, M.P.: Solving nonlinear problems by Ostrowski-Chun type parametric families. Math. Chem. 53, 430–449 (2015). [CrossRef]
  16. Sharma, J.R., Arora, H., On efficient weighted-Newton methods for solving systems of nonlinear equations, Applied Mathematics and Computation 222, 497–506 (2013). [CrossRef]
  17. Cordero, A., Rojas-Hiciano, R.V., Torregrosa, J.R., Vassileva, M.P.: A highly efficient class of optimal fourth-order methods for solving nonlinear systems. Numerical Algorithms 95, 1879–1904 (2024). [CrossRef]
  18. Singh, H., Sharma, J. R., Kumar, S.: A simple yet efficient two-step fifth-order weighted-Newton method for nonlinear models. Numerical Algorithms 93, 203–225 (2023). [CrossRef]
  19. Cordero, A., Torregrosa, J.R., Variants of Newton’s method using fifth order quadrature formulas. Appl. Math. Comput. 190, 686–698 (2007). [CrossRef]
  20. Traub, J.F.: Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964.
Figure 1. $I$ indices for $MS(p_1, p_2)$ and comparison methods.
Figure 2. $I$ indices for $MS(p_1, p_2)$ and comparison schemes.
Table 1. Computational effort of new and comparison schemes.

Method | Complexity $C$
$MS(p_1, p_2)$ | $\frac{2}{3}n^3 + 6n^2 - \frac{2}{3}n$
TraubSte | $n^3 + 10n^2 - 2n$
Ostro01 | $\frac{2}{3}n^3 + 6n^2 + \frac{1}{3}n$
$M_{4,3}$ | $\frac{1}{3}n^3 + 8n^2 + \frac{2}{3}n$
$M_{6,3}$ | $\frac{1}{3}n^3 + 11n^2 + \frac{5}{3}n$
Table 2. Results for function $F_1$, using as seed $x^{(0)} = (1.5, 1.5, 1.5, \ldots, 1.5)$.

Iterative method | $k$ | $\rho$ | $\epsilon_{aprox}$ | $\epsilon_f$ | $\bar{\xi}$ | Cpu-time
$MS(1,-1)$ | 7 | 4.00 | 1.108e-51 | 4.624e-204 | $\bar{\xi}_1$ | 191.7242
$MS(1,1)$ | 5 | 4.97 | 4.904e-24 | 3.995e-117 | $\bar{\xi}_1$ | 143.1027
TraubSte | 5 | 4.00 | 1.241e-35 | 1.517e-140 | $\bar{\xi}_1$ | 227.8747
Ostro01 | - | - | - | - | - | -
$M_{4,3}$ | 6 | 4.00 | 7.338e-40 | 3.015e-158 | $\bar{\xi}_1$ | 163.9808
$M_{6,3}$ | 5 | 5.96 | 1.336e-47 | 7.876e-284 | $\bar{\xi}_1$ | 142.6446
Table 3. Numerical results for $F_2$, $x^{(0)} = (1, 1, 1, \ldots, 1)$.

Iterative method | $k$ | $\rho$ | $\epsilon_{aprox}$ | $\epsilon_f$ | $\bar{\xi}$ | Cpu-time
$MS(1,-1)$ | 8 | 3.99 | 2.794e-29 | 4.092e-115 | $\bar{\xi}_1$ | 73.3126
$MS(1,1)$ | 8 | 5.00 | 7.534e-74 | 2.162e-366 | $\bar{\xi}_1$ | 72.6143
TraubSte | 16 | 4.00 | 9.389e-68 | 3.459e-269 | $\bar{\xi}_1$ | 317.9304
Ostro01 | 15 | 4.00 | 6.341e-70 | 1.641e-278 | $\bar{\xi}_1$ | 139.3808
$M_{4,3}$ | 7 | 4.00 | 8.510e-87 | 3.949e-346 | $\bar{\xi}_1$ | 76.9114
$M_{6,3}$ | 6 | 6.02 | 4.223e-47 | 8.063e-281 | $\bar{\xi}_1$ | 79.7898
Table 4. Numerical results for $F_3$, $x^{(0)} = (0.5, 0.5, 0.5, 0.5, 0.5)$.

Iterative method | $k$ | $\rho$ | $\epsilon_{aprox}$ | $\epsilon_f$ | $\bar{\xi}$ | Cpu-time
$MS(1,-1)$ | 4 | 4.00 | 1.305e-55 | 2.980e-221 | $\bar{\xi}_1$ | 12.8166
$MS(1,1)$ | 4 | 5.00 | 4.997e-101 | 1.143e-503 | $\bar{\xi}_1$ | 12.6105
TraubSte | 4 | 4.00 | 9.461e-68 | 1.372e-270 | $\bar{\xi}_1$ | 18.9457
Ostro01 | 4 | 4.00 | 1.420e-47 | 2.882e-189 | $\bar{\xi}_1$ | 12.4768
$M_{4,3}$ | 4 | 4.00 | 3.440e-44 | 1.007e-175 | $\bar{\xi}_1$ | 12.8707
$M_{6,3}$ | 3 | 6.07 | 1.482e-21 | 3.012e-127 | $\bar{\xi}_1$ | 10.5536
Table 5. Numerical results for $F_4$, $x^{(0)} = (0.5, 0.5, 0.5, 0.5, 0.5)$.

Iterative method | $k$ | $\rho$ | $\epsilon_{aprox}$ | $\epsilon_f$ | $\bar{\xi}$ | Cpu-time
$MS(1,-1)$ | 4 | 3.99 | 7.838e-31 | 4.801e-121 | $\bar{\xi}_1$ | 14.6365
$MS(1,1)$ | 4 | 5.00 | 2.864e-52 | 2.514e-258 | $\bar{\xi}_1$ | 14.6532
TraubSte | 4 | 4.00 | 8.656e-34 | 3.124e-133 | $\bar{\xi}_1$ | 21.7138
Ostro01 | 4 | 4.00 | 2.268e-35 | 6.443e-140 | $\bar{\xi}_1$ | 14.0140
$M_{4,3}$ | 4 | 4.00 | 2.594e-47 | 9.227e-188 | $\bar{\xi}_1$ | 14.8093
$M_{6,3}$ | 3 | 5.32 | 6.593e-26 | 7.254e-153 | $\bar{\xi}_1$ | 12.0667
Table 6. Numerical results for $F_5$, $x^{(0)} = (7, 7, 7, \ldots, 7)$.

Iterative method | $k$ | $\rho$ | $\epsilon_{aprox}$ | $\epsilon_f$ | $\bar{\xi}$ | Cpu-time
$MS(1,-1)$ | 3 | 4.03 | 1.343e-24 | 1.076e-101 | $\bar{\xi}_1$ | 58.6649
$MS(1,1)$ | 3 | 5.03 | 5.715e-36 | 1.098e-183 | $\bar{\xi}_1$ | 58.6010
TraubSte | 4 | 4.00 | 2.252e-97 | 1.397e-392 | $\bar{\xi}_1$ | 138.3238
Ostro01 | 4 | 4.00 | 1.317e-99 | 1.705e-401 | $\bar{\xi}_1$ | 76.6871
$M_{4,3}$ | 3 | 4.00 | 1.490e-24 | 2.175e-101 | $\bar{\xi}_1$ | 64.3101
$M_{6,3}$ | 3 | 6.01 | 1.565e-53 | 4.782e-326 | $\bar{\xi}_1$ | 69.4223
Table 7. Numerical results for $F_6$, $x^{(0)} = (0.75, 0.75, 0.75, 0.75, 0.75)$.

Iterative method | $k$ | $\rho$ | $\epsilon_{aprox}$ | $\epsilon_f$ | $\bar{\xi}$ | Cpu-time
$MS(1,-1)$ | 6 | 4.00 | 5.425e-88 | 5.270e-349 | $\bar{\xi}_1$ | 20.2517
$MS(1,1)$ | 5 | 4.97 | 5.922e-39 | 1.965e-190 | $\bar{\xi}_1$ | 16.9191
TraubSte | 20 | 4.00 | 9.536e-99 | 4.618e-391 | $\bar{\xi}_2$ | 154.5691
Ostro01 | 5 | 4.00 | 1.050e-33 | 2.164e-132 | $\bar{\xi}_1$ | 16.5869
$M_{4,3}$ | 10 | 4.01 | 3.127e-35 | 1.500e-138 | $\bar{\xi}_1$ | 51.2143
$M_{6,3}$ | 17 | 5.90 | 6.480e-23 | 2.076e-132 | $\bar{\xi}_2$ | 105.4212
Table 8. Numerical results for $F_7$, $x^{(0)} = (0.25, 0.25)$.

Iterative method | $k$ | $\rho$ | $\epsilon_{aprox}$ | $\epsilon_f$ | $\bar{\xi}$ | Cpu-time
$MS(1,-1)$ | 5 | 4.00 | 5.203e-41 | 6.322e-161 | $\bar{\xi}_1$ | 3.6588
$MS(1,1)$ | 4 | 5.00 | 5.282e-73 | 6.419e-362 | $\bar{\xi}_1$ | 3.0030
TraubSte | 6 | 4.00 | 6.615e-81 | 7.658e-321 | $\bar{\xi}_1$ | 4.6891
Ostro01 | - | - | - | - | - | -
$M_{4,3}$ | 6 | 4.00 | 5.613e-98 | 3.787e-389 | $\bar{\xi}_1$ | 3.7011
$M_{6,3}$ | 7 | 5.92 | 2.032e-32 | 5.723e-190 | $\bar{\xi}_1$ | 4.4699
Table 9. Numerical results for $F_8$, $x^{(0)} = (0.25, 0.25)$.

Iterative method | $k$ | $\rho$ | $\epsilon_{aprox}$ | $\epsilon_f$ | $\bar{\xi}$ | Cpu-time
$MS(1,-1)$ | 4 | 4.00 | 1.139e-54 | 9.420e-217 | $\bar{\xi}_1$ | 2.9146
$MS(1,1)$ | 4 | 4.20 | 1.415e-72 | 2.272e-301 | $\bar{\xi}_1$ | 2.8907
TraubSte | 4 | 4.11 | 4.417e-48 | 1.391e-191 | $\bar{\xi}_1$ | 3.1954
Ostro01 | 6 | 4.00 | 2.520e-78 | 1.241e-310 | $\bar{\xi}_1$ | 3.4855
$M_{4,3}$ | 7 | 4.02 | 1.401e-72 | 3.093e-288 | $\bar{\xi}_1$ | 4.0501
$M_{6,3}$ | 6 | 6.27 | 4.634e-58 | 7.902e-346 | $\bar{\xi}_1$ | 3.6475
Table 10. Numerical results for $F_8$, $x^{(0)} = (1.25, 1.25)$.

Iterative method | $k$ | $\rho$ | $\epsilon_{aprox}$ | $\epsilon_f$ | $\bar{\xi}$ | Cpu-time
$MS(1,-1)$ | 6 | 4.00 | 1.019e-65 | 6.029e-261 | $\bar{\xi}_1$ | 4.1403
$MS(1,1)$ | 5 | 4.05 | 2.474e-42 | 4.929e-177 | $\bar{\xi}_1$ | 3.3999
TraubSte | 6 | 4.00 | 8.946e-86 | 2.341e-342 | $\bar{\xi}_1$ | 4.4433
Ostro01 | 8 | 4.00 | 2.050e-55 | 5.429e-219 | $\bar{\xi}_1$ | 4.4246
$M_{4,3}$ | - | - | - | - | - | -
$M_{6,3}$ | - | - | - | - | - | -
Table 11. Numerical results for $F_8$, $x^{(0)} = (2, 2)$.

Iterative method | $k$ | $\rho$ | $\epsilon_{aprox}$ | $\epsilon_f$ | $\bar{\xi}$ | Cpu-time
$MS(1,-1)$ | 10 | 4.00 | 6.679e-71 | 1.113e-281 | $\bar{\xi}_1$ | 6.9568
$MS(1,1)$ | 8 | 4.28 | 4.387e-77 | 3.113e-321 | $\bar{\xi}_1$ | 5.4126
TraubSte | 15 | 4.00 | 4.978e-100 | 2.442e-397 | $\bar{\xi}_2$ | 10.6624
Ostro01 | - | - | - | - | - | -
$M_{4,3}$ | - | - | - | - | - | -
$M_{6,3}$ | - | - | - | - | - | -
Table 12. Numerical results for $F_8$, $x^{(0)} = (2.25, 2.25)$.

Iterative method | $k$ | $\rho$ | $\epsilon_{aprox}$ | $\epsilon_f$ | $\bar{\xi}$ | Cpu-time
$MS(1,-1)$ | 16 | 4.00 | 2.821e-94 | 3.543e-375 | $\bar{\xi}_1$ | 10.5084
$MS(1,1)$ | 6 | 4.19 | 8.959e-45 | 1.550e-184 | $\bar{\xi}_2$ | 4.2948
TraubSte | - | - | - | - | - | -
Ostro01 | >50 | - | - | - | - | -
$M_{4,3}$ | - | - | - | - | - | -
$M_{6,3}$ | - | - | - | - | - | -
Table 13. Numerical results for $F_8$, $x^{(0)} = (2.5, 2.5)$.

Iterative method | $k$ | $\rho$ | $\epsilon_{aprox}$ | $\epsilon_f$ | $\bar{\xi}$ | Cpu-time
$MS(1,-1)$ | 10 | 4.00 | 2.063e-86 | 6.071e-344 | $\bar{\xi}_2$ | 6.8361
$MS(1,1)$ | 7 | 4.17 | 3.281e-52 | 6.327e-218 | $\bar{\xi}_1$ | 4.7535
TraubSte | - | - | - | - | - | -
Ostro01 | - | - | - | - | - | -
$M_{4,3}$ | - | - | - | - | - | -
$M_{6,3}$ | - | - | - | - | - | -