
A Spectral Polak-Ribière-Polyak Conjugate Gradient Method on Quantum Derivative for Unconstrained Optimization Problems

Submitted: 24 October 2023; Posted: 25 October 2023

Abstract
Quantum computing is an emerging field that has had a significant impact on optimization. Among the diverse quantum algorithms, quantum gradient descent has become a prominent technique for solving unconstrained optimization problems. In this paper, we propose a quantum-spectral Polak-Ribière-Polyak (PRP) conjugate gradient approach. The technique is a generalization of the spectral PRP method that employs a $q$-gradient, which approximates the classical gradient with quadratically better dependence on the quantum variable $q$. Additionally, the proposed method reduces to the classical variant as the quantum variable $q$ approaches $1$. The quantum search direction always satisfies the sufficient descent condition and does not depend on any line search. The approach is globally convergent under a standard Wolfe line search without any convexity assumption. Numerical experiments are conducted and compared with an existing approach to demonstrate the improvement offered by the proposed strategy.
Keywords: 
Subject: Computer Science and Mathematics – Computational Mathematics

MSC:  65K10; 90C52; 90C26

1. Introduction

Conjugate gradient (CG) methods remain a preferred choice for minimizing a variety of multi-variable objective functions due to their good convergence rates and high accuracy [1]. Several authors have developed novel, high-performing CG optimization methods [2,3,4], including classical [5], hybrid [6], scaled [7], and parameterized [8] CG methods. The key challenges to address are computing the step-length precisely and generating conjugate directions in succession until the optimum point is attained [9]. When the starting point is far from the optimal solution and the objective function contains multiple local optima, the classical steepest descent method based on the gradient direction hits its limits [10]. CG methods require the gradient of the objective function; researchers generally compute it with the classical derivative, but the quantum derivative can also be used to obtain the quantum gradient of the function. We now consider the following nonlinear unconstrained optimization (UO) problem:
$$\operatorname{minimize}\; \theta(z), \quad z \in \mathbb{R}^n, \tag{1}$$
where $\theta : \mathbb{R}^n \to \mathbb{R}$ is a continuously quantum-differentiable function whose quantum gradient is denoted by $h_q(z)$. A good CG algorithm should both converge to an optimum solution and converge quickly. Researchers have already demonstrated efficient performance by replacing the gradient vector with the quantum gradient vector [11,12,13,14,15,16,17,18,19].
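For concreteness, the componentwise q-derivative (Jackson derivative) underlying the q-gradient of [13] can be sketched in a few lines of R. This is a minimal illustration, not the authors' code: the function name q_gradient, the tolerance 1e-12, and the forward-difference fallback near $q_i = 1$ or $z_i = 0$ are all assumptions made here for the sketch.

# q-derivative of theta in coordinate i at z (Jackson difference quotient):
#   D_{q,i} theta(z) = (theta(z_1,...,q_i*z_i,...,z_n) - theta(z)) / ((q_i - 1) z_i)
q_gradient <- function(theta, z, q) {
  g <- numeric(length(z))
  for (i in seq_along(z)) {
    if (abs(q[i] - 1) < 1e-12 || abs(z[i]) < 1e-12) {
      # near q_i = 1 or z_i = 0 the q-quotient degenerates; fall back to a
      # classical forward difference (an assumption of this sketch)
      h <- 1e-8; zp <- z; zp[i] <- zp[i] + h
      g[i] <- (theta(zp) - theta(z)) / h
    } else {
      zq <- z; zq[i] <- q[i] * zq[i]
      g[i] <- (theta(zq) - theta(z)) / ((q[i] - 1) * z[i])
    }
  }
  g
}

As $q \to 1$ the quotient tends to the classical partial derivative, which is the sense in which the method reduces to its classical variant.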
The CG method [1] is one of the most efficient and accurate methods for solving the large-scale UO problem (1). Its iterative sequence $\{z^{(\lambda)}\}$ [10] is generated, in the context of quantum calculus, as
$$z^{(\lambda+1)} = z^{(\lambda)} + \alpha^{(\lambda)} p_{q^{(\lambda)}}^{(\lambda)}, \quad \lambda = 0, 1, 2, \ldots, \tag{2}$$
where the scalar $\alpha^{(\lambda)} > 0$ is a step-length, computed by some line search (LS), and $p_{q^{(\lambda)}}^{(\lambda)}$ is a quantum descent search direction given by
$$p_{q^{(\lambda)}}^{(\lambda)} = -h_{q^{(\lambda)}}^{(\lambda)}, \quad \lambda = 0. \tag{3}$$
The vector $h_{q^{(\lambda)}}^{(\lambda)} = \nabla_{q^{(\lambda)}}\, \theta\left(z^{(\lambda)}\right)$ is the quantum steepest descent direction [11] at the starting point $z^{(0)}$. For subsequent quantum iterations,
$$p_{q^{(\lambda)}}^{(\lambda)} = -h_{q^{(\lambda)}}^{(\lambda)} + \beta_{(\lambda)}^{QuantumPRP}\, p_{q^{(\lambda-1)}}^{(\lambda-1)}, \quad \lambda \ge 1. \tag{4}$$
Note that $\beta_{(\lambda)}^{QuantumPRP}$ is a scalar quantity, called the quantum CG parameter. The quantum PRP CG method given by Mishra et al. [11] is a popular CG method whose quantum CG parameter is given as:
$$\beta_{(\lambda)}^{QuantumPRP} = \frac{h_{q^{(\lambda)}}^{(\lambda)T}\left(h_{q^{(\lambda)}}^{(\lambda)} - h_{q^{(\lambda-1)}}^{(\lambda-1)}\right)}{\left\|h_{q^{(\lambda-1)}}^{(\lambda-1)}\right\|^2}, \tag{5}$$
where $\|\cdot\|$ denotes the Euclidean norm. Further, a sufficient quantum descent condition is required to reach the global convergence (GC) point for the objective function (1). This condition is represented as:
$$h_{q^{(\lambda)}}^{(\lambda)T}\, p_{q^{(\lambda)}}^{(\lambda)} \le -c \left\|h_{q^{(\lambda)}}^{(\lambda)}\right\|^2, \quad c > 0. \tag{6}$$
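Translated into code, the quantum CG parameter (5) and the descent test (6) are one-liners. The following R sketch (function names and the default constant c are illustrative assumptions, not part of the paper) operates on the current and previous quantum gradients:

# Quantum PRP parameter of (5): h^T (h - h_prev) / ||h_prev||^2
beta_qprp <- function(h, h_prev) sum(h * (h - h_prev)) / sum(h_prev^2)

# Sufficient descent test of (6) for some constant c > 0
is_sufficient_descent <- function(h, p, c = 1e-4) sum(h * p) <= -c * sum(h^2)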
Researchers have proposed variants of the PRP CG method that establish condition (6). For instance, Zhang [12] suggested a modified PRP method that produces a descent direction regardless of the LS employed. Wan et al. [20] established a distinct PRP method, called the spectral PRP method, whose search direction was shown to be a descent direction of the objective function at each iteration. Hu et al. [21] proposed a class of improved CG methods that guarantee a descent direction for solving unconstrained non-convex optimization problems. More pertinent contributions are available in [22,23,24,25,26] and the references therein.
In this article, we propose to merge the concept of quantum derivative with the spectral gradient and CG. We present a quantum-spectral-PRP approach where the search direction is a quantum-descent direction at each quantum iteration. The rapid search for the descent point in the proposed algorithm is made possible by the quantum variable q.

1.1. A Quantum Spectral PRP CG Algorithm

In this subsection, we construct our method step by step. Consider the UO problem defined by (1). The CG method of Wan et al. [20] solves (1) by generating a sequence of iterates $\{z^{(\lambda)}\}$. Following Wan et al. [20], we now introduce the quantum-spectral PRP method for solving (1); its iterative scheme, based on the quantum gradient, is given in (2). Motivated by [20], the quantum search direction $p_{q^{(\lambda)}}^{(\lambda)}$ for $\lambda \ge 1$ is presented as:
$$p_{q^{(\lambda)}}^{(\lambda)} = -t^{(\lambda)} h_{q^{(\lambda)}}^{(\lambda)} + \beta_{(\lambda)}^{QuantumPRP}\, p_{q^{(\lambda-1)}}^{(\lambda-1)}, \tag{7}$$
where
$$t^{(\lambda)} = \frac{p_{q^{(\lambda-1)}}^{(\lambda-1)T}\left(h_{q^{(\lambda)}}^{(\lambda)} - h_{q^{(\lambda-1)}}^{(\lambda-1)}\right)}{\left\|h_{q^{(\lambda-1)}}^{(\lambda-1)}\right\|^2} - \frac{p_{q^{(\lambda-1)}}^{(\lambda-1)T} h_{q^{(\lambda)}}^{(\lambda)} \; h_{q^{(\lambda)}}^{(\lambda)T} h_{q^{(\lambda-1)}}^{(\lambda-1)}}{\left\|h_{q^{(\lambda)}}^{(\lambda)}\right\|^2 \left\|h_{q^{(\lambda-1)}}^{(\lambda-1)}\right\|^2}. \tag{8}$$
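In code, the spectral coefficient (8) and the direction (7) follow directly. This R sketch reuses beta_qprp from the sketch above; all function names are illustrative assumptions:

# Spectral coefficient t of (8)
spectral_t <- function(h, h_prev, p_prev) {
  sum(p_prev * (h - h_prev)) / sum(h_prev^2) -
    (sum(p_prev * h) * sum(h * h_prev)) / (sum(h^2) * sum(h_prev^2))
}

# Quantum-spectral PRP direction of (7), for lambda >= 1
spectral_prp_direction <- function(h, h_prev, p_prev) {
  -spectral_t(h, h_prev, p_prev) * h + beta_qprp(h, h_prev) * p_prev
}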
Based on the above, we present the algorithm for the quantum-spectral PRP CG method as follows:
Algorithm 1.1. 
A Quantum-Spectral PRP CG Algorithm
  • Step 0. Choose a starting point $z^{(0)} \in \mathbb{R}^n$, constants $0 < \rho < \sigma < 1$, $\delta_1, \delta_2 > 0$, and $\epsilon > 0$. Set $\lambda = 0$.
  • Step 1. If $\left\|h_{q^{(\lambda)}}^{(\lambda)}\right\| \le \epsilon$, the algorithm terminates. Otherwise, compute $p_{q^{(\lambda)}}^{(\lambda)}$ using (3) and (7), and go to Step 2.
  • Step 2. Find a step-length $\alpha^{(\lambda)} > 0$ such that
$$\begin{aligned} &\theta\left(z^{(\lambda)}\right) - \theta\left(z^{(\lambda)} + \alpha^{(\lambda)} p_{q^{(\lambda)}}^{(\lambda)}\right) \ge \rho \left(\alpha^{(\lambda)}\right)^2 \left\|p_{q^{(\lambda)}}^{(\lambda)}\right\|^2, \\ &h_{q^{(\lambda)}}\left(z^{(\lambda)} + \alpha^{(\lambda)} p_{q^{(\lambda)}}^{(\lambda)}\right)^T p_{q^{(\lambda)}}^{(\lambda)} \ge -2\sigma \alpha^{(\lambda)} \left\|p_{q^{(\lambda)}}^{(\lambda)}\right\|^2. \end{aligned} \tag{9}$$
  • Step 3. Compute the quantum descent point $z^{(\lambda+1)} := z^{(\lambda)} + \alpha^{(\lambda)} p_{q^{(\lambda)}}^{(\lambda)}$.
  • Step 4. Set $q_i^{(\lambda+1)} = 1 - \dfrac{q_i^{(\lambda)}}{(\lambda+1)^2}$ for all $i = 1, \ldots, n$.
  • Step 5. Set $\lambda = \lambda + 1$ and go to Step 1.
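Putting the pieces together, the following R sketch mirrors Algorithm 1.1 under stated assumptions. It reuses q_gradient and spectral_prp_direction from the sketches above; since the paper does not prescribe how a step-length satisfying (9) is located, the halving loop, the initial alpha = 1, and the default values of q0, rho, and sigma below are assumptions of this sketch:

quantum_spectral_prp <- function(theta, z0, q0 = rep(0.9, length(z0)),
                                 rho = 1e-4, sigma = 0.4, eps = 1e-5,
                                 max_iter = 400) {
  z <- z0; q <- q0
  h <- q_gradient(theta, z, q)
  p <- -h                                     # direction (3) at lambda = 0
  for (lambda in 0:max_iter) {
    if (sqrt(sum(h^2)) <= eps) break          # Step 1: stopping test
    alpha <- 1                                # Step 2: halve until (9) holds
    repeat {
      z_new <- z + alpha * p
      ok1 <- theta(z) - theta(z_new) >= rho * alpha^2 * sum(p^2)
      ok2 <- sum(q_gradient(theta, z_new, q) * p) >= -2 * sigma * alpha * sum(p^2)
      if ((ok1 && ok2) || alpha < 1e-12) break
      alpha <- alpha / 2
    }
    z <- z_new                                # Step 3: next iterate
    q <- 1 - q / (lambda + 1)^2               # Step 4: q-update
    h_new <- q_gradient(theta, z, q)
    p <- spectral_prp_direction(h_new, h, p)  # direction (7)-(8), lambda >= 1
    h <- h_new
  }
  list(z = z, value = theta(z), iterations = lambda)
}

# Example run on the two-variable Rosenbrock function
rosenbrock <- function(z) (1 - z[1])^2 + 100 * (z[2] - z[1]^2)^2
quantum_spectral_prp(rosenbrock, c(-1.2, 1))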

2. Convergence Analysis

In this section, the GC of Algorithm 1.1 is proved. For brevity, we write $h^{(\lambda)}$ for $h_{q^{(\lambda)}}^{(\lambda)}$ and $p^{(\lambda)}$ for $p_{q^{(\lambda)}}^{(\lambda)}$ throughout. We begin with assumptions that play an important role in establishing the convergence proof of the proposed technique.
Assumption 2.1. 
A1. 
The level set $\Omega = \left\{z \mid \theta(z) \le \theta\left(z^{(0)}\right)\right\}$ is bounded, where $z^{(0)}$ is the starting point.
A2. 
There exists a constant $L > 0$ such that $\theta$ is continuously quantum-differentiable in a neighborhood $\mathcal{N}$ of $\Omega$ and its quantum gradient is Lipschitz continuous, that is,
$$\left\|h_q(y) - h_q(z)\right\| \le L \left\|y - z\right\|, \quad \forall\, y, z \in \mathcal{N}. \tag{10}$$
A3. 
The following inequality holds for λ large enough:
$$h^{(\lambda)T} h^{(\lambda)} - \frac{1}{2}\, h^{(\lambda)T} h^{(\lambda-1)} > 0. \tag{11}$$
Remark 2.2. 
Assumptions (A1) and (A2) imply that there exists a positive constant $\gamma$ such that
$$\left\|h_q(z)\right\| \le \gamma, \quad \forall\, z \in \mathcal{N}. \tag{12}$$
Lemma 2.3. 
If the direction $p^{(\lambda)}$ is yielded by (3) and (7), then the following equation holds for any quantum iteration $\lambda$:
$$h^{(\lambda)T} p^{(\lambda)} = -\left\|h^{(\lambda)}\right\|^2. \tag{13}$$
Proof. 
First, for $\lambda = 0$, using (3), it is easy to see that (13) is true. For $\lambda > 0$, we assume that
$$p^{(\lambda-1)T} h^{(\lambda-1)} = -\left\|h^{(\lambda-1)}\right\|^2 \tag{14}$$
holds at iteration $\lambda - 1$. Thus, from (7) and (5), we have
$$h^{(\lambda)T} p^{(\lambda)} = -t^{(\lambda)} \left\|h^{(\lambda)}\right\|^2 + \frac{h^{(\lambda)T}\left(h^{(\lambda)} - h^{(\lambda-1)}\right)}{\left\|h^{(\lambda-1)}\right\|^2}\, p^{(\lambda-1)T} h^{(\lambda)}.$$
Substituting (8) for $t^{(\lambda)}$, we get
$$h^{(\lambda)T} p^{(\lambda)} = -\left[\frac{p^{(\lambda-1)T}\left(h^{(\lambda)} - h^{(\lambda-1)}\right)}{\left\|h^{(\lambda-1)}\right\|^2} - \frac{p^{(\lambda-1)T} h^{(\lambda)}\; h^{(\lambda)T} h^{(\lambda-1)}}{\left\|h^{(\lambda)}\right\|^2 \left\|h^{(\lambda-1)}\right\|^2}\right] \left\|h^{(\lambda)}\right\|^2 + \frac{h^{(\lambda)T}\left(h^{(\lambda)} - h^{(\lambda-1)}\right)}{\left\|h^{(\lambda-1)}\right\|^2}\, p^{(\lambda-1)T} h^{(\lambda)}.$$
Expanding and cancelling the two opposite terms involving $h^{(\lambda)T} h^{(\lambda-1)}$, this reduces to
$$h^{(\lambda)T} p^{(\lambda)} = -\frac{p^{(\lambda-1)T} h^{(\lambda)}}{\left\|h^{(\lambda-1)}\right\|^2}\, \left\|h^{(\lambda)}\right\|^2 + \frac{p^{(\lambda-1)T} h^{(\lambda-1)}}{\left\|h^{(\lambda-1)}\right\|^2}\, \left\|h^{(\lambda)}\right\|^2 + \frac{h^{(\lambda)T} h^{(\lambda)}}{\left\|h^{(\lambda-1)}\right\|^2}\, p^{(\lambda-1)T} h^{(\lambda)}.$$
The first and third terms on the right-hand side cancel, so
$$h^{(\lambda)T} p^{(\lambda)} = \frac{p^{(\lambda-1)T} h^{(\lambda-1)}}{\left\|h^{(\lambda-1)}\right\|^2}\, \left\|h^{(\lambda)}\right\|^2.$$
From (14), we get
$$h^{(\lambda)T} p^{(\lambda)} = -\left\|h^{(\lambda)}\right\|^2.$$
This completes the proof. □
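The identity (13) is also easy to check numerically. In the R snippet below (reusing the helper functions sketched in Section 1.1, with random vectors standing in for actual iterates), the induction hypothesis (14) is enforced by taking $p^{(\lambda-1)} = -h^{(\lambda-1)}$, and the printed value should be zero up to rounding:

set.seed(1)
h_prev <- rnorm(5); h <- rnorm(5)
p_prev <- -h_prev                 # enforces (14): p_prev^T h_prev = -||h_prev||^2
p <- spectral_prp_direction(h, h_prev, p_prev)
sum(h * p) + sum(h^2)             # ~0, i.e. h^T p = -||h||^2 as in (13)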
Remark 2.4. 
It is known from Lemma 2.3 that $p^{(\lambda)}$ is a descent direction of $\theta$ at $z^{(\lambda)}$. Additionally, if the exact LS is employed, then $h^{(\lambda)T} p^{(\lambda-1)} = 0$; thus (8) reduces to
$$t^{(\lambda)} = \frac{p^{(\lambda-1)T} h^{(\lambda)} - p^{(\lambda-1)T} h^{(\lambda-1)}}{\left\|h^{(\lambda-1)}\right\|^2} - \frac{h^{(\lambda)T} p^{(\lambda-1)}\; h^{(\lambda)T} h^{(\lambda-1)}}{\left\|h^{(\lambda)}\right\|^2 \left\|h^{(\lambda-1)}\right\|^2} = -\frac{p^{(\lambda-1)T} h^{(\lambda-1)}}{\left\|h^{(\lambda-1)}\right\|^2}.$$
Since, from (14),
$$p^{(\lambda-1)T} h^{(\lambda-1)} = -\left\|h^{(\lambda-1)}\right\|^2,$$
we conclude that $t^{(\lambda)} = 1$. Hence Algorithm 1.1 and Lemma 2.3 are both well defined, and Lemma 2.3 together with Assumption 2.1 can be used to establish the following result.
Lemma 2.5. 
Suppose Assumption 2.1 holds. Then
$$\sum_{\lambda \ge 0} \frac{\left\|h^{(\lambda)}\right\|^4}{\left\|p^{(\lambda)}\right\|^2} < \infty. \tag{15}$$
Proof. 
From the second condition of the LS rule (9), $h^{(\lambda+1)T} p^{(\lambda)} \ge -2\sigma \alpha^{(\lambda)} \left\|p^{(\lambda)}\right\|^2$, while the Lipschitz condition (10) gives $\left(h^{(\lambda+1)} - h^{(\lambda)}\right)^T p^{(\lambda)} \le L \alpha^{(\lambda)} \left\|p^{(\lambda)}\right\|^2$. Combining the two inequalities, we get
$$-h^{(\lambda)T} p^{(\lambda)} \le (2\sigma + L)\, \alpha^{(\lambda)} \left\|p^{(\lambda)}\right\|^2.$$
Hence,
$$\alpha^{(\lambda)} \ge -\frac{1}{2\sigma + L} \cdot \frac{h^{(\lambda)T} p^{(\lambda)}}{\left\|p^{(\lambda)}\right\|^2},$$
and therefore
$$\left(\alpha^{(\lambda)}\right)^2 \left\|p^{(\lambda)}\right\|^2 \ge \frac{1}{(2\sigma + L)^2} \left(\frac{h^{(\lambda)T} p^{(\lambda)}}{\left\|p^{(\lambda)}\right\|}\right)^2.$$
From the first condition of the quantum LS rule (9) and Assumption A1,
$$\sum_{\lambda=0}^{\infty} \left(\frac{h^{(\lambda)T} p^{(\lambda)}}{\left\|p^{(\lambda)}\right\|}\right)^2 \le (2\sigma + L)^2 \sum_{\lambda=0}^{\infty} \left(\alpha^{(\lambda)}\right)^2 \left\|p^{(\lambda)}\right\|^2 \le \frac{(2\sigma + L)^2}{\rho} \sum_{\lambda=0}^{\infty} \left[\theta\left(z^{(\lambda)}\right) - \theta\left(z^{(\lambda+1)}\right)\right] < +\infty. \tag{16}$$
The proof of (15) is completed by Lemma 2.3, which gives $h^{(\lambda)T} p^{(\lambda)} = -\left\|h^{(\lambda)}\right\|^2$. □
The following result establishes the GC of Algorithm 1.1.
Theorem 2.6. 
Under Assumption 2.1, we have
$$\liminf_{\lambda \to \infty} \left\|h^{(\lambda)}\right\| = 0. \tag{17}$$
Proof. 
Assume that (17) is false. Then there exists an $\epsilon > 0$ such that, for all $\lambda$,
$$\left\|h^{(\lambda)}\right\| \ge \epsilon.$$
We can write
$$\left\|p^{(\lambda)}\right\|^2 = p^{(\lambda)T} p^{(\lambda)}.$$
From (7), we can expand this as
$$\left\|p^{(\lambda)}\right\|^2 = \left(-t^{(\lambda)} h^{(\lambda)} + \beta_{(\lambda)}^{QuantumPRP} p^{(\lambda-1)}\right)^T \left(-t^{(\lambda)} h^{(\lambda)} + \beta_{(\lambda)}^{QuantumPRP} p^{(\lambda-1)}\right) = \left(t^{(\lambda)}\right)^2 \left\|h^{(\lambda)}\right\|^2 - 2 t^{(\lambda)} \beta_{(\lambda)}^{QuantumPRP}\, p^{(\lambda-1)T} h^{(\lambda)} + \left(\beta_{(\lambda)}^{QuantumPRP}\right)^2 \left\|p^{(\lambda-1)}\right\|^2.$$
Since (7) also gives $\beta_{(\lambda)}^{QuantumPRP}\, p^{(\lambda-1)} = p^{(\lambda)} + t^{(\lambda)} h^{(\lambda)}$, we can write
$$\left\|p^{(\lambda)}\right\|^2 = \left(t^{(\lambda)}\right)^2 \left\|h^{(\lambda)}\right\|^2 - 2 t^{(\lambda)} \left(p^{(\lambda)} + t^{(\lambda)} h^{(\lambda)}\right)^T h^{(\lambda)} + \left(\beta_{(\lambda)}^{QuantumPRP}\right)^2 \left\|p^{(\lambda-1)}\right\|^2 = -\left(t^{(\lambda)}\right)^2 \left\|h^{(\lambda)}\right\|^2 - 2 t^{(\lambda)}\, p^{(\lambda)T} h^{(\lambda)} + \left(\beta_{(\lambda)}^{QuantumPRP}\right)^2 \left\|p^{(\lambda-1)}\right\|^2.$$
Therefore,
$$\left\|p^{(\lambda)}\right\|^2 = \left(\beta_{(\lambda)}^{QuantumPRP}\right)^2 \left\|p^{(\lambda-1)}\right\|^2 - 2 t^{(\lambda)}\, h^{(\lambda)T} p^{(\lambda)} - \left(t^{(\lambda)}\right)^2 \left\|h^{(\lambda)}\right\|^2.$$
Dividing both sides of the above equality by $\left\|h^{(\lambda)}\right\|^4$,
$$\frac{\left\|p^{(\lambda)}\right\|^2}{\left\|h^{(\lambda)}\right\|^4} = \frac{\left(\beta_{(\lambda)}^{QuantumPRP}\right)^2 \left\|p^{(\lambda-1)}\right\|^2 - 2 t^{(\lambda)}\, h^{(\lambda)T} p^{(\lambda)} - \left(t^{(\lambda)}\right)^2 \left\|h^{(\lambda)}\right\|^2}{\left\|h^{(\lambda)}\right\|^4}.$$
From (5), we get
$$\frac{\left\|p^{(\lambda)}\right\|^2}{\left\|h^{(\lambda)}\right\|^4} = \frac{\left[h^{(\lambda)T}\left(h^{(\lambda)} - h^{(\lambda-1)}\right)\right]^2}{\left\|h^{(\lambda-1)}\right\|^4} \cdot \frac{\left\|p^{(\lambda-1)}\right\|^2}{\left\|h^{(\lambda)}\right\|^4} - \frac{2 t^{(\lambda)}\, h^{(\lambda)T} p^{(\lambda)} + \left(t^{(\lambda)}\right)^2 \left\|h^{(\lambda)}\right\|^2}{\left\|h^{(\lambda)}\right\|^4}.$$
Using Lemma 2.3 to replace $h^{(\lambda)T} p^{(\lambda)}$ by $-\left\|h^{(\lambda)}\right\|^2$ and expanding $\left[h^{(\lambda)T}\left(h^{(\lambda)} - h^{(\lambda-1)}\right)\right]^2 = \left\|h^{(\lambda)}\right\|^4 - 2\left(h^{(\lambda)T} h^{(\lambda)} - \frac{1}{2}\, h^{(\lambda)T} h^{(\lambda-1)}\right) h^{(\lambda)T} h^{(\lambda-1)}$, we obtain
$$\frac{\left\|p^{(\lambda)}\right\|^2}{\left\|h^{(\lambda)}\right\|^4} = \frac{\left\|p^{(\lambda-1)}\right\|^2}{\left\|h^{(\lambda-1)}\right\|^4} - \frac{\left\|p^{(\lambda-1)}\right\|^2}{\left\|h^{(\lambda-1)}\right\|^4} \cdot \frac{2\left(h^{(\lambda)T} h^{(\lambda)} - \frac{1}{2}\, h^{(\lambda)T} h^{(\lambda-1)}\right) h^{(\lambda-1)T} h^{(\lambda)}}{\left\|h^{(\lambda)}\right\|^4} - \frac{\left(t^{(\lambda)} - 1\right)^2}{\left\|h^{(\lambda)}\right\|^2} + \frac{1}{\left\|h^{(\lambda)}\right\|^2}.$$
In view of (11), for $\lambda$ large enough the second term on the right-hand side is nonpositive, so we get
$$\frac{\left\|p^{(\lambda)}\right\|^2}{\left\|h^{(\lambda)}\right\|^4} \le \frac{\left\|p^{(\lambda-1)}\right\|^2}{\left\|h^{(\lambda-1)}\right\|^4} + \frac{1}{\left\|h^{(\lambda)}\right\|^2}.$$
Applying this recursively and using $\left\|p^{(0)}\right\|^2 / \left\|h^{(0)}\right\|^4 = 1/\left\|h^{(0)}\right\|^2$, we obtain
$$\frac{\left\|p^{(\lambda)}\right\|^2}{\left\|h^{(\lambda)}\right\|^4} \le \frac{1}{\left\|h^{(0)}\right\|^2} + \cdots + \frac{1}{\left\|h^{(\lambda)}\right\|^2} = \sum_{i=0}^{\lambda} \frac{1}{\left\|h^{(i)}\right\|^2}.$$
Since $\left\|h^{(i)}\right\| \ge \epsilon$ for every $i$, we get
$$\frac{\left\|p^{(\lambda)}\right\|^2}{\left\|h^{(\lambda)}\right\|^4} \le \frac{\lambda + 1}{\epsilon^2}.$$
The above inequality implies that
$$\sum_{\lambda \ge 0} \frac{\left\|h^{(\lambda)}\right\|^4}{\left\|p^{(\lambda)}\right\|^2} \ge \epsilon^2 \sum_{\lambda \ge 0} \frac{1}{\lambda + 1} = +\infty,$$
since the harmonic series diverges. This contradicts (15). Thus, the result (17) holds. □

3. Numerical Illustration

3.1. Test on UO Problems

We now solve numerical problems using Algorithm 1.1. We took 30 test problems from [27] and performed 37 experiments using 37 starting points. Our numerical tests were carried out in R 3.6.1 on an Intel(R) Core(TM) i5-4005U CPU at 1.70 GHz. We apply the stopping condition:
$$\left\|h_{q^{(\lambda)}}\left(z^{(\lambda)}\right)\right\| \le 10^{-5}.$$
The code also terminates if the number of quantum iterations exceeds 400. Dolan and Moré [28] proposed an appropriate statistical approach for illustrating performance profiles. The performance ratio is
$$\rho_{(p_r, s)} = \frac{r_{(p_r, s)}}{\min\left\{r_{(p_r, s)} : 1 \le s \le n_s\right\}},$$
where $r_{(p_r, s)}$ denotes the number of quantum iterations required by solver $s$ on test problem $p_r$, and $n_s$ is the number of solvers under comparison. The cumulative distribution function is
$$p_s(\tau) = \frac{1}{n_{p_r}}\, \operatorname{size}\left\{p_r : \rho_{(p_r, s)} \le \tau\right\},$$
where $n_{p_r}$ is the total number of test problems and $p_s(\tau)$ is the probability that the performance ratio $\rho_{(p_r, s)}$ is within a factor $\tau$ of the best possible ratio. We plot the fraction $p_s(\tau)$ of test problems for which an algorithm is within a factor $\tau$ of the best. We use this technique to depict the efficiency of Algorithm 1.1; Figure 1 shows that the quantum-spectral PRP method is efficient in comparison with the existing method.
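For reference, the Dolan–Moré profile defined above reduces to a few lines of R. In this sketch, iters is a hypothetical matrix of iteration counts (one row per test problem, one column per solver); the grid of τ values and the counts in the example are made up purely for illustration and are not the results reported in Tables 1 and 2:

# rho(p_r, s): each problem's counts divided by the best count on that problem
perf_profile <- function(iters, tau_grid = seq(1, 5, by = 0.05)) {
  ratios <- iters / apply(iters, 1, min)
  sapply(tau_grid, function(tau) colMeans(ratios <= tau))  # p_s(tau) per solver
}

# Illustrative (made-up) counts: three problems, two solvers
iters <- matrix(c(10, 12, 25, 30, 40, 35), nrow = 3, byrow = TRUE)
ps <- perf_profile(iters)
matplot(seq(1, 5, by = 0.05), t(ps), type = "s",
        xlab = expression(tau), ylab = expression(p[s](tau)))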
Figure 1. Performance profile based on number of quantum iterations using Table 1 and Table 2.
Table 1. Numerical results.
Table 2. Numerical results.

4. Conclusion

A quantum-spectral PRP CG method has been put forth for solving UO problems. The GC theorem of the generalized algorithm has been established under Wolfe LS conditions of quantum type. The choice of the quantum variable q affects the practical behavior of the method: with a proper iterative selection of q, the proposed algorithm performs markedly better. We compared the proposed algorithm with the existing method, and the numerical results confirmed that the proposed strategy performs better.

Acknowledgments

The third author acknowledges the financial support of the Centre for Digital Transformation, Indian Institute of Management Ahmedabad, India.

References

  1. Mishra, S.K.; Ram, B. Conjugate gradient methods. In Introduction to Unconstrained Optimization with R; Springer, 2019; pp. 211–244.
  2. Babaie-Kafaki, S. A survey on the Dai–Liao family of nonlinear conjugate gradient methods. RAIRO-Operations Research 2023, 57, 43–58.
  3. Wu, X.; Shao, H.; Liu, P.; Zhang, Y.; Zhuo, Y. An efficient conjugate gradient-based algorithm for unconstrained optimization and its projection extension to large-scale constrained nonlinear equations with applications in signal recovery and image denoising problems. Journal of Computational and Applied Mathematics 2023, 422, 114879.
  4. Liu, J.; Du, S.; Chen, Y. A sufficient descent nonlinear conjugate gradient method for solving M-tensor equations. Journal of Computational and Applied Mathematics 2020, 371, 112709.
  5. Andrei, N. On three-term conjugate gradient algorithms for unconstrained optimization. Applied Mathematics and Computation 2013, 219, 6316–6327.
  6. Dai, Y.H.; Yuan, Y. An efficient hybrid conjugate gradient method for unconstrained optimization. Annals of Operations Research 2001, 103, 33–47.
  7. Shanno, D.F. Conjugate gradient methods with inexact searches. Mathematics of Operations Research 1978, 3, 244–256.
  8. Johnson, O.G.; Micchelli, C.A.; Paul, G. Polynomial preconditioners for conjugate gradient calculations. SIAM Journal on Numerical Analysis 1983, 20, 362–376.
  9. Wei, Z.; Li, G.; Qi, L. New nonlinear conjugate gradient formulas for large-scale unconstrained optimization problems. Applied Mathematics and Computation 2006, 179, 407–430.
  10. Mishra, S.K.; Ram, B. Steepest descent method. In Introduction to Unconstrained Optimization with R; Springer, 2019; pp. 131–173.
  11. Mishra, S.K.; Chakraborty, S.K.; Samei, M.E.; Ram, B. A q-Polak–Ribière–Polyak conjugate gradient algorithm for unconstrained optimization problems. Journal of Inequalities and Applications 2021, 2021, 1–29.
  12. Zhang, L.; Zhou, W.; Li, D.H. A descent modified Polak–Ribière–Polyak conjugate gradient method and its global convergence. IMA Journal of Numerical Analysis 2006, 26, 629–640.
  13. Soterroni, A.C.; Galski, R.L.; Ramos, F.M. The q-gradient vector for unconstrained continuous optimization problems. In Operations Research Proceedings 2010; Springer, 2011; pp. 365–370.
  14. Gouvêa, É.J.; Regis, R.G.; Soterroni, A.C.; Scarabello, M.C.; Ramos, F.M. Global optimization using q-gradients. European Journal of Operational Research 2016, 251, 727–738.
  15. Chakraborty, S.K.; Panda, G. Newton like line search method using q-calculus. In Proceedings of the International Conference on Mathematics and Computing; Springer, 2017; pp. 196–208.
  16. Mishra, S.K.; Panda, G.; Ansary, M.A.T.; Ram, B. On q-Newton's method for unconstrained multiobjective optimization problems. Journal of Applied Mathematics and Computing 2020, 63, 391–410.
  17. Lai, K.K.; Mishra, S.K.; Panda, G.; Ansary, M.A.T.; Ram, B. On q-steepest descent method for unconstrained multiobjective optimization problems. AIMS Mathematics 2020, 5, 5521–5540.
  18. Mishra, S.K.; Panda, G.; Chakraborty, S.K.; Samei, M.E.; Ram, B. On q-BFGS algorithm for unconstrained optimization problems. Advances in Difference Equations 2020, 2020, 1–24.
  19. Lai, K.K.; Mishra, S.K.; Panda, G.; Chakraborty, S.K.; Samei, M.E.; Ram, B. A limited memory q-BFGS algorithm for unconstrained optimization problems. Journal of Applied Mathematics and Computing 2021, 66, 183–202.
  20. Wan, Z.; Yang, Z.; Wang, Y. New spectral PRP conjugate gradient method for unconstrained optimization. Applied Mathematics Letters 2011, 24, 16–22.
  21. Hu, Q.; Zhang, H.; Zhou, Z.; Chen, Y. A class of improved conjugate gradient methods for nonconvex unconstrained optimization. Numerical Linear Algebra with Applications 2023, 30, e2482.
  22. Gilbert, J.C.; Nocedal, J. Global convergence properties of conjugate gradient methods for optimization. SIAM Journal on Optimization 1992, 2, 21–42.
  23. Polyak, B.T. The conjugate gradient method in extremal problems. USSR Computational Mathematics and Mathematical Physics 1969, 9, 94–112.
  24. Powell, M.J.D. Restart procedures for the conjugate gradient method. Mathematical Programming 1977, 12, 241–254.
  25. Powell, M.J.D. Nonconvex minimization calculations and the conjugate gradient method. In Numerical Analysis; Springer, 1984; pp. 122–141.
  26. Wang, C.Y.; Chen, Y.Y.; Du, S.Q. Further insight into the Shamanskii modification of Newton method. Applied Mathematics and Computation 2006, 180, 46–52.
  27. Jamil, M.; Yang, X.S. A literature survey of benchmark functions for global optimisation problems. International Journal of Mathematical Modelling and Numerical Optimisation 2013, 4, 150–194.
  28. Dolan, E.D.; Moré, J.J. Benchmarking optimization software with performance profiles. Mathematical Programming 2002, 91, 201–213.