# 2.6: Unconstrained Optimization: Numerical Methods

The types of problems that we solved in the previous section were examples of unconstrained optimization problems. That is, we tried to find local (and perhaps even global) maximum and minimum points of real-valued functions \(f(x, y)\), where the points \((x, y)\) could be any points in the domain of \(f\). The method we used required us to find the critical points of \(f\), which meant having to solve the equation \(\nabla f = \textbf{0}\), which in general is a system of two equations in two unknowns \(x\) and \(y\). While this was relatively simple for the examples we did, in general this will not be the case. If the equations involve polynomials in \(x\) and \(y\) of degree three or higher, or complicated expressions involving trigonometric, exponential, or logarithmic functions, then solving even one such equation, let alone two, could be impossible by elementary means.

For example, if one of the equations that had to be solved was

\[ x^3 + 9x - 2 = 0, \]

you may have a hard time getting the exact solutions. Trial and error would not help much, especially since the only real solution turns out to be

\[ \sqrt[3]{\sqrt{28}+1} - \sqrt[3]{\sqrt{28}-1}. \]

In a situation such as this, the only choice may be to find a solution using some numerical method that produces a sequence of numbers converging to the actual solution; for example, Newton's method for solving equations \(f(x) = 0\), which you probably learned in single-variable calculus. In this section we will describe another method of Newton for finding critical points of real-valued functions of two variables.
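As a concrete illustration of the single-variable case, here is a minimal sketch of Newton's method applied to the cubic above (the function name `newton` and the starting point `1.0` are choices made for this example, not part of the text):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method for f(x) = 0: iterate x <- x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# The cubic from the text: x^3 + 9x - 2 = 0
root = newton(lambda x: x**3 + 9*x - 2, lambda x: 3*x**2 + 9, x0=1.0)

# Compare with the exact real root given by Cardano's formula
exact = (28**0.5 + 1)**(1/3) - (28**0.5 - 1)**(1/3)
```

The iterates settle quickly near \(0.221\), and the numerical root agrees with the closed-form expression to within floating-point precision.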

Let \(f(x, y)\) be a smooth real-valued function, and define

\[ D(x, y) = \dfrac{\partial^2 f}{\partial x^2}(x, y)\,\dfrac{\partial^2 f}{\partial y^2}(x, y) - \left(\dfrac{\partial^2 f}{\partial y\,\partial x}(x, y)\right)^2 . \]

Newton’s algorithm: Pick an initial point \((x_0, y_0)\). For \(n = 0, 1, 2, 3, \ldots\), define:

\[ x_{n+1} = x_n - \dfrac{\begin{vmatrix} \dfrac{\partial^2 f}{\partial y^2}(x_n, y_n) & \dfrac{\partial^2 f}{\partial x\,\partial y}(x_n, y_n) \\[4pt] \dfrac{\partial f}{\partial y}(x_n, y_n) & \dfrac{\partial f}{\partial x}(x_n, y_n) \end{vmatrix}}{D(x_n, y_n)}, \quad y_{n+1} = y_n - \dfrac{\begin{vmatrix} \dfrac{\partial^2 f}{\partial x^2}(x_n, y_n) & \dfrac{\partial^2 f}{\partial x\,\partial y}(x_n, y_n) \\[4pt] \dfrac{\partial f}{\partial x}(x_n, y_n) & \dfrac{\partial f}{\partial y}(x_n, y_n) \end{vmatrix}}{D(x_n, y_n)} \label{Eq2.14}\]

Then the sequence of points \((x_n, y_n)_{n=1}^{\infty}\) converges to a critical point. If there are several critical points, then you will have to try different initial points to find them.

For example, one could apply the algorithm to \(f(x, y) = x^3 - xy - x + xy^3 - y^4\) on the region \(-1 \le x \le 0\), \(0 \le y \le 1\).
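The algorithm translates directly into code. The sketch below applies it to this example function; the starting point \((-0.5, 0.5)\) is an arbitrary choice for illustration:

```python
def newton2(fx, fy, fxx, fyy, fxy, x, y, num_iter=20):
    """Newton's algorithm for critical points of f(x, y):
    each step solves the 2x2 system H * delta = -grad f via Cramer's rule,
    which is exactly the determinant formula in the text."""
    for _ in range(num_iter):
        D = fxx(x, y) * fyy(x, y) - fxy(x, y)**2
        x_new = x - (fyy(x, y) * fx(x, y) - fxy(x, y) * fy(x, y)) / D
        y_new = y - (fxx(x, y) * fy(x, y) - fxy(x, y) * fx(x, y)) / D
        x, y = x_new, y_new
    return x, y

# Partial derivatives of f(x, y) = x^3 - xy - x + xy^3 - y^4
fx  = lambda x, y: 3*x**2 - y - 1 + y**3
fy  = lambda x, y: -x + 3*x*y**2 - 4*y**3
fxx = lambda x, y: 6*x
fyy = lambda x, y: 6*x*y - 12*y**2
fxy = lambda x, y: -1 + 3*y**2

x, y = newton2(fx, fy, fxx, fyy, fxy, -0.5, 0.5)
# At the returned point the gradient of f is essentially zero.
```

From this starting point the iterates converge rapidly to a critical point inside the stated region; trying other initial points, as the text suggests, may find other critical points.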

The derivation of Newton’s algorithm, and the proof that it converges (given a “reasonable” choice for the initial point) requires techniques beyond the scope of this text. See RALSTON and RABINOWITZ for more detail and for discussion of other numerical methods. Our description of Newton’s algorithm is the special two-variable case of a more general algorithm that can be applied to functions of (n ge 2) variables.

In the case of functions which have a global maximum or minimum, Newton’s algorithm can be used to find those points. In general, global maxima and minima tend to be more interesting than local versions, at least in practical applications. A maximization problem can always be turned into a minimization problem (why?), so a large number of methods have been developed to find the global minimum of functions of any number of variables. This field of study is called nonlinear programming. Many of these methods are based on the steepest descent technique, which is based on an idea that we discussed in Section 2.4. Recall that the negative gradient \(-\nabla f\) gives the direction of the fastest rate of decrease of a function \(f\). The crux of the steepest descent idea, then, is that starting from some initial point, you move a certain amount in the direction of \(-\nabla f\) at that point. Wherever that takes you becomes your new point, and you then just keep repeating that procedure until eventually (hopefully) you reach the point where \(f\) has its smallest value. There is a “pure” steepest descent method, and a multitude of variations on it that improve the rate of convergence, ease of calculation, etc. In fact, Newton’s algorithm can be interpreted as a modified steepest descent method. For more discussion of this, and of nonlinear programming in general, see BAZARAA, SHERALI and SHETTY.
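A bare-bones version of the “pure” steepest descent idea can be sketched as follows; the objective function and the simple step-halving rule here are illustrative choices, not taken from the text:

```python
def grad_descent(f, grad, x, y, step=1.0, tol=1e-10, max_iter=10000):
    """Pure steepest descent: move along -grad f, halving the trial step
    whenever it fails to decrease f (a crude backtracking rule)."""
    for _ in range(max_iter):
        gx, gy = grad(x, y)
        if (gx*gx + gy*gy)**0.5 < tol:
            break                       # gradient ~ 0: (near-)critical point
        t = step
        while f(x - t*gx, y - t*gy) >= f(x, y):
            t /= 2                      # shrink until f actually decreases
            if t < 1e-16:
                return x, y
        x, y = x - t*gx, y - t*gy
    return x, y

# Illustrative objective with its minimum at (1, -0.5)
f = lambda x, y: (x - 1)**2 + 2*(y + 0.5)**2
grad = lambda x, y: (2*(x - 1), 4*(y + 0.5))

xmin, ymin = grad_descent(f, grad, 0.0, 0.0)
```

The many practical variants mentioned above differ mainly in how they choose the step length at each iteration; the halving rule here is only the simplest possibility.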

## Numerical Methods and Optimization in Finance

Author: Manfred Gilli

Computationally intensive tools play an increasingly important role in financial decisions. Many financial problems, ranging from asset allocation to risk management and from option pricing to model calibration, can be efficiently handled using modern computational techniques. Numerical Methods and Optimization in Finance presents such computational techniques, with an emphasis on simulation and optimization, particularly so-called heuristics. This book treats quantitative analysis as an essentially computational discipline in which applications are put into software form and tested empirically. This revised edition includes two new chapters: a self-contained tutorial on implementing and using heuristics, and an explanation of software used for testing portfolio-selection models. Postgraduate students, researchers in programs on quantitative and computational finance, and practitioners in banks and other financial companies can benefit from this second edition of Numerical Methods and Optimization in Finance.

• Introduces numerical methods to readers with economics backgrounds
• Emphasizes core simulation and optimization problems
• Includes MATLAB and R code for all applications, with sample code in the text and freely available for download

## Classics in Applied Mathematics

### Title Information

This book has become the standard for a complete, state-of-the-art description of the methods for unconstrained optimization and systems of nonlinear equations. Originally published in 1983, it provides information needed to understand both the theory and the practice of these methods and provides pseudocode for the problems. The algorithms covered are all based on Newton's method or “quasi-Newton” methods, and the heart of the book is the material on computational methods for multidimensional unconstrained optimization and nonlinear equation problems. The republication of this book by SIAM is driven by a continuing demand for specific and sound advice on how to solve real problems.

The level of presentation is consistent throughout, with a good mix of examples and theory, making it a valuable text at both the graduate and undergraduate level. It has been praised as excellent for courses with approximately the same name as the book title and would also be useful as a supplemental text for a nonlinear programming or a numerical analysis course. Many exercises are provided to illustrate and develop the ideas in the text. A large appendix provides a mechanism for class projects and a reference for readers who want the details of the algorithms. Practitioners may use this book for self-study and reference.

For complete understanding, readers should have a background in calculus and linear algebra. The book does contain background material in multivariable calculus and numerical linear algebra.

We are delighted that SIAM is republishing our original 1983 book after what many in the optimization field have regarded as “premature termination” by the previous publisher. At 12 years of age, the book may be a little young to be a “classic,” but since its publication it has been well received in the numerical computation community. We are very glad that it will continue to be available for use in teaching, research, and applications.

We set out to write this book in the late 1970s because we felt that the basic techniques for solving small to medium-sized nonlinear equations and unconstrained optimization problems had matured and converged to the point where they would remain relatively stable. Fortunately, the intervening years have confirmed this belief. The material that constitutes most of this book—the discussion of Newton-based methods, globally convergent line search and trust region methods, and secant (quasi-Newton) methods for nonlinear equations, unconstrained optimization, and nonlinear least squares—continues to represent the basis for algorithms and analysis in this field. On the teaching side, a course centered around Chapters 4 to 9 forms a basic, in-depth introduction to the solution of nonlinear equations and unconstrained optimization problems. For researchers or users of optimization software, these chapters give the foundations of methods and software for solving small to medium-sized problems of these types.

### Books

• Christian Kanzow and Alexandra Schwartz
Spieltheorie. Theorie und Verfahren zur Lösung von Nash- und verallgemeinerten Nash-Gleichgewichtsproblemen
164+viii pages (in German), Birkhäuser-Verlag, 2018
• Christian Kanzow
Numerik linearer Gleichungssysteme - Direkte und iterative Verfahren
349+xiv pages (in German), Springer-Verlag, 2004
• Carl Geiger and Christian Kanzow
Theorie und Numerik restringierter Optimierungsaufgaben
487+xii pages (in German), Springer-Verlag, 2002
• Carl Geiger and Christian Kanzow
Numerische Verfahren zur Lösung unrestringierter Optimierungsaufgaben
349+xiv pages (in German), Springer-Verlag, 1999

### Journal Articles

Christian Kanzow, Andreas B. Raharja and Alexandra Schwartz
Sequential Optimality Conditions for Cardinality-Constrained Optimization Problems with Applications
Computational Optimization and Applications, to appear (Preprint)

Christian Kanzow, Andreas B. Raharja and Alexandra Schwartz
An Augmented Lagrangian Method for Cardinality-Constrained Optimization Problems
Journal of Optimization Theory and Applications 189 (2021), 793–813 (DOI: 10.1007/s10957-021-01854-7) (Preprint)

Christian Kanzow and Theresa Lechner
Globalized Inexact Proximal Newton-type Methods for Nonconvex Composite Functions
Computational Optimization and Applications 78 (2021), 377–410 (DOI: 10.1007/s10589-020-00243-6) (Preprint) (BibTeX)

Eike Börgens and Christian Kanzow
ADMM-type Methods for Generalized Nash Equilibrium Problems in Hilbert Spaces
SIAM Journal on Optimization (Preprint), to appear

Eike Börgens, Christian Kanzow, Patrick Mehlitz and Gerd Wachsmuth
New Constraint Qualifications for Optimization Problems in Banach Spaces Based on Asymptotic KKT Conditions
SIAM Journal on Control and Optimization 30 (4), 2020, pp. 2956–2982 (DOI: 10.1137/19M1306804) (Preprint) (BibTeX)

Christian Kanzow, Patrick Mehlitz and Daniel Steck
Relaxation schemes for mathematical programs with switching constraints.
Optimization Methods and Software (Preprint), to appear

Christian Kanzow and Daniel Steck
Quasi-Variational Inequalities in Banach Spaces: Theory and Augmented Lagrangian Methods
SIAM Journal on Optimization 29 (4), 2019, pp. 3174–3200 (DOI: 10.1137/18M1230475) (Preprint) (BibTeX)

Eike Börgens, Christian Kanzow and Daniel Steck
Local and Global Analysis of Multiplier Methods for Constrained Optimization in Banach Spaces.
SIAM Journal on Control and Optimization (Preprint), to appear



## Enriched methods for large-scale unconstrained optimization

Funding Information: The first author was supported by CONACyT grant 25710-A and by the Asociación Mexicana de Cultura. The second author was supported by National Science Foundation grants CDA-9726385 and INT-9416004 and by Department of Energy grant DE-FG02-87ER25047-A004.

This paper describes a class of optimization methods that interlace iterations of the limited memory BFGS method (L-BFGS) and a Hessian-free Newton method (HFN) in such a way that the information collected by one type of iteration improves the performance of the other. Curvature information about the objective function is stored in the form of a limited memory matrix, and plays the dual role of preconditioning the inner conjugate gradient iteration in the HFN method and of providing an initial matrix for L-BFGS iterations. The lengths of the L-BFGS and HFN cycles are adjusted dynamically during the course of the optimization. Numerical experiments indicate that the new algorithms are both effective and not sensitive to the choice of parameters.


## Numerical Optimization Techniques

• Author: Yurij G. Evtushenko
• Publisher: Springer
• Release Date: 2012-01-19
• Genre: Science
• Pages: 562
• ISBN 10: 1461295300

The book of Professor Evtushenko describes both the theoretical foundations and the range of applications of many important methods for solving nonlinear programs. Particularly emphasized is their use for the solution of optimal control problems for ordinary differential equations. These methods were implemented in a library of programs for an interactive system (DISO) at the Computing Center of the USSR Academy of Sciences, which can be used to solve a given complicated problem by a combination of appropriate methods in the interactive mode. Many examples show the strong as well as the weak points of particular methods and illustrate the advantages gained by their combination. In fact, it is the central aim of the author to point out the necessity of using many techniques interactively in order to solve more difficult problems. A noteworthy feature of the book for the Western reader is the frequently unorthodox analysis of many known methods in the great tradition of Russian mathematics. J. Stoer

PREFACE. Optimization methods are finding ever broader application in science and engineering. Design engineers, automation and control systems specialists, physicists processing experimental data, economists, as well as operations research specialists are beginning to employ them routinely in their work. The applications have in turn furthered vigorous development of computational techniques and engendered new directions of research. Practical implementation of many numerical methods of high computational complexity is now possible with the availability of high-speed large-memory digital computers.

## Scaled Diagonal Gradient-Type Method with Extra Update for Large-Scale Unconstrained Optimization

We present a new gradient method that uses scaling and extra updating within the diagonal updating for solving unconstrained optimization problems. The new method is in the frame of the Barzilai and Borwein (BB) method, except that the Hessian matrix is approximated by a diagonal matrix rather than a multiple of the identity matrix as in the BB method. The main idea is to design a new diagonal updating scheme that incorporates scaling to instantly reduce the large eigenvalues of the diagonal approximation and otherwise employs extra updates to increase small eigenvalues. These approaches give us rapid control over the eigenvalues of the updating matrix and thus improve stepwise convergence. We show that our method is globally convergent. The effectiveness of the method is evaluated by means of numerical comparison with the BB method and its variant.

#### 1. Introduction

In this paper, we consider the unconstrained optimization problem

\[ \min_{x \in \mathbb{R}^n} f(x), \tag{1} \]

where \(f\) is a continuously differentiable function from \(\mathbb{R}^n\) to \(\mathbb{R}\). Given a starting point \(x_0\) and an initial matrix \(B_0\) as an approximation to the Hessian \(\nabla^2 f\), the quasi-Newton-based methods for solving (1) are defined by the iteration

\[ x_{k+1} = x_k - \alpha_k B_k^{-1} \nabla f(x_k), \tag{2} \]

where the stepsize \(\alpha_k\) is determined through an appropriate selection. The updating matrix \(B_{k+1}\) is usually required to satisfy the quasi-Newton equation

\[ B_{k+1} s_k = y_k, \tag{3} \]

where \(s_k = x_{k+1} - x_k\) and \(y_k = \nabla f(x_{k+1}) - \nabla f(x_k)\). One of the most widely used quasi-Newton methods for solving general nonlinear minimization problems is the BFGS method, which uses the following updating formula:

\[ B_{k+1} = B_k - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} + \frac{y_k y_k^T}{y_k^T s_k}. \tag{4} \]

On the numerical aspect, this method supersedes most optimization methods; however, it needs \(O(n^2)\) storage, which makes it unsuitable for large-scale problems.
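Assuming the standard BFGS formula \(B_{k+1} = B_k - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} + \frac{y_k y_k^T}{y_k^T s_k}\), a quick numeric check confirms that the updated matrix satisfies the secant equation \(B_{k+1} s_k = y_k\) (the data here is random and purely illustrative):

```python
import numpy as np

def bfgs_update(B, s, y):
    """Standard BFGS update of a Hessian approximation B."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

rng = np.random.default_rng(0)
B = np.eye(3)                          # initial Hessian approximation
s = rng.standard_normal(3)             # step s_k = x_{k+1} - x_k
y = s + 0.1 * rng.standard_normal(3)   # hypothetical gradient difference y_k

B1 = bfgs_update(B, s, y)
# B1 @ s reproduces y: the secant equation holds after the update.
```

Note that `B1` is a dense \(n \times n\) matrix, which is exactly the \(O(n^2)\) storage cost mentioned above.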

On the other hand, an ingenious stepsize selection for the gradient method was proposed by Barzilai and Borwein [1], in which the updating scheme is defined by

\[ x_{k+1} = x_k - \alpha_k \nabla f(x_k), \qquad \alpha_k = \frac{s_{k-1}^T s_{k-1}}{s_{k-1}^T y_{k-1}}. \]

Since then, the study of new effective methods in the frame of BB-like gradient methods has become an interesting research topic for a wide range of mathematical programming; for example, see [2–10]. However, it is well known that the BB method cannot guarantee a descent in the objective function at each iteration, and the extent of the nonmonotonicity depends in some way on the size of the condition number of the objective function [11]. Therefore, the performance of the BB method is greatly influenced by the conditioning of the problem (particularly, the condition number of the Hessian matrix). Some new fixed-stepsize gradient-type methods of BB kind have been proposed [12–16] to overcome these difficulties. In contrast with the BB approach, in which the stepsize is computed by means of a simple approximation of the Hessian in the form of a scalar multiple of the identity, these methods consider approximations of the Hessian and its inverse in diagonal matrix form based on the weak secant equation and the quasi-Cauchy relation, respectively (for more details see [15, 16]). Though these diagonal updating methods are efficient, their performance can be greatly affected when solving ill-conditioned problems. Thus, there is room to improve the quality of the diagonal updates. Since the methods described in [15, 16] have useful theoretical and numerical properties, it is desirable to derive a new and more efficient updating frame for general functions. Therefore our aim is to improve the quality of diagonal updating when it is poor in approximating the Hessian.
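Assuming the common BB stepsize \(\alpha_k = s_{k-1}^T s_{k-1} / s_{k-1}^T y_{k-1}\), a minimal sketch of the method on an illustrative convex quadratic (the test problem and the conservative first step are choices for this example):

```python
import numpy as np

def bb_gradient(A, b, x, num_iter=100):
    """Barzilai-Borwein gradient method for min 0.5 x^T A x - b^T x.
    Stepsize alpha_k = (s^T s)/(s^T y), s = x_k - x_{k-1}, y = g_k - g_{k-1}."""
    g = A @ x - b
    alpha = 1.0 / np.linalg.norm(A)    # conservative first stepsize
    for _ in range(num_iter):
        x_new = x - alpha * g
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        if s @ y <= 1e-16:
            break                       # step vanished: (near-)optimal
        alpha = (s @ s) / (s @ y)       # BB stepsize for the next iterate
        x, g = x_new, g_new
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])  # symmetric positive definite
b = np.array([1.0, 1.0])
x = bb_gradient(A, b, np.zeros(2))
# x approaches the minimizer A^{-1} b = [0.2, 0.4]
```

On a strictly convex quadratic \(s^T y = s^T A s > 0\), so the stepsize is well defined; the nonmonotone behavior noted above means the objective value need not decrease at every iteration even though the iterates converge.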

This paper is organized as follows. In the next section, we describe our motivation and propose our new gradient-type method. The global convergence of the method under mild assumptions is established in Section 3. Numerical evidence of the vast improvements due to the new approach is given in Section 4. Finally, conclusions are drawn in the last section.

#### 2. Scaling and Extra Updating

Assume that the Hessian \(\nabla^2 f\) is positive definite, and let \(\{s_k\}\) and \(\{y_k\}\) be two sequences of \(n\)-vectors such that \(s_k^T y_k > 0\) for all \(k\).

Because it is usually difficult to satisfy the quasi-Newton equation (3) with a nonsingular matrix of the diagonal form, one can consider satisfying it in some directions only. Projecting the quasi-Newton equation (3) (also called the secant equation) in a direction \(v\) gives

\[ v^T B_{k+1} s_k = v^T y_k. \]

If \(v = s_k\) is chosen, it leads to the so-called weak-secant relation,

\[ s_k^T B_{k+1} s_k = s_k^T y_k. \]

Under this weak-secant equation, [15, 16] employ a variational technique to derive an updating matrix that approximates the Hessian matrix diagonally. The resulting update is derived to be the solution of the following variational problem:
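The variational problem itself does not appear in this copy. As an illustration only, the minimal-change diagonal update satisfying the weak-secant relation can be checked numerically; the closed form below is an assumption patterned on the diagonal-updating literature, not necessarily the authors' exact formula:

```python
import numpy as np

def diag_weak_secant_update(d, s, y):
    """Update a diagonal Hessian approximation B = diag(d) so that the
    weak-secant relation s^T B1 s = s^T y holds.  Assumed closed form:
    B1 = B + ((s^T y - s^T B s) / sum(s_i^4)) * diag(s_i^2)."""
    lam = (s @ y - (d * s) @ s) / np.sum(s**4)
    return d + lam * s**2

d = np.ones(4)                    # B_k = identity, stored as its diagonal
rng = np.random.default_rng(1)
s = rng.standard_normal(4)        # illustrative step vector
y = rng.standard_normal(4)        # illustrative gradient difference

d1 = diag_weak_secant_update(d, s, y)
# s^T diag(d1) s equals s^T y: the weak-secant relation holds.
```

Substituting the update into \(s^T B_{k+1} s\) shows why it works: the correction term contributes exactly \(s^T y - s^T B_k s\), so the relation holds by construction.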

University of Niš, Faculty of Sciences and Mathematics, Višegradska 33, 18000 Niš, Serbia

Technical Faculty in Bor, University of Belgrade, Vojske Jugoslavije 12, 19210 Bor, Serbia

School of Mathematical Science, Harbin Normal University, Harbin 150025, China

* Corresponding author: Predrag S. Stanimirović

Received October 2020; revised November 2020; published December 2020 (early access November 2020).

The paper surveys, classifies and investigates, theoretically and numerically, the main classes of line search methods for unconstrained optimization. Quasi-Newton (QN) and conjugate gradient (CG) methods are considered as representative classes of effective numerical methods for solving large-scale unconstrained optimization problems. In this paper, we investigate, classify and compare the main QN and CG methods to present a global overview of scientific advances in this field. Some of the most recent trends in this field are presented. A number of numerical experiments are performed with the aim of giving an experimental answer regarding the mutual numerical comparison of different QN and CG methods.

##### References:

J. Abaffy, A new reprojection of the conjugate directions, Numer. Algebra Control Optim., 9 (2019), 157-171. doi: 10.3934/naco.2019012. Google Scholar

M. Al-Baali, Descent property and global convergence of the Fletcher-Reeves method with inexact line search, IMA J. Numer. Anal., 5 (1985), 121-124. doi: 10.1093/imanum/5.1.121. Google Scholar

N. Andrei, An unconstrained optimization test functions collection, Adv. Model. Optim., 10 (2008), 147-161. Google Scholar

N. Andrei, An acceleration of gradient descent algorithm with backtracking for unconstrained optimization, Numer. Algorithms, 42 (2006), 63-73. doi: 10.1007/s11075-006-9023-9. Google Scholar

N. Andrei, A Dai-Liao conjugate gradient algorithm with clustering of eigenvalues, Numer. Algorithms, 77 (2018), 1273-1282. doi: 10.1007/s11075-017-0362-5. Google Scholar

N. Andrei, Relaxed gradient descent and a new gradient descent methods for unconstrained optimization, Visited August 19, (2018). Google Scholar

N. Andrei, Nonlinear Conjugate Gradient Methods for Unconstrained Optimization, 1 st edition, Springer International Publishing, 2020. doi: 10.1007/978-3-030-42950-8. Google Scholar

L. Armijo, Minimization of functions having Lipschitz continuous first partial derivatives, Pacific J. Math., 16 (1966), 1-3. doi: 10.2140/pjm.1966.16.1. Google Scholar

S. Babaie-Kafaki and R. Ghanbari, The Dai-Liao nonlinear conjugate gradient method with optimal parameter choices, European J. Oper. Res., 234 (2014), 625-630. doi: 10.1016/j.ejor.2013.11.012. Google Scholar

B. Baluch, Z. Salleh, A. Alhawarat and U. A. M. Roslan, A new modified three-term conjugate gradient method with sufficient descent property and its global convergence, J. Math., 2017 (2017), Article ID 2715854, 12 pages. doi: 10.1155/2017/2715854. Google Scholar

J. Barzilai and J. M. Borwein, Two-point step size gradient methods, IMA J. Numer. Anal., 8 (1988), 141-148. doi: 10.1093/imanum/8.1.141. Google Scholar

M. Bastani and D. K. Salkuyeh, On the GSOR iteration method for image restoration, Numerical Algebra, Control and Optimization, (2020). doi: 10.3934/naco.2020013. Google Scholar

A. E. J. Bogaers, S. Kok, B. D. Reddy and T. Franz, An evaluation of quasi-Newton methods for application to FSI problems involving free surface flow and solid body contact, Computers & Structures, 173 (2016), 71-83. doi: 10.1016/j.compstruc.2016.05.018. Google Scholar

I. Bongartz, A. R. Conn, N. Gould and Ph. L. Toint, CUTE: Constrained and unconstrained testing environments, ACM Trans. Math. Softw., 21 (1995), 123-160. doi: 10.1145/200979.201043. Google Scholar

C. Brezinski, A classification of quasi-Newton methods, Numer. Algorithms, 33 (2003), 123-135. doi: 10.1023/A:1025551602679. Google Scholar

J. Cao and J. Wu, A conjugate gradient algorithm and its applications in image restoration, Appl. Numer. Math., 152 (2020), 243-252. doi: 10.1016/j.apnum.2019.12.002. Google Scholar

W. Cheng, A two-term PRP-based descent method, Numer. Funct. Anal. Optim., 28 (2007), 1217-1230. doi: 10.1080/01630560701749524. Google Scholar

Y. Cheng, Q. Mou, X. Pan and S. Yao, A sufficient descent conjugate gradient method and its global convergence, Optim. Methods Softw., 31 (2016), 577-590. doi: 10.1080/10556788.2015.1124431. Google Scholar

A. I. Cohen, Stepsize analysis for descent methods, J. Optim. Theory Appl., 33 (1981), 187-205. doi: 10.1007/BF00935546. Google Scholar

A. R. Conn, N. I. M. Gould and Ph. L. Toint, Convergence of quasi-Newton matrices generated by the symmetric rank one update, Math. Programming, 50 (1991), 177-195. doi: 10.1007/BF01594934. Google Scholar

Y.-H. Dai, Nonlinear Conjugate Gradient Methods, Wiley Encyclopedia of Operations Research and Management Science, (2011). doi: 10.1002/9780470400531.eorms0183. Google Scholar

Y. Dai, A nonmonotone conjugate gradient algorithm for unconstrained optimization, J. Syst. Sci. Complex., 15 (2002), 139-145. Google Scholar

Y.-H. Dai and R. Fletcher, On the Asymptotic Behaviour of some New Gradient Methods, Numerical Analysis Report, NA/212, Dept. of Math. University of Dundee, Scotland, UK, 2003. Google Scholar

Y.-H. Dai and C.-X. Kou, A nonlinear conjugate gradient algorithm with an optimal property and an improved wolfe line search, SIAM. J. Optim., 23 (2013), 296-320. doi: 10.1137/100813026. Google Scholar

Y.-H. Dai and L.-Z. Liao, New conjugacy conditions and related nonlinear conjugate gradient methods, Appl. Math. Optim., 43 (2001), 87-101. doi: 10.1007/s002450010019. Google Scholar

Y.-H. Dai and L.-Z. Liao, R-linear convergence of the Barzilai and Borwein gradient method, IMA J. Numer. Anal., 22 (2002), 1-10. doi: 10.1093/imanum/22.1.1. Google Scholar

Y.-H. Dai and Q. Ni, Testing different conjugate gradient methods for large-scale unconstrained optimization, J. Comput. Math., 21 (2003), 311-320. Google Scholar

Z. Dai and F. Wen, Another improved Wei–Yao–Liu nonlinear conjugate gradient method with sufficient descent property, Appl. Math. Comput., 218 (2012), 7421-7430. doi: 10.1016/j.amc.2011.12.091. Google Scholar

Y.-H. Dai and Y. Yuan, A nonlinear conjugate gradient method with a strong global convergence property, SIAM J. Optim., 10 (1999), 177-182. doi: 10.1137/S1052623497318992. Google Scholar

Y.-H. Dai and Y. Yuan, An efficient hybrid conjugate gradient method for unconstrained optimization, Ann. Oper. Res., 103 (2001), 33-47. doi: 10.1023/A:1012930416777. Google Scholar

Y. H. Dai and Y. Yuan, A class of Globally Convergent Conjugate Gradient Methods, Research report ICM-98-030, Institute of Computational Mathematics and Scientific/Engineering Computing, Chinese Academy of Sciences, 1998. Google Scholar

Y. Dai and Y. Yuan, A class of globally convergent conjugate gradient methods, Sci. China Ser. A, 46 (2003), 251-261. Google Scholar

Y. Dai, J. Yuan and Y.-X. Yuan, Modified two-point step-size gradient methods for unconstrained optimization, Comput. Optim. Appl., 22 (2002), 103-109. doi: 10.1023/A:1014838419611. Google Scholar

Y.-H. Dai and Y.-X. Yuan, Alternate minimization gradient method, IMA J. Numer. Anal., 23 (2003), 377-393. doi: 10.1093/imanum/23.3.377. Google Scholar

Y.-H. Dai and Y.-X. Yuan, Analysis of monotone gradient methods, J. Ind. Manag. Optim., 1 (2005), 181-192. doi: 10.3934/jimo.2005.1.181. Google Scholar

Y.-H. Dai and H. Zhang, An Adaptive Two-Point Step-size gradient method, Research report, Institute of Computational Mathematics and Scientific/Engineering Computing, Chinese Academy of Sciences, 2001. Google Scholar

J. W. Daniel, The conjugate gradient method for linear and nonlinear operator equations, SIAM J. Numer. Anal., 4 (1967), 10-26. doi: 10.1137/0704002. Google Scholar

S. Delladji, M. Belloufi and B. Sellami, Behavior of the combination of PRP and HZ methods for unconstrained optimization, Numerical Algebra, Control and Optimization, (2020). doi: 10.3934/naco.2020032. Google Scholar

Y. Ding, E. Lushi and Q. Li, Investigation of quasi-Newton methods for unconstrained optimization, International Journal of Computer Application, 29 (2010), 48-58. Google Scholar

S. S. Djordjević, Two modifications of the method of the multiplicative parameters in descent gradient methods, Appl. Math, Comput., 218 (2012), 8672-8683. doi: 10.1016/j.amc.2012.02.029. Google Scholar

S. S. Djordjević, Unconstrained Optimization Methods: Conjugate Gradient Methods and Trust-Region Methods, Book Chapter, 2019. doi: 10.5772/intechopen.84374. Google Scholar

E. D. Dolan and J. J. Moré, Benchmarking optimization software with performance profiles, Math. Program., 91 (2002), 201-213. doi: 10.1007/s101070100263. Google Scholar

M. S. Engelman, G. Strang and K.-J. Bathe, The application of quasi-Newton methods in fluid mechanics, Internat. J. Numer. Methods Engrg., 17 (1981), 707-718. doi: 10.1002/nme.1620170505. Google Scholar

D. K. Faddeev and I. S. Sominskiǐ, Collection of Problems on Higher Algebra. Gostekhizdat, 2 nd edition, Moscow, 1949. Google Scholar

A. G. Farizawani, M. Puteh, Y. Marina and A. Rivaie, A review of artificial neural network learning rule based on multiple variant of conjugate gradient approaches, Journal of Physics: Conference Series, 1529 (2020). doi: 10.1088/1742-6596/1529/2/022040. Google Scholar

R. Fletcher, Practical Methods of Optimization, Unconstrained Optimization, 1 st edition, Wiley, New York, 1987. Google Scholar

R. Fletcher and C. M. Reeves, Function minimization by conjugate gradients, Comput. J., 7 (1964), 149-154. doi: 10.1093/comjnl/7.2.149. Google Scholar

J. C. Gilbert and J. Nocedal, Global convergence properties of conjugate gradient methods for optimization, SIAM J. Optim., 2 (1992), 21-42. doi: 10.1137/0802003. Google Scholar

A. A. Goldstein, On steepest descent, J. SIAM Control Ser. A, 3 (1965), 147-151. doi: 10.1137/0303013. Google Scholar

L. Grippo, F. Lampariello and S. Lucidi, A nonmonotone line search technique for Newton's method, SIAM J. Numer. ANAL., 23 (1986), 707-716. doi: 10.1137/0723046. Google Scholar

L. Grippo, F. Lampariello and S. Lucidi, A class of nonmonotone stability methods in unconstrained optimization, Numer. Math., 59 (1991), 779-805. doi: 10.1007/BF01385810. Google Scholar


##### References:

J. Abaffy, A new reprojection of the conjugate directions, Numer. Algebra Control Optim., 9 (2019), 157-171. doi: 10.3934/naco.2019012. Google Scholar

M. Al-Baali, Descent property and global convergence of the Fletcher-Reeves method with inexact line search, IMA J. Numer. Anal., 5 (1985), 121-124. doi: 10.1093/imanum/5.1.121. Google Scholar

N. Andrei, An unconstrained optimization test functions collection, Adv. Model. Optim., 10 (2008), 147-161. Google Scholar

N. Andrei, An acceleration of gradient descent algorithm with backtracking for unconstrained optimization, Numer. Algorithms, 42 (2006), 63-73. doi: 10.1007/s11075-006-9023-9. Google Scholar

N. Andrei, A Dai-Liao conjugate gradient algorithm with clustering of eigenvalues, Numer. Algorithms, 77 (2018), 1273-1282. doi: 10.1007/s11075-017-0362-5. Google Scholar

N. Andrei, Relaxed gradient descent and a new gradient descent methods for unconstrained optimization, Visited August 19, (2018). Google Scholar

N. Andrei, Nonlinear Conjugate Gradient Methods for Unconstrained Optimization, 1 st edition, Springer International Publishing, 2020. doi: 10.1007/978-3-030-42950-8. Google Scholar

L. Armijo, Minimization of functions having Lipschitz continuous first partial derivatives, Pacific J. Math., 16 (1966), 1-3. doi: 10.2140/pjm.1966.16.1. Google Scholar

S. Babaie-Kafaki and R. Ghanbari, The Dai-Liao nonlinear conjugate gradient method with optimal parameter choices, European J. Oper. Res., 234 (2014), 625-630. doi: 10.1016/j.ejor.2013.11.012. Google Scholar

B. Baluch, Z. Salleh, A. Alhawarat and U. A. M. Roslan, A new modified three-term conjugate gradient method with sufficient descent property and its global convergence, J. Math., 2017 (2017), Article ID 2715854, 12 pages. doi: 10.1155/2017/2715854. Google Scholar

J. Barzilai and J. M. Borwein, Two-point step-size gradient method, IMA J. Numer. Anal., 8 (1988), 141-148. doi: 10.1093/imanum/8.1.141. Google Scholar

M. Bastani and D. K. Salkuyeh, On the GSOR iteration method for image restoration, Numerical Algebra, Control and Optimization, (2020). doi: 10.3934/naco.2020013. Google Scholar

A. E. J. Bogaers, S. Kok, B. D. Reddy and T. Franz, An evaluation of quasi-Newton methods for application to FSI problems involving free surface flow and solid body contact, Computers & Structures, 173 (2016), 71-83. doi: 10.1016/j.compstruc.2016.05.018. Google Scholar

I. Bongartz, A. R. Conn, N. Gould and Ph. L. Toint, CUTE: Constrained and unconstrained testing environments, ACM Trans. Math. Softw., 21 (1995), 123-160. doi: 10.1145/200979.201043. Google Scholar

C. Brezinski, A classification of quasi-Newton methods, Numer. Algorithms, 33 (2003), 123-135. doi: 10.1023/A:1025551602679. Google Scholar

J. Cao and J. Wu, A conjugate gradient algorithm and its applications in image restoration, Appl. Numer. Math., 152 (2020), 243-252. doi: 10.1016/j.apnum.2019.12.002. Google Scholar

W. Cheng, A two-term PRP-based descent method, Numer. Funct. Anal. Optim., 28 (2007), 1217-1230. doi: 10.1080/01630560701749524. Google Scholar

Y. Cheng, Q. Mou, X. Pan and S. Yao, A sufficient descent conjugate gradient method and its global convergence, Optim. Methods Softw., 31 (2016), 577-590. doi: 10.1080/10556788.2015.1124431. Google Scholar

A. I. Cohen, Stepsize analysis for descent methods, J. Optim. Theory Appl., 33 (1981), 187-205. doi: 10.1007/BF00935546. Google Scholar

A. R. Conn, N. I. M. Gould and Ph. L. Toint, Convergence of quasi-Newton matrices generated by the symmetric rank one update, Math. Programming, 50 (1991), 177-195. doi: 10.1007/BF01594934. Google Scholar

Y.-H. Dai, Nonlinear Conjugate Gradient Methods, Wiley Encyclopedia of Operations Research and Management Science, (2011). doi: 10.1002/9780470400531.eorms0183. Google Scholar

Y. Dai, A nonmonotone conjugate gradient algorithm for unconstrained optimization, J. Syst. Sci. Complex., 15 (2002), 139-145. Google Scholar

Y.-H. Dai and R. Fletcher, On the Asymptotic Behaviour of some New Gradient Methods, Numerical Analysis Report, NA/212, Dept. of Math. University of Dundee, Scotland, UK, 2003. Google Scholar

Y.-H. Dai and C.-X. Kou, A nonlinear conjugate gradient algorithm with an optimal property and an improved wolfe line search, SIAM. J. Optim., 23 (2013), 296-320. doi: 10.1137/100813026. Google Scholar

Y.-H. Dai and L.-Z. Liao, New conjugacy conditions and related nonlinear conjugate gradient methods, Appl. Math. Optim., 43 (2001), 87-101. doi: 10.1007/s002450010019. Google Scholar

Y.-H. Dai and L.-Z. Liao, R-linear convergence of the Barzilai and Borwein gradient method, IMA J. Numer. Anal., 22 (2002), 1-10. doi: 10.1093/imanum/22.1.1. Google Scholar

Y.-H. Dai and Q. Ni, Testing different conjugate gradient methods for large-scale unconstrained optimization, J. Comput. Math., 21 (2003), 311-320. Google Scholar

Z. Dai and F. Wen, Another improved Wei–Yao–Liu nonlinear conjugate gradient method with sufficient descent property, Appl. Math. Comput., 218 (2012), 7421-7430. doi: 10.1016/j.amc.2011.12.091. Google Scholar

Y.-H. Dai and Y. Yuan, A nonlinear conjugate gradient method with a strong global convergence property, SIAM J. Optim., 10 (1999), 177-182. doi: 10.1137/S1052623497318992. Google Scholar

Y.-H. Dai and Y. Yuan, An efficient hybrid conjugate gradient method for unconstrained optimization, Ann. Oper. Res., 103 (2001), 33-47. doi: 10.1023/A:1012930416777. Google Scholar

Y. H. Dai and Y. Yuan, A class of Globally Convergent Conjugate Gradient Methods, Research report ICM-98-030, Institute of Computational Mathematics and Scientific/Engineering Computing, Chinese Academy of Sciences, 1998. Google Scholar

Y. Dai and Y. Yuan, A class of globally convergent conjugate gradient methods, Sci. China Ser. A, 46 (2003), 251-261. Google Scholar

Y. Dai, J. Yuan and Y.-X. Yuan, Modified two-point step-size gradient methods for unconstrained optimization, Comput. Optim. Appl., 22 (2002), 103-109. doi: 10.1023/A:1014838419611. Google Scholar

Y.-H. Dai and Y.-X. Yuan, Alternate minimization gradient method, IMA J. Numer. Anal., 23 (2003), 377-393. doi: 10.1093/imanum/23.3.377. Google Scholar

Y.-H. Dai and Y.-X. Yuan, Analysis of monotone gradient methods, J. Ind. Manag. Optim., 1 (2005), 181-192. doi: 10.3934/jimo.2005.1.181. Google Scholar

Y.-H. Dai and H. Zhang, An Adaptive Two-Point Step-size gradient method, Research report, Institute of Computational Mathematics and Scientific/Engineering Computing, Chinese Academy of Sciences, 2001. Google Scholar

J. W. Daniel, The conjugate gradient method for linear and nonlinear operator equations, SIAM J. Numer. Anal., 4 (1967), 10-26. doi: 10.1137/0704002. Google Scholar

S. Delladji, M. Belloufi and B. Sellami, Behavior of the combination of PRP and HZ methods for unconstrained optimization, Numerical Algebra, Control and Optimization, (2020). doi: 10.3934/naco.2020032. Google Scholar

Y. Ding, E. Lushi and Q. Li, Investigation of quasi-Newton methods for unconstrained optimization, International Journal of Computer Application, 29 (2010), 48-58. Google Scholar

S. S. Djordjević, Two modifications of the method of the multiplicative parameters in descent gradient methods, Appl. Math, Comput., 218 (2012), 8672-8683. doi: 10.1016/j.amc.2012.02.029. Google Scholar

S. S. Djordjević, Unconstrained Optimization Methods: Conjugate Gradient Methods and Trust-Region Methods, Book Chapter, 2019. doi: 10.5772/intechopen.84374. Google Scholar

E. D. Dolan and J. J. Moré, Benchmarking optimization software with performance profiles, Math. Program., 91 (2002), 201-213. doi: 10.1007/s101070100263. Google Scholar

M. S. Engelman, G. Strang and K.-J. Bathe, The application of quasi-Nnewton methods in fluid mechanics, Internat. J. Numer. Methods Engrg., 17 (1981), 707-718. doi: 10.1002/nme.1620170505. Google Scholar

D. K. Faddeev and I. S. Sominskiǐ, Collection of Problems on Higher Algebra. Gostekhizdat, 2 nd edition, Moscow, 1949. Google Scholar

A. G. Farizawani, M. Puteh, Y. Marina and A. Rivaie, A review of artificial neural network learning rule based on multiple variant of conjugate gradient approaches, Journal of Physics: Conference Series, 1529 (2020). doi: 10.1088/1742-6596/1529/2/022040. Google Scholar

R. Fletcher, Practical Methods of Optimization, Unconstrained Optimization, 1 st edition, Wiley, New York, 1987. Google Scholar

R. Fletcher and C. M. Reeves, Function minimization by conjugate gradients, Comput. J., 7 (1964), 149-154. doi: 10.1093/comjnl/7.2.149. Google Scholar

J. C. Gilbert and J. Nocedal, Global convergence properties of conjugate gradient methods for optimization, SIAM J. Optim., 2 (1992), 21-42. doi: 10.1137/0802003. Google Scholar

A. A. Goldstein, On steepest descent, J. SIAM Control Ser. A, 3 (1965), 147-151. doi: 10.1137/0303013. Google Scholar

L. Grippo, F. Lampariello and S. Lucidi, A nonmonotone line search technique for Newton's method, SIAM J. Numer. ANAL., 23 (1986), 707-716. doi: 10.1137/0723046. Google Scholar

L. Grippo, F. Lampariello and S. Lucidi, A class of nonmonotone stability methods in unconstrained optimization, Numer. Math., 59 (1991), 779-805. doi: 10.1007/BF01385810. Google Scholar

L. Grippo, F. Lampariello and S. Lucidi, A truncated Newton method with nonmonotone line search for unconstrained optimization, J. Optim. Theory Appl., 60 (1989), 401-419. doi: 10.1007/BF00940345. Google Scholar

L. Grippo and M. Sciandrone, Nonmonotone globalization techniques for the Barzilai-Borwein gradient method, Comput. Optim. Appl., 23 (2002), 143-169. doi: 10.1023/A:1020587701058. Google Scholar

W. W. Hager and H. Zhang, A new conjugate gradient method with guaranteed descent and an efficient line search, SIAM J. Optim., 16 (2005), 170-192. doi: 10.1137/030601880. Google Scholar

W. W. Hager and H. Zhang, Algorithm 851: CG_DESCENT, a conjugate gradient method with guaranteed descent, ACM Trans. Math. Software, 32 (2006), 113-137. doi: 10.1145/1132973.1132979. Google Scholar

W. W. Hager and H. Zhang, A survey of nonlinear conjugate gradient methods, Pac. J. Optim., 2 (2006), 35-58. Google Scholar

L. Han and M. Neumann, Combining quasi-Newton and Cauchy directions, Int. J. Appl. Math., 12 (2003), 167-191. Google Scholar

B. A. Hassan, A new type of quasi-Newton updating formulas based on the new quasi-Newton equation, Numer. Algebra Control Optim., 10 (2020), 227-235. doi: 10.3934/naco.2019049. Google Scholar

M. R. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems, J. Res. Nat. Bur. Standards, 49 (1952), 409-436. doi: 10.6028/jres.049.044. Google Scholar

Y. F. Hu and C. Storey, Global convergence result for conjugate gradient methods, J. Optim. Theory Appl., 71 (1991), 399-405. doi: 10.1007/BF00939927. Google Scholar

S. Ishikawa, Fixed points by a new iteration method, Proc. Am. Math. Soc., 44 (1974), 147-150. doi: 10.1090/S0002-9939-1974-0336469-5. Google Scholar

B. Ivanov, P. S. Stanimirović, G. V. Milovanović, S. Djordjević and I. Brajević, Accelerated multiple step-size methods for solving unconstrained optimization problems, Optimization Methods and Software, (2019). doi: 10.1080/10556788.2019.1653868. Google Scholar

Z. Jia, Applications of the conjugate gradient method in optimal surface parameterizations, Int. J. Comput. Math., 87 (2010), 1032-1039. doi: 10.1080/00207160802275951. Google Scholar

J. Jian, L. Han and X. Jiang, A hybrid conjugate gradient method with descent property for unconstrained optimization, Appl. Math. Model., 39 (2015), 1281-1290. doi: 10.1016/j.apm.2014.08.008. Google Scholar

S. H. Khan, A Picard-Mann hybrid iterative process, Fixed Point Theory Appl., 2013 (2013), Article number: 69, 10 pp. doi: 10.1186/1687-1812-2013-69. Google Scholar

Z. Khanaiah and G. Hmod, Novel hybrid algorithm in solving unconstrained optimizations problems, International Journal of Novel Research in Physics Chemistry & Mathematics, 4 (2017), 36-42. Google Scholar

N. Kontrec and M. Petrović, Implementation of gradient methods for optimization of underage costs in aviation industry, University Thought, Publication in Natural Sciences, 6 (2016), 71-74. doi: 10.5937/univtho6-10134. Google Scholar

J. Kwon and P. Mertikopoulos, A continuous-time approach to online optimization, J. Dyn. Games, 4 (2017), 125-148. doi: 10.3934/jdg.2017008. Google Scholar

M. S. Lee, B. S. Goh, H. G. Harno and K. H. Lim, On a two phase approximate greatest descent method for nonlinear optimization with equality constraints, Numer. Algebra Control Optim., 8 (2018), 315-326. doi: 10.3934/naco.2018020. Google Scholar

D.-H. Li and M. Fukushima, A modified BFGS method and its global convergence in nonconvex minimization, J. Comput. Appl. Math., 129 (2001), 15-35. doi: 10.1016/S0377-0427(00)00540-9. Google Scholar

X. Li and Q. Ruan, A modified PRP conjugate gradient algorithm with trust region for optimization problems, Numer. Funct. Anal. Optim., 32 (2011), 496-506. doi: 10.1080/01630563.2011.554948. Google Scholar

X. Li, C. Shen and L.-H. Zhang, A projected preconditioned conjugate gradient method for the linear response eigenvalue problem, Numer. Algebra Control Optim., 8 (2018), 389-412. doi: 10.3934/naco.2018025. Google Scholar

D.-H. Li and X.-L. Wang, A modified Fletcher-Reeves-type derivative-free method for symmetric nonlinear equations, Numer. Algebra Control Optim., 1 (2011), 71-82. doi: 10.3934/naco.2011.1.71. Google Scholar

K. H. Lim, H. H. Tan and H. G. Harno, Approximate greatest descent in neural network optimization, Numer. Algebra Control Optim., 8 (2018), 327-336. doi: 10.3934/naco.2018021. Google Scholar

J. Liu, S. Du and Y. Chen, A sufficient descent nonlinear conjugate gradient method for solving $M$-tensor equations, J. Comput. Appl. Math., 371 (2020), 112709, 11 pp. doi: 10.1016/j.cam.2019.112709. Google Scholar

J. K. Liu and S. J. Li, A projection method for convex constrained monotone nonlinear equations with applications, Comput. Math. Appl., 70 (2015), 2442-2453. doi: 10.1016/j.camwa.2015.09.014. Google Scholar

Y. Liu and C. Storey, Efficient generalized conjugate gradient algorithms, part 1: Theory, J. Optim. Theory Appl., 69 (1991), 129-137. doi: 10.1007/BF00940464. Google Scholar

Q. Liu and X. Zou, A risk minimization problem for finite horizon semi-Markov decision processes with loss rates, J. Dyn. Games, 5 (2018), 143-163. doi: 10.3934/jdg.2018009. Google Scholar

I. E. Livieris and P. Pintelas, A descent Dai-Liao conjugate gradient method based on a modified secant equation and its global convergence, ISRN Computational Mathematics, 2012 (2012), Article ID 435495. doi: 10.5402/2012/435495. Google Scholar

M. Lotfi and S. M. Hosseini, An efficient Dai-Liao type conjugate gradient method by reformulating the CG parameter in the search direction equation, J. Comput. Appl. Math., 371 (2020), 112708, 15 pp. doi: 10.1016/j.cam.2019.112708. Google Scholar

Y.-Z. Luo, G.-J. Tang and L.-N. Zhou, Hybrid approach for solving systems of nonlinear equations using chaos optimization and quasi-Newton method, Applied Soft Computing, 8 (2008), 1068-1073. doi: 10.1016/j.asoc.2007.05.013. Google Scholar

W. R. Mann, Mean value methods in iterations, Proc. Amer. Math. Soc., 4 (1953), 506-510. doi: 10.1090/S0002-9939-1953-0054846-3. Google Scholar

M. Miladinović, P. Stanimirović and S. Miljković, Scalar correction method for solving large scale unconstrained minimization problems, J. Optim. Theory Appl., 151 (2011), 304-320. doi: 10.1007/s10957-011-9864-9. Google Scholar

S. K. Mishra and B. Ram, Introduction to Unconstrained Optimization with R, 1 st edition, Springer Singapore, Springer Nature Singapore Pte Ltd., 2019. doi: 10.1007/978-981-15-0894-3. Google Scholar

I. S. Mohammed, M. Mamat, I. Abdulkarim and F. S. Bt. Mohamad, A survey on recent modifications of conjugate gradient methods, Proceedings of the UniSZA Research Conference 2015 (URC 5), Universiti Sultan Zainal Abidin, 14–16 April 2015. Google Scholar

H. Mohammad, M. Y. Waziri and S. A. Santos, A brief survey of methods for solving nonlinear least-squares problems, Numer. Algebra Control Optim., 9 (2019), 1-13. doi: 10.3934/naco.2019001.

Y. Narushima and H. Yabe, A survey of sufficient descent conjugate gradient methods for unconstrained optimization, SUT J. Math., 50 (2014), 167-203.

J. L. Nazareth, Conjugate-gradient methods, in: C. Floudas and P. Pardalos (eds.), Encyclopedia of Optimization, 2nd edition, Springer, Boston, 2009. doi: 10.1007/978-0-387-74759-0.

J. Nocedal and S. J. Wright, Numerical Optimization, Springer, New York, 1999. doi: 10.1007/b98874.

W. F. H. W. Osman, M. A. H. Ibrahim and M. Mamat, Hybrid DFP-CG method for solving unconstrained optimization problems, Journal of Physics: Conf. Series, 890 (2017), 012033. doi: 10.1088/1742-6596/890/1/012033.

S. Panić, M. J. Petrović and M. Mihajlov-Carević, Initial improvement of the hybrid accelerated gradient descent process, Bull. Aust. Math. Soc., 98 (2018), 331-338. doi: 10.1017/S0004972718000552.

A. Perry, A modified conjugate gradient algorithm, Oper. Res., 26 (1978), 1073-1078. doi: 10.1287/opre.26.6.1073.

M. J. Petrović, An accelerated double step-size method in unconstrained optimization, Appl. Math. Comput., 250 (2015), 309-319. doi: 10.1016/j.amc.2014.10.104.

M. J. Petrović, N. Kontrec and S. Panić, Determination of accelerated factors in gradient descent iterations based on Taylor's series, University Thought, Publication in Natural Sciences, 7 (2017), 41-45. doi: 10.5937/univtho7-14337.

M. J. Petrović, V. Rakočević, N. Kontrec, S. Panić and D. Ilić, Hybridization of accelerated gradient descent method, Numer. Algorithms, 79 (2018), 769-786. doi: 10.1007/s11075-017-0460-4.

M. J. Petrović, V. Rakočević, D. Valjarević and D. Ilić, A note on hybridization process applied on transformed double step size model, Numer. Algorithms, 85 (2020), 449-465. doi: 10.1007/s11075-019-00821-8.

M. J. Petrović and P. S. Stanimirović, Accelerated double direction method for solving unconstrained optimization problems, Math. Probl. Eng., 2014 (2014), Article ID 965104, 8 pp. doi: 10.1155/2014/965104.

M. J. Petrović, P. S. Stanimirović, N. Kontrec and J. Mladenović, Hybrid modification of accelerated double direction method, Math. Probl. Eng., 2018 (2018), Article ID 1523267, 8 pp. doi: 10.1155/2018/1523267.

M. R. Peyghami, H. Ahmadzadeh and A. Fazli, A new class of efficient and globally convergent conjugate gradient methods in the Dai-Liao family, Optim. Methods Softw., 30 (2015), 843-863. doi: 10.1080/10556788.2014.1001511.

E. Picard, Mémoire sur la théorie des équations aux dérivées partielles et la méthode des approximations successives, J. Math. Pures Appl., 6 (1890), 145-210.

E. Polak and G. Ribière, Note sur la convergence de méthodes de directions conjuguées, Rev. Française Informat. Recherche Opérationnelle, 3 (1969), 35-43.

B. T. Polyak, The conjugate gradient method in extremal problems, USSR Comput. Math. and Math. Phys., 9 (1969), 94-112. doi: 10.1016/0041-5553(69)90035-4.

M. J. D. Powell, Algorithms for nonlinear constraints that use Lagrangian functions, Math. Programming, 14 (1978), 224-248. doi: 10.1007/BF01588967.

M. Raydan, On the Barzilai and Borwein choice of steplength for the gradient method, IMA J. Numer. Anal., 13 (1993), 321-326. doi: 10.1093/imanum/13.3.321.

M. Raydan, The Barzilai and Borwein gradient method for the large scale unconstrained minimization problem, SIAM J. Optim., 7 (1997), 26-33. doi: 10.1137/S1052623494266365.

M. Raydan and B. F. Svaiter, Relaxed steepest descent and Cauchy-Barzilai-Borwein method, Comput. Optim. Appl., 21 (2002), 155-167. doi: 10.1023/A:1013708715892.

N. Shapiee, M. Rivaie, M. Mamat and P. L. Ghazali, A new family of conjugate gradient coefficient with application, International Journal of Engineering & Technology, 7 (2018), 36-43. doi: 10.14419/ijet.v7i3.28.20962.

Z.-J. Shi, Convergence of line search methods for unconstrained optimization, Appl. Math. Comput., 157 (2004), 393-405. doi: 10.1016/j.amc.2003.08.058.

S. Shoid, N. Shapiee, N. Zull, N. H. A. Ghani, N. S. Mohamed, M. Rivaie and M. Mamat, The application of new conjugate gradient methods in estimating data, International Journal of Engineering & Technology, 7 (2018), 25-27. doi: 10.14419/ijet.v7i2.14.11147.

H. S. Sim, W. J. Leong, C. Y. Chen and S. N. I. Ibrahim, Multi-step spectral gradient methods with modified weak secant relation for large scale unconstrained optimization, Numer. Algebra Control Optim., 8 (2018), 377-387. doi: 10.3934/naco.2018024.

P. S. Stanimirović, B. Ivanov, S. Djordjević and I. Brajević, New hybrid conjugate gradient and Broyden-Fletcher-Goldfarb-Shanno conjugate gradient methods, J. Optim. Theory Appl., 178 (2018), 860-884. doi: 10.1007/s10957-018-1324-3.

P. S. Stanimirović, V. N. Katsikis and D. Pappas, Computation of $\{2, 4\}$ and $\{2, 3\}$-inverses based on rank-one updates, Linear Multilinear Algebra, 66 (2018), 147-166. doi: 10.1080/03081087.2017.1290042.

P. S. Stanimirović, V. N. Katsikis and D. Pappas, Computing $\{2, 4\}$ and $\{2, 3\}$-inverses by using the Sherman-Morrison formula, Appl. Math. Comput., 273 (2015), 584-603. doi: 10.1016/j.amc.2015.10.023.

P. S. Stanimirović and M. B. Miladinović, Accelerated gradient descent methods with line search, Numer. Algorithms, 54 (2010), 503-520. doi: 10.1007/s11075-009-9350-8.

P. S. Stanimirović, G. V. Milovanović, M. J. Petrović and N. Z. Kontrec, A transformation of accelerated double step-size method for unconstrained optimization, Math. Probl. Eng., 2015 (2015), Article ID 283679, 8 pp. doi: 10.1155/2015/283679.

W. Sun, J. Han and J. Sun, Global convergence of nonmonotone descent methods for unconstrained optimization problems, J. Comput. Appl. Math., 146 (2002), 89-98. doi: 10.1016/S0377-0427(02)00420-X.

Z. Sun and T. Sugie, Identification of Hessian matrix in distributed gradient-based multi-agent coordination control systems, Numer. Algebra Control Optim., 9 (2019), 297-318. doi: 10.3934/naco.2019020.

W. Sun and Y.-X. Yuan, Optimization Theory and Methods: Nonlinear Programming, 1st edition, Springer, New York, 2006. doi: 10.1007/b106451.

Ph. L. Toint, Non-monotone trust-region algorithm for nonlinear optimization subject to convex constraints, Math. Programming, 77 (1997), 69-94. doi: 10.1007/BF02614518.

D. Touati-Ahmed and C. Storey, Efficient hybrid conjugate gradient techniques, J. Optim. Theory Appl., 64 (1990), 379-397. doi: 10.1007/BF00939455.

M. N. Vrahatis, G. S. Androulakis, J. N. Lambrinos and G. D. Magoulas, A class of gradient unconstrained minimization algorithms with adaptive step-size, J. Comput. Appl. Math., 114 (2000), 367-386. doi: 10.1016/S0377-0427(99)00276-9.

Z. Wei, G. Li and L. Qi, New quasi-Newton methods for unconstrained optimization problems, Appl. Math. Comput., 175 (2006), 1156-1188. doi: 10.1016/j.amc.2005.08.027.

Z. Wei, G. Li and L. Qi, New nonlinear conjugate gradient formulas for large-scale unconstrained optimization problems, Appl. Math. Comput., 179 (2006), 407-430. doi: 10.1016/j.amc.2005.11.150.

Z. Wei, S. Yao and L. Liu, The convergence properties of some new conjugate gradient methods, Appl. Math. Comput., 183 (2006), 1341-1350. doi: 10.1016/j.amc.2006.05.150.

P. Wolfe, Convergence conditions for ascent methods, SIAM Rev., 11 (1969), 226-235. doi: 10.1137/1011036.

M. Xi, W. Sun and J. Chen, Survey of derivative-free optimization, Numer. Algebra Control Optim., 10 (2020), 537-555. doi: 10.3934/naco.2020050.

Y. Xiao and H. Zhu, A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing, J. Math. Anal. Appl., 405 (2013), 310-319. doi: 10.1016/j.jmaa.2013.04.017.

H. Yabe and M. Takano, Global convergence properties of nonlinear conjugate gradient methods with modified secant condition, Comput. Optim. Appl., 28 (2004), 203-225. doi: 10.1023/B:COAP.0000026885.81997.88.

X. Yang, Z. Luo and X. Dai, A global convergence of LS-CD hybrid conjugate gradient method, Adv. Numer. Anal., 2013 (2013), Article ID 517452, 5 pp. doi: 10.1155/2013/517452.

S. Yao, X. Lu and Z. Wei, A conjugate gradient method with global convergence for large-scale unconstrained optimization problems, J. Appl. Math., 2013 (2013), Article ID 730454, 9 pp. doi: 10.1155/2013/730454.

G. Yuan and Z. Wei, Convergence analysis of a modified BFGS method on convex minimizations, Comput. Optim. Appl., 47 (2010), 237-255. doi: 10.1007/s10589-008-9219-0.

S. Yao, Z. Wei and H. Huang, A note about WYL's conjugate gradient method and its applications, Appl. Math. Comput., 191 (2007), 381-388. doi: 10.1016/j.amc.2007.02.094.

Y. Yuan, A new stepsize for the steepest descent method, J. Comput. Math., 24 (2006), 149-156.

G. Yuan, T. Li and W. Hu, A conjugate gradient algorithm for large-scale nonlinear equations and image restoration problems, Appl. Numer. Math., 147 (2020), 129-141. doi: 10.1016/j.apnum.2019.08.022.

G. Yuan, T. Li and W. Hu, A conjugate gradient algorithm and its application in large-scale optimization problems and image restoration, J. Inequal. Appl., 2019 (2019), Article number 247, 25 pp. doi: 10.1186/s13660-019-2192-6.

G. Yuan, Z. Wei and Y. Wu, Modified limited memory BFGS method with nonmonotone line search for unconstrained optimization, J. Korean Math. Soc., 47 (2010), 767-788. doi: 10.4134/JKMS.2010.47.4.767.

L. Zhang, An improved Wei-Yao-Liu nonlinear conjugate gradient method for optimization computation, Appl. Math. Comput., 215 (2009), 2269-2274. doi: 10.1016/j.amc.2009.08.016.

Y. Zheng and B. Zheng, Two new Dai-Liao-type conjugate gradient methods for unconstrained optimization problems, J. Optim. Theory Appl., 175 (2017), 502-509. doi: 10.1007/s10957-017-1140-1.

H. Zhong, G. Chen and X. Guo, Semi-local convergence of the Newton-HSS method under the center Lipschitz condition, Numer. Algebra Control Optim., 9 (2019), 85-99. doi: 10.3934/naco.2019007.

W. Zhou and L. Zhang, A nonlinear conjugate gradient method based on the MBFGS secant condition, Optim. Methods Softw., 21 (2006), 707-714. doi: 10.1080/10556780500137041.

G. Zoutendijk, Nonlinear programming, computational methods, in: J. Abadie (ed.), Integer and Nonlinear Programming, North-Holland, Amsterdam, (1970), 37-86.

N. Zull, N. 'Aini, M. Rivaie and M. Mamat, A new gradient method for solving linear regression model, International Journal of Recent Technology and Engineering, 7 (2019), 624-630.

| Quasi-Newton equation | Modified vector $\tilde{y}_k$ | Ref. |
|---|---|---|
| $B_{k+1}s_k=\tilde{y}_k$ | $\tilde{y}_k=\varphi_k y_k+(1-\varphi_k)B_k s_k$ | [104] |
| $B_{k+1}s_k=\tilde{y}_k$ | $\tilde{y}_k=y_k+t_k s_k,\quad t_k\leq 10^{-6}$ | [72] |
| $B_{k+1}s_k=\tilde{y}_k$ | $\tilde{y}_k=y_k+\dfrac{2(f_k-f_{k+1})+(g_{k+1}+g_k)^{\rm T}s_k}{\Vert s_k\Vert^2}\,s_k$ | [123] |
| $B_{k+1}s_k=\tilde{y}_k$ | $\tilde{y}_k=y_k+\dfrac{\max\{2(f_k-f_{k+1})+(g_{k+1}+g_k)^{\rm T}s_k,\,0\}}{\Vert s_k\Vert^2}\,s_k$ | [133] |
| $B_{k+1}s_k=\tilde{y}_k$ | $\tilde{y}_k=y_k+\dfrac{\max\{6(f_k-f_{k+1})+3(g_{k+1}+g_k)^{\rm T}s_k,\,0\}}{\Vert s_k\Vert^2}\,s_k$ | [134] |
| $B_{k+1}s_k=\tilde{y}_k$ | $\tilde{y}_k=\tfrac{1}{2}y_k+\dfrac{(f_{k+1}-f_k)-\tfrac{1}{2}g_k^{\rm T}s_k}{s_k^{\rm T}s_k}\,s_k$ | [59] |
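As an illustration of the function-value-based modifications in the table above (a sketch of ours, not the paper's code; `modified_y` is a hypothetical helper name): for the modification $\tilde{y}_k = y_k + \big(2(f_k-f_{k+1})+(g_{k+1}+g_k)^{\rm T}s_k\big)s_k/\Vert s_k\Vert^2$, the correction term vanishes identically on quadratics, so the modified vector reduces to the ordinary $y_k$ there and only acts on non-quadratic objectives.

```python
# Sketch (our own helper, not the paper's code) of one modified secant vector:
# y_tilde = y + (2(f_k - f_{k+1}) + (g_{k+1} + g_k)^T s) / ||s||^2 * s,
# where s = x_{k+1} - x_k and y = g_{k+1} - g_k.

def modified_y(f_k, f_k1, g_k, g_k1, s):
    y = [a - b for a, b in zip(g_k1, g_k)]
    theta = 2.0 * (f_k - f_k1) + sum((a + b) * c for a, b, c in zip(g_k1, g_k, s))
    t = theta / sum(c * c for c in s)
    return [yi + t * si for yi, si in zip(y, s)]

# On a quadratic f(x) = 0.5 x^T A x - b^T x the correction theta is zero,
# so y_tilde coincides with y; the extra term incorporates function-value
# information that only matters for non-quadratic f.
```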
| $\beta_k$ | Title | Year | Reference |
|---|---|---|---|
| $\beta_k^{\rm HS}=\dfrac{g_{k+1}^{\rm T}y_k}{d_k^{\rm T}y_k}$ | Hestenes–Stiefel | 1952 | [60] |
| $\beta_k^{\rm FR}=\dfrac{\Vert g_{k+1}\Vert^2}{\Vert g_k\Vert^2}$ | Fletcher–Reeves | 1964 | [48] |
| $\beta_k^{\rm D}=\dfrac{g_{k+1}^{\rm T}\nabla^2 f(x_k)\,d_k}{d_k^{\rm T}\nabla^2 f(x_k)\,d_k}$ | Daniel | 1967 | [38] |
| $\beta_k^{\rm PRP}=\dfrac{g_{k+1}^{\rm T}y_k}{\Vert g_k\Vert^2}$ | Polak–Ribière–Polyak | 1969 | [102, 103] |
| $\beta_k^{\rm CD}=-\dfrac{\Vert g_{k+1}\Vert^2}{d_k^{\rm T}g_k}$ | Conjugate Descent | 1987 | [47] |
| $\beta_k^{\rm LS}=-\dfrac{g_{k+1}^{\rm T}y_k}{d_k^{\rm T}g_k}$ | Liu–Storey | 1991 | [79] |
| $\beta_k^{\rm DY}=\dfrac{\Vert g_{k+1}\Vert^2}{d_k^{\rm T}y_k}$ | Dai–Yuan | 1999 | [30] |
| Numerator \ Denominator | $\Vert g_k\Vert^2$ | $d_k^{\rm T}y_k$ | $-d_k^{\rm T}g_k$ |
|---|---|---|---|
| $\Vert g_{k+1}\Vert^2$ | FR | DY | CD |
| $g_{k+1}^{\rm T}y_k$ | PRP | HS | LS |
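The classical $\beta_k$ choices listed above differ only in the numerator and denominator they combine. As an illustration (our own sketch, not code from the paper; the test problem and function names are assumptions), the following plugs four of these rules into a basic nonlinear CG iteration on a convex quadratic, where each of them should reach the minimizer $x^*=A^{-1}b$:

```python
# Minimal nonlinear CG sketch illustrating several beta_k rules.
# Test problem (our own choice): f(x) = 0.5 x^T A x - b^T x with diagonal A,
# so grad f(x) = A x - b and the exact line-search step has a closed form.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

A = [4.0, 2.0, 1.0]          # diagonal of an SPD matrix
b = [1.0, 1.0, 1.0]

def grad(x):
    return [A[i] * x[i] - b[i] for i in range(len(x))]

def beta(rule, g_new, g_old, d):
    """beta_k from the table; y_k = g_{k+1} - g_k, d = current direction."""
    y = [gn - go for gn, go in zip(g_new, g_old)]
    if rule == "FR":
        return dot(g_new, g_new) / dot(g_old, g_old)
    if rule == "PRP":
        return dot(g_new, y) / dot(g_old, g_old)
    if rule == "HS":
        return dot(g_new, y) / dot(d, y)
    if rule == "DY":
        return dot(g_new, g_new) / dot(d, y)
    raise ValueError(rule)

def cg(rule, x0, iters=100):
    x, g = list(x0), grad(x0)
    d = [-gi for gi in g]                    # first step: steepest descent
    for _ in range(iters):
        Ad = [A[i] * d[i] for i in range(len(d))]
        alpha = -dot(g, d) / dot(d, Ad)      # exact step for a quadratic
        x = [x[i] + alpha * d[i] for i in range(len(x))]
        g_new = grad(x)
        if dot(g_new, g_new) < 1e-20:
            break
        d = [-g_new[i] + beta(rule, g_new, g, d) * d[i] for i in range(len(d))]
        g = g_new
    return x
```

With exact line search on a quadratic all four rules coincide and terminate in at most $n$ steps; the differences between them only show up on non-quadratic problems with inexact line searches.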
| Test function | IT, AGD | IT, MSM | IT, SM | FE, AGD | FE, MSM | FE, SM | CPU, AGD | CPU, MSM | CPU, SM |
|---|---|---|---|---|---|---|---|---|---|
| Perturbed Quadratic | 353897 | 34828 | 59908 | 13916515 | 200106 | 337910 | 6756.047 | 116.281 | 185.641 |
| Raydan 1 | 22620 | 26046 | 14918 | 431804 | 311260 | 81412 | 158.359 | 31.906 | 36.078 |
| Diagonal 3 | 120416 | 7030 | 12827 | 4264718 | 38158 | 69906 | 5527.844 | 52.609 | 102.875 |
| Generalized Tridiagonal 1 | 670 | 346 | 325 | 9334 | 1191 | 1094 | 11.344 | 1.469 | 1.203 |
| Extended Tridiagonal 1 | 3564 | 1370 | 4206 | 14292 | 10989 | 35621 | 55.891 | 29.047 | 90.281 |
| Extended TET | 443 | 156 | 156 | 3794 | 528 | 528 | 3.219 | 0.516 | 0.594 |
| Diagonal 4 | 120 | 96 | 96 | 1332 | 636 | 636 | 0.781 | 0.203 | 0.141 |
| Extended Himmelblau | 396 | 260 | 196 | 6897 | 976 | 668 | 1.953 | 0.297 | 0.188 |
| Perturbed quadratic diagonal | 2542050 | 37454 | 44903 | 94921578 | 341299 | 460028 | 44978.750 | 139.625 | 185.266 |
| Quadratic QF1 | 366183 | 36169 | 62927 | 13310016 | 208286 | 352975 | 12602.563 | 81.531 | 138.172 |
| Extended quadratic penalty QP1 | 210 | 369 | 271 | 2613 | 2196 | 2326 | 1.266 | 1.000 | 0.797 |
| Extended quadratic penalty QP2 | 395887 | 1674 | 3489 | 9852040 | 11491 | 25905 | 3558.734 | 3.516 | 6.547 |
| Quadratic QF2 | 100286 | 32727 | 64076 | 3989239 | 183142 | 353935 | 1582.766 | 73.438 | 132.703 |
| Extended quadratic exponential EP1 | 48 | 100 | 73 | 990 | 894 | 661 | 0.750 | 0.688 | 0.438 |
| Extended Tridiagonal 2 | 1657 | 659 | 543 | 8166 | 2866 | 2728 | 3.719 | 1.047 | 1.031 |
| ARWHEAD (CUTE) | 5667 | 430 | 270 | 214284 | 5322 | 3919 | 95.641 | 1.969 | 1.359 |
| Almost Perturbed Quadratic | 356094 | 33652 | 60789 | 14003318 | 194876 | 338797 | 13337.125 | 73.047 | 133.516 |
| LIARWHD (CUTE) | 1054019 | 3029 | 18691 | 47476667 | 27974 | 180457 | 27221.516 | 9.250 | 82.016 |
| ENGVAL1 (CUTE) | 743 | 461 | 375 | 6882 | 2285 | 2702 | 3.906 | 1.047 | 1.188 |
| QUARTC (CUTE) | 171 | 217 | 290 | 402 | 494 | 640 | 2.469 | 1.844 | 2.313 |
| Generalized Quartic | 187 | 181 | 189 | 849 | 493 | 507 | 0.797 | 0.281 | 0.188 |
| Diagonal 7 | 72 | 147 | 108 | 333 | 504 | 335 | 0.625 | 0.547 | 0.375 |
| Diagonal 8 | 60 | 120 | 118 | 304 | 383 | 711 | 0.438 | 0.469 | 0.797 |
| Full Hessian FH3 | 45 | 63 | 63 | 1352 | 566 | 631 | 1.438 | 0.391 | 0.391 |
| Diagonal 9 | 329768 | 10540 | 13619 | 13144711 | 68189 | 89287 | 6353.172 | 43.609 | 38.672 |
| Test function | IT, HS | IT, PRP | IT, LS | FE, HS | FE, PRP | FE, LS | CPU, HS | CPU, PRP | CPU, LS |
|---|---|---|---|---|---|---|---|---|---|
| Perturbed Quadratic | 1157 | 1157 | 6662 | 3481 | 3481 | 19996 | 0.234 | 0.719 | 1.438 |
| Raydan 2 | NaN | 174 | 40 | NaN | 373 | 120 | NaN | 0.094 | 0.078 |
| Diagonal 2 | NaN | 1721 | 5007 | NaN | 6594 | 15498 | NaN | 1.313 | 2.891 |
| Extended Tridiagonal 1 | NaN | 170 | 17079 | NaN | 560 | 54812 | NaN | 0.422 | 13.641 |
| Diagonal 4 | NaN | 70 | 1927 | NaN | 180 | 5739 | NaN | 0.078 | 0.391 |
| Diagonal 5 | NaN | 154 | 30 | NaN | 338 | 90 | NaN | 0.172 | 0.078 |
| Extended Himmelblau | 160 | 120 | 241 | 820 | 600 | 1043 | 0.172 | 0.125 | 0.172 |
| Full Hessian FH2 | 5096 | 5686 | 348414 | 15294 | 17065 | 1045123 | 83.891 | 80.625 | 5081.875 |
| Perturbed quadratic diagonal | 1472 | 1120 | 21667 | 4419 | 3363 | 65057 | 0.438 | 0.391 | 2.547 |
| Quadratic QF1 | 1158 | 1158 | 5612 | 3484 | 3484 | 16813 | 0.281 | 0.313 | 1.047 |
| Extended quadratic penalty QP2 | NaN | 533 | NaN | NaN | 5395 | NaN | NaN | 0.781 | NaN |
| Quadratic QF2 | 2056 | 2311 | NaN | 9168 | 9862 | NaN | 0.969 | 0.859 | NaN |
| Extended quadratic exponential EP1 | NaN | NaN | 70 | NaN | NaN | 350 | NaN | NaN | 0.141 |
| TRIDIA (CUTE) | 6835 | 6744 | NaN | 20521 | 20248 | NaN | 1.438 | 1.094 | NaN |
| Almost Perturbed Quadratic | 1158 | 1158 | 5996 | 3484 | 3484 | 17998 | 0.281 | 0.328 | 1.063 |
| LIARWHD (CUTE) | NaN | 408 | 11498 | NaN | 4571 | 50814 | NaN | 0.438 | 2.969 |
| POWER (CUTE) | 7781 | 7789 | 190882 | 23353 | 23377 | 572656 | 1.422 | 1.219 | 14.609 |
| NONSCOMP (CUTE) | 4545 | 3647 | NaN | 15128 | 12433 | NaN | 0.875 | 0.656 | NaN |
| QUARTC (CUTE) | NaN | 165 | 155 | NaN | 1347 | 1466 | NaN | 0.781 | 0.766 |
| Diagonal 6 | NaN | 174 | 137 | NaN | 373 | 442 | NaN | 0.109 | 0.125 |
| DIXON3DQ (CUTE) | NaN | 12595 | 12039 | NaN | 37714 | 36091 | NaN | 1.641 | 2.859 |
| BIGGSB1 (CUTE) | NaN | 11454 | 11517 | NaN | 34293 | 34530 | NaN | 1.969 | 2.141 |
| Generalized Quartic | NaN | 134 | 139 | NaN | 458 | 445 | NaN | 0.125 | 0.094 |
| Diagonal 7 | NaN | 51 | 80 | NaN | 142 | 240 | NaN | 0.063 | 0.109 |
| Diagonal 8 | NaN | 70 | 80 | NaN | 180 | 180 | NaN | 0.063 | 0.125 |
| FLETCHCR (CUTE) | 18292 | 19084 | 20354 | 178305 | 170266 | 171992 | 8.859 | 6.203 | 7.484 |
| Test function | IT, DY | IT, FR | IT, CD | FE, DY | FE, FR | FE, CD | CPU, DY | CPU, FR | CPU, CD |
|---|---|---|---|---|---|---|---|---|---|
| Perturbed Quadratic | 1157 | 1157 | 1157 | 3481 | 3481 | 3481 | 0.469 | 0.609 | 0.531 |
| Raydan 2 | 86 | 40 | 40 | 192 | 100 | 100 | 0.063 | 0.016 | 0.016 |
| Diagonal 2 | 1636 | 3440 | 2058 | 4774 | 7982 | 8063 | 0.922 | 1.563 | 1.297 |
| Extended Tridiagonal 1 | 2081 | 690 | 1140 | 4639 | 2022 | 2984 | 1.703 | 1.141 | 1.578 |
| Diagonal 4 | 70 | 70 | 70 | 200 | 200 | 200 | 0.047 | 0.031 | 0.016 |
| Diagonal 5 | 40 | 124 | 155 | 100 | 258 | 320 | 0.109 | 0.141 | 0.125 |
| Extended Himmelblau | 383 | 339 | 207 | 1669 | 1467 | 961 | 0.219 | 0.172 | 0.172 |
| Full Hessian FH2 | 4682 | 4868 | 4794 | 14054 | 14610 | 14390 | 65.938 | 66.469 | 65.922 |
| Perturbed quadratic diagonal | 1036 | 1084 | 1276 | 3114 | 3258 | 3834 | 0.406 | 0.422 | 0.422 |
| Quadratic QF1 | 1158 | 1158 | 1158 | 3484 | 3484 | 3484 | 0.297 | 0.297 | 0.328 |
| Quadratic QF2 | NaN | NaN | 2349 | NaN | NaN | 10073 | NaN | NaN | 1.531 |
| Extended quadratic exponential EP1 | NaN | 60 | 60 | NaN | 310 | 310 | NaN | 0.109 | 0.125 |
| Almost Perturbed Quadratic | 1158 | 1158 | 1158 | 3484 | 3484 | 3484 | 0.422 | 0.453 | 0.391 |
| LIARWHD (CUTE) | 2812 | 1202 | 1255 | 12366 | 7834 | 7379 | 0.938 | 1.000 | 1.109 |
| POWER (CUTE) | 7779 | 7781 | 7782 | 23347 | 23353 | 23356 | 1.078 | 1.500 | 1.328 |
| NONSCOMP (CUTE) | 2558 | 13483 | 10901 | 49960 | 43268 | 33413 | 1.203 | 1.406 | 1.422 |
| QUARTC (CUTE) | 134 | 94 | 95 | 1132 | 901 | 916 | 0.688 | 0.672 | 0.563 |
| Diagonal 6 | 86 | 40 | 40 | 192 | 100 | 100 | 0.047 | 0.063 | 0.063 |
| DIXON3DQ (CUTE) | 16047 | 18776 | 19376 | 48172 | 56369 | 58176 | 2.266 | 2.516 | 2.734 |
| BIGGSB1 (CUTE) | 15274 | 17835 | 18374 | 45853 | 53546 | 55170 | 2.875 | 2.922 | 2.484 |
| Generalized Quartic | 142 | 214 | 173 | 497 | 712 | 589 | 0.078 | 0.172 | 0.109 |
| Diagonal 7 | 50 | 50 | 50 | 160 | 160 | 160 | 0.063 | 0.047 | 0.094 |
| Diagonal 8 | 50 | 40 | 40 | 160 | 130 | 130 | 0.109 | 0.125 | 0.063 |
| Full Hessian FH3 | 43 | 43 | 43 | 139 | 139 | 139 | 0.063 | 0.109 | 0.109 |
| FLETCHCR (CUTE) | NaN | NaN | 26793 | NaN | NaN | 240237 | NaN | NaN | 10.203 |
IT profile of the hybrid methods HCG1–HCG10:

| Test function | HCG1 | HCG2 | HCG3 | HCG4 | HCG5 | HCG6 | HCG7 | HCG8 | HCG9 | HCG10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Perturbed Quadratic | 1157 | 1157 | 1157 | 1157 | 1157 | 1157 | 1157 | 1157 | 1157 | 1157 |
| Raydan 2 | 40 | 40 | 40 | 57 | 78 | 81 | 40 | 69 | NaN | 126 |
| Diagonal 2 | 1584 | 1581 | 1542 | 1488 | 1500 | 2110 | 2193 | 1843 | 1475 | 1453 |
| Extended Tridiagonal 1 | 805 | 623 | 754 | 2110 | 2160 | 10129 | 1167 | 966 | NaN | 270 |
| Diagonal 4 | 60 | 60 | 70 | 60 | 70 | 70 | 60 | 70 | NaN | 113 |
| Diagonal 5 | 124 | 39 | 98 | 39 | 120 | 109 | 39 | 141 | 154 | 130 |
| Extended Himmelblau | 145 | 139 | 111 | 161 | 181 | 207 | 159 | 381 | 109 | 108 |
| Full Hessian FH2 | 5036 | 5036 | 5036 | 4820 | 4820 | 4800 | 4994 | 4789 | 5163 | 5705 |
| Perturbed quadratic diagonal | 1228 | 1214 | 1266 | 934 | 1093 | 987 | 996 | 1016 | NaN | 2679 |
| Quadratic QF1 | 1158 | 1158 | 1158 | 1158 | 1158 | 1158 | 1158 | 1158 | NaN | 1158 |
| Quadratic QF2 | 2125 | 2098 | 2174 | 1995 | 1991 | 2425 | 2378 | NaN | 2204 | 2034 |
| TRIDIA (CUTE) | NaN | NaN | NaN | 6210 | 6210 | 5594 | NaN | NaN | 6748 | 7345 |
| Almost Perturbed Quadratic | 1158 | 1158 | 1158 | 1158 | 1158 | 1158 | 1158 | 1158 | 1158 | 1158 |
| LIARWHD (CUTE) | 1367 | 817 | 1592 | 1024 | 1831 | 1774 | 531 | 2152 | NaN | 573 |
| POWER (CUTE) | 7782 | 7782 | 7782 | 7779 | 7779 | 7802 | 7781 | 7780 | NaN | 7781 |
| NONSCOMP (CUTE) | 10092 | 10746 | 8896 | 10466 | 9972 | 13390 | 11029 | 3520 | 3988 | 11411 |
| QUARTC (CUTE) | 94 | 160 | 145 | 150 | 126 | 95 | 160 | 114 | 165 | 154 |
| Diagonal 6 | 40 | 40 | 40 | 57 | 78 | 81 | 40 | 69 | NaN | 126 |
| DIXON3DQ (CUTE) | 12182 | 5160 | 11257 | 5160 | 11977 | 14302 | 5160 | 17080 | NaN | 12264 |
| BIGGSB1 (CUTE) | 10664 | 5160 | 10479 | 5160 | 11082 | 13600 | 5160 | 16192 | NaN | 11151 |
| Generalized Quartic | 129 | 107 | 110 | 107 | 142 | 153 | 107 | 123 | 131 | 145 |
| Diagonal 7 | 50 | NaN | 40 | NaN | 40 | 50 | NaN | 50 | 51 | 40 |
| Diagonal 8 | 40 | 40 | 40 | 50 | NaN | 50 | 40 | NaN | NaN | 40 |
| Full Hessian FH3 | 43 | 42 | 42 | 42 | 42 | 43 | 42 | 43 | NaN | NaN |
| FLETCHCR (CUTE) | 17821 | 17632 | 18568 | 17272 | 17446 | 26794 | 24865 | NaN | 17315 | 20813 |
FE profile of the hybrid methods HCG1–HCG10:

| Test function | HCG1 | HCG2 | HCG3 | HCG4 | HCG5 | HCG6 | HCG7 | HCG8 | HCG9 | HCG10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Perturbed Quadratic | 3481 | 3481 | 3481 | 3481 | 3481 | 3481 | 3481 | 3481 | 3481 | 3481 |
| Raydan 2 | 100 | 100 | 100 | 134 | 176 | 182 | 100 | 158 | NaN | 282 |
| Diagonal 2 | 6136 | 6217 | 6006 | 5923 | 5944 | 8281 | 8594 | 4822 | 5711 | 5636 |
| Extended Tridiagonal 1 | 2369 | 1991 | 2275 | 4678 | 4924 | 22418 | 3119 | 2661 | NaN | 869 |
| Diagonal 4 | 170 | 170 | 200 | 170 | 200 | 200 | 170 | 200 | NaN | 339 |
| Diagonal 5 | 258 | 88 | 206 | 88 | 270 | 228 | 88 | 292 | 338 | 270 |
| Extended Himmelblau | 855 | 687 | 583 | 763 | 813 | 961 | 757 | 1613 | 567 | 594 |
| Full Hessian FH2 | 15115 | 15115 | 15115 | 14467 | 14467 | 14407 | 14989 | 14374 | 15495 | 17122 |
| Perturbed quadratic diagonal | 3686 | 3647 | 3805 | 2805 | 3282 | 2967 | 2993 | 3053 | NaN | 8044 |
| Quadratic QF1 | 3484 | 3484 | 3484 | 3484 | 3484 | 3484 | 3484 | 3484 | NaN | 3484 |
| Quadratic QF2 | 9455 | 9202 | 9501 | 9016 | 9054 | 10229 | 10086 | NaN | 9531 | 9085 |
| TRIDIA (CUTE) | NaN | NaN | NaN | 18640 | 18640 | 16792 | NaN | NaN | 20260 | 22051 |
| Almost Perturbed Quadratic | 3484 | 3484 | 3484 | 3484 | 3484 | 3484 | 3484 | 3484 | 3484 | 3484 |
| LIARWHD (CUTE) | 7712 | 5931 | 8275 | 6165 | 8113 | 9395 | 5854 | 10305 | NaN | 4848 |
| POWER (CUTE) | 23356 | 23356 | 23356 | 23347 | 23347 | 23416 | 23353 | 23350 | NaN | 23353 |
| NONSCOMP (CUTE) | 31355 | 33211 | 27801 | 32705 | 31458 | 40807 | 34013 | 23411 | 13367 | 35106 |
| QUARTC (CUTE) | 901 | 1254 | 1261 | 1224 | 1224 | 916 | 1254 | 1041 | 1347 | 1305 |
| Diagonal 6 | 100 | 100 | 100 | 134 | 176 | 182 | 100 | 158 | NaN | 282 |
| DIXON3DQ (CUTE) | 36508 | 15534 | 33759 | 15534 | 35926 | 42952 | 15534 | 51284 | NaN | 36796 |
| BIGGSB1 (CUTE) | 31960 | 15534 | 31427 | 15534 | 33247 | 40846 | 15534 | 48620 | NaN | 33469 |
| Generalized Quartic | 457 | 371 | 370 | 371 | 481 | 529 | 371 | 439 | 446 | 467 |
| Diagonal 7 | 160 | NaN | 130 | NaN | 130 | 160 | NaN | 160 | 142 | 13 |
| Diagonal 8 | 130 | 130 | 130 | 160 | NaN | 160 | 130 | NaN | NaN | 130 |
| Full Hessian FH3 | 139 | 136 | 136 | 136 | 136 | 139 | 136 | 139 | NaN | NaN |
| FLETCHCR (CUTE) | 166463 | 165774 | 168739 | 175309 | 175845 | 240240 | 184939 | NaN | 174406 | 215687 |
CPU time of the hybrid methods HCG1–HCG10:

| Test function | HCG1 | HCG2 | HCG3 | HCG4 | HCG5 | HCG6 | HCG7 | HCG8 | HCG9 | HCG10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Perturbed Quadratic | 0.656 | 0.516 | 0.781 | 0.719 | 0.594 | 0.438 | 0.719 | 0.688 | 0.844 | 0.688 |
| Raydan 2 | 0.031 | 0.063 | 0.078 | 0.078 | 0.078 | 0.078 | 0.078 | 0.078 | NaN | 0.078 |
| Diagonal 2 | 1.453 | 1.328 | 1.656 | 1.172 | 1.438 | 1.797 | 1.813 | 1.266 | 1.250 | 1.141 |
| Extended Tridiagonal 1 | 1.016 | 1.125 | 1.359 | 2.250 | 2.375 | 7.578 | 1.672 | 1.375 | NaN | 0.922 |
| Diagonal 4 | 0.031 | 0.031 | 0.031 | 0.078 | 0.078 | 0.047 | 0.109 | 0.094 | NaN | 0.094 |
| Diagonal 5 | 0.141 | 0.063 | 0.156 | 0.094 | 0.094 | 0.125 | 0.109 | 0.078 | 0.219 | 0.156 |
| Extended Himmelblau | 0.172 | 0.172 | 0.109 | 0.141 | 0.172 | 0.141 | 0.125 | 0.141 | 0.172 | 0.125 |
| Full Hessian FH2 | 83.125 | 91.938 | 86.984 | 85.766 | 94.484 | 78.281 | 77.141 | 74.500 | 80.969 | 82.469 |
| Perturbed quadratic diagonal | 0.406 | 0.609 | 0.641 | 0.375 | 0.563 | 0.359 | 0.328 | 0.344 | NaN | 0.734 |
| Quadratic QF1 | 0.359 | 0.438 | 0.422 | 0.422 | 0.406 | 0.391 | 0.484 | 0.422 | NaN | 0.281 |
| Quadratic QF2 | 1.047 | 1.313 | 1.203 | 1.156 | 1.063 | 1.156 | 1.000 | NaN | 1.094 | 1.047 |
| TRIDIA (CUTE) | NaN | NaN | NaN | 1.688 | 1.391 | 1.859 | NaN | NaN | 1.875 | 1.391 |
| Almost Perturbed Quadratic | 0.406 | 0.438 | 0.516 | 0.594 | 0.250 | 0.359 | 0.406 | 0.578 | 0.641 | 0.422 |
| LIARWHD (CUTE) | 0.938 | 0.828 | 1.203 | 0.797 | 1.125 | 1.172 | 0.938 | 1.203 | NaN | 0.594 |
| POWER (CUTE) | 1.563 | 1.672 | 1.750 | 1.609 | 1.625 | 1.578 | 1.625 | 1.188 | NaN | 1.453 |
| NONSCOMP (CUTE) | 1.547 | 1.484 | 1.063 | 1.766 | 1.422 | 1.719 | 1.516 | 1.063 | 1.203 | 1.703 |
| QUARTC (CUTE) | 0.750 | 1.000 | 0.969 | 0.969 | 0.875 | 0.797 | 0.938 | 0.703 | 1.266 | 0.93 |
| Diagonal 6 | 0.078 | 0.078 | 0.078 | 0.094 | 0.063 | 0.016 | 0.016 | 0.125 | NaN | 0.109 |
| DIXON3DQ (CUTE) | 2.047 | 1.453 | 2.016 | 1.484 | 2.359 | 2.234 | 1.406 | 2.297 | NaN | 2.078 |
| BIGGSB1 (CUTE) | 1.875 | 2.047 | 2.359 | 1.750 | 2.250 | 2.391 | 1.422 | 2.672 | NaN | 2.422 |
| Generalized Quartic | 0.063 | 0.125 | 0.141 | 0.156 | 0.125 | 0.094 | 0.078 | 0.109 | 0.172 | 0.109 |
| Diagonal 7 | 0.063 | NaN | 0.016 | NaN | 0.109 | 0.063 | NaN | 0.063 | 0.063 | 0.063 |
| Diagonal 8 | 0.078 | 0.125 | 0.078 | 0.031 | NaN | 0.063 | 0.109 | NaN | NaN | 0.078 |
| Full Hessian FH3 | 0.063 | 0.047 | 0.109 | 0.047 | 0.031 | 0.063 | 0.047 | 0.109 | NaN | NaN |
| FLETCHCR (CUTE) | 5.656 | 6.750 | 7.922 | 9.484 | 6.484 | 8.766 | 7.281 | NaN | 6.906 | 7.547 |
| Label | T1 | T2 | T3 | T4 | T5 | T6 |
| --- | --- | --- | --- | --- | --- | --- |
| Value of the scalar $t$ | $t_k=\alpha_k$ | 0.05 | 0.1 | 0.2 | 0.5 | 0.9 |
| Method | T1 | T2 | T3 | T4 | T5 | T6 |
| --- | --- | --- | --- | --- | --- | --- |
| DHSDL | 32980.14 | 31281.32 | 33640.45 | 32942.36 | 34448.32 | 33872.36 |
| DLSDL | 30694.00 | 28701.14 | 31048.32 | 30594.77 | 31926.59 | 31573.05 |
| MHSDL | 29289.73 | 27653.64 | 29660.00 | 29713.50 | 30491.18 | 30197.27 |
| MLSDL | 25398.82 | 22941.77 | 24758.27 | 24250.68 | 25722.64 | 25032.64 |
| Method | T1 | T2 | T3 | T4 | T5 | T6 |
| --- | --- | --- | --- | --- | --- | --- |
| DHSDL | 1228585.50 | 1191960.55 | 1252957.09 | 1238044.36 | 1271176.59 | 1255710.45 |
| DLSDL | 1131421.41 | 1083535.14 | 1149482.41 | 1134315.00 | 1167030.14 | 1158554.77 |
| MHSDL | 1089700.41 | 1036710.32 | 1089777.64 | 1091985.41 | 1105299.91 | 1101380.18 |
| MLSDL | 904217.14 | 845017.55 | 891669.50 | 879473.14 | 913165.68 | 895652.36 |
| Method | T1 | T2 | T3 | T4 | T5 | T6 |
| --- | --- | --- | --- | --- | --- | --- |
| DHSDL | 902.06 | 894.73 | 917.77 | 930.56 | 911.28 | 870.93 |
| DLSDL | 816.08 | 790.63 | 804.69 | 816.28 | 803.84 | 809.67 |
| MHSDL | 770.78 | 751.65 | 728.61 | 749.70 | 712.64 | 720.57 |
| MLSDL | 573.14 | 587.41 | 581.50 | 576.32 | 582.62 | 580.96 |
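The HCG and DHSDL/DLSDL/MHSDL/MLSDL columns in the tables above are variants of nonlinear conjugate gradient methods. As an illustration of the family only (not of any specific variant benchmarked here), a generic Fletcher-Reeves iteration with an Armijo backtracking line search can be sketched in a few lines:

```python
# Generic nonlinear conjugate gradient method: Fletcher-Reeves beta with
# Armijo backtracking.  Illustrative sketch of the family; the HCG and
# DHSDL/DLSDL/MHSDL/MLSDL variants above use different beta formulas
# and line-search rules.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def fletcher_reeves(f, grad, x0, tol=1e-8, max_iter=10000):
    x = list(x0)
    g = grad(x)
    d = [-gi for gi in g]                    # start with steepest descent
    for _ in range(max_iter):
        if dot(g, g) ** 0.5 < tol:
            break
        # Armijo backtracking line search along the descent direction d
        t, gd = 1.0, dot(g, d)
        while f([xi + t * di for xi, di in zip(x, d)]) > f(x) + 1e-4 * t * gd:
            t *= 0.5
        x_new = [xi + t * di for xi, di in zip(x, d)]
        g_new = grad(x_new)
        beta = dot(g_new, g_new) / dot(g, g)  # Fletcher-Reeves coefficient
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        if dot(g_new, d) >= 0:                # restart if not a descent direction
            d = [-gn for gn in g_new]
        x, g = x_new, g_new
    return x

# Convex quadratic test problem: f(x, y) = 1.5 x^2 + x y + y^2 - x - y,
# whose unique minimizer is (0.2, 0.4).
f = lambda v: 1.5 * v[0] ** 2 + v[0] * v[1] + v[1] ** 2 - v[0] - v[1]
grad = lambda v: [3 * v[0] + v[1] - 1, v[0] + 2 * v[1] - 1]
x = fletcher_reeves(f, grad, [0.0, 0.0])
```

The restart safeguard is what keeps this sketch robust: whenever the updated direction fails to be a descent direction, it falls back to steepest descent.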

## Nick Gould

Before starting university in Oxford, I worked for 8 months in the Department of Numerical Analysis and Computing at the National Physical Laboratory in Teddington. I finished my D.Phil. at Oxford in 1982, having spent one year in the O.R. Department at Stanford University in California. I then spent three years as an Assistant Professor in the Department of Combinatorics and Optimization at the University of Waterloo, Ontario, Canada, before returning to England and joining the Numerical Analysis Group in the Computer Science and Systems Division at AEA Harwell.

I moved with the Group to the Central Computing Department (as it was then) at RAL in 1990 and have been there almost ever since. I spent a sabbatical year at CERFACS in Toulouse, France, during 1993.

I am a visiting Professor within the Department of Mathematics and Statistics at the University of Edinburgh, and at the University of Oxford. I won the Leslie Fox prize in numerical analysis in September 1986, and the Beale-Orchard-Hays prize for excellence in computational mathematical programming in August 1994. In May 2009, I was elected as one of 183 inaugural SIAM Fellows. I have published two books: the first on the software package LANCELOT in 1992, and the second on trust-region methods in 2000. I was editor-in-chief of the SIAM Journal on Optimization from 2005 until 2010, and an associate editor for the ACM Transactions on Mathematical Software, for the IMA Journal of Numerical Analysis, for Mathematics of Computation, and for Mathematical Programming, as well as being an Area Editor for Mathematical Programming Computation, until 2016.

My research interests are currently on the theory and practice of optimization methods, on numerical linear algebra, on large-scale scientific computation, and on the links between these fields. My other interests include church-bell ringing, hill walking, messing about on canals, mushroom collecting, and appreciating cask-conditioned ales of all varieties. I am a long-suffering supporter of Tottenham Hotspur FC.

In 2006, I was appointed as Professor of Numerical Optimisation and Tutorial Fellow of Exeter College at the University of Oxford. I returned full-time to RAL in 2008. I became an STFC Senior Fellow in 2011, and have worked half time since the summer of 2017.

My CV (PDF) gives all the gory details.

• Numerical optimization
• Numerical analysis
• Linear algebra
• Large scale scientific computation
• High performance computation

### ORCID id: 0000-0002-1031-1588, Web of Science Researcher id: AAS-2273-2020

N. I. M. Gould, S. Leyffer and Ph. L. Toint, "Nonlinear programming: theory and practice", Mathematical Programming B 100:1 (2004).


(S) = Spring and (F) = Fall

CAAM 378 (F) INTRODUCTION TO OPERATIONS RESEARCH AND OPTIMIZATION
Formulation and solution of mathematical models in management, economics, engineering, and science applications in which one seeks to minimize or maximize an objective function subject to constraints, including models in linear, nonlinear, and integer programming; basic solution methods for these optimization models; problem solving using a modeling language and optimization software.
Recommended Prerequisite(s): MATH 212 and any one of the following: CAAM 335, MATH 211, or MATH 355.

CAAM 454 (S) NUMERICAL ANALYSIS II
Iterative methods for linear systems of equations, including Krylov subspace methods; Newton and Newton-like methods for nonlinear systems of equations; gradient and Newton-like methods for unconstrained optimization and nonlinear least squares problems; techniques for improving the global convergence of these algorithms; linear programming duality and primal-dual interior-point methods. Credit may not be received for both CAAM 454 and CAAM 554.
Recommended Prerequisite(s): CAAM 453.
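The Newton-like methods for unconstrained optimization mentioned in this course description are, in two variables, exactly the iteration of Equation \ref{Eq2.14} at the start of this section. A minimal sketch follows; the test function f(x, y) = x^4 + y^2 - 4xy is an illustrative choice, not an example from the text:

```python
# Newton's algorithm for critical points of f(x, y), written in the
# determinant form of Equation 2.14.

def newton_critical_point(fx, fy, fxx, fyy, fxy, x0, y0, tol=1e-10, max_iter=50):
    """Iterate Equation 2.14 from (x0, y0) until the step falls below tol."""
    x, y = x0, y0
    for _ in range(max_iter):
        D = fxx(x, y) * fyy(x, y) - fxy(x, y) ** 2
        if D == 0:
            raise ZeroDivisionError("D(x, y) = 0: Newton step undefined")
        # The numerators are the two 2x2 determinants in Equation 2.14.
        dx = (fyy(x, y) * fx(x, y) - fxy(x, y) * fy(x, y)) / D
        dy = (fxx(x, y) * fy(x, y) - fxy(x, y) * fx(x, y)) / D
        x, y = x - dx, y - dy
        if abs(dx) < tol and abs(dy) < tol:
            break
    return x, y

# Partial derivatives of the illustrative f(x, y) = x**4 + y**2 - 4*x*y
fx  = lambda x, y: 4 * x**3 - 4 * y
fy  = lambda x, y: 2 * y - 4 * x
fxx = lambda x, y: 12 * x**2
fyy = lambda x, y: 2.0
fxy = lambda x, y: -4.0

x, y = newton_critical_point(fx, fy, fxx, fyy, fxy, 1.0, 1.0)
# from (1, 1) the iterates converge to the critical point (sqrt(2), 2*sqrt(2))
```

Since each step divides by D(x, y), the iteration breaks down wherever the Hessian determinant vanishes, which is one reason the choice of starting point matters.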

CAAM 471 (S) INTRODUCTION TO LINEAR AND INTEGER PROGRAMMING
Linear and integer programming involve formulating and solving fundamental optimization models widely used in practice. This course introduces the basic theory, algorithms, and software of linear and integer programming. Topics studied in the linear programming part include polyhedron concepts, simplex methods, duality, sensitivity analysis and decomposition techniques. Building on linear programming, the second part of this course introduces modeling with integer variables and solution methodologies in integer programming including branch-and-bound and cutting-plane techniques. This course will provide a basis for further studies in convex and combinatorial optimization.
Recommended Prerequisite(s): CAAM 335.

CAAM 474 (F) COMBINATORIAL OPTIMIZATION
General theory and approaches for solving combinatorial optimization problems are studied. Specific topics include basic polyhedral theory, minimum spanning trees, shortest paths, network flow, matching, and matroids. The course also covers the traveling salesman problem.
Recommended Prerequisite(s): CAAM 378 or 471.
Biennial Offered in Even Years

CAAM 554 (S) NUMERICAL ANALYSIS II
This course covers the same lecture material as CAAM 454, but fosters greater theoretical sophistication through more challenging problem sets and exams. Credit may not be received for both CAAM 454 and CAAM 554.
Recommended Prerequisite(s): CAAM 553.

CAAM 560 (F) OPTIMIZATION THEORY
Derivation and application of necessity conditions and sufficiency conditions for constrained optimization problems.

CAAM 564 (S) NUMERICAL OPTIMIZATION
Numerical algorithms for constrained optimization problems in engineering and sciences, including simplex and interior-point methods for linear programming, penalty, barrier, augmented Lagrangian and SQP methods for nonlinear programming.
Recommended Prerequisite(s): CAAM 560 (may be taken concurrently) and CAAM 454.
Biennial Offered in Even Years

CAAM 565 (F) CONVEX OPTIMIZATION
Convex optimization problems arise in communication, system theory, VLSI, CAD, finance, inventory, network optimization, computer vision, learning, statistics, etc., even though oftentimes convexity may be hidden and unrecognized. Recent advances in interior-point methodology have made it much easier to solve these problems, and various solvers are now available. This course will introduce the basic theory and algorithms for convex optimization, as well as its many applications to computer science, engineering, management science and statistics.
Recommended Prerequisite(s): CAAM 335 and CAAM 401.
Biennial Offered in Odd Years

CAAM 571 (S) INTRODUCTION TO LINEAR AND INTEGER PROGRAMMING
This course covers the same lecture material as CAAM 471, but fosters greater theoretical sophistication through more challenging problem sets and exams. Credit may not be received for both CAAM 471 and CAAM 571.

CAAM 640 (BOTH) OPTIMIZATION WITH SIMULATION CONSTRAINTS
Content varies from year to year. Course may be repeated for credit.
Recommended Prerequisite(s): CAAM 564.

CAAM 654 (BOTH) TOPICS IN OPTIMIZATION
Content varies from year to year.