On q-BFGS algorithm for unconstrained optimization problems

Variants of the Newton method are popular for solving unconstrained optimization problems, and the study of the global convergence of the BFGS method has made good progress. The q-gradient reduces to its classical version when q approaches 1. In this paper, we propose a quantum Broyden–Fletcher–Goldfarb–Shanno (q-BFGS) algorithm in which the approximate Hessian is constructed using the q-gradient and a descent direction is found at each iteration. The algorithm applies an independent parameter q in the Armijo–Wolfe conditions to compute a step length that guarantees a decrease of the objective function value. Global convergence is established without any convexity assumption on the objective function. Further, the proposed method is verified on numerical test problems and the results are depicted through performance profiles.


Introduction
Several numerical methods have been developed extensively for solving unconstrained optimization problems. The gradient descent method is one of the simplest and most commonly used methods in the field of optimization [1]. This method is globally convergent, but suffers from a slow convergence rate as the iterative point approaches the minimizer. In order to improve the convergence rate, optimizers use the Newton method [1], which is one of the most popular methods due to its quadratic convergence. A major disadvantage of the Newton method is its slowness or non-convergence when the starting point is not taken close to an optimum; it also requires computing the inverse of the Hessian at every iteration, which is rather costly. The components of the Hessian matrix are constructed using the classical derivative, and the Hessian must be positive definite at every iteration. In quasi-Newton methods, instead of computing the actual Hessian, an approximation of the Hessian is used [1]. These methods require only first derivatives to build the approximation, so their computing costs are low.
The global convergence of the BFGS method has been studied by several authors [5, 12, 19–21] under a convexity assumption on the objective function. An example was given in [22] showing that the standard BFGS method can fail for non-convex functions with an inexact line search [12]. A modified BFGS method was developed to converge globally without a convexity assumption on the objective function [23]. Li et al. [23] addressed the open problem of whether the BFGS method with an inexact line search converges globally when applied to non-convex unconstrained optimization problems: they proposed a cautious BFGS update and proved that the method with either a Wolfe-type or an Armijo-type line search converges globally if the function to be minimized has a Lipschitz continuous gradient. The q-calculus, popularly known as quantum calculus, has gained a lot of interest in various fields of science: mathematics [24], physics [25], quantum theory [26], statistical mechanics [27], signal processing [28], etc., where the q-derivative is employed. It is also known as the Jackson derivative, as the concept was first introduced by Jackson [29]; it was further studied in the case of q-difference equations by Carmichael [30], Mason [31], Adams [32] and Trjitzinsky [33]. The word quantum usually refers to the smallest discrete quantity of some physical property, and it comes from the Latin word "quantus", which literally means how many. In mathematics, quantum calculus is referred to as calculus without limits, and it replaces the classical derivative by a difference operator.
A q-version of the steepest descent method was first developed in the field of optimization to solve single-objective nonlinear unconstrained problems; the method was able to escape from many local minima and reach the global minimum [34]. The q-LMS (Least Mean Square) algorithm was proposed by employing the q-gradient to compute the secant of the cost function instead of the tangent [28]; this algorithm takes larger steps towards the optimum solution and achieves a higher convergence rate. An improved version of the q-LMS algorithm was developed based on a new class of stochastic q-gradient methods; that approach achieves a high convergence rate by utilizing the concept of error-correlation energy and normalization of the signal [35]. Global optimization using the q-gradient was further studied in [36], where the parameter q is a dilation used to control the degree of localness of the search, solving several multimodal functions. Furthermore, a modified Newton method based on a deterministic scheme using the q-derivative was proposed [37, 38]. Recently, a mathematical package for q-series and partition-theory applications has been developed in MATHEMATICA [39].
A sequence {q^k} was introduced instead of a fixed positive number q in the Newton and limited-memory BFGS schemes in [37, 40]; for sufficiently large k, the resulting Hessian becomes almost the same as the exact Hessian of the objective function. The concept of the q-gradient, in contrast to the classical gradient in the q-least-mean-squares algorithm [28], provides extra freedom to control the performance of the algorithm, which we adopt in our proposed method.
In this article, we propose a method using the q-derivative for solving unconstrained optimization problems. This algorithm differs from the classical BFGS algorithm in that the search process moves from global in the beginning to local at the end. We utilize an independent parameter q ∈ (0, 1) in the Armijo–Wolfe conditions for finding the step length. The proposed algorithm with the Armijo–Wolfe line search is globally convergent for general objective functions. We then compare the new approach with an existing method.
This paper is organized as follows: In the next section, we give preliminary ideas about the q-calculus. In Sect. 3, we present the q-BFGS (quantum Broyden–Fletcher–Goldfarb–Shanno) method using q-calculus. In Sect. 4, the global convergence of the proposed algorithm is proved. In Sect. 5, we report some numerical experiments. Finally, we present a conclusion in the last section.

Preliminaries
In this section, we present some basic definitions from q-calculus. For q ≠ 1, the q-integer [n]_q is given by [41]

[n]_q = (1 − q^n)/(1 − q).

The q-derivative [42] of a function f : R → R is given by

D_q f(x) = (f(x) − f(qx))/((1 − q)x), x ≠ 0,

where the scalar q ∈ (0, 1); if f is differentiable, then D_q f(x) tends to f′(x) as q approaches 1. The q-derivative of a function of the form x^n is

D_q x^n = [n]_q x^{n−1}.

Let f(x) be a continuous function on [a, b]. Then there exist q̂ ∈ (0, 1) and x̃ ∈ (a, b) [43] such that

f(b) − f(a) = D_q f(x̃)(b − a), for q ∈ (q̂, 1) ∪ (1, q̂^{−1}).

The q-partial derivative of a function f : R^n → R at x ∈ R^n with respect to x_i, where the scalar q ∈ (0, 1), is given as [34]

D_{q,x_i} f(x) = (f(x_1, …, x_{i−1}, x_i, x_{i+1}, …, x_n) − f(x_1, …, x_{i−1}, qx_i, x_{i+1}, …, x_n)) / ((1 − q)x_i), x_i ≠ 0.

We now choose the parameter q as a vector, that is, q = (q_1, …, q_n)^T with each q_i ∈ (0, 1). Then the q-gradient vector [34] of f is

∇_q f(x) = (D_{q_1,x_1} f(x), …, D_{q_n,x_n} f(x))^T.

Let {q_i^k} be a real sequence with a fixed starting number q_i^0 ∈ (0, 1) for each i = 1, …, n, where k ∈ {0} ∪ N, constructed so that the sequence {q^k} converges to (1, …, 1)^T as k → ∞ [38]. Thus, the q-gradient reduces to its classical version in the limit. For the sake of convenience, we denote the q-gradient vector of f at x^k by

g_q^k = ∇_{q^k} f(x^k).

We focus our attention on solving the following unconstrained optimization problem:

min f(x), x ∈ R^n,   (4)

where f : R^n → R is continuously q-differentiable. In the next section, we present the q-BFGS algorithm.
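For intuition, the componentwise q-difference quotient above can be evaluated numerically. The following sketch (the helper `q_gradient` and the quadratic test function are our own illustrations, not code from the paper) assumes every component x_i ≠ 0:

```python
import numpy as np

def q_gradient(f, x, q):
    """q-gradient of f at x: the i-th component is the Jackson
    q-difference quotient (f(x) - f(..., q_i*x_i, ...)) / ((1 - q_i)*x_i)."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(len(x)):
        xq = x.copy()
        xq[i] = q[i] * x[i]
        g[i] = (f(x) - f(xq)) / ((1.0 - q[i]) * x[i])
    return g

# Example: f(x) = x1^2 + x2^2.  Since D_q x^2 = [2]_q x = (1 + q) x,
# the q-gradient at (2, 3) with q = (0.5, 0.5) is (3.0, 4.5).
f = lambda x: x[0]**2 + x[1]**2
print(q_gradient(f, [2.0, 3.0], [0.5, 0.5]))
```

As q approaches 1 componentwise, this quotient approaches the classical gradient (4, 6)^T, illustrating the limiting behavior stated above.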

On q-BFGS algorithm
The BFGS method for solving optimization problem (4) generates a sequence {x^k} by the iterative scheme

x^{k+1} = x^k + α_k d_q^k,   (5)

for k ∈ {0} ∪ N, where α_k is the step length and d_q^k is the q-BFGS descent direction obtained by solving

W_k d_q^k = −g_q^k,   (6)

where W_k is the q-quasi-Newton approximate Hessian. For k ≥ 1, the sequence {W_k} satisfies the secant equation

W_{k+1} s^k = y^k,

where s^k = x^{k+1} − x^k and y^k = g_q^{k+1} − g_q^k. We call the famous BFGS (Broyden [2], Fletcher [3], Goldfarb [4], and Shanno [5]) update formula in the context of q-calculus the q-BFGS update. Thus, the Hessian approximation W_k is updated by the q-BFGS formula

W_{k+1} = W_k − (W_k s^k (s^k)^T W_k)/((s^k)^T W_k s^k) + (y^k (y^k)^T)/((y^k)^T s^k).   (7)

A good property of Eq. (7) is that W_{k+1} inherits the positive definiteness of W_k as long as (y^k)^T s^k > 0, and it is numerically supported in the sense of the classical BFGS update. The condition (y^k)^T s^k > 0 is guaranteed to hold if the step length α_k is determined by an exact or inexact line search. For computing the step length, the modified Armijo–Wolfe line search conditions based on the q-gradient are

f(x^k + α_k d_q^k) ≤ f(x^k) + σ_1 α_k (g_q^k)^T d_q^k   (8)

and

∇_{q^k} f(x^k + α_k d_q^k)^T d_q^k ≥ σ_2 (g_q^k)^T d_q^k,   (9)

where 0 < σ_1 < σ_2 < 1. The first condition (8) is the Armijo condition; it ensures a sufficient reduction of the objective function. The second condition (9) is the curvature condition; it rules out unacceptably short step lengths.
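Conditions (8) and (9) can be checked with a simple backtracking search. The sketch below is a simplified stand-in (the function names, the backtracking strategy, and the default constants are our assumptions; the paper's actual step-length routine may differ), where `gq` denotes a callable returning the q-gradient:

```python
import numpy as np

def armijo_wolfe_step(f, gq, x, d, sigma1=1e-4, sigma2=0.9,
                      alpha=1.0, rho=0.5, max_backtracks=50):
    """Backtrack alpha until the Armijo condition (8) holds, then
    report whether the curvature (Wolfe) condition (9) also holds."""
    fx, gx = f(x), gq(x)
    slope = gx @ d                      # directional q-derivative, < 0 for descent
    for _ in range(max_backtracks):
        if f(x + alpha * d) <= fx + sigma1 * alpha * slope:   # condition (8)
            break
        alpha *= rho
    curvature_ok = gq(x + alpha * d) @ d >= sigma2 * slope    # condition (9)
    return alpha, curvature_ok

# Example with q -> 1, so the q-gradient is essentially the classical
# gradient of f(x) = ||x||^2:
f = lambda x: x @ x
gq = lambda x: 2.0 * x
a, ok = armijo_wolfe_step(f, gq, np.array([1.0, 0.0]), np.array([-2.0, 0.0]))
print(a, ok)  # -> 0.5 True
```

Here the full step alpha = 1 overshoots the minimizer, one backtrack lands exactly at the origin, and both conditions hold.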
The Armijo-type line search alone does not ensure the condition (y^k)^T s^k > 0, and hence W_{k+1} may fail to be positive definite even if W_k is positive definite. In order to ensure positive definiteness of W_{k+1}, the condition (y^k)^T s^k > 0 is sometimes used to decide whether or not W_k is updated. More specifically, we adopt the following cautious update due to [23]:

W_{k+1} = W_k − (W_k s^k (s^k)^T W_k)/((s^k)^T W_k s^k) + (y^k (y^k)^T)/((y^k)^T s^k), if (y^k)^T s^k / ‖s^k‖^2 ≥ ε‖g_q^k‖^β,
W_{k+1} = W_k, otherwise,   (10)

where ε and β are positive constants.
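The cautious rule (10) is a one-line guard around the standard update (7). A minimal sketch, with the values of ε and β chosen arbitrarily for illustration:

```python
import numpy as np

def cautious_bfgs_update(W, s, y, gq_norm, eps=1e-6, beta=1.0):
    """Cautious update (10): apply the BFGS formula (7) only when
    (y^T s)/||s||^2 >= eps * ||g_q||^beta; otherwise keep W unchanged,
    so W stays symmetric positive definite."""
    ys = y @ s
    if ys / (s @ s) >= eps * gq_norm**beta:
        Ws = W @ s
        W = W - np.outer(Ws, Ws) / (s @ Ws) + np.outer(y, y) / ys
    return W

# Curvature-positive pair: the update is applied (here W becomes diag(2, 1)).
W = cautious_bfgs_update(np.eye(2), np.array([1.0, 0.0]),
                         np.array([2.0, 0.0]), gq_norm=0.1)
```

When (y^k)^T s^k ≤ 0 the guard fails and W is returned untouched, which is exactly how positive definiteness is preserved across all iterations.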
It is not difficult to see from (10) that the updated matrix W_k is symmetric and positive definite for all k, which in turn implies that {f(x^k)} is a non-increasing sequence when the modified Armijo–Wolfe line search conditions (8) and (9) are used. On the basis of the above theory, we present the following q-BFGS Algorithm 1. In the next section, we investigate the global convergence of Algorithm 1.

Algorithm 1 q-BFGS algorithm
Require: Objective function f : R^n → R and a tolerance for convergence. Choose an initial point x^0 ∈ R^n, fix q_i^0 ∈ (0, 1) for i = 1, …, n, and choose an initial symmetric positive definite matrix W_0 ∈ R^{n×n}.
Ensure: Minimizer x* encountered, with corresponding objective function value f(x*).
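Putting the pieces of Algorithm 1 together, a simplified sketch might look as follows. The Armijo-only backtracking, the averaging rule that drives q^k towards (1, …, 1)^T, and all constants are our own assumptions rather than the paper's exact choices, and the q-difference quotient requires every component x_i ≠ 0:

```python
import numpy as np

def q_grad(f, x, q):
    # Componentwise Jackson q-difference quotient (Sect. 2).
    g = np.zeros_like(x)
    for i in range(len(x)):
        xq = x.copy(); xq[i] = q[i] * x[i]
        g[i] = (f(x) - f(xq)) / ((1.0 - q[i]) * x[i])
    return g

def q_bfgs(f, x0, q0, tol=1e-6, max_iter=400):
    """Sketch of the q-BFGS loop: q-gradient, direction from (6),
    Armijo backtracking for (8), cautious update (10), and q^k -> 1."""
    x, q = np.asarray(x0, float), np.asarray(q0, float)
    W = np.eye(len(x))
    for _ in range(max_iter):
        g = q_grad(f, x, q)
        if np.linalg.norm(g) <= tol:
            break
        d = np.linalg.solve(W, -g)          # W_k d = -g_q^k, Eq. (6)
        alpha = 1.0
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d):
            alpha *= 0.5                    # Armijo condition (8)
        s = alpha * d
        y = q_grad(f, x + s, q) - g
        if (y @ s) / (s @ s) >= 1e-6 * np.linalg.norm(g):   # cautious rule (10)
            Ws = W @ s
            W = W - np.outer(Ws, Ws) / (s @ Ws) + np.outer(y, y) / (y @ s)
        x = x + s
        q = (q + 1.0) / 2.0                 # drive q^k towards (1, ..., 1)^T
    return x

# Example: minimize (x1 - 2)^2 + (x2 - 2)^2 from x0 = (0.5, 0.5),
# starting with q^0 = (0.32, 0.32)^T as in the experiments below.
f = lambda x: (x[0] - 2.0)**2 + (x[1] - 2.0)**2
print(np.round(q_bfgs(f, [0.5, 0.5], [0.32, 0.32]), 4))  # -> [2. 2.]
```

Because q^k tends to the all-ones vector, the stationarity test ‖g_q^k‖ ≤ tol eventually measures the classical gradient, so the returned point approximates an ordinary critical point.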

Global convergence
In this section, we present the global convergence of Algorithm 1 under the following two assumptions.

Assumption 1
The objective function f(x) is bounded below on the level set Ω = {x ∈ R^n : f(x) ≤ f(x^0)}, where x^0 is the starting point of Algorithm 1.
Assumption 2 The function f is continuously q-differentiable on Ω, and there exists a constant L > 0 such that ‖g_q(x) − g_q(y)‖ ≤ L‖x − y‖ for all x, y ∈ Ω.
Since {f(x^k)} is a non-increasing sequence, it is clear that the sequence {x^k} generated by Algorithm 1 is contained in Ω. We define the index set

K = {k : (y^k)^T s^k / ‖s^k‖^2 ≥ ε‖g_q^k‖^β},

so that (10) can be re-expressed as: W_{k+1} is given by the q-BFGS formula (7) if k ∈ K, and W_{k+1} = W_k otherwise. The following lemma is used to prove the global convergence of Algorithm 1 within the context of q-calculus.
Lemma 3 Let f satisfy Assumption 1 and Assumption 2, and let {x^k} be generated by Algorithm 1 with q_i^k ∈ (0, 1), i = 1, …, n. If there are positive constants γ_1 and γ_2 such that, for infinitely many k,

‖W_k s^k‖ ≤ γ_1 ‖s^k‖   (Part 1)

and

(s^k)^T W_k s^k ≥ γ_2 ‖s^k‖^2,   (Part 2)

then

lim inf_{k→∞} ‖g_q^k‖ = 0.   (12)

Proof Since s^k = α_k d_q^k, using Part 1 of this lemma and (6), we obtain (13) and (14); from (13) and (14) we get (15), and substituting gives (16). We first consider the case where the Armijo-type line search (8) is used with backtracking parameter ρ. If α_k < 1, then the trial step ρ^{−1}α_k violates (8), and from the q-mean value theorem there is a θ_k ∈ (0, 1) such that (17) holds. From Assumption 2 we get (18), and combining (17) and (18) gives, for any k ∈ K, a lower bound on α_k; using (16) in this inequality yields (19). We now consider the case where the Wolfe-type line search (9) is used. From (9) and Assumption 2 we obtain (20). The inequalities (19) and (20) together show that {α_k}_{k∈K} is bounded away from zero when the Armijo–Wolfe line search conditions are used. This, together with (9), (15) and (16), implies (12).
Lemma 3 indicates that to prove the global convergence of Algorithm 1, it suffices to show that there are positive constants γ 1 and γ 2 such that Part 1 and Part 2 hold for infinitely many k. For this purpose, we require the following lemma which may be proved in the light of [20, Theorem 2.1].

Lemma 4
If there are positive constants m and M such that, for each k ≥ 0,

(y^k)^T s^k / ‖s^k‖^2 ≥ m and ‖y^k‖^2 / ((y^k)^T s^k) ≤ M,

then there exist constants γ_1 and γ_2 such that, for any positive integer t, Part 1 and Part 2 of Lemma 3 hold for at least t/2 values of k ∈ {1, …, t}.
From Lemma 3 and Lemma 4, we now prove the global convergence for Algorithm 1.
Theorem 5 Let f satisfy Assumption 1 and Assumption 2, and {x k } be generated by Algorithm 1. Then Eq. (12) holds.
Proof If K is finite, then W_k remains constant after a finite number of iterations. Since each W_k is symmetric and positive definite, there are constants γ_1 and γ_2 such that Part 1 and Part 2 of Lemma 3 hold for all sufficiently large k. Suppose now that K is infinite and, for the sake of contradiction, that (12) is not true. Then there exists a positive constant δ such that ‖g_q^k‖ ≥ δ for all k. For k ∈ K, the definition of K gives

(y^k)^T s^k ≥ ε δ^β ‖s^k‖^2,

and from Assumption 2 we get ‖y^k‖^2 ≤ L^2 ‖s^k‖^2, so that

‖y^k‖^2 / ((y^k)^T s^k) ≤ L^2/(ε δ^β).

Applying Lemma 4 to the matrix subsequence {W_k}_{k∈K}, we conclude that Part 1 and Part 2 of Lemma 3 hold for infinitely many k, and Lemma 3 then yields (12), a contradiction. Consequently, there exists a subsequence of {x^k} converging to a q-critical point of (4). As k → ∞, q^k approaches (1, …, 1)^T, so a q-critical point eventually approximates a critical point. If the objective function f is convex, then every local minimum point is a global minimum point; since the sequence {f(x^k)} converges, every accumulation point of {x^k} is then a global optimal solution of (4). This completes the proof.
The above theorem proves the global convergence of the q-BFGS algorithm without any convexity assumption on the objective function.

Numerical experiments
This section reports some numerical experiments with Algorithm 1, tested on problems taken from [44]. Our numerical experiments were performed on a laptop with an Intel(R) Core(TM) i3-4005U CPU (1.70 GHz) and 4 GB RAM, using MATLAB (2017a). We used the condition ‖g_q(x^k)‖ ≤ 10^{−6} as the stopping criterion, and the program stops if the total iteration number exceeds 400. For each problem we chose the initial matrix W_0 = I_n, where I_n is the identity matrix. First, we find the q-gradient of the following problem when the parameter q is not fixed.
Example 2 Consider a function f : R^2 → R. We need to find the q-gradient vector at x = (2, 3)^T and x = (−4, 5)^T. This example uses the sequence {q^k} with an initial vector q^0 = (0.32, 0.32)^T.
Tables 1 and 2 show the computed values of f(x), f(q^k x) and the q-gradient, where g_{q^k}(1) and g_{q^k}(2) are the first and second components of the q-gradient. The Rosenbrock function is a non-convex function introduced by Rosenbrock in 1960.
We consider this function to measure the performance of Algorithm 1. The successive iterative points for this case are depicted in Fig. 3; in particular, Fig. 3(a) shows that our proposed method takes larger steps to converge due to the q-gradient.
With different starting points, we compare our algorithm with [23] on the Rosenbrock function. The numerical results are shown in Tables 3 and 4, where the columns 'it', 'fe', and 'ge' indicate the total number of iterations, the total number of function evaluations, and the total number of gradient evaluations, respectively. Note that the total number of q-gradient evaluations for q-BFGS and the total number of gradient evaluations for BFGS use the same notation. Dolan and Moré [46] presented an appropriate statistical technique, the performance profile, to compare solvers. The performance ratio is

ρ_{p,s} = r_{p,s} / min{r_{p,s} : s ∈ S},

where r_{p,s} is the cost (iterations, function evaluations, or q-gradient evaluations) that solver s spends on problem p, and S is the set of solvers. The cumulative distribution function is

P_s(τ) = (1/n_p) |{p : ρ_{p,s} ≤ τ}|,

where n_p is the number of problems in the test set; P_s(τ) is the fraction of problems for which the performance ratio ρ_{p,s} is within a factor τ of the best possible ratio. That is, for the methods being analyzed, we plot the fraction P_s(τ) of problems for which a given method is within a factor τ of the best. We use this tool to show the performance of Algorithm 1. Figures 4(a), 4(b) and 4(c) show that the q-BFGS method solves about 82%, 59% and 89% of the Rosenbrock test problems with the least number of iterations, function evaluations and gradient evaluations, respectively.

Example 5 Consider an unconstrained optimization problem f : R^2 → R such that f(x_1, x_2) = 2 + (x_1 − 2)^2 + (x_2 − 2)^2.
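The Dolan–Moré quantities ρ_{p,s} and P_s(τ) are straightforward to compute from a table of per-problem costs. A sketch, with the solver costs invented purely for illustration:

```python
import numpy as np

def performance_profile(costs, taus):
    """costs: (n_problems, n_solvers) array of r_{p,s}, e.g. iteration
    counts.  Returns a (len(taus), n_solvers) array whose entry [t, s]
    is P_s(tau) = fraction of problems on which solver s is within a
    factor tau of the best solver."""
    costs = np.asarray(costs, dtype=float)
    ratios = costs / costs.min(axis=1, keepdims=True)   # rho_{p,s}
    return np.array([[np.mean(ratios[:, s] <= t)
                      for s in range(costs.shape[1])] for t in taus])

# Two hypothetical solvers on three problems (iteration counts):
costs = [[10, 12], [20, 15], [30, 60]]
print(performance_profile(costs, taus=[1.0, 2.0]))
```

At τ = 1 the profile reports how often each solver is outright best (2/3 versus 1/3 of the problems here), and both reach 1.0 by τ = 2, i.e. each solver is within a factor 2 of the best on every problem.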
The Rastrigin function is a non-convex function. Its visualization over the area from −1 to 1, with many local minima and the global optimum at (0, 0)^T, is shown in Fig. 6(a), and the successive iterative points are shown in Fig. 6(b). The numerical results for this function are shown in Table 5.
We used the 19 test problems shown in Table 6, with attributes problem number, problem name and starting point, respectively. Table 7 shows the computational results for the q-BFGS and BFGS [23] methods on small-scale test problems. Figures 7(a), 7(b) and 7(c) show that the q-BFGS method solves about 95%, 79% and 90% of the test problems with the least number of iterations, function evaluations and gradient evaluations, respectively. It can therefore be concluded that q-BFGS performs better than the BFGS method of [23], improving the performance in terms of iterations, function evaluations, and gradient evaluations.

Conclusion
We have proposed a q-BFGS update and shown that the method converges globally under the Armijo–Wolfe line search conditions. The proposed method behaves like the classical BFGS method in the limiting case, yet the existence of second-order partial derivatives at every point is not required: first-order q-differentiability of the function is sufficient to prove the global convergence of the proposed method. The q-gradient enables the q-BFGS quasi-Newton search process to be carried out in a more diverse set of directions and to take larger steps towards convergence. The reported numerical results show that the proposed method is efficient in comparison with the existing method for solving unconstrained optimization problems. However, other modified BFGS methods using the q-derivative are yet to be studied.