A New Filter Nonmonotone Adaptive Trust Region Method for Unconstrained Optimization

Abstract: In this paper, a new filter nonmonotone adaptive trust region method with fixed step length for unconstrained optimization is proposed. The trust region radius adopts a new adaptive strategy to avoid additional computational cost at each iteration. A new nonmonotone trust region ratio is introduced. When a trial step is not successful, a multidimensional filter is employed to increase the possibility of the trial step being accepted. If the trial step is still not accepted by the filter set, a new iteration point can be found along the trial step, with the step length computed by a fixed formula. The symmetric positive definite approximation of the Hessian matrix is updated by the MBFGS method. The global convergence and superlinear convergence of the proposed algorithm are proven under some classical assumptions. The efficiency of the algorithm is demonstrated by numerical results.


Introduction
Consider the following unconstrained optimization problem:
$$\min_{x \in \mathbb{R}^n} f(x), \qquad (1)$$
where $f : \mathbb{R}^n \to \mathbb{R}$ is a twice continuously differentiable function. The trust region method is one of the prominent classes of iterative methods. At the iteration point $x_k$, the trial step $d_k$ is obtained from the following quadratic subproblem:
$$\min_{d \in \mathbb{R}^n} m_k(d) = f(x_k) + g_k^T d + \frac{1}{2} d^T B_k d, \quad \text{s.t. } \|d\| \le \Delta_k, \qquad (2)$$
where $\|\cdot\|$ is the Euclidean norm, $g_k = \nabla f(x_k)$, $B_k$ is a symmetric approximation of $G_k = \nabla^2 f(x_k)$, and $\Delta_k$ is the trust region radius. The most ordinary ratio is defined as follows:
$$\rho_k = \frac{f(x_k) - f(x_k + d_k)}{m_k(0) - m_k(d_k)}. \qquad (3)$$
Generally, the numerator is referred to as the actual reduction and the denominator is known as the predicted reduction.
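To make the ratio test concrete, the following sketch computes an approximate solution of the subproblem (2) via the classical Cauchy point and evaluates the ratio (3). The Cauchy point is a standard stand-in here, not necessarily the subproblem solver assumed in the paper, and all function names are ours:

```python
import numpy as np

def cauchy_point(g, B, delta):
    # Minimize the quadratic model along the steepest descent direction -g,
    # clipped to the trust region boundary ||d|| <= delta.
    gnorm = np.linalg.norm(g)
    gBg = g @ B @ g
    # Non-positive curvature along -g: step to the boundary.
    tau = 1.0 if gBg <= 0 else min(1.0, gnorm**3 / (delta * gBg))
    return -tau * (delta / gnorm) * g

def tr_ratio(f, x, g, B, d):
    # Classical ratio (3): actual reduction over predicted reduction.
    actual = f(x) - f(x + d)
    predicted = -(g @ d + 0.5 * d @ B @ d)  # m_k(0) - m_k(d)
    return actual / predicted
```

When $B_k$ equals the exact Hessian of a quadratic objective, the model is exact and the ratio is 1, which is the behavior the acceptance test rewards.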
The disadvantage of the traditional trust region method is that the subproblem may need to be solved many times within one iteration before an acceptable trial step is found. To overcome this drawback, Mo et al. [1] first proposed a nonmonotone trust region algorithm with a fixed step length: when the trial step is not acceptable, a fixed step length is used to find a new iteration point instead of re-solving the subproblem. Building on this idea, Ou, Hang, and Wang proposed trust region algorithms with fixed step length in [2–4], respectively. The fixed step length $\alpha_k$ is computed by the explicit formula (4). It is well known that the strategy for selecting the trust region radius has a significant impact on the performance of trust region methods. In 1997, Sartenaer [5] presented a strategy which can automatically determine an initial trust region radius; however, this strategy can increase the number of subproblems to be solved for some problems, thereby reducing the efficiency of these methods. In 2002, Zhang et al. [6] provided another scheme to reduce the number of subproblems that need to be solved, in which the trust region radius is chosen adaptively. Zhang's strategy requires estimating the matrix $B_k$ and its inverse $B_k^{-1}$ at each iteration; Li [4] has therefore suggested another practically efficient adaptive trust region radius. That strategy requires not only the gradient value but also the function value.
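The paper's fixed formula (4) is not reproduced above. Purely for illustration, one explicit (line-search-free) choice of step length along a descent trial step $d_k$ is the exact minimizer of the quadratic model on the segment $t \in (0, 1]$; this is an assumption of ours, not necessarily the authors' formula:

```python
import numpy as np

def fixed_step_length(g, B, d):
    # Exact minimizer of t -> g^T (t d) + 0.5 (t d)^T B (t d) over t > 0,
    # clipped to (0, 1]. Assumes d is a descent direction (g^T d < 0).
    # A hypothetical stand-in for the paper's formula (4).
    curvature = d @ B @ d
    if curvature <= 0:
        return 1.0  # non-convex model along d: take the full trial step
    return min(1.0, -(g @ d) / curvature)
```

The point of such a formula is that it costs one matrix-vector product and no extra function evaluations, which is exactly the economy the fixed-step-length approach is after.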
As is well known, monotone techniques may slow down the rate of convergence, especially in the presence of a narrow curved valley, since they require the function value to decrease at every iteration. In order to overcome these disadvantages, Deng et al. [11] proposed a nonmonotone trust region algorithm in 1993. The general nonmonotone term is
$$f_{l(k)} = \max_{0 \le j \le N(k)} \{ f_{k-j} \},$$
where $N(0) = 0$, $0 \le N(k) \le \min\{N(k-1)+1, N\}$ for $k \ge 1$, and $N \ge 0$ is an integer. Deng et al. [11] modified the ratio (3), which evaluates the consistency between the quadratic model and the objective function in trust region methods. The most common nonmonotone ratio is defined as follows:
$$\hat\rho_k = \frac{f_{l(k)} - f(x_k + d_k)}{m_k(0) - m_k(d_k)}.$$
The general nonmonotone term $f_{l(k)}$ suffers from various drawbacks, such as the fact that numerical performance is highly dependent on the choice of $N$. In order to introduce a more suitable nonmonotone strategy, Ahookhosh et al. [12] proposed a new nonmonotone ratio as follows:
$$\rho'_k = \frac{R_k - f(x_k + d_k)}{m_k(0) - m_k(d_k)}, \quad R_k = \eta_k f_{l(k)} + (1 - \eta_k) f_k, \qquad (7)$$
where $\eta_k \in [\eta_{\min}, \eta_{\max}] \subseteq [0, 1]$. We recommend that interested readers refer to [13,14] for more details on and progress in nonmonotone trust region algorithms.
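The two nonmonotone quantities above can be computed in a couple of lines. The sketch below assumes the function-value history is kept as a plain list and that the window length is $N + 1$ (the current value plus the $N$ previous ones); the function name is ours:

```python
def nonmonotone_terms(f_history, N, eta):
    # f_history: objective values f_0, ..., f_k (most recent last).
    # f_l(k) is the maximum over the last N+1 values; R_k is the convex
    # combination eta * f_l(k) + (1 - eta) * f_k used in ratio (7).
    window = f_history[-(N + 1):]
    f_lk = max(window)
    f_k = f_history[-1]
    return f_lk, eta * f_lk + (1.0 - eta) * f_k
```

Note how $\eta_k$ interpolates between the fully monotone test ($\eta_k = 0$, i.e., $R_k = f_k$) and the max-based test ($\eta_k = 1$, i.e., $R_k = f_{l(k)}$), which is what makes the strategy less sensitive to the choice of $N$.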
In order to overcome the difficulties associated with using the penalty function, especially the adjustment of the penalty parameter, filter methods were presented by Fletcher and Leyffer [15] for constrained nonlinear optimization. More recently, Gould et al. [16] explored a new nonmonotone trust region algorithm for unconstrained optimization problems using the multidimensional filter technique of [17]. Compared with the standard nonmonotone algorithm, the new algorithm dynamically determines iterations based on filter elements and increases the possibility of the trial step being accepted. Therefore, this topic has received great attention in recent years (see [18–21]).
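A multidimensional filter of the kind introduced in [17] stores gradients of past iterates and accepts a trial point whose gradient is not dominated, componentwise, by any stored entry. The sketch below is a simplified illustration of that acceptance test; the margin parameter `gamma` and both function names are our assumptions, not the paper's notation:

```python
import numpy as np

def acceptable_to_filter(g_new, filter_set, gamma=1e-3):
    # g_new is acceptable if, for every stored gradient g_l, at least one
    # component of |g_new| improves on |g_l| by a small relative margin.
    for g_l in filter_set:
        if not np.any(np.abs(g_new) <= np.abs(g_l) - gamma * np.linalg.norm(g_l)):
            return False  # g_new is dominated by g_l: reject
    return True
```

Because a trial point only needs to improve one gradient component against each filter entry, this test is much weaker than a decrease condition on $f$, which is precisely why it increases the chance of accepting the trial step.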
The remainder of this paper is organized as follows. In Section 2, we describe a new trust region method. The global convergence is investigated in Section 3. In Section 4, we prove the superlinear convergence of the algorithm. Numerical results are shown in Section 5. Finally, the paper ends with some conclusions in Section 6.

The New Algorithm
In this section, we propose a trust region method that combines a new trust region radius with a modified trust region ratio to solve unconstrained optimization problems effectively. In each iteration, a trial step $d_k$ is generated by solving the subproblem (2) with an adaptive radius, in which $0 < \gamma < 1$ is a constant and $c_k$ is an adjustment parameter. Prompted by the adaptive technique, the proposed method has the following effective properties: it is not necessary to compute the inverse of a matrix or additional function values at each iteration point, which reduces the associated workload and computation time.
In fact, the matrix $B_k$ is usually obtained by approximation, and the subproblems are only solved approximately. In this case, it may be more reasonable to adjust the next trust region radius according to how well the model agrees with the objective. To improve the efficiency of nonmonotone trust region methods, we define a modified ratio formula based on (7), where $m$ is a positive integer and $w_{ki}$ is the corresponding weight. More exactly, the modified ratio $\hat\rho_k$ is used to determine whether the trial step is acceptable. Adjusting the next radius $\Delta_{k+1}$ depends on (11); thus, $c_k$ is updated by (12). A multidimensional filter $F$ is a list of $n$-tuples of the form of gradient values, where $g_k$ and $g_l$ belong to $F$. Our discussion can be summarized as the following Algorithm 1.

Algorithm 1.
A new filter nonmonotone adaptive trust region method.
Step 0. (Initialization) An initial point $x_0 \in \mathbb{R}^n$ and a symmetric matrix $B_0 \in \mathbb{R}^{n \times n}$ are given.
The constants $\eta_{\min}$ and $\eta_{\max}$ with $0 \le \eta_{\min} \le \eta_{\max} \le 1$ are also given.
Step 2. Solve the subproblem (2) to find the trial step $d_k$.
Step 3. Choose $\eta_k$ and compute the nonmonotone ratio.
Step 4. (Test the trial step) If $x_k^+ = x_k + d_k$ is acceptable to the filter $F$, set $x_{k+1} = x_k^+$. Otherwise, find a step length $\alpha_k$ satisfying (4) and set $x_{k+1} = x_k + \alpha_k d_k$.
Step 5. Update the trust region radius by (12).
Step 6. Compute the new Hessian approximation $B_{k+1}$, set $k := k + 1$, and go to Step 2.
In order to obtain the convergence results, we use the following notation: let $S$ denote the set of iterations at which the trial step is accepted, either by the ratio test or by the filter. When $k \notin S$, the new iterate is obtained along $d_k$ with the fixed step length $\alpha_k$.
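The overall structure of such a method can be sketched as a compact driver loop. The sketch below is deliberately simplified: it uses a Cauchy-point step, the classical monotone ratio, and a doubling/halving radius rule as stand-ins for the paper's nonmonotone ratio, filter test, and adaptive updates (11)–(12); the MBFGS update follows the Li–Fukushima idea of modifying $y_k$ so the curvature condition always holds, which may differ from the paper's exact variant:

```python
import numpy as np

def mbfgs_update(B, s, y):
    # MBFGS-style update: modify y so that y_mod^T s > 0 always holds,
    # then apply the standard BFGS formula (keeps B positive definite).
    t = 1.0 + max(0.0, -(y @ s) / (s @ s))
    y_mod = y + t * s  # y_mod^T s >= ||s||^2 > 0
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y_mod, y_mod) / (y_mod @ s)

def trust_region(f, grad, x0, delta0=1.0, mu=0.25, tol=1e-8, max_iter=500):
    # Simplified monotone trust region driver (illustration only).
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size)
    delta = delta0
    for _ in range(max_iter):
        g = grad(x)
        gn = np.linalg.norm(g)
        if gn <= tol:
            break
        # Cauchy point: model minimizer along -g, clipped to the ball.
        gBg = g @ B @ g
        tau = 1.0 if gBg <= 0 else min(1.0, gn**3 / (delta * gBg))
        d = -tau * (delta / gn) * g
        pred = -(g @ d + 0.5 * d @ B @ d)      # predicted reduction
        rho = (f(x) - f(x + d)) / pred         # classical ratio (3)
        if rho >= mu:                          # accept the trial step
            y = grad(x + d) - g
            B = mbfgs_update(B, d, y)
            x = x + d
            if rho > 0.75:
                delta *= 2.0                   # model trustworthy: enlarge
        else:
            delta *= 0.5                       # reject and shrink the radius
    return x
```

On a strongly convex quadratic this skeleton drives the gradient below the tolerance; the filter and fixed-step-length machinery of Algorithm 1 would replace the plain rejection branch.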

Convergence Analysis
To establish the convergence of Algorithm 1, we make the following common assumption.

H1. The level set $L(x_0) = \{ x \in \mathbb{R}^n : f(x) \le f(x_0) \}$ is bounded.
Remark 1. In order to analyze the convergence of the new algorithm, the trial step $d_k$ is required to satisfy the standard sufficient decrease condition (16) and the step bound (17), where the constant $\beta \in (0, 1)$.
Remark 2. If $f$ is a twice continuously differentiable function, then H1 implies that there is a positive constant $M$ such that $\|\nabla^2 f(x)\| \le M$ on the level set.
Proof of Lemma 1. The result follows from Taylor's expansion. □
Proof of Lemma 2. We argue by contradiction: assume that there exists an iteration $k$ for which the assertion fails. It then follows from (11) that an estimate holds which contradicts (20). This completes the proof of Lemma 2. □

Lemma 3. Suppose that H1-H3 hold and the sequence $\{x_k\}$ is generated by Algorithm 1.
Proof. According to the definition of $R_k$, we have $f_k \le R_k$ for all $k$. Using the differential mean value theorem, we obtain (25). Note that (4) and (16) imply (26). According to (17), (25), and (26), we can conclude that (24) holds.
(a) When $k \in S$, consider two cases. Case 1: $k \in D$. According to (7) and (16), we obtain the required bound, and the above two inequalities show that the sequence is monotone. Thus the claim follows.
(b) When $k \in D$, i.e., $\hat\rho_k \ge \mu_1$, using (11) and (7) we deduce that $\hat\rho'_k \ge \mu_1$. From (12), we obtain the update of $c_k$. Then, assuming the contrary, Taylor's formula together with H1-H3 easily yields (36). Combining (35) with (36), we obtain (37). Moreover, inequality (16), together with $\|g_k\| \ge \epsilon$, implies (38). Multiplying both sides of inequality (38) by $(1 - \mu)$ gives (39). On the other hand, from H3, (37), and (39), we reach a contradiction. The proof is completed. □
Based on the analyses and lemmas above, we obtain the global convergence of Algorithm 1 as follows.
Theorem 1. Suppose that H1-H3 hold; then Algorithm 1 is globally convergent.
Proof. Consider the two cases $k \in S$ and $k \notin S$ and apply the preceding lemmas. □
Theorem 2. Suppose, in addition, that the standard conditions for quasi-Newton superlinear convergence hold. Then the sequence $\{x_k\}$ converges to $x^*$ superlinearly, that is, $\|x_{k+1} - x^*\| = o(\|x_k - x^*\|)$.
Proof. Following Lemmas 1 and 2, it is obvious that $\hat\rho_k \ge \mu_1$ for sufficiently large $k$. This shows that Algorithm 1 reduces to a standard quasi-Newton method with superlinear convergence [22]. Thus, the superlinear convergence of this algorithm can be proven similarly to Theorem 5.5.1 in [22]; we omit the details for convenience. □

Preliminary Numerical Experiments
In this section, we report numerical experiments on Algorithm 1 and compare it with the methods of Mo [1] and Hang [4]. A set of unconstrained test problems of variable dimension is selected from [23]. The experiments were run in MATLAB 9.4 on an Intel(R) Core(TM) processor at 2.00 GHz with 6 GB of RAM. The common parameters of the algorithms take exactly the same values, e.g., $\mu_1 = 0.25$.

Conclusions
In this paper, we proposed a new filter nonmonotone trust region method with the following innovations: (1) a new adaptive radius strategy that reduces the computational cost; (2) a modified trust region ratio that solves unconstrained optimization problems effectively; and (3) a multidimensional filter that increases the possibility of the trial step being accepted. Theorems 1 and 2 show that the proposed algorithm preserves global convergence and superlinear convergence, respectively. Preliminary numerical experiments indicate that the new algorithm is effective for unconstrained optimization and that the nonmonotone technique is helpful for many optimization problems. In future work, we will pursue further ideas, such as combining a modified conjugate gradient method with a modified trust region method, and applying the new algorithm to constrained optimization problems.
Funding: This research received no external funding.