Abstract
Reliable bounding of a function’s range is essential for deterministic global optimization, approximation, locating roots of nonlinear equations, and several other areas of computational mathematics. Despite years of extensive research in this direction, there is still room for improvement. The traditional and compelling approach to this problem is interval analysis. We show that accounting for convexity and concavity can significantly tighten the bounds computed by interval analysis. To make our approach applicable to a broad range of functions, we also develop techniques for handling nondifferentiable composite functions, for which the traditional ways of establishing convexity fail. Experimental evaluation shows the remarkable potential of the proposed methods.
1. Introduction
Reliable bounding of univariate functions is one of the primary techniques in global optimization, i.e., finding the solution for the following problem:
Problem (1) has many practical applications [,,,,,]. Besides solving problems of one variable, univariate search serves as an auxiliary method in multivariate global optimization. A promising optimization technique known as space-filling curves reduces an optimization [,] or approximation [] problem of multiple variables to a sequence of univariate problems. Univariate optimization techniques are widely used in separable programming [], where the objective and constraints are sums of functions of one variable. Many univariate optimization methods extend directly to the multivariate case [,].
Univariate global optimization has been intensively studied over the last decades. The first results date back to the early 1970s. Seminal works in this area [,,,] relied on the Lipschitzian property of a function:
In [,] the “saw-tooth cover” lower and upper bounding functions for Lipschitzian objectives were proposed. The lower (upper) bounding functions were defined as (), where L is a Lipschitz constant and is a set of function evaluation points. Since the functions are piecewise linear, their range can be easily computed. This makes such estimates attractive for bounding an objective from below and/or above. Other approaches exploiting the property (2) were studied in numerous papers [,,]. In papers [,,], the Lipschitzian first derivatives were used to facilitate the global search. Good surveys on Lipschitzian optimization can be found in [,,,].
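To make the construction concrete, the following sketch evaluates saw-tooth bounds at a point from a set of evaluation points. It assumes the standard Piyavskii–Shubert form, with the lower bound max_i(f(x_i) - L|x - x_i|) and the symmetric upper bound; the function and variable names are ours.

```python
import math

def sawtooth_bounds(x, eval_points, f_values, L):
    """Lower/upper bounds of f at x from known evaluations and a Lipschitz constant L."""
    lower = max(fv - L * abs(x - xv) for xv, fv in zip(eval_points, f_values))
    upper = min(fv + L * abs(x - xv) for xv, fv in zip(eval_points, f_values))
    return lower, upper

# Example: bound f(x) = sin(x) (Lipschitz constant 1) at x = 2 from three evaluations on [0, 3].
pts = [0.0, 1.5, 3.0]
vals = [math.sin(p) for p in pts]
print(sawtooth_bounds(2.0, pts, vals, L=1.0))
```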
Interval analysis [,] is another powerful technique for global optimization. Its goal is to find the tightest enclosing interval for the range of a function. The left end of the enclosing interval provides a lower bound for the function over an interval, which can be used to reduce the search space in global optimization methods. The most promising approaches are based on interval arithmetic and on more advanced techniques based on interval Taylor expansions [,]. Promising approaches combining Lipschitzian optimization and interval analysis ideas were proposed in []. Efficient optimization algorithms based on piecewise linear [,], piecewise convex [], and slope [] techniques, as well as on DC-decomposition [,], should also be mentioned.
The approaches outlined above apply various methods to obtain bounds on a range of a function. However, they do not analyze the convexity of the objective function. Meanwhile, the convexity plays an essential role in global optimization. If the objective is proved to be convex, then efficient local search techniques can be applied to locate its minimum. For example, the univariate convexification technique developed in [] even sacrifices the dimensionality of the problem for convexity.
The convexity test [] helps to reduce the search region by pruning areas where a function is proved to be nonconvex. Usually, the convexity is checked by analyzing the range of the second derivative. If this range lies above (below) zero, the function is convex (concave). This approach works only for functions with continuous second derivatives.
Checking convexity is, in general, an NP-hard problem (see [] and references therein). Approaches based on the symbolic proof and the numerical disproof of convexity are described in []. In the context of convexity checking, it is necessary to mention disciplined convex programming [,], which also relies on a set of rules for proving the convexity of the problem under consideration. However, the authors limit their techniques to proving the convexity of the entire mathematical programming problem for subsequent use of convex programming methods. As we show below, monotonicity, convexity, and concavity properties can also remarkably improve the accuracy of interval bounds when applied to subexpressions of the function’s algebraic representation.
The main contribution of our paper is a set of novel techniques for bounding a function’s range by accounting for the monotonicity, convexity, or concavity of subexpressions of its algebraic expression. This approach efficiently restricts the objective function’s range even if the latter is neither convex nor concave. We demonstrate experimentally that the introduced techniques can significantly tighten the bounds on the function’s range and remarkably enhance conventional interval global search procedures. A set of rules for deducing the monotonicity, concavity, and convexity of a univariate function from its algebraic expression is clearly and concisely formulated and proved. These rules complement the traditional ways of establishing the properties of the objective function based on evaluating the ranges of its derivatives.
Notation:
- $\mathbb{R}$ — the set of real numbers;
- $\mathbb{Z}$ — the set of integers;
- $\mathbb{N}$ — the set of positive integers (natural numbers);
- — the set of all intervals in $\mathbb{R}$;
- — intervals are denoted with bold font;
- — the range of a function over an interval ;
- — an interval extension of a function , i.e., a mapping such that for any ; notice that there may be many different interval extensions of a function ;
- — is nondecreasing monotonic on , or on an interval if additionally specified;
- — is nonincreasing monotonic on , or on an interval if additionally specified.
By elementary functions we mean commonly used mathematical functions, i.e., power, exponential, logarithmic, and trigonometric functions. We distinguish smooth elementary functions, which have derivatives of any order in their domain of definition, and nonsmooth functions, which are nondifferentiable at some points. The list of elementary functions supported by our method is given in Table 1. Notice that other elementary functions can be expressed as algebraic expressions over the functions listed in the table and are thus omitted. We consider only univariate functions in what follows and thus do not repeat this in each statement. We restrict our study to the case of continuous functions.
Table 1.
Supported elementary functions.
The paper is organized as follows. Section 2 describes the deduction techniques to evaluate the convexity/concavity of a function automatically. Then, in Section 3, it is shown how this technique is used to bound the range of a function. Section 4 contains the experimental results demonstrating the efficiency of the proposed approach. Section 5 concludes the paper and discusses possible future research directions.
2. Automatic Deduction of the Convexity and Concavity of a Function
2.1. Deducing Monotonicity
Monotonicity significantly helps in global optimization. If a function is monotonically nondecreasing on a segment , then , and the segment can be eliminated from further consideration after updating the record (the best known solution so far). A similar statement is valid for a nonincreasing function. This technique is known as the monotonicity test [,,]. Moreover, as shown below, monotonicity is crucial for evaluating the convexity/concavity of a composite function.
The usual way to ensure the monotonicity of a differentiable univariate function on an interval is to compute an interval extension for its derivative . If , then the function is nondecreasing monotonic on . Similarly, if , then the function is nonincreasing monotonic on .
If a function is not differentiable, its monotonicity can still be evaluated using the rules described below. Proposition 1 lists rules for evaluating the monotonicity of an expression composed with simple arithmetic operations.
Proposition 1.
The following rules hold:
- if on then on ;
- if on then on ;
- if and on then on ;
- if , and , on then on ;
- if and on then on ;
- if and on then on .
The proof of Proposition 1 is obvious. The rules for evaluating the monotonicity of the composition of functions are summarized in Proposition 2. The proof is intuitive and not presented here.
Proposition 2.
Let be a composition of univariate functions and : . Then, the following four statements hold.
- If on , on and then on .
- If on , on and then on .
- If on , on , then on .
- If on , on , then on .
The monotonicity of elementary univariate functions on a given interval can easily be established as these functions’ behavior is well-known (Table 2).
Table 2.
The monotonicity of elementary functions.
The monotonicity of a composite function defined by an arbitrarily complex algebraic expression can be evaluated automatically using Propositions 1 and 2 and the data from Table 2. Let us consider an example.
Example 1.
Evaluate the monotonicity of the function , where . This function is nonsmooth: it can easily be shown that does not have derivatives at two points of . Apply the rules from Propositions 1 and 2:
Thus, is nonincreasing monotonic on . In the same way, it can be established that is nonincreasing monotonic on . From Proposition 1, it follows that is also nonincreasing monotonic on .
It is worth noting that the rules outlined above help to prove the monotonicity of nondifferentiable functions. However, for differentiable functions, analyzing the range of the first derivative is a better way to establish monotonicity. For example, a function is monotonic on an interval . Indeed, the range of its first derivative computed by the natural interval expansion is non-negative. However, its monotonicity cannot be established by the outlined rules since is not monotonic on . The general recommendation is to compute the first derivative’s range when the function is smooth and to use Propositions 1 and 2 otherwise.
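To illustrate how such rules can be applied mechanically, here is a toy sketch that propagates monotonicity tags through negation, addition, and composition. It encodes only the subset of Propositions 1 and 2 spelled out in the comments, and the tag names are our own.

```python
def compose(outer, inner):
    """Monotonicity tag of g(h(x)) from the tags of g (outer) and h (inner):
    'inc' = nondecreasing, 'dec' = nonincreasing, None = unknown (Proposition 2)."""
    if outer is None or inner is None:
        return None
    return 'inc' if outer == inner else 'dec'

def add(tag1, tag2):
    """Monotonicity tag of f + g: the direction is preserved only when both tags agree (Proposition 1)."""
    return tag1 if tag1 == tag2 else None

def negate(tag):
    """Monotonicity tag of -f: the direction is flipped (Proposition 1)."""
    return {'inc': 'dec', 'dec': 'inc', None: None}[tag]

# A nondecreasing outer function of a nonincreasing inner one is nonincreasing,
# and the sum of two nonincreasing terms stays nonincreasing.
print(add(compose('inc', 'dec'), 'dec'))  # -> 'dec'
```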
Monotonicity itself plays a vital role in optimization. The following obviously valid Proposition shows how the interval bounds can be computed for a monotonic function.
Proposition 3.
Let be a monotonic function on an interval . Then
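A minimal sketch of the endpoint bound implied by Proposition 3, assuming it takes the standard form: for a nondecreasing function the range over [a, b] is [f(a), f(b)], and [f(b), f(a)] for a nonincreasing one.

```python
import math

def monotone_range(f, a, b, nondecreasing):
    """Range of a function known to be monotone on [a, b]: evaluate the endpoints only."""
    fa, fb = f(a), f(b)
    return (fa, fb) if nondecreasing else (fb, fa)

# Example: exp(x) + x is nondecreasing, so its range over [0, 1] is [1, e + 1].
print(monotone_range(lambda x: math.exp(x) + x, 0.0, 1.0, nondecreasing=True))
```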
2.2. Deducing Convexity
First, we recall some well-known mathematical notions used in the rest of the paper. A function $f$ is convex on an interval $\mathbf{x}$ if
$$f(\lambda x_1 + (1 - \lambda) x_2) \le \lambda f(x_1) + (1 - \lambda) f(x_2)$$
for any $x_1, x_2 \in \mathbf{x}$ and any $\lambda$, $0 \le \lambda \le 1$. A function $f$ is called concave on the interval $\mathbf{x}$ if $-f$ is convex on $\mathbf{x}$.
Convexity plays an important role in optimization due to the following two observations. If a function is convex on some interval, then a minimum point of can be efficiently found by well-elaborated local search techniques [,]. If a function is concave on , then .
If the function is twice differentiable, its convexity can be deduced from the second derivative. If one can prove that on a segment , then is convex (concave) on this segment. However, if the function is nonsmooth, the convexity property should be established in some other way. Even if is smooth, accurately bounding its second derivative can be a complicated task, and the convexity test becomes difficult.
The conical combination and the maximum of two functions are known to preserve convexity. The proof can be found in seminal books on convex analysis, e.g., []. For the sake of completeness, we reproduce these rules in the following Proposition 4.
Proposition 4.
Let and be convex functions on an interval . Then, the following statements hold:
- is convex on ,
- is convex on if ,
- is convex on .
The product of two convex functions is not always a convex function. For example, is not convex while both and are convex functions on . In [], it is proved that if f and g are two positive convex functions defined on an interval , then their product is convex provided that they are synchronous in the sense that
$$(f(x) - f(y))(g(x) - g(y)) \ge 0$$
for all $x, y$ in the interval. However, checking this general property automatically is difficult. Instead, we propose the following sufficient condition that can be effectively evaluated.
Proposition 5.
Let and be convex positive functions on an interval such that and are both nonincreasing or both nondecreasing. Then, the function is convex on .
Proof.
Consider , and . Since and are convex, we get
Since and , we get
where
is a quadratic function. Since and are both nonincreasing or both nondecreasing, we have that . Therefore is convex. Note that , . From convexity of , we obtain the following inequality:
This completes the proof. □
Propositions 4 and 5 can be readily reformulated for concave functions. The following Proposition gives rules for evaluating the convexity of a composite function.
Proposition 6.
Let and there be intervals , such that . Then, the following holds:
- if g is convex and nondecreasing on and h is convex on , then f is convex on ;
- if g is convex and nonincreasing on and h is concave on , then f is convex on ;
- if g is concave and nondecreasing on and h is concave on , then f is concave on ;
- if g is concave and nonincreasing on and h is convex on , then f is concave on .
The proof of Proposition 6 can be found in numerous books on convex analysis, e.g., [].
Many elementary functions are convex or concave on their whole domain of definition, e.g., , , for even natural n. For other functions, the intervals of concavity/convexity can be efficiently established as these functions’ behavior is well known (Table 3).
Table 3.
The convexity/concavity of elementary functions.
Propositions 4–6 enable an automated convexity deduction for composite functions, as the following examples show.
Example 2.
Consider the function on the interval . The function is convex on and nondecreasing. The function is convex on . According to Proposition 6, is convex. Since is also convex, we conclude (Proposition 4) that is convex.
It is worth noting that the convexity can be proved by computing the interval bounds for the second derivative in the considered example. Indeed, is obviously positive on . Since there are plenty of tools for automatic differentiation and interval computations, the convexity can be proved automatically.
However, a convex function does not necessarily have derivatives at all points. Moreover, even if it is piecewise differentiable, locating the points where the function is not continuously differentiable can be difficult. Fortunately, the theory outlined above efficiently copes with such situations.
Example 3.
Consider the following function
on an interval . Since is concave on , we conclude that is convex on . The convexity of follows from the convexity of the linear function and Proposition 6. From the convexity of x, , and Proposition 4, we derive that is convex.
Notice that automatic symbolic differentiation techniques cannot compute the derivative of because it involves computing the intersection points of x and functions, which is a rather complex problem.
3. Application to Bounding the Function’s Range
An obvious application of the proposed techniques is the convexity/concavity test [], which helps to eliminate an interval from further consideration and reduce the number of steps of branch-and-bound algorithms. Consider the following problem:
If the objective is concave on , then the global minimum can be easily computed as follows: . If the concavity does not hold for the entire search region, the test can be used in branch-and-bound, interval Newton or other global minimization methods by applying it to subintervals of processed by the algorithm.
However, if the objective is convex on , then any local minimum in is a global minimum and can be easily found by a local search procedure. Since any continuously differentiable function is convex or concave on a sufficiently small interval, the convexity/concavity test can tremendously reduce the number of the algorithm’s steps by preventing excessive branching.
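A minimal sketch of how an interval with a deduced shape could be handled inside a search method; the names are ours, a concave objective is treated by an endpoint evaluation, and a convex one is minimized by a simple ternary search standing in for any local search procedure.

```python
def handle_interval(f, a, b, shape, tol=1e-8):
    """Illustrative handling of [a, b] once its shape is known ('concave' or 'convex')."""
    if shape == 'concave':
        return min(f(a), f(b))          # a concave function attains its minimum at an endpoint
    # Convex case: ternary search converges to the global minimum on [a, b].
    while b - a > tol:
        m1, m2 = a + (b - a) / 3, b - (b - a) / 3
        if f(m1) <= f(m2):
            b = m2                      # for a convex f a minimizer lies in [a, m2]
        else:
            a = m1
    return f(0.5 * (a + b))

print(handle_interval(lambda x: (x - 1.3) ** 2, 0.0, 3.0, 'convex'))    # ~0.0
print(handle_interval(lambda x: -(x - 1.0) ** 2, 0.0, 3.0, 'concave'))  # -4.0
```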
Another situation commonly encountered in practice occurs when a subexpression represents a convex/concave function while the entire function is neither convex nor concave. For example, the function is convex on the interval while is not. In such cases, the interval cannot be discarded by the convexity/concavity test. Nevertheless, the convexity and concavity can be used to compute tight upper and lower bounds for the subexpression, yielding better bounds for the entire function.
For computing upper and lower bounds, recall that the graph of a convex function always lies above any of its tangents. This property and the convexity definition yield Proposition 7.
Proposition 7.
Let be a convex function on . Then
where
Proof.
First, prove that is an underestimator for . For a function convex on an interval and a point , the following inequality holds []:
From (8), it directly follows that
The right part is from (6).
Now prove that is the overestimator for . Taking , , in the definition of a convex function (3) we obtain:
The rightmost part is from (6). This completes the proof. □
Proposition 7 is illustrated in Figure 1. The figure shows the original function (blue curve), its overestimator consisting of one green line segment, and the underestimator consisting of two connected line segments, marked with red. The estimators are constructed according to (6).
Figure 1.
The overestimator (green) and the underestimator (red) of a convex function on an interval .
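The following sketch builds the two estimators as we read them from Figure 1: the underestimator as the maximum of the tangent lines at the endpoints, and the overestimator as the secant through (a, f(a)) and (b, f(b)); the endpoint derivatives are supplied by the caller, and the function names are illustrative.

```python
def convex_estimators(a, b, fa, fb, dfa, dfb):
    """Under- and overestimators of a convex f on [a, b] from endpoint values and derivatives.
    Assumed form: under(x) = max of the endpoint tangents, over(x) = secant (cf. Figure 1)."""
    def under(x):
        return max(fa + dfa * (x - a), fb + dfb * (x - b))
    def over(x):
        return fa + (fb - fa) * (x - a) / (b - a)
    return under, over

# Example: f(x) = x**2 on [-1, 2]; the tangents at -1 and 2 lie below f, the secant above.
under, over = convex_estimators(-1.0, 2.0, 1.0, 4.0, -2.0, 4.0)
print(under(0.5), 0.5 ** 2, over(0.5))  # under <= f <= over
```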
A similar proposition holds for concave functions.
Proposition 8.
Let be a concave function on . Then
where
Proof.
Fortunately, the minimum and maximum of estimators and can be found analytically as stated by the following propositions.
Proposition 9.
If a function f is convex on an interval then
otherwise.
Proof.
Equation (11) is obviously valid. Denote , . Equation (12) follows from the fact that the function lies above its tangent , coincides with it at , and the tangent is a monotonically nondecreasing function. Equation (13) is proved in the same way.
For the remaining case, the minimum of the underestimator is achieved at the intersection point of the lines defined by , (point C in Figure 1). This point is the solution of the following equation:
Simple transformations yield:
Since the minimum of the underestimator is achieved at the point
Substituting this value to we obtain:
This concludes the proof. □
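A minimal sketch of the bound derived above, assuming Proposition 9 reads as in the proof: the upper bound is max(f(a), f(b)); the lower bound equals f(a) when f'(a) >= 0, f(b) when f'(b) <= 0, and otherwise the value of the endpoint tangents at their intersection point (point C in Figure 1). Function and variable names are ours.

```python
def convex_range_bound(a, b, fa, fb, dfa, dfb):
    """Enclosing interval for a convex f on [a, b] from endpoint values and derivatives."""
    upper = max(fa, fb)                      # a convex function attains its maximum at an endpoint
    if dfa >= 0:                             # f is nondecreasing on [a, b]
        lower = fa
    elif dfb <= 0:                           # f is nonincreasing on [a, b]
        lower = fb
    else:                                    # the tangents at a and b intersect inside (a, b)
        x_c = (fb - fa + dfa * a - dfb * b) / (dfa - dfb)
        lower = fa + dfa * (x_c - a)
    return lower, upper

# Example: f(x) = x**2 on [-1, 2]; the exact range is [0, 4], the bound gives [-2, 4].
print(convex_range_bound(-1.0, 2.0, 1.0, 4.0, -2.0, 4.0))
```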
The validity of the following proposition, which gives bounds for a concave function, is justified similarly.
Proposition 10.
If a function f is concave over an interval then
otherwise.
Proof.
The bounds computed with the help of Propositions 9 and 10 are often tighter than bounds obtained by other techniques. Below, we compare the ranges computed according to Propositions 9 and 10 with the results of interval analysis techniques.
4. Numerical Experiments
In this section, we experimentally evaluate the proposed approach. First, in Section 4.1, the interval bounds and the bounds computed with the proposed techniques are compared for a set of functions. In Section 4.2, we study the impact of accounting for monotonicity and convexity on the performance of global optimization algorithms.
4.1. Comparison with Interval Bounds
We selected two well-known [,] interval analysis techniques for computing the range of a function. The first is the natural interval expansion, which computes the interval bounds of a function’s range by applying interval arithmetic rules according to the function’s expression. The second approach is the so-called first-order Taylor expansion:
where and denotes the natural interval expansion for the derivative of . The detailed proof of (19) can be found in [].
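A minimal sketch of the Taylor-form bound (19), assuming the standard form f(c) + F'(X)(X - c) with c the midpoint of X; the tiny interval class below implements only the arithmetic needed for this example.

```python
class Interval:
    """A tiny interval-arithmetic class supporting +, -, * (enough for this sketch)."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        other = other if isinstance(other, Interval) else Interval(other, other)
        return Interval(self.lo + other.lo, self.hi + other.hi)
    __radd__ = __add__
    def __sub__(self, other):
        other = other if isinstance(other, Interval) else Interval(other, other)
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __mul__(self, other):
        other = other if isinstance(other, Interval) else Interval(other, other)
        p = [self.lo * other.lo, self.lo * other.hi, self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))
    __rmul__ = __mul__
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Taylor-form bound for f(x) = x**3 - 2*x on X = [1, 2]:
# f'(x) = 3*x**2 - 2 is evaluated by its natural interval extension.
X = Interval(1.0, 2.0)
c = 0.5 * (X.lo + X.hi)
df_X = 3 * X * X - 2                      # natural extension of the derivative
taylor_bound = (c ** 3 - 2 * c) + df_X * (X - c)
print(taylor_bound)                       # encloses the true range [-1, 4]
```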
Example 4.
Let and . The convexity of can easily be established by applying evaluation rules introduced in the previous section:
- is concave on ,
- is convex on (by definition),
- x is concave on ,
- is convex on (by definition),
- is convex on (by Proposition 6),
- is convex on (by Proposition 4).
Applying (9), we get the following enclosing interval for on :
with the width . Natural interval expansion gives:
with the width and the first order Taylor expansion produces
with the width . Thus, the interval computed with the proposed techniques is nearly times narrower than those produced by the natural interval and Taylor expansions.
It is worth noting that the bounds provided by Propositions 9 and 10 can be computed for functions that are not differentiable at a set of points. It suffices that the function has derivatives at the ends of the interval. The latter can be computed using forward-mode automatic differentiation [], which is merely the application of differentiation rules at a point.
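A minimal sketch of forward-mode automatic differentiation via dual numbers, which suffices to obtain the derivatives at the interval endpoints; only a few operations are overloaded, and all names are ours.

```python
import math

class Dual:
    """Value/derivative pair propagated by the chain rule (forward-mode AD)."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def sin(x):
    """sin overloaded for dual numbers: d/dx sin(u) = cos(u) * u'."""
    return Dual(math.sin(x.val), math.cos(x.val) * x.der) if isinstance(x, Dual) else math.sin(x)

def derivative(f, x):
    """f'(x) obtained by seeding the derivative component with 1."""
    return f(Dual(x, 1.0)).der

# Derivatives of f(x) = x*sin(x) at the endpoints of [0, 2].
f = lambda x: x * sin(x)
print(derivative(f, 0.0), derivative(f, 2.0))
```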
Table 4 compares bounds computed with the interval analysis techniques and the bounds computed by the proposed method for five convex functions. The convexity of these functions can be easily deduced by the introduced convexity evaluation rules. For an interval , three bounds are presented in the respective columns:
Table 4.
Comparison of the natural interval expansion (Natural), the Taylor expansion (Taylor), and the bounds produced by the proposed techniques (Convex).
- Natural—a bound computed by the natural interval expansion techniques,
- Taylor—a bound computed by the 1st order Taylor expansion,
- Convex—a bound computed according to Propositions 9 and 10.
For all functions except No. 4, the bound produced by the proposed techniques is contained in both intervals produced by the interval techniques and is significantly tighter. For function No. 4, the interval computed by the Convex method is narrower than that of the natural interval expansion but is not contained in it. However, since neither of these intervals contains the other, they can be intersected to obtain a better enclosing interval . The 5th function is non-differentiable at . Thus, symbolic differentiation does not give a meaningful result, and the Taylor expansion cannot be applied in this case. For that reason, the respective cell is marked with “−”.
4.2. Impact on the Performance of Global Search
In Section 4.1, we observed that accounting for convexity can significantly improve the interval bounds. As expected, applying these bounds reduces the number of steps of the global search algorithm.
We implemented a standard branch-and-bound algorithm that uses the lower-bound test to discard subintervals from further search. The description of this algorithm can be found elsewhere [,]. For completeness, we outline it here (Figure 2).
Figure 2.
The standard branch-and-bound algorithm.
The algorithm operates over a list L of intervals, initialized with the feasible set (line 04). The record point (incumbent solution) is initialized with the center of the interval (line 05). The main loop (lines 06–16) iterates while the list L is not empty. At each iteration, one of the intervals from this list is taken (line 07) and examined. First, the value at the middle of this interval is computed and, if necessary, the record is updated (lines 08–11). The interval extension is computed at line 12. An interval whose lower bound lies above the current record is discarded from further search. Otherwise, it is partitioned into two smaller intervals, which are added to the list L (line 14).
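A minimal sketch of the algorithm of Figure 2, with the lower-bounding procedure passed in as a callable so that any of the extensions listed below can be plugged in; the stopping tolerance and the names are our own simplification.

```python
def branch_and_bound(f, lower_bound, a, b, eps=1e-6):
    """Simplified branch-and-bound in the spirit of Figure 2: lower_bound(a, b) must return
    a valid lower bound of f on [a, b] (natural, Taylor, or convexity-based)."""
    record_x = 0.5 * (a + b)
    record = f(record_x)                        # incumbent solution at the interval's center
    work = [(a, b)]                             # list L of intervals to process
    steps = 0
    while work:
        lo, hi = work.pop()
        steps += 1
        mid = 0.5 * (lo + hi)
        if f(mid) < record:                     # update the record if the midpoint is better
            record, record_x = f(mid), mid
        if lower_bound(lo, hi) > record - eps:  # lower-bound test: the interval cannot
            continue                            # improve the record, discard it
        work.append((lo, mid))                  # otherwise bisect and keep both halves
        work.append((mid, hi))
    return record_x, record, steps

# Example with a crude Lipschitz-style lower bound for f(x) = x*x - 2*x (constant 6 on [-2, 2]).
f = lambda x: x * x - 2 * x
lb = lambda a, b: f(0.5 * (a + b)) - 6 * 0.5 * (b - a)
print(branch_and_bound(f, lb, -2.0, 2.0))
```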
We consider three variants of computing the interval extension:
- Natural—the natural interval expansion techniques,
- Taylor—the first-order Taylor expansion,
- Convex—the range is computed according to Propositions 3, 9 and 10.
The described methods can be applied in combination, where the intervals computed by several methods are intersected to obtain the resulting range (see the sketch after the following list). We considered four different combinations of the range bounding techniques to compute the enclosing interval of the objective function:
- Natural—pure natural interval expansion;
- Natural + Convex—the natural interval expansion combined with the proposed techniques;
- Natural + Taylor—the natural interval expansion combined with the first-order Taylor expansion;
- Natural + Taylor + Convex—the natural interval expansion combined with the first-order Taylor expansion and the proposed techniques.
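A minimal sketch of the combination by intersection mentioned above: since each method returns a valid enclosure of the same range, the intersection is also a valid, and usually tighter, enclosure.

```python
def intersect_bounds(bounds):
    """Intersect several valid enclosures [lo_i, hi_i] of the same range."""
    lows, highs = zip(*bounds)
    return max(lows), min(highs)

# Example: combining a natural, a Taylor, and a convexity-based enclosure (illustrative numbers).
print(intersect_bounds([(-8.0, 6.0), (-5.5, 7.0), (-4.0, 6.5)]))  # -> (-4.0, 6.0)
```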
The convexity and the monotonicity are detected by analyzing the ranges of the first/second derivatives in differentiable cases or by using the introduced evaluation rules for the non-differentiable expressions.
Table 5 lists the set of test problems used in the experiments. For each problem, the objective function (), the interval (), and the global optimum value () are presented. The first ten problems are taken from []. The objective functions in these problems have both first and second derivatives.
Table 5.
Test problems.
To demonstrate the applicability of the proposed automatic convexity deduction techniques, we have also added four nondifferentiable problems. Test cases 11 and 12 were proposed by us, and 13 and 14 were taken from [].
The results of the numerical experiments are summarized in Table 6. The cells contain the number of steps performed by the branch-and-bound method. Columns correspond to different ways of computing the range of the objective function, and rows correspond to test problems. The Taylor expansion cannot be applied to nondifferentiable problems 11–14, and the respective cells are blank.
Table 6.
Testing results.
Experimental results demonstrate that, for the majority of the test problems, the proposed techniques tremendously improve the performance of the standard branch-and-bound algorithm that uses the natural interval expansion. The combination of the natural interval expansion and the proposed method always outperforms the combination of the natural and the first-order Taylor interval expansions. The comparison of the last two columns of Table 6 indicates that the Taylor expansion version of branch-and-bound can be further improved when combined with the proposed techniques. However, for problems 1 and 5–10, the proposed method does not benefit from the Taylor expansion.
5. Discussion
The standard way to ensure convexity is to bound the range of the function’s second derivative. However, this approach is only applicable to smooth functions. We defined a set of rules that can efficiently handle nonsmooth functions. The algebraic representation of the function, however, should be available.
It is worth noting that the proposed approach can be efficiently coded in modern programming languages supporting operator overloading. To run the experiments presented in Table 4 and Table 6, we implemented our approach in Python. The elementary functions and operators were overloaded to support a particular data type that carries monotonicity and convexity information along with the range of the function. The overloaded methods work according to the rules described in Section 2 and interval arithmetic.
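A minimal sketch of such a data type; the attribute names and the small subset of rules encoded here (the sum rules of Propositions 1 and 4 and the exp-composition rule of Proposition 6) are our own illustration of how overloaded operators can propagate the range together with shape information.

```python
import math

class Expr:
    """Carries an enclosure of the range together with convexity/monotonicity flags."""
    def __init__(self, lo, hi, convex=False, concave=False, nondec=False, noninc=False):
        self.lo, self.hi = lo, hi
        self.convex, self.concave = convex, concave
        self.nondec, self.noninc = nondec, noninc

    def __add__(self, other):
        # Proposition 4: a sum of convex (concave) functions is convex (concave);
        # Proposition 1: a sum of nondecreasing (nonincreasing) functions keeps its direction.
        return Expr(self.lo + other.lo, self.hi + other.hi,
                    convex=self.convex and other.convex,
                    concave=self.concave and other.concave,
                    nondec=self.nondec and other.nondec,
                    noninc=self.noninc and other.noninc)

def exp(e):
    # Proposition 6: exp is convex and nondecreasing, so exp of a convex function is convex;
    # the range enclosure follows from the monotonicity of exp.
    return Expr(math.exp(e.lo), math.exp(e.hi),
                convex=e.convex, nondec=e.nondec, noninc=e.noninc)

# x on [0, 1] is convex, concave, and nondecreasing; x + exp(x) is then convex and nondecreasing.
x = Expr(0.0, 1.0, convex=True, concave=True, nondec=True)
y = x + exp(x)
print((y.lo, y.hi), y.convex, y.nondec)  # (1.0, 1 + e), True, True
```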
As we have shown above, evaluating convexity can improve interval bounds on the function’s range and accelerate global optimization algorithms. Moreover, the over- and underestimators defined by Propositions 7 and 8 enable efficient reduction techniques. Reduction techniques are widely used to accelerate the search for a global minimum of a function or a root of an equation.
We believe that the proposed approach has great potential as it can be extended to various generalized notions of convexity, e.g., quasiconvexity []. Quasiconvex functions possess the unimodality property, and thus recognizing the quasiconvexity (quasiconcavity) can tremendously enhance global optimization algorithms.
Author Contributions
Investigation: both authors. All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.
Funding
This research was supported by the Ministry of Science and Higher Education of the Russian Federation, project No 075-15-2020-799.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Johnson, D.E. Introduction to Filter Theory; Prentice Hall: Englewood Cliffs, NJ, USA, 1976. [Google Scholar]
- Zilinskas, A. Optimization of one-dimensional multimodal functions. J. R. Stat. Soc. Ser. C Appl. Stat. 1978, 27, 367–375. [Google Scholar]
- Kvasov, D.; Menniti, D.; Pinnarelli, A.; Sergeyev, Y.D.; Sorrentino, N. Tuning fuzzy power-system stabilizers in multi-machine systems by global optimization algorithms based on efficient domain partitions. Electr. Power Syst. Res. 2008, 78, 1217–1229. [Google Scholar] [CrossRef]
- Bedrosian, D.; Vlach, J. Time-domain analysis of networks with internally controlled switches. IEEE Trans. Circuits Syst. I Fundam. Theory Appl. 1992, 39, 199–212. [Google Scholar] [CrossRef]
- Femia, N.; Tucci, V. On the modeling of PWM converters for large signal analysis in discontinuous conduction mode. IEEE Trans. Power Electron. 1994, 9, 487–496. [Google Scholar] [CrossRef]
- Lassere, J.B. Connecting optimization with spectral analysis of tri-diagonal matrices. Math. Program. 2020. [Google Scholar] [CrossRef]
- Strongin, R.G.; Sergeyev, Y.D. Global Optimization with Non-Convex Constraints: Sequential and Parallel Algorithms; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013; Volume 45. [Google Scholar]
- Lera, D.; Sergeyev, Y.D. GOSH: Derivative-free global optimization using multi-dimensional space-filling curves. J. Glob. Optim. 2018, 71, 193–211. [Google Scholar] [CrossRef]
- Lera, D.; Posypkin, M.; Sergeyev, Y.D. Space-filling curves for numerical approximation and visualization of solutions to systems of nonlinear inequalities with applications in robotics. Appl. Math. Comput. 2021, 390, 125660. [Google Scholar] [CrossRef]
- Jensen, P.A.; Bard, J.F.; Jensen, P. Operations Research Models and Methods; John Wiley & Sons: Hoboken, NJ, USA, 2003. [Google Scholar]
- Pintér, J. Extended univariate algorithms for n-dimensional global optimization. Computing 1986, 36, 91–103. [Google Scholar] [CrossRef]
- Sergeyev, Y.D.; Kvasov, D.E. Deterministic Global Optimization: An Introduction to the Diagonal Approach; Springer: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
- Evtushenko, Y.G. Numerical methods for finding global extrema (case of a non-uniform mesh). USSR Comput. Math. Math. Phys. 1971, 11, 38–54. [Google Scholar] [CrossRef]
- Pijavskij, S. An algorithm for finding the global extremum of function. Optim. Decis. 1967, 2, 13–24. [Google Scholar]
- Shubert, B.O. A sequential method seeking the global maximum of a function. SIAM J. Numer. Anal. 1972, 9, 379–388. [Google Scholar] [CrossRef]
- Timonov, L. Algorithm for search of a global extremum. Eng. Cybern. 1977, 15, 38–44. [Google Scholar]
- Jones, D.R.; Perttunen, C.D.; Stuckman, B.E. Lipschitzian optimization without the Lipschitz constant. J. Optim. Theory Appl. 1993, 79, 157–181. [Google Scholar] [CrossRef]
- Kvasov, D.E.; Sergeyev, Y.D. A univariate global search working with a set of Lipschitz constants for the first derivative. Optim. Lett. 2009, 3, 303–318. [Google Scholar] [CrossRef]
- Lera, D.; Sergeyev, Y.D. Acceleration of univariate global optimization algorithms working with Lipschitz functions and Lipschitz first derivatives. SIAM J. Optim. 2013, 23, 508–529. [Google Scholar] [CrossRef]
- Gergel, V.P. A global optimization algorithm for multivariate functions with Lipschitzian first derivatives. J. Glob. Optim. 1997, 10, 257–281. [Google Scholar] [CrossRef]
- Sergeyev, Y.D. Global one-dimensional optimization using smooth auxiliary functions. Math. Program. 1998, 81, 127–146. [Google Scholar] [CrossRef]
- Sergeyev, Y.D.; Nasso, M.C.; Mukhametzhanov, M.S.; Kvasov, D.E. Novel local tuning techniques for speeding up one-dimensional algorithms in expensive global optimization using Lipschitz derivatives. J. Comput. Appl. Math. 2020, 383, 113134. [Google Scholar] [CrossRef]
- Hansen, P.; Jaumard, B.; Lu, S.H. Global optimization of univariate Lipschitz functions: I. Survey and properties. Math. Program. 1992, 55, 251–272. [Google Scholar] [CrossRef]
- Hansen, P.; Jaumard, B.; Lu, S.H. Global optimization of univariate Lipschitz functions: II. New algorithms and computational comparison. Math. Program. 1992, 55, 273–292. [Google Scholar] [CrossRef]
- Pintér, J.D. Global Optimization in Action: Continuous and Lipschitz Optimization: Algorithms, Implementations and Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013; Volume 6. [Google Scholar]
- Hansen, E.; Walster, G.W. Global Optimization Using Interval Analysis: Revised and Expanded; CRC Press: Boca Raton, FL, USA, 2003; Volume 264. [Google Scholar]
- Moore, R.E.; Kearfott, R.B.; Cloud, M.J. Introduction to Interval Analysis; SIAM: Philadelphia, PA, USA, 2009. [Google Scholar]
- Kearfott, R.B. Rigorous Global Search: Continuous Problems; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013; Volume 13. [Google Scholar]
- Casado, L.G.; MartÍnez, J.A.; GarcÍa, I.; Sergeyev, Y.D. New interval analysis support functions using gradient information in a global minimization algorithm. J. Glob. Optim. 2003, 25, 345–362. [Google Scholar] [CrossRef]
- Fasano, G.; Pintér, J.D. Efficient piecewise linearization for a class of non-convex optimization problems: Comparative results and extensions. In Springer Proceedings in Mathematics & Statistics; Pintér, J., Terlaky, T., Eds.; Springer: Berlin/Heidelberg, Germany, 2019; Volume 279, pp. 39–56. [Google Scholar]
- Posypkin, M.; Usov, A.; Khamisov, O. Piecewise linear bounding functions in univariate global optimization. Soft Comput. 2020, 24, 17631–17647. [Google Scholar] [CrossRef]
- Floudas, C.; Gounaris, C. Tight convex underestimators for C2-continuous functions: I. Univariate functions. J. Glob. Optim 2008, 42, 51–67. [Google Scholar]
- Ratz, D. A nonsmooth global optimization technique using slopes: The one-dimensional case. J. Glob. Optim. 1999, 14, 365–393. [Google Scholar] [CrossRef]
- Tuy, H.; Hoang, T.; Hoang, T.; Mathématicien, V.N.; Hoang, T.; Mathematician, V. Convex Analysis and Global Optimization; Springer: Berlin/Heidelberg, Germany, 1998. [Google Scholar]
- Strekalovsky, A.S. On local search in dc optimization problems. Appl. Math. Comput. 2015, 255, 73–83. [Google Scholar]
- Arıkan, O.; Burachik, R.; Kaya, C. Steklov regularization and trajectory methods for univariate global optimization. J. Glob. Optim. 2020, 76, 91–120. [Google Scholar] [CrossRef]
- Ahmadi, A.; Hall, G. On the complexity of detecting convexity over a box. Math. Program. 2020, 182, 429–443. [Google Scholar] [CrossRef]
- Fourer, R.; Maheshwari, C.; Neumaier, A.; Orban, D.; Schichl, H. Convexity and concavity detection in computational graphs: Tree walks for convexity assessment. Informs J. Comput. 2010, 22, 26–43. [Google Scholar] [CrossRef]
- Grant, M.; Boyd, S. CVX: MATLAB Software for Disciplined Convex Programming. Version 1.21. 2010. Available online: http://cvxr.com/cvx (accessed on 9 January 2020).
- Grant, M.C.; Boyd, S.P. Graph implementations for nonsmooth convex programs. In Recent Advances in Learning and Control; Springer: Berlin/Heidelberg, Germany, 2008; pp. 95–110. [Google Scholar]
- Ratschek, H.; Rokne, J. New Computer Methods for Global Optimization; Horwood: Chichester, UK, 1988. [Google Scholar]
- Nataraj, P.S.; Arounassalame, M. A new subdivision algorithm for the Bernstein polynomial approach to global optimization. Int. J. Autom. Comput. 2007, 4, 342–352. [Google Scholar] [CrossRef]
- Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
- Nesterov, Y. Introductory Lectures on Convex Optimization: A Basic Course; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013; Volume 87. [Google Scholar]
- Niculescu, C.; Persson, L.-E. Convex Functions and their Applications. A Contemporary Approach; Springer International Publishing: Berlin/Heidelberg, Germany, 2018. [Google Scholar]
- Hadjisavvas, N.; Komlósi, S.; Schaible, S.S. Handbook of Generalized Convexity and Generalized Monotonicity; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2006; Volume 76. [Google Scholar]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).