Technical Note

On One Problem of the Nonlinear Convex Optimization

Institute of Applied Informatics, Automation and Mechatronics, Slovak University of Technology in Bratislava, Bottova 25, 917 01 Trnava, Slovakia
AppliedMath 2022, 2(4), 512-517; https://doi.org/10.3390/appliedmath2040030
Submission received: 12 August 2022 / Revised: 13 September 2022 / Accepted: 19 September 2022 / Published: 21 September 2022
(This article belongs to the Special Issue Applications of Number Theory to the Sciences and Mathematics)

Abstract

In this short paper, we study the problem of traversing a crossbar through a bent channel, formulated as a nonlinear convex optimization problem. The result is a MATLAB code that computes the maximum length of the crossbar as a function of the widths of the two parts of the channel and the angle between them. In the case where the two parts are perpendicular to each other, the result is expressed analytically and is closely related to the astroid curve (a hypocycloid with four cusps).
MSC:
46N10; 26A51

1. Formulation of the Problem

In this paper, we deal with the following problem:
Consider two navigable channels of widths $d_1$ and $d_2$ that make an angle $\beta$ with each other. We want to find the longest crossbar (after mathematical abstraction, a line segment) that can be navigated through this channel (see Figure 1 for the meaning of the parameters $d_1$, $d_2$ and $\beta$).
As we will see in a moment, this problem can be converted into a convex optimization problem, allowing us to take advantage of the rich and efficient apparatus of convex optimization. The main reasons for focusing on convex optimization problems are as follows [1]:
-
They are close to being the broadest class of problems we know how to solve efficiently.
-
They enjoy nice geometric properties (e.g., local minima are global).
-
There is excellent software that readily solves a large class of convex problems.
-
Numerous important problems in a variety of application domains are convex.

A Brief History of Convex Optimization

In the 19th century, optimization models were used mostly in physics, with the concept of energy as the objective function. With the notable exception of Gauss's algorithm for least squares (1822), which emerged from the most famous dispute in the history of statistics, between Gauss and Legendre [2], no attempt was made to actually solve these problems numerically.
In the period from 1900 to 1970, an extraordinary effort was made in mathematics to build the theory of optimization. The emphasis was on convex analysis, which allows one to describe the optimality conditions of a convex problem. Important milestones in this effort include [3]:
  • 1947: The simplex algorithm for linear programming—the computational tool still prominent in the field today for the solution of these problems (Dantzig [4,5]).
  • 1960s: Early interior-point methods (Fiacco and McCormick [6], Dikin [7,8], …).
  • 1970s: Ellipsoid method and other subgradient methods, which positively answered the question of whether there exists an algorithm for linear programming, other than the simplex method, with polynomial complexity ([9,10,11,12,13]).
  • 1980s: Polynomial-time interior-point methods for linear programming (Karmarkar 1984 [14,15]). From a theoretical point of view, this was a polynomial-time algorithm, in contrast to Dantzig’s simplex method, which in the worst case has exponential complexity [16].
  • Late 1980s–now: Polynomial-time interior-point methods for nonlinear convex optimization (Nesterov & Nemirovski 1994 [17,18]).
The growth of convex optimization methods (theory and algorithms) and of computational capabilities (many algorithms are computationally time-consuming) has led to their widespread application in many areas of science and engineering (control, signal processing, communications, circuit design, etc.) and to new problem classes (for example, semidefinite and second-order cone programming, and robust optimization). For more recent achievements in optimization algorithms, see [19,20,21,22,23] and the references therein.
This paper does not claim to develop new scientific knowledge; its aim is to use well-known techniques of convex optimization to solve an interesting practical problem, one that can also be used in teaching nonlinear convex optimization techniques; the pedagogical value of the contributions is one of the objectives of this Special Issue of the journal. The interesting feature of this problem/case study is that the maximum length is found via a naturally occurring minimization problem (without multiplication by a factor of $(-1)$ or similar).
Now, we introduce the theorem, which is one of the most important theorems in convex analysis [24].
Theorem 1.
Consider an optimization problem
$$\text{minimize } f(x) \quad \text{s.t. } x \in \Omega,$$
where $f:\Omega\to\mathbb{R}$ is a convex function and $\Omega\subseteq\mathbb{R}^n$ is a convex set. Then, any local minimum is also a global minimum.

2. Analytical and Numerical Solution of the Convex Optimization Problem

In the spirit of the previous considerations, our problem can be formulated as follows:
$$\text{minimize } l_\beta(\alpha) = l_1(\alpha) + l_2(\alpha) = \frac{d_1}{\sin\alpha} + \frac{d_2}{\sin(\alpha+\beta)} \quad \text{s.t. } 0 < \alpha < \pi - \beta, \qquad (1)$$
where $\beta \in (0,\pi)$ is fixed. The angles $\alpha, \beta$ are measured in radians, and $d_1, d_2$ (both positive real numbers) are expressed in units of length.
The solution of the problem (1) is hereafter referred to as $l_{\mathrm{maxLength}}(\beta)$.
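Since (as shown in this section) $l_\beta$ is strictly convex on $(0,\pi-\beta)$, problem (1) can be solved by any one-dimensional convex minimizer. As an illustrative sketch (the function and variable names here are ours, not from the paper), the following pure-Python snippet computes $l_{\mathrm{maxLength}}(\beta)$ by ternary search, a method that is valid precisely because the objective is strictly convex:

```python
import math

def objective(alpha, d1, d2, beta):
    """The cost function l_beta(alpha) from (1)."""
    return d1 / math.sin(alpha) + d2 / math.sin(alpha + beta)

def max_length(d1, d2, beta, tol=1e-12):
    """Minimize l_beta over (0, pi - beta) by ternary search,
    which converges to the unique minimum of a strictly convex function."""
    lo, hi = 1e-9, math.pi - beta - 1e-9
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if objective(m1, d1, d2, beta) < objective(m2, d1, d2, beta):
            hi = m2
        else:
            lo = m1
    return objective((lo + hi) / 2, d1, d2, beta)

print(max_length(1, 2, math.pi / 2))
```

For $d_1 = 1$, $d_2 = 2$, $\beta = \pi/2$, this returns approximately $4.1619$, in agreement with the analytical result of Theorem 2.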
The first two derivatives of the function $l_\beta(\alpha)$ are:
$$l_\beta'(\alpha) = \frac{d_1\cos\alpha}{\cos^2\alpha - 1} + \frac{d_2\cos(\alpha+\beta)}{\cos^2(\alpha+\beta) - 1},$$
$$l_\beta''(\alpha) = \frac{2d_1}{\sin^3\alpha} - \frac{d_1}{\sin\alpha} + \frac{2d_2}{\sin^3(\alpha+\beta)} - \frac{d_2}{\sin(\alpha+\beta)}$$
$$= \frac{2d_1 - d_1\sin^2\alpha}{\sin^3\alpha} + \frac{2d_2 - d_2\sin^2(\alpha+\beta)}{\sin^3(\alpha+\beta)} \geq \frac{d_1}{\sin^3\alpha} + \frac{d_2}{\sin^3(\alpha+\beta)} > 0$$
for all $\alpha \in (0, \pi-\beta)$, and so the function $l_\beta(\alpha)$ is continuous (together with its derivatives) and strictly convex on the convex set $(0, \pi-\beta) \subset \mathbb{R}$, with $l_\beta(\alpha) \to \infty$ as $\alpha \to 0^+$ and as $\alpha \to (\pi-\beta)^-$; see also Figure 2 for a better idea of the behavior of the function $l_\beta(\alpha)$. The limits above (both equal to $\infty$) guarantee the existence, and the strict convexity of $l_\beta(\alpha)$ in turn the uniqueness, of the local minimum of $l_\beta(\alpha)$ on $\Omega$. Theorem 1 says that this local minimum is a solution of the problem (1).
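As a quick numerical sanity check of the convexity claim (a sketch of our own, not part of the paper), the closed form of $l_\beta''$ derived above can be evaluated on a grid of interior points and verified to remain positive:

```python
import math

def second_derivative(alpha, d1, d2, beta):
    """l_beta''(alpha) in the form derived in the text."""
    s1, s2 = math.sin(alpha), math.sin(alpha + beta)
    return 2 * d1 / s1 ** 3 - d1 / s1 + 2 * d2 / s2 ** 3 - d2 / s2

# sample several channel geometries and check convexity on (0, pi - beta)
for d1, d2, beta in [(1, 1, math.pi / 2), (1, 3, math.pi / 4), (2, 7, 3 * math.pi / 4)]:
    n = 1000
    vals = [second_derivative((k + 0.5) / n * (math.pi - beta), d1, d2, beta)
            for k in range(n)]
    assert min(vals) > 0  # strictly convex on the whole interval
```

The check passes for every geometry tried, as the bound $l_\beta'' \geq d_1/\sin^3\alpha + d_2/\sin^3(\alpha+\beta) > 0$ guarantees.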
For $\beta = \frac{\pi}{2}$, that is, when the channel is bent at a right angle, we can compute the value $l_{\mathrm{maxLength}}(\frac{\pi}{2})$ analytically:
Theorem 2.
For $\beta = \frac{\pi}{2}$, we have
$$l_{\mathrm{maxLength}}\!\left(\frac{\pi}{2}\right) = \left(d_1^{2/3} + d_2^{2/3}\right)^{3/2}.$$
Proof. 
First, using the identity $\sin(\alpha + \frac{\pi}{2}) = \cos\alpha$, we obtain from (1)
$$l_{\beta=\pi/2}(\alpha) = \frac{d_1}{\sin\alpha} + \frac{d_2}{\cos\alpha}, \quad \alpha \in (0, \pi - \beta) = \left(0, \frac{\pi}{2}\right) \qquad (2)$$
and
$$l_{\beta=\pi/2}'(\alpha) = \frac{d_2\sin^3\alpha - d_1\cos^3\alpha}{\cos^2\alpha\,\sin^2\alpha}$$
with the unique stationary point
$$\alpha^* = \operatorname{arctg}\sqrt[3]{\frac{d_1}{d_2}}, \quad \alpha^* \in \left(0, \frac{\pi}{2}\right),$$
which is the local minimum of the function $l_{\beta=\pi/2}(\alpha)$. According to Theorem 1, it holds that
$$l_{\mathrm{maxLength}}\!\left(\frac{\pi}{2}\right) = l_{\beta=\pi/2}(\alpha^*).$$
By substituting the value $\alpha = \alpha^*$ into the objective function $l_{\beta=\pi/2}(\alpha)$ defined by (2) and using the basic trigonometric formulas, valid for $x \in \left(-\frac{\pi}{2}, \frac{\pi}{2}\right)$ [25],
$$\sin x = \frac{\operatorname{tg} x}{\sqrt{1 + \operatorname{tg}^2 x}} \quad \text{and} \quad \cos x = \frac{1}{\sqrt{1 + \operatorname{tg}^2 x}},$$
we obtain
$$l_{\beta=\pi/2}(\alpha^*) = \frac{d_1\sqrt{1 + \left(\frac{d_1}{d_2}\right)^{2/3}}}{\sqrt[3]{\frac{d_1}{d_2}}} + d_2\sqrt{1 + \left(\frac{d_1}{d_2}\right)^{2/3}} = \left(d_1^{2/3} + d_2^{2/3}\right)^{3/2}.$$
   □
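The closed form in Theorem 2 is easy to check numerically. The following sketch (the names are ours, not the paper's) evaluates the objective at the stationary point $\alpha^* = \operatorname{arctg}\sqrt[3]{d_1/d_2}$ and compares it with $(d_1^{2/3} + d_2^{2/3})^{3/2}$:

```python
import math

def right_angle_max_length(d1, d2):
    """Evaluate l_{beta=pi/2} at alpha* = arctan((d1/d2)^(1/3))."""
    alpha_star = math.atan((d1 / d2) ** (1 / 3))
    return d1 / math.sin(alpha_star) + d2 / math.cos(alpha_star)

# the value at the stationary point matches the astroid-type closed form
for d1, d2 in [(1.0, 1.0), (1.0, 2.0), (2.0, 7.0)]:
    closed_form = (d1 ** (2 / 3) + d2 ** (2 / 3)) ** 1.5
    assert abs(right_angle_max_length(d1, d2) - closed_form) < 1e-9
```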
Remark 1.
Note that the formula for $l_{\mathrm{maxLength}}(\frac{\pi}{2})$ is defined by the astroid curve; its graph for positive values of $d_1$ and $d_2$ is shown in Figure 3. The dependence
$$d_1^{2/3} + d_2^{2/3} = d^{2/3}$$
represents the minimum values of the widths of the two parts of the channel, $d_1$ and $d_2$, for a crossbar of length $d$ to pass through the channel.
Remark 2.
For the limiting values of the parameter $\beta$, we obtain, with the obvious interpretation:
$$\beta \to 0^+ \implies l_{\mathrm{maxLength}}(\beta) \to d_1 + d_2 \ \left(= \min\{l_{\beta=0}(\alpha) : \alpha \in (0, \pi)\}\right)$$
and
$$\beta \to \pi^- \implies l_{\mathrm{maxLength}}(\beta) \to \infty \ \left(\text{the feasible interval } (0, \pi - \beta) \text{ degenerates}\right).$$
As an illustrative example, if
$$\beta_0 = 1.999999999999 \cdot \frac{\pi}{2}, \quad d_1 = 1, \quad d_2 = 2,$$
then
$$l_{\mathrm{maxLength}}(\beta_0) = 3710327376774.69.$$
Listing 1 shows the MATLAB code for calculating $l_{\mathrm{maxLength}}(\beta)$, and Table 1 shows $l_{\mathrm{maxLength}}(\beta)$ for different values of the parameters $d_1$, $d_2$ and $\beta$, indicating the asymptotics of the solved problem, that is, $l_{\mathrm{maxLength}}(\beta) \to d_1 + d_2$ for $\beta \to 0^+$ and $l_{\mathrm{maxLength}}(\beta) \to \infty$ for $\beta \to \pi^-$. The analysis of the borderline cases (here, $\beta$ near $0$ and $\pi$), although not of great practical importance, is meaningful from the point of view of mathematical analysis because it points to overall trends in the change of the observed values (here, $l_{\mathrm{maxLength}}(\beta)$).
Listing 1. MATLAB code used for calculating l maxLength β for d 1 = 1 , d 2 = 2 and β = π 2 .
syms alpha                                   % only alpha needs to be symbolic
d1 = 1;                                      % width of the first channel part
d2 = 2;                                      % width of the second channel part
beta = pi/2;                                 % the angle between the navigable channels
l = (d1/sin(alpha))+(d2/sin(alpha+beta));    % the cost function from (1)
la = diff(l,alpha);                          % first derivative
eqn = la == 0;                               % stationarity condition
num = vpasolve(eqn,alpha,[0 pi-beta]);       % numerical solver
solution_max_length = simplify(subs(l,alpha,num),'Steps',20) % output
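For readers without the Symbolic Math Toolbox, the same computation can be mirrored in plain Python (a sketch under our own naming, not the paper's code). Since $l_\beta'' > 0$, the derivative $l_\beta'$ is strictly increasing on $(0, \pi - \beta)$, so its unique root can be found by bisection, playing the role of vpasolve in Listing 1:

```python
import math

def derivative(alpha, d1, d2, beta):
    """l_beta'(alpha): derivative of the cost function from (1)."""
    return (-d1 * math.cos(alpha) / math.sin(alpha) ** 2
            - d2 * math.cos(alpha + beta) / math.sin(alpha + beta) ** 2)

def solve_max_length(d1, d2, beta, tol=1e-13):
    """Find the root of l_beta' by bisection (l_beta' is strictly
    increasing because l_beta'' > 0), then return l_beta at the root."""
    lo, hi = 1e-9, math.pi - beta - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if derivative(mid, d1, d2, beta) < 0:
            lo = mid
        else:
            hi = mid
    alpha = 0.5 * (lo + hi)
    return d1 / math.sin(alpha) + d2 / math.sin(alpha + beta)

print(solve_max_length(1, 2, math.pi / 2))  # the setting of Listing 1
```

For $d_1 = 1$, $d_2 = 2$, $\beta = \pi/2$, this agrees with the closed form of Theorem 2 to machine precision.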

3. Conclusions

The purpose of the present paper is to solve the practical problem of channel navigability, namely, to calculate the maximum length of a crossbar (after mathematical abstraction, a line segment) that will pass through a bent channel. As it turns out, this problem can be formulated as a convex optimization problem. Moreover, of interest is the relationship between the values of $d_1$, $d_2$ and $d$ (the widths of the channel sections and the maximum length of the navigable crossbar, respectively): for $\beta = \pi/2$, these values trace the first-quadrant portion of the astroid curve $d_1^{2/3} + d_2^{2/3} = d^{2/3}$. In the future, it would certainly be interesting to derive an analogous analytical relationship (if it exists) for other values of the angle $\beta$ ($0 < \beta < \pi$).
As for future development, the proposed approach could be extended, for example, to solving fuzzy linear programming, fuzzy transportation and fuzzy shortest path problems, and DEA models [26,27].

Funding

This publication has been published with the support of the Ministry of Education, Science, Research and Sport of the Slovak Republic within project VEGA 1/0193/22 “Návrh identifikácie a systému monitorovania parametrov výrobných zariadení pre potreby prediktívnej údržby v súlade s konceptom Industry 4.0 s využitím technológií Industrial IoT”.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Ahmadi, A.A. Princeton University. 2015. Available online: https://www.princeton.edu/~aaa/Public/Teaching/ORF523/S16/ORF523_S16_Lec4_gh.pdf (accessed on 10 August 2022).
  2. Stigler, S.M. Gauss and the Invention of Least Squares. Ann. Stat. 1981, 9, 465–474. [Google Scholar] [CrossRef]
  3. Ghaoui, L.E. University of Berkeley. 2013. Available online: https://people.eecs.berkeley.edu/~elghaoui/Teaching/EE227BT/LectureNotes_EE227BT.pdf (accessed on 10 August 2022).
  4. Dantzig, G. Origins of the simplex method. In A History of Scientific Computing; Nash, S.G., Ed.; Association for Computing Machinery: New York, NY, USA, 1987; pp. 141–151. [Google Scholar]
  5. Dantzig, G. Linear Programming and Extensions; Princeton University Press: Princeton, NJ, USA, 1963. [Google Scholar]
  6. Fiacco, A.; McCormick, G. Nonlinear programming: Sequential Unconstrained Minimization Techniques; John Wiley and Sons: New York, NY, USA, 1968. [Google Scholar]
  7. Dikin, I. Iterative solution of problems of linear and quadratic programming. Sov. Math. Dokl. 1967, 174, 674–675. [Google Scholar]
  8. Dikin, I. On the speed of an iterative process. Upr. Sist. 1974, 12, 54–60. [Google Scholar]
  9. Yudin, D.; Nemirovskii, A. Informational complexity and efficient methods for the solution of convex extremal problems. Matekon 1976, 13, 22–45. [Google Scholar]
  10. Khachiyan, L. A polynomial time algorithm in linear programming. Sov. Math. Dokl. 1979, 20, 191–194. [Google Scholar]
  11. Shor, N. On the Structure of Algorithms for the Numerical Solution of Optimal Planning and Design Problems. Ph.D. Thesis, Cybernetic Institute, Academy of Sciences of the Ukrainian SSR, Kiev, Ukraine, 1964. [Google Scholar]
  12. Shor, N. Cut-off method with space extension in convex programming problems. Cybernetics 1977, 13, 94–96. [Google Scholar] [CrossRef]
  13. Rodomanov, A.; Nesterov, Y. Subgradient ellipsoid method for nonsmooth convex problems. Math. Program. 2022, 1–37. [Google Scholar] [CrossRef]
  14. Karmarkar, N. A new polynomial time algorithm for linear programming. Combinatorica 1984, 4, 373–395. [Google Scholar] [CrossRef]
  15. Adler, I.; Karmarkar, N.; Resende, M.; Veiga, G. An implementation of Karmarkar’s algorithm for linear programming. Math. Program. 1989, 44, 297–335. [Google Scholar] [CrossRef]
  16. Klee, V.; Minty, G. How Good Is the Simplex Algorithm? Inequalities—III; Shisha, O., Ed.; Academic Press: New York, NY, USA, 1972; pp. 159–175. [Google Scholar]
  17. Nesterov, Y.; Nemirovski, A.S. Interior Point Polynomial Time Methods in Convex Programming; SIAM: Philadelphia, PA, USA, 1994. [Google Scholar]
  18. Nemirovski, A.S.; Todd, M.J. Interior-point methods for optimization. Acta Numer. 2008, 17, 191–234. [Google Scholar] [CrossRef]
  19. Nimana, N. A Fixed-Point Subgradient Splitting Method for Solving Constrained Convex Optimization Problems. Symmetry 2020, 12, 377. [Google Scholar] [CrossRef]
  20. Han, D.; Liu, T.; Qi, Y. Optimization of Mixed Energy Supply of IoT Network Based on Matching Game and Convex Optimization. Sensors 2020, 20, 5458. [Google Scholar] [CrossRef] [PubMed]
  21. Popescu, C.; Grama, L.; Rusu, C. A Highly Scalable Method for Extractive Text Summarization Using Convex Optimization. Symmetry 2021, 13, 1824. [Google Scholar] [CrossRef]
  22. Jiao, Q.; Liu, M.; Li, P.; Dong, L.; Hui, M.; Kong, L.; Zhao, Y. Underwater Image Restoration via Non-Convex Non-Smooth Variation and Thermal Exchange Optimization. J. Mar. Sci. Eng. 2021, 9, 570. [Google Scholar] [CrossRef]
  23. Alfassi, Y.; Keren, D.; Reznick, B. The Non-Tightness of a Convex Relaxation to Rotation Recovery. Sensors 2021, 21, 7358. [Google Scholar] [CrossRef]
  24. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  25. Bartsch, H.-J.; Sachs, M. Taschenbuch Mathematischer Formeln für Ingenieure und Naturwissenschaftler, 24th ed.; Neu Bearbeitete Auflage; Carl Hanser Verlag GmbH & Co., KG.: Munich, Germany, 2018. [Google Scholar]
  26. Ebrahimnejad, A.; Verdegay, J.L. An efficient computational approach for solving type-2 intuitionistic fuzzy numbers based Transportation Problems. Int. J. Comput. Intell. Syst. 2016, 9, 1154–1173. [Google Scholar] [CrossRef]
  27. Ebrahimnejad, A.; Nasseri, S.H. Linear programmes with trapezoidal fuzzy numbers: A duality approach. Int. J. Oper. Res. 2012, 13, 67–89. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of the channel.
Figure 2. The objective function l β ( α ) and its second derivative for β = π 2 , d 1 = 1 and d 2 = 1 on the interval [ 0 , π β ] .
Figure 3. The minimum values of d 1 and d 2 for a 5-unit-long crossbar (d) to pass through the channel (with β = π 2 ).
Table 1. l maxLength β for the different values of the parameters by employing the code from Listing 1.
|                  | β = π/100 | β = π/6 | β = π/4 | β = π/2 | β = 2π/3 | β = 3π/4 | β = 99π/100 |
|------------------|-----------|---------|---------|---------|----------|----------|-------------|
| d1 = 1, d2 = 1   | 2.00      | 2.07    | 2.16    | 2.82    | 4.00     | 5.22     | 127.32      |
| d1 = 1, d2 = 3   | 4.00      | 4.10    | 4.25    | 5.40    | 7.54     | 9.80     | 237.60      |
| d1 = 1, d2 = 5   | 6.00      | 6.12    | 6.29    | 7.77    | 10.69    | 13.84    | 333.35      |
| d1 = 2, d2 = 7   | 9.00      | 9.22    | 9.54    | 12.01   | 16.70    | 21.69    | 524.70      |
| d1 = 2, d2 = 9   | 11.00     | 11.23   | 11.58   | 14.38   | 19.84    | 25.71    | 620.27      |
| d1 = 2, d2 = 11  | 13.00     | 13.24   | 13.60   | 16.70   | 22.90    | 29.61    | 712.44      |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Cite as: Vrabel, R. On One Problem of the Nonlinear Convex Optimization. AppliedMath 2022, 2, 512–517. https://doi.org/10.3390/appliedmath2040030