# RPCGB Method for Large-Scale Global Optimization Problems


## Abstract


## 1. Introduction

## 2. Notations and Assumptions

## 3. Conditional Gradient Method

#### 3.1. Conditional Gradient Algorithm

**Algorithm 1** Conditional gradient algorithm.

1: Choose an initial point $x^{(0)} \in M$ in the feasible set $M$.

2: **for** $t = 0, 1, 2, \dots, T$ **do**

3: Compute $\mathbf{s}_t := \mathrm{LMO}_M(\nabla f(x^{(t)})) := \operatorname*{argmin}_{s \in M} \nabla f(x^{(t)})^{\top} s$ (**LMO**: linear minimization oracle)

4: Let $\mathbf{d}_t := \mathbf{s}_t - x^{(t)}$ (conditional gradient direction)

5: Compute $g_t := \langle -\nabla f(x^{(t)}), \mathbf{d}_t \rangle$ (conditional gradient gap)

6: **if** $g_t < \epsilon$ **then return** $x^{(t)}$

7: Compute the optimal line-search step size $\alpha_t \in \operatorname*{argmin}_{\alpha \in [0,1]} f(x^{(t)} + \alpha \mathbf{d}_t)$

8: Update $x^{(t+1)} := x^{(t)} + \alpha_t \mathbf{d}_t$

9: **end for**

10: **return** $x^{(T)}$
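Algorithm 1 can be sketched in code. The following is a minimal, self-contained illustration on the toy problem $f(x) = \frac{1}{2}\|x - c\|^2$ over a box constraint set, chosen because its LMO and exact line search have closed forms; the toy objective and the function name `conditional_gradient` are assumptions for illustration, not part of the paper.

```python
import numpy as np

def conditional_gradient(c, lo=0.0, hi=1.0, T=100, eps=1e-4):
    """Frank-Wolfe (conditional gradient) sketch for the toy problem
    f(x) = 0.5*||x - c||^2 over the box M = [lo, hi]^n, following
    the steps of Algorithm 1 (illustrative, not the paper's code)."""
    x = np.full(c.shape, lo)                 # step 1: feasible x^(0)
    for t in range(T):                       # step 2
        g = x - c                            # gradient of f at x^(t)
        s = np.where(g > 0, lo, hi)          # step 3: LMO over the box
        d = s - x                            # step 4: direction d_t
        gap = -(g @ d)                       # step 5: conditional gradient gap
        if gap < eps:                        # step 6: stopping criterion
            return x
        # step 7: exact line search; f(x + a*d) is quadratic in a,
        # minimized at a = -(g @ d) / (d @ d), clipped to [0, 1]
        alpha = min(1.0, max(0.0, -(g @ d) / (d @ d)))
        x = x + alpha * d                    # step 8: update
    return x                                 # step 10

# For this toy objective, the minimizer over [0, 1]^3 is the
# projection of c onto the box.
x_star = conditional_gradient(np.array([0.3, 0.8, -0.5]))
```

Note that the gap $g_t$ both certifies progress (it upper-bounds $f(x^{(t)}) - f^*$ for convex $f$) and guards the line search: when $g_t \geq \epsilon$ the direction $\mathbf{d}_t$ is nonzero, so the division in step 7 is safe.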

#### 3.2. Bisection Algorithm
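The bisection algorithm of this section is used to select the step size. As a hedged sketch only: a standard bisection line search locates the minimizer of $\varphi(\alpha) = f(x^{(t)} + \alpha \mathbf{d}_t)$ on $[0, 1]$ by bisecting on the sign of $\varphi'(\alpha)$, assuming $\varphi$ is convex and differentiable. The function name and this exact variant are illustrative assumptions, not necessarily the paper's construction.

```python
def bisection_step_size(dphi, a=0.0, b=1.0, tol=1e-8, max_iter=100):
    """Bisection on the derivative of phi(alpha) = f(x + alpha*d).

    For convex, differentiable phi, the optimal step size in [a, b]
    is where phi' changes sign; bisection halves the bracket until
    it is shorter than tol (illustrative sketch only).
    """
    if dphi(a) >= 0:
        return a              # phi non-decreasing at a: minimizer at left end
    if dphi(b) <= 0:
        return b              # phi still decreasing at b: take the full step
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        if dphi(m) > 0:
            b = m             # minimizer lies to the left of m
        else:
            a = m             # minimizer lies to the right of m
        if b - a < tol:
            break
    return 0.5 * (a + b)

# Example: phi(alpha) = (alpha - 0.3)^2, so phi'(alpha) = 2*(alpha - 0.3),
# whose minimizer on [0, 1] is alpha = 0.3.
alpha = bisection_step_size(lambda a: 2.0 * (a - 0.3))
```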

## 4. RPCGB Method

**Lemma 1.**

**Proof.**

**Lemma 2.**

**Theorem 1.**

**Proof.**

**Theorem 2.**

**Proof.**
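The RPCGB method analyzed above combines the conditional gradient direction with a random perturbation of the iterate. The following is an illustrative sketch only: Gaussian perturbations with magnitude decreasing in the iteration counter, accepted only when they improve the objective, are one standard choice in the random-perturbation literature for escaping local minima. The names `perturbed_descent`, `sigma0`, `k_sto`, and this exact scheme are assumptions, not the authors' construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturbed_descent(f, x, t, sigma0=1.0, k_sto=10):
    """Try k_sto random perturbations of the iterate x and keep the
    best one; the perturbation magnitude shrinks with the iteration
    counter t, so perturbations fade out as the method converges
    (illustrative sketch, not the paper's exact scheme)."""
    best = x
    sigma = sigma0 / np.sqrt(t + 1)           # decreasing perturbation size
    for _ in range(k_sto):                    # k_sto stochastic trials
        y = x + sigma * rng.standard_normal(x.shape)
        if f(y) < f(best):                    # accept only improvements
            best = y
    return best

# Example on f(x) = ||x||^2: by construction, the returned point is
# never worse than the starting point.
f = lambda z: float(z @ z)
x0 = np.array([1.0, -1.0])
x1 = perturbed_descent(f, x0, t=0)
```

In the full RPCGB method the perturbed point would feed back into the conditional gradient step; only the perturb-and-accept logic is shown here.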

## 5. Numerical Results

- (i) "CGB": the conditional gradient and bisection method;
- (ii) "RPCGB": the conditional gradient and bisection method with random perturbation.

**Problem 1.**

**Problem 2.**

**Problem 3.**

**Problem 4.**

**Problem 5.**

**Problem 6.**

The convergence plots in panel (**d**) of each figure show that the proposed RPCGB algorithm performs better than the CGB algorithm, converging in fewer iterations in the majority of cases. An exception is Problem 4, presented in Figure 4, where the CGB algorithm stops early; this shows that the convergence behavior of optimization algorithms can vary with the problem being solved. Both algorithms terminated before the 30th iteration because the stopping criterion $\epsilon = 10^{-4}$ was met. The algorithms stop either upon reaching an optimal solution (local or global) or upon reaching the maximum number of iterations. We observe that the random perturbation has a significant effect on convergence, indicating that this modification improves the algorithm's performance.

## 6. Conclusions

## Author Contributions

## Funding

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## References


**Figure 1.** (**a**) Scatter plot of solution distribution for the CGB and RPCGB algorithms (n = 900). (**b**) Objective function values over iterations for the CGB method (n = 9000). (**c**) Objective function values over iterations for the RPCGB method (n = 9000). (**d**) Convergence performance of the CGB and RPCGB methods with n = 9000 for Problem 1.

**Figure 2.** (**a**) Scatter plot of solution distribution for the CGB and RPCGB algorithms (n = 900). (**b**) Objective function values over iterations for the CGB method (n = 9000). (**c**) Objective function values over iterations for the RPCGB method (n = 9000). (**d**) Convergence performance of the CGB and RPCGB methods with n = 9000 for Problem 2.

**Figure 3.** (**a**) Scatter plot of solution distribution for the CGB and RPCGB algorithms (n = 900). (**b**) Objective function values over iterations for the CGB method (n = 9000). (**c**) Objective function values over iterations for the RPCGB method (n = 9000). (**d**) Convergence performance of the CGB and RPCGB methods with n = 9000 for Problem 3.

**Figure 4.** (**a**) Scatter plot of solution distribution for the CGB and RPCGB algorithms (n = 2). (**b**) Objective function values over iterations for the CGB method (n = 9000). (**c**) Objective function values over iterations for the RPCGB method (n = 9000). (**d**) Convergence performance of the CGB and RPCGB methods with n = 9000 for Problem 4.

**Figure 5.** (**a**) Scatter plot of solution distribution for the CGB and RPCGB algorithms (n = 2). (**b**) Objective function values over iterations for the CGB method (n = 9000). (**c**) Objective function values over iterations for the RPCGB method (n = 9000). (**d**) Convergence performance of the CGB and RPCGB methods with n = 9000 for Problem 5.

**Figure 6.** (**a**) Scatter plot of solution distribution for the CGB and RPCGB algorithms (n = 2). (**b**) Objective function values over iterations for the CGB method (n = 4000). (**c**) Objective function values over iterations for the RPCGB method (n = 4000). (**d**) Convergence performance of the CGB and RPCGB methods with n = 4000 for Problem 6.

Results for Problem 1 (CGB vs. RPCGB):

| $n$ | $n_c$ | $k_{iter}$ (CGB) | CPU (CGB) | $f_{CGB}^{*}$ | $k_{iter}$ (RPCGB) | CPU (RPCGB) | $k_{sto}$ | $f_{RPCGB}^{*}$ |
|---|---|---|---|---|---|---|---|---|
| 500 | 1000 | 9 | 0.11 | $-1.06 \times 10^{6}$ | 9 | 0.68 | 2 | $-1.61 \times 10^{7}$ |
| 900 | 1800 | 9 | 0.13 | $-3.41 \times 10^{6}$ | 4 | 0.25 | 2 | $-6.32 \times 10^{7}$ |
| 2000 | 4000 | 10 | 3.27 | $-1.08 \times 10^{7}$ | 6 | 5.31 | 5 | $-3.53 \times 10^{8}$ |
| 4000 | 8000 | 12 | 19.33 | $-5.53 \times 10^{7}$ | 7 | 27.27 | 10 | $-1.48 \times 10^{9}$ |
| 6000 | 12,000 | 19 | 35.70 | $-1.34 \times 10^{7}$ | 9 | 46.54 | 10 | $-3.35 \times 10^{9}$ |
| 9000 | 18,000 | 27 | 87.92 | $-1.61 \times 10^{7}$ | 13 | 91.16 | 10 | $-7.62 \times 10^{9}$ |

Results for Problem 2 (CGB vs. RPCGB):

| $n$ | $n_c$ | $k_{iter}$ (CGB) | CPU (CGB) | $f_{CGB}^{*}$ | $k_{iter}$ (RPCGB) | CPU (RPCGB) | $k_{sto}$ | $f_{RPCGB}^{*}$ |
|---|---|---|---|---|---|---|---|---|
| 500 | 1000 | 4 | 0.07 | −50 | 2 | 0.02 | 1 | −50 |
| 900 | 1800 | 5 | 0.09 | −90 | 2 | 0.05 | 1 | −90 |
| 2000 | 4000 | 7 | 0.11 | −199.99 | 3 | 0.09 | 1 | −200 |
| 4000 | 8000 | 9 | 0.19 | −400 | 4 | 0.12 | 1 | −400 |
| 6000 | 12,000 | 10 | 0.37 | −599.99 | 7 | 0.15 | 1 | −600 |
| 9000 | 18,000 | 13 | 0.42 | −900 | 9 | 0.21 | 1 | −900 |

Results for Problem 3 (CGB vs. RPCGB):

| $n$ | $n_c$ | $k_{iter}$ (CGB) | CPU (CGB) | $f_{CGB}^{*}$ | $k_{iter}$ (RPCGB) | CPU (RPCGB) | $k_{sto}$ | $f_{RPCGB}^{*}$ |
|---|---|---|---|---|---|---|---|---|
| 500 | 1000 | 5 | 0.05 | −498.99 | 3 | 0.04 | 1 | −499 |
| 900 | 1800 | 7 | 0.07 | −898.99 | 6 | 0.07 | 1 | −898.76 |
| 2000 | 4000 | 11 | 0.12 | −1475.44 | 11 | 0.19 | 1 | −1998.99 |
| 4000 | 8000 | 18 | 0.31 | −2951.62 | 12 | 0.47 | 1 | −3999 |
| 6000 | 12,000 | 24 | 0.79 | −4427.81 | 19 | 0.74 | 1 | −5998.87 |
| 9000 | 18,000 | 35 | 1.03 | −6642.08 | 27 | 0.96 | 1 | −8998.25 |

Results for Problem 4 (CGB vs. RPCGB):

| $n$ | $n_c$ | $k_{iter}$ (CGB) | CPU (CGB) | $f_{CGB}^{*}$ | $k_{iter}$ (RPCGB) | CPU (RPCGB) | $k_{sto}$ | $f_{RPCGB}^{*}$ |
|---|---|---|---|---|---|---|---|---|
| 500 | 1000 | 23 | 11.41 | −131.81 | 45 | 19.53 | 25 | −176.72 |
| 900 | 1800 | 29 | 17.06 | −214.79 | 57 | 21.34 | 30 | −293.51 |
| 2000 | 4000 | 34 | 42.66 | −417.95 | 69 | 71.26 | 30 | −536.38 |
| 4000 | 8000 | 56 | 67.18 | −768.22 | 75 | 96.02 | 50 | $-1.06 \times 10^{3}$ |
| 6000 | 12,000 | 73 | 79.63 | −846.01 | 94 | 110.63 | 70 | $-1.11 \times 10^{3}$ |
| 9000 | 18,000 | 89 | 99.25 | −919.85 | 124 | 136.71 | 90 | $-1.35 \times 10^{3}$ |

Results for Problem 5 (CGB vs. RPCGB):

| $n$ | $n_c$ | $k_{iter}$ (CGB) | CPU (CGB) | $f_{CGB}^{*}$ | $k_{iter}$ (RPCGB) | CPU (RPCGB) | $k_{sto}$ | $f_{RPCGB}^{*}$ |
|---|---|---|---|---|---|---|---|---|
| 500 | 500 | 19 | 8.04 | $4.71 \times 10^{-4}$ | 14 | 9.23 | 70 | $1.38 \times 10^{-11}$ |
| 900 | 900 | 29 | 14.13 | $8.51 \times 10^{-4}$ | 14 | 14.25 | 70 | $3.05 \times 10^{-10}$ |
| 2000 | 2000 | 42 | 33.09 | 0.0097 | 26 | 45.31 | 150 | $4.96 \times 10^{-10}$ |
| 4000 | 4000 | 30 | 53.63 | 0.0194 | 17 | 97.27 | 200 | $7.93 \times 10^{-7}$ |
| 6000 | 6000 | 59 | 71.47 | 0.0291 | 19 | 122.54 | 300 | $7.93 \times 10^{-9}$ |
| 9000 | 9000 | 77 | 92.55 | 0.0436 | 13 | 153.16 | 800 | $7.93 \times 10^{-5}$ |

Results for Problem 6 (CGB vs. RPCGB):

| $n$ | $n_c$ | $k_{iter}$ (CGB) | CPU (CGB) | $f_{CGB}^{*}$ | $k_{iter}$ (RPCGB) | CPU (RPCGB) | $k_{sto}$ | $f_{RPCGB}^{*}$ |
|---|---|---|---|---|---|---|---|---|
| 500 | 499 | 2 | 0.09 | −1.8856 | 5 | 0.28 | 10 | −4.0147 |
| 900 | 899 | 19 | 1.65 | 0.7033 | 7 | 0.38 | 10 | −4.4432 |
| 1000 | 999 | 13 | 0.92 | 0.1626 | 12 | 1.08 | 10 | −4.6572 |
| 2000 | 1999 | 21 | 1.70 | 2.0535 | 8 | 17.24 | 40 | −3.1399 |
| 3000 | 2999 | 23 | 2.96 | 0.9158 | 9 | 27.54 | 60 | −3.2145 |
| 4000 | 3999 | 12 | 1.47 | 2.0097 | 11 | 48.56 | 100 | −4.6168 |


© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

Ettahiri, A.; El Mouatasim, A. RPCGB Method for Large-Scale Global Optimization Problems. *Axioms* **2023**, *12*, 603. https://doi.org/10.3390/axioms12060603