# The Algorithm of Continuous Optimization Based on the Modified Cellular Automaton


## Abstract


## 1. Introduction

## 2. Methods

#### 2.1. Optimization on the Basis of Learning Cellular Automata

**P** is a probability vector that determines the probability of selecting each state at a given step. The learning algorithm modifies the action probability distribution vector according to the responses received from the environment. In [16], a linear reward-penalty algorithm is used for automaton learning.
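The linear reward-penalty update can be sketched as follows. This is the standard $L_{R-P}$ scheme; the learning rates `a` and `b` are illustrative defaults, since the excerpt does not give the values used in [16]:

```python
import random

def reward_update(p, chosen, a=0.1):
    """Reward step of L_R-P: increase the chosen action's probability,
    scale the remaining probabilities down proportionally."""
    return [pi + a * (1 - pi) if i == chosen else (1 - a) * pi
            for i, pi in enumerate(p)]

def penalty_update(p, chosen, b=0.05):
    """Penalty step of L_R-P: decrease the chosen action's probability,
    redistribute the freed mass evenly among the other actions."""
    r = len(p)
    return [(1 - b) * pi if i == chosen else b / (r - 1) + (1 - b) * pi
            for i, pi in enumerate(p)]

def select_action(p, rng=random):
    """Sample an action index according to the probability vector p."""
    return rng.choices(range(len(p)), weights=p)[0]
```

Both updates keep `p` a valid probability distribution, so the automaton can be applied repeatedly as environmental responses arrive.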

- Step 1. Initialize the population: the initial elements of the search space are generated randomly, and the probabilities of all actions of each learning automaton are set equal.
- Step 2. While the stopping condition is not reached, synchronously update each cell using model-based evolution.
- Step 2.1. For each cell with candidate solution $CS=\{CS_{1},\dots,CS_{n}\}$ and solution model $M=\{LA_{1},\dots,LA_{n}\}$ do:
- Step 2.1.1. Randomly partition M into l mutually disjoint groups $G=\{G_{1},\dots,G_{l}\}$.
- Step 2.1.2. For each nonempty group $G_{i}$ do:
- Step 2.1.3. Create a copy $CSC^{i}=\{CSC_{1}^{i},\dots,CSC_{n}^{i}\}$ of $CS=\{CS_{1},\dots,CS_{n}\}$ for $G_{i}$.
- Step 2.1.4. For each $LA_{d}\in G_{i}$ associated with the d-th dimension of CS do:
- Step 2.1.4.1. Select an action from the action set of $LA_{d}$ according to its probability.
- Step 2.1.4.2. Let this action correspond to an interval $[s_{d,j},e_{d,j}]$.
- Step 2.1.4.3. Draw a uniform random number r from the interval $[s_{d,j},e_{d,j}]$ and set $CSC_{d}^{i}$ to r.

- Step 2.1.5. Evaluate $CSC^{i}$ with the objective function.

- Step 2.2. For each cell do:
- Step 2.2.1. Create a reinforcement signal for each of its learning automata.
- Step 2.2.2. Update the action probabilities of each learning automaton based on its received reinforcement signal.
- Step 2.2.3. Refine the actions of each learning automaton in the cell.

- Step 3. If the generation number is a multiple of 5 do:
- Step 3.1. Synchronously update each cell using DE-based evolution.
- Step 4. Return the best solution found.
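Steps 2.1.1–2.1.5 for a single cell can be sketched as follows. The container layout (`intervals`, `probs`), the equal-sized grouping, and the function name are illustrative assumptions, not the authors' implementation:

```python
import random

def model_based_step(cs, intervals, probs, phi, l=3, rng=random):
    """One model-based evolution pass (Steps 2.1.1-2.1.5) for a single cell.

    cs        -- current candidate solution, one value per dimension
    intervals -- intervals[d][j] = (s, e): action j of automaton LA_d
    probs     -- probs[d][j]: probability of action j of LA_d
    phi       -- objective function
    l         -- number of disjoint groups to partition the automata into
    """
    n = len(cs)
    dims = list(range(n))
    rng.shuffle(dims)
    # Step 2.1.1: random disjoint partition (equal-sized groups for simplicity).
    groups = [dims[g::l] for g in range(l)]
    copies = []
    for group in groups:                  # Step 2.1.2: nonempty groups only
        if not group:
            continue
        csc = list(cs)                    # Step 2.1.3: copy CS for this group
        for d in group:                   # Step 2.1.4
            # Step 2.1.4.1: select an action by its probability.
            j = rng.choices(range(len(probs[d])), weights=probs[d])[0]
            s, e = intervals[d][j]        # Step 2.1.4.2: its interval
            csc[d] = rng.uniform(s, e)    # Step 2.1.4.3: uniform resample
        copies.append((csc, phi(csc)))    # Step 2.1.5: evaluate the copy
    return copies
```

Each returned copy differs from `cs` only in the dimensions of its group, which is what lets the reinforcement signals of Step 2.2 attribute improvement to specific automata.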

#### 2.2. The Cellular Automaton with an Objective Function

**Definition 1.**

**Y**, are taken to be arguments of the function σ [14,17,18].

**Definition 2.**

**L** and **Y** correspond to the analogous components of the classic model; the set A is modified to $A=\Re^{m}\times D$, where D is a set of labels; $\sigma:\left(\Re^{m}\times D\right)^{k}\to\Re^{m}$ is a local transition function; $U:\Re^{m}\to D$ is a cell marking-out rule; and $\Phi=\Phi\left(x_{1},\dots,x_{m}\right)$ is an objective function.

The label of a cell characterizes the **x** vector in terms of its proximity to the best solution obtained at the given step of the cellular automaton evolution. Let us call the **x** vector the state of a cell. The local transition function can be defined only analytically, because it operates with real rather than integer values.

The elements of the **x** vector contained in the cell vary sequentially and independently of each other. The new value of the i-th element of the **x** vector is calculated, according to the chosen evolution rule, from the corresponding values contained in the cells belonging to the neighborhood of the given cell.

#### 2.3. The Procedure of the Cellular Automaton with an Objective Function Evolvement

**x** with the j-th element value.

#### 2.4. The Suggested Algorithm of Continuous Optimization on the Basis of the Cellular Automaton with an Objective Function

**Y**; the cell marking-out rule parameter μ; the ratio ω of acceptable remoteness of neighborhood cell states from the corresponding central cell elements; an objective function of m variables $\Phi=\Phi\left(x_{1},\dots,x_{m}\right)$, whose optimum (minimum) is to be found; the search space borders $\left[a,b\right]^{m}\subset\Re^{m}$; and the number of cellular automaton evolvement steps T.

- Step 1. Fill the states of all cellular automaton lattice cells with random values from $\left[a,b\right]^{m}$.
- Step 2. Calculate the objective function value for each lattice cell, storing these values in a matrix $\mathbf{M}={\left({m}_{ij}\right)}_{i=1,j=1}^{{l}_{1},{l}_{2}}$.
- Step 3. Find the minimal element of the **M** matrix. Write its value to ${\Phi}_{\text{min}}$ and its coordinates to $\left(p,q\right)$.
- Step 4. Find the maximal element of the **M** matrix. Write its value to ${\Phi}_{\text{max}}$.
- Step 5. Calculate the label for each cell according to rule (1).
- Step 6. For t from 1 to T do:
- Step 6.1. For i from 1 to 3 do:
- Step 6.1.1. Update every lattice cell state according to evolvement rule (2) and neighborhood pattern **Y**.
- Step 6.1.2. Update the **M** matrix, ${\Phi}_{\text{min}}$, ${\Phi}_{\text{max}}$, and $\left(p,q\right)$.
- Step 6.1.3. Update every cell label according to rule (1).
- Step 6.2. Update every cell state according to evolvement rule (3) and neighborhood pattern **Y**.
- Step 6.3. Update the **M** matrix, ${\Phi}_{\text{min}}$, ${\Phi}_{\text{max}}$, and $\left(p,q\right)$.
- Step 6.4. Update every cell label according to rule (1).
- Step 7. Return the state of the cell with coordinates $\left(p,q\right)$ as the **x** vector and finish the algorithm.
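The bookkeeping of Steps 1–7 can be sketched as follows. The paper's rules (1)–(3) are not reproduced in this excerpt, so the cell update below is a simple stand-in (pulling every state toward the current best cell with small noise), and cell labels are omitted; only the loop structure mirrors the algorithm:

```python
import numpy as np

def ca_optimize(phi, m, a, b, l1=10, l2=10, T=50, seed=0):
    """Loop skeleton of the Section 2.4 algorithm. The update line stands in
    for evolvement rules (2) and (3); labels (rule (1)) are omitted."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(a, b, size=(l1, l2, m))        # Step 1: random states
    M = np.apply_along_axis(phi, 2, X)             # Step 2: objective matrix M
    p, q = np.unravel_index(M.argmin(), M.shape)   # Step 3: best cell (p, q)
    for _ in range(T):                             # Step 6
        best = X[p, q].copy()
        # Stand-in update: pull every state toward the best, add small noise.
        X += 0.5 * (best - X) + rng.normal(0.0, 0.01 * (b - a), X.shape)
        np.clip(X, a, b, out=X)
        X[p, q] = best                             # keep the best state intact
        M = np.apply_along_axis(phi, 2, X)         # Steps 6.1.2 / 6.3
        p, q = np.unravel_index(M.argmin(), M.shape)
    return X[p, q], M[p, q]                        # Step 7: best state found
```

On a smooth function such as the sphere $\Phi_{8}$, even this crude stand-in converges toward the optimum, which makes the skeleton useful for checking the bookkeeping before plugging in the actual rules.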

## 3. Experiments

The cellular automaton parameters were set as follows: **L** = (10, 10), $\mathbf{Y}=\left(\left(-1,-1\right),\left(-1,0\right),\left(-1,1\right),\left(0,-1\right),\left(0,0\right),\left(0,1\right),\left(1,-1\right),\left(1,0\right),\left(1,1\right)\right)$, $\mu=0.4$, $\omega\in[0.01,0.025]$, $T<500$.

## 4. Conclusions

- The objective function $\mathrm{\Phi}$ is included in the model.
- Each cell of the array contains a real-valued vector $\mathbf{x}\in{\Re}^{m}$, which is considered an element of the domain of the objective function $\mathrm{\Phi}$.
- The evolution of the cellular automaton is organized so that, when the state of each cell of the array is updated, the new state is closer to the optimum.

## Acknowledgments

## Author Contributions

## Conflicts of Interest

## References

- Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the 1995 IEEE International Conference on Neural Networks, University of Western Australia, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948.
- Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. **1997**, 11, 341–359.
- Karaboga, D.; Akay, B. A survey: Algorithms simulating bee swarm intelligence. Artif. Intell. Rev. **2009**, 31, 61–85.
- Khodashinskii, I.A.; Dudin, P.A. Identification of fuzzy systems using a continuous ant colony algorithm. Optoelectron. Instrum. Data Process. **2012**, 48, 54–61.
- Canyurt, O.E.; Hajela, P. Cellular genetic algorithm technique for the multicriterion design optimization. Struct. Multidiscip. Optim. **2010**, 40, 201–214.
- Sidiropoulos, E.; Fotakis, D. Cell-based genetic algorithm and simulated annealing for spatial groundwater allocation. WSEAS Trans. Environ. Dev. **2009**, 5, 351–360.
- Sidiropoulos, E. Harmony Search and Cellular Automata in Spatial Optimization. Appl. Math. **2012**, 3, 1532–1537.
- Du, T.S.; Fei, P.S.; Shen, Y.J. A new cellular automata-based mixed cellular ant algorithm for solving continuous system optimization programs. In Proceedings of the 4th International Conference on Natural Computation, Jinan, China, 18–20 October 2008; pp. 407–411.
- Gholizadeh, S. Layout optimization of truss structures by hybridizing cellular automata and particle swarm optimization. Comput. Struct. **2013**, 125, 86–99.
- Tovar, A.; Patel, N.M.; Niebur, G.L.; Sen, M.; Renaud, J.E. Topology optimization using a hybrid cellular automaton method with local control rules. J. Mech. Des. **2006**, 128, 1205–1216.
- Penninger, C.L.; Watson, L.T.; Tovar, A.; Renaud, J.E. Convergence analysis of hybrid cellular automata for topology optimization. Struct. Multidiscip. Optim. **2010**, 40, 201–214.
- Bochenek, B.; Tajs-Zielinska, K. Novel local rules of cellular automata applied to topology and size optimization. Eng. Optim. **2012**, 44, 23–35.
- Han, M.; Liu, C.; Xing, J. An evolutionary membrane algorithm for global numerical optimization problems. Inf. Sci. **2014**, 276, 219–241.
- Toffoli, T.; Margolus, N. Cellular Automata Machines; MIT Press: Cambridge, MA, USA, 1987.
- Yazdani, D.; Golyari, S.; Meybodi, M.R. A New Hybrid Algorithm for Optimization Based on Artificial Fish Swarm Algorithm and Cellular Learning Automata. In Proceedings of the 5th International Symposium on Telecommunications, Sharif University of Technology, Tehran, Iran, 4–6 December 2010; pp. 932–937.
- Vafashoar, R.; Meybodi, M.R.; Momeni Azandaryani, A.H. CLA-DE: A hybrid model based on cellular learning automata for numerical optimization. Appl. Intell. **2012**, 36, 735–748.
- Bandman, O.L. Discrete Models of Physicochemical Processes and Their Parallel Implementation. In Proceedings of the Second Russia-Taiwan Symposium on Methods and Tools of Parallel Programming Multicomputers, Vladivostok, Russia, 16–19 May 2010.
- Tanaka, I. Effects of Initial Symmetry on the Global Symmetry of One-Dimensional Legal Cellular Automata. Symmetry **2015**, 7, 1768–1779.
- Derrac, J.; Garcia, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. **2011**, 1, 3–18.

**Figure 1.** The symmetry of the cellular automaton neighborhood: (**a**) Moore neighborhoods; (**b**) von Neumann neighborhoods.

**Figure 4.** The results of optimization under independent usage of cellular automaton rules (2) and (3): (**a**) for the objective function $\Phi_{2}$; (**b**) for the objective function $\Phi_{6}$.

**Figure 5.** The results of optimization of the objective function $\Phi_{3}$ under different values of the cellular automaton parameters: (**a**) varying the lattice size with a fixed 3 × 3 Moore neighborhood; (**b**) varying the neighborhood size for a 10 × 10 lattice; (**c**) varying the lattice form with a comparable number of cells.

**Figure 6.** The comparison of the developed algorithm and the CLA-DE algorithm: panels (**a**)–(**o**) show the results for the objective functions $\Phi_{1}$–$\Phi_{15}$, respectively.

Function | Dimension | Search Space | Optimum
---|---|---|---
$\Phi_{1}=\sum_{i=1}^{m}\left(-x_{i}\sin\left(\sqrt{\left|x_{i}\right|}\right)\right)$ | 30 | $\left[-500,500\right]^{m}$ | −12,569.49
$\Phi_{1}^{\ast}=\sum_{i=1}^{m}\left(-x_{i}\sin\left(\sqrt{\left|x_{i}\right|}\right)\right)+418.9829m$ | 30 | $\left[-500,500\right]^{m}$ | 0
$\Phi_{2}=\sum_{i=1}^{m}\left(x_{i}^{2}-10\cos\left(2\pi x_{i}\right)+10\right)$ | 30 | $\left[-5.12,5.12\right]^{m}$ | 0
$\Phi_{3}=-20\exp\left(-\frac{1}{5}\sqrt{\frac{1}{m}\sum_{i=1}^{m}x_{i}^{2}}\right)-\exp\left(\frac{1}{m}\sum_{i=1}^{m}\cos\left(2\pi x_{i}\right)\right)+20+\exp(1)$ | 30 | $\left[-32,32\right]^{m}$ | 0
$\Phi_{4}=\frac{1}{4000}\sum_{i=1}^{m}x_{i}^{2}-\prod_{i=1}^{m}\cos\left(\frac{x_{i}}{\sqrt{i}}\right)+1$ | 30 | $\left[-600,600\right]^{m}$ | 0
$\Phi_{5}=-\sum_{i=1}^{m}\sin\left(x_{i}\right)\sin^{20}\left(\frac{ix_{i}^{2}}{\pi}\right)$ | 100 | $\left[0,\pi\right]^{m}$ | −99.2784
$\Phi_{6}=\frac{1}{m}\sum_{i=1}^{m}\left(x_{i}^{4}-16x_{i}^{2}+5x_{i}\right)$ | 100 | $\left[-5,5\right]^{m}$ | −78.33236
$\Phi_{7}=\sum_{i=1}^{m-1}\left[100\left(x_{i}-x_{i+1}\right)^{2}+\left(x_{i}-1\right)^{2}\right]$ | 30 | $\left[-30,30\right]^{m}$ | 0
$\Phi_{8}=\sum_{i=1}^{m}x_{i}^{2}$ | 30 | $\left[-500,500\right]^{m}$ | 0
$\Phi_{9}=\sum_{i=1}^{m}\left|x_{i}\right|+\prod_{i=1}^{m}\left|x_{i}\right|$ | 30 | $\left[-10,10\right]^{m}$ | 0
$\Phi_{10}=\sum_{i=1}^{m-1}\left[100\left(x_{i+1}-x_{i}^{2}\right)^{2}+\left(x_{i}-1\right)^{2}\right]$ | 10 | $\left[-2.048,2.048\right]^{m}$ | 0
$\Phi_{11}=\sum_{i=1}^{m}\left(\lfloor x_{i}+0.5\rfloor\right)^{2}$ | 30 | $\left[-100,100\right]^{m}$ | 0
$\Phi_{12}=-\cos\left(2\pi\sqrt{\sum_{i=1}^{m}x_{i}^{2}}\right)+0.1\sqrt{\sum_{i=1}^{m}x_{i}^{2}}+1$ | 30 | $\left[-100,100\right]^{m}$ | 0
$\Phi_{13}=\sum_{j=1}^{m}\sum_{i=1}^{m}\left(\frac{y_{i,j}^{2}}{4000}-\cos\left(y_{i,j}\right)+1\right)$, where $y_{i,j}=100\left(x_{j}-x_{i}^{2}\right)^{2}+\left(1-x_{i}\right)^{2}$ | 30 | $\left[-100,100\right]^{m}$ | 0
$\Phi_{14}=\frac{\pi}{m}\left(\sum_{i=1}^{m-1}\left(y_{i}-1\right)^{2}\left[1+10\sin^{2}\left(\pi y_{i+1}\right)\right]+10\sin^{2}\left(\pi y_{1}\right)+\left(y_{m}-1\right)^{2}\right)+\sum_{i=1}^{m}u\left(x_{i},10,100,4\right)$, where $y_{i}=1+\frac{1}{4}\left(x_{i}+1\right)$ and $u(x_{i},a,k,m)=\begin{cases}k\left(x_{i}-a\right)^{m}, & x_{i}>a,\\ 0, & -a\le x_{i}\le a,\\ k\left(-x_{i}-a\right)^{m}, & x_{i}<-a\end{cases}$ | 30 | $\left[-50,50\right]^{m}$ | 0
$\Phi_{15}=\frac{1}{10}\left(\sum_{i=1}^{m-1}\left(x_{i}-1\right)^{2}\left[1+\sin^{2}\left(3\pi x_{i+1}\right)\right]+\sin^{2}\left(3\pi x_{1}\right)+\left(x_{m}-1\right)^{2}\left[1+\sin^{2}\left(2\pi x_{m}\right)\right]\right)+\sum_{i=1}^{m}u\left(x_{i},5,100,4\right)$ | 30 | $\left[-50,50\right]^{m}$ | 0
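Several benchmark functions from the table translate directly into code. The sketches below implement $\Phi_{2}$ (Rastrigin), $\Phi_{3}$ (Ackley), and $\Phi_{4}$ (Griewank), each with its global minimum of 0 at the origin; the function names are ours, not the paper's:

```python
import numpy as np

def phi2(x):
    """Rastrigin function Phi_2; global minimum 0 at x = 0."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def phi3(x):
    """Ackley function Phi_3; global minimum 0 at x = 0.
    Note -1/5 in the table is the usual -0.2 coefficient."""
    x = np.asarray(x, dtype=float)
    m = x.size
    return float(-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / m))
                 - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / m)
                 + 20.0 + np.e)

def phi4(x):
    """Griewank function Phi_4; global minimum 0 at x = 0."""
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)  # 1-based index under the sqrt
    return float(np.sum(x**2) / 4000.0
                 - np.prod(np.cos(x / np.sqrt(i))) + 1.0)
```

Evaluating each at the origin recovers the tabulated optimum, which is a convenient sanity check before running the optimizer on them.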

Function | Our Algorithm, 50,000 Evaluations (Mean) | Our Algorithm, 50,000 Evaluations (Best) | Our Algorithm, 500,000 Evaluations (Mean) | Our Algorithm, 500,000 Evaluations (Best) | CLA-DE, 50,000 Evaluations (Mean) | CLA-DE, 50,000 Evaluations (Best) | CLA-DE, 500,000 Evaluations (Mean) | CLA-DE, 500,000 Evaluations (Best)
---|---|---|---|---|---|---|---|---
$\Phi_{1}$ | 2.07 × 10^{3} | 1.36 × 10^{3} | 4.50 × 10^{2} | 3.38 × 10^{−3} | 1.49 × 10^{2} | 7.96 × 10^{1} | 3.39 × 10^{−3} | 3.38 × 10^{−3}
$\Phi_{2}$ | 5.50 | 9.96 × 10^{−1} | 6.63 × 10^{−2} | 0 | 3.39 × 10^{1} | 2.52 × 10^{1} | 1.50 × 10^{−1} | 3.40 × 10^{−6}
$\Phi_{3}$ | 9.94 × 10^{−4} | 2.15 × 10^{−5} | 3.74 × 10^{−4} | 4.44 × 10^{−16} | 6.09 | 4.68 | 1.58 × 10^{−3} | 3.98 × 10^{−4}
$\Phi_{4}$ | 1.88 × 10^{−5} | 6.82 × 10^{−7} | 2.47 × 10^{−11} | 0 | 5.56 | 2.92 | 3.44 × 10^{−2} | 2.13 × 10^{−6}
$\Phi_{5}$ | 1.84 × 10^{1} | 1.48 × 10^{1} | 1.04 × 10^{1} | 5.25 | 4.31 × 10^{1} | 3.89 × 10^{1} | 1.61 × 10^{1} | 9.79
$\Phi_{6}$ | 1.01 × 10^{1} | 8.55 | 3.49 | 1.14 | 1.17 × 10^{1} | 1.05 × 10^{1} | 2.03 × 10^{−1} | 4.92 × 10^{−2}
$\Phi_{7}$ | 3.06 × 10^{1} | 3.81 | 2.79 × 10^{1} | 3.62 | 2.13 × 10^{3} | 1.03 × 10^{3} | 7.93 × 10^{3} | 3.83 × 10^{3}
$\Phi_{8}$ | 4.19 × 10^{−5} | 3.15 × 10^{−7} | 2.64 × 10^{−11} | 8.63 × 10^{−103} | 1.15 × 10^{4} | 5.24 × 10^{3} | 3.88 × 10^{−4} | 2.17 × 10^{−5}
$\Phi_{9}$ | 1.57 × 10^{−5} | 3.88 × 10^{−7} | 1.48 × 10^{−12} | 5.61 × 10^{−70} | 4.45 | 3.18 | 9.13 × 10^{−5} | 2.37 × 10^{−5}
$\Phi_{10}$ | 1.40 | 5.00 × 10^{−2} | 1.34 | 4.61 × 10^{−2} | 2.51 | 1.09 | 4.17 × 10^{−11} | 2.66 × 10^{−21}
$\Phi_{11}$ | 0 | 0 | 0 | 0 | 5.11 × 10^{2} | 2.15 × 10^{2} | 0 | 0
$\Phi_{12}$ | 1.15 | 6.00 × 10^{−1} | 1.14 | 6.00 × 10^{−1} | 6.11 | 3.90 | 9.26 × 10^{−1} | 7.00 × 10^{−1}
$\Phi_{13}$ | 3.74 × 10^{2} | 2.02 × 10^{2} | 6.76 × 10^{1} | 1.02 × 10^{1} | 1.20 × 10^{13} | 2.15 × 10^{11} | 1.00 × 10^{3} | 7.43 × 10^{2}
$\Phi_{14}$ | 6.40 × 10^{−9} | 9.40 × 10^{−10} | 5.59 × 10^{−13} | 1.57 × 10^{−32} | 4.67 × 10^{3} | 9.10 | 6.53 × 10^{−2} | 6.12 × 10^{−5}
$\Phi_{15}$ | 7.03 × 10^{−8} | 1.56 × 10^{−9} | 7.05 × 10^{−21} | 1.35 × 10^{−32} | 9.55 × 10^{4} | 3.28 × 10^{2} | 8.26 × 10^{−4} | 1.00 × 10^{−5}

Results | 50,000 Evaluations (Mean) | 50,000 Evaluations (Best) | 500,000 Evaluations (Mean) | 500,000 Evaluations (Best)
---|---|---|---|---
Wins (+) | 14 | 14 | 11 | 13
Losses (−) | 1 | 1 | 4 | 2
Detected differences | α = 0.05 | α = 0.05 | α = 0.1 | α = 0.05

Results | 50,000 Evaluations (Mean) | 50,000 Evaluations (Best) | 500,000 Evaluations (Mean) | 500,000 Evaluations (Best)
---|---|---|---|---
T-value | 10 | 12 | 39 | 20
p-value | 0.004514 | 0.006407 | 0.396727 | 0.041328

Function | AFSA-CLA | EMA | Our Algorithm
---|---|---|---
$\Phi_{1}^{\ast}$ | — | 4.56 × 10^{2} | 1.18 × 10^{2}
$\Phi_{2}$ | 0 | 2.06 × 10^{1} | 0
$\Phi_{3}$ | 1.33 × 10^{−14} | 6.42 × 10^{−1} | 3.99 × 10^{−15}
$\Phi_{4}$ | 4.44 × 10^{−16} | 1.73 × 10^{−2} | 0
$\Phi_{5}$ | — | — | −9.33 × 10^{1}
$\Phi_{6}$ | — | — | −7.52 × 10^{1}
$\Phi_{7}$ | — | — | 3.62
$\Phi_{8}$ | 3.19 × 10^{−104} | 7.11 × 10^{−103} | 2.32 × 10^{−82}
$\Phi_{9}$ | — | — | 1.46 × 10^{−36}
$\Phi_{10}$ | 3.30 × 10^{−3} | 8.34 × 10^{−1} | 5.58 × 10^{−2}
$\Phi_{11}$ | 0 | — | 0
$\Phi_{12}$ | — | 8.12 × 10^{−2} | 6.00 × 10^{−1}
$\Phi_{13}$ | — | 9.26 × 10^{1} | 1.02 × 10^{1}
$\Phi_{14}$ | — | 3.73 × 10^{−2} | 1.57 × 10^{−32}
$\Phi_{15}$ | — | 2.19 × 10^{−3} | 1.35 × 10^{−32}

Results | Our Algorithm versus AFSA-CLA | Our Algorithm versus EMA
---|---|---
*Sign test:* | |
Wins (+) | 3 | 7
Losses (−) | 3 | 2
Detected differences | — | α = 0.1
*Wilcoxon test:* | |
T-value | 5 | 6
p-value | 1.0 | 0.050613

© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Evsutin, O.; Shelupanov, A.; Meshcheryakov, R.; Bondarenko, D.; Rashchupkina, A.
The Algorithm of Continuous Optimization Based on the Modified Cellular Automaton. *Symmetry* **2016**, *8*, 84.
https://doi.org/10.3390/sym8090084
