Cat and Mouse Based Optimizer: A New Nature-Inspired Optimization Algorithm

Numerous optimization problems designed in different branches of science and the real world must be solved using appropriate techniques. Population-based optimization algorithms are some of the most important and practical techniques for solving optimization problems. In this paper, a new optimization algorithm called the Cat and Mouse-Based Optimizer (CMBO) is presented that mimics the natural behavior between cats and mice. In the proposed CMBO, the movement of cats towards mice as well as the escape of mice towards havens is simulated. Mathematical modeling and formulation of the proposed CMBO for implementation on optimization problems are presented. The performance of the CMBO is evaluated on a standard set of objective functions of three different types including unimodal, high-dimensional multimodal, and fixed-dimensional multimodal. The results of optimization of objective functions show that the proposed CMBO has a good ability to solve various optimization problems. Moreover, the optimization results obtained from the CMBO are compared with the performance of nine other well-known algorithms including Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Gravitational Search Algorithm (GSA), Teaching-Learning-Based Optimization (TLBO), Grey Wolf Optimizer (GWO), Whale Optimization Algorithm (WOA), Marine Predators Algorithm (MPA), Tunicate Swarm Algorithm (TSA), and Teamwork Optimization Algorithm (TOA). The performance analysis of the proposed CMBO against the compared algorithms shows that CMBO is much more competitive than other algorithms by providing more suitable quasi-optimal solutions that are closer to the global optimal.


Motivation
Optimization is the adjustment and modification of the inputs and properties of a device, a mathematical process, or an experiment in order to obtain the best output or result. Each optimization problem has three main parts: decision variables, constraints, and objective functions [1]. Decision variables should be adjusted and quantified in such a way that the objective function of the problem is optimized subject to the constraints. In fact, an optimization problem typically has many candidate solutions, and finding the best among them is the main challenge in optimizing the objective function [2].

Literature Review
Optimization problem solving methods from the general point of view are grouped into two categories: (i) deterministic methods and (ii) stochastic methods [3].
Deterministic methods are themselves grouped into two categories: (i) gradient-based and (ii) non-gradient-based methods. Gradient-based methods are valid and easy to use for simple cost functions, but they lose effectiveness on complex, non-differentiable problems, which has motivated the development of stochastic, population-based algorithms. Grey Wolf Optimizer (GWO), for example, is a swarm-based algorithm inspired by the leadership hierarchy and hunting mechanism of grey wolves. Slow convergence, low solving precision, dependence on controller parameters, and poor local searching ability are the main disadvantages of GWO.
Whale Optimization Algorithm (WOA) is a nature-inspired algorithm developed based on the social behavior of humpback whales and their bubble-net hunting strategy. WOA has three operators that simulate the search for prey, the encircling of prey, and the bubble-net foraging behavior of humpback whales [12]. An appropriate balance between exploration and exploitation is the main advantage of WOA. Its main disadvantages are slow convergence speed, weak exploration of the search space, and a tendency to fall into local optima.
Marine Predators Algorithm (MPA) is introduced based on the movement strategies that marine predators use when trapping their prey in the oceans. MPA simulates the search and pursuit behavior of marine predators, which depends on the relative speeds of predator and prey, in three phases. Phase (i): the prey moves faster than the predator; Phase (ii): prey and predator move at almost the same speed; Phase (iii): the predator moves faster than the prey [13]. Good global search and fast convergence are the main advantages of MPA. Its main disadvantages are difficulty escaping local optima, an inability to produce a diverse initial population with high productivity, and insufficiently broad exploration of the search space.
Tunicate Swarm Algorithm (TSA) is a bio-inspired method introduced based on the simulation of jet propulsion and swarm behaviors of tunicates during navigation and foraging. In TSA, the jet propulsion behavior is simulated under three conditions: moving towards the position of the best search agent, avoiding conflicts between search agents, and remaining close to the best search agent [14]. Good global search and an appropriate balance between exploration and exploitation are the main advantages of TSA. A low convergence rate and weakness in local search are its main disadvantages.
Teamwork Optimization Algorithm (TOA) is a population-based approach developed based on mathematical modeling of the relationships and interactions between team members performing teamwork to achieve the team's goal. In TOA, team members are updated in each iteration in three phases: supervisor guidance, information sharing, and individual activity [15]. Although TOA has advantages such as requiring no control parameters, good global search, an appropriate balance between exploration and exploitation, and fast convergence, falling into local optima when solving high-dimensional multimodal problems is its most important drawback.
In addition, several well-known optimization algorithms from the recent literature are represented in Table 1.

Table 1. Well-known optimization algorithms proposed in the recent literature (reference, algorithm, main idea/inspiration source).

[16] Cuckoo Search: behavior of the cuckoo
[17] Aquila Optimizer: behavior of the Aquila in nature during the process of catching prey
[18] Lion Optimization Algorithm: behavior of the lion
[19] Grasshopper Optimization Algorithm: grasshopper behavior
[20] Emperor Penguin Optimizer: behavior of the emperor penguin
[21] Cat Swarm Optimization Algorithm: behaviors of cats
[22] Pity Beetle Algorithm: aggregation behavior and searching for nest and food
[23] Mouth Brooding Fish: behavior of the mouthbrooding fish
[24] Sailfish Optimizer: group hunting of sailfish
[25] Following Optimization Algorithm: relationships between members and the leader of a community
[26] Multi-Leader Optimizer: presence of several simultaneous leaders for the population members
[27] Differential Evolution: the natural phenomenon of evolution
[28] Evolution Strategy: Darwinian evolution theory
[29] Biogeography-Based Optimizer: biogeographic concepts
[30] Artificial Infectious Disease: SEIQR epidemic model
[31] Rooted Tree Optimization: movement of plant roots looking for water
[32] Weighted Superposition Attraction: weighted superposition of active fields
[33] Plant Intelligence: plants' nervous system
[34] Chemotherapy Science: chemotherapy method
[35] Tree Growth Algorithm: competition of trees for acquiring light and food
[36] Simulated Annealing: metal annealing process
[37] Water Cycle Algorithm: the water cycle and how rivers and streams flow to the sea in the real world
[38] Water Evaporation Optimization: evaporation of water molecules
[39] Galactic Swarm Optimization: the motion of stars and galaxies
[40] Spring Search Algorithm: Hooke's law
[41] Collective Decision Optimization: the social behavior of human beings
[42] Very Optimistic Method: real-life practices of successful persons
[43] Momentum Search Algorithm: the momentum law and Newton's laws of motion
[44] Archimedes Optimization Algorithm: Archimedes' principle, which imitates the buoyant force exerted upward on an immersed object
[45] Dice Game Optimizer: rules governing the game of dice and the impact of players on each other
[46] Orientation Search Algorithm: the game of orientation, in which players move in the direction of a referee
[47] Hide Objects Game Optimization: behavior and movements of players searching for a hidden object
[48] Football Game Based Optimization: behavior of clubs in a football league
[49] Darts Game Optimizer: rules of the darts game
[50] Shell Game Optimization: rules of the shell game

Research Gap and Question
Every optimization problem has a basic solution called the global optimal solution. The important point about optimization algorithms is that there is no guarantee that the solutions they obtain are necessarily the global optimal solution. For this reason, the solutions obtained using optimization algorithms for optimization problems are called quasi-optimal solutions [51].
At best, the quasi-optimal solution is equal to the global optimal solution; otherwise, it must be close to it. Therefore, when analyzing the performance of several optimization algorithms on an optimization problem, the algorithm able to provide a quasi-optimal solution closer to the global optimal solution is the superior algorithm for that problem. Another point is that an optimization algorithm may work very well on one optimization problem yet fail to solve another. This is why researchers have developed many optimization algorithms in pursuit of quasi-optimal solutions that are more appropriate and closer to the global optimal solution.
In order to evaluate the performance of optimization algorithms in achieving quasi-optimal solutions, they are implemented on standard optimization problems, known as benchmark functions, whose optimal solutions are already known. The criterion of superiority of one optimization algorithm over another is providing a solution closer to the global optimal. Therefore, it is always possible to design a new optimization algorithm that performs better than existing algorithms on optimization problems. In this regard, the main research question of this paper is whether it is possible to design a new optimization algorithm that can provide a quasi-optimal solution closer to the global optimal solution.

Contribution and Applications
In this paper, a new stochastic method called the Cat and Mouse-Based Optimizer (CMBO) is introduced to solve various optimization problems and provide suitable quasi-optimal solutions. The contributions of this paper are as follows: (i) CMBO is designed based on the simulation of natural interactions between cats and mice. (ii) The various steps and theory of the proposed CMBO are described, and its mathematical model is presented for use in optimizing objective functions. (iii) The capability of the CMBO in solving optimization problems is tested on twenty-three standard objective functions. (iv) The results obtained from the CMBO are compared with the performance of nine well-known optimization algorithms.
Optimization algorithms are used in all disciplines and real-world problems where the optimization process or problem is designed and defined. The proposed CMBO can be used to minimize or maximize various objective functions. CMBO can be used in engineering sciences and optimal designs where decision variables must be well selected to optimize device performance. In medical science, data mining, clustering, and in general in any application that faces optimization, the proposed CMBO can be used.

Paper Organization
The rest of this paper is organized in such a way that the proposed CMBO is introduced in Section 2. Simulation studies and evaluation of the CMBO are presented in Section 3. The discussion and analysis of the results is presented in Section 4. Finally, in Section 5, conclusions as well as several suggestions for future studies are provided.

Cat and Mouse Optimization Algorithm
In this section, the theory of the Cat and Mouse Optimization Algorithm (CMBO) is stated, and its mathematical model is then presented for use in optimizing various problems.
The CMBO is a population-based algorithm designed by inspiration from the natural behavior of cats attacking mice and of mice escaping to havens. The search agents of the proposed algorithm are divided into two groups, cats and mice, which scan the problem search space with random movements. The algorithm updates population members in two phases. In the first phase, the movement of cats towards mice is modeled, and in the second phase, the escape of mice to havens to save their lives is modeled.
From a mathematical point of view, each member of the population is a proposed solution to the problem. In fact, a member of the population specifies values for the problem variables according to its position in the search space. Thus, each member of the population is a vector whose values determine the variables of the problem. The population of the algorithm is determined using a matrix called the population matrix in Equation (1).
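Equation (1) itself does not survive in this excerpt; based on the definitions that follow it, the population matrix has the standard form below (a reconstruction from the surrounding text, not the original typesetting):

```latex
X = \begin{bmatrix} X_1 \\ \vdots \\ X_i \\ \vdots \\ X_N \end{bmatrix}_{N \times m}
  = \begin{bmatrix}
    x_{1,1} & \cdots & x_{1,d} & \cdots & x_{1,m} \\
    \vdots  &        & \vdots  &        & \vdots  \\
    x_{i,1} & \cdots & x_{i,d} & \cdots & x_{i,m} \\
    \vdots  &        & \vdots  &        & \vdots  \\
    x_{N,1} & \cdots & x_{N,d} & \cdots & x_{N,m}
  \end{bmatrix}
\tag{1}
```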
where X is the population matrix of CMBO, X_i is the ith search agent, x_{i,d} is the value of the dth problem variable obtained by the ith search agent, N is the number of population members, and m is the number of problem variables. As mentioned, each member of the population proposes values for the problem variables. Therefore, for each member of the population, a value of the objective function is specified. The values obtained for the objective function are denoted using a vector in Equation (2).
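Equation (2) is missing from this excerpt; from the definitions around it, the objective function vector can be reconstructed as (a reconstruction, not the original typesetting):

```latex
F = \begin{bmatrix} F_1 \\ \vdots \\ F_i \\ \vdots \\ F_N \end{bmatrix}_{N \times 1},
\qquad F_i = F(X_i)
\tag{2}
```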
where F is the vector of objective function values and F_i is the objective function value for the ith search agent. Based on the values obtained for the objective function, the members of the population are ranked from the best member, with the lowest value of the objective function, to the worst member, with the highest value. The sorted population matrix and the sorted objective function vector are determined using Equations (3) and (4).
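Equations (3) and (4) are not reproduced in this excerpt; a reconstruction consistent with the definitions that follow (ascending sort by objective value, for minimization) is:

```latex
X^{S} = \begin{bmatrix} X^{S}_1 \\ \vdots \\ X^{S}_N \end{bmatrix}
      = \begin{bmatrix}
        x^{s}_{1,1} & \cdots & x^{s}_{1,m} \\
        \vdots      &        & \vdots      \\
        x^{s}_{N,1} & \cdots & x^{s}_{N,m}
      \end{bmatrix}
\tag{3}

F^{S} = \begin{bmatrix} F^{S}_1 \\ \vdots \\ F^{S}_N \end{bmatrix},
\qquad F^{S}_1 \le F^{S}_2 \le \cdots \le F^{S}_N
\tag{4}
```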
where X^S is the population matrix sorted by objective function value, X^S_i is the ith member of the sorted population matrix, x^s_{i,d} is the value of the dth problem variable obtained by the ith search agent of the sorted population matrix, and F^S is the sorted vector of objective function values.
The population matrix in the proposed CMBO consists of two groups: cats and mice. In the CMBO, it is assumed that the half of the population members that provided better (lower) values of the objective function constitutes the population of mice, while the other half, with worse values of the objective function, constitutes the cat population. Based on this concept, the populations of mice and cats are determined in Equations (5) and (6), respectively.
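Equations (5) and (6) do not survive in this excerpt; based on the half-and-half split described above, they can be reconstructed as follows (the floor in N_m is an assumption covering odd population sizes):

```latex
M = \begin{bmatrix} X^{S}_1 \\ \vdots \\ X^{S}_{N_m} \end{bmatrix},
\qquad N_m = \left\lfloor \tfrac{N}{2} \right\rfloor
\tag{5}

C = \begin{bmatrix} X^{S}_{N_m+1} \\ \vdots \\ X^{S}_{N} \end{bmatrix},
\qquad N_c = N - N_m
\tag{6}
```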
where M is the population matrix of mice, N_m is the number of mice, M_i is the ith mouse, C is the population matrix of cats, N_c is the number of cats, and C_j is the jth cat.
In order to update the search agents, in the first phase the change of position of the cats is modeled based on the natural behavior of cats moving towards mice. This phase of the proposed CMBO update is mathematically modeled using Equations (7)-(9).
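Equations (7)-(9) are not reproduced in this excerpt. One reconstruction consistent with the symbols defined below is given here; the scaling factor I in Equations (7) and (8) is an assumption not defined in the surviving text and should be checked against the original paper:

```latex
c^{new}_{j,d} = c_{j,d} + r \cdot \bigl( m_{k,d} - I \cdot c_{j,d} \bigr),
\qquad j = 1, \dots, N_c
\tag{7}

I = \operatorname{round}(1 + \mathrm{rand})
\tag{8}

C_j = \begin{cases} C^{new}_j, & F^{c,new}_j \le F^{c}_j, \\ C_j, & \text{otherwise} \end{cases}
\tag{9}
```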
Here, C^new_j is the new status of the jth cat, c^new_{j,d} is the new value of the dth problem variable obtained by the jth cat, r is a random number in the interval [0, 1], m_{k,d} is the dth dimension of the kth mouse, and F^{c,new}_j is the objective function value for the new status of the jth cat.
In the second phase of the proposed CMBO, the escape of mice to havens is modeled. In CMBO, it is assumed that there is a random haven for each mouse, and mice take refuge in these havens. The positions of the havens in the search space are created randomly, patterned on the positions of different members of the population. This phase of updating the positions of the mice is mathematically modeled using Equations (10)-(12).
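Equations (10)-(12) are likewise missing from this excerpt. A reconstruction consistent with the symbols defined below is given here; the factor I and the sign term in Equation (11) are assumptions not defined in the surviving text:

```latex
H_i:\; h_{i,d} = x_{l,d},
\qquad l \in \{1, \dots, N\} \text{ chosen at random}
\tag{10}

m^{new}_{i,d} = m_{i,d} + r \cdot \bigl( h_{i,d} - I \cdot m_{i,d} \bigr)
               \cdot \operatorname{sign}\bigl( F^{m}_i - F^{H}_i \bigr)
\tag{11}

M_i = \begin{cases} M^{new}_i, & F^{m,new}_i \le F^{m}_i, \\ M_i, & \text{otherwise} \end{cases}
\tag{12}
```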
Here, H_i is the haven for the ith mouse and F^H_i is its objective function value; M^new_i is the new status of the ith mouse and F^{m,new}_i is its objective function value. After all members of the population have been updated, the algorithm enters the next iteration and, based on Equations (5)-(12), the iterations continue until the stopping condition is reached. The stopping condition can be a certain number of iterations, an acceptable error between the solutions obtained in consecutive iterations, or a fixed period of time. Upon completion of the iterations and full implementation of the algorithm on the optimization problem, the CMBO provides the best quasi-optimal solution obtained. Flowcharts of the different stages of the proposed CMBO are given in Figure 1 and its pseudocode is presented in Algorithm 1.

Algorithm 1 Pseudocode of CMBO
Start CMBO.
Input problem information: variables, objective function, and constraints.
Set number of search agents (N) and iterations (T).
Generate an initial population matrix at random.
Evaluate the objective function.
For t = 1:T
    Sort the population matrix based on objective function value using Equations (3) and (4).
    Select the population of mice M using Equation (5).
    Select the population of cats C using Equation (6).
    Phase 1: update status of cats.
    For j = 1:N_c
        Update status of the jth cat using Equations (7)-(9).
    end
    Phase 2: update status of mice.
    For i = 1:N_m
        Create a haven for the ith mouse using Equation (10).
        Update status of the ith mouse using Equations (11) and (12).
    end
end
Output the best quasi-optimal solution obtained with the CMBO.
End CMBO
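Since the numbered update equations are only referenced, not reproduced, in this excerpt, the pseudocode above can be sketched in Python as follows. This is a simplified, hypothetical implementation based solely on the verbal description: cats move towards randomly chosen mice, mice move towards havens placed at the positions of randomly chosen members, and moves are accepted greedily. The function name `cmbo` and all parameter names are illustrative, not from the original paper.

```python
import random

def cmbo(objective, n_var, n_pop, n_iter, low, high, seed=0):
    """Minimal sketch of the CMBO loop described in Algorithm 1 (minimization)."""
    rng = random.Random(seed)
    # Random initial population of n_pop members with n_var variables each.
    pop = [[rng.uniform(low, high) for _ in range(n_var)] for _ in range(n_pop)]
    fit = [objective(x) for x in pop]
    for _ in range(n_iter):
        # Sort: best (lowest objective) first; first half = mice, second = cats.
        order = sorted(range(n_pop), key=lambda i: fit[i])
        pop = [pop[i] for i in order]
        fit = [fit[i] for i in order]
        half = n_pop // 2
        # Phase 1: each cat moves towards a randomly selected mouse.
        for j in range(half, n_pop):
            k = rng.randrange(half)
            cand = [pop[j][d] + rng.random() * (pop[k][d] - pop[j][d])
                    for d in range(n_var)]
            f = objective(cand)
            if f < fit[j]:  # greedy acceptance (an assumption of this sketch)
                pop[j], fit[j] = cand, f
        # Phase 2: each mouse moves towards a haven placed at the position
        # of a randomly selected population member.
        for i in range(half):
            l = rng.randrange(n_pop)
            cand = [pop[i][d] + rng.random() * (pop[l][d] - pop[i][d])
                    for d in range(n_var)]
            f = objective(cand)
            if f < fit[i]:
                pop[i], fit[i] = cand, f
    best = min(range(n_pop), key=lambda i: fit[i])
    return pop[best], fit[best]

sphere = lambda x: sum(v * v for v in x)
best_x, best_f = cmbo(sphere, n_var=2, n_pop=10, n_iter=50, low=-100, high=100)
```

With the example settings used in the next subsection (2 variables, 10 members, 50 iterations), this sketch drives the sphere objective far below its initial level.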

Step-by-Step Example
In this subsection, a step-by-step example of how to implement the proposed CMBO is provided to explain it in more detail. In this example, CMBO is applied to optimize the sphere function, with 2 problem variables, 10 population members, and a stopping condition of 50 iterations. The sphere function is defined as F(x) = sum_{d=1}^{m} x_d^2, with global minimum F(x*) = 0 at x* = (0, ..., 0).
Step 1: In this step, the initial population of feasible solutions is created randomly within the allowed range of the variables.
Step 2: In this step, each member of the population is evaluated in the objective function of the problem. In fact, each member of the population proposes values for the problem variables, based on which the objective function can be evaluated.
Step 3: In this step, based on comparing the values obtained for the objective function, the population members are sorted from the best solution (minimum value of the objective function) to the worst solution (maximum value of the objective function). Thus, the sorting criterion is the value of the objective function.
Step 4: In this step, the population of mice (first half of the population with better objective function values) and the population of cats (second half of the population with worse objective function values) are determined according to Equations (5) and (6).
Step 5: In this step, the position of the cats is updated based on Equations (7)-(9).
Step 6: In this step, the position of the mice is updated based on Equations (10)-(12).
Step 7: The third to sixth steps of the algorithm are repeated until the stop condition is met. Finally, after the full implementation of the proposed algorithm on the objective function, the best proposed solution using CMBO is presented for the problem.
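Steps 1 to 4 can be illustrated with a short sketch (the random values here are illustrative and do not reproduce the numbers of Table 2):

```python
import random

rng = random.Random(1)
# Step 1: random initial population (10 members, 2 variables in [-100, 100]).
population = [[rng.uniform(-100, 100) for _ in range(2)] for _ in range(10)]
# Step 2: evaluate the sphere objective for every member.
sphere = lambda x: sum(v * v for v in x)
fitness = [sphere(x) for x in population]
# Step 3: sort members from best (lowest) to worst (highest) objective value.
order = sorted(range(10), key=lambda i: fitness[i])
sorted_population = [population[i] for i in order]
# Step 4: first half are mice (better values), second half are cats (worse).
mice, cats = sorted_population[:5], sorted_population[5:]
```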
The calculations of the different steps of CMBO for the first iteration are presented in Table 2. The final solution for the intended problem after full implementation is also specified in this table. Table 2. The various steps of the proposed CMBO for the first iteration in solving the sphere function.

Simulation Study and Results
In this section, the efficiency and ability of the proposed CMBO in solving various optimization problems and providing quasi-optimal solutions are evaluated. For this purpose, a standard set consisting of twenty-three objective functions of different types in three groups of unimodal, high-dimensional multimodal, and fixed-dimensional multimodal is applied. Complete information on these functions is provided in Appendix A and Tables A1-A3.
In order to analyze the quality of the proposed algorithm, the results obtained from the CMBO are compared with nine other optimization algorithms including (i) popular and widely used algorithms: Genetic Algorithm (GA), Particle Swarm Optimization (PSO); (ii) highly cited algorithms: Gravitational Search Algorithm (GSA), Teaching-Learning-Based Optimization (TLBO), Grey Wolf Optimizer (GWO), Whale Optimization Algorithm (WOA); and (iii) recently published algorithms: Tunicate Swarm Algorithm (TSA), Marine Predators Algorithm (MPA), and Teamwork Optimization Algorithm (TOA). The performance results of optimization algorithms are presented using two indicators of average of the best quasi-optimal solutions (ave) and standard deviation of the best quasi-optimal solutions (std). The used values for the parameters of the optimization algorithms are specified in Table 3.

Evaluation of Unimodal Objective Functions
Objective functions F1 to F7 are considered to analyze and evaluate the ability of optimization algorithms to solve and optimize unimodal optimization problems. The results of the implementation of the proposed CMBO as well as nine compared optimization algorithms are presented in Table 4. The proposed algorithm provides the global optimal solution for F6. In addition, CMBO performs very well in the F1, F2, F3, F4, F5, and F7 functions and provides quasi-optimal solutions that are close to the global optimal. Analysis and comparison of the results obtained from the proposed algorithm against the other nine optimization algorithms shows that the CMBO has a higher ability to solve unimodal optimization problems.

Evaluation of High-Dimensional Objective Functions
The six high-dimensional multimodal objective functions F8 to F13 are selected to evaluate the ability of optimization algorithms to provide suitable quasi-optimal solutions. The results of optimizing these objective functions using the proposed CMBO and the nine compared algorithms are presented in Table 5. CMBO provides the global optimal solution for the objective functions F9 and F11. For the F12 and F13 functions, CMBO delivers the best performance and provides suitable quasi-optimal solutions. The optimization results show that CMBO obtains more competitive results than the other algorithms in the majority of these objective functions.

Evaluation of Fixed-Dimensional Objective Functions
The F14 to F23 objective functions are selected to evaluate the ability of optimization algorithms to provide suitable solutions for fixed-dimensional multimodal optimization problems. The results of implementing the optimization algorithms on this type of objective function are presented in Table 6. CMBO performs well on all of F14 to F23 and provides appropriate quasi-optimal solutions for these objective functions. In addition, comparison and analysis of the results show that the proposed algorithm provides more appropriate solutions in most cases. Moreover, for functions where CMBO has a performance similar to some algorithms in the "ave" index, it solves these optimization problems more effectively with a more appropriate "std" index.
In order to further analyze and visually compare the performance of the optimization algorithms, the boxplot of results for each algorithm and objective function is shown in Figure 2. In Tables 4-6, the bold results indicate an algorithm that has performed better in optimizing the specified function.

Statistical Analysis
Presentation and analysis of optimization results using the two indicators of the average of the best results and the standard deviation of the best results provide valuable and useful information about the performance of optimization algorithms. However, even with a very low probability, the superiority of one algorithm over several other algorithms may be coincidental. In this regard, in this subsection, a statistical analysis called Wilcoxon rank-sum test is presented in order to further evaluate and analyze the performance of optimization algorithms as well as the proposed CMBO. The Wilcoxon rank-sum test is one of the nonparametric tests which is used in statistical analysis.
In the Wilcoxon test, a p-value determines whether the difference between the performance of two algorithms is statistically significant: if the p-value is less than 0.05, the difference is considered significant. Table 7 presents the results of the statistical analysis using the Wilcoxon rank-sum test. What can be concluded from the values presented in this table is that the proposed CMBO has a significant superiority over a compared algorithm in the cases where the p-value is less than 0.05. Based on the simulation results, the proposed CMBO has a significant superiority over MPA, WOA, GSA, PSO, and GA in optimizing the F1 to F7 unimodal function group. In the second group of objective functions, F8 to F13, CMBO has a significant superiority over TSA, MPA, WOA, GWO, and GSA. In optimizing the objective functions of the third group, F14 to F23, the proposed CMBO has a significant superiority over all of TSA, MPA, WOA, GWO, TLBO, GSA, PSO, and GA.
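The p-values behind such a table can be computed as sketched below, using the normal approximation to the two-sided Wilcoxon rank-sum test. This is a minimal pure-Python illustration, not the exact procedure used in the paper; `rank_sum_p` is a hypothetical helper name.

```python
import math
from itertools import chain

def rank_sum_p(sample_a, sample_b):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation.

    Tests the null hypothesis that both samples come from the same
    distribution; tied values receive average ranks.
    """
    combined = sorted(chain(sample_a, sample_b))
    # Assign each distinct value the average of its 1-based rank positions.
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        ranks[combined[i]] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        i = j
    n1, n2 = len(sample_a), len(sample_b)
    w = sum(ranks[v] for v in sample_a)       # rank sum of sample A
    mean = n1 * (n1 + n2 + 1) / 2             # mean of W under the null
    var = n1 * n2 * (n1 + n2 + 1) / 12        # variance of W under the null
    z = (w - mean) / math.sqrt(var)
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

p_separated = rank_sum_p([1, 2, 3, 4, 5], [10, 11, 12, 13, 14])
p_identical = rank_sum_p([1, 2, 3, 4, 5], [1, 2, 3, 4, 5])
```

For clearly separated samples the p-value falls below 0.05, mirroring the significance criterion used in Table 7; for identical samples it is close to 1. For small sample sizes, an exact test (rather than this normal approximation) is preferable.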

Sensitivity Analysis
In this subsection, the sensitivity analysis of the proposed CMBO with respect to the two parameters of the number of population members of the algorithm and the maximum number of iterations of the algorithm is presented.
To analyze the sensitivity of the CMBO to the population-size parameter, the algorithm has been implemented on all twenty-three objective functions for populations of 20, 30, 50, and 80 members. The results of this analysis are presented in Table 8, and the behavior of the convergence curves under changes in the number of population members is shown in Figure 3. The simulation results of this sensitivity analysis show that, as the number of population members increases, the proposed CMBO converges to more suitable quasi-optimal solutions and the values of the objective function decrease. To analyze the sensitivity of the CMBO to the maximum-number-of-iterations parameter, the proposed algorithm has been run independently on all twenty-three objective functions for maximum iteration counts of 100, 500, 800, and 1000. Table 9 presents the results of this analysis, and the behavior of the convergence curves under changes in the maximum number of iterations is shown in Figure 4. These results indicate that increasing the maximum number of iterations leads the CMBO to converge to solutions closer to the global optimal.

Discussion
Exploitation and exploration are two important criteria that play a valuable role in evaluating and determining the quality of optimization algorithms. Optimization algorithms must have a favorable situation in these two criteria in order to be able to have acceptable performance in solving optimization problems.
The concept of exploitation refers to the ability of optimization algorithms to achieve a suitable quasi-optimal solution that is close to the global optimal. In fact, an optimization algorithm must provide a suitable quasi-optimal solution to an optimization problem after being fully implemented. Therefore, in analyzing the effectiveness of several optimization algorithms in solving an optimization problem, the algorithm that suggests a better quasi-optimal solution for that problem has a higher quality with respect to the exploitation criterion. This criterion is especially important for optimization problems that have only one main solution. The F1 to F7 objective functions, selected as unimodal functions, have only one main optimal solution and no local optimal areas; because of this feature, they are suitable for evaluating the exploitation criterion. The results of optimizing these objective functions using the proposed CMBO as well as the nine compared algorithms are presented in Table 4. The analysis of these results indicates that the CMBO, with its high exploitation capability, has been able to provide suitable quasi-optimal solutions for the F1 to F7 functions of much higher quality than those of similar algorithms. Therefore, the CMBO is in a much better position than the nine compared algorithms with respect to the exploitation criterion.
The concept of exploration refers to the ability of optimization algorithms to accurately and appropriately scan the search space of an optimization problem. In fact, optimization algorithms must be able to search different areas of the search space in order to achieve solutions closer to the global optimal. Therefore, in analyzing the performance of several optimization algorithms, the algorithm that provides a more suitable quasi-optimal solution by accurately scanning the search space has a higher quality in the exploration index. This indicator is especially important in optimization problems that have local optimal solutions in addition to the main optimal solution. The F8 to F13 high-dimensional multimodal functions and the F14 to F23 fixed-dimensional multimodal functions have local optimal solutions in addition to the main optimal solution; therefore, these functions are suitable for evaluating the exploration power of optimization algorithms. The optimization results of the F8 to F13 and F14 to F23 objective functions are presented in Tables 5 and 6, respectively, for the proposed CMBO as well as the nine compared algorithms. Based on the simulation results, the CMBO, with its high ability to scan the search space, converges to quasi-optimal solutions without getting stuck in local optimal points. Therefore, the proposed CMBO has a high capability in the exploration index and is much more competitive than the competing algorithms.

Execution Time Analysis
In this subsection, studies of the execution time of the optimization algorithms in solving the objective functions are presented. The experiments and algorithms are implemented in Matlab R2014a (8.3.0.532) and run under 64-bit Microsoft Windows 10 on a Core i7 processor at 2.40 GHz with 6 GB of memory. The average execution time (ave_time) in seconds and the standard deviation of the execution time (std_time) are computed as performance metrics. To generate the reported results, for each objective function the optimization algorithms use 20 independent runs, each with 1000 iterations.
The results of the execution time analysis for all twenty-three objective functions are presented in Table 10. What can be deduced from the simulation results of this analysis is that the proposed CMBO solves the optimization problems in less time while still providing suitable quasi-optimal solutions. A comparative review of the CMBO and the compared algorithms is presented in Table 11.

Table 11. Comparative review of the CMBO and compared algorithms.

GWO
Disadvantages: slow convergence, low solving precision, controller parameters, and bad local searching ability.
Advantages: fast convergence due to continuous reduction of the search space, low storage and computational requirements, and easy implementation due to its simple structure.

WOA
Disadvantages: low accuracy, slow convergence, and easy falling into local optima.
Advantages: simple structure, few required operators, and an appropriate balance between exploration and exploitation.

MPA
Disadvantages: high computational cost, time consuming, and having control parameters.
Advantages: good global search and fast convergence.

TSA
Disadvantages: poor convergence, having control parameters, and falling into local optima when solving high-dimensional multimodal problems.
Advantages: fast convergence, good global search, and an appropriate balance between exploration and exploitation.

TOA
Disadvantages: falling into local optima when solving high-dimensional multimodal problems.
Advantages: requiring no parameters, good global search, an appropriate balance between exploration and exploitation, and fast convergence.

CMBO
Advantages: easy implementation, simplicity of equations, lack of control parameters, proper exploitation, proper exploration, high convergence power, and not getting caught in local optimal solutions.

The important point about all optimization algorithms is that it cannot be claimed that one particular algorithm is the best optimizer for all optimization problems. It is also always possible to develop new optimization algorithms that can provide more desirable quasi-optimal solutions that are closer to the global optimal.

Conclusions and Future Works
Optimization problems designed in different sciences should be solved using appropriate methods. Optimization algorithms are among the most widely used and effective methods for providing appropriate solutions to optimization problems. In this paper, a new optimizer called the Cat and Mouse-Based Optimizer (CMBO) has been presented that mimics the natural behavior between cats and mice. The mathematical model of the proposed CMBO has been presented based on simulating the attack of cats on mice and the escape of mice to shelters. The performance of the CMBO in optimization was tested on a standard set of twenty-three objective functions, and the results were compared with the performance of nine algorithms: Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Gravitational Search Algorithm (GSA), Teaching-Learning-Based Optimization (TLBO), Grey Wolf Optimizer (GWO), Whale Optimization Algorithm (WOA), Marine Predators Algorithm (MPA), Tunicate Swarm Algorithm (TSA), and Teamwork Optimization Algorithm (TOA). The results of optimizing the unimodal objective functions showed that the proposed CMBO has a high capability in solving this type of optimization problem and has very good exploitation power. The results of implementing the proposed algorithm on the high-dimensional and fixed-dimensional multimodal objective functions showed the high exploration power of the proposed CMBO in accurately scanning the search space of optimization problems. Moreover, analyzing the results and comparing the performance of the mentioned algorithms with that of the CMBO showed the superiority and greater competitiveness of the proposed algorithm.
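The two phases summarized above can be sketched in code. The following is an illustrative reading of the described behavior only, under loudly stated assumptions (the population is sorted by fitness, the worse half acts as cats, the better half as mice, and havens are positions of randomly chosen population members); it is not the paper's exact update equations, and the greedy acceptance of improved positions used by many such algorithms is omitted for brevity:

```python
import random

def cmbo_step(population, fitness):
    """One illustrative CMBO-style iteration on a list of real-valued solutions.

    Assumption-laden sketch (minimization): the better half of the sorted
    population acts as mice, the worse half as cats; cats move toward randomly
    chosen mice, and mice move toward havens (positions of random members).
    """
    order = sorted(range(len(population)), key=lambda i: fitness[i])
    half = len(order) // 2
    mice = [population[i] for i in order[:half]]   # better half
    cats = [population[i] for i in order[half:]]   # worse half

    new_cats = []
    for cat in cats:
        mouse = random.choice(mice)  # each cat attacks a randomly selected mouse
        new_cats.append([c + random.random() * (m - c) for c, m in zip(cat, mouse)])

    new_mice = []
    for mouse in mice:
        haven = random.choice(population)  # haven: position of a random member
        new_mice.append([m + random.random() * (h - m) for m, h in zip(mouse, haven)])

    return new_mice + new_cats
```

In a full optimizer this step would be repeated for a fixed number of iterations, re-evaluating fitness and keeping the best solution found so far.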
The conclusions presented in this section about the performance and ability of the proposed CMBO to solve optimization problems were based on the optimization of twenty-three standard objective functions. From a general point of view, in optimization studies it cannot be claimed that a particular optimization algorithm is the best optimizer for all optimization problems. In fact, an algorithm should be applied to a set of problems, and based on the results it should be stated whether it is generally better than existing methods or better only for a set of problems that needs to be identified. The important point about all optimization algorithms is that it is always possible to develop new optimization algorithms that provide more desirable quasi-optimal solutions that are closer to the global optimum.
The authors present several ideas as potential directions for future studies, including the design of a multi-objective version as well as a binary version of the CMBO. In addition, the application of the proposed CMBO to real-life problems and to other optimization problems in various sciences is suggested for further research.