In this section, we first introduce two traditional heuristic algorithms, the ant colony algorithm and the simulated annealing algorithm, and then develop two improved algorithms based on the simulated annealing algorithm (ISAA-CC and ISAA-SS) to solve the problem in this paper, that is, to efficiently obtain the maximum of the objective function $\varphi(\theta)$ and the estimate of the unknown parameter $\widehat{\theta}=\left[\widehat{\theta}_{1}\ \ \widehat{\theta}_{2}\ \ \widehat{\theta}_{3}\ \ \cdots\ \ \widehat{\theta}_{8}\right]$ in a short time. In the next section, we compare the performance of these algorithms with that of the Newton method on a real numerical experiment.

**Algorithm 1:** A heuristic bionic evolutionary algorithm based on swarm intelligence.

**Step 1.** Initialize the relevant parameters, including the ant colony size (the total number of ants) $ant\_max$, the pheromone volatilization coefficient $\rho$, the total pheromones released by the ants in one iteration, Q (a constant), the transfer-probability threshold constant ${p}_{0}$, and the maximum number of iterations $iter\_max$. Set the initial pheromones ${\tau}_{t}(0)=\varphi(\theta)$.
**Step 2.** For i = 1, 2, ⋯, $iter\_max$:
**Step 2.1.** For t = 1, 2, ⋯, $ant\_max$:
**Step 2.1.1.** Each ant is randomly placed at a different position, and the next position of ant t, namely the next feasible solution, is determined according to the transfer probability ${p}_{t}$.
**Step 2.1.2.** According to the following judgement, the iterative formula for the parameter to be estimated is:
(a) Local search: if the pheromone level of ant t is close to the highest pheromone concentration in the current population (i.e., the current maximum of the function), the transfer probability ${p}_{t}$ is small and the variable $\theta$ is only fine-tuned, i.e., $\theta(i+1)=\theta(i)+rand\cdot\lambda$, where $rand$ is a random number in [0, 1] and $\lambda=1/(t+1)$ is the heuristic function (degree of expectation), which gradually decreases as the iteration progresses.
(b) Global search: the farther ant t is from the position with the highest pheromone concentration in the current population, the greater the transfer probability ${p}_{t}$ and the more the algorithm tends to search for the optimum over a wider range, i.e., $\theta(i+1)=\theta(i)+rand\cdot(upper-lower)/2$, where $upper$ and $lower$ are the upper and lower bounds of $\theta$.
**Step 3.** Calculate the pheromones of each path and update the pheromone concentration by the iteration formula ${\tau}_{t}(i+1)=(1-\rho)\cdot{\tau}_{t}(i)+Q\cdot\varphi(\theta)$. At the same time, record the optimal solution of the current iteration.
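The steps of Algorithm 1 can be sketched in Python as follows. This is an illustrative sketch only: the paper does not give an explicit formula for the transfer probability ${p}_{t}$, so the range-normalized form below, as well as the rule of keeping only improving moves, are assumptions patterned on Steps 2.1.1–2.1.2.

```python
import random

def ant_colony_maximize(phi, lower, upper, ant_max=30, iter_max=100,
                        rho=0.5, Q=1.0, p0=0.2):
    """Illustrative sketch of Algorithm 1 for maximizing phi on [lower, upper].

    The transfer probability p_t and the improving-move acceptance rule are
    assumptions; the text only specifies them qualitatively.
    """
    # Step 1: random initial positions; initial pheromones tau_t(0) = phi(theta_t)
    theta = [lower + random.random() * (upper - lower) for _ in range(ant_max)]
    tau = [phi(x) for x in theta]
    best = max(theta, key=phi)

    for _ in range(iter_max):
        tau_best, tau_worst = max(tau), min(tau)
        spread = tau_best - tau_worst + 1e-12
        for t in range(ant_max):
            # Transfer probability: near 0 when ant t is near the best pheromone level
            p_t = (tau_best - tau[t]) / spread
            if p_t < p0:
                # (a) Local search: fine-tune with heuristic lambda = 1/(t+1)
                cand = theta[t] + random.random() * (1.0 / (t + 1))
            else:
                # (b) Global search: jump within half of the feasible range
                cand = theta[t] + random.random() * (upper - lower) / 2.0
            cand = min(max(cand, lower), upper)
            if phi(cand) > phi(theta[t]):          # keep only improving moves
                theta[t] = cand
        # Step 3: evaporation plus deposit, tau(i+1) = (1-rho)*tau(i) + Q*phi(theta)
        tau = [(1 - rho) * tau[t] + Q * phi(theta[t]) for t in range(ant_max)]
        cur_best = max(theta, key=phi)
        if phi(cur_best) > phi(best):
            best = cur_best
    return best
```

With a simple concave objective such as $\varphi(\theta)=-(\theta-2)^2$ on [0, 4], the colony concentrates near the maximizer $\theta=2$ within a few dozen iterations.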

#### 3.1. Ant Colony Algorithm

The ant colony algorithm, first proposed by the Italian scholar Dorigo [26] in 1992, is an intelligent algorithm often applied to the travelling salesman problem [27]. It is designed by imitating the cooperative manner of an ant colony and the characteristics of ants' foraging behavior, and then abstracting this manner into a mathematical description. In biology, the foraging behavior of an ant colony has the following characteristics.

(a) While building the paths from their nest to food source, ants can deposit and sniff a chemical substance, called pheromones, which can mark the paths and provide ants with the ability to communicate with each other.

(b) Generally speaking, ants essentially move at random, but they tend to choose the path with the higher concentration of pheromones and release a certain amount of pheromones to reinforce it. Since ants traverse shorter paths more quickly, pheromones accumulate on them faster; therefore, the higher the concentration of pheromones on a path, the shorter the corresponding path tends to be.

(c) With the continuous actions of the ant colony, the shorter paths are more frequently visited and become more attractive for the subsequent ants. By contrast, the longer paths are less attractive because the pheromones will evaporate with the passing of time. Finally, the shortest way from nest to food source is found.

Nowadays, the ant colony algorithm is widely used in optimization problems. For the problem to be optimized, the basic idea of applying the ant colony algorithm is that feasible solutions are represented by the walking paths of the ants, and the paths of the whole colony constitute the solution space. Eventually, the whole colony concentrates on one path, which corresponds to the optimal solution. By analyzing the foraging process of an ant colony, the generic ant colony algorithm can be roughly summed up in four steps as follows. Firstly, set the initial ant population and the pheromones, and place all ants at random starting nodes. Secondly, taking into account problem-dependent heuristic information and the trail intensity of the paths, each ant probabilistically chooses the next unvisited node to move to; this step is repeated until a complete solution is constructed. Thirdly, evaluate the solutions and deposit pheromones on the paths according to the quality of the solutions: the better the solution, the higher the concentration of pheromones deposited. Finally, the pheromones on all paths are decreased by a constant factor at the end of each iteration of building complete solutions.
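The four generic steps above can be made concrete with a minimal sketch of the classic path-construction variant for the travelling salesman problem. The weighting exponents `alpha` and `beta` (pheromone versus the 1/distance heuristic) and all parameter values are standard textbook choices, not values from this paper.

```python
import math
import random

def aco_tsp(dist, n_ants=20, n_iter=100, alpha=1.0, beta=2.0, rho=0.5, Q=1.0):
    """Minimal classic ant colony sketch for the TSP.

    dist: symmetric distance matrix; rho is the evaporation factor and Q
    scales the pheromone deposited per solution.
    """
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]        # step 1: initial pheromones
    best_tour, best_len = None, float("inf")

    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            start = random.randrange(n)        # random starting node
            tour, visited = [start], {start}
            while len(tour) < n:               # step 2: build a complete solution
                cur = tour[-1]
                cand = [j for j in range(n) if j not in visited]
                # choose the next node with probability ~ tau^alpha * (1/d)^beta
                w = [tau[cur][j] ** alpha * (1.0 / dist[cur][j]) ** beta
                     for j in cand]
                nxt = random.choices(cand, weights=w)[0]
                tour.append(nxt)
                visited.add(nxt)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # step 4: evaporation, then step 3: deposit in proportion to quality
        tau = [[(1 - rho) * x for x in row] for row in tau]
        for tour, length in tours:
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += Q / length
                tau[b][a] += Q / length
    return best_tour, best_len
```

On a small instance such as the four corners of a unit square, the colony quickly settles on the perimeter tour of length 4.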

Aiming at the research content of this paper, we estimate the unknown parameter $\widehat{\theta}$ and find the maximum of the function $\varphi\left(\widehat{\theta}\right)$ by the ant colony algorithm, whose detailed solution process is summarized in Algorithm 1.

#### 3.2. Simulated Annealing Algorithm

The simulated annealing algorithm, first proposed by the American physicist Metropolis [28] in 1953 and applied to combinatorial optimization by Kirkpatrick [29] in 1983, is a probability-based optimization algorithm. It was originally inspired by the rules governing the internal molecular state and internal energy of a solid as it cools from a high temperature to a low temperature.

The algorithm takes the temperature of the solid as the control parameter, and with the decrease of temperature, the internal energy of the solid (i.e., the objective function value) decreases gradually until it reaches the global minimum. It is actually a greedy algorithm, but its search process introduces random factors. It accepts a solution worse than the current one with a certain probability, so it is possible to jump out of the local optimal solution and reach the global optimal solution.
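The acceptance mechanism described above can be sketched as follows. The Metropolis probability $\exp(\Delta/T)$ for accepting a worse solution is the standard form; the neighborhood rule and all parameter values below are illustrative assumptions.

```python
import math
import random

def simulated_annealing(phi, lower, upper, T0=1.0, T_end=1e-4, omega=0.95,
                        n_inner=50):
    """Sketch of simulated annealing for maximizing phi on [lower, upper].

    omega in (0, 1) is the cooling coefficient. A worse solution is accepted
    with probability exp(delta / T), which lets the search escape local optima.
    """
    theta = lower + random.random() * (upper - lower)
    best = theta
    T = T0
    while T > T_end:
        for _ in range(n_inner):
            # propose a neighbor; the step shrinks with the temperature
            cand = theta + (random.random() - 0.5) * (upper - lower) * T
            cand = min(max(cand, lower), upper)
            delta = phi(cand) - phi(theta)       # > 0 means improvement
            if delta > 0 or random.random() < math.exp(delta / T):
                theta = cand                      # accept (possibly worse) move
            if phi(theta) > phi(best):
                best = theta
        T *= omega                                # geometric cooling
    return best
```

Early on, when T is large, even clearly worse moves are accepted often (global search); as T falls, the acceptance probability for worse moves vanishes and the search becomes effectively greedy around the incumbent.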

Aiming at the research content of this paper, we estimate the unknown parameter $\widehat{\theta}$ and find the maximum of the function $\varphi\left(\widehat{\theta}\right)$ by the simulated annealing algorithm, whose detailed solution process is summarized as follows.

#### 3.3. Improved Algorithms Based on Simulated Annealing Algorithm

We summarize the advantages and disadvantages of the Newton method, the ant colony algorithm and the simulated annealing algorithm in Table 4.

From Table 4, we can see that both the ant colony algorithm and the simulated annealing algorithm overcome many of the shortcomings of the Newton method, and that the shortcomings of the ant colony algorithm can in turn be addressed by the simulated annealing algorithm. Generally speaking, the simulated annealing algorithm has strong global search ability but low solution accuracy. We therefore propose two improved algorithms based on the simulated annealing algorithm (ISAA-CC and ISAA-SS).

(a) Improved simulated annealing algorithm of cooling coefficient (ISAA-CC).

The cooling coefficient $\omega \in (0, 1)$ is an important parameter affecting the convergence of the simulated annealing algorithm. When this coefficient is too large, the solution accuracy is high but the algorithm runs for a long time; when it is too small, the search accuracy is low. Considering these disadvantages, this paper first proposes an improved algorithm based on the simulated annealing algorithm, named ISAA-CC, to improve both the efficiency of the algorithm and the quality of the optimal solution.

The value of $\omega$ is set as large as possible at the beginning of the algorithm, so that the algorithm has strong global search ability. As the iterations proceed, the value of $\omega$ decreases, so that the algorithm can search for the optimal solution more precisely. Accordingly, a functional relationship between the cooling coefficient $\omega$ and the iteration number i is established as shown in Figure 4.
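The paper's exact $\omega(i)$ is the one shown in Figure 4; the helper below is only an assumed linear stand-in with the same qualitative behavior (large $\omega$ early for slow cooling and global search, smaller $\omega$ later for faster cooling and fine local search). The bounds `omega_max` and `omega_min` are hypothetical values.

```python
def cooling_coefficient(i, iter_max, omega_max=0.99, omega_min=0.90):
    """Illustrative iteration-dependent cooling coefficient for ISAA-CC.

    Assumed linear decay from omega_max down to omega_min; the paper's own
    omega(i) (Figure 4) may have a different shape.
    """
    frac = min(i / iter_max, 1.0)       # progress through the iterations
    return omega_max - (omega_max - omega_min) * frac
```

Inside the annealing loop the fixed update `T *= omega` is simply replaced by `T *= cooling_coefficient(i, iter_max)`.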

(b) Improved simulated annealing algorithm of step-size (ISAA-SS).

Chen et al. [30] used an improved path-based local linearization algorithm to solve a special logit model, and used recent advances in line search methods to reduce the computational effort. Their experiments examined two exact line search methods (bisection and the method of successive averages (MSA)) along with three inexact line search methods (self-regulated averaging (SRA), quadratic interpolation and Armijo). These line search methods are compared in Table 5, including their computational cost in terms of objective function and derivative evaluations.

Their numerical results revealed that SRA and quadratic interpolation were more efficient and robust than the others; this computational efficiency and robustness is attributed to their smart step-size determination mechanisms. Compared with the other methods, SRA is easy to implement because it does not need to evaluate the complex objective function or its derivatives.

The Newton method used in this paper requires derivatives of the objective function, and this is its main disadvantage compared with the three heuristic algorithms, as shown in Table 4. We therefore embed SRA into the simulated annealing algorithm to determine a suitable step-size, yielding ISAA-SS.

In the literature, self-regulated averaging (SRA) was recently developed by Liu et al. [31] to enhance the computational performance of determining a suitable step-size for solving the multinomial logit stochastic user equilibrium (MNL SUE) problem. This method determines a suitable step-size as follows:

where $\alpha\left(i\right)$ is the step-size at iteration i, $\sigma\left(i\right)$ is a measure of the similarity index and $\beta\left(i\right)$ is a measure of the dissimilarity index, defined as $\beta\left(i\right)=1-\sigma\left(i\right)$; and the following conditions should be satisfied in order to guarantee convergence [31,32,33]:

An illustration of SRA is provided in Figure 5. From Equation (13), we can observe that the step-size sequences from SRA are still strictly decreasing. However, the decrease is more efficient, since the next step-size is determined according to the relationship between the residual errors (i.e., the deviation between the current solution and its auxiliary solution) of two consecutive iterations. When the current residual error increases compared with the previous iteration (i.e., the iteration tends to diverge), the parameter ${\lambda}_{1}>1$ is used to make the step-size reduction more aggressive (e.g., at iterations 19 and 31). In contrast, when the residual error decreases (i.e., the iteration tends to converge), the parameter $0<{\lambda}_{2}<1$ is used to make the step-size reduction more conservative. Hence, the step-size sequences from SRA indeed satisfy the above conditions.
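The sketch below implements SRA in the commonly cited form of Liu et al., where the step-size is the reciprocal of an accumulator that grows by $\lambda_1 > 1$ when the residual error has increased and by $0 < \lambda_2 < 1$ when it has decreased. This is an assumption about the exact update in Equation (13), chosen to match the qualitative behavior described above; the parameter values are illustrative.

```python
def sra_step_size(residuals, lam1=1.9, lam2=0.1):
    """Self-regulated averaging (SRA) step-size sequence (assumed form).

    residuals: residual error at each iteration. Returns the step-size
    alpha(i) = 1/beta(i), where beta grows by lam1 > 1 when the residual
    increased versus the previous iteration (aggressive reduction) and by
    0 < lam2 < 1 when it decreased (conservative reduction).
    """
    beta = 1.0
    steps = []
    prev = None
    for r in residuals:
        if prev is not None and r >= prev:
            beta += lam1        # diverging: shrink the step-size aggressively
        elif prev is not None:
            beta += lam2        # converging: shrink it conservatively
        steps.append(1.0 / beta)
        prev = r
    return steps
```

Because $\beta$ only ever grows, the resulting step-size sequence is strictly decreasing, while the rate of decrease adapts to whether the iteration is converging or diverging.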

In Section 4, we use these five methods, i.e., the Newton method (Appendix A), the ant colony algorithm, the simulated annealing algorithm, and the two improved algorithms ISAA-CC and ISAA-SS, to solve the problem, and then compare their performance. The results show that the two improved algorithms proposed in this paper outperform the others in finding the optimal solution.