Article

A Study of Fractional-Order Memristive Ant Colony Algorithm: Take Fracmemristor into Swarm Intelligent Algorithm

College of Computer Science, Sichuan University, Chengdu 610065, China
* Author to whom correspondence should be addressed.
Fractal Fract. 2023, 7(3), 211; https://doi.org/10.3390/fractalfract7030211
Submission received: 6 January 2023 / Revised: 17 February 2023 / Accepted: 20 February 2023 / Published: 23 February 2023

Abstract

As the fourth fundamental circuit element, the memristor can execute computations while storing data. The fracmemristor combines the long-term memory, non-locality, and weak singularity of fractional calculus with the storage–computation integration of the memristor. Since the physical structure of the fracmemristor is similar to the topology of the ant transfer probability flow in ACO, we propose the fractional-order memristive ant colony algorithm (FMAC), which uses a physical fracmemristor system to record the probabilistic transfer information of the nodes that the ant will crawl through in the future and pass it to the ant's current node, so that the ant acquires the ability to predict future transfers. After testing its optimization capability on the TSP, we found that FMAC is superior to PACO-3opt, the best integer-order ant colony algorithm currently available. FMAC also operates substantially more quickly than the fractional-order ant colony algorithm (FACA) owing to its transfer probability prediction module based on the physical fracmemristor system.

1. Introduction

Dorigo [1] introduced the first algorithm modeling the foraging behavior of an ant colony in 1992. This model was later refined into the meta-heuristic known as ant colony optimization (ACO) [2]. The first ant-based algorithm, namely the ant system [3], was subsequently developed into the ant colony system [4] with further improvements. In contrast to the ant system [5], the ant colony system possesses four distinguishing characteristics: a different transition rule, a change in the pheromone update rules, the use of local pheromone updates, and a distinct node bias. To prevent ants from reaching a state of early stagnation, Stützle and Hoos proposed the max–min ant system (MMAS) [6]. MMAS strictly limits the pheromone concentration to a specified range, which is a crucial difference from the ant system. Additionally, several improved algorithms have been proposed, including algorithms for optimizing node selection in ant colonies [7,8,9], parallel ant colony computing [10,11,12], ant colony initialization node selection [13,14,15], and ant colony interaction systems [16]. In a conventional integer-order ant colony algorithm (IACA), each ant selects a neighboring edge based on a transition probability calculated from the pheromone value; this probability is determined by the ant's present node and the neighboring edges of the graph.
Based on the concept of combinatorial completeness of the fundamental circuit variables, Leon O. Chua [17,18] predicted the existence of the memristor in 1971, referring to it as the fourth fundamental circuit element that had not yet been found: "Memristor-the missing circuit element" [17]. At that time, however, no physical device had been demonstrated. When a research team at HP built a nanoscale TiO2 device in 2008 [19], an analysis revealed that the device was a physical memristor, which stunned the international electricity and electronics industry. Memristor research then began in many countries around the world. Chua [20,21,22] pushed for a more inclusive definition encompassing two-terminal non-volatile memory devices with a resistance-changing architecture. In computer science, memristors have broad application prospects and have attracted wide interest and study in areas such as neural networks [23,24], biological engineering [25], communication engineering, and nonlinear circuits [26].
Fractional calculus has been a vital branch of mathematical analysis for more than 200 years, and how it can be used in modern signal analysis and processing remains an area to be explored. Its non-integer-order operations give fractional calculus long-term memory, non-locality, and weak singularity [27,28]. In addition, the extra degrees of freedom of a system can be enhanced by introducing fractional parameters. Fractional-order calculus is extensively used in image processing [29,30,31,32] and signal processing [33,34,35]. In 2009, Chen [36] was the first to propose a fractional-order memristor, in which the stored charge q is of fractional order. Various fractional-order memristors [37,38,39,40] were subsequently presented, and long-term memory systems were found to be better described by fractional-order models. These elements can be derived from Chua's axiomatic element system: the expanded fractional-order Chua's periodic table was proposed by Pu [41,42], after which Pu [43,44] proposed the fractional-order memristor (fracmemristor), which follows naturally from the fractor.
As a relatively new area of study, the use of fractional calculus in swarm intelligence has attracted the interest of researchers. Attempts have been made to apply fractional calculus in optimal control, fractional evolutionary equations [45], and ant colony algorithms that employ fractional calculus to modify the pheromone update rules [46]. Because of its intrinsic qualities, such as long-term memory, non-locality, and weak singularity, fractional calculus has the potential for widespread use in intelligent signal processing. Pu [47] modified the transfer probability selection mechanism of IACA using fractional-order calculus and proposed the fractional-order ant colony algorithm (FACA), enabling the ant colony algorithm to predict future transition probabilities and allowing FACA to converge more quickly. Building on this, we propose a new fractional-order memristive ant colony algorithm (FMAC) that allows a single ant to update pheromones using the non-local nature of fractional-order calculus and uses a physical fracmemristor system to quickly obtain the transfer probabilities of future nodes, allowing ants to choose subsequent paths more accurately based on the transfer probability.
The text is structured as follows:
(a)
Section 2 gives some of the mathematical and physical background needed for this paper. It concentrates on the mathematical principles of fracmemristors and on how to construct the $v$-order capacitive scaling chain, and it also summarizes several properties of the fractional-order memristor.
(b)
The formulation of the proposed fractional-order memristive ant colony algorithm (FMAC) is described in Section 3. By designing a fracmemristor for each ant to memorize future transfer probability information, a memristive physical system based on the fracmemristor is conferred on the ant. Based on the information stored by the fracmemristor, the non-local characteristic of the fractional order is employed to enable the ant to forecast possibly better probability transfer pathways in the future. We also provide mathematical arguments to ensure that FMAC converges.
(c)
The experimental findings are presented in Section 4. First, we fixed the other parameters of the FMAC and experimentally determined the order range of the fracmemristor. Then, we compared the convergence speed of FMAC with those of ACO, MMAS, and FACA. Finally, we compared the optimal results of several algorithms for the different TSP problems.

2. Background

There are six pairwise relationships between the four circuit variables $\{u, i, q, \varphi\}$:
$$\{(u,\varphi),\ (i,q),\ (u,q),\ (i,\varphi),\ (q,\varphi),\ (i,u)\}. \qquad (1)$$
We are most familiar with $(i, q)$ and $(u, \varphi)$:
$$q(t) = \int_{-\infty}^{t} i(\tau)\,d\tau, \qquad (2)$$
$$\varphi(t) = \int_{-\infty}^{t} u(\tau)\,d\tau. \qquad (3)$$
The pair $(i, u)$ is Ohm's law, which defines the resistor $R$ in Equation (4). The pair $(u, q)$ defines the capacitor $C$, and the pair $(i, \varphi)$ defines the inductor $L$:
$$u = Ri, \quad u = \frac{q}{C}, \quad \varphi = Li. \qquad (4)$$
Leon O. Chua could not identify a circuit element that directly represents the relationship between $(q, \varphi)$. He argued that, from the point of view of axiomatic completeness, an electrical component must exist to embody this fundamental relationship, given in Equation (5), and he named the new component the memory-resistor (memristor):
$$f(q, \varphi) = 0. \qquad (5)$$

2.1. Memristor

If $\varphi$ is a single-valued function of $q$, Equation (5) can be rewritten as follows:
$$\varphi = \varphi(q). \qquad (6)$$
Differentiating both sides with respect to time $t$ gives
$$\frac{d\varphi}{dt} = \frac{d\varphi(q(t))}{dq}\frac{dq}{dt} = \frac{d\varphi(q(t))}{dq}\,i(t). \qquad (7)$$
The memristive characteristics are illustrated by rewriting Equation (7) as the volt–current relation in Equation (8), which is mathematically symmetrical with the resistance formula. Here $R_M$ is the memristance, and it depends on the amount of current that has passed through the device over time; therefore, the device is referred to as a charge- or current-controlled memristor:
$$u(t) = R_M(q(t))\,i(t), \quad R_M(q(t)) \triangleq \frac{d\varphi(q)}{dq}. \qquad (8)$$
If $q$ is a single-valued function of $\varphi$, Equation (5) may be rewritten as follows:
$$q = q(\varphi). \qquad (9)$$
Here $R_M$ is again the resistance of the memristor, but it now depends on how much voltage has been applied across its terminals in the past; such memristors are therefore known as flux- or voltage-controlled memristors:
$$u(t) = R_M(\varphi(t))\,i(t), \quad R_M(\varphi) \triangleq \frac{d\varphi}{dq(\varphi)}. \qquad (10)$$
The resistor, the current-controlled memristor, and the voltage-controlled memristor thus share the same form of volt–current relation. The resistor does not store any information, and its resistance remains constant over time. In the current-controlled memristor, $R_M$ depends on the charge $q$; in the volt-controlled memristor, $R_M$ depends on the flux $\varphi$ and changes with it. Schematic diagrams of the resistor, the current-controlled memristor, and the voltage-controlled memristor are shown in Figure 1.
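For concreteness, the charge-controlled relation of Equation (8) can be simulated numerically. The following minimal Python sketch assumes an illustrative flux-charge curve $\varphi(q) = aq + bq^3$ (so $R_M(q) = a + 3bq^2$); the constants $a$, $b$ and the sinusoidal drive are hypothetical and are not taken from the paper.

```python
import numpy as np

# Minimal sketch of a charge-controlled memristor, u(t) = R_M(q(t)) i(t),
# with an illustrative (not the paper's) flux-charge curve phi(q) = a*q + b*q**3,
# so that the memristance is R_M(q) = dphi/dq = a + 3*b*q**2.
a, b = 1.0, 5.0                      # assumed device constants
t = np.linspace(0.0, 2.0, 2001)      # time axis (s)
i = 0.5 * np.sin(2 * np.pi * t)      # sinusoidal driving current (A)

q = np.cumsum(i) * (t[1] - t[0])     # q(t) = integral of i(t) dt  (Equation (2))
R_M = a + 3 * b * q**2               # memristance depends on the charge history
u = R_M * i                          # Equation (8): voltage across the memristor

print(f"memristance range: {R_M.min():.3f} .. {R_M.max():.3f} ohm")
```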

2.2. Fracmemristor

Applying the Laplace transform to both sides of Equation (4), we obtain the driving-point function in Equation (11):
$$\mathcal{L}\{f(t)\} = F(s) = \frac{U(s)}{I(s)}. \qquad (11)$$
In Figure 2, Pu [37,38] identifies $S_1$ and $S_2$ as the capacitive fractor and the inductive fractor, respectively, whilst $S_3$ and $S_4$ represent the capacitive fracmemristor and the inductive fracmemristor, respectively. The expression for the arbitrary-order capacitive fractor was obtained by Pu [37,38]; the result is given in Equation (12).
$$F_v^c = F_{(m+p)}^c = \frac{V_i(s)}{I_i(s)} = c^{-(m+p)}\,r^{1-p}\,s^{-(m+p)}. \qquad (12)$$
In Equation (12), $m = [v]$ is the integer part of $v$ (rounded down), and $p = v - m$. $F_v^c$, $c$, $v$, and $r$ denote the driving-point function, capacitance, order, and resistance of a perfectly ideal $v$-order capacitive fractor, respectively. Following the same procedure, we can determine the universal expression for the inductive fractor of arbitrary order in Equation (13).
$$F_v^l = F_{m+p}^l = \frac{V_i(s)}{I_i(s)} = l^{m+p}\,r^{1-p}\,s^{m+p}. \qquad (13)$$
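As a quick numerical illustration of Equations (12) and (13), the sketch below evaluates the $v$-order capacitive fractor on the $j\omega$ axis; its magnitude falls at $-20v$ dB per decade, between a resistor ($v = 0$) and a capacitor ($v = 1$). The values of $c$, $r$, and $v$ are illustrative assumptions, not component values from the paper.

```python
import numpy as np

# Frequency response of the v-order capacitive fractor of Equation (12),
# F_v^c(s) = c^-(m+p) * r^(1-p) * s^-(m+p), evaluated at s = j*w.
# c, r and v are illustrative values, not taken from the paper.
c, r, v = 1e-3, 1e3, 0.75
m = int(np.floor(v))
p = v - m

w = np.logspace(0, 4, 5)                                  # angular frequencies (rad/s)
F = c**(-(m + p)) * r**(1 - p) * (1j * w)**(-(m + p))     # fractor impedance

# The magnitude falls at -20*v dB per decade and the phase stays near -v*90 degrees,
# i.e., between a pure resistor (v = 0) and a pure capacitor (v = 1).
for wi, Fi in zip(w, F):
    print(f"w = {wi:8.1f} rad/s  |F| = {abs(Fi):10.3f} ohm  "
          f"phase = {np.degrees(np.angle(Fi)):6.1f} deg")
```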
Applying signal-and-system analysis to the memristor in Equation (8), we rewrite Equation (8) as Equation (14):
$$u(t) = \frac{d\varphi(q)}{dq}\,i(t) = \left[M(q) + q\frac{dM(q)}{dq}\right] i(t) = H[q(t)] * i(t), \qquad (14)$$
where the symbol $*$ denotes convolution and $H[q(t)]$ is the transfer function of the memristor. Since convolution in the time domain corresponds to multiplication in the Laplace domain, we obtain $u(s) = r[q(s)]\,i(s)$, where $r[q(s)]$ is the reactance of the memristor.
Rewriting Equation (12) compactly in terms of the order $v$ (Equation (13) for the inductive fractor can be treated in the same way), we obtain Equation (15):
$$F_v^c = c^{-v}\,r^{1-p}\,s^{-v}. \qquad (15)$$
We can replace $r$ with $r[q(s)]$. Because $r[q(s)]$ depends on the charge $q$, we obtain the current-controlled capacitive fracmemristor (CCFM) in Equation (16):
$$F_v^{(CM)}(s) = \frac{u(s)}{i(s)} = c^{-v}\,\{r[q(s)]\}^{1-p}\,s^{-v}, \qquad (16)$$
where $F_v^{(CM)}(s)$ represents the $v$-order CCFM and $c$ is the capacitance.
Similarly, we can replace $r$ with $r[\varphi(s)]$ in the inductive fractor. Because $r[\varphi(s)]$ depends on the flux $\varphi$, we obtain the volt-controlled inductive fracmemristor (VIFM for short) in Equation (17):
$$F_v^{(LM)}(s) = \frac{u(s)}{i(s)} = l^{v}\,\{r[\varphi(s)]\}^{1-p}\,s^{v}, \qquad (17)$$
where $F_v^{(LM)}(s)$ represents the $v$-order VIFM and $l$ is the inductance.
The schematic diagrams of CCFM and VIFM are shown in Figure 3.
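To make the charge dependence of the CCFM in Equation (16) concrete, the short sketch below evaluates $F_v^{(CM)}$ at a fixed frequency for several accumulated charges, using an assumed memristance law $r(q)$; all numerical values are illustrative, not measured ones.

```python
import numpy as np

# Sketch of the current-controlled capacitive fracmemristor (CCFM) of Equation (16):
# F_v(s) = c^-v * (r[q])^(1-p) * s^-v, where the effective resistance r[q] depends on
# the accumulated charge. c, v and the r(q) law below are illustrative assumptions.
c, v = 1e-3, 0.75
p = v - np.floor(v)
s = 1j * 2 * np.pi * 100.0               # evaluate at 100 Hz

def r_of_q(q):
    return 1e3 * (1.0 + 0.5 * q)         # assumed charge-dependent memristance (ohm)

for q in (0.0, 1.0, 2.0):
    F = c**(-v) * r_of_q(q)**(1 - p) * s**(-v)
    print(f"q = {q:.1f} C  |F| = {abs(F):10.1f} ohm")
```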

3. Algorithm

This section proposes a fractional-order memristive ant colony algorithm (FMAC) based on the fracmemristor. A fractional-order memristive system is introduced to estimate the transfer probabilities along potential future pathways: using the transfer probabilities of the nodes along the ant's route, we can estimate the trend of future transfer probabilities from the way they changed during previous iterations. The transfer probability prediction model exploits the fracmemristor primarily because of its inherent advantages in long-term memory and non-locality, and the fracmemristor can be used to compute the probability transfer function of the ACO algorithm.

3.1. Physical Memristive System of Transition Probability Series in FMAC

In the traditional integer-order ant colony algorithm (IACA), each ant decides which edge to follow based on a transition probability. As an intriguing theoretical problem, we studied whether the properties inherent in fractional calculus can be used to change the transition rule of IACA by substituting the simple one-step probability with a more sophisticated expression that incorporates some future information. In IACA, when arriving at a node, an ant evaluates its next move based only on the transition probabilities $p_{ij}$ to the candidate neighbors of node $i$. In FMAC, by contrast, the ant obtains the transition probability $p_{ij}$ at node $i$ and selects the next node $j$ according to $p_{ij}$; at node $j$, the next node $k$ is chosen by the transition probability $p_{jk}$, and so on. In this manner, we obtain an ant's future transition probability sequence in FMAC. As shown in Figure 4, the memory-computation mode of this ant in FMAC is almost identical to the working mode of the memristor, so we can construct a physical fracmemristor system to perform the prediction of the ant's transfer probabilities.
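The following toy Python sketch illustrates the future transition probability series described above; the probability matrix is made up for the example, and the predicted hops are chosen greedily (most probable unvisited node) purely for illustration.

```python
import numpy as np

# Toy illustration of the future transition probability series: starting at node i,
# the ant records p_ij for its chosen next node j, then p_jk at j, then p_kl at k, ...
# The 5-node probability matrix below is made up for this sketch.
P = np.array([
    [0.0, 0.5, 0.2, 0.2, 0.1],
    [0.3, 0.0, 0.4, 0.2, 0.1],
    [0.1, 0.3, 0.0, 0.4, 0.2],
    [0.2, 0.2, 0.3, 0.0, 0.3],
    [0.4, 0.1, 0.2, 0.3, 0.0],
])

def probability_series(start, steps=4):
    series, visited, node = [], {start}, start
    for _ in range(steps):
        probs = np.where([j in visited for j in range(len(P))], 0.0, P[node])
        if probs.sum() == 0.0:
            break
        nxt = int(np.argmax(probs))       # predicted next hop (most probable unvisited node)
        series.append(P[node, nxt])       # p_ij, then p_jk, then p_kl, ...
        visited.add(nxt)
        node = nxt
    return series

print(probability_series(start=0))        # this series is what the LCSF system memorizes
```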
From the algorithm flowchart in Figure 4, we can construct a probabilistic transfer prediction system based on fracmemristor, the physical structure of which is shown in Figure 5. In this paper, we only discuss the low-pass filtering fracmemristor. The v -order low-pass filter capacitive-scale-chain fracmemristor (LCSF) [42] is shown in Figure 5a.
We now analyze the LCSF mathematically. The $n$-section circuit of the $v$-order LCSF can be normalized as $NFM_v^{c_n}(w) = \frac{1}{H(s)}FM_v^{c_n}(s) = \frac{cs}{w}FM_v^{c_n}(s)$, with $w = H(s)\,c\,s$. Thus, we obtain Equation (18):
$$NFM_v^{c_1}(w) = \sum_{i=0}^{0}\frac{1}{\alpha^{-i}+\beta^{i}w} = \frac{1}{1+w}, \quad NFM_v^{c_2}(w) = \sum_{i=0}^{1}\frac{1}{\alpha^{-i}+\beta^{i}w} = \frac{1}{1+w} + \alpha\left[NFM_v^{c_1}(\sigma w)\right], \quad NFM_v^{c_3}(w) = \sum_{i=0}^{2}\frac{1}{\alpha^{-i}+\beta^{i}w} = \frac{1}{1+w} + \alpha\left[NFM_v^{c_2}(\sigma w)\right]. \qquad (18)$$
Here, $\sigma = \alpha\beta$. Similarly, the $n$-section $v$-order LCSF is
$$NFM_v^{c_n}(w) = \sum_{i=0}^{n-1}\frac{1}{\alpha^{-i}+\beta^{i}w} = \frac{1}{1+w} + \alpha\left[NFM_v^{c_{n-1}}(\sigma w)\right]. \qquad (19)$$
Equation (19) is an irregular iterative scaling equation. In Figure 6, if $n \to \infty$, we obtain
$$NFM_v^{c_{n-1}}(\sigma w) \approx NFM_v^{c_n}(\sigma w). \qquad (20)$$
Then,
$$NFM_v^{c_n}(w) = \sum_{i=0}^{n-1}\frac{1}{\beta^{i}w + \alpha^{-i}} = \frac{1}{w+1} + \alpha\left[NFM_v^{c_n}(\sigma w)\right]. \qquad (21)$$
By the general dynamical scaling law, which can be derived from Equation (15), the approximate solution to Equation (21) is
$$NFM_v^{c}(w) = \frac{[H(s)]^{1-p}\,c^{-v}\,s^{-v}}{H(s)} = [H(s)\,c\,s]^{-v} = [w]^{-v} \quad \text{when } p = v,\ 0 < v < 1, \text{ and } w = H(s)\,c\,s. \qquad (22)$$
In other words, $c^{v}H(s)$ acts as a scalar here. Substituting Equation (22) into Equation (21), we obtain Equation (23):
$$[w]^{-v} = \frac{1}{w+1} + \alpha\,[\sigma w]^{-v}. \qquad (23)$$
Because the LCSF is a low-pass filter, when $s \to 0$, $w = H(s)\,c\,s \to 0$. Equation (23) is then approximated as Equation (24):
$$[w]^{-v} \approx 1 + \alpha\,[\alpha\beta w]^{-v} \quad \text{when } w \to 0. \qquad (24)$$
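A small numerical experiment can illustrate the scaling behaviour behind Equations (19)-(24): iterating the recursion for many sections and fitting the log-log slope of the resulting normalized impedance yields an apparent order strictly between 0 (a resistor) and 1 (a capacitor). The section ratios $\alpha$ and $\beta$ below are illustrative, not the component values of the paper.

```python
import numpy as np

# Numerical sketch of the scaling recursion of Equation (19),
# NFM_n(w) = 1/(1+w) + alpha * NFM_{n-1}(sigma*w), with sigma = alpha*beta.
# alpha and beta are illustrative section ratios, not values from the paper.
alpha, beta = 0.5, 8.0
sigma = alpha * beta

def nfm(w, n):
    """n-section normalized impedance of the capacitive scaling chain."""
    if n == 1:
        return 1.0 / (1.0 + w)
    return 1.0 / (1.0 + w) + alpha * nfm(sigma * w, n - 1)

w = np.logspace(-1, 1, 50)                 # mid-band, away from both asymptotes
z = np.array([nfm(wi, 30) for wi in w])    # 30 sections behaves like n -> infinity here

# Log-log slope: 0 would be a pure resistor, -1 a pure capacitor; the chain sits in
# between, which is the fractional-order behaviour the LCSF is built to approximate.
slope = np.polyfit(np.log(w), np.log(z), 1)[0]
print(f"apparent order (fitted) ~ {-slope:.2f}")
```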
Given that the transition probability series in FMAC is similar in structure and function to the physical LCSF system, the LCSF circuit is used to predict the future transition probability flow $i_{in}$:
$$I_{in}(s) = V_{in}(s)\,[H(s)]^{p-1}\,c^{v}\,s^{v}. \qquad (25)$$
The inverse Laplace transform of Equation (25) is as follows:
$$i_{in}(t) = c^{v} D_t^{v}\left\{\mathcal{L}^{-1}\left\{[H(s)]^{p-1}\right\} * v_{in}(t)\right\} \approx c^{v}\Big\{ i_{in}(t)\big(D_t^{-1} i_{in}(t)\big) + (-v)\big[i_{in}(t)\big(\alpha D_t^{-1} i_{in}(t)\big)\big] + \frac{(-v)(-v+1)}{2}\big[i_{in}(t)\big(\alpha^{2} D_t^{-1} i_{in}(t)\big)\big] + \frac{(-v)(-v+1)(-v+2)}{6}\big[i_{in}(t)\big(\alpha^{3} D_t^{-1} i_{in}(t)\big)\big] + \cdots + \frac{\Gamma(k-v-1)}{(k-1)!\,\Gamma(-v)}\big[i_{in}(t)\big(\alpha^{k-1} D_t^{-1} i_{in}(t)\big)\big]\Big\}. \qquad (26)$$
Substituting the results of Equation (26) into the FMAC probability transfer sequence model, we obtain:
$$p_{ij}^{v_m}(t) = p_{ij}(t) + (-v)\,p_{jk}(t) + \frac{(-v)(-v+1)}{2}\,p_{kl}(t) + \cdots + \frac{\Gamma(m-v)}{\Gamma(-v)\,\Gamma(m+1)}\,p_{yz}(t) = p_{ij}(t) + \sum_{m=1}^{N_1-1}\left|\frac{\Gamma(m-v)}{\Gamma(-v)\,\Gamma(m+1)}\right| p_{(j+m-1)(j+m)}(t). \qquad (27)$$
In Equation (27), the fracmemristor’s physical system conveys information on future transfer probabilities, combined with fractional order weights, to the current node.
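The look-ahead weights in Equation (27), $\left|\Gamma(m-v)/(\Gamma(-v)\Gamma(m+1))\right|$, can be evaluated directly; the short sketch below reproduces the $w_0, \ldots, w_7$ rows of Table 2 for $v = 0.75$ and $v = -0.75$.

```python
import math

# The look-ahead weights in Equation (27) are |Gamma(m - v) / (Gamma(-v) * Gamma(m + 1))|.
# For non-integer v they can be evaluated directly with math.gamma; the output matches
# the w_0..w_7 rows of Table 2 for v = 0.75 and v = -0.75.
def weight(m, v):
    return abs(math.gamma(m - v) / (math.gamma(-v) * math.gamma(m + 1)))

for v in (0.75, -0.75):
    print(v, [round(weight(m, v), 4) for m in range(8)])
```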

3.2. Proposed Iterative Algorithm of the Fractional-Order Memristive Ant Colony Algorithm (FMAC)

In Section 3.1, we obtained the fracmemristor-based physical system, which serves as a transfer probability predictor for the ant colony algorithm. In Equation (27), $p_{(j+m-1)(j+m)}(t)$ denotes the probability of transferring from the $(m-1)$th node after node $j$ to the $m$th node after node $j$. Since the probabilities must sum to 1, we normalize them and obtain the transfer probability of FMAC:
$$p_{ij}^{v_m}(t) = \begin{cases} \dfrac{1}{f}\left\{p_{ij}(t) + \displaystyle\sum_{m=1}^{N_1-1}\left|\dfrac{\Gamma(m-v)}{\Gamma(-v)\,\Gamma(m+1)}\right| p_{(j+m-1)(j+m)}(t)\right\}, & \text{if } j \in C_i^n(t) \text{ and } (j+m) \in C_{(j+m-1)}^n(t) \\ 0, & \text{if } j \notin C_i^n(t), \end{cases} \qquad (28)$$
where $f = \sum_{m=0}^{N_1-1}\left|\frac{\Gamma(m-v)}{\Gamma(-v)\,\Gamma(m+1)}\right|$ is a normalization factor, and $p_{ij}^{v_m}(t)$ is the $v$-order transition probability of an ant moving from node $i$ to node $j$. $C_i^n(t)$ is the set of optional next nodes, and $(N_1-1)$ is the number of nodes connected to node $j$. $p_{ij}^{v_m}(t)$ represents the $v$-order differential of $p_{ij}(t)$, where $p_{ij}(t)$ is the transition probability of IACA, so
$$p_{ij}(t) = \begin{cases} \dfrac{[\tau_{ij}(t)]^{\alpha}\,[\eta_{ij}(t)]^{\beta}}{\sum_{c \in C_i^n(t)}[\tau_{ic}(t)]^{\alpha}\,[\eta_{ic}(t)]^{\beta}}, & \text{if } j \in C_i^n(t) \\ 0, & \text{if } j \notin C_i^n(t), \end{cases} \qquad (29)$$
$$p_{(j+m-1)(j+m)}(t) = \begin{cases} \dfrac{[\tau_{(j+m-1)(j+m)}(t)]^{\alpha}\,[\eta_{(j+m-1)(j+m)}(t)]^{\beta}}{\sum_{c \in C_{(j+m-1)}^n(t)}[\tau_{(j+m-1)c}(t)]^{\alpha}\,[\eta_{(j+m-1)c}(t)]^{\beta}}, & \text{if } (j+m) \in C_{(j+m-1)}^n(t) \\ 0, & \text{if } (j+m) \notin C_{(j+m-1)}^n(t). \end{cases} \qquad (30)$$
Here, $\alpha$ weights the pheromone concentration and $\beta$ weights the heuristic information. In Equations (29) and (30), $\tau_{(j+m-1)(j+m)}(t)$ is the pheromone concentration.
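A minimal sketch of how Equations (28)-(30) combine is given below: the ordinary IACA probability is mixed with the probabilities of a few predicted future hops, weighted by the fractional-order coefficients and normalized by $f$. The pheromone and heuristic tables are random placeholders, the future hops are predicted greedily, and the look-ahead is truncated, so this is an illustration of the formula rather than the authors' implementation.

```python
import math
import numpy as np

# Illustration of the v-order transition probability of Equation (28): the IACA
# probability (Equation (29)) is combined with predicted future-hop probabilities,
# weighted by |Gamma(m - v)/(Gamma(-v) Gamma(m + 1))| and normalized by f.
# tau, eta and the graph below are made-up placeholders, not data from the paper.
alpha, beta, v = 1.0, 5.0, 0.75
rng = np.random.default_rng(1)
n = 6
tau = rng.uniform(0.1, 1.0, (n, n))
eta = 1.0 / rng.uniform(1.0, 10.0, (n, n))

def gl_weight(m, v):
    return abs(math.gamma(m - v) / (math.gamma(-v) * math.gamma(m + 1)))

def p_iaca(i, j, allowed):
    den = sum(tau[i, c]**alpha * eta[i, c]**beta for c in allowed)
    return tau[i, j]**alpha * eta[i, j]**beta / den

def p_fmac(i, j, unvisited, lookahead=3):
    total = p_iaca(i, j, unvisited)          # the ordinary one-step probability
    f = gl_weight(0, v)
    visited, cur = {i, j}, j
    for m in range(1, lookahead + 1):
        candidates = [c for c in range(n) if c not in visited]
        if not candidates:
            break
        probs = [p_iaca(cur, c, candidates) for c in candidates]
        best = int(np.argmax(probs))         # greedily predicted hop m after node j
        w_m = gl_weight(m, v)
        total += w_m * probs[best]
        f += w_m
        visited.add(candidates[best])
        cur = candidates[best]
    return total / f

unvisited = [j for j in range(n) if j != 0]
print({j: round(p_fmac(0, j, unvisited), 3) for j in unvisited})
```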
Using the pheromone concentrations of the $t$th iteration, we update the pheromone concentration of FMAC as
$$\tau_{ij}(t+1) = (1-\rho)\,\tau_{ij}(t) + \sum_{m=1}^{k}\left|\frac{\Gamma(m-v-1)}{\Gamma(-v)\,\Gamma(m)}\right|\Delta\tau_{ij}^{m}(t), \qquad (31)$$
where $\rho$ is the pheromone volatilization rate, $0 < \rho < 1$, and $\tau_{ij}(t) \in [\tau_{\min}, \tau_{\max}]$, with $\tau_{\min}$ and $\tau_{\max}$ the lower and upper bounds of the pheromone concentration. We have now obtained all the formulas and parameters needed for FMAC. The algorithmic flow of FMAC is as follows (Algorithm 1):
Algorithm 1: Proposed Iterative Algorithm of the Fractional-Order Memristive Ant Colony Algorithm (FMAC)
1:Initialization:
2:Set the number of iterations t = 0 ;
3:Initialize ant colony algorithm (ACO) parameters, v , α , β , ρ , ξ , Q G , Q a , Q P , γ , τ 0 , τ m i n , and τ m a x ;
4:Place each ant in G randomly, m = 1 , , Q a ;
5:for each edge ( i , j ) , do
6:   Set τ i j ( t ) ~ U ( 0 , τ 0 ) ;
7:   Set η i j ( t ) = 1 / d i j ;
8:end
9:Initialize low-pass filtering fracmemristor (LCSF),
10:Initialize the parameter of LCSF: A , B , v 1
11:Repeat
12:   for each ant population, number the population from 1 to γ ,
13:   for each ant, number the ant from 1 to m = int   ( Q a / γ ) , do
14:      Set t a b u t ( m , θ ) = Φ ;
15:      Repeat
16:      Take transfer probability series of ant m into LCSF
17:      the $m$th ant travels from node $i$ to node $j$;
18:      then set $tabu_t(m, \theta) = tabu_t(m, \theta) \cup \{(i, j)\}$;
19:      compute all transfer probability in t a b u t ( m , θ )
20:   Until completes travelling all nodes in G ;
21:      compute and rank L m ( t ) ,
22:      return the shortest path L 1 ( t ) ;
23:end
24:   for the edge in graph,
25:      update pheromone concentration;
26:      then update the transfer probability;
27:    end
28:end
29:Compute the shortest way of γ parts of ants;
30:   Update the pheromone concentration on the shortest visited way with τ i j ( t + 1 ) = ( 1 ρ ) τ i j ( t ) + 1 / L m i n ( t ) ;
31:   Set t = t + 1 ;
32:until the maximum number of cycles is reached;
33:Compute min m = 1 , 2 , , Q a [ L m ( t ) ] .
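As a complement to Algorithm 1, the sketch below implements the fractional-order pheromone update of Equation (31) with clamping to $[\tau_{\min}, \tau_{\max}]$. The interpretation of $\Delta\tau_{ij}^m(t)$ as the deposit associated with the $m$th ranked ant, as well as the numerical values, are assumptions made only for illustration.

```python
import math

# Sketch of the fractional-order pheromone update of Equation (31),
# tau_ij(t+1) = (1 - rho)*tau_ij(t) + sum_m |Gamma(m-v-1)/(Gamma(-v)*Gamma(m))| * dtau_ij^m(t),
# clamped to [tau_min, tau_max]. rho, v, the bounds and the deposits are illustrative.
rho, v = 0.2, 0.75
tau_min, tau_max = 0.01, 10.0

def frac_weight(m, v):
    return abs(math.gamma(m - v - 1) / (math.gamma(-v) * math.gamma(m)))

def update(tau_ij, deposits):
    """deposits[m-1]: pheromone laid on edge (i, j) by the m-th ranked ant (assumption)."""
    new = (1.0 - rho) * tau_ij + sum(
        frac_weight(m, v) * d for m, d in enumerate(deposits, start=1))
    return min(max(new, tau_min), tau_max)   # keep tau within [tau_min, tau_max]

print(update(1.0, deposits=[0.30, 0.20, 0.10]))
```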

3.3. Convergence Analysis of FMAC from a Mathematical Point of View

The convergence of the FMAC is mathematically proven and analyzed in this section.
Lemma 1.
Concerning the pheromone concentration $\tau_{ij}(t)$ at any given edge $(i,j)$ of $G$, because $0 < \rho < 1$, an upper bound $\tau_{\max}$ can be established theoretically:
$$\lim_{t\to\infty}\tau_{ij}(t) \le \tau_{\max} = \Delta\tau_{\max}/\rho, \qquad (32)$$
where $\Delta\tau_{\max} = \max[\Delta\tau(t)] = \max\left[\sum_{m=1}^{N_3}\left|\frac{\Gamma(m-v-1)}{\Gamma(-v)\,\Gamma(m)}\right|\Delta\tau_{ij}^{m}(t)\right] = \sum_{m=1}^{N_3}\left|\frac{\Gamma(m-v-1)}{\Gamma(-v)\,\Gamma(m)}\right|\frac{1}{L_{ts}} \ge 0$, $t \in (1, \infty)$, and $L_{ts}$ is the theoretical shortest route.
Proof of Lemma 1.
From Equation (31), after the initial iteration, the maximum pheromone concentration τ i j max along the edge ( i , j ) in the graph can be calculated as:
$$\tau_{ij}^{\max}(1) = (1-\rho)\,\tau_{ij}(0) + \Delta\tau_{\max}. \qquad (33)$$
Thus, from Equations (31) and (33), after another iteration, we may obtain the following formula for the maximum pheromone concentration τ i j max along each edge ( i , j ) :
$$\tau_{ij}^{\max}(2) = (1-\rho)\,\tau_{ij}(1) + \Delta\tau_{\max} = (1-\rho)^{2}\,\tau_{ij}(0) + (1-\rho)\,\Delta\tau_{\max} + \Delta\tau_{\max}. \qquad (34)$$
The maximum pheromone concentration $\tau_{ij}^{\max}$ of edge $(i,j)$ after the $t$th iteration is obtained in the same way:
$$\tau_{ij}^{\max}(t) = (1-\rho)^{t}\,\tau_{ij}(0) + \sum_{k=1}^{t}\left[(1-\rho)^{t-k}\,\Delta\tau_{\max}\right]. \qquad (35)$$
Equation (35) then yields the inequality
$$\tau_{ij}(t) \le \tau_{ij}^{\max}(t) = (1-\rho)^{t}\,\tau_{ij}(0) + \sum_{k=1}^{t}\left[(1-\rho)^{t-k}\,\Delta\tau_{\max}\right]. \qquad (36)$$
Because $0 < \rho < 1$, by the sum of a geometric progression we obtain
$$\lim_{t\to\infty}\tau_{ij}(t) \le \lim_{t\to\infty}\left\{(1-\rho)^{t}\,\tau_{ij}(0) + \frac{1-(1-\rho)^{t}}{\rho}\,\Delta\tau_{\max}\right\} = \Delta\tau_{\max}/\rho. \qquad (37)$$
The proof ends. □
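The geometric argument of Lemma 1 is easy to check numerically: iterating the update with a constant maximal deposit drives the pheromone to $\Delta\tau_{\max}/\rho$ regardless of the initial value. The numbers below are arbitrary.

```python
# Quick numerical check of Lemma 1: iterating tau(t+1) = (1 - rho)*tau(t) + dtau_max
# drives the pheromone concentration toward the bound dtau_max / rho, for any tau(0).
# rho, dtau_max and the initial values are arbitrary illustrative numbers.
rho, dtau_max = 0.2, 0.05
for tau0 in (0.0, 1.0, 10.0):
    tau = tau0
    for _ in range(200):
        tau = (1 - rho) * tau + dtau_max
    print(tau0, round(tau, 6), "bound =", dtau_max / rho)
```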
Lemma 2.
If $p_{id}(t)$ is the lowest probability of finding the optimum travel route $\pi^*$ in the $t$th iteration, then there exists an upper bound $p_{up}^{t}$, $0 < p_{up}^{t} < 1$, such that for a sufficiently large number of iterations $t$ we obtain
$$p_{id}(t) \ge 1 - p_{up}^{t}, \qquad (38)$$
$$\lim_{t\to\infty} p_{id}(t) = 1. \qquad (39)$$
Proof of Lemma 2.
Since the pheromone is bounded, $\tau_{ij}(t) \in [\tau_{\min}, \tau_{\max}]$ in Equations (28)–(30), we calculate the lower bound $p_{\min}^{v}$ of the minimum probability of an ant finding the best travel path:
$$p_{\min}^{v} = \frac{1}{f}\left[p_{\min} + \sum_{m=1}^{N_1-1}\left|\frac{\Gamma(m-v)}{\Gamma(-v)\,\Gamma(m+1)}\right| p_{\min}\right]. \qquad (40)$$
Considering the worst-case scenario, in which the pheromone on the edge of interest is $\tau_{\min}$ and the pheromones on all the other edges are $\tau_{\max}$,
$$p_{\min} = \frac{(T_{\min})^{\alpha}(N_{\min})^{\beta}}{(T_{\min})^{\alpha}(N_{\min})^{\beta} + \sum_{(i,u)\notin\pi^*}[T_{iu}(t)]^{\alpha}[N_{iu}(t)]^{\beta}} = \frac{(T_{\min})^{\alpha}(N_{\min})^{\beta}}{(T_{\min})^{\alpha}(N_{\min})^{\beta} + (K_G-1)(T_{\max})^{\alpha}(N_{\max})^{\beta}},$$
where $K_G$ is the connectivity of $G$, and
$$T_{\min} = \min_{j\in C_i^n(t),\ (j+m)\in C_{(j+m-1)}^n(t)}\left[\tau_{ij}(t),\ \tau_{(j+m-1)(j+m)}(t)\right], \quad N_{\min} = \min_{j\in C_i^n(t),\ (j+m)\in C_{(j+m-1)}^n(t)}\left[\eta_{ij}(t),\ \eta_{(j+m-1)(j+m)}(t)\right].$$
Consequently, the $v$-order transition probability of the ant on a journey path $\pi$ in the $t$th iteration satisfies
$$0 < p_{\min}^{v}(t) \le p_{ij}^{v_m}(t) < 1. \qquad (41)$$
Because $(i,j) \in \pi$ and $\pi \ne \pi^*$, any viable path developed satisfies the following condition:
$$0 < \left[p_{\min}^{v}(t)\right]^{Q_G} \le \left[p_{ij}^{v_m}(t)\right]^{Q_G} < 1, \qquad (42)$$
where $Q_G$ is the number of nodes in the graph $G$. By the properties of the complementary event, the upper bound $p_{ub}$ on the probability of not recognizing the best travel path $\pi^*$ is
$$p_{ub} = 1 - \left[p_{\min}^{v}(t)\right]^{Q_G}. \qquad (43)$$
Then, from Equation (43), after the first $t$ trials, the probability of never covering an edge of the optimum travel route $\pi^*$ is bounded above by $p_{up}^{t}$:
$$p_{up}^{t} = \left\{1 - \left[p_{\min}^{v}(t)\right]^{Q_G}\right\}^{t} > 0. \qquad (44)$$
Therefore, the lower bound $p_{id}$ of the probability of traversing the edges of the optimum travel path $\pi^*$ is
$$p_{id} \ge 1 - p_{up}^{t} = 1 - \left\{1 - \left[p_{\min}^{v}(t)\right]^{Q_G}\right\}^{t}. \qquad (45)$$
From Equation (42), we obtain the following:
$$\lim_{t\to\infty} p_{id} \ge \lim_{t\to\infty}\left\{1 - \left\{1 - \left[p_{\min}^{v}(t)\right]^{Q_G}\right\}^{t}\right\} = 1. \qquad (46)$$
The proof ends. □
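Lemma 2 can likewise be illustrated numerically: even a small per-iteration success probability makes the failure bound of Equation (44) vanish as $t$ grows, so $p_{id} \to 1$. The probability used below is an arbitrary illustrative value.

```python
# Numerical illustration of Lemma 2: for any per-iteration success probability
# p = [p_min^v]^{Q_G} > 0, the failure bound (1 - p)^t vanishes, so p_id -> 1.
p = 1e-3                          # illustrative lower bound, not a value from the paper
for t in (10, 100, 1000, 10_000):
    print(t, 1 - (1 - p) ** t)
```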
From the above proofs, FMAC can eventually find an optimal path.

4. Experiment of FMAC in TSP Problem

The ant colony optimization (ACO) algorithm is a well-known method for solving discrete combinatorial optimization problems, such as the traveling salesman problem (TSP) and the vehicle routing problem, by emulating an ant colony searching for food. To test the performance of FMAC, we utilized a publicly accessible TSP benchmark library (TSPLIB [48]).
FMAC consists of two major components: the transfer probability prediction module based on the fracmemristor (an LCSF with eight sections) and the ant colony module. The parameters of FMAC are listed in Table 1: pheromone importance $\alpha$, heuristic factor importance $\beta$, pheromone evaporation coefficient $\rho$, constraint leniency $\xi$, and fractional order $v$; the order $v$ must be the same as the fractional order of the LCSF.

4.1. Effect of the Fractional-Order Coefficient v in FMAC

The fractional order is the most crucial parameter in FMAC, which controls the systematic properties of the physical memristive system LCSF and also plays a vital role in the fractional order ant colony algorithm module for pheromone concentration updating. In this section, we conducted experiments involving the TSP problem with the best known solution to determine the fractional order values.
In Equations (27) and (28), we selected the first eight look-ahead transfer probability coefficients; their values change with $v$. Table 2 shows the result. When $-1 < v < 1$, $w_{n-1} > w_n$: information about the future transfer probabilities is passed to the current ant node, and the farther away a node is, the less information it contributes, which agrees with intuition.
We chose TSPLIB’s KroA100 to evaluate the performance of various fractional orders. The optimal theoretical global solution for kroA100 is 21282.
In Table 3, when $v = 0.75$, the weighting coefficients are reasonable and FMAC achieves good optimization results. However, $v = 0.75$ is not always the best fractional order of FMAC for every TSP instance. As seen in Table 3, FMAC achieves more desirable optimization results when $0.5 < v < 1$. In this article, we set $v = 0.75$ for all TSP problems.

4.2. FMAC Convergence Experiments

FMAC can communicate future transfer probability information to the current node. From an application standpoint, FMAC should therefore be able to converge more quickly than IACA. In this part, we compare FMAC with ACO, MMAS, and FACA. The test cases were chosen from TSPLIB: berlin52, eil51, eil76, and eil101.
The hardware environment is 8.00 GB RAM and an Intel(R) Core(TM) i5-4800 CPU at 3.2 GHz; the software environment is Windows 10 with MATLAB R2018a. Each experiment is executed 20 times, the maximum number of iterations is 300, and the number of ants is 100.
As shown in Figure 7, Figure 8, Figure 9 and Figure 10, FMAC and FACA converge much faster than the traditional integer-order IACA when solving TSP problems such as berlin52. For TSP problems with a small number of cities, FMAC already gives good results after approximately 20 rounds and converges almost to the optimal outcome at 50 iterations. In contrast, traditional IACAs such as ACO and MMAS converge much more slowly, and even after 300 iterations they have only converged near, rather than to, the global optimum.

4.3. FMAC and Various Improved Ant Colony Algorithms for Comparison

For comparative purposes, the optimization capability of FMAC is evaluated against several well-established improved ACO algorithms: ACO [2], MMAS [6], PACO-3opt [49], and FACA [47]. We used the following TSPLIB test datasets: berlin52, eil51, eil76, eil101, rat99, and st70. The best known solutions for berlin52, eil51, eil76, eil101, rat99, and st70 are 7542, 426, 538, 629, 1211, and 675, respectively.
The hardware environment is 8.00 GB RAM and an Intel(R) Core(TM) i5-4800 CPU at 3.2 GHz; the software environment is Windows 10 with MATLAB R2018a. Each experiment is executed 20 times, the maximum number of iterations is 500, and the number of ants is 100.
Table 4 indicates that PACO-3opt, FACA, and FMAC are superior to ACO and MMAS. PACO-3opt [49] is currently the best integer-order improved ant colony algorithm. FACA and FMAC obtain the shortest TSP paths in rat99 and st70 and are better than PACO-3opt. Among the ACO improvement algorithms, FMAC and FACA are thus almost the best for small-scale TSP problems. Applying the LCSF to FACA, the physical equivalent circuit improves the efficiency of computing the fractional-order transfer probabilities, so FMAC should run faster than FACA. The experimental results verify this conjecture.

5. Discussion and Conclusions

5.1. Discussion

We conducted three experiments in Section 4. The fractional order of FMAC was determined in the first experiment: FMAC produces the best performance on KroA100 when v = 0.75. However, the optimal value of v varies across TSP problems; in practice, FMAC obtains better results for 0.5 < v < 1. From the second and third experiments, we find that FMAC and FACA converge significantly faster than ACO and MMAS, and their optimal outcomes are superior to those obtained by ACO and MMAS. At present, FMAC and FACA are nearly the best algorithms for small-scale TSP problems. Because fractional calculus is substituted into IACA to rewrite the transfer probabilities, FMAC and FACA need significantly more time than IACA. Thanks to the use of an equivalent physical circuit, the speed of FMAC is improved relative to FACA by approximately 30%.

5.2. Conclusions

The present research aimed to apply the fracmemristor to the fractional-order ant colony algorithm. In our investigation of the equivalent circuit of the fracmemristor, we discovered that the topology of the ant's future transition probability sequence and that of the LCSF circuit are comparable. We then derived the LCSF formulation and found that it can easily be used for fractional-order transfer probability calculations. This paper empirically demonstrates that hardware topologies can be leveraged to accelerate software algorithms with comparable structures. However, the analysis is restricted to v-order LCSFs whose topology matches the ant's future transition probability sequence. Can we, then, artificially design fracmemristor systems with matching topologies to accelerate other algorithms? We will continue our research in this area, building physical fracmemristor systems to accelerate algorithms.

Author Contributions

Conceptualization, W.Z. and Y.P.; methodology, Y.P.; software, W.Z. and Y.P.; validation, W.Z. and Y.P.; formal analysis, W.Z. and Y.P.; investigation, W.Z. and Y.P.; resources, Y.P.; data curation, Y.P.; writing—original draft preparation, W.Z.; writing—review and editing, Y.P.; visualization, W.Z.; supervision, Y.P.; project administration, Y.P.; funding acquisition, Y.P. All authors have read and agreed to the published version of the manuscript.

Funding

Project supported in part by the National Natural Science Foundation of China (Grant No. 62171303), in part by the China South Industries Group Corporation (Chengdu) Fire Control Technology Center Project (non-secret) (Grant No. HK20-03), in part by the National Key Research and Development Program Foundation of China (Grant No. 2018YFC0830300).

Data Availability Statement

We will make some of our code available as open source on GitHub in the future.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dorigo, M. Optimization, Learning and Natural Algorithms. Ph.D. Thesis, Politecnico di Milano, Milan, Italy, 1992. [Google Scholar]
  2. Dorigo, M.; Caro, G.D. Ant colony optimization: A new meta-heuristic. In Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406), Washington, DC, USA, 6–9 July 1999; Volume 1472, pp. 1470–1477. [Google Scholar] [CrossRef]
  3. Dorigo, M.; Gambardella, L. Ant colony system: A cooperative learning approach to the traveling salesman problem. IEEE Trans. Evol. Comput. 1997, 1, 53–66. [Google Scholar] [CrossRef] [Green Version]
  4. Maniezzo, V.; Colorni, A. The ant system applied to the quadratic assignment problem. IEEE Trans. Knowl. Data Eng. 1999, 11, 769–778. [Google Scholar] [CrossRef] [Green Version]
  5. Dorigo, M.; Maniezzo, V.; Colorni, A. Ant system: Optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern. Part B Cybern. 1996, 26, 29–41. [Google Scholar] [CrossRef] [Green Version]
  6. Stutzle, T.; Hoos, H. MAX-MIN Ant System and local search for the traveling salesman problem. In Proceedings of the 1997 IEEE International Conference on Evolutionary Computation, Indianapolis, IN, USA, 13–16 April 1997; pp. 309–314. [Google Scholar] [CrossRef]
  7. Dorigo, M.; Gambardella, L.M. A Study of Some Properties of Ant-Q. In Proceedings of the PPSN Fourth International Conference on Parallel Problem Solving from Nature, Berlin, Germany, 22–26 September 1996; pp. 656–665. [Google Scholar]
  8. Taillard, E.D. FANT: Fast Ant System; Technical Report; Istituto Dalle Molle Di Studi Sull Intelligenza Artificiale: Lugano, Switzerland, 1998. [Google Scholar]
  9. Roux, O.; Fonlupt, C.; Talbi, E.G.; Robilliard, D. ANTabu—Enhanced Version; Technical Report LIL-99-01; Laboratoire d’Informatique du Littorral, Université du Littoral: Calais, France, 1999. [Google Scholar]
  10. Kaji, T. Approach by ant tabu agents for Traveling Salesman Problem. In Proceedings of the 2001 IEEE International Conference on Systems, Man and Cybernetics, e-Systems and e-Man for Cybernetics in Cyberspace (Cat. No. 01CH37236), Tucson, AZ, USA, 7–10 October 2001. [Google Scholar] [CrossRef]
  11. Bullnheimer, B.; Kotsis, G.; Strauß, C. Parallelization Strategies for the Ant System. In High Performance Algorithms and Software in Nonlinear Optimization. Applied Optimization; Springer: Boston, MA, USA, 1998; pp. 87–100. [Google Scholar] [CrossRef] [Green Version]
  12. Maniezzo, V.; Carbonaro, A. An ANTS heuristic for the frequency assignment problem. Futur. Gener. Comput. Syst. 2000, 16, 927–935. [Google Scholar] [CrossRef]
  13. Varela, G.; Sinclair, M. Ant colony optimisation for virtual-wavelength-path routing and wavelength allocation. In Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406), Washington, DC, USA, 6–9 July 1999; Volume 3, p. 1809. [Google Scholar] [CrossRef]
  14. Watanabe, I.; Matsui, S. Improving the performance of ACO algorithms by adaptive control of candidate set. In Proceedings of the 2003 Congress on Evolutionary Computation, CEC’03, Canberra, ACT, Australia, 8–12 December 2003; Volume 2, pp. 1355–1362. [Google Scholar] [CrossRef]
  15. Watanabe, I.; Matsui, S. Boosting ACO with a Preprocessing Step. In Applications of Evolutionary Computing; Cagnoni, S., Gottlieb, J., Hart, E., Middendorf, M., Raidl, G.R., Eds.; Springer: Berlin, Germany, 2002; pp. 163–172. [Google Scholar] [CrossRef]
  16. Shi, L.; Hao, J.; Zhou, J.; Xu, G. Ant colony optimization algorithm with random perturbation behavior to the problem of optimal unit commitment with probabilistic spinning reserve determination. Electr. Power Syst. Res. 2004, 69, 295–303. [Google Scholar] [CrossRef]
  17. Chua, L. Memristor-The missing circuit element. IEEE Trans. Circuit Theory 1971, 18, 507–519. [Google Scholar] [CrossRef]
  18. Chua, L.O.; Kang, S.M. Memristive devices and systems. Proc. IEEE 1976, 64, 209–223. [Google Scholar] [CrossRef]
  19. Strukov, D.B.; Snider, G.S.; Stewart, D.R.; Williams, R.S. The missing memristor found. Nature 2008, 453, 80–83. [Google Scholar] [CrossRef]
  20. Di Ventra, M.; Pershin, Y.V.; Chua, L.O. Circuit Elements With Memory: Memristors, Memcapacitors, and Meminductors. Proc. IEEE 2009, 97, 1717–1724. [Google Scholar] [CrossRef] [Green Version]
  21. Kim, H.; Sah, M.P.; Yang, C.; Cho, S.; Chua, L.O. Memristor Emulator for Memristor Circuit Applications. IEEE Trans. Circuits Syst. I Regul. Pap. 2012, 59, 2422–2431. [Google Scholar] [CrossRef]
  22. Adhikari, S.P.; Sah, M.P.; Kim, H.; Chua, L.O. Three Fingerprints of Memristor. IEEE Trans. Circuits Syst. I Regul. Pap. 2013, 60, 3008–3021. [Google Scholar] [CrossRef]
  23. Velmurugan, G.; Rakkiyappan, R.; Cao, J. Finite-time synchronization of fractional-order memristor-based neural networks with time delays. Neural Netw. 2016, 73, 36–46. [Google Scholar] [CrossRef]
  24. Chen, J.; Zeng, Z.; Jiang, P. On the periodic dynamics of memristor-based neural networks with time-varying delays. Inf. Sci. 2014, 279, 358–373. [Google Scholar] [CrossRef]
  25. Lashkare, S.; Panwar, N.; Kumbhare, P.; Das, B.; Ganguly, U. PCMO-Based RRAM and NPN Bipolar Selector as Synapse for Energy Efficient STDP. IEEE Electron Device Lett. 2017, 38, 1212–1215. [Google Scholar] [CrossRef]
  26. Bennett, C.H.; Lorival, J.-E.; Marc, F.; Cabaret, T.; Jousselme, B.; Derycke, V.; Klein, J.-O.; Maneux, C. Multiscaled Simulation Methodology for Neuro-Inspired Circuits Demonstrated with an Organic Memristor. IEEE Trans. Multi-Scale Comput. Syst. 2017, 4, 822–832. [Google Scholar] [CrossRef]
  27. Oldham, K.B.; Spanier, J. The Fractional Calculus: Integrations and Differentiations of Arbitrary Order; Academic Press: New York, NY, USA, 1974. [Google Scholar]
  28. Podlubny, I. Fractional Differential Equations: An Introduction to Fractional Derivatives, Fractional Differential Equations, Some Methods of Their Solution and Some of Their Applications; Academic Press: San Diego, CA, USA, 1998. [Google Scholar]
  29. Zhang, X.; Dai, L. Image Enhancement Based on Rough Set and Fractional Order Differentiator. Fractal Fract. 2022, 6, 214. [Google Scholar] [CrossRef]
  30. Zhang, X.; Liu, R.; Ren, J.; Gui, Q. Adaptive Fractional Image Enhancement Algorithm Based on Rough Set and Particle Swarm Optimization. Fractal Fract. 2022, 6, 100. [Google Scholar] [CrossRef]
  31. Zhang, X.-F.; Yan, H.; He, H. Multi-focus image fusion based on fractional-order derivative and intuitionistic fuzzy sets. Front. Inf. Technol. Electron. Eng. 2020, 21, 834–843. [Google Scholar] [CrossRef]
  32. Yan, H.; Zhang, J.-X.; Zhang, X. Injected Infrared and Visible Image Fusion via L1 Decomposition Model and Guided Filtering. IEEE Trans. Comput. Imaging 2022, 8, 162–173. [Google Scholar] [CrossRef]
  33. Yan, H.; Zhang, X. Adaptive fractional multi-scale edge-preserving decomposition and saliency detection fusion algorithm. ISA Trans. 2020, 107, 160–172. [Google Scholar] [CrossRef]
  34. Koeller, R.C. Applications of the Fractional Calculus to the Theory of Viscoelastinode. J. Appl. Mech. 1984, 51, 294–298. [Google Scholar] [CrossRef]
  35. Rossikhin, Y.A.; Shitikova, M.V. Applications of Fractional Calculus to Dynamic Problems of Linear and Nonlinear Heredi-Tary Mechanics of Solids. Appl. Mech. Rev. 1997, 50, 15–67. [Google Scholar] [CrossRef]
  36. Elsafty, A.H.; Hamed, E.M.; Fouda, M.E.; Said, L.A.; Madian, A.H.; Radwan, A.G. Study of fractional flux-controlled memristor emulator connections. In Proceedings of the 2018 7th International Conference on Modern Circuits and Systems Technologies (MOCAST), Thessaloniki, Greece, 7–9 May 2018; pp. 1–4. [Google Scholar] [CrossRef]
  37. Guo, Z.; Si, G.; Diao, L.; Jia, L.; Zhang, Y. Generalized modeling of the fractional-order memcapacitor and its character analysis. Commun. Nonlinear Sci. Numer. Simul. 2017, 59, 177–189. [Google Scholar] [CrossRef]
  38. Radwan, A.G.; Moaddy, K.; Hashim, I. Amplitude Modulation and Synchronization of Fractional-Order Memristor-Based Chua’s Circuit. Abstr. Appl. Anal. 2013, 2013, 758676. [Google Scholar] [CrossRef] [Green Version]
  39. Khalil, N.A.; Said, L.A.; Radwan, A.G.; Soliman, A.M. General fractional order mem-elements mutators. Microelectron. J. 2019, 90, 211–221. [Google Scholar] [CrossRef]
  40. Cafagna, D.; Grassi, G. On the simplest fractional-order memristor-based chaotic system. Nonlinear Dyn. 2012, 70, 1185–1197. [Google Scholar] [CrossRef]
  41. Pu, Y.-F. Measurement Units and Physical Dimensions of Fractance-Part I: Position of Purely Ideal Fractor in Chua’s Axiomatic Circuit Element System and Fractional-Order Reactance of Fractor in Its Natural Implementation. IEEE Access 2016, 4, 3379–3397. [Google Scholar] [CrossRef]
  42. Pu, Y.-F. Measurement Units and Physical Dimensions of Fractance-Part II: Fractional-Order Measurement Units and Physical Dimensions of Fractance and Rules for Fractors in Series and Parallel. IEEE Access 2016, 4, 3398–3416. [Google Scholar] [CrossRef]
  43. Pu, Y.-F.; Yuan, X.; Yu, B. Analog Circuit Implementation of Fractional-Order Memristor: Arbitrary-Order Lattice Scaling Fracmemristor. IEEE Trans. Circuits Syst. I Regul. Pap. 2018, 65, 2903–2916. [Google Scholar] [CrossRef]
  44. Zhu, W.-Y.; Pu, Y.-F.; Liu, B.; Yu, B.; Zhou, J.-L. A mathematical analysis: From memristor to fracmemristor. Chin. Phys. B 2022, 31, 060204. [Google Scholar] [CrossRef]
  45. Pu, Y.-F.; Wang, J. Fractional-order global optimal backpropagation machine trained by an improved fractional-order steepest descent method. Front. Inf. Technol. Electron. Eng. 2020, 21, 809–833. [Google Scholar] [CrossRef]
  46. Couceiro, M.; Sivasundaram, S. Novel fractional order particle swarm optimization. Appl. Math. Comput. 2016, 283, 36–54. [Google Scholar] [CrossRef]
  47. Pu, Y.-F.; Siarry, P.; Zhu, W.-Y.; Wang, J.; Zhang, N. Fractional-Order Ant Colony Algorithm: A Fractional Long Term Memory Based Cooperative Learning Approach. Swarm Evol. Comput. 2022, 69, 101014. [Google Scholar] [CrossRef]
  48. TSPLIB. Standard Test Set for TSP Problem of Universität Heidelberg. Available online: http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95 (accessed on 13 July 2022).
  49. Gülcü, Ş.; Mahi, M.; Baykan, Ö.K.; Kodaz, H. A parallel cooperative hybrid method based on ant colony optimization and 3-Opt algorithm for solving traveling salesman problem. Soft Comput. 2018, 22, 1669–1685. [Google Scholar] [CrossRef]
Figure 1. Schematic diagrams: (a) circuit diagram of the resistor; (b) circuit diagram of the current-controlled memristor; and (c) circuit diagram of the volt-controlled memristor.
Figure 2. Chua’s periodic table of all two-terminal circuit elements.
Figure 3. Schematic diagrams of CCFM and VIFM: (a) CCFM; and (b) VIFM.
Figure 4. The analogy of FACA and fracmemristor: (a) transition probability sequence in FACA; and (b) charge transfer in fracmemristor.
Figure 5. (a) Low-pass capacitive fracmemristor; and (b) high-pass capacitive fracmemristor.
Figure 6. (a) First series LCSF; (b) second series LCSF; and (c) third series LCSF.
Figure 7. Average shortest path curve after each iteration in berlin52.
Figure 8. Average shortest path curve after each iteration in eil51.
Figure 9. Average shortest path curve after each iteration in eil76.
Figure 10. Average shortest path curve after each iteration in eil101.
Table 1. Parameters of FMAC.
Parameters    Value
α             1
β             5
τ_0           0.1
ρ             0.2
ξ             1.7
Table 2. The weighting coefficient changes with fractional-order v .
v        w_0    w_1    w_2      w_3      w_4      w_5      w_6      w_7
−1       1      1      1        1        1        1        1        1
−0.75    1      0.75   0.6563   0.6016   0.5640   0.5358   0.5134   0.4951
−0.5     1      0.5    0.375    0.3125   0.2734   0.2461   0.2256   0.2095
−0.25    1      0.25   0.1563   0.1172   0.0952   0.0809   0.0708   0.0632
0        1      0      0        0        0        0        0        0
0.25     1      0.25   0.0938   0.0547   0.0376   0.0282   0.0223   0.0183
0.5      1      0.5    0.125    0.0625   0.0391   0.0273   0.0205   0.0161
0.75     1      0.75   0.0938   0.0391   0.0220   0.0143   0.0101   0.0076
1        1      1      0        0        0        0        0        0
Table 3. Evaluation of FMAC in KroA100 with the change of fractional-order v.
v        Minimum Solution    Maximum Solution    Average Solution    Root Mean Square Error    Relative Error (%)
−0.75    21,282              22,064              21,673.7            450.83                    1.85
−0.5     21,282              21,976              21,550.3            327.16                    1.37
−0.25    21,282              21,518              21,483.7            136.19                    0.39
0.0      21,282              21,926              21,582.4            373.54                    1.48
0.25     21,282              21,926              21,571.6            346.77                    1.23
0.5      21,282              21,873              21,490.4            201.86                    0.51
0.75     21,282              21,508              21,301.5            54.12                     0.13
1        21,282              21,703              21,432.7            97.16                     0.32
Table 4. Performance of experiment: (a) berlin52; (b) eil51; (c) eil76; (d) eil101; (e) rat99, and (f) st70 (NA in table means no data were obtained from the PACO-3opt paper).
Optimization Algorithms    Best Solution    Average Solution    Time Consumption
(a) berlin52
ACO [2]           7542     7657.2    43.24
MMAS [6]          7542     7596.3    56.48
PACO-3opt [49]    7542     7542      NA
FACA [47]         7542     7542      119.57
FMAC (ours)       7542     7542      75.17
(b) eil51
ACO [2]           437      443.5     57.84
MMAS [6]          431      436.1     75.34
PACO-3opt [49]    426      426.3     NA
FACA [47]         426      427.4     156.77
FMAC (ours)       426      426.8     92.38
(c) eil76
ACO [2]           544      563.6     129.04
MMAS [6]          537      552.9     167.58
PACO-3opt [49]    538      539.85    NA
FACA [47]         538      541.0     354.22
FMAC (ours)       538      541.3     186.18
(d) eil101
ACO [2]           648      662.1     274.62
MMAS [6]          634      651.3     358.81
PACO-3opt [49]    629      630.5     NA
FACA [47]         629      630.6     758.10
FMAC (ours)       629      630.39    489.37
(e) rat99
ACO [2]           1212     1216.7    300.71
MMAS [6]          1212     1214.5    383.83
PACO-3opt [49]    1213     1217.1    NA
FACA [47]         1211     1213.0    758.17
FMAC (ours)       1212     1214.1    512.09
(f) st70
ACO [2]           678      686.3     119.76
MMAS [6]          675      682.6     155.10
PACO-3opt [49]    676      677.85    NA
FACA [47]         675      680.1     323.82
FMAC (ours)       675      679.2     219.11

