Article

FROM: A Fish Recognition-Inspired Optimization Method for Multi-Agent Decision-Making Problems with a Fluid Environment

School of Software, North China University of Water Resources and Electric Power, Zhengzhou 450046, China
*
Author to whom correspondence should be addressed.
Biomimetics 2025, 10(4), 215; https://doi.org/10.3390/biomimetics10040215
Submission received: 11 March 2025 / Revised: 1 April 2025 / Accepted: 1 April 2025 / Published: 2 April 2025

Abstract

Underwater multi-agent systems face critical hydrodynamic constraints that significantly degrade the performance of conventional constraint optimization algorithms in dynamic fluid environments. To meet the needs of underwater multi-agent applications, a fish recognition-inspired optimization method (FROM) is proposed in this paper. The proposed method makes two major improvements based on the characteristics of fish recognition: a neighbor topology improvement based on vision recognition and a learning-strategy improvement based on hydrodynamic recognition. The computational complexity of the proposed algorithm is analyzed and found to be acceptable. Statistical analysis of the experimental results shows that the FROM algorithm outperforms other algorithms in terms of the minimum, maximum, standard deviation, mean, and median values calculated from the objective functions. Based on these results, we conclude that the proposed FROM algorithm is a better solution for multi-agent decision-making problems with fluid environment constraints.

1. Introduction

Cooperative multi-agent decision-making problems are important to multi-agent systems. Learning algorithms have an advantage in solving large-scale, high-complexity problems. Several learning algorithms have been proposed for multi-agent system decision-making problems (MASDMPs) [1,2], including task assignment and path planning. Both classical and novel learning algorithms are used to solve such problems. For a decentralized multi-agent system, learning algorithms continuously update the agents' state through iterations in a short period of time. A learning algorithm for solving MASDMPs must ensure convergence and approximate the Pareto optimal solution.
There are many theoretical studies of learning algorithms applied to MASDMPs. It is generally assumed that the multi-agent system is in an ideal state, for example, a multi-agent system in non-interactive cyberspace in which there are no forces between agents. These methods can be used to solve universal situations, but real-world multi-agent applications have more constraints. To meet the needs of real-world MASDMPs, several real-world constrained multi-agent learning methods have been proposed. In disaster situations, it is urgent that trapped survivors be found and rescued within a short period. Decentralized partially observable Markov decision processes (Dec-POMDPs) provide a general framework for multi-agent sequential decision-making under uncertainty [3]. In large and partially observable stochastic environments, domain models are often unavailable or inaccessible due to their high cost or for security reasons; to learn policies in the macro-action case, a policy-based reinforcement learning method was proposed [4]. Many bio-inspired and behavior-based algorithms have been designed to solve MASDMPs. For example, particle swarm optimization (PSO) [5] was inspired by the foraging behavior of bird flocks, and ant colony optimization (ACO) [6] is a metaheuristic that introduced the concept of pheromones from ant colonies.
The main motivation of this work is to propose a novel method for underwater multi-agent systems, since there is a lack of efficient ways to solve fluid environment-constrained MASDMPs. In real-world problems, the movement of agents in a fluid environment generates a dynamic field that greatly influences other agents, and the real-world underwater environment is complex and dynamic. Underwater multi-agent applications (e.g., underwater drone swarms) are a major field in multi-agent studies, and it is common for agents to work in a fluid environment. Current multi-agent learning algorithms for distributed constraint optimization problems perform poorly under such fluid conditions. It is therefore necessary to propose an improved learning algorithm for MASDMPs with the constraints of a fluid environment.
We found that PSO can be used in this case. With the evolutionary computation approach, a robust solution can be worked out through a stochastic, self-adaptive process. Although basic PSO has advantages in terms of resource costs and global optimal solution searching, it still has some flaws: premature convergence and diversity loss. PSO variant studies have improved the learning algorithm in four different ways. Firstly, the convergence speed can be straightforwardly improved by modified learning strategies (e.g., peer-learning PSO [7] and orthogonal learning PSO [8]). A second approach improves parameters such as the inertia weight factor and the acceleration coefficients [9,10,11,12], both of which are important parameters that influence the rate of convergence. Thirdly, other learning algorithms, such as the genetic algorithm (GA) [13], extremal optimization (EO) [14], and ACO, can be integrated with PSO; the resulting hybrid optimization takes advantage of the exploration ability of PSO and the exploitation ability of the other optimizer [15]. Lastly, others have studied particle neighbor topologies to enhance the exploration capability of PSO [16], leading to topology-improved PSO variants such as the fully informed particle swarm (FIPS) [17] and dynamic multi-swarm particle swarm optimization (DMSPS) [18].
Different from the behavior pattern of bird flocks that inspired the PSO algorithm, the behavior pattern of fish schools is more complicated. Recent research revealed that it is the special recognition of fish schools that decides their behaviors, such as swarming, milling, and schooling [19,20]. In this paper, we analyze the underlying mechanisms of fish recognition and propose a bio-inspired optimization method named the fish recognition-inspired optimization method (FROM). The FROM is a learning strategy-improved and neighbor topology-improved PSO. The mechanisms of fish recognition are categorized into two parts. Firstly, the hydrodynamic recognition of fish schools in the water contributes to the learning-strategy improvement of the proposed method. Secondly, while most studies use the popular three-A rules (avoidance, alignment, and attraction) to choose neighbors [21], we organize the neighbor topology of the FROM with the Voronoi diagram, which is considered to represent the recognition range of the fish school. The recognition range of fish schools contributes to the neighbor topology improvement in the proposed method.
The remainder of this paper is organized as follows. In Section 2, we review the basic PSO algorithm, fish recognition, and Voronoi diagram definition. Section 3 describes the FROM algorithm in detail, including every element of the new term. We calculate the computational complexity of the proposed algorithm. Section 4 lists our experiment results, and our conclusion is drawn in Section 5.

2. Background

In this section, we review several pieces of research related to the proposed work: the basic PSO algorithm, fish behavior studies, and the topological method used in fish schools, the Voronoi neighbor. They are introduced in the following subsections.

2.1. Particle Swarm Optimization

Particle swarm optimization searches for the best solution by updating the particles’ position over iterations. The next positions of particles depend on the velocities of particles. Every particle will record its best location in history as the pbest (particle best). Among the best locations from all particles, the most suitable one will be marked as the gbest (global best). For the target space, the basic PSO velocity and position update formula is described as follows:
v_i^{k+1} = \omega v_i^k + c_1 \cdot r_1 \cdot (\mathrm{pbest}_i^k - p_i^k) + c_2 \cdot r_2 \cdot (\mathrm{gbest}_i^k - p_i^k) \quad (1)
p_i^{k+1} = p_i^k + v_i^{k+1} \quad (2)
where k is the iteration of the evolution process, v_i^k is the velocity of particle i at the k-th iteration, and p_i^k is the location of particle i at the k-th iteration. Factors r_1 and r_2 are random factors uniformly distributed in the range [0,1]. Parameter c_1 is the acceleration weight toward the particle's best location, and c_2 is the acceleration weight toward the global best location. The new velocity is worked out by weighing the particle's best location pbest_i^k and the global best location gbest_i^k. The inertial weight factor \omega controls the impact of the previous velocity on the current velocity. Finally, the new position of the focal particle is calculated from the old position and the velocity increment.
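As an illustration, the update in Equations (1) and (2) can be sketched in a few lines of Python (a minimal sketch with illustrative parameter values; the function and variable names are ours, not the paper's):

```python
import numpy as np

def pso_step(p, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One velocity/position update of basic PSO, Equations (1)-(2).

    p, v, pbest are (n, d) arrays of positions, velocities, and
    personal bests; gbest is the (d,) global best location.
    """
    rng = np.random.default_rng(rng)
    r1 = rng.random(p.shape)  # random factors, uniform in [0, 1]
    r2 = rng.random(p.shape)
    v_new = w * v + c1 * r1 * (pbest - p) + c2 * r2 * (gbest - p)
    return p + v_new, v_new

# toy usage: two particles at the origin in 2-D, attracted toward (1, 1)
p = np.zeros((2, 2))
v = np.zeros((2, 2))
pbest = np.ones((2, 2))
gbest = np.ones(2)
p1, v1 = pso_step(p, v, pbest, gbest, rng=0)
```

Because p and v start at zero, the first step moves each particle a random fraction of the way toward (1, 1).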

2.2. Fish Recognition

Fish recognition is defined as the way fish detect their surrounding neighbors and environment. It includes the following two parts.
Vision recognition: the fish decides its next movement based on its observable neighbors.
Hydrodynamic recognition: the movement of a fish generates a spreading hydrodynamic field, which other fish detect and respond to by changing their motion.
The hydrodynamic recognition mechanisms take part in the learning strategies and will be defined in Section 3. Vision recognition is determined by the fish neighbor topology; the neighbor topology is one of the differences between basic PSO and the improved algorithm we propose. To solve MASDMPs with the constraints of a fluid environment, the best way is to learn from bionics. Multi-agent behavior can be found in many species groups, such as mosquito swarms [22], starling flocks [23], and fish schools. In avian collective intelligence systems, the three-A rules (alignment, attraction, and avoidance) fundamentally govern neighbor topology formation. Birds' superior visual perception and aerial maneuverability enable extensive multi-agent interactions under these rules: alignment maintains directional coherence, attraction preserves group cohesion, and avoidance prevents collisions. However, such vision-dependent topological configurations differ fundamentally from fish schooling mechanisms. Fish schools are organized differently [24]: unlike birds, fish sense neighbor motion with their lateral lines, hair-based sensors running along their sides. Aquatic organisms thus rely primarily on lateral line sensing and hydrodynamic interactions, resulting in more localized, density-regulated neighbor networks. The range of fish vision recognition in the water is not as broad as birds' vision, but the hydrodynamic interaction is very sensitive and well suited to fast communication with close neighbors. This biological distinction motivated our topology optimization strategy in the FROM, where the three-A framework is adaptively constrained to better match aquatic sensory limitations. In this way, the fish school exhibits different behaviors such as swarming, milling, and schooling, as shown in Figure 1.
In aquatic collective intelligence, fish exhibit three distinct behavioral modes: (a) swarming, dense aggregation with alignment; (b) milling, circular motion maintaining local density; (c) schooling, polarized directional movement. These self-organized patterns inspired our neighbor topology design: edge individuals dynamically adjust velocities through hydrodynamic sensing (milling/swarming modes), while internal individuals autonomously limit their neighbors to 5-8 via pressure gradient detection (schooling mode). This bio-constrained neighborhood size prevents information overload in dense populations, maintains individual diversity to avoid premature convergence, and enables emergent gradient following. These are the key mechanisms enhancing PSO's exploration-exploitation balance compared with avian-inspired unlimited-neighbor approaches.

2.3. Voronoi Neighbor

It has been proven that the school of fish tends to use the Voronoi neighbor to set neighbor topology [25,26]. The Voronoi diagram represents a geometric partitioning scheme that divides a planar domain into convex polygonal regions determined by minimal distance relationships to a predefined set of generator points. Each generator governs a unique Voronoi cell comprising all spatial locations closer to it than to other generators, with these cells collectively forming a dual structure to the corresponding Delaunay triangulation derived from the same point configuration. This spatial duality constitutes a foundational principle in discrete computational geometry.
The Voronoi diagram is divided into multiple Voronoi regions, and each generator point is assigned one. The Voronoi region is formally defined as follows:
R_k = \{ x \in X \mid d(x, P_k) \le d(x, P_j) \ \text{for all} \ j \ne k \} \quad (3)
where P_k is the k-th agent in an ordered collection of agents in the space X, and d(x, P) = \min \{ d(x, p) \mid p \in P \} denotes the distance between a point and a subset. Any point in X whose distance to the current agent P_k is not greater than its distance to any other agent P_j belongs to the Voronoi region R_k.
We propose using the Voronoi neighbor as a new neighbor topology: if the Voronoi regions of two agents are adjacent, the two agents are Voronoi neighbors. Agents within edge Voronoi regions tend to have fewer neighbors; the remaining agents have more, but still far fewer than under the three-A rule.
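In practice, the Voronoi neighbor topology can be extracted from the dual Delaunay triangulation: two agents are Voronoi neighbors exactly when they share a Delaunay edge. A sketch using SciPy (the paper does not specify an implementation, so this is only one possible realization):

```python
import numpy as np
from scipy.spatial import Delaunay

def voronoi_neighbors(points):
    """Map each agent index to the set of its Voronoi neighbors.

    By Voronoi/Delaunay duality, two generators whose cells are
    adjacent share an edge in the Delaunay triangulation.
    """
    tri = Delaunay(points)
    neighbors = {i: set() for i in range(len(points))}
    for simplex in tri.simplices:  # each Delaunay triangle
        for a in simplex:
            for b in simplex:
                if a != b:
                    neighbors[int(a)].add(int(b))
    return neighbors

# toy usage: four corners of a unit square plus its center
pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]])
nbrs = voronoi_neighbors(pts)
```

Here the center agent is a Voronoi neighbor of all four corners, while each corner (an edge agent) has only three neighbors, matching the observation above that edge agents tend to have fewer neighbors.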

2.4. Proposed Algorithm

We first present the problem statement and then present our proposed approach to solve the problem.
Problem Statement: Assume that a set of n agents in an obstacle-free two-dimensional underwater environment aims to conduct single-objective optimization to achieve the goals of various underwater applications. The underwater hydrodynamics must be considered; hence, any approach applied to the problem is iteratively affected by hydrodynamics. When the agents move in the underwater environment, each agent generates a polarized potential field, which decays with the distance between agents and is stronger behind the agent than in front of it. The goal is to propose an algorithm with better performance and faster convergence under this hydrodynamic constraint.

3. Fish Recognition Optimization Method

Both the vision recognition topology structure and the hydrodynamic recognition learning strategy are implemented in the proposed method. Within the basic velocity-displacement framework, the hydrodynamic interaction between topological neighbors is introduced into the particle swarm algorithm. With the new terms, the algorithm becomes more adaptive while keeping the characteristics of schooling fish. In the FROM, the update of agent (fish) positions is described by the following equations:
v_i^{k+1} = \omega v_i^k + c_1 \cdot r_1 \cdot (\mathrm{pbest}_i^k - p_i^k) + c_2 \cdot r_2 \cdot (\mathrm{vbest}_i^k - p_i^k) + c_3 \cdot r_3 \cdot F_i^k \quad (4)
p_i^{k+1} = p_i^k + v_i^{k+1} \quad (5)
Compared with the original PSO algorithm, the global best location gbest_i^k is replaced by the Voronoi neighbor best location vbest_i^k. The fish recognition improvement term F_i^k is weighted by a random factor r_3 and the fish recognition learning factor c_3. The FROM defines this new item, the fish recognition velocity vector, as follows:
F_i^k = e_i \cdot v_i^h \quad (6)
where the fish recognition velocity vector is the scalar product of the orientation vector e_i and the average Voronoi neighbor velocity v_i^h, which are defined as follows:
e_i = (Alg + Rcg) \cdot e \quad (7)
v_i^h = \frac{1}{N} \sum_{j \in R_i} v_j \quad (8)
where R_i is the set of Voronoi neighbors of the focal particle, and N is the number of Voronoi neighbors. The focal particle's orientation is affected by the alignment term Alg and the hydrodynamic recognition term Rcg.
As a bio-inspired algorithm, the FROM also has the alignment term. The alignment term changes the particles’ location by referring to their Voronoi neighbors. It enhances the consistency of the particle swarm. The alignment term can be calculated by Formula (9).
Alg = \sum_{j \in R_i} d_{ij} \sin\alpha + I_1 \sin\gamma \quad (9)
where d_{ij} is the distance between the focal particle and Voronoi neighbor j, and I_1 is the alignment intensity parameter. The angles \alpha, \beta, and \gamma are shown in Figure 2.
The hydrodynamic recognition term Rcg captures the influence of water flow on the hydrodynamic recognition of fish. It is described by Formula (10).
Rcg = \sum_{j \in R_i} (1 - \cos\beta) \cdot (e_i \cdot u_{ji}) \, e_i \quad (10)
The hydrodynamic field generated by fish in the water shows obvious anisotropic characteristics: a focal fish behind the fish generating the field is more susceptible to it. As shown in Figure 3, each fish generates a dipolar hydrodynamic field. Due to the anisotropy of the hydrodynamic interaction, a weight (1 - \cos\beta) is introduced into the hydrodynamic recognition term. Focal fish A receives a greater weight (\beta_1 > 90°, 1 - \cos\beta > 1), while focal fish B receives a smaller weight (\beta_2 < 90°, 1 - \cos\beta < 1). For the FROM algorithm, this anisotropic weight makes the influence behind a particle more pronounced. The induced flow velocity u_{ji} is given by Formula (11).
u_{ji} = \frac{I_2}{\pi} \cdot \frac{e_{ji}^{\parallel} \sin\beta + e_{ji}^{\perp} \cos\beta}{d_{ij}^2} \quad (11)
where I_2 is the dipole intensity parameter, and e_{ji}^{\parallel} and e_{ji}^{\perp} are the polar unit vectors in the frame of the neighbor particle.
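The dipolar interaction of Formula (11) can be sketched numerically as follows. This is a sketch under our own assumptions: we take the polar unit vectors to be the orthonormal radial and tangential directions of the neighbor's frame, which the text does not spell out.

```python
import numpy as np

def dipole_field(I2, d_ij, beta):
    """Dipolar flow of Formula (11) felt at distance d_ij and angle beta.

    e_par/e_perp: radial and tangential unit vectors of the neighbor's
    polar frame (assumed orthonormal here); the field decays as 1/d^2.
    """
    e_par = np.array([np.cos(beta), np.sin(beta)])    # radial direction
    e_perp = np.array([-np.sin(beta), np.cos(beta)])  # tangential direction
    return (I2 / np.pi) * (e_par * np.sin(beta) + e_perp * np.cos(beta)) / d_ij**2

# the same bearing sampled at 1 and at 4 body lengths: 1/d^2 decay
u_near = dipole_field(I2=0.01, d_ij=1.0, beta=np.pi / 2)
u_far = dipole_field(I2=0.01, d_ij=4.0, beta=np.pi / 2)
```

Quadrupling the distance weakens the felt field by a factor of sixteen, which is why hydrodynamic recognition is effectively a close-neighbor channel.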
Inertial weight and learning factor adjustment
The inertial weight factor \omega varies with the compactness of the swarm. To prevent the particles from getting too close (in our case, a fish collision), a greater inertial weight factor is needed. The inertial weight factor \omega is self-adaptive and can be calculated by Formula (13).
\omega = \omega_{max} - \frac{k}{iter_{max}} \cdot (\omega_{max} - \omega_{min}) + r \cdot e^{-E} \quad (13)
where [\omega_{min}, \omega_{max}] is the dynamic range of the linearly decreasing weight: \omega_{max} is the starting value of the inertia weight and \omega_{min} is the ending value. The random factor r is selected within [0,1], k is the current iteration number, iter_{max} is the maximum number of iterations, and E is the kinetic energy of the particle swarm. By introducing the kinetic energy of the particle swarm, the inertia weight is reduced with a certain buffer, so that it adjusts adaptively to the swarm's kinetic energy. This self-adaptive updating of the inertia weight is very beneficial to precision and also enables the algorithm to converge better.
The learning factor of the hydrodynamic recognition term c 3 can maintain the diversity of the particle swarm. The updated formula of the hydrodynamic recognition term learning factor is described as follows:
c_3 = \begin{cases} 1 - \omega, & k \le \frac{2}{3} \, iter_{max} \\ 0, & k > \frac{2}{3} \, iter_{max} \end{cases} \quad (14)
In the iterative process, as the inertia weight \omega gradually decreases, the particles gradually lose diversity. In the early phase, the learning factor of the hydrodynamic recognition term should therefore be enhanced, which helps the algorithm avoid falling into local optima. In the later phase, we want to maintain accuracy: it is assumed that after two-thirds of the total number of iterations, the algorithm has a better chance of reaching the global optimum, so c_3 is set to 0 because the particle swarm no longer needs to maintain diversity.
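Formulas (13) and (14) together give a simple adaptive schedule. The sketch below assumes E is the swarm's total kinetic energy and uses illustrative bounds \omega_{min} = 0.4, \omega_{max} = 0.9, which are not stated in the text:

```python
import numpy as np

def inertia_weight(k, iter_max, E, w_min=0.4, w_max=0.9, rng=None):
    """Formula (13): linear decrease plus a kinetic-energy buffer r*exp(-E)."""
    rng = np.random.default_rng(rng)
    return w_max - (k / iter_max) * (w_max - w_min) + rng.random() * np.exp(-E)

def c3_schedule(w, k, iter_max):
    """Formula (14): 1 - w for the first two-thirds of the run, then 0."""
    return 1.0 - w if k <= 2 * iter_max / 3 else 0.0

# early iterations keep a large inertia weight and an active c3 term;
# late iterations shrink omega and switch the recognition term off
w_early = inertia_weight(k=10, iter_max=100, E=50.0, rng=0)
w_late = inertia_weight(k=90, iter_max=100, E=50.0, rng=0)
```

With a large kinetic energy E the buffer term is negligible and the weight follows the usual linear decrease; a near-zero E (a nearly stalled swarm) pushes the weight back up, which is the anti-collision behavior described above.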

3.1. Computational Complexity Analysis

It is important and necessary to analyze the computational complexity of the proposed algorithm. The flowchart of the proposed algorithm is shown in Figure 4 to analyze the computational complexity of the FROM. The detailed procedure of the FROM can be described in the following steps.
Step 1: Initialize the parameters and particle swarm position value.
Step 2: Calculate the Voronoi neighbor topology of the particle swarm.
Step 3: Find the particle best position pbest_i^k and the Voronoi neighbor best position vbest_i^k.
Step 4: Calculate the alignment term Alg and the hydrodynamic recognition term Rcg with the Voronoi neighbors.
Step 5: Calculate the orientation vector e_i and the Voronoi topology average velocity v_i^h.
Step 6: Calculate the fish recognition velocity F_i^k.
Step 7: Update the inertial weight factor ω .
Step 8: If the current iteration does not surpass 2/3 max iteration, update the hydrodynamic recognition learning factor.
Step 9: Update the velocity and location of the particle swarm. Determine if the termination condition is satisfied. If not, return to Step 2. Otherwise, output the result.
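Steps 1-9 can be condensed into a runnable skeleton. To stay short and self-contained, the sketch below substitutes the four nearest particles for the Voronoi neighborhood and a plain neighbor-velocity average for the recognition term F; the full algorithm uses Equations (4)-(14). All parameter values here are illustrative:

```python
import numpy as np

def from_sketch(f, dim=2, n=30, iter_max=100, seed=0):
    """Simplified FROM loop following Steps 1-9 (k-NN stand-in for Voronoi,
    mean neighbor velocity as a stand-in for the recognition term F)."""
    rng = np.random.default_rng(seed)
    c1 = c2 = 1.5
    w_min, w_max = 0.4, 0.9
    p = rng.uniform(-5, 5, (n, dim))          # Step 1: initialize positions
    v = np.zeros((n, dim))
    pbest = p.copy()
    pbest_f = np.array([f(x) for x in p])
    for k in range(1, iter_max + 1):
        # Step 2: neighbor topology (4 nearest particles as stand-in)
        d = np.linalg.norm(p[:, None] - p[None, :], axis=-1)
        nbrs = np.argsort(d, axis=1)[:, 1:5]
        # Step 3: pbest and neighborhood best vbest
        vbest = pbest[nbrs[np.arange(n), np.argmin(pbest_f[nbrs], axis=1)]]
        # Steps 4-6: recognition velocity (here: mean neighbor velocity)
        F = v[nbrs].mean(axis=1)
        # Step 7: inertia weight, linearly decreasing
        w = w_max - (k / iter_max) * (w_max - w_min)
        # Step 8: recognition factor active for the first 2/3 of the run
        c3 = (1.0 - w) if k <= 2 * iter_max / 3 else 0.0
        # Step 9: velocity/position update and bookkeeping
        r1, r2, r3 = rng.random((3, n, dim))
        v = w * v + c1 * r1 * (pbest - p) + c2 * r2 * (vbest - p) + c3 * r3 * F
        p = p + v
        fp = np.array([f(x) for x in p])
        better = fp < pbest_f
        pbest[better], pbest_f[better] = p[better], fp[better]
    return pbest[np.argmin(pbest_f)], pbest_f.min()

best_x, best_f = from_sketch(lambda x: float(np.sum(x**2)))
```

On the 2-D sphere function this skeleton drives the best fitness close to zero within 100 iterations, which is the qualitative behavior the flowchart describes.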
We assume that n is the number of particles and k is the number of iterations. The number of operations that are performed in the above algorithm is as follows:
Step 1 contributes one operation for n times for initializing the particle swarm position value and constant operations for initializing the parameters;
Step 2 contributes one operation for n times to calculate the Voronoi topology of the current iteration;
Step 3 contributes O(n log n) operations for finding the particles' historical best locations and O(log n) operations for finding the best location among them;
Step 4 contributes O(n) operations for calculating the alignment term Alg and the hydrodynamic recognition term Rcg;
Step 5 contributes one operation for calculating the orientation vector and O(n) operations for calculating the Voronoi topology average velocity;
Step 6 contributes one operation for calculating the hydrodynamic recognition velocity;
Step 7 updates the inertial weight factor with one operation;
Step 8 contributes constant operations for judgment and updating;
Step 9 contributes O ( n ) operations for updating the velocity and location of the particle swarm.
In summary, the loop above is executed k times before the termination check succeeds, so the total cost is O(k n log n). The computational complexity of the proposed algorithm and that of the original PSO algorithm are of the same order of magnitude.

3.2. Experiments

In this section, the performance of the FROM is evaluated by applying it to several well-known benchmark functions selected from CEC2015 [27]. Firstly, the benchmark functions are presented and the parameters we used are analyzed. Secondly, the performance of the proposed algorithm is compared with classical learning algorithms and several state-of-the-art PSO variants. Finally, the convergence curves of the compared algorithms are shown. In this way, the applicability of the FROM can be validated.
All experiments were carried out in a 64-bit Windows 7 environment with an Intel® Core™ i7-3770 processor (Intel Corporation, Santa Clara, CA, USA) at 3.4 GHz and 8 GB of RAM. The software used was MATLAB 2017b.

3.3. Benchmark Functions and Parameter Setting

We categorize the benchmark functions into two types: functions F_1 to F_5 are unimodal, and functions F_6 to F_14 are multimodal. The benchmark functions are listed in Table 1.
The proposed algorithm is compared with some classical algorithms: differential evolution (DE) [28], the artificial bee colony (ABC) [29], the genetic algorithm (GA) [13], and firefly swarm optimization (FSO) [30]. All of the compared algorithms are set with the same maximum iteration count, iter_{max} = 100. The population size is 50 times the number of decision variables; the problem dimension is set to 5, so the population size is 5 × 50 = 250. Each algorithm is run the same number of times (50) for a fair and reasonable comparison. All of the required parameter values for the classical algorithms are listed in Table 2. The classical learning algorithms, the PSO variants, and the FROM are run under the same hydrodynamic constraint described in Section 3.
The alignment intensity parameter and the dipole intensity parameter, which are parameters original to the FROM, were verified by experiment. A set of repetition experiment procedures (REPs) based on different population sizes (N = 10^2, N = 10^3, N = 10^4) was carried out. The alignment intensity parameter I_1 varies in the range [0, 10], and the dipole intensity parameter I_2 varies in the range [10^{-3}, 10^{-1}]. The total number of repetitions is 100 × 100 = 10,000. The REPs have different parameter settings and are applied to F_1 to F_14. The results of the REPs are shown in Figure 5 and Figure 6.
For each REP and benchmark function combination, a fitness rank and a convergence speed (in iterations) are obtained. For each REP, we average the results over the 14 functions to obtain the final results. The final fitness rank results are shown in Figure 5, where some of the ranks are tied and the best fitness is marked. The REPs with population N = 10^2 obtain the best fitness at I_1 = 9.0, I_2 = 10^{-2.1}; with N = 10^3, at I_1 = 9.0, I_2 = 10^{-1.9}; and with N = 10^4, at I_1 = 9.0, I_2 = 10^{-2.0}. In general, the FROM performs best with the parameters I_1 = 9.0, I_2 = 10^{-2.0}. The final convergence speed results are shown in Figure 6. The convergence speed decreases significantly as the alignment intensity I_1 increases, while there is no significant difference when the dipole intensity I_2 changes. With the parameters set to I_1 = 9.0, I_2 = 10^{-2.0}, the numbers of convergence iterations are 76, 90, and 91 for N = 10^2, N = 10^3, and N = 10^4, respectively. This is a moderate convergence speed for evolutionary algorithms, which can converge quickly while avoiding prematurity.

4. Experimental Results

The results are obtained by running the FROM algorithm 50 times independently for each function and include the Min, Max, Std, Mean, and Mdn values. Since the theoretically optimal objective function value is 0 for each function, the closer an algorithm's output is to 0, the more satisfactory the solution. The statistical results obtained are analyzed using the criteria listed below:
Min (best fitness solution): the best fitness solution among the solutions obtained during the 50 runs.
Max (worst fitness solution): the worst fitness solution among the solutions obtained during the 50 runs.
Std (standard deviation): a measure of how spread out solutions are.
Mean (mean fitness solution): a measure of the precision (quality) of the result that the algorithm can obtain within the given iterations in all 50 runs.
Mdn (median fitness solution): the median of the fitness solution among the solutions obtained during the 50 runs.
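The five criteria reduce to standard order statistics over the 50 per-run fitness values; for instance (the run values below are synthetic placeholders, not data from the paper):

```python
import numpy as np

# synthetic stand-in for 50 independent run fitnesses of one benchmark
rng = np.random.default_rng(42)
runs = rng.lognormal(mean=-3.0, sigma=1.0, size=50)

stats = {
    "Min": float(runs.min()),          # best fitness solution
    "Max": float(runs.max()),          # worst fitness solution
    "Std": float(runs.std(ddof=1)),    # spread of the 50 solutions
    "Mean": float(runs.mean()),        # average solution quality
    "Mdn": float(np.median(runs)),     # median fitness solution
}
```

The median is reported alongside the mean because a few bad runs can inflate the mean of a heavy-tailed fitness distribution while leaving the median nearly unchanged.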
The experiments are carried out with the same scaled problems. The statistical data of the FROM and the classical algorithms are shown in Table 3. The best performance values are bolded, and results that are significantly worse than the best result are marked with asterisks. For the unimodal functions F_1 to F_5, the FROM achieves the best overall performance among all the algorithms. For the multimodal functions, it is the only algorithm that achieves the best performance on 8 of the 9 functions, and there is a statistically significant difference in performance between the FROM and the other algorithms with a medium effect size. For F_6 to F_10, F_12, F_13, and F_14, the FROM achieves better solution accuracy than the other algorithms. For F_11, all the algorithms except GA achieve the best performance.
The proposed algorithm is also compared with several state-of-the-art PSO variants. The variants are enhanced leader PSO (ELPSO) [31], chaotic inertia weight PSO (CAIWPSO) [32], chaotic random inertia weight PSO (CRIWPSO) [32], and time-varying acceleration coefficient PSO (TVAPSO) [33]. The maximum iteration, problem dimension, and particle population remain the same (100, 5, 250).
The statistical data of the FROM and the PSO variants are shown in Table 4. The FROM achieves the best performance among all the algorithms for the unimodal functions. For the multimodal functions, the FROM achieves the best performance for F_6, F_7, F_10, and F_13. For F_8 and F_9, TVAPSO performs better than the FROM. For F_11, all the algorithms achieve the best performance. For F_12 and F_14, CRIWPSO outperforms the FROM. Overall, the FROM achieves the best performance on 10 of the 14 benchmark functions, and where it does not outperform the other algorithms, its results are never significantly worse than the best.

Convergence Curve of Algorithms

The convergence rates of the FROM and the other well-known algorithms, including the PSO variants, are evaluated. Each algorithm is run 50 times, and the average number of function evaluations needed to achieve a certain solution quality is taken into account. The convergence analysis of the different learning algorithms on several unimodal and multimodal functions is presented in Figure 7a-f, where the yellow curve is the convergence curve of the FROM. From the graphs, it is clear that the FROM demonstrates better convergence speed, a better ability to escape premature convergence, and better global search ability in most cases (except for F_11 in Figure 7k, where GA achieved the best convergence). While other learning algorithms may become trapped in local optima, as shown by the flat parts of their curves, the FROM can escape them. This performance is evidence that the proposed algorithm is efficient and complements the global search ability of PSO, obtaining quality results by overcoming premature convergence. The results show that the FROM alleviates the premature convergence problem of basic PSO and performs better than the other algorithms.

5. Conclusions

In this paper, a novel evolutionary algorithm named the fish recognition optimization method (FROM) has been proposed to solve multi-agent decision-making problems in a fluid environment. We analyzed the recognition pattern in fish schools and applied its recognition characteristics to the basic PSO. Two major characteristics are implemented.
The Voronoi topological structure, which shows a new possibility for organizing particle swarms.
Hydrodynamic recognition in the fluid environment, where the hydrodynamic recognition-based interaction is used to improve the velocity-update term.
The computational complexity of the FROM was calculated and found to be within an acceptable range. The performance of the FROM was compared with both the classical algorithms and the PSO variants mentioned in Section 4. Using the same experimental environment and fluid constraints, we obtained statistical data showing that, in most conditions, the proposed algorithm outperformed the other learning algorithms in solution quality, convergence precision, global search ability, and robustness. The FROM is capable of finding the global optimum a respectable proportion of the time, depending on the topology, and of reaching it respectably fast. It does not suffer from early convergence, and at the same time it reaches a better minimum than the others under the same underwater environment constraint. As expected, changing the neighborhood size and introducing hydrodynamic recognition improve the performance of the swarm in a fluid environment. We expect the method to perform efficiently in real-world underwater multi-agent problem domains.
The FROM offers a novel approach to addressing multi-agent decision-making problems in fluid and dynamic environments by emulating the adaptive behaviors observed in fish schooling. Drawing on the principles of fluid mechanics and decentralized coordination, the FROM dynamically adjusts agent strategies through real-time environmental feedback and local interaction rules. Specifically, it incorporates swarm intelligence mechanisms—such as velocity synchronization, collision avoidance, and gradient-driven navigation—to optimize global objectives under uncertainty. For fluid environments, the FROM’s parameterization of environmental viscosity and turbulence enables agents to autonomously balance exploration–exploitation trade-offs, while its pheromone-inspired information diffusion model ensures scalable coordination. This bio-inspired framework demonstrates enhanced robustness in scenarios requiring rapid adaptation to flow variations and partial observability, making it particularly suitable for applications like underwater swarm robotics or crowd evacuation simulations.

Author Contributions

Conceptualization, methodology, validation, writing—original draft preparation, visualization, supervision, Y.W.; software, writing—review and editing, L.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are openly available in CEC2015 at “Problem definitions and evaluation criteria for cec 2015 special session on bound constrained single-objective computationally expensive numerical optimization”. Technical Report (2014). [https://www.al-roomi.org/multimedia/CEC_Database/CEC2015/RealParameterOptimization/ExpensiveOptimization/CEC2015_ExpensiveOptimization_TechnicalReport.pdf] (accessed on 31 March 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Rădulescu, R.; Mannion, P.; Roijers, D.M.; Nowé, A. Multi-objective multi-agent decision making: A utility-based analysis and survey. Auton. Agents Multi-Agent Syst. 2019, 34, 10. [Google Scholar]
  2. Pal, M.; Mittal, M.L.; Soni, G.; Chouhan, S.S.; Kumar, M. A multi-agent system for FJSP with setup and transportation times. Expert Syst. Appl. 2022, 216, 119474. [Google Scholar]
  3. Koops, W.; Jansen, N.; Junges, S.; Simão, T.D. Recursive small-step multi-agent A* for dec-POMDPs. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, Macao, China, 19–25 August 2023. [Google Scholar]
  4. Liu, M.; Amato, C.; Anesta, E.; Griffith, J.; How, J. Learning for decentralized control of multiagent systems in large, partially-observable stochastic environments. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016. [Google Scholar]
  5. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  6. Dorigo, M.; Di Caro, G. Ant colony optimization: A new meta-heuristic. In Proceedings of the 1999 Congress on Evolutionary Computation-CEC99, Washington, DC, USA, 6–9 July 1999; IEEE: Piscataway, NJ, USA, 1999. Cat. No. 99TH8406. Volume 2, pp. 1470–1477. [Google Scholar]
  7. Lim, W.H.; Isa, N.A.M. An adaptive two-layer particle swarm optimization with elitist learning strategy. Inf. Sci. 2014, 273, 49–72. [Google Scholar] [CrossRef]
  8. Zhan, Z.H.; Zhang, J.; Liu, O. Orthogonal learning particle swarm optimization. IEEE Trans. Evol. Comput. 2011, 15, 832–847. [Google Scholar]
  9. Houssein, E.H.; Gad, A.G.; Hussain, K.; Suganthan, P.N. Major advances in particle swarm optimization: Theory, analysis, and application. Swarm Evol. Comput. 2021, 63, 100868. [Google Scholar]
  10. Dagal, I.; Akın, B.; Akboy, E. Improved salp swarm algorithm based on particle swarm optimization for maximum power point tracking of optimal photovoltaic systems. Int. J. Energy Res. 2022, 46, 8742–8759. [Google Scholar]
  11. Wang, Z.; Li, G.; Ren, J. Dynamic path planning for unmanned surface vehicle in complex offshore areas based on hybrid algorithm. Comput. Commun. 2021, 166, 49–56. [Google Scholar]
  12. Zeng, N.; Wang, Z.; Liu, W.; Zhang, H.; Hone, K.; Liu, X. A dynamic neighborhood-based switching particle swarm optimization algorithm. IEEE Trans. Cybern. 2020, 52, 9290–9301. [Google Scholar] [CrossRef]
  13. Whitley, D. A genetic algorithm tutorial. Stat. Comput. 1994, 4, 65–85. [Google Scholar]
  14. Boettcher, S.; Percus, A.G. Extremal optimization: Methods derived from co-evolution. In Proceedings of the 1st Annual Conference on Genetic and Evolutionary Computation, Orlando, FL, USA, 13–17 July 1999; Morgan Kaufmann Publishers Inc.: Burlington, MA, USA, 1999; Volume 1, pp. 825–832. [Google Scholar]
  15. Eslami, M.; Shareef, H.; Khajehzadeh, M.; Mohamed, A. A survey of the state of the art in particle swarm optimization. Res. J. Appl. Sci. Eng. Technol. 2012, 4, 1181–1197. [Google Scholar]
  16. Parrott, D.; Li, X. Locating and tracking multiple dynamic optima by a particle swarm model using speciation. IEEE Trans. Evol. Comput. 2006, 10, 440–458. [Google Scholar] [CrossRef]
  17. Mendes, R.; Kennedy, J.; Neves, J. The fully informed particle swarm: Simpler, maybe better. IEEE Trans. Evol. Comput. 2004, 8, 204–210. [Google Scholar] [CrossRef]
  18. Liang, J.J.; Qin, A.K.; Suganthan, P.N.; Baskar, S. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 2006, 10, 281–295. [Google Scholar] [CrossRef]
  19. Brown, C.; Laland, K.; Krause, J. Fish Cognition and Behavior; John Wiley & Sons: Hoboken, NJ, USA, 2011; Volume 21. [Google Scholar]
  20. Makowicz, A.M.; Tiedemann, R.; Steele, R.N.; Schlupp, I. Kin recognition in a clonal fish, Poecilia formosa. PLoS ONE 2016, 11, e0158442. [Google Scholar] [CrossRef]
  21. Reynolds, C.W. Flocks, herds and schools: A distributed behavioral model. In ACM SIGGRAPH Computer Graphics; ACM: New York, NY, USA, 1987; Volume 21, pp. 25–34. [Google Scholar]
  22. Shimoyama, N.; Sugawara, K.; Mizuguchi, T.; Hayakawa, Y.; Sano, M. Collective motion in a system of motile elements. Phys. Rev. Lett. 1996, 76, 3870–3873. [Google Scholar] [CrossRef]
  23. Cavagna, A.; Cimarelli, A.; Giardina, I.; Parisi, G.; Santagati, R.; Stefanini, F.; Viale, M. Scale-free correlations in starling flocks. Proc. Natl. Acad. Sci. USA 2010, 107, 11865–11870. [Google Scholar] [CrossRef]
  24. Ballerini, M.; Cabibbo, N.; Candelier, R.; Cavagna, A.; Cisbani, E.; Giardina, I.; Lecomte, V.; Orlandi, A.; Parisi, G.; Procaccini, A.; et al. Interaction ruling animal collective behavior depends on topological rather than metric distance: Evidence from a field study. Proc. Natl. Acad. Sci. USA 2008, 105, 1232–1237. [Google Scholar] [CrossRef]
  25. Calovi, D.S.; Lopez, U.; Ngo, S.; Sire, C.; Chaté, H.; Theraulaz, G. Swarming, schooling, milling: Phase diagram of a data-driven fish school model. New J. Phys. 2014, 16, 015026. [Google Scholar] [CrossRef]
  26. Solar, R.; Suppi, R.; Luque, E. High performance distributed cluster-based individual-oriented fish school simulation. Procedia Comput. Sci. 2011, 4, 76–85. [Google Scholar] [CrossRef]
  27. Chen, Q.; Liu, B.; Zhang, Q.; Liang, J.; Suganthan, P.; Qu, B. Problem Definitions and Evaluation Criteria for CEC 2015 Special Session on Bound Constrained Single-Objective Computationally Expensive Numerical Optimization; Technical Report; Computational Intelligence Laboratory, Zhengzhou University: Zhengzhou, China; Nanyang Technological University: Singapore, 2014. [Google Scholar]
  28. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  29. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar]
  30. Yang, X.S. Firefly algorithms for multimodal optimization. In International Symposium on Stochastic Algorithms; Springer: Berlin/Heidelberg, Germany, 2009; pp. 169–178. [Google Scholar]
  31. Jordehi, A.R. Enhanced leader PSO (ELPSO): A new PSO variant for solving global optimisation problems. Appl. Soft Comput. 2015, 26, 401–417. [Google Scholar]
  32. Kiani, A.T.; Nadeem, M.F.; Ahmed, A.; Sajjad, I.A.; Raza, A.; Khan, I.A. Chaotic Inertia Weight Particle Swarm Optimization (CIWPSO): An Efficient Technique for Solar Cell Parameter Estimation. In Proceedings of the 2020 3rd International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), Sukkur, Pakistan, 29–30 January 2020; IEEE: Piscataway, NJ, USA, 2020. [Google Scholar]
  33. Ghasemi, M.; Aghaei, J.; Hadipour, M. New self-organising hierarchical PSO with jumping time-varying acceleration coefficients. Electron. Lett. 2017, 53, 1360–1362. [Google Scholar] [CrossRef]
Figure 1. The different behaviors in fish schools.
Figure 2. Hydrodynamic analysis based on fish hydrodynamic recognition between focal particle (green) and Voronoi neighbor particle (blue).
Figure 3. Heatmap of the hydrodynamic recognition weight analysis based on the fish in different locations. The velocity u j i is induced by Voronoi neighbor particle j at the focal particle position. It can be calculated by Formula (11).
Figure 4. The flowchart of FROM.
Figure 5. Heatmap results of fitness rank with different populations. The best fitness positions are marked. Note that a log transformation was applied to the dipole intensity I 2.
Figure 6. Heatmap results of convergence speed with different populations.
Figure 7. Convergence curves of all algorithms. The FROM converges faster in early iterations and performs more global search activity. It does not get stuck in local optima, and it retains enough momentum to perform a local search as it moves towards its goal.
Table 1. Benchmark functions.

Function | Name | Search Space
F 1 | Sphere | [−5.12, 5.12]
F 2 | Dixon&Price | [−10, 10]
F 3 | Zakharov | [−5, 10]
F 4 | Bent Cigar | [−100, 100]
F 5 | Discus | [−100, 100]
F 6 | Rastrigin | [−5.12, 5.12]
F 7 | Levy | [−15, 30]
F 8 | Griewank | [−600, 600]
F 9 | Rosenbrock | [−5, 10]
F 10 | Ackley | [−15, 30]
F 11 | Katsuura | [−100, 100]
F 12 | HGBat | [−100, 100]
F 13 | Weierstrass | [−100, 100]
F 14 | HappyCat | [−100, 100]
Table 2. Parameter settings.

Algorithm | Parameter | Value
FROM | Inertia weight range | (0.4, 0.9)
FROM | Learning factor | 2
FROM | Alignment intensity | 9
FROM | Dipole intensity | 0.01
DE | Crossover probability | 0.9
DE | Scaling factor | 0.5
ABC | Sole control | 50
GA | Crossover rate | 0.8
FSO | Attraction coefficient | 0.6
FSO | Randomness factor | 0.5
FSO | Attraction exponent | 1
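Table 2 lists an inertia weight range of (0.4, 0.9) for the FROM. A common PSO convention for using such a range, shown here as a hedged sketch rather than the FROM's exact schedule, is to decrease the weight linearly over the iterations so early search is global and late search is local.

```python
def inertia_weight(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight over the (0.4, 0.9) range listed
    in Table 2 (a common PSO convention, not necessarily FROM's schedule)."""
    return w_max - (w_max - w_min) * t / t_max

# Weight decays from 0.9 at the first iteration to 0.4 at the last
for t in (0, 50, 100):
    print(t, round(inertia_weight(t, 100), 3))
```

A large early weight preserves momentum for exploration; the small final weight damps the velocity so the swarm can refine the best-found region.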
Table 3. Statistical data of FROM and classical algorithms.

Function | Stat | FROM | DE | ABC | GA | FSO
F 1 | Min | 1.30E−20 | 6.31E−09 * | 1.34E−09 * | 0.00398 * | 7.16E−06 *
F 1 | Max | 1.33E−13 | 1.85E−07 * | 3.69E−08 * | 0.06715 * | 0.00007 *
F 1 | Std | 1.60E−14 | 3.22E−08 * | 1.05E−08 * | 0.01580 * | 9.46E−06 *
F 1 | Mean | 4.59E−15 | 6.21E−08 * | 1.15E−08 * | 0.01253 * | 0.00002 *
F 1 | Mdn | 3.03E−16 | 4.48E−08 * | 9.65E−09 * | 0.00566 * | 0.00002 *
F 2 | Min | 5.63E−14 | 4.93E−04 * | 0.00378 * | 0.92415 * | 0.00029 *
F 2 | Max | 0.00969 | 0.01637 | 0.18252 * | 1.79E+01 * | 0.13001 *
F 2 | Std | 0.00028 | 0.00516 | 0.02304 * | 1.94616 * | 0.00035
F 2 | Mean | 0.00038 | 0.00707 | 0.02247 * | 1.44535 * | 0.00106
F 2 | Mdn | 4.35E−12 | 0.00709 * | 0.02457 * | 0.84357 * | 0.00093 *
F 3 | Min | 4.32E−15 | 9.83E−06 * | 0.00057 * | 0.00334 * | 0.00002 *
F 3 | Max | 2.12E−11 | 0.00014 * | 0.00948 * | 2.43237 * | 0.00021 *
F 3 | Std | 2.44E−12 | 0.00003 * | 0.00210 * | 0.35081 * | 0.00003 *
F 3 | Mean | 1.08E−12 | 0.00006 * | 0.00303 * | 0.07064 * | 0.00008 *
F 3 | Mdn | 4.47E−13 | 0.00005 * | 0.00206 * | 0.02581 * | 0.00009 *
F 4 | Min | 1.75E−12 | 1.32756 * | 2.43570 * | 1.10E+05 * | 3.20E+03 *
F 4 | Max | 7.37E−05 | 2.91E+01 * | 3.19E+01 * | 1.57E+04 * | 5.07E+04 *
F 4 | Std | 7.75E−06 | 6.01800 * | 5.84972 * | 1.68E+04 * | 7.95E+03 *
F 4 | Mean | 2.43E−06 | 7.89165 * | 1.08E+01 * | 1.66E+04 * | 2.18E+04 *
F 4 | Mdn | 1.18E−07 | 5.07370 * | 8.74219 * | 1.16E+04 * | 1.71E+04 *
F 5 | Min | 3.38E−16 | 0.00010 * | 0.00032 * | 9.10E+01 * | 2.47E+01 *
F 5 | Max | 1.58E−07 | 0.00802 * | 0.00704 * | 4.25E+04 * | 1.56E+03 *
F 5 | Std | 2.89E−08 | 0.00106 * | 0.00119 * | 6.77E+03 * | 3.78E+02 *
F 5 | Mean | 6.85E−09 | 0.00083 * | 0.00300 * | 1.81E+03 * | 5.32E+02 *
F 5 | Mdn | 3.89E−10 | 0.00050 * | 0.00259 * | 9.91E+01 * | 4.25E+02 *
F 6 | Min | 0 | 2.87513 * | 0.71500 * | 0.04474 * | 0.00447 *
F 6 | Max | 0.11801 | 8.89275 | 4.40550 | 8.65774 | 7.41608
F 6 | Std | 0.11680 | 1.45278 | 0.92279 | 1.81062 | 1.38240
F 6 | Mean | 0.04929 | 6.22205 * | 2.60928 * | 1.19091 * | 3.28183 *
F 6 | Mdn | 2.34E−09 | 6.24618 * | 2.63736 * | 0.40725 * | 3.47737 *
F 7 | Min | 2.38E−18 | 6.17E−07 * | 5.47E−07 * | 0.00490 * | 0.00004 *
F 7 | Max | 4.29E−14 | 0.00001 * | 0.00003 * | 0.04756 * | 0.00035 *
F 7 | Std | 6.02E−15 | 2.25E−06 * | 3.38E−06 * | 0.01162 * | 0.00009 *
F 7 | Mean | 3.02E−15 | 3.81E−06 * | 4.09E−06 * | 0.01583 * | 0.00017 *
F 7 | Mdn | 8.37E−16 | 3.59E−06 * | 3.39E−06 * | 0.00734 * | 0.00021 *
F 8 | Min | 0.01188 | 0.05090 | 0.04323 | 0.18396 | 0.01448
F 8 | Max | 0.06633 | 0.28007 | 0.19701 | 0.83674 | 1.08647 *
F 8 | Std | 0.01046 | 0.04837 | 0.03117 | 0.08162 | 0.24884
F 8 | Mean | 0.04200 | 0.16584 | 0.14280 | 0.35428 | 0.25656
F 8 | Mdn | 0.04978 | 0.16560 | 0.12214 | 0.25374 | 0.19978
F 9 | Min | 0.00464 | 0.23492 * | 0.18165 * | 0.50303 * | 0.00028
F 9 | Max | 0.04824 | 0.83620 | 2.52288 * | 1.71746 * | 1.93466 *
F 9 | Std | 0.00917 | 0.14942 * | 0.35353 * | 0.31492 * | 0.25213 *
F 9 | Mean | 0.02054 | 0.48341 | 0.89425 | 0.54089 | 0.30615
F 9 | Mdn | 0.01859 | 0.40361 | 0.89080 | 0.53205 | 0.31455
F 10 | Min | 6.28E−09 | 0.00089 * | 0.00027 * | 3.71091 * | 0.01626 *
F 10 | Max | 3.06E−07 | 0.00554 * | 0.00162 * | 1.21E+01 * | 0.06134 *
F 10 | Std | 7.27E−08 | 0.00064 * | 0.00026 * | 1.38500 * | 0.01197 *
F 10 | Mean | 8.51E−08 | 0.00241 * | 0.00074 * | 7.85928 * | 0.03676 *
F 10 | Mdn | 3.91E−08 | 0.00232 * | 0.00067 * | 5.80086 * | 0.03803 *
F 11 | Min | 0 | 0 | 0 | 0 | 0
F 11 | Max | 0 | 0 | 0 | 2.10E−07 * | 0
F 11 | Std | 0 | 0 | 0 | 4.62E−08 * | 0
F 11 | Mean | 0 | 0 | 0 | 6.58E−08 * | 0
F 11 | Mdn | 0 | 0 | 0 | 4.29E−08 * | 0
F 12 | Min | 0.49493 | 0.51290 | 0.52326 | 1.58312 | 0.58300
F 12 | Max | 0.50285 | 0.55368 | 0.59951 | 6.95268 | 0.63250
F 12 | Std | 0.00640 | 0.01712 | 0.00868 | 1.37484 * | 0.01470
F 12 | Mean | 0.47134 | 0.64055 | 0.43431 | 2.41094 | 0.55364
F 12 | Mdn | 0.55712 | 0.62216 | 0.60994 | 1.87850 | 0.64537
F 13 | Min | 0 | 3.38E−15 * | 1.96924 * | 0.84638 * | 1.52E−14 *
F 13 | Max | 1.31E−14 | 1.51E−14 | 3.83714 * | 2.98080 * | 0.13062 *
F 13 | Std | 1.56E−15 | 1.46E−15 | 0.38622 * | 0.43279 * | 0.04251 *
F 13 | Mean | 2.14E−15 | 3.66E−15 | 2.85102 * | 2.70720 * | 0.01958 *
F 13 | Mdn | 1.49E−15 | 4.03E−15 | 3.26898 * | 2.74500 * | 0.00179 *
F 14 | Min | 0.04989 | 0.07565 | 0.08796 | 0.19671 | 0.06136
F 14 | Max | 0.19393 | 0.41287 | 0.23612 | 1.74324 | 0.22415
F 14 | Std | 0.01197 | 0.06424 | 0.04401 | 0.39480 | 0.03163
F 14 | Mean | 0.02851 | 0.25064 | 0.15006 | 0.55733 | 0.11247
F 14 | Mdn | 0.10068 | 0.20667 | 0.18804 | 0.47124 | 0.11113
* indicates that the best algorithm performs significantly better than this algorithm.
Table 4. Statistical data of FROM and PSO variants.

Function | Stat | FROM | ELPSO | CAIWPSO | CRIWPSO | TVAPSO
F 1 | Min | 1.30E−20 | 4.95E−16 * | 6.25E−14 * | 1.28E−10 * | 4.66E−13 *
F 1 | Max | 1.33E−13 | 2.74E−12 | 3.55E−11 * | 1.68E−08 * | 1.74E−11 *
F 1 | Std | 1.60E−14 | 5.43E−12 * | 6.36E−12 * | 3.28E−09 * | 2.86E−12 *
F 1 | Mean | 4.59E−15 | 7.18E−15 | 1.58E−12 * | 2.15E−09 * | 4.09E−12 *
F 1 | Mdn | 3.03E−16 | 4.17E−15 | 6.33E−13 * | 1.11E−09 * | 3.10E−12 *
F 2 | Min | 5.63E−14 | 8.60E−13 | 1.09E−10 * | 3.28E−08 * | 2.02E−10 *
F 2 | Max | 0.00969 | 0.55669 * | 0.78004 * | 0.77471 * | 0.53669 *
F 2 | Std | 0.00028 | 0.18884 * | 0.16598 * | 0.17420 * | 0.17749 *
F 2 | Mean | 0.00038 | 0.03396 * | 0.03304 * | 0.05623 * | 0.04304 *
F 2 | Mdn | 4.35E−12 | 3.51E−10 * | 1.09E−08 * | 1.17E−06 * | 4.82E−09 *
F 3 | Min | 4.32E−15 | 4.51E−13 * | 1.39E−11 * | 6.07E−09 * | 1.52E−11 *
F 3 | Max | 2.12E−11 | 1.77E−10 | 3.51E−09 * | 1.58E−06 * | 2.38E−09 *
F 3 | Std | 2.44E−12 | 2.49E−11 | 8.77E−10 * | 2.29E−07 * | 3.10E−10 *
F 3 | Mean | 1.08E−12 | 1.63E−11 | 5.83E−10 * | 9.97E−08 * | 3.30E−10 *
F 3 | Mdn | 4.47E−13 | 5.44E−12 | 2.26E−10 * | 6.21E−08 * | 2.88E−10 *
F 4 | Min | 1.75E−12 | 7.86E−07 * | 0.00005 * | 0.01327 * | 0.00089 *
F 4 | Max | 7.37E−05 | 1.03E+04 * | 9.97E+03 * | 0.72730 * | 0.06350 *
F 4 | Std | 7.75E−06 | 2.40E+03 * | 1.15E+03 * | 0.12979 * | 0.01020 *
F 4 | Mean | 2.43E−06 | 6.39E+02 * | 1.97E+02 * | 0.17407 * | 0.01065 *
F 4 | Mdn | 1.18E−07 | 7.17E−06 | 0.00080 * | 0.12701 * | 0.00501 *
F 5 | Min | 3.38E−16 | 3.62E−10 * | 3.40E−08 * | 0.00002 * | 1.77E−06 *
F 5 | Max | 1.58E−07 | 1.42E−07 | 0.00003 * | 0.00200 * | 0.00017 *
F 5 | Std | 2.89E−08 | 2.95E−08 | 2.85E−06 * | 0.00048 * | 0.00005 *
F 5 | Mean | 6.85E−09 | 1.32E−08 | 2.10E−06 * | 0.00047 * | 0.00005 *
F 5 | Mdn | 3.89E−10 | 4.35E−09 | 7.46E−07 * | 0.00029 * | 0.00003 *
F 6 | Min | 0 | 2.49E−11 * | 6.78E−10 * | 2.98E−06 * | 6.66E−09 *
F 6 | Max | 0.11801 | 2.17271 | 0.90050 | 1.73373 | 1.06283
F 6 | Std | 0.11680 | 0.45405 | 0.37369 | 0.48283 | 0.37693
F 6 | Mean | 0.04929 | 0.16666 | 0.18609 | 0.33479 | 0.17190
F 6 | Mdn | 2.34E−09 | 1.90E−06 * | 0.00004 * | 0.00897 * | 0.00013 *
F 7 | Min | 2.38E−18 | 5.71E−15 * | 1.24E−12 * | 2.96E−09 * | 1.02E−12 *
F 7 | Max | 4.29E−14 | 8.75E−13 | 3.21E−10 * | 1.60E−07 * | 3.67E−10 *
F 7 | Std | 6.02E−15 | 1.34E−13 * | 4.25E−11 * | 2.85E−08 * | 8.23E−11 *
F 7 | Mean | 3.02E−15 | 1.20E−13 * | 2.53E−11 * | 3.09E−08 * | 7.22E−11 *
F 7 | Mdn | 8.37E−16 | 7.99E−14 * | 1.12E−11 * | 1.51E−08 * | 6.69E−11 *
F 8 | Min | 0.01188 | 0.01036 | 0.00192 | 0.01254 | 0.00717
F 8 | Max | 0.06633 | 0.21000 | 0.18537 | 0.19089 | 0.05865
F 8 | Std | 0.01046 | 0.04513 | 0.03344 | 0.03242 | 0.00910
F 8 | Mean | 0.04200 | 0.05674 | 0.06536 | 0.06012 | 0.03507
F 8 | Mdn | 0.04978 | 0.04606 | 0.04698 | 0.03869 | 0.03546
F 9 | Min | 0.00464 | 0.00076 | 0.05263 * | 0.01695 * | 0.00063
F 9 | Max | 0.04824 | 2.03537 * | 1.32526 * | 5.98459 * | 0.03487
F 9 | Std | 0.00917 | 0.33229 * | 0.15340 * | 0.79859 * | 0.00386
F 9 | Mean | 0.02054 | 0.34094 * | 0.32745 * | 0.45111 * | 0.00553
F 9 | Mdn | 0.01859 | 0.26697 * | 0.35009 * | 0.38428 * | 0.00211
F 10 | Min | 6.28E−09 | 9.20E−08 | 1.95E−06 * | 0.00006 * | 4.25E−06 *
F 10 | Max | 3.06E−07 | 2.19E−06 | 0.00002 * | 0.00069 * | 0.00004 *
F 10 | Std | 7.27E−08 | 3.24E−07 | 4.74E−06 * | 0.00015 * | 7.52E−06 *
F 10 | Mean | 8.51E−08 | 7.10E−07 | 9.40E−06 * | 0.00039 * | 0.00002 *
F 10 | Mdn | 3.91E−08 | 6.21E−07 | 8.71E−06 * | 0.00029 * | 0.00001 *
F 11 | Min | 0 | 0 | 0 | 0 | 0
F 11 | Max | 0 | 0 | 0 | 0 | 0
F 11 | Std | 0 | 0 | 0 | 0 | 0
F 11 | Mean | 0 | 0 | 0 | 0 | 0
F 11 | Mdn | 0 | 0 | 0 | 0 | 0
F 12 | Min | 0.49493 | 0.49998 | 0.51538 | 0.41493 | 0.56342
F 12 | Max | 0.50285 | 0.51123 | 0.60289 | 0.45152 | 0.57651
F 12 | Std | 0.00640 | 0.00993 | 0.00907 | 0.00369 | 0.00524
F 12 | Mean | 0.47134 | 0.59616 | 0.42643 | 0.41689 | 0.51907
F 12 | Mdn | 0.55712 | 0.57819 | 0.59783 | 0.46012 | 0.60424
F 13 | Min | 0 | 0 | 1.73E−15 * | 1.49E−15 * | 2.46E−14 *
F 13 | Max | 1.31E−14 | 1.51E−14 | 3.35861 * | 2.22042 * | 2.61827 *
F 13 | Std | 1.56E−15 | 3.86E−15 | 0.79514 * | 0.50589 * | 1.03460 *
F 13 | Mean | 2.14E−15 | 2.97E−15 | 0.69415 * | 0.32486 * | 1.33731 *
F 13 | Mdn | 1.49E−15 | 2.02E−15 | 0.22951 * | 3.40E−14 | 1.35667 *
F 14 | Min | 0.04989 | 0.04242 | 0.06914 | 0.02379 | 0.04127
F 14 | Max | 0.19393 | 0.31693 | 0.22975 | 0.13560 | 0.23886
F 14 | Std | 0.01197 | 0.05514 | 0.05050 | 0.00497 | 0.04110
F 14 | Mean | 0.02851 | 0.14643 | 0.13524 | 0.01523 | 0.10104
F 14 | Mdn | 0.10068 | 0.10974 | 0.16777 | 0.09275 | 0.09319
* indicates that the best algorithm performs significantly better than this algorithm.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

