Next Article in Journal
A Deep Joint Network for Monocular Depth Estimation Based on Pseudo-Depth Supervision
Previous Article in Journal
Stationary Pattern and Global Bifurcation for a Predator–Prey Model with Prey-Taxis and General Class of Functional Responses
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:

Effects of Search Strategies on Collective Problem-Solving

Department of Information Science Technology, University of Houston, Houston, TX 77204-4007, USA
Mathematics 2023, 11(22), 4642;
Submission received: 5 October 2023 / Revised: 26 October 2023 / Accepted: 10 November 2023 / Published: 14 November 2023
(This article belongs to the Section Financial Mathematics)


In today’s dynamic and complex social environments, collaborative human groups play a critical role in addressing a wide range of real-world challenges. Collective problem-solving, the process of finding solutions through the collaboration of individuals, has become imperative in addressing scientific and technical problems. This paper develops an agent-based model to investigate the influence of different search strategies (simple local search, random search, and adaptive search) on the performance of collective problem-solving under various conditions. The research involves simulations on various problem spaces and considers distinct search errors. Results show that random search initially outperforms other strategies when the search errors are relatively small, yet it is surpassed by adaptive search in the long term when the search errors increase. A simple local search consistently performs the worst among the three strategies. Furthermore, the findings regarding adaptive search reveal that the speed of adaptation in adaptive search varies across problem spaces and search error levels, emphasizing the importance of context-specific parameterization in adaptive search strategies. Lastly, the values of P s = 0.9 and P f = 0.2 obtained through human subject experiments in adaptive search appear to be a favorable choice across various scenarios in this simulation work, particularly for complex problems entailing substantial search errors. This research contributes to a deeper understanding of the effectiveness of search strategies in complex environments, providing insights for improving collaborative problem-solving processes in real-world applications.

1. Introduction

Given the increasing complexity and dynamism of our social environments, a growing number of real-world challenges, ranging from the development of complex technologies [1] and pandemic management [2] to addressing climate change [3] and conducting large-scale military operations [4], necessitate the involvement of collaborative human groups. Consequently, the capacity of these collaborative groups to function effectively has become imperative in addressing a multitude of scientific and technical issues [5,6,7]. Collective problem-solving stands as a crucial facet of research, denoting the process of tackling challenges or finding solutions through the concerted efforts of a group of individuals. It is important to note that the performance of a group engaged in collective problem-solving does not merely reflect the sum or aggregation of individual abilities and expertise. Instead, within this collaborative problem-solving framework, group members significantly influence each other as they interact during tasks, such as studying [8,9,10], recalling information [11,12], making plans or decisions [13,14], and proposing innovative solutions [15,16,17]. It is reasonable to assert that the performance of collective problem-solving arises from intricate and dynamic social interactions among group members [18,19,20,21,22,23]. Theoretical problem-solving processes are often depicted as a sequence of steps encompassing formulation, evaluation, and the identification of alternatives, all aimed at shifting from an undesirable current state to a more desirable future state [24]. In many problem-solving contexts, the pursuit of new solutions unfolds through the consideration of multiple correlated decision variables [25,26,27]. These various decision variables can be envisioned as distinct dimensions within a problem space, where their interplay and adjustments give rise to complex and rugged landscapes [26,28,29]. Collective problem-solving, being a multifaceted endeavor, relies heavily on gaining a thorough understanding of this landscape, which is critical in facilitating successful collaboration. In this context, the collective problem-solving process can be likened to a collaborative endeavor focused on discovering the optimal solution within the constraints of a rugged landscape. Moreover, numerous empirical studies have sought to comprehend the factors contributing to a team’s success in collective problem-solving. These factors include group composition, encompassing considerations such as average intelligence, gender distribution, and diversity [30,31,32,33,34,35,36]; the dynamics of group communication [37,38,39,40,41]; the influence of organizational structures [42,43,44,45]; emotional and psychological behaviors exhibited within the group [46,47,48,49,50,51]; and the sharing of information [52,53,54,55,56,57].
Despite the extensive body of research in this field, the fundamental factors influencing a group’s ability to achieve exceptional performance in collective problem-solving remain elusive. This lack of clarity stems from the challenge of quantifying or monitoring the concealed mechanisms governing members’ cognitive processes. A key cognitive process pertains to how individuals seek solutions at the intellectual level. Scientists have identified two primary search strategies, exploration and exploitation [58,59,60], within the context of collective problem-solving processes and dynamics. Exploitation typically involves choosing the best-known local options based on past experiences, while exploration entails experimenting with more innovative options that hold the potential for improved future outcomes. Striking the optimal balance between these two search strategies is crucial for achieving superior problem-solving performance. Efficient search methodologies, such as brainstorming and divergent thinking, possess the potential to unearth groundbreaking solutions within this vast field, akin to an adventurous exploration. Simultaneously, the persistent pursuit of small incremental enhancements, e.g., “Kaizen” within the realm of quality management [61], can also bolster problem-solving capabilities without incurring substantial errors or risks. This type of search behavior falls under the category of exploitation.
Studying the effects of search strategies on collective problem-solving through empirical research is a challenging endeavor. Fortunately, computer modeling and simulations offer a more practical and insightful approach to delve into the theoretical aspects of collective problem-solving within various structured, complex, and dynamic scenarios [62,63]. A limited number of simulation studies have explored different aspects of individual search behaviors in collective problem-solving. For instance, one study introduced an agent-based model to investigate the influence of individuals’ cognitive styles on collaborative team innovation and design [64]. Another model [65] analyzed how group members’ characteristics impact their behaviors by simulating the problem-solving process of the widely recognized Subarctic Survival Situation [66]. An additional study [67] examined the ways to enhance random search in collective problem-solving by proposing a scheme that derives a sampling distribution from a rapidly fitted Gaussian process. Furthermore, a separate work explored the adaptive balance between exploration and exploitation strategies in problem-solving [68]. However, none of these reported studies have systematically investigated the impact of search strategies on the performance of collective problem-solving.
This paper aims to theoretically and systematically address the aforementioned gap in the literature. It builds an agent-based model [69,70,71] based on solid theoretical and empirical findings. Specifically, this model includes multiple individual agents (i.e., group members) and their interactions within a given problem space (i.e., environment). Each agent autonomously performs according to predefined rules to search for solutions, exchange information, and contribute to collective decision-making. This research delves into a comparative analysis of three distinct search strategies: simple local search, random search, and adaptive search, across various problem spaces along with distinct search error levels. The adaptive search strategy is derived from empirical experiments involving human subjects [28]. Furthermore, the study investigates the optimal adaptation speed for adaptive search in different problem spaces. Simulation outcomes highlight that adaptive search emerges as a notably effective approach for collective problem-solving. This study suggests that search strategies have significant impacts on the performance of collective problem-solving. To gain higher team performance, it is important to be aware of the context of environments in real-world problem-solving.

2. Model

The modeling approach employed here is grounded in the concept of rugged landscapes [26,28,29], which is widely used to simulate problem-solving processes and foster shared mental models. This approach enables us to formalize the dynamics of agents involved in problem-solving activities and to deduce the resulting collective dynamics, drawing parallels with established methods in collective problem-solving research. In essence, the proposed model views collective problem-solving as a sequence of steps involving the collaborative exploration of a problem space resembling a rugged landscape. In this landscape, individuals within a group or team work together to search for new and improved alternatives. It is important to note that agents often position themselves in different areas of this landscape, representing diverse perspectives or backgrounds. They engage in continuous collaboration to enhance the group’s plans and move towards positions with higher value. Finding straightforward solutions can be challenging due to the complexity of the problem space and the limitations of local and global information. Each agent operates independently, actively exploring the problem space from various angles, evaluating proposals from others, and suggesting modifications to the current group solution. In contrast to previously reported models and simulations [27,28,29,30,72,73,74,75,76], this model stands out by placing a specific emphasis on various search strategies and their performance in addressing diverse problems characterized by different complexity and error profiles. A detailed description of the components of the proposed agent-based model is given as follows.

2.1. Problem Space

This work employs a well-established methodology, i.e., NK models, as proposed by the author’s previous research [64], to construct diverse problem spaces of varying complexity. Specifically, the problem space is defined within a discrete domain, denoted as S P = 0 , 1 n , 2 n ,   , n 1 n ,   1 m , where n represents the number of choices in each dimension and m denotes the dimensionality of the problem space. For instance, a two-dimensional problem space has m = 2 . In this context, each point (e.g., ( x , y ) in a two-dimensional problem space) within the problem space represents a potential solution, with the height of each data point indicating the solution’s utility, which quantifies its quality.
To create a problem space under these conditions, multiple initial solutions are randomly generated, resulting in sets R s = [ s 1 , s 2 , s q ] , each paired with randomly assigned utilities, forming U s = [ u 1 , u 2 , u q ] , where q represents the number of initial solutions. Subsequently, the maximum utility value in U s is adjusted to 1. Finally, the true utility function for the problem space is constructed based on the previously specified parameters and the provided initial solutions. The true utility function U T is defined as follows:
U T ( s ) = u i × D i s t s i ,   s 2 D i s t s i ,   s 2
Here, u i represents the utility of an initially generated solution, s i , while s represents any potential solution (data point) within the problem space. D i s t ( s i ,   s ) represents the Euclidean distance between s i and s . Formula (1) presents a simple interpolating algorithm that calculates a weighted average of the utility values of the initially generated solutions using normalized inverse square distances between these initial solutions and any other potential solutions within the problem space. While the initial solution count, q, can provide an estimate of the general number of peaks within the problem space, it is important to note that the generation of the problem space remains a stochastic process. This stochastic nature implies that identical parameter settings can yield diverse problem space outcomes. Following these rules, three distinct problem spaces were generated and utilized in the simulations, as depicted in Figure 1. For a comprehensive understanding of the parameter settings employed, please refer to Table 1. These three problem spaces exhibit unique characteristics, each representing different types of problems.

2.2. Agent

A group of agents works together to collaboratively address a specific problem. The group size is defined as h . Within this group, each agent operates autonomously throughout the problem-solving process. They have the freedom to conduct individual searches in pursuit of improved solutions based on their own comprehension of a given problem. Agents can assume two roles: one as a speaker, where they introduce a new potential solution to the group, and the other as a listener, where they assess and evaluate solutions proposed by their peers. Each agent maintains a memory bank that stores multiple distinct solutions. Additionally, they possess a unique utility function that reflects their personal interpretation of the problem space. This utility function guides their individual search efforts and influences their evaluations of solutions proposed by others. The following section outlines the detailed design characteristics of the agents employed in this study.

2.2.1. Agent’s Memory

Each agent possesses the ability to store a specific number of solutions along with their corresponding utility values. This memory capacity is denoted as c . Agents have the option to update their memory through either individual searching or by sharing information with other group members during collaborative processes. In the event that an agent’s memory reaches its maximum capacity, it becomes necessary for the agent to remove the oldest solution from its memory to make room for a new one. At the onset of group processes, one agent, designated as i , begins by incorporating a single solution denoted as s i o . The utility of this solution, denoted as u i 0 , is determined as u i 0 = U T ( s i o ) , where U T represents the true utility function of the problem space (i.e., Equation (1)).

2.2.2. Agent’s Individual Utility Function

Each agent’s individual utility function serves as a personalized mental model of the problem space, encapsulating their unique understanding of it. In the context of this study, these utility functions are constructed based on the solutions that agents have memorized. For a given agent, denoted as i , its individual utility function is formally defined as follows:
U i ( s ) = u i j × D i s t s i j ,   s 2 D i s t s i j ,   s 2
Here, s represents any solution or data point within the problem space. The term u i j signifies the utility associated with the solution s i j stored in the agent’s memory, while D i s t ( s i j ,   s ) represents the Euclidean distance between s i j and s (i.e., a solution to be evaluated). Equation (2) has a format similar to that of the true utility function shown as Equation (1), which employs a straightforward interpolating algorithm to calculate a weighted average of utility values stored in the agent’s memory. It does so by normalizing these values using the inverse square distances between the memorized solutions and the given solution s . Consequently, each agent is capable of shaping the otherwise inaccessible problem space according to its unique utility function. It is worth noting that agents’ utility functions can be inherently biased due to the limitations in their available information. Furthermore, these individual utility functions evolve over time in response to any updates that occur within the agent’s memory.

2.2.3. Agent’s Other Properties

In the context of collective problem-solving, this work makes the assumption that all individuals possess an equal ability for problem-solving, thus treating their personalities as equal in this regard. Additionally, it assumes that the level of participation or engagement in collective problem-solving is uniform among all participants.

2.3. Group Solution

A group solution represents the collective outcome achieved by a group when tackling a given problem. Agents collaboratively contribute their novel solutions, which are then subject to approval by their peers before being incorporated into the group solution. This group solution remains constantly accessible to all agents within the group. However, it is important to note that the utility of this group solution may vary among individual agents. Each agent assesses the group solution based on their individual utility functions, as shown in Equation (2), leading to differing perceptions. This implies that the utility of a group solution is not solely determined by its inherent quality, which corresponds to its actual utility in the problem space. It is also influenced by how agents evaluate or comprehend it based on their individual perspectives. In the context of this study, the initial group solution is intentionally designed as a random solution, denoted as g o . This randomness involves selecting points in the problem space without a specific pattern or purpose.

2.4. Search Strategies

A well-defined search strategy is of paramount importance in the realm of collective problem-solving, as it plays a pivotal role in shaping the behavior and performance of agents. In the complex landscape of real-world problem-solving scenarios, the quest for novel solutions or alternatives is influenced by a multitude of factors, or a combination thereof. These factors encompass considerations such as expertise, cognitive profiles, backgrounds, learning styles, and abilities, among others. In theoretical investigations, three prominent search strategies have garnered substantial attention in the field of problem-solving. These strategies, extensively studied, will serve as the foundation of the approach in this work.
Moreover, each agent’s search behavior can be likened to a process of “trial and error,” akin to learning through experimentation. However, it is important to acknowledge that the results of these experiments often introduce some degree of error into the accurate assessment of the utility associated with the searched solutions. For a given agent, denoted as i , its estimation of a new candidate solution, say s , is defined as below.
U i e s = U T s + ε
where U i e s is the estimated utility of the candidate solution, s (obtained by individual search), by agent i . U T s and ε represent the true utility of s and search error, respectively. The definition of ε will be described in Section 2.5. The three search strategies used in this work are described as follows.

2.4.1. Simple Local Search

A simple local search, often employed in theoretical problem-solving research, as discussed in [26,28], is a strategy that revolves around enhancing the current solution by exploring nearby alternatives. In real-world scenarios, organizations frequently engage in local search within the immediate vicinity of a status quo option. For instance, event scheduling typically commences with an initial schedule and then iteratively adjusts it by swapping events or modifying time slots to enhance efficiency and meet constraints. Similarly, in political districting, an initial plan is established, and precincts or boundaries are iteratively reassigned or adjusted to align with legal requirements while optimizing other objectives such as compactness.
A simple local search leans more towards exploitation, where choices are from close areas (comparing with background), resulting in relatively low search errors. Each agent in this approach explores neighboring solutions (e.g., eight neighbors in a two-dimensional problem space) to its current solution (i.e., the solution with the highest utility in the agent’s memory). It then randomly selects one of these neighbors as a candidate solution for experimentation to assess its utility through experiments (i.e., according to Equation (3)). If the candidate solution proves superior to the agent’s current solution, the latter is updated; accordingly, otherwise, it remains unchanged. When the agent’s memory reaches its capacity c, the oldest solution is removed.

2.4.2. Random Search

Random search is another problem-solving technique that entails exploring the problem space by randomly selecting candidate solutions in the hope of finding an optimal or satisfactory solution. In practical scenarios, organizations may resort to a random search when seeking alternatives from more distant locations within the problem landscape. This type of search is often associated with radical organizational reorientation brought about by “long jumps,” as discussed in [26,28,65,66].
The distinction between simple local and more distant searches is closely tied to the exploration–exploitation in collective problem-solving [58,59,60]. A greater search distance corresponds to a more exploration approach, while a smaller distance aligns with an exploitation search, akin to a simple local search. In a random search, an agent randomly selects a solution (i.e., a random point within the problem space) as a candidate and assesses its utility according to Equation (3). If the candidate surpasses the agent’s current solution in terms of utility, it is adopted as the new solution; otherwise, the current solution remains unchanged. When the agent’s memory reaches its capacity c, the oldest entry is purged.

2.4.3. Adaptive Search

Theoretical studies suggest that human search behaviors tend to be adaptive, meaning that they adjust search distances based on different situations [75,76]. Empirical research has shown that human search behavior is strongly influenced by feedback [28]. Generally, a search trial is considered successful if it leads to performance improvement and vice versa. Human agents respond to success and failure by gradually adapting their search distance. Specifically, if a search is successful, the agent tends to reduce the search distance, while an unsuccessful search often leads to an increase in the search distance [28]. These adaptive search rules, based on human subject experiments, can be defined as follows:
I f   s u c c e s s , P s : D t + 1 = D t 0.01 ,   D t + 1 0.01   1 P s : D t + 1 = D t
I f   f a i l , P f : D t + 1 = D t + 0.01 ,   D t + 1 1   1 P f : D t 1 = D t
Here, D represents the search distance, bounded by the edges of the problem landscape. P s and P f are probabilities that govern the speed of adjustment in response to success and failure, respectively. These probabilities depend on real-world factors and experimental settings, such as individual search styles, risk associated with a distant search, the number of available trials, potential alternatives, and more. According to the study in [28], rough estimates for P s and P f were 0.9 and 0.2, respectively. The search distances are adjusted in increments of 0.01 (the basic unit of distance), in accordance with the design of the problem spaces. Similar to a simple local search and random search, an agent tests candidate solutions through experimentation (according to Equation (3)) and updates its memory and current solution. Figure 2 provides visual examples of these three types of searches in a two-dimensional problem space.

2.5. Search Errors

Every search conducted by an individual agent is inherently subject to a margin of error due to a variety of factors, including personal abilities, background knowledge, complex environmental conditions, and limited available information. In this model, understanding and quantifying these search errors is crucial to analyze the effects of search strategies on group performance. The model was designed to account for search errors by taking into consideration the distance between a candidate solution and an agent’s memory. The concept here is that the greater the distance between a candidate solution and an agent’s memory, the larger the potential search error. These errors are defined in a stochastic manner, as depicted below:
ε [ α d 1 , α d 1 ]
where ε represents the value of the search error, while α is a tunable constant that is equal to or greater than 1. The variable d signifies the average Euclidean distance between a candidate solution and all the solutions stored in an agent’s memory. When α is set to 1, ε equals 0, indicating that all searches are expected to be perfectly accurate, without any bias. However, when α exceeds 1, ε becomes a random variable with a range from α d 1 to α d 1 . The value of ε is also closely linked to the distance d . A greater distance is more likely to result in larger search errors. This relationship can be interpreted as follows: candidate solutions that are closer to an agent’s background or expertise (i.e., the agent’s memory) are more likely to yield relatively smaller search errors, while those farther away are more likely to lead to greater errors.

2.6. Social Interactions

This model primarily shapes the dynamics of social interactions within a group through communication and the exchange of information. Specifically, each individual agent has the ability to present their current solution, denoted as the solution with the highest utility in their memory, along with its corresponding utility to the entire group. They seek approval from the group for their proposal. On the flip side, an agent playing the role of a listener evaluates a proposal made by a speaker based on their own individual utility function. For example, if a speaker introduces a new candidate solution, let’s call it s x , and asserts that it will enhance the group’s current status, another agent, acting as a listener (let’s call this agent i ), will assess s x , as follows:
U i ( s x ) = u i j × D i s t s i j ,   s x 2 D i s t s i j ,   s x 2
where U i ( s x ) represents the utility of the proposed solution, s x , as estimated by agent i . If U i ( s x ) exceeds the utility of the current group solution, agent i will lend their support to this proposal and incorporate it into their memory. Moreover, agent i will also incorporate this solution into its memory. If a proposal garners support from more than half of the agents, including the speaker, it will be adopted as the new group solution. Conversely, if a proposal fails to secure such widespread support, it will be rejected, and the group solution will remain unchanged.

2.7. Procedure in Each Iteration of Simulation

Each iteration comprises four key steps.
  • Firstly, every agent conducts an individual search and updates their memory and current solution as necessary.
  • Secondly, a speaker is randomly chosen from among the group members. If the speaker’s current solution is deemed superior to the current group solution, as per the speaker’s judgment, it is proposed as a potential candidate for the group solution. Otherwise, the process proceeds to the next iteration.
  • Thirdly, other agents assess the proposed candidate group solution based on their individual utility functions, expressing either support or rejection.
  • Finally, the evaluation results are summarized, leading to the ultimate group decision. This decision involves either adopting or rejecting the candidate group solution as the new collective solution at the group level.

3. Results

To provide practical model parameters for simulations, the model adopts the following parameter settings, as shown in Table 1. The three problem spaces utilized in the simulations are illustrated in Figure 1. Moreover, the simulations in this study were executed using the Python 3.9 programming language. The outcomes were derived from a series of Monte Carlo simulations, involving one hundred independent runs.

3.1. Group Processes over Various Search Errors and Problem Spaces

To investigate the impact of different search strategies on group performance in collective problem-solving, this study conducted simulation experiments using problem spaces with distinct features: a 4-peak problem space (as depicted in Figure 1a), an 11-peak problem space (Figure 1b), and a 16-peak problem space (Figure 1c). The simulations also took into account search errors based on tuning the parameter of α in Equation (5). Figure 3 presents the simulation results for the three search strategies when the search errors were set to zero ( α = 1 ). The results in Figure 3a–c correspond to collective searches on the aforementioned problem spaces, respectively. The performance of the three search strategies gradually improved over time (i.e., iterations) and reached a relatively stable state after three hundred iterations. When comparing the strategies, random search exhibited superior performance in both short-term (e.g., 50 iterations) and long-term (e.g., 250 iterations) scenarios compared to adaptive search and simple local search. Simple local search, on the other hand, demonstrated the lowest performance among the three search strategies. It is important to note that these simulation results reflect an idealized scenario, where all searches were considered perfect, with zero errors. This implies that the estimation of alternative solutions obtained during the search process is identical to the true utility of the solution within a given problem space.
The simulation results, considering “minor” search errors ( α = 2 ), are presented in Figure 4. These errors are contingent upon the average distance between alternative solutions and the solutions stored in an agent’s memory. When an alternative solution is only one unit away (i.e., 0.01 in this context) from an agent’s memorized solutions, the search error, denoted as ε , fluctuates within the range of [ 2 0.01 1 , 2 0.01 1 ] (equivalent to [ 0.007 ,   0.007 ] ). This range is relatively small, indicating that the search error is minor when the alternative solution closely aligns with the agent’s understanding of the problem space. Conversely, when an alternative solution is significantly distant from the agent’s memorized solutions, say 50 units (i.e., 0.5 in this context), the search error spans a broader spectrum, ranging from [ 0.414 ,   0.414 ] . In contrast to the idealized scenarios depicted in Figure 3, these settings resemble real-world searches more closely, where search outcomes are often influenced by knowledge limitations and background constraints. In general, exploring relatively uncharted territory (i.e., alternatives far from an agent’s memorized solutions) is prone to higher uncertainty and misjudgment.
Comparing these results with those in Figure 3 (where no search errors were considered), it becomes evident that all search strategies were affected by the introduced search errors, as evident in Figure 4. Random search, which frequently explores distant solutions, exhibited a more pronounced susceptibility to these errors. Nevertheless, the adaptive search demonstrated relatively resilient performance, showing results close to those in Figure 3. This suggests that an adaptive search might offer greater robustness in the face of search errors. While random search initially outperforms adaptive search in the early stages of collective problem-solving across all three problem spaces, adaptive search ultimately converges to yield superior or comparably beneficial solutions in the long run. Furthermore, a simple local search remains the least effective search strategy in terms of overall performance.
As the search error index increases to α = 3 , the penalty for conducting long-distance searches becomes more pronounced. As previously discussed, the performance of the three search strategies exhibits distinct patterns over time. The search errors associated with alternatives situated at distances of 0.01 and 0.5 from an agent’s memory are characterized by ranges of [ 0.011 ,   0.011 ] and [ 0.732 ,   0.732 ] , respectively. With these larger search errors in play, the performance of the three search strategies in collective problem-solving reveals varying trends, as depicted in Figure 5. Similar to Figure 4, the random search strategy experiences even greater challenges in both short-term and long-term scenarios across various problem spaces. Although random search demonstrates stronger short-term performance, as illustrated in Figure 5a,c, adaptive search significantly outperforms random search in the long term across all three problem spaces.
In summary, when dealing with minor search errors and aiming for swift and satisfactory collective problem-solving performance, random search emerges as the optimal choice among the three strategies. On the other hand, in scenarios where search errors are more significant and time constraints are not a critical factor in group processes, adaptive search stands out as the preferred option. As a prudent guideline, it is advisable to steer clear of employing the simple local search strategy in various collective problem-solving situations.

3.2. Optimal P s and P f Parameters in Adaptive Search

P s and P f represent two crucial parameters within an adaptive search strategy. These parameters guide how agents adapt their search behaviors based on feedback, whether from successful or failed trials. While this research has adopted the recommended values for P s and P f from a previous study [28], it is important to note that there are no universally accepted standard values for these two probabilities across diverse scenarios. This study delves into a theoretical exploration of how problem spaces and search errors influence the optimal selection of these two parameters in adaptive search strategies. Figure 5 illustrates the collective performance of different combinations of P s and P f in adaptive search. All the parameter settings and initialization remain unchanged except the values for P s and P f . The group performance metrics were derived from three hundred iterations, each based on one hundred independent simulation runs. For instance, as depicted in Figure 6a, when P s is set to 0.2 and P f is 0.0, the team’s performance on the 4-peak problem space (as given in Figure 1a) reaches 0.53, with no search errors. In contrast, as shown in Figure 6h, when P s equals 0.6 and P f is 0.8, the collective problem-solving performance reaches 0.91 with a corresponding α value of 3 on the 11-peak problem space (as given in Figure 1b).
It is evident that the optimal combination of P s and P f for achieving higher performance varies across different problem spaces and levels of search errors. To enhance team performance, it is also advisable to avoid extremely small values of P f (e.g., 0 or 0.1). This means that increasing the search distance when the team encounters frequent failures tends to benefit collective problem-solving efforts, particularly for complex problems. This observation holds true consistently across all the results presented in this paper. Furthermore, as exemplified in Figure 6g–i, combinations featuring relatively smaller P s and larger P f values tend to be suboptimal choices, particularly when dealing with substantial search errors. Lastly, it is worth noting that empirical findings from a prior study [28], which propose values of P s = 0.9 (close to 1) and P f = 0.2 , appear to be a favorable choice across various scenarios, particularly for complex problems entailing substantial search errors.

4. Discussion

This paper contributes valuable insights into collective problem-solving by shedding light on the dynamics of search strategies and their effects on group performance. It studied three primary search strategies—simple local search, random search, and adaptive search—and explored their impacts on collective problem-solving within various problem spaces. The findings of the simulations revealed several key insights. In ideal scenarios with zero search errors, random search outperformed the other strategies. This suggests that in situations where errors are minimal or non-existent, a random exploration approach may yield better results. Simple local search consistently demonstrated the lowest performance among the three strategies. As search errors were introduced to simulate real-world scenarios, random search continued to exhibit better short-term performance, while adaptive search outperformed it in the long term across different problem spaces, underscoring its adaptability and effectiveness in the face of search errors. This highlights the importance of adaptability and resilience in problem-solving when dealing with uncertainties and errors. Furthermore, the paper examined the critical parameters, P s and P f (i.e., the parameters regarding how human adjust search behaviors in respond to success and failure), in the adaptive search strategy and their influence on performance. It demonstrated that the optimal combination of these parameters varies depending on the problem space and the magnitude of search errors. The paper emphasizes that the choice of the optimal search strategy and parameter settings may vary depending on the specific problem space and the presence of search errors. This highlights the importance of context-aware decision-making in real-world problem-solving.
This study fills the gap in the existing literature by studying a systematic exploration of search strategies’ impact on team/group performance within the context of collective problem-solving. It is worth emphasizing that this simulation work diverges from established problem-solving and optimization methods found in computer science and mathematics, such as swarm intelligence [77,78], unimodal/multimodal optimizations [79], and metaheuristics [80,81]. This distinction arises from the primary objective of this simulation work, which centers on emulating human behaviors at both the individual and group levels, rather than pursuing optimal solutions through rigid “machine” logic. Moreover, this research holds broad applicability across diverse real-world domains. For instance, the modeling approach and simulation outcomes offer valuable theoretical insights into the collaborative formulation and execution of strategic plans aiming to achieve collective goals and objectives. Moreover, it has the potential to enhance the effectiveness of design teams by guiding their strategic decision-making and problem-solving processes, ultimately facilitating positive change. Furthermore, in the realms of economics and business, this study contributes to a deeper understanding of resource allocation, encompassing the strategic allocation of financial, human, and temporal resources to optimize problem-solving efficiency and effectiveness.
This work is subject to several limitations. Firstly, the problem spaces modeled may oversimplify real-world problems, which can exhibit substantial variations across different situations and scenarios. Secondly, the exploration of search strategies in this study is somewhat constrained, necessitating further empirical research to diversify and explore various types of search strategies or their combinations. Thirdly, this work did not systematically investigate the interactions among team members. Finally, it is crucial to validate the parameters and assumptions introduced into the model for greater robustness and accuracy. These limitations underscore the importance of future research initiatives. For instance, in addition to the two-dimensional rugged landscape problem space, future work should explore other problem spaces (e.g., problem spaces with higher dimensions) that simulate different real-world challenges. Additionally, another avenue of simulation work could focus on assessing the effectiveness of information-sharing and how agents integrate received information, as this has profound implications for collective problem-solving. Furthermore, to validate the conclusions derived from this study, a systematic experimental investigation is paramount and should be a central focus of future research endeavors.


This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to the stochastic nature of the simulations.

Conflicts of Interest

The author declares no conflict of interest.


  1. Wu, L.; Wang, D.; Evans, J.A. Large teams develop and small teams disrupt science and technology. Nature 2019, 566, 378–382. [Google Scholar] [CrossRef]
  2. Comfort, L.K.; Kapucu, N.; Ko, K.; Menoni, S.; Siciliano, M. Crisis decision-making on a global scale: Transition from cognition to collective action under threat of COVID-19. Public Adm. Rev. 2020, 80, 616–622. [Google Scholar] [CrossRef]
  3. Victor, D.G. Toward effective international cooperation on climate change: Numbers, interests and institutions. Glob. Environ. Politics 2006, 6, 90–103. [Google Scholar] [CrossRef]
  4. Yammarino, F.J.; Mumford, M.D.; Connelly, M.S.; Dionne, S.D. Leadership and team dynamics for dangerous military contexts. Mil. Psychol. 2010, 22 (Suppl. S1), S15–S41. [Google Scholar] [CrossRef]
  5. Wuchty, S.; Jones, B.F.; Uzzi, B. The increasing dominance of teams in production of knowledge. Science 2007, 316, 1036–1039. [Google Scholar] [CrossRef]
  6. Fortus, D.; Krajcik, J.; Dershimer, R.C.; Marx, R.W.; Mamlok-Naaman, R. Design-based science and real-world problem-solving. Int. J. Sci. Educ. 2005, 27, 855–879. [Google Scholar] [CrossRef]
  7. Milojević, S. Principles of scientific research team formation and evolution. Proc. Natl. Acad. Sci. USA 2014, 111, 3984–3989. [Google Scholar] [CrossRef]
  8. Aggarwal, I.; Woolley, A.W.; Chabris, C.F.; Malone, T.W. The impact of cognitive style diversity on implicit learning in teams. Front. Psychol. 2019, 10, 112. [Google Scholar] [CrossRef]
  9. Veissière, S.P.; Constant, A.; Ramstead, M.J.; Friston, K.J.; Kirmayer, L.J. Thinking through other minds: A variational approach to cognition and culture. Behav. Brain Sci. 2020, 43, e90. [Google Scholar] [CrossRef]
  10. Woolley, A.W.; Aggarwal, I. Collective intelligence and group learning. In The Oxford Handbook of Group and Organizational Learning; Oxford University Press: Oxford, UK, 2017. [Google Scholar]
  11. Reese, E.; Fivush, R. The development of collective remembering. Memory 2008, 16, 201–212. [Google Scholar] [CrossRef]
  12. Isurin, L. Collective Remembering; Cambridge University Press: Cambridge, UK, 2017. [Google Scholar]
  13. Bell, D.E.; Raiffa, H.; Tversky, A. (Eds.) Decision Making: Descriptive, Normative, and Prescriptive Interactions; Cambridge University Press: Cambridge, UK, 1988. [Google Scholar]
  14. Davis, J.H. Group decision and social interaction: A theory of social decision schemes. Psychol. Rev. 1973, 80, 97–125. [Google Scholar] [CrossRef]
  15. Gross, J.; De Dreu, C.K. Individual solutions to shared problems create a modern tragedy of the commons. Sci. Adv. 2019, 5, eaau7296. [Google Scholar] [CrossRef] [PubMed]
  16. Chiu, M.M. Group Problem-Solving Processes: Social Interactions and Individual Actions. J. Theory Soc. Behav. 2000, 30, 26–49. [Google Scholar] [CrossRef]
  17. Xu, W.; Edalatpanah, S.A.; Sorourkhah, A. Solving the Problem of Reducing the Audiences’ Favor toward an Educational Institution by Using a Combination of Hard and Soft Operations Research Approaches. Mathematics 2023, 11, 3815. [Google Scholar] [CrossRef]
  18. Malone, T.W. Superminds: The Surprising Power of People and Computers Thinking Together; Little, Brown Spark: Boston, MA, USA, 2018. [Google Scholar]
  19. Perc, M.; Gómez-Gardenes, J.; Szolnoki, A.; Floría, L.M.; Moreno, Y. Evolutionary dynamics of group interactions on structured populations: A review. J. R. Soc. Interface 2013, 10, 20120997. [Google Scholar] [CrossRef]
  20. Arguello, J.; Butler, B.S.; Joyce, E.; Kraut, R.; Ling, K.S.; Rosé, C.; Wang, X. Talk to me: Foundations for successful individual-group interactions in online communities. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 22–27 April 2006; pp. 959–968. [Google Scholar]
  21. Long, M.H. Task, Group, and Task-Group Interactions; ERIC—Institute of Education Sciences: Washington, DC, USA, 1990. [Google Scholar]
  22. Luria, A.R. The Working Brain: An Introduction to Neuropsychology; Basic Books: New York, NY, USA, 1973. [Google Scholar]
  23. Mayo, A.T.; Woolley, A.W. Variance in group ability to transform resources into performance, and the role of coordinated attention. Acad. Manag. Discov. 2021, 7, 225–246. [Google Scholar] [CrossRef]
  24. Hoffman, L.R. Group problem solving. In Advances in Experimental Social Psychology; Academic Press: Cambridge, MA, USA, 1965; Volume 2, pp. 99–132. [Google Scholar]
  25. Fleming, L. Recombinant uncertainty in technological search. Manag. Sci. 2001, 47, 117–132. [Google Scholar] [CrossRef]
  26. Levinthal, D.A. Adaptation on rugged landscapes. Manag. Sci. 1997, 43, 934–950. [Google Scholar] [CrossRef]
  27. Rivkin, J.W. Imitation of complex strategies. Manag. Sci. 2000, 46, 824–844. [Google Scholar] [CrossRef]
  28. Billinger, S.; Stieglitz, N.; Schumacher, T.R. Search on rugged landscapes: An experimental study. Organ. Sci. 2014, 25, 93–108. [Google Scholar] [CrossRef]
  29. Ganco, M.; Hoetker, G. NK modeling methodology in the strategy literature: Bounded search on a rugged landscape. In Research Methodology in Strategy and Management; Emerald Group Publishing Limited: Leeds, UK, 2009; pp. 237–268. [Google Scholar]
  30. Moreland, R.L.; Levine, J.M.; Wingert, M.L. Creating the ideal group: Composition effects at work. In Understanding Group Behavior; Psychology Press: London, UK, 2018; pp. 11–35. [Google Scholar]
  31. Wilkinson, I.A.; Fung, I.Y. Small-group composition and peer effects. Int. J. Educ. Res. 2002, 37, 425–447. [Google Scholar] [CrossRef]
  32. Woolley, A.W.; Chabris, C.F.; Pentland, A.; Hashmi, N.; Malone, T.W. Evidence for a collective intelligence factor in the performance of human groups. Science 2010, 330, 686–688. [Google Scholar] [CrossRef] [PubMed]
  33. Spearman, C. “General Intelligence” Objectively Determined and Measured. Am. J. Psychol. 1961, 15, 201–292. [Google Scholar] [CrossRef]
  34. Engel, D.; Woolley, A.W.; Jing, L.X.; Chabris, C.F.; Malone, T.W. Reading the mind in the eyes or reading between the lines? Theory of mind predicts collective intelligence equally well online and face-to-face. PLoS ONE 2014, 9, e115212. [Google Scholar] [CrossRef] [PubMed]
  35. O’Reilly , C.A., III; Williams, K.Y.; Barsade, S. Group demography and innovation: Does diversity help? In Composition; Elsevier Science: Amsterdam, The Netherlands, 1998. [Google Scholar]
  36. Kozhevnikov, M.; Evans, C.; Kosslyn, S.M. Cognitive style as environmentally sensitive individual differences in cognition: A modern synthesis and applications in education, business, and management. Psychol. Sci. Public Interest 2014, 15, 3–33. [Google Scholar] [CrossRef]
  37. Gouran, D.S. Communication in groups. In The Handbook of Group Communication Theory and Research; SAGE: Thousand Oaks, CA, USA, 1999; pp. 3–34. [Google Scholar]
  38. Keyton, J. Relational communication in groups. In The Handbook of Group Communication Theory and Research; SAGE: Thousand Oaks, CA, USA, 1999; pp. 192–222. [Google Scholar]
  39. Albrecht, T.L.; Johnson, G.M.; Walther, J.B. Understanding communication processes in focus groups. Success. Focus Groups Adv. State Art 1993, 5, 1–64. [Google Scholar]
  40. Finholt, T.; Sproull, L.; Kiesler, S. Communication and performance in ad hoc task groups. In Intellectual Teamwork; Psychology Press: London, UK, 2014; pp. 305–340. [Google Scholar]
  41. Morrison, E.W.; Wheeler-Smith, S.L.; Kamdar, D. Speaking up in groups: A cross-level study of group voice climate and voice. J. Appl. Psychol. 2011, 96, 183. [Google Scholar] [CrossRef]
  42. Parker, K.C. Speaking turns in small group interaction: A context-sensitive event sequence model. J. Personal. Soc. Psychol. 1988, 54, 965. [Google Scholar] [CrossRef]
  43. Bernstein, E.; Shore, J.; Lazer, D. How intermittent breaks in interaction improve collective intelligence. Proc. Natl. Acad. Sci. USA 2018, 115, 8734–8739. [Google Scholar] [CrossRef]
  44. Zhang, Z.; Gao, Y.; Li, Z. Consensus reaching for social network group decision making by considering leadership and bounded confidence. Knowl.-Based Syst. 2020, 204, 106240. [Google Scholar] [CrossRef]
  45. Becker, J.; Brackbill, D.; Centola, D. Network dynamics of social influence in the wisdom of crowds. Proc. Natl. Acad. Sci. USA 2017, 114, E5070–E5076. [Google Scholar] [CrossRef]
  46. Iyer, A.; Leach, C.W. Emotion in inter-group relations. Eur. Rev. Soc. Psychol. 2008, 19, 86–125. [Google Scholar] [CrossRef]
  47. Bruner, M.W.; Eys, M.A.; Wilson, K.S.; Côté, J. Group cohesion and positive youth development in team sport athletes. Sport Exerc. Perform. Psychol. 2014, 3, 219. [Google Scholar] [CrossRef]
  48. Carron, A.V.; Brawley, L.R. Cohesion: Conceptual and measurement issues. Small Group Res. 2012, 43, 726–743. [Google Scholar] [CrossRef]
  49. Whitton, S.M.; Fletcher, R.B. The Group Environment Questionnaire: A multilevel confirmatory factor analysis. Small Group Res. 2014, 45, 68–88. [Google Scholar] [CrossRef]
  50. Kinicki, A.J.; Jacobson, K.J.; Peterson, S.J.; Prussia, G.E. Development and validation of the performance management behavior questionnaire. Pers. Psychol. 2013, 66, 1–45. [Google Scholar] [CrossRef]
  51. Knierim, M.T.; Hariharan, A.; Dorner, V.; Weinhardt, C. Emotion feedback in small group collaboration: A research agenda for group emotion management support systems. In Proceedings of the 17th International Conference on Group Decision and Negotiation (GDN), Stuttgart, Germany, 14–18 August 2017; pp. 1–12. [Google Scholar]
  52. Moye, N.A.; Langfred, C.W. Information sharing and group conflict: Going beyond decision making to understand the effects of information sharing on group performance. Int. J. Confl. Manag. 2014, 15, 381–410. [Google Scholar] [CrossRef]
  53. Gigone, D.; Hastie, R. The common knowledge effect: Information sharing and group judgment. J. Personal. Soc. Psychol. 1993, 65, 959. [Google Scholar] [CrossRef]
  54. Toma, C.; Butera, F. Hidden profiles and concealed information: Strategic information sharing and use in group decision making. Personal. Soc. Psychol. Bull. 2009, 35, 793–806. [Google Scholar] [CrossRef]
  55. Devine, D.J. Effects of cognitive ability, task knowledge, information sharing, and conflict on group decision-making effectiveness. Small Group Res. 1999, 30, 608–634. [Google Scholar] [CrossRef]
  56. Phillips, K.W.; Mannix, E.A.; Neale, M.A.; Gruenfeld, D.H. Diverse groups and information sharing: The effects of congruent ties. J. Exp. Soc. Psychol. 2004, 40, 497–510. [Google Scholar] [CrossRef]
  57. Tang, Q.; Wang, C.; Feng, T. Research on the Group Innovation Information-Sharing Strategy of the Industry–University–Research Innovation Alliance Based on an Evolutionary Game. Mathematics 2023, 11, 4161. [Google Scholar] [CrossRef]
  58. Berger-Tal, O.; Nathan, J.; Meron, E.; Saltz, D. The exploration-exploitation dilemma: A multidisciplinary framework. PLoS ONE 2014, 9, e95693. [Google Scholar] [CrossRef] [PubMed]
  59. Uotila, J.; Maula, M.; Keil, T.; Zahra, S.A. Exploration, exploitation, and financial performance: Analysis of S&P 500 corporations. Strateg. Manag. J. 2009, 30, 221–231. [Google Scholar]
  60. Hoang, H.A.; Rothaermel, F.T. Leveraging internal and external experience: Exploration, exploitation, and R&D project performance. Strateg. Manag. J. 2010, 31, 734–758. [Google Scholar]
  61. Brunet, A.P.; New, S. Kaizen in Japan: An empirical study. Int. J. Oper. Prod. Manag. 2003, 23, 1426–1446. [Google Scholar] [CrossRef]
  62. Stasser, G. Computer simulation as a research tool: The DISCUSS model of group decision making. J. Exp. Soc. Psychol. 1988, 24, 393–422. [Google Scholar] [CrossRef]
  63. Mollona, E. Computer simulation in social sciences. J. Manag. Gov. 2008, 12, 205–211. [Google Scholar] [CrossRef]
  64. Lapp, S.; Jablokow, K.; McComb, C. KABOOM: An agent-based model for simulating cognitive style in team problem solving. Des. Sci. 2019, 5, e13. [Google Scholar] [CrossRef]
  65. Bergner, Y.; Andrews, J.J.; Zhu, M.; Gonzales, J.E. Agent-based modeling of collaborative problem solving. ETS Res. Rep. Ser. 2016, 2016, 1–14. [Google Scholar] [CrossRef]
  66. Hill, L.A. Orientation to the Subarctic Survival Situation; Harvard Business School Background Note 494-073; Harvard Business School: Boston, MA, USA, 1995. [Google Scholar]
  67. Sun, L.; Hong, L.J.; Hu, Z. Balancing exploitation and exploration in discrete optimization via simulation through a Gaussian process-based search. Oper. Res. 2014, 62, 1416–1438. [Google Scholar] [CrossRef]
  68. Ajdari, A.; Mahlooji, H. An adaptive exploration-exploitation algorithm for constructing metamodels in random simulation using a novel sequential experimental design. Commun. Stat.-Simul. Comput. 2014, 43, 947–968. [Google Scholar] [CrossRef]
  69. Gilbert, N. Agent-Based Models; Sage Publications: Newcastle upon Tyne, UK, 2019. [Google Scholar]
  70. Bankes, S.C. Agent-based modeling: A revolution? Proc. Natl. Acad. Sci. USA 2002, 99 (Suppl. S3), 7199–7200. [Google Scholar] [CrossRef]
  71. Macal, C.M.; North, M.J. Agent-based modeling and simulation. In Proceedings of the 2009 Winter Simulation Conference (WSC), Austin, TX, USA, 13–16 December 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 86–98. [Google Scholar]
  72. Cao, S.; MacLaren, N.G.; Cao, Y.; Marshall, J.; Dong, Y.; Yammarino, F.J.; Dionne, S.D.; Mumford, M.D.; Connelly, S.; Martin, R.W.; et al. Group Size and Group Performance in Small Collaborative Team Settings: An Agent-Based Simulation Model of Collaborative Decision-Making Dynamics. Complexity 2022, 2022, 8265296. [Google Scholar] [CrossRef]
  73. Siggelkow, N. Evolution toward fit. Adm. Sci. Q. 2002, 47, 125–159. [Google Scholar] [CrossRef]
  74. Tushman, M.L.; Romanelli, E. Organizational evolution: A metamorphosis model of convergence and reorientation. Res. Organ. Behav. 1985, 7, 171–222. [Google Scholar]
  75. Arthur, W.B. Designing economic agents that act like human agents: A behavioral approach to bounded rationality. Am. Econ. Rev. 1991, 81, 353–359. [Google Scholar]
  76. Edmonds, B. Towards a descriptive model of agent strategy search. Comput. Econ. 2001, 18, 111–133. [Google Scholar] [CrossRef]
  77. Kennedy, J. Swarm intelligence. In Handbook of Nature-Inspired and Innovative Computing: Integrating Classical Models with Emerging Technologies; Springer: Boston, MA, USA, 2006; pp. 187–219. [Google Scholar]
  78. Beni, G. Swarm intelligence. In Complex Social and Behavioral Systems: Game Theory and Agent-Based Models; Springer: New York, NY, USA, 2020; pp. 791–818. [Google Scholar]
  79. Poria, S.; Cambria, E.; Bajpai, R.; Hussain, A. A review of affective computing: From unimodal analysis to multimodal fusion. Inf. Fusion 2017, 37, 98–125. [Google Scholar] [CrossRef]
  80. Talbi, E.G. Metaheuristics: From Design to Implementation; John Wiley & Sons: Hoboken, NJ, USA, 2009. [Google Scholar]
  81. Almufti, S.M. Historical survey on metaheuristics algorithms. Int. J. Sci. World 2019, 7, 1. [Google Scholar] [CrossRef]
Figure 1. Generated problem spaces for simulations. (a) Problem space a. (b) Problem space b. (c) Problem space c. Each point within the problem spaces represents a potential solution, with color coding indicating utility levels. Red signifies high utility, while blue represents low utility. The accompanying table at the bottom of the figure provides statistical properties for the three problem spaces.
Figure 1. Generated problem spaces for simulations. (a) Problem space a. (b) Problem space b. (c) Problem space c. Each point within the problem spaces represents a potential solution, with color coding indicating utility levels. Red signifies high utility, while blue represents low utility. The accompanying table at the bottom of the figure provides statistical properties for the three problem spaces.
Mathematics 11 04642 g001
Figure 2. Examples of the three types of search strategies employed in this work. (a) Simple local search. (b) Random search. (c) Adaptive search.
Figure 2. Examples of the three types of search strategies employed in this work. (a) Simple local search. (b) Random search. (c) Adaptive search.
Mathematics 11 04642 g002
Figure 3. Dynamical group performance with zero search errors ( α = 1 ). (a) Dynamical group performance on problem space as given in Figure 1a. (b) Dynamical group performance on problem space as given in Figure 1b. (c) Dynamical group performance on problem space as given in Figure 1c.
Figure 3. Dynamical group performance with zero search errors ( α = 1 ). (a) Dynamical group performance on problem space as given in Figure 1a. (b) Dynamical group performance on problem space as given in Figure 1b. (c) Dynamical group performance on problem space as given in Figure 1c.
Mathematics 11 04642 g003
Figure 4. Dynamical group performance with minor search errors ( α = 2 ). (a) Dynamical group performance on problem space as given Figure 1a. (b) Dynamical group performance on problem space as given in Figure 1b. (c) Dynamical group performance on problem space as given in Figure 1c.
Figure 4. Dynamical group performance with minor search errors ( α = 2 ). (a) Dynamical group performance on problem space as given Figure 1a. (b) Dynamical group performance on problem space as given in Figure 1b. (c) Dynamical group performance on problem space as given in Figure 1c.
Mathematics 11 04642 g004
Figure 5. Dynamical group performance with relatively larger search errors ( α = 3 ). (a) Dynamical group performance on problem space as given in Figure 1a. (b) Dynamical group performance on problem space as given in Figure 1b. (c) Dynamical group performance on problem space as given in Figure 1c.
Figure 5. Dynamical group performance with relatively larger search errors ( α = 3 ). (a) Dynamical group performance on problem space as given in Figure 1a. (b) Dynamical group performance on problem space as given in Figure 1b. (c) Dynamical group performance on problem space as given in Figure 1c.
Mathematics 11 04642 g005
Figure 6. Group performance with different combination of P s and P f in adaptive search. (a) α = 1 and 4-peak problem space as given in Figure 1a. (b) α = 1 and 11-peak problem space as given in Figure 1b. (c) α = 1 and 16-peak problem space as given in Figure 1c. (d) α = 2 and 4-peak problem space as given in Figure 1a. (e) α = 2 and 11-peak problem space as given in Figure 1b. (f) α = 2 and 16-peak problem space as given in Figure 1c. (g) α = 3 and 4-peak problem space as given in Figure 1a. (h) α = 3 and 11-peak problem space as given in Figure 1b. (i) α = 3 and 4-peak problem space as given in Figure 1c. The value of group performance in each sub squared area represents the final utility of the group solution after three hundred iterations based on one hundred independent simulation runs.
Figure 6. Group performance with different combination of P s and P f in adaptive search. (a) α = 1 and 4-peak problem space as given in Figure 1a. (b) α = 1 and 11-peak problem space as given in Figure 1b. (c) α = 1 and 16-peak problem space as given in Figure 1c. (d) α = 2 and 4-peak problem space as given in Figure 1a. (e) α = 2 and 11-peak problem space as given in Figure 1b. (f) α = 2 and 16-peak problem space as given in Figure 1c. (g) α = 3 and 4-peak problem space as given in Figure 1a. (h) α = 3 and 11-peak problem space as given in Figure 1b. (i) α = 3 and 4-peak problem space as given in Figure 1c. The value of group performance in each sub squared area represents the final utility of the group solution after three hundred iterations based on one hundred independent simulation runs.
Mathematics 11 04642 g006
Table 1. Parameter settings for the simulations.
Table 1. Parameter settings for the simulations.
m dimension of a problem space2
n number of choices in each dimension100
q number of initial solutions for problem space generation5, 20, 50
h group size4
c capacity of each agent’s memory20
s i o agent   i ’s initial solutionrandom
u i 0 utility   of   agent   i ’s initial solution true   utility   of   s i o
g o group’s initial solutionrandom
P s probabilities of decreasing search distance (success)0.9
P f probabilities of increasing search distance (failure)0.2
α index for search errors1, 2, 3
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Cao, S. Effects of Search Strategies on Collective Problem-Solving. Mathematics 2023, 11, 4642.

AMA Style

Cao S. Effects of Search Strategies on Collective Problem-Solving. Mathematics. 2023; 11(22):4642.

Chicago/Turabian Style

Cao, Shun. 2023. "Effects of Search Strategies on Collective Problem-Solving" Mathematics 11, no. 22: 4642.

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop