Article

CROWDMATCH: Optimizing Crowdsourcing Matching through the Integration of Matching Theory and Coalition Games

by Adedamola Adesokan, Rowan Kinney and Eirini Eleni Tsiropoulou *
Department of Electrical and Computer Engineering, University of New Mexico, Albuquerque, NM 87131-0001, USA
* Author to whom correspondence should be addressed.
Future Internet 2024, 16(2), 58; https://doi.org/10.3390/fi16020058
Submission received: 28 December 2023 / Revised: 22 January 2024 / Accepted: 8 February 2024 / Published: 11 February 2024
(This article belongs to the Special Issue State-of-the-Art Future Internet Technology in USA 2022–2023)

Abstract:
This paper tackles the challenges inherent in crowdsourcing dynamics by introducing the CROWDMATCH mechanism. Aimed at enabling crowdworkers to strategically select suitable crowdsourcers while contributing information to crowdsourcing tasks, CROWDMATCH considers incentives, information availability and cost, and the decisions of fellow crowdworkers to model the utility functions for both the crowdworkers and the crowdsourcers. Specifically, the paper presents an initial Approximate CROWDMATCH mechanism grounded in matching theory principles, eliminating externalities from crowdworkers’ decisions and enabling each entity to maximize its utility. Subsequently, the Accurate CROWDMATCH mechanism is introduced, which is initiated by the outcome of the Approximate CROWDMATCH mechanism, and coalition game-theoretic principles are employed to refine the matching process by accounting for externalities. The paper’s contributions include the introduction of the CROWDMATCH system model, the development of both Approximate and Accurate CROWDMATCH mechanisms, and a demonstration of their superior performance through comprehensive simulation results. The mechanisms’ scalability in large-scale crowdsourcing systems and operational advantages are highlighted, distinguishing them from existing methods and demonstrating their efficacy in empowering crowdworkers in crowdsourcer selection.

1. Introduction

The evolution in communication and computational technologies has facilitated the seamless aggregation of knowledge and efforts from a diverse population, giving rise to a novel online problem-solving paradigm known as crowdsourcing [1]. This entails the broad act of outsourcing tasks, traditionally handled by employees or contractors, to a vast internet population, often referred to as the wise crowd, through an open call [2]. Crowdsourcing finds extensive applications in various domains such as fundraising and urban sensing. Typically, a crowdsourcing platform receives microtasks that are uploaded by crowdsourcers and engages and organizes crowdworkers to perform these tasks iteratively or in parallel [3]. Essentially, human participants autonomously select tasks and contribute without direct interaction or collaboration with one another [4]. While conventional crowdsourcing platforms like Amazon’s Mechanical Turk primarily handle straightforward tasks, newer platforms like Upwork are tailored for more intricate assignments [5]. Consequently, the focus on crowdsourcing complex tasks has garnered substantial interest [6].

1.1. Related Work

1.1.1. Task Allocation

The challenge of task allocation between crowdsourcers and crowdworkers, or, conversely, the autonomous selection of a declared task by crowdworkers from crowdsourcers, has garnered attention from both academia and industry [7]. A set of novel algorithms for influence-aware task assignment in Spatial Crowdsourcing is introduced in [8], utilizing workers’ historical patterns, a historical acceptance approach, and a propagation optimization algorithm to maximize both task assignments and worker–task influence in the era of widespread smartphone use. A methodology for optimizing resource allocation in crowd-based cooperative computing is discussed in [9], focusing on evolutionary heuristics to balance matching rate and collaborative quality, supported by suitable metrics and multicriteria decision-making, demonstrating effectiveness through experiments on various scales of crowd-based cooperative task allocation problems. Focusing on the challenge of online task assignment with specific time windows for data collection, the authors in [10] formulated a profit maximization problem for the crowdsourcing system, proposing two heuristic algorithms whose effectiveness was validated through simulation results. A mobile crowdsourcing mechanism for urban-scale monitoring is proposed in [11], addressing the cost-fair task allocation problem to balance sensing costs among mobile users, offering offline and online algorithms for efficient and fair task distribution. The concept of opportunistic mobile crowdsourcing is analyzed in [12] by addressing challenges related to uncertain worker trajectories, incorporating worker and task requester preferences and capacity constraints, and proposing novel task assignment algorithms that are proven to be optimal in terms of preference awareness. A game-theoretic approach for task allocation in crowdsourcing is presented in [13], addressing the challenge of optimizing crowdsourcing task assignment by incorporating trust evaluation, virtual currency incentives, and a bargaining game model.

1.1.2. Incentive Mechanisms

Beyond the problem of task allocation in crowdsourcing environments, motivating user participation is a critical challenge due to the scarcity of willing participants, leading researchers to actively explore incentive mechanisms that encourage diverse participant engagement in crowdsourcing [14]. A two-tiered social crowdsourcing architecture to address insufficient participation in budget-constrained online crowdsourcing is studied in [15], where three system models and corresponding incentive mechanisms are analyzed, demonstrating through analysis and simulations their effectiveness in achieving computational efficiency, individual rationality, and budget feasibility. An incentive mechanism for crowdsourcing is introduced in [16], tackling the challenge of unknown worker qualities by using a multi-armed bandit approach and a three-stage Stackelberg game. Similarly, a multi-leader, multi-follower Stackelberg game for socially aware crowdsourcing is proposed in [17,18], addressing incentive design issues by integrating social influence and strategic interactions among service providers and users. The authors in [19] proposed innovative task diffusion models for large-scale crowdsourcing via social networks, introducing sealed reverse auction incentive mechanisms that achieve computational efficiency, truthfulness, and guaranteed approximation, with superior outcomes in social cost, overpayment ratio, and task completion rates. A crowdsourcing framework for user recruitment in Online Social Networks, employing a labor economics approach and formulating the competitive process as a Generalized Colonel Blotto game to determine optimal rewards, is proposed in [20]. Also, the authors in [21] applied psychological game theory to show that crowdsourcers can reduce costs in crowdsourcing by incorporating psychological payoffs, proving optimal incentive plans, and presenting a unique psychological model for improved incentive mechanism design.

1.1.3. Privacy and Trustworthiness

Nevertheless, challenges emerge in the crowdsourcing process concerning the privacy of crowdworkers and the trustworthiness of the information they provide to the crowdsourcers [22]. A novel crowdsourcing network architecture addressing location privacy concerns during task allocation is proposed in [23] by implementing a privacy-preserving, decentralized dispute arbitration protocol to handle payment disputes without revealing users’ private information, showcasing resistance to forgery attacks and efficient performance. A game-theoretic model is introduced in [24] to analyze the dissemination of user personal information in online social networks, considering factors like intimacy and subject popularity [25]. The authors in [26] focused on user privacy and data trustworthiness and proposed game-theoretic solutions, specifically revisiting coalitional game formation and subgame perfect equilibrium-based concepts.

1.1.4. Crowdworker Recruitment

Efficient crowdworker recruitment plays a pivotal role in the success of crowdsourcing campaigns [27]. Traditionally, crowdsourcing applications employ a direct approach, where the platform directly chooses and enlists suitable individuals for task execution [28]. Matching theory and coalition games have been widely used to support the matching of crowdworkers to tasks or crowdsourcers [29]. The crowdsourcing last-mile delivery problem, involving orders with diverse destinations and time windows and crowdsourced drivers with preplanned trips, is addressed in [30] by introducing a non-cooperative game framework. The proposed algorithms efficiently find stable matches, demonstrating effectiveness through computational experiments and extending applicability to stochastic settings with random release times of orders. The authors in [31] introduce Acceptance-aware Worker Recruitment (AWR) as a novel game in socially aware crowdsourcing, employing a random diffusion model for task invitation propagation on social networks, formulating the AWR game as an NP-hard combinatorial optimization problem to maximize overall task acceptance within a specified incentive budget, and presenting a meta-heuristic-based evolutionary approach whose effectiveness and efficiency are demonstrated through comprehensive experiments on real-world datasets. A distributed team formation-based batch crowdsourcing approach for complex tasks is analyzed in [32], presenting two strategies (forming a fixed team for all tasks or a dynamically adjusted basic team for each task) and demonstrating improved cost efficiency, requester payments, communication, task success rates, and scalability compared to previous benchmarks on a real-world dataset. Two crowdworker recruitment strategies are discussed in [33], i.e., platform-based and leader-based, optimizing team formation through an integer linear program considering expertise, social ties, cost, and confidence.

1.2. Contributions

In spite of the advancements made in prior research on the allocation of tasks to crowdworkers, the examination of crowdworkers’ privacy and the trustworthiness of the information they contribute to crowdsourcers, and the recruitment of crowdworkers, the issue of enabling crowdworkers to select crowdsourcers based on the provided incentives remains largely unexplored. Additionally, addressing the externalities in the decision-making process of selecting a crowdsourcer, which is influenced by the decisions of other crowdworkers, poses an even more formidable challenge.
This paper aims to address these challenges. Specifically, our research introduces a crowdsourcing matching mechanism, named CROWDMATCH, aimed at empowering crowdworkers to select a suitable crowdsourcer and contribute information to a crowdsourcing task, taking into account the provided incentives, the availability and cost of information, and decisions made by other crowdworkers. An Approximate CROWDMATCH mechanism is initially developed, drawing on matching theory principles, to eliminate externalities from the decisions of other crowdworkers, allowing each crowdworker to maximize its utility while selecting a crowdsourcer. Simultaneously, all crowdsourcers also maximize their benefits. Subsequently, the Accurate CROWDMATCH mechanism is introduced, based on coalition game-theoretic principles, refining the selection process by accounting for externalities stemming from other crowdworkers’ decisions and incentives provided by other crowdsourcers. The distinctive contributions of this research, setting it apart from the existing literature, are succinctly outlined below:
  • The CROWDMATCH system model is introduced, comprising a set of crowdsourcers such as Amazon Mechanical Turk, Upwork, CrowdFlower, Google Maps, etc., and a set of crowdworkers. Operational characteristics, including the availability and cost of information, provided incentives, and the assessment of crowdworkers’ benefits and crowdsourcers’ incentive costs, are encapsulated in utility functions that reflect the advantages derived by both crowdworkers and crowdsourcers in the context of the crowdsourcing process;
  • The Approximate CROWDMATCH mechanism is initially devised based on matching theory principles, taking into account the absence of externalities in the decision-making process of crowdworkers as they select the most suitable crowdsourcer to contribute information to a corresponding crowdsourcing task. This mechanism converges to a stable matching of crowdworkers to crowdsourcers, optimizing the utility for both entities, and serves as input for the Accurate CROWDMATCH mechanism;
  • Addressing the system’s externalities, the Accurate CROWDMATCH mechanism is proposed, based on coalition game-theoretic principles, to ascertain an optimal matching between crowdworkers and crowdsourcers, maximizing their respective utilities. The existence of a Nash–Individually stable matching of crowdworkers is analytically demonstrated;
  • Comprehensive simulation results demonstrate the operational advantages of the CROWDMATCH mechanism, its scalability in large-scale crowdsourcing systems, and its superior performance compared to existing methods that empower crowdworkers in the selection of crowdsourcers.

1.3. Outline

The subsequent sections of this paper are structured as follows: Section 2 introduces the CROWDMATCH system model. The Approximate and Accurate CROWDMATCH mechanisms are presented in Section 3 and Section 4, respectively. Section 5 provides a complexity analysis of the proposed algorithms. Simulation results are presented in Section 6. Finally, Section 7 concludes the paper.

2. System Model

In this section, the CROWDMATCH system model is presented. We consider a set of crowdsourcers, such as Amazon Mechanical Turk, Upwork, CrowdFlower, Google Maps, etc., $\mathcal{M} = \{1, \dots, m, \dots, M\}$, and a set of crowdworkers, denoted as $\mathcal{N} = \{1, \dots, n, \dots, N\}$. Each crowdsourcer has a total budget $P_m$ [\$/bit] that is provided as a reward to the crowdworkers in order to incentivize the latter to submit their information to crowdsourcing tasks announced by the crowdsourcer. Each crowdworker is characterized by an amount of information $I_n$ [bits], which can be composed of text, videos, photos, etc., and a corresponding cost $P_n$ [\$/bit] to collect the information. The crowdworker’s utility is defined as follows:
$$U_n = \hat{a}_n (\hat{P}_m - \hat{P}_n) \frac{\hat{I}_n}{\sum_{n \in \mathcal{N}_m} \hat{I}_n} \tag{1}$$
where $a_n \in \mathbb{R}^+$ [bits/\$] denotes the evaluation of the crowdworker’s profit from uploading the amount of information $I_n$ to the crowdsourcer $m$, and $\mathcal{N}_m$ denotes the set of crowdworkers that selected crowdsourcer $m$. The values have been normalized for presentation purposes and without loss of generality, as follows: $\hat{a}_n = \frac{a_n}{\max_{n \in \mathcal{N}}\{a_n\}}$, $\hat{I}_n = \frac{I_n}{\max_{n \in \mathcal{N}}\{I_n\}}$, $\hat{P}_n = \frac{P_n}{\max_{n \in \mathcal{N}}\{P_n\}}$, and $\hat{P}_m = \frac{P_m}{\max_{m \in \mathcal{M}}\{P_m\}}$. Also, it is noted that the crowdworker’s utility depends on the amount of information that all the crowdworkers upload to the selected crowdsourcer; if the crowdsourcer receives a large amount of information from other crowdworkers, then it is less incentivized to provide competitive rewards to the crowdworkers.
Focusing on the crowdsourcers’ benefit from the crowdsourcing process, the corresponding crowdsourcer’s utility is formulated as follows:
$$U_m = \sum_{n \in \mathcal{N}_m} U_n - \hat{\gamma}_m (\hat{P}_m - \hat{C}_m)^2 - \hat{\delta}_m \sum_{m \in \mathcal{M}} \hat{P}_m \tag{2}$$
where $C_m$ [\$/bit] denotes the cost incurred by the crowdsourcer for executing its fixed operations, $\gamma_m$ [bits^2/\$^2] signifies the assessment of profit, and $\delta_m$ [\$/bit] represents the evaluation of the market impact. Analogously to the crowdworker’s utility, we normalize the values for presentation purposes and without loss of generality, as follows: $\hat{C}_m = \frac{C_m}{\max_{m \in \mathcal{M}}\{C_m\}}$ and $\hat{\delta}_m = \frac{\delta_m}{\max_{m \in \mathcal{M}}\{\delta_m\}}$. The primary objective for each crowdworker is to select a crowdsourcer to upload its information, with the aim of maximizing its utility. Similarly, each crowdsourcer aims to recruit a group of crowdworkers to perform the designated crowdsourcing task while maximizing its utility. It is noted that each crowdsourcer can accommodate a maximum number of $N_m^{max}$ crowdworkers based on its available budget $P_m$ to provide rewards.
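To make the utility definitions concrete, the following minimal Python sketch evaluates Equations (1) and (2) for already-normalized inputs; the function and variable names are illustrative assumptions rather than part of the paper’s implementation.

```python
# Illustrative helpers for Equations (1) and (2); names and plain-float
# inputs are assumptions of this sketch, not the paper's implementation.

def crowdworker_utility(a_hat, p_hat_m, p_hat_n, i_hat_n, coalition_info):
    """Accurate crowdworker utility U_n of Equation (1).

    coalition_info: normalized information amounts of every crowdworker
    matched to the same crowdsourcer m (the externality in the denominator).
    """
    return a_hat * (p_hat_m - p_hat_n) * i_hat_n / sum(coalition_info)

def crowdsourcer_utility(worker_utilities, gamma_hat, p_hat_m, c_hat_m,
                         delta_hat, p_hat_all):
    """Accurate crowdsourcer utility U_m of Equation (2).

    p_hat_all: normalized rewards announced by every crowdsourcer in M
    (the market-impact externality).
    """
    return (sum(worker_utilities)
            - gamma_hat * (p_hat_m - c_hat_m) ** 2
            - delta_hat * sum(p_hat_all))

# Example with one crowdsourcer and two matched crowdworkers.
coalition_info = [0.6, 0.9]
u1 = crowdworker_utility(0.5, 0.8, 0.3, 0.6, coalition_info)
u2 = crowdworker_utility(0.7, 0.8, 0.5, 0.9, coalition_info)
u_m = crowdsourcer_utility([u1, u2], 0.9, 0.8, 0.4, 0.2, [0.8, 0.7, 1.0])
```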

3. Approximate CROWDMATCH Based on Matching Theory

The crowdworkers seek to employ a strategic approach in selecting a crowdsourcer for uploading information, with the objective of maximizing their utilities. Conversely, the crowdsourcers endeavor to identify an optimal group of crowdworkers to enhance their utilities in a distributed fashion. This situation presents a many-to-one matching problem, wherein multiple crowdworkers are paired with a single crowdsourcer, and can be studied based on the matching theory.
Definition 1. 
(Matching) A set of crowdworkers, denoted as $\mathcal{N} = \{1, \dots, n, \dots, N\}$, and a set of crowdsourcers, denoted as $\mathcal{M} = \{1, \dots, m, \dots, M\}$, are regarded as two non-intersecting sets. The matching function $\Lambda$ represents a mapping of elements from $\mathcal{N}$ to elements from $\mathcal{M}$, subject to the following conditions:
  • ($C_1$): $\forall n \in \mathcal{N}$, $|\Lambda(n)| \leq 1$;
  • ($C_2$): $\forall m \in \mathcal{M}$, $|\Lambda(m)| \leq N_m^{max}$;
  • ($C_3$): $\Lambda(n) \in \mathcal{M} \cup \{\emptyset\}$ and $\Lambda(m) \subseteq \mathcal{N}$;
  • ($C_4$): $n \in \Lambda(m) \Leftrightarrow \Lambda(n) = m$.
The magnitude of a matching, expressed as $|\Lambda(\cdot)|$, signifies the cardinality of the matching. $N_m^{max}$ denotes the maximum count of crowdworkers that crowdsourcer $m$ can accommodate, contingent upon its available rewards. In cases where $\Lambda(n) = \emptyset$, crowdworker $n$ remains unmatched to any crowdsourcer. Similarly, if $\Lambda(m) = \emptyset$, crowdsourcer $m$ is not chosen by any crowdworker.
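As an illustration of Definition 1, the following hedged Python sketch checks conditions ($C_1$)–($C_4$) for a candidate matching; the dictionary-based representation of $\Lambda$ is an assumption made only for this example.

```python
# Illustrative check of conditions (C1)-(C4) in Definition 1; the dictionary
# representation of the matching Lambda is an assumption of this sketch.

def is_valid_matching(worker_match, sourcer_match, n_max):
    """worker_match: {n: m or None}; sourcer_match: {m: set of n}; n_max: {m: int}."""
    for n, m in worker_match.items():
        # (C1) each crowdworker is matched to at most one crowdsourcer, and
        # (C4, one direction) that crowdsourcer must list n in its coalition.
        if m is not None and n not in sourcer_match.get(m, set()):
            return False
    for m, workers in sourcer_match.items():
        # (C2) each crowdsourcer accommodates at most N_m^max crowdworkers.
        if len(workers) > n_max[m]:
            return False
        # (C3)/(C4, other direction) every listed crowdworker points back to m.
        if any(worker_match.get(n) != m for n in workers):
            return False
    return True

# Example: two crowdworkers matched to crowdsourcer "m1", one unmatched.
print(is_valid_matching({1: "m1", 2: "m1", 3: None},
                        {"m1": {1, 2}, "m2": set()},
                        {"m1": 2, "m2": 3}))  # True
```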
For each crowdworker $n \in \mathcal{N}$, preference is given to the crowdsourcer that yields the highest utility, as defined in Equation (1). The utility of crowdworker $n$, as expressed in Equation (1), is intricately tied not only to its individual decision, but also to the decisions of the other crowdworkers, captured by the term $\sum_{n \in \mathcal{N}_m} \hat{I}_n$. This phenomenon is commonly referred to as an externality in matching theory.
In the initial phase of our analysis, we consider the absence of externalities in the matching process. Consequently, the reformulation of the crowdworker’s approximate utility is articulated as follows:
$$\hat{U}_n = \hat{a}_n (\hat{P}_m - \hat{P}_n) \hat{I}_n \tag{3}$$
and the crowdsourcer’s approximate utility after omitting the externalities stemming from the rewards offered by the other crowdsourcers is formulated as follows:
$$\hat{U}_m = \sum_{n \in \mathcal{N}_m} \hat{U}_n - \hat{\gamma}_m (\hat{P}_m - \hat{C}_m)^2 \tag{4}$$
Definition 2. 
(Preference Relation) A preference relation $\succ$ is a binary relation over the elements of the sets $\mathcal{N}$ and $\mathcal{M}$ that is complete, reflexive, and transitive. The preference relation $\succ_n$ is formally defined as follows: for any crowdworker $n \in \mathcal{N}$ and any pair of crowdsourcers $m, m' \in \mathcal{M}$, where $m \neq m'$, the relation is given by
$$m \succ_n m' \Leftrightarrow \hat{U}_n(m) > \hat{U}_n(m') \tag{5}$$
Similarly, the preference relation $\succ_m$ is defined as follows: for any crowdsourcer $m \in \mathcal{M}$ and any pair of crowdworkers $n, n' \in \mathcal{N}$, where $n \neq n'$, the relation is given by
$$n \succ_m n' \Leftrightarrow \hat{U}_m(n) > \hat{U}_m(n') \tag{6}$$
With Definition 2, preference lists for crowdsourcers and crowdworkers are established. Subsequently, based on these preference lists, crowdsourcers and crowdworkers are matched with their preferred partners through the Approximate CROWDMATCH algorithm, outlined in Algorithm 1.
Algorithm 1 Approximate CROWDMATCH Algorithm
 1: Input: $\{I_n, a_n, P_n\}_{n \in \mathcal{N}}$, $\mathcal{N}$, $\mathcal{M}$, $\{N_m^{max}, \gamma_m, \delta_m, P_m\}_{m \in \mathcal{M}}$
 2: Output: Matching Results $\Lambda$
 3: Initialization:
 4: Initialize the unmatched set as $\mathcal{N}^* \leftarrow \mathcal{N}$ and the alternative crowdsourcers of each crowdworker as $\mathcal{M}_n \leftarrow \{m \mid m \in \mathcal{M}\}$ for all $n \in \mathcal{N}$
 5: while $\mathcal{N}^* \neq \emptyset$ and $\exists n \in \mathcal{N}^*: \mathcal{M}_n \neq \emptyset$ do
 6:   for $n \in \mathcal{N}^*$ do
 7:     Crowdworker $n$ selects its preferred crowdsourcer and sends an invitation based on Equation (5).
 8:   end for
 9:   for $m \in \mathcal{M}$ do
10:     if $|\Lambda(m)| \leq N_m^{max}$ and $m$ received an invitation then
11:       Crowdsourcer $m$ selects the favorite crowdworkers to pair with from the ones who sent a pair invitation, based on Equation (6).
12:       Delete $m$ from the alternative crowdsourcer lists of the crowdworkers that sent an invitation but were not accepted.
13:     end if
14:   end for
15: end while
The Approximate CROWDMATCH algorithm is based on a set of fundamental principles. Initially, all the crowdworkers participate in the competition for crowdsourcing tasks independently, without being paired with specific crowdsourcers. This approach provides flexibility, allowing the crowdworkers to be matched with any available crowdsourcer offering incentives. Unmatched crowdworkers take the initiative by extending invitations to pair with their preferred crowdsourcers. When a crowdsourcer possesses available rewards to accommodate crowdworkers, it consistently selects preferred crowdworkers for pairing from those who extended invitations. In instances where a crowdworker’s invitation to a crowdsourcer is not accepted, it signals that the crowdsourcer should consider a more favored crowdworker to maximize its utility, leading to the rejection of the initial pairing. Consequently, the rejected crowdworkers from a crowdsourcer strategically refrain from extending further invitations to disinterested crowdsourcers, preventing the wastage of time. The convergence of the Approximate CROWDMATCH algorithm occurs when either all crowdsourcers have recruited the maximum number of crowdworkers based on the available rewards they offer or when all the crowdworkers have been successfully paired.
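The following Python sketch mirrors the proposal-and-acceptance rounds of Algorithm 1 under the approximate utilities of Equations (3) and (4); the data layout (dictionaries of normalized parameters) and the tie-breaking via max/sorted are assumptions of this sketch, not the authors’ reference implementation.

```python
# Hedged sketch of the Approximate CROWDMATCH rounds; the data layout and
# tie-breaking are assumptions, not the paper's reference implementation.

def approx_worker_utility(a_hat, p_hat_m, p_hat_n, i_hat_n):
    # Equation (3): approximate crowdworker utility without externalities.
    return a_hat * (p_hat_m - p_hat_n) * i_hat_n

def approximate_crowdmatch(workers, sourcers):
    """workers: {n: {"a_hat", "P_hat", "I_hat"}}; sourcers: {m: {"P_hat", "N_max"}}."""
    match = {n: None for n in workers}                  # Lambda(n)
    coalition = {m: set() for m in sourcers}            # Lambda(m)
    alternatives = {n: set(sourcers) for n in workers}  # M_n
    while True:
        unmatched = [n for n in workers if match[n] is None and alternatives[n]]
        if not unmatched:
            break
        # Each unmatched crowdworker invites its preferred remaining crowdsourcer (Eq. (5)).
        proposals = {m: [] for m in sourcers}
        for n in unmatched:
            w = workers[n]
            best = max(alternatives[n],
                       key=lambda m: approx_worker_utility(
                           w["a_hat"], sourcers[m]["P_hat"], w["P_hat"], w["I_hat"]))
            proposals[best].append(n)
        # Crowdsourcers with spare capacity accept their favorite proposers (Eq. (6)),
        # i.e., those with the largest marginal contribution to Equation (4).
        for m, invited in proposals.items():
            spare = max(sourcers[m]["N_max"] - len(coalition[m]), 0)
            ranked = sorted(invited,
                            key=lambda n: approx_worker_utility(
                                workers[n]["a_hat"], sourcers[m]["P_hat"],
                                workers[n]["P_hat"], workers[n]["I_hat"]),
                            reverse=True)
            for n in ranked[:spare]:
                match[n] = m
                coalition[m].add(n)
            for n in ranked[spare:]:
                alternatives[n].discard(m)  # rejected: stop inviting this crowdsourcer
    return match, coalition
```

In this sketch, a crowdsourcer ranks its proposers by their approximate utility, which coincides with their marginal contribution to Equation (4) once the fixed operational term is removed.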

4. Accurate CROWDMATCH Based on Coalition Games

Given the presence of externalities, the matching derived in Section 3 via the Approximate CROWDMATCH algorithm guarantees only approximate optimality. The stable matching produced by the Approximate CROWDMATCH algorithm therefore acts as the initial input to the Accurate CROWDMATCH algorithm. Specifically, a coalition game is formulated, along with the corresponding Accurate CROWDMATCH algorithm, to refine the matching outcome. This approach addresses the externalities and achieves an improved result by building upon the initial matching.
Definition 3. 
(Coalition Game) The coalition game is formally characterized by the triple $(\mathcal{N}, \mathcal{M}, \{U_m\}_{m \in \mathcal{M}})$, where $\mathcal{N}$ denotes the set of players (crowdworkers), $\mathcal{M}$ represents the set of coalitions (crowdsourcers), and $U_m$ signifies the utility function associated with each coalition (i.e., crowdsourcer), as defined in Equation (2).
Definition 4. 
(Switching Rules) The following switching rules must be followed in the coalition game:
Rule 1:  For a crowdworker $n$ who has not selected a crowdsourcer, $n$ will join a crowdsourcer $m$ if $m = \arg\max_{m^* \in \mathcal{M}} \{U_{m^*}(m^* \cup \{n\}) - U_{m^*}(m^*) \mid U_{m^*}(m^* \cup \{n\}) - U_{m^*}(m^*) > 0\}$. The new set of coalitions is $\mathcal{M}' = \{\mathcal{M} \setminus \{m\}\} \cup \{m \cup \{n\}\}$;
Rule 2:  For $n \in m$, $n$ will leave crowdsourcer $m$ if $U_m(m \setminus \{n\}) > U_m(m)$. The new set of coalitions is $\mathcal{M}' = \{\mathcal{M} \setminus \{m\}\} \cup \{m \setminus \{n\}\}$;
Rule 3:  For $n \in \mathcal{N}_m$ and crowdsourcer $m'$, $n$ will leave the original crowdsourcer $m$ (where $m \neq m'$) and join another crowdsourcer $m'$ if and only if $U_{m'}(m' \cup \{n\}) + U_m(m \setminus \{n\}) > U_{m'}(m') + U_m(m)$. The new set of coalitions is $\mathcal{M}' = \{\mathcal{M} \setminus \{m, m'\}\} \cup \{m \setminus \{n\}\} \cup \{m' \cup \{n\}\}$;
Rule 4:  For $n \in m$ and $n' \in m'$, where $n \neq n'$, $n$ and $n'$ switch crowdsourcers if $U_m((m \setminus \{n\}) \cup \{n'\}) + U_{m'}((m' \setminus \{n'\}) \cup \{n\}) > U_m(m) + U_{m'}(m')$. The new set of coalitions is $\mathcal{M}' = \{\mathcal{M} \setminus \{m, m'\}\} \cup \{(m \setminus \{n\}) \cup \{n'\}\} \cup \{(m' \setminus \{n'\}) \cup \{n\}\}$.
In accordance with the switching rules articulated in Definition 4, we have formulated the Accurate CROWDMATCH Algorithm, presented in Algorithm 2. This algorithm is strategically devised to streamline the formation of robust coalitions between crowdworkers and crowdsourcers within the context of the crowdsourcing process. Its core aim is to systematically enhance the utility for the crowdworkers, optimizing their contributions to crowdsourcers and facilitating the efficient execution of the crowdsourcing tasks. Simultaneously, it endeavors to improve the utility for the crowdsourcers, thereby establishing a mutually beneficial framework that supports their collaborative involvement in the crowdsourcing process.
Theorem 1. 
(Nash–Individually Stable Matching) The concept of Nash–Individually stable matching, represented as Λ * , pertains to the allocation of crowdworkers to crowdsourcers. This allocation is considered stable when no individual crowdworker can improve their utility by switching to a different crowdsourcer. The Accurate CROWDMATCH algorithm is specifically crafted to ensure the existence of at least one Nash–Individually stable partition, denoted as Λ * .
Proof. 
Initially, let us assume that the matching configuration of the crowdworkers, denoted as $\Lambda^*$ and determined by the Accurate CROWDMATCH algorithm, does not exhibit Nash–Individual stability. In this context, at least one of the following conditions must hold: (i) $\exists n$ not matched to any crowdsourcer and $\exists m \in \mathcal{M}$ such that $m = \arg\max_{m^* \in \mathcal{M}} \{U_{m^*}(m^* \cup \{n\}) - U_{m^*}(m^*) \mid U_{m^*}(m^* \cup \{n\}) - U_{m^*}(m^*) > 0\}$; (ii) $\exists n \in \mathcal{N}_m$ satisfying $U_m(m \setminus \{n\}) > U_m(m)$; (iii) $\exists n \in \mathcal{N}_m$ and $m'$, $m' \neq m$, satisfying $U_{m'}(m' \cup \{n\}) + U_m(m \setminus \{n\}) > U_{m'}(m') + U_m(m)$; and (iv) $\exists n \in \mathcal{N}_m$, $n' \in \mathcal{N}_{m'}$, and $m \neq m'$, satisfying $U_m((m \setminus \{n\}) \cup \{n'\}) + U_{m'}((m' \setminus \{n'\}) \cup \{n\}) > U_m(m) + U_{m'}(m')$. However, in the Accurate CROWDMATCH algorithm, if any of the aforementioned conditions holds true, the crowdworkers will adhere to the corresponding switching rules described in Definition 4. Consequently, the matching of the crowdworkers cannot be considered final, as they will persistently modify their associations with crowdsourcers by following these switching rules. This contradicts our initial assumption, leading to the deduction that the Accurate CROWDMATCH algorithm converges to a Nash–Individually stable matching between the crowdworkers and the crowdsourcers.    □
Algorithm 2 Accurate CROWDMATCH Algorithm
 1: Input: $\Lambda$ initial from the Approximate CROWDMATCH Algorithm
 2: Output: Optimal Matching $\Lambda^*$
 3: repeat
 4:   Randomly select a crowdworker $n$ and its crowdsourcer $m$
 5:   if $n$ does not belong to any crowdsourcer then
 6:     $m = \arg\max_{m^* \in \mathcal{M}} \{U_{m^*}(m^* \cup \{n\}) - U_{m^*}(m^*) \mid U_{m^*}(m^* \cup \{n\}) - U_{m^*}(m^*) > 0\}$
 7:     $\mathcal{M}' = \{\mathcal{M} \setminus \{m\}\} \cup \{m \cup \{n\}\}$
 8:   else
 9:     Another crowdsourcer $m'$, $m' \neq m$, is randomly selected
10:     if $|\mathcal{N}_{m'}| < N_{m'}^{max}$ then
11:       if $U_{m'}(m' \cup \{n\}) + U_m(m \setminus \{n\}) > U_{m'}(m') + U_m(m)$ then
12:         $\mathcal{M}' = \{\mathcal{M} \setminus \{m, m'\}\} \cup \{m \setminus \{n\}\} \cup \{m' \cup \{n\}\}$
13:       end if
14:     else
15:       Randomly select a crowdworker $n'$ of crowdsourcer $m'$
16:       if $U_m((m \setminus \{n\}) \cup \{n'\}) + U_{m'}((m' \setminus \{n'\}) \cup \{n\}) > U_m(m) + U_{m'}(m')$ then
17:         $\mathcal{M}' = \{\mathcal{M} \setminus \{m, m'\}\} \cup \{(m \setminus \{n\}) \cup \{n'\}\} \cup \{(m' \setminus \{n'\}) \cup \{n\}\}$
18:       end if
19:     end if
20:   end if
21:   Update $m$ to the current crowdsourcer of $n$
22:   if $U_m(m \setminus \{n\}) > U_m(m)$ then
23:     $\mathcal{M}' = \{\mathcal{M} \setminus \{m\}\} \cup \{m \setminus \{n\}\}$
24:   end if
25: until no further updates of the crowdworkers
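To illustrate how a switching rule of Definition 4 is evaluated inside Algorithm 2, the following hedged Python sketch implements Rule 3 only (a single crowdworker moving between coalitions) using the accurate utilities of Equations (1) and (2); the data layout and helper names are assumptions of this example, and Rules 1, 2, and 4 would be coded analogously.

```python
# Hedged sketch of one switching step (Rule 3 of Definition 4); the data
# layout and field names are assumptions of this example.

def coalition_utility(members, m, workers, sourcers):
    """Accurate crowdsourcer utility U_m of Equation (2) for a given coalition."""
    s = sourcers[m]
    total_info = sum(workers[n]["I_hat"] for n in members)
    worker_sum = sum(workers[n]["a_hat"] * (s["P_hat"] - workers[n]["P_hat"])
                     * workers[n]["I_hat"] / total_info
                     for n in members) if members else 0.0
    market = s["delta_hat"] * sum(src["P_hat"] for src in sourcers.values())
    return worker_sum - s["gamma_hat"] * (s["P_hat"] - s["C_hat"]) ** 2 - market

def try_rule3_switch(n, m, m_prime, coalition, workers, sourcers):
    """Move crowdworker n from m to m' if the joint utility strictly improves."""
    if len(coalition[m_prime]) >= sourcers[m_prime]["N_max"]:
        return False  # m' is full; a swap (Rule 4) would be tried instead
    before = (coalition_utility(coalition[m], m, workers, sourcers)
              + coalition_utility(coalition[m_prime], m_prime, workers, sourcers))
    after = (coalition_utility(coalition[m] - {n}, m, workers, sourcers)
             + coalition_utility(coalition[m_prime] | {n}, m_prime, workers, sourcers))
    if after > before:
        coalition[m] = coalition[m] - {n}
        coalition[m_prime] = coalition[m_prime] | {n}
        return True
    return False
```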

5. Complexity Analysis

The complexity of the Approximate CROWDMATCH Algorithm is $O(N \cdot M + M)$. Specifically, the complexity of the component in which the crowdworkers select their preferred crowdsourcers is $O(N \cdot M)$, and the complexity of the part in which the crowdsourcers select their preferred crowdworkers is $O(M)$. The complexity of the Accurate CROWDMATCH Algorithm is $O(ite)$, where $ite$ denotes the number of iterations that the Accurate CROWDMATCH Algorithm needs to converge to the optimal matching $\Lambda^*$.
The Approximate CROWDMATCH algorithm follows matching theory, while the Accurate CROWDMATCH algorithm follows the theory of coalition games. Focusing on the Approximate CROWDMATCH algorithm, given the calculation of the approximate utilities of the crowdworkers and the crowdsourcers, each crowdworker creates a sorted list of its preferred crowdsourcers based on the preference relation in Equation (5). Initially, all the crowdworkers belong to the unmatched set $\mathcal{N}^*$, and they send their invitations to the crowdsourcers. Then, the crowdsourcers that still have capacity to accommodate recruited crowdworkers (line 10 of the Approximate CROWDMATCH algorithm) select their most preferred crowdworkers by calculating their utility function and following the preference relation defined in Equation (6). The crowdworkers that were not selected by a crowdsourcer delete that crowdsourcer from their alternative crowdsourcer list, avoiding resending a matching invitation to a crowdsourcer that has already rejected them. This process continues iteratively until all crowdworkers are matched to crowdsourcers or until the crowdsourcers have reached their capacity in terms of the crowdworkers they can accommodate. In order for the Approximate CROWDMATCH algorithm to be implemented, the crowdworkers need to know the rewards $P_m$ provided by the crowdsourcers, and the crowdsourcers require the utility values of the crowdworkers that have sent them a pair invitation. In a realistic implementation, this information can be exchanged in a single packet transmitted in each direction, carrying a very small amount of information, i.e., a single value. Focusing on the Accurate CROWDMATCH algorithm, the analysis is very similar in terms of the complexity and signaling needed, as the crowdworkers calculate their utility based on the rewards announced by the crowdsourcers and then follow the switching rules described in Definition 4 to converge to a stable matching. Based on the numerical results provided in Section 6, it is shown that, even for a large-scale setup, i.e., thousands of crowdworkers and tens of crowdsourcers, the Approximate and Accurate CROWDMATCH algorithms converge to a stable matching within a few seconds.

6. Numerical Evaluation

In this section, an exhaustive evaluative assessment is carried out for the proposed mechanisms, namely, Approximate and Accurate CROWDMATCH. The objective is to demonstrate their operational benefits and superior performance compared to contemporary methods. The primary focus is on presenting the self-managed matching between the crowdworkers and the crowdsourcers within the context of the crowdsourcing process. Section 6.1 provides a thorough exploration of the operational characteristics and effectiveness of the proposed framework for both the crowdworkers and the crowdsourcers. Moving on to Section 6.2, a scalability analysis is executed, progressively increasing the number of crowdworkers and crowdsourcers. This is undertaken to demonstrate the efficiency and resilience of the CROWDMATCH mechanism. Additionally, Section 6.3 conducts an all-encompassing comparative assessment of the CROWDMATCH mechanism in relation to other prevailing matching mechanisms found in the current literature. This comparative analysis aims to highlight CROWDMATCH mechanism’s superiority in concurrently meeting the requirements of both crowdworkers and crowdsourcers.
In the rest of the simulation results, we consider the following parameters: $N = 15$, $I_n \in [1, 8]$ Mbits, $P_n \in [5.0, 9.5] \cdot 10^{4}$ [\$/bit], $a_n \in [0.1, 0.8]$ [bits/\$], $M = 3$, $P_m = [3, 3.5, 4.5] \cdot 10^{3}$ [\$/bit], $\gamma_m = [5, 6, 7]$ [bits^2/\$^2], $\delta_m = [5, 6, 7] \cdot 10^{2}$ [\$/bit], $C_m = [5, 1, 1.5] \cdot 10^{4}$ [\$/bit], and $N_m^{max} = [5, 6, 7]$, unless otherwise explicitly stated. The values of the parameters are adopted from real crowdsourcers, such as Amazon Mechanical Turk [34]. The evaluation was performed on an HP Envy Desktop with an Intel i7-8700K 3.2 GHz processor and 24 GB of RAM.
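For reproducibility, a minimal sketch of how such a baseline scenario could be instantiated and normalized (following Section 2) is given below; the uniform sampling and the omission of the scale factors are assumptions of this sketch, and the exact values should be taken from the text above.

```python
# Illustrative instantiation of a baseline scenario; uniform sampling and the
# omitted scale factors of P_n, P_m, C_m are assumptions of this sketch.
import random

N, M = 15, 3
I = [random.uniform(1, 8) for _ in range(N)]        # information I_n [Mbits]
a = [random.uniform(0.1, 0.8) for _ in range(N)]    # profit evaluation a_n
P_n = [random.uniform(5.0, 9.5) for _ in range(N)]  # collection cost (unscaled)
P_m = [3.0, 3.5, 4.5]                               # crowdsourcer rewards (unscaled)
N_max = [5, 6, 7]                                   # crowdsourcer capacities

def normalize(values):
    """Normalization of Section 2, e.g., I_hat_n = I_n / max_n I_n."""
    top = max(values)
    return [v / top for v in values]

I_hat, a_hat, P_n_hat, P_m_hat = (normalize(v) for v in (I, a, P_n, P_m))
```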

6.1. Pure Operation Performance

In this section, we present a comprehensive analysis of the performance and functionality of the Approximate and Accurate CROWDMATCH algorithms. Specifically, Figure 1a,b depicts the approximate and accurate utility of crowdworkers, respectively, with respect to their assigned IDs. The results highlight a direct correlation between higher crowdworker IDs, indicative of increased information availability for upload to crowdsourcers, and elevated levels of both approximate (Figure 1a) and accurate utility (Figure 1b) following the convergence of the Approximate and Accurate CROWDMATCH algorithms. Furthermore, the results reveal that the achieved accurate utility of the crowdworkers is lower compared to the corresponding approximate utility given the consideration of the externalities imposed in the crowdsourcing process, stemming from the other crowdworkers’ decisions and the crowdsourcer rewards’ availability.
Moreover, our findings reveal that the Approximate CROWDMATCH algorithm empowers crowdworkers to judiciously select the most suitable crowdsourcer, thereby maximizing their utility. This selection process takes into account the maximum capacity ($N_m^{max}$) of each crowdsourcer to accommodate crowdworkers based on the availability of rewards. Figure 1a specifically illustrates the potential approximate utility that the crowdworkers could receive when paired with their preferred crowdsourcer. However, the Approximate CROWDMATCH algorithm ensures a balanced approach by prioritizing crowdworkers characterized by higher information availability, allowing them to be selected first by crowdsourcers to facilitate the crowdsourcing process.
Figure 2a,b presents the number of matched crowdworkers and the crowdsourcers’ utility as a function of the crowdsourcers’ ID under the Approximate and Accurate CROWDMATCH algorithms, respectively. The findings indicate that an increase in crowdsourcers’ IDs, signifying greater reward availability, leads to the recruitment of a larger pool of crowdworkers for participation in the crowdsourcing process. This, in turn, results in higher utility, as evidenced in both the Approximate (Figure 2a) and Accurate (Figure 2b) CROWDMATCH algorithms. Furthermore, crowdsourcers with higher IDs, denoting enhanced rewards availability, demonstrate a heightened capacity for accommodating and recruiting crowdworkers. This capacity is initially exhausted, allowing lower-budget crowdsourcers to subsequently engage crowdworkers under the Approximate CROWDMATCH algorithm. On the other hand, the Accurate CROWDMATCH algorithm mitigates the disparity between higher- and lower-budget crowdsourcers, facilitating even the latter to secure a higher share of recruited crowdworkers.
Focusing on the utility realized by crowdworkers when paired with crowdsourcers using the Approximate (Figure 3a) and Accurate (Figure 3b) CROWDMATCH algorithms, the results reveal that the utility experienced by the crowdworkers is contingent upon the selection of crowdsourcers. Specifically, within the framework of the Approximate CROWDMATCH algorithm, the crowdworkers possessing abundant information availability are matched with crowdsourcers offering the highest rewards in the crowdsourcing system. Conversely, the crowdworkers with low information availability are paired with crowdsourcers providing lower rewards. This imbalance leads to substantial variations in the overall utility achieved by the crowdworkers, associated with the crowdsourcers characterized by low versus high reward availability. This incongruity is alleviated by adhering to the Accurate CROWDMATCH algorithm, where crowdworkers with low information availability still have a favorable likelihood of being matched with crowdsourcers offering high rewards, thereby experiencing lower differences compared to the Approximate CROWDMATCH algorithm.

6.2. Scalability Analysis

In this section, an exhaustive scalability analysis is conducted, considering a progressive increase in the number of crowdworkers and crowdsourcers. A real online platform scenario is considered, consisting of crowdsourcers on the order of tens and crowdworkers on the order of thousands. The objective of this scalability analysis is twofold: first, to showcase the real-time viability of the CROWDMATCH mechanism, and, second, to elucidate the influence of an increasing number of crowdworkers and crowdsourcers on the utility experienced by the crowdworkers.
Figure 4a,b depicts the execution time and the average achieved utility of the crowdworkers, respectively, in relation to the increasing number of crowdworkers involved in the crowdsourcing process. These analyses are conducted under both the Approximate and Accurate CROWDMATCH algorithms. For clarity of presentation, we assume that a percentage increase in the number of crowdworkers results in an equivalent percentage decrease in the availability of their information, so that the overall information pool within the system remains constant irrespective of the number of engaged crowdworkers. This proportional relationship between the increase in crowdworkers and the decrease in their information availability simplifies the modeling and keeps the analysis focused on the core aspects of the model without introducing unnecessary complexity.
Furthermore, considering that the information uploaded by the crowdworkers to the crowdsourcers remains constant, a percentage increase in the number of crowdworkers corresponds to an equivalent percentage increase in the number of crowdworkers that each crowdsourcer can accommodate, i.e., $N_m^{max} = [3, 5, 7]$. The results demonstrate that, with an escalating number of crowdworkers participating in the crowdsourcing process, the execution time of both the Approximate and Accurate CROWDMATCH algorithms follows a comparable upward trajectory, as depicted in Figure 4a.
Turning attention to the average utility achieved by crowdworkers, a discernible downward trend is observed under both the Approximate and Accurate CROWDMATCH algorithms. It should be highlighted that the decreasing trend becomes more pronounced under the Accurate CROWDMATCH algorithm, reflecting heightened competition among crowdworkers competing to be matched with the most rewarding crowdsourcer in terms of incentives.
Focusing our scalability analysis on the scenario of increasing crowdsourcers, Figure 5a,b depicts the execution time and the average utility of the crowdworkers, respectively, in relation to the increasing number of participating crowdsourcers within the crowdsourcing process. The evaluation is conducted under the Approximate and Accurate CROWDMATCH algorithms. The results indicate that, with a growing number of crowdsourcers, the execution times of both the Approximate and Accurate CROWDMATCH algorithms exhibit a similar upward trend (Figure 5a). Nevertheless, the results affirm that, even with a substantial number of participating crowdsourcers, the execution time of the CROWDMATCH mechanism remains in the sub-millisecond range. Furthermore, the outcomes reveal that an increasing number of crowdsourcers leads to a diminished average utility for crowdworkers. This is attributed to the constant overall amount of rewards available in the system, which, when distributed among numerous crowdsourcers, results in smaller portions for individual crowdworkers.

6.3. Comparative Evaluation

In this section, a comprehensive comparative evaluation is conducted following the baseline scenario in terms of the number of crowdsourcers and crowdworkers for the proposed CROWDMATCH mechanism in comparison to three alternative models: (i) Highest Reward—wherein crowdworkers select crowdsourcers offering the highest rewards until their capacity is reached, subsequently moving to the next most rewarding crowdsourcer [35]; (ii) Random Matching—entailing a random allocation of crowdworkers to crowdsourcers; and (iii) Stochastic Learning Automata (SLA)—where the crowdworkers select a crowdsourcer following a reinforcement learning approach, based on the following probabilistic rules:
$$Pr_{n,m}(ite+1) = Pr_{n,m}(ite) + b \cdot \hat{r}_n^m(ite) \cdot (1 - Pr_{n,m}(ite)), \quad m(ite+1) = m(ite) \tag{7}$$
$$Pr_{n,m}(ite+1) = Pr_{n,m}(ite) - b \cdot \hat{r}_n^m(ite) \cdot Pr_{n,m}(ite), \quad m(ite+1) \neq m(ite) \tag{8}$$
where $\hat{r}_n^m(ite) = \frac{r_n^m(ite)}{\max\{r_n^m(ite)\}}$ denotes the normalized reward, where
$$r_n^m(ite) = \begin{cases} \frac{P_m}{N_m(ite-1)} - C_m & \text{if } N_m(ite-1) < N_m^{max} \\ 0 & \text{if } N_m(ite-1) = N_m^{max} \end{cases} \tag{9}$$
denotes the reward of crowdworker $n$ after selecting crowdsourcer $m$, $b \in (0,1)$ (in this simulation, $b = 0.8$) denotes the learning rate, and $ite$ is the SLA algorithm’s iteration index [36,37].
Figure 6a,b depicts the utility of crowdsourcers based on their ID as well as their average utility and the average utility of crowdworkers across various matching mechanisms, respectively. The results indicate that, under the Highest Reward model, the crowdworkers tend to favor crowdsourcers with the highest rewards, resulting in diminished utility for crowdsourcers offering lower rewards. Moreover, the Random Matching model yields inferior results compared to the CROWDMATCH mechanism, attributable to its simplistic approach in matching crowdworkers with crowdsourcers. The SLA scenario achieves better results for the crowdsourcers compared to the Highest Reward and Random Matching models; however, it still performs worse than the CROWDMATCH mechanism.
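As a reference for the SLA baseline described above, the following hedged Python sketch applies the probability update of Equations (7) and (8) and the reward of Equation (9); the list-based action probabilities and the guard against an empty coalition are assumptions of this sketch.

```python
# Hedged sketch of the SLA baseline; the list layout and the empty-coalition
# guard are assumptions of this example.

def sla_update(probs, chosen, reward_hat, b=0.8):
    """Reinforce the chosen crowdsourcer (Eq. (7)) and penalize the rest (Eq. (8)).

    probs: action probabilities Pr_{n,m} of one crowdworker (summing to 1);
    chosen: index of the crowdsourcer selected at this iteration;
    reward_hat: normalized reward in [0, 1].
    """
    return [p + b * reward_hat * (1 - p) if m == chosen
            else p - b * reward_hat * p
            for m, p in enumerate(probs)]

def sla_reward(p_m, c_m, n_matched_prev, n_max):
    # Equation (9): per-crowdworker reward net of the crowdsourcer's cost,
    # zero once the crowdsourcer has reached its capacity.
    if n_matched_prev >= n_max:
        return 0.0
    return p_m / max(n_matched_prev, 1) - c_m

# Example: three crowdsourcers, crowdworker picks index 1 and is rewarded.
probs = sla_update([1 / 3, 1 / 3, 1 / 3], chosen=1, reward_hat=0.5)
```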
When examining the utility achieved by crowdworkers, the CROWDMATCH mechanism outperforms the Highest Reward, Random Matching, and SLA models. This stems from the observation that, under the Highest Reward model, the crowdworkers with greater information availability disproportionately select crowdsourcers that offer the highest rewards, thereby leaving other crowdworkers with suboptimal utility. Conversely, the Random Matching model achieves a mix of selections from crowdworkers with varying information availability, resulting in better outcomes compared to the Highest Reward model. In contrast, the proposed CROWDMATCH mechanism achieves superior average utility for crowdworkers, even compared to the SLA reinforcement learning-based model, as it strategically considers both crowdworkers’ and crowdsourcers’ characteristics. The CROWDMATCH approach enables a more effective and balanced allocation, enhancing the overall utility for both parties involved. Considering the complexity of the proposed models, the execution time of the CROWDMATCH mechanism was on the order of 56 ms, while the execution time of the SLA model was double that. The Highest Reward and Random Matching models had negligible execution time due to their non-iterative nature.

7. Conclusions

In conclusion, this paper introduces the CROWDMATCH mechanism as a novel solution to challenges in crowdsourcing. Specifically, our research presents an innovative approach that empowers crowdworkers to strategically select suitable crowdsourcers for information contribution, considering incentives (i.e., rewards), information availability, and information collection costs, while accounting for decisions made by other crowdworkers. The development of the Approximate CROWDMATCH mechanism, grounded in matching theory principles, eliminates externalities from crowdworkers’ decisions, allowing for individual utility maximization and simultaneous benefits for crowdsourcers. Building upon this, the Accurate CROWDMATCH mechanism incorporates coalition game-theoretic principles, refining the matching process to consider externalities and converging to an optimal and Nash–Individually stable matching. The unique contributions of this research include the introduction of the CROWDMATCH system model, the formulation of utility functions for the crowdworkers and crowdsourcers, and the proposal of two distributed matching mechanisms. Comprehensive simulation results demonstrate the operational advantages, scalability, and superior performance of the CROWDMATCH mechanism over alternative methods in empowering crowdworkers in crowdsourcer selection within large-scale crowdsourcing systems.
Our current and future work includes the refinement and optimization of the CROWDMATCH mechanism to accommodate dynamic and evolving contexts, such as fluctuating incentive structures or varying information availability. Additionally, the integration of machine learning techniques could enhance the mechanism’s adaptability by learning from historical matching patterns and continuously improving its decision-making process. Exploring the application of CROWDMATCH across diverse domains and real-world platforms would provide valuable insights into its versatility and effectiveness in different contexts. Furthermore, investigating the implications of incorporating fairness and diversity considerations into the matching process could contribute to the development of more inclusive and equitable crowdsourcing systems.

Author Contributions

Conceptualization and writing, A.A. and R.K.; methodology and supervision, E.E.T. All authors have read and agreed to the published version of the manuscript.

Funding

The research of Eirini Eleni Tsiropoulou was partially supported by the UNM Research Allocation Committee award, and the NSF Awards #2219617 and #2319994.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Yao, W. A game based model for group decision making in social networks. In Proceedings of the 2022 International Conference on Frontiers of Communications, Information System and Data Science (CISDS), Guangzhou, China, 25–27 November 2022; pp. 148–153. [Google Scholar] [CrossRef]
  2. Zhou, X.; Pan, D.; Song, H.; Huang, X. Socially-Aware D2D Pair Strategy: A Stable Matching Approach. In Proceedings of the 2020 IEEE 39th International Performance Computing and Communications Conference (IPCCC), Austin, TX, USA, 6–8 November 2020; pp. 1–4. [Google Scholar] [CrossRef]
  3. Zhao, L.; Tan, W.; Li, B.; Xu, L.; Yang, Y. Multiple Cooperative Task Assignment on Reliability-Oriented Social Crowdsourcing. IEEE Trans. Serv. Comput. 2022, 15, 3402–3416. [Google Scholar] [CrossRef]
  4. Tan, W.; Zhao, L.; Li, B.; Xu, L.; Yang, Y. Multiple Cooperative Task Allocation in Group-Oriented Social Mobile Crowdsensing. IEEE Trans. Serv. Comput. 2022, 15, 3387–3401. [Google Scholar] [CrossRef]
  5. Wang, Y.; Dai, W.; Jin, Q.; Ma, J. BciNet: A Biased Contest-Based Crowdsourcing Incentive Mechanism Through Exploiting Social Networks. IEEE Trans. Syst. Man, Cybern. Syst. 2020, 50, 2926–2937. [Google Scholar] [CrossRef]
  6. Ji, G.; Yao, Z.; Zhang, B.; Li, C. Quality-Driven Online Task-Bundling-Based Incentive Mechanism for Mobile Crowdsensing. IEEE Trans. Veh. Technol. 2022, 71, 7876–7889. [Google Scholar] [CrossRef]
  7. Jiang, J.; An, B.; Jiang, Y.; Zhang, C.; Bu, Z.; Cao, J. Group-Oriented Task Allocation for Crowdsourcing in Social Networks. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 4417–4432. [Google Scholar] [CrossRef]
  8. Chen, X.; Zhao, Y.; Zheng, K.; Yang, B.; Jensen, C.S. Influence-aware Task Assignment in Spatial Crowdsourcing. In Proceedings of the 2022 IEEE 38th International Conference on Data Engineering (ICDE), Kuala Lumpur, Malaysia, 9–12 May 2022; pp. 2141–2153. [Google Scholar] [CrossRef]
  9. Zhao, L.; Tan, W.; Xu, L.; Xie, N.; Huang, L. Crowd-Based Cooperative Task Allocation via Multicriteria Optimization and Decision-Making. IEEE Syst. J. 2020, 14, 3904–3915. [Google Scholar] [CrossRef]
  10. Peng, S.; Zhang, B.; Yan, Y.; Li, C. Time Window-based Online Task Assignment for Mobile Crowdsensing. In Proceedings of the ICC 2021—IEEE International Conference on Communications, Montreal, QC, Canada, 14–23 June 2021; pp. 1–6. [Google Scholar] [CrossRef]
  11. Sun, G.; Wang, Y.; Ding, X.; Hu, R. Cost-Fair Task Allocation in Mobile Crowd Sensing with Probabilistic Users. IEEE Trans. Mob. Comput. 2021, 20, 403–415. [Google Scholar] [CrossRef]
  12. Yucel, F.; Bulut, E. Online Stable Task Assignment in Opportunistic Mobile Crowdsensing with Uncertain Trajectories. IEEE Internet Things J. 2022, 9, 9086–9101. [Google Scholar] [CrossRef]
  13. Su, Z.; Dai, M.; Qi, Q.; Wang, Y.; Xu, Q.; Yang, Q. Task Allocation Scheme for Cyber Physical Social Systems. IEEE Trans. Netw. Sci. Eng. 2020, 7, 832–842. [Google Scholar] [CrossRef]
  14. Lin, Y.; Cai, Z.; Wang, X.; Hao, F.; Wang, L.; Sai, A.M.V.V. Multi-Round Incentive Mechanism for Cold Start-Enabled Mobile Crowdsensing. IEEE Trans. Veh. Technol. 2021, 70, 993–1007. [Google Scholar] [CrossRef]
  15. Xu, J.; Luo, Z.; Guan, C.; Yang, D.; Liu, L.; Zhang, Y. Hiring a Team From Social Network: Incentive Mechanism Design for Two-Tiered Social Mobile Crowdsourcing. IEEE Trans. Mob. Comput. 2023, 22, 4664–4681. [Google Scholar] [CrossRef]
  16. Xu, Y.; Xiao, M.; Wu, J.; Zhang, S.; Gao, G. Incentive Mechanism for Spatial Crowdsourcing with Unknown Social-Aware Workers: A Three-Stage Stackelberg Game Approach. IEEE Trans. Mob. Comput. 2023, 22, 4698–4713. [Google Scholar] [CrossRef]
  17. Nie, J.; Luo, J.; Xiong, Z.; Niyato, D.; Wang, P.; Poor, H.V. A Multi-Leader Multi-Follower Game-Based Analysis for Incentive Mechanisms in Socially-Aware Mobile Crowdsensing. IEEE Trans. Wirel. Commun. 2021, 20, 1457–1471. [Google Scholar] [CrossRef]
  18. Nie, J.; Luo, J.; Xiong, Z.; Niyato, D.; Wang, P.; Zhang, Y. Multi-Leader Multi-Follower Game-based Incentive Scheme for Socially-Aware Mobile Crowdsensing. In Proceedings of the 2021 IEEE Wireless Communications and Networking Conference (WCNC), Nanjing, China, 29 March–1 April 2021; pp. 1–6. [Google Scholar] [CrossRef]
  19. Xu, J.; Chen, G.; Zhou, Y.; Rao, Z.; Yang, D.; Xie, C. Incentive Mechanisms for Large-Scale Crowdsourcing Task Diffusion Based on Social Influence. IEEE Trans. Veh. Technol. 2021, 70, 3731–3745. [Google Scholar] [CrossRef]
  20. Rahman, A.B.; Siraj, M.S.; Kubiak, N.; Tsiropoulou, E.E.; Papavassiliou, S. Network Economics-based Crowdsourcing in Online Social Networks. In Proceedings of the GLOBECOM 2022—2022 IEEE Global Communications Conference, Rio de Janeiro, Brazil, 4–8 December 2022; pp. 4655–4660. [Google Scholar] [CrossRef]
  21. Xin, K.; Fan, S.; Cai, W. Psychological Game Analysis for Crowdsourcing with Reciprocity. In Proceedings of the ICC 2022—IEEE International Conference on Communications, Seoul, Republic of Korea, 16–20 May 2022; pp. 4920–4925. [Google Scholar] [CrossRef]
  22. Tang, W.; Sun, H.; Wang, J.; Liu, C.; Qi, Q.; Wang, J.; Liao, J. Identifying Users Across Social Media Networks for Interpretable Fine-Grained Neighborhood Matching by Adaptive GAT. IEEE Trans. Serv. Comput. 2023, 16, 3453–3466. [Google Scholar] [CrossRef]
  23. Meng, Z.; Yu, C.; Qian, Y. Privacy-Preserving Task Allocation and Decentralized Dispute Protocol in Mobile Crowdsourcing. In Proceedings of the ICC 2023—IEEE International Conference on Communications, Rome, Italy, 28 May–1 June 2023; pp. 1579–1584. [Google Scholar] [CrossRef]
  24. He, J.; Li, Y.; Zhu, N. A Game Theory-Based Model for the Dissemination of Privacy Information in Online Social Networks. Future Internet 2023, 15, 92. [Google Scholar] [CrossRef]
  25. Adesokan, A.; Siraj, M.S.; Rahman, A.B.; Tsiropoulou, E.E.; Papavassiliou, S. How to become an Influencer in Social Networks. In Proceedings of the ICC 2023—IEEE International Conference on Communications, Rome, Italy, 28 May–1 June 2023; pp. 5570–5575. [Google Scholar] [CrossRef]
  26. Pouryazdan, M.; Kantarci, B. On Coalitional and Non-Coalitional Games in the Design of User Incentives for Dependable Mobile Crowdsensing Services. In Proceedings of the 2020 IEEE International Conference on Service Oriented Systems Engineering (SOSE), Oxford, UK, 3–6 August 2020; pp. 40–48. [Google Scholar] [CrossRef]
  27. Yang, G.; Wang, B.; He, X.; Wang, J.; Pervaiz, H. Competition-congestion-aware stable worker-task matching in mobile crowd sensing. IEEE Trans. Netw. Serv. Manag. 2021, 18, 3719–3732. [Google Scholar] [CrossRef]
  28. Alagha, A.; Singh, S.; Otrok, H.; Mizouni, R. Influence- and Interest-Based Worker Recruitment in Crowdsourcing Using Online Social Networks. IEEE Trans. Netw. Serv. Manag. 2023, 20, 1924–1936. [Google Scholar] [CrossRef]
  29. Jin, R.; Su, J. Self-Organizing Coalition Formation Based on Non-Cooperative Games in Social Networks. In Proceedings of the 2022 International Conference on Information Technology, Communication Ecosystem and Management (ITCEM), Bangkok, Thailand, 19–21 December 2022; pp. 90–94. [Google Scholar] [CrossRef]
  30. Zhang, N.; Liu, Z.; Li, F.; Xu, Z.; Chen, Z. Stable Matching for Crowdsourcing Last-Mile Delivery. IEEE Trans. Intell. Transp. Syst. 2023, 24, 8174–8187. [Google Scholar] [CrossRef]
  31. Wang, L.; Yang, D.; Yu, Z.; Han, Q.; Wang, E.; Zhou, K.; Guo, B. Acceptance-Aware Mobile Crowdsourcing Worker Recruitment in Social Networks. IEEE Trans. Mob. Comput. 2023, 22, 634–646. [Google Scholar] [CrossRef]
  32. Jiang, J.; Di, K.; An, B.; Jiang, Y.; Bu, Z.; Cao, J. Batch Crowdsourcing for Complex Tasks Based on Distributed Team Formation in E-Markets. IEEE Trans. Parallel Distrib. Syst. 2022, 33, 3600–3615. [Google Scholar] [CrossRef]
  33. Hamrouni, A.; Ghazzai, H.; Alelyani, T.; Massoud, Y. Low-Complexity Recruitment for Collaborative Mobile Crowdsourcing Using Graph Neural Networks. IEEE Internet Things J. 2022, 9, 813–829. [Google Scholar] [CrossRef]
  34. Hara, K.; Adams, A.; Milland, K.; Savage, S.; Callison-Burch, C.; Bigham, J.P. A data-driven analysis of workers’ earnings on Amazon Mechanical Turk. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; pp. 1–14. [Google Scholar]
  35. Dang, K.H.; Cao, K.T. Towards reward-based spatial crowdsourcing. In Proceedings of the 2013 International Conference on Control, Automation and Information Sciences (ICCAIS), Nha Trang, Vietnam, 25–28 November 2013; pp. 363–368. [Google Scholar]
  36. Fragkos, G.; Apostolopoulos, P.A.; Tsiropoulou, E.E. ESCAPE: Evacuation strategy through clustering and autonomous operation in public safety systems. Future Internet 2019, 11, 20. [Google Scholar] [CrossRef]
  37. Fragkos, G.; Kemp, N.; Tsiropoulou, E.E.; Papavassiliou, S. Artificial intelligence empowered UAVs data offloading in mobile edge computing. In Proceedings of the ICC 2020-2020 IEEE International Conference on Communications (ICC), Dublin, Ireland, 7–11 June 2020; pp. 1–7. [Google Scholar]
Figure 1. Crowdworkers’ (a) approximate (Equation (3)), and (b) accurate utility (Equation (1)).
Figure 2. Number of matched crowdworkers to crowdsourcers and crowdsourcers’ utility under the (a) Approximate CROWDMATCH algorithm, and (b) Accurate CROWDMATCH algorithm.
Figure 3. Total crowdworkers’ utility per associated crowdsourcer under the (a) Approximate, and (b) Accurate CROWDMATCH algorithms.
Figure 4. Scalability analysis for an increasing number of crowdworkers: (a) execution time, and (b) average crowdworkers’ utility under the Approximate and Accurate CROWDMATCH algorithms.
Figure 5. Scalability analysis for an increasing number of crowdsourcers: (a) execution time, and (b) average crowdworkers’ utility under the Approximate and Accurate CROWDMATCH algorithms.
Figure 6. Comparative evaluation: (a) average crowdsourcers’ utility and (b) average crowdworkers’ utility under the (i) Accurate CROWDMATCH mechanism, (ii) Highest Reward, (iii) Random Matching, and (iv) Stochastic Learning Automata (SLA) models.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
