Article

A Quadratic Programming Model for Fair Resource Allocation

1 School of Transportation Science and Engineering, Beihang University, Beijing 100191, China
2 Faculty of Business, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong 999077, China
3 Business School, University of Bristol, Bristol BS8 1PY, UK
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(16), 2635; https://doi.org/10.3390/math13162635
Submission received: 1 July 2025 / Revised: 12 August 2025 / Accepted: 14 August 2025 / Published: 16 August 2025

Abstract

In collaborative projects, traditional resource allocation methods often rely on company-assigned contribution rates, which can be subjective and lead to unfair outcomes. To address this, we propose a quadratic programming model that integrates participants’ self-reported rankings of their contributions across projects with company evaluations. The model minimizes deviations from company-assigned rates while ensuring consistency with participants’ perceived contribution rankings. Extensive simulations demonstrate that the proposed method reduces allocation errors by an average of 50.8% compared to the traditional approach and by 21.4% compared to the method that considers only individual estimation tendencies. Additionally, the average loss reduction in individual resource allocation ranges from 40% to 70% relative to the traditional method and from 10% to 50% relative to the estimation-based method. Sensitivity analyses further reveal the model’s robustness and its particular value in flawed evaluation systems: the error is reduced by approximately 75% in scenarios where company evaluations are highly inaccurate. While its effectiveness is affected by factors such as team size variability and self-assessment errors, the approach consistently provides a more equitable allocation of resources that better reflects actual individual contributions, offering valuable insights for improving fairness in team projects.

1. Introduction

Evaluating individual contributions in collaborative projects presents a critical challenge in project management, directly impacting fairness, team morale, and project success [1]. Companies depend on accurate performance assessments to fairly distribute resources like bonuses, budgets, or rewards across multiple projects. However, managerial evaluations used for assigning contribution rates often suffer from subjective biases and measurement errors [2], leading to inaccurate assessments that disrupt team trust. These inaccuracies can reduce motivation and productivity [3], undermining project outcomes. The complexity of collaborative work, where individual efforts intertwine with team dynamics [4], makes isolating and quantifying personal contributions particularly difficult.
To address these challenges, self-assessment mechanisms have been developed, enabling team members to assess and rank their contributions across projects [5]. These mechanisms offer a perspective to counterbalance managerial inaccuracies. Research shows that integrating self-assessment data into data-driven optimization models significantly improves fairness and accuracy in resource allocation [6], as these models use mathematical objectives to balance overall efficiency with fairness metrics such as minimizing inequality. However, self-assessments are susceptible to psychological biases like overconfidence, where individuals overestimate contributions due to inflated self-perception, or social comparison, where evaluations are adjusted based on perceived peer performance [7]. Such biases can compromise reliability. Recent advancements propose combining self-assessments with peer evaluations, which provide external perspectives on contributions, and objective metrics like task completion rates or measurable deliverables [8]. For instance, peer evaluations can cross-validate self-reported rankings, while project milestone completion rates serve as quantifiable benchmarks to reduce subjective distortions. These findings emphasize the need for robust, multi-source frameworks that integrate diverse evaluation methods to ensure equitable resource allocation in complex project environments.
To address the inherent limitations in traditional resource allocation methods, which often rely on company-assigned contribution rates that are prone to inaccuracies and subjectivity, self-assessment mechanisms have gained attention. These mechanisms allow team members to evaluate and rank their contributions to projects, offering a more personalized view of individual effort [6]. Participants are allowed to rank their own perceived contributions across multiple projects, thereby providing a valuable perspective that helps to reduce the biases associated with company-assigned evaluations [9]. Although self-assessments are also prone to bias, integrating them with company evaluations in a hybrid optimization model helps to reconcile these differing perspectives [10], thereby mitigating the inaccuracies of any single source and improving the overall equity of resource allocation [11].
This study presents an optimization framework that merges company-assigned contribution rates with individual self-reported rankings across projects to enhance equity and efficiency in team project resource allocations. The framework systematically incorporates participants’ self-assessed project rankings [12]. By integrating insights from self-assessment methodologies [10], it mitigates biases in traditional company-only evaluations through dual-source validation, while fostering collaboration mechanisms [13] that are aligned with group performance studies. The core of our approach is a quadratic programming (QP) model that adjusts the original company-assigned contribution rates. This model employs a deviation minimization strategy rooted in quadratic optimization principles [14], which minimizes the squared differences between the adjusted and original rates to maintain proximity to the company’s initial assessments. Guided by optimization theory fundamentals [6], linear constraints derived from self-reported rankings preserve participants’ ordinal inputs. This dual-source-based approach ensures that resource allocations better reflect actual contributions than the traditional allocation approach based on company assessment. The key contributions of this paper are as follows: (i) we develop a new resource allocation method based on the QP model that uniquely integrates company-assigned contribution rates with participants’ self-reported project rankings based on their perceived contribution rates; (ii) we conduct extensive experiments to demonstrate that this method significantly outperforms both the traditional company-assigned allocation method and the method considering estimation tendencies, providing more reliable and fair allocation in diverse scenarios; and (iii) we obtain managerial insights through sensitivity analyses to support the practical application of our proposed new resource allocation method. 
Among these, the most important contribution is that we have transformed participants’ evaluations into a ranking format. The method relies on participants’ self-ranking of the projects that they are involved in, rather than comparison of their contributions with those of others. This approach mitigates the impact of individual evaluation tendencies in self-assessments, such as intentional overstatement or understatement.
The remainder of this paper is structured as follows. Section 2 provides an overview of the relevant literature on performance evaluation, team resource allocation, and self-assessment. Section 3 presents the problem description and formulates the QP model. Section 4 describes the experimental setup, reports the basic optimization results, and performs sensitivity analyses on key parameters. Finally, Section 5 concludes the paper and outlines future research directions.

2. Literature Review

To comprehensively understand the challenges in achieving fair resource allocation in team projects, we first review the literature on performance evaluation and team resource allocation. Then, we examine studies that explore self-assessment methods and investigate strategies aimed at mitigating the risks associated with evaluation inaccuracies.

2.1. Performance Evaluation and Team Resource Allocation

Performance evaluation systematically assesses employees’ job performance based on factors like productivity, quality, and contributions [15]. In enterprise management, it supports professional development, compensation decisions, and resource allocation [16]. Common systems include annual reviews, continuous feedback, and multi-source appraisals (e.g., manager, peer, and self-assessments) [17]. These systems are complex due to diverse roles and varying metrics, like judging code quality for developers or sales targets for marketers [18]. For example, evaluating a participant’s contribution, which involves leadership, coordination, and deliverables, is difficult because it requires specific criteria and is often affected by subjective judgments that can introduce biases [19].
Resource allocation involves distributing incentives, like bonuses, based on individuals’ contributions across projects [20]. This process is challenging due to interdependencies among team members and the need for equitable distribution [21]. For instance, reward allocation often prioritizes central or consistently high-performing members, potentially overlooking others whose contributions are equally essential [22]. While optimized strategies are required for effective resource allocation, this process is frequently undermined by information asymmetry and subjective biases [23].
Research explores methods to address these challenges. Grand et al. [24] propose simulation-based assessments to evaluate resource allocation strategies. Tavoletti et al. [25] examine nationality biases in peer evaluations, emphasizing their impact on resource distribution, especially in global teams. Jiang et al. [26] develop a data-driven optimization model to refine contribution rate assessments, improving fairness. Machine learning techniques, such as Resce et al.’s [27] use of predictions in academic networks, are applied to team resource allocation. Vander Schee and Birrittella [28] use hybrid peer grading for resource allocation based on perceived fairness. These methods, applied in project management and agile workflows, aim to reduce biases and enhance allocation efficiency [18].

2.2. Self-Assessment Methods and Risk Mitigation

Self-assessment allows participants to evaluate their contributions through ranking, percentage allocation, or scoring systems, offering transparency in performance evaluation and resource allocation in enterprise settings [6]. Studies highlight its value in aligning tasks with individual capabilities, guiding training, and supporting resource allocation decisions. Sun et al. [29] present a fuzzy assessment model, validated through case studies, which enhances resource allocation efficiency. Li and Chen [30] introduce anonymous self-assessment tools in a university case study to mitigate social anxieties in peer evaluation. Yan et al. [31] conduct a meta-analysis on the effect of self-assessment on academic performance, emphasizing its role in aligning goals and improving resource planning in organizations.
Self-assessment benefits enterprise management by promoting engagement, self-awareness, and accountability, complementing managerial evaluations [32]. It offers insights into individual efforts, reducing dependence on potentially biased supervisor assessments [33]. In resource allocation, it helps to identify skills and contributions in distributed teams, improving task allocation productivity [34]. However, self-assessment can be prone to inaccuracies, undermining reliability. Froese and Roelle [32] note that psychological biases, such as task characteristics or self-efficacy, can lead individuals to over-report or under-report their contributions. Suls and Wills [35] describe how social comparison biases distort evaluations when participants compare themselves to peers. Psychological biases in self-assessment can lead to inaccurate contribution evaluations, often manifesting as underestimation or overestimation in problem-solving tasks [36]. In corporate settings, unstructured self-assessments can lead to lenient or inconsistent evaluations, distorting resource allocation and negatively impacting team morale [37].
To address these challenges, research highlights effective strategies to enhance self-assessment reliability. Li and Chen [30] suggest calibrating self-assessments with objective metrics like task completion data to improve accuracy. Ogryczak et al. [38] recommend combining self-assessments with peer evaluations or anonymous methods to reduce bias. Karpen [7] integrates self-assessment data into optimization models using fairness metrics like the Gini coefficient to correct inaccuracies. Clayton Bernard and Kermarrec [39] reveal that combining self-assessments and peer-assessments enhances evaluation accuracy in collaborative settings. These strategies strengthen self-assessment for performance management and resource allocation in enterprise contexts.

2.3. Advances in QP Models

QP is a well-established technique for optimizing quadratic objective functions subject to linear constraints. Its applications span various fields, including finance, engineering, and operational research [40]. Recent research has enhanced its capabilities, particularly with the development of more robust and efficient solvers. For example, active-set algorithms have been introduced for convex QP problems with box constraints [41], while high-performance toolkits like the Proximal Interior-Point QP (PIQP) solver efficiently handle generic sparse quadratic programs [42]. QP models are also being integrated with other advanced technologies to address complex, modern problems. This includes developing hybrid models that integrate QP with other techniques, such as evolutionary computation, to solve complex multi-objective problems [43]. Other work has explored the use of advanced neurodynamic models to solve general convex quadratic programs, demonstrating continuous effort to enhance the efficiency and scope of QP solvers [44]. These advancements show a clear trend towards using QP as a core component in sophisticated, hybrid models.
Recent advancements in QP models have extended their applications across various domains, showcasing their versatility and effectiveness in solving complex optimization problems. Mosleh et al. [45] integrate QP into proportional–fair resource allocation for Long Term Evolution networks, while Guo and Wang [46] apply cooperative game-based QP for profit distribution in construction projects. Quirynen et al. [47] utilize PIQP to enable real-time solutions for vehicle routing problems. Garcia [48] formulates a quadratic objective model for virtual team coordination aimed at achieving fair assignments that maximize collective goals. These studies demonstrate the effectiveness of QP’s objective function in modeling fairness, often through minimizing variance or squared deviations from an equitable state, making it highly suitable for fair resource allocation in team settings.

2.4. Optimization Techniques for Fair Resource Allocation

Recent studies explore advanced optimization techniques to enhance fairness in resource allocation. Tanasescu et al. [49] use machine learning algorithms, including random forest and XGBoost, to predict employee performance scores, minimizing bias and supporting more equitable bonus allocation. Freund and Hssaine [50] apply dynamic programming to balance individual contributions and team equity in crowdsourcing teams, reducing bonus disparities. Figueiredo et al. [51] conduct a systematic review of reward systems, emphasizing the importance of equity and transparency in team settings. Their findings recommend criteria-based models to foster knowledge sharing and commitment, with implications for fair bonus distribution in remote and collaborative teams.
In blockchain-inspired models, Sahin et al. [52] propose game-theoretic optimization for fair rewards in proof-of-authority systems, ensuring proportional bonus distribution based on contributions. Liu et al. [53] use graph models for team formation and reward allocation under budget constraints in crowdsensing contexts, maximizing performance and fairness through integer programming. Kumar and Yeoh [54] advance multi-agent reinforcement learning for fair resource allocation, employing weighted optimization of fairness and utility. Their Split Q-estimators for equity and efficiency outperform traditional methods in team-based bonus distribution simulations.
These advancements reflect a shift toward integrated, data-driven models that incorporate machine learning and fairness metrics, outperforming traditional subjective methods.

2.5. Research Gaps

Despite progress in performance evaluation and optimization modeling, significant gaps remain. A primary limitation is the use of assessment methods, such as supervisor and self-evaluations, in isolation, failing to leverage their complementary strengths. These methods, while valuable, are prone to biases. Self-assessments can be distorted by psychological factors, leading to over- or under-reporting of contributions [32,36], while supervisor-based appraisals are affected by subjective judgment and bias [33]. Relying on a single evaluation source can result in inaccurate assessments, distorting resource allocation and harming team morale [37]. Most of the existing literature lacks effective methods to reconcile conflicting assessments from multiple sources.
Furthermore, while optimization frameworks have been proposed to improve fairness, they have limitations. Some models that incorporate fairness metrics, such as the Gini coefficient, often lack a clear mechanism for integrating and balancing organizational-level assessments with individual-level perceptions [38]. The challenge of aligning company-assigned contribution rates with individual self-perceptions remains unresolved, as corrections reduce but cannot fully eliminate subjective biases [26]. Therefore, there is a need for models that specifically address the reconciliation of divergent viewpoints between the organization and its employees within a unified framework [45,46]. Existing research often focuses on optimizing performance prediction or decentralized reward systems, but fails to adequately address subjective biases in self-assessments or inconsistencies across multiple sources. For example, some models focus on predictive accuracy without accounting for individual perceptions [49], while others optimize reward distribution without aligning organizational and self-assessments [52]. Additionally, some incentive models address fairness in specific contexts, but fail to generalize to enterprise settings with multi-source inconsistencies [50]. These gaps highlight the need for an integrated approach that combines organizational and individual perspectives for fairer resource allocation.
This study addresses these gaps by proposing a QP model that refines company-assigned contribution estimates by incorporating self-assessment rankings. The QP model minimizes squared errors, balancing organizational evaluations with individual perspectives while respecting fairness constraints. Unlike previous heuristic methods, our model formalizes this reconciliation process, providing a structured, repeatable solution for aligning self-rankings with company assessments. We also demonstrate the robustness of our QP model through sensitivity analyses, enhancing its practical applicability in real-world corporate resource allocation. By integrating diverse data sources into a formal optimization framework, our approach provides a novel solution to challenges left unaddressed by earlier research.

3. Problem Formulation

In this section, we first present a mathematical description of the resource allocation problem based on the contribution rate in Section 3.1. Following that, we formulate the problem in Section 3.2. Then, we introduce the evaluation metrics used to assess the effectiveness of the proposed method in Section 3.3.

3.1. Mathematical Description

Consider a setting in which a group of participants have collaborated to complete a set of projects. Each project involves at least two participants, and each participant is assigned to at least two different projects. The set of all participants is denoted by $P$ (indexed by $p$), and the set of all projects is denoted by $Q$ (indexed by $q$). For each participant $p$, we denote the set of projects in which he or she participated by $Q_p \subseteq Q$. Ideally, our model would be validated using empirical data for company-assigned contribution rates and self-reported rankings. However, given the novelty of our framework, which formally integrates these specific data sources, a comprehensive real-world dataset is not yet available. Consequently, this study employs a simulation-based approach, which begins by postulating a set of unobservable true contribution rates that serve as the objective ground truth for our experiments. From this ground truth, the company-assigned contribution rates and the participant rankings are systematically generated by introducing biases and random noise. The true contribution rate of participant $p$ to project $q$ is represented by $n_{pq}$, which lies in the interval $[0, 1]$. For each project $q$, the total true contribution rate of all participants in the project is exactly 100%, i.e., $\sum_{p: q \in Q_p} n_{pq} = 100\%$ for all $q \in Q$. To ensure analytical tractability and comparability across projects, we assume that all projects are of approximately equal scale and workload. Let $B \in \mathbb{R}_+$ denote the total amount of resources available for allocation in each project. Then the fair allocation based on true contribution rates is defined as
$$y_{pq}^{true} = B \cdot n_{pq}, \quad \forall p \in P, \tag{1}$$
for all $q \in Q_p$, where $y_{pq}^{true}$ represents the amount of resources that participant $p$ should fairly receive from project $q$.
Traditional Allocation Method. In practice, these true contribution rates are not directly observable. Therefore, companies rely on estimated contribution rates derived from internal evaluations. The company-assigned contribution rate of participant $p$ to project $q$ is represented by $m_{pq}$, which lies in the interval $[0, 1]$. The company-assigned contribution rates deviate from the true values due to subjective judgment. We assume the deviations follow a uniform distribution with a maximum error $\alpha \in [0, 1]$. Specifically, for all $p \in P$ and $q \in Q_p$, an initial estimate $m_{pq}$ is drawn from $m_{pq} \sim U\left[\max(n_{pq} - \alpha, 0), \min(n_{pq} + \alpha, 1)\right]$. These initial estimates are then normalized for each project $q$ to ensure that the sum of company-assigned contribution rates for each project equals 100%. Thus, the company-assigned contribution rate of participant $p$ to project $q$ is calculated as follows:
$$\hat{m}_{pq} = \frac{m_{pq}}{\sum_{p': q \in Q_{p'}} m_{p'q}}, \quad \forall p \in P, \tag{2}$$
for all q Q p . In the traditional method, this company-assigned contribution rate is directly used as the final basis for resource allocation. Then the amount of resources allocated to participant p in project q is calculated as follows:
$$y_{pq}^{tradi} = B \cdot \hat{m}_{pq}, \quad \forall p \in P, \tag{3}$$
for all q Q p . While this approach is easy to implement, it is vulnerable to subjective bias, and it does not reflect the participants’ own perception of their relative efforts in the projects.
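The data-generating process and the traditional allocation rule above can be sketched in a few lines of Python. This is a simulation sketch for a single project; the participant names, seed, and helper names are illustrative, not from the paper:

```python
import random

def normalize(rates):
    """Scale one project's rates so they sum to 1 (cf. Formula (2))."""
    total = sum(rates.values())
    return {p: r / total for p, r in rates.items()}

def company_rates(true_rates, alpha, rng):
    """Draw noisy company estimates m_pq ~ U[max(n-alpha,0), min(n+alpha,1)],
    then normalize per project to obtain m^_pq."""
    noisy = {p: rng.uniform(max(n - alpha, 0.0), min(n + alpha, 1.0))
             for p, n in true_rates.items()}
    return normalize(noisy)

def traditional_allocation(m_hat, budget):
    """Formula (3): allocate the project budget B by company-assigned rates."""
    return {p: budget * r for p, r in m_hat.items()}

rng = random.Random(42)                          # illustrative seed
true_rates = {"p1": 0.5, "p2": 0.3, "p3": 0.2}   # ground-truth n_pq, sums to 1
m_hat = company_rates(true_rates, alpha=0.1, rng=rng)
alloc = traditional_allocation(m_hat, budget=100.0)
print(round(sum(alloc.values()), 6))  # -> 100.0 (the whole budget is distributed)
```

Because of the per-project normalization, the allocated amounts always exhaust the budget $B$ even though each individual rate may deviate from its true value by up to $\alpha$.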
Estimation Tendency-Based Allocation Method. To adjust company-assigned contribution rates and achieve fairer resource allocations, a commonly adopted method involves generating adjusted contribution rates by incorporating individual self-assessed contribution rates and their estimation tendencies [36]. The core innovation of this method lies in considering the degree to which individuals overestimate or underestimate their own contributions, assuming that each individual has the same estimation tendency across all the projects they participate in. Each participant $p$ forms a personal estimate of the contribution rate for each project $q$ that he or she is involved in, denoted by $s_{pq} \in [0, 1]$. For participant $p$, the estimation tendency is denoted as $e_p \in \mathbb{R}_+$, where $0 < e_p < 1$ indicates that participant $p$ has a tendency to underestimate his or her contributions, and $e_p > 1$ indicates a tendency to overestimate them. The adjusted contribution rate of participant $p$ in project $q$ that accounts for this estimation tendency is then $t_{pq} = s_{pq} / e_p$. The objective of this method is to determine adjusted contribution rates that are as close as possible to the original company evaluations. This method for adjusting the contribution rates, considering estimation tendencies, can be formulated as follows:
$$\min \ \sum_{p \in P} \sum_{q \in Q_p} \left( t_{pq} - \hat{m}_{pq} \right)^2 \tag{4}$$
$$\text{subject to} \quad t_{pq} = \frac{s_{pq}}{e_p}, \quad \forall p \in P, \ q \in Q_p, \tag{5}$$
$$0 \le t_{pq} \le 1, \quad \forall p \in P, \ q \in Q_p, \tag{6}$$
$$e_p > 0, \quad \forall p \in P. \tag{7}$$
The objective function minimizes the total squared difference between the adjusted contribution rates, considering estimation tendencies, and the company-assigned contribution rates. Constraints (5) ensure that each participant has a consistent estimation tendency of contribution rates for all the projects that they are involved in. Constraints (6) and (7) are the domains of the decision variables.
After solving the problem, the allocation based on the adjusted contribution rates considering estimation tendencies can be calculated as follows:
$$y_{pq}^{tend} = \frac{B \cdot t_{pq}}{\sum_{p': q \in Q_{p'}} t_{p'q}}, \quad \forall p \in P, \tag{8}$$
for all q Q p , where y p q t e n d represents the amount of resources allocated to participant p in project q by this method. This method incorporates two sources of information: the company-assigned contribution rates and individual self-assessed contribution rates, while also taking into account participant tendencies to overestimate or underestimate their own assessments. Compared with the traditional method that solely relies on company assignments, this method integrates self-assessments, thus neither significantly altering the company-assigned contribution rates nor ignoring each individual self-evaluation.
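Because the tendency model is separable by participant, each $e_p$ in Formulas (4)–(7) admits a closed-form least-squares solution: substituting $u = 1/e_p$ makes the objective an ordinary least-squares fit, giving $e_p^* = \sum_q s_{pq}^2 / \sum_q s_{pq} \hat{m}_{pq}$. The sketch below uses this closed form plus a simple clipping step for Constraint (6); the paper does not prescribe a solver, so this is one possible implementation, not necessarily the authors', and clipping is only an approximation of the exact box-constrained optimum:

```python
def fit_tendency(s, m):
    """Least-squares estimation tendency e_p for one participant.

    Minimizing sum_q (s_q / e - m_q)^2 over e is linear in u = 1/e,
    giving u* = sum(s*m) / sum(s^2), hence e* = sum(s^2) / sum(s*m).
    Assumes sum(s_q * m_q) > 0, so that e_p > 0 (constraint (7)).
    """
    num = sum(sq * sq for sq in s)
    den = sum(sq * mq for sq, mq in zip(s, m))
    return num / den

def adjusted_rates(s, m):
    """t_pq = s_pq / e_p, clipped to [0, 1] as a rough stand-in for (6)."""
    e = fit_tendency(s, m)
    return [min(max(sq / e, 0.0), 1.0) for sq in s]

# A participant who overstates every contribution by 25%:
s = [0.5, 0.25, 0.125]   # self-assessed rates s_pq
m = [0.4, 0.2, 0.1]      # company-assigned rates m^_pq
print(round(fit_tendency(s, m), 3))  # -> 1.25
```

With a uniform 25% overstatement, the fitted tendency recovers exactly $e_p = 1.25$, and dividing the self-assessments by it reproduces the company rates.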
However, even with the consideration of individual estimation tendencies, the self-assessed contribution rates may still deviate considerably from the true contribution rates. Therefore, we propose a more accurate resource allocation method, which requires participants to provide their own rankings of the projects that they are involved in based on their perceived contribution rates.
Our Proposed Method. In our method, participants are also asked to provide their own rankings of the projects that they are involved in based on their perceived contribution rates. The rankings are then used to adjust contribution rates so that they align as closely as possible with the values assigned by the company while respecting self-reported rankings. We assume that the individual perceived contribution rate $s_{pq}$ is modeled by taking the true contribution rate $n_{pq}$, scaling it by a personal estimation coefficient $\gamma_p \in \mathbb{R}_+$, and then adding a random perceptual error $\varepsilon$, i.e., for all $p \in P$ and $q \in Q_p$, $s_{pq} = n_{pq}\gamma_p + \varepsilon$. The coefficient $\gamma_p$ reflects a participant’s average estimation tendency: $\gamma_p = 1$ indicates accurate estimation, $\gamma_p > 1$ implies overestimation, and $0 < \gamma_p < 1$ implies underestimation of the true contribution rate. The term $\varepsilon$ is assumed to follow a normal distribution with a mean of zero and a standard deviation of $\sigma$, i.e., $\varepsilon \sim N(0, \sigma^2)$. For each participant $p$, after calculating the personal estimates of contribution rate $s_{pq}$ for all projects $q \in Q_p$, they then rank these projects based on the perceived contribution rate values. The ranking submitted by participant $p$ is given as an ordered list $R_p = (\varphi_p(1), \varphi_p(2), \ldots, \varphi_p(|Q_p|))$, where $|Q_p|$ denotes the number of projects that participant $p$ is involved in (i.e., the cardinality of the set $Q_p$), and $\varphi_p(i)$ is the index of the project in which participant $p$ perceives that he or she has the $i$-th highest contribution rate among all the projects in $Q_p$. Since the total amount of resources available for allocation in each project is the same (i.e., $B$), participants have no incentive to overstate their contributions in projects that might otherwise appear to have more resources. In other words, it can be reasonably assumed that participants will not intentionally falsify their rankings of contribution levels across the projects that they are involved in.
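The perception model $s_{pq} = n_{pq}\gamma_p + \varepsilon$ and the derived ranking $R_p$ can be sketched as follows for a single participant. The project names and parameter values are illustrative; with $\sigma = 0$ the ranking is deterministic:

```python
import random

def perceived_rates(true_rates, gamma, sigma, rng):
    """s_pq = n_pq * gamma_p + eps, with eps ~ N(0, sigma^2)."""
    return {q: n * gamma + rng.gauss(0.0, sigma) for q, n in true_rates.items()}

def self_ranking(s):
    """R_p: project indices sorted by perceived contribution, highest first."""
    return sorted(s, key=s.get, reverse=True)

rng = random.Random(7)
true_rates = {"q1": 0.2, "q2": 0.6, "q3": 0.35}  # n_pq for one participant
s = perceived_rates(true_rates, gamma=1.1, sigma=0.02, rng=rng)
print(self_ranking(s))
```

Note that a uniform scaling $\gamma_p$ never changes the ordering of a participant's projects; only the noise $\varepsilon$ can flip adjacent ranks, which is exactly why the ranking format is more robust to over- or understatement than raw self-assessed rates.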
The objective of this study is to determine a set of adjusted contribution rates that are as close as possible to the original company evaluations, while fully respecting the self-reported rankings submitted by the participants. To this end, we need to decide the adjusted contribution rate for each project and for each participant involved in that project. The decision model is formulated in Section 3.2.

3.2. Model Formulation

In this section, we present a QP model to refine the initial company-assigned contribution rates. It focuses on optimizing the numerical estimates of individual contributions based on two sources of information: the original company evaluations and the self-reported rankings provided by participants. The objective is to minimize the total squared deviation between the adjusted contribution rates and the original company-assigned rates, while ensuring that the adjusted rates strictly comply with the rank-order constraints. Table 1 summarizes the notations used in the model.
The QP model is developed to refine contribution rate estimations, and can be formulated as follows:
$$\min \ \sum_{p \in P} \sum_{q \in Q_p} \left( x_{pq} - \hat{m}_{pq} \right)^2 \tag{9}$$
$$\text{subject to} \quad x_{p\varphi_p(1)} \ge x_{p\varphi_p(2)} \ge \cdots \ge x_{p\varphi_p(|Q_p|)}, \quad \forall p \in P, \tag{10}$$
$$0 \le x_{pq} \le 1, \quad \forall p \in P, \ q \in Q_p. \tag{11}$$
The objective function (9) minimizes the total squared difference between the adjusted contribution rates and the company-assigned contribution rates. Constraints (10) require that each participant’s adjusted contribution rates across their involved projects conform to their self-reported rankings. Constraints (11) are the domains of the decision variables. By penalizing the square of deviations rather than absolute values, the objective prioritizes reducing larger discrepancies, which keeps the adjusted contribution rates from deviating too far from the original company-assigned rates. The model thus balances corporate evaluations, reflected in the objective function, against participant rankings, incorporated through Constraints (10). Such a design helps to mitigate biases that may arise from relying solely on company-assigned evaluations. Figure 1 illustrates the optimization flow of the QP model.
After solving the QP model, the allocation based on the adjusted contribution rates can be calculated as follows:
$$y_{pq}^{adj} = \frac{B \cdot x_{pq}}{\sum_{p': q \in Q_{p'}} x_{p'q}}, \quad \forall p \in P, \tag{12}$$
for all $q \in Q_p$, where $y_{pq}^{adj}$ represents the amount of resources allocated to participant $p$ in project $q$ by our proposed method.
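The paper does not prescribe a solver for the QP, but because the objective (9) sums over participants and Constraints (10) and (11) couple only the projects of a single participant, the model decomposes into independent per-participant subproblems, each of which is an isotonic regression solvable exactly by the pool-adjacent-violators (PAV) algorithm. A pure-Python sketch for one participant (project names illustrative):

```python
def adjust_rates(m_hat, ranking):
    """Solve one participant's subproblem of the QP (9)-(11) via PAV.

    m_hat:   dict project -> company-assigned rate m^_pq
    ranking: list R_p of that participant's projects, ordered from highest
             to lowest perceived contribution (phi_p(1), phi_p(2), ...)
    Returns adjusted rates x_pq with x[R[0]] >= x[R[1]] >= ... in [0, 1].
    """
    y = [m_hat[q] for q in ranking]  # targets in ranked order
    blocks = []                      # PAV blocks of (mean, weight)
    for v in y:
        blocks.append([v, 1])
        # merge whenever an earlier block is smaller than a later one
        # (violates the required non-increasing order)
        while len(blocks) > 1 and blocks[-2][0] < blocks[-1][0]:
            v2, w2 = blocks.pop()
            v1, w1 = blocks.pop()
            blocks.append([(v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2])
    fitted = []
    for v, w in blocks:
        fitted.extend([min(max(v, 0.0), 1.0)] * w)  # clip to [0, 1]
    return dict(zip(ranking, fitted))

# The company ranks q2 highest, but the participant ranked q1 first:
m_hat = {"q1": 0.2, "q2": 0.3, "q3": 0.1}
x = adjust_rates(m_hat, ["q1", "q2", "q3"])
print(x)  # -> {'q1': 0.25, 'q2': 0.25, 'q3': 0.1}
```

Where the company rates already respect the participant's ranking, PAV leaves them unchanged; where they conflict, it pools the violating rates to their average, which is exactly the least-squares adjustment the objective (9) calls for.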
The traditional allocation method (Formula (3)) relies solely on company-assigned contribution rates, which may not accurately reflect individual contributions, leading to potential biases and unfair allocations. The estimation tendency-based allocation method (Formula (8)) adjusts these rates by incorporating self-assessed contributions and accounting for individual estimation tendencies, which are rooted in quadratic optimization (Formulas (4)–(7)), minimizing the squared difference between the adjusted contribution rates and company-assigned rates. While this method reduces some bias, it may still deviate from the true contribution rates. Our proposed method (Formula (12)) improves upon this by integrating participants’ self-reported rankings, which better align with actual contributions. This method combines both company-assigned rates and participant rankings, ensuring a more equitable distribution of resources. Formula (12) is rooted in quadratic optimization (Formulas (9)–(11)), ensuring minimal adjustments while maintaining consistency with the company’s original assessments.

3.3. Evaluation Metrics

To evaluate the effectiveness of the two traditional methods and our proposed method, we use the mean squared error (MSE) as the evaluation metric, as it provides a straightforward measure of the accuracy between the allocated and deserved resources for each participant. We define baseline losses based on Traditional Method 1 (denoted as l_tradi) and Traditional Method 2 (denoted as l_tend), which are calculated as follows:
$$l_{\mathrm{tradi}} = \frac{1}{\sum_{p \in P} |Q_p|} \sum_{p \in P} \sum_{q \in Q_p} \left( y_{pq}^{\mathrm{tradi}} - y_{pq}^{\mathrm{true}} \right)^2,$$
$$l_{\mathrm{tend}} = \frac{1}{\sum_{p \in P} |Q_p|} \sum_{p \in P} \sum_{q \in Q_p} \left( y_{pq}^{\mathrm{tend}} - y_{pq}^{\mathrm{true}} \right)^2.$$
Similarly, the loss of our proposed model can be calculated as follows:
$$l_{\mathrm{adj}} = \frac{1}{\sum_{p \in P} |Q_p|} \sum_{p \in P} \sum_{q \in Q_p} \left( y_{pq}^{\mathrm{adj}} - y_{pq}^{\mathrm{true}} \right)^2.$$
To measure the improvement achieved through the adjustment process, we compute the loss reduction percentages, with r_loss corresponding to the traditional method and u_loss corresponding to the method that accounts for individual estimation tendencies, as follows:
$$r_{\mathrm{loss}} = \frac{l_{\mathrm{tradi}} - l_{\mathrm{adj}}}{l_{\mathrm{tradi}}} \times 100\%,$$
$$u_{\mathrm{loss}} = \frac{l_{\mathrm{tend}} - l_{\mathrm{adj}}}{l_{\mathrm{tend}}} \times 100\%.$$
A positive value of r_loss indicates that the resource allocation based on the adjusted contribution rates yields a lower loss compared to the traditional company-assigned method. A higher value of r_loss indicates a greater improvement in allocation accuracy. Similarly, for the method that accounts for individual estimation tendencies, the value of u_loss can be interpreted in the same way.
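The metrics above reduce to a few lines of code; the sketch below uses illustrative names and dictionaries keyed by (participant, project) pairs:

```python
def mse_loss(y_alloc, y_true):
    """MSE between allocated and deserved amounts, averaged over all
    (participant, project) pairs, as in the definitions of l_tradi,
    l_tend, and l_adj."""
    return sum((y_alloc[k] - y_true[k]) ** 2 for k in y_true) / len(y_true)

def loss_reduction(l_base, l_adj):
    """Loss-reduction percentage (r_loss or u_loss): positive values mean
    the adjusted allocation is more accurate than the baseline."""
    return (l_base - l_adj) / l_base * 100.0
```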

4. Experiments

In this section, we first conduct computational experiments to validate the effectiveness of the proposed method. Additionally, a comprehensive sensitivity analysis is performed to explore how parameter variations influence the efficiency of this approach. In the experiments, the models were solved using Gurobi Optimizer 10.0.1, accessed via the Python 3.11.5 API.

4.1. Experiment Settings

The experimental setup comprises 20 participants and 50 projects, i.e., |P| = 20 and |Q| = 50. These values were chosen to simulate a representative scenario of a moderately sized organization, though other appropriate values could certainly be used to model organizations of different scales. The number of projects each participant is involved in is drawn from a uniform distribution over the interval [N − g, N + g], where N, g ∈ ℤ₊. We set the central value N to 10 and the deviation g to 5, ensuring that every participant is involved in at least two projects. The number of team members for each project ranges from 5 to 15, an optimal size for fostering effective communication, collaboration, and decision-making, while avoiding coordination issues and diluted responsibility. This size supports diverse skill sets and promotes high-quality output and strong team dynamics [55]. The maximum error allowed in company-assigned contribution rates, α, is set to 0.1, consistent with findings on managerial judgment biases [56]. In the model of the personal perceived contribution rate, s_pq = n_pq γ_p + ε, the participant's average estimation tendency satisfies γ_p ∈ [b_1, b_2]. As people typically tend to slightly underestimate, but more often significantly overestimate, their own efforts [35], we set γ_p ∈ [0.7, 2]. The perceptual noise term ε is drawn from a normal distribution, ε ~ N(0, σ²). To capture realistic subjective uncertainty, the standard deviation σ is calibrated such that the probability of the perception error |ε| being less than 0.1 is 90%, i.e., P(|ε| < 0.1) = 0.9. Based on the properties of the normal distribution, this condition yields σ ≈ 0.0608, so we set σ to 0.06. This assumption is grounded in statistical calibration and self-assessment accuracy research [7]. Finally, to ensure that the perceived contribution rate s_pq ∈ [0, 1], the additive noise term ε is truncated to the interval [−a, a], where a = min(n_pq γ_p, 1 − n_pq γ_p).
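The calibration of σ can be reproduced directly: for ε ~ N(0, σ²), the condition P(|ε| < 0.1) = 0.9 requires 0.1/σ to equal the 95th percentile of the standard normal. A quick check using only the Python standard library (the truncation-bound helper is our illustrative addition):

```python
from statistics import NormalDist

# P(|eps| < 0.1) = 0.9 for eps ~ N(0, sigma^2)  =>  0.1 / sigma = z_0.95
z95 = NormalDist().inv_cdf(0.95)   # ~1.645
sigma = 0.1 / z95                  # ~0.0608, rounded to 0.06 in the experiments

def truncation_bound(n_pq, gamma_p):
    """Half-width a of the symmetric interval [-a, a] that keeps the
    perceived rate s_pq = n_pq * gamma_p + eps inside [0, 1]."""
    return min(n_pq * gamma_p, 1 - n_pq * gamma_p)
```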
After establishing the parameter settings, we use these values to derive the basic results, and then we conduct sensitivity analysis to examine the impacts of these parameters.

4.2. Basic Results

This section presents a detailed analysis of the basic results derived from the QP model. The evaluation covers the overall improvement in allocation accuracy relative to the traditional method, followed by an examination of the adjusted allocations for two participants.

4.2.1. Evaluation of Method Effectiveness

Following the experimental setup detailed in Section 4.1, we execute the model ten times with distinct random seeds for each parameter configuration. This repeated execution with varied seeds captures the full range of stochastic variation arising from the random error distributions of company-assigned rates and individual estimations, ensuring that the observed results are statistically robust rather than skewed by isolated random fluctuations. The performance of the proposed QP model is first evaluated at an aggregate level by comparing its MSE with that of the traditional allocation method. The computation yields an average loss of l_tradi = 249,448 for the traditional method based on direct company assignments, and an average loss of l_tend = 156,100 for the method considering individual estimation tendencies. In contrast, our method based on adjusted contribution rates achieves a significantly lower average loss of l_adj = 122,703. The average loss reduction percentage is then r_loss = 50.8% relative to the traditional method and u_loss = 21.4% relative to the method considering individual estimation tendencies, signifying substantial improvements in the accuracy of the resource allocation. This result indicates that, on average, by integrating participants' self-reported project rankings as constraints, our QP model effectively mitigates the inherent subjectivity and errors in the initial company evaluations, yielding a resource distribution that aligns more closely with fair allocation based on the true contribution rates.
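The reported percentages follow directly from the three average losses:

```python
l_tradi, l_tend, l_adj = 249_448, 156_100, 122_703

r_loss = (l_tradi - l_adj) / l_tradi * 100   # reduction vs. the traditional method
u_loss = (l_tend - l_adj) / l_tend * 100     # reduction vs. the tendency-based method

print(f"{r_loss:.1f}% {u_loss:.1f}%")        # prints "50.8% 21.4%"
```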

4.2.2. Individual Resource Allocation Analysis

To further illustrate the practical implications of the model for individual participants, we establish an indicator for each participant that reflects the model's efficacy in correcting that participant's allocated resources. For each p ∈ P, the evaluation metrics of the three methods for the individual are calculated as follows:
$$l_{\mathrm{tradi}}^{p} = \frac{1}{|Q_p|} \sum_{q \in Q_p} \left( y_{pq}^{\mathrm{tradi}} - y_{pq}^{\mathrm{true}} \right)^2,$$
$$l_{\mathrm{tend}}^{p} = \frac{1}{|Q_p|} \sum_{q \in Q_p} \left( y_{pq}^{\mathrm{tend}} - y_{pq}^{\mathrm{true}} \right)^2,$$
$$l_{\mathrm{adj}}^{p} = \frac{1}{|Q_p|} \sum_{q \in Q_p} \left( y_{pq}^{\mathrm{adj}} - y_{pq}^{\mathrm{true}} \right)^2,$$
where l_tradi^p, l_tend^p, and l_adj^p respectively denote the MSE between the deserved bonus amount and the bonus allocated to participant p under the traditional method, the method considering individual estimation tendencies, and our proposed method. For each p ∈ P, the related loss reduction percentages are calculated as follows:
$$r_{\mathrm{loss}}^{p} = \frac{l_{\mathrm{tradi}}^{p} - l_{\mathrm{adj}}^{p}}{l_{\mathrm{tradi}}^{p}} \times 100\%,$$
$$u_{\mathrm{loss}}^{p} = \frac{l_{\mathrm{tend}}^{p} - l_{\mathrm{adj}}^{p}}{l_{\mathrm{tend}}^{p}} \times 100\%,$$
where r_loss^p and u_loss^p respectively represent the loss reduction percentage of the adjusted method compared to the traditional method and to the method considering individual estimation tendencies for participant p. Table 2 presents these indicators for each individual, and Figure 2 illustrates the values of r_loss^p and u_loss^p for each participant.
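The per-participant indicators mirror the aggregate ones, with squared errors grouped by participant before averaging; a minimal sketch with illustrative names:

```python
def per_participant_loss(y_alloc, y_true):
    """Per-participant MSE over that participant's projects Q_p.
    y_alloc, y_true: dicts keyed by (participant, project) tuples."""
    acc = {}
    for (p, q), y in y_true.items():
        err2 = (y_alloc[(p, q)] - y) ** 2
        s, c = acc.get(p, (0.0, 0))
        acc[p] = (s + err2, c + 1)
    return {p: s / c for p, (s, c) in acc.items()}

def reduction_by_participant(y_base, y_adj, y_true):
    """Per-participant loss-reduction percentage (r_loss^p or u_loss^p) of
    the adjusted allocation against a baseline allocation."""
    l_base = per_participant_loss(y_base, y_true)
    l_adj = per_participant_loss(y_adj, y_true)
    return {p: (l_base[p] - l_adj[p]) / l_base[p] * 100.0 for p in l_base}
```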
It can be observed that r_loss^p is mostly between 40% and 70%, and u_loss^p is mostly between 10% and 50%. The bar chart also effectively demonstrates the superiority of our proposed method in reducing losses and improving allocation accuracy compared to the other two methods. It can be concluded that our method outperforms the allocation method considering estimation tendencies, which, in turn, is superior to the traditional allocation method based on company-assigned contribution rates.
Furthermore, we present a detailed analysis of the resource allocations for two randomly selected participants, denoted p_1 and p_2 (i.e., the participants corresponding to indices 1 and 2 in the set P). To better capture the actual allocation differences, for each p ∈ P and q ∈ Q_p we calculate the differences between the allocated bonus and the true deserved bonus as follows:
$$d_{\mathrm{tradi}}^{pq} = \left| y_{pq}^{\mathrm{tradi}} - y_{pq}^{\mathrm{true}} \right|,$$
$$d_{\mathrm{tend}}^{pq} = \left| y_{pq}^{\mathrm{tend}} - y_{pq}^{\mathrm{true}} \right|,$$
$$d_{\mathrm{adj}}^{pq} = \left| y_{pq}^{\mathrm{adj}} - y_{pq}^{\mathrm{true}} \right|,$$
where d_tradi^pq, d_tend^pq, and d_adj^pq represent the absolute differences between the deserved bonus amount and the bonus allocated to participant p in project q under the traditional method, the method considering individual estimation tendencies, and our proposed method, respectively. These values quantify how far each allocated amount deviates from the true deserved amount. For each p ∈ P and q ∈ Q_p, the related loss reduction percentages are calculated as follows:
$$r_{\mathrm{loss}}^{pq} = \frac{d_{\mathrm{tradi}}^{pq} - d_{\mathrm{adj}}^{pq}}{d_{\mathrm{tradi}}^{pq}} \times 100\%,$$
$$u_{\mathrm{loss}}^{pq} = \frac{d_{\mathrm{tend}}^{pq} - d_{\mathrm{adj}}^{pq}}{d_{\mathrm{tend}}^{pq}} \times 100\%,$$
where r_loss^pq and u_loss^pq represent the loss reduction percentages of the adjusted method compared to the traditional method and the method considering individual estimation tendencies for participant p in project q, respectively. The results shown in Table 3 and Figures 3 and 4 are drawn from a single instance of the ten experiments conducted.
The results highlight the model's ability to improve the accuracy of resource allocation. For participant p_1, the traditional method yields allocations y_pq^tradi that deviate significantly from the true deserved amounts y_pq^true. In several projects, the company's evaluation leads to a substantial overestimate, while in others, the participant receives an allocation far below what is deserved. Our adjusted method significantly reduces these errors: the resulting allocations y_pq^adj are consistently closer to the true deserved amounts. The allocations y_pq^tend from the method considering individual estimation tendencies show deviations that are smaller than those of the traditional method, but larger than those of our proposed method. A similar pattern is observed for participant p_2. It is worth noting that for participant p_1 in project q_24, and for p_2 in projects q_9 and q_49, our method does not improve on the method considering estimation tendencies, as indicated by u_loss^pq values close to zero or negative. However, these are isolated cases; overall, the majority of the results, as reflected by the positive values of r_loss^pq and u_loss^pq in Table 3, demonstrate that our method consistently outperforms the other two methods. This indicates that our approach ensures that individuals receive resources closer to the amounts they truly deserve, thereby promoting fairness.

4.3. Sensitivity Analysis

In this section, we further conduct a sensitivity analysis of our QP model with respect to changes in the input parameters, using r_loss as the evaluation index. The studied parameters include N and g, which define the distribution of project team sizes; b_1 and b_2, which define the range of participants' self-assessment bias; the maximum error α in the company-assigned contribution rates; and the standard deviation σ of the perceptual noise in participants' self-evaluations.
The parameter settings of the experiments are shown in Table 4. In total, we conduct 59 experiments, with the experiment ID (EID) indexed from 0 to 58. Each experiment belongs to a group identified by a group ID (GID). Groups G0–G5 illustrate the performance of the QP model under different input parameters, with each group varying a single parameter across a range of values. For example, group G0 uses experiments EID 0–8 to vary the central value for the number of participants per project, N, from 7 to 15, while all other parameters are unchanged. In the "N", "g", "b_1", "b_2", "α", and "σ" columns, [c, d, e] denotes a list of numbers generated from c to d with a step size of e.
Figure 5 shows the results of the sensitivity analysis. To ensure statistical reliability, each experiment is conducted ten times with different random seeds. The x-axis represents the values of the varied parameter, and the y-axis represents the average r_loss over the ten runs.
Figure 5a shows that r_loss remains relatively stable as N increases. In contrast, Figure 5b shows that r_loss exhibits a clear negative correlation with g. These results suggest that the average team size has a limited effect on the effectiveness of the QP model, while heterogeneity in team size significantly diminishes the model's accuracy.
Figure 5c,d explore the model's performance as a function of the range of the personal estimation coefficient γ_p ∈ [b_1, b_2]. The results show that as either the lower bound b_1 or the upper bound b_2 increases, r_loss exhibits a consistent positive trend. This relationship arises because an increase in either parameter raises the expected value of γ_p, which amplifies the signal term n_pq γ_p in the perceived contribution rate equation s_pq = n_pq γ_p + ε. This amplification diminishes the relative impact of the perceptual noise term ε, thereby increasing the accuracy of the participants' self-reported project rankings. Supplying the QP model with these more accurate ordinal constraints leads to a more accurate adjustment of the original company evaluations, resulting in a higher value of r_loss.
Figure 5e shows that as the maximum allowed error α in company-assigned rates increases, r_loss is initially close to zero, then rises sharply, and finally plateaus at a high level. Specifically, at α = 0.05, r_loss = −0.43%, which indicates that when the company's evaluation is already highly accurate, our proposed method introduces more error, making it less effective than the traditional resource allocation method. The subsequent sharp increase in r_loss occurs because, as α grows, the original company evaluations become more inaccurate. Consequently, the allocation based on the company-assigned contribution rates deviates further from the fair allocation based on the true contribution rates, allowing our model to achieve a more accurate result. The plateau is reached because, at large α values, the company's initial estimates m_pq are frequently clipped at the [0, 1] boundaries, so further increases in α no longer lead to a significant decrease in the accuracy of the traditional allocation method. This result underscores that unless the error of the company's evaluation is very small (e.g., α ≤ 0.04), our proposed method is superior to the traditional allocation method.
Figure 5f demonstrates a strong negative correlation between r_loss and the standard deviation σ of the perceptual noise. This downward trend is expected, as a larger σ increases the magnitude of the random noise ε, which, in turn, degrades the accuracy of participants' self-rankings and reduces the quality of the constraints provided to the model. However, as σ continues to grow, the rate of decline diminishes, and r_loss stabilizes at a positive value of approximately 10%. This stabilization indicates that even when individual rankings are subject to substantial random noise, our method remains superior to the traditional allocation approach.
Our sensitivity analyses not only confirm the robustness of our QP model, but also quantify its performance benefits across a range of conditions. The results clearly show that the advantage of our method becomes most pronounced as the quality of the initial company assessment decreases. As the company's evaluation error α increases, the r_loss achieved by our model rises dramatically from near zero to a stable plateau of approximately 75%, demonstrating its powerful corrective capability in high-error environments. Conversely, the model shows remarkable resilience to noisy participant input: while higher perceptual noise σ reduces the model's effectiveness, r_loss stabilizes at a positive value of approximately 10% even under substantial noise, indicating that our method consistently outperforms the traditional approach. Furthermore, our analysis reveals that while the model's performance improves to over 60% as the self-assessment coefficient γ_p increases, it is negatively impacted by significant heterogeneity in team sizes. These quantitative findings provide strong evidence for the model's practical utility and lead to several key managerial insights.
Building on these findings from our sensitivity analyses, we further explore how the QP model's effectiveness can inform practical managerial strategies. Specifically, we draw the following conclusions: (i) when the traditional allocation method is prone to subjectivity or significant errors, introducing self-reported contribution rankings can substantially improve fairness and reliability, particularly in projects where outcomes are difficult to quantify; (ii) the allocation method based on the QP model performs better when participants can accurately assess their own contributions, so companies should invest in training and feedback processes that help individuals to better evaluate their performance; and (iii) maintaining relatively consistent team sizes across projects enhances allocation accuracy more effectively than focusing solely on average team size. These insights provide actionable guidance for building a more equitable resource allocation method. At the individual level, the accuracy improvements of the QP model are primarily due to its ability to make precise, targeted adjustments: it corrects both overestimates and underestimates across participants' project portfolios, ensuring that each project's allocation more closely reflects its true deservingness.

5. Conclusions

Traditional resource allocation methods in collaborative projects often have significant limitations, as these methods typically depend on company-assigned contribution rates which can contain errors due to subjective judgment. This can lead to unfair outcomes where resources like bonuses do not match an individual’s actual effort, potentially causing a decline in employee morale and providing an inaccurate assessment of performance.
To address this challenge, we formulate a QP model designed to adjust company-assigned contribution rates by incorporating participants’ self-reported rank orders of their perceived contribution levels across projects that they are involved in. These ordinal assessments are integrated as constraints in the optimization model, enabling a systematic adjustment process to mitigate errors inherent in the original company evaluations. The QP model aims to derive adjusted contribution rates that are closer to the true contribution rates, thereby enabling resource allocations to better align with their deserved amounts and promoting fairness.
In the numerical experiments, our proposed method yields an average loss reduction of 50.8% in resource allocation compared to the traditional method, and 21.4% compared to the estimation tendency-based allocation method, signifying a major improvement in allocation accuracy. For individual resource allocation, the loss reduction ranges from 40% to 70% for the traditional method and 10% to 50% for the estimation tendency-based allocation method. Sensitivity analysis further reveals that our model reduces the error significantly in cases where company evaluations are inaccurate, with performance improvements of up to 75%. Additionally, when the errors in individual assessments are large, our method remains stable, consistently outperforming the traditional method by approximately 10%, even in scenarios where self-ranking errors are substantial. The sensitivity analysis clarifies the key drivers of performance. Greater accuracy in participants’ self-estimations and larger errors in the initial company evaluations both lead to more significant improvements from our model. Conversely, higher variance in team sizes and increased perceptual noise in self-rankings tend to diminish the model’s accuracy. Despite these sensitivities, our method generally remains superior to the traditional method across the tested scenarios.
The main limitation of our study, which also guides future research, is the model’s heavy reliance on the quality of self-reported rankings, posing two key challenges. First, it is necessary to investigate how unintentional psychological and behavioral factors (e.g., self-efficacy, workplace dynamics) systematically skew personal estimations. Second, the framework should be enhanced to mitigate intentional strategic misreporting in high-stakes settings, using mechanisms like game-theoretic models to ensure truthfulness or cross-validation with objective metrics to detect and reduce manipulation. Addressing these will boost the model’s robustness and accuracy, fostering a fairer and more motivating team environment.

Author Contributions

Conceptualization, S.W.; methodology, Y.T., B.J., Q.C. and S.W.; software, Y.T.; validation, Y.T.; formal analysis, Y.T.; investigation, Y.T., B.J. and S.W.; resources, S.W.; data curation, Y.T. and B.J.; writing—original draft preparation, Y.T., B.J. and S.W.; writing—review and editing, Y.T., B.J., Q.C. and S.W.; visualization, Y.T.; supervision, S.W.; project administration, S.W.; funding acquisition, S.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Argyris, N.; Karsu, Ö.; Yavuz, M. Fair resource allocation: Using welfare-based dominance constraints. Eur. J. Oper. Res. 2022, 297, 560–578. [Google Scholar] [CrossRef]
  2. Bertsimas, D.; Farias, V.F.; Trichakis, N. On the efficiency-fairness trade-off. Manag. Sci. 2012, 58, 2234–2250. [Google Scholar] [CrossRef]
  3. Porter, C.O.; Itir Gogus, C.; Yu, R.C.F. When does teamwork translate into improved team performance? A resource allocation perspective. Small Group Res. 2010, 41, 221–248. [Google Scholar] [CrossRef]
  4. Hu, J.; Liden, R.C. Making a difference in the teamwork: Linking team prosocial motivation to team processes and effectiveness. Acad. Manag. J. 2015, 58, 1102–1127. [Google Scholar] [CrossRef]
  5. Ion, G.; Díaz-Vicario, A.; Mercader, C. Making steps towards improved fairness in group work assessment: The role of students’ self-and peer-assessment. Act. Learn. High. Educ. 2024, 25, 425–437. [Google Scholar] [CrossRef]
  6. Chen, X.V.; Hooker, J.N. A guide to formulating fairness in an optimization model. Ann. Oper. Res. 2023, 326, 581–619. [Google Scholar] [CrossRef] [PubMed]
  7. Karpen, S.C. The social psychology of biased self-assessment. Am. J. Pharm. Educ. 2018, 82, 6299. [Google Scholar] [CrossRef]
  8. London, M.; Smither, J.W. Can multi-source feedback change perceptions of goal accomplishment, self-Evaluations, and performance-related outcomes? Theory-based applications and directions for research. Pers. Psychol. 1995, 48, 803–839. [Google Scholar] [CrossRef]
  9. Karsu, Ö.; Morton, A. Inequity averse optimization in operational research. Eur. J. Oper. Res. 2015, 245, 343–359. [Google Scholar] [CrossRef]
  10. Lejk, M.; Wyvill, M. The effect of the inclusion of self-assessment with peer assessment of contributions to a group project: A quantitative study of secret and agreed assessments. Assess. Eval. High. Educ. 2001, 26, 551–561. [Google Scholar] [CrossRef]
  11. Scott, E.; van der Merwe, N.; Smith, D. Peer assessment: A complementary instrument to recognise individual contributions in IS Student group projects. Electron. J. Inf. Syst. Eval. 2005, 8, 61–70. [Google Scholar]
  12. Ogryczak, W.; Śliwiński, T. On solving linear programs with the ordered weighted averaging objective. Eur. J. Oper. Res. 2003, 148, 80–91. [Google Scholar] [CrossRef]
  13. Kozlowski, S.W.; Ilgen, D.R. Enhancing the effectiveness of work groups and teams. Psychol. Sci. Public Interest 2006, 7, 77–124. [Google Scholar] [CrossRef]
  14. Luss, H. On equitable resource allocation problems: A lexicographic minimax approach. Oper. Res. 1999, 47, 361–378. [Google Scholar] [CrossRef]
  15. DeNisi, A.S.; Murphy, K.R. Performance appraisal and performance management: 100 years of progress? J. Appl. Psychol. 2017, 102, 421. [Google Scholar] [CrossRef]
  16. Cappelli, P.; Tavis, A. The performance management revolution. Harv. Bus. Rev. 2016, 94, 58–67. [Google Scholar] [CrossRef]
  17. Maley, J.F.; Dabić, M.; Neher, A.; Wuersch, L.; Martin, L.; Kiessling, T. Performance management in a rapidly changing world: Implications for talent management. Manag. Decis. 2024, 62, 3085–3108. [Google Scholar] [CrossRef]
  18. Peng, J. Performance appraisal system and its optimization method for enterprise management employees based on the kpi index. Discret. Dyn. Nat. Soc. 2022, 2022, 1937083. [Google Scholar] [CrossRef]
  19. Garalde, A.; Solabarrieta, J.; Urquijo, I.; Ortiz de Anda-Martín, I. Assessing peer teamwork competence: Adapting and validating the comprehensive assessment of team member effectiveness–short in university students. Front. Educ. 2024, 9, 1429485. [Google Scholar] [CrossRef]
  20. Bah, M.O.P.; Sun, Z.; Hange, U.; Edjoukou, A.J.R. Effectiveness of organizational change through employee involvement: Evidence from telecommunications and refinery companies. Sustainability 2024, 16, 2524. [Google Scholar] [CrossRef]
  21. Sunny, M.N.M.; Sakil, M.B.H.; Al, A. Project management and visualization techniques a details study. Proj. Manag. 2024, 13, 28–44. [Google Scholar] [CrossRef]
  22. Velghe, C.; McIlquham-Schmidt, A.; Celik, P.; Storme, M.; De Spiegelaere, S. Protocol: Employee work motivation, effort, and performance under a merit pay system: A systematic review. Campbell Syst. Rev. 2024, 20, e70001. [Google Scholar] [CrossRef] [PubMed]
  23. Jia, J.; Lai, Y.; Yang, Z.; Li, L. The optimal strategy of enterprise key resource allocation and utilization in collaborative innovation project based on evolutionary game. Mathematics 2022, 10, 400. [Google Scholar] [CrossRef]
  24. Grand, J.A.; Pearce, M.; Rench, T.A.; Chao, G.T.; Fernandez, R.; Kozlowski, S.W. Going deep: Guidelines for building simulation-based team assessments. BMJ Qual. Saf. 2013, 22, 436–448. [Google Scholar] [CrossRef] [PubMed]
  25. Tavoletti, E.; Stephens, R.D.; Taras, V.; Dong, L. Nationality biases in peer evaluations: The country-of-origin effect in global virtual teams. Int. Bus. Rev. 2022, 31, 101969. [Google Scholar] [CrossRef]
  26. Jiang, B.; Tian, X.; Pang, K.W.; Cheng, Q.; Jin, Y.; Wang, S. Rightful rewards: Refining equity in team resource allocation through a data-driven optimization approach. Mathematics 2024, 12, 2095. [Google Scholar] [CrossRef]
  27. Resce, G.; Zinilli, A.; Cerulli, G. Machine learning prediction of academic collaboration networks. Sci. Rep. 2022, 12, 21993. [Google Scholar] [CrossRef]
  28. Vander Schee, B.A.; Birrittella, T.D. Hybrid and online peer group grading: Adding assessment efficiency while maintaining perceived fairness. Mark. Educ. Rev. 2021, 31, 275–283. [Google Scholar] [CrossRef]
  29. Sun, H.; Ni, W.; Huang, L. Fuzzy assessment of management consulting projects: Model validation and case studies. Mathematics 2023, 11, 4381. [Google Scholar] [CrossRef]
  30. Li, Y.; Chen, L. Peer-and self-assessment: A case study to improve the students’ learning ability. J. Lang. Teach. Res. 2016, 7, 780. [Google Scholar] [CrossRef]
  31. Yan, Z.; Wang, X.; Boud, D.; Lao, H. The effect of self-assessment on academic performance and the role of explicitness: A meta-analysis. Assess. Eval. High. Educ. 2023, 48, 1–15. [Google Scholar] [CrossRef]
  32. Froese, L.; Roelle, J. How to support self-assessment through standards in dissimilar-solution-tasks. Learn. Instr. 2024, 94, 101998. [Google Scholar] [CrossRef]
  33. Osório, A. Performance evaluation: Subjectivity, bias and judgment style in sport. Group Decis. Negot. 2020, 29, 655–678. [Google Scholar] [CrossRef]
  34. Magpili, N.C.; Pazos, P. Self-managing team performance: A systematic review of multilevel input factors. Small Group Res. 2018, 49, 3–33. [Google Scholar] [CrossRef]
  35. Suls, J.; Wills, T.A. Social Comparison: Contemporary Theory and Research; Taylor & Francis: New York, NY, USA, 2024. [Google Scholar] [CrossRef]
  36. Barana, A.; Boetti, G.; Marchisio, M. Self-assessment in the development of mathematical problem-solving skills. Educ. Sci. 2022, 12, 81. [Google Scholar] [CrossRef]
  37. Woodcock, M. Team Metrics: Resources for Measuring and Improving Team Performance; Routledge: New York, NY, USA, 2024. [Google Scholar] [CrossRef]
  38. Ogryczak, W.; Luss, H.; Pióro, M.; Nace, D.; Tomaszewski, A. Fair optimization and networks: A survey. J. Appl. Math. 2014, 2014, 612018. [Google Scholar] [CrossRef]
  39. Clayton Bernard, R.; Kermarrec, G. Peer-and self-assessment in collaborative online language-learning tasks: The role of modes and phases of regulation of learning. Eur. J. Psychol. Educ. 2025, 40, 7. [Google Scholar] [CrossRef]
  40. McCarl, B.A.; Moskowitz, H.; Furtan, H. Quadratic programming applications. Omega 1977, 5, 43–55. [Google Scholar] [CrossRef]
  41. Vogklis, K.; Lagaris, I.E. An active-set algorithm for convex quadratic programming subject to box constraints with applications in non-linear optimization and machine learning. Mathematics 2025, 13, 1467. [Google Scholar] [CrossRef]
  42. Schwan, R.; Jiang, Y.; Kuhn, D.; Jones, C.N. PIQP: A proximal interior-point quadratic programming solver. In Proceedings of the 62nd IEEE Conference on Decision and Control, Singapore, 13–15 December 2023; pp. 1088–1093. [Google Scholar] [CrossRef]
  43. Shir, O.M.; Emmerich, M. Multi-objective mixed-integer quadratic models: A study on mathematical programming and evolutionary computation. IEEE Trans. Evol. Comput. 2024, 29, 661–675. [Google Scholar] [CrossRef]
  44. Jahangiri, M.; Nazemi, A. Solving general convex quadratic multi-objective optimization problems via a projection neurodynamic model. Cogn. Neurodynamics 2024, 18, 2095–2110. [Google Scholar] [CrossRef]
  45. Mosleh, S.; Liu, L.; Zhang, J. Proportional-fair resource allocation for coordinated multi-point transmission in LTE-advanced. IEEE Trans. Wirel. Commun. 2016, 15, 5355–5367. [Google Scholar] [CrossRef]
  46. Guo, S.; Wang, J. Profit distribution in IPD projects based on weight fuzzy cooperative games. J. Civ. Eng. Manag. 2022, 28, 68–80. [Google Scholar] [CrossRef]
  47. Quirynen, R.; Safaoui, S.; Di Cairano, S. Real-time mixed-integer quadratic programming for vehicle decision-making and motion planning. IEEE Trans. Control Syst. Technol. 2024, 33, 77–91. [Google Scholar] [CrossRef]
  48. Garcia, C. Adaptive virtual team planning and coordination: A mathematical programming approach. J. Model. Manag. 2025, 20, 238–257. [Google Scholar] [CrossRef]
  49. Tanasescu, L.G.; Vines, A.; Bologa, A.R.; Vîrgolici, O. Data analytics for optimizing and predicting employee performance. Appl. Sci. 2024, 14, 3254. [Google Scholar] [CrossRef]
  50. Freund, D.; Hssaine, C. Fair incentives for repeated engagement. Prod. Oper. Manag. 2025, 34, 16–29. [Google Scholar] [CrossRef]
  51. Figueiredo, E.; Margaça, C.; García, J.C.S.; Ribeiro, C. The contribution of reward systems in the work context: A systematic review of the literature and directions for future research. J. Knowl. Econ. 2025, 1–35. [Google Scholar] [CrossRef]
  52. Sahin, H.; Akkaya, K.; Ganapati, S. Optimal incentive mechanisms for fair and equitable rewards in PoS blockchains. In Proceedings of the IEEE International Performance, Computing, and Communications Conference, Phoenix, AZ, USA, 7–9 November 2022; pp. 367–373. [Google Scholar] [CrossRef]
  53. Liu, H.; Zhang, C.; Chen, X.; Tai, W. Optimizing collaborative crowdsensing: A graph theoretical approach to team recruitment and fair incentive distribution. Sensors 2024, 24, 2983. [Google Scholar] [CrossRef]
  54. Kumar, A.; Yeoh, W. DECAF: Learning to be fair in multi-agent resource allocation. In Proceedings of the Autonomous Agents and Multi-Agent Systems, Detroit, MI, USA, 19–23 May 2025; pp. 2591–2593. [Google Scholar]
  55. Kononenko, I.; Sushko, H. Creation of a software development team in scrum projects. In Proceedings of the Conference on Computer Science and Information Technologies, Cham, Switzerland, 9–11 September 2020. [Google Scholar] [CrossRef]
  56. Weber, J.R. Enhancing Team Effectiveness: A study on the efficacy of servant leadership experiential training as an intervention. Servant Leadersh. Theory Pract. 2025, 12, 4. [Google Scholar]
Figure 1. The optimization flow of the QP model.
Figure 2. Individual-level comparisons of r_loss^p and u_loss^p across participants.
Figure 3. Four distinct allocation outcomes for participant p_1.
Figure 4. Four distinct allocation outcomes for participant p_2.
Figure 5. The result of r_loss with different input parameters. (a) N. (b) g. (c) b_1. (d) b_2. (e) α. (f) σ.
Table 1. Notations used in the model formulation.

Sets, Indices, and Lists
- P: the set of all participants, p ∈ P.
- Q: the set of all projects, q ∈ Q.
- Q_p: the set of projects that participant p participates in, Q_p ⊆ Q.
- φ_p(i): the index of the project in which participant p perceives that he or she has the i-th highest contribution rate among all projects in Q_p.
- R_p: the ordered list of projects in Q_p, representing participant p's perceived ranking from highest to lowest contribution rate, R_p = (φ_p(1), φ_p(2), …, φ_p(|Q_p|)).

Parameters
- n_pq: the true contribution rate of participant p to project q, n_pq ∈ [0, 1], with Σ_{p: q ∈ Q_p} n_pq = 1 for all q ∈ Q.
- m̂_pq: the company-assigned contribution rate of participant p to project q, m̂_pq ∈ [0, 1], with Σ_{p: q ∈ Q_p} m̂_pq = 1 for all q ∈ Q.
- α: the maximum error allowed in company-assigned contribution rates.
- s_pq: participant p's personal estimate of his or her contribution rate to project q, s_pq ∈ [0, 1].
- B: the total amount of resources available for allocation in one project.
- y_pq^true: the amount of resources that participant p should fairly receive from project q.
- y_pq^tradi: the amount of resources allocated to participant p in project q under the company-assigned contribution rates (the traditional method).

Decision Variables
- x_pq: continuous variable indicating the adjusted contribution rate of participant p to project q, x_pq ∈ [0, 1], for all p ∈ P and q ∈ Q_p.
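Using the notation above, the model's core idea can be sketched numerically: minimize the squared deviation of the adjusted rates x_pq from the company-assigned rates m̂_pq, subject to each project's rates summing to 1 and each participant's adjusted rates respecting his or her self-reported ranking R_p. The snippet below is a minimal illustrative sketch on a toy 2 × 2 instance, not the authors' implementation; the solver choice (SciPy's SLSQP) and the toy data are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy instance: 2 participants (p0, p1), 2 projects (q0, q1); everyone joins both.
# Company-assigned rates m[p, q]; each project's column sums to 1.
m = np.array([[0.8, 0.3],
              [0.2, 0.7]])

# Self-reported rankings R_p: rank[p] lists projects from highest to lowest
# perceived own contribution. Here p0 claims q1 >= q0, contradicting m.
rank = [[1, 0], [0, 1]]

def objective(x):
    # Squared deviation of adjusted rates from company-assigned rates.
    return np.sum((x.reshape(2, 2) - m) ** 2)

cons = []
# Each project's adjusted rates must sum to 1.
for q in range(2):
    cons.append({"type": "eq",
                 "fun": lambda x, q=q: x.reshape(2, 2)[:, q].sum() - 1})
# Ranking consistency: x[p, rank[p][i]] >= x[p, rank[p][i+1]].
for p in range(2):
    for i in range(len(rank[p]) - 1):
        hi, lo = rank[p][i], rank[p][i + 1]
        cons.append({"type": "ineq",
                     "fun": lambda x, p=p, hi=hi, lo=lo:
                         x.reshape(2, 2)[p, hi] - x.reshape(2, 2)[p, lo]})

res = minimize(objective, m.flatten(), bounds=[(0, 1)] * 4,
               constraints=cons, method="SLSQP")
x_adj = res.x.reshape(2, 2)
```

On this instance the ranking constraint for p0 is binding, so the optimizer splits the difference: x_adj ≈ [[0.55, 0.55], [0.45, 0.45]], the closest feasible point to m that honors both participants' rankings.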
Table 2. Individual evaluation metrics and loss reduction percentages across three allocation methods.

| Participant | l_tradi^p | l_tend^p | l_adj^p | r_loss^p (%) | u_loss^p (%) |
|---|---|---|---|---|---|
| p_1 | 320,421 | 192,472 | 160,620 | 49.87 | 16.55 |
| p_2 | 155,236 | 121,351 | 104,040 | 32.98 | 14.27 |
| p_3 | 224,711 | 92,401 | 53,966 | 75.98 | 41.60 |
| p_4 | 328,543 | 182,130 | 104,953 | 68.06 | 42.37 |
| p_5 | 212,533 | 204,506 | 70,076 | 67.03 | 65.73 |
| p_6 | 239,176 | 175,981 | 107,578 | 55.02 | 38.87 |
| p_7 | 191,832 | 112,693 | 94,434 | 50.77 | 16.20 |
| p_8 | 390,077 | 289,110 | 220,916 | 43.37 | 23.59 |
| p_9 | 190,684 | 122,474 | 68,505 | 64.07 | 44.07 |
| p_10 | 203,759 | 144,878 | 82,570 | 59.48 | 43.01 |
| p_11 | 189,108 | 123,473 | 68,692 | 63.68 | 44.37 |
| p_12 | 267,581 | 174,974 | 85,698 | 67.97 | 51.02 |
| p_13 | 258,191 | 133,693 | 100,145 | 61.21 | 25.09 |
| p_14 | 258,223 | 185,189 | 36,364 | 85.92 | 80.36 |
| p_15 | 229,803 | 124,144 | 97,187 | 57.71 | 21.71 |
| p_16 | 272,961 | 179,267 | 162,746 | 40.38 | 9.22 |
| p_17 | 372,144 | 249,525 | 169,446 | 54.47 | 32.09 |
| p_18 | 229,809 | 208,008 | 113,912 | 50.43 | 45.24 |
| p_19 | 317,529 | 252,492 | 212,959 | 32.93 | 15.66 |
| p_20 | 210,237 | 137,679 | 76,014 | 63.84 | 44.79 |
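The percentage columns in Table 2 are consistent with reading r_loss^p and u_loss^p as relative loss reductions of the adjusted method over the traditional and tendency-based methods, respectively. The snippet below verifies this reading on participant p_1's row (the formulas here are inferred from the table entries, not quoted from the paper):

```python
# Loss values for participant p_1 from Table 2.
l_tradi, l_tend, l_adj = 320421, 192472, 160620

# Relative loss reduction vs. the traditional method and vs. the
# tendency-based method (inferred interpretation of the table columns).
r_loss = (l_tradi - l_adj) / l_tradi * 100
u_loss = (l_tend - l_adj) / l_tend * 100

print(round(r_loss, 2), round(u_loss, 2))  # 49.87 16.55
```

The same formulas reproduce every row of the table, e.g. p_14's large reductions (85.92%, 80.36%) follow from its very small adjusted loss of 36,364.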
Table 3. Allocation outcomes based on four different contribution rates.

| Participant | Project | y_pq^true (USD) | y_pq^tradi (USD) | y_pq^tend (USD) | y_pq^adj (USD) | r_loss^pq (%) | u_loss^pq (%) |
|---|---|---|---|---|---|---|---|
| p_1 | q_2 | 860 | 1342 | 980 | 961 | 79.05 | 15.83 |
| | q_8 | 794 | 214 | 513 | 857 | 89.14 | 77.58 |
| | q_9 | 1786 | 1985 | 1944 | 1662 | 37.69 | 21.52 |
| | q_11 | 1138 | 1919 | 1989 | 1030 | 86.17 | 87.31 |
| | q_14 | 1026 | 566 | 610 | 975 | 88.91 | 87.74 |
| | q_16 | 2066 | 1450 | 1789 | 1895 | 72.24 | 38.27 |
| | q_21 | 1358 | 1677 | 1932 | 1562 | 36.05 | 64.46 |
| | q_24 | 745 | 277 | 561 | 926 | 61.32 | 1.63 |
| | q_30 | 971 | 376 | 745 | 949 | 96.30 | 90.27 |
| | q_45 | 1051 | 1996 | 1811 | 951 | 89.42 | 86.84 |
| | q_46 | 1119 | 1503 | 1301 | 982 | 64.32 | 24.73 |
| p_2 | q_2 | 1272 | 878 | 705 | 1374 | 74.11 | 82.01 |
| | q_9 | 1851 | 2527 | 2291 | 2364 | 24.11 | −16.59 |
| | q_13 | 728 | 289 | 457 | 598 | 70.39 | 52.03 |
| | q_19 | 1091 | 455 | 594 | 1374 | 55.50 | 43.06 |
| | q_22 | 927 | 175 | 606 | 788 | 81.52 | 56.70 |
| | q_26 | 899 | 1510 | 1382 | 1274 | 38.63 | 22.36 |
| | q_27 | 771 | 308 | 501 | 596 | 62.20 | 35.19 |
| | q_49 | 672 | 541 | 707 | 637 | 73.28 | 0.00 |
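The per-project percentages in Table 3 are consistent with measuring each method's loss as the absolute deviation from the fair amount y_pq^true, then taking the relative reduction achieved by the adjusted allocation. The snippet below checks this reading on p_1's first row; the formulas are an inferred interpretation, not quoted from the paper. Note it also explains the one negative entry: for (p_2, q_9) the adjusted allocation deviates more than the tendency-based one, giving u_loss^pq = −16.59%.

```python
# Allocation outcomes for participant p_1, project q_2 (first row of Table 3).
y_true, y_tradi, y_tend, y_adj = 860, 1342, 980, 961

# Relative reduction in absolute deviation from the fair amount y_true,
# vs. the traditional and tendency-based allocations (inferred formulas).
r_loss_pq = (abs(y_tradi - y_true) - abs(y_adj - y_true)) / abs(y_tradi - y_true) * 100
u_loss_pq = (abs(y_tend - y_true) - abs(y_adj - y_true)) / abs(y_tend - y_true) * 100

print(round(r_loss_pq, 2), round(u_loss_pq, 2))  # 79.05 15.83
```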
Table 4. Experiment parameter settings.

| GID | EID | N | g | b_1 | b_2 | α | σ |
|---|---|---|---|---|---|---|---|
| G0 | 0–8 | [7, 15, 1] | 5 | 0.7 | 2 | 0.1 | 0.06 |
| G1 | 9–16 | 10 | [1, 8, 1] | 0.7 | 2 | 0.1 | 0.06 |
| G2 | 17–24 | 10 | 5 | [0.3, 1, 0.1] | 2 | 0.1 | 0.06 |
| G3 | 25–35 | 10 | 5 | 0.7 | [1, 3, 0.2] | 0.1 | 0.06 |
| G4 | 36–45 | 10 | 5 | 0.7 | 2 | [0.05, 0.5, 0.05] | 0.06 |
| G5 | 46–58 | 10 | 5 | 0.7 | 2 | 0.1 | [0.02, 0.5, 0.04] |
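Each bracketed entry [start, stop, step] in Table 4 reads naturally as an inclusive parameter sweep, which matches the number of experiment IDs (EIDs) in each group. The snippet below checks this interpretation (the inclusive-sweep reading is an assumption inferred from the EID counts):

```python
import numpy as np

# [start, stop, step] sweeps from Table 4; one experiment per swept value.
sweeps = {
    "G0_N":     [7, 15, 1],         # EID 0-8  -> 9 experiments
    "G1_g":     [1, 8, 1],          # EID 9-16 -> 8 experiments
    "G2_b1":    [0.3, 1, 0.1],      # EID 17-24 -> 8 experiments
    "G3_b2":    [1, 3, 0.2],        # EID 25-35 -> 11 experiments
    "G4_alpha": [0.05, 0.5, 0.05],  # EID 36-45 -> 10 experiments
    "G5_sigma": [0.02, 0.5, 0.04],  # EID 46-58 -> 13 experiments
}

# Inclusive sweep length: pad the stop by half a step so the endpoint is kept.
counts = {k: len(np.arange(s, e + st / 2, st)) for k, (s, e, st) in sweeps.items()}
print(counts)
```

The counts (9, 8, 8, 11, 10, 13) match the EID ranges group by group, so the sweeps appear to be inclusive of both endpoints.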
Share and Cite

Tao, Y.; Jiang, B.; Cheng, Q.; Wang, S. A Quadratic Programming Model for Fair Resource Allocation. Mathematics 2025, 13, 2635. https://doi.org/10.3390/math13162635