Fairness in Algorithmic Decision-Making: Applications in Multi-Winner Voting, Machine Learning, and Recommender Systems
Abstract
1. Introduction
2. Multi-Winner Voting
- Dichotomous preference. Each voter classifies candidates into two classes, namely, the approved candidates and the disapproved candidates. In particular, all approved candidates are preferred to all disapproved candidates, and candidates inside each class are equally preferred.
- Linear preference. Each voter ranks all candidates in a linear order ≻, from the best to the worst. For two candidates, a and b, a ≻ b means that the corresponding voter strictly prefers a to b.
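As a minimal illustration (the identifiers below are ours, not from the survey), the two preference models can be represented as follows:

```python
# Dichotomous preference: the set of approved candidates.
dichotomous_vote = {"a", "c"}          # approves a and c; disapproves the rest

# Linear preference: a ranking from best to worst; position implies strict preference.
linear_vote = ["b", "a", "c", "d"]     # b > a > c > d

def prefers(ranking, x, y):
    """True iff the voter with this linear preference strictly prefers x to y."""
    return ranking.index(x) < ranking.index(y)

print(prefers(linear_vote, "b", "c"))  # True
```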
2.1. Voter Fairness in Ranking-Based Voting
- q-Proportionality for solid coalitions (q-PSC) [16]. For a rational number, q, a k-committee w satisfies q-PSC if, for every positive integer ℓ and for every solid coalition U supporting some candidate subset T such that |U| ≥ ℓ·q, it holds that |w ∩ T| ≥ min{ℓ, |T|}.
- Weak q-PSC [16]. A committee w satisfies weak q-PSC if the following holds: for every positive integer ℓ, every candidate subset T such that |T| ≤ ℓ, and every T-solid coalition U of size at least ℓ·q, it holds that |w ∩ T| ≥ min{ℓ, |T|}.
For a property Θ considered in this section (e.g., q-PSC), two computational problems arise.

Θ-Computing
Input: An election E = (C, V) and a positive integer k.
Question: Is there a k-committee w which provides the property Θ at E?

Θ-Testing
Input: An election E = (C, V) and a committee w.
Question: Does w satisfy Θ at E?
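To make the Θ-Testing problem concrete, the following Python sketch (identifiers are ours; the handling of fractional quotas is one reasonable reading of the definitions above) tests whether a committee satisfies (weak) q-PSC by brute force:

```python
from itertools import combinations
from math import floor

def solid_coalition(voters, T):
    """The maximal solid coalition supporting T: voters whose top |T| candidates
    are exactly the set T, in any internal order."""
    return [v for v in voters if set(v[:len(T)]) == T]

def satisfies_q_psc(candidates, voters, committee, q, weak=False):
    """Brute-force test of (weak) q-PSC; exponential in |C|, for tiny elections only.
    Checking maximal solid coalitions suffices, since shrinking a coalition
    only weakens the requirement."""
    w = set(committee)
    for size in range(1, len(candidates) + 1):
        for T in map(set, combinations(candidates, size)):
            U = solid_coalition(voters, T)
            ell_max = floor(len(U) / q)          # largest ell with |U| >= ell * q
            for ell in range(1, ell_max + 1):
                if weak and len(T) > ell:
                    continue                     # weak q-PSC only constrains |T| <= ell
                if len(w & T) < min(ell, len(T)):
                    return False
    return True

# Six voters, two seats, Hare quota q = n/k = 3: each bloc of three deserves a seat.
votes = [["a", "b", "c", "d"]] * 3 + [["c", "d", "a", "b"]] * 3
print(satisfies_q_psc("abcd", votes, ["a", "c"], q=3))  # True
print(satisfies_q_psc("abcd", votes, ["a", "b"], q=3))  # False: the c/d bloc is ignored
```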
- Committee scoring rules. Under a committee scoring rule, each voter assigns a score to each committee based on the positions of the committee members in the preference of this voter, and winning committees are those with the maximum total score. Committee scoring rules were first studied by Elkind et al. [17] as a general framework encapsulating many concrete multi-winner voting rules, including Bloc, k-Borda, and Chamberlin–Courant.
- k-Borda. Each voter gives m − i points to the candidate ranked in the i-th position, where m denotes the number of candidates. The score of a committee from a voter is the sum of the scores of all its members from the voter.
- Bloc. Every voter gives 1 point to all of their top k ranked candidates. The score of a committee from a voter is the sum of the scores of all its members from the voter.
- Single nontransferable vote (SNTV). Every voter gives 1 point to her top ranked candidate. The score of a committee from a voter is the sum of the scores of all its members from the voter.
- Chamberlin–Courant (CC). Different from the above three rules, where all members of the winning committee are counted to accumulate the satisfaction of a voter, in CC only the best candidate in the winning committee contributes to the satisfaction of each voter. In other words, each voter is assumed to be represented only by her best candidate in the winning committee. Precisely, each voter has a nonincreasing mapping α : {1, …, m} → ℝ, such that α(i) is the voter's satisfaction with a candidate ranked in the i-th position. For a voter v with preference ≻_v and a nonempty committee w, let top(v, w) be the top-ranked candidate of v among w, i.e., top(v, w) is the candidate c ∈ w such that c ≻_v c′ for all c′ ∈ w ∖ {c}. The CC score of a committee w from a voter v with mapping α is then α(pos_v(top(v, w))), where pos_v(c) denotes the position of c in ≻_v. In this section, we consider only the Borda satisfaction function α_B, which, for m candidates, is defined by α_B(i) = m − i.
- Monroe’s rule. This rule is similar to the CC rule but with the further restriction that every candidate can represent at most ⌈n/k⌉ voters. Let φ : V → C be an assignment function and, for a candidate c, let φ⁻¹(c) be the set of voters, v, such that φ(v) = c. Moreover, let Φ be the set of all assignment functions from V to C such that |φ⁻¹(c)| ≤ ⌈n/k⌉ for every c ∈ C. The Monroe score of a k-committee w is then defined as the maximum, over all φ ∈ Φ with φ(V) ⊆ w, of Σ_{v∈V} α(pos_v(φ(v))). (A code sketch of these scoring-based rules follows.)
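The following Python sketch illustrates the scoring-based rules above on a toy election. It is our own illustration, not a reference implementation, and the exhaustive winner search is feasible for tiny instances only:

```python
from itertools import combinations

def kborda_score(committee, votes, m):
    """k-Borda: the candidate at (1-based) position i earns m - i points per voter."""
    return sum(m - vote.index(c) - 1 for vote in votes for c in committee)

def bloc_score(committee, votes, k):
    """Bloc: one point from a voter for each committee member among her top k."""
    return sum(1 for vote in votes for c in committee if vote.index(c) < k)

def sntv_score(committee, votes):
    """SNTV: one point from a voter only for her top-ranked candidate."""
    return sum(1 for vote in votes if vote[0] in committee)

def cc_borda_score(committee, votes, m):
    """CC with Borda satisfaction: each voter counts only her best-ranked member."""
    return sum(m - 1 - min(vote.index(c) for c in committee) for vote in votes)

def winning_committees(candidates, votes, k, score_fn):
    """Exhaustive winner determination; fine for tiny instances only."""
    scores = {w: score_fn(w, votes) for w in combinations(candidates, k)}
    best = max(scores.values())
    return [set(w) for w, s in scores.items() if s == best]

votes = [["a", "b", "c", "d"], ["b", "a", "d", "c"], ["c", "d", "a", "b"]]
print(winning_committees("abcd", votes, 2,
                         lambda w, vs: cc_borda_score(w, vs, m=4)))
# [{'a', 'c'}, {'b', 'c'}] -- each voter is represented by a highly ranked candidate
```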
- Single transferable vote (STV). STV rules are a large class of voting rules, each of which is characterized by a rational quota q and some vote-reweighting approach. A common principle of these rules is to guarantee that certain groups of voters are proportionally represented. Fixing a rational quota q and a vote-reweighting approach, the STV rule selects winning committees iteratively as shown below. For a candidate c, let V(c) be the set of voters ranking c in the top.
- Initially, we associate to each voter v a weight denoted by ω(v). (Usually, all voters have weight 1 initially, but this is not necessarily the case.)
- If there is a candidate, c, that is ranked in the top by voters of total weight at least q, that candidate is added to the winning committee. Then, we apply the vote-reweighting approach so that the total weight of all votes ranking c in the top is reduced by q; here, ω(V(c)) denotes the sum of the weights of all voters ranking c in the top before the reweighting. Moreover, the candidate c is deleted from C and from all votes.
- If there is no such candidate c as discussed above, then a candidate that is ranked in the top by the least number of voters is eliminated.
- The procedure terminates when k candidates have been selected.
- Many concrete STV rules have been considered in the literature (see the works by the authors of [18,19] for a history and a summary of many important STV rules). However, for simplicity, in this survey we discuss only STV rules where initially all voters have weight 1 and the uniform reweighting approach is used in Step 2. In particular, according to this reweighting approach, in Step 2 the weight of a voter v who ranks c in the top is reduced to ω(v) · (ω(V(c)) − q)/ω(V(c)). Two important STV rules are obtained when q is equal to the Hare quota or the Droop quota, i.e., q = n/k and q = ⌊n/(k + 1)⌋ + 1, respectively. We denote these two special STV rules as H-STV and D-STV, respectively.
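The sketch below implements one reading of the simplified STV just described. Support is measured by current total weight, ties are broken arbitrarily, and all surviving candidates are elected once they exactly fill the open seats (a standard convention the description above leaves implicit):

```python
from fractions import Fraction

def stv(candidates, votes, k, q):
    """STV with uniform reweighting (identifiers are ours): a candidate whose
    current supporters carry total weight >= q is elected, and its supporters'
    weights are rescaled by (W - q) / W, where W is that total weight."""
    votes = [list(v) for v in votes]                 # defensive copy: we delete candidates
    weight = [Fraction(1)] * len(votes)
    remaining, committee = list(candidates), []
    while len(committee) < k and remaining:
        if len(remaining) + len(committee) == k:     # survivors exactly fill the seats
            return committee + remaining
        support = {c: sum((wt for v, wt in zip(votes, weight) if v and v[0] == c),
                          Fraction(0)) for c in remaining}
        c = max(remaining, key=lambda x: support[x])
        if support[c] >= q:                          # quota met: elect c ...
            committee.append(c)
            W = support[c]
            for i, v in enumerate(votes):
                if v and v[0] == c:
                    weight[i] *= (W - q) / W         # ... and reweight uniformly
        else:                                        # nobody meets the quota:
            c = min(remaining, key=lambda x: support[x])   # eliminate the weakest
        remaining.remove(c)
        for v in votes:
            if c in v:
                v.remove(c)
    return committee

# H-STV (Hare quota q = n/k = 3) on six voters and two seats.
votes = [list("abcd")] * 3 + [list("cdab")] * 2 + [list("dcab")]
print(stv("abcd", votes, 2, Fraction(6, 2)))   # ['a', 'c']
```

Passing q = Fraction(n, k) gives H-STV; passing q = n // (k + 1) + 1 gives D-STV.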
2.2. Voter Fairness in Approval-Based Voting
- Justified representation (JR). A k-committee, w, provides JR if, for every subset U ⊆ V of at least n/k votes such that ⋂_{v∈U} v ≠ ∅, at least one of the candidates approved by some vote in U is included in w, i.e., w ∩ (⋃_{v∈U} v) ≠ ∅.
- Proportional justified representation (PJR). A k-committee, w, provides PJR if, for every positive integer ℓ ≤ k and for every subset U ⊆ V of at least ℓ·n/k votes such that |⋂_{v∈U} v| ≥ ℓ, the committee w contains at least ℓ candidates from ⋃_{v∈U} v, i.e., |w ∩ (⋃_{v∈U} v)| ≥ ℓ. This property was proposed in the work by the authors of [24].
- Extended justified representation (EJR). A k-committee, w, provides EJR if, for every positive integer ℓ ≤ k and for every subset U ⊆ V of at least ℓ·n/k votes such that |⋂_{v∈U} v| ≥ ℓ, the committee w contains at least ℓ candidates from some vote in U, i.e., |w ∩ v| ≥ ℓ for at least one v ∈ U. This property was proposed by Aziz et al. [23].
- Perfect representation (PR). PR is defined for special elections. Particularly, let E = (C, V) be an election such that n = t·k for some integer t. A k-committee, w = {c₁, …, c_k}, provides PR if there is a partition (V₁, …, V_k) of V such that |V_i| = t for every i ∈ {1, …, k} and c_i is approved by all votes in V_i. This property was studied in the work by the authors of [24].
- Approval voting (AV). The AV score of a candidate is the number of votes approving this candidate, and a winning k-committee consists of k candidates with the highest AV scores.
- Satisfaction approval voting (SAV). The SAV score of a candidate c is defined as Σ_{v∈V : c∈v} 1/|v|; that is, every voter splits one point equally among all of her approved candidates. A winning k-committee consists of k candidates with the highest SAV scores.
- Minimax approval voting (MAV). This rule aims to find a committee that is as close as possible to every voter’s opinion. More precisely, the Hamming distance between a committee w and a vote v is H(w, v) = |w ∖ v| + |v ∖ w|, and this rule selects a k-committee w minimizing max_{v∈V} H(w, v).
- Proportional approval voting (PAV). The PAV score of a committee w is defined as Σ_{v∈V} Σ_{i=1}^{|v∩w|} 1/i. A winning k-committee is one with the maximum PAV score.
- Sequential proportional approval voting (seq-PAV). This rule provides an approximate solution to the PAV rule. It selects k winners in k rounds, one in each round. Precisely, initially we let w = ∅. Assume that we have an i-committee w after round i, where i < k. Then, in the next round, we find a candidate c ∈ C ∖ w which offers the maximum PAV score of w ∪ {c}, and we extend w by resetting w := w ∪ {c}. After k rounds, w contains exactly k candidates.
- Chamberlin–Courant approval voting (CCAV). This rule is a variant of the CC rule for approval-based voting. In particular, a voter is satisfied with a committee if and only if the committee contains at least one of her approved candidates. This rule selects a k-committee that satisfies the maximum number of voters.
- Monroe’s approval voting (MonAV). This is a variant of Monroe’s rule for approval-based voting and is similar to CCAV. In CCAV, a candidate can satisfy all voters who approve this candidate. However, in MonAV, we require that each candidate be assigned to at most ⌈n/k⌉ voters approving this candidate and, moreover, that each voter be assigned to at most one candidate. The MonAV score of a committee is the maximum number of voters who are satisfied by this committee under the above conditions. (A code sketch of several approval-based scores follows.)
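As promised above, here is a small sketch of several approval-based scores (our own illustration; floating-point harmonic sums are used for brevity):

```python
def av_score(c, votes):
    """AV: the number of votes approving c."""
    return sum(c in v for v in votes)

def sav_score(c, votes):
    """SAV: every voter splits one point equally among her approved candidates."""
    return sum(1 / len(v) for v in votes if c in v)

def pav_score(committee, votes):
    """PAV: a voter with t approved committee members contributes 1 + 1/2 + ... + 1/t."""
    committee = set(committee)
    return sum(sum(1 / i for i in range(1, len(v & committee) + 1)) for v in votes)

def seq_pav(candidates, votes, k):
    """seq-PAV: in each round, add the candidate with the largest marginal PAV gain."""
    committee = set()
    for _ in range(k):
        committee.add(max((c for c in candidates if c not in committee),
                          key=lambda c: pav_score(committee | {c}, votes)))
    return committee

votes = [{"a", "b"}, {"a", "b"}, {"a", "c"}, {"c", "d"}]
print(seq_pav("abcd", votes, 2))   # {'a', 'c'}: c covers the voters a leaves behind
```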
The Phragmén rules below are defined via load distributions. A load distribution is a two-dimensional array x = (x_{v,c})_{v∈V, c∈C} of rational numbers satisfying the following conditions.
- For each v ∈ V and c ∈ C, it holds that 0 ≤ x_{v,c} ≤ 1.
- For every v ∈ V and c ∈ C, if c ∉ v, then x_{v,c} = 0. This ensures that the load of a winner is only distributed over voters approving that winner.
- It holds that Σ_{v∈V} Σ_{c∈C} x_{v,c} = k. That is, there are in total k points to be distributed.
- For every c ∈ C, it holds that Σ_{v∈V} x_{v,c} ∈ {0, 1}. This, together with the previous restriction, ensures that exactly k candidates have their unit loads fully distributed.

Given a load distribution x, let w(x) = {c ∈ C : Σ_{v∈V} x_{v,c} = 1} denote the corresponding k-committee.
- max-Phragmén. This rule first calculates a load distribution x such that the maximal voter load max_{v∈V} Σ_{c∈C} x_{v,c} is minimized. Then, w(x) is the winning committee.
- var-Phragmén. This rule first calculates a load distribution x such that Σ_{v∈V} (Σ_{c∈C} x_{v,c})² is minimized. Then, w(x) is the winning committee.
- seq-Phragmén. This rule takes k rounds to select the winners, one in each round. For a candidate c, let V(c) be the set of voters approving c. Initially, let w = ∅. Let x_v^(j) denote the load of voter v after round j; at first, all voters have a load of 0, i.e., x_v^(0) = 0 for all v ∈ V. As a first candidate, we select one, c, that receives the most approvals and add c into w; then, the load of each voter approving this selected candidate is increased to 1/|V(c)|. In the next round, we choose a candidate that induces a (new) maximal voter load that is as small as possible, now taking into account that some voters already carry a nonzero load. The new maximal load if some candidate c is chosen as the (j + 1)-st committee member is measured as s^(j+1)(c) = (1 + Σ_{v∈V(c)} x_v^(j)) / |V(c)|; seq-Phragmén selects a candidate minimizing this quantity and sets the load of every voter in V(c) to this value.
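A sketch of seq-Phragmén with exact rational arithmetic follows; it assumes every round still has an unelected candidate with at least one approval (identifiers are ours):

```python
from fractions import Fraction

def seq_phragmen(candidates, votes, k):
    """seq-Phragmén sketch: each elected candidate carries one unit of load, spread
    over its approvers so that the maximal resulting voter load is smallest."""
    load = [Fraction(0)] * len(votes)
    committee = []
    for _ in range(k):
        best, best_load = None, None
        for c in candidates:
            if c in committee:
                continue
            approvers = [i for i, v in enumerate(votes) if c in v]
            if not approvers:
                continue
            new_load = (1 + sum(load[i] for i in approvers)) / len(approvers)
            if best is None or new_load < best_load:
                best, best_load = c, new_load
        committee.append(best)
        for i, v in enumerate(votes):          # approvers of the winner share its load
            if best in v:
                load[i] = best_load
    return committee

votes = [{"a", "b"}] * 3 + [{"c", "d"}] * 2
print(seq_phragmen("abcd", votes, 2))   # ['a', 'c']: the second seat goes to the c/d bloc
```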
Is there a natural rule (or an algorithm) whose outcome always provides JR, EJR, PJR, and PR simultaneously?
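Such questions are also computationally delicate: as Table 3 records, even testing EJR is co-NP-complete. For intuition only, a brute-force tester (exponential in the number of voters; our own sketch) looks as follows:

```python
from itertools import combinations

def satisfies_ejr(votes, committee, k):
    """Brute-force EJR test: no ell-cohesive group may be left with every member
    having fewer than ell approved committee members. It suffices to check groups
    of the minimal qualifying size, since any larger violating group contains a
    violating subgroup of that size."""
    n, w = len(votes), set(committee)
    for ell in range(1, k + 1):
        size = (ell * n + k - 1) // k            # smallest integer >= ell * n / k
        for group in combinations(range(n), size):
            common = set.intersection(*(set(votes[i]) for i in group))
            if len(common) >= ell and all(len(set(votes[i]) & w) < ell for i in group):
                return False
    return True

votes = [{"a", "b"}] * 4
print(satisfies_ejr(votes, {"a", "b"}, 2))   # True
print(satisfies_ejr(votes, {"a", "c"}, 2))   # False: a 2-cohesive group gets one seat
```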
2.3. Fairness for Candidates with Sensitive Attributes
2.4. Stable Fairness
3. Machine Learning Algorithms
3.1. Fairness Notions
- Disparate Treatment. Given a dataset D = (X, A, Y), with a set of sensitive attributes A (such as race, gender, etc.), remaining attributes X, binary class to be predicted Y, and predicted binary class Ŷ, disparate treatment is said to exist in D if P(Ŷ ∣ X, A) ≠ P(Ŷ ∣ X); that is, the prediction depends on the sensitive attributes even when the remaining attributes are fixed.
- Disparate Impact. Given a dataset D = (X, A, Y), with a set of sensitive attributes A (such as race, gender, etc.), remaining attributes X, and binary class to be predicted Y, disparate impact is said to exist in D if P(Y = 1 ∣ A = 0) / P(Y = 1 ∣ A = 1) ≤ τ for some threshold τ; a common choice is τ = 0.8, reflecting the “80% rule”.
- Anticlassification, also known as unawareness, seeks to achieve fairness in ML outcomes by excluding the use of protected features such as race, gender, or ethnicity from the statistical model. This notion is consistent with disparate treatment. Despite being intuitive, easy to use, and having legal support, a crucial difficulty of this approach is that a protected feature might be correlated with many other unprotected features, and it is practically infeasible to identify all such covariate “proxies” and remove them from the statistical model. For example, the protected class race might be correlated with various other features, such as education level, salary, and life expectancy, and removing all these proxies from the statistical model could have detrimental effects on predictive performance. Consider a vector x that represents the visible attributes of an individual, such as race, gender, education level, and age. An algorithmic decision can be represented as a function d(x) ∈ {0, 1}, where d(x) = 1 means that a positive action is taken. Suppose that x can be partitioned into protected and unprotected features: x = (x_p, x_u), where x_p collects all protected features. Then, anticlassification requires that decisions do not consider protected attributes; more formally, d(x) = d(x′) for all x, x′ such that x_u = x′_u.
- Statistical parity (also known as demographic parity, independence, or classification parity) requires that common measures of predictive accuracy and performance errors remain uniform across various groups segmented by the protected features. This includes notions such as equality of acceptance rates, equality of accuracy, equality of false positive/false negative rates, and equality of positive/negative predictive values [55,56,57]. The main idea of this notion is to quantify the benefit and harm of the impact of an ML prediction on groups segmented by protected attributes, to equate them across groups, and to distribute the errors among different stakeholders equally [55]. This notion of fairness has recently found application in criminal justice [58] and is consistent with disparate impact. Measures of classification parity based on the false positive rate and on the proportion of decisions that are positive have received considerable attention in the machine learning domain [55,59,60]. For a formal definition, please refer to Table 5. Recent research by Hu and Chen [61] suggests that enforcing statistical parity criteria in the short term helps build up the reputation of the disadvantaged minority in the labor market in the long run. Note that a critical flaw of statistical parity is that it can be satisfied by arbitrary configurations; for example, selecting the best and most qualified candidates from one group and random alternatives from the other group still satisfies statistical parity. Moreover, the definition ignores any possible correlation between the positive outcome and the protected attributes.
- Calibration requires that ML outcomes remain independent of protected features after controlling for estimated risk. Calibration relates to the fairness of risk scores and requires that, for a given risk score, the proportion of individuals re-offending remains uniform across protected groups. Calibration is attractive as a fairness condition because it does not require much intervention in the existing decision-making process [62]. A major disadvantage of calibration is that risk scores can be manipulated to appear calibrated by ignoring information about the favored group [63]. Formally, given risk scores s(x), calibration is satisfied when P(Y = 1 ∣ s(x), A = a) = P(Y = 1 ∣ s(x)) for every group a.
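For intuition, the following sketch computes several of the group-level quantities discussed above from predictions and group labels (a toy illustration; which group counts as protected is an assumption passed by the caller):

```python
import numpy as np

def statistical_parity_gap(y_pred, a):
    """Absolute difference in positive-decision rates between the two groups (0 = parity)."""
    y_pred, a = np.asarray(y_pred), np.asarray(a)
    return abs(y_pred[a == 1].mean() - y_pred[a == 0].mean())

def disparate_impact_ratio(y_pred, a, protected=0):
    """Ratio of the protected group's positive rate to the other group's;
    values below roughly 0.8 are commonly read as disparate impact."""
    y_pred, a = np.asarray(y_pred), np.asarray(a)
    return y_pred[a == protected].mean() / y_pred[a != protected].mean()

def fpr_gap(y_true, y_pred, a):
    """Classification-parity check on false positive rates across the two groups."""
    y_true, y_pred, a = map(np.asarray, (y_true, y_pred, a))
    fpr = lambda g: y_pred[(a == g) & (y_true == 0)].mean()
    return abs(fpr(1) - fpr(0))

y_true = [0, 0, 1, 1, 0, 0, 1, 1]
y_pred = [0, 1, 1, 1, 0, 0, 0, 1]
a      = [0, 0, 0, 0, 1, 1, 1, 1]
print(statistical_parity_gap(y_pred, a))   # 0.5
print(fpr_gap(y_true, y_pred, a))          # 0.5
```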
3.2. Fairness Mechanisms
- A. Preprocessing. Preprocessing methods remove the protected features or their covariates before training the model. Similar to anticlassification, this approach comes with severe disadvantages, as a protected feature might be correlated with many other unprotected features, and it is practically infeasible to identify all such covariates and exclude them without a substantial loss in predictive accuracy. Kamiran and Calders [71] suggest a set of data preprocessing techniques aimed at ensuring fairness for classification tasks. These include suppression, massaging the dataset, reweighting, and sampling.
- Suppression. In this process, exactly as in anticlassification, all features that correlate with the protected features are first identified and then removed, together with the protected features themselves, from the classification model.
- Massaging the dataset. In this process, labels of some data points are manipulated in order to remove existing discrimination from the training data. In order to find a good set of labels to change, Kamiran and Calders [71] proposed a combination of ranking and learning.
- Reweighting. Instead of changing the labels, in this method the tuples in the training dataset are assigned asymmetric weights in order to overcome the bias.
- Sampling. Kamiran and Calders [71] introduced “uniform sampling” and “preferential sampling”, where the training data is sampled with the help of a ranker as a debiasing method.
Kamiran and Calders [71] found that suppression of the protected attributes does not always remove bias, and that massaging and preferential sampling performed best for debiasing with minimal loss in accuracy. (A small sketch of the reweighting computation appears after this list.)

Another idea developed in preprocessing is to learn a new representation of the data that removes the information correlated with the sensitive attribute [50,72,73]. The central algorithm, such as a classifier, then uses the cleaned data. An advantage of this method is that the analyst can avoid modifying the classifier or accessing sensitive attributes at test time.
- B. In-processing. In this method, the optimization procedure is modified to incorporate the cost of unfairness. This is typically done by adding a constraint to the optimization problem or by adding the cost of unfairness as a regularizer. For example, Agarwal et al. incorporate cost-sensitive classification into their original objective function [59]. Given a dataset {(x_i, c_i⁰, c_i¹)}_{i=1}^n, where c_i⁰ is the cost of predicting 0 on x_i and c_i¹ is the cost of predicting 1 on x_i, a cost-sensitive classification algorithm given the dataset outputs ĥ = argmin_{h∈H} Σ_{i=1}^{n} [h(x_i)·c_i¹ + (1 − h(x_i))·c_i⁰]. More generally, the reduction approach by Agarwal et al. reduces training with fairness constraints to solving a series of cost-sensitive classification problems using off-the-shelf methods [59]. An important advantage of this method is that there is no need to access sensitive attributes at test time. This method also provides higher flexibility in the trade-off between accuracy and fairness measures. An important disadvantage is that this method is task-specific and requires modification of the classifier, which can often exponentially increase the computational complexity. Methods that optimize counterfactual fairness also fall into this category. Kusner et al. [68] propose “counterfactual fairness”, which explicitly specifies the assumptions about the data-generating process. This can be done by adding a linear or convex surrogate for the fairness constraint to the learning model. For example, consider a predictive problem with fairness considerations, where A, X, and Y represent the protected attributes, remaining attributes, and the output of interest, respectively; counterfactual fairness then requires that the prediction for an individual coincide with the prediction in the counterfactual world where the individual's protected attributes had been different (see Table 5 for the formal definition).
- C. Postprocessing. Postprocessing methods edit the posteriors in order to satisfy the fairness constraints; the method searches for a proper decision threshold on the original score function for each group. We refer to Hardt et al. [55] for more details on this postprocessing method. This method requires test-time access to the protected attributes and lacks flexibility in the trade-off between accuracy and fairness. However, it benefits from being general and applicable to any classifier without any modification.
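As an example of the preprocessing family referenced above, the following sketch computes reweighting factors in the spirit of Kamiran and Calders [71]: each training example in cell (a, y) receives the ratio of the cell's expected frequency under independence of A and Y to its observed frequency (identifiers are ours):

```python
from collections import Counter

def reweigh(a_vals, y_vals):
    """Reweighting sketch: weight(a, y) = P(A=a) * P(Y=y) / P(A=a, Y=y),
    so underrepresented (attribute, label) combinations are upweighted."""
    n = len(a_vals)
    count_a, count_y = Counter(a_vals), Counter(y_vals)
    count_ay = Counter(zip(a_vals, y_vals))
    return [count_a[a] * count_y[y] / (n * count_ay[(a, y)])
            for a, y in zip(a_vals, y_vals)]

a = [0, 0, 0, 1, 1, 1]
y = [1, 1, 0, 0, 0, 1]
print(reweigh(a, y))   # [0.75, 0.75, 1.5, 0.75, 0.75, 1.5]
```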
4. Recommender Systems
4.1. Fairness for Users and Groups of Users
- Proportionality. Given a package, P, and a parameter, δ, we say that a user u likes an item i if i is ranked in the top-δ% of the preferences of u over all items. Consequently, for a user, u, and a package, P, we say that P is m-proportional for u, for a positive integer m, if there exist at least m items in P which are liked by u.
- Envy-freeness. Given a group, G, a package, P, and a parameter, δ, we say that a user u ∈ G is envy-free for an item i ∈ P if u is in the top-δ% of the users in G with respect to their preferences for i. Consequently, for a user u ∈ G, a package P, and a group G, we say that the package P is m-envy-free for u, for a positive integer m, if u is envy-free for at least m items in P.
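A small sketch of these two package-fairness checks follows (our own reading, with δ taken as a fraction rather than a percentage, and ties in rankings broken arbitrarily):

```python
from math import ceil

def likes(ranking, item, delta):
    """A user likes an item if it lies within the top delta-fraction of her ranking."""
    return ranking.index(item) < ceil(delta * len(ranking))

def is_m_proportional(ranking, package, delta, m):
    """P is m-proportional for a user if she likes at least m items in P."""
    return sum(likes(ranking, i, delta) for i in package) >= m

def is_m_envy_free(user, group_rankings, package, delta, m):
    """u is envy-free for item i if u ranks among the top delta-fraction of the
    group when members are ordered by how highly they rank i."""
    cutoff = ceil(delta * len(group_rankings))
    envy_free_items = 0
    for i in package:
        order = sorted(range(len(group_rankings)),
                       key=lambda u: group_rankings[u].index(i))
        envy_free_items += order.index(user) < cutoff
    return envy_free_items >= m

rankings = [["x", "y", "z", "w"], ["y", "x", "w", "z"], ["z", "w", "x", "y"]]
print(is_m_proportional(rankings[0], ["x", "z"], 0.5, 1))   # True: x is in u0's top half
print(is_m_envy_free(0, rankings, ["x", "z"], 0.5, 1))      # True
```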
4.2. Fairness for Items
4.3. Multiple Stakeholder Fairness
5. Conclusions
5.1. Challenges and Future Research Directions
5.2. Limitations
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Rejab, F.B.; Nouira, K.; Trabelsi, A. Health Monitoring Systems Using Machine Learning Techniques. In Intelligent Systems for Science and Information; Springer: Cham, Switzerland, 2014; pp. 423–440. [Google Scholar]
- Chalfin, A.; Danieli, O.; Hillis, A.; Jelveh, Z.; Luca, M.; Ludwig, J.; Mullainathan, S. Productivity and Selection of Human Capital with Machine Learning. Am. Econ. Rev. 2016, 106, 124–127. [Google Scholar] [CrossRef] [Green Version]
- Waters, A.; Miikkulainen, R. GRADE: Machine Learning Support for Graduate Admissions. AI Mag. 2014, 35, 64–75. [Google Scholar] [CrossRef]
- Barocas, S.; Selbst, A.D. Big Data’s Disparate Impact. Calif. Law Rev. 2016, 104, 671–732. [Google Scholar] [CrossRef]
- Caliskan, A.; Bryson, J.J.; Narayanan, A. Semantics Derived Automatically from Language Corpora Contain Human-Like Biases. Science 2017, 356, 183–186. [Google Scholar] [CrossRef] [PubMed]
- Angwin, J.; Larson, J.; Mattu, S.; Kirchner, L. Machine Bias. ProPublica, 23 May 2016. [Google Scholar]
- Epstein, R.; Robertson, R.E. The Search Engine Manipulation Effect (SEME) and Its Possible Impact on the Outcomes of Elections. Proc. Natl. Acad. Sci. USA 2015, 112, E4512–E4521. [Google Scholar] [CrossRef] [PubMed]
- Conover, M.D.; Ratkiewicz, J.; Francisco, M.R.; Gonçalves, B.; Menczer, F.; Flammini, A. Political Polarization on Twitter. In Proceedings of the ICWSM 2011 5th International Conference on Weblogs and Social Media, Barcelona, Spain, 17–21 July 2011. [Google Scholar]
- Garimella, V.R.K.; Weber, I. A Long-Term Analysis of Polarization on Twitter. In Proceedings of the ICWSM 2017 11th International Conference on Web and Social Media, Montréal, QC, Canada, 15–18 May 2017; pp. 528–531. [Google Scholar]
- Leavy, S. Gender Bias in Artificial Intelligence: The Need for Diversity and Gender Theory in Machine Learning. In Proceedings of the 1st International Workshop on Gender Equality in Software Engineering, Gothenburg, Sweden, 28 May 2018; pp. 14–16. [Google Scholar]
- Smith, J.H. Aggregation of Preferences with Variable Electorate. Econometrica 1973, 41, 1027–1041. [Google Scholar] [CrossRef]
- May, K.O. A Set of Independent Necessary and Sufficient Conditions for Simple Majority Decision. Econometrica 1952, 20, 680–684. [Google Scholar] [CrossRef]
- Woodall, D. Properties of Preferential Election Rules. Voting Matters 1994, 3, 8–15. [Google Scholar]
- Skowron, P.; Faliszewski, P.; Slinko, A. Axiomatic Characterization of Committee Scoring Rules. J. Econ. Theory 2019, 180, 244–273. [Google Scholar] [CrossRef]
- Dummett, M. Voting Procedures; Oxford University Press: Oxford, UK, 1984. [Google Scholar]
- Aziz, H.; Lee, B.E. The Expanding Approvals Rule: Improving Proportional Representation and Monotonicity. arXiv 2017, arXiv:1708.07580. [Google Scholar] [CrossRef]
- Elkind, E.; Faliszewski, P.; Skowron, P.; Slinko, A. Properties of Multiwinner Voting Rules. Soc. Choice Welf. 2017, 48, 599–632. [Google Scholar] [CrossRef]
- Tideman, N. The Single Transferable Vote. J. Econ. Perspect. 1995, 9, 27–38. [Google Scholar] [CrossRef]
- Tideman, N.; Richardson, D. Better Voting Methods Through Technology: The Refinement-Manageability Trade-off in the Single Transferable Vote. Public Choice 2000, 103, 13–34. [Google Scholar] [CrossRef]
- Lu, T.; Boutilier, C. Budgeted Social Choice: From Consensus to Personalized Decision Making. In Proceedings of the IJCAI 2011 22nd International Joint Conference on Artificial Intelligence, Barcelona, Spain, 16–22 July 2011; pp. 280–286. [Google Scholar]
- Betzler, N.; Slinko, A.; Uhlmann, J. On the Computation of Fully Proportional Representation. J. Artif. Intell. Res. 2013, 47, 475–519. [Google Scholar] [CrossRef]
- Sánchez-Fernández, L.; Fernández, N.; Fisteus, J.; Basanta-Val, P. Some Notes on Justified Representation. In Proceedings of the M-PREF 2016 10th Multidisciplinary Workshop on Advances in Preference Handling, New York, NY, USA, 9–11 July 2016. [Google Scholar]
- Aziz, H.; Brill, M.; Conitzer, V.; Elkind, E.; Freeman, R.; Walsh, T. Justified Representation in Approval-Based Committee Voting. Soc. Choice Welf. 2017, 48, 461–485. [Google Scholar] [CrossRef]
- Fernández, L.S.; Elkind, E.; Lackner, M.; García, N.F.; Arias-Fisteus, J.; Basanta-Val, P.; Skowron, P. Proportional Justified Representation. In Proceedings of the 31st AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; pp. 670–676. [Google Scholar]
- Aziz, H.; Elkind, E.; Huang, S.; Lackner, M.; Fernández, L.S.; Skowron, P. On the Complexity of Extended and Proportional Justified Representation. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; pp. 902–909. [Google Scholar]
- Aziz, H.; Gaspers, S.; Gudmundsson, J.; Mackenzie, S.; Mattei, N.; Walsh, T. Computational Aspects of Multi-Winner Approval Voting. In Proceedings of the AAMAS 2015 14th International Conference on Autonomous Agents and Multiagent Systems, Istanbul, Turkey, 4–8 May 2015; pp. 107–115. [Google Scholar]
- Brams, S.J.; Kilgour, D.M. Satisfaction Approval Voting. In Voting Power and Procedures; Fara, R., Leech, D., Salles, M., Eds.; Springer: Berlin, Germany, 2014; pp. 323–346. [Google Scholar] [Green Version]
- Brams, S.; Fishburn, P. Approval Voting. Am. Political Sci. Rev. 1978, 72, 831–847. [Google Scholar] [CrossRef]
- Janson, S. Phragmén’s and Thiele’s Election Methods. arXiv 2016, arXiv:1611.08826. [Google Scholar]
- Zhou, A.; Yang, Y.; Guo, J. Parameterized Complexity of Committee Elections with Dichotomous and Trichotomous Votes. In Proceedings of the AAMAS 2019 18th International Conference on Autonomous Agents and MultiAgent Systems, Montreal, QC, Canada, 13–17 May 2019; pp. 503–510. [Google Scholar]
- Brams, S.J.; Kilgour, D.M.; Sanver, M.R. A Minimax Procedure for Electing Committees. Public Choice 2007, 132, 401–420. [Google Scholar] [CrossRef]
- Yang, Y.; Wang, J. Complexity of Additive Committee Selection with Outliers. In Proceedings of the AAMAS 2019 18th International Conference on Autonomous Agents and MultiAgent Systems, Montreal, QC, Canada, 13–17 May 2019; pp. 2291–2293. [Google Scholar]
- Yang, Y.; Wang, J. Multiwinner Voting with Restricted Admissible Sets: Complexity and Strategyproofness. In Proceedings of the IJCAI 2018 27th International Joint Conference on Artificial Intelligence, Stockholm, Sweden, 13–19 July 2018; pp. 576–582. [Google Scholar]
- Yang, Y. On the Tree Representations of Dichotomous Preferences. In Proceedings of the IJCAI 2019 28th International Joint Conference on Artificial Intelligence, Macao, China, 10–16 August 2019. [Google Scholar]
- Fernández, L.S.; Fisteus, J.A. Monotonicity Axioms in Approval-based Multi-winner Voting Rules. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS 2019, Montreal, QC, Canada, 13–17 May 2019; pp. 485–493. [Google Scholar]
- LeGrand, R. Analysis of the Minimax Procedure; Technical Report; Department of Computer Science and Engineering, Washington University: St. Louis, MO, USA, 2004. [Google Scholar]
- Procaccia, A.D.; Rosenschein, J.S.; Zohar, A. On the Complexity of Achieving Proportional Representation. Soc. Choice Welf. 2008, 30, 353–362. [Google Scholar] [CrossRef]
- Brill, M.; Freeman, R.; Janson, S.; Lackner, M. Phragmén’s Voting Methods and Justified Representation. In Proceedings of the AAAI 2017 31st AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; pp. 406–413. [Google Scholar]
- Peters, D. Single-Peakedness and Total Unimodularity: New Polynomial-Time Algorithms for Multi-Winner Elections. In Proceedings of the AAAI 2018 32nd AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; pp. 1169–1176. [Google Scholar]
- Yang, Y.; Wang, J. Parameterized Complexity of Multi-winner Determination: More Effort Towards Fixed-Parameter Tractability. In Proceedings of the AAMAS 2018 17th International Conference on Autonomous Agents and MultiAgent Systems, Stockholm, Sweden, 10–15 July 2018; pp. 2142–2144. [Google Scholar]
- Celis, L.E.; Huang, L.; Vishnoi, N.K. Multiwinner Voting with Fairness Constraints. In Proceedings of the IJCAI 2018 27th International Joint Conference on Artificial Intelligence, Stockholm, Sweden, 13–19 July 2018; pp. 144–151. [Google Scholar]
- Monroe, B.L. Fully Proportional Representation. Am. Political Sci. Rev. 1995, 89, 925–940. [Google Scholar] [CrossRef]
- Koriyama, Y.; Macé, A.; Treibich, R.; Laslier, J.F. Optimal Apportionment. J. Political Econ. 2013, 121, 584–608. [Google Scholar] [CrossRef]
- Brill, M.; Laslier, J.F.; Skowron, P. Multiwinner Approval Rules as Apportionment Methods. In Proceedings of the AAAI 2017 31st AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; pp. 414–420. [Google Scholar]
- Bredereck, R.; Faliszewski, P.; Igarashi, A.; Lackner, M.; Skowron, P. Multiwinner Elections with Diversity Constraints. In Proceedings of the AAAI 2018 32nd AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; pp. 933–940. [Google Scholar]
- Srivastava, M.; Heidari, H.; Krause, A. Mathematical Notions vs. Human Perception of Fairness: A Descriptive Approach to Fairness for Machine Learning. In Proceedings of the KDD 2019 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Anchorage, AK, USA, 4–8 August 2019. [Google Scholar]
- Cheng, Y.; Jiang, Z.; Munagala, K.; Wang, K. Group Fairness in Committee Selection. In Proceedings of the EC 2019 20th ACM Conference on Economics and Computation, Phoenix, AZ, USA, 24–28 June 2019; pp. 263–279. [Google Scholar]
- Feldman, M.; Friedler, S.A.; Moeller, J.; Scheidegger, C.; Venkatasubramanian, S. Certifying and Removing Disparate Impact. In Proceedings of the KDD 2015 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Sydney, Australia, 10–13 August 2015; pp. 259–268. [Google Scholar]
- Luong, B.T.; Ruggieri, S.; Turini, F. k-NN As an Implementation of Situation Testing for Discrimination Discovery and Prevention. In Proceedings of the KDD 2011 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Diego, CA, USA, 21–24 August 2011; pp. 502–510. [Google Scholar]
- Zemel, R.; Wu, Y.; Swersky, K.; Pitassi, T.; Dwork, C. Learning Fair Representations. In Proceedings of the ICML 2013 30th International Conference on International Conference on Machine Learning, Atlanta, GA, USA, 16–21 June 2013; pp. 325–333. [Google Scholar]
- Zafar, M.B.; Valera, I.; Gomez-Rodriguez, M.; Gummadi, K.P. Fairness Constraints: A Flexible Approach for Fair Classification. J. Mach. Learn. Res. 2019, 20, 1–42. [Google Scholar]
- Zafar, M.B.; Valera, I.; Gomez Rodriguez, M.; Gummadi, K.P. Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification Without Disparate Mistreatment. In Proceedings of the WWW 2017 26th International Conference on World Wide Web, Perth, Australia, 3–7 April 2017; pp. 1171–1180. [Google Scholar]
- Bonchi, F.; Hajian, S.; Mishra, B.; Ramazzotti, D. Exposing the Probabilistic Causal Structure of Discrimination. Int. J. Data Sci. Anal. 2017, 3, 1–21. [Google Scholar] [CrossRef]
- Grgić-Hlača, N.; Zafar, M.B.; Gummadi, K.P.; Weller, A. Beyond Distributive Fairness in Algorithmic Decision Making: Feature Selection for Procedurally Fair Learning. In Proceedings of the AAAI 2018 32nd AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; pp. 51–60. [Google Scholar]
- Hardt, M.; Price, E.; Srebro, N. Equality of Opportunity in Supervised Learning. In Proceedings of the NIPS 2016 30th Conference on Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; pp. 3315–3323. [Google Scholar]
- Kleinberg, J.M.; Mullainathan, S.; Raghavan, M. Inherent Trade-Offs in the Fair Determination of Risk Scores. In Proceedings of the ITCS 2017 8th Innovations in Theoretical Computer Science Conference, Berkeley, CA, USA, 9–11 January 2017; pp. 43:1–43:23. [Google Scholar]
- Corbett-Davies, S.; Goel, S. The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning. arXiv 2018, arXiv:1808.00023. [Google Scholar]
- Berk, R.; Heidari, H.; Jabbari, S.; Joseph, M.; Kearns, M.J.; Morgenstern, J.; Neel, S.; Roth, A. A Convex Framework for Fair Regression. arXiv 2017, arXiv:1706.02409. [Google Scholar]
- Agarwal, A.; Beygelzimer, A.; Dudík, M.; Langford, J.; Wallach, H.M. A Reductions Approach to Fair Classification. In Proceedings of the ICML 2018 35th International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; pp. 60–69. [Google Scholar]
- Chouldechova, A. Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments. arXiv 2017, arXiv:1703.00056. [Google Scholar] [CrossRef] [PubMed]
- Hu, L.; Chen, Y. A Short-term Intervention for Long-term Fairness in the Labor Market. In Proceedings of the WWW 2018 World Wide Web Conference on World Wide Web, Lyon, France, 23–27 April 2018; pp. 1389–1398. [Google Scholar] [CrossRef]
- Barocas, S.; Hardt, M.; Narayanan, A. Fairness and Machine Learning. arXiv 2017, arXiv:1712.03586. [Google Scholar]
- Corbett-Davies, S.; Pierson, E.; Feller, A.; Goel, S.; Huq, A. Algorithmic Decision Making and the Cost of Fairness. In Proceedings of the KDD 2017 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada, 13–17 August 2017; pp. 797–806. [Google Scholar]
- Dwork, C.; Hardt, M.; Pitassi, T.; Reingold, O.; Zemel, R.S. Fairness Through Awareness. In Proceedings of the ITCS 2012 3rd Innovations in Theoretical Computer Science, Cambridge, MA, USA, 8–10 January 2012; pp. 214–226. [Google Scholar]
- Joseph, M.; Kearns, M.J.; Morgenstern, J.H.; Roth, A. Fairness in Learning: Classic and Contextual Bandits. In Proceedings of the NIPS 2016 30th Conference on Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; pp. 325–333. [Google Scholar]
- Kim, M.P.; Reingold, O.; Rothblum, G.N. Fairness Through Computationally-Bounded Awareness. In Proceedings of the NIPS 2018 Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, Montreal, QC, Canada, 3–8 December 2018; pp. 4847–4857. [Google Scholar]
- Kilbertus, N.; Rojas-Carulla, M.; Parascandolo, G.; Hardt, M.; Janzing, D.; Schölkopf, B. Avoiding Discrimination through Causal Reasoning. In Proceedings of the NIPS 2017 31st Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 656–666. [Google Scholar]
- Kusner, M.J.; Loftus, J.R.; Russell, C.; Silva, R. Counterfactual Fairness. In Proceedings of the NIPS 2017 31st Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 4069–4079. [Google Scholar]
- Nabi, R.; Shpitser, I. Fair Inference on Outcomes. In Proceedings of the AAAI 2018 32nd AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; pp. 1931–1940. [Google Scholar]
- Kleinberg, J.M. Inherent Trade-Offs in Algorithmic Fairness. In Proceedings of the SIGMETRICS 2018 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems, Irvine, CA, USA, 18–22 June 2018; p. 40. [Google Scholar]
- Kamiran, F.; Calders, T. Data Preprocessing Techniques for Classification without Discrimination. Knowl. Inf. Syst. 2012, 33, 1–33. [Google Scholar] [CrossRef]
- Louizos, C.; Swersky, K.; Li, Y.; Welling, M.; Zemel, R.S. The Variational Fair Autoencoder. In Proceedings of the ICLR 2016 4th International Conference on Learning Representations, San Juan, Puerto Rico, 2–4 May 2016. [Google Scholar]
- Gordaliza, P.; Barrio, E.D.; Fabrice, G.; Loubes, J.M. Obtaining Fairness using Optimal Transport Theory. In Proceedings of the ICML 2019 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 2357–2365. [Google Scholar]
- Bolukbasi, T.; Chang, K.; Zou, J.Y.; Saligrama, V.; Kalai, A.T. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. In Proceedings of the NIPS 2016 30th Conference on Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; pp. 4349–4357. [Google Scholar]
- Zhao, J.; Wang, T.; Yatskar, M.; Ordonez, V.; Chang, K. Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints. In Proceedings of the EMNLP 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark, 9–11 September 2017; pp. 2979–2989. [Google Scholar]
- Garg, N.; Schiebinger, L.; Jurafsky, D.; Zou, J. Word Embeddings Quantify 100 Years of Gender and Ethnic Stereotypes. Proc. Natl. Acad. Sci. USA 2018, 115, E3635–E3644. [Google Scholar] [CrossRef]
- Zhao, J.; Wang, T.; Yatskar, M.; Cotterell, R.; Ordonez, V.; Chang, K. Gender Bias in Contextualized Word Embeddings. In Proceedings of the NAACL-HLT 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, MN, USA, 2–7 June 2019; pp. 629–634. [Google Scholar]
- Zhao, J.; Zhou, Y.; Li, Z.; Wang, W.; Chang, K. Learning Gender-Neutral Word Embeddings. In Proceedings of the EMNLP 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, 31 October–4 November 2018; pp. 4847–4853. [Google Scholar]
- Pitoura, E.; Tsaparas, P.; Flouris, G.; Fundulaki, I.; Papadakos, P.; Abiteboul, S.; Weikum, G. On Measuring Bias in Online Information. SIGMOD Rec. 2017, 46, 16–21. [Google Scholar] [CrossRef]
- Jannach, D.; Zanker, M.; Felfernig, A.; Friedrich, G. Recommender Systems—An Introduction; Cambridge University Press: Cambridge, UK, 2010. [Google Scholar]
- Konstan, J.A.; Miller, B.N.; Maltz, D.; Herlocker, J.L.; Gordon, L.R.; Riedl, J. GroupLens: Applying Collaborative Filtering to Usenet News. Commun. ACM 1997, 40, 77–87. [Google Scholar] [CrossRef]
- Pazzani, M.; Billsus, D. Learning and Revising User Profiles: The Identification of Interesting Web Sites. Mach. Learn. 1997, 27, 313–331. [Google Scholar] [CrossRef]
- Burke, R. Knowledge-Based Recommender Systems. Encycl. Libr. Inf. Syst. 2000, 69, 175–186. [Google Scholar]
- Sarwar, B.; Karypis, G.; Konstan, J.A.; Riedl, J. Item-based Collaborative Filtering Recommendation Algorithms. In Proceedings of the WWW 2001 10th International World Wide Web Conference, Hong Kong, China, 1–5 May 2001; pp. 285–295. [Google Scholar]
- Park, Y.; Tuzhilin, A. The Long Tail of Recommender Systems and How to Leverage It. In Proceedings of the RecSys 2008 2nd ACM Conference on Recommender Systems, Lausanne, Switzerland, 23–25 October 2008; pp. 11–18. [Google Scholar]
- Koutsopoulos, I.; Halkidi, M. Efficient and Fair Item Coverage in Recommender Systems. In Proceedings of the IEEE 16th International Conference on Dependable, Autonomic and Secure Computing, 16th International Conference on Pervasive Intelligence and Computing, 4th International Conference on Big Data Intelligence and Computing and Cyber Science and Technology Congress, DASC/PiCom/DataCom/CyberSciTech, Athens, Greece, 12–15 August 2018; pp. 912–918. [Google Scholar]
- Yao, S.; Huang, B. Beyond Parity: Fairness Objectives for Collaborative Filtering. In Proceedings of the NIPS 2017 31st Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 2925–2934. [Google Scholar]
- Farnadi, G.; Kouki, P.; Thompson, S.K.; Srinivasan, S.; Getoor, L. A Fairness-aware Hybrid Recommender System. arXiv 2018, arXiv:1809.09030. [Google Scholar]
- Wasilewski, J.; Hurley, N. Incorporating Diversity in a Learning to Rank Recommender System. In Proceedings of the FLAIRS 2016 29th International Florida Artificial Intelligence Research Society Conference, Key Largo, FL, USA, 16–18 May 2016; pp. 572–578. [Google Scholar]
- Lu, F.; Tintarev, N. A Diversity Adjusting Strategy with Personality for Music Recommendation. In Proceedings of the RecSys 2018 5th Joint Workshop on Interfaces and Human Decision Making for Recommender Systems, IntRS 2018, co-located with ACM Conference on Recommender Systems, Vancouver, BC, Canada, 7 October 2018; pp. 7–14. [Google Scholar]
- Burke, R. Multisided Fairness for Recommendation. arXiv 2017, arXiv:1707.00093. [Google Scholar]
- Zehlike, M.; Bonchi, F.; Castillo, C.; Hajian, S.; Megahed, M.; Baeza-Yates, R.A. FA*IR: A Fair Top-k Ranking Algorithm. In Proceedings of the CIKM 2017 ACM on Conference on Information and Knowledge Management, Singapore, 6–10 November 2017; pp. 1569–1578. [Google Scholar]
- Singh, A.; Joachims, T. Fairness of Exposure in Rankings. In Proceedings of the KDD 2018 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, London, UK, 19–23 August 2018; pp. 2219–2228. [Google Scholar]
- Ning, X.; Karypis, G. SLIM: Sparse Linear Methods for Top-N Recommender Systems. In Proceedings of the ICDM 2011 11th IEEE International Conference on Data Mining, Vancouver, BC, Canada, 11–14 December 2011; pp. 497–506. [Google Scholar]
- Steck, H. Calibrated Recommendations. In Proceedings of the RecSys 2018 12th ACM Conference on Recommender Systems, Vancouver, BC, Canada, 2–7 October 2018; pp. 154–162. [Google Scholar]
- Tsintzou, V.; Pitoura, E.; Tsaparas, P. Bias Disparity in Recommendation Systems. arXiv 2018, arXiv:1811.01461. [Google Scholar]
- Abdollahpouri, H.; Adomavicius, G.; Burke, R.; Guy, I.; Jannach, D.; Kamishima, T.; Krasnodebski, J.; Pizzato, L.A. Beyond Personalization: Research Directions in Multistakeholder Recommendation. arXiv 2019, arXiv:1905.01986. [Google Scholar]
- Mehrotra, R.; McInerney, J.; Bouchard, H.; Lalmas, M.; Diaz, F. Towards a Fair Marketplace: Counterfactual Evaluation of the Trade-off Between Relevance, Fairness & Satisfaction in Recommendation Systems. In Proceedings of the 27th ACM Conference on Information and Knowledge Management, Torino, Italy, 22–26 October 2018; pp. 2243–2251. [Google Scholar]
- Guzzi, F.; Ricci, F.; Burke, R.D. Interactive Multi-party Critiquing for Group Recommendation. In Proceedings of the RecSys 2011 5th ACM Conference on Recommender Systems, Chicago, IL, USA, 23–27 October 2011; pp. 265–268. [Google Scholar]
- Dery, L.N.; Kalech, M.; Rokach, L.; Shapira, B. Iterative Voting under Uncertainty for Group Recommender Systems. In Proceedings of the RecSys 2010 4th ACM Conference on Recommender Systems, Barcelona, Spain, 26–30 September 2010; pp. 265–268. [Google Scholar]
- Carvalho, L.A.M.C.; Macedo, H.T. Generation of Coalition Structures to Provide Proper Groups’ Formation in Group Recommender Systems. In Proceedings of the WWW 2013 22nd International World Wide Web Conference, Rio de Janeiro, Brazil, 13–17 May 2013; pp. 945–950. [Google Scholar]
- Carvalho, L.A.M.C.; Macedo, H.T. Users’ Satisfaction in Recommendation Systems for Groups: An Approach based on noncooperative games. In Proceedings of the WWW 2013 22nd International World Wide Web Conference, Rio de Janeiro, Brazil, 13–17 May 2013; pp. 951–958. [Google Scholar]
- Lin, X.; Zhang, M.; Zhang, Y.; Gu, Z.; Liu, Y.; Ma, S. Fairness-Aware Group Recommendation with Pareto-Efficiency. In Proceedings of the RecSys 2017 11th ACM Conference on Recommender Systems, Como, Italy, 27–31 August 2017; pp. 107–115. [Google Scholar]
- Qi, S.; Mamoulis, N.; Pitoura, E.; Tsaparas, P. Recommending Packages to Groups. In Proceedings of the IEEE 16th International Conference on Data Mining, Barcelona, Spain, 12–15 December 2016; pp. 449–458. [Google Scholar]
- Serbos, D.; Qi, S.; Mamoulis, N.; Pitoura, E.; Tsaparas, P. Fairness in Package-to-Group Recommendations. In Proceedings of the 26th International Conference on World Wide Web, Perth, Australia, 3–7 April 2017; pp. 371–379. [Google Scholar]
- Qi, S.; Mamoulis, N.; Pitoura, E.; Tsaparas, P. Recommending Packages with Validity Constraints to Groups of Users. Knowl. Inf. Syst. 2018, 54, 345–374. [Google Scholar] [CrossRef]
- Sacharidis, D. Top-N Group Recommendations with Fairness. In Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing, Limassol, Cyprus, 8–12 April 2019; pp. 1663–1670. [Google Scholar]
- Abdollahpouri, H.; Burke, R.; Mobasher, B. Recommender Systems as Multistakeholder Environments. In Proceedings of the 25th Conference on User Modeling, Adaptation and Personalization, Bratislava, Slovakia, 9–12 July 2017; pp. 347–348. [Google Scholar]
- Speicher, T.; Heidari, H.; Grgic-Hlaca, N.; Gummadi, K.P.; Singla, A.; Weller, A.; Zafar, M.B. A Unified Approach to Quantifying Algorithmic Unfairness: Measuring Individual & Group Unfairness via Inequality Indices. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, London, UK, 19–23 August 2018; pp. 2239–2248. [Google Scholar]
- Veale, M.; Kleek, M.V.; Binns, R. Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making. In Proceedings of the 2018 ACM CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018. [Google Scholar]
- Angell, R.; Johnson, B.; Brun, Y.; Meliou, A. Themis: Automatically Testing Software for Discrimination. In Proceedings of the 2018 ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/SIGSOFT, Lake Buena Vista, FL, USA, 4–9 November 2018; pp. 871–875. [Google Scholar]
- Galhotra, S.; Brun, Y.; Meliou, A. Fairness Testing: Testing Software for Discrimination. In Proceedings of the ESEC/FSE 2017 11th Joint Meeting on Foundations of Software Engineering, Paderborn, Germany, 4–8 September 2017; pp. 498–510. [Google Scholar]
- Holstein, K.; Vaughan, J.W.; Daumé, H., III; Dudík, M.; Wallach, H.M. Improving Fairness in Machine Learning Systems: What Do Industry Practitioners Need? In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; p. 600. [Google Scholar]
- Menon, A.K.; Williamson, R.C. The Cost of Fairness in Binary Classification. In Proceedings of the FAT 2018 Conference on Fairness, Accountability and Transparency, New York, NY, USA, 23–24 February 2018; pp. 107–118. [Google Scholar]
- Chierichetti, F.; Kumar, R.; Lattanzi, S.; Vassilvitskii, S. Fair Clustering Through Fairlets. In Proceedings of the NIPS 2017 31st Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 5036–5044. [Google Scholar]
- Bera, S.K.; Chakrabarty, D.; Negahbani, M. Fair Algorithms for Clustering. arXiv 2019, arXiv:1901.02393. [Google Scholar]
- Kleindessner, M.; Awasthi, P.; Morgenstern, J. Fair k-Center Clustering for Data Summarization. In Proceedings of the ICML 2019 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 3448–3457. [Google Scholar]
Table 1. Complexity of computing and testing committees with the (weak) q-PSC property.

| | Complexity of Computing | Complexity of Testing |
|---|---|---|
| q-PSC | P | P |
| weak q-PSC | P | P |
Table 2. Whether ranking-based rules satisfy (weak) q-PSC for the Hare quota (qH) and the Droop quota (qD), and the complexity of winner determination (“Y” = satisfied, “N” = not satisfied).

| | qH-PSC | qD-PSC | Weak qH-PSC | Weak qD-PSC | Complexity |
|---|---|---|---|---|---|
| k-Borda | N [17] | N [17] | N [17] | N [17] | P (trivial) |
| Bloc | N [17] | N [17] | N [17] | N [17] | P (trivial) |
| SNTV | N [17] | N [17] | N [17] | N [17] | P (trivial) |
| CC | N [17] | N [17] | N [17] | N [17] | NP-complete [20] |
| Monroe | N [17] | N [17] | N [17] | N [17] | NP-complete [21] |
| H-STV | Y [16] | Y [16] | Y [16] | Y [16] | P (trivial) |
| D-STV | Y [16] | Y [16] | Y [16] | Y [16] | P (trivial) |
Table 3. Complexity of computing and testing committees with justified-representation properties.

| | Complexity of Computing | Complexity of Testing |
|---|---|---|
| justified representation | P [23] | P [23] |
| extended justified representation | P [25] | co-NP-complete [23] |
| proportional justified representation | P [24] | co-NP-complete [25] |
| perfect representation | NP-complete [24] | P [24] |
Table 4. Whether approval-based rules satisfy EJR, PJR, JR, and PR, and the complexity of winner determination.

| | EJR | PJR | JR | PR | Complexity |
|---|---|---|---|---|---|
| AV | N [23] | N [23,24] | N [23] | N [24,35] | P (trivial) |
| SAV | N [23] | N [23,24] | N [23] | N [24,26,35] | P [26] |
| seq-PAV | N [23] | N [23,24] | N [23] | N [24,26,35] | P [26] |
| MAV | N [23] | N [23,24] | N [23] | N [23] | NP-complete [36] |
| CCAV | N [23] | N [24] | Y [23] | Y [35] | NP-complete [37] |
| MonAV | N [23] | N [24] | Y [23] | Y [24] | NP-complete [21] |
| var-Phragmén | N [38] | N [38] | Y [38] | Y [38] | NP-complete [38] |
| seq-Phragmén | N [38] | Y [38] | Y [38] | N [38] | P [38] |
| max-Phragmén | N [38] | Y [38] | Y [38] | Y [38] | NP-complete [38] |
| PAV | Y [23] | Y [24] | Y [23] | N [24] | NP-complete [26] |
Table 5. Formal definitions of fairness notions in machine learning.

| Fairness Definition | Description |
|---|---|
| Equalized odds | Predicted outcome Ŷ satisfies equalized odds with respect to protected attribute A and true outcome Y if Ŷ and A are independent conditional on Y; more specifically, P(Ŷ = 1 ∣ A = 0, Y = y) = P(Ŷ = 1 ∣ A = 1, Y = y) for y ∈ {0, 1} [55] |
| Equal opportunity | A binary predictor Ŷ satisfies equal opportunity with respect to A and Y if P(Ŷ = 1 ∣ A = 0, Y = 1) = P(Ŷ = 1 ∣ A = 1, Y = 1) [55] |
| Statistical parity | A predictor Ŷ satisfies demographic parity if P(Ŷ = 1 ∣ A = 0) = P(Ŷ = 1 ∣ A = 1) [64] |
| Counterfactual fairness | For a given causal model (U, V, F), where V = A ∪ X, predictor Ŷ is said to be “counterfactually fair” if, under any context X = x and A = a, P(Ŷ_{A←a}(U) = y ∣ X = x, A = a) = P(Ŷ_{A←a′}(U) = y ∣ X = x, A = a) for all y and for any value a′ attainable by A [68] |
| Fairness through awareness | An algorithm is fair if it gives similar predictions to similar individuals; any two individuals who are similar with respect to a similarity metric defined for a particular task should be classified similarly [64] |
| Individual fairness | Let A be the space of outcomes and Δ(A) the space of distributions over A. If M : V → Δ(A) denotes a map that maps each individual in V to a distribution of outcomes, the formulation of individual fairness is D(M(x), M(y)) ≤ d(x, y) for all x, y ∈ V, where d and D are two metric functions on the input space and the output space, respectively [64] |
Table 6. Types of fairness studied in recommender systems.

| Type of RecSys Fairness | Focus | References |
|---|---|---|
| User & group fairness | Ensure fairness for individuals or groups of individuals; e.g., a protected group incurs rating prediction errors in parity with the nonprotected group | Yao and Huang [87]; Ning and Karypis [94] |
| Item fairness | Fairness among item categories when recommended to users | Steck [95]; Tsintzou et al. [96] |
| Multiple stakeholder fairness | Fairness for the multiple parties involved | Burke [91]; Abdollahpouri et al. [97]; Mehrotra et al. [98] |