Algorithmic decision-making has become ubiquitous in our societal and economic lives. As more and more decisions are delegated to algorithms, evidence has mounted of ethical issues, particularly biases and a lack of fairness in algorithmic decision-making outcomes. Such outcomes can have detrimental consequences for minority groups defined by gender, ethnicity, or race. In response, recent research has shifted from the design of algorithms that pursue purely optimal outcomes with respect to a fixed objective function to algorithms that also guarantee additional fairness properties. In this study, we aim to provide a broad and accessible overview of recent research on introducing fairness into algorithms used for automated decision-making in three principal domains, namely, multi-winner voting, machine learning, and recommender systems. Even though these domains have developed separately from each other, they share a common decision-making core: evaluating a given set of alternatives that must be ranked with respect to a clearly defined objective function. More specifically, the relevant tasks include (1) collectively selecting a fixed number of winning (or potentially high-valued) alternatives from a given initial set of alternatives; (2) clustering a given set of alternatives into disjoint groups based on various similarity measures; or (3) finding a consensus ranking over all or a subset of the given alternatives. To this end, we illustrate a multitude of fairness properties studied in these three streams of literature, discuss their commonalities and interrelationships, synthesize what we know so far, and provide a useful perspective for future research.
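To make task (1) concrete, the following is a purely illustrative sketch, not a method from the surveyed literature: it selects k winners by score while enforcing a simple per-group lower quota, one basic flavor of group fairness studied in multi-winner voting. The function name, the quota rule, and the toy data are all assumptions introduced here for illustration.

```python
def fair_select(scores, groups, k, quotas):
    """Pick k alternatives with the highest scores, while guaranteeing each
    group at least its quota of winners.

    scores: alternative -> score, groups: alternative -> group label,
    quotas: group label -> minimum number of winners (an assumed fairness rule).
    """
    winners = []
    remaining = dict(scores)
    # First satisfy each group's quota with its best-scoring members.
    for g, q in quotas.items():
        members = sorted((a for a in remaining if groups[a] == g),
                         key=lambda a: remaining[a], reverse=True)
        for a in members[:q]:
            winners.append(a)
            del remaining[a]
    # Fill the remaining seats purely by score.
    rest = sorted(remaining, key=lambda a: remaining[a], reverse=True)
    winners.extend(rest[: k - len(winners)])
    return winners

scores = {"a": 9, "b": 8, "c": 7, "d": 3, "e": 2}
groups = {"a": "X", "b": "X", "c": "X", "d": "Y", "e": "Y"}
print(fair_select(scores, groups, k=3, quotas={"Y": 1}))
# ['d', 'a', 'b']  -- unconstrained top-3 would be ['a', 'b', 'c'],
# leaving group Y unrepresented
```

The contrast with the unconstrained outcome shows the trade-off the surveyed work studies: some total score is sacrificed to satisfy the fairness constraint.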
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.