Thesis for my 2020 computer science honours degree at RMIT.

Abstract

Recommendation systems are almost ubiquitous in the online world, appearing across many domains such as entertainment (Netflix, Spotify), e-commerce (Amazon, eBay), and job seeking or candidate ranking (LinkedIn, Seek). They make non-trivial rankings and decisions on behalf of humans, and the organisations operating them are often legally or ethically obliged to ensure that consumers are treated fairly: for example, female job seekers should have the same likelihood as their male or non-binary counterparts of receiving recommendations for high-paying jobs. Our research focuses on measuring and intervening on this type of fairness (group fairness) in novel recommendation models involving contextual multi-armed bandits, which are attractive because they can learn efficiently about an environment or user preferences when little data is initially available, and can increase recommendation diversity. We make two primary contributions: a new applied group fairness formulation for bandits based on existing literature, and a simple yet effective technique that reduces unfairness at some cost to overall performance, evaluated empirically with LinUCB on two datasets known to contain biases.
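To make the setting concrete, below is a minimal sketch of disjoint LinUCB (the standard algorithm of Li et al., 2010, which the abstract names as the evaluation model) together with a demographic-parity-style measurement of how often one arm is recommended to each user group. This is only an illustration of the general technique: it does not reproduce the thesis's fairness formulation or its unfairness-reduction method, and every name, parameter, and synthetic reward here is an assumption made for the example.

```python
import numpy as np

class LinUCB:
    """Disjoint LinUCB: one ridge-regression state (A, b) per arm."""

    def __init__(self, n_arms: int, dim: int, alpha: float = 1.0):
        self.alpha = alpha  # exploration strength
        self.A = [np.eye(dim) for _ in range(n_arms)]
        self.b = [np.zeros(dim) for _ in range(n_arms)]

    def select(self, x: np.ndarray) -> int:
        """Pick the arm with the highest upper confidence bound for context x."""
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                            # point estimate of arm weights
            bonus = self.alpha * np.sqrt(x @ A_inv @ x)  # confidence-width bonus
            scores.append(theta @ x + bonus)
        return int(np.argmax(scores))

    def update(self, arm: int, x: np.ndarray, reward: float) -> None:
        """Rank-one update of the chosen arm's statistics."""
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

# Toy run: contexts carry a binary group attribute; we then report how often
# a hypothetical "high-paying job" arm (arm 0) is shown to each group.
rng = np.random.default_rng(0)
bandit = LinUCB(n_arms=3, dim=4, alpha=1.0)
shown = {0: [0, 0], 1: [0, 0]}  # group -> [times arm 0 shown, impressions]
for _ in range(5000):
    group = int(rng.random() < 0.5)
    x = np.append(rng.normal(size=3), group)        # last feature encodes group
    arm = bandit.select(x)
    reward = float(rng.random() < 0.1 * (arm + 1))  # synthetic click probability
    bandit.update(arm, x, reward)
    shown[group][0] += int(arm == 0)
    shown[group][1] += 1

rates = [shown[g][0] / shown[g][1] for g in (0, 1)]
print(f"arm-0 recommendation rates by group: {rates}, gap: {abs(rates[0] - rates[1]):.3f}")
```

A real evaluation would replace the synthetic contexts and rewards with logged feedback from biased datasets such as those the abstract mentions, and would track the group gap alongside cumulative reward to expose the fairness/performance trade-off.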

PDF Download

SUBMITION - THESIS__Group_fairness_in_recommendation_bandits.pdf