
A Diversity-Aware Learning to Rank Approach to Fair Ranking

Li Heyu (AY 2022)

Fair Ranking aims to ensure that documents associated with protected characteristics receive fair exposure, which is crucial for the public. However, most current studies first pre-rank the documents with BM25 and then apply iterative re-ranking with fairness calculations, which may suffer from two problems: (1) during learning-to-rank training, the documents' positions change, making the IR measures difficult to optimize; (2) uncertainty is an inherent part of both the model's learning process and the retrieval process. To address these limitations, we propose a diversity-aware learning-to-rank method that significantly improves performance on nDCG_AWRF, a metric that jointly measures accuracy and fairness.
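For concreteness, the sketch below computes nDCG, AWRF, and a product-style combination of the two on a toy ranking. The logarithmic attention weights, the Jensen-Shannon distance, the product combination, and all function names are illustrative assumptions; the exact AWRF variant used in this work may differ.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def position_weights(n):
    """Logarithmic position discount, used here for both nDCG and AWRF (assumed)."""
    return 1.0 / np.log2(np.arange(2, n + 2))

def ndcg(relevance):
    """Standard nDCG of a ranked list of relevance grades."""
    w = position_weights(len(relevance))
    dcg = np.sum(relevance * w)
    idcg = np.sum(np.sort(relevance)[::-1] * w)
    return dcg / idcg if idcg > 0 else 0.0

def awrf(group_onehots, target_dist):
    """Attention-Weighted Rank Fairness: 1 minus the distance between the
    attention-weighted group exposure and a target exposure distribution."""
    w = position_weights(len(group_onehots))
    exposure = (w[:, None] * group_onehots).sum(axis=0)
    exposure = exposure / exposure.sum()
    return 1.0 - jensenshannon(exposure, target_dist, base=2)

# Toy ranking: relevance grades and one-hot group membership per document.
rel = np.array([3, 2, 0, 1])
groups = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])  # two protected groups
target = np.array([0.5, 0.5])                        # desired exposure split

print("nDCG:", ndcg(rel), "AWRF:", awrf(groups, target))
print("nDCG_AWRF (product form, assumed):", ndcg(rel) * awrf(groups, target))
```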

More precisely, we model each document's score as a probability distribution, and these score distributions naturally induce distributions over ranking positions. With this advantage, we can obtain differentiable variants of nDCG_AWRF and optimize the metric directly. In addition, we use a new probabilistic neural scoring function that accounts for cross-document interactions and is permutation-equivariant, which makes it possible to produce a diverse ranking by simple sorting. The experimental results against the baselines show that the proposed model achieves significantly improved performance over the state-of-the-art results.
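To illustrate how distributions over scores can yield a differentiable ranking metric, the following minimal PyTorch sketch (not the thesis's actual model) replaces hard ranks with expected ranks obtained from sigmoid-smoothed pairwise comparisons and plugs them into a soft nDCG surrogate; the AWRF term could be smoothed analogously via the expected exposure of each group. The temperature parameter and all function names are illustrative assumptions.

```python
import torch

def expected_ranks(scores, temperature=1.0):
    """Smooth ranks: rank_i = 1 + sum_j P(score_j > score_i), where the
    pairwise probability is approximated by a sigmoid of the score difference."""
    diff = scores.unsqueeze(1) - scores.unsqueeze(0)   # diff[j, i] = s_j - s_i
    p_j_beats_i = torch.sigmoid(diff / temperature)
    return 1.0 + p_j_beats_i.sum(dim=0) - 0.5          # drop the self term (sigmoid(0) = 0.5)

def soft_ndcg(scores, relevance, temperature=1.0):
    """Differentiable nDCG surrogate that uses expected ranks as positions."""
    ranks = expected_ranks(scores, temperature)
    gains = (2.0 ** relevance - 1.0) / torch.log2(ranks + 1.0)
    ideal, _ = torch.sort(relevance, descending=True)
    ideal_pos = torch.arange(1, len(relevance) + 1, dtype=scores.dtype)
    idcg = ((2.0 ** ideal - 1.0) / torch.log2(ideal_pos + 1.0)).sum()
    return gains.sum() / idcg

# Toy example: neural scores for 4 documents and their graded relevance labels.
scores = torch.tensor([0.9, 0.3, 1.2, -0.4], requires_grad=True)
relevance = torch.tensor([3.0, 0.0, 2.0, 1.0])

loss = -soft_ndcg(scores, relevance)   # maximize the surrogate
loss.backward()
print(scores.grad)                     # gradients flow through the soft ranks
```

Because the expected ranks are smooth functions of the scores, the metric surrogate can be optimized end-to-end with gradient descent rather than through iterative re-ranking.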

Our future work will explore different cutoffs for calculating nDCG_AWRF. We will also apply custom diversification and data fusion over the attributed demographic attributes, as in prior work, or distribute positions in the generated rankings proportionally, to see whether this can effectively improve the AWRF score.
