THESIS
2021
1 online resource (xi, 78 pages) : illustrations (some color)
Abstract
Discrimination can cause great harm to minority groups, e.g., women and ethnic minorities, and thus fairness has become increasingly important in many settings, e.g., hiring and admission decisions. Motivated by this, we aim to help users alleviate discrimination in two fields: machine learning and top-k queries.
In machine learning, we propose a novel structure, called GIFair, for generating a representation that reconciles utility with fairness via adversarial learning. Unlike most related studies, which focus on group fairness but ignore individual fairness, GIFair ensures that classifiers trained on the generated representation achieve both individual fairness and group fairness. We provide a theoretical proof that, except in highly constrained special cases, group fairness and individual fairness cannot be satisfied simultaneously, so a trade-off must be made between the two in addition to the utility of the classifiers. Experiments conducted on two real datasets show that GIFair achieves a better utility-fairness trade-off than existing models.
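To illustrate the adversarial setup described above, the following is a minimal sketch of adversarial fair representation learning in PyTorch. The exact GIFair architecture, loss terms (including its individual-fairness component), and trade-off weights are not given in this abstract, so all module names, the toy data, and the hyperparameters below are illustrative assumptions: an encoder produces the representation, a classifier predicts the task label, and an adversary tries to recover the sensitive attribute from the representation.

    # Minimal sketch of adversarial fair representation learning (illustrative, not GIFair itself).
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    X = torch.randn(256, 10)                     # hypothetical feature matrix
    y = torch.randint(0, 2, (256, 1)).float()    # task label
    s = torch.randint(0, 2, (256, 1)).float()    # sensitive attribute (e.g., gender)

    encoder    = nn.Sequential(nn.Linear(10, 8), nn.ReLU())    # representation generator
    classifier = nn.Sequential(nn.Linear(8, 1), nn.Sigmoid())  # utility (task) head
    adversary  = nn.Sequential(nn.Linear(8, 1), nn.Sigmoid())  # tries to recover s

    bce = nn.BCELoss()
    opt_main = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3)
    opt_adv  = torch.optim.Adam(adversary.parameters(), lr=1e-3)

    for epoch in range(200):
        # 1) train the adversary to predict the sensitive attribute from the frozen representation
        z = encoder(X).detach()
        opt_adv.zero_grad()
        bce(adversary(z), s).backward()
        opt_adv.step()

        # 2) train encoder + classifier: keep task accuracy high while fooling the adversary;
        #    the weight 1.0 on the adversarial term stands in for the utility-fairness trade-off
        z = encoder(X)
        loss = bce(classifier(z), y) - 1.0 * bce(adversary(z), s)
        opt_main.zero_grad()
        loss.backward()
        opt_main.step()

The adversarial term here targets group fairness only; enforcing individual fairness (similar individuals receiving similar outcomes) would require an additional constraint, which is the part of the trade-off the theoretical result above concerns.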
In top-k queries, we propose a fairness model, called α-fair, to quantify the fairness of utility functions. We design an efficient algorithm, called FairTQ-Exact, that helps users find the fairest utility function with the minimum modification to the original unfair utility function. We also offer two approximation methods to satisfy users' different needs: 1/α-FairTQ-Appro1 returns an approximately fairest utility function, while (1/α, θ)-FairTQ-Appro2 returns an approximately fairest utility function with an approximately minimal modification penalty. Both methods are proved to have good performance guarantees. We conduct extensive experiments on both real and synthetic datasets to demonstrate the effectiveness and efficiency of our proposed algorithms compared with previous work.
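For readers unfamiliar with the setting, the following is a minimal sketch of a top-k query under a linear utility function, together with a simple group-share check on the result. The α-fair definition and the FairTQ algorithms themselves are not reproduced in this abstract, so the attribute names, the weight vector w, and the proportion-based check below are illustrative assumptions only.

    # Minimal sketch of the top-k setting (illustrative; not the alpha-fair model or FairTQ).
    from typing import List, Tuple

    def top_k(tuples: List[Tuple[float, float, str]], w: Tuple[float, float], k: int):
        """Rank (attr1, attr2, group) tuples by the linear utility w[0]*attr1 + w[1]*attr2."""
        return sorted(tuples, key=lambda t: w[0] * t[0] + w[1] * t[1], reverse=True)[:k]

    def group_share(result, group: str) -> float:
        """Fraction of the top-k answer belonging to the given group."""
        return sum(1 for t in result if t[2] == group) / len(result)

    data = [(0.9, 0.2, "A"), (0.8, 0.7, "B"), (0.6, 0.9, "B"), (0.4, 0.5, "A"), (0.3, 0.8, "A")]
    result = top_k(data, w=(0.7, 0.3), k=3)
    print(result, group_share(result, "A"))

In this sketch the weight vector w plays the role of the utility function; finding the fairest such function with minimal modification to the user's original weights is the problem FairTQ-Exact and the two approximation methods address.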