
Fair-A3SL: A Structure Learning Algorithm for Learning Fairness-aware Relational Structures

Introduction

The widespread growth and prevalence of machine learning models in crucial decision-making tasks has raised questions about the fairness of the underlying models. Machine learning models are mostly employed as black boxes with little or no transparency, or are too complex for non-experts to comprehend, which further exacerbates the problem. This has led to an increased interest in creating fair machine learning models.

The goal of fairness-aware machine learning is to ensure that the decisions made by models do not discriminate against certain groups of individuals.
Types of Fairness Algorithms

The state-of-the-art bias mitigation algorithms can be grouped into three categories as described below.

  • Pre-processing methods work by directly mitigating the bias in the training data itself.
  • Model-based methods mitigate bias in the classifier itself; our approach falls in this category. Existing model-based approaches only learn parameter values or apply regularization to lessen the effect of sensitive attributes; the fairness measures are not used to directly induce the structure, which leaves behind some possibility of bias. Our approach differs in that it learns the graphical model structure directly by optimizing for the fairness measures, and is thus capable of mitigating structural bias in the model, which helps in creating an overall fairer model.
  • Post-processing methods mitigate bias in predictions after the classification has been made. A sketch of the three intervention points follows this list.
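As a concrete illustration of the three intervention points, here is a minimal sketch using the IBM AI Fairness 360 (AIF360) toolkit, which we also use to run baselines later in this post. The dataset split, group encodings, and algorithm choices here are illustrative assumptions, not our exact experimental setup.

```python
# Illustrative sketch only: where pre-, in-, and post-processing intervene.
from aif360.datasets import CompasDataset
from aif360.algorithms.preprocessing import Reweighing
from aif360.algorithms.postprocessing import EqOddsPostprocessing

privileged = [{"race": 1}]    # assumed encoding: 1 = privileged group
unprivileged = [{"race": 0}]

data = CompasDataset()
train, test = data.split([0.7], shuffle=True)

# 1. Pre-processing: reweigh training instances to remove group-level
#    bias from the data before any model sees it.
rw = Reweighing(unprivileged_groups=unprivileged,
                privileged_groups=privileged)
train_debiased = rw.fit_transform(train)

# 2. Model-based (in-processing): fairness is built into training itself.
#    Fair-A3SL falls in this category, optimizing the model *structure*
#    for fairness rather than only its parameters.

# 3. Post-processing: adjust predictions after classification, e.g. to
#    equalize odds across groups.
eq = EqOddsPostprocessing(unprivileged_groups=unprivileged,
                          privileged_groups=privileged)
```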
Contributions

In this work, we develop Fair-A3SL, a fairness-aware structure learning algorithm for hinge-loss Markov random fields (HL-MRFs). Fair-A3SL extends A3SL, a recently developed deep reinforcement learning-based structure learning algorithm for HL-MRFs, to automatically learn fair relational graphical model structures. Fair-A3SL can encode a wide range of widely used, state-of-the-art fairness metrics: equalized odds, equal opportunity, and statistical parity difference; the recently developed relational fairness measures of risk difference, risk ratio, and relative chance; and, for collaborative filtering, the fairness measures of non-parity and overestimation.

By directly optimizing the structure for fairness, Fair-A3SL mitigates structural bias in the model rather than only adjusting its parameters, helping to create an overall fairer model.

Fair-A3SL: Fairness-aware Structure Learning for HL-MRFs

In this section, we develop Fair-A3SL by incorporating the different fairness measures in the A3SL problem formulation and objective.

Fairness Measures as MAP Inference Constraints

Here, we discuss how to integrate different fairness measures as MAP inference constraints.

We group instances or users based on their sensitive attributes into two groups, protected and unprotected. p1 and p2 refer to the proportions of denial for protected and unprotected groups, respectively.

Following Farnadi et al.'s definition of δ-fairness, the fairness measures can be defined in terms of p1 and p2 and constrained using δ, where 0 ≤ δ ≤ 1.
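The definitions themselves are omitted in this post; as a sketch following Farnadi et al., the three relational measures are

$$\mathrm{RD} = p_1 - p_2, \qquad \mathrm{RR} = \frac{p_1}{p_2}, \qquad \mathrm{RC} = \frac{1 - p_1}{1 - p_2},$$

and δ-fairness bounds each measure around its fair value (0 for RD, 1 for the two ratios). The RD bound, $-\delta \le \mathrm{RD} \le \delta$, is used explicitly below; a symmetric form such as $1 - \delta \le \mathrm{RR} \le \frac{1}{1-\delta}$ (and likewise for RC) is an assumption on our part, but any such ratio bound becomes linear after multiplying through by its positive denominator, which is what produces the six linear constraints discussed next.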

The δ-fairness constraints above translate to six linear inequality constraints in the HL-MRF framework. For example, the linear inequality constraint for the inequality RD ≥ −δ becomes as follows, where g1 and g2 are the protected and unprotected groups (with sizes |g1| and |g2|) and Y refers to the prediction by the model.
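The inequality itself is omitted here; a plausible reconstruction from the definitions above, writing $Y_u \in [0, 1]$ for the model's prediction for user $u$, is

$$\frac{1}{|g_1|}\sum_{u \in g_1} Y_u \;-\; \frac{1}{|g_2|}\sum_{v \in g_2} Y_v \;\ge\; -\delta,$$

or, clearing denominators, $|g_2|\sum_{u \in g_1} Y_u - |g_1|\sum_{v \in g_2} Y_v + \delta\,|g_1||g_2| \ge 0$, which is linear in the predictions.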

The linear form of the constraints is consistent with MAP inference in the HL-MRF model; they can be seamlessly solved using a consensus-optimization algorithm based on the alternating direction method of multipliers (ADMM).
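For intuition on how such a linear constraint interacts with consensus optimization, the sketch below (not the paper's ADMM implementation) performs the Euclidean projection of a prediction vector onto the half-space induced by one RD bound; ADMM alternates steps of this kind with the usual potential-minimization updates until consensus. All numbers are toy values, and the box constraints Y ∈ [0, 1] are handled by separate steps in a full solver.

```python
import numpy as np

def project_halfspace(y, a, b):
    """Euclidean projection of y onto the half-space {y : a @ y >= b}."""
    slack = a @ y - b
    if slack >= 0:
        return y                      # constraint already satisfied
    return y - (slack / (a @ a)) * a  # move to the boundary

# Toy setup: 3 protected users, 2 unprotected users, delta = 0.1.
n1, n2, delta = 3, 2, 0.1
y = np.array([0.9, 0.8, 0.7, 0.1, 0.2])  # predicted denial probabilities

# RD = mean(y[protected]) - mean(y[unprotected]). The upper bound
# RD <= delta is the binding one here; it is the half-space
# (-a) @ y >= -delta with:
a = np.concatenate([np.full(n1, 1.0 / n1), np.full(n2, -1.0 / n2)])

y_fair = project_halfspace(y, -a, -delta)
print(a @ y, a @ y_fair)  # RD before (0.65) and after (0.10) projection
```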

Fairness Measures as Objective Priors

While certain fairness measures can be modeled as MAP inference constraints in the framework, the post-processing fairness measures can only be modeled as priors in our objective due to the absence of ground truth for target Y at test time.
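As a sketch of what such a prior can look like (the paper's exact form is not reproduced in this post), the equalized odds difference can enter the learning objective as a penalty computed on training data:

$$\lambda_{f} \left( \left| \mathrm{TPR}_{g_1} - \mathrm{TPR}_{g_2} \right| + \left| \mathrm{FPR}_{g_1} - \mathrm{FPR}_{g_2} \right| \right),$$

where TPR and FPR are the groupwise true and false positive rates and $\lambda_{f}$ is a hypothetical trade-off weight.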

Fair-A3SL Objective Function

In the objective, we use a combination of fairness measures both encoded as constraints and as priors. In the objective given below, we encode the relational fairness measures RR, RC, and RD as MAP inference constraints and the equalized odds difference measure as a prior in the objective along with interpretability priors for the specific domain in question. Equation below gives the Fair-A3SL objective function corresponding to this combination.
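The equation is not reproduced in this post; schematically, and under the assumption that the reward shaping follows the combination just described, the structure search maximizes prediction performance minus the fairness and interpretability priors, subject to the δ-fairness MAP constraints:

$$\max_{M} \; \mathrm{perf}(M) \;-\; \lambda_{f}\,\Delta_{\mathrm{EO}}(M) \;-\; \lambda_{i}\,\Omega(M) \quad \text{s.t. } \delta\text{-fairness constraints on RD, RR, RC during MAP inference},$$

where $M$ is a candidate HL-MRF structure, $\Delta_{\mathrm{EO}}$ is the equalized odds difference prior, $\Omega$ collects the interpretability priors, and $\lambda_{f}, \lambda_{i}$ are hypothetical trade-off weights.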


Note that different combinations of fairness measures can be incorporated in the framework, which adds to its flexibility.

Experimental Evaluation

We conduct experiments to evaluate the learned structures quantitatively and qualitatively on three fairness datasets. In our experiments, we illustrate the capability of Fair-A3SL to:

  • learn fair network and collective structures that bring out the modeling power of statistical relational models,
  • incorporate a wide range of fairness measures and learn model structures using them, and
  • achieve competitive prediction performance while improving on fairness metrics.
Results on COMPAS Dataset

The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool produces a risk score that predicts a person's likelihood of committing a crime in the next two years [19]. The output is a score between 1 and 10 that maps to low, medium, or high risk. We collapse this to a binary prediction: a label of 0 corresponds to a prediction of low risk according to COMPAS, while a label of 1 indicates medium or high risk. The dataset also contains information on recidivism for each person over the next two years, which we use as ground truth. Existing work shows that the COMPAS risk scores discriminate against black defendants, who were far more likely than white defendants to be incorrectly judged to be at a higher risk of recidivism, while white defendants were more likely than black defendants to be incorrectly flagged as low risk.
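Concretely, using the column names from the public ProPublica COMPAS release (a sketch; the paper's exact preprocessing may differ):

```python
import pandas as pd

# ProPublica's COMPAS release: 'decile_score' is the 1-10 risk score,
# 'score_text' is its Low/Medium/High bucket, and 'two_year_recid'
# records recidivism within two years.
df = pd.read_csv("compas-scores-two-years.csv")

# Collapse to a binary prediction: 0 = low risk, 1 = medium or high risk.
df["binary_score"] = (df["score_text"] != "Low").astype(int)

# Ground truth used for evaluation.
y_true = df["two_year_recid"]
```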

The table below gives the Sensitive-A3SL model. We can see that the model combines other recidivism signals of having committed prior felonies (priors and priorFelony) with the race attribute (africanAmerican), indicating how the race attribute and combinations with it are predictive of recidivism and are a natural, albeit unfair and discriminatory, choice for models that are solely performance driven.

The rules learned by the Fair-A3SL model are given in the table below. Note that the structures learned by Fair-A3SL are relational: for example, priorFelonHistory(U, I1) can be grounded with multiple historical felony instances I1 for each user U. Fair-A3SL's transparency, interpretability, and expressibility, along with fairness, make it an ideal candidate for automatically learning prediction models for sensitive domains.
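Since the learned rule tables are not reproduced in this post, here is a hypothetical rule of the kind Fair-A3SL learns, written in weighted first-order logic over the predicates mentioned above (the weight $w$ is learned from data; the rule itself is illustrative, not one reported in the paper):

$$w : \mathrm{priorFelonHistory}(U, I_1) \rightarrow \mathrm{recidivism}(U)$$

Because the rule grounds once per felony instance $I_1$, a user with more prior felonies accumulates more groundings, which is how the model reasons collectively over relational evidence.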

We use the IBM AI Fairness 360 tool to run the existing state-of-the-art models. The table below gives the 5-fold cross-validation results and shows that Fair-A3SL achieves better prediction performance for both the protected and unprotected groups, individually (AUC-PR for the protected and unprotected groups) and combined (AUC-ROC).

We also demonstrate that our learned model outperforms the state-of-the-art fairness models on the fairness metrics as well, achieving the best scores across all metrics in the table below.
