
Is It Time To Start Using Race And Gender To Combat Bias In Lending? – Forbes

August 11th, 2022 1:59 am

Trying to achieve fairness through blindness has not worked.

A woman, let's call her Lisa, applies for a loan. She's 35, with a graduate degree, a high-earning trajectory and a 670 credit score. She also just returned to work after taking time off to start a family.

Her application goes to an algorithm, which assesses her risk profile to determine whether she should be approved. The algorithm sees her recent gap in employment and labels her a risky borrower. The result? Her application is rejected.

Examples like this happen every day in lending. Are these decisions fair?

When it comes to fairness in lending, a cardinal rule is: "Thou shalt not use variables like race, gender or age when deciding whether to approve someone for a loan."

This rule dates back to the Equal Credit Opportunity Act (ECOA), passed in 1974 to stop lenders from deliberately denying loans to Black applicants and segregating neighborhoods, a practice called redlining. The problem got so bad that the government banned the consideration of race or gender in loan approvals and other high-stakes decisions.

The assumption behind ECOA was that if decision makers, be they humans or machines, are unaware of attributes like race or gender at decision time, then the actions they take will be based on neutral, objective factors that are fair.

There's just one problem with this assumption: it's wishful thinking to believe that keeping algorithms blind to protected characteristics means they won't discriminate.

In fact, building models that are blind to protected status information may reinforce pre-existing biases in the data. As legal scholar Pauline Kim observed:

"Simply blinding a model to sensitive characteristics like race or sex will not prevent these tools from having discriminatory effects. Not only can biased outcomes still occur, but discarding demographic information makes bias harder to detect, and, in some cases, could make it worse."

In a credit market where Black applicants are often denied at twice the rate of White applicants and pay higher interest rates despite strong credit performance, the time has come to admit that Fairness Through Blindness in lending has failed.

If we want to improve access to credit for historically underrepresented groups, maybe we need to try something different: Fairness Through Awareness, where race, gender and other protected information is available during model training to shape the resulting models to be fairer.

Why will Fairness Through Awareness work better?

Consider Lisa, the applicant from the example above.

Many underwriting models look for consistent employment as a sign of creditworthiness: the longer you've been working without a gap, the thinking goes, the more creditworthy you are. But if Lisa takes time out of the workforce to start a family, lending models that weigh consistent employment as a strong criterion will rank her as less creditworthy (all other things being equal) than a man who worked through that period.

The result is that Lisa will have a higher chance of being rejected, or approved on worse terms, even if she's demonstrated in other ways that she's just as creditworthy as a similar male applicant.

Models that make use of protected data during training can prevent this outcome in ways that race- and gender-blind models cannot. If we let a model learn, during training, that some applicants are women and that women are more likely to take time away from the workforce, it can recognize in production that someone with an employment gap shouldn't necessarily be deemed riskier.

Simply put, different people and groups behave differently. And those differences may not make members of one group less creditworthy than members of another.

If we give algorithms the right data during training, we can teach them more about these differences. This new data helps the model evaluate variables like consistent employment in context, and with greater awareness of how to make fairer decisions.
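To make this concrete, here is a minimal sketch of the training-time-only pattern, using the open-source Fairlearn library's reductions approach as one illustrative technique (the article does not endorse any particular library, and the dataset, feature names and constraint choice below are invented for the example). The protected attribute shapes the model while it is being fit, but decision-time predictions use only ordinary application features.

```python
# Illustrative sketch only: gender is available during training (via a fairness
# constraint) but is never an input at decision time. All data here is synthetic.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical application features, including the employment-gap variable
# discussed above.
X = pd.DataFrame({
    "credit_score": rng.normal(680, 50, n),
    "income": rng.normal(65_000, 20_000, n),
    "employment_gap_months": rng.integers(0, 24, n),
})
gender = rng.choice(["F", "M"], n)            # protected attribute, training-time only
y = (rng.random(n) < 0.8).astype(int)         # stand-in repayment outcomes

# A "blind" baseline never sees gender, but can still learn to penalize
# employment gaps in a way that falls disproportionately on women.
blind_model = LogisticRegression(max_iter=1000).fit(X, y)

# Fairness-aware training: the sensitive feature constrains the model during
# fitting, yet predictions are made from X alone.
aware_model = ExponentiatedGradient(
    estimator=LogisticRegression(max_iter=1000),
    constraints=DemographicParity(),
)
aware_model.fit(X, y, sensitive_features=gender)

# At decision time, neither model receives gender.
approvals = aware_model.predict(X)
```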

Fairness Through Awareness techniques are showing impressive results in healthcare, where identity-aligned algorithms tailored to specific patient populations are driving better clinical outcomes for underserved groups.

Lenders using Fairness Through Awareness modeling techniques have also reported encouraging results.

In a 2020 study, researchers trained a credit model using information about gender. Under the gender-aware model, about 80% of women received higher credit scores than they did under the gender-blind model.

Another study, done by my co-founder John Merrill, found that an installment lender could safely increase its approval rate by 10% while also increasing its fairness (measured in terms of adverse impact ratio) for Black applicants by 16%.
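The adverse impact ratio used in that comparison has a simple definition: the approval rate of the protected group divided by the approval rate of the reference group, where 1.0 means parity and values below roughly 0.8 are commonly flagged under the four-fifths rule. The sketch below computes it for two sets of hypothetical approval decisions; the numbers are made up solely to show the calculation.

```python
import numpy as np

def adverse_impact_ratio(approved, group, protected, reference):
    """Approval rate of the protected group divided by that of the reference group."""
    protected_rate = approved[group == protected].mean()
    reference_rate = approved[group == reference].mean()
    return protected_rate / reference_rate

# Hypothetical decisions from two models scoring the same applicant pool.
rng = np.random.default_rng(1)
group = np.array(["Black", "White"] * 500)
blind_approvals = rng.random(1000) < np.where(group == "Black", 0.35, 0.55)
aware_approvals = rng.random(1000) < np.where(group == "Black", 0.45, 0.55)

print(adverse_impact_ratio(blind_approvals, group, "Black", "White"))  # roughly 0.64
print(adverse_impact_ratio(aware_approvals, group, "Black", "White"))  # roughly 0.82
```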

The law does not prohibit using data like gender and race during model training, though regulators have never given explicit guidance on the matter. For years lenders have used some consciousness of protected status to avoid discrimination by, say, lowering a credit score approval threshold from 700 to 695 if doing so results in a more demographically balanced portfolio. In addition, using protected status information is expressly permitted to test models for disparate impact and search for less discriminatory alternatives.
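As a purely hypothetical illustration of that kind of threshold adjustment, the sketch below scans a few nearby score cutoffs on simulated applicants and reports how the overall approval rate and the adverse impact ratio move; the score distributions and the group gap are assumptions invented for the example, not real lending data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated applicant pool: credit scores plus group membership, with an
# assumed historical score gap for the protected group.
scores = rng.normal(690, 40, 10_000)
group = rng.choice(["protected", "reference"], 10_000, p=[0.3, 0.7])
scores[group == "protected"] -= 15

def portfolio_stats(threshold):
    """Overall approval rate and adverse impact ratio at a given cutoff."""
    approved = scores >= threshold
    air = approved[group == "protected"].mean() / approved[group == "reference"].mean()
    return approved.mean(), air

# A slightly lower cutoff approves more applicants overall and nudges the
# adverse impact ratio upward.
for threshold in (700, 695, 690):
    approval_rate, air = portfolio_stats(threshold)
    print(f"threshold={threshold}: approval rate={approval_rate:.1%}, AIR={air:.2f}")
```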

Granted, allowing protected data in credit modeling carries some risk. It is illegal to use protected data at decision time, and when lenders are in possession of any protected status information there's the chance that this data will inappropriately influence a lender's decisions.

As such, Fairness Through Awareness techniques in model development require safeguards that limit use and preserve privacy. Protected data can be anonymized or encrypted, access to it can be managed by third party specialists, and algorithms can be designed to maximize both fairness and privacy.
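A minimal sketch of one such safeguard, under assumed details (the keyed hash, the separate stores and the toy scoring function are all hypothetical): protected attributes sit in their own store keyed by a pseudonymized applicant ID, the decision-time scoring path never reads them, and only a restricted fairness-audit path can join them back to outcomes.

```python
import hashlib
import hmac

# Hypothetical key; in practice this would come from a managed secrets service.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymize(applicant_id: str) -> str:
    """Keyed hash so the protected-attribute store cannot be casually joined back."""
    return hmac.new(SECRET_KEY, applicant_id.encode(), hashlib.sha256).hexdigest()

# Two separate stores: decision-time features vs. protected attributes.
decision_features = {"app-1001": {"credit_score": 672, "income": 58_000}}
protected_store = {pseudonymize("app-1001"): {"gender": "F", "race": "Black"}}

def score_application(applicant_id: str) -> float:
    """Decision path: uses only non-protected features."""
    features = decision_features[applicant_id]
    return 0.001 * features["credit_score"]  # stand-in for a real model

def fairness_audit(applicant_id: str, approved: bool):
    """Audit path: joins outcomes with protected attributes under restricted access."""
    return protected_store[pseudonymize(applicant_id)], approved
```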

Fairness Through Blindness has created a delusion that the disparities in American lending are attributable to neutral factors found in a credit report. But studies show again and again that protected status information, if used responsibly, can dramatically increase positive outcomes for historically disadvantaged groups at acceptable levels of risk.

We've tried to achieve fairness in lending through blindness. It hasn't worked. Now it's time to try Fairness Through Awareness, before the current disparities in American lending become a self-fulfilling prophecy.

