Artificial intelligence (AI) has a range of applications in the financial services industry, from customer service to fraud detection. However, an increasing reliance on AI algorithms in decision-making processes raises concerns about the potential biases embedded in these systems.
Bias in the training data that forms the basis of these algorithms can have significant consequences in financial services, where decisions can affect individuals’ access to credit, investment opportunities, and overall financial well-being.
The recent executive order on AI issued by the White House aims to establish new standards for AI safety and security while advancing equity and civil rights.
“Irresponsible uses of AI can lead to and deepen discrimination, bias, and other abuses in justice, healthcare, and housing,” the order states.
“The Biden-Harris Administration has already taken action by publishing the Blueprint for an AI Bill of Rights and issuing an Executive Order directing agencies to combat algorithmic discrimination, while enforcing existing authorities to protect people’s rights and safety.”
To that end, one of the President’s directions in the order is to “provide clear guidance to landlords, Federal benefits programs, and federal contractors to keep AI algorithms from being used to exacerbate discrimination.”
What Are the Challenges of AI Algorithm Bias in Financial Services?
Bias in the use of AI refers to systematic and unfair discrimination in an algorithm’s decisions. It typically stems from inherent bias in the data sets used to train the algorithms, which reflect existing inequalities in society. The AI model can learn and perpetuate these biases, producing unfair and discriminatory outcomes.
“It’s inevitable. AI will start coming up with its own opinions based on the provided data sets. Humans have bias — whether you say you’re unbiased or not, there’s no such thing,” Kevin Shamoun, SVP at Fortis, told Techopedia in a recent interview on AI in finance.
In the financial services sector, bias can manifest in various forms, such as racial or gender-based discrimination, socio-economic bias, and other unintended preferences. This can affect:
- Credit decisions: One of the most significant areas where AI bias can have severe consequences is credit scoring. If the historical training data contains biases against certain demographic groups, specific individuals or communities can be unfairly denied access to credit or offered less favorable lending terms (a simple disparity check is sketched after this list).
- Investment: AI algorithms are also used to develop investment strategies. Inherent bias can inadvertently favor specific industries, regions, or demographics when allocating financing, potentially reinforcing entrenched economic disparities.
- Customer service: Chatbots and other AI-powered customer service applications can exhibit biases in interactions. For example, they can provide different responses or levels of assistance based on the user’s demographic information, leading to unequal treatment.
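To make the credit-decision risk concrete, one common first check is to compare approval rates across demographic groups, often called demographic parity. Below is a minimal sketch in Python, assuming a pandas DataFrame with hypothetical `group` and `approved` columns; real audits use richer fairness metrics and statistical tests.

```python
import pandas as pd

# Hypothetical loan decisions; the column names are illustrative only.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

# Approval rate per demographic group.
rates = decisions.groupby("group")["approved"].mean()

# Demographic-parity gap: the spread between the best- and
# worst-treated groups. A large gap is a signal to investigate.
parity_gap = rates.max() - rates.min()
print(rates.to_dict())                  # {'A': 0.667, 'B': 0.25}
print(f"parity gap: {parity_gap:.2f}")
```

A large gap does not prove discrimination on its own, but it flags where deeper investigation of the model and its training data is warranted.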
How Can the Industry Avoid Algorithm Bias?
There are some steps the financial industry can take to avoid negative outcomes from algorithm bias, building on existing regulatory requirements.
Financial services providers need to train their models on diverse, representative datasets that reflect the population as a whole. This involves actively seeking out data from underrepresented groups to ensure they are included.
“In financial services, there are rules against credit discrimination, around explaining adverse credit decisions, and regulators have already said in the last couple of years that these clearly apply to using AI,” attorney Duane Pozza, a Partner at Wiley Rein, told Techopedia.
“You can’t just say it was AI so you can’t explain it or that you’re exempt from these rules.”
He added: “A lot of data scientists in financial institutions spend their time thinking about how to control those biases.
“A key part of this is understanding the origins and limitations of potential biases in the data set that is being used. A best practice generally when using AI is understanding the data sets that are used to train the models because understanding the potential limitations or biases in the data sets will help to try to control for any biases in the output.”
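One way to put this advice into practice is to audit a training set’s composition against a reference population before modeling. The sketch below is illustrative only: the group labels and population shares are assumptions, and the reweighting shown is one simple mitigation among many.

```python
import pandas as pd

# Hypothetical training data and census-style reference shares.
train = pd.DataFrame({"group": ["A"] * 800 + ["B"] * 150 + ["C"] * 50})
reference_share = {"A": 0.60, "B": 0.25, "C": 0.15}

# Compare each group's share of the training set to the population.
train_share = train["group"].value_counts(normalize=True)
for group, expected in reference_share.items():
    actual = train_share.get(group, 0.0)
    flag = "UNDERREPRESENTED" if actual < 0.8 * expected else "ok"
    print(f"{group}: train {actual:.2%} vs population {expected:.2%} -> {flag}")

# One simple mitigation: reweight rows so each group counts in
# proportion to its population share during training.
weights = train["group"].map(lambda g: reference_share[g] / train_share[g])
```

Most model trainers accept per-row weights like these (for example, the `sample_weight` argument in scikit-learn estimators).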
Prioritizing transparency in their AI algorithms helps financial institutions demonstrate to regulators how decisions are made and builds trust with consumers. Clearly documenting an algorithm’s decision-making processes allows for scrutiny and the identification of potential biases.
One possibility is the use of Explainable AI (XAI), a set of techniques that reveal how algorithms arrive at specific outcomes.
“I think a separate question is whether AI tools will be developed in a way that’s explainable to impacted individuals. If someone has an adverse decision on something they’ve applied for, is there going to be a requirement to have an explanation that they can understand?” Pozza said.
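To illustrate what such an explanation could look like, here is a minimal sketch that ranks the signed contribution of each feature to a single applicant’s score under a plain logistic regression, using synthetic data and hypothetical feature names. Production XAI typically uses dedicated tooling (for example, SHAP values), but the underlying idea of attributing a decision to individual inputs is the same.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical credit features; names are illustrative only.
features = ["income", "debt_ratio", "years_of_history"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Explain one adverse decision: each feature's signed contribution
# to the log-odds is coefficient * feature value.
applicant = X[0]
contributions = model.coef_[0] * applicant
for name, c in sorted(zip(features, contributions), key=lambda t: t[1]):
    print(f"{name}: {c:+.2f}")  # most negative = biggest reason for denial
```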
The White House’s executive order directs government agencies to build on existing guidance around fairness and avoiding bias.
The order is focused on directing government agencies to develop guidelines rather than providing direct mandates. However, government departments could impose regulations around explainability or allow consumers to opt out of automated decision-making.
Other federal-level workstreams are developing frameworks for accountability mechanisms, which would provide best-practice guidelines for the industry. Agencies with their own rulemaking authority, such as the Federal Trade Commission (FTC), could potentially incorporate AI into privacy rules.
“The quickest way to regulate is Congress passing a law that allows agencies to step in quickly to make rules. It could be part of an overall privacy law or a standalone law, but that’s clearly the most direct way for regulation to happen,” Pozza said.
To remain answerable to both regulators and consumers, financial institutions must adopt ethical AI frameworks that prioritize fairness, accountability, and transparency.
Regularly monitoring and auditing algorithms for bias is essential to identify drift in the data or model behavior that could affect outcomes (a minimal drift check is sketched below).
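As one example of what such monitoring can look like, the sketch below computes the population stability index (PSI), a metric long used in credit risk to compare a model’s recent score distribution against its training-time baseline. The data here is synthetic, and the 0.25 alert threshold is a common rule of thumb rather than a regulatory standard.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between two score samples."""
    # Bin edges taken from the baseline (training-time) distribution.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    # Clip recent scores into the baseline range so nothing falls outside.
    actual = np.clip(actual, edges[0], edges[-1])
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    a_pct = np.histogram(actual, edges)[0] / len(actual)
    # A small epsilon avoids log(0) in empty bins.
    e_pct, a_pct = e_pct + 1e-6, a_pct + 1e-6
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Hypothetical scores: training-time baseline vs. recent production.
rng = np.random.default_rng(1)
baseline = rng.beta(2, 5, size=10_000)
recent = rng.beta(3, 4, size=10_000)   # the distribution has shifted

print(f"PSI = {psi(baseline, recent):.3f}")  # > 0.25 commonly triggers review
```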
Promoting diversity within the teams that develop and manage AI algorithms is also critical: a diverse team is more likely to recognize and address potential biases during development and can offer different perspectives on how bias creeps in.
The Bottom Line
The adoption of AI is reshaping the future of financial services in ways that could increase efficiency but also raise the risk of systemic bias in decisions around credit and investment, perpetuating societal inequalities.
The industry needs to take proactive steps to ensure that the AI algorithms it implements are fair, transparent, and accountable to government regulators and consumers alike.
By incorporating diverse training data sets and implementing ethical frameworks, financial institutions can contribute to building a more inclusive economic landscape.