In seven tweets last month, the insurer Lemonade described how it’s using artificial intelligence (AI) to assess customer risk and improve underwriting, customer acquisition and fraud detection. As an example, the company said its AI could analyze video claims submissions for “non-verbal cues” that “traditional insurers” can’t detect because they don’t use a digital claims process.

The result, Lemonade concluded, is a platform that is better at evaluating risks “and delighting customers.”
Twitter users were not delighted. Instead, they flooded Lemonade with concerns about potential AI bias against neuro-atypical people, non-English speakers, and people of color.
Financial institutions are investing in AI as well, raising the question of how banks will avoid similar missteps by building unbiased, ethical AI as the technology rapidly develops.
Indeed, when the idea of ethical AI first came to their attention, financial institutions didn’t immediately embrace it, acknowledged Stephen Thomas, executive director of the Analytics and AI Ecosystem at the Smith School of Business at Queen’s University in Ontario, Canada. Thomas works closely with Toronto-based Scotiabank through the school’s Scotiabank Center for Customer Analytics, and has also surveyed larger Canadian banks to learn where they stand with AI. Now, banks are working to address the problem of potentially biased AI.
“When they first learned about this problem, they weren’t too excited. They were like, ‘We’re not doing anything illegal. And what you’re suggesting is going to cost us a lot of money,’” Thomas said. Now, he added, banks are beginning to realize that making non-biased AI is worthwhile and isn’t as costly as they first feared.
That constitutes progress, since the first step is simply recognizing the problem, said Thomas, who heads the Queen’s University program that offers an ethical AI certification for executives.
Scotiabank’s plan for ethical AI
The $926 billion Scotiabank actively seeks guidance for its work in ethical AI, said Phil Thomas, the bank’s executive vice president of customer insights, data and analytics. The bank works with Queen’s University and other academic institutions, management consultancy Deloitte, and other external organizations to ensure its AI platform is unbiased, he told Bank Automation News.
Scotiabank’s AI platform is used across the institution for various solutions, including C. Mee, an AI-based tool that automatically delivers personalized service and sales messages to customers; AIDOX, an AI-based tool that automates the analysis of legal contracts in the global banking and markets division; and the Strategic Operating Framework for Insights and Analytics (SOFIA), which forecasts capital position for retail and small business customers or liquidity for business clients.
SOFIA relies on historical data, said Yannick Abba, vice president of analytics, risk management at the bank. The solution helps the bank predict potential cash flow problems and informs credit decisions for commercial customers. However, relying on historical data can introduce bias.
To guard against such bias, Scotiabank first looks at the developers building the AI and makes sure staff members are trained to develop fair and responsible AI, Abba said.
The bank also makes sure the team itself is diverse, said Phil Thomas. “What we have been doing on the human side is really leveraging our diverse teams to be able to bring various diversity of thought, people with different backgrounds to be able to do checks and balances on how the models are developing,” he said.
The analytics community within the bank also monitors for three characteristics within the data that’s used to train the AI, Abba said. “One is fairness; second, would be transparency; and then last would be robustness,” he said. “Those things are now embedded within our AI systems; [they] will also make sure that bias is removed from my business and credit decisions.”
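Scotiabank didn’t detail how those checks run in production. As a minimal sketch of the fairness piece, assuming binary approval decisions and a binary protected-group flag (both illustrative), a monitor could track the gap in approval rates across groups:

```python
import numpy as np

def approval_rate_gap(approved: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in approval rates between two groups (0.0 = parity)."""
    return abs(approved[group == 0].mean() - approved[group == 1].mean())

# Illustrative data only: 1,000 simulated credit decisions
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1_000)       # 0/1 protected-group flag
approved = rng.binomial(1, 0.6, size=1_000)  # 1 = approved
gap = approval_rate_gap(approved, group)
print(f"approval-rate gap: {gap:.3f}")       # flag for review above a tolerance
```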
Finally, the bank has policies in place to ensure fair lending practices, Abba said.
“We are super diligent with our underwriting credit decisions to make sure that we focus solely on credit profiles and we do not consider personal protected attributes such as gender, race, or religion,” Abba said.
Bank of America examines Erica’s ethics
Bank of America arguably has one of the most evolved digital assistants in banking. The $2.2 trillion bank is “very focused on” ensuring Erica is an ethical AI, Christian Kitchell, head of AI solutions and the Erica group, told BAN. When developing the AI, the bank consulted with outside groups like the AI Council about the best methods to test for bias, Kitchell said.
Among the factors the Charlotte, N.C.-based bank looks at are demographic and regional information, as well as other variables that might lead to bias.
“We look at a full span of all user-intensive interactions coming through the platform,” Kitchell said. “We look at a number of different variables to ensure that we are not absorbing any bias based on age, ethnicity, gender or regional distinction.”
Tips for avoiding bias in AI
Queen’s University’s Stephen Thomas suggested additional steps that financial institutions can take to create ethical AI:
First, define protected attributes, identifying who belongs to a minority subgroup.
Second, scrub the data of bias, since historical training data may encode it. For instance, a historical dataset used to train an AI to recognize what makes a “great” U.S. president will include only men, predominantly white men, reflecting society’s bias over the past 245 years. A quick audit of the training data, as sketched below, can surface such skews before a model is trained.
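Thomas didn’t prescribe a tool for this step. As a minimal sketch, assuming the historical data carries a binary outcome label and a binary protected-group flag (both illustrative), an audit might compare each group’s share of the records and of the positive labels:

```python
import numpy as np

def representation_report(labels: np.ndarray, group: np.ndarray) -> None:
    """Compare each group's share of the records and of positive labels.

    labels: 1 for a positive historical outcome (e.g., loan repaid)
    group:  0/1 protected-group membership flag
    """
    for g, name in ((0, "majority"), (1, "minority")):
        mask = group == g
        print(f"{name}: {mask.mean():.1%} of records, "
              f"{labels[mask].mean():.1%} positive labels")

# Illustrative data only: 1,000 simulated historical records
rng = np.random.default_rng(0)
group = rng.binomial(1, 0.2, size=1_000)                  # 20% minority
labels = rng.binomial(1, np.where(group == 1, 0.4, 0.6))  # skewed outcomes
representation_report(labels, group)
```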
There are also options for dealing with bias once it’s been identified.
“We can adjust the machine learning algorithms themselves to basically penalize them for making what humans consider a biased decision, and so the algorithms will not do that,” Thomas said.
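Thomas didn’t point to a specific algorithm. One common in-processing approach, sketched below in plain numpy under the assumption of a binary group flag and a logistic-regression model, adds a penalty to the training loss whenever the groups’ average predicted scores diverge; the function name and penalty weight are illustrative only:

```python
import numpy as np

def train_fair_logreg(X, y, group, lam=1.0, lr=0.1, epochs=500):
    """Logistic regression trained with a demographic-parity penalty.

    On top of the usual log-loss, the term lam * gap**2 penalizes any
    gap between the groups' mean predicted scores, steering the model
    away from decisions humans would consider biased.
    """
    w = np.zeros(X.shape[1])
    a, b = group == 0, group == 1
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted probabilities
        grad_loss = X.T @ (p - y) / len(y)     # standard log-loss gradient
        gap = p[a].mean() - p[b].mean()        # score gap between groups
        dp = p * (1.0 - p)                     # sigmoid derivative
        grad_gap = (X[a] * dp[a][:, None]).mean(axis=0) \
                 - (X[b] * dp[b][:, None]).mean(axis=0)
        w -= lr * (grad_loss + 2.0 * lam * gap * grad_gap)
    return w
```

Raising the penalty weight trades a little accuracy for a smaller score gap, which mirrors the pros-and-cons trade-off Thomas describes below.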
Developers can also adjust what’s called the ‘probability threshold of the prediction,’ which weights decisions to make it slightly easier for minority subgroups to get a loan, he said.
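Thomas didn’t cite specific numbers. As a minimal sketch of that post-processing step, again assuming a binary group flag and purely illustrative cutoffs, the same model score can be compared against a per-group threshold:

```python
import numpy as np

def approve(scores: np.ndarray, group: np.ndarray,
            threshold_majority: float = 0.50,
            threshold_minority: float = 0.45) -> np.ndarray:
    """Approve when the model's predicted probability clears the
    threshold for the applicant's group; the slightly lower minority
    threshold offsets bias absorbed from historical data."""
    cut = np.where(group == 1, threshold_minority, threshold_majority)
    return (scores >= cut).astype(int)

# Illustrative: two applicants with the same 0.47 score
scores = np.array([0.47, 0.47])
group = np.array([0, 1])        # 1 = protected subgroup
print(approve(scores, group))   # -> [0 1]
```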
“All of these methods have their pros and cons,” Thomas said. “Probably the solution will be a combination of all.”