Data dilemmas: How much is too much (or not enough) for consumers?
Roger Burkhardt, Partner, Advanced Analytics
The social media firestorm that erupted after a blogger sounded the alarm on a potentially “sexist credit card” highlights the importance of customer perceptions around the ethics of algorithms and the data that feed them. Over the past few weeks I’ve seen this event shape discussions in a variety of forums, ranging from “Expanding Access to Credit through AI” to ethical AI events at my firm.
As you probably know, the allegations stem from reports of women receiving dramatically lower credit limits than their husbands on the same card, and they have led to a formal investigation of “disparate impact” by the New York Department of Financial Services. Disparate impact can occur through algorithm design choices, such as training algorithms on data that encapsulates historical biases, whether intentionally or inadvertently.
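To make that concrete: regulators and fairness researchers often quantify disparate impact as the ratio of favorable-outcome rates between groups, with the EEOC’s “four-fifths rule” flagging ratios below 0.8 for closer review. Here is a minimal sketch of that calculation; the data and group labels are purely illustrative and have nothing to do with the actual investigation:

```python
# Minimal sketch: disparate impact measured as the ratio of favorable-outcome
# rates between two groups. All data below is hypothetical.

def favorable_rate(outcomes):
    """Share of records with a favorable outcome (e.g., the higher credit limit)."""
    return sum(outcomes) / len(outcomes)

# Hypothetical outcomes: 1 = received the higher credit limit, 0 = did not.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # e.g., male applicants
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # e.g., female applicants

ratio = favorable_rate(group_b) / favorable_rate(group_a)
print(f"Disparate impact ratio: {ratio:.2f}")

# A common rule of thumb (the EEOC "four-fifths rule") treats ratios below 0.8
# as potential evidence of disparate impact warranting investigation.
if ratio < 0.8:
    print("Potential disparate impact: review features and training data.")
```

A ratio well below 0.8, as in this toy example, would not by itself prove discrimination, but it is the kind of signal that prompts an examination of the features and training data behind the algorithm.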
Much of the commentary frames this as an ethical error in algorithm design and, in time, the investigation will hopefully determine whether this occurred. However, there is an alternative hypothesis: the root cause is a fundamental shift in consumer perceptions, with a significant segment now expecting companies to use all available data to fully understand customers and anticipate their needs.
The tech blogger understood what modern data and analytics make possible and apparently expected that any credit-assessing algorithm would consider the couple’s combined income. This expectation collided with a conventional card product designed for an individual. Indeed, the company’s response included a commitment to offer households the ability to share a credit limit in the future.
If this hypothesis of high expectations is correct, it raises tough questions for companies. For example, in this specific case, will women want an algorithm to suggest that they consider the household card, or will they see such an offer as patronizing, or feel that it is creepy that the bank knows their household income and marital status? And, of course, how will men with higher-earning spouses react in a similar situation?
In general, should companies use all legally available information, such as income, to anticipate customer needs? Or should they work hard to find that elusive, low-friction mechanism to get explicit customer permission to use a broader set of data on a case-by-case basis? These are questions many companies across industries must grapple with.
One possibility is to provide an option for consumers to explicitly provide more data — in this example, it could be their spouse’s income. An alternative could be to obtain a customer’s permission for the company to gather more information from other sources and then give the customer the ability to check it and benefit from it.
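One way to implement either option is an explicit, purpose-bound consent record: it captures what data the customer provided or allowed the company to gather, for which purpose, and it preserves the customer’s right to review the data before it is used. The sketch below assumes a hypothetical schema; every name and field is illustrative, not an actual product design:

```python
# Minimal sketch of an explicit, purpose-bound consent record covering both
# options described above. All names and fields are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DataConsent:
    customer_id: str
    data_item: str            # e.g., "spouse_income"
    source: str               # "customer_provided" or "third_party"
    purpose: str              # e.g., "household_credit_limit"
    granted_at: datetime
    reviewable: bool = True   # customer may inspect and correct the data

# Option 1: the customer explicitly provides the extra data point.
provided = DataConsent(
    customer_id="C123",
    data_item="spouse_income",
    source="customer_provided",
    purpose="household_credit_limit",
    granted_at=datetime.now(timezone.utc),
)

# Option 2: the customer permits the company to gather it elsewhere,
# retaining the right to check it before it is used.
gathered = DataConsent(
    customer_id="C123",
    data_item="spouse_income",
    source="third_party",
    purpose="household_credit_limit",
    granted_at=datetime.now(timezone.utc),
)

print(provided)
```

Binding each data item to a single stated purpose, as sketched here, is one way to keep the broader data use aligned with what the customer actually agreed to.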
Building a full set of such consumer options and testing them with a representative sample can substantially delay a product launch. Companies comfortable with releasing 1.0 versions should therefore clearly label initial product releases and seek feedback so they can improve and address more complex situations.
One thing is clear: this is a top-of-the-house issue. Reputational damage from events like this one is not limited to a single product or customer segment. Organizations need to determine their ethical position on the use of data and algorithms and communicate it to consumers (and employees). They must also have a product management operating model that ensures their use of data and algorithms is not only legally compliant but also aligned with their principles and with the expectations of their diverse customer base.