Data Bias in the Boardroom

How does data become biased and how do we work against data bias in boardrooms?

So much is being made of data fuelling the new economy, and for all the hype around data being the new oil, there is little evidence to the contrary. From the front lines to the call centres to the boardroom, everyone is talking about the value of data. Data is everywhere, and anyone and everyone is trying to capitalise on it.

For all this talk about data, we're only whispering about the ethics surrounding data and its commercialisation. Voices are hushed about the intrinsic risk of bias, and despite earnest calls to attention, the conversation has yet to go mainstream. My prediction is that it will once bias starts to hit bottom lines and market share, and end customers begin to see litigation as a viable response to sloppy and dangerous uses of data to fuel markets, corporations, and slapdash, snake-oil business strategies.

How does data become biased?

Data in and of itself is neutral; it's how we use it, crunch it, and interpret it that is rife with bias. Because data collection and selection are done by humans, biases such as political leanings, gender stereotypes, racial assumptions, and more can creep in. These days, most of that bias is coded into algorithms, and the more digitised our business, social, and political systems become, the greater the risk that algorithmic bias costs us more than it gives.

In the US, the Financial Industry Regulatory Authority (FINRA), a securities regulator, developed a predictive algorithmic model to identify high-risk financial advisors and firms, using criteria including complaints and terminations to generate a risk score for each advisor. Researchers found that women are disproportionately named in firm-initiated complaints and are 20% more likely to be fired after a customer complaint than their male peers. The irony is that the same study found female advisors engage in less costly misconduct and have a lower propensity toward repeat offences. The algorithmic bias nonetheless has adverse and inaccurate consequences: an over-representation of women classified as high-risk advisors, which is costly for firms in terms of HR turnover, fines, and portfolio management costs.
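To make that risk concrete, here is a minimal sketch of the kind of disparate-impact check a firm could run over a risk-scoring model's output. The data, field names, and the 0.7 "high-risk" threshold are invented for illustration; this is not FINRA's actual model or data.

```python
# Minimal sketch of a disparate-impact check on a risk-scoring model.
# All records, field names, and the 0.7 threshold are hypothetical.

def flag_rate(records, group):
    """Share of advisors in `group` flagged as high-risk."""
    members = [r for r in records if r["gender"] == group]
    flagged = [r for r in members if r["risk_score"] >= 0.7]
    return len(flagged) / len(members) if members else 0.0

advisors = [
    {"gender": "F", "risk_score": 0.81},
    {"gender": "F", "risk_score": 0.74},
    {"gender": "F", "risk_score": 0.42},
    {"gender": "M", "risk_score": 0.78},
    {"gender": "M", "risk_score": 0.35},
    {"gender": "M", "risk_score": 0.30},
]

female_rate = flag_rate(advisors, "F")
male_rate = flag_rate(advisors, "M")
print(f"flagged high-risk: {female_rate:.0%} of women, {male_rate:.0%} of men")

# A common heuristic (the "four-fifths rule") treats a ratio below 0.8
# between group rates as a disparity worth reviewing.
ratio = min(female_rate, male_rate) / max(female_rate, male_rate)
if ratio < 0.8:
    print("Disparity exceeds the four-fifths heuristic: review the model.")
```

On this invented sample, women are flagged at twice the rate of men, so the check fires; a real audit would also test whether the gap survives controls for the underlying misconduct record.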

In 2017, data collected under the US Home Mortgage Disclosure Act showed that 10.1% of Asian applicants were denied a conventional loan, compared with just 7.9% of white applicants; 19.3% of Black borrowers and 13.5% of Hispanic borrowers were also denied conventional loans. Denial rates for these groups sit well above the overall 9.6% denial rate.
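The arithmetic behind spotting that disparity is simple enough to automate. A sketch, with application counts invented so the per-group rates reproduce the figures quoted above (real HMDA records carry far more detail):

```python
# Per-group denial rates versus the overall rate. Counts are invented
# to reproduce the 2017 rates quoted above; real HMDA data is richer.

applications = {
    # group: (denied, total applications)
    "White":    (1264, 16000),  # 7.9%
    "Asian":    (202, 2000),    # 10.1%
    "Black":    (386, 2000),    # 19.3%
    "Hispanic": (270, 2000),    # 13.5%
}

overall = (sum(d for d, _ in applications.values())
           / sum(t for _, t in applications.values()))  # ~9.6%

for group, (denied, total) in applications.items():
    rate = denied / total
    print(f"{group:9s} denial rate {rate:5.1%} "
          f"({rate - overall:+.1%} vs overall {overall:.1%})")
```

A report like this is trivial to produce, which is rather the point: the disparity is visible to any lender that chooses to look.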

It's from examples like these that we can extrapolate the business risk of bias: entire markets ignored; inaccurate risk pricing that costs both the business and the end customer; and high opportunity costs driven by assumptions around age, gender, race, socio-economic background, neighbourhood, and education rather than evidence.

How do we work against data bias in boardrooms?

Boardrooms across the globe are recognising that data is, in fact, the new economic fuel, and that the global economy is shifting into the Fourth Industrial Revolution faster than we imagined. Data and its use are at the heart of this revolution. Bias, and the unethical use or manipulation of data, is a potentially costly risk in our new economy. There is only one insurance policy against it: diversity. It's almost impossible to ask the necessary questions and challenge our status quo assumptions about human behaviour without a diverse group of people at the table. Age. Gender. Race. Socioeconomic background. Education. Even neurodiversity. The more perspectives and backgrounds we pull together, the more likely we are to catch and cull pre-existing bias in our data sets. Smart governance practice calls for diversity, and for monitoring for bias before, during, and after modelling (both data and algorithmic modelling); a sketch of what that monitoring can look like follows below. Most of all, it calls for boardrooms to embrace an open, honest, and transparent conversation about data ethics and bias now, not soon, and definitely not later.
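As a sketch of what "monitoring for bias before, during, and after modelling" can mean in practice, the same parity check can be run on training labels, validation predictions, and live decisions alike. The 20-percentage-point gap threshold, the data, and the escalation step are illustrative assumptions, not a standard:

```python
# Sketch of a recurring parity check for the model lifecycle: run it on
# training labels (before), validation predictions (during), and live
# decisions (after). The 0.2 gap threshold is an illustrative assumption.

from collections import defaultdict

def adverse_rate_gap(outcomes):
    """outcomes: (group, adverse) pairs, adverse=True for a bad outcome.
    Returns each group's adverse-outcome rate and the widest gap between them."""
    counts = defaultdict(lambda: [0, 0])  # group -> [adverse, total]
    for group, adverse in outcomes:
        counts[group][0] += int(adverse)
        counts[group][1] += 1
    rates = {g: a / t for g, (a, t) in counts.items()}
    return rates, max(rates.values()) - min(rates.values())

stages = {
    "training labels": [("F", True), ("F", True), ("M", False), ("M", True)],
    "live decisions":  [("F", True), ("F", False), ("M", False), ("M", False)],
}

for stage, data in stages.items():
    rates, gap = adverse_rate_gap(data)
    if gap > 0.2:  # escalate disparities wider than 20 percentage points
        print(f"{stage}: adverse-outcome gap of {gap:.0%} across groups {rates}")
```

The governance value lies less in the code than in the cadence: the same question, asked at every stage of the model's life, with results that reach the board.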

Learn about this and more at the Data Governance Summit on 11 November where we'll be looking at the challenges of learning to 'talk digital'.

The author of this article is Ghela Boskovich, Head of Fintech and Regtech Partnerships. Ghela will appear as a panel speaker at the Data Governance Summit.  
