01 October 2019
The founder of FemTechGlobal speaks to Governance and Compliance about data discussions and understanding risks
Every company is built on the premise that it solves a problem; the same goes for FemTechGlobal. But we’re focused on better problem solving, not a single problem. To get to a really good solution, all of our assumptions about the design of that solution need to be challenged. Challenge doesn’t exist if those designing the solution all think alike, look alike, and share the same experiences. Good solutions call for diverse thinking, dissent, and discussion. FemTechGlobal is about improving diversity to help us better solve challenges – and to remind companies that diversity is better for businesses’ bottom lines too.
It’s a lot of talk, but little change. That’s a shame, considering how positively diversity affects profits. Ethnically diverse companies perform 33% better than the norm, and companies with at least 25% women on the executive team outperform those with no women at the top by 47% in return on equity (ROE). And companies with the highest board diversity have 53% higher ROE and 14% higher earnings before interest, taxes and amortisation than those with low to no diversity.
I think most of us recognise the value of data, and the growth of business insight solutions seems to prove it. But data offers more than insight; it can give direction and predict outcomes. Data illiteracy, though, stalls adoption of the roadmap that data can provide. There’s an education gap at board level about the scientific method behind these directions, and that gap affects the risk appetite around the business decisions the data points to: pivot, drop a customer segment, redesign the UX, accept more applicants because they’re actually not as risky as you’ve assumed.
A basic data science primer should cover: what big data is and why it’s good for spotting trends, and what micro data is and why it’s good for decisions about a specific customer; the dangers of pre-existing biases rooted in underlying social and institutional ideologies, and of emergent biases that arise as we uncover new knowledge, adopt new policies, or as our cultural norms shift – all of which can be carried into our use of data; and the fact that the company does not own the data, the customer does, so as custodian of that data, the company is liable for what it does with it.
There are two types of data risk: custodial liability, and applying bias to data structures or relying on unchallenged assumptions to analyse the data. The first is very much about data governance, from security to access to distribution. It’s about the liability inherent in being the custodian of customers’ data, and the risk exposure of mismanaging or misusing that data, including sharing or selling it to third parties.
The second is about the limitations of assumptions applied to crunching data.
All data science involves a human: we’re the ones doing the structuring and designing the questions we ask. Humans have biases, and we all make assumptions. That can lead to missed opportunities and less-than-best decisions.
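A minimal sketch of how an unchallenged assumption can distort an analysis – using the applicant-risk example from earlier. All numbers, thresholds, and the scoring scheme here are invented for illustration; the point is only that a cut-off baked into the pipeline removes the very data that could challenge it.

```python
import random

random.seed(0)

# Hypothetical toy population: each applicant is (score, defaulted).
# In this invented data, low-score applicants default only slightly
# more often than high-score ones (8% vs 5%).
applicants = []
for _ in range(10_000):
    score = random.randint(300, 850)
    base_rate = 0.08 if score < 600 else 0.05
    applicants.append((score, random.random() < base_rate))

# The analyst's unchallenged assumption: anyone under 600 is "too risky".
approved = [a for a in applicants if a[0] >= 600]
rejected = [a for a in applicants if a[0] < 600]

def default_rate(pool):
    """Fraction of a pool that defaulted."""
    return sum(1 for _, defaulted in pool if defaulted) / len(pool)

# Because outcomes are only ever observed for approved applicants,
# the data the analysis produces can never contradict the cut-off:
# the rejected group's (modest) real risk stays invisible.
print(f"approved default rate:  {default_rate(approved):.3f}")
print(f"rejected default rate (never observed in practice): "
      f"{default_rate(rejected):.3f}")
```

The design choice being illustrated: the bias lives in the filtering step, not in any individual calculation, which is why it survives even a technically correct analysis of the approved pool.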
The best digital solutions didn’t solve for the needs of a pre-existing organisation – and boards will have to recognise that technology is not about solving their needs, but customer and market needs. They also need to realise that whatever marries technological possibility with customer or market needs and desires will be a legitimate business; this means they face a different competitive landscape, and no more business as usual. Boards that fail to grasp this will find the world moving on with no regard for their business.