
Business Behaviour

22 July 2019 by Peter Montagnon

Boards need to stay on top of artificial intelligence and cyber risk, while making decisions that are more ethical and philosophical than technical

Imagine you are on the board of a motor manufacturing company. Technology nowadays permits a level of automation in cars that will make our roads safer and lead to fewer deaths. But it will also occasionally fail, leading to spectacular accidents which the public will blame on the premature introduction of technology.

Would you approve a project to introduce the latest automatic features into your range? If you do not, more people will die on the roads, but, if you do, your company’s reputation will be trashed at the first major failure.

This is the sort of decision that companies are increasingly having to make about the use of advanced technology in their business. The choice is a difficult one, but it is only tangentially about technology. It belongs primarily in the familiar category of risk appetite and risk oversight.

One of the central conclusions of the Institute of Business Ethics’ latest board briefing, Corporate Ethics in a Digital Age, is that, while boards need to get on top of artificial intelligence and cyber risk, most of the decisions they will need to make are more ethical and philosophical than technical in nature.

The Ethical Dimension

Getting these decisions right will bring competitive advantage, because companies that do so will be able to exploit the new opportunities in a climate of trust. It is not surprising, then, that ethics is centre stage.

This is for two reasons. First, the use of big data creates an information asymmetry that confers power on those who control it over those to whom it relates. Wherever there is an imbalance of power, there is the potential for ethical risk. Second, machines may be rational but they are not moral. They are incapable of making qualitative judgements and someone has to make sure the ethical dimension is considered at critical moments.

From a governance point of view the process starts with the need to ensure that the board remains in control. In the old days, boards could wield power easily because they understood how their business functioned at the coal face. Their judgement, delivered with seniority and expertise, provided strategic direction and oversight.

Now the task is harder because data scientists, who may be technically quite junior and lacking in broader business experience, may be quietly using their skills to create risks that no rational company would be prepared to run. For example, the derivatives team that caused catastrophic losses at UBS in the run-up to the banking crisis appears to have been operating beyond the reach and understanding of the board.

Yet boards remain accountable for what happens. The challenge for directors as they confront the technology revolution is to set appropriate limits and also to ensure that appropriate risk mitigation is in place.

Here values and corporate culture play an important role. If the concepts of honesty and respect for the rights of others are properly embedded in the business, then the risk that a rogue employee will endanger the whole enterprise is smaller. Particular effort may be needed to ensure that tech teams are aware of the values and develop a habit of thinking about the broader implications of what they are doing.

Sometimes it may even be right for a board to reject a new process or product, because it is so complicated that no one can explain it to them clearly or because they cannot evaluate the risks properly. In this case – just as with the choice about the new car range discussed above – the role of the board is to recognize the need to draw the line and then to draw it in ways with which they are comfortable and which are compatible with the firm’s values.

Some basic principles apply. First, it is important for directors to ensure that there is always what one insurance executive calls ‘a human in the loop’. This is someone who can recognize when machine learning is throwing up perverse conclusions and put a halt to the use of that particular programme.
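The ‘human in the loop’ principle can be illustrated with a short sketch. This is a hypothetical illustration only: the names `HumanReviewQueue`, `guarded_decision` and `sane_range` are invented for the example, not drawn from the briefing or any real library. The idea is simply that a machine decision outside agreed bounds is halted and routed to a person.

```python
# Hypothetical sketch of a 'human in the loop' guard. All names here
# are illustrative. A model output outside an agreed sane range is not
# acted on automatically; it is escalated for a person to review.

class HumanReviewQueue:
    """Collects model outputs that a person must inspect."""
    def __init__(self):
        self.items = []

    def submit(self, record):
        self.items.append(record)

def guarded_decision(predict, features, sane_range, review_queue):
    """Return the model's decision only if it passes a sanity check;
    otherwise escalate to human review and return None (no action)."""
    score = predict(features)
    low, high = sane_range
    if low <= score <= high:
        return score                      # machine decision stands
    review_queue.submit({"features": features, "score": score})
    return None                           # halt: a human decides

# Example: a pricing model that suddenly quotes a negative premium.
queue = HumanReviewQueue()
result = guarded_decision(lambda f: -250.0, {"age": 40},
                          (0.0, 10_000.0), queue)
```

In this toy case the perverse quote is intercepted: `result` is `None` and the case sits in the review queue rather than reaching a customer.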

Second, it is important that the benefits of the new technology are shared. The analogy here is with Monsanto and its campaign to introduce genetically modified crops some ten years ago.

Although some people will never accept that such products are safe, even if they have been declared so by the relevant authorities, it is true that Monsanto’s seeds were potentially useful in increasing crop yields. The company undermined its case by trying to keep most of the benefits for itself through rigorous and selfish enforcement of its patent rights. Had it been prepared to share the benefits more widely, its products might have been less controversial.

So it is with AI. Firms that use machine learning simply to generate advantage for themselves may end up incurring the wrath of the public and, if they push their advantage too far, large fines from the competition authorities. This occurred, for example, last year when the EU authorities fined Google €4.34 billion for what they described as illegal practices regarding Android mobile devices, used to strengthen the dominance of Google’s internet search engine.

Open Businesses

One of the generally recognised characteristics of an ethical approach to business is openness. Businesses which are open about what they are doing are generally regarded as more ethical and more trustworthy. In the sphere of AI, this is particularly important because of the risk that all decision-making may seem to disappear into a black box.

For this reason financial regulators often expect firms to be able to explain how a decision is made, an expectation that may be difficult to meet, even today, because of the sophistication now reached by algorithms. One solution may be to abandon a particular algorithm that cannot be explained, even though results often become more accurate as they become less explainable. That is another line that boards have to draw.

It can help if they oversee the way in which the algorithm is being used. They can insist that the company tests its decision-making by feeding in different data, comparing the results to see whether they are consistent and reliable and then tweaking the algorithm to correct any bias. They can keep an audit trail of the data that was used in the model creation, together with ad hoc and periodic reviews of AI deployments and decisions. Most importantly, there must be capacity to challenge or override machine-made decisions when they are clearly wrong in terms of common sense.
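The oversight steps described above can be sketched in miniature: feed the same decision process near-identical variations of an input, check that the outputs are consistent, and keep an audit record of every run. This is an illustrative sketch only; the model, the `tolerance` threshold and the log structure are hypothetical stand-ins, not a prescribed method.

```python
# Illustrative sketch of board-level algorithm oversight: score a base
# input and variants of it, flag divergent results (a possible bias or
# instability signal), and append every run to an audit trail.
import datetime

def audit_and_check(model, base_input, variants, tolerance, audit_log):
    """Score the base input and its variants; return whether the
    results are consistent within `tolerance`, logging each run."""
    inputs = [base_input] + variants
    scores = [model(inp) for inp in inputs]
    for inp, score in zip(inputs, scores):
        audit_log.append({
            "time": datetime.datetime.utcnow().isoformat(),
            "input": inp,
            "score": score,
        })
    consistent = max(scores) - min(scores) <= tolerance
    return consistent, scores

log = []
# A toy credit-scoring model that, as it should, ignores postcode.
model = lambda d: d["income"] / 1000
ok, scores = audit_and_check(
    model,
    {"income": 50_000, "postcode": "A"},
    [{"income": 50_000, "postcode": "B"}],  # same applicant, different postcode
    tolerance=0.5,
    audit_log=log,
)
```

If changing only the postcode moved the score beyond the tolerance, the check would fail, giving the board exactly the kind of bias signal, backed by an audit trail, that the paragraph above describes.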

Client Data

One overarching requirement is the need to treat individuals fairly. Boards need to have a clear view about what is acceptable and what is not in using client data to anticipate and manipulate customers’ preferences, or in monitoring their workforce, for example by secretly analysing employees’ keyboard use for signs of stress.

They need also to have a very clear view of what they would do if their company were subject to a cyber attack, and what they need to do to reduce the risk of this happening. As far as the latter is concerned, we are back to values and behaviours. Since many breaches result from internal error, companies need to ensure their employees are aware of the risks, to the company and to themselves, of not following due process.

If they are attacked, they need to be able to tell how much data has been lost. That means keeping a continuous audit of what data they hold and where. They also need carefully planned procedures for disclosure and crisis management.

All of these issues are now matters for boards to oversee. The origins lie in technology but most of the time they are about understanding where to draw the line on acceptability and about putting processes in place around risk mitigation. Boards need to understand what the technology does and how it is developing. For that they need reliable advice, possibly from a trusted Chief Technology Officer or a specially appointed advisory panel.

But they do not need to be technology experts themselves. Directors should not use their lack of technology expertise to shy away from the task. The qualities required of them are an ethical mindset, experience in business judgement and the management of risk, and the kind of intellectual curiosity which seeks out new opportunities.

Competitive advantage will accrue to those who bring these qualities to bear.

Peter Montagnon was an Associate of the Institute of Business Ethics
