AI needs to know who to kill. But we don’t know what to tell it

AI will become ever more pervasive in our lives. We should feel confident that the AI systems we use will be based on a set of ethical principles that we agree with. The problem is, agreeing on these principles is far from straightforward.

By Adrian Baker

Far from being a subject that only occupies philosophers, ethics is both the driving force that guides how people behave and interact, and the mirror that can be held up to show society's ills. It is difficult to disagree on the importance of 'ethics' as a whole, but that consensus quickly erodes once we start testing issues through an ethical lens. Agreement in ethics is not easy, and therein lies the problem. How can we give machines ethics if we can't agree amongst ourselves what to give them?

Why ethics matters when it comes to AI

In the UK, even though euthanasia is illegal, doctors can administer potentially fatal doses of painkilling drugs in order to relieve suffering. This is known as the doctrine of 'double effect': it can be ethical to do something morally good even if it has bad consequences, so long as the person didn't intend those consequences (even if they knew they were likely). This is obviously an incredibly fraught and contentious issue. So what should we tell an AI to do if it is faced with the same situation?

The obvious response to the above example would be to ensure that humans remain the ultimate arbiters of such decisions. And rightly so. But there are two problems with avoiding the issue completely. Firstly, we cannot be absolutely sure that humans will always make the final decision. It is quite conceivable that AI will one day be given autonomous control in healthcare, as it might in transport, finance, manufacturing, farming and many other sectors. Healthcare might be the last to hold out, but if it is possible, which it is, then we must at least be prepared for such an occasion. Secondly, even if a human makes the final decision, it will invariably be a decision based on input from AI recommendations. Ignoring such recommendations might be difficult, particularly if AI is used ubiquitously in healthcare and is shown to be more accurate than doctors (at least in terms of diagnosis and prediction). In a litigious society, it would be a brave doctor who ignores an AI-based recommendation.

The likely scenario is that, in the future, healthcare professionals will follow recommendations based on AI algorithms. So the question of what ethical principles we should give AI is an important one. Let's say, very broadly, we try to ensure that the AI will 'always do good'. If there is a risk that a person may die from a dose of painkillers, the AI might recommend a dose low enough to be safe, but one that prolongs suffering. And with that, the debate over what we should tell AI begins.
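To make the tension concrete, here is a minimal sketch of how a blunt 'always do good' rule might play out in a dose recommender. The numbers and the zero-risk constraint are entirely made up for illustration; this is not a clinical model.

```python
# Hypothetical dose options with made-up relief and risk figures.
candidate_doses = [2, 4, 6, 8, 10]                           # dose levels (mg)
pain_relief = {2: 0.2, 4: 0.4, 6: 0.6, 8: 0.8, 10: 0.95}     # fraction of pain relieved
risk_of_death = {2: 0.0, 4: 0.0, 6: 0.0, 8: 0.01, 10: 0.05}  # probability of a fatal outcome

# Encoding 'always do good' as 'never accept any risk of death' rules out the
# higher doses, so the recommendation maximises relief among the safe options.
safe_doses = [d for d in candidate_doses if risk_of_death[d] == 0.0]
recommendation = max(safe_doses, key=lambda d: pain_relief[d])

print(recommendation)  # 6 -- safe, but leaves suffering that a higher dose would ease
```

Swap the constraint for 'minimise suffering' and the same code would recommend the highest dose regardless of risk; the ethics lives entirely in which rule we choose to encode.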

Who should give AI ethics?

At the very least, we should strive to give AI a set of broad ethical principles.  But who is ‘we’? 

It's clear that countries can differ in their ethical values, but so too can whole regions (the West's individualistic principles versus the East's more utilitarian ones). But what happens when there are ethical differences between people within countries? Whilst we think of the US as currently highly polarised, in other countries ethnic and religious differences can create tensions that lead to civil conflict rather than verbal jousting. With all these differences within, let alone between, geographies, giving AI a set of ethical principles could quite easily conflict with the principles held by a large portion of a population.

If you want to start dividing opinion quickly, just ask people to work through an ethical thought experiment. As fans of the Netflix series The Good Place know, philosopher – and perfect embodiment of analysis paralysis – Chidi introduced viewers to one of the most famous ethical dilemmas there is, 'the trolley problem': a runaway trolley is hurtling towards five people on the track, and you can pull a lever to divert it onto a side track where it will kill one person instead. Do you pull the lever?

One approach to the trolley problem that is being explored is to crowdsource AI ethics. On MIT's Moral Machine website, visitors are given a set of trolley problems in the context of self-driving cars. The scenarios run like this: passengers are being driven by an autonomous vehicle when it suddenly suffers a catastrophic malfunction. Options are limited. The car can drive into a large concrete barrier, killing the passengers, or it can avoid the barrier and drive into pedestrians. Each scenario varies the number and characteristics of the passengers and pedestrians. The researchers behind the website then trained an AI algorithm on the human responses they collected, so that it could predict how people would answer new dilemmas. The aim was to get the AI to be as 'ethical' as the average respondent.
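As a rough sketch of how such crowdsourcing can feed a model, the snippet below trains a small classifier on (entirely invented) majority votes and uses it to predict what an average respondent might choose in a new scenario. The feature encoding is hypothetical and far simpler than the Moral Machine's; it only illustrates the general idea.

```python
# A minimal sketch of learning from crowdsourced dilemma votes; not the
# Moral Machine team's actual pipeline. All data here is invented.
from sklearn.ensemble import RandomForestClassifier

# Each row encodes one dilemma: number of passengers, number of pedestrians,
# and whether the pedestrians are crossing legally (1) or not (0).
X = [
    [1, 5, 1],
    [4, 1, 0],
    [2, 3, 1],
    [5, 1, 1],
    [1, 4, 0],
    [3, 1, 1],
]
# Label = the option most respondents chose for that scenario:
# 0 = swerve into the barrier (sacrifice the passengers), 1 = stay on course.
y = [0, 1, 0, 1, 0, 1]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# Predict what the 'average respondent' would choose in an unseen scenario:
# 2 passengers, 4 pedestrians crossing legally.
print(model.predict([[2, 4, 1]])[0])
```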

Whilst this is a novel approach to the problem, the researchers make clear that it is not “the end all solution”. Indeed, relying solely on a plurality of votes for such dilemmas sets a dangerous precedent. That is why democratic countries have judges. As James Grimmelmann, professor at Cornell Law School, told The Outline, crowdsourcing ethics “makes the AI ethical or unethical in the same way that large numbers of people are ethical or unethical”. If we leave it solely to the will of the majority, all sorts of prejudices and biases can creep in. After all, human history is littered with the will of the majority acting in ways we now find abhorrent.

But others take a different view. In a discussion paper from Duke University, researchers argued that, far from being no more ethical than the average person, AI decision-making can embody even higher ethical standards, because aggregating moral judgements can strip out idiosyncrasies and individual bias. Of course, whilst the argument may be theoretically sound, current evidence suggests that AI is far from overcoming the biases it has learnt from humans.

Going beyond laws?

The Universal Declaration of Human Rights can give a ray of optimism that such problems are not insurmountable.  We may find that a clear set of basic ethical principles can be agreed upon for AI, just as with human rights.  But as with most things, it is when we move past the basic principles – question them, challenge them, put them through different scenarios – that issues become ever more complex. 

‘Killing’ is an obvious example. We might all agree that killing is unethical and that a broad principle should therefore guard against it. But of course, there are instances where it is not so clear whether it is unethical. What if you were protecting yourself or somebody else against immediate, life-threatening harm from an attacker? What about abortion? What about euthanasia? It is of course unlikely that AI will have an autonomous role to play in these decisions, but as mentioned earlier, AI will be giving doctors recommendations about treatments where the outcome may or may not be death.

One way out of this is to ensure that whatever AI system is adopted, it stays within a country's laws. But ethics and the law aren't always the same thing, and laws can follow rather than precede what society as a whole currently deems ethical. This can create challenges. Social media companies have tried to stay within laws protecting freedom of speech, but a great number of people thought that refusing to take down violent material was unethical, regardless of whether the companies stayed within the law.

Ensuring that the ethical foundations of AI merely stick to the law might not be enough, but going beyond the law is just as problematic. All it takes is a bit of societal pressure, and companies might instil in AI the values of whichever group shouts the loudest, or of a rich and powerful minority whose interests are at stake.

Ensuring that AI stays within – and only within – the law might also allow people to avoid taking responsibility for ethical decisions by delegating them to an AI. In the last decade, the actions of investment bankers and tax accountants have been retrospectively called into question. Whilst staying within the law, some of those who bent what was ethically permissible agreed to give back money. But if they had taken their advice from an AI – one that guarantees it stays within the law but cannot explain how ethically dubious its financial recommendations are – how could society hold people to account?

Should we wait for AI to be ethical before it is let loose?

Elon Musk has been clear in his warnings about the dangers of AI and the need for clear ethical guidelines. Some companies, such as Google's DeepMind, are not waiting around and are establishing ethics councils to ensure that their AI is based on a robust set of ethical principles. But if AI development continues at pace without ethical foundations that most if not all companies subscribe to, one solution is for AI to do what it is best at and learn: learn from its mistakes, and therefore learn what is deemed ethical. But unlike an individual, an AI system can have instantaneous effects over a wide geographical area. Most of the time AI won't have a direct impact on human safety. But in some cases – autonomous vehicles being one of them – should we wait for AI to learn from its mistakes, or should we agree that such systems should only be publicly launched once all conceivable ethical principles have been agreed on?

As Ariel Procaccia of Carnegie Mellon University and Iyad Rahwan of the MIT Media Lab have argued, their crowdsourcing approach to moral dilemmas allows for a 'modular' approach to ethics. In essence, some ethical and legal principles might be easy to implement in an AI, but as the dilemmas get more challenging, the AI could fall back on how an average human would choose. This would have the advantage of not needing to wait for government organisations or ethics experts to make clear decisions on every (often contentious) dilemma.
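A sketch of that modular idea might look something like the following, where settled rules are checked first and the crowd-trained model from the earlier sketch is only consulted when no clear rule applies. The Dilemma type and the rule itself are hypothetical, not taken from the researchers' work.

```python
# A minimal sketch of a 'modular' ethics pipeline: hard rules first,
# crowd-derived preferences as the fallback. Names here are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Dilemma:
    features: list                        # same encoding as the crowd model's training data
    illegal_option: Optional[int] = None  # an option ruled out by law or settled policy, if any

def decide(dilemma: Dilemma, crowd_model) -> int:
    options = [0, 1]  # 0 = swerve into the barrier, 1 = stay on course

    # Module 1: principles that legislators or ethics experts have already settled.
    if dilemma.illegal_option in options:
        options.remove(dilemma.illegal_option)
    if len(options) == 1:
        return options[0]

    # Module 2: no settled rule decides the case, so defer to the model that
    # approximates how an average respondent would choose.
    return int(crowd_model.predict([dilemma.features])[0])
```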

Can ethics be customisable?

With the Alan Turing Institute, the Centre for Data Ethics and Innovation, and DeepMind’s own ethics council, the UK leads the world in exploring the ethical ramifications of AI, but the initial outcome of this work is only the start of a process.  The UK may eventually decide on a set of ethical principles, but discussions should equally take place at an international level, given the universal implications of AI. 

But even if blocs of countries can agree on a set of ethical principles for AI, it is unclear whether companies would take a universal approach. As we can see with privacy and data settings, companies take different positions depending on the region. Recently, Facebook tested an AI algorithm that helps predict suicide risk, but did not roll it out in Europe because of the region's strict privacy laws.

‘Ethical settings’, like privacy settings, can overcome at least some of the differences within and between geographies.  But is a principle really a principle if you can turn it on and off so easily?  By now it should be obvious that in the realm of ethics and AI, there are a lot of questions and few answers.  And this will always be the case.  Differences in societies and cultures will always test boundaries – both literally and figuratively.  And history has shown us that as societies and cultures change, so too will ethical principles.  So the work of ensuring that AI is bound by ethics will never end, and will necessarily have to adapt and evolve as developments in AI continue to accelerate.  But the process has started, and that’s a good thing.  And if we’re forced to tell AI who to kill, our answers may tell us more about ourselves than we initially intended.    

Adrian Baker

About the author

I look at emerging technologies from a social science and policy perspective. I completed my PhD on the diffusion of innovations at University College London, looking at how innovation gets implemented in healthcare organisations. My main interests are on policies that encourage innovation and the diffusion of emerging technologies, and understanding their social implications so that everybody wins.