Ethics and AI: How the Hard Decisions Are Being Made
When we’re looking to place blame when it comes to ethical AI dilemmas, there’s an obvious answer: the algorithms. Or is it?
The air is thick with excitement. After raising a huge sum of capital for your start-up, you’ve just doubled the size of your team. Everyone has gathered to welcome the new recruits, shaking hands and patting backs, smiles all around.
But something isn’t right. You look around with concern, suddenly realising that your new hires are -- almost exclusively -- white males.
How did this happen? You hired a reputable recruitment firm to handle your hiring drive. Shocked that such an obvious bias has crept in, you do some digging. The recruitment firm explains that it recently acquired new recruitment software, and that the filtered results -- the candidates deemed best suited to your firm -- seemed to favour certain genders and races.
It’s one of the tough ethical dilemmas of our time.
Now you are face to face with a dilemma. Who is to blame in this scenario? The coders who programmed the software? The people who built and trained the AI technology it uses? The people who uploaded the data to the software? The company that sold the software? The recruitment firm that purchased and used the software? Or should it be the person who hired the recruitment company that used the biased software -- in other words, you?
It’s one of the tough ethical dilemmas of our time. We’ve entered new territory when it comes to ethics and AI. Because AI is such a complex field with so many stakeholders involved, government bodies, lawyers and scientists are conferring to work out how biases are being introduced, what dangers those biases pose, who is to blame and, most importantly, how to resolve these issues.
When we’re looking to place blame when it comes to ethical AI dilemmas, there’s an obvious answer: the algorithms. Algorithms are the code that an AI uses to make decisions, like whether or not to brake suddenly in a self-driving car. The problem with them is that they’re extremely literal. They do exactly what they’re told -- nothing more. We don’t realise how many considerations we humans automatically factor into our decisions...that is, until we see what the AI does with the information we’ve given it.
Even those who programmed the original algorithm often can’t explain how a machine has reached a certain decision.
Our world is run by algorithms: they decide what we see when we type a search into Google, they decide what ads we see when we read a news article, they decide what movies to recommend for us on Netflix. But most of us don’t really understand how they work.
Algorithms are rules. Simply put, an algorithm could be: if X, then Y. Of course, most of today’s algorithms are far from simple. As the rules have become increasingly complex, and machine learning has introduced ever-greater amounts of data, we have reached a point where even those who programmed the original algorithm often can’t explain how a machine has reached a certain decision.
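To make that "if X, then Y" concrete, here is a toy sketch (the criterion and data are invented for illustration, not taken from any real system). It shows just how literal such a rule is: it applies exactly one condition and considers nothing else.

```python
# A toy "algorithm" in the most literal sense: a fixed rule, applied exactly as written.
def shortlist(candidate):
    # if X (five or more years of experience), then Y (shortlist) -- nothing else is weighed
    return candidate["years_experience"] >= 5

applicants = [
    {"name": "A", "years_experience": 7},
    {"name": "B", "years_experience": 3},
]

print([a["name"] for a in applicants if shortlist(a)])  # ['A']
```

A rule this simple is easy to inspect; the opacity the article describes appears only once thousands of such rules are learned from data rather than written by hand.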
To see how such a system can go awry, let’s look at risk assessment in the judicial system, where algorithms are used to estimate the likelihood that a criminal will re-offend. These algorithms seriously affect the lives of those who are serving time -- influencing decisions about whom to set free and whom not to at each stage of the criminal justice system.
Yet they’re rife with bias. A ProPublica investigation analysed the risk scores of more than 7,000 people arrested in a single Florida county and found them to be remarkably unreliable. Just 20 percent of those the algorithm had predicted would commit violent crimes within two years actually did. Racial disparity was a big problem: black defendants were more likely to be wrongly labelled as likely to re-offend, while white defendants were more likely to be mislabelled as unlikely to do so.
Just 20 percent of those the algorithm had predicted to commit violent crimes within two years actually did.
Then there’s the time Uber’s self-driving car hit and killed a pedestrian during testing. Upon investigation, it was found that Uber’s algorithms didn’t recognise that the car was going to hit 49-year-old Elaine Herzberg until 1.2 seconds before the collision. The main problem? Herzberg had been jaywalking -- a misdemeanour so common that we barely think of it as one. But Uber’s algorithm hadn’t been programmed to understand jaywalking. It only understood that a pedestrian could cross the road at a crosswalk, so it had no provisions in place for when one doesn’t.
And while these biases can be fixed after they are identified, what we really need is to avoid them in the first place.
All this leaves us wondering: are humans the ones programming the bias into the machines? Or are machines creating the bias all on their own?
The truth is, it’s a bit of both. Human bias affects every single decision we make on a daily basis. Researchers have identified 180 biases that affect how we see reality. Take the sunk cost fallacy, for example: we irrationally cling to things that have already cost us something, which can lead us into unwise investments.
The fact is, AI isn’t perfect. It makes mistakes, and can be prone to errors and blind spots.
Of course, if we want algorithms to take over our decision-making at hyperspeed, we need them to do better than we do. The problem is that machines can learn from our human biases. AI, you see, relies on historical data to learn how to make future decisions. And if any of our 180 human biases have made their way into that data -- and they almost always do -- then the machine learns human bias.
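The mechanism is easy to demonstrate with a hedged, minimal sketch (the historical records and group names below are entirely invented). Even the simplest possible "learner" -- one that just predicts the most frequent past outcome for each group -- faithfully reproduces whatever bias its training data contains:

```python
from collections import defaultdict

# Invented historical hiring records: past human decisions, bias included.
history = [
    ("group_a", "hired"), ("group_a", "hired"), ("group_a", "rejected"),
    ("group_b", "rejected"), ("group_b", "rejected"), ("group_b", "hired"),
]

# "Training": tally outcomes per group from the historical data.
counts = defaultdict(lambda: defaultdict(int))
for group, outcome in history:
    counts[group][outcome] += 1

def predict(group):
    # Predict the most frequent historical outcome for this group --
    # the model has learned the recorded bias, not anything about merit.
    return max(counts[group], key=counts[group].get)

print(predict("group_a"))  # hired
print(predict("group_b"))  # rejected
```

Real machine-learning models are vastly more sophisticated, but the principle is the same: if the historical record is skewed, the patterns extracted from it will be too.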
But machines can also create their own biases. The fact is, AI isn’t perfect. It makes mistakes, and can be prone to errors and blind spots.
No one is purposely creating biased AI. And in defence of the computer scientists who work tirelessly to programme the algorithms that make our lives, well, work, it may be unfair to expect them to double-major in moral psychology. Still, that’s not to say that there’s nothing that can be done.
Until recently, there has been very little regulation surrounding the use of AI and algorithmic accountability. But that’s changing.
The EU published its guidelines on ethics in artificial intelligence in April 2019. They attempt to carve out a ‘human-centric’ approach to AI that is aligned with European values and principles. They call for measures such as fundamental-rights impact assessments on an AI system before it is developed. They also insist that end users should not be subjected to decisions made solely by automated processing. Humans, the guidelines decree, should be able to understand a machine well enough to interact with it and, ultimately, to override its decisions.
It’s a thorough, logical document, and if its recommendations are followed, AI bias could become a thing of the past. But that’s the issue -- they are merely recommendations. No one is enforcing these guidelines. Not yet, anyway.
There is a big need for more clarity in our governance when it comes to ethics and AI.
The fact is, our laws will have to evolve in response to the new challenges AI poses. Thankfully, this evolution has already begun. In Canada, artist Adam Basanta was sued in 2018 in the Quebec Superior Court for trademark infringement. His crime? His computer system, which he’d programmed to operate independently and produce a series of randomly generated abstract pictures, was claimed to have infringed another artist’s photographic work with one of the images it produced.
The matter needs to be dealt with not only nationally, but internationally as well. When the GDPR came into force, it was claimed that it would legally mandate a ‘right to explanation’ of all decisions made by automated or AI algorithmic systems. In practice, however, the mandate has proved ambiguous and limited. There is a big need for more clarity in our governance when it comes to ethics and AI.
Luckily, where regulation is lacking, we can count on companies’ concern for their reputations to push them to self-regulate -- reputations that big organisations like Amazon and Microsoft are keen to protect. When Amazon discovered that its AI recruitment software was unfairly biased against women, it dropped the tool with great haste.
So, where does all this leave us today when it comes to the ethics of AI? Clearly, there is a long road ahead. Unfortunately, the road to legislation is paved with trial and error, and the errors AI is making have real impacts on real lives. Hopefully, humans will figure out how to anticipate the biases AI is prone to -- or at least put enforceable regulations in place to stop them from arising in the first place.
- https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. Accessed 28 Nov. 2019.
- https://www.extremetech.com/extreme/301580-self-driving-uber-car-involved-in-2018-crash-didnt-understand-jaywalking. Accessed 28 Nov. 2019.
- https://www.weforum.org/agenda/2018/12/24-cognitive-biases-that-are-warping-your-perception-of-reality. Accessed 28 Nov. 2019.
- https://www.logikk.com/articles/prevent-machine-bias-in-ai/. Accessed 28 Nov. 2019.
- https://techcrunch.com/2019/08/25/the-risks-of-amoral-a-i/. Accessed 27 Nov. 2019.
- https://www.europarl.europa.eu/RegData/etudes/BRIE/2019/640163/EPRS_BRI(2019)640163_EN.pdf. Accessed 28 Nov. 2019.
- https://www.cbc.ca/radio/spark/409-1.4860495/can-an-artist-sue-an-ai-over-copyright-infringement-1.4860762. Accessed 28 Nov. 2019.
- https://www.ssrn.com/abstract=2903469. Accessed 28 Nov. 2019.
- https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G. Accessed 28 Nov. 2019.