AI Ethics: How To Act Ethically in an Automated World


Artificial intelligence (AI) is becoming ubiquitous in our everyday lives. As AI takes on increasingly sophisticated tasks, it’s important to ensure that an ethical framework is in place to guide its responsible use.


A version of this article — written by our Co-Founder and Chief Science Officer, Jaakko Pasanen — first appeared as a guest post on G2’s Learn Hub.

Whether you’re aware of it or not, artificial intelligence (AI) is integrated into much of the technology you use on a regular basis. When Netflix recommends a show you might like, or when you go to book a trip online and Google knows which airport you usually depart from, AI has been involved.

And its use is only becoming more common. In fact, 91% of businesses today want to invest in AI. At the same time, 95% of companies across the globe are concerned about the ethical risks of artificial intelligence, according to a recent Deloitte survey.

Although AI may seem extremely technical, bordering on science-fiction levels of complexity, it is ultimately just a tool. And, like any tool, it can be used for good or for bad. This is why, as AI takes on more and more sophisticated tasks, it’s important to ensure ethical frameworks are in place so this tool is used for good.

Here are some of the main concerns when it comes to ethics in AI, some examples of ethical AI to follow, and most importantly, how to ensure ethics is kept front of mind when using AI in business contexts.

What are ethics in AI?

AI ethics are a set of moral principles used to govern and inform the development and use of artificial intelligence technology.

Since AI is doing things that would normally require human intelligence, it requires moral guidelines the same way human decision making does. Without ethical AI regulations, the potential for this technology to be used to perpetuate wrongdoing is high.

Many industries make heavy use of AI, including finance, healthcare, travel, customer service, social media, and transportation. Because of this ever-increasing utility across so many sectors, AI technology has far-reaching effects on every aspect of the world and therefore needs regulation.

Of course, the level of governance needed varies depending on the industry and the context in which AI is being used. A robot vacuum that uses AI to map the layout of a house probably isn’t going to drastically change the world if it lacks an ethical framework. However, a self-driving car that needs to detect pedestrians, or an algorithm that determines who is most likely to be approved for a loan, can and will have profound effects on society if ethical guidelines aren’t implemented.

By determining what the main ethical concerns of AI are, consulting examples of ethical AI, and looking at best practices on how to use AI ethically, you can ensure that your organization is on the right path to using AI for good.

What are the main ethical concerns of AI?

As previously mentioned, the main ethical concerns vary widely depending on the industry, context, and potential scale of impact. But broadly speaking, the biggest ethical matters when it comes to artificial intelligence are AI biases, the concern that AI could replace human jobs, privacy concerns, and the use of AI for deception or manipulation. Let’s go over them in more detail.

Biases in AI

The most common source of bias in artificial intelligence is training AI models on data sets with built-in biases. A striking example of this is Georgia Tech’s recent research into object recognition in self-driving cars. Detection models recognized pedestrians with darker skin tones about 5% less accurately, because the data set used to train the models contained roughly 3.5 times as many examples of light-skinned people, so the models learned to recognize them better.

The good thing about AI and machine learning is that the data sets they’re trained on can be modified, and with enough effort invested, models can become largely unbiased.
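To make this concrete, here is a minimal, hypothetical sketch of how a team might measure and reduce that kind of imbalance in a training set. The group labels and counts are invented for illustration; real-world mitigation usually starts with collecting more representative data rather than resampling.

```python
import random
from collections import Counter

# Invented, simplified training set: each example pairs some image
# features with the demographic group of the pedestrian it depicts.
dataset = [("image_features", "light_skin")] * 700 + [("image_features", "dark_skin")] * 200

# Step 1: measure the imbalance.
counts = Counter(group for _, group in dataset)
print(counts)  # Counter({'light_skin': 700, 'dark_skin': 200})

# Step 2: naive mitigation via oversampling, so every group appears
# as often as the largest one.
target = max(counts.values())
balanced = list(dataset)
for group, count in counts.items():
    if count < target:
        examples = [ex for ex in dataset if ex[1] == group]
        balanced += random.choices(examples, k=target - count)

print(Counter(group for _, group in balanced))
# Counter({'light_skin': 700, 'dark_skin': 700})
```

Note that oversampling only balances the counts, not the variety of the underlying examples, which is why collecting more representative data, as the Georgia Tech case suggests, remains the better fix.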

At Ultimate we’ve taken a number of measures to mitigate the potential for bias, including the deliberate decision to choose an ungendered company name. When it comes to data, we train our in-house AI model on historical customer support conversations, provided by our customers.

Diverse hiring is key to avoiding built-in biases within AI technologies, and here at Ultimate we’re proudly diverse. Our growing team of UltiMATES represents 35 different nationalities and is 40% women.

AI replacing jobs

Almost every technological innovation in history has been accused of replacing jobs, and so far, those fears have never panned out.

Back in the 1970s, automatic teller machines (ATMs) were introduced and people feared mass bank teller unemployment. The reality was the exact opposite. Because ATMs were taking care of simple tasks — like handling check deposits and withdrawing cash — fewer tellers were needed to operate a bank branch. This meant banks were able to open new branches and the number of teller jobs increased overall.

Similarly, when AI was first introduced to understand and mimic human speech, people panicked that bots would replace human customer service agents. This hasn’t happened. While AI-powered chatbots can take care of repetitive requests — handling up to 80% of routine tasks and customer questions — the most complicated queries still require a human agent’s intervention.
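To illustrate that division of labor, here is a minimal sketch of the confidence-threshold pattern that commonly sits behind bot-to-human handoff. Everything in it (the classifier, the threshold, the canned replies) is a hypothetical stand-in, not a description of any particular product.

```python
# A sketch of confidence-based bot-to-human handoff. The classifier,
# threshold, and replies below are hypothetical stand-ins.

CONFIDENCE_THRESHOLD = 0.8  # illustrative value, tuned per deployment

CANNED_REPLIES = {
    "reset_password": "You can reset your password from the login page.",
    "order_status": "You can track your order in your account dashboard.",
}

def classify(message: str) -> tuple[str, float]:
    """Hypothetical stand-in for a trained intent classifier."""
    if "password" in message.lower():
        return "reset_password", 0.93
    return "unknown", 0.35

def escalate_to_human(message: str) -> str:
    # In production this would open a ticket or route to a live agent.
    return "Connecting you with a human agent..."

def handle(message: str) -> str:
    intent, confidence = classify(message)
    if confidence >= CONFIDENCE_THRESHOLD and intent in CANNED_REPLIES:
        return CANNED_REPLIES[intent]  # routine request: the bot replies
    return escalate_to_human(message)  # complex or unclear: hand off

print(handle("I forgot my password"))       # bot handles it
print(handle("My order arrived damaged!"))  # escalated to a human
```

The design choice here is that the bot only answers when it is confident and the request maps to a known, routine intent; everything else goes straight to a person.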

Our mission at Ultimate is to bring joy to customer support, for customers and agents alike. Instead of replacing people, we see customer service automation as a tool: one that supports the talent of human agents by freeing them from repetitive, simple requests.

Realistically, the future of AI is one in which humans and AI-powered bots work together, with the bots handling the simple tasks and humans focusing on more complex matters.

AI and privacy

Privacy is recognized by the UN as a fundamental human right, and various AI applications can pose a real threat to this. When companies aren’t transparent about why and how data is collected and stored, privacy is at risk.

The challenge in creating privacy regulations is that people are generally willing to give up some personal information to get a level of personalization, with 80% of consumers more likely to purchase from brands offering personalized experiences.

Somewhat ironically, AI is also a great tool for data protection. Its self-learning capabilities mean that AI-powered programs can detect malware or unusual patterns of activity that often precede security breaches.
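As a concrete illustration, here is a minimal anomaly-detection sketch using scikit-learn’s IsolationForest, with invented login-event data. A real deployment would use far richer features and continuous retraining.

```python
# A minimal sketch of AI-assisted security monitoring using
# scikit-learn's IsolationForest. The login-event features and the
# numbers below are invented purely for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [failed_login_attempts, megabytes_downloaded, hour_of_day]
normal_activity = np.array([
    [0, 12, 9], [1, 8, 10], [0, 15, 14], [0, 10, 11], [1, 9, 16],
    [0, 11, 13], [0, 14, 15], [1, 13, 10], [0, 7, 9], [0, 12, 17],
])

# Fit the model on what normal activity looks like.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_activity)

# A suspicious event: many failed logins, a huge download, at 3 a.m.
suspicious = np.array([[25, 900, 3]])
print(model.predict(suspicious))  # [-1] flags an anomaly; [1] means normal
```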

We take security seriously at Ultimate. In November 2021, Ultimate was officially recognized as a SOC 2 Type 2-compliant company, meaning our customers can rest assured that their data (and their customers’ data) is in safe hands.

Deception and manipulation using AI

Machine learning models can easily generate factually incorrect text, meaning fake news articles can be created in seconds and distributed through the same channels as real news. AI can also create false audio recordings and synthetic visuals, known as deepfakes, where someone in an existing image or video is replaced with someone else.

This machine-generated content can be extremely persuasive. And when AI is used to intentionally deceive in this way, it puts the onus on individuals to decide what is real (or not).

With Ultimate’s technology, this isn’t an issue. While our AI model can understand and process language, it doesn’t generate messages, so there’s no chance of creating a racist, sexist bot trained on bigoted tweets.

How to use AI ethically

With all the challenges AI brings, you might be wondering how to mitigate risk when implementing AI as a solution in your organization. Fortunately, there are some best practices for using AI ethically in a business context.

Build education and awareness around AI ethics

Start by educating yourself and your peers about what AI can do, its challenges, and its limitations. Rather than scaring people or ignoring the potential for unethical use of AI, making sure everyone understands the risks and knows how to mitigate them is the first step in the right direction.

The next step is to create a set of ethical guidelines that your organization must adhere to. Finally, since ethics in AI is difficult to quantify, check in regularly to ensure goals are being met and processes are being followed.

Take a human-first approach to AI

Taking a human-first approach means controlling for bias. First, make sure your data isn’t biased, remembering the self-driving car example above. Second, make your team inclusive. In the US, the software programmer demographic is approximately 64% male and 62% white.

This means that the people who develop the algorithms that shape the way society works do not necessarily represent the diversity of that society. By taking an inclusive approach to hiring and expanding the diversity of teams working on AI technology, you can ensure that the AI you create reflects the world it was created for.

Prioritize transparency and security in all AI use cases

When AI is involved in data collection or storage, it’s imperative to educate your users or customers about how their data is stored, what it’s used for, and the benefits they get from sharing it. This transparency is essential to building trust with your customers. Seen this way, adhering to an ethical AI framework creates positive sentiment for your business rather than acting as restrictive regulation.

AI gets better with the right ethics

As long as best practices are followed, AI has the potential to improve virtually every industry, from healthcare to education and beyond. It’s up to the people creating these AI models to ensure they keep ethics in mind and constantly question how their work can benefit society as a whole.

When you think of AI as a way to scale human intelligence rather than replace it, it doesn’t seem so complex or scary. And with the right ethical framework, it can change the world for the better.

See more of our featured posts on G2’s Learn Hub