A version of this article — written by our Co-Founder and CEO, Reetu Kainulainen — first appeared as a guest post on G2’s Learn Hub.
Generative AI has taken the world by storm. This advanced technology is what lies behind ChatGPT, Google’s Bard, DALL-E, MidJourney, and an ever-growing list of AI-powered tools. The ability of generative AI to mimic human creativity has catapulted this branch of artificial intelligence into the cultural mainstream — and it’s here to stay. Business leaders are taking stock and looking for opportunities to harness this groundbreaking tech, and that's why we here at Ultimate are building generative AI technology into our product.
Much of the hype (and inevitable anxiety around job losses) has centered on creative industries as well as more technical roles: from content writers and graphic designers to analysts and programmers, gen AI has wide-ranging uses. And with the power of generative AI to boost productivity and democratize access to creativity, there are plenty of opportunities across all industries to leverage this technology, including in customer service.
Here’s a deep dive into what gen AI is, the challenges of implementing large language models (LLMs) in a customer service setting, and 6 leading examples of generative AI and how it can be used in the support space.
Generative AI and LLMs in customer support
Previously, one of the most common reasons business leaders were resistant to implementing an automation solution was the worry that customers would find bot-to-human interactions frustrating. With clunky, rule-based, first generation bots, this was a very valid fear. But technology has come a long way since then.
The advanced ability of gen AI chatbots to converse with humans in an easy, natural way means that using this technology in a customer-facing setting is a no-brainer. From enhancing the conversational experience to assisting agents with suggested replies, there are plenty of ways that generative AI and LLMs can help your brand to deliver faster, better support.
But before we jump into the use cases for generative AI in customer support, let’s take a step back.
What exactly is generative AI?
Traditional AI systems are able to recognize patterns and make predictions by analyzing the datasets they have been trained on. But AI has gotten a lot smarter.
Generative AI is a branch of artificial intelligence that is able to process huge amounts of data to create entirely new output. Depending on the training data you use (and what you want the AI model to be able to do) this output might be text, images, videos, and even audio content. Thanks to accelerating interest and investment in gen AI companies, the market valuation for this sector is expected to reach $42.6 billion globally this year.
While this technology feels futuristic, generative AI has been under research since the 1960s, and artificial intelligence itself first emerged as a field of study back in the 1950s. But it wasn't until late 2022 that the gen AI arms race really kicked off, when OpenAI released its conversational AI chatbot ChatGPT.
ChatGPT is a bot that can converse with people in a human-like way. As well as answering questions, it can write poetry, produce lines of code, offer recipe suggestions, and generate pretty much any form of text-based output that you ask for. ChatGPT is so convincingly natural in its responses that some have claimed the bot can pass the Turing test: a test of whether a machine can exhibit intelligent behavior indistinguishable from a human's.
What is a large language model (LLM)?
The reason for ChatGPT’s fluency is the fact that this model was trained on a vast set of data written by real people. AI models that learn from huge amounts of text data are known as large language models (or LLMs). On top of generating written content, these models are able to summarize text, analyze message sentiment, and perform translations: a pretty impressive range of skills for one single tool.
As with many other LLMs, ChatGPT is built on transformers (nope, not the shape-shifting robots from the famous movie franchise). Let's go a little deeper for the techy folks. A transformer model is a type of deep neural network that, instead of processing data sequentially like earlier recurrent neural networks, can process all inputs at once. By assessing data holistically, transformer-based models are much better at understanding context, which makes them better at providing accurate answers. The most important innovation of transformers is the self-attention mechanism: the model weighs the importance of different parts of the input when generating each piece of its response.
TLDR: transformers massively increase both AI accuracy and efficiency.
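For the techy folks, the holistic processing described above can be sketched in a few lines of dependency-free Python. This is a toy version of self-attention that skips the learned query, key, and value projections a real transformer uses; the point is that each output vector is a weighted mix of every input vector at once, rather than the inputs being consumed one at a time.

```python
import math

def self_attention(X):
    # X: list of token embedding vectors (seq_len x d).
    # Simplification: we use X directly as queries, keys, and values,
    # skipping the learned W_q, W_k, W_v projections of a real model.
    d = len(X[0])
    # 1. Score every token against every other token at once.
    scores = [[sum(q * k for q, k in zip(qi, ki)) / math.sqrt(d) for ki in X]
              for qi in X]
    # 2. Softmax each row so one token's attention weights sum to 1.
    weights = []
    for row in scores:
        m = max(row)
        exps = [math.exp(s - m) for s in row]
        total = sum(exps)
        weights.append([e / total for e in exps])
    # 3. Each output mixes ALL value vectors: this is why transformers
    #    can take the whole input into account when answering.
    return [[sum(w * v[j] for w, v in zip(wrow, X)) for j in range(d)]
            for wrow in weights]

out = self_attention([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(len(out), len(out[0]))  # 3 2
```

Because every output attends to every input in a single pass, the model "sees" the full context of a sentence, which is exactly the contextual understanding described above.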
LLMs have the incredible power to elevate conversational experiences and boost productivity. Because of this, pretty much as soon as ChatGPT launched, support leaders and automation providers started thinking about how this technology could be used in a customer service setting.
The challenges around leveraging generative AI for customer support
So far so good, right? Well, maybe a little too good to be true. And the CEO of OpenAI, Sam Altman, admits as much himself. When discussing the hype around his own gen AI bot, Altman explained:
“ChatGPT is incredibly limited but good enough at some things to create a misleading impression of greatness. It’s a mistake to be relying on it for anything important right now. It’s a preview of progress; we have lots of work to do on robustness and truthfulness.”
Generative AI technology is very new (ChatGPT is currently free to use because it’s still in its research phase). And as with any new development, there are a few kinks to iron out. Having said that, it is possible to unite the natural, conversational experience of gen AI bots like ChatGPT with the control and efficiency needed in customer support AI automation. Yep, you really can have your cake and eat it too.
Broadly speaking, the risks and challenges of implementing generative AI for customer support can be split into 2 categories: accuracy and resource use. Let’s go through these risks in more depth, as well as how to mitigate them.
Challenge #1: Accuracy
In the words of the original Bard, “your greatest strength begets your greatest weakness” — and this holds true for LLMs as well as us humans. The reason for the impressive fluency of ChatGPT and other LLMs is the expansive set of data these conversational bots have been trained on. But using such a broad and unconstrained dataset to learn from can lead to accuracy issues with the responses LLMs generate. Depending on the prompt you provide, generative AI models will draw on the entirety of their training data to offer their best estimate of what you want to hear. Unfortunately these estimates don’t take facts into account.
Anyone who has played around with ChatGPT will be aware of its ability to sell fabrications as fact — like the time it guaranteed one user that the world’s fastest marine mammal is a peregrine falcon. The bot will even provide fake references to back up its false claims. These convincing yet factually inaccurate responses are known as hallucinations. And while AI hallucinations are entertaining for recreational users, fibbing won’t fly in a customer service setting.
When customers reach out to your support team, they want accurate responses to help resolve their specific issue as quickly as possible. That’s why plugging ChatGPT straight into your tech stack and letting it loose wouldn’t be a very good idea. So how can you make sure generative AI-enabled conversations don’t end up derailed?
LLMs start making up facts when the data they’re trained on doesn’t contain information about the specific question asked, or when the dataset holds conflicting or irrelevant information. Which makes the solution to this challenge pretty simple — you need to create a system to constrain the AI model.
Here’s how to ensure that support conversations powered by generative AI stay on track:
- Optimize the dataset the LLM is trained on: When it comes to training data, think quality over quantity. In a customer support setting, the gen AI model will be connected to your company knowledge base. To see the most value from implementing an LLM solution, it’s important to go through your knowledge base, remove any old or duplicated articles, and make sure the training data you’re feeding the bot stays up-to-date and relevant to the topics your customers care about.
- Ground the model with a search engine: You can steer the way your LLM navigates the knowledge base it’s trained on with a custom internal search engine. This keeps support interactions on topic by only letting the model access information that is relevant to the question asked.
- Introduce fact-checking processes: If you’re worried about AI accuracy, one of the best ways to make sure the bot is sending relevant, useful responses is to introduce an additional fact-checking layer to your automation solution. After using an LLM to generate a conversational reply, you can use a different AI model to verify if the answer is correct before sending it to a customer.
Putting these guardrails in place will help prevent the bot from sending out rogue answers or going off on a tangent about a completely unrelated topic.
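Put together, the three guardrails above can be sketched as a tiny pipeline. Everything here is illustrative: the knowledge base, search function, and the LLM and fact-checking calls are simple stand-ins for real components, but the control flow is the point: ground with search, generate from the retrieved context only, and verify before sending.

```python
# Toy knowledge base standing in for a company help center.
KNOWLEDGE_BASE = {
    "change delivery address": "Go to Account > Addresses and click Edit.",
    "reset password": "Use the 'Forgot password' link on the login page.",
}

def search(question):
    """Stand-in search engine: return only articles relevant to the question."""
    return [text for topic, text in KNOWLEDGE_BASE.items()
            if any(word in question.lower() for word in topic.split())]

def generate_reply(question, context):
    # Stub for an LLM call; a real model would paraphrase the context.
    return context[0] if context else None

def verify_reply(reply, context):
    # Stub fact-checker: accept the reply only if grounded in the context.
    return reply is not None and any(reply in doc or doc in reply
                                     for doc in context)

def answer(question):
    context = search(question)                 # 1. ground with search
    reply = generate_reply(question, context)  # 2. generate from context only
    if verify_reply(reply, context):           # 3. fact-check before sending
        return reply
    return "Let me connect you with a human agent."

print(answer("How do I change my delivery address?"))
```

When the knowledge base has no relevant article, the pipeline falls back to a human handover instead of letting the model hallucinate an answer.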
Challenge #2: Resource use
The reason LLM-powered bots are so impressively human-like is because the datasets that feed large language models are (as the name suggests) pretty massive. This means the upkeep of a generative AI solution is resource intensive and poses engineering challenges.
Brands looking to implement a gen AI bot for their support might choose to host their own LLM, but the running costs for this can rack up very quickly. As well as the expense, these models need serious compute and memory to run smoothly, which many cloud setups struggle to provide. This can cause problems with latency (meaning it takes the model a long time to process information) and lead to delayed response times. As 90% of customers say instant responses are important to them, in a support setting an immediate reply can make or break the customer experience.
To avoid these issues, companies might choose to rely on a third-party model accessed through an API, like OpenAI's GPT-4. This option might seem like the easiest solution, but it comes with its own challenges. Ever tried logging into ChatGPT to find it was at capacity or down for the day? It's a pretty frustrating experience. So depending on a third-party API is a risky move in customer service, where reliability is key. On top of this, while this route might seem like the most cost-effective option, the cost of individual API requests can quickly add up.
But when it comes to support, the full power of the largest LLMs isn't really needed to see the benefits of generative AI. So the solution to these issues is using a "reasonably-sized" language model. While the name isn't quite as catchy, you'll still see impressive results with these smaller models, as long as you've got the right training data.
According to Jaakko Pasanen, Chief Science Officer and AI expert at Ultimate, in order to see the best results with generative AI, “we need to think of AI in customer support as not just one neural network, but a whole brain, where different parts of the brain handle different tasks.” That means instead of relying completely on LLMs to take over customer support automation tasks, the best strategy is to use generative AI as one part of a broader automation solution.
The top 6 use cases of generative AI in customer support
Generative AI is a powerful tool that (when built into a broader automation or CX strategy) can help companies to deliver faster, better support — from offering a more conversational experience for customers, to assisting agents and supporting bot builders. So let’s take a look at the top 6 use cases of gen AI for customer service.
How the AI behind ChatGPT creates better conversations with customers
ChatGPT has shown the world just how advanced and seamless interactions with AI-powered bots can be. So the most obvious use case of LLMs and gen AI is for companies to provide instant, fluent, and on-brand conversations with customers.
More natural conversations
By adding an LLM layer to automated chat conversations, your support bot will be able to greet customers in a friendly way, send natural-sounding replies, and engage in the most human-like small talk that you can imagine. This means that instead of building out dialogue flows for greetings, goodbyes, and any other chit-chat, the LLM layer will take care of this.
Pulling updated info from your web pages
Rather than manually updating conversation flows or double-checking details in your knowledge base, you can let your generative AI bot instantly serve customers this information. By connecting the LLM to your help center, FAQ pages, knowledge base, or any other company pages, the bot will have immediate access to your most up-to-date information. And the bot can deliver this info to customers in a conversational way, zero training needed.
Say a customer wanted to update the delivery address listed on their account. Once they ask your LLM-powered bot how to do this, the bot searches your help center articles for the right answer. Then, instead of pointing them in the direction of the relevant article, the bot summarizes this information and sends accurate instructions directly to the customer on how to edit their address — instantly resolving their issue without any back-and-forth.
How generative AI and LLMs help support agents work faster
One of the most attractive use cases for any kind of automation tool is helping people to work more efficiently — and this applies to LLMs too. Customer service teams can use generative AI to take over manual admin tasks, driving down handling times and supporting agents in their daily roles. Here are two key ways.
Structuring support tickets
A great way to see value with generative AI is using this technology to structure, summarize, and auto-populate tickets. Not only does this help your support team resolve customer requests faster, but it means your human agents can focus on the rewarding tasks that require their empathy and strategic thinking. LLMs can also predict categories and even analyze message sentiment. This allows agents to send tailored responses, depending on whether a customer is satisfied, seething, or somewhere in between.
Support agents can prompt an LLM to transform factual replies to customer requests into a specific tone of voice. And another impressive power of LLMs is that these models can remember context from previous messages and regenerate responses based on new input. That means you can keep on iterating until the reply is 100% on brand.
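As a rough illustration of ticket structuring, here's a sketch where a stand-in `call_llm` function predicts a category and sentiment that get auto-populated onto the ticket. The function is stubbed with simple keyword rules so the example runs without any API; a real implementation would send the prompt to an actual model.

```python
def call_llm(prompt):
    # Stub: a real implementation would send `prompt` to an LLM API
    # and parse the model's structured response.
    text = prompt.lower()
    if "refund" in text:
        category = "Billing"
    elif "login" in text:
        category = "Account"
    else:
        category = "General"
    negative_words = ("angry", "terrible", "worst")
    sentiment = "negative" if any(w in text for w in negative_words) else "neutral"
    return {"category": category, "sentiment": sentiment}

def structure_ticket(message):
    # Auto-populate ticket fields so agents skip the manual admin.
    prompt = f"Categorize and rate the sentiment of this support message: {message}"
    fields = call_llm(prompt)
    return {"message": message, **fields}

ticket = structure_ticket("This is the worst service ever, I want a refund!")
print(ticket["category"], ticket["sentiment"])  # Billing negative
```

With category and sentiment pre-filled, agents can route the ticket and pick an appropriately empathetic tone before they even start typing.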
How generative AI and LLMs make bot training and conversation design easier
The final set of use cases for LLMs in customer support is to speed up analytical and creative tasks around training and maintaining AI-powered bots. This helps automation managers, conversation designers, and bot builders to work more efficiently, and helps companies see faster time to value with automation.
Generating training data
Don’t have the time to work out every single way a customer might ask about a return? No problem — instead of manually creating this training data for intent-based models, you can ask your LLM to generate this instead.
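As a hedged sketch of what this can look like, the function below stands in for prompting an LLM along the lines of "write N ways a customer might ask about X". Here it just expands fixed templates so the example is self-contained; a real model would produce far more varied paraphrases.

```python
def generate_utterances(intent, n=4):
    # Stand-in for an LLM prompt such as:
    #   "Write {n} ways a customer might ask about: {intent}"
    # A real model would return diverse paraphrases, not templates.
    templates = [
        "How do I {}?",
        "I need help with how to {}.",
        "Can you tell me how to {}?",
        "What's the process to {}?",
    ]
    return [t.format(intent) for t in templates[:n]]

for utterance in generate_utterances("return an item"):
    print(utterance)
```

The generated phrasings can then be fed straight into an intent-based model as training examples, saving bot builders hours of manual brainstorming.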
Providing sample conversation flows
Even the best writers can end up getting blocked. But when this happens you can use your LLM as a tool to aid creativity and ease writer’s block by crafting sample replies for your conversation designers. They can either copy and paste these verbatim, or use them as inspiration to brainstorm dialogue flows.
Generative AI is a powerful tool that has the capacity to revolutionize customer experience and the work carried out by support teams in a multitude of ways. But because this tech is still so new, and challenges around its implementation are very real, it’s important to be careful about using it in customer-facing tools.
Rather than viewing this technology as a silver bullet that will solve all support issues, the key to seeing the most value with gen AI is to be deliberate in its use. It wouldn’t be the best idea to plug a generative AI bot straight into your customer service tech stack — but by using LLMs within a broader automation system you can easily overcome the risks associated with this technology.
Despite the challenges, there’s plenty of room for optimism about the positive impact LLMs will have on the customer support space — as the innovative and varied examples of generative AI use cases discussed above show. And as it matures, the people working with generative AI will continue to find new and more advanced use cases for this game-changing tech.