ChatGPT: The Tech Behind the Hype (and What It Means for Your Support)

ChatGPT having a generative AI-powered conversation with a user.

Our in-house AI researchers weigh in on what’s so groundbreaking about the technology behind ChatGPT, and whether the hype around generative AI is justified when it comes to automated customer support. 

We’re not going to lie – having a team of in-house AI researchers when you’re developing a SaaS product can be pretty damn handy. Especially when, years after you’ve started a product based on the most cutting-edge conversational AI technology out there, the rest of the world suddenly catches up. 

We don’t want to be that friend who claims they “discovered” your favorite band like five years ago, but to be honest, we’ve been pretty psyched about Large Language Models, natural language processing, and transformers for almost half a decade.

Why are we bringing this up in an article about ChatGPT? Because the above technology is exactly what’s catapulting talk about OpenAI’s current chatbot into news op-eds, Twitter, and the meeting rooms of Silicon Valley investors right now. 

In fact, we’re incorporating the tech behind ChatGPT into our product right now, in real time. Join us on our journey!

Having worked on them for years, our Co-founder and Chief Science Officer Jaakko Pasanen and Senior AI Researcher Meysam Asgari-Chenaghlu are here to break down the AI basics behind ChatGPT. They’re also weighing in on why the conversational chatbot’s popularity is great news for customer support automation and when we might expect to see concrete changes cropping up in the CS automation space.

What is ChatGPT?

ChatGPT is a conversational AI chatbot. 

What that means is you can ask it a question and, rather than throwing a bunch of useful information sources at you as Google search would, ChatGPT presents answers in a conversational way. Kind of like a human would. 

ChatGPT is able to sound so human-like because it’s been trained on massive amounts of text written by real people (a huge swathe of the internet prior to 2022, that is).

Tools that learn from huge quantities of data like this work with what we call Large Language Models (LLMs). These models are typically based on neural networks and are trained using a technique called unsupervised learning, where the model learns to predict the next word in a sentence based on the previous words.
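To make that next-word objective concrete, here’s a toy, from-scratch sketch in Python. Real LLMs learn this with neural networks and billions of parameters rather than raw word counts, so treat this purely as an illustration of the training idea, not of how GPT actually works:

```python
from collections import Counter, defaultdict

# A tiny "training corpus" (real models train on trillions of words).
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word follows each word across the corpus.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("sat"))  # → on
```

An LLM does, at heart, the same thing — predict the most plausible continuation — but over whole contexts rather than single words, which is what lets it produce fluent paragraphs instead of bigram guesses.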

ChatGPT: The Tesla of Large Language Models? 

But ChatGPT is not the only large language model (LLM) out there. Think of it as just one “brand” of LLM among many other high-quality ones. The Tesla of LLMs, if you will: many other brands make great cars (and have for decades), but only Tesla is known for the ones that drive themselves (or… try to). And, like a Tesla, ChatGPT still has to fine-tune the tech behind the groundbreaking innovation it’s known for. Even OpenAI’s CEO has admitted as much (he’s a bit more down-to-earth than Elon Musk):

"ChatGPT is incredibly limited but good enough at some things to create a misleading impression of greatness. It's a mistake to be relying on it for anything important right now. It's a preview of progress; we have lots of work to do on robustness and truthfulness."

Sam Altman, CEO of OpenAI 

Created by AI research company OpenAI in November 2022, ChatGPT is based on GPT-3.5, an LLM trained to produce text, and is currently free of charge because it’s still in its research phase.

Transformers: The Michael Bay movie franchise of conversational AI?

GPT is short for “Generative Pre-Trained Transformer”. Let’s break that down:

Generative AI means AI that can generate new content instead of analyzing existing data. In practice, this means generative AI models can produce text and images, including program code, poetry, blog posts, and artwork. 

ChatGPT, like other LLMs, is based on what we call transformers. Nope, we are not talking about the Michael Bay movies – but we do think that conversational AI without transformers would be like the Transformers movies without Megan Fox. Which is to say – not worth much. 

So what are transformers?

According to Meysam, who happens to have co-authored a book on the topic, “A transformer model is a deep neural network which can learn the context and thus meaning by tracking the relationship between words (tokens) in a sentence (sequence). The key innovation of transformer models is the use of self-attention mechanisms, which allow the model to weigh the importance of different parts of the input when making a prediction. ChatGPT is a great showcase for the power of transformers in generative AI models.”
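For the curious, here’s a bare-bones sketch of that self-attention idea in Python with NumPy. A real transformer adds learned query/key/value projections, multiple attention heads, and many stacked layers; this stripped-down version only shows the core trick: each token’s representation becomes a weighted mix of every token in the sequence.

```python
import numpy as np

def self_attention(X):
    """Minimal scaled dot-product self-attention (no learned weights).

    X: (seq_len, d) matrix, one row per token embedding.
    Each output row is a weighted mix of ALL rows of X, with weights
    reflecting how strongly each token "attends" to the others.
    """
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                    # pairwise token similarities
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: each row sums to 1
    return weights @ X                               # context-aware representations

# Three toy 4-dimensional "token embeddings"
tokens = np.array([[1.0, 0.0, 1.0, 0.0],
                   [0.0, 1.0, 0.0, 1.0],
                   [1.0, 1.0, 1.0, 1.0]])
out = self_attention(tokens)
print(out.shape)  # → (3, 4): one context-mixed vector per token
```

The payoff is that every token can “see” the whole sentence at once, which is how these models track long-range relationships between words.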

What does that mean in practice? Well, when we implemented transformer models at Ultimate roughly a year ago, we immediately saw the accuracy of our AI rise by 30%.

Natural language processing: Ours and ChatGPT’s shared love

Another part of what makes ChatGPT work so well? Natural language processing, or NLP. This is the part of ChatGPT – and our virtual agents at Ultimate – that understands incoming messages (to which it can then reply in a conversational tone). 

ChatGPT relies on NLP to understand prompts and reply to them. Similarly, advanced customer support chatbots draw on NLP to understand and group customer intents, so they can lead them down the right path toward getting their issue resolved as quickly as possible. 
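As a heavily simplified, hypothetical illustration of that intent-matching step: the sketch below scores an incoming message against example phrases per intent using plain word overlap. Production systems (ours included) use learned language models rather than raw word counts, but the message-to-intent mapping is the same idea.

```python
from collections import Counter
import math

# Hypothetical intents with example training phrases (illustration only).
INTENTS = {
    "refund_request": ["i want my money back", "refund my order please"],
    "delivery_status": ["where is my package", "track my delivery"],
}

def bow(text):
    """Bag-of-words: count how often each word appears."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(message):
    """Return the intent whose example phrases best match the message."""
    scores = {
        intent: max(cosine(bow(message), bow(ex)) for ex in examples)
        for intent, examples in INTENTS.items()
    }
    return max(scores, key=scores.get)

print(classify("can you refund my money"))  # → refund_request
```

Once a message lands in the right intent, the bot can route the customer straight to the matching resolution flow instead of improvising an answer.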

It’s here that you’ll start to see the main difference between ChatGPT’s USP (generating unexpected, long-form output) and the concrete tasks CS chatbots are designed to fulfill (finding concrete solutions to concrete problems). 

Does your customer support need a calculator or a rogue party guest?

ChatGPT is creative and entertaining, yes. But right now, it still lacks the accuracy that task-oriented chatbots need. When customers get in touch with your support, they have a concrete problem that they want solved, stat. And ChatGPT is just not the best or fastest way to do that; an AI model that’s trained on industry-specific data is. Think of it like using the right tool to calculate 230x40. You don’t need to buy a MacBook Pro to perform this task: a calculator is cheaper and easier to use (and purpose-built for the job).

Moreover, because ChatGPT is trained on data from across the internet that ranges in trustworthiness, it’s at risk of two things: bias and spreading misinformation. So using it in customer-facing situations would be kind of like taking your super entertaining friend with a drinking problem to a formal business get-together. They will be the life of the party, but there’s no way you can control what they’ll say or do as the night goes on. Imagine letting that rogue energy loose on your most valued client.

“ChatGPT is like a very wise parrot – a parrot that’s actively participating in the conversation and contributes meaning to it.”

Meysam Asgari-Chenaghlu, Senior AI Researcher at Ultimate & co-author of “Mastering Transformers”

Finally, ChatGPT runs on an open domain. That’s great news for the average consumer, but not for an enterprise that needs to handle customer data securely (think SOC 2 and GDPR compliance). Integrating ChatGPT with customer support automation software right now would create the risk of directly leaking, or indirectly revealing, personally identifiable information from the training data through the responses it generates.

So, real talk: We probably wouldn’t put ChatGPT in its current iteration in front of our customers. But that doesn’t mean it won’t play a huge part in revolutionizing customer support, thanks to the sophisticated tech that powers it: transformer models, LLMs, and NLP. 

In fact, we believe that these are exactly the ingredients that customer support automation needs in order to strike that balance between fluent and natural conversational experiences and the accuracy and control required in customer support.

To keep with our Tesla analogy, it’s kind of like lifting the flashy hood and assessing the value of the car’s individual parts. To put it in our Co-founder Jaakko Pasanen’s words,

“Ask not what a tool can do for you but instead what tool you need to solve your problem.”

Jaakko Pasanen, Co-founder and Chief Science Officer, Ultimate

How LLMs, Transformers, and NLP will revolutionize customer support automation in the years to come

Here is how we think developments in LLMs like ChatGPT will improve support automation, including the products we build at Ultimate, within the next 1-5 years:

  • Reducing the time and effort needed to manage AI models by automatically generating more example expressions to match to each customer intent.

  • Reducing average handle time (AHT) by summarizing tickets that are handed over to human agents from a virtual agent.

  • Suggesting replies for agents working on live chat down the line, like an autocomplete, making sure conversations stay on-brand and improving the overall customer experience. 

So despite current limitations, we’re excited, knowing that LLMs in general and ChatGPT in particular can open doors to amazing use cases. It’s a great showcase of how far large language models can take the conversational experience, and will take us one step closer to reaching our Ultimate mission: Creating the most powerful virtual agent platform in the world.