The Role of Automation in Contact Center Quality Assurance


In 2024, customer-facing interactions are only the tip of the iceberg when it comes to CX automation. There are also a growing number of quality assurance use cases that will streamline your backend processes and make your agents’ lives much easier.

This guest post was written by our friends at Klaus, one of Ultimate’s valued partners.

Unless you’ve been living under a rock these last couple of years, chances are you are already familiar with the wonders of generative AI and what it can do to produce remarkably human-like conversations. This technology has vastly improved the quality of automated support when it comes to customer-facing interactions across channels like chat and email. But beyond the hype of these cutting-edge tools, support automation has a whole lot more to offer for streamlining your backend processes.

Quality assurance (QA), for example, has recently seen major advancements thanks to AI-powered automation. QA is one of the best ways to get the most out of your existing support organization. However, if you are dealing with large ticket volumes, global teams, outsourcing, different languages, etc., it’s an elaborate and ongoing practice with several plates to spin.

So let’s look at how automation can help you keep the plates of quality management spinning.

What is contact center QA automation?

Contact center QA (quality assurance) automation refers to the use of technology to streamline and enhance the quality assurance processes in a contact center environment. Also known as AutoQA, it typically involves four elements: conversation discovery, the scoring process, the review assignment process, and data analysis.

A bit about automated conversation discovery

Klaus analysis has shown that the average support team can only review 2-5% of their conversations manually. The larger the ticket volume, the harder it is to even achieve that proportion. This is not ideal if you want to have a reliable measure of your support quality.

Many teams will choose tickets at random, thinking it will give them the best shuffle for quality purposes. But that’s not quite how it works – if you select conversations to review at random, you’ll mostly end up with the ‘average’ interactions: common problems that agents and protocols are already well equipped to handle swiftly.

Average, therefore, isn’t helpful if what you want are the conversations with the most learning potential – the ones that highlight where you might need to improve.

How AI-powered automation helps you find conversations to review

For several years, contact center QA tools have offered sentiment analysis, helping you see underlying emotions at a glance. Machine learning has recently made this AI analysis even more sophisticated.

Large language models can now be trained to analyze your conversation data and automatically detect conversations in which (see the sketch after this list):

  • There is more back-and-forth than usual between agent and customer.
  • The customer shows dissatisfaction and/or a risk of churn.
  • The customer shows delight, or the agent went the extra mile (so you can use that ticket as an example).
  • The customer asked for things to be escalated or for a follow-up.
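
To make this concrete, here is a minimal, hypothetical sketch of that kind of triage. The data structure, thresholds, and keyword lists are illustrative assumptions only – a real AutoQA tool relies on trained language models rather than keyword rules:

```python
# Illustrative sketch of conversation discovery. All names (Conversation,
# find_review_candidates, the keyword lists) are hypothetical, not Klaus's API.
from dataclasses import dataclass, field

ESCALATION_HINTS = ("escalate", "manager", "supervisor", "follow up")
CHURN_HINTS = ("cancel", "refund", "unsubscribe", "switching to")

@dataclass
class Conversation:
    id: str
    agent_messages: int
    customer_messages: int
    customer_text: str
    sentiment: float              # -1.0 (negative) .. 1.0 (positive), from any sentiment model
    flags: list = field(default_factory=list)

def find_review_candidates(conversations, avg_turns=6):
    """Flag conversations that are likely worth a manual review."""
    candidates = []
    for conv in conversations:
        turns = conv.agent_messages + conv.customer_messages
        text = conv.customer_text.lower()

        if turns > 2 * avg_turns:
            conv.flags.append("unusual back-and-forth")
        if conv.sentiment < -0.5 or any(h in text for h in CHURN_HINTS):
            conv.flags.append("dissatisfaction / churn risk")
        if conv.sentiment > 0.7:
            conv.flags.append("customer delight")
        if any(h in text for h in ESCALATION_HINTS):
            conv.flags.append("escalation or follow-up requested")

        if conv.flags:
            candidates.append(conv)
    return candidates
```

The point isn’t the rules themselves – the model does the detecting – but the workflow: every conversation gets checked, and only the interesting ones surface for review.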

1. Keeping score, automatically

Machine learning has advanced such that a tool can now analyze thousands of conversations and score them based on set criteria. The amount of time that auto-scoring can save customer service teams is something quality specialists could only have dreamed of a few years ago. 

Automatic scoring helps you do the following (there’s a simplified sketch after this list):

  • Cover 100% of conversations across popular categories, like solution, grammar and tone. 
  • Analyze trends and patterns to understand the big picture of support performance.
  • Remove bias from scores. 
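
As an illustration only, here is auto-scoring stripped down to its bare bones. The category checks below are toy placeholders (a real AutoQA tool scores these categories with language models, not one-line rules), but they show the shape of the idea: every conversation is scored on every category, and the averages roll up into the big picture.

```python
# Simplified, hypothetical sketch of automated scoring across categories.
import statistics

def score_tone(text: str) -> int:
    # Placeholder: penalize all-caps "shouting" as a crude proxy for tone.
    return 0 if text.isupper() else 2

def score_grammar(text: str) -> int:
    # Placeholder: reward replies that end in proper punctuation.
    return 2 if text.strip().endswith((".", "!", "?")) else 1

def score_solution(resolved: bool) -> int:
    return 2 if resolved else 0

def autoscore(conversations):
    """Score every conversation and return per-category averages."""
    results = {"tone": [], "grammar": [], "solution": []}
    for conv in conversations:
        results["tone"].append(score_tone(conv["agent_reply"]))
        results["grammar"].append(score_grammar(conv["agent_reply"]))
        results["solution"].append(score_solution(conv["resolved"]))
    return {category: statistics.mean(scores) for category, scores in results.items()}

# Example: every conversation gets scored, not a 2-5% sample.
print(autoscore([
    {"agent_reply": "Happy to help, your refund is on its way.", "resolved": True},
    {"agent_reply": "PLEASE RESTART THE APP", "resolved": False},
]))
```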

However, automated scoring isn’t about replacing manual reviews. Rather, it improves visibility and lets you understand quality in a way that was previously impossible (short of finding and hiring an army of quality specialists with an appetite for menial work).

The most effective tools use AI to automate scoring across every channel and language. 

As Dr. Mervi Sepp Rei, Head of Machine Learning and Data at Klaus, asserts, it is key to note how scorekeeping fits into real-world support. “At Klaus, we understand that businesses may have specific requirements and nuances that they want to measure within each category. While our solution may not capture every single detail of what a company wants to measure in each category, we’ve designed it to be low-risk and non-penalizing.”

As she explains, “We focus on accurately capturing what is present – what we know how to catch. We do not penalize agents or reduce their scores for something our system may not have accurately captured.”

2. Keeping track with automated assignments

Contact center quality management relies on regular data, which relies on regular reviews. Traditional setups used to require manual upkeep, usually by way of reminders and spreadsheets, to ensure that the groundwork was being fulfilled. 

Improving, automating, or eliminating inefficient processes is currently a priority for 59% of customer service leaders. Automating review assignments is one of the simplest and most effective ways to keep your QA program ticking, and your quality consistently high.

For example, say you have team leads, quality specialists, and agents all conducting reviews. Maybe you want team leads to conduct ten reviews a week, quality specialists thirty, and agents a handful of peer reviews every quarter. Perhaps you also want to direct each group to different types of conversations, each on its own cadence.

This is where a quality assurance tool can help you set up these assignments and track goal completion. You’ll have read time and time again how automation is here to remove the repetitive tasks from our desk(top)s: this is a prime example. 
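
As a rough illustration, assignment rules boil down to a quota and a filter per role. The configuration format, role names, and quotas below are hypothetical, loosely mirroring the example above rather than any specific tool’s setup:

```python
# Illustrative sketch of automated review assignments. The rules, role names,
# and data format are hypothetical, not a specific tool's configuration.
import random

ASSIGNMENT_RULES = {
    # Team leads review low-CSAT conversations, ten per week.
    "team_lead":          {"per_week": 10, "filter": lambda c: c["csat"] is not None and c["csat"] <= 3},
    # Quality specialists review flagged conversations, thirty per week.
    "quality_specialist": {"per_week": 30, "filter": lambda c: c["flagged"]},
    # Agents do a couple of peer reviews on a lighter cadence.
    "agent_peer_review":  {"per_week": 2,  "filter": lambda c: True},
}

def assign_reviews(conversations, reviewers_by_role):
    """Distribute conversations to reviewers according to role-based rules."""
    assignments = []
    for role, rule in ASSIGNMENT_RULES.items():
        pool = [c for c in conversations if rule["filter"](c)]
        random.shuffle(pool)  # avoid always handing out the same tickets first
        for reviewer in reviewers_by_role.get(role, []):
            quota, pool = pool[: rule["per_week"]], pool[rule["per_week"]:]
            assignments.append({"reviewer": reviewer, "role": role, "conversations": quota})
    return assignments

# Example: one team lead and two specialists share this week's assignments.
print(assign_reviews(
    [{"csat": 2, "flagged": True}, {"csat": 5, "flagged": False}, {"csat": None, "flagged": True}],
    {"team_lead": ["Ana"], "quality_specialist": ["Ben", "Caro"]},
))
```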

3. Getting down with data

The final piece of the puzzle is effective data analysis. We live in a data-driven world, where the best decision making is based on facts. 

However, data collection serves only as the initial phase. To genuinely embrace a data-driven approach, leaders must also possess data literacy—a proficiency encompassing the ability to read, analyze, and communicate effectively with data.

Customer service leaders are not data scientists – but with the right software they don’t have to be. Tools that automate data analysis and offer clear dashboards and visualizations are an excellent addition to your CX toolkit. They can help you measure (a quick sketch follows the list):

  • The effect of coaching efforts on agent performance.
  • How different teams and tiers compare across your global support landscape.
  • Internal review data against customer feedback (82% of CX leaders see the value in this).
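
The last point is exactly the kind of comparison a dashboard automates for you. Here is a bare-bones, hypothetical version of it; the field names and sample numbers are made up for illustration:

```python
# Bare-bones sketch: compare internal review scores against customer feedback
# per team. Field names and sample data are hypothetical.
from collections import defaultdict
from statistics import mean

reviews = [
    {"team": "EMEA", "internal_score": 85, "csat": 4.6},
    {"team": "EMEA", "internal_score": 78, "csat": 4.1},
    {"team": "APAC", "internal_score": 92, "csat": 3.8},  # high internal score, low CSAT: worth a look
]

by_team = defaultdict(list)
for r in reviews:
    by_team[r["team"]].append(r)

for team, rows in by_team.items():
    print(team,
          "internal:", round(mean(r["internal_score"] for r in rows), 1),
          "CSAT:", round(mean(r["csat"] for r in rows), 2))
```

A team whose internal scores and customer feedback diverge is precisely the kind of signal you want surfaced automatically rather than discovered by accident.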

Automating your data analysis marks a decisive step towards realizing the full potential of contact center quality assurance in an increasingly complex operational landscape.

Should you QA chatbots? 

Over a third of support teams already make use of chatbot solutions like UltimateGPT to relieve their customer service agents of the more repetitive, simple requests. Chatbot QA is therefore becoming increasingly essential in order to safeguard customer satisfaction.

Just like agent interactions, chatbot conversations can be included in the automated quality assurance process and scored across different categories. Since one of the main benefits of chatbots is their ability to tackle high volumes, manually checking in on bot performance would be an enormous task. If you autoscore these conversations, it is easier to jump in and see where things are going right, or where you need to switch up your chatbot flow.
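
To illustrate what that triage might look like, here is a tiny, hypothetical sketch that groups auto-scored bot conversations by flow and surfaces the weakest flows first. The flow names and scores are invented for the example:

```python
# Tiny sketch of chatbot QA triage: rank flows by average auto-score so the
# weakest flows are reviewed first. Flow names and scores are illustrative.
from collections import defaultdict
from statistics import mean

bot_conversations = [
    {"flow": "order_status", "auto_score": 92},
    {"flow": "order_status", "auto_score": 88},
    {"flow": "returns",      "auto_score": 54},
    {"flow": "returns",      "auto_score": 61},
]

by_flow = defaultdict(list)
for conv in bot_conversations:
    by_flow[conv["flow"]].append(conv["auto_score"])

for flow, scores in sorted(by_flow.items(), key=lambda kv: mean(kv[1])):
    print(f"{flow}: avg auto-score {mean(scores):.0f} across {len(scores)} conversations")
# The low-scoring flow ("returns" here) is the one to revisit first.
```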

How to measure QA automation 

Taking a data-driven approach by documenting and tracking your QA automation journey is pivotal to your success. As Hossam Hassan, Director of CXT Strategy at PartnerHero, explains, “Like anything else, document your work so that you have a record of what you’ve changed, in case you need to revert an automation and go back to the manual process. So you could start with a small segment to see how it does, and then, if it’s successful, continue and document along the way.”

There is no single metric that can clearly indicate whether or not your QA automation is working. That said, there are several methods to gauge how any changes have influenced your QA program:

  • Have your reviewers periodically check in on automated scores to ensure alignment with their own assessments (a simple version of this check is sketched after the list).
  • Measure the time saved by the customer service team in the quality assurance process. For example, Klaus helped Helloprint decrease feedback time by up to 30 minutes. 
  • Track changes in quality metrics over time, comparing IQS, CSAT, and NPS, for example, before and after implementing automations. 
  • Ask your team! Encourage honest feedback about whether or not this is helping them improve what they do and how they do it. 
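
For the first of those checks, the measurement itself can be very simple: pair the automated score and the manual score for the same sampled conversations and look at the gap. The sample data and the 10-point agreement threshold below are hypothetical:

```python
# Hypothetical alignment check between automated and manual scores.
from statistics import mean

paired_scores = [  # (auto score, manual score) for the same conversation, 0-100
    (82, 85), (90, 88), (64, 70), (95, 60),  # the last pair is a large disagreement
]

diffs = [abs(auto - manual) for auto, manual in paired_scores]
agreement_rate = sum(d <= 10 for d in diffs) / len(diffs)

print(f"Mean gap: {mean(diffs):.1f} points, within 10 points on {agreement_rate:.0%} of sampled reviews")
# If agreement drops, revisit the automated criteria before trusting the trend lines.
```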

How to improve your automated quality assurance 

The best strategy for your quality management process is a balance of tasks between AI and specialists. Think of AutoQA as your first line of defense. By automatically examining and evaluating the entirety of your ticket volume, it proves invaluable for the initial performance screening.

After the initial automated assessment, manual reviews come into play. The most complex 2% of cases are the ones that genuinely require an expert, analytical touch.

Klaus’ AutoQA Tips:

  1. Create a manual scorecard and an automated scorecard with different categories. Categories like Grammar are easy to score automatically, while categories like Product Knowledge demand a human reviewer.
  2. Use different conversation filters to make the most out of assignment automation. For example, make sure team leads are checking all conversations with a subpar CSAT rating.
  3. Use dashboards to highlight training needs. This is the easiest way to spot if agent performance is lacking in certain areas.
  4. Automated quality management can also detect the volume of conversations across languages. This is an excellent way to spot when you need to diversify and expand your Knowledge Base into new languages.

Do all of the above (and more) seamlessly with Klaus’ plug-and-play AutoQA solution. Our AI helps you make sure no stone is left unturned when it comes to improving your support across agents, channels, BPOs, and languages.

Discover more automation use cases for your support