
See Through the Buzzwords: How To Evaluate AI Technologies for Your Contact Center


Everywhere you turn you see services that offer artificial intelligence or machine learning. It can feel overwhelming and difficult to understand where to start, but you also know that you need to leverage new technology to improve the productivity of your team and the efficiency of your operations.

    Most of the technology leveraged in call centers today is stuck in the 2000s and does not take advantage of the significant advancement in voice technology that has resulted from major investment in services like Amazon Alexa, Google Home, and Apple’s Siri. While many services today will make abstract claims of AI and ML, most also fall short.

    I’ve been working in voice technology for several years, including time at Amazon Alexa and one of the early voice search companies, Relcy. I’m going to try and provide a quick overview of the state of AI technologies in contact centers and a simple framework on what to look for and what questions to ask. If you find this sparks thoughts or questions, please don’t hesitate to reach out for a discussion.

    AI Touchpoints in Contact Center

To start, we look at four distinct phases: before contact, during contact, immediately after contact, and post contact. While there have been considerable efforts to develop products like chatbots, virtual assistants, and better IVRs, we'll leave the pre-contact phase for a later discussion (spoiler alert: most chatbots will underperform your expectations) and focus on the call and post-call phases.

There are generally two buckets that current solutions fit into: they offer speech to text plus keyword detection (old technology), or they simply do not exist yet.

    Intro to Language Technology

    There are basically three layers to modern voice technology: speech to text, natural language processing, and natural language understanding.

    Speech to text refers to transcribing verbal communication into text. For example, you might ask Alexa, “What is the weather in San Francisco, California?” 

    Natural Language Processing, or NLP, focuses on breaking language into constituent elements. In the example above, the elements might be weather, San Francisco, California.

    The third part, and this is the part where most existing services fall short, is Natural Language Understanding, or NLU. NLU refers to understanding the relationship or meaning in language. For our query above, this looks like – Intent: Weather query, Location: San Francisco, California.
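The three layers can be sketched in a few lines of Python. This is purely illustrative: the function names and structures below are hypothetical stand-ins, not any vendor's actual API, and the speech-to-text step is stubbed out with the weather query above.

```python
# Illustrative sketch of the three layers of voice technology.
# All names and structures here are hypothetical, for explanation only.

def speech_to_text(audio) -> str:
    # Layer 1: an ASR model would transcribe audio; stubbed for illustration.
    return "What is the weather in San Francisco, California?"

def nlp_elements(text: str) -> list:
    # Layer 2: NLP breaks the utterance into constituent elements.
    keywords = {"weather", "san francisco", "california"}
    lowered = text.lower()
    return sorted(k for k in keywords if k in lowered)

def nlu_parse(text: str) -> dict:
    # Layer 3: NLU maps the elements to structured meaning: intent plus slots.
    if "weather" in text.lower():
        return {"intent": "WeatherQuery",
                "location": "San Francisco, California"}
    return {"intent": "Unknown"}

text = speech_to_text(None)
print(nlp_elements(text))  # ['california', 'san francisco', 'weather']
print(nlu_parse(text))     # {'intent': 'WeatherQuery', 'location': 'San Francisco, California'}
```

Note the jump between the last two layers: the elements alone are just words, while the NLU output is something a system can act on.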

Now consider a more complex, multi-part request. What does it mean?

To make it actionable, a modern system needs to parse out the intent, the application, and the relationships between the nouns.

This is where NLU becomes critical. Without NLU, it is very difficult to take appropriate action based on the constituent elements of a phrase alone.

    Health Services vs. Cardiac Treatment

Do you remember the last time you visited a doctor or a hospital, perhaps as a follow-up to a nagging knee injury from a tough spill you took skiing last winter? Did you walk in and ask generically for "health services"? Probably not. You knew you needed an orthopedic surgeon, preferably one experienced in knee injuries.

    In a similar fashion, the first step here is to realize you need to get beyond looking for “AI or ML” and start to ask for the specialty that you need to take your organization to the next level.

    Contact Center AI is Stuck in the 2000s

Most contact center AI technology is stuck in the 2000s: focused on keyword detection alone, it has not progressed to take advantage of the significant advances in NLU and language technology.

    This leaves teams stuck with unactionable word clouds or tables with tallies of word counts.

For example, let's say you want to understand when a customer is calling to cancel their account. Tracking the number of conversations where the word "cancel" appears can serve as a rough proxy, or as an alarm when things change, but it falls short of giving you real intelligence.

    To start, there are several ways to express the desire to cancel your account:

    • I want to cancel my account
    • Can you please cancel this account for me?
    • I want to discontinue my account
    • I no longer want my account
    • Can you discontinue my account?

As you can see, monitoring for the word "cancel" will miss several of these, because the word never appears.

There are also false positives:

    • I don’t want to cancel, but I would like to pause my account.
    • I would never cancel my account.

    Every business has hundreds of these types of scenarios.

    A modern approach will spin up an intent model for cancellation, looking to identify sentences with the intent to cancel irrespective of the word choice used. 
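As a toy illustration of the difference, the sketch below contrasts a keyword trigger with a crude "intent" score based on token overlap with a few example phrases. Everything here is a hypothetical stand-in: a real intent model would use a pre-trained language model rather than word overlap, and unlike this toy it would also handle negations such as "I would never cancel."

```python
# Keyword spotting vs. a toy intent score. The overlap metric below is a
# stand-in for a real intent model, used only to show why exact-word
# matching misses paraphrases.

CANCEL_EXAMPLES = [
    "i want to cancel my account",
    "can you please cancel this account for me",
    "i want to discontinue my account",
]

def keyword_hit(utterance: str) -> bool:
    # 2000s-style detection: fires only when the literal word appears.
    return "cancel" in utterance.lower()

def cancel_intent(utterance: str, threshold: float = 0.5) -> bool:
    # Toy intent score: best token overlap (Jaccard) with example phrases.
    tokens = set(utterance.lower().replace("?", "").replace(".", "").split())
    def overlap(example: str) -> float:
        ex = set(example.split())
        return len(tokens & ex) / len(tokens | ex)
    return max(overlap(e) for e in CANCEL_EXAMPLES) >= threshold

# A paraphrase the keyword trigger misses but the intent score catches:
print(keyword_hit("I no longer want my account"))    # False
print(cancel_intent("I no longer want my account"))  # True
```

Even this toy version catches a paraphrase from a handful of examples; the point of a genuine NLU model is to do the same across thousands of phrasings, including the negation cases above.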

This is all possible because speech to text has become a commodity, with off-the-shelf models achieving error rates around 5-10%. Large, pre-trained neural nets have made it cost effective for NLU models to deliver value on Day 1. Finally, the compute used in the largest AI training runs has been doubling roughly every three to four months.

Together, these advances should deliver QA automation, deep analytics, and more automation for call centers in real time, across all channels. You should not be charged extra for "phonetic boost" or have to constantly tune and maintain a long list of every exact word or phrase permutation you might encounter.

    Getting to the Next Level

    The following are some key components that I think will be important to taking your contact center team to the next level:

• System Review + Prioritization for QA Auditors. QA auditors currently spend valuable time selecting and reviewing calls, and they often choose calls with little meaningful feedback opportunity. You should focus your QA or team leads on reviewing the most important conversations. A semantic engine should scour all contacts and surface conversations that exhibit key characteristics for your organization.
    • Instant Score. QA teams currently score 1-2% of all interactions. Consider a conversation monitoring engine that can automatically score 25-75% of your rubric for every contact.
• Scenarios. Current solutions require the programming of hundreds of words and phrases, plus constant tuning as you identify gaps. Alternatively, you can create a scenario from a few example phrases and train the AI model to perform an action whenever it encounters the same intent in a support conversation.
    • Case Summarization. Support agents often spend a lot of time documenting or summarizing a conversation. You could have a summary automatically generated for the conversation and saved to your case data.
    • Categorization. Teams and agents struggle to create category trees and to properly categorize calls. Now you can automatically tag/group conversations in a consistent manner.
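To make "instant score" concrete, here is a minimal sketch of the idea: each rubric item is paired with an automatic detector, and whatever the detectors can judge is scored on every contact, leaving only the rest for human reviewers. The rubric items and the simple substring checks below are hypothetical placeholders; in a production system each detector would be an intent model of the kind described above.

```python
# Hypothetical sketch of instant scoring: rubric items mapped to detectors.
# Substring checks stand in for real intent models; names are illustrative.

RUBRIC = {
    "greeted_customer":   lambda t: "thank you for calling" in t,
    "verified_identity":  lambda t: "date of birth" in t or "account number" in t,
    "offered_next_steps": lambda t: "follow up" in t or "next step" in t,
}

def instant_score(transcript: str) -> dict:
    # Run every detector against the (lowercased) transcript.
    t = transcript.lower()
    return {item: check(t) for item, check in RUBRIC.items()}

transcript = ("Thank you for calling Acme support. "
              "Can I get your account number? "
              "We'll follow up by email tomorrow.")
print(instant_score(transcript))
# {'greeted_customer': True, 'verified_identity': True, 'offered_next_steps': True}
```

Because this runs on every contact rather than a sampled 1-2%, even a partial rubric scored this way changes what the QA team spends its time on.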

    Questions You Should Ask

• Let's get past the buzzwords. Can we talk specifically about what types of AI your service offers?
• Does your service focus on keyword matching or intent-based models? If the latter, can you walk me through how they are configured?
    • Can you support me across my voice, email, social and chat channels?

    Get a free demo today!

    Your customers will thank you for it!