QA teams grade support conversations on parameters such as agent behavior, compliance with internal processes, response quality, and so on. Grading these conversations is not a one-size-fits-all approach: every organization or team has its own set of questions and grading systems for measuring agent performance and support quality.
Rubric Builder helps QA teams set up their own grading systems to evaluate support conversations. They can create different categories, assign each a weightage (as a percentage), and add questions under each category, scoring agent behavior with a pass/fail system, a bucket system, or a raw scoring system.
In this article, you’ll learn how to create rubrics and use them to grade support conversations.
Creating a rubric
To create a rubric:
- Navigate to Settings > Rubric Builder and click on + Add rubric
- Provide a title and description for your rubric. Make them clear enough for others on your team to understand what the rubric measures.
- The next step is to set up categories. You can either list all your compliance-related questions under one category or spread them across multiple categories. When you create an additional category, you can assign weightage between the categories, distributed as 50/50, 60/40, 30/70, and so on. For example, with a 70/30 split, the questions under the first category would contribute 70% of the QA score, whereas the second category would contribute only 30%.
- After setting up the categories, the next step is to set up the scoring system and the outcome.
- The scoring system can be set to a percentage system or a point-based scoring system.
- Next, choose the kind of outcomes you want from the rubric. You can choose between a pass/fail system, a bucket-based system (you set an outcome based on the percentage of the total score, which is ideal if your scoring system is percentage-based), and a raw scoring system.
- After setting up the categories, scoring system, and outcomes, you’ll set up default answers for your questions, such as Yes or No, True or False, or something else.
- After question settings, you’ll come across Evaluation Settings (more on that in the next paragraph).
- Once you’re done, click Continue.
- In the next section, you’ll see each category as a tab, with the option to add questions and answers under each. You can also create sections and group questions under them.
- After adding questions under each category, click on Save Changes.
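The weightage and bucket-outcome arithmetic described above can be sketched in a few lines. This is an illustrative example only, not Level AI’s implementation; it assumes a percentage-based scoring system with category weightages that sum to 100 and hypothetical bucket labels:

```python
def weighted_score(categories):
    """Compute an overall QA score (0-100).

    categories: list of (weightage, points_earned, points_possible).
    Each category's percentage score is scaled by its weightage,
    so a 70/30 split makes the first category worth 70% of the total.
    """
    total = 0.0
    for weightage, earned, possible in categories:
        total += (earned / possible) * weightage
    return total

def bucket_outcome(score, buckets):
    """Map a score to an outcome label.

    buckets: list of (minimum_percentage, label); the highest
    threshold the score meets determines the outcome.
    """
    for threshold, label in sorted(buckets, reverse=True):
        if score >= threshold:
            return label
    return "Below all buckets"

# Example: two categories with a 70/30 weightage split.
score = weighted_score([(70, 8, 10), (30, 3, 5)])  # 0.8*70 + 0.6*30 = 74.0
outcome = bucket_outcome(score, [(90, "Excellent"), (70, "Good"), (50, "Average")])
```

Here an agent scoring 8/10 in the first category and 3/5 in the second lands at 74%, which falls into the hypothetical "Good" bucket.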
Prefill Default Answers
In the rubric settings, each question has an option to select a default answer. As an admin or QA manager, you can set or change the default answer for each question, and Level AI will prefill the conversation evaluation with the selected choice, saving QA analysts the time of answering every question manually before submitting.
You can set prefilled answers either while creating a new rubric or by editing an existing rubric.
If you’re creating a new rubric, enable the ‘Set as prefilled answer’ radio button below each answer you want prefilled.
To set prefilled answers for an existing rubric, go to Settings > Rubric Builder and click Edit next to the rubric you wish to change. Head to the questions where you want prefilled answers and select the ‘Set as prefilled answer’ radio button below the answer of your choice.
- Prefilled answers are optional; not every question needs one.
- The option to set prefilled answers is not applicable to ‘autoscore-enabled’ questions.
- Answers are prefilled only for new conversations. Changes to the rubric will not show up on previously evaluated or unevaluated autoscored questions.
Enabling rubric for different support channels
After creating a rubric, you can make it live for different support channels (call, chat, and email) by clicking Live for call, Live for chat, or Live for email.
If you’ve enabled Live for chat, the rubric will appear when you view a chat conversation, letting QA teams grade the conversation directly.
Grading support conversations
To grade support conversations based on the rubric:
- Navigate to Interaction History and click on the support conversation you wish to grade.
- On this page, you’ll see all the questions you’ve added to your rubric. You can grade the conversation using the conversation tags, snippets, transcripts, and assists, and you can also add overall comments.
With Rubric Builder, you can:
- Set your own grading system
- Set up categories for agent evaluation
- Customize the questions under each category
- Grade a conversation from the Interaction History screen.
Rubric Builder helps your QA team seamlessly evaluate agent performance, grade agents against your own standards, and share feedback with them to ensure continuous improvement.
Comments
Comments in Level AI serve as a communication medium. QA auditors can add comments under each rubric question to communicate feedback, make suggestions, or explain the reasons for failing an evaluation.
Agents can also add comments. They can use the space to have a conversation with the QA auditor who evaluated the conversation.
Users can edit or delete comments from a QA evaluation, but there are a few things to keep in mind:
- Only users who’ve added a comment can edit or delete it.
- Removing a comment will not delete other replies that are part of a comment thread.