“Your most unhappy customers are your greatest source of learning.”
Customers interact with your business or service every day, and the quality of those interactions can put your brand ahead. Brands that want to stay in the game and advance in the market know they need to listen to their customers continuously to provide services in line with customer expectations.
Using customer data to improve the customer experience helps retain customers, which is more straightforward than acquiring new ones. A happy customer will not only return to your service but is also likely to promote your company through word of mouth.
So how do we achieve this level of customer satisfaction? CSAT, a metric that directly measures customer satisfaction, has become more than just a fad. Ideally, you send CSAT surveys when you want to see how your clients feel about an action your business took or about certain aspects of your products or services.
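As a quick illustration, CSAT is commonly reported as the percentage of respondents who choose a top rating (e.g. 4 or 5 on a 5-point survey scale); a minimal sketch with made-up responses:

```python
# CSAT as the share of "satisfied" responses: ratings of 4 or 5 on a
# 5-point survey scale. The responses below are invented for illustration.
responses = [5, 4, 3, 5, 2, 4, 1, 5]

satisfied = sum(1 for r in responses if r >= 4)
csat_pct = 100 * satisfied / len(responses)
print(csat_pct)  # 62.5
```

The exact cutoff (which ratings count as "satisfied") varies by survey design, so treat the threshold here as an assumption.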
Measuring customer satisfaction using feedback surveys is the starting point, but you can do more with this data to ensure an improved experience. The following case study shows how this can work:
CSAT case study introduction
This case is about TPA (name anonymized), a B2C software company with a global presence. TPA sells video editing software that consumers download through its website. It runs a customer service portal that handles inquiries via phone, email, chat, and other channels; the portal is staffed both in-house and by an outsourced partner, and the in-house team includes a virtual arm. The issues it addresses range from account problems to performance complaints.
TPA’s CSAT saw a sudden drop, and SLA metrics (hold time, turnaround time) increased considerably. The operations leadership team was very concerned and needed to determine what was going on.
Using Aryng’s BADIR Data-to-Decision framework, we were able to quickly find the drivers of TPA’s dropping CSAT scores and recommend actions to address 65% of the CSAT drop.
Let us walk you through how we did it.
Step 1: Identify the business question
TPA needed insights and actions as quickly as possible due to the severe impact on SLAs.
Our first step was to identify the real business questions behind the inquiries around CSAT drop and rising SLA metrics. Using a detailed questioning framework, we arrived at the real business question: What is causing CSAT to drop, and how do we fix the problem?
Step 2: Create an analysis plan
Having identified what questions we needed to answer, we used hypothesis-driven planning to limit the scope of our analysis to the core hypotheses at hand. This allowed us to choose the appropriate data and the correct analysis techniques.
Based on conversations with relevant stakeholders, we first hypothesized the segments where the dip might be occurring and then identified essential hypotheses such as the following:
- Channel: CSAT dip due to chat support issues.
- Region: EMEA is having problems due to recent privacy laws.
- Call Center Type: Outsourced call centers are driving a dip in CSAT due to recent changes in agent profiles.
- Issue Type: CSAT is dropping due to issues with the last product push.
We also identified the critical metrics affected as part of the SLA as:
- First Contact Resolution (FCR)
- Customer Satisfaction (CSAT)
- Wait time
- Turnaround time
Based on these hypotheses and metrics, we determined the appropriate data needed and identified correlation analysis as the suitable methodology for analyzing this data.
Step 3: Data collection
Applying the first two steps of the BADIR method to our case study meant that we were on solid footing to keep our data collection focused on the real business question and the analysis plan we had developed.
We collected data on these segments and success metrics and performed a data audit to ensure a clean dataset.
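A data audit at this stage might look like the sketch below, assuming a pandas ticket extract; the column names (`week`, `channel`, `csat`, `wait_time_min`) are illustrative, not TPA's actual schema:

```python
import pandas as pd

# Hypothetical support-ticket extract; column names and values are
# illustrative only, not TPA's actual data.
df = pd.DataFrame({
    "week":          [1, 1, 2, 2],
    "channel":       ["chat", "phone", "chat", None],
    "region":        ["EMEA", "AMER", "EMEA", "AMER"],
    "csat":          [4.5, 4.2, 3.1, None],
    "wait_time_min": [3, 5, 12, 9],
})

# Data audit: missing values and data type per column.
report = pd.DataFrame({
    "missing": df.isna().sum(),
    "dtype": df.dtypes.astype(str),
})

# Keep only rows usable for the CSAT analysis.
clean = df.dropna(subset=["csat", "channel"])
```

A real audit would also check value ranges (e.g. CSAT within the survey scale) and duplicate tickets, but the shape of the check is the same.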
Step 4: Using CSAT to derive insights
Before jumping into the reasons, we wanted to check whether CSAT was indeed affected and whether there was any impact on the SLAs. Any analysis should follow these three essential steps:
A. Is there a problem?
We checked CSAT and the SLA times over the past four weeks and noticed a significant drop in the CSAT score alongside a significant rise in the average wait time.
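This "is there a problem?" check can be sketched as a simple baseline-versus-recent comparison of weekly aggregates; all numbers below are made up for illustration:

```python
import pandas as pd

# Illustrative weekly aggregates; the figures are invented, not TPA's data.
weekly = pd.DataFrame({
    "week":         [1, 2, 3, 4],
    "csat":         [4.4, 4.3, 3.8, 3.5],   # mean score on a 5-point scale
    "avg_wait_min": [4.0, 4.2, 7.5, 9.1],
})

baseline = weekly[weekly["week"] <= 2]   # earlier, "healthy" weeks
recent = weekly[weekly["week"] >= 3]     # weeks after the reported dip

# How far did CSAT fall, and how much did wait time rise?
csat_drop = baseline["csat"].mean() - recent["csat"].mean()
wait_increase = recent["avg_wait_min"].mean() - baseline["avg_wait_min"].mean()
```

With enough data points, a significance test on the two windows would confirm that the shift is not noise.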
Having confirmed the dip and its impact, we looked for insights using correlation analysis to understand what was causing the drop in CSAT.
B. Where is the problem?
To test the hypotheses established in the analysis plan, we ran bivariate analyses of each segment across the weeks.
The analyses showed that CSAT was dropping across all channels and regions, with no significant difference between segments.
CSAT was also dropping across all call center types, but the drop was most significant in in-house virtual call centers. The ticket counts for in-house call centers are substantial, so this narrowed down one of the problem areas.
We ran the same analysis across issue types and observed that CSAT dropped for “Account Recovery” related issues across all channel types.
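The bivariate segment checks above can be sketched as a pivot of mean CSAT by segment and week; the center labels mirror the hypotheses, and all scores are invented:

```python
import pandas as pd

# Toy ticket-level data; labels and scores are illustrative only.
tickets = pd.DataFrame({
    "week":   [1, 1, 1, 4, 4, 4],
    "center": ["in-house", "outsourced", "in-house virtual",
               "in-house", "outsourced", "in-house virtual"],
    "csat":   [4.4, 4.3, 4.5, 4.0, 4.1, 3.2],
})

# Mean CSAT per call center type per week, then the week-over-week delta.
pivot = tickets.pivot_table(index="center", columns="week", values="csat")
pivot["delta"] = pivot[4] - pivot[1]
worst_segment = pivot["delta"].idxmin()
```

The same pivot, swapping `center` for `channel`, `region`, or `issue_type`, covers each of the hypotheses in turn.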
Next, we wanted to understand the relative influence of each channel and issue type on the CSAT dip, to quantify the impact before making recommendations.
C. What is the impact?
We used the CSAT delta between weeks and the weekly ticket volume across Issue Type and Call Center Type to understand which segments drove the largest share of the dip.
We observed that “Account Recovery” issues accounted for 65% of the CSAT dip, “Upgrade” for another 13%, and “Order Tracking” for another 12%. The highest share of the impact came from the in-house call centers.
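One simple way to quantify each segment's contribution, in the spirit of the delta-times-volume approach described above, is to weight each segment's CSAT change by its ticket volume. The deltas and volumes below are invented so the shares land near the reported figures:

```python
import pandas as pd

# Hypothetical per-issue deltas and volumes; chosen for illustration,
# not TPA's actual numbers.
seg = pd.DataFrame({
    "issue_type": ["Account Recovery", "Upgrade", "Order Tracking", "Other"],
    "csat_delta": [-1.3, -0.5, -0.6, -0.1],   # week-over-week CSAT change
    "volume":     [5000, 2600, 2000, 10000],  # tickets in the recent week
}).set_index("issue_type")

# A segment's contribution to the dip = its delta weighted by its volume,
# expressed as a share of the total weighted drop.
seg["weighted"] = seg["csat_delta"] * seg["volume"]
seg["impact_share"] = seg["weighted"] / seg["weighted"].sum()
```

Note how a small delta on a high-volume segment ("Other" here) can still matter less than a large delta on a mid-volume one.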
Step 5: CSAT-based recommendations
The objective of this exercise was to identify the cause of the recent CSAT drop. The analysis showed that issues related to “Account Recovery” had a significant impact (~65%) on the CSAT dip, of which in-house call centers had the major impact (~34% of overall impact).
Based on the findings, Aryng recommended a deep dive and triage with the in-house call centers, specifically around any recent changes there.
We also ran a Pareto analysis to identify the critical issue types raised by customers. Resolving the issues around “Account Recovery,” “Upgrade” and “Order Tracking,” which are responsible for 90% of the overall CSAT dip, will improve customer satisfaction and reduce SLA-related times.
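A Pareto cut over impact shares can be sketched as a sorted cumulative sum; the “Billing” and “Other” categories and all numbers below are illustrative placeholders:

```python
import pandas as pd

# Impact share of the CSAT dip per issue type (illustrative values).
impact = pd.Series({
    "Account Recovery": 0.65,
    "Upgrade": 0.13,
    "Order Tracking": 0.12,
    "Billing": 0.06,
    "Other": 0.04,
})

# Pareto: sort descending, take the cumulative share, and keep the issue
# types that together explain up to 90% of the dip.
pareto = impact.sort_values(ascending=False).cumsum()
top_issues = pareto[pareto.round(2) <= 0.90].index.tolist()
```

The 90% cutoff is a judgment call; the point of the Pareto view is that a handful of issue types usually dominate the dip.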
Analytics can feel complicated, with massive databases to comb through and CSAT scores that are, at first glance, just numbers. Critical analysis of CSAT uncovers its drivers, revealing both brand strengths and crucial customer pain points.
The Data-to-Decision method (BADIR framework) is a valuable recipe for making impactful decisions through actions based on well-structured analytics. Applied to TPA, the method enabled quick identification of the root issue and directed the leadership team to coordinate with the right team instead of getting lost in an overwhelming amount of data and too many plots.
If you have questions, you can download the detailed whitepaper here.
Piyanka Jain is an internationally acclaimed best-selling author.
Ananth Mohan is a consultant in product analytics at Aryng.