When we evaluate a contact center, we are primarily interested in accessibility indicators (%LCR, ASA, %SL), which characterize how quickly subscribers get through to an agent, sales performance indicators, and quality indicators (CEA, FCR/FLR). Sometimes these are augmented by subscriber satisfaction indicators, CSI or the CSAT/CDSAT pair, but in practice contact centers rarely measure them: clients prefer to save the money. Economics, of course, always matters: Occupancy (the agent's load factor), CSS UTZ (staff utilization rate), and Cost/Revenue per contact. Everything else is essentially secondary and serves to explain why the values above are what they are. So, what should be measured but remains out of the frame?
Actually, it’s quite funny. The industry standard COPC, on which I’m basing this article, is meant to ensure the quality of management, yet it contains not a single (I mean literally “not one”) indicator that reflects the quality of management itself. All of its KPIs describe one form of result or another. Allow me to explain with an example. Let’s say you’ve outsourced an outbound telemarketing project (it could just as well be inbound, or any other kind; I’m illustrating a principle). The contact center provides a current report and attaches historical data since the start of the project. You look it over: everything seems fine, within tolerance. Everybody’s happy. The agents are spinning in their chairs, money’s being churned.
In fact, things over there are not going well. At least, they soon won’t be. If you measure the time from an agent’s admission to independent work until they fully reach their scheduled targets, it has doubled. The effect isn’t felt yet simply because there are few newcomers, and the agents who have already more or less settled in keep “gaining momentum”. The coach who drilled the basic sales skills has resigned, and the new one is weaker: not particularly skilled herself. The process urgently needs intervention, but nobody sees the need; the orchestra plays on as the Titanic sinks. The obvious conclusion: to evaluate a contact center (that is, the quality of its management), you must track the dynamics of reaching target KPI values in every line of work. As soon as that metric sways even slightly downward, sound the air-raid alarm.
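A minimal sketch of this kind of tracking: the records, the 1.5x alert factor, and the two-hire baseline window below are all my own illustrative assumptions, not COPC figures. The idea is simply to compare recent ramp-up times (admission to independent work until targets are first met) against the project's early baseline.

```python
from datetime import date

# Hypothetical ramp-up records: (agent, start of independent work,
# date the agent first met all scheduled KPI targets).
ramp_ups = [
    ("A1", date(2023, 1, 9), date(2023, 2, 6)),   # 28 days
    ("A2", date(2023, 2, 6), date(2023, 3, 10)),  # 32 days
    ("A3", date(2023, 6, 5), date(2023, 8, 7)),   # 63 days
    ("A4", date(2023, 7, 3), date(2023, 9, 4)),   # 63 days
]

def ramp_up_days(records):
    """Days from admission to independent work until targets are met."""
    return [(end - start).days for _, start, end in records]

def ramp_up_alert(records, baseline_n=2, factor=1.5):
    """Alarm when the mean ramp-up time of the most recent hires
    exceeds the early-project baseline by `factor` (an assumed
    threshold chosen for illustration)."""
    days = ramp_up_days(records)
    baseline = sum(days[:baseline_n]) / baseline_n
    recent = sum(days[-baseline_n:]) / baseline_n
    return recent > factor * baseline

print(ramp_up_alert(ramp_ups))  # True: ramp-up time roughly doubled
```

With the sample data, newcomers now need about 63 days instead of about 30, so the alarm fires long before the aggregate KPIs dip.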
Here is another tale: the same story from a different angle, a static view. Consider a sales department of 10 people, consistently meeting its targets, with management rather pleased. Yet the performance of the weakest and the top-selling agents differs by 2.5 times; in other words, a few are working “double shifts”, and achieving the plan relies heavily on three “star” agents. The issue is twofold: first, the team could earn more if needed (there are scenarios where that’s unnecessary, e.g., when production cannot supply the demand); second, it is imperative to build additional interchangeability into the salesforce. None of this is visible from the aggregate sales statistics unless each employee is examined individually. Hence, it is vital to monitor the variance (dispersion) of indicator values within the subjects of interest, such as agent groups. Without it, the inner workings of the outsourcer’s “kitchen” remain a mystery. By the way, a good project manager on the outsourcer’s side monitors target values and variance on his own initiative. And experience shows that the agents of such an Oki-Toki manager adapt quickly to the project, even if, due to the difference in positions, they have little direct communication with him.
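The spread can be summarized with two simple numbers: the best-to-worst ratio from the example above and the coefficient of variation. A sketch, with entirely made-up monthly sales figures for a 10-person team:

```python
from statistics import mean, pstdev

# Hypothetical monthly sales per agent; the team hits its aggregate
# target, but three "stars" carry the plan.
sales = {"a1": 40, "a2": 42, "a3": 45, "a4": 48, "a5": 50,
         "a6": 55, "a7": 60, "a8": 95, "a9": 98, "a10": 100}

def spread_metrics(values):
    """Dispersion summary for a group of agents."""
    vals = list(values)
    return {
        "best_to_worst": max(vals) / min(vals),  # the 2.5x gap from the text
        "cv": pstdev(vals) / mean(vals),         # coefficient of variation
    }

m = spread_metrics(sales.values())
print(m["best_to_worst"])  # 2.5
```

The aggregate total looks healthy either way; only the per-agent dispersion exposes the dependence on the top three.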
Another perspective on contact center assessment, and on what to measure, has to do with time.
Let’s say an ACD is handling an inbound line. Does the project manager know the actual threshold of client tolerance? Tolerance is the willingness to wait for an agent’s answer, and the tolerance threshold is the waiting time after which subscribers begin to hang up en masse. The Service Level indicator doesn’t tell you enough here: it only shows that more calls are being lost because waiting time has grown. But waiting time can also grow for internal reasons, for instance, when an agent is out with Covid-19 and no replacement was found. So a change in %SL may have nothing to do with changes in customer behavior over time, yet that behavior is exactly what needs monitoring. Moreover, the target for %SL is most often set not so much incorrectly as arbitrarily. Naturally, it comes out either too low or too high, and precisely because the client tolerance threshold is not tracked. By the way, this makes a good criterion for evaluating a project manager’s qualifications: “In which project did you study the client’s tolerance threshold?”. The answer usually makes everything clear. For reference: tolerance has a synonym, patience; different sources use one or the other.
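One crude way to estimate that threshold from ACD logs is to look at how long abandoned calls waited before hanging up and find where the hang-ups cluster. A sketch under loud assumptions: the wait times are invented, and a coarse histogram mode stands in for what a real study would do with a survival or hazard curve.

```python
# Hypothetical wait times (seconds) of abandoned calls from ACD logs.
abandon_waits = [12, 15, 18, 20, 22, 25, 40, 42, 44, 45, 46, 47, 48, 50, 52]

def tolerance_threshold(waits, bin_size=10):
    """Estimate where callers begin to hang up en masse: the start of
    the busiest bin of the abandonment-time histogram (a deliberately
    rough proxy for a fitted hazard curve)."""
    bins = {}
    for w in waits:
        b = (w // bin_size) * bin_size
        bins[b] = bins.get(b, 0) + 1
    # On a tie, prefer the earlier bin: the first point of mass hang-ups.
    return max(bins, key=lambda b: (bins[b], -b))

print(tolerance_threshold(abandon_waits))  # 40
```

Here the hang-ups cluster around the 40-second mark, which suggests the %SL target should be argued from that behavior rather than picked arbitrarily.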