Forecasting is where it all starts. Everything that happens in a contact center related to service levels, occupancy, ASA, and abandon rates can be traced back to how accurate or actionable a forecast was. There are a lot of views on how to best measure forecasting accuracy.
The reasons given for these differing views are often the industry the contact center is in, the types of contacts being forecasted, or the availability of good data to feed the forecast.
In reality, most of the core principles apply to all contact centers.
You must move past the “but we’re different because…” trap that so many organizations fall into, and anchor to a common set of principles instead. This makes it easier to benchmark against other organizations and to onboard new talent, because the role becomes more interchangeable.
Consider a target you have probably heard a leader announce: “Our forecast needs to be 98% accurate.” Even as I write this, that statement makes me cringe (as I’m sure it does to you, especially if you are measured against statements like this). Unless you define the threshold (e.g. the forecast was published 90 days out), the forecasted metric (was it calls, workload, FTE?), and the frequency (daily, weekly, monthly?), the statement means nothing.
What this statement is actually saying is that the leader doesn’t feel comfortable that they are getting a quality output and feels the need to set a target to drive improvement.
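To see why the definitions matter, here is a minimal sketch in Python, using made-up daily volumes, of how the same forecast scores very differently depending purely on the interval you measure it at:

```python
# Hypothetical daily call volumes -- illustrative numbers only.
actual   = [1000, 1200,  800, 1100,  900]
forecast = [1150,  950, 1050,  900, 1000]

# Measured day by day, every day misses by a wide margin.
daily_accuracy = [1 - abs(f - a) / a for a, f in zip(actual, forecast)]
print([f"{acc:.0%}" for acc in daily_accuracy])  # ['85%', '79%', '69%', '82%', '89%']

# Measured on the weekly total, the misses offset each other.
weekly_accuracy = 1 - abs(sum(forecast) - sum(actual)) / sum(actual)
print(f"{weekly_accuracy:.0%}")  # 99% -- a headline number hiding daily chaos
```

Same numbers, two wildly different “accuracy” stories. That is exactly why the statement means nothing until the threshold, metric, and frequency are pinned down.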
If I put my head in the freezer and my feet in the oven, then “on average, I feel fine.” Averages can be very misleading. They can make you think you’re doing a great job when your forecast isn’t actually giving the business the information it needs.
Averages are simple to communicate and easy to aggregate, so it’s understandable why people go this route.
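If you want to see the freezer-and-oven effect in numbers, here is a small illustration (Python, with hypothetical interval-level errors) of why standard deviation tells the story the average hides:

```python
from statistics import mean, stdev

# Hypothetical interval-level forecast errors: big over- and
# under-forecasts that happen to cancel out on average.
errors = [0.30, -0.28, 0.25, -0.27, 0.22, -0.22]

print(f"mean error:     {mean(errors):+.1%}")                 # ~0.0% -- "on average, I feel fine"
print(f"mean abs error: {mean(abs(e) for e in errors):.1%}")  # 25.7% -- the real miss
print(f"std deviation:  {stdev(errors):.1%}")                 # 28.3% -- the spread the mean hides
```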
What does it mean to be accurate but not actionable? It means your science and measurements are rock solid, but your audience can’t comprehend them well enough to make business decisions.
This is probably the biggest challenge forecasters face. Your job is a balance between having an accurate forecast and having a useful one. You are better off with a less accurate forecast than with one that isn’t understandable.
Okay. Now that we have all the negativity out of our system, let’s talk about what you should do!
Start with a baseline: simply take your historical actual data and use it to project the future. This is clean, simple, and tells you what your path is likely to be with no changes.
This also gives you something to track back to. As you start layering in future changes (e.g. new clients, changes in productivity, other business intelligence), it’s very easy to lose sight of how each of these inputs changes the forecast. To do a proper variance analysis, you have to be able to trace each variance back to the intelligence and variables used to forecast it.
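One simple way to keep that traceability, sketched below in Python with made-up adjustment values, is to leave the baseline untouched and apply every piece of business intelligence as a separate, named layer:

```python
from statistics import mean

# Hypothetical weekly actuals -- the baseline comes straight from history.
weekly_history = [4800, 5100, 4950, 5200, 5050]
baseline = mean(weekly_history)  # naive "no changes" projection: 5,020

# Each piece of intelligence is a named layer, never baked into the baseline.
adjustments = {
    "new_client_launch": +400,        # assumption supplied by sales
    "self_service_deflection": -250,  # assumption supplied by product
}

forecast = baseline + sum(adjustments.values())
print(f"baseline: {baseline:,.0f}")
for name, delta in adjustments.items():
    print(f"{name}: {delta:+,}")
print(f"forecast: {forecast:,.0f}")  # 5,170
```

When actuals arrive, compare them to both the baseline and the final forecast: any gap between those two views is attributable to a specific named assumption, which is exactly what a proper variance analysis needs.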
Sorry, Excel just isn’t good enough anymore. Yes, it’s readily available, and you know how to manipulate the data. You probably have tons of workbooks all linked together in your treasure trove of spreadsheets.
Even if Excel can technically get you through the day, as contact centers (and customer preferences for how they contact you) evolve, you are going to have a lot of new inputs that can’t be modeled the same way as call volumes. Additionally, Excel makes it incredibly difficult to communicate your results in a simple, clear way.
When you see maps tracking a hurricane, the projections get wider as the timeframe progresses. They know where it is today. Forecasters have a smaller margin of error when projecting where it will be tomorrow, and that “cone of uncertainty” grows as you project multiple days out.
It is completely reasonable that a forecast you produce 90 days out (perhaps for hiring decisions) is less accurate than the one you produce 30 days out (for making staffing adjustments). Be open and transparent about this reality. This will buy you credibility, and it starts to move people away from thinking about forecasting as one accuracy number (remember the “98% accuracy” mentioned above?).
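In practice, that means scoring each forecast snapshot by its lead time rather than blending everything into one number. A minimal sketch, with hypothetical values:

```python
# Hypothetical forecasts for the same week, published at different lead times.
snapshots = {90: 5400, 60: 5250, 30: 5150}  # days out -> forecasted volume
actual = 5080                               # what the week really delivered

for lead in sorted(snapshots, reverse=True):
    error = abs(snapshots[lead] - actual) / actual
    print(f"{lead:>2}-day forecast: {error:.1%} error")
# 90-day: 6.3%, 60-day: 3.3%, 30-day: 1.4% -- the cone narrowing as expected
```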
Once you have a forecast, it’s important to socialize it with members of your operations or leadership group. Get their thoughts and feedback. Forecasting is a very collaborative process that is based on both science and art. Science is the easy part, because it’s process and measures. Art is the challenging part, because it’s adding in subjective aspects, such as someone’s opinion about whether customers may contact you more frequently due to a product change.
Bring these people into the process and actively engage them (and let them take some accountability with you!). These relationships can also help poke holes in your methodologies or outcomes in a safe, constructive environment. If the first time they see the forecast is in a large group with their leadership, they are much more likely to challenge or attack the results. If they’ve already seen them, even if they don’t like the numbers, you know what to expect and you can highlight that the concern has been raised and is being (or has been) addressed. Never catch people off-guard in a public forum with your forecast.
Look at your forecasting process today. Is it one-way communication or collaborative? Do you rely heavily on Excel, or do you have a forecasting tool to support you? Are you measuring your variance with a simple average, or with standard deviation? How open is your environment – both technically and operationally – to making changes? Talk to your leadership about how you can make your forecasting more accurate. Most importantly, start challenging the status quo.