
In a previous blog post, I looked at how the changing global business environment has created uncertainty, which means that a traditional annual budget no longer represents the future with any great confidence. Volatility, and the need to re-forecast or re-plan in the light of external influences, has become the new normal. In this environment, the ability to undertake ‘what if’ analysis based on a good understanding of history is ever more necessary for making informed decisions when setting future goals and plans. So how do you go about doing that?
The foundation: a single trusted source of data
Enterprise business intelligence solutions deliver critical intelligence and consistent information, drawn from governed data and presented in an easily consumable form. But in many organisations, finance data for budgeting and for analytics ends up in two different databases, delivered through different ETL processes and accessed by users through two or more different tools.
Understanding finance data is an essential part of the picture. A data warehouse meets the need to mix and match finance and operational data, and can thus provide an excellent rear-view mirror: we know where we are and how we got here. It gives us trusted history and deep insight into what happened. A data warehouse provides a single trusted source of data for reporting and analytics AND for planning and forecasting. It offers a holistic view of past performance, allowing us to understand cause and effect across all of an organisation’s data, rather than just part of the overall picture. A 360° view, if you will.
“Consider the past and you shall know the future”
This Chinese proverb, credited to Confucius (551–479 BC), reminds us that the best guide to our future is to learn from our past.
Reporting and analytics on the one hand, and planning and forecasting on the other, are two sides of the same coin: one delivers history, the other looks to the future with the advantage of insight into the past. A deep and rich store of historical data makes predictive analytics practical for planning and forecasting.
Statistical forecasting and R
The use of statistical algorithms for forecasting is not new. Time-series forecasting, which uses historical data periods to calculate likely values for future periods, has been around since the 1950s. Wider adoption of predictive analytics in budgeting, planning and forecasting has been inhibited by a lack of familiarity with the apparently endless options available. This is compounded by the historical need to “code” the underlying maths into whichever system is being used, be that a spreadsheet or a vendor-supplied solution.
The R Project for Statistical Computing - a free software environment for statistical computing and graphics - has brought statistical forecasting into the mainstream. It is also, incidentally, an NZ success story: R was first created by Ross Ihaka and Robert Gentleman at the University of Auckland in 1993. Since then the project leadership has grown to include more than 20 leading statisticians and computer scientists from around the world, and thousands of others have contributed add-on "packages" for use by the 2 million R users worldwide. There is a strong and vibrant community of R users online, with a rich set of community-maintained resources.
But even with tools like R, the question of which technique to use can still present a significant barrier to deploying statistical forecasting algorithms in financial budgeting, forecasting and planning. Backcasting can help. It entails taking (typically) 48 months of historical data, running multiple candidate algorithms over the first 36 months, and then comparing each algorithm’s predictions with the actuals for months 37 – 48. We can then pick the algorithm that most closely matched what actually happened.
For example, we can extract from the data warehouse the last 4 years (48 months) of actual sales of a product or a product family. We know with certainty that we sold exactly x units of our best-selling widget in each month. Using the approach outlined above, we can use the first 3 years of data to produce predictions with a number of algorithms, and then measure how accurate each algorithm is against last year’s actuals. Once we know which algorithm performs best, we can re-run it against the full 3 or 4 years of data to predict how many widgets we are likely to sell next year.
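As a minimal sketch of that end-to-end flow, here is what backcasting might look like in R using the widely used forecast package. The 48 months of sales are simulated here purely for illustration (in practice they would come from the data warehouse), and the two candidate algorithms shown - exponential smoothing and auto-selected ARIMA - are just two of the many you could try.

library(forecast)

# Simulated stand-in for 48 months of actuals from the data warehouse
set.seed(42)
sales <- ts(100 + 10 * sin(2 * pi * (1:48) / 12) + rnorm(48, sd = 5),
            start = c(2021, 1), frequency = 12)

# Train on the first 36 months; hold back months 37-48 as the test set
train <- window(sales, end = c(2023, 12))
test  <- window(sales, start = c(2024, 1))

# Candidate algorithms: exponential smoothing and auto-selected ARIMA
candidates <- list(
  ets   = forecast(ets(train), h = 12),
  arima = forecast(auto.arima(train), h = 12)
)

# Compare each forecast with what actually happened (test-set MAPE)
mape <- sapply(candidates, function(f) accuracy(f, test)["Test set", "MAPE"])
best <- names(which.min(mape))

# Re-run the winning algorithm over the full 48 months to predict next year
refit <- switch(best, ets = ets(sales), arima = auto.arima(sales))
next_year <- forecast(refit, h = 12)
plot(next_year)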
This approach is tool-independent: it can be used in spreadsheets, in BI tools that provide statistical capability, or within your database of choice. The important thing is to use historical data that has integrity to assist you in predicting the future.
Statistical forecasting is not restricted to financial data. Many freely available sources of statistical data can be fed into predictive analytics models (examples below).
Using the same approach we apply to internal data in predictive financial planning, we can derive predictions about external influences. Where appropriate, these can be used to inform planning contributors as they make judgement calls on the most likely future outcomes.
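As a hedged sketch only: suppose we have a monthly external indicator (a hypothetical consumer confidence index, say) alongside our sales history. One well-established way to let such a series inform a forecast is regression with ARIMA errors, available through the xreg argument in the same forecast package:

library(forecast)

# Both series are simulated for illustration; in practice sales would come
# from the data warehouse and the indicator from an external data source
set.seed(1)
sales     <- ts(100 + cumsum(rnorm(48)), start = c(2021, 1), frequency = 12)
indicator <- ts(50 + cumsum(rnorm(48)), start = c(2021, 1), frequency = 12)

# Regression with ARIMA errors: the external indicator acts as a regressor
fit <- auto.arima(sales, xreg = as.numeric(indicator))

# Forecasting next year requires next year's indicator values, which can
# themselves be forecast (and backcast-validated) in the same way
future_indicator <- as.numeric(forecast(auto.arima(indicator), h = 12)$mean)
next_year <- forecast(fit, xreg = future_indicator)
plot(next_year)

Whether such a model is appropriate is itself a judgement call; often the external forecast is simply presented to planning contributors as context for their own assumptions.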
Budgeting, forecasting and planning can and should be part of the DNA of every organisation, supported by systems that let technology do the heavy lifting. This frees finance and operational management to focus their time and judgement on the decisions that will most benefit from their attention.