The importance of interpretability for effective demand planning in the age of machine learning

From black box to glass box

⏱️ 4-minute read 

 

A snowy predicament: how machine learning models can learn the wrong things

Imagine a machine learning model that is supposed to tell the difference between a wolf and a husky but has instead learned to recognize snow. This might sound like a joke, but it is a real-world example of the limitations of current machine learning models. Researchers found that a classifier trained on inadequate data misclassified images of wolves and huskies because it had learned to use the snow in the background as its main distinguishing feature. The key message: for demand planners to trust a machine learning model, we need a way of asking the model how it arrived at its predictions.

 

From wolves and huskies to machine learning in demand planning

In the field of supply chain demand planning, we are in the middle of a shift towards the use of machine learning models. At EyeOn we often refer to this as driver-based forecasting. These models offer great potential to provide more accurate and less biased forecasts by incorporating external drivers. However, the increased use of machine learning also brings challenges. One of the main challenges is that these models are often considered black boxes, making it difficult for demand planners to understand how the model arrived at its predictions.

To truly reap the benefits of driver-based forecasting, demand planners need to effectively work together with these machine learning models. And to do this, trust is key. But how do we establish trust in a model that we can’t fully understand?

Explainability addresses this challenge by giving demand planners a clearer understanding of the predictions and decision-making processes of machine learning models. This transparency helps build trust, because practitioners can see how the model arrived at its predictions. With that insight, planners can collaborate with the model more effectively: trusting it when it is correct and recognizing when it might be off. Additionally, explainability can lead to a deeper understanding of the underlying business dynamics and challenge previously held assumptions.

 

From theory to practice: explaining the demand forecast

Let’s bring this to life with a concrete example. Imagine you are a demand planner working with machine learning models to generate demand forecasts for your portfolio. You have a good understanding of the drivers that influence demand for your products, including the impact of promotions. However, it can be challenging to get a clear picture of how the different drivers interact and affect your forecasts. To validate your understanding, gain trust in the model, and make better-informed decisions, two visualizations can be of great help.

First, let’s say you have a demand forecast of 73.64 for a specific product in a certain time period. The ‘waterfall plot’ shows how the model arrived at this number. The plot starts from the baseline forecast; each row then shows the positive (red) or negative (blue) impact of each driver (or feature) on the forecast. In this example, we can see that the forecast is higher than the baseline due to the presence of a promotion for this product and a high number of open orders for the same period.

[Figure: waterfall plot showing each driver’s contribution to a single forecast]
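To make this concrete, below is a minimal sketch of how such a waterfall explanation can be produced. The article does not name the tooling, so the use of the open-source SHAP library, the toy drivers (promotion, open orders, price index, month), and the synthetic data are assumptions for illustration, not EyeOn’s actual pipeline.

```python
# Minimal sketch, assuming the SHAP library and hypothetical demand drivers;
# this is not the article's actual model or data.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 500

# Hypothetical demand drivers: promotion flag, open orders, price index, month
X = pd.DataFrame({
    "promotion": rng.integers(0, 2, n),
    "open_orders": rng.poisson(20, n),
    "price_index": rng.normal(1.0, 0.1, n),
    "month": rng.integers(1, 13, n),
})
# Synthetic demand: a baseline plus promotion and open-order effects
y = 50 + 15 * X["promotion"] + 0.8 * X["open_orders"] + rng.normal(0, 3, n)

# Train a driver-based forecasting model (any regressor would do for the sketch)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Compute SHAP values: each value is one driver's contribution to one forecast
explainer = shap.Explainer(model, X)
shap_values = explainer(X)

# Waterfall plot for a single forecast: starts at the baseline (expected value)
# and adds each driver's positive (red) or negative (blue) contribution
shap.plots.waterfall(shap_values[0])
```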

Secondly, for a higher-level summary of the impact of each driver, the ‘beeswarm plot’ is a useful tool. It displays the impact of the top drivers on the model’s forecasts in a compact and clear format. Each forecast for a product-time period combination is represented by a dot on each feature row. The dot’s position is determined by the driver’s impact, and the dots pile up along each feature row to show density. Colour is used to show the original value of the driver. It becomes evident that the presence of a promotion (shown in red) has a positive effect on the forecast, whereas its absence (shown in blue) has a negative effect. Likewise, the number of open orders has a direct impact on the forecast, with a higher number generally resulting in a more positive influence.

[Figure: beeswarm plot summarizing the impact of the top drivers across all forecasts]
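The beeswarm summary can be drawn from the same explanation object. The snippet below continues the sketch above (it assumes the `shap_values` computed there) and again relies on the SHAP library as an assumed tool.

```python
# Continuing the sketch above (requires the `shap_values` computed there).
# Each dot is one product/period forecast; its horizontal position is the
# driver's impact on that forecast, and its colour is the driver's own value
# (red = high, e.g. promotion present; blue = low, e.g. no promotion).
shap.plots.beeswarm(shap_values, max_display=10)
```

The same two views can be built on top of any per-driver attribution method; the key point is that the planner can always trace a forecast back to its drivers.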

 

Conclusion on using machine learning in demand planning

Machine learning models are rapidly becoming a crucial tool in supply chain demand forecasting. However, their black box nature often makes it challenging for demand planners to understand and trust the predictions these models make. That’s why explainability is becoming increasingly important: it gives planners transparency into the models’ decision-making processes and helps build trust in the results. Explainability can act as a catalyst for change in the supply chain planning and forecasting process, as it allows practitioners to challenge business assumptions, learn about underlying business dynamics, and ultimately make better-informed decisions.

 

Are you interested in learning more about driver-based forecasting?

Willem Gerbecks, business consultant at EyeOn
Rijk van der Meulen, data scientist at EyeOn

Please contact us to see how driver-based forecasting can be of added value to your organization! Reach out to Rijk van der Meulen or Willem Gerbecks.

 
