# Automotive OEM Demand Forecasting: A Comparative Study of Forecasting Algorithms and Strategies


## Abstract


## Featured Application

**The outcomes of this work can be applied to B2B discrete demand forecasting in the automotive industry and likely generalized to other demand forecasting domains.**


## 1. Introduction

R^{2}-adjusted (R^{2}adj) metrics. We compute the uncertainty ranges for each forecast and assess whether differences between forecasts are statistically significant by performing a Wilcoxon paired rank test [14]. Finally, we analyze the proportion of products with forecasting errors below certain thresholds and the proportion of forecasts that result in under-estimates.
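The Wilcoxon paired rank test ranks the absolute differences between paired forecast errors and compares the rank sums of positive and negative differences. A minimal pure-Python sketch of the rank-sum computation (illustrative only; in practice a library routine such as `scipy.stats.wilcoxon` also supplies the p-value):

```python
# Sketch of the paired Wilcoxon signed-rank statistic used to decide whether
# two forecasts' errors on the same products differ significantly.

def wilcoxon_signed_rank(errors_a, errors_b):
    """Return (W+, W-) rank sums for paired samples, ignoring zero differences."""
    diffs = [a - b for a, b in zip(errors_a, errors_b) if a != b]
    ranked = sorted((abs(d), d) for d in diffs)
    # Assign average ranks to tied absolute differences.
    ranks = {}
    i = 0
    while i < len(ranked):
        j = i
        while j < len(ranked) and ranked[j][0] == ranked[i][0]:
            j += 1
        avg = (i + 1 + j) / 2  # average of ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = avg
        i = j
    w_plus = sum(ranks[k] for k, (_, d) in enumerate(ranked) if d > 0)
    w_minus = sum(ranks[k] for k, (_, d) in enumerate(ranked) if d < 0)
    return w_plus, w_minus
```

The smaller of the two rank sums is compared against the critical value (or its normal approximation) to decide significance.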

## 2. Related Work

R^{2}, they achieved the best results with ANFIS models tuned with genetic algorithms, compared to ANFIS and ANN models without any tuning. Ref. [30] compared custom deep learning models trained on real-world product data provided by a worldwide automotive original equipment manufacturer (OEM). Ref. [31] developed a long short-term memory (LSTM) model based on car-parts sales data in Norway and compared it against Simple Exponential Smoothing, Croston, the Syntetos–Boylan Approximation (SBA), Teunter–Syntetos–Babai, and Modified SBA. The best results were obtained with the LSTM model when comparing the models' mean error (ME) and MSE. Ref. [22] developed three models (the autoregressive moving average (ARMA), the Vector Autoregression (VAR) model, and the Vector Error Correction Model (VECM)) to forecast automobile sales in China. The models were compared based on their performance measured with the RMSE and MAPE metrics, with the best results obtained by the VECM model. The VECM model was also applied by [20] to forecast car demand for the state of Sarawak in Malaysia. Finally, Ref. [32] compared forecasts obtained from different moving average (MA) algorithms (simple MA, weighted MA, and exponential MA) applied to production and sales data from the Gabungan Industri Kendaraan Bermotor Indonesia. Considering the Mean Absolute Deviation, the best forecasts were obtained with the exponential MA.

## 3. Methodology

#### 3.1. Business Understanding

#### 3.2. Data Understanding

#### 3.3. Data Preparation

#### 3.3.1. Feature Creation

#### 3.3.2. Feature Selection

#### 3.4. Modeling

#### Feature Analysis and Prediction Techniques

#### 3.5. Evaluation

R^{2} metrics to measure the performance of demand forecasting models in the automotive industry. While ME, MSE, and RMSE are widely adopted, they all depend on the magnitude of the predicted and observed demands and thus cannot be used to compare groups of products with different demand magnitudes. This issue can be overcome with the MAPE or CAPE metrics, though MAPE puts a heavier penalty on negative errors, preferring low forecasts, an undesired property in demand forecasting. Though R^{2} is magnitude agnostic, it has been noticed that its value can increase when new features are added to the model [66].

R^{2}adj. MASE reports the ratio between the MAE of the forecast values and the MAE of the naïve forecast; it is magnitude agnostic and not prone to distortions. R^{2}adj reports how well predictions fit the target values. In addition, it accounts for the number of features used to make the prediction, preferring succinct models that use fewer features for the same forecasting performance.
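The two headline metrics can be sketched as follows (a minimal sketch; the naïve reference forecast is assumed to be the in-sample lag-1 forecast, and all names are ours rather than the paper's):

```python
# MASE: forecast MAE scaled by the in-sample MAE of the naive (lag-1) forecast.
def mase(y_true, y_pred, y_train):
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
    naive = sum(abs(a - b) for a, b in zip(y_train[1:], y_train[:-1])) / (len(y_train) - 1)
    return mae / naive

# R^2 adjusted: penalizes R^2 by the number of features used by the model.
def r2_adjusted(y_true, y_pred, n_features):
    n = len(y_true)
    mean = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    r2 = 1 - ss_res / ss_tot
    return 1 - (1 - r2) * (n - 1) / (n - n_features - 1)
```

Both quantities are ratios, so they can be compared across products regardless of demand magnitude, which is what motivates their use here.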

## 4. Experiments and Results

R^{2}adj and MASE. Therefore, we consider that Experiment 3 performed best, having the best MASE and R^{2}adj values, while the rest of the evaluation criteria values remained acceptable.

Although R^{2}adj was lower, and the under-estimates ratio higher, compared to results in previous experiments, the median MASE values decreased by more than 40%. Models based on the median of past demand had the best results in most aspects, including the proportion of forecasts with more than 90% error. Encouraged by these results, we conducted Experiments 7–8, preserving the grouping criteria but adapting the number of features to the amount of data available in each sub-group. In Experiment 7, we grouped products based on the magnitude of the median of past demand, whereas in Experiment 8, we grouped products based on demand type. In both cases, we observed that R^{2}adj values and under-estimates ratios improved, while MASE values remained low. We consider that the best results were obtained in Experiment 7, which achieved the best values in all evaluation criteria except MASE. We ranked the models of these two experiments by R^{2}adj and took the top three: SVR, voting, and stacking models for Experiment 7, and SVR, voting, and RFR models for Experiment 8. The models from Experiment 8 exhibited lower MASE in all cases, a better under-estimates ratio, and a better proportion of forecasts with an error ratio above 90%. The top 3 models from Experiment 8 also remained competitive regarding R^{2}adj and the proportion of forecasts with an error ratio bounded to 30% or less.

R^{2}adj and MASE. However, the difference was significant between the voting models of the two groups for these two metrics. The difference was also significant between the voting model from Experiment 7 and the RFR model from Experiment 8 for the MASE metric. Considering the proportion of forecasts with errors lower than 30%, we observed no differences between the two groups' models, though the difference between the SVR and voting models in Experiment 8 was significant. Finally, differences in the number of under-estimates were statistically significant between all top three models from Experiment 7 and the SVR and RFR models from Experiment 8. For this particular performance aspect, the stacking model from Experiment 7 only achieves significance against the voting model from Experiment 8.

While their R^{2}adj was lower than that of the top 3 models from Experiment 8, the ML streaming models achieved the best MASE in Experiment 10. They also had among the best proportions of predictions with less than 5%, 10%, 20%, and 30% error, or more than 90% error. However, the proportion of under-estimates, a parameter of crucial importance in our use case, hindered these results: the ML streaming models had among the highest proportions of under-estimates of all the forecasting models we created. The highest proportion of under-estimates was obtained by the ML streaming models based on the Hoeffding inequality, reaching a median of under-estimates above 70%.

R^{2}adj was consistently low for the statistical models, and though their MASE was better than that of the baseline models, the ML models performed better. When assessing the ratio of forecasts with less than 30% error, the ML models also displayed better performance, and we observed the same when analyzing the under-estimates ratio. Even though the random walk had a low under-estimates ratio, the rest of the metrics indicate that the random walk model provides poor forecasts. We consider the best overall performers to be the SVR, RFR, and GBRT models, which achieved near-human performance in almost every aspect considered in this research. Even though their differences regarding R^{2}adj, MASE, and the ratio of forecasts with less than 30% error are not statistically significant in most cases, they display statistically significant differences when analyzing under-estimates.

## 5. Conclusions

R^{2}adj, MASE, the ratio of forecasts with less than 30% error, and the ratio of forecasts with under-estimates), all of them magnitude agnostic. These metrics and criteria make results comparable regardless of the underlying data. We also assess the statistical significance of the results, something missing in most of the related literature.

R^{2}adj and a better bound on high forecast errors. However, these differences were not always statistically significant.

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Conflicts of Interest

## Abbreviations

Abbreviation | Meaning |
---|---|
ADI | Average Demand Interval |
ANFIS | Adaptive Network-based Fuzzy Inference System |
ANN | Artificial Neural Network |
ARFR | Adaptive Random Forest Regressor |
ARIMA | Autoregressive Integrated Moving Average |
ARMA | Autoregressive Moving Average |
CAPE | Cumulative Absolute Percentage Errors |
CRISP-DM | CRoss-Industry Standard Process for Data Mining |
CV | Coefficient of Variation |
DTR | Decision Tree Regressor |
GBRT | Gradient Boosted Regression Trees |
GDP | Gross Domestic Product |
HATR | Hoeffding Adaptive Tree Regressor |
HTR | Hoeffding Tree Regressor |
KNNR | K-Nearest-Neighbor Regressor |
MA | Moving Average |
MAE | Mean Absolute Error |
MAPE | Mean Absolute Percentage Error |
MASE | Mean Absolute Scaled Error |
ME | Mean Error |
ML | Machine Learning |
MLPR | Multi-Layer Perceptron Regressor |
MLR | Multiple Linear Regression |
MSE | Mean Squared Error |
OEM | Original Equipment Manufacturer |
PMI | Purchasing Managers’ Index |
R^{2} | Coefficient of determination |
R^{2}adj | Coefficient of determination, adjusted |
RFR | Random Forest Regressor |
RMSE | Root Mean Squared Error |
SBA | Syntetos–Boylan Approximation |
SVM | Support Vector Machine |
SVR | Support Vector Regressor |
UE | Under-estimates |
VAR | Vector Autoregression |
VECM | Vector Error Correction Model |

## References

- Cambridge University Press. Cambridge Learner’s Dictionary with CD-ROM; Cambridge University Press: Cambridge, UK, 2007. [Google Scholar]
- Wei, W.; Guimarães, L.; Amorim, P.; Almada-Lobo, B. Tactical production and distribution planning with dependency issues on the production process. Omega
**2017**, 67, 99–114. [Google Scholar] [CrossRef][Green Version] - Lee, H.L.; Padmanabhan, V.; Whang, S. The bullwhip effect in supply chains. Sloan Manag. Rev.
**1997**, 38, 93–102. [Google Scholar] [CrossRef] - Bhattacharya, R.; Bandyopadhyay, S. A review of the causes of bullwhip effect in a supply chain. Int. J. Adv. Manuf. Technol.
**2011**, 54, 1245–1261. [Google Scholar] [CrossRef] - Brühl, B.; Hülsmann, M.; Borscheid, D.; Friedrich, C.M.; Reith, D. A sales forecast model for the german automobile market based on time series analysis and data mining methods. In Industrial Conference on Data Mining; Springer: Berlin/Heidelberg, Germany, 2009; pp. 146–160. [Google Scholar]
- De Almeida, M.M.K.; Marins, F.A.S.; Salgado, A.M.P.; Santos, F.C.A.; da Silva, S.L. Mitigation of the bullwhip effect considering trust and collaboration in supply chain management: A literature review. Int. J. Adv. Manuf. Technol.
**2015**, 77, 495–513. [Google Scholar] [CrossRef] - Dwaikat, N.Y.; Money, A.H.; Behashti, H.M.; Salehi-Sangari, E. How does information sharing affect first-tier suppliers’ flexibility? Evidence from the automotive industry in Sweden. Prod. Plan. Control.
**2018**, 29, 289–300. [Google Scholar] [CrossRef] - Martinsson, T.; Sjöqvist, E. Causes and Effects of Poor Demand Forecast Accuracy A Case Study in the Swedish Automotive Industry. Master’s Thesis, Chalmers University of Technology/Department of Technology Management and Economics, Gothenburg, Sweden, 2019. [Google Scholar]
- Ramanathan, U.; Ramanathan, R. Sustainable Supply Chains: Strategies, Issues, and Models; Springer: New York, NY, USA, 2020. [Google Scholar]
- Gutierrez, R.S.; Solis, A.O.; Mukhopadhyay, S. Lumpy demand forecasting using neural networks. Int. J. Prod. Econ.
**2008**, 111, 409–420. [Google Scholar] [CrossRef] - Lolli, F.; Gamberini, R.; Regattieri, A.; Balugani, E.; Gatos, T.; Gucci, S. Single-hidden layer neural networks for forecasting intermittent demand. Int. J. Prod. Econ.
**2017**, 183, 116–128. [Google Scholar] [CrossRef] - Syntetos, A.A.; Boylan, J.E.; Croston, J. On the categorization of demand patterns. J. Oper. Res. Soc.
**2005**, 56, 495–503. [Google Scholar] [CrossRef] - Hyndman, R.J. Another look at forecast-accuracy metrics for intermittent demand. Foresight Int. J. Appl. Forecast.
**2006**, 4, 43–46. [Google Scholar] - Wilcoxon, F. Individual comparisons by ranking methods. In Breakthroughs in Statistics; Springer: New York, NY, USA, 1992; pp. 196–202. [Google Scholar]
- Williams, T. Stock control with sporadic and slow-moving demand. J. Oper. Res. Soc.
**1984**, 35, 939–948. [Google Scholar] [CrossRef] - Johnston, F.; Boylan, J.E. Forecasting for items with intermittent demand. J. Oper. Res. Soc.
**1996**, 47, 113–121. [Google Scholar] [CrossRef] - Dargay, J.; Gately, D. Income’s effect on car and vehicle ownership, worldwide: 1960–2015. Transp. Res. Part Policy Pract.
**1999**, 33, 101–138. [Google Scholar] [CrossRef] - Wang, F.K.; Chang, K.K.; Tzeng, C.W. Using adaptive network-based fuzzy inference system to forecast automobile sales. Expert Syst. Appl.
**2011**, 38, 10587–10593. [Google Scholar] [CrossRef] - Vahabi, A.; Hosseininia, S.S.; Alborzi, M. A Sales Forecasting Model in Automotive Industry using Adaptive Neuro-Fuzzy Inference System (Anfis) and Genetic Algorithm (GA). Management
**2016**, 1, 2. [Google Scholar] [CrossRef][Green Version] - Ubaidillah, N.Z. A study of car demand and its interdependency in sarawak. Int. J. Bus. Soc.
**2020**, 21, 997–1011. [Google Scholar] [CrossRef] - Sharma, R.; Sinha, A.K. Sales forecast of an automobile industry. Int. J. Comput. Appl.
**2012**, 53, 25–28. [Google Scholar] [CrossRef] - Gao, J.; Xie, Y.; Cui, X.; Yu, H.; Gu, F. Chinese automobile sales forecasting using economic indicators and typical domestic brand automobile sales data: A method based on econometric model. Adv. Mech. Eng.
**2018**, 10, 1687814017749325. [Google Scholar] [CrossRef][Green Version] - Kwan, H.W. On the Demand Distributions of Slow-Moving Items. Ph.D. Thesis, University of Lancaster, Lancaster, UK, 1991. [Google Scholar]
- Eaves, A.H.C. Forecasting for the Ordering and Stock-Holding of Consumable Spare Parts. Ph.D. Thesis, Lancaster University, Lancaster, UK, 2002. [Google Scholar]
- Syntetos, A.A.; Babai, M.Z.; Altay, N. On the demand distributions of spare parts. Int. J. Prod. Res.
**2012**, 50, 2101–2117. [Google Scholar] [CrossRef][Green Version] - Lengu, D.; Syntetos, A.A.; Babai, M.Z. Spare parts management: Linking distributional assumptions to demand classification. Eur. J. Oper. Res.
**2014**, 235, 624–635. [Google Scholar] [CrossRef] - Dwivedi, A.; Niranjan, M.; Sahu, K. A business intelligence technique for forecasting the automobile sales using Adaptive Intelligent Systems (ANFIS and ANN). Int. J. Comput. Appl.
**2013**, 74, 975–8887. [Google Scholar] [CrossRef] - Matsumoto, M.; Komatsu, S. Demand forecasting for production planning in remanufacturing. Int. J. Adv. Manuf. Technol.
**2015**, 79, 161–175. [Google Scholar] [CrossRef] - Farahani, D.S.; Momeni, M.; Amiri, N.S. Car sales forecasting using artificial neural networks and analytical hierarchy process. In Proceedings of the Fifth International Conference on Data Analytics: DATA ANALYTICS 2016, Venice, Italy, 9–13 October 2016; p. 69. [Google Scholar]
- Henkelmann, R. A Deep Learning based Approach for Automotive Spare Part Demand Forecasting. Master Thesis, Otto von Guericke Universitat Magdeburg, Magdeburg, Germany, 2018. [Google Scholar]
- Chandriah, K.K.; Naraganahalli, R.V. RNN/LSTM with modified Adam optimizer in deep learning approach for automobile spare parts demand forecasting. Multimed. Tools Appl.
**2021**, 1–15. [Google Scholar] [CrossRef] - Hanggara, F.D. Forecasting Car Demand in Indonesia with Moving Average Method. J. Eng. Sci. Technol. Manag.
**2021**, 1, 1–6. [Google Scholar] - Zhang, G.P.; Qi, M. Neural network forecasting for seasonal and trend time series. Eur. J. Oper. Res.
**2005**, 160, 501–514. [Google Scholar] [CrossRef] - Athanasopoulos, G.; Hyndman, R.J.; Song, H.; Wu, D.C. The tourism forecasting competition. Int. J. Forecast.
**2011**, 27, 822–844. [Google Scholar] [CrossRef] - Montero-Manso, P.; Hyndman, R.J. Principles and algorithms for forecasting groups of time series: Locality and globality. arXiv
**2020**, arXiv:2008.00444. [Google Scholar] - Salinas, D.; Flunkert, V.; Gasthaus, J.; Januschowski, T. DeepAR: Probabilistic forecasting with autoregressive recurrent networks. Int. J. Forecast.
**2020**, 36, 1181–1191. [Google Scholar] [CrossRef] - Bandara, K.; Bergmeir, C.; Smyl, S. Forecasting across time series databases using recurrent neural networks on groups of similar series: A clustering approach. Expert Syst. Appl.
**2020**, 140, 112896. [Google Scholar] [CrossRef][Green Version] - Laptev, N.; Yosinski, J.; Li, L.E.; Smyl, S. Time-series extreme event forecasting with neural networks at uber. In Proceedings of the International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; Volume 34, pp. 1–5. [Google Scholar]
- Wirth, R.; Hipp, J. CRISP-DM: Towards a standard process model for data mining. In Proceedings of the 4th International Conference on the Practical Applications of Knowledge Discovery and Data Mining; Springer: London, UK, 2000; pp. 29–39. [Google Scholar]
- Wang, C.N.; Tibo, H.; Nguyen, H.A. Malmquist productivity analysis of top global automobile manufacturers. Mathematics
**2020**, 8, 580. [Google Scholar] [CrossRef][Green Version] - Tubaro, P.; Casilli, A.A. Micro-work, artificial intelligence and the automotive industry. J. Ind. Bus. Econ.
**2019**, 46, 333–345. [Google Scholar] [CrossRef][Green Version] - Ryu, H.; Basu, M.; Saito, O. What and how are we sharing? A systematic review of the sharing paradigm and practices. Sustain. Sci.
**2019**, 14, 515–527. [Google Scholar] [CrossRef][Green Version] - Li, M.; Zeng, Z.; Wang, Y. An innovative car sharing technological paradigm towards sustainable mobility. J. Clean. Prod.
**2021**, 288, 125626. [Google Scholar] [CrossRef] - Svennevik, E.M.; Julsrud, T.E.; Farstad, E. From novelty to normality: Reproducing car-sharing practices in transitions to sustainable mobility. Sustain. Sci. Pract. Policy
**2020**, 16, 169–183. [Google Scholar] [CrossRef] - Heineke, K.; Möller, T.; Padhi, A.; Tschiesner, A. The Automotive Revolution is Speeding Up; McKinsey and Co.: New York, NY, USA, 2017. [Google Scholar]
- Verevka, T.V.; Gutman, S.S.; Shmatko, A. Prospects for Innovative Development of World Automotive Market in Digital Economy. In Proceedings of the 2019 International SPBPU Scientific Conference on Innovations in Digital Economy, Saint Petersburg, Russia, 14–15 October 2019; pp. 1–6. [Google Scholar]
- Armstrong, J.S.; Morwitz, V.G.; Kumar, V. Sales forecasts for existing consumer products and services: Do purchase intentions contribute to accuracy? Int. J. Forecast.
**2000**, 16, 383–397. [Google Scholar] [CrossRef][Green Version] - Morwitz, V.G.; Steckel, J.H.; Gupta, A. When do purchase intentions predict sales? Int. J. Forecast.
**2007**, 23, 347–364. [Google Scholar] [CrossRef] - Hotta, L.; Neto, J.C. The effect of aggregation on prediction in autoregressive integrated moving-average models. J. Time Ser. Anal.
**1993**, 14, 261–269. [Google Scholar] [CrossRef] - Souza, L.R.; Smith, J. Effects of temporal aggregation on estimates and forecasts of fractionally integrated processes: A Monte-Carlo study. Int. J. Forecast.
**2004**, 20, 487–502. [Google Scholar] [CrossRef] - Rostami-Tabar, B.; Babai, M.Z.; Syntetos, A.; Ducq, Y. Demand forecasting by temporal aggregation. Nav. Res. Logist. (NRL)
**2013**, 60, 479–498. [Google Scholar] [CrossRef] - Nikolopoulos, K.; Syntetos, A.A.; Boylan, J.E.; Petropoulos, F.; Assimakopoulos, V. An aggregate–disaggregate intermittent demand approach (ADIDA) to forecasting: An empirical proposition and analysis. J. Oper. Res. Soc.
**2011**, 62, 544–554. [Google Scholar] [CrossRef][Green Version] - Syntetos, A.; Babai, M.; Altay, N. Modelling spare parts’ demand: An empirical investigation. In Proceedings of the 8th International Conference of Modeling and Simulation MOSIM, Hammamet, Tunisia, 10–12 May 2010; Citeseer: Forest Grove, OR, USA, 2010; Volume 10. [Google Scholar]
- Hua, J.; Xiong, Z.; Lowey, J.; Suh, E.; Dougherty, E.R. Optimal number of features as a function of sample size for various classification rules. Bioinformatics
**2005**, 21, 1509–1515. [Google Scholar] [CrossRef][Green Version] - Varma, S.; Simon, R. Bias in error estimation when using cross-validation for model selection. BMC Bioinform.
**2006**, 7, 91. [Google Scholar] [CrossRef][Green Version] - Drucker, H.; Burges, C.J.; Kaufman, L.; Smola, A.J.; Vapnik, V. Support vector regression machines. In Proceedings of the Advances in Neural Information Processing Systems, Denver, CO, USA, 2–5 December 1997; pp. 155–161. [Google Scholar]
- Rosenblatt, F. The perceptron: A probabilistic model for information storage and organization in the brain. Psychol. Rev.
**1958**, 65, 386. [Google Scholar] [CrossRef][Green Version] - Hoerl, A.E.; Kennard, R.W. Ridge regression: Biased estimation for nonorthogonal problems. Technometrics
**1970**, 12, 55–67. [Google Scholar] [CrossRef] - Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B
**1996**, 58, 267–288. [Google Scholar] [CrossRef] - Zou, H.; Hastie, T. Regularization and variable selection via the elastic net. J. R. Stat. Soc. Ser. B
**2005**, 67, 301–320. [Google Scholar] [CrossRef][Green Version] - Altman, N.S. An introduction to kernel and nearest-neighbor nonparametric regression. Am. Stat.
**1992**, 46, 175–185. [Google Scholar] - Wolpert, D.H. Stacked generalization. Neural Netw.
**1992**, 5, 241–259. [Google Scholar] [CrossRef] - Gomes, H.M.; Barddal, J.P.; Ferreira, L.E.B.; Bifet, A. Adaptive random forests for data stream regression. In Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), Bruges, Belgium, 2–4 October 2018. [Google Scholar]
- Domingos, P.; Hulten, G. Mining high-speed data streams. In Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Boston, MA, USA, 20–23 August 2000; pp. 71–80. [Google Scholar]
- Bifet, A.; Gavaldà, R. Adaptive learning from evolving data streams. In International Symposium on Intelligent Data Analysis; Springer: New York, NY, USA, 2009; pp. 249–260. [Google Scholar]
- Ferligoj, A.; Kramberger, A. Some Properties of R^{2} in Ordinary Least Squares Regression. 1995. [Google Scholar]
- Armstrong, J.S. Illusions in regression analysis. Int. J. Forecast.
**2011**, 28, 689–694. [Google Scholar] [CrossRef][Green Version] - Tufte, E.R. The Visual Display of Quantitative Information; Graphics Press: Cheshire, CT, USA, 2001; Volume 2. [Google Scholar]
- Ali, M.M.; Boylan, J.E.; Syntetos, A.A. Forecast errors and inventory performance under forecast information sharing. Int. J. Forecast.
**2012**, 28, 830–841. [Google Scholar] [CrossRef] - Bruzda, J. Demand forecasting under fill rate constraints—The case of re-order points. Int. J. Forecast.
**2020**, 36, 1342–1361. [Google Scholar] [CrossRef] - Stone, M. Cross-validatory choice and assessment of statistical predictions. J. R. Stat. Soc. Ser. B
**1974**, 36, 111–133. [Google Scholar] [CrossRef]

**Figure 1.** Demand types classification by Syntetos et al. [12]. Quadrants correspond to (**I**) intermittent, (**II**) lumpy, (**III**) smooth, and (**IV**) erratic demand types.
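The quadrants in Figure 1 follow the standard Syntetos–Boylan cutoffs (ADI = 1.32, CV² = 0.49) [12]. A minimal Python sketch of this classification (function name and inputs are ours, for illustration):

```python
# Classify a monthly demand series into the four Syntetos-Boylan quadrants
# using the Average Demand Interval (ADI) and the squared coefficient of
# variation (CV^2) of the non-zero demand sizes.

def classify_demand(series):
    """Return 'smooth', 'intermittent', 'erratic', or 'lumpy'."""
    nonzero = [x for x in series if x > 0]
    if not nonzero:
        raise ValueError("series has no demand")
    adi = len(series) / len(nonzero)      # average inter-demand interval
    mean = sum(nonzero) / len(nonzero)
    var = sum((x - mean) ** 2 for x in nonzero) / len(nonzero)
    cv2 = var / mean ** 2                 # squared coefficient of variation
    if adi < 1.32:
        return "smooth" if cv2 < 0.49 else "erratic"
    return "intermittent" if cv2 < 0.49 else "lumpy"
```

For example, a steady series like `[5, 5, 5, 5]` is smooth, while `[5, 0, 5, 0]` (regular sizes, irregular timing) is intermittent.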

**Figure 2.** Median values for (**a**) crude oil price, (**b**) GDP, (**c**) unemployment rate worldwide, (**d**) PMI, (**e**) copper price (last three months average), and (**f**) demand.

**Figure 3.** Sample demand correlograms, indicating seasonality patterns. The correlogram in panel (**a**) is computed over the seven years of data, while the correlogram in panel (**b**) is computed over the last three years.

**Figure 4.** Monthly demand over the years for selected products. We compare the last three years of data.

**Figure 5.** Relevant points in time considered for forecasting purposes. There is a six-week gap between the moment we issue the forecast and the month we predict. The day of the month on which the prediction is issued is fixed.

**Table 1.** Data sources. The first and second columns indicate the kind of data we retrieve and its source. The third column indicates how frequently new data is available, and the last column the aggregation level at which the data is published. Periodicity and aggregation levels can be yearly, quarterly, monthly, or daily, denoted by “Y”, “Q”, “M”, or “D”, respectively. The London Metal Exchange publishes copper prices on weekdays.

Data | Source | Periodicity | Aggregation Level |
---|---|---|---|
History of deliveries | Internal | D | D |
Sales Plan | Internal | Y, Q | M |
Gross Domestic Product (GDP) | World Bank | Y | Y |
Unemployment rate | World Bank | Y | Y |
Crude Oil price | World Bank | M | M |
Purchasing Managers’ Index (PMI) | Institute of Supply Chain Management | M | M |
Copper price | London Metal Exchange | D | D |
Car sales | International Organization of Motor Vehicle Manufacturers | Y | Y |

**Table 2.** Demand segmentations, by demand type as per [12], and by demand magnitude, considering demanded quantities per month.

Years | Smooth | Erratic | 10 | 100 | 1 K | 10 K | 100 K |
---|---|---|---|---|---|---|---|
All years | 13 | 43 | 13 | 2 | 5 | 26 | 10 |
Last 3 years | 19 | 37 | 10 | 1 | 4 | 28 | 13 |

The “Smooth” and “Erratic” columns segment by demand type; the remaining columns segment by demand magnitude.

**Table 3.** Top 15 features selected by the GBRT model considering the last three years of data. We did not remove correlated features in this case.

Feature | Brief Description |
---|---|
$wdp_{3m}$ | Estimate of target demand based on the average demand per working day three months before the predicted month and the number of working days in the target month. |
$sp \cdot \frac{demand_{pastwavg}}{sp_{pastwavg}}$ | Planned sales for the target month, adjusted by the ratio of the weighted averages of past demand and past planned sales for the given month. |
$demand_{lag4m} \cdot \frac{UE_{3m}}{UE_{15m}}$ | Lagged demand (4 months before the target month), adjusted by the ratio of unemployment rates three and fifteen months before the month we aim to predict. |
$sp_{lag12m}$ | Planned sales for the same month of the previous year. |
$sp \cdot \frac{UE_{3m}}{UE_{15m}}$ | Planned sales, adjusted by the ratio of unemployment rates three and fifteen months before the month we aim to predict. |
$sp$ | Planned sales for the target month. |
$demand_{lag3m} \cdot \frac{GDP_{3m}}{GDP_{15m}}$ | Lagged demand (3 months before the target month), adjusted by the ratio of GDP three and fifteen months before the month we aim to predict. |
$sp \cdot \frac{GDP_{3m}}{GDP_{15m}}$ | Planned sales for the target month, adjusted by the ratio of GDP three and fifteen months before the month we aim to predict. |
$demand_{lag3m}$ | Lagged demand (3 months before the target month). |
$wdp_{12m} \cdot \frac{sp}{sp_{pastwavg}}$ | Estimate of target demand based on the average demand per working day a year before the predicted month and the number of working days in the target month, adjusted by the ratio between planned sales for the target month and the weighted average of planned sales for the same month over past years. |
$wdp_{8m}$ | Estimate of target demand based on the average demand per working day eight months before the predicted month and the number of working days in the target month. |
$wdp_{5m} \cdot \frac{UE_{3m}}{UE_{15m}}$ | Estimate of target demand based on the average demand per working day five months before the predicted month and the number of working days in the target month, adjusted by the ratio of unemployment rates three and fifteen months before the month we aim to predict. |
$wdp_{12m} \cdot \frac{PMI_{13m}}{PMI_{14m}}$ | Estimate of target demand based on the average demand per working day a year before the predicted month and the number of working days in the target month, adjusted by the ratio between PMI values 13 and 14 months before the target month. |
$demand_{lag3m_{scaled}}$ | Lagged demand (3 months before the target month), scaled between 0 and 1 considering the product's past demand values. |
$wdp_{3m} \cdot \frac{GDP_{3m}}{GDP_{15m}}$ | Estimate of target demand based on the average demand per working day three months before the predicted month and the number of working days in the target month, adjusted by the ratio of GDP three and fifteen months before the month we aim to predict. |
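Several features in Table 3 share the same working-day pattern, $wdp_{km}$: demand per working day $k$ months before the target, scaled by the working days in the target month. A minimal sketch of how such a feature could be computed (function name and inputs are our assumptions, for illustration):

```python
# wdp_{km} feature: project the demand-per-working-day observed k months
# before the target month onto the target month's working-day count.

def wdp(demand_by_month, working_days_by_month, target, k):
    """Estimate target-month demand from the rate observed k months earlier."""
    src = target - k
    per_day = demand_by_month[src] / working_days_by_month[src]
    return per_day * working_days_by_month[target]
```

For instance, 100 units delivered over 20 working days three months ago projects to 110 units for a target month with 22 working days; the economic-indicator variants in Table 3 then multiply such estimates by a ratio of GDP, unemployment, or PMI values.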

**Table 4.** Description of the experiments performed. Regarding the feature selection procedure, we consider two cases: (I) top features ranked by a GBRT model and curated by a researcher, and (II) top features ranked by a GBRT model, with strongly collinear features removed, curated by a researcher as well. N in the “Number of Features” column refers to the number of instances in a given dataset.

Years of Data | Experiment | Feature Selection | Number of Features |
---|---|---|---|
All years available | Experiment 1 | I | 6 |
 | Experiment 2 | II | 6 |
Last three years | Experiment 3 | I | 6 |
 | Experiment 4 | II | 6 |
 | Experiment 5 | II | 6 |
 | Experiment 6 | II | 6 |
 | Experiment 7 | II | $\sqrt{N}$ |
 | Experiment 8 | II | $\sqrt{N}$ |
 | Experiment 9 | II | $\sqrt{N}$ |
 | Experiment 10 | II | $\sqrt{N}$ |
 | Experiment 11 | Only past demand | 1 |

**Table 5.** Median of results obtained for each ML experiment. We abbreviate under-estimates as UE. In Experiments 9–10, streaming models based on the Hoeffding bound show poor performance, resulting in negative R^{2}adj values. We highlight the best results in bold.

Experiment | R^{2}adj | MASE | 5% Error | 10% Error | 20% Error | 30% Error | UE | 90%+ Error |
---|---|---|---|---|---|---|---|---|
Experiment 1 | 0.8584 | 1.1450 | 0.0670 | 0.1086 | 0.2039 | 0.3051 | 0.3854 | 0.4077 |
Experiment 2 | 0.8447 | 1.1450 | 0.0655 | 0.1101 | 0.1920 | 0.2887 | 0.4182 | 0.3928 |
Experiment 3 | 0.9067 | 0.9150 | 0.0655 | 0.1280 | 0.2351 | 0.3095 | 0.4256 | 0.3928 |
Experiment 4 | 0.8998 | 0.9750 | 0.0655 | 0.1176 | 0.2143 | 0.3051 | 0.4152 | 0.4018 |
Experiment 5 | 0.8757 | 0.3900 | 0.0536 | 0.1116 | 0.2173 | 0.3140 | 0.4762 | 0.3497 |
Experiment 6 | 0.8679 | 0.3350 | 0.0565 | 0.1012 | 0.1875 | 0.2768 | 0.4851 | 0.3601 |
Experiment 7 | 0.8903 | 0.3550 | 0.0521 | 0.1131 | 0.2247 | 0.3155 | 0.4851 | 0.3408 |
Experiment 8 | 0.8786 | 0.3100 | 0.0506 | 0.0938 | 0.1890 | 0.2813 | 0.4658 | 0.3497 |
Experiment 9 | −0.1611 | 0.8100 | 0.0357 | 0.0714 | 0.1428 | 0.2143 | 0.7321 | 0.3601 |
Experiment 10 | −1.5344 | 0.5300 | 0.0178 | 0.0536 | 0.1250 | 0.2143 | 0.7143 | 0.4613 |

**Table 6.** Results obtained for the top 3 performing models from Experiment 8 (ML batch models), the best results from Experiments 9–10 (ML streaming models), and the baseline and statistical models. We abbreviate under-estimates as UE.

Algorithm Type | Algorithm | R^{2}adj | MASE | 5% Error | 10% Error | 20% Error | 30% Error | UE | 90%+ Error |
---|---|---|---|---|---|---|---|---|---|
ML batch | SVR | 0.9212 | 0.2600 | 0.0774 | 0.1101 | 0.2321 | 0.3333 | 0.4077 | 0.3304 |
 | Voting | 0.9059 | 0.2800 | 0.0625 | 0.0923 | 0.1786 | 0.2798 | 0.4792 | 0.3393 |
 | RFR | 0.8953 | 0.2900 | 0.0417 | 0.1012 | 0.2173 | 0.3244 | 0.3423 | 0.3482 |
ML streaming | ARFR (Experiment 9) | 0.8728 | 0.3300 | 0.0744 | 0.1339 | 0.2500 | 0.3274 | 0.5387 | 0.3452 |
 | ARFR (Experiment 10) | 0.8205 | 0.2200 | 0.0744 | 0.1280 | 0.2232 | 0.3274 | 0.5268 | 0.3423 |
Baseline | MA(3) | 0.8938 | 0.8800 | 0.1190 | 0.1667 | 0.2530 | 0.3482 | 0.3571 | 0.3065 |
 | Naïve | 0.8519 | 1.0000 | 0.2024 | 0.2411 | 0.3423 | 0.4137 | 0.4137 | 0.3214 |
Statistical | ARIMA(2,1,0) | 0.3846 | 0.4500 | 0.0476 | 0.0774 | 0.1429 | 0.1875 | 0.5536 | 0.5208 |
 | Exponential smoothing | 0.3258 | 0.3600 | 0.0506 | 0.1161 | 0.1905 | 0.2738 | 0.5923 | 0.4434 |
 | ARIMA(1,1,0) | 0.2840 | 0.5200 | 0.0387 | 0.0744 | 0.1012 | 0.1726 | 0.5119 | 0.6071 |
 | Random walk | −0.6705 | 0.9000 | 0.0327 | 0.0387 | 0.0655 | 0.0923 | 0.3780 | 0.7678 |

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Rožanec, J.M.; Kažič, B.; Škrjanc, M.; Fortuna, B.; Mladenić, D. Automotive OEM Demand Forecasting: A Comparative Study of Forecasting Algorithms and Strategies. *Appl. Sci.* **2021**, *11*, 6787.
https://doi.org/10.3390/app11156787
