Use whatever software resources make the most sense (Tableau, Excel, R, etc.) given that you are working with more than 500 time series.

1. Plot all the series (an advanced data visualization tool is recommended). What types of components are visible? Are the series similar or different? Check for problems such as missing values and possible errors.
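A minimal sketch of the data-quality check, using a few fabricated toy series (the actual competition has 518 yearly series, typically loaded from a file). Missing years are represented here as None:

```python
# Quick data-quality scan over many yearly series, represented as a
# dict {series_name: list of values, with None for missing years}.
# Toy data for illustration only.

series = {
    "Y1": [100.0, 110.0, None, 130.0],   # a missing year
    "Y2": [50.0, 52.0, 55.0, -3.0],      # a negative value: likely an error
    "Y3": [200.0, 210.0, 225.0, 240.0],
}

problems = {}
for name, values in series.items():
    missing = sum(v is None for v in values)
    negatives = sum(v is not None and v < 0 for v in values)
    if missing or negatives:
        problems[name] = {"missing": missing, "negative": negatives}

print(problems)
# → {'Y1': {'missing': 1, 'negative': 0}, 'Y2': {'missing': 0, 'negative': 1}}
```

Flagged series can then be inspected individually before any plotting or partitioning.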

  1. Partition the series into training and validation, so that the last 4 years are in the validation period for each series. What is the logic of such a partitioning? What is the disadvantage?
  2. Generate naive forecasts for all series for the validation period. For each series, create forecasts with horizons of 1, 2, 3, and 4 years ahead (Ft+1, Ft+2, Ft+3, and Ft+4).
  3. Among the measures MAE, average error, MAPE, and RMSE, which are suitable if we plan to combine the results for the 518 series?
  4. For each series, compute MAPE of the naive forecasts once for the training period and once for the validation period.
  5. The performance measure used in the competition is Mean Absolute Scaled Error (MASE). Explain the advantage of MASE and compute the training and validation MASE for the naive forecasts.
  6. Create a scatterplot of the MAPE pairs, with the training MAPE on the x-axis and the validation MAPE on the y-axis. Create a similar scatterplot for validation MASE vs. MAPE. Now examine both plots. What do we learn? How does performance differ between the training and validation periods? How does performance vary across series?
  7. The competition winner, Lee Baker, used an ensemble of three methods:
    • Naive forecasts multiplied by a constant trend (global/local trend: “globally, tourism has grown at a rate of 6% annually”)
    • Linear regression
    • Exponentially-weighted linear regression
    (a) Write the exact formula used for generating the first method, in the form Ft+k = … (k = 1, 2, 3, 4).
    (b) What is the rationale behind multiplying the naive forecasts by a constant? (Hint: think empirical and domain knowledge.)
    (c) What should be the dependent variable and the predictors in a linear regression model for these data? Explain.
    (d) Fit the linear regression model to the first five series and compute forecast errors for the validation period.
    (e) Before choosing a linear regression, the winner described the following process:
    “I examined fitting a polynomial line to the data and using the line to predict future values. I tried using first through fifth order polynomials to find that the lowest MASE was obtained using a first order polynomial (simple regression line). This best fit line was used to predict future values. I also kept the [R2] value of the fit for use in blending the results of the predictor.”
    What are two flaws in this approach?
    (f) If we were to consider exponential smoothing, what particular type(s) of exponential smoothing are reasonable candidates?
    (g) The winner concludes with possible improvements, one being “an investigation into how to come up with a blending [ensemble] method that doesn’t use as much manual tweaking would also be of benefit.” Can you suggest methods or an approach that would lead to easier automation of the ensemble step?
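Steps 2, 4, and 5 above can be sketched for a single series; the toy numbers below are fabricated, and in the exercise the same computation would be repeated over all 518 series and the results averaged:

```python
# Naive forecasts, MAPE, and MASE for one synthetic yearly series.

train = [100.0, 110.0, 125.0, 130.0, 150.0, 160.0]   # training period
valid = [170.0, 175.0, 190.0, 200.0]                  # last 4 years (validation)

# Naive forecast: every horizon k = 1..4 uses the last training value,
# so Ft+1 = Ft+2 = Ft+3 = Ft+4 = last observed value.
last = train[-1]
forecasts = [last] * len(valid)

def mape(actual, pred):
    # Mean absolute percentage error, in percent.
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)

# MASE scales absolute forecast errors by the in-sample mean absolute
# error of the one-step naive forecast (mean absolute first difference
# of the training data), making it comparable across series of
# different magnitudes.
scale = sum(abs(b - a) for a, b in zip(train, train[1:])) / (len(train) - 1)

def mase(actual, pred, scale):
    return sum(abs(a - p) for a, p in zip(actual, pred)) / (len(actual) * scale)

print(round(mape(valid, forecasts), 2))   # → 12.56
print(round(mase(valid, forecasts, scale), 2))   # → 1.98
```

Because MASE is scale-free, averaging it across 518 series of very different magnitudes is meaningful, which is part of the answer sought in step 5.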

(h) The competition focused on minimizing the average MAPE of the next four values across all 518 series. How does this goal differ from goals encountered in practice when considering tourism demand? Which steps in the forecasting process would likely be different in a real-life tourism forecasting scenario?
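The regression fit asked for in part 7(d) can be sketched as follows, here with one fabricated series and a simple time index t as the predictor (one reasonable choice; the exercise asks you to justify the dependent variable and predictors yourself):

```python
# Simple linear regression of a yearly series on a time index t = 1..n,
# fit on the training period only; synthetic data for illustration.

train = [100.0, 108.0, 121.0, 130.0, 144.0, 155.0]
valid = [161.0, 170.0, 183.0, 195.0]

n = len(train)
t = list(range(1, n + 1))
t_bar = sum(t) / n
y_bar = sum(train) / n

# Ordinary least squares slope and intercept (closed form).
slope = sum((ti - t_bar) * (yi - y_bar) for ti, yi in zip(t, train)) \
        / sum((ti - t_bar) ** 2 for ti in t)
intercept = y_bar - slope * t_bar

# Forecasts for the 4 validation years (t = n+1 .. n+4) and their errors.
forecasts = [intercept + slope * (n + k) for k in range(1, 5)]
errors = [a - f for a, f in zip(valid, forecasts)]
```

The same fit-then-forecast loop would be run over the first five series, and the validation errors summarized with whichever measure was chosen in step 3.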
