Time Series Forecasting Algorithms

Definition:

Time Series Forecasting is a technique used to predict future values based on previously observed time-ordered data. Time series data are sequences of data points collected or recorded at specific time intervals. Forecasting techniques attempt to model the underlying patterns such as trend, seasonality, and noise to make accurate future predictions.

Characteristics:

  • Temporal Dependency:
    Time series forecasting depends heavily on the sequential nature of the data, with future values influenced by past observations.

  • Pattern Recognition:
    Models identify patterns like trends (long-term increases or decreases), seasonality (cyclic behaviors), and random noise to make predictions.

  • Extrapolation:
    The goal is to extrapolate the future behavior of the data based on patterns found in historical data.

  • Real-Time or Batch Predictions:
    Time series models can be applied in real-time scenarios (e.g., stock prices) or for batch forecasting (e.g., sales forecasting for the next quarter).

Types of Time Series Components:

  1. Trend:
    A long-term increase or decrease in the data.

  2. Seasonality:
    A repeating pattern of behavior at regular intervals (e.g., daily, weekly, monthly).

  3. Cyclical:
    Fluctuations that recur at irregular intervals, often linked to economic or business cycles, and that lack the fixed period of seasonal patterns.

  4. Residual (Noise):
    Irregular, random variations that are not captured by the trend or seasonal components. (A decomposition into these components is sketched below.)
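
A quick way to see these components in practice: the sketch below (the synthetic monthly series is an illustrative assumption, and statsmodels is assumed to be installed) uses seasonal_decompose to split a series into the trend, seasonal, and residual parts described above.

import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Synthetic monthly series: linear trend + yearly seasonality + noise (illustrative only)
idx = pd.date_range('2015-01-01', periods=96, freq='MS')
t = np.arange(96)
series = pd.Series(0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + np.random.normal(0, 2, 96), index=idx)

# Additive decomposition with a 12-month seasonal period
result = seasonal_decompose(series, model='additive', period=12)
print(result.trend.dropna().head())   # long-term movement
print(result.seasonal.head(12))       # repeating yearly pattern
print(result.resid.dropna().head())   # irregular leftover noise

The residual is whatever remains after removing the trend and seasonal parts; a residual with visible structure suggests patterns the decomposition has not captured.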

Common Time Series Forecasting Techniques:

  1. ARIMA (AutoRegressive Integrated Moving Average):
    A widely used statistical method that combines autoregression (AR), differencing to make the data stationary (I), and moving average (MA) to handle the noise in the data. ARIMA is suitable for univariate time series data.

    $y_t = c + \phi_1 y_{t-1} + \phi_2 y_{t-2} + \dots + \phi_p y_{t-p} + \theta_1 e_{t-1} + \theta_2 e_{t-2} + \dots + \theta_q e_{t-q} + e_t$

    where $\phi_i$ are the autoregressive coefficients, $\theta_j$ are the moving average coefficients, and $e_t$ is white noise (applied to the differenced series when $d > 0$).

  2. SARIMA (Seasonal ARIMA):
    Extends ARIMA with seasonal autoregressive, differencing, and moving average terms to handle repeating cycles in the data (sketched, together with Holt-Winters, in the example after this list).

  3. Exponential Smoothing (ETS Models):
    Assigns exponentially decreasing weights to past observations, placing more importance on recent data. Variants include Simple Exponential Smoothing, Holt’s Linear Trend Model, and Holt-Winters Seasonal Model.

  4. Prophet:
    Developed by Facebook, Prophet is an additive model suitable for forecasting time series with daily, weekly, and yearly seasonality. It’s designed to handle missing data and outliers.

  5. LSTM (Long Short-Term Memory Networks):
    A type of Recurrent Neural Network (RNN) used for time series forecasting due to its ability to remember long-term dependencies and patterns in sequential data. LSTM is particularly useful for complex, non-linear patterns.

  6. VAR (Vector AutoRegression):
    A multivariate time series forecasting method where each variable is predicted using a linear combination of its own past values and the past values of other variables.

  7. TBATS (Trigonometric seasonality, Box-Cox transformation, ARMA errors, Trend, and Seasonal components):
    An exponential smoothing state space model for time series with complex or multiple seasonalities, such as combined hourly, daily, and yearly patterns.

  8. XGBoost for Time Series:
    Although traditionally used for classification and regression, XGBoost can be adapted for time series forecasting by engineering features such as lags, rolling means, and other relevant variables.
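
As a minimal sketch of techniques 2 and 3 above (the synthetic series and all model orders are illustrative assumptions, not tuned choices), statsmodels provides both seasonal ARIMA and Holt-Winters exponential smoothing:

import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic monthly series with trend and yearly seasonality (illustrative only)
idx = pd.date_range('2016-01-01', periods=72, freq='MS')
t = np.arange(72)
series = pd.Series(0.3 * t + 5 * np.sin(2 * np.pi * t / 12) + np.random.normal(0, 1, 72), index=idx)

# SARIMA: nonseasonal order (1, 1, 1) plus a (1, 1, 1, 12) seasonal component
sarima_fit = SARIMAX(series, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12)).fit(disp=False)
print(sarima_fit.forecast(steps=12))

# Holt-Winters: additive trend and additive seasonality with a 12-period cycle
hw_fit = ExponentialSmoothing(series, trend='add', seasonal='add', seasonal_periods=12).fit()
print(hw_fit.forecast(12))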

Steps in Time Series Forecasting:

  1. Data Collection:
    Gather historical time series data relevant to the problem, ensuring data is consistently spaced in time.

  2. Data Preprocessing:
    Handle missing values, smooth noisy data, and scale or transform variables if needed. Differencing can be applied to remove trends.

  3. Stationarity Check:
    Check whether the time series is stationary, i.e., whether its statistical properties (mean, variance, autocorrelation) are constant over time. The Augmented Dickey-Fuller (ADF) test is a common tool for this (sketched after this list).

  4. Model Selection:
    Choose an appropriate model (e.g., ARIMA, LSTM, Prophet) based on the nature of the time series data and its components (trend, seasonality, etc.).

  5. Model Training and Validation:
    Split the data into training and validation sets, then train the model on the historical data and evaluate its performance on the validation set.

  6. Hyperparameter Tuning:
    Adjust model hyperparameters to improve accuracy, such as the order of ARIMA components (p, d, q) or the number of LSTM units.

  7. Forecasting:
    Generate future predictions based on the trained model, extrapolating the patterns it has learned from the historical data.

  8. Evaluation:
    Use metrics such as Mean Absolute Error (MAE), Mean Squared Error (MSE), or Mean Absolute Percentage Error (MAPE) to evaluate the model's accuracy on unseen data.
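
For the stationarity check in step 3, a minimal sketch of the ADF test via statsmodels (assuming the data is already in a pandas Series named series, e.g., the 'Value' column from the example further below):

from statsmodels.tsa.stattools import adfuller

# Null hypothesis of the ADF test: the series has a unit root (is non-stationary)
adf_stat, p_value, used_lags, n_obs, critical_values, _ = adfuller(series)
print(f'ADF statistic: {adf_stat:.3f}, p-value: {p_value:.3f}')
if p_value < 0.05:
    print('Reject the null: the series looks stationary.')
else:
    print('Cannot reject the null: consider differencing before fitting ARIMA.')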

Common Metrics for Forecast Accuracy:

  • Mean Absolute Error (MAE):
    The average of the absolute differences between predicted and actual values; a direct, easily interpreted measure of error.
    $\text{MAE} = \frac{1}{n}\sum_{i=1}^{n} |y_i - \hat{y}_i|$

  • Mean Squared Error (MSE):
    The average of the squared differences between predicted and actual values; it penalizes larger errors more heavily than MAE.
    $\text{MSE} = \frac{1}{n}\sum_{i=1}^{n} (y_i - \hat{y}_i)^2$

  • Root Mean Squared Error (RMSE):
    The square root of the MSE, giving an error metric in the same units as the target variable.
    $\text{RMSE} = \sqrt{\text{MSE}}$

  • Mean Absolute Percentage Error (MAPE):
    The average of the absolute percentage errors between predicted and actual values; useful for understanding the error in relative terms. (All four metrics are computed in the sketch below.)
    $\text{MAPE} = \frac{100}{n}\sum_{i=1}^{n} \left|\frac{y_i - \hat{y}_i}{y_i}\right|$
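
All four metrics reduce to a few lines of NumPy; in this sketch the actual and predicted arrays are illustrative placeholders:

import numpy as np

actual = np.array([100.0, 110.0, 120.0, 130.0])     # hypothetical observed values
predicted = np.array([102.0, 108.0, 123.0, 128.0])  # hypothetical forecasts

errors = actual - predicted
mae = np.mean(np.abs(errors))                  # Mean Absolute Error
mse = np.mean(errors ** 2)                     # Mean Squared Error
rmse = np.sqrt(mse)                            # Root Mean Squared Error
mape = np.mean(np.abs(errors / actual)) * 100  # Mean Absolute Percentage Error (%)
print(f'MAE={mae:.2f}, MSE={mse:.2f}, RMSE={rmse:.2f}, MAPE={mape:.2f}%')

Note that MAPE is undefined when any actual value is zero, which is worth checking before relying on it.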

Time Series Forecasting Architecture:

  1. Input Layer:
    The historical time series data is fed into the model, which could be univariate (a single series) or multivariate (multiple series).

  2. Transformation Layer:
    The data is preprocessed and transformed (e.g., differenced, scaled) to make it suitable for model training.

  3. Model Layer:
    The chosen model (ARIMA, LSTM, etc.) is applied to capture patterns like trend, seasonality, and noise from the data.

  4. Prediction Layer:
    The model forecasts future values based on the learned patterns.

  5. Evaluation Layer:
    Forecasted values are compared against actual values to measure performance and accuracy.

Advantages of Time Series Forecasting:

  • Temporal Awareness:
    Time series models are designed to handle sequential data and temporal dependencies, making them ideal for forecasting problems.

  • Predictive Power:
    With the right model, time series forecasting can produce highly accurate predictions that capture trends, seasonality, and other patterns.

  • Wide Applicability:
    These methods can be applied in various domains such as finance, sales, weather forecasting, and operations management.

Simple Python Implementation for Time Series Forecasting Using ARIMA:


import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.arima.model import ARIMA
from sklearn.metrics import mean_squared_error

# Load a sample time series dataset.
# The CSV should have a 'Date' column and a 'Value' column (e.g., stock prices, sales data).
data = pd.read_csv('sample_time_series.csv', parse_dates=['Date'], index_col='Date')

# Split the data into training (80%) and testing (20%) sets, preserving time order
train_size = int(len(data) * 0.8)
train, test = data[:train_size], data[train_size:]

# Fit the ARIMA model with (p, d, q) = (5, 1, 0); these orders should be tuned per dataset
model = ARIMA(train['Value'], order=(5, 1, 0))
model_fit = model.fit()

# Forecast as many steps ahead as there are test observations
forecast = model_fit.forecast(steps=len(test))

# Plot the forecast against the training data and actual test values
plt.figure(figsize=(10, 6))
plt.plot(train.index, train['Value'], label='Training Data')
plt.plot(test.index, test['Value'], label='Actual Data', color='green')
plt.plot(test.index, forecast, label='Forecasted Data', color='red')
plt.legend()
plt.title('Time Series Forecasting - ARIMA')
plt.show()

# Evaluate the forecast with Mean Squared Error
mse = mean_squared_error(test['Value'], forecast)
print(f'Mean Squared Error: {mse}')

Explanation of the Code:

  1. Data Loading:
    Load a time series dataset that includes a 'Date' column and a 'Value' column (e.g., stock prices, sales data).

  2. Train-Test Split:
    The dataset is split into a training set (80% of the data) and a testing set (20%).

  3. ARIMA Model:
    The ARIMA model is fitted to the training data. The (5, 1, 0) parameters are the model's hyperparameters p, d, and q, which need to be tuned for your specific dataset (a small AIC-based search is sketched after this list).

  4. Forecasting:
    The model forecasts values for the test period.

  5. Plotting:
    The forecasted data is plotted against the actual test data to visualize the performance.

  6. Model Evaluation:
    The Mean Squared Error (MSE) is calculated to quantify the forecast accuracy.
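
One common way to tune the (p, d, q) order mentioned in step 3 is a small grid search that keeps the lowest-AIC fit. A minimal sketch (the search ranges are illustrative assumptions, and train['Value'] refers to the example above):

import itertools
from statsmodels.tsa.arima.model import ARIMA

# Try every (p, d, q) in a small grid and keep the order with the lowest AIC
best_aic, best_order = float('inf'), None
for p, d, q in itertools.product(range(3), range(2), range(3)):
    try:
        aic = ARIMA(train['Value'], order=(p, d, q)).fit().aic
    except Exception:
        continue  # some orders fail to estimate; skip them
    if aic < best_aic:
        best_aic, best_order = aic, (p, d, q)
print(f'Best order by AIC: {best_order} (AIC={best_aic:.1f})')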

Conclusion:

Time series forecasting provides essential insights and predictive power for data that follows a sequential or time-dependent structure. Whether using traditional models like ARIMA or advanced models like LSTM, these methods help extract patterns from historical data to forecast future values.