Multivariate Time Series models: Do we really need them?
A comparison of local, global, univariate and multivariate configurations using the DLinear and NLinear models
LTSF-Linear: Embarrassingly simple time series forecasting models
A review of the 2022 paper "Are Transformers Effective for Time Series Forecasting?", which introduced the DLinear and NLinear models
PyTorch vs MXNet: Which is faster?
An analysis of the computational efficiency of PyTorch and MXNet
The F1 Score: Time Series Model Championships
A ranking system for time series models based on the Monash benchmark datasets, using the MASE metric and the Formula 1 scoring system.
1Cycle scheduling: State-of-the-art time series forecasting
How to get state-of-the-art time series forecasting results using machine learning with my variant of DeepAR and 1Cycle scheduling
Super-convergence: Supercharge your Neural Networks
A look at 1Cycle scheduling, one of my favourite techniques for improving model performance, with practical guidance on how to use it
A fistful of MASE: Deconstructing DeepAR
A deep dive into the GluonTS DeepAR neural network architecture for time series forecasting, with an ablation study of its covariates.
Predicting covariates: Is it a good idea?
A study evaluating the effectiveness of predicting covariates in LSTM neural networks for time series forecasting
Teacher Forcing: A look at what it is and the alternatives
A review of teacher forcing, free running, scheduled sampling, professor forcing and attention forcing for training auto-regressive neural networks
Human-level control through deep reinforcement learning
Recreating the experiments from the classic 2015 DeepMind paper by Mnih et al.: Human-level control through deep reinforcement learning
Revisiting Playing Atari with Deep Reinforcement Learning
Recreating the experiments from the classic DQN DeepMind paper by Mnih et al.: Playing Atari with Deep Reinforcement Learning
