Statistical and Machine Learning Models for Time Series Analysis
Introduction. Components of a time series (trend, cycle, seasonal, irregular), stationarity, autocorrelation and dependencies, approaches to time series analysis. Review of estimation methods (Least Squares, Maximum Likelihood, Generalized Method of Moments).
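As a concrete illustration of the autocorrelation ideas above, the sample ACF of a simulated AR(1) process can be computed in a few lines of NumPy. This is an illustrative sketch, not course material; the function name `sample_acf` is ours:

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelation function up to max_lag (lag 0 equals 1)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / denom
                     for k in range(max_lag + 1)])

# Simulate a stationary AR(1): x_t = 0.8 * x_{t-1} + eps_t
rng = np.random.default_rng(0)
x = np.zeros(5000)
for t in range(1, len(x)):
    x[t] = 0.8 * x[t - 1] + rng.standard_normal()

acf = sample_acf(x, 3)
# For a stationary AR(1) with phi = 0.8, the theoretical ACF at lag k is 0.8**k,
# so the sample values should decay geometrically toward zero.
```

The geometric decay of the sample ACF is the empirical signature of a stationary autoregressive dependence structure.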
Linear models. ARMA processes, partial autocorrelation, invertibility, ARIMA models for non-stationary series. Inference of linear models: identification and fitting, diagnostics, Ljung-Box statistic; model selection. Vector AutoRegressive (VAR) models: reduced form, structural form, and identification issues. Granger causality.
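The Ljung-Box diagnostic mentioned above, Q = n(n+2) Σ_{k=1}^{h} ρ̂_k² / (n−k), can be computed directly from sample autocorrelations. A minimal NumPy sketch (illustrative only), applied to simulated white noise and to an AR(1) series:

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelations at lags 1..max_lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    d = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / d
                     for k in range(1, max_lag + 1)])

def ljung_box(x, h):
    """Ljung-Box Q statistic for the first h autocorrelations.
    Under the white-noise null, Q is approximately chi-squared with h dof."""
    n = len(x)
    rho = acf(x, h)
    return n * (n + 2) * np.sum(rho**2 / (n - np.arange(1, h + 1)))

rng = np.random.default_rng(1)
wn = rng.standard_normal(1000)           # white noise: Q should be moderate
ar = np.zeros(1000)
for t in range(1, 1000):                 # AR(1) with phi = 0.8: Q should be huge
    ar[t] = 0.8 * ar[t - 1] + rng.standard_normal()

q_wn, q_ar = ljung_box(wn, 10), ljung_box(ar, 10)
```

Comparing Q against the chi-squared(h) quantiles is the standard residual-whiteness check after fitting an ARMA model.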
State space models. Filtering, prediction and smoothing; Kalman recursions; local level models. Particle filtering and smoothing, Score Driven models, Hidden Markov Models.
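The Kalman recursions for the local level model, y_t = μ_t + ε_t with μ_t = μ_{t−1} + η_t, reduce to scalar updates and fit in a few lines. A sketch assuming known variances (function name and defaults are ours):

```python
import numpy as np

def local_level_filter(y, sigma2_eps, sigma2_eta, a0=0.0, p0=1e7):
    """Kalman filter for the local level model:
       y_t = mu_t + eps_t,   mu_t = mu_{t-1} + eta_t.
    Returns the filtered state estimates a_{t|t}; p0 large = diffuse prior."""
    a, p = a0, p0
    filtered = []
    for yt in y:
        f = p + sigma2_eps            # prediction-error variance F_t
        k = p / f                     # Kalman gain K_t
        a = a + k * (yt - a)          # filtered state a_{t|t}
        p = p * (1 - k) + sigma2_eta  # predicted variance P_{t+1}
        filtered.append(a)
    return np.array(filtered)

# On a constant series the filtered level should lock onto the observed value.
y = np.full(50, 5.0)
mu_hat = local_level_filter(y, sigma2_eps=1.0, sigma2_eta=0.1)
```

With a diffuse prior (large p0) the first filtered value is essentially the first observation, after which the gain settles to its steady-state value.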
Neural networks for time series. Introduction to (deep) neural networks; inference of time series models with machine learning methods. Overview of time series forecasting with ML and deep learning libraries (TensorFlow, Keras). Recurrent Neural Networks (RNNs), gated architectures (LSTMs, GRUs), bidirectional RNNs, deep RNNs. Reservoir computing and Echo State Networks. Applications and examples.
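Of the architectures above, an Echo State Network is simple enough to sketch end-to-end in NumPy: a fixed random reservoir plus a ridge-regression readout, trained here (illustratively) for one-step-ahead forecasting of a sinusoid. All sizes and scalings are arbitrary choices for the example, not prescriptions:

```python
import numpy as np

rng = np.random.default_rng(42)
n_res, washout = 100, 50

# Input: sine wave; target: its next value (one-step-ahead forecasting).
u = np.sin(0.2 * np.arange(500))
y = u[1:]            # targets shifted by one step
u = u[:-1]

# Fixed random reservoir, rescaled to spectral radius 0.9 (echo state property).
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.uniform(-0.5, 0.5, n_res)

# Drive the reservoir and collect states, discarding the initial transient.
x = np.zeros(n_res)
states = []
for ut in u:
    x = np.tanh(W @ x + w_in * ut)
    states.append(x.copy())
X = np.array(states)[washout:]
Y = y[washout:]

# Only the linear readout is trained, via ridge regression.
lam = 1e-6
w_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ Y)
pred = X @ w_out
mse = np.mean((pred - Y) ** 2)
```

The design point is that the recurrent weights are never trained; learning reduces to a linear least-squares problem on the reservoir states.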
Introduction to Reinforcement Learning.
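As a taste of the reinforcement-learning topic, tabular Q-learning can be sketched on a hypothetical four-state chain MDP, where the agent earns a reward of 1 for reaching the rightmost state. All parameters and the environment itself are illustrative inventions for this example:

```python
import numpy as np

# Chain MDP: states 0..3, actions {0: left, 1: right}, state 3 is terminal
# with reward 1 on arrival. Tabular Q-learning with eps-greedy exploration.
n_states, n_actions, goal = 4, 2, 3
q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.2
rng = np.random.default_rng(0)

for _ in range(500):                     # episodes
    s = 0
    while s != goal:
        # eps-greedy action selection
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(q[s]))
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == goal else 0.0
        # Q-learning update: bootstrap on max Q of the next state (0 if terminal)
        target = r + gamma * (0.0 if s_next == goal else q[s_next].max())
        q[s, a] += alpha * (target - q[s, a])
        s = s_next
```

After training, the greedy policy moves right in every non-terminal state, and the Q-values decay geometrically with distance from the goal (Q(2, right) ≈ 1, Q(1, right) ≈ 0.9, Q(0, right) ≈ 0.81).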
The objective of the course is to present the main elements of the theory of time series analysis, drawing on methods from statistics, econometrics, and machine learning. The course also provides working knowledge for the computational modeling of empirical time series and for the simulation and inference of statistical models.
J. Hamilton, Time Series Analysis, Princeton University Press
J. Durbin and S.J. Koopman, Time Series Analysis by State Space Methods, Oxford University Press
F. Lazzeri, Machine Learning for Time Series Forecasting with Python, Wiley
Additional material (notes, slides, papers) will be provided during the course