# Introduction to dynamical systems

This is the start of a brief, mostly non-technical introduction to dynamical systems, intended to make (future) behavioral medicine work that uses them readable. This won’t be enough to do anything serious in the area, but it should give an understanding of the more basic kinds of dynamical models and how they can be used.

A dynamical system model describes how some system (whatever we’re studying) changes over time by specifying two things: a state space, which describes the system at one instant in time, and an evolution rule, which describes how the state changes given the current state. The state may be only partially observable, and the evolution rule may be stochastic (partially random).

An intuitive example of a dynamical system (and the original example, from which the field developed) is classical Newtonian physics. For simple, idealized objects, the state space includes the position, momentum, and mass of all objects, and the evolution rule is given by Newton’s laws of motion and gravitation.

One basic distinction between different dynamical systems is how time is defined. Time can be discrete (time 1, time 2, etc.) or it can be continuous. Discrete-time dynamical systems are often thought of as “time series” models, and are particularly heavily used in econometrics. In a discrete-time system, the evolution rule is a function from states to states: $$y_t = f(y_{t-1})$$. In a continuous-time system, the evolution is a differential equation describing the instantaneous change in state.
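To make the discrete-time idea concrete, here’s a minimal sketch of iterating an evolution rule $$y_t = f(y_{t-1})$$ forward in time. The particular rule `f` is an arbitrary linear map chosen for illustration; it isn’t from any specific application.

```python
def f(y):
    # An arbitrary illustrative evolution rule: a linear map.
    return 0.5 * y + 1.0

def iterate(f, y0, n_steps):
    """Run a discrete-time dynamical system forward n_steps from state y0,
    returning the whole trajectory [y0, y1, ..., y_{n_steps}]."""
    trajectory = [y0]
    for _ in range(n_steps):
        trajectory.append(f(trajectory[-1]))
    return trajectory

traj = iterate(f, y0=0.0, n_steps=20)
```

Running this, the trajectory settles toward the fixed point $$y^* = 2$$ (the state where $$y^* = 0.5 y^* + 1$$, so applying the rule leaves it unchanged).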

I’m going to focus on discrete-time systems here. For applying dynamical systems to behavioral interventions, prior work hasn’t really settled on which is more appropriate or useful, although much of the most recent work has used continuous-time systems. Discrete-time systems are a bit simpler to get your head around at first, since they avoid heavy use of calculus, and they’re what I’ve been working with lately.

A linear dynamical system means that the evolution rule ($$f(y_{t-1})$$ above) is a linear function of the current state.

The simplest linear discrete-time dynamical system is the first-order autoregressive model, usually called the “AR(1)” model. In the AR(1) model, the observation $$y_t$$ depends on the previous observation $$y_{t-1}$$:

$\begin{gather} y_t = \alpha + \rho y_{t-1} + \epsilon_t \\ \epsilon_t \sim N(0, \sigma) \end{gather}$

That is, the evolution rule says that the state of the system is a linear function of the previous state (with intercept $$\alpha$$ and slope $$\rho$$), plus normally-distributed noise.
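A quick way to get a feel for the AR(1) model is to simulate it. The sketch below implements the equation above directly; the parameter values are arbitrary choices for illustration.

```python
import random

def simulate_ar1(alpha, rho, sigma, y0, n_steps, seed=0):
    """Simulate an AR(1) process: y_t = alpha + rho * y_{t-1} + eps_t,
    with eps_t drawn from a normal distribution with standard deviation sigma."""
    rng = random.Random(seed)
    y = [y0]
    for _ in range(n_steps):
        eps = rng.gauss(0.0, sigma)
        y.append(alpha + rho * y[-1] + eps)
    return y

# Arbitrary illustrative parameters.
series = simulate_ar1(alpha=1.0, rho=0.5, sigma=0.2, y0=0.0, n_steps=500)
```

With $$|\rho| < 1$$, the simulated series fluctuates around a long-run mean of $$\alpha / (1 - \rho)$$ (here, $$1.0 / 0.5 = 2$$), which previews the stationarity discussion below.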

This can be extended in a straightforward way to higher-order autoregressive models, called “AR(n)” models, where each observation depends on the $$n$$ previous observations. For example, an AR(2) model is written as:

$y_t = \alpha + \rho_1 y_{t-1} + \rho_2 y_{t-2} + \epsilon_t$
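The same simulation idea extends to AR(n) by keeping the last $$n$$ observations around. This is a sketch with arbitrary illustrative parameters, not code from any particular analysis.

```python
import random

def simulate_arn(alpha, rhos, sigma, y_init, n_steps, seed=0):
    """Simulate an AR(n) process: y_t = alpha + sum_i rhos[i-1] * y_{t-i} + eps_t.

    rhos[0] is the coefficient on y_{t-1}, rhos[1] on y_{t-2}, and so on.
    y_init supplies the n starting values, oldest first."""
    rng = random.Random(seed)
    y = list(y_init)
    n = len(rhos)
    for _ in range(n_steps):
        past = y[-n:]  # last n observations, oldest first
        # reversed(past) pairs rhos[0] with y_{t-1}, rhos[1] with y_{t-2}, ...
        mean = alpha + sum(r * yv for r, yv in zip(rhos, reversed(past)))
        y.append(mean + rng.gauss(0.0, sigma))
    return y

# AR(2) example: y_t = 0.5 + 0.4 * y_{t-1} + 0.2 * y_{t-2} + eps_t
series = simulate_arn(alpha=0.5, rhos=[0.4, 0.2], sigma=0.1,
                      y_init=[0.0, 0.0], n_steps=300)
```

For these coefficients the process is stationary, fluctuating around the long-run mean $$\alpha / (1 - \rho_1 - \rho_2) = 0.5 / 0.4 = 1.25$$.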

Next up is a brief discussion of the properties of AR(n) models, and what it means for a model to be stationary.