Stochastic Volatility model¶
[1]:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pymc3 as pm
np.random.seed(0)
[2]:
plt.rcParams.update(
    {
        "axes.prop_cycle": plt.cycler(
            "color",
            [
                "#000000",
                "#1b6989",
                "#e69f00",
                "#009e73",
                "#f0e442",
                "#50b4e9",
                "#d55e00",
                "#cc79a7",
            ],
        ),
        "font.serif": [
            "Palatino",
            "Palatino Linotype",
            "Palatino LT STD",
            "Book Antiqua",
            "Georgia",
            "DejaVu Serif",
        ],
        "font.family": "serif",
        "figure.facecolor": "#fffff8",
        "axes.facecolor": "#fffff8",
        "figure.constrained_layout.use": True,
        "font.size": 14.0,
        "hist.bins": "auto",
        "lines.linewidth": 1.0,
    }
)
Asset prices have time-varying volatility (variance of day-over-day returns). In some periods returns are highly variable, while in others they are very stable. Stochastic volatility models capture this with a latent volatility variable that is itself modeled as a stochastic process. The following model is similar to the one described in the No-U-Turn Sampler paper (Hoffman & Gelman, 2011, p. 21). Matching the code below, it can be written as

\[\sigma \sim \text{Exponential}(10)\]
\[\nu \sim \text{Exponential}(0.1)\]
\[s_i \sim \text{Normal}(s_{i-1}, \sigma^2)\]
\[r_i \sim \text{StudentT}(\nu, 0, \lambda_i), \quad \lambda_i = \exp(-2 s_i)\]

where \(\lambda_i\) is the precision of the Student-t distribution. Here, \(r\) is the daily return series and \(s\) is the latent log-volatility process.
Build Model¶
First we load the daily closing prices of the S&P 500 and calculate the daily log returns. The data cover May 2008 to November 2019.
[3]:
returns = pd.read_csv(pm.get_data("SP500.csv"), index_col="Date")
returns["change"] = np.log(returns["Close"]).diff()
returns = returns.dropna()
returns.head()
[3]:
Date       | Close       | change
-----------|-------------|----------
2008-05-05 | 1407.489990 | -0.004544
2008-05-06 | 1418.260010 | 0.007623
2008-05-07 | 1392.569946 | -0.018280
2008-05-08 | 1397.680054 | 0.003663
2008-05-09 | 1388.280029 | -0.006748
As the plot below shows, the volatility changes over time quite a bit, and high-volatility days cluster in certain periods. For example, the 2008 financial crisis is easy to pick out.
[4]:
fig, ax = plt.subplots(figsize=(14, 4))
returns.plot(y="change", label="S&P 500", ax=ax)
ax.set(xlabel="time", ylabel="returns")
ax.legend();

Specifying the model in PyMC3 mirrors its statistical specification.
[5]:
def make_stochastic_volatility_model(data):
    with pm.Model() as model:
        # Scale of the volatility innovations (sigma in the model above)
        step_size = pm.Exponential("step_size", 10)
        # Latent log volatility s follows a Gaussian random walk
        volatility = pm.GaussianRandomWalk("volatility", sigma=step_size, shape=len(data))
        # Degrees of freedom of the Student-t returns
        nu = pm.Exponential("nu", 0.1)
        # Observed returns, with precision exp(-2 * s)
        returns = pm.StudentT(
            "returns", nu=nu, lam=np.exp(-2 * volatility), observed=data["change"]
        )
    return model


stochastic_vol_model = make_stochastic_volatility_model(returns)
Checking the model¶
Two good things to do to make sure our model is what we expect are:

1. Take a look at the model structure. This lets us confirm that we specified the priors we wanted and the connections we wanted. It is also handy to remind ourselves of the sizes of the random variables.
2. Take a look at the prior predictive samples. This helps us interpret what our priors imply about the data.
[6]:
pm.model_to_graphviz(stochastic_vol_model)
[6]:
[7]:
with stochastic_vol_model:
    prior = pm.sample_prior_predictive(500)
We plot and inspect the prior predictive below. It is many orders of magnitude larger than the actual returns we observed. In fact, I cherry-picked a few draws to keep the plot from looking silly. This may suggest changing our priors: a return that our model considers plausible would violate all sorts of constraints by a huge margin; the total value of all goods and services the world produces is on the order of \(\$10^{14}\), so we might reasonably not expect any returns above that magnitude.

That said, we get somewhat reasonable results fitting this model anyway, and it is standard, so we leave it as is.
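If we did want a tighter prior predictive, one hypothetical adjustment is a larger rate on the step_size prior, which shrinks the scale of the prior volatility walk and hence the magnitude of simulated returns. This is only a sketch of the idea (the rate of 50 is an arbitrary choice), not the model we fit below:

def make_tighter_prior_model(data):
    with pm.Model() as model:
        # Hypothetical: Exponential(50) has prior mean 0.02 rather than 0.1,
        # so the prior volatility walk drifts far less.
        step_size = pm.Exponential("step_size", 50)
        volatility = pm.GaussianRandomWalk("volatility", sigma=step_size, shape=len(data))
        nu = pm.Exponential("nu", 0.1)
        pm.StudentT("returns", nu=nu, lam=np.exp(-2 * volatility), observed=data["change"])
    return model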
[8]:
fig, ax = plt.subplots(figsize=(14, 4))
returns["change"].plot(ax=ax, lw=1, color="black")
ax.plot(prior["returns"][4:6].T, "g", alpha=0.5, lw=1, zorder=-10)
max_observed, max_simulated = np.max(np.abs(returns["change"])), np.max(np.abs(prior["returns"]))
ax.set_title(f"Maximum observed: {max_observed:.2g}\nMaximum simulated: {max_simulated:.2g}(!)");

Fit Model¶
Once we are happy with our model, we can sample from the posterior. This is a somewhat tricky model to fit even with NUTS, so we sample and tune a little longer than default.
[9]:
with stochastic_vol_model:
    trace = pm.sample(2000, tune=2000)
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [nu, volatility, step_size]
Sampling 4 chains for 2_000 tune and 2_000 draw iterations (8_000 + 8_000 draws total) took 525 seconds.
The rhat statistic is larger than 1.05 for some parameters. This indicates slight problems during sampling.
The estimated number of effective samples is smaller than 200 for some parameters.
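Before moving on, it can help to look at the numbers behind these warnings. A minimal check, assuming the trace from above (pm.summary delegates to ArviZ in this version of PyMC3):

with stochastic_vol_model:
    # r_hat near 1.0 and large ess_* values indicate good mixing;
    # step_size is the parameter most likely to look problematic here.
    print(pm.summary(trace, var_names=["step_size", "nu"]))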
[10]:
with stochastic_vol_model:
    posterior_predictive = pm.sample_posterior_predictive(trace)
Note that the step_size parameter does not look perfect: the different chains look somewhat different. This again indicates some weakness in our model: it may make sense to allow the step size to change over time, especially over this 11-year span (see the sketch after the trace plot below).
[11]:
pm.traceplot(trace, var_names=["step_size", "nu"]);
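One hypothetical way to relax the constant step-size assumption is to let the log of the step size follow its own slow random walk, so the innovation scale of the volatility process can drift over the years. This is a sketch of the idea only (the prior scale of 0.05 is an arbitrary choice, and this doubly-stochastic model would likely be even harder to sample):

import theano.tensor as tt


def make_time_varying_step_model(data):
    with pm.Model() as model:
        # The log step size drifts slowly over time.
        log_step = pm.GaussianRandomWalk("log_step", sigma=0.05, shape=len(data))
        # Build the volatility walk by hand: scale standard-normal
        # innovations by the time-varying step size, then accumulate.
        innovations = pm.Normal("innovations", 0.0, 1.0, shape=len(data))
        volatility = pm.Deterministic("volatility", tt.cumsum(innovations * tt.exp(log_step)))
        nu = pm.Exponential("nu", 0.1)
        pm.StudentT("returns", nu=nu, lam=pm.math.exp(-2 * volatility), observed=data["change"])
    return model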

Now we can look at our posterior estimates of the volatility in S&P 500 returns over time.
[12]:
fig, ax = plt.subplots(figsize=(14, 4))
y_vals = np.exp(trace["volatility"])[::5].T
x_vals = np.vstack([returns.index for _ in y_vals.T]).T.astype(np.datetime64)
ax.plot(x_vals, y_vals, "k", alpha=0.002)
ax.set_xlim(x_vals.min(), x_vals.max())
ax.set_ylim(bottom=0)
ax.set(title="Estimated volatility over time", xlabel="Date", ylabel="Volatility");

Finally, we can use the posterior predictive distribution to see how the learned volatility could have affected returns.
[13]:
fig, axes = plt.subplots(nrows=2, figsize=(14, 7), sharex=True)
returns["change"].plot(ax=axes[0], color="black")
axes[1].plot(np.exp(trace["volatility"][::100].T), "r", alpha=0.5)
axes[0].plot(posterior_predictive["returns"][::100].T, "g", alpha=0.5, zorder=-10)
axes[0].set_title("True log returns (black) and posterior predictive log returns (green)")
axes[1].set_title("Posterior volatility");
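The spaghetti plots above show individual posterior draws; a more compact presentation is the posterior median volatility with a credible band. A minimal sketch, assuming the trace from above (the 90% band is an arbitrary choice):

vol_draws = np.exp(trace["volatility"])
lower, median, upper = np.percentile(vol_draws, [5, 50, 95], axis=0)

fig, ax = plt.subplots(figsize=(14, 4))
dates = pd.to_datetime(returns.index)
ax.fill_between(dates, lower, upper, color="k", alpha=0.2, label="90% interval")
ax.plot(dates, median, "k", lw=1, label="posterior median")
ax.set(title="Posterior volatility, summarized", xlabel="Date", ylabel="Volatility")
ax.legend();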

References¶
Hoffman, M. D., & Gelman, A. (2011). The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo. arXiv:1111.4246.
[14]:
%load_ext watermark
%watermark -n -u -v -iv -w
pymc3 3.9.0
pandas 1.0.4
numpy 1.18.5
last updated: Mon Jun 15 2020
CPython 3.7.7
IPython 7.15.0
watermark 2.0.2