Category Archives: Stochastic Calculus

A modified Wright-Fisher Model

 

Consider the ODE

\[ \dot x_t = x_t(1-x_t)\]

and the SDE

\[dX_t = X_t(1-X_t) dt + \sqrt{X_t(1-X_t)} dW_t\]

  1. Argue that \(x_t\) cannot leave the interval \([0,1]\) if \( x_0 \in (0,1)\).
  2. What is the behavior of \(x_t\) as \(t \rightarrow\infty\) if \( x_0\in (0,1)\)?
  3. Can the diffusion \(X_t\) exit the interval \((0,1)\)? Prove your claims.
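One way to build intuition for part 3 is to simulate the SDE with the Euler–Maruyama scheme discussed later in this archive. Below is a minimal sketch (assuming NumPy is available); the clamping of the square-root argument and the clipping back into \([0,1]\) are artifacts of the discrete scheme, not claims about the true diffusion.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_wf(x0=0.5, h=1e-4, T=5.0):
    """Euler-Maruyama path of dX = X(1-X) dt + sqrt(X(1-X)) dW.

    The discrete scheme can step slightly outside [0,1], where the square
    root is undefined, so we clamp and clip; this is a simulation artifact,
    not a statement about the true process."""
    n = round(T / h)
    x = x0
    path = np.empty(n + 1)
    path[0] = x
    for k in range(n):
        drift = x * (1.0 - x)
        diff = np.sqrt(max(x * (1.0 - x), 0.0))  # clamp the sqrt argument
        x = x + drift * h + diff * np.sqrt(h) * rng.standard_normal()
        x = min(max(x, 0.0), 1.0)  # clip back into [0,1]
        path[k + 1] = x
    return path

path = simulate_wf()
print(path.min(), path.max())
```

Plotting several such paths suggests what happens near the endpoints; the exercise, of course, asks for a proof.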

No Explosions from Diffusion

Consider the following ODE and SDE:

\[\dot x_t = x^2_t \qquad x_0 >0\]

\[d X_t = X^2_t dt + \sigma |X_t|^\alpha dW_t\qquad X_0 >0\]

where \(\alpha >0\) and \(\sigma >0\).

  1. Show that \(x_t\) blows up in finite time.
  2. Find the values of \(\sigma\) and \(\alpha\) for which \(X_t\) does not explode (run off to infinity).

[From Klebaner, ex. 6.12]
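For part 1, one candidate route is separation of variables, which suggests \(x_t = x_0/(1-x_0 t)\) with blow-up at \(t^* = 1/x_0\). A quick numerical sanity check of this picture (plain Euler for the ODE, nothing beyond the problem statement assumed):

```python
def x_exact(t, x0=1.0):
    # Candidate solution of dx/dt = x^2 by separation of variables;
    # it blows up at t* = 1/x0.
    return x0 / (1.0 - x0 * t)

# Plain Euler integration of dx/dt = x^2 from x0 = 1: the numerical
# solution grows without bound as t approaches t* = 1.
x, t, h = 1.0, 0.0, 1e-5
while x < 1e6:
    x += x * x * h
    t += h
print(t)
```

The Euler trajectory crosses any large threshold at a time just past \(t^* = 1\), consistent with finite-time blow-up.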

Cox–Ingersoll–Ross model

The following SDE has been suggested as a model for interest rates:

\[ dr_t = a(b-r_t)dt +  \sigma \sqrt{r_t} dW_t\]

for \(r_t \in \mathbf R\), \(r_0 >0\) and constants \(a\),\(b\), and \(\sigma\).

  1. Find a closed form expression for \(\mathbf E( r_t)\).
  2. Find a closed form expression  for \(\mathrm{Var}(r_t)\).
  3. Characterize the values of the parameters \(a\), \(b\), and \(\sigma\) for which \(r=0\) is an absorbing point.
  4. What is the nature of the boundary at \(0\) for other values of the parameters?
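A Monte Carlo sanity check can accompany part 1. Taking expectations in the integrated SDE kills the \(dW\) term (assuming enough integrability), leaving the ODE \(m'(t) = a(b - m(t))\) for \(m(t)=\mathbf E(r_t)\). The sketch below (assuming NumPy; the parameter values are arbitrary illustrations, not from the problem) compares the solution of that ODE with an Euler–Maruyama ensemble.

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary illustrative parameters (not part of the problem statement)
a, b, sigma, r0 = 1.0, 2.0, 0.5, 1.0
h, T, n_paths = 1e-3, 1.0, 20000

r = np.full(n_paths, r0)
for _ in range(round(T / h)):
    # Clamp the sqrt argument: the discrete scheme can dip below zero
    # even when the true process does not.
    r = r + a * (b - r) * h + sigma * np.sqrt(np.maximum(r, 0.0) * h) * rng.standard_normal(n_paths)

mc_mean = r.mean()
ode_mean = b + (r0 - b) * np.exp(-a * T)  # solution of m'(t) = a(b - m(t)), m(0) = r0
print(mc_mean, ode_mean)
```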

 

SDE Example: quadratic geometric BM

Show that the solution \(X_t\) of

\[ dX_t=X_t^2 dt + X_t dB_t\]

where \(X_0=1\) and \(B_t\) is a standard Brownian motion, has the representation

\[ X_t = \exp\Big( \int_0^t X_s ds -\frac12 t + B_t\Big)\]

Practice with Ito and Integration by parts

Define

\[ X_t =X_0 + \int_0^t B_s dB_s\]

where \(B_t\) is a standard Brownian Motion. Show that \(X_t\) can also be written

\[ X_t=X_0 + \frac12 (B^2_t -t)\]
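A numerical check of the identity (assuming NumPy): approximating the Ito integral by left-endpoint Riemann sums along one sampled Brownian path, the two expressions agree up to discretization error.

```python
import numpy as np

rng = np.random.default_rng(2)

n, T, X0 = 200000, 1.0, 0.0
h = T / n
dB = np.sqrt(h) * rng.standard_normal(n)
B = np.concatenate([[0.0], np.cumsum(dB)])  # Brownian path with B_0 = 0

# Ito integral as a left-endpoint sum: sum_k B_{t_k} (B_{t_{k+1}} - B_{t_k})
ito_sum = X0 + np.sum(B[:-1] * dB)
closed_form = X0 + 0.5 * (B[-1] ** 2 - T)
print(ito_sum, closed_form)
```

The left-endpoint evaluation matters: a midpoint or right-endpoint sum would converge to a different (Stratonovich or backward) integral.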

 

Discovering the Bessel Process

Let \(W_t=(W^{(1)}_t,\dots,W^{(n)}_t) \) be an \(n\)-dimensional Brownian motion, where the \( W^{(i)}_t\) are independent standard one-dimensional Brownian motions and \(n \geq 2\).

Let
\[X_t = \|W_t\| = \Big(\sum_{i=1}^n  (W^{(i)}_t)^2\Big)^{\frac12}\]
be the norm of the Brownian motion. Even though the norm is not differentiable at the origin, we can still apply Ito's formula, since Brownian motion almost surely never visits the origin when the dimension is at least two.

  1. Use Ito’s formula to show that \(X_t\) satisfies the Ito process
    \[  dX_t = \frac{n-1}{2 X_t} dt + \sum_{i=1}^n \frac{W^{(i)}_t }{X_t} dW^{(i)}_t \]
  2. Using the Levy-Doob Theorem show that
    \[Z_t =\sum_{i=1}^n  \int_0^t \frac{W^{(i)}_s }{X_s} dW^{(i)}_s \]
    is a standard Brownian Motion.
  3. In light of the above discussion, argue that \(X_t\) and \(Y_t\) have the same distribution if \(Y_t\) is defined by
    \[ dY_t = \frac{n-1}{2 Y_t} dt + dB_t\]
    where \(B_t\) is a standard Brownian Motion.

Take a moment to reflect on what has been shown. \(W_t\) is an \(\mathbf R^n\)-valued Markov process. However, there is no guarantee that the one-dimensional process \(X_t\) will again be a Markov process, much less a diffusion. The above calculation shows that the distribution of \(X_{t+h}\) is determined completely by \(X_t\); in particular, \(X_t\) solves a one-dimensional SDE. We were sure that \(X_t\) would be an Ito process, but we had no guarantee that it could be written as a single closed SDE (namely, that the coefficients would be functions of \(X_t\) alone and not of the details of the \(W^{(i)}_t\)'s).
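The claim in part 3 can also be probed numerically. The sketch below (assuming NumPy; the starting radius, step size, and the reflection guard are arbitrary simulation choices) simulates \(\|w_0 + W_t\|\) in \(n=3\) dimensions and the one-dimensional SDE for \(Y_t\) from the same starting radius, then compares sample means.

```python
import numpy as np

rng = np.random.default_rng(3)

n_dim, n_paths, T, h = 3, 20000, 1.0, 1e-3
steps = round(T / h)

# X_T = ||w0 + W_T|| with W an n-dim Brownian motion and ||w0|| = 1
W = np.zeros((n_paths, n_dim))
W[:, 0] = 1.0  # start at w0 = (1, 0, ..., 0)
for _ in range(steps):
    W += np.sqrt(h) * rng.standard_normal((n_paths, n_dim))
X_T = np.linalg.norm(W, axis=1)

# Euler-Maruyama for dY = (n-1)/(2Y) dt + dB, Y_0 = 1
Y = np.ones(n_paths)
for _ in range(steps):
    Y = Y + (n_dim - 1) / (2.0 * Y) * h + np.sqrt(h) * rng.standard_normal(n_paths)
    Y = np.abs(Y)  # guard: reflect the rare discrete step that lands below zero

print(X_T.mean(), Y.mean())
```

If \(X_t\) and \(Y_t\) really have the same law, the two sample means (and, with more work, the whole histograms) should agree up to Monte Carlo and discretization error.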

One dimensional stationary measure

Consider the one dimensional SDE

\[dX_t = f(X_t) dt + g(X_t) dW_t\]

which we assume has a unique global in time solution. For simplicity let us assume that there is a positive constant \(c\) so that \( 1/c < g(x)<c\) for all \(x\) and that \(f\) and \(g\) are smooth.

A stationary measure for the problem is a probability measure \(\mu\) such that if the initial condition \(X_0\) is distributed according to \(\mu\) and is independent of the Brownian motion \(W\), then \(X_t\) will be distributed according to \(\mu\) for every \(t \geq 0\).

If the functions \(f\) and \(g\) are “nice” then the distribution at time \(t\) has a density with respect to Lebesgue measure (“dx”); that is, there is a function \(p_x(t,y)\) so that for any \(\phi\)

\[\mathbf E_x \phi(X_t) = \int_{-\infty}^\infty p_x(t,y)\phi(y) dy\]

and, when the initial condition \(X_0\) has a density \(\phi\) instead of being started at a point, the corresponding density \(p_\phi(t,y)\) solves the following equation

\[\frac{\partial p_\phi}{\partial t}(t,y) = (L^* p_\phi)(t,y)\]

with \( p_\phi(0,y) = \phi(y)\), where \(\phi\) is the density with respect to Lebesgue measure of the initial condition (the pdf of \(X_0\)).

\(L^*\) is the formal adjoint of the generator \(L\) of \(X_t\) and is defined by

\[(L^*\phi)(y) =  - \frac{\partial\ }{\partial y}( f \phi)(y) + \frac12 \frac{\partial^2\ }{\partial y^2}( g^2 \phi)(y) \]

Since we want \(p_\phi(t,y)\) not to change when it is evolved forward with the above equation, we want \( \frac{\partial p_\phi}{\partial t}=0\), or in other words

\[(L^* p_\phi)(t,y) =0\]

  1. Let \(F\) be such that \(-F' = f/g^2\). Show that \[ \rho(y)=\frac{K}{g^2(y)}\exp\Big( -2F(y) \Big)\] is an invariant density where \(K\) is a normalization constant which ensures that
    \[\int \rho(y) dy =1\]
  2. Find the stationary measure for each of the following SDEs:
    \[dX_t = (X_t - X^3_t) dt + \sqrt{2} dW_t\]
    \[dX_t = -F'(X_t) dt + \sqrt{2} dW_t\]
  3. Assuming that the formula derived above makes sense more generally, compare the invariant measures of
    \[ dX_t = -X_t dt + dW_t\]
    and
    \[ dX_t = -\mathrm{sign}(X_t) dt + \frac{1}{\sqrt{|X_t|}} dW_t\]
  4. Again proceeding formally, assuming everything is well defined and makes sense, find the stationary density of \[dX_t = -2\frac{\mathrm{sign}(X_t)}{|X_t|} dt + \sqrt{2} dW_t\]
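The recipe from part 1 can be checked numerically on the first SDE of part 2. There \(f(x) = x - x^3\) and \(g^2 = 2\), so \(-2F(x) = x^2/2 - x^4/4\) and the candidate density is \(\rho(x) \propto \exp(x^2/2 - x^4/4)\). The sketch below (assuming NumPy; the grid, horizon, and ensemble size are arbitrary choices) compares the second moment of \(\rho\) against a long-run Euler–Maruyama ensemble.

```python
import numpy as np

rng = np.random.default_rng(4)

# Candidate stationary density from part 1: rho(x) ∝ exp(x^2/2 - x^4/4)
x = np.linspace(-4.0, 4.0, 4001)
dx = x[1] - x[0]
w = np.exp(x**2 / 2.0 - x**4 / 4.0)
rho = w / (w.sum() * dx)               # normalize on the grid
second_moment = np.sum(x**2 * rho) * dx

# Long-run Euler-Maruyama ensemble for dX = (X - X^3) dt + sqrt(2) dW
n_paths, h, T = 5000, 1e-3, 20.0
X = np.zeros(n_paths)
for _ in range(round(T / h)):
    X = X + (X - X**3) * h + np.sqrt(2.0 * h) * rng.standard_normal(n_paths)

print(second_moment, np.mean(X**2))
```

If the formula is right, the ensemble at large time should match the moments (and histogram) of \(\rho\) up to Monte Carlo and discretization error.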


Numerical SDEs – Euler–Maruyama method

If we wanted to simulate the SDE

\[dX_t = b(X_t) dt + \sigma(X_t) dW_t\]
on a computer, then we really want to approximate the associated integral equation over a short time interval of length \(h\). Namely,
\[X_{t+h} - X_t= \int_t^{t+h}b(X_s) ds + \int_t^{t+h}\sigma(X_s) dW_s\]

It is reasonable for \(s \in [t,t+h]\) to use the   approximation
\begin{align}b(X_s) \approx b(X_t) \qquad\text{and}\qquad\sigma(X_s)\approx \sigma(X_t)\end{align}

which implies

\[X_{t+h} - X_t\approx b(X_t) h + \sigma(X_t) \big( W_{t+h}-W_{t}\big)\]

Since  \(W_{t+h}-W_{t}\) is a Gaussian random variable with mean zero and variance \(h\), this discussion suggests the following numerical scheme:

\[ X_{n+1} = X_n + b(X_n) h + \sigma(X_n) \sqrt{h} \eta_n\]

where \(h\) is the time step and the \(\{ \eta_n : n=0,1,\dots\}\) are a collection of mutually independent standard Gaussian random variables (i.e. mean zero and variance 1). This is called the Euler–Maruyama method.
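A minimal implementation of the scheme above (assuming NumPy; the drift and diffusion are passed in as callables):

```python
import numpy as np

def euler_maruyama(b, sigma, x0, h, n_steps, rng):
    """Iterate X_{n+1} = X_n + b(X_n) h + sigma(X_n) sqrt(h) eta_n."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for n in range(n_steps):
        eta = rng.standard_normal()  # independent N(0,1) draws
        x[n + 1] = x[n] + b(x[n]) * h + sigma(x[n]) * np.sqrt(h) * eta
    return x

# Example: dX = -X dt + dW (the first SDE in the exercises), started at X_0 = 2
rng = np.random.default_rng(0)
path = euler_maruyama(lambda x: -x, lambda x: 1.0, x0=2.0, h=1e-3, n_steps=5000, rng=rng)
print(path[-1])
```

Replacing the two lambdas is all that is needed for the other exercises below.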

Use this method to numerically approximate (and plot) several trajectories of the following SDEs.

  1. \[dX_t = -X_t dt + dW_t\]
  2. For \(r=-1, 0, 1/4, 1/2, 1\)
    \[ dX_t = r X_t dt + X_t dW_t\]
    Does the solution head off to infinity or approach zero? Look at different small values of \(h\). Does the solution go negative? Should it?
  3. \[dX_t = X_t dt -X_t^3 dt +  \alpha dW_t\]
    Try different values of \(\alpha\), for example \(\alpha = 1/10, 1, 2, 10\). How does the solution behave? What is its long-time behavior? How does it compare with what is learned about the stationary measure of this equation in the “One dimensional stationary measure” problem?


BDG Inequality

Consider \(I(t)\) defined by \[I(t)=\int_0^t \sigma(s,\omega)dB(s,\omega)\] where \(\sigma\) is adapted and \(|\sigma(t,\omega)| \leq K\) for all \(t\) with probability one. Inspired by the problem “Homogeneous Martingales and Hermite Polynomials”, let us set
\begin{align*}Y(t,\omega)=I(t)^4 – 6 I(t)^2\langle I \rangle(t) + 3 \langle I \rangle(t)^2 \ .\end{align*}

  1. Quote the problem “Ito Moments” to show that \(\mathbb{E}\{ |Y(t)|^2\} < \infty\) for all \(t\). Then verify that \(Y(t)\) is a martingale.
  2. Show that \[\mathbb{E}\{ I(t)^4 \} \leq 6 \mathbb{E} \big\{ I(t)^2\langle I \rangle(t) \big\}\]
  3. Recall the Cauchy–Schwarz inequality. In our language it states that
    \begin{align*}
    \mathbb{E} \{AB\} \leq (\mathbb{E}\{A^2\})^{1/2} (\mathbb{E}\{B^2\})^{1/2}
    \end{align*}
    Combine this with the previous inequality to show that
    \begin{align*}\mathbb{E}\{ I(t)^4 \} \leq 36 \mathbb{E} \big\{\langle I \rangle(t)^2 \big\} \end{align*}
  4. We know that \(I(t)^4\) is a submartingale (because \(x \mapsto x^4\) is convex). Use the Kolmogorov–Doob inequality and all that we have just derived to show that
    \begin{align*}
    \mathbb{P}\left\{ \sup_{0\leq s \leq T}|I(s)|^4 \geq \lambda \right\} \leq ( \text{const}) \frac{ \mathbb{E}\left( \int_0^T \sigma(s,\omega)^2 ds\right)^2 }{\lambda}
    \end{align*}
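The moment bound from part 3 can be seen in action with a Monte Carlo sketch (assuming NumPy, and with the arbitrary bounded adapted choice \(\sigma(s,\omega)=\cos(B_s)\), so \(K=1\)); it compares the two sides of \(\mathbb{E}\{I(T)^4\} \leq 36\,\mathbb{E}\{\langle I\rangle(T)^2\}\) at a fixed time:

```python
import numpy as np

rng = np.random.default_rng(5)

n_paths, n_steps, T = 20000, 1000, 1.0
h = T / n_steps

I = np.zeros(n_paths)  # I(t) = int_0^t cos(B_s) dB_s, left-endpoint sums
Q = np.zeros(n_paths)  # <I>(t) = int_0^t cos(B_s)^2 ds
B = np.zeros(n_paths)
for _ in range(n_steps):
    dB = np.sqrt(h) * rng.standard_normal(n_paths)
    I += np.cos(B) * dB        # evaluate the integrand at the left endpoint (adapted)
    Q += np.cos(B) ** 2 * h
    B += dB

lhs = np.mean(I**4)
rhs = 36.0 * np.mean(Q**2)
print(lhs, rhs)
```

The constant 36 is far from sharp here; the point of the problem is the structure of the bound, not the constant.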

Homogeneous Martingales and Hermite Polynomials

  1. Let \(f(x,y):\mathbb{R}^2 \rightarrow \mathbb{R}\) be a twice differentiable function of both \(x\) and \(y\). Let \(M(t)\) be defined by \[M(t)=\int_0^t \sigma(s,\omega) dB(s,\omega)\] where \(B(t)\) is a standard Brownian motion, \(\sigma(t,\omega)\) is adapted, and \(\mathbf E\int_0^t \sigma(s,\omega)^2 ds < \infty\) for all \(t\) (so that \(\mathbb{E}\{M(t)^2\} < \infty\) for all \(t\)). Let \(\langle M \rangle(t)\) be the quadratic variation process of \(M(t)\). What equation does \(f\) have to satisfy so that \(Y(t)=f(M(t),\langle M \rangle(t))\) is again a martingale?
  2. Set
    \begin{align*}
    f_n(x,y) = \sum_{0 \leq m \leq \lfloor n/2 \rfloor} C_{n,m} x^{n-2m}y^m
    \end{align*}
    here \(\lfloor n/2 \rfloor\) is the largest integer less than or equal to \(n/2\). Set \(C_{n,0}=1\) for all \(n\). Then find a recurrence relation for \(C_{n,m+1}\) in terms of \(C_{n,m}\) so that \(Y(t)=f_n(B(t),t)\) will be a martingale. Write out explicitly \(f_1(B(t),t), \cdots, f_4(B(t),t)\) as defined in the previous item.
  3. * Do you recognize the recursion relation you obtained above as being associated to a famous recursion relation? (Hint: look at the title of the problem.)
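If the martingale condition from part 1 turns out to be \(\frac{\partial f}{\partial y} + \frac12 \frac{\partial^2 f}{\partial x^2} = 0\) (the natural guess from Ito's formula), then matching powers of \(x\) and \(y\) in \(f_n\) forces \(C_{n,m+1} = -\frac{(n-2m)(n-2m-1)}{2(m+1)} C_{n,m}\). A small script to generate the coefficients under that assumption (exact arithmetic via the standard library):

```python
from fractions import Fraction

def hermite_coeffs(n):
    """Coefficients C_{n,m} of f_n(x,y) = sum_m C_{n,m} x^{n-2m} y^m,
    with C_{n,0} = 1, chosen so that f_y + (1/2) f_xx = 0."""
    C = [Fraction(1)]
    for m in range(n // 2):
        k = n - 2 * m  # current power of x
        C.append(-Fraction(k * (k - 1), 2 * (m + 1)) * C[m])
    return C

for n in range(1, 5):
    print(n, hermite_coeffs(n))
```

Note that \(f_4(x,y) = x^4 - 6x^2y + 3y^2\) reproduces exactly the combination \(Y(t)\) used in the BDG problem above.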