
One dimensional stationary measure

Consider the one dimensional SDE

\[dX_t = f(X_t) dt + g(X_t) dW_t\]

which we assume has a unique, global-in-time solution. For simplicity, let us assume that there is a positive constant \(c\) so that \( 1/c < g(x) < c\) for all \(x\), and that \(f\) and \(g\) are smooth.

A stationary measure for this problem is a probability measure \(\mu\) such that if the initial condition \(X_0\) is distributed according to \(\mu\), and is independent of the Brownian motion \(W\), then \(X_t\) is distributed according to \(\mu\) for every \(t \geq 0\).

If the functions \(f\) and \(g\) are “nice” then the distribution at time \(t\) has a density with respect to Lebesgue measure (“\(dx\)”). That is to say, there is a function \(p_x(t,y)\) so that for any \(\phi\)

\[\mathbf E_x \phi(X_t) = \int_{-\infty}^\infty p_x(t,y)\phi(y) dy\]

More generally, if \(X_0\) is distributed with density \(\phi\), then the density \(p_\phi(t,y)\) of \(X_t\) solves the following equation

\[\frac{\partial p_\phi}{\partial t}(t,y) = (L^* p_\phi)(t,y)\]

with \( p_\phi(0,y) = \phi(y)\), where \(\phi\) is the density with respect to Lebesgue measure of the initial distribution (the pdf of \(X_0\)).

\(L^*\) is the formal adjoint of the generator \(L\) of \(X_t\) and is defined by

\[(L^*\phi)(y) =  - \frac{\partial\ }{\partial y}\big( f \phi\big)(y) + \frac12 \frac{\partial^2\ }{\partial y^2}\big( g^2 \phi\big)(y) \]
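For reference, the generator \(L\) itself acts on smooth test functions \(\phi\) by
\[(L\phi)(y) = f(y)\,\phi'(y) + \frac12\, g^2(y)\,\phi''(y),\]
and \(L^*\) above is obtained from it by integration by parts.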

Since we want the density \(p_\phi(t,y)\) not to change when it is evolved forward by the above equation, we want \( \frac{\partial p_\phi}{\partial t}=0\), or in other words

\[(L^* p_\phi)(t,y) =0\]
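Written out using the definition of \(L^*\), the condition on a stationary density \(\rho\) is just the ODE
\[ - \frac{d\ }{dy}\big( f(y)\, \rho(y)\big) + \frac12\, \frac{d^2\ }{dy^2}\big( g^2(y)\, \rho(y)\big) = 0 .\]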

  1. Let \(F\) be such that \(-F' = f/g^2\). Show that \[ \rho(y)=\frac{K}{g^2(y)}\exp\Big( - 2F(y) \Big)\] is an invariant density, where \(K\) is a normalization constant which ensures that
    \[\int \rho(y) dy =1.\]
    (A numerical sketch of evaluating this formula appears after this list.)
  2. Find the stationary measure for each of the following SDEs:
    \[dX_t = (X_t - X^3_t)\, dt + \sqrt{2}\, dW_t\]
    \[dX_t = - F'(X_t)\, dt + \sqrt{2}\, dW_t\]
  3. Assuming that the formula derived above makes sense more generally, compare the invariant measures of
    \[ dX_t = -X_t\, dt + dW_t\]
    and
    \[ dX_t = -\mathrm{sign}(X_t)\, dt + \frac{1}{\sqrt{|X_t|}}\, dW_t\]
  4. Again proceeding formally, assuming everything is well defined and makes sense, find the stationary density of \[dX_t = - 2\,\frac{\mathrm{sign}(X_t)}{|X_t|}\, dt + \sqrt{2}\, dW_t\]
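As a sanity check on the formula in part 1, one can evaluate and normalize \(\rho\) numerically for any given \(f\) and \(g\). The following Python sketch is only illustrative (the function name, the grid window, and the simple trapezoidal quadrature are choices of mine, not part of the problem):

```python
import numpy as np

def stationary_density(f, g, y_min=-5.0, y_max=5.0, n=2001):
    """Evaluate rho(y) = K/g(y)^2 * exp(-2 F(y)) on a grid, where -F' = f/g^2,
    and normalize so that rho integrates to one.  The window [y_min, y_max]
    and the grid size are illustrative choices."""
    y = np.linspace(y_min, y_max, n)
    F_prime = -f(y) / g(y) ** 2                 # F'(y) = -f(y)/g(y)^2
    # cumulative trapezoidal quadrature for F; the additive constant is absorbed into K
    F = np.concatenate(([0.0], np.cumsum(0.5 * (F_prime[1:] + F_prime[:-1]) * np.diff(y))))
    rho = np.exp(-2.0 * F) / g(y) ** 2
    rho /= np.sum(0.5 * (rho[1:] + rho[:-1]) * np.diff(y))   # fix K by normalization
    return y, rho

# Example: the double-well drift f(x) = x - x^3 with g(x) = sqrt(2) from part 2.
y, rho = stationary_density(lambda x: x - x**3,
                            lambda x: np.sqrt(2.0) * np.ones_like(x))
```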


Numerical SDEs – Euler–Maruyama method

If we wanted to simulate the SDE

\[dX_t = b(X_t) dt + \sigma(X_t) dW_t\]
on a computer, then we really want to approximate the associated integral equation over a short time interval of length \(h\). Namely,
\[X_{t+h} - X_t= \int_t^{t+h}b(X_s) ds + \int_t^{t+h}\sigma(X_s) dW_s\]

It is reasonable for \(s \in [t,t+h]\) to use the   approximation
\begin{align}b(X_s) \approx b(X_t) \qquad\text{and}\qquad\sigma(X_s)\approx \sigma(X_t)\end{align}

which implies

\[X_{t+h} - X_t\approx b(X_t) h + \sigma(X_t) \big( W_{t+h}-W_{t}\big)\]

Since  \(W_{t+h}-W_{t}\) is a Gaussian random variable with mean zero and variance \(h\), this discussion suggests the following numerical scheme:

\[ X_{n+1} = X_n + b(X_n) h + \sigma(X_n) \sqrt{h} \eta_n\]

where \(h\) is the time step and the \(\{ \eta_n : n=0,\dots\}\) are a collection of mutually independent standard Gaussian random variables (i.e. mean zero and variance one). This is called the Euler–Maruyama method.
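For concreteness, here is a minimal Python sketch of this scheme. The function name and the parameter choices in the example at the bottom are illustrative assumptions, not part of the problem:

```python
import numpy as np

def euler_maruyama(b, sigma, x0, h, n_steps, rng=None):
    """One trajectory of dX_t = b(X_t) dt + sigma(X_t) dW_t via the update
    X_{n+1} = X_n + b(X_n) h + sigma(X_n) sqrt(h) eta_n."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.empty(n_steps + 1)
    x[0] = x0
    eta = rng.standard_normal(n_steps)      # i.i.d. standard Gaussian increments
    for n in range(n_steps):
        x[n + 1] = x[n] + b(x[n]) * h + sigma(x[n]) * np.sqrt(h) * eta[n]
    return x

# Illustrative run on exercise 1 below, dX_t = -X_t dt + dW_t,
# with x0 = 1, h = 0.01, and 1000 steps (all illustrative choices):
path = euler_maruyama(b=lambda x: -x, sigma=lambda x: 1.0, x0=1.0, h=0.01, n_steps=1000)
```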

Use this method to numerically approximate (and plot) several trajectories of each of the following SDEs.

  1. \[dX_t = -X_t dt + dW_t\]
  2. For \(r=-1, 0, 1/4, 1/2, 1\)
    \[ dX_t = r X_t dt + X_t dW_t\]
    Does it head to infinity or approach zero? Look at different small \(h\). Does the solution go negative? Should it?
  3. \[dX_t = X_t\, dt -X_t^3\, dt +  \alpha\, dW_t\]
    Try different values of \(\alpha\), for example \(\alpha = 1/10, 1, 2, 10\). How does it act? What is the long-time behavior? How does it compare with what you learned about the stationary measure of this equation in the “One dimensional stationary measure” problem?


Entry and Exit through boundaries

Consider the following one-dimensional SDE:
\begin{align*}
dX_t&= \cos( X_t )^\alpha\, dW_t\\
X_0&=0
\end{align*}
Consider the equation for \(\alpha >0\). On what interval do you expect to find the solution at all times? Classify the behavior at the boundaries.

For what values of \(\alpha < 0\) does it seem reasonable to define the process? Any? Justify your answer.

Martingale Exit from an Interval – I

Let \(\tau\) be the first time that a continuous martingale \(M_t\) starting from \(x\) exits the interval \((a,b)\), with \(a<x<b\). In all of the following, we assume that \(\mathbf P(\tau < \infty)=1\). Let \(p=\mathbf P_x\{M(\tau)=a\}\).

Find an analytic expression for \(p\):

  1. For this part assume that \(M_t\) is the solution to a time-homogeneous SDE, that is, \[dM_t=\sigma(M_t)\,dB_t\] with \(\sigma\) bounded and smooth. What PDE should you solve to find \(p\)? With what boundary data? Assume for a moment that \(M_t\) is standard Brownian motion (\(\sigma=1\)). Solve the PDE you mentioned above in this case.
  2. A probabilistic way of thinking: Return to a general martingale \(M_t\). Let us assume that \(dM_t=\sigma(t,\omega)\,dB_t\), again with \(\sigma\) smooth and uniformly bounded from above and away from zero. Assume that \(\tau < \infty\) almost surely and notice that \[\mathbf E_x M_\tau=a\, \mathbf P_x\{M_\tau=a\} + b\, \mathbf P_x\{M_\tau=b\}.\] Of course the process has to exit through one side or the other, so \[\mathbf P_x\{M_\tau=a\} = 1 - \mathbf P_x\{M_\tau=b\}.\] Use all of these facts and the Optional Stopping Theorem to derive the equation for \(p\). (A sketch of the Brownian case appears after this list.)
  3. Return to the case when \[dM_t=\sigma(M_t)\,dB_t\] with \(\sigma\) bounded and smooth. Write down the equations that \(v(x)= \mathbf E_x\{\tau\}\), \(w(x,t)=\mathbf P_x\{ \tau >t\}\), and \(u(x)=\mathbf E_x\{e^{-\lambda\tau}\}\) with \(\lambda > 0\) satisfy. (For extra credit: solve them for \(M_t=B_t\) in this one-dimensional setting and see what happens as \(b \rightarrow \infty\).)
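For orientation, here is a sketch of the algebra in the simplest case of part 2, when \(M_t = B_t\) is standard Brownian motion, so that optional stopping gives \(\mathbf E_x B_\tau = x\):
\[ x = \mathbf E_x B_\tau = a\,p + b\,(1-p) \qquad\Longrightarrow\qquad p = \frac{b-x}{b-a}. \]
The general martingale case of part 2 follows the same pattern.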

Probability Bridge

For fixed \(\alpha\) and \(\beta\) consider the stochastic differential equation
\[
dY(t)=\frac{\beta-Y(t)}{1-t} dt + dB(t) ~,~~ 0\leq t < 1 ~,~~Y(0)=\alpha.
\]
Verify that \(\lim_{t\to 1}Y(t)=\beta\) with probability one. (This is called the Brownian bridge from \(\alpha\) to \(\beta\).)
Hint: In the problem “Solving a class of SDEs”, you found that this equation had the solution
\begin{equation*}
Y_t = \alpha(1-t) + \beta t + (1-t)\int_0^t \frac{dB_s}{1-s} \quad 0 \leq t <1\; .
\end{equation*}
To answer the question show that
\begin{equation*}
\lim_{t \rightarrow 1^-} (1-t) \int_0^t\frac{dB_s}{1-s} =0 \quad \text{a.s.}
\end{equation*}
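One natural first step is the variance computation via the Itô isometry (this only gives convergence in \(L^2\); an additional argument, for example a time change or a Borel–Cantelli estimate along a sequence of times, is still needed for the almost sure statement):
\begin{equation*}
\mathbf E\left[\left( (1-t) \int_0^t\frac{dB_s}{1-s}\right)^2\right] = (1-t)^2 \int_0^t \frac{ds}{(1-s)^2} = (1-t)^2\left(\frac{1}{1-t} - 1\right) = t(1-t) \longrightarrow 0 \quad \text{as } t \to 1^- .
\end{equation*}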