
# Author Archives: Jonathan Mattingly

## Exponential Martingale

Let \(\sigma(t,\omega)\) be adapted to the filtration generated by a standard Brownian Motion \(B_t\) such that \(|\sigma(t,\omega)| < K\) for some bound \(K\). Let \(I(t,\omega)=\int_0^t \sigma(s,\omega) dB(s,\omega)\).

- Show that

\[M_t=\exp\big\{\alpha I(t)-\frac{\alpha^2}{2}\int_0^t \sigma^2(s)ds \big\}\]

is a martingale. It is called the exponential martingale.

- Show that \(M_t\) satisfies the equation

\[ dM_t =\alpha M_t dI_t = \alpha M_t \sigma_t dB_t\]
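As a numerical sanity check (not part of the original exercise), the following Python sketch estimates \(\mathbf E M_T\) by Monte Carlo in the special case of a constant \(\sigma\); the function name and parameter values are ours. The martingale property predicts \(\mathbf E M_T = M_0 = 1\).

```python
import math
import random

def exponential_martingale_mean(alpha=0.5, sigma=0.8, T=1.0, n_steps=200, n_paths=2000, seed=0):
    """Monte Carlo estimate of E[M_T] for M_t = exp(alpha*I_t - (alpha^2/2) int_0^t sigma^2 ds).
    Illustrative assumption: sigma is constant (a special bounded adapted integrand),
    so I_t = sigma * B_t and the compensator is sigma^2 * t."""
    rng = random.Random(seed)
    dt = T / n_steps
    sqrt_dt = math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        I = 0.0
        for _ in range(n_steps):
            I += sigma * rng.gauss(0.0, sqrt_dt)  # dI = sigma dB
        total += math.exp(alpha * I - 0.5 * alpha**2 * sigma**2 * T)
    return total / n_paths

print(exponential_martingale_mean())  # should be close to 1
```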

## Ito Integration by parts

Recall that if \(u(t)\) and \(v(t)\) are deterministic functions which are once differentiable then the classic integration by parts formula states that

\[ \int_0^t u(s) (\frac{dv}{ds})(s)\,ds = u(t)v(t) - u(0)v(0) - \int_0^t v(s) (\frac{du}{ds})(s)\,ds\]

As is suggested by the formal relations

\[ (\frac{dv}{ds})(s)\,ds=dv(s) \qquad\text{and}\qquad (\frac{du}{ds})(s)\, ds=du(s)\]

this can be rearranged to state

\[ u(t)v(t)- u(0)v(0)= \int_0^t u(s) dv(s) + \int_0^t v(s) du(s)\]

which holds for more general Riemann–Stieltjes integrals. Now consider two Ito processes \(X_t\) and \(Y_t\) given by

\[dX_t=b_s ds + \sigma_s dW_t \qquad\text{and}\qquad dY_t=f_s ds + g_s dW_t \]

where \(W_t\) is a standard Brownian Motion. Derive the “Integration by Parts formula” for Ito calculus by applying Ito’s formula to \(X_tY_t\). Compare this to the classical formula given above.

## Cross-quadratic variation: correlated Brownian Motions

Let \(W_t\) and \(B_t\) be two independent standard Brownian Motions. For \(\rho \in [0,1]\) define

\[ Z_t = {\rho}\, W_t +\sqrt{1-\rho^2}\, B_t\]

- Why is \(Z_t\) a standard Brownian Motion ?
- Calculate the cross-quadratic variations \([ Z,W]_t\) and \([ Z,B]_t\) .
- For what values of \(\rho\) is \(W_t\) independent of \(Z_t\) ?
- ** Argue that two standard Brownian motions are independent if and only if their cross-quadratic variation is zero.
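A quick simulation (ours, not part of the problem) illustrates the expected answer for the cross-quadratic variation: discretizing \([Z,W]_T\) with the illustrative value \(\rho=0.6\) should give a value near \(\rho T\).

```python
import math
import random

def cross_quadratic_variation(rho=0.6, T=1.0, n_steps=20000, seed=1):
    """Discrete cross-quadratic variation [Z, W]_T for Z_t = rho*W_t + sqrt(1-rho^2)*B_t,
    with W and B independent standard Brownian motions.  Since [W, W]_t = t and
    [W, B]_t = 0, one expects [Z, W]_T to be near rho * T."""
    rng = random.Random(seed)
    sd = math.sqrt(T / n_steps)
    qv = 0.0
    for _ in range(n_steps):
        dW = rng.gauss(0.0, sd)
        dB = rng.gauss(0.0, sd)
        dZ = rho * dW + math.sqrt(1.0 - rho**2) * dB
        qv += dZ * dW
    return qv

print(cross_quadratic_variation())  # should be close to 0.6
```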

## Paley-Wiener-Zygmund Integral

##### Definition of stochastic integrals by integration by parts

In 1933, Paley, Wiener, and Zygmund gave a definition of the stochastic integral based on integration by parts. The resulting integral agrees with the Ito integral when both are defined; however, the Ito integral has a *much larger* domain of definition. We will now develop the integral as outlined by Paley, Wiener, and Zygmund:

- Let \(f(t)\) be a deterministic function with \(f'(t)\) continuous. Prove that \begin{align*} \int_0^1 f(t)dW(t) = f(1)W(1) - \int_0^1 f'(t)W(t) dt\end{align*}

where the first integral is the Ito integral and the last integral is defined path-wise as the standard Riemann integral, since the integrands are a.s. continuous.
- Now let \(f\) be as above with, in addition, \(f(1)=0\), and “define” the stochastic integral \(\int_0^1 f(t) * dW(t)\) by the relationship

\begin{align*}

\int_0^1 f(t) *dW(t) = - \int_0^1 f'(t) W(t) dt\;.

\end{align*}

where the integral on the right-hand side is the standard Riemann integral. If the condition \(f(1)=0\) seems unnatural to you, what this is really saying is that \(f\) is supported on \([0,1)\). In many ways it would be most natural to consider \(f\) on \([0,\infty)\) with compact support; then \(f(\infty)=0\). We consider the unit interval for simplicity.

- Show by direct calculation (not by the Ito isometry) that

\begin{align*}

\mathbf E \left[ \left(\int_0^1 f(t)* dW(t)\right)^2\right]=\int_0^1 f^2(t) dt\;.

\end{align*}

Paley, Wiener, and Zygmund then used this isometry to extend the integral to any deterministic function in \(L^2[0,1]\). This can be done since for any \(f \in L^2[0,1]\), one can find a sequence of deterministic functions \(\phi_n \in C^1[0,1]\) with \(\phi_n(1)=0\) so that

\begin{equation*}

\int_0^1 (f(s) - \phi_n(s))^2ds \rightarrow 0 \text{ as } n \rightarrow \infty\,.

\end{equation*}

## Stratonovich integral: A first example

Let us denote the Stratonovich integral of a standard Brownian motion \(W(t)\) with respect to itself by

\begin{align*}

\int_0^t W(s)\circ dW(s)\;.

\end{align*}

We then define the integral by

\begin{align*}

\int_0^t W(s)\circ dW(s) = \lim_{n \rightarrow \infty}

\sum_k\frac12\big(W(t_{k+1}^n)+W(t_{k}^n)\big)\big(W(t_{k+1}^n) -W(t_{k}^n)\big)

\end{align*}

where \(t_k^n=k\frac{t}n\). Prove that with probability one

\begin{align*}

X_t= \int_0^t W(s)\circ dW(s)= \frac12 W(t)^2\;.

\end{align*}

Observe that this is what one would have if one used standard (as opposed to Ito) calculus. Calculate \(\mathbf E [ X_t | \mathcal{F}_s]\) for \(s < t\) where \(\mathcal{F}_t\) is the \(\sigma\)-algebra generated by the Brownian motion. Is \(X_t\) a martingale with respect to \(\mathcal{F}_t\)?
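The midpoint sum above is easy to check numerically. In this sketch (our own illustration, not part of the problem) the midpoint sum telescopes to \(W_T^2/2\) exactly along each path, while the left-point (Ito) sum picks up the correction \(-T/2\).

```python
import math
import random

def stratonovich_and_ito_sums(T=1.0, n_steps=20000, seed=2):
    """Midpoint (Stratonovich) and left-point (Ito) Riemann sums for int W dW
    along one sampled Brownian path.  The midpoint sum telescopes to W_T^2/2
    exactly; the left-point sum converges to W_T^2/2 - T/2."""
    rng = random.Random(seed)
    sd = math.sqrt(T / n_steps)
    W = 0.0
    strat = 0.0
    ito = 0.0
    for _ in range(n_steps):
        dW = rng.gauss(0.0, sd)
        strat += 0.5 * (2.0 * W + dW) * dW  # midpoint: (W_k + W_{k+1})/2 * dW
        ito += W * dW                       # left endpoint
        W += dW
    return strat, ito, W

strat, ito, W_T = stratonovich_and_ito_sums()
```

The difference between the two sums is half the discrete quadratic variation, which converges to \(T/2\).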

## Stratonovich integral

Let \(X_t\) be an Ito processes with

\begin{align*}

dX_t&=f_tdt + g_tdW_t

\end{align*}

and \(B_t\) be a second (possibly correlated with \(W\) ) Brownian

motion. We define the Stratonovich integral \(\int X_t \circ dB_t\) by

\begin{align*}

\int_0^T X_t \circ dB_t = \int_0^T X_t dB_t + \frac12 \int_0^T \;d\langle X, B \rangle_t

\end{align*}

Recall that if \(B_t=W_t\) then \(d\langle B, W \rangle_t =dt\) and it is zero if they are independent. Use this definition to calculate:

- \(\int_0^t B_t \circ dB_t\) (Explain why this agrees with the answer you obtained here.)
- Let \(F\) be a smooth function. Find the equation satisfied by \(Y_t=F(B_t)\) written in terms of Stratonovich integrals. (Use Ito’s formula to find the equation for \(dY_t\) in terms of Ito integrals and then use the above definition to rewrite the Ito integrals as Stratonovich integrals “\(\circ dB_t\)”.) How does this compare to classical calculus?
- (Integration by parts) Let \(Z_t\) be a second Ito process satisfying

\begin{align*}

dZ_t&=b_tdt + \sigma_tdW_t\;.

\end{align*}

Calculate \(d(X_t Z_t)\) using Ito’s formula and then write it in terms of Stratonovich integrals. Why is this part of the problem labeled integration by parts? (Write the integral form of the expression you derived for \(d(X_t Z_t)\) in the two cases. What are the differences?)

## Expansion of Brownian Motion

Let \(\{\eta_k : k=0,1,\cdots\}\) be a collection of mutually independent standard Gaussian random variables with mean zero and variance one. Define

\begin{align*}

X(t) =\frac{t}{\sqrt\pi} \eta_0 + \sqrt{\frac{2}{\pi}}\sum_{k=1}^\infty \frac{\sin(k t)}{k} \eta_k \;.

\end{align*}

- Show that on the interval \([0,\pi]\), \(X(t)\) has the same mean, variance and covariance as Brownian motion. (In fact, it is Brownian motion.)
- ** Prove it is Brownian motion.

There are a number of ways to prove it is Brownian motion. One is to see \(X\) as the limit of the finite sums, which are each continuous functions. Then prove that \(X\) is the uniform limit of these continuous functions and hence is itself continuous.

Then observe that “formally” the time derivative of \(X(t)\) is the sum of all frequencies with random amplitudes which are independent and identically distributed \(N(0,1)\) Gaussian random variables. This is the origin of the term “white noise”, since all frequencies are equally represented, as in white light.

In the above calculations you may need the fact that

\begin{align*}

\min(t,s)= \frac{ts}\pi +\frac{2}\pi \sum_{k=1}^\infty \frac{\sin(k t)\sin(k s)}{k^2}\;.

\end{align*}

If you are interested, this can be shown by periodically extending \(\min(t,s)\) to the interval \([-\pi,\pi]\) and then showing that it has the same Fourier transform as the right-hand side of the above expression. Then use the fact that two continuous functions with the same Fourier transform are equal on \([-\pi,\pi]\).
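The series for \(\min(t,s)\) is easy to test numerically; this short check (ours, not part of the problem) evaluates a partial sum at a couple of points.

```python
import math

def min_series(t, s, n_terms=20000):
    """Partial sum of ts/pi + (2/pi) * sum_k sin(kt) sin(ks) / k^2, which
    converges to min(t, s) for t, s in [0, pi].  The tail is O(1/n_terms)."""
    total = t * s / math.pi
    for k in range(1, n_terms + 1):
        total += (2.0 / math.pi) * math.sin(k * t) * math.sin(k * s) / (k * k)
    return total

print(min_series(1.0, 2.0))  # should be close to min(1, 2) = 1
```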

## Transforming Brownian Motion

Let \(W(t)\) be standard Brownian motion on \([0,\infty)\).

- Show that \(-W(t)\) is also a Brownian motion.
- For any \(c>0\), show that \(Y(t)= c W(t/c^2)\) is again a standard Brownian motion.
- Fix any \(s>0\) and define \(Z(t)=W(t+s)-W(s)\). Show that \(Z(t)\) is a standard Brownian motion.
- * Define \(X(t)= t W(1/t)\) for \(t>0\) and \(X(0)=0\). Show that \(X(t)\) is a standard Brownian Motion. Do this by arguing that \(X(t)\) is continuous almost surely and that for each \(t \geq 0\) it is a Gaussian random variable with mean zero and variance \(t\). In place of checking the increments directly, one can show that \(\text{Cov}(X(t),X(s))=\mathbf E X(t) X(s) \) equals \(\min(t,s)\). To prove continuity, notice that

\[ \lim_{t \rightarrow 0+} t W(1/t) = \lim_{s \rightarrow \infty} \frac{W(s)}{s}\]

## Complex Exponential Martingale

Let \(W_t\) be a standard Brownian Motion. Find \(\alpha \in \mathbb{R}\) so that

\[e^{i W_t + \alpha t}\]

is a martingale (and show that it is a martingale).

## Martingale Brownian Squared

Let \(W_t\) be standard Brownian Motion.

- Find a function \(f(t)\) so that \(W_t^2 -f(t)\) is a Martingale.
- * Argue that in some sense this \(f(t)\) is unique among increasing functions with finite variation. Compare this with the problem here.

## Martingale of squares

Let \(\{ Z_n: n=0,1, \cdots\}\) be a collection of mutually independent random variables with \(Z_n\) distributed as a Gaussian with mean zero and variance \(\sigma_n^2\) for \(\sigma_n \in \mathbb{R}\). Define \(X_n\) by

\begin{equation*}

X_n=\sum_{k=0}^n Z_k^2\;.

\end{equation*}

- Find a stochastic process \(Y_n\) so that \(X_n-Y_n\) is a Martingale with respect to the filtration \(\mathcal{F}_n=\sigma(Z_0,\cdots,Z_n)\).
- Find a second process \(\tilde Y_n\) so that \(X_n-\tilde Y_n\) is again a Martingale with respect to the filtration \(\mathcal{F}_n\) but \(Y_n \neq \tilde Y_n\) almost surely.

## Conditional Expectation example

Consider the following probability space \((\Omega,\mathcal{F},\mathbf P)\) where \(\Omega=\big\{ (\omega_0,\omega_1) : \omega_i \in \{-1,0,1\} \big\}\). Take each of the \(\omega_i\) to be mutually independent with \(\mathbf P(\omega_i=0)=\frac12\) and \(\mathbf P(\omega_i=\pm 1)=\frac14\). (\(\mathcal{F}\) is just the \(\sigma\)-algebra generated by the collection of single points, but this is not important.)

For \(n=0\) or \(1\) define the random variables \(X_n\) by \(X_0=\omega_0\) and \(X_1=\omega_0 \omega_1\).

- What is \(\mathbf P(X_1=1)\) ? What is \(\mathbf E X_0\) ?
- Let \(A\) be the event that \(\{X_1\neq 0 \}\). What is \(\sigma(A)\) ?
- What is \(\sigma(X_0)\) and \(\sigma(X_1)\)?
- What is \(\mathbf E(X_1 | A)\)? What is \(\mathbf E(X_1 | X_0)\)?

## A simple Ito Integral

Let \(\mathcal F_t\) be a filtration of \(\sigma\)-algebras and \(W_t\) a standard Brownian Motion adapted to the filtration. Define the adapted stochastic process \(X_t\) by

\[ X_t = \alpha_0 \mathbf 1_{[0,\frac12]}(t) + \alpha_{\frac12} \mathbf 1_{(\frac12,1]}(t) \]

where \(\alpha_0\) is a random variable measurable with respect to \(\mathcal F_0\) and \(\alpha_{\frac12}\) is a random variable measurable with respect to \(\mathcal F_{\frac12}\).

Write explicitly the Ito integral

\[\int_0^t X_s dW_s\]

and show by direct calculation that

\[\mathbf E \Big( \int_0^t X_s dW_s\Big) = 0\]

and

\[\mathbf E \Big[\Big( \int_0^t X_s dW_s\Big)^2\Big] = \int_0^t \mathbf E X_s^2 ds\]
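A Monte Carlo sketch (ours; the choices \(\alpha_0=1\) and \(\alpha_{\frac12}=W_{\frac12}\) are illustrative assumptions, not part of the problem) checks the two identities above for this elementary process.

```python
import math
import random

def simple_ito_moments(n_paths=20000, seed=3):
    """Monte Carlo check of E[int X dW] = 0 and the Ito isometry for the
    elementary process X_t = a0 on [0, 1/2] and a_half on (1/2, 1].
    Illustrative choices: a0 = 1 (known at time 0), a_half = W_{1/2}
    (known at time 1/2).  The integral is a0*W_{1/2} + a_half*(W_1 - W_{1/2})."""
    rng = random.Random(seed)
    a0 = 1.0
    first = 0.0
    second = 0.0
    sd = math.sqrt(0.5)
    for _ in range(n_paths):
        W_half = rng.gauss(0.0, sd)           # W_{1/2}
        dW = rng.gauss(0.0, sd)               # W_1 - W_{1/2}, independent of F_{1/2}
        integral = a0 * W_half + W_half * dW  # a_half = W_{1/2}
        first += integral
        second += integral**2
    return first / n_paths, second / n_paths

mean, second_moment = simple_ito_moments()
# theory: mean = 0 and second moment = a0^2*(1/2) + E[W_{1/2}^2]*(1/2) = 3/4
```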

## Quadratic Variation of Ito Integrals

Given stochastic processes \(f_t\) and \(g_t\) adapted to a filtration \(\mathcal F_t\) satisfying

\[\int_0^T\mathbf E f_t^2 dt < \infty\quad\text{and}\quad \int_0^T\mathbf E g_t^2 dt < \infty\]

define

\[M_t =\int_0^t f_s dW_s \quad \text{and}\quad N_t =\int_0^t g_s dW_s\]

for a standard Brownian Motion \(W_t\) also adapted to the filtration \(\mathcal F_t\). Though it is not necessary, assume that there exists a \(K>0\) so that \(|f_t|\) and \(|g_t|\) are less than \(K\) for all \(t\) almost surely.

Let \(\{ t_i^{(n)} : i=0,\dots,N(n)\}\) be a sequence of partitions of \([0,T]\) of the form

\[ 0 =t_0^{(n)} < t_1^{(n)} <\cdots<t_{N(n)}^{(n)}=T\]

such that

\[ \lim_{n \rightarrow \infty} \sup_i |t_{i+1}^{(n)} - t_i^{(n)}| = 0\]

Define

\[V_n[M]=\sum_{i=1}^{N(n)} \big(M_{t_i} -M_{t_{i-1}}\big)^2\]

and

\[Q_n[M,N]= \sum_{i=1}^{N(n)} \big(M_{t_i} -M_{t_{i-1}}\big)\big(N_{t_i} -N_{t_{i-1}}\big)\]

Clearly \(V_n[M]= Q_n[M,M]\). Show that the following points hold.

- The “polarization equality” holds:

\[ 4 Q_n[M,N] =V_n[M+N] -V_n[M-N]\]

Hence it is enough to understand the limit as \(n \rightarrow \infty\) of \(Q_n\) or \(V_n\).
- \[\mathbf E V_n[M]= \int_0^T \mathbf E f_t^2 dt\]
- * \(V_n[M]\rightarrow \int_0^T f_t^2 dt\) as \(n \rightarrow \infty\) in \(L^2\). That is to say

\[ \lim_{n \rightarrow \infty}\mathbf E \Big[ \big( V_n[M] - \int_0^T f_t^2 dt \big)^2 \Big]=0\]

This limit is called the Quadratic Variation of the Martingale \(M\).
- Using the results above, show that \(Q_n[M,N]\rightarrow \int_0^T f_t g_t dt\) as \(n \rightarrow \infty\) in \(L^2\). This is called the cross-quadratic variation of \(M\) and \(N\).
- * Prove by direct calculation, in the spirit of 3) above, that \(Q_n[M,N]\rightarrow \int_0^T f_t g_t dt\) as \(n \rightarrow \infty\) in \(L^2\).

In this context, one writes \(\langle M \rangle_T\) for the limit of the \(V_n[M]\) which is called the quadratic variation process of \(M_T\). Similarly one writes \(\langle M,N \rangle_T\) for the limit of \(Q_n[M,N]\) which is called the cross-quadratic variation process of \(M_T\) and \(N_T\). Clearly \(\langle M \rangle_T = \langle M,M \rangle_T\) and \( \langle M+N,M \rangle_T = \langle M, M+N\rangle_T= \langle M \rangle_T + \langle M, N\rangle_T\).
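A simulation sketch (ours, not part of the problem) illustrating the limit: for \(M_t=\int_0^t f_s dW_s\) with the illustrative deterministic choice \(f_t=\sin(2\pi t)\), the discrete sum \(V_n[M]\) should be near \(\int_0^1 f_t^2\, dt = 1/2\).

```python
import math
import random

def discrete_quadratic_variation(T=1.0, n_steps=50000, seed=9):
    """Discrete quadratic variation V_n[M] = sum (M_{t_i} - M_{t_{i-1}})^2 for
    M_t = int f dW with the illustrative choice f_t = sin(2*pi*t).
    The limit should be int_0^T f_t^2 dt = 1/2 for T = 1."""
    rng = random.Random(seed)
    dt = T / n_steps
    sd = math.sqrt(dt)
    V = 0.0
    for i in range(n_steps):
        f = math.sin(2.0 * math.pi * i * dt)
        dM = f * rng.gauss(0.0, sd)  # increment of the Ito integral on [t_i, t_{i+1}]
        V += dM * dM
    return V

print(discrete_quadratic_variation())  # should be close to 0.5
```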

## Covariance of Ito Integrals

Let \(f_t\) and \(g_t\) be two stochastic processes adapted to a filtration \(\mathcal F_t\) such that

\[\int_0^\infty \mathbf E (f_t^2) dt < \infty \qquad \text{and} \qquad \int_0^\infty \mathbf E (g_t^2) dt < \infty\]

Let \(W_t\) be a standard Brownian motion also adapted to the filtration \(\mathcal F_t\) and define the stochastic processes

\[ X_t =\int_0^t f_s dW_s \qquad \text{and} \qquad Y_t=\int_0^t g_s dW_s\]

Calculate the following:

- \( \mathbf E (X_t X_s ) \)
- \( \mathbf E (X_t Y_t ) \)

Hint: You know how to compute \( \mathbf E (X_t^2 ) \) and \( \mathbf E (Y_t^2 ) \). Use the fact that \((a+b)^2 = a^2 +2ab + b^2\) to answer the question. Simplify the result to get a compact expression for the answer.
- Show that if \(f_t=\sin(2\pi t)\) and \(g_t=\cos(2\pi t)\) then \(X_1\) and \(Y_1\) are independent random variables. (Hint: use the result here to deduce that \(X_1\) and \(Y_1\) are mean zero Gaussian random variables. Now use the above results to show that the covariance of \(X_1\) and \(Y_1\) is zero. Combining these two facts implies that the random variables are independent.)

## Gaussian Ito Integrals

In this problem, we will show that the Ito integral of a deterministic function is a Gaussian Random Variable.

Let \(\phi\) be a deterministic elementary function. In other words, there exists a sequence of real numbers \(\{c_k : k=1,2,\dots,N\}\) so that

\[ \sum_{k=1}^N c_k^2 < \infty\]

and there exists a partition

\[0=t_0 < t_1< t_2 <\cdots<t_N=T\]

so that

\[ \phi(t) = \sum_{k=1}^N c_k \mathbf{1}_{[t_{k-1},t_k)}(t) \]

- Show that if \(W(t)\) is a standard Brownian motion then the Ito integral

\[ \int_0^T \phi(t) dW(t)\]

is a Gaussian random variable with mean zero and variance

\[ \int_0^T \phi(t)^2 dt \]
- * Let \(f\colon [0,T] \rightarrow \mathbf R\) be a deterministic function such that

\[\int_0^T f(t)^2 dt < \infty\]

Then it can be shown that there exists a sequence of deterministic elementary functions \(\phi_n\) as above such that

\[\int_0^T (f(t)-\phi_n(t))^2 dt \rightarrow 0\qquad\text{as}\qquad n \rightarrow \infty\]

Assuming this fact, let \(\psi_n\) be the characteristic function of the random variable

\[ \int_0^T \phi_n(t) dW(t)\]

Show that for all \(\lambda \in \mathbf R\),

\[ \lim_{n \rightarrow \infty} \psi_n(\lambda) = \exp \Big( -\frac{\lambda^2}2 \big( \int_0^T f(t)^2 dt \big) \Big)\]

Then use the convergence result here to conclude that

\[ \int_0^T f(t) dW(t)\]

is a Gaussian Random Variable with mean zero and variance

\[\int_0^T f(t)^2 dt \]

by identifying the limit of the characteristic functions above. Note: When probabilists say the “characteristic function” of a random variable, they just mean the Fourier transform of its distribution. See here.
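For a concrete elementary \(\phi\) (coefficients and partition below chosen by us for illustration), the integral reduces to a finite Gaussian sum, and its sample mean and variance can be compared with \(0\) and \(\int_0^T \phi(t)^2 dt\).

```python
import math
import random

def elementary_integral_sample(coeffs, times, rng):
    """One sample of int phi dW for phi = sum_k c_k 1_{[t_{k-1}, t_k)}:
    the integral is the Gaussian sum sum_k c_k (W_{t_k} - W_{t_{k-1}})."""
    total = 0.0
    for c, t0, t1 in zip(coeffs, times, times[1:]):
        total += c * rng.gauss(0.0, math.sqrt(t1 - t0))
    return total

# illustrative step function: our own choice of coefficients and partition
rng = random.Random(4)
coeffs = [1.0, -2.0, 0.5]
times = [0.0, 0.3, 0.7, 1.0]
samples = [elementary_integral_sample(coeffs, times, rng) for _ in range(20000)]
sample_mean = sum(samples) / len(samples)
sample_var = sum(x * x for x in samples) / len(samples)
target_var = sum(c * c * (t1 - t0) for c, t0, t1 in zip(coeffs, times, times[1:]))
```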

## Solving a class of SDEs

Let us try a systematic procedure for solving SDEs which works for a class of SDEs. Let

\begin{align*}

X(t)=a(t)\left[ x_0 + \int_0^t b(s) dB(s) \right] +c(t) \ .

\end{align*}

Assuming \(a\), \(b\), and \(c\) are differentiable, use Ito’s formula to find the equation for \(dX(t)\) of the form

\begin{align*}

dX(t)=[ F(t) X(t) + H(t)] dt + G(t)dB(t)

\end{align*}

where \(F(t)\), \(G(t)\), and \(H(t)\) are some functions of time depending on \(a\), \(b\), \(c\) and maybe their derivatives. Solve the following equations by matching the coefficients. Let \(\alpha\), \(\gamma\) and \(\beta\) be fixed numbers.

Notice that

\begin{align*}

X(t)=a(t)\left[ x_0 + \int_0^t b(s) dB(s) \right] +c(t)=F(t,Y(t)) \ .

\end{align*}

where \(dY(t)=b(t) dB(t)\). Then you can apply Ito’s formula to this definition to find \(dX(t)\).

- First consider

\[dX_t = (-\alpha X_t + \gamma) dt + \beta dB_t\]

with \(X_0 =x_0\). Solve this for \( t \geq 0\).
- Now consider

\[dY(t)=\frac{\beta-Y(t)}{1-t} dt + dB(t) ~,~~ 0\leq t < 1 ~,~~Y(0)=\alpha.\]

Solve this for \( t\in[0,1] \).
- \begin{align*}

dX_t = -2 \frac{X_t}{1-t} dt + \sqrt{2 t(1-t)} dB_t ~,~~X(0)=\alpha

\end{align*}

Solve this for \( t\in[0,1] \).
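For the first equation one can compare an Euler-Maruyama simulation against the mean and variance of the exact (Ornstein-Uhlenbeck-type) solution; this sketch and its parameter values are ours, not part of the problem.

```python
import math
import random

def euler_ou(x0=2.0, alpha=1.0, gamma=0.5, beta=0.3, T=1.0, n_steps=200, n_paths=5000, seed=5):
    """Euler-Maruyama for dX = (-alpha*X + gamma) dt + beta dB.  The exact solution
    X_t = e^{-alpha t} x0 + (gamma/alpha)(1 - e^{-alpha t}) + beta int_0^t e^{-alpha(t-s)} dB_s
    has mean e^{-alpha T} x0 + (gamma/alpha)(1 - e^{-alpha T}) and
    variance beta^2 (1 - e^{-2 alpha T}) / (2 alpha) at time T."""
    rng = random.Random(seed)
    dt = T / n_steps
    sd = math.sqrt(dt)
    xs = []
    for _ in range(n_paths):
        x = x0
        for _ in range(n_steps):
            x += (-alpha * x + gamma) * dt + beta * rng.gauss(0.0, sd)
        xs.append(x)
    mean = sum(xs) / n_paths
    var = sum((x - mean) ** 2 for x in xs) / n_paths
    return mean, var

ou_mean, ou_var = euler_ou()
exact_mean = math.exp(-1.0) * 2.0 + 0.5 * (1.0 - math.exp(-1.0))
exact_var = 0.3**2 * (1.0 - math.exp(-2.0)) / 2.0
```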

## Homogeneous Martingales and BDG Inequality

### Part I

- Let \(f(x,y):\mathbb{R}^2 \rightarrow \mathbb{R}\) be a function which is twice differentiable in both \(x\) and \(y\). Let \(M(t)\) be defined by \[M(t)=\int_0^t \sigma(s,\omega) dB(s,\omega)\] where \(B(t)\) is a standard Brownian Motion. Assume that \(\sigma(t,\omega)\) is adapted and that \(\mathbf E\int_0^t \sigma(s,\omega)^2 ds < \infty\), so that \(\mathbb{E}\, M(t)^2 < \infty\) for all \(t\). Let \(\langle M \rangle(t)\) be the quadratic variation process of \(M(t)\). What equation does \(f\) have to satisfy so that \(Y(t)=f(M(t),\langle M \rangle(t))\) is again a martingale?
- Set

\begin{align*}

f_n(x,y) = \sum_{0 \leq m \leq \lfloor n/2 \rfloor} C_{n,m} x^{n-2m}y^m

\end{align*}

where \(\lfloor n/2 \rfloor\) is the largest integer less than or equal to \(n/2\). Set \(C_{n,0}=1\) for all \(n\). Then find a recurrence relation for \(C_{n,m+1}\) in terms of \(C_{n,m}\) so that \(Y(t)=f_n(B(t),t)\) will be a martingale.
- Write out explicitly \(f_1(B(t),t), \cdots, f_4(B(t),t)\) as defined in the previous item.

### Part II

Now consider \(I(t)\) defined by \[I(t)=\int_0^t \sigma(s,\omega)dB(s,\omega)\] where \(\sigma\) is adapted and \(|\sigma(t,\omega)| \leq K\) for all \(t\) with probability one. In light of the above let us set

\begin{align*}Y(t,\omega)=I(t)^4 - 6 I(t)^2\langle I \rangle(t) + 3 \langle I \rangle(t)^2 \ .\end{align*}

- Quote the problem “Ito Moments” to show that \(\mathbb{E}\{ |Y(t)|^2\} < \infty\) for all \(t\). Then use the first part of this problem to conclude that \(Y\) is a martingale.
- Show that \[\mathbb{E}\{ I(t)^4 \} \leq 6 \mathbb{E} \big\{ I(t)^2\langle I \rangle(t) \big\}\]
- Recall the Cauchy-Schwarz inequality. In our language it states that

\begin{align*}

\mathbb{E} \{AB\} \leq (\mathbb{E}\{A^2\})^{1/2} (\mathbb{E}\{B^2\})^{1/2}

\end{align*}

Combine this with the previous inequality to show that \begin{align*}\mathbb{E}\{ I(t)^4 \} \leq 36 \mathbb{E} \big\{\langle I \rangle(t)^2 \big\} \end{align*}
- As discussed in class, \(I^4\) is a submartingale (because \(x \mapsto x^4\) is convex). Use the Kolmogorov-Doob inequality and all that we have just derived to show that

\begin{align*}

\mathbb{P}\left\{ \sup_{0\leq s \leq T}|I(s)|^4 \geq \lambda \right\} \leq ( \text{const}) \frac{ \mathbb{E}\left( \int_0^T \sigma(s,\omega)^2 ds\right)^2 }{\lambda}

\end{align*}

## Associated PDE

Show the following:

- If \[I(t,\omega)=\int_0^t \sigma(s,\omega) dB(s,\omega)\] is a stochastic integral, then \[I^2(t)-\int_0^t \sigma^2(s)ds\] is a martingale.
- What equation must \(u(t,x)\) satisfy so that

\[ t \mapsto u(t,B(t))e^{\int_0^t V(B(s))ds} \]

is a martingale? Here \(V\) is a bounded function. Hint: Set \(Y(t)=\int_0^t V(B(s))ds\) and apply Ito’s formula to \(Z(t,B(t),Y(t))=u(t,B(t))\exp(Y(t))\).

## Exponential Martingale Bound

Let \(\sigma(t,\omega)\) be nonanticipating with \(|\sigma(t,\omega)| < M\) for some bound \(M\). Let \(I(t,\omega)=\int_0^t \sigma(s,\omega) dB(s,\omega)\). Use the exponential martingale \[\exp\big\{\alpha I(t)-\frac{\alpha^2}{2}\int_0^t \sigma^2(s)ds \big\}\] (see the problem here) and the Kolmogorov-Doob inequality to get the estimate

\[

P\Big\{ \sup_{0\leq t\leq T}|I(t)| \geq \lambda \Big\}\leq 2

\exp\left\{\frac{-\lambda^2}{2M^2 T}\right\}

\]

First express the event of interest in terms of the exponential martingale, then use the Kolmogorov-Doob inequality and after this choose the parameter \(\alpha\) to get the best bound.

## Ballistic Growth

Consider the SDE

\[

dX(t)=b(X(t))dt +\sigma(X(t))dB(t)

\]

with \(b(x)\to b_0 >0\) as \(x\to\infty\) and with \(\sigma\) bounded and positive. Suppose that \(b\) and \(\sigma\) are such that

\[\lim_{t\to\infty}X(t)=\infty\] with probability one for any starting point. Show that

\[

P_x\Big\{\lim_{t\to\infty}\frac{X(t)}{b_0 t}=1\Big\}=1 \ .

\]

From

\[

X(t)=x+\int_0^{t}b(X(s))ds +\int_0^{t}\sigma(X(s))dB(s)

\]

and the hypotheses, note that the result follows from showing that

\begin{align*}

\mathbf P_x\Big\{\lim_{t\to\infty}\frac{1}{t}\int_0^{t}\sigma(X(s))dB(s)=0\Big\}=1 \ .

\end{align*}

There are a number of ways of thinking about this. In the end they all come down to essentially the same calculations. One way is to show that for some fixed \(\delta \in(0,1)\) the following statement holds with probability one:

There exists a constant \(C(\omega)\) so that

\begin{align*}

\int_0^{t}\sigma(X(s))dB(s) \leq Ct^\delta

\end{align*}

for all \(t >0\).

To show this, partition \([0,\infty)\) into blocks and use the Doob-Kolmogorov inequality to estimate the probability that the max of \( \int_0^{t}\sigma(X(s))dB(s)\) on each block exceeds \(t^\delta\) on that block. Then use the Borel-Cantelli lemma to show that this happens only a finite number of times.

A different way to organize the same calculation is to estimate

\[

\mathbf P_x\Big\{\sup_{t>a}\frac{1}{t}|\int_0^t \sigma(X(s))dB(s)|>\epsilon\Big\}

\]

by breaking the interval \(t>a\) into the union of intervals of the form \(a2^k <t\leq a2^{k+1}\) for \(k=0,1,\dots\) and using Doob-Kolmogorov Martingale inequality. Then let \(a\to\infty\).

## Entry and Exit through boundaries

Consider the following one dimensional SDE.

\begin{align*}

dX_t&= \cos( X_t )^\alpha dW(t)\\

X_0&=0

\end{align*}

Consider the equation for \(\alpha >0\). On what interval do you expect to find the solution at all times? Classify the behavior at the boundaries.

For what values of \(\alpha < 0\) does it seem reasonable to define the process? Any? Justify your answer.

## Martingale Exit from an Interval – I

Let \(\tau\) be the first time that a continuous martingale \(M_t\) starting from \(x\) exits the interval \((a,b)\), with \(a<x<b\). In all of the following, we assume that \(\mathbf P(\tau < \infty)=1\). Let \(p=\mathbf P_x\{M(\tau)=a\}\).

Find an analytic expression for \(p\):

- For this part assume that \(M_t\) is the solution to a time homogeneous SDE, that is, \[dM_t=\sigma(M_t)dB_t\] with \(\sigma\) bounded and smooth. What PDE should you solve to find \(p\)? With what boundary data? Assume for a moment that \(M_t\) is standard Brownian Motion (\(\sigma=1\)). Solve the PDE you mentioned above in this case.
- A probabilistic way of thinking: Return to a general martingale \(M_t\). Let us assume that \(dM_t=\sigma(t,\omega)dB_t\), again with \(\sigma\) smooth and uniformly bounded from above and away from zero. Assume that \(\tau < \infty\) almost surely and notice that \[\mathbf E_x M(\tau)=a \mathbf P_x\{M_\tau=a\} + b \mathbf P_x\{M_\tau=b\}.\] Of course the process has to exit through one side or the other, so \[\mathbf P_x\{M_\tau=a\} = 1 - \mathbf P_x\{M_\tau=b\}.\] Use all of these facts and the Optional Stopping Theorem to derive the equation for \(p\).
- Return to the case when \[dM_t=\sigma(M_t)dB_t\] with \(\sigma\) bounded and smooth. Write down the equations that \(v(x)= \mathbf E_x\{\tau\}\), \(w(x,t)=\mathbf P_x\{ \tau >t\}\), and \(u(x)=\mathbf E_x\{e^{-\lambda\tau}\}\) with \(\lambda > 0\) satisfy. (For extra credit: Solve them for \(M_t=B_t\) in this one dimensional setting and see what happens as \(b \rightarrow \infty\).)

## Around the Circle

Consider the equation

\begin{align}

dX_t &= -Y_t dB_t – \frac12 X_t dt\\

dY_t &= X_t dB_t – \frac12 Y_t dt

\end{align}

Let \((X_0,Y_0)=(x,y)\) with \(x^2+y^2=1\). Show that \(X_t^2 + Y_t^2 =1\) for all \(t\) and hence the SDE lives on the unit circle. Does this make intuitive sense?
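A simulation sketch (ours, not part of the problem): one can check via Ito's formula that \((\cos B_t, \sin B_t)\) solves the system from \((1,0)\), and an Euler-Maruyama discretization keeps \(X_t^2+Y_t^2\) close to 1 for small step sizes (exactly 1 only in the limit).

```python
import math
import random

def circle_radius_after_euler(T=1.0, n_steps=100000, seed=8):
    """Euler-Maruyama for dX = -Y dB - X/2 dt, dY = X dB - Y/2 dt from (1, 0).
    Ito's formula shows (cos B_t, sin B_t) solves the system, so the true
    solution has X^2 + Y^2 = 1; the discretization should stay close to 1."""
    rng = random.Random(seed)
    dt = T / n_steps
    sd = math.sqrt(dt)
    x, y = 1.0, 0.0
    for _ in range(n_steps):
        dB = rng.gauss(0.0, sd)
        # simultaneous update so both increments use the old (x, y)
        x, y = x - y * dB - 0.5 * x * dt, y + x * dB - 0.5 * y * dt
    return x * x + y * y

print(circle_radius_after_euler())  # should be close to 1
```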

## Shifted Brownian Motion and a PDE

Let \(f \in C_0^2(\mathbf R^n)\) and \(\alpha(x)=(\alpha_1(x),\dots,\alpha_n(x))\) with \(\alpha_i \in C_0^2(\mathbf R^n)\) be given functions and consider the partial differential equations

\begin{align*}

\frac{\partial u}{\partial t} &= \sum_{i=1}^n \alpha_i(x) \frac{\partial u}{\partial x_i} + \frac{1}{2} \sum_{i=1}^n \frac{\partial^2 u}{\partial x_i^2} \ \text{ for } t >0 \text{ and } x\in \mathbf R^n \\

u(0,x)&=f(x) \ \text{ for } \ x \in \mathbf R^n

\end{align*}

Use the Girsanov theorem to show that the unique bounded solution \(u(t,x)\) of this equation can be expressed as

\begin{align*}

u(t,x) = \mathbf E_x \left[ \exp\left(\int_0^t \alpha(B(s))\cdot dB(s) -

\frac{1}{2}\int_0^t |\alpha(B(s))|^2 ds \right)f(B(t))\right]

\end{align*}

where \(\mathbf E_x\) is the expectation with respect to \(\mathbf P_x\) when the Brownian Motion starts at \(x\). (Note: there may be a sign error in the above exponential term. Use whatever sign is right.) For the remainder, assume that \(\alpha\)

is a fixed constant \(\alpha_0\). Now, using what you know about the distribution of \(B_t\), write the solution to the above equation as an integral kernel integrated against \(f\). (In other words, write \(u(t,x)\) so that your friends who don’t know any probability might understand it, i.e. \(u(t,x)=\int K(x,y,t)f(y)dy\) for some \(K(x,y,t)\).)

## Probability Bridge

For fixed \(\alpha\) and \(\beta\) consider the stochastic differential equation

\[

dY(t)=\frac{\beta-Y(t)}{1-t} dt + dB(t) ~,~~ 0\leq t < 1 ~,~~Y(0)=\alpha.

\]

Verify that \(\lim_{t\to 1}Y(t)=\beta\) with probability one. (This is called the Brownian bridge from \(\alpha\) to \(\beta\).)

Hint: In the problem “Solving a class of SDEs“, you found that this equation had the solution

\begin{equation*}

Y_t = \alpha(1-t) + \beta t + (1-t)\int_0^t \frac{dB_s}{1-s} \quad 0 \leq t <1\; .

\end{equation*}

To answer the question show that

\begin{equation*}

\lim_{t \rightarrow 1^-} (1-t) \int_0^t\frac{dB_s}{1-s} =0 \quad \text{a.s.}

\end{equation*}
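Using the explicit solution, one can simulate \(Y_t\) near \(t=1\) and watch it concentrate at \(\beta\). This numerical sketch is ours (with the illustrative values \(\alpha=0\), \(\beta=1\)); it checks \(\mathbf E Y_t = \alpha(1-t)+\beta t\) and \(\mathrm{Var}(Y_t)=t(1-t)\) at \(t=0.99\).

```python
import math
import random

def bridge_endpoint_stats(a=0.0, b=1.0, t_end=0.99, n_steps=990, n_paths=3000, seed=6):
    """Sample Y_t = a(1-t) + b*t + (1-t) * int_0^t dB_s/(1-s) at t = t_end.
    The increment of the stochastic integral over [s, s+ds] is Gaussian with
    variance ds/(1-s)^2; theory gives E[Y_t] = a(1-t) + b*t and
    Var(Y_t) = t(1-t), so Y_t concentrates at b as t -> 1."""
    rng = random.Random(seed)
    dt = t_end / n_steps
    ys = []
    for _ in range(n_paths):
        integral = 0.0
        for i in range(n_steps):
            s = i * dt
            integral += rng.gauss(0.0, math.sqrt(dt)) / (1.0 - s)
        ys.append(a * (1.0 - t_end) + b * t_end + (1.0 - t_end) * integral)
    mean = sum(ys) / n_paths
    var = sum((y - mean) ** 2 for y in ys) / n_paths
    return mean, var

bridge_mean, bridge_var = bridge_endpoint_stats()
```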

## Making the Cube of Brownian Motion a Martingale

Let \(B_t\) be a standard one dimensional Brownian

Motion. Find the function \(F:\mathbf{R}^5 \rightarrow \mathbf R\) so that

\begin{align*}

B_t^3 – F\Big(t,B_t,B_t^2,\int_0^t B_s ds, \int_0^t B_s^2 ds\Big)

\end{align*}

is a Martingale.

Hint: It might be useful to introduce the processes

\[X_t=B_t^2\qquad Y_t=\int_0^t B_s ds \qquad Z_t=\int_0^t B_s^2 ds\]

## Correlated SDEs

Let \(B_t\) and \(W_t\) be standard Brownian motions which are

independent. Consider

\begin{align*}

dX_t&= (-X_t +1)dt + \rho dB_t + \sqrt{1-\rho^2} dW_t\\

dY_t&= -Y_t dt + dB_t \ .

\end{align*}

Find the covariance \(\text{Cov}(X_t,Y_t)=\mathbf{E} (X_t Y_t) - \mathbf{E} (X_t) \mathbf{E}( Y_t)\).

## Hyperbolic SDE

Consider

\begin{align*}

dX_t=& Y_t dB_t + \frac12 X_t dt\\

dY_t=& X_t dB_t + \frac12 Y_t dt

\end{align*}

Show that \(X_t^2-Y_t^2\) is constant for all \(t\).

## Diffusion and Brownian motion

Let \(B_t\) be a standard Brownian Motion starting from zero and define

\[ p(t,x) = \frac1{\sqrt{2\pi t}}e^{-\frac{x^2}{2t} } \]

Given any \(x \in \mathbf R \), define \(X_t=x + B_t\). Of course \(X_t\) is just a Brownian Motion starting from \(x\) at time 0. Fixing a smooth, bounded, compactly supported function \(f:\mathbf R \rightarrow \mathbf R\), we define the function \(u(x,t)\) by

\[u(x,t) = \mathbf E_x f(X_t)\]

where we have decorated the expectation with the subscript \(x\) to remind us that we are starting from the point \(x\).

- Explain why \[ u(x,t) = \int_{-\infty}^\infty f(y)p(t,x-y)dy\]
- Show by direct calculation using the formula from the previous question that for \(t>0\), \(u(x,t)\) satisfies the diffusion equation

\[ \frac{\partial u}{\partial t}= c\frac{\partial^2 u}{\partial x^2}\]

for some constant \(c\). (Find the correct \(c\)!)
- Again using the formula from part 1), show that

\[ \lim_{t \rightarrow 0} u(x,t) = f(x)\]

and hence the initial condition for the diffusion equation is \(f\).
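A numerical sketch (ours, not part of the problem): for the illustrative choice \(f(y)=e^{-y^2}\) (smooth and bounded, though not compactly supported) the kernel integral has a closed form, \(u(x,t)=(1+2t)^{-1/2}e^{-x^2/(1+2t)}\), which can be compared with a Monte Carlo estimate of \(\mathbf E_x f(X_t)\).

```python
import math
import random

def mc_heat_solution(x=0.5, t=0.4, n_paths=100000, seed=7):
    """Monte Carlo estimate of u(x, t) = E_x f(X_t) for X_t = x + B_t with the
    illustrative test function f(y) = exp(-y^2).  Convolving f with the heat
    kernel p(t, x - y) gives u = (1 + 2t)^{-1/2} exp(-x^2 / (1 + 2t))."""
    rng = random.Random(seed)
    sd = math.sqrt(t)
    total = 0.0
    for _ in range(n_paths):
        y = x + rng.gauss(0.0, sd)  # X_t is Gaussian with mean x and variance t
        total += math.exp(-y * y)
    return total / n_paths

u_mc = mc_heat_solution()
u_kernel = math.exp(-0.5**2 / (1.0 + 2.0 * 0.4)) / math.sqrt(1.0 + 2.0 * 0.4)
```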