# Category Archives: Stochastic Calculus

## Getting your feet wet numerically

Simulate the following stochastic differential equations:

• $dX(t) = -\lambda X(t) dt + dW(t)$
• $dY(t) = -\lambda Y(t) dt + Y(t) dW(t)$

by using the following Euler type numerical approximation

• $X_{n+1} = X_n - \lambda X_n h + \sqrt{h}\, \eta_n$
• $Y_{n+1} = Y_n - \lambda Y_n h + \sqrt{h}\, Y_n \eta_n$

where $$n=0,1,2,\dots$$ and $$h >0$$ is a small number that gives the numerical step size.  That is to say, we consider $$X_n$$ as an approximation of $$X(t)$$ and $$Y_n$$ as an approximation of $$Y(t)$$, each with $$t=h n$$.  Here the $$\eta_n$$ are a collection of mutually independent random variables, each with a Gaussian distribution with mean zero and variance one. (That is, $$N(0,1)$$.)

Write code to simulate the two equations using the numerical methods suggested.  Plot some trajectories. Describe how the behavior changes for different choices of $$\lambda$$. Can you conjecture where it changes? Compare and contrast the behavior of the two equations.
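Since the problem asks for code, here is a minimal Python sketch of the suggested scheme (plotting omitted; all parameter values are illustrative). As a sanity check it averages many copies of $$X_n$$ at $$t=1$$, which should be near $$\mathbf E X(1) = e^{-\lambda} X(0)$$:

```python
import math
import random

random.seed(0)

def simulate(lam, h, n_steps, x0=1.0, y0=1.0):
    """Euler steps for dX = -lam X dt + dW and dY = -lam Y dt + Y dW,
    driven by the same Gaussian increments eta_n."""
    x, y = x0, y0
    xs, ys = [x], [y]
    sqrt_h = math.sqrt(h)
    for _ in range(n_steps):
        eta = random.gauss(0.0, 1.0)
        x = x - lam * x * h + sqrt_h * eta
        y = y - lam * y * h + sqrt_h * y * eta
        xs.append(x)
        ys.append(y)
    return xs, ys

# Average X at t = 1 over many runs for lambda = 2; E X(1) = exp(-2) * X(0).
lam, h, n_steps, n_paths = 2.0, 0.01, 100, 5000
mean_x = 0.0
for _ in range(n_paths):
    xs, _ = simulate(lam, h, n_steps)
    mean_x += xs[-1]
mean_x /= n_paths
```

To plot, collect a few outputs of `simulate` and graph them against $$t_n = nh$$ with your favorite plotting tool.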

## A modified Wright-Fisher Model

Consider the ODE

$\dot x_t = x_t(1-x_t)$

and the SDE

$dX_t = X_t(1-X_t) dt + \sqrt{X_t(1-X_t)} dW_t$

1. Argue that $$x_t$$ cannot leave the interval $$[0,1]$$ if $$x_0 \in (0,1)$$.
2. What is the behavior of $$x_t$$ as $$t \rightarrow\infty$$ if $$x_0\in (0,1)$$?
3. Can the diffusion $$X_t$$ exit the interval $$(0,1)$$ ? Prove your claims.

## No Explosions from Diffusion

Consider the following ODE and SDE:

$\dot x_t = x^2_t \qquad x_0 >0$

$d X_t = X^2_t dt + \sigma |X_t|^\alpha dW_t\qquad X_0 >0$

where $$\alpha >0$$ and $$\sigma >0$$.

1. Show that $$x_t$$ blows up in finite time.
2. Find the values of  $$\sigma$$ and $$\alpha$$ so that $$X_t$$ does not explode (off to infinity).

[ From Klebaner, ex 6.12]

## Cox–Ingersoll–Ross model

The following SDE has been suggested as a model for interest rates:

$dr_t = a(b-r_t)dt + \sigma \sqrt{r_t} dW_t$

for $$r_t \in \mathbf R$$, $$r_0 >0$$ and constants $$a$$, $$b$$, and $$\sigma$$.

1. Find a closed form expression for $$\mathbf E( r_t)$$.
2. Find a closed form expression  for $$\mathrm{Var}(r_t)$$.
3. Characterize the values of the parameters $$a$$, $$b$$, and $$\sigma$$ such that $$r=0$$ is an absorbing point.
4. What is the nature of the boundary at $$0$$ for other values of the parameters?
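If you want to sanity-check part 1 numerically, the sketch below (illustrative parameters; the square root is clipped at zero because the discrete scheme, unlike the continuous process, can step below zero) compares the sample mean at $$t=1$$ against $$b+(r_0-b)e^{-at}$$, which is what solving $$\frac{d}{dt}\mathbf E( r_t) = a(b-\mathbf E (r_t))$$ gives:

```python
import math
import random

random.seed(1)

# Euler-Maruyama for dr = a(b - r) dt + sigma sqrt(r) dW.
a, b, sigma, r0 = 1.0, 1.0, 0.5, 2.0
T, h, n_paths = 1.0, 0.005, 10000
n_steps = int(T / h)
sqrt_h = math.sqrt(h)

total = 0.0
for _ in range(n_paths):
    r = r0
    for _ in range(n_steps):
        # clip at zero so the square root stays defined
        r = r + a * (b - r) * h \
              + sigma * math.sqrt(max(r, 0.0)) * sqrt_h * random.gauss(0.0, 1.0)
    total += r
mc_mean = total / n_paths

exact_mean = b + (r0 - b) * math.exp(-a * T)
```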

## SDE Example: quadratic geometric BM

Show that the solution $$X_t$$ of

$dX_t=X_t^2 dt + X_t dB_t$

where $$X_0=1$$ and $$B_t$$  is a standard Brownian motion has the representation

$X_t = \exp\Big( \int_0^t X_s ds -\frac12 t + B_t\Big)$

## Practice with Ito and Integration by parts

Define

$X_t =X_0 + \int_0^t B_s dB_s$

where $$B_t$$ is a standard Brownian Motion. Show that $$X_t$$ can also be written

$X_t=X_0 + \frac12 (B^2_t -t)$

## Discovering the Bessel Process

Let $$W_t=(W^{(1)}_t,\dots,W^{(n)}_t)$$ be an $$n$$-dimensional Brownian motion, with the $$W^{(i)}_t$$ independent standard 1-dimensional Brownian motions and $$n \geq 2$$.

Let
$X_t = \|W_t\| = \Big(\sum_{i=1}^n (W^{(i)}_t)^2\Big)^{\frac12}$
be the norm of the Brownian motion. Even though the norm is not differentiable at zero, we can still apply Ito's formula since Brownian motion never visits the origin when the dimension is at least two.

1. Use Ito’s formula to show that $$X_t$$ satisfies the Ito process
$dX_t = \frac{n-1}{2 X_t} dt + \sum_{i=1}^n \frac{W^{(i)}_t }{X_t} dW^{(i)}_t$
2. Using the Levy-Doob Theorem show that
$Z_t =\sum_{i=1}^n \int_0^t \frac{W^{(i)}_s }{X_s} dW^{(i)}_s$
is a standard Brownian Motion.
3. In light of the above discussion argue that $$X_t$$ and $$Y_t$$ have the same distribution if   $$Y_t$$ is defined by
$dY_t = \frac{n-1}{2 Y_t} dt + dB_t$
where $$B_t$$ is a standard Brownian Motion.

Take a moment to reflect on what has been shown. $$W_t$$ is an $$\mathbf R^n$$-valued Markov process. However, there is no guarantee that the one dimensional process $$X_t$$ will again be a Markov process, much less a diffusion. The above calculation shows that the distribution of $$X_{t+h}$$ is determined completely by $$X_t$$. In particular, it solves a one dimensional SDE. We were sure that $$X_t$$ would be an Ito process, but we had no guarantee that it could be written as a single closed SDE. (Namely, that the coefficients would be functions of $$X_t$$ only and not of the details of the $$W^{(i)}_t$$'s.)
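The conclusion can also be probed numerically: since the martingale part of $$X_t$$ is a standard Brownian motion, the quadratic variation of a single path of $$X$$ over a window $$[t_0,T]$$ should be close to $$T-t_0$$. A sketch (step size, dimension and window are arbitrary choices; the window starts away from $$t=0$$, where $$X_0=0$$ sits on the singularity of the drift):

```python
import math
import random

random.seed(2)

n = 3                            # dimension of the Brownian motion
h = 1e-4                         # time step
t0_steps, T_steps = 100, 10000   # window [t0, T] = [0.01, 1.0]

w = [0.0] * n
xs = []                          # samples of X_t = ||W_t|| inside the window
sqrt_h = math.sqrt(h)
for step in range(T_steps):
    for i in range(n):
        w[i] += sqrt_h * random.gauss(0.0, 1.0)
    if step >= t0_steps:
        xs.append(math.sqrt(sum(c * c for c in w)))

# discrete quadratic variation of X over the window; should be close to T - t0
qv = sum((xs[k + 1] - xs[k]) ** 2 for k in range(len(xs) - 1))
window_length = (len(xs) - 1) * h
```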

## One dimensional stationary measure

Consider the one dimensional SDE

$dX_t = f(X_t) dt + g(X_t) dW_t$

which we assume has a unique global in time solution. For simplicity let us assume that there is a positive constant $$c$$ so that $$1/c < g(x)<c$$ for all $$x$$ and that $$f$$ and $$g$$ are smooth.

A stationary measure for the problem is a probability measure $$\mu$$ so that if  the initial distribution  $$X_0$$ is distributed according to $$\mu$$ and independent of the Brownian Motion $$W$$ then $$X_t$$ will be distributed as $$\mu$$ for any $$t \geq 0$$.

If the functions $$f$$ and $$g$$ are “nice” then the distribution at time $$t$$ has a density with respect to Lebesgue measure (“$$dx$$”). Which is to say, there is a function $$p_x(t,y)$$ so that for any $$\phi$$

$\mathbf E_x \phi(X_t) = \int_{-\infty}^\infty p_x(t,y)\phi(y) dy$

and the density $$p_\phi(t,y)$$ of $$X_t$$, started from an initial density $$\phi$$, solves the following equation

$\frac{\partial p_\phi}{\partial t}(t,y) = (L^* p_\phi)(t,y)$

with $$p_\phi(0,y) = \phi(y)$$, where $$\phi$$ is the density of the initial condition with respect to Lebesgue measure. (The pdf of $$X_0$$.)

$$L^*$$ is the formal adjoint of the generator $$L$$ of $$X_t$$ and is defined by

$(L^*\phi)(y) = -\frac{\partial\ }{\partial y}( f \phi)(y) + \frac12 \frac{\partial^2\ }{\partial y^2}( g^2 \phi)(y)$

Since we want $$p_\phi(t,y)$$ not to change when it is evolved forward with the above equation, we want $$\frac{\partial p_\phi}{\partial t}=0$$, or in other words

$(L^* p_\phi)(t,y) =0$

1. Let $$F$$ be such that $$-F' = f/g^2$$. Show that $\rho(y)=\frac{K}{g^2(y)}\exp\Big( -2F(y) \Big)$ is an invariant density, where $$K$$ is a normalization constant which ensures that
$\int \rho(y) dy =1$
2. Find the stationary measure for each of the following SDEs:
$dX_t = (X_t - X^3_t) dt + \sqrt{2} dW_t$
$dX_t = -F'(X_t) dt + \sqrt{2} dW_t$
3. Assuming that the formula derived above makes sense more generally, compare the invariant measures of
$dX_t = -X_t\, dt + dW_t$
and
$dX_t = -\mathrm{sign}(X_t)\, dt + \frac{1}{\sqrt{|X_t|}} dW_t$
4. Again proceeding formally, assuming everything is well defined and makes sense, find the stationary density of $dX_t = -2\frac{\mathrm{sign}(X_t)}{|X_t|} dt + \sqrt{2} dW_t$
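For part 2 the formula can be tested numerically. For the first SDE there, $$f(x)=x-x^3$$ and $$g^2=2$$, so part 1 suggests the density $$\rho(x) \propto \exp(x^2/2 - x^4/4)$$ (treat this exponent as the thing to verify, not as given). The sketch below compares the long-time average of $$X_t^2$$ along one trajectory with the average of $$x^2$$ under this density:

```python
import math
import random

random.seed(3)

# Long trajectory of dX = (X - X^3) dt + sqrt(2) dW via Euler-Maruyama.
h, n_steps = 0.005, 400000
noise = math.sqrt(2.0 * h)
x, acc = 0.0, 0.0
for _ in range(n_steps):
    x = x + (x - x ** 3) * h + noise * random.gauss(0.0, 1.0)
    acc += x * x
time_avg = acc / n_steps          # ergodic average of X^2

# Quadrature for the average of x^2 under rho(x) ~ exp(x^2/2 - x^4/4);
# the density is negligible outside [-4, 4].
dx = 0.001
grid = [-4.0 + i * dx for i in range(8001)]
weights = [math.exp(0.5 * t * t - 0.25 * t ** 4) for t in grid]
z = sum(weights) * dx
quad_avg = sum(t * t * w for t, w in zip(grid, weights)) * dx / z
```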

## Numerical SDEs – Euler–Maruyama method

If we wanted to simulate the SDE

$dX_t = b(X_t) dt + \sigma(X_t) dW_t$
on a computer then we really want to approximate the associated integral equation over a short time interval of length $$h$$. Namely,
$X_{t+h} - X_t= \int_t^{t+h}b(X_s) ds + \int_t^{t+h}\sigma(X_s) dW_s$

It is reasonable for $$s \in [t,t+h]$$ to use the approximation

$b(X_s) \approx b(X_t) \qquad\text{and}\qquad \sigma(X_s) \approx \sigma(X_t)$

which implies

$X_{t+h} - X_t\approx b(X_t) h + \sigma(X_t) \big( W_{t+h}-W_{t}\big)$

Since  $$W_{t+h}-W_{t}$$ is a Gaussian random variable with mean zero and variance $$h$$, this discussion suggests the following numerical scheme:

$X_{n+1} = X_n + b(X_n) h + \sigma(X_n) \sqrt{h} \eta_n$

where $$h$$ is the time step and the $$\{ \eta_n : n=0,\dots\}$$ are a collection of mutually independent standard Gaussian random variables (i.e. mean zero and variance 1). This is called the Euler–Maruyama method.

Use this method to numerically approximate (and plot) several trajectories of the following SDEs.

1. $dX_t = -X_t dt + dW_t$
2. For $$r=-1, 0, 1/4, 1/2, 1$$
$dX_t = r X_t dt + X_t dW_t$
Does it head to infinity or approach zero? Look at different small $$h$$. Does the solution go negative? Should it?
3. $dX_t = X_t dt -X_t^3 dt + \alpha dW_t$
Try different values of $$\alpha$$. For example, $$\alpha = 1/10, 1, 2, 10$$. How does it act? What is the long time behavior? How does it compare with what was learned about the stationary measure of this equation in the “One dimensional stationary measure” problem?
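A generic Euler–Maruyama routine to start from (plotting omitted). The check at the end applies it to equation 1, for which $$\mathbf E X_1 = e^{-1} X_0$$:

```python
import math
import random

random.seed(4)

def euler_maruyama(b, sigma, x0, h, n_steps):
    """One trajectory of dX = b(X) dt + sigma(X) dW via
    X_{n+1} = X_n + b(X_n) h + sigma(X_n) sqrt(h) eta_n."""
    x = x0
    path = [x]
    sqrt_h = math.sqrt(h)
    for _ in range(n_steps):
        x = x + b(x) * h + sigma(x) * sqrt_h * random.gauss(0.0, 1.0)
        path.append(x)
    return path

# Equation 1: dX = -X dt + dW with X_0 = 2, so E X_1 = 2 exp(-1).
n_paths = 10000
total = 0.0
for _ in range(n_paths):
    total += euler_maruyama(lambda x: -x, lambda x: 1.0, 2.0, 0.01, 100)[-1]
mean_end = total / n_paths
```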

## BDG Inequality

Consider $$I(t)$$ defined by $I(t)=\int_0^t \sigma(s,\omega)dB(s,\omega)$ where $$\sigma$$ is adapted and $$|\sigma(t,\omega)| \leq K$$ for all $$t$$ with probability one. Inspired by   problem “Homogeneous Martingales and Hermite Polynomials”  Let us set
\begin{align*}Y(t,\omega)=I(t)^4 - 6 I(t)^2\langle I \rangle(t) + 3 \langle I \rangle(t)^2 \ .\end{align*}

1. Quote the problem “Ito Moments” to show that $$\mathbb{E}\{ |Y(t)|^2\} < \infty$$ for all $$t$$. Then verify that $$Y(t)$$ is a martingale.
2. Show that $\mathbb{E}\{ I(t)^4 \} \leq 6 \mathbb{E} \big\{ I(t)^2\langle I \rangle(t) \big\}$
3. Recall the Cauchy–Schwarz inequality. In our language it states that
\begin{align*}
\mathbb{E} \{AB\} \leq (\mathbb{E}\{A^2\})^{1/2} (\mathbb{E}\{B^2\})^{1/2}
\end{align*}
Combine this with the previous inequality to show that\begin{align*}\mathbb{E}\{ I(t)^4 \} \leq 36 \mathbb{E} \big\{\langle I \rangle(t)^2 \big\} \end{align*}
4. We know that  $$I^4$$ is a submartingale (because $$x \mapsto x^4$$ is convex). Use the Kolmogorov-Doob inequality and all that we have just derived to show that
\begin{align*}
\mathbb{P}\left\{ \sup_{0\leq s \leq T}|I(s)|^4 \geq \lambda \right\} \leq ( \text{const}) \frac{ \mathbb{E}\left( \int_0^T \sigma(s,\omega)^2 ds\right)^2 }{\lambda}
\end{align*}

## Homogeneous Martingales and Hermite Polynomials

1. Let $$f(x,y):\mathbb{R}^2 \rightarrow \mathbb{R}$$ be a twice differentiable function in both $$x$$ and $$y$$. Let $$M(t)$$ be defined by $M(t)=\int_0^t \sigma(s,\omega) dB(s,\omega)$. Assume that $$\sigma(t,\omega)$$ is adapted and that $$\mathbb{E}\, M(t)^2 < \infty$$ for all $$t$$. (Here $$B(t)$$ is standard Brownian Motion.) Let $$\langle M \rangle(t)$$ be the quadratic variation process of $$M(t)$$. What equation does $$f$$ have to satisfy so that $$Y(t)=f(M(t),\langle M \rangle(t))$$ is again a martingale, assuming that $$\mathbf E\int_0^t \sigma(s,\omega)^2 ds < \infty$$?
2. Set
\begin{align*}
f_n(x,y) = \sum_{0 \leq m \leq \lfloor n/2 \rfloor} C_{n,m} x^{n-2m}y^m
\end{align*}
here $$\lfloor n/2 \rfloor$$ is the largest integer less than or equal to $$n/2$$. Set $$C_{n,0}=1$$ for all $$n$$. Then find a recurrence relation for $$C_{n,m+1}$$ in terms of $$C_{n,m}$$ so that $$Y(t)=f_n(B(t),t)$$ will be a martingale. Write out explicitly $$f_1(B(t),t), \dots, f_4(B(t),t)$$ as defined in the previous item.
3. * Do you recognize the recursion relation you obtained above as being associated to a famous recursion relation ? (Hint: Look at the title of the problem)
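A computational aid for part 2 (hedged: it assumes the answer to part 1 turns out to be the condition $$\partial_y f + \frac12 \partial_x^2 f = 0$$; verify that before trusting it). Matching the coefficient of $$x^{n-2m-2}y^m$$ then forces the recurrence coded below, and the resulting polynomials can be compared with the Hermite polynomials:

```python
from fractions import Fraction

def coeffs(n):
    """C_{n,0} = 1 and C_{n,m+1} = -C_{n,m} (n-2m)(n-2m-1) / (2(m+1)):
    the recurrence forced by requiring f_y + f_xx / 2 = 0 term by term."""
    c = [Fraction(1)]
    for m in range(n // 2):
        c.append(-c[m] * (n - 2 * m) * (n - 2 * m - 1) / Fraction(2 * (m + 1)))
    return c

def f(n, x, y):
    """Evaluate f_n(x, y) = sum_m C_{n,m} x^{n-2m} y^m."""
    return sum(c * x ** (n - 2 * m) * y ** m for m, c in enumerate(coeffs(n)))
```

Printing `coeffs(n)` for small `n` gives the explicit polynomials requested in part 2.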

## Ito Variation of Constants

For functions $$h(x)$$ and $$g(x)$$ and constant $$\beta>0$$, define $$X_t$$ as the solution to the following SDE
$dX_t = -\beta X_t dt + h(X_t)dt + g(X_t) dW_t$

where $$W_t$$ is a standard Brownian Motion.

1.  Show that $$X_t$$ can be written as
$X_t = e^{-\beta t} X_0 + \int_0^{t} e^{-\beta (t-s)} h(X_s) ds + \int_0^{t} e^{-\beta (t-s)} g(X_s) dW_s$
See exercise:  Ornstein–Uhlenbeck process for guidance.
2. Assuming that $$|h(x)| < K$$  and $$|g(x)|<K$$, show that  there exists a constant $$C(X_0)$$ so that
$\mathbf E [|X_t|] < C(X_0)$
for all $$t >0$$. It might be convenient to remember the Cauchy–Schwarz inequality.
3. * Assuming that $$|h(x)| < K$$  and $$|g(x)|<K$$, show that  for any integer $$p >0$$ there exists a constant $$C(p,X_0)$$ so that
$\mathbf E [|X_t|^{2p}] < C(p,X_0)$
for all $$t >0$$. See exercise: Ito Moments for guidance.

## Ornstein–Uhlenbeck process

For $$\alpha \in \mathbf R$$ and $$\beta >0$$, define $$X_t$$ as the solution to the following SDE
$dX_t = -\beta X_t dt + \alpha dW_t$

where $$W_t$$ is a standard Brownian Motion.

1.  Find $$d(e^{\beta t} X_t)$$ using Ito’s Formula.
2. Use the calculation of   $$d(e^{\beta t} X_t)$$ to show that
\begin{align}  X_t = e^{-\beta t} X_0 + \alpha \int_0^t e^{-\beta(t-s)} dW_s\end{align}
3. Conclude that $$X_t$$ is a Gaussian process (see exercise: Gaussian Ito Integrals). Find its mean and variance at time $$t$$.
4. * Let $$h(t)$$ and $$g(t)$$ be  deterministic functions of time and let $$Y_t$$ solve
$dY_t = -\beta Y_t dt + h(t)dt+ \alpha g(t) dW_t$
find a formula analogous to part 2 above for $$Y_t$$ and conclude that $$Y_t$$ is still Gaussian. Find its mean and variance.
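A simulation sketch to check parts 2 and 3 against (the parameters are arbitrary): from the representation in part 2, the mean and variance at time $$t$$ should be $$e^{-\beta t}X_0$$ and $$\alpha^2(1-e^{-2\beta t})/(2\beta)$$:

```python
import math
import random

random.seed(5)

beta, alpha, x0, T = 1.0, 0.5, 1.0, 1.0
h = 0.005
n_steps = int(T / h)
n_paths = 10000
sqrt_h = math.sqrt(h)

# Euler-Maruyama samples of X_T for dX = -beta X dt + alpha dW
samples = []
for _ in range(n_paths):
    x = x0
    for _ in range(n_steps):
        x = x - beta * x * h + alpha * sqrt_h * random.gauss(0.0, 1.0)
    samples.append(x)

mc_mean = sum(samples) / n_paths
mc_var = sum((s - mc_mean) ** 2 for s in samples) / (n_paths - 1)

exact_mean = x0 * math.exp(-beta * T)
exact_var = alpha ** 2 * (1.0 - math.exp(-2.0 * beta * T)) / (2.0 * beta)
```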

## Exponential Martingale

Let $$\sigma(t,\omega)$$ be adapted to the filtration generated by a standard Brownian Motion $$B_t$$ such that $$|\sigma(t,\omega)| < K$$ for some bound $$K$$. Let $$I(t,\omega)=\int_0^t \sigma(s,\omega) dB(s,\omega)$$.

1. Show that
$M_t=\exp\big\{\alpha I(t)-\frac{\alpha^2}{2}\int_0^t \sigma^2(s)ds \big\}$
is a martingale. It is called the  exponential martingale.
2. Show that $$M_t$$ satisfies the equation
$dM_t =\alpha M_t dI_t = \alpha M_t \sigma_t dB_t$

## Ito Integration by parts

Recall that if $$u(t)$$ and $$v(t)$$ are deterministic functions which are once differentiable then the classic integration by parts formula states that
$\int_0^t u(s) (\frac{dv}{ds})(s)\,ds = u(t)v(t) - u(0)v(0) - \int_0^t v(s) (\frac{du}{ds})(s)\,ds$

As is suggested by the formal relations

$(\frac{dv}{ds})(s)\,ds=dv(s) \qquad\text{and}\qquad (\frac{du}{ds})(s)\, ds=du(s)$

this can be rearranged  to state

$u(t)v(t)- u(0)v(0)= \int_0^t u(s) dv(s) + \int_0^t v(s) du(s)$

which holds for more general Riemann–Stieltjes integrals. Now consider two Ito processes $$X_t$$ and $$Y_t$$ given by

$dX_t=b_t dt + \sigma_t dW_t \qquad\text{and}\qquad dY_t=f_t dt + g_t dW_t$

where $$W_t$$ is a standard Brownian Motion. Derive the “Integration by Parts formula” for Ito calculus by applying Ito’s formula to $$X_tY_t$$. Compare this to the classical formula given above.
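A single-path numerical illustration of the formula you should obtain: with $$X_t = Y_t = W_t$$, the Ito integration by parts formula reduces to $$W_T^2 = 2\int_0^T W_s\, dW_s + T$$, where the extra $$T = \langle W, W\rangle_T$$ is exactly the term absent from the classical formula:

```python
import math
import random

random.seed(6)

T, n_steps = 1.0, 100000
h = T / n_steps
sqrt_h = math.sqrt(h)

w = 0.0
ito_sum = 0.0   # left-endpoint (Ito) sum:  sum_k W_{t_k} (W_{t_{k+1}} - W_{t_k})
for _ in range(n_steps):
    dw = sqrt_h * random.gauss(0.0, 1.0)
    ito_sum += w * dw
    w += dw

lhs = w * w                 # W_T^2
rhs = 2.0 * ito_sum + T     # 2 * (Ito integral) + <W, W>_T
```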

## Cross-quadratic variation: correlated Brownian Motions

Let $$W_t$$ and $$B_t$$ be two independent standard Brownian Motions. For $$\rho \in [0,1]$$ define
$Z_t = {\rho}\, W_t +\sqrt{1-\rho^2}\, B_t$

1. Why is $$Z_t$$ a standard Brownian Motion ?
2. Calculate  the cross-quadratic variations $$\langle Z,W\rangle_t$$ and $$\langle Z,B\rangle_t$$ .
3. For what values of $$\rho$$ is $$W_t$$ independent of $$Z_t$$ ?
4. ** Argue that two standard Brownian motions  are independent if and only if  their cross-quadratic variation is zero.
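A numerical sketch for part 2 (the comparison value, $$\langle Z,W\rangle_1 = \rho$$, is what the computation should give): build $$Z$$ increment by increment from independent $$W$$ and $$B$$ and sum the products of increments over $$[0,1]$$:

```python
import math
import random

random.seed(7)

rho = 0.6
n_steps = 100000
h = 1.0 / n_steps
sqrt_h = math.sqrt(h)

cross = 0.0   # discrete cross-quadratic variation:  sum dZ dW
for _ in range(n_steps):
    dw = sqrt_h * random.gauss(0.0, 1.0)
    db = sqrt_h * random.gauss(0.0, 1.0)
    dz = rho * dw + math.sqrt(1.0 - rho ** 2) * db
    cross += dz * dw
```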

## Paley-Wiener-Zygmund Integral

##### Definition of stochastic integrals by integration by parts

In 1933, Paley, Wiener, and Zygmund gave a definition of the stochastic integral based on integration by parts. The resulting integral agrees with the Ito integral when both are defined; however, the Ito integral has a much larger domain of definition. We now develop the integral as outlined by Paley, Wiener, and Zygmund:

1. Let $$f(t)$$ be a deterministic function with $$f'(t)$$ continuous. Prove that \begin{align*} \int_0^1 f(t)dW(t) = f(1)W(1) - \int_0^1 f'(t)W(t) dt\end{align*}
where the first integral is the Ito integral and the last integral is defined path-wise as the standard Riemann integral since the integrands are a.s. continuous.
2. Now let $$f$$ be as above with, in addition, $$f(1)=0$$, and “define” the stochastic integral $$\int_0^1 f(t) * dW(t)$$ by the relationship
\begin{align*}
\int_0^1 f(t) *dW(t) = - \int_0^1 f'(t) W(t) dt\;.
\end{align*}
Where the integral on the right hand side is the standard Riemann integral.

If the condition $$f(1)=0$$ seems unnatural to you, what this is really saying is that $$f$$ is supported on $$[0,1)$$. In many ways it would be most natural to consider $$f$$ on $$[0,\infty)$$ with compact support. Then $$f(\infty)=0$$. We consider the unit interval for simplicity.

3. Show by direct calculation (not by the Ito isometry) that
\begin{align*}
\mathbf E \left[ \left(\int_0^1 f(t)* dW(t)\right)^2\right]=\int_0^1 f^2(t) dt\;,
\end{align*}
Paley, Wiener, and Zygmund then used this isometry to extend the integral to any deterministic function in $$L^2[0,1]$$. This can be done since for any $$f \in L^2[0,1]$$, one can find a sequence of deterministic functions $$\phi_n \in C^1[0,1]$$ with $$\phi_n(1)=0$$ so that
\begin{equation*}
\int_0^1 (f(s) - \phi_n(s))^2ds \rightarrow 0 \text{ as } n \rightarrow \infty\,.
\end{equation*}
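A Monte Carlo sketch of the isometry in part 3 for the illustrative choice $$f(t)=\sin(\pi t)$$ (which satisfies $$f(1)=0$$): the sample variance of $$-\int_0^1 f'(t)W(t)\,dt$$ should be close to $$\int_0^1 \sin^2(\pi t)\,dt = \frac12$$:

```python
import math
import random

random.seed(8)

n_paths, n_steps = 2000, 500
h = 1.0 / n_steps
sqrt_h = math.sqrt(h)

vals = []
for _ in range(n_paths):
    w, riemann = 0.0, 0.0
    for k in range(n_steps):
        w += sqrt_h * random.gauss(0.0, 1.0)                 # W at t = (k+1) h
        t = (k + 1) * h
        riemann += math.pi * math.cos(math.pi * t) * w * h   # int f'(t) W(t) dt
    vals.append(-riemann)                                    # the PWZ integral

mean_v = sum(vals) / n_paths
var_v = sum((v - mean_v) ** 2 for v in vals) / (n_paths - 1)
```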

## Stratonovich integral: A first example

Let us denote the Stratonovich integral of a standard Brownian motion $$W(t)$$ with respect to itself by
\begin{align*}
\int_0^t W(s)\circ dW(t)\;.
\end{align*}
We then define the integral by
\begin{align*}
\int_0^t W(s)\circ dW(t) = \lim_{n \rightarrow \infty}
\sum_k\frac12\big(W(t_{k+1}^n)+W(t_{k}^n)\big)\big(W(t_{k+1}^n) -W(t_{k}^n)\big)
\end{align*}
where $$t_k^n=k\frac{t}n$$. Prove that with probability one
\begin{align*}
X_t= \int_0^t W(s)\circ dW(s)= \frac12 W(t)^2\;.
\end{align*}
Observe that this is what one would have if one used standard (as opposed to Ito) calculus. Calculate $$\mathbf E [ X_t | \mathcal{F}_s]$$ for $$s < t$$, where $$\mathcal{F}_t$$ is the $$\sigma$$-algebra generated by the Brownian motion. Is $$X_t$$ a martingale with respect to $$\mathcal{F}_t$$?
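A hint worth noticing (and illustrated below): the midpoint sum telescopes, $$\sum_k \frac12\big(W(t^n_{k+1})+W(t^n_k)\big)\big(W(t^n_{k+1})-W(t^n_k)\big) = \frac12 W(t)^2$$ exactly, for every partition and every path, so the value of each approximating sum is already $$\frac12 W(t)^2$$:

```python
import math
import random

random.seed(9)

t, n = 1.0, 1000
h = t / n
w_vals = [0.0]
for _ in range(n):
    w_vals.append(w_vals[-1] + math.sqrt(h) * random.gauss(0.0, 1.0))

# midpoint (Stratonovich) Riemann sum; telescopes to W(t)^2 / 2 exactly
strat_sum = sum(0.5 * (w_vals[k + 1] + w_vals[k]) * (w_vals[k + 1] - w_vals[k])
                for k in range(n))
```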

## Stratonovich integral

Let $$X_t$$ be an Ito processes with
\begin{align*}
dX_t&=f_tdt + g_tdW_t
\end{align*}
and $$B_t$$ be a second (possibly correlated with $$W$$) Brownian
motion. We define the Stratonovich integral $$\int X_t \circ dB_t$$ by
\begin{align*}
\int_0^T X_t \circ dB_t = \int_0^T X_t dB_t + \frac12 \int_0^T \;d\langle X, B \rangle_t
\end{align*}
Recall that if $$B_t=W_t$$ then $$d\langle B, W \rangle_t =dt$$ and it is zero if they are independent. Use this definition to calculate:

1. $$\int_0^t B_t \circ dB_t$$ (Explain why this agrees with the answer you obtained here).
2. Let $$F$$ be a smooth function. Find the equation satisfied by $$Y_t=F(B_t)$$, written in terms of Stratonovich integrals. (Use Ito’s formula to find the equation for $$dY_t$$ in terms of Ito integrals and then use the above definition to rewrite the Ito integrals as Stratonovich integrals “$$\circ dB_t$$”.) How does this compare to classical calculus?
3. (Integration by parts) Let $$Z_t$$ be a second Ito process satisfying
\begin{align*}
dZ_t&=b_tdt + \sigma_tdW_t\;.
\end{align*}
Calculate $$d(X_t Z_t)$$ using Ito’s formula and then write it in terms of Stratonovich integrals. Why is this part of the problem labeled integration by parts? (Write the integral form of the expression you derived for $$d(X_t Z_t)$$ in the two cases. What are the differences?)

## Expansion of Brownian Motion

Let $$\{\eta_k : k=0,\cdots\}$$ be a collection of mutually independent standard Gaussian random variables with mean zero and variance one. Define
\begin{align*}
X(t) =\frac{t}{\sqrt\pi} \eta_0 + \sqrt{\frac{2}{\pi}}\sum_{k=1}^\infty \frac{\sin(k t)}{k} \eta_k \;.
\end{align*}

1. Show that on the interval $$[0,\pi]$$, $$X(t)$$ has the same mean, variance and covariance as Brownian motion. (In fact, it is Brownian motion.)
2. ** Prove it is Brownian motion.

There are a number of ways to prove it is Brownian motion. One is to see $$X$$ as the limit of the finite sums, which are each continuous functions. Then prove that $$X$$ is the uniform limit of these continuous functions and hence is itself continuous.

Then observe that “formally” the time derivative of $$X(t)$$ is the sum of all frequencies with random amplitudes which are independent, identically distributed $$N(0,1)$$ Gaussian random variables. This is the origin of the term “white noise”, since all frequencies are equally represented, as in white light.

In the above calculations you may need the fact that
\begin{align*}
\min(t,s)= \frac{ts}\pi +\frac{2}\pi \sum_{k=1}^\infty \frac{\sin(k t)\sin(k s)}{k^2}\;.
\end{align*}

If you are interested, this can be shown by periodically extending $$\min(t,s)$$ to the interval $$[-\pi,\pi]$$ and then showing that it has the same Fourier transform as the right-hand side of the above expression. Then use the fact that two continuous functions with the same Fourier transform are equal on $$[-\pi,\pi]$$.
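The quoted identity is easy to check numerically, since the tail of the partial sums is bounded by $$(2/\pi)\sum_{k>K}k^{-2}$$. A sketch:

```python
import math

def min_series(t, s, K):
    """Partial sum of  ts/pi + (2/pi) sum_{k=1}^K sin(kt) sin(ks) / k^2."""
    acc = t * s / math.pi
    for k in range(1, K + 1):
        acc += (2.0 / math.pi) * math.sin(k * t) * math.sin(k * s) / (k * k)
    return acc

approx = min_series(1.0, 2.0, 5000)   # both arguments in [0, pi]; min is 1
```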

## Transforming Brownian Motion

Let $$W(t)$$ be standard Brownian motion on $$[0,\infty)$$.

1.  Show that $$-W(t)$$ is also a Brownian motion.
2. For any $$c>0$$, show that $$Y(t)= c W(t/c^2)$$ is again a standard Brownian motion.
3. Fix any $$s>0$$ and define $$Z(t)=W(t+s)-W(s)$$. Show that $$Z(t)$$ is a standard Brownian motion.
4. * Define $$X(t)= t W(1/t)$$ for $$t>0$$ and $$X(0)=0$$. Show that $$X(t)$$ is a standard Brownian Motion. Do this by arguing that $$X(t)$$ is continuous almost surely, that each $$X(t)$$ is a Gaussian random variable with mean zero and variance $$t$$, and that $$\text{Cov}(X(t),X(s))=\mathbf E X(t) X(s)$$ equals $$\min(t,s)$$. To prove continuity at $$t=0$$, notice that
$\lim_{t \rightarrow 0+} t W(1/t) = \lim_{s \rightarrow \infty} \frac{W(s)}{s}$

## Complex Exponential Martingale

Let $$W_t$$ be a standard Brownian Motion. Find $$\alpha \in \mathbb{R}$$ so that
$e^{i W_t + \alpha t}$
is a martingale (and show that it is a martingale).

## Martingale Brownian Squared

Let $$W_t$$ be standard Brownian Motion.

1. Find a function  $$f(t)$$ so that $$W_t^2 -f(t)$$ is a Martingale.
2. * Argue that in some sense this $$f(t)$$ is unique among increasing functions with finite variation. Compare this with the problem here.

## Martingale of squares

Let $$\{ Z_n: n=0,1, \cdots\}$$ be a collection of mutually independent random variables with $$Z_n$$ distributed as a Gaussian with mean zero and variance $$\sigma_n^2$$ for $$\sigma_n \in \mathbb{R}$$. Define $$X_n$$ by
\begin{equation*}
X_n=\sum_{k=0}^n Z_k^2\;.
\end{equation*}

1. Find a stochastic process $$Y_n$$ so that $$X_n-Y_n$$ is a Martingale with respect to the filtration $$\mathcal{F}_n=\sigma(Z_0,\cdots,Z_n)$$.
2. Find a second process $$\tilde Y_n$$ so that $$X_n-\tilde Y_n$$ is again a Martingale with respect to the filtration $$\mathcal{F}_n$$ but $$Y_n \neq \tilde Y_n$$ almost surely.

## A simple Ito Integral

Let $$\mathcal F_t$$ be a filtration of $$\sigma$$-algebra and $$W_t$$ a standard Brownian Motion adapted to the filtration. Define the adapted stochastic process $$X_t$$ by

$X_t = \alpha_0 \mathbf 1_{[0,\frac12]}(t) + \alpha_{\frac12} \mathbf 1_{(\frac12,1]}(t)$

where $$\alpha_0$$ is a random variable measurable with respect to $$\mathcal F_0$$ and $$\alpha_{\frac12}$$ is a random variable measurable with respect to $$\mathcal F_{\frac12}$$.

Write explicitly the Ito integral

$\int_0^t X_s dW_s$

and show by direct calculation that

$\mathbf E \Big( \int_0^t X_s dW_s\Big) = 0$

and

$\mathbf E \Big[\Big( \int_0^t X_s dW_s\Big)^2\Big] = \int_0^t \mathbf E X_s^2 ds$
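A Monte Carlo sketch with the illustrative (not prescribed) choices $$\alpha_0 = 1$$ and $$\alpha_{1/2} = W_{1/2}$$, which is $$\mathcal F_{1/2}$$-measurable: for $$t=1$$ the integral is $$W_{1/2} + W_{1/2}(W_1 - W_{1/2})$$, and the isometry predicts a second moment of $$\frac12 + \frac12\,\mathbf E W_{1/2}^2 = \frac34$$:

```python
import math
import random

random.seed(10)

n_paths = 40000
total, total_sq = 0.0, 0.0
for _ in range(n_paths):
    w_half = random.gauss(0.0, math.sqrt(0.5))   # W_{1/2}
    dw = random.gauss(0.0, math.sqrt(0.5))       # W_1 - W_{1/2}, independent
    i = 1.0 * w_half + w_half * dw               # the Ito integral at t = 1
    total += i
    total_sq += i * i

mean_i = total / n_paths                         # should be near 0
second_moment = total_sq / n_paths               # should be near 3/4
```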

## Quadratic Variation of Ito Integrals

Given stochastic processes $$f_t$$ and $$g_t$$ adapted to a filtration $$\mathcal F_t$$ satisfying

$\int_0^T\mathbf E f_t^2 dt < \infty\quad\text{and}\quad \int_0^T\mathbf E g_t^2 dt < \infty$

define

$M_t =\int_0^t f_s dW_s \quad \text{and}\quad N_t =\int_0^t g_s dW_s$

for some standard Brownian Motion also adapted to the filtration $$\mathcal F_t$$. Though it is not necessary, assume that there exists a $$K>0$$ so that $$|f_t| \leq K$$ and $$|g_t| \leq K$$ for all $$t$$ almost surely.

Let $$\{ t_i^{(n)} : i=0,\dots,N(n)\}$$ be a sequence of partitions of $$[0,T]$$ of the form

$0 =t_0^{(n)} < t_1^{(n)} <\cdots<t_{N(n)}^{(n)}=T$

such that

$\lim_{n \rightarrow \infty} \sup_i |t_{i+1}^{(n)} - t_i^{(n)}| = 0$

Defining

$V_n[M]=\sum_{i=1}^{N(n)} \big(M_{t_i} -M_{t_{i-1}}\big)^2$

and

$Q_n[M,N]= \sum_{i=1}^{N(n)} \big(M_{t_i} -M_{t_{i-1}}\big)\big(N_{t_i} -N_{t_{i-1}}\big)$

Clearly $$V_n[M]= Q_n[M,M]$$. Show  that the following points hold.

1. The “polarization equality” holds:
$4 Q_n[M,N] =V_n[M+N] -V_n[M-N]$
Hence it is enough to understand the limit as $$n \rightarrow \infty$$ of $$Q_n$$ or $$V_n$$.
2. $\mathbf E V_n[M]= \int_0^T \mathbf E f_t^2 dt$
3. * $$V_n[M]\rightarrow \int_0^T f_t^2 dt$$ as $$n \rightarrow \infty$$ in $$L^2$$. That is to say
$\lim_{n \rightarrow \infty}\mathbf E \Big[ \big( V_n[M] – \int_0^T f_t^2 dt \big)^2 \Big]=0$
This limit is called the Quadratic Variation of the Martingale $$M$$.
4. Using the results above, show that $$Q_n[M,N]\rightarrow \int_0^T f_t g_t dt$$ as $$n \rightarrow \infty$$ in $$L^2$$. This is called the cross-quadratic variation of $$M$$ and $$N$$.
5. * In the spirit of 3) above, prove by direct calculation that $$Q_n[M,N]\rightarrow \int_0^T f_t g_t dt$$ as $$n \rightarrow \infty$$ in $$L^2$$.

In this context, one writes $$\langle M \rangle_T$$ for the limit of the $$V_n[M]$$  which is called the quadratic variation process of $$M_T$$. Similarly  one writes  $$\langle M,N \rangle_T$$ for the  limit of $$Q_n[M,N]$$  which is called the cross-quadratic variation process of $$M_T$$ and $$N_T$$. Clearly $$\langle M \rangle_T = \langle M,M \rangle_T$$ and $$\langle M+N,M \rangle_T = \langle M, M+N\rangle_T= \langle M \rangle_T + \langle M, N\rangle_T$$.

## Covariance of Ito Integrals

Let $$f_t$$ and $$g_t$$ be two stochastic processes adapted to a filtration $$\mathcal F_t$$ such that

$\int_0^\infty \mathbf E (f_t^2) dt < \infty \qquad \text{and} \qquad \int_0^\infty \mathbf E (g_t^2) dt < \infty$

Let $$W_t$$ be a standard Brownian Motion also adapted to the filtration $$\mathcal F_t$$ and define the stochastic processes

$X_t =\int_0^t f_s dW_s \qquad \text{and} \qquad Y_t=\int_0^t g_s dW_s$

Calculate the following:

1. $$\mathbf E (X_t X_s )$$
2. $$\mathbf E (X_t Y_t )$$
Hint: You know how to compute $$\mathbf E (X_t^2 )$$ and $$\mathbf E (Y_t^2 )$$. Use the fact that $$(a+b)^2 = a^2 +2ab + b^2$$ to answer the question. Simplify the result to get a compact expression for the answer.
3. Show that if $$f_t=\sin(2\pi t)$$ and $$g_t=\cos(2\pi t)$$ then $$X_1$$ and $$Y_1$$ are independent random variables. (Hint: use the result here to deduce that $$X_1$$ and $$Y_1$$ are mean zero Gaussian random variables. Now use the above results to show that the covariance of $$X_1$$ and $$Y_1$$ is zero. Combining these two facts implies that the random variables are independent.)
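A simulation sketch for part 3: driving both integrals with the same Brownian increments, the sample covariance of $$X_1$$ and $$Y_1$$ should be near $$\int_0^1 \sin(2\pi t)\cos(2\pi t)\,dt = 0$$, even though the two variables are built from the same noise:

```python
import math
import random

random.seed(11)

n_paths, n_steps = 4000, 400
h = 1.0 / n_steps
sqrt_h = math.sqrt(h)

xs, ys = [], []
for _ in range(n_paths):
    x = y = 0.0
    for k in range(n_steps):
        dw = sqrt_h * random.gauss(0.0, 1.0)   # shared increment of W
        t = k * h
        x += math.sin(2.0 * math.pi * t) * dw
        y += math.cos(2.0 * math.pi * t) * dw
    xs.append(x)
    ys.append(y)

mx = sum(xs) / n_paths
my = sum(ys) / n_paths
cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / (n_paths - 1)
```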

## Gaussian Ito Integrals

In this problem, we will show that the Ito integral of a deterministic function is a Gaussian Random Variable.

Let $$\phi$$ be a deterministic elementary function. In other words, there exists a sequence of real numbers $$\{c_k : k=1,2,\dots,N\}$$ so that

$\sum_{k=1}^N c_k^2 < \infty$

and there exists a partition

$0=t_0 < t_1< t_2 <\cdots<t_N=T$

so that

$\phi(t) = \sum_{k=1}^N c_k \mathbf{1}_{[t_{k-1},t_k)}(t)$

1. Show that if $$W(t)$$ is a standard brownian motion then the Ito integral
$\int_0^T \phi(t) dW(t)$
is a Gaussian random variable with mean zero and variance
$\int_0^T \phi(t)^2 dt$
2. * Let $$f\colon [0,T] \rightarrow \mathbf R$$ be a deterministic function such that
$\int_0^T f(t)^2 dt < \infty$
Then it can be shown that there exists a sequence of  deterministic elementary functions $$\phi_n$$ as above such that
$\int_0^T (f(t)-\phi_n(t))^2 dt \rightarrow 0\qquad\text{as}\qquad n \rightarrow \infty$
Assuming this fact, let $$\psi_n$$ be the characteristic function of the random variable
$\int_0^T \phi_n(t) dW(t)$
Show that for all $$\lambda \in \mathbf R$$,
$\lim_{n \rightarrow \infty} \psi_n(\lambda) = \exp \Big( -\frac{\lambda^2}2 \big( \int_0^T f(t)^2 dt \big) \Big)$
Then use the convergence result here to conclude that
$\int_0^T f(t) dW(t)$
is a Gaussian Random Variable with mean zero and variance
$\int_0^T f(t)^2 dt$
by identifying the limit of the characteristic functions above.

## Solving a class of SDEs

Let us try a systematic procedure for solving SDEs which works for a class of SDEs. Let
\begin{align*}
X(t)=a(t)\left[ x_0 + \int_0^t b(s) dB(s) \right] +c(t) \ .
\end{align*}
Assuming $$a$$, $$b$$, and $$c$$ are differentiable, use Ito’s formula to find the equation for $$dX(t)$$ of the form
\begin{align*}
dX(t)=[ F(t) X(t) + H(t)] dt + G(t)dB(t)
\end{align*}
where $$F(t)$$, $$G(t)$$, and $$H(t)$$ are some functions of time depending on $$a$$, $$b$$, $$c$$, and maybe their derivatives. Solve the following equations by matching the coefficients. Let $$\alpha$$, $$\gamma$$ and $$\beta$$ be fixed numbers.

1. First consider
$dX_t = (-\alpha X_t + \gamma) dt + \beta dB_t$
with $$X_0 =x_0$$. Solve this for $$t \geq 0$$.
2. Now consider
$dY(t)=\frac{\beta-Y(t)}{1-t} dt + dB(t) ~,~~ 0\leq t < 1 ~,~~Y(0)=\alpha.$
Solve this for $$t\in[0,1]$$.
3. \begin{align*}
dX_t = -2 \frac{X_t}{1-t} dt + \sqrt{2 t(1-t)} dB_t ~,~~X(0)=\alpha
\end{align*}
Solve this for $$t\in[0,1]$$.

## Homogeneous Martingales and BDG Inequality

### Part I

1. Let $$f(x,y):\mathbb{R}^2 \rightarrow \mathbb{R}$$ be a twice differentiable function in both $$x$$ and $$y$$. Let $$M(t)$$ be defined by $M(t)=\int_0^t \sigma(s,\omega) dB(s,\omega)$. Assume that $$\sigma(t,\omega)$$ is adapted and that $$\mathbb{E}\, M(t)^2 < \infty$$ for all $$t$$. (Here $$B(t)$$ is standard Brownian Motion.) Let $$\langle M \rangle(t)$$ be the quadratic variation process of $$M(t)$$. What equation does $$f$$ have to satisfy so that $$Y(t)=f(M(t),\langle M \rangle(t))$$ is again a martingale, assuming that $$\mathbf E\int_0^t \sigma(s,\omega)^2 ds < \infty$$?
2. Set
\begin{align*}
f_n(x,y) = \sum_{0 \leq m \leq \lfloor n/2 \rfloor} C_{n,m} x^{n-2m}y^m
\end{align*}
here $$\lfloor n/2 \rfloor$$ is the largest integer less than or equal to $$n/2$$. Set $$C_{n,0}=1$$ for all $$n$$. Then find a recurrence relation for $$C_{n,m+1}$$ in terms of $$C_{n,m}$$ so that $$Y(t)=f_n(B(t),t)$$ will be a martingale. Write out explicitly $$f_1(B(t),t), \dots, f_4(B(t),t)$$ as defined in the previous item.

### Part II

Now consider $$I(t)$$ defined by $I(t)=\int_0^t \sigma(s,\omega)dB(s,\omega)$ where $$\sigma$$ is adapted and $$|\sigma(t,\omega)| \leq K$$ for all $$t$$ with probability one. In light of the above, let us set
\begin{align*}Y(t,\omega)=I(t)^4 - 6 I(t)^2\langle I \rangle(t) + 3 \langle I \rangle(t)^2 \ .\end{align*}

1. Quote the problem “Ito Moments” to show that $$\mathbb{E}\{ |Y(t)|^2\} < \infty$$ for all $$t$$. Then use the first part of this problem to conclude that $$Y$$ is a martingale.
2. Show that $\mathbb{E}\{ I(t)^4 \} \leq 6 \mathbb{E} \big\{ I(t)^2\langle I \rangle(t) \big\}$
3. Recall the Cauchy–Schwarz inequality. In our language it states that
\begin{align*}
\mathbb{E} \{AB\} \leq (\mathbb{E}\{A^2\})^{1/2} (\mathbb{E}\{B^2\})^{1/2}
\end{align*}
Combine this with the previous inequality to show that\begin{align*}\mathbb{E}\{ I(t)^4 \} \leq 36 \mathbb{E} \big\{\langle I \rangle(t)^2 \big\} \end{align*}
4. As discussed in class $$I^4$$ is a submartingale (because $$x \mapsto x^4$$ is convex). Use the Kolmogorov-Doob inequality and all that we have just derived to show that
\begin{align*}
\mathbb{P}\left\{ \sup_{0\leq s \leq T}|I(s)|^4 \geq \lambda \right\} \leq ( \text{const}) \frac{ \mathbb{E}\left( \int_0^T \sigma(s,\omega)^2 ds\right)^2 }{\lambda}
\end{align*}