Stochastic Calculus

1. (20 points) Throughout the course we repeatedly used the property that the stochastic integral
∫_0^t φ(s) · dW(s)    (W is a Brownian motion)
is a martingale under suitable conditions. In this problem we study an example where the stochastic
integral is not a martingale.
Consider a 3-dimensional Brownian motion
B(t) = (B1(t), B2(t), B3(t)).
That is, the components of B are independent Brownian motions. We assume that B1(0) = B2(0) =
B3(0) = 1.
(a) (5 points) Consider the function
u(x) = 1/√(x1² + x2² + x3²).
Show that u is a harmonic function on R³ \ {0}. That is,
∂²u/∂x1²(x) + ∂²u/∂x2²(x) + ∂²u/∂x3²(x) = 0
for x ≠ 0.
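As a sanity check on part (a), the Laplacian of u can be computed symbolically (a minimal sketch, assuming sympy is available):

```python
import sympy as sp

x1, x2, x3 = sp.symbols("x1 x2 x3", real=True)
u = 1 / sp.sqrt(x1**2 + x2**2 + x3**2)

# Laplacian: sum of the three second partial derivatives
lap = sum(sp.diff(u, v, 2) for v in (x1, x2, x3))

print(sp.simplify(lap))  # -> 0, so u is harmonic away from the origin
```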
(b) (5 points) Write |B(t)|⁻¹ = u(B(t)) as a stochastic integral with respect to the Brownian motion.
(c) (5 points) Find the mean and standard deviation of B1(t)² + B2(t)² + B3(t)².
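A quick Monte Carlo check for part (c): since each component satisfies B_i(t) ~ N(1, t), we have E[B_i(t)²] = 1 + t, so the mean of the sum is 3(1 + t). An illustrative simulation at t = 2:

```python
import numpy as np

rng = np.random.default_rng(0)
t, n = 2.0, 200_000

# Each component B_i(t) ~ N(1, t) since B_i(0) = 1
B = 1.0 + np.sqrt(t) * rng.standard_normal((n, 3))
R2 = (B**2).sum(axis=1)  # B1(t)^2 + B2(t)^2 + B3(t)^2

# E[B_i(t)^2] = Var + mean^2 = t + 1, so E[R2] = 3(1 + t)
print(R2.mean(), 3 * (1 + t))  # sample mean close to 9
print(R2.std())
```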
(d) (5 points) Using the results of (a)-(c), argue heuristically that u(B(t)) → 0 as t → ∞, and explain
why this suggests that u(B(t)) is not a martingale. (To get some intuition, you may simulate
paths of the process and see what happens.)
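A minimal simulation of the kind suggested in part (d): sample paths of the 3-dimensional Brownian motion started at (1, 1, 1) and track u(B(t)) = |B(t)|⁻¹ (step count and horizon are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, T = 20, 1_000, 100.0
dt = T / n_steps

# 3-D Brownian motion started at (1, 1, 1)
B = np.ones((n_paths, 3))
traj = np.empty((n_paths, n_steps))
for k in range(n_steps):
    B += np.sqrt(dt) * rng.standard_normal((n_paths, 3))
    traj[:, k] = 1.0 / np.linalg.norm(B, axis=1)

# Paths of u(B(t)) = |B(t)|^{-1} drift toward 0 for large t
print(traj[:, 0].mean(), traj[:, -1].mean())
```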
2. (30 points) Consider the Vasicek model where the short rate process satisfies the dynamics
dr = (b − ar)dt + σdW(t)
under the risk-neutral probability measure Q. The unit of time is one year.
(a) (10 points) Suppose at time t = 0 we observe that the short rate is r(0) = 0.03, and we also
observe the following term structure:
p(0, 0) = 1, p(0, 0.1) = 0.997, p(0, 0.5) = 0.982, p(0, 1) = 0.965,
p(0, 2) = 0.931, p(0, 5) = 0.8845, p(0, 10) = 0.87.
Here p(t, T) is the price of the T-bond at time t. Calibrate a Vasicek model based on the data,
and explain carefully what you do. Comment on the fit of the model.
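One way to start the calibration in (a) is to compare model-implied bond prices with the observed term structure. The affine formula below is the standard Vasicek zero-coupon bond price for dr = (b − ar)dt + σdW; the crude grid search is only a sketch (a proper calibration would use a least-squares optimizer):

```python
import numpy as np

def vasicek_p(T, a, b, sigma, r0):
    """Vasicek zero-coupon bond price p(0, T) for dr = (b - a r)dt + sigma dW."""
    if T == 0:
        return 1.0
    B = (1 - np.exp(-a * T)) / a
    A = (B - T) * (a * b - 0.5 * sigma**2) / a**2 - sigma**2 * B**2 / (4 * a)
    return np.exp(A - B * r0)

mats   = np.array([0.1, 0.5, 1, 2, 5, 10])
prices = np.array([0.997, 0.982, 0.965, 0.931, 0.8845, 0.87])
r0 = 0.03

# Crude grid search over (a, b, sigma) minimizing squared pricing error
best, best_err = None, np.inf
for a in np.linspace(0.05, 2.0, 40):
    for b in np.linspace(0.0005, 0.1, 40):
        for sigma in (0.005, 0.01, 0.02):
            model = np.array([vasicek_p(T, a, b, sigma, r0) for T in mats])
            err = ((model - prices) ** 2).sum()
            if err < best_err:
                best, best_err = (a, b, sigma), err

print(best, best_err)
```

The size of the residual error is itself informative for the "comment on the fit" part: a three-parameter Vasicek curve cannot match an arbitrary term structure exactly.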
(b) (10 points) Price, using Monte Carlo for example, a European call option on a 7-bond with
maturity date at time 5, with strike price 0.9. (The underlying bond matures in 7 years, but the
option expires in 5 years.)
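A Monte Carlo sketch for part (b): simulate the short rate under Q up to the option expiry, value the remaining 7-bond with the standard Vasicek affine formula, and discount the payoff along each path. The parameters below are illustrative placeholders; in practice take them from the calibration in (a).

```python
import numpy as np

def vasicek_p(t, T, a, b, sigma, r):
    """Vasicek price at time t of a T-bond, given short rate r at time t."""
    tau = T - t
    B = (1 - np.exp(-a * tau)) / a
    A = (B - tau) * (a * b - 0.5 * sigma**2) / a**2 - sigma**2 * B**2 / (4 * a)
    return np.exp(A - B * r)

# Illustrative parameters (replace with calibrated values from (a))
a, b, sigma, r0 = 0.5, 0.015, 0.01, 0.03
T_opt, T_bond, K = 5.0, 7.0, 0.9
n_paths, n_steps = 50_000, 500
dt = T_opt / n_steps

rng = np.random.default_rng(2)
r = np.full(n_paths, r0)
integral = np.zeros(n_paths)  # approximates \int_0^5 r(s) ds per path
for _ in range(n_steps):
    integral += r * dt
    r += (b - a * r) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

# Discounted call payoff on p(5, 7) with strike 0.9
payoff = np.maximum(vasicek_p(T_opt, T_bond, a, b, sigma, r) - K, 0.0)
price = np.mean(np.exp(-integral) * payoff)
print(price)
```

Rerunning this with perturbed parameters is one concrete way to approach the sensitivity question in (c).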
(c) (10 points) Discuss the sensitivity of the call price with respect to the accuracy of the parameters.
This is an open-ended question and requires programming.
3. (30 points) Consider two stocks whose dynamics are given by
dS1(t) = S1(t)(µ1dt + σ1dW1(t)),
dS2(t) = S2(t)(µ2dt + σ2dW2(t)),
where µ1, µ2, σ1, σ2 are constants, and W1 and W2 are independent standard Brownian motions.
Assume S1(0) = S2(0) = 1.
(a) (5 points) Consider a constant-weighted portfolio (w, 1 − w) of the two stocks, where w ∈ R is
the weight of stock 1. Derive an explicit formula for the value Vw(t) of the constant-weighted
portfolio depending on the weight w. We assume that Vw(0) = 1.
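Part (a) asks for a closed-form derivation; the following Euler sketch of the self-financing, continuously rebalanced constant-weight portfolio (with illustrative parameters) gives a numerical object to compare your formula against:

```python
import numpy as np

rng = np.random.default_rng(3)
mu1, mu2, s1, s2 = 0.08, 0.05, 0.2, 0.3   # illustrative parameters
w, t, n = 0.6, 1.0, 20_000
dt = t / n

S1 = S2 = V = 1.0
for _ in range(n):
    dW1 = np.sqrt(dt) * rng.standard_normal()
    dW2 = np.sqrt(dt) * rng.standard_normal()
    r1 = mu1 * dt + s1 * dW1          # return of stock 1 over one step
    r2 = mu2 * dt + s2 * dW2          # return of stock 2 over one step
    V *= 1 + w * r1 + (1 - w) * r2    # self-financing: dV/V = w dS1/S1 + (1-w) dS2/S2
    S1 *= 1 + r1
    S2 *= 1 + r2

print(S1, S2, V)
```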
(b) (5 points) Fix t. If the paths of the Brownian motions are fixed, what is the value of w which
maximizes Vw(t)?
(c) (20 points) We have a collection of strategies indexed by the weight w ∈ R. Suppose at time 0 you have $1 and you distribute it among all these portfolios according to the standard normal distribution. That is,
– We invest $ (1/√(2π)) e^{−w²/2} dw in the portfolio with weight w.
So we are holding a portfolio of portfolios (think of each strategy as a stock). At time t, the amount of money with weight w becomes
$ (1/√(2π)) e^{−w²/2} × (Vw(t)/Vw(0)) dw,
and our entire portfolio has value
V̄(t) = ∫_{−∞}^{∞} (1/√(2π)) e^{−w²/2} (Vw(t)/Vw(0)) dw.
If you consider the distribution of your capital, you have a probability distribution. Show that
[(1/√(2π)) e^{−w²/2} × (Vw(t)/Vw(0))] / V̄(t)
is the density of a normal distribution, and find the corresponding mean and variance in terms of the Brownian motions. (This is similar to computing the posterior distribution in Bayesian statistics.)
If you need hints, look up the paper Asymptotically Optimal Portfolios by Jamshidian (1992).
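The mechanism behind (c) is Gaussian conjugacy: if log(Vw(t)/Vw(0)) is a quadratic αw² + βw + γ in w with α < 1/2, then multiplying the standard normal density by e^{αw² + βw + γ} and renormalizing gives another normal density, with mean β/(1 − 2α) and variance 1/(1 − 2α). A numeric sketch with illustrative α and β (not the values from this problem):

```python
import numpy as np

alpha, beta = 0.1, 0.4   # illustrative quadratic: log(V_w(t)/V_w(0)) = alpha w^2 + beta w + const
w = np.linspace(-10, 10, 200_001)
dw = w[1] - w[0]

prior = np.exp(-w**2 / 2) / np.sqrt(2 * np.pi)
weights = prior * np.exp(alpha * w**2 + beta * w)   # proportional to e^{-w^2/2} V_w(t)/V_w(0)
density = weights / (weights * dw).sum()            # dividing by (the analogue of) V-bar(t)

mean = (w * density * dw).sum()
var = ((w - mean) ** 2 * density * dw).sum()
# Completing the square predicts N(beta/(1-2*alpha), 1/(1-2*alpha))
print(mean, var)  # ≈ 0.5 and 1.25
```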
4. (40 points) Consider a stochastic optimal control problem. The state process is one-dimensional and is given by
dX^u_t = µ(t, X^u_t, u_t)dt + σ(t, X^u_t, u_t)dW_t,
where W_t is a one-dimensional Brownian motion.
Consider the gain functional given by
H^u(t, x) = E_{t,x}[ exp( ∫_t^T F(s, X^u_s, u_s) ds + G(X^u_T) ) ].
The value function is given by
H(t, x) = sup_{u ∈ A_{t,T}} H^u(t, x).
Note that this is different from the usual stochastic control problem we considered as there is an
exponential inside the expectation.
(a) (10 points) Formulate a dynamic programming principle (DPP) along the lines of Section 20 of
the notes.
(b) (10 points) Using the argument in Section 21 of the notes, derive the HJB equation for this
problem. Show that it is given by
∂_t H(t, x) + sup_u ( H(t, x)F(t, x, u) + L^u H(t, x) ) = 0,    H(T, x) = e^{G(x)}.
(c) (20 points) Using the DPP and HJB derived above, solve the problem
sup_u E[ e^{∫_0^T u_t² dt + X_T²} ],
where the state process evolves according to the SDE
dX^u_t = (X^u_t + u_t)dt + dW_t.
When solving the resulting HJB, use the ansatz
H(t, x) = e^{A(t)x² + B(t)},
where A and B are deterministic functions of time. By solving, I mean finding the value function for all t and x, and finding the optimal control.
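Plugging the ansatz into the HJB requires its partial derivatives; with H(t, x) = e^{A(t)x² + B(t)} they are

```latex
\partial_t H = \bigl(A'(t)x^2 + B'(t)\bigr)H, \qquad
\partial_x H = 2A(t)\,x\,H, \qquad
\partial_{xx} H = \bigl(2A(t) + 4A(t)^2 x^2\bigr)H .
```

After performing the supremum over u, matching the coefficients of x² and the constant term yields a Riccati-type ODE for A(t) and an ODE for B(t), with terminal conditions read off from H(T, x) = e^{x²}.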