Formula Sheet for Introduction to Stochastic Models, Study notes of Probability and Statistics

A formula sheet for the Introduction to Stochastic Models course taught by Professor S. Kou at Columbia University in Fall 2005. The sheet covers topics such as the Chapman-Kolmogorov equation, classification of states, recurrent and transient states, the Gambler's ruin problem, and the exponential distribution. It is suitable as study notes, a summary, or exam preparation material.

Formula Sheet, E3106, Introduction to Stochastic Models
Columbia University, Fall 2005, Professor S. Kou.
Ch. 4
1. Chapman-Kolmogorov equation. Consider a discrete Markov chain X_n. Let P_{ij}^{(n)} = P(X_{n+m} = j | X_m = i). Then

    P_{ij}^{(n+m)} = Σ_{k=0}^∞ P_{ik}^{(n)} P_{kj}^{(m)}.

Hence P^{(n)} = P^n, where P^{(n)} = (P_{ij}^{(n)}) and P = (P_{ij}).
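
The identity P^{(n)} = P^n is easy to check numerically. A minimal sketch in Python with NumPy, using a made-up two-state chain (the matrix is illustrative, not from the sheet):

```python
import numpy as np

# Illustrative two-state transition matrix (not from the sheet).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# n-step transition probabilities two ways: the matrix power P^n,
# and the Chapman-Kolmogorov recursion P^(n+m) = P^(n) P^(m).
P4 = np.linalg.matrix_power(P, 4)
P4_ck = np.linalg.matrix_power(P, 3) @ P   # n = 3, m = 1

assert np.allclose(P4, P4_ck)
```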
2. Classification of states. Accessible, i → j: if P_{ij}^{(n)} > 0 for some n ≥ 0. Communicate, i ←→ j: if i → j and j → i. Class: all states that communicate. Irreducible: if there is only one class.
3. Recurrent and transient. Let f_i = P(ever come back to i | starts at i). Transient: if f_i < 1. Recurrent: if f_i = 1.
(1) Suppose state i is transient. Then the probability that the MC will be in state i for exactly n time periods is f_i^{n−1}(1 − f_i); and E[X] = 1/(1 − f_i), where X is the number of time periods the MC will be in state i.
(2) State i is recurrent if Σ_{n=1}^∞ P_{ii}^{(n)} = ∞, and transient if Σ_{n=1}^∞ P_{ii}^{(n)} < ∞.
(3) If i is recurrent and i ←→ j, then j is also recurrent. Therefore, recurrence and transience are class properties.
(4) Positive recurrent: if E[T] < ∞, where T is the time until the process returns to state i. Null recurrent: if E[T] = ∞.
(5) Positive recurrent aperiodic states are called ergodic states.
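
The series criterion in (2) can be checked numerically for a small chain. A sketch, assuming an illustrative four-state gambler's-ruin chain (not from the sheet): for a transient state i the partial sums of P_{ii}^{(n)} converge, and the limit can be cross-checked against S = (I − P_T)^{−1} from item 5(3) of this chapter:

```python
import numpy as np

# Gambler's ruin on {0,1,2,3} with p = 1/2; states 1 and 2 are transient.
# (This small chain is an illustration, not an example from the sheet.)
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])

# Partial sums of P_ii^(n): finite when state i is transient.
total = 0.0
Pn = np.eye(4)
for n in range(1, 200):
    Pn = Pn @ P
    total += Pn[1, 1]

# For transient states, sum_{n>=1} P_ii^(n) = s_ii - 1, S = (I - P_T)^(-1).
S = np.linalg.inv(np.eye(2) - P[1:3, 1:3])
assert abs(total - (S[0, 0] - 1.0)) < 1e-6
```

Here the partial sums converge to 1/3, matching s_{11} − 1 = 4/3 − 1.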
4. π_i is defined to be the solution of π_j = Σ_{i=0}^∞ π_i P_{ij}, j ≥ 0, Σ_{i=0}^∞ π_i = 1. Four interpretations of π_j:
(1) "Limiting probabilities". For an irreducible ergodic MC, lim_{n→∞} P_{ij}^{(n)} = π_j.
(2) "Stationary probabilities". If P(X_0 = j) = π_j, then P(X_n = j) = π_j.
(3) "Long-run average frequencies". Let a_j(N) be the number of periods an irreducible MC spends in state j during time periods 1, 2, ..., N. Then as N → ∞, a_j(N)/N → π_j.
(4) m_{jj} = 1/π_j, where m_{jj} is the expected number of transitions until the MC returns to j, starting at j.
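
A minimal sketch of interpretation (1): solve π = πP together with Σ_i π_i = 1 as a linear system, then compare with the rows of P^n for large n. The two-state chain is illustrative, not from the sheet:

```python
import numpy as np

# Illustrative two-state chain (not from the sheet).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Solve pi (P - I) = 0 together with sum(pi) = 1 via least squares
# (the stacked system is overdetermined but consistent).
A = np.vstack([P.T - np.eye(2), np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Interpretation (1): every row of P^n converges to pi.
Pn = np.linalg.matrix_power(P, 200)
assert np.allclose(Pn[0], pi, atol=1e-8)
assert np.allclose(Pn[1], pi, atol=1e-8)
```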
5. Gambler's ruin problem.
(1) It is a MC with P_{00} = P_{NN} = 1, and P_{i,i+1} = p = 1 − P_{i,i−1}, i = 1, ..., N − 1.
(2) Let P_i be the probability of reaching N before 0, starting with $i. Then P_i = p P_{i+1} + q P_{i−1}. Furthermore,

    P_i = (1 − (q/p)^i) / (1 − (q/p)^N), if p ≠ 1/2;    P_i = i/N, if p = 1/2.

(3) Let P_T be the transition matrix restricted to the transient states. Then S = (I − P_T)^{−1}, where S = (s_{ij}) and s_{ij} is the expected number of time periods that the MC is in state j, starting at i, for transient states i and j.
(4) Suppose i and j are transient states. Let f_{ij} be the probability of ever making a transition into state j, starting at i. Then f_{ij} = (s_{ij} − δ_{ij}) / s_{jj}, where δ_{ij} = 1 if i = j, and δ_{ij} = 0 if i ≠ j.
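
The closed form for P_i can be cross-checked against a direct solve of the recursion P_i = pP_{i+1} + qP_{i−1} with boundary conditions P_0 = 0, P_N = 1. A sketch with illustrative parameters N = 10, p = 0.6:

```python
import numpy as np

# Gambler's ruin with N = 10, p = 0.6 (illustrative parameters).
N, p = 10, 0.6
q = 1.0 - p

# Closed form: P_i = (1 - (q/p)^i) / (1 - (q/p)^N), valid for p != 1/2.
def win_prob(i):
    r = q / p
    return (1.0 - r**i) / (1.0 - r**N)

# Solve the recursion P_i = p P_{i+1} + q P_{i-1}, P_0 = 0, P_N = 1,
# as a linear system in the unknowns P_0, ..., P_N.
A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)
A[0, 0] = 1.0            # P_0 = 0
A[N, N] = 1.0            # P_N = 1
b[N] = 1.0
for i in range(1, N):
    A[i, i] = 1.0
    A[i, i + 1] = -p
    A[i, i - 1] = -q
P_vec = np.linalg.solve(A, b)

assert all(abs(P_vec[i] - win_prob(i)) < 1e-10 for i in range(N + 1))
```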
Ch. 5
1. Exponential distribution: basic properties.
(a) Density: f(x) = λe^{−λx}, x ≥ 0.
(b) Distribution function: F(x) = P(X ≤ x) = 1 − e^{−λx}, x ≥ 0; hence P(X ≥ x) = e^{−λx}, x ≥ 0.
(c) E[X] = 1/λ and Var[X] = 1/λ².

2. Exponential distribution: more properties.
(a) Memoryless: P(X > t + s | X > t) = P(X > s); E[X | X ≥ t] = t + E[X].
(b) Sum: X_1 + ··· + X_n ~ Γ(n, λ), the gamma distribution with parameters n and λ, whose density is λe^{−λt}(λt)^{n−1}/(n − 1)!.
(c) P(X_1 < X_2) = λ_1/(λ_1 + λ_2), where X_1 ~ Exp(λ_1) and X_2 ~ Exp(λ_2) are two independent exponential random variables.
(d) P(min{X_1, ..., X_n} > x) = exp{−x Σ_{i=1}^n λ_i}, i.e. the minimum of independent exponentials is exponential with rate Σ_{i=1}^n λ_i.
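
Properties (c) and (d) are easy to verify by simulation. A sketch with illustrative rates λ_1 = 2 and λ_2 = 3; by (d), min(X_1, X_2) is exponential with rate λ_1 + λ_2 = 5, so its mean is 0.2, and by (c), P(X_1 < X_2) = 2/5:

```python
import random

random.seed(0)
lam1, lam2 = 2.0, 3.0   # illustrative rates, not from the sheet
n = 200_000

# Monte Carlo check of P(X1 < X2) = lam1/(lam1 + lam2) and of
# E[min(X1, X2)] = 1/(lam1 + lam2) for independent exponentials.
wins = 0
min_sum = 0.0
for _ in range(n):
    x1 = random.expovariate(lam1)
    x2 = random.expovariate(lam2)
    wins += x1 < x2
    min_sum += min(x1, x2)

assert abs(wins / n - lam1 / (lam1 + lam2)) < 0.01     # target 0.4
assert abs(min_sum / n - 1.0 / (lam1 + lam2)) < 0.01   # target 0.2
```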

3. Hazard rate: r(t) = f(t)/(1 − F(t)), where f and F are the density and distribution functions, respectively. For an exponential random variable, r(t) = λ.
4. Poisson processes: three definitions (plus the splitting property).
(1) A counting process with independent increments such that N(0) = 0 and

    P(N(t + s) − N(s) = n) = e^{−λt}(λt)^n / n!.

Consequently E[N(t) − N(s)] = λ(t − s) and Var[N(t) − N(s)] = λ(t − s).
(2) A counting process with stationary independent increments such that N(0) = 0 and P(N(h) = 1) = λh + o(h), P(N(h) ≥ 2) = o(h), as h → 0.
(3) Let X_i be i.i.d. with distribution Exp(λ). Let S_n = Σ_{i=1}^n X_i and S_0 = 0. Then N(t) = max{n ≥ 0 : S_n ≤ t}. Here {X_i} are called the interarrival times, and S_n is called the nth arrival time.
(4) Splitting property. Given a Poisson process N(t) with rate λ, if each event is of type I with probability p and of type II with probability 1 − p, then N_1(t) and N_2(t) are independent Poisson processes with rates λp and λ(1 − p), respectively.
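
Definition (3) gives a direct way to simulate a Poisson process: generate Exp(λ) interarrival times and count how many arrivals land in [0, t]. A sketch with illustrative parameters λ = 2, t = 5, for which N(t) should have mean and variance λt = 10:

```python
import random

random.seed(1)
lam, t, trials = 2.0, 5.0, 20_000   # illustrative parameters

# Definition (3): N(t) counts how many partial sums S_n of i.i.d.
# Exp(lam) interarrival times fall in [0, t].
def poisson_count(lam, t):
    n, s = 0, 0.0
    while True:
        s += random.expovariate(lam)
        if s > t:
            return n
        n += 1

counts = [poisson_count(lam, t) for _ in range(trials)]
mean = sum(counts) / trials
var = sum((c - mean) ** 2 for c in counts) / trials

# N(t) ~ Poisson(lam * t), so mean and variance are both lam * t = 10.
assert abs(mean - lam * t) < 0.15
assert abs(var - lam * t) < 0.5
```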

Ch. 6
1. The first definition of a continuous-time Markov chain:

    P[X(t + s) = j | X(s) = i, F_s] = P[X(t + s) = j | X(s) = i],

i.e. given the present state X(s), the future is independent of the past F_s.

2. The second definition of a continuous-time Markov chain.
(i) The amount of time spent in state i is an exponential random variable with rate v_i.
(ii) Let P_{ij} = P{next enters state j | the process leaves state i}. Then P_{ii} = 0, and Σ_j P_{ij} = 1.

3. Birth-death process: birth rates λ_i, death rates μ_i. It is a continuous-time Markov chain with v_0 = λ_0 and v_i = λ_i + μ_i for i ≥ 1 (since the min of two exponential random variables is again exponential, with rate equal to the sum of the two rates), and

    P_{0,1} = 1,  P_{i,i+1} = λ_i/(λ_i + μ_i),  P_{i,i−1} = μ_i/(λ_i + μ_i).

Examples: Yule process, linear growth model, M/M/1/∞ and M/M/s/∞ queues.

Note that {τ(a) ≤ t} = {max_{0≤s≤t} B(s) ≥ a}, where B is a standard Brownian motion and τ(a) is its first passage time to level a.

1. Option pricing: binomial tree; no-arbitrage argument.
2. Black-Scholes formula for the European call option. The price u_0 of the European call option is

    u_0 = E*[e^{−rT}(S(T) − K)^+],

where under the risk-neutral probability P*,

    S(t) = S(0) exp{(r − σ²/2)t + σW(t)}.

Evaluating the expectation yields the following formula for the price of the call option:

    u_0 = S(0)·Φ(μ_+) − Ke^{−rT}·Φ(μ_−),

where

    μ_± := [log(S(0)/K) + (r ± σ²/2)T] / (σ√T)  and  Φ(z) = (1/√(2π)) ∫_{−∞}^z e^{−u²/2} du.
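
A minimal numerical check of the formula, with illustrative parameters S(0) = K = 100, r = 0.05, σ = 0.2, T = 1: evaluate the closed form via the error function, then re-price by Monte Carlo under the risk-neutral dynamics above.

```python
import math
import random

# Illustrative parameters (not from the sheet).
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

def Phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# mu_± = [log(S0/K) + (r ± sigma^2/2) T] / (sigma sqrt(T)).
def mu(sign):
    return (math.log(S0 / K) + (r + sign * sigma**2 / 2.0) * T) / (sigma * math.sqrt(T))

# Closed form: u0 = S0 Phi(mu_+) - K e^{-rT} Phi(mu_-).
u0 = S0 * Phi(mu(+1)) - K * math.exp(-r * T) * Phi(mu(-1))

# Monte Carlo under the risk-neutral dynamics
# S(T) = S0 exp{(r - sigma^2/2) T + sigma W(T)}, with W(T) ~ N(0, T).
random.seed(2)
n = 200_000
total = 0.0
for _ in range(n):
    w = random.gauss(0.0, math.sqrt(T))
    ST = S0 * math.exp((r - sigma**2 / 2.0) * T + sigma * w)
    total += math.exp(-r * T) * max(ST - K, 0.0)

assert abs(total / n - u0) < 0.15
```

For these parameters the closed form gives u_0 ≈ 10.45, a standard benchmark value for the at-the-money call.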

Ch. 7. Renewal Theory
Let S_n = Σ_{i=1}^n X_i, where the X_i are i.i.d. random variables.
1. Renewal process: N(t) = max{n : S_n ≤ t}. The first passage time is τ(t) = min{n : S_n > t}, and the overshoot is γ(t) = S_{τ(t)} − t. Note that if X_i ≥ 0, then

    N(t) ≥ n ⇔ S_n ≤ t,  and  τ(t) = N(t) + 1.

2. Wald's equation: Suppose T is a stopping time. If E[T] < ∞, then

    E[Σ_{i=1}^T X_i] = E[T]·E[X].
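
Wald's equation can be sanity-checked by simulation. A sketch with a hypothetical stopping rule, not from the sheet: stop at the first X_n exceeding 2, which depends only on X_1, ..., X_n and is therefore a valid stopping time, with X_i ~ Exp(1):

```python
import random

random.seed(3)
lam = 1.0
trials = 50_000

# Stopping rule: T = first n with X_n > 2 (a valid stopping time,
# since the event {T = n} depends only on X_1, ..., X_n).
sum_total = 0.0
T_total = 0
for _ in range(trials):
    s, n = 0.0, 0
    while True:
        x = random.expovariate(lam)
        s += x
        n += 1
        if x > 2.0:
            break
    sum_total += s
    T_total += n

# Wald: E[sum_{i=1}^T X_i] = E[T] E[X], with E[X] = 1/lam.
lhs = sum_total / trials
rhs = (T_total / trials) * (1.0 / lam)
assert abs(lhs - rhs) / rhs < 0.02
```

Here T is geometric with success probability e^{−2}, so E[T] = e^2 ≈ 7.39, and both sides come out near 7.39.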

3. Renewal equation. Suppose X_i ≥ 0. Then an equation of the form

    A(t) = a(t) + ∫_0^t A(t − x) dF(x),

where F(x) = P(X ≤ x), is called the renewal equation. Explicit solutions of the renewal equation are available only for some special cases of F(x), such as the uniform distribution.

4. Elementary renewal theorem: As t → ∞,

    τ(t)/t → 1/μ,  N(t)/t → 1/μ,  E[τ(t)/t] → 1/μ,  E[N(t)/t] → 1/μ,

where μ = E[X].
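
The limit N(t)/t → 1/μ is easy to see in simulation. A sketch with interarrival times uniform on [0, 2], an illustrative choice giving μ = E[X] = 1:

```python
import random

random.seed(4)

# Renewal process with interarrival times uniform on [0, 2], so mu = 1
# (illustrative distribution, not from the sheet).
def renewal_count(t):
    n, s = 0, 0.0
    while True:
        s += random.uniform(0.0, 2.0)
        if s > t:
            return n
        n += 1

t = 50_000.0
mu = 1.0
Nt = renewal_count(t)

# Elementary renewal theorem: N(t)/t -> 1/mu as t -> infinity.
assert abs(Nt / t - 1.0 / mu) < 0.01
```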