


Formula Sheet, E3106, Introduction to Stochastic Models, Columbia University, Fall 2005, Professor S. Kou.

Ch. 4
$n$-step transition probabilities: $P_{ij}^{(n)} = P(X_{n+m} = j \mid X_m = i)$. Then the Chapman–Kolmogorov equation gives $P_{ij}^{(n+m)} = \sum_{k=0}^{\infty} P_{ik}^{(n)} P_{kj}^{(m)}$. Hence $P^{(n)} = P^{n}$, where $P^{(n)} = (P_{ij}^{(n)})$ and $P = (P_{ij})$.
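A quick numerical sketch of $P^{(n+m)} = P^{(n)} P^{(m)}$, i.e. $P^{(n)} = P^{n}$ (the 3-state matrix below is a made-up example; numpy assumed):

```python
import numpy as np

# Made-up 3-state transition matrix (each row sums to 1); any stochastic matrix works.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

n, m = 4, 3
P_n = np.linalg.matrix_power(P, n)       # n-step transition probabilities P^(n)
P_m = np.linalg.matrix_power(P, m)       # m-step transition probabilities P^(m)
P_nm = np.linalg.matrix_power(P, n + m)  # (n+m)-step transition probabilities

# Chapman-Kolmogorov: P^(n+m) = P^(n) P^(m)
print(np.allclose(P_nm, P_n @ P_m))      # True
```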
State $i$ is recurrent if $\sum_{n=1}^{\infty} P_{ii}^{(n)} = \infty$, and transient if $\sum_{n=1}^{\infty} P_{ii}^{(n)} < \infty$. (3) If $i$ is recurrent and $i \leftrightarrow j$, then $j$ is also recurrent; therefore recurrence and transience are class properties. (4) Positive recurrent: $E(T) < \infty$, where $T$ is the time until the process returns to state $i$; null recurrent: $E(T) = \infty$. (5) Positive recurrent aperiodic states are called ergodic states.
Stationary distribution: $\pi_j = \sum_{i=0}^{\infty} \pi_i P_{ij}$, $j \ge 0$, with $\sum_{i=0}^{\infty} \pi_i = 1$. Four interpretations of $\pi_j$. (1) Limiting probabilities: for an irreducible ergodic MC, $\lim_{n \to \infty} P_{ij}^{(n)} = \pi_j$. (2) Stationary probabilities: if $P(X_0 = j) = \pi_j$, then $P(X_n = j) = \pi_j$. (3) Long-run average frequencies: let $a_j(N)$ be the number of periods an irreducible MC spends in state $j$ during time periods $1, 2, \ldots, N$; then as $N \to \infty$, $a_j(N)/N \to \pi_j$. (4) $m_{jj} = 1/\pi_j$, where $m_{jj}$ is the expected number of transitions until the MC returns to $j$, starting at $j$.
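A minimal sketch (same kind of made-up matrix) that solves $\pi = \pi P$ with $\sum_i \pi_i = 1$ and checks the limiting and return-time interpretations:

```python
import numpy as np

# Made-up irreducible, aperiodic 3-state chain.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

# Solve pi P = pi together with sum(pi) = 1, written as an overdetermined linear system.
k = P.shape[0]
A = np.vstack([P.T - np.eye(k), np.ones(k)])
b = np.zeros(k + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi)                                  # stationary probabilities pi_j
print(np.linalg.matrix_power(P, 100))      # every row is approximately pi (limiting probabilities)
print(1.0 / pi)                            # expected return times m_jj = 1/pi_j
```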
Gambler's ruin problem: starting from fortune $i$, the probability $P_i$ of reaching $N$ before $0$ is
$$P_i = \frac{1 - (q/p)^i}{1 - (q/p)^N} \quad \text{if } p \ne \tfrac{1}{2}; \qquad P_i = \frac{i}{N} \quad \text{if } p = \tfrac{1}{2}.$$
(3) Let $P_T$ be the transition matrix restricted to the transient states. Then $S = (I - P_T)^{-1}$, where $S = (s_{ij})$ and $s_{ij}$ is the expected number of time periods that the MC spends in state $j$, starting at $i$, for transient states $i$ and $j$. (4) Suppose $i$ and $j$ are transient states, and let $f_{ij}$ be the probability of ever making a transition into state $j$, starting at $i$. Then $f_{ij} = (s_{ij} - \delta_{ij})/s_{jj}$, where $\delta_{ij} = 1$ if $i = j$ and $\delta_{ij} = 0$ if $i \ne j$.
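A sketch of the gambler's ruin quantities above for a made-up chain with $p = 0.45$ and $N = 6$: the closed-form $P_i$, the matrix $S = (I - P_T)^{-1}$ over the transient states $1, \ldots, N-1$, and $f_{ij} = (s_{ij} - \delta_{ij})/s_{jj}$:

```python
import numpy as np

p, N = 0.45, 6                   # made-up win probability per bet and target fortune
q = 1.0 - p

# Closed-form probability of reaching N before 0, starting from fortune i.
def win_prob(i, p, N):
    q = 1.0 - p
    if abs(p - 0.5) < 1e-12:
        return i / N
    return (1 - (q / p) ** i) / (1 - (q / p) ** N)

# Transition matrix P_T restricted to the transient states 1, ..., N-1.
PT = np.zeros((N - 1, N - 1))
for i in range(1, N):
    if i + 1 <= N - 1:
        PT[i - 1, i] = p          # one step up
    if i - 1 >= 1:
        PT[i - 1, i - 2] = q      # one step down

S = np.linalg.inv(np.eye(N - 1) - PT)   # s_ij: expected periods spent in j, starting at i
F = (S - np.eye(N - 1)) / np.diag(S)    # f_ij = (s_ij - delta_ij) / s_jj

print([round(win_prob(i, p, N), 4) for i in range(1, N)])
print(S.round(3))
print(F.round(3))
```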
Ch. 5
Exponential distribution with rate $\lambda$: (a) density $f(x) = \lambda e^{-\lambda x}$, $x \ge 0$; (b) distribution function $F(x) = P(X \le x) = 1 - e^{-\lambda x}$, $x \ge 0$, so $P(X \ge x) = e^{-\lambda x}$, $x \ge 0$; (c) $E[X] = 1/\lambda$ and $Var[X] = 1/\lambda^2$.
If $X_1, \ldots, X_n$ are independent exponentials with rates $\lambda_1, \ldots, \lambda_n$, then $\min(X_1, \ldots, X_n)$ is exponential with rate $\sum_{i=1}^{n} \lambda_i$, since $P(\min(X_1, \ldots, X_n) > x) = \exp\{-x \sum_{i=1}^{n} \lambda_i\}$.
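A small sampling sketch (rates chosen arbitrarily) of $E[X] = 1/\lambda$, $Var[X] = 1/\lambda^2$, and the rate of the minimum of independent exponentials:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0
x = rng.exponential(scale=1.0 / lam, size=200_000)  # numpy parametrizes by the mean 1/lambda
print(x.mean(), 1 / lam)                             # ~ 0.5
print(x.var(), 1 / lam**2)                           # ~ 0.25

# Minimum of independent exponentials with rates 1, 2, 3 should have rate 1 + 2 + 3 = 6.
rates = np.array([1.0, 2.0, 3.0])
samples = rng.exponential(scale=1.0 / rates, size=(200_000, 3))
m = samples.min(axis=1)
print(1 / m.mean(), rates.sum())                     # estimated rate ~ 6
```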
Poisson process with rate $\lambda$. (1) A counting process with $N(0) = 0$, independent and stationary increments, and $P(N(t+s) - N(s) = n) = e^{-\lambda t} (\lambda t)^n / n!$, $n = 0, 1, 2, \ldots$; hence $E[N(t) - N(s)] = \lambda(t-s)$ and $Var[N(t) - N(s)] = \lambda(t-s)$. (2) A counting process with independent increments such that $N(0) = 0$, $P(N(h) = 1) = \lambda h + o(h)$, and $P(N(h) \ge 2) = o(h)$. (3) Let $\{X_i\}$ be i.i.d. exponential($\lambda$), $S_n = \sum_{i=1}^{n} X_i$, and $S_0 = 0$; then $N(t) = \max\{n \ge 0 : S_n \le t\}$. Here $\{X_i\}$ are called interarrival times, and $S_n$ is called the $n$th arrival time. (4) Splitting property: given a Poisson process $N(t)$ with parameter $\lambda$, if a type-I event happens with probability $p$ and a type-II event with probability $1 - p$, then $N_1(t)$ and $N_2(t)$ are independent Poisson processes with parameters $\lambda p$ and $\lambda(1-p)$, respectively.
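A simulation sketch of a Poisson process built from exponential interarrival times, with a check of the splitting property ($\lambda = 3$, $t = 10$, $p = 0.4$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, t, p, trials = 3.0, 10.0, 0.4, 20_000

counts, counts1 = [], []
for _ in range(trials):
    # Build arrival times S_n = X_1 + ... + X_n until they exceed t; N(t) = number kept.
    n, s = 0, rng.exponential(1.0 / lam)
    while s <= t:
        n += 1
        s += rng.exponential(1.0 / lam)
    type1 = (rng.random(n) < p).sum()   # classify each of the n events independently
    counts.append(n)
    counts1.append(type1)

counts = np.array(counts)
counts1 = np.array(counts1)
counts2 = counts - counts1

print(counts.mean(), lam * t)                    # E[N(t)] = lambda * t
print(counts1.mean(), p * lam * t)               # E[N_1(t)] = p * lambda * t
print(counts2.mean(), (1 - p) * lam * t)         # E[N_2(t)] = (1 - p) * lambda * t
print(np.corrcoef(counts1, counts2)[0, 1])       # near 0, consistent with independence
```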
Ch. 6
Continuous-time Markov chain: $P[X(t+s) = j \mid X(s) = i, \mathcal{F}_s] = P[X(t+s) = j \mid X(s) = i]$.
When the process leaves state $i$, it next enters state $j$ with probability $P_{ij}$, where $\sum_{j} P_{ij} = 1$.
Birth and death process: $P_{0,1} = 1$, $P_{i,i+1} = \lambda_i/(\lambda_i + \mu_i)$, $P_{i,i-1} = \mu_i/(\lambda_i + \mu_i)$ for $i \ge 1$.
Examples: Yule process, linear growth model, M/M/1/∞ and M/M/s/∞ queues.
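A sketch of the M/M/1/∞ queue simulated as a birth and death process with the jump probabilities above (arrival rate $\lambda = 1$ and service rate $\mu = 1.5$ are made-up values):

```python
import numpy as np

rng = np.random.default_rng(2)
lam, mu, T = 1.0, 1.5, 50_000.0        # made-up arrival rate, service rate, and time horizon

state, clock = 0, 0.0
time_in_state = {}                      # total time spent in each state
while clock < T:
    rate = lam if state == 0 else lam + mu        # exponential holding rate in the current state
    hold = rng.exponential(1.0 / rate)
    time_in_state[state] = time_in_state.get(state, 0.0) + hold
    clock += hold
    if state == 0:
        state = 1                                  # P_{0,1} = 1
    elif rng.random() < lam / (lam + mu):
        state += 1                                 # P_{i,i+1} = lambda/(lambda + mu)
    else:
        state -= 1                                 # P_{i,i-1} = mu/(lambda + mu)

total = sum(time_in_state.values())
for s in sorted(time_in_state)[:5]:
    print(s, round(time_in_state[s] / total, 4))   # long-run fraction of time in each state
```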
Note that $\{\tau(a) \le t\} = \{\max_{0 \le s \le t} B(s) \ge a\}$.
Black–Scholes formula. The price of a European call option with strike $K$ and maturity $T$ is
$$u_0 = E^*\left[ e^{-rT} (S(T) - K)^+ \right],$$
where under the risk-neutral probability $P^*$
$$S(t) = S(0) \exp\left\{ \left( r - \frac{\sigma^2}{2} \right) t + \sigma W(t) \right\}.$$
Evaluating the expectation yields the following formula for the price of the call option:
$$u_0 = S(0)\, \Phi(\mu_+) - K e^{-rT}\, \Phi(\mu_-),$$
where
$$\mu_\pm := \frac{1}{\sigma \sqrt{T}} \left[ \log(S(0)/K) + \left( r \pm \frac{\sigma^2}{2} \right) T \right]
\quad \text{and} \quad
\Phi(z) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{z} e^{-u^2/2}\, du.$$
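A sketch that evaluates the closed-form call price and cross-checks it by Monte Carlo under the risk-neutral dynamics (all parameter values below are made up):

```python
import math
import numpy as np

def call_price(S0, K, r, sigma, T):
    """u0 = S(0) * Phi(mu_plus) - K * exp(-rT) * Phi(mu_minus)."""
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
    mu_plus = (math.log(S0 / K) + (r + sigma**2 / 2) * T) / (sigma * math.sqrt(T))
    mu_minus = (math.log(S0 / K) + (r - sigma**2 / 2) * T) / (sigma * math.sqrt(T))
    return S0 * Phi(mu_plus) - K * math.exp(-r * T) * Phi(mu_minus)

S0, K, r, sigma, T = 100.0, 105.0, 0.03, 0.2, 1.0   # made-up parameters
print(call_price(S0, K, r, sigma, T))

# Monte Carlo under P*: S(T) = S(0) exp((r - sigma^2/2) T + sigma W(T)), with W(T) ~ N(0, T).
rng = np.random.default_rng(3)
Z = rng.standard_normal(1_000_000)
ST = S0 * np.exp((r - sigma**2 / 2) * T + sigma * math.sqrt(T) * Z)
print(math.exp(-r * T) * np.maximum(ST - K, 0.0).mean())  # close to the closed-form price
```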
Ch. 7 Renewal Theory
Let $S_n = \sum_{i=1}^{n} X_i$, where the $X_i$'s are i.i.d. random variables.
$N(t) \ge n \Leftrightarrow S_n \le t$, and $\tau(t) = N(t) + 1$.
Wald's equation: if $T$ is a stopping time for $\{X_i\}$ with $E[T] < \infty$, then $E\left[\sum_{i=1}^{T} X_i\right] = E[T]\, E[X]$.
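A simulation sketch of Wald's equation with the stopping time $\tau(t) = N(t) + 1$ (exponential interarrivals with rate 2 and $t = 5$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)
lam, t, trials = 2.0, 5.0, 50_000

sums, taus = [], []
for _ in range(trials):
    total, n = 0.0, 0
    # Add interarrival times until the partial sum first exceeds t:
    # that takes exactly tau(t) = N(t) + 1 terms.
    while total <= t:
        total += rng.exponential(1.0 / lam)
        n += 1
    sums.append(total)    # sum_{i=1}^{tau(t)} X_i
    taus.append(n)        # tau(t)

print(np.mean(sums))                    # E[ sum_{i=1}^{tau(t)} X_i ]
print(np.mean(taus) * (1.0 / lam))      # E[tau(t)] * E[X], should agree with the line above
```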
The equation
$$A(t) = a(t) + \int_{0}^{t} A(t - x)\, dF(x),$$
where $F(x) = P(X \le x)$, is called the renewal equation. Explicit solutions of the renewal equation are available only for some special cases of $F(x)$, such as the uniform distribution.
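A sketch of solving the renewal equation numerically on a grid, assuming $F$ uniform on $[0, 1]$ and $a(t) = 1$ (both choices are illustrative, and the forward-stepping scheme is just one simple discretization):

```python
import numpy as np

h, T = 0.01, 5.0                        # grid spacing and time horizon
ts = np.arange(0.0, T + h, h)
a = np.ones_like(ts)                    # illustrative choice a(t) = 1

F = lambda x: np.clip(x, 0.0, 1.0)      # F uniform on [0, 1]
dF = F(ts[1:]) - F(ts[:-1])             # increments F(jh) - F((j-1)h)

A = np.zeros_like(ts)
A[0] = a[0]
for k in range(1, len(ts)):
    # A(t_k) ~ a(t_k) + sum_{j=1}^{k} A(t_k - j*h) * [F(jh) - F((j-1)h)]
    A[k] = a[k] + np.dot(A[k - 1::-1][:k], dF[:k])

print(A[::100])                         # A(t) at t = 0, 1, 2, ...
```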
As $t \to \infty$,
$$\frac{\tau(t)}{t} \to \frac{1}{\mu}, \qquad \frac{N(t)}{t} \to \frac{1}{\mu}, \qquad E\left[\frac{\tau(t)}{t}\right] \to \frac{1}{\mu}, \qquad E\left[\frac{N(t)}{t}\right] \to \frac{1}{\mu},$$
where $\mu = E[X]$.
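A last sketch checking $N(t)/t \to 1/\mu$ by simulation (Uniform(0, 2) interarrival times, so $\mu = 1$, chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(5)
mu = 1.0                                      # mean of the Uniform(0, 2) interarrival times

for t in [10, 100, 1_000, 10_000]:
    # Draw more than enough interarrival times for the arrival times to pass t.
    X = rng.uniform(0.0, 2.0, size=int(2 * t / mu) + 100)
    S = np.cumsum(X)
    N_t = np.searchsorted(S, t, side='right')  # N(t) = #{n : S_n <= t}
    print(t, N_t / t, 1.0 / mu)                # the ratio approaches 1/mu
```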