ENGR201 FORMULA SHEET

Ch. 1

Sample mean:

$$\bar{X} = \frac{x_1 + x_2 + \cdots + x_n}{n} = \frac{\sum_{i=1}^{n} x_i}{n} = \frac{\sum_{i=1}^{m} x_i f_i}{\sum_{i=1}^{m} f_i}$$

Sample variance:

$$s^2 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n-1} = \frac{\sum_{i=1}^{m} f_i (x_i - \bar{x})^2}{\sum_{i=1}^{m} f_i - 1}$$

Suppose that we have a data set $X_1, \dots, X_n$. The $p$th percentile is conceptually meant to be a value $Q_p$ such that proportion $p$ (or $100p\%$) of the observations fall below $Q_p$.

$Q_p$ = the observation of rank $(n+1)p$ (obtained by interpolation if necessary).

Boxplot limits:

|← 1.5 IQR →|← 1.5 IQR →|← IQR →|← 1.5 IQR →|← 1.5 IQR →|

(the middle IQR segment is the box; the inner fences sit 1.5 IQR beyond the quartiles and the outer fences 3 IQR beyond)
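As a quick illustration, here is a minimal Python sketch of these Ch. 1 definitions (the data list and helper names are made up for illustration; the percentile helper implements the rank-$(n+1)p$ rule with linear interpolation):

```python
def sample_mean(xs):
    # X-bar = (x1 + ... + xn) / n
    return sum(xs) / len(xs)

def sample_variance(xs):
    # s^2 = sum((xi - xbar)^2) / (n - 1)
    xbar = sample_mean(xs)
    return sum((x - xbar) ** 2 for x in xs) / (len(xs) - 1)

def percentile(xs, p):
    # Q_p = observation of rank (n+1)p, interpolating between
    # neighboring order statistics when the rank is fractional.
    s = sorted(xs)
    rank = (len(s) + 1) * p          # 1-based rank
    lo = max(int(rank), 1)
    hi = min(lo + 1, len(s))
    frac = rank - int(rank)
    return s[lo - 1] + frac * (s[hi - 1] - s[lo - 1])

data = [12, 15, 11, 18, 14, 20, 16]
print(sample_mean(data), sample_variance(data), percentile(data, 0.25))
```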
Ch. 2

The conditional probability of an event B given an event A, denoted as $P(B|A)$, is

$$P(B|A) = \frac{P(A \cap B)}{P(A)}$$

$$P(A \cap B) = P(B|A)P(A) = P(A|B)P(B)$$

Assume $E_1, E_2, \dots, E_k$ are mutually exclusive and exhaustive events. Then the total probability of B is:

$$P(B) = P(B \cap E_1) + P(B \cap E_2) + \cdots + P(B \cap E_k)$$
$$= P(B|E_1)P(E_1) + P(B|E_2)P(E_2) + \cdots + P(B|E_k)P(E_k)$$

Two events are independent if any of the following equivalent statements is true:
1. $P(A|B) = P(A)$
2. $P(B|A) = P(B)$
3. $P(A \cap B) = P(A)P(B)$

Bayes' Theorem:

$$P(A|B) = \frac{P(B|A)P(A)}{P(B)}$$
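A small numeric check of the total-probability and Bayes formulas; all probabilities below are hypothetical illustration values:

```python
# Hypothetical diagnostic-test numbers, purely for illustration.
p_disease = 0.01                  # P(A): prior
p_pos_given_disease = 0.95        # P(B|A)
p_pos_given_healthy = 0.05        # P(B|A')

# Total probability: P(B) = P(B|A)P(A) + P(B|A')P(A')
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)

# Bayes' theorem: P(A|B) = P(B|A)P(A) / P(B)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 4))  # ~0.161
```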
For a discrete random variable X with possible values $x_1, x_2, \dots, x_n$, a probability mass function is a function such that
1. $f(x_i) \geq 0$
2. $\sum_{i=1}^{n} f(x_i) = 1$
3. $f(x_i) = P(X = x_i)$

The cumulative distribution function of a discrete random variable X, denoted as $F(x)$, is
1. $F(x) = P(X \leq x) = \sum_{x_i \leq x} f(x_i)$
2. $0 \leq F(x) \leq 1$
3. If $x \leq y$, then $F(x) \leq F(y)$

The mean or expected value of the discrete random variable X, denoted as $\mu$ or $E(X)$, is

$$\mu = E(X) = \sum_{x} x f(x)$$

The variance of X, denoted as $\sigma^2$ or $V(X)$, is

$$\sigma^2 = V(X) = E(X - \mu)^2 = \sum_{x} (x - \mu)^2 f(x) = \sum_{x} x^2 f(x) - \mu^2$$

The standard deviation of X is $\sigma = \sqrt{\sigma^2}$.

Let X be a discrete random variable with probability mass function $f(x)$ and $h(x)$ be an arbitrary function of X. Then

$$E[h(X)] = \sum_{x} h(x) f(x)$$

In the special case that $h(X) = aX + b$ for any constants a and b,

$$E[h(X)] = aE(X) + b \qquad V[h(X)] = a^2 V(X)$$
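A minimal sketch applying the PMF, mean, and variance definitions above; the pmf table is a made-up example:

```python
from math import sqrt, isclose

# Hypothetical PMF of a loaded three-sided spinner (illustration only).
pmf = {1: 0.2, 2: 0.5, 3: 0.3}
assert isclose(sum(pmf.values()), 1.0)       # property 2: probabilities sum to 1

mu = sum(x * f for x, f in pmf.items())       # mu = E(X) = sum of x f(x)
var = sum(x**2 * f for x, f in pmf.items()) - mu**2   # sigma^2 = E(X^2) - mu^2
sigma = sqrt(var)

# For h(X) = aX + b: E[h(X)] = aE(X) + b and V[h(X)] = a^2 V(X)
a, b = 2, 5
Eh = sum((a * x + b) * f for x, f in pmf.items())
assert isclose(Eh, a * mu + b)
print(mu, var, sigma)
```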
The joint probability mass function of the discrete random variables X and Y, denoted as $f_{XY}(x, y)$, satisfies
1. $f_{XY}(x, y) \geq 0$
2. $\sum_{x} \sum_{y} f_{XY}(x, y) = 1$
3. $f_{XY}(x, y) = P(X = x, Y = y)$

The joint probability density function for the continuous random variables X and Y, denoted as $f_{XY}(x, y)$, satisfies
1. $f_{XY}(x, y) \geq 0$ for all x, y
2. $\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_{XY}(x, y)\,dx\,dy = 1$
3. For any region R of two-dimensional space, $P((X, Y) \in R) = \iint_{R} f_{XY}(x, y)\,dx\,dy$

If the joint probability density function of random variables X and Y is $f_{XY}(x, y)$, the marginal probability density functions of X and Y are

$$f_X(x) = \int_{y} f_{XY}(x, y)\,dy \quad \text{and} \quad f_Y(y) = \int_{x} f_{XY}(x, y)\,dx$$

where the first integral is over all points in the range of (X, Y) for which $X = x$ and the second integral is over all points in the range of (X, Y) for which $Y = y$.

Expected Value of a Function of Two Random Variables:

$$E[h(X, Y)] = \begin{cases} \sum \sum h(x, y) f_{XY}(x, y) & \text{X, Y discrete} \\ \int \int h(x, y) f_{XY}(x, y)\,dx\,dy & \text{X, Y continuous} \end{cases}$$
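For the discrete case, marginals and $E[h(X, Y)]$ can be checked directly; the joint PMF below is a made-up illustration (the continuous case replaces the sums with integrals):

```python
from collections import defaultdict

# Hypothetical joint PMF f_XY(x, y), illustration only.
joint = {(0, 0): 0.2, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.4}

# Marginals: f_X(x) = sum over y of f_XY(x, y), and symmetrically for f_Y.
fX, fY = defaultdict(float), defaultdict(float)
for (x, y), p in joint.items():
    fX[x] += p
    fY[y] += p

# E[h(X, Y)] = sum over (x, y) of h(x, y) f_XY(x, y)
h = lambda x, y: (x + y) ** 2
Eh = sum(h(x, y) * p for (x, y), p in joint.items())
print(dict(fX), dict(fY), Eh)
```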
The covariance between the random variables X and Y, denoted as cov(X, Y) or $\sigma_{XY}$, is

$$\sigma_{XY} = E[(X - \mu_X)(Y - \mu_Y)] = E(XY) - \mu_X \mu_Y$$

The correlation between random variables X and Y, denoted as $\rho_{XY}$, is

$$\rho_{XY} = \frac{\mathrm{cov}(X, Y)}{\sqrt{V(X)V(Y)}} = \frac{\sigma_{XY}}{\sigma_X \sigma_Y}$$

Mean of a Linear Function: If $Y = c_1 X_1 + c_2 X_2 + \cdots + c_n X_n$,

$$E(Y) = c_1 E(X_1) + c_2 E(X_2) + \cdots + c_n E(X_n)$$

Variance of a Linear Function: If $X_1, X_2, \dots, X_n$ are random variables, and $Y = c_1 X_1 + c_2 X_2 + \cdots + c_n X_n$, then in general,

$$V(Y) = c_1^2 V(X_1) + c_2^2 V(X_2) + \cdots + c_n^2 V(X_n) + 2 \sum_{i<j} \sum c_i c_j \,\mathrm{cov}(X_i, X_j)$$

If $X_1, X_2, \dots, X_n$ are independent,

$$V(Y) = c_1^2 V(X_1) + c_2^2 V(X_2) + \cdots + c_n^2 V(X_n)$$
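The covariance, correlation, and linear-combination formulas can be verified on the same kind of hypothetical joint PMF (illustration values only):

```python
from math import sqrt

# Hypothetical joint PMF f_XY(x, y), illustration only.
joint = {(0, 0): 0.2, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.4}

mu_x = sum(x * p for (x, y), p in joint.items())
mu_y = sum(y * p for (x, y), p in joint.items())
var_x = sum(x**2 * p for (x, y), p in joint.items()) - mu_x**2
var_y = sum(y**2 * p for (x, y), p in joint.items()) - mu_y**2

# sigma_XY = E(XY) - mu_x mu_y
cov = sum(x * y * p for (x, y), p in joint.items()) - mu_x * mu_y
rho = cov / sqrt(var_x * var_y)

# V(c1 X + c2 Y) = c1^2 V(X) + c2^2 V(Y) + 2 c1 c2 cov(X, Y)
c1, c2 = 3, -2
var_lin = c1**2 * var_x + c2**2 * var_y + 2 * c1 * c2 * cov
print(cov, rho, var_lin)
```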

Mean and Variance of an Average: If $\bar{X} = (X_1 + X_2 + \cdots + X_n)/n$ with $E(X_i) = \mu$ for $i = 1, 2, \dots, n$,

$$E(\bar{X}) = \mu$$

If $X_1, X_2, \dots, X_n$ are also independent with $V(X_i) = \sigma^2$ for $i = 1, 2, \dots, n$,

$$V(\bar{X}) = \sigma^2 / n$$
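A quick Monte Carlo sanity check that $V(\bar{X}) = \sigma^2/n$; the Uniform(0, 1) choice and the sample sizes are arbitrary illustration values:

```python
import random

random.seed(0)
n, trials = 25, 20000
# X_i ~ Uniform(0, 1): mu = 0.5, sigma^2 = 1/12, so V(X-bar) should be ~1/(12n)
means = [sum(random.random() for _ in range(n)) / n for _ in range(trials)]
grand_mean = sum(means) / trials
var_of_mean = sum((m - grand_mean) ** 2 for m in means) / (trials - 1)
print(grand_mean, var_of_mean, 1 / (12 * n))  # ~0.5, ~0.00333, 0.00333...
```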

Ch. 3

Propagation of Error:

Bias = $\mu$ − true value

If $X_1, \dots, X_n$ are independent measurements, $c_i$ are constants, and $Y = c_1 X_1 + \cdots + c_n X_n$,

$$\sigma_Y^2 = c_1^2 \sigma_{X_1}^2 + \cdots + c_n^2 \sigma_{X_n}^2$$

If the $X_i$ are dependent:

$$\sigma_Y \leq |c_1| \sigma_{X_1} + \cdots + |c_n| \sigma_{X_n}$$

Propagation of error formula: If $U = U(X_1, \dots, X_n)$, where the $X_i$ are random variables,

$$\sigma_U = \sqrt{\left(\frac{\partial U}{\partial X_1}\right)^2 \sigma_{X_1}^2 + \cdots + \left(\frac{\partial U}{\partial X_n}\right)^2 \sigma_{X_n}^2}$$

and the relative uncertainty is $\sigma_U / \mu_U$.
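A sketch of the propagation-of-error formula for a hypothetical product $U = X_1 X_2$, with the partial derivatives written out by hand (all numbers are illustration values):

```python
from math import sqrt

# Hypothetical measurements: means and standard deviations (illustration only).
mu1, sigma1 = 4.0, 0.1    # X1
mu2, sigma2 = 2.5, 0.05   # X2

# For U = X1 * X2: dU/dX1 = X2 and dU/dX2 = X1, evaluated at the means.
dU_dx1 = mu2
dU_dx2 = mu1

# sigma_U = sqrt((dU/dX1)^2 sigma1^2 + (dU/dX2)^2 sigma2^2)
sigma_U = sqrt(dU_dx1**2 * sigma1**2 + dU_dx2**2 * sigma2**2)
mu_U = mu1 * mu2
print(sigma_U, sigma_U / mu_U)   # absolute and relative uncertainty
```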

Ch. 4

Ch. 7

The linear regression line is $\hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x$ where

$$\hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}$$

$$\hat{\beta}_1 = \frac{\sum_{i=1}^{n} y_i x_i - \frac{(\sum_{i=1}^{n} y_i)(\sum_{i=1}^{n} x_i)}{n}}{\sum_{i=1}^{n} x_i^2 - \frac{(\sum_{i=1}^{n} x_i)^2}{n}} = \frac{S_{xy}}{S_{xx}}$$

and $\bar{y} = \frac{1}{n} \sum_{i=1}^{n} y_i$ and $\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i$.

Total corrected sum of squares: $SS_T = SS_R + SS_E$, where

$$SS_T = \sum_{i=1}^{n} (y_i - \bar{y})^2 \qquad SS_R = \sum_{i=1}^{n} (\hat{y}_i - \bar{y})^2 \qquad SS_E = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2$$

Coefficient of determination:

$$R^2 = 1 - \frac{SS_E}{SS_T}$$

Properties of Least Squares Estimators:

$$\hat{\sigma}^2 = \frac{SS_E}{n - 2}$$

$$E[\hat{\beta}_1] = \beta_1 \qquad V(\hat{\beta}_1) = \frac{\sigma^2}{S_{xx}}$$

$$E[\hat{\beta}_0] = \beta_0 \qquad V(\hat{\beta}_0) = \sigma^2 \left[\frac{1}{n} + \frac{\bar{x}^2}{S_{xx}}\right]$$

For hypothesis tests and CIs on $\beta_0$ and $\beta_1$, use the t-distribution with $\nu = n - 2$ degrees of freedom.
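Putting the Ch. 7 estimators together in a minimal sketch; the (x, y) data are made up for illustration:

```python
# Hypothetical (x, y) data, illustration only.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]
n = len(xs)

xbar = sum(xs) / n
ybar = sum(ys) / n
Sxx = sum(x**2 for x in xs) - sum(xs) ** 2 / n
Sxy = sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys) / n

b1 = Sxy / Sxx                 # slope: beta1-hat = Sxy / Sxx
b0 = ybar - b1 * xbar          # intercept: beta0-hat = ybar - beta1-hat * xbar

yhat = [b0 + b1 * x for x in xs]
SSE = sum((y - yh) ** 2 for y, yh in zip(ys, yhat))
SST = sum((y - ybar) ** 2 for y in ys)
R2 = 1 - SSE / SST             # coefficient of determination
sigma2_hat = SSE / (n - 2)     # error-variance estimate, df = n - 2
print(b1, b0, R2, sigma2_hat)
```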