How to combine errors

Robin Hogan June 2006

1 What is an “error”?

All measurements have uncertainties that need to be communicated along with the measurement itself. Suppose we make a measurement of temperature, $\hat{T}$, but the "true" temperature is $T$. In this case our instantaneous error is $\varepsilon_T = \hat{T} - T$. Obviously we don't know the value of $\varepsilon_T$ for any specific measurement (otherwise we could simply subtract it and report the true value) but we should be able to estimate the root-mean-squared error, given by

$$\Delta T = \sqrt{\overline{\varepsilon_T^2}}, \qquad (1)$$

and then report our measurement in the form $\hat{T} \pm \Delta T$ (for example $T = 284.6 \pm 0.2$ K). In Eq. 1, the overbar denotes the mean taken over a large number of measurements by an identical instrument.

Usually the quantity $\Delta T$ is referred to simply as the "error" in the measurement. This is a bit misleading and is easy to confuse with the instantaneous error; a better term would have been "uncertainty", but "error" is in such common use that we had better stick with it. Just keep in mind that what we usually mean is the root-mean-squared error.

If the instantaneous errors have a Gaussian distribution (also known as a Normal or bell-shaped distribution) then approximately 68% of the individual measurements will lie between $T - \Delta T$ and $T + \Delta T$, and 95% of them between $T - 2\Delta T$ and $T + 2\Delta T$. Be aware that sometimes errors are stated to indicate the "95% confidence interval", in which case they are equal to $2\Delta T$. It should be noted that the "measurement" may well be the mean of a number of samples, in which case we might take the standard error of the mean as an estimate of the error $\Delta T$.
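As a concrete illustration of these definitions, here is a minimal sketch (plain Python, with made-up temperature readings) of how one might estimate $\Delta T$ in practice: the sample standard deviation estimates the error of a single reading, and the standard error of the mean estimates the error when the reported measurement is the mean of the samples.

```python
import math

# Hypothetical repeated temperature readings (K) from the same instrument
readings = [284.4, 284.9, 284.5, 284.7, 284.6, 284.8]

n = len(readings)
mean_T = sum(readings) / n

# Sample standard deviation: estimate of the RMS error of a single reading
var = sum((T - mean_T) ** 2 for T in readings) / (n - 1)
std_single = math.sqrt(var)

# Standard error of the mean: estimate of the error when reporting mean_T
std_mean = std_single / math.sqrt(n)

print(f"Single reading:  {readings[0]:.1f} +/- {std_single:.1f} K")
print(f"Mean of samples: {mean_T:.2f} +/- {std_mean:.2f} K")
```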

2 Errors for functions of one variable

Usually we will have a formula we want to use to derive a new variable from one or more of our measured variables. Section 3 describes the general case in which we use the definition of an error in Eq. 1 to estimate the error in the new variable given the errors in the measured variables. However, if only one measurement is involved then we can use a simpler method based on differentiation.

[Figure 1: Illustration of the estimation of the error in $F$ from the error in $T$ using the gradient of the relationship between them (using Eq. 2). Axes: temperature (K) versus blackbody irradiance (W m$^{-2}$).]

Suppose we wish to derive the irradiance $F$ emitted by a blackbody with a temperature $T$ using

$$F = \sigma T^4, \qquad (2)$$

where $\sigma$ is a constant that is known very accurately. It can be seen from Fig. 1 that, provided the error in $T$ is relatively small, the ratio of instantaneous errors in $F$ and $T$ is approximately equal to the gradient of the relationship between them:

$$\frac{\varepsilon_F}{\varepsilon_T} \simeq \frac{dF}{dT}. \qquad (3)$$

From now on we will replace "$\simeq$" with "$=$", but always be aware that error estimation is an approximate business, so it is not worth quoting errors to high precision (certainly no more than two significant figures). With the help of Eq. 1, it can be shown that the ratio of root-mean-squared errors is

$$\frac{\Delta F}{\Delta T} = \left| \frac{\varepsilon_F}{\varepsilon_T} \right| = \left| \frac{dF}{dT} \right|, \qquad (4)$$

where $|\cdot|$ denotes the absolute value (i.e. removing any minus sign) and is present because the root-mean-square of a real number is always positive. If we measure the temperature to be $\hat{T} \pm \Delta T$ then we can use Eq. 4 to obtain the error in $F$:

$$\Delta F = \left| \frac{dF}{dT} \right| \Delta T = 4 \sigma \hat{T}^3 \, \Delta T. \qquad (5)$$
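As a quick numerical check of Eq. 5, the following sketch (plain Python; the measured temperature and its error are illustrative values, not taken from the text) propagates a temperature error through the blackbody relation of Eq. 2.

```python
# Propagate a temperature error through F = sigma * T^4 (Eq. 5):
# Delta_F = 4 * sigma * T_hat^3 * Delta_T

SIGMA = 5.670e-8     # Stefan-Boltzmann constant (W m^-2 K^-4)

T_hat = 284.6        # measured temperature (K), example value
delta_T = 0.2        # its root-mean-squared error (K)

F = SIGMA * T_hat ** 4
delta_F = 4 * SIGMA * T_hat ** 3 * delta_T

print(f"F = {F:.1f} +/- {delta_F:.1f} W m^-2")
```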

In the general case of a formula of the form $a = f(b)$, the error in $a$ is given by

$$\Delta a = \left| \frac{df(b)}{db} \right| \Delta b. \qquad (6)$$

The error formulae for some common functions have been calculated using Eq. 6 and are given below (where $a$ and $b$ are variables and $\lambda$ and $\mu$ are constants):

Functions of one variable

  $a = \lambda b$:             $\Delta a = |\lambda| \, \Delta b$
  $a = \lambda / b$:           $\Delta a = |\lambda / b^2| \, \Delta b = |a/b| \, \Delta b$
  $a = \lambda b^{\mu}$:       $\Delta a = |\mu \lambda b^{\mu-1}| \, \Delta b = |\mu a / b| \, \Delta b$
  $a = \lambda \exp(\mu b)$:   $\Delta a = |\mu a| \, \Delta b$
  $a = \lambda \ln(\mu b)$:    $\Delta a = |\lambda / b| \, \Delta b$        (7)
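To make Eq. 6 and the table above concrete, here is a small sketch (plain Python, with arbitrary example constants) that applies the general derivative rule to the power-law case $a = \lambda b^{\mu}$ using a numerical derivative, and compares it with the shortcut $\Delta a = |\mu a/b| \, \Delta b$ from Eq. 7.

```python
# Check the power-law entry of Eq. 7 against the general rule of Eq. 6,
# using a numerical derivative of f(b) = lam * b**mu.

lam, mu = 2.5, 4.0      # constants (example values)
b, delta_b = 10.0, 0.1  # measured value and its error (made-up)

def f(x):
    return lam * x ** mu

# Eq. 6 with a central finite-difference estimate of df/db
h = 1e-6 * b
dfdb = (f(b + h) - f(b - h)) / (2 * h)
delta_a_general = abs(dfdb) * delta_b

# Shortcut from the table: Delta_a = |mu * a / b| * Delta_b
a = f(b)
delta_a_table = abs(mu * a / b) * delta_b

print(delta_a_general, delta_a_table)  # the two estimates should agree closely
```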

3 Functions of two or more variables

In the general case, the quantity we want to calculate depends on more than one measurement, i.e. $a = f(b, c, \ldots)$, and a more rigorous approach is needed to work out how the error in $a$ depends on the errors in the other variables. Taking the simplest possible formula

$$a = b + c, \qquad (8)$$

we replace $a$ by $\hat{a} + \varepsilon_a$, etc., to obtain $\hat{a} + \varepsilon_a = \hat{b} + \varepsilon_b + \hat{c} + \varepsilon_c$. Noting that $\hat{a} = \hat{b} + \hat{c}$, the relationship between the instantaneous errors is then simply

$$\varepsilon_a = \varepsilon_b + \varepsilon_c. \qquad (9)$$

From the definition of an error given in Eq. 1 we have

$$\Delta a = \sqrt{\overline{\varepsilon_a^2}} = \sqrt{\overline{(\varepsilon_b + \varepsilon_c)^2}} = \sqrt{\overline{\varepsilon_b^2} + \overline{\varepsilon_c^2} + 2\,\overline{\varepsilon_b \varepsilon_c}} = \sqrt{(\Delta b)^2 + (\Delta c)^2 + 2\,\overline{\varepsilon_b \varepsilon_c}}. \qquad (10)$$

The term $\overline{\varepsilon_b \varepsilon_c}$ is an error covariance and is zero provided that the measurements are independent, i.e. their instantaneous errors are uncorrelated. Thus for independent measurements of $b$ and $c$, the error formula is

$$\Delta a = \sqrt{(\Delta b)^2 + (\Delta c)^2}. \qquad (11)$$
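One way to see Eq. 11 at work is a small Monte Carlo experiment. The sketch below (plain Python; the error values are arbitrary) draws independent Gaussian instantaneous errors for $b$ and $c$ and compares the resulting root-mean-squared error of $a = b + c$ with the quadrature sum.

```python
import math
import random

random.seed(1)

delta_b, delta_c = 0.3, 0.4   # assumed RMS errors of b and c
n = 100_000

# Simulate instantaneous errors eps_a = eps_b + eps_c for independent b and c
eps_a = [random.gauss(0.0, delta_b) + random.gauss(0.0, delta_c)
         for _ in range(n)]

# Root-mean-squared error of a, estimated from the simulated errors (Eq. 1)
delta_a_mc = math.sqrt(sum(e * e for e in eps_a) / n)

# Quadrature combination from Eq. 11
delta_a_formula = math.sqrt(delta_b ** 2 + delta_c ** 2)

print(delta_a_mc, delta_a_formula)  # both should be close to 0.5
```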

The error formulae for addition, subtraction, multiplication and division of independent variables can be derived in a similar manner:

Functions of more than one variable

  $a = b + c$ or $a = b - c$:   $(\Delta a)^2 = (\Delta b)^2 + (\Delta c)^2$
  $a = bc$ or $a = b/c$:        $(\Delta a / a)^2 = (\Delta b / b)^2 + (\Delta c / c)^2$
  $a = b + c + d$:              $(\Delta a)^2 = (\Delta b)^2 + (\Delta c)^2 + (\Delta d)^2$
  $a = bcd$:                    $(\Delta a / a)^2 = (\Delta b / b)^2 + (\Delta c / c)^2 + (\Delta d / d)^2$        (12)
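As a worked example of the product/quotient rule in Eq. 12, the sketch below (plain Python, with made-up measurements of $b$ and $c$) combines relative errors in quadrature for $a = b/c$.

```python
import math

# Hypothetical independent measurements
b, delta_b = 12.0, 0.3
c, delta_c = 4.0, 0.1

# Quotient rule from Eq. 12: relative errors add in quadrature
a = b / c
rel_a = math.sqrt((delta_b / b) ** 2 + (delta_c / c) ** 2)
delta_a = abs(a) * rel_a

print(f"a = {a:.2f} +/- {delta_a:.2f}")
```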

4 Errors for more complicated formulae

For more complicated formulae we can combine the approaches in sections 2 and 3. For example, consider the formula

$$a = \lambda b^{\mu} \exp(c). \qquad (13)$$

If we let $a = xy$ where $x = \lambda b^{\mu}$ and $y = \exp(c)$, then from Eq. 7 we know that $\Delta x = |\mu x / b| \, \Delta b$ and $\Delta y = |y| \, \Delta c$. According to Eq. 12, we can combine these errors using the multiplication rule to obtain

$$\left(\frac{\Delta a}{a}\right)^2 = \left(\frac{\Delta x}{x}\right)^2 + \left(\frac{\Delta y}{y}\right)^2 = \left(\frac{\mu \, \Delta b}{b}\right)^2 + (\Delta c)^2, \qquad (14)$$

and hence

$$\Delta a = a \left[ \left(\frac{\mu \, \Delta b}{b}\right)^2 + (\Delta c)^2 \right]^{1/2}. \qquad (15)$$
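To close the example, the following sketch (plain Python; the values of $\lambda$, $\mu$, $b$, $c$ and their errors are invented for illustration) evaluates Eq. 15 directly and cross-checks it with a simple Monte Carlo propagation.

```python
import math
import random

random.seed(0)

lam, mu = 3.0, 2.0            # constants (example values)
b, delta_b = 5.0, 0.05        # measured value of b and its error (made-up)
c, delta_c = 1.2, 0.02        # measured value of c and its error (made-up)

# Direct evaluation of Eq. 15 for a = lam * b**mu * exp(c)
a = lam * b ** mu * math.exp(c)
delta_a = a * math.sqrt((mu * delta_b / b) ** 2 + delta_c ** 2)

# Monte Carlo cross-check: perturb b and c with independent Gaussian errors
samples = [lam * random.gauss(b, delta_b) ** mu * math.exp(random.gauss(c, delta_c))
           for _ in range(100_000)]
mean_a = sum(samples) / len(samples)
delta_a_mc = math.sqrt(sum((s - mean_a) ** 2 for s in samples) / (len(samples) - 1))

print(f"Eq. 15:      {a:.1f} +/- {delta_a:.1f}")
print(f"Monte Carlo: {mean_a:.1f} +/- {delta_a_mc:.1f}")
```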