Numerical propagation of errors

| categories: statistics | tags:

Matlab post

Propagation of errors is essential to understanding how the uncertainty in a parameter affects computations that use that parameter. The uncertainty propagates through a set of rules into your solution. These rules are not easy to remember or to apply in complicated situations, and they are only approximate for equations that are nonlinear in the parameters.

We will use a Monte Carlo simulation to illustrate error propagation. The idea is to generate a distribution of possible parameter values and to evaluate your equation for each sampled value. Then we perform statistical analysis on the results to determine the standard error of the result.

We will assume all parameters are defined by a normal distribution with known mean and standard deviation.
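To make the recipe concrete before walking through the standard rules one at a time, here is a minimal sketch of the general procedure. The helper mc_propagate is a hypothetical convenience function written for this discussion, not part of numpy.

import numpy as np

def mc_propagate(f, mus, sigmas, N=int(1e6)):
    # sample each parameter from a normal distribution and return the
    # standard deviation of f evaluated over all the samples
    samples = [np.random.normal(mu, sigma, size=N)
               for mu, sigma in zip(mus, sigmas)]
    return np.std(f(*samples))

# for example, the uncertainty in A + B with A ~ N(2.5, 0.4), B ~ N(4.1, 0.3)
print(mc_propagate(lambda A, B: A + B, [2.5, 4.1], [0.4, 0.3]))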

1 Addition and subtraction

import numpy as np
import matplotlib.pyplot as plt

N = int(1e6)  # number of samples of each parameter

A_mu = 2.5; A_sigma = 0.4
B_mu = 4.1; B_sigma = 0.3

A = np.random.normal(A_mu, A_sigma, size=N)
B = np.random.normal(B_mu, B_sigma, size=N)

p = A + B
m = A - B

print(np.std(p))
print(np.std(m))

print(np.sqrt(A_sigma**2 + B_sigma**2))  # the analytical standard deviation
0.500505424616
0.500113385681
0.5
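Since we have all the samples, we can also inspect the whole distribution rather than just its standard deviation. This plot is not in the original output, but a quick histogram (using the matplotlib import above) confirms the sum is approximately normal; the filename is arbitrary.

plt.hist(p, bins=100, density=True)
plt.xlabel('A + B')
plt.ylabel('probability density')
plt.savefig('sum-distribution.png')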

2 Multiplication

F_mu = 25.0; F_sigma = 1.0
x_mu = 6.4;  x_sigma = 0.4

F = np.random.normal(F_mu, F_sigma, size=N)
x = np.random.normal(x_mu, x_sigma, size=N)

t = F * x
print(np.std(t))
# analytical rule: relative errors add in quadrature for a product
print(np.sqrt((F_sigma / F_mu)**2 + (x_sigma / x_mu)**2) * F_mu * x_mu)
11.8900166284
11.8726576637
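For a product of independent variables there is also an exact variance, Var(F*x) = F_mu**2 * x_sigma**2 + x_mu**2 * F_sigma**2 + F_sigma**2 * x_sigma**2; the first-order rule above simply drops the last term. A quick check (not in the original post) shows the exact formula agrees with the simulation a little more closely:

exact_std = np.sqrt(F_mu**2 * x_sigma**2 + x_mu**2 * F_sigma**2
                    + F_sigma**2 * x_sigma**2)
print(exact_std)  # ~11.879, slightly closer to the Monte Carlo result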

3 Division

This is really like multiplication: F / x = F * (1 / x).

d = F / x
print(np.std(d))
# analytical rule: same relative-error rule, applied to the quotient
print(np.sqrt((F_sigma / F_mu)**2 + (x_sigma / x_mu)**2) * F_mu / x_mu)
0.293757533168
0.289859806243
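The linearized rule also hides a small bias in division: because 1/x is convex, the mean of F/x is slightly larger than F_mu/x_mu, approximately by a factor of (1 + (x_sigma/x_mu)**2) to second order. A sketch (not in the original post) to check this against the samples:

print(np.mean(d))                               # Monte Carlo mean of F/x
print(F_mu / x_mu * (1 + (x_sigma / x_mu)**2))  # second-order estimate
print(F_mu / x_mu)                              # naive ratio of means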

4 Exponents

This rule is different from the multiplication rule (even though A^2 = A*A) because in the previous examples we assumed the errors in A and B were uncorrelated. In A*A the errors are perfectly correlated, so a different propagation rule applies.
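A quick self-contained check (not in the original post) makes the point: for samples X and an independent copy Y with the same distribution, std(X*X) is noticeably larger than std(X*Y).

import numpy as np

N = int(1e6)
X = np.random.normal(10.0, 1.0, size=N)
Y = np.random.normal(10.0, 1.0, size=N)  # independent copy of X

print(np.std(X * Y))  # ~14.1, follows the uncorrelated product rule
print(np.std(X * X))  # ~20.0, follows the power rule sigma_f = 2 * mu * sigma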

t_mu = 2.03; t_sigma = 0.01 * t_mu  # 1% error
A_mu = 16.07; A_sigma = 0.06

t = np.random.normal(t_mu, t_sigma, size=N)
A = np.random.normal(A_mu, A_sigma, size=N)

# compute the uncertainty in t**5
print(np.std(t**5))
print((5 * t_sigma / t_mu) * t_mu**5)  # analytical: sigma_f/f = n * sigma_t/t
1.72454836176
1.72365440621
# compute the uncertainty in sqrt(A)
print(np.std(np.sqrt(A)))
print(1.0 / 2.0 * A_sigma / A_mu * np.sqrt(A_mu))  # analytical: n = 1/2
0.00748903477329
0.00748364738749

5 The chain rule in error propagation

Let v = vo + a*t, with uncertainties in vo, a, and t.

vo_mu = 1.2;  vo_sigma = 0.02
a_mu = 3.0;   a_sigma  = 0.3
t_mu = 12.0;  t_sigma  = 0.12

vo = np.random.normal(vo_mu, vo_sigma, size=N)
a = np.random.normal(a_mu, a_sigma, size=N)
t = np.random.normal(t_mu, t_sigma, size=N)

v = vo + a * t

print(np.std(v))
# analytical: sigma_v**2 = sigma_vo**2 + t_mu**2 * sigma_a**2 + a_mu**2 * sigma_t**2
print(np.sqrt(vo_sigma**2 + t_mu**2 * a_sigma**2 + a_mu**2 * t_sigma**2))
3.62232509326
3.61801050303
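The analytical line above is the first-order (delta method) rule, sigma_v**2 = sum_i (dv/dp_i)**2 * sigma_i**2. When the partial derivatives are tedious to derive by hand, you can estimate them with finite differences; this is a sketch of that idea, not code from the original post.

def v_func(vo, a, t):
    return vo + a * t

mus = np.array([vo_mu, a_mu, t_mu])
sigmas = np.array([vo_sigma, a_sigma, t_sigma])

h = 1e-6  # finite-difference step
grads = np.array([(v_func(*(mus + h * np.eye(3)[i])) - v_func(*mus)) / h
                  for i in range(3)])

print(np.sqrt(np.sum(grads**2 * sigmas**2)))  # matches the analytical result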

6 Summary

You can numerically perform error propagation analysis if you know the underlying distribution of errors on the parameters in your equations. One benefit of the numerical propagation is that you do not have to remember the error propagation rules, and you can directly examine the distribution of results in nonlinear cases. Some limitations of this approach include:

  1. You have to know the distribution of the errors in the parameters.
  2. You have to assume the errors in the parameters are uncorrelated, since each parameter is sampled independently (the sketch below shows one way to relax this).
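The second limitation applies to sampling each parameter independently, not to the Monte Carlo idea itself: if you know the covariance between parameters, you can draw correlated samples with np.random.multivariate_normal. A sketch with a hypothetical covariance between A and B from the first example:

mean = [2.5, 4.1]
cov = [[0.4**2, 0.06],
       [0.06,   0.3**2]]  # hypothetical covariance between A and B

samples = np.random.multivariate_normal(mean, cov, size=int(1e6))
A, B = samples[:, 0], samples[:, 1]

print(np.std(A + B))                        # Monte Carlo with correlation
print(np.sqrt(0.4**2 + 0.3**2 + 2 * 0.06))  # sqrt(sA**2 + sB**2 + 2*cov)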

Copyright (C) 2013 by John Kitchin. See the License for information about copying.

org-mode source
