One of my hobbies is woodworking, which
involves adjusting machines to cut to
a particular size. I usually take a test
cut on a piece of scrap and adjust the
machine accordingly: if the cut is too
deep I wind back a little; if it is too
shallow I increase a little. I repeat this
until the size is correct to the accuracy
I need.

Many tasks involve some sort of adjustment;
the vial filling simulation is another
example. It is important to have an effective
strategy, because over-adjusting can make
things worse.

The quality guru W. Edwards Deming used
a 'funnel' experiment to illustrate the
pitfalls of commonly used adjustment strategies.
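Deming's funnel experiment can be sketched in a small simulation. This is an illustrative sketch, not Deming's exact apparatus: the target value, noise level, and the two rules compared here ('leave it alone' versus 'compensate for each error') are assumptions chosen to show the effect of over-adjustment.

```python
import random

random.seed(42)

def simulate(adjust, n=10_000, sigma=1.0):
    """Drop n marbles through the funnel; 'adjust' moves the aim after each drop.
    Returns the variance of the drop positions around their mean."""
    aim = 0.0
    drops = []
    for _ in range(n):
        x = aim + random.gauss(0.0, sigma)  # drop lands near the current aim
        drops.append(x)
        aim = adjust(aim, x)
    mean = sum(drops) / n
    return sum((d - mean) ** 2 for d in drops) / n

# Rule 1: leave the funnel alone.
var_rule1 = simulate(lambda aim, x: aim)

# Rule 2: after each drop, move the aim by the negative of the last error.
var_rule2 = simulate(lambda aim, x: aim - x)

print(f"Rule 1 (no adjustment): variance ~ {var_rule1:.2f}")
print(f"Rule 2 (compensate each error): variance ~ {var_rule2:.2f}")
```

Under these assumptions, compensating for every individual error roughly doubles the variance compared with leaving the process alone, which is the classic lesson of the funnel experiment.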

We have taken a sample of 20 vials to
avoid relying on a single result. The
idea is that this 'averages out' the
vial-to-vial variation and gives a more
reliable estimate.
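The calculation can be sketched as follows. The process mean (10.0 ml) and vial-to-vial standard deviation (0.15 ml) are made-up values for illustration; in practice the process mean is exactly what we do not know.

```python
import random

random.seed(1)

# Hypothetical filling process (illustrative values, not from the course data).
PROCESS_MEAN = 10.0  # ml, unknown in practice
SIGMA = 0.15         # ml, vial-to-vial standard deviation

# Take a sample of 20 vials and compute the sample mean.
sample = [random.gauss(PROCESS_MEAN, SIGMA) for _ in range(20)]
sample_mean = sum(sample) / len(sample)
print(f"Sample mean of 20 vials: {sample_mean:.3f} ml")
```

The sample mean lands close to, but not exactly on, the process mean used to generate the data.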

This is an example of '**inferential
statistics**'. We have used the
**sample mean** to estimate
the **process mean** (also
known as the population mean).

- The sample mean is called a
**'statistic'**
and is represented by **x̄**
(pronounced 'x-bar').
A statistic is a hard and fast value
calculated from the sample data.

- The process mean is called a
**parameter**
and is represented by **μ**
(by convention parameters are represented
by Greek letters).

The sample mean is not a perfect estimate
of the process mean. If we take a number
of samples from the filling machine we
will find that the mean varies from sample
to sample, although the variation in the
sample means is less than the individual
vial to vial variation.
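This sample-to-sample variation can be demonstrated by repeating the sampling many times. The process values below are the same illustrative assumptions as before; the point is the comparison between the two standard deviations, not the numbers themselves.

```python
import random
import statistics

random.seed(2)

PROCESS_MEAN, SIGMA, N = 10.0, 0.15, 20  # illustrative process values

# Draw 500 samples of 20 vials; record every vial and every sample mean.
individuals = []
means = []
for _ in range(500):
    s = [random.gauss(PROCESS_MEAN, SIGMA) for _ in range(N)]
    individuals.extend(s)
    means.append(sum(s) / N)

sd_individual = statistics.pstdev(individuals)
sd_means = statistics.pstdev(means)
print(f"SD of individual vials: {sd_individual:.3f}")
print(f"SD of sample means:     {sd_means:.3f}")
```

The spread of the sample means comes out close to SIGMA divided by the square root of the sample size, visibly smaller than the vial-to-vial spread.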

The larger the sample, the more reliable
the estimate. We can never know the exact
value of the process mean, although we
can estimate it to any required accuracy
by taking a sufficiently large sample.
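The effect of sample size can be seen by repeating the previous experiment for several sizes. Again the process values are illustrative assumptions; what matters is how the spread of the sample means shrinks as the sample grows.

```python
import random
import statistics

random.seed(3)

SIGMA = 0.15  # illustrative vial-to-vial standard deviation

def sd_of_sample_means(n, repeats=2000):
    """Estimate the spread of the sample mean for samples of size n."""
    means = [
        statistics.fmean(random.gauss(10.0, SIGMA) for _ in range(n))
        for _ in range(repeats)
    ]
    return statistics.pstdev(means)

for n in (5, 20, 80):
    print(f"n={n:3d}: SD of sample means ~ {sd_of_sample_means(n):.4f}")
```

Each fourfold increase in sample size roughly halves the spread of the estimate, so the estimate can be made as precise as required by taking a large enough sample.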

Later in the course we will discover
how to quantify the likely error in an
estimate made from a sample.