ERRORS

During the course of this term, you will be asked to consider errors almost literally every time you turn around.  The reason is that the significance of experimental results is severely restricted unless a serious effort is expended to identify potential sources of error and to estimate the possible or, perhaps, probable magnitude of those errors.  As we progress through the term, then, we will be trying to provide the background that will enable you to begin to make such an effort.

As a first step, we would like to take care of, once and for all, one particular type of error that invariably surfaces whenever students are asked to suggest possible sources of error.  That particular error is usually called “human error,” or something of the sort.  We are asserting now, for reasons cited below, that “human error” is not an appropriate expression, and we do not expect to hear or see it again for the remainder of this term.

Why is it not appropriate?  A little reflection will suggest that there are two likely meanings implied by the expression.  The first is that a mistake has been made in either a procedure or an analysis.  However, in the laboratory, mistakes are not considered to be legitimate sources of error in the usual sense of the term, regardless of how ironic that may seem.  The fact of the matter is that the errors that are appropriate to consider should never be viewed as an indication that some different procedure should have been followed.  Rather, they should be viewed as a realistic description of the situation that existed while the measurements were being made.

In adopting this view, we are not assuming that mistakes never occur.  They do, frequently.  We all make mistakes no matter how careful we try to be, so it is essential that we adopt procedures that will tell us whether a mistake has, indeed, been made.  The underlying attitude here is skepticism: rather than exercising extreme care to avoid a mistake and hoping that you were successful, assume that a mistake has been made and start testing to see whether that is the case.  That is a way of thinking which we urge you to develop in this laboratory work.  For example, suppose you are asked to make some particular measurement two times.  Experience suggests that, if you made a mistake the first time, you have a high probability of making the same mistake the second time.  Apparently, we remember how we did it the first time and have a strong tendency to do it the same way the second time.  After that, we have little chance of detecting the mistake regardless of how often we repeat the measurement.

Have your laboratory partner make the second measurement.  It may seem more efficient for you to make all of the “A” type measurements and for your partner to make all of the “B” type measurements, but that is bad laboratory practice.  You should adopt procedures that result in each of you serving as a check on the other.  In practice, a surprisingly large percentage of the procedures involved in conducting an experiment arise from the need to check or test some part of the results.

The second meaning implied by that unacceptable expression arises from the fact that the experimenter is constantly making judgments.  The experimenter, for example, must decide how far into the next scale division the indicator or needle tip lies, or when the air track is level, or when the system is balanced.  Such decisions will, in general, not be exactly reproducible, either by other experimenters or even in repeated trials by the same experimenter.  Again, however, the resulting fluctuations exhibited by the measurements are not regarded as errors in the sense that something should have been done differently.  Instead, those fluctuations represent the limit to the reproducibility that existed at the time the measurements were made.  As such, we prefer to call the fluctuations “uncertainties” rather than “errors.”

Uncertainties are often designated by a ± symbol (e.g., 10.5 ± 0.3 cm).  An alternative designation is 10.5(3) cm, where the digit in parentheses gives the uncertainty in the last quoted digit.  The magnitude of the uncertainty is influenced by a number of factors that will be discussed at some length later in the term.  For the present, it is sufficient to say that the magnitude is estimated by the experimenter and reflects his or her best guess as to how reproducible the measurement is.  Obviously, the assigned uncertainty should encompass the observed fluctuations in a series of measurements.
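As a minimal sketch of the idea above (not part of any lab procedure, and with invented readings), one can summarize a series of repeated measurements as a central value with an uncertainty chosen to encompass the observed fluctuations:

```python
# Illustrative only: the readings below are invented for this sketch.
readings_cm = [10.4, 10.6, 10.5, 10.7, 10.3]

# Central value: the mean of the repeated readings.
mean_cm = sum(readings_cm) / len(readings_cm)

# One simple, conservative choice of uncertainty: half the full spread
# of the readings, so the quoted range covers every observed value.
half_spread_cm = (max(readings_cm) - min(readings_cm)) / 2

print(f"length = {mean_cm:.1f} ± {half_spread_cm:.1f} cm")
```

Half the spread is only one rough way to pick the magnitude; the factors that actually govern the choice are taken up later in the term.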

Uncertainties are an integral part of any measurement, just as the units are.  To omit either the uncertainty or the units is to render the measurement virtually useless.  It is essential that you develop the habit of including each on every measurement you make.  At first, you will find the estimation of uncertainties difficult because you will be unduly worried about estimating the “right” magnitude.  Rest assured, it is far more important that you make the effort to estimate, and then record, the uncertainty than it is to have the right value.  The latter will come with experience; the former will come because you force yourself to do it each time.

A proper statement of uncertainty is really a statement that the experimenter has a high degree of confidence that the measurement falls within a particular range of values (e.g., 10.5 ± 0.3 cm says the length is between 10.2 cm and 10.8 cm).  Only when such a proper statement of uncertainty is made can one begin to answer the question, “Have any errors occurred?”
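One way that question can begin to be answered, sketched below with invented values, is to ask whether two quoted results are consistent: if the ranges implied by their uncertainties overlap, the measurements agree; if not, some error has likely occurred somewhere.

```python
def ranges_overlap(value_a, unc_a, value_b, unc_b):
    """True if the intervals value_a ± unc_a and value_b ± unc_b overlap."""
    return abs(value_a - value_b) <= unc_a + unc_b

# 10.5 ± 0.3 cm spans 10.2–10.8 cm; 10.9 ± 0.2 cm spans 10.7–11.1 cm.
# The ranges overlap, so these two results are consistent.
print(ranges_overlap(10.5, 0.3, 10.9, 0.2))   # consistent

# 11.2 ± 0.2 cm spans 11.0–11.4 cm, which misses 10.2–10.8 cm entirely.
print(ranges_overlap(10.5, 0.3, 11.2, 0.2))   # not consistent
```

Note that without the stated uncertainties, 10.5 cm and 10.9 cm would look like a flat disagreement; with them, the comparison becomes meaningful.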