ACCURACY, PRECISION, AND ERROR
• In chemistry, the meanings of accuracy and precision are quite different.
• Accuracy is a measure of how close a measurement comes to the actual or true value of whatever is measured.
• Precision is a measure of how close a series of measurements are to one another, irrespective of the actual value.
To evaluate the accuracy of a measurement, the measured value must be compared to the correct value. To evaluate the precision of a measurement, you must compare the values of two or more repeated measurements.
Darts on a dartboard illustrate the difference between accuracy and precision. The closeness of a dart to the bull’s-eye corresponds to the degree of accuracy. The closeness of several darts to one another corresponds to the degree of precision.
Errors
Three general types of errors occur in lab measurements:
i. random error,
ii. systematic error, and
iii. gross errors.
1. Random (or indeterminate) errors are caused by uncontrollable fluctuations in variables that affect experimental results. For example, air fluctuations occurring as students open and close lab doors cause changes in pressure readings. A sufficient number of measurements results in evenly distributed data scattered around an average value, or mean.
This positive and negative scattering of data is
characteristic of random errors. The estimated standard deviation (the
error range for a data set) is often reported with measurements because random
errors are
difficult to eliminate. Also, a "best-fit line" is drawn through
graphed data in order to "smooth out" random error.
2. Systematic (or determinate) errors are instrumental, methodological, or personal mistakes that cause "lopsided" data, which deviates consistently in one direction from the true value.
Examples of systematic errors:
i. an instrumental error results when a spectrometer drifts away from calibrated settings;
ii. a methodological error is created by using the wrong indicator for an acid-base titration; and
iii. a personal error occurs when an experimenter records only even numbers for the last digit of buret volumes.
Systematic errors can be identified and eliminated
after careful inspection of the experimental methods, cross-calibration of
instruments, and examination of techniques.
3. Gross errors are
caused by experimenter carelessness or equipment failure. These
"outliers" are so far above or below the true value that they are
usually discarded when assessing data. The "Q-Test" (discussed later)
is a systematic way to determine if a data point should be discarded.
Precision of a Set of Measurements
A data set of repetitive measurements is often expressed as a single representative number called the mean or average. The mean (M) is the sum of the individual measurements (xi) divided by the number of measurements (N):
M = (Σxi) / N    (mean)
Precision (reproducibility) is quantified by calculating the average deviation (for data sets with 4 or fewer repetitive measurements) or the standard deviation (for data sets with 5 or more measurements). Precision is the opposite of uncertainty: widely scattered data result in a large average or standard deviation, indicating poor precision.
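These statistics translate directly into code. A minimal Python sketch (the buret and balance readings below are hypothetical):

```python
from statistics import mean, stdev

def average_deviation(data):
    """Mean of the absolute deviations from the mean (used for N <= 4)."""
    m = mean(data)
    return sum(abs(x - m) for x in data) / len(data)

volumes = [24.1, 24.3, 24.2, 24.6]   # four hypothetical buret readings, mL
m = mean(volumes)                    # M = (sum of xi) / N
ad = average_deviation(volumes)
print(f"mean = {m:.2f} mL, average deviation = {ad:.2f} mL")

# For five or more measurements, use the (sample) standard deviation instead
masses = [2.49, 2.51, 2.50, 2.52, 2.48]   # five hypothetical balance readings, g
print(f"standard deviation = {stdev(masses):.3f} g")
```

A wider scatter in `volumes` or `masses` would directly inflate the average or standard deviation, signaling poorer precision.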
Accuracy of a Result
The accuracy of a result can be quantified by calculating the percent error. The percent error can only be found if the true value is known. Although the percent error is usually written as an absolute value, it can be expressed with a negative or positive sign to indicate the direction of error from the true value.
% Error = [(true value − experimental value) / true value] × 100
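As a sketch, the formula above maps directly onto a small Python function (the function name and `signed` option are mine):

```python
def percent_error(true_value, experimental_value, signed=False):
    """Percent error: absolute value by default, signed if requested."""
    err = (true_value - experimental_value) / true_value * 100
    return err if signed else abs(err)

# Boiling-point example: accepted 100.0 degrees C, measured 99.1 degrees C
print(percent_error(100.0, 99.1))               # about 0.9 %
print(percent_error(100.0, 99.1, signed=True))  # positive: measured value is below the true value
```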
The Q-Test for Rejecting Data
As mentioned previously, outliers are data measurements arising from gross errors; their values deviate significantly from the mean. The Q-Test can be used to determine whether an individual measurement should be rejected or retained. The quantity Q is the absolute difference between the questioned measurement (xq) and the next closest measurement (xn), divided by the spread (ω), the difference between the largest and smallest measurements of the entire data set:
Q = |xq − xn| / ω
Q is compared to a critical value at a specified confidence level (the percent probability that a measurement will fall within a given range around the mean). If Q is greater than the value listed below for a particular confidence level, the measurement should be rejected. If Q is less than the tabulated value, the measurement should be retained.
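A minimal Python sketch of the Q-Test (the titration volumes are hypothetical, and the critical values below assume the commonly tabulated 90% confidence level; verify against your course's table):

```python
# Commonly tabulated Dixon Q-test critical values at 90% confidence,
# indexed by the number of measurements N (assumed values; check your table).
Q_CRIT_90 = {3: 0.941, 4: 0.765, 5: 0.642, 6: 0.560,
             7: 0.507, 8: 0.468, 9: 0.437, 10: 0.412}

def q_test(data, suspect):
    """Return (Q, reject?) for the suspect measurement in the data set."""
    data = sorted(data)
    spread = data[-1] - data[0]                        # omega: largest - smallest
    nearest = min((x for x in data if x != suspect),   # next closest measurement
                  key=lambda x: abs(x - suspect))
    q = abs(suspect - nearest) / spread
    return q, q > Q_CRIT_90[len(data)]

volumes = [10.1, 10.2, 10.3, 11.0]   # hypothetical buret volumes, mL
q, reject = q_test(volumes, 11.0)
print(f"Q = {q:.3f}, reject outlier: {reject}")
```

Here Q ≈ 0.78 exceeds the 90% critical value of 0.765 for four measurements, so the 11.0 mL reading would be discarded.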
• Suppose you use a thermometer to measure the boiling point of pure water at standard pressure.
• The thermometer reads 99.1°C.
• You probably know that the true or accepted value of the boiling point of pure water at these conditions is actually 100.0°C.
• There is a difference between the accepted value, which is the correct value for the measurement based on reliable references, and the experimental value, the value measured in the lab.
• The difference between the experimental value and the accepted value is called the error.
Determining Error
• For the boiling-point measurement, the error is 99.1°C − 100.0°C, or −0.9°C.
• The percent error of a measurement is the absolute value of the measured experimental value minus the accepted value, divided by the accepted value, multiplied by 100%.
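A quick check of this arithmetic in Python, using the boiling-point values above:

```python
accepted = 100.0       # accepted boiling point of pure water, degrees C
experimental = 99.1    # thermometer reading, degrees C

error = experimental - accepted                                 # -0.9 degrees C
percent_error = abs(experimental - accepted) / accepted * 100   # 0.9 %
print(f"error = {error:.1f} C, percent error = {percent_error:.1f} %")
```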
UNCERTAINTY IN MEASUREMENT
• Definition. The term uncertainty of measurement is defined as: “A parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand.”
Uncertainty sources
In practice the uncertainty on the result may arise from many possible sources,
including examples such as incomplete definition of the measurand, sampling,
matrix effects and interferences, environmental conditions, uncertainties of
masses and volumetric equipment, reference values, approximations and
assumptions incorporated in the measurement method and procedure, and random
variation.
Typical sources of uncertainty are
1. Sampling
Where in-house or field sampling forms part of the specified procedure, effects such as random variations between different samples and any potential for bias in the sampling procedure form components of uncertainty affecting the final result.
2. Storage Conditions
Where test items are stored for any period prior to analysis, the storage conditions may affect the results. The duration of storage as well as the conditions during storage should therefore be considered as uncertainty sources.
3. Instrument effects
Instrument effects may include, for example, the
limits of accuracy on the calibration of an analytical balance; a temperature
controller that may maintain a mean temperature which differs (within
specification) from its indicated set-point; an auto-analyser that could be
subject to carry-over effects.
4. Reagent purity
The concentration of a volumetric solution will not
be known exactly even if the parent material has been assayed, since some uncertainty
related to the assaying procedure remains. Many organic dyestuffs, for instance,
are not 100 % pure and can contain isomers and inorganic salts. The purity of such
substances is usually stated by manufacturers as being not less than a specified
level. Any assumptions about the degree of purity will introduce an element of uncertainty.
5. Assumed stoichiometry
Where an analytical process is assumed to follow a
particular reaction stoichiometry, it may be necessary to allow for departures from
the expected stoichiometry, or for incomplete reaction or side reactions.
6. Measurement conditions
For example, volumetric glassware may be used at an ambient temperature different from that at which it was calibrated. Gross temperature effects should be corrected for, but any uncertainty in the temperature of liquid and glass should be considered. Similarly, humidity may be important where materials are sensitive to possible changes in humidity.
7. Sample effects
The recovery of an analyte from a complex matrix, or
an instrument response, may be affected by composition of the matrix. Analyte speciation
may further compound this effect. The stability of a sample/analyte may change
during analysis because of a changing thermal regime or photolytic effect. When
a ‘spike’ is used to estimate recovery, the recovery of the analyte from the
sample may differ from the recovery of the spike, introducing an uncertainty
which needs to be evaluated.
8. Computational effects
Selection of the calibration model, e.g. using a straight-line calibration on a curved response, leads to a poorer fit and higher uncertainty. Truncation and round-off can lead to inaccuracies in the final result. Since these are rarely predictable, an uncertainty allowance may be necessary.
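To illustrate the calibration-model point, a small stdlib-only Python sketch (the calibration data are invented): fitting a straight line to a deliberately curved response leaves large, systematically patterned residuals.

```python
from statistics import mean

# Hypothetical calibration data: the true response is quadratic (y = x^2),
# but we force a straight-line (least-squares) model onto it.
conc = [0.0, 1.0, 2.0, 3.0, 4.0]      # concentration (arbitrary units)
resp = [0.0, 1.0, 4.0, 9.0, 16.0]     # curved instrument response

mx, my = mean(conc), mean(resp)
slope = (sum((x - mx) * (y - my) for x, y in zip(conc, resp))
         / sum((x - mx) ** 2 for x in conc))
intercept = my - slope * mx

# Residuals reveal the lack of fit: large magnitudes with a systematic pattern
residuals = [y - (slope * x + intercept) for x, y in zip(conc, resp)]
print(f"slope = {slope}, intercept = {intercept}")
print(f"max |residual| = {max(abs(r) for r in residuals)}")
```

The residuals alternate in sign across the range (positive at the ends, negative in the middle), the classic signature of an inadequate calibration model rather than random scatter.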
9. Blank Correction
There will be an uncertainty on both the value and
the appropriateness of the blank
correction. This is particularly important in trace analysis.
10. Operator effects
These include the possibility of reading a meter or scale consistently high or low, and the possibility of making a slightly different interpretation of the method.
11. Random effects
Random effects contribute to the uncertainty in all
determinations. This entry should be included in the list as a matter of
course.