Errors may be defined as the deviation of the measured value (M.V.) from the true value (T.V.) of the quantity being measured. Diagrammatically, this is shown below:
Here, Error = T.V. − M.V.
And percentage error, %E = (Error / T.V.) × 100%
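The two relations above can be checked with a short numeric sketch (the values below are hypothetical):

```python
# Hypothetical readings: a quantity whose true value is 50.0 units
# is measured as 49.2 units.
true_value = 50.0      # T.V.
measured_value = 49.2  # M.V.

error = true_value - measured_value         # Error = T.V. - M.V.
percent_error = (error / true_value) * 100  # %E = (Error / T.V.) * 100%

print(round(error, 3), round(percent_error, 3))  # 0.8 1.6
```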
Basically, errors are classified into three types on the basis of the source from which they arise: gross errors, systematic errors, and random errors.
This category of errors includes all human mistakes made while reading, recording, and applying the readings. Mistakes in calculating the errors also come under this category. For example, while taking a reading from the meter of an instrument, an observer may read 21 as 31. All such errors fall under this category. Gross errors can be avoided by two suitable measures: (i) taking great care while reading and recording the data, and (ii) taking two or more readings, preferably by different observers.
As the name suggests, these errors are due to wrong observations, which may be caused by parallax. In order to minimize parallax error, highly accurate meters provided with mirrored scales are required. Observational error falls under gross errors.
In order to understand these kinds of errors, let us categorize systematic errors into instrumental errors and environmental errors.
These errors may be due to wrong construction or calibration of the measuring instruments, or they may arise from friction or hysteresis. They also include the loading effect and misuse of the instruments; misuse results in failure to adjust the zero of the instrument. In order to minimize instrumental errors, suitable correction factors must be applied, and in extreme cases the instrument must be carefully re-calibrated.
This type of error arises due to conditions external to the instrument, such as temperature, pressure, humidity, or an external magnetic field. Environmental errors can be minimized by keeping these conditions as constant as possible, for example by maintaining a constant ambient temperature, hermetically sealing the equipment, and using magnetic shields.
After accounting for all systematic errors, some errors in measurement still remain. These are known as random errors. Some of the reasons for these errors are known, but others are not; hence, we cannot fully eliminate them. Random errors can be minimized by statistical analysis, applied only after the gross and systematic errors have been minimized.
Random errors arising from unknown sources are very difficult to correct. Once we are confident that systematic errors have been sufficiently minimized and that the remaining error is random, we can use statistics to describe the data set. The statistical techniques normally used to minimize random errors are listed below:
If we take N measurements of a certain quantity x, the best estimate of the actual quantity is the mean, x̄, of the measurements:

x̄ = (x1 + x2 + … + xN) / N
We can also estimate the error, δx. To do this, we look at how far away the individual measured values are from the mean. The value xi − x̄ is the deviation of a particular trial from the mean (also called a residual). We cannot simply sum the residuals, because the definition of x̄ forces them to always sum to zero. The solution, then, is to square the residuals, average them, and take the square root of that average, resulting in a value referred to as the standard deviation, σ:

σ = √[ (1/N) Σ (xi − x̄)² ]
The standard deviation characterizes the average uncertainty of the measurements x1, …, xN. It is also sometimes known as the root-mean-squared (RMS) deviation. Additionally, an alternative definition is sometimes used, whereby the value of N in the expression above is replaced by (N − 1). This second definition produces a more conservative (larger) value of σ, especially when N is small. Much of the time, though, there is little difference between the two forms. If we are unsure which to use, the safest course of action is to use the (N − 1) form.
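The two forms of the standard deviation can be compared with a short sketch (the readings below are hypothetical):

```python
import math

readings = [9.8, 10.1, 10.0, 9.9, 10.2]  # hypothetical repeated measurements
N = len(readings)

mean = sum(readings) / N
residuals = [x - mean for x in readings]  # deviations x_i - mean; they sum to ~0

# First definition: divide by N.  Second, more conservative one: divide by N - 1.
sigma_n = math.sqrt(sum(r ** 2 for r in residuals) / N)
sigma_n1 = math.sqrt(sum(r ** 2 for r in residuals) / (N - 1))

print(round(sigma_n, 4), round(sigma_n1, 4))  # the (N - 1) form is slightly larger
```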
Recall that the statistics of mean and standard deviation apply only to errors that follow the normal, Gaussian, or "bell curve" distribution. By this, we mean that if you take enough measurements of a quantity x using the same method, and if the uncertainties associated with x are small and random, then the measurements of x will be distributed around the true value of x, x_true, following a normal distribution. Although we provide no justification for this claim here, it can be shown that for a normal distribution, 68% of the results will fall within a distance σ on either side of x_true, as shown in Fig. 1. Therefore, if we take a single measurement, there is a 68% probability that it lies within σ of the true value. Thus it makes sense to say that for a single measurement:
δx = σ
We are 68% sure that a single measurement is within one standard deviation of the actual value.
The peak of the distribution corresponds to the real value of x, x_true. The gray shaded area represents the area under the curve within one standard deviation of the true value. When we make a single measurement of x, there is a 68% probability that the measured value falls in this area. Thus, the stated uncertainty in a single measurement is σ.
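The 68% figure can be checked empirically with a quick simulation, drawing many measurements from a normal distribution (the true value and spread below are hypothetical):

```python
import random

random.seed(42)  # make the sketch reproducible

x_true, sigma = 10.0, 0.5  # hypothetical true value and measurement spread
measurements = [random.gauss(x_true, sigma) for _ in range(100_000)]

within = sum(1 for m in measurements if abs(m - x_true) <= sigma)
fraction = within / len(measurements)
print(fraction)  # approximately 0.68
```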
Now that we know the uncertainty in one measurement, we need the uncertainty in the mean, x̄. Because the mean takes multiple measurements into account, we expect its uncertainty to be less than that of a single measurement. Indeed, this is so. The standard deviation of the mean, or standard error, σ_x̄, is defined as:

σ_x̄ = σ / √N
where N is the number of measurements of the quantity x. The standard deviation of the mean is related to the uncertainty in one measurement but is reduced because multiple measurements have been taken: the more trials performed in an experiment, the smaller the uncertainty will be. For a series of measurements of one quantity x with independent and random errors, the best estimate and uncertainty can be expressed as:
(value of x) = x_best ± δx = x̄ ± σ_x̄
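Putting the pieces together, here is a brief sketch of reporting a best estimate with its uncertainty, using the (N − 1) form of σ (the readings are hypothetical):

```python
import math

readings = [9.8, 10.1, 10.0, 9.9, 10.2]  # hypothetical trials
N = len(readings)

x_bar = sum(readings) / N  # best estimate: the mean
sigma = math.sqrt(sum((x - x_bar) ** 2 for x in readings) / (N - 1))
sigma_mean = sigma / math.sqrt(N)  # standard error: shrinks as N grows

print(f"x = {x_bar:.2f} ± {sigma_mean:.2f}")  # x = 10.00 ± 0.07
```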