The performance characteristics of an instrument are mainly divided into two categories:
i) Static characteristics
ii) Dynamic characteristics
Static Characteristics
The set of criteria defined for instruments used to measure quantities that vary slowly with time or are mostly constant, i.e., do not vary with time, is called the static characteristics.
The various static characteristics are
accuracy, precision, resolution, error, sensitivity, threshold,
reproducibility, zero drift, stability and linearity.
a. Accuracy
It is the degree of closeness with which the instrument reading approaches the true value of the quantity to be measured. It denotes the extent to which the reading approaches the actual value of the quantity, and thus indicates the ability of the instrument to indicate the true value. The accuracy can be expressed in the following ways:
1) Accuracy as Percentage of Full-Scale Reading: In the case of instruments having a uniform scale, the accuracy can be expressed as a percentage of the full-scale reading. For example, the accuracy of an instrument having a full-scale reading of 50 units may be expressed as ± 0.1 % of the full-scale reading. From this accuracy indication, practical accuracy is expressed in terms of limits of error. So, for the accuracy limit specified above, there will be a ± 0.05 unit error in any measurement. For a reading of 50 units, the error of ± 0.05 units is ± 0.1 %, while for a reading of 25 units, the same ± 0.05 unit error is ± 0.2 %. Thus, as the reading decreases, the error in measurement remains ± 0.05 units but the net percentage error increases. Hence, the specification of accuracy in this manner is highly misleading.
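The numbers in this example can be checked with a short sketch (Python here purely for illustration): the absolute error limit stays fixed at ± 0.1 % of full scale, so the percentage error grows as the reading falls.

```python
# Accuracy quoted as +/-0.1% of a 50-unit full-scale reading:
# the error limit in units is fixed, regardless of the reading.
full_scale = 50.0
error_limit = 0.001 * full_scale  # +/-0.05 units for every measurement

for reading in (50.0, 25.0, 10.0):
    pct_error = error_limit / reading * 100
    print(f"reading {reading:5.1f} units -> +/-{error_limit:.2f} units = +/-{pct_error:.1f}%")
```

The printout shows ± 0.1 % at full scale but ± 0.2 % at half scale, which is exactly the misleading behaviour described above.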
2) Accuracy as Percentage of True Value: This is the best method of specifying accuracy. It is specified in terms of the true value of the quantity being measured, for example as ± 0.1 % of the true value. In such cases, as the reading gets smaller, the error also gets smaller. Hence the accuracy of the instrument is better than that of an instrument whose accuracy is specified as a percentage of the full-scale reading.
3) Accuracy as Percentage of Scale Span: For an instrument, if a_max is the maximum point for which the scale is calibrated, i.e., the full-scale reading, and a_min is the lowest reading on the scale, then (a_max - a_min) is called the scale span, or span, of the instrument. The accuracy of the instrument can be specified as a percentage of this scale span. Thus, for an instrument having a range from 25 units to 225 units, the accuracy may be specified as ± 0.2 % of the span, i.e., ± [(0.2/100) x (225 - 25)] = ± 0.4 units error in any measurement.
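The span arithmetic is just as easy to restate in code; the values are the 25-to-225-unit example from the text.

```python
# Accuracy quoted as +/-0.2% of span for a 25-to-225-unit scale.
scale_min, scale_max = 25.0, 225.0
span = scale_max - scale_min       # 200 units
error_limit = 0.002 * span         # +/-0.4 units in any measurement
print(f"span = {span:.0f} units, error limit = +/-{error_limit:.1f} units")
```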
4) Point Accuracy: Such accuracy is specified at only one point of the scale. It does not give any information about the accuracy at any other point on the scale, so the general accuracy of an instrument cannot be specified in this manner. However, the general accuracy can be specified by providing a table of point accuracy values calculated at various points throughout the entire range of the instrument.
Thus, the accuracy, in whatever way it is specified, gives the accuracy of the equipment with a particular set-up and under particular conditions, and does not include any personal (observer) accuracy.
b. Precision
It is the measure of the consistency or repeatability of measurements. It denotes the closeness with which individual measurements are distributed about the average of a number of measured values.
Precision is composed of two characteristics:
i. Conformity, and
ii. Number of significant figures.
i. Conformity
Consider a resistor having a true value of 2,385,692 Ω, which is being measured by an ohmmeter. The meter consistently measures this true value, but the reader can only read it, consistently, as 2.4 MΩ due to the non-availability of a finer scale. The value 2.4 MΩ is estimated by the reader from the available scale, and there are no deviations in the observed value. The error created by this limitation of the scale reading is a precision error.
The example illustrates that conformity is necessary, but not a sufficient
condition for precision. Similarly, precision is necessary but not a sufficient
condition for accuracy.
An accurate instrument should be precise, but a precise instrument
may not be accurate.
ii. Significant Figures
The precision of the measurement is obtained from the number of significant
figures in which the reading is expressed. The significant figures convey the
actual information about the magnitude and the measurement precision of the quantity.
For example, a resistance of 110 Ω specified by an instrument may lie anywhere between 109 Ω and 111 Ω; here there are 3 significant figures. If it is specified as 110.0 Ω, it may lie between 109.9 Ω and 110.1 Ω; there are now 4 significant figures.
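As a rough sketch of the idea (the one-count interpretation of the last digit is the convention assumed here), the interval implied by each reading follows from the step of its last significant digit:

```python
# The last significant digit sets how finely the reading is known:
# "110" is known to the nearest ohm, "110.0" to the nearest tenth.
readings = {"110": 1.0, "110.0": 0.1}
for text, step in readings.items():
    lo, hi = float(text) - step, float(text) + step
    print(f"{text} ohms lies between about {lo:.1f} and {hi:.1f} ohms")
```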
c. Error
The most important static characteristic of an instrument is its accuracy, which is generally expressed in terms of the error called static error.
The algebraic difference between the indicated value and the true value of the quantity to be measured is called an error.
d. Sensitivity
The sensitivity denotes the smallest change in the measured variable to which the
instrument responds.
It is defined as the ratio of the changes in the output of an instrument to a change in
the value of the quantity to be measured.
Thus, if the calibration curve is linear, as shown in Fig. (a), the sensitivity of the instrument is the slope of the calibration curve.
If the calibration curve is not linear as shown in Fig. (b), then the sensitivity varies with the input.
The units of sensitivity are millimetres per micro-ampere, millimetres per ohm, counts per volt, etc., while the units of the deflection factor (the reciprocal of sensitivity) are micro-amperes per millimetre, ohms per millimetre, volts per count, etc.
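For a linear calibration curve this is a one-line slope calculation; the figures below (a meter deflecting 100 mm for a 50 µA change) are hypothetical, not from the text.

```python
# Sensitivity = change in output / change in input (slope of the
# linear calibration curve); the deflection factor is its reciprocal.
d_output_mm = 100.0   # hypothetical deflection change, mm
d_input_uA = 50.0     # hypothetical input change, micro-amperes
sensitivity = d_output_mm / d_input_uA   # 2.0 mm per micro-ampere
deflection_factor = 1.0 / sensitivity    # 0.5 micro-amperes per mm
print(sensitivity, deflection_factor)
```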
e. Resolution
It is the smallest increment of the quantity being measured that can be detected with certainty by an instrument. Thus, the resolution means the smallest measurable input change.
So if a non-zero input quantity is slowly increased, the output reading will not increase until some minimum change in the input takes place. This minimum change which causes the change in the output is called resolution. The resolution of an instrument is also referred to as discrimination of the instrument. The resolution can affect the accuracy of the measurement.
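A hypothetical digital display makes the idea concrete: if the reading moves only in 0.5-unit steps, an input change smaller than one step never shows up in the output.

```python
def indicated(true_value, step=0.5):
    """Reading of a hypothetical instrument whose display moves in
    fixed steps; one step is effectively its resolution."""
    return round(true_value / step) * step

print(indicated(10.10))  # 10.0 -- the change from 10.0 is too small to register
print(indicated(10.30))  # 10.5 -- the input change now exceeds one step
```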
f. Threshold
If the input quantity is slowly varied from zero onwards, the output does not change until some minimum value of the input is exceeded. This minimum value of the input is called the threshold.
Thus, the resolution is the smallest measurable input change while the threshold is the smallest measurable input.
g. Linearity
The instrument requires the property of linearity that is the output varies linearly, according to the input. Linearity is defined as the ability to reproduce the input characteristics symmetrically and linearly. Graphically such a relationship between input and output is represented by a straight line.
The graph of output against the input is called the calibration curve.
The linearity properly indicates the straight-line nature of the calibration curve.
h. Zero Drift
The drift is the gradual shift of the instrument indication, over an extended period during which the value of the input variable does not change.
The zero drift is defined as the deviation in the instrument output with time, from its zero value, when the variable to be measured is constant. The whole instrument calibration may gradually shift by the same amount.
There are many environmental factors which affect the drift. These factors are stray electric field, stray magnetic field, temperature changes, contamination of metal, changes in the atomic structure, mechanical vibrations, wear and tear, corrosion, etc.
The drift is undesirable and cannot be easily compensated for. It must be carefully guarded against by continuous inspection.
i. Reproducibility
It is the degree of closeness with which a given value may be repeatedly measured. It may be specified in terms of units for a given period of time. Perfect reproducibility indicates no drift in the instrument.
Repeatability is defined as the variation of scale reading and is random in nature. Both reproducibility and repeatability are a measure of the closeness with which a given input may be measured again and again.
j. Stability
The ability of the instrument to retain its performance throughout its specified operating life and storage life is defined as its stability.
k. Tolerance
The maximum allowable error in the measurement is specified in terms of some value
which is called tolerance. This is closely related to accuracy. Actually, tolerance is not the static characteristic of measuring instruments but it is mentioned because in some instruments the accuracy is specified in terms of tolerance values.
The tolerance indicates the maximum allowable deviation of a manufactured component from a specified value.
l. Range or Span
The minimum and maximum values of the quantity that an instrument is designed to measure define its range; the difference between the maximum and minimum values is called the span.
m. Bias
The constant error which exists over the full range of measurement of an instrument is called bias.
Such a bias can be completely eliminated by calibration. The zero error is an example of bias which can be removed by calibration.
n. Hysteresis
If the input to the instrument is steadily increased from a negative value, the output also increases, as shown by curve 1 in Fig. But if the input is now decreased steadily, the output does not follow the same curve but lags by a certain value, tracing curve 2 as shown in Fig.
The difference between the two curves is called hysteresis. The maximum input hysteresis and the maximum output hysteresis are shown in Fig. These are generally expressed as a percentage of the full-scale reading.
o. Dead Space
In some instruments, it is possible that until the input increases beyond a certain value, the output does not change. So for a certain range of input values there is no change in the output; this range of input is called the dead space. Even instruments free from hysteresis may show dead space in their output characteristics. Backlash in gears is a good example of a cause of dead space.
p. Span Drift or Sensitivity Drift
If there exists a proportional change in the indication, all along the upward scale then the drift from nominal characteristics is called span drift or sensitivity drift.
Dynamic Characteristics
When the instrument is subjected to rapidly varying inputs, the relation between input and output becomes totally different than that in the case of static or constant inputs. As the input varies from instant to instant, output also varies from instant to instant. The behaviour of the system under such conditions is called the dynamic response of the system.
The various dynamic characteristics of an instrument are speed of response, fidelity, lag and dynamic error.
a. Speed of Response
It is the rapidity with which the system responds to the changes in the quantity to be measured. It gives information about how fast the system reacts to the changes in the input. It indicates the activeness of the system. The system should respond very quickly to the changes in the input.
b. Fidelity
It indicates how faithfully the system reproduces the changes in the input. It is the ability of an instrument to produce a wave shape identical to the wave shape of the input with respect to time. It is defined as the degree to which an instrument indicates the changes in the measured variable without dynamic error.
c. Lag
Every system takes some time, however small it may be, to respond to changes in the measured variable. This retardation or delay in the response of a system is called lag, also known as measuring lag. Lags are of two types:
1. Retardation Lag: In this case, the response of the system begins immediately after a change in the variable has occurred.
2. Time Delay: In this case, the response begins after some time called dead time, after the application of input. Such a delay shifts the response along the time axis and hence causes the dynamic error.
d. Dynamic Error
It is the difference between the true value of the variable to be measured, changing with time, and the value indicated by the measurement system, assuming zero static error.
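As a sketch only (the first-order instrument model and all the numbers are assumptions, not from the text), a simple simulation shows the dynamic error of a lagging instrument tracking a steadily rising variable: the indicated value settles a fixed amount behind the true value.

```python
tau = 0.5    # assumed instrument time constant, seconds
rate = 2.0   # assumed ramp rate of the measured variable, units/second
dt = 0.001   # simulation step, seconds

t, reading = 0.0, 0.0
while t < 5.0:
    true_value = rate * t
    reading += (true_value - reading) * dt / tau  # first-order lag
    t += dt

dynamic_error = rate * t - reading
print(f"dynamic error after 5 s ~ {dynamic_error:.2f} units")
```

For a first-order instrument tracking a ramp, this steady-state dynamic error tends to rate x tau, i.e., about 1 unit with the values assumed here.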