32 FEBRUARY 2011 www.tmworld.com TEST & MEASUREMENT WORLD

Calibration

Understanding measurement accuracy lets you appreciate why instruments don't give perfect readings and helps you manage their errors.

Test instruments such as oscilloscopes and DMMs (digital multimeters) often let you get the measurement results you need with just the press of a button. But the number on a meter's display or the waveform on an oscilloscope's screen is not a perfect measurement. The value displayed is never exactly the same as the value that was applied to the instrument's input. In addition, instruments use different methods for making the same measurement. Errors are always present, and you need to know how much error you can tolerate.

An instrument's accuracy shows the closeness between an applied signal and the displayed value. Accuracy is closely related to error, and in fact, the two are reciprocal: the smaller the error, the better the accuracy. The measurement-science community often uses the term "uncertainty" instead of error or accuracy. But "accuracy" is still widely used in instrument specifications.

Instrument specifications may give accuracy in several ways, such as percent of reading or percent of full scale. Meter specifications may also add some number of counts to the percentage.

To find the difference between the value that was applied to an instrument's input and the displayed value, you need to calibrate the instrument with a reliable signal source such as a multifunction calibrator. Calibrators provide reference signals such as AC and DC voltage and current, resistance, and frequency. They typically deliver these signals over several decades of magnitude.

Although calibrators aren't perfect either, you can consider their output values as "the true value" for your everyday measurements. Calibrators have smaller errors than the instruments they calibrate. In general, a calibrator's error should be four times smaller than the errors of the instruments it calibrates.

If you denote the value coming from the calibrator as XC and the value measured by the instrument as XM, then the difference is called absolute error:

ΔX = XC – XM

Absolute error, however, doesn't represent the quality of a measurement, and it isn't usually used in instrument data sheets.

Manage your measurement errors

BY JORDAN DIMITROV, CENTENNIAL COLLEGE

FIGURE 1. (a) Absolute error can be constant over a measurement range. (b) Relative error varies within the measurement range.

What you need to know is the relative error, or the instrument's error relative to the stimulus signal. Assume you have two instruments. One instrument gets a 10-V input from the calibrator and displays 9.5 V. The other gets 120 V from the calibrator and displays 120.5 V. Both voltage differences are 0.5 V, but the second difference is smaller relative to the input signal. This error ratio better describes an instrument's accuracy. Expressed in percent, the error is:

ε = (ΔX / XC) × 100

where ε represents relative error in percent, usually expressed as a positive number. High-end instrument specs may present the relative error in ppm (parts per million). The formula is the same, but the ratio is multiplied by 1 million instead of 100.
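The two error definitions above can be sketched in a few lines of Python. This is a minimal illustration, not from the article; the function names are my own:

```python
def absolute_error(x_c, x_m):
    """Absolute error: calibrator value minus the instrument's reading."""
    return x_c - x_m

def relative_error_pct(x_c, x_m):
    """Relative error as a percentage of the stimulus (calibrator) value."""
    return abs(x_c - x_m) / abs(x_c) * 100

# The article's two instruments: identical 0.5-V absolute errors,
# but very different relative errors.
print(round(relative_error_pct(10.0, 9.5), 3))     # 5.0
print(round(relative_error_pct(120.0, 120.5), 3))  # 0.417
```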

Absolute and percentage errors behave differently within an instrument's range of measurement. For analog meters (meters having a needle), you can consider the absolute error to be constant throughout a measurement range (Figure 1a). Not so for relative error. If ΔX in the equation above is constant, then the percentage error follows the course shown in Figure 1b. At the low end of a measurement range, the relative error is rather high. But it smoothly decays over the rest of the range as it becomes a smaller portion of the measured value.

An instrument's precision (also called its repeatability or reproducibility) also affects its measurements. Precision shows the degree of closeness between multiple results for the same input taken over a certain period of time. More consistent results mean better precision.

Assume you subject four instruments to the same stimulus signal, XC, and you measure it with the instruments at the same settings, taking multiple measurements over time. Figure 2 shows four possible outcomes. In Figure 2a, all readings are close to the true value, hence accuracy is good. The readings are also close to each other, thus the precision is also good. In Figure 2b, individual readings are scattered around the true value, which indicates large random errors and thus poor precision. The average of all readings, however, will be close to the true value, and the instrument has good accuracy. In Figure 2c, the readings are consistent but inaccurate: good precision, poor accuracy. Its random errors are low. The graph in Figure 2d shows a set of measurements with poor accuracy and poor precision.
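Figure 2's four outcomes can be mimicked numerically: the offset of the mean from the true value reflects accuracy, and the scatter of the readings reflects precision. This sketch, including its threshold values, is illustrative and not from the article:

```python
import statistics

def characterize(readings, true_value, bias_limit, spread_limit):
    """Mean offset from the true value ~ accuracy; scatter ~ precision.
    The numeric limits are hypothetical, chosen only for illustration."""
    bias = abs(statistics.mean(readings) - true_value)
    spread = statistics.stdev(readings)
    return ("good" if bias <= bias_limit else "poor",
            "good" if spread <= spread_limit else "poor")

# A tight cluster centered away from the true value, as in Figure 2c:
accuracy, precision = characterize([9.71, 9.70, 9.72, 9.71],
                                   true_value=10.0,
                                   bias_limit=0.05, spread_limit=0.05)
print(accuracy, precision)  # poor good
```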

Resolution, also known as sensitivity, is the smallest change of input that an instrument can detect. Resolution is also a component of an instrument's accuracy. For digital instruments, resolution is the value of one unit in the least-significant digit on the display, based on the range setting. Table 1 gives a few examples of resolution.

Resolution and accuracy are connected through the instrument specifications. It makes no sense to display microvolts for a signal if the instrument's measurements have millivolts of error: the reading will have four incorrect digits. The last digit always has a possible error because of resolution; the other digits are incorrect because of measurement error. A simple guideline is to use an instrument with resolution 10 times finer than its accuracy (one unit of the last digit equal to a tenth of the error). This ensures that readings have no more than two incorrect digits.
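One way to read that guideline in code follows. This is an illustrative sketch; the interpretation of "no more than two incorrect digits" as error no larger than ten units of the last digit is mine:

```python
def doubtful_digits_ok(resolution, absolute_error):
    """True when no more than the last two displayed digits are in doubt,
    i.e. the error is no larger than ten units of the last digit."""
    return absolute_error <= 10 * resolution

# Values in mV. A display resolved to 0.1 mV with 1 mV of error: acceptable.
print(doubtful_digits_ok(0.1, 1.0))    # True
# A display resolved to 1 uV (0.001 mV) with 1 mV of error: four doubtful digits.
print(doubtful_digits_ok(0.001, 1.0))  # False
```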

Calibrations are essential
You must calibrate your instruments regularly so you can be assured they are operating within specifications. A calibration compares an instrument's measurements to a reference signal. If an instrument's readings are outside tolerable limits, the instrument will need adjustment. In a typical calibration, several (usually 11) equally spaced values will be calibrated for each range. The graphs in Figure 3 show an ideal performance and five possible causes of error.

Figure 3a shows an ideal response: a straight line with a slope of 45° going through the origin. Figure 3b is an example of offset error: the calibration line is shifted up or down with respect to the ideal line. In a gain error (Figure 3c), the calibration line is rotated about the origin.

Figure 3d shows both gain and offset errors. Adjustments in gain and offset (mX + b) can compensate for these errors. Adjustments are generally done through an instrument's firmware, which applies the mX + b calibration constants to the stimulus signal. Such an adjustment is called a two-point adjustment.
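A two-point (mX + b) adjustment can be sketched as follows, assuming a hypothetical meter whose raw response has a 2% gain error and a 0.05-V offset (the numbers are invented for illustration):

```python
def two_point_adjustment(x1, raw1, x2, raw2):
    """Solve for gain m and offset b so that m*raw + b reproduces the
    stimulus at two calibration points (stimulus, raw reading)."""
    m = (x2 - x1) / (raw2 - raw1)
    b = x1 - m * raw1
    return m, b

# Raw response of the hypothetical meter: raw = 1.02*x + 0.05
m, b = two_point_adjustment(1.0, 1.07, 10.0, 10.25)

raw = 1.02 * 5.0 + 0.05       # uncorrected mid-scale reading (5.15)
print(round(m * raw + b, 6))  # 5.0 -- the correction recovers the stimulus
```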

Sometimes, though, an instrument's response isn't linear. In this case, a simple mX + b adjustment won't compensate for errors. Figure 3e shows a case with no errors at the extremes of the range, but with errors in between. In this case, the instrument must apply a nonlinear curve fit (three or more points) to keep its measurements within acceptable limits.

If an error appears consistently whenever the unit is calibrated, the measurement instrument has a systematic error. If errors change unpredictably from calibration to calibration, the instrument has random errors. Random errors are presented by the min, max, and average values for a whole set of calibration data obtained for each calibration point (Figure 3f).

FIGURE 2. (a) Consistent measurements close to the actual input mean an instrument has good accuracy and precision. (b) Random errors result in poor repeatability but can still lead to an accurate measurement. (c) Consistent measurements that don't reflect the actual signal may be repeatable but are inaccurate. (d) Random errors not close to the actual value lack accuracy and repeatability.

Table 1. Measurement resolution depends on range and number of digits.
Input     Resolution
234.5 mV  0.1 mV (100 µV)
37.21 V   0.01 V (10 mV)
124.7 V   0.1 V (100 mV)

The errors represented in Figures 3b, 3c, and 3d are examples of systematic errors because they're expected and predictable. Proper design and circuit adjustments by instrument makers can minimize the effects of these errors. Random error, however, is unpredictable. It always appears in measurement results. You can't eliminate random error, but you can reduce its effects by averaging numerous readings of the same input signal.
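The averaging remedy can be demonstrated with simulated random errors. This is a sketch; the noise level and sample count are arbitrary:

```python
import random
import statistics

random.seed(2)
TRUE_VALUE = 10.0

def reading():
    """One measurement corrupted only by random (Gaussian) error."""
    return random.gauss(TRUE_VALUE, 0.05)

single = reading()
averaged = statistics.mean(reading() for _ in range(400))

# The mean of many readings sits far closer to the true value;
# its random error shrinks roughly as 1/sqrt(N).
print(abs(single - TRUE_VALUE), abs(averaged - TRUE_VALUE))
```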

Errors caused by ambient factors
Measurement errors result from instrument design as well as from many ambient factors, such as temperature, time, humidity, barometric pressure, and, in the case of AC signals, the shape and frequency of the measured signal. During a calibration, these ambient conditions must be held within prescribed limits. For example, temperature generally must be 23°C ±5°C. Some instruments require tighter temperature limits. When all factors are within their limits, the instrument can be calibrated under normal conditions.

The error determined by calibration under normal conditions is called basic error. It is a single value or an expression describing error behavior within the range of measurement. The basic relative (percentage) error is the generic accuracy specification of every instrument.

When ambient factors exceed the prescribed limits, measurement error increases. Additional temperature error is specified in %/°C. Instrument specs provide the time or aging error for three time intervals after calibration: 24 hr, 90 days, and 1 year.

For example, a DMM calibrated at 20°C has a basic accuracy of 0.05% and an additional temperature error of 0.005%/°C. If you operate the instrument at 50°C, the additional temperature error will be 30°C × 0.005%/°C = 0.15%. The total error is the sum of the basic and additional errors. In this case, the error is 0.05% + 0.15% = 0.20%. Note that the relative error is now four times higher.
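The DMM example works out as follows; this is a direct transcription of the arithmetic above into Python:

```python
def total_error_pct(basic_pct, tempco_pct_per_degc, t_op_c, t_cal_c):
    """Total error = basic error + additional temperature error."""
    additional = abs(t_op_c - t_cal_c) * tempco_pct_per_degc
    return basic_pct + additional

# Calibrated at 20 degC, operated at 50 degC:
print(round(total_error_pct(0.05, 0.005, 50, 20), 4))  # 0.2 (percent)
```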

Understand the specs
A calibration lab will produce a report of an instrument's performance. A calibration report shows the applied signal's value, low and high measurement limits, instrument readings, and test results for each calibration point. The online version of this article includes a table from a real calibration of a Fluke 8060A DMM (www.tmworld.com/2011_02). Table 2 shows the DC voltage results from that calibration.

To understand a calibration report, you need to know how to calculate limits and errors. Look at the result of a calibration in the 2-VDC range in Table 2. The manufacturer's specifications define error as "0.04% of reading + 2 digits." "Reading" is the input value (1.9 V), and "2 digits" means two units of the least-significant digit. For this range, the value of the least-significant digit (the resolution) is 0.0001 V. That error has a unit, making it an absolute error. Recall that absolute error can be positive or negative. You can calculate the absolute error and the low and high limits for the measurement range:

Absolute error:
0.04% × 1.9 V + 2 × 0.0001 V = 0.00096 V

Low limit:
1.9 V – 0.00096 V = 1.89904 V ≈ 1.8990 V

High limit:
1.9 V + 0.00096 V = 1.90096 V ≈ 1.9010 V

Relative error:
0.00096 V ÷ 1.9 V = 5.05 × 10⁻⁴ ≈ 0.05%
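The same spec arithmetic can be wrapped in a small helper; this is a sketch mirroring the worked example, and the function name is mine:

```python
def dmm_limits(reading_v, pct_of_reading, digits, resolution_v):
    """Limits for a spec of the form '<pct>% of reading + <digits> digits'."""
    abs_err = pct_of_reading / 100 * reading_v + digits * resolution_v
    return reading_v - abs_err, reading_v + abs_err, abs_err

# The 2-VDC range of the Fluke 8060A: 0.04% of reading + 2 digits,
# 0.0001-V resolution, 1.9-V stimulus.
low, high, err = dmm_limits(1.9, 0.04, 2, 0.0001)
print(round(err, 5), round(low, 4), round(high, 4))  # 0.00096 1.899 1.901
```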

The two limits, 1.8990 and 1.9010 V, define a range of acceptable readings. If, during the calibration, all of the instrument's readings are within that range, it passes. If only one calibration point is out of the limits, the whole calibration fails. Exceptions may occur. For example, if a DMM is used in production and only makes voltage measurements, you may not care if the current measurements are out of tolerance.

AC measurements
Measuring AC voltage and current has some traps not present in DC measurements. DMMs report the RMS (root mean square) value of the AC signal, but you should be aware that instruments differ in how they calculate AC voltage or current.

FIGURE 3. An instrument's gain, offset, and linearity, as well as random errors, affect its measurements. (a) Ideal performance; (b) offset error; (c) gain error; (d) offset + gain error; (e) nonlinearity; (f) random error.

Table 2. DC-volt calibration data for a Fluke 8060A DMM (manufacturer tolerance).
Standard applied  Low limit   High limit  Reading    Result
190 mV            189.90 mV   190.10 mV   190.02 mV  Pass
1.9 V             1.8990 V    1.9010 V    1.9002 V   Pass
–1.9 V            –1.9010 V   –1.8990 V   –1.9002 V  Pass
19 V              18.989 V    19.012 V    19.003 V   Pass
190 V             189.89 V    190.12 V    190.04 V   Pass
1000 V            999.3 V     1000.7 V    1000.1 V   Pass


Many DMMs use the true-RMS measurement method, for it provides correct results for any shape of signal: sine, triangle, or square. Other approaches, such as averaging, provide correct results for sine-wave signals only. A study of 20 voltmeters in AC mode shows that instruments that don't use true RMS can lead to severe errors, where readings can be as low as 52% of the true value (Ref. 1).

The RMS value depends on the shape of the signal. If the peak value of a signal is VP, then the RMS value is VP/√2 for a sine wave, VP/√3 for a triangle wave, and VP for a square wave.
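Those ratios are easy to verify numerically. A sketch, sampling one period of each waveform:

```python
import math

def rms(samples):
    """Root-mean-square of a sampled waveform."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

N, VP = 100_000, 9.0
t = [i / N for i in range(N)]                        # one period
sine = [VP * math.sin(2 * math.pi * x) for x in t]
triangle = [VP * (1 - 4 * abs(x - 0.5)) for x in t]  # ramps between -VP and VP
square = [VP if x < 0.5 else -VP for x in t]

print(round(rms(sine), 3))      # 6.364  (9/sqrt(2))
print(round(rms(triangle), 3))  # 5.196  (9/sqrt(3))
print(round(rms(square), 3))    # 9.0
```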

Accuracy also depends on the frequency of the signal. Figure 4 shows an example with the Agilent Technologies 34401A DMM. The table at the left comes from the manufacturer's specifications. The graph at the right shows how the error changes when the 10-V range is used to measure a 9-V sine wave with variable frequency. Although the meter is a true-RMS unit, it does not provide a flat accuracy-vs.-frequency response.

Once again, seeing a number on the display doesn't mean that it is the true value. And wrong measurement data can lead to wrong conclusions and wrong decisions. Instrument designers can also limit an instrument's errors through careful selection of data converters; see "Design considerations for instruments," in the online version of this article (www.tmworld.com/2011_02), to learn more. T&MW

REFERENCE
1. Williams, J., and T. Owen, "Understanding and selecting rms voltmeters," EDN, May 11, 2000, pp. 54–58. www.edn.com.

Jordan Dimitrov is an engineer in measurement and instrumentation. He holds BSEE, MSEE, and PhD degrees from the Technical University in Sofia, Bulgaria. He has 30 years of experience in research, design, and calibration. He also holds two patents and has published more than 60 papers. Currently, Dimitrov teaches at two community colleges in Toronto, ON, Canada.

FIGURE 4. The frequency of an AC signal affects the accuracy of its measurement. AC ranges: 1 V, 10 V, 100 V, 750 V; graph conditions: VIN = 9 VAC, range = 10 V.

Frequency        Error (% of reading + % of range)
3 Hz–5 Hz        1.00 + 0.03
5 Hz–10 Hz       0.35 + 0.03
10 Hz–20 kHz     0.06 + 0.03
20 kHz–50 kHz    0.12 + 0.05
50 kHz–100 kHz   0.60 + 0.08
100 kHz–300 kHz  4.00 + 0.50
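Reading the worst-case error off that spec table for the article's 9-V signal on the 10-V range can be scripted. A sketch: the band data are transcribed from the figure; the function and its structure are mine:

```python
# Error bands from the article's Figure 4 table for the Agilent 34401A,
# expressed as (% of reading, % of range); treated here as given data.
BANDS = [  # (f_low_hz, f_high_hz, pct_of_reading, pct_of_range)
    (3, 5, 1.00, 0.03),
    (5, 10, 0.35, 0.03),
    (10, 20e3, 0.06, 0.03),
    (20e3, 50e3, 0.12, 0.05),
    (50e3, 100e3, 0.60, 0.08),
    (100e3, 300e3, 4.00, 0.50),
]

def ac_error_pct(freq_hz, reading_v=9.0, range_v=10.0):
    """Worst-case error, in percent of reading, for a sine measurement.
    The '% of range' term is rescaled onto the actual reading."""
    for f_lo, f_hi, pct_rdg, pct_rng in BANDS:
        if f_lo <= freq_hz <= f_hi:
            return pct_rdg + pct_rng * range_v / reading_v
    raise ValueError("frequency outside the specified bands")

print(round(ac_error_pct(1e3), 4))    # 0.0933 -- flat mid-band region
print(round(ac_error_pct(200e3), 4))  # 4.5556 -- error soars at the high end
```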