Measurements and Error Analysis
"It is better to be roughly right than precisely wrong." — Alan Greenspan
The Uncertainty of Measurements
Some numerical statements are exact: Mary has 3 brothers, and 2 + 2 = 4. However, all measurements have some degree of uncertainty that may come from a variety of sources. The process of evaluating the uncertainty associated with a measurement result is often called uncertainty analysis or error analysis. The complete statement of a measured value should include an estimate of the level of confidence associated with the value. Properly reporting an experimental result along with its uncertainty allows other people to make judgments about the quality of the experiment, and it facilitates meaningful comparisons with other similar values or a theoretical prediction. Without an uncertainty estimate, it is impossible to answer the basic scientific question: "Does my result agree with a theoretical prediction or results from other experiments?" This question is fundamental for deciding if a scientific hypothesis is confirmed or refuted.

When we make a measurement, we generally assume that some exact or true value exists based on how we define what is being measured. While we may never know this true value exactly, we attempt to find this ideal quantity to the best of our ability with the time and resources available. As we make measurements by different methods, or even when making multiple measurements using the same method, we may obtain slightly different results. So how do we report our findings for our best estimate of this elusive true value? The most common way to show the range of values that we believe includes the true value is:
( 1 )
measurement = (best estimate ± uncertainty) units
Let's take an example. Suppose you want to find the mass of a gold ring that you would like to sell to a friend. You do not want to jeopardize your friendship, so you want to get an accurate mass of the ring in order to charge a fair market price. You estimate the mass to be between 10 and 20 grams from how heavy it feels in your hand, but this is not a very precise estimate. After some searching, you find an electronic balance that gives a mass reading of 17.43 grams. While this measurement is much more precise than the original estimate, how do you know that it is accurate, and how confident are you that this measurement represents the true value of the ring's mass? Since the digital display of the balance is limited to 2 decimal places, you could report the mass as m = 17.43 ± 0.01 g.

Accuracy is the closeness of agreement between a measured value and a true or accepted value. Measurement error is the amount of inaccuracy.

Precision is a measure of how well a result can be determined (without reference to a theoretical or true value). It is the degree of consistency and agreement among independent measurements of the same quantity; also the reliability or reproducibility of the result.

The uncertainty estimate associated with a measurement should account for both the accuracy and precision of the measurement.
( 2 )
Relative Uncertainty = uncertainty / measured quantity

Example: m = 75.5 ± 0.5 g has a relative uncertainty of 0.5 / 75.5 = 0.006 = 0.7%.
( 3 )
Relative Error = (measured value − expected value) / expected value

If the expected value for m is 80.0 g, then the relative error is: (75.5 − 80.0) / 80.0 = −0.056 = −5.6%. Note: The minus sign indicates that the measured value is less than the expected value.
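As a minimal sketch, the two ratios defined above can be computed directly from the example values in the text (m = 75.5 ± 0.5 g measured, 80.0 g expected); the variable names are my own:

```python
measured = 75.5      # g, best estimate
uncertainty = 0.5    # g, absolute uncertainty
expected = 80.0      # g, accepted value

relative_uncertainty = uncertainty / measured        # ~0.007, i.e. 0.7%
relative_error = (measured - expected) / expected    # ~-0.056, i.e. -5.6%

print(round(relative_uncertainty, 3), round(relative_error, 3))
```

Note that relative uncertainty describes precision (it ignores the expected value), while relative error describes accuracy.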
Types of Errors
Measurement errors may be classified as either random or systematic, depending on how the measurement was obtained (an instrument could cause a random error in one situation and a systematic error in another).
Random errors are statistical fluctuations (in either direction) in the measured data due to the precision limitations of the measurement device. Random errors can be evaluated through statistical analysis and can be reduced by averaging over a large number of observations (see standard error).
Systematic errors are reproducible inaccuracies that are consistently in the same direction. These errors are difficult to detect and cannot be analyzed statistically. If a systematic error is identified when calibrating against a standard, applying a correction or correction factor to compensate for the effect can reduce the bias. Unlike random errors, systematic errors cannot be detected or reduced by increasing the number of observations.
When making careful measurements, our goal is to reduce as many sources of error as possible and to keep track of those errors that we cannot eliminate. It is useful to know the types of errors that may occur, so that we may recognize them when they arise. Common sources of error in physics laboratory experiments:
Incomplete definition (may be systematic or random) — One reason that it is impossible to make exact measurements is that the measurement is not always clearly defined. For example, if two different people measure the length of the same string, they would probably get different results because each person may stretch the string with a different tension. The best way to minimize definition errors is to carefully consider and specify the conditions that could affect the measurement.

Failure to account for a factor (usually systematic) — The most challenging part of designing an experiment is trying to control or account for all possible factors except the one independent variable that is being analyzed. For instance, you may inadvertently ignore air resistance when measuring free-fall acceleration, or you may fail to account for the effect of the Earth's magnetic field when measuring the field near a small magnet. The best way to account for these sources of error is to brainstorm with your peers about all the factors that could possibly affect your result. This brainstorming should be done before beginning the experiment, in order to plan for and account for confounding factors before taking data. Sometimes a correction can be applied to a result after taking data to account for an error that was not detected earlier.

Environmental factors (systematic or random) — Be aware of errors introduced by your immediate working environment. You may need to account for or protect your experiment from vibrations, drafts, changes in temperature, and electronic noise or other effects from nearby apparatus.

Instrument resolution (random) — All instruments have finite precision that limits the ability to resolve small measurement differences. For instance, a meter stick cannot be used to distinguish distances to a precision much better than about half of its smallest scale division (0.5 mm in this case).
One of the best ways to obtain more precise measurements is to use a null difference method instead of measuring a quantity directly. Null or balance methods involve using instrumentation to measure the difference between two similar quantities, one of which is known very accurately and is adjustable. The adjustable reference quantity is varied until the difference is reduced to zero. The two quantities are then balanced, and the magnitude of the unknown quantity can be found by comparison with a measurement standard. With this method, problems of source instability are eliminated, and the measuring instrument can be very sensitive and does not even need a scale.

Calibration (systematic) — Whenever possible, the calibration of an instrument should be checked before taking data. If a calibration standard is not available, the accuracy of the instrument should be checked by comparing it with another instrument that is at least as precise, or by consulting the technical data provided by the manufacturer. Calibration errors are usually linear (measured as a fraction of the full-scale reading), so that larger values result in greater absolute errors.

Zero offset (systematic) — When making a measurement with a micrometer caliper, electronic balance, or electrical meter, always check the zero reading first. Re-zero the instrument if possible, or at least measure and record the zero offset so that readings can be corrected later. It is also a good idea to check the zero reading throughout the experiment. Failure to zero a device will result in a constant error that is more significant for smaller measured values than for larger ones.

Physical variations (random) — It is always wise to obtain multiple measurements over the widest range possible. Doing so often reveals variations that might otherwise go undetected. These variations may call for closer examination, or they may be combined to find an average value.
Parallax (systematic or random) — This error can occur whenever there is some distance between the measuring scale and the indicator used to obtain a measurement. If the observer's eye is not squarely aligned with the pointer and scale, the reading may be too high or too low (some analog meters have mirrors to help with this alignment).

Instrument drift (systematic) — Most electronic instruments have readings that drift over time. The amount of drift is generally not a concern, but occasionally this source of error can be significant.

Lag time and hysteresis (systematic) — Some measuring devices require time to reach equilibrium, and taking a measurement before the instrument is stable will result in a measurement that is too high or too low. A common example is taking temperature readings with a thermometer that has not reached thermal equilibrium with its environment. A similar effect is hysteresis, where the instrument readings lag behind and appear to have a "memory" effect, as data are taken sequentially moving up or down through a range of values. Hysteresis is most commonly associated with materials that become magnetized when a changing magnetic field is applied.

Personal errors come from carelessness, poor technique, or bias on the part of the experimenter. The experimenter may measure incorrectly, may use poor technique in taking a measurement, or may introduce a bias into measurements by expecting (and inadvertently forcing) the results to agree with the expected outcome.
Gross personal errors, sometimes called mistakes or blunders, should be avoided and corrected if discovered. As a rule, personal errors are excluded from the error analysis discussion because it is generally assumed that the experimental result was obtained by following correct procedures. The term human error should also be avoided in error analysis discussions because it is too general to be useful.
Estimating Experimental Uncertainty for a Single Measurement
Any measurement you make will have some uncertainty associated with it, no matter the precision of your measuring tool. So how do you determine and report this uncertainty?
The uncertainty of a single measurement is limited by the precision and accuracy of the measuring instrument, along with any other factors that might affect the ability of the experimenter to make the measurement.
For example, if you are trying to use a meter stick to measure the diameter of a tennis ball, the uncertainty might be ± 5 mm; with a more precise instrument, it might be reduced to about ± 2 mm.
( 4 )
Measurement = (measured value ± standard uncertainty) unit
where the ± standard uncertainty indicates approximately a 68% confidence interval (see sections on Standard Deviation and Reporting Uncertainties).

Example: Diameter of tennis ball = 6.7 ± 0.2 cm.
Estimating Uncertainty in Repeated Measurements
Suppose you time the period of oscillation of a pendulum using a digital instrument (that you assume is measuring accurately) and find: T = 0.44 seconds. This single measurement of the period suggests a precision of ±0.005 s, but this instrument precision may not give a complete sense of the uncertainty. If you repeat the measurement several times and examine the variation among the measured values, you can get a better idea of the uncertainty in the period. For example, here are the results of 5 measurements, in seconds: 0.46, 0.44, 0.45, 0.44, 0.41.
( 5 )
Average (mean) = (x1 + x2 + ⋯ + xN) / N
For this situation, the best estimate of the period is the average, or mean.
Whenever possible, repeat a measurement several times and average the results. This average is generally the best estimate of the "true" value (unless the data set is skewed by one or more outliers, which should be examined to determine whether they are bad data points that should be omitted from the average or valid measurements that require further investigation). Generally, the more repetitions you make of a measurement, the better this estimate will be, but be careful to avoid wasting time taking more measurements than is necessary for the precision required.
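The mean of the five pendulum readings quoted above can be checked with a few lines of Python (a sketch; the variable names are my own):

```python
periods = [0.46, 0.44, 0.45, 0.44, 0.41]   # the five timings, in seconds

mean_period = sum(periods) / len(periods)  # best estimate of the period
print(round(mean_period, 2))               # 0.44
```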
Consider, as another example, the measurement of the width of a piece of paper using a meter stick. Being careful to keep the meter stick parallel to the edge of the paper (to avoid a systematic error which would cause the measured value to be consistently higher than the correct value), the width of the paper is measured at a number of points on the sheet, and the values obtained are entered in a data table. Note that the last digit is only a rough estimate, since it is difficult to read a meter stick to the nearest tenth of a millimeter (0.01 cm).
( 6 )
Average = (sum of observed widths) / (no. of observations) = 31.19 cm
This average is the best available estimate of the width of the piece of paper, but it is certainly not exact. We would have to average an infinite number of measurements to approach the true mean value, and even then, we are not guaranteed that the mean value is accurate, because there is still some systematic error from the measuring tool, which can never be calibrated perfectly. So how do we express the uncertainty in our average value? One way to express the variation among the measurements is to use the average deviation. This statistic tells us on average (with 50% confidence) how much the individual measurements vary from the mean.
( 7 )
d = ( |x1 − x̄| + |x2 − x̄| + ⋯ + |xN − x̄| ) / N
However, the standard deviation is the most common way to characterize the spread of a data set. The standard deviation is always slightly greater than the average deviation, and it is used because of its association with the normal distribution that is frequently encountered in statistical analyses.
Standard Deviation
To calculate the standard deviation for a sample of N measurements:

1. Sum all the measurements and divide by N to get the average, or mean.
2. Now, subtract this average from each of the N measurements to obtain N "deviations".
3. Square each of these N deviations and add them all up.
4. Divide this result by (N − 1) and take the square root.
We can write out the formula for the standard deviation as follows. Let the N measurements be called x1, x2, ..., xN. Let the average of the N values be called x̄. Then each deviation is given by δxi = xi − x̄, for i = 1, 2, ..., N. The standard deviation is then:

( 8 )

s = √( (δx1² + δx2² + ⋯ + δxN²) / (N − 1) )
In our previous example, the average width is x̄ = 31.19 cm. The average deviation is d = 0.086 cm. The standard deviation is:

s = √( ((0.14)² + (0.04)² + (0.07)² + (0.17)² + (0.01)²) / (5 − 1) ) = 0.12 cm.

For a normal distribution of data, roughly 95% of the measurements are expected to fall within the interval x̄ ± 2s.
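The four steps can be sketched in Python. The five widths below are hypothetical values (the original data table did not survive); they were chosen only so that their deviations match the ones quoted above:

```python
import statistics

# Hypothetical paper widths in cm (assumed, not from the source's data table)
widths = [31.33, 31.15, 31.26, 31.02, 31.20]

mean = sum(widths) / len(widths)                            # step 1: the average
avg_dev = sum(abs(w - mean) for w in widths) / len(widths)  # average deviation d
s = statistics.stdev(widths)                                # steps 2-4: sample std dev (N-1 divisor)

print(round(mean, 2), round(avg_dev, 3), round(s, 2))       # 31.19 0.086 0.12
```

Note that `statistics.stdev` uses the (N − 1) divisor described in step 4; `statistics.pstdev` would divide by N instead.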
Figure 1
Standard Deviation of the Hateful (Standard Error)
When we report the average value of N measurements, the uncertainty we should associate with this average value is the standard deviation of the mean, often called the standard error (SE).
( 9 )
σ x =
s
N
The standard error is smaller than the standard deviation by a factor of 1/ Average newspaper width = 31.19 ± 0.05 cm.
.
N
.
5
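A one-line check of the standard error for the paper-width example (a sketch; variable names are my own):

```python
import math

s = 0.12   # sample standard deviation of the widths, in cm
N = 5      # number of measurements

standard_error = s / math.sqrt(N)   # uncertainty of the average
print(round(standard_error, 2))     # 0.05
```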
Anomalous Data
The first step you should take in analyzing data (and even while taking data) is to examine the data set as a whole to look for patterns and outliers. Anomalous data points that lie outside the general trend of the data may suggest an interesting phenomenon that could lead to a new discovery, or they may simply be the result of a mistake or random fluctuations. In any case, an outlier requires closer examination to determine the cause of the unexpected result. Extreme data should never be "thrown out" without clear justification and explanation, because you may be discarding the most significant part of the investigation! However, if you can clearly justify omitting an inconsistent data point, then you should exclude the outlier from your analysis so that the average value is not skewed from the "true" mean.
Fractional Uncertainty Revisited
When a reported value is determined by taking the average of a set of independent readings, the fractional uncertainty is given by the ratio of the uncertainty divided by the average value. For this example,
( 10 )
Fractional uncertainty = uncertainty / average = 0.05 / 31.19 = 0.0016 ≈ 0.2%
Note that the fractional uncertainty is dimensionless but is often reported as a percentage or in parts per million (ppm) to emphasize the fractional nature of the value. A scientist might also make the statement that this measurement "is good to about 1 part in 500" or "precise to about 0.2%". The fractional uncertainty is also important because it is used in propagating uncertainty in calculations that use the result of a measurement, as discussed in the next section.
Propagation of Uncertainty
Suppose we want to determine a quantity f, which depends on x and maybe several other variables y, z, etc. We want to know the error in f if we measure x, y, ... with errors σx, σy, ... Examples:
( 11 )
f = xy (Area of a rectangle)
( 12 )
f = p cos θ ( x -component of momentum)
( 13 )

f = x / t (velocity)
For a single-variable function f(x), the deviation in f can be related to the deviation in x using calculus:
( 14 )

δf = (df/dx) δx

Thus, taking the square and the average:

( 15 )

δf² = (df/dx)² δx²

and using the definition of σ, we get:

( 16 )

σf = |df/dx| σx
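The rule σf = |df/dx| σx can be checked numerically with a central-difference derivative; the helper below is my own illustration, not part of the source text:

```python
def propagate(f, x, sigma_x, h=1e-6):
    """Apply sigma_f = |df/dx| * sigma_x, estimating df/dx by central difference."""
    dfdx = (f(x + h) - f(x - h)) / (2 * h)
    return abs(dfdx) * sigma_x

# Check against the analytic rule for f(x) = x**2, where sigma_f = 2|x| sigma_x:
sigma_f = propagate(lambda x: x**2, 3.0, 0.1)
print(sigma_f)   # ~0.6, matching 2 * 3.0 * 0.1
```

This numeric approach is handy when f has no convenient closed-form derivative, at the cost of a small finite-difference error.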
Examples:

(a) f = √x

( 17 )

df/dx = 1 / (2√x)

( 18 )

σf = σx / (2√x), or σf/f = σx / (2x)
(b) f = x²

df/dx = 2x, so σf = 2|x| σx, or σf/f = 2 σx/x
(c) f = cos θ

( 22 )

σf = |sin θ| σθ, or σf/f = |tan θ| σθ. Note: in this situation, σθ must be in radians.
In the case where f depends on two or more variables, the derivation above can be repeated with minor modification. For two variables, f(x, y), we have:

δf = (∂f/∂x) δx + (∂f/∂y) δy

The partial derivative ∂f/∂x means differentiating f with respect to x while holding the other variables fixed. Taking the square and the average, we get the law of propagation of uncertainty:

σf² = (∂f/∂x)² σx² + (∂f/∂y)² σy² + 2 (∂f/∂x)(∂f/∂y) ⟨δx δy⟩

If the measurements of x and y are uncorrelated, then ⟨δx δy⟩ = 0, and we get:

σf² = (∂f/∂x)² σx² + (∂f/∂y)² σy²
Examples: (a) f = x + y
( 27 )
∴ σf = √( σx² + σy² )
When adding (or subtracting) independent measurements, the absolute uncertainty of the sum (or difference) is the root sum of squares (RSS) of the individual absolute uncertainties. When adding correlated measurements, the uncertainty in the result is just the sum of the absolute uncertainties, which is always a larger uncertainty estimate than adding in quadrature (RSS). Adding or subtracting a constant does not change the absolute uncertainty of the calculated value, as long as the constant is an exact value.
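The contrast between the independent (RSS) and fully correlated (straight sum) cases can be sketched with two example uncertainties (values of my choosing):

```python
import math

sigma_x, sigma_y = 0.3, 0.4

rss = math.hypot(sigma_x, sigma_y)   # independent: sqrt(0.3**2 + 0.4**2) = 0.5
worst_case = sigma_x + sigma_y       # fully correlated bound: 0.7

print(rss, worst_case)
```

`math.hypot` computes the root sum of squares directly, and the example shows why quadrature always gives the smaller (less pessimistic) estimate.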
(b) f = xy
( 29 )
∴ σf = √( y² σx² + x² σy² )
Dividing the previous equation by f = xy, we get the relative form:

σf/f = √( (σx/x)² + (σy/y)² )
(c) f = x / y

Dividing the corresponding expression by f = x/y gives the same relative form:

σf/f = √( (σx/x)² + (σy/y)² )
When multiplying (or dividing) independent measurements, the relative uncertainty of the product (quotient) is the RSS of the individual relative uncertainties. When multiplying correlated measurements, the uncertainty in the result is just the sum of the relative uncertainties, which is always a larger uncertainty estimate than adding in quadrature (RSS). Multiplying or dividing by a constant does not change the relative uncertainty of the calculated value.
Note that the relative uncertainty in f, as shown in (b) and (c) above, has the same form for multiplication and division: the relative uncertainty in a product or quotient depends on the relative uncertainty of each individual term. Example: Find the uncertainty in v, where v = at.
( 34 )

σv/v = √( (σa/a)² + (σt/t)² ) = √( (0.010)² + (0.029)² ) = 0.031 or 3.1%
Notice that the relative uncertainty in t (2.9%) is significantly greater than the relative uncertainty for a (1.0%), and therefore the relative uncertainty in v is essentially the same as for t (about 3%). Graphically, the RSS is like the Pythagorean theorem:
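The v = at example reduces to one RSS of the two relative uncertainties quoted in the text, which a quick sketch confirms:

```python
import math

rel_a, rel_t = 0.010, 0.029        # relative uncertainties of a and t
rel_v = math.hypot(rel_a, rel_t)   # RSS for a product v = a * t

print(round(rel_v, 3))             # 0.031, i.e. 3.1%
```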
Figure 2
The total uncertainty is the length of the hypotenuse of a right triangle with legs the length of each uncertainty component.
Timesaving approximation: "A chain is only as strong as its weakest link."
If one of the uncertainty terms is more than 3 times greater than the other terms, the root-sum-of-squares formula can be skipped, and the combined uncertainty is simply the largest uncertainty. This shortcut can save a lot of time without losing any accuracy in the estimate of the overall uncertainty.
The Upper-Lower Bound Method of Uncertainty Propagation
An alternative, and sometimes simpler, procedure to the tedious propagation-of-uncertainty law is the upper-lower bound method of uncertainty propagation. This alternative method does not yield a standard uncertainty estimate (with a 68% confidence interval), but it does give a reasonable estimate of the uncertainty for practically any situation. The basic idea of this method is to use the uncertainty ranges of each variable to calculate the maximum and minimum values of the function. You can also think of this procedure as examining the best- and worst-case scenarios. For example, suppose you measure an angle to be θ = 25° ± 1° and you need to find f = cos θ. Then:
( 35 )
f max = cos(24°) = 0.9135

( 36 )

f min = cos(26°) = 0.8988
( 37 )
∴ f = 0.906 ± 0.007
Note that even though θ was only measured to 2 significant figures, f is known to 3 figures. By using the propagation of uncertainty law: σf = |sin θ| σθ = (0.423)(π/180) = 0.0074
The uncertainty estimate from the upper-lower bound method is generally larger than the standard uncertainty estimate found from the propagation of uncertainty law, but both methods will give a reasonable estimate of the uncertainty in a calculated value.
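Both estimates for f = cos θ with θ = 25° ± 1° can be reproduced in a few lines (a sketch; variable names are my own):

```python
import math

theta = math.radians(25)    # best estimate of the angle
dtheta = math.radians(1)    # uncertainty, converted to radians

# Upper-lower bound method: cosine decreases on this interval,
# so the max is at 24 degrees and the min at 26 degrees.
f_max = math.cos(theta - dtheta)          # cos(24 deg) ~ 0.9135
f_min = math.cos(theta + dtheta)          # cos(26 deg) ~ 0.8988
best = (f_max + f_min) / 2                # ~0.906
half_range = (f_max - f_min) / 2          # ~0.007

# Propagation-of-uncertainty law: sigma_f = |sin(theta)| * sigma_theta
sigma_f = abs(math.sin(theta)) * dtheta   # ~0.0074

print(round(best, 3), round(half_range, 3), round(sigma_f, 4))
```

The two uncertainty estimates (0.007 vs 0.0074) agree closely here because cos θ is nearly linear over this small interval.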
The upper-lower bound method is especially useful when the functional relationship is not clear or is incomplete. One practical application is forecasting the expected range in an expense budget. In this case, some expenses may be fixed, while others may be uncertain, and the range of these uncertain terms could be used to predict the upper and lower bounds on the total expense.
Significant Figures
The number of significant figures in a value can be defined as all the digits between and including the first non-zero digit from the left, through the last digit. For instance, 0.44 has two significant figures, and the number 66.770 has five significant figures. Zeroes are significant except when used to locate the decimal point, as in the number 0.00030, which has two significant figures. Zeroes may or may not be significant for numbers like 1200, where it is not clear whether two, three, or four significant figures are indicated. To avoid this ambiguity, such numbers should be expressed in scientific notation (e.g. 1.20 × 10³ clearly indicates three significant figures).

When using a calculator, the display will often show many digits, only some of which are meaningful (significant in a different sense). For instance, if you want to estimate the area of a circular playing field, you might pace off the radius to be 9 meters and use the formula A = πr². When you compute this area, the calculator might report a value of 254.4690049 m². It would be extremely misleading to report this number as the area of the field, because it would suggest that you know the area to an absurd degree of precision, to within a fraction of a square millimeter! Since the radius is only known to one significant figure, the final answer should also contain only one significant figure: Area = 3 × 10² m².

From this example, we can see that the number of significant figures reported for a value implies a certain degree of precision. In fact, the number of significant figures suggests a rough estimate of the relative uncertainty:
1 significant figure suggests a relative uncertainty of about 10% to 100%
2 significant figures suggest a relative uncertainty of about 1% to 10%
3 significant figures suggest a relative uncertainty of about 0.1% to 1%
Use of Significant Figures for Simple Propagation of Uncertainty
By following a few simple rules, significant figures can be used to find the appropriate precision for a calculated result for the four most basic math functions, all without the use of complicated formulas for propagating uncertainties.
For multiplication and division, the number of significant figures that are reliably known in a product or quotient is the same as the smallest number of significant figures in any of the original numbers.
Example:

6.6 × 7328.7 = 48369.42 = 48 × 10³

(2 significant figures) (5 significant figures) (2 significant figures)
For addition and subtraction, the result should be rounded off to the last decimal place reported for the least precise number.
Examples:
223.64 + 54 = 278        5560.5 + 0.008 = 5560.5
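The second sum above illustrates the decimal-place rule: the least precise term, 5560.5, is known only to one decimal place, so the sum is reported to one decimal place as well. A quick sketch (my own illustration):

```python
total = 5560.5 + 0.008

# Round to one decimal place, the precision of the least precise term:
print(round(total, 1))   # 5560.5 -- the 0.008 contribution is lost
```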
Uncertainty, Significant Figures, and Rounding
For the same reason that it is dishonest to report a result with more significant figures than are reliably known, the uncertainty value should also not be reported with excessive precision. For example, it would be unreasonable for a student to report a result like:
( 38 )
measured density = 8.93 ± 0.475328 g/cm³    Incorrect!
The uncertainty in the measurement cannot possibly be known so precisely! In most experimental work, the confidence in the uncertainty estimate is not much better than about ±50%, because of all the various sources of error, none of which can be known exactly.* Therefore, uncertainty values should be stated to only one significant figure (or perhaps two sig. figs. if the first digit is a 1). Because experimental uncertainties are inherently imprecise, they should be rounded to one, or at most two, significant figures. The correct result to report is: measured density = 8.9 ± 0.5 g/cm³.
*The relative uncertainty in the uncertainty estimate is given by the approximate formula: σσ/σ ≈ 1 / √( 2(N − 1) )
An experimental value should be rounded to be consistent with the magnitude of its uncertainty. This generally means that the last significant figure in any reported value should be in the same decimal place as the uncertainty.
In most instances, this practice of rounding an experimental result to be consistent with the uncertainty estimate gives the same number of significant figures as the rules discussed earlier for simple propagation of uncertainties for adding, subtracting, multiplying, and dividing.
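The rounding convention above is mechanical enough to sketch in code. The helper below is my own (not from the source): it rounds the uncertainty to one significant figure, then rounds the value to the same decimal place:

```python
import math

def report(value, unc):
    """Round unc to one significant figure, then round value to match."""
    # Decimal place of the uncertainty's first significant figure:
    place = -int(math.floor(math.log10(abs(unc))))
    unc_r = round(unc, place)
    val_r = round(value, place)
    digits = max(place, 0)
    return f"{val_r:.{digits}f} \u00b1 {unc_r:.{digits}f}"

print(report(8.93, 0.475328))   # 8.9 ± 0.5, the corrected density result
```

A fuller version might keep two significant figures when the uncertainty's leading digit is 1, as the text suggests.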
Caution: When conducting an experiment, it is important to keep in mind that precision is expensive (both in terms of time and material resources). Do not waste your time trying to obtain a precise result when only a rough estimate is required. The cost increases exponentially with the amount of precision required, so the potential benefit of this precision must be weighed against the extra cost.
Combining and Reporting Uncertainties
In 1993, the International Organization for Standardization (ISO) published the first official worldwide Guide to the Expression of Uncertainty in Measurement. Before this time, uncertainty estimates were evaluated and reported according to different conventions depending on the context of the measurement or the scientific discipline. Here is a key point from this 100-page guide, which can be found in modified form on the NIST website: when reporting a measurement, the measured value should be reported along with an estimate of the total combined standard uncertainty Uc of the value.
Conclusion: "When do measurements agree with each other?"
We now have the resources to answer the fundamental scientific question that was asked at the beginning of this error analysis discussion: "Does my result agree with a theoretical prediction or results from other experiments?" Generally speaking, a measured result agrees with a theoretical prediction if the prediction lies within the range of experimental uncertainty. Similarly, if two measured values have standard uncertainty ranges that overlap, then the measurements are said to be consistent (they agree). If the uncertainty ranges do not overlap, then the measurements are said to be discrepant (they do not agree). However, you should recognize that these overlap criteria can give two opposite answers depending on the evaluation and confidence level of the uncertainty. It would be unethical to arbitrarily inflate the uncertainty range just to make a measurement agree with an expected value. A better procedure is to discuss the size of the difference between the measured and expected values within the context of the uncertainty, and to try to discover the source of the discrepancy if the difference is truly significant. To examine your own data, you are encouraged to use the Measurement Comparison tool available on the lab website. Here are some examples using this graphical analysis tool:
Effigy 3
A = 1.2 ± 0.4    B = 1.8 ± 0.4
Figure 4
An alternative method for determining agreement between values is to calculate the difference between the values divided by their combined standard uncertainty. This ratio gives the number of standard deviations separating the two values. If this ratio is less than 1.0, then it is reasonable to conclude that the values agree. If the ratio is more than 2.0, then it is highly unlikely (less than about 5% probability) that the values are the same. For the example above with u = 0.4: |1.8 − 1.2| / √(0.4² + 0.4²) = 1.1. With u = 0.2: |1.8 − 1.2| / √(0.2² + 0.2²) = 2.1.
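Both agreement ratios from the A/B example can be reproduced directly (a sketch; variable names are my own):

```python
import math

A, B = 1.2, 1.8

# With u = 0.4 on each value, the measurements are consistent:
ratio = abs(B - A) / math.hypot(0.4, 0.4)
print(round(ratio, 1))           # 1.1 -> agree

# With tighter uncertainties u = 0.2, the same values become discrepant:
ratio_tighter = abs(B - A) / math.hypot(0.2, 0.2)
print(round(ratio_tighter, 1))   # 2.1 -> unlikely to agree
```

The combined uncertainty in the denominator is the RSS of the two individual uncertainties, consistent with the propagation rule for a difference.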
References
Baird, D.C. Experimentation: An Introduction to Measurement Theory and Experiment Design, 3rd ed. Prentice Hall: Englewood Cliffs, 1995.

Bevington, Phillip and Robinson, D. Data Reduction and Error Analysis for the Physical Sciences, 2nd ed. McGraw-Hill: New York, 1991.

ISO. Guide to the Expression of Uncertainty in Measurement. International Organization for Standardization (ISO) and the International Committee on Weights and Measures (CIPM): Switzerland, 1993.

Lichten, William. Data and Error Analysis, 2nd ed. Prentice Hall: Upper Saddle River, NJ, 1999.

NIST. Essentials of Expressing Measurement Uncertainty. http://physics.nist.gov/cuu/Uncertainty/

Taylor, John. An Introduction to Error Analysis, 2nd ed. University Science Books: Sausalito, 1997.
Source: https://www.webassign.net/question_assets/unccolphysmechl1/measurements/manual.html