
Measurements and Error Analysis

"Information technology is better to exist roughly right than precisely wrong." — Alan Greenspan

The Uncertainty of Measurements

Some numerical statements are exact: Mary has 3 brothers, and 2 + 2 = 4. However, all measurements have some degree of uncertainty that may come from a variety of sources. The process of evaluating the uncertainty associated with a measurement result is often called uncertainty analysis or error analysis. The complete statement of a measured value should include an estimate of the level of confidence associated with the value. Properly reporting an experimental result along with its uncertainty allows other people to make judgments about the quality of the experiment, and it facilitates meaningful comparisons with other similar values or a theoretical prediction. Without an uncertainty estimate, it is impossible to answer the basic scientific question: "Does my result agree with a theoretical prediction or results from other experiments?" This question is fundamental for deciding if a scientific hypothesis is confirmed or refuted.

When we make a measurement, we generally assume that some exact or true value exists based on how we define what is being measured. While we may never know this true value exactly, we attempt to find this ideal quantity to the best of our ability with the time and resources available. As we make measurements by different methods, or even when making multiple measurements using the same method, we may obtain slightly different results. So how do we report our findings for our best estimate of this elusive true value? The most common way to show the range of values that we believe includes the true value is:

( 1 )

measurement = (best estimate ± uncertainty) units

Let's take an example. Suppose you want to find the mass of a gold ring that you would like to sell to a friend. You do not want to jeopardize your friendship, so you want to get an accurate mass of the ring in order to charge a fair market price. You estimate the mass to be between 10 and 20 grams from how heavy it feels in your hand, but this is not a very precise estimate. After some searching, you find an electronic balance that gives a mass reading of 17.43 grams. While this measurement is much more precise than the original estimate, how do you know that it is accurate, and how confident are you that this measurement represents the true value of the ring's mass? Since the digital display of the balance is limited to two decimal places, you could report the mass as

m = 17.43 ± 0.01 g.

Suppose you use the same electronic balance and obtain several more readings: 17.46 g, 17.42 g, 17.44 g, so that the average mass appears to be in the range of

17.44 ± 0.02 g.

By now you may feel confident that you know the mass of this ring to the nearest hundredth of a gram, but how do you know that the true value definitely lies between 17.43 g and 17.45 g? Since you want to be honest, you decide to use another balance that gives a reading of 17.22 g. This value is clearly below the range of values found on the first balance, and under normal circumstances, you might not care, but you want to be fair to your friend. So what do you do now? The answer lies in knowing something about the accuracy of each instrument. To help answer these questions, we should first define the terms accuracy and precision:

Accuracy is the closeness of agreement between a measured value and a true or accepted value. Measurement error is the amount of inaccuracy.

Precision is a measure of how well a result can be determined (without reference to a theoretical or true value). It is the degree of consistency and agreement among independent measurements of the same quantity; also the reliability or reproducibility of the result.

The uncertainty estimate associated with a measurement should account for both the accuracy and precision of the measurement.

Note: Unfortunately the terms error and uncertainty are often used interchangeably to describe both imprecision and inaccuracy. This usage is so common that it is impossible to avoid entirely. Whenever you encounter these terms, make sure you understand whether they refer to accuracy or precision, or both. Notice that in order to determine the accuracy of a particular measurement, we have to know the ideal, true value. Sometimes we have a "textbook" measured value, which is well known, and we assume that this is our "ideal" value, and use it to estimate the accuracy of our result. Other times we know a theoretical value, which is calculated from basic principles, and this also may be taken as an "ideal" value. But physics is an empirical science, which means that the theory must be validated by experiment, and not the other way around.

We can escape these difficulties and retain a useful definition of accuracy by assuming that, even when we do not know the true value, we can rely on the best available accepted value with which to compare our experimental value. For our example with the gold ring, there is no accepted value with which to compare, and both measured values have the same precision, so we have no reason to believe one more than the other. We could look up the accuracy specifications for each balance as provided by the manufacturer (the Appendix at the end of this lab manual contains accuracy information for most instruments you will use), but the best way to assess the accuracy of a measurement is to compare with a known standard. For this situation, it may be possible to calibrate the balances with a standard mass that is accurate within a narrow tolerance and is traceable to a primary mass standard at the National Institute of Standards and Technology (NIST). Calibrating the balances should eliminate the discrepancy between the readings and provide a more accurate mass measurement.
Precision is often reported quantitatively by using relative or fractional uncertainty:

( 2 )

Relative Uncertainty = uncertainty / measured quantity

Example:

m = 75.5 ± 0.5 g

has a fractional uncertainty of:

0.5 / 75.5 = 0.0066 ≈ 0.7%.

Accuracy is often reported quantitatively by using relative error:

( 3 )

Relative Error = (measured value − expected value) / expected value

If the expected value for m is 80.0 g, then the relative error is:

(75.5 − 80.0) / 80.0 = −0.056 = −5.6%

Note: The minus sign indicates that the measured value is less than the expected value.
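These two ratios are easy to compute directly. The following sketch uses the example values from the text (m = 75.5 ± 0.5 g with an expected value of 80.0 g):

```python
# Relative uncertainty (precision) and relative error (accuracy)
# for the example m = 75.5 +/- 0.5 g with an expected value of 80.0 g.

def relative_uncertainty(uncertainty, measured):
    # Eq. (2): uncertainty divided by the measured quantity.
    return uncertainty / measured

def relative_error(measured, expected):
    # Eq. (3): signed deviation from the expected value.
    return (measured - expected) / expected

m, sigma_m, m_expected = 75.5, 0.5, 80.0
print(f"relative uncertainty = {relative_uncertainty(sigma_m, m):.1%}")  # -> 0.7%
print(f"relative error       = {relative_error(m, m_expected):.1%}")    # -> -5.6%
```

The sign of the relative error is kept, so a negative value immediately shows the measurement fell below the expected value.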

When analyzing experimental data, it is important that you understand the difference between precision and accuracy. Precision indicates the quality of the measurement, without any guarantee that the measurement is "correct." Accuracy, on the other hand, assumes that there is an ideal value, and tells how far your answer is from that ideal, "correct" answer. These concepts are directly related to random and systematic measurement errors.

Types of Errors

Measurement errors may be classified as either random or systematic, depending on how the measurement was obtained (an instrument could cause a random error in one situation and a systematic error in another).

Random errors are statistical fluctuations (in either direction) in the measured data due to the precision limitations of the measurement device. Random errors can be evaluated through statistical analysis and can be reduced by averaging over a large number of observations (see standard error).

Systematic errors are reproducible inaccuracies that are consistently in the same direction. These errors are difficult to detect and cannot be analyzed statistically. If a systematic error is identified when calibrating against a standard, applying a correction or correction factor to compensate for the effect can reduce the bias. Unlike random errors, systematic errors cannot be detected or reduced by increasing the number of observations.

When making careful measurements, our goal is to reduce as many sources of error as possible and to keep track of those errors that we cannot eliminate. It is useful to know the types of errors that may occur, so that we may recognize them when they arise. Common sources of error in physics laboratory experiments:

Incomplete definition (may be systematic or random) — One reason that it is impossible to make exact measurements is that the measurement is not always clearly defined. For example, if two different people measure the length of the same string, they would probably get different results because each person may stretch the string with a different tension. The best way to minimize definition errors is to carefully consider and specify the conditions that could affect the measurement.

Failure to account for a factor (usually systematic) — The most challenging part of designing an experiment is trying to control or account for all possible factors except the one independent variable that is being analyzed. For example, you may inadvertently ignore air resistance when measuring free-fall acceleration, or you may fail to account for the effect of the Earth's magnetic field when measuring the field near a small magnet. The best way to account for these sources of error is to brainstorm with your peers about all the factors that could possibly affect your result. This brainstorming should be done before beginning the experiment in order to plan and account for the confounding factors before taking data. Sometimes a correction can be applied to a result after taking data to account for an error that was not detected earlier.

Environmental factors (systematic or random) — Be aware of errors introduced by your immediate working environment. You may need to account for or protect your experiment from vibrations, drafts, changes in temperature, and electronic noise or other effects from nearby apparatus.

Instrument resolution (random) — All instruments have finite precision that limits the ability to resolve small measurement differences. For instance, a meter stick cannot be used to distinguish distances to a precision much better than about half of its smallest scale division (0.5 mm in this case).
One of the best ways to obtain more precise measurements is to use a null difference method instead of measuring a quantity directly. Null or balance methods involve using instrumentation to measure the difference between two similar quantities, one of which is known very accurately and is adjustable. The adjustable reference quantity is varied until the difference is reduced to zero. The two quantities are then balanced, and the magnitude of the unknown quantity can be found by comparison with a measurement standard. With this method, problems of source instability are eliminated, and the measuring instrument can be very sensitive and does not even need a scale.

Calibration (systematic) — Whenever possible, the calibration of an instrument should be checked before taking data. If a calibration standard is not available, the accuracy of the instrument should be checked by comparing with another instrument that is at least as precise, or by consulting the technical data provided by the manufacturer. Calibration errors are usually linear (measured as a fraction of the full scale reading), so that larger values result in greater absolute errors.

Zero offset (systematic) — When making a measurement with a micrometer caliper, electronic balance, or electrical meter, always check the zero reading first. Re-zero the instrument if possible, or at least measure and record the zero offset so that readings can be corrected later. It is also a good idea to check the zero reading throughout the experiment. Failure to zero a device will result in a constant error that is more significant for smaller measured values than for larger ones.

Physical variations (random) — It is always wise to obtain multiple measurements over the widest range possible. Doing so often reveals variations that might otherwise go undetected. These variations may call for closer examination, or they may be combined to find an average value.
Parallax (systematic or random) — This error can occur whenever there is some distance between the measuring scale and the indicator used to obtain a measurement. If the observer's eye is not squarely aligned with the pointer and scale, the reading may be too high or low (some analog meters have mirrors to help with this alignment).

Instrument drift (systematic) — Most electronic instruments have readings that drift over time. The amount of drift is generally not a concern, but occasionally this source of error can be significant.

Lag time and hysteresis (systematic) — Some measuring devices require time to reach equilibrium, and taking a measurement before the instrument is stable will result in a measurement that is too high or low. A common example is taking temperature readings with a thermometer that has not reached thermal equilibrium with its environment. A similar effect is hysteresis, where the instrument readings lag behind and appear to have a "memory" effect, as data are taken sequentially moving up or down through a range of values. Hysteresis is most commonly associated with materials that become magnetized when a changing magnetic field is applied.

Personal errors come from carelessness, poor technique, or bias on the part of the experimenter. The experimenter may measure incorrectly, or may use poor technique in taking a measurement, or may introduce a bias into measurements by expecting (and inadvertently forcing) the results to agree with the expected outcome.

Gross personal errors, sometimes called mistakes or blunders, should be avoided and corrected if discovered. As a rule, personal errors are excluded from the error analysis discussion because it is generally assumed that the experimental result was obtained by following correct procedures. The term human error should also be avoided in error analysis discussions because it is too general to be useful.

Estimating Experimental Uncertainty for a Single Measurement

Any measurement you make will have some uncertainty associated with it, no matter the precision of your measuring tool. So how do you determine and report this uncertainty?

The uncertainty of a single measurement is limited by the precision and accuracy of the measuring instrument, along with any other factors that might affect the ability of the experimenter to make the measurement.

For example, if you are trying to use a meter stick to measure the diameter of a tennis ball, the uncertainty might be

± 5 mm,

but if you used a Vernier caliper, the uncertainty could be reduced to perhaps

± 2 mm.

The limiting factor with the meter stick is parallax, while the second case is limited by ambiguity in the definition of the tennis ball's diameter (it's fuzzy!). In both of these cases, the uncertainty is greater than the smallest divisions marked on the measuring tool (likely 1 mm and 0.05 mm respectively). Unfortunately, there is no general rule for determining the uncertainty in all measurements. The experimenter is the one who can best evaluate and quantify the uncertainty of a measurement based on all the possible factors that affect the result. Therefore, the person making the measurement has the obligation to make the best judgment possible and report the uncertainty in a way that clearly explains what the uncertainty represents:

( 4 )

Measurement = (measured value ± standard uncertainty) unit

where the ± standard uncertainty indicates approximately a 68% confidence interval (see sections on Standard Deviation and Reporting Uncertainties).
Example: Diameter of tennis ball =

6.7 ± 0.2 cm.

Estimating Uncertainty in Repeated Measurements

Suppose you time the period of oscillation of a pendulum using a digital instrument (that you assume is measuring accurately) and find: T = 0.44 seconds. This single measurement of the period suggests a precision of ±0.005 s, but this instrument precision may not give a complete sense of the uncertainty. If you repeat the measurement several times and examine the variation among the measured values, you can get a better idea of the uncertainty in the period. For example, here are the results of five measurements, in seconds: 0.46, 0.44, 0.45, 0.44, 0.41.

( 5 )

Average (mean) = (x1 + x2 + ⋯ + xN) / N

For this situation, the best estimate of the period is the average, or mean.

Whenever possible, repeat a measurement several times and average the results. This average is generally the best estimate of the "true" value (unless the data set is skewed by one or more outliers, which should be examined to determine if they are bad data points that should be omitted from the average or valid measurements that require further investigation). Generally, the more repetitions you make of a measurement, the better this estimate will be, but be careful to avoid wasting time taking more measurements than is necessary for the precision required.
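As a minimal check of Eq. (5), the mean of the five pendulum timings above can be computed with Python's statistics module:

```python
from statistics import mean

# The five period measurements from the text, in seconds.
periods = [0.46, 0.44, 0.45, 0.44, 0.41]

# The best estimate of the period is the average (Eq. 5).
T_best = mean(periods)
print(f"T = {T_best:.2f} s")  # -> T = 0.44 s
```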

Consider, as another example, the measurement of the width of a piece of paper using a meter stick. Being careful to keep the meter stick parallel to the edge of the paper (to avoid a systematic error which would cause the measured value to be consistently higher than the correct value), the width of the paper is measured at a number of points on the sheet, and the values obtained are entered in a data table. Note that the last digit is only a rough estimate, since it is difficult to read a meter stick to the nearest tenth of a millimeter (0.01 cm).

( 6 )

Average = sum of observed widths / no. of observations = 155.96 cm / 5 = 31.19 cm

This average is the best available estimate of the width of the piece of paper, but it is certainly not exact. We would have to average an infinite number of measurements to approach the true mean value, and even then, we are not guaranteed that the mean value is accurate because there is still some systematic error from the measuring tool, which can never be calibrated perfectly. So how do we express the uncertainty in our average value? One way to express the variation among the measurements is to use the average deviation. This statistic tells us on average (with 50% confidence) how much the individual measurements vary from the mean.

( 7 )

d = ( |x1 − x̄| + |x2 − x̄| + ⋯ + |xN − x̄| ) / N

However, the standard deviation is the most common way to characterize the spread of a data set. The standard deviation is always slightly greater than the average deviation, and is used because of its association with the normal distribution that is frequently encountered in statistical analyses.

Standard Deviation

To calculate the standard deviation for a sample of N measurements:

  • 1

    Sum all the measurements and divide by N to get the average, or mean.
  • 2

    Now, subtract this average from each of the N measurements to obtain N "deviations".
  • 3

    Square each of these N deviations and add them all up.
  • 4

    Divide this result by (N − 1) and take the square root.
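The four steps translate directly into code. This sketch applies them to the five pendulum timings from the previous section:

```python
import math

measurements = [0.46, 0.44, 0.45, 0.44, 0.41]  # pendulum periods, in seconds
N = len(measurements)

# Step 1: sum the measurements and divide by N to get the mean.
avg = sum(measurements) / N
# Step 2: subtract the mean from each measurement to get N deviations.
deviations = [x - avg for x in measurements]
# Step 3: square each deviation and add them all up.
sum_sq = sum(d ** 2 for d in deviations)
# Step 4: divide by (N - 1) and take the square root.
s = math.sqrt(sum_sq / (N - 1))

print(f"mean = {avg:.3f} s, s = {s:.3f} s")  # -> mean = 0.440 s, s = 0.019 s
```

The same result comes from `statistics.stdev`, which uses the identical N − 1 divisor for a sample.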

We can write out the formula for the standard deviation as follows. Let the N measurements be called x1, x2, ..., xN. Let the average of the N values be called x̄. Then each deviation is given by

δxi = xi − x̄,  for i = 1, 2, ..., N.

The standard deviation is:

( 8 )

s = √[ (δx1² + δx2² + ⋯ + δxN²) / (N − 1) ]

In our previous example, the average width x̄ is 31.19 cm. The deviations from the mean are 0.14, 0.04, 0.07, 0.17, and 0.01 cm. The average deviation is:

d = 0.086 cm.

The standard deviation is:

s = √[ ((0.14)² + (0.04)² + (0.07)² + (0.17)² + (0.01)²) / (5 − 1) ] = 0.12 cm.

The significance of the standard deviation is this: if you now make one more measurement using the same meter stick, you can reasonably expect (with about 68% confidence) that the new measurement will be within 0.12 cm of the estimated average of 31.19 cm. In fact, it is reasonable to use the standard deviation as the uncertainty associated with this single new measurement. However, the uncertainty of the average value is the standard deviation of the mean, which is always less than the standard deviation (see next section). Consider an example where 100 measurements of a quantity were made. The average or mean value was 10.5 and the standard deviation was s = 1.83. The figure below is a histogram of the 100 measurements, which shows how frequently a certain range of values was measured. For example, in 20 of the measurements, the value was in the range 9.5 to 10.5, and most of the readings were close to the mean value of 10.5. The standard deviation s for this set of measurements is roughly how far from the average value most of the readings fell. For a large enough sample, approximately 68% of the readings will be within one standard deviation of the mean value, 95% of the readings will be in the interval

x̄ ± 2s,

and nearly all (99.7%) of readings will lie within 3 standard deviations from the mean. The smooth curve superimposed on the histogram is the gaussian or normal distribution predicted by theory for measurements involving random errors. As more and more measurements are made, the histogram will more closely follow the bell-shaped gaussian curve, but the standard deviation of the distribution will remain approximately the same.
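The 68%, 95%, and 99.7% coverage figures quoted above come from the normal distribution itself, and can be reproduced with the Gaussian error function (a sketch, independent of the histogram data):

```python
import math

def normal_coverage(k):
    # Fraction of a normal distribution lying within k standard
    # deviations of the mean: erf(k / sqrt(2)).
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(f"within {k} standard deviation(s): {normal_coverage(k):.1%}")
# -> 68.3%, 95.4%, 99.7%
```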

Figure 1


Standard Deviation of the Mean (Standard Error)

When we report the average value of N measurements, the uncertainty we should associate with this average value is the standard deviation of the mean, often called the standard error (SE).

( 9 )

σx̄ = s / √N

The standard error is smaller than the standard deviation by a factor of 1/√N.

This reflects the fact that we expect the uncertainty of the average value to get smaller when we use a larger number of measurements, N. In the previous example, we find the standard error is 0.05 cm, where we have divided the standard deviation of 0.12 cm by √5.

The final result should then be reported as:

Average paper width = 31.19 ± 0.05 cm.
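Putting the pieces together for a repeated measurement, this sketch computes the mean, sample standard deviation, and standard error for the pendulum timings used earlier (`statistics.stdev` uses the N − 1 divisor):

```python
from statistics import mean, stdev
import math

times = [0.46, 0.44, 0.45, 0.44, 0.41]  # pendulum periods, in seconds
N = len(times)

s = stdev(times)          # sample standard deviation
se = s / math.sqrt(N)     # standard error of the mean (Eq. 9)

print(f"T = {mean(times):.2f} +/- {se:.2f} s")  # -> T = 0.44 +/- 0.01 s
```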

Anomalous Data

The first step you should take in analyzing data (and even while taking data) is to examine the data set as a whole to look for patterns and outliers. Anomalous data points that lie outside the general trend of the data may suggest an interesting phenomenon that could lead to a new discovery, or they may simply be the result of a mistake or random fluctuations. In any case, an outlier requires closer examination to determine the cause of the unexpected result. Extreme data should never be "thrown out" without clear justification and explanation, because you may be discarding the most significant part of the investigation! However, if you can clearly justify omitting an inconsistent data point, then you should exclude the outlier from your analysis so that the average value is not skewed from the "true" mean.

Fractional Uncertainty Revisited

When a reported value is determined by taking the average of a set of independent readings, the fractional uncertainty is given by the ratio of the uncertainty divided by the average value. For this example,

( 10 )

Fractional uncertainty = 0.05 / 31.19 = 0.0016 ≈ 0.2%

Note that the fractional uncertainty is dimensionless but is often reported as a percentage or in parts per million (ppm) to emphasize the fractional nature of the value. A scientist might also make the statement that this measurement "is good to about 1 part in 500" or "precise to about 0.2%". The fractional uncertainty is also important because it is used in propagating uncertainty in calculations using the result of a measurement, as discussed in the next section.
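For the paper-width result above, Eq. (10) is a one-line computation:

```python
# Fractional uncertainty of the reported width 31.19 +/- 0.05 cm.
width, sigma_width = 31.19, 0.05
frac = sigma_width / width
print(f"fractional uncertainty = {frac:.4f} = {frac:.1%}")  # -> 0.0016 = 0.2%
```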

Propagation of Uncertainty

Suppose we want to determine a quantity f, which depends on x and perhaps several other variables y, z, etc. We want to know the error in f if we measure x, y, ... with errors σx, σy, ... Examples:

( 11 )

f = xy (Area of a rectangle)

( 12 )

f = p cos θ (x-component of momentum)

( 13 )

f = x / t (velocity)

For a single-variable function f(x), the deviation in f can be related to the deviation in x using calculus:

( 14 )

δf = (df/dx) δx

Thus, taking the square and the average:

( 15 )

(δf)² = (df/dx)² (δx)²

and using the definition of σ, we get:

( 16 )

σf = |df/dx| σx

Examples:

(a) f = √x

( 17 )

df/dx = 1 / (2√x)

( 18 )

σf = σx / (2√x), or σf/f = (1/2)(σx/x)

(b) f = x²

( 19 )

df/dx = 2x

( 20 )

σf = 2x σx, or σf/f = 2(σx/x)

(c) f = cos θ

( 21 )

df/dθ = −sin θ

( 22 )

σf = |sin θ| σθ, or σf/f = |tan θ| σθ

Note: in this situation, σθ must be in radians.

In the case where f depends on two or more variables, the derivation above can be repeated with minor modification. For two variables, f(x, y), we have:

( 23 )

δf = (∂f/∂x) δx + (∂f/∂y) δy

The partial derivative ∂f/∂x means differentiating f with respect to x while holding the other variables fixed. Taking the square and the average, we get the law of propagation of uncertainty:

( 24 )

(δf)² = (∂f/∂x)² (δx)² + (∂f/∂y)² (δy)² + 2 (∂f/∂x)(∂f/∂y) δx δy

If the measurements of x and y are uncorrelated, then

( 25 )

δx δy = 0,

and we get:

( 26 )

σf = √[ (∂f/∂x)² σx² + (∂f/∂y)² σy² ]

Examples:

(a) f = x + y

( 27 )

σf = √( σx² + σy² )

When adding (or subtracting) independent measurements, the absolute uncertainty of the sum (or difference) is the root sum of squares (RSS) of the individual absolute uncertainties. When adding correlated measurements, the uncertainty in the result is simply the sum of the absolute uncertainties, which is always a larger uncertainty estimate than adding in quadrature (RSS). Adding or subtracting a constant does not change the absolute uncertainty of the calculated value as long as the constant is an exact value.
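A short sketch contrasting the two cases for a sum f = x + y; the uncertainty values 0.3 and 0.4 are illustrative, not from the text:

```python
import math

def sigma_sum_uncorrelated(sigma_x, sigma_y):
    # Independent errors add in quadrature (RSS), Eq. (27).
    return math.hypot(sigma_x, sigma_y)

def sigma_sum_correlated(sigma_x, sigma_y):
    # Fully correlated errors simply add, giving a larger estimate.
    return sigma_x + sigma_y

sx, sy = 0.3, 0.4
print(round(sigma_sum_uncorrelated(sx, sy), 3))  # -> 0.5
print(round(sigma_sum_correlated(sx, sy), 3))    # -> 0.7
```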

(b) f = xy

( 29 )

σf = √( y² σx² + x² σy² )

Dividing the previous equation by f = xy, we get:

( 30 )

σf/f = √[ (σx/x)² + (σy/y)² ]

(c) f = x / y

( 31 )

σf = √[ (1/y)² σx² + (x/y²)² σy² ]

Dividing the previous equation by f = x/y, we get:

( 32 )

σf/f = √[ (σx/x)² + (σy/y)² ]

When multiplying (or dividing) independent measurements, the relative uncertainty of the product (quotient) is the RSS of the individual relative uncertainties. When multiplying correlated measurements, the uncertainty in the result is simply the sum of the relative uncertainties, which is always a larger uncertainty estimate than adding in quadrature (RSS). Multiplying or dividing by a constant does not change the relative uncertainty of the calculated value.

Note that the relative uncertainty in f, as shown in (b) and (c) above, has the same form for multiplication and division: the relative uncertainty in a product or quotient depends on the relative uncertainty of each individual term. Example: Find the uncertainty in v, where

v = at

with a = 9.8 ± 0.1 m/s², t = 3.4 ± 0.1 s

( 34 )

σv/v = √[ (σa/a)² + (σt/t)² ] = √[ (0.010)² + (0.029)² ] = 0.031 or 3.1%

Notice that the relative uncertainty in t (2.9%) is significantly greater than the relative uncertainty for a (1.0%), and therefore the relative uncertainty in v is essentially the same as for t (about 3%). Graphically, the RSS is like the Pythagorean theorem:
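The v = at example can be verified numerically. This sketch assumes a = 9.8 ± 0.1 m/s² and t = 3.4 ± 0.1 s, the timing value consistent with the quoted 2.9% relative uncertainty:

```python
import math

a, sigma_a = 9.8, 0.1   # acceleration, m/s^2
t, sigma_t = 3.4, 0.1   # time, s

# Relative uncertainties combine in quadrature for a product (Eq. 34).
rel_v = math.hypot(sigma_a / a, sigma_t / t)
v = a * t
print(f"v = {v:.0f} +/- {rel_v * v:.0f} m/s ({rel_v:.1%})")  # -> v = 33 +/- 1 m/s (3.1%)
```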

Figure 2


The total uncertainty is the length of the hypotenuse of a right triangle with legs the length of each uncertainty component.

Timesaving approximation: "A chain is only as strong as its weakest link."
If one of the uncertainty terms is more than three times greater than the other terms, the root-squares formula can be skipped, and the combined uncertainty is simply the largest uncertainty. This shortcut can save a lot of time without losing any accuracy in the estimate of the overall uncertainty.

The Upper-Lower Bound Method of Uncertainty Propagation

An alternative, and sometimes simpler, procedure to the tedious propagation of uncertainty law is the upper-lower bound method of uncertainty propagation. This alternative method does not yield a standard uncertainty estimate (with a 68% confidence interval), but it does give a reasonable estimate of the uncertainty for practically any situation. The basic idea of this method is to use the uncertainty ranges of each variable to calculate the maximum and minimum values of the function. You can also think of this procedure as examining the best and worst case scenarios. For example, suppose you measure an angle to be: θ = 25° ± 1° and you need to find f = cos θ, then:

( 35 )

f max = cos(24°) = 0.9135

( 36 )

f min = cos(26°) = 0.8988

( 37 )

f = 0.906 ± 0.007

where 0.007 is half the difference between f max and f min.

Note that even though θ was only measured to 2 significant figures, f is known to 3 figures. By using the propagation of uncertainty law:

σf = |sin θ| σθ = (0.423)(π/180) = 0.0074

(same result as above).

The uncertainty estimate from the upper-lower bound method is generally larger than the standard uncertainty estimate found from the propagation of uncertainty law, but both methods will give a reasonable estimate of the uncertainty in a calculated value.
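Both estimates for f = cos θ with θ = 25° ± 1° can be reproduced in a few lines:

```python
import math

theta_deg, sigma_theta_deg = 25.0, 1.0

# Upper-lower bound method: evaluate f at the extremes of the range.
f_hi = math.cos(math.radians(theta_deg - sigma_theta_deg))  # cos(24 deg)
f_lo = math.cos(math.radians(theta_deg + sigma_theta_deg))  # cos(26 deg)
f_best = (f_hi + f_lo) / 2
f_unc = (f_hi - f_lo) / 2
print(f"bound method: f = {f_best:.3f} +/- {f_unc:.3f}")  # -> f = 0.906 +/- 0.007

# Propagation of uncertainty law: sigma_f = |sin(theta)| * sigma_theta (radians).
sigma_f = abs(math.sin(math.radians(theta_deg))) * math.radians(sigma_theta_deg)
print(f"propagation law: sigma_f = {sigma_f:.4f}")  # -> sigma_f = 0.0074
```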

The upper-lower bound method is especially useful when the functional relationship is not clear or is incomplete. One practical application is forecasting the expected range in an expense budget. In this case, some expenses may be fixed, while others may be uncertain, and the range of these uncertain terms could be used to predict the upper and lower bounds on the total expense.

Significant Figures

The number of significant figures in a value can be defined as all the digits between and including the first non-zero digit from the left, through the last digit. For example, 0.44 has two significant figures, and the number 66.770 has 5 significant figures. Zeroes are significant except when used to locate the decimal point, as in the number 0.00030, which has 2 significant figures. Zeroes may or may not be significant for numbers like 1200, where it is not clear whether two, three, or four significant figures are indicated. To avoid this ambiguity, such numbers should be expressed in scientific notation (e.g. 1.20 × 10³ clearly indicates three significant figures).

When using a calculator, the display will often show many digits, only some of which are meaningful (significant in a different sense). For example, if you want to estimate the area of a circular playing field, you might pace off the radius to be 9 meters and use the formula: A = πr². When you compute this area, the calculator might report a value of 254.4690049 m². It would be extremely misleading to report this number as the area of the field, because it would suggest that you know the area to an absurd degree of precision, to within a fraction of a square millimeter! Since the radius is only known to one significant figure, the final answer should also contain only one significant figure: Area = 3 × 10² m². From this example, we can see that the number of significant figures reported for a value implies a certain degree of precision. In fact, the number of significant figures suggests a rough estimate of the relative uncertainty:

The number of significant figures implies an approximate relative uncertainty:
1 significant figure suggests a relative uncertainty of about 10% to 100%
2 significant figures suggest a relative uncertainty of about 1% to 10%
3 significant figures suggest a relative uncertainty of about 0.1% to 1%

To understand this connection more clearly, consider a value with 2 significant figures, like 99, which suggests an uncertainty of ±1, or a relative uncertainty of ±1/99 = ±1%. (Actually some people might argue that the implied uncertainty in 99 is ±0.5, since the range of values that would round to 99 is 98.5 to 99.4. But since the uncertainty here is only a rough estimate, there is not much point arguing about the factor of two.) The smallest 2-significant-figure number, 10, also suggests an uncertainty of ±1, which in this case is a relative uncertainty of ±1/10 = ±10%. The ranges for other numbers of significant figures can be reasoned in a similar way.

Use of Significant Figures for Simple Propagation of Uncertainty

By following a few simple rules, significant figures can be used to find the appropriate precision for a calculated result for the four most basic math operations, all without the use of complicated formulas for propagating uncertainties.

For multiplication and division, the number of significant figures that are reliably known in a product or quotient is the same as the smallest number of significant figures in any of the original numbers.

Example:

      6.6      (2 significant figures)
 × 7328.7      (5 significant figures)
 48369.42  =  48 × 10³   (2 significant figures)
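The same calculation can be checked in Python; `round_sig` is a hypothetical helper illustrating the rule, not part of the original text:

```python
import math

def round_sig(x, n):
    """Round x to n significant figures."""
    return round(x, n - 1 - int(math.floor(math.log10(abs(x)))))

# The product keeps the smallest significant-figure count of its factors:
product = 6.6 * 7328.7        # calculator shows 48369.42
print(round_sig(product, 2))  # 48000.0, i.e. 48 x 10^3
```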

For addition and subtraction, the result should be rounded off to the last decimal place reported for the least precise number.

Examples:

   223.64          5560.5
 +  54           +    0.008
   278             5560.5

If a calculated number is to be used in further calculations, it is good practice to keep one extra digit to reduce rounding errors that may accumulate. Then the final answer should be rounded according to the above guidelines.
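The addition examples above can be reproduced with Python's built-in `round`, rounding each sum to the last decimal place of the least precise addend (the values are the ones from the worked examples):

```python
# 54 is precise only to the ones place, so round the sum to 0 decimals:
print(round(223.64 + 54, 0))     # 278.0

# 5560.5 is precise only to the tenths place, so round to 1 decimal:
print(round(5560.5 + 0.008, 1))  # 5560.5
```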

Uncertainty, Significant Figures, and Rounding

For the same reason that it is dishonest to report a result with more significant figures than are reliably known, the uncertainty value should also not be reported with excessive precision. For example, it would be unreasonable for a student to report a result like:

( 38 )

measured density = 8.93 ± 0.475328 g/cm³   Wrong!

The uncertainty in the measurement cannot possibly be known so precisely! In most experimental work, the confidence in the uncertainty estimate is not much better than about ±50% because of all the various sources of error, none of which can be known exactly. Therefore, uncertainty values should be stated to only one significant figure (or perhaps two significant figures if the first digit is a 1).

Because experimental uncertainties are inherently imprecise, they should be rounded to one, or at most two, significant figures.

To help give a sense of the amount of confidence that can be placed in the standard deviation, the following table indicates the relative uncertainty associated with the standard deviation for various sample sizes. Note that in order for an uncertainty value to be reported to three significant figures, more than 10,000 readings would be required to justify this degree of precision! *The relative uncertainty is given by the approximate formula:

relative uncertainty  =  1 / √( 2(N − 1) )
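As a quick numerical check of this approximation (a sketch, not part of the manual), the relative uncertainty of the standard deviation can be tabulated for a few sample sizes:

```python
import math

def sd_relative_uncertainty(n):
    """Approximate relative uncertainty of a sample standard
    deviation computed from n readings: 1 / sqrt(2(n - 1))."""
    return 1 / math.sqrt(2 * (n - 1))

# N = 10000 gives roughly 0.7%, consistent with the claim that
# more than 10,000 readings are needed for 3-figure precision.
for n in (5, 10, 100, 10000):
    print(f"N = {n:5d}: {sd_relative_uncertainty(n):.1%}")
```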

When an explicit uncertainty estimate is made, the uncertainty term indicates how many significant figures should be reported in the measured value (not the other way around!). For example, the uncertainty in the density measurement above is about 0.5 g/cm³, so this tells us that the digit in the tenths place is uncertain, and should be the last one reported. The other digits in the hundredths place and beyond are insignificant, and should not be reported:

measured density = 8.9 ± 0.5 g/cm³.

Correct!

An experimental value should be rounded to be consistent with the magnitude of its uncertainty. This generally means that the last significant figure in any reported value should be in the same decimal place as the uncertainty.

In most instances, this practice of rounding an experimental result to be consistent with the uncertainty estimate gives the same number of significant figures as the rules discussed earlier for simple propagation of uncertainties for adding, subtracting, multiplying, and dividing.
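This guideline can be mechanized in a few lines of Python. The `report` helper is a hypothetical illustration of the rule, not part of the original text:

```python
import math

def report(value, uncertainty):
    """Round the uncertainty to one significant figure and the
    value to the same decimal place, per the guideline above."""
    place = int(math.floor(math.log10(abs(uncertainty))))
    return f"{round(value, -place)} +/- {round(uncertainty, -place)}"

print(report(8.93, 0.475328))  # 8.9 +/- 0.5
```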

Caution: When conducting an experiment, it is important to keep in mind that precision is expensive (both in terms of time and material resources). Do not waste your time trying to obtain a precise result when only a rough estimate is required. The cost increases exponentially with the amount of precision required, so the potential benefit of this precision must be weighed against the extra cost.

Combining and Reporting Uncertainties

In 1993, the International Standards Organization (ISO) published the first official worldwide Guide to the Expression of Uncertainty in Measurement. Before this time, uncertainty estimates were evaluated and reported according to different conventions depending on the context of the measurement or the scientific discipline. Here are a few key points from this 100-page guide, which can be found in modified form on the NIST website. When reporting a measurement, the measured value should be reported along with an estimate of the total combined standard uncertainty u_c of the value. The total uncertainty is found by combining the uncertainty components based on the two types of uncertainty analysis:
  • Type A evaluation of standard uncertainty - method of evaluation of uncertainty by the statistical analysis of a series of observations. This method primarily includes random errors.
  • Type B evaluation of standard uncertainty - method of evaluation of uncertainty by means other than the statistical analysis of a series of observations. This method includes systematic errors and any other uncertainty factors that the experimenter believes are important.
The individual uncertainty components u_i should be combined using the law of propagation of uncertainties, commonly called the "root-sum-of-squares" or "RSS" method. When this is done, the combined standard uncertainty should be equivalent to the standard deviation of the result, making this uncertainty value correspond with a 68% confidence interval. If a wider confidence interval is desired, the uncertainty can be multiplied by a coverage factor (usually k = 2 or 3) to provide an uncertainty range that is believed to include the true value with a confidence of 95% (for k = 2) or 99.7% (for k = 3). If a coverage factor is used, there should be a clear explanation of its meaning so there is no confusion for readers interpreting the significance of the uncertainty value. You should be aware that the ± uncertainty notation may be used to indicate different confidence intervals, depending on the scientific discipline or context. For example, a public opinion poll may report that the results have a margin of error of ±3%, which means that readers can be 95% confident (not 68% confident) that the reported results are accurate within 3 percentage points. Similarly, a manufacturer's tolerance rating generally assumes a 95% or 99% level of confidence.
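The RSS combination and coverage-factor scaling can be sketched directly (the component values 0.3 and 0.4 are hypothetical, chosen only for illustration):

```python
import math

def combined_standard_uncertainty(components):
    """Root-sum-of-squares (RSS) combination of independent
    standard-uncertainty components u_i."""
    return math.sqrt(sum(u ** 2 for u in components))

u_c = combined_standard_uncertainty([0.3, 0.4])
print(round(u_c, 3))      # combined standard uncertainty (~68% confidence)
print(round(2 * u_c, 3))  # expanded uncertainty, coverage factor k = 2 (~95%)
```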

Conclusion: "When do measurements agree with each other?"

We now have the resources to answer the fundamental scientific question that was asked at the beginning of this error analysis discussion: "Does my result agree with a theoretical prediction or results from other experiments?" Generally speaking, a measured result agrees with a theoretical prediction if the prediction lies within the range of experimental uncertainty. Similarly, if two measured values have standard uncertainty ranges that overlap, then the measurements are said to be consistent (they agree). If the uncertainty ranges do not overlap, then the measurements are said to be discrepant (they do not agree). However, you should recognize that these overlap criteria can give two opposite answers depending on the evaluation and confidence level of the uncertainty. It would be unethical to arbitrarily inflate the uncertainty range just to make a measurement agree with an expected value. A better procedure would be to discuss the size of the difference between the measured and expected values within the context of the uncertainty, and try to discover the source of the discrepancy if the difference is truly significant. To examine your own data, you are encouraged to use the Measurement Comparison tool available on the lab website. Here are some examples using this graphical analysis tool:

Figure 3


A = 1.2 ± 0.4

B = 1.8 ± 0.4

These measurements agree within their uncertainties, despite the fact that the percent difference between their central values is 40%. However, with half the uncertainty (± 0.2), these same measurements do not agree since their uncertainties do not overlap. Further investigation would be needed to determine the cause of the discrepancy. Perhaps the uncertainties were underestimated, there may have been a systematic error that was not considered, or there may be a true difference between these values.

Figure 4


An alternative method for determining agreement between values is to calculate the difference between the values divided by their combined standard uncertainty. This ratio gives the number of standard deviations separating the two values. If this ratio is less than 1.0, then it is reasonable to conclude that the values agree. If the ratio is more than 2.0, then it is highly unlikely (less than about 5% probability) that the values are the same.

Example from above with u = 0.4:

|1.2 − 1.8| / √(0.4² + 0.4²) = 1.1

Therefore, A and B likely agree.

Example from above with u = 0.2:

|1.2 − 1.8| / √(0.2² + 0.2²) = 2.1

Therefore, it is unlikely that A and B agree.
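The two ratio calculations can be reproduced in a short sketch (the helper name `agreement_ratio` is hypothetical; the values A, B, and u come from the figures above):

```python
import math

def agreement_ratio(a, u_a, b, u_b):
    """Number of combined standard uncertainties separating two values."""
    return abs(a - b) / math.sqrt(u_a ** 2 + u_b ** 2)

print(round(agreement_ratio(1.2, 0.4, 1.8, 0.4), 1))  # 1.1 -> likely agree
print(round(agreement_ratio(1.2, 0.2, 1.8, 0.2), 1))  # 2.1 -> unlikely to agree
```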

References

Baird, D.C. Experimentation: An Introduction to Measurement Theory and Experiment Design, 3rd ed. Prentice Hall: Englewood Cliffs, 1995.
Bevington, Phillip and Robinson, D. Data Reduction and Error Analysis for the Physical Sciences, 2nd ed. McGraw-Hill: New York, 1991.
ISO. Guide to the Expression of Uncertainty in Measurement. International Organization for Standardization (ISO) and the International Committee for Weights and Measures (CIPM): Switzerland, 1993.
Lichten, William. Data and Error Analysis, 2nd ed. Prentice Hall: Upper Saddle River, NJ, 1999.
NIST. Essentials of Expressing Measurement Uncertainty. http://physics.nist.gov/cuu/Uncertainty/
Taylor, John. An Introduction to Error Analysis, 2nd ed. University Science Books: Sausalito, 1997.


Source: https://www.webassign.net/question_assets/unccolphysmechl1/measurements/manual.html
