{"id":46,"date":"2015-07-30T13:32:12","date_gmt":"2015-07-30T17:32:12","guid":{"rendered":"https:\/\/courses.bowdoin.edu\/physics-1140-lab-manual\/uncertaintyanalysis\/physics-1140-notes-on-measurement-uncertainties-and-error-analysis\/"},"modified":"2020-01-24T12:53:24","modified_gmt":"2020-01-24T17:53:24","slug":"physics-1140-notes-on-measurement-uncertainties-and-error-analysis","status":"publish","type":"page","link":"https:\/\/courses.bowdoin.edu\/physics-1140-lab-manual\/physics-1140-notes-on-measurement-uncertainties-and-error-analysis\/","title":{"rendered":"Notes on Measurement Uncertainties and &#8220;Error Analysis&#8221;"},"content":{"rendered":"<div id=\"nonfooter\">\n<blockquote>\n<p style=\"padding-left: 30px\">&#8220;What are good measurements and good error analysis? High likelihood that the &#8216;true&#8217; value is within your given uncertainty, while keeping the uncertainty as small as possible.&#8221; &#8211; <a href=\"http:\/\/www.bowdoin.edu\/faculty\/m\/mbattle\/\">Professor Mark Battle<\/a><\/p>\n<\/blockquote>\n<p>The following discussion of uncertainty is largely taken from the book, <a href=\"https:\/\/cbbcat.net\/record=b2510988~S19\"><em>An Introduction to Error Analysis<\/em><\/a>, by John R. Taylor.<\/p>\n<p>Experience has shown that no measurement, however carefully made, can be completely free of uncertainties. Scientists refer to this uncertainty as &#8220;experimental error&#8221;, but the word &#8220;error&#8221; here does not have the usual meaning of &#8220;mistake&#8221; or &#8220;blunder&#8221;. 
Because the whole structure and application of science depends on measurements, the ability to evaluate these uncertainties and keep them to a minimum is crucially important!<\/p>\n<p>You should be aware of some types of uncertainties to which almost any measurement is subject:<\/p>\n<div id=\"TargetPractice\">\n<div id=\"attachment_1112\" style=\"width: 212px\" class=\"wp-caption alignnone\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-1112\" class=\"wp-image-1112 size-full\" src=\"https:\/\/courses.bowdoin.edu\/physics-1140-lab-manual\/wp-content\/uploads\/sites\/105\/2015\/07\/TargetPractice2.png\" alt=\"TargetPractice\" width=\"202\" height=\"218\" srcset=\"https:\/\/courses.bowdoin.edu\/physics-1140-lab-manual\/wp-content\/uploads\/sites\/105\/2015\/07\/TargetPractice2.png 202w, https:\/\/courses.bowdoin.edu\/physics-1140-lab-manual\/wp-content\/uploads\/sites\/105\/2015\/07\/TargetPractice2-139x150.png 139w\" sizes=\"auto, (max-width: 202px) 100vw, 202px\" \/><p id=\"caption-attachment-1112\" class=\"wp-caption-text\">Random and systematic errors in target practice. (From &#8220;<a href=\"https:\/\/cbbcat.net\/record=b2510988~S19\">An Introduction to Error Analysis<\/a>&#8221; by John R. Taylor.)<\/p><\/div>\n<\/div>\n<ul>\n<li>Instrumental Resolution &#8211; reflects the fineness of the divisions on the measuring device. For example, one millimeter is the smallest division on a typical meter stick. You can probably estimate between divisions to \\(1\/4\\) or \\(1\/2\\) mm. Another example would be a voltmeter that the manufacturer states is only accurate to \\(\\pm 0.001\\) V.<\/li>\n<li>Random Error or Inherent Uncertainty &#8211; experimental uncertainty that can be revealed by repeating the measurement. For a set of repeated measurements, the &#8220;most representative&#8221; value is given by the <a href=\"#avg\"><em>average<\/em><\/a> or <em>mean<\/em>, \\(\\bar{x}\\). 
The <em>precision<\/em> of the mean reflects the random variation from measurement to measurement (i.e. how tightly the measured values cluster around the average), quantified by the <a href=\"#stderr\"><em>standard error of the mean<\/em><\/a>, \(S_{m}\).<\/li>\n<li>Systematic Error &#8211; an estimate of a measurement&#8217;s <em>accuracy<\/em>, due to measurement or equipment problems, which cannot be revealed by repeating the measurement. By &#8220;systematic&#8221; we mean something that affects each measurement in a non-random way. So, if the bottom \(1\) cm of your meter stick is missing (and you don&#8217;t notice!), all your lengths will be \(1\) cm too long. More generally, systematic error represents the difference between your measured value and that which would be obtained by the mythical &#8220;perfect experimenter&#8221;, using &#8220;perfect&#8221; measurement instruments. The tricky part about systematic errors is that you usually don&#8217;t know they&#8217;re there, so they&#8217;re hard to estimate!<\/li>\n<\/ul>\n<h2 id=\"ErrorReporting\">Reporting Measurements: Best Estimates \(\pm\) Uncertainty<\/h2>\n<p class=\"indentedh2\">In general, the result of any measurement of a quantity \(x\) is stated as<br \/>\n$$(\rm{measured}\:x) = x_{\rm{best}} \pm \delta x.\label{1}\tag{1} $$<br \/>\nThis statement means that your best estimate of the quantity concerned is \(x_{\rm{best}}\), and you&#8217;re reasonably sure the value lies somewhere between \(x_{\rm{best}}-\delta x\) and \(x_{\rm{best}}+\delta x\). The number \(\delta x\) is called the <em>uncertainty<\/em> or <em>error<\/em> in the measurement of \(x\). For convenience, the uncertainty \(\delta x\) is always defined to be positive, so that \(x_{\rm{best}}+\delta x\) is always the <em>highest<\/em> probable value of the measured quantity and \(x_{\rm{best}}-\delta x\) is the <em>lowest<\/em>.
In our labs, and in experimental science in general, the <em>uncertainty of a measurement,<\/em> \(\delta x\), is a function of the number of times that the measurement is made:<\/p>\n<ul class=\"indentedh2\">\n<li id=\"Twinkie\">For <em>single measurements<\/em>, we measure some quantity, \(x\), to the best of our ability, and then make a <em>judgment<\/em> of its uncertainty, based primarily on the resolution of the instruments used.\n<ul>\n<li>A simple example would be measuring the length of, say, a <a href=\"https:\/\/en.wikipedia.org\/wiki\/Twinkie\">Twinkie<\/a> with a ruler marked in mm. If your best estimate is \(68.5\) mm and you&#8217;re pretty sure that the value is between \(68.0\) and \(69.0\) mm, then you would report the length as<br \/>\n$$ \rm{Twinkie\:length}=68.5\pm 0.5\:\rm{mm}.$$<\/li>\n<\/ul>\n<\/li>\n<li>Where <em>repeated measurements<\/em> are taken, one can use <a href=\"#stat\">statistical analysis<\/a> to state \(x_{\rm{best}}\) and its uncertainty. \(x_{\rm{best}}\) is generally given by the mean or average, \(\bar{x}\), of the data set, and its uncertainty by the <a href=\"#stderr\">&#8220;standard error of the mean&#8221;<\/a>, \(S_{m}\). The beauty of repeated measurements is that you can ignore the uncertainty of each individual data point. Note: If you&#8217;re more interested in the <em>range<\/em> of the result (<em>e.g.<\/em> for plotting error bars) than its precision, then the <a href=\"#stddev\">standard deviation<\/a>, \(s\), may be a better estimate of the uncertainty.<\/li>\n<\/ul>\n<h2 id=\"stat\">Statistical Analysis of Random Uncertainties<\/h2>\n<p class=\"indentedh2\">One of the best ways to assess the reliability of a measurement is to repeat it several times and examine the different values obtained.
Statistics is a very useful tool for analyzing measurements and estimating error in a &#8220;large&#8221; set (<em>at least \\(5\\), and preferably more<\/em>) of repeated measurements.<\/p>\n<p id=\"avg\" class=\"indentedh2\">Suppose we have \\(N\\) measurements of some quantity, \\(x_{1}\\),\\(x_{2}\\),\\(\\ldots\\),\\(x_{N}\\). If there is no systematic error in a set of measurements, the <em>mean<\/em> (or <em>average<\/em>) is the best approximation to the &#8220;true&#8221; value that we can obtain from a set of measured values:<br \/>\n$$\\bar{x} = \\sum^{N}_{i=1}\\frac{x_{i}}{N}.\\tag{2}$$<\/p>\n<p id=\"stddev\" class=\"indentedh2\">The <em>standard deviation<\/em> is technically the root-mean-square average deviation of the data from the average value. It is a measure of the typical variability from measurement to measurement, and says that if your measurements are distributed on a &#8220;normal&#8221; or &#8220;bell-shaped&#8221; curve, then \\(68\\%\\) of your data points will fall within one \\(s\\) on either side of the mean value. The (sample) standard deviation is:<br \/>\n$$s = \\sqrt{\\sum^{N}_{i=1}\\frac{(x_{i}-\\bar{x})^{2}}{N-1}}. 
\tag{3}$$<\/p>\n<div id=\"NormalDistribution\">\n<div id=\"attachment_1111\" style=\"width: 286px\" class=\"wp-caption alignnone\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-1111\" class=\"wp-image-1111 size-full\" src=\"https:\/\/courses.bowdoin.edu\/physics-1140-lab-manual\/wp-content\/uploads\/sites\/105\/2015\/07\/NormalDistribution2.png\" alt=\"NormalDistribution\" width=\"276\" height=\"179\" srcset=\"https:\/\/courses.bowdoin.edu\/physics-1140-lab-manual\/wp-content\/uploads\/sites\/105\/2015\/07\/NormalDistribution2.png 276w, https:\/\/courses.bowdoin.edu\/physics-1140-lab-manual\/wp-content\/uploads\/sites\/105\/2015\/07\/NormalDistribution2-150x97.png 150w\" sizes=\"auto, (max-width: 276px) 100vw, 276px\" \/><p id=\"caption-attachment-1111\" class=\"wp-caption-text\">A sketch of a &#8220;normal distribution&#8221;, showing \(68\%\) of the data within one standard deviation of the mean.<\/p><\/div>\n<\/div>\n<p id=\"stderr\" class=\"indentedh2\">The <em>standard error of the mean<\/em> is an estimate of the uncertainty in the mean, in the sense of roughly how far it may be from the &#8220;true&#8221; value. It is this quantity that answers the question, &#8220;If I repeat the <em>entire series of \(N\) measurements<\/em> and compute a second mean, how close to the first mean should I expect it to be?&#8221; The answer is that you should expect a second average (that results from redoing the set of measurements) to have a \(68\%\) probability of lying within one standard error of the first average that you determined. Thus the interval \(\bar{x}\pm S_{m}\) is sometimes referred to as the \(68\%\) confidence interval. The standard error of the mean is:<br \/>\n$$S_{m} = \frac{s}{\sqrt{N}}. \tag{4}$$<\/p>\n<p class=\"indentedh2\">How do we determine the number of measurements to take?
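As an aside, Eqs. (2)-(4) are simple to compute directly. The sketch below is our illustration, not part of the lab procedure, and the measurement values in it are hypothetical:

```python
import math

def mean(xs):
    """Eq. (2): the average of the measurements."""
    return sum(xs) / len(xs)

def sample_std_dev(xs):
    """Eq. (3): the sample standard deviation, with the N - 1 denominator."""
    xbar = mean(xs)
    return math.sqrt(sum((x - xbar) ** 2 for x in xs) / (len(xs) - 1))

def std_error(xs):
    """Eq. (4): the standard error of the mean, s / sqrt(N)."""
    return sample_std_dev(xs) / math.sqrt(len(xs))

# Five hypothetical repeated length measurements (mm):
lengths = [68.5, 68.2, 68.9, 68.4, 68.6]
print(f"{mean(lengths):.2f} +/- {std_error(lengths):.2f} mm")  # prints 68.52 +/- 0.12 mm
```

The standard library's `statistics.mean` and `statistics.stdev` return the same mean and sample standard deviation.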
As we make more measurements, \\(N\\) increases, and at first \\(\\bar{x}\\) bounces around a bit, but the larger \\(N\\), the less \\(\\bar{x}\\) changes. Similarly, \\(s\\) varies at first, but settles down to some value. However, \\(S_{m}\\) varies as \\(1\/\\sqrt{N}\\). Thus it gets smaller as \\(N\\) increases. Qualitatively, this says &#8220;the more numbers you average, the better the mean value is determined&#8221;. In reporting a result we usually want a &#8220;best&#8221; (mean or average) value and an estimate of its uncertainty. We report these as \\(\\bar{x}\\pm S_{m}\\). We will sometimes call \\(S_{m}\\) the <em>precision<\/em> of \\(\\bar{x}\\). Note that \\(S_{m}\\) and \\(s\\) have the same <em><a href=\"https:\/\/courses.bowdoin.edu\/physics-1140-lab-manual\/physics-1140-notes-on-units\/\">units<\/a><\/em> as the original measured values.<\/p>\n<h2 id=\"SigFigs\">Significant Figures<\/h2>\n<p class=\"indentedh2notes\">See the discussion <a href=\"https:\/\/courses.bowdoin.edu\/physics-1140-lab-manual\/physics-1140-notes-on-significant-figures\/\">here<\/a> as well.<\/p>\n<p class=\"indentedh2\">Significant figures are also called significant digits. Because \\(\\delta x\\) is an uncertainty, it should not be stated with too much precision. For example, it would be ridiculous to state the <a href=\"#Twinkie\">above measurement<\/a> as<br \/>\n$$ \\rm{Twinkie\\:length}=68.5\\pm 0.4764267\\:\\rm{mm}.$$<br \/>\nThis leads to the following rule for stating uncertainties:<\/p>\n<blockquote>\n<p id=\"StatingUncertainties\" class=\"indentedh2\">Experimental uncertainties should almost always be rounded to one significant figure.<\/p>\n<\/blockquote>\n<p class=\"indentedh2\">One exception: If the leading significant figure in the uncertainty \\(\\delta x\\) is a \\(1\\), then keeping \\(2\\) significant figures in \\(\\delta x\\) may be better. 
For example, if some calculation gave the uncertainty \\(\\delta x = 0.14\\), then rounding to \\(0.1\\) would be a substantial proportional reduction, so retaining the \\(2\\) figures, \\(0.14\\), would arguably be better.<\/p>\n<p class=\"indentedh2\">Once the uncertainty has been estimated, it determines the number of significant figures in the measured value through the following rule:<\/p>\n<p id=\"MeasuredValueSigFigs\" class=\"indentedh2\">The last significant digit in any stated result should be of the same order of magnitude (in the same decimal position) as the uncertainty:<\/p>\n<ul class=\"indentedh2\">\n<li>\\(98.\\underline{26}\\pm 0.\\underline{03}\\) mm<\/li>\n<li>\\(30.\\underline{0004}\\pm 0.\\underline{0002}\\) g<\/li>\n<li>\\(1\\underline{30}\\pm \\underline{20}\\) s<\/li>\n<li>\\(5\\underline{50,000}\\pm \\underline{10,000}\\) people<\/li>\n<\/ul>\n<p class=\"indentedh2\">In the Twinkie example, if you reported<br \/>\n$$ \\rm{Twinkie\\:length}=68.52443\\pm 0.5\\:\\rm{mm}$$<br \/>\nthe last \\(4\\) digits in the decimal part would be meaningless.<\/p>\n<h2>Absolute and Fractional Errors<\/h2>\n<p class=\"indentedh2\">Errors are either reported as absolute errors or as relative errors:<\/p>\n<ul class=\"indentedh2\">\n<li>The <em>absolute error<\/em> of a quantity has the same units as the measured value, and is simply what you report when you say you can only measure something to such and such certainty. From the <a href=\"#Twinkie\">Twinkie example<\/a>, our absolute error is \\(0.5\\) mm. The absolute error is usually denoted with a small Greek delta, so for a quantity denoted by \\(x\\) (or \\(y\\), etc.) the absolute error would be written as \\(\\delta x\\) (or \\(\\delta y\\), etc.) as in Equation (\\ref{1}). 
Absolute error must be used when <a href=\"https:\/\/courses.bowdoin.edu\/physics-1140-lab-manual\/physics-1140-graphical-analysis-for-a-straight-line-graph\/\">graphing<\/a> error bars (to know the size of the error bar) and when <a href=\"#Comparison\">comparing one measured quantity to another<\/a>.<\/li>\n<li>The <em>relative error<\/em> or <em>fractional error<\/em> of a quantity \(x\) is simply what fraction the absolute error is of the quantity itself, so<br \/>\n$$ \rm{relative\:error} = \frac{\rm{absolute\:error}}{\rm{best\:value}}=\frac{\delta x}{x}. (\rm{No\:units!\:They\:cancel!})\label{5}\hskip{5em}\tag{5}$$<br \/>\nThe relative error is usually expressed as a <em>percentage<\/em> (by multiplying by \(100\%\)). So, for our Twinkie length measurement we have a relative error of \(0.5\:\rm{mm}\/68.5\:\rm{mm}=0.007=0.7\%\). Note that if you have the relative error of a quantity, you can easily calculate its absolute error by rearranging the above equation.<\/li>\n<\/ul>\n<h2 id=\"Comparison\">Comparison of Two Measured Numbers<\/h2>\n<div id=\"GraphicalComparison\">\n<div id=\"attachment_1984\" style=\"width: 243px\" class=\"wp-caption alignnone\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-1984\" class=\"wp-image-1984 size-full\" src=\"https:\/\/courses.bowdoin.edu\/physics-1140-lab-manual\/wp-content\/uploads\/sites\/105\/2017\/01\/comparison.png\" alt=\"GraphicalComparison\" width=\"233\" height=\"245\" srcset=\"https:\/\/courses.bowdoin.edu\/physics-1140-lab-manual\/wp-content\/uploads\/sites\/105\/2017\/01\/comparison.png 233w, https:\/\/courses.bowdoin.edu\/physics-1140-lab-manual\/wp-content\/uploads\/sites\/105\/2017\/01\/comparison-143x150.png 143w\" sizes=\"auto, (max-width: 233px) 100vw, 233px\" \/><p id=\"caption-attachment-1984\" class=\"wp-caption-text\">A Graphical Comparison of Measured X-Components of Momentum<\/p><\/div>\n<\/div>\n<p class=\"indentedh2\">Many experiments involve measuring two numbers
that theory predicts should be equal. For example, the law of conservation of momentum states that the total momentum of an isolated system is constant. To test it we might perform an experiment with \\(2\\) carts that collide on a frictionless track (as we did in Lab \\(2\\) of <a href=\"https:\/\/www.bowdoin.edu\/physics\/courses\/index.html\">\\(\\underline{\\rm{Physics}\\:1130}\\)<\/a>). Let&#8217;s say we measure the total momentum of the \\(2\\) carts before (\\(\\vec{p}\\)) and after (\\(\\vec{q}\\)) the collision and check whether \\(\\vec{p}=\\vec{q}\\) within experimental uncertainties. Suppose we measure<br \/>\n$$ \\rm{initial\\:momentum}\\:\\vec{p} = 1.49\\pm 0.03\\:\\rm{kg\\:m\/s}\\:\\hat{x}$$<br \/>\nand<br \/>\n$$ \\rm{final\\:momentum}\\:\\vec{q} = 1.56\\pm 0.06\\:\\rm{kg\\:m\/s}\\:\\hat{x}.$$<br \/>\nHere, the likely range for the x-component \\(p_x\\) (\\(1.46\\) to \\(1.52\\) kg m\/s) <em>overlaps<\/em> the likely range for the x-component \\(q_x\\) (\\(1.50\\) to \\(1.62\\) kg m\/s). Therefore, these measurements are consistent with conservation of momentum: they are equal within experimental uncertainties.<\/p>\n<p class=\"indentedh2\">If, on the other hand, the two probable ranges were not even close to overlapping, the measurements would be inconsistent with conservation of momentum. We would have to check for mistakes in our measurements and calculations, look for possible systematic errors, and investigate the possibility that external forces (such as gravity and friction) are causing the momentum of the system to change.<\/p>\n<h2 id=\"ErrorPropagation\">Propagating Errors in Calculations<\/h2>\n<h3>(When Two or More Uncertain Quantities are Combined)<\/h3>\n<p class=\"indentedh2\">Let&#8217;s continue now with error analysis. 
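The overlap test from the comparison section above is easy to automate. This is our own sketch (the helper name `ranges_overlap` is ours, not a standard function):

```python
def ranges_overlap(best1, err1, best2, err2):
    """True if best1 +/- err1 and best2 +/- err2 overlap, i.e. the two
    measurements are equal within experimental uncertainties."""
    return (best1 - err1) <= (best2 + err2) and (best2 - err2) <= (best1 + err1)

# Momentum x-components from the example above (kg m/s):
print(ranges_overlap(1.49, 0.03, 1.56, 0.06))  # 1.46-1.52 and 1.50-1.62 overlap: True
```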
We often carry out calculations using measured values, and we must take into consideration the associated uncertainties since the result cannot be better than the data on which it was based.<\/p>\n<ul class=\"indentedh2\">\n<li>Uncertainty in Sums and Differences: Suppose that \\(x\\), \\(y\\), \\(\\ldots w\\) are measured with uncertainties \\(\\delta x, \\delta y, \\ldots \\delta w\\) and we use the measured values to compute<br \/>\n$$ q = x + \\ldots + z &#8211; (u+\\ldots + w).$$<br \/>\nIf the uncertainties in \\(x, y,\\ldots w\\) are known to be independent and random, then the uncertainty in \\(q\\) is:<br \/>\n$$\\delta q = \\sqrt{(\\delta x)^{2}+\\ldots + (\\delta z)^{2} + (\\delta u)^{2} + \\ldots +(\\delta w)^{2}}.\\label{6}\\hskip{2.5em}\\tag{6}$$<br \/>\nThis is called a <em>quadratic sum<\/em>. Notice that even for the subtracted values, the estimate of the error in the result, \\(\\delta q\\), <em>adds<\/em> the squares of the absolute error of each value (\\(\\delta x\\) through \\(\\delta w\\)) and then takes the square root of the total.<\/li>\n<li>Uncertainties in Products and Quotients: Suppose that \\(x,y,\\ldots w\\) are measured with uncertainties \\(\\delta x, \\delta y,\\ldots \\delta w\\) and we use the measured values to compute<br \/>\n$$q = \\frac{x\\times\\ldots\\times z}{u\\times\\ldots\\times w}.$$<br \/>\nIf the uncertainties in \\(x, y, \\ldots w\\) are known to be independent and random, then the relative uncertainty in \\(q\\) is:<br \/>\n$$\\frac{\\delta q}{q} = \\sqrt{\\left(\\frac{\\delta x}{x}\\right)^{2}+\\ldots + \\left(\\frac{\\delta z}{z}\\right)^{2} + \\left(\\frac{\\delta u}{u}\\right)^{2}+\\ldots + \\left(\\frac{\\delta w}{w}\\right)^{2}}.\\label{7}\\hskip{5em}\\tag{7}$$<br \/>\nThis is a quadratic sum of the <em>relative errors<\/em> of multiplied and divided values. 
If a particular form is not specified, you can report your calculated \(q\) with either relative error (usually given as \(\%\)) or absolute error (the relative error \(\times q\), with units), but make sure it is clear which one you are reporting.\u00a0 Remember: when dealing with the product or quotient of uncertain quantities, use the <em>relative<\/em> errors, and when dealing with the sum or difference of uncertain quantities, use the <em>absolute<\/em> errors themselves. See Equations (\ref{6}) and (\ref{7}). Here are simpler examples of (\ref{6}) and (\ref{7}), using values with their respective uncertainties \(a\pm\delta a\), \(b\pm\delta b\), and \(c\pm\delta c\):\n<ul>\n<li>What is the uncertainty of \(q = a + b + c\)?<br \/>\n$$\rm{Using\:Eq.\:\ref{6}:\:} \delta q = \sqrt{(\delta a)^{2}+(\delta b)^{2}+(\delta c)^{2}}.$$<\/li>\n<li>What is the uncertainty of \(q = \frac{a b}{c}\)?<br \/>\n$$\rm{Using\:Eq.\:\ref{7}:\:} \frac{\delta q}{q} = \sqrt{\left(\frac{\delta a}{a}\right)^{2}+\left(\frac{\delta b}{b}\right)^{2}+\left(\frac{\delta c}{c}\right)^{2}}.$$<\/li>\n<\/ul>\n<\/li>\n<li>Uncertainty in a Power: If \(x\) is measured with uncertainty \(\delta x\) and is used to calculate, say, \(q=x^{3}\), then we could write that as \(q=x\cdot x\cdot x\). However, the three \(x\) values here are the same quantity, so we know they are not independent. This requires a different combination of the errors; instead of the quadratic sum (as in Eq. (\ref{7}) above), we take a regular sum to allow for the largest possible error.
Thus the relative error is given by<br \/>\n$$\\frac{\\delta q}{q} = \\frac{\\delta x}{x}+\\frac{\\delta x}{x}+\\frac{\\delta x}{x}=3\\frac{\\delta x}{x}.$$<br \/>\nThis is true for any power, so for the equation \\(q = x^{n}\\) (where \\(n\\) is a fixed, known number), the fractional uncertainty in \\(q\\) is \\(|n|\\) times that in \\(x\\):<br \/>\n$$\\frac{\\delta q}{q}=|n|\\frac{\\delta x}{x}.\\label{8}\\tag{8}$$<br \/>\nFor example, if you measure an area \\(x\\) as \\(81\\pm 6\\:\\rm{cm}^{2}\\), what should you report for \\(q=\\sqrt{x}\\) (square root of \\(x\\)), with its uncertainty? Answer:<br \/>\n$$q = x^{1\/2} = (81\\:\\rm{cm}^2)^{1\/2}=9\\:cm,$$<br \/>\n$$\\frac{\\delta q}{q} = |n|\\frac{\\delta x}{x} = |\\frac{1}{2}|\\frac{\\delta x}{x} = |\\frac{1}{2}|\\frac{6\\:\\rm{cm}}{81\\:\\rm{cm}}=0.037, $$<br \/>\n$$\\rm{So},\\:\\delta q = 0.037 q = 0.037\\times 9\\:\\rm{cm}=0.33\\:\\rm{cm} ,$$<br \/>\n$$\\rm{and\\:we\\:report}\\:q=\\sqrt{x}=9.0\\pm0.3\\:\\rm{cm},$$<br \/>\nwhere the error has been rounded to \\(1\\) significant figure, and the answer is reported with the same decimal places as the error.<\/li>\n<\/ul>\n<p class=\"indentedh2\">For those of you comfortable with calculus, and who like knowing general rules (the sign of a good physicist!), the above three &#8220;rules&#8221; are all derived from one general formula for error propagation. For a function \\(q\\) of measured variables \\(x, y,\\ldots z\\) the uncertainty in \\(q\\) is<br \/>\n$$\\delta q = \\left[\\left(\\frac{\\partial q}{\\partial x}\\delta x\\right)^{2}+\\ldots +\\left(\\frac{\\partial q}{\\partial z}\\delta z\\right)^{2}\\right]^{1\/2} .$$<br \/>\nLet&#8217;s go back to <a href=\"#SigFigs\">significant figures<\/a> for a minute. In a lab setting, significant figures are best given by the uncertainty of your measurements &#8211; the <a href=\"#StatingUncertainties\">two<\/a> <a href=\"#MeasuredValueSigFigs\">rules<\/a> from the significant figures section. 
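The propagation rules above can also be sketched numerically. The code below is our illustration only (the function names are ours), and it assumes independent, random errors as stated above:

```python
import math

def err_sum(*abs_errs):
    """Eq. (6): absolute error of a sum or difference of independent quantities."""
    return math.sqrt(sum(e ** 2 for e in abs_errs))

def rel_err_product(*pairs):
    """Eq. (7): relative error of a product or quotient.
    Each pair is (value, absolute_error)."""
    return math.sqrt(sum((e / v) ** 2 for v, e in pairs))

def rel_err_power(x, dx, n):
    """Eq. (8): relative error of q = x**n."""
    return abs(n) * dx / x

# The worked example above: q = sqrt(x) with x = 81 +/- 6 cm^2.
q = 81 ** 0.5                        # 9.0 cm
dq = rel_err_power(81, 6, 0.5) * q   # about 0.33 cm, reported as 0.3 cm
```

With these, the worked examples above reduce to one-line calls.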
But when uncertainty analysis is not required, use the following rules for combining values with differing significant figures:<\/p>\n<ul class=\"indentedh2\">\n<li>Multiplying\/Dividing: A result should have the same number of significant figures as its <em>least<\/em>-significant-figure component. This is simple and makes sense if you think about the fact that a result can be no more precise than its least precise component.<\/li>\n<li>Adding\/Subtracting: The decimal place precision is the key here &#8211; give your answer to the number of decimal places of the value with the least number of decimal places.<\/li>\n<\/ul>\n<p class=\"indentedh2\">Two examples:<\/p>\n<ol class=\"indentedh2\">\n<li>Subtract \\(0.52\\) cm (\\(2\\) decimal places, \\(2\\) significant figures) from \\(12.3\\) cm (<em>\\(1\\) decimal place<\/em>, \\(3\\) significant figures). Answer: \\(11.8\\) cm (<em>\\(1\\) decimal place<\/em>, \\(3\\) significant figures)<\/li>\n<li>Here, we&#8217;ll check to see if the informal rules above are consistent with formal error propagation. Say you know the distance to your grandmother&#8217;s house is \\(1237\\pm 5\\) miles [or use \\((1.237\\pm 0.005)\\times 10^{3}\\) miles, in scientific notation]. This has \\(4\\) significant digits. You estimate that the cost to drive your car is \\(17\\pm 3\\) cents\/mile (or \\(0.17\\pm 0.03\\) dollars\/mile). This has \\(2\\) significant digits. So, let&#8217;s evaluate the total cost of the trip to your grandmother&#8217;s house, first ignoring the uncertainties:<br \/>\n$$\\rm{Total\\:cost} = \\rm{distance}\\times\\rm{cost\/distance},$$<br \/>\n$$\\rm{Total\\:cost} = 1.237\\times 10^{3}\\:\\rm{miles}\\times 0.17\\:\\rm{dollars\/mile},$$<br \/>\n$$\\rm{Total\\:cost} = 210.29\\:\\rm{dollars},$$<br \/>\n$$\\rm{Total\\:cost} = 210 (\\rm{or}\\:2.1\\times 10^{2})\\:\\rm{dollars}.\\:\\:(2\\:\\rm{significant\\:figures})$$<br \/>\nNow, let&#8217;s check what the propagation of errors says about significant digits. 
The &#8220;least-significant-digit&#8221; cost\/mile estimate is only good to \(3\) parts in \(17\) relative error (or \(3\/17\times 100\%=18\%\)). So really the relative error of the result can be no better. Equation \((\ref{7})\) tells us that<br \/>\n$$\frac{\delta\rm{Total}}{\rm{Total}}=\sqrt{\left(\frac{\delta\rm{distance}}{\rm{distance}}\right)^{2}+\left(\frac{\delta\rm{Cost\/mile}}{\rm{Cost\/mile}}\right)^{2}},$$<br \/>\n$$\frac{\delta\rm{Total}}{\rm{Total}}=\sqrt{\left(\frac{5}{1237}\right)^{2}+\left(\frac{3}{17}\right)^{2}},$$<br \/>\n$$\frac{\delta\rm{Total}}{\rm{Total}}=0.18,$$<br \/>\nwhich is also \(18\%\).\u00a0 Why? Because the \(5\/1237\) fraction is negligible compared to the larger \(3\/17\) value.\u00a0 The absolute error in total cost, found by rearranging Eq. (\(\ref{5}\)), is \(\delta\rm{Total}=0.18\times 210.29\:\rm{dollars}=\$37.85\rightarrow\$40\) when rounded to one significant digit.\u00a0 This is why it makes sense to state the total cost as just \(\$210\), since you don&#8217;t know it any better than \(\$40\) either way.<\/li>\n<\/ol>\n<\/div>\n<div id=\"footer\">\n<p class=\"centered\"><a href=\"http:\/\/www.mathjax.org\"> <img decoding=\"async\" title=\"Powered by MathJax\" src=\"http:\/\/cdn.mathjax.org\/mathjax\/badge\/badge.gif\" alt=\"Powered by MathJax\" \/><\/a><\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>&#8220;What are good measurements and good error analysis? High likelihood that the &#8216;true&#8217; value is within your given uncertainty, while keeping the uncertainty as small as possible.&#8221; &#8211; Professor Mark Battle The following discussion of uncertainty is largely taken from the book, An Introduction to Error Analysis, by John R. Taylor.
Experience has shown that [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-46","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/courses.bowdoin.edu\/physics-1140-lab-manual\/wp-json\/wp\/v2\/pages\/46","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/courses.bowdoin.edu\/physics-1140-lab-manual\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/courses.bowdoin.edu\/physics-1140-lab-manual\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/courses.bowdoin.edu\/physics-1140-lab-manual\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/courses.bowdoin.edu\/physics-1140-lab-manual\/wp-json\/wp\/v2\/comments?post=46"}],"version-history":[{"count":0,"href":"https:\/\/courses.bowdoin.edu\/physics-1140-lab-manual\/wp-json\/wp\/v2\/pages\/46\/revisions"}],"wp:attachment":[{"href":"https:\/\/courses.bowdoin.edu\/physics-1140-lab-manual\/wp-json\/wp\/v2\/media?parent=46"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}