\item Title pretty much summarises it; similar to \cite{hayes2012pixels} except these guys actually did something practical
\end{itemize}

\section{27 Bits are not enough for 8 digit accuracy\cite{goldberg1967twentyseven}}

Proves, with maths, that rounding errors mean you need at least $q$ bits for $p$ decimal digits, where $10^p < 2^{q-1}$.

\begin{itemize}
    \item Eg: For 8 decimal digits, since $10^8 < 2^{27}$, would expect to be able to represent with 27 binary digits
    \item But: The integer part needs bits of its own (regardless of fixed or floating point representation)
    \item Trade-off between precision and range (numerical sketch after this list)
    \begin{itemize}
        \item 9000000.0 $\to$ 9999999.9 needs 24 bits for the integer part ($2^{23} = 8388608$)
    \end{itemize}
    \item Floating point zero = smallest possible machine exponent
    \item Floating point representation:
    \begin{align*}
        y &= 0.y_1 y_2 \text{...} y_q \times 2^{n}
    \end{align*}
    \item Can eliminate a bit by considering whether $n = -e$ for $-e$ the smallest machine exponent (???)
    \begin{itemize}
        \item Get very small numbers with the same precision
        \item Get large numbers with the extra bit of precision
    \end{itemize}
\end{itemize}
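
To make the trade-off concrete, here is a minimal Python sketch (my own illustration, not from the paper; \texttt{round\_to\_bits} is a hypothetical helper): for an 8 digit number near $10^7$ the integer part already takes 24 of the 27 bits, leaving a spacing of $1/8$ for the fraction, so two values differing in the 8th decimal digit can collapse to the same representation.

\begin{verbatim}
import math

def round_to_bits(x, bits):
    # Round x to a significand of `bits` binary digits (illustration only).
    m, e = math.frexp(x)               # x = m * 2**e  with  0.5 <= m < 1
    return round(m * 2**bits) * 2.0**(e - bits)

# The integer part needs 24 bits, so a 27-bit significand has 1/8 resolution left:
print(round_to_bits(9999999.2, 27))    # 9999999.25
print(round_to_bits(9999999.3, 27))    # 9999999.25  (the 8th digit is lost)
\end{verbatim}
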
\section{What every computer scientist should know about floating-point arithmetic\cite{goldberg1991whatevery}}

\begin{itemize}
    \item Book: \emph{Floating Point Computation} by Pat Sterbenz (out of print... in 1991)
    \item IEEE floating point standard becoming popular (introduced in 1987, this is 1991)
    \begin{itemize}
        \item As well as structure, defines the algorithms for addition, multiplication, division and square root
        \item Makes things portable because results of operations are the same on all machines (following the standard)
        \item Alternatives to floating point: Floating slash and Signed Logarithm (TODO: Look at these, although they will probably not be useful)
    \end{itemize}
    \item Base $\beta$ and precision $p$ (number of digits to represent with) - powers of the base can be represented exactly.
    \item Largest and smallest exponents $e_{min}$ and $e_{max}$
    \item Need bits for exponent and fraction, plus one for sign
    \item ``Floating point number'' is one that can be represented exactly.
    \item Representations are not unique! $0.01 \times 10^1 = 1.00 \times 10^{-1}$; non-zero leading digit $\implies$ ``normalised''
    \item Requiring the representation to be normalised makes it unique, {\bf but means it is impossible to represent zero}.
    \begin{itemize}
        \item Represent zero as $1 \times \beta^{e_{min}-1}$ - requires extra bit in the exponent
    \end{itemize}
    \item {\bf Rounding Error}
    \begin{itemize}
        \item ``Units in the last place'' eg: 0.0314159 compared to 0.0314 has ulp error of 0.159
        \item Even if the calculated result is the nearest floating point number to the true result, it can still be as much as 1/2 ulp in error
        \item Relative error corresponding to 1/2 ulp can vary by a factor of $\beta$ - the ``wobble''; written in terms of $\epsilon$
        \item Maths $\implies$ {\bf Relative error is always bounded by $\epsilon = (\beta/2)\beta^{-p}$} (checked numerically in the first sketch after this list)
        \item Fixed relative error $\implies$ ulp can vary by a factor of $\beta$, and vice versa
        \item Larger $\beta \implies$ larger errors
    \end{itemize}
    \item {\bf Guard Digits}
    \begin{itemize}
        \item In subtraction: Could compute the exact difference and then round; this is expensive
        \item Keep a fixed number of digits but shift the smaller operand right, discarding precision; leads to relative error of up to $\beta - 1$
        \item Guard digit: keep extra digits before truncating; leads to relative error of less than $2\epsilon$. This also applies to addition
    \end{itemize}
    \item {\bf Catastrophic Cancellation} - occurs when the operands are themselves subject to rounding errors (eg: results of earlier multiplications)
    \item {\bf Benign Cancellation} - subtraction of exactly known quantities; error $< 2\epsilon$
    \item Rearrange the formula to avoid catastrophic cancellation (quadratic formula sketch after this list)
    \item Historical interest only - speculation on why IBM used $\beta = 16$ for the System/370 - increased range? Avoids shifting
    \item Precision: IEEE defines extended precision (a lower bound only)
    \item Discussion of the IEEE standard for operations (TODO: Go over in more detail)
    \item NaN and Infinity allow computation to continue past exceptional cases such as invalid operations and overflow
    \item ``Incidentally, some people think that the solution to such anomalies is never to compare floating-point numbers for equality but instead to consider them equal if they are within some error bound E. This is hardly a cure all, because it raises as many questions as it answers.'' - On equality of floating point numbers (last sketch after this list)
\end{itemize}
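
A quick numerical check of the $\epsilon = (\beta/2)\beta^{-p}$ bound for IEEE doubles ($\beta = 2$, $p = 53$) - my own Python sketch, not from the paper (\texttt{math.ulp} needs Python $\geq$ 3.9):

\begin{verbatim}
import math, sys

eps = 2.0**-53                              # (beta/2) * beta**(-p) for doubles
print(sys.float_info.epsilon == 2 * eps)    # True: Python's epsilon is 1 ulp at 1.0
print(math.ulp(1.0) == 2 * eps)             # True

# Correct rounding is off by at most 1/2 ulp, i.e. relative error <= eps:
exact   = 10**16 + 1                 # needs 54 significand bits
rounded = float(exact)               # rounds to 1.0e16
rel_err = abs(int(rounded) - exact) / exact
print(rel_err, rel_err <= eps)       # ~1e-16  True
\end{verbatim}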
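
The classic rearrangement example is the quadratic formula: for large $b > 0$ the small root $(-b + \sqrt{b^2 - 4ac})/2a$ subtracts two nearly equal quantities, while the algebraically identical $2c/(-b - \sqrt{b^2 - 4ac})$ does not. A minimal Python sketch (the coefficients are my own, chosen to make the cancellation obvious):

\begin{verbatim}
import math

a, b, c = 1.0, 1e8, 1.0                # true small root is very close to -1e-8
d = math.sqrt(b*b - 4*a*c)

naive  = (-b + d) / (2*a)              # catastrophic cancellation: -1e8 + ~1e8
stable = (2*c) / (-b - d)              # same root, no cancellation

print(naive)                           # ~ -7.45e-09  (about 25% off)
print(stable)                          # -1e-08       (accurate)
\end{verbatim}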
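
On the closing quote: a short Python sketch (not from the paper) of why a single absolute bound $E$ ``raises as many questions as it answers'', and what a relative tolerance looks like instead:

\begin{verbatim}
import math

print(0.1 + 0.2 == 0.3)              # False: both sides have been rounded
print(0.1 + 0.2)                     # 0.30000000000000004

E = 1e-9                             # a fixed absolute bound behaves badly...
print(abs(1e20 - 1.0000000001e20) < E)   # False, despite agreeing to 10 digits
print(abs(1e-30 - (-1e-30)) < E)         # True, despite differing in sign

# A relative tolerance is usually closer to what is meant:
print(math.isclose(0.1 + 0.2, 0.3, rel_tol=1e-9))   # True
\end{verbatim}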