From: Sam Moore Date: Mon, 21 Apr 2014 17:13:46 +0000 (+0800) Subject: Today's battle with the Literature X-Git-Url: https://git.ucc.asn.au/?a=commitdiff_plain;h=93cb10d1d571c39a1f7b39b88d1fba745f2e31a9;p=ipdf%2Fdocuments.git Today's battle with the Literature Despite a heroic effort I think I'd have to say I lost. --- diff --git a/.gitignore b/.gitignore index d6f44f6..5860cb3 100644 --- a/.gitignore +++ b/.gitignore @@ -6,3 +6,4 @@ *.bbl *.blg *.toc +data/* diff --git a/LiteratureNotes.pdf b/LiteratureNotes.pdf index e63a14d..6d6e3f7 100644 Binary files a/LiteratureNotes.pdf and b/LiteratureNotes.pdf differ diff --git a/LiteratureNotes.tex b/LiteratureNotes.tex index 1d4fe12..2814e4b 100644 --- a/LiteratureNotes.tex +++ b/LiteratureNotes.tex @@ -1,11 +1,12 @@ -\documentclass[8pt]{extarticle} +\documentclass[8pt]{extreport} \usepackage{graphicx} \usepackage{caption} \usepackage{amsmath} % needed for math align \usepackage{bm} % needed for maths bold face \usepackage{graphicx} % needed for including graphics e.g. EPS, PS \usepackage{fancyhdr} % needed for header - +%\usepackage{epstopdf} % Converts eps to pdf before including. Do it manually instead. +\usepackage{float} \usepackage{hyperref} \topmargin -1.5cm % read Lamport p.163 @@ -68,21 +69,31 @@ \tableofcontents -\section{Postscript Language Reference Manual\cite{plrm}} +\chapter{Literature Summaries} + +\section{Postscript Language Reference Manual \cite{plrm}} Adobe's official reference manual for PostScript. It is big. -\section{Portable Document Format Reference Manual\cite{pdfref17}} +\section{Portable Document Format Reference Manual \cite{pdfref17}} Adobe's official reference for PDF. It is also big. +\section{IEEE Standard for Floating-Point Arithmetic \cite{ieee2008-754}} + +The IEEE (revised) 754 standard. + +It is also big. + + + \pagebreak -\section{Portable Document Format (PDF) --- Finally...\cite{cheng2002portable}} +\section{Portable Document Format (PDF) --- Finally... 
\cite{cheng2002portable}}

This is not spectacularly useful; it is basically an advertisement for Adobe software.

@@ -115,7 +126,7 @@ This is not spectacularly useful, is basically an advertisement for Adobe softwa
 \end{itemize}
 
 \pagebreak
-\section{Pixels or Perish \cite{hayes2012pixels}}
+\section{Pixels or Perish \cite{hayes2012pixels}}

``The art of scientific illustration will have to adapt to the new age of online publishing''
And therefore, JavaScript libraries ($\text{D}^3$) are the future.

@@ -180,14 +191,14 @@ This paper uses Metaphors a lot. I never met a phor that didn't over extend itse
 \end{itemize}
 
-\section{Embedding and Publishing Interactive, 3D Figures in PDF Files\cite{barnes2013embedding}}
+\section{Embedding and Publishing Interactive, 3D Figures in PDF Files \cite{barnes2013embedding}}
\begin{itemize}
	\item Links well with \cite{hayes2012pixels}; I heard you liked figures so I put a figure in your PDF
	\item Title pretty much summarises it; similar to \cite{hayes2012pixels} except these guys actually did something practical
\end{itemize}

-\section{27 Bits are not enough for 8 digit accuracy\cite{goldberg1967twentyseven}}
+\section{27 Bits are not enough for 8 digit accuracy \cite{goldberg1967twentyseven}}

Proves, with maths, that rounding errors mean you need at least $q$ bits for $p$ decimal digits: $10^p < 2^{q-1}$.

@@ -210,7 +221,7 @@ Proves with maths, that rounding errors mean that you need at least $q$ bits for
 	\end{itemize}
 \end{itemize}
 
-\section{What every computer scientist should know about floating-point arithmetic\cite{goldberg1991whatevery}}
+\section{What every computer scientist should know about floating-point arithmetic \cite{goldberg1991whatevery}}
\begin{itemize}
	\item Book: \emph{Floating Point Computation} by Pat Sterbenz (out of print...
in 1991) @@ -260,7 +271,7 @@ Proves with maths, that rounding errors mean that you need at least $q$ bits for %%%% % David's Stuff %%%% -\section{Compositing Digital Images\cite{porter1984compositing}} +\section{Compositing Digital Images \cite{porter1984compositing}} @@ -300,7 +311,7 @@ and is implemented almost exactly by modern graphics APIs such as \texttt{OpenGL all but guaranteed that this is the method we will be using for compositing document elements in our project. -\section{Bresenham's Algorithm: Algorithm for computer control of a digital plotter\cite{bresenham1965algorithm}} +\section{Bresenham's Algorithm: Algorithm for computer control of a digital plotter \cite{bresenham1965algorithm}} Bresenham's line drawing algorithm is a fast, high quality line rasterization algorithm which is still the basis for most (aliased) line drawing today. The paper, while originally written to describe how to control a particular plotter, @@ -323,13 +334,13 @@ sub-pixel coverage into account. Bresenham himself extended this algorithm to produce Bresenham's circle algorithm. The principles behind the algorithm have also been used to rasterize other shapes, including B\'{e}zier curves. -\section{Quad Trees: A Data Structure for Retrieval on Composite Keys\cite{finkel1974quad}} +\section{Quad Trees: A Data Structure for Retrieval on Composite Keys \cite{finkel1974quad}} This paper introduces the ``quadtree'' spatial data structure. The quadtree structure is a search tree in which every node has four children representing the north-east, north-west, south-east and south-west quadrants of its space. 
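The integer-only decision-variable idea behind Bresenham's algorithm (described above) is simple enough to sketch. This is a generic modern formulation in Python, not the plotter-control version from the 1965 paper:

```python
def bresenham(x0, y0, x1, y1):
    """Integer-only line rasterization in the style of Bresenham (1965).

    Returns the list of pixels from (x0, y0) to (x1, y1) inclusive.
    Works in all octants; the error term `err` tracks the (scaled)
    distance between the ideal line and the pixel centre.
    """
    points = []
    dx = abs(x1 - x0)
    dy = abs(y1 - y0)
    sx = 1 if x0 < x1 else -1   # step direction in x
    sy = 1 if y0 < y1 else -1   # step direction in y
    err = dx - dy
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 > -dy:            # stepping in x keeps us closer to the line
            err -= dy
            x0 += sx
        if e2 < dx:             # stepping in y keeps us closer to the line
            err += dx
            y0 += sy
    return points
```

Note the loop body uses only integer addition, subtraction, and comparison, which is exactly why the algorithm was practical for 1960s plotter hardware.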
-\section{Xr: Cross-device Rendering for Vector Graphics\cite{worth2003xr}}
+\section{Xr: Cross-device Rendering for Vector Graphics \cite{worth2003xr}}

Xr (now known as Cairo) is an implementation of the PDF v1.4 rendering model,
independent of the PDF or PostScript file formats, and is now widely used
@@ -355,7 +366,7 @@ providing a trade-off between rendering quality and performance. The library dev
 that setting the tolerance to greater than $0.1$ device pixels resulted in errors
 visible to the user.
 
-\section{Glitz: Hardware Accelerated Image Compositing using OpenGL\cite{nilsson2004glitz}}
+\section{Glitz: Hardware Accelerated Image Compositing using OpenGL \cite{nilsson2004glitz}}

This paper describes the implementation of an \texttt{OpenGL} based rendering backend for
the \texttt{Cairo} library.
@@ -380,7 +391,7 @@ some transformations were applied.
 
 %% Sam again
 
-\section{Boost Multiprecision Library\cite{boost_multiprecision}}
+\section{Boost Multiprecision Library \cite{boost_multiprecision}}
\begin{itemize}
	\item ``The Multiprecision Library provides integer, rational and floating-point types in C++ that have more range and precision than C++'s ordinary built-in types.''
@@ -391,7 +402,7 @@ some transformations were applied.
 
 % Some hardware related sounding stuff...
 
-\section{A CMOS Floating Point Unit\cite{kelley1997acmos}}
+\section{A CMOS Floating Point Unit \cite{kelley1997acmos}}

The paper describes the implementation of an FPU for PowerPC using a particular Hewlett Packard process (HP14B 0.5$\mu$m, 3M, 3.3V). It implements a ``subset of the most commonly used double precision floating point instructions''. The unimplemented operations are compiled for the CPU.

@@ -419,7 +430,7 @@ It is probably not that useful, I don't think we'll end up writing FPU assembly?
 
 FPUs typically have 80 bit registers so they can support REAL4, REAL8 and REAL10 (single, double, extended precision).
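The REAL4/REAL8 distinction above is easy to poke at from software. A minimal sketch (Python floats are IEEE doubles, and the \verb/struct/ module exposes the IEEE single encoding; the particular values are just an illustration):

```python
import struct

def round_to_single(x):
    # Round an IEEE double (a Python float) to the nearest IEEE single
    # (REAL4) by packing it into 4 bytes and unpacking it again.
    return struct.unpack('f', struct.pack('f', x))[0]

# Two doubles that differ in roughly the 10th significant digit are
# distinct as REAL8 but collapse to the same REAL4 value, since a
# 24-bit significand only carries about 7 decimal digits.
a, b = 0.1, 0.1000000001
assert a != b
assert round_to_single(a) == round_to_single(b)
```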
-\section{Floating Point Package User's Guide\cite{bishop2008floating}}
+\section{Floating Point Package User's Guide \cite{bishop2008floating}}

This is a technical report describing floating point VHDL packages: \url{http://www.vhdl.org/fphdl/vhdl.html}

@@ -432,13 +443,13 @@ See also: Java Optimized Processor\cite{jop} (it has a VHDL implementation of a
 
 \section{Low-Cost Microarchitectural Support for Improved Floating-Point Accuracy\cite{dieter2007lowcost}}
 
-Mentions how GPUs offer very good floating point performance but only for single precision floats.
+Mentions how GPUs offer very good floating point performance, but only for single precision floats. (NOTE: This statement seems to contradict \cite{hillesland2004paranoia}.)

Has a diagram of a Floating Point adder.

Talks about some magical technique called ``Native-pair Arithmetic'' that somehow makes 32-bit floating point accuracy ``competitive'' with 64-bit floating point numbers.

-\section{Accurate Floating Point Arithmetic through Hardware Error-Free Transformations\cite{kadric2013accurate}}
+\section{Accurate Floating Point Arithmetic through Hardware Error-Free Transformations \cite{kadric2013accurate}}

From the abstract: ``This paper presents a hardware approach to performing accurate floating point addition and multiplication using the idea of error-
@@ -455,7 +466,9 @@ I guess it's time to try and work out how to use the Opensource VHDL implementat
 
 This is about reduction of error in hardware operations rather than the precision or range of floats. But it is probably still relevant.
 
-\section{Floating Point Unit from JOP\cite{jop}}
+This has the Fast2Sum algorithm, but for the love of god I cannot see how you can compute anything other than $a + b = 0$ for all $a, b$ using the algorithm as written in their paper. It references Dekker\cite{dekker1971afloating} and Kahan; will look at them instead.
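On the Fast2Sum confusion above: as far as I can tell, the point is not that $z + zz$ is a useful thing to evaluate in floating point (doing so just reproduces $z$); the point is that $zz$ is exactly the rounding error of the addition, so it can be carried forward and compensated for later. A quick sketch, assuming $|x| \ge |y|$ as the algorithm requires:

```python
def fast2sum(x, y):
    # Dekker/Kahan Fast2Sum; requires |x| >= |y|.
    z = x + y    # the rounded sum RN(x + y)
    w = z - x    # the part of y that actually made it into z
    zz = y - w   # the rounding error: x + y == z + zz exactly, as reals
    return z, zz

# The rounded sum discards y entirely (1e-17 is below half an ulp of 1.0),
# but the error term recovers it exactly.
z, zz = fast2sum(1.0, 1e-17)
assert z == 1.0
assert zz == 1e-17
```

So computing \verb/z + zz/ in floating point is indeed a no-op; the error term only pays off when it is fed into subsequent operations (e.g.\ compensated summation).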
+
+\section{Floating Point Unit from JOP \cite{jop}}

This is a 32-bit floating point unit developed for JOP in VHDL.
I have been able to successfully compile it and the test program using GHDL\cite{ghdl}.

Whilst there are constants defined (e.g.\ \verb/FP_WIDTH = 32, EXP_WIDTH = 8, FRAC_WIDTH = 23/), the actual implementation mostly uses magic numbers, so some investigation is needed into what, for example, the ``52'' bits used in the sqrt units are for.

-\section{GHDL\cite{ghdl}}
+\section{GHDL \cite{ghdl}}

GHDL is an open source GPL licensed VHDL compiler written in Ada. It had packages in Debian up until wheezy, when it was removed. However, the SourceForge site still provides a \shell{deb} file for wheezy.

@@ -477,7 +490,7 @@ Using unix domain sockets we can execute the FPU as a child process and communic
 
 Using \shell{ghdl} the testbench can also be linked as part of a C/C++ program and run using a function; however, there is still no way to communicate with it other than forking a child process and using a unix domain socket anyway. Also, compiling the VHDL FPU as part of our document viewer would clutter the code repository and probably be highly unportable. The VHDL FPU has been given a separate repository.
 
-\section{On the design of fast IEEE floating-point adders\cite{seidel2001onthe}}
+\section{On the design of fast IEEE floating-point adders \cite{seidel2001onthe}}

This paper gives an overview of the ``Naive'' floating point addition/subtraction algorithm and presents several optimisation techniques:
@@ -501,13 +514,91 @@ The paper concludes by summarising the optimisation techniques used by various a
 
 This paper does not specifically mention the precision of the operations, but may be useful because a faster adder design might mean you can increase the precision.
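For reference, the structure of the ``Naive'' addition the paper starts from (exponent compare/swap, alignment, significand add, normalisation) can be sketched on toy (significand, exponent) pairs. This toy version is exact because it shifts the larger operand left instead of shifting the smaller one right and rounding, so it only illustrates the data flow, not the rounding logic that the paper's optimisations target:

```python
def naive_fp_add(a, b):
    # Toy floating point add on (significand, exponent) pairs, where the
    # represented value is significand * 2**exponent. Exact: no rounding.
    (sa, ea), (sb, eb) = a, b
    if ea < eb:                      # 1. compare/swap so `a` has the larger exponent
        (sa, ea), (sb, eb) = (sb, eb), (sa, ea)
    s = (sa << (ea - eb)) + sb       # 2. align to the smaller exponent, then add
    e = eb
    while s and s % 2 == 0:          # 3. normalise: strip trailing zero bits
        s //= 2
        e += 1
    return s, e

# 3 * 2^0 + 1 * 2^-1 == 7 * 2^-1 == 3.5
assert naive_fp_add((3, 0), (1, -1)) == (7, -1)
```

In real hardware the alignment shift is to the right and discards bits, which is where the guard/round/sticky machinery (and the optimisation opportunities the paper describes) come in.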
+\section{Re: round32 ( round64 ( X ) ) ?= round32 ( X ) \cite{beebe2011round32}}
+
+I included this just for the quote by Nelson H. F. Beebe:
+
+``It is too late now to repair the mistakes of the past that are present
+in millions of installed systems, but it is good to know that careful
+research before designing hardware can be helpful.''
+
+This is in regard to the problem of double rounding. It provides a reference for a paper that discusses a rounding mode that eliminates the problem, and a software implementation.
+
+It shows that the IEEE standard can be fallible!
+
+Not sure how to work this into our literature review though.

 % Back to software
-\section{Basic Issues in Floating Point Arithmetic and Error Analysis\cite{demmel1996basic}}
+\section{Basic Issues in Floating Point Arithmetic and Error Analysis \cite{demmel1996basic}}

These are lecture notes from U.C.\ Berkeley CS267 in 1996.

+\section{Charles Babbage \cite{dodge_babbage, nature1871babbage}}
+
+Tributes to Charles Babbage. Might be interesting for historical background. They don't mention anything about floating point numbers though.
+
+\section{GPU Floating-Point Paranoia \cite{hillesland2004paranoia}}
+
+This paper discusses floating point representations on GPUs. The authors have reproduced William Kahan's program \emph{Paranoia}, which characterises the floating point behaviour of (pre-IEEE) computers, for GPUs.
+
+There are a few remarks about GPU vendors not being very open about what they do or do not do with floating point.
+
+Unfortunately we only have the extended abstract, but a pretty good summary of the paper (written by the authors) is at: \url{www.cs.unc.edu/~ibr/projects/paranoia/}
+
+From the abstract:
+
+``... [GPUs are often similar to IEEE] However, we have found
+that GPUs do not adhere to IEEE standards for floating-point operations, nor do they give the information necessary to establish
+bounds on error for these operations ...
''

+and ``...Our goal is to determine the error bounds on floating-point operation results for quickly evolving graphics systems. We have created a tool to measure the error for four basic floating-point operations: addition, subtraction, multiplication and division.''
+
+The implementation is only for Windows and uses GLUT and GLEW and things.
+Implement our own version?
+
+\section{A floating-point technique for extending the available precision \cite{dekker1971afloating}}
+
+This is Dekker's formalisation of the Fast2Sum algorithm originally implemented by Kahan.
+
+\begin{align*}
+	z &= \text{RN}(x + y) \\
+	w &= \text{RN}(z - x) \\
+	zz &= \text{RN}(y - w) \\
+	\implies z + zz &= x + y
+\end{align*}
+
+There is a version for multiplication.
+
+I'm still not quite sure when this is useful. I haven't been able to find an example for $x$ and $y$ where $x + y \neq \text{Fast2Sum}(x, y)$.

+\chapter{General Notes}
+
+\section{Rounding Errors}
+
+They happen. There is ULP, and I don't mean a political party.
+
+TODO: Probably say something more insightful. Other than ``here is a graph that shows errors and we blame rounding''.
+
+Results of \verb/calculatepi/:
+\begin{figure}[H]
+	\centering
+	\includegraphics[width=0.8\textwidth]{figures/calculatepi.pdf}
+	\caption{Example of accumulated rounding errors in a numerical calculation}
+\end{figure}
+
+Tests with \verb/calculatepi/ show it's not quite as simple as just blindly replacing all your additions with Fast2Sum from Dekker\cite{dekker1971afloating}; i.e.\ the graph looks exactly the same for single precision. \verb/calculatepi/ obviously also has multiplication ops in it, which I didn't change. Will look at after sleep maybe.
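On ``blindly replacing additions with Fast2Sum'': the usual way to actually exploit the error term in a summation loop is Kahan's compensated summation, where the $zz$ from each addition is fed back into the next one rather than discarded. A sketch (the repeated-$0.1$ input is just an illustrative case):

```python
def kahan_sum(xs):
    # Compensated summation: carry the Fast2Sum-style error term
    # between iterations instead of throwing it away.
    s = 0.0
    c = 0.0              # running compensation (the lost low-order bits)
    for x in xs:
        y = x - c        # apply the correction from the previous step
        t = s + y
        c = (t - s) - y  # rounding error of s + y, with its sign flipped
        s = t
    return s

xs = [0.1] * 1000
naive = sum(xs)
assert naive != 100.0                                  # naive accumulation drifts
assert abs(kahan_sum(xs) - 100.0) < abs(naive - 100.0) # compensation helps
```

This also suggests why just swapping each addition for Fast2Sum changes nothing: without the feedback of the error term, the returned $z$ values are exactly the ordinary rounded sums.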
+ + \pagebreak \bibliographystyle{unsrt} \bibliography{papers} diff --git a/Makefile b/Makefile index ac80c36..4a62ee5 100644 --- a/Makefile +++ b/Makefile @@ -1,4 +1,4 @@ -TARGETS = ProjectProposalSam.pdf ProjectProposalDavid.pdf LiteratureNotes.pdf +TARGETS = LiteratureNotes.pdf all : $(TARGETS) clean diff --git a/ProjectProposalDavid.pdf b/ProjectProposalDavid.pdf index 861b06d..64a419e 100644 Binary files a/ProjectProposalDavid.pdf and b/ProjectProposalDavid.pdf differ diff --git a/ProjectProposalSam.pdf b/ProjectProposalSam.pdf index 0a2ef8d..8017af4 100644 Binary files a/ProjectProposalSam.pdf and b/ProjectProposalSam.pdf differ diff --git a/figures/calculatepi.pdf b/figures/calculatepi.pdf new file mode 100644 index 0000000..98be1ff Binary files /dev/null and b/figures/calculatepi.pdf differ diff --git a/papers.bib b/papers.bib index 0dec3ca..fa6ae7b 100644 --- a/papers.bib +++ b/papers.bib @@ -501,7 +501,7 @@ doi={10.1109/ARITH.1991.145549},} % On the design of IEEE floating point adders % Has algorithms! -@INPROCEEDINGS{seidal2001onthe, +@INPROCEEDINGS{seidel2001onthe, author={Seidel, P.-M. and Even, G.}, booktitle={Computer Arithmetic, 2001. Proceedings. 15th IEEE Symposium on}, title={On the design of fast IEEE floating-point adders}, @@ -527,4 +527,61 @@ ISSN={1063-6889},} howpublished = "\url{http://www.gaisler.com/doc/grfpu_dasia.pdf}" } - +% The best quote ever. +@misc{beebe2011round32, + title = "Re: round32 ( round64 ( X ) ) ?= round32 ( X )", + note = "IEEE 754 Working Group Mail Archives", + author = "Nelson H. F. Beebe", + howpublished = "\url{http://grouper.ieee.org/groups/754/email/msg04169.html}" +} + +% Biography of Charles Babbage because WHY NOT? + % I suspect this year is wrong?75 +@ARTICLE{dodge_babbage, +author={Dodge, N. 
S.},
+journal={Annals of the History of Computing, IEEE},
+title={Charles Babbage},
+year={2000},
+month={Oct},
+volume={22},
+number={4},
+pages={22-43},
+keywords={Accuracy;Art;Autobiographies;Biographies;Blood;Calculus;Educational institutions;History;Writing},
+doi={10.1109/MAHC.2000.887988},
+ISSN={1058-6180},}
+
+@article{nature1871babbage,
+  author = "Unknown Author",
+  journal = "Nature",
+  title = "Charles Babbage",
+  year = 1871,
+  volume = 5,
+  number = 106,
+  pages = "28-29"
+}
+
+% IEEE 754. Really should have put this in earlier.
+@ARTICLE{ieee2008-754,
+journal={IEEE Std 754-2008},
+title={IEEE Standard for Floating-Point Arithmetic},
+year={2008},
+month={Aug},
+pages={1-70},
+keywords={IEEE standards;floating point arithmetic;programming;IEEE standard;arithmetic formats;computer programming;decimal floating-point arithmetic;754-2008;NaN;arithmetic;binary;computer;decimal;exponent;floating-point;format;interchange;number;rounding;significand;subnormal},
+doi={10.1109/IEEESTD.2008.4610935},}
+
+
+@article{dekker1971afloating,
+year={1971},
+issn={0029-599X},
+journal={Numerische Mathematik},
+volume={18},
+number={3},
+doi={10.1007/BF01397083},
+title={A floating-point technique for extending the available precision},
+url={http://dx.doi.org/10.1007/BF01397083},
+publisher={Springer-Verlag},
+author={Dekker, T.J.},
+pages={224-242},
+language={English}
+}
diff --git a/references/beebe2011round32.html b/references/beebe2011round32.html
new file mode 100644
index 0000000..6ce6440
--- /dev/null
+++ b/references/beebe2011round32.html
@@ -0,0 +1,177 @@
+Re: round32 ( round64 ( X ) ) ?= round32 ( X )

Re: round32 ( round64 ( X ) ) ?= round32 ( X )

Peter Lawrence asks about the infamous problem of double rounding on
+systems with long internal registers (Honeywell mainframes of 1970s,
+Motorola 68K, and current Intel x86 and x86_64 families).
+
+Double rounding is indeed a nuisance, and there is a surprising recent
+discovery that it could have been prevented if there were an unusual
+rounding mode, round-to-odd (RO(x)).  The authors of the paper below
+show how to implement that rounding in software, and discuss how it
+can be used to fix the double-rounding problem.
+
+It is too late now to repair the mistakes of the past that are present
+in millions of installed systems, but it is good to know that careful
+research before designing hardware can be helpful.
+
+@String{j-IEEE-TRANS-COMPUT     = "IEEE Transactions on Computers"}
+
+@Article{Boldo:2008:EFC,
+  author =       "Sylvie Boldo and Guillaume Melquiond",
+  title =        "Emulation of a {FMA} and Correctly Rounded Sums:
+                 Proved Algorithms Using Rounding to Odd",
+  journal =      j-IEEE-TRANS-COMPUT,
+  volume =       "54",
+  number =       "4",
+  pages =        "462--471",
+  month =        apr,
+  year =         "2008",
+  CODEN =        "ITCOB4",
+  DOI =          "http://dx.doi.org/10.1109/TC.2007.70819",
+  ISSN =         "0018-9340",
+  bibdate =      "Sat Feb 19 18:44:18 2011",
+  abstract =     "Rounding to odd is a nonstandard rounding on
+                 floating-point numbers. By using it for some
+                 intermediate values instead of rounding to nearest,
+                 correctly rounded results can be obtained at the end of
+                 computations. We present an algorithm for emulating the
+                 fused multiply-and-add operator. We also present an
+                 iterative algorithm for computing the correctly rounded
+                 sum of a set of floating-point numbers under mild
+                 assumptions. A variation on both previous algorithms is
+                 the correctly rounded sum of any three floating-point
+                 numbers. This leads to efficient implementations, even
+                 when this rounding is not available. In order to
+                 guarantee the correctness of these properties and
+                 algorithms, we formally proved them by using the Coq
+                 proof checker.",
+  acknowledgement = ack-nhfb,
+  fjournal =     "IEEE Transactions on Computers",
+  keyword =      "round-to-odd (RO(x))",
+}
+
+See also discussions of the double-rounding problem in this recent
+useful book:
+
+@String{pub-BIRKHAUSER-BOSTON   = "Birkh{\"a}user Boston Inc."}
+@String{pub-BIRKHAUSER-BOSTON:adr = "Cambridge, MA, USA"}
+
+@Book{Muller:2010:HFP,
+  author =       "Jean-Michel Muller and Nicolas Brisebarre and Florent
+                 de Dinechin and Claude-Pierre Jeannerod and Vincent
+                 Lef{\`e}vre and Guillaume Melquiond and Nathalie Revol
+                 and Damien Stehl{\'e} and Serge Torres",
+  title =        "Handbook of Floating-Point Arithmetic",
+  publisher =    pub-BIRKHAUSER-BOSTON,
+  address =      pub-BIRKHAUSER-BOSTON:adr,
+  pages =        "xxiii + 572",
+  year =         "2010",
+  DOI =          "http://dx.doi.org/10.1007/978-0-8176-4704-9",
+  ISBN =         "0-8176-4704-X",
+  ISBN-13 =      "978-0-8176-4704-9",
+  LCCN =         "QA76.9.C62 H36 2010",
+  bibdate =      "Thu Jan 27 16:18:58 2011",
+  price =        "US\$90 (est.)",
+  acknowledgement = ack-nhfb,
+}
+
+-------------------------------------------------------------------------------
+- Nelson H. F. Beebe                    Tel: +1 801 581 5254                  -
+- University of Utah                    FAX: +1 801 581 4148                  -
+- Department of Mathematics, 110 LCB    Internet e-mail: beebe@xxxxxxxxxxxxx  -
+- 155 S 1400 E RM 233                       beebe@xxxxxxx  beebe@xxxxxxxxxxxx -
+- Salt Lake City, UT 84112-0090, USA    URL: http://www.math.utah.edu/~beebe/ -
+-------------------------------------------------------------------------------
+
+
\ No newline at end of file
diff --git a/references/dekker1971afloating.pdf b/references/dekker1971afloating.pdf
new file mode 100644
index 0000000..5af4b90
Binary files /dev/null and b/references/dekker1971afloating.pdf differ
diff --git a/references/dodge_babbage.pdf b/references/dodge_babbage.pdf
new file mode 100644
index 0000000..2636d42
Binary files /dev/null and b/references/dodge_babbage.pdf differ
diff --git a/references/hillesland2004paranoia.pdf b/references/hillesland2004paranoia.pdf
new file mode 100644
index 0000000..1519750
Binary files /dev/null and b/references/hillesland2004paranoia.pdf differ
diff --git a/references/ieee2008-754.pdf b/references/ieee2008-754.pdf
new file mode 100644
index 0000000..2ffb51c
Binary files /dev/null and b/references/ieee2008-754.pdf differ
diff --git a/references/nature1871babbage.pdf b/references/nature1871babbage.pdf
new file mode 100644
index 0000000..0c2cbf2
Binary files /dev/null and b/references/nature1871babbage.pdf differ