X-Git-Url: https://git.ucc.asn.au/?p=ipdf%2Fdocuments.git;a=blobdiff_plain;f=LiteratureNotes.tex;h=4e7805949904c93cc910012f4a2b4842b13aa3e6;hp=2814e4b1e37c3b0cc86e27d61ef7f80ae2639d87;hb=23f34189ab55d3d0e5dc725a573164da3592afa6;hpb=93cb10d1d571c39a1f7b39b88d1fba745f2e31a9
diff --git a/LiteratureNotes.tex b/LiteratureNotes.tex
index 2814e4b..4e78059 100644
--- a/LiteratureNotes.tex
+++ b/LiteratureNotes.tex
@@ -18,7 +18,7 @@
\parskip 8.2pt % sets spacing between paragraphs
%\renewcommand{\baselinestretch}{1.5} % Uncomment for 1.5 spacing between lines
\parindent 0pt % sets leading space for paragraphs
-\linespread{0.8}
+\linespread{1}
\newcommand{\vect}[1]{\boldsymbol{#1}} % Draw a vector
@@ -37,14 +37,14 @@
\definecolor{darkred}{rgb}{0.75,0,0}
\definecolor{darkblue}{rgb}{0,0,0.75}
\definecolor{pink}{rgb}{1,0.5,0.5}
-\lstset{language=Java}
+\lstset{language=XML}
\lstset{backgroundcolor=\color{darkgray}}
-\lstset{numbers=left, numberstyle=\tiny, stepnumber=1, numbersep=5pt}
-\lstset{keywordstyle=\color{darkred}\bfseries}
-\lstset{commentstyle=\color{darkblue}}
+%\lstset{numbers=left, numberstyle=\tiny, stepnumber=1, numbersep=5pt}
+%\lstset{keywordstyle=\color{darkred}\bfseries}
+%\lstset{commentstyle=\color{darkblue}}
%\lstset{stringsyle=\color{red}}
-\lstset{showstringspaces=false}
-\lstset{basicstyle=\small}
+%\lstset{showstringspaces=false}
+%\lstset{basicstyle=\small}
\newcommand{\shell}[1]{\texttt{#1}}
\newcommand{\code}[1]{\texttt{#1}}
@@ -77,18 +77,56 @@ Adobe's official reference manual for PostScript.
It is big.
+\begin{itemize}
+ \item First version was published BEFORE the IEEE standard and used smaller floats than binary32
+ \item Now uses binary32 floats.
+\end{itemize}
+
\section{Portable Document Format Reference Manual \cite{pdfref17}}
Adobe's official reference for PDF.
It is also big.
+\begin{itemize}
+ \item Early versions did not use IEEE binary32 but a 16.16 bit fixed point (integer/fraction) encoding (Why?)
+ \item Current standard is restricted to binary32
+ \item It specifically says PDF creators must use at most binary32 because higher precision is not supported by Adobe Reader.
+\end{itemize}
+
\section{IEEE Standard for Floating-Point Arithmetic \cite{ieee2008-754}}
The IEEE (revised) 754 standard.
It is also big.
+Successes:
+\begin{itemize}
+ \item Has been adopted by CPUs
+ \item Standardised floats for programmers --- accomplishes the goal of allowing non-experts in numerical analysis to write reasonably sophisticated, platform independent programs that perform complex numerical operations
+\end{itemize}
+
+Failures:
+\begin{itemize}
+ \item Adoption by GPUs has been slower\cite{hillesland2004paranoia}
+ \item It specifies the maximum errors for operations using IEEE types but nothing about internal representations
+ \item Many hardware devices (GPUs and CPUs) use non-IEEE representations internally and simply truncate/round the result
+ \begin{itemize}
+		\item This isn't so much of a problem when the device uses additional bits, but it is misleading when GPUs use fewer bits than binary32 while acting, from the programmer's perspective, as if they were using binary32.
+		\item Devices using {\bf fewer} bits internally aren't IEEE compliant
+ \end{itemize}
+ \item Thus the same program compiled and run on different architectures may give completely different results\cite{HFP}
+ \begin{itemize}
+ \item The ultimate goal of allowing people to write numerical programs in total ignorance of the hardware is not entirely realised
+ \end{itemize}
+ \item This is the sort of thing that makes people want to use a virtual machine, and thus Java
+ \begin{itemize}
+ \item Objectively I probably shouldn't say that using Java is in itself a failure
+ \end{itemize}
+ \item Standards such as PostScript and PDF were slow to adopt IEEE representations
+ \item The OpenVG standard accepts IEEE binary32 in the API but specifically states that hardware may use less than this\cite{rice2008openvg}
+\end{itemize}
+
\pagebreak
@@ -389,6 +427,8 @@ Performance was much improved over the software rasterization and over XRender a
on all except nVidia hardware. However, nVidia's XRender implementation did slow down significantly when
some transformations were applied.
+In \cite{kilgard2012gpu}, Kilgard mentions that Glitz has been abandoned. He describes it as ``GPU assisted'' rather than GPU accelerated, since it used the XRender (??) extension.
+
%% Sam again
\section{Boost Multiprecision Library \cite{boost_multiprecision}}
@@ -429,6 +469,8 @@ It is probably not that useful, I don't think we'll end up writing FPU assembly?
FPUs typically have 80 bit registers so they can support REAL4, REAL8 and REAL10 (single, double, extended precision).
+Note: Presumably this is referring to the x86 80 bit floats that David was talking about?
+
\section{Floating Point Package User's Guide \cite{bishop2008floating}}
@@ -578,17 +620,316 @@ There is a version for multiplication.
I'm still not quite sure when this is useful. I haven't been able to find an example for $x$ and $y$ where $x + y \neq \text{Fast2Sum}(x, y)$.
+\section{Handbook of Floating-Point Arithmetic \cite{HFP}}
+
+This book is amazingly useful and pretty much says everything there is to know about Floating Point numbers.
+It is much easier to read than Goldberg or Priest's papers.
+
+I'm going to start working through it and compile their test programs.
+
+\subsection{A sequence that seems to converge to a wrong limit --- pgs 9-10, \cite{HFP}}
+
+\begin{align*}
+	u_0 &= 2 \\
+	u_1 &= -4 \\
+	u_n &= 111 - \frac{1130}{u_{n-1}} + \frac{3000}{u_{n-1}u_{n-2}}
+\end{align*}
+
+The limit of the sequence should be $6$, but when calculated with IEEE floats it is actually $100$.
+The authors show that $100$ is also a limit of the recurrence (reached exactly for different starting values), and the error introduced by floating point arithmetic pushes the computed sequence to that limit instead.
+
+\begin{figure}[H]
+ \centering
+ \includegraphics[width=0.8\textwidth]{figures/handbook1-1.pdf}
+ \caption{Output of Program 1.1 from \emph{Handbook of Floating-Point Arithmetic}\cite{HFP} for various IEEE types}
+ \label{HFP-1-1}
+\end{figure}
+
+\subsection{Mr Gullible and the Chaotic Bank Society pgs 10-11 \cite{HFP}}
+
+This is an example of a sequence involving $e$. Since $e$ cannot be represented exactly in FP, even though the sequence should go to $0$ for $a_0 = e - 1$, the stored value $a_0 \neq e - 1$, so the sequence goes to $\pm \infty$.
+
+To eliminate these types of problems we'd need an \emph{exact} representation of all real numbers.
+For \emph{any} FP representation, regardless of precision (a finite number of digits), there will be numbers that can't be represented exactly; hence you could find a similar sequence that would explode.
+
+I.e.\ the more precise the representation, the slower things go wrong, but they still go wrong, {\bf even with errorless operations}.
+
+
+\subsection{Rump's example pg 12 \cite {HFP}}
+
+This is an example where the calculation of a function $f(a,b)$ is not only totally wrong, it gives completely different results depending on the CPU, despite the CPUs conforming to IEEE.
+
+\section{Scalable Vector Graphics (SVG) 1.1 (Second Edition) \cite{svg2011-1.1}}
+
+Scalable Vector Graphics (SVG) is an XML language for describing two dimensional graphics. This document is \url{http://www.w3.org/TR/2011/REC-SVG11-20110816/}, the latest version of the standard at the time of writing.
+
+\subsubsection{Three types of object}
+\begin{enumerate}
+ \item Vector graphics shapes (paths)
+ \item Images (embedded bitmaps)
+ \item Text
+\end{enumerate}
+
+\subsubsection{Rendering Model and Precision}
+
+SVG uses the ``painter's model''. Paint is applied to regions of the page in an order, covering the paint below it according to rules for alpha blending.
+
+``Implementations of SVG are expected to behave as though they implement a rendering (or imaging) model corresponding to the one described in this chapter. A real implementation is not required to implement the model in this way, but the result on any device supported by the implementation shall match that described by this model.''
+
+SVG uses {\bf single precision floats}. Unlike PDF and PostScript, the standard specifies this as a {\bf minimum}, with a range from \verb/-3.4e+38F/ to \verb/+3.4e+38F/.
+
+``It is recommended that higher precision floating point storage and computation be performed on operations
+such as coordinate system transformations to provide the best possible precision and to prevent round-off errors.''
+
+There is also a ``High Quality SVG Viewers'' requirement to use at least {\bf double} precision floats.
+
+\section{OpenVG Specification 1.1 \cite{rice2008openvg}}
+``OpenVG is an application programming interface (API) for hardware-accelerated two-dimensional vector and raster graphics developed under the auspices of the Khronos Group \url{www.khronos.org}.''
+
+Specifically, it provides the same drawing functionality required by a high-performance SVG document viewer (SVG Tiny 1.2).
+TODO: Look at that --- SVG Tiny 1.2 $\neq$ SVG 1.1
+
+It is designed to be similar to OpenGL.
+
+\subsubsection{Precision}
+``All floating-point values are specified in standard IEEE 754 format. However,
+implementations may clamp extremely large or small values to a restricted
+range, and internal processing may be performed with lesser precision. At least
+16 bits of mantissa, 6 bits of exponent, and a sign bit must be present, allowing
+values from $\pm 2^{-30}$ to $\pm2^{31}$ to be represented with a fractional precision of at least 1
+in $2^{16}$.''
+
+IEEE, but with a non-standard number of bits.
+
+Presumably the decreased precision is for increased efficiency; the standard states that example applications include e-book readers.
+
+
+\section{Document Object Model --- pugixml 1.4 manual \cite{pugixmlDOM}}
+
+Pugixml is a C++ library for parsing XML documents\cite{kapoulkine2014pugixml}. XML is based on the Document Object Model (DOM); this is explained pretty well by the pugixml manual.
+
+The document is the root node of the tree. Each child node has a type. These may
+\begin{itemize}
+ \item Have attributes
+ \item Have child nodes of their own
+ \item Contain data
+\end{itemize}
+
+In the case of HTML/SVG an XML parser within the browser/viewer creates the DOM tree from the XML (plain text) and then interprets that tree to produce the objects that will be rendered.
+
+An example of XML $\to$ DOM tree conversion is given in \cite{pugixmlDOM}.
+An example of XML parsing using pugixml is in \shell{code/src/tests/xml.cpp}.
+
+
+\begin{figure}[H]
+ \centering
+\begin{lstlisting}[language=XML,basicstyle=\ttfamily]
+<?xml version="1.0"?>
+<mesh name="mesh_root">
+    <!-- here is a mesh node -->
+    some text
+    <![CDATA[someothertext]]>
+    some more text
+    <node attr1="value1" attr2="value2" />
+    <node attr1="value2">
+        <innernode/>
+    </node>
+</mesh>
+<?include somedata?>
+\end{lstlisting}
+
+ \includegraphics[width=0.6\textwidth]{references/pugixmlDOM-dom_tree.png}
+ \caption{Tree representation of the above listing \cite{pugixmlDOM}}
+\end{figure}
+
+\section{An Algorithm For Shading of Regions on Vector Display Devices \cite{brassel1979analgorithm}}
+
+All modern display devices are raster based and therefore this paper is mainly of historical interest. It provides some references for shading on a raster display.
+
+The algorithm described will shade an arbitrary simply-connected polygon using one or two sets of parallel lines.
+The ``traditional'' method is:
+\begin{enumerate}
+ \item Start with a $N$ vertex polygon, rotate coords by the shading angle
+ \item Determine a bounding rectangle
+ \item For $M$ equally spaced parallel lines, compute the intersections with the boundaries of the polygon
+ \item Rotate coordinates back
+ \item Render the $M$ lines
+\end{enumerate}
+
+This is pretty much exactly how an artist would shade a pencil drawing. It is $O(M\times N)$.
+
+The algorithm in this paper does:
+\begin{enumerate}
+ \item Rotate polygon coords by shading angle
+ \item Subdivide the polygon into trapezoids (special case triangle)
+ \item Shade the trapezoids independently
+ \item Rotate it all back
+\end{enumerate}
+It is more complicated than it seems. The subdivision requires a sort to be performed on the vertices of the polygon based on their rotated $x$ and $y$ coordinates.
+
+\section{An Algorithm For Filling Regions on Graphics Display Devices \cite{lane1983analgorithm}}
+
+This gives an algorithm for filling polygons (which may have ``holes'').
+It requires the ability to ``subtract'' fill from a region; this is (as far as I can tell) difficult on vector graphics devices but simple on raster graphics devices, so the paper claims to be oriented toward raster devices.
+
+If the polygon is defined by $(x_i, y_i)$ then this algorithm iterates from $i = 2$ and alternates between filling and erasing the triangles $[(x_i, y_i), (x_{i+1}, y_{i+1}), (x_1, y_1)]$. It requires no sorting of the points.
+
+The paper provides a proof that the algorithm is correct and is ``optimal in the number of pixel updates required for convex polygons''.
+In the conclusion it is noted that trapezoids could be used from a fixed line and edge of the polygon, but this is not pixel optimal.
+
+This paper doesn't have a very high citation count but it is cited by the NVIDIA article \cite{kilgard2012gpu}.
+Apparently someone else adapted this algorithm for use with the stencil buffer.
+
+\section{GPU-accelerated path rendering \cite{kilgard2012gpu, kilgard300programming}}
+
+Vector graphics on the GPU; an NVIDIA extension. \cite{kilgard300programming} is the API.
+
+Motivations:
+\begin{itemize}
+ \item The focus has been on 3D acceleration in GPUs; most path rendering is done by the CPU.
+ \item Touch devices allow the screen to be transformed rapidly; CPU rasterization of the path becomes inefficient
+ \begin{itemize}
+ \item The source of the ugly pixelated effects on a smartphone when scaling?
+ \end{itemize}
+ \item Especially when combined with increased resolution of these devices
+ \item Standards such as HTML5, SVG, etc, expose path rendering
+ \item JavaScript is getting fast enough that we can't blame it anymore (the path rendering is the bottleneck, not the JS)
+ \item GPU is more power efficient than the CPU
+\end{itemize}
+
+Results show the extension is faster than almost every renderer it was compared with for almost every test image.
+
+Comparisons to other attempts:
+\begin{itemize}
+ \item Cairo and Glitz \cite{nilsson2004glitz} (abandoned)
+	\item Direct2D from Microsoft uses the CPU to tessellate trapezoids and then renders these on the GPU
+ \item Skia in Android/Chrome uses CPU but now has Ganesh which is also hybrid CPU/GPU
+ \item Khronos Group created OpenVG\cite{rice2008openvg} with several companies creating hardware units to implement the standard. Performance is not as good as ``what we report''
+\end{itemize}
+
+
+\section{A Multiple Precision Binary Floating Point Library With Correct Rounding \cite{fousse2007mpfr}}
+
+This is what is now the GNU MPFR C library; it has packages in Debian wheezy.
+
+The library was motivated by the lack of existing arbitrary precision libraries which conformed to IEEE rounding standards.
+Examples include Mathematica, GNU MP (which this library is actually built upon), and Maple (an exception, but buggy).
+
+TODO: Read up on IEEE rounding to better understand the first section
+
+Data:
+\begin{itemize}
+ \item $(s, m, e)$ where $e$ is signed
+ \item Precision $p$ is number of bits in $m$
+ \item $\frac{1}{2} \leq m < 1$
+ \item The leading bit of the mantissa is always $1$, but it is stored explicitly rather than implied
+ \item There are no denormalised numbers
+ \item Mantissa is stored as an array of ``limbs'' (unsigned integers) as in GMP
+\end{itemize}
+
+The paper includes performance comparisons with several other libraries, and a list of literature using the MPFR (the dates indicating that the library has been used reasonably widely before this paper was published).
+
+\section{Lecture Notes on the Status of IEEE Standard 754 for Binary Floating-Point Arithmetic \cite{kahan1996ieee754}}
+
+11 years after the IEEE 754 standard became official, Kahan discusses various misunderstood features of the standard and laments the failure of compilers and microprocessors to conform to some of these.
+
+I am not sure how relevant these complaints are today, but it makes for interesting reading as Kahan is clearly very passionate about the need to conform \emph{exactly} to IEEE 754.
+
+Issues considered are: Rounding rules, Exception handling and NaNs (eg: The payload of NaNs is not used in any software Kahan is aware of), Bugs in compilers (mostly Fortran) where an expression contains floats of different precisions (the compiler may attempt to optimise an expression resulting in a failure to round a double to a single), Infinity (which is unsupported by many compilers though it is supported in hardware)...
+
+
+An example is this Fortran compiler ``nasty bug'' where the compiler replaces $s$ with $x$ in line 4 and thus a rounding operation is lost.
+\begin{lstlisting}[language=Fortran, basicstyle=\ttfamily]
+      real(8) :: x, y     ! double precision (or extended as real(12))
+      real(4) :: s, t     ! single precision
+      s = x               ! s should be rounded
+      t = (s - y) / (...) ! Compiler incorrectly replaces s with x in this line
+\end{lstlisting}
+
+\subsection{The Baleful Influence of Benchmarks \cite{kahan1996ieee754} pg 20}
+
+This section discusses the tendency to rate hardware (or software) on their speed performance and neglect accuracy.
+
+Is this complaint still relevant now when we consider the eagerness to perform calculations on the GPU?
+I.e.\ doing the coordinate transforms on the GPU is faster, but less accurate (see our program).
+
+Benchmarks need to be well designed when considering accuracy; how do we know an inaccurate computer hasn't gotten the right answer for a benchmark by accident?
+
+A proposed benchmark for determining the worst case rounding error is given. This is based around computing the roots to a quadratic equation.
+A quick scan of the source code for paranoia does not reveal such a test.
+
+As we needed benchmarks for CPUs perhaps we need benchmarks for GPUs. The GPU Paranoia paper\cite{hillesland2004paranoia} is the only one I have found so far.
+
+\section{Prof W. Kahan's Web Pages \cite{kahanweb}}
+
+William Kahan, architect of the IEEE 754-1985 standard, creator of the program ``paranoia'', the ``Kahan Summation Algorithm'' and contributor to the revised standard.
+
+Kahan's web page has more examples of errors in floating point arithmetic (and places where things have not conformed to the IEEE 754 standard) than you can poke a stick at.
+
+Wikipedia's description is pretty accurate: ``He is an outspoken advocate of better education of the general computing population about floating-point issues, and regularly denounces decisions in the design of computers and programming languages that may impair good floating-point computations.''
+
+Kahan's articles are written with almost religious zeal but they are backed up by examples and results, a couple of which I have confirmed.\footnote{One that doesn't work is an example of weird arithmetic in Excel 2000 when repeated in LibreOffice Calc 4.1.5.3} This is the unpublished work. I haven't read any of the published papers yet.
+
+The articles are sporadically filled with lamentation for the decline of experts in numerical analysis (and error), which is somewhat ironic; if there were no IEEE 754 standard letting any man/woman and his/her dog write floating point arithmetic and expect platform independent results\footnote{Even if, as Kahan's articles often point out, platforms aren't actually following IEEE 754}, this decline would probably not have happened.
+
+These examples would be of real value if the topic of the project were on accuracy of numerical operations. They also explain some of the more bizarre features of IEEE 754 (in a manner attacking those who dismiss these features as being too bizarre to care about of course).
+
+It's somewhat surprising he hasn't written anything (that I could see from scanning the list) about the lack of IEEE in PDF/PostScript (until Adobe Reader 6) and further that the precision is only binary32.
+
+I kind of feel really guilty saying this, but since our aim is to obtain arbitrary scaling, it will be inevitable that we break from the IEEE 754 standard. Or at least, I (Sam) will. Probably. Also there is no way I will have time to read and understand the thousands of pages that Kahan has written.
+
+Therefore we might end up doing some things Kahan would not approve of.
+
+Still this is a very valuable reference to go in the overview of floating point arithmetic section.
\chapter{General Notes}
+\section{The DOM Model}
+
+A document is modelled as a tree. The root of the tree is the document. This has nodes of varying types. Some nodes may have children, attributes, and data.
+
+\section{Floating-Point Numbers\cite{HFP,goldberg1991whatevery,goldberg1992thedesign,priest1991algorithms}}
+
+A set of FP numbers is characterised by:
+\begin{enumerate}
+	\item Radix (base) $\beta \geq 2$
+	\item Precision $p \geq 2$ (``number of significant digits'')
+	\item Two ``extremal'' exponents $e_{min} < 0 < e_{max}$ (generally; the $0$ isn't strictly required)
+\end{enumerate}
+
+Numbers are represented by {\bf integers}: $(M, e)$ such that $x = M \times \beta^{e - p + 1}$
+
+Require: $|M| \leq \beta^{p}-1$ and $e_{min} \leq e \leq e_{max}$.
+
+Representations are not unique; the set of equivalent representations of a number is called a cohort.
+
+$\beta^{e-p+1}$ is the quantum, $e-p+1$ is the quantum exponent.
+
+Alternate representation: $(s, m, e)$ such that $x = (-1)^s \times m \times \beta^{e}$, where
+$m$ is called the significand, mantissa, or fractional part, depending on what you read.
+
+
+
+
+
\section{Rounding Errors}
+
They happen. There is the ULP (unit in the last place) and I don't mean a political party.
TODO: Probably say something more insightful. Other than ``here is a graph that shows errors and we blame rounding''.
-Results of \verb/calculatepi/
+\subsection{Results of calculatepi}
+
+We can calculate $\pi$ by numerically evaluating the integral:
+\begin{align*}
+ \int_0^1 \left(\frac{4}{1+x^2}\right) dx &= \pi
+\end{align*}
+
+Results with the Simpson method:
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{figures/calculatepi.pdf}