X-Git-Url: https://git.ucc.asn.au/?p=ipdf%2Fsam.git;a=blobdiff_plain;f=chapters%2FBackground.tex;h=ee8da2115fea6c5f82b115e4aaaa401690033f1c;hp=dbb6f26dfe16aa9977abd309180a4f5e279ea160;hb=8d79c1d6010c625b9f0583c51a2511f0f9adeb71;hpb=b1c5fe49ec552755fd19073c3f91c8e9866d6938
diff --git a/chapters/Background.tex b/chapters/Background.tex
index dbb6f26..ee8da21 100644
--- a/chapters/Background.tex
+++ b/chapters/Background.tex
@@ -1,228 +1,165 @@
\chapter{Literature Review}\label{Background}
\rephrase{0. Here is a brilliant summary of the sections below}
+The first half of this chapter will be devoted to documents themselves, including: the representation and displaying of graphics primitives\cite{computergraphics2}, and how collections of these primitives are represented in document formats, focusing on widely used standards\cite{plrm, pdfref17, svg20111.1}.
This chapter provides an overview of relevant literature. The areas of interest can be broadly grouped into two largely separate categories; Documents and Number Representations.
+We will find that although there has been a great deal of research into the rendering, storing, editing, manipulation, and extension of document formats, modern standards are content to specify at best single precision IEEE754 floating point arithmetic.
The first half of this chapter will be devoted to documents themselves, including: the representation and displaying of graphics primitives\cite{computergraphics2}, and how collections of these primitives are represented in document formats, focusing on well known standards currently in use\cite{plrm, pdfref17, svg20111.1}.

We will find that although there has been a great deal of research into the rendering, storing, editing, manipulation, and extension of document formats, these widely used document standards are content to specify at best a single precision IEEE754 floating point number representation.

The research on arbitrary precision arithmetic applied to documents is very sparse; however arbitrary precision arithmetic itself is a very active field of research. Therefore, the second half of this chapter will be devoted to considering the IEEE754 standard, its advantages and limitations, and possible alternative number representations to allow for arbitrary precision arithmetic.
+The research on arbitrary precision arithmetic applied to documents is rather sparse; however arbitrary precision arithmetic itself is a very active field of research. Therefore, the second half of this chapter will be devoted to considering fixed precision floating point numbers as specified by the IEEE754 standard, possible limitations in precision, and alternative number representations for increased or arbitrary precision arithmetic.
In Chapter \ref{Progress}, we will discuss our findings so far with regards to arbitrary precision arithmetic applied to document formats, and expand upon the goals outlined in Chapter \ref{Proposal}.

\pagebreak

\section{Raster and Vector Images}\label{Raster and Vector Images}
\input{chapters/Background_RastervsVector}
\section{Rasterising Vector Images}\label{Rasterising Vector Images}
+\section{Rendering Vector Images}\label{Rasterising Vector Images}
Throughout Section \ref{vectorvsrastergraphics} we were careful to refer to ``modern'' display devices, which are raster based. It is of some historical significance that vector display devices were popular during the 70s and 80s, and papers oriented towards drawing on these devices can be found\cite{brassel1979analgorithm}. Whilst curves can be drawn at high resolution on vector displays, a major disadvantage was shading; by the early 90s the vast majority of computer displays were raster based\cite{computergraphics2}.
+Hearn and Baker's textbook ``Computer Graphics''\cite{computergraphics2} gives a comprehensive overview of graphics from physical display technologies through fundamental drawing algorithms to popular graphics APIs. This section will examine algorithms for drawing two dimensional geometric primitives on raster displays as discussed in ``Computer Graphics'' and the relevant literature. This section is by no means a comprehensive survey of the literature but intends to provide some idea of the computations which are required to render a document.
Hearn and Baker's textbook ``Computer Graphics''\cite{computergraphics2} gives a comprehensive overview of graphics from physical display technologies through fundamental drawing algorithms to popular graphics APIs. This section will examine algorithms for drawing two dimensional geometric primitives on raster displays as discussed in ``Computer Graphics'' and the relevant literature. Informal tutorials are abundant on the internet\cite{elias2000graphics}.
+It is of some historical significance that vector display devices were popular during the 70s and 80s, and papers oriented towards drawing on these devices can be found\cite{brassel1979analgorithm}. Whilst curves can be drawn at high resolution on vector displays, a major disadvantage was shading; by the early 90s the vast majority of computer displays were raster based\cite{computergraphics2}.
\subsection{Straight Lines}\label{Straight Lines}
+\subsection{Straight Lines}\label{Rasterising Straight Lines}
\input{chapters/Background_Lines}
\subsection{Spline Curves}\label{Spline Curves}
+\subsection{Spline Curves and B{\'e}ziers}\label{Spline Curves}
+\input{chapters/Background_Spline}
Splines are continuous curves formed from piecewise polynomial segments. A polynomial of degree $n$ is defined by $n+1$ constants $\{a_0, a_1, \ldots, a_n\}$ and:
\begin{align}
 y(x) &= \displaystyle\sum_{k=0}^n a_k x^k
\end{align}
+\subsection{Font Glyphs}\label{Font Rendering}
+\input{chapters/Background_Fonts}
+%\subsection{Shading}\label{Shading}
A straight line is simply a polynomial of first degree. Splines may be rasterised by sampling $y(x)$ at a number of points $x_i$ and rendering straight lines between $(x_i, y_i)$ and $(x_{i+1}, y_{i+1})$ as discussed in Section \ref{Straight Lines}. More direct algorithms for drawing splines based upon Bresenham and Wu's algorithms also exist\cite{citationneeded}.
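To illustrate the sampling approach, the following minimal C sketch (our own, not drawn from the literature) evaluates a polynomial segment with Horner's rule and emits the endpoints of the straight line segments which would then be drawn using an algorithm from Section \ref{Straight Lines}; the \verb/printf/ stands in for a real line drawing routine.
\begin{verbatim}
#include <stdio.h>

/* Evaluate y(x) = a[0] + a[1]*x + ... + a[n]*x^n using Horner's rule */
double poly_eval(const double *a, int n, double x)
{
    double y = a[n];
    for (int k = n - 1; k >= 0; --k)
        y = y * x + a[k];
    return y;
}

/* Approximate a polynomial segment on [x0, x1] by `steps` straight lines */
void sample_spline(const double *a, int n, double x0, double x1, int steps)
{
    double xp = x0, yp = poly_eval(a, n, x0);
    for (int i = 1; i <= steps; ++i) {
        double x = x0 + (x1 - x0) * i / steps;
        double y = poly_eval(a, n, x);
        printf("line (%f,%f) -> (%f,%f)\n", xp, yp, x, y); /* draw_line() */
        xp = x; yp = y;
    }
}
\end{verbatim}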
There are many different ways to define a spline. One approach is to specify ``knots'' on the spline and solve for the coefficients to generate a cubic spline ($n = 3$) passing through the points. B{\'e}ziers are a popular type of spline which can be created in GUI based graphics editors using several ``control points'', which themselves are generally not part of the curve.
+%\cite{brassel1979analgorithm}; %\cite{lane1983analgorithm}.
\subsubsection{Bezier Curves}
\input{chapters/Background_Bezier}
+\subsection{Compositing}\label{Compositing and the Painter's Model}
\subsection{Shading}
+%So far we have discussed techniques for rendering vector graphics primitives in isolation, with no regard to the overall structure of a document which may contain many thousands of primitives. A straight forward approach would be to render all elements sequentially to the display, with the most recently drawn pixels overwriting lower elements. Such an approach is particularly inconvenient for antialiased images where colours must appear to smoothly blur between the edge of a primitive and any drawn underneath it.
Algorithms for shading on vector displays involved drawing equally spaced lines within a region; this is limited both in the complexity of shading and the performance required to compute the lines\cite{brassel1979analgorithm}.
+Colour raster displays are based on an additive red-green-blue $(r,g,b)$ colour representation which matches the human eye's response to light\cite{computergraphics2}. In 1984, Porter and Duff introduced a fourth colour channel for rasterised images called the ``alpha'' channel, analogous to the transparency of a pixel\cite{porter1984compositing}. In compositing models, elements can be rendered separately, with the four colour channels of successively drawn elements being combined according to one of several possible operations.
On raster displays, shading is typically based upon Lane's algorithm of 1983\cite{lane1983analgorithm}, which is implemented on the GPU\cite{kilgard2012gpu}.
+In the ``painter's model'' as described by the SVG standard, the ``over'' operation is used when rendering one primitive over another\cite{svg20111.1}.
+Given an existing pixel $P_1$ with colour values $(r_1, g_1, b_1, a_1)$ and a pixel $P_2$ with colours $(r_2, g_2, b_2, a_2)$ to be painted over $P_1$, the resultant pixel $P_T$ has colours given by:
+\begin{align}
+ a_T &= 1 - (1-a_1)(1-a_2) \\
+ r_T &= (1 - a_2)r_1 + r_2 \quad \text{(similarly for $g_T$ and $b_T$)}
+\end{align}
+It should be apparent that alpha values of $1$ correspond to an opaque pixel; that is, when $a_2 = 1$ the resultant pixel $P_T$ is the same as $P_2$.
+When the final pixel is actually drawn on an RGB display, the $(r, g, b)$ components are $(r_T/a_T, g_T/a_T, b_T/a_T)$.
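+As a concrete illustration, a minimal C sketch of the ``over'' operation follows; this is our own rendering of the equations above (the \verb/pixel/ type and function names are invented for the example), with colour channels assumed to be stored premultiplied by alpha.
+\begin{verbatim}
+typedef struct { float r, g, b, a; } pixel; /* premultiplied colours */
+
+/* Composite p2 "over" p1 according to the painter's model */
+pixel over(pixel p1, pixel p2)
+{
+    pixel t;
+    t.a = 1 - (1 - p1.a) * (1 - p2.a);
+    t.r = (1 - p2.a) * p1.r + p2.r;
+    t.g = (1 - p2.a) * p1.g + p2.g;
+    t.b = (1 - p2.a) * p1.b + p2.b;
+    return t;
+}
+/* For display, recover (r,g,b) as (t.r/t.a, t.g/t.a, t.b/t.a) for t.a > 0 */
+\end{verbatim}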
\rephrase{6. Sort of starts here... or at least background does}
+The PostScript and PDF standards, as well as the OpenGL API, also use a painter's model for compositing. However, PostScript does not include an alpha channel, so $P_T = P_2$ always\cite{plrm}. Figure \ref{SVG} illustrates the painter's model for partially transparent shapes as they would appear in both the SVG and PDF models.
\subsection{Rendering Vector Graphics on the GPU}
+\subsection{Rasterisation on the CPU and GPU}
Traditionally, vector graphics have been rasterized by the CPU before being sent to the GPU for drawing\cite{kilgard2012gpu}. Lots of people would like to change this \cite{worth2003xr, loop2007rendering, rice2008openvg, kilgard2012gpu, green2007improved} ... \rephrase{All of these are things David found except kilgard which I thought I found and then realised David already had it :S}
+Traditionally, vector images have been rasterised by the CPU before being sent to a specialised Graphics Processing Unit (GPU) for drawing\cite{computergraphics2}. Rasterisation of simple primitives such as lines and triangles has been supported directly by GPUs for some time through the OpenGL standard\cite{openglspec}. However, complex shapes (including those based on B{\'e}zier curves such as font glyphs) must either be rasterised entirely by the CPU or decomposed into simpler primitives that the GPU itself can directly rasterise. There is a significant body of research devoted to improving the performance of rendering such primitives using the latter approach, mostly based around the OpenGL API\cite{robart2009openvg, leymarie1992fast, frisken2000adaptively, green2007improved, loop2005resolution, loop2007rendering}. Recently, Mark Kilgard of the NVIDIA Corporation described an extension to OpenGL for NVIDIA GPUs capable of drawing and shading vector paths\cite{kilgard2012gpu,kilgard300programming}. From this development it seems that rasterisation of vector graphics may eventually be performed entirely on the GPU.
\rephrase{2. Here are the ways documents are structured ... we got here eventually}
+It is not entirely clear how well supported the IEEE754 standard for floating point computation (which we will discuss in Section \ref{}) is amongst GPUs\footnote{Informal technical articles are prevalent on the internet; e.g. regarding the Dolphin Wii GPU emulator: \url{https://dolphin-emu.org/blog} (accessed 2014-05-22)}. Although the OpenGL API does use IEEE754 number representations, research by Hillesland and Lastra in 2004 suggested that many GPUs were not internally compliant with the standard\cite{hillesland2004paranoia}. %Arbitrary precision arithmetic is provided by many software libraries for CPU based calculations
\section{Document Representations}
+ \pagebreak
+\section{Document Representations}\label{Document Representations}
\rephrase{The file format can be either human readable\footnote{For some definition of human and some definition of readable} or binary\footnote{So, our viewer is basically a DOM style but stored in a binary format}. Can also be compressed or not. Here we are interested in how the document is interpreted or traversed in order to produce graphics output.}
+The representation of information, particularly for scientific purposes, has changed dramatically over the last few decades. For example, Brassel's 1979 paper referenced earlier\cite{brassel1979analgorithm} was produced on a mechanical typewriter. Although the paper discusses an algorithm for shading on computer displays, the figures illustrating this algorithm were not generated by a computer but drawn by Brassel's assistant. In contrast, modern papers such as Barnes et al.'s 2013 paper on embedding 3D images in PDF documents\cite{barnes2013embedding} can themselves be an interactive proof of concept.
\subsection{Interpreted Model}
+Hayes' 2012 article ``Pixels or Perish'' discusses the recent history and current state of the art in documents for scientific publications\cite{hayes2012pixels}. Hayes argues that there are currently two different approaches to representing a document: as a sequence of static sheets of paper (Programmed Documents) or as a dynamic and interactive way to convey information, using the Document Object Model. We will now explore these two approaches and the extent to which they overlap.
\rephrase{Did I just invent that terminology or did I read it in a paper? Is there actually existing terminology for this that sounds similar enough to ``Document Object Model'' for me to compare them side by side?}
\begin{itemize}
 \item This model treats a document as the source code program which produces graphics
 \item Arose from the desire to produce printed documents using computers (which were still limited to text only displays).
 \item Typed by hand or (later) generated by a GUI program
 \item PostScript: largely superseded by PDF on the desktop but still used by printers\footnote{Desktop pdf viewers can still cope with PS, but I wonder if a smartphone pdf viewer would implement it?}
 \item \TeX: predates PostScript! {\LaTeX} is being used to create this very document and until now I didn't even have it here!
 \begin{itemize}
 \item I don't really want to go down the path of investigating the billion steps involved in getting \LaTeX into an actually viewable format
 \item There are interpreters (usually WYSIWYG editors) for \LaTeX though
 \item Maybe if \LaTeX were more popular there would be desktop viewers that converted \LaTeX directly into graphics
 \end{itemize}
 \item Potential for dynamic content, interactivity; dynamic PostScript, enhanced PostScript
+\subsection{Programmed Documents}
+\input{chapters/Background_Interpreted}
+
+\pagebreak
+\subsection{Document Object Model}\label{Document Object Model}
+\input{chapters/Background_DOM}
 \item Scientific Computing: Mathematica, Matlab, IPython Notebook; the document and the code that produces it are stored together
 \item Problems with security: Turing complete, can be exploited easily
\end{itemize}
+\subsection{The Portable Document Format}
\subsection{Crippled Interpreted Model}
+Adobe's Portable Document Format (PDF) is currently used almost universally for sharing documents; the ability to export or print to PDF can be found in most graphical document editors and even some plain text editors\cite{cheng2002finally}.
\rephrase{I'm pretty sure I made that one up}
+Hayes describes PDF as ``... essentially 'flattened' PostScript; it's what's left when you remove all the procedures and loops in a program, replacing them with sequences of simple drawing commands.''\cite{hayes2012pixels}. Consultation of the PDF 1.7 standard shows that this statement does not give a complete picture: despite being based on the Adobe PostScript model of a document as a series of ``pages'' to be printed by executing sequential instructions, from version 1.5 the PDF standard began to borrow some ideas from the Document Object Model. For example, interactive elements such as forms may be included as XHTML objects and styled using CSS. ``Actions'' are objects used to modify the data structure dynamically. In particular, it is possible to include Javascript Actions. Adobe defines the API for Javascript actions separately from the PDF standard\cite{js_3d_pdf}. There is some evidence in the literature of attempts to exploit these features, with mixed success\cite{barnes2013embedding, hayes2012pixels}.
\begin{itemize}
 \item PDF is PostScript but without the Turing Completeness
 \item Solves security issues, more efficient
\end{itemize}
+%\subsection{Scientific Computation Packages}
\subsection{Document Object Model}
\begin{itemize}
 \item DOM = Tree of nodes; node may have attributes, children, data
 \item XML (SGML) is the standard language used to represent documents in the DOM
 \item XML is plain text
 \item SVG is a standard for a vector graphics language conforming to XML (ie: a DOM format)
 \item CSS style sheets represent more complicated styling on objects in the DOM
\end{itemize}
+\section{Precision Required by Document Formats}
\subsection{Blurring the Line: Javascript}
+We briefly summarise the requirements of the standards discussed so far with regard to the precision of mathematical operations.
\begin{itemize}
 \item The document is expressed in DOM format using XML/HTML/SVG
 \item A Javascript program is run which can modify the DOM
 \item At a high level this may be simply changing attributes of elements dynamically
 \item For low level control there is canvas2D and even WebGL which gives direct access to OpenGL functions
 \item Javascript can be used to make a HTML/SVG interactive
 \begin{itemize}
 \item Overlooking the fact that the SVG standard already allows for interactive elements...
 \end{itemize}
 \item Javascript is now becoming used even in desktop environments and programs (Windows 8, GNOME 3, Cinnamon, Game Maker Studio) ({\bf shudder})
 \item There are also a range of papers about including Javascript in PDF ``Pixels or Perish'' being the only one we have actually read\cite{hayes2012pixels}
 \begin{itemize}
 \item I have no idea how this works; PDF is based on PostScript... it seems very circular to be using a programming language to modify a document that is modelled on being a (non turing complete) program
 \item This is yet more proof that people will converge towards solutions that ``work'' rather than those that are optimal or elegant
 \item I guess it's too much effort to make HTML look like PDF (or vice versa) so we could phase one out
 \end{itemize}
\end{itemize}
+\subsection{PostScript}
+The PostScript reference describes a ``Real'' object for representing coordinates and values as follows: ``Real objects approximate mathematical real numbers within a much larger interval, but with limited precision; they are implemented as floating-point numbers''\cite{plrm}. There is no reference to the precision of mathematical operations, but the implementation limits \emph{suggest} an ``approximate'' range of $\pm10^{38}$, with the smallest values not rounded to zero being ``approximately'' $\pm10^{-38}$.
\subsection{Why do we still use static PDFs?}
+\subsection{PDF}
+PDF defines ``Real'' objects in a similar way to PostScript, but suggests a range of $\pm3.403\times10^{38}$ and smallest non-zero values of $\pm1.175\times10^{-38}$\cite{pdfref17}. A note in the PDF 1.7 manual mentions that Acrobat 6 now uses IEEE754 single precision floats, but ``previous versions used 32-bit fixed point numbers'' and ``... Acrobat 6 still converts floating-point numbers to fixed point for some components''.
Despite their limitations, we still use static, boring old PDFs. Particularly in scientific communication.
\begin{itemize}
 \item They are portable; you can write an amazing document in Mathematica/Matlab but it can only be viewed by others who also have the software
 \item Scientific journals would need to adapt to other formats and this is not worth the effort
 \item No network connection is required to view a PDF (although DRM might change this?)
 \item All resources are stored in a single file; a website is stored across many separate files (call this a ``distributed'' document format?)
 \item You can create PDFs easily using desktop processing WYSIWYG editors; WYSIWYG editors for web based documents are worthless due to the more complex content
 \item Until Javascript becomes part of the PDF standard, including Javascript in PDF documents will not become widespread
 \item Once you complicate a PDF by adding Javascript, it becomes more complicated to create; it is simply easier to use a series of static figures than to embed a shader in your document. Even for people that know WebGL.
\end{itemize}
+\subsection{\TeX\ and METAFONT}
\rephrase{3. Here are the ways document standards specify precision (or don't)}
+In ``The METAFONT book'' Knuth appears to describe coordinates as fixed point numbers: ``The computer works internally with coordinates that are integer multiples of $\frac{1}{65536} \approx 0.00002$ of the width of a pixel''\cite{knuth1983metafont}. There is no mention of precision in ``The \TeX book''. In 2007 Beebe claimed that {\TeX} uses a $14.16$ fixed point encoding, and that this was due to the lack of standardised floating point arithmetic on computers at the time; a problem that IEEE754 was designed to solve\cite{beebe2007extending}. Beebe also suggests that {\TeX} and METAFONT could now be modified to use IEEE754 arithmetic.
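+As a rough illustration of fixed point arithmetic (our own sketch, loosely imitating METAFONT's scaled integers rather than reproducing them), values can be stored as integer multiples of $\frac{1}{65536}$; addition is then plain integer addition, whilst multiplication requires a corrective shift:
+\begin{verbatim}
+#include <stdint.h>
+
+typedef int32_t scaled; /* value represented = s / 65536.0 */
+
+scaled scaled_from_double(double x) { return (scaled)(x * 65536.0); }
+double scaled_to_double(scaled s)   { return s / 65536.0; }
+
+/* Addition is exact until overflow */
+scaled scaled_add(scaled a, scaled b) { return a + b; }
+
+/* The product of two multiples of 2^-16 is a multiple of 2^-32, so
+ * the 64 bit intermediate must be shifted back down (truncating) */
+scaled scaled_mul(scaled a, scaled b)
+{
+    return (scaled)(((int64_t)a * (int64_t)b) >> 16);
+}
+\end{verbatim}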
\section{Precision in Modern Document Formats}
+\subsection{SVG}
\rephrase{All the above is very interesting and provides important context, but it is not actually directly related to the problem of infinite precision which we are going to try and solve. Sorry to make you read it all.}
+The SVG standard specifies a minimum precision equivalent to that of ``single precision floats'' (presumably referring to IEEE754) with a range of \verb/-3.4e+38F/ to \verb/+3.4e+38F/, and states ``It is recommended that higher precision floating point storage and computation be performed on operations such as coordinate system transformations to provide the best possible precision and to prevent round-off errors.''\cite{svg20111.1} An SVG Viewer may refer to itself as ``High Quality'' if it uses a minimum of ``double precision'' floats.
+\subsection{Javascript}
+We include Javascript here due to its relation to the SVG, HTML5 and PDF standards.
+According to the ECMA-262 standard, ``The Number type has exactly 18437736874454810627 (that is, $2^{64}-2^{53}+3$) values,
+representing the double-precision 64-bit format IEEE 754 values as specified in the IEEE Standard for Binary Floating-Point Arithmetic''\cite{ecma262}.
+The Number type does differ slightly from IEEE754 in that there is only a single valid representation of ``Not a Number'' (NaN). ECMA-262 does not define an ``integer'' representation.
+
\begin{itemize}
 \item Implementations of PostScript and PDF must by definition restrict themselves to IEEE binary32 ``single precision''\footnote{The original IEEE754 defined single, double and extended precisions; in the revision these were renamed to binary32, binary64 and binary128 to explicitly state the base and number of bits}
 floating point number representations in order to conform to the standards\cite{plrm, pdfref17}.
 \item Implementations of SVG are by definition required to use IEEE binary32 as a {\bf minimum}. ``High Quality'' SVG viewers are required to use at least IEEE binary64.\cite{svg20111.1}
 \item Numerical computation packages such as Mathematica and Maple use arbitrary precision floats
 \begin{itemize}
 \item Mathematica is not open source which is an issue when publishing scientific research (because people who do not fork out money for Mathematica cannot verify results)
 \item What about Maple? \cite{HFP} and \cite{fousse2007mpfr} both mention it being buggy.
 \item Octave and Matlab use fixed precision doubles
 \end{itemize}
\end{itemize}
+\section{Number Representations}
The use of IEEE binary32 floats in the PostScript and PDF standards is not surprising if we consider that these documents are oriented towards representing static pages. They don't actually need higher precision to do this; 32 bits is more than sufficient for A4 paper.
\rephrase{4. Here is IEEE754 which is what these standards use}
+\subsection{Integers and Fixed Point Numbers}
\section{Representation of Numbers}
Although this project has been motivated by a desire for more flexible document formats, the fundamental source of limited precision in vector document formats is the restriction to IEEE floating point numbers for representation of coordinates.
Whilst David Gow will be focusing on structures \rephrase{and the use of multiple coordinate systems} to represent a document so as to avoid or reduce these limitations\cite{proposalGow}, the focus of our own research will be \rephrase{increased precision in the representation of real numbers so as to get away with using a single global coordinate system}.

\subsection{The IEEE Standard}

\subsection{Floating Point Number Representations}
+\subsection{Floating Points}
+
+A floating point number $x$ is commonly represented by a tuple of values $(s, e, m)$ in base $B$ as\cite{HFP, ieee2008754}:
\begin{align*}
x &= (1)^{s} \times m \times B^{e}
\end{align*}
Typically $B = 2$, although IEEE also defines decimal representations for $B = 10$; these are useful in financial software\cite{ieee2008754}.

\rephrase{Aside: Are decimal representations for a document format eg: CAD also useful because you can then use metric coordinate systems?}


\subsubsection{Precision}

The floats map an infinite set of real numbers onto a discrete set of representations.



\rephrase{Figure: 8 bit ``minifloats'' (all 255 of them) clearly showing the ``precision vs range'' issue}

The most a result can be rounded in conversion to a floating point number is the units in last place; $m_{N} \times B^{e}$.

\rephrase{Even though that paper that claims double is the best you will ever need because the error can be as much as the size of a bacterium relative to the distance to the moon}\cite{} \rephrase{there are many cases where increased number of bits will not save you}.\cite{HFP}
+Where $s$ is the sign and may be zero or one, $m$ is commonly called the ``mantissa'' and $e$ is the exponent. Whilst $e$ is an integer in some range $\pm e_{max}$, the mantissa $m$ is actually a fixed point value in the range $0 < m < B$. The name ``floating point'' refers to the equivalence of the $\times B^e$ operation to a shifting of the ``fixed point'' along the mantissa.
+For example, the value $7.25$ can be expressed as:
+\begin{enumerate}
+ \item Binary ($B = 2$), with $e = 2$: $7.25 = (-1)^0 \times 1.8125 \times 2^{2}$, where the mantissa $1.8125 = 1.1101_2$
+ \item Decimal ($B = 10$), with $e = 0$: $7.25 = (-1)^0 \times 7.25 \times 10^{0}$
+\end{enumerate}
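+The binary encoding can be made concrete with a short C sketch of our own, which assumes that the C \verb/float/ type is an IEEE754 binary32 value and unpacks its sign, exponent and mantissa fields:
+\begin{verbatim}
+#include <stdio.h>
+#include <stdint.h>
+#include <string.h>
+
+int main(void)
+{
+    float f = 7.25f;
+    uint32_t bits;
+    memcpy(&bits, &f, sizeof bits);   /* reinterpret the float's bits */
+
+    uint32_t s = bits >> 31;          /* 1 sign bit */
+    uint32_t e = (bits >> 23) & 0xFF; /* 8 exponent bits, biased by 127 */
+    uint32_t m = bits & 0x7FFFFF;     /* 23 mantissa bits, implied leading 1 */
+
+    /* 7.25 = (-1)^0 x 1.8125 x 2^2, so e = 2 + 127 = 129 */
+    printf("s=%u e=%u (unbiased %d) m = 1 + %u/2^23 = %f\n",
+           s, e, (int)e - 127, m, 1.0 + m / 8388608.0);
+    return 0;
+}
+\end{verbatim}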
\rephrase{5. Here are limitations of IEEE754 floating point numbers on compatible hardware}
+The choice of base $B = 2$ closely matches the nature of modern hardware. It has also been found that this base in general gives the smallest rounding errors\cite{HFP}. Early computers had in fact used a variety of representations including $B=3$ or even $B=7$\cite{goldberg1991whatevery}, and the revised IEEE754 standard specifies a decimal representation $B = 10$ intended for use in financial applications\cite{ieee754std2008}. From now on we will restrict ourselves to considering base 2 floats.
\subsection{Limitations Imposed By CPU}
+Figure \ref{minifloat.pdf} shows the positive real numbers which can be represented exactly by an 8 bit floating point number encoded in the IEEE754 format, and the distance between successive floating point numbers. We show two encodings using (1,2,5) and (1,3,4) bits to encode (sign, exponent, mantissa) respectively.
CPUs are restricted in their representation of floating point numbers by the IEEE standard.
+For each distinct value of the exponent, the successive floating point representations lie on a straight line with constant slope. As the exponent increases, larger values are represented, but the distance between successive values increases\footnote{A plot of fixed point numbers or integers (which we omit for space considerations) would show points lying on a straight line with a constant slope between points}.
+In the graph of the difference between representations, a single isolated point should be visible; this is not an error, but is due to the greater discontinuity between the denormalised and normalised values ($e = 0$ and $1$ respectively).
\subsection{Limitations Imposed By Graphics APIs and/or GPUs}
+\begin{figure}[H]
+ \centering
+ \includegraphics[width=0.8\textwidth]{figures/minifloat.pdf} \\
+ \includegraphics[width=0.8\textwidth]{figures/minifloat_diff.pdf}
+ \caption{The mapping of 8 bit floats to reals}\label{minifloat.pdf}
+\end{figure}
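+A figure of this kind is straightforward to reproduce; the C sketch below (our own) prints the positive values of the (1,3,4) encoding, assuming the usual IEEE754 exponent bias of $2^{3-1}-1 = 3$:
+\begin{verbatim}
+#include <stdio.h>
+#include <math.h>
+
+/* Positive values of a (1,3,4) bit IEEE754 style minifloat, bias 3.
+ * e = 0 encodes denormals; e = 7 is reserved for infinities and NaNs. */
+int main(void)
+{
+    for (int e = 0; e < 7; ++e)
+        for (int m = 0; m < 16; ++m) {
+            double v = (e == 0)
+                ? (m / 16.0) * pow(2, 1 - 3)       /* denormalised */
+                : (1 + m / 16.0) * pow(2, e - 3);  /* normalised */
+            printf("e=%d m=%2d -> %g\n", e, m, v);
+        }
+    return 0;
+}
+\end{verbatim}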
Traditionally algorithms for drawing vector graphics are performed on the CPU; the image is rasterised and then sent to the GPU for rendering\cite{}. Recently there has been a great deal of literature relating to implementation of algorithms such as bezier curve rendering\cite{} or shading\cite{} on the GPU. As it seems the trend is to move towards GPU
+\subsection{Floating Point Operations}
\rephrase{6. Here are ways GPU might not be IEEE754  This goes *somewhere* in here but not sure yet}
+Floating point operations can in principle be performed using integer operations, but specialised Floating Point Units (FPUs) are an almost universal component of modern processors\cite{kelley1997acmos}. The improvement of FPUs remains an active area of research, including: efficiency\cite{seidel2001onthe}; accuracy of operations\cite{dieter2007lowcost}; and even the adaptation of algorithms originally used in software for reducing the overall error of a sequence of operations\cite{kadric2013accurate}. In this section we will briefly describe the algorithms for floating point operations without focusing on the hardware implementation of these algorithms.
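+As a highly simplified sketch of one such algorithm (our own; it ignores signs, rounding, overflow and special values), addition of two positive base 2 floats with integer mantissas proceeds by aligning exponents, adding mantissas, and renormalising:
+\begin{verbatim}
+#include <stdint.h>
+
+typedef struct { uint64_t m; int e; } ufloat; /* value = m * 2^e, m > 0 */
+
+ufloat ufloat_add(ufloat a, ufloat b)
+{
+    /* Align both mantissas to the smaller exponent; a real FPU instead
+     * shifts one mantissa right and keeps guard bits for rounding */
+    while (a.e > b.e) { a.m <<= 1; a.e--; }
+    while (b.e > a.e) { b.m <<= 1; b.e--; }
+    ufloat s = { a.m + b.m, a.e };
+    /* Renormalise to at most 53 mantissa bits, truncating low bits */
+    while (s.m >= (1ull << 53)) { s.m >>= 1; s.e++; }
+    return s;
+}
+\end{verbatim}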
\begin{itemize}
 \item Internal representations are GPU dependent and may not match IEEE\cite{hillesland2004paranoia}
 \item OpenGL standards specify: binary16, binary32, binary64
 \item OpenVG aims to become a standard API for SVG viewers but the API only uses binary32 and hardware implementations may use less than this internally\cite{rice2008openvg}
 \item It seems that IEEE has not been entirely successful; although all modern CPUs and GPUs are able to read and write IEEE floating point types, many do not conform to the IEEE standard in how they represent floating point numbers internally.
\end{itemize}
+\subsection{Precision and Rounding}
+Real values which cannot be represented exactly in a floating point representation must be rounded to the nearest floating point value. The results of a floating point operation will in general be such values, and thus there is a possible rounding error in any floating point operation. Referring to Figure \ref{minifloat.pdf} it can be seen that the largest possible rounding error, or ``units in last place'' (ulp), is half the distance between successive floats; this means that rounding errors increase as the value to be represented increases. The IEEE754 standard specifies the rounding conventions for floating point arithmetic\cite{ieee754std2008}.
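+These rounding errors are easy to demonstrate; in the C sketch below (ours), the decimal constant $0.1$ has no exact binary representation, and $2^{53}+1$ exceeds the precision of an IEEE754 binary64 mantissa:
+\begin{verbatim}
+#include <stdio.h>
+
+int main(void)
+{
+    double a = 0.1 + 0.2;      /* each constant is rounded to a double */
+    printf("%.17g\n", a);      /* prints 0.30000000000000004 */
+    printf("%d\n", a == 0.3);  /* 0; the rounding errors do not cancel */
+
+    double big = 9007199254740992.0; /* 2^53 */
+    printf("%d\n", big + 1 == big);  /* 1; 2^53 + 1 rounds back to 2^53 */
+    return 0;
+}
+\end{verbatim}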
\rephrase{7. Sod all that, let's just use an arbitrary precision library (AND THUS WE FINALLY GET TO THE POINT)}
\subsection{Alternate Number Representations}
+Goldberg's assertively titled 1991 paper ``What Every Computer Scientist Should Know About Floating-Point Arithmetic''\cite{goldberg1991whatevery} provides a comprehensive overview of issues in floating point arithmetic and relates these to the requirements of the IEEE754-1985 standard\cite{ieee754std1985}. More recently, after the release of the revised IEEE754 standard in 2008\cite{ieee754std2008}, the textbook ``Handbook of Floating-Point Arithmetic'' has been published, which provides a thorough review of the literature relating to floating point arithmetic in both software and hardware\cite{HFP}.
\rephrase{They exist\cite{HFP}}.
+William Kahan, one of the architects of the IEEE754 standard in 1985 and a contributor to its revision in 2008, has also published many articles on his website explaining the more obscure features of the IEEE754 standard and calling out software which fails to conform to the standard\footnote{In addition to encodings and acceptable rounding errors, the standard also specifies ``exceptions''; mechanisms by which a program can detect an error such as division by zero. These are sometimes neglected, as in ECMA-262.}\cite{kahanweb, kahan1996ieee754}, as well as examples of the limitations of floating point computations\cite{kahan2007wrong}.
Do it all using MPFR\cite{}, she'll be right.
+\subsection{Arbitrary Precision Floating Point Numbers}
\rephrase{8. Here is a brilliant summary of sections 7 above}
+Fousse et al. describe MPFR, a C library for multiple precision floating point arithmetic with correctly rounded operations, extending the IEEE754 floating point representation to arbitrarily large mantissas\cite{fousse2007mpfr}.
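+A hedged example of the library's use follows, based on our reading of the MPFR documentation; it repeats the rounding demonstration above with a 256 bit mantissa:
+\begin{verbatim}
+#include <stdio.h>
+#include <mpfr.h>
+/* compile with: cc example.c -lmpfr -lgmp */
+
+int main(void)
+{
+    mpfr_t a, b;
+    mpfr_init2(a, 256); /* 256 bit mantissa instead of binary64's 53 */
+    mpfr_init2(b, 256);
+
+    mpfr_set_str(a, "0.1", 10, MPFR_RNDN); /* still rounded, but now */
+    mpfr_set_str(b, "0.2", 10, MPFR_RNDN); /* to 256 bits of precision */
+    mpfr_add(a, a, b, MPFR_RNDN);          /* a = a + b, round to nearest */
+
+    mpfr_printf("%.70Rf\n", a);
+    mpfr_clear(a);
+    mpfr_clear(b);
+    return 0;
+}
+\end{verbatim}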
Dear reader, thank you for your persistence in reading this mangled excuse for a Literature Review.
Hopefully we have brought the radically different areas of interest together in some sort of coherent fashion.
In the next chapter we will talk about how we have succeeded in rendering a rectangle. It will be fun. I am looking forward to it.