-\subsection{Compositing Digital Images\cite{porter1984compositing}}
-
-
-
-Porter and Duff's classic paper ``Compositing Digital Images'' lays the
-foundation for digital compositing today. By providing an ``alpha channel,''
-images of arbitrary shapes, including those with soft edges or sub-pixel
-coverage information, can be overlaid digitally, allowing objects to be
-rasterized separately without loss of quality.
-
-Pixels in digital images are usually represented as 3-tuples of
-(red, green, blue) components, nominally in the $[0, 1]$ range. In the
-Porter-Duff paper, pixels are instead stored as $(R,G,B,\alpha)$ 4-tuples,
-where alpha is the fractional coverage of the pixel: if an image
-covers only half of a given pixel, for example, its alpha value there would be $0.5$.
-
-To improve compositing performance, albeit at a possible loss of precision in
-some implementations, the red, green and blue channels are premultiplied by the
-alpha channel. This also simplifies the resulting arithmetic by having the
-colour channels and alpha channels use the same compositing equations.
-
-Several binary compositing operations are defined:
-\begin{itemize}
-\item over
-\item in
-\item out
-\item atop
-\item xor
-\item plus
-\end{itemize}
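With premultiplied alpha, every channel of every operator reduces to the same per-channel form. As an illustrative sketch (the function name and tuple representation are mine, not the paper's), the \emph{over} operator can be written as:

```python
def over(src, dst):
    """Porter-Duff 'over': composite a premultiplied-alpha src pixel onto dst.

    Pixels are (r, g, b, a) tuples with the colour channels already
    multiplied by alpha, so every channel uses the same equation:
    out = src + dst * (1 - src_alpha).
    """
    src_alpha = src[3]
    return tuple(s + d * (1.0 - src_alpha) for s, d in zip(src, dst))

# A half-covered red pixel over an opaque blue background:
red_half = (0.5, 0.0, 0.0, 0.5)   # premultiplied: (1, 0, 0) scaled by 0.5
blue = (0.0, 0.0, 1.0, 1.0)
print(over(red_half, blue))        # (0.5, 0.0, 0.5, 1.0)
```

The other operators differ only in the coefficients applied to the two terms, which is precisely the simplification premultiplication buys.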
-
-The paper further provides some additional operations for implementing fades and
-dissolves, as well as for changing the opacity of individual elements in a
-scene.
-
-The method outlined in this paper is still the standard system for compositing
-and is implemented almost exactly by modern graphics APIs such as \texttt{OpenGL}. It is
-all but guaranteed that this is the method we will be using for compositing
-document elements in our project.
-
-\subsection{Bresenham's Algorithm: Algorithm for computer control of a digital plotter\cite{bresenham1965algorithm}}
-Bresenham's line drawing algorithm is a fast, high quality line rasterization
-algorithm which is still the basis for most (aliased) line drawing today. The
-paper, while originally written to describe how to control a particular plotter,
-is uniquely suited to rasterizing lines for display on a pixel grid.
-
-Lines drawn with Bresenham's algorithm must begin and end at integer pixel
-coordinates, though one can round or truncate the fractional part. In order to
-avoid multiplication or division in the algorithm's inner loop,
-
-The algorithm works by scanning along the long axis of the line, moving along
-the short axis when the error along that axis exceeds 0.5px. Because error
-accumulates linearly, this can be achieved by simply adding the per-pixel
-error (equal to (short axis/long axis)) until it exceeds 0.5, then incrementing
-the position along the short axis and subtracting 1 from the error accumulator.
-
-As this requires nothing but addition, it is very fast, particularly on the
-older CPUs used in Bresenham's time. Modern graphics systems will often use Wu's
-line-drawing algorithm instead, as it produces antialiased lines, taking
-sub-pixel coverage into account. Bresenham himself extended this algorithm to
-produce Bresenham's circle algorithm. The principles behind the algorithm have
-also been used to rasterize other shapes, including B\'{e}zier curves.
-
-\emph{GPU Rendering}\cite{loop2005resolution}, OpenVG implementation on GLES: \cite{oh2007implementation},
-\cite{robart2009openvg}
-
-\emph{Existing implementations of document format rendering}
-
-\subsection{Xr: Cross-device Rendering for Vector Graphics\cite{worth2003xr}}
-
-Xr (now known as Cairo) is an implementation of the PDF v1.4 rendering model,
-independent of the PDF or PostScript file formats, and is now widely used
-as a rendering API. In this paper, Worth and Packard describe the PDF v1.4 rendering
-model, and their PostScript-derived API for it.
-
-The PDF v1.4 rendering model is based on the original PostScript model, based around
-a set of \emph{paths} (and other objects, such as raster images) each made up of lines
-and B\'{e}zier curves, which are transformed by the ``Current Transformation Matrix.''
-Paths can be \emph{filled} in a number of ways, allowing for different handling of self-intersecting
-paths, or can have their outlines \emph{stroked}.
-Furthermore, paths can be painted with RGB colours and/or patterns derived from either
-previously rendered objects or external raster images.
-PDF v1.4 extends this to provide, amongst other features, support for layering paths and
-objects using Porter-Duff compositing\cite{porter1984compositing}, giving each painted path
-the option of having an $\alpha$ value and a choice of any of the Porter-Duff compositing
-methods.
-
-The Cairo library approximates the rendering of some objects (particularly curved objects
-such as splines) with a set of polygons. An \texttt{XrSetTolerance} function allows the user
-of the library to set an upper bound on the approximation error in fractions of device pixels,
-providing a trade-off between rendering quality and performance. The library developers found
-that setting the tolerance to greater than $0.1$ device pixels resulted in errors visible to the
-user.
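Cairo's internals differ, but the idea of bounding the approximation error can be sketched as recursive subdivision of a cubic B\'{e}zier curve until its control polygon is flat to within the tolerance (the function below is an illustrative sketch, not Cairo's implementation):

```python
def flatten_cubic(p0, p1, p2, p3, tol=0.1):
    """Approximate a cubic Bezier with line segments via recursive
    de Casteljau subdivision, stopping once both inner control points
    lie within `tol` of the chord p0-p3 (cf. Cairo's tolerance)."""
    def dist_to_line(p, a, b):
        # Perpendicular distance from point p to the line through a, b.
        dx, dy = b[0] - a[0], b[1] - a[1]
        length = (dx * dx + dy * dy) ** 0.5
        if length == 0.0:
            return ((p[0] - a[0]) ** 2 + (p[1] - a[1]) ** 2) ** 0.5
        return abs(dx * (a[1] - p[1]) - dy * (a[0] - p[0])) / length

    def mid(a, b):
        return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)

    if max(dist_to_line(p1, p0, p3), dist_to_line(p2, p0, p3)) <= tol:
        return [p0, p3]
    # de Casteljau: split the curve at t = 0.5 and recurse on both halves.
    p01, p12, p23 = mid(p0, p1), mid(p1, p2), mid(p2, p3)
    p012, p123 = mid(p01, p12), mid(p12, p23)
    centre = mid(p012, p123)
    left = flatten_cubic(p0, p01, p012, centre, tol)
    right = flatten_cubic(centre, p123, p23, p3, tol)
    return left + right[1:]
```

Tightening the tolerance yields more, shorter segments, trading performance for fidelity exactly as described above.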
-
-\subsection{Glitz: Hardware Accelerated Image Compositing using OpenGL\cite{nilsson2004glitz}}
-
-This paper describes the implementation of an \texttt{OpenGL} based rendering backend for
-the \texttt{Cairo} library.
-
-The paper describes how OpenGL's Porter-Duff compositing support maps naturally onto the Cairo/PDF v1.4
-rendering model. Similarly, traditional OpenGL (pre-version 3.0 core) supports a matrix stack
-of the same form as Cairo's.
-
-The ``Glitz'' backend emulates support for tiled, non-power-of-two patterns/textures when
-the hardware does not provide it.
-
-Glitz can render both triangles and trapezoids (which are formed from pairs of triangles).
-However, it cannot guarantee that the rasterization is pixel-precise, as OpenGL does not provide
-this consistently.
-
-Glitz also supports multi-sample anti-aliasing and, via shaders, convolution filters for
-raster image reads.
-
-Performance was much improved over software rasterization and over XRender-accelerated rendering
-on all except nVidia hardware. However, nVidia's XRender implementation did slow down significantly when
-some transformations were applied.
-
-
-
-\textbf{Also look at \texttt{NV\_path\_rendering}} \cite{kilgard2012gpu}
-
-\section{Floating-Point Precision}
-
-How floating-point works and what its behaviour is w/r/t range and precision
-\cite{goldberg1991whatevery}
-\cite{goldberg1992thedesign}
-
-Arb. precision exists
-
+While rasterization was traditionally done entirely in software, modern computers and mobile devices have hardware support for rasterizing
+some basic primitives (typically lines and triangles) designed for use in rendering 3D scenes. This hardware is usually programmed through an
+API such as \texttt{OpenGL}\cite{openglspec}.
+
+More complex shapes like B\'ezier curves can be rendered by combining the use of bitmapped textures (possibly using signed-distance
+fields\cite{leymarie1992fast}\cite{frisken2000adaptively}\cite{green2007improved}) with polygons approximating the curve's shape\cite{loop2005resolution}\cite{loop2007rendering}.
+
+Indeed, there are several implementations of entire vector graphics systems using OpenGL: OpenVG\cite{robart2009openvg} has been implemented on top of OpenGL ES\cite{oh2007implementation};
+the Cairo\cite{worth2003xr} library, based around the PostScript/PDF rendering model, has an OpenGL backend, ``Glitz''\cite{nilsson2004glitz}; and nVidia provide an SVG/PostScript GPU
+renderer\cite{kilgard2012gpu} as an OpenGL extension\cite{kilgard300programming}.
+
+
+\section{Numeric formats}
+
+On modern computer architectures, there are two basic number formats supported:
+fixed-width integers and \emph{floating-point} numbers. Typically, computers
+natively support integers of up to 64 bits, capable of representing all integers
+between $0$ and $2^{64} - 1$, inclusive\footnote{Most machines also support \emph{signed} integers,
+which have the same cardinality as their \emph{unsigned} counterparts, but which
+represent integers in the range $[-(2^{63}), 2^{63} - 1]$}.
+
+By introducing a fractional component (analogous to a decimal point), we can convert
+integers to \emph{fixed-point} numbers, which have a more limited range, but a fixed, greater
+precision. For example, a number in 4.4 fixed-point format would have four bits representing the integer
+component, and four bits representing the fractional component:
+\begin{equation}
+ \underbrace{0101}_\text{integer component}.\underbrace{1100}_\text{fractional component} = 5.75
+\end{equation}
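Concretely, a 4.4 fixed-point value is just an 8-bit integer reinterpreted as that integer divided by $2^4$. A quick sketch (the helper names are mine):

```python
FRAC_BITS = 4  # 4.4 format: 4 integer bits, 4 fractional bits

def from_fixed(raw):
    """Interpret an 8-bit raw value as a 4.4 fixed-point number."""
    return raw / (1 << FRAC_BITS)

def to_fixed(x):
    """Round a real number to the nearest representable 4.4 value."""
    return int(round(x * (1 << FRAC_BITS)))

print(from_fixed(0b0101_1100))   # 5.75, matching the example above
```

Arithmetic on such values reduces to ordinary integer arithmetic (with a re-scaling shift after multiplication), which is why fixed point was popular before hardware floating-point units were ubiquitous.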
+
+
+Floating-point numbers\cite{goldberg1992thedesign} are the binary equivalent of scientific notation:
+each number consists of an exponent ($e$) and a mantissa ($m$), with the represented value given by
+\begin{equation}
+	n = 2^{e} \times m
+\end{equation}
+
+The IEEE 754 standard\cite{ieee754std1985} defines several floating-point data types
+which are used\footnote{Many systems implement the IEEE 754 standard's storage formats,
+but do not implement arithmetic operations in accordance with the standard.} by most
+computer systems. The standard defines 32-bit (8-bit exponent, 23-bit mantissa, 1 sign bit) and
+64-bit (11-bit exponent, 52-bit mantissa, 1 sign bit) formats\footnote{The 2008
+revision to this standard\cite{ieee754std2008} adds some additional formats, but is
+less widely supported in hardware.}, which can store approximately 7 and 15 decimal digits
+of precision respectively.
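These fields can be inspected directly by reinterpreting a float's bits as an integer; as a sketch using Python's standard \texttt{struct} module (the function name is mine):

```python
import struct

def decode_float32(x):
    """Unpack the sign, biased exponent and stored mantissa fields
    of a 32-bit IEEE 754 float."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF      # 8-bit exponent, biased by 127
    mantissa = bits & 0x7FFFFF          # 23 stored mantissa bits
    return sign, exponent, mantissa

# 5.75 = 1.0111b * 2^2, so the biased exponent is 127 + 2 = 129 and the
# stored mantissa holds the fraction bits 0111 followed by zeros:
print(decode_float32(5.75))             # (0, 129, 3670016)
```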
+
+Floating-point numbers behave quite differently to integers or fixed-point numbers, as
+the representable numbers are not evenly distributed. Large numbers are stored to a lesser
+precision than numbers close to zero. This can present problems in documents when zooming in
+on objects far from the origin.
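The uneven spacing is easy to observe: the gap between adjacent representable doubles (the ``unit in the last place'') grows with magnitude, so an object's coordinates far from the origin can only be adjusted in correspondingly coarse steps.

```python
import math

# The gap between adjacent representable doubles grows in
# proportion to the magnitude of the number:
for x in (1.0, 1e6, 1e12):
    print(f"ulp({x}) = {math.ulp(x)}")
```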
+
+IEEE floating-point has some other interesting features as well, including values for negative zero,
+positive and negative infinity, the ``Not a Number'' (NaN) value, and \emph{denormal} values, which
+trade precision for range when dealing with very small numbers. Indeed, because NaN compares unequal
+even to itself, IEEE 754 floating-point equality does not form an equivalence relation, which can
+cause issues when not considered carefully.\cite{goldberg1991whatevery}
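Two of these quirks in a few lines:

```python
import math

nan = float("nan")
# NaN compares unequal to everything, including itself, so == over
# floats is not reflexive and hence not an equivalence relation:
print(nan == nan)                 # False

# Negative zero compares equal to positive zero despite having a
# distinct bit pattern; its sign is still observable via copysign:
print(-0.0 == 0.0)                # True
print(math.copysign(1.0, -0.0))   # -1.0
```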
+
+There also exist formats for storing numbers with arbitrary precision and/or range.
+Some programming languages support ``big integer''\cite{java_bigint} types which can
+represent any integer that can fit in the system's memory. Similarly, there are
+arbitrary-precision floating-point data types\cite{java_bigdecimal}\cite{boost_multiprecision}
+which can represent any number of the form
+\begin{equation}
+	\frac{n}{2^d} \quad n,d \in \mathbb{Z}
+\end{equation}
+These types are typically built from several native data types such as integers and floats,
+paired with custom routines implementing arithmetic primitives.\cite{priest1991algorithms}
+These, therefore, are likely slower than the native types they are built on.
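Python, for instance, ships both ideas in its standard library: its built-in integers are arbitrary-precision, and the \texttt{fractions} module stores an exact numerator/denominator pair (any rational, a superset of the $n/2^d$ form above). A quick sketch:

```python
from fractions import Fraction

# Python integers are arbitrary-precision, limited only by memory:
print(2 ** 200)

# Fraction arithmetic is exact, so decimal fractions that are
# inexact in binary floating point stay exact:
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
print(0.1 + 0.2 == 0.3)                                      # False
```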
+
+While GPUs have traditionally supported only some approximation of IEEE 754's 32-bit floats,
+modern graphics processors also support 16-bit\cite{nv_half_float} and 64-bit\cite{arb_gpu_shader_fp64}
+IEEE floats. Note, however, that some stages of the GPU pipeline support only a subset of these
+formats, so precision is likely to be truncated at some point before display.