X-Git-Url: https://git.ucc.asn.au/?p=ipdf%2Fsam.git;a=blobdiff_plain;f=chapters%2FBackground.tex;h=87d18cebee3a9caf81c1600a8dbdc8f8a034de2a;hp=605b4a2a025cb60abe9c7e426662215bd400ec12;hb=3cc6f72b6bbdde973827f4f3cd47563d240cc345;hpb=1626295286be2aeb81e7f29fb02b5630aa98bfa4;ds=sidebyside
diff --git a/chapters/Background.tex b/chapters/Background.tex
index 605b4a2..87d18ce 100644
--- a/chapters/Background.tex
+++ b/chapters/Background.tex
@@ -1,88 +1,218 @@
\chapter{Literature Review}\label{Background}
-This chapter will \rephrase{review the literature. It will also include some figures created by us from our test programs to aid with conceptual understanding of the literature.}
+The first half of this chapter will be devoted to documents themselves, including the representation and display of graphics primitives\cite{computergraphics2}, and how collections of these primitives are represented in document formats, focusing on widely used standards\cite{plrm, pdfref17, svg2011-1.1}.
-\rephrase{TODO: Decide exactly what figures to make, then go make them; so far I have some ideas for a few about Floating Point operations, but none about the other stuff}.
+We will find that although there has been a great deal of research into the rendering, storing, editing, manipulation, and extension of document formats, modern standards are content to specify at best single precision IEEE-754 floating point arithmetic.
-\rephrase{TODO: Actually (re)write this entire chapter}.
-\rephrase{????: Do I really want to make this go down to} \verb/\subsubsection/
+Research on arbitrary precision arithmetic applied to documents is sparse; however, arbitrary precision arithmetic itself is an active field of research. Therefore, the second half of this chapter will be devoted to considering fixed precision floating point numbers as specified by the IEEE-754 standard, possible limitations in precision, and alternative number representations for increased or arbitrary precision arithmetic.
-A paper by paper summary of the literature is also available at: \\ \url{http://szmoore.net/ipdf/documents/LiteratureNotes.pdf}.
+In Chapter \ref{Progress}, we will discuss our findings so far with regards to arbitrary precision arithmetic applied to document formats, and expand upon the goals outlined in Chapter \ref{Proposal}.
+\section{Raster and Vector Images}\label{Raster and Vector Images}
+\input{chapters/Background_Raster-vs-Vector}
-\rephrase{TODO: Actually make that readable or just remove the link}.
+\section{Rasterising Vector Images}\label{Rasterising Vector Images}
-\section{Document Formats}
+Throughout Section \ref{vector-vs-raster-graphics} we were careful to refer to ``modern'' display devices, which are raster based. It is of some historical significance that vector display devices were popular during the 1970s and 1980s, and papers oriented towards drawing on these devices can be found\cite{brassel1979analgorithm}. Whilst curves can be drawn at high resolution on vector displays, a major disadvantage was shading; by the early 1990s the vast majority of computer displays were raster based\cite{computergraphics2}.
-\subsection{History}
+Hearn and Baker's textbook ``Computer Graphics''\cite{computergraphics2} gives a comprehensive overview of graphics from physical display technologies through fundamental drawing algorithms to popular graphics APIs. This section will examine algorithms for drawing two dimensional geometric primitives on raster displays as discussed in ``Computer Graphics'' and the relevant literature. Informal tutorials are abundant on the internet\cite{elias2000graphics}. This section is by no means a comprehensive survey of the literature but intends to provide some idea of the computations which are required to render a document.
-Since mankind climbed down from the trees... \rephrase{plagiarism alert!}
+\subsection{Straight Lines}\label{Straight Lines}
+\input{chapters/Background_Lines}
-\subsection{Vector Graphics vs Raster Graphics}
+\subsection{Spline Curves}\label{Spline Curves}
-Raster Graphics: Stores the exact pixels as they would appear on a device. Causes obvious issues with scaling.
-Vector Graphics: Stores relative position of primitives - scales better. BUT still can't scale forever.
+Splines are continuous curves formed from piecewise polynomial segments. A polynomial of $n$th degree is defined by $n + 1$ constants $\{a_0, a_1, \ldots, a_n\}$ and:
+\begin{align}
+ y(x) &= \displaystyle\sum_{k=0}^n a_k x^k
+\end{align}
-\rephrase{Figures: Raster and Vector graphics at different scales}
-\subsection{Document Format Categories}
+A straight line is simply a polynomial of first degree. Splines may be rasterised by sampling $y(x)$ at a number of points $x_i$ and rendering straight lines between $(x_i, y_i)$ and $(x_{i+1}, y_{i+1})$ as discussed in Section \ref{Straight Lines}. More direct algorithms for drawing splines based upon Bresenham and Wu's algorithms also exist\cite{citationneeded}.
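+
+As a concrete illustration of the sampling approach (a minimal sketch, not a fragment of any particular renderer; the cubic and the segment count are arbitrary illustrative choices), the curve is evaluated at evenly spaced points which are then joined by straight line segments:
+
```python
# Sketch: rasterising a polynomial spline segment by sampling.
# The coefficients and sample count below are arbitrary illustrative values.

def poly_eval(coeffs, x):
    """Evaluate y(x) = sum(a_k * x^k) via Horner's method.
    coeffs = [a_0, a_1, ..., a_n]."""
    y = 0.0
    for a in reversed(coeffs):
        y = y * x + a
    return y

def sample_spline(coeffs, x0, x1, n):
    """Approximate the curve by n straight-line segments, returning the
    n + 1 endpoints (x_i, y_i) to pass to a line-drawing routine."""
    points = []
    for i in range(n + 1):
        x = x0 + (x1 - x0) * i / n
        points.append((x, poly_eval(coeffs, x)))
    return points

# The cubic y = x^3 - x sampled over [-1, 1] with 8 segments
pts = sample_spline([0.0, -1.0, 0.0, 1.0], -1.0, 1.0, 8)
```
+
+Each consecutive pair of points in \verb/pts/ would then be drawn with a straight-line algorithm; increasing the sample count trades computation for a closer approximation to the true curve.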
-Main reference: Pixels or Perish\cite{hayes2012pixels}
+There are many different ways to define a spline. One approach is to specify ``knots'' on the spline and solve for the coefficients to generate a cubic spline ($n = 3$) passing through the points. Alternatively, special polynomials may be defined using ``control'' points which themselves are not part of the curve; these are convenient for graphical editors. Bezier splines are the most straightforward way to define a curve in the standards considered in Section \ref{Document Representations}.
+\subsubsection{Bezier Curves}
+\input{chapters/Background_Bezier}
-\begin{enumerate}
- \item DOM - eg: HTML/XMLish - defined in terms of elements that can contain other elements
- \item Programming Language - eg: PostScript - programmer (or program) produces a program that is interpreted
- \item Combination - eg: Javascript with HTML/XML - Program is interpreted that modifies the DOM.
+\subsection{Font Rendering}
+
+Donald Knuth's 1986 textbook ``The METAFONTbook'' describes the METAFONT system, in which the glyphs of a font are not stored as fixed outlines or bitmaps, but defined programmatically by equations relating their strokes and dimensions.
+
+
+
+\subsection{Shading}
+
+Algorithms for shading on vector displays involved drawing equally spaced lines in the region with endpoints defined by the boundaries of the region\cite{brassel1979analgorithm}. Apart from being unrealistic, these techniques required a computationally expensive sorting of vertices\cite{lane1983analgorithm}.
+
+On raster displays, shading is typically based upon Lane's algorithm of 1983\cite{lane1983analgorithm}. Lane's algorithm relies on the ability to ``subtract'' fill from a region. Modern GPU implementations achieve a similar effect using the stencil buffer to mark which pixels lie inside the region to be filled\cite{kilgard2012gpu}.
+
+\subsection{Compositing and the Painter's Model}\label{Compositing and the Painter's Model}
+
+So far we have discussed techniques for rendering vector graphics primitives in isolation, with no regard to the overall structure of a document which may contain many thousands of primitives. A straightforward approach would be to render all elements sequentially to the display, with the most recently drawn pixels overwriting lower elements. Such an approach is particularly inconvenient for anti-aliased images where colours must appear to smoothly blur between the edge of a primitive and any drawn underneath it.
+
+Colour raster displays are based on an additive red-green-blue $(r,g,b)$ colour representation which matches the human eye's response to light\cite{computergraphics2}. In 1984, Porter and Duff introduced a fourth colour channel for rasterised images called the ``alpha'' channel, analogous to the transparency of a pixel\cite{porter1984compositing}. In compositing models, elements can be rendered separately, with the four colour channels of successively drawn elements being combined according to one of several possible operations.
+
+In the ``painter's model'' as described by the SVG standard, Porter and Duff's ``over'' operation is used when rendering one primitive over another\cite{svg2011-1.1}.
+Given an existing pixel $P_1$ with colour values $(r_1, g_1, b_1, a_1)$ and a pixel $P_2$ with colours $(r_2, g_2, b_2, a_2)$ to be painted over $P_1$, where the colour channels are premultiplied by their alpha value, the resultant pixel $P_T$ has colours given by:
+\begin{align}
+ a_T &= 1 - (1-a_1)(1-a_2) \\
+ r_T &= (1 - a_2)r_1 + r_2 \quad \text{(similar for $g_T$ and $b_T$)}
+\end{align}
+It should be apparent that alpha values of $1$ correspond to an opaque pixel; that is, when $a_2 = 1$ the resultant pixel $P_T$ is the same as $P_2$.
+When the final pixel is actually drawn on an RGB display, the $(r, g, b)$ components are $(r_T/a_T, g_T/a_T, b_T/a_T)$.
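+
+The ``over'' operation and the final un-premultiplication step can be sketched as follows (a minimal illustration only; the pixel values are arbitrary, and real implementations work on whole buffers rather than single pixels):
+
```python
def over(p1, p2):
    """Porter-Duff 'over': composite pixel p2 over existing pixel p1.
    Each pixel is (r, g, b, a) with colour channels premultiplied by alpha."""
    r1, g1, b1, a1 = p1
    r2, g2, b2, a2 = p2
    a_t = 1.0 - (1.0 - a1) * (1.0 - a2)
    k = 1.0 - a2  # how much of the lower pixel shows through
    return (k * r1 + r2, k * g1 + g2, k * b1 + b2, a_t)

def to_display(p):
    """Un-premultiply for drawing on an RGB display: (r/a, g/a, b/a)."""
    r, g, b, a = p
    return (r / a, g / a, b / a) if a > 0 else (0.0, 0.0, 0.0)

# Half-transparent blue painted over opaque red (premultiplied values)
red  = (1.0, 0.0, 0.0, 1.0)
blue = (0.0, 0.0, 0.5, 0.5)
result = over(red, blue)
```
+
+Note that when the top pixel is opaque ($a_2 = 1$), the factor $1 - a_2$ vanishes and \verb/over/ simply returns the top pixel, matching the observation above that $P_T = P_2$.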
+
+The PostScript and PDF standards, as well as the OpenGL API, also use a painter's model for compositing. However, PostScript does not include an alpha channel, so $P_T = P_2$ always\cite{plrm}. Figure \ref{SVG} illustrates the painter's model for partially transparent shapes as they would appear in both the SVG and PDF models.
+
+\subsection{Rasterisation on the CPU and GPU}
+
+Traditionally, vector graphics have been rasterised by the CPU before being sent to the GPU for drawing\cite{kilgard2012gpu}. Several authors have proposed techniques for moving rasterisation onto the GPU itself\cite{worth2003xr, loop2007rendering, rice2008openvg, kilgard2012gpu, green2007improved}.
+
+
+\section{Document Representations}\label{Document Representations}
+
+The representation of information, particularly for scientific purposes, has changed dramatically over the last few decades. For example, Brassel's 1979 paper referenced earlier was produced on a mechanical typewriter. Although the paper discusses an algorithm for shading on computer displays, the figures illustrating this algorithm were not generated by a computer, but drawn by Brassel's assistant\cite{brassel1979analgorithm}. In contrast, modern papers such as Barnes et al.'s recent paper on embedding 3D images in PDF documents\cite{barnes2013embeddding} can themselves be an interactive proof of concept.
+
+In this section we will consider various approaches to specifying the structure and appearance of a document, and the motivations behind them, including: early interpreted formats (PostScript, \TeX, DVI), the Document Object Model popular in standards for web-based documents (HTML, SVG), and Adobe's ubiquitous Portable Document Format (PDF). Some of these formats were discussed in a recent paper ``Pixels Or Perish'' by Hayes\cite{hayes2012pixelsor}, who argues for greater interactivity in the PDF standard.
+
+\subsection{Interpreted Document Formats}
+\input{chapters/Background_Interpreted}
+
+
+\begin{itemize}
+ \item This model treats a document as the source code of a program which produces its graphics
+ \item Arose from the desire to produce printed documents using computers (which were still limited to text-only displays).
+ \item Typed by hand or (later) generated by a GUI program
+ \item PostScript --- largely superseded by PDF on the desktop, but still used by printers\footnote{Desktop PDF viewers can still cope with PS, but I wonder if a smartphone PDF viewer would implement it?}
+ \item \TeX --- Predates PostScript, similar idea
+ \begin{itemize}
+	\item Maybe if \LaTeX were more popular there would be desktop viewers that converted \LaTeX directly into graphics
+ \end{itemize}
+ \item Potential for dynamic content and interactivity; dynamic PostScript, enhanced PostScript
+
+ \item Problems with security --- PostScript is Turing complete and can be exploited easily
+\end{itemize}
+
+\pagebreak
+\subsection{Document Object Model}\label{Document Object Model}
+\input{chapters/Background_DOM}
+
+\subsection{The Portable Document Format}
+
+
+\subsection{Scientific Computation Packages}
+
+The document and the code that produces it are one and the same.
+
+\begin{itemize}
+ \item Numerical computation packages such as Mathematica and Maple use arbitrary precision floats
\begin{itemize}
- \item The lines are becomming increasingly blurred between 1. and 2.
- \item In the case of Javascript/HTML, a special DOM element \verb/