-This chapter provides an overview of relevant literature. The areas of interest can be broadly grouped into two largely seperate categories; Documents and Number Representations.
+This chapter provides an overview of relevant literature. The areas of interest can be broadly grouped into two largely separate categories: Documents and Number Representations.
-The first half of this chapter will be devoted to documents themselves, including: the representation and rendering of low level graphics primitives, how collections of these primitives are represented in document formats, and the various standards for documents in use today.
+The first half of this chapter will be devoted to documents themselves, including: the representation and display of graphics primitives\cite{computergraphics2}, and how collections of these primitives are represented in document formats, focusing on well known standards currently in use\cite{plrm, pdfref17, svg2011-1.1}.
-We will find that although there has been a great deal of research into the rendering, storing, editing, manipulation, and extension of document formats, all popular document standards are content to specify at best a single precision IEEE-754 floating point number representations.
+We will find that although there has been a great deal of research into the rendering, storing, editing, manipulation, and extension of document formats, these widely used document standards specify at best a single precision IEEE-754 floating point number representation.
The research on arbitrary precision arithmetic applied to documents is very sparse; however, arbitrary precision arithmetic itself is a very active field of research. Therefore, the second half of this chapter will be devoted to considering the IEEE-754 standard, its advantages and limitations, and possible alternative number representations to allow for arbitrary precision arithmetic.
-In Chapter \ref{Progress}, we will discuss our findings so far with regards to arbitrary precision arithmetic applied to document formats.
+In Chapter \ref{Progress}, we will discuss our findings so far with regard to arbitrary precision arithmetic applied to document formats, and expand upon the goals outlined in Chapter \ref{Proposal}.
\pagebreak
-\section{Raster and Vector Graphics}\label{vector-vs-raster-graphics}
+\section{Raster and Vector Images}\label{Raster and Vector Images}
+\input{chapters/Background_Raster-vs-Vector}
-\rephrase{1. Here are the fundamentals of graphics (raster and vector, rendering)}
+\section{Rasterising Vector Images}\label{Rasterising Vector Images}
-At a fundamental level everything that is seen on a display device is represented as either a vector or raster image. These images can be stored as stand alone documents or embedded in a much more complex document format capable of containing many other types of information.
+Throughout Section \ref{vector-vs-raster-graphics} we were careful to refer to ``modern'' display devices, which are raster based. It is of some historical significance that vector display devices were popular during the 70s and 80s, and papers oriented towards drawing on these devices can be found\cite{brassel1979analgorithm}. Whilst curves can be drawn at high resolution on vector displays, a major disadvantage was shading; by the early 90s the vast majority of computer displays were raster based\cite{computergraphics2}.
-A raster image's structure closely matches it's representation as shown on modern display hardware; the image is represented as a grid of filled square ``pixels''. Each pixel is the same size and contains information describing its colour. This representation is simple and also well suited to storing images as produced by cameras and scanners\cite{citationneeded}.
+Hearn and Baker's textbook ``Computer Graphics''\cite{computergraphics2} gives a comprehensive overview of graphics from physical display technologies through fundamental drawing algorithms to popular graphics APIs. This section will examine algorithms for drawing two dimensional geometric primitives on raster displays as discussed in ``Computer Graphics'' and the relevant literature. Informal tutorials are abundant on the internet\cite{elias2000graphics}.
-The drawback of raster images is that by their very nature there can only be one level of detail. Figures \ref{vector-vs-raster} and \ref{vector-vs-raster-scaled} attempt to illustrate this by comparing raster images to vector images in a similar way to Worth and Packard\cite{worth2003xr}.
+\subsection{Straight Lines}\label{Straight Lines}
+\input{chapters/Background_Lines}
-Consider the right side of Figure \ref{vector-vs-raster}. This is a raster image which should be recognisable as an animal defined by fairly sharp edges. Figure \ref{vector-vs-raster-scaled} shows that zooming on the animal's face causes these edges to appear jagged. There is no information in the original image as to what should be displayed at a larger size, so each square shaped pixel is simply increased in size. A blurring effect will probably be visible in most PDF viewers; the software has attempted to make the ``edge'' appear more realistic using a technique called ``antialiasing'' which averages neighbouring pixels in the original image in order to generate extra pixels in the scaled image\cite{citationneeded}.\footnote{The exact appearance of the images at different zoom levels will depend greatly on the PDF viewer or printer used to display this report. On the author's display using the Atril (1.6.0) viewer, the top images appear to be pixel perfect mirror images at a 100\% scale. In the bottom raster image, antialiasing is not applied at zoom levels above $125\%$ and the effect of scaling is quite noticable.}
+\subsection{Spline Curves}\label{Spline Curves}
-%\footnote{\noindent This behaviour may be configured in some PDF viewers (Adobe Reader) whilst others (Evince, Atril, Okular) will choose whether or not to bother with antialiasing based on the zoom level. For best results experiment with changing the zoom level in your PDF viewer.\footnotemark}\footnotetext{On the author's hardware, the animals in the vector and raster images should appear mirrored pixel for pixel; but they may vary slightly on other PDF viewers or display devices.}
+Splines are continuous curves formed from piecewise polynomial segments. A polynomial of degree $n$ is defined by $n+1$ constants $\{a_0, a_1, ... a_n\}$ and:
+\begin{align}
+ y(x) &= \displaystyle\sum_{k=0}^n a_k x^k
+\end{align}
-In contrast, the left sides of Figures \ref{vector-vs-raster} and \ref{vector-vs-raster-scaled} are a vector image. A vector image contains information about a number of geometric shapes. To display this image on modern display hardware, the coordinates are transformed according to the view and \emph{then} the image is converted into a raster like representation. Whilst the raster image merely appears to contain edges, the vector image actually contains information about these edges, meaning they can be displayed ``infinitely sharply'' at any level of detail\cite{citationneeded} --- or they could be if the coordinates are stored with enough precision (see Section \ref{}). Thus, vector images are well suited to high quality digital art\footnote{Figure \ref{vector-vs-raster} is not to be taken as an example of this.} and text\cite{citationneeded}.
+A straight line is simply a polynomial of first degree. Splines may be rasterised by sampling $y(x)$ at a number of points $x_i$ and rendering straight lines between $(x_i, y_i)$ and $(x_{i+1}, y_{i+1})$ as discussed in Section \ref{Straight Lines}. More direct algorithms for drawing splines based upon Bresenham's and Wu's algorithms also exist\cite{citationneeded}.
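This sampling approach can be sketched in a few lines of Python (an illustration of our own; the function names are not drawn from the literature). The segments produced would then be drawn with a line rasterisation algorithm such as those discussed in Section \ref{Straight Lines}:

```python
def poly(coeffs, x):
    """Evaluate y(x) = sum(a_k * x**k) for coefficients [a_0, ..., a_n]."""
    return sum(a * x**k for k, a in enumerate(coeffs))

def sample_segments(coeffs, x1, x2, samples=8):
    """Sample the polynomial at equally spaced x_i and pair the samples
    into straight line segments ((x_i, y_i), (x_{i+1}, y_{i+1}))."""
    xs = [x1 + (x2 - x1) * i / samples for i in range(samples + 1)]
    pts = [(x, poly(coeffs, x)) for x in xs]
    return list(zip(pts, pts[1:]))

# y = x^2 on [0, 2], approximated by 8 straight segments
segments = sample_segments([0.0, 0.0, 1.0], 0.0, 2.0)
```

Increasing the number of samples trades performance for a closer approximation of the curve.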
-\rephrase{Woah, an entire page with only one citation ham fisted in after I had written the rest... and the ``actually writing it'' phase of the Lit Review is off to a great start.}
-
-\newlength\imageheight
-\newlength\imagewidth
-\settoheight\imageheight{\includegraphics{figures/fox-raster.png}}
-\settowidth\imagewidth{\includegraphics{figures/fox-raster.png}}
-
-%Height: \the\imageheight
-%Width: \the\imagewidth
-
-
-\begin{figure}[H]
- \centering
- \includegraphics[scale=0.7528125]{figures/fox-vector.pdf}
- \includegraphics[scale=0.7528125]{figures/fox-raster.png}
- \caption{Original Vector and Raster Images}\label{vector-vs-raster}
-\end{figure} % As much as I hate to break up the party, these fit best over the page (at the moment)
-\begin{figure}[H]
- \centering
- \includegraphics[scale=0.7528125, viewport=210 85 280 150,clip, width=0.45\textwidth]{figures/fox-vector.pdf}
- \includegraphics[scale=0.7528125, viewport=0 85 70 150,clip, width=0.45\textwidth]{figures/fox-raster.png}
- \caption{Scaled Vector and Raster Images}\label{vector-vs-raster-scaled}
-\end{figure}
-
-\section{Rendering Vector Images}
-
-Throughout Section \ref{vector-vs-raster-graphics} we were careful to refer to ``modern'' display devices, which are raster based. It is of some historical significance that vector display devices were popular during the 70s and 80s, and so algorithms for drawing a vector image directly without rasterisation exist. An example is the shading of polygons which is somewhat more complicated on a vector display than a raster display\cite{brassel1979analgorithm, lane1983analgorithm}.
-
-All modern displays of practical interest are raster based. In this section we explore the structure of vector graphics in more detail, and how different primitives are rendered.
-
-\rephrase{After the wall of citationless text in Section \ref{vector-vs-raster-graphics} we should probably redeem ourselves a bit here}
-
-\subsection{Bezier Curves}
-
-The bezier curve is of vital importance in vector graphics.
-
-\rephrase{Things this section lacks}
-\begin{itemize}
- \item Who came up with them (presumably it was a guy named Bezier)
- \item Flesh out how they evolved or came into use?
- \item Naive algorithm
- \item De Casteljau Algorithm
-\end{itemize}
-
-Recently, Goldman presented an argument that Bezier's could be considered as fractal in nature, a fractal being the fixed point of an iterated function system\cite{goldman_thefractal}. Goldman's proof depends upon a modification to the De Casteljau Subdivision algorithm. Whilst we will not go the details of the proof, or attempt comment on its theoretical value, it is interesting to note that Goldman's algorithm is not only worse than the De Casteljau algorithm upon which it was based, but it also performs worse than a naive Bezier rendering algorithm. Figure \ref{bezier-goldman} shows our results using implementations of the various algorithms in python.
-
-\begin{figure}[H]
- \centering
- \includegraphics[width=0.7\textwidth]{figures/bezier-goldman.png}
- \caption{Performance of Bezier Subdivision Algorithms}\label{bezier-goldman}
-\end{figure}
-
-\rephrase{Does the Goldman bit need to be here? Probably NOT. Do I need to check very very carefully that I haven't made a mistake before saying this? YES. Will I have made a mistake? Probably.}
-
-
-\subsection{Shapes}
-Shapes are just bezier curves joined together.
-
-\subsubsection{Approximating a Circle Using Cubic Beziers}
-
-An example of a shape is a circle. We used some algorithm on wikipedia that I'm sure is in Literature somewhere
-\cite{citationneeded} and made a circle. It's in my magical ipython notebook with the De Casteljau stuff.
-
-\subsection{Text}
-Text is just Bezier Curves, I think we proved out point with the circle, but maybe find some papers to cite\cite{citationneeded}
+There are many different ways to define a spline. One approach is to specify ``knots'' on the spline and solve for the coefficients to generate a cubic spline ($n = 3$) passing through those points. Beziers are a popular spline which can be created in GUI based graphics editors using several ``control points'' which themselves are not part of the curve.
+\subsubsection{Bezier Curves}
+\input{chapters/Background_Bezier}
\subsection{Shading}
-Shading is actually extremely complicated! \cite{brassel1979analgorithm, lane1983analgorithm}
-\rephrase{Sure, but do we care enough to talk about it? We will run out of space at this rate}
+Algorithms for shading on vector displays involved drawing equally spaced lines within a region; this approach is limited both in the complexity of shading achievable and by the performance required to compute the lines\cite{brassel1979analgorithm}.
-\subsection{Other Things}
-We don't really care about other things in this report.
+On raster displays, shading is typically based upon Lane's algorithm of 1983\cite{lane1983analgorithm}, which is implemented in the GPU\cite{kilgard2012gpu}.
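To make the idea of raster shading concrete, the following Python sketch fills a polygon one scanline at a time using the even-odd rule. This is a simplified illustration of the general scanline approach, not Lane's algorithm itself, and the names are ours:

```python
def scanline_fill(vertices, width, height):
    """Return the set of (x, y) pixels inside the polygon (even-odd rule)."""
    filled = set()
    n = len(vertices)
    for y in range(height):
        yc = y + 0.5  # sample each scanline at the pixel centre
        xs = []
        for i in range(n):
            (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
            # record where each (non-horizontal) edge crosses this scanline
            if (y1 <= yc < y2) or (y2 <= yc < y1):
                xs.append(x1 + (yc - y1) * (x2 - x1) / (y2 - y1))
        xs.sort()
        # fill between alternating pairs of crossings
        for left, right in zip(xs[::2], xs[1::2]):
            for x in range(int(left + 0.5), int(right + 0.5)):
                filled.add((x, y))
    return filled

square = scanline_fill([(1, 1), (5, 1), (5, 5), (1, 5)], 8, 8)
```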
\item They are portable; you can write an amazing document in Mathematica/Matlab but it
\item Scientific journals would need to adapt to other formats and this is not worth the effort
\item No network connection is required to view a PDF (although DRM might change this?)
- \item All rescources are stored in a single file; a website is stored accross many seperate files (call this a ``distributed'' document format?)
+	\item All resources are stored in a single file; a website is stored across many separate files (call this a ``distributed'' document format?)
	\item You can create PDFs easily using desktop publishing WYSIWYG editors; WYSIWYG editors for web based documents are far more limited due to the more complex content
\item Until Javascript becomes part of the PDF standard, including Javascript in PDF documents will not become widespread
	\item Once you complicate a PDF by adding Javascript, it becomes more complicated to create; it is simply easier to use a series of static figures than to embed a shader in your document, even for people who know WebGL.
Floating point representations map an infinite set of real numbers onto a finite, discrete set of representations.
+
+
\rephrase{Figure: 8 bit ``minifloats'' (all 255 of them) clearly showing the ``precision vs range'' issue}
The most a result can be rounded in conversion to a floating point number is the unit in the last place; $m_{N} \times B^{e}$.
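The growth of the unit in the last place with magnitude can be demonstrated directly (a Python illustration of our own, using \texttt{math.ulp}, which assumes Python 3.9 or later and double precision floats):

```python
import math

# The gap between adjacent floats grows with magnitude, so rounding
# error (bounded by the unit in the last place) grows too.
assert math.ulp(1.0) == 2**-52      # doubles carry 52 fraction bits
assert math.ulp(2.0**53) == 2.0     # above 2^53 not every integer is exact
assert 2.0**53 + 1.0 == 2.0**53     # the addition is rounded away entirely
```

The same effect occurs at much smaller magnitudes for the single precision floats specified by the document standards discussed earlier.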
--- /dev/null
+A Bezier Curve of degree $n$ is defined by $n+1$ ``control points'' $\left\{P_0, ... P_n\right\}$.
+Points $P(t)$ along the curve for $t \in [0,1]$ are defined by:
+\begin{align}
+	P(t) &= \displaystyle\sum_{j=0}^{n} B_j^n(t) P_j
+\end{align}
+where $B_j^n(t) = \binom{n}{j} t^j (1-t)^{n-j}$ are the Bernstein basis polynomials.
+
+
+From this definition it should be apparent that $P(t)$ for a Bezier Curve of degree $0$ is a single point, whilst $P(t)$ for a Bezier of degree $1$ is a straight line between $P_0$ and $P_1$. $P(t)$ always begins at $P_0$ for $t = 0$ and ends at $P_n$ when $t = 1$.
+
+Figure \ref{bezier_3} shows a Bezier Curve defined by the points $\left\{(0,0), (1,0), (1,1)\right\}$.
+
+A straightforward algorithm for rendering Beziers is to simply sample $P(t)$ for some number of values of $t$ and connect the resulting points with straight lines using Bresenham's or Wu's algorithm (see Section \ref{Straight Lines}). Whilst the performance of this algorithm is linear in the number of samples, in ???? De Casteljau derived a more efficient means of subdividing Beziers into line segments.
+
+Recently, Goldman presented an argument that Beziers could be considered fractal in nature, a fractal being the fixed point of an iterated function system\cite{goldman_thefractal}. Goldman's proof depends upon a modification to the De Casteljau Subdivision algorithm which expresses the subdivisions as an iterated function system. The cost of this modification is that the algorithm is no longer $O(n)$ but $O(n^2)$; although it is not explicitly stated by Goldman, it seems clear that the modified algorithm is mainly of theoretical interest.
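De Casteljau's construction is based on repeated linear interpolation of the control points. The following Python sketch (our own simplification, reduced to evaluating the curve at a single parameter $t$ rather than performing full subdivision) illustrates the idea:

```python
def de_casteljau(points, t):
    """Evaluate a Bezier curve with control points [(x, y), ...] at t
    by repeated linear interpolation between adjacent points."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        # replace n points with n-1 interpolated points
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Degree 1: a straight line from P0 to P1.
assert de_casteljau([(0, 0), (2, 2)], 0.5) == (1.0, 1.0)
# A quadratic Bezier starts at P0 (t = 0) and ends at Pn (t = 1).
assert de_casteljau([(0, 0), (1, 0), (1, 1)], 0.0) == (0.0, 0.0)
assert de_casteljau([(0, 0), (1, 0), (1, 1)], 1.0) == (1.0, 1.0)
```

The intermediate points generated at $t = 1/2$ are exactly the control points of the two half-curves used in De Casteljau's subdivision.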
+
--- /dev/null
+It is well known that in Cartesian coordinates, a line between points $(x_1, y_1)$ and $(x_2, y_2)$ with $x_1 \neq x_2$ can be described by:
+\begin{align}
+	y(x) &= m x + b\label{eqn_line} \quad \text{ on $x \in [x_1, x_2]$} \\
+	\text{ for } m &= (y_2 - y_1)/(x_2 - x_1) \\
+	\text{ and } b &= y_1 - m x_1
+\end{align}
+
+On a raster display, only points $(x,y)$ with integer coordinates can be displayed; however $m$ will generally not be an integer. Thus a straightforward use of Equation \ref{eqn_line} will require costly floating point operations and rounding (see Section \ref{}). Modifications based on computing steps $\delta x$ and $\delta y$ eliminate the multiplication but are still less than ideal in terms of performance\cite{computergraphics2}.
+
+Bresenham's Line Algorithm was developed in 1965 with the motivation of controlling a particular mechanical plotter in use at the time\cite{bresenham1965algorithm}. The plotter's motion was confined to move between discrete positions on a grid one cell at a time, horizontally, vertically or diagonally. As a result, the algorithm presented by Bresenham requires only integer addition and subtraction, and it is easily adapted for drawing pixels on a raster display. Bresenham himself points out that rasterisation processes have existed since long before the first computer displays\cite{bresenham1996pixel}.
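The integer-only character of the algorithm can be seen in the following Python sketch (an illustrative rendition of the commonly presented error-tracking form, generalised to all octants, rather than Bresenham's original plotter formulation):

```python
def bresenham(x0, y0, x1, y1):
    """Return the grid pixels on the line from (x0, y0) to (x1, y1),
    using only integer addition, subtraction, and comparison."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy  # accumulated error between the true line and the grid
    pixels = []
    while True:
        pixels.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:   # step horizontally
            err += dy
            x0 += sx
        if e2 <= dx:   # step vertically
            err += dx
            y0 += sy
    return pixels

assert bresenham(0, 0, 4, 2) == [(0, 0), (1, 1), (2, 1), (3, 2), (4, 2)]
```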
+
+In Figure \ref{rasterising-line} a) and b) we illustrate the rasterisation of a line with a single pixel width. The path followed by Bresenham's algorithm is shown; it can be seen that the pixels which are more than half covered by the line are set by the algorithm. This causes a jagged effect called aliasing, which is particularly noticeable on low resolution displays. From a signal processing point of view, this can be understood as a consequence of sampling a continuous signal on a discrete grid\cite{citationneeded}.
+
+Figure \ref{rasterising-line} c) shows an (idealised) antialiased rendering of the line. The pixel intensity has been set to the average of the line and background colours over that pixel. Such an ideal implementation would be impractically computationally expensive on real devices\cite{elias2000graphics}. In 1991 Wu introduced an algorithm for drawing antialiased lines which, while equivalent in results to existing algorithms by Fujimoto and Iwata, set the state of the art in performance\cite{wu1991anefficient}.
+
+
+\begin{figure}[H]
+ \centering
+ \includegraphics[width=0.25\textwidth]{figures/line1.pdf}
+ \includegraphics[width=0.25\textwidth]{figures/line2.pdf}
+ \includegraphics[width=0.25\textwidth]{figures/line3.pdf}
+ \caption{Rasterising a Straight Line}\label{rasterising-line}
+ a) Before Rasterisation b) Bresenham's Algorithm c) Antialiased Line (Idealised)
+\end{figure} % As much as I hate to break up the party, these fit best over the page (at the moment)
+
--- /dev/null
+At a fundamental level everything that is seen on a display device is represented as either a vector or raster image. These images can be stored as stand alone documents or embedded within a more complex document format capable of containing many other types of information.
+
+A raster image's structure closely matches its representation as shown on modern display hardware; the image is represented as a grid of ``pixels''. Each pixel is a filled square of the same size, containing information describing its colour. This representation is simple and well suited to storing images as produced by cameras and scanners.
+
+The drawback of raster images is that by their very nature there can only be one level of detail. Figures \ref{vector-vs-raster} and \ref{vector-vs-raster-scaled} attempt to illustrate this by comparing raster images to vector images in a similar way to Worth and Packard\cite{worth2003xr}.
+
+Consider the right side of Figure \ref{vector-vs-raster}. This is a raster image which should be recognisable as an animal defined by fairly sharp edges. Figure \ref{vector-vs-raster-scaled} shows that zooming in on the animal's face causes these edges to appear jagged. There is no information in the original image as to what should be displayed at a larger size, so each square shaped pixel is simply increased in size. A blurring effect will probably be visible in most PDF viewers; the software has attempted to make the ``edge'' appear more realistic using a technique called ``antialiasing'' (see Section \ref{Straight Lines}).\footnote{The exact appearance of the images at different zoom levels will depend greatly on the PDF viewer or printer used to display this report. On the author's display using the Atril (1.6.0) Document viewer, the top images appear to be pixel perfect mirror images at a 100\% scale. In the bottom raster image, antialiasing is not applied at zoom levels above $125\%$ and the effect of scaling is quite noticeable.}
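The underlying scaling operation can be sketched in a few lines of Python (pixel replication, also known as nearest neighbour scaling; this is an illustration of our own, and real viewers apply more sophisticated filtering). No new detail can appear; each source pixel simply becomes a larger square:

```python
def upscale(image, factor):
    """Scale a raster image (a list of rows of pixel values) by an
    integer factor, replicating each pixel into a factor x factor block."""
    return [[pixel for pixel in row for _ in range(factor)]
            for row in image for _ in range(factor)]

tiny = [[0, 1],
        [1, 0]]
big = upscale(tiny, 2)
assert big == [[0, 0, 1, 1],
               [0, 0, 1, 1],
               [1, 1, 0, 0],
               [1, 1, 0, 0]]
```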
+
+%\footnote{\noindent This behaviour may be configured in some PDF viewers (Adobe Reader) whilst others (Evince, Atril, Okular) will choose whether or not to bother with antialiasing based on the zoom level. For best results experiment with changing the zoom level in your PDF viewer.\footnotemark}\footnotetext{On the author's hardware, the animals in the vector and raster images should appear mirrored pixel for pixel; but they may vary slightly on other PDF viewers or display devices.}
+
+In contrast, the left sides of Figures \ref{vector-vs-raster} and \ref{vector-vs-raster-scaled} show a vector image. A vector image contains information about a number of geometric shapes. To display this image on modern display hardware, the coordinates are transformed according to the view and \emph{then} the image is converted into a raster representation. Whilst the raster image merely appears to contain edges, the vector image actually contains information about these edges, meaning they can be displayed ``infinitely sharply'' at any level of detail\cite{citationneeded} --- or they could be if the coordinates are stored with enough precision (see Section \ref{}). Thus, vector images are well suited to high quality digital art\footnote{Figure \ref{vector-vs-raster} is not to be taken as an example of this.} and text.
+
+
+\newlength\imageheight
+\newlength\imagewidth
+\settoheight\imageheight{\includegraphics{figures/fox-raster.png}}
+\settowidth\imagewidth{\includegraphics{figures/fox-raster.png}}
+
+%Height: \the\imageheight
+%Width: \the\imagewidth
+
+
+\begin{figure}[H]
+ \centering
+ \includegraphics[scale=0.7528125]{figures/fox-vector.pdf}
+ \includegraphics[scale=0.7528125]{figures/fox-raster.png}
+ \caption{Original Vector and Raster Images}\label{vector-vs-raster}
+\end{figure} % As much as I hate to break up the party, these fit best over the page (at the moment)
+\begin{figure}[H]
+ \centering
+ \includegraphics[scale=0.7528125, viewport=210 85 280 150,clip, width=0.45\textwidth]{figures/fox-vector.pdf}
+ \includegraphics[scale=0.7528125, viewport=0 85 70 150,clip, width=0.45\textwidth]{figures/fox-raster.png}
+ \caption{Scaled Vector and Raster Images}\label{vector-vs-raster-scaled}
+\end{figure}
\chapter{Introduction}\label{Introduction}
-\rephrase{Most of this chapter is copy pasted from the project proposal} \\
- \url{http://szmoore.net/ipdf/documents/ProjectProposalSam.pdf}
+\section{Motivation}
Early electronic document formats such as PostScript were motivated by a need to print documents onto a paper medium. In the PostScript standard, this led to a model of the document as a program; a series of instructions to be executed by an interpreter which would result in ``ink'' being placed on ``pages'' of a fixed size\cite{plrm}. The ubiquitous Portable Document Format (PDF) standard provides many enhancements to PostScript taking into account desktop publishing requirements\cite{cheng2002portable}, but it is still fundamentally based on the same imaging model\cite{pdfref17}. This idea of a document as a static ``page'' has led to limited precision in these and other traditional document formats.
We are now seeing a widespread use of mobile computing devices with touch screens, where the display size is typically much smaller than paper pages and traditional computer monitors; it seems that there is much to be gained by breaking free of the restricted precision of traditional document formats.
-\section{Aim}
-
-In this project, we will explore the state of the art of current document formats including PDF, PostScript, SVG, HTML, and the limitations of each in terms of precision.
-We will consider designs for a document format allowing graphics primitives at an arbitrary level of zoom with no loss of detail. We will refer to such a document format as ``infinite precision''. A viewer and editor will be implemented as a proof of concept; we adopt a low level, ground up approach to designing this viewer so as to not become restricted by any single existing document format.
-
-There are many possible applications for documents in which precision is unlimited. Several areas of use include: visualisation of extremely large or infinite data sets; visualisation of high precision numerical computations; digital artwork; computer aided design; and maps.
-
-\subsection{Clarification of Terms}
-
-It may be necessary to clarify what we mean by the terms ``infinite precision'' and ``document formats''. Regarding the latter, we consider a document format to be any representation of visual information which is capable of being stored indefinitely. Regarding the former, we do not propose to be able to contain an infinite amount of information within such a document. The goal is to be able to render a primitive at the same level of detail it is specified by a document format, regardless of how precise this level is. For example, the precision of coordinates of primitives drawn in a graphical document editor will always be limited by the resolution of the display on which they are drawn, but not by the viewer.
-
-
-
-\section{Methods}
-
-Initial research and software development is being conducted in collaboration with David Gow\cite{proposalGow}. Once a simple testbed application has been developed, we will individually explore approaches for introducing arbitrary levels of precision; these approaches will be implemented as alternate versions of the same software. The focus will be on drawing simple primitives (lines, polygons, circles). However, if time permits we will explore adding more complicated primitives (font glyphs, bezier curves, embedded bitmaps).
-
-The process of rendering a document will be considered as a common area of research, whilst individual research will be conducted on means for allowing infinite precision.
-At this stage we have identified two possible areas for individual research:
-
-\begin{enumerate}
-
- \item {\bf Arbitrary Precision real valued numbers} --- Sam Moore
-
- We plan to investigate the representation of real values to a high or arbitary degree of precision. Such representations would allow for a document to be implemented
- using a single global coordinate system. However, we would expect a decrease in performance with increased complexity of the data structure used to represent a real value. \rephrase{Both software and hardware techniques will be explored.} We will also consider the limitations imposed by performing calculations on the GPU or CPU.
-
- Starting points for research in this area are Priest's 1991 paper, ``Algorithms for Arbitrary Precision Floating Point Arithmetic''\cite{priest1991algorithms}, and Goldberg's 1992 paper ``The design of floating point data types''\cite{goldberg1992thedesign}. \rephrase{A more recent and comprehensive text book, ``Handbook of Floating Point Arithmetic''\cite{HFP}, published in 2010, has also been identified as highly relevant.}
-
- \item {\bf Local coordinate systems} --- David Gow \cite{proposalGow}
-
- An alternative approach involves segmenting the document into different regions using fixed precision floats to define primitives within each region. A quadtree or similar data structure could be employed to identify and render those regions currently visible in the document viewer.\rephrase{Say more here?}
-
-\end{enumerate}
-\pagebreak
-We aim to compare these and any additional implementations considered using the following metrics:
-\begin{enumerate}
-
- \item {\bf Performance vs Number of Primitives}
-
- As it is clearly desirable to include more objects in a document, this is a natural metric for the usefulness of an implementation.
- We will compare the performance of rendering different implementations, using several ``standard'' test documents.
-
- \item {\bf Performance vs Visible Primitives}
-
- There will inevitably be an overhead to all primitives in the document, whether drawn or not.
- As the structure of the document format and rendering algorithms may be designed independently, we will repeat the above tests considering only the number of visible primitives.
-
-
- \item {\bf Performance vs Zoom Level}
-
- We will also consider the performance of rendering at zoom levels that include primitives on both small and large scales, since these are the cases under which floating point precision causes problems in the PostScript and PDF standards.
-
- \item {\bf Performance whilst translation and scaling}
-
- Whilst changing the view, it is ideal that the document be re-rendered as efficiently as possible, to avoid disorienting and confusing the user.
- We will therefore compare the speed of rendering as the standard documents are translated or scaled at a constant rate.
-
- \item {\bf Artifacts and Limitations on Precision}
-
- As we are unlikely to achieve truly ``infinite'' precision, qualitative comparisons of the accuracy of rendering under different implementations should be made.
-
-\end{enumerate}
-
-\section{Software and Hardware Requirements}
-
-Due to the relative immaturity and inconsistency of graphics drivers on mobile devices, our proof of concept will be developed for a conventional GNU/Linux desktop or laptop computer using OpenGL. However, the techniques explored could easily be extended to other platforms and libraries.
-
-
-\pagebreak
-
-\section{Timeline}
-
-Deadlines enforced by the faculty of Engineering Computing and Mathematics are \emph{italicised}.\footnote{David Gow is being assessed under the 2014 rules for a BEng (Software) Final Year Project, whilst the author is being assessed under the 2014 rules for a BEng (Mechatronics) Final Year Project; deadlines and requirements as shown in Gow's proposal\cite{proposalGow} may differ}.
-
-\begin{center}
-\begin{tabular}{l|p{0.5\textwidth}}
- {\bf Date} & {\bf Milestone}\\
- \hline
- $17^{\text{th}}$ April & Draft Literature Review completed. \rephrase{This sort of didn't happen...}\\
- \hline
- $1^{\text{st}}$ May & Testbed Software (basic document format and viewer) completed and approaches for extending to allow infinite precision identified. \\
- \hline
- $26^{\text{th}}$ May & \emph{Progress Report and Revised Literature Review due.}\\
- \hline
- $9^{\text{th}}$ June & Demonstrations of limitations of floating point precision in the Testbed software. \\
- $1^{\text{st}}$ July & At least one implementation of infinite precision for basic primitives (lines, polygons, curves) completed. Other implementations, advanced features, and areas for more detailed research identified. \\
- \hline
- $1^{\text{st}}$ August & Experiments and comparison of various infinite precision implementations completed. \\
- \hline
- $1^{\text{st}}$ September & Advanced features implemented and tested, work underway on Final Report. \\
- \hline
- TBA & \emph{Conference Abstract and Presentation due.} \\
- \hline
- $10^{\text{th}}$ October & \emph{Draft of Final Report due.} \\
- \hline
- $27^{\text{th}}$ October & \emph{Final Report due.}\\
- \hline
-\end{tabular}
-\end{center}
-
+\section{Overview}
+The remainder of this document will be organised as follows: In Chapter \ref{Proposal} we give an overview of the current state of the research in document formats, and the motivation for implementing ``infinite precision'' in a document format. We will outline our approach to research in collaboration with David Gow\cite{proposalGow}. In Chapter \ref{Background} we provide more detailed background examining the literature related to rendering, interpreting, and creating document formats, as well as possible techniques for increased and possibly infinite precision. Chapter \ref{Progress} gives the current state of our research and the progress towards the goals outlined in Chapter \ref{Introduction}. In Chapter \ref{Conclusion} we will conclude with a summary of our findings and goals.
--- /dev/null
+\chapter{Proposal}\label{Proposal}
+
+This chapter is adapted from the project proposal, available at \url{http://szmoore.net/ipdf/documents/ProjectProposalSam.pdf}.
+
+\section{Aim}
+
+In this project, we will explore the state of the art of current document formats including PDF, PostScript, SVG, HTML, and the limitations of each in terms of precision.
+We will consider designs for a document format allowing graphics primitives at an arbitrary level of zoom with no loss of detail. We will refer to such a document format as ``infinite precision''. A viewer and editor will be implemented as a proof of concept; we adopt a low level, ground-up approach to designing this viewer so as not to become restricted by any single existing document format.
+
+There are many possible applications for documents in which precision is unlimited. Several areas of use include: visualisation of extremely large or infinite data sets; visualisation of high precision numerical computations; digital artwork; computer aided design; and maps.
+
+\subsection{Clarification of Terms}
+
+It may be necessary to clarify what we mean by the terms ``infinite precision'' and ``document formats''. Regarding the latter, we consider a document format to be any representation of visual information which is capable of being stored indefinitely. Regarding the former, we do not propose to contain an infinite amount of information within such a document. The goal is to be able to render a primitive at the same level of detail at which it is specified in the document format, regardless of how precise that level is. For example, the precision of coordinates of primitives drawn in a graphical document editor will always be limited by the resolution of the display on which they are drawn, but should not be limited by the viewer.
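To make this precision limit concrete, the following is an illustrative sketch only, not part of the proposal (the helper `to_f32` is our own): in IEEE-754 single precision, the best that several popular document standards specify, an offset of $0.5$ can no longer be resolved once coordinates reach $2^{24}$.

```python
import struct

def to_f32(x: float) -> float:
    """Round a Python float (an IEEE-754 double) to single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

# At 2^22 the gap between adjacent single-precision values is 0.5,
# so an offset of 0.5 survives the round trip...
assert to_f32(2.0**22 + 0.5) == 2.0**22 + 0.5

# ...but at 2^24 adjacent values are 2.0 apart: the offset is lost.
assert to_f32(2.0**24 + 0.5) == 2.0**24
```

Zooming in on a primitive placed near such a coordinate therefore cannot reveal any finer detail, no matter how capable the viewer.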
+
+\section{Methods}
+
+Initial research and software development is being conducted in collaboration with David Gow\cite{proposalGow}. Once a simple testbed application has been developed, we will individually explore approaches for introducing arbitrary levels of precision; these approaches will be implemented as alternate versions of the same software. The focus will be on drawing simple primitives (lines, polygons, circles); however, if time permits, we will explore adding more complicated primitives (font glyphs, B\'ezier curves, embedded bitmaps). Hearn and Baker's textbook ``Computer Graphics'' includes chapters providing a good overview of two dimensional graphics\cite{computergraphics2}.
+
+The process of rendering a document will be considered as a common area of research, whilst individual research will be conducted on means for allowing infinite precision.
+At this stage we have identified two possible areas for individual research:
+
+\begin{enumerate}
+
+ \item {\bf Arbitrary Precision real valued numbers} --- Sam Moore
+
+ We plan to investigate the representation of real values to a high or arbitrary degree of precision. Such representations would allow for a document to be implemented using a single global coordinate system. However, we would expect a decrease in performance with increased complexity of the data structure used to represent a real value. Both software and hardware techniques will be explored. We will also consider the limitations imposed by performing calculations on the GPU or CPU.
+
+ Starting points for research in this area are Priest's 1991 paper, ``Algorithms for Arbitrary Precision Floating Point Arithmetic''\cite{priest1991algorithms}, and Goldberg's 1992 paper ``The design of floating point data types''\cite{goldberg1992thedesign}. A more recent and comprehensive text book, ``Handbook of Floating Point Arithmetic''\cite{HFP}, published in 2010, has also been identified as highly relevant.
+
+ \item {\bf Local coordinate systems} --- David Gow \cite{proposalGow}
+
+ An alternative approach involves segmenting the document into different regions, using fixed precision floats to define primitives within each region. A quadtree or similar spatial data structure could be employed to identify and render those regions currently visible in the document viewer.
+
+\end{enumerate}
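As a concrete illustration of the trade-off in the first approach, the sketch below uses Python's standard library `fractions` purely for brevity; it is not the proposed implementation. Exact rational coordinates in a single global coordinate system survive a large translation that destroys a double-precision coordinate, at the cost of a representation whose size can grow with each operation.

```python
from fractions import Fraction

# A primitive's x coordinate, stored exactly as a rational number.
x = Fraction(1, 3)

# Translate the view far away and back: rationals are closed under
# +, -, *, and /, so the coordinate is recovered exactly.
assert (x + 10**20) - 10**20 == Fraction(1, 3)

# The same translation in IEEE-754 doubles absorbs the coordinate
# entirely: 1/3 is far below the rounding step near 1e20.
assert ((1 / 3) + 1e20) - 1e20 == 0.0
```

The growing numerator and denominator of such exact representations are one source of the performance decrease anticipated above.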
+\pagebreak
+We aim to compare these and any additional implementations considered using the following metrics:
+\begin{enumerate}
+
+ \item {\bf Performance vs Number of Primitives}
+
+ As it is clearly desirable to include more objects in a document, this is a natural metric for the usefulness of an implementation.
+ We will compare the performance of rendering different implementations, using several ``standard'' test documents.
+
+ \item {\bf Performance vs Visible Primitives}
+
+ There will inevitably be an overhead to all primitives in the document, whether drawn or not.
+ As the structure of the document format and rendering algorithms may be designed independently, we will repeat the above tests considering only the number of visible primitives.
+
+
+ \item {\bf Performance vs Zoom Level}
+
+ We will also consider the performance of rendering at zoom levels that include primitives on both small and large scales, since these are the cases under which floating point precision causes problems in the PostScript and PDF standards.
+
+ \item {\bf Performance whilst translating and scaling}
+
+ Whilst changing the view, it is ideal that the document be re-rendered as efficiently as possible, to avoid disorienting and confusing the user.
+ We will therefore compare the speed of rendering as the standard documents are translated or scaled at a constant rate.
+
+ \item {\bf Artifacts and Limitations on Precision}
+
+ As we are unlikely to achieve truly ``infinite'' precision, qualitative comparisons of the accuracy of rendering under different implementations should be made.
+
+\end{enumerate}
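The comparisons above presuppose a repeatable measurement harness; a minimal sketch is given below, in which the `render` callable and the test documents are placeholders for whichever implementation and ``standard'' documents are under test, not part of our testbed.

```python
import time

def benchmark(render, documents, repeats=10):
    """Return the mean wall-clock time of render(doc) per test document.

    `render` and `documents` stand in for an implementation under
    test and a dictionary of standard test documents.
    """
    results = {}
    for name, doc in documents.items():
        start = time.perf_counter()
        for _ in range(repeats):
            render(doc)
        results[name] = (time.perf_counter() - start) / repeats
    return results

# Trivial stand-in documents and renderer, for demonstration only:
docs = {"lines": list(range(1000)), "polygons": list(range(5000))}
times = benchmark(lambda doc: sum(doc), docs)
```

Re-running the same harness while counting only visible primitives, or while varying the zoom level or translating at a constant rate, yields the remaining metrics.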
+
+\section{Software and Hardware Requirements}
+
+Due to the relative immaturity and inconsistency of graphics drivers on mobile devices, our proof of concept will be developed for a conventional GNU/Linux desktop or laptop computer using OpenGL. However, the techniques explored could easily be extended to other platforms and libraries.
+
+
+\pagebreak
+
+\section{Timeline}
+
+Deadlines enforced by the Faculty of Engineering, Computing and Mathematics are \emph{italicised}.\footnote{David Gow is being assessed under the 2014 rules for a BEng (Software) Final Year Project, whilst the author is being assessed under the 2014 rules for a BEng (Mechatronics) Final Year Project; deadlines and requirements as shown in Gow's proposal\cite{proposalGow} may differ}.
+
+\begin{center}
+\begin{tabular}{l|p{0.5\textwidth}}
+ {\bf Date} & {\bf Milestone}\\
+ \hline
+ $1^{\text{st}}$ May & Testbed Software (basic document format and viewer) completed and approaches for extending to allow infinite precision identified. \\
+ \hline
+ ? May & Draft Progress Report and Literature Review \\
+ \hline
+ $26^{\text{th}}$ May & \emph{Progress Report and Literature Review due.}\\
+ \hline
+ $9^{\text{th}}$ June & Demonstrations of limitations of floating point precision in the Testbed software. \\
+ \hline
+ $1^{\text{st}}$ July & At least one implementation of infinite precision for basic primitives (lines, polygons, curves) completed. Other implementations, advanced features, and areas for more detailed research identified. \\
+ \hline
+ $1^{\text{st}}$ August & Experiments and comparison of various infinite precision implementations completed. \\
+ \hline
+ $1^{\text{st}}$ September & Advanced features implemented and tested, work underway on Final Report. \\
+ \hline
+ TBA & \emph{Conference Abstract and Presentation due.} \\
+ \hline
+ $10^{\text{th}}$ October & \emph{Draft of Final Report due.} \\
+ \hline
+ $27^{\text{th}}$ October & \emph{Final Report due.}\\
+ \hline
+\end{tabular}
+\end{center}
+
+
At the fundamental level, a document is a means to convey information. The limitations of a digital document format therefore restrict the types and quality of information that can be communicated. Whilst modern document formats are now able to include increasingly complex dynamic content, they still suffer from early views of a document as a static page, to be viewed at a fixed scale and position. In this report, we focus on the limitations of modern document formats (including PDF, PostScript, and SVG) with regard to the level of detail, or precision, at which primitives can be drawn. We propose a research project to investigate whether it is possible to obtain an ``infinite precision'' document format, capable of including primitives created at an arbitrary level of zoom.
-
-\rephrase{Move to introduction? But it discusses the Introduction :S} \\
-In Chapter \ref{Introduction} we give an overview of the current state of the research in document formats, and the motivation for implementing ``infinite precision'' in a document format. We will outline our approach to research in collaboration with David Gow\cite{}. In Chapter \ref{Background} we provide more detailed background examining the literature related to rendering, interpreting, and creating document formats, as well as possible techniques for increased and possibly infinite precision. In Chapter \ref{Progress} gives the current state of our research and the progress towards the goals outlined in Chapter \ref{Introduction}. In Chapter \ref{Conclusion} we will conclude with a summary of our findings and goals.
-
{\bf Keywords:} \emph{document formats, precision, floating point, graphics, OpenGL, VHDL, PostScript, PDF, bootstraps}
-
-\rephrase{TODO: Make document smaller; currently 16 pages with almost no content; limit is 20 with actual content}
%\renewcommand{\baselinestretch}{1.5} % Uncomment for 1.5 spacing between lines
%\parindent 0pt % sets leading space for paragraphs
-
+\usepackage{ulem}
%\usepackage{natbib}
\usepackage{makeidx}
\usepackage{graphicx}
\usepackage{mdframed}
\usepackage[compact]{titlesec}
\usepackage[table]{xcolor}
-%\titlespacing{\chapter}{0pt}{-50pt}{0pt}
+\titlespacing{\chapter}{0pt}{-50pt}{0pt}
% spacing glue: how to read {12pt plus 4pt minus 2pt}
% 12pt is what we would like the spacing to be
% plus 4pt means that TeX can stretch it by at most 4pt
% minus 2pt means that TeX can shrink it by at most 2pt
-%\titlespacing\section{0pt}{0pt plus 0pt minus 14pt}{0pt plus 0pt minus 14pt}
-%\titleformat{\chapter}
-%{\normalfont\LARGE\bfseries}{\thechapter.}{1em}{}
+\titlespacing\section{0pt}{0pt plus 0pt minus 14pt}{0pt plus 0pt minus 14pt}
+\titleformat{\chapter}
+{\normalfont\LARGE\bfseries}{\thechapter.}{1em}{}
%\usepackage[usenames,dvipsnames]{color}
%\usepackage{listings} % For code snippets
\include{meta/Titlepage} % This is who you are
+\include{meta/Abstract} % This is your thesis abstract
+
\newpage
\include{meta/Acknowledgments} % This is who you thank
-\newpage
-
-\include{meta/Abstract} % This is your thesis abstract
\pagenumbering{roman}
\newpage
% Do the table of Contents and lists of figures and tables
%---------------------------------------------------------
\linespread{0.3}
-% Do we need these for a fresher guide?
{\small\tableofcontents}
-%\listoffigures
+\listoffigures
\markboth{}{}
\linespread{1.5}
\newpage
%Include the chapters!
\include{chapters/Introduction}
+\include{chapters/Proposal}
\include{chapters/Background}
\include{chapters/Progress}
\include{chapters/Conclusion}