diff --git a/LitReviewDavid.tex b/LitReviewDavid.tex
index e5ad1fe..a8947e8 100644
--- a/LitReviewDavid.tex
+++ b/LitReviewDavid.tex
@@ -1,6 +1,9 @@
 \documentclass[a4paper,10pt]{article}
 \usepackage[utf8]{inputenc}
 \usepackage{hyperref}
+\usepackage{graphicx}
+\usepackage{amsmath}
+\usepackage{amssymb}
 
 %opening
 \title{Literature Review}
@@ -19,6 +22,9 @@ could be passed on from person to person without them ever meeting.
 And thus the document was born.
 Traditionally, documents have been static: just marks on paper, but with the advent of computers many more
 possibilities open up.
+
+\section{Document Formats}
+
 Most existing document formats --- such as the venerable PostScript and PDF --- are, however, designed to imitate
 existing paper documents, largely to allow for easy printing. In order to truly take advantage of the possibilities
 that operating in the digital domain opens up to us, we must look to new formats.
@@ -28,6 +34,19 @@ the content of the document to be explored in ways that perhaps the author had n
 However, these data-driven formats typically do not support fixed layouts, and the display differs
 from renderer to renderer.
 
+Ultimately, there are two fundamental stages by which all documents --- digital or otherwise --- are produced and
+displayed: \emph{layout} and \emph{display}. The \emph{layout} stage is where the positions and sizes of text and
+other graphics are determined, while the \emph{display} stage actually produces the final output, whether as ink
+on paper or pixels on a computer monitor.
+
+Different document formats approach these stages in different ways. Some treat the document as a program, written in
+a Turing-complete document language with instructions which emit shapes to be displayed. These shapes are either displayed
+immediately, as in PostScript, or stored in another file, as with \TeX{} or \LaTeX, which emit a \texttt{DVI} file. Most other
+document formats use a \emph{Document Object Model} --- a list or tree of objects to be rendered --- as in \texttt{DVI}, \texttt{PDF},
+\texttt{HTML}\footnote{Some of these formats --- most notably \texttt{HTML} --- support a scripting language such as JavaScript,
+which permits the DOM to be modified while the document is being viewed.} and \texttt{SVG}\cite{svg2011-1.1}. Of these, only \texttt{HTML} and \TeX{} typically
+store documents in pre-layout stages, whereas even Turing-complete document formats such as PostScript typically encode
+documents whose elements have already been placed.
+
 Existing document formats, due to being designed to model paper, have limited precision (8 decimal digits for
 PostScript\cite{plrm}, 5 decimal digits for PDF\cite{pdfref17}). This matches the limited resolution of printers
 and ink, but is restrictive compared to what ought to be possible
@@ -36,12 +55,18 @@ to issues with numeric precision.
 
 \section{Rendering}
 
-Computer graphics comes in two forms: bit-mapped (or raster) graphics, which is defined by an array of pixel colours,
+Computer graphics comes in two forms: bit-mapped (or raster) graphics, which is defined by an array of pixel colours;
 and \emph{vector} graphics, defined by mathematical descriptions of objects.
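+
+To make this distinction concrete, the following sketch --- our own illustration in C, not taken
+from any of the surveyed works --- samples a vector description of a circle (just its centre and
+radius) into a $32 \times 32$ bitmap. Once sampled, no further detail can be recovered without
+returning to the vector description:
+
+\begin{verbatim}
+#include <stdio.h>
+
+#define SIZE 32
+
+/* A vector description of a circle: just its centre and radius. */
+typedef struct { double cx, cy, r; } Circle;
+
+int main(void)
+{
+    Circle c = {SIZE / 2.0, SIZE / 2.0, SIZE / 3.0};
+    /* Rasterise by sampling each pixel centre against the circle's
+     * equation; zooming past this resolution must either re-sample
+     * the vector form or interpolate between the pixels. */
+    for (int y = 0; y < SIZE; ++y) {
+        for (int x = 0; x < SIZE; ++x) {
+            double dx = x + 0.5 - c.cx, dy = y + 0.5 - c.cy;
+            putchar(dx * dx + dy * dy <= c.r * c.r ? '#' : '.');
+        }
+        putchar('\n');
+    }
+    return 0;
+}
+\end{verbatim}
+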
Bit-mapped graphics are well suited to photographs and match how cameras, printers and monitors work. However,
bitmap devices do not handle zooming beyond their ``native'' resolution --- the resolution where one document
pixel maps to one display pixel --- exhibiting an artefact called pixelation, where the pixel structure becomes
evident. Attempts to use interpolation to hide this effect are
-never entirely successful, and sharp edges, such as those found in text and diagrams, are particularly effected.
+never entirely successful, and sharp edges, such as those found in text and diagrams, are particularly affected.
+
+\begin{figure}[h]
+	\centering
+	\includegraphics[width=0.8\linewidth]{figures/vectorraster_example}
+	\caption{A circle as a vector image and a $32 \times 32$ pixel raster image}
+\end{figure}
+
 
Vector graphics lack many of these problems: the representation is independent of the output resolution, being
rather an abstract description of what is to be rendered, typically as a combination of simple geometric shapes
like lines,
@@ -67,74 +92,71 @@ API like \texttt{OpenGL}\cite{openglspec}.
 
More complex shapes like B\'ezier curves can be rendered by combining the use of bitmapped
textures (possibly using signed-distance fields\cite{leymarie1992fast}\cite{frisken2000adaptively}\cite{green2007improved})
with polygons approximating the curve's shape\cite{loop2005resolution}\cite{loop2007rendering}.
-Indeed, there are several implementations of these vector graphics
-
-\emph{GPU Rendering}\cite{loop2005resolution}, OpenVG implementation on GLES: \cite{oh2007implementation},
-\cite{robart2009openvg}
-
-\emph{Existing implementations of document format rendering}
-
-\subsection{Xr: Cross-device Rendering for Vector Graphics\cite{worth2003xr}}
-
-Xr (now known as Cairo) is an implementation of the PDF v1.4 rendering model,
-independent of the PDF or PostScript file formats, and is now widely used
-as a rendering API. In this paper, Worth and Packard describe the PDF v1.4 rendering
-model, and their PostScript-derived API for it.
-
-The PDF v1.4 rendering model is based on the original PostScript model, based around
-a set of \emph{paths} (and other objects, such as raster images) each made up of lines
-and B\'{e}zier curves, which are transformed by the ``Current Transformation Matrix.''
-Paths can be \emph{filled} in a number of ways, allowing for different handling of self-intersecting
-paths, or can have their outlines \emph{stroked}.
-Furthermore, paths can be painted with RGB colours and/or patterns derived from either
-previously rendered objects or external raster images.
-PDF v1.4 extends this to provide, amongst other features, support for layering paths and
-objects using Porter-Duff compositing\cite{porter1984compositing}, giving each painted path
-the option of having an $\alpha$ value and a choice of any of the Porter-Duff compositing
-methods.
-
-The Cairo library approximates the rendering of some objects (particularly curved objects
-such as splines) with a set of polygons. An \texttt{XrSetTolerance} function allows the user
-of the library to set an upper bound on the approximation error in fractions of device pixels,
-providing a trade-off between rendering quality and performance. The library developers found
-that setting the tolerance to greater than $0.1$ device pixels resulted in errors visible to the
-user.
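+
+As a minimal sketch of the polygon-approximation approach --- our own illustration in C, with
+an arbitrary tolerance of $0.1$, rather than code from any of the implementations surveyed ---
+a quadratic B\'ezier curve can be recursively subdivided by de Casteljau's algorithm until
+every piece lies within the tolerance of its chord, at which point the pieces' endpoints form
+the approximating polygon:
+
+\begin{verbatim}
+#include <math.h>
+#include <stdio.h>
+
+typedef struct { double x, y; } Point;
+
+static Point mid(Point a, Point b)
+{
+    return (Point){(a.x + b.x) / 2, (a.y + b.y) / 2};
+}
+
+/* Distance from the curve's midpoint to the chord's midpoint,
+ * |2*p1 - p0 - p2| / 4: zero when the curve is already a line. */
+static double flatness(Point p0, Point p1, Point p2)
+{
+    double ux = 2 * p1.x - p0.x - p2.x;
+    double uy = 2 * p1.y - p0.y - p2.y;
+    return sqrt(ux * ux + uy * uy) / 4;
+}
+
+/* Subdivide until nearly flat, emitting polygon vertices. */
+static void flatten(Point p0, Point p1, Point p2, double tol)
+{
+    if (flatness(p0, p1, p2) <= tol) {
+        printf("lineto %g %g\n", p2.x, p2.y);
+        return;
+    }
+    Point l = mid(p0, p1), r = mid(p1, p2), m = mid(l, r);
+    flatten(p0, l, m, tol);
+    flatten(m, r, p2, tol);
+}
+
+int main(void)
+{
+    flatten((Point){0, 0}, (Point){50, 100}, (Point){100, 0}, 0.1);
+    return 0;
+}
+\end{verbatim}
+
+A smaller tolerance yields more, shorter line segments --- the same quality/performance
+trade-off exposed by Cairo's tolerance setting.
+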
-
-\subsection{Glitz: Hardware Accelerated Image Compositing using OpenGL\cite{nilsson2004glitz}}
-
-This paper describes the implementation of an \texttt{OpenGL} based rendering backend for
-the \texttt{Cairo} library.
-
-The paper describes how OpenGL's Porter-Duff compositing is easily suited to the Cairo/PDF v1.4
-rendering model. Similarly, traditional OpenGL (pre-version 3.0 core) support a matrix stack
-of the same form as Cairo.
-
-The ``Glitz'' backend will emulate support for tiled, non-power-of-two patterns/textures if
-the hardware does not support it.
-
-Glitz can render both triangles and trapezoids (which are formed from pairs of triangles).
-However, it cannot guarantee that the rasterization is pixel-precise, as OpenGL does not proveide
-this consistently.
-
-Glitz also supports multi-sample anti-aliasing, convolution filters for raster image reads (implemented
-with shaders).
-
-Performance was much improved over the software rasterization and over XRender accellerated rendering
-on all except nVidia hardware. However, nVidia's XRender implementation did slow down significantly when
-some transformations were applied.
-
-
-
-\textbf{Also look at \texttt{NV\_path\_rendering}} \cite{kilgard2012gpu}
-
-\section{Floating-Point Precision}
-
-How floating-point works and what its behaviour is w/r/t range and precision
-\cite{goldberg1991whatevery}
-\cite{goldberg1992thedesign}
-
-Arb. precision exists
-
+Indeed, there are several implementations of entire vector graphics systems using OpenGL:
+OpenVG\cite{robart2009openvg} on top of OpenGL ES\cite{oh2007implementation}; the Cairo\cite{worth2003xr}
+library, based around the PostScript/PDF rendering model, with its ``Glitz'' OpenGL backend\cite{nilsson2004glitz};
+and the SVG/PostScript GPU renderer by nVidia\cite{kilgard2012gpu}, exposed as an OpenGL extension\cite{kilgard300programming}.
+
+
+\section{Numeric formats}
+
+On modern computer architectures, there are two basic number formats supported:
+fixed-width integers and \emph{floating-point} numbers. Typically, computers
+natively support integers of up to 64 bits, capable of representing all integers
+between $0$ and $2^{64} - 1$\footnote{Most machines also support \emph{signed} integers,
+which have the same cardinality as their \emph{unsigned} counterparts, but which
+represent integers between $-(2^{63})$ and $2^{63} - 1$.}.
+
+By introducing a fractional component (analogous to a decimal point), we can convert
+integers to \emph{fixed-point} numbers, which trade a reduced range for a fixed amount
+of fractional precision. For example, a number in 4.4 fixed-point format would have four
+bits representing the integer component, and four bits representing the fractional component:
+\begin{equation}
+	\underbrace{0101}_\text{integer component}.\underbrace{1100}_\text{fractional component} = 5.75
+\end{equation}
+
+Floating-point numbers\cite{goldberg1992thedesign} are the binary equivalent of scientific
+notation: each number consists of an exponent ($e$) and a mantissa ($m$), with its value
+given by
+\begin{equation}
+	n = 2^{e} \times m
+\end{equation}
+
+The IEEE 754 standard\cite{ieee754std1985} defines several floating-point data types
+which are used\footnote{Many systems implement the IEEE 754 standard's storage formats,
+but do not implement arithmetic operations in accordance with this standard.} by most
+computer systems.
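+
+As a worked example --- reusing the value from the fixed-point illustration above --- $5.75$
+is normalised so that the mantissa lies in $[1, 2)$:
+\begin{equation}
+	5.75 = 101.11_2 = 1.0111_2 \times 2^{2} \implies m = 1.0111_2,\ e = 2
+\end{equation}
+For normal numbers the leading $1$ of the mantissa is implicit, so only the fractional
+bits (here $0111$) need to be stored.
+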
+The standard defines 32-bit (8-bit exponent, 23-bit mantissa, 1 sign bit) and
+64-bit (11-bit exponent, 52-bit mantissa, 1 sign bit) formats\footnote{The 2008
+revision to this standard\cite{ieee754std2008} adds some additional formats, but is
+less widely supported in hardware.}, which can store approximately 7 and 15 decimal digits
+of precision respectively.
+
+Floating-point numbers behave quite differently to integers or fixed-point numbers, as
+the representable numbers are not evenly distributed. Large numbers are stored with less
+absolute precision than numbers close to zero. This can present problems in documents when
+zooming in on objects far from the origin.
+
+IEEE floating-point has some interesting features as well, including values for negative zero,
+positive and negative infinity, the ``Not a Number'' (NaN) value and \emph{denormal} values, which
+trade precision for range when dealing with very small numbers. Indeed, with these values,
+IEEE 754 floating-point equality does not form an equivalence relation, which can cause issues
+when not considered carefully\cite{goldberg1991whatevery}.
+
+There also exist formats for storing numbers with arbitrary precision and/or range.
+Some programming languages support ``big integer''\cite{java_bigint} types which can
+represent any integer that can fit in the system's memory. Similarly, there are
+arbitrary-precision floating-point data types\cite{java_bigdecimal}\cite{boost_multiprecision}
+which can represent any number of the form
+\begin{equation}
+	\frac{n}{2^d} \qquad n,d \in \mathbb{Z}
+\end{equation}
+These types are typically built from several native data types such as integers and floats,
+paired with custom routines implementing arithmetic primitives\cite{priest1991algorithms},
+and are therefore likely to be slower than the native types on which they are built.
+
+While GPUs have traditionally supported only an approximation of IEEE 754's 32-bit floats,
+modern graphics processors also support 16-bit\cite{nv_half_float} and 64-bit\cite{arb_gpu_shader_fp64}
+IEEE floats. Higher-precision numeric types can be implemented or used on the GPU, but are
+slow\cite{emmart2010high}.
 
@@ -142,9 +164,31 @@ slow.
 
 \section{Quadtrees}
 
-The quadtree is a data structure which
-\cite{finkel1974quad}
-
+When viewing or processing a small part of a large document, it may be helpful to
+avoid processing --- that is, to \emph{cull} --- those parts of the document which
+are not on-screen.
+
+\begin{figure}[h]
+	\centering
+	\includegraphics[width=0.4\linewidth]{figures/quadtree_example}
+	\caption{A simple quadtree.}
+\end{figure}
+
+The quadtree\cite{finkel1974quad} is a data structure --- one of a family of \emph{spatial}
+data structures --- which recursively breaks down space into smaller subregions
+which can be processed independently. Points (or other objects) are added to a single
+node; when certain criteria are met --- typically when the number of points in a node
+exceeds a maximum, though in our case more likely when the precision required exceeds
+that supported by the data type in use --- the node is split into four equal-sized
+subregions, and each point is attached to the subregion which contains it.
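+
+As a minimal sketch of this structure --- our own illustration in C, using a fixed maximum
+number of points per node as the splitting criterion (the precision-based criterion described
+above would replace this test) --- the essential insert-and-split logic is:
+
+\begin{verbatim}
+#include <stdlib.h>
+
+#define CAPACITY 4
+
+typedef struct { double x, y; } Point;
+
+typedef struct Node {
+    double x, y, size;       /* lower-left corner and side length */
+    Point points[CAPACITY];
+    int count;               /* points stored; -1 once split */
+    struct Node *child[4];   /* SW, SE, NW, NE subregions */
+} Node;
+
+static Node *node_new(double x, double y, double size)
+{
+    Node *n = calloc(1, sizeof *n);
+    n->x = x; n->y = y; n->size = size;
+    return n;
+}
+
+static void insert(Node *n, Point p)
+{
+    if (n->count >= 0 && n->count < CAPACITY) {
+        n->points[n->count++] = p;   /* leaf with spare capacity */
+        return;
+    }
+    if (n->count == CAPACITY) {      /* criterion met: split */
+        double h = n->size / 2;
+        n->child[0] = node_new(n->x,     n->y,     h);
+        n->child[1] = node_new(n->x + h, n->y,     h);
+        n->child[2] = node_new(n->x,     n->y + h, h);
+        n->child[3] = node_new(n->x + h, n->y + h, h);
+        n->count = -1;
+        for (int i = 0; i < CAPACITY; ++i)   /* redistribute */
+            insert(n, n->points[i]);
+    }
+    /* attach the point to the subregion which contains it */
+    int ix = p.x >= n->x + n->size / 2;
+    int iy = p.y >= n->y + n->size / 2;
+    insert(n->child[iy * 2 + ix], p);
+}
+\end{verbatim}
+
+A real implementation would also need to bound the subdivision depth --- coincident points
+would otherwise split forever --- though a precision-based criterion bounds the depth naturally.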
+
+In this project, we will be experimenting with a form of quadtree in which each
+node has its own independent coordinate system, allowing us to store some spatial
+information\footnote{One bit per coordinate, per level of the quadtree.} within the
+quadtree structure, eliminating redundancy in the coordinates of nearby objects.
+
+Other spatial data structures exist, such as the KD-tree\cite{bentley1975multidimensional},
+which partitions space along any axis-aligned line, and the BSP tree\cite{fuchs1980onvisible},
+which splits space along arbitrary lines that need not be axis-aligned. We believe, however,
+that the simpler correspondence between binary coordinate representations and the quadtree's
+subdivisions makes it a better avenue for initial research to explore.
 
 \bibliographystyle{unsrt}
 \bibliography{papers}