From: David Gow
Date: Mon, 28 Apr 2014 03:25:03 +0000 (+0800)
Subject: Some horrifyingly bad Lit Review stuff.
X-Git-Url: https://git.ucc.asn.au/?p=ipdf%2Fdocuments.git;a=commitdiff_plain;h=66a3095577df07859b7ef9e8378238a4ef56d278

Some horrifyingly bad Lit Review stuff.
---

diff --git a/LitReviewDavid.tex b/LitReviewDavid.tex
new file mode 100644
index 0000000..e84b6f5
--- /dev/null
+++ b/LitReviewDavid.tex
@@ -0,0 +1,185 @@
+\documentclass[a4paper,10pt]{article}
+\usepackage[utf8]{inputenc}
+\usepackage{hyperref}
+
+%opening
+\title{Literature Review}
+\author{David Gow}
+
+\begin{document}
+
+\maketitle
+
+\section{Introduction}
+
+Since mankind first climbed down from the trees, it is our ability to
+communicate that has made us unique. Once ideas could be passed from
+person to person, it made sense to keep a permanent record of them: one
+which could be passed on without author and reader ever meeting.
+
+And thus the document was born.
+
+Traditionally, documents have been static: just marks on paper. With
+the advent of computers, many more possibilities open up. Most existing
+document formats --- such as the venerable PostScript and PDF --- are,
+however, designed to imitate existing paper documents, largely to allow
+for easy printing. To truly take advantage of the possibilities the
+digital domain opens up to us, we must look to new formats.
+
+Formats such as \texttt{HTML} allow for greater interactivity and a
+more data-driven model, letting the content of a document be explored
+in ways the author may not have anticipated.\cite{hayes2012pixels}
+However, these data-driven formats typically do not support fixed
+layouts, and the display differs from renderer to renderer.
+
+Because they are designed to model paper, existing document formats
+have limited precision (8 decimal digits for PostScript\cite{plrm},
+5 decimal digits for PDF\cite{pdfref17}). This matches the finite
+resolution of printers and ink, but falls short of what ought to be
+possible with ``zoom'' functionality, which must be prevented from
+working beyond a certain scale factor, lest artefacts appear due to
+issues with numeric precision.
+
+\section{Rendering}
+
+As existing displays (and printers) are bit-mapped devices, one of the
+core problems which must be solved when designing a document format is
+how it is to be \emph{rasterized} into a bitmap at a given resolution.
+
+\subsection{Compositing Digital Images\cite{porter1984compositing}}
+
+Porter and Duff's classic paper ``Compositing Digital Images'' lays the
+foundation for digital compositing today. By providing an ``alpha
+channel,'' images of arbitrary shapes (including images with soft edges
+or sub-pixel coverage information) can be overlaid digitally, allowing
+separate objects to be rasterized independently without a loss in
+quality.
+
+Pixels in digital images are usually represented as $(R, G, B)$
+3-tuples, with each component nominally in the $[0, 1]$ range. In the
+Porter-Duff paper, pixels are instead stored as $(R, G, B, \alpha)$
+4-tuples, where $\alpha$ is the fractional coverage of each pixel. If
+the image only covers half of a given pixel, for example, its alpha
+value would be 0.5.
+
+To improve compositing performance, albeit at a possible loss of
+precision in some implementations, the red, green and blue channels
+are premultiplied by the alpha channel. This also simplifies the
+resulting arithmetic by having the colour channels and the alpha
+channel use the same compositing equations.
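+
+To make this concrete, here is a minimal sketch in C (illustrative
+code, not taken from the paper): with premultiplied pixels, the most
+common operator, \emph{over} (the first of the operations listed
+below), applies the same equation to all four channels.
+
+\begin{verbatim}
+/* Premultiplied RGBA pixel: r, g and b have already been
+ * multiplied by a; all components are nominally in [0, 1]. */
+typedef struct { float r, g, b, a; } pixel_t;
+
+/* Porter-Duff "over": composite pixel A on top of pixel B.
+ * The same equation, c_A + c_B * (1 - alpha_A), applies to
+ * every channel, which is the simplification that
+ * premultiplication buys. */
+pixel_t over(pixel_t A, pixel_t B)
+{
+    pixel_t out;
+    out.r = A.r + B.r * (1.0f - A.a);
+    out.g = A.g + B.g * (1.0f - A.a);
+    out.b = A.b + B.b * (1.0f - A.a);
+    out.a = A.a + B.a * (1.0f - A.a);
+    return out;
+}
+\end{verbatim}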
+
+Several binary compositing operations are defined:
+\begin{itemize}
+\item over
+\item in
+\item out
+\item atop
+\item xor
+\item plus
+\end{itemize}
+
+The paper also provides operations for implementing fades and
+dissolves, as well as for changing the opacity of individual elements
+in a scene.
+
+The method outlined in this paper is still the standard system for
+compositing, and is implemented almost exactly by modern graphics APIs
+such as \texttt{OpenGL}. It is all but guaranteed that this is the
+method we will be using for compositing document elements in our
+project.
+
+\subsection{Bresenham's Algorithm: Algorithm for computer control of a digital plotter\cite{bresenham1965algorithm}}
+Bresenham's algorithm is a fast, high-quality line rasterization
+algorithm which is still the basis for most (aliased) line drawing
+today. While the paper was originally written to describe the control
+of a particular plotter, the algorithm is equally suited to
+rasterizing lines for display on a pixel grid.
+
+Lines drawn with Bresenham's algorithm must begin and end at integer
+pixel coordinates, though one can round or truncate the fractional
+part of the endpoints beforehand. All arithmetic in the algorithm's
+inner loop is kept to integer addition and subtraction, avoiding
+multiplication or division entirely.
+
+The algorithm works by scanning along the long axis of the line,
+moving along the short axis when the error along that axis exceeds
+0.5px. Because error accumulates linearly, this can be achieved by
+simply adding the per-pixel error (equal to
+$\Delta_{short}/\Delta_{long}$) at each step until it exceeds 0.5,
+then incrementing the position along the short axis and subtracting 1
+from the error accumulator. In practice, the error terms are scaled by
+$2\Delta_{long}$ so that they remain integers.
+
+As this requires nothing but addition and subtraction, it is very
+fast, particularly on the older CPUs used in Bresenham's time. Modern
+graphics systems will often use Wu's line-drawing algorithm instead,
+as it produces antialiased lines, taking sub-pixel coverage into
+account. Bresenham himself extended this algorithm to produce
+Bresenham's circle algorithm. The principles behind the algorithm have
+also been used to rasterize other shapes, including B\'{e}zier curves.
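+
+The inner loop is short enough to sketch in full. The following C
+fragment (an illustration assuming the first octant,
+$0 \le \Delta y \le \Delta x$, and a hypothetical \texttt{plot()}
+helper; it is not Bresenham's original plotter code) shows the scaled,
+integer-only form of the error accumulation described above:
+
+\begin{verbatim}
+extern void plot(int x, int y);  /* hypothetical pixel-setting helper */
+
+/* Draw a line from (x0, y0) to (x1, y1), assuming the first
+ * octant: 0 <= y1 - y0 <= x1 - x0. */
+void bresenham_line(int x0, int y0, int x1, int y1)
+{
+    int dx = x1 - x0;
+    int dy = y1 - y0;
+    int err = 2 * dy - dx;    /* error scaled by 2*dx: no fractions */
+
+    for (int x = x0, y = y0; x <= x1; x++) {
+        plot(x, y);
+        if (err > 0) {        /* error on the short axis passed 0.5px */
+            y++;              /* step along the short axis */
+            err -= 2 * dx;    /* subtract 1 (scaled by 2*dx) */
+        }
+        err += 2 * dy;        /* add the per-pixel error dy/dx */
+    }
+}
+\end{verbatim}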
+
+\emph{GPU rendering} of curves and vector graphics is an active area
+of research\cite{loop2005resolution}, including OpenVG implementations
+on OpenGL ES\cite{oh2007implementation}\cite{robart2009openvg}.
+
+Several \emph{existing implementations of document format rendering}
+are described below.
+
+\subsection{Xr: Cross-device Rendering for Vector Graphics\cite{worth2003xr}}
+
+Xr (now known as Cairo) is an implementation of the PDF v1.4 rendering
+model, independent of the PDF or PostScript file formats, and is now
+widely used as a rendering API. In this paper, Worth and Packard
+describe the PDF v1.4 rendering model, and their PostScript-derived
+API for it.
+
+The PDF v1.4 rendering model is derived from the original PostScript
+model: it is built around a set of \emph{paths} (and other objects,
+such as raster images), each made up of lines and B\'{e}zier curves,
+which are transformed by the ``Current Transformation Matrix.''
+Paths can be \emph{filled} in a number of ways, allowing for different
+handling of self-intersecting paths, or can have their outlines
+\emph{stroked}. Furthermore, paths can be painted with RGB colours
+and/or patterns derived from either previously rendered objects or
+external raster images.
+
+PDF v1.4 extends this to provide, amongst other features, support for
+layering paths and objects using Porter-Duff
+compositing\cite{porter1984compositing}, giving each painted path the
+option of having an $\alpha$ value and a choice of any of the
+Porter-Duff compositing operators.
+
+The Cairo library approximates the rendering of some objects
+(particularly curved objects such as splines) with a set of polygons.
+An \texttt{XrSetTolerance} function allows the user of the library to
+set an upper bound on the approximation error in fractions of device
+pixels, providing a trade-off between rendering quality and
+performance. The library developers found that setting the tolerance
+to greater than $0.1$ device pixels resulted in errors visible to the
+user.
+
+\subsection{Glitz: Hardware Accelerated Image Compositing using OpenGL\cite{nilsson2004glitz}}
+
+This paper describes the implementation of an \texttt{OpenGL}-based
+rendering backend for the \texttt{Cairo} library.
+
+It explains how OpenGL's Porter-Duff compositing is well suited to the
+Cairo/PDF v1.4 rendering model. Similarly, traditional OpenGL
+(pre-version 3.0 core) supports a matrix stack of the same form as
+Cairo's.
+
+The ``Glitz'' backend will emulate support for tiled,
+non-power-of-two patterns/textures if the hardware does not support
+them.
+
+Glitz can render both triangles and trapezoids (which are formed from
+pairs of triangles). However, it cannot guarantee that the
+rasterization is pixel-precise, as OpenGL does not provide this
+consistently.
+
+Glitz also supports multi-sample anti-aliasing and, via shaders,
+convolution filters for raster image reads.
+
+Performance was much improved over software rasterization and over
+XRender-accelerated rendering on all except nVidia hardware. However,
+nVidia's XRender implementation did slow down significantly when some
+transformations were applied.
+
+\textbf{Also look at \texttt{NV\_path\_rendering}}\cite{kilgard2012gpu}
+
+\section{Floating-Point Precision}
+
+Goldberg's papers describe how floating-point arithmetic works, and
+what its behaviour is with respect to range and
+precision\cite{goldberg1991whatevery}\cite{goldberg1992thedesign}.
+
+Arbitrary-precision arithmetic exists, removing these limits at a cost
+in performance. Higher-precision numeric types can also be implemented
+or used on the GPU, but are slow.\cite{emmart2010high}
+
+\section{Quadtrees}
+The quadtree is a data structure which recursively subdivides
+two-dimensional space into four quadrants, allowing spatial data to be
+stored and retrieved efficiently.\cite{finkel1974quad}
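+
+To sketch the idea (illustrative C, assuming a point quadtree as in
+Finkel and Bentley's paper; the names here are hypothetical), each
+node stores one point and up to four children, one per quadrant, and
+insertion recurses into the quadrant containing the new point:
+
+\begin{verbatim}
+#include <stdlib.h>
+
+/* One node of a point quadtree: a stored point and four
+ * children, indexed by quadrant (SW, SE, NW, NE). */
+typedef struct node {
+    double x, y;
+    struct node *child[4];
+} node_t;
+
+/* Which quadrant of node n does (x, y) fall into? */
+static int quadrant(const node_t *n, double x, double y)
+{
+    return (x >= n->x ? 1 : 0) | (y >= n->y ? 2 : 0);
+}
+
+/* Insert (x, y) into the tree rooted at n; returns the root. */
+node_t *insert(node_t *n, double x, double y)
+{
+    if (n == NULL) {
+        n = calloc(1, sizeof *n);   /* children start out NULL */
+        n->x = x;
+        n->y = y;
+        return n;
+    }
+    int q = quadrant(n, x, y);
+    n->child[q] = insert(n->child[q], x, y);
+    return n;
+}
+\end{verbatim}
+
+\bibliographystyle{unsrt}
+\bibliography{papers}
+
+\end{document}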