diff --git a/LitReviewDavid.tex b/LitReviewDavid.tex
index e84b6f5..e5ad1fe 100644
--- a/LitReviewDavid.tex
+++ b/LitReviewDavid.tex
@@ -36,71 +36,38 @@ to issues with numeric precision.
 
 \section{Rendering}
 
-As existing displays (and printers) are bit-mapped devices, one of the core problems which must be solved when
-designing a document format is how it is to be \emph{rasterized} into a bitmap at a given resolution.
-
-\subsection{Compositing Digital Images\cite{porter1984compositing}}
-
-
-
-Perter and Duff's classic paper "Compositing Digital Images" lays the
-foundation for digital compositing today. By providing an "alpha channel,"
-images of arbitrary shapes — and images with soft edges or sub-pixel coverage
-information — can be overlayed digitally, allowing separate objects to be
-rasterized separately without a loss in quality.
-
-Pixels in digital images are usually represented as 3-tuples containing
-(red component, green component, blue component). Nominally these values are in
-the [0-1] range. In the Porter-Duff paper, pixels are stored as $(R,G,B,\alpha)$
-4-tuples, where alpha is the fractional coverage of each pixel. If the image
-only covers half of a given pixel, for example, its alpha value would be 0.5.
-
-To improve compositing performance, albeit at a possible loss of precision in
-some implementations, the red, green and blue channels are premultiplied by the
-alpha channel. This also simplifies the resulting arithmetic by having the
-colour channels and alpha channels use the same compositing equations.
-
-Several binary compositing operations are defined:
-\begin{itemize}
-\item over
-\item in
-\item out
-\item atop
-\item xor
-\item plus
-\end{itemize}
-
-The paper further provides some additional operations for implementing fades and
-dissolves, as well as for changing the opacity of individual elements in a
-scene.
-
-The method outlined in this paper is still the standard system for compositing
-and is implemented almost exactly by modern graphics APIs such as \texttt{OpenGL}. It is
-all but guaranteed that this is the method we will be using for compositing
-document elements in our project.
-
-\subsection{Bresenham's Algorithm: Algorithm for computer control of a digital plotter\cite{bresenham1965algorithm}}
-Bresenham's line drawing algorithm is a fast, high quality line rasterization
-algorithm which is still the basis for most (aliased) line drawing today. The
-paper, while originally written to describe how to control a particular plotter,
-is uniquely suited to rasterizing lines for display on a pixel grid.
-
-Lines drawn with Bresenham's algorithm must begin and end at integer pixel
-coordinates, though one can round or truncate the fractional part. In order to
-avoid multiplication or division in the algorithm's inner loop,
-
-The algorithm works by scanning along the long axis of the line, moving along
-the short axis when the error along that axis exceeds 0.5px. Because error
-accumulates linearly, this can be achieved by simply adding the per-pixel
-error (equal to (short axis/long axis)) until it exceeds 0.5, then incrementing
-the position along the short axis and subtracting 1 from the error accumulator.
-
-As this requires nothing but addition, it is very fast, particularly on the
-older CPUs used in Bresenham's time. Modern graphics systems will often use Wu's
-line-drawing algorithm instead, as it produces antialiased lines, taking
-sub-pixel coverage into account. Bresenham himself extended this algorithm to
-produce Bresenham's circle algorithm. The principles behind the algorithm have
-also been used to rasterize other shapes, including B\'{e}zier curves.
+Computer graphics comes in two forms: bit-mapped (or raster) graphics, defined by an array of pixel colours,
+and \emph{vector} graphics, defined by mathematical descriptions of objects. Bit-mapped graphics are well suited to
+photographs and match how cameras, printers and monitors work. However, bitmap devices do not handle zooming beyond
+their ``native'' resolution --- the resolution at which one document pixel maps to one display pixel --- and instead
+exhibit an artefact called pixelation, where the pixel structure becomes evident. Attempts to use interpolation to
+hide this effect are never entirely successful, and sharp edges, such as those found in text and diagrams, are
+particularly affected.
+
+Vector graphics avoid many of these problems: the representation is independent of the output resolution, being
+instead an abstract description of what is to be rendered, typically as a combination of simple geometric shapes such
+as lines, arcs and B\'ezier curves.
+As existing displays (and printers) are bit-mapped devices, vector documents must be \emph{rasterized} into a bitmap
+at a given resolution before they can be displayed or printed. The resulting bitmap is an approximation of the vector
+image at that resolution.
+
+This project will be based around vector graphics, as these properties make them better suited to experimenting with
+zoom quality.
+
+The rasterization process typically operates on one ``object'' or ``shape'' at a time: there are specialised algorithms
+for rendering lines\cite{bresenham1965algorithm}, triangles\cite{giesen2013triangle}, polygons\cite{pineda1988parallel}
+and B\'ezier curves\cite{goldman_thefractal}. These shapes are rasterized independently and composited in the bitmap
+domain using Porter-Duff compositing\cite{porter1984compositing} into a single image. This allows complex images to be
+formed from many simple pieces, and allows for layered translucent objects, which would otherwise require solving some
+very complex constructive geometry problems.
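+
+As a brief illustration of how this compositing model works (the notation here is introduced for illustration only:
+$C_A$ and $\alpha_A$ denote the premultiplied colour and coverage of an element $A$), the commonly used ``over''
+operator places $A$ on top of a background $B$ as
+\begin{displaymath}
+C_O = C_A + C_B(1 - \alpha_A), \qquad \alpha_O = \alpha_A + \alpha_B(1 - \alpha_A),
+\end{displaymath}
+so each rasterized piece only needs to carry a colour and a coverage value to be composited correctly over the pieces
+below it.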
+
+While rasterization was traditionally done entirely in software, modern computers and mobile devices have hardware
+support for rasterizing some basic primitives --- typically lines and triangles --- designed for rendering 3D scenes.
+This hardware is usually programmed through an API such as \texttt{OpenGL}\cite{openglspec}.
+
+More complex shapes such as B\'ezier curves can be rendered on this hardware by combining bitmapped textures (possibly
+using signed-distance fields\cite{leymarie1992fast}\cite{frisken2000adaptively}\cite{green2007improved}) with polygons
+that approximate the curve's shape\cite{loop2005resolution}\cite{loop2007rendering}.
+
+Indeed, there are several implementations of vector graphics rendering on the GPU, including resolution-independent
+curve rendering\cite{loop2005resolution} and OpenVG implementations on top of OpenGL ES\cite{oh2007implementation}\cite{robart2009openvg}.
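+
+As a small worked example of the signed-distance-field idea mentioned above (the circle and the notation are chosen
+here purely for illustration): for a circle of radius $r$ centred at a point $c$, the signed distance of a point $p$ is
+\begin{displaymath}
+d(p) = \| p - c \| - r,
+\end{displaymath}
+which is negative inside the shape, zero on its boundary and positive outside. Sampling $d$ into a texture and
+thresholding it at zero in a fragment shader gives much sharper edges under magnification than thresholding a bitmap
+of the shape itself\cite{green2007improved}.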