\documentclass[8pt]{extarticle}
\usepackage{graphicx}
\usepackage{caption}
\usepackage{amsmath} % needed for math align

%\pagestyle{empty} % Uncomment if don't want page numbers
\parskip 8.2pt % sets spacing between paragraphs
%\renewcommand{\baselinestretch}{1.5} % Uncomment for 1.5 spacing between lines
\parindent 0pt % sets leading space for paragraphs
\linespread{0.8}

\newcommand{\vect}[1]{\boldsymbol{#1}} % Draw a vector

\lstset{basicstyle=\small}

\begin{document}

\pagestyle{fancy}
\fancyhead{}
\fancyfoot{}

\tableofcontents

\section{PostScript Language Reference Manual \cite{plrm}}

Adobe's official reference manual for PostScript. It is big.

\section{Portable Document Format Reference Manual \cite{pdfref17}}

Adobe's official reference for PDF. It is also big.

\pagebreak

\section{Portable Document Format (PDF) --- Finally... \cite{cheng2002portable}}

This is not spectacularly useful; it is basically an advertisement for Adobe software.
{\bf Intro}
\begin{itemize}
	\item Visual communication has been revolutionised by computing
	\item BUT there have always been problems in exchanging formats
	\item Filetypes like text, rich text, IGES, DXF, TIFF, JPEG, GIF solve problems for particular types of files only
	\item PDF solves everything for everyone; can include text, images, animation, sound, etc
\end{itemize}
{\bf PDF Features}
\begin{itemize}
	\item Raster Image Process (RIP) --- for printing (presumably also displaying on screen)
	\item Originally needed to convert to PS and then RIP; with PS 3 can now RIP directly
	\item Reduced filesize due to compression
	\item Four major applications --- Stoy 1999\cite{stoy1999}
	\begin{enumerate}
		\item Download files from the internet
		\item Files on CDs
		\item Files for outputting to printers
		\item Conventional [commercial scale?] printing
	\end{enumerate}
	\item List of various (Adobe) PDF related software
	\begin{itemize}
		\item Includes software for PS that converts to/from PDF
		\item So PS was obviously pretty popular before PDF
	\end{itemize}
	\item Can optimise for screen/printer [not clear how]
	\item Can compress for size
\end{itemize}

\pagebreak
\section{Pixels or Perish \cite{hayes2012pixels}}

``The art of scientific illustration will have to adapt to the new age of online publishing.''
And therefore, JavaScript libraries ($\text{D}^3$) are the future.

The point is that we need to change from thinking about documents as paper to thinking of them as pixels.
This makes it relevant to our paper, because we justify our project in the same way. It does mention precision, but doesn't say we need to get more of it.

I get the feeling from this that Web based documents are a whole bunch of completely different design philosophies hacked together with JavaScript.

This paper uses metaphors a lot. I never met a phor that didn't overextend itself.
{\bf Intro}
\begin{itemize}
	\item Drawings/pictures are ornaments in science, but they are not just ornamental
	\item Processes have changed a lot; eg: photographic plates $\to$ digital images
	\item ``we are about to turn the page --- if not close the book --- on yet another chapter in publishing history.'' (HO HO HO)
	\item It would be cool to have animated figures in documents (eg: population pyramid that changes with time), not just as ``supplements''
	\item In the beginning, there was PostScript: 1970s and 1980s, John Warnock and Charles Geschke, Adobe Systems
	\item PS is a language for vector graphics; objects are constructed from geometric primitives rather than a discrete array of pixels
	\item PS is a complete programming language; an image is also a program; can exploit this to control how images are created based on data (eg: faces)
	\item PDF is ``flattened'' PS. No longer programmable. Aspires to be ``virtual paper''.
	\item ``But why are we using such powerful computing machines just to emulate sheets of paper?''
(the author asks)
\end{itemize}

{\bf Web based Documents}
\begin{itemize}
	\item HTML, CSS, JavaScript --- the axis of Web documents
	\begin{itemize}
		\item HTML --- defines document structure
		\item CSS --- defines presentation of elements in the document
		\item JavaScript --- encodes actions, allows dynamic content (change the HTML/CSS)
	\end{itemize}
	\item \texttt{canvas} will let you draw anything (so in principle you don't even need all of HTML/CSS)
	\begin{itemize}
		\item Not device independent
		\item Coordinates ``can be specified with precision finer than pixel resolution'' {\bf (TODO: Investigate this?)}
		\item JavaScript operators to draw things on canvas are very similar to the PostScript model
	\end{itemize}
	\item SVG --- same structure (Document Object Model (DOM)) as HTML
	\begin{itemize}
		\item A ``noun language''
		\item Nouns define lines/curves etc, rather than paragraphs/lists
		\item Also borrows things from PostScript (eg: line caps and joins)
		\item IS device independent, ``very high precision'' {\bf (TODO: Investigate)}
		\item JavaScript can be used to interact with SVG too
	\end{itemize}
	\item $\text{D}^{3}$ (Data Driven Documents) --- a JavaScript library
	\begin{itemize}
		\item Idea is to create or modify elements of a DOM document using supplied data
		\item \url{https://github.com/mbostock/d3/wiki}
	\end{itemize}
	\item We are in a new Golden Age of data visualisation
	\item Why do we still use PDFs?
	\begin{itemize}
		\item PDFs are ``owned'' by the author/reader; you download it, store it, you can print it, etc
		\item HTML documents are normally on websites. They are not self contained. They often rely on remote content from other websites (annoying to download the whole document).
	\end{itemize}
	\item {\bf Conclusion:} someone should open up PDF to accept things like $\text{D}^{3}$ and other graphics formats (links nicely with \cite{barnes2013embedding})
	\item Also, a Harry Potter reference
\end{itemize}

\section{Embedding and Publishing Interactive, 3D Figures in PDF Files\cite{barnes2013embedding}}

\begin{itemize}
	\item Links well with \cite{hayes2012pixels}; I heard you liked figures, so I put a figure in your PDF
	\item Title pretty much summarises it; similar to \cite{hayes2012pixels}, except these guys actually did something practical
\end{itemize}

\section{27 Bits are not enough for 8 digit accuracy\cite{goldberg1967twentyseven}}

Proves, with maths, that because of rounding errors you need at least $q$ binary digits for $p$ decimal digits, where $10^p < 2^{q-1}$.

\begin{itemize}
	\item Eg: for 8 decimal digits, since $10^8 < 2^{27}$ you would naively expect to be able to represent them with 27 binary digits
	\item But: the integer part alone can consume most of those bits (regardless of fixed or floating point representation)
	\item Trade-off between precision and range
	\begin{itemize}
		\item 9000000.0 $\to$ 9999999.9 needs 24 bits for the integer part, since $2^{23} = 8388608 < 9999999$
	\end{itemize}
	\item Floating point zero = smallest possible machine exponent
	\item Floating point representation:
	\begin{align*}
		y &= 0.y_1 y_2 \text{...} y_q \times 2^{n}
	\end{align*}
	\item Can eliminate a bit by considering whether $n = -e$ for $-e$ the smallest machine exponent (???)
	\begin{itemize}
		\item Get very small numbers with the same precision
		\item Get large numbers with the extra bit of precision
	\end{itemize}
\end{itemize}

\section{What every computer scientist should know about floating-point arithmetic\cite{goldberg1991whatevery}}

\begin{itemize}
	\item Book: \emph{Floating Point Computation} by Pat Sterbenz (out of print...
in 1991)
	\item IEEE floating point standard becoming popular (introduced in 1987; this is 1991)
	\begin{itemize}
		\item As well as structure, defines the algorithms for addition, multiplication, division and square root
		\item Makes things portable, because results of operations are the same on all machines (following the standard)
		\item Alternatives to floating point: floating slash and signed logarithm (TODO: look at these, although they will probably not be useful)
	\end{itemize}
	\item Base $\beta$ and precision $p$ (number of digits to represent with) --- powers of the base can be represented exactly
	\item Largest and smallest exponents $e_{min}$ and $e_{max}$
	\item Need bits for the exponent and the fraction, plus one for the sign
	\item A ``floating point number'' is one that can be represented exactly
	\item Representations are not unique! $0.01 \times 10^1 = 1.00 \times 10^{-1}$; a leading digit of one $\implies$ ``normalised''
	\item Requiring the representation to be normalised makes it unique, {\bf but means it is impossible to represent zero}
	\begin{itemize}
		\item Represent zero as $1 \times \beta^{e_{min}-1}$ --- requires an extra bit in the exponent
	\end{itemize}
	\item {\bf Rounding Error}
	\begin{itemize}
		\item ``Units in the last place''; eg: 0.0314159 represented as 0.0314 has an error of 0.159 ulps
		\item Even when a calculation returns the nearest floating point number to the exact result, it can still be as much as 1/2 ulp in error
		\item The relative error corresponding to 1/2 ulp can vary by a factor of $\beta$: the ``wobble''. Written in terms of $\epsilon$
		\item Maths $\implies$ {\bf relative error is always bounded by $\epsilon = (\beta/2)\beta^{-p}$}
		\item Fixed relative error $\implies$ ulp can vary by a factor of $\beta$.
Vice versa
		\item Larger $\beta \implies$ larger errors
	\end{itemize}
	\item {\bf Guard Digits}
	\begin{itemize}
		\item In subtraction: could compute the exact difference and then round, but this is expensive
		\item Keep a fixed number of digits and shift the smaller operand right, discarding precision; leads to relative error of up to $\beta - 1$
		\item Guard digit: keep extra digits before truncating. Leads to relative error of less than $2\epsilon$. This also applies to addition
	\end{itemize}
	\item {\bf Catastrophic Cancellation} --- subtraction of operands that are themselves subject to rounding errors (eg: results of multiplication)
	\item {\bf Benign Cancellation} --- subtraction of exact operands. Error $< 2\epsilon$
	\item Rearrange formulae to avoid catastrophic cancellation
	\item Historical interest only --- speculation on why IBM used $\beta = 16$ for the System/370: increased range? Avoids shifting
	\item Precision: IEEE defines extended precision (a lower bound only)
	\item Discussion of the IEEE standard for operations (TODO: go over in more detail)
	\item NaN allows continuing after underflow, and Infinity after overflow
	\item On equality of floating point numbers: ``Incidentally, some people think that the solution to such anomalies is never to compare floating-point numbers for equality, but instead to consider them equal if they are within some error bound E. This is hardly a cure-all, because it raises as many questions as it answers.''
\end{itemize}


%%%%
% David's Stuff
%%%%
\section{Compositing Digital Images\cite{porter1984compositing}}

Porter and Duff's classic paper ``Compositing Digital Images'' lays the
foundation for digital compositing today. By providing an ``alpha channel,''
images of arbitrary shapes --- and images with soft edges or sub-pixel coverage
information --- can be overlaid digitally, allowing separate objects to be
rasterized separately without a loss in quality.

Pixels in digital images are usually represented as 3-tuples containing
(red component, green component, blue component).
Nominally these values are in
the [0-1] range. In the Porter-Duff paper, pixels are stored as $(R,G,B,\alpha)$
4-tuples, where alpha is the fractional coverage of each pixel. If the image
only covers half of a given pixel, for example, its alpha value would be 0.5.

To improve compositing performance, albeit at a possible loss of precision in
some implementations, the red, green and blue channels are premultiplied by the
alpha channel. This also simplifies the resulting arithmetic by having the
colour channels and the alpha channel use the same compositing equations.

Several binary compositing operations are defined:
\begin{itemize}
\item over
\item in
\item out
\item atop
\item xor
\item plus
\end{itemize}

The paper further provides some additional operations for implementing fades and
dissolves, as well as for changing the opacity of individual elements in a
scene.

The method outlined in this paper is still the standard system for compositing
and is implemented almost exactly by modern graphics APIs such as \texttt{OpenGL}. It is
all but guaranteed that this is the method we will be using for compositing
document elements in our project.

\section{Bresenham's Algorithm: Algorithm for computer control of a digital plotter\cite{bresenham1965algorithm}}
Bresenham's line drawing algorithm is a fast, high quality line rasterization
algorithm which is still the basis for most (aliased) line drawing today. The
paper, while originally written to describe how to control a particular plotter,
is uniquely suited to rasterizing lines for display on a pixel grid.

Lines drawn with Bresenham's algorithm must begin and end at integer pixel
coordinates, though one can round or truncate the fractional part. The
formulation is constructed so that no multiplication or division is needed
in the algorithm's inner loop.

The algorithm works by scanning along the long axis of the line, moving along
the short axis when the error along that axis exceeds 0.5px.
Because error
accumulates linearly, this can be achieved by simply adding the per-pixel
error (equal to (short axis length)/(long axis length)) until it exceeds 0.5, then incrementing
the position along the short axis and subtracting 1 from the error accumulator.

As this requires nothing but addition, it is very fast, particularly on the
older CPUs used in Bresenham's time. Modern graphics systems will often use Wu's
line-drawing algorithm instead, as it produces antialiased lines, taking
sub-pixel coverage into account. Bresenham himself extended this algorithm to
produce Bresenham's circle algorithm. The principles behind the algorithm have
also been used to rasterize other shapes, including B\'{e}zier curves.

\section{Quad Trees: A Data Structure for Retrieval on Composite Keys\cite{finkel1974quad}}

This paper introduces the ``quadtree'' spatial data structure. The quadtree structure is
a search tree in which every node has four children representing the north-east, north-west,
south-east and south-west quadrants of its space.

\section{Xr: Cross-device Rendering for Vector Graphics\cite{worth2003xr}}

Xr (now known as Cairo) is an implementation of the PDF v1.4 rendering model,
independent of the PDF or PostScript file formats, and is now widely used
as a rendering API. In this paper, Worth and Packard describe the PDF v1.4 rendering
model, and their PostScript-derived API for it.

The PDF v1.4 rendering model is based on the original PostScript model: a
set of \emph{paths} (and other objects, such as raster images), each made up of lines
and B\'{e}zier curves, which are transformed by the ``Current Transformation Matrix.''
Paths can be \emph{filled} in a number of ways, allowing for different handling of self-intersecting
paths, or can have their outlines \emph{stroked}.
Furthermore, paths can be painted with RGB colours and/or patterns derived from either
previously rendered objects or external raster images.
PDF v1.4 extends this to provide, amongst other features, support for layering paths and
objects using Porter-Duff compositing\cite{porter1984compositing}, giving each painted path
the option of having an $\alpha$ value and a choice of any of the Porter-Duff compositing
methods.

The Cairo library approximates the rendering of some objects (particularly curved objects
such as splines) with a set of polygons. An \texttt{XrSetTolerance} function allows the user
of the library to set an upper bound on the approximation error, in fractions of device pixels,
providing a trade-off between rendering quality and performance. The library developers found
that setting the tolerance to greater than $0.1$ device pixels resulted in errors visible to the
user.

\section{Glitz: Hardware Accelerated Image Compositing using OpenGL\cite{nilsson2004glitz}}

This paper describes the implementation of an \texttt{OpenGL} based rendering backend for
the \texttt{Cairo} library.

The paper describes how OpenGL's Porter-Duff compositing is easily suited to the Cairo/PDF v1.4
rendering model. Similarly, traditional OpenGL (pre-version 3.0 core) supports a matrix stack
of the same form as Cairo's.

The ``Glitz'' backend will emulate support for tiled, non-power-of-two patterns/textures if
the hardware does not support them.

Glitz can render both triangles and trapezoids (which are formed from pairs of triangles).
However, it cannot guarantee that the rasterization is pixel-precise, as OpenGL does not provide
this consistently.

Glitz also supports multi-sample anti-aliasing and convolution filters for raster image reads
(implemented with shaders).

Performance was much improved over software rasterization, and over XRender accelerated rendering
on all except nVidia hardware. However, nVidia's XRender implementation did slow down significantly when
some transformations were applied.
\section{Boost Multiprecision Library\cite{boost_multiprecision}}

\begin{itemize}
	\item ``The Multiprecision Library provides integer, rational and floating-point types in C++ that have more range and precision than C++'s ordinary built-in types.''
	\item Specify the number of digits of precision as a template argument.
	\item Precision is fixed at compile time... {\bf possible approach to project:} use \verb/boost::multiprecision::mpf_float/ and increase \verb/N/ as more precision is required?
\end{itemize}
\pagebreak
\bibliographystyle{unsrt}