diff --git a/chapters/Background/FloatingPointOnTheGPU.tex b/chapters/Background/FloatingPointOnTheGPU.tex
index bd5ea29..0d40fb1 100644
--- a/chapters/Background/FloatingPointOnTheGPU.tex
+++ b/chapters/Background/FloatingPointOnTheGPU.tex
@@ -1,8 +1,18 @@
-\subsection{Rasterisation on the CPU and GPU}
+%\subsection{Rasterisation on the CPU and GPU}
+
+{\bf FIXME: I feel this section is important but I'm not quite sure where to place it; it could almost work as a paper by itself (in fact I sort of wrote one for it already...)}
 
 Traditionally, vector images have been rasterised by the CPU before being sent to a specialised Graphics Processing Unit (GPU) for drawing\cite{computergraphics2}. Rasterisation of simple primitives such as lines and triangles has been supported directly by GPUs for some time through the OpenGL standard\cite{openglspec}. However, complex shapes (including those based on B{\'e}zier curves, such as font glyphs) must either be rasterised entirely by the CPU or decomposed into simpler primitives that the GPU itself can rasterise directly. There is a significant body of research devoted to improving the performance of rendering such shapes using the latter approach, mostly based around the OpenGL\cite{openglspec} API\cite{robart2009openvg, leymarie1992fast, frisken2000adaptively, green2007improved, loop2005resolution, loop2007rendering}. Recently, Mark Kilgard of the NVIDIA Corporation described an extension to OpenGL for NVIDIA GPUs capable of drawing and shading vector paths\cite{kilgard2012gpu,kilgard300programming}. From this development it seems that rasterisation of vector graphics may eventually become possible directly on the GPU.
 
 It is not entirely clear how well GPUs support the IEEE-754 standard for floating point computation\footnote{Informal technical articles on the subject are abundant on the internet --- e.g.\ regarding the Dolphin Wii GPU emulator: \url{https://dolphin-emu.org/blog} (accessed 2014-05-22)}. Although the OpenGL API does use IEEE-754 number representations, research by Hillesland and Lastra in 2004 suggested that many GPUs were not internally compliant with the standard\cite{hillesland2004paranoia}.
 
-\rephrase{We implemented a GPU and CPU renderer so we could compare them}.
+To explore this, we implemented a simple fragment shader which renders a circle: points satisfying $x^2 + y^2 < 1$ are drawn black. When the view is scaled to bounds of width $\approx 10^{-6}$, the edges of the circle become jagged due to imprecision; however, the behaviour differs considerably between GPU models. A CPU renderer evaluating the same function using IEEE-754 single precision was also implemented for comparison.
+
+\begin{figure}[H]
+	\centering
+	\includegraphics[width=0.7\textwidth]{figures/gpufloats.pdf}
+	\caption{Difference in evaluating $x^2 + y^2 < 1$ on an x86\_64 CPU and on various GPUs\\
+	The view bounds are identical in each case}
+\end{figure}
+
 %Arbitrary precision arithmetic, is provided by many software libraries for CPU based calculations
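The following is a minimal, self-contained sketch of the kind of comparison described in the added paragraph: it evaluates the predicate $x^2 + y^2 < 1$ in IEEE-754 single precision over a view window of width $10^{-6}$ straddling the circle's edge, so the stepped, jagged boundary caused by rounding is visible even in a text dump. It is not the code used for the experiment; the window centre, image dimensions and ASCII output are illustrative choices only.

\begin{verbatim}
/* Sketch only: evaluate x*x + y*y < 1 in IEEE-754 single precision
 * over a view window of width ~1e-6 near the circle's boundary.
 * WIDTH, HEIGHT and the window centre (cx, cy) are illustrative
 * values, not those used in the actual experiment. */
#include <stdio.h>

#define WIDTH  64
#define HEIGHT 64

int main(void)
{
    /* Window centred near the circle edge at 45 degrees. */
    const float cx = 0.70710678f, cy = 0.70710678f;
    const float span = 1e-6f;   /* width of the view bounds */

    for (int j = 0; j < HEIGHT; ++j) {
        for (int i = 0; i < WIDTH; ++i) {
            /* Map the pixel into the view bounds, in single precision. */
            float x = cx + span * ((float)i / WIDTH - 0.5f);
            float y = cy + span * ((float)j / HEIGHT - 0.5f);
            /* The same predicate the fragment shader evaluates. */
            putchar((x * x + y * y < 1.0f) ? '#' : '.');
        }
        putchar('\n');
    }
    return 0;
}
\end{verbatim}

At this scale the spacing of representable single precision values around $0.7071$ (and around $1.0$ for the sum of squares) is coarser than the per-pixel step of roughly $1.6 \times 10^{-8}$, so the boundary collapses onto a small number of discrete steps rather than a smooth curve --- the same kind of jagged edge the added paragraph describes.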