%\subsection{Rasterisation on the CPU and GPU}
Traditionally, vector images have been rasterised by the CPU before being sent to a specialised Graphics Processing Unit (GPU) for drawing\cite{computergraphics2}. Rasterisation of simple primitives such as lines and triangles has been supported directly by GPUs for some time through the OpenGL standard\cite{openglspec}. However, complex shapes (including those based on B{\'e}zier curves, such as font glyphs) must either be rasterised entirely by the CPU or decomposed into simpler primitives that the GPU can directly rasterise. There is a significant body of research devoted to improving the performance of rendering such primitives using the latter approach, mostly based around the OpenGL API\cite{robart2009openvg, leymarie1992fast, frisken2000adaptively, green2007improved, loop2005resolution, loop2007rendering}. Recently, Mark Kilgard of the NVIDIA Corporation described an extension to OpenGL for NVIDIA GPUs capable of drawing and shading vector paths\cite{kilgard2012gpu, kilgard300programming}. From this development it seems that rasterisation of vector graphics may eventually become possible on the GPU.
It is not entirely clear how well the IEEE-754 standard for floating point computation is supported amongst GPUs\footnote{Informal technical articles are abundant on the internet --- e.g.\ regarding the Dolphin Wii GPU emulator: \url{https://dolphin-emu.org/blog} (accessed 2014-05-22)}. Although the OpenGL API does use IEEE-754 number representations, research by Hillesland and Lastra in 2004 suggested that many GPUs were not internally compliant with the standard\cite{hillesland2004paranoia}.

Figure \ref{gpufloats.pdf} shows the rendering of a circle through evaluation of $x^2 + y^2 < 1$ in an early version of the IPDF software (Chapter \ref{Process}), using an x86-64 CPU and various GPU models respectively. If we assume the x86-64 implementation is IEEE-754 compliant and uses the default rounding behaviour (round to nearest), then the GPUs appear to use different rounding behaviours, which may not be IEEE-754 compliant.
\begin{figure}[H]
	\centering
	\includegraphics[width=0.9\textwidth]{figures/gpufloats.pdf}
	\caption{CPU and GPU evaluation of $x^2 + y^2 < 1$ at $\approx 10^6$ magnification}\label{gpufloats.pdf}
\end{figure}

%Arbitrary precision arithmetic is provided by many software libraries for CPU-based calculations