X-Git-Url: https://git.ucc.asn.au/?p=ipdf%2Fsam.git;a=blobdiff_plain;f=chapters%2FBackground%2FFloatingPointOnTheGPU.tex;fp=chapters%2FBackground%2FFloatingPointOnTheGPU.tex;h=d85bade691f41d744bd6d19199c60bbe7b5e2542;hp=0d40fb19a6403c37912f503c4adc2589a02c29d0;hb=ae8d5f837db032eb4d9e9666f5026fab7e3e8e4a;hpb=253d241eb8279be539ad72a0283d3f1575b537ab

diff --git a/chapters/Background/FloatingPointOnTheGPU.tex b/chapters/Background/FloatingPointOnTheGPU.tex
index 0d40fb1..d85bade 100644
--- a/chapters/Background/FloatingPointOnTheGPU.tex
+++ b/chapters/Background/FloatingPointOnTheGPU.tex
@@ -4,15 +4,7 @@
 Traditionally, vector images have been rasterised by the CPU before being sent to a specialised Graphics Processing Unit (GPU) for drawing\cite{computergraphics2}. Rasterisation of simple primitives such as lines and triangles has been supported directly by GPUs for some time through the OpenGL standard\cite{openglspec}. However, complex shapes (including those based on B{\'e}zier curves, such as font glyphs) must either be rasterised entirely by the CPU or decomposed into simpler primitives that the GPU itself can directly rasterise. There is a significant body of research devoted to improving the performance of rendering such primitives using the latter approach, mostly based around the OpenGL\cite{openglspec} API\cite{robart2009openvg, leymarie1992fast, frisken2000adaptively, green2007improved, loop2005resolution, loop2007rendering}. Recently, Mark Kilgard of the NVIDIA Corporation described an extension to OpenGL for NVIDIA GPUs capable of drawing and shading vector paths\cite{kilgard2012gpu,kilgard300programming}. From this development it seems that rasterisation of vector graphics may eventually become possible on the GPU.
 
-It is not entirely clear how well supported the IEEE-754 standard for floating point computation is amongst GPUs\footnote{Informal technical articles are abundant on the internet --- Eg: Regarding the Dolphin Wii GPU Emulator: \url{https://dolphin-emu.org/blog} (accessed 2014-05-22)}. Although the OpenGL API does use IEEE-754 number representations, research by Hillesland and Lastra in 2004 suggested that many GPUs were not internally compliant with the standard\cite{hillesland2004paranoia}.
+It is not entirely clear how well supported the IEEE-754 standard for floating point computation is amongst GPUs\footnote{Informal technical articles are abundant on the internet --- e.g.\ regarding the Dolphin Wii GPU emulator: \url{https://dolphin-emu.org/blog} (accessed 2014-05-22)}. Although the OpenGL API does use IEEE-754 number representations, research by Hillesland and Lastra in 2004 suggested that many GPUs were not internally compliant with the standard\cite{hillesland2004paranoia}. In Section \ref{} we illustrate how the use of
 
-In order to explore this, we implemented a simple fragment shader to render a circle. Points $x^2 + y^2 < 1$ should be black. When scaled to bounds of width $\approx 10^{-6}$ the edges of the circle become jagged due to imprecision. However, the behaviour is quite different depending on GPU model. A CPU renderer was also implemented to evaluate the same function using IEEE-754 singles.
-
-\begin{figure}[H]
-	\centering
-	\includegraphics[width=0.7\textwidth]{figures/gpufloats.pdf}
-	\caption{Difference in evaluating $x^2 + y^2 < 1$ for the x86\_64 and various GPUs\\
-	The view bounds are identical}
-\end{figure}
 %Arbitrary precision arithmetic, is provided by many software libraries for CPU based calculations
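
The experiment removed by this diff compares a GPU fragment shader with a CPU reference that evaluates the same circle test, $x^2 + y^2 < 1$, in IEEE-754 single precision. The following is a minimal sketch of such a CPU reference renderer, not the thesis's implementation: the window centre, resolution, and PGM output file are illustrative assumptions, and the corresponding fragment shader is assumed to perform the same comparison on interpolated 32-bit coordinates.

// Minimal sketch (not the thesis code) of a CPU reference renderer for the
// circle test described in the removed paragraph: evaluate x*x + y*y < 1 in
// IEEE-754 single precision over a view window roughly 1e-6 wide and write
// the result as an image for comparison against a GPU framebuffer dump.
#include <cstdio>
#include <vector>

int main()
{
    const int W = 256, H = 256;
    // Window of width ~1e-6 centred near a point on the unit circle,
    // (1/sqrt(2), 1/sqrt(2)), so the circle's edge crosses the view.
    // These bounds are assumptions chosen for illustration.
    const float cx = 0.70710678f, cy = 0.70710678f, width = 1e-6f;

    std::vector<unsigned char> pixels(W * H);
    for (int j = 0; j < H; ++j)
    {
        for (int i = 0; i < W; ++i)
        {
            // Map the pixel to view coordinates in single precision only,
            // mirroring what a fragment shader does with 32-bit floats.
            float x = cx - width / 2.0f + width * (float(i) / float(W));
            float y = cy - width / 2.0f + width * (float(j) / float(H));
            bool inside = (x * x + y * y < 1.0f);   // the circle test from the text
            pixels[j * W + i] = inside ? 0 : 255;   // black inside, white outside
        }
    }

    // Binary PGM output; easy to compare pixel-by-pixel with GPU results.
    std::FILE *f = std::fopen("circle_cpu.pgm", "wb");
    if (!f)
        return 1;
    std::fprintf(f, "P5\n%d %d\n255\n", W, H);
    std::fwrite(pixels.data(), 1, pixels.size(), f);
    std::fclose(f);
    return 0;
}

At this scale the outcome of each comparison is decided by rounding in the single-precision squares and sum, so the rendered edge appears stepped rather than smooth, and hardware that rounds differently produces a visibly different boundary for identical view bounds.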