+While GPUs have traditionally supported only an approximation of IEEE 754's 32-bit floats,
+modern graphics processors also support 16-bit\cite{nv_half_float} and 64-bit\cite{arb_gpu_shader_fp64}
+IEEE floats, though IEEE features such as denormals and NaNs are not always implemented.
+Note, however, that some stages of the GPU pipeline support only a subset of these formats,
+so precision is likely to be truncated at some point before display.
+Higher-precision numeric types can be implemented on the GPU in software, but at a
+significant performance cost.\cite{emmart2010high}
+
+