The use of floating point arithmetic in computer systems was pioneered by Knuth, Goldberg\cite{goldbern1967twentyseven}, Dekker, and others\cite{HFP}, but modern systems are largely compatible with the IEEE-754 standard, whose development was led by William Kahan, published in 1985 \cite{ieee754std1985} and revised (also with contributions from Kahan) in 2008\cite{ieee754std2008}. More recently, the ``Handbook of Floating-Point Arithmetic''\cite{HFP} by Muller et al.\ (2010) provides a detailed overview of IEEE-754 floating point arithmetic.

Whilst a fixed point representation keeps the ``point'' (the location considered to be $i = 0$ in \eqref{fixedpointZ}) at the same position in a string of bits, floating point representations can be thought of as scientific notation: an ``exponent'' and a fixed point value are encoded, with multiplication by the base raised to the exponent moving the position of the point. A floating point number $x$ is commonly represented by a tuple of values $(s, e, m)$ in base $B$ as\cite{HFP, ieee754std2008}:
\begin{equation}
x = (-1)^{s} \times m \times B^{e}
\end{equation}
where $s$ is the sign and may be zero or one, $m$ is commonly called the ``mantissa'', and $e$ is the exponent. Whilst $e$ is an integer in some range $\pm e_{\max}$, the mantissa $m$ is a fixed point value in the range $0 \leq m < B$.

The choice of base $B = 2$ in the original IEEE-754 standard matches the nature of modern hardware. It has also been found that this base in general gives the smallest rounding errors\cite{HFP}. Early computers had in fact used a variety of representations including $B=3$ or even $B=7$\cite{goldman1991whatevery}, and the revised IEEE-754 standard specifies a decimal representation $B = 10$ intended for use in financial applications\cite{ieee754std2008}\footnote{E.g.\ the smallest valid unit of currency, \$0.01, cannot be represented exactly in base 2.}. From now on we will restrict ourselves to considering base 2 floats.

The IEEE-754 encoding of $s$, $e$ and $m$ dedicates a fixed number of contiguous bits to each value. Originally two encodings were defined: binary32 and binary64. $s$ is always encoded in a single leading bit, whilst (8, 23) and (11, 52) bits are used for the (exponent, mantissa) fields respectively; the implicit leading bit described below raises the effective mantissa precision to 24 and 53 bits.

The encoding of $m$ in the IEEE-754 standard is not exactly equivalent to a fixed point value. By assuming an implicit leading bit (i.e.\ restricting $1 \leq m < 2$) except for when $e = 0$, floating point values are guaranteed to have a unique representation; these representations are said to be ``normalised''. When $e = 0$ the leading bit is not implied; these representations are called ``denormals'' because the mantissa is no longer confined to the normalised range $1 \leq m < 2$. The idea of using an implicit bit appears to have been considered by Goldberg as early as 1967\cite{goldbern1967twentyseven}, and it leads to an increase of precision near the origin.
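To make the binary32 encoding concrete, the following sketch (plain C, assuming the host represents \texttt{float} in IEEE-754 binary32, as essentially all modern hardware does) extracts the $(s, e, m)$ fields from a \texttt{float} and reconstructs its value using the formula above. The example input $-6.25$ is arbitrary, the stored exponent carries a bias of 127, and the all-ones exponent pattern (infinities and NaNs) is ignored for brevity.

\begin{verbatim}
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    float x = -6.25f;

    /* Reinterpret the float's bits as a 32-bit integer; memcpy
       avoids the undefined behaviour of pointer type-punning. */
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);

    uint32_t s = bits >> 31;           /* 1 sign bit             */
    int      e = (bits >> 23) & 0xFF;  /* 8 biased exponent bits */
    uint32_t f = bits & 0x7FFFFF;      /* 23 mantissa bits       */

    /* Normalised (e != 0): m = 1.f with the implicit leading bit.
       Denormal   (e == 0): m = 0.f and the exponent is fixed at
       1 - 127 = -126.  (e == 255, infinity/NaN, is not handled.) */
    double m   = (e ? 1.0 : 0.0) + f / 8388608.0;  /* 2^23 = 8388608 */
    int    exp = (e ? e : 1) - 127;
    double val = (s ? -1.0 : 1.0) * ldexp(m, exp);

    printf("s=%u e=%d f=0x%06X m=%.6f exp=%d -> %g\n",
           s, e, f, m, exp, val);
    return 0;
}
\end{verbatim}

For the input above this recovers $x = (-1)^{1} \times 1.5625 \times 2^{2} = -6.25$.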
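The practical consequence of denormals is ``gradual underflow'': values smaller than the smallest normalised float remain representable, at reduced precision, rather than flushing to zero. A minimal sketch of this behaviour, again assuming IEEE-754 binary32 and that the platform has not enabled a flush-to-zero mode (as some fast-math settings do):

\begin{verbatim}
#include <float.h>
#include <stdio.h>

int main(void) {
    /* FLT_MIN is the smallest normalised binary32 value, 2^-126. */
    float smallest_normal = FLT_MIN;
    /* Dividing further yields a denormal: e == 0, no implicit bit. */
    float denormal = smallest_normal / 16;  /* 2^-130 */

    printf("smallest normal: %g\n", smallest_normal);
    printf("denormal:        %g\n", denormal);
    printf("still nonzero:   %d\n", denormal > 0.0f);
    return 0;
}
\end{verbatim}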