X-Git-Url: https://git.ucc.asn.au/?p=ipdf%2Fsam.git;a=blobdiff_plain;f=chapters%2FProgress.tex;h=e37865d206091daa4258c92d6dd00133ad3a5c5f;hp=475bc07ef5b82d411843d54165fedec4a573a81d;hb=253d241eb8279be539ad72a0283d3f1575b537ab;hpb=df1e38d148d992b7caf24932ae89cf5cd610d5b8

diff --git a/chapters/Progress.tex b/chapters/Progress.tex
index 475bc07..e37865d 100644
--- a/chapters/Progress.tex
+++ b/chapters/Progress.tex
@@ -28,6 +28,9 @@ The literature examined in Chapter\ref{Background} can broadly classed into thre
 To improve the Literature Review we could consider the following topics in more detail:
 \begin{enumerate}
 	\item Additional approaches to arbitrary precision, possibly including symbolic computation
+	\begin{itemize}
+		\item The Mathematica computational package claims to use symbolic computation, but we have yet to explore this field
+	\end{itemize}
 	\item Floating point errors in the context of computing B\'{e}zier curves or similar
 	\item Algorithms for reducing overall error other than Fast2Sum
 	\item Alternative number representations, such as rationals (e.g.\ $\frac{1}{3}$)
@@ -55,7 +58,7 @@ Algorithms for floating point arithmetic may be implemented in software (CPU) or
 An open source Virtual FPU implemented in the VHDL language has been successfully compiled and can be substituted into our testbed software in place of native arithmetic running on the CPU.
 The timing diagram of this FPU throughout the execution of test programs can be extracted. Currently the virtual FPU is restricted to 32-bit floats and the square root operation is unimplemented.
 
-Mainly motivated by producing Figure \ref{minifloat.pdf} we have also implemented functions to convert an arbitrary \verb/Real/ type (which may be IEEE-754 floats) to and from a fixed size floating point representation of our choosing. We have not implemented any operations for floating point arithmetic using these representations.
+Mainly motivated by producing Figure \ref{floats.pdf}, we have also implemented functions to convert an arbitrary \verb/Real/ type (which may be an IEEE-754 float) to and from a fixed-size floating point representation of our choosing. We have not yet implemented any floating point arithmetic operations using these representations.
 
 By using the functions that convert real numbers to variable precision floats as an interface to the virtual FPU, we hope to illustrate the limitations of floating point arithmetic more clearly than would be possible using the IEEE-754 binary32 type native to the C and C++ languages.
 Using the virtual FPU instead of a CPU-based software library will prove useful for determining the exact performance of floating point operations.
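
Editor's note on the first hunk: the review item "algorithms for reducing overall error other than Fast2Sum" assumes the reader recalls what Fast2Sum does. The sketch below is a minimal illustration of the standard Fast2Sum (Dekker) error-free transformation; the function name fast_two_sum and the example values are ours and are not taken from the project's testbed.

    #include <cstdio>

    // Fast2Sum (Dekker): assuming |a| >= |b|, compute s = fl(a + b) and the
    // exact rounding error t, so that a + b == s + t holds exactly.
    // Illustrative sketch only; not code from the project's testbed.
    static void fast_two_sum(double a, double b, double &s, double &t)
    {
        s = a + b;          // rounded sum
        double z = s - a;   // the part of b that was absorbed into s
        t = b - z;          // the part of b that was rounded away
    }

    int main(void)
    {
        double s, t;
        fast_two_sum(1.0, 1e-17, s, t);
        // 1e-17 is below half an ulp of 1.0, so s == 1.0 and t recovers 1e-17.
        std::printf("s = %.17g  t = %.17g\n", s, t);
        return 0;
    }

Compensated summation schemes such as Kahan summation reuse the error term t in exactly this way, which is the kind of "other algorithm" the review item refers to.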
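
On the review item about alternative representations: a value such as 1/3 has no finite binary floating point representation, but is exact when stored as an integer pair. The sketch below is a hypothetical illustration only (no reduction to lowest terms, no overflow handling) and is not a proposal for the project's Real type.

    #include <cstdio>

    // Hypothetical illustration of an exact rational representation,
    // contrasted with binary floating point, which cannot store 1/3 exactly.
    struct Rational { long num, den; };

    static Rational add(Rational a, Rational b)
    {
        return { a.num * b.den + b.num * a.den, a.den * b.den };
    }

    int main(void)
    {
        Rational third = {1, 3};
        Rational two_thirds = add(third, third);   // exactly 6/9 (i.e. 2/3)
        std::printf("rational: %ld/%ld\n", two_thirds.num, two_thirds.den);

        float f = 1.0f / 3.0f;                     // nearest binary32 value
        std::printf("binary32: %.9f\n", f);        // 0.333333343, not 1/3
        return 0;
    }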
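
On the second hunk: the text describes converting an arbitrary Real to and from a fixed-size floating point representation of the authors' choosing. As a rough indication of what such a conversion involves, the sketch below quantises a double to a chosen number of significand bits. It is an assumed illustration of the general approach (exponent range, overflow to infinity and denormals are ignored) and is not the project's implementation.

    #include <cmath>
    #include <cstdio>

    // Hypothetical sketch: round a double to a float-like value with `mbits`
    // stored significand bits (plus the implicit leading 1).  Exponent range,
    // infinities and denormals are deliberately ignored; this is an assumed
    // illustration, not the project's Real conversion code.
    static double round_to_mantissa_bits(double x, int mbits)
    {
        if (x == 0.0 || !std::isfinite(x))
            return x;
        int e;
        double m = std::frexp(x, &e);               // x = m * 2^e, 0.5 <= |m| < 1
        double scale = std::ldexp(1.0, mbits + 1);  // 2^(mbits + 1)
        m = std::round(m * scale) / scale;          // keep mbits + 1 significant bits
        return std::ldexp(m, e);
    }

    int main(void)
    {
        // With only 3 stored significand bits, 0.1 becomes 13/128 = 0.1015625,
        // making representation error far easier to see than with binary32.
        std::printf("%.10f\n", round_to_mantissa_bits(0.1, 3));
        return 0;
    }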