Support doing coordinate transforms on the CPU
Because OpenGL (by default, at least) does everything with 32-bit IEEE
floats, we were losing precision when rendering, even though our "Real"
type is a double.
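A toy illustration of the loss (the numbers are purely illustrative and
not taken from the actual code): a coordinate held in a double loses its
low-order digits the moment it is narrowed to the 32-bit float that GL
works with.

    #include <cstdio>

    int main()
    {
        double x  = 1.0000000123456789; // coordinate in our Real (double) type
        float  xf = (float)x;           // what a 32-bit float pipeline sees
        // The float cannot represent the ~1.2e-8 offset from 1.0 and
        // rounds to exactly 1.0f, so the fine detail is gone.
        printf("double: %.17g\nfloat:  %.17g\n", x, (double)xf);
        return 0;
    }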
This commit adds support for running the coordinate transform on the
CPU using the Real datatype and truncating only the final results, so
rendering works properly with doubles. This will be slower on documents
with a large number of objects, but it makes no noticeable difference
in our unoptimized code at the moment.
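A rough sketch of the idea, using hypothetical types and a simplified
view rectangle rather than the actual IPDF structures: the transform is
evaluated entirely in Real (double) precision, and the cast to float
happens only after all the arithmetic, which is where the precision
would otherwise be thrown away.

    typedef double Real;

    struct Rect    { Real  x, y, w, h; };  // bounds in document coordinates (Real)
    struct GPURect { float x, y, w, h; };  // what actually gets handed to OpenGL

    // Transform object bounds from document space into view space on the
    // CPU, narrowing to float only at the very end.
    GPURect CPUTransform(const Rect & obj, const Rect & view)
    {
        Real x = (obj.x - view.x) / view.w;
        Real y = (obj.y - view.y) / view.h;
        Real w = obj.w / view.w;
        Real h = obj.h / view.h;
        GPURect out = {(float)x, (float)y, (float)w, (float)h};
        return out;
    }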
Right-click to switch between CPU and GPU transforms.