My eyes, they burn! Also runs faster, slightly less buggy.
Adding things to quadtree is implemented but segfaulty. (i.e. it segfaults all of the time, and even if it didn't, it'd crash if you zoomed in, and also you can still only add things to node 0, and really this breaks a lot more than it fixes.)

For the prosecution, it segfaults except:
- When run in valgrind
- When run in gdb on nvidia hardware

It infinite loops when:
- You zoom in, zoom out, add an object, and zoom in again.
Keeping it Real: add Gmprat::Str, Gmprat::Log10

Fixed* all those horrible compile errors. View can now use VReal and Path can now use PReal, and at the moment they are Gmprat unless the quadtree is enabled.

For Gmprat::Log10 there are problems. Using:
  log(a/b) = log(a) - log(b)
  log(a*b) = log(a) + log(b)
Unfortunately mpz_div_ui quickly becomes a massive performance issue, even when well within the range of IEEE singles. Need to rewrite Gmprat::Log10, but may as well commit since it works* (slowly) at least.
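One cheap direction for the rewrite (a sketch, not the project's code): a double only needs the leading digits and the digit count, so log10 of a huge positive integer can be read off its decimal string with no mpz division at all, and the quotient identity above then gives log10(a/b) as a difference. `Log10OfDecimalString` is a hypothetical helper name:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <string>

// Sketch: log10 of a huge integer given as a decimal string, without ever
// forming the full value in floating point.
//   log10(d.ddd... * 10^k) = log10(d.ddd...) + k
// Only the leading ~17 digits matter at double precision. Assumes a
// non-empty, unsigned, nonzero string with no leading zeros.
double Log10OfDecimalString(const std::string& digits)
{
    const std::string::size_type kLead = 17;  // more digits than a double holds
    std::string lead = digits.substr(0, std::min(kLead, digits.size()));
    double mantissa = std::stod(lead);                     // leading digits as a double
    double shift = double(digits.size() - lead.size());    // remaining powers of 10
    return std::log10(mantissa) + shift;
}
```

With this, Log10(a/b) = Log10OfDecimalString(a_str) - Log10OfDecimalString(b_str), keeping everything well inside double range regardless of how large the operands grow.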
Define for Transformations on Path only, also fixed segfault due to GraphicsBuffer

The "invalidated" flag is used to recycle memory in the GraphicsBuffer, but that can only be done if there is actually enough memory. This was causing a segfault if the document was initially empty and had things added to it. Can now remove the hack in main.cpp where documents started with a {0,0,0,0} RECT in them.

We can now (ab)use the Path for transformations. The idea was that arbitrary precision stuff might be faster if we only care about the bounds of the paths. Unfortunately the way Real was used everywhere will make it a bit difficult to actually use this. We need Béziers and their bounds to be stored with floats, but the Path bounds to be stored with Gmprat or some other arbitrary type. It probably won't help that much, but evaluating those Béziers with Gmprat really slows down the CPU renderer.

It's also a giant hack because I still use BezierRenderer for GPU rendering but now use PathRenderer for CPU rendering of Béziers. Le sigh.

We can now store Bézier bounds relative to Path bounds and apply transformations only to the Path bounds.
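The relative-bounds idea can be sketched like this (illustrative names throughout; `double` stands in for Gmprat, and the real Rect/transform types differ):

```cpp
#include <cassert>

// Path bounds in the "arbitrary precision" type (double as a stand-in
// for Gmprat here, so the sketch runs anywhere).
struct Rect { double x, y, w, h; };

// A Bezier's bounds stored as floats *relative* to its Path's bounds,
// with each component in [0,1].
struct RelBounds { float x, y, w, h; };

// Transformations touch only the Path rect; every Bezier inside it
// moves for free because its bounds are relative.
Rect Translate(const Rect& r, double dx, double dy)
{
    return Rect{r.x + dx, r.y + dy, r.w, r.h};
}

// Recover absolute Bezier bounds when needed (e.g. for rendering).
Rect Absolute(const Rect& path, const RelBounds& b)
{
    return Rect{ path.x + b.x * path.w,
                 path.y + b.y * path.h,
                 b.w * path.w,
                 b.h * path.h };
}
```

The point is that the expensive arbitrary-precision arithmetic happens once per Path per transformation, not once per Bézier.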
Automatically generate quadtree children.

This mostly works, and does the view reparenting. It doesn't do any clipping, so we still hit precision issues as (for example) Bézier control points are outside the quadtree node's coordinate system, and become very, very large.

There are some issues with the boundaries of quadtree nodes, as the system currently can't display two nodes at once. To compensate, it traverses up the tree until it finds a single node which contains both, though this will be a distant ancestor if the nodes are distantly related. Quadtrees are still disabled by default.

There are also a couple of minor changes to the GL code to make it spit out fewer error messages and run slightly faster on some hardware.
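The "traverse up until one node contains both" fallback amounts to a lowest-common-ancestor walk, sketched here with an illustrative parent-pointer node (not the real quadtree type):

```cpp
#include <cassert>
#include <set>

// Minimal stand-in for a quadtree node: just a parent pointer.
struct QuadNode { QuadNode* parent; };

// Walk one node's ancestor chain into a set, then walk the other's
// chain until it hits that set. The match may be a distant ancestor
// if the two nodes are distantly related.
QuadNode* CommonAncestor(QuadNode* a, QuadNode* b)
{
    std::set<QuadNode*> seen;
    for (QuadNode* n = a; n != nullptr; n = n->parent)
        seen.insert(n);
    for (QuadNode* n = b; n != nullptr; n = n->parent)
        if (seen.count(n))
            return n;
    return nullptr;  // nodes are in different trees
}
```

The cost of displaying via a distant ancestor is precision: the further up the tree, the larger the coordinate range the single displayed node must cover.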
A bunch of OpenGL debug annotations. (Which will probably break compiling on Cabellera)
Still-broken quadtree shenanigans! Quadtrees are, therefore, still disabled by default.
Don't try to unmap buffers which aren't mapped.

Fixes the GL_INVALID_OPERATION that snuck in: this would occur when calling GraphicsBuffer::Resize() on an unmapped, non-invalidated buffer.
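The guard amounts to tracking the mapped state and bailing early (a sketch, not the real GraphicsBuffer; the GL call is replaced by a counter so it runs anywhere):

```cpp
#include <cassert>

// Sketch of the fix: remember whether a mapping is outstanding and make
// UnMap a no-op otherwise. Unmapping an unmapped GL buffer raises
// GL_INVALID_OPERATION, which is what Resize() was tripping over.
struct GraphicsBufferSketch {
    bool mapped = false;
    int unmap_calls = 0;  // stands in for actual glUnmapBuffer calls

    void Map() { mapped = true; }

    void UnMap()
    {
        if (!mapped)
            return;       // nothing mapped: skip the GL call entirely
        ++unmap_calls;    // here the real code would call glUnmapBuffer
        mapped = false;
    }
};
```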
Get it compiling on Cabellera (g++0x not c++11) ... But at what cost?
Béziers on the GPU.

So this was a terrible thing in many ways. What we're doing:
- Converting all of the coefficients to floats (+ doing a bit of preprocessing) and uploading them to the GPU.
- Uploading the data_indices array from the document to the GPU.
- Using the vertex ids in the IBO when rendering Béziers to get the object id, then looking up the data_indices array (as a texture) to find which Bézier coefficients to use, then reading the Bézier coefficients from another buffer texture, and finally having the geometry shader generate 100 lines much as the CPU one does.

We're using buffer textures to access these things because they don't have fixed sizes (and can get big), so we can't use uniform buffers, and because shader storage buffer objects need OpenGL 4.3, which only graphics cards manufactured in the last 45 seconds actually support.
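The indirection chain in the third bullet can be modelled on the CPU (illustrative names and record layout; the real shader reads these arrays as buffer textures):

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// One cubic Bezier's control points, already converted to floats.
struct BezierCoeffs { float x0, y0, x1, y1, x2, y2, x3, y3; };

// object id -> data_indices[] -> coefficient record, i.e. the two
// texture lookups the shader performs per Bezier.
BezierCoeffs LookupCoeffs(uint32_t object_id,
                          const std::vector<uint32_t>& data_indices,
                          const std::vector<BezierCoeffs>& coeffs)
{
    return coeffs[data_indices[object_id]];
}

// Evaluate the cubic's x at parameter t, as the geometry shader would
// for each of its ~100 line-segment endpoints.
float CubicX(const BezierCoeffs& b, float t)
{
    float u = 1.0f - t;
    return u*u*u*b.x0 + 3*u*u*t*b.x1 + 3*u*t*t*b.x2 + t*t*t*b.x3;
}
```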
Refactor Rendering of Objects (prepare for CPU rendering)

New abstract class ObjectRenderer and derived classes for each type of Object. It's a bit more complex, but hopefully easier to build on now. There are probably a heap of bugs in this, but I can see the test pattern again, so I'll commit before it gets worse.

Note: We now have to make sure Screen is initialised first or the segfaults will hit the fan. (Now it makes sense why all those things weren't in constructors in the first place :S) It also now segfaults if you get View and Screen the wrong way round.
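A minimal sketch of the shape of the refactor (the method shown is illustrative, not the real interface; BezierRenderer and PathRenderer are the names that appear elsewhere in this log):

```cpp
#include <cassert>
#include <string>

// One abstract base, one derived renderer per Object type: callers hold
// ObjectRenderer pointers and dispatch virtually.
class ObjectRenderer {
public:
    virtual ~ObjectRenderer() {}
    virtual std::string Name() const = 0;  // placeholder for the real Render* API
};

class BezierRenderer : public ObjectRenderer {
public:
    std::string Name() const override { return "Bezier"; }
};

class PathRenderer : public ObjectRenderer {
public:
    std::string Name() const override { return "Path"; }
};
```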
Minor perf improvement on nVidia

nVidia's driver does not like you mapping a STATIC buffer, as they're usually in parts of VRAM not directly accessible by the CPU; the driver therefore has to migrate the buffer somewhere slower.

We initialize the object bounds buffer by mapping it and writing directly into it. This is fine when we're doing coordinate transforms on the CPU: we're changing it every frame, so it can be a dynamic buffer. But we only need to write it once if the GPU is doing coordinate transforms, so in that case we make it a static buffer.

This change "fakes" mapping a STATIC buffer for the first time by allocating some CPU-side memory, having the application write into that, and then initializing the buffer with that. This removes a performance warning on nVidia when switching to GPU-side coordinate transforms.
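The "fake mapping" can be sketched like so (names are illustrative, and a byte vector stands in for the GL-side buffer so the sketch is runnable; the real UnMap would do a single glBufferData-style upload):

```cpp
#include <cassert>
#include <vector>

// Fake the first Map of a STATIC buffer: hand the caller CPU-side
// staging memory, then upload it in one go on UnMap. The driver never
// sees a map request on the static buffer, so no VRAM migration.
class StaticBufferSketch {
    std::vector<unsigned char> m_gpu;      // stands in for the GL buffer
    std::vector<unsigned char> m_staging;  // CPU-side fake mapping
public:
    explicit StaticBufferSketch(std::size_t size)
        : m_gpu(size), m_staging(size) {}

    void* Map() { return m_staging.data(); }  // application writes here

    void UnMap() { m_gpu = m_staging; }       // one bulk upload, not a map

    const unsigned char* Data() const { return m_gpu.data(); }
};
```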
Fix an Intel GL warning by orphaning text memory
OpenGL 3.1 core profile support. If you thought the graphics code was ugly before, wait until you try this!
Hideously useless buffer perf optimization.
Horrible debug font bufferification. (Sorry)
Slightly less broken GraphicsBuffer implementation. I copied the original out of some terrible code I'd written ages ago. It is now less overtly broken. I'd still like to do an optimized version with Persistent Mapped Buffers and manual memory control, but that can wait until post-shaderification.
Store everything in a VBO, making things faster. Also 1024*1024 grid of boxes, for extreme slowness. Yay!