David's final changes: more profiler features, fixes.
Merge branch terrible and branch awful

scr.Present was in wrong place
Merging sucks
Everything sucks
GPUs suck
fglrx sucks
professionalism sucks
I hope no one looking to hire me ever reads these.

Conflicts:
	src/main.h
About to break everything with a merge
My eyes, they burn! Also runs faster, slightly less buggy.
Totally FITH everything

I'm trying to get it so that Path and View can use a different number representation to Bezier. It's not going well.
Use Gmprat for Path bounds with TRANSFORM_BEZIERS_TO_PATH

So, the bounds of Paths are stored with Gmprat. The Beziers are all stored relative to the Path, as floats. The transformations are only applied to the Path's Gmprat bounds. This seems to work rather well.

In other news, the De Casteljau algorithm in the CPU renderer had no upper limit on subdivision, which is why it was slowing down so much.

The CPU renderer tends to suffer from SIGFPE-itis when using floats, because a cast raises SIGFPE if the resultant type can't represent the operand. This occurs in a few places that don't actually affect the rendering...
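The runaway subdivision mentioned above can be stopped with an explicit depth cap. A minimal sketch of bounded De Casteljau flattening, assuming a simple `Vec2` type; none of these names are the project's actual API:

```cpp
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

static Vec2 Mid(Vec2 a, Vec2 b) { return {(a.x + b.x) / 2, (a.y + b.y) / 2}; }

// Recursively split the cubic until it is flat enough OR max_depth hits
// zero. Without the depth cap, curves that never satisfy the flatness
// test in float precision recurse without bound.
void Flatten(Vec2 p0, Vec2 p1, Vec2 p2, Vec2 p3,
             int max_depth, std::vector<Vec2>& out)
{
    // Rough flatness test: distance of control points from the chord.
    float flat = std::fabs((p1.x - p0.x) * (p3.y - p0.y)
                         - (p1.y - p0.y) * (p3.x - p0.x))
               + std::fabs((p2.x - p0.x) * (p3.y - p0.y)
                         - (p2.y - p0.y) * (p3.x - p0.x));
    if (max_depth <= 0 || flat < 1e-4f) {
        out.push_back(p3);  // emit one line segment ending at p3
        return;
    }
    // De Casteljau split at t = 1/2
    Vec2 a = Mid(p0, p1), b = Mid(p1, p2), c = Mid(p2, p3);
    Vec2 d = Mid(a, b), e = Mid(b, c);
    Vec2 m = Mid(d, e);
    Flatten(p0, a, d, m, max_depth - 1, out);
    Flatten(m, e, c, p3, max_depth - 1, out);
}
```

With a cap of `d`, the output is at most 2^d segments, so the renderer's worst case is bounded no matter how badly the flatness test behaves.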
Define for transformations on Path only; also fix segfault due to GraphicsBuffer

The "invalidated" flag is used to recycle memory in the GraphicsBuffer, but that can only be done if there is actually enough memory. This was causing a segfault if the document was initially empty and had things added to it. Can now remove the hack in main.cpp where documents started with a {0,0,0,0} RECT in them.

We can now (ab)use the Path for transformations. The idea was that arbitrary-precision stuff might be faster if we only care about the bounds of the paths. Unfortunately the way Real was used everywhere will make it a bit difficult to actually use this. We need Beziers and their bounds to be stored with floats, but the Path bounds to be stored with Gmprat or some other arbitrary-precision type. It probably won't help that much, but evaluating those Beziers with Gmprat really slows down the CPU renderer.

It's also a giant hack because I still use BezierRenderer for GPU rendering but I now use PathRenderer for CPU rendering of Beziers. Le sigh.

We can now store Bezier bounds relative to Path bounds and apply transformations only to the Path bounds.
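The "store Bezier bounds relative to the Path, transform only the Path bounds" scheme can be sketched roughly like this, with `double` standing in for the arbitrary-precision Gmprat type; all type and function names here are illustrative, not the project's API:

```cpp
// Path bounds: arbitrary precision in the real code (double here).
struct PathRect { double x, y, w, h; };
// Bezier bounds: plain floats, stored relative to the Path in [0,1].
struct BezRect  { float x, y, w, h; };

// Transformations touch only the Path bounds...
void Translate(PathRect& p, double dx, double dy) { p.x += dx; p.y += dy; }
void Scale(PathRect& p, double s) { p.x *= s; p.y *= s; p.w *= s; p.h *= s; }

// ...and the absolute bounds of a Bezier are recovered on demand by
// mapping its relative rectangle through the Path rectangle.
PathRect Absolute(const PathRect& path, const BezRect& b)
{
    return { path.x + b.x * path.w, path.y + b.y * path.h,
             b.w * path.w, b.h * path.h };
}
```

The point of the design: however many Beziers a Path holds, a transformation is one arbitrary-precision rectangle update, and the (cheap, float) per-Bezier data never changes.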
Add loadsvg script command, fix ParanoidNumber size limiting*

* Need to write some scripts and performance-test them now I guess? Maybe add commands for scripts to output information for plotting.
Some quadtree stuff: bugfix + mutation
// there is no elegance here. only sleep deprivation and regret.

Basically, clipping Béziers now actually works, because De Casteljau is no longer totally back to front. (In fact, I think I just made it more back to front, which cancelled the original out?)

Also disabled zoom-out in the quadtree, and there's some dodgy disabled debug code in the bezier shader.
Classify Beziers, use De Casteljau for CPU renderer

I don't know how glDrawLines does it, but I can't get rid of the wiggles in straight lines drawn using the CPU renderer. So I switched to De Casteljau. Then I made it classify the lines and only use one Bresenham instead of 100 anyway.
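For reference, a Bezier classified as a straight line can be drawn with a single pass of classic integer Bresenham, replacing the ~100 tiny segments the subdivision loop would otherwise emit. This is the textbook all-octant version, not the project's renderer code:

```cpp
#include <cstdlib>
#include <vector>

struct Pixel { int x, y; };

// Classic Bresenham line rasterization, handling all octants with the
// symmetric error-term formulation.
std::vector<Pixel> Bresenham(int x0, int y0, int x1, int y1)
{
    std::vector<Pixel> out;
    int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;
    while (true) {
        out.push_back({x0, y0});
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }  // step in x
        if (e2 <= dx) { err += dx; y0 += sy; }  // step in y
    }
    return out;
}
```

One line means one error term marching monotonically across the span, which is exactly why it has none of the wiggles that stitching many short segments produces.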
Careful, you may have to shade your eyes

Except for all the things that don't quite work, shading works perfectly.
A Song of Floodfills and Segfaults

Ok, just need to make the FloodFillOnCPU not stack overflow...
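The standard way to stop a flood fill from overflowing the call stack is to hold the frontier in an explicit container instead of recursing per pixel. A sketch under assumed types (a flat width×height pixel buffer; `FloodFill` here is not the project's actual FloodFillOnCPU):

```cpp
#include <utility>
#include <vector>

// Iterative 4-connected flood fill: the frontier lives in a heap-
// allocated std::vector, so depth is bounded by memory, not stack size.
void FloodFill(std::vector<int>& px, int w, int h,
               int x, int y, int target, int fill)
{
    if (target == fill) return;  // would loop forever otherwise
    std::vector<std::pair<int, int>> stack{{x, y}};
    while (!stack.empty()) {
        auto [cx, cy] = stack.back();
        stack.pop_back();
        if (cx < 0 || cy < 0 || cx >= w || cy >= h) continue;
        if (px[cy * w + cx] != target) continue;
        px[cy * w + cx] = fill;
        stack.push_back({cx + 1, cy});
        stack.push_back({cx - 1, cy});
        stack.push_back({cx, cy + 1});
        stack.push_back({cx, cy - 1});
    }
}
```

The recursive version dies at roughly (image area) stack frames; this one uses the same asymptotic memory but off the call stack.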
Go nuts with Qt

We can do all the things I promised!* And we can do it without having to implement a vim-style stdio-based interface! Or adding 16 extra mouse buttons!

Qt can parse XML or even SVG all by itself, though... I'm going to ignore that and just keep treating it as a menu system.

* Commit --amend: Well, the set of things we can do is neither a subset nor a superset of the things I promised.
Attempt Shading + Bezier Bounds (hopefully) correct

Turns out I can't do high school calculus despite 4 years of Physics study. The shading algorithm I envisioned has several rather hilarious things wrong with it...

Although that 'j' does look damn good if you set the zoom *just* right...
Bezier bounds rectangles are calculated correctly

CPU rendering and SVG parsing use absolute coordinates. GPU rendering uses relative coordinates (relative to the bounding box). The Objects struct stores the absolute bounding boxes now. Previously it was just using {0,0,1,1} (and thus the GPU's relative coordinates were equivalent to the CPU's absolute coordinates).

I might have fixed some other things but I can't remember.
Fix beziers on GPU.

We can now render an SVG correctly(ish)! Woah!
Remove terrible "pow()" functions

There are many reasons why that was terrible, and it finally all came apart in a segfaultastic display. We now have Power, which only works for integer powers. But we only need those at the moment anyway.
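An integer-only `Power` along these lines would typically use exponentiation by squaring; the name and signature below are guesses for illustration, not the project's actual implementation:

```cpp
// Exponentiation by squaring for non-negative integer exponents:
// O(log exp) multiplications instead of a general pow() with all its
// transcendental-function baggage.
double Power(double base, unsigned exp)
{
    double result = 1.0;
    while (exp) {
        if (exp & 1) result *= base;  // this bit of exp contributes a factor
        base *= base;                 // square for the next binary digit
        exp >>= 1;
    }
    return result;
}
```

Besides speed, this works unchanged for any type with multiplication, including arbitrary-precision rationals, where a general floating-point `pow()` has no sensible meaning.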
Béziers

It's hard to type the é, so I will just call them Beziers from now on.

New struct represents a cubic bezier; it can be evaluated. The Objects struct contains a vector of beziers, and a vector of indices for each object. If an ObjectType is BEZIER then the index can be used to look up the bezier control points. Control points are relative to the bounding rectangle, so we can reuse the same curves (eg: for fonts).

Rendering happens on the CPU only: subdivide and use Bresenham lines. Bresenham lines are not quite optimal, but I eventually gave up. So we don't have a "line" type, but you can make one by creating a Bezier where x1,y1 == x0,y0. They look kind of wobbly.

Save/Load not tested. It might break. But it will have to be pretty heavily rewritten soon anyway.
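The struct described above might look something like this sketch: a cubic Bezier in Bernstein form, with control points stored relative to a bounding rectangle so the same curve data can be reused at different positions and sizes. Field and function names are illustrative, not the project's actual code:

```cpp
// Bounding rectangle in absolute document coordinates.
struct Rect { float x, y, w, h; };

// Cubic bezier with control points in coordinates relative to a Rect.
struct Bezier { float x0, y0, x1, y1, x2, y2, x3, y3; };

// Evaluate at t in [0,1] using the Bernstein polynomial form, then map
// the relative result back into absolute coordinates via the Rect.
void Evaluate(const Bezier& b, const Rect& r, float t,
              float& out_x, float& out_y)
{
    float u = 1.0f - t;
    float bx = u*u*u*b.x0 + 3*u*u*t*b.x1 + 3*u*t*t*b.x2 + t*t*t*b.x3;
    float by = u*u*u*b.y0 + 3*u*u*t*b.y1 + 3*u*t*t*b.y2 + t*t*t*b.y3;
    out_x = r.x + bx * r.w;
    out_y = r.y + by * r.h;
}
```

Reuse then falls out for free: a font glyph's curves are stored once in [0,1] coordinates, and each occurrence on the page is just another Rect.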