Hi, a while ago I was working on a patch for QuakeSpasm to speed up mdl rendering on glsl-capable hardware (lerping between frames done in glsl, one draw call per mdl, all mdl data in a static vbo). It works really well on my main system (nvidia 650gt); on maps like ne_ruins (with 120k epolys) I get roughly double the fps of the unpatched build.
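For context, the core of the approach is just a mix() between the two keyframe positions in the vertex shader. A rough sketch of the idea, with made-up attribute/uniform names rather than the exact source from the branch:

    /* Sketch of the frame lerp in the vertex shader -- names are
       illustrative, the real code is in the alias5 branch: */
    static const char *aliasVertSource =
        "uniform float Blend;\n"      /* 0.0 = pose 1, 1.0 = pose 2 */
        "attribute vec4 Pose1Vert;\n"
        "attribute vec4 Pose2Vert;\n"
        "void main()\n"
        "{\n"
        "    vec4 lerpedVert = mix (Pose1Vert, Pose2Vert, Blend);\n"
        "    gl_Position = gl_ModelViewProjectionMatrix * lerpedVert;\n"
        "}\n";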
However, szo tested the patch for me on his radeon HD 7700 and reported r_speeds frame times going from ~60ms to ~175ms on ne_ruins. That's bad enough to suggest software fallback is happening somewhere, I think.
I guess I should find/borrow an AMD gpu system to debug on, but short of that, I was wondering if anyone with an ATI/AMD card would mind giving it a try. Any tricks for debugging software fallbacks? I'm using 4x normalized GL_BYTE for the vertex normals, and 4x unnormalized GL_UNSIGNED_BYTE for the vertex coordinates. Those are maybe a bit unusual / old-fashioned vertex formats, but they're listed in this (old) pdf as natively supported on ati: http://amd-dev.wpengine.netdna-cdn.com/ ... _Guide.pdf
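Concretely, the attribute setup looks something like this (the attribute locations, struct name, and offsets are placeholders here; the real layout is in the diff linked below):

    /* Placeholder names -- see the actual diff for the real layout: */
    glBindBuffer (GL_ARRAY_BUFFER, meshVBO);

    /* vertex coords: 4x unnormalized unsigned bytes,
       scaled/translated back to model space in the shader */
    glVertexAttribPointer (vertAttrib, 4, GL_UNSIGNED_BYTE, GL_FALSE,
                           sizeof (meshVert_t), (void *) vertOffset);

    /* normals: 4x normalized signed bytes */
    glVertexAttribPointer (normalAttrib, 4, GL_BYTE, GL_TRUE,
                           sizeof (meshVert_t), (void *) normalOffset);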
Could calling glUseProgram, glBindBuffer, and glVertexAttribPointer per-model really be that fatal for performance on AMD? Or maybe combining glsl vertex shading with fixed-function fragment shading is a bad idea.
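For reference, what the patch does per model drawn is roughly this (uniform setup and error checking omitted, identifiers are again placeholders):

    /* Per-model draw path, roughly: */
    glUseProgram (aliasProgram);
    glBindBuffer (GL_ARRAY_BUFFER, meshVBO);
    glBindBuffer (GL_ELEMENT_ARRAY_BUFFER, indexVBO);
    /* ...glVertexAttribPointer calls as above, set the Blend uniform... */
    glDrawElements (GL_TRIANGLES, numIndexes, GL_UNSIGNED_SHORT, (void *) 0);
    glUseProgram (0);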
This is the glsl branch on github:
https://github.com/ericwa/Quakespasm/tree/alias5
and a windows binary: http://quakespasm.ericwa.com/job/quakes ... 9cd846.zip
the master branch without the patch:
https://github.com/ericwa/Quakespasm/tree/master
the diff:
https://github.com/ericwa/Quakespasm/co ... r...alias5
Thanks