by mh » Fri Aug 26, 2011 7:11 pm
Animating meshes in your vertex shader is the way to go. Combine that with static VBOs containing all of the xyz data (better yet, if you have generic vertex attrib arrays you can use GL_UNSIGNED_BYTE for the Quake formats; don't normalize, but make sure you pad each vertex out to 4 bytes and set a w of 1).

Bind your VBO, set one attrib pointer to the xyz data for the current frame, set another to the xyz data for the previous frame, send your blend weight as a uniform and let the GPU do all the rest of the heavy lifting. Depending on how you do lighting you could potentially eliminate all per-frame CPU -> GPU vertex traffic completely. Hell, you could even encode the r_avertexnormal_dots table into a texture if you're still using that.
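A minimal vertex shader along those lines might look like the sketch below (old GLSL 1.20-era syntax; the attribute/uniform names are illustrative, and the scale/translate uniforms assume you uploaded unnormalized MDL-style byte coordinates that still need decompressing to object space):

```glsl
// Hypothetical two-frame lerping vertex shader -- a sketch, not mh's actual code.
uniform float blend;         // 0.0 = previous frame, 1.0 = current frame
uniform vec3 scale;          // model-header decompression scale (assumed name)
uniform vec3 translate;      // model-header decompression origin (assumed name)

attribute vec4 currPosition; // xyz for current frame, w = 1
attribute vec4 prevPosition; // xyz for previous frame, w = 1

void main ()
{
    // Unnormalized GL_UNSIGNED_BYTE attribs arrive as 0..255 floats;
    // lerp between the two frames, then decompress to object space.
    vec3 lerped = mix (prevPosition.xyz, currPosition.xyz, blend) * scale + translate;

    gl_Position = gl_ModelViewProjectionMatrix * vec4 (lerped, 1.0);
    gl_TexCoord[0] = gl_MultiTexCoord0;
}
```

Each frame the CPU then only rebinds the two attrib pointers into the static VBO and updates the blend uniform; no vertex data crosses the bus.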