Thursday, December 16, 2010

Even more D3D11

So I'm now at the stage where I'd guess I have sample working code to enable maybe 75% of what's required for a port. I've finally managed to get textures working, and - once again - they've proven to be a hugely overblown monstrosity to create.
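To give a flavour of why texture creation feels so overblown, here's a rough sketch of the kind of thing needed just to get a single texture usable by a pixel shader. The variable names, format and sizes are placeholders of my own, and error checking is omitted, but the struct-filling pattern is the real one:

```cpp
// Sketch of D3D11 texture creation (assumes an existing ID3D11Device *device
// and a width * height RGBA pixel buffer called pixels).
D3D11_TEXTURE2D_DESC desc = {0};

desc.Width = width;
desc.Height = height;
desc.MipLevels = 1;                  // no mipmaps in this sketch
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

D3D11_SUBRESOURCE_DATA srd = {0};

srd.pSysMem = pixels;                // initial texel data
srd.SysMemPitch = width * 4;         // bytes per row

// two objects, not one: the texture itself, plus a separate view of it
// that shaders can actually sample from
ID3D11Texture2D *texture = NULL;
ID3D11ShaderResourceView *srv = NULL;

device->CreateTexture2D (&desc, &srd, &texture);
device->CreateShaderResourceView (texture, NULL, &srv);
```

And that's before you've even created the sampler state object that controls filtering and wrapping, which in D3D9 terms used to be a couple of SetSamplerState calls.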

The full list of what I've now tested includes render targets, depth buffers, depth state, sampler state, the Effects framework (I'll talk more about that shortly), shaders, vertex buffers, input layouts, dynamic VBO and texture updating.

I've reviewed the other state objects and am satisfied that they're no worse (but sadly no better) than those I've tested. I've also reviewed matrices and have determined that they work more or less identically to those in D3D9.

The D3D11 Effects framework is a strange beast. Microsoft no longer provide a binary implementation of it, so to use it you need to compile it yourself from source (provided with the SDK) and link it with your program. Not that big a deal (it'll fall under the GPL system library exception) but a very strange decision nonetheless.

HLSL without using effects is another option, of course, but I couldn't get textures to work this way (or alternatively, the documentation is incredibly poor and utterly failed to point me in the right direction).
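For the record, the non-Effects route I was attempting looks something like this on the application side. The slot numbers here are illustrative, and the key gotcha (which the documentation does a poor job of spelling out) is that they have to line up with the register declarations in the HLSL, shown in the comments:

```cpp
// Binding a texture without the Effects framework (assumes the srv and a
// sampler state were created earlier; slot numbers are illustrative).
//
// The matching HLSL declarations would be:
//     Texture2D    diffuseTex     : register(t0);
//     SamplerState diffuseSampler : register(s0);
// and the shader would sample with:
//     diffuseTex.Sample (diffuseSampler, uv);

context->PSSetShaderResources (0, 1, &srv);      // fills register t0
context->PSSetSamplers (0, 1, &sampler);         // fills register s0
```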

I haven't looked at constant buffers or compute shaders yet; these might be options for another time. Constant buffers seem like they might be useful, but I can't really see how compute shaders could offer anything in a Quake-related context.
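For anyone curious, my understanding is that a constant buffer would run along these lines (names and the matrix layout here are just for illustration; the one hard rule is that the buffer size must be a multiple of 16 bytes):

```cpp
// Minimal constant buffer sketch (assumes ID3D11Device *device and
// ID3D11DeviceContext *context exist; error checking omitted).
struct cbuffer_t
{
    float mvpMatrix[16];    // 64 bytes - already a multiple of 16
};

D3D11_BUFFER_DESC bd = {0};

bd.ByteWidth = sizeof (cbuffer_t);
bd.Usage = D3D11_USAGE_DEFAULT;
bd.BindFlags = D3D11_BIND_CONSTANT_BUFFER;

ID3D11Buffer *cb = NULL;
device->CreateBuffer (&bd, NULL, &cb);

// per-frame: update the whole buffer and bind it to vertex shader slot b0
cbuffer_t data;
// ... fill in data.mvpMatrix ...
context->UpdateSubresource (cb, 0, NULL, &data, 0, 0);
context->VSSetConstantBuffers (0, 1, &cb);
```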

The D3D11 paradigm is very much one of filling in a struct (or 2, or 3, or 4), passing that to a create function, and then applying the created objects at runtime. This very much front-loads the work at setup/startup time, leaving the actual runtime code quite light. It also confirms my own observations from playing around with the SDK samples - some of them take a very long time to start up, but once they get going they seem to run fast enough (faster than D3D9 in some cases).
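A typical state object shows the pattern nicely - all of the work is in filling the desc and creating the object up front, and the runtime part collapses to a single call (settings here are illustrative):

```cpp
// The struct-fill / create / apply pattern, using a rasterizer state
// (assumes the same device and context as before).
D3D11_RASTERIZER_DESC rd = {0};

rd.FillMode = D3D11_FILL_SOLID;
rd.CullMode = D3D11_CULL_BACK;
rd.DepthClipEnable = TRUE;

// created once at startup...
ID3D11RasterizerState *rs = NULL;
device->CreateRasterizerState (&rd, &rs);

// ...then applied at runtime with one light call
context->RSSetState (rs);
```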

So what's the next step? I'm almost motivated enough to try a port, and I like the idea of going back and doing what I had originally done with DirectQ, which is to do a very basic GLQuake port. I'm still undecided about this just now, so it's something I'll sleep on, but - if I do it - it won't be a repeat of last time, where the test port suddenly transforms into the new main codebase. I've definitely seen enough to convince myself that D3D11 is not a viable way forward for a Quake technology project, and that retaining DirectQ at D3D9 is a better decision.

So any port I may decide to do will just be for personal entertainment purposes, nothing else. To ensure that it stays that way, it will probably be Q2 or Q3A rather than Q1, and it may even be released, but that all depends on whether or not I actually do it.

6 comments:

MichealS said...

just curious about the current state of the DirectQ development

mhquake said...

The current status of DirectQ is that I'm building up 1.8.7 intermittently, but RMQ and other Real Life matters are taking up my time. 1.8.7 has some quite huge performance improvements, as well as some additional (mainly multiplayer) features, and my intention is to release it as soon as I'm happy that it's good. But that conflicts with and needs to balance with the other stuff I have going on, so it's happening slower than normal. But it will happen. :)

MichealS said...

That's cool, I understand having other things that need time. I do have a couple of other questions.

It's been a hella long time since I actually played through Quake, so I don't remember: Does fog actually play a role in the game?

And is it still missing in 1.8.666a?

mhquake said...

Fog has been back for a good while but it only works if you've got shaders. Use the "fog" command, like in FitzQuake.

There are a few problematic questions with fog - like how user-set fog interacts with mapper-set fog, and how things behave when you're underwater (particularly if the contents colour clashes badly with the fog colour) - that need to be answered, but unfortunately I don't have an answer for them.

Well, I do - I know how I think it should behave - but inevitable disagreements with elements of the mapping community would come from it, and I don't really want to face those, as things will only end in tears and it's the player who will suffer most.

There was no fog in the original Quake so it plays no part in the original game.

Nyarlathotep said...

Any word on how MHQuake performs or behaves on older bastions of Direct3D support, like Matrox Parhelias? Just curious.

mhquake said...

No idea about Matrox, but I suspect that legacy hardware is not going to get it at its best, as it needs full native D3D9 support (with 3 or more texture units) for the most optimal rendering path. It'll probably still be better than OpenGL, though.

It performs very well on Intel, which it was originally written for. In fact it'll completely wallop anything else on an Intel 915 (up to 3x better performance and infinity x better stability than an OpenGL engine in my own tests). The 915 actually does support the optimal rendering path in D3D, but with software emulation of vertex shaders. Interestingly though, the 915 is such a poor overall 3D performer that even software Quake is competitive with it...

Later Intels are better, earlier ones are worse, with the exception of the 965 which is the ugly duckling of that particular family.

It's also great on ATI - I've had reports of FPS never dropping below 200 even during more intense moments in Masque of the Red Death.