Tuesday, March 22, 2011

A Brief History of Time

Instead of fixing up some multiplayer issues (which I should be doing) I've been experimenting with timers again. I believe that I've now taken the final steps required to properly decouple all of the subsystems in the engine, and now it's just a matter of determining which timer each of them should run on for any given frame (and under which circumstances: local server, active timedemo, etc).

Right now I have a build with 3 such timers running each frame: one (which I'll call the "Client Timer") ticking according to the value of host_maxfps, another (the "Server Timer") ticking at up to 72 FPS, and a third (the "Steady Timer") ticking at up to 500 FPS. Depending on the mode that the engine is running in, the various events during a frame (reading client input, rendering a screen, running the server, updating sound, etc) each fire off based on one of these timers.
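To make that concrete, here's a minimal sketch of how such per-subsystem timers might work, using a simple accumulator per timer. The names (gametimer_t, Timer_Fired) are purely illustrative, not the engine's actual code:

    #include <stdbool.h>

    /* Illustrative sketch only: one accumulator per subsystem timer. */
    typedef struct {
        double accumulated;   /* real time banked since this timer last fired */
        double interval;      /* seconds per tick, e.g. 1.0 / 72 for the server */
    } gametimer_t;

    /* Bank this frame's elapsed real time and report whether the timer fires. */
    static bool Timer_Fired (gametimer_t *t, double frametime)
    {
        t->accumulated += frametime;

        if (t->accumulated < t->interval)
            return false;

        t->accumulated -= t->interval;   /* keep the remainder for accuracy */
        return true;
    }

Each frame the engine would then run the server only when the Server Timer fires, read input and render only when the Client Timer fires, and so on, all driven from the one main loop.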

This has all got me thinking more about exactly how often certain events should fire off. What follows is purely "thinking aloud", and no final decisions have been made, but it's an interesting set of questions.

First up is the renderer. I'm wondering if the renderer should ever actually run at more than - say - twice your refresh rate. What exactly is the value of running the renderer (and remember that I'm only talking about the renderer here, not client input, for example) at 97 bajillion frames per second versus running it at 120? On the other hand, framerate locking the renderer would result in complaints that host_maxfps isn't working and that timedemos are unusually slow, so on balance leaving it unlocked seems better. Sometimes you have to do the Wrong Thing in order to do the Right Thing.

Second up is the server. Traditionally this has been subject to a cap of 72 FPS, but I've come to believe that the Quake server (particularly in single player games) should actually run a good bit slower than that. Look at it this way: back in 1996 most people were getting 20-30 FPS from Quake, and that was considered good performance. Quake development would have involved tuning the behaviours of physics etc to give the intended results at that kind of framerate. Since Quake has framerate-dependent physics, by this logic even 72 FPS is too fast, and strictly speaking incorrect.
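To illustrate the framerate dependence (a toy example, not Quake code): movement is integrated with simple per-frame Euler steps, so the outcome of something as basic as a jump depends on the step size. Assuming values close to Quake's defaults (sv_gravity 800, a jump velocity of roughly 270 units per second):

    #include <stdio.h>

    /* Toy illustration, not Quake code: per-frame Euler integration of a
       jump under gravity. The apex height differs with tick rate because
       the error in "v += g * dt; y += v * dt" accumulates differently
       depending on the step size. */
    int main (void)
    {
        const double g = -800.0;    /* close to Quake's default sv_gravity */
        const double v0 = 270.0;    /* roughly a Quake jump velocity */
        const double rates[] = {25.0, 72.0};

        for (int i = 0; i < 2; i++)
        {
            double dt = 1.0 / rates[i];
            double y = 0, v = v0, apex = 0;

            while (v > 0 || y > 0)
            {
                v += g * dt;
                y += v * dt;
                if (y > apex) apex = y;
            }

            printf ("%5.0f FPS: apex = %.2f units\n", rates[i], apex);
        }

        return 0;
    }

The two tick rates give measurably different jump heights from identical inputs, which is exactly why physics tuned at 20-30 FPS behaves differently at 72.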

So why was 72 selected? A comment in the original source code ("don't run too fast, or packets will flood out") indicates that it's intended as an absolute upper limit, rather than "the framerate that Quake is supposed to run at" (which I believe is actually nearer 25). And why 72, rather than something like 70 or 75? The most plausible explanation, to me, is that 72 Hz was a common (if not the most common) monitor refresh rate at the time, and it was chosen more or less arbitrarily on that basis.
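For reference, the cap lives in Host_FilterTime in host.c; this is paraphrased from memory from the released source, so treat the exact details as approximate:

    #include <stdbool.h>

    /* Paraphrased from memory from Host_FilterTime in the released Quake
       source (host.c); exact details may differ. Returning false means
       "too soon, skip this frame entirely", which is what caps the whole
       engine at 72 FPS. */
    static double realtime, oldrealtime, host_frametime;
    static bool timedemo;    /* stands in for cls.timedemo */

    bool Host_FilterTime (float time)
    {
        realtime += time;

        /* don't run too fast, or packets will flood out */
        if (!timedemo && realtime - oldrealtime < 1.0 / 72.0)
            return false;

        host_frametime = realtime - oldrealtime;
        oldrealtime = realtime;

        return true;
    }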

This is all conjectural, of course, and I must dig through the old .plan archives to see if I can find anything that has a bearing on this.

So like I said, no final decisions made yet, and more testing of everything is required before I'd even consider a release with decoupled timings again, but it is a very interesting topic.

5 comments:

=peg= said...

From a multiplayer player's point of view, I would love to be able to use the maximum input rate, while v_syncing the renderer to the monitor refresh rate (rendering more frames than the screen can actually display seems pointless to me).
The only reason for setting host_maxfps to 250 or even higher (in the current situation where input rates are tied to the renderer framerate) is to get smoother and more responsive mouse/keyboard input!

As for single player... I suppose your reasoning there makes sense, but I'd like to be able to bunny hop in SP as well, whether this is an originally intended gameplay mechanic or not ;)

=peg= said...

Forgot to mention that I still try to get the highest refresh rate out of my screen, in order to minimize video lag..

60Hz means that it takes up to 1/60 s ≈ 16.7 ms before the screen displays what happened on the server, which is of course on top of the connection latency..

Basically, the sooner you can see what happened, the faster you can respond to that (which is where the input-rates come in)..

Obviously MH is well aware of all this, but I'm just stating it here for the sake of clarification ;)

=peg= said...

*clarity (excuse my English, it's not my native language ;))

mhquake said...

It actually takes longer because your GPU buffers up to 3 frames' worth of data and commands, so you might be waiting up to one-twentieth of a second before what you see on-screen matches what you just did (assuming 60Hz).

You can get rid of the buffering in OpenGL by issuing a glFinish command (gl_finish 1 in Quake), but D3D doesn't have an equivalent. It's possible to fake it with a call that forces a CPU/GPU synchronisation, such as reading back from the framebuffer, but I haven't implemented anything like that.
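One other common way to force that kind of CPU/GPU synchronisation in D3D9 is an event query that the CPU spins on (a different mechanism from the framebuffer read-back, and again just a sketch; nothing like this is in the engine, and the D3D_Finish name is made up):

    #include <d3d9.h>

    /* Sketch only: forcing a CPU/GPU sync in Direct3D 9 via an event query,
       roughly equivalent in effect to OpenGL's glFinish. Error handling kept
       minimal; 'device' is assumed to be a valid IDirect3DDevice9 pointer. */
    void D3D_Finish (IDirect3DDevice9 *device)
    {
        IDirect3DQuery9 *query = NULL;

        if (FAILED (IDirect3DDevice9_CreateQuery (device, D3DQUERYTYPE_EVENT, &query)))
            return;

        /* put a marker at the end of everything queued so far */
        IDirect3DQuery9_Issue (query, D3DISSUE_END);

        /* spin until the GPU has consumed everything issued before the marker */
        while (IDirect3DQuery9_GetData (query, NULL, 0, D3DGETDATA_FLUSH) == S_FALSE)
            ;

        IDirect3DQuery9_Release (query);
    }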

mhquake said...

One thing that *I* forgot to mention is that even in ID Quake it's incredibly simple to decouple client input from everything else, and buffer up events precisely as they happen.
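A minimal sketch of what that decoupling might look like (illustrative names, not ID's or my actual code): sample input on its own timer, stamp each event with the time it actually occurred, then let the client frame drain the buffer:

    #include <stdbool.h>

    /* Illustrative sketch only: a ring buffer of input events, each stamped
       with the time it happened, filled as fast as the input timer runs and
       drained whenever the client next runs a frame. */
    #define MAX_INPUT_EVENTS 256   /* must be a power of two for the mask */

    typedef struct {
        double time;    /* when the event actually occurred */
        int    key;     /* key or button code */
        bool   down;
    } inputevent_t;

    static inputevent_t in_events[MAX_INPUT_EVENTS];
    static unsigned in_head, in_tail;

    /* stand-in prototype for Quake's existing Key_Event handler,
       which really takes a qboolean */
    void Key_Event (int key, bool down);

    /* called from the input timer / message pump, as often as possible;
       no overflow check, for brevity */
    void IN_QueueEvent (double time, int key, bool down)
    {
        inputevent_t *ev = &in_events[in_head++ & (MAX_INPUT_EVENTS - 1)];

        ev->time = time;
        ev->key = key;
        ev->down = down;
    }

    /* called once per client frame: replay everything since the last frame */
    void CL_DrainEvents (void)
    {
        while (in_tail != in_head)
        {
            inputevent_t *ev = &in_events[in_tail++ & (MAX_INPUT_EVENTS - 1)];
            Key_Event (ev->key, ev->down);
        }
    }

The timestamp isn't used by Key_Event itself, but once events are buffered this way the client can order and process them independently of when the renderer or server last ran.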