This is an interesting question I've been pondering (and experimenting with) in the course of my resolution-changing code. Received wisdom appears to be that when changing resolution you should take the following steps (simplified list; a code sketch of the full sequence follows it):
- Delete all OpenGL objects.
- Destroy your current OpenGL context and make the current context NULL.
- Destroy your window.
- Recreate your window.
- Set a new pixel format.
- Create a new context and make it current.
- Reload all of your OpenGL objects (textures/display lists/etc).
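In code, the received wisdom works out to something like this. This is just a rough sketch, not anyone's actual code: GL_DeleteObjects, GL_ReloadObjects and VID_CreateWindow are made-up stand-ins for whatever your engine actually does, and the pixel format fields are just plausible defaults.

```c
#include <windows.h>

/* hypothetical engine helpers */
void GL_DeleteObjects (void);
void GL_ReloadObjects (void);
HWND VID_CreateWindow (int width, int height);

void VID_FullRestart (HWND *hwnd, HDC *hdc, HGLRC *hglrc, int width, int height)
{
    /* delete all OpenGL objects while the old context is still current */
    GL_DeleteObjects ();

    /* destroy the context and make the current context NULL */
    wglMakeCurrent (NULL, NULL);
    wglDeleteContext (*hglrc);

    /* destroy and recreate the window */
    ReleaseDC (*hwnd, *hdc);
    DestroyWindow (*hwnd);
    *hwnd = VID_CreateWindow (width, height);
    *hdc = GetDC (*hwnd);

    /* set a new pixel format (plausible default fields) */
    PIXELFORMATDESCRIPTOR pfd = {0};
    pfd.nSize = sizeof (pfd);
    pfd.nVersion = 1;
    pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;
    SetPixelFormat (*hdc, ChoosePixelFormat (*hdc, &pfd), &pfd);

    /* create a new context and make it current */
    *hglrc = wglCreateContext (*hdc);
    wglMakeCurrent (*hdc, *hglrc);

    /* reload textures, display lists, etc. */
    GL_ReloadObjects ();
}
```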
If you're the cautious type you'll likely do all of the above anyway, and be happy to leave it at that. But consider: you could be creating a load of unnecessary work for yourself. There are plenty of GLUT examples where the window size changes but a Pixel Format switch is not required, and how is that really any different from a change of resolution? The same goes for when the scr_viewsize cvar changes.
So I decided to throw caution to the wind and just change the window size (and resolution if required), without bothering to destroy and recreate anything. And guess what: it worked. Perfectly. First time. Everything survived a ChangeDisplaySettings call with no issues or corruption. I've tested the following mode changes and everything just works:
- Change from one Windowed mode to another.
- Change from Fullscreen to Windowed.
- Change from Windowed to Fullscreen.
- Change from one Fullscreen mode to another.
- Change from 16 bit to 32 bit.
- Change from 32 bit to 16 bit.
So the answer to my initial question seems to be: SetPixelFormat is only required when you need to change the bit-depth of the color or depth buffers. Never any other time.
For the record, here's the procedure I use for switching resolution (a rough code sketch follows the list).
- Determine if ChangeDisplaySettings is required by comparing the new mode with the current mode, and call it if required.
- Call SetWindowLong to change the window style (title bar & borders vs. none).
- Call AdjustWindowRectEx to set the client area to the desired window size (adjusting for title bar & borders, or lack thereof).
- Call MoveWindow to position the window to the origin (not required, but hey!)
- Call CenterWindow to center the window (required as the size will have changed).
- Call InvalidateRect (NULL, NULL, TRUE); to invalidate every window and force a repaint of the desktop (required to refresh desktop areas that the window may previously have covered).
- Call IN_ActivateMouse and IN_HideMouse to bring the mouse settings into sync with the new mode (if it's a windowed mode, the next GL_EndRendering will finish fixing things up).
- Store the new mode settings back to the current mode settings, and do some other bookkeeping stuff.
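Here's roughly how those steps hang together in code. Treat this as an outline rather than drop-in code: the currentmode struct, CenterWindow, IN_ActivateMouse and IN_HideMouse are stand-ins for my own engine-side state and helpers, and the window style flags are just plausible choices.

```c
#include <windows.h>

/* hypothetical state and helpers standing in for the engine's own */
HWND hWnd;
struct { int width, height, bpp; BOOL fullscreen; } currentmode;
void CenterWindow (HWND wnd);
void IN_ActivateMouse (void);
void IN_HideMouse (void);

void VID_SetMode (int width, int height, int bpp, BOOL fullscreen)
{
    /* call ChangeDisplaySettings only if the display mode actually differs */
    if (fullscreen)
    {
        if (!currentmode.fullscreen || width != currentmode.width ||
            height != currentmode.height || bpp != currentmode.bpp)
        {
            DEVMODE dm = {0};
            dm.dmSize = sizeof (dm);
            dm.dmPelsWidth = width;
            dm.dmPelsHeight = height;
            dm.dmBitsPerPel = bpp;
            dm.dmFields = DM_PELSWIDTH | DM_PELSHEIGHT | DM_BITSPERPEL;
            ChangeDisplaySettings (&dm, CDS_FULLSCREEN);
        }
    }
    else if (currentmode.fullscreen)
        ChangeDisplaySettings (NULL, 0);   /* restore the desktop mode */

    /* window style: title bar and borders vs. none */
    DWORD style = fullscreen ? WS_POPUP : (WS_OVERLAPPED | WS_CAPTION | WS_SYSMENU);
    SetWindowLong (hWnd, GWL_STYLE, style);

    /* grow the rect so the client area ends up the requested size */
    RECT rect = {0, 0, width, height};
    AdjustWindowRectEx (&rect, style, FALSE, 0);

    /* move to the origin, then center */
    MoveWindow (hWnd, 0, 0, rect.right - rect.left, rect.bottom - rect.top, TRUE);
    CenterWindow (hWnd);

    /* repaint desktop areas the window may previously have covered */
    InvalidateRect (NULL, NULL, TRUE);

    /* bring the mouse into sync with the new mode */
    IN_ActivateMouse ();
    IN_HideMouse ();

    /* bookkeeping: the new mode becomes the current mode */
    currentmode.fullscreen = fullscreen;
    currentmode.width = width;
    currentmode.height = height;
    currentmode.bpp = bpp;
}
```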
Post-postscript: If one were feeling a bit cautious but still wanted to try this method out, one could do a ChoosePixelFormat on the desktop DC (using GetDC (NULL);), then do the SetPixelFormat on the window DC. This is based on the assumption that whatever pixel format is valid for the desktop will also be valid for any window we care to create. I'll probably end up going down this route myself.
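A minimal sketch of that cautious variant (the descriptor fields are again just plausible defaults, and the function name is made up):

```c
#include <windows.h>

/* choose a format against the desktop DC, then set it on the window DC */
BOOL VID_SetDesktopCompatiblePixelFormat (HDC hdcWindow)
{
    PIXELFORMATDESCRIPTOR pfd = {0};
    pfd.nSize = sizeof (pfd);
    pfd.nVersion = 1;
    pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;

    /* ask the desktop DC which format it would pick... */
    HDC hdcDesktop = GetDC (NULL);
    int format = ChoosePixelFormat (hdcDesktop, &pfd);
    ReleaseDC (NULL, hdcDesktop);

    if (!format)
        return FALSE;

    /* ...and apply that format to our own window's DC */
    return SetPixelFormat (hdcWindow, format, &pfd);
}
```

The appeal is simply that the desktop's format is about the safest choice available, since the driver is demonstrably happy rendering with it already.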
2 comments:
logically you don't need to destroy the context at all when changing resolution/video mode.
logically you don't need to recreate everything.
illogically, ati drivers are buggy piles of poo and will give you widespread texture corruption if you do not.
Even if you just let the user resize the window using the normal Windows resizing stuff, as soon as a new texture is loaded (i.e. next map) the previous textures will be corrupted, generally in the mip levels, etc.
But yes, this is an ATI/driver bug, and the actual WGL API is meant to allow it. These are the same issues that Carmack stumbled into when he first tried adding dynamic video mode changes to GLQuake. He found _all_ drivers broke in some way, which is why engine modders don't normally try to do it the fast way.
Ah, I knew ATI had to have something to do with it...!