Direct3D comes with an explicit concept of vsync. OpenGL does not. There are many raging debates about this that you can google across the internet, but let's cut to the chase: you can do vsync in OpenGL through extensions, and those extensions exist for all the major platforms.
What is vsync? Well, in the old days your screen was drawn by a raster beam that literally traversed from top to bottom, slowly enough that you could see it happen. These days the details of the vertical blanking interval depend on the kind of screen you're using: CRT, LCD, plasma, whatever. But as we push resolutions higher, even the most modern monitors can show "tearing": the top of the screen displaying one frame while the bottom still shows the previous one. This happens when the buffers are swapped while the monitor is partway through a refresh.
To avoid this problem, an application can synchronize with the monitor's actual vertical refresh, so that a new frame is only displayed once the old one has been scanned out to the entire screen. When wouldn't you want this? For real applications, pretty much never. For benchmarking you might explicitly turn it off to see just how many frames per second you can squeeze out of your app (though even that is a bit of a misrepresentation, since you could just measure your idle time and compute how fast you could really render at full speed anyway).
So how do you do it? On Windows, you use the WGL_EXT_swap_control extension, which lets you specify a swap interval. An interval of 1 means "wait one full refresh between swaps", 2 means "wait two", and 0 means "don't sync". You rarely need anything more than 1, and I'd be surprised if many modern drivers do anything special for values over 1.
On MacOSX, we can similarly change the swap interval by sending setValues:forParameter: to the OpenGLContext instance. And on Linux there's glXSwapIntervalSGI (and its EXT/MESA variants), but like on Windows it's an extension you have to hook up yourself.
I've implemented the Windows and MacOSX versions and left stubs in the X11 version (because I can't test it right now). The OpenGL-Lessons will now turn on vsync by default if they can. The API is straightforward: #isVSyncEnabled and #isVSyncEnabled:, where the latter lets you change it by passing in a boolean.
Here are some issues that have come to mind since I did this today:
- You can't easily find out what a monitor's refresh rate is, so if you're running with vsync on, the Main>>loop hertz setting is meaningless.
- You need to disable Main>>loop synchronization if you're using vsync, otherwise you'll constantly see frame drops: the loop runs at 60hz and the monitor runs at 60hz, but it's impossible to match them up exactly, so the CPU loop will always end after the vsync and believe it has lost a frame.
- Given that we generally -want- to run with vsync and can't rely on it always running at 60hz, the current "animation" approach I'm using for the Main class is not going to work. It assumes it can run at a fixed hertz, and even if you could count on 60hz, dropping a single frame means your animations are one frame behind from then on.
So - this means I need to redo the way events work. I'll give each event its own time delta, passing in the time elapsed since it last ran. That should allow accurate physics simulation, movement, etc. Objects moving about in 3d space may "jerk" slightly if the framerate drops below the vsync rate, but that's the trade-off: occasional visual stutter when a frame takes too long, in exchange for movement that stays accurate in time.
For now, the code naively assumes we're running at 60hz. It'll have to stay that way until I get some more free time to change it.
Another thing I need to do is make the transform feedback tests and the framebuffer tests actually check whether those extensions are available and, if they're not, throw an error on open instead of breaking in the middle of the code.
Have fun Smalltalking!