Take a step into the wayback machine for a moment and zoom back to your DOS days, or earlier, when you were making computer games. Those games drew the entire screen, every frame. You would push your graphics drawing to squeeze as many frames per second out of the computer as possible. You had one process, and that process drew the screen and dealt with user input all at once. Everything was done by you: drawing widgets, laying them out, handling user input, animating the background... everything.
There were some very nice properties to this kind of programming. For one, it was fast: you could make your program run very quickly. For two, it was optimized: you only did what you needed to do every loop. For three, you had complete control over everything.
And with that freedom comes a cost: you must do _everything_ yourself. This turns out to be exactly what you want if you're making a custom user interface - for, say, a game. But if you're making the same sorts of user interfaces over and over again, you find yourself creating abstractions and refactoring code into reusable methods. And thus, a user interface framework is born.
Immediate Mode Graphical User Interfaces (IMGUIs) are a renaissance movement to make people aware that when your abstraction breaks down, you have to take a step back and look at your problem with fresh eyes. If you're no longer making a GUI like everyone else, then you don't want to use the code that everyone else is using.
It's a good movement. It's smart. If you're building a game, almost everything on the screen is animated at one point or another - almost all the time. Even on menu screens you usually have some sort of movie playing in the background. In this kind of environment you run in a mode of "always updating the screen"... in other words, you're frame hunting, trying to get as many frames per second as you can.
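Just to make the shape of that concrete, here's a minimal sketch of such a loop in C - the function names are stand-ins I've made up, not any real API:

```c
#include <stdbool.h>
#include <stdio.h>

/* Stub platform hooks -- illustrative stand-ins, not any real API. */
static int frames_left = 3;
static bool poll_input(void)      { return frames_left-- > 0; }  /* false = quit */
static void update_world(void)    { /* advance animations, physics, ... */ }
static void draw_everything(void) { puts("redraw the whole screen"); }

/* The classic "redraw everything, every frame" loop: input handling,
 * simulation and rendering all live in one process, running flat out. */
int main(void) {
    while (poll_input()) {   /* one pass per frame */
        update_world();
        draw_everything();   /* the entire screen, from scratch */
    }
    return 0;
}
```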
Imagine, for a moment, that every desktop application you have was trying to do the same thing. Ouch. IMGUI would work on the desktop if, and only if, everyone shared the same update process. That would require some kind of coordination and rules as to how 'things should be'. Funnily enough, that's what we have - what the IMGUI guys call an RMGUI, or "Retained Mode" GUI - in other words, everything on the screen is remembered between frames.
This concept is important because it lets you do several things. It lets you decide to redraw only parts of the screen. It also lets you decouple event handling from drawing, so that you can take input events while another part of the screen is updating. This is a logical step when you want to share the screen - and the input devices - with other applications.
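Here's a rough C sketch of the "only draw bits of the screen" part - a dirty flag on each remembered widget, with made-up names:

```c
#include <stdbool.h>
#include <stdio.h>

/* A minimal retained-mode idea: widgets persist in memory, and only
 * the ones marked dirty get repainted. Names are illustrative. */
typedef struct {
    const char *name;
    bool dirty;               /* does this widget need redrawing? */
} Widget;

static void paint(Widget *w) { printf("repaint %s\n", w->name); w->dirty = false; }

int main(void) {
    Widget widgets[] = { {"menu", false}, {"editor", true}, {"status", false} };
    /* Only the invalidated bit of the screen is redrawn; events for
     * the other widgets can be handled in the meantime. */
    for (int i = 0; i < 3; i++)
        if (widgets[i].dirty)
            paint(&widgets[i]);   /* repaints just "editor" */
    return 0;
}
```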
As such, I find some of the rhetoric about RMGUIs on the IMGUI forums rather disturbing and annoying. They also have a thing against OO programming. It's an interesting read, but it also ticks me off in unique and interesting ways. I should point out that, coming from the Smalltalk world, we're less Object-Oriented Programmers than Message-Oriented Programmers. So in many ways, we fit with what 'casey' talks about in the forum.
Casey likes to push the idea that you have structures (data) and methods that do stuff with or to them. Well, for me, that's what objects are: structures with methods on them that have to do with how you want to abuse that data. Inheritance is a good way to get some reuse. Certainly, using objects as a type system is not high on my todo list, and many dynamic language advocates will say the same thing.
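To make that claim concrete, here's a tiny C sketch - a structure plus the functions that know about it is, for most purposes, an object (the names are invented for the example):

```c
#include <stdio.h>

/* The "object" is just data... */
typedef struct { double balance; } Account;

/* ...and these are its "methods": functions that take the structure
 * they operate on. Squint and it's message sending. */
static void   account_deposit(Account *a, double amount) { a->balance += amount; }
static double account_balance(const Account *a)          { return a->balance; }

int main(void) {
    Account a = { 0.0 };
    account_deposit(&a, 10.0);              /* a message sent to the data */
    printf("%.2f\n", account_balance(&a));  /* prints 10.00 */
    return 0;
}
```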
So, we have a bunch of widgets we -might- draw on the screen. In an IMGUI, you still need to store your state... it's a list and it has something selected; even if we aren't "hot" or "active", that selected item is still selected. Same with a text area: some of the text may be selected, and where is it scrolled to? So, in IMGUIs you still have to store all this information - in a structure - about the widget.
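A minimal sketch of what that means, assuming a hand-rolled IMGUI in C (the names are mine, not from any particular library) - the selection state lives outside the widget call and gets threaded back in every frame:

```c
#include <stdio.h>

/* State the application must keep anyway: selection and scroll position
 * survive between frames even in an immediate-mode GUI. */
typedef struct {
    int selected;    /* which row is selected */
    int scroll_top;  /* first visible row */
} ListState;

/* An IMGUI-style call: draws the list *and* handles its input, every
 * frame. A real IMGUI would also track "hot"/"active" widget ids. */
static void do_list(ListState *s, const char **items, int n, int clicked_row) {
    if (clicked_row >= 0 && clicked_row < n)
        s->selected = clicked_row;             /* remembered next frame */
    for (int i = s->scroll_top; i < n; i++)
        printf("%c %s\n", i == s->selected ? '>' : ' ', items[i]);
}

int main(void) {
    const char *items[] = { "alpha", "beta", "gamma" };
    ListState list = { 0, 0 };          /* lives outside the frame loop */
    do_list(&list, items, 3, 2);        /* frame 1: user clicks row 2 */
    do_list(&list, items, 3, -1);       /* frame 2: selection persists */
    return 0;
}
```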
Well, that structure may as well be the widget. It still doesn't mean you -have- to draw it. In fact, when it comes to culling what you draw on the screen, having a widget hierarchy in memory makes that very simple. The guy in the video points out that IMGUIs are notoriously bad at culling - if you do decide to cull, which bits of your code should you run?
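A quick sketch of why the hierarchy helps with culling, assuming a simple retained tree in C with invented names - skipping a parent skips its whole subtree:

```c
#include <stdio.h>

/* With a retained widget tree, culling is a bounds check per subtree. */
typedef struct Widget {
    const char *name;
    int y, h;                      /* vertical bounds, for simplicity */
    struct Widget *children;
    int n_children;
} Widget;

static void draw_visible(Widget *w, int view_top, int view_bot) {
    if (w->y + w->h < view_top || w->y > view_bot)
        return;                    /* whole subtree culled, no code runs */
    printf("draw %s\n", w->name);
    for (int i = 0; i < w->n_children; i++)
        draw_visible(&w->children[i], view_top, view_bot);
}

int main(void) {
    Widget rows[] = { {"row0", 0, 20, NULL, 0}, {"row1", 500, 20, NULL, 0} };
    Widget list = { "list", 0, 520, rows, 2 };
    draw_visible(&list, 0, 240);   /* row1 never runs any drawing code */
    return 0;
}
```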
So finally, when it comes to doing state machines with IMGUIs, you're actually worse off than using an event-driven state machine. Why do I say that? Because every time you draw the GUI, you go through the code that handles a) drawing, b) events, and c) state. That means, at a minimum, 24 times a second, you're figuring this stuff out over and over and over again.
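Here's that cost in miniature, as a toy C loop - nothing in it comes from a real library:

```c
#include <stdio.h>

/* The IMGUI path re-runs input handling, state logic and drawing for
 * every widget on every frame, whether anything changed or not. */
static void do_gui(int frame) {
    /* a) drawing, b) events, c) state -- all re-evaluated per frame */
    printf("frame %2d: handle input, update state, redraw\n", frame);
}

int main(void) {
    for (int frame = 0; frame < 24; frame++)  /* one second at 24 fps */
        do_gui(frame);                        /* 24 full passes, idle or not */
    return 0;
}
```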
I point this out because the link to IMGUIs was pasted as a comment on my post about experimenting with input events. The idea of my experiment was to encapsulate the current state of a widget in a Smalltalk process. That process doesn't start and stop with every drawing frame. It sits quietly in the background, waiting for events and running code... it persists across time. In fact, one might call it a Remembered Mode State Machine (hah!).
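A Smalltalk process doesn't translate directly into C, but here's a rough analogue of the idea, with hypothetical event names - the state machine blocks on events and persists, instead of being re-run every frame:

```c
#include <stdio.h>

/* Hypothetical event types, for illustration only. */
typedef enum { EV_CLICK, EV_KEY, EV_QUIT } EventKind;
typedef struct { EventKind kind; } Event;

/* Canned queue standing in for a blocking wait-for-event call. */
static Event queue[] = { {EV_CLICK}, {EV_KEY}, {EV_CLICK}, {EV_QUIT} };
static int q = 0;
static Event wait_event(void) { return queue[q++]; }

/* The widget's state machine persists across time: it is never rebuilt
 * per frame, only advanced when an event actually arrives. */
int main(void) {
    enum { IDLE, FOCUSED } state = IDLE;
    for (;;) {
        Event e = wait_event();            /* sleep until something happens */
        if (e.kind == EV_QUIT) break;
        if (state == IDLE && e.kind == EV_CLICK) {
            state = FOCUSED; puts("idle -> focused");
        } else if (state == FOCUSED && e.kind == EV_KEY) {
            state = IDLE;    puts("focused -> idle");
        }
    }
    return 0;
}
```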
Now don't get me wrong: I think IMGUIs are a good idea. But this comes back to the point I made at the top of the post - you should employ an IMGUI when you realise that the abstraction you're currently using doesn't make sense for what you're trying to achieve. Good luck to them - and when I go to write my next game, I'll be giving their pattern a try.