Experimenting with the way events are processed
2007-05-12

For curiosity's sake, I thought I'd experiment with the way events occur in Smalltalk. Basically, the VM gets events from the operating system and throws them into the image via a class called InputState. This class is "special" in that it knows the primitives to hook into the events.

At this point, the classic user interface paradigm takes over. These events are handed to the @sensor of the active window. The sensor filters the events and queues them up on a window manager or something similar. The events are then targeted at particular widgets inside the window by traversing the widget hierarchy.

For a mouse event, the path to the widget is found by following the containing bounds of widgets until we find the innermost widget, which is handed the event. It then... does something. Generally speaking, most widgets sit in a stateless mode and accept events. However, that's where the complexity arises - even the simplest of widgets have at least two states - mouse up and mouse down. Many widgets have more than that.

Let's take a button, for example. When the mouse goes down, so does the button. When the mouse comes up, a click is fired and the button goes back up. Focus is often transferred to the button too, changing its look again to indicate that it will accept keyboard events.

There are more states than this though - if you move the mouse off of the button while the mouse button is down then the button will pop back up. If you release the mouse while off the button, nothing happens. If you move back on to the button while the mouse is still down, the button drops back down again.

There are all kinds of subtleties when it comes to widget behaviour - one of the most complex of the set is scrollbars. So when someone decides to make a widget framework, they have to come up with a mechanism for handling all of these states.

There are two common approaches to this. The first is to build a tracker of some kind that is enabled when the mouse goes down and sets up some state for the drawing code, so it can render the different visuals of what's going on.

The second approach is to build a state machine pattern and do the entire thing with state machine changes. This is often the easiest to do because it means there is less code and fewer chances for bugs to creep in by accident. It also encourages pluggable behaviour.
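As a rough illustration of the state machine approach (the class and method names here are my own invention, not from any particular framework), the widget delegates each event to a pluggable state object, and each state answers the next state:

Button>>handleEvent: anEvent
	"Delegate to the current state object; each state decides
	what the next state is, making the behaviour pluggable."
	state := state handle: anEvent for: self

The cost is that the widget's logic ends up scattered across a family of small state classes, which is exactly what the experiment below tries to avoid.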

So what's this blog post about? I thought I'd try a different way to handle the state of a widget and handle the events that are coming in to the window. I thought I'd see what it'd be like to use the Smalltalk stack as the state machine.

The idea is simple in a Smalltalk-is-elegant kind of way. When we open up the window we start an event process as well. It sits there reading events off of the event sensor of the window and acts on them. Sounds familiar so far - however, what we do is push this loop down into each widget. When the window gets an 'entered' event it calls the #process method on its view.

When the view gets a mouse moved event, it checks to see if the mouse has left its bounds. If it has, it simply does ^self. It then checks to see if the mouse has moved in to one of its subviews and if it has, it simply does: subview process.
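A minimal sketch of that loop might look like the following (the method and accessor names here are guesses for illustration, not the original code):

View>>process
	"Read events until the mouse leaves our bounds. When the
	mouse moves into a subview, delegate to it; its own ^self
	returns control to this loop."
	[self nextEvent.
	self containsLocalEventPoint ifFalse: [^self].
	self subviews do: [:each |
		(each bounds containsPoint: self eventPoint)
			ifTrue: [each process]]] repeat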

The current 'state' of the entire UI is captured in the Smalltalk execution stack. In this way, when we want to deal with the different states of a button, we write a method that reads events in an endless loop and 'does stuff' based on the next event.

We can do some very nice things here - first of all, we never have to search the entire widget tree to deliver an event. The currently focused widget is the one currently reading events. That means when we want to deal with keyboard input, self is the guy to do it. We can also easily translate mouse coordinates to be self-relative, such that all mouse events that we deal with are always oriented to 0@0.

This makes detecting if the mouse has left us pretty easy: if the coordinate is < 0@0 or >= our bounds extent. We end up with very little code to handle the most basic events.
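Since Smalltalk Points compare componentwise, that test is one expression. A sketch, again with invented names:

View>>containsLocalEventPoint
	"Mouse coordinates are already translated to our own origin,
	so a pair of point comparisons is enough."
	| p |
	p := self eventPoint.
	^(p >= (0@0)) and: [p < self bounds extent]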

Now if we want to do something like the button states, we'd write code like the following:

Button>>process
	[self nextEvent.
	self isLeftMouseDown ifTrue: [self buttonPressed].
	"... do all other states stuff ..."] repeat

Button>>buttonPressed
	artist enableMouseDownDisplay.
	[self nextEvent.
	self containsLocalEventPoint ifFalse: [^self buttonPressedButOutOfBounds].
	self isLeftMouseUp ifTrue: [^self clicked]] repeat

Button>>buttonPressedButOutOfBounds
	artist enableNormalDisplay.
	[self nextEvent.
	self containsLocalEventPoint ifTrue: [^self buttonPressed].
	self isLeftMouseUp ifTrue: [^self]] repeat
 

That's the crux of it. Obviously you can do many more interesting things, but the basics are pretty clear and the methods are fairly obvious too. The beautiful thing about this is that when something goes wrong, you get the Smalltalk execution stack right there in the debugger, explaining where you are and how you got there.

Right now there's only one piece of magic going on behind the scenes: when we read an event off the queue, if we (as in, this view processing events) didn't read the last event, then we get the previous event back. That means if you move the mouse out of a subview, both that subview and you get the event, so that you too can decide to leave your method context. It also means events like 'close' "pop" their way back up the stack naturally.
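One way to implement that re-delivery trick (a sketch under my own naming, not the actual code) is for the sensor to remember the last event and the last view that read it; when a different view asks, it hands the same event out once more before consuming a new one:

Sensor>>nextEventFor: aView
	"If the last event was read by a different (inner) view,
	give it to this view as well before fetching the next one.
	This is how an event pops its way back up the stack."
	lastReader == aView
		ifFalse: [
			lastReader := aView.
			^lastEvent].
	lastEvent := queue next.
	^lastEvent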

I'm going to play with this paradigm some more and see where it takes me. I'm rather liking the smaller code base that this gives you.