We’re trying to implement useful gesture input in the human-computer interface on Ubuntu.
This is not as easy as it sounds. We’ve discussed this with a number of vendors who have tried their hand at this, some of them several times. We’ve examined what other platforms have done. We’ve read countless academic papers on the subject. Something this simple turns out to be surprisingly complex.
One of our constraints is that we want to provide a consistent feel across all applications running on Ubuntu, across all form factors on which Ubuntu runs. Ubuntu is a computer operating system that can run on all kinds of hardware, from phones through tablets and desktops and even big iron servers. I don’t think there will be a lot of call for gestures on big iron, and while I’ve seen Ubuntu on phones (and things like the Gumstix) I don’t think that form factor is officially supported, so we’re focused primarily on netbooks and the desktop.
We have decided that a central gesture primitive recognition system is required in order to provide a consistent feel for gestures across all applications, and to allow certain applications, such as the window manager, to grab an entire class of gesture for meta-uses. We’ve also identified a subset of gestures we’ve dubbed gesture primitives that roughly translate as the basic linear algebra transforms of translate (swipe), scale (pinch/expand), and rotate (rotate), plus a touch-and-hold we’re calling touch. These gestures are built up out of multitouch input events over a time domain through a library known as utouch-grail and passed through what we hope will be a standard interface known as geis.
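To give a feel for what primitive recognition involves, here is a minimal sketch of how the translate, scale, and rotate primitives might be extracted from two consecutive frames of a two-finger touch. This is purely illustrative: the function name and the frame representation are my own inventions, and utouch-grail's actual recognizer works over a richer stream of multitouch events than this.

```python
import math

def primitive_deltas(prev, curr):
    """Given two frames of two touch points each ((x, y) pairs),
    estimate the translate, scale, and rotate deltas between them."""
    # Motion of the centroid approximates the translate (swipe) primitive.
    cx0 = sum(p[0] for p in prev) / len(prev)
    cy0 = sum(p[1] for p in prev) / len(prev)
    cx1 = sum(p[0] for p in curr) / len(curr)
    cy1 = sum(p[1] for p in curr) / len(curr)
    translate = (cx1 - cx0, cy1 - cy0)

    # Change in the distance between touches approximates scale (pinch/expand).
    d0 = math.dist(prev[0], prev[1])
    d1 = math.dist(curr[0], curr[1])
    scale = d1 / d0 if d0 else 1.0

    # Change in the angle of the line between touches approximates rotate.
    a0 = math.atan2(prev[1][1] - prev[0][1], prev[1][0] - prev[0][0])
    a1 = math.atan2(curr[1][1] - curr[0][1], curr[1][0] - curr[0][0])
    rotate = a1 - a0

    return translate, scale, rotate
```

For example, two fingers starting at (0, 0) and (2, 0) and spreading to (-1, 0) and (3, 0) yield no translation, a scale factor of 2.0, and no rotation, which a client would read as an expand gesture.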
The next steps for us will be to implement a programmable gesture recognizer on top of these primitives to produce higher-order gestures such as double-tap and flick, and onwards to a more complex gesture language. In addition, we’re making an effort to provide a gesture interpreter that will translate these gestures into actions in existing (what we call legacy) applications so that everybody can have fun touching their screens.
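Higher-order gestures like double-tap fall out of adding a time constraint on top of the primitives. As a rough sketch, here is one way a double-tap could be recognized from a stream of single-tap timestamps; the 300 ms window and the function name are illustrative assumptions of mine, not values or names from utouch-grail.

```python
# Illustrative threshold only; not the value any real recognizer uses.
DOUBLE_TAP_WINDOW_MS = 300

def find_double_taps(tap_times_ms):
    """Return the timestamps at which a double-tap completes, given a
    sorted list of single-tap timestamps in milliseconds."""
    double_taps = []
    last_tap = None
    for t in tap_times_ms:
        if last_tap is not None and t - last_tap <= DOUBLE_TAP_WINDOW_MS:
            double_taps.append(t)
            last_tap = None  # consume both taps; a third tap starts fresh
        else:
            last_tap = t
    return double_taps
```

A flick would be recognized similarly, but from the velocity of a translate primitive at the moment the touch lifts rather than from tap timing.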
So this is an overview of what we’ve been working on. I’ll go into more detail on each of these topics in future posts.