"In our last episode," (which you can see here) I touched on Global Moxie founder Josh Clark's erroneous notion of workflow interruption to help the user, and how options should open up a device's potential use cases. In this episode, I'll talk about user interfaces in general and what could be done to improve all of them for everyone — not just the people with two free hands.

New Paradigm, Same Problems

Clark calls for a standardized UI paradigm, but that's nothing new: it was done 30 years ago for the GUI, and the result was the Macintosh Human Interface Guidelines (MHIG). The fact that those conventions are still with us is a testament to their quality. The MHIG explains why things should be done in certain ways and what alternatives were considered, and that reasoning is what keeps it perpetually relevant. I believe it should be revisited and referenced for a new set of TouchUI Guidelines (“TUIG?”) that folds in 30 years of learning and evolution, along with the freedom this new UI paradigm offers.

I have issues with some of the current touch conventions, though. I think some could be fixed if Apple, Google, HP, Microsoft and RIM revisited Apple's seminal work and updated all of their UIs. Just think: if they focused on making great interfaces and dropped all the silly lawsuits, maybe the litigation money could be funneled into advancing interfaces instead. (I know: "Fat chance.") The current iOS UI is pretty pathetic in terms of advanced features and shortcuts. I have already complained about having to press buttons (physical or virtual) at least six times to quit an active app. Compared with WebOS's ability to close an app with a quarter-second swipe, that five-second process is a pain in the finger (literally).

Technology Is Not Yet the Great Equalizer

A person using assistive touch who could not speak would need only the ability to press a Bluetooth button to control the whole ball of wax. The OS could even monitor speech, so someone who can only interact through speech could use it as well. It could be a viable option IFF there were a way to guarantee the device could neither cache nor transmit any non-control-related sounds the mic picks up. (I wouldn't want to have to crack open my iPhone and remove the microphone and camera for security. If I called the shots in device design, I would add a hardware switch and an AV shutter with no workaround.)
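
To make that constraint concrete, here is a minimal sketch (in Python, with made-up stand-in functions rather than any real speech API) of what control-only audio handling could look like: raw sound lives in one reusable in-memory buffer, gets matched against a fixed command set on the device, and only the recognized command token ever leaves the loop.

    # Hypothetical sketch of "control-only" audio handling. Every name here
    # (match_command, dispatch, control_loop) is an illustrative stand-in,
    # not a real platform API.
    COMMANDS = {"open", "close", "scroll up", "scroll down", "go back"}

    def match_command(audio: bytes):
        """Stand-in for an on-device recognizer restricted to COMMANDS."""
        text = audio.decode("utf-8", errors="ignore").strip().lower()
        return text if text in COMMANDS else None

    def dispatch(command: str) -> None:
        """Stand-in for the OS acting on a recognized control command."""
        print("executing:", command)

    def control_loop(frames):
        buffer = bytearray(4096)       # one reusable buffer; nothing accumulates
        for frame in frames:
            n = min(len(frame), len(buffer))
            buffer[:n] = frame[:n]     # overwrite in place each cycle
            command = match_command(bytes(buffer[:n]))
            if command:                # only the command token leaves the loop
                dispatch(command)
            # the raw audio is overwritten on the next frame; there is
            # deliberately no file write or network call anywhere in this path

    # Demo, with text standing in for audio frames:
    control_loop([b"hello world", b"scroll down", b"dictate my diary", b"go back"])

The point isn't the code itself; it's that "unable to cache or transmit" is an architectural property you can design for, not just a privacy-policy promise.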

Invisible Interfaces

Eventually we will be able to wear inexpensive little devices (glasses, headbands, necklaces, hair clips, hats, etc.) that pick up brain wave patterns and do our bidding, but until then we need every avenue open and available to truly make technology inclusive for everyone who wants to use it. Speech, touch and sight should all be first-class citizens in every device.