Wearables: Voice is not enough!
tl;dr: Head wearables like Google Glass have promoted Voice as the main UI. Newer ones added hand gestures. What if we could just use our eyes?
Driving back from Tahoe last weekend, I was thinking about head wearables and their UI. Google Glass first came to mind. Voice was the main interface Google promoted: "Ok Google, take a picture". Quick and simple, but not private. Given that we use wearables on the go, we need a UI alternative that is private.
Then I thought of some AR glasses you can buy today, like Meta and Atheer Labs. They add hand gestures to the mix. For example, you can grab and drag virtual windows that appear to be in front of you (à la Iron Man). Hand gestures are cool, but do you want all the attention you will get in public?
Not always.
This lack of privacy was concerning. We enjoy privacy. It is the reason we prefer having our own place or room. Single showers instead of group locker room showers. Sharing on Snapchat instead of Facebook.
I wondered what else was possible. We seem to have exhausted all the UI options. Then I remembered the Magic Leap device I wrote about last week. It's rumored to have eye tracking, which is needed to pull off the illusion of virtual objects appearing in reality. AHA! Why not use that as a user interface?
Our eyes would provide a private interface to our wearable devices, one that wouldn't attract attention from the people around us.
For the rest of the ride, ideas poured in about what such a UI would look like. For writing, there would be a virtual keyboard that appears in front of you. You would trace the words letter by letter using your eyes (similar to Swype keyboards on mobile phones). For taking a photo, you could use shortcut gestures, like quickly shifting your eyes to the right corner and then back to center. Winking could open a menu you can select from: contacts, videos, games…
Writing with your eyes, similar to Swype.
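To make the eye-Swype idea a bit more concrete, here is a minimal dwell-typing sketch in Python. The keyboard layout, dwell threshold, and sample rate are made-up numbers for illustration, not from any real product; a true Swype-style tracer would match the whole gaze path against word shapes instead of dwelling on each key.

```python
import math

# Hypothetical on-screen keyboard: key label -> (x, y) center in millimeters.
# Layout, dwell threshold, and sample rate are assumptions for illustration.
KEYS = {
    "h": (170, 40), "e": (60, 10), "l": (255, 40), "o": (250, 10),
}

DWELL_MS = 300  # how long the gaze must rest on a key before it is "typed"

def nearest_key(gaze_x, gaze_y):
    """Return the key whose center is closest to the current gaze point."""
    return min(KEYS, key=lambda k: math.hypot(KEYS[k][0] - gaze_x,
                                              KEYS[k][1] - gaze_y))

def type_by_dwell(gaze_samples, sample_period_ms=30):
    """Turn a stream of (x, y) gaze samples into typed letters.

    A letter is emitted once the gaze has stayed on the same key for DWELL_MS.
    (Repeated letters, like the double l in "hello", would need an extra rule,
    e.g. a brief look away and back.)
    """
    typed = []
    current, held_ms, emitted = None, 0, False
    for x, y in gaze_samples:
        key = nearest_key(x, y)
        if key == current:
            held_ms += sample_period_ms
        else:
            current, held_ms, emitted = key, 0, False
        if held_ms >= DWELL_MS and not emitted:
            typed.append(current)
            emitted = True
    return "".join(typed)
```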
Back home I did some research on the state of eye-tracking tech and compared the different UI options based on speed and privacy.
I quickly came across a company called Eye Tribe. Their device can detect your on-screen gaze position roughly within the size of a fingertip (<10mm). Take a look at their video demo (below).
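As a rough sanity check on that number: consumer eye trackers typically quote angular accuracy somewhere around 0.5 to 1 degree (an assumed ballpark here, not a figure from Eye Tribe's spec sheet). At a viewing distance of about 60 cm, that maps to an on-screen error of roughly 5 to 10 mm, i.e. about a fingertip:

```python
import math

# Back-of-the-envelope: on-screen error = viewing distance * tan(angular accuracy).
# The 0.5-1 degree accuracy and 60 cm distance are assumptions for illustration.
distance_mm = 600
for accuracy_deg in (0.5, 1.0):
    error_mm = distance_mm * math.tan(math.radians(accuracy_deg))
    print(f"{accuracy_deg} deg at {distance_mm} mm -> ~{error_mm:.1f} mm on screen")
# 0.5 deg -> ~5.2 mm, 1.0 deg -> ~10.5 mm
```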
There are many other companies like this, which shows the technology is available. Head-wearable companies usually have optics and vision experts, so hopefully they can attain similar results.
For the comparisons, I thought of two use cases: 1) writing the message "Hello, how are you today" and 2) taking a picture.
Let’s start with writing.
I actually timed myself writing the message using different UIs:
- Phone: 10 sec (one-handed, regular keyboard)
- Voice: 5 sec (Google speech recognition)
- Laptop: 5 sec
- Eye: 8 sec (using an image of an iPad keyboard, like the figure above)
- Gesture: comparable to eye (Leap Motion writing)
People seem to be more dexterous with their fingers, so writing with gestures (similar to the linked video) might be easier at first. However, you would look like an orchestra conductor. That will attract some curious looks until wearables become mainstream and people are used to gestures.
Eye writing requires more testing to check how comfortable it is. Speed can be improved with a cursor that shows where your gaze is, button highlighting, and prediction suggestions. The obvious benefit of eye writing is privacy. What if you get an intimate message from your significant other? You wouldn't want to say that out loud, would you? That negates the speed advantage of Voice UIs.
Taking a picture.
Gestures are the fastest here: you just make a frame shape with your hands and a picture is taken of whatever is within the finger frame. With the eye UI, 1) you shift your eyes to the right and back to center, which opens the camera app, and 2) a virtual frame appears. Once you do the eye motion again, a picture is taken.
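If it helps to picture it, that shortcut could be detected with a tiny state machine over the horizontal gaze position. Everything below (normalized coordinates, the two thresholds) is an assumption sketched for illustration:

```python
# Minimal sketch of the "glance right, then back to center" shortcut gesture,
# assuming normalized horizontal gaze positions (0 = left edge, 1 = right edge).
# Thresholds and the two-step state machine are illustrative assumptions.

RIGHT_EDGE = 0.9   # gaze past this counts as "in the right corner"
CENTER_BAND = 0.6  # gaze below this counts as "back near center"

def detect_shortcut(gaze_xs):
    """Return True once the gaze has flicked to the right edge and returned."""
    reached_right = False
    for x in gaze_xs:
        if not reached_right and x >= RIGHT_EDGE:
            reached_right = True   # step 1: eyes reached the right corner
        elif reached_right and x <= CENTER_BAND:
            return True            # step 2: eyes back toward center
    return False

# First detection could open the camera app and show the virtual frame;
# a second detection would then trigger the shutter.
camera_open = detect_shortcut([0.5, 0.7, 0.95, 0.92, 0.55, 0.5])
print(camera_open)  # True
```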
In conclusion, I suggest that head wearables incorporate all three UI types (voice, gesture, and eye) within their products. This would let them cover the use cases and scenarios people encounter in their day-to-day lives.
- My previous LinkedIn post: Greener Future with Augmented Reality
- My Medium profile for more: Zak Nasser