Wearables: Voice is not enough!

tl;dr: Head wearables like Google Glass have promoted Voice as the main UI. Newer ones added hand gestures. What if we could just use our eyes?

Driving back from Tahoe last weekend, I was thinking about head wearables and their UI. Google Glass came to mind first. Voice was the main interface Google promoted: “Ok Google, take a picture.” Quick and simple, but not private. Given that we use wearables on the go, we need a UI alternative that is private.

Then I thought of AR glasses you can buy today, like Meta and Atheer Labs. They add hand gestures to the mix. For example, you can grab and drag virtual windows that appear to float in front of you (a la Iron Man). Hand gestures are cool, but do you want all the attention you will get in public?

Not always.

This lack of privacy was concerning. We enjoy privacy. It is the reason we prefer having our own place or room. Single showers instead of group locker room showers. Sharing on Snapchat instead of Facebook.

I wondered what else was possible. We seem to have exhausted all UI options. Then I remembered the Magic Leap device I wrote about last week. It’s rumored to have eye tracking, which is needed to pull off the illusion of virtual objects appearing in reality. Aha! Why not use that as a user interface?

Our eyes would provide a private interface for our wearable devices, one that wouldn’t attract attention from the people around us.

For the rest of the ride, ideas poured in about what such a UI would look like. For writing, there would be a virtual keyboard that appears in front of you. You would trace words letter by letter with your eyes (similar to Swype keyboards on mobile phones). For taking a photo, you could use shortcut gestures, like quickly shifting your eyes to the right corner and then back to center. Winking could open a menu you can select from: contacts, videos, games…

Writing with your eyes, in a similar way to Swype.
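To make the idea concrete, here is a rough sketch of how a traced gaze path over a virtual keyboard could be matched to a word. Everything in it (the key positions, the tiny vocabulary, the scoring rule) is my own toy assumption for illustration, not an existing SDK:

```python
# Toy sketch: decoding a word from an eye-traced path over a virtual keyboard.
# Key positions, vocabulary, and scoring are illustrative assumptions only.

# Rough (x, y) centers for a few QWERTY keys on a virtual keyboard, in pixels.
KEY_CENTERS = {
    "h": (290, 120), "e": (110, 60), "l": (410, 120), "o": (350, 60),
    "w": (70, 60), "r": (150, 60), "d": (130, 120),
}

def nearest_key(point):
    """Return the key whose center is closest to a gaze sample."""
    x, y = point
    return min(KEY_CENTERS, key=lambda k: (KEY_CENTERS[k][0] - x) ** 2 + (KEY_CENTERS[k][1] - y) ** 2)

def trace_to_letters(gaze_path):
    """Collapse a stream of gaze samples into the sequence of keys passed over."""
    letters = []
    for point in gaze_path:
        key = nearest_key(point)
        if not letters or letters[-1] != key:   # drop repeats while dwelling on a key
            letters.append(key)
    return "".join(letters)

def best_word(gaze_path, vocabulary=("hello", "world", "how")):
    """Pick the vocabulary word whose letters best match the traced key sequence."""
    trace = trace_to_letters(gaze_path)

    def score(word):
        # Collapse doubled letters ("hello" -> "helo"), since a trace can't repeat a key.
        target = [c for i, c in enumerate(word) if i == 0 or c != word[i - 1]]
        matched, i = 0, 0
        for ch in trace:
            if i < len(target) and ch == target[i]:
                matched += 1
                i += 1
        # Reward in-order matches, penalize extra or missing letters.
        return 2 * matched / (len(target) + len(trace))

    return max(vocabulary, key=score)

# A noisy path wandering from H through the middle to E, L, and O decodes as "hello".
path = [(285, 118), (230, 100), (112, 62), (260, 95), (408, 122), (405, 119), (352, 58)]
print(best_word(path))  # -> "hello"
```

A real decoder would lean on a language model and handle noise far better, but the core idea is the same: collapse the gaze path into a key sequence and pick the most likely word.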

Back home I did some research on the state of eye-tracking tech and compared the different UI options based on speed and privacy.

I quickly came across a company called Eye Tribe. Their device can detect your on-screen gaze position to roughly within the size of a fingertip (<10 mm). Take a look at their video demo (below).
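To put that figure in perspective: assuming a typical laptop viewing distance of about 60 cm (my assumption, not a number from Eye Tribe), 10 mm of on-screen error works out to roughly one degree of visual angle:

```python
import math

viewing_distance_mm = 600  # assumed laptop viewing distance, not an Eye Tribe figure
error_mm = 10              # on-screen gaze error quoted above

# Visual angle subtended by the error circle, from simple trigonometry.
angle_deg = math.degrees(2 * math.atan((error_mm / 2) / viewing_distance_mm))
print(f"{angle_deg:.2f} degrees of visual angle")  # ~0.95 degrees
```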

There are many other companies like this, which proves the tech is available. Head-wearable companies usually have optics and vision experts, so hopefully they can achieve similar accuracy.

For the comparison, I thought of two different use cases: 1) writing the message “Hello, how are you today” and 2) taking a picture.

Let’s start with writing.

I actually timed myself writing the message using the different UIs:

Phone: 10 sec (one-handed, regular typing)
Voice: 5 sec (Google speech recognition)
Laptop: 5 sec
Eye: 8 sec (using an image of an iPad keyboard, like the figure above)
Gesture: comparable to eye (Leap Motion writing)
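Converted into rough words per minute for the five-word test message (a quick back-of-the-envelope calculation from my own timings above):

```python
# The test message from above: five words.
message = "Hello, how are you today"
words = len(message.split())  # 5

# My rough timings in seconds for each UI.
timings_s = {"phone": 10, "voice": 5, "laptop": 5, "eye": 8}

for ui, seconds in timings_s.items():
    wpm = words / (seconds / 60)
    print(f"{ui:>6}: ~{wpm:.0f} WPM")
# phone: ~30 WPM, voice: ~60 WPM, laptop: ~60 WPM, eye: ~38 WPM
```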

People seem to be more dexterous with their fingers, so writing with gestures, as in the linked video, might be easier at first. However, you would look like an orchestra conductor. That will attract curious looks until wearables become mainstream and people are used to gestures.

Eye writing requires more testing to check comfort. Speed can be improved with a cursor that shows where your gaze is, button highlighting, and prediction suggestions. The obvious benefit of eye writing is privacy. What if you get an intimate message from your significant other? You wouldn’t want to reply to that out loud, would you? That negates the speed advantage of Voice UIs.
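One simple way the gaze cursor and button highlighting could work is dwell selection: highlight whichever key the gaze is over, and “press” it once the gaze has rested there long enough. A minimal sketch, with the dwell threshold and sample rate picked arbitrarily:

```python
DWELL_S = 0.4        # assumed dwell time needed to "press" a key
SAMPLE_RATE_HZ = 30  # assumed gaze sample rate

def dwell_select(gaze_keys):
    """Given the key under the gaze at each sample, yield keys held long enough."""
    needed = int(DWELL_S * SAMPLE_RATE_HZ)  # samples required on the same key
    current, count = None, 0
    for key in gaze_keys:
        if key == current:
            count += 1
        else:
            current, count = key, 1  # highlight the new key; restart the timer
        if count == needed:
            yield current            # key held long enough: it is "pressed"

# One second on "h", a brief glance at "q", then one second on "i":
# the held keys get selected, the brief glance does not.
samples = ["h"] * 30 + ["q"] * 3 + ["i"] * 30
print("".join(dwell_select(samples)))  # -> "hi"
```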

Taking a picture.

Gestures are the fastest here: you just make a frame shape with your hands, and a picture is taken of whatever is within the finger frame. With the eye UI, 1) you shift your eyes to the right and back to center, which opens the camera app, 2) a virtual frame appears, and 3) once you do the eye motion again, a picture is taken.
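Here is a rough sketch of how that flick-right-and-back eye motion could be detected, treated as a tiny state machine over the horizontal gaze position. The normalized coordinates, thresholds, and camera hooks are all assumptions for illustration:

```python
# Hypothetical detector for the "flick right, then back to center" eye gesture.
# Gaze x is normalized to [0, 1] across the field of view; thresholds are assumptions.
RIGHT_EDGE = 0.85
CENTER_BAND = (0.35, 0.65)

def detect_flicks(gaze_xs):
    """Yield an event each time the gaze flicks to the right edge and returns to center."""
    armed = False  # True once the gaze has reached the right edge
    for x in gaze_xs:
        if x >= RIGHT_EDGE:
            armed = True
        elif armed and CENTER_BAND[0] <= x <= CENTER_BAND[1]:
            armed = False
            yield "flick"

camera_open = False
stream = [0.5, 0.9, 0.5, 0.5, 0.88, 0.52]  # two flicks: open camera, then take photo
for _ in detect_flicks(stream):
    if not camera_open:
        camera_open = True
        print("camera app opened, virtual frame shown")  # first flick
    else:
        print("picture taken")                            # second flick
```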

In conclusion, I suggest that head wearables incorporate all three UI types (voice, gestures, and eye tracking) in their products. This would cover the full range of use cases and scenarios people encounter in their day-to-day lives.
