Spatial UX: What Should We Measure?
So far we've seen some fairly rudimentary interactions from the various VR and AR makers, with probably the most well-developed and interesting coming from Microsoft's HoloLens demos. What I'd like to discuss is more in the weeds of UX design: what should we measure in spatial user interfaces and experiences in order to distinguish good design forms and patterns from bad ones?
What Can We Measure?
To know what behavior we should be measuring, we first have to know what data we can measure. So let's make a rough list:
- Head position and rotation
- Hand positions and rotations (+ relative to each other and head)
- Physical center (calculated from head + hands)
- Physical location and rotation change
- Eye focal point (using eye tracking + focal plane tracking*)
- User biometrics (heart rate, blood pressure, etc)
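To make the list concrete, here is a minimal sketch of what one frame of this telemetry might look like. The structure and field names are hypothetical, not from any particular headset SDK, and the "physical center" is the rough head-plus-hands average described above.

```python
from dataclasses import dataclass

# Hypothetical per-frame telemetry sample combining the signals listed above.
# Names and units are illustrative only.
@dataclass
class SpatialSample:
    timestamp: float        # seconds since session start
    head_pos: tuple         # (x, y, z) in meters, world space
    head_rot: tuple         # orientation quaternion (w, x, y, z)
    left_hand_pos: tuple    # (x, y, z) in meters
    right_hand_pos: tuple   # (x, y, z) in meters
    gaze_point: tuple       # eye focal point, if eye tracking is available
    heart_rate: float       # beats per minute, from a biometric sensor

    def physical_center(self):
        """Rough physical center: mean of the head and hand positions."""
        pts = [self.head_pos, self.left_hand_pos, self.right_hand_pos]
        return tuple(sum(axis) / len(pts) for axis in zip(*pts))
```

A real pipeline would log a stream of these samples at the headset's tracking rate and derive everything below from that stream.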
So, now we have to think about the expected actions a user might take in a spatial experience and what the important aspects of those actions are.
What Is The User Looking At?
One of the most useful post-deployment UX tools on the web is the heat map, which tracks the mouse cursor location over time, the dispersion of clicks across the site, or the location of the user's gaze over time. In spatial experiences, I think the order in which the user looks at items, and for how long, is particularly important. By adding a third dimension we inherently add to the visual complexity of what the user sees, so our ability to draw attention to particular elements will be integral to a good user experience. This means our signifiers must be readable, relatable, and actionable.
- What the user is looking at (eye tracking and focal plane tracking)
- How long the user is looking at something
- The path of the user's eyes around the scene (what do they see first, second, etc)
- Whether the user leans in or away from certain content
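The first three items on that list can be derived from a single gaze log. Below is a sketch, assuming we already have eye-tracking samples resolved to a hit-tested target ID per frame (the `(timestamp, target_id)` format is an assumption, not a real SDK output): it collapses the stream into a scan path, i.e. the ordered list of what the user looked at and for how long.

```python
def gaze_scan_path(samples):
    """Collapse a time-sorted stream of (timestamp, target_id) gaze samples
    into a scan path: an ordered list of [target_id, dwell_seconds].
    A target_id of None means the gaze hit nothing of interest."""
    path = []
    for i in range(len(samples) - 1):
        t, target = samples[i]
        dt = samples[i + 1][0] - t  # time until the next sample
        if target is None:
            continue
        if path and path[-1][0] == target:
            path[-1][1] += dt       # still dwelling on the same target
        else:
            path.append([target, dt])  # gaze moved to a new target
    return path
```

From the same path you can read off what was seen first, second, and so on, and sum dwell times per target to build the spatial equivalent of a heat map.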
The good thing is that we can theoretically reconstruct a video from the user's perspective, since we already know their orientation and movements. Watching it back gives us a first-person view of their experience and captures the small reactions and movements that the hard data alone would miss.
Missed Opportunities
Something that happens a lot in spatial interfaces so far is that precise motions are difficult, because most interactions rely on positioning a hand in space, performing a gesture, and then moving the hand again. The problem is that unsupported arms don't actually move with much fidelity; they tend to wobble about a lot.
Complicating this is the imperfect depth perception that comes from current lens technology (this will improve over time as optics get better). So spatial interfaces tend to have huge 'grab' areas, and even then it sometimes takes several tries to get it right. Measuring how many times the user 'misses' in a certain area before succeeding will help us design experiences and menu systems that are easy to navigate. Measuring the user's heart rate and blood pressure would also tell us when they become frustrated; simple monitors could easily be built into the headsets.
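The miss metric is straightforward to compute once grab attempts are logged. A minimal sketch, assuming each attempt is recorded as a hand position and the target's position (both in meters), with success defined as landing inside a spherical grab radius:

```python
import math

def misses_before_success(attempts, target_radius):
    """Count failed grab attempts before the first success on a target.
    `attempts` is a list of (hand_pos, target_pos) 3-tuples in attempt
    order; a grab succeeds when the hand is within `target_radius` (m)
    of the target center."""
    misses = 0
    for hand, target in attempts:
        if math.dist(hand, target) <= target_radius:
            return misses
        misses += 1
    return misses  # the user gave up without ever succeeding
```

Aggregated per UI element, a consistently high miss count is a strong signal that the grab area is too small or its perceived depth is off.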
Achtung!
"Pay attention to meeeeee!" whines an important call to action. Does the user actually do so? Measuring how long it takes the user to notice things that appear (or disappear) tells us how good those signifiers are. In AR particularly, where the real world is busily going on behind and around your content, making things flat or providing a solid color background can draw attention to an object. But how often does it work, and how long does it take to work?
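Time-to-notice falls out of the same gaze log used for scan paths. A sketch, again assuming hit-tested `(timestamp, target_id)` gaze samples (a hypothetical format) and a logged timestamp for when the signifier appeared:

```python
def time_to_notice(appear_time, gaze_samples, target_id):
    """Seconds between a signifier appearing and the user's gaze first
    landing on it. `gaze_samples` is a time-sorted list of
    (timestamp, target_id). Returns None if it was never noticed."""
    for t, target in gaze_samples:
        if t >= appear_time and target == target_id:
            return t - appear_time
    return None  # the signifier failed entirely
```

Comparing this number across signifier treatments (flat vs. volumetric, solid background vs. none) is how "how often does it work, and how long does it take" becomes an A/B test.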
Visceral Reactions
You've created a cool popup subscription call to action that appears in the user's face after 20 seconds on your "site." Do users jump back when it pops up? That's probably a bad thing. Your popup decides to position itself absolutely in the top right corner of the user's vision, approximately 18 inches from his face. Does he shake his head and swat at it to try to be rid of it? Again, a sign you need to rethink that popup, because you're alienating the user on a much more visceral and fundamental level than today's on-screen popups do. You've invaded the user's personal space, and he'll remember that and hate you for it.**
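The "jump back" reaction can be approximated from head-position telemetry alone. A crude sketch that flags sudden backward head motion; the coordinate convention (content in front of the user along +z, so recoil is negative z velocity) and the 0.5 m/s threshold are assumptions to tune, not established values:

```python
def detect_recoil(head_positions, dt, threshold=0.5):
    """Flag frame indices where the head moves backward faster than
    `threshold` m/s -- a crude proxy for a startle/recoil reaction.
    `head_positions` is a list of (x, y, z) in meters sampled every
    `dt` seconds; -z is assumed to be 'away from the content'."""
    recoils = []
    for i in range(1, len(head_positions)):
        vz = (head_positions[i][2] - head_positions[i - 1][2]) / dt
        if vz < -threshold:
            recoils.append(i)
    return recoils
```

Correlating these recoil frames with popup appearance times would tell you directly which elements are startling users.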
Awkward Body Positions
We can see how much the user is turning her neck and body to find our content and UI elements. For productivity, extreme movements like these should be avoided whenever possible. Using rotation and position information we can measure the frequency and duration of these movements. What angle is the user's neck at relative to the vertical axis? We obviously have to be careful there, since she may be lying down, but based on the hand motion we should be able to determine this.
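The neck-angle question can be answered from the head-orientation quaternion most headsets already report. A sketch, assuming a Y-up world and a (w, x, y, z) quaternion convention (both are assumptions; conventions vary between SDKs):

```python
import math

def head_tilt_from_vertical(quat):
    """Angle in degrees between the head's local 'up' vector and the
    world vertical, given a head orientation quaternion (w, x, y, z)
    in a Y-up world. 0 = perfectly upright."""
    w, x, y, z = quat
    # Rotating the local up vector (0, 1, 0) by the quaternion gives a
    # world-space vector whose y component is 1 - 2*(x^2 + z^2); that
    # component is the cosine of the tilt angle.
    up_y = 1.0 - 2.0 * (x * x + z * z)
    up_y = max(-1.0, min(1.0, up_y))  # guard acos against rounding
    return math.degrees(math.acos(up_y))
```

Logging this angle over a session shows how often, and for how long, content forced the user's head away from a comfortable upright posture.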
The Usual Suspects
Of course, all these new considerations do not negate our basic UX principles. We still track the user flow through content, time spent with each piece of content, scroll mapping, user interviews, and so on. This is obviously a partial and initial list that will grow and change as the field expands to more types of data manipulation and content interaction. Soon I'll be writing about specific design patterns and getting into the actual UI parts of content creation in spatial experiences.
Comments and questions always welcome!
Those interested in collaborating on UX and UI experiments may contact me at [email protected]
*Focal plane tracking is entirely possible using current sensors, according to Eyefluence. I don't know anyone else pushing this as a key function of future HMDs, but I'm hoping others will jump onboard!
**My Rule #1 for spatial UI is that within a certain radius of the user's face no content should exist unless brought there through direct and intentional user action, and Rule #2 is that nothing should "stick" to the user that cannot be easily hidden, closed, or pushed away. The user must feel in total control of his personal space. I'll post later with my 10 Spatial UI Commandments.