How do users figure out what to do when faced with unfamiliar objects and environments? Affordances are the characteristics of an object that signal what actions are possible and how they should be done.
Physical objects convey information about their use: a doorknob affords turning; a chair affords sitting. Well-designed virtual objects likewise provide clues about what can be done, what’s happening, and what is about to happen. Grooves in the surface of a virtual ball guide users’ fingers to where they should grip it; buttons can look like they need to be pushed; UI panel elements can look like they can be grabbed and moved.
Users should be able to perceive affordances and instantly understand how to use the items.
The act of touching a virtual object or UI control is referred to as an interaction: picking up an item or pushing a virtual button, for example.
Interactions do not require users to form specific gestures or poses with their hands. Users can pick up objects in whatever way feels natural (Ultraleap’s Interaction Engine makes this easier by recognising contact between fingers and objects or controls). We recommend direct physical manipulation of virtual objects wherever possible, rather than abstract gestures or poses.
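The Interaction Engine itself is a Unity (C#) library, but the core idea behind contact-based grabbing can be illustrated in a few lines of plain Python: treat each fingertip as a small sphere, test for overlap with the object, and consider the object grasped once enough fingertips are in contact. The radii and contact count below are illustrative values, not Ultraleap parameters.

```python
import math

def spheres_overlap(p1, r1, p2, r2):
    """True if two spheres (centre, radius) touch or overlap."""
    return math.dist(p1, p2) <= r1 + r2

def contact_count(fingertips, obj_centre, obj_radius, finger_radius=0.01):
    """Count fingertips in contact with a spherical object (metres)."""
    return sum(spheres_overlap(tip, finger_radius, obj_centre, obj_radius)
               for tip in fingertips)

def is_grasping(fingertips, obj_centre, obj_radius, min_contacts=2):
    """Simple grab heuristic: at least two fingertips touching the object.
    No specific pose is required -- any two contacts count."""
    return contact_count(fingertips, obj_centre, obj_radius) >= min_contacts
```

Because the test is purely geometric, any natural grip that brings two fingertips onto the object registers as a grab, which is exactly why no taught gesture is needed.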
Objects and controls should be well designed with clear affordances. When users intuitively understand what interactions are possible, little or no instruction is needed.
Abstract gestures and poses, by contrast, require users to form a specific hand shape in the air, and in some cases move it: for example, pinching the thumb and forefinger together to activate a VR drawing tool, then drawing in the air.
Abstract gestures and poses should be taught via instructional information each time they’re introduced. They require more cognitive effort than natural interactions, so they should be used sparingly, and ideally only where they enable experiences valuable or engaging enough to feel worth learning.
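A pinch pose like the drawing-tool example above is typically detected from the distance between the thumb tip and index fingertip. The sketch below adds hysteresis (a tighter threshold to start the pinch than to end it) so the pose doesn't flicker on and off at the boundary; the thresholds are illustrative values, not Ultraleap-recommended ones.

```python
import math

class PinchDetector:
    """Thumb-to-index pinch detection with hysteresis.
    start/end thresholds are in metres and are illustrative defaults."""

    def __init__(self, start=0.025, end=0.045):
        self.start = start      # distance below which a pinch begins
        self.end = end          # distance above which a pinch ends
        self.pinching = False

    def update(self, thumb_tip, index_tip):
        """Feed one frame of fingertip positions; returns pinch state."""
        d = math.dist(thumb_tip, index_tip)
        if self.pinching:
            if d > self.end:          # released past the looser threshold
                self.pinching = False
        elif d < self.start:          # closed past the tighter threshold
            self.pinching = True
        return self.pinching
```

A drawing tool would poll `update()` each frame and emit stroke points while it returns `True`.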
The interaction zone is the area in which users’ hands can be tracked, and in which they can therefore interact with virtual objects and controls.
- For the Leap Motion Controller: depth of up to 60 cm (24”) preferred, up to 80 cm (31”) maximum; 140×120° typical field of view. Tracking works in a range of environmental conditions.
- For the Stereo IR 170 camera: depth of between 10 cm (4”) and 75 cm (29.5”) preferred, up to 1 m (39”) maximum; 170×170° typical field of view (160×160° minimum). Tracking works in a range of environmental conditions.
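An application can use these figures to warn users when a hand drifts towards the edge of the tracked volume. The sketch below models the zone as a simple view frustum (camera at the origin, z along the view axis), which is a simplification of the real tracking volume; the defaults are the Leap Motion Controller's preferred 60 cm depth and 140×120° field of view from the list above.

```python
import math

def in_interaction_zone(point, max_depth=0.60, fov_h_deg=140.0, fov_v_deg=120.0):
    """Rough test that a point (x, y, z in metres, camera at the origin,
    z pointing along the camera's view axis) lies inside the tracked volume,
    modelled as a rectangular frustum. Defaults match the Leap Motion
    Controller's preferred range; pass 0.75 / 170 / 170 for the Stereo IR 170."""
    x, y, z = point
    if not 0.0 < z <= max_depth:
        return False                      # behind the camera or too far away
    half_h = math.radians(fov_h_deg / 2)  # horizontal half-angle
    half_v = math.radians(fov_v_deg / 2)  # vertical half-angle
    return abs(x) <= z * math.tan(half_h) and abs(y) <= z * math.tan(half_v)
```

Checking hand positions against this volume each frame lets the application fade in a boundary hint before tracking is actually lost.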
Occlusion refers to instances where the Ultraleap camera does not have a clear view of a hand and momentarily cannot track it properly, for example when one hand blocks the camera’s view of the other. These moments are rare, but they can happen.
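Because these dropouts are momentary, applications usually bridge them rather than reacting instantly, so a briefly occluded hand doesn't drop the object it was holding. A minimal sketch of that idea, assuming a hypothetical per-frame `pose`/`timestamp` feed (the 0.1 s grace period is an illustrative value, not an Ultraleap recommendation):

```python
class TrackedHand:
    """Holds the last known hand pose through brief tracking dropouts
    (e.g. occlusion) instead of reporting the hand lost immediately."""

    def __init__(self, grace_period=0.1):
        self.grace_period = grace_period  # seconds to coast on the last pose
        self.last_pose = None
        self.last_seen = None

    def update(self, pose, timestamp):
        """pose is the tracked pose for this frame, or None if untracked.
        Returns the pose the application should act on, or None if lost."""
        if pose is not None:
            self.last_pose, self.last_seen = pose, timestamp
            return pose
        if self.last_seen is not None and timestamp - self.last_seen <= self.grace_period:
            return self.last_pose   # within the grace period: reuse last pose
        return None                 # tracking genuinely lost
```

When `update()` finally returns `None`, the application can release held objects gracefully rather than at the first missed frame.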