When designing interactions for hand tracking, careful consideration needs to be given to priority, usability, comfort, and reliability – interactions are not universal across all applications.
Manipulating virtual objects with our hands leverages a lifetime of physical experience, minimizing the learning curve for new users.
Ultraleap recommends direct physical manipulation of virtual objects wherever possible, rather than abstract gestures. Users can manipulate objects with a broad range of push, pinch, and grab interactions. Whichever fingers and orientations feel most natural to them will work. So, for example, users can grab and throw an object without needing to learn new behaviour.
Ultraleap’s Interaction Engine provides a quick and robust way to implement these flexible, physical interactions.
The most intuitive interactions are ones where the virtual objects feel physical, and convey their use clearly through affordances, rather than overlaid prompts and help content.
Experience with directly manipulating physical objects in the real world means interactions may require little or no instructional information for people to use them in immersive 3D. This makes for a smoother, more intuitive virtual experience.
Flat UI panels and components can be very useful in XR applications – users can access a lot of features quickly through a main menu, for example. But the 2D menus that suit touchscreens so well need adapting to a more physical, three-dimensional space.
Adding an overall concave curve to panels – particularly larger ones – helps ensure every control can be comfortably reached.
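The concave layout can be understood as placing each control on an arc centred on the user, so every control sits at the same reach distance. Ultraleap's actual toolkit is a Unity module; the Python sketch below only illustrates the geometry, and the function name, arc angle, and radius are illustrative assumptions.

```python
import math

def curved_panel_positions(n_controls, radius_m=0.6, arc_deg=60.0):
    """Hypothetical helper: place controls on a concave arc centred on
    the user, so each control sits at the same comfortable reach distance."""
    positions = []
    step = math.radians(arc_deg) / max(n_controls - 1, 1)
    start = -math.radians(arc_deg) / 2
    for i in range(n_controls):
        angle = start + i * step
        # x: lateral offset from the user's midline, z: forward distance
        x = radius_m * math.sin(angle)
        z = radius_m * math.cos(angle)
        positions.append((round(x, 3), round(z, 3)))
    return positions
```

Because every position lies on the same radius, controls at the edge of a wide panel are no further from the hand than those in the centre.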
We also recommend adding an anchor object that users will quickly learn to grab in order to reposition the panel.
Ultraleap Unity modules provide a toolkit to quickly build curved UI panels and controls suitable for XR.
Unlike natural physical interactions, abstract poses cannot leverage a lifetime of physical experience. They usually require instruction via tutorials or demonstrations. Reliably teaching specific poses is a significant challenge, as each user will tend to perform them with slightly different hand shapes and movements.
If you do decide to include abstract poses in your application, follow these basic guidelines:
- Use effective tutorials that can be accessed by the user at any time.
- Make sure gestures and poses are highly distinctive from one another to avoid unintentional activations.
- Minimise the number of gestures users must learn, and make their purpose consistent throughout.
- Design for gradual introduction of new gestures one by one. Even once taught and mastered, most users will likely only remember a small number.
Gestures and poses work best when used for actions that are valuable to the user. This makes them feel worth learning. For example, in Ultraleap’s Pinch Draw demo users learn to form an abstract pinch gesture. That gesture is consistent with the real-world experience of holding a drawing tool and enables them to draw in mid-air – a unique and engaging VR experience.
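One practical way to tolerate the pose variation mentioned above is hysteresis: use a tighter threshold to start the gesture than to end it, so a slightly loose pinch doesn't flicker on and off. This is a minimal sketch, not Ultraleap's detector – the thresholds and the thumb/index fingertip inputs are assumptions.

```python
class PinchDetector:
    """Toy pinch detector with hysteresis. The engage/release distances
    are illustrative assumptions, not Ultraleap tracking values."""

    def __init__(self, engage_m=0.02, release_m=0.04):
        self.engage_m = engage_m    # must pinch this tightly to start
        self.release_m = release_m  # must open this far to stop
        self.pinching = False

    def update(self, thumb_tip, index_tip):
        # Euclidean distance between the two fingertip positions (metres)
        dist = sum((a - b) ** 2 for a, b in zip(thumb_tip, index_tip)) ** 0.5
        if not self.pinching and dist < self.engage_m:
            self.pinching = True
        elif self.pinching and dist > self.release_m:
            self.pinching = False
        return self.pinching
```

The gap between the two thresholds means a user whose pinch drifts slightly apart mid-stroke keeps drawing, rather than seeing the line cut out.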
In other applications, abstract gestures may be appropriate for enabling locomotion, or bringing up a main menu UI panel.
Using hands is the most natural way to interact with an immersive 3D environment, but the real-world feedback inherent to pressing a button on a controller must be designed back in. Different types of feedback can guide interactions and increase user confidence.
Defining interactions for an application requires careful consideration of the full feature set and how people will use each feature.
Physical interaction should be the first choice wherever possible. Gestures and poses should be used sparingly, for quick access to important or frequently used features.
Other features can be accessed via other means. This could be a wearable virtual menu or UI panel, or contextual switching depending on mode or location. If your application contains lots of features, you will need elements like these in order to provide access to everything.
A powerful way to streamline user experience is to utilize context. You can refine user choices by considering where the user is looking, what elements are available to interact with, and what potential and likely actions might follow. This will focus users toward relevant interactions and remove unnecessary and unwanted clutter.
An example of this is using context to change the content in a wearable virtual menu. In this example, the default menu offers a range of UI panels that can be pulled out into the world. Once users open a shopping interface, the menu changes to offer only options related to shopping, such as product colour customization.
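The context switch described above can be modelled as a simple mapping from application context to visible menu items. The contexts and item names below are invented for illustration; a real application would drive this from its own state.

```python
# Hypothetical context-to-menu mapping; contexts and items are illustrative.
MENU_CONTENT = {
    "default": ["home", "settings", "panels"],
    "shopping": ["product colour", "size", "checkout"],
}

class WearableMenu:
    """Toy model of a wearable menu whose items depend on context."""

    def __init__(self):
        self.context = "default"

    def set_context(self, context):
        # Fall back to the default menu for unknown contexts.
        self.context = context if context in MENU_CONTENT else "default"

    def visible_items(self):
        return MENU_CONTENT[self.context]
```

Keeping the mapping in one place makes it easy to audit how much clutter each context presents, and to guarantee users always fall back to a usable default.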
At all times, consider user comfort when placing objects and UI elements. Design interactions in a way that minimises the need to:
- Perform large and strenuous movements
- Hold hand poses for extended periods of time
- Raise the arm above the shoulder
- Fully extend the arm
Try to ensure that virtual objects and UI elements are positioned within easy reach and in clear view by default. When opening a UI panel users should never have to look around to find it, or stretch to reach it.
Elements that users generally interact with by pushing with their fingers, such as UI panels, should be positioned around 60cm away by default. This should ensure that elements are comfortably in reach for all users.
Virtual objects to be grabbed or pinched should be positioned around 50cm away by default for comfortable reach and grip.
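The two default distances above can be applied when spawning new elements along the user's gaze direction. This is a minimal sketch assuming a unit-length gaze vector and a hypothetical helper name; only the 60cm and 50cm constants come from the guidance itself.

```python
# Default reach distances from the guidance above.
PUSH_DISTANCE_M = 0.60  # UI panels pressed with a finger
GRAB_DISTANCE_M = 0.50  # objects to be grabbed or pinched

def default_spawn_position(head_pos, gaze_forward, interaction="push"):
    """Hypothetical helper: place a new element along the (unit-length)
    gaze direction at the recommended default distance."""
    dist = PUSH_DISTANCE_M if interaction == "push" else GRAB_DISTANCE_M
    return tuple(p + dist * f for p, f in zip(head_pos, gaze_forward))
```

For example, a panel spawned for a user looking straight ahead lands 60cm in front of the head, while a grabbable object lands 10cm closer.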
All distance recommendations from Peebles, L. and Norris, B., 1998. Adultdata: the handbook of adult anthropometric and strength measurements: data for design safety (p. 404). London: Department of Trade and Industry.
Virtual objects and UI components such as buttons and sliders should be sized appropriately – will users engage with one finger, a small pinch with two fingers, or the whole hand?
One-finger targets should be no smaller than 2cm. A UI component controlled with a pinch would require a slightly larger pinchable element.
Make sure there is enough space around each component to perform the expected interaction, and not accidentally trigger other targets.
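A spacing rule like this can be checked automatically at design time. The sketch below flags pairs of circular targets whose edge-to-edge gap falls below a minimum; the 2cm gap value is an assumption chosen to mirror the one-finger target size above, and the function is illustrative, not part of any Ultraleap toolkit.

```python
MIN_TARGET_M = 0.02  # one-finger minimum target size from the guidance above

def targets_overlap_risk(targets, min_gap_m=0.02):
    """Flag pairs of circular targets whose edge-to-edge gap is below
    min_gap_m (an assumed spacing value). targets: list of (x, y, radius)."""
    risky = []
    for i in range(len(targets)):
        for j in range(i + 1, len(targets)):
            (x1, y1, r1), (x2, y2, r2) = targets[i], targets[j]
            centre_dist = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
            # Edge-to-edge gap: centre distance minus both radii
            if centre_dist - (r1 + r2) < min_gap_m:
                risky.append((i, j))
    return risky
```

Running a check like this over a panel layout catches accidental-activation hazards before any user testing.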
Users can be surrounded by objects, controls, and UI panels within a full 170° (within comfortable reach, of course). This creates a large interaction zone with reliable hand tracking throughout.
Occlusion refers to instances where the Ultraleap camera does not have a clear view of a hand, and momentarily cannot track it properly. These moments are rare but they can happen in certain circumstances.
When one hand is directly between the other hand and the camera, this obscures a hand from the camera’s view.
To prevent this, avoid interactions or controls which might prompt users to cross their hands over one another.
At certain angles the arm and wrist can obscure some fingers from the camera’s view.
To prevent this, do not spawn new objects or panels in the dead centre of the field of view, or directly behind users’ hands. Instead, offset them slightly to one side of the user’s gaze. This will ensure the best possible tracking.
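The lateral offset can be expressed as a small rotation of the spawn direction away from the gaze. In this sketch the 15° offset and the function name are assumptions for illustration; only the "slightly to one side" principle comes from the guidance.

```python
import math

def offset_spawn(head_pos, gaze_yaw_rad, distance_m=0.6, offset_deg=15.0):
    """Hypothetical helper: spawn a panel slightly to one side of the gaze
    direction rather than dead centre, so the user's own hand is less
    likely to occlude it. The 15 degree offset is an assumed value."""
    yaw = gaze_yaw_rad + math.radians(offset_deg)
    # Horizontal placement at the given distance; height is kept at eye level
    x = head_pos[0] + distance_m * math.sin(yaw)
    z = head_pos[2] + distance_m * math.cos(yaw)
    return (x, head_pos[1], z)
```

The panel still appears at the recommended reach distance, just rotated out of the straight-ahead line where a raised hand would sit.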
(Figure, side and front perspectives: some fingers may be obscured when the hand is directly in front of the Ultraleap camera.)