Try these menus in your own projects with Ultraleap’s Unity toolkit:
UI Panels in XR are comparable to windows on desktop computers – they behave in a similar way and contain many of the same types of content and controls. However, they can be considerably more versatile and interactive than their 2D counterparts. For example, within a panel, users can try handling a 3D product before they decide to purchase it.
Panels can contain many different types of controls, so it is important not to overload or crowd them, or to place controls too close to one another. Ensure that interactive elements are large enough to select easily, with sufficient space between them to prevent users from accidentally selecting the wrong object.
Think about scaling – a target intended for one-finger interaction, such as a button, should be no smaller than 2cm. This ensures the user can accurately hit the target without accidentally triggering the targets next to it. A component controlled with a two-finger pinch, such as a slider, requires slightly larger scaling and spacing.
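The sizing rules above can be expressed as a simple validation check. The sketch below is illustrative Python, not part of the Ultraleap toolkit: the 2cm one-finger minimum comes from the guidance above, while the 3cm pinch minimum is an assumed value standing in for the "slightly larger" recommendation.

```python
# Illustrative sizing check. The 2 cm poke minimum follows the guidance
# above; the 3 cm pinch minimum is an assumption, not a toolkit figure.
MIN_SIZE_CM = {"poke": 2.0, "pinch": 3.0}

def meets_minimum_size(width_cm: float, height_cm: float, interaction: str) -> bool:
    """Return True if a control is large enough for its interaction type."""
    minimum = MIN_SIZE_CM[interaction]
    return width_cm >= minimum and height_cm >= minimum
```

For example, a 2.5cm square control is comfortably pokeable as a button but too small to serve as a pinch target.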
UI panels often open new objects and panels in the environment. To prevent occlusion issues, avoid spawning these items in the dead centre of the field of view or directly behind the user's hands – instead, offset them slightly to one side of the user's gaze. This will ensure the best possible tracking.
In immersive 3D, curved UI panels let users comfortably reach every control. Panels can be large without becoming cumbersome.
Curved panels are normally difficult to achieve, but Ultraleap's Unity modules provide a toolkit to quickly build curved UI panels and controls suitable for XR.
Try this example yourself:
Users simply push a button – for example on a virtual wearable menu – to open a panel.
The panel should open in a predetermined position in front of the user, where they can clearly see it appear and reach it comfortably. This should generally be 80–90cm away by default.
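Combined with the earlier advice to avoid spawning in the dead centre of view, the placement can be sketched as a small vector calculation. This is illustrative Python, not toolkit code: the 85cm distance sits in the recommended 80–90cm range, while the 15cm lateral offset is an assumed value for the "slightly to one side" guidance.

```python
def panel_spawn_position(head_pos, gaze_dir, right_dir,
                         distance_m=0.85, lateral_offset_m=0.15):
    """Place a panel ~80-90 cm along the gaze, nudged to one side.

    All arguments are (x, y, z) tuples; gaze_dir and right_dir are
    assumed to be unit vectors. The 15 cm lateral offset is an
    illustrative value, not an Ultraleap recommendation.
    """
    return tuple(
        h + g * distance_m + r * lateral_offset_m
        for h, g, r in zip(head_pos, gaze_dir, right_dir)
    )
```

In a real Unity scene the head position, gaze, and right vectors would come from the camera transform; here they are plain tuples so the geometry stands on its own.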
Try this example yourself:
Users can pinch menu items and move them, before releasing to activate features at a chosen spot in the environment.
Rather than appearing as buttons, these draggable items should clearly look like three-dimensional objects that can be “picked up”. They should also be an appropriate size for two fingers to pinch.
This method encourages the user to arrange their XR workspace to suit their specific needs and emphasises the flexibility virtual menus can provide.
Try this example yourself:
Users can grab objects from virtual “pockets”. These can be locked in space or on the user’s waist like a utility belt.
Once grabbed they can be placed in space or thrown, bringing up a UI panel at that position.
As with the “push button” method, they should appear 80–90cm away by default so they can be reached comfortably.
To close an open panel users can grab it and move it back towards the menu it was originally selected from.
The panel shrinks and morphs back into the menu to fill the gap left when it was opened.
Users can also push a “close” button situated on the panel – for example in the top right corner.
Make sure this button is clearly visible and situated a safe distance away from other controls.
Upon grabbing a panel to reposition it, a close or “trash” zone can appear in the environment for the user to drag and drop the panel into.
Ensure the drop zone is easily reachable and situated away from the main interaction area.
There are two main ways of moving UI panels in 3D space – grabbing an anchor object, and grabbing the panel itself. When deciding which to use consider the XR environment, context, and content of the panels.
Each panel features an anchor object that serves as a small “handle” below the panel.
Users move the panel by grabbing the anchor object and releasing to reposition it.
The anchor object should be spherical, appear “grabbable” and be large enough to pick up easily (see affordances).
Anchor objects make moving panels intuitive and easy. They also keep the reposition interaction separate from the other interactions. This significantly reduces the likelihood of users accidentally moving the panel or inadvertently using other controls.
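The anchor mechanic described above amounts to the panel maintaining a fixed offset from its handle. A minimal sketch in Python – the class and its names are hypothetical, not part of the Ultraleap toolkit:

```python
class AnchoredPanel:
    """A panel that follows a small 'handle' anchor below it.

    Positions are (x, y, z) tuples. The 25 cm vertical offset is an
    illustrative assumption; names are not from the Ultraleap toolkit.
    """

    def __init__(self, anchor_pos, panel_offset=(0.0, 0.25, 0.0)):
        self.anchor_pos = anchor_pos
        self.panel_offset = panel_offset  # panel sits above the anchor

    @property
    def panel_pos(self):
        return tuple(a + o for a, o in zip(self.anchor_pos, self.panel_offset))

    def on_anchor_moved(self, new_anchor_pos):
        # Grabbing and releasing the anchor repositions the whole panel;
        # controls on the panel surface are never touched during the move.
        self.anchor_pos = new_anchor_pos
```

Because only the anchor responds to grabs, moving the panel can never be confused with pressing its controls – which is exactly why this method reduces accidental interactions.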
Try these examples yourself:
Users can move a panel by grabbing anywhere on the main surface, then moving the hand and releasing the pose to reposition it.
This method doesn’t use an anchor object. This reduces visual clutter, particularly when dealing with multiple panels in the same workspace.
However, this also means the interaction is not obvious to the user, so it needs to be introduced with a tutorial. There should also be a visual indicator that activates as the hand approaches – for example a change in the visual state of the panel, such as an outline or colour change.
Users may also grab the panel unintentionally when interacting with other elements, particularly if some of those elements are pinch/grab controlled. This method therefore requires careful consideration of which interactions appear on the panel and where they happen.
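The approach-activated indicator described above is essentially a distance-to-state mapping. A minimal Python sketch, assuming illustrative threshold distances rather than toolkit defaults:

```python
import math

def proximity_highlight(hand_pos, panel_pos, near_m=0.25, grab_m=0.05):
    """Map hand distance to a visual state for a grabbable panel.

    Returns 'idle', 'hover' (show the outline/colour change), or
    'grab-ready'. The 25 cm and 5 cm thresholds are illustrative
    assumptions, not Ultraleap defaults.
    """
    d = math.dist(hand_pos, panel_pos)
    if d <= grab_m:
        return "grab-ready"
    if d <= near_m:
        return "hover"
    return "idle"
```

In practice the same state machine can also gate which surface controls are active, helping to separate grabbing the panel from pressing its contents.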
Users can resize a panel by pinching or grabbing with both hands anywhere on its surface.
They move both hands apart to increase panel size, and together to decrease. Releasing the pinch or grab will set the new size.
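The hands-apart/hands-together behaviour maps naturally to scaling by the ratio of current to initial hand separation. A minimal sketch, assuming hypothetical clamping bounds to keep the panel usable:

```python
import math

def resized_scale(start_scale, left_start, right_start, left_now, right_now,
                  min_scale=0.5, max_scale=3.0):
    """New panel scale from two pinching hands moving apart or together.

    scale = start_scale * (current hand separation / initial separation),
    clamped to sensible bounds. The 0.5x-3x bounds are illustrative
    assumptions, not toolkit values. Positions are (x, y, z) tuples.
    """
    initial = math.dist(left_start, right_start)
    current = math.dist(left_now, right_now)
    scale = start_scale * (current / initial)
    return max(min_scale, min(max_scale, scale))
```

On release of the pinch or grab, the last value returned here simply becomes the panel's new fixed size.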
As two hands approach, the panel should show clear visual indicators, such as arrows at the corners, to reinforce that it can be resized.
Two-handed actions like this will not be obvious to new users and should be introduced with a tutorial.
Consider adding dedicated anchor objects to the panel corners for this purpose. This will minimise the chances of accidentally interacting with controls on the panel.