Interaction Engine

The Interaction Engine allows users to work with your XR application by interacting with physical or pseudo-physical objects.

Whether it's a baseball, a block, a virtual trackball, a button on an interface panel, or a hologram with more complex affordances: if there are objects in your application you need your user to be able to touch, grasp, or hover near, the Interaction Engine can do that work for you.

To see what the Interaction Engine can do, take a look at the Interaction Engine examples included with the Unity Plugin, documented further below.

The Basic Components of Interaction

  • “Interaction objects” are GameObjects with an attached InteractionBehaviour. They require a Rigidbody and at least one Collider.

  • The InteractionManager receives FixedUpdate from Unity and handles all the internal logic that makes interactions possible, including updating hand/controller data and interaction object data. You need one of these in your scene for interaction objects to function! A good place for the manager is right underneath your player’s top-level camera rig transform (that is, not the player’s camera itself, but its parent if any).

  • Each InteractionController does all the actual interacting with interaction objects, whether by picking them up, touching them, hitting them, or just being near them. This object could be the user’s hand by way of the InteractionHand component, or an XR Controller (e.g. Oculus Touch or Vive controller) if it uses the InteractionXRController component. Interaction controllers must sit beneath the Interaction Manager in the hierarchy to function.

A basic XR rig with the Interaction Engine.

Interaction objects can live anywhere in your scene, as long as you have an InteractionManager active. Interaction controllers, on the other hand, always need to live underneath the Interaction Manager in order to function. The Interaction Manager should always be a sibling of the camera object, so that controllers don’t inherit strange velocities if the player’s rig is moved around.

Just add InteractionBehaviour!

When you add an InteractionBehaviour component to an object, a couple of things happen automatically:

  • If it didn’t have one before, the object will gain a Rigidbody component with gravity enabled, making it a physically-simulated object governed by Unity’s PhysX engine. If your object doesn’t have a Collider, it will fall through the floor!

  • Assuming you have an Interaction Manager with one or more interaction controllers beneath it, you’ll be able to pick up, poke, and smack the object with your hands or XR controller.

The first example in the Interaction Engine package showcases the default behaviour of a handful of different objects when they first become interaction objects.

Update the Physics Timestep and Gravity

Unity’s physics engine has a “fixed timestep” and that timestep is not always in sync with the graphics frame rate. It is very important that you set the physics timestep to be the same as the rendering frame rate. If you are building for an Oculus or Vive, this means that your physics timestep should be 0.0111111 (corresponding to 90 frames per second). This is configured via Edit -> Project Settings -> Time.

Additionally, we’ve found that setting your gravity to half its real-world scale (-4.905 on the Y axis instead of -9.81) produces a better feeling when working with physical objects. We strongly recommend setting your gravity in this way; you can change it in Edit -> Project Settings -> Physics.
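Both settings can also be applied from a startup script; here is a minimal sketch (the values mirror the recommendations above, and the component name is arbitrary):

```csharp
using UnityEngine;

public class RecommendedPhysicsSettings : MonoBehaviour {

  void Awake() {
    // Match the physics timestep to a 90 Hz headset (~0.0111111 s per step).
    Time.fixedDeltaTime = 1f / 90f;

    // Half real-world gravity often feels better for physical interactions.
    Physics.gravity = new Vector3(0f, -4.905f, 0f);
  }
}
```

Equivalently, set these values directly in Edit -> Project Settings -> Time and Edit -> Project Settings -> Physics.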

Get Your XR Rig Ready

If you don’t already have an Ultraleap enabled XR camera rigged to your scene, you can follow these steps:

  • Drag the Service Provider XR prefab into your scene.

  • Drag the Interaction Manager prefab into your scene.

  • If you wish to move around the scene with hand tracking, ensure the Service Provider and Interaction Manager prefabs are both under a common root object.

We will often refer to this root as a rig.

To create AttachmentHands for use in your scene, you would:

  • Create a new GameObject in your scene

  • Rename it Attachment Hands

  • Drag it into the root object created earlier

  • Add the AttachmentHands script onto the object

Configure InteractionXRControllers for grasping

If you intend to use the Interaction Engine with Oculus Touch or Vive controllers, you’ll need to configure your project’s input settings before you’ll be able to use the controllers to grasp objects. Input settings are project settings that cannot be changed by imported packages, which is why we can’t configure these input settings for you. You can skip this section if you are only interested in using Ultraleap hands with the Interaction Engine.

Go to your Input Manager (Edit -> Project Settings -> Input) and set up the joystick axes you’d like to use for left-hand and right-hand grasps. (Controller triggers are still referred to as ‘joysticks’ in Unity’s parlance.) Next, make sure each InteractionXRController has its grasping axis set to the corresponding axis you set up. The default prefabs for left and right InteractionXRControllers will look for axes named LeftXRTriggerAxis and RightXRTriggerAxis, respectively.
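As a quick sanity check, you can poll the configured axes from a script. This sketch assumes you kept the default axis names from the InteractionXRController prefabs:

```csharp
using UnityEngine;

public class GraspAxisDebug : MonoBehaviour {

  void Update() {
    // Axis names must match the entries you created in the Input Manager.
    float leftGrasp  = Input.GetAxis("LeftXRTriggerAxis");
    float rightGrasp = Input.GetAxis("RightXRTriggerAxis");

    Debug.Log("Grasp axes: L=" + leftGrasp + " R=" + rightGrasp);
  }
}
```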

Helpful diagrams and axis labels can be found in Unity’s documentation.

Check Out The Examples

The examples folder contains a series of example scenes that demonstrate the features of the Interaction Engine.

Many of the examples can be used with hands via Ultraleap Hand Tracking Cameras, or with any XR controller that Unity provides built-in support for, such as Oculus Touch controllers or Vive controllers.

Example 1: Interaction Objects 101

The Interaction Objects example shows the default behavior of interaction objects when they first receive their InteractionBehaviour component.

Reach out with your hands or your XR controller and play around with the objects in front of you to get a sense of how the default physics of interaction objects feels. In particular, you should see that objects don’t jitter or explode, even if you attempt to crush them or pull on the constrained objects in various directions.

On the right side of this scene are floating objects that have been marked kinematic and that have ignoreGrasping and ignoreContact set to true on their InteractionBehaviours. These objects have a simple script attached to them that causes them to glow when hands are nearby, but due to their interaction settings, they will only receive hover information, and cannot be grasped. Note that Rigidbodies collide against these objects even though they have ignoreContact set to true. This setting applies only against interaction controllers, not for arbitrary Rigidbodies. In general, we use Contact to refer specifically to the contact-handling subsystem in the Interaction Engine between interaction controllers (e.g. hands) and interaction objects (e.g. cubes).

Example 2: Basic UI in the Interaction Engine

Interacting with interface elements is a very particular kind of interaction. In VR or AR, we find these interactions make the most sense to users when they are provided physical metaphors and familiar mechanisms. Thus, we've built a small set of fine-tuned InteractionBehaviours (that will continue to grow!) that deal with this extremely common use case: the InteractionButton and the InteractionSlider.

Try manipulating this interface in various ways, including ways that it doesn’t expect to be used. You should find that even clumsy users will be able to push only one button at a time: Fundamentally, user interfaces in the Interaction Engine only allow the ‘primary hovered’ interaction object to be manipulated or triggered at any one time.

This is a soft constraint; primary hover data is exposed through the InteractionBehaviour’s API for any and all interaction objects for which hovering is enabled, and the InteractionButton enforces the constraint by disabling contact when it is not ‘the primary hover’ of an interaction controller.

Example 3: Interaction Callbacks for Handle-type Interfaces

The Interaction Callbacks example features a set of interaction objects that collectively form a basic Transform Tool the user may use at runtime to manipulate the position and rotation of an object. These interaction objects ignore contact, reacting only to grasping controllers and controller proximity through hovering. Instead of allowing themselves to be moved directly by grasping hands, these objects cancel out and report the grasped movement from controllers to their managing TransformTool object, which orchestrates the overall motion of the target object and each handle at the end of every frame.

Example 4: Attaching Interfaces to the User’s Hand

Simple applications may want to attach an interface directly to a user’s hand so that certain important functionalities are always within arm’s reach. This example demonstrates this concept by animating one such interface into view when the user looks at their left palm (or the belly of their XR controller; in the controller case, it may be better to map such a menu to an XR controller button).

Example 5: Building on Interaction Objects with Anchors

The AnchorableBehaviour, Anchor, and AnchorGroup components constitute an optional set of scripts, included with the Interaction Engine, that build on the basic interactivity afforded by interaction objects. This example demonstrates all three of these components. AnchorableBehaviours integrate well with InteractionBehaviour components (they are designed to sit on the same GameObject) and allow an interaction object to be placed in Anchor points that can be defined anywhere in your scene.

Example 6: Dynamic Interfaces with Interaction Objects, AttachmentHands, and Anchors

InteractionButtons and InteractionSliders are useful on their own, but they become truly powerful tools in your UI toolkit when combined with Anchors, and Core utilities like the AttachmentHands and the Tween library to allow the user to carry around entire physical interfaces on their person in XR spaces. This example combines all of these components to demonstrate using the Interaction Engine to build a set of portable XR interfaces.

Example 7: Moving Reference Frames

The Interaction Engine keeps your interfaces working even while the player is being translated and rotated. Make sure your player moves during FixedUpdate, before the Interaction Engine performs its own FixedUpdate. You’ll also need to make sure the Interaction Manager object moves with the player. This is most easily accomplished by placing it beneath the player’s rig Transform, as depicted in our standard rig diagram above.

If you’re not sure that your application is set up correctly for moving reference frame support, this example demonstrates a working configuration that you can reference.

Example 8: Swap Grasp

This example scene demonstrates the use of the InteractionController’s SwapGrasp() method, which allows you to instantly swap an object that the user is holding for another. This is especially useful if you need objects to morph while the user is holding them.
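A minimal sketch of the swap follows. The replacementPrefab field and the graspingController accessor are illustrative assumptions; check your plugin version's API for the exact members:

```csharp
using Leap.Unity.Interaction;
using UnityEngine;

public class MorphOnHold : MonoBehaviour {

  public InteractionBehaviour replacementPrefab; // hypothetical replacement object

  private InteractionBehaviour _intObj;

  void Start() {
    _intObj = GetComponent<InteractionBehaviour>();
  }

  // Call this while the object is held to swap it for the replacement.
  public void Morph() {
    if (_intObj.isGrasped) {
      InteractionBehaviour replacement = Instantiate(replacementPrefab);
      replacement.transform.position = _intObj.transform.position;

      // SwapGrasp transfers the grasp from this object to the replacement.
      _intObj.graspingController.SwapGrasp(replacement);
    }
  }
}
```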

Working with PhysX objects in Unity

Before scripting behaviour with the Interaction Engine, you should know the basics of working with PhysX Rigidbodies in Unity. Most importantly, you should understand Unity’s physics scripting execution order:

  1. FixedUpdate (user physics logic): runs zero or more times per rendered frame, always immediately before a PhysX step

  2. PhysX updates Rigidbodies and resolves collisions: runs zero or more times per rendered frame, always paired with a FixedUpdate

  3. Update (user graphics logic): runs once every rendered frame

Source: this helpful chart from Unity, via the execution order page.

FixedUpdate happens just before the physics engine “PhysX” updates and is where user physics logic goes. This is where you should modify the positions, rotations, velocities, and angular velocities of your Rigidbodies to your liking before the physics engine does physics to them.

FixedUpdate may happen zero or more times per Update. XR applications usually run at 90 frames per second to avoid sickening the user, and Update runs once before the Camera in your scene renders what it sees to the screen or your XR headset. Unity’s physics engine has a “fixed timestep” that is configured via Edit -> Project Settings -> Time. At Ultraleap, we build applications with a fixed timestep of 0.0111111 so that a FixedUpdate runs once per frame, and this is the setting we recommend. Note, however, that FixedUpdate is not guaranteed to fire before every rendered frame: if your time-per-frame is less than your fixed timestep, some frames will render without a FixedUpdate. Conversely, FixedUpdate may happen two or more times before a rendered frame if you spend more than two fixed timesteps’ worth of time on any one render frame (i.e. if you “drop a frame” because you tried to do too much work during one Update or FixedUpdate).

Naturally, because the Interaction Engine deals entirely in physics objects, all interaction object callbacks occur during FixedUpdate. While we’re on the subject of potential gotchas, here are a few more gotchas when working with physics:

The update order (FixedUpdate, PhysX, Update) implies that if you move physics objects via their Rigidbodies during Update rather than during FixedUpdate, the new positions/rotations will not be visible until the next update cycle, after the physics engine manipulates objects’ Transforms via their Rigidbodies.

When you move a PhysX object (Rigidbody) via its Transform (transform.position or transform.rotation) instead of its Rigidbody (rigidbody.position or rigidbody.rotation), you force PhysX to immediately do some heavy recalculations internally. If you do this to a number of physics objects every frame, it can impact your framerate. Generally, we don’t recommend doing this, but we know that sometimes it’s necessary.
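For instance, here is a sketch of moving a Rigidbody during FixedUpdate via its Rigidbody rather than its Transform (targetPosition is a hypothetical destination):

```csharp
using UnityEngine;

public class MoveViaRigidbody : MonoBehaviour {

  public Vector3 targetPosition; // hypothetical destination

  private Rigidbody _rigidbody;

  void Start() {
    _rigidbody = GetComponent<Rigidbody>();
  }

  void FixedUpdate() {
    // Preferred: PhysX integrates this move on its next step.
    _rigidbody.MovePosition(targetPosition);

    // Avoid: transform.position = targetPosition;
    // Moving the Transform directly forces PhysX to re-sync immediately.
  }
}
```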

Custom Layers For Interaction Objects

Have a custom object layer setup? No problem. Interaction objects need to switch between two layers at runtime:

  • The “Interaction” layer, used when the object can collide with your hands.

  • The “No Contact” layer, used when the object can’t collide with your hands. This is the case when the object is grasped, or when ignoreContact is set to true.

On a specific Interaction Behaviour under its Layer Overrides header, check Override Interaction Layer and Override No Contact Layer in its inspector to specify custom layers to use for the object when contact is enabled or disabled (e.g. due to being grasped). These layers must follow collision rules with respect to the contact bone layer, which is the layer that contains the Colliders that make up the bones in Interaction Hands or Interaction Controllers. (The contact bone layer is usually automatically generated, but you can specify a custom layer to use for Interaction Controllers in the Interaction Manager’s inspector). The rules are as follows:

  • The Interaction Layer should have collision enabled with the contact bone layer.

  • The No Contact layer should not have collision enabled with the contact bone layer.

  • (Any collision configuration is allowed for these layers with respect to any other, non-contact-bone layers.)

You can override both or only one of the layers for interaction objects as long as these rules are followed. You can also name these layers anything you want, although we usually put “Interaction” and “No Contact” in the layer names to make their purposes clear.

Custom Behaviours for Interaction Objects

Be sure to take a look at examples 2 through 6 to see how interaction objects can have their behaviour fine-tuned to meet the specific needs of your application. The standard workflow for writing custom scripts for interaction objects goes something like this:

  • Be sure your object has an InteractionBehaviour component (or an InteractionButton or InteractionSlider component, each of which inherit from InteractionBehaviour).

  • Add your custom script to the interaction object and initialize a reference to the InteractionBehaviour component.

using Leap.Unity.Interaction;
using UnityEngine;

public class CustomInteractionScript : MonoBehaviour {

  private InteractionBehaviour _intObj;

  void Start() {
    _intObj = GetComponent<InteractionBehaviour>();
  }
}

Check out the API documentation (or take advantage of IntelliSense!) for the InteractionBehaviour class to get a sense of what behaviour you can control through scripting, or look at the examples below.

Disabling/Enabling Interaction Types at Runtime

Disabling and enabling hover, contact, or grasping at or before runtime is a first-class feature of the Interaction Engine. You have two ways to do this:

Option 1: Using controller interaction types

The InteractionController class provides the enableHovering, enableContact, and enableGrasping properties. Setting any of these properties to false will immediately fire “End” events for the corresponding interaction type and prevent the corresponding interactions from occurring between this controller and any interaction object.

Option 2: Using object interaction overrides

The InteractionBehaviour class provides the ignoreHover, ignoreContact, and ignoreGrasping properties. Setting any of these properties to true will immediately fire “End” events for the corresponding interaction type (for this object only) and prevent the corresponding interactions from occurring between this interaction object and any controller.
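A sketch combining both options (the _intObj and leftHand references are assumed to be cached InteractionBehaviour and InteractionController instances, initialized elsewhere):

```csharp
// Option 1: disable grasping for one controller (e.g. an InteractionHand).
leftHand.enableGrasping = false;

// Option 2: make one object hover-only -- it still receives proximity
// feedback, but is untouchable and ungraspable by every controller.
_intObj.ignoreContact  = true;
_intObj.ignoreGrasping = true;
```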

Constraining an object’s held position and rotation

Option 1: Use PhysX constraints (joints)

The Interaction Engine will obey the constraints you impose on interaction objects whose Rigidbodies you constrain using Joint components. If you grasp a non-kinematic interaction object that has a Joint attached to it, the object will obey the constraints imposed by that joint.

If you add or remove an interaction object’s Joints at runtime and your object is graspable, you should call _intObj.RefreshPositionLockedState() to have the object check whether any attached Joints or Rigidbody state lock the object’s position. Under these circumstances, the object must choose a different grasp orientation solver to give intuitively correct behaviour. See the InteractionBehaviour documentation for details.
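For example, this sketch attaches a HingeJoint to a graspable interaction object at runtime and then refreshes its position-locked state (the component name and world-space anchor are illustrative assumptions):

```csharp
using Leap.Unity.Interaction;
using UnityEngine;

public class HingeOnDemand : MonoBehaviour {

  private InteractionBehaviour _intObj;

  void Start() {
    _intObj = GetComponent<InteractionBehaviour>();
  }

  // Constrain the object to swing about a fixed world-space anchor.
  public void AttachHinge(Vector3 worldAnchor) {
    HingeJoint joint = gameObject.AddComponent<HingeJoint>();
    joint.anchor = transform.InverseTransformPoint(worldAnchor);
    joint.axis = Vector3.up;

    // Let the Interaction Engine re-check whether the object's position is
    // now locked, so it can pick the appropriate grasp orientation solver.
    _intObj.RefreshPositionLockedState();
  }
}
```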

Option 2: Use the OnGraspedMovement callback

When grasped, objects fire their OnGraspedMovement callback right after the Interaction Engine moves them with the grasping controller. That means you can take advantage of this callback to modify the Rigidbody position and/or rotation just before PhysX performs its physics update. Setting up this callback will look something like this:

private InteractionBehaviour _intObj;

private void OnEnable() {
  _intObj = GetComponent<InteractionBehaviour>();

  _intObj.OnGraspedMovement -= onGraspedMovement; // Prevent double-subscription.
  _intObj.OnGraspedMovement += onGraspedMovement;
}

private void OnDisable() {
  _intObj.OnGraspedMovement -= onGraspedMovement;
}

private void onGraspedMovement(Vector3 presolvedPos, Quaternion presolvedRot,
                               Vector3 solvedPos,    Quaternion solvedRot,
                               List<InteractionController> graspingControllers) {
  // Isolate the motion of the object due to grasping along the world X axis.
  Vector3 movementDueToGrasp = solvedPos - presolvedPos;
  float xAxisMovement = movementDueToGrasp.x;

  // Move the object back to its position before the grasp solve this frame,
  // then add just its movement along the world X axis.
  _intObj.rigidbody.position = presolvedPos;
  _intObj.rigidbody.position += Vector3.right * xAxisMovement;
}

Constraining An Interaction Object’s Position and Rotation Generally

The principles explained above for constraining a grasped interaction object’s position and rotation also apply to constraining the interaction object’s position and rotation even when it is not grasped. Of course, Rigidbody Joints will work as expected.

When scripting a custom constraint, however, instead of using the OnGraspedMovement callback, the Interaction Manager provides an OnPostPhysicalUpdate event that fires just after its FixedUpdate, in which it updates interaction controllers and interaction objects. This is a good place to apply your physical constraints.

private InteractionBehaviour _intObj;

void OnEnable() {
  _intObj = GetComponent<InteractionBehaviour>();

  // Prevent double subscription.
  _intObj.manager.OnPostPhysicalUpdate -= applyXAxisWallConstraint;
  _intObj.manager.OnPostPhysicalUpdate += applyXAxisWallConstraint;
}

void OnDisable() {
  _intObj.manager.OnPostPhysicalUpdate -= applyXAxisWallConstraint;
}

private void applyXAxisWallConstraint() {
  // This constraint forces the interaction object to have a positive X coordinate.
  Vector3 objPos = _intObj.rigidbody.position;
  if (objPos.x < 0F) {
    objPos.x = 0F;
    _intObj.rigidbody.position = objPos;

    // Zero out any negative-X velocity when the constraint is applied.
    Vector3 objVel = _intObj.rigidbody.velocity;
    if (objVel.x < 0F) {
      objVel.x = 0F;
      _intObj.rigidbody.velocity = objVel;
    }
  }
}

Applying Forces to an Interaction Object

If your interaction object is not actively being touched by an Interaction Hand or an Interaction XR Controller, you may apply forces to your Rigidbody using the standard API provided by Unity. However, when an object experiences external forces that press it into the user’s controller or the user’s hand, the “soft contact” system provided by the Interaction Engine requires special knowledge of those external forces to properly account for them. In any gameplay-critical circumstances involving forces of this nature, you should use the Forces API provided by interaction objects:
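As a sketch of that API (the method names AddLinearAcceleration and AddAngularAcceleration are assumptions based on recent plugin versions; check your version's InteractionBehaviour reference):

```csharp
// Sketch only: apply accelerations through the interaction object's Forces
// API rather than Rigidbody.AddForce, so the soft contact system can
// account for them.
_intObj.AddLinearAcceleration(Vector3.up * 2f);   // m/s^2

// Spin the object about its Y axis.
_intObj.AddAngularAcceleration(Vector3.up * 1f);  // rad/s^2
```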


These accelerations are ultimately applied using the Rigidbody forces API, but are also recorded by the “soft contact” subsystem, to prevent the object from nudging its way through interaction controllers due to repeated application of these forces.

Interaction Types In-Depth


Hover

Hover functionality in the Interaction Engine consists of two inter-related subsystems, referred to as ‘Hover’ and ‘Primary Hover’ respectively.

Proximity feedback (“Hover”)

Any interaction object within the Hover Activity Radius (defined in your Interaction Manager) around an interaction controller’s hover point will receive the OnHoverBegin, OnHoverStay, and OnHoverEnd callbacks and have its isHovered state set to true, as long as both the hovering controller and the interaction object have their hover settings enabled. Interaction objects provide a public getter for getting the closest hovering interaction controller as well. In general, hover information is useful when scripting visual and audio feedback related to proximity.
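For example, proximity-based glow feedback might look like the following sketch (the OnHoverBegin/OnHoverEnd event names follow the callbacks described above; verify them against your plugin version's API):

```csharp
using Leap.Unity.Interaction;
using UnityEngine;

public class HoverGlow : MonoBehaviour {

  private InteractionBehaviour _intObj;
  private Material _material;

  void Start() {
    _intObj = GetComponent<InteractionBehaviour>();
    _material = GetComponent<Renderer>().material;

    // Tint the object while any interaction controller hovers nearby.
    _intObj.OnHoverBegin += () => _material.color = Color.cyan;
    _intObj.OnHoverEnd   += () => _material.color = Color.white;
  }
}
```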

Primary Hover

Interaction controllers define one or more “primary hover points,” and the closest interaction object (that is currently hovered by an interaction controller) to any of the interaction controller’s primary hover points will become the primarily hovered object of that controller. For example, in InteractionHand’s inspector, you can specify which of the hand’s fingertips you’d like tracked as primary hover points. The primary hover status of an interaction object can be queried at any time using a controller’s primaryHoveredObject property or the object’s isPrimaryHovered property.
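For example (a sketch; _intObj is assumed to be a cached InteractionBehaviour reference):

```csharp
void Update() {
  // Only react when this object is the closest hovered object to one of
  // a controller's primary hover points.
  if (_intObj.isPrimaryHovered) {
    // e.g. brighten the object or show a tooltip here.
  }
}
```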

Fundamentally, primary hover is the feature that turns unreliable interfaces into reliable ones. When only the primary hovered object of a given interaction controller can be depressed or otherwise interacted with by that controller, even the coarsest motions are guaranteed to only ever interact with a single UI element at a time. This is why the button panel in Example 2 (Basic UI) will only depress one button per hand at any given time, even if you clumsily throw your whole hand into the panel. The InteractionButton, InteractionToggle, and InteractionSlider classes all implement this primary-hover-only strategy in order to produce more reliable interfaces.

Because it constrains interactions down to a single controller/object pair, “primary hover” tends to most closely resemble the concept of “hover” in 2D interfaces that use a mouse pointer.


Contact

Contact in the Interaction Engine consists of two subsystems:

  • Contact Bones, which are Rigidbodies with a single Collider and ContactBone component that holds additional contact data for hands and controllers, and

  • Soft Contact, which activates when Contact Bones get dislocated from their target positions and rotations. In other words, when a hand or interaction controller jams itself too far “inside” an interaction object.

Contact Bones

Interaction controller implementations are responsible for constructing and updating a set of GameObjects with Rigidbodies, Colliders, and ContactBone components, referred to as contact bones. The controller is also responsible for defining the “ideal” or target position and rotation for a given contact bone at all times. During the FixedUpdate, the InteractionController base class will set each of its contact bones’ velocities and angular velocities such that the contact bone will reach its ideal position and rotation by the next FixedUpdate. These velocities then propagate through Unity’s physics engine (PhysX) update, and the contact bones may collide against objects in the scene, which will apply forces to them and potentially dislocate them, preventing them from reaching their destination.

Additionally, at the beginning of every FixedUpdate, an interaction controller checks how dislocated each contact bone is from its intended position and rotation. If this dislocation becomes too large, the interaction controller will switch into Soft Contact mode, which effectively disables its contact bones by converting them into trigger colliders.

Soft Contact

Soft Contact is essentially an alternative to the standard physical paradigm in physics engines of treating Rigidbodies as, well, perfectly rigid bodies. Instead, relative positions, rotations, velocities, and angular velocities are calculated as the trigger colliders of contact bones pass through the colliders of interaction objects.

Custom velocities and angular velocities are applied each frame to any interaction objects that are colliding with the bones of an interaction controller in soft contact mode. The controller and object will resist motions deeper into the object but freely allow motions out of the object.

If debug drawing is enabled on your Interaction Manager, you can tell when an interaction controller is in Soft Contact mode because its contact bones (by default) will render as white instead of green.


Grasping

When working with XR controllers, grasping is a pretty basic feature to implement: simply define which button should be used to grab objects, and use the motion of the grasp point to move any grasped object. However, when working with Ultraleap hands, we no longer have the simplicity of dealing in digital buttons. Instead, we’ve implemented a finely-tuned heuristic for detecting when a user has intended to grasp an interaction object. Whether you’re working with XR controllers or hands, the grasping API in the Interaction Engine provides a common interface for constructing logic around grasping, releasing, and throwing.

Grasped Pose and Object Movement

When an interaction controller picks up an object, the default implementation of all interaction controllers assumes that the intended behaviour is for the object to follow the grasp point. Grasp points are explicitly defined for InteractionXRControllers (as Transforms) and are implicit for Interaction Hands (depending on how the hand’s fingers grasp the object), but the resulting behaviour is the same in either case.

While grasped, interaction objects are moved under one of two mutually-exclusive modes: Kinematic or Nonkinematic. By default, kinematic interaction objects will move kinematically when grasped, and nonkinematic interaction objects will move nonkinematically when grasped.

When moving kinematically, an interaction object’s rigidbody position and rotation are set explicitly, effectively teleporting the object to the new position and rotation. This allows the grasped object to clip through colliders it otherwise would not be able to penetrate. Nonkinematic grasping motions, however, cause an interaction object to instead receive a velocity and angular velocity that will move it to its new target position and rotation on the next physics engine update, which allows the object to collide against objects in the scene before reaching its target grasped position.

When an object is moved because it is being grasped by a moving controller, the OnGraspedMovement callback fires right after the object is moved; subscribe to it if you wish to modify how the object moves while it is grasped. Alternatively, you can disable the moveObjectWhenGrasped setting on interaction objects to prevent their grasped motion entirely (in which case the callback will no longer fire).


Throwing

When a grasped object is released, its velocity and angular velocity are controlled by an object whose class implements the IThrowHandler interface. IThrowHandlers receive updates every frame during a grab so that they can accumulate velocity and angular velocity data about the object. Usually, only the latest few frames of data are necessary. When the object is finally released, they get an OnThrow call, which in the default implementation (SlidingWindowThrow) sets the velocity of the object based on a recent historical average of the object’s velocity while grasped. In practice, this results in greater accuracy in users’ throws.

If you’d like to create a different implementation of a throw, you can implement a new IThrowHandler and set the public throwHandler property on any interaction object to change how it behaves when it is thrown.
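As a sketch, swapping in a custom handler amounts to assigning the property. The IThrowHandler method signatures shown here are assumptions for illustration; match them to the interface in your plugin version:

```csharp
using Leap.Unity.Interaction;
using UnityEngine;

public class UseCustomThrow : MonoBehaviour {

  void Start() {
    InteractionBehaviour intObj = GetComponent<InteractionBehaviour>();

    // Replace the default SlidingWindowThrow with a custom IThrowHandler.
    intObj.throwHandler = new NoThrow(); // hypothetical handler defined below
  }
}

// Hypothetical handler: objects simply drop with zero velocity on release.
// (Method signatures are assumptions; check the IThrowHandler interface.)
public class NoThrow : IThrowHandler {

  public void OnHold(InteractionBehaviour intObj,
                     ReadonlyList<InteractionController> controllers) { }

  public void OnThrow(InteractionBehaviour intObj,
                      InteractionController controller) {
    intObj.rigidbody.velocity = Vector3.zero;
    intObj.rigidbody.angularVelocity = Vector3.zero;
  }
}
```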