The simplest thing you can do is bind the movement of a scene object to a face tracker, so that the object moves with the face. To do this, insert a face tracker patch and then go to your object's properties and click the dot next to Rotation. You should now see a face tracker and object patch in the Patch Editor.
Click the Rotation port on the face tracker patch and drag to the port on the object patch. Once the patches are connected, click Run to see it in action. Whenever your head moves, your object should move with it.
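Connecting the two Rotation ports amounts to copying the tracker's rotation into the object's rotation every frame. Here's a minimal plain-JavaScript sketch of that data flow (the `faceTracker` and `object` shapes are hypothetical stand-ins, not the Spark AR API):

```javascript
// Hypothetical stand-ins for the face tracker and the scene object.
const faceTracker = { rotation: { x: 0, y: 0, z: 0 } };
const object = { rotation: { x: 0, y: 0, z: 0 } };

// Connecting the Rotation ports means the object's rotation
// simply mirrors the tracker's rotation on every frame.
function updateFrame() {
  object.rotation = { ...faceTracker.rotation };
}

// Simulate the head turning 30 degrees around the Y axis.
faceTracker.rotation = { x: 0, y: 30, z: 0 };
updateFrame();
console.log(object.rotation.y); // 30
```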
You can set an animation to begin and end when triggered by specific actions on a face. In the example below, the object is set to appear if someone opens their mouth. Then, if they lean their head to the right, the object changes position. If they close their mouth, the object disappears.
By connecting the Face Tracker, Head Rotation and Mouth Open patches via the Tracking Data ports, we're telling Spark AR Studio to use the information from the face tracker to look for an open mouth or a leaned head.
Both Mouth Open and Head Rotation output boolean signals, which means each one can either be happening or not happening. Patches like Mouth Openness output scalar signals instead, which carry a continuous range of values.
If you want to use a boolean signal to start an animation, you'll need to use the Pulse patch to transform the signal into a discrete event.
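Conceptually, the Pulse patch fires once on the rising edge of a boolean signal. The logic can be sketched in plain JavaScript (this is the idea behind the patch, not the Spark AR API):

```javascript
// Turn a boolean signal into a discrete event: fire only when
// the signal goes from false to true (a rising edge).
function makePulse(onPulse) {
  let previous = false;
  return function update(signal) {
    if (signal && !previous) onPulse(); // rising edge: emit one pulse
    previous = signal;
  };
}

let pulses = 0;
const update = makePulse(() => pulses++);

// While the mouth stays open, only the first frame triggers a pulse.
update(false); // mouth closed
update(true);  // mouth opens -> pulse fires
update(true);  // still open -> no new pulse
update(false); // mouth closes
update(true);  // opens again -> second pulse
console.log(pulses); // 2
```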
You can use logic to make your animation react to a specific set of conditions. For example, you can create a patch graph that makes a ball drop from top to bottom if the head is tilted in either direction. To do this, use the Or patch to indicate that the animation should occur if either action happens.
Here, the Or patch is placed after Head Rotation so that it can take both the inputs from the face tracker and trigger the movement if the head is leaned left or right.
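In plain code, the Or patch is just a logical OR over the two lean signals coming out of Head Rotation. A sketch of the graph's logic, with hypothetical function names and an assumed tilt threshold:

```javascript
// Hypothetical boolean outputs from the Head Rotation patch.
// The 15-degree threshold is an assumption for illustration.
function headSignals(angleDegrees) {
  return {
    leanedLeft: angleDegrees < -15,
    leanedRight: angleDegrees > 15,
  };
}

// The Or patch triggers the animation if either input is true.
function shouldDropBall(angleDegrees) {
  const { leanedLeft, leanedRight } = headSignals(angleDegrees);
  return leanedLeft || leanedRight;
}

console.log(shouldDropBall(-20)); // true  (leaned left)
console.log(shouldDropBall(20));  // true  (leaned right)
console.log(shouldDropBall(0));   // false (head upright)
```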
In the Basics of Scripting tutorial, we explain how to make pizza fly into your mouth when you open it. You can use that example as a guide to recreate the effect with visual programming.
You can use interactions such as a tap to make your effect respond to specific actions on the screen. In this example, we've used the Object Tap patch to make an object change position when someone taps directly on it. You can also use Screen Tap, Pan, Pinch or Rotate to trigger or control interactivity in your effect.
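The tap interaction boils down to toggling the object's position each time the tap event fires. A minimal sketch of that logic in plain JavaScript (the positions and names are hypothetical, not the patch API):

```javascript
// Two hypothetical positions the object can jump between.
const positions = [
  { x: 0, y: 0 },
  { x: 0.2, y: 0.1 },
];

let current = 0;
const object = { position: positions[0] };

// The Object Tap patch fires this each time the object is tapped.
function onObjectTap() {
  current = (current + 1) % positions.length;
  object.position = positions[current];
}

onObjectTap();
console.log(object.position); // { x: 0.2, y: 0.1 }
onObjectTap();                // tapping again toggles it back
console.log(object.position); // { x: 0, y: 0 }
```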
You can also use face gestures to control aspects of your effect. Here, we've used Smile to trigger different hats to appear, but you could also use Blink, Eyebrows Lowered, Eyebrows Raised, Right Eye Closed or Left Eye Closed to do something similar. These patches must be connected to a Face Tracker patch to work properly.
We've also used Counter above to control when each hat appears. Counter allows you to track inputs, in this case smiles, and count them. We've set the maximum count to 3 here, which corresponds to the three hat options we've added to the effect. Each hat is matched to a count number from 1 to 3, which determines whether it's visible in the scene.
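The Counter logic can be sketched as a wrapping count of smile pulses, with each hat visible only at its matching count. This is plain JavaScript modeling the behavior described above (assuming the count runs from 1 to 3), not the Spark AR API:

```javascript
// Counter with a maximum count of 3: each smile pulse advances
// the count, wrapping back to 1 after the maximum.
function makeCounter(maxCount) {
  let count = 0;
  return {
    increase: () => { count = (count % maxCount) + 1; },
    get: () => count,
  };
}

const counter = makeCounter(3);

// Each hat is visible only while the count matches its number.
const hatVisible = (hatNumber) => counter.get() === hatNumber;

counter.increase(); // first smile
console.log([hatVisible(1), hatVisible(2), hatVisible(3)]); // [ true, false, false ]
counter.increase(); // second smile
counter.increase(); // third smile
counter.increase(); // fourth smile wraps back to hat 1
console.log(counter.get()); // 1
```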
You can use screen interactions such as Screen Tap or Screen Long Press to not only control when things happen in your effect, but also to control how they happen. In this example, we're using Screen Long Press to control where an object is located on the screen.
To do this, we used 2D Point Unpack to break the location of a screen press into its individual X and Y coordinates. We then used Multiply to change those coordinates by a set value, though any math patch could make this change. To bring those coordinates back together, we used Point Pack. In this case, the object whose location we're controlling is 3D, so we used Point Pack instead of 2D Point Pack.
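The unpack/multiply/pack chain is a small coordinate pipeline. Here's a plain-JavaScript sketch of what the graph computes; the 0.5 scale factor is an arbitrary stand-in for whatever value you'd feed the Multiply patch:

```javascript
// 2D Point Unpack: split a screen point into X and Y.
function unpack2D(point) {
  return [point.x, point.y];
}

// Multiply: scale each coordinate by a set value
// (any math operation could stand in here).
const scale = 0.5; // arbitrary example value

// Point Pack: rebuild a 3D point, since the object being moved is 3D.
function pack3D(x, y, z) {
  return { x, y, z };
}

function pressToObjectPosition(pressPoint) {
  const [x, y] = unpack2D(pressPoint);
  return pack3D(x * scale, y * scale, 0); // z held at 0 for illustration
}

console.log(pressToObjectPosition({ x: 200, y: 400 })); // { x: 100, y: 200, z: 0 }
```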
The Runtime patch tracks the number of seconds that have passed since your effect started to run. One way you can use Runtime is to control how long something appears on the screen.
Here, Runtime connected to Offset tells the effect to check how long it has been running since the last reset and compare that against the value we define. We've used Less Than to set that value to 3 seconds, which means the text will only be visible while the runtime is less than 3 seconds.
For this example, we've also used Screen Tap to reset the timer, so that the text reappears when someone taps the screen. After 3 seconds, it will disappear again.
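The timing logic can be sketched as: track the runtime, subtract the time of the last reset (the Offset), and show the text while the result is less than 3. A plain-JavaScript model of that behavior, with hypothetical names:

```javascript
const LIMIT = 3; // seconds the text stays visible

let lastReset = 0; // Offset: the runtime value at the last reset

// Screen Tap resets the offset so the countdown starts over.
function onScreenTap(runtime) {
  lastReset = runtime;
}

// Less Than: the text is visible while the offset runtime is under 3.
function textVisible(runtime) {
  return (runtime - lastReset) < LIMIT;
}

console.log(textVisible(1)); // true  (1s in, still under 3s)
console.log(textVisible(4)); // false (more than 3s have passed)
onScreenTap(5);              // a tap at 5s resets the timer
console.log(textVisible(6)); // true  (only 1s since the tap)
console.log(textVisible(9)); // false (4s since the tap)
```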
In the example patch graph below, we've used all the facial gesture recognition patches in one effect. Each facial gesture triggers a different plane to become visible, creating an effect that can cycle through interactions that are tied to specific facial gestures.