Scripting Guide
Pictarize supports custom JavaScript for interactive AR effects. You can open the scripting panel at the bottom left of your scene. This is an advanced feature for developers.
Let's start with some basic concepts. Each target image corresponds to one AR scene. You can attach custom scripts to each of these individual targets (i.e. scenes) to control their behaviour and make them more interactive. Unlike traditional (game) programming, where you need to start a program and create a running loop, Pictarize has already created the main program for you, and that main program fires your custom functions during the lifecycle of the scene. Inside your custom functions, you can modify the properties (e.g. position) of the contents. After your code finishes executing, control passes back to the Pictarize main program.
Scene Life Cycle
function onInit( {target, data} ) {
  // called when the scene is loaded (but the target image is not yet detected)
  // you can do some preparation work here
  // this event fires only ONCE
}
function onActivate( {target, data} ) {
  // called when the target image is detected and the effect is about to start
  // this event fires every time the target image is detected (e.g. after tracking is lost)
}
function onDeactivate( {target, data} ) {
  // called when tracking of the target image is lost and the effect is about to end
}
function onUpdate( {target, data, time, deltaTime} ) {
  // called on every frame
}
function onClick( {target, data, object, time} ) {
  // called when any object in the scene is clicked (i.e. tapped)
}
Most of the time, you will want your code to manipulate the content objects (e.g. 3D models, videos, audio, etc.). The first thing you will likely want to do is get hold of an individual content object. You do that with the target.getObject() method. This method takes a single parameter: the name of the content, which is the name you specified in the Targets Panel.
function onInit( {target} ) {
  const object = target.getObject("object-name"); // "object-name" is the name you gave the content
}
Content Object
A content object allows you to read and modify the properties of the underlying content. Content objects have the following properties and methods:
// Basic properties
object.position // get the 3D coordinates of the content. It's a dictionary of {x, y, z}
object.setPosition(x: Number, y: Number, z: Number) // set the position of the content
// e.g. to move the object by 10 units along the x axis, you can write:
// object.setPosition(object.position.x+10, object.position.y, object.position.z)
object.rotation // get the 3D rotation of the content. It's a dictionary of {x, y, z}
object.setRotation(x: Number, y: Number, z: Number) // set the rotation of the content
object.scale // get the 3D scale of the content. It's a dictionary of {x, y, z}
object.setScale(x: Number, y: Number, z: Number) // set the scale of the content
object.visible // get the visibility of the object
object.setVisible(Boolean) // set the visibility of the object
object.name // the name of the object
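For instance, here is a minimal sketch (using a hypothetical content named "my-model") that recentres the model and hides it each time the scene is activated:
function onActivate( {target, data} ) {
  const model = target.getObject("my-model"); // "my-model" is a hypothetical content name
  model.setPosition(0, 0, 0);                 // assumes (0, 0, 0) is the default position on the target image
  model.setVisible(false);                    // keep it hidden until something else reveals it
}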
Depending on the content type, there may be extra properties and methods.
3D Models
If your 3D models have built-in animations, custom scripts allow you to control how they are played. There is a getAction() method to retrieve the underlying animation action. The return type is an AnimationAction from THREE.js (see the THREE.js documentation for the full API). It's possible that multiple animations are attached to a 3D model, so you need to pass in an index parameter.
const action = object.getAction(0); // get the first action
// with the action, you can do many things, for example:
action.play(); // start animation
action.reset(); // reset animation
action.paused = true; // pause animation
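As a sketch of how this fits into the scene lifecycle (again using a hypothetical content named "my-model" that has at least one built-in animation), you could restart the animation whenever the target is detected and pause it when tracking is lost:
function onActivate( {target, data} ) {
  const action = target.getObject("my-model").getAction(0); // first animation of a hypothetical content
  action.reset();        // rewind to the first frame
  action.paused = false; // clear any pause set in onDeactivate
  action.play();         // start playing
}
function onDeactivate( {target, data} ) {
  const action = target.getObject("my-model").getAction(0);
  action.paused = true;  // freeze the animation while the target is out of view
}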
Uploaded Videos
For uploaded videos (different from embedded YouTube/Vimeo videos), you can get the underlying video object using getVideo(). The return type is an HTML video element, so you can do anything the HTML video API supports.
const video = object.getVideo();
// with the video, you can do many things, for example:
video.play(); // play video
video.pause(); // pause video
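For example, here is a minimal sketch (using a hypothetical video content named "my-video") that restarts the video whenever the target is detected and pauses it when tracking is lost:
function onActivate( {target, data} ) {
  const video = target.getObject("my-video").getVideo(); // "my-video" is a hypothetical content name
  video.currentTime = 0; // standard HTML video property: rewind to the beginning
  video.play();
}
function onDeactivate( {target, data} ) {
  const video = target.getObject("my-video").getVideo();
  video.pause();
}
Note that browsers may reject play() with sound until the user has interacted with the page, so in some cases you may prefer to trigger playback from onClick instead.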
Uploaded Audio
For uploaded audio, you can get the underlying audio object using getAudio(). The return type is an HTML audio element, so you can do anything the HTML audio API supports.
const audio = object.getAudio();
// with the audio, you can do many things, for example:
audio.play(); // play audio
audio.pause(); // pause audio
Embedded YouTube/Vimeo Videos
For embedded videos, we currently support three methods to control playback. Note that these are different from uploaded videos.
object.playVideo(); // play video
object.pauseVideo(); // pause video
const isPlaying = object.isPlayingVideo(); // return a boolean indicating whether video is playing
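As a usage sketch (with hypothetical content names "play-button" and "my-video"), you could toggle playback of an embedded video when a button content is tapped:
function onClick( {target, data, object} ) {
  if (object.name === "play-button") {          // "play-button" is a hypothetical content name
    const video = target.getObject("my-video"); // "my-video" is a hypothetical content name
    if (video.isPlayingVideo()) {
      video.pauseVideo();
    } else {
      video.playVideo();
    }
  }
}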
onClick
The onClick function receives an object input parameter: the content object that was clicked. Most of the time, you will want to know which content it is by checking its name. The example below shows how you can make a button (a "button" is just a content: it could be an image, text, or anything else) and trigger a 3D model's animation when the button is clicked.
function onClick({target, data, object}) {
  if (object.name === 'my-button') {
    const model = target.getObject('my-model');
    const action = model.getAction(0);
    action.play();
  }
}
onUpdate
Two additional inputs, time and deltaTime, are present in the onUpdate call. time is the elapsed time (in seconds) since the scene was activated (i.e. since onActivate was called). deltaTime is the elapsed time since the last onUpdate call. They are very useful if you want to animate contents programmatically (e.g. create transitional effects like fade in / fade out).
// expand the model from scale 0 to 10, at a speed of 2 units per second
function onUpdate({target, data, time, deltaTime}) {
  const model = target.getObject('my-model');
  const speed = 2;
  const newScale = Math.min(10, time * speed);
  model.setScale(newScale, newScale, newScale);
}
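deltaTime is handy for frame-rate-independent animations that should keep running indefinitely. Here is a minimal sketch (using a hypothetical content named "my-model", and assuming rotation uses the same units returned by object.rotation) that spins the model around its y axis:
// spin the model around its y axis at `speed` rotation units per second
function onUpdate({target, data, time, deltaTime}) {
  const model = target.getObject('my-model'); // "my-model" is a hypothetical content name
  const speed = 1;                            // rotation units per second (assumed; not specified by Pictarize)
  const rotation = model.rotation;
  model.setRotation(rotation.x, rotation.y + speed * deltaTime, rotation.z);
}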
data
Finally, there is also a data input for all the event functions. This is a storage object for keeping custom data across the lifecycle of the application. You can assign any custom data to it, even custom functions. In the example below, we hide a 3D model at the beginning and make it appear after the user has clicked five times.
// create a custom function - an effect that reveals the 3D model
function onInit({target, data}) {
  const model = target.getObject('my-model');
  data.myEffect = () => {
    model.setVisible(true);
  }
}
// whenever the scene is activated, hide the 3D model and reset the counter
function onActivate({target, data}) {
  const model = target.getObject('my-model');
  model.setVisible(false);
  data.clickCount = 0;
}
// count user clicks, then trigger the custom effect on the fifth click
function onClick({target, data, object}) {
  data.clickCount += 1;
  if (data.clickCount === 5) {
    data.myEffect();
  }
}
Conclusion
There is a simulator right inside the editor. You can easily test your effects by running the simulator instead of building the project every time you make a change. Once you are satisfied, you can proceed to build the project and test it on real devices.