Over the last few days I’ve been working to improve support for touch devices in Dasher 360: the primary focus is on touch-enabled TVs and monitors, but I’ve been doing much of the testing for this on my mobile phone. Which means things are steadily getting better for people wanting to use phones and tablets to access the site, too.
It turns out – and this may not come as a surprise to many of you – that the default pinch behaviour in the Forge viewer is to do an unconstrained orbit. No zoom, just the orbit mode that makes the least sense for building models. So the experience inside Dasher 360 was frankly really bad.
The good news is that it’s actually pretty easy to override the default behaviour and add custom touch-gesture support inside your Forge viewer applications. The key is to create your own object that implements Autodesk.Viewing.ToolInterface – and, in particular, its handleGesture() function. This allows you to intercept a number of operations: in our case we want to override the behaviour of the “pinch” and “rotate” gestures, converting each to a zoom, but you can also modify the “drag” operation, if you wish.
Here’s some TypeScript code that implements this function in a way that makes sense for us. I haven’t included the rest of the class, as it doesn’t really contribute anything – although you’ll find a rough skeleton of it after the listing, for context.
handleGesture(event: any): boolean {
    // We don't care about drag gestures - leave these to another handler
    if (event.type.indexOf('drag') === 0) {
        return false;
    } else if (event.type.indexOf('pinch') === 0 || event.type.indexOf('rotate') === 0) {
        // Pinch or rotate gestures should result in a zoom (with no rotation)
        const target = this._viewer.navigation.getTarget();
        if (event.type === 'pinchstart' || event.type === 'rotatestart') {
            // When the gesture starts, store the view direction and the
            // initial distance between the camera and the target
            const position = this._viewer.navigation.getPosition();
            this._direction = new THREE.Vector3();
            this._direction.subVectors(position, target);
            this._dist = this._direction.length();
        }
        // Then normalize the direction vector and scale it by the initial
        // distance divided by the gesture's scale factor: adding the result
        // to the target gives the new camera position
        this._direction.normalize();
        this._direction.multiplyScalar(this._dist / event.scale);
        const newPos = target.add(this._direction);
        this._viewer.navigation.setPosition(newPos);
        return true;
    }
    return false;
}
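For context, here’s a minimal sketch of how the rest of such a class might look. To be clear, the class name, the tool name and the empty activate()/deactivate() bodies are my own illustrative choices rather than our actual code, and the exact set of members the ToolController expects may vary between viewer versions:

class ZoomGestureTool {
    // The viewer this tool operates on, plus the state captured at the
    // start of each pinch/rotate gesture
    private _viewer: Autodesk.Viewing.GuiViewer3D;
    private _direction: THREE.Vector3;
    private _dist: number;

    constructor(viewer: Autodesk.Viewing.GuiViewer3D) {
        this._viewer = viewer;
    }

    // The ToolController identifies tools by the name(s) they report
    getNames(): string[] {
        return ['zoom-gesture'];
    }

    getName(): string {
        return this.getNames()[0];
    }

    // Nothing to set up or tear down when the tool is (de)activated
    activate(name: string): void { }
    deactivate(name: string): void { }

    // ... plus the handleGesture() implementation shown above
}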
Adding the “tool” is easy: you call viewer.toolController.registerTool(tool) and then viewer.toolController.activateTool(tool.getName()). The last-added tools get called first – before the built-in ones – allowing custom functions to return true and essentially stop events from propagating to other tools. If you return false then other tools will be called – it’s really up to you to decide the appropriate behaviour for your application.
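Assuming the hypothetical ZoomGestureTool class sketched above and an initialized viewer, that wiring might look like this:

// Register the tool with the viewer's ToolController and activate it,
// placing it at the top of the tool stack so it sees gestures first
const tool = new ZoomGestureTool(viewer);
viewer.toolController.registerTool(tool);
viewer.toolController.activateTool(tool.getName());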
To see how things work without – and then with – this custom gesture interception, watch this video:
As you can hopefully see, this makes a big difference for touch-based usability.