In a recent post we saw that switching to use THREE.BufferGeometry brought some unexpected benefits when it came to rendering robots inside Dasher. I wasn’t very happy about the fact that said robots were spinning destructively on the MX3D bridge, so I started looking into options for collision detection inside a Forge viewer application.
From the start I should say that the approach I ended up choosing is fairly rudimentary: a much better solution would be to integrate a physics engine such as Ammo.js, or perhaps even a voxelization engine such as VASA. But I figured that sharing this simpler technique might still be of benefit in certain situations.
The basic approach is outlined in this StackOverflow response and this GitHub example. The idea is to take the various vertices of your source mesh and then fire rays from the mesh’s position to each vertex: if the rays intersect the scene somewhere closer than the distance from the position to the vertex then there’s a collision happening.
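The core test can be sketched in a few lines of plain JavaScript, using arrays as vectors so it runs without Three.js. `castRay` here is a placeholder for whatever ray query your scene provides – it isn't viewer API – and is assumed to return the distance to the nearest hit, or Infinity on a miss:

```javascript
// Minimal sketch of the vertex ray-cast idea. `castRay(origin, direction)`
// is a stand-in for the scene's real ray query: it should return the
// distance to the nearest hit, or Infinity when nothing is hit.
function sub(a, b) { return [a[0] - b[0], a[1] - b[1], a[2] - b[2]]; }
function length(v) { return Math.hypot(v[0], v[1], v[2]); }

function collides(origin, vertices, castRay) {
  return vertices.some((vertex) => {
    const toVertex = sub(vertex, origin);
    // A hit closer than the vertex itself means something is in the way
    return castRay(origin, toVertex) < length(toVertex);
  });
}
```

The same comparison – hit distance versus origin-to-vertex distance – is all the more elaborate versions below are doing, just with more carefully chosen vertices.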
This is fine when you’re dealing with simple geometry – the sample uses cubes, for instance, so we’re really only talking about 8 vertices – but interactive performance is going to be a challenge when dealing with complex meshes such as our robot.
To explore this I decided to start with the robot’s skeleton – the core structure that connects the “bones” of the robot and allows us to manipulate the various limbs independently but connectedly.
There’s a THREE.SkeletonHelper object that allows us to query the skeleton’s vertices, which we can then transform to world coordinates using the helper’s matrixWorld property. The vertex order matches that of the skeleton’s bones, but with the start- and end-points swapped (the first bone runs from v[1] to v[0], the second from v[3] to v[2], etc.). At least that’s the case for the version currently used in the Forge viewer – this may change, in time.
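Given that swapped ordering, pairing the helper’s vertex list back into bone segments looks something like this – a sketch that assumes the ordering described above holds for the viewer version you’re using:

```javascript
// Pair a SkeletonHelper's vertex list into bone segments, assuming the
// swapped ordering described above: bone i runs from v[2i + 1] to v[2i].
function boneSegments(vertices) {
  const segments = [];
  for (let i = 0; i + 1 < vertices.length; i += 2) {
    segments.push({ start: vertices[i + 1], end: vertices[i] });
  }
  return segments;
}
```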
This approach works well, although the core skeleton can sometimes be quite far from the surface of the mesh itself. I went ahead and created a number of vectors that were offset by the approximate bounding size (I didn’t get this quite right, but never mind) around the core skeleton.
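One way to generate those offset points is to sweep a few positions around each skeleton point, perpendicular to the bone’s axis. Here’s a plain-JavaScript sketch – the function name, `radius` and `count` are my own choices, not anything from the viewer, and the radius is just an approximation of the robot’s thickness:

```javascript
// Generate `count` points offset radially by `radius` from `point`,
// in the plane perpendicular to the bone's `axis` direction.
function radialOffsets(point, axis, radius, count) {
  // Normalize the axis
  const len = Math.hypot(axis[0], axis[1], axis[2]);
  const [ax, ay, az] = axis.map((c) => c / len);
  // Pick a reference vector that isn't parallel to the axis...
  const ref = Math.abs(ax) < 0.9 ? [1, 0, 0] : [0, 1, 0];
  // ...then build two perpendicular unit vectors via cross products
  let u = [ay * ref[2] - az * ref[1], az * ref[0] - ax * ref[2], ax * ref[1] - ay * ref[0]];
  const ulen = Math.hypot(u[0], u[1], u[2]);
  u = u.map((c) => c / ulen);
  const v = [ay * u[2] - az * u[1], az * u[0] - ax * u[2], ax * u[1] - ay * u[0]];
  const points = [];
  for (let i = 0; i < count; i++) {
    const angle = (2 * Math.PI * i) / count;
    const c = Math.cos(angle) * radius;
    const s = Math.sin(angle) * radius;
    points.push([
      point[0] + c * u[0] + s * v[0],
      point[1] + c * u[1] + s * v[1],
      point[2] + c * u[2] + s * v[2],
    ]);
  }
  return points;
}
```

Each of the returned points can then be fed through the same ray-cast check as the skeleton vertices themselves.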
To check intersections with the scene the Forge viewer has a handy rayIntersect() method (it’s under the impl property, so you need to access it via viewer.impl.rayIntersect()). As the robot itself is displayed via an overlay, it doesn’t participate in the intersection operation (which is a good thing).
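In practice it’s convenient to wrap that call so it yields a plain hit distance. A sketch – note that rayIntersect()’s exact signature and return shape can vary between viewer versions, so treat the `distance` property and the second (ignore-transparent) argument as assumptions to verify against yours:

```javascript
// Wrap viewer.impl.rayIntersect() to return a plain hit distance.
// Assumptions: `ray` is a THREE.Ray, and the call returns either null
// or an intersection object carrying a `distance` property.
function sceneHitDistance(viewer, ray) {
  const hit = viewer.impl.rayIntersect(ray, true); // true: ignore transparent
  return hit ? hit.distance : Infinity;
}
```

Returning Infinity on a miss makes it trivial to compare the result directly against the origin-to-vertex distance.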
We don’t need to check collisions for every bone, or for all the vectors around the central one: in fact I ended up reducing things down to just the end bone and a single radial vector, to keep the collision detection responsive. If you’re not calling this super-regularly (we’re doing so for every joint transformation, so it’s being called a lot), you can ratchet up the accuracy by including more vectors.
In the animation below the collision detection isn’t perfect, but it’s good enough for our purposes.
One interesting effect is that when we reverse the robot operation we can’t just decrement the counter we’re using to keep track of the robot position: we need to reduce it by a larger amount to avoid this kind of “Max Headroom” stuttering.
I’m not fully sure why this happens – maybe there’s some lag introduced because we’re applying multiple joint transformations, one after the other – but anyway.
I hope this proves useful for people who want to implement some kind of rudimentary client-side collision detection using the Forge viewer. It won’t work for all scenarios, but it should be good enough for some.