The inspiration for this post has come from a variety of sources. (Feel free to skip this preamble where I talk about the history of the project: as much as anything it’s here to remind me how things happened when I come back to this post at some point in the future. ;-)
My colleague, Simon Breslav, worked on an initial implementation in Dasher 360 that animated robots – and even mapped stress information to their surfaces – for a demo shown at AU 2017, back when I was travelling around the world with my family. One of the issues Simon had with his implementation was the amount of per-model configuration required: the mechanism was linked to specific components that somehow had to be identified in the model (for the demo this was hardcoded).
Simon’s work was partly driven by a collaboration with the AMF in the UK, where we knew we’d one day want to display animated robots inside Dasher 360.
More recently I started working on displaying human skeletons inside the Forge viewer. It turns out that using a bone structure for a skeleton isn’t currently needed for Dasher 360, as we’re deriving the joint positions from video footage, and don’t need to worry about the various joint angles between limbs.
For robots, however, we will be capturing the angles of the various joints, which really lends itself to using a bone structure of connected parts. During and after Rob|Arch 2018 I worked on enabling the animation of skinned meshes inside the Forge viewer. This resulted in the ability to create and animate a skinned mesh with basic geometry – that I created in code – for each of the limbs, but my goal wasn’t to define the geometry of a robot in code, it was to allow loading of external robot models that were created using a 3D modelling tool of some sort.
During the Forge accelerator in Rome I hit my head against the limitations of various object loaders inside r71 of three.js (the one currently supported by the Forge viewer):
- The glTF loader in that version was only for v1.0 of the spec, and had a number of issues.
- The JSON format supported by three.js at that time didn’t seem to include any joint information.
- Collada seemed the only way to go, but was limited at v1.4.x in that version of the toolkit.
In the long-term I think glTF is probably my preferred option for bringing 3D data into three.js, but it’s not really an option with r71. Collada looked like a decent option, but in v1.4 it only supports skinning and not kinematics (the distinction seeming to be that skinned models have their mesh vertices morphed along with changes to its bone structure, while kinematics can just apply transformations to rigid bodies… I’ve probably stated this poorly – not being a domain expert – so feel free to correct me in the comments).
The rigged model of a Kuka KR-1000 industrial robot I got from TurboSquid included a DAE (Collada) file that was v1.4.1. This was a good thing, in that three.js r71 could at least load it. The bad news was that if you just brought it into the viewer – by adding the loaded geometry directly into the Forge viewer scene – then it resulted in geometry that was poorly positioned.
I could go through the individual meshes in the model and add these into the scene…
… but these wouldn’t animate with the attached skeleton. Argh.
When I got home from Rome I found this simple bit of advice that seemed a great way to add your own kinematics to a model: you can add a mesh to any bone in a skeleton and it will get transformed along with it. Perfect! Now all I had to do was to find a way to parse the scene loaded from the Collada file and attach the various meshes to their corresponding bones.
Here’s the algorithm I used to load in the Collada model and attach component meshes to the bones:
- Clone the first skinned mesh from the Collada file and add it into the viewer scene.
- This defines the skeleton – with its various bones – for the robot model.
- In our case the mesh only represented a flexible hose (which presumably is supposed to deform as the skeleton moves), so I just set its material to invisible.
- Call a recursive function that searches the Collada scene for the meshes that correspond to its bones.
- If we find an object in the scene that has the same name as a bone, get its children and add the meshes that correspond to them into the viewer.
- We can create a simple THREE.Mesh using the geometry in collada.dae.geometries['XXX-mesh'].mesh.geometry3js (where XXX is the name of the child).
- If you add them “as is”, then the results are just as we saw when we first imported the model: the transforms are messed up in exactly the same way.
- The difference is that the meshes at least move (albeit in the wrong place) with the skeleton. Progress!
- To get them to show up in the right place, we need to apply the inverse of the new owner’s matrixWorld.
- In most cases we add the mesh to the bone directly, but if you need to fix some geometry in place (such as the base of the robot) then it needs to be added to the bone’s parent (the scene itself).
- Create a THREE.SkeletonHelper object, passing in our mesh. Add the mesh to the scene (and the helper, if you want the bone structure to be visible, too).
- Periodically you’ll want to change the joint angles on the various bones: this will allow you to animate the robot, whether based on real or simulated data.
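The recursive search at the heart of the steps above can be sketched as follows. The function and callback names are my own, hypothetical ones (not from any official API), and the traversal only relies on the `name` and `children` properties that any three.js (r71) scene-graph node has. The commented-out usage shows how it would plug into the loaded Collada structure described above:

```javascript
// attachMeshesToBones and its attach callback are hypothetical names.
// Walks the Collada scene looking for nodes that share a name with a bone;
// the children of such a node are the component meshes for that bone.
function attachMeshesToBones(node, bonesByName, attach) {
  var bone = bonesByName[node.name];
  if (bone) {
    node.children.forEach(function (child) {
      attach(child.name, bone);
    });
  }
  // Keep searching the rest of the Collada scene
  node.children.forEach(function (child) {
    attachMeshesToBones(child, bonesByName, attach);
  });
}

// Usage against a loaded Collada model might look something like this:
//
// var bonesByName = {};
// skinnedMesh.skeleton.bones.forEach(function (b) { bonesByName[b.name] = b; });
//
// attachMeshesToBones(collada.scene, bonesByName, function (name, bone) {
//   var geom = collada.dae.geometries[name + '-mesh'].mesh.geometry3js;
//   var mesh = new THREE.Mesh(geom, material);
//   // Undo the bone's world transform so the mesh lands in the right place
//   mesh.applyMatrix(new THREE.Matrix4().getInverse(bone.matrixWorld));
//   bone.add(mesh);
// });
```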
Once implemented in code, it turns out that this approach doesn’t even require skinning to be enabled: it works perfectly with the shipping Forge viewer. Hooray!
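As for the final step – driving the animation by updating joint angles – a minimal sketch might look like this. The helper name is hypothetical, and I’m assuming here that every joint rotates about its local Z axis, which will vary per joint on a real robot:

```javascript
// setJointAngles is a hypothetical helper; the single-Z-axis assumption is
// mine -- check the rotation axis of each joint in a real robot model.
function setJointAngles(bones, angles) {
  var n = Math.min(bones.length, angles.length);
  for (var i = 0; i < n; i++) {
    bones[i].rotation.z = angles[i]; // angle in radians
  }
}

// After updating the bones you'd ask the Forge viewer to redraw, e.g.:
// viewer.impl.invalidate(true, true, true);
```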
Something we learned a few posts ago is that in order to animate an object inside the Forge viewer, it needs to have its material’s depthTest property set to false. This allows the scene to be updated without a full render. The downside is that you can see the object “through walls” (and other geometry), and even its own components are drawn in an arbitrary order.
This means there’s no point trying to apply a nice Phong material with shadows, etc. Here’s what it looks like when you do:
Using a basic mesh material things look a little better, but it’s still not great:
The best approach, in my opinion, is to make the material wireframe. It adds a really nice feel to it.
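Putting those two constraints together, the material settings amount to something like this (the helper name is hypothetical):

```javascript
// robotMaterialOptions is a hypothetical helper name. depthTest: false is
// what lets the Forge viewer animate the mesh without a full render, and
// wireframe sidesteps the draw-order artifacts that this setting causes.
function robotMaterialOptions(color) {
  return {
    color: color,
    wireframe: true,
    depthTest: false
  };
}

// var material = new THREE.MeshBasicMaterial(robotMaterialOptions(0x888888));
```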
Here’s how things look when animated:
Performance is pretty decent, and the wireframe view makes multiple overlapping robots display well, too.
For the future: once we’re able to use a recent three.js version, we’ll also be able to use Collada v1.5 files, which can contain kinematics data directly, allowing stuff like this. There’s also a much broader set of models available, thanks largely to the OpenRAVE project. If we end up needing to be on r71 for some time to come, I’ll probably need to look into how best to create Collada v1.4 files from Autodesk tools, and see whether the scene structure can be managed by the code I’ve used to import the model for this post. We’ll see.
A quick word of thanks… Petr Broz and Cyrille Fauvel both helped me a huge amount during the Forge accelerator in Rome: it’s largely thanks to these two that I was able to sift through the various options and progress to the point where I could make this work on my own. Thanks again for all your help, guys!
Next week I’m flying across to Amsterdam to help with a project we’re working on with MX3D for Dutch Design Week. Later in the week I’ll fly across to Barcelona to give a presentation on our generative design efforts in the AEC space. I’ll hopefully find the time to write a blog post or two, at some point…