Back in September we started the work to integrate the (currently still internal) SDK for Autodesk Tandem into Project Dasher. This is what I’ve referred to, in the past, as “Tandem inside Dasher” (as opposed to “Dasher inside Tandem”).
The work has been intermittent rather than constant, as changes were needed in both Tandem and the core Forge viewer to support certain workflows. I’m happy to say that, as of this week, the integration is pretty close to parity with the version based on the traditional Forge viewer, with a few gaps that should be filled over the coming months. To be clear, this work isn’t intended for public consumption: the goal was to better align Dasher with the Tandem platform, but also to test out a few key component integrations.
Here’s a quick run-down of the capabilities we’ve (re-)implemented using the Tandem SDK.
We started by getting all the level and room information from Tandem, rather than having it defined manually in the project settings. The breadcrumbs and building navigation panel are now driven purely by the data coming from Tandem. This greatly simplifies project setup, and also means we can very easily tweak level assignment options inside the Tandem application rather than having to do this work inside Revit and then re-export and re-translate in Forge.
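To give a rough idea of the Dasher side of this, here’s a sketch of deriving the navigation structure from levels and rooms data. The data shapes (`id`, `name`, `levelId`, etc.) are entirely hypothetical – the Tandem SDK is still internal, so this is illustrative only:

```javascript
// Hypothetical data shapes: the Tandem SDK is internal, so the field
// names used here are illustrative, not the real API.
function buildNavigationTree(levels, rooms) {
  // One node per level, preserving the order the levels arrive in.
  const tree = levels.map((level) => ({
    id: level.id,
    name: level.name,
    rooms: [],
  }));
  const byId = new Map(tree.map((node) => [node.id, node]));
  // Attach each room to the level that owns it.
  for (const room of rooms) {
    const level = byId.get(room.levelId);
    if (level) level.rooms.push({ id: room.id, name: room.name });
  }
  return tree;
}

// A breadcrumb is then just a path into that tree.
function breadcrumb(tree, levelId, roomId) {
  const level = tree.find((node) => node.id === levelId);
  if (!level) return [];
  const room = level.rooms.find((r) => r.id === roomId);
  return room ? [level.name, room.name] : [level.name];
}
```

The point being that once the source of truth is Tandem’s own level/room data, the navigation UI becomes a pure transformation of it – there’s no per-project configuration left to maintain.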
The next big piece of work related to sensor dots. As you may have seen in the product demo from the March 2022 Tandem webinar, the Tandem team have implemented a new approach that clusters sensor dots according to proximity. This is a mechanism parallel to the Data Visualization Extension’s sprites, and we’ll see how (and whether) the two converge over time.
An interesting side note: the Tandem “stream markers” are implemented using SVG elements that are placed in the DOM. This is conceptually pretty similar to what we had in Dasher a few years ago: we switched across to use a GPU-rendered point cloud to help with scalability, when we had to display thousands of sensor locations. The Tandem team have elegantly side-stepped this issue by implementing clustering, which means that you’re very unlikely to see thousands of sensors at a time, as they will cluster together when zoomed out. It doesn’t mean that they won’t end up using a GPU-centric approach, of course: there are definitely pros and cons in terms of performance, styling flexibility, etc., but this initial implementation certainly seems to work well for now.
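For the curious, proximity clustering of this kind can be sketched in a few lines. This is a generic grid-bucketing approach – not the Tandem team’s actual implementation – operating on already-projected 2D screen positions:

```javascript
// Generic screen-space clustering sketch -- NOT Tandem's actual code.
// Buckets projected 2D sensor positions into grid cells; dots that share
// a cell are rendered as a single cluster marker with a member count.
function clusterByProximity(points, cellSize) {
  const cells = new Map();
  for (const p of points) {
    const key = `${Math.floor(p.x / cellSize)}:${Math.floor(p.y / cellSize)}`;
    const bucket = cells.get(key) ?? [];
    bucket.push(p);
    cells.set(key, bucket);
  }
  // Each cluster is drawn at the centroid of its members.
  return [...cells.values()].map((members) => ({
    x: members.reduce((s, p) => s + p.x, 0) / members.length,
    y: members.reduce((s, p) => s + p.y, 0) / members.length,
    count: members.length,
  }));
}
```

Because the bucketing happens in screen space, zooming out shrinks the distances between dots, so more of them fall into a single cell and merge into one marker; zooming in splits the clusters apart again.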
There are still some things to work out around sensor dots: we control visibility by filtering on sensor type, etc., so we need more control over that, and we also need to wire up events for hover (to display tooltips) and click (to show data graphs). But this is a good start, for sure.
The last big piece of work was around surface shading. The main issue here was that the Forge viewer’s Data Visualization Extension expected rooms to come from a single, central model: as mentioned in a previous post, Tandem inherently works with multiple models loaded into the viewer, and you can’t assume that only one of them will source the room objects for surface shading.
The Forge team integrated support for multiple-model surface shading in v7.60 of the viewer. This means there’s an additional model option you can pass to setupSurfaceShading(), renderSurfaceShading(), updateSurfaceShading() and registerSurfaceShadingColors().
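In outline, the multi-model case means first working out which loaded model each room belongs to, then making the extension calls per model. The option shapes in the commented portion below are my assumptions rather than tested viewer code – check the Data Visualization Extension reference for the definitive signatures:

```javascript
// Tandem loads several models, any of which may source room objects,
// so the rooms are first grouped by their owning model.
// (The room shape { modelId, dbId } is hypothetical.)
function groupRoomsByModel(rooms) {
  const byModel = new Map();
  for (const room of rooms) {
    const dbIds = byModel.get(room.modelId) ?? [];
    dbIds.push(room.dbId);
    byModel.set(room.modelId, dbIds);
  }
  return byModel;
}

// Each group then gets its own setup/render pass, passing the model
// through to the extension. Sketch only -- the option shapes are
// assumptions, and findLoadedModel() is a hypothetical helper:
//
//   for (const [modelId, dbIds] of groupRoomsByModel(rooms)) {
//     const model = findLoadedModel(viewer, modelId);
//     await dataVizExt.setupSurfaceShading(model, shadingData);
//     dataVizExt.renderSurfaceShading(floorName, sensorType, getSensorValue, { model });
//   }
```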
This is now working well.
I’m really happy with how things have come together, and that we now have an implementation of Dasher that makes use of the Tandem platform for key capabilities. We’ll be keeping this in sync with the work being done by the Tandem team – they’re working at a pretty aggressive pace to implement new features – and it’ll continue to be a useful test integration for the Tandem platform.
A big thanks to the Tandem team for their support getting this implementation working, in particular to James Awe (for the core SDK), to Manuel Wellman (for sensor dots) and Traian Stanev (for digging into a number of issues including surface shading). And to the Forge viewer team for promptly implementing support for multiple models in the Data Visualization Extension, of course.