Over the next few posts – in this series, anyway – we’re going to take a look at the shading of objects (actually meshes) using the Forge viewer’s Data Visualization Extension (Project Hyperion). This is something we’ve done in Dasher for some time, and I was excited that using Hyperion would once again allow us not only to rip out some of our old code but also to go in new directions and explore interesting new capabilities.
Let’s first explain how this type of shading differs from what we’ve seen in previous posts, namely volumetric room and planar shading: simply put, this style shades the surface of an object rather than the volume of a room. It’s still volumetric under the hood, but the effect can be applied to much more uneven geometry. In fact the same low-level shader is used for both volumetric approaches. Here’s the Hyperion documentation on this topic.
The main model we’ve used in the past to test out this mechanism is the one for the MX3D bridge (which I’m happy to say will be installed in about a month’s time, something you’ll be hearing more about in due course). An interesting side-note relates to units: as the bridge was modelled in millimetres, the coordinate space is much larger than a typical building model; something that has helped us to make sure larger scale models are supported by both Dasher and Hyperion.
What changes were needed to support larger models? Actually nothing very complicated: we mainly needed to be able to specify a larger “confidence” value for the display of sensor data. This confidence setting dictates how far data will be shown from the sensor location (in world coordinate space) before it starts to drop off. With the NEST building we typically use a value of 60, while for the MX3D bridge we use a value of 5000. (As an internal detail, we needed to increase the initial value of a shader variable to be much higher so that both larger and smaller models would work with the same logic. This has been fixed from v7.45 of the Forge viewer.)
The other setting we used for the MX3D bridge was to increase the “power parameter” from 2 to 3. If you refer back to Shepard’s method – inverse distance weighting – on Wikipedia, you can see that as you increase the power parameter, the visualization tends towards a Voronoi partition. We found that going beyond 2 worked better for the bridge model: this value was previously hard-coded in the shader for “non-building” models, which was less than ideal.
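To get a feel for why this happens, here’s a toy JavaScript sketch of Shepard’s method – not the actual shader code, and the sample structure is invented purely for illustration:

```javascript
// Toy inverse distance weighting (Shepard's method) in 3D.
// Each sample is { position: { x, y, z }, value: number }; p is the
// power parameter. Higher p makes nearby samples dominate more strongly.
function idw(samples, point, p) {
  let num = 0;
  let den = 0;
  for (const s of samples) {
    const dx = s.position.x - point.x;
    const dy = s.position.y - point.y;
    const dz = s.position.z - point.z;
    const d = Math.sqrt(dx * dx + dy * dy + dz * dz);
    if (d === 0) return s.value; // exactly on a sample point
    const w = 1 / Math.pow(d, p); // weight falls off with distance^p
    num += w * s.value;
    den += w;
  }
  return num / den;
}
```

As the power parameter grows, the nearest sample’s weight swamps all the others, so each shaded point effectively takes its nearest sensor’s value – which is why the picture tends towards a Voronoi partition.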
The Hyperion team (thanks, Ben!) took this feedback and used it to modify the core shader to expose variables (as uniforms, in shader-speak) for confidence and the power parameter, but also for the alpha value that gets applied to the shading (allowing the coloured overlay to be more or less transparent). So not only can we now vary the confidence and power parameter for different models, we can allow the user to change these settings in a much more dynamic fashion, such as via sliders in Dasher’s UI.
Let’s take a spin with these new controls in Dasher’s settings, to see the effect each parameter has. First, here’s confidence:
This is the power parameter – you can see the Voronoi pattern emerge as the power increases:
And here’s the alpha value, which affects the blending between the heatmap and the underlying model.
All of this goodness – at least the underlying implementation provided by Hyperion – is heading your way in v7.45 of the Forge viewer, which should be available in the coming days (and in Dasher shortly afterwards).
In terms of the steps needed to implement per-object shading, here’s a summary of the process, which should mirror what is explained in the documentation:
- Create a new SurfaceShadingGroup with a unique name.
- Create a main SurfaceShadingNode for the root dbId of what you want to shade.
- For each of your sensors, add a SurfaceShadingPoint to the main node.
- Add the main node as a child of the shading group.
- Create a new SurfaceShadingData object.
- Add the shading group as a child of the shading data.
- Initialize the fragment Ids in the shading data structure.
  - I found this still needs to be done manually for this style of shading, but I need to double-check as I may be missing something.
- Call setupSurfaceShading() with our shading data.
- Register the colour range for our various sensor types using registerSurfaceShadingColors().
- Call renderSurfaceShading() with the name we gave to our shading group back at the beginning.
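The first few of these steps might be sketched in code as follows. The class names come from the extension’s Autodesk.DataVisualization.Core namespace, but the helper function, the sensor structure and names such as 'bridge' are my own, for illustration – treat this as a sketch under those assumptions rather than a definitive implementation:

```javascript
// Build the SurfaceShadingData hierarchy from a list of sensors.
// DataVizCore is the Autodesk.DataVisualization.Core namespace object;
// passing it in keeps the function easy to test in isolation. Each
// sensor is assumed to be { id, position: { x, y, z }, types: [...] }.
function buildShadingData(DataVizCore, rootDbId, sensors) {
  const {
    SurfaceShadingGroup,
    SurfaceShadingNode,
    SurfaceShadingPoint,
    SurfaceShadingData
  } = DataVizCore;

  // A shading group with a unique name to render by later
  const group = new SurfaceShadingGroup('bridge');

  // A main node for the root dbId of the geometry we want to shade
  const node = new SurfaceShadingNode('bridge-node', rootDbId);

  // One shading point per sensor
  for (const sensor of sensors) {
    node.addPoint(
      new SurfaceShadingPoint(sensor.id, sensor.position, sensor.types)
    );
  }

  // Assemble the hierarchy: node into group, group into shading data
  group.addChild(node);
  const shadingData = new SurfaceShadingData();
  shadingData.addChild(group);
  return shadingData;
}
```

From there you’d initialize the fragment Ids (e.g. via shadingData.initialize(model)), then call setupSurfaceShading() and registerSurfaceShadingColors(), and finally renderSurfaceShading('bridge', …) with the group name chosen above.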
As we were working through this implementation, I realised that getSensorValue() was being called once per fragment – which is inefficient for models that have multiple fragments per dbId. The Hyperion team has fixed this from v7.44, I believe.
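If you’re on an older viewer version – or your value lookup is expensive anyway – one possible workaround is to memoize the callback yourself. This is just a sketch: makeMemoizedGetSensorValue and computeSensorValue are hypothetical names of my own, with the callback assumed to receive a shading point and a sensor type:

```javascript
// Wrap an expensive sensor-value lookup in a per-(point, type) cache,
// so repeated per-fragment calls return the cached value instead of
// recomputing. computeSensorValue is a stand-in for your own lookup
// returning a normalized value.
function makeMemoizedGetSensorValue(computeSensorValue) {
  const cache = new Map();
  return function getSensorValue(shadingPoint, sensorType) {
    const key = `${shadingPoint.id}:${sensorType}`;
    if (!cache.has(key)) {
      cache.set(key, computeSensorValue(shadingPoint, sensorType));
    }
    return cache.get(key);
  };
}
```

You’d want to clear (or rebuild) the cache whenever the underlying sensor data changes, of course – otherwise the heatmap would stop updating.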
Now for some less-than-great news… my AU 2021 class proposal didn’t make the cut this year: there were very few slots for Forge-related content, it seems, with the priority understandably going to customer stories. I’ll be working with our Developer Advocacy & Support team to see whether there’ll be another way to get Autodesk-sourced Forge material out during the AU timeframe, but it may well be that this blog series is the best (or perhaps only) way for you to access it. So it goes. If you feel you’d like to hear me talk about this at another event, do get in touch. (I’m also available for birthdays, weddings & bar mitzvahs. ;-)
In the next part of this series, we’re going to take a look at how object-level shading can be used to shade objects such as sensors in a building model.