As a recap, so far in this series we’ve looked at creating 2D heatmaps, making them resizable, adding lots of them inside Project Dasher, and then making them pinnable per-level.
There are two main tasks left to complete the feature – which will form the core of my upcoming class for AU2020 – adding information on building systems and implementing a UI to make these new views easier to access.
Today we’re focusing on adding building system information. Let’s first step back and talk about why this is important…
One of the key insights we’ve derived over the years with Project Dasher is that data correlation is valuable. This might mean correlating occupancy data with CO2 levels, or seeing people moving across a bridge and comparing that with stress levels. Context is important.
In this series we’ve already introduced the ability to correlate data across sensor types – something that was possible before, but not in a highly visual way – as well as across levels of a building (which we saw in the last post with pinning). I expect both of these to be valuable in their own way, but today’s post introduces another potentially interesting type of correlation: the ability to compare data captured by sensors with geometric information about a building’s internal systems.
This allows us to really understand the context of the data in new ways: how light levels behave relative to the placement of lighting fixtures, for instance, or how temperature variation across an open space is influenced by the HVAC system.
Forge does provide 2D geometry streams, but they typically need to be set up and published. For our workflow we’re going to base our 2D layers on information that exists in 3D (hopefully, anyway, otherwise the feature won’t do much for people). We saw the ability to view layers in 3D during my preparation for AU2019, and we’re going to leverage the same feature for our 2D work: you need to find a way to define your layers, whether by filtering by object type or by assuming the geometry comes from a specific source file. Either way, we’ll assume that you have a list of dbIds for each layer.
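To make that concrete, here’s a minimal sketch of one way to gather the dbIds for a layer, assuming you’re filtering on a property such as the Revit category. The search text and attribute name below are placeholders – use whatever identifies the systems in your model.

```javascript
// A minimal sketch (not Dasher's actual code): collect the dbIds for a layer
// by searching on a property value. 'Mechanical Equipment' and 'Category' are
// placeholder values – adjust them to suit your model.
function getLayerDbIds(viewer, searchText, attributeNames) {
  return new Promise((resolve, reject) => {
    viewer.search(
      searchText,                // e.g. 'Mechanical Equipment'
      (dbIds) => resolve(dbIds), // the matching dbIds
      (error) => reject(error),
      attributeNames             // e.g. ['Category'] to narrow the search
    );
  });
}

// Build a simple { layerName: dbIds } map to drive the steps that follow
const layerDbIds = {};
getLayerDbIds(viewer, 'Mechanical Equipment', ['Category']).then((dbIds) => {
  layerDbIds['Mechanical'] = dbIds;
});
```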
Here’s the approach I found to flatten this 3D geometry into 2D layers. This is all quite detailed – it’s something that should probably be considered an advanced topic in terms of development with the Forge viewer – so feel free to skip over it if you’re not interested in the nitty gritty.
- Create a material (a THREE.MeshBasicMaterial is fine) for each layer with a different colour.
- It makes sense for these materials to be at least partially transparent (I went with 50%).
- Get the instanceTree and call enumNodeFragments() on each dbId defining a particular layer (the first sketch after this list covers this and the next few steps).
- Get a renderProxy for each fragment, and convert its geometry to “simple” THREE.js geometry.
- We do this by enumerating the mesh vertices and faces, copying them across to the new geometry definitions.
- Check the bounding box of the fragment: work out which level(s) it’s on by doing a simple intersection with the bounding boxes for the various levels.
- Create a mesh for the new geometry and the layer’s material.
- Add the mesh to a scene for each level.
- Now that we have a scene containing the geometry of the various layers for each level, render each into a texture whose dimensions match the bounds of our per-level heatmaps (the second sketch after this list shows this).
- This is a bit complicated, but it involves creating an orthographic camera looking down onto the scene – making it 2D – and adjusting it to fit the bounds of the heatmap’s scene.
- This is the same approach as was used to generate the 2D heatmaps themselves.
- We render it using a custom THREE.WebGLRenderer and our own THREE.WebGLRenderTarget.
- At this stage we should have a single texture for each level: we can dispose of the scenes that were used to generate them, as they won’t be needed anymore.
- To render these layer textures on our per-level heatmaps, we map each to a plane (see the third sketch after this list).
- Create some THREE.PlaneBufferGeometry with the dimensions we care about (for our heatmaps these just need to be square).
- We create a THREE.MeshBasicMaterial using the texture as a map – once again with a touch of transparency – and set depthTest and depthWrite to false.
- The Mesh displaying this texture needs to be positioned relative to the bounding box for it to align properly.
- This took me ages to work out, but the mesh’s position needs to be the exact centre of the bounding box.
- When we render the heatmaps – even when animated – we can now choose to include the layer overlay.
- We need to make sure autoClear is turned off on the renderer, so that the two render passes get combined.
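Here’s a minimal sketch of the first part of that process – extracting fragment geometry for a layer’s dbIds and adding it to a scene for a given level. It’s simplified (in particular, a clone stands in for the vertex/face copy that Dasher actually performs), and the exact THREE.js calls may vary with your viewer version.

```javascript
// Sketch: build a THREE.Scene containing the fragments of one layer that
// fall on one level. layerMaterial is a semi-transparent MeshBasicMaterial,
// levelBounds is the THREE.Box3 for the level – both as described above.
function buildLayerSceneForLevel(viewer, dbIds, layerMaterial, levelBounds) {
  const scene = new THREE.Scene();
  const instanceTree = viewer.model.getInstanceTree();
  const fragList = viewer.model.getFragmentList();
  const fragBounds = new THREE.Box3();

  dbIds.forEach((dbId) => {
    instanceTree.enumNodeFragments(dbId, (fragId) => {
      // Simple level assignment: does the fragment's bounding box overlap
      // the level's bounding box in Z?
      fragList.getWorldBounds(fragId, fragBounds);
      if (fragBounds.max.z < levelBounds.min.z ||
          fragBounds.min.z > levelBounds.max.z) {
        return;
      }

      const proxy = viewer.impl.getRenderProxy(viewer.model, fragId);
      if (!proxy || !proxy.geometry) return;

      // In practice this is where you enumerate the proxy's vertices and
      // faces, copying them across to "simple" THREE.js geometry; a clone
      // stands in for that step in this sketch
      const geometry = proxy.geometry.clone();

      const mesh = new THREE.Mesh(geometry, layerMaterial);
      mesh.matrix.copy(proxy.matrixWorld);
      mesh.matrixAutoUpdate = false;
      scene.add(mesh);
    }, true);
  });

  return scene;
}
```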
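And here’s a sketch of rendering that scene into a texture with a top-down orthographic camera, using our own renderer and render target as described above. The camera maths is simplified and assumes a Z-up model, and the render-target call differs slightly between THREE.js versions.

```javascript
// Sketch: render a per-level layer scene to a texture, fitted to the bounds
// of the level's heatmap so the two line up
function renderLayerTexture(scene, heatmapBounds, width, height) {
  const cx = (heatmapBounds.min.x + heatmapBounds.max.x) / 2;
  const cy = (heatmapBounds.min.y + heatmapBounds.max.y) / 2;
  const w = heatmapBounds.max.x - heatmapBounds.min.x;
  const h = heatmapBounds.max.y - heatmapBounds.min.y;

  // Orthographic camera looking straight down, framing the heatmap bounds
  const camera = new THREE.OrthographicCamera(
    -w / 2, w / 2, h / 2, -h / 2, 0.1, 10000);
  camera.position.set(cx, cy, heatmapBounds.max.z + 10);
  camera.up.set(0, 1, 0);
  camera.lookAt(new THREE.Vector3(cx, cy, heatmapBounds.min.z));
  camera.updateProjectionMatrix();

  const renderer = new THREE.WebGLRenderer({ alpha: true });
  renderer.setSize(width, height);

  const target = new THREE.WebGLRenderTarget(width, height);
  // Older THREE.js builds (as used by the viewer) take the target directly;
  // newer ones use renderer.setRenderTarget(target) before render()
  renderer.render(scene, camera, target, true);

  return target.texture;
}
```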
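Finally, a sketch of mapping one of those textures onto a plane and combining it with the heatmap render. The 0.8 opacity and the scene/camera names are placeholders; the important details are positioning the plane at the centre of the level’s bounding box and turning autoClear off so the passes get combined.

```javascript
// Sketch: display a layer texture on a plane aligned with the per-level heatmap
function createLayerOverlayMesh(texture, levelBounds, size) {
  // Our heatmaps are square, so the plane just needs to be size x size
  const geometry = new THREE.PlaneBufferGeometry(size, size);

  const material = new THREE.MeshBasicMaterial({
    map: texture,
    transparent: true,
    opacity: 0.8,      // a touch of transparency
    depthTest: false,  // draw on top of the heatmap
    depthWrite: false
  });

  const mesh = new THREE.Mesh(geometry, material);

  // The mesh has to sit at the exact centre of the level's bounding box
  // for the overlay to line up with the heatmap
  mesh.position.set(
    (levelBounds.min.x + levelBounds.max.x) / 2,
    (levelBounds.min.y + levelBounds.max.y) / 2,
    (levelBounds.min.z + levelBounds.max.z) / 2);

  return mesh;
}

// When rendering the heatmap, leave autoClear off so the overlay pass
// gets combined with the heatmap pass rather than replacing it
renderer.autoClear = false;
renderer.render(heatmapScene, camera);
renderer.render(overlayScene, camera);
```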
For the NEST model we can see how going through the levels causes an overlay to be displayed for the mechanical system (most of which is in the basement):
This geometry is available in the 3D view, of course, but it’s interesting to see it now in 2D, too.
And this style of overlay also works with heatmap animations:
The IKON building model has a more extensive set of geometry layers available. Here’s a view of temperature heatmaps of the first and second floors:
And here they are with the MEP layers toggled on:
To try this for yourself you can toggle the option on in Dasher’s Appearance settings. Bear in mind that the first time you do this it can take quite some time to extract the geometry and overlay the textures (it’s the geometry processing that takes the time). You’ll know it’s done when the graphics appear and/or the toggle turns blue. If you leave this option toggled on, the processing – and the additional time it takes – happens when Dasher starts, but you then won’t have to wait when toggling between modes, as all the required textures will already be available.
I think this could be a really interesting way to correlate geometry representing systems with IoT data. If you agree, disagree, or have other thoughts about how it may be used, please post a comment!