I work in the Research Engineering team at Autodesk, a centralised pool of software engineering talent that contributes to research projects across our various industries. Organising this way allows us to explore commonalities (and efficiencies) that would otherwise be hard to achieve. Every year or two we have an internal “Innovation Days” event, which is essentially an internal team hackathon. (Lots of other teams at Autodesk do this, of course, we’re far from being unique in that regard.)
Our latest Innovation Days took place on Thursday and Friday of last week, with the theme of sustainability. I met with the extended Dasher team (which is still very small, even when extended :-) for our team meeting the day before, and we brainstormed a few possible ideas (the broader team had had a very interesting presentation by Zoé Bezpalko, a Sustainability Strategy Manager at Autodesk, before the break, but I wanted to get some Dasher-specific ideas on paper). Alex Tessier suggested bringing thermal imagery data – apparently people are using FLIR infrared camera modules with their iPhones to capture such images, these days – into Dasher. At first I was a bit sceptical – understanding the position of the camera relative to the scene is a big job! – but we agreed to set that problem to one side, for now, and just worry about analysing a thermal image and bringing its data into Dasher.
That sounded much more tractable, to me, so on Thursday I went ahead and started work on bringing thermal image data into Dasher. As I don’t have actual thermal imagery for a 3D scene, I took a somewhat cheeky shortcut and used this site to generate an image for a screenshot of a scene inside Dasher.
Here’s the original, for comparison:
The basic idea was to analyse the data in the thermal image, and – for the corresponding 3D view, which we already have – to “drape” the temperature onto the 3D locations represented in the image. I initially thought about creating a coloured sphere at each location, but decided to skip that and directly use the Forge viewer’s Data Visualization Extension to display coloured sprites instead. One benefit of this approach would be the ability to display tooltips for each of these sprites, something we’ll see in Part 2.
I didn’t want to create a sprite for every single pixel in the thermal image – at a resolution of 1792 x 950 that’s a lot of sprites – and so decided to create one sprite for every 25 pixels in the X and Y directions (i.e. one in every 625 pixels). For my first implementation I wrote some code that averaged the pixel values for a section of the thermal image – something I’d done, way back when, inside AutoCAD – but then my son reminded me that you could just draw the image into a scaled-down canvas element (we’re in any case using a canvas to access the image data), which does this automagically. You can then access the data, pixel by pixel, and the canvas has done all the heavy lifting for you. (I think it’s wonderful that I’m now getting programming hints from my kids!)
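The canvas trick can be sketched roughly like this – the function and constant names here are illustrative, not from the Dasher codebase, and it assumes you already have the thermal image loaded into an Image element:

```javascript
// Downscale a thermal image via a canvas, so each remaining pixel
// effectively averages a 25 x 25 block of the original image.
const BLOCK_SIZE = 25;

function downsampleThermalImage(img) {
  const w = Math.floor(img.width / BLOCK_SIZE);   // e.g. 1792 -> 71
  const h = Math.floor(img.height / BLOCK_SIZE);  // e.g. 950 -> 38
  const canvas = document.createElement('canvas');
  canvas.width = w;
  canvas.height = h;
  const ctx = canvas.getContext('2d');
  // drawImage into the smaller canvas does the averaging for us
  ctx.drawImage(img, 0, 0, w, h);
  // .data is a flat RGBA array: 4 bytes per (downsampled) pixel
  return ctx.getImageData(0, 0, w, h);
}
```

The nice part of this approach is that the browser’s own image scaling does the pixel averaging, so there’s no hand-rolled loop over 25 x 25 blocks.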
This scaling allowed easy access to 2,698 (71 x 38) temperature values, which seemed like a decent number of points to create for a single thermal image.
Now that we have an RGB value for a particular area of the thermal image, we can find the corresponding location in the 3D model. For this we use the hitTest() method of the Forge viewer’s impl object to fire a ray from the screen position along the view direction and find its intersection with the 3D scene. (We use this technique already inside Dasher to place sensors beneath the cursor, so I was familiar with how it worked.)
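A minimal sketch of that ray-casting step might look like the following. The viewer.impl.hitTest(x, y, ignoreTransparent) call is the Forge viewer method mentioned above; the grid-walking function wrapped around it is my own illustrative scaffolding:

```javascript
// Convert each downsampled grid cell back to client (screen) coordinates
// and ray-cast into the 3D scene to find the corresponding world position.
function hitTestGrid(viewer, gridWidth, gridHeight, blockSize) {
  const points = [];
  for (let j = 0; j < gridHeight; j++) {
    for (let i = 0; i < gridWidth; i++) {
      // Centre of the 25 x 25 block in the original screenshot
      const x = i * blockSize + blockSize / 2;
      const y = j * blockSize + blockSize / 2;
      // false: do NOT ignore transparent materials, so windows count as hits
      const hit = viewer.impl.hitTest(x, y, false);
      if (hit && hit.intersectPoint) {
        points.push({ i, j, position: hit.intersectPoint });
      }
    }
  }
  return points;
}
```

Cells whose ray misses the model entirely (e.g. sky) simply don’t produce a point, which is the behaviour we want.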
Then we can simply create a sprite with the appropriate colour at that 3D location. Because we create them in a uniform grid relative to the view (and the thermal image) – and because the sprites have a uniform size at any distance – we see them as a grid once they’re created:
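Sprite creation follows the Data Visualization Extension’s published API (ViewableData, ViewableStyle, SpriteViewable), although the icon URL, sprite size and dbId scheme below are placeholders of my own:

```javascript
// Create one coloured sprite per hit-tested thermal sample.
async function addThermalSprites(viewer, points) {
  const ext = await viewer.loadExtension('Autodesk.DataVisualization');
  const DataVizCore = Autodesk.DataVisualization.Core;

  const viewableData = new DataVizCore.ViewableData();
  viewableData.spriteSize = 24; // on-screen pixels: constant at any distance

  points.forEach((pt, idx) => {
    const style = new DataVizCore.ViewableStyle(
      DataVizCore.ViewableType.SPRITE,
      new THREE.Color(pt.color), // colour sampled from the thermal image
      'circle.svg'               // placeholder sprite icon URL
    );
    // Each sprite needs a unique dbId; it's what hover/click events report
    const viewable = new DataVizCore.SpriteViewable(pt.position, style, idx + 1);
    viewableData.addViewable(viewable);
  });

  await viewableData.finish();
  ext.addViewables(viewableData);
}
```

Giving each sprite its own dbId is what makes the per-sprite tooltips in Part 2 possible, since the extension’s hover events report that id back to us.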
Here’s the same view with a small application window, which shows them more densely packed.
One question I have is whether windows radiate heat that can then be picked up by a thermal camera. Or, in other words, whether the hitTest() function should be instructed to ignore transparent materials. I assume not: my sense is that the default setting of this flag to false (i.e. taking into account hits on things like windows) is appropriate in this case. But it’s easy to tweak if I have this wrong.
As soon as we pan or orbit the view, we see that these points are truly in 3D.
I was already pretty happy with this implementation. Of course there are a lot of missing pieces to this… I’m hoping that my colleagues working on computer vision and machine learning research will be able to give me a nicely calibrated camera for a single thermal image, but even if they can’t I think it demonstrates some interesting possibilities for harnessing the Forge viewer and its Data Visualization Extension to display this kind of data. It would be so cool to aggregate a point cloud from various thermal images – or from thermal video footage – but that would bring lots of other challenges around storing and displaying larger amounts of data. All of which I’m leaving for another day, way in the future.
On a day in the much nearer-term future – I’m thinking tomorrow or the day after – we’ll take a look at the approach I took on the second Innovation Day to implement tooltips showing an approximation of the temperature at a particular location.