We’ve been busy working on a number of interesting features for Dasher 360 over the last few weeks. The main focus – at least from my side – has been to extend the filtering capability to allow people to filter sensors based on their type and their location in the model. Not only that, but the filtering is reflected in real time in the list of sensors, even when highlighting different areas of the model via the site’s breadcrumbs. I wasn’t sure this was going to be viable – mainly for performance reasons – but so far it seems to work really well.
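To give a rough sense of what this involves – and this is a simplified sketch with invented names, not the actual Dasher 360 code – the filtering boils down to applying a couple of predicates over the sensor list and re-rendering it whenever the selected types or the breadcrumb location change:

// Hypothetical data model – the real one differs.
interface Sensor {
  id: string;
  type: string;        // e.g. "temperature", "strain"
  location: string[];  // breadcrumb path, e.g. ["Building", "Level 2", "Room 201"]
}

// Keep only sensors matching the selected types and the current breadcrumb.
function filterSensors(
  sensors: Sensor[],
  selectedTypes: Set<string>,
  breadcrumb: string[]
): Sensor[] {
  return sensors.filter(s =>
    (selectedTypes.size === 0 || selectedTypes.has(s.type)) &&
    breadcrumb.every((part, i) => s.location[i] === part)
  );
}

Calling filterSensors() whenever the type checkboxes or the breadcrumb selection change keeps the list in sync with the model highlighting.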
The other big push has been around surface shading and getting that properly integrated into the timeline. The timeline is now visible in the public demo and can be used to control the period for which the surface shading gets animated. Eventually the timeline will drive other visualizations, too, whether plots for individual sensors or coordinated video feeds (more on this below).
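Conceptually – and again this is just a sketch using made-up names rather than the real API – the timeline exposes the selected period, and the surface shading simply re-queries the sensor data at each step of the playback and refreshes the heatmap:

// Hypothetical wiring between a timeline and surface shading.
interface TimeRange { start: Date; end: Date; }

class SurfaceShading {
  // Re-shade the model for the given instant; stubbed out here.
  update(time: Date): void {
    console.log(`Shading surfaces for ${time.toISOString()}`);
  }
}

// Step the shading through the selected period as the timeline plays.
function animate(shading: SurfaceShading, range: TimeRange, stepMs: number): void {
  let t = range.start.getTime();
  const timer = setInterval(() => {
    shading.update(new Date(t));
    t += stepMs;
    if (t > range.end.getTime()) clearInterval(timer);
  }, 100);
}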
I recorded a quick video to show the progress:
These changes are only partially live on the main site today, but they’ll be there by the time most people read this post, I’m sure. :-)
In the above demo I mention a feature that was implemented for the Pier 9 bridge project: the ability to display video from various cameras and have that correlate with the surface shading information. This is particularly interesting for the bridge, where we want to analyse strain data while understanding its context: how many people were walking over it, were they marching in military formation, that kind of thing. At some point we expect to integrate machine learning to analyse the image data and surface clues about what’s going on; for now we’re just focused on getting the video feeds synced with the surface shading.
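The syncing itself is fairly simple in principle. As a hypothetical sketch – the names here are invented, not the real implementation – it amounts to seeking each video to the offset that corresponds to the current timeline position:

// Seek an HTML5 video so its frames line up with the timeline.
// recordingStart is when the camera recording began; timelineTime comes from the timeline.
function syncVideoToTimeline(
  video: HTMLVideoElement,
  recordingStart: Date,
  timelineTime: Date
): void {
  const offsetSeconds = (timelineTime.getTime() - recordingStart.getTime()) / 1000;
  if (offsetSeconds >= 0 && offsetSeconds <= video.duration) {
    video.currentTime = offsetSeconds;  // jump to the matching frame
  } else {
    video.pause();                      // timeline is outside this recording
  }
}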