Last week I talked about an API that we’ve added to Dasher, allowing it to be controlled via URL parameters. I expect this to be useful when integrating Dasher into other systems – as you can control the initial view of data, such as for a particular sensor during a specific time period – but I also expect it to be helpful for people to collaborate. It’s with this in mind that over the long Easter weekend I started building a “sharing” feature into Dasher.
You can think of it as a URL builder, much in the way you can customize embed codes on YouTube. When the sharing feature launches, it queries information about the current session and sets the defaults of the various controls to match, as far as possible, what is going on inside Dasher.
For example, if there’s a sensor plot open, it uses that sensor’s ID as the default sensor to show data for. If a specific level is being shown, it either uses that as the default level to display or – and this is quite clever, even if I do say so myself – sets the level parameter to ‘auto’ if it’s the level the sensor is on. Something similar is done for the surface shading type.
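In code, the ‘auto’ logic boils down to a simple comparison. Here’s a TypeScript sketch of the idea, where the session state and its property names are my own stand-ins rather than Dasher’s actual internals:

```typescript
// Hypothetical stand-ins for Dasher's internal session state (not its real API)
interface SessionState {
  openSensorId?: string; // sensor whose plot is currently open
  currentLevel?: string; // level currently being displayed, if any
  sensorLevel?: string;  // level the open sensor sits on
}

// Default the level parameter to 'auto' when the displayed level is
// simply the one the sensor is on; otherwise use the level itself
function defaultLevelParam(state: SessionState): string | undefined {
  if (!state.currentLevel) return undefined;
  return state.currentLevel === state.sensorLevel ? "auto" : state.currentLevel;
}
```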
It was trivial to get the selection from the timeline and use its bounds. I could have left the date/time string to be edited manually, but it seemed nicer to provide a date/time picker. The only downside is that the standard HTML input element of type ‘datetime-local’ doesn’t support fractions of a minute (seconds, milliseconds, etc.), which could be important for finer-grained data analysis. The compromise was to pair the date/time picker with a standard text input element for seconds.
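For anyone curious, here’s a rough TypeScript sketch of how such a picker/seconds combination might be read back into a single timestamp (the element IDs are illustrative; this isn’t Dasher’s actual code):

```typescript
// Combine a minute-resolution 'datetime-local' picker with a plain text
// input holding the seconds (element IDs are illustrative)
function readTimestamp(pickerId: string, secondsId: string): string {
  const picker = document.getElementById(pickerId) as HTMLInputElement;
  const seconds = document.getElementById(secondsId) as HTMLInputElement;
  const secs = parseFloat(seconds.value) || 0;
  // e.g. "2021-04-06T14:30" + 7.5 -> "2021-04-06T14:30:07.5"
  return `${picker.value}:${secs.toFixed(1).padStart(4, "0")}`;
}
```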
For the padding parameter – which specifies the number of days to show either side of the time selection – I just take the average of the two sides (i.e. if there’s one day to the left and three to the right of the time selection, the padding value will default to 2). The current state of the timeline options – playing, looping and speed – is picked up, of course, as well as the visibility settings for sensors, skeletons and occlusion.
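The calculation itself is trivial. A sketch, assuming the timeline exposes its full extent and the current selection as Date objects (the function and variable names are mine):

```typescript
const MS_PER_DAY = 24 * 60 * 60 * 1000;

// Average the number of days either side of the time selection
function defaultPadding(
  rangeStart: Date, rangeEnd: Date, // full extent of the timeline
  selStart: Date, selEnd: Date      // current time selection
): number {
  const leftDays = (selStart.getTime() - rangeStart.getTime()) / MS_PER_DAY;
  const rightDays = (rangeEnd.getTime() - selEnd.getTime()) / MS_PER_DAY;
  return (leftDays + rightDays) / 2; // e.g. (1 + 3) / 2 = 2
}
```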
The trickiest part was analysing the view options: in addition to zoom and orbit, I added a tilt parameter (if orbit is yaw then tilt is pitch, to put it in aeronautical terms), which can be quite useful for looking down onto a scene (or at it from the front). These are all very useful when integrating into other systems – especially the ‘auto’ mode for orbit, which orbits the view round to be close to a particular sensor, as a controlling system may not have enough geometric information to specify where best to view a scene from. It’s not easy to derive these three settings from an arbitrary view the user has navigated to, though: I did try, but found that comparing with the starting view is non-trivial, especially as level selection zooms in automatically, which changes the camera settings.
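To make the yaw/pitch analogy concrete, here’s how orbit and tilt could place a camera on a sphere around a target point. This is just my sketch of the geometry implied by the parameters, not Dasher’s actual camera code:

```typescript
interface Vec3 { x: number; y: number; z: number; }

// Place the camera on a sphere around the target: orbit is the yaw
// angle, tilt the pitch angle (both in degrees)
function cameraFromView(
  target: Vec3, distance: number, orbitDeg: number, tiltDeg: number
): Vec3 {
  const yaw = (orbitDeg * Math.PI) / 180;
  const pitch = (tiltDeg * Math.PI) / 180;
  return {
    x: target.x + distance * Math.cos(pitch) * Math.sin(yaw),
    y: target.y + distance * Math.cos(pitch) * Math.cos(yaw),
    z: target.z + distance * Math.sin(pitch), // positive tilt looks down
  };
}
```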
In the end I gave up on trying to derive the default view settings, and instead chose to implement a view parameter that encodes the camera’s position and target vectors. This is much more helpful for a sharing feature, as it takes you to the exact view chosen by the user. As it’s “either/or” with the zoom/orbit/tilt parameters, I have the “Capture view” toggle disable those three parameters when it’s enabled. For now I’ve left the zoom/orbit/tilt options as the default mechanism, although I have a feeling this may change once I get user feedback.
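One plausible way to encode such a parameter is to flatten the two vectors into a comma-separated string. The exact format isn’t something I’ll commit to here, so treat this as a sketch:

```typescript
interface Vec3 { x: number; y: number; z: number; }

// Serialize the camera's position and target into a single URL-friendly value
function encodeView(position: Vec3, target: Vec3): string {
  const nums = [
    position.x, position.y, position.z,
    target.x, target.y, target.z,
  ];
  return nums.map((n) => n.toFixed(2)).join(",");
}

// encodeView({ x: 10, y: -5, z: 3 }, { x: 0, y: 0, z: 1.5 })
// -> "10.00,-5.00,3.00,0.00,0.00,1.50"
```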
The last few parameters relate to the layer settings and the theme. I’ve queried and displayed the layer settings, but rather than creating a complex UI around them, for now I’ve just left them to be edited as text. Wherever possible I’ve used dropdowns to let the user pick valid values, whether for the sensor, level, play speed or theme.
Every time you modify a setting, the specific parameter value is previewed and the overall URL updated: if a particular parameter is redundant – because it doesn’t differ from the default setting – it is excluded from the URL. This should help people understand how these sharing URLs are formatted, so in that sense this feature is also a learning tool for the API.
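The redundancy check is a nice pattern in itself: build the query string from only those parameters that differ from their defaults. A minimal sketch, with illustrative parameter names and a placeholder URL:

```typescript
// Include only the parameters that differ from their defaults
function buildShareUrl(
  base: string,
  values: Record<string, string>,
  defaults: Record<string, string>
): string {
  const params = new URLSearchParams();
  for (const [key, value] of Object.entries(values)) {
    if (value !== defaults[key]) params.set(key, value);
  }
  const query = params.toString();
  return query ? `${base}?${query}` : base;
}

// buildShareUrl(
//   "https://dasher.example.com",
//   { sensor: "A123", level: "auto", speed: "1" },
//   { sensor: "",     level: "auto", speed: "1" }
// ) -> "https://dasher.example.com?sensor=A123"
```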
Once the settings have been adjusted, the user simply clicks the button to copy the URL to the clipboard, ready for pasting into an email, etc. I had some fun making the URL and the copy button stay visible, effectively fixing their position at the bottom of the sharing window: I did this by setting the position of the table that contains both the URL and the button to be fixed, and then adjusting its position when the panel is moved or resized.
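The copy itself is a one-liner with the asynchronous Clipboard API, and the pinning is a matter of repositioning the fixed element whenever the panel moves. A sketch of both, with the element handles and geometry being assumptions on my part:

```typescript
// Copy the generated URL to the clipboard (requires a secure context)
async function copyShareUrl(url: string): Promise<void> {
  await navigator.clipboard.writeText(url);
}

// Keep the URL/button row pinned to the bottom of the sharing panel,
// re-running this whenever the panel is moved or resized
function pinUrlRow(panel: HTMLElement, row: HTMLElement): void {
  const rect = panel.getBoundingClientRect();
  row.style.position = "fixed";
  row.style.left = `${rect.left}px`;
  row.style.top = `${rect.bottom - row.offsetHeight}px`;
  row.style.width = `${rect.width}px`;
}
```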
I’m really happy with how the feature has turned out: I actually think it’s going to be genuinely useful for people. We’re still doing some final testing on it, but I hope to be pushing it live (for people to try with the demo model on the Dasher site) sometime next week.
In the meantime, here are a couple of examples of models being loaded and displayed via URL parameters.
The first is highlighting data from a humidity sensor in the NEST building:
The second is of the MX3D bridge, with skeletons and surface shading enabled, and using the zoom, orbit and tilt parameters to get a view across the bridge:
That’s it for today’s post. Tomorrow I’ll hopefully get the chance to post on another topic that’s currently keeping me busy.