In the last post I neglected to remind people of Kiosk Mode, a great way to see some of Dasher’s features. Simply click on the filmstrip button at the bottom of Dasher’s vertical toolbar and move the cursor off to the side (this just stops the button from staying highlighted when the cursor gets hidden). Then just sit back and see how Dasher works.
Here are some posts talking about how this works. Kiosk Mode knows about various Dasher extensions and can automate their usage. There’s a fair amount of complexity in the mechanism, but it works well enough.
Over the weekend I started work on a new capability that leverages some of the work done for Kiosk Mode, this time to expose a set of URL parameters that automate operations when Dasher starts. You might think of it as a Dasher API, but it's not really a full API; it's more a way to automate some initial steps when Dasher loads.
For instance, it might be used to integrate Dasher into a larger system that already knows something about what’s happening in the building: it might have identified a discrepancy or problem in the data coming from a certain sensor (or area) during a specific time period, and want to present this data – contextualised in 3D by Dasher – to the user via a handy link.
It might also be a mechanism that’s used by someone to share a view on some specific data with a colleague… perhaps they’ve found something interesting and want to get a second opinion on it.
Anyway, I’ve started working on the automation mechanism, and have fleshed out the following parameters/operations:
- basId (string)
  - The name of the sensor to display data for (we can extend this to support a list of sensors, if needed)
- level (number or 'auto')
  - The level to display; 'auto' displays the level of the specified sensor
- shade (string or 'auto')
  - The sensor type for which to display surface shading data; 'auto' uses the type of the specified sensor
- start (date string)
  - Start time/date for the time selection
- end (date string)
  - End time/date for the time selection
- padding (number)
  - Amount of padding (in days) on either side of the time selection
- loop (bool)
  - Whether to loop the display of data
- play (bool)
  - Whether to start playing the data
- speed (number)
  - The speed at which to play the data (1X, 2X, 4X, 8X or 16X)
- sensors (bool)
  - Whether to display the sensor locations
- occlusion (bool)
  - Whether to enable sensor occlusion by geometry
- skeletons (bool)
  - Whether to enable the skeleton extension
- zoom (number)
  - A zoom factor for the model display
- orbit (number in degrees, or 'auto')
  - An orbit angle; 'auto' calculates the angle based on the sensor's position
- layers (string)
  - A string containing layer names and values, separated by |
  - Values are 0 to 3 for Off, On, Highlight and X-Ray, respectively
  - e.g. 'Architectural|1|Mechanical|3' sets Architectural to On and Mechanical to X-Ray
- theme (string)
  - darkgray, darkblue or lightgray
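As an aside, the layers parameter is the only one with a compound format, so here's a quick sketch of how a string like the one above might be decoded. This is purely illustrative – the function name and the returned structure are my own, not Dasher's internals:

```javascript
// Map the 0-3 values described above to their layer states
const LAYER_STATES = ['Off', 'On', 'Highlight', 'X-Ray'];

// Parse a layers string such as 'Architectural|1|Mechanical|3'
// into an object mapping layer names to state names
function parseLayers(param) {
  const tokens = param.split('|');
  const layers = {};
  for (let i = 0; i + 1 < tokens.length; i += 2) {
    const name = tokens[i];
    const value = Number(tokens[i + 1]);
    if (value >= 0 && value < LAYER_STATES.length) {
      layers[name] = LAYER_STATES[value];
    }
  }
  return layers;
}

console.log(parseLayers('Architectural|1|Mechanical|3'));
// { Architectural: 'On', Mechanical: 'X-Ray' }
```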
You – or the Dasher site, once I flesh out a URL builder feature – would build a URL such as the following (this is currently non-operational, as I haven't yet pushed the capability live on dasher360.com):
Breaking it down, this URL will launch Dasher and then:
- load the graph for the sensor with the ID "default_421120005"
- isolate the geometry on the level the sensor is on
- launch surface shading for the type of the sensor (in this case CO2)
- display the sensor locations
- play the timeline
- set the timeline to loop/repeat
- set the start of the time selection to be midnight on April 1st, 2020
- set the end of the time selection to be midnight on April 3rd, 2020
- show a 1/2 day padding on either side of our time selection
- set the playback speed to 4X
- set the theme to “dark blue”
- zoom in to the model with a factor of 1.2
- orbit the model automatically to get close to the sensor location
- set the Architectural and Mechanical layers to “on”
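A URL performing the steps above could be assembled along these lines – a hypothetical sketch using the standard URLSearchParams API, where the base URL and the exact value encodings (date format, bool spelling) are my assumptions rather than the final implementation:

```javascript
// Assemble the query string for the scenario described above.
// Parameter names come from the list earlier in the post;
// the value formats are assumed for illustration.
const params = new URLSearchParams({
  basId: 'default_421120005',
  level: 'auto',
  shade: 'auto',
  sensors: 'true',
  play: 'true',
  loop: 'true',
  start: '2020-04-01',
  end: '2020-04-03',
  padding: '0.5',
  speed: '4',
  theme: 'darkblue',
  zoom: '1.2',
  orbit: 'auto',
  layers: 'Architectural|1|Mechanical|1'
});

const url = 'https://dasher360.com/?' + params.toString();
console.log(url);
```

Note that URLSearchParams percent-encodes the | separators in the layers value, which is harmless: they decode back to | when the parameters are read on the other end.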
Here’s a quick video of how this kind of URL will load a Dasher model and specify these various visualization options:
I think I've covered the main parameters that will be of use to people, although I'm sure additional usage scenarios will come up over time. For instance, I can certainly imagine people wanting multiple sensor plots to be loaded (and perhaps we'll have the idea of a primary one which drives the various "auto" parameters).
If any additional parameters occurred to you as you were reading this – or you have thoughts on other ways this might be useful – then please post a comment!