I talked recently about a Dynamo view extension I created to capture presentation graphics (primarily animated GIFs) from the entries in the “hall of fame” of a particular Refinery study. This has been super-useful internally, and not just for creating fancy graphics.
Early on we realised that it was a great way to identify problems in Dynamo graphs: we could see the occasional error state (with the yellow text at the bottom left of the screen) flash past, indicating that a Dynamo run had completed with warnings and/or errors. Presumably the errors hadn’t stopped the metrics from being evaluated, and so the design still got added to the hall of fame.
When using the previous version of the tool to track down errors, we had to jump through some hoops: first we’d find a task that caused errors inside the animated GIF or the stored single-image frames, then use the metric values displayed in the graph to locate that task in the Refinery study – usually by plotting two of the metrics in the scatterplot. Once we’d selected and run the task in Dynamo, we could see the errors and troubleshoot them. A big pain, though.
So I decided to copy some code from Warnamo to help detect when an error has occurred during a run. We then flag that design as having errors, so that we can siphon these problematic designs off at the end and put them in their own set of “hall of shame” GIFs. :-)
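In case it’s useful, here’s a minimal sketch of the kind of post-run check involved – loosely based on the approach Warnamo takes. I’m assuming the usual Dynamo view extension surface here (IWorkspaceModel.Nodes and the ElementState enum); the HasErrors helper name is just for illustration.

```csharp
using System.Linq;
using Dynamo.Graph.Nodes;
using Dynamo.Graph.Workspaces;

public static class ErrorDetection
{
    // Returns true if any node in the graph finished the run in a
    // warning or error state.
    public static bool HasErrors(IWorkspaceModel workspace)
    {
        return workspace.Nodes.Any(
            n => n.State == ElementState.Warning ||
                 n.State == ElementState.PersistentWarning ||
                 n.State == ElementState.Error);
    }
}
```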
This was super-helpful… especially as we started tagging the JPGs captured for individual designs with a “-error” suffix. But it didn’t solve the pain of tracking them down in Refinery.
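The tagging itself is trivial – something along these lines, where only the “-error” suffix matches what the tool actually does and the base file name is made up for the example:

```csharp
// Hypothetical naming helper: the "-error" suffix is the real
// convention; the "design-{index}" base name is illustrative.
public static string FrameFileName(int taskIndex, bool hasErrors)
{
    var suffix = hasErrors ? "-error" : string.Empty;
    return $"design-{taskIndex}{suffix}.jpg";
}
```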
The next step was to pull out a subset of the designs – the ones that caused errors – and create a new RefineryResults.json (inside a new folder named after the original study but with a “-errors” suffix) that could be read and executed by Capturefinery.
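For the curious, here’s roughly what that siphoning step might look like. I’m assuming a simplified RefineryResults.json schema – a top-level “solutions” array whose entries carry an “id” – purely for illustration; the real file’s layout may well differ.

```csharp
using System.Collections.Generic;
using System.IO;
using System.Linq;
using Newtonsoft.Json.Linq;

public static class ErrorStudyWriter
{
    public static void WriteErrorStudy(string studyFolder, ISet<string> erroredIds)
    {
        var source = Path.Combine(studyFolder, "RefineryResults.json");
        var json = JObject.Parse(File.ReadAllText(source));

        // Keep only the solutions we flagged as having errors
        // ("solutions" and "id" are assumed names, not the real schema).
        var solutions = (JArray)json["solutions"];
        json["solutions"] = new JArray(
            solutions.Where(s => erroredIds.Contains((string)s["id"])));

        // Write the subset to a "-errors" sibling folder that
        // Capturefinery can then read like any other study.
        var errorFolder = studyFolder + "-errors";
        Directory.CreateDirectory(errorFolder);
        File.WriteAllText(
            Path.Combine(errorFolder, "RefineryResults.json"), json.ToString());
    }
}
```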
Again, a useful step, but if you ran it in Capturefinery it would run through the full set of designs and leave you with the last one’s input settings inside Dynamo. The ideal would be some UI that let you run (and capture) a specific design, so the tool now lets you select the start index and the number of tasks to capture from a particular study. This is actually super-helpful: you can capture a solution set in chunks, or evaluate (and debug) a single design. Very handy for tracking down and fixing problems, especially when you use it in conjunction with Warnamo to zoom in on the errors in a complex graph.
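The chunking logic behind that UI boils down to a simple slice of the task list – something like this sketch, where all the names are illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class ChunkRunner
{
    // Runs just the slice selected in the UI: startIndex and taskCount
    // come from the dialog; runAndCapture stands in for executing one
    // design and capturing its graphics.
    public static void CaptureChunk(
        IReadOnlyList<string> taskIds, int startIndex, int taskCount,
        Action<string> runAndCapture)
    {
        foreach (var id in taskIds.Skip(startIndex).Take(taskCount))
        {
            runAndCapture(id);
        }
    }
}
```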
Here’s the current version of the tool in action. It shows the list of studies – including the “-errors” studies the tool created previously when it came across problems – and displays the number of designs in each, before you commit to capturing graphics for designs that could lock your machine up for hours.
(This is a really crappy-looking GIF, but I didn’t want to embed a 15MB file in this post. If you’d like to see a higher-resolution version, there’s one here.)
The results from Capturefinery are much as we saw in the previous version, although you now get more GIFs (i.e. there’s a new set for the designs that completed with errors).
I think this tool is now getting closer to being publishable via the package manager: most of my hesitation was down to it not having a very developed UX, but this update feels like a substantial improvement. I’ll see if I can get it up there during the coming weeks.
I’m heading back across to FHNW this afternoon to give another evening lecture (following this one from earlier in the year) on Generative Design for the Masters in Digital Building program. This time I have more examples to show – including the campus layout example and the Project Rediscover graph – so I think it should be a good session.