As a follow-up to this recent post, I decided to add a couple of metrics to the previous graph and try it out with Refinery. And here’s the updated graph for you to try for yourselves.
The first metric relates to the pathfinding capability, and I’ve called it “Access”. There are a few things going on here: we certainly want to measure the shortest average path, but also to disqualify (or at least penalise) designs that don’t have paths to all of our points of interest (i.e. the corners). I also wanted to weight the results to favour a balanced set of paths: designs that have the focal point near one corner would have at least one really short path along with other much longer paths.
The first thing to check was the number of paths: if it’s not the same as the number of corners, we set the metric’s value to be the diagonal distance across the bounding box. This should be high enough to disqualify the result from our optimisation.
Assuming we have the right number of paths, we take their mean length and add the standard deviation (done in the Python node shown above) to get a value to be minimised: we want shorter paths on average, but also paths that are more equal in length. It might make sense to weight these two terms, but initially it seemed enough just to add them together.
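The logic of the two previous paragraphs can be sketched as a small function. This is an illustrative sketch, not the actual contents of the Python node: the function name and parameters are hypothetical, and I’m assuming the population standard deviation (the choice of sample vs. population standard deviation isn’t specified in the post).

```python
import statistics


def access_metric(path_lengths, corner_count, bbox_width, bbox_height):
    """Access metric: lower is better.

    If any corner is unreachable (fewer paths than corners), return the
    bounding-box diagonal as a penalty high enough to disqualify the
    design from the optimisation. Otherwise return the mean path length
    plus the standard deviation, so that shorter paths and a more
    balanced set of paths both improve the score.
    """
    if len(path_lengths) != corner_count:
        # Diagonal distance across the bounding box.
        return (bbox_width ** 2 + bbox_height ** 2) ** 0.5
    return statistics.mean(path_lengths) + statistics.pstdev(path_lengths)
```

For example, a design with four equal-length paths scores just their common length, while an unbalanced set of the same mean length scores worse because of the standard deviation term.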
The second metric relates to “Visibility”.
In this case it’s much simpler: we just take the average visibility value from the whole grid – which will be between 0 and 1 – and multiply it by 10. The average is performed twice – as we have a list of lists – but we could also just flatten the list of values before calling average. The result should be basically the same (there may be the tiniest difference due to floating point rounding of operations done in a different order).
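Here’s a sketch of that calculation, again with hypothetical names rather than the node’s actual code. It shows both the average-of-averages approach and the flattened version; on a regular grid (every row the same length) the two agree up to floating-point rounding.

```python
def visibility_metric(grid):
    """Visibility metric: higher is better.

    `grid` is a list of lists of visibility values between 0 and 1.
    We average each row, average the row means, and scale by 10.
    """
    row_means = [sum(row) / len(row) for row in grid]
    return 10 * sum(row_means) / len(row_means)


def visibility_metric_flat(grid):
    """Same result, computed by flattening the grid first.

    Equivalent to the nested average when all rows have the same
    length; only the order of floating-point operations differs.
    """
    flat = [value for row in grid for value in row]
    return 10 * sum(flat) / len(flat)
```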
The idea is to maximise visibility: we ideally want a focal point where we can see around us to a large degree. We could have checked visibility in all directions, to make sure there’s at least some visibility of each of the four sides – but that’s been left as an exercise for the reader. :-)
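For anyone tempted by that exercise, here’s one possible interpretation as a sketch: check that each edge of the visibility grid contains at least one cell above some threshold. This is purely hypothetical (the function, parameters, and interpretation are mine, not part of the graph), and assumes a rectangular grid of 0–1 visibility values.

```python
def sees_all_four_sides(grid, threshold=0.0):
    """Hypothetical check: does each of the four sides of the grid
    have at least one cell with visibility above `threshold`?

    `grid` is a rectangular list of lists of 0-1 visibility values.
    """
    top = grid[0]
    bottom = grid[-1]
    left = [row[0] for row in grid]
    right = [row[-1] for row in grid]
    return all(
        any(value > threshold for value in side)
        for side in (top, bottom, left, right)
    )
```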
Here’s the complete graph:
And again with the graphics displayed:
This is a relatively simple graph for Refinery to optimise – it just has a single input parameter and two metrics – but it’s still interesting to explore the results. I did an optimisation run with 10 generations of a population of 48 candidate solutions.
In the Explorer view, we’re mapping the two output metrics to X and Y, and using the input parameter for both size and colour. In the design grid you can see the pathfinding results but not the visibility: coloured surfaces do not get captured.
Here’s the graph set to run automatically as we click different results inside the scatterplot: each click runs the graph with the input(s) of the selected design option, side by side with the Explorer view.
The top part of the “snake” – in different shades of blue – can be discarded: while these options score highly on visibility, their focal point is too far towards one corner. The other options are more interesting, and the one you’d ultimately select is a matter of choice (given that we’re not evaluating the design in other ways).
Hopefully this is helpful for getting started with the Space Analysis package in your generative design workflows. Please keep the feedback coming on where you want us to take it!