It’s time to talk about a little project I started over the holiday break: connecting Project Refinery – the optimization engine for Dynamo that will help drive Generative Design workflows – to Virtual Reality. It’s a project I’ve been thinking about for some time now, and it was originally inspired by two things. The first was the workflow Van Wijnen uses Refinery for, which makes heavy use of VR for visualization – to evaluate designs with internal stakeholders – but right now they export the geometry from Dynamo to Revit and then use Enscape to visualize the scenes. It’s not a complicated process, but it takes a little time for each iteration.
The second inspiration came when I realised that Refinery had started capturing the 3D geometry for each solution, as you can see here in the Design Grid. Each of the graphical thumbnails below is actually a full 3D view you can zoom and rotate.
Digging around, I realised that the captured geometry was stored on the local file-system (currently under %appdata%/Refinery/geometry) in JSON format. This got me thinking… “what if I created a simple Unity scene that allowed you to load in the JSON for a particular view, and then display it in VR?”. So I started digging a little further, and then realised that the JSON format was actually a capture of mesh geometry for display via Three.js. This could, of course, be loaded into Unity and made sense of there, but why not just use WebVR (or WebXR, the newer AR- and MR-capable version) – which already integrates with Three.js – to display it on a variety of devices?
So it was that in the New Year I got started building a system to connect Refinery to VR via WebVR. I started by taking the content of one of the JSON files and building a web-page around it: it took a little work to borrow some logic from Refinery’s built-in 3D viewer to deserialize the JSON data into Three.js objects, but I managed to get a working view of a single solution.
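To give a flavour of what’s involved – and assuming the payload follows the standard Three.js JSON object format, which Refinery’s actual schema may well deviate from (the logic borrowed from its viewer is the real reference) – here’s a minimal sketch of that deserialization step:

```javascript
// Minimal sketch: deserialize a captured solution into a Three.js scene.
// Assumption: the payload uses the standard Three.js JSON object format;
// Refinery's own viewer logic is the authoritative reference.
const loader = new THREE.ObjectLoader();

function addSolutionToScene(json, scene) {
  const solution = loader.parse(json); // rebuilds meshes, materials, etc.
  scene.add(solution);
}
```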
The next step was to serve this JSON data up so that it could be controlled centrally (from the person running Refinery and browsing the results). For a while I went down a rabbit hole of spinning up a local web-server on the system running Refinery – the JSON files were stored on the local file-system, so why not use Mongoose or Python to serve up the JSON files, right? This worked, but once I started trying to work out how to map the ID used by Refinery for each solution to the JSON file on disk, I realised I could just use the Refinery server directly: there’s a simple REST endpoint that allows you to query the geometry for a specific ID (this is currently hosted on the local system running Refinery at http://localhost:8000/o2/v1/geometry/{id}, although at some point it will end up somewhere in the cloud). Perfect!
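Querying that endpoint from the page is then straightforward – here’s a sketch, with the host made configurable (the server won’t be on localhost from the client’s perspective) and the response assumed to be the raw JSON geometry payload:

```javascript
// Fetch the geometry for a given solution ID from the Refinery server.
// Assumption: the endpoint returns the JSON geometry payload directly.
async function fetchGeometry(host, id) {
  const response = await fetch(`http://${host}:8000/o2/v1/geometry/${id}`);
  if (!response.ok) {
    throw new Error(`Geometry request failed: ${response.status}`);
  }
  return response.json();
}
```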
This also simplified the communication of the geometry data to the various connected client web-pages: rather than loading the geometry and pushing it out to the various clients via WebSockets or WebRTC (which would either take some server infrastructure or some investment in making all the peer-to-peer plumbing work), I could just publish the ID of a particular result – which is basically a GUID – to a public location for the clients to query. When the clients receive a new ID (something they poll for), they can then query the Refinery server directly to get the geometry for that ID.
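The client side of this is just a polling loop. Here’s a sketch, re-using the fetchGeometry function from the earlier snippet, with the published location left as a placeholder (more on that below):

```javascript
// Poll the public location for the latest solution ID; when it changes,
// query the Refinery server for the geometry and rebuild the scene.
// RELAY_URL is a placeholder for wherever the ID gets published.
const RELAY_URL = 'https://example.com/refinery-channel'; // placeholder
let currentId = null;

async function pollForUpdates(scene, host) {
  try {
    const id = (await (await fetch(RELAY_URL)).text()).trim();
    if (id && id !== currentId) {
      currentId = id;
      const json = await fetchGeometry(host, id);
      addSolutionToScene(json, scene); // clearing the old solution omitted
    }
  } catch (err) {
    console.warn('Polling failed, will retry:', err);
  }
  setTimeout(() => pollForUpdates(scene, host), 2000); // poll every 2s
}
```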
This is actually quite elegant, privacy-wise: there’s absolutely nothing of interest about the ID itself, so it can be placed anywhere on the web. It’s the geometry associated with the ID that’s potentially sensitive, and the clients access this locally from the Refinery server: if they’re not able to access the server on their local network, they won’t get the data.
I ended up using http://httprelay.io to store the IDs centrally: you can generate a new channel for each “customer” of the system, which is OK for testing/research purposes. I hacked the Refinery client to push the ID for a particular result to this service when it was double-clicked in the UI; the various client pages pull it down from there and then query the geometry directly from the Refinery server. As each page polls for updates to the key, it can refresh its geometry whenever a new ID gets pushed, much in the same way that Vrok-It allows you to have multiple clients visualizing a common model via Forge (with a presenter managing the session and choosing which models to load and what operations to perform on them).
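The publishing side amounts to a single HTTP request. I should hedge on the exact httprelay.io semantics here – the channel URL and method below are illustrative, and the service’s docs are the real reference – but the shape of it is a POST to publish and a GET to retrieve:

```javascript
// Publish the selected solution's ID to a relay channel so connected
// pages can pick it up. The channel URL below is illustrative – check
// httprelay.io's docs for the actual channel/method semantics.
const CHANNEL_URL = 'https://demo.httprelay.io/mcast/my-refinery-channel';

async function publishSolutionId(id) {
  await fetch(CHANNEL_URL, { method: 'POST', body: id });
}
```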
For now I’ve tested the solution with simple non-VR web-clients – it works fine on both desktop and mobile browsers – as well as mobile VR clients (Google Cardboard and Gear VR) and tethered VR clients (HTC Vive). I built some rudimentary teleportation into the page, which is handy for exploring the model.
A huge thanks to Damon Hernandez from Samsung Research – who many of you will know as one of the founders of the AEC Hackathons – for giving me his personal Gear VR at Berlin’s AEC Hackathon: it’s largely thanks to this that I was able to make this project work well for mobile VR (with the very neat Gear VR controller). It’s really cool to be able to test the page on a browser on my PC, then on a Gear VR (still at my desk) before going down to my basement and loading the same page inside Firefox connected to the HTC Vive, all of which get updated with fresh geometry when a new solution is selected inside Refinery.
Here’s the loading view on the Gear VR: you can see how the page prompts for the IP or hostname of the system running Refinery, as it needs this to connect to the geometry service. The screen recording doesn’t follow into WebVR mode, though, so I’ve captured some footage via the HTC Vive for the other features.
Here you can see the visualization of a residential layout, with the two Vive controllers represented.
If you pull the trigger on a controller you teleport to the location you point at (or to the ground beneath it)…
Unless you happen to point at the sky, at which point it toggles between day and night.
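For the curious, here’s a rough sketch of how this kind of trigger behaviour can be implemented with a Three.js raycaster – the names and structure are illustrative rather than the actual page’s code:

```javascript
// Illustrative teleport-on-trigger sketch (not the actual page's code).
// Cast a ray from the controller; if it hits the ground, move the camera
// rig there; if it hits nothing (i.e. the sky), toggle day/night instead.
const raycaster = new THREE.Raycaster();
const tempMatrix = new THREE.Matrix4();

function onTriggerPulled(controller, ground, cameraRig, toggleDayNight) {
  tempMatrix.identity().extractRotation(controller.matrixWorld);
  raycaster.ray.origin.setFromMatrixPosition(controller.matrixWorld);
  raycaster.ray.direction.set(0, 0, -1).applyMatrix4(tempMatrix);

  const hits = raycaster.intersectObject(ground);
  if (hits.length > 0) {
    // Keep the rig's height; move it to the point hit on the ground.
    cameraRig.position.set(hits[0].point.x, cameraRig.position.y, hits[0].point.z);
  } else {
    toggleDayNight(); // pointing at the sky toggles day/night
  }
}
```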
What’s even more compelling about this approach is that it’s just a matter of time before WebXR (which I’ve also built a version for, even if it’s not quite ready for prime time) enables the display of these models in Mixed and Augmented Reality views: at some point it should be possible to use phone-based AR – with a “magic window” visualization approach – or proper MR via devices such as the HoloLens to integrate the 3D geometry into a more collaborative, shared experience.
Next steps for the project are to add some UI elements to display the goal scores for the current result, along with some controls to allow navigation between results (assuming you don’t want someone outside the VR experience managing it for you). It’ll be interesting to see how it goes!