I’m happy to report that a HoloLens device has arrived in the Neuchâtel office. I’m meant to be on holiday next week, but as the kids are signed up for various fun activities of their own, I foresee some amount of fooling around with HoloLens in my immediate future. :-)
I was pleased to see a few HoloLens devices floating around at the recent Forge DevCon. To some degree this was inevitable – one of our speakers, Dona Sarkar, comes from the HoloLens team and also participated in the SFVR Meetup at the DevCon – but a nice surprise was that my friends at hsbcad had one on display in the exhibition area.
Somewhat frustratingly, I didn’t find time to try the demo myself – the conference was just too hectic and I had too much going on – but I did reach out to Kris Riemslagh at hsbcad to understand a bit more about what they’d done, and to see whether I could give it a try now that I have access to a device.
Right now the application is still a prototype – although the plan is to make it available to users of the hsbshare service in due course – so it’s not practical to get it working here just yet. But there is a nice demo video that shows it in action:
Kris gave some additional insight into what they’ve done. Interestingly, they’ve used the Forge platform – specifically the translation service that is now part of the Model Derivative API and generates geometry for the Viewer – to create the 3D geometry that gets displayed by the HoloLens application.
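To make that step a little more concrete, kicking off a translation is a single POST to the Model Derivative API’s job endpoint. Here’s a minimal TypeScript sketch of that call – the access token and URN are placeholders, and the earlier steps of authenticating and uploading the source file to OSS aren’t shown:

```typescript
// Minimal sketch: submitting a translation job to the Forge Model Derivative API.
// ACCESS_TOKEN and MODEL_URN are placeholders – obtaining a 2-legged OAuth token
// and uploading the source file to OSS are not covered here.

const ACCESS_TOKEN = '<2-legged OAuth token with the appropriate data scopes>';
const MODEL_URN = '<base64-encoded URN of the uploaded source file>';

async function translateToSvf(): Promise<void> {
  const response = await fetch(
    'https://developer.api.autodesk.com/modelderivative/v2/designdata/job',
    {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${ACCESS_TOKEN}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        input: { urn: MODEL_URN },
        // Ask for the SVF derivative the Viewer consumes
        output: { formats: [{ type: 'svf', views: ['3d'] }] },
      }),
    }
  );
  if (!response.ok) {
    throw new Error(`Translation job failed: ${response.status}`);
  }
  console.log('Translation job submitted:', await response.json());
}

translateToSvf().catch(console.error);
```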
In Kris’ own words (with a few minor edits from my side to clarify terminology):
We added a method in our hsbshare service to grab the geometry + materials that are present in the Viewer, and create a file on the node server from it. The file contains whatever is turned on in the Viewer, along with any manipulations that have been made to the model. So we do not use the URN directly, but rather the geometry as it has been manipulated in the Viewer.
On HoloLens we run an application developed in Unity. In this app we download the file from our server, then scale and move the objects onto our virtual “table”, the white rectangular shape in the video.
We added a number of voice commands for “scale up”, “scale down”, “reload model”…
As in the Microsoft 101 or 102 demo app, the table can be placed on top of anything in the real world.
We added a number of tiles (currently just plain white). When you air-tap on them, some other commands kick in: start, stop and reset exploding, and also add and remove physics, which makes all the pieces drop onto the floor (as a gag).
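hsbcad haven’t shared the code behind the export step Kris describes, but just to make the idea more tangible, here’s a purely hypothetical TypeScript sketch of capturing the per-fragment transforms currently shown in the Viewer and posting them to a node server. The /api/snapshots endpoint is my own invention, the FragmentList calls live in the Viewer’s Autodesk.Viewing.Private namespace and may differ between Viewer versions, and a real implementation would also need to serialize the actual geometry buffers and materials and skip hidden elements:

```typescript
// Hypothetical sketch only – not hsbcad's actual code.
// Assumes a page with a loaded Forge Viewer instance and a node server
// exposing a (made-up) POST /api/snapshots endpoint.

declare const viewer: any; // a loaded Autodesk.Viewing.Viewer3D instance
declare const THREE: any;  // three.js, bundled with the Viewer

interface FragmentSnapshot {
  fragId: number;
  worldMatrix: number[]; // flattened 4x4 transform, including Viewer manipulations
}

function captureFragmentTransforms(): FragmentSnapshot[] {
  const fragList = viewer.model.getFragmentList();
  const matrix = new THREE.Matrix4();
  const snapshot: FragmentSnapshot[] = [];

  for (let fragId = 0; fragId < fragList.getCount(); fragId++) {
    // The world matrix reflects whatever manipulation was done in the Viewer
    fragList.getWorldMatrix(fragId, matrix);
    snapshot.push({ fragId, worldMatrix: matrix.toArray() });
  }
  return snapshot;
}

// Ship the captured state to the node server, where a file can be written
// for the HoloLens app to download later
async function uploadSnapshot(): Promise<void> {
  const response = await fetch('/api/snapshots', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(captureFragmentTransforms()),
  });
  if (!response.ok) {
    throw new Error(`Upload failed: ${response.status}`);
  }
}
```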
I was curious about whether the app could work with additional models, beyond the truck and house used in the demo. Apparently it can work today with any model that’s been translated for viewing via Forge, which is extremely interesting. For instance:
The truck with panels is an agglomeration of 2 models (URNs): the truck + a small house. The panels of the house are then moved inside the Viewer to stack on the truck. The video shows a small animation to restore them to their original position. But the model sent to HoloLens is the one with the panels on the truck.
I can see that Kris and team have put quite a bit of thought and effort into this prototype. I’m looking forward to trying it firsthand at some point. I’m also looking forward to sharing my own experiments with the HoloLens platform, once I’ve managed to do something myself that’s worth sharing.