One piece of feedback I received from internal folk on the prototype VR app I developed for Google Cardboard (and later extended with voice recognition) was that "it'd be really cool to add ViewCube-like navigation commands".
This basically meant adding "front", "back", "left", "right", "top" & "bottom" to the list of voice commands recognised by annyang and hooking them up to a function that changes the view accordingly. The main complication is that some models come in with "Z up" while the majority have "Y up". Hopefully none will come in with "X up", an eventuality I so far haven't planned for. :-)
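In case it's useful, here's a rough sketch of how that hookup might look, assuming three.js and annyang are already loaded on the page. annyang's addCommands() and start() are the library's real API; the applyView() helper, the camDistance variable and the model.zUp flag are hypothetical stand-ins for the viewer's own code.

```javascript
// Camera directions for each named view when the model is "Y up"
var viewsYUp = {
  front:  new THREE.Vector3( 0,  0,  1),
  back:   new THREE.Vector3( 0,  0, -1),
  left:   new THREE.Vector3(-1,  0,  0),
  right:  new THREE.Vector3( 1,  0,  0),
  top:    new THREE.Vector3( 0,  1,  0),
  bottom: new THREE.Vector3( 0, -1,  0)
};

// For "Z up" models, swap the Y and Z components of the view direction
function viewFor(name, zUp) {
  var v = viewsYUp[name].clone();
  return zUp ? new THREE.Vector3(v.x, v.z, v.y) : v;
}

// Move the camera to the requested view, keeping the up axis consistent
function applyView(name) {
  var dir = viewFor(name, model.zUp); // model.zUp is an assumed flag
  camera.position.copy(dir.multiplyScalar(camDistance));
  camera.up.set(0, model.zUp ? 0 : 1, model.zUp ? 1 : 0);
  camera.lookAt(scene.position);
}

// Register one voice command per face and start listening
var commands = {};
['front', 'back', 'left', 'right', 'top', 'bottom'].forEach(function (name) {
  commands[name] = function () { applyView(name); };
});
annyang.addCommands(commands);
annyang.start();
```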
I also fixed a bug that meant the camera's up direction would flip when zooming in or out, causing the orientation to change. The overall experience is pretty stable at this stage.
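For reference, one way to avoid that kind of flip (a sketch only, not necessarily the exact fix I used) is to zoom by scaling the camera position along the line to the target while leaving camera.up alone, so lookAt() never has to guess a new orientation:

```javascript
// Zoom by moving the camera along the line to the target; because
// camera.up is never modified, lookAt() keeps a stable orientation
function zoomBy(factor) {
  var offset = camera.position.clone().sub(scene.position);
  camera.position.copy(scene.position).add(offset.multiplyScalar(factor));
  camera.lookAt(scene.position);
}
```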
Here's a quick recording of the viewer in action. I managed to record this directly on my phone using adb, which was pretty cool. The only downside was that I had to record the voice separately on my PC and combine the two tracks afterwards in Camtasia. It turns out the browser's voice recognition competes with any local voice recording, in any case: you hear a stream of beeps as the two ping-pong back and forth, and no voice commands work. So this ended up being the best approach available.
The video ends a bit abruptly: the recording stopped at exactly 3 minutes, so I ended up truncating it a bit more than expected. Nothing I said afterwards was of particular importance, in any case.
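If you want to try the same approach, Android's screenrecord command (run via adb) is the usual way to capture the screen like this, and its documented default time limit of 3 minutes would explain the hard stop:

```
adb shell screenrecord /sdcard/demo.mp4   # stops after 3 minutes by default
adb pull /sdcard/demo.mp4                 # copy the recording to your PC
```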
Here’s a link to the updated HTML page along with the accompanying JavaScript file.
The ADN team is busy demoing this – along with other samples, of course – at their annual Developer Days around the world. I'm really looking forward to catching up with them at the DevDay in Las Vegas and experiencing hundreds of developers giving this a try at once, hopefully with simultaneous voice commands. Should be quite something! :-)