I mentioned this in last Friday’s post: after building an Android app to bring our web-based VR samples to Gear VR, it made sense to do the same for Google Cardboard, for three reasons:
- Most importantly, I wanted to see what the additional capabilities of the Android SDK would bring to the web-based VR samples, particularly around the magnetic trigger button.
- Until the Note 4 gets its Lollipop update in “early 2015” – and WebViews support WebGL – there isn’t much more to do with Gear VR. I’ve completed the plumbing but am waiting for the toilet to arrive. OK, bad analogy. :-) My Nexus 4, on the other hand, is running Android Lollipop, so at least that’s one way to see how the web samples work when loaded inside a WebView.
- The supported development environment for Google Cardboard, these days, is Android Studio. After wrestling with Eclipse to get my Gear VR app built using the Oculus Mobile SDK, I was keen to give Android Studio a try.
The Cardboard SDK for Android is really easy to include in your Android Studio project. I started by cloning the primary sample from GitHub and importing it into Android Studio. Once I had it working on my device, I created a project from scratch, added the libs and copied across big chunks of the main activity.
As we’re doing the stereo rendering of the model via an embedded web-page, we’re primarily using the Cardboard SDK for information on when the magnetic trigger on the side of the device gets pulled (something that we couldn’t get from HTML).
It would have been great to have information about the speed or duration of the pull: you really only get told that it was pulled. But that’s fair enough… in the Java code below we make do with what we have by implementing some basic “double-click” logic:
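(What follows is a minimal sketch of that logic rather than the full activity from the app: the 500 ms window, the downClicks counter and the two helper methods are placeholders of mine, with the real work of driving the embedded WebView omitted.)

import android.os.Handler;

import com.google.vrtoolkit.cardboard.CardboardActivity;

public class MainActivity extends CardboardActivity {

  // Assumption: two pulls within 500 ms count as a "double-click"
  private static final long DOUBLE_CLICK_MS = 500;

  private final Handler handler = new Handler();
  private Runnable pendingSingleClick;
  private int downClicks = 0; // running count, later passed back to the page

  @Override
  public void onCardboardTrigger() {
    if (pendingSingleClick != null) {
      // A second pull arrived in time: cancel the pending single-click
      // and treat the pair as a double-click
      handler.removeCallbacks(pendingSingleClick);
      pendingSingleClick = null;
      openSelectedModel();
    } else {
      // First pull: wait briefly before committing to a single-click
      pendingSingleClick = new Runnable() {
        @Override
        public void run() {
          pendingSingleClick = null;
          downClicks++;
          advanceSelection();
        }
      };
      handler.postDelayed(pendingSingleClick, DOUBLE_CLICK_MS);
    }
  }

  // Placeholders: in the real app these drive the embedded WebView
  private void advanceSelection() { /* ... */ }

  private void openSelectedModel() { /* ... */ }
}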
Beyond that, we need to make sure the AndroidManifest.xml has the appropriate entries…
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.autodesk.a360cardboard">

  <uses-permission android:name="android.permission.CAMERA" />
  <uses-permission android:name="android.permission.NFC" />
  <uses-permission android:name="android.permission.VIBRATE" />
  <uses-permission android:name="android.permission.INTERNET" />

  <application
      android:allowBackup="true"
      android:icon="@drawable/ic_launcher"
      android:label="@string/app_name"
      android:theme="@style/AppTheme">
    <activity
        android:name=".MainActivity"
        android:label="@string/app_name"
        android:launchMode="singleTask"
        android:screenOrientation="landscape"
        android:configChanges="orientation|keyboardHidden|keyboard">
      <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
        <category android:name="com.google.intent.category.CARDBOARD" />
      </intent-filter>
      <intent-filter>
        <action android:name="android.nfc.action.NDEF_DISCOVERED" />
        <category android:name="android.intent.category.DEFAULT" />
        <data android:mimeType="application/com.autodesk.a360cardboard" />
      </intent-filter>
    </activity>
  </application>
</manifest>
The interface itself is fairly rudimentary: it uses the same HTML/JavaScript as the Gear VR sample, but advances the selection when a single pull on the magnetic trigger is detected and opens the selected model when a double-pull is detected. A trigger-pull from within a model goes back to the main list by reloading the page. To restore the selection in the list, we pass through the number of “down” clicks we’ve counted since the app was loaded, and the JavaScript takes the modulo of that count to determine which item to select. A little crude, but it avoids having the JavaScript call back into our Java code.
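To illustrate the hand-off, here’s roughly what the reload might look like on the Java side – the asset URL, query-parameter name and helper method are hypothetical, not lifted from the actual app:

  // Hypothetical helper: reload the list page, passing the running count of
  // trigger pulls so the page can restore its selection
  private void reloadListPage() {
    webView.loadUrl("file:///android_asset/index.html?clicks=" + downClicks);
  }

On the page, the JavaScript can then do something like clicks % itemCount to work out which item to highlight.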
Overall it works pretty well. The performance of the embedded WebView seems as good as with the web-based samples in Chrome for Android: they’ve done a good job of making sure the container itself doesn’t add overhead. Plus you get the benefits of being properly full-screen – without the need for some user input, as you have in HTML – and the “always on” is managed for you: no need to go and make sure your screen doesn’t turn off after 30 seconds (my Nexus has it set to 30 minutes, these days, which is good for VR but less good for normal usage).
The double-click takes a bit of practice to get right: at first you often register two single-clicks instead of a double, which means looping back round to select the model you wanted (which is irritating). But it’s fairly usable, given the limited input options we have available.
Speaking of input options: I spent some time trying to work out how to enable speech recognition inside a WebView. This *should* be possible with Lollipop – you can now grant permissions to loaded web pages, as long as the app itself has been granted compatible permissions by the user – but I wasn’t able to get it working. This is still at the bleeding edge, so I’m hopeful it will work at some point.
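For reference, the Lollipop-era mechanism I was experimenting with looks something like this – a sketch only, and as noted I couldn’t get speech recognition working through it. It uses android.webkit.WebChromeClient and android.webkit.PermissionRequest, and assumes the app itself holds android.permission.RECORD_AUDIO:

  // Grant audio capture to pages loaded in the WebView (API 21+),
  // so that a page asking for the microphone isn't silently refused
  webView.setWebChromeClient(new WebChromeClient() {
    @Override
    public void onPermissionRequest(final PermissionRequest request) {
      request.grant(new String[] { PermissionRequest.RESOURCE_AUDIO_CAPTURE });
    }
  });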
Next time we’re going to talk a little about NFC, and see how that can be used effectively with Google Cardboard to launch our custom Android app.