Last Friday Microsoft announced a preview SDK for Kinect for Windows 2. As the first public release of the SDK, it seems a good time to publish an initial set of samples for readers to play with. These are very much a work in progress – I tend to restart AutoCAD between Kinect Fusion captures, for instance, as otherwise I’ve been getting regular crashes – but they should give people a sense of what’s possible. And while I haven’t yet implemented certain capabilities we had before, I have gone ahead and snuck a few enhancements in (which you’ll see in particular with the Kinect Fusion and Face Tracking integrations). Also, as I’ve now realised I have a lot to say about these samples, I’m going to stagger them across a few posts during the course of this week, following on from this introduction.
The latest KfW device is a big step up from the original Kinect for Xbox 360 and Kinect for Windows v1 devices: for starters you get about 3 times the depth data and high-definition colour. This round of the Kinect technology is based on a custom CMOS sensor that uses time-of-flight rather than structured light to perceive depth – Microsoft moved away from using PrimeSense as a technology provider some time ago, well before their acquisition by Apple.
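To give a feel for how much depth data that is, here's a minimal sketch of back-projecting a Kinect v2-style depth frame (512 × 424 pixels, depths in millimetres) into 3D camera-space points with a simple pinhole model. The intrinsics (fx, fy, cx, cy) and the function name are illustrative assumptions of mine, not the device's actual calibration or the SDK's API:

```python
# Sketch: turn a v2-style depth frame (flat list of mm depths, row-major)
# into 3D points in metres via a pinhole camera model.
# fx/fy/cx/cy below are placeholder intrinsics, not real calibration data.
def depth_to_points(depth, width=512, height=424,
                    fx=365.0, fy=365.0, cx=256.0, cy=212.0):
    points = []
    for row in range(height):
        for col in range(width):
            z_mm = depth[row * width + col]
            if z_mm == 0:            # zero depth means "no reading" here
                continue
            z = z_mm / 1000.0        # millimetres -> metres
            x = (col - cx) * z / fx
            y = (cy - row) * z / fy  # flip rows so +Y points up
            points.append((x, y, z))
    return points

# A synthetic flat surface 1 metre from the sensor: every pixel is valid,
# so we get one point per pixel - roughly 3x what a v1 depth frame holds.
frame = [1000] * (512 * 424)
pts = depth_to_points(frame)
print(len(pts))  # 217088
```

The 512 × 424 = 217,088 samples per frame are what make the "3 times the depth data" claim concrete, compared with the ~77k of the earlier devices.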
KfW v2 has a better range than KfW v1 – it has no need for a tilt motor or near mode – and it’s much less sensitive to daylight (I haven’t yet tried it outside, but I will!). This is really an impressive piece of tech… in many ways you’re effectively getting a laser scanner at the ridiculously low price of $200.
There are definitely some things to be aware of, however. The latest SDK is now Windows 8/8.1 only, which will no doubt exclude a number of people wanting to use this on Windows 7. (I run Windows 8.1 on the machine I use for Kinect work – as I need a native OS install with GPU usage for Kinect Fusion, even if the rest of the SDK can function from inside a VM, such as my day-to-day system running Windows 7 via Parallels Desktop on OS X – so I’m thankfully not impacted by that particular decision.) The device also requires USB 3 – naturally enough, given the data throughput needed – and requires additional, external power in much the same way as KfW v1 did.
One other important “platform” consideration when using these samples… I’m tending to use them on AutoCAD 2014 rather than 2015. They do work on 2015, but as this release completes the shift across from PCG to RCS/RCP for our native point cloud format, it’s not currently possible to index text files into native point cloud files (as we can do in AutoCAD 2014 using POINTCLOUDINDEX). That’s a bit of a gap for developers wanting to generate and programmatically import point cloud data into AutoCAD: there’s currently a manual step needed, where the user indexes an .xyz file into .rcs using ReCap Studio before attaching it inside AutoCAD 2015. (This isn’t the end of the story, hopefully: I’m working with the ReCap team to see what’s possible, moving forwards. If you have a specific need for custom point cloud import into AutoCAD that you’d like to see addressed, please do let me know.)
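For anyone generating that .xyz input themselves: the format is just one whitespace-separated “x y z” line per point, optionally followed by 0–255 RGB values. Here’s a minimal sketch of a writer – the function name and formatting choices are mine, but the output is the kind of ASCII file ReCap Studio indexes into .rcs (and that POINTCLOUDINDEX consumes in AutoCAD 2014):

```python
# Sketch: write points (in metres) plus optional 0-255 RGB colours to an
# .xyz text file - one "x y z [r g b]" line per point.
def write_xyz(path, points, colors=None):
    with open(path, "w") as f:
        for i, (x, y, z) in enumerate(points):
            if colors is not None:
                r, g, b = colors[i]
                f.write(f"{x:.4f} {y:.4f} {z:.4f} {r} {g} {b}\n")
            else:
                f.write(f"{x:.4f} {y:.4f} {z:.4f}\n")

# Two coloured points, ready for indexing into a point cloud file.
write_xyz("cloud.xyz", [(0.0, 0.0, 1.0), (0.1, 0.2, 1.5)],
          colors=[(255, 0, 0), (0, 255, 0)])
```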
A few words on the Kinect SDK itself, before we wrap up for today (I won’t embed the code directly in this blog – there’s just way too much of it). The various APIs that comprise the Kinect SDK have changed a lot between v1.8 and v2 of the SDK. The changes are generally positive – code using the APIs now feels a lot cleaner, and I do believe the changes have been done for good reasons – but it does mean there’s a definite break in compatibility. You will need separate codebases for each of KfW v1 and KfW v2, if you need to support both.
On a related note, we’ve just received a Structure Sensor from Occipital in the Neuchatel office, so I’m going to start taking a look at that, when I get the chance. There is an iOS SDK – and it seems open source drivers and a “hacker” USB cable are on their way – so I’m curious about what I’ll be able to get out of it. The device appears to have a lot of potential, even if it’s apparently even more quirky than Kinect Fusion, in many ways. I do buy into the idea that this tech is getting more mobile, but we’ll see whether this particular device is an evolutionary step or a dead-end. Especially as Occipital have licensed PrimeSense tech for the sensor, and Apple isn’t exactly known for running an OEM licensing business.
I personally suspect that the “sweet spot” for mobile capture tech in the near-term is a lightweight mobile app that allows you to ensure you have adequate coverage of your object before the data is crunched back at the office or up in the cloud to generate a decent model. Unless Project Tango reliably delivers the goods, of course (another tech I have mentally flagged with “we’ll have to see”).
Anyway, onwards… the next few posts will be looking at various AutoCAD-Kinect integration samples that have been ported to KfW v2.