This is very exciting: v1.7 of the Kinect for Windows SDK is being released today and it includes the uber-cool Kinect Fusion component.
For those of you who have not yet heard of Kinect Fusion, it allows you to use your Kinect for Windows sensor as an effective reality capture device: it aggregates input from depth frames provided by the Kinect sensor, mapping out a 3D volume. Or, for the layperson, it allows you to paint a 3D model of an existing real-world object or scene into your computer’s memory.
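That "aggregating depth frames into a 3D volume" is typically done with a truncated signed distance function (TSDF) volume, where each new frame's depth values are averaged into a voxel grid so sensor noise cancels out. Here's a deliberately simplified sketch of that idea – it assumes an orthographic camera looking straight down the z-axis, with no camera tracking, which is much cruder than what Kinect Fusion actually does:

```python
import numpy as np

def integrate_depth(tsdf, weights, depth_map, voxel_size, trunc):
    """Fuse one depth frame into a TSDF voxel volume.

    Simplifying assumption: an orthographic camera looking down +z,
    so depth_map[i, j] is the observed surface depth for voxel
    column (i, j). Real Kinect Fusion uses a perspective camera and
    a tracked sensor pose; this only illustrates the averaging step.
    """
    nx, ny, nz = tsdf.shape
    for i in range(nx):
        for j in range(ny):
            d = depth_map[i, j]
            if d <= 0:  # no valid depth measurement at this pixel
                continue
            for k in range(nz):
                z = (k + 0.5) * voxel_size       # voxel centre depth
                sdf = d - z                      # +ve in front of surface
                if sdf < -trunc:
                    continue                     # behind surface: unobserved
                sample = min(1.0, sdf / trunc)   # truncate the distance
                w = weights[i, j, k]
                # Running weighted average across frames
                tsdf[i, j, k] = (tsdf[i, j, k] * w + sample) / (w + 1)
                weights[i, j, k] = w + 1

# Fuse two noisy depth frames of a flat surface at depth ~0.5
n = 8
tsdf = np.zeros((n, n, n))
weights = np.zeros((n, n, n))
for noisy_depth in (0.49, 0.51):
    integrate_depth(tsdf, weights, np.full((n, n), noisy_depth),
                    voxel_size=1.0 / n, trunc=0.2)
```

After fusing both frames, the zero crossing of the TSDF (where values flip from positive to negative along z) sits near the true surface depth of 0.5, despite neither individual frame being exactly right.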
Here’s a video from Engadget’s Expand event, held over the weekend in San Francisco, where this version of the SDK was announced and the two main features – Kinect Interactions and Kinect Fusion – were demonstrated:
I’ve been working with a pre-release version of the SDK for the last few months, and it’s been a fun experience. I’m happy to say that the KfW team were really responsive in providing APIs that make sense for applications such as AutoCAD (where you actually want to deal with raw 3D data rather than a synthesized 2D image of the reconstruction volume from a particular viewpoint).
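To give a feel for what "raw 3D data" means here: each depth pixel can be back-projected through the standard pinhole camera model into an XYZ point, which is the kind of point cloud an application like AutoCAD wants to consume. Kinect Fusion's own API hands this data back directly; the sketch below just shows the underlying geometry, with made-up intrinsic parameters (`fx`, `fy`, `cx`, `cy` are placeholders, not real Kinect calibration values):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (in metres) into a 3D point cloud
    using the pinhole model: x = (u - cx) * z / fx, etc.
    Intrinsics are assumptions, not real Kinect calibration."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack((x, y, z), axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Toy 2x2 depth image, 1 m everywhere, with placeholder intrinsics
depth = np.ones((2, 2))
pts = depth_to_points(depth, fx=500.0, fy=500.0, cx=0.5, cy=0.5)
```

The resulting flat array of XYZ triples is exactly the sort of data you'd feed into AutoCAD's point cloud pipeline, rather than a pre-rendered 2D view of the volume.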
Kinect Fusion makes heavy use of the GPU, so although I was – from v1.6 – finally able to work with the Kinect from inside Parallels Desktop on my MacBook Pro, I’ve now had to move back to developing for Kinect on my native Windows box (at least until the day Parallels provides GPU virtualization, I suppose).
I have some code to share that integrates point cloud data from Kinect Fusion inside AutoCAD, but I’m going to wait until the final release of the SDK is available (the SDK being made available “in the morning” probably means the end of my day, as I’m based in Europe) and then post the code later in the week.