As part of my ongoing procrastination around my AU material development (despite which I’m managing to make some progress… my WinRT stuff is mostly done now), I went ahead and updated my Kinect samples to use v1.6 of the SDK: the version that finally works from a Windows session inside a Parallels VM on my Mac. Yay!
Here is the updated sample project, which includes the face-tracking capabilities shown in this previous post and therefore also requires the Kinect Developer Toolkit.
It wasn’t really much effort to port: a couple of methods that map depth and colour data into “skeleton space” have been deprecated, so while they still work, it seemed sensible to avoid the compiler warnings and migrate to the new way of doing things. Here’s the previous code:
// Convert the linear index into the depth array into
// (x, y) pixel coordinates
int x = i % depWidth;
int y = i / depWidth;

// Deprecated in v1.6: map a single depth sample into skeleton space
SkeletonPoint p =
  kinect.MapDepthToSkeletonPoint(
    DepthImageFormat.Resolution640x480Fps30,
    x, y, depth[i]
  );
And here’s the new way of doing things:
// Package the pixel coordinates and the raw depth sample
// into a DepthImagePoint
DepthImagePoint pt = new DepthImagePoint();
pt.X = i % depWidth;
pt.Y = i / depWidth;
pt.Depth = depth[i];

// The v1.6 way: CoordinateMapper now owns the conversion
CoordinateMapper cm = new CoordinateMapper(kinect);
SkeletonPoint p =
  cm.MapDepthPointToSkeletonPoint(
    DepthImageFormat.Resolution640x480Fps30, pt
  );
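One thing to watch: the snippet above constructs a CoordinateMapper for every pixel, which is wasteful when mapping a whole frame. Here’s a minimal sketch of how I’d hoist it out of the loop, assuming (as in the snippets above) that depth holds the frame’s depth samples and depWidth is the frame width:

// Create the mapper once per sensor and reuse it for every pixel
CoordinateMapper cm = new CoordinateMapper(kinect);
for (int i = 0; i < depth.Length; i++)
{
  DepthImagePoint pt = new DepthImagePoint();
  pt.X = i % depWidth;
  pt.Y = i / depWidth;
  pt.Depth = depth[i];

  SkeletonPoint p =
    cm.MapDepthPointToSkeletonPoint(
      DepthImageFormat.Resolution640x480Fps30, pt
    );

  // ... use p to build up the point cloud ...
}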
What seemed peculiar, when I first ran the code, was the relative scale of the points returned by the new approach. I decided to put the two calls side by side and compare the results, and, sure enough, the new technique returns points at 8x the scale of those returned the old way. My suspicion is that this is because the raw depth sample still carries the 3-bit player index in its lower bits (8 being 2^3), which the old method presumably stripped out itself. I seem to remember that the points were previously pretty accurate, so I adjusted the code to simply divide the various ordinates by 8, bringing the results back in line.
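For what it’s worth, here’s a sketch of that adjustment; the variable names are illustrative rather than taken verbatim from the sample project:

// Workaround: scale the mapped point back down by the observed
// factor of 8 (SkeletonPoint is a struct, so this copy is cheap)
SkeletonPoint adjusted = p;
adjusted.X /= 8;
adjusted.Y /= 8;
adjusted.Z /= 8;

An alternative, if my suspicion about the player index is right, would be to shift depth[i] right by DepthImageFrame.PlayerIndexBitmaskWidth before assigning it to pt.Depth, addressing the scale difference at its source.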
Here’s a point cloud capture using the KINECT command with me in my Halloween costume: