After introducing the series, taking a look at some basic samples and then importing Kinect’s high-definition face tracking data into AutoCAD, it’s time for (in my opinion) the most interesting piece of functionality provided by the Kinect SDK: Kinect Fusion.
Kinect Fusion is a straightforward way to capture 3D volumes – allowing you to move the Kinect sensor around to capture objects from different angles – and the KINFUS command in these integration samples lets you bring the captured data into AutoCAD. This basically turns the Kinect into a low-cost – and reasonably effective – 3D scanner for AutoCAD.
The KINFUS command (I’ve now done away with having a monochrome KINFUS command and a colour KINFUSCOL command… KINFUS now just creates captures with colour) hosts the Kinect Fusion runtime component, provides visual feedback on the volume as you map it, and finally gives you options for bringing the data into your active drawing. This works much as I’ve described in the past for KfW v1, although this version has a few differences.
Firstly, as mentioned recently, the KfW v2 implementation of Kinect Fusion is much more stable: while with v1 it was not really viable to effectively run Kinect Fusion within a 3D design app – the processing lag when marshaling 3D data made it close to unusable, at least in my experience – with v2 things are much better. It’s quite possible that much of this improvement stems from the use of a “camera pose” database, which makes tracking between frames much more reliable.
Using Kinect Fusion for “reality capture” is still some way off being a completely polished experience, but for someone with appropriate expectations – and who’s prepared to put up with a certain amount of frustration – it can be worthwhile.
Here’s a snapshot of a capture of a car we saw in a previous post:
With previous incarnations of this integration, the main way to bring the captured data into AutoCAD has been as a point cloud, mainly as we’re dealing with large volumes of data (captures of 2-3 million points are common, for instance). For fun, though – and because the point cloud import story isn’t as good with AutoCAD 2015, as you now have to index .XYZ files in ReCap Studio before attaching the .RCP inside AutoCAD – I went ahead and implemented mesh import, too.
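In case you’re curious about the point cloud route, here’s a minimal sketch of the kind of export step involved – just writing the captured points (and their colours) out as a space-delimited .XYZ text file that ReCap Studio can then index into an .RCP. The helper name and the input arrays are placeholders, not the sample’s actual code:

```csharp
using System.Globalization;
using System.IO;

static class XyzExport
{
  // Sketch: write captured points to an .XYZ text file (x y z r g b per line)
  // for indexing in ReCap Studio. "xyz" holds 3 floats per point, "rgb" 3 bytes.
  public static void WritePointsToXyz(string path, float[] xyz, byte[] rgb)
  {
    using (var sw = new StreamWriter(path))
    {
      int count = xyz.Length / 3;
      for (int i = 0; i < count; i++)
      {
        sw.WriteLine(
          string.Format(
            CultureInfo.InvariantCulture,
            "{0} {1} {2} {3} {4} {5}",
            xyz[3 * i], xyz[3 * i + 1], xyz[3 * i + 2],
            rgb[3 * i], rgb[3 * i + 1], rgb[3 * i + 2]));
      }
    }
  }
}
```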
So the KINFUS command now gives you the option to bring in a mesh rather than a point cloud, once you’ve finished capturing the volume and have selected the voxel step (which allows you to reduce the size of the data you bring in).
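The voxel step itself is nothing fancy – conceptually it just thins the captured data before you bring it in. Here’s one way it could look (a hypothetical helper, not necessarily how the sample implements it): keep every nth point from the capture buffer.

```csharp
using System.Collections.Generic;

static class VoxelStep
{
  // Sketch: reduce the amount of data imported by keeping every "step"-th
  // point from the captured buffer (3 floats per point).
  public static List<float[]> Downsample(float[] xyz, int step)
  {
    var kept = new List<float[]>();
    int count = xyz.Length / 3;
    for (int i = 0; i < count; i += step)
      kept.Add(new[] { xyz[3 * i], xyz[3 * i + 1], xyz[3 * i + 2] });
    return kept;
  }
}
```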
A quick word of caution, though. Point clouds – even if they’re written to a text file for indexing – can basically be as big as you want: AutoCAD will manage to bring them in without any problem. Meshes, on the other hand, are a different matter. If you don’t care about colour, you can create a SubDMesh object with up to about 2 million vertices (at least that’s what I’ve found). Beyond that, the mesh creation will almost certainly fail.
And if you want to capture a coloured mesh – which I have to admit is pretty awesome, see below for an example – you shouldn’t expect to go beyond 300K vertices: the creation of the SubDMesh should work fine, but AutoCAD will take forever to apply the per-vertex colours to the mesh (if it even manages to do so).
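For the curious, here’s roughly what the mesh creation step looks like from the AutoCAD .NET API – a sketch only, with placeholder inputs, and assuming the per-vertex colours get applied via the SubDMesh’s VertexColorArray property. It’s that last assignment that becomes painfully slow on larger meshes:

```csharp
using Autodesk.AutoCAD.ApplicationServices;
using Autodesk.AutoCAD.Colors;
using Autodesk.AutoCAD.DatabaseServices;
using Autodesk.AutoCAD.Geometry;

static class MeshImport
{
  // Sketch: create a SubDMesh from captured vertex/face data.
  // "faces" uses the SubDMesh convention: [nVertsInFace, i0, i1, ..., nVertsInFace, ...]
  public static void CreateMesh(
    Document doc, Point3dCollection vertices, Int32Collection faces,
    EntityColor[] colors)
  {
    var db = doc.Database;
    using (var tr = db.TransactionManager.StartTransaction())
    {
      var ms = (BlockTableRecord)tr.GetObject(
        SymbolUtilityServices.GetBlockModelSpaceId(db), OpenMode.ForWrite);

      var mesh = new SubDMesh();
      mesh.SetDatabaseDefaults();
      mesh.SetSubDMesh(vertices, faces, 0); // smoothing level 0

      ms.AppendEntity(mesh);
      tr.AddNewlyCreatedDBObject(mesh, true);

      if (colors != null)
        mesh.VertexColorArray = colors; // per-vertex colours: the slow part for big meshes

      tr.Commit();
    }
  }
}
```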
A little narcissism comes with the territory when you’re a blogger, so I chose myself as the subject of my initial Kinect Fusion mesh import into AutoCAD. Well, that and the Kinect was pointing at me and I didn’t bother pointing it elsewhere.
As you can imagine, I was pretty excited when I first managed to get colours imported. Here’s a little tale of how things looked…
The mesh was first displayed in AutoCAD with the “conceptual” visual style, but it was too edgy:
I switched the visual style to “realistic”, but it was too shiny:
And finally I created my own visual style which was “just right”:
Just to prove it is indeed 3D, here’s a quick GIF (reduced to 256 colours, of course):
This mesh has 286K vertices, so it’s fairly small by Kinect Fusion standards: I chose a 1 m³ capture volume with a voxel density of 256 per metre and then didn’t even move the camera before completing the capture. In case you want to check it out, here’s the 10 MB drawing file.
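If you’re wondering what that volume definition corresponds to in the Kinect Fusion API, it’s simply 256 voxels per metre across a 256 × 256 × 256 grid – i.e. a cube one metre per side. Something along these lines (a sketch – the KINFUS command actually prompts you for these values):

```csharp
using Microsoft.Kinect.Fusion;

static class CaptureVolume
{
  // 256 voxels per metre with a 256 x 256 x 256 grid => a 1 m x 1 m x 1 m volume.
  public static ReconstructionParameters OneCubicMetre()
  {
    return new ReconstructionParameters(256f, 256, 256, 256);
  }
}
```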
I do believe it’s possible to use Kinect Fusion to capture complete objects – if enough care is taken not to move the sensor too quickly, and you’re sensible about not trying to capture too much at once. Once tracking fails, I often find the results become a little unpredictable: you might find you have multiple ground planes at different angles, for instance. It’s natural to want to capture as much as possible in one go, but I’ve found that performing multiple captures that you later aggregate ends up being more efficient. Yes, this is much easier with point clouds than with meshes, of course.
I should mention that the system I’m using is a few years old and a “mobile workstation” (i.e. a high-powered notebook) rather than a full desktop. Here are the results from the Kinect v2 Configuration Verifier:
I would hope that using a full desktop – or a newer/beefier notebook – would improve tracking and therefore the quality of the results you can generate using Kinect Fusion with AutoCAD. It’s nonetheless interesting to see how well things are moving ahead, and this is only with a preview release of the Kinect SDK.