As mentioned in yesterday’s post, on Monday I spent a few hours in the early morning at the local archeological museum, the Laténium. While this is nowhere near the scale of the Smithsonian, it is the largest archeological museum in Switzerland and is well known by people in the field, especially those interested in La Tène culture.
I love the Laténium – our kids often seem to go there for birthday parties, which certainly beat the ones held at the local McDonalds – and so I was delighted to get to go “behind closed doors” (the museum is closed on Mondays) to work on a 3D reconstruction project.
The project came about because a colleague in the Neuchâtel office, Laurent Pallares, is part of a team working on the entry for an internal design competition at Autodesk. Laurent’s team’s entry involved taking a scan of a local monument, Neuchâtel’s Fontaine de la Justice:
Laurent had already captured the “outdoor” version of the fountain using ReCap Photo – and had worked on the (excellent) resulting mesh and 3D printed a small-scale replica – but had found the detail on the original statue (which is on display in the museum, presumably due to its historical importance) to be different. He thought it would be interesting to arrange a visit and capture that version, too, for the purpose of comparison. And as he’d attended a session I’d presented on Kinect Fusion a week or so earlier, he asked if I wanted to take a crack at it.
The statue is displayed in the museum in a configuration that’s slightly tricky to capture: as you can see here, it’s on a high pedestal with a balcony next to it. That’s fine for the front, but scanning the rear was a bit problematic – we had to lean over the railing and scan the back as best we could.
Here’s a picture of our set-up, to give you an idea of what we used, equipment-wise:
The capture process was actually a lot of fun: we started off trying to capture an .OBJ mesh using the standard Kinect for Windows SDK sample, Kinect Fusion Explorer, but found we couldn’t quite tweak the reconstruction volume to the size we wanted, so we ended up capturing a few separate meshes. It worked fairly well, but we weren’t able to capture the whole statue in one pass.
I had to also try it in AutoCAD, of course, and with a bit of work we managed to capture a few good point clouds (my code just brings in points, it doesn’t attempt to re-create meshes with millions of vertices).
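For the curious, the core of bringing depth data in as points is a straightforward back-projection of each depth pixel through the camera intrinsics. My actual code is C# against the Kinect SDK, but the maths can be sketched in a few lines of Python – note that the focal length below is an approximate, uncalibrated value for the Kinect v1 depth camera, assumed purely for illustration:

```python
# Sketch: back-project a Kinect depth frame (in millimetres) into 3D points.
# fx/fy ~571 px is an approximate Kinect v1 depth-camera focal length -
# an assumption for illustration, not a calibrated value from my code.
def depth_to_points(depth_mm, width, height, fx=571.0, fy=571.0):
    cx, cy = width / 2.0, height / 2.0  # assume principal point at the centre
    points = []
    for v in range(height):
        for u in range(width):
            z = depth_mm[v * width + u]
            if z == 0:           # 0 means "no reading" in Kinect depth data
                continue
            z_m = z / 1000.0     # convert to metres
            points.append(((u - cx) * z_m / fx,
                           (v - cy) * z_m / fy,
                           z_m))
    return points

# A tiny 2x2 "frame": one pixel has no reading, so three points come back
pts = depth_to_points([1000, 0, 2000, 1000], 2, 2)
```

A real frame is, of course, 640 x 480 pixels rather than 2 x 2, which is why the clouds grow so quickly.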
Again, we couldn’t quite get the whole model in one go: Kinect Fusion started dropping details when a capture went on for too long (irrespective of the tracking status), which was a bit frustrating, but keeping the separate scans fairly short allowed it to work reasonably well.
Once back at my desk I managed to aggregate a couple of the scans into a single, decent-looking point cloud:
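Aggregating scans is conceptually simple: once you have a rigid transform (a rotation plus a translation) expressing one scan in the other’s coordinate system, merging is just concatenation. In my case the alignment was done by eye rather than computed; here’s a minimal Python sketch, with purely illustrative values for the transform:

```python
# Sketch: merge two point clouds given a rigid transform (R, t) that maps
# scan_b into scan_a's coordinate system. R and t here are illustrative -
# in practice I aligned the scans manually rather than solving for them.
def transform(points, R, t):
    return [(R[0][0] * x + R[0][1] * y + R[0][2] * z + t[0],
             R[1][0] * x + R[1][1] * y + R[1][2] * z + t[1],
             R[2][0] * x + R[2][1] * y + R[2][2] * z + t[2])
            for (x, y, z) in points]

def merge(scan_a, scan_b, R, t):
    # With scan_b expressed in scan_a's frame, merging is concatenation
    return scan_a + transform(scan_b, R, t)

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
shift = (1.0, 0.0, 0.0)  # pretend scan_b sits one metre along X
merged = merge([(0, 0, 0)], [(0, 0, 0), (0.5, 0, 0)], identity, shift)
```

Solving for R and t automatically is what registration algorithms like ICP do; for two scans of a small statue, eyeballing it was quicker.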
Given the access to the model – the rear was in the shade and I was half hanging off a balcony to get to it – it’s not surprising it wasn’t possible to get a full 360-degree scan (the point cloud looks good from this angle but soon looks a bit gappy when you spin it around).
But it was a nice test of the possibilities – I was fairly happy with the results. A big thank you to Laurent Pallares for inviting me along and providing the photos for this post (unfortunately he was the one holding the camera and so all the shots ended up being of me).
In related news, I’ve just had confirmation that I’ve been accepted into the Kinect 2.0 pre-release program, and so should be getting a device (and permission to blog about it) in the coming weeks. Kinect Fusion won’t be in the SDK right off the bat, but the 1080p data (~3x the number of points) will certainly make this technology extremely interesting once it’s available.