While in our San Francisco office the week before last, I bumped into Brian Mathews and his team as they were making their final preparations for the TED 2011 conference. I don’t know how many of you know about TED: it’s an amazing conference – and information resource – that I’m proud to say Autodesk sponsors. It’s a dream of mine to attend this event in person (maybe one day I’ll get the chance), but at least the sessions are posted online for everyone to enjoy.
I’ve talked at great length in the past about Microsoft Photosynth and Autodesk Photo Scene Editor, both of which take sets of images and create 3D models of some kind (whether point clouds or – in the case of the next version of Photo Scene Editor – textured meshes), and both work very well with static scenes. The problems start when you want to capture something more dynamic or transient in nature, as it’s hard to move the camera quickly enough – even when shooting video and extracting stills – to get accurate feature recognition across images.
The Labs team have put together something really neat: an array of cameras attached to a rig and connected to a device that coordinates their actions – essentially taking a picture from each camera at exactly the same time. It’s a bit like a bullet time rig, except that all the photos are taken simultaneously rather than in sequence.
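To give a feel for the coordination step – making sure every camera fires only once all of them are ready – here’s a minimal software sketch using a barrier. This is purely illustrative: the actual Labs rig uses a dedicated hardware trigger device, and the names (`NUM_CAMERAS`, `camera_worker`) are made up for the example.

```python
import threading

NUM_CAMERAS = 4  # hypothetical rig size

captured = []
lock = threading.Lock()
# All camera threads block at the barrier until the last one arrives,
# so the "shutters" release as close to simultaneously as possible.
barrier = threading.Barrier(NUM_CAMERAS)

def camera_worker(cam_id):
    # ... per-camera setup (focus, exposure) would happen here ...
    barrier.wait()          # synchronise: fire only when everyone is ready
    with lock:
        captured.append(cam_id)  # stand-in for actually taking the shot

threads = [threading.Thread(target=camera_worker, args=(i,))
           for i in range(NUM_CAMERAS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(captured))
```

In hardware the same idea is handled by a shared electrical trigger signal, which achieves far tighter synchronisation than software threads ever could.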
The resultant images get sent to the Photofly servers to generate a textured 3D mesh, useable in (and exportable from) an as-yet-unreleased version of Photo Scene Editor, codenamed “Caipi”.
I haven’t yet heard whether the TED luminaries found the technology of interest, although I’m sure they will have – this stuff is really cool.