As I mentioned in the last post, I’ve been working on an application that brings point clouds across from Photosynth into AutoCAD.
Before we get into the details, I’d like to lay some of the groundwork for this series of posts by talking a little about the bigger picture: “reality”. How about that for bigger picture? It doesn’t get much bigger than that, unless you’re working on the LHC at CERN. ;-)
Reality is increasingly being captured in digital form (Google StreetView, Bing Maps, Photosynth) and augmented: just look at the cool iPhone apps available in this area, such as Layar. And that’s really just the beginning: with wearable computing, such as the SixthSense project, and other advances on the way, the future is looking increasingly colourful. :-)
I see this area – which is in many ways just part of Human-Computer Interaction, as we break down the barriers between us and the computers we use – as being of definite relevance to people working in the design industry. Imagine engineers performing inspections of built designs, capturing the current state, performing analysis and suggesting possible enhancements – all while looking at the object and seeing that data overlaid onto their field of vision. It sounds like science fiction (and yes, I do read a lot of sci-fi), but it’s no longer a quantum leap away.
Capturing reality is a big part of this, and it’s an area that’s becoming more and more accessible. 3D scanning technologies, such as LIDAR, are becoming increasingly affordable: various types of device create point clouds representing 3D models at an appropriate scale, depending on your need. Then there’s Project Natal, which is going to enable gaming without any sort of controller by using “reality capture” to analyse movements and gestures. And then there are tools such as Photosynth, which take it down (or is that up?) a level and allow you to capture reality in 3D (and glorious technicolor) from a set of 2D photos.
Photosynth has some really clever image analysis algorithms (I admit this video is now a little old: now that Photosynth is a pure web service some of the details may have changed… I also suggest checking Blaise Aguera y Arcas’ TED talks) that determine, based on the shared locations of features in photographs taken from multiple angles, where a common point is in 3D space. The more pictures you provide, the better the chances of points being cross-referenced and added to the point cloud. A Photosynth that is “100% synthy” has all its pictures related to one another in some way, but that doesn’t necessarily mean a large, granular point cloud. While AutoCAD is now capable of dealing with point clouds of up to around 2 billion points, Photosynth point clouds are mostly smaller than 200,000 points (from what I’ve seen, so far), which makes sense: they’re primarily there to correlate and provide access to sets of photographs.
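To give a feel for the core idea, here’s a minimal sketch – in Python/NumPy, and certainly not Photosynth’s actual code – of the triangulation step: given the projection matrices of two cameras and the pixel locations of the same feature in both photos, we can solve for the position of the underlying 3D point. (The real pipeline also has to detect and match the features across photos and refine the camera estimates via bundle adjustment; this just shows the geometric heart of it.)

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices
    x1, x2 : (u, v) pixel coordinates of the same feature in each image
    """
    # Each view contributes two linear constraints on the homogeneous point X:
    #   u * (P[2] . X) - (P[0] . X) = 0
    #   v * (P[2] . X) - (P[1] . X) = 0
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the (approximate) null vector of A: the right
    # singular vector corresponding to the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenise

# A quick check with a made-up stereo pair: two cameras looking down
# the Z axis, the second with its centre offset one unit along X.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate_point(P1, P2, x1, x2))  # ~ [0.5, 0.2, 4.0]
```

With noisy, real-world measurements A has no exact null vector, which is why an SVD-based least-squares solution is used rather than solving the system directly – and why more photos of the same feature give a better-constrained point.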
Even so, the point clouds that are available in Photosynth provide some very interesting possibilities: even if they’re not incredibly detailed or dense, it’s certainly possible to capture certain types of design using such a technique, simply by uploading a set of photos. The cost of digitising a real-world object into a 3D model is suddenly reduced to the cost of a digital SLR (and the same technique could very well be applied to frames from a digital video recording).
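As a taste of where this is heading, here’s a hypothetical sketch – the real import code is the subject of the upcoming posts – of the final step: once we have coloured points out of a synth, one way to get them into AutoCAD is to write them to an ASCII point-cloud file and then index and attach it (POINTCLOUDINDEX and POINTCLOUDATTACH in AutoCAD 2011). The Leica PTS format below is an assumption on my part, so do check which formats your AutoCAD release will actually index.

```python
def write_pts(filename, points):
    """Write an ASCII PTS point-cloud file.

    points : iterable of (x, y, z, r, g, b) tuples, with r/g/b in 0-255.
    PTS layout: the point count on the first line, then one point per
    line as "x y z intensity r g b". Photosynth gives us no intensity
    data, so 0 is written as a placeholder.
    """
    points = list(points)
    with open(filename, "w") as f:
        f.write("%d\n" % len(points))
        for x, y, z, r, g, b in points:
            f.write("%.6f %.6f %.6f 0 %d %d %d\n" % (x, y, z, r, g, b))

# Hypothetical usage with a couple of points
write_pts("synth.pts", [(0.5, 0.20, 4.0, 255, 128, 0),
                        (0.6, 0.25, 4.1, 250, 130, 10)])
```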
In the next post on this topic we’ll start looking at the implementation of the Photosynth import application itself.