I am really very excited about this technology. For those of you who’ve found my investigations into Photosynth and computer vision/photogrammetry solutions to be of interest, you’re in for a treat. :-)
As Scott Sheppard has announced over on his blog, Photo Scene Editor has just gone live on Autodesk Labs: a very interesting application in itself, but also the first public client of the Photofly web service (see this previous post for mention of other “*fly” technologies coming from Autodesk). Photo Scene Editor allows you to build, visualize, edit and analyse “scenes” from sets of photographs. The tool uploads your images to Photofly, which analyses them to determine the camera positions (and other properties of the scene), generates a point cloud, and packages it up with other information about the scene to send back to the client application. The “editor” part of the tool then comes into play: you can manipulate the point cloud generated automatically by Photofly, removing extraneous points, as well as providing additional information to help “stitch” in images that could not be positioned automatically. All of this helps build a 3D model representing the physical scene you’ve captured, with splats derived from the source images making the scene appear more realistic.
To some degree it’s this post-upload manipulation capability that differentiates this technology from Microsoft’s excellent Photosynth service, but there is more to it than that. Photosynth’s focus is largely on coordinating photographs in 3D space, while the technology we’re developing is focused on capturing reality for use in design. Photosynth produces some great point clouds that can be extracted – with a little effort – for use in design applications, but we’re building technology that is fully intended – in time, of course – for production use by businesses. Which means a focus on the ability to calibrate the photos, to generate accurate results and to bring point clouds into Autodesk products.
Over time I expect this service to be used for further purposes, such as real-time positioning of an image relative to a model (think of the potential to determine the position of a smart-phone from its camera’s image-stream for augmented reality applications). But that’s for the future: what we have today is focused on capturing designs for the purposes of “as built” analysis and – in due course – design augmentation.
So let’s step through the installation and usage of the product. After logging into Labs, clicking the link and accepting the standard download agreement, you’ll start the install and get a EULA:
I found the summary useful to help understand the limits for using the technology, and didn’t especially object to having to click four checkboxes. :-)
From there we get a “welcome” page…
… from which we can either choose to create a new Photo Scene or open an existing one. When creating a new scene we simply have to select the photos that Photofly will then use to create it.
In my case I’ve gone ahead and selected the photos I used in this previous post:
The images get read…
… and loaded into the editor for upload.
By clicking on “Compute Photo Scene”, you start the upload and scene creation process. What’s interesting here is that the application really does use the cloud for the processing. While Photosynth’s client application does a fair amount of number crunching locally on the system (an approach the development team has described as edge computing, but one that has significant downsides when you want to use the web service from other clients – something we’ll discuss further in the next post), all of the scene creation marshaled by Photo Scene Editor is done by Photofly, up there in the cloud.
A fair amount of optimization has also been done to avoid uploading and processing the same image multiple times: each file’s checksum is used to identify it, and only images the server doesn’t already know about get uploaded and processed.
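To illustrate the idea – and this is just a minimal sketch in Python rather than the actual Photo Scene Editor code, as I’m not describing the real hash algorithm or upload protocol here – checksum-based de-duplication looks something like this:

import hashlib

def file_checksum(path):
    # Hash the file's contents in chunks, so large images don't all need
    # to be read into memory at once
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def images_to_upload(paths, known_checksums):
    # Only keep the images whose checksums the server hasn't seen before
    return [p for p in paths if file_checksum(p) not in known_checksums]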
We can see that the online scene creation starts while the upload is ongoing. As we proceed through that stage, we get the ability to leave the application and to come back later for the results:
Once the process is complete…
… we get to see the actual scene, locally in the client. The scene is communicated back from Photofly to the editor via an RZI file (more on this in the next post), an XML file that describes the scene and how the local images should be used to display it.
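As the RZI is plain XML, you can already take a peek at one yourself. Here’s a quick Python sketch that simply walks the file and lists its elements – I’m deliberately not assuming anything about the actual schema, and “myscene.rzi” is just a placeholder for whatever your scene file happens to be called:

import xml.etree.ElementTree as ET

tree = ET.parse("myscene.rzi")

# Walk the tree, printing each element's tag and attributes, to get a feel
# for how the scene and its images are described
for elem in tree.getroot().iter():
    print(elem.tag, elem.attrib)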
There are various options enabled at this stage. We see camera positions, as well as what appears to be a 3D model in the middle of it all.
We can disable some of these options (compare the toolbars in each image to understand the differences) to get the underlying point cloud:
While the point cloud is interesting, one of the key features of the editor is the ability to apply splats – portions of the local raster images that get draped onto the point cloud:
Just to make sure it’s clear this is a 3D model, here’s a view from a different direction:
Now you’ve got your basic scene, it’s worth re-enabling the point cloud, at which point you can select and edit (which typically means remove) extraneous points. You can also use the slider to adjust the splat size to something more appropriate.
I’m not going to talk about point cloud editing today – I think the post has gone on long enough already – but I do suggest playing around with it. If you’re interested in analysing the model, you can calibrate the scene using known distances (the width of my glasses, for instance), so that accurate measurements can be taken.
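The calculation behind that calibration is straightforward. Here’s a rough Python sketch – not the editor’s actual implementation – that rescales a point cloud so the distance between two picked points matches a known real-world distance:

import numpy as np

def calibrate(points, picked_a, picked_b, known_distance):
    # points: an (N, 3) array of point cloud coordinates
    # picked_a, picked_b: two points whose real-world separation is known
    measured = np.linalg.norm(np.asarray(picked_a) - np.asarray(picked_b))
    scale = known_distance / measured
    # Scaling every coordinate by the same factor means measurements taken
    # from the cloud come out in real-world units
    return points * scale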
It should also be noted that while the results can look spookily realistic, there is not actually an underlying mesh to be accessed (at least not yet :-). There are certainly plans to take this technology a lot further – now is a great time to play around with it and let us know what you would want.
In the next post I’ll focus a little more on the underlying mechanics of the system and on some work I’ve been doing with the Photofly team to develop a “proof of concept” application that uploads images to Photofly and brings the resultant point cloud back down into AutoCAD 2011.