Once again I’m a bit late announcing this one: I had friends visiting from the UK last week, and so took the week off (although I ended up finding a fair amount of time to study linear algebra, which was surprisingly fun). I’d queued up the week’s blog posts in advance to reduce the chance of people noticing my absence. ;-)
I first heard about Project Memento when I visited Singapore a few months ago (I predicted in that blog post that I’d be writing about this project the day it went live on Labs, but unfortunately that clearly didn’t end up working out… oh well).
This is very interesting technology: Murali Pappoppula and his talented team have built a tool that allows you to manipulate very large meshes. Think of meshmixer on significant quantities of performance-enhancing drugs.
Project Memento lets you work with meshes that are quite a bit larger than those that can currently be edited using Maya or Mudbox (the tools in the Autodesk portfolio that are most capable of dealing with large meshes). The eventual plan – I’m not sure this is currently implemented, but it’s apparently coming – is to allow you to edit essentially infinite meshes, as only the section of the mesh you’re currently working on needs to be fully in memory. In fact the team had to get really down and dirty to make sure that OS-level memory paging didn’t get in the way of efficient, low-level disk access.
The longer-term plan for this technology (at least it was as of three months ago :-) is eventually to support editing on all kinds of devices: if you’re working on a memory-constrained device, such as a tablet, then your editing window onto the mesh will simply be smaller. The more capable the system, the larger the editing window.
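Memento’s internals aren’t public, so this is purely illustrative, but the “editing window” idea described above can be sketched with memory-mapped file access: keep the full vertex buffer on disk and map only the slice you’re currently editing, letting the OS page in just those bytes. All the names here (the file layout, `edit_window`, and so on) are my own invention for the sketch, not anything from the actual product.

```python
import mmap
import struct

# Illustrative sketch only -- not Memento's actual implementation.
# A mesh is stored as a flat binary file of vertices; we map and
# edit only a window of it, so the whole file never needs to be
# resident in memory.

VERTEX_SIZE = 12  # three little-endian 32-bit floats (x, y, z)

def write_demo_mesh(path, count):
    """Create a demo mesh file of 'count' vertices along the x axis."""
    with open(path, "wb") as f:
        for i in range(count):
            f.write(struct.pack("<3f", float(i), 0.0, 0.0))

def edit_window(path, start, length):
    """Map only vertices [start, start+length) and raise their z by 1.

    mmap offsets must be multiples of mmap.ALLOCATIONGRANULARITY,
    so we align the requested byte range down to a page boundary
    and remember the difference."""
    byte_start = start * VERTEX_SIZE
    page = mmap.ALLOCATIONGRANULARITY
    aligned = (byte_start // page) * page
    delta = byte_start - aligned
    size = delta + length * VERTEX_SIZE
    with open(path, "r+b") as f:
        with mmap.mmap(f.fileno(), size, offset=aligned) as mm:
            for i in range(length):
                off = delta + i * VERTEX_SIZE
                x, y, z = struct.unpack_from("<3f", mm, off)
                struct.pack_into("<3f", mm, off, x, y, z + 1.0)

# Edit a 100-vertex window in a 10,000-vertex mesh; only a few
# pages of the file are ever mapped.
write_demo_mesh("mesh.bin", 10_000)
edit_window("mesh.bin", start=5_000, length=100)
```

The same pattern scales down naturally to the tablet scenario: a constrained device would simply request a smaller window.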
Assuming this plan does end up playing out – a lot can change as technology such as this emerges – it presumably won’t be for a while. The currently posted preview is for 64-bit Windows only.
I plan on spending a bit more time working with this over the coming months, focusing on technologies that would allow creation of very large meshes: part of the problem we’ve had with Project Memento is generating large enough meshes to push the technology’s limits. The most feasible approach, right now, is to generate them using some kind of reality capture technology, which is one of the reasons it was the Reality Capture team at Autodesk who developed the technology underlying Project Memento. Although of course I’m looking at this backwards – from a technology rather than a user perspective. It’s also pretty clear that over time people working with tools such as ReCap – as well as with meshes generated in other ways – will find this kind of capability compelling.
From my side, I’m particularly interested in a possible integration with Kinect Fusion. The Fusion component currently populates a very well-defined volume, but if it were possible to stream polygons directly into a tool such as Project Memento – especially via the upgraded sensor announced with Xbox One – life would get really interesting…