Back when FARO announced their new Freestyle3D handheld scanner, I contacted them to see whether they might have one for me to take a look at. They very kindly obliged, and a few weeks ago I received a loaner model in the post.
I won’t be writing an exhaustive review – at least not in this post – but I did want to share my first impressions, mainly to capture them for future discussion. Bear in mind that most of what I’m writing here is personal opinion and the rest is pure speculation :-). Hopefully someone at FARO will be able to point out any factual inaccuracies so I can correct them.
Of course my primary interest in the scanner was to get it working in some way with AutoCAD, and ideally without a lot of the hurdles I jumped through when integrating Kinect Fusion (in many ways a comparable system). Before seeing whether that was possible, let’s take a look at some of the important points about the Freestyle3D scanner.
Much like Kinect v1, the Freestyle3D is a structured light scanner: it projects a pattern of infrared dots and detects their deformation. Like the first Kinect, it has a range of 50cm to 3m. That’s about where the similarities to Kinect end, though.
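To give a feel for the principle behind structured light (this is just the textbook triangulation idea, not a description of FARO’s actual implementation), depth falls out of similar triangles once you know the baseline between projector and camera, the focal length, and the observed disparity of a projected dot. The numbers below are invented purely for illustration.

```python
# Illustrative only: depth from disparity in a structured-light setup.
# Focal length, baseline and disparity values are made up for the example;
# they are not the Freestyle3D's actual parameters.

def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Classic triangulation: z = f * b / d."""
    if disparity_px <= 0:
        raise ValueError("Disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# A dot observed with ~31 px disparity, 580 px focal length, 8 cm baseline:
print(depth_from_disparity(580.0, 0.08, 31.0))  # ~1.5 m, inside the 0.5-3 m range
```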
While Kinect Fusion requires a desktop-class PC to run the Kinect runtime – essentially reconstructing a watertight 3D mesh in real(ish) time – the FARO system takes a different approach. More on this in a little while. One of the reasons Kinect Fusion has such heavyweight requirements is that it’s performing energy minimisation calculations between consecutive point-cloud frames – albeit in a highly parallelised fashion via the GPU – to determine what additional data is contributing to the mesh.
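For the curious, here’s a toy, CPU-only sketch of the kind of frame-to-frame alignment step that gets parallelised on the GPU in that sort of pipeline: find closest-point correspondences between two small point clouds and solve for the rigid transform that minimises the squared error (the SVD/Kabsch solution). It’s purely illustrative – it omits the voxel volume, normals and coarse-to-fine iteration of the real thing.

```python
import numpy as np
from scipy.spatial import cKDTree

def align_frame(prev_pts, curr_pts):
    """One ICP-style iteration: nearest-neighbour correspondences, then the
    SVD (Kabsch) solution for the rigid transform minimising squared error."""
    tree = cKDTree(prev_pts)
    _, idx = tree.query(curr_pts)           # closest point in the previous frame
    matched = prev_pts[idx]

    # Centre both sets, then solve for the rotation via SVD of the covariance.
    c_curr, c_prev = curr_pts.mean(axis=0), matched.mean(axis=0)
    H = (curr_pts - c_curr).T @ (matched - c_prev)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_prev - R @ c_curr
    return R, t                              # maps current-frame points onto the previous frame
```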
The Freestyle3D can work with any PC, but comes bundled with a Surface Pro 3 with FARO’s SCENE Capture software pre-installed. This makes for a (largely) untethered capture experience: there’s a wrist strap for the Surface Pro, which you carry around along with the scanner in your other hand (the two are connected by a USB 3 cable).
[In many ways the Surface Pro 3 is the device of choice for Windows-centric software vendors looking to meet the needs of mobile customers: Siemens seemed to base their whole mobile pitch around it at the recent Develop3D Live event, for instance. It certainly has the horsepower to run moderately heavyweight desktop software without needing a significant amount of UI rework.]
So how is the Freestyle3D’s scanning approach different from Kinect Fusion’s? Rather than requiring a heavyweight graphics card to basically “diff” the point clouds for each frame coming from the scanner, SCENE Capture uses Visual SLAM to determine how the scanner is moving through space. It’s essentially using computer vision to extract features – edges, corners, etc. – from the camera input and then using these data points to track the scanner’s position in 3D space. You’ll notice, for instance, that tracking is very dependent on light levels: if there’s insufficient clarity in the image coming from the camera, the software has trouble extracting enough features and therefore loses track of the scanner’s location.
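To give a feel for what “extracting features and tracking” means in practice, here’s a rough OpenCV sketch (my own illustration, not FARO’s code): detect ORB corners in two consecutive camera frames, match them, and recover the relative camera motion from the matches. In a dark or blurry frame the detector simply returns too few keypoints, which is exactly the failure mode described above.

```python
import numpy as np
import cv2

def relative_motion(frame_a, frame_b, K):
    """Estimate the camera's relative motion between two greyscale frames.
    K is the 3x3 camera intrinsic matrix (assumed calibrated)."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, des_a = orb.detectAndCompute(frame_a, None)
    kp_b, des_b = orb.detectAndCompute(frame_b, None)
    if des_a is None or des_b is None or min(len(kp_a), len(kp_b)) < 50:
        return None  # too few features (e.g. low light): tracking is lost

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])

    E, mask = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K, mask=mask)
    return R, t  # rotation and (unit-length) translation between the frames
```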
This means a few things. Firstly, it’s a lot snappier: while you have to move slowly – and the software warns you when you’re starting to go too quickly – tracking is a lot more reliable than I was used to with Kinect Fusion. Secondly, you’re not working with a voxelised 3D volume – a closed mesh – you’re building a point cloud. This means you’re going to see more noise, especially when scanning reflective surfaces, the Achilles’ heel of the 3D scanning world.
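When that noise does creep in (shiny office furniture is a reliable culprit), a standard clean-up step is statistical outlier removal: drop points whose average distance to their neighbours is unusually large. Here’s a minimal numpy/scipy sketch of the idea – nothing specific to SCENE or ReCap, just the general technique.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(points, k=16, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours is more
    than std_ratio standard deviations above the global mean distance."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)   # first neighbour is the point itself
    mean_dists = dists[:, 1:].mean(axis=1)
    threshold = mean_dists.mean() + std_ratio * mean_dists.std()
    return points[mean_dists < threshold]
```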
When tracking does get lost, the visual feedback is actually fairly good…
… you’re given decent visual clues as to where you need to place the scanner for tracking to be restored:
Capturing is therefore fairly painless. I’m by no means an expert user of SCENE, but I managed to work out most of what I needed. It apparently provides the capability to edit out erroneous frames from a scan – something I can see might be needed, as in a few of my longer scans I found I had multiple planes for the floor or for one of the walls. I’m sure this is down to user error, but it certainly highlights the fact that you need a certain level of expertise to avoid this scenario (probably by creating and merging multiple, smaller scans).
One thing that absolutely needs work is the workflow from SCENE to Autodesk software. When you install SCENE you can see that it includes an Autodesk component called DeCap (this is an Autodesk SDK that can be used to create RCS and RCP files… it’s basically “headless” ReCap ;-). Unfortunately the SCENE software doesn’t seem to use this directly at the time of writing (v5.4). I found I had to export to another format – whether .E57 or .PTX; sometimes one worked better than the other – and then import that into ReCap Studio to generate a .RCS or .RCP file that can be imported into AutoCAD.
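If you want to peek at the exported data before it goes anywhere near ReCap, the .PTX flavour is at least a plain-text format that’s easy to pick apart. Here’s a rough reader I put together for illustration; it assumes the common single-scan PTX layout (column/row counts, scanner position and axes, a 4x4 transform, then one point per line) and does no validation, and the file name is obviously just a placeholder.

```python
import numpy as np

def read_ptx(path):
    """Rough reader for a single-scan .PTX export. Assumes the usual layout:
    10 header lines, then 'x y z intensity [r g b]' per point."""
    with open(path) as f:
        cols = int(f.readline())
        rows = int(f.readline())
        for _ in range(8):                    # scanner position, axes, 4x4 transform
            f.readline()
        points = []
        for _ in range(cols * rows):
            values = f.readline().split()
            if len(values) >= 3:
                points.append([float(v) for v in values[:3]])
    return np.array(points)

cloud = read_ptx("office_scan.ptx")           # hypothetical file name
print(cloud.shape, cloud.min(axis=0), cloud.max(axis=0))
```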
So quite a convoluted process to get the data across from the Freestyle3D into AutoCAD. I’m told that both FARO and Autodesk are working on improving this workflow, so I don’t expect the pain to continue forever. It’s still early days, of course.
I’d also love to see this scanner feed Autodesk Memento, generating a mesh rather than a point cloud. This kind of integration is largely working with Artec’s scanners today, but not yet with other devices such as the Freestyle3D.
Overall I found it very interesting working with the Freestyle3D. I’m very curious to see how this technology – and the supporting workflow – evolves over time. I’m sadly having to ship it back in the next few days, but I’ve stored a fairly varied set of captures that I intend to work on when I get the chance. For instance, I fully intend to try extracting floorplans from an office space, as mentioned in the last post.