Last month Fast Company published an article about an interesting project Autodesk Research has been working on for a number of years: internally the project was known as LEGOBot, but now that it’s being talked about publicly it has understandably been renamed to BrickBot.
BrickBot is a really cool project built on two core ideas: robots are stupid and engineering is expensive. Robots need to be told exactly what to do, but telling them is neither straightforward (today) nor flexible: you have to code for specific conditions, and if those conditions change you have to write more code.
The overarching goal of BrickBot is to build a system that takes a 3D model of something and then works out how to fabricate or assemble it. In this initial instance, BrickBot does this with LEGO bricks, but that’s just where things are today. The problem breaks down into three stages (there’s a rough code sketch of how they might fit together after the list):
- Object detection and localization
  - Looking at a bin of parts, what parts do we have and where are they?
- Grasping and manipulation of the bricks
  - Knowing where the parts are, how do we go about picking them up and rotating them to be ready for placement?
- Planning, actuation and assessment
  - Given the hierarchy of the model – we need to build walls before the roof, for instance – what steps need to be taken when, and were they successful?
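To make the flow concrete, here’s a toy sketch, in plain Python, of how these three stages might chain together. Every name in it (the `Part` class, `detect_parts`, the canned results) is hypothetical, invented purely for illustration; this is not BrickBot’s actual API.

```python
from dataclasses import dataclass

@dataclass
class Part:
    kind: str                          # e.g. "2x4 brick"
    pose: tuple[float, float, float]   # (x, y, yaw) in the bin

def detect_parts(bin_image) -> list[Part]:
    """Stage 1: object detection/localization. A real system would run
    the camera image through a detector; here we return a canned result."""
    return [Part("2x4 brick", (0.10, 0.05, 1.2)),
            Part("2x2 brick", (0.22, 0.14, 0.0))]

def grasp_and_orient(part: Part) -> bool:
    """Stage 2: pick the part up and rotate it ready for placement.
    Returns True if the robot assesses the grasp as successful."""
    print(f"grasping {part.kind} at {part.pose}")
    return True

def build(plan: list[str], bin_image) -> None:
    """Stage 3: planning, actuation and assessment. Walk the build
    order (walls before roof), retrying any step that fails."""
    for needed in plan:
        while True:
            parts = detect_parts(bin_image)  # re-scan the bin
            part = next((p for p in parts if p.kind == needed), None)
            if part and grasp_and_orient(part):
                break                        # step succeeded, move on

build(["2x4 brick", "2x2 brick"], bin_image=None)
```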
Back when Thomas Davies – a colleague based in Toronto – presented the project at the internal Autodesk Technical Summit 2017 (held in London), he mentioned they’d used five different convolutional neural networks to build their pipeline, with one feeding its results into the next. That was over a year ago, so the pipeline may well have grown since then.
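Thomas didn’t share the details of those networks, but the general “one network feeds the next” pattern is easy to sketch. Here’s a minimal PyTorch example where a tiny detector estimates a part’s location and a second network fuses that location with image features to estimate its rotation; the architectures and tensor shapes are invented for illustration and have nothing to do with BrickBot’s actual models.

```python
import torch
import torch.nn as nn

class Detector(nn.Module):
    """First stage: image -> coarse part location (x, y)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(16, 2)  # (x, y) in normalized coordinates

    def forward(self, img):
        return self.head(self.features(img).flatten(1))

class PoseNet(nn.Module):
    """Second stage: image + detected location -> rotation angle."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(16 + 2, 1)  # fuse image features with (x, y)

    def forward(self, img, loc):
        feats = self.features(img).flatten(1)
        return self.head(torch.cat([feats, loc], dim=1))

img = torch.randn(1, 3, 64, 64)  # one fake 64x64 camera image
loc = Detector()(img)            # stage 1 output...
angle = PoseNet()(img, loc)      # ...becomes stage 2 input
```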
Another interesting aspect of the project is that the robots were trained in a purely virtual environment: synthetic images were generated under different lighting conditions and fed into the pipeline, removing the need for real-world training data and making it dramatically faster to train the system. Super cool!
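This kind of training is often called domain randomization: render the same scene over and over with randomly sampled lighting (and other parameters), so the networks learn to ignore the variation. As a toy illustration, assuming nothing about Autodesk’s actual tooling, here are a few lines of numpy that shade a trivial procedural “scene” (a Lambertian sphere) under random lights to produce a batch of synthetic training images.

```python
import numpy as np

rng = np.random.default_rng(0)

def render_sphere(light_dir, ambient, size=64):
    """Shade a unit sphere with one directional light plus an ambient term."""
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    mask = x**2 + y**2 <= 1.0                       # pixels on the sphere
    z = np.sqrt(np.clip(1.0 - x**2 - y**2, 0, None))
    normals = np.stack([x, y, z], axis=-1)          # surface normals
    light = light_dir / np.linalg.norm(light_dir)
    diffuse = np.clip(normals @ light, 0, None)     # Lambertian shading
    return np.where(mask, ambient + (1 - ambient) * diffuse, 0.0)

# Sample a batch of training images, each under different random lighting.
batch = [render_sphere(light_dir=rng.normal(size=3) + [0.0, 0.0, 2.0],
                       ambient=rng.uniform(0.05, 0.4))
         for _ in range(8)]
```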
The story has been making the rounds over the last few weeks… even the BBC have featured it. Here’s the video from the article:
I think this is a really promising project: this kind of capability is going to be extremely important in the future. Congratulations to Mike Haley, Yotto Koga and Thomas Davies for the well-deserved recognition of their work (even if Thomas is now off doing other things within the company).
On a related note, I’m looking forward to visiting SINDEX at the end of the month (it’s a biennial event that I attended in 2014 and 2016). Back in 2014, for instance, I saw some interesting approaches to solving the part-picking problem: I’m curious to see who else is working on this kind of problem, and what solutions are being proposed.