The MX3D team first showed the smart bridge to the public back in October 2018 at Dutch Design Week in Eindhoven. I was there with the team to finish and test features such as real-time display of skeletons detected using computer vision. Here’s a reminder of how that went:
An interesting aspect of the public’s response to the bridge at DDW2018 was that while it had broad appeal, different people appreciated different aspects of the project:
- The majority view: “wow, what a cool-looking bridge!”
- Those who dug a little deeper: “it was 3D-printed by robots? Interesting – I wonder how or when this will become viable as a construction technique.”
- A few folk who really got the nuances of the project: “this is the future of smart infrastructure: one day cities will function on the data that objects like this will generate.”
If I had to guess at the distribution between these groups, it’d probably be 70-25-5. I could be totally off with this guesstimate, but hopefully it gives some indication of the way people have responded to the bridge. It’s also very possible there are people who don’t even find it cool-looking, but I haven’t met any, as yet. :-)
Anyway, this post is for the 1 in 20, or rather to encourage people to join that third group. Here’s a graphic showing the bridge’s sensor network:
There were various reasons for putting nearly 100 sensors in a 12m bridge, many of them related to monitoring the structure: stainless steel that’s been 3D-printed by what is effectively FDM (although the correct term for this process is apparently WAAM, or Wire Arc Additive Manufacturing) is basically unproven as a construction material. Placing sensors in the bridge will allow us to better understand the scope – and limitations – of this technique, and to optimise the design of future structures made in a similar way.
But beyond that, a bridge that knows how it’s being used – and can report information about its usage for broader aggregation – has huge potential when integrated into a “smart city”. Today we rely on the highly dubious collection of personal mobile device data (sorry, Google) to navigate effectively around cities, but widespread adoption of smart infrastructure could tell us much more, and in a way that benefits the community while respecting individual privacy.
As a reminder of the scope of the bridge project – and an exploration of the implications around smart infrastructure, both good and bad – do check this excellent Quarantime episode with the marvellous Mickey McManus and Alec Shuldiner.
A wonderful thing about adding so many sensors to a structure such as the MX3D bridge is that we actually have no idea what we’ll learn. This is research at its most exciting, and I’m thrilled to be participating in some small way.
So what about seeing the data that the bridge is generating? With the MX3D bridge we have a couple of approaches for presenting sensor data to people in a way that makes sense.
The first is via smartbridgeamsterdam.com, which describes the function of the bridge and how the data it generates is going to be used. It has a lightweight view of data coming off the bridge – readings from a selection of sensors, albeit with no Y-axis labels – to help people appreciate that the bridge is “smart”. We’ve codenamed this effort “Dasher Light”.
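To give a feel for that deliberately minimal presentation (the real site is a web front-end, so treat this purely as an illustration of the effect), here’s how a single sensor trace might be plotted with its Y axis hidden:

```python
import matplotlib.pyplot as plt

def plot_light(timestamps, values, sensor_name):
    """Plot a sensor trace 'Dasher Light'-style: show the shape of the
    signal but hide the Y axis, so viewers see trends rather than values."""
    fig, ax = plt.subplots(figsize=(8, 2))
    ax.plot(timestamps, values, linewidth=1)
    ax.set_title(sensor_name)
    ax.set_yticks([])  # no Y-axis labels, as on the public site
    for side in ("left", "top", "right"):
        ax.spines[side].set_visible(False)
    plt.show()
```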
The brains behind Dasher Light’s view on the data are Mike Lee and Josh Cameron, who took care of the front- and back-end work, respectively. (Jacky Bibliowicz has been working hard on the time-series back-end, too, which is less visible but super important.) They have done an amazing job pulling this together at short notice: they basically had 12 hours between the bridge coming online and the opening to get something working.
While the data is live, it’s not our intention that people would necessarily change their behaviour because of it, e.g. to jump up and down on the bridge to see how the data changes. We’ll see whether that happens or not. Josh does think that he can detect people crossing the bridge in the strain-gauge data, though (something we’ll be able to confirm once we have the ability to see how many/where people are on the bridge, which we’ll talk more about in a bit). When he talks like this he makes me think of Tank reading the code of the Matrix.
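Josh’s analysis is no doubt more sophisticated than this, but as a rough sketch of the idea (with an assumed sample rate and a naive threshold, neither of which reflects the actual system), detecting candidate crossings in a strain-gauge trace might look something like this:

```python
import numpy as np
from scipy.signal import find_peaks

def detect_crossings(strain, fs=100.0):
    """Flag likely footfall events in a single strain-gauge trace.

    strain: 1-D array of readings from one gauge
    fs: assumed sample rate in Hz (the real gauges may well differ)
    """
    # How far each sample sits from the quiet baseline...
    deviation = np.abs(strain - np.median(strain))
    # ...and anything well clear of the typical noise level is a candidate,
    # with at least a second required between distinct events.
    threshold = 5 * np.median(deviation)
    peaks, _ = find_peaks(deviation, height=threshold, distance=int(fs))
    return peaks  # sample indices of candidate crossing events
```

A real detector would combine several gauges and compensate for thermal drift, but the basic pattern is the same.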
The second way for people to view data from the bridge is via Project Dasher, which is really the reason I’m involved in this project. The Dasher team has been working to display historical data captured from the bridge, contextualising it in 3D. This is partly to help people see the location of the various sensors, but also to display heatmaps for different sensor types on the surface of a 3D model of the bridge. We have the basics working well, but we need a bit more data in our back-end to make this truly valuable for people to use.
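Dasher’s heatmap rendering happens in the 3D viewer itself, but the core idea of spreading a handful of point readings over a surface can be sketched with simple inverse-distance weighting over the mesh vertices (the sensor positions and values here are placeholders, not the bridge’s real layout):

```python
import numpy as np

def sensor_heatmap(vertices, sensor_positions, sensor_values, power=2.0):
    """Interpolate sparse sensor readings over mesh vertices for colouring.

    vertices: (V, 3) array of mesh vertex positions
    sensor_positions: (S, 3) array of sensor locations on the structure
    sensor_values: (S,) array of current readings (strain, temperature, ...)
    Returns a (V,) array of per-vertex values for a renderer to map to colours.
    """
    # Distance from every vertex to every sensor
    d = np.linalg.norm(vertices[:, None, :] - sensor_positions[None, :, :], axis=2)
    # Inverse-distance weights: nearby sensors dominate, distant ones fade out
    w = 1.0 / np.maximum(d, 1e-6) ** power
    return (w * sensor_values).sum(axis=1) / w.sum(axis=1)
```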
This is the video that was shown to Queen Máxima during the bridge’s opening. It’s based on some test data – as the bridge wasn’t yet online or even installed when we created it – but it gives an idea of what we’re expecting to be able to see, in time.
Here’s a sneak peek of the MX3D bridge with actual data inside Dasher:
As a reminder, the journey to capture and display data for the MX3D bridge started a long time ago, soon after the inception of the project in 2015.
Knowing this project was coming, Alex Tessier and Alec Shuldiner pushed for a separate, internal project to instrument a raised walkway in Autodesk’s Pier 9 office in San Francisco, allowing us to test that our systems could deal with the increased data load associated with monitoring infrastructure (which generates data at much higher frequencies than is typical for measuring building comfort). Pier 9 became a great test-bed for our computer vision research, too, allowing us – via Project Ajna, spear-headed by Pan Zhang and Liviu Calin – to start extracting anonymised 3D skeletons of people crossing the walkway from video camera footage.
This is the next big hurdle for us to cross with the MX3D project: getting two sets of cameras installed and hooked up to the bridge’s nervous network that will feed the skeletonisation process and allow us to display anonymised views of people crossing the bridge alongside the other data the bridge captures.
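Project Ajna’s pipeline is its own research effort, but the basic anonymisation idea, keeping the joints and throwing away the pixels, can be illustrated with an off-the-shelf pose model such as MediaPipe (the video file here is hypothetical, and this is emphatically not the Ajna code):

```python
import cv2
import mediapipe as mp

# Illustration only: extract pose landmarks per frame and keep just the
# joint coordinates, discarding the video frames themselves.
# (MediaPipe's pose solution tracks a single person per frame.)
cap = cv2.VideoCapture("walkway.mp4")  # hypothetical footage
skeletons = []

with mp.solutions.pose.Pose(static_image_mode=False) as pose:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.pose_world_landmarks:
            # 33 joints, each an (x, y, z) in metres relative to the hips
            joints = [(lm.x, lm.y, lm.z) for lm in result.pose_world_landmarks.landmark]
            skeletons.append(joints)

cap.release()
```

The key point is what gets stored: joint coordinates only, never the frames themselves.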
We also have some additional LoRaWAN sensors to install that will detect the ambient temperature (etc.) of the area surrounding the bridge, which will help us better understand how the bridge’s own temperature changes based on weather conditions. That’s a much smaller thing to take care of, though: the camera installation and calibration is the bigger challenge but will also help enable the overall vision for the MX3D Smart Bridge.
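Once those ambient readings start flowing, one of the first questions will be how closely (and how quickly) the bridge’s own temperature tracks the surrounding air. A sketch of that comparison with pandas might look like this, assuming hypothetical CSV exports with a timestamp and a temp_c column:

```python
import pandas as pd

# Hypothetical exports: ambient readings from the LoRaWAN sensors and
# readings from one of the bridge's internal temperature sensors.
ambient = pd.read_csv("ambient_temp.csv", parse_dates=["timestamp"], index_col="timestamp")
bridge = pd.read_csv("bridge_temp.csv", parse_dates=["timestamp"], index_col="timestamp")

# Resample both onto a common 10-minute grid so they can be compared directly.
aligned = pd.concat(
    {
        "ambient": ambient["temp_c"].resample("10min").mean(),
        "bridge": bridge["temp_c"].resample("10min").mean(),
    },
    axis=1,
).dropna()

# How well does the bridge track the air temperature, and with what delay?
for lag in range(0, 13):  # 0 to 2 hours, in 10-minute steps
    corr = aligned["ambient"].shift(lag).corr(aligned["bridge"])
    print(f"lag {lag * 10:3d} min -> correlation {corr:.3f}")
```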
In the next post we’re going to focus on the first group of people – who think the bridge looks cool – by taking a look at how it looks under different lighting conditions and at different times of day. It turns out it is possible to be both smart and beautiful, after all. :-)