David Rose is an entrepreneur, MIT lecturer, author, and pioneer in ambient computing. Among other institutions, his work has been featured in MoMA, The New York Times, WIRED, The Economist, and The Daily Show. He’s the author of two books: Enchanted Objects and his latest, SuperSight, which is the subject of our conversation.
Show notes
- David Rose - Enchanted Objects
- David L. Rose - Wikipedia
- David Rose - LinkedIn
- supersight.world
- SuperSight: What Augmented Reality Means for Our Lives, Our Work, and the Way We Imagine the Future by David Rose
- Enchanted Objects: Innovation, Design, and the Future of Technology by David Rose
- Home Outside
- Clearwater
- Ambient Orb
- Google Glass - Wikipedia
- Meta Quest Pro
- IKEA Place on the App Store
- ARtillery Intelligence
- lululemon Studio and the MIRROR
- CityScope
- Lumen World
- Handheld projector - Wikipedia (pico projectors)
- Find a National Park Service Map - GIS, Cartography & Mapping (U.S. National Park Service)
- Nreal
- Freedom Boat Club
- Candela per square metre - Wikipedia (nit)
- Audi introduces headlights that can project images with its electric SUVs - Electrek
- Fireflies.ai
- inCitu
- Figma
- Miro
- Reality Composer on the App Store
- Adobe Aero
- Sketchfab
- TurboSquid
- Warby Parker
- Beat Saber
Some show notes may include Amazon affiliate links. We get a small commission for purchases made through these links.
If you're enjoying the show, please rate or review us in Apple's podcast directory.
This episode's transcript was produced by an AI. If you notice any errors, please get in touch.
Transcript
Jorge: David, welcome to the show.
David: Hey, thanks Jorge. I’m so excited to reconnect with you.
Jorge: I am very excited to reconnect with you as well. As the word reconnect implies, I’ve been aware of your work for a long time, but folks listening in might not be familiar with your work. How do you introduce yourself?
About David
David: It’s a problem. I mean, I’ve been working across maybe eight startups over the last twenty-five years or so. My first startup was called Interactive Factory, and it did simulations for learning, SimCity-like simulations, and museum exhibits and toys. We worked on LEGO Mindstorms and Guitar Hero in the early days. And then I went on to found a photo sharing company, and then Ambient Devices, which made glanceable information objects that were wireless, pre-internet of things.
And then I wrote a book called Enchanted Objects, again about connectedness and how sentience will come into the world around us: into jewelry and watches and wearables and glasses. And then I took a turn towards computer vision and machine learning and founded a company called Ditto Labs, which is all about social shopping out of other people’s photos.
And recently, I came out with a new book called SuperSight, which is all about the augmented reality “real world” metaverse future, how designers need to learn a new set of tools for creating in spatial computing, and how the UX for spatial computing will be very different from the mobile tools that we all know and have become accustomed to.
Ambient information
Jorge: I think that I became aware of your work first when you were at Ambient Devices and I remember this one product that stuck with me: the Ambient Orb.
David: Yeah! It was really more of a lamp than a… I mean, it didn’t look anything like a computer or a technical object; it was just a grapefruit-sized frosted glass orb that had LEDs inside that could glow any color. But we hooked it up to a wireless network.
Actually, we used an 8,000-tower pager network to get data to it. We cost-reduced the pager receiver for the object so that it would be less than five bucks to send data to this thing. And it works deep in buildings; you don’t have to hook it up to wifi or Bluetooth or anything.
And then the color represents whatever you care about. So you could choose a channel like, “what’s the temperature going to be tomorrow?” and the color maps onto a temperature scale for your zip code tomorrow. And it would pulse if it’s raining, or show when the next bus is coming, or what the surf status is at your beach, or traffic congestion on your route, or your stock portfolio. You know, whatever is interesting enough that it changes dynamically and can be reduced to one-dimensional data.
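For readers who want the idea in code: here is a minimal sketch of the one-dimensional “channel” mapping David describes, where a single value is reduced to a color. The channel, scale, and ranges are hypothetical, not Ambient Devices’ actual firmware.

```python
def lerp(a: float, b: float, t: float) -> float:
    """Linear interpolation between a and b."""
    return a + (b - a) * t

def temperature_to_rgb(temp_f: float, lo: float = 20.0, hi: float = 95.0) -> tuple[int, int, int]:
    """Map tomorrow's forecast temperature to a blue (cold) -> red (hot) color."""
    t = min(max((temp_f - lo) / (hi - lo), 0.0), 1.0)  # normalize and clamp to [0, 1]
    return (int(lerp(0, 255, t)), 0, int(lerp(255, 0, t)))

# The orb pulsed for events like rain; here we just compute a steady color.
print(temperature_to_rgb(72.0))  # a warm red-violet for a mild day
```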
Yeah. People loved that product. And actually, the big insight from Ambient Devices, which led to the creation of my next company called Vitality, which is all about behavioral economics and healthcare, was that when you put information in front of somebody that they glance at throughout the day (when the information is kind of unavoidable), then behavior change happens as a result.
So, if you track when the next bus is coming, you trust the transportation network more; if you track how many steps you’ve walked, you get obsessive about activity. So, I think it ties into the cues we need in order to live the lives we want to, through information and choices.
Jorge: And in the case of the Ambient Orb, like you said, this was one thing that you cared about, right? Whether it was the arrival of the bus or…
David: Blood sugar levels for your diabetes? Yeah.
Jorge: Exactly! And I get the sense that… well, I can see a very direct through-line between the Ambient Orb and the things you’re writing about in SuperSight, except that we now have the ability to communicate a lot more information than just one parameter. So what is SuperSight?
What is SuperSight?
David: Well, SuperSight, I feel, is the next evolution of how we will see. If you look at the animal kingdom, there’s a lot of specialization for seeing in the dark or seeing a long distance away, or having super wide-angle vision for understanding where predators are.
And I see this next generation of wearable computing evolving human sight and perception in a way that will give us this kind of information prosthetic: the ability to see anything that’s contextually relevant, superimposed over the world.
When we started Ambient Devices twenty years ago, we had to create a physical object (a lamp for your mantle or your bookshelf) in order to deliver glanceable information. But now, of course, we can do that with AR, and that information can be mapped to the right moment and the right place to make a better decision.
A simple example: a light switch today doesn’t carry any information except whether it’s on or off. But you could add information about when it was last turned off, or how much energy this room has consumed over the last year. You can sprinkle data lightly, in an atomized way, throughout your home or work environment to show guidance or risk, or more context and metadata, over anything in the world.
That’s, in my mind, the exciting part of SuperSight: how do you spatialize all this amazing data that’s online today?
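One way to picture that “atomized data” idea in software terms: each physical object gets a small anchor record pairing a location with its metadata channels. This is a sketch of the concept only; the field names and values are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class SpatialAnchor:
    """A physical object annotated with glanceable metadata."""
    object_id: str                         # e.g., "kitchen-light-switch"
    position: tuple[float, float, float]   # meters, in a room or building frame
    metadata: dict = field(default_factory=dict)

switch = SpatialAnchor(
    object_id="kitchen-light-switch",
    position=(2.1, 1.2, 0.0),
    metadata={"last_turned_off": "22:14", "kwh_last_year": 148.0},
)

# An AR renderer would look up nearby anchors and draw their metadata in place.
print(switch.metadata["kwh_last_year"])
```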
Jorge: So is SuperSight just another way of saying ’augmented reality’ or…
David: Yes! That’s all.
Jorge: Well, I’m asking because the phrase ’augmented reality’ is in the book subtitle, right? It’s: “What Augmented Reality Means for Our Lives, Our Work, and the Way We Imagine the Future.” And yet when I was reading the book, I got the impression that, obviously, sight is central to the notion of SuperSight. It’s right there in the word. But it’s not just the stuff that is coming in through our sense of sight; there’s also an aspect of this that has to do with computer vision, right?
David: Yeah, that’s right. I mean, in order to be able to spatially anchor information in the world, we need to know a bunch of things about context. And that might be just, “where are you?” And, “what is around you?” But it also may have to do with your own states: Are you confused? Are you engaged? Are you overwhelmed right now? Are you a little bit bored? And so, part of it is being able to turn the cameras towards ourselves and sense what our state is right now.
You know, being understood by these tools changes the game. And then there’s also knowing what’s in front of you: you really need computer vision and scene segmentation and object recognition in order to see what’s in front of you and locate the right kind of cues at the right time.
Jorge: So in a way, it’s kind of a visual prosthetic — to use your word — for us, but it’s also adding or amplifying the visual sense of the computers, right?
David: Yeah. I often say step one is to create a digital twin of the world. That’s the first thing you need to do. We’re working on a project right now. I’m the CTO of a company called Home Outside, and we’re trying to persuade and charm and inspire people into investing in their own yard.
So the idea is: make a digital twin of every yard in America. There are 53 million of them. And then, using street view data and satellite data (thank you, Google, for some of that!), algorithmically move in shade trees if they don’t have shade trees, move in natural pollinating bushes and hedges, understand the sun and shade conditions, and then give them this amazing 3-D, blowing-in-the-wind view of their own yard that they never could have imagined before. And then, for reasons of climate change, or saving water, or just increasing the value of their home, they see this dream of a future yard, designed with garden kits that are dropped in programmatically.
And being able to do this at scale, I think, could really change how people invest money in their yards. But it’s a perception problem in terms of creating a digital twin, and it’s also a context problem: how do you anchor that new design in 3-D, in augmented reality, as something they see either through their phone or through new glasses that are coming soon to a Verizon store near you?
Augmented reality versus virtual reality
Jorge: I wonder if a lot of folks know what is meant by augmented reality. It might be worth unpacking that, especially in contrast to virtual reality, which I think has been more in the news than augmented reality.
David: Yeah. These terms are morphing quickly. I mean, when I say ’augmented reality,’ I don’t mean the Google-Glass-floating-screen-that-doesn’t-know-what’s-in-front-of-you vision. I’m talking about anchoring objects and information to the world, to other people, and to surfaces. You know, knowing enough about what’s around you so that things can be spatially anchored, glued to the world. So as your head looks away and then looks back, it’s still there. That’s augmented reality.
Now, virtual reality is usually when you have blinders on, so you’re just looking at opaque monitors in front of your eyes. Those still respond to a gyroscope in the unit, so as you look left, the world moves. But the reason I say it’s changing, and the industry is calling it ’mixed reality’ or XR now, is that increasingly the new Oculus headset, the Quest Pro, has VR with pass-through. Pass-through means there are cameras on the front of the VR headset that take the camera view of the world and can mix that with layers and show you a mixed reality view as well, even though you’re wearing occluding headsets that are still big and wonky.
But the augmented reality vision is really to use what’s called a beam splitter or waveguide, so that you can see the actual pixels of the world and the photon… sorry, not the pixels of the world, the photons of the world; you actually are looking through your glasses, but then a virtual layer of information is merged with that view.
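The “glued to the world” property David described a moment ago falls out of simple geometry: anchored content is stored in world coordinates and re-projected through the headset’s current camera pose every frame. A toy pinhole-projection sketch, with made-up camera intrinsics:

```python
import numpy as np

def project(world_point, cam_rotation, cam_position, focal=800.0, cx=640.0, cy=360.0):
    """Project a world-space point into pixel coordinates for the current pose."""
    p_cam = cam_rotation.T @ (world_point - cam_position)  # world -> camera frame
    if p_cam[2] <= 0:
        return None  # behind the camera: not drawn this frame, but the anchor persists
    return (float(focal * p_cam[0] / p_cam[2] + cx),
            float(focal * p_cam[1] / p_cam[2] + cy))

anchor = np.array([0.0, 0.0, 2.0])  # a label glued two meters in front of the origin
print(project(anchor, np.eye(3), np.zeros(3)))  # (640.0, 360.0): centered while you face it
```

Look away and the anchor leaves the camera frustum; look back and the same world coordinates put it right back where it was.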
Jorge: I recently installed new shelves in my house, and in order to do that, I downloaded an app from IKEA that allowed me to point the iPhone’s camera at the room where I was hoping to put the shelves. It would render the camera feed (what the camera was seeing) on the iPhone screen, and on top of that scene, it overlaid a 3-D model of the shelves I was thinking of buying. And I could flip through different models and position them differently and see how they would look in the room. And that, I think, is an example of what you’re talking about with mixed reality. Is that right?
Planes of projection
David: Yeah. Well, I would still call that an augmented reality experience, or mixed reality. One of the frameworks in the book is what I call the “planes of projection.” I think augmented reality can be either a glasses view at one end of the spectrum (a personal, private view that only you see) versus a hold-up-your-smartphone-or-tablet view, which is available on billions of phones today.
I just read a report this week from ARtillery (A-R-tillery) that said something like 30% of people use AR every week, which I thought was a pretty high number. So, moving down the spectrum of the planes of projection, there’s a see-through view, which is more like a windshield heads-up display in a car or jet fighter. Or Lululemon has a product they acquired called MIRROR, which shows you in the mirror and blends that with the yoga instructor or pilates instructor.
So that’s another kind of augmented reality, but that’s a see-through or see-a-reflection kind of view. Like, say you’re in a hotel room that looks over a city like Las Vegas. If you stand in one place or it knows where your eyeball is, you could superimpose information on the city below you. So you’re still seeing the city, but you’re also seeing superimposed information. So that’s like a see-through AR.
And then, the last one on the spectrum is projected AR. So that’s when you have a data projector that knows what’s in the world and can superimpose data on top of things. Like at MIT, we have a project called CityScope, which has a LEGO model of a city and there are two data projectors that are projecting information about walkability score or rent prices or access to amenities like food so you can identify food deserts. And so, as you move around the LEGO pieces that compose the city, that metadata about the city dynamically changes so you can see that there’s a food desert or there’s a decrease in walkability score. So that’s a projected view.
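Projected AR like CityScope has to warp its data so it lands on the right spot of the physical model. A standard approach for a flat table is a planar homography from table coordinates to projector pixels; the matrix below is invented, standing in for a real calibration.

```python
import numpy as np

# A calibrated homography would come from matching known table points to
# projector pixels; these numbers are made up for illustration.
H = np.array([[1.80, 0.02, 40.0],
              [0.01, 1.75, 25.0],
              [0.0005, 0.0002, 1.0]])

def table_to_projector(x_cm: float, y_cm: float) -> tuple[float, float]:
    """Map a point on the table (in cm) to projector pixel coordinates."""
    u, v, w = H @ np.array([x_cm, y_cm, 1.0])
    return (float(u / w), float(v / w))

# When a LEGO block moves, re-query its metadata and redraw at the new spot.
print(table_to_projector(10.0, 20.0))
```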
And one of my students from a couple years ago created an AR flashlight. It knows the building information model, where the electrical and plumbing runs are in the walls, so as you shine this AR flashlight (which is really just a little pico projector) against the wall, it knows where you are in the building and can project a kind of x-ray view into the wall. So that’s more of a shared-view AR.
I’m so fond of the shared AR experiences because then it’s not this privatized thing where only you see it and people don’t know what you’re looking at. We were using the flashlight the other day at Harvard, which has a museum with these old hieroglyphs that were all painted with amazing colors. But nobody’s restoring them with all of the paint. So you can shine the flashlight at the hieroglyphic reliefs and have them painted back in, and they also talk to you; they’re translated for you. And it’s really this beautiful, magical experience.
And of course, you could do that with a phone, but you just wouldn’t be able to do it with other people as easily as having the information be on the wall. You know, on the hieroglyph.
Jorge: Right. It’s like there’s a continuum between having this be an individual versus a shared hallucination, right?
David: Yeah, that’s right. You always want to do hallucination with other people. I do!
Jorge: But this idea of the… I’m super intrigued by the idea of the AR flashlight, because that brings to life what you said at the beginning, that the first step is creating a digital twin of the world. My expectation is that that flashlight is going to be useful to the degree that the model is complete. The model of the building.
Adding an information layer to the world
David: Absolutely. Yeah. The product is called Lumen; it’s lumen.world. And I think they’re really thinking about what are those places that do have digital twins (or where you can make a digital twin) and that want to be shared. Like maybe it’s an escape room, and you use the flashlight in the escape room. It’s already dark in the escape room, so you get more clues or you can see where the murder weapon is. Or maybe it’s an Airbnb kind of thing where you walk into a house you’ve never been to before, and you get the “how-to”: how to use the house, how to find stuff, or the backstory behind the things that are there. Almost like a house-as-museum type of experience.
Jorge: And then extrapolate ’house’ to the whole world. Eventually, we could annotate everything and add this information layer to the world, right?
David: Yeah, as long as it’s not super bright out.
Jorge: Right, right
David: I’m working on a product right now for boating, and we were really excited about this idea of looking through the water and seeing the terrain below, the topo lines of the terrain below. I think some of the most beautiful, information-dense objects are maps: the maps you take hiking, created by the National Park Service, with all of the terrain and the hill shading. Those are just so gorgeous! I was like, “can’t we do that below water?” Because people who are driving boats want to see where the rocks are, and people who are fishing want to see where the structure is.
So we started this company called Clearwater (that’s clearwater.ar) with this fantasy of: okay, you hold up your phone, you see through the body of water, you see safe passage tracks, you see where the fish might be biting based on time of day and season. And then we started working with these AR glasses from a company called Nreal, which are actually out in Verizon stores for about $500. And we started working with Freedom Boat Club, which is a boat rental club where you pay four hundred bucks and then only pay for gas, and you can use as many boats around the world as you want. And you usually have no idea where you are or where to go, so you need a lot more guidance.
But these Nreal glasses are just not bright enough. They’re the brightest AR glasses on the market, but if it’s a sunny day, that’s a lot of nits (the nit is a measure of luminance, of how bright the world around you is). So you would need sunglasses, and you also need an incredibly bright projection display in order to overcome the ambient brightness of the world.
So now, what we’re doing with that boating app is putting a camera on top of the boat to give you a better base view, and then superimposing the safe passage tracks and the buoys and the names of other boats. But we’re displaying the information on the screen that’s already sitting on many boats next to the steering wheel; it’s called a multifunction display. That’s already super bright and waterproof, and it’s the obvious place to show AR on boats: if they already have a super bright screen, use that.
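The brightness problem is, at bottom, arithmetic: an overlay needs some minimum contrast ratio against the scene behind it. The ambient figures below are ballpark illustrations, not measurements from the episode:

```python
def required_display_nits(ambient_nits: float, contrast: float = 1.5) -> float:
    """Display luminance needed for the overlay to sit at contrast x background."""
    return ambient_nits * contrast

# Rough, illustrative ambient luminances (a nit is one candela per square metre).
for scene, ambient in [("indoor room", 250), ("overcast water", 2_000), ("sunlit water", 8_000)]:
    print(f"{scene}: need roughly {required_display_nits(ambient):,.0f} nits")
```

Which is why a multifunction display already engineered for sunlight beats see-through glasses on a boat.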
Why SuperSight is inevitable
Jorge: It feels like there are so many possibilities latent in this idea of overlaying information on the world contextually. You say in the book that… I’m going to paraphrase, but you say that SuperSight is the “inevitable next wave of tech.” Why is it inevitable?
David: Well, I think it’s the same way that IoT was driven by the falling costs of connectivity and the abundance of data, which made the ability to put sensors in toothbrushes and doorbells and thermostats and watches something that certainly could be done from a business perspective.
Now, designing that in a way where people adopt it is the designer’s challenge: to find those opportunities. But I look at the same kind of macro trends: the falling costs of compute, and how display technology is getting smaller and brighter and more battery efficient. Even pico projectors, right? Audi is showing that the flagship version of its next car has pico projectors all around it, so it’s projecting information; you can decide what should be projected on the ground next to your door, and you can have the next turn not only put on the heads-up display, but also projected on the world around you.
Those projectors are also getting brighter and higher-res and less costly, so we’ll be replacing the light bulbs in our homes with pico projectors, right? Every light bulb could be a pico projector. So, over your kitchen island, if you have two pendants, those could be projecting, and will be projecting, data down.
And so now, as designers, we should say, “well, let’s assume there’s a little camera in every light bulb in your home or workplace, because cameras are also super cheap, and all of those light bulbs don’t just shed light; they’re pico projectors. Now, how should we arrange information about cooking, or collaboration while you’re cooking, in ways that people find helpful and desirable?”
Hazards of SuperSight
Jorge: That sounds super exciting. Hearing you talk about it, my mind is racing with the possibilities. And also there’s a part of me that feels a little alarmed. And one of the things that I liked a lot about the book is that peppered throughout the arguments in favor of SuperSight and the explanations of everything that this set of technologies promises to do for us and to us, you have peppered ’hazard’ warnings.
David: Yeah. I came up with six hazards that I’m the most worried about. But I agree: this whole thing is fraught with information privacy issues, persuasion and advertising issues, cognitive-crutch issues.
Let’s talk about the cognitive crutches, because I feel like that’s such an interesting question about what happens when you have a new tool that helps you, whether it’s GPS or this app I’ve been using called fireflies.ai. It joins me at every meeting, transcribes the call, gives me a highlight summary of the call, and then also does an analysis of who’s dominating the conversation and whether they sound pissed or happy. Will that make me less attuned to being able to measure those things myself? So, I call them cognitive crutches.
But what do you think? Will these tools atrophy our ability to have conversations with other people without hints at what to say orbiting over their heads? Will it become hard to just have a conversation with a person without that help, if you will?
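As an aside, the “who’s dominating the conversation” statistic David mentions is easy to approximate from any diarized transcript. The data format here is invented for illustration, not Fireflies.ai’s actual output:

```python
from collections import Counter

# A toy diarized transcript: (speaker, utterance) pairs.
transcript = [
    ("alice", "I think we should ship on Friday."),
    ("bob", "Agreed."),
    ("alice", "I will write the release notes and tag the build tonight."),
]

words = Counter()
for speaker, utterance in transcript:
    words[speaker] += len(utterance.split())

total = sum(words.values())
for speaker, n in words.most_common():
    print(f"{speaker}: {n / total:.0%} of words spoken")
```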
Jorge: Right. And people have had objections to new technology on those grounds for a long time, right? Was it Plato who objected to writing because it would impair our memories?
David: Yeah. Right, right, right.
Jorge: But it is important to have these concerns front and center. I suspect that much is going to come down to business models.
SuperSight’s promise for the future
Jorge: There’s a thesis in the book. And actually, it’s funny, I’ll mention this here because it hasn’t come up yet in the conversation: the book itself leverages augmented reality; there’s an app that goes with the book and provides additional content. And it’s really uncanny, and I’ll just describe it here for listeners. There are certain illustrations in the book that you can point your phone camera at, and the book comes to life with videos of David explaining concepts, diagrams animating, and stuff like that.
And in one of those videos, you say that the central thesis of the book is that SuperSight can help us overcome “a profound failure of imagination, allowing us to see vividly a more sustainable and equitable future.” And I’m wondering how it can do that. How can it help us envision a more sustainable and equitable future?
David: Well, I hope that listeners to this podcast take up that challenge, because you can project anything into people’s field of view. The obvious things that companies are working on now are like, how to fix this thing, or what’s the name of this thing, or helping with complicated tasks.
Like, I was just working with an energy company in Italy last week that’s trying to do workforce development: trying to get people to more quickly learn how to do complex operations, and to kind of teleport an expert into their view of a power plant or nuclear plant if they need help in that moment. So, there are some really obvious uses of this technology that a lot of companies are pursuing.
But to me, the most important challenge we can take on is to use the technology as an imagination engine, to help us see things that are hard for humans to see. If you’re not trained as a city planner, you don’t go around looking at intersections and say, “oh my gosh, this needs to be redesigned! This is not pedestrian-friendly. This is not good. This intersection is terrible if you’re a biker, because you have these occluded views, and we really need a place for the bikers to be so that the cars see them as they take off out of the intersection.” If you’re not trained as a city planner, you just see intersections, and you don’t see what things could or should look like.
It’s the same with the example I gave earlier about seeing how beautiful your front lawn could be if it were redesigned in a way that really prioritized sustainability. Most people just see their front yard and they’re like, “Eh. I don’t know; it’s fine. And I don’t want to kill anything, and I don’t know exactly what to do, and the plants have Latin names, and aggghhh!” You know, it just all seems too much and too complicated.
But if you can give people a view of the shelves in their study, or intersections, or front yards, or even themselves, I think there’s a huge opportunity to help people project forward and ask: how do the decisions I’m making today affect how I will feel and look in ten years? We’re all so myopic, living in the moment. I think being able to use this technology to see further and see the consequences will really be helpful.
Jorge: It sounds like it’s a means to prototype different ways of being at much greater scale, right?
David: I think that’s exactly right. I look at this as a fast prototyping tool for things that could be breadbox scale, but could also be world scale.
There’s a company in New York called inCitu (it’s in-C-I-T-U), and they’re working with developers to let people hold up their phones and see what this park, or the building that’s going up here, will look like. It’s really a prototyping and community-engagement tool, so they can get a lot more people to weigh in on how they think something will affect the neighborhood. That would have been hard to do at skyscraper scale before.
There was a really nice story about the same thing (maybe it was also inCitu): prototyping bridges that go across highways so that wildlife can pass. You increase the roaming area and the breeding area for wildlife by creating these bridges. They prototyped a bunch of them, and anybody driving by can hold up their phone and see what a proposal might look like. It’s a great way of prototyping larger.
For people listening to the podcast, the tools you use today might be dominated by Figma or Miro or other kinds of sketching and collaboration tools. I think the tools that we all need to learn are…
I taught a workshop a couple of weeks ago using Reality Composer, which is an Apple product, and Adobe Aero, which is obviously an Adobe product. Both of them come with asset libraries of 3-D objects, many of which are animated. And then you learn how to use Sketchfab and TurboSquid and other 3-D asset stores to be able to sketch in 3-D, in context.
I think we all need to level up our tools to start understanding what the capabilities are for being able to envision something that is happening over the top of the world.
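If you want to tinker outside those authoring tools, even a few lines of Python can inspect a 3-D asset downloaded from a store like Sketchfab or TurboSquid. This sketch assumes the third-party trimesh package and a local file named chair.glb, both illustrative choices rather than anything from the episode:

```python
import trimesh  # pip install trimesh

mesh = trimesh.load("chair.glb")  # glTF/GLB is a common download format
print(mesh.bounds)                # bounding box: check the real-world scale
mesh.show()                       # opens a simple interactive preview window
```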
SuperSight and experience design
Jorge: Yes, and I think that there are many people in this field who consider themselves experience designers, and the reality of how we have been designing these experiences so far is that they’ve been constrained to these small glass rectangles. But this feels like an invitation to think of the canvas on which those experiences are rendered as being the entire world.
David: Well, I think the design skills of hierarchy and typography and color are still very relevant. I was at Warby Parker for two years and worked on a remote eye-testing business and also a virtual try-on product, so you can just hold up the phone and see new glasses on your face. Thank you to Apple for the iPhone X camera, which gives you such a detailed terrain of the face that you can superimpose glasses in a very, very believable way, and at the right scale.
But we did a lot of iteration on how you make someone aware, while they’re perusing the app, of the virtual try-on experience and the AR experience. We came up with a simple pull-down mechanism: when you’re looking at the glasses, you pull down from the top of the screen, and that opens up the camera feed, and then you see the front-facing camera. But we still persist a little card at the bottom of the screen, so you know how to get back into the app experience.
And how do you carousel through the next set of glasses to try different things on your face? How do you try different colorways on your face, and how do you like and save those things? Getting people in and out of AR experiences is something we all need to think about: try some of the best experiences out there and really adopt some of those design patterns.
Jorge: It’s early days for the medium and I suspect that there’s still a grammar — an interaction grammar — to be developed, right?
David: Yeah, absolutely.
Jorge: Well, this is all super exciting and super relevant. I encourage everyone to read the book. It sounds to me like you have a lot of stuff going on. You’ve mentioned at least three or four companies that you’re involved with. Where is the best place for folks to follow up with you?
Closing
David: Sure: supersight.world. I was going to do supersight.ar, but I mean, .world is weird enough. So, supersight.world. I have example videos of projects that I’m working on. I also have a Miro board of design principles there: the kinds of things I’ve found about what you called ’design grammar.’ When designing in this medium, there are some common principles we can learn from Beat Saber and Google Maps navigation.
I’ve tried to synthesize those into design principles for spatial computing. That’s all at supersight.world: the frameworks from the book, a poster we created with the book, and links to prototyping tools. And certainly, if people are working on a project, I’d love to weigh in and point you towards resources or people I’ve seen. You can also schedule time with me; I think I have a Calendly link there if you want to schedule a half hour.
Jorge: That sounds fantastic. Folks, take advantage of this opportunity, because this is really exciting and, I think, transformational. Thank you, David, for sharing with us.
David: Thanks, Jorge. This has been a really fun conversation.