Michael Anton Dila describes himself as a “designer of conversation” and someone with a passion for starting things. Among the things he’s started are ventures in online learning, co-working, and mobile technology. He’s also held several leadership roles in an elite innovation unit in the U.S. Department of Defense. In this conversation, we discuss his latest initiative, Oslo for AI, which seeks to design better processes for governing artificial intelligence.

Show notes

Show notes include Amazon affiliate links. We get a small commission for purchases made through these links.

If you're enjoying the show, please rate or review us in Apple's podcast directory:
https://podcasts.apple.com/us/podcast/the-informed-life/id1450117117?itsct=podcast_box&itscg=30200

This episode's transcript was produced by an AI. If you notice any errors, please get in touch.

Transcript

Jorge: Michael, welcome to the show.

Michael: Hey, thank you. I’ve been looking forward to it.

Jorge: Same here. You and I have been friends for a long time, and frankly, I’ve long thought about what we might talk about on the show because I feel like we have so much to discuss. But you have a new venture that is really exciting, and it felt to me like the perfect opportunity to have a conversation with you and share it with our listeners.

About Michael

Jorge: But before we get into that, would you mind introducing yourself?

Michael: No, I’d be happy to. My name’s Michael Anton Dila. I recently turned 60 years old. That was quite an event, the first birthday where I feel like I really hit a psychological speed bump. So that’s been interesting, but it’s getting better every day. Let’s just do a short history. I was born in the United States and grew up largely in Canada. I’ve lived between the two countries, although I’ve mostly had my home in Toronto, Ontario for the last thirty-plus years. I am an avid and enthusiastic border-crosser — known to cross borders of all kinds, not just geographic ones. I have a background in philosophy, and at one point I thought I was going to be an academic, a professor. I am very happy to say that I got off that path and unexpectedly found a new one, initially in design.

I was super excited to discover a path back to creativity. Early in my life, I had been very interested in theater and film; when I was young, I thought that was my path. And finding myself in design, and back, especially, in a work life of collaboration with people whose skills I don’t have and probably never will, was actually enormously energizing. This was in the late nineties, and it was also a time when being in design — being a trained thinker — was auspicious and differentiating. So that was an exciting path into discovering all kinds of design. I was mostly in communications design and brand work, although I swam upstream over a decade and started getting into innovation work as well.

I’d say about 15 years into the design work, I did my first startup, around online learning and innovation knowledge. I was actually trying to solve two problems. One, I knew a lot of people like yourself: practitioners with deep practice, which often gets expressed or shared with others through the medium of the book. I wanted a product form that was more immediate and more useful to people who need that knowledge at work tomorrow. So, I was trying to build a platform that enabled that: on one side for practitioners, and on the other side for people trying to do innovation work in organizations, who need to know things but can’t afford to take a year or three off to get another piece of education. They need just-in-time help.

So that led me into startups. I did a couple of different startups over a period of five or six years. And then, in 2016, I ended up in a brand new innovation unit in the US Department of Defense and spent five years in a variety of leadership roles, building teams and functions in a unit whose work mostly had to do with connecting innovators in the Department with the outside world: in particular, with innovators in academia across the research university landscape in the United States, and in regional venture economies across the country.

And I finished that work in the summer of ’21, and then spent a couple of years very much immersed in AI. For better or worse, I’ve recently been putting it this way: for the last couple of years, I’ve had my head in the oven of AI, breathing in various fumes from different sources. At one point — and you may recall our talking about this, back in 2022 — I thought for about a year that I might do an AI startup.

And then, I put that down. I started writing about intelligence and developed a theory called System Three, which I’ve been writing about for about a year and a half. Then, in the middle of last year, a colleague here, Alex Ryan, and I, along with a very small team, made a submission to OpenAI’s call for proposals around democratic input to AI. That was the articulation point in this latest path. I really had been thinking about how all kinds of people can become more active participants — more enabled agents — in this emerging, not just technology, but this emerging world that this technology is going to shape, enable, describe, define, and so on. And so, that has taken shape in the form of this project called Oslo for AI. That gets us to the present day, and I’ll take a breath.

Jorge: And we will get into Oslo for AI and what that’s about. But I just wanted to reflect that back to you, because hearing you talk about it now, I think it’s the first time that I’ve heard your full backstory.

Designing Contexts

Jorge: Hearing you describe yourself as a designer, and then this Venn diagram between, I think you said, “design” and being a trained thinker, I was thinking, “Yes, you are a designer, but what kind of designer are you?” The phrase that came to mind is “designer of contexts”: someone who designs contexts in which people come together to do certain things. That’s the impression that I have as someone who’s watched you do this.

Michael: Yeah, I like that. I sometimes say that I design doorways for people to walk through, and sometimes I design the space beyond the doorway. That speaks to your point about the design of context. And I do that, although, I must say, I try to do it in the most light-handed way. I’m most interested in designing contexts which are not defined by me but maybe brought into a certain kind of interaction by me, with some interest in a certain direction for things to go, but also trying not to have a thumb on the scale of where they go, and remaining open, as much as anyone else, to following where they lead. I’m very interested in discovery. I’m very interested in exploration. I’m always interested in what I don’t know, what I haven’t seen.

Jorge: I’m thinking of the stone soup story, right? It’s that kind of context: there’s an initial provocation, the outcome is not fully known — or knowable — and maybe there are gestures that keep the thing evolving somehow. But to your point, it’s a very exploratory and emergent context.

Michael: Yeah, and it’s funny you mention stone soup, because a friend who I think you probably know, the futurist Stuart Candy, uses stone soup as a device, even a game, for starting certain kinds of conversations, and he used to host a kind of dinner-oriented conversation that I participated in at least once. It was definitely an experience that drew on that archetype of story and folklore: this idea of unwitting collaboration. There’s something about that.

The protagonist in the stone soup folktale is not exactly tricking the villagers into giving up their contributions; he or she invites them in a certain kind of way that, I think, is designed to get them excited about becoming participants.

And I relate to that.

Jorge: Yeah, I can see that.

A New Model for AI Governance

Jorge: Let’s switch gears to talking about the soup that you are cooking now, the Oslo for AI project. What is this? What is it about?

Michael: First of all, it’s useful to talk about why it’s called Oslo. As I started to imagine the project, I didn’t imagine calling it that right away. And I certainly had reservations about calling it that, because I knew that whenever you name something, you anchor it in a context, which can often have unintended consequences or echoes. The reason I called it Oslo is that I had rewatched a film called Oslo, based on a play, which dramatizes the secret negotiations behind the Oslo Peace Accords, orchestrated by a couple of Norwegians, a husband and wife, during the early 1990s, when the first negotiations between Israel and the PLO were happening, and Yitzhak Rabin, the Israeli Prime Minister at the time, and Yasser Arafat, the chairman of the PLO, were having their first-ever face-to-face talks and their first-ever serious negotiation for peace.

And the Norwegians saw, or believed, that the process wasn’t going so well. One of them had an idea about a very different way of doing negotiation and conversation, and they talked a couple of people from each side into entertaining an experiment, taking them away to Norway and starting a very different kind of conversation: much smaller, and, as part of a holistic approach, one that involved making relationships as well as doing the daily work of negotiation.

And this really resonated with work that I’ve done and been a part of for a long time in my life, in which I realized that when we form deep human relationships in our work, especially when we’re doing incredibly difficult, complex work, we can make available things that give us an enormous advantage. That’s part of my thinking in this work I describe as System Three: again, a theory about a much more full-spectrum kind of cognition that people sometimes do together, at least most recognizably, for most of us, through the medium of conversation. And so, I have also been interested in, and a designer of, conversations. You mentioned designing contexts; conversations, I might say, are a technology for navigating contexts. And so that’s why I called the project what it is.

What it wants to do is create new possibilities for participation. In particular, it’s designed as a process which, over the course of a year, will create a series of design engagements, in the form of three-day retreats for small numbers of people, eight to twelve at a time, in different places all over the world. And each time we bring those people together, we will ask them to work together to contribute to the design of something that we’re calling a constitutional assembly, which is meant to answer the questions, “What would it look like if we were to have a constitutional convention in the 21st century? And what would it look like if the context for that constitutional work were the governance of AI and pervasive technology?”

And so, the idea is that we’ll bring these small groups together, give them three intensive days of working on those questions and making a contribution to a design, and at the end of the project, we will assemble those designs into a demo that we can run, which will, as a practical matter, answer the question: we’ve spent a year asking what it would look like to design a new way of running a participatory constitutional process, and this is what it looks like. And the hope is that this has something to contribute to the future of governance in the context of AI: not as a substitute for other structures and institutions of governance, but as a contribution, perhaps adding something new, maybe something missing from our institutions and our institutional constructs, with their limitations.

Jorge: I would love to unpack what might be missing that the process aims to compensate for.

Michael: Yep.

AI Regulation

Jorge: But before we get into that: when I hear the word governance, I think in terms of systems thinking, where one possible synonym would be “regulate,” right? Like regulation. And there’s a lot of talk right now about the degree to which AI should be regulated. What I’m hearing here, and I’ll reflect it back to you so that you can correct my wrongheaded thinking on this, is that the process is going to enable a sequence of what I’m going to call “citizen assemblies” for exploring possible ways of regulating artificial intelligence. Is that a fair read?

Michael: Yeah, so the intent is, first of all, to take people who are working in various ways in and around AI as it’s emerging. Depending on where they come from, whether it’s commercial AI, entrepreneurship, government organizations, NGOs, or arts organizations, etc., they are people who already live within structures that have governance. And those governance structures ask people to do and care about certain things and focus on certain things. By design, those structures are limiting: they set limits. The concern I have is that those limits get in the way of our imagination of what governance might be or should be or could be.

And so, firstly, I want to take people out of those contexts, take them outside of the walls that their daily work takes place in, and bring them into a space that is less rule-bound and has more room for imagination and exploration, where we can explore together the terrain of what we want, what we’re concerned about, what we’re worried about, what we’re excited about, and what we think is important to get right as this technology emerges. Because to get it right, we may have to work in ways that current institutions are not structured for or don’t make possible, right?

Just as a concrete example, in a set of collaborative sessions that we ran as part of the project in January, several questions emerged around the systems of economic incentive into which commercial AI is emerging. And one of the questions that people had an interest in considering was: Are patents the right form of intellectual property for artificial intelligence? They are the dominant form of intellectual property for technology. But there are people who believe that if this technology becomes powerful in a certain kind of way, then perhaps it’s not a good idea for a single set of actors to have ownership and control of that technology; the founding of OpenAI was initially a direct answer to that belief.

And I think this is one of the really under-discussed things about the OpenAI crisis of last fall, which appeared to be a very human crisis, and I’m sure in many ways it was. But the thing that got talked about, though in a way also quickly passed over, was that it was literally a foundational, even constitutional, principle of the founding of that organization: the point of view that if something like what is described as artificial general intelligence (and we all agree that’s a fuzzy thing), such a super powerful technology, were to be brought into existence, it’s probably not a good idea that a single private company or a limited group of actors should have exclusive property in that technology. Our institutions of governance, at the level of companies, organizations, and governments, are not very well equipped to have conversations about that. So what could it look like? What should it look like? Questions like those are the ones I hope we’ll explore.

Jorge: That’s a super intriguing provocation. And I did have this on my list of questions for you: whether the “AI” in the title “Oslo for AI” refers specifically to artificial general intelligence, or whether you were thinking about the kinds of AIs that we have now, generative AIs like large language models and such. I do think of those as very different things in some ways, no?

Michael: Oh, for sure. And different in the important way, for the moment, that one set of them is real and actually exists today, while the other is still quite speculative. But there are other kinds of difference as well, to your point. And I think everyone is starting to say more explicitly that AI is really a very fuzzy kind of catchall and limited in its use.

And yet, it’s a term of art that now has broad currency, and I expect it will keep that currency for a while. For a useful anchor for what we mean by AI, I’m happy to go back to Alan Turing and his 1950 paper, where he talked about machines that could think. From neural networks to deep learning to machine learning, these are all techniques that are meant, in some sense, to simulate processes of thinking that can be described mathematically and computationally. And not with a single set of goals; in fact, I’m quite sure not with a single set of goals. It’s partly through a plurality of goals that we get to this idea of a general intelligence machine.

I think I’m more interested in multiplicity and plurality. I expect that there will be — and maybe already are, at a low level — all kinds of different forms of machine intelligence. If we think about our place in the animal kingdom, we already have available to us a way of thinking about different forms of intelligence among different kinds of living things, from animals to other systems, which I think can meaningfully be described as intelligent.

One of the exciting things for me about the focus on intelligence, and the hope that the focus broadens rather than narrows, is that it brings us the opportunity to recognize that we’ve been surrounded by other kinds of intelligence for a very long time. Certain projects of human dominance have obscured that reality. But not all human cultures have been unattuned to that way of thinking. And my hope is that many of us are coming back to it, because we see the opportunity to enrich all kinds of things about the way we live by connecting to other kinds of intelligence, by having a relationship with other kinds of intelligence.

Redefining Intelligence

Jorge: Again, I feel like this is a really powerful provocation. What I’m hearing here is that even though the immediate goal is to explore the possibilities of new ways of living in a world where these AIs exist, ultimately there’s a higher-order goal, which is to broaden the circle of what we consider to be intelligent, or worth considering as intelligent. Is that fair?

Michael: Yeah, I think it is. And one of the things that I’ve been thinking about a lot lately is that the word intelligence, even, or maybe especially, in our vernacular, is meant to dignify a certain high order of thinking. And I think that, again, at least certain traditions and certain cultures have used that idea to dignify the thinking of particular kinds of people, and have therefore increasingly narrowed our conception of intelligence, both in what it is and in who has it.

This is something else that I’ve been deeply concerned about: our ideas, or at least the dominant ideas and concepts of intelligence, belong to a very dominance-oriented way of thinking. The language of intelligence, the intelligence quotient, and the measurement of intelligence belongs to a pretty ugly history of the pseudoscientific division of races and other categories of humans into orders of dominance and subordination. And I think we need to be very mindful of the fact that that history is literally where our language about intelligence and our thinking about intelligence come from.

And as much as we need to be careful about biased data sets, which is a very common topic of conversation in how AI can go wrong, I think at a more fundamental level, this idea of superiority that is baked into certain discourses about intelligence is also something that we have to interrogate and make sure that we’re not inadvertently ushering something quite nefarious in the door.

I think the entire idea, which I find entirely unnecessary but which many seem to find — I don’t know — inevitable, is that our quest to make machine intelligence will invariably lead, if we’re successful, to creating an intelligence that’s superior to our own. And what does that mean? What does it mean to, one, believe that’s possible, and two, to have that as a goal? What would that accomplish? Funny enough, this is precisely the place where people start to get speculative and horrified about the idea of an intelligence that exceeds our own, doesn’t need us anymore, and decides that we’re disposable.

So I personally don’t worry about that. I find that science-fiction horror-show way of worrying about the technology not terribly useful. I think there are a lot of much more mundane things that we can be worried and concerned about without going there. But yes, I think this project wants to lead us into thinking in different ways about what intelligence is, what making it is, how we’ve already had traditions, I believe, of making kinds of intelligence, and what we can learn from paying more attention to that.

Jorge: You talked earlier in the conversation about the power of language as a framing mechanism. And to your point, when we introduce these dualities and hierarchies, then we’re immediately verging into territory that frames our understanding in ways that close off possibilities. And I love this idea of creating a space for exploring alternative ways of being in the world with these things.

Now, I’m curious about the impact that you expect these constitutional design exercises to have. Because the fact of the matter is that the AIs — at least here in the, I’m going to use the phrase, “Western world,” although I don’t know if that’s the right framing — I’m thinking of things like OpenAI, which you alluded to.

Michael: Sure.

Jorge: And that’s a commercial enterprise at this point, right? Despite what the founding goals were. And let’s say that an interesting alternative, a new way of being, a new governance mechanism, were to arise out of one of the Oslo workshops. Is the expectation that it would somehow be adopted by the people building AI? That it would inform civic frameworks like new laws? Or, I don’t know… what’s the expectation?

Michael: Yeah, so this is where things do get fuzzy. On the one hand, yes, I hope that what emerges from the work are practical tools for creating innovation in governance. What that looks like, I think, can and will depend a lot on context.

And I’ll give you an example, which I bumped into last week. There’s an organization in the UK; I think it’s called Projects by IF. A woman named Sarah Gold published and shared on LinkedIn a set of AI design patterns, or, more specifically, a set of design patterns that are meant to make the building of trust with AI systems possible, and part of a conscious design practice. When I saw those design patterns, I immediately realized that what I was looking at was a practical architecture of governance for designers: for the design of interfaces, and, moving out from those interfaces, for the systems that the interfaces are meant to provide a sort of regulatory structure around. Regulating the interaction between a machine system and its human users: that’s one shape that a governance structure, and a space of innovation in governance, could take.

Again, one of the conversations in our January collaborations got onto the idea of taking existing financial instruments and putting them to novel use. The specific scenario was: could we create a bond that allowed all kinds of people to buy into the ownership of AI technologies, become part of the wealth-building potential of that technology in the process, and thereby become both fiduciary and social stakeholders in this technology? Again, it’s just a sketch of a thought about how we might start from where we are, with existing financial instruments in the existing financial system, but lead things in the direction of innovating such that there is a real mechanism by which public ownership might actually become meaningful — not the kind of limited and, I think, all too meaningless sense of public ownership that we have in the idea of owning publicly traded stock.

I think what’s important about that example is that it was a place where we were trying to imagine how people might have a direct effect on changing incentive structures, economic incentive structures. Because if there’s one big lesson that we’ve learned around the technologies and business models that emerged from the internet in the last thirty years, I think there’s a certain degree of consensus that it’s this: the adoption of the advertising business model by digital technologies, which became surveillance technologies because that’s how they create marketable data that can be advertised against, has produced a really terrible ecosystem. And the companies that do it are not run by evil, horrible people. But they’re run by people who prioritize certain kinds of economic success against — literally — the health of users.

So, something we’ve seen before, right? This was true of cigarette companies once upon a time. But there’s something I think far more insidious about it in the context of internet technology, and I think this will be even more true for the various forms of AI, which of course already come to us through the infrastructure of internet technologies. Something happened, or started to happen, as those technologies became pervasive and ubiquitous. I know you remember the day when we used to go online, when we used to connect our computers through infrastructure to the internet, right? And now we live in a world of always-on, persistent connection.

And one of the things that has walked through the door of that shift, along with lots of others that attend it, is that we now exist in a technologized environment in which our use of technology is increasingly non-elective, often unconscious, and increasingly unavoidable. And those are just realities now. They are not realities that we can change unless we turn all this stuff off, which is extremely unrealistic at this point. So, I don’t say that to complain about it so much as to say, “Hey, that’s the reality we live inside, and it’s not the reality we lived inside thirty years ago.” And I think, again, when we think about governance: how have all the kinds of governance we have adapted to that dramatic shift in reality?

The argument I would make is that they haven’t. They have largely ignored that shift. We still don’t really have good language for talking about it. And yet, we find ourselves more and more in that reality, and less conscious and aware of it. So, I think this is, again, something that we ought to change: not those conditions, but certainly our awareness of them, and how we make decisions about living inside that different reality.

Closing

Jorge: And it sounds to me like Oslo is a space for modeling those possibilities in a very… I was going to use the phrase bottom-up, and I don’t know if that’s fair, but at least in a way that is accessible to pretty much anyone who wants to participate. And that’s where I want to wrap our conversation.

Michael: Sure.

Jorge: …Folks are probably listening to this and wondering how they might participate or what they can do to help. So where can they find out more and follow up with you?

Michael: The best way to find out about and follow the project is on LinkedIn; Oslo for AI has a page there. As we started setting up the project and I started adding people to it — advisors, collaborators, participants, etc. — it quickly became clear that LinkedIn was the natural medium for connecting all those people. And so, I think a lot of our public-facing activity will live there and express itself there. We also have a very ridiculous website, which I encourage no one to go to at the moment. We’ll be making concerted efforts to make it less ridiculous in the next thirty to sixty days. So that’s a way of learning more about the project.

The project is, on the one hand, very ambitious in scope, as you’ve heard. But it’s also very micro in scale: notwithstanding that we’re going to do nine of these retreats, each is for eight to twelve people. So over the course of the work, something like a hundred to a hundred and twenty people will have been in-person participants. And those are in-person, three-day retreats. That’s a very conscious design form, because we believe that kind of face-to-face interaction, especially over a period of time away from things, simply makes certain kinds of things possible that aren’t otherwise.

But that comes with restrictions. And I would also say, this is an experiment; I can imagine a future in which work like this has a different scale and works in different ways. At the moment, though, we’re trying some things, and what’s most important is that every time we bring a group together, we try to bring together people from the geography in which we hold that retreat. And for a couple of reasons. One, one of the things that we want to happen as a result of the work is that people are changed by this experience, that they bring the ways in which they’re changed back to their daily work, and that, hopefully, this makes new things possible in their daily work. Two, by bringing together groups of people from the region who don’t know each other and don’t work with each other, we’re creating new connections; every time we do one of these, we’re leaving behind nascent networks of connection and potential collaboration.

And last, we are making a progressive contribution to a social impact, which is, again, the demonstration of the possibility of this new form of governance, this new technology of governance. What exactly that looks like and how it will work, we can only speculate about right now, because designing that thing is the entire point of the project. The participants will be the ones who decide what it’s made up of and how it works. So while I have a theory of what it could look like, that theory is really there to give us something to reach toward and to use as it’s useful, but not to get in our way. If people are intrigued by what the answer might be to the question, “What would a constitutional convention in the 21st century look like if it was led not from within institutions, but built and designed by people?”, then we’re going to make a first stab at answering that question in a practical, demonstrable way.

Jorge: It sounds to me like AI is a bit of the stone. And the soup is doing what soup in a big cauldron has done throughout history, which is bring people together to share new ways of being. So thank you for sharing this project with us. And I look forward to seeing how it develops, and I encourage folks to check it out and to connect with you on LinkedIn.

Michael: Yeah. And if I can just follow the soup metaphor one more step, I’ll invite people to think of it as a gumbo. There are many kinds of gumbos, and I’ve not finished tasting them all. One can’t taste them all because they keep being invented. This is the richness of bringing together diverse traditions, allowing people to participate and build on existing traditions and create new ones. That’s what I hope we’re doing in this project.

Jorge: Thank you for cooking up this gumbo, Michael, and for inviting us to be a part of it and for being here on the show.

Michael: Thanks, Jorge.