Emily Campbell is a design leader and advisor. She brings thoughtfulness and depth to producing business results through design and to helping designers develop their careers in service of that mission. Like me, Emily is deeply interested in AI. She’s developing an emergent pattern language for working with AI, and that is the subject of our conversation.
Show notes
- Emily Campbell
- Emily Campbell - LinkedIn
- Emily Campbell - X
- Shape of AI
- The Shape of AI Substack
- Emily’s post on LinkedIn
- A Pattern Language: Towns, Buildings, Construction by Christopher Alexander, Sara Ishikawa and Murray Silverstein
- The Timeless Way of Building by Christopher Alexander
- Midjourney
- DiffusionBee
- Stable Diffusion 3
- Gordon Pask - Wikipedia
- Paul Pangaro
- Rabbit R1
- Humane AI Pin
Some show notes may include Amazon affiliate links. We get a small commission for purchases made through these links.
If you're enjoying the show, please rate or review us in Apple's podcast directory.
This episode's transcript was produced by an AI. If you notice any errors, please get in touch.
Transcript
Jorge: Emily, welcome to the show.
Emily: Thank you.
Jorge: This is our first time talking. We’ve never met before, but I reached out to you because you shared something via LinkedIn that I wanted to learn more about. I sometimes do this: I see something that piques my interest, and I reach out to the creator and say, “I want to talk.” So, for both my benefit and the benefit of folks listening in, would you please introduce yourself?
About Emily
Emily: Sure. Absolutely. And it’s so great to be here with you today. So, I’m Emily Campbell, and I’ve been a designer in the product space for about 15 years. I’ve had an opportunity to work on many different platforms, to work in service design, and to work on design systems. And as I saw the industry and its focus starting to move in this direction of artificial intelligence, it just piqued my interest to ask, “What is this? How can I make sense of this for myself so I can help others make sense of it, use it in my work, and use it with the teams that I work with and lead?”
That’s my background in a nutshell, but I’ll just share more broadly. One of the things I love about the design space and being a designer today is how much we can draw from other disciplines, how much we can draw from different ways of looking at the world and thinking about the world. And as I look at this space that we’re in, it’s just absolutely firing up my curiosity to find words and metaphors and common language to start to make sense of this rapidly changing moment. And that’s what kind of drew me to this idea of going a little bit deeper and exposing some of my learning and my thinking.
About The Shape of AI
Jorge: In the intro, I alluded to something that you’d shared. I think I saw it via LinkedIn, but what I’m talking about is a website called Shape of AI. Can you tell folks what that is?
Emily: Absolutely. So, it’s currently at shapeof.ai, though I did just get the dot com, so that will be changing soon. The Shape of AI is a project that serves to help us find common languages and common patterns for this world, this space of artificial intelligence. I started to look just at the surface level because that’s what is most apparent to us right now. It’s what we can see. It’s what we can use to better understand this technology. And so, as I started to work in AI in my own work and also using some of the tools that were already on the market, I began to see these patterns emerging. Some of the patterns are interactive, some of the patterns are informational or content-based, and some of the patterns are more complicated. Some of the patterns are maybe even dark patterns, depending on how we think about it.
And The Shape of AI is a place where I’m starting to share how I’m seeing these patterns emerge, how I’m starting to think about them, and what I’m seeing others say or think or share about them. My hope, my intent, is that eventually the site expands beyond the surface level to look at what the form-based aspects of artificial intelligence can tell us about the shape of the thing beneath the surface: the incentives, the business models, the use cases. And also how our patterns of behavior as humans and as users of these tools will shift, or should shift, or in some cases have to shift, in order to get something out of them, use them effectively, or manage ourselves in this period of change. So that’s the big concept right now. It’s a collection of patterns, and I’m really eager to continue working on it.
Jorge: This mixes a couple of things that I am really interested in. One is AI, and AI as it affects the user experience; the other is patterns and design systems. It’s really exciting to think about a pattern… I’m going to use Christopher Alexander’s phrase: a pattern language for designing AI-based experiences. I don’t know if that’s a fair characterization.
Form-Based Aspects of AI
Jorge: When you were describing it, you used the phrase “form-based aspects of AI,” and that piqued my interest. What do you mean by form-based aspects?
Emily: So right now, AI feels pretty unapproachable to the average person. You can get the gist of, “Oh, there’s this ChatGPT thing, and I’ve heard maybe of LLMs,” but what’s actually happening below the surface is not commonly understood. It’s highly technical. It’s proprietary in some cases. And of course, on top of that, it’s emerging, it’s changing very quickly. The data is changing and the interaction patterns are changing.
When we look at the visual patterns and the interactive patterns that are starting to show up, it tells us a bigger story about what’s happening below the surface. And it’s our best opportunity to understand and communicate with this technology, this machine. So, for example, I’m so intrigued by Midjourney as a tool to help us understand AI, and the form-based aspects of it are the ability to craft a prompt out of tokens and see those tokens return to you. That’s really important. That’s form-based.
The functional aspects of what’s happening technically are not visible to the user, but by interacting with these tokens visually, I can start to understand how the computer is processing this language, how these tokens relate, and start to expose the bias and the understanding of the model itself. And I think we’re going to be in this space for a while, until our technical capabilities as average folks increase and the standards across this industry start to solidify. Then, I imagine, it will become less necessary to give words to these things. But right now, it really does help us see the big picture.
Jorge: I’m going to reflect that back to you to make sure that I’m getting it right, because I think it’s a very important distinction that you’re driving at here. The sense I got from it is that the way that, let’s say, LLMs work — what we’re calling AI now — is opaque to a lot of people. They don’t have good mental models for what’s happening under the hood. The models we have of the systems we interact with are based on our interactions with those systems. And with something like an LLM, we have free rein to craft interactions that can mimic a bunch of different behaviors that might not be native to what is happening beneath the surface. And when you say form-based aspects, the way I’m reading that is that somehow the fact that this is an AI-driven system we’re dealing with will manifest in ways that are maybe inherent to it being an AI, as opposed to a system that’s been designed and developed by humans without some kind of assistance. Is that fair?
Emily: I think that’s a great encapsulation, and let me give you an example. One of the patterns I’m most intrigued by, I’ve called “tuners.” These give somebody the ability to easily inject tokens or constraints that the machine is meant to use to return an output that relates to what the user is trying to get out of it. Here’s why this is important. For an average person who starts to interact with OpenAI, it’s just an open canvas. And we already know that many folks don’t realize they can guide the machine. So we’ve seen false information pop up in research papers and legal briefs, and in my own children’s explorations. They take it very literally. They take it as truth.
When you add in a pattern that gives you the ability to place those constraints on the machine, it switches something in our mental model: “Oh, I’m directing; I’m in charge.” I think that’s critical in terms of the usability of the product, so that I understand these more advanced capabilities that I have, but I think it’s also important from the perspective of ethics, or just how we understand the role of this technology in our lives and in our work. AI can be a great tool. It can expose information and creative connections in ways that the human mind can’t easily get to, but we are the ones who are directing it.
It is a probability machine. It is not a person. And so when we give these form-based patterns to users, it helps to emphasize the nature of the relationship that we have to these tools.
The Information Architecture of AI Patterns
Jorge: Yeah, if I might bring up another example… and again, this is to see if I’m thinking about it in the right way, when I started playing around with AI — generative AI of the sort that we are discussing here, right? One of the first uses I put it to was creating images for my newsletter. And, I remember that I used a tool called DiffusionBee, which was a front end to Stable Diffusion. And the first versions of DiffusionBee had an interface that basically gave you an input field for you to write a prompt.
And as an early user of the thing, I had no idea about prompting the system. So I was making some kind of naive and poor… let’s say poor prompts, right? And in subsequent releases, the team designing the thing added dropdowns that let you select certain things. They would let you select… I don’t use it anymore, so I don’t have it installed on my computer, and I’m trying to remember. But it would be things like particular styles or particular colors.
Basically, it was adding visible selectors rather than an open-ended text field for the prompting. And the way I perceived it as a user was that all of a sudden, I had a model — and I’m talking now about a mental model — of where the boundaries of possible interactions with the system were.
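(Editor’s note: here’s a minimal sketch, in TypeScript, of the shift Jorge describes. The option lists and the buildPrompt helper are invented for illustration and don’t reflect DiffusionBee’s actual implementation; the point is only that visible selectors bound a prompt space that an open text field leaves unbounded.)

```typescript
// Hypothetical sketch: replacing an open prompt field with visible selectors.
// None of these names come from DiffusionBee; they are illustration only.

type StylePreset = "watercolor" | "line art" | "photorealistic";
type Palette = "warm" | "cool" | "monochrome";

interface PromptSelection {
  subject: string;    // the one remaining free-text field
  style: StylePreset; // dropdown: shows the user that "style" is a lever
  palette: Palette;   // dropdown: shows the user that "color" is a lever
}

// Assemble the prompt string sent to the model. The dropdowns don't change
// what the model can do; they change what the user can see it can do.
function buildPrompt({ subject, style, palette }: PromptSelection): string {
  return `${subject}, ${style} style, ${palette} palette`;
}

// The user only typed "a lighthouse at dusk"; the rest came from menus.
console.log(
  buildPrompt({ subject: "a lighthouse at dusk", style: "watercolor", palette: "warm" })
);
// -> "a lighthouse at dusk, watercolor style, warm palette"
```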
Emily: Yeah, I love this. It’s so great that you mention it, because there’s actually a quote I’ve been staring at (it’s in the intro, even) that I want to come back to if it makes sense, because I think it helps to solidify this. But yeah, I love the way you’re framing this, and I do think it relates to a topic I know you write about as well, which is information architecture.
The information architecture of AI is related to this very closely and is very intriguing to me right now. So if we think about our pre-generative AI world, information architecture helped people understand what actions were available to them through affordances and what information helped them get better use out of the tool or the data that they were interacting with.
When I’m interacting with a model that doesn’t have defined parameters, that is this amorphous, fuzzy, massive thing, I, as a user, don’t know what the information architecture of the model is. If I don’t get the answer I’m looking for, I have no clear understanding of how to move my way toward whatever outcome I’m seeking.
And that’s where the interfaces we put on these models, the clues, the hints, the parameters, the guardrails, can help the user navigate this data. Because it is more conversational, more emergent and amorphous, compared to the traditional product design of two or three years ago, or of today, still, in these non-generative-AI interfaces where things are more linear.
For example, does it matter anymore how many clicks it takes to get to an answer? Three clicks to value has been this kind of standard that we’ve held in our heads. Does that matter anymore? Is it how many words? How many tries? Or maybe there’s something beautiful about the fact that I don’t get it on my third try, but it takes me somewhere I didn’t even know I wanted to go.
How do we measure that? How do we think about that? So that opens up a can of worms. But I think that these two areas of patterns and information architecture are tightly related when it comes to understanding and designing for AI.
Jorge: Yeah. One way to think about it might be something like if the interface is an open text field, then the constraints on the user’s mental model are the constraints imposed by human language. And that’s pretty broad, right? Or another way to put it is, when you design a navigation system for, let’s say, something like a website — nav bars, particularly primary navbars in websites, they help you move around the website, but they also create a context for you. So if you see the labels checking, savings, credit card, etcetera, you immediately think, “I’m in a bank, and in a bank, I expect to do certain things.” If you’re interacting with a system by chatting with it, you don’t have that contextual framing, so it’s a space of greater possibilities, and that is not always good.
Emily: It’s not. And that statement, “it’s not always good,” can go in a lot of directions I think we could explore. The one I’m most focused on right now is usability, because of what’s happening for people who are starting to interact with this technology inside these platforms. I’ve used the comparison from the show Portlandia: there’s a sketch, “Put a Bird on It,” where they run around a store putting birds on things. And there’s some quote like, “You didn’t even notice this tote bag before, and now there’s a bird on it.”
And I feel like there’s something similar happening here, where companies are going, “Look, there’s AI. Oh, you can AI now.” But they’re not explaining: What does this mean? Why is it here? What can you get out of it? How should you use it? It’s just there. And I think that is a very troubling moment for those of us who are responsible for shaping these products to be usable by the people who are paying, or showing up, to use them.
So, back to your point of how we expose this mental model, this mental map: the user is the one in control. I think this is where these common languages, these common patterns, start to become important, because standards don’t exist right now. We should be talking more about them, about what’s showing up, so we can standardize as fast as possible. Because these are not beta environments where folks are just starting to play around with these tools.
These tools are in people’s day-to-day lives. These are features that people are now seeing show up in the products they rely on. And if we aren’t starting to find standards of intent, of form, and of interaction, it can lead to very confusing usage. It can lead to a degradation of value. And I think these are all things we should be very mindful of in our professional lives.
The Categories of AI Patterns
Jorge: That’s a good segue. I wanted to ask you more concretely about the patterns that you are documenting. I don’t recall exactly the way you put it, but the sense that I’m getting from the conversation, and also from the site, is that these are emergent patterns. Like you’re saying, these are not standards right now. There might be glimmers of possible directions that these things could take.
But, obviously, to make a repository of patterns navigable, you have to organize it in some way. And Shape of AI has a few categories, so I wrote them down here. They are identifiers, wayfinders, prompts, tuners, and trust indicators. And I saw those categories and I was like, “Ooh, just the categories are so intriguing!” And I was hoping that you would tell us a little bit more about both what the categories are and then how you came across that set of categories.
Emily: Absolutely. And I’ll preface by saying there’s so much more that needs to be on the site that isn’t there. And I’m hoping to continue to expand this model, both individually but also by taking in the inputs of others. But let me talk you through maybe the broader model before I go into why these particular categories were the ones that I started with. It relates to the conversation we’ve been having so far, which is that the sort of mental model of interaction right now is changing from being linear and task-based to more of a sense-and-respond, almost kind of system-oriented interaction pattern.
Let’s move away from ChatGPT, the sort of open-text example, and talk about how these are showing up in products that already exist, like Notion’s ability to prefill a column with some AI prompt. So this isn’t just about these open chat interfaces. I need to first of all identify: what can I even do here? Why does this exist? What am I supposed to do with this feature? How can I start to better understand my usage of it? Now, some of that comes just from playing around with it and looking at the interface, but sometimes the best way to learn about something amorphous is to just use it.
I’ve been using the phrase “pinging the system”: you just put something out there and see what comes back. And what comes back should teach me how to get something better out of it, right? Every single response I get from the system should help me improve my input to get better outputs, and ideally give me some clues on how to get those better outputs.
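(Editor’s note: a sketch of that “ping” loop in code, under heavy assumptions. The generate function is a stand-in for whatever model call a real product would make; it is not a real API.)

```typescript
// Hypothetical sketch of the "pinging the system" loop: every output should
// inform the next input. `generate` is a stand-in, not a real API.

async function generate(prompt: string): Promise<string> {
  // Placeholder for a model call (e.g., an HTTP request to an LLM endpoint).
  return `response to: ${prompt}`;
}

// Ping, inspect, refine: each round folds the previous output back into the
// next prompt, which is the behavior described above.
async function pingAndRefine(initialPrompt: string, rounds: number): Promise<string> {
  let prompt = initialPrompt;
  let output = "";
  for (let i = 0; i < rounds; i += 1) {
    output = await generate(prompt); // ping: put something out there
    // A real interface would let the user adjust here; a canned correction
    // stands in to show the shape of the loop.
    prompt = `${initialPrompt}\nLast answer: ${output}\nRefine: be more specific.`;
  }
  return output;
}

// Usage: three pings, each shaped by what came back before.
pingAndRefine("Explain tuners in AI interfaces", 3).then(console.log);
```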
And then of course, how do I make sure that what I’m getting out is trustworthy, is meeting my needs? How can I evaluate it? And this is happening really fast. This happens day to day. If I’m talking to you, the bank teller, and I come up and I want to open a checking account, I have to enter the room and say, “Oh, where do I need to be? Oh, you look like the person I should talk to. What questions should I even be asking you? Okay, I asked you a question. Oh no, that’s not quite it. What I’m actually trying to get at is this.” And that’s how the conversation goes. That’s the world we’re in right now.
Let me just talk through these categories, then, against that model. We have identifiers. Those help me understand where AI is present. That’s important both because I want to know where the feature is so I can use it, but also, especially right now, because I need to be able to differentiate between conversations or features where I’m interacting with this bot versus the known patterns or the known product.
I need to understand how I’m supposed to shape the first thing I put into this, the first prompt or the first request of the system. And that’s where wayfinders can become extremely important. They get you to some value or some information as fast as possible so that you can start to shape it as fast as possible.
And that gets to that evaluation: are we really worried about three clicks to value? Or is it how quickly you can get information you can use to get better information or better outputs? Once we have our prompts that we are shaping, we get information back. Results are a category I haven’t dived into yet, but I really want to explore those surfaces: the types of information, the forms these responses come back to me in.
Then we have tuners, and I already mentioned that tuners are a pattern that’s very intriguing to me. It’s the idea of the system teaching me how to use the system. I get some information back and I’m guided: “Did you want to add more emphasis in this particular direction, or did you want to constrain it? Did you want to change the voice and tone of the response?” Et cetera. Any of those parameters that shape the output: how can you put them at my fingertips so I don’t have to dig for them?
That’s our identifiers, our wayfinders, our prompts, our results, our tuners. And then finally, how do we think about trust? How do we think about whether or not I should believe that the information or the result I’m getting is trustworthy? Trust is a very interesting topic to go into, because trust shows up in many forms and we can build it in many ways. But for a new technology, it means helping people identify what they should be aware of and empowering them to be the owners of it: Is this good enough? What do I do with my data? How can I continue to shape this? How are you going to use this? It just keeps the user in the driver’s seat. So that’s where it sits right now. And there’s so much more that I want to explore here, like the dark patterns and so on. But I’ll hit pause for a moment.
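(Editor’s note: the category names below come straight from Shape of AI, plus the “results” category Emily mentions she hasn’t documented yet. Everything else, the AIPattern interface and the example entries, is a hypothetical sketch of how a team might encode this taxonomy as design-system metadata.)

```typescript
// Sketch: encoding the Shape of AI categories as design-system metadata.
// Category names are from the site; the rest is hypothetical.

type PatternCategory =
  | "identifier"       // where is AI present?
  | "wayfinder"        // how do I shape my first request?
  | "prompt"           // the request itself
  | "result"           // the form responses come back in (not yet documented)
  | "tuner"            // parameters that reshape the output
  | "trust-indicator"; // should I believe what I got back?

interface AIPattern {
  name: string;
  category: PatternCategory;
  intent: string; // what the pattern tells the user about the system
}

// A few invented examples, roughly following the interaction Emily describes.
const patterns: AIPattern[] = [
  { name: "Sparkle badge", category: "identifier", intent: "Marks where AI is present" },
  { name: "Suggested prompts", category: "wayfinder", intent: "Gets a first usable output fast" },
  { name: "Tone selector", category: "tuner", intent: "Puts voice-and-tone constraints at the user's fingertips" },
  { name: "Source citations", category: "trust-indicator", intent: "Lets the user evaluate where an answer came from" },
];
```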
Conversation as Model
Jorge: I have a follow-up question about the categories. But before we get into that, I just wanted to note that before telling us about the categories, you set the stage by talking about the underlying model. And in my notes here, the word that I underlined is conversation. I’ve understood interaction design to be conversational for a long time. And this is a model that I think a lot of us have had about these types of interactions: when you click on an option in a navigation menu, you are kind of saying something to the system, and the system is saying something back to you.
It’s just that the conversations have been very rigid in some ways and very constrained. And the way that the model started shifting in my mind when you were talking about it is that these AI-driven systems are much more explicitly conversational in some ways.
I’m going back to the work of folks like Gordon Pask or, currently, Paul Pangaro, who write about systems thinking and conversation. It feels like there’s a whole area to mine there as we design these things.
Emily: I struggle with how far to go into the conversational design realm, because there’s capital-C Conversational Design, which has represented a very deliberate practice primarily centered around chatbots. And it’s important: there’s a lot we can draw from that world, a lot we can learn from people who have been practicing in it for some time.
I had a conversation yesterday with a designer who has been working on personified bots. This is not a bot that’s there to provide you customer service; this is a bot that’s there to represent some sort of fictional character that you can interact with for any number of reasons. Meta has been exploring this, and there’s Character AI as the grandest example. This particular bot was part of an NFT project, and this person did not have a background in information architecture or even in product design. Their background was in film, specifically in writing screenplays: exploring how characters interact with each other. How can I generate a synthetic connection between two people who don’t exist, through these conversations? The affordances of conversations.
For those of us who have been in relationships, maybe you recognize that moment when you can communicate with your partner from across the room with a look. Like, the conversation doesn’t have to be verbal. It doesn’t have to be written for it to exist. And so, going back to your model of interaction design as conversational, that’s where this maybe lowercase-c conversational design comes in. It is that interactivity that is different than the physical interactivity of interacting with screens.
And the last thing I’ll say here, just because this is so relevant and it’s been on my mind: we’ve spent the last ten years in a world very dominated by screen-based design. We had the launch of the iPhone, we’ve had the proliferation of personal devices. Screens just dominate our world right now. But prior to ten years ago, that was not the dominant way that we interacted with services and with products. It was much more form-based, physical, or even conversational. There’s a big difference between talking to a human on the phone, talking to some AI bot that’s guiding your conversation, and then navigating some AI-driven or machine learning-driven help platform where you can’t talk to a human to save your life.
This idea that screen design represents the be-all and end-all of product design is, I think, a fairly recent way of thinking about it, and probably not the way we will be thinking about it moving forward. So if anything, I think we’re coming back to the fundamentals of design: the ability to get some outcome out of interacting with something, where the form can vary depending on the context, the technology, the user’s abilities, and so on and so forth.
Jorge: And to your point, we are seeing the emergence of devices like the Rabbit R1 and the Humane AI Pin, which are explicitly using capital-C Conversation as the interface. But I meant it more in the lowercase-c sense, to your point there. At some level, all interactions are an exchange of the sort you’re describing: whatever action I take, whether it’s a tap on an element rendered on one of these small glass rectangles or, like you said, a particular look you give your partner across the room, elicits some response. And there’s this back and forth between the two happening.
I want to circle back to the categories, though, because hearing your description of the categories for the patterns helped clarify for me the boundaries around this particular set of patterns. These are, in some ways — and I’m saying this so you’ll correct me if I’m getting it wrong — not general patterns for interaction design; they are explicitly patterns for systems that wear their ‘AI-ness’ on the surface. And what I mean by that is systems that are implementing some kind of AI-driven functionality. Is that fair?
Emily: I would say, at the surface level, that is the boundary I’m setting, because this project is intended to speak to people who are designing or building AI-driven systems. But I would also say I don’t think it has to be constrained to AI itself. It is more about how we communicate with systems.
Systems can be complicated. They can be complex. But what’s critical is that it’s very difficult to see the whole all at once. And so, you have to interact with pieces of that system to understand, or at least create, a solid mental model of the larger system. And I’ve spent quite a bit of time in my life as a designer and even in some of my academic work prior to becoming a designer on systems thinking: how do we think and interact with systems that we can’t fully understand? Complex, complicated systems.
I would invite people to think about this in a constrained manner, because I don’t know that I want to go into trying to teach systems thinking right now, and I don’t think we need to make that a requirement for designing these products and platforms. But I also think there’s something important in understanding that AI is not just this contained box: “Oh, I’m doing AI. I’m putting AI on it.” AI, or generative AI in particular, I should say, is a new way that people can interact with data and with complex systems through some sort of interface, whereas previously that hasn’t been available to the average person. So these patterns help us figure out how we can create the interfaces that allow people to do that more easily.
Flaunting Disruptiveness
Jorge: This might be misguided on my part, but I see the current crop of AI technologies as the sort of thing that emerges on the scene every once in a while. I don’t think it’s common to have one as disruptive as this one. I haven’t seen one as disruptive, I think, since I first saw the web. But when a new technology like this comes along and there’s a rush to implement it, people go out of their way to highlight that their system is implementing the new technology.
And I can’t think of a specific example, but I’ve seen old posters, things from the early 20th century or late 19th century when products were starting to become electrified, and they would have ‘electric’ in the name; they would highlight the fact that this was an electric thing, right? Eventually, electricity just became a part of everyday life, something that was expected to be there, and we stopped highlighting it.
Or, another common one: all these LP albums from the early 1960s or late 1950s that had ‘stereo’ and ‘high fidelity’ plastered all over the album cover to highlight that they were implementing the cool new technology. And eventually, it becomes a little tacky to do that, because it’s like, “Of course it’s high fidelity, or stereo, or whatever. Why wouldn’t it be?”
And to your point, I wonder to what degree this focus on patterns for systems that are somehow implementing AI will remain the dominant focus as these things become more pervasive.
Emily: I share your concerns, and I do think that this will have a natural trough of enthusiasm, and then it will become standardized and part of our day-to-day lives. I can remember back to, what, 2012 or so, and the debates about responsive versus adaptive design. So many lines of text were wasted on that conversation, and I just don’t know that it matters anymore.
But what I hope comes out of this moment is more than simply giving words to these interfaces and putting fancy sparkles on things and feeling like this crest is going to change the world until it doesn’t. I hope this represents a fundamental change in how people think about data, particularly their data, and think about interacting with data in their lives. If we play our cards right, this can be a very empowering moment for individuals. Individuals not just working on these tools but using them.
Today, the platforms that we use are a bit of a black box. You just send your data out into the ether, and then you laugh when you get sock ads all over the internet because you bought some wool socks on Amazon. But what we have now is the ability for people to interact with that data in ways they haven’t been able to before: to start to understand, “Okay, if there’s bias in this algorithm, where is it coming from? What token is driving that? What training data is driving that? How can we be more specific in requesting what we’re looking for from this machine?” and to avoid biases or incorrect information. Or maybe we do want to explore that; maybe we have this curiosity. But we get to be the agents of how we use data, not these gigantic platforms figuring out what’s on the screen and what’s in that navigation menu.
But in order to get to that point, we have to give people the standards and the knowledge of how to interact with these systems and this data, so that their initial usability cliff is as small as possible, and then get them to a point where they’re actually driving it. You mentioned Christopher Alexander earlier. I’ve actually been reading The Timeless Way of Building, another book of his that doesn’t get as much attention as A Pattern Language. But there is a quote he has in A Pattern Language, speaking to the timeless way of building, and I want to read it really quickly because I think it’s so critical.
“That towns and buildings will not be able to become alive unless they are made by all the people in society. And unless these people share a common pattern language, within which to make these buildings, and unless this common pattern language is alive itself.”
And I think that’s so critical, because what it’s saying is that if we are going to build a society, a platform, an ecosystem where everybody can be an agent, can be involved — users, customers, builders, whomever — we have to make sure that they have a common understanding of this thing so that they can affect it. If you don’t know how to describe something, it’s difficult to describe how to change it. If you don’t know what happens when you do something, it’s hard to ask for a different outcome. And so, by creating maybe not my particular language, but a language, a standard way of thinking about interacting with this data, we can give people the tools to make that interaction better, to make it safer, to protect their interests within it, and to have more autonomy, keeping humans in control rather than these gigantic tech companies and their algorithms. And I think that’s the most critical thing I hope comes out of this moment, even if we stop putting AI sparkles on every product we ship.
Closing
Jorge: Emily, that feels like a wonderful place to end the conversation because it’s just such a perfect summary of what the project is seeking to accomplish. Where can folks follow up with you?
Emily: Absolutely. So right now, Shape of AI is online at shapeof.ai. Please start there. You can contact me through that website. I also have a link to the Substack that I’ve set up, where I’m publishing my thoughts on a recurring basis. And of course, I’m on all the major platforms: LinkedIn, and X (are we calling it X now?), formerly known as Twitter. I’m always open to chatting with people and following up as well.
Jorge: Thank you so much. I will include all of those links in the show notes. Thank you so much for being with us, for putting together this project, and for sharing it with the world.
Emily: Thank you for having me. This was a really fun conversation.