Rachel Price is a Principal Information Architect at Microsoft and teaches Information Architecture at the School of Visual Concepts in Seattle. She was a guest on The Informed Life in 2019, discussing the role of structure in improvisation. Today’s conversation focuses on a subject that’s on a lot of information architects’ minds: how to responsibly design AI-powered systems.

Show notes

Some show notes may include Amazon affiliate links. We get a small commission for purchases made through these links.

If you're enjoying the show, please rate or review us in Apple's podcast directory.

This episode's transcript was produced by an AI. If you notice any errors, please get in touch.

Transcript

Jorge: Rachel, welcome to the show.

Rachel: Thank you. It’s lovely to be back.

Jorge: Yes, it is indeed. And for those who might not be aware, you were a guest on our show previously. When I was reviewing that episode before recording today, I realized it was way back in 2019. So it has indeed been a while.

Rachel: It certainly has.

Jorge: Some listeners might not have heard that earlier episode. Could you give us a refresher on who you are and what you work on?

About Rachel

Rachel: Sure. Hi, I’m Rachel. I have to say my own name at least once here. I’m a principal information architect at Microsoft, and I work on Microsoft Learn, which is one of the world’s largest repositories of documentation, training, and skilling content, geared towards technical users of Microsoft products.

So, it’s a really giant knowledge base that I get to help manage in all sorts of ways, from really technical IA backend stuff to front-end UX design-type things. I’m part of a team of information architects, which is very fortunate. I feel very grateful to be here. I get to live and breathe IA on a huge scale every day.

Jorge: I remember when we had our previous conversation. I came away thinking, “Oh my gosh, Rachel is really dealing with the sort of challenges that information architects are meant to tackle in the world, right?”

Rachel: Yes, I feel like there’s a chapter in a textbook for every meeting I attend.

Jorge: And that spans from, like you said, working with a really large content repository, but also, on the other side of that equation, it seems to me like you have a very well-defined audience that uses that content, right?

Rachel: Yes, it’s interesting. That’s changing quite a bit now, especially with the pandemic. There was this big push to skill up the whole world and help everyone “achieve more with Microsoft products.” That’s the Microsoft mission, which I probably got out of order, but it’s approximately that.

And so it’s interesting that we do have this core audience that we cherish and are very loyal to: our developer-type folks. But there are a lot of IA challenges we face in acknowledging that we don’t only help developer-type people; there are a lot of students, educators, administrators, and people who might not self-identify as technical users who do need our content. So it’s interesting you bring that up because that’s actually one of our more philosophical IA challenges that we deal with a lot.

Jorge: That’s interesting. And I suspect that it might be related to perhaps the development that prompted this second conversation.

Impact of Generative AI

Jorge: Since the last time you and I spoke on the show, generative AI has exploded. It’s become huge, right? I don’t know if this is related to this shift in audiences, but one of the secondary effects of generative AI is that it’s making it possible for many people who are not members of the traditional technical developer audience to do things that previously would have required more specialized knowledge.

Rachel: Yes, we’re opening the floodgates, and whether folks are ready for it or not, a lot of the traditional forms of gatekeeping in terms of what knowledge you have and what you’re capable of are easing. So, there are many more people trying to do incredible things with new tools who don’t have the technical expertise that we traditionally might rely on when we write technical documentation or training materials for what can be very technologically advanced tools.

Jorge: You recently presented at the Information Architecture Conference, where you spoke about the role of information architecture in relation to AI, and that’s the reason why we’re talking today.

Rachel: It’s okay to say I sent Jorge an email and I said we should talk about this.

Jorge: Well, artificial intelligence was the overarching theme of this year’s Information Architecture Conference, I think rightly so. And even though I did not have the opportunity to see your presentation, I kept hearing from folks like, “Wow, Rachel really did a great job with this.”

And you’ve since published a post about it. I don’t know if it’s a full transcript, but at a minimum it’s an overview of the presentation. I wanted to have you on the show even before you reached out because my sense is that you are discussing subjects that anyone interested in information architecture should at least have on their radar. So, can you give us a very brief overview of what the presentation was about? I’ll then share a link to the blog post in the show notes.

Responsible AI Design

Rachel: Yeah, so the presentation was basically this idea that if you’re an IA-type person, which is how I’m saying it now because I know “information architect” may not be a title lots of people have, I want you to know that you already have the skills and the mindset to be an advocate for responsible AI design and development.

The premise of this is assuming that AI is here. Yes, sure, we are in the thick of a hype cycle around it, but that does not mean it’s going away at the end of the hype cycle. That seems pretty clear. At some point in your career, you will be interacting with AI as a user, right? That seems like a given at this point. But more specifically, to my little soapbox, as someone working with a product team, you’re going to be part of a team that ships some feature on some product someday that uses AI. And that requires us to think differently about how we design things: what we need to be aware of and what we need to be thinking about when we decide how we’re going to use AI in a feature, what it’s going to be able to do, how a user will interact with it, and how it plays into the user experience.

These are all new muscles that we don’t have yet, and this is happening at lightning speed. I put this presentation together because I went through this really intense process at Microsoft where, in January of 2023, we got this edict from on high that we had to start baking AI into our experiences. Now, that decision is a whole other podcast conversation about how you decide what the proper solution is to any problem. But that said, all of a sudden, AI was part of the conversation, and what I saw was a lot of people who were making frantic decisions or coming up with interesting but challenging ideas for how to leverage AI in end-user experiences. At the same time, we also had an edict that we had to do it responsibly, and there’s a whole standard, a huge one called the Microsoft Responsible AI Standard, which is public, that we had to follow.

So, I put this presentation together because I went through this experience of having to learn all about AI very quickly. I really knew very little when I started. I had to become very familiar with the tenets of responsible AI and some of the principles behind it and the research that led us there. This was a partnership with product managers, engineers, and designers through a “typical” feature design process that also met a huge legal standard for how we had to do it responsibly.

What I realized through that process was that it all sounds really hard, and it was challenging, but the skills I had as an IA, the systems thinking I was already doing, the way I work, the kinds of thorny problems I tend to think about, and the ways I have of facilitating people through complex and maybe ambiguous situations, all of those skills were the same skills I ended up needing to be a responsible AI practitioner. And I was like, “Oh, of course: this was applying all of the ways that IAs think and the things we tend to think about, just shifting it by five degrees and saying, instead of just trying to do good UX, we’re actually now able to use those skills for something a little more concrete and use them for responsible AI.”

And I really wanted to share with the IA-type folks out there that we need you so badly, and you can do this. You already know how; I promise you already know how to do this. There’s just some new terminology, and a little bit of learning about how machine learning works, right? And then beyond that, you’re so prepared.

Jorge: You said a couple of things in the post that might raise eyebrows, and I think they’re in line with what you’re saying here. You said that IAs are the adults in the room.

Rachel: Yeah, I purposely… To the PMs and engineers I work with, if you’re listening to this, I do love you very much. There’s this tendency… we were, and still are, in the thick of the hype cycle, right? And I think I purposely take a little bit of a contrarian stance there where, in my daily life, at this moment of working on AI, I am surrounded by lovely, intelligent people who are so excited about AI that they… I feel it’s hard for us to talk about the limitations and what’s actually not possible today, or what is actually a potentially very dangerous idea.

So, I tend… I don’t know if this is the most productive way to do it, but in that situation where 90% of the people in my space are only saying positive things and are only willing to talk about what could go right, I make a kind of choice there to say, “Okay, cool. Then I’m really going to talk about what could go wrong. I’m really going to raise that flag.” I think that context is going to be different for every person in every organization. There are probably a lot of IA-type people who are sitting among a lot of other, like, conservative late-adopter types (to use the hype cycle terminology), and maybe you’re all saying, “this is a bad idea,” or “here are all the things that could go wrong in my…,” or maybe you’re in a really nice mix.

In my case, at that time in 2023, I was surrounded by so much enthusiasm that it was really freaking me out. Like, I was really worried that we weren’t thinking about the fallout of the blind optimism, and I was worried that we would ship a feature that was so blindly optimistic that, if it failed, it would do actual damage to people. Now, in our case, it was such a limited shipment: we were shipping a way to ask technical questions a little more easily, which I talk about in the presentation. So, nothing super complex, but I think that’s why you hear that voice coming through in the presentation; it’s this voice of, when you’re the only one in the room who’s saying, “Hey, maybe we should think about this for a minute,” I find I have to take on that slightly contrarian voice in order to be heard in those moments.

Jorge: It also sounds like you had an advantage in that Microsoft did have this responsible AI framework, right?

Rachel: Yes.

Jorge: So I would expect that would make it a lot easier for you.

Rachel: It did give me some backup. I say in the presentation, once I realized that this was the Microsoft legal team coming to help me do my job, I was like, “Oh, okay, cool. Now I can just say no, let’s not do it this way,” instead of quietly, maybe meekly, hoping someone might listen to reason.

Challenges and Tensions

Jorge: It sounds like there are directions being requested by the business that are at a minimum in tension with each other. On the one hand, there is a new set of technologies that hold tremendous promise. At a minimum, they will upend a lot of things, a lot of the status quo.

Rachel: Absolutely.

Jorge: So I think a lot of organizations rightly perceive the need to start working with this stuff sooner rather than later, which is leading to — and I’m not saying that this is your case — I’m seeing this in other situations. It’s leading to a lot of “ready, fire, aim” types of situations where it’s like, “Okay, we have this new technology; now what service can we put it to?”

Rachel: Yeah.

Jorge: And there’s this real kind of urgent drive to do that as quickly as possible. And that is in tension with doing it… “responsibly” is the language you’ve brought to the table here. It’s a language that is being used at Microsoft, but I would expect that at a minimum, folks in other organizations might want to, like, not get sued, or they might think there are things that could potentially go wrong with this stuff.

It does feel that those two forces are in tension with each other. And what I’m hearing you say — and now I’m going to try to bring it back to your presentation — is that information architects are especially well-suited to help teams navigate that tension. Is that fair?

Rachel: Yes. I think that there’s always this negotiation happening in these moments, and we see this happen not just with AI. This is part of the technology cycle, where a new capability is now available, or a new technology is available, and rightfully, the business is saying, “Okay, we need to invest in this.” Especially for aggressive businesses that want to be early adopters, right? “Okay, we need to find a way to invest in this. We need to find a way to start using this. We’re going to make a lot of mistakes along the way,” right? And so, there’s this decision to choose a solution and find problems for it to solve. And that is a business need. I don’t have an MBA, I’m not an expert in how businesses work, but we see this from the business side, and it makes a lot of sense in terms of getting into the market and all this stuff. There’s always going to be that tension then; that’s the request coming down from on high, so to speak.

And then you have, hopefully, this somewhat user-centric team saying, “Okay, cool. Our tradition is to find a problem that needs to be solved and then come up with a solution, and maybe this new technology will be the solution.” So there you have this tension, right? And I think yes, IAs are really well-suited to, first of all, notice those two things are happening, put words to that, make that ambiguous thing more concrete, and also then help close the gap between those two things and negotiate and decide what the trade-offs are. I think that’s IA. Some people might argue with me that that’s just being a really good PM or product manager, but I think that’s IA and systems thinking and that kind of facilitation.

The other thing I want to mention that’s cool about having a set of standards you use for responsible AI — and again, I am not saying the Microsoft Responsible AI Standard is the only one or the best one; it’s the one I’m most familiar with.

Evaluating AI Solutions

Rachel: One of my favorite things about it is the very first question you have to investigate with your team, which is called “Fit for purpose.” You start by saying, “Okay, we’re proposing an AI system as the solution for this.” First, you name the problem you’re trying to solve, and then you really have to investigate and be honest about whether AI is truly the best solution here. You ask, “Why? What else have we tried? Why is AI going to be so great?”

And that’s the very first filter that forces you to be honest with yourself. Are you appropriately aspiring to use AI because it genuinely makes sense, or have you stretched beyond the bounds of reason, turning this into a solution looking for a problem? That negotiation and the expectation that you, as a team, have to have that conversation, come to a decision, and be real with yourselves about it is a very early and important step to addressing that tension.

Jorge: I wonder to what degree the label “AI” — artificial intelligence as a label, as a descriptor of a set of technologies — is muddying the conversation. Because when you mention “fitness for purpose,” the phrase AI is being used to describe a bunch of things that…

Rachel: Yes! Absolutely.

Jorge: …are quite different, and…

Rachel: Mostly, what we’re talking about is machine learning. Also, there are people who are going to be listening to this podcast going crazy at my overuse of the word AI. So that’s absolutely true: there’s a whole bucket of technologies that we call AI.

Jorge: And, as people who… What was the phrase you used earlier? IA-type people, right? As IA-type people, I think we are particularly frustrated by imprecise labeling of some sort, right? But part of the challenge here, I think, is twofold. Firstly, when we talk about AI, we are talking about a wide swath of technologies. And secondly, even if you use a more precise phrase — like I prepend “generative” to AI to indicate we are discussing things like language models and such — right?

Rachel: Yes.

Jorge: Even if you do that to narrow it down, these technologies are fairly general-purpose in that they involve symbol manipulation…

Rachel: Mm-hmm.

Jorge: …symbol manipulation technologies, which is about as low-level as you can get. When you say “fit for purpose,” we don’t know yet. Part of the exciting thing about this time is that if what we have are these tools, the most powerful symbol manipulation tools ever created by humans, manipulating symbols opens up a lot of possibilities, right? A lot of purposes.

Rachel: It does.

Nuances of AI Capabilities

Rachel: I can only speak to my experience at Microsoft; there are so many things these technologies can help us with, and I’m going to be a little blunt here and say the things that they can currently help us with are magical to us as information architects and extremely boring to other people. The aspects that get caught in the “fit for purpose” question are the big magical-thinking ideas that these technologies can’t currently magically fix. I’ve been thinking about this a lot because when you and I were talking at the IA Conference, we had a side conversation, and you were telling me about the things you like to use generative AI for, which I totally agree with. We have some of that in common. For example, I really like to use it for pattern recognition across broad swaths of notes. I just did a huge literature review, took way too many notes, and used the Notion AI tool to find patterns and also highlight the blind spots in the notes, like what wasn’t being covered that it would’ve expected to come up, or to summarize, right? And to translate things into different formats. All of this, a little bit of administrative stuff and a little bit of research-assistant stuff, I have found to be really helpful. I think those are amazing use cases.

These are not the use cases that are getting pitched because that’s not very sexy, right? So, where I take the contrarian voice, and where I get really glad that the “fitness for purpose” question makes us stop and ask very serious questions, is in these conversations I have with folks who are saying, “Well, the AI can just know X, Y, and Z, and the AI can do X, Y, and Z and complete this task or whatever.” I have to then say, “Hey, we need to have a really serious conversation about what AI knows, which is nothing. It’s not sentient; it doesn’t know anything.”

Also, that’s me being pedantic. But we also have to have a serious conversation about what an AI tool, like a large language model, is currently capable of in our system, with our data, in the structure that it currently has. The idea is: what are the realistic capability constraints in our situation? And I think that this is a really hard thing to talk about publicly, or on LinkedIn or social media, wherever, because that conversation is extremely nuanced. It’s not me saying, “AI is terrible and AI can’t do anything.” It’s me saying, “Oh, generative AI, large language models for example, is really great for all these things; it has specific strengths right now, and it’s also not great for these other things. Or, in order to be great at task completion or truly answering questions very well, it has to have access to all of this data. You have to set it up in a certain way. There are all these dependencies on other things for that strength to actually come alive.”

And that’s a much more nuanced take than “Oh my God, AI’s amazing, it’s going to solve everything!” or, “It’s terrible; I wouldn’t touch it with a 10-foot pole!” When I hear either of those things, my eyebrows go up because I hear an extreme take that is ignoring all of the in-between.

Jorge: Yeah, you used the phrase “magical thinking” earlier, and I am seeing a lot of that. And, again, I attribute it in part to the language that we use to talk about this stuff, right? The phrase artificial intelligence. I suspect that a lot of folks, when they hear the phrase artificial intelligence, subconsciously prepend “general” or “artificial general intelligence” to the phrase, right?

Rachel: This is a known problem. I’ve been trying to find where I read this, and I can’t find my citation, but since the earliest days of artificial intelligence research, we humans have had a tendency to really fall in love with it and assign it capabilities that it doesn’t have, because it’s exciting and we don’t fully understand it. And part of this is about how AI models are created and released. They are closed; we can’t see inside of them. We don’t know what they are doing. And even the folks who design them don’t know what they are doing. That’s part of the whole thing.

And so, it’s like magic. So it’s not surprising that we do magical thinking and assign it magical capabilities. And I think that’s where the danger comes in, and that’s where you see a lot of the loud voices who are sharing lots of really wonderful material and thinking with us, who are saying, “Hey, stop! Slow down!” They’re really circling around this idea of the danger of magical thinking and the danger of what I call like the “blind optimism.”

Jorge: Well, you mentioned the hype cycle, right?

Rachel: Yeah, the Gartner Hype Cycle.

Jorge: Well, it does feel like we are either cresting or maybe we’re still on an upward slope with that. But, inevitably, I think what’s going to happen is that there are going to be many uses of this thing that are not well fit to the purpose they’re being put to because of this magical thinking, leading to harm, lawsuits, all sorts of trouble, right? And there’s going to be the inevitable disillusionment as people realize that this is not some kind of magical thing, it’s a new technology. And the market will correct, and people will start to learn how to best use these things, right?

But now, my takeaway is that it behooves the people who design these systems not to get trapped into this magical thinking. You need to really understand the technology, what it’s capable of, and what it’s not capable of so that you can help identify what it’s fit for.

Rachel: Yeah.

User-Centered vs. Technology-Centered Design

Jorge: And the challenge here, and I’m just going to try to name it, is that for many people who are involved with user experience design — to paint a really broad picture here — it’s right in the name of the discipline: the idea is to be user-centered and to have the user’s interests as the driving factor. However, part of the challenge we’re facing now is that we are dealing with a technology that is generating tremendous excitement because of the potential for change it implies. If you approach this changing environment solely with a user-centered lens, you might end up sacrificing the open exploration of new technology.

An analogy from the world of architecture: I think it was in the 19th century that the first elevators appeared. Elevators changed what you can do with buildings because they make it practical to build vertically. In a world where elevators exist, different kinds of buildings become feasible. It’s not that we’re only going to build towers from now on, but towers are now feasible. It behooves architects to understand the new technology, sometimes ahead of what the users’ needs are going to be, because the users are not going to be asking for a tall tower if they have never lived in one and all they know is, “Well, it’s going to take me 25 minutes of walking upstairs to get to the apartment.” They can’t conceive of an elevator. Then, doing research on what the user wants might not get you to where you want to be. I don’t know; I’m just putting a lot of stuff out there to see how you react to that.

Rachel: Yeah, no, I think that’s true. And this is where we get at the nuance today, which is like another way of saying, “it depends.” But I don’t care what users want most of the time; that’s not what I’m worried about. What I’m looking at is what they are trying to achieve, what they are in the midst of doing that has anything to do with me or the service I’m in charge of, and how I can make that better, how I can improve, remove a barrier, make something less painful, or make their need come true potentially in a different way.

And I think that’s where there is a lot of room for ingenuity and innovation with new technologies, including AI. It’s about how we can help users accomplish their needs in a different way. That is slightly different from assigning new needs to our users and saying, “The world has changed; now what you need is this.” They’re so close to each other, but a really talented user researcher does the former, not the latter, right? It’s about looking at what the need is, observing this need, and then intervening or removing barriers to help that need come true.

And this is at the heart of when I get really frustrated with magical thinking, blue-sky thinking, or big design exercises where we’re doing the “how might we,” if that “how might we” isn’t actually steeped in real user needs, which a lot of times it’s not, depending on the maturity of the design team, right? Everyone’s mileage may vary. That “how might we” is just, “Let’s come up with new needs that we can solve with AI.” That’s where I get frustrated and louder and put up more of a fuss about, “Whoa, let’s think about this for a minute.”

Jorge: Yeah. I don’t want to disparage them if this is not true, but I think I read recently that Logitech was rolling out new AI features in their mouse driver programs or whatever. And that feels to me — and again, I have not seen what those features are; they might actually be really user-centered and wonderful — kind of emblematic of the times, right? Where it’s like, “We have this new technology; let’s see how many things we can bolt it onto because it’s going to drive up our stock price or make the product more attractive because people will recognize the label AI,” whatever. But that’s very different from taking as a starting point the things that people want to do or need to do.

Rachel: Yeah, that’s where the negotiation between the business need and the user need didn’t happen successfully. You have this very understandable top-down thing like, “Hey, we need to incorporate AI as part of our market strategy, whatever, and also we want to be early adopters.” Great. Then you have this negotiation, which is really hard — of, “Okay, cool. How do we do that in a way that isn’t just bolting it on? What do we know about our users today? What needs have we actually found impossible to solve that maybe AI is the key to fixing or moving forward on those needs that we just haven’t had the wherewithal to be able to solve, maybe because of the scale of it or because the infrastructure that would have been required or the support staff that would have been required to answer all these questions or whatever?”

Jorge: Unfortunately, we’re running short on time here, but I’m hearing you talk about this and feeling really excited because you’re singing my tune here in some ways — or a tune, at least, that I find very welcome. And I’m wondering, from your perspective: if somebody listening considers themselves an IA-type person of the sort we’ve been describing, what can they do to become the sort of person who can help mediate these tensions, these forces driving in different directions?

How to Become a Responsible AI Designer

Rachel: Yeah, so the first thing you can do is learn the basics of how machine learning and AI systems work. I’m not saying you need to become proficient, but you should be able to move yourself through the stages of Bloom’s taxonomy to the point where you can explain it to someone else. You can have some opinions about how it works, and about the difference between traditional programming and probabilistic programming. Understand that difference, because in probabilistic programming, we’re designing for situations where we don’t actually know quite what’s going to happen. We know approximately what’s going to happen, but we can’t predict every outcome.

Then, from there, you can think about, “Okay, cool.” — and I’m going to focus this on UX stuff — “What are the UX heuristics that we would need in order to make some good judgment calls about the basic rules we should have?” The rules of thumb, right? We’re developing new rules of thumb right now for this stuff, and so I think the key is to understand those differences, understand the basic mechanics of how ML and AI systems work so that you can actually have an opinion and have a perspective on the UX heuristics that are important. And then you can layer that with everything you already know, all the skills you already have, your systems thinking mindset, all the facilitation skills you use to help people get to understanding. You layer those things together. And now you are really well prepared to be a responsible AI steward.

That said, I did a whole presentation about this. The full transcript is on my website; I think you’re going to find that link. There are a bunch of AI standards that are public; Microsoft’s is one of them. In my talk, I share a set of links to all the standards that are available. Do some Googling or Binging — I’m contractually obligated to say that. No, that’s a joke. But anyway, do go do some searches. Look for the public standards and read through them. They’re going to feel pretty familiar. There’s not a lot of stuff in there that’s crazy. Just get familiar with the mechanics and the jargon.
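
To make the distinction Rachel draws between traditional and probabilistic programming a bit more concrete, here is a minimal sketch. It assumes a toy labeling task; the function names are hypothetical, and the call to random.choice only simulates the run-to-run variability of a model’s output. It is not how any particular AI system actually works.

```python
import random

# Traditional, deterministic programming: the same input always produces
# the same output, so every outcome can be enumerated and tested ahead of time.
def label_deterministic(word_count: int) -> str:
    return "long read" if word_count > 1000 else "short read"

# Probabilistic behavior: a toy stand-in for calling a machine learning model.
# The same input can produce different outputs, so the design has to account
# for a range of plausible results rather than a single guaranteed one.
def label_probabilistic(text: str) -> str:
    plausible_labels = ["long read", "short read", "uncertain"]
    return random.choice(plausible_labels)  # simulates sampling; not a real model call

if __name__ == "__main__":
    print(label_deterministic(1500))                               # always "long read"
    print({label_probabilistic("draft text") for _ in range(10)})  # varies run to run
```

In UX terms, the deterministic version can be specified exhaustively, while the probabilistic version is the kind of behavior Rachel says new heuristics are needed for: designing for approximately known, not fully predictable, outcomes.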

Closing

Jorge: This is fantastic, Rachel. I think as a starting point, I’m going to point folks to your post that covers the material from the presentation. Thank you for being back with us. This is so exciting, and I hope we can keep talking about it.

Rachel: Absolutely. It’s my pleasure.