Jodi Forlizzi is the Herbert A. Simon Professor of Human-Computer Interaction at the School of Computer Science at Carnegie Mellon University. Dr. Forlizzi has a distinguished career as a service designer, researcher, design leader, and academic. In this conversation, we discuss the changing role of design in the face of disruptive new technologies such as AI.
Show notes
- Jodi Forlizzi
- Jodi Forlizzi - Google Scholar
- Human-Computer Interaction Institute, Carnegie Mellon University
- E-Lab - Wikipedia
- Tay (chatbot) - Wikipedia
- Reflection in Action with Jodi Forlizzi - Rosenfeld Media
- The Double Diamond - Design Council
- Walkman - Wikipedia
- Hugh Dubberly - Dubberly Design Office
- Astro Teller - Wikipedia
- John Zimmerman
- John Dewey - Wikipedia
- Richard Buchanan (academic) - Wikipedia
- Peter Checkland - Wikipedia
- Ludwig von Bertalanffy - Wikipedia
- AFL-CIO
- UNITE HERE!
- Dark Matter and Trojan Horses: A Strategic Design Vocabulary by Dan Hill
- The Design Way: Intentional Change in an Unpredictable World by Harold G. Nelson and Erik Stolterman
- Design Flight School — Harold G Nelson, PhD
- Christopher Alexander - Wikipedia
- Systemic Design as Born from the Berkeley Bubble Matrix by Harold Nelson
Some show notes may include Amazon affiliate links. We get a small commission for purchases made through these links.
If you're enjoying the show, please rate or review us in Apple's podcast directory.
This episode's transcript was produced by an AI. If you notice any errors, please get in touch.
Transcript
Jorge: Jodi, welcome to the show.
Jodi: Thank you.
Jorge: I’m very excited to have you here. I’ve known you for a while, although I think we’ve only interacted a couple of times. But I’ve been aware of your work and I’ve been influenced by your work. I’m particularly excited by the stuff that you’re working on now, and I’m hoping that you will share it with our listeners. We’ll have a substantial conversation about stuff that I think is very important and very timely. Folks listening in might not know about you. How do you go about introducing yourself?
About Dr. Forlizzi
Jodi: My name is Jodi Forlizzi. I am a professor. I am the Herbert A. Simon Professor of Human-Computer Interaction in the School of Computer Science at Carnegie Mellon University, and I’m trained as an interaction and service designer. I was the first designer hired in the School of Computer Science, where I teach and do research on many aspects of design and HCI.
Jorge: And if I’m not mistaken, your background is in visual design, right?
Jodi: My background actually is in illustration. I went to the Philadelphia College of Art, which became the University of the Arts, which sadly closed recently, and I have a BFA in illustration. I don’t know how much detail you want right now, but after undergrad I worked at Penn doing technical illustrations. A lot of changes were happening in the technologies that designers used at the time, and this was a really formative time in my career because I was working directly with these new technologies and also collaborating with scientists, which I still do today. After a period of time, I began to look for a program of graduate study in interaction design, and I ended up in the master’s program in interaction design at Carnegie Mellon. I think we were the second class, and we were making it up as we went. Everything was changing so fast.
After receiving my master’s degree, I worked at a consultancy called E-Lab in Chicago. We, along with anthropologists, did research and developed implications for new product design. That was also a really formative time in my career, because I began to explore what design research was.
A few years after that, two years to be exact, the faculty from Carnegie Mellon came back and asked if I would like to join the faculty with a joint appointment in the School of Design and the Human-Computer Interaction Institute, and that is where I have been since. Although in 2017, I moved my appointment entirely to computer science, and I have had the wonderful benefit of many collaborators, great students, and interesting research on human-robot interaction, educational games, and personal informatics and personal data. And of course, most recently, human-AI interaction. The big elephant in the room.
Jorge: That’s the elephant I was referring to during the introduction. I don’t know when you made this switch in your career, but I’m hearing echoes of what happened to me, in that I studied architecture and ended up falling into information architecture via web design when the web happened, just because it was such an obviously impactful new technology that it was apparent it would change everything.
Jodi: Yeah. If I could just deepen that: at the time, technology was pushing what we did, number one. But number two, traditional design disciplines like graphic design, product design, and architecture weren’t enough to explain interaction at the interface. We needed something new. We couldn’t use these traditional disciplines to understand, for example, why people made errors when using an interface. So I think that’s probably what really pushed our work and was the birth of interaction design.
Jorge: I know a lot of colleagues who are my age peers who also came into this line of work from other disciplines just because there wasn’t a formal course of study to do this stuff.
Jodi: Exactly.
Jorge: And that brings us to the elephant, which is AI.
AI as a Design Material
Jorge: You’ve written and spoken very compellingly about the notion of AI as a design material. I would love to start there and just hear what you mean by this, because I think that a lot of people—including myself—when we first think of AI, we think of it as a toolset, maybe like an augmentation of the tools we use. But this notion of AI as a design material just feels very fruitful to me and very interesting, and I would love to hear what that means to you.
Jodi: Great, thank you. First of all, I should say, I want to classify thinking about design and AI in two buckets. One bucket is designers using AI-enabled tools like Figma or Photoshop. The second is designing things that have AI in them, and that is the portion where we talk about AI as a design material.
We noticed in our teaching—design classes at Carnegie Mellon, situated in a very technical place, a computer science school—that there was room for additional perspectives on how we think about AI when we innovate. There are a lot of reasons why AI products fail, and there’s a lot of data showing that AI products fail pretty frequently. They fail for a number of reasons. One, the model can’t be built, either because it’s too difficult or there’s not enough good data. There may be no customer desire for what’s built. There may be no business case for what’s being built, or there may be FATE issues (fairness, accountability, transparency, and ethics) or technology issues that shut products down. We’ve seen this again and again. One example cited a lot is Microsoft Tay, which was a chatbot that began to spew out racist comments and was shut down soon after it was released.
We realized that there was a need for designers to peer in on this innovation process and work with AI, but they didn’t need to know the technical aspects of AI. They didn’t need to understand algorithms or build models. They needed to look at AI in a really different way as a set of capabilities, as something that could be molded, much like clay or pixels or paper. Hence came this concept of AI as a design material, and that’s been the basis of some of my and my colleagues’ research around AI innovation. It’s also the foundation for a course we’ve been teaching for about six or seven years now, called the Design of AI Products and Services.
Jorge: What is the distinction, if any, between AI as a design material and data as a design material? Because, when I think of the materiality of a design medium, I think of the means through which an experience is delivered somehow. I don’t know if that might feel a little too broad. In this case, when I think of AI, it might be worth stepping back and defining terms here and just talking about what we mean by AI. But I assume that we are talking about generative AI, language models, that kind of thing, right? Where there’s some kind of neural network that has been trained on data. In my mind, it feels like data has a role here, no? As part of the material that we are working with.
Jodi: Yes, of course. I would certainly agree with that. We need data to build a model, and so I would say that’s even another kind of material or a primitive that would feature in the design.
Jorge: Is it just an input to model-building, or is there something about the data that we have about the domain we’re working within that gives designers certain capabilities or constrains their ability to do certain things? In some ways, the model is the object of interest here. But there’s another aspect to data, which is the data that the system works with as users interact with it. For example, let’s imagine an online store where the store is selling widgets of some sort. There could be data that gets used to train the model that is going to serve different needs within the system, and that might be generic data. There’s also data about interactions in that particular store, including things like the product catalog and user interactions that get recorded. Is there a distinction worth drawing between that kind of data and the data that gets used to train models?
Jodi: I guess that’s a rhetorical question, because those data might be used to customize an existing model or build a new model. They may or may not be used to computationally inform the product. So, on one hand, I would say there could be a distinction, but on the other hand, if we’re going to feed this in or build additional models, I would look at it as another kind of design material. One of the things we talk about in our class is that there are three ways to innovate with AI models. One is to build a custom model from scratch. Another is to customize an existing model. The third is to use an existing model as-is, which could be an LLM or, more recently, an SLM; we’re seeing small language models come on the scene. So we try to think about innovation and product development in a designerly manner, but the material we’re working with is data, and therefore machine learning and artificial intelligence.
AI and the Double Diamond
Jorge: I heard your interview with Lou Rosenfeld on his podcast. You said something that I found very intriguing. It was a call to get designers to reframe the process of design so that they’d be more comfortable starting in the middle of the double diamond. Did I get that right?
Jodi: Yes.
Jorge: I think that a lot of us who have been trained as designers in the traditional double diamond approach think that the place to start is by understanding the problem domain. And then once we’ve defined the domain that we’re working within, we narrow down onto a solution space. And that’s when we then start with this second diamond, right?
The way that I heard that comment was that when a new technology — a disruptive new technology, like the current wave of AI — comes in, it opens up opportunities that were not there before, which may necessitate resetting your expectations about where the process ought to begin.
Jodi: Yes.
Jorge: And I’ve noticed what I’m going to describe as resistance among some designers about approaching it that way. And I’m wondering if you would address that, if that’s something that triggers any thoughts?
Jodi: Sure. So for the benefit of people listening who maybe didn’t hear that other podcast, I just want to reiterate that in a traditional user-centered design process, you can envision a double diamond and designers will start on the far left with the user. They expand into concepts. We come into the middle of the diamond where we think we’ve identified what the ultimate particular solution to the problem is, and then we expand and contract again as we make the thing.
We found that with artificial intelligence, it’s helpful not to start with users, because sometimes we don’t know the best user, but to start with the technical capability. An analogy is Philips. When Philips designed the cassette recorder, they designed a technical capability, which was the ability to record sound on a magnetic tape.
And nobody really knew what they were going to use that for. It first started as an answering machine, but then all these cultural practices emerged from it: boom boxes, mix tapes, Sony Walkman, which I would argue really changed the way that we listen to music forever. So we like to say that there’s this technical lane and then there’s the innovation lane.
So what we teach our students to do is to first grasp these AI capabilities in fairly abstract forms and then seek the best customer to make them happen. So facial recognition, for example, is great for unlocking your iPhone. What else could it be used for? And we find a pattern, which is that people often jump to the most complicated uses of AI, which will statistically fail. And so what we try to do in our class is get people to see what we call the low-hanging fruit, which is where there can be viable uses of simple, robust AI or where there’s AI that has okay performance and the model is fine.
So another example that we always talk about is the voicemail transcription on your phone. It’s not perfect. It’s good enough for you to know whether you should go in and listen to that voicemail. And the question is, would you pay more for better voice transcription? The answer is no. So designers need to be triangulating all these things. And what we find right now is that what designers often understand the least about is AI.
Now, more recently in my research, we’ve been doing some interviews with people in various roles on product development teams that are building AI. And I will say, around the world, we’re seeing three ways that these product developers start. Some start with technical capabilities. Some start with the notion of a customer or a user. Then there’s a third category of companies that I would say are copycatting models and products that already exist.
Jorge: The example of the Sony Walkman: if I remember correctly, the way that product originated was that the president and founder of Sony made it for himself, right? Like, he wanted this thing that he could take on airplanes. And what that reminded me of was that Steve Jobs quote about how it’s not the customers’ job to know what they want; you have to figure it out before they do, right?
Jodi: But even then, yeah, they made mistakes too. Our colleague Hugh Dubberly likes to talk about the Apple tablet and how it was prototyped twenty years before the iPad, but the world wasn’t ready for it, and it was also big and a little bit heavy.
Jorge: There’s this tricky game that needs to be played. Because when a new technology appears on the scene, I think it behooves designers to explore the capabilities of the technology. Particularly if it’s a technology that isn’t just something we already know, only faster, better, cheaper; it’s something that is fundamentally different.
Jodi: Yeah. And to deepen that, I would add: something that acts autonomously, can have biases, and can be nefarious. All of these are things designers have to think about when they’re developing products. So going back to that notion of moderately performing AI: we feel that, number one, there are a lot of opportunities for product innovation in that place, and number two, it’s probably less likely that the product you develop is going to go off the rails with ethical or bias issues. And there’s an analogy they use at Google called the drunk island metaphor. We say to our students, you want to think of AI that’s as good as what a bunch of drunk people on an island could do. They could do an okay job; they’re not going to do anything really unique. In our critiques, we often yell, "This is on the drunk island," because it helps students remember that there’s a lot of space for innovation in this moderately performing, robust, simple artificial intelligence.
Jorge: This feels to me like a good segue into another concept that I wanted to ask you about. You are, like you said, you’re the Herbert A. Simon… Is it like the chair?
Jodi: Yeah, I’m a professor. My title is Herbert A. Simon Professor. I have the Simon Chair, which is such a wonderful honor.
Systems Thinking and Product-Service Ecologies
Jorge: When I think of Simon, I think of systems. I would love for you to tell our listeners about product-service ecologies.
Jodi: Sure, but first a little story about Herb Simon. I took his class as a master’s student. It was a class on cognitive science, and there were a lot of heavy hitters in that classroom. Astro Teller, who’s like number two at X, was in my class, along with my colleague John Zimmerman and some other people who went on to very illustrious careers.
But Dr. Simon was amazing. First of all, no one was supposed to take notes during class. But I, being a designer who loves to draw and write, actually held a notebook on my lap and took notes the whole semester. The library has since taken that notebook into its archives. Dr. Simon would take any questions from the class, maybe scratch one or two things on the board, and then weave them into a seamless, robust lecture on cognitive science. It was amazing. The experiments and things that we did in his class, and the few meetings I had with him in his office, were just incredible. I can’t even believe that I was able to study with such a person. So to have a chair named after him is such an honor. I think about how he thought about research every day. It’s really formative.
So now, on to systems. Early on in my career, I did a lot of research on elders and product use. One of the things that I saw was that around an elder was a really dynamic set of people and products and contexts. We were researching in particular how robotic technology could help elders and their caregivers. We learned in our observations in homes that, as elders declined, you would often see more than one product in the same category: maybe a digital clock and an analog clock, because they can’t read the analog clock, or they don’t know how to set the alarm on the digital clock. We also noticed that the people who provided help could be structured, with particular roles, but they could also be ad hoc, like someone you encounter at a bus stop or in the drugstore, someone to help you get out of a chair.
I became really fascinated with this idea that everything exists within a system, and a system which is adaptive. This built on a theory that I actually wrote about when I was a master’s student, which was a framework of experience in user-product interaction. This was a paper that we dashed off, and we never knew that it would take on a life of its own. It came from Dewey’s notion of pragmatism, because everybody in our design seminar course, which was taught by Dick Buchanan, another great, had to read Dewey’s works.
But anyways, in this framework, we talked about three types of user-product interactions and then three types of experience. The last one we called co-experience. This was the notion that experience is dynamic, lifted up, and shared. Going along in my career, as I began to deepen and understand more types of systems theory — Checkland, Bertalanffy, and others — I saw that this was a good construct for thinking about designing things, because they are so complex now that we really need to keep all these factors in mind.
So when we give junior designers an assignment, like designing a ride-sharing service, often they’re only thinking about things like the driver and the passenger. But there are many other systems, right? Public transportation, taxis, cyclists, people on micro-transport, and so on. So this systemic view, I thought, was really helpful, and it culminated in the work I did in my dissertation. As I told you, I was lucky enough to get a job at Carnegie Mellon with a master’s degree, but I went on and did a self-defined Ph.D. in design and human-computer interaction. That’s where I really polished this idea, and I’ve been writing a book about it for a long time, but it’s parked on my table. I think I need some co-authors. I have a deep chapter on systems, comparing systems and talking about why these are important.
Jorge: The reason why I wanted to bring this back to product-service ecologies is that I get the sense that as the tooling becomes more automated, as designers, we are called to focus on higher order problems. Maybe I don’t even know if “problems” is the right word to use there. But things like framing what it is that we’re doing here, trying to determine whether the thing that we’re doing is the right thing.
And the risk in this — and it’s something that I’ve encountered in my own teaching — is that one very quickly ends up getting into pretty abstract waters. When we talk about illustration — your background in illustration — people can envision the object of illustration as an illustration, a drawing of some sort, right? When we talk about architecture, people can envision the outcome as a building.
What is the object — the kind of graspable object — of something like a product-service ecology, the sort of system level design that we’re talking about here?
Jodi: I’ve thought about this a lot. I think this is a key question. I think it’s an orchestration, or a plan, or a set of things that a designer has to oversee or manage. If I could talk a little bit about the work I’m doing with the unions right now: I think this is the most complex application of design I’ve ever had in my career. It’s unfolding in two ways: one is in the research, which I’ll talk about in a minute, and the second is in leading this really large and diverse research team that has union members, designers, HCI people, learning scientists, hospitality experts, and labor and HR researchers. We use design a lot to get to the question beyond the question, which is reframing.
This work is looking at preparing hospitality workers for the now-future of AI and automation. Hospitality is one of the industries that is going to be the most impacted by automation, and that impact has been exacerbated by the pandemic: during the pandemic, there was something like 95% unemployment, and a lot of hotels went to contact-free services and stopped housekeeping every day, moving to every couple of days instead. What we like to say is that you would never implement an algorithmic manager on a doctor without talking to them first, but for these hospitality workers, who are largely immigrants, women, people of color, middle-aged and older workers, many of whom don’t read or speak English, it’s done all the time.
So the AFL-CIO came to Carnegie Mellon to learn about advanced technologies like robotics. The union wanted to get ahead of some of these developments, and we struck up this project around hospitality with the union UNITE HERE, which is the largest hospitality union in the US. Now our team is examining this change, studying workers, training, and software systems, and we’re developing ways to help workers, employers, and technology be successful in this change. This is a huge project. It’s really compelling.
There are a lot of roles for design, and we talk about reframing all the time. One of the trends in this work, I think, is this prevailing notion that technology is bad, and we can use reframing to show that, no, technology is not bad. Often it’s the system around the technology: there’s been poor implementation, or uneven training, or features aren’t configured in a certain way, or the workers need a little more literacy or digital literacy training, or maybe a manager was put into place without really understanding the products that they have to use.
So in this way, a systemic view can be really helpful. Reframing can be used to get beyond this notion that technology is bad and is going to replace workers. Instead, we can say that workers can be a valued part of this transition: collecting data about workers, collecting data about customers, and using these data in pragmatic and ethical ways.
Jorge: And at that point, the object of interest or the object of focus ends up being a model of the system? Let’s take this example with the hospitality folks. I would imagine that the objective here, ultimately, is to get to a point where the technology is actually benefiting all these people, right? So that they’re not suffering from the things that you’re talking about now. Perhaps technology can be used to make their jobs better, more pleasant, more efficient. Obviously, avoiding the pitfalls, but also there are probably opportunities there. That might result in a bunch of different products, a bunch of different touchpoints, right?
Jodi: That’s right.
Jorge: As a designer, is the thing that you’re creating a kind of a level above that where you’re looking at a model of the system?
Jodi: I think it can be both. Let me give two examples. One is how housekeepers’ rooms are assigned. Traditionally, this was done on paper. They would get this thing they called their board, and their board would list the rooms. The rooms are checkouts and stay-overs. Stay-overs are light cleaning; checkouts are heavy cleaning, so a housekeeper would like to interleave these to save wear and tear on the body. With an algorithmic manager, the rooms are assigned by an algorithm, and they’re optimized for profit.
A guest room attendant can be assigned like ten checkouts in a row, not in one wing, but up and down in different wings of a hotel, on different floors. So they’re pushing a large cart, and they don’t like this. They want to be able to sequence their own order. So the workaround is to mark all these rooms as in-progress and go about your day. But there are problems with that, because then they’re not connecting to operations and management, linens aren’t getting moved, and other critical pieces fall through.
In our observations, we captured these different perspectives around room assignments from housekeepers, managers, the software manufacturer, and hotel management. We made an artifact, which is really a Figma sketch, of the ability to drag and self-sequence rooms. This became a huge component of our work. We worked with guest room housekeepers in co-design sessions, and they talked about how this kind of design would increase their sense of self-efficacy, transparency, and control over the workload. The workers in Local 226 in Las Vegas went and asked for bargaining language around self-sequencing. So our simple discovery and research about sequencing rooms, and our simple artifact in the form of a Figma sketch, was lifted up into bargaining material and changes in how these workers do their jobs. That’s a phenomenal trajectory, and it’s a simple prototype.
Another example is the software company. Typically, when designers work on projects like this, they do studies, throw design recommendations over the wall, and hope that somebody hears them and makes use of them. But the company whose product we have been studying is actually really keen and on board to benefit from the outcomes of this research. We are actually joining their agile meetings. We are running a study in a culinary training academy on the training for their software. We are taking some of their software and adding some screens that are based on our ideas. This is extremely synthetic, and I think it’s the first study of its kind where we’ll have designers, a training academy, a software company, and a union working together.
So there’s a big systemic view of how design can help the research unfold. I suspect that we will also have learnings about digital and AI literacy for these workers, which will, in turn, feed other aspects of the research and possibly even policy or governance. I had the benefit of speaking about this work in one of the Senate’s closed-door AI innovation briefings, and some of the language from our presentation made it into the executive order. I am super proud and excited by this work. On the one hand, you can say we’re just doing what designers do: we’re surfacing the voice of the user. But there is definitely a meta-level component to orchestrating this work and figuring out what to lift up in terms of the next step in the research.
Jorge: Sounds brilliant. I’m reminded of Dan Hill’s book, Dark Matter and Trojan Horses, where he talks about design interventions that are meant to drive broader change. It’s intentional, not accidental. Many designers listen to this podcast, and my sense is that there is urgency around needing to embrace the reality of how things are changing. My position on this has been that designers need to take a step back and think more systemically and work more systemically in the way that you are advocating for here.
Future of Design Education and Practice
Jorge: What can designers do to start on this path of thinking about the work they’re doing, perhaps at a higher level?
Jodi: First of all, I think that the pace of AI, as much hype as there is surrounding it, is going to force us to move to this way of thinking. Many of the simpler design jobs that UX designers did will be done by AI-enabled software. Although we did have the Figma incident, where the AI-enabled Figma feature went on the market, made Apple-looking designs, and was yanked pretty quickly. I think what designers do is changing. I’m making a joke about that, but in all reality, what designers do is changing.
And I think two things: we have to prepare students for that, and we have to make sure that when people exit to work in industry, or whatever they do, they feel confident taking on these systemic roles, management roles, whatever you want to call them. I also see and have traced a troubling trend that a lot of smaller art and design schools are closing. My undergrad alma mater closed suddenly—they had accepted students for the fall, and alumni donors received no advance notice. This just happened. It’s really terrifying. So there are a lot of indicators that change is needed. What I’m worried about is who is training people to think this way and how we can ensure that people are doing it.
Just to persist on this for a minute: during the pandemic, when everybody was home, I did some interviews on Zoom with design managers, design leaders, and design academics, and I asked them to characterize what they thought design education was and what was needed. This paper was published by the Design Management Institute in 2020. I think you can search for it, but basically the general finding was that these skills are not yet routine in designers, and they are what’s really needed.
The other question is, what is the base education for this kind of knowledge? Many of us in our generation came from a core design discipline, like graphic design, service design, product design, architecture, interaction design. The question for me is, what do we lift up and what do we take away? And I have thoughts about that, but that’s probably another podcast. My colleague Harold Nelson, who was one of the authors of the book The Design Way, along with Erik Stolterman, has actually just released what he’s calling a Design Flight School. So he has some classes available online for people to grasp the systemic concepts. And Harold is a close colleague of mine, and I think if anybody’s stepping in the right direction, I would believe that he is. So I would encourage people to check those out.
Jorge: What about on the client side, the people who hire designers? Is there an awareness that design needs to happen at a different level, or are they looking to hire people with portfolios full of screens?
Jodi: At the entry level, I think it’s some of the latter; there are people who make screens. I think a lot of times what happens is that designers are hired to do simple things or small tasks, and they see opportunities to surge up and do larger, systemic, managerial things within an organization or a project. Then companies say, "Oh, I didn’t know I needed this, but I do." I’m reflecting on some of what our colleague Hugh Dubberly has told me about his interactions with some of the companies he works with, and also on the example of my Figma sketch of self-sequencing: one small design artifact that took on a life of its own and had meaning at a lot of layers and levels.
I think we need to be working at the macro and the micro together. And a lot of times, companies don’t realize this until they have the benefit of experiencing it firsthand.
Jorge: Yeah, that resonates with me. I often get hired to help with things like redesigning the navigation of a website, and the conversations that are required in order to do that prompt these strategic discussions that bring up all sorts of other things, right? And I like to say that information architecture, in that case, ends up being a MacGuffin for these more systemic, strategic discussions about who we want to be in the world and how we want to show up.
Anyway, your work has been inspiring to me, Jodi. I will share…
Jodi: As your work has been inspiring to me as well. I think there’s so much power in architecture and information architecture right now. I think this is an under-explored area of design. We saw each other at Berkeley (for listeners, we ran into each other at Berkeley a couple of months ago), and I was just reflecting on the legacy of architecture at Berkeley and all the amazing things that happened there. This needs to be reexamined, I think.
Jorge: Absolutely. Yeah, Chris Alexander was there for a while, right?
Jodi: Yes, and Harold was there. They called it the Berkeley Bubble; there’s a paper about the Berkeley Bubble. They created a group of people who didn’t really have an agenda, who discussed and researched systems and design deeply. We need that again. We need that not in a professional conference or a place where people have to pay to go; we need groups of people discussing this. So if I had an inordinate amount of time, or maybe the right colleagues, hint, hint, I would try to make a discussion group around this, because I think it’s really relevant.
Jorge: I agree, and I would love to—I’ll raise my hand and say if you want to put together such a group, I’d be really keen to be part of that!
Jodi: Okay. More on that. We’ll make a promise to circle back to that.
Closing
Jorge: Alright. Jodi, it’s been such a treat. I want to be respectful of your time. Where can folks follow up with you?
Jodi: Sure. Thank you for that. And it’s been lovely to chat with you. I wish we could do this every week; we would have no listeners, though. jodiforlizzi.com is my website. I have a Google Scholar page, which shows all the things we’re publishing. And we are writing a book based on our class, Design of AI Products and Services. My co-authors are John Zimmerman and Dan Saffer. The working title of the book is Unremarkable AI, but that may change, so stay tuned. We hope to have that out on the presses very soon.
Jorge: And I hope to be able to have you back when the book is out, if not sooner. But I’d love to talk about that because it sounds like a great opportunity to learn from you yet more.
Jodi: That’s great. In examining what we’re doing in our class, we’re really looking at where it might be hard for designers to comprehend, especially, for example, starting in the middle of the double diamond. So we’ll see.
Jorge: Alright, we’ll be on the lookout for that. Thank you, Jodi. And best of luck with the book.
Jodi: Thank you.