Karl Fast is an independent scholar, information architect, and futurist. He’s the co-author of Figure It Out: Getting From Information to Understanding alongside Stephen Anderson, who was featured in episode 39 of the show. In this conversation, Karl tells us about what interaction designers can learn from cognitive science. We had a lot to discuss, so this episode is the first of two on the subject. (Here’s part two.)

Show notes

Disclosure: I received Karl’s book for free as a previous Rosenfeld Media/Two Waves author.

Show notes include Amazon affiliate links. We get a small commission for purchases made through these links.

If you're enjoying the show, please rate or review us in Apple's podcast directory:
https://podcasts.apple.com/us/podcast/the-informed-life/id1450117117?itsct=podcast_box&itscg=30200

This episode's transcript was produced by an AI. If you notice any errors, please get in touch.

Read the transcript

Jorge: Karl, welcome to the show.

Karl: Thanks for having me.

Jorge: Well, I’m very excited to have you here. For folks who might not know you, would you mind, please, introducing yourself?

About Karl

Karl: Sure. So, my name is Karl Fast. I am a Canadian by birth and education and sentiment, and I have been working in information architecture and user experience design for about 25 years or so. I like to say now that I create systems for thinking in a world that is just jam-packed with information. And a lot of the questions I have, and the work that I do, are about how we live well and how we think well in a world where information is cheap and abundant and pervasive. But the same is also true for computation and the networks and all the different things that we use to bring these together. And we can see trend lines where we’ve got more technology, we’re more dependent on it, it’s everywhere. And the ways that we use that technology — the possibilities of it — are simply becoming richer and richer.

And you can think back to the early days when we simply had a keyboard and a screen that was one color. And then we added a mouse. And then we had multiple colors. And then we got mobile and all of these types of things. I have worked as a practicing information architect. I have worked in startups, I have worked as a consultant. I have a Ph.D. in information science, and my work was on how to design digital libraries so that they are more of a knowledge creation tool rather than simply a document repository where you have to search and browse. How do we actually create knowledge from digital libraries, and how do we expand that potential?

And then, I spent about seven years working as a professor of user experience design at Kent State University. And now I think of myself more as an independent scholar, and I do consulting work and writing. I also think of myself as practicing what I call “information futurism,” of a sort — thinking about where information will go in terms of how we can use it as this resource.

The last thing I would mention is that about a year ago, I co-authored a book with Stephen Anderson. It’s called Figure It Out: Getting From Information To Understanding. And some of the stuff I think we’re going to talk about today is definitely part of that book.

Jorge: Stephen was a guest on the show as well. Your book was one of my favorite reads from last year. It touches on many subjects that I believe more designers should know about. And you mentioned several of them during your introduction there. I’m very curious about the phrase “systems for thinking in a world”… I don’t know if you use the word “flooded,” but in a world that is inundated with information, right?

Information

Karl: Yeah, inundated, jam-packed. I think of information in a historical context, in terms of civilization, really. One way to look at civilization and information is that we have always tried to have more information. We have always developed new technologies for creating information, for recording it, for copying it, for distributing it, for organizing it, for sharing it, et cetera.

And now — especially over the last 20 or 30 years, through digital technologies and through the internet — we have just exploded the amount of information. And the other way to look at it, though, is we have lowered the cost, right? The cost of creating, publishing, distributing, searching, organizing. All of these types of things have been lowered. But just because we have information doesn’t mean we also have understanding. And the cost of understanding still remains, I think in many cases, very high.

One of the things that we’re interested in in the book, and my long-term interest here, is: well, how do we change that cost structure around understanding? And I’m using that as a broad term to include things like planning, reasoning, thinking, sense-making, analyzing, decision-making — all of these more cognitively complex activities, which is, you know, more than, say, “Oh, I’m just kind of skimming the headlines in the paper,” or something like that.

Systems for thinking

Jorge: When you say systems for thinking, what does that mean? Like, what would a system for thinking be?

Karl: Well, part of it is shifting language, as opposed to a formal definition of systems — or shifting our perspective. Many times, I think, if you work in design, you work in user experience, you make products. We tend to think about the application; we think about the device; we think about the website; we think about the content. We think about this thing — this artifact — out there, as opposed to all of the other things that could come into play.

In that sense, I think we’ve often narrowed our views, often by necessity. But when we look at this long-term trajectory of where our technologies are going, we are going to see more and more opportunities to bleed these things out into the world, to connect to aspects of our physical environment, to connect to other people in richer ways.

And we can also see this with augmented reality. We can see it with virtual reality. But we can also see it with artificial intelligence and robots. And what would it mean for a robot not to be just pursuing its own goals but to help pursue our goals as a true cognitive partner that has a physical presence? So these are big, big questions that I think that we need to be asking. And I think that a lot of the work that we do tends to be really focused on, well, I’ve got a rectangle with a lot of pixels.

Jorge: What I hear implicit in what you’re saying here is that for us to effectively design and create these systems that you’re alluding to, like robots and AI, we have to somehow shift our understanding of the work we’re doing beyond these rectangles composed of pixels.

Karl: Yes, I think so. We need a broader toolkit. I like to talk about a broader conceptual toolkit. You know, we have a set of concepts that we use all the time when we are doing design, when we are making things. But a lot of that language has been built up around a certain set of assumptions.

So, let me give you an example of this. There’s a paper that I was reading a couple of years ago about what researchers are calling mobile cognition. And they start with an observation that, in hindsight, is incredibly obvious. Think of all the psychology studies we have, all these studies about how people think and how they work with information and make decisions. Think about all the stuff in, say, Thinking, Fast and Slow, by Daniel Kahneman, right? Famous book. Well, in basically every single one of those studies, the person is sitting down. But it turns out that there’s a whole bunch of studies about, hey! When people stand up, when they walk around, things change. It actually activates different parts of our brains and opens things up.

There’s an example of this in a longstanding thing in psychology called the Stroop test. This is where you have a list of names of colors, right? So red, yellow, blue. But sometimes the ink color matches the word, and sometimes it doesn’t, and your job is to name the ink color while ignoring the word. So your brain has to do two different things, and the test is generally used as a measure of cognitive control. Can you focus your attention on the salient information, and can you come up with the correct answer? How many answers can you get correct, and can you do it quickly? Well, it turns out all the studies with the Stroop test, a standard thing in psychology, were done sitting down. So then somebody did a study where they said, “Okay, stand up.” And people did better.
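To make the task concrete, here is a minimal sketch of a single Stroop-style trial in Python. It is a toy illustration, not the protocol from any of the studies mentioned here: the color list, the console presentation, and the scoring are all assumptions of the sketch.

```python
import random
import time

COLORS = ["red", "green", "blue", "yellow"]

def make_trial(congruent):
    """Build one Stroop stimulus: a color word shown in some ink color."""
    word = random.choice(COLORS)
    ink = word if congruent else random.choice([c for c in COLORS if c != word])
    return word, ink

def run_trial(congruent):
    """Present the stimulus, collect an answer, return (correct, seconds)."""
    word, ink = make_trial(congruent)
    # A real experiment renders the word in colored ink on a screen;
    # this console sketch just describes the stimulus in words.
    print(f"The word {word.upper()!r} is printed in {ink} ink.")
    start = time.monotonic()
    answer = input("Name the INK color (not the word): ").strip().lower()
    return answer == ink, time.monotonic() - start

if __name__ == "__main__":
    for congruent in (True, False):
        correct, secs = run_trial(congruent)
        label = "congruent" if congruent else "incongruent"
        print(f"{label} trial: {'correct' if correct else 'wrong'} in {secs:.2f}s")
```

The classic finding is that incongruent trials (say, the word RED printed in blue ink) are slower and more error-prone than congruent ones; the mobile-cognition studies described here vary posture while people run trials like these.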

So much of this, especially the research that we are building on and the conceptual tools that we have, is based on a set of assumptions. And when you see some of these things, you’re like, “Oh, well, that’s pretty obvious in hindsight!” But it’s so obvious, we’ve kind of forgotten about it.

Jorge: In the book, you talk about a distinction between the… I think you call it a brain-bound view of how the mind works versus a more expansive view. I think you call it the extended view. Is that what you’re referring to here?

Karl: Yeah. So, there’s a famous paper by two philosophers, David Chalmers and Andy Clark. In the late ’90s, in 1998, they wrote a paper called The Extended Mind. And the idea of The Extended Mind is, well, where does the mind end? That’s the question that they’re really asking. And what they argue is that through the rise of cognitive science starting in the late 1950s, we have come to equate the mind and the brain as the same thing. And they argue that they are not the same thing, that we shouldn’t think of them that way, and that to do so is very limiting. The brain certainly ends inside the skull. But the mind, they argue, does not. We can think of the mind as extending out into the world.

Now there are weak and strong forms of that argument. In the strong form, you would say that when you are holding your phone, it is literally part of your mind. In the weak form, you would think of it more as a way of offloading. And there’s a lot of debate around this. The extended mind is one idea within a broader notion that I think many listeners have heard of to some extent, which is embodied cognition, or embodiment for short. In the book, we use the word “embodiment” as sort of a broad shorthand, kind of in the way that in design circles we often use UX as an umbrella term, rather than getting into the nitty-gritty details of the difference between interaction design and information architecture and usability and content strategy, right?

Each of those is important, but as a broader catchall, for people who aren’t doing the detailed work, it’s a label. And so, we use embodiment in the book as this broad, encompassing thing, because within it, if you dip into the academic literature, you’re going to hear: extended mind, distributed cognition, situated action, activity theory, and enactivism. There’s a whole pile of these different ideas. The distinction between the brain-bound model of cognition and the extended-mind model of cognition is terminology that Andy Clark came up with. He doesn’t use it in that paper, but he’s explored it in several books. And I believe that actual phrasing comes from a wonderful book he wrote — although it’s a heavy book for sure — called Supersizing the Mind.

Interactionism

Jorge: Circling back to the Stroop test that you were talking about and how the test participants’ performance in the test varied depending on whether they were standing or sitting, what that implies for me at least, is a need for greater consciousness about what my body is doing whenever I’m performing any kind of activity — especially a cognitively taxing activity. Is that fair?

Karl: I think that’s absolutely fair. I would also say that this is important for people who are making things, who are building the tools that we have. We talked a bit earlier about the word “systems.” You asked me about that, and I tend to use it in a somewhat loose way to mean that you’re not seeing just the app, just the website, just the device; you’re seeing the body. You’re seeing the physical space that people are in. But more importantly, you’re seeing how all of these things are connected together and what connects them, right? So, you are changing the unit of analysis. In the book, we describe this as the “locus of understanding.”

Where is the locus of understanding? Is it the app? Is it in the brain? Or is it spread across all of these things? And what is it that connects them? The way I’ve come to see it, interaction is the fundamental thing that connects all of these together. And I’ve come to believe that we have a relatively weak way of talking about interaction, or of understanding all of the ways that it happens. I don’t think this is great terminology, but my current working term for this is “interactionism.” It’s a bit of a problematic word, which I wouldn’t mind getting into if you don’t mind.

Jorge: Let’s do it. But first, to be clear on what you’re saying here: Am I right to understand that what you’re saying is that interaction in this view is where the locus of understanding resides?

Karl: No, I don’t think so. I wouldn’t say that. That is one thing that one can focus on, and I don’t think we see it very well. And I can give you some examples of why I think interaction is really important. I think it’s often a case where we want to change the locus.

Sometimes you do want to zoom down and focus just on what’s happening on the screen or in the app. Sometimes you do want to focus more on what the body is doing. When I think about changing the locus, we also need to go wide: to look at all of those things, which we would normally see as independent and discrete, and at interaction as kind of the glue that binds them and makes them all function together as a bigger system.

Sometimes that happens through things which are explicit, sometimes through things which are implicit or have to be inferred. And if we do that, I think we get a new language for what we see when we’re, say, doing a usability study, or what we see when we’re doing ethnographic work, and how we interpret that.

Jorge: So would a fair reading then be that whenever we are designing for interaction — when we’re doing interaction design — we are… Well, first of all, this lays a big responsibility on folks, right? Because somehow you’re designing part of the person’s cognitive apparatus, so to speak.

Pragmatic and epistemic actions

Karl: Sure, sure. But I mean, interaction design already talks about designing behavior, right? And you know, that means that you are shaping the things that people do and the ways that they are in the world. But we can also talk about it in terms of just facilitating certain types of interactions.

So let me step back a little bit and tell you about a paper that I read. I’ve got this lovely book called HCI Remixed; I learned about it about 15 years ago. They asked a number of famous people, important scholars and researchers in the world of human-computer interaction, what was the one paper that really changed their thinking. And they didn’t print those papers; they just asked everyone to write an essay about that paper and why it changed their thinking.

And every time I pick this book up, I think to myself, “Well, what’s the paper that changed my thinking?” And the answer is really easy. It’s the paper called On Distinguishing Epistemic From Pragmatic Action, by David Kirsh, a cognitive scientist at UC San Diego, and his grad student at the time, Paul Maglio. And this is a study about how people play Tetris, but it’s easiest to understand it by thinking about how people play chess.

So when people play chess, imagine that you want to move the Bishop. You pick the Bishop up, and you move it into position, but you keep your finger on it. And as you’ve moved it, you realize, “Uh oh. That’s a bad move.” So you move it back. From an interaction design perspective, or from HCI, we would say, “Oh, well, you have done two actions on the world. You moved the piece, and then you pressed undo.” That was, therefore, an inefficient action. It was not worth doing. We would probably even classify it as a mistake.

And what Kirsh and Maglio say is, we should not think of all action as being the same. Action gets done for different reasons. And through this study of how people learn to play Tetris (they’re using chess to illustrate it), they argue for a distinction between at least two different types of action. In this example, they would talk about what they call a pragmatic action. And a pragmatic action is one in which you are making a change in the world, the point of which is to change the world.

Jorge: Moving the Bishop to a different square.

Karl: You’re moving the Bishop to a different square. So if that moving of the Bishop is pragmatic, then it’s an error. But we all know from having learned to play chess that that’s not an error, right? And so, they argue that what you’re really doing is: you are moving it, and once you have it in that position, it’s easier to see. And it is easier to see than to imagine it in your head. So, it’s what they call an epistemic action. Epistemic as in epistemology, as in of or relating to how we know. So, epistemic actions are things that we do, changes we bring about in the world, that make our mental computation — that make our thinking — easier, faster, or more reliable, reducing the chance of making a mistake.

And once you begin to think about epistemic actions, when you see actions this way, there are so many different examples. You see them all over the place. Because if we only had pragmatic actions, this is how the ideal person would play chess: they should sit there stock-still and never move, and then they should make the most physically efficient move possible to pick up a piece and move it into position, with as little extraneous movement of the body as possible. But there’s a whole bunch of things that we do that really can’t be accounted for if everything is a pragmatic action. There are so many things we would have to say are completely superfluous.
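One way to see why the “wasted” chess move can be rational is as a cost trade-off. Here is a toy model in Python, an illustration of the idea rather than Kirsh and Maglio’s actual analysis; every number in it is made up for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Strategy:
    name: str
    motor_cost: float   # seconds spent physically acting
    mental_cost: float  # seconds spent imagining and judging
    error_rate: float   # chance of misjudging the move

# Judging the move entirely in the head: no extra action, but imagining
# the resulting board is slow and error-prone for a beginner.
in_the_head = Strategy("simulate in the head", motor_cost=0.0,
                       mental_cost=6.0, error_rate=0.25)

# The epistemic action: slide the Bishop there first, finger still on it.
# The "inefficient" extra move turns a mental-imagery problem into a
# perceptual one, which is cheaper and more reliable.
move_then_look = Strategy("move the piece, then look", motor_cost=2.0,
                          mental_cost=1.5, error_rate=0.05)

def expected_cost(s, blunder_penalty=30.0):
    """Total time plus the expected cost of a misjudged move."""
    return s.motor_cost + s.mental_cost + s.error_rate * blunder_penalty

for s in (in_the_head, move_then_look):
    print(f"{s.name}: expected cost {expected_cost(s):.1f}s")
```

With these assumed numbers, touching the piece first costs 5.0 expected seconds against 13.5 for pure mental simulation. Judged purely as world-changing action, the extra move is waste; judged as part of the thinking, it wins. That is the distinction.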

For example, consider gesturing with our hands. Why do we talk with our hands? There are some people who have looked at this question. There’s a woman named Susan Goldin-Meadow. She published a book about 15 years ago called Hearing Gesture. And for 25 years basically — or more by this point — she has been asking this question: why do people talk with their hands?

And there’s a pretty obvious answer to this, right? You’re like, “Oh, well, I’m using these gestures because I am creating information for you, the listener.” These are things that are helpful. It’s extra information, just like talking faster or talking slower or speaking loudly or talking softly. That conveys different information. And that’s a good answer. And the research says, “Yep, that is absolutely part of the story.”

So, why do you talk with your hands when you’re on the phone? Or, say, on a podcast? Because people do this. You can’t see the other person, but people still make these gestures. So, one answer there — and I think a pretty good one — is, “Oh, well, it’s a learned behavior.” You’re used to being around other people, right? So obviously, these gestures would carry over. Fine. What about someone who is blind? Why do they talk with their hands? Because studies of people who are not sighted — and who are born without sight — show that they also talk with their hands. They will also talk with their hands when they are talking to someone else who is blind. So imagine, right? You’ve got two people, neither of whom has ever seen a hand. They are talking back and forth. They are using hand gestures, which they know cannot be seen. And when they analyze them and classify them, it turns out that they’re using very similar gestures when talking about the same kinds of concepts.

There are lots of studies around this, like, say, comparing kids who are sighted and kids who are blind and how they use gestures when they have a reasoning task, and then they have to explain their reasoning to somebody else. And they both use similar kinds of gestures. The conclusion from all of these studies, at a high level, is that, yes, there is a component in which that communication is meant for someone else. That gesture is for the listener. But there is also a component in which that is directed inward. We actually use these gestures to shape and facilitate and kind of grease our internal cognitive mechanism.

And you can see this the next time you go to a meeting and you’re called on to speak. Try sitting on your hands and see how well you talk. Nobody likes to do it, and people actually find it a struggle. Or go to a conference, right? We’re talking late in the COVID pandemic, when we’re not really at conferences. But you’ll go to, say, a panel discussion, and somebody asks a question, and somebody might fumble. But what’s going to happen, I guarantee it, is that they’re going to start moving their hands, and then the words will just tumble out. And it’s because the gesture has an internal component to it. That’s what the research is pointing to.

Jorge: What I hear there is that somehow the gesture is part of our thinking system.

Karl: Yes.

Jorge: How so? Like how does that work? And I want to go back to the Bishop. It’s clear to me what the pragmatic action does in that case, but what does the epistemic action buy me? Maybe I put my fingers on the Bishop, lift it, and hover it over the board. Am I building some kind of more tangible mental model of possible moves?

Karl: You are, because… well, what you’re doing is taking things out of a “brain” space and putting them into a perceptual space, right? You’re shifting that board. Without that epistemic action with the Bishop, you have to — in your mind — imagine what the board would look like if you moved the Bishop into that position. But when you do it in space, it becomes a perceptual problem, and you can actually see it. And that is easier for us to do, especially when you’re a beginner. You could say here, “Well, expert chess players, grandmasters, they don’t do that.” And this is true. But the reason they don’t is that they have practiced really, really hard for many, many years to get really good at it.

And studies of chess players have shown that the cultural idea we have of chess as an indicator of intelligence is really incorrect. What are chess players really, really smart at? They’re really smart at playing chess, but that doesn’t make them really smart at, say, astrophysics. The point is that no matter how expert you are in some domain, there’s always some other area where your brain-based cognitive abilities have limitations. We always reach a point where our brain is simply overwhelmed.

Don Norman said it really well, many, many years ago, in the lovely book Things That Make Us Smart: “The power of the unaided human mind is greatly exaggerated.” And so one way to look at what we do in design is through that statement. We are building things to overcome and extend, augment, and amplify the powers of the human mind. But what embodiment is telling us is that we need to incorporate more things into that picture. And I think that’s especially going to be true as our technologies improve and allow us to use more and more of our physical abilities, our interactive abilities, our interactive powers, to amplify that.

Learning about embodiment

Jorge: Well, it sounds like an area that designers — particularly designers who are working on the sort of digital systems that we run so much of our lives on — need to be aware of. And unfortunately, we’re running out of time here. I feel like we might need a second conversation to dig more deeply into this, but where could folks follow up with this subject? Like where can they find out more about it?

Karl: If I were to recommend one thing for people to go back to that is very readable, as a good starter on this, I would actually point to Don Norman’s book Things That Make Us Smart. He talks about these kinds of ideas in that book, and that book is almost 30 years old now. I feel it has been hugely overshadowed by The Design of Everyday Things. He gives many different examples, and he introduces the concept of what’s called distributed cognition, which is a subset of what I think of as embodiment. One of the principles of distributed cognition is that cognition is embodied.

The Tetris paper is considered to be a major paper within the world of distributed cognition. I would recommend looking at that paper, On Distinguishing Epistemic From Pragmatic Action, by David Kirsh. I don’t recommend reading all of it. We talked about just the one example used in that paper: chess as an analogy for explaining the findings. The focus of the paper is actually on how people play Tetris. They developed a robotic Tetris player, a program to play Tetris, and compared it to how human beings play Tetris, looking at the differences between the two. And the robotic player was based on a classical cognitive science model, where you perceive, and then you think, and then you act. So I think that’s a really interesting place to look as well.

David Kirsh also has another paper that I think is just fantastic, very readable. It is called The Intelligent Use of Space, and you can easily find it online as well. This one is particularly fascinating because it’s published not in a journal of cognition or a journal of design; it is published in a journal of artificial intelligence. And it is presented as, I think, a really damning critique of AI and robotics. Because what he points out is that all of this stuff, cognitive science, AI, and also human-computer interaction, and thus UX, is built on classical cognitive science. And classical cognitive science says, “Hey! We perceive information from the world. Then we’ve got our mind — our brain — which does all this thinking work, the cognitive part. And then action is simply output.” And embodiment says, no, no, no. It’s much more complicated than that. Thinking and perception and action and the world are all intertwined in many, many different kinds of ways. And so he says, look: if robotics is based on this idea, it doesn’t use the space around it as part of the thinking.

Take the first driving robot. There’s a guy named Hans Moravec. I think that’s his name. He did some of the early work on robotic vehicles, I think for his Ph.D. dissertation. The way that he designed the robot, it would scan the environment (okay, where are all the different objects?), and then it would think and plan out its movements for, I don’t know, 10 or 15 minutes. Then it would move up to about three feet, and then it would stop, scan the world again, and move again.

Well, we don’t work that way. Babies don’t work that way. No animal works that way. You might think, “Oh, well, that’s the early eighties. That’s the way it used to be.” But this is still the way it is in robotics. A big project in AI has been: how can you get robot arms to assemble a chair, like a chair from Ikea? Can you do it? This is considered to be like the moon landing equivalent in robotics.

And so, a paper came out about four years ago that made kind of a splash. It was even on the front page of The New York Times. They went and bought two off-the-shelf robotic arms and then programmed them so that they could assemble a basic Ikea chair. And at first it’s like, wow, it did it in 20 minutes. A chair! People are going to be out of work. But then you read the paper, and you realize that it does not assemble chairs anything like human beings assemble chairs.

So, they broke the problem down into three phases. The first phase is scanning the environment. They randomly scatter all the pieces of the chair onto the surface, and the robot spends three seconds scanning to identify all the different pieces. Then it makes a plan for how it’s going to assemble the chair. It sits stock-still for about… I think it’s like eight or nine minutes, just thinking, not moving. And then the next 11 minutes is executing the plan. So it makes this plan: “I’m going to pick this piece up, and then I’m going to rotate this arm, and then I’m going to move the other arm and rotate that, and I’m going to grab it over here…” And that’s how it works.
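The structural difference being described here, one big scan and a long planning pause followed by blind execution, versus perception and action tightly interleaved, can be sketched in a few lines. This is a caricature in Python, not the control code of Moravec’s cart or the chair-assembly robots; the toy “assembly” task and both controllers are invented for the illustration.

```python
import random

class World:
    """Toy assembly world: loose pieces must end up placed in order."""
    def __init__(self, pieces):
        self.loose = list(pieces)
        self.placed = []

    def scan(self):
        return list(self.loose)

    def place(self, piece):
        self.loose.remove(piece)
        self.placed.append(piece)

    def done(self):
        return not self.loose

def sense_plan_act(world):
    """Classical pipeline: one scan, one complete plan, blind execution."""
    snapshot = world.scan()        # perceive once, up front
    plan = sorted(snapshot)        # long deliberation, all "in the head"
    for piece in plan:             # action as mere output of the plan
        world.place(piece)

def interleaved(world):
    """Embodied alternative: cheap perceive-act steps, tightly looped."""
    while not world.done():
        piece = min(world.scan())  # a quick glance, then one small move
        world.place(piece)         # the changed world simplifies the next look

for controller in (sense_plan_act, interleaved):
    w = World(random.sample(range(10), 5))
    controller(w)
    print(controller.__name__, "->", w.placed)
```

Both controllers reach the same end state on this trivial task, but only the second re-checks the world after every move, so it never has to hold the whole plan in its head; the world itself carries part of the state, which is Kirsh’s “intelligent use of space.”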

It’s this whole idea that perception comes in, then cognition is thinking really hard inside the head, and then action is simply the output. This idea is buried really deep. If we’re going to build a future where we have robots as true partners — software AI as true collaborators — we need to begin to see human beings in the full dimensions of our cognitive abilities. Until we can do that kind of thing, I think we’re always going to be limited as designers. And we know that our technologies are changing quite a bit. We can see all these things on the horizon. So, my question around this idea of interaction is: are we really prepared for that? And I don’t think we are.

Jorge: Karl, this seems like a great place to wrap up, even though it ends on kind of a question mark. It’s a prompt for us to have a second conversation about this.

Karl: Yeah. Then we can talk about rats and heroin!

Closing

Jorge: I like that. That would be interesting. I’m very curious now as to what you mean by that. But in the meantime, where can folks follow up with you?

Karl: So, I tend to hide a little bit. I’ve especially been hiding for the last six or seven years, and I’m hoping that that is going to change over the next year or so. The main way to follow me is probably on Twitter; I’m @karlfast. That’s K-A-R-L-F-A-S-T. Technically, I have a website, but it’s like seven years out of date. You can also find me on LinkedIn; you can look me up there and send me a message. I will tend to respond in those two places; it just might take me a couple of weeks because I tend to be very slow. I’m not really active on Twitter at all, but I will be notified, and I will generally respond.

Jorge: Well, fantastic. I’m going to include links to all of those ways of getting in touch with you in the show notes, and I’m also going to include links to the papers and the resources that you mentioned above. Thank you so much for being with us today, Karl!

Karl: Thank you for having me.