Karen McGrane describes herself as a “UX multi-hyphenate”: information architect, content strategist, technical communicator, accessibility advocate, and more. She’s co-founder of Autogram, a content management and design system consultancy, and author of two classic books on content strategy. In this conversation, we focus on how AI might affect content management on the web.

Show notes

Some show notes may include Amazon affiliate links. We get a small commission for purchases made through these links.

If you're enjoying the show, please rate or review us in Apple's podcast directory.

This episode's transcript was produced by an AI. If you notice any errors, please get in touch.

Transcript

Jorge: Karen, welcome to the show.

Karen: I’m so happy to be here. Thank you for inviting me.

Jorge: I’m very happy to have you here. We were talking before we started recording, and I said that you are among the people that I wrote down in my list when I first started thinking about having a podcast. It’s like one of the people I’d like to talk with is Karen, and I’m thrilled that we are able to make this happen, that you’ve agreed to be on the show. Some folks listening in might not be familiar with you and your work. How do you go about introducing yourself?

About Karen

Karen: Sure. So I’m, I guess a very long-time information architect and UX practitioner. We’ve known each other from the way, way back days. I am one of the rare people in the field, or at least from that era, who actually has a graduate degree, a Master’s in HCI and Technical Communication that I got in the late nineties. And so I honestly have been doing information architecture and content strategy for my entire career.

When I left grad school, I joined a very small digital agency called Razorfish that eventually grew into a very large digital agency and then became a very small digital agency again. I described that as a little bit like getting an MBA in the trenches. It was an introduction to all the things that can happen to a business, and, in a sense, was a startup in the days before the tech industry became all startups. So when I left there, I was the VP and National Lead for User Experience. I really am grateful for the time I spent there. I got to work with a lot of people who have gone on to very successful careers in UX and feel like I learned a lot, and it was a fun time.

Ever since, I’ve been independent. I ran a company called Bond Art and Science for a while, and now I have a company called Autogram that I run with my business partner, Jeff Eaton.

We started that back right in the middle of the COVID times in 2020, and our focus is on helping large-scale organizations with their content strategy and particularly content management challenges.

My partner Jeff has much more of a technical background than I do. He was a Drupal architect for a long time and is very skilled at going up to his elbows in various content management systems. So we work with organizations on helping them plan and figure out what to do, especially if they’re going through a big replatforming or a big migration or some kind of thing that requires them to take a step back and look at all of their content more closely.

Jorge: These are organizations that are in businesses that are content-heavy, like publications, or is it all over the place?

Karen: So for a long time, back in the ’00s and the ’10s, I did a ton of work with publishers. Like, that was really my bread and butter, dragging magazines kicking and screaming onto the internet. Since then, publishers have really brought a lot more of that in-house; I think it has dawned on them that they really are a digital-first business.

Now, the organizations that we tend to work with are still information-intensive businesses, but they tend to be more in technology, financial services, or healthcare. Anybody who’s managing thousands or tens of thousands or honestly even millions of web pages and other types of documents: that might encompass their support documents, might encompass the mess of PDFs they’ve got.

There’s tons of giant databases out there filled with content and other information that somebody has to take a look at. And when that problem becomes pressing for an organization, we hope that they think of Autogram and come to us for some advice on how to do that well.

Why Content Management Matters

Jorge: I think that many businesses don’t realize that they are in the content management business in some ways, right? Perhaps it’s not central to the business, but like you’re saying, there’s so much around it like support, for example, that requires managing a large set of content items.

Karen: It’s plumbing. It’s there to support a large business, and you don’t necessarily think that the plumbing is the core of your business strategy. But when it breaks, it sure turns out to have a big effect on it. And if you’re in a situation where you’ve got to go in and fix it, that may be one of the only chances you’re going to get to really take a step back and say: how do we plan this correctly for the future? How do we make sure that the problems we set up the first time, or that have arisen as we’ve used it over the years, can get fixed so that it runs smoothly over the next decade or so? That’s my hope.

Jorge: We might put a pin in this, but there’s a concept of governance there, right? Like, you’re not just creating for now. This has to be a system that produces and manages content over the long term. There’s another aspect to your career I was curious about because I don’t think you mentioned it, but I see you also as a thought leader. I don’t know if that phrase fits, but in the sense that I think you’ve been at the forefront of advocating for making content more accessible, more usable, and more cleanly structured. You are the author of, is it two books?

Karen: Two books. One of them is called Content Strategy for Mobile and the other is called Going Responsive. And they were both written when mobile was becoming the hot topic, and to me, it was an important inflection point that gave me a chance to explain why structured content and why presentation-independent content and why accessible content were important for organizations that wanted to be planning for the future.

If you want to do well on mobile, that means essentially the same things as making your content accessible, and it means that you will be able to get your content wherever it might need to go in the future. So yeah, I would say I’ve been an advocate for that my whole career and hope to be for the remainder of it. It’s a big part of why I think about content management and information architecture the way that I do, because organizations can lock themselves into an approach that makes their content not very flexible and not very reusable. Simply by taking a slightly different mindset, they can do more with their content. Who doesn’t want to get more value out of the content they’re producing?

Jorge: There’s a phrase that, I don’t know if you coined it, but it’s a phrase that I associate with you, which I think embodies these ideas, and it’s a phrase, “truncation is not a content strategy,” right?

Karen: That became my brand on Twitter to such a degree that any time anyone finds like a funny truncation where it shows a bad word or whatever, I will get a Twitter ping or now a ping on one of the many other platforms you can find me on.

Jorge: Those of us who are in the industry, I think, I’ll speak for myself. I always get a little chuckle out of that because we’ve been in the rooms where those conversations are had, where it’s like, “No, just truncate the field.” And you’re like, “No, don’t!”

Karen: “Just drop in an ellipsis. What could possibly go wrong?”

Jorge: Yeah. That’s great. And that’s where I was hoping we would come to, because clearly you recognized in mobile the opportunity to have these conversations, right? Mobile provided a perfect opening to talk about structured content precisely because, all of a sudden, we were reading content on screens with much less real estate than people were accustomed to. It brought questions about what to show to the forefront, so you needed a much more thoughtful approach to what’s going on there.

Gen AI’s Impact on Content Management

Jorge: And it feels to me like we’re going through another big transition now, in particular with generative AI, right?

Karen: I agree.

Jorge: Well, I was hoping that we could have a conversation about this. What are you seeing out there? Like what’s your first approximation to this stuff?

Karen: I have to say, I really like reading your newsletter, and I think you have a take on AI and generative AI that aligns really well with my own. I guess I will give a little plug. My husband, Tim Carmody, works for a company called DeepLearning.AI and writes their newsletters, and so I have learned a lot about generative AI just by having him in the house. Before he took that job, I think I had maybe a sort of reflexively negative attitude toward AI in general.

I joke a little bit that there’s a quote from Douglas Adams, something like: any technology that existed before you were born has always been there and is just the natural order of things; any technology invented before you turn 30 is something great and you can get a job in it; and any technology invented after you turn 30 is the devil and should be feared and loathed. And I find that to be true for myself, right? Like, when the web came on the scene, it was like, wow, this is fantastic. This is exciting. You can get a job. And now with AI, there’s this sense of: oh, that’s weird and scary, I don’t know about that.

But I’ve really come around to more of a reflexive curiosity, more of a sense of: this is intriguing. It obviously poses some risks, but it poses some opportunities, and there are some exciting things here. And I think, much like anything out there, it’s a tool, right? It’s a tool that we’re gonna be using. DeepLearning.AI’s founder, Andrew Ng, often uses the analogy that AI is like electricity. It sure is something that can be dangerous in the wrong hands, but it’s also something that offers a lot of good potential.

And what really matters is the applications of the electricity. In and of itself, it is value-neutral. It is what you choose to do with it, and how you build those applications, that can be either very dangerous or very beneficial to society. And that’s why I really think that more awareness of it and more curiosity toward it is a healthy attitude to have, especially right now.

So I’ll say that Jeff and I at Autogram have been doing a fair amount of experimenting with generative AI. Not on our projects a whole lot, but experimental things: using it for encoding, and using it to figure out how you look at a large corpus of content and do some analysis on it.

Our takeaway from that is definitely that I don’t see any information architects or content strategists losing their jobs anytime soon as a result of that type of work. In fact, it really strikes me that there will be a greater need for people who can think in systems.

The flip side of that: I do think that a lot of low-value, very routinized writing jobs may be at risk. And I don’t say that with any enthusiasm; I say it because a lot of the content slop that gets produced is low-value, and the idea that we are now having robots generate even less useful content is not a good thing. It’s not a good thing for the web, not a good thing for society. And as much as I don’t like to see writers lose jobs, I also think that there might need to be a reckoning around what the value of SEO-generated content is in general. If it’s slop that robots are churning out, it didn’t have much value to begin with.

Jorge: I would describe the primary reaction that I’m seeing in many of our colleagues as being kind of fearful, like it’s based on fear. And I think it’s understandable because these are the first languaging systems that seem to be as competent as we are, but they obviously don’t have many of the other attributes of intelligence that people have, like theory of mind, consciousness, shame, pride, right? It’s like, there’s all these things that don’t come with the package, and those are pretty important, you know? And on the flip side, we were talking earlier about truncation and this challenge of we don’t have enough money to create a shortened version of this thing.

And this might be a situation where an AI could be useful, right? Because they’re pretty competent at summarizing texts. And again, there’s really no jobs being lost there because it wasn’t being done to begin with. So it does feel like we are gonna have to play it by ear. And to your point, I think that the space needs folks who understand this stuff, who understand structuring information.

Karen: Absolutely. I think it needs many more people who can do that effectively than we even realize. And one of the problems that I have seen, and that I fully expect to continue seeing for the next 10 years or so, is organizations thinking that they can just throw AI at a problem: capturing what they have in a large pile of documents, or appropriately pulling out relevant bits of information, or categorizing. Without having an appropriate structure in place, without having some existing taxonomy and some existing encoding as to document types, none of that is going to be possible. And it may be that, sure, you can use multiple generative AI systems in concert to do this as a set of stages, but guess what? You’re gonna need a person or a team of people who are able to orchestrate that and, maybe most importantly, do the review and analysis to determine if what the AI is doing is accurate, refine the systems, and correct the mistakes.

That is quite labor-intensive. It may, at scale, eventually wind up being more efficient than having a team of people do it themselves. But this is not some kind of slam dunk where, right from the start, you just throw AI at your million-page website, say “go sort this out for us,” and it comes back with answers. You’re going to be, I would argue, worse off, because one, it’s gonna do it wrong, and two, you’re not gonna know where it was wrong. You’re gonna have to go find all the mistakes that it made, and that is gonna take human effort and human brains.

Jorge: I would argue that it’s even riskier than that because these systems are so eloquent that we give them the benefit of the doubt, right? Like we say, oh, it has to know what it’s talking about because it says it so well, right?

Karen: Yeah. And some of the experiments Jeff Eaton and I have been doing have really uncovered exactly that: it will come back with a system, like “here, take these documents and come up with a tagging schema for them,” and it will come back with a list that just looks so complete and it’s got good descriptions and you’re like, “wow, that makes a lot of sense.” And it’s only if you are an expert in the domain that you can look at it and review it to understand what it’s missing, what it got, what isn’t quite right, what categories actually should be collapsed, and that takes a fair amount of genuine insight and knowledge of the field.

Jeff described it as, you know how when you read a newspaper article or a magazine article about a subject that you understand really deeply, and all you do is read the article going, “oh no, that’s, no, that’s wrong. No, they totally misunderstood that. No, they glossed over something very important here,” whereas if you read an article about a subject that you’re not expert in, you’re just like, “oh, that was very interesting. I learned a lot there.” It’s the same kind of thing that AI does, like it can give you something that looks polished enough and complete enough that you don’t know what it’s missing.

Jorge: And again, it feels to me like the whole languaging system thing is pretty novel to us, right? This is not something that we’ve had before, like not to this level of eloquence, right?

Jorge: You’ve been through this kind of transition before. You were working at Razorfish in the early days, and you talked about doing graduate studies in managing information or communicating information. I don’t know what time that was, but was that during the early days of the web?

Karen: Late nineties. I was there from ‘95 to ‘97. I remember we’d be in a lab, and someone would announce that the latest build of Netscape was available, like the latest beta, and everyone rushed to download the latest beta of Netscape because it was that new and that exciting. And the latest was actually a big improvement over the old one. So that was a fun time to be in grad school.

Jorge: I think you and I are more or less contemporaries, and I remember those days. It was a transition, right? It was a transition from the world before the web to the world after the web, and you were a player in that transition.

And you were, as we talked about earlier, instrumental in the transition from the world where we experienced these digital publications through screens with fixed proportions, let’s say, to this world where content can be experienced on any kind of device, right? It was triggered by mobile, but that wasn’t the only medium in which it would be experienced.

Anyway, you’ve been through these two transitions where the world changed, and I’m wondering if there are any lessons, any principles that can be brought forth as we go through another transition in how content is produced and experienced.

Karen: That’s a good question. I do really believe that thinking of the content that you produce as being distinct from the container in which it will live is a fundamental principle that I wish we could, as people who produce content, all collectively wrap our heads around and truly embrace.

And it’s very difficult. I think the legacy of print, the idea that you create something and it’s gonna live on this one sheet of paper and you know the dimensions of the sheet of the paper and you know the size of the typography, and you’re aiming what you produce for that box—that desire is so strong.

I think that the web and mobile and just multi-device publishing gave us a metaphor for understanding that. And I think that generative AI and LLMs and the idea that what we produce is now going to be remixed in different ways is maybe another additional layer on top of that.

Like, I have been using the search engine, I guess you might call it the AI search engine, Perplexity quite a lot. I always wondered what would unseat Google as the search engine. And it strikes me that something like Perplexity will probably be what it is. The reason I like it is that you can type in a natural language query, and it will go out and summarize a number of sources for you and give you an overview of a topic.

And I think the idea that what you create—a research paper or webpage or a book or some other document—will now wind up being remixed and summarized and spit back by an LLM, I think suggests an even greater recognition that you can’t be wedded to one particular form or one particular context. You have to be aware that what gets produced, you’re no longer in control of the linear narrative, and you’re no longer in control of how someone is gonna experience that document. And so, that means you need to have more structure, you need to have more semantics in place, you need to be planning for how that remixing or that reflowing might happen.

Jorge: The notion of structured text is something that… The semantic web has been something of a dream for a long time.

Karen: Oh yeah, I know, a distant future where someday…

Jorge: And yet now, to your point, it feels like it’s more important than ever because it’s one thing for a system to build a database of relationships between terms based on the frequency with which they appear next to each other, and it’s quite another for a system to build a database of relationships that have been somehow encoded based on the actual intended meaning. Those are very different things.

I’ve been doing a little bit of experimenting with the open-source GraphRAG project that Microsoft released this summer, and the results so far seem better than using plain, unstructured RAG. I think it points to what you’re talking about here: this notion that it takes a kind of mental leap for someone to stop thinking of the text in front of them as the authoritative or unique manifestation of that text and start thinking of it as something that can travel in some ways.

Karen: Yes, yes, exactly. Something that will be remixed or that… I guess it has always been true that the reader will experience that text in the way the reader wants to, which means they will skip ahead, they will scan the text, they will not read in order, they’ll read something, and then they’ll jump back.

And we, as authors, I think always had a responsibility to say, “I’m not in charge of how someone is going to perceive this text. The best I can do is present this text in a way that will increase the chances that someone will take the meaning away that I intended.” And as we have grown into a hypertextual world and a world of multi-devices, and now a world of LLMs and other AI, the need to do that is ever stronger. And to me, maybe this is just how I approach the world, but the semantic encoding that is put in there and the structure that goes in there, to me it is and always will be the foundation for…

Governance and Observability

Jorge: We put a pin in governance earlier, and I feel like I want to return to that, because we’ve been talking about authoring and we’ve been talking about reading. I think implicit in the conversation so far has been the notion of designing the containers so that they present the right thing at the right time to the right person, in some way.

Karen: Yeah.

Jorge: What about the governance piece of this? The, “How do we as an organization ensure that we are putting the right stuff out there to support our business, to support our customers?” Is there an AI angle to that part of the work?

Karen: I absolutely think there is. Oh, the checks and balances that get put into place to evaluate what the AI is doing, and the accuracy and responsibility that the AI is taking on. At Autogram, we talk quite a bit about a philosophy or approach called observability. That comes from software observability, which is essentially understanding a system based on its outputs. And the higher the observability you have into a system, the more likely you are to be able to evaluate whether that system is working correctly and to diagnose problems.

When we talk about it from a content standpoint, what we mean is having mechanisms in place to evaluate if the content is doing what you intended it to do. One of the projects that we have done with several of our clients is to try to identify the intent of each piece of content, and that is a parallel, I think, to understanding what it is that the user expects from the content. But intention is in many ways a business-focused approach to saying, “Why did you make this in the first place? Like, why is this out there?” And what that means is that if you understand the intent of the content, you have a better chance of being able to measure or evaluate whether it actually met that intent.

A casual example I give is that some of the content you’re producing is there for legal reasons. Like, you are legally required to have it. You care if people can find it, but the number of people who look at it, or the time someone spends looking at it, isn’t really how you evaluate whether that content is doing its job. Simply existing is what it means for it to do its job. But you may have legal requirements for different countries, and so you have to have a way to evaluate: if this is here for legal reasons, how do we know it is meeting our requirements for those legal reasons?

That’s very different from content that you might create for SEO purposes or for marketing purposes where the number of people who look at it and whether the right types of people look at it is very important. You want a way to measure or evaluate that.

So when you add AI into that, I think having that sense of observability becomes even more important because if you have AI, generative AI, generating information, generating text, generating encodings or categorization, you need to know what the intention of that is, and you need to be able to evaluate, is it hallucinating? Is it making things up? Is it going off and suggesting things that are not aligned with what your intention is? And you have to have observability. You have to build that into the system, otherwise you’re not gonna know.

Jorge: That seems like a perfect example of this. Another one that came to mind is support, which we talked about earlier. I could imagine that you could assign reductions in messages to tech support based on the presence or absence of a particular piece of content in a support knowledge base.

And once again, I think that having underlying structure to the content would make it easier for you to tie those pieces of content to some kind of strategic goal.

Karen: Yeah, I think we’ve already seen problems arise from that. I know there was at least one case where a company implemented an LLM-driven chatbot for customer support, and the chatbot started lying to people about their policies, and the company had to, I guess, uphold whatever it was that the chatbot said and give people discounts or credits, or something; I’m not sure.

It’s like, whoops, guess it turns out the chatbot might just start hallucinating, saying things to customers that… in this case, I think it was probably just funny and bad, but there’s a lot of risk there. There are scenarios where it could be dangerous. I think you’ve probably seen examples of things like there’s a lot of content slop on Amazon, and one of them was a guide to foraging mushrooms. That’s really dangerous. You can’t be sending people out with a guidebook to forage mushrooms; you can kill ‘em.

So, yeah, I think having those guardrails in place… I don’t mean to be too despairing here, but I have seen how little governance, or how poor the governance is, for many organizations with their web and other digital content. They wind up not managing it. They have hundreds of thousands of web pages and vast repositories of PDFs that no one has ever looked at, and they don’t know what they are. They don’t know if they’re saying the right thing. If that’s where we were at with content that we were fully in control of, what’s gonna happen when we start really letting LLMs in on the party and we don’t have any governance in place to control what they’re doing? I see that as a significant risk for a number of businesses, and I hope they don’t leap in with both feet without being aware of what the risks might be.

Recommendations for AI Integration

Jorge: Assuming that someone listening in is in a business that is looking into using AI to help automate part of their content management workflows, what recommendation might you have for them? And obviously, like recommendation number one might be “reach out to Autogram.” But what’s the number one thing that they should think about?

Karen: So I think it would really be having a robust approach to iterative testing and learning. It needs to be a research approach. One part of that is recognizing that LLMs are not some kind of monolith, right? There are dozens of them out there, and you might not necessarily know which ones are best for your particular problem until you try some of them out.

I was talking to Jeff Eaton about it and said, I think there’s a little bit of a parallel to the content management world in that, when you are deep in the space the way we are, we know how different CMSs think about problems. It’s like Drupal thinks about it in one way, and WordPress thinks about it in a different way, and Contentful thinks about it in a third way.

If you’re a client, you may not know what those differences are, and their salespeople sure as heck aren’t gonna tell you. Like, you have to get into the systems and play around with them a little bit to start to know how they think. The same, I think, is true with LLMs right now. Like, they think about things in different ways, and some of them are gonna give you better answers than others. So you are gonna have to start there and start testing.

And even once you get through that, I think it’s about really recognizing that the LLM is not gonna solve a problem for you. What it’s gonna do is give you options that you can then evaluate. Like, sure, absolutely, use an LLM for generating multiple variants of a headline or multiple summaries for a meta description, and then A/B test those. That seems like a fantastic use for an LLM, and it’s one where your skilled writers probably have a role to play in helping to guide it or seed it, so that you can generate multiple versions and do that at scale.

But that’s not a “let’s replace our writer and just have the LLM do it.” It’s a “we need to have an iterative research approach to how we’re gonna do this so that we can learn and evaluate how that’s gonna work well.” And especially for organizations working at scale, I think there is an opportunity for that to become more efficient, but only if you put the work in upfront.

Jorge: Again, I feel like we’re totally in alignment here. I’ve kept telling folks that these are tools to augment humans, not replace them.

Karen: Yes, exactly.

Jorge: And this is just along these lines. Okay, so last question. Where can folks follow up with you?

Closing

Karen: Where can folks find me? I can be found at my website, karenmcgrane.com. I can be found at the Autogram website, which is autogram.is. I can be found on LinkedIn, as much as it depresses me to say that. I am still on Twitter a little bit, as much as it depresses me to say that. I can also be found on BlueSky, on Mastodon, and on Threads.

Let’s see, if you want a copy of one of my books, reach out to me, and I will hook you up with one. I am getting websites for both of them, but they are not done yet, and they are no longer for sale. But if anybody really desperately needs a book, I will get ‘em a book. Yeah.

Jorge: The context there, for folks who don’t know, is A Book Apart, the publisher, went out of business. Unfortunately, there are a lot of authors caught in this situation.

Karen: Oh, and one more I forgot. I’m also a moderator on r/UXDesign, the largest subreddit for user experience professionals. So if you’re on Reddit, please come on over to r/UXDesign.

Jorge: Fantastic. I’m gonna include links to all of those in the show notes. Karen, thank you for everything you’ve done for the discipline, but also for being here and having this conversation. It’s awesome to finally talk with you here on the show.

Karen: Yeah, this was so much fun. I wish I could talk with you more often.

Jorge: Let’s make that happen.

Karen: We will.