Morgan Marie Quinn is a Content Design leader at Google where she oversees content teams across Bard and Google Assistant. Morgan joins Kristina to talk about all things large language models (LLMs) and generative AI. From defining LLMs to accuracy, vulnerabilities, and governance, this episode has it all, direct from someone at the forefront of this new era for content design.
About this week's guest
Morgan is a recovering Mommy Blogger and Content Design leader at Google, where she oversees content teams across Bard and Google Assistant. Before joining Google, she led Content Design and strategy for companies like ServiceNow, Compass, and Intuit. When she’s not wrangling people, processes, and Large Language Models, you can find her ignoring her inbox, shuttling her teenagers to their social engagements, and perfecting the art of the power nap.
Kristina Halvorson:
Let's get into it. Are you recording on your end?
Morgan Marie Quinn:
Let me start and then do cameras off. Because I know you.
Kristina Halvorson:
Is that okay?
Morgan Marie Quinn:
Yeah, of course. I'm recording.
Kristina Halvorson:
Was the bio that I read for you last time, that was good, right?
Morgan Marie Quinn:
Yeah. There's a slight change, but I wouldn't worry about it.
Kristina Halvorson:
Okay. All right. Ready?
Morgan Marie Quinn:
Yes.
Kristina Halvorson:
Let's go. Hello. Hi. Welcome back. No, not yet. Don't say anything yet.
Morgan Marie Quinn:
Sorry. I'm waiting. Sorry, go ahead.
Kristina Halvorson:
I'll cue you in. Wait, you can't see me.
Hello. Hi. Welcome back to the Content Strategy Podcast. I'm your host, Kristina Halvorson. Boy, do we have a juicy topic for you today. That's right, friends and neighbors, it's AI. Here to talk about AI and LLMs is my friend and neighbor. Well, she's not really my neighbor, but in my heart, she is, Morgan Marie Quinn. Let me tell you about Morgan. Morgan is a recovering mommy blogger, which we should just dedicate an entire episode to that, and content design leader at Google, where she oversees content teams across Bard and Google Assistant. Before joining Google, she led content design and strategy for companies like ServiceNow, Compass and Intuit. When she's not wrangling people, processes and large language models, you can find her ignoring her inbox, shuttling her teenagers to their social engagements and perfecting the art of the power nap. Very admirable. Welcome, Morgan, to the show.
Morgan Marie Quinn:
Hi, Kristina. Thank you so much for having me.
Kristina Halvorson:
Thank you so much for being here. We have so much to talk about and it's so complicated.
Morgan Marie Quinn:
Yes, it is.
Kristina Halvorson:
We have, after much conversation, put together a list. I never walk into interviews with a list. I always just tell people, "Just show up and pretend that we're having coffee and conversation will flow naturally." It turns out, large language models are not necessarily an easy thing to just chat about over coffee. We're going to have a little bit more of a structured conversation. Perhaps my listeners will appreciate that, instead of just the rambling. No, that's not true. Not every conversation is rambling. If I screw a question up, you tell me that is the wrong question to ask. Let's just start at the top.
Morgan Marie Quinn:
Okay.
Kristina Halvorson:
Actually, the top is not the large language model. The top is, what I always start with, which is Morgan, can you please tell us a little bit about how you came to your position as a content design superstar at Google?
Morgan Marie Quinn:
Wow, superstar might be a stretch. Oh, my gosh. I had such a winding road to get here, to be honest. I know that's really common amongst folks who work in this industry. I always love hearing people's journeys. For me, like you mentioned, I did start as a mom blogger. I stayed home with my kids for about four years. I had worked in personal finance and took time off when I became a mom. During that time, I started blogging. It was very random. I was filling my days with something to do for myself that didn't have anything to do with children. At that time, I would say the early 2010s, it was the rise of the mommy blogger. It was very much a time to be blogging. Through my blog, I started a little freelance business.
I was doing product reviews, I was a managing editor for another site, doing social media management, and I started working with brands. This was before influencers and TikTok and all the stuff that's going on now, totally different world. Through that work, I started writing for a product called Mint, which was a personal finance tool owned by Intuit. That was the perfect blend of my personal finance background and now this freelance writing marketing thing I had going. I started writing for their blog and then eventually got hired full-time there as their managing editor and social media manager. It was just an incredible opportunity. This door opened for me. That's really how I got into tech. Inevitably, like a lot of us who found ourselves in tech or in marketing or social media, I started working on the product, whether it was doing in-product copy or launching campaigns, working with product managers and some of my product marketing partners on making the UI better.
That really led to a more formalized role as a content designer. Content design wasn't a thing back then, but it became a thing, and I would say Intuit was definitely one of the first companies to formalize content design as a role. That's what I did; I worked there for a number of years, spent the bulk of my career working at Intuit across a ton of different products, like TurboTax, QuickBooks Self-Employed, and Quicken, when they owned it at the time. I got such incredible, valuable experience there. I was really ready to grow my career and stretch in some new ways and was pursuing more of a leadership role, and ended up getting an opportunity to work at ServiceNow. They didn't have a content design practice at all and they were interested in standing one up. I went to ServiceNow and stood up the content design and conversation design practice over there, which was amazing. Also, blew my mind. Totally different product.
It's an automation tool built for engineers. Very technical. My brain was broken most of the day. I have never felt more like I didn't know what I was doing, except maybe today in my role. I was really craving getting back into the more consumer facing space. I ended up taking a role at Compass. The senior vice president at ServiceNow, senior vice president of design I should say, left ServiceNow to go to Compass to grow their UX team. He contacted me and was like, "Hey, do you want to do it again over here? We had built that practice, content design practice together at ServiceNow." He wanted to do it at Compass. I was like, "Yeah, sure." I took a leap and went over there.
It was definitely a wild experience. They were in this hyper-growth phase, very much operating like a startup. At the same time, I had been talking to Google for a while. I'd had a recruiter I was connected with. I had explored some different roles. Hadn't really found the thing that I was looking for until this opportunity came up to work on Assistant. It was really intriguing to me. It checked a lot of my boxes, most importantly, that I had really been wanting to get out of the UI. I was just tired of talking about strings and style guides and capitalization and should we use ampersands. All those things are important, but I just really wanted something different. I joined a little over a year ago. It hasn't been that long, and I have been working in this Assistant technology and LLM space ever since.
Kristina Halvorson:
That is quite a journey. The weird thing is you're 32 years old. It's so strange.
Morgan Marie Quinn:
In my heart, I'm 32.
Kristina Halvorson:
I know. We all are and yet, why am I so tired? What is interesting to me about that arc is, I mean, who just goes from mommy blogger to standing up a content design practice at a major brand in, I don't know, fewer than 20 years? Not only have you worked with so many interesting organizations, but you also have just jumped headfirst into this terrifying new world. Well, not terrifying because robot overlords, but just because it's complicated, world of AI and of training these LLMs. I am going to ask you to explain to me, actually I'm asking for a friend. No, it's me. Could you explain to me what is a large language model? You're working on Bard, you're working on Google Assistant, what is a large language model in the first place?
Morgan Marie Quinn:
Yes, this is a great place to start. I didn't know anything about large language models when I started this role. I know a lot of folks are just learning about them now and I'm not that far ahead of you all. We're all in it together. An LLM, like you said, stands for large language model. Essentially, what it is, is it's just a type of artificial intelligence and we call that AI. You hear people talking about AI, that's artificial intelligence. Essentially, what they do is they take in huge amounts of data and then they use all that data and they use it to learn and understand and then generate new content. Now, data in the world of LLMs is content.
LLMs need a huge amount of content to learn from. Whatever type of data or content they're taking in is eventually what the LLM is going to become an expert in. It starts to spit out related content; it learns and really creates new content. It is not spitting out verbatim or exactly what went into it, but it starts to generate its own. They're not human at all, but they can sound very human-like if the content going into them was created by humans. They're really interesting because they can surprise us a lot. They go farther than what we would expect a machine to be able to create.
Kristina Halvorson:
It just occurred to me for the first time, what if we were feeding AI content created by AI?
Morgan Marie Quinn:
You're not the first person to ask that question.
Kristina Halvorson:
Then what happens? Does AI eat itself? Is that what happens?
Morgan Marie Quinn:
I'm sure there's a movie coming out about this in the next six months that will tell us.
Kristina Halvorson:
I'm totally not interested in seeing that movie. Not even a little bit.
Morgan Marie Quinn:
I'll say though that this question of what it is, is super important. The other thing that is really important is to wrap your mind around how they work. It's not just what it is; how it actually works, I find to be more grounding. I talk a lot about how building an LLM is like teaching a baby how to talk. It's an easy concept for me to grasp. I'm not a childhood development expert, but I always say I do have my own little LLMs at home. Raising kids, there are things you're supposed to do to encourage language development in them. There are a lot of similarities between encouraging that language development and what we do with LLMs. I know it sounds weird, but hear me out. Babies are like these little LLMs that haven't learned how to talk. You have to start giving them inputs, because babies ultimately learn from the world around them.
All the stuff they're experiencing are these data inputs. It's like data is going into babies all the time, and they're learning. We read to them. We talk to them. We'll reflect back to them when they are using baby talk; we'll say, "Are you asking for more milk, or do you want me to pick you up?" That's how we reflect back to babies so they can start learning to talk themselves. Then we just expose them to a lot of experiences and then they start to learn. LLMs are a lot like that. Like babies, babies talk a lot of gibberish and LLMs do too. A baby LLM can make sense sometimes and then it starts speaking baby talk and it goes off the rails on you, just like babies. What you have to do is keep exposing them to new and more advanced information so they can keep developing and growing and learning.
LLMs are similar in that you have to expose LLMs to a ton of content so they can learn. You also have to correct them all the time when they get stuff wrong, because they do. Then you have to give them a lot of examples to learn from. Like little kids, like I said, LLMs will say the darnedest things. My favorite story is I was working on an LLM experiment last year to see if we could give this LLM more music expertise. We put all this music data into the LLM and did some training and fine-tuning. We started testing it and it would not stop talking about Taylor Swift. I don't know why. It was so bizarre. We were cracking up. We didn't put anything in there specific to Taylor Swift. I don't know why, but it was just so funny. They do these surprising things. You don't always have full control over what they do. You're just basically constantly testing and monitoring and course correcting them so their training can get better and better.
Kristina Halvorson:
First of all, that's amazing. I'm like, it's not human, but even the robots want to talk about Taylor Swift.
Morgan Marie Quinn:
She's that powerful.
Kristina Halvorson:
I know.
Morgan Marie Quinn:
She's that powerful.
Kristina Halvorson:
Maybe she owns the AI, just in general. I just want to say, I feel like everyone at this point has played around with ChatGPT or some version of that, some version of generative AI, at least intentionally we've all been working with that without knowing it for years, but at least intentionally. A thing that I've found when I'm messing around with ChatGPT and I'm still trying to figure out how to use it in a way that actually helps me instead of just derails me and entertains me for hours at a time, is the process of actually teaching it and correcting it. I feel like in some places, in some ways, I'm doing a public service by correcting it about whatever it's talking about. What happens when humans, is this the disinformation thing when humans get in there and purposely feed information to LLMs that is incorrect or that is terrible, what happens then? Because I'm sure that especially at Google, and I know that you can't directly address some of these topics, but how do authors or how do we govern things like that?
Morgan Marie Quinn:
That's such a good question. I don't know if anyone has the answer right now. I'll say in general, there are definitely things LLMs are not good at or they are vulnerable to. If an LLM is getting a ton of information from, let's say sources on the internet, factuality or accuracy is a huge issue. Because the LLM can't always decipher what is a fact or a good source. It's just learning from a bunch of information. If that information is inaccurate, the LLM is going to spit out inaccurate information. It's like, how do you fact-check the internet? I think there's still a huge opportunity in that space to get the factuality part right. As far as people intentionally training the models to be inaccurate, I'm sure that happens. I think about it all the time, like how are people going to be using this technology for more nefarious purposes?
I will say though, for folks who are working on building LLMs, something to be prepared for is not so much that users are going to be intentionally feeding LLMs incorrect information. They will, some of them, intentionally provoke an LLM to see what kind of response it's going to give. Usually, those provocations are offensive. You have to think about what is the worst thing someone is going to say to this LLM? What is the most nefarious thing they're going to try to get out of it or content they're going to create or whatever it is. People will really abuse an LLM just to see how it responds.
Kristina Halvorson:
I actually have a question. This is not on the list of questions that we put together, but I do have questions.
Morgan Marie Quinn:
Oh, no.
Kristina Halvorson:
I know, now we're going off the rails. This is more like one of my typical interviews. All right. I have a question. If a platform like Instagram or what used to be Twitter, if they can spot offensive material or racist material or dangerous material or whatever, and they can shut it down, is that a thing that a large language model can do? Just refuse to just stop talking to somebody that's trying to provoke it or to feed to train it to be terrible?
Morgan Marie Quinn:
That capability for sure exists. I know that a lot of companies who are working on this technology have those guardrails in place. It's just that it is hard. I would say not impossible at all. Luckily, there are teams with this deep expertise who are only thinking about this problem, which is amazing. It's hard to predict just how terrible people might be, or hard to predict always how an LLM is going to respond, because the content that they're creating is organic. For sure there are ways to do a full stop on a conversation, but it's always a work in progress. You should never consider that work done. Like, "Oh, yeah. We checked that box, we're good to go." It's constant work.
Kristina Halvorson:
Let's then slide back over to the official list of questions and talk about the work itself. I wonder if you could, because I know LLMs aren't just built with me sitting at a desk with ChatGPT training it like, no, that's actually not how you make a cake. You don't add. I don't know. I didn't ever ask ChatGPT to tell me how to make cake.
Morgan Marie Quinn:
Do you correct ChatGPT when it gets stuff wrong?
Kristina Halvorson:
Sure. I do. For sure.
Morgan Marie Quinn:
That's helpful.
Kristina Halvorson:
You know what? I'm a responsible citizen of the internet. That's what I think that, that's important.
Morgan Marie Quinn:
Look at you.
Kristina Halvorson:
I do it for hours. No, I don't do it for hours. Talk to me a little bit who is involved with making an LLM?
Morgan Marie Quinn:
It's not just my team; other people I've talked to who are working on the same technology say the same. These teams are incredibly diverse. You'll definitely see that traditional triad of UX, PM, eng, that you would find on other product teams. There's also data folks, operations people, marketing folks, research, linguists. I'm probably leaving out so many roles. From my experience, I would say it's very much just an all-hands-on-deck type of situation and everyone is bringing something to the table in this environment. It's really rewarding for people who just love that highly collaborative but experimental type of environment.
Kristina Halvorson:
I actually didn't know that there were marketing people involved. Linguists make sense, but marketers and researchers? I'm having trouble getting my head wrapped around why it takes that many people. Because at a very, very basic level, it's like, okay, large language model, we need some people, or some bots or whatever, to go find a ton of content, whether it's on the internet or it's from our internal servers or it's from the mommy blog that we wrote for 20 years. A person needs to feed that into the system and then there needs to be somebody who is training it. What are all these other people doing?
Morgan Marie Quinn:
I can't say exactly what everyone is doing across different companies. First, to the point of, well, the content already exists: it is not necessarily just pulling content from the internet. I'm sure some companies are doing that; not every company is doing that. There are definitely implications with that. Not all content is just free to use however anyone wants to use it. There are just interesting tasks. I know you were surprised that there are marketing teams heavily involved. I think some of the more fun things to think about with that partnership are like when you go to any of the LLMs, whether it's ChatGPT or Bard or any other tool, and you go to those sites, they'll give you examples of prompts.
They'll help you get started, try this, try that, or whatever it is. Well, anything I would hope that is being marketed as a prompt, like a great experience for an LLM is something that's been heavily vetted. That prompt should work in real life, should be tested many, many, many times to make sure it's consistent and stable and gives you a great answer every time, should be relevant to your market or a real use case that people can wrap their minds around and hopefully, that's grounded in research. Everybody really, I think in these environments has to work very closely together in that way.
Kristina Halvorson:
What you're saying is you need to design an experience for your large language model. That's what you're saying to me.
Morgan Marie Quinn:
Oh, yeah. Sure.
Kristina Halvorson:
It's that simple. Again, it's such a complicated topic. I'm not going to lie to you, I ignored it for months. I was just like, I'm too old. I've worked too hard. Somebody else is going to need to figure this out. I'm going to just kick back and I'm going to just continue to talk about my website content strategy. Diving in, your brain does just want to do the straightforward thing like, this is just talking to a customer service bot that I have done a million times and that I've hated doing. This is just more fun and it's more mysterious and thinking about the entire machine that goes behind the LLM is just really, really mind-blowing. Go ahead.
Morgan Marie Quinn:
One of the harder transitions, I will say, going from a more traditional content designer UX background role into this work is wrapping your mind around the ambiguity or lack of control you have over the output. We're so used to working on a UI where we control the strings or we're working on a chatbot that is in a flow that has a lot of logic in it, and we know exactly what the chatbot is going to say to the customer depending on what the customer is trying to do. This isn't that. We don't control the output. We are crafting an experience, like you said, and trying to train this LLM to behave or perform in this certain way, but give it space to do its own thing at the same time.
Kristina Halvorson:
In other words, that's good parenting. You just described good parenting.
Morgan Marie Quinn:
It really is like raising up this baby LLM.
Kristina Halvorson:
Bard is just a little baby. That's really all it is.
Morgan Marie Quinn:
I think Bard is maybe a young adult.
Kristina Halvorson:
When I first met you, one of the things I remember you saying is, I need more content designers. I already have so many content designers on my team, I need more and more and more. Can you talk to me a little bit about, and we go back and forth on the podcast sometimes about the role of a content designer, and just to level set for listeners, that is someone who is working with content within apps and services, versus a website content strategist, who is somebody wrangling content across websites or working closely with UX folks like IA and research and so on. Not that content designers don't do that. I just wanted to clarify that content designers are folks who are specifically working within digital apps and services. Talk to me about content strategists, content designers, what are our superpowers when it comes to diving into these teams that are training and building these LLMs?
Morgan Marie Quinn:
It's a really good question. I think what makes content designers really strong in this space, there are a few things, but first of all, is our ability to really understand the needs of users and what kind of content will deliver on those needs. Especially with LLMs, we need content that's clear, understandable, and conversational. That is really one of our big superpowers. It's grounded in user needs. LLMs need a lot of that kind of content, clear, understandable, conversational, to learn from and embody those traits. Because ultimately, that is really what makes or breaks an experience when you're using an LLM. We see it even now with a lot of AI-generated content. It's wordy, it might be too technical, it's nonsensical or robotic. All those things can be improved by content design.
I think also content designers are just natural dot connectors. We tend to work across a ton of different projects. I know we don't like that and we feel spread thin, but it does give us an advantage here. We're really good at scaling solutions because we are so resource constrained. We have those pain points as a community, but it really gives us a lot of strength in the LLM world. A lot of the work is seeing these word problems or common themes across the work. Because of that unique lens that we bring of being dot connectors, we're really good systems thinkers.
LLMs are basically giant word systems and they need a systematic approach to developing training content. Then I think if I had to say another superpower, content designers and content strategists, however we want to define everybody, we're really creative folks. As technical as this space is, it is a great one to flex that skill. LLMs are in dire need of personality and conversationality. Right now, I think that's a huge opportunity for us to be differentiators not only in our role but create products that are differentiated. The only way that LLMs are ever going to be interesting, conversational, is if they get the kind of content that trains them to do that. It's a huge opportunity for us as well.
Kristina Halvorson:
What I'm hearing from clients right now is they're coming to us and they're like, you helped us get resourced. Now we have this robust content strategy practice and we're working on enterprise content strategy, and now we've got executive leadership poking their heads in: hey, you're the content people. What are we going to do about this AI thing? My folks are just like, "Why do we need what?" How would you propose, does every organization need an LLM? Is that a thing that everybody should be looking into just because it's a thing? Do people need to think about protecting their content? How do you begin to get your head wrapped around what you should even be thinking about when it comes to this topic for your own organization?
Morgan Marie Quinn:
That's such an important and big question. I would just say ultimately, I don't know. This is a very novel technology right now and the exact applications, especially practical applications, aren't totally clear. It really is, we're on the forefront of this. I think that you're not going to find this one use case fits all situation. It's really dependent on someone's business, their user needs and what their own constraints are. I think customer facing experiences like customer service or help centers, you mentioned that earlier, are very obvious applications. There's just immediate opportunity to streamline those experiences, make them more helpful.
Usually, companies have a lot of existing content that could potentially be used to train an LLM. We can wrap our minds around that one. If you are getting pulled into these conversations at work and you don't know anything, know that most people don't know that much either, which is totally fine; we're all learning. I do think, from what I have learned over the last year or so, there are some important questions to just keep asking. I still ask these all the time, every day, in my own work. First, someone is pulling you into a conversation of like, how are we going to use LLMs? You're a content person, what do we do? First, it's like, well, what customer problem or unmet need are you trying to solve with an LLM? What exists, what's grounded in research? Hopefully. Then what is the ideal LLM experience that you want to deliver?
If there's this customer problem or need, and an LLM is going to help solve it, what is the ideal experience for that? Then I would think, what content do we need to train the LLM to do that? Because it's not going to know how to do it on its own. Where is that content coming from? How much do you need? This is not the type of thing that one person can do on their own, or even a small team can do on their own. LLMs need a ton of content. Where it's coming from, how much you need, and is it usable are very important to ask. Then how do you know that that training is working? How do you know that LLM is getting better? What does better look like? What does quality look like for you, delivering on that ultimate experience? Then finally, what safety guardrails do you need to have in place if that LLM does misbehave? Those are just some basic questions. One out of a million, or a few out of a million.
Kristina Halvorson:
Well, and for what it's worth, they're very basic, brilliant questions that I'm sure 90% of companies are not actually asking right now. I think what I'm seeing and what I'm hearing is that folks are under pressure to figure out what we can do with this. Not taking a step back and saying, what business problems or customer problems have we been wrangling with or currently exist where this might be a useful application of this technology? It's, we have the technology. Let's use that. Let's start with that. We're going to use it, but how? That's everything though, right?
Morgan Marie Quinn:
Totally. It's not that different than working in a product environment anyway. I do think the questions I laid out are very much ideal, and it's fair to say, we actually don't know what problem this is going to solve. We just want to see if we can actually use it for something. That's fair too. The question I ask every day, more often than not, comes up because people will say, "Well, we want the LLM to do this. We need it to do this." I know that's a conversation that's happening across a ton of companies. We need an LLM to do this. Where is the content coming from? That is the first question I ask all the time. Because the LLM is not going to do that unless you have content that's going to teach it how to do that.
Kristina Halvorson:
Well, you know what's interesting about that is that organizations that have, let's go back to your comment about support content, organizations that are like, we've had this problem of support content that lives across 80 different platforms and now we can just feed it all into the LLM. Then it'll magically sort itself out and structure itself. Very basic problems that content strategy and enterprise content strategies seek to solve in the first place, which is when was the last time you looked at that content? Is it relevant? Is it accurate? Is it timely? Who owns it? What happens when it comes out? All of those processes.
Morgan Marie Quinn:
Is it tagged?
Kristina Halvorson:
Exactly. The substance of it. If it is tagged, by what logic and who did it? All of those very basic questions. Well, not necessarily basic.
Morgan Marie Quinn:
Or even, are we allowed to use it?
Kristina Halvorson:
Yes. Exactly.
Morgan Marie Quinn:
What privacy issues are there?
Kristina Halvorson:
All of that. Those are content strategy questions. What is so scary is that there are still so many organizations where content strategy is a function where there's still order takers, which, I can't believe, 30 years after the internet was commercialized, I am still saying those words. Yet, that is the case. If ever there were a case for organizations to get their acts together and figure out their enterprise content strategy and their content ops, this is it. If you're really going to start leveraging, I said it, leverage, leveraging your content as a real business asset, you better get your acts together before you even start thinking about large language models. Now, how many organizations will do that? None. Two.
Morgan Marie Quinn:
I know. Well, the information architects of the world unite. This is your moment.
Kristina Halvorson:
No kidding. Same with technical writers and tech comm folks. I've been banging my fist on the desk for years saying, "Don't forget about the writers, don't forget about the tech." They're just the unsung heroes of our work. Now they're just like, "You're calling me now? Now I charge five times as much." All of you should charge five times as much.
Morgan Marie Quinn:
As they should. Exactly.
Kristina Halvorson:
That's correct. Here is, I'm sure the question that you're going to get asked by people at Thanksgiving dinner. What keeps you up at night about AI? What worries you? I mean, of course, I always race ahead to like, and then robots took over the world and there was an uprising. I don't want to hear about the robot uprising. What actually concerns you?
Morgan Marie Quinn:
Well, for what it's worth, Kristina, I will put in a good word for you with the robots when they do take over.
Kristina Halvorson:
That's fine. Why? I see. Because you're already aligning. You're aligning in advance.
Morgan Marie Quinn:
I don't know. You're in with ChatGPT. You're already training it. I'm sure you're going to be fine.
Kristina Halvorson:
That's right. They've already put me on the nice list and not on the naughty list. All right, good. We're set.
Morgan Marie Quinn:
There are things I suppose that keep me up at night or I worry about. Just like on the human level, I think a lot about what we don't know. I don't know what I don't know. I think a lot of just like, what can't we see right now that will seem so obvious once it makes itself known? Luckily, I'm not the only person that thinks about this and there are really smart, brilliant people thinking about it. I do just wonder all the time like, "Oh, my gosh. What are we going to go, oh, of course. Of course, that happened." What are those negative implications we just can't predict, or what are those nefarious players in this space doing? Like a lot of other people, I worry about the spread of disinformation, especially around election season. Just worried about that.
I do think the technology will open up opportunities and just a world of information to people who might not have had access to it before. I do wonder who will be left out or negatively impacted and how can we mitigate that? I don't have answers to any of this. They're just the things I think about. Honestly, I just professionally and even just for my own team, I think a lot about the content designers working in this space. It's very challenging. It's very ambiguous and we're on this new frontier doing work that doesn't really match our job description. That job description was already hard to define in the first place. That definitely doesn't get better when you're in this space. Ultimately, I do feel hopeful this will unlock new career opportunities for content folks. It's just not clear right now what that path looks like.
Kristina Halvorson:
Maybe it will be in three months or six months or a year. We don't know.
Morgan Marie Quinn:
What is time in this space?
Kristina Halvorson:
I know. We say that all the time. I truly mean that though. The title of prompt engineer didn't really exist in any kind of meaningful way three months ago. Now all of a sudden, it's just proliferating so quickly. Did I use that word right, proliferate?
Morgan Marie Quinn:
I think you used the-
Kristina Halvorson:
Sometimes words fly out of my mouth that I've never actually said before and I've only read them and they come out of my mouth and I'm just like, that's a thing that happens to me on the regular.
Morgan Marie Quinn:
For a person who works in words, I am often not great with words. If it makes you feel any better.
Kristina Halvorson:
That makes me feel better. It makes our listeners feel better, I'm sure. This is my last question. Actually, it's my second last question. What is exciting to you about AI? When you think about what is fun or what is helpful or what is hopeful, what is exciting?
Morgan Marie Quinn:
Yes. Okay. There's a lot to be excited about too. I do get really excited just about what AI teams in general are doing to improve healthcare. A lot of this technology is being used to identify illnesses sooner or more accurately or give providers tools to help them deliver better healthcare, safer healthcare to people. I get very, very excited about that. I also get really excited, like I mentioned a little bit earlier, about how LLMs might be able to serve as a tool to bridge the gap for people who maybe have learning differences or skills gaps, accessibility needs, or other disparities, and how that can create a more equitable world.
Maybe I'm looking at things through rose-colored lenses. I think a ton about how we might be able to unlock new opportunities for people. Then, man, I just like geeking out on all the weird inspiring and just creative content people are coming up with. I don't know if you saw them, Kristina, but recently there were these AI generated images floating around the internet depicting Freddie Mercury performing at a modern-day Pride parade. They took my breath away. He looked amazing. He had this perfectly gray mustache. He was super fit and healthy. He was in a white outfit commanding the stage at this Pride parade. It broke my heart and just was awe-inspiring all at the same time. I was like, "Oh, I never imagined creating something like that." It was so cool.
Kristina Halvorson:
I think that that is amazing. This is not as amazing, but it's fun. I saw a compilation going around that was the Beatles as children. It was so cute and so precious and everybody's like, "My God, where did they find, wait, why are those backgrounds so similar?"
Morgan Marie Quinn:
Why are their hands weird?
Kristina Halvorson:
That's right. Exactly. No.
Morgan Marie Quinn:
Did you see the hipster presidents doing that?
Kristina Halvorson:
No.
Morgan Marie Quinn:
That one killed me. Joe Biden with the mullet. It was awesome. It was so funny.
Kristina Halvorson:
There are people out there using technology for good. I know it. We see it, it's delightful. It is seeing people be creative with it certainly, that is a thing that will get me up in the morning for a while too. Last question then. Where can somebody go to learn about this? To your point, everybody is just trying to figure it out and we're still trying to sift through everybody's hot takes. It's complicated and it's evolving and it's getting ahead of us. Where can people go to at least ground themselves in the things that you have learned, to date?
Morgan Marie Quinn:
Two of my favorite sources are, first, Hard Fork, which is a really great podcast. It's from the New York Times and they discuss a lot of future tech topics. They're doing a lot of work around AI and LLMs right now. They actually have some episodes featuring Google DeepMind's CEO and Google's CEO. I shared those around my team and we all discussed them. Because even though we work at the company, we still learned a ton of new things. Even hearing about challenges that say the DeepMind team is facing just helped put our own work into perspective. It was super validating. Those, I really found valuable. Then for more of just general takes or just understanding trends or points of view, I really love the Pivot podcast, and that's hosted by Kara Swisher, who I adore, and Scott Galloway. They cover tech and business in general, but they do dig a lot into AI. I just love hearing their perspectives on it, especially because they aren't always aligned. Just always great to hear diverse perspectives.
Kristina Halvorson:
Thank you so much for that. Thank you so much for joining me today on the Content Strategy Podcast. You are doing such exciting great work and your humility around it is just mind-blowing. If I were in your shoes, I would just be on the mountaintops yelling about... Wait, you probably can't do that because of confidentiality.
Morgan Marie Quinn:
Darn.
Kristina Halvorson:
That is why I am self-employed.
Morgan Marie Quinn:
Well, I just thank you so much for giving me the time and space to talk about this. It was quite an honor.
Kristina Halvorson:
We'll leave it there. Yay.
Thanks so much for joining me for this week’s episode of the Content Strategy Podcast. Our podcast is brought to you by Brain Traffic, a content strategy services and events company. It’s produced by Robert Mills with editing from Bare Value. Our transcripts are from Rev.com. You can find all of our episodes at contentstrategy.com and you can learn more about Brain Traffic at braintraffic.com. See you soon.
The Content Strategy Podcast is a show for people who care about content. Join host Kristina Halvorson and guests for a show dedicated to the practice (and occasional art form) of content strategy. Listen in as they discuss hot topics in digital content and share their expert insight on making content work. Brought to you by Brain Traffic, the world’s leading content strategy agency.
Follow @BrainTraffic and @halvorson on Twitter for new episode releases.