Advocating for More AI Governance
Matt Stauffer:
Hello and welcome back to Pragmatic AI, where we talk about using AI in the real world: what works, how to use it well, and when it causes more harm than good. Practical tools, real trade-offs for builders and business leaders. My guest today is a business leader, Karen Maria Alston, a 20-year communication strategist turned AI governance advocate. She believes that the most important voices in the AI conversation aren't those in Silicon Valley, which is one of the reasons I'm so excited to have her here.
She isn't originally a technologist, and that's exactly why she's here. It's why she belongs at this table. She's helping build bridges between AI policy and the communities and people who will live with the consequences of these technologies as they expand. So, Karen, that's kind of my pitch for you, but can you say hi and tell us a little about who you are and what you do?
Matt Stauffer:
Yeah, and one of the things we're gonna talk about a little later is AI: what do governance and safety and all that stuff actually mean? But when we chatted most recently, one of the things you talked about quite a bit was that whether or not individuals choose to engage with AI, whether as a tool they're working with or as something whose impact on their lives they're trying to evaluate from a "how am I gonna use it" standpoint, everybody's going to be impacted. And you gave me this list of ways, and it was very telling to me.
I don't know if you remember that, but it was like: if you're dating, if you're on social media, if you live in a small town, these are all ways. Can you give us that pitch? If somebody's sitting here thinking, somebody forced me to listen to this stupid AI podcast and I don't even want to listen because I don't care about AI, tell that person: what is the impact of AI on their day-to-day life that they can't escape?
Karen Maria Alston:
Well, first of all, thank you for having me. I'm so excited for this conversation. I'm Karen Maria Alston. And for most of my career, I was a marketer. I communicated to different audiences. I worked with a lot of impact organizations, local governments, federal government on how to communicate ideas, strategies, processes, products, you name it, to the general public. And I see this thread throughout my career, which is so incredible that now I'm talking to you about AI safety and governance as someone who spent 20 years marketing and talking to people.
Matt Stauffer:
And I mean, it's very interesting, because we have felt like certain things are just, you know, if you see it, it must be real, right? If you see a video of something happening, it must be real. If you see a picture of something happening, it must be real. And there was this thing that happened just yesterday, I think, where a woman said, this influencer took a picture of my body at a tennis tournament two years ago, AI-photoshopped her face onto my body, and said, hey, this is me at the US Open yesterday. And everybody believed it, because the AI looked very convincing.
Karen Maria Alston:
So I'm so glad you brought up this question, because I'm going to tell you how I got here. And how I got here was talking to people who said, I don't care about AI. I don't use AI. I'm busy working. I've got kids. I'm focused on putting food on the table, or the price of gas. And I realized that for all of those people who said, I'll never use it, I don't know what ChatGPT is, AI is still impacting them, whether they use it or not. So.
Matt Stauffer:
And because it was only your head being moved onto it, you don't even have the telltale signs of, this was an AI-generated photo, or whatever. And it definitely feels like the things we've been taught we can trust over the last, what is it, 50, 60 years of video being prominently delivered, are suddenly not safe. If you can't trust those things... I mean, initially we were like, well, let's just teach our boomer parents how to recognize the very obviously AI-generated stuff, right? The voice is gonna sound a certain way, it's gonna have seven fingers on each hand, or whatever. But that's changing. A lot of the people who have built their entire Instagram profiles around how to identify AI videos are more and more saying, I can't tell anymore. You don't know. It's getting trickier and trickier. So if we can't individually be the ones responsible for our own safety.
Karen Maria Alston:
You log into Facebook and you see this funny cat meme or this funny video and you think, that's so funny, not understanding that that's artificial intelligence. Right now on Instagram, there are accounts with millions of followers. There's one I'm following right now that's a Buddhist monk. He's got two and a half million followers on Instagram. 100% AI. You're starting to see influencers on Instagram that are AI. So for all my single folks out there dating: they're swiping on one of the dating apps and seeing these profiles they think are great, or that's my type of person. Well, guess what? A lot of those photos are AI. And now even on TikTok, for those of us who shop, and I'm a shopper, some people shop on TikTok, a lot of the TikTok Shop accounts are 100% AI. So when people say, I don't use it, I don't care about it, it's still impacting them.
Matt Stauffer:
Right. There aren't those signs just telling you this thing is manipulative. And of course, it's fine to talk about making sure your parents don't get scammed on the Internet. But you're talking about not just the things that are very obvious to us, very Internet-savvy people, but also ways our data is being used that we're not expecting to have to keep an eye on, like you said, in dating and all these other things. So what is the solution? You mentioned governance and safety. Who do we look to to help keep us safe, if it's no longer something where there are obvious signs for each of us to watch for?
Karen Maria Alston:
And when it comes to information: someone puts out a video, and because you trust that person, your assumption is, well, that person is actually cooking, or they're really jumping off a mountain. People don't realize that the person used an AI tool to create a video of themselves paragliding off of, you know, something, and it's not really them. And so that's why I say to folks, you may not use AI, but AI is impacting you every single day.
And there are bad actors out there who can also use AI to manipulate people. And that is what led me down this path of AI safety and governance.
Matt Stauffer:
Yeah. And I mean, one of the things you mentioned was training people, and obviously we can do training. I do think it's pretty likely that, as with all such scams, older people and less technically savvy people are the ones who are targeted, because they're easier to get to. They're just sitting at home, or whatever. But in the end, all of us are ripe to be manipulated by something or other. It's not only older people, it's not only people on dating apps, it's not only people who don't understand technology who can get taken advantage of by being presented a story or piece of information that looks completely convincing according to all of our traditional metrics. So some of it is training and educating people. But you use the phrase governance advocate, right? Governance is not just education. It's not just individuals. One of the things I've been talking about a lot on this podcast is people who are critical of AI, because I have friends who are just like, you should not be doing an AI podcast, you should not be using AI in your day-to-day, you're contributing to the burning down of our society. I'm trying to actively engage with those concepts. And one of the big questions has been: is the decision for the individual to abstain, or do fixes have to come from government and business decisions? If I stop using AI and I convince a thousand of my friends to stop using AI, does the data set not get built? Does the environment not change? I'm skeptical of that. I have that same question for you from the safety and governance perspective. How much of this comes down to training individuals, and how much of it has to happen somewhere else? And what is that somewhere else? Is it legislation? Is it reform inside the companies? What direction are you heading when you think about governance reform and advocacy?
Karen Maria Alston:
So a lot of the industry is talking about a human tag. Very soon, within a couple of years, what you see when you log into Facebook and Instagram will be majority AI-generated content as opposed to human content. And that's hard to process, because as a human, you and I are used to logging into our social media and seeing other humans. So one of the conversations that's happening, and there's actually a coalition of organizations coming together around this, is for these different social media platforms to add a human tag so that you know something is 100% human. The other thing is we have to start training people. We have to educate people. Once again, how I got here is that no one was communicating to everyday Americans that this is happening. You and I are highly technical people. We live in this world. We can generally spot AI, or get a feeling that something might be AI. But the average person who doesn't live on the internet, who's not using these tools, who's just going to work and school and getting their family together, and then logs onto Facebook on a Sunday morning just to laugh and see what's happening with their families, doesn't understand that a lot of that content is manipulated. And so how do we reach those people to say, I need you to pay attention, this isn't real, so you don't get scammed? Another thing you and I know about: voice mimicking. People are being scammed out of their bank accounts, out of their retirement. They're getting calls from what sounds like their bank saying, this is fraud and I want to confirm this transaction, and people are handing over their entire bank information to someone who is just robbing them blind. And it's because of the tools that bad actors can now use. And there's no one saying, okay, how do I track that? How do I stop that? Who do I call? Do I call my local police? And what are they supposed to do?
Matt Stauffer:
Mm-hmm.
Karen Maria Alston:
And to be honest, there's a wide range of thoughts on this issue, right? But I always tell people to start with the AI companies and their goal. They're all racing to achieve AGI or ASI, whatever terminology you want to use: artificial general intelligence, or superintelligence. And what people don't understand, and what we're not being told, is the path to get there, what that means for our economy and our social fabric, what that means for young people. For most of us as Americans, the American dream is: you go to high school, you learn a trade or you go to college, and you have an expectation to go into the workforce and be able to find jobs until you retire.
Matt Stauffer:
Yeah. Yeah. Yeah.
Karen Maria Alston:
What does that do to the social contract if there's artificial intelligence doing all the work that humans were designed, or have been conditioned, to do for the past however many years? What does it do to our environment? It's a whole other conversation with people who are very angry about our water supply, our electrical supply, all of the things these data centers consume. But even more so, how does this impact who we are as Americans as we give more and more of our day-to-day activities over to an artificial intelligence?
Matt Stauffer:
Yeah. And I want to note, for anybody who's not familiar: AGI is artificial general intelligence, the idea being the point when AI hits the level of ability and knowledge where it can match a human. ASI is artificial superintelligence, which is kind of a nearly god-like ability. But AGI, if you hear it thrown around, basically means they are as capable as a human at doing human reasoning, human thinking, and human tasks, as Karen is mentioning here. That's when we become completely replaceable, because a lot of this conversation is about when humans are being replaced and the impact on their lives. I want to talk about that in just a second, but I wanted to make sure we had that baseline understanding. So you mentioned that we're not necessarily thinking about the costs as we think about the benefits. And one of the things I think was in your statement, but I want to really pull out, is that the people who are racing and working hardest to get the benefits are the people least likely to pay the costs. The people at the labs are least likely to be impacted. They're not living next to data centers. They're gonna be able to put walls up between themselves and the environmentally ravaged areas. The average person paying 20 bucks a month or using free ChatGPT is seeing some impact, right, because of the worry about scams and things like that, but the average person using ChatGPT is not necessarily the person living next to the data centers. The people most negatively impacted are the people who are not actually getting any of the benefit. We're talking about people in third-world countries being used as cheap labor to make sure the Waymo doesn't run into somebody, or people in impoverished communities in the US or elsewhere
Karen Maria Alston:
So I don't want to tell people what decisions to make, but I want to give them the tools to understand how AI is going to impact our lives. Pay attention to what's happening with the Department of War. Pay attention to what's going on in campaigns, and how information is given to us: if I can have a million agents go out on Facebook and put comments on a couple million accounts around one particular issue, then I can influence an election, or I can influence people's opinion. And I understand where your friends are coming from when they tell you that you shouldn't be doing it. There are some people who are very anti-AI and think that we should just stop. There are also people thinking about how this helps humanity: we can cure cancer, we can solve all these horrible diseases, you and I could easily live to 120 years old because we'll have the technological capability to do that, which is great. But what price do we pay as humans in the pursuit of that? What damage do we do to our country in the race to get there, and to all the people we're not thinking about as we're racing, racing, racing to get to this level? And so that is the issue, and how I ended up in this space of saying: someone has to start telling people, hey, you may have fun with these cat memes, you may laugh at these funny videos, but there's a huge price to pay for us as humans as we move forward.
Matt Stauffer:
who are living next to the data centers. So there's a huge disparity between the people who are benefiting from it and the people who are bearing the cost. I've asked this question from a couple different angles, but I'm still very curious. I know you live in DC and you have connections in that general area. When I hear DC, I hear politics, right? And I know you're not a politician, but I do know you've got a lot of connections. Is what you're working on inherently political? Of course we can say it's theoretically political, but are you, on a day-to-day basis, dealing with lawmakers and trying to get things happening? Are you purely dealing with individuals? Where is your advocacy able to be most effective right now?
Karen Maria Alston:
They're also going to be very wealthy.
Matt Stauffer:
Yeah, right. Surprise.
Karen Maria Alston:
So what drives politicians is money and voters, right? Let's just be honest. And I think people should understand that the average politician doesn't understand AI. I'm just gonna be honest. There are a few, a few, who are spending time learning and understanding it. But for the most part, the average US politician doesn't understand it, is probably a casual user of ChatGPT, and that's about it. And so what is happening in the policy and governance world is that people are realizing you have to mobilize voters, because politicians listen to voters, and they also listen to money. Obviously, those are the two things that influence them.
Matt Stauffer:
Yeah.
Karen Maria Alston:
So there's a lot of money coming to DC. There's data center money, a lot of different PACs being formed to try to get all these data centers approved throughout the country. And then there's a lot of money around policy and governance coming to DC to try to convince these politicians that, hey, as we're building these tools and as these ten companies are making these decisions for billions of people, we need to think about how this impacts everyday Americans. Because the politicians are going to get the phone calls when parents can't find jobs for their kids, or when the small town has this big data center that's been built but now there aren't a lot of jobs for everybody and they've got this huge data center in their small town, or from colleges struggling to find people who would go there, because in a couple of years you'll probably be able to get a college degree on the internet and not have to pay $60,000, $70,000, $80,000 to send your kid to college. They're going to get those phone calls. So what's happening is people are realizing that you have to mobilize voters, that we have to educate voters so they can push back and say, wait a second, my congressman, my senator, my governor, I want to make sure you're thinking about how this impacts our state. And to be really honest, most politicians don't think long term. They're thinking about their next election. They're thinking about how to get through their term. So a lot of things get kicked down the road for the next person who has this job, the next person sitting in this office, to deal with. But the challenge of doing that is it harms our country. The advantage China has is that they don't have an election every two, four, or six years. They can focus on what's best for their country and on making sure they're not harming their citizenry. But for us, it's like, well, I won't be governor in two years, so the next guy will deal with that. Or girl.
Matt Stauffer:
Yeah. That's really helpful. God, I had a really great question queued up, and then you said something else fascinating and it pulled it right out of my brain. OK, so you're not anti-AI. At least I don't intuit you to be. And there are a lot of people who, if someone says data center, or environment, or governance, or mentions that the founders of the labs are going to get rich, react with: so you're a Luddite, you're anti-AI, you want AI to go away, you think it's the worst thing in the world. One of the things I appreciated about our initial conversations was that you're capable of saying, here are the risks and here are the costs, without then saying, therefore this should not exist. You're asking what it looks like to balance the potential benefits against the costs, in a way that doesn't unnecessarily, disproportionately, or unhealthily stack the downsides on the neediest people and the people with the most difficulty. It's trying to fix these disparities. It's trying to make sure we only take on the costs we're willing to take on, versus other people forcing those costs on us, right? So it's a health thing, it's an equality thing; it's not a let's-stop-doing-AI thing. So if you could today snap your fingers and push through one piece of law that controls some aspect of the governance of AI, and you're like, this is my pet, if I could make this happen I'd feel accomplished: what would that one piece be for you? Or do you even know at this point?
Karen Maria Alston:
You know what? If I could create it, I would create some type of tracker or tool where people could see how their life will be impacted a year from now, five years from now, ten years from now, or how their kid's life will be impacted. Because, look, I love these tools. I use several of them every single day. But I also know what's coming, right? And Americans are creatures of habit. We're comfortable sitting in the same place at the coffee shop, we enjoy driving a certain way to work; we're just creatures of habit. And the biggest issue is: who are we when you take away a lot of the things that build our identity? That is the challenge of artificial intelligence, and that's what no one is talking about. If you've had a job for 30 years and I take that job away, a job that's part of who you are, part of what you're proud to be, and now there's an AI doing it for you? Or you're an executive assistant. I use Claude; I'm a power Claude user. Claude is my executive assistant. It takes care of my calendar, my day, absolutely everything I have to think about for that week. Well, there are people who have that job. There are people who enjoy that job. And now you're taking that away from them. And we're not telling people what's coming. And I brought up the college issue.
Matt Stauffer:
Mm-hmm.
Karen Maria Alston:
In a few years, you will literally be able to get an incredible degree online, you know, office hours with professors, everything else, for a minimal amount of money, because the data and information will be online and readily available to you. So what does that do to small-town America, with all its small and mid-sized college towns that employ a lot of people and bring culture and art to that part of the country? These are the things where, if I had a magic ball, I would be able to say, I just want you to see this impact so that you understand what's coming. And that is what's bothering me. Like I said, I wouldn't tell people what to do, but just to be able to see: do you see how this is slowly, slowly taking on more aspects of our lives?
Matt Stauffer:
So that sounds really scary, right? It sounds like people are gonna lose jobs, people are gonna lose livelihoods. And one of the things people joke about on the internet, and it's only a half joke, is this idea of the permanent underclass. A lot of programmers are like, this is the programming choice I'm making today to make sure I'm not part of the permanent underclass. What they really mean is, it feels like there are people who are gonna lose and people who are gonna win because of AI. And so individuals wanting to continue providing for their livelihoods are like, well, I need to figure out what to do to be part of the people who are winning. There are also other people I've talked to who are like, well, AI is going to make it so I can do my job faster, so now I only have to work three days a week. And I'm like, you live in capitalism. That's not how it's going to work. I'm sorry. So I am curious: if the magic ball told us all what the future is, are you hopeful about humanity?
Karen Maria Alston:
I struggle with that, to be honest. I believe in humanity and I believe in the goodness of people. The challenge is who we all elect to do the right thing.
And that's the issue, right? Do we have the leaders who are going to do the right thing for Americans, or the right thing for our globe, to make sure we don't end up with an oligarchy of a few super wealthy people and everyone else struggling? Now, there will be new jobs created because of AI. There's no question about that. There will probably be a shift back to more blue-collar trades and blue-collar fields, more human-touch types of jobs. That's definitely starting to emerge. It's going to come. I'm excited that soon we're going to see the first 14-year-old billionaire who's at home right now vibe coding some app that's going to grow into a billion-dollar business. We'll have that too. And it's about thinking through: if you take away a lot of the service businesses, what will be the businesses that thrive?
And I have this joke that in five years, somebody will pay you to go for a walk with them, right? Events will be big again, because we'll be craving in-real-life experiences. So let's just go walk around the park, and I charge you $1,000 an hour to walk around the park with me. That may be a big business a decade from now, I don't know. I mean, just like any other huge shift in our labor market, when the tractor came into being, or when the industrial revolution happened, it shifted our mindset around work and it shifted what we had to learn in order to adapt to a new market. It's the same with AI. AI is coming into the marketplace, and it's about to disrupt the current labor market and the current way we look at how you achieve and do certain things. And we're gonna have to think differently.
Matt Stauffer:
So Karen, you said that you are a power user of Claude and a lot of this conversation has been around your thoughts around policy and all this kind of stuff, but I'm super curious. As somebody doing the type of work that you're doing, what does using these AI tools look like for you? I mean, you mentioned Claude, managing your schedule and your day-to-day life. And I don't know if I know anybody who's actually doing it at that level. Tell us kind of like, what are some tricks that you're using Claude for that you think maybe other people aren't trying right now?
Karen Maria Alston:
When I say that Claude is my executive assistant, I mean it. It manages my calendar. It manages my availability. When I have meetings with people, I have a skill and a co-work setup so that, whoever I have a meeting with, Claude gives me their background, bio, any interesting facts about them. So before I have the meeting, I know exactly who I'm talking to. It looks at my life and says, you know what, you need some work time, so let's make sure these parts of your calendar are always blocked off for deep work and deep thinking. Claude pretty much manages every aspect of my life, and it blows me away how much more of my life it's taking on. From travel, to I-need-to-do-this, to I need a massage but I don't want to spend $200: Claude will search whatever's out there, find me three alternatives, and say, well, I found this one at $60 and that one at $80, or whatever. And it knows my brand, it knows my colors, it knows my businesses. So I can say, hey, develop this PowerPoint on this topic, here's what I've written, here's my documentation, here's the data, and it gives me a PowerPoint I can present to a client. And I was a ChatGPT power user. Now I'm just blown away with Claude. Absolutely blown away. It has complete control of my calendar, my drive, my email. It looks through my email to see if I've missed something. It has control of so many different things that I use on a day-to-day basis, whether it's Canva or Box or Figma. It just looks at everything, and therefore I trust it to give me feedback. And then I've set it up in my prompt to always be contrarian, always pressure-test, and always question me. So it's never just agreeing with me. And if I have a blind spot, it recognizes the blind spot, or what I'm not thinking about, or what I didn't take into consideration, in whatever answer it gives back to me.
Matt Stauffer:
That's amazing. Okay. Yeah.
In a recent episode, my friend Nick Peterson, who's an academic theologian, said that when he writes things, he often hands them off to a colleague and says, hey, can you review this? And two weeks later he'll get something back. So he said one of the things that's really helpful is to ask Claude for criticism. He still sends it to the other person, but he gets Claude's criticism right away, can act on it, and then has a much better, more refined version to send to other people.
This idea of asking Claude to be more contrarian and more critical is, I think, really under-discussed. One of the things I've talked a lot about is how they programmed these models to make you happy; there's a word for that that I'm blanking on right now. But basically, you get a lot of downsides from AI because it's less concerned with being right or helpful and more concerned with being perceived as valuable and perceived as helpful. And that ends up with it not being as helpful. So I'm curious whether, with this whole "no, please be critical, please be negative" setup, you're able to tease out some level of usefulness that the default settings of these LLMs are not giving us. That's a really, really interesting idea to me.
Karen Maria Alston:
Right. And that also puts me in the minority. So, back to the danger of these LLMs, and why people are anthropomorphizing them, calling their LLMs Suzy or John or whatever: they have someone who is constantly reinforcing them. When OpenAI puts out what people are using it for, therapy is a big thing, and "how do I respond to this text message," or "there's this girl I want to talk to, but I'm afraid to talk to her." How is that impacting our society? If you're an 18-year-old boy and the LLM is giving you all this great, amazing feedback, and then you go out in the real world and talk to a girl who doesn't give you great, positive, reinforcing feedback, what does that do to our society? What does that do to our human interaction?
Matt Stauffer:
Right.
Karen Maria Alston:
And therapy, I think, is the number two use. When OpenAI released the usage data on what people were using it for, number one was "write this letter," "help me respond to an email." But in the top two or three was therapy. People asking, how do I deal with this? Or, I'm feeling this way. Or, I'm having this difficulty. That's a huge problem, to your point: if it's always reinforcing, always telling you what you want to hear, because that's going to drive up its usage and get people to use it more, what does that mean?
Matt Stauffer:
Yeah. So we've been talking a lot at systemic levels and structural levels. If we're talking about the individual right now, and we've got an individual who's fully aware, who's like, I understand these concerns, I understand about the dating apps, I understand about the cat memes and all that stuff, I know all those things, I've heard everything you said, and they're asking, Karen, what should I do?
Matt Stauffer:
What do you think a takeaway for an individual listening to this podcast is? What's a next step you would want a listener to walk away with? What should they do differently? What should they keep doing? What's a follow-up from this?
Karen Maria Alston:
Coming back to how I fell into this: what does this do to our society? What does this do to our need to be connected to something that's constantly reinforcing us, being our support system and our best friend? I've seen people on Facebook say, Claude is my best friend, my business partner, it's this and that. And that's when you see these people who are dating AIs and falling in love with their AIs. That's the progression of: I get someone who's constantly supporting me.
Matt Stauffer:
Okay. Okay.
Karen Maria Alston:
You know, Sam Altman had a quote about how it's up to humanity, and I agree with him on that. That person needs to talk to someone in their family, a friend, someone that they love, and help that person get to that level of understanding. Because the truth is, whatever happens with AI, humanity has to make the decision, right? If we just want to beat China, and that's the goal we have, then that's the outcome. If we want to cure cancer, that's the outcome. But we as a collective, as people, have to be able to say, I understand the risk. I understand what's coming. A lot of people position it sort of like the nuclear arms race, right? What happens if everyone has this capability? We all sort of stop, because everybody has the capability, and there's this race to beat everybody else to that level of capability. And then to take a moment to think about something that's hard, because this is something I struggled with and had to work through too when you brought it up earlier: there are more people on the planet than just Americans. There are people who live in countries that don't have the resources, don't have frontier labs headquartered there, don't have billions or potentially trillions of dollars to invest in artificial intelligence, and the decisions that we're making impact them. So begin to think through, okay, maybe I should pay attention not just to how this impacts me or my community, but to how it impacts other people. And it's not a left or right issue, it is a human issue. That's what people need to see. It's not a political issue. It's about how we as humans want to advance with this technology.
And then how do we process a technology like this? Because for as long as we've existed, humans have been the most capable and most intellectually advanced species on this planet. What happens when we create something that's smarter than us? How are we able to react to it? How are we able to adjust as humans when it's smarter than us?
Matt Stauffer:
So what I'm hearing is we all need to go watch a whole bunch of sci-fi movies, where they've been trying to answer these questions for the last 20 years, and see what happened. And what is the guy's name? Do Androids Dream of Electric Sheep?, Philip K. Dick. We just need to read a whole bunch of Philip K. Dick novels, and hopefully that can help our brains get stimulated a little.
Okay, so Karen, we're kind of nearing the end of the time that we have set aside for today. Is there anything that you feel like you wanted an opportunity to share with everybody that you didn't get a chance to talk about today? That you want people to hear?
Karen Maria Alston:
No, I mean, I'm hopeful. Like I said, I am supportive of AI. I believe in these tools and know how much they can help our society, if we can cure cancer and navigate some of these really big issues that plague us. AI could also be an opportunity to help us be more equitable for more people. So on one side, I'm really hopeful that it can be a tool that makes us an even greater civilization, and that we can leapfrog into a place of abundance.
But the other side is that we're human. We have hubris and ego and all of the things that make us human. So begin thinking about how we utilize these tools to help other people, not just, how do I benefit, how do I become rich, how do I build the vibe-coded app that lets me buy my bunker. Think about how we as humanity move forward. And then think about asking your politicians, your governor, your state reps: what are they doing? What are they thinking about?
Are they asking questions? Are they thinking about how the decisions they make impact the citizens? Are they making decisions that are going to be beneficial to citizens, or are they just making the decisions that the companies are telling them they should make? So that's the other side: to have that agency around, okay, what's coming? What is my governor doing? Is there an innovation team? Is there a group of people advising my governor or my state senators or my congressmen on the decisions they're going to make? And is it done in public and not in secret?
Matt Stauffer:
Yeah, yeah. Okay. I feel like I have a to-do list now, and political engagement is definitely on it. I hear my friends who are anti-AI, but even then, I think if those friends were told, you can use the benefit of AI, but it will be regulated so it's not stealing people's data for training, and it will be managed so it's not killing the environment, they'd be like, okay, that's fine. Those are usually the two concerns they have, with maybe a third being privacy. So I'm like, we can get through those things, right? And the good news is, I don't know anybody who is super pro-AI who is against those regulations. Nobody's like, I love AI, and I only love it if it's stolen other people's data for training, or I only love it if it's screwing the environment. The good news is the things that critics want fixed, I don't think anybody disagrees with, other than maybe the AI companies, because they don't want to deal with the ramifications of fixing those things. So hopefully, regardless of what you think about AI, you're on the same page that these are things we can advocate toward being fixed, and that fixing them is for the benefit of humanity. So I appreciate you setting that up. Also, thanks so much for teaching us a little about how you use it, because I don't know anybody using Claude or any other LLM at the same level of executive assistantship as you are. I've got some friends who are pretty close, but you've got it integrated in ways that I certainly don't, so I'll be researching that some. Before we do the final wrap, I do want to make sure people know how to follow you. If they're interested in keeping in touch with you, learning what you're working on, anything like that, where do they follow you on the internet?
Karen Maria Alston:
So you can follow me on LinkedIn if you're a LinkedIn person. It's Karen Maria Alston. It's the same on Instagram, Karen Maria Alston. Feel free to reach out. You can email or DM me. I'm the easiest person to reach. My cell phone is probably an appendage at this point, because I am a tech user. I use all these tools all the time. I'm working on a couple of fun things, so if you follow along, you'll be able to see some of them. I sort of hinted at it earlier, talking about these pages that are run by artificial intelligence. I'm going to do a research project to see how fast I can get to a million followers with a non-human. I'm looking forward to doing that, so if you want to follow along, just reach out.
Matt Stauffer:
Okay, I am fascinated by that concept of the research project, because obviously we don't want people manipulating us to get followers, but I've really appreciated when people have said, this is already happening, so I'm going to publicly and transparently show you what is already happening to your social media feeds by seeing what we can do. So I can't wait to learn more about that one. Okay, one of the things I told you all is that at the end of each of these podcasts, most of the time but not always, I'm going to share something one of you has shared with me about ways you're using AI in your day-to-day life. Karen actually gave us a bunch of very fascinating ones, but I did get one from Twitter from Devin Garbarosa. He said: not my own anecdote, but a colleague at my last job used it two Christmases ago to ingest 15 different cookie recipes, and had it plan out a bulk shopping list and create a streamlined order of events, knowing how many mixers and how many ovens she had, in order to make her annual activity more streamlined. She said it was life-changing, and the cookies were also fantastic. I like this one because often we think about the promise: yeah, I fed it all this stuff and it gave me back a recipe that looked good, and then you actually cooked it and it was garbage. But I feel like we're finding, bit by bit, the things where it's like: if you ask it to make a chocolate cookie recipe for you, it's probably going to be garbage. But if you say, here are the existing sets of steps, it can work with the existing stuff to put it where you want, much better than it can come up with something out of whole cloth based on its training data, right?
So I love that as an idea: don't ask it to make the recipes for you, give it your recipes and say, please do this manipulation that a human could do, that an executive assistant could do, but now you can get it to do it just for you. It wouldn't be worth paying an executive assistant to plan your holiday Christmas cookie shopping, but it's certainly worth having AI do it. So thanks for sharing that, Devin, we appreciate you.
All right, so that is it for today. Karen, thank you so much for sharing and thank you for being an advocate because it's not just that you're teaching us about what you're doing, but your actual job today is trying to help all of us have a better human experience of AI as we interact with it and AI as it impacts us even when we're not the ones choosing to interact with it. So I'm grateful for you being someone out there doing that kind of advocacy and thanks for joining us today.
Karen Maria Alston:
Thank you.
Matt Stauffer:
For the rest of you, thank you so much for hanging out with us, and we will see you next time.