What’s the BUZZ? — AI in Business

Prepare Your Business For AI-Generated Disinformation (Guest: Rod Schatz)

May 30, 2023 · Andreas Welsch · Season 2, Episode 9

In this episode, Rod Schatz (Data & Digital Transformation Executive) and Andreas Welsch discuss how leaders can prepare their business for AI-generated disinformation. Rod shares his perspective on the risks of generative AI for disinformation and provides valuable advice for listeners looking to raise awareness within their organization.

Key topics:
- Determine how generative AI will contribute to digital disinformation
- Develop strategic responses in a changing digital landscape
- Establish a robust disinformation resilience framework

Listen to the full episode to hear how you can:
- Build AI literacy within your organization
- Balance corporate goals and societal responsibility
- Create an AI safety council and preparedness plan

Watch this episode on YouTube:
https://youtu.be/HG4aIN3_GOM


***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.


More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter

Andreas Welsch:

Today, we'll talk about preparing your business for AI-generated disinformation, and who better to talk about it than someone who's got a strong perspective on that: Rod Schatz. Hey Rod, thanks for joining.

Rod Schatz:

Thanks for the invite. I'm looking forward to the discussion.

Andreas Welsch:

Awesome. Hey, why don't you tell our audience a little bit about yourself, who you are and what you do?

Rod Schatz:

Sure. I've been a technology executive for the last 15 years, specializing in data and digital transformation. In 2016, I co-authored a book on digital transformation that's allowed me to see the world from a totally different perspective, a digital-first mindset. I've also pioneered a couple of startups. And disinformation is something I'm really interested in, because I'm somewhat worried about society as a whole and where all of this is going to go.

Andreas Welsch:

That's awesome. Hey, it sounds like you've really seen a lot in that area. Disinformation, and all the potential of AI leading to more disinformation, is something I've been looking into as well and have been thinking about and exploring a lot lately. So this is a really timely conversation. Now, Rod, how about we play a little game to kick things off? What do you say?

Rod Schatz:

Sure. Sounds good.

Andreas Welsch:

All right, awesome. So this game is called In Your Own Words. When I hit the buzzer, the wheels will start spinning, and when they stop, you'll see a sentence. I'd like you to answer with the first thing that comes to mind and why, in your own words. And to make it a little more interesting, you'll only have 60 seconds for your answer. Now, for those of you watching, also drop your answer in the chat and tell us why. I'm really curious to see what you come up with. Rod, are you ready for What's the BUZZ?

Rod Schatz:

I am ready.

Andreas Welsch:

All right, then let's get started. If AI were a bird, what would it be? 60 seconds on the clock. Go.

Rod Schatz:

Okay, perfect. It would be an eagle. And why an eagle? Because eagles soar. They get to see a broad perspective of the landscape. They're sharp, they can hunt, and ultimately an eagle is at the top of the food chain. And I see AI as one of those things that's going to quickly move up the corporate food chain and hierarchy in organizations to become all-powerful. That's why I see it as an eagle.

Andreas Welsch:

Fantastic, within time, and a great answer. Thank you so much. I'm taking a quick look at the chat here to see where folks are joining us from: Asif from Dallas, Texas; Michael in Boston; Mary in Chicago; Elly in Houston. For now it seems like a predominantly US-based audience, but if you're joining from anywhere else in the world, please drop it in the chat as well. So the eagle is this majestic bird, this majestic animal, and if you've ever seen one live, they're quite impressive, right? I think the eagle also gets us to our first question, especially if we look at something like generative AI, with the eagle being a key symbol and a key animal here in the US as well. Looking at generative AI, I'm wondering whether we believe it will contribute to the disinformation megaphone and to what extent, and whether you've actually seen some current examples, where maybe government and the eagle have their first play here.

Rod Schatz:

Yeah, sure. I personally think we're at a bit of a precarious spot. What I mean by that is that governments to date haven't done a good job of regulating big tech, in particular social media, and I think generative AI is really exposing some of those flaws in the lack of regulation. The key thing where I'm going with that is that the data used for generative AI is full of biases, and what concerns me the most is how bad actors are going to exploit that large training data set and use it for disinformation or harm. I think we really have to break it down into two aspects. One is the personal aspect, where generative AI can do damage and create disinformation about a person, but it can also do that to brands, large corporations, or medium-sized corporations. In terms of the areas where we can obviously see the biggest impact of disinformation, it's going to be politics. The other thing I see is a strong likelihood of social problem amplification: for vulnerable populations, I can see generative AI being used in ways that exploit them. In terms of examples, I'll highlight three. Probably about two months ago, with the looming indictment of Donald Trump, a bunch of generative AI images popped up of him being arrested in lower Manhattan, out on the street with a bunch of police officers grabbing him. Those images were one hundred percent created with generative AI, so text-to-image. Another one I bumped into a couple of weeks ago was the Turkish elections. There was a video of the challenger, the person trying to be elected over the sitting President, done in English and posted on social media. About a day later, he came out and said he didn't create it, so it was all misinformation. It was interesting, too, that it was done in English, not in Turkish. That's an example of text-to-video. And then one popped up on Twitter: an image of an explosion on the Pentagon grounds in the US. Twitter went into a storm over it, and the S&P dropped substantially. About 15 minutes later it was reported as a deepfake, and then things started to rebound. Those three examples show how easy it is to create disinformation with these new tools. And the thing I'm finding fascinating is that we're in the really early days of this technology. ChatGPT has been out since November, other iterations of it a little longer, but the thing I'm having a hard time with as a technologist is keeping pace.

Andreas Welsch:

That's right. Yeah, it evolves so quickly, right? There's so much news, so many improvements, also with some things being available open source, and you read about examples of people training models on their gaming laptop. The models become really small and really resource-efficient; you no longer need huge infrastructure and near-supercomputers to train these things. I think there's a significant risk in it. What do you think we need to do individually to identify whether something is real information? What role do we each play in this?

Rod Schatz:

I think the big thing is education. One of the things I think a lot about, with my children, is: how do we know what to trust, and how do we know what not to trust? What that ultimately comes down to is really solid fact checking. That's what I mean by education: I think we all now need to develop new skills to evaluate the content we read, see, and hear, and to judge whether or not it feels trustworthy. That's one thing we all collectively need to do, that real education piece, to develop, like I said, those fact-checking, analytical skills.

Andreas Welsch:

I think that's a very important point. For me, whether you say media has always been divided or has catered to a certain part of the population, with independent media there's always been a sense of trust and objectivity that you place in these organizations. So if it becomes harder to discern whether something is real or not, if you first need to ask yourself whether it's even plausible that this has happened and then do your own fact checking, I think that's a huge change and a huge risk in itself, and it also calls into question our trust in these organizations as independent, or supposedly independent. Now I'm also wondering, as we look at this area of AI-generated disinformation, how might the digital landscape reshape competitive dynamics, especially in media, politics, and business? And what do you recommend? What strategic responses are required so that businesses, individuals, and leaders can navigate these challenges?

Rod Schatz:

Yeah. I think step one is education. Like I said, not education from the perspective of trust and understanding whether information is trustworthy, but education in terms of understanding the tech. One of the things I've bumped into in my career is a massive digital divide between the executives who run organizations and their understanding of the technology that supports them and the technology that's out there to help them grow and transform. So leaders really need to learn, and they need to lead with how this technology can help them. The premise of the book I co-authored back in 2016 was how digital transformation disrupts business models. Up until November, most people thought I was all doom and gloom when I said, these are the things I can predict happening to your industry. With generative AI, the pace of change in industries and in day-to-day jobs is fundamentally going to change. So I would argue a lot of traditional business models are under threat. What that ultimately means is that it's the perfect storm for organizations. Organizations now need to prepare and build strategies, and the strategy has to be one that is carried through to execution, not one where culture eats strategy for breakfast. It truly has to come with a real implementation plan, including for disinformation. The other thing I've been thinking a lot about is that, just as organizations run tabletop exercises for cybersecurity breaches, they now need to do the same for disinformation breaches. That's another thing I definitely think they need to prepare for. What I mean by that, going back to my earlier point, is that there are going to be two aspects of disinformation. One will be at a personal level, think of a CEO or CFO. The other could be at an organizational level, where a competitor creates disinformation about a top competitor to basically erode trust. So I think there's a bunch of things organizations need to do. A further one that I think needs to happen is, for lack of a better term, what I'm calling an AI safety council. Organizations need to develop an AI safety council, which basically takes on all the ethical components but also looks at how to deal with all the risk. There's obviously policy, acceptable use both internally and externally of how to use these tools, and there's data protection. One thing I think is going to become very powerful is organizations having strong stances on how they use generative AI in relation to customers. That, I think, is going to separate many firms from the pack. In other words, responsibility.

Andreas Welsch:

To me, the part you mentioned about how easy it is to create this kind of information or disinformation, and how others might use it, is really something we need to create more awareness of, and I'm wondering if leaders are prepared for it, even if our friends in PR departments are prepared to react to such information. So to your point, I think there's a lot more awareness that needs to be created that this is possible and that it might eventually happen. Maybe let's take a look at the chat. I see there's a question from Manjunatha that I'd like to pick up and pick your brain on. Manjunatha asks: given the reality of generative AI disinformation, how can you minimize the human cost, and what broad role does generative AI itself play here? Can you think of a parallel to self-regulation, and what does it take to do the fact checking for generative AI?

Rod Schatz:

Those are two big questions. On the human cost, I think this is one area where we're going to see a lot of disruption. Corporations exist for one reason, and that's to make a profit, and having sat at many executive tables, I know how those discussions transpire. If generative AI can be used, which it can, to drive automation, if you can go from a marketing team of 20 down to six and still produce the same output, if not better, I'm pretty sure many organizations would say, then let's downsize. I was listening to an HBR podcast yesterday, and one of the panelists was talking about a conversation with a CFO on this very topic. He was weighing whether the marketing department, or even his finance team, is going to get smaller: do I make the right decision, which is for the good of society, or do I make the corporate decision, which is to let people go? I think that's definitely a question all organizations need to start discussing internally and come up with a game plan for. What we've seen historically, though, is that we've gone through massive waves of innovation throughout the history of humankind, and what happens is that people get displaced for a short timeframe, but then they go off, learn new skills, and are redeployed elsewhere. So I think that's largely what organizations need to start doing, having some of those conversations. Now, the second question, on self-regulation: I don't think self-regulation is going to work. Take the example I used earlier of social media. For the most part, the social media companies haven't had a lot of really stringent regulation on them, and you could argue the US's political elections have been influenced by disinformation on those platforms. Coming back to why corporations exist, it's for profit and returns to shareholders, so that's always going to guide their direction no matter what. I do think regulation needs to be put in place, but I don't think it's as easy as just government. It has to be multifaceted. Government obviously has a big role. I was thinking about this yesterday; I read an article that talked about a UN treaty across all nations on the guiding principles of responsible AI. It also got me thinking that industry associations play a major role in this. Think of the medical profession: they have industry associations, as do lawyers, engineers, and software professionals. I think industry associations also need to jump in and develop their own AI safety councils on how to deal with some of this. One of the things I learned writing the book is that when disruption comes, the perception is that it's out of the blue; organizations don't know how to respond, so their response is slow, and that ultimately leads to them turning into Barnes and Noble, which means they disappear. So I think it has to be multifaceted.

Andreas Welsch:

Great. So from there, you mentioned we can certainly expect some changes in organizations. But coming back to the disinformation topic: how do you feel organizations can establish a robust resilience framework to combat disinformation, to identify it, to react to it, one that not only safeguards their reputation but also promotes ethical AI as a competitive advantage?

Rod Schatz:

I think the first thing organizations need to do is monitor the internet and social media for their brand. They need to see if anybody is putting out disinformation related to their services or their products. They need to be on top of it; the key is you don't want to be reactionary. To use the cybersecurity example again, the struggle with cybersecurity is that it's typically reactionary and you're always operating from a position of disadvantage. With disinformation, I really think organizations need to be ahead of it. Organizations need to have a policy and a framework: if it happens, what do we do? What are our 14 steps? Who's involved? The key thing with disinformation is that no one group in an organization owns it. It's part marketing, part legal, part tech, part executive team, and because of that, it can be very chaotic when it happens. At the organization I was previously at, there was an incident related to one of our employees, we didn't have a good plan, and we scrambled. That's why I'm saying the first step is being proactive: having good internal discussions, laying out how you're going to respond and what your response will be. The next thing relates to internal use, having a center of excellence. We've seen lots of information and posts on LinkedIn and other social media platforms about how you have to be careful what sort of information you put into these generative AI engines. The engines themselves may use that information for retraining, and that retraining, if taken out of context, could also become disinformation against an organization. So it's about having some of those policies and that education internally. I talked about the AI safety council. I also think it's about having an adaptive culture. At some of the organizations I've talked to, many people are afraid of this technology, and the key thing I've learned over the last four and a half months is that you've got to get in there and get your fingers dirty. Play with it to really understand it. What I mean by that is we've also read a lot about how this technology hallucinates. So how do you know when it's hallucinating? What I've found is that you ask the core question of what you want, and then you bombard it with secondary questions to make sure it's not giving you a hallucination. And the big thing in all of this is leadership: the executives really do need to learn this technology, they need to lead, and they need to show that there's a strategy behind all of it.
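
As a rough illustration of the cross-checking habit Rod describes (ask the core question, then probe it with secondary questions), here is a minimal Python sketch. It is not from the episode: the ask() helper is a hypothetical stand-in for whatever generative AI call you use, and the consistency check is just one simple way the idea could be wired up.

```python
# Sketch: ask the core question once, then probe with secondary questions
# and flag the answer if the follow-ups don't stay consistent with it.

def ask(prompt: str) -> str:
    """Hypothetical helper that sends a prompt to a generative AI model."""
    raise NotImplementedError("Wire this up to the model of your choice.")

def cross_check(core_question: str, follow_ups: list) -> dict:
    """Ask the core question, then secondary questions that should agree with it."""
    core_answer = ask(core_question)
    checks = []
    for follow_up in follow_ups:
        probe = (
            f"Earlier you answered: '{core_answer}'.\n"
            f"{follow_up}\n"
            "Reply YES if this is fully consistent with that earlier answer, otherwise NO."
        )
        verdict = ask(probe).strip().upper()
        checks.append({"question": follow_up, "consistent": verdict.startswith("YES")})
    return {
        "core_answer": core_answer,
        "checks": checks,
        # If any secondary check disagrees, treat the core answer as a possible hallucination.
        "possible_hallucination": not all(c["consistent"] for c in checks),
    }
```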

Andreas Welsch:

Awesome, a great summary of how to start addressing AI-generated disinformation and how to devise a plan before you actually need it. It's a lot like crisis communication: you have the plan ready so you can pull it out of the drawer and follow your script if it happens. Now, earlier we talked a bit about trust and the erosion of trust as well, and I'm wondering, as AI-generated disinformation challenges the very trust we've placed in institutions, in organizations that are supposed to be neutral in many ways, and in the overall digital ecosystem: what are some of the partnerships or collaborative efforts that companies, industries, and governments can forge to reinforce the integrity of information and make sure we have a reliable and secure future? What are you seeing there?

Rod Schatz:

I don't think government regulation alone is the solution to this. I do like the term you used, partnership. I think the tech ecosystem itself is definitely part of the solution. One of the things I've been thinking about for the last couple of years relates to trust and disinformation. I have a good friend, and he and I debate this all the time: we really need a trust platform. If I'm going to post something on Twitter, it goes through this trust platform, where I have a rating of how trustworthy my last posts were, and then the power of the crowd gets to rank whether or not they feel my story is trustworthy. Think of Who Wants to Be a Millionaire: the power of the crowd was always in the high nineties in terms of its ability to get the right answer. If that trust platform were blockchain-based, it would also be immutable, and it would help us understand what is coming from credible sources. Take the example from yesterday of that Pentagon picture: if it had been on a trust platform, that person most likely would have had a low trustworthiness score, so people would have looked at it and gone, oh, okay, and the news media wouldn't have been so quick to jump on it and repost it. So where I'm going with this is that there's definitely some tech that has to be built to help us with trustworthiness. Additionally, there are some companies working on deepfake detectors. One of the things we saw pop up quite quickly after ChatGPT was released was a detector for whether or not a text was written by ChatGPT. So I think it definitely is a partnership. I also mentioned industry associations; I think they have a big role to play. And then the key thing I think we need to do as a society is measure success on all of this. Regulation is typically implemented by governments, and governments aren't good at measuring KPIs and that kind of thing. I definitely think that's something we need to put in place: how do we measure success? Because right now we're reading lots of doom and gloom about humanity and humanity's ability to survive AI. The only way we're really going to manage that is to measure what we've put in place, see how it's working, and continuously improve.
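
To make the trust-platform idea a bit more concrete, here is a minimal sketch of the core mechanic Rod mentions: authors carry a running trustworthiness score, and crowd votes on each post nudge that score up or down. Everything here is hypothetical and heavily simplified; a real platform would need identity, an immutable (for example blockchain-backed) audit trail, and protection against vote manipulation.

```python
# Sketch: authors carry a trust score; crowd votes on each post nudge it up or down.

from dataclasses import dataclass, field

@dataclass
class Author:
    handle: str
    trust_score: float = 0.5   # start neutral: 0.0 = untrusted, 1.0 = highly trusted

@dataclass
class Post:
    author: Author
    content: str
    votes: list = field(default_factory=list)  # True = "looks trustworthy"

    def record_vote(self, trustworthy: bool) -> None:
        self.votes.append(trustworthy)
        # Move the author's score a small step toward the crowd's current verdict.
        crowd_view = sum(self.votes) / len(self.votes)
        self.author.trust_score = 0.9 * self.author.trust_score + 0.1 * crowd_view

# Example: a dubious post gets mostly "not trustworthy" votes, lowering the score.
author = Author("breaking_news_account")
post = Post(author, "Explosion reported near a government building.")
for vote in (False, False, True, False):
    post.record_vote(vote)
print(f"{author.handle} trust score: {author.trust_score:.2f}")
```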

Andreas Welsch:

No, that's awesome. I think that makes a lot of sense. We're getting close to the end of the show, and I was wondering if you can summarize the three key takeaways for our audience today. I know we've had a lot of questions in the chat as well and have been able to take some, but what are the three key takeaways for our audience?

Rod Schatz:

I think takeaway number one is that disinformation is not going anywhere; it's going to get worse, if anything. I do feel that all of us, as individuals and as organizations, the companies we work for, need to be prepared for this, so we all need to start developing our own proactive approach. And the one thing I really want to emphasize is that I think the AI safety council is the starting point for a lot of organizations. It's the ability to defend when disinformation is out there about a brand or an individual, but it's also the ability to manage and use these tools effectively to help organizations grow and flourish, and also to help keep us employed as humans.

Andreas Welsch:

Awesome. Thank you so much, Rod, for joining us and for sharing your expertise with us, and thanks to those in the audience for learning with us.

Rod Schatz:

Bye-bye.