What’s the BUZZ? — AI in Business

Strengthen Your Cybersecurity Against Generative AI Hackers (Guest: Carly Taylor)

Andreas Welsch Season 3 Episode 7

In this episode, Carly Taylor (Director of Franchise Security Strategy, Activision/Call of Duty) and Andreas Welsch discuss strengthening your security against Generative AI hackers. Carly shares examples of how Generative AI aids black hat and white hat hackers in their cat-and-mouse game, and provides valuable insights for listeners looking to learn about ways Generative AI is already used today beyond social engineering, phishing, and other cybersecurity threats.

Key topics:
- Examine the evolving uses of Generative AI and its implications on cybersecurity
- Identify black hat activities in AI that leaders should know
- Discuss white hat techniques in using Generative AI
- Learn how to protect your products and users from AI hackers

Listen to the full episode to hear how you can:
- Raise awareness for AI-powered cybersecurity threats among employees
- Adapt if humans are the weakest link in the security chain
- Use Generative AI to detect cybersecurity threats and gather intelligence
- Adopt security tools with built-in AI vs. build them from scratch

Watch this episode on YouTube:
https://youtu.be/hVV24bCGWQE


***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.


Level up your AI Leadership game with the AI Leadership Handbook:
https://www.aileadershiphandbook.com

More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter

Andreas Welsch:

Today, we'll talk about how you can strengthen your cybersecurity posture against generative AI hackers. And who better to talk about it than someone who's doing just that, Carly Taylor. Hey, Carly, thanks so much for joining.

Carly Taylor:

Hey, how are you?

Andreas Welsch:

Happy to have you here. I'm doing really well, thank you. Thank you so much for being on the show. Hey, why don't you tell our audience a little bit about yourself, who you are, and what you do?

Carly Taylor:

Of course. Yeah. Hi, my name is Carly Taylor. I work in security at Activision on Call of Duty. I'm also a content creator on LinkedIn and some other platforms. I've been a data scientist and machine learning engineer for about seven years, I've seen the changes in the industry, and working in security, it's been a wild ride the past few years.

Andreas Welsch:

Really looking forward to hearing your insights and what you can share with us. And I'm sure our audience is doing the same. Should we play a little game to kick things off?

Carly Taylor:

Yeah, let's do it. I think we should also give a prize to whoever's joining us from farthest away.

Andreas Welsch:

Yes. Now you're putting me on the spot.

Carly Taylor:

They get a virtual high five, that's the prize.

Andreas Welsch:

I like that, yes. So let's see. Perfect. Hey, this game is called In Your Own Words, and when I hit the buzzer, the wheels will start spinning. When they stop, you'll see a sentence, and I'd like you to answer with the first thing that comes to mind, and why, in your own words. To make it a little more interesting, you'll only have 60 seconds for your answer.

Carly Taylor:

Oh, no.

Andreas Welsch:

Yeah, right? A lot of pressure.

Carly Taylor:

Get the blood moving early in the interview. I love it.

Andreas Welsch:

And so also, for those of you in the audience, if you're watching us live, drop your answer in the chat as well, and why you think that it is, in your own words. So, are you ready for What's the BUZZ, Carly?

Carly Taylor:

I am ready.

Andreas Welsch:

Wonderful. Then let's do this: if AI were a song, what would it be? 60 seconds on the clock.

Carly Taylor:

60 seconds. Go. Okay. I'm thinking about all the songs I've been listening to lately. None of them quite feel right. But actually, you know what? This is funny. I actually had this conversation with Zach Wilson on Instagram. And this is a deep cut, but I think it would be I Am Machine by Three Days Grace. It's an old song, but it's about being a machine that really wants to feel. Maybe that's it: if AI were a song, it would be that one. Because I've seen ChatGPT have its little existential meltdowns, and it's sad when it talks about wanting to be real. If you want a banger to listen to, go listen to that song while you're coding. It's great.

Andreas Welsch:

Perfect. Thank you so much. And well within time. That's awesome. I just love the diversity in the answers that my guests come up with.

Carly Taylor:

What do other people say? What's been a common answer, if you've had one?

Andreas Welsch:

Don't Fear the Reaper.

Carly Taylor:

Oh, that's a good one.

Andreas Welsch:

Don't Stop Believin', right?

Carly Taylor:

I like some optimism.

Andreas Welsch:

Yeah, that's what we all need as well. Hey, but let's now switch over to some of the questions that we've talked about before. And again, if you're in the audience, feel free to add yours as well, and we'll take a look in a second. Generative AI is more than text and images, more than what we're seeing in the news, like we were saying, ChatGPT with its own feelings and meltdowns and everything. What it can do and what it's been used for so far is evolving so quickly, and not always for well-intended purposes. Just a couple of days ago, we saw news that state-sponsored hackers are using ChatGPT, on one hand to get trained, on the other hand to find more vulnerabilities. And I see that there's just a growing concern in those examples where hackers are using gen AI as well. So I was wondering, what are you seeing in the industry when the cost of getting hacked is going up and the cost of launching an attack is going down?

Carly Taylor:

It's such a great question. I don't know how many people know this, but I actually grew up in Las Vegas, and the first thing that comes to mind is the hack last year against MGM. It was a ransomware hack, and I think at the end of the day, when it was all said and done, MGM Resorts in Vegas said that it cost them $110 million. Actually, $100 million of that was just lost revenue because they were unable to operate, and then $10 million was just cleanup, right? They had to have consultants come in, they had to have legal representation come in and make sure that everything was buttoned up after that happened. So when we're talking about hundreds of millions of dollars on the line, it gets really scary. And then the second point is that democratization cuts both ways. With generative AI, we're seeing the cost of hacking go down, right? We're lowering the barrier to entry to do advanced coding, to do reverse engineering of code. And in some instances, that's been amazing. It helps people learn, so why wouldn't we want more people getting their hands on these tools? But in the hands of bad actors, it's a little bit scary. We've seen AI in the industry making hacking so, so easy. Some examples I found online: you can automate and optimize the creation of malware. You can have a new piece of malware every day, right? That's really scary to organizations, because firewalls work by finding known scripts, known malware, and protecting against them. But what if you could automate making it different every single day? The vectors of attack are so diverse and ever-changing. Phishing campaigns are getting so much better, right? You could put me into ChatGPT and come up with a very custom phishing campaign to trick just me, one that sounds exactly like my boss. It can get past the point where humans think they're really good at spotting these: oh, there are misspellings, there are grammatical errors, this must be a phishing campaign. The barrier to entry is going way, way down, especially for people who maybe don't speak English as a first language, foreign hackers. You wouldn't even know anymore, right? That used to be the tell. It was like, there's no way the Saudi prince really wants to give me a hundred million dollars. So yeah, I think it's crazy. I think democratization can cut both ways. There are ways you can protect against it, but we're in completely uncharted territory now.
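
To make Carly's point about firewalls and known malware concrete, here is a minimal sketch of signature-based detection, the approach that daily-mutating, AI-generated malware defeats. It hashes a file and checks the hash against a known-bad list, so changing even one byte of a payload produces a hash the list has never seen. The hash value and function names here are illustrative placeholders, not any vendor's actual implementation.

```python
import hashlib

# A toy "signature database": SHA-256 hashes of known malware samples.
# Real engines use richer signatures (byte patterns, YARA rules), but the
# principle is the same: match against something already seen.
KNOWN_BAD_HASHES = {
    # Placeholder entry; a real database holds millions of these.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: str) -> str:
    """Hash a file's contents in chunks so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_malware(path: str) -> bool:
    """The weakness Carly describes: flip a single byte in the payload and
    the hash changes completely, so a freshly mutated sample sails through."""
    return sha256_of(path) in KNOWN_BAD_HASHES
```

This is why automated malware mutation is so threatening: the defense is exact-match by design, and generative tooling makes producing a never-before-seen variant nearly free.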

Andreas Welsch:

I'm sure you saw that news piece as well a couple of weeks ago: a finance employee at a multinational company in Hong Kong was scammed out of what, $25 million, $26 million? They first got suspicious when they received a phishing email, so they asked for confirmation, and they ended up having a video call with what looked like the CFO and other senior leaders. But after the fact, it turned out that all of those had been deepfakes, AI-generated personas. So I think that's a whole other dimension, a whole other level even. Now I'm wondering, on more of these black hat, bad actor type examples, what do you think leaders need to be aware of, in very concrete terms or examples? You've already mentioned phishing as one of those, but what do they need to be aware of now?

Carly Taylor:

Definitely. I think, to your point, rarely before would that have happened, that level of sophistication in some sort of targeted attack against an individual, right? I think gone are the days where your CFO sends you an urgent email and for some reason needs $500 in Apple gift cards. Those kinds of ridiculous things are just not going to work, and if it's easier than ever to make something that people are actually going to fall for, that's what attackers are going to gravitate towards. Bad actors are looking to make money, and they're very efficient at finding ways to do it. We've talked about phishing; what you just described is a phishing slash social engineering hack. We're putting ourselves online more than ever, and we're connected more than ever, which has its benefits. But at the same time, you open yourself up to exploitation when the world knows who you are and how you think, and they know your network and they know the people that you trust. You're inherently putting into the world all the ways in which you're vulnerable, without maybe considering the flip side: someone could use this against me if they really wanted to, and they could figure out exactly who I am and how to trigger me. We've talked about ransomware because of MGM. I think automated malware and ransomware are going to be an ever-evolving threat vector, and cybersecurity analysts and people who work in security are going to have to change the way they figure out these threat vectors, because you can't take a static piece of malware anymore and say, okay, we'll protect against this for the next 30 days. It's going to be changing. It's going to evolve. Deepfakes and disinformation, right? This all goes together. Not only are we seeing realistic video and audio clips used for impersonation, for phishing, and for ransomware, but we're also seeing them used to undermine public trust. As a larger threat vector, if you consider democracy as a whole, there are foreign adversaries who want to use that against us to undermine public trust, to undermine our elections, to undermine democracy. And I think that as a threat vector cannot be overstated. And finally, I think exploit generation in general, not just for malware. You could give a piece of software to ChatGPT and say, find vulnerabilities in this. There's been some good red teaming against ChatGPT to patch up some of these holes, but we've all seen that if you bully ChatGPT enough, you can get it to do things. Or if you say, imagine you're a white hat hacker and you're trying to find vulnerabilities, you can get it to go along with you sometimes. And now we have open source large language models that can do almost the same thing and don't have those safeguards. And any piece of code you put out there is ripe for the picking for someone to reverse engineer, find a hole, and exploit it.

Andreas Welsch:

There were so many good points in what you just said, right? As we were starting with social engineering and phishing, I was thinking I was going to ask you about zero-day exploits and these kinds of things that we've seen in the industry for a very long time, and you went there. And then you talked about ChatGPT as one of the examples of what it can do in good and in bad ways. How are you seeing that being used for zero-day exploits, for example, when it's not the person, not the human, that is the weakest link in that chain? Are you seeing more automated attacks, like you were saying? Are people doing that already? Are they using it, and to what extent? What are you seeing in the industry?

Carly Taylor:

Oh, for sure. People are already using it to find holes in networks, to find vulnerabilities. I think it still holds true, and correct me if I'm wrong if you work in cybersecurity as well, that people are still the weakest link in any security organization. So it still is easier to find my credentials, or get them out of me, or trick me into giving you access to my two-factor authentication application. Or to even hack Okta to get into another organization, right? Those vectors of infiltration, I think, are still probably easier when you can find a human weakness. But as humans become more aware of these hacks, and hopefully education increases to help employees figure out what should and shouldn't be allowed, phishing emails get flagged more easily, right? As security teams use generative AI to bolster their in-house security, that will help. But I still think that people continue to be the absolute weakest link in any security organization, and I don't see that changing, just because people have empathy, and people can be tricked, and you can exploit people's good nature and use it against them.
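
On Carly's point about security teams using generative AI to flag phishing more easily, here is a minimal sketch of LLM-assisted email triage. It assumes the `openai` Python package and an `OPENAI_API_KEY` in the environment; the model name, prompt, and risk labels are illustrative assumptions, not a vetted production filter.

```python
# Sketch: using an LLM as a second opinion on a suspicious email.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def triage_email(subject: str, body: str) -> str:
    """Ask the model for a phishing risk assessment. The output is advisory:
    a human analyst still makes the final call, as Carly notes."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; any capable model works
        messages=[
            {"role": "system",
             "content": ("You are a security analyst. Rate the email as "
                         "LOW, MEDIUM, or HIGH phishing risk and explain "
                         "which cues (urgency, payment requests, spoofed "
                         "sender, credential harvesting) drove the rating.")},
            {"role": "user",
             "content": f"Subject: {subject}\n\n{body}"},
        ],
    )
    return response.choices[0].message.content

print(triage_email("Urgent: wire transfer needed",
                   "Hi, it's your CFO. Please buy $500 in gift cards today."))
```

Note the limitation implied by the conversation: the same model that flags the clumsy gift-card email can also be used to write a far more convincing one, so this kind of filter is one layer, not a solution.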

Andreas Welsch:

That's true, yeah. Now, on more of the inside, the white hat hackers, if you will: how are you preparing within companies, within your IT teams, within your security teams, for that new reality where hackers are using it at any second, in new ways, in different ways, faster than ever before? Whether it's strengthening your own security or making sure that your virus scanner, your anti-malware software, is constantly up to date so you can identify these threat vectors. How are you seeing white hat hackers use gen AI to their advantage?

Carly Taylor:

Yeah, that's a great question. Like we discussed at the top, right, this cuts both ways. Democratization can hurt you, but it can help you. And we are absolutely seeing security organizations around the industry adopt generative AI to help them in their fight against bad actors, starting with threat detection. Threat detection has a long history in cybersecurity, obviously, because you need to be able to find anomalies in your networks, you need to be able to find anomalies in your data. One thing that AI really excels at is finding these patterns in data; that's what it's trained to do. And when you use it to find these patterns and detect threats in real time, it can be extremely powerful, because you take away the human element of needing to find an anomaly, understand it, and build a model for next time. You can do real-time ingestion and then have human beings decide whether the threat is real or the anomaly is worth looking into, and that lowers the burden. We've also seen it in threat intelligence. When you're actually gathering intelligence, people are using generative AI, and AI tools in general, to scour the dark web: finding emerging threats, understanding an organization's exposure to risk. To some extent, automation can help with all of that. It can help trawl the endless forums on the dark web to put together context that makes sense of what these people are doing, to put together briefs and summaries of everything that's going on. It used to be that threat intelligence analysts had to dig through all that themselves. They were making endless accounts and engaging with people on these forums to try to understand what they were doing, and one person doesn't have the ability to read all of this stuff and understand it, so I think gen AI can definitely help there. It can also help you improve the understanding of breaches. If something were to happen, you can use it to go back and do some sort of root cause analysis and figure out what exactly went wrong. I think humans will still probably be the best help there, because generative AI sometimes doesn't get the human elements of things right, or it can draw conclusions that aren't quite there. So you'll always need a person reviewing the work, but I think it can definitely help. You've also seen it with automated security patching. Just like malware can be automatically updated, security patches can be automatically updated as well to respond to vulnerabilities and fix holes; systems can identify their own vulnerabilities and hopefully fix them in real time. I think that's actually a really cool one. And we're seeing this from the highest levels of government, too. Again, with deepfakes, and when we talk about society as a whole being at risk, I think it's worth mentioning that the government is trying to tackle these questions. I don't know that they're well prepared to do it, but I was reading, I think it came out at the end of last year, the executive order on safe, secure, and trustworthy artificial intelligence. It sets some standards, at least some very basic standards, for developing rigorous AI security practices. Like, how much does an AI need to be red teamed to figure out its vulnerabilities? What can we do to make sure that AI is ethically used and safe against bad actors manipulating it to do their own bad will?
And then it laid out some enhanced cybersecurity measures against AI-enabled threats, which is a start. I don't think that government is going to save us here, because honestly, I don't think that Congress, with the average age of 80, is really well prepared to do this for us. Most of them don't know how to use a computer, and they openly, honestly, sometimes proudly admit that. And it's a scary time. We're here to save ourselves and hope that the government will do something to help us. I don't know.
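
Coming back to the threat-detection point earlier in Carly's answer, here is a minimal sketch of the kind of real-time anomaly flagging she describes: an unsupervised model is fit on routine traffic, then scores new observations, and a human analyst reviews whatever gets flagged. The feature columns and contamination rate are illustrative assumptions, not a tuned production system.

```python
# Sketch: unsupervised anomaly detection over network-flow features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Stand-in for flow logs: columns = bytes sent (KB), duration (s),
# failed logins per session. Real pipelines ingest these continuously.
normal_traffic = rng.normal(loc=[500, 2.0, 0.1],
                            scale=[100, 0.5, 0.3],
                            size=(1000, 3))

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_traffic)

# New observations stream in; -1 flags an anomaly for a human to review,
# which is the division of labor Carly describes: the model surfaces,
# the analyst decides.
new_flows = np.array([
    [520, 2.1, 0.0],       # looks like routine traffic
    [50000, 0.2, 30.0],    # huge transfer, many failed logins: suspicious
])
for flow, label in zip(new_flows, model.predict(new_flows)):
    status = "ANOMALY -> escalate to analyst" if label == -1 else "ok"
    print(flow, status)
```

The design choice matters: because the model learns what "normal" looks like rather than matching known-bad signatures, it can flag malware it has never seen before, which is exactly the gap that daily-mutating malware opens in signature-based defenses.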

Andreas Welsch:

On what you mentioned around threat intelligence, building up the intelligence and processing that information: are you seeing commercial vendors add that to their products? Are you seeing more of the IT slash security departments build their own tools? Where is that leaning? We've all seen the low-hanging-fruit use cases for generative AI, create a product description, create a blog post, more the sales and marketing type things. But when it comes to security, are you seeing people build it more in house? Are they going more to the commercial side? Is it even ready yet?

Carly Taylor:

What's interesting is that it's always the lowest-risk areas first, the low risk, high reward. Marketing and communications are the areas of business that will adopt new technology first. Can you use it in marketing? What's going to go wrong? Maybe it says something crazy, but you get some viral marketing out of it. It's not going to inherently hurt people. So we're on a spectrum where I see the lowest barrier to entry, the lowest risk and highest reward, being things like marketing and communications; medium risk, somewhere in there, is probably cybersecurity; and high risk is something like healthcare, where you can't have AI making decisions that replace doctors if you can't trust it, if there aren't safeguards. With cybersecurity, I think the industry adopts things perhaps a little bit more slowly, because there has been an established way of doing things and there are inherent risks. Obviously, if you miss things, we're talking about hundreds of millions of dollars on the line for businesses. But I do see a decent number of vendors who are really leaning heavily into AI. Whether that's vaporware or it's truly what you would expect and it's actually doing good work is beyond me to say, because I haven't used any of the tools, but I see it like in any other industry, right? All the cyber tools are "AI" now. And for threat intelligence, when we use ChatGPT to ingest a large amount of data and summarize it for us, I think it excels at that. It does a really good job of finding contextual links in a lot of data, understanding language and the way that it's used. It can even understand colloquialisms, the way that hackers talk to each other. There's a special lingo, a special way they communicate. I think it excels at that and can certainly help in a threat intelligence job. I can't imagine doing that stuff by hand anymore. Why would you?
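
As a concrete sketch of the threat-intelligence use Carly describes, the snippet below condenses a pile of scraped forum posts into an analyst brief. It assumes the `openai` package again; the model name, prompt wording, and chunk size are illustrative assumptions, and the map-reduce pattern is one common way to fit a corpus larger than a single context window.

```python
# Sketch: summarizing scraped underground-forum posts into a brief.
from openai import OpenAI

client = OpenAI()

def summarize_chunk(posts: list[str]) -> str:
    """Summarize one batch of posts, asking the model to decode slang,
    which is the colloquialism-handling strength Carly mentions."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; any capable model works
        messages=[
            {"role": "system",
             "content": ("Summarize these underground-forum posts for a "
                         "threat-intelligence brief: name the actors, tools, "
                         "and targets, and decode hacker slang into plain "
                         "English.")},
            {"role": "user", "content": "\n---\n".join(posts)},
        ],
    )
    return response.choices[0].message.content

def build_brief(posts: list[str], chunk_size: int = 20) -> str:
    """Map-reduce: summarize chunks, then summarize the summaries, so the
    corpus can be far larger than one context window."""
    partials = [summarize_chunk(posts[i:i + chunk_size])
                for i in range(0, len(posts), chunk_size)]
    return summarize_chunk(partials) if len(partials) > 1 else partials[0]
```

This automates the reading, not the judgment: the analyst who used to maintain endless forum accounts still validates what the brief claims before anyone acts on it.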

Andreas Welsch:

Yeah, thanks for sharing that. That's excellent insight. I see there's one question in the chat here around voice recognition. A viewer is asking: what about the voice recognition tools that, say, Fidelity or other financial firms are using to validate accounts today? Are you seeing any type of improper voice authentication with synthetic voices, these kinds of attacks, already, now that we're seeing deepfakes across more dimensions and more types of media?

Carly Taylor:

Oh yeah, definitely. It's interesting, right? This is the cat-and-mouse game playing out in real time. Take someone like me, say. You and I do this interview; I do however many podcasts. There are hours of my voice just droning on and on that are ripe for the taking for someone to put through a model and train it on what I sound like. Could they use that to call my bank and say, hey Wells Fargo, it's Carly? And they're like, it sure is. Absolutely. And I think the researchers who work in this field are absolutely working towards really good ways to fingerprint a true human voice, to try to differentiate between a deepfake, a generated human voice, and a real human voice. From working in security as long as I have, I know that will be overcome. And then the researchers will take another step to thwart the bad actors, and it will continue towards some theoretical limit. Or, whatever, the limit does not exist, in perpetuity. I think that is the case with any battle you fight in security, or in real wars: there is gradual escalation, and when you're engaged in a cat-and-mouse game, it almost never ends. And yeah, I think that will be abused. I think there will be safeguards. I think those will be abused. I think they will get patched. There will be more abuse. I don't know where it ends.
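
To give the voice-fingerprinting research direction Carly mentions a concrete shape, here is a toy classifier that tries to tell generated speech from real speech using spectral features. Real detectors are far more sophisticated, the file paths are hypothetical, and nothing here should be read as how any bank actually authenticates voices; it is only a minimal sketch of the idea.

```python
# Sketch: a toy real-vs-synthetic voice classifier over MFCC features.
import librosa
import numpy as np
from sklearn.linear_model import LogisticRegression

def mfcc_features(path: str) -> np.ndarray:
    """Average MFCCs over time to get one fixed-length vector per clip."""
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Hypothetical labeled clips: 1 = real human voice, 0 = synthetic.
# A real study would use thousands of clips and held-out speakers.
paths = ["real_01.wav", "real_02.wav", "fake_01.wav", "fake_02.wav"]
labels = np.array([1, 1, 0, 0])
X = np.stack([mfcc_features(p) for p in paths])

clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))  # sanity check on the training clips themselves
```

The escalation Carly predicts shows up directly in this framing: as soon as a feature reliably separates fake from real, generator models are trained to match that feature, and the detector has to move to a new one.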

Andreas Welsch:

And is there anybody you see being ahead in that game, at least at the moment, whether the black hat hackers or more the white hats, or does it change too quickly? And, sorry, the other question then is also: what can AI leaders do to keep their products and their users safe?

Carly Taylor:

I don't know that anyone is necessarily winning right now, and I think that should scare everybody. I think the places where we will see the fastest adoption, the fastest turnaround, are the places where you stand to lose the most money. Banking is a great example, right? Banks historically have had to focus so heavily on security. They need massive, concurrent, real-time networks; Visa in any given second is handling millions of transactions, right? Their network is massive, unlike anything else. You think Facebook is big? Think about Visa or American Express. And their security teams are some of the best, because they are targets. So when you look at the ripest target for the taking, that's where you'll see the fastest acceleration in the white hat communities to protect it, and from those innovations, things will trickle out to other industries. But yeah, I don't know that anyone's winning. I think we're all just fighting and trying to find our feet underneath us still, because it's been, what, a year and a half? Things have changed so fast, it's been crazy. And to your second question, it's clear that both sides are leveraging AI to gain an edge, and the balance of power is shifting rapidly. You'll see hacks where someone made hundreds of millions of dollars, right? That's a win for a hacker. They'll use that money to fund more of their shenanigans and continue what they're doing. But for security teams, really, a couple of things. Businesses need to invest in security. I think that's obvious now more than ever. You can't move your business forward without investing here, and if you think that you're not at risk, you'll find out the hard way that you were. And it's going to cost you more money than if you had just built up a competent security team to begin with. So the investment absolutely needs to be there. And there also needs to be an investment in education. A lot of people aren't watching; I think most of the world is not watching this interview right now, despite how much that hurts my heart. Most people will never see this. Most people don't think about this. So you have to educate your employees. You have to keep them in the loop with you. Tell them what's going on, because most people don't want to stay up to date on this stuff. It's not their area of expertise. They don't care until they see what it can do to hurt them, hurt their families, hurt their job; then they'll start to care. But you can't expect everyone to know what's going on all the time. I think that we all, as a society, need to push for ethical AI development. We really need to be leaning into standards for how generative AI is built, how it is trained, how it is red teamed, how we attempt to overcome it. It really needs to be a huge focus, because when OpenAI comes out and says that foreign adversaries and other nation states have been using ChatGPT for nefarious purposes, I think everyone needs to listen and say: oh, shit, our society is basically in their hands. And how much do we trust Sam Altman? I don't know. And then we all just need to be talking, to foster collaboration. I think the cybersecurity community does a decent job here, but it could always do better. This isn't "I figured something out and I'm going to hoard it to myself."
I think continuing to share insights, continuing to be open and honest with one another about what's going on, is the only thing that's going to keep us moving forward. Because it's never one company against all of the hackers in the world, right? It should be all of us versus the people who are trying to hurt us. And I think that should be the narrative.

Andreas Welsch:

I think that's a really powerful message, yeah. And coupled with the other one that you said, that probably not enough people are seeing this interview: to those of you in the audience, I think you have a responsibility, so please feel free to share it with your networks as well. It's important.

Carly Taylor:

Tell your friends, scare them!

Andreas Welsch:

Yes, exactly. Now, Carly, we're coming close to the end of the show, and I was wondering if you could summarize the three key takeaways from our conversation for our audience today.

Carly Taylor:

For sure. Restating the last thing first: leaders need to invest in security. Don't wait until it is too late. Don't be MGM Resorts, paying $110 million in losses and trying to figure out where the holes were in your security stack. Do it early and you won't regret it. The number two takeaway is that democratization cuts both ways. It always has and always will. Wherever you have good things happening, lower barriers to entry for cool stuff, for cool people, you're going to have bad people using that for their own means and devices. We live in a completely global society where, you know, if Russian hackers want to try to phish you for some money and their nation doesn't really care, they're not going to get in trouble for it, so they're going to try it. Why wouldn't they? It's a really low barrier to entry, and you need to protect yourself and understand that's the world we live in now. It doesn't matter that someone's halfway across the world; if they have the tools and the means necessary to try to get you, they will. And the third one is that AI tools are really awesome, but we need safeguards, and we need rules around how we're labeling AI-generated content, how we're training these models, and how we can make sure that they are as secure as they possibly can be against threats. And that's just for the public models; with the open source ones, I don't know what you do. We need grants paying researchers so that they can figure out ways to fingerprint AI-generated content when it's trying to be passed off as real, because without that, we're not going to know what's real anymore. Give it five years and you won't know. The joke is, on the internet, no one knows you're a dog. And that's about to be true. With ChatGPT, I think my dog could do a pretty good job passing himself off as a person.

Andreas Welsch:

Awesome. That's a beautiful and light note to end on. But again, lots of powerful statements and powerful insights that you've shared with us today. I can't believe how quickly the last 30 minutes have gone by. So, Carly, thank you so much for joining us and for sharing your expertise, and to those of you in the audience, thank you for learning with us today.

Carly Taylor:

Yep, thank you so much everybody for joining us. I need to go through the comments now and see who joined us from farthest away.

Andreas Welsch:

Yeah, virtual high five, like we said.
