What’s the BUZZ? — AI in Business

How to Set Up Your AI Governance and Risk Program (Walter Haydock)

Andreas Welsch | Season 5, Episode 6


What does it actually take to build AI governance that enables innovation instead of slowing it down?

In this episode of “What’s the BUZZ?”, host Andreas Welsch sits down with Walter Haydock, CEO of StackAware, to break down how leaders can move from reactive AI usage to structured, scalable governance without killing momentum.

Together, they discuss why most organizations get AI risk wrong, where governance efforts typically fail, and how to design a program that balances speed, security, and business value. Haydock shares practical insights on defining risk appetite, simplifying policies, and avoiding the extremes that derail AI adoption.

Highlights you’ll get from the conversation:

  • Why most companies fall into two traps—“ban AI” or “anything goes”—and how to find the middle ground.
  • The three risks every AI leader must address: data confidentiality, IP ownership, and reputation.
  • What ISO 42001 actually provides and how it helps operationalize AI governance at any scale.
  • The only four ways to handle risk—and why AI doesn’t change these fundamentals.
  • How to define risk appetite in real business terms to guide faster, better decisions.
  • Why overly complex data classification policies fail—and what to do instead.
  • The #1 mistake organizations make when implementing governance programs: unrealistic timelines.


If you want a clear, practical approach to managing AI risk while still moving fast, this episode delivers actionable guidance you can apply immediately.


Listen now to learn how to turn AI governance from a bottleneck into a competitive advantage.


***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.


Level up your AI Leadership game with the AI Leadership Handbook (https://www.aileadershiphandbook.com) and shape the next generation of AI-ready teams with The HUMAN Agentic AI Edge (https://www.humanagenticaiedge.com).

More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter

Andreas Welsch

Welcome back to another episode of What's the BUZZ?, where leaders share how they have turned hype into outcome. Today, we'll talk about how you can set up your AI governance, risk, and compliance program, and who better to talk about it than someone who does that for a living: Walter Haydock. Hey, Walter. Thank you so much for joining us.

Walter Haydock

Andreas, thank you for having me.

Andreas Welsch

Fantastic. Walter, why don't you tell our audience a little bit about yourself, who you are and what you do.

Walter Haydock

So I'm Walter Haydock. I'm the founder of StackAware, and we help AI-powered healthcare and B2B software companies measure and manage cybersecurity, compliance, and privacy risk. And we primarily do that through ISO 42001 implementation and readiness.

Andreas Welsch

Okay, fantastic. ISO 42001, for anybody not familiar with that, what's that all about?

Walter Haydock

ISO 42001 is a blueprint for building an AI management system. It's an internationally accepted standard that explains how to develop a set of risk assessments, how to build an AI policy, how to build procedures and develop metrics for tracking AI performance over time. And it also gives you a toolkit of technical controls that you can apply to help you manage your AI risk.

Andreas Welsch

That's awesome. So it sounds like the full package that you need around governance to manage all of this. Now, folks, if you're in the audience and you're just joining the stream, drop a comment in the chat where you're joining us from. I'm always curious to see how global our audience is. And if you want to learn how you can turn technology hype into business outcomes, consider picking up a copy of my book, The AI Leadership Handbook, available on Amazon. Now, Walter, in good old What's the BUZZ? fashion, should we play a little game to kick things off? What do you say?

Walter Haydock

Let's do it.

Andreas Welsch

Okay. Wonderful. So this is it: when I hit the buzzer, at least the proverbial buzzer here, the wheels will start spinning and you'll see a sentence. I'd like for you to complete the sentence with the first thing that comes to mind. You only have 60 seconds for your answer, to make it a little more interesting. Are you ready for What's the BUZZ? Let's do it. Okay, here we go. If AI were a book, what would it be? 60 seconds on the clock. Go.

Walter Haydock

Every book ever written? Ha.

Andreas Welsch

That's a short answer. Why every book ever written?

Walter Haydock

If you think about it, generative AI systems are functionally trained on, maybe not every book ever written, but on many of the books that have ever been written; those books are incorporated into their model weights. They're not necessarily memorized, but they are incorporated into the training process for a lot of commercially available systems out there. I know there was a lawsuit against Anthropic, for example, because of the training they had done on many publicly available books that were not licensed by Anthropic. Some of these books were purchased, and some were allegedly downloaded from book pirating sites. So that's why I think that if AI were a book, it would be every book ever written.

Andreas Welsch

Wonderful. Thank you so much for sharing, and well within time. I think that's a good segue, because I feel you might run the risk that even your well-intended AI models, products, and agents might do something that you're not aware of, that you didn't quite foresee. So being able to understand and track what happened, why it happened, and better yet, to prevent it in the first place through governance, through understanding your risks, is so critical. Like I said at the beginning, I see so many vendors and so many consultancies pushing AI, pushing agents into the market, and leaders pushing it down into the organization: we need to do something with AI. It feels like it's been the same for the last almost 10 years. Where is IT security in all of this? Where are AI governance, risk, and compliance? Are they just at the receiving end in most cases? What do you see?

Walter Haydock

The organizations that I work with are generally in the top 10% when it comes to sophistication about AI governance and security, because at a minimum they understand there is a challenge to be addressed here. In the other 90%, I see a dichotomy in approaches. On the one hand, security teams saying things like: no AI, AI is banned. Which, whether or not they say it, isn't true; it's impossible. Those organizations take a "we're not gonna allow this until we've vetted every single tool" approach, or "we're just not going to allow it ever." On the other hand, I see organizations that take, in American parlance, a YOLO approach: you only live once. They're basically letting their teams do any and everything, and they're not applying any sort of controls to it. And that has its own risks.

Andreas Welsch

Yeah, I can certainly see that. What are some of the top risks that you help companies mitigate? What are some of the obvious things that you see? And then maybe, what are some of the non-obvious things that organizations run into time and again?

Walter Haydock

The top three risks that companies generally focus on with their governance programs are: one, data confidentiality; two, intellectual property ownership; and three, reputation management. On the first one: unintended training of publicly available large language models like ChatGPT, for example. That's a major risk that a lot of companies are worried about. With respect to the second one, I alluded to it already a bit, but intellectual property ownership, and also protecting one's own intellectual property from training, from unintended exposure to large language models, is a major concern. And the third is reputation management, where companies, especially those interacting with consumers and providing chatbots and things like that, are worried about the potential damage to their reputation that an accidental or even an intentionally generated response could create.

Andreas Welsch

I was gonna say, especially on the last one, I remember the early stories: buying a Chevy truck for a dollar; the courier service chatbot that went on a tangent and started complaining and spitting out nonsense. What was it this year? I think it was Taco Bell, right, with the AI assistant at the drive-through and people being able to order a thousand cups of water, a thousand cups of Sprite or something. So yes, these things do occur, and they occur time and again, and then you make it into the news. And obviously you'd rather be in the news for a success story than the success being "we've delivered a thousand bottles of water" or something. Now, how can organizations balance that speed of innovation on one hand? Because there's a lot of innovation coming into the market. Larger organizations with deep pockets and resources are likely better situated to capitalize on it; they've likely also built the muscle over the years for how to work with these new technologies, how to work with AI. On the other hand, like I said, you still need to have security, governance, and risk in place so you don't end up on the wrong side of the news. How do organizations balance innovation and security, from what you see?

Walter Haydock

A key step to balancing innovation with security is establishing an organizational risk appetite, and this is a clearly expressed limit on the level of risk that you're willing to accept with respect to AI or any sort of practice. And a key way to set this is to identify what the potential reward from using AI is. So if you are generating a hundred million dollars in revenue, it probably makes sense to assume more than $0 in annual loss expectancy in terms of risk. Now, if you tell me you're assuming a billion dollars in risk and you're only making a hundred million dollars in revenue, then that is probably not a good idea. You need to incorporate things like the potential for regulatory action, the potential for cybersecurity incidents, and the potential for impacts to society; that's also important to take into account when you're establishing your risk appetite. And then once you have that in place, you can determine what the appropriate tools to approve are and what the right technical controls to implement are.
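To make the arithmetic concrete, here is a rough sketch of the annual-loss-expectancy comparison Haydock describes. The figures, risk names, and function names are invented for illustration; this is not StackAware's methodology.

```python
# Hypothetical sketch: sanity-checking an AI risk appetite against revenue.
# All figures and names are invented for illustration.

def annual_loss_expectancy(probability_per_year: float, loss_if_it_happens: float) -> float:
    """Classic ALE: expected loss per year from a single risk."""
    return probability_per_year * loss_if_it_happens

# Example risks a governance team might quantify (assumed numbers).
risks = {
    "regulatory_action": annual_loss_expectancy(0.05, 2_000_000),
    "security_incident": annual_loss_expectancy(0.10, 1_500_000),
    "reputational_harm": annual_loss_expectancy(0.20, 500_000),
}

annual_revenue = 100_000_000
risk_appetite = 0.01 * annual_revenue  # assumed policy: accept up to 1% of revenue

total_ale = sum(risks.values())
print(f"Total ALE ${total_ale:,.0f} vs appetite ${risk_appetite:,.0f}")
print("within appetite" if total_ale <= risk_appetite else "over appetite: mitigate, transfer, or avoid")
```

The point is direction rather than precision: if expected losses exceed what the reward justifies, the program, not the math, needs to change.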

Andreas Welsch

A question there: when you say, when you work with consumers, what's the risk appetite? One thing that immediately comes to mind is mental health chatbots, companions. Is that the kind of thing we're talking about? Or is it really the "hey, my package is delayed" or "I didn't get the shipment", that sort of more innocent use case, for lack of a better term?

Walter Haydock

With respect to mental health chatbots, there's been a lot of regulation already. For example, New York State and Utah have published laws regulating mental health chatbots, and I view these efforts as reactions to reporting in the news about individuals who may have expressed suicidal ideations and not been caught in time. These laws are well-meaning, but they fail completely to incorporate the opportunity cost of these regulations. For example, there's a huge epidemic in the United States of people who don't have access to mental healthcare because there's a shortage of psychiatrists. Human mental healthcare providers need to have malpractice insurance, they need to have all this stuff; but there's no law that says there needs to be a psychiatrist available. So these laws that are applying very strict standards to mental health chatbots, and banning them in many cases, are basically saying: yes, I understand that there are risks related to these systems, but we're not providing any alternatives, and your only alternative is nothing. So I think the risk calculus that a lot of these states are making, specifically with respect to mental health chatbots, is way off and is not the right way to go.

Andreas Welsch

So then, maybe to bring it back to business: when you talk about risk appetite, how can I imagine or envision how that conversation goes? Is it, like you said: we're making a hundred million in revenue, how much risk are we willing to take on? And then, how do we mitigate it, how do we insure it, how do we prevent it? How does it typically work?

Walter Haydock

Risk appetite generally should be expressed in terms of dollars of annual loss expectancy. Or, if you want to get really fancy about it, you can develop what's called a loss exceedance curve, which says, essentially: we accept a 1% chance of a hundred-million-dollar loss; we accept a 5% chance of a $50 million loss. Something like that would be a very advanced way of expressing it, but the general principle is the same. And I'm sure I'm gonna offend a lot of people when I say this, but too bad: you also need to incorporate the risk of death into your risk calculations. If you say zero deaths are acceptable, if you say zero potential for injury is acceptable, then you should just shut down your company, or you should just shut down your system, because there's always gonna be a risk, including risk of death, with everything that you do. And in the United States, the US government puts a price on human life. It does it with respect to climate change; it does it with respect to traffic fatalities. So that's something that businesses should do, too. And people who say that's unacceptable are just pretending they don't need to make any trade-offs, and they're off.
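A loss exceedance curve like the one Haydock sketches can be approximated with a simple Monte Carlo simulation. In this sketch, the event probabilities and loss distributions are invented placeholders, not figures from the episode:

```python
# Hypothetical sketch: a loss exceedance curve via Monte Carlo simulation.
# Event probabilities and loss distributions are invented placeholders.
import random

random.seed(42)

def simulate_annual_loss() -> float:
    """One simulated year: each loss event may or may not occur."""
    loss = 0.0
    if random.random() < 0.05:                  # 5% chance of a major incident
        loss += random.lognormvariate(17, 1.0)  # heavy-tailed loss, ~$24M median
    if random.random() < 0.30:                  # 30% chance of a minor incident
        loss += random.lognormvariate(13, 0.8)  # ~$440k median
    return loss

losses = [simulate_annual_loss() for _ in range(100_000)]

def prob_loss_exceeds(threshold: float) -> float:
    """Fraction of simulated years whose total loss exceeds the threshold."""
    return sum(1 for x in losses if x > threshold) / len(losses)

# Compare against stated tolerance points, e.g. "5% chance of a $50M loss".
for threshold in (50_000_000, 100_000_000):
    print(f"P(annual loss > ${threshold:,}) = {prob_loss_exceeds(threshold):.2%}")
```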

Andreas Welsch

I think that's an important part, right? There are always trade-offs. You need to be aware of what they are and, to your point, how much risk you're willing to accept and what the return is in exchange for it, or ideally, if the risk does not materialize, what you can actually gain. On the other side of the pond, I see a lot more risk aversion, a lot more regulation, and a lot more "what if" before companies even have the chance to start, before startups are even able to be founded or flourish or build a product. So to me, that's the balance too: how much can you mitigate and anticipate risk versus stifling innovation? What happens on a regional or geographic scale certainly applies to companies too. Now, you talked about risk appetite. You talked about how not to end up on the wrong side of the news, basically, and how to manage your reputational risk. What are some other aspects of AI governance that leaders need to be aware of? You said ISO 42001 is a complete system to manage your AI program. What else does it entail, in a bit more detail?

Walter Haydock

ISO 42001 gives you a high-level framework for conducting AI risk and impact assessments and then determining what to do based on their results. It also has some, I think, often overlooked components. For example, having an internal audit program is really key and an important way to continually improve your organization and your AI management system. Having a third party review your practices is a great way to make sure that you're optimizing for risk and for performance. And then, developing effective metrics for how you're measuring AI performance is another key piece of effective governance. So understanding: what are your tolerances when it comes to error rates, or when it comes to expenditure of tokens? These are all important things that companies should consider when they're building their AI management system.
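As an illustration of tracking such tolerances, here is a minimal sketch; the metric names and thresholds are assumptions for illustration, not taken from ISO 42001 itself:

```python
# Minimal sketch: AI performance metrics checked against tolerances.
# Metric names and thresholds are assumptions, not from ISO 42001.

tolerances = {
    "error_rate": 0.02,           # max acceptable share of incorrect outputs
    "tokens_per_request": 4_000,  # max acceptable average token expenditure
}

observed = {
    "error_rate": 0.035,
    "tokens_per_request": 2_750,
}

for metric, limit in tolerances.items():
    value = observed[metric]
    status = "OK" if value <= limit else "BREACH: trigger corrective action"
    print(f"{metric}: {value} (limit {limit}) -> {status}")
```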

Andreas Welsch

It feels like larger organizations might already have teams doing that, or they might be able to stand up teams or dedicate people to the topic. What does it look like in mid-sized organizations, or large enterprises that are not necessarily the big Fortune 10, 50, or 100? How do they do that?

Walter Haydock

A standard like ISO 42001 is flexible in that it incorporates the size and complexity of the organization that is implementing it. That's definitely something to take into consideration. So a bigger organization would understandably have more refined, more sophisticated metrics and things that it's tracking than a smaller one. A startup could have lower-granularity risk assessments, and it could apply controls in a way that meets the intent, that accepts, mitigates, transfers, or avoids the risks it has identified in those risk assessments. But it doesn't necessarily need to have the same level of complexity as a Fortune 10 company, as you mentioned.

Andreas Welsch

Accept, mitigate, transfer, and avoid. Those were the four that you just mentioned. Those are the key aspects to focus on when you think about risk, probably in general, but also AI specifically.

Walter Haydock

With respect to risk, including AI risk, there are only four options: accept, mitigate, transfer, avoid. Those are the only paths you have; you can choose one or a combination thereof. There are no other options. And it doesn't change with AI: tried-and-true risk management practices still apply.
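Those four options map naturally onto a simple structure for a risk register. In this generic sketch, the example risks and treatment choices are invented for illustration:

```python
# Generic sketch of the four risk treatment options; example risks invented.
from enum import Enum

class Treatment(Enum):
    ACCEPT = "accept: within appetite, do nothing further"
    MITIGATE = "mitigate: reduce likelihood or impact with controls"
    TRANSFER = "transfer: shift it, e.g. insurance or contract terms"
    AVOID = "avoid: stop the activity that creates the risk"

# A risk register entry records which option (or combination) was chosen.
risk_register = [
    ("LLM trains on confidential prompts", [Treatment.MITIGATE, Treatment.TRANSFER]),
    ("Chatbot gives an offensive answer",  [Treatment.MITIGATE]),
    ("Unvetted shadow-AI browser plugin",  [Treatment.AVOID]),
    ("Low-impact hallucination in drafts", [Treatment.ACCEPT]),
]

for risk, treatments in risk_register:
    print(f"{risk}: " + "; ".join(t.value for t in treatments))
```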

Andreas Welsch

Awesome. I feel sometimes being constrained to four options is plenty, and it gives you at least a sandbox to play within and to know what the boundaries are, unlike so many other aspects of AI and building AI systems. Obviously, there are a number of regulations and a number of standards. We've been talking quite a bit about ISO 42001. I know there's the National Institute of Standards and Technology with its own risk management framework. There's the EU AI Act. And individual states might have enacted different laws and regulations. But what are some of the more common ones? What's the bare minimum for any organization that says: yeah, I need to do something about AI risk, I need to be aware of it, I need to manage it, I need to document it? What are some of the core fundamentals across any regulation that AI leaders need to be aware of?

Walter Haydock

There are three general types of AI frameworks. The first would be legislation or regulation, like the EU AI Act, and that is not optional. You have to comply, and the way you find out that you haven't complied is through an enforcement action and a penalty, like a fine. The next level would be a voluntary framework like the NIST AI Risk Management Framework, which doesn't have a method for external certification but is a series of best practices. And then the final tier would be a certifiable standard like ISO 42001, where you could have an external auditor come in and confirm that you're meeting its requirements.

Andreas Welsch

So it sounds like ISO 42001 is the bare minimum for anyone, and then if you want to do more, there's the risk management framework by NIST, or even the mandatory regulations of the EU AI Act.

Walter Haydock

I would flip that. In fact, I would say the minimum is the EU AI Act or whatever your applicable legislation or regulation is. The next level up would be a voluntary framework like NIST AI RMF, and then the final piece would be external certification like ISO 42001.

Andreas Welsch

Love it. Perfect. I'm so glad we're having this conversation. I always learn something new from my guests, and I hope you in the audience are taking some good information away from our conversation as well. Your vendor says: hey, you need to do AI; I'll give you some licenses, some discounts, maybe some services to jumpstart things. Awesome, we need to do AI. My buddy on the golf course said they're doing AI; we need to do it too. So you push it into the organization. Maybe you even realize: we actually need to do a little more here, or a lot more. We need to look at risk and governance, set up a program, or get some external help certifying that we are working according to standards, according to regulations, and so on. What's really the first step that leaders should take when they do realize: we need to do something about managing our AI risk?

Walter Haydock

The first step in building an AI governance program would be building an AI governance policy, and that explicitly lays out what the organization's risk appetite is. That could be in the form of: you are allowed to do these things, you are not allowed to do these things. It could be in the form of: we'll run a risk assessment on every tool, and as long as you're below our risk appetite, then it's approved. You could also have some sort of tiering approach where you say: these types of use cases are approved, these aren't. However you do it, having a clear policy that is actionable, that individual employees can understand, is really key. Otherwise, people are just gonna be making it up as they go along, like you said: using different tools, vendors, discount services, doing stuff they see on LinkedIn, and not really having a clear picture of the risk involved.
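The tiering approach described here can be expressed as a short, readable policy check. The tiers, scores, and use-case names in this sketch are hypothetical:

```python
# Hypothetical sketch of a tiered AI use-case policy: pre-approved tiers,
# banned tiers, and a risk-assessment gate for everything else.

APPROVED_USE_CASES = {"drafting internal docs", "summarizing public sources"}
BANNED_USE_CASES = {"automated hiring decisions"}
RISK_APPETITE_SCORE = 50  # assumed 0-100 scale from the tool risk assessment

def evaluate(use_case: str, tool_risk_score: int) -> str:
    if use_case in BANNED_USE_CASES:
        return "denied: banned use case"
    if use_case in APPROVED_USE_CASES:
        return "approved: pre-cleared tier"
    if tool_risk_score <= RISK_APPETITE_SCORE:
        return "approved: below risk appetite"
    return "escalate: above risk appetite, needs review"

print(evaluate("summarizing public sources", 70))  # approved: pre-cleared tier
print(evaluate("customer-facing chatbot", 40))     # approved: below risk appetite
print(evaluate("automated hiring decisions", 10))  # denied: banned use case
```

The value of keeping the check this simple is that any employee can predict the outcome before asking, which is exactly the "clear, actionable policy" property described above.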

Andreas Welsch

I think that's so important. In so many organizations where I give workshops on how to use AI properly and responsibly, many leaders and professionals share: I'm actually not sure what kind of data I'm allowed to put in there. It says, don't put confidential data in there, don't put personal data in there. But what is confidential? We have four or five different classifications of confidentiality. It's clear that what's public I can put in there, but how do I know if something is confidential? How do I know if it's strictly confidential? How do I know if it's only for senior executives, C-suite, board members? And what if I have access to that too, because I work on the report? So I see a lot of confusion there. Your point about making it clear, actionable, and easy to understand for pretty much any employee you expect will work with these AI systems is so important. Not only do I think you can mitigate and prevent some of the risks of data leakage and IP leakage; on the other hand, you will also be able to increase your AI usage and adoption. Because if people know what they can use the systems and tools for, they're more confident, more willing, more open to using them. 'Cause yeah, sure, that's the kind of data I work with on a daily basis, so I can put it in there. Or conversely, that's the data that I should never put in there. But at least I know what goes in there and what doesn't.

Walter Haydock

Most data classification programs are terrible and excessively complex. Having anything more than two sensitivity levels, public and confidential, is usually wrong and just makes it more difficult for people to actually abide by the guidance the organization gives.

Andreas Welsch

I think organizations and IT departments demand a lot from their employees in that regard, on top of everything else, right? So now I need to remember which one of the four or five classification levels applies here, really? But anyway: risk is obviously super important, and not a lot of people like to talk about it, quite frankly. I see you as one of the few, and one of the few vocal ones on LinkedIn, sharing how you help organizations. Maybe out of all your projects, all the work that you've done so far: what are some of the common things where you say, this is the topic we always trip over, or something unexpected that you feel would be super relevant for more people to know upfront going into a project like that?

Walter Haydock

The biggest challenge that I see across StackAware's engagements is unrealistic timelines. Organizations will establish either internal requirements, or they'll be bound contractually to achieve a certain standard or a certain certification, and they will develop what are, in my opinion, preposterous timelines for achieving that, timelines that really don't take into account the internal dynamics, the requirements for coordinating with external parties, the needs of their business teams, and the shifting priorities that they're likely to face. So I would say the biggest issue is timing. If you're going to undertake a project, have realistic timelines and move with a purpose, because otherwise you're just going to be stuck talking about doing something for years.

Andreas Welsch

I think that's so critical. The two things that come to mind are: every task takes longer than initially expected, and every task will take as much time as you give it. But it certainly sounds like there's a lot of pressure when organizations, when leaders, realize: we need to do something about risk, so let's hurry up and get this out there. To your point, there are still people in the organization, and especially in larger organizations, I know change processes can take a good amount of time. It's a matter of getting people on board and raising awareness; if there are more than two classifications of data, hammering that into people's heads and so on takes time. Now, Walter, we're getting close to the end of the show, and I was wondering if you can summarize the three key takeaways for our audience today.

Walter Haydock

The key takeaways from today would be: one, AI is coming and there's no way around it; every company's gonna be an AI company in five years. Number two: if that is the case, which it is, then some level of AI governance and risk management is going to be table stakes. And three: a key piece of that risk management is understanding the laws, regulations, and standards that are applicable to you and your use case.

Andreas Welsch

Wonderful, thank you so much. Maybe on the last one: is it just the AI leader, or the person wearing the AI hat, who needs to know about those regulations, or does anybody else need to pay attention?

Walter Haydock

My opinion is that every AI system needs to have a cross-functional business owner who is accountable for what that system does, because that is the type of person who understands all the relevant trade-offs and can make the right decisions. AI governance leaders can help in the determination of risks and in making recommendations for how to address them, but ultimately, it needs to be a business leader who is making the final decision.

Andreas Welsch

Thank you so much. Walter, that brings us to the end of our episode today. Thank you so much for spending the time with us, for sharing your expertise and experience with us. I certainly learned a lot. Like I said, I hope you in the audience did so as well.

Walter Haydock

Thank you very much, Andreas.