What’s the BUZZ? — AI in Business
“What’s the BUZZ?” is a live format where leaders in the field of artificial intelligence, generative AI, agentic AI, and automation share their insights and experiences on how they have successfully turned technology hype into business outcomes.
Each episode features a different guest who shares their journey in implementing AI and automation in business. From overcoming challenges to seeing real results, our guests provide valuable insights and practical advice for those looking to leverage the power of AI, generative AI, agentic AI, and process automation.
Since 2021, AI leaders have shared their perspectives on AI strategy, leadership, culture, product mindset, collaboration, ethics, sustainability, technology, privacy, and security.
Whether you're just starting out or looking to take your efforts to the next level, “What’s the BUZZ?” is the perfect resource for staying up-to-date on the latest trends and best practices in the world of AI and automation in business.
**********
“What’s the BUZZ?” is hosted and produced by Andreas Welsch, top 10 AI advisor, thought leader, speaker, and author of the “AI Leadership Handbook”. He is the Founder & Chief AI Strategist at Intelligence Briefing, a boutique AI advisory firm.
Teaching AI Agents Ethical Behavior (Rebecca Bultsma)
Can you trust an AI agent to act in line with your values — and who’s responsible when it doesn’t?
In this episode, Andreas Welsch talks with AI ethics consultant Rebecca Bultsma about the pitfalls of rushing AI agents into business workflows and practical steps leaders should take before handing autonomy to software. Rebecca draws on her early ChatGPT experiments and academic work in data & AI ethics to explain why generative AI raises fresh ethical risks and how organizations can reduce harm.
What you’ll learn:
- Why generative AI and agents amplify old AI ethics problems (bias, hidden assumptions, and Western-centric worldviews).
- Why you should build internal understanding first: experiment with low-stakes, traceable use cases before deploying public agents.
- The importance of audit trails, explainability, and oversight to trace decisions and assign accountability when things go wrong.
- Practical red flags: agents that transact autonomously, weak logging, and complacency about vendor claims.
- A legal reality check: new laws (like California’s chatbot rules) are emerging and could increase liability for organizations that deploy chatbots or agents prematurely.
The top takeaways:
- Learn by experimenting personally and internally in your organization to discover where agents fail.
- Start small with low-stakes, narrowly scoped tasks you can monitor and audit.
- Don’t rush; rather, observe others' failures, train your people, and build governance before going public.
If you’re a leader evaluating agents or responsible for AI governance, this episode gives clear, actionable advice for keeping your organization out of the headlines for the wrong reasons. Tune in to hear the whole conversation and learn how to turn AI hype into safer business outcomes.
***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.
Level up your AI Leadership game with the AI Leadership Handbook:
https://www.aileadershiphandbook.com
More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter
Andreas Welsch: Welcome to What's the BUZZ?, where leaders share how they have turned hype into outcomes. Today we'll talk about how to teach your AI agents ethical behavior, and who better to talk about it than someone who's actively working on AI ethics: Rebecca Bultsma. Hey, Rebecca, thank you so much for joining.
Rebecca Bultsma: It's my pleasure. Thank you for having me.
Andreas Welsch: Wonderful. Why don't you tell our audience a little bit about yourself, who you are and what you do?
Rebecca Bultsma: Sure. At the ChatGPT moment, I was one of the first people to start experimenting with ChatGPT. I started using it all the time, pushing the limits, using every new AI tool that came out. After a few months, I started having questions about who was behind the curtain, who was making decisions, and where it was drawing information from. And I couldn't find many people who could explain it to me, outside of computer science people whom I didn't necessarily understand. So I started trying to help other people make sense of it, while also enrolling in a Master's program at the University of Edinburgh in Scotland, one of the only ones in the world at the time that specifically focuses on data and artificial intelligence ethics. Since then, I've been fully immersed in AI ethics, and I'm still using and experimenting with all the tools. I think they're amazing. I think they're cool. I think it's an amazing time to be alive. But now I also have a very nuanced view of all the ways this can go wrong, and all of the risks underneath some of that shiny, bright surface.
Andreas Welsch: Awesome, I love that perspective: "Hey, I tried this out. I was curious. I wanted to learn more." I think it just shows how broad this space of AI is, and how curiosity helps us learn more about these things. And you're almost like Alice going down the rabbit hole, learning more and more, in a positive sense.
Rebecca Bultsma: Absolutely. With a little Mad Hatter on the side, for sure.
Andreas Welsch: Exactly. Alright, wonderful. Hey folks, if you're just joining the stream, drop a comment in the chat where you're joining us from. I'm always curious to see how global our audience is. And if you want to learn how you can turn technology hype into business outcomes, take a look at my book, the AI Leadership Handbook, on Amazon and anywhere you get your books and audiobooks. Now, Rebecca, I know we're both rushing out of meetings and into the next meeting, so our time today is a little shorter than we had initially planned. That's why I want to jump straight into the topic. For a number of years, we've been talking about this concept of ethical AI, trustworthy AI, responsible AI, and it's evolved over the last seven, eight, nine years. I'm wondering: what are you seeing? What has changed from your perspective when we talk about all of these different concepts?
Rebecca Bultsma: We've been talking about responsible and ethical AI for a long time, because AI has been embedded in a lot of our workflows and our daily life in many different ways. Take the algorithms powering social media, for example. There were questions around teenagers using it: a teenager looking at maybe a post about an eating disorder, and then the AI powering the algorithm pushing thousands of those notifications at them. So we've been talking about AI ethics for a long time. What's changed, obviously, is the introduction of generative AI. Because it can act like a human and do really human kinds of things, it raises a ton of new ethical issues to the front of mind for most people, issues we haven't had to think about before, or not with this level of urgency. It's always been there in the background. I could give you a hundred case studies of how AI ethics, or the lack thereof, was super problematic before generative AI, but now it's an even bigger deal, just because the capabilities have expanded so quickly and there are so many more risks.
Andreas Welsch: So now we take these models, with their encoded worldviews, probably largely Western or US-based worldviews, and we put them into our tech stack. We put them into the most critical pieces of our infrastructure, whether that's dealing with customers, or with employees and leaders getting coaching for employee conversations, and whatnot. And we're even adding one thing on top, saying: hey, you do this autonomously or semi-autonomously, now with agency. It feels like we haven't even really started at the foundational level, and yet many organizations are looking to push this. I'm an adjunct professor; I work with undergrad students, and one of the modules I teach is around ethics. I feel it's such a broad and big topic, but it's also very hard to get your arms around. It's hard enough to teach humans to behave ethically; now we want to do this with agents, or we need to, if they act on your behalf, if they make decisions, maybe optimizing more for a business goal than for the collective good. What are the new challenges that you're seeing there, and things that leaders need to be aware of?
Rebecca Bultsma: There are honestly so many. I highly discourage organizations from embedding things like agents before their actual people have a really solid understanding of AI and generative AI in general: its capabilities, its limitations, the connected risks, because those are just amplified once you bring in something like an agent. And you're totally right, there is this major risk of bias. We call it WEIRD bias; it's an acronym for Western, Educated, Industrialized, Rich, and Democratic societies, which are way overrepresented. So that's its own kind of problem. But there's this risk already of relying on AI systems, because, as you mentioned, they have an ethics system encoded into them that reflects the builders who built them, and flawed data that they were trained on: the internet. And so the risk of us using it without awareness is huge, without having enough awareness to oversee an agent and make sure it's aligned with our company values. There could be major reputational damage if it acts in a way that's contrary to your values. There are shopping agents now; you can buy right from within ChatGPT, and a bunch of the major financial institutions in the world have come together and come up with a way that agents can pay other agents. So if I'm looking at a great pair of shoes, I can tell my agent that as soon as they get below $300, it can just go buy them using my account. There are obviously risks connected to that, that we don't fully understand. ChatGPT has released browsers and agents, but there are malicious actors out there who know very simply how to hack these or redirect them to incorrect websites, and you end up sharing personal information with the wrong places. There are just so many unknowns with those kinds of external agents. Within something like Microsoft Copilot, in an enclosed ecosystem that is more secure, there are some really amazing things you can do with agents. And that's the problem: the word agent is murky, right? Like travel agent; it's a weird word, and it means lots of things in lots of different contexts. But you just don't want an AI system making decisions, ever, in my opinion, because it can't explain how it arrived at those decisions in any way that's explainable or defensible. And then, when it goes wrong, who's accountable? It's not the agent. You can't blame the AI. It will be you. It will be your company. If it's biased, if it spends $10,000 on shoes instead of buying one pair, who do you blame? You can only blame yourself, which is why you really need to understand it and have really good control before you do this.
Andreas Welsch: I think it's a really interesting time that we're in, for so many reasons. First of all, these things are now possible. Some of them are still in their infancy, but you can see where this is going: to your point, shopping, financial transactions, automation built into your browser. And like I said, many of these things are pretty incredible. But then, thinking about what that agent optimizes for: does it optimize for the provider's goals, which could be revenue maximization, or does it optimize for the customer's goals, which would probably be paying less, or getting better service, or faster shipping at no cost, or what have you? And what is ethical in that sense? Is it the win-win? Is it maximizing the greater good for everyone involved? Are you looking at longer-term relationships, customer lifetime value: maybe I give a little more here, but then you buy more next time? And how do you even know how these agents decide? I think that's the big question for me now. From your work with leaders and organizations, advising them on ethics, how can you ensure that agents don't just optimize for reaching a goal, but also consider other aspects that a human would consider, that a regular business person would consider, thinking two or three steps ahead?
Rebecca Bultsma: Honestly, it's tricky, because as of right now, it's hard to know what's going on in the background of how an AI is working and how it's making decisions. One of the most interesting stories I've ever heard is from a few years ago, when they were trying to teach image recognition to an AI system. They showed it thousands of pictures of wolves and dogs to teach it to learn the difference between a wolf and a dog. And then, when they tested it later, the only reason it knew what was a wolf and what was a dog is that the wolf had snow in the picture. So even if you showed it a picture of a Frenchie and there was snow in the picture, it decided it was a wolf. Which shows that we don't know how it's thinking in the background, and it may be very different from how we think it's "thinking", in air quotes, when it makes a decision. And so I think it's super important that we take that into account. Obviously it will get better, but a lot of it is going to depend on the very specific instructions and tasks that you provide: keeping things very narrow to start, and very traceable, so that you can understand where things go wrong. And start with really low-stakes things. With any AI use, start with things that are low stakes, because low stakes hopefully mean very low consequences if it goes horribly wrong. The agents will get better, and we'll have some of these questions answered, but we do need audit trails and mechanisms, because there isn't necessarily any government oversight over this right now. You would need some internal company oversight for sure.
Andreas Welsch: Yeah. In a way, you might think that once bad things happen, once unintended consequences happen, then regulation or broader changes are enforced, which is always unfortunate in that sense. But this part about auditability and traceability, peeking into the black box, checking the logs for what is happening, maybe not so much on an individual user level, but for those who are looking at bringing agents into the business, those are features that are so important. Because, like I said, at the end of the day, if something happens, you will not just want to, you will eventually need to trace what has actually happened and why.
Rebecca Bultsma: Absolutely. Absolutely.
Andreas Welsch: So what do you then recommend leaders do at this critical point in time, when all the vendors are shouting, "We've got agents, build on our platform, come here," and you feel the pressure that you should be doing something because your friends on the golf course are doing something, or your competitors are doing something? What do you need to do then?
Rebecca Bultsma: Take a breath. Honestly, it's not a race to the bottom right now. And there's new legislation coming in that may impact you. I haven't done a deep dive into it yet, but some of the new laws they introduced in California, for example, specifically target chatbots. They're designed to protect teenagers and kids, but they also potentially impact anyone who has any sort of chatbot on their website, and they introduce new liability: if, in any way, shape, or form, it offers any sort of advice or anything emotional, anything that crosses into that territory, suddenly it's governed by different laws. And there are other states introducing similar legislation. You might just want to focus on getting your whole staff super, super trained on generative AI and working with things internally, maybe with Microsoft Copilot, working on internal agents, before you start thinking about introducing things externally to the public, until we have a better understanding of what the legal landscape and the liability landscape look like. And wait for it to go horribly wrong for somebody else first, that's my advice, because it will. Let's just see what happens. So my advice is to take a breath.
Andreas Welsch: Sure, I think that's actually really good advice. It's not as urgent or as imminent as it might seem when you read the news, and there's still enough time that you need to spend anyway on learning and experimenting and seeing what works and what doesn't. The point about starting with internal use cases, in internal scenarios first, and cutting your teeth there before you put something out in the open is a good one, too. I think it was in the July, August timeframe that McDonald's was in the news. They had connected an agent to the HR backend system so applicants would get information more quickly. And researchers actually found that they were able to hack the agent, steal the password, and get access to, I think it was six million records of previous job applicants, and all those kinds of things. So, speaking about things that can go wrong: there are certainly more of these examples coming, and it's usually in these moments, when the risk becomes real, that we say, okay, let's take a breath and see what we should really be doing. So yes, I agree. Now, I said this one was going to be a short episode, and I know we're getting close to the end of the show. Rebecca, I was wondering if you could summarize the three key takeaways from today for our audience.
Rebecca Bultsma: I think my three key takeaways would be: number one, invest some time in understanding as much as you can about agents and how they work. Experiment with them on your own before introducing them into your organization or mandating them in your company. Experiment with accounts in your personal email and your personal name to figure out how they work and where they fail. Number two, know that these are not foolproof. There are a lot of problems with them; you just brought up a good one with McDonald's. On a large scale and on a small scale, there are issues. And number three, just take a breath. There's no rush. We're all figuring this out. Things are going to go wrong, but maybe just don't let it be you who is the trailblazer in things going wrong. Just sit back, observe, and take a breath.
Andreas Welsch: Wonderful. Thank you so much, Rebecca. It's been a pleasure having you on the show and hearing your perspective on how we can teach agents ethical behavior. Folks, if you want to connect with Rebecca, you'll find her on LinkedIn. And again, thank you so much for spending time with us today.
Rebecca Bultsma: Thanks for having me.