
What’s the BUZZ? — AI in Business
“What’s the BUZZ?” is a live format where leaders in the field of artificial intelligence, generative AI, agentic AI, and automation share their insights and experiences on how they have successfully turned technology hype into business outcomes.
Each episode features a different guest who shares their journey in implementing AI and automation in business. From overcoming challenges to seeing real results, our guests provide valuable insights and practical advice for those looking to leverage the power of AI, generative AI, agentic AI, and process automation.
Since 2021, AI leaders have shared their perspectives on AI strategy, leadership, culture, product mindset, collaboration, ethics, sustainability, technology, privacy, and security.
Whether you're just starting out or looking to take your efforts to the next level, “What’s the BUZZ?” is the perfect resource for staying up-to-date on the latest trends and best practices in the world of AI and automation in business.
**********
“What’s the BUZZ?” is hosted and produced by Andreas Welsch, top 10 AI advisor, thought leader, speaker, and author of the “AI Leadership Handbook”. He is the Founder & Chief AI Strategist at Intelligence Briefing, a boutique AI advisory firm.
What’s the BUZZ? — AI in Business
How to Succeed in Your First AI Project as a New AI Leader (Kathleen Walch)
What if you could navigate the complex landscape of AI projects with a proven methodology?
In this episode, host Andreas Welsch welcomes Kathleen Walch, Director of AI Engagement at the Project Management Institute, as they explore CPMAI, an essential framework for leading successful AI initiatives.
From identifying real business problems to understanding crucial data requirements, Kathleen shares her expertise on the importance of a data-centric approach:
- What are the most common challenges new AI leaders run into?
- What are the key steps to successful AI projects?
- What’s the #1 surprising thing about AI projects and leadership?
- What’s next with Agentic AI and how will it change the paradigm of what AI leaders need to do?
With actionable insights and compelling examples, this episode provides an invaluable resource for any business leader seeking to leverage the power of AI.
Don't miss out on learning how to transform AI hype into meaningful outcomes—tune in now!
***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.
Level up your AI Leadership game with the AI Leadership Handbook:
https://www.aileadershiphandbook.com
More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter
Andreas Welsch: Today, we'll talk about the six steps to leading successful AI projects and programs, and who better to talk about it than someone who's built an entire methodology around that: Kathleen Walch. Hey, Kathleen. Thank you so much for joining.
Kathleen Walch: Hi. Thanks so much for having me.
Andreas Welsch: Wonderful. Why don't you tell our audience a little bit about yourself, who you are and what you do.
Kathleen Walch: Sure. So as you said, I'm Kathleen Walch. I am the Director of AI Engagement and Community at the Project Management Institute, PMI. I joined about 10 months ago at this point; I came from the Cognilytica acquisition. I had co-founded Cognilytica back in 2017. We started out as an AI-focused research, advisory, and education firm covering the AI markets, and then quickly realized that our clients wanted help with running and managing AI projects. And there was no formal methodology for doing so. Because data is the heart of AI, AI projects are really data projects, so you need a data-centric methodology. So we came up with CPMAI, the Cognitive Project Management for AI methodology. We have trained thousands of people at this point. We also have a podcast called the AI Today Podcast, and we are coming up on season nine, which I can't believe, I know. It's an official PMI podcast now, so season nine will be launching on July 16. And then we also had a lot of content that we brought over as well, a lot of articles and some research that we had done.
Andreas Welsch: So I'm super excited to have you on. And obviously, going back to 2017, that was before AI was cool, or when it was cool the last time, or however you wanna look at it, right?
Kathleen Walch: I always say that I've been in the AI space since before Gen AI made it cool.
Andreas Welsch: Yes. Like I said, excited to have you on. I'm going through the CPMAI certification myself at the moment, and obviously we're also collaborating with PMI a little bit. Now, Kathleen, should we play a little game to kick things off?
Kathleen Walch: Sure.
Andreas Welsch: All right, let's do this. This one is obviously called What's the BUZZ? That's the whole idea of the buzzer and the show. When I hit the buzzer, you'll see the wheel spinning and a word appear, and you have 60 seconds for your answer. Are you ready for What's the BUZZ?
Kathleen Walch: I'm ready.
Andreas Welsch: Let's do this. If AI were a book, what would it be? 60 seconds on the clock.
Kathleen Walch: That is a really great one. I would say Jack of All Trades and Master of None, especially when it comes to Gen AI.
Andreas Welsch: Why that?
Kathleen Walch: Because it's really great at giving you basic information, but only about an inch deep. To go very deep into things, you need to provide better prompts and really understand how to work with AI systems. If you stay at that surface level, then I think it provides you a very one-inch-deep response.
Andreas Welsch: I love that. Indeed, I'm sure many of you have found this as well in your journey of using AI over the last two and a half years, or maybe even longer, since before it was cool. It can do a lot of different things, but what's the one thing you really want it to do, where it delivers value and where you can go deep? Awesome answer. Now, that brings us to our questions. We've obviously seen, and we're seeing, more leaders take on AI projects and programs, which I think is fantastic. More businesses are thinking about it and more business leaders are thinking about it. A lot of times, I see they look to data leaders, AI leaders, technology leaders; maybe there is even an AI department or center of excellence that already exists. But then it's the "Hey, you've always worked on this technology" sort of thing: can you now figure out what we do with AI, what our strategy is, and how to make this real? But I also hear that for many leaders in this space, it's the first time in their career that they're leading initiatives like that. It's not just technology, obviously, as we know. But I'm curious, what are some of the common challenges that you see leaders run into, and why do they take CPMAI?
Kathleen Walch: Yeah, that's such a great question, and you're right. Despite AI being around since the term was officially coined in 1956, I feel that for many leaders, teams, and individuals, this is maybe their first time running and managing an AI project, which I always like to say is different than using AI tools. A few years ago, we actually identified common reasons why AI projects fail and what you can do to avoid them, because I always think you can learn so much from other people, right? And from some of the mistakes that they've made, so that you don't make them yourself. So one of the common reasons that AI projects fail is that you treat them like a software application development project. You try to use software development methodologies. A lot of people say that they're using Agile, which you can do, in a way. But as I mentioned earlier, data is the heart of AI. You need data-centric methodologies because these are fundamentally data projects. With that, if data is so important, we get issues around data quality and data quantity. Is our data good, is it good enough, and do we have the amount of data we need? We always say you don't need Google-sized amounts of data, especially depending on what it is that you're trying to do, but you do need data, and you do need to make sure that it's representative of whatever it is that you're trying to do, and that you also have access to that data. Far too often teams move forward and they're like, yeah, our organization has this data, let's just move forward. And then they get to needing that data and it takes months to access it. That's gonna slow down your project, right?
Andreas Welsch: For sure. Yes.
Kathleen Walch: And then we've also seen issues around return on investment. The ROI: you have to make sure that's justified, and you have to make sure that you're measuring things effectively, because your projects are not going to be free. They take money, time, and resources. I always like to say make sure you measure, because when you're not doing that, even if the AI system is working as expected, if it's costing more than it's saving, is that really successful? Probably not. So we need to make sure we're measuring that. We also have issues around proof of concept versus pilot. A lot of times people will say, I'm gonna run a little proof of concept and see if it works. But we say your proof of concept is really proving nothing, because you're doing it in a super controlled environment, you're using the best quality data that you have, and usually the person who has built the solution is the one testing it. When you put it out in a pilot, it's the real world. Now things start to get interesting, right? People aren't using it as expected, data can be messier, maybe data sources are different than you expected. So we say always move to a pilot, see how it's actually being used, and then continue to iterate. And then there's the most common reason that we've seen AI projects fail. I talked about how AI was officially coined in 1956, and you can go, that's almost 70 years ago, right? At this point you're like, what's going on? Why haven't I heard about it more? We've gone through two previous AI winters, and that's a period of decline in investment, popularity, and funding. People moved towards other methods to get to their solution. And the big overarching reason for this is overpromising and under-delivering on what AI can do, and I'm still seeing that today. So you have to understand what AI is good at, what it's not good at, how you can use it as a tool, and make sure that you're actually setting realistic expectations.
Andreas Welsch: I like that part, especially the last one about overpromising and under-delivering. I think just recently we've seen that even the biggest tech players in the industry and in the world are not immune to that. Take Apple as an example, with the announcements of Apple Intelligence last year. I think they felt they were very close to making progress, to shipping something, and then they found it's actually really hard to do this reliably, without hallucinations, without some of the unwanted side effects, biases, and what have you. So there's this tension as well: you want to ship something, you need to ship something to be competitive. But at the same time, if millions and millions of people use your product, you better make sure that it's right and accurate. Otherwise you get the customer feedback on the other side. So not an easy situation to be in, whether it's with machine learning, predictive analytics, or now with more generative AI or maybe even agentic AI things. The other point that deeply resonates with me is data, right? I remember working with large Fortune 500 customers, and we said, hey, here's an idea for a prototype, or for a pilot even. And we agreed on the scope: yes, we want to do this. For example, in finance, we want to estimate our liquidity forecast. And then we needed data. Guess what? All of a sudden legal gets involved, the data needs to be extracted, and it comes to a grinding halt until you finally have it three months later, six months later. If you think you can have a quick win, six months to get the data is not so quick. So to your point, those are real challenges that you still run into, especially if you're doing this for the first time. You mentioned earlier in your introduction that you've trained hundreds and thousands of leaders over the years to learn those fundamentals of running successful projects. What would you say are the key steps to success if somebody's new in that role or wants to move into that AI leadership role?
Kathleen Walch: Yeah, so that's also a great question, because far too often, right, sometimes it's that FOMO, that fear of missing out: you feel like your competitors are doing it, and so people rush to get things out. And when you rush anything, especially when it comes to AI, you're gonna quickly realize that the move-fast-and-break-things mentality does not work with AI projects. That's why we came up with the CPMAI methodology, because it's a six-phase iterative approach. We always start with business understanding, which sounds simple enough: you're supposed to be answering the question, what problem are we trying to solve? And again, like I said, it sounds simple enough, but a lot of times people just move forward with it. We say, make sure you're solving an actual business problem. Then, once you know the problem you're trying to solve, is AI the right solution? And if you don't know if AI is the right solution, we say, okay, let's break it down one level deeper. Because a few years ago, back in 2019, a lot of people were really caught up in the term and the hype. They're like, is this an AI project? Is this not an AI project? And it would paralyze them. We said, dig down one level deeper and ask, what are we trying to do? That's where we came up with the seven patterns of AI. We present it as a wheel because no one pattern is more important than another. There's hyper-personalization, treating each individual as an individual. There's the recognition pattern, which is making sense of unstructured data; we think about facial recognition, right? You can't really program your way to understanding individual faces, so this helps make sense of that unstructured data. Then we have our conversational pattern, where humans and machines talk to each other in the language of humans. We think about AI-enabled chatbots, but LLMs also fall in that pattern. Then we have our patterns and anomalies pattern: machines are really good at looking at large amounts of data and being able to make sense of that data quickly. Then we have predictive analytics, where you take past or current data to help humans make better decisions, so we're not removing the human from the loop. Then we have our autonomous pattern. The goal of this pattern is to really remove the human from the loop, and whenever you're trying to remove the human from the loop, it's going to be one of the hardest patterns of AI, right? I think about autonomous vehicles, which is still my dream, even though we don't have commercially available, fully autonomous vehicles. But we can also think about autonomous business processes: how can you have an AI tool navigate autonomously through your business? And last, we have goal-driven systems. This is really around reinforcement learning and finding the most optimal solution to a problem, so we think about game playing and scenario simulation here. And a really fun example that I like to bring up is that the city of Pittsburgh used reinforcement learning and the goal-driven systems pattern to help optimize their traffic light patterns. They wanted to reduce idle time and make sure traffic was flowing, to help with emissions as well. So they used it, and I thought that was a really fun example.
Andreas Welsch: Wonderful. That's indeed a great example, right?
Kathleen Walch: Yeah.
Andreas Welsch: Something that everyone benefits from in the end, too.
Kathleen Walch: Exactly. So that was just phase one, right? Now we've got five more phases. We go through our patterns and ask, okay, does it fall into one or more of those patterns? If yes, let's move forward with our AI project. Then phase two is data understanding. Now we need to understand what data we need, and you brought up an example: three, six months later, maybe you're finally getting your data. It's really not uncommon, especially for large organizations. Maybe you have data in different systems; some may be internal, some external. Because this is iterative and we can do this over many sprint cycles, we say start with the smallest amount of data that you need to get the result for this first iteration. So control the scope, right? Maybe start with just one small feature. If you're building a chatbot, for example, you don't need to start off with the chatbot being able to answer 10,000 questions. It just needs to answer the most frequently asked question. When I talk about ROI, if you're measuring that, whether it's reducing call center volume or increasing customer satisfaction, just start with that one. Then you get it out there, you see how it's working, and then you can add additional questions and features as you go on. Then we have our data preparation. Now we have the data; now we need to prepare it, clean it, de-dupe it, normalize the data, anonymize the data, whatever it is that we need. Then we get to the fun stuff: we're building our models. So we have model development, especially if we're building from scratch. A lot of people want to start at phase four, but it's really important that we go through phases one through three first. Then we need to evaluate the model: is it performing as expected? Even if we're using AI tools, right, we need to be making sure that they're performing as expected. And then finally we have model operationalization. This is using the model in the real world, wherever you've decided the model's gonna be: in the cloud, on premise, on an edge device like a phone, or in your car, wherever it is. So those are the six phases of CPMAI. And they're iterative, so if you start in phase one, business understanding, then you go to data understanding, you think you have the data you need, and now you need to clean it. And then you're like, oh, I'm in the cleaning phase and I realize I don't have enough. Go back a phase. That's okay. And like I said, we always say think big, start small, and iterate often, because you don't need to do everything in one iteration.
Andreas Welsch: I think that's such an important point, especially the fact that AI projects are iterative a lot of times, right? We're so conditioned to running a project from start to finish: it's got a start date, an end date, and what are the deliverables in between? Let's look for quick wins and how we can build this and ship this very quickly. And then you throw it over the fence, and somebody else needs to maintain it, and they haven't been involved, and they say, what am I supposed to do here? How does it even work? Or it's missing some critical things. So the iterative nature, I think, is absolutely critical. I have a new course coming out on LinkedIn Learning, probably in the next couple of days, on what I hear about the risks in AI projects. And that was one of the things that I picked up on as well: the fact that managing that expectation with your leadership is critically important. It's not a straight shot from start to finish; it's many iterations and many loops. And like I said, going back one or two steps to fix things and improve them is absolutely normal, but shaping that awareness, I think, needs to happen as well. And not everybody has that awareness yet.
Kathleen Walch: Exactly. And I think that when you realize they're data projects and you follow data-centric methodologies, then it helps a lot.
Andreas Welsch: Yeah. You said you've been working on this since 2017, since before AI was cool, or when it was cool the last time. What's the one thing that still surprises you about these AI projects, and about when leadership is involved or gets involved, after all these years?
Kathleen Walch: I think, going back to those common reasons why AI projects fail: despite them being known and out there, we still continue to fall into those same reasons why AI projects fail. And I think one of the most important things is to always start with business understanding, what problem are we trying to solve, and never to downplay that. Each phase of CPMAI is important, but phase one is arguably the most important, because if you're not solving a real business problem, you shouldn't move forward with the idea. It doesn't matter how much you like it or how good you think it's gonna be; if it's not solving a real problem, you shouldn't move forward. And far too often, organizations and leaders feel pressure to just get something out. They want to have an AI tool in the market or have an AI solution, and so they just go ahead and start creating something, and then they get 6, 12, 18 months into the project and have nothing to show for it except a large pile of debt.
Andreas Welsch: So how do you see leaders addressing that, then? Certainly one part is through the methodology, but it seems like a lot of these issues are soft issues, if you will, or interpersonal things. There's culture, there's politics. There may be individual aspirations, career aspirations. There's the challenge of losing face, or not wanting to lose face. Greenwashing, right? Dashboards going from red to yellow to bright green the higher you go in the hierarchy. What do you see there? How can leaders shape that awareness that it's not as easy as it seems in a vendor demo, for example?
Kathleen Walch: Yeah, I think it's managing expectations and separating the hype from the reality, and then also really thinking big but starting small. So say, what is it that we wanna do? A really great example that I always like to use, because you can still show wins: they might not be as grand as you want, but they're successful, you're gonna get buy-in, you're gonna have these small wins, and you can continue to work on these projects. The US Postal Service said, what's the number one question that we get asked? Track a package, right? They were trying to reduce call center volume, especially around the holidays and seasonal peaks, when they couldn't just hire up a ton of people but needed to control the call center volume. So they said, let's just answer that one question really well. What's our goal? We wanna reduce call center volume; then we can measure that, right? It's really easy to measure. That's the return on investment, right? And they showed positive return on investment. So they were able to report back and say, hey, look at this win, look at what we did. And then they can say, okay, now let's go to the number two or number three question, or get the 10 most frequently asked questions, and then in the next iteration add an additional 10, and then an additional 10. Rather than saying we need a chatbot that's gonna be able to answer 10,000 questions in five different languages, and let's just go and see what happens, right? We have to control the scope. Especially at the Project Management Institute, right, all the project managers out there who are working on AI projects know you have to set expectations, especially with stakeholders, and you have to control the scope, because scope creep is real. And so it helps when you have an understanding of how to run and manage AI projects, and of what should and shouldn't be an AI project. Because, as much as I love AI, and I'm a big proponent of it, it's not the right solution for every single problem. Sometimes you can have straight automation; sometimes you can code your way to get the results that you need. You don't always have to use AI.
Andreas Welsch: Love it. And by the way, in a lot of the conversations that I'm having, I'm hearing the same thing: the "Hey, we need a chatbot to do something." It's like, why? What is the problem that we're actually trying to solve? First of all, let's understand that. Is, to your point, AI the right solution? And then, is something like conversational AI, or the conversational pattern, or even a chatbot the right way to do this? Are people really going to chat with their PDF, for example, or are there other ways in which we can get similar or even better results? A great point. We've obviously accelerated things in the industry, from predictive analytics to machine learning to generative AI. This year it's all about agentic AI; 2025 is the year of agentic AI, if you want to believe some. We were talking about this just before we went live. We're already halfway into the year. What is up with agentic AI? How will that change the paradigm of what AI leaders need to do? What do you see, and is it just smoke and mirrors at this point?
Kathleen Walch: Agentic AI. We were talking earlier about how it's kind of the year of agentic AI, 2025 is the year of agentic AI, and what does that mean? We're halfway through the year, and are people really using agents? Is it part of their workflow? And how has it really been changing the game? I have not seen it as widely adopted as the hype, right? This is also where you have to avoid the hype and be grounded in reality and expectations, and apply critical thinking skills. I have not seen it as widely adopted as it was first portrayed. But there's also opportunity around this as well. I know at PMI we have a tool called Infiniti, which is free for all of our members, and I encourage anybody who's watching this who's a PMI member to go check out Infiniti. Later this year we will be introducing agents into Infiniti. But sometimes agents are in the background, so they're running and you're not even aware of what's going on; for the user, it's just a normal experience. What's nice about different AI tools, and when it comes to agentic AI as well, is that you don't really need to be super technical in order to get benefits and value from this. So at PMI, and other organizations have this as well, for folks that are starting to move forward with agentic AI, we have the opportunity to help define what that means. Part of the reason we came up with the seven patterns of AI originally is because there's no commonly accepted definition of AI. And when there's no commonly accepted definition of AI, it's: am I doing AI? Am I not doing AI? I don't know, there's no definition. So we said, let's break it down one level deeper, and that's when the seven patterns came about. I presented them to the OECD, the Organization for Economic Cooperation and Development, back in February of 2020, literally before the whole world shut down.
Andreas Welsch: Yeah.
Kathleen Walch: And they've adopted the seven patterns as their definition of AI, which is now in the EU AI Act. So at PMI, we have that opportunity again to help define what AI is, how project professionals in particular should be looking at this, and how it can be used to help enhance what you're doing today.
Andreas Welsch: Wonderful. I wonder there as well. At the end of last year, we saw Slack come out with a report called the Workforce Index, where survey participants shared: hey, I'm not telling my manager that I use AI, because they might think I'm lazy or incompetent. Now, while many of these vendor-sponsored studies are nice and fair, you might also easily dismiss them. But in May, Duke University came out with a study among 4,500 professionals, and they came to the same conclusion: individuals, professionals, are reluctant to share that, hey, I use AI. So I wonder if there are some reasons below the surface why we're not seeing, why we're not hearing, so much about AI and agents. First of all, because individuals don't want to share that they're using it, being afraid that they might be seen as lazy or incompetent. Where I would actually say it's quite the opposite: if you're going to use AI and if you find ways to incorporate it in your work, you are at the cutting edge of things, and you should be there, right? You should think about how you can improve what you do and what you deliver. Now, the other part that I wonder about is whether these technologies are still relatively nascent, not basic in the sense of what they do, but in how the tools work, right? They're at a level where you maybe need to be a little more of a developer, or, even for low-code, no-code things, to put together your agentic AI workflow you need to have some technology affinity or awareness. So I'm assuming that as that gets easier and baked into other solutions, the adoption will flourish as well.
Kathleen Walch: Yeah. It's interesting, too, because I talk about this idea, especially when I deliver keynotes, of the leaper mindset. And what is a leaper? We have identified four different types of people, broadly. One is observers, and those are people that sit on the sidelines and watch other people use AI, but they don't themselves. The second is taskers, and this is where a large majority of people fall today, where they'll use AI to help with certain parts of their workflow, but not their overall workflow. So maybe I'll use it to help me brainstorm ideas for an article, or I'll use it to help me generate images for a slide deck, but I'm still mostly writing the article, or I'm still putting together the slides, and I'm just using it to maybe help me summarize all the bullet points that I have so that there's less text, or to create images for me. Then we have early adopters, and they're gonna be embracing any new technology; AI is just another new technology to them. They love quantum and blockchain and AR and VR and all the different technologies that are out there. Then we have this idea of a leaper. And what is a leaper? It's someone who takes AI and uses it in their entire workflow. So now I can have maybe the large language model of my choice go in and help me brainstorm slides and a presentation, then outline what the slide deck should look like. The human always can and should be in the loop, going back and forth: yes, I like this; let's tweak some of this. Okay, now I'm gonna upload a template for a PowerPoint presentation that I want; now create the slides for me. And then you can go and edit it, and it can also create images for you and really put together that entire presentation and save you hours of work. After you've reviewed it, you can say, okay, now send it out for review to the different teams that need it, or you can send it out to whoever is gonna upload it for you for your presentation. So you have now really used AI through this entire process. That's what a leaper is. And what are the characteristics of a leaper? They need to have courage, the courage to try, and it's really about their mindset, right? Don't be afraid of this technology. I always say that with prompting, there's a really low risk of failure, because you don't need to involve those other teams. Whereas sometimes in the past, maybe you've needed to involve data teams to get the data, or IT departments to give you access to different tools or help you with programming different things. But you don't have to do that with AI, and that's what makes this leaper mindset so important. It really is about overcoming your own fears and your own hesitations and your own doubts, rather than being super technical. And I can see that happening with agentic AI as well. Also, I liked how you brought up this idea around people not wanting to share that they're using AI. One, because they don't wanna be perceived as lazy. But two, it also comes down to this: if they've used AI and it now saves them three hours a day, they don't want to either be perceived as not working, or sometimes have more work added to their plate, right?
Andreas Welsch: Yes. 20% more efficiency, productivity. Here you go, right? We know how this works.
Kathleen Walch: Yeah.
Andreas Welsch: So what I take from that response is almost: don't be a sleeper, become a leaper.
Kathleen Walch: Oh, wow. Okay. I like the rhyme.
Andreas Welsch: Yeah. Definitely lots of potential there. Now, Kathleen, we're coming up to the end of the show, and I was wondering if you can summarize the three key takeaways for our audience today.
Kathleen Walch: I always say think big, start small, and iterate often. Understand that AI projects are data projects, and make sure that you follow proven data-centric methodologies for AI success.
Andreas Welsch: Wonderful. I haven't had anybody on the show recently who's been able to articulate it that clearly and concisely, so thank you so much for summarizing it as you just did. Kathleen, thank you so much for joining us today. I really appreciate you sharing your experience and expertise with us. It was a super insightful conversation.
Kathleen Walch: Thank you so much for having me. I really enjoyed it.
Andreas Welsch: And folks, for those of you in the audience, see you next time for another episode of What's the BUZZ? Bye-bye.