What’s the BUZZ? — AI in Business

Special: The 9 Aspects of AI Leadership (Launch of AI Leadership Handbook)

Andreas Welsch Season 3 Episode 23

In this special episode, Andreas Welsch launches his new AI Leadership Handbook together with fellow AI leaders:
- Matt Lewis (Founder & CEO, LLMental)
- Brian Evergreen (CEO of The Future Solving Company)
- Maya Mikhailov (Founder & CEO, SAVVI.AI
- Paul Kurchina (Enterprise Architecture Community Leader)
- Harpreet Sahota (Developer Relations Expert)
- Steve Wilson (Chief Product Officer at Exabeam and Co-Author of the OWASP Top 10 for LLM Applications)

Key topics:
- What’s keeping Chief AI Officers up at night?
- What’s the hardest part for new AI leaders coming into an AI leadership role?
- What does a future with AI agents look like?
- How can AI leaders succeed in this phase of AI adoption where we’re just coming off the hype?
- Why do we still need to educate business leaders and stakeholders about AI?
- Why is AI here to stay this time and why should IT teams and CoEs care?
- How has RAG evolved over the last 12 months?
- How does data play into concepts like RAG and agents? And why is it important to still keep an eye on technology?
- What is happening in the cybersecurity space since the release of the OWASP Top 10 for LLM Apps?
- How are bad actors exploiting LLMs’ personalization capabilities and why do leaders need to know about LLM vulnerabilities?

Listen to the full episode to hear how you can:
- The sheer possibility of what AI can do now can seem overwhelming—even for AI experts and leaders.
- AI agents promise autonomy, automation, and optimization beyond anything currently possible, with agents negotiating with other agents to find an optimal solution.
- Data remains a critical ingredient for any successful AI project that many leaders still neglect.
- IT leaders are asking for tangible examples and return on their investment when working with embedded AI solutions.
- Agentic RAG has emerged as the latest iteration of RAG systems—however, data quality remains critical to achieve high quality outputs.
- The cybersecurity discourse has evolved from avoiding embarrasing PR to avoiding data leakage and legal consequences with bad actors looking to exploit these new LLM-based systems.

Order your copy of the AI Leadership Handbook today: https://aileadershiphandbook.com/order


Watch this episode on YouTube:
https://youtu.be/TMSfRMYooy0

Questions or suggestions? Send me a Text Message.

Support the show

***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.


Level up your AI Leadership game with the AI Leadership Handbook:
https://www.aileadershiphandbook.com

More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter

Andreas Welsch:

Today, we're celebrating a special occasion, the launch of my first book, the AI Leadership Handbook. Over the last two years, I've talked to so many leaders about how they can use AI in their business, and everybody knows it's super important, and everybody knows it's super urgent. But what I found is that only few know where to really start and to do it successfully. And that's why I've invited those that have been doing it successfully onto my show, What's the BUZZ?, and asked them to share how they have turned hype into outcome. Now watching 60 some episodes that we've recorded over the last two and a half years and piecing everything together gets a little cumbersome. So the idea for the AI Leadership Handbook was born. It's your one resource that helps you prepare you to lead AI programs in your business with a 360 degree view beyond just technology. And Because it's a little boring if you just listen to me, I've invited some of the guests back. For example, Matt who's already joined us. He's a CAIO in the life science industry and really excited Matt to have you on to share a bit more about what are leaders seeing today? What's keeping them up at night and how can CAIOs sleep well, again.

Matt Lewis:

Thanks so much for having me, Andreas. And congratulations on the launch of the book. When I first joined your program last year, we talked about the role of the Chief Artificial Intelligence Officer. It was really an outpouring of engagement from across the industry. Transformation, innovation, large enterprise and a small startup ecosystem as well in terms of folks like really interested in how this work and really be progressed forward to drive outcomes. And I've really enjoyed our conversation since and just so great to see the book out in the world and. Thank you again for inviting me to write the foreword to it. It was really the highlight of my summer. And so happy to see it out in the world and looking forward to seeing folks using it to really not just advance the discourse, but really to start making some real change out, out in their environments, whether they're in life sciences, healthcare, or the regular industries or other places of the world. Thanks again for having me on the program.

Andreas Welsch:

Awesome. And again, thank you for providing such a beautiful foreword. Folks, if you would like to not only read the book, but read Matt's foreword, too, let me put a link in the chat where you can buy the book. If you're looking for it on Amazon, just search for AI Leadership Handbook. Now, folks, if you're just joining the stream, drop a comment in the chat where you're joining us from, because I'm always curious to see how global our audience is today. We have more than 400 registrations, so I'm sure we get a good global coverage. Matt, when we're all preparing and we have a we have a total of six guests that there will be joining us, but when we're all preparing, we said it would be awesome if you guys can ask me a question as, as well. So we'll do that at the end of each section as we move from speaker to speaker. But, we spoke about a year ago, like you said, and I know you're very well connected in the industry and in the domain. What are you seeing? What's keeping CAIOs up at night one year, one year further? And especially almost two years since the introduction of ChatGPT. What's keeping them up at night?

Matt Lewis:

Sure, It's a really great question. First of all, just in terms of location and all the rest. I am based in New York. Today I'm actually up in Cambridge, Massachusetts. Attending an artificial intelligence meeting at MIT. And so happy to be spending this time with you in the audience. But just in terms of locations and the rest for those that are in this area. It's so much happening in so many places. And it's really just a great time to be in the space and to be alive in general. But in terms of what's keeping Chief AI Officers and really anyone that's deep in on AI up at night. I think that really is like the most vexing kind of challenge that we face is I've chatted with other AI experts in the space and that there are real solutions and real value to be accrued by implementing generative artificial intelligence and other AI across the ecosystem. But one of the, one of the kind of like weird paradoxes of AI adoption is that even as it allows for increased efficiency and outsized productivity within and across the workforce, it almost is like this like double edged sword that the folks that are actually leading the work and helping to progress it forward don't necessarily find that their days are any shorter or that the, that they have more time. If anything, they're, maybe they're more excited about their futures or more excited about all the things that are possible. And I've spoken to a number of people, like myself, that are on the speaker circuit, that are at conferences, that are leading businesses within their organizations, and potentially are involved in the not for profit environment, like myself, of a foundation that I'm a board member of and other activities I do beyond. My organization and I think it's challenging truly to get to bed on time because people are up until midnight or beyond finding new challenges to address and all the things that can really be done now with generative artificial intelligence that. For many years, sometimes decades went unsolved. So it is a true problem. This question of being up at night is both a figurative, but also a literal challenge as AI becomes more and more practical and pragmatic and possible. It literally means that people that are deep in on it are struggling with their mental health and struggling with their stress and their anxiety, because there's so much that can be done. How do we prioritize our time? I think though to the point of your question, I think, we're in a point now, I think, in the life cycle of generative AI where last year was really a lot of hype and a lot of people were saying wow, this is this is amazing. This is great. And I think when we spoke last year, people were asking do I really need someone like Matt? Do I need a Chief AI Officer within my enterprise? I think many organizations have passed that consideration and have recognized that, yes, if they believe that they're still going to be relevant in five to seven years as an organization, that they need someone to head up the AI kind of strategy and portfolio and product and People that are augmented by AI and typically that's a head of AI or a chief AI officer to help catalyze that consideration and even the federal government here in the U. S. has done that recently and are hiring literally hundreds of chief AI officers across all the major agencies. Every major organization now is either having a CAIO or is hiring for that position. And it's really hard to argue against it. We're, now in a position, unlike last year, where people were like that's an interesting role. It's a curiosity what do we do with it? Where now people are actually putting people in those roles. And it's more a question of how, like, how do we use that person's time? How do we value and respect the role that it is? It really extracts the most value from it so that it helps the organization here and now, but also is set up well for success in the future and it's a little bit, a little bit antithetical perhaps to some of the roles that people may have come from before they're in this position. I was the Chief Data Analytics Officer for many years before I was Chief AI Officer. And in that type of role, we were always like scurrying around, trying to find things to do, like trying to find ways to justify the role. I can't tell you how many times I went to a conference where one of the titles was like, is the CDA role, CDAO role going to be obsolete in a year, two years? I never hear that topic discussed at AI conferences because there are so many work streams. That are relevant and resonant for the CAIO role that it's like the opposite problem. It's like, how do you really scale and democratize the work of the CAIO role so that it provides value to the business here and now, but also so that it can be done at a measured pace so that people can adopt it and accelerate its consideration across the business. I think prioritizing. and aligning with the people so that it works for everyone is probably the biggest consideration.

Andreas Welsch:

See, from my own experience building and leading an AI CoE, I remember that was one of the toughest challenges. First of all, staying on top of all the things that are coming out, figuring out where is the value, what are the things that we should be prioritizing, we should be pursuing. That's just because They are fancy or we can do them, but because they actually deliver a measurable business value. And I think that's a key part of what you mentioned as well, of that AI leadership role, helping not only guide your data science, AI teams, but also work very, closely with your business stakeholders to tease out where are those opportunities to bring in AI and how do we measure it, right? That, I think that was the other thing. Certainly everybody wants happier employees and fewer clicks and you name it. But very few people are willing to put a price tag on it and pay for it, right? So how can you measure it? I think is another dimension that you mentioned too in, in that. Now, By the way, if you're in the audience and you have a question for Matt or for any of the following guests, please feel free to put it in the chat. We'll take a look in a couple of minutes and pick one or two of those. Now, everybody starts somewhere in a leadership role on AI. I think it's getting easier to get started and to get your hands on it. But what's your recommendation? What's your advice to leaders coming new into this role? What's the hardest part that they should prepare for?

Matt Lewis:

Yeah, it is. It's another really important question. And I think again, unlike a lot of other C suite positions or really any leadership position in an organization you have to think about like the context. of the role that this is contributing to unlike say like a Chief Innovation Officer or chief commercial officer or Chief Data Analytics Officer or really any like leadership position most of these types of roles that contribute to the health or safety or risk of the organization have existed for years, decades sometimes, the function of augmented intelligence in any business, whether it's in pharmaceuticals or in banking is brand new. So the people that are coming into these roles, whether they're new to AI, or they're just new to the role, have really a dual responsibility. They have a responsibility to articulate and architect the thesis about what their kind of organization's narrative for AI is going to be. That is like what they stand for and what they won't do within the organization. For example, like Gen AI can be used for lots of things, but what really makes sense for them to do as a business and what should they never do? If they don't, if you never say no, you don't have a strategy. And that's really important because there is no kind of history. For the organization to consider with regards to Gen AI, and you have to have principles. Otherwise, what's the point? And so that's an important consideration. So that the leader in that role is essentially building the thesis, building the organization, building their team. Which historically most likely does not exist when they come into role and there's a an understanding of culture and of process and of systems and of the organizational strategy and its priorities and where it is in the commercial market, both internally and externally and where it wants to go, which a leader of the business, if they're coming from the business, they're an incumbent, they may know a bit of, but not everything. If they're coming externally, they know very little. So you have to recognize what's possible to achieve within a business that is essentially brand new to Novo. And then also there's like a second consideration that. While you have to build a business to make the AI work, you're also directly responsible for transforming the business at large in order to be successful. The CAIO role is really a, it has an internal role to stand up a new business, like a startup type role, but you're also transforming the business in the future so that it's successful and is future proofed against competitive threats, both that exist in the present environment, as well as those that are coming from the startup ecosystem and other places. So it's, you have to wear both hats, like how do you build for the present, but also how do you defend for the future? So it's, it takes a really considerate approach to do both those things and not to get stressed out by it so that you can sleep at night. Here's the earlier question.

Andreas Welsch:

And I was just having a conversation with the CEO the other day and they asked exactly that same question. What are the threats that we need to be prepared for and look behind our back? And what are the opportunities that we need to seize and look what's ahead of us with AI and because of AI? So I think great point summarizing that. Now, I'm curious. You've read the book cover to cover, obviously, before you wrote the foreword. I'm wondering, first of all, what got you excited, and then what's the question that you have for me? Because that's the thing none of the guests and I have aligned on before. So if you follow the show and I usually ask my guests, if AI were a dot what would it be? It's the surprise question. I'm curious, what question you have for me?

Matt Lewis:

Yeah I when I agreed to write the forward, I didn't completely realize that I had to actually read the book to write the foreword. I was so excited to, by the opportunity to do it that I of course immediately agreed. And then I stuck, set out to start writing and I was like I can't really do this without knowing what the book is about. So I started reading a little bit about it. I was like, damn, I got to read this whole thing before I can really write the forword, and I read it. And I was like man I honestly couldn't have written a better book myself and I had some you know designs I'm trying to do something similar. But this is really the way that I would have articulated how a leader within any business and any enterprise, is trying to adopt and augment their teams, their organization from a really considerate perspective. And I, it really was so practical, but also so well researched and considerate and helpful. And I, hope some of that came across in the forward and in the book itself. But I think my question to you is as we think about the trajectory from really going from one of And I think that's a really important piece of awareness and hype as people have described it to this kind of place of action that we're in now of how the book is even at the current length is still probably a little unwieldy for, but the average kind of executive enroll. So not everyone has time to read the full book, if you will. If someone is not an AI expert, and I see some friends that are here on the program that are on the right side of the screen, if you will, and where they have teams that are not as deep in on the content as you and I are. How would you recommend that they use the content in the book to act as a foundation or as a springboard to actually make change within the organization?

Andreas Welsch:

I love that question, first of all. Thank you. Look, I think that the key thesis that I want to get across with the book is it's not about technology. Definitely not just about technology. Yes, there are so many things that we can do with AI and because of AI, but none of them will really work and materialize unless you bring your people along. And that means that you need to work closely between your data, IT, AI, and business teams to bring them together to find the most valuable ideas to pursue. You need to align AI with your business strategy to begin with, otherwise you're most likely just running science projects that don't go anywhere, right? You So you should think about how do I create a following around this within my organization. So you need to look at things like multipliers, communities of practice that can help you not only enter different business functions, finance and procurement in HR, where you want to help them get more value out of AI, but also become advocates for you and bring back information and feedback to you. What is working? What is not working? So, really have advocates and multipliers for you. Those are things that you can do even without a data science background, right? Those are some core change management organizational principles that you can follow. And that's why I wanted to make sure that they're easy and understandable and accessible in the book as well. Great question.

Matt Lewis:

Yeah. Again, thanks so much for having me here and having me in the book. It was, it really was a really special opportunity to participate in as the first book that I have both an autographed copy by the author and which I've contributed content to. So it's a real milestone in my professional career and looking forward to our next conversation. So thank you so much again, Andreas.

Andreas Welsch:

Wonderful. Thank you so much for being with us, Matt, and for sharing all your learnings with us. Of course. Awesome. So then let's see, let's move over to our next guest. Hey folks our next guest is Brian Evergreen. Brian is the author of Autonomous Transformation that he published through Wiley. One of the most acclaimed management books of the year by I Thinkers 50, one of the leading organizations looking at in ranking management books. So I'm super excited to have you on, Brian. Last year, we spoke about transformation and reformation and autonomous transformation. So I want to make sure that we pick up there again one year after and see where are we? Why is AI leadership still important? What does transformation mean? So thank you for joining.

Brian Evergreen:

Absolutely. Thank you for having me, Andreas. And I'd say that we are in an interesting moment where people have seen that there is value to be had here when it comes to AI, but they're struggling to to capture that value themselves. And I think one of the biggest reasons that we've seen something I think that I and maybe and you and many others warned people against when the AI, so Gen AI hype bubble started to really grow was you can't AI is not something where you can just get quick wins. We talk a lot about AI use cases, and I've been giving this example lately, and that I think is I really like at least which is that if you were to picture the most beautiful building that you've ever been inside of, and just walking in and just being awed by that building. Now think, how did they, how did that building come about? Did they come together with the architects and The the the wood, woodworkers and the people that are shaping the metal that all these people that came together, they start and say, okay here's a set of tools and let's start with a great use case for the first one. And then we'll go from there, right? There's no way that would have happened. Instead, they came together and said, like with Duomo is, which is one of my favorite examples. They were pushing the boundaries of what was possible. possible from an architectural standpoint. And so being able to say, yeah, could we even build a dome that big? What would have to be true for us to make such a big, beautiful dome? And and I'm so glad they did. And every beautiful building we've ever been inside of. Even our homes today, the way we remodel is not by picking up a tool and walking around our house. We always start with what we want to do first. And so I think from a sort of state of the union around AI I think that a lot of organizations have been disappointed by the fact that they've run up the hill. Using the same sort of system of leadership and management and strategy that they had been using for digital transformation and trying to carry that over to AI, which has a lot more complexity. So organizations I think have been really disappointed by those results and are starting to I'm the concern I have is that some of them may end up getting to the point where they say, okay, then it's just not. AI, it's an AI problem when really it's not at all an AI problem. It's a way of doing strategy, a way of planning, a way of setting a vision and turning that vision into strategy, and then taking that strategy and using storytelling to ignite and spark action across your meaningful, purposeful, useful action across your organization. And if any part of that system is broken. Then it doesn't matter what the front end of that system is. It doesn't matter if it's an AI project or it's a blockchain project or it's anything or an SAP project, right? If you don't have those pieces all in place from a great vision that, that people actually care about bringing to life, a a strategy that, that is, I call it designed for inevitability, a strategy that should work. And then And then storytelling that's really resonating with people about that. Then it's, you're going to be burning money alongside the other 87 percent of AI projects that fail.

Andreas Welsch:

I think that's a really good way to frame it. I really like the analogy of the building and the beautiful building exactly right. Nobody ever said, let's take a hammer and a chisel and figure out what we do with them. But what do we want to build to begin with?

Brian Evergreen:

Yeah. Let's just look for some low hanging fruit and we'll go from there.

Andreas Welsch:

While we are already seeing that, yes this bubble is getting a little smaller with Generative AI. People I feel have, used a good amount of it, have a decent understanding of at least how do I use ChatGPT, what can I do with these kinds of tools, especially if you're in the tech sector, I feel that's more prevalent than in others. As this is coming down the peak of the hype cycle, I see the next one just that's coming up, right? And that's the most favorite topic of the summer if you've been in AI. Agents. Software components.

Brian Evergreen:

Hot agent summer?

Andreas Welsch:

Yes, right? That take a goal and then go and figure it out and everything is magic. Now I think in a lot of those conversations, we see that people are approaching it just like another automation project, right? And instead of you having to define rules and program every, line of code, you just give it a goal and it magically figures it out. I saw an article by you just a couple of weeks ago, having a very strong and spiky point of view on that topic. So I'm curious, what are you seeing with agents? Where is this going? And why is it not? Your next automation project.

Brian Evergreen:

I great question, Andreas. And I, do like to be spiky. So I'm, glad that you that you appreciated that article. So I think you might be referring to AI agents are not automation. Which is something that I wrote on my, sub stack about a lot, I think a lot of organizations or just people, we think about agents it's, and I will, say many sales and marketing organizations that are working to try to, Capture the current excitement about agents and use that to, to sell, are, sometimes miss reallocating and saying something that I would a Rube Goldberg machine for those who are familiar with those is that's basically an AI agent a Rube Goldberg machine for those who aren't familiar. You'll know if when I describe it, which is a ball swings and then it hits something else. And then that thing rolls down something and bumps into a domino, which then starts, and it's a chain reaction. This happens, then this happens, and this happened, then this happens. And that is essentially a perfect, I think, analogy for automation, which is essentially. A recipe, a series of events that you can prescribe. First do this, then do that, then do this, then do that. If this, then do that, right? And you can think through and come together, bring your experts together to think really deeply about how all those steps are going to come together. Where AI agents are different is that instead of saying, I'm going to teach you as an AI, as automation, the steps. Instead, I'm going to teach you the, when, I'm going to teach you how to research the way, the two main additions, how to research the way that I would research, and then the, how to, reason the way that I would reason based off what I found. And then at what confidence levels I would make a decision. Whereas before we're just teaching it a recipe and a series of steps, in this case, we're teaching it a little bit more about how we would think through things and at what step of, researching reasoning we would feel comfortable then making a decision. That's one major difference. Another one is that instead of with automation, you might have an, a given automation sequence. You might have a step where it's okay. Now check with the human. If you, if at this step, if this equals that check with the human to sign off, but with agents, you can set that instead at thresholds, confidence levels that would span the entire thing. So in other words, if anything's outside of the norm, at any point, check with a human. Whereas with an automation project, if it passes step two, where that was the check with a human step, and things start to go haywire on step three, it's not going to bother checking with a human, because it's just going to follow the sequence. Whereas an AI agent would be more dynamic, and would know, okay, any time anything gets even slightly outside of this this box that I've been told I can go operate within, then I'm going to check with a human. So those are a couple examples. I think the last that I'll say that I'm very excited about is that I think it'll redraw organizational boundary lines. I think 2D chess is to say that we're going to figure out how to create AI agents that can navigate the current interfaces across the internet that will go find, learn how to understand menus and then Click through those different menus and use search bars and to navigate the existing construct of the internet. I think 3D chess is sidestepping that and then having agent to agent, almost like a new internet in a sense that will be leveraging APIs and. Have a whole agent marketplace that's happening behind the scenes that will help us as, leaders and as individual consumers to be able to not have to think so much about so many of the things that we do on a day to day basis. Like if I were, my favorite example is if I worked at a. And let's say, Andreas, you did as well. And we each needed lumber for our projects that we were about to start. And you live on the other side of the coast from me. And so I, call up in Seattle and I call Acme Lumber and I say, Hey, I need lumber in Seattle. And they look through their warehousing and they say, okay, we have lumber in in New York City. And and so we're gonna have to drop a purchase order that is gonna include the extra cost for a truck driver to drive it over to you and the, this, therefore this is the shipping time and all of that, right? And the impact to from a CO2 emissions perspective. And then let's say you're on the opposite, side of the country and you call up a b, c lumber, a separate lumber company. And you say, I need lumber in, in in New York city. I don't think you're, I don't, where are you based Andreas? You're in the Middle East. Philadelphia. Okay, never mind. Philadelphia. So I need lumber from Philadelphia. And I say and then the ABC lumber says great. Let's look in our warehousing. Oh, it looks like we have some in Seattle. So now you're, we're drafting a purchase order where you're going to pay extra. And now two truck drivers are going the same lumber of all the way across the country. Where and, that's because it would be too cost prohibitive and it would take too much time. Okay. for every lumber company to manually call all the other lumber companies to check on their warehousing at any given time between human to human. In the agentic future, the ability to have AI agents that just query an entire marketplace in a matter of other agents in a matter of milliseconds, those two companies could actually trade us as customers. Because for us, we want the right quality of product. We want the right quantity and we want the best possible price and, speed in terms of logistics that we can get. So we don't care. I don't actually care if it's Acme versus ABC lumber. I just want the best lumber I can get at the best price and you too, right? So behind the scenes, those two companies could trade customers without us even necessarily knowing or caring, saving us both money, saving positively impacting the human experience. Cause those two truck drivers now don't have to traverse the country for those extra orders. And a better impact to the planet. So that's one example. And if you span that across every industry, that's the future from an agentic perspective that I'm most excited about.

Andreas Welsch:

That sounds really exciting. Sounds like there's a lot more work to be done and also for AI leaders to get ready for this. So we have an entire chapter on this. Perfect. an entire chapter on this in the Outlook on AI agents, where are things, where are they going? And also an entire chapter on transformation, autonomous transformation, and many of the insights that you've shared. So Brian, I'm curious. What's your question? I haven't read it too.

Brian Evergreen:

Yeah I thought about this and I would say I want to ask you one that's exciting as the one that you asked me. Okay. And so I would say if AI were a Friends character, which would it be?

Andreas Welsch:

Good question. Joey. Always into something new, something troublesome if you don't pay attention. But definitely you can't have a cast without it or without them.

Brian Evergreen:

I love it. That's a great answer. That's a great answer.

Andreas Welsch:

Awesome. Brian, thank you so much for the question and for the inspirational talk. I know people or let me ask you this. Where can people find you to get more inspiration?

Brian Evergreen:

They can find me right here on LinkedIn. Happy to connect with anybody. Feel free to reach out to any, anywhere I can be helpful.

Andreas Welsch:

Awesome. Thank you so much, Brian for the insights and for supporting the launch of the book.

Brian Evergreen:

My pleasure. I'm so excited for you, Andreas. Anybody who's listening, go buy Andreas book if you haven't already. And I've already skimmed through it. I'm going to be reading it more deeply this weekend. And but from what I've seen so far it's a great read and and very practical.

Andreas Welsch:

All right, and let's move over to our next guest, to Maya. Hey, Maya.

Maya Mikhailov:

Hi, Andreas. Hey, look what came in the mail yesterday. I'm so excited to start reading this. I have so many plane flights ahead, and believe it or not, I still like flying paper books.

Andreas Welsch:

Awesome. I'm sure folks in, in Australia, will join or watch the recording. Be prepared summer is about to start. Get your paperback. awesome. Hey, you you were on the show, I think about a year ago, maybe five quarters, something like that. And we talked about how can you get your leadership on board and manage stakeholder relationships, especially with your senior leaders. And, from my experience as well it's super key to, to have your stakeholders on your side, not just when you're you know, AI is the shiny object and everybody wants to do AI, but also help them understand why should you, or why should we do things a certain way? Where are some of the limitations? Yes, generative AI is great, but it still won't do your demand forecast. I think that was a great example that you mentioned on the episode together you've led so many AI programs in the most senior levels of executive leadership in financial services. CEO of Savvy AI. Maybe share a bit about, what Savvy is all about, but I'm wondering what is critical in this phase of AI adoption where we're just coming off the hype? What do new and existing AI leaders need to tell to their leadership about it?

Maya Mikhailov:

Absolutely. And Andreas, first of all, super excited to join you on this huge day for you. Not many of us get to say that we are a published author. But you do, which is well deserved. Brian made some great points, your previous speaker, Brian made some great points about a bit of disappointment that's setting in with AI adoption as we're coming off of this sugar high of AI, if you will. But I think there are three key elements that organizations need to address. When adopting ai and that's reality data and courage. These are the three key elements right now of going over the hype cycle and getting into some practical implementations. And the first one is a bit of reality that's sinking in. Yes. We're coming off of as Brian pointed out, hot. AI agent summer, the new brat, if you will. But the reality is that AI projects need to align with an organization's goals and strategies. They need to be able to be something that can be scoped and achieved. No longer can we talk about. What's cool to do with AI? What's creative to do with AI? Let's talk about what can we do that generates ROI for the organization? What is a tactical solution that can create value? So on the other side of that hype cycle is tactics. It's implementation. It's practical solutions. It's a bit of the boring stuff, but the reality is, that the shiny the, whole shiny object. Phase of AI is coming to a little bit of an end when I look at enterprises right now. But the fact of the matter still remains that AI can achieve results. It has been achieving results for organizations for decades, like Netflix, Amazon, JP Morgan. I can literally go down the S& P 500 and show you how they've all been using different facets of AI and not just generative for years to achieve results. But I think reality is still there. Super critical right now, let it get scoped, make your CFO happy, get it out the door, and attach it to some real business metrics. The second critical element is really data. A lot of enterprises have spent the last decade investing in data system. And Andreas, maybe you've heard this too. You've heard the phrase data is the new oil. and yet when it comes to AI, I'm still shocked at how many companies will tell me our data's not ready yet. And I'm not sure we trust our data. And that comes, I don't know. Have you heard that as well?

Andreas Welsch:

Occasionally. Yes. Yeah. And it's crazy, Right.

Maya Mikhailov:

It's so crazy. It's what have you been doing for the last 10 years? What are these tens of millions of dollars that you've spent on data architecture, on data piping, on data transformation? What did you invest to? So I guess I'm a little bit confused why there's a fear that if data isn't perfect, it's not AI usable. I guess my bigger question is how are you running your business right now if you don't trust your data? But if anything, Investing in AI will show enterprises how and where they need to make better data investments, and it'll also show them insights that have been locked away in like the dark reaches of their enterprise warehouse data warehouse systems that need to be brought to light. So I think that getting over that data perfection in AI is really critical in moving beyond the hype, because you'll always be chasing data perfection. And the last thing is a little bit of an obvious one. It takes courage right now. Data gathering, data prep, we used to joke that was like the slowest part of any AI project. Now I'm finding increasingly that it's just fear that's dragging down some of these timelines. Constantly being stuck in this like model tweaking and validation. We have to get it more perfect and more perfect. And yes, 94 percent accuracy is much better than 92 percent accuracy. Currently, the team that's working on this problem might be working at 80 percent accuracy and taking six months to get into production versus six days that it may take if you attach an AI program to it. So I think the. Doomerism, if you will, a little bit, the doom and gloom of the AI conversation. It's needed to throw some caution into the hype cycle, but then again it's swung that pendulum so far that there is a lot of fear. So I would say that when it comes to launching AI systems, You need to have courage to get live and get it done. And I work with a ton of financial institutions and financial services companies. And trust me, like risk avoidance is basically in their charter, but there has to be a balance between being too afraid to do anything. And then also that the other side of that, which is we let's throw everything into production right away.

Andreas Welsch:

I really love it. How are you summarizing that right then? I think, again, those three pillars make perfect sense, without clear data, you will struggle, but you shouldn't overthink it or just stretch it out too far, so you're behind on your AI projects, but it does take courage, like you said, right? Last time we talked, you mentioned a lot of the advice around managing your executive stakeholders and how to manage them. Also talk to them that yes, AI is here, but it's not the, solution for each and every problem. So if you're buying the book and reading the book, there's a lot of that in it as well, what you shared. And I know you're a big proponent of understanding first what the problem is that you're trying to solve and then figuring out what technology should we actually use. I thought, hey, Gen AI had solved all of that for us and everything was just super easy. But I have a feeling we still need to educate leaders and stakeholders about all those things. And just wondering, how do you navigate those conversations with leaders that are a little more familiar with Gen AI? Or maybe even think that they got it all figured out and know everything?

Maya Mikhailov:

Very, delicately.

Andreas Welsch:

Especially if they're your director. Yeah, exactly.

Maya Mikhailov:

Very delicate. It was quite a lot of tact. First of all. I will say this. I think there is still a massive education gap with leadership and with ai because I don't think that a lot of leaders really understand that AI isn't one thing. It's not just generative. Right now those two words are being used so synonymously. That AI is just being used as a shortcut for Generative AI. And they're not thinking about it as an entire toolkit, that it's not one model. It's not just generative AI to rule them all. You're talking about tools and talking about using the right tool for the right problem. And, that's fundamentally where we start. Which is that basic kind of aha moment of there's not just one type of path to achieve the results you're trying to achieve. It may be generative, it may be not. Generative definitely has some very useful applications. Look, at all the convenient tools that Apple just introduced yesterday. Take a photo, ask a question, get information, get appointments and more. These are very practical applications of generative. And ones that work for you. with the problems that they're trying to solve, but it's really not the only game in town. And I think when we talk to leaders, we give them examples of sophisticated organizations that are looking at generative, but they're looking at generative as part of the solution. They're looking at the right model for the right problem. And sometimes they're looking at a process of chaining together multiple models to solve a certain problem or certain process. For them, it's not the model type that matters. It's about, is this the best way to solve this problem? Is it even about AI or is this a process problem that's just broken? So when we talk to leaders, we keep stressing that more education is needed both by the leadership teams, but also by their teams themselves. Right now, it can't be top down. You have to let your people experiment with AI tooling. You have to let your people bring problems to the table that they're seeing that they believe that AI can solve for. And the leadership need not be prescriptive, meaning they need not say, Oh my gosh we have a new ChatGPT model. Let's figure out How do you use ChatGPT in our organization? That's not the solution. The solution is frontline teams, SMEs, experts in your company who know the problems that need to be solved. They know where the data resides and they might have a different tactic to solve them. So it shouldn't be top down. Here's a tactic to solve it. It should, let's educate ourselves in the toolkit. Let's expose our folks to the toolkit and let them get their hands on it. So they can find the right solution for their particular problem. And, finally. Leaders need to be educated about how their teams are feeling about AI. And this is really important. Look, this is a new technology and in an odd way, what I'm seeing in enterprise is that the tech part of it is almost easier to implement than the change management part of it. Because so much of the original conversation about AI was about human replacement. How many workers can we replace? And headline after headline of companies saying, We've replaced X amount of workers. We shut down this. We don't hire anymore for that. Now, When your team as a leader, you start talking to about AI to your team, and that's what they've been hearing in the public discourse, they're immediately thinking to themselves, my job's at risk. I don't want to use this tool because what they're really trying to get me to do is to train my replacement. So I think it's up to leaders. to educate their teams, and to bring them to the table, to bring stakeholders to the table, and to really treat this as a human augmentation tool, and not a human replacement tool. And that's the way they're going to get their teams to adopt this faster.

Andreas Welsch:

I love that focus on humans and making sure that you, Have that connection, that you have that transparency as a leader with your employees, that you are being sincere about that. And also what you mentioned earlier that there are different technologies and they all serve a different purpose. So there has to be some level of an understanding or at least willingness to understand what tool do I use for what job.

Maya Mikhailov:

Oh, absolutely. And I think a lot this year, I'm seeing that at the enterprise leadership level where they've stopped becoming prescriptive. It stopped being, how can we sprinkle some gen AI or how can we sprinkle ChatGPT on this? And now it's okay, let's look at our problems. Let's start talking to our teams about what really needs to be solved. And let's open our minds a little bit that it's not just one model that is the solution. There are different processes for this.

Andreas Welsch:

I love that. Really good call to action as, as well for those of you joining us live or listening. And again just to reemphasize that the part about when should you use machine learning, when should you use generative AI has inspired an entire chapter here. Make sure to get your copy today.

Maya Mikhailov:

I got mine.

Andreas Welsch:

Awesome. Now, question to you as well. What's your question for me? I can feel it's getting a little hotter. My seat. I'm on the hot seat.

Maya Mikhailov:

Alright, so I have a question for you and I have a prerequisite. You can only give me a one word answer.

Andreas Welsch:

Okay

Maya Mikhailov:

So you can't couch this in anything. Ready?

Andreas Welsch:

Ready.

Maya Mikhailov:

Name me the most overhyped and underhyped thing in AI. One word answer.

Andreas Welsch:

Agents.

Maya Mikhailov:

Is that overhyped or under hyped or both?

Andreas Welsch:

Both. It, depends what side of the buying equation you're on. I think on the selling side and on the vendor side, it is starting to, get a little overhyped. And I think on the business side, on the buying side, it is under hyped, at least the understanding yet of what will be possible and, how to prepare for it. So your hypothesis. Hot asian summer is turning into an asian espresso fall. I don't yet have the hot song of fall dialed in yet. It's maybe your pumpkin spice latte that you've been waiting for. Back in season, right? Something like that. I love that question, by the way.

Maya Mikhailov:

Starbucks will be so happy. Oh yeah,

Andreas Welsch:

I need to see if I need to turn this into a promotional

Maya Mikhailov:

They should be sponsoring this entire livecast.

Andreas Welsch:

They should. Awesome. Hi, thank you so much for sharing all your expertise. Like I said, lots of good information in here as well, inspired by our conversation last year around how do you work with your senior stakeholders and manage those relationships. Perfect.

Maya Mikhailov:

Thanks so much for having me.

Andreas Welsch:

Thanks. So let's see, our next guest is Paul Kertina. Hey, Paul, thank you so much for joining.

Paul Kurchina:

Thank you. It's a pleasure to be here and I'm armed. I've got one physical blogger and I actually have the digital book here as well.

Andreas Welsch:

Boy, you're really well prepared. Thank you so much. Now for those of you who don't know Paul, I know you are a prominent figure in the SAP ecosystem. You're an evangelist. You run the enterprise architecture community. You've been in and around the SAP ecosystem for. More than 25 years.

Paul Kurchina:

Let's say over three decades now. I'll call myself out.

Andreas Welsch:

Oh, okay And you've you know, you've seen so many trends come and go whether it's from large enterprise vendors Or if it's in the tech industry as a whole So I'm super excited to have you on and hear from you what you're seeing when you do talk to enterprise architects who are You know, now being faced with that reality of, Hey, yes, vendors are actually not just talking about AI, they're integrating that into the Application. whether it's in the short run or long run this, will impact me. It's something I need to learn more about. I need to be aware of. And I know you've, been a great supporter of helping these communities understand more about AI, and we've worked quite a bit on that when I was at SAP. But I'm wondering, what are you seeing why should IT teams and SAP COEs even care about AI? What's keeping them up at night, and how can we help them sleep peacefully?

Paul Kurchina:

Yeah, and just before I respond, I just want to tag in a few things that may my just mentioned it in a previous thing. It was interesting. We always think that we need to have our data perfect, right? Our data right. And we was at an event last week and the speaker was, I had to just stop for a second and said, can you repeat that again? She reinforced the fact about having AI work on the data you had now and the low hanging fruit and quick ROI about just dealing with that data. My mindset had been. That you had to get it perfect, right? No, you can get value today to help you gauge what you should do. I'm glad she reinforced that, that fact as well. And AI has been around for a while. I was an SAP customer, a utility customer. Back around 2004, we were taking sensor data into the cloud doing something called similarity based modeling to predict failures, weaks, and events on assets. Back in the day. A lot of that is not, this is not that new, and I really liked your point as well, and there's a good thing about I, I easily distracted by being in the middle of your speakers, forced me to listen to all the other great talks so far. And that aspect about all the different AI tools. Tom Fishburne did a great post yesterday, or cartoon, about showing a massive AI hammer, right? Almost the solution for everything. And I could not agree more about some of the points. As we're learning about AI, understand it's just one tool inside our toolkit. To be applied in the right way. And sometimes I think in it, and you and I have been around that we get too carried away. It's the hot or the new thing, and that will solve everything, right? And it doesn't. So let me just circle back to what I'm hearing from the many customers I deal with. Quite frankly, everyone is I like to think of it as we almost have, AI is like a dense fog that keeps on getting denser. in terms of what's out there. I've got a buddy of mine, Julian Moore, in Australia as well that and by the way, I'm coming to you from Calgary, Canada. Julian awesome, helps AI, helps associations in that market deal with AI. And Julian plays with 20 different AI tools a day. A day, in terms of tools. And relating back to this world, we're seeing the onslaught of all this AI stuff, so to speak. And I think some of us are frozen in the sense of knowing what to, what should I really jump in on, right? What do I want to invest on? The old, I forget what the term for it is, the story about when they put 13 jams a jar in a grocery store. Sales go down, they reduce to four people buy. And it's almost a bit analogous to that, what I'm hearing from many different customers, in knowing where to place their bets, the SAP audience, as has been very keen on, activating things that SAP creates by their solutions. In many cases, waiting for things that until they're there, turning them on in essence, right? And when I'm hearing from SAP customers, it's interesting in preparation for this last night, I asked a few hundred customers in one of my recent events, how can we help you in terms of on with AI? And what you have to do as an enterprise architect. And it's things like, understand, I asked for the top 10. So of course I used AI to take the 300 plus things and give me the top 10. So customers want to know are, what are the capabilities that are built in? No surprise. What's built in. They want to know what's on the roadmap, right? What is coming at what point in time? So they know, do I need to look at other solutions? Do I need to perhaps build my own, but get a gauge of what's coming? Get a sense about AI and user enablement, the whole experience and engagement side. Get a sense of the infrastructure that's required. Customers aren't necessarily going to the public cloud scenarios in the SAP world. Some of them have to, are dealing with their own hosted scenarios. How can they ensure they're able to take advantage of the AI? Data analytics, the whole, that whole area as well. I'm keen on that. Many customers are it's interesting, are making the move from their legacy ECC systems. over to the latest S/4HANA. And what's on their mind in that move is not just what capabilities I can take advantage of, but the other context is, how can I use AI to accelerate my transformations? What can I do, because quite, and I've been involved in R/3 back in the old SAP days, S/4 since it came out in 2015. Seriously, we can't approach those implementations the way we've done them over these years. What is the new state of the art automation and how can AI help to accelerate those? Of course governance and, the controls in this are all something that are top of mind. And, the last one I'll highlight is probably the key one is, The human interaction and engagement with AI. And it's interesting. There's a reluctance and I, think you find it as well, Andreas, is especially people around this package software world for a while to really dip their toe in the water and try things. And what I always encourage people, and I'm seeing a lot of architects do it in other roles as well, just start playing with basic things to help you in your day to day. And it almost takes a, bit of pushing on people. Some of them just have to, once they start trying a few things, and often I think you have to extrapolate to, someone's day to day personal life. And get to try things. That's always been that case. Then they think I can do this here or there. Let me think about how the use cases as well in my world. So seeing a number of different things, but I think what's, missing and your book is a great resource to help on it, they're looking for not a really a silver bullet, but give me playbook, in terms of what I can do to educate myself as well as help educate.

Andreas Welsch:

Thank you for summarizing that, that deeply resonates with me, right? Coming from an enterprise software background and having seen similar things that you mentioned, I can only imagine Certainly, large companies, organizations, are not just wall to wall one vendor, right? There's a number of different systems. Maybe you use your Salesforce for your sales in CRM and your workday for HR, and maybe you use SAP. I was looking at NetSuite's site and for CX, Intercom the other day for a client. Everybody's adding AI to it, so I can only imagine that there's this effort to understand what is available, what can I use, how much does it cost, does it solve a problem? I don't know. It just grows exponentially. But even if you do that the underlying premise of how do I make this usable in my organization, how do I bring people along, how do I scale this, is still one of the most important questions in addition to technology that leaders need to solve.

Paul Kurchina:

And I think that point of whether you call it how do we augment ourselves, right? If you call it augmented intelligence or whatever it is, as you're waiting for, I'll term it, some of the dust to settle. On, some of these things. You have to, there's almost like a couple of paths here. And I'm interested in your views on this. One is in terms of, as you're delivering value in certain things inside your organization, you're monitoring where things are going. But I think more now, more so than ever, I'll say what's different now is a number of decades, all kinds of different technologies, right? We've, I've seen over the course of time and involved in, and maybe the analytics is more back of a different scale to the, mobile devices like we have. I remember being involved with SAP. Actually, my first book, now that I think about it, was called Mobilizing SAP back in 2004, dealing with RIM and others. So that was almost more of the BlackBerry was more in the business world, right? And then with iPhone, it bled in the personal world and roll forward to now and the recent iPhone launched yesterday and things like that is, it's probably, we used to say as well, that consumer world pushing the business side, but a certain degree that's impacted or impacting now our personalized also so much. So it's almost like this Venn diagram between the two worlds.

Andreas Welsch:

Great analogy, right? Seeing that overlap and again, I think it, it takes leadership, whether it's, in your business or in, in your personal life too. Look for these opportunities, like you said, and experiment with them, play with them, get a better understanding of what can I actually use this for? Is this useful? How can I make it even more useful in what I'm doing? So I've learned the

Paul Kurchina:

drill now and I'll flip to you. I've learned over the few calls. I've got a question for you. Thank you. It took me three speakers to get there, but I'm there. So for my audience tends to be customers and Partners, and even SAP in the SAP ecosystem, right? That, what is your advice to them on how to best use, leverage the book in their organization?

Andreas Welsch:

I think the key part is that there is an abundance of technology around you. Vendors like SAP have been embedding it left and right. In many, cases, you can experiment with these things. There are trials available where you should ask for one. The part that a software vendor, just by selling you software, is not going to solve for you, is how do you What are the right things that we should pursue? How do I put them on a timeline? How do I assess value for these capabilities? What should I really be implementing? It's one thing to try them out and see what can we do with it. But then when you have multiple of those, how do you move them from An idea state to really an implementation and an operation state. And the part that's key and it's dear to my heart is how do you scale that across the organization? So when I was at SAP, we built a community of multipliers within the S/4HANA organization. And we, we, created this network of champions, if you will, people that understood or. Got more training on what is AI? What can we use it for? They understood also from their peers, what have they tried out before? What's working well? What should we adapt? What are we not going to repeat? And really form this community of practice, of multipliers, of champions within your organization. Even if you're a small organization, right? And you know everybody. in your startup and have conversations with people, see who are the more technical ones that do want to get engaged, that are excited about this, because they are much, much closer to the business and to the business process and the business problem that they see every day, that they can bring ideas and information back to you, what's working, what's not working, how can we apply AI. So really, that's, a key part, in my opinion whether you're focusing on SAP or any other vendor as an IT organization, as an AI organization. Bring your business stakeholders along to help prioritize what should we be looking at.

Paul Kurchina:

All right. Take care, my friend.

Andreas Welsch:

Thank you. Alright. While we are in Canada, let's move over to another fellow Canadian, Harpreet Sahota. Thank you so much for joining.

Harpreet Sahota:

Andreas, man. Thank you so much for having me. Congratulations on the book. It's sitting right here on my desktop as we speak. Just got it in the mail, man. So thank you so much for for sending me a copy.

Andreas Welsch:

Wonderful. I also need to say thank you to you because we recorded an episode, I think it was in the fall of last year, about RAG, Retrieval Augmented Generation, and fine tuning. And maybe before we get into that, I know you're super busy. You're into developer relations. You have your own show and live streams. You go very deep on the technical topics. So excited to have you on as we switch more from the business strategy part two. Some of the more technical things and what can you actually do with this? It's not going to get too technical for those of you in the audience, but really appreciate your perspective there, right? We recorded an episode last year. I created a short clip, 40 seconds, something, put it on YouTube. It's the most watched clip on my channel over the last 12 months, more than 14,000 or 15,000 people have watched It. You were talking about retrieval augmented generation, fine tuning, right? Last year, everybody was trying to figure out, should I go this way, that way? Is fine tuning something we should be doing or not? If people are in and around AI, they know they're sitting on a goldmine. which is their business data, but we also know it's messy, it's dirty, Maya talked about it, Paul just talked about it. How does data play into these concepts like RAG, like fine tuning? What should people really be thinking about now at this stage of the, game and the adoption?

Harpreet Sahota:

Yeah, so I guess from perspective of, just retrieval augmented generation data quality obviously is super, super important especially for retrieval accuracy because the retrieval aspect of RAG RAG is Retrieval Augmented Generation, that retrieval aspect, it needs high quality relevant data in its knowledge base because if you have poor quality data, then you're going to get irrelevant or inaccurate information that's retrieved from your database. You might miss relevant information because of a poor representation of that data or poor indexing. Or you might just get biased results. And then in the generation aspect RAG, the G is for Generation. we use the retrieved information to produce an output. And so think about how data qualities can be affected there, right? If we have accurate and relevant data that's retrieved in that retrieval process will end up with more contextual and coherent and factual generations. So make sure that you have just diverse and high quality data will really help generate a good response and contextually appropriate response. When your data sets are. Nice and clean and well structured. Of course, this is going to improve the model's ability to your retrieval pipeline, the embedding model, to retrieve the right documents. When you think about cross modal retrieval talk about multimodal RAG systems, this becomes even more important because you need to have high quality, aligned data that's across different modalities that you have, text, image, audio, video, what have you. So yeah, just high quality, good data means that you are having good representations that are then into your vector database and will improve retrieval quality.

Andreas Welsch:

Perfect, so data is still important. Sorry folks, still need to get your data in order, but don't overdo it, right?

Harpreet Sahota:

Yeah dude, like if I just quickly like I want to be, I was once one of the people who would ignore data and just be like, Oh, it's all about the models and all about building a dope model and look at this model. And I think over the years and more so just ever since I've been working at Vox151 where data quality is like the thing, like I now truly understand and appreciate the importance of good quality data and like really understanding the impact that it has, not only on any model that you're training or fine tuning but just your overall system. So I used to be one of those people that would sleep on data quality. And it was just this abstract concept to me that I didn't truly appreciate because I was just so into I need to get the right learning rate, the right hyper parameters and the right configurations or the right which layers should I freeze or unfreeze. But yes, like the actual. Quality of your data is going to have the largest outsized impact on whatever you do downstream.

Andreas Welsch:

Awesome. Now, I heard you mention something around modal, multi modality, right? Image, text, video, audio and certainly data quality being important there. But for things like RAG. How far out are these things? Are we talking quarters, months? Is it already here? So a bit into the looking glass or, maybe, even just just what is available today when it comes to model modality?

Harpreet Sahota:

I definitely, yeah, definitely still think it, it has a little bit of ways to go. There has been some good models that have been put out that show some promising results. For example, Meta released something called Chameleon. It was their newer version of the Chameleon model which has which is great because you can use the embeddings from that model to do cross modal retrieval. So it is an improvement over Clip or ImageBind. And then Apple released their it was called 4M21, which was 21 different modalities that they can handle with one model. So models like this are becoming more and more powerful and it's really pushing what is capable with multimodal RAG. I think still a long way to go to have the results that we're having with text only RAG. But, the research is coming and yeah, we're working on it. We're going fast, I'd say.

Andreas Welsch:

That's, exciting, right? I think so many things have been moving so fast over the last two years, ever since ChatGPT dropped and it feels like there's no week without any big announcements, something impressive, something exciting. Sometimes it's a little easy. Matt was saying at the beginning too, to get carried away and focus too much on what's new this week without thinking about the practical application. With Retrieval Augmented Generation, I think people have realized that, hey, yes, your large language model has a fairly good understanding of how to construct language, how to understand language. Certainly with knowledge cut offs and all these kinds of things, there are certain limitations that the model might not have the latest information. Because it wasn't available at the time it was trained. Hey, Retrieval Augmented Generation. We can take what a user has entered in the chat, for example, turn that into vectors, compare that against vectors in your database. Pull out something that most closely closely resembles that. Data is, important and to get good results. Now, with Brian, we're talking about AI agents and that being the, hot summer of AI agents. How, does data quality fit into these new and emerging concepts as, as well?

Harpreet Sahota:

Yeah Like, when you think about agents within the context of large language models themselves it's not like a it's just a pattern at the end of the day, right? There's a reasoning and action pattern. And these are just emerging properties that, that come from bigger and bigger models, right? And the reasoning and action pattern there's, a call to a language model. There's, the, language model is reasoning over the query. It's thinking about actions to take, or it's thinking about responses to give to the user, and then it go ahead and it takes that action. I don't think any of us you know, unless we're working at those frontier model labs are going to be building models for agents or fine tuning models for agents, we might be doing something more closer to agentic RAG, where we're combining this reasoning and action pattern with the RAG pattern, right? So this just like mashing up two patterns. So what is RAG? RAG is just Dense Vector Retrieval within Context Learning. And so if we step back again, like just think of the whole process of RAG we can, split it up into a few different phases. There's like the preparatory phase where we have some document processing, where we're chunking splitting documents into manageable pieces. Then there's the embedding, then pushing everything to a vector database. And then there's the query processing, and this is where we'll see the reasoning and action loop occur. A user query will come in, the language model will reason over that user query, and perhaps you might have a module that is specific to some type of query. Let's say you have a homework assistant for a high school where you know, you're teaching general education classes. The query comes in, you might have the language model reason over, okay, is this a history science or physics question? And then it will route the query to the appropriate kind of vector database or index to pull the information. So you see this happen more where the agent is part of the just meshed up in, into the pattern.

Andreas Welsch:

Awesome. That's the part that really gets me excited when you can connect your own data to these, systems, to agentic RAG, pull out information that is relevant, that is specific, right? Get around some of those limitations that LLMs, natively and inherently have, but also use the strength of both approaches. You've also inspired an entire chapter on RAG and getting better results from your LLMs with your data. So I'm excited to share that with the world as well. Harpreet, now I'm curious, what's your question that you've prepped?

Harpreet Sahota:

Yeah there's, folks like me that are in the industry that are we're more nerds, if you would, like we're more hands on. We're reading the latest research where we're, very hands on, right? We, understand the limitations and we understand what AI is capable of, and quite senior in our roles, but not necessarily leadership, right? So we're senior, maybe we're, but we're not leadership, right? At the end of the day, And I feel like there can be some tension between folks like me and folks that are in the business that are like, we need AI. We need to get this thing, everywhere. And I'm wondering what. Tips or advice do you have for folks that are like me who are super technical senior in their careers? Yeah, we we understand, of course you could throw around vague terms, business value and all this, all these buzzwords that business people use, whatever, we get it. But we just can't communicate to leadership properly like the limitations okay. Like, how can I tell, how can I tell a leader that idea is not going to work out and here's exactly why.

Andreas Welsch:

I think that's a great question. Look, I think, in my opinion, it takes both sides to be open for this kind of dialogue. As a data AI practitioner and expert, do you have a really solid understanding of the technology. You are looking at the data. You see what is possible and what is not possible. I think through examples and Maya was saying, doing that diplomatically and tactfully showing what is possible or also asking questions, right? You can probably get. Get closer together on what it is that we're trying to solve, right? Why is this important? How are we helping the business? Why do we believe, on the other hand, that AI is the solution for this? Or, again, maybe it's not generative AI, maybe it's not RAG, but it's machine learning, and it's your, whatever, K means algorithms and type models. I think just, level setting is probably one around enablement, shared understanding, seeing what is the actual problem that we're trying to solve and why is this important. Looking at technology as a means to do it. Maybe sometimes AI, Gen AI, machine learning isn't even the right solution for it. Being able to articulate that and being heard is the other thing. But those would be some of the recommendations that come to mind for me. Definitely seek that dialogue.

Harpreet Sahota:

And what about the flip side of that, where you have people who are maybe new and they just want to they see a, they're technical, perhaps maybe just early in their career. And now they're like they, have a hammer now AI problem. What advice would you give to them to filter out from bad ideas?

Andreas Welsch:

No. I would say practice how to use that hammer in a, in an isolated en environment. Look for a use case. Maybe that's more in, in your personal space. Experiment with the technology and then bring it to your business. Hey I've learned something new. If we try out X, Y, Z, or I think here's how we can apply this. I think if you're in a technical role, it's absolutely critical that you're at the top of your game, that you understand what is there, what what is new. But you also need to understand how can we use it and for what purpose. So also be curious, being inquisitive of how this can help really move the business forward or make a measurable impact, right? Maybe that's the better way to express business value a measurable impact. Awesome.

Maya Mikhailov:

Thank you.

Andreas Welsch:

Wonderful. Harpreet, it was a pleasure having you on. Thank you for walking us through some of the new advancements in RAG, agentic RAG, and why data quality is important and is still important.

Harpreet Sahota:

Thank you very much.

Andreas Welsch:

Awesome. All right. Thank you. So let's move on over to our last guest. Steve, Wilson. Hey Steve, thank you so much for joining.

Steve Wilson:

Thanks for having me, Andreas. Excited to be here.

Andreas Welsch:

Hey we connected last summer when I saw a paper come out in a report come out, by a foundation called OWASP, it was called the top 10 for large language models. You are one of the lead authors or co authors on that report. And we started getting into a conversation about how is security evolving around large language models. And had an episode on that. It's now been nearly a. Also, a year since we've talked about that, and I'm sure many things have evolved when it comes to large language model security. We actually have a dedicated episode in that next no, in two weeks, that I'm really excited for because I've been struggling getting people with deep knowledge in front of the camera on this topic of LLM security. But before I keep on rambling more, maybe if you want to introduce yourself briefly and what OWASP is all about, and we get into what you're seeing in the cybersecurity space.

Steve Wilson:

Yeah, really quickly. I'm Steve Wilson, and I spend all my time thinking about the combination of AI and cybersecurity. My main job is I'm the Chief Product Officer at Exabeam, where we use AI to sift through petabytes of data and find threats for organizations. Last year, I got involved with the Open Worldwide Application Security Project, which is OWASP for short, which is a 20 year old foundation with 200, 000 members dedicated to building secure software. And I put together the first project there to research the vulnerabilities specific to large language models. And then that led to led to my own book journey, which should be coming out later this month, which is writing a book for O'Reilly on the topic.

Andreas Welsch:

I'm super excited for you. Can't wait to, read that. You've you've already shared a sample chapter and I am definitely ready to, read the entire book. Yeah. Hey, what are you seeing in, in the security space when LLMs? I think we've, seen on At least those top 10 examples of how generative AI can be used for adversarial intents. How much of that is, is real? What is being used? How are people using it and how can you prepare or defend against it?

Steve Wilson:

Yeah, I think what we see is it gets more real by the day. And a year ago when we first started this, people were just ramping up their first LLM projects. And so people were looking at, hypothetical vulnerabilities that were pretty clearly there, but they often resulted in the, the end state being. Embarrassing the organization who was creating the LLM, right? It might put you in the, headlines because somebody made your LLM call somebody a bad name or do something that was in poor taste. But increasingly we are seeing these now put into mission critical operations. And it's, the topic of what you've been talking about all day is that this is transitioning to reality. And we now see these, very hypothetical vulnerabilities with names like indirect prompt injection that a year ago, somebody put out a paper. describing how they had put together a system to evaluate resumes and somebody could embed a secret code in the resume. And you cover this in your book. So that it would change the rankings on the person who submitted their resume. And it's okay, that seems like that's going to be real. What we've seen in the last few months is Microsoft Copilot and Slack falling victim to that exact vulnerability. And these are real, and these are now attached to things which are holding our more, most important data, and people are learning hard lessons now.

Andreas Welsch:

Yeah, I remember seeing that paper and trying that out and having done some academic research in that space of how do you use AI and machine learning at the time, about five, six years ago to rank resumes just, seeing that is possible. It could be possible to game the system that way was, just mind blowing. And, especially as, we want more and more people to use applications that have large language models and engine AI embedded, so you're almost pushing them to, towards that edge. And you need to educate them, I think as well, and what could be possible, but certainly, being careful about that. Now if it's, not just about. Prompting injection, right? I think a couple of months ago I had Carly Taylor on the show. She's a senior machine learning manager at Activision and she was talking about large language models being used for spear phishing, targeted phishing campaigns. What are you seeing there? How are large language models being used by the bad guys, if you will?

Steve Wilson:

Yeah, so you know we've seen bits in the news where organizations have been targeted with things like deep fake zoom calls. There was a bank that somebody transferred a bunch of money. Very real vulnerability. But what I'll tell you is it's gotten to the point where I have now spoken to people who have conducted interviews. That were deep fakes and people working at companies that hold mission critical data, who one of the interviewers you know, because we're often these days interviewing people in other countries and things like that's become routine, but somebody became suspicious. They had somebody who was a little more trained in this from the security team conduct a follow up interview. And they said this is not, a person. So this is real and this is happening. And a year ago, what I was telling people is your previous phishing training, we were all used to Nigerian print schemes where things were barely written in English and you had to look at them Just have the tiniest bit of skepticism to spot these fakes, right? The URLs are misspelled and this and that. Now a standard phishing email, those are so good. They're flawless every time. There's no excuse. But we've moved beyond that, where we're not just getting emails and we're gonna wind up with deep fakes. We've seen examples of people getting phone calls with their friends and relatives, having their voices cloned. This is not only real, it's becoming routine.

Andreas Welsch:

I think that's where it gets really scary. And I feel just having more awareness and education about this, even if you are in the enterprise, especially if you're in the enterprise you need to educate your teams that here are some of the risks, how others might be using this. on you or to trick you into doing something and believing something. So corporate and cybersecurity are asked to step up their game and evolve as well. Now, what do you think leaders need to know when it comes to LLM vulnerabilities, right? There's a huge push on, on one hand by vendors to put as much. Gen AI in there, so marketing can say, Hey we've, got the best and the most AI capabilities. Sales is getting excited because they can sell more and they can upsell you. Maybe you're even building your own things where it's differentiating in, in your business. What do you really need to know as a leader? How can you protect yourself and your company against some of those vulnerabilities?

Steve Wilson:

Yeah, I think, what we see right now is with the nature of pre trained transformers, it's so easy to build something that looks compelling. That, that first demo step becomes almost, trivial. People can put together amazing looking demos in an afternoon. And as a business leader it's really easy to get excited about that. What as you push those into production and the cyber security space in particular, we see so much use of large language models. Every vendor is adding one. If you don't do this right, you wind up with something that is not only insecure in a traditional sense, it's actually borders on embarrassing. And what we see is people pushing wrappers on top of ChatGPT out into production. They're not provided with sufficient sort of training and data to really answer focused questions. So they, hallucinate. They're, they're not focused. They're actually expensive to operate. And and they're vulnerable to virtually every one of those top 10 vulnerabilities. And what I've seen at, Exabeam is. Us having to look at this through a very focused lens and put on a product management hat before you put on the engineering hat and say, what is the very specific business problem I'm trying to solve? And what is the most constrained way that I can solve that? And if I can do that, I can feed the model exactly the data that it needs. I can eliminate, or at least dramatically reduce things like hallucinations. I can make the output. very focused. I can limit the scope. One of the things I talk to people about is is limiting the scope of what your bot does. It's the first thing on my checklist from my book, which is don't try to build guardrails by building a deny list. Build your guardrails by using an allow list. These are the only things this bot is allowed to do and everything else is out of scope is going to be far more effective than running around, trying to play whack a mole, convincing it what to not do.

Andreas Welsch:

I really like that approach. Makes me think back of my days in IT and firewalls and zero trust security and all. Steve, one last thing. What's your question for me? We have a lot more time to talk about security on the show in two weeks. And there's a lot of good information that you've already shared.

Steve Wilson:

Yeah I'd say the big question for you is in your interactions with business leaders and people in these new roles, like a Chief AI Officer, what, do you see as the short best key to get them to think about the security team is an ally rather than an adversary in deploying these technologies.

Andreas Welsch:

Poof! That's a tough one. That's probably the toughest of all questions today. Look, I think the key part is understanding on one hand, what is the opportunity, but also what is the risk? But it's nice to have a fancy chatbot that doesn't always send you into an infinite loop, or sends you back to the start. We've all been used to it. It's nice. To have a better customer experience. If you're building this thing on, your own, what are the risks? What are the risks that we're getting into? Not just of this things, saying something silly. If you're a parcel service, we've had that experience in Britain a couple of months ago, or if you're a a pickup car dealer thank you. And you buy your pickup for a dollar. Those are embarrassing, but it also gets malicious. So what are the risks of somebody gaining unauthorized access to our systems or exposing information that we have trained this agent, this chatbot with and how can we mitigate that, right? I think that's the key part to look at as well, not just the opportunity, but also understand what is the risk that we're getting into and how can our teams that we already have help us protect against that.

Steve Wilson:

Awesome. Hey, Andreas, thanks for having me on. I enjoyed it and I'm looking forward to next time.

Andreas Welsch:

Perfect. Thank you so much, Steve. Really appreciate it. Talk to you in two weeks. Awesome. Folks, we're getting close to the end of the show. Thank you so much for joining and for celebrating the launch of the AI Leadership Handbook with me. If you haven't already done so, go to Amazon, look for the AI Leadership Handbook and buy your ebook or paperback copy today. You can learn about all the nine key steps. I'm super excited and thankful for all the great guests that we have had on and to dive into some of the chapters and some of the topics here as we're exploring more and more generative AI. Not just exploring, but also bringing it into your business.

People on this episode