
What’s the BUZZ? — AI in Business
“What’s the BUZZ?” is a live format where leaders in the field of artificial intelligence, generative AI, agentic AI, and automation share their insights and experiences on how they have successfully turned technology hype into business outcomes.
Each episode features a different guest who shares their journey in implementing AI and automation in business. From overcoming challenges to seeing real results, our guests provide valuable insights and practical advice for those looking to leverage the power of AI, generative AI, agentic AI, and process automation.
Since 2021, AI leaders have shared their perspectives on AI strategy, leadership, culture, product mindset, collaboration, ethics, sustainability, technology, privacy, and security.
Whether you're just starting out or looking to take your efforts to the next level, “What’s the BUZZ?” is the perfect resource for staying up-to-date on the latest trends and best practices in the world of AI and automation in business.
**********
“What’s the BUZZ?” is hosted and produced by Andreas Welsch, top 10 AI advisor, thought leader, speaker, and author of the “AI Leadership Handbook”. He is the Founder & Chief AI Strategist at Intelligence Briefing, a boutique AI advisory firm.
What’s the BUZZ? — AI in Business
State of the Industry: Are AI Agents Ready for Primetime? (Jon Reed)
Is your AI strategy built for sustainable impact—or stuck chasing the next big trend?
In this episode, host Andreas Welsch welcomes enterprise tech analyst Jon Reed for a candid discussion on what’s actually happening with AI agents in business today. Drawing from recent enterprise conferences and frontline conversations, Jon shares hard-won insights into what works, what doesn’t, and why so many AI projects still fail to deliver meaningful outcomes.
Discover the foundational shifts organizations must make to move beyond pilots and hype cycles:
- Focus on AI readiness over AI-first: Learn why leading with experimentation, not mandates, sets the stage for responsible and scalable AI adoption.
- Build a culture of trust, not fear: Understand how shadow AI is spreading—and how safe sandboxes and clear communication can turn experimentation into impact.
- Prioritize business outcomes, not abstract autonomy: Explore how hybrid agent-human designs deliver ROI faster than full automation—and why “good enough” might be good enough to start.
- Treat AI like a data project: From agent performance measurement to governance protocols, hear why the enterprise AI stack demands more than plugging in a model.
Whether you’re a CIO navigating vendor noise, a business leader seeking ROI, or a transformation leader building internal alignment—this episode delivers grounded strategies and real-world guidance for leading in the era of Agentic AI.
Tune in now to explore what it takes to make AI both productive and practical in the enterprise.
***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.
Level up your AI Leadership game with the AI Leadership Handbook:
https://www.aileadershiphandbook.com
More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter
Andreas Welsch: Hey everyone, and welcome to What's the BUZZ?, where AI leaders share how they have turned hype into outcome. There's no shortage of news around AI agents and what they mean and what they bring for businesses, for the economy, and for professionals alike. Today, we'll take a look at the state of AI agents, and I'm not doing that alone but together with Jon Reed, one of the esteemed analysts in the community. Jon, I'm so excited to have you on. Thank you for making time for us.
Jon Reed: Yes, I enjoy our dialogue. Thank you for commenting smartly on my SAP AI deep dive, and it's good to be here. And I hope this feels more like a discussion to our viewers, because I'd really like to hear your take on my views as well.
Andreas Welsch: Fantastic. So why don't we jump right in for this one, and we'll go a little longer than usual to have enough time. Now, I know you've spent a good amount of time with different vendors at different conferences, which is wrapping up the first half of conference season this year, and I'm really curious: what are you seeing when it comes to agents? It sounds like everybody's talking about it. Many sound alike. What do you make of it? What's real? What's the state of it all?
Jon Reed: That's a really good question, and look, we may not get to the bottom of what's totally real even by the time we finish today, but I have just come off an extensive series of user conferences and events, both pressing vendors on these issues and also talking with customers. What we are seeing on the event circuit this spring is the beginnings of the first documented agentic use cases, where we can actually talk to a customer who is live with various agents and can speak to how they got there, what the challenges are, and in some cases what some of the results are. But I would say that we are still at a pretty early stage in terms of comfort level between customers and agents, and it's primarily aggressive early adopters who are taking the stage at the moment. And in fact, and this is what I was hoping to get into with you a bit, I think we're actually at a really crucial crossroads in enterprise AI right now. I would say there are a couple of forks in the road that are really important for companies. One of them is to decide what kind of AI company they want to be, and the second is to start taking some positions on autonomy and what that means in an agentic capacity. That's been a big topic this spring, and I'm happy to get into it further.
Andreas Welsch: Awesome. Like I said, I'm really looking forward to it and hearing what you're seeing, and especially seeing the first customers and companies that are not just playing around with this, but who are actually thinking about deploying it and who are deploying it. I was fortunate to attend a couple of vendor sessions and events over the last couple of weeks as well, and I saw what you shared just now: companies and IT leaders who say, here's how we are approaching this. Yes, we're looking at this through an agent automation lens; we need to make sure it's secure, it's safe, there's governance around this. So it's great to see this come to the forefront very quickly and very early on. I see a lot of discussion about protocols like the Model Context Protocol and agent-to-agent protocols, discussions around how we make sure that agents, when they communicate with each other, communicate safely and securely. I don't know how many more times I need to mention security and safety, but it's good to see that all of that is top of mind for many leaders. One thing that I heard the other day that surprised me to some extent is that leaders have said, hey, we are beyond this point of defining an AI strategy. And to me, it seems that's mainly the large organizations that have been doing this for a while now, that woke up two years ago when ChatGPT came around and said, we need to figure out our AI strategy. Now they have it. They might not even have a dedicated AI budget anymore; it's part of the IT budget. What are you seeing there? Is that the new normal, or is it really just front runners at the moment that are thinking that way?
Jon Reed: Yeah, I would take a little bit of issue with companies that say that they have that. I'm not gonna say that no company has a coherent AI strategy, but for the most part I feel like I could poke holes in most of them in terms of what their employees are really feeling and responding to. For example, what is your stance on headcount reduction and AI? How are you using this? Is it just a work enabler for you, or are you actually reducing, for example, new hires? There is some evidence of dramatic moves. Moderna made some headlines recently by fusing HR and IT together. There are some interesting questions around that, but at the same time, Moderna itself, I think, is still feeling its way through how some of this is gonna work going forward. So I think we're actually at a pretty early stage, and I don't think many companies have really defined this well. In fact, I think the AI-first mantra is really problematic in a lot of ways, which we can get into. The thing I wanted to start with, and I do want to get into all of this with you, is to take a step back for a minute, because I think it's important for people to realize how we got here and what I'm trying to do. I have an interesting role in all of this because I don't really have a dog in this fight. I do have some passionate views on AI, but I don't have a dog in the fight in terms of what technology companies adopt. I don't sell technology. If AI works, that's frigging great. If blockchain gets revamped next week and works better, and of course that's not gonna happen, then that's fine with me. I'm pretty tech agnostic. I'm looking for successful outcomes. And one interesting thing for me is that there are so many polarizing debates right now. The Apple reasoning white paper, for example: I wrote about that and how it polarized in such a way that I think it was hard, in the middle of that, for enterprise decision makers to get useful takeaways, of which there were plenty in that report that they could actually use, which is way more important than an esoteric discussion about reasoning. But one of the things I think is really important is that I don't just study enterprise projects. I study culture, I study scientific breakthroughs in AI, I designed my own training program that I spent thousands of hours on. That doesn't make me the ultimate expert in the world, but I think it brings an interesting perspective that combines different approaches, not just an enterprise discipline. And here's why that matters: a lot of enterprises make a big mistake. And I have some fresh mistakes and takeaways for you that I prepared just for your show, by the way. But they make a mistake in falling for the vendor rhetoric that in a few months these problems are gonna get solved. What I'm trying to do is help enterprises see the pros and cons, and every technology has them. In fact, what we're dealing with now is a subset of a deep learning discipline that really goes back decades. And it was at least two decades ago that I started seeing white papers criticizing some deep learning flaws around things like causality versus correlation, inability to come to proper, accurate decisions outside of the distribution of the training set, and lack of a world model or a basis in actual factual grounding.
These are things that have never actually been solved, and so when vendors say this is gonna get solved in three months, just put this in, a real BS detector goes off for me. That's why the history of this is important: people have been trying to solve these problems for a long time. What happened over time is there were some really momentous occasions where deep learning started taking the forefront over other forms of AI. Then there was a huge, really important paper, "Attention Is All You Need" (2017), that launched the transformer technology on which generative AI is built. And then you had a really important commercial breakthrough with ChatGPT that popularized the technology, which is of course so important. But it's really important to keep in mind that the people who developed this technology originally were taking a moonshot for artificial general intelligence. That's what they were going for. They weren't trying to build enterprise productivity systems. It's really important to understand this. So what happened was we had this popularized ChatGPT technology, and then it started to become, what can we do with that in an enterprise context? And of course what happened was a lot of disappointing results. So you read a lot about project failures and underperforming projects. There was a big study, primarily Danish, that came out not long ago that looked at, I think, 25,000 users at 7,000 companies using generative AI solutions with no productivity increases or gains. The data from that is now a couple of years old, but the point is there have been some discouraging results. But in the meantime, what's happened is that a lot of vendors, and you could pick whichever you want: SAP or Salesforce, or ServiceNow or Workday, Oracle to some extent, the big hyperscalers; they sat down and said, how can we make this technology more accurate and more useful in an enterprise context? And that's the kind of stuff that you and I look at all the time. Things like RAG were developed as part of that. RAG, retrieval-augmented generation, is a way of bringing context and new data into the LLM's context. And of course now, in the agent scenario, RAG is just one of many tools that an agent can call to perhaps be smarter, be more real-time aware, maybe do math problems since LLMs suck at math, all these different things that are being developed as part of this enterprise architecture. But it's important to keep in mind that the folks who were chasing AGI consider this kind of stuff band-aids; a lot of them have moved on to other pursuits. It's just important to keep that context, because in recent years the idea was, let's just keep scaling and eventually we're gonna get there. But now we've really hit the wall with scale, and in a way that's been a healthy thing. There's no more data to train on, really. There's specialized data, like industry data, but there's no more general data to train on. That's what has prompted this revisiting of things like reasoning models and what we call test-time scaling scenarios that bring in more data at the point of inference to try to make these machines smarter.
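To make the RAG pattern Jon describes concrete, here is a minimal sketch: retrieve the most relevant internal documents first, then hand them to the language model as grounding. The toy bag-of-words retriever, the sample documents, and the placeholder generate() call are illustrative assumptions, not any vendor's API; production stacks use dense embeddings, vector stores, and a real model behind that call.

```python
# Minimal sketch of retrieval-augmented generation (RAG): retrieve relevant
# enterprise context first, then pass it to the language model as grounding.
# The retriever and generate() are illustrative stand-ins, not a vendor API.
from collections import Counter
import math

DOCS = [
    "Employees accrue 2 days of paid leave per month, capped at 30 days.",
    "Purchase orders above 10,000 USD require director approval.",
    "The quarterly supply chain review covers supplier lead times and defects.",
]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real systems use dense vector models."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by similarity to the query and return the top k."""
    q = embed(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def generate(prompt: str) -> str:
    """Placeholder for the LLM call; swap in whichever model the stack uses."""
    return f"[model answer grounded in a prompt of {len(prompt)} characters]"

def rag_answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return generate(prompt)

print(rag_answer("How much paid leave do employees get?"))
```

The design point is the one Jon makes: the model is not retrained on your data; the retrieval step injects current, company-specific context at inference time, which is why data quality and access control sit at the heart of these architectures.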
The reason I give you all this background is to realize that we're making the most of a technology that wasn't quite built for this purpose. So that's why, when you run into things like agent-to-agent protocols, you say, yeah, but these agents don't have hundred percent accuracy rates. They land somewhere in the 60 to 98% range or what have you, based on how sophisticated they are, maybe how well they're trained, all that. And then you set them loose in a bunch of agent-to-agent discussions and you have compound error risk and things like that. I think when enterprises take a step back and understand this whole drama that's played out and led us to this point, it helps them understand that they can still use these technologies for many valid use cases, but it's not like in three months or six months all of this is gonna get solved. It's just such a crucially important point to me, because again and again I hear vendors saying, in three months or six months this is all gonna be fine, there won't be any hallucinations. I'm sorry, folks. This has been a problem for 20 years. They're trying to fix this. There's a trillion-dollar market at stake for anyone who can create a more generalized intelligent system that doesn't require five tools and multiple agent calls back to the language model and verification steps. There's a big prize waiting for a simpler architecture. But in the meantime, what do we have? We have more complex architectures. However, those complex architectures can still make money and get good results for companies.
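To put a rough number on the compound error risk Jon mentions, here is a back-of-the-envelope sketch: if each agent in a chain completes its step correctly with some probability, the chance the whole chain comes out right is roughly the product of those probabilities. The accuracies and step counts below are illustrative, and the independence assumption is a simplification.

```python
# Back-of-the-envelope illustration of compound error risk in agent chains.
# Assumes each step's accuracy is independent, which is a simplification.
def chain_success(per_step_accuracy: float, steps: int) -> float:
    """End-to-end success probability for a chain of equally reliable steps."""
    return per_step_accuracy ** steps

for acc in (0.98, 0.90, 0.60):
    for steps in (1, 3, 5, 10):
        print(f"per-step {acc:.0%}, {steps:2d} steps -> "
              f"end-to-end {chain_success(acc, steps):.1%}")

# For example, 90% accuracy per step over 5 steps is only ~59% end to end,
# which is one reason human checkpoints and verification steps matter.
```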
Andreas Welsch: Yeah, awesome. I think that sets the stage perfectly, right? And we don't even need to go back as far as the 1950s, when the term AI was coined. With this whole agent-to-agent communication, the way I think about it is in human terms. I usually don't like to compare AI and humans, for the reason that AI is not human and doesn't have human capabilities, but let's just play along in terms of communication. We have this large language model based capability or technology that's based on language, right? We communicate and converse in language, and how many misunderstandings do we have? If you're married, your significant other can tell you how many times you didn't really get the message. So now you have these agents that communicate based on language. Language can be ambiguous in many cases, right? To your point, compound error rates and so on, all because it's based on language and the communication and articulation of what the goal is that the next agent should be looking at, and the interpretation of that. In a sense, whisper down the lane. Nonetheless, I feel there's also a big opportunity to improve on how industries and consortia define these protocols. If you think back to the early protocols of the internet, things like DNS or HTTP and other protocols, they're super, super simple. Security wasn't as big of a thought in designing those. Now we've moved on, so we can make it better; we can make it different in that sense. But I do agree with your point that the LLMs still need managing and guardrails, right?
Jon Reed: So you make a really good point, and it's really important that we have these protocol discussions, but I do want to emphasize that I think those discussions are generally getting a little bit ahead of where a typical customer is right now, though they do need to be happening in parallel. For example, one thing I'm pretty encouraged by is that Google just this week turned over the A2A protocol to the Linux Foundation, and not only has Google signed off on that, but also Microsoft and AWS. How often do you see those three players involved in the same thing? Instead of launching competing protocols, for example, that's a really encouraging sign that these vendors understand the urgency of, like you say, establishing something of a common translation layer. And some big enterprise vendors and software vendors are involved also; there's a long list. But in general, right now, if I were advising companies, I would say save the agent-to-agent workflow connectivity questions for a little bit and start identifying some more out-of-the-box use cases that can deliver some business results, get your users more comfortable with this technology, and start to build on that, with the understanding that eventually, yes, you're gonna want to connect to processes outside of company walls and such. But at the moment, I know you get excited about watching agents communicate, but for the average customer, I think just getting one basic agentic workflow in place and successful, and then building on that, is often a good first step. And a lot of customers aren't there yet.
Andreas Welsch: I would agree. Many of the conversations that I have, especially with upper mid-market customers, they say, where do I even start? All I hear is AI, and I need to do something or I'm falling behind. Can you help me with my strategy? Okay, we have a strategy; can you help me prioritize? What are the things that I should be looking at? Or maybe I've already rolled out something; can you help me and our teams get people on board so that they know how to use it, and how to use it well? And how do we communicate that this is there to help you, not to replace you? That's the other big discussion and the other big fear. Technology is all nice and fair, and I agree with you, protocols need to be there at some point. Hopefully many companies will leverage them in whatever they build. But I think there's an additional angle at the moment that's been lingering there and is a lot more urgent, and that's the cultural piece, right? In the fall, Slack came out with their workforce analysis study, and they found, among I think four or five thousand participants, that 46% said, I don't even tell my manager that I use AI at work, because they'll think I'm lazy, I'm incompetent, or I'll just end up with more work now that I'm 28% more productive. Now, as is the case many times, you might think that's a vendor-focused study. But then Duke University came out in May, and they actually confirmed those findings. They had about 4,500 participants in their study and came to the same conclusion. People are using AI, so shadow AI usage, right? Those of you who have been around the industry for a while have seen shadow IT. Now we have shadow AI. And I think that's a big problem for organizations to address, right? It doesn't matter what technology you use or what vendor you build on, but how do you encourage your people to use it, to share with others how they're using it, and actually make this something that's attainable, something that we want to share, as opposed to being punished, or being given the feeling, the perception, that you are being punished because you're using some advanced technology? I think it's actually quite the other way around.
Jon Reed: Totally agree, and a lot of foresight there, because I made a list for you: my top three to four mistakes customers are making and my top three to four success factors. I won't reveal 'em all at once, but you hit on two of them just in that one segment, and I'll get to it. One thing I should have mentioned is that, while I was a little bit critical about some of the limitations of the technology, the other thing I fight against is that employees can come into the workplace with a very negative perception of these tools based on bad experiences they've had using them, where they cough up gibberish or things they don't want, and also, like we discussed, some failed initiatives using more generic versions of the technology that aren't trained on quality enterprise data. So a big part of what companies need to do in order to get more successful is to build momentum with more accurate AI architectures that are built on quality data sets within their company. But as they do that, the other mistake they can make, and this is right on my list, is failure to create a safe AI sandbox with the requisite permissions, because otherwise you are gonna have the shadow AI experiences, which carry a lot of potential IP risk as well, by the way. So companies need to cultivate safe environments that are secure and provide the proper role-based access to data, so that you can sandbox your own experiments and play around with this technology without any pressure. And in fact, what I would like to see companies do, and this is one of my other mistakes, is pull back from this mandatory AI-first intimidation stuff of "use these tools or else," and instead set up sandboxes for people to play around in, and set up reward and recognition systems for employees who use those tools to propose cool new workflows, cool new apps, cool new ideas, instead of making this a punitive measure. Make it an exciting measure, and then I think you're gonna see a lot less use of the shadow AI tools, because the sandbox tools are gonna have all the requisite corporate data to build specific value-add use cases for your corporate setting. I think that's one of the biggest mistakes companies are making.
Andreas Welsch: So I'm excited to see how especially large companies are evolving. I remember as far back as two years ago, companies said, hey, look, we are building a large language model playground. Folks in our organization, please don't use public versions of ChatGPT and Claude and whatever, because of IP leakage risks and data privacy risks. We use the APIs, we build a nice UI around it, and then we enable our workforce through a transformation program: here's how you prompt, here's how you use these tools, and we get safer usage that way. The point around now connecting that to data that you have in-house, or expanding that playground into something that's more agentic or more of a workflow, where you can actually build it out and see, does this work for my use case, without having to go to IT, without having to put in a huge request and a business case and what have you to try this one thing out, I think that's going to be exciting to watch: how large organizations, who I think will be at the forefront of this, are going to think about it.
Jon Reed: You made a really important point there. One of the few things I should have mentioned and didn't in my little recap is that the other big lesson from the early generative AI projects, where the stats aren't so great on project success, is incorporating that customer-specific data, but also this move to more agentic systems. That's really this thing around, instead of just having discussion-based interactions through a bot, the bot could potentially start executing actions, either in concert with you as an enabler or, in some cases, as more autonomous steps as well. And it's clear to see that there is more potential value to be had there, right? Because if you're having an interaction with a bot around, what is my work leave time for the rest of the year, and the bot can not only tell you but say, would you like me to block this in your team's calendars, or what have you, it's clear to see how the value of engaging with these tools goes up in an agentic capacity. And we could spend ten minutes debating the definition of agents; I don't think how they're strictly defined really matters all that much, except to understand there's a variety of agents. But in general, when you think about actions, that's a really important variable there. And like you said, a lot of companies are still trying to figure out how to do that, and it's gonna vary a lot by industry. For example, I just got back from one of my most recent shows, Salesforce Connections, and some of the agentic use cases there were a little more advanced for customers. One of the reasons for that is a LinkedIn comment you made earlier about quote-unquote "good enough." Generally speaking, in most industries, experimenting around sales, marketing, and service, even in a live setting, doesn't necessarily have dire consequences, whereas if you're talking about industrial AI, where you're having agents interact with your manufacturing and your shop floor, the stakes are obviously much, much higher before you set those agents loose. In that context of talking with marketing and salespeople, and to an extent service as well (service is a little higher stakes, I would argue, than marketing), the point is there's still this thing around, okay, we can roll out some of these things and experiment with having agents take on more and more steps. And I think those are really healthy discussions to have, as long as workers feel like they're still a part of the picture and valuable and aren't gonna be replaced for no reason. One of the companies I did a use case on, I really liked how they had communicated with their employees around how their roles might change and how they hoped to free them up for more value-add activities. There wasn't an intent to lay people off, and they were very clear about those discussions. I think when you can be clear about those, you can really then move into a culture of experimentation and enthusiasm, rather than one where people feel like, I'm being surveilled and potentially automated out of existence.
Andreas Welsch: So look, I'm working on two new courses with LinkedIn Learning specifically around the topic of how I, as a leader, can encourage and empower my team to use AI and use agents, but do it in a way that's responsible, right? A lot of times we see somebody send you a draft and say, hey, can you take a look at this? And you start reading it and you feel like, hey, come on, you just pulled that straight out of ChatGPT. You didn't read it, you didn't edit it, and now you expect me to take a look at this. Putting myself into the shoes of a leader in an organization, I think we need to set expectations that regardless of the tools that you use, whether it's yourself, whether it's you and a team, or it's you and an agent, the quality needs to be top notch. That's what we expect. It doesn't really matter if it was Jane and John or Jane and five other people that created this; credit where credit is due. How do you manage that, now that these tools are available and your employees are afraid to tell you because they think you might think they're less capable, which again, in my view, isn't the case? So that's one dimension. And then on the other side, as an individual contributor, how do you use the tools responsibly and communicate that to your peers and to your manager?
Jon Reed: I'd love to take your course. That sounds very appropriate, and I totally agree with you. One thing in terms of myth busting that I'd like to put out there: look, shareholders love talk of autonomy, because whenever you can take humans off the line items, the balance sheet looks a lot better. Consequences to culture and society be damned, but it looks better, right? But in fact, there are degrees of agentic autonomy, and it's actually wrong that you can't develop ROI with a hybrid-type approach, in use cases where there are elements of crucial human supervision and handoff but also some agentic autonomy within those scenarios. It's actually a matter of use case design. It's a myth that once you pull humans in, you can't get ROI. But there are certainly times where, and this is where it takes such careful evaluation on the part of companies, having agents push supposedly production-ready code into production ends up costing more work on the back end fixing it than it saved by doing it autonomously. That's why you have to look really carefully at each situation. There isn't some bulletproof formula for your culture, industry, and use case, but it is encouraging to realize that you can have a lot of success with these hybrid designs where humans are still involved at crucial points.
Andreas Welsch: I think so too. And coming back to your earlier point about the detrimental effects of going for an AI-first culture, or at least putting that out there in a memo and saying we want to be AI first: I've been thinking about this rather in terms of how you first of all become AI ready. We've been talking about this for a little bit now. How do you create this culture of experimentation and encouragement, so that people feel more confident, they feel safe about the use, about their job and their wellbeing and their existence to begin with? And then, yes, then we become AI first, because we are enabled to think: where else can I use this? Do I really need this step to begin with? Is there a way that I can use technology to do that? What technology can I use? Maybe AI, maybe not AI. That's perfectly fine as well. It doesn't have to be AI only. So I think there, again, culturally, there's a lot that needs to be done.
Jon Reed: I really like your AI readiness theme, and we could probably talk the rest of the show about what an AI-ready company looks like, but I really do like that, and I don't like AI first, at really any point. One of the reasons I don't like it is this: first of all, there are a lot of different kinds of AI, not just generative AI. But secondly, generative and agentic AI is generally a cost-intensive solution with a very specific set of pros and cons that also has environmental consequences. This technology is not a commodity yet. I just got through criticizing Mary Meeker and Bond's report for implying that it is, and it's not, as far as affordability is concerned, yet. And it's good for some things and not for others. So along with my mistakes, I have my underrated keys to project success, which I will run by you. What I like to see companies start with first is not anything about AI, but something around: what's the most compelling future for you and your role in your industry going forward? How are you gonna compete? No vendors, no technology, but how are you gonna be successful going forward? What do you want to be known for? Do you want to be known for the best customer service? Do you want to be known for the most differentiated, personalized product configuration? What are the things that you really wanna stand out for? Then, with trusted advisors, start having the technology conversation around how that's gonna be enabled, and what's ready to put to work now and what's not. Quantum computing, for example, probably isn't ready yet, but some forms of agentic AI will be. And in some cases, and you and I had messaging about this before the show, some RPA scenarios will actually be part of the mix and are still useful. It really depends. So you need to figure out the different roles different technologies can play. The only thing that I concede, and I had a big argument about this with a CIO on video a while back, was the CIO wanted to say everything should be framed in terms of business problems, not technology. I do think that once you plot that future for your industry, you do need to say, what is our AI strategy specifically? And the reason you need to do that is because your board expects you to have one. So you don't have the luxury in this case of saying it's just another technology, we'll use whatever tool is appropriate in the tool belt. Ultimately, I think that's the right approach, but the board wants to hear something more. So you do need to have an AI strategy, but I would argue, and it goes back to what you just said, it needs to be an AI readiness strategy, which is really a data platform strategy that involves not just AI but analytics, better decision making, democratizing tools. All of that should be part of your AI readiness strategy.
Andreas Welsch: The part that I find so encouraging is that we've seen early movers two, three years ago, even during the machine learning cycle. And I've gone through this myself when I was at SAP, coming out of machine learning into generative AI. I saw a lot of good feedback from the analyst community back then saying, if you've done machine learning and you've done your learnings, now you know how you can apply that to gen AI and what mistakes not to repeat. Now we're coming out of gen AI going into agentic AI. I think there are additional learnings, but the thing that encourages me is that organizations are reaching out to me that are a hundred, 200, 500, a thousand people strong. Not the 10,000 or 100,000-person organizations, right? So now they're looking at this. So there is this majority happening, and smaller organizations are trying to figure out, okay, what do we really do? I have all these different vendors and consultants and system integrators that are whispering in my ear, "I can help you." I have the board that puts pressure on me, that I need to figure this out. To your point, AI strategy, right? What do we do? Where do we start? So to me, that is...
Jon Reed: Yeah, that's why you're paid the big bucks, because those are challenging questions in terms of who to work with and all of that. If I were able to give one piece of advice to the smaller companies you refer to, it would be: unless you are very sophisticated in your approach to AI and data science, which most smaller companies are not, I would strongly advocate picking a vendor or two to really work with to build out what you're doing. Because when you look at the architectures that work, that deliver the good results, and you can go on YouTube and look at this anytime you want, look up things like RAG architectures and accurate agentic evaluation and stuff like that, you'll notice these architectures are very complex. And the different models that provide superior performance are constantly shifting and being retrained, and there are different models for different purposes and all of that. Let the vendors with deep pockets sort a lot of that out for you, and you wanna have conversations with folks like yourself who can help you start thinking about: what does our AI readiness look like? Are there areas where we have high-quality data and a real obvious use case or need where we can get started? Because one of the cool things about some of these products and some of these vendors you can work with is you don't need multi-year implementations to start getting some wins here. And so it's so important to be able to not only say, what is your AI strategy, but, hey, we actually can already notch a couple of wins. We already beefed up our website documentation using an agent, or we already did some other kind of HR-related documentation that new employees needed. Things like that get people started.
Andreas Welsch: Look, I've been saying this for a number of years now, and I think it applied with machine learning, it applied with gen AI, and it does apply with agentic AI as well, since we're talking about this topic: you don't have to build everything from scratch. There are companies, whether it's startups, and I see a lot of activity in the sales and go-to-market space doing outreach, doing go-to-market, doing contact and lead augmentation and data enrichment type things. If that's an area where you have a need, you don't need to build this; there are companies that do this. If you're focusing on marketing and communication, there are lots of companies out there that specialize in that domain as well. And the same with any other business function. So the approach that I usually take is to say, first of all, what is your business strategy? Similar to what you shared: where do you want to go? How can technology help you accomplish that? And then maybe take one or two departments or teams where you want to prove this out. What do they actually do? Where are they spending most of their time? What are the costly tasks and processes that they run? And within that process, are there certain steps that you can now augment or automate with the help of technology like AI? If people feel more confident, then you can increase the amount of technology in that process. But I think there's this human element that's stayed true throughout many technology generations and evolutions, and that is: we need time to adapt. We need time to warm up to this and feel comfortable so we can use it. Now, granted, the time to use it and get it into production has decreased significantly, so we don't have infinite time to play around with this; everybody else is probably doing something similar. But I think there's a tension, or balancing act, between doing something and getting it into production quickly, but also making sure people can follow and can adopt the technology as well.
Jon Reed: Love it. And I think it's fascinating that you and I come at this from such different backgrounds in a way, but are meeting in core agreement on how to move forward with customers. That really pleases me a lot. Let me throw a couple other things in there, and you can riff on them, that are tied into the rest of my success tips, which I think fit in here. One of them is: no AI proofs of concept. Don't do that. Do pilots instead that are live. You might choose internal pilots if you don't wanna do something that's customer facing yet, but do something live. You could pick a use case that isn't high on the risk gradient in terms of the EU AI Act, for example, and pick something modest, but do it live, because you need to see it with your own data. It's so important, and you need to see it in an iterative way where you can improve upon it, perhaps in a co-innovation capacity with whatever vendor you're working with. But don't do POCs. Get going on the pilots; that, to me, is a real key. And treat it like a data project more than a software project. That's a really big key for me.
Andreas Welsch: I remember a couple years ago we had a term for this kind of thing; it was called the throwaway proof of concept.
Jon Reed:Yes.
Andreas Welsch:Is it still a thing? Are people still doing this?
Jon Reed: I still think there's a little too much dabbling, and we still hear about POCs. I heard about a customer doing one and I cringed, but I didn't say anything. But I just wanted to say: listen to what you're hearing on the keynote stage from early adopters and how they're using these tools; they're figuring out what they're good for. And you know what the really cool thing is, and I'm sure you've experienced this so you can probably speak to it: the way the light bulbs go off for users once they start seeing what the tools can do in a particular context. Then they start developing their own things, because they come to their supervisor, they come to their team and say, hey, why can't it do this? Why can't it pull this information? Why isn't it doing that? And you can say, actually, it can do that; let's do that. And that's exciting.
Andreas Welsch: Now, a couple of months ago, Johnson & Johnson was in the news, and I think it was their CIO who said, hey, look, we're changing the approach to our AI program. We've stopped 90% of our pilots or proofs of concept or whatever you wanna call them, the things that are not in production, that are in evaluation, because we see it's actually a very small number of AI scenarios or use cases that deliver the most significant impact. So we're focusing, I think it was on supply chain and maybe one or two other areas. And I remember the reactions online on social media were mixed: oh, did they throw three years' worth of money and time and resources away, or, super, super great, now they're focusing. And I must say I'm in the "great" camp. Those were learnings, and there was an investment that a company and a large organization of that size had to go through to come to that point. But I'm hopeful that along the way they were able to bring people along too. So there's actually another LinkedIn Learning course that I created that just came out, on the topic of mitigating AI project risks, because a lot of times it's not just the technology. Yes, there is data; yes, there are business requirements; but there are also people, right? It's the people who are being affected by this change. It's the people leading the change, when status reports turn from red to bright green the higher they go up in the hierarchy. And then it gets difficult when things take longer, when the results are not what you initially expected them to be, when you cannot prove the hypothesis. So when things drag out and they're not successful, at some point you need to look each other in the eye: do we stop this, or do we throw even more good money after bad? And so how do you avoid this? But still, I think if you do that on a smaller scale, maybe if you don't spend three years, especially in a smaller organization, there are important learnings to be made that help you and prepare you for the next step.
Jon Reed: So one really interesting point that I didn't get to, that really fits in well here, I think, is that we have to be careful not to carry our jaded experiences with more trivial generative AI experiments over to the real impactful stuff. So for example, I'm a Google Workspace customer, and I got a note the other day, and this got a lot of social media hits I saw, about these price increases pertaining to the AI. Google was bragging about all the AI value it's been delivering, and therefore it's raising the prices. I asked my colleagues, have any of you gotten any value from any of this technology across the productivity suites we use? And the answer was no, not any value whatsoever. I think that's a pretty damning indictment, but that's not to say that Google will never deliver real value with AI. I would argue that Google delivered tremendous value with an algorithmic form of AI when it built Google Search a long time ago. So it's not like Google doesn't know how to deliver value; it's just that a lot of these productivity, generative AI things are really not where the impact is. And if Johnson & Johnson was playing around with some of that stuff, I can't really blame them for saying, yeah, we're not making that much money helping people write stuff that has to be rewritten anyway. One of the really interesting things is taking a step back from this urge to replace what humans do and looking at what machines are really good at that humans are not, because a lot of the most exciting AI developments, such as AlphaFold, are about setting machines loose, in this case in a healthcare setting, to do protein-folding-type things that human brains and human processing can't do. And one thing I love: you say, oh, I'm really scared to bring generative AI into a supply chain or agentic AI into a manufacturing setting. Start with quality assurance, because you can set agents loose on analyzing your processes and spotting anomalies and problems. A lot of vendors have this technology, and you probably have experience with it. Surface those anomalies to your managers and shop floor leaders, and you'll start to see, I think, much more powerful impact than, hey, do you want me to write this LinkedIn post for you in generic language?
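For a sense of the quality-assurance anomaly spotting Jon points to, here is a minimal sketch: a rolling z-score over process measurements that flags outliers for a human to review. The defect-rate data, window size, and threshold are made-up illustrations; production systems from vendors in this space are far more sophisticated and feed richer signals than a single metric.

```python
# Minimal sketch of anomaly spotting for quality assurance: flag process
# measurements that deviate strongly from the recent mean so a shop floor
# lead can review them. Data, window, and threshold are illustrative.
from statistics import mean, stdev

def flag_anomalies(readings: list[float], window: int = 10, z_threshold: float = 3.0):
    """Return (index, value) pairs whose z-score vs. the trailing window is high."""
    flags = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma and abs(readings[i] - mu) / sigma > z_threshold:
            flags.append((i, readings[i]))
    return flags

# Simulated defect-rate readings with one obvious spike.
defect_rates = [0.021, 0.019, 0.020, 0.022, 0.018, 0.021, 0.020, 0.019,
                0.022, 0.020, 0.021, 0.019, 0.085, 0.020, 0.021]
for idx, value in flag_anomalies(defect_rates):
    print(f"Reading {idx}: defect rate {value:.3f} looks anomalous; escalate for review.")
```

The point of the sketch is the division of labor: the machine watches every reading and surfaces the outliers; the shop floor leader decides what to do about them.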
Andreas Welsch: Right. Now, we've covered a lot of ground over the last 40 minutes: from where did we start, how did we even get here, to how are people using this, how should they be using it, how do they want to use it, and what do organizations and leaders need to do to drive more genuine adoption and bring the people along on the journey. But before we wrap up, Jon, I'm wondering, any famous last words when it comes to the state of AI agents in the enterprise as we head into the summer? And by the way, I'm hoping that we can do a few more of these over the course of the summer.
Jon Reed: Yeah, I think you and I have some potential to continue this dialogue from a few different angles, so that'll be cool. I think the main thing that I would leave you with, and you may have a couple observations on this, is that it's good to align with everyone: okay, you have these sandbox experiments and you're cultivating this culture of experimentation, which is great, but when it comes to these pilots and launching those, it's good to agree with everyone on how we're gonna measure success and what that looks like, and also how we're gonna evaluate the performance of agentic tools. I've spent a lot of time on diginomica this spring writing about evaluating RAG and evaluating agents and how to do that, because there are a lot of cool vendors, one of which I wrote about, Galileo, but there are other ones, and there are open source tools as well, that allow you to begin to monitor the performance of these agents in real time and start to figure out where the glitches are and make those corrections. So think of it like a continual improvement type of technology. It's radically different from that classic ERP go-live of old, where you turn it on and you deal with the bugs and then it's basically working for you. I could poke some holes in that mentality too, but the point is that's what software used to look like. This is not that. This is a continual state of measurement, evaluation, performance, and then bringing this back to your user base and saying, are we doing our jobs better? Are we happier? I don't know why more people don't talk about that. Are we happier? Are we having more impact with our customers? All of that stuff fits in.
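As a rough illustration of the continual measurement loop Jon describes, here is a minimal sketch that records each agent run, scores it against an expectation, and tracks the pass rate over time. The record fields and the keyword-based scoring rule are assumptions for illustration; the tools he mentions (and open source eval frameworks) do this with much richer instrumentation, rubrics, and judges.

```python
# Minimal sketch of continual agent evaluation: record each run, score it
# against an expectation, and track the pass rate so regressions surface
# quickly. Fields and the scoring rule are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentRun:
    task: str
    output: str
    expected_keyword: str  # simplistic check; real evals use rubrics or LLM judges
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    @property
    def passed(self) -> bool:
        return self.expected_keyword.lower() in self.output.lower()

class EvalLog:
    """Collects runs and reports an aggregate pass rate over time."""
    def __init__(self) -> None:
        self.runs: list[AgentRun] = []

    def record(self, run: AgentRun) -> None:
        self.runs.append(run)

    def pass_rate(self) -> float:
        return sum(r.passed for r in self.runs) / len(self.runs) if self.runs else 0.0

log = EvalLog()
log.record(AgentRun("summarize return policy", "Returns accepted within 30 days.", "30 days"))
log.record(AgentRun("quote leave balance", "You have 12 days of leave left.", "12 days"))
log.record(AgentRun("quote leave balance", "I cannot find that information.", "12 days"))
print(f"Pass rate so far: {log.pass_rate():.0%}")  # 2 of 3 runs pass -> 67%
```

The loop only pays off if someone reviews the failures and feeds corrections back into the agent's data, prompts, or tools, which is exactly the continual improvement mindset, rather than the one-time go-live, that the conversation points to.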
Andreas Welsch: I love it. And especially the last part, not just the happiness factor, but how much more can we actually do with the help of technology? So again, coming back one more time to AI first, which seems to take a very cost-centric approach: how can we take out cost, take out resources from our P&L? I think the other opportunity is really, what else can we achieve if we now have an army of agents or teams of agents at everybody's disposal that they can use?
Jon Reed: And by the way, I do realize that companies are obsessed with operational efficiency right now because of the way the economy is currently and the macroeconomic challenges and all of that. There are not that many companies, aside from maybe Nvidia, that have the luxury of just growth all the time. And even Nvidia has had stumbling blocks of late. So the point is, if I sound idealistic when I talk about things like happiness and experimentation, it might be surprising, but those things actually get back to operational efficiency, because what happens is that as employees become more versed with these tools, they do punch above their weight in certain areas. And guess what? You may not have to hire as many people for that team next year and all of that, but that's a much less punitive way of going about it than prematurely reducing headcount, which, by the way, we see a lot, and then having to hire people back on contract, and then your customers are complaining, and all of that. There are much better ways to go about this that all have to do with employees embracing these tools, overperforming, and then your operational efficiency is a happy byproduct of doing things the right way.
Andreas Welsch: So it also seems that paying more attention in your employee surveys, if you do run them, to employee engagement indexes, to work-life balance, stress factors, happiness at work, all these factors that large companies ask their employees to submit feedback on, I think those are becoming a lot more important, and we should pay a lot more attention to them.
Jon Reed: Yeah. Instead of AI first, I would like to know: what is your talent-automation balance, and have you figured that out? And if you haven't, what are you trying to do to increase both?
Andreas Welsch: Alright, Jon, it was a pleasure having you on. Like I said, we've covered a lot of ground. I'm already looking forward to our next episode together, and to seeing what has changed and developed until then. So Jon, again, thank you so much for your time and for all your insights.
Jon Reed: Thanks for the great discussion. I learn a lot from you. Thanks.