
What’s the BUZZ? — AI in Business
“What’s the BUZZ?” is a live format where leaders in the field of artificial intelligence, generative AI, agentic AI, and automation share their insights and experiences on how they have successfully turned technology hype into business outcomes.
Each episode features a different guest who shares their journey in implementing AI and automation in business. From overcoming challenges to seeing real results, our guests provide valuable insights and practical advice for those looking to leverage the power of AI, generative AI, agentic AI, and process automation.
Since 2021, AI leaders have shared their perspectives on AI strategy, leadership, culture, product mindset, collaboration, ethics, sustainability, technology, privacy, and security.
Whether you're just starting out or looking to take your efforts to the next level, “What’s the BUZZ?” is the perfect resource for staying up-to-date on the latest trends and best practices in the world of AI and automation in business.
**********
“What’s the BUZZ?” is hosted and produced by Andreas Welsch, top 10 AI advisor, thought leader, speaker, and author of the “AI Leadership Handbook”. He is the Founder & Chief AI Strategist at Intelligence Briefing, a boutique AI advisory firm.
What’s the BUZZ? — AI in Business
Enterprise AI: What's Next with Agents (Jon Reed)
AI agents are arriving fast, but the winners will be the organizations that pair them with trusted data, clear processes and security — not those who chase scale alone.
In this episode, Andreas Welsch talks with analyst and diginomica co-founder Jon Reed about what’s actually next for agentic AI: where real value is emerging, why many pilots stall, and how to move from hype to measurable outcomes. They cut through the noise around model releases and scary headlines, and focus on practical pathways for enterprise adoption.
Highlights include:
- Why throwing scale at models isn’t a cure-all and how enterprises succeed by constraining problems and adding domain context
- The “last mile” problem: when out-of-the-box LLMs miss industry-specific language, and when to fine-tune or use smaller domain models
- Data readiness as a multi-year journey — and how to weave data cleanup into early AI use cases so you can get wins fast
- Real business examples (procurement long-tail, finance workflows, shop-floor QA) where agents and ML already drive measurable impact
- Security, trust and identity for agents: new threat vectors, authentication challenges and when agent-to-agent interactions create additional risk
- Practical tech signals to watch: RAG + tool-calling, MCP/A2A protocols, zero-copy data approaches, and when to favor internal wins over multi-vendor orchestration
- People and culture: empower middle managers and junior staff to experiment safely, avoid short-sighted headcount cuts, and build communities of practice
- Legal and fairness considerations, including why frameworks like the EU AI Act offer useful risk-driven guardrails
If you lead an AI initiative, build data platforms, or are responsible for secure automation, this conversation gives realistic next steps — from choosing the right model size and partners to hardening agent deployments and capturing the first ROI moments.
Listen to the episode to learn how to move past hype and design agent strategies that actually deliver business outcomes.
Questions or suggestions? Send me a Text Message.
***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.
Level up your AI Leadership game with the AI Leadership Handbook:
https://www.aileadershiphandbook.com
More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter
Today we'll talk about what's next with AI agents, and I'm so excited to have Jon Reed, esteemed analyst and co-founder of diginomica, with me again. Hey, Jon. Thank you so much for joining.
Jon Reed:Yeah, so glad to be back. It's really been great to have this dialogue going this summer, and it's been great to get the feedback on it as well. And I'd like to think we're getting somewhere, but things are also moving really fast, so
Andreas Welsch:we'll see
Jon Reed:how we do.
Andreas Welsch:Yeah, exactly. Look, the first two episodes that we've run were a huge success. We had lots of great feedback, like I said, lots of interest, and people engaging in a good dialogue. So if you have any questions in the audience, please feel free to put them in the chat. We're super excited to hear from you and we'll answer them as we go along here. But Jon, without further ado, obviously it's been an amazing summer. We've come from the spring event series. We are seeing agents mature a little more. We're talking more about security, it feels, and trust and authentication and these kinds of things, but to me those are just some of the topics of what's next. What are you seeing? What's next for you?
Jon Reed:Every day is a different day, and it's actually been, I think, a pretty significant week. It's gonna be interesting heading into the fall event season. A lot of big shows on deck. I know you're going to a number of them also, and we're really gonna get a gut check pretty soon on customer adoption and where everything stands. But one thing I thought was really fascinating, just in the last week, was the underwhelming release of GPT-5, with even a lot of AI fanboys disappointed in the release. And I think the big takeaway, more than anything, is that traditional approaches of just throwing scale, data scale, at AI problems aren't working very well. We're seeing a real divide now between the big AI companies and this pursuit of artificial general intelligence, which we could debate, versus a really different approach to enterprise use cases that I think is far more promising, one that recognizes more technical limitations and constraints but then applies them very cleverly to enterprise and industry scenarios. And we could talk a little more about that and why I think that's a really important distinction, because you can run the risk of thinking that if these models are underwhelming, then they're not gonna work for my project. Along those lines, this week had another instructive thing, because we saw a ton of headlines for this MIT study claiming that 95% of generative AI projects fail. Now, what's interesting about that is, as soon as it came out, a bunch of people published little articles on it, none of which I think were particularly insightful. And then it turns out to be a group inside of MIT that is working, I think, on some specific stuff around agentic AI. They immediately pulled the study offline. I found it, but now we're all talking about a study that no one can actually really find. I think that's a really good summary of the potential for confusion. And actually, when you dig into the study, and we can maybe talk a little bit about it today, it's not this damning "we can't do anything with AI" thing. Alongside the finding that 95% of the pilots they looked at, in a pretty small sample size, by the way, a few hundred companies, didn't get past the pilot phase, I think the tantalizing part is that the 5% that did had really big success, right? So that's a really interesting thing too: not just small success, but big success. And the other interesting thing is that when you dug further into the study, and I don't know totally how to reconcile this yet, there were different numbers indicating that around 30% of projects were successful on their own, but closer to 65% were successful when working with an external vendor. And this is something we talked about in past episodes, where I think a lot of customers right now are going to avoid trying to recreate their own cutting-edge AI architectures and turn instead to trusted vendors to help manage the complexities of these environments. So anyway, I just think it's really interesting, because just in a week's time we had so many interesting juxtapositions between different ways of thinking about the exact same thing.
Andreas Welsch:Yeah. It's hard to comment without coming across as being a little cocky. But I think if you've been in the trenches, if you've seen this play out over the last few hype cycles, machine learning being one of them, you know that it's not that easy. It's not that simple. It's not just about the low-hanging fruit and "let's boil the ocean and see what we can do with this magical technology." It's about: where do we have a problem? Can we measure the impact? And can we solve it with the data we have and with the technology that's available? In very few cases is technology the limiting factor, right? It's either that we haven't really understood or articulated the problem well enough, or we're not bringing our people along. Yes, there's technology involved as well, but like we've also seen in previous releases, whether it's GPT-5 or 4 or 3, some were more groundbreaking than others, but technology itself isn't necessarily the limiting factor. I like that you mentioned the MIT study. I was going to bring that up too. I need to update my slides. I have to tell you, for years I've been saying it's 85%; even here on the back cover of the AI Leadership Handbook it says 85% of AI projects fail, which goes back to 2018, when Gartner said they don't deliver value, not just fail. Now we're at 95. Okay, fine. But I think it goes back again to this fundamental part of: have we even understood what the problem is that we're trying to solve? The second part for me, also something that I've been sharing for years, is: don't build everything yourself just because you can. There are vendors, there are companies out there that do that for a living, that do that in the areas where you use software anyway. So look at where the opportunities are: is there AI in these products we already use? How do we get it? Is it an additional subscription? Is it a higher tier? Does it even add value? And start there, instead of first building your AI platform and everything else around it for this one little use case that might or might not play out the way you hope.
Jon Reed:Absolutely. And I wanna talk a little more about this AI readiness concept that we touched on in past episodes, 'cause we did get some feedback on that. But one thing I wanted to also share is a really interesting juxtaposition with this broader failure-rates discussion and the underwhelming GPT-5 release, and with the other vendors, whether it's Grok or Anthropic, that are also releasing more incremental improvements rather than drastic jumps like we had between GPT-2, 3, and 4. As a contrast to that, I was watching a video on YouTube. I'm not gonna call out the video, because I'm actually somewhat critical of part of it and I don't wanna get into a whole back and forth with this person, but it had to do with turning LLMs into domain experts. And I think this is the heart of the enterprise play. What this individual said is that when it comes to vertical AI applications, the systems you build for incorporating your domain insights are far more important than the sophistication of your models and your pipelines. They were saying the limitation these days is not how powerful your model is, but whether it understands the context in your industry for your customer. Does it perform the way you need it to for that industry? And I think that's really interesting, because how do you apply these large language models to specialized industries? There's a last-mile problem. And the last-mile problem has to do with giving that model the specific information, context, and understanding for that particular industry setting. That becomes really important, and when you take that on, there's really promising ground to be made. But the key thing to understand is that LLMs out of the box aren't gonna understand a lot of these specialized contexts, because even though they've been trained on the vast internet, they haven't necessarily been trained on these very specific domains and this customer-specific data, for a variety of reasons, including the privacy part. So that's the really interesting contrast, I think. And so you have someone like this talking about achieving upwards of 90, 95, 98% accuracy rates in those domains by focusing on specific, constrained problems, and that's really exciting. Now, one of the really interesting questions is: is that enterprise promise, which I think people are starting to see, enough to sustain the whole AI economy? The answer is uncertain, because the big AI companies are counting on valuations that extend pretty far beyond that. This doesn't necessarily save the whole AI economy, but I think what it does is create opportunities for people like you and me, and opportunities for vendors that focus on the enterprise, to bear down on these industry problems and provide LLMs with the context they might not have.
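To make the "last mile" idea concrete, here is a minimal sketch, assuming the OpenAI Python SDK as the client; the glossary entries and the `retrieve_docs` helper are hypothetical stand-ins for a company's own industry vocabulary and retrieval layer, not anything Jon or the video specifically built.

```python
# Constrain the problem instead of reaching for a bigger model: inject an industry glossary
# plus retrieved company documents into the prompt before asking the model anything.
from openai import OpenAI

client = OpenAI()

DOMAIN_GLOSSARY = {
    "lost time incident": "a workplace injury that causes an employee to miss a shift",
    "near miss": "an unplanned event that did not cause injury but could have",
}

def retrieve_docs(question: str) -> list[str]:
    """Hypothetical retrieval (RAG) step over the customer's own documents."""
    return ["Plant 7 reported 2 near misses and 1 lost time incident in July."]

def ask(question: str) -> str:
    glossary = "\n".join(f"- {term}: {meaning}" for term, meaning in DOMAIN_GLOSSARY.items())
    context = "\n".join(retrieve_docs(question))
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # a modest model is often enough once the problem is constrained
        messages=[
            {"role": "system", "content": "You are a plant safety assistant.\nGlossary:\n" + glossary},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(ask("How did Plant 7 trend on safety last month?"))
```

The design point is that the glossary and retrieval system, not the model size, carry the domain knowledge.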
Andreas Welsch:I think so too. And to me there's so much potential that is still untapped. I see that largely, like we've talked about in our previous episodes, in the small and medium-sized business sector: people and organizations who need to drive more automation, to increase the speed their business is able to function at, but who might not necessarily have the deep pockets or the resources that larger or global organizations have to get consultants to fix this or to build an army of engineers to do it for them. And I think there is a huge opportunity in at least two areas. One: how do we use the tools that are available to us? Again, that could be things like Copilot or ChatGPT or some other tools. Some basic AI literacy: how do I use these tools safely and responsibly? How do I make sure that whatever I pass on to my colleague or to my customer has been checked by myself or by somebody on the team, so I don't just generate AI slop and send it on? That's on one hand. On the other hand, how can we, in very concrete ways, optimize and automate some of our standard processes? Maybe that's an RFP process: we send an RFP out to a number of suppliers, then we get responses, and now we need to figure out what they responded to and which parts they can actually complete. All of these things where you still have a lot of people doing this work today, as one example, right? Or again, content creation, newsletters, simple things where organizations have people doing these tasks for a good amount of their time each week or month. We're not tapping into those yet. So to me, that's a big area. I also wanna say I was just in Europe for a couple of weeks, and the conversation there was really interesting when this topic of data and agents came up: yes, models are great, they're trained on vast bodies of information and language, but they don't know your specific business data and processes and product formulas and what have you. The next topic that came up was: maybe it's small language models that are either specialized or trained on industry knowledge, or that you fine-tune on your business data. That could be one approach. The next topic was: are we seeing a pendulum that keeps swinging from on-premise to cloud, and now, with cost and privacy concerns (to me that seems to be more of a European topic), are we seeing the pendulum swing back, with larger organizations putting Nvidia hardware back in their data centers to run these models on premise so they have better control? Just two of the big questions that came up, and to me they're super fascinating. I'm not seeing that here in the US yet, I dunno if you are. I think here it's still API-based; we call a model from somewhere. Certainly there are some organizations, some industries, that are more concerned about what's happening with their data, what vendors potentially do with the data and with the prompts they have. But it was more noticeable in the conversations I had in Europe.
Jon Reed:Yeah, and this is part of the experimentation, in a good way, that people are gonna find in terms of what the right-size models are for their use cases. In some cases, I've talked with vendors who start with larger models and then are able to scale that down, which is a little bit like what DeepSeek did to some extent; basically you can distill or boil models down to smaller models once you have established a use case. But there are characteristics of different-size models that have to be analyzed too. For example, larger models are often better at the language understanding part. So if you have customer-facing stuff, you might want the larger model that does a better job of understanding what the customer is saying, because they might use different words that mean the same thing, and the larger models are gonna be better at that kind of understanding. Whereas if you have, for example, a team of super users on an internal model, they can be taught the right ways to interact with that model. They might do just fine on a smaller model, where they realize, okay, I need to prompt it with these kinds of words, but it won't understand these other words. It's just really interesting, and I did a podcast this week with Aaron Harris, CTO of Sage, and they essentially found they had to train an LLM because out-of-the-box LLMs just didn't understand finance terms well enough to actually use them. We got into that in the podcast, and that's really interesting, because sometimes you'll need to do that. Other times you won't, and other vendors are doing just fine using RAG and tool-call type context with a more out-of-the-box model. So again, it comes down to use case. But in the case of the finance one, for example, we talked about how, if you ask a question like "what stock item will I need to reorder soon?", ChatGPT wasn't recognizing that as an accounting question. It didn't recognize the accounting terms. The Sage model that they built understands that stock refers to inventory; that's the world the model is operating in. It just depends on your use case, and that's why you have to dive into this with expert partners and expert advisors to help you head down the right path. But if you do, I think you start to build on these use cases and start to notch some real wins here, even though in the background you have this study that says 95% of projects fail. The thing is, the kind of projects that are failing, in my opinion, are not the kind that you and I are talking about right now.
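A toy illustration of the right-sizing point, purely as a sketch: route customer-facing questions, where language is messy, to a larger general model, and route internal super-user questions to a smaller or domain-tuned model. The model names below are placeholders, not real products or anything Sage uses.

```python
# Route requests to different model tiers based on audience and need for domain vocabulary.
from dataclasses import dataclass

@dataclass
class Request:
    text: str
    audience: str                  # "customer" or "internal"
    needs_domain_terms: bool = False

def choose_model(req: Request) -> str:
    if req.audience == "customer":
        return "large-general-model"        # better at varied, unconstrained phrasing
    if req.needs_domain_terms:
        return "small-domain-tuned-model"   # e.g. fine-tuned on finance vocabulary
    return "small-general-model"            # trained super users can adapt their prompts

print(choose_model(Request("What stock item will I need to reorder soon?", "internal", True)))
print(choose_model(Request("Where's my order??", "customer")))
```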
Andreas Welsch:That brings up a good point and a memory. I did a webinar a couple weeks ago with a vendor in the procurement space, and we had a panel of experts and also got lots of questions from the audience. Some of the audience questions were like: hey, with AI and especially agentic AI, what can I do in procurement? And there were some really great examples where they said, hey, look at something like long-tail spend. You have individual users in your business who buy from individual vendors, but it's not at a volume where you can negotiate large discounts or larger contracts, and it's hard to manage, hard to analyze the spend, categorize it, and consolidate it, so people are not doing it. But now with agentic AI, they said, we can actually do that. So we are not replacing people doing this job, because nobody's doing it anyway, and we're giving businesses better insights so they can consolidate spend, optimize their category management, and get real value and business results from that. To me, that was a great example, because what I also heard over the summer was the whole debate about entry-level jobs going away. Everybody panicked, and we looked at labor statistics, and STEM graduates seem to be in a higher unemployment category this year than they have been previously, things like that. Then, luckily, the debate also switched to: if we don't have entry-level roles, how do people become senior? Who fills the next senior and expert-level roles? Please don't just eliminate the need for early-in-career roles; you need to figure that out too. So anyway, two stories. One is: where do we deploy agentic AI and how can it add value? Things like long-tail spend in procurement, where we're not looking at this anyway, and where we're not taking jobs away. And this mindset, I think, is really important, because I also see too many leaders thinking about: where can I cut cost? Where can I cut spending? Where can I reduce headcount? Whereas to me, the bigger question is: what can I enable if I have this booster, not just a productivity booster, but different insights, new insights, scaling our knowledge, scaling our information acquisition and retrieval and reasoning? I'm not seeing a lot of leaders think about that yet, but I would absolutely encourage them to approach it with that mindset rather than "where can I save?"
Jon Reed:Yeah, and this is something you and I hit on in our last discussion around use cases: the importance of companies really taking a step back and deciding where their innovation is heading in their industry. What do they want to be known for? And I'm hoping that a lot of companies wanna be known for the way in which they excel at serving their customers and the opportunities they provide for employees. Now, that doesn't mean they're not gonna wanna also be operationally efficient. Of course they do. But it's so important to have those goals firmly in mind, because, as we discussed in our service scenario last time, it affects your decisions on whether you're gonna deploy AI to serve customers better, for example, around the clock after your service employees go home; whether you're gonna use it to reduce your service team; or, ideally, based on the use case I wrote about, enhance the service you can provide to VIP customers by taking admin work off your team's plate. So again, it's the same kind of thing. And the same is true on the talent side, right? Do you want to use AI to cultivate talent, as you were talking about, or are you gonna use it to just get rid of your junior staff, if you can get away with that? I would argue that a lot of junior staff are still more talented than bots. But let's just say you're gonna try that: it becomes a very self-defeating model going forward. So you need to ultimately make technology, I don't care if it's blockchain or AI or whatever, subservient to your bigger goal, and not allow the technology to take over the goal and say, "I can't help it, AI claimed those jobs." No, they claimed those jobs because you allowed that to happen. You can redeploy those people. You can treat them differently. And when we talk about AI readiness, which will be, I think, the next topic in our discussion here, a big part of that is the cultural part: how do you cultivate a culture of employees that is excited about using these tools, not having them imposed upon them or feeling like "I gotta work harder 'cause I'm gonna lose my job," but more like "I'm being provided with secure ways of experimenting with these tools so that I can ideate use cases for my company to excel." That's the kind of environment you want to create.
Andreas Welsch:So that brings me back to the MIT study that we talked about at the beginning, and one of the nuggets in there was: we need to empower middle managers to empower their teams and their team members. It's not done just by saying "you need to use AI" or "we're AI-first," because your employees are struggling. They don't know what this means. Where do I start? How do I even use it? How do I know that something I create is good? Maybe they don't even know that you can create something that is not good or factually inaccurate. I heard two great examples last week in a session that I did. One of the participants said: yes, our leadership team is AI-first, you need to use AI in our company. And people look around and ask, what does that mean? So I help them figure out, hey, here's how you can use, for example, ChatGPT or Copilot or AI in other places. Another participant said: we're going about this a different way. Our leadership also encourages the use of AI. We have smaller groups or communities of multipliers or champions, if you will, and we come together and we say: here's how I use it in my function, here's how I use it in marketing, here's how I use it in sales. This is what's worked really well, here's how I got to that point, here's what I had to change, here's where it fails. And so we create this culture of shared learning and community-based learning. We accept AI as a new technology, as a new way of working, but still with a mandate of: we want to use more AI to be more efficient and effective, while also giving people the opportunity to learn together, share visibility, and share praise for how people are using it in our organization. To me, those were two excellent examples: the same goal, different execution, and very likely different results as well.
Jon Reed:Yeah, and if you can get your hands on the report, and hopefully it will be issued at some point, they get into this 5% and what made those 5% successful. It's really quite interesting to dig into that a little bit. The more successful areas are no surprise: high-value use cases that are integrated deeply into workflows and scale through continuous learning rather than broad feature sets. Less generic productivity stuff, more looking at very specific functions that can be automated. They talk a lot about back-office admin being successful, as opposed to throwing all the money into various marketing and sales agendas, though they did note that lead gen can be a successful area, as well as service. What this report didn't get into is that a lot of industrial use cases are gaining momentum too. I talked with a vendor just last week about some of their success starting to embed shop-floor type functionality; you can start with things like quality assurance and equipment monitoring. You're not giving agents control over your shop floor yet, necessarily. But there's a lot of really interesting stuff, and again, it comes down to customers that are ready. What I'm seeing is that the reason the AI readiness topic is powerful is that AI remains an accelerant to me: if you have really good processes and good data platforms in place, AI is just gonna help you outperform that much better. But if you are struggling in those areas, I don't believe that today's AI can save you from yourself by applying it onto problematic, siloed data, problematic process flows, lack of leadership buy-in, stifling workplace cultures, people clinging to their jobs like a life raft. These things are not conducive to AI deployments. That's why the AI readiness conversation is so important: when you apply AI, you enhance what you're good at, but AI can't make you good at things you're not good at already.
Andreas Welsch:Beautifully said. I don't even know what I should add to that, other than a confirming nod in your direction. There's an additional component that I've been thinking about a lot more lately. I hosted Steve Wilson, one of the co-leads of the OWASP Top 10 for LLM Applications, on the show several times, including a couple weeks ago. We got into this topic of how you secure your agentic AI, and I saw some interesting examples of vendors out there thinking about security and identity and authentication, and it got me thinking too, right? Today we largely have humans sitting in front of a screen, clicking on the screen manually. Maybe you have some service accounts or some API keys if there's something happening in the background. But now, already with agents, we have a non-human entity that acts on your behalf or on your company's behalf. And a lot of times when we talk about AI and agents, we take the "my company" view: how can I use this, how can I optimize my process, how can I improve my operations? But there are two, maybe three things that I think we're not talking about enough yet. One is: how do you build, scale, and configure your operations for agents on the other side that will all of a sudden send you requests much more quickly, at a higher pace, a higher velocity, a higher volume, and what have you? Are your systems designed for that, or how do they need to change? How do we need to think about agents at some point doing business with other agents? Is that other agent even trusted and trustworthy? Are they really representing who they say they are? Is it really my business partner behind this, with their agent interacting with my agent? So trust, security, and authentication, even in this scenario between companies, become much, much more important than what we're just scratching the surface of today: is that agent in my business even authenticated and trusted?
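One way to make the "is this really my partner's agent?" question concrete is to require the calling agent to present a signed token and verify both the signature and the claimed organization before serving the request. This is only a sketch of the idea, assuming the PyJWT library and an RS256 key exchange; the claim names, audience, and key store are illustrative, not any vendor's actual protocol.

```python
# Verify an inbound agent's identity before letting it transact with your systems.
import jwt  # pip install PyJWT

# Public keys exchanged out of band with partners you actually trust (placeholder content).
TRUSTED_PARTNERS = {
    "partner-corp": "-----BEGIN PUBLIC KEY-----\n...partner public key...\n-----END PUBLIC KEY-----",
}

def verify_agent(token: str, expected_org: str) -> dict:
    """Reject requests whose token is unsigned, expired, or from an unknown organization."""
    public_key = TRUSTED_PARTNERS.get(expected_org)
    if public_key is None:
        raise PermissionError(f"No trust relationship with {expected_org!r}")
    claims = jwt.decode(token, public_key, algorithms=["RS256"], audience="orders-api")
    if claims.get("org") != expected_org:
        raise PermissionError("Token organization does not match the claimed partner")
    # e.g. {"org": "partner-corp", "agent_id": "buyer-agent-7", "scope": "rfq:read"}
    return claims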
Jon Reed:Exactly. And the thing I have seen again and again this summer is that we cannot underestimate the security issues involved here. There are new threat vectors, and the risk management around this is really important. One way to think about it is the difference from a human scenario: if you welcome someone into your house because you trust them, an agent would do the same thing. But if you say "you have free run of the house, do whatever you want," an agent might do exactly that and just let you do whatever you want once it's authorized; it's not gonna ask questions. Whereas a human, if you started rifling through my laptops, even though I gave you free run of the house, I might say: I don't need you going through my laptops right now. What do you actually need? Agents aren't really well equipped to ask those kinds of questions once access has been granted, and the access you're giving these systems is quite powerful. So it's not a deal breaker, but it's something you have to keep at the forefront of your mind at all times. And it's important.
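The "free run of the house" problem can be addressed outside the model: since the agent won't push back once authorized, least-privilege scopes and confirmation gates have to be enforced by the surrounding code. A minimal sketch, with all names illustrative rather than any specific product:

```python
# Enforce per-agent scopes and require human sign-off for sensitive actions.
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    agent_id: str
    scopes: set[str] = field(default_factory=set)   # e.g. {"read:inventory"}

SENSITIVE_SCOPES = {"write:payments", "read:customer_pii"}

class ScopeError(Exception):
    pass

def execute_tool(agent: AgentIdentity, scope: str, action, *args, human_approved: bool = False):
    """Run a tool call only if the agent holds the scope; sensitive scopes need human approval."""
    if scope not in agent.scopes:
        raise ScopeError(f"{agent.agent_id} lacks scope {scope!r}")
    if scope in SENSITIVE_SCOPES and not human_approved:
        raise ScopeError(f"{scope!r} requires human approval")
    return action(*args)

# Usage: the agent may read inventory on its own but cannot release a payment unattended.
agent = AgentIdentity("procurement-agent-01", scopes={"read:inventory", "write:payments"})
execute_tool(agent, "read:inventory", lambda: print("on-hand levels fetched"))
try:
    execute_tool(agent, "write:payments", lambda: print("payment released"))
except ScopeError as err:
    print("blocked:", err)
```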
Andreas Welsch:Yeah. And even on a smaller scale, say you're a sole proprietor or small business owner: yes, you have access to tools like nanos or things like OpenAI Operator, or now the automation workflows, basically. But what happens when you give it access and it sends confidential information to another customer or another prospect? It wasn't intended for them; they shouldn't see what you quote your actual intended recipient, but now they do. Big miss. So to what extent do you give these agents, these technologies, access to your systems and your data, and how do you mitigate those risks? I think that's a real question that we need to ask, and that vendors also need to help figure out if they want to see more adoption. It's fine that it can click on a screen and do things independently, even if it takes a little longer than how we would do it as humans. But this part of risk management, risk mitigation, and just understanding what could go wrong and what that could mean is so important in this discussion as well, especially as we're talking so much about hype and the future and everything it's going to enable. It works well until it doesn't, and then the question is: who's at fault? What's the damage? So it's better to think about that upfront, as always.
Jon Reed:Indeed. Since we last talked on the AI readiness topic, I've spent a lot of time looking at this, and I think we've covered a lot of its components: things like leadership, culture, process discipline, and data. But the interesting thing is that the data part is still something a lot of vendors are arguing over. It does seem very clear that AI services thrive on different kinds of data presentations than we've typically had in the past. So what's the most optimal way to do it? Some say you need a real-time, live data stream, because AI thrives on the latest live data, right? And that's true to a point. But then there are times when real time can be prohibitively expensive in certain domains. I talked with some vendors about this who said: yeah, that's true, but some systems I might need for AI aren't being updated in real time, and that's okay; I just need the most current data they have. So you're back to the right-time thing. Then some people talk more about cloud-based access and how important that is. Other people insist it doesn't need to be in the cloud. There's a lot of discussion of edge-based devices, but I think the edge-based stuff is a little tricky, because the compute on edge devices is often not enough for what you need, and then you have various cloud processing setups for edge devices. So no one's figured out all the answers to this question. I think one prevailing trend is not to move data around as much, but to think more in terms of zero-copy scenarios, where data can reside where it resides and you have some kind of intermediary system that essentially helps you with this. It's a new way of thinking about middleware, but in an AI type of context, I guess you could say. No one's figured all of this out yet, and there's no way to avoid some of the pain around it. The way I look at it is that a core component of AI readiness is gonna be your data platform vision for your company, and that's probably gonna be a multi-year scenario in terms of how you make your data most accessible to these systems. But the good news is that you're gonna find places where you can start, where you do have a high caliber of data. People say these systems can handle unstructured data, and that's true to a point, but even with unstructured data, these systems thrive when it has certain kinds of recognizable labels and formats they can make sense of. So you're gonna learn as you go what the best data for those systems is, and that's why these trusted advisors are so important. But I just wanted to say there's no consensus on all of this yet. The one good thing we know is that starting with areas where you have quality data is a really good idea, because you need to be doing actual live work with quality data to see what the results are and iterate on that, rather than testing these systems with data that is not your own, because that will not give you the input you need on whether to go forward or not.
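A rough illustration of the zero-copy idea: rather than replicating data into a new AI store, query it where it lives and pass only the minimal, current slice to the model as context. The example below uses Python's built-in sqlite3 purely as a stand-in for whatever system of record already exists; table and field names are invented for the sketch.

```python
# Query the system of record in place; only the retrieved slice becomes model context.
import sqlite3

def fetch_open_invoices(conn: sqlite3.Connection, customer: str) -> list[tuple]:
    """No bulk copy or cache: the data stays where it resides."""
    return conn.execute(
        "SELECT invoice_id, amount, due_date FROM invoices "
        "WHERE customer = ? AND status = 'open'",
        (customer,),
    ).fetchall()

def build_prompt_context(rows: list[tuple]) -> str:
    return "\n".join(f"Invoice {i}: {a} due {d}" for i, a, d in rows)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (invoice_id TEXT, customer TEXT, amount REAL, due_date TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO invoices VALUES (?, ?, ?, ?, ?)",
    [("INV-1", "Acme", 1200.0, "2025-09-30", "open"),
     ("INV-2", "Acme", 450.0, "2025-08-15", "paid")],
)
print(build_prompt_context(fetch_open_invoices(conn, "Acme")))
```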
Andreas Welsch:See, I spoke in Munich at an event a couple of weeks ago, and afterwards some attendees came up to me and said: you talk about AI, you talk about the need for good data, clean data, fresh data, accurate data, all of that. We know that in our business, our data is nothing like that. And we're not the only ones, right? So how can we do this? Because we've been trying for years to get our data in order, to get our leaders to buy into this, to get money, to get funds and do things. We don't really have time to do a multi-year roadmap or something like that. What can we do?
Andreas Welsch:So one of my recommendations was: think about how you can do some parts of it as a phase zero of your AI project. Because if your leadership, if your board, gives you money to invest in AI, yes, you need to get your data house (lakehouse, what have you) in order to do AI. So think about where you can weave this into existing projects, or new projects that you're starting, with a manageable effort, to make progress. Because, similar to what you said, it's not like data is a new topic. It's been there for years, it's been there for decades, and it hasn't been solved yet, because usually it's not the sexy thing, not the shiny thing that people want to invest in or get visibility or credibility for. But we need to do this work now more than ever if we expect to have AI, and agentic AI on top of that, using the data to reason, to make decisions, to generate proposals, to research information, and all of that. So the time is now. But if you don't have a dedicated data budget or a data project, see if you can weave it in as a phase zero of an AI project before you get started.
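A sketch of what a lightweight "phase zero" could look like in practice: rather than a standalone multi-year data program, run narrow quality checks on just the slice of data the first AI use case needs and report the readiness gaps alongside the use case. Field names, thresholds, and the sample records are illustrative only.

```python
# Profile only the data slice the first use case touches: nulls, staleness, likely duplicates.
from datetime import date, datetime

records = [
    {"supplier": "Acme GmbH", "category": "packaging", "spend": 1800.0, "updated": "2025-07-01"},
    {"supplier": "acme gmbh", "category": None,        "spend": 950.0,  "updated": "2023-02-11"},
]

def profile(records: list[dict], stale_after_days: int = 365) -> dict:
    """Return simple readiness signals for this use case's slice of data."""
    today = date.today()
    missing_category = sum(r["category"] is None for r in records)
    stale = sum(
        (today - datetime.strptime(r["updated"], "%Y-%m-%d").date()).days > stale_after_days
        for r in records
    )
    # Case-insensitive duplicates hint at supplier-master cleanup work for this use case.
    names = [r["supplier"].strip().lower() for r in records]
    duplicates = len(names) - len(set(names))
    return {"rows": len(records), "missing_category": missing_category,
            "stale_rows": stale, "possible_duplicate_suppliers": duplicates}

print(profile(records))
```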
Jon Reed:And a lot of the excitement I feel about enterprise AI, I dunno about you, comes from the combination of structured and unstructured data. Because behind transactions in systems are back-channel conversations about particular customers and relevant information about the history of interaction with that customer, and the idea of combining all of that into one way of engaging with the system, to ask "tell me more about this customer," not just their sales history but the feedback we've gotten from them and all of that, and having it all as part of one workflow, is so appealing. The good news is that a lot of companies have that somewhere. I don't think it necessarily has to be cloud, but the reason cloud products have been appealing is that when you move to a SaaS product, a different level of data discipline is often imposed upon you, because you need to become a little more vanilla, if you will, in order to fit into a classic SaaS environment. So what you're looking for from a structured-data perspective, in terms of AI readiness, is systems that are not incredibly heavily customized internally but are a little more out of the box, you could say, where you have had to impose some discipline around structure and have had to work through some metadata issues around customer names or regions or whatever it is you're looking for, stuff like that. And like you said, everyone has been burned by these initiatives in the past. But the good news is that with these iterative projects, you're gonna have the ability to come back to your team and your leadership and say: okay, we started by working with this, the data was already pretty clean here, and we got a win. We were able to increase customer satisfaction, reduce turnover, land new clients, or optimize lead gen, whatever it is. That gets leaders excited, and then you can say, now we need to go a little further here. That's the difference from the past, where the problem was we were doing these massive cleansing efforts but there was never really any external result; it was "when we finally get it done, then we can look for the use cases." The idea here is that you're rolling out the use cases as you're cleaning the data. That's the distinction. And if you can do that, I think you'll develop momentum to move into the thorny areas. It will happen, because your users or your customers are gonna ask questions of your AI system that it can't answer because it doesn't have access to certain data. And then that's gonna be your next project: how do we add that data into the mix?
Andreas Welsch:And it sounds like that's a much better problem to have than the agent answering questions where it does not have the data, right, and making things up.
Jon Reed:Oh, much better. Yeah.
Andreas Welsch:So, definitely lots to do and lots to work on, for sure. We said that today we'd talk about what's next. We covered security, trust, and authentication. We talked about AI readiness: on the one hand, getting your people and your leaders ready, and on the other hand, data being a big topic where we need to increase readiness to do AI on top. And I'm just super excited to see how things are moving in the industry, and how quickly. What is it, two years ago, a year and a half ago roughly, that the first agent frameworks came out? Then vendors latched onto them and said: okay, here's a real opportunity, maybe to some also a threat, we need to do something about this. Now they've shifted, and the adoption is starting, slowly but surely. I'm excited to see more examples. By the way, I also feel not enough companies are talking about their agentic deployments yet. Sometimes there are little bits and nuggets you pick up in the tech media and coverage, but I have seen very few companies out there saying: hey, here's how we've deployed it, here's how we looked at tasks and broke them down into what the agent does and what the subject matter expert does. Very few, maybe for good reasons; maybe it's the competitive advantage for those that are already ahead. But I'm sure there's more we will see. And to me, that also unlocks the next wave of what is possible and what we should be thinking about, right? Oh, here's company X, Y, Z talking about this a couple weeks ago. We talked about Moderna combining IT and HR to look at roles. Whether or not that's the right approach, I think, remains to be seen; we'll probably see in the next three to six months or a little later. But the first companies are taking this initiative and this approach, looking at how to divide work: where do we need people because of their unique capabilities and knowledge, where do we need agents, and how does that look? It's just moving super fast, and it's exciting to see how it progresses.
Jon Reed:Right. And as we move to the end of this third discussion, on that "what's next" question: when I go to the fall shows, one of the big things I'm gonna be looking for is what you just described. I'm gonna try to document more customer success stories of what's been achieved so far and what the hard-won lessons are. I've got a few from last spring, but I expect more in the fall, so that will be really interesting. Vendors don't always do a good job of promoting those. And a lot of times, what's interesting is that some of the really good ones are not necessarily heavily generative AI right now. Some are still very classic machine learning examples that still have really powerful impact, things like triaging an emergency room into different priority levels. It could be more of a machine learning approach or even a computer vision approach. So that's one thing: we wanna document all of that. The other thing is just looking at what's next for agents. For customers right now, I would say start looking at MCP and tool calling as the extension of things like RAG and context. It's all about how you can make LLMs smarter by incorporating both your own data and also external validation and supplemental tools that help with decision making and process flows. It could be things like checking with a credit validation service externally; start experimenting with that. And MCP is obviously one protocol that really helps with that. I would say that's the next phase: make them smarter for the things you want to do by taking advantage of data both internally and through tool calling, which includes that RAG context but also much more. Where I would say keep a wary eye for now, but a curious eye, is around the agent-to-agent protocols, A2A and stuff like that. What you will hear a lot from vendors this fall is orchestrating multi-agent workflows across domains. You're gonna hear a whole lot about that, and the reason is that every vendor wants to be the agentic system of record for their customers. They want to say: you can count on us to do that. Then other vendors, like the Boomis and the UiPaths of the world, are trying to be more like the Switzerlands of these scenarios by saying: we don't have a dog in this fight, but we're gonna orchestrate this for you as well. Now, all of that kind of talk is very useful, but I think right now you should proceed with caution when you think about handing off agent workflows across vendors, because that's where there's gonna be more heavy lifting right now in terms of getting the results you want. It's not that you shouldn't look at it, but right now, focus on getting some internal workflows going within the context of maybe one vendor, or maybe two that work very closely together and have a really good partnership. Nail that down first, and then we can look ahead to more agent-to-agent communication going forward. The stats on multi-agent handoffs right now just aren't that great, and the more complex the workflows are, the more problems you could run into. So keep an eye on that, because it's really promising, but start with stuff where you can get some wins.
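For readers who want to see what "tool calling plus external validation" can look like in code, here is a small sketch that exposes a hypothetical credit check as a tool an agent can call, assuming the official MCP Python SDK's FastMCP server and the httpx HTTP client. The credit-service URL and its response shape are assumptions for illustration, not a real provider's API.

```python
# Expose an external credit-validation step as an MCP tool that agent clients can call.
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("credit-validation")

@mcp.tool()
def validate_credit(customer_id: str, order_amount: float) -> dict:
    """Check a customer's credit standing with an external service before an agent
    commits to an order. Replace the URL with your real provider."""
    response = httpx.get(
        "https://example-credit-service.internal/check",   # hypothetical endpoint
        params={"customer_id": customer_id, "amount": order_amount},
        timeout=10.0,
    )
    response.raise_for_status()
    data = response.json()
    return {"approved": data.get("approved", False), "limit": data.get("limit")}

if __name__ == "__main__":
    mcp.run()  # any MCP-capable client (IDE, agent framework) can now call validate_credit
```

The same pattern extends to other supplemental tools Jon mentions; the protocol handles discovery and invocation, while the tool itself stays a small, auditable piece of code you control.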
Andreas Welsch:That's very practical advice. Look at what's there, how you use these protocols and technologies, and experiment with them so you become more knowledgeable and, overall, more experienced as these protocols and solutions mature. And the observation that every vendor wants or needs to say "we're doing something with AI and you can do it with us" makes perfect sense to me. It comes down to: where do you have an investment? Where do you have a beachhead already? If it's this solution or that solution, it might be easier to go here or there and maximize your investments.
Jon Reed:And by the way, just real quick, one of the reasons I think both MCP and A2A are worth tracking is that so many vendors you might think of as competitors have signed on to these particular protocols. That's really important, because in the history of enterprise software, the standards that have done the best are the ones where the competitors sign on to the same frameworks. So when you see folks like Microsoft, Salesforce, SAP, and Oracle signing on to the same standards, that tells you there's some promise there. But again, let some of the deep pockets venture into the hardest aspects of that first and take some of the bruising lessons. Definitely be watching, though. I think my final thing for you would be: since you have been to a bunch of things lately, and I was doing more deep-dive research from home, what would you say was the most memorable interaction you had with a customer or someone asking you questions about what they're doing? Was there anything that really jumped out or surprised you?
Andreas Welsch:I usually give a lot of talks and keynotes here in North America, so being in Europe for a couple weeks was super refreshing, because I got a whole different perspective, whether it was more of an IT leadership audience or a mixed group from financial services and venture capital and so on. But at the one event where I spoke in Switzerland, I got so many deep and thoughtful questions that I haven't gotten in a long time over here, to be honest. One of those questions was around: how do we make sure that these models are fair and safe and unbiased, especially if we have certain vendors building this basic technology? And within certain vendors there are certain tendencies, right? More conservative-leaning, for example, or more progressive, more guardrails, fewer guardrails. And then what if we have these tools and LLMs here with more of a Western lens and corpus of data, compared to models from the East or the Far East? So how does it work? How do you leverage that or mitigate for it? Do you need different models? Just a very broad and, from my point of view, deep discussion.
Jon Reed:Did you have a good answer for dealing with bias in these models?
Andreas Welsch:Whether it was good is for the audience to judge. I think I did have a good answer, and it was: think about where you deploy these models and what you deploy them for. Yes, there is some bias in there, so we need to figure out how we check them and do evaluation. But if you know that you deploy models on a global scale for different audiences, maybe indeed consider using a Qwen or DeepSeek more for Far East scenarios, and Anthropic or OpenAI models for more Westernized scenarios. That's also, by the way, where the small language model discussion came up: what are you trying to optimize for, and what should this model be really good at? Maybe a smaller model with more specific domain knowledge about the industry, about your business, could be the solution as well. But to me, the big thing is that these models are never free of values, right? We use language to train them, predominantly English, and there are certain worldviews and biases encoded in that language anyway, in the words we use, that we speak, that we write. So naturally those values are encapsulated in these models, and we need to be aware of that. That's a much longer topic and discussion to get into. But maybe it's also a good challenge for all of you to think about: if you have something that's more Western-leaning and progressive, or something more US-based and conservative, if you will, with Grok, or you have something like DeepSeek and Qwen with different worldviews and ideologies, how do you deal with that?
Jon Reed:Yeah, and I'll just add really quickly to that, 'cause you're right: we don't have time for a deep discussion on bias today, but it might be interesting to revisit that topic. I really like when companies take a proactive approach to it, but of course not all of them will. I will point out that the EU AI Act has a really good risk framework that is worth understanding and applying to all of your AI initiatives, and potentially embedding into your development and design process, regardless of whether you do business in the EU or not. The reason is that those risk levels roughly correspond with your likelihood of getting sued based on the misuse of those scenarios. I can think of lawsuits going on right now in the US that pertain to the high-risk areas in the EU AI Act. They've got nothing to do with that legislation, but they have everything to do with the fact that when you're playing in risky areas, you're gonna attract legal attention, and you're gonna need a plan for that. So these discussions are not just about "oh, my company has great values, so I want to confront bias." You also have legal exposure if you don't.
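One way to embed that kind of risk framework into an intake or design review is to classify each use case into the EU AI Act's broad tiers (unacceptable, high, limited, minimal) and gate high-risk work behind extra controls. The sketch below is a simplified illustration, not a legal mapping; the category lists are abbreviated examples, and real classification needs counsel and the Act's Annexes.

```python
# Rough risk-tier intake check inspired by the EU AI Act's categories.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # e.g. social scoring: do not build
    HIGH = "high"                   # e.g. hiring/resume screening, credit decisions
    LIMITED = "limited"             # e.g. chatbots: transparency obligations
    MINIMAL = "minimal"

UNACCEPTABLE_DOMAINS = {"social scoring"}
HIGH_RISK_DOMAINS = {"hiring", "resume screening", "credit scoring", "education scoring"}

def classify_use_case(domain: str) -> RiskTier:
    d = domain.lower()
    if d in UNACCEPTABLE_DOMAINS:
        return RiskTier.UNACCEPTABLE
    if d in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    return RiskTier.LIMITED if d == "customer chatbot" else RiskTier.MINIMAL

tier = classify_use_case("resume screening")
if tier is RiskTier.HIGH:
    print("Require bias evaluation, human oversight, and documentation before go-live.")
```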
Andreas Welsch:Yeah. And especially for software vendors, there are certain cases in the judicial system at the moment about bias in resume screening and candidate matching and things like that, things you can look up independently as well. For me, the big thing on what's next is really this part of: we need to enable the workforce. You need to enable your workforce to use AI and to do it responsibly. You're not just off the hook by using ChatGPT and/or Copilot and being done. How do we empower line managers, middle managers, to empower their teams and their team members to use AI, and give them guidelines that say: here's how I want you to use it? In many ways, those guidelines will be similar to how we expect people to work with each other within our own team or across teams. So if you are thinking about how to do that, first think about how you get people within your team or across teams to work together. What are the standards of quality you expect people to follow, and how can you communicate that now in simpler ways? Over the course of our last two episodes, I've been working with LinkedIn Learning on two courses that I'm super excited to be recording shortly, so keep an eye out for those. But I think that's the next practical frontier, at the very foundational and base level. That's what's keeping me busy and excited at the moment.
Jon Reed:Excellent. Great stuff, man. That seems like a good place to stop.
Andreas Welsch:Yeah, absolutely. So Jon, cool. Thank you so much for joining. I'm always super amazed by how quickly we come up with a topic and then just run with it, totally unscripted. So that's a lot of fun, to see at the end where we've taken it over the past fifty-some minutes. Thank you so much for joining and for sharing your experience and expertise with us.
Jon Reed:Great talk! Till next time.
Andreas Welsch:Till next time. Bye-bye.