What’s the BUZZ? — AI in Business

What Enterprise AI Actually Wins At (Jon Reed)

Andreas Welsch Season 4 Episode 28

Stop chasing flashy multi‑agent demos. The big gains in enterprise AI are coming from focused, context‑driven systems, not agents in a room.

In this year‑end conversation host Andreas Welsch and analyst Jon Reed cut through the noise to explain where AI is failing in the wild and where it's producing measurable business value. Jon lays out the vendor‑customer gap, the real risks of agentic experiments, and the practical architectures that are working today: compound systems, context engineering, RAG/knowledge graphs, evaluation and observability, and right‑time data layers.

What you’ll learn:

  • Why multi‑agent orchestration rarely works at scale today and the narrow exception where it does
  • How vendors are ahead of buyers, and how leaders should close the gap with clear communication and upskilling
  • The difference between treating AI as a worker vs. a tool, and why that choice matters for people and projects
  • Practical, enterprise‑ready wins: document intelligence, procurement RFP automation, AP/AR, hyper‑personalization, and focused assistants
  • Why explainability, audit trails, and granular autonomy toggles are essential for trust and compliance
  • How to approach AI readiness: clean data, metadata/annotation, and composing smaller specialized models into reliable workflows

If you build or buy AI in the enterprise, this episode is full of real examples and honest advice on where to invest, what to avoid, and how to design systems that produce results now, while preparing for broader scale.

Tune in to hear the full conversation and get actionable guidance for turning AI hype into business outcomes.

Questions or suggestions? Send me a Text Message.

Support the show

***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.


Level up your AI Leadership game with the AI Leadership Handbook (https://www.aileadershiphandbook.com) and shape the next generation of AI-ready teams with The HUMAN Agentic AI Edge (https://www.humanagenticaiedge.com).

More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter

Andreas Welsch:

Welcome back for another episode of What's the Buzz, where leaders share how they have turned hype into outcome. It's been an incredible year for AI with all its ups and downs, and I'm super excited to welcome Jon Reed, distinguished analyst, co-founder of diginomica, to do a year-end review together. Hey Jon, thanks so much for joining.

Jon Reed:

It's so good to be here, and I'm really looking forward to getting your reactions, because we've both been on parallel tracks of getting out in the field, hitting the events. Now we're gonna compare notes. I've got a bunch of stuff to throw at you that no one's heard before, and I want to hear what you have to say.

Andreas Welsch:

Fantastic. So why don't we get right into it? We've recorded three episodes over the course of the summer: looking at what is real, what did we actually hear in the spring circuit? Then, where are things moving? What are some of the big myths that you need to be aware of if you're an AI leader? And then closing us off, at least for the end of the summer, before we hit the circuit now for the fall. So Jon, I'm super curious: what have you seen, what has changed since the last time we talked, when you said, yeah, agents are there, but there's probably a little more that leaders, that organizations, need to think about? I think security was one of the big topics we talked about back then. So what's new? What surprised you? What have you seen?

Jon Reed:

So what I really want to get to in our discussion is what's working and what's not on AI projects today. And obviously agents will be a part of that. But I think it's fair to say, having been to a bunch of the major vendor shows this fall and a bunch of analyst discussions as well, that AI vendors, enterprise vendors, are, for better and for worse, pretty far out ahead of customers. I refer to this as the gap, right? Between where customers are at and where vendors are at and how they talk about this technology. That's not necessarily a terrible thing if customers like the roadmap and like the vision. But again and again, off the keynote stage, I would hear from customers: I like what I heard, but I need help getting there. I don't know exactly how to get there. And I think we're gonna get into the meat of our discussion today, but the thing I did want to get to first is just a little bit of framing, and I want to say a few controversial, spicy things to maybe get this going a little bit before I get you into the projects and get your reactions on that. I think we're definitely beyond this first wave of gen AI failures and generic, generative office productivity stuff, into more embedded AI, which to me is much more exciting: embedded in our business processes. It's mostly not happening at scale yet. Now, there is this risk of work slop that I think we'll get back to. But in the background of all of this, there is this AI bubble, right? And this discussion around the bubble. And there are some really concerning signs around the big AI economy, including the sort of circular deals that are happening between a handful of vendors. That really does get you a little bit concerned. The good news, if there is any in all of that, is I think that the enterprise is gonna help shield us from some of the worst of it if there is fallout. Because enterprise AI is chugging along in kind of a sneaky, secretive way. It's a little bit more modest, because it's not the trillion-dollar AGI economy that the big vendors, the Anthropics and the OpenAIs, were pushing. But it does matter. And the thing is, Andreas, when you have OpenAI going back on their promise not to get into porn and getting into porn, what that tells me is they're struggling on an enterprise level. If they were signing seven-figure enterprise deals, they wouldn't be getting into porn right now. So there are some issues with the big vendors that have set the tone for this economy, and diminishing returns on where LLMs are headed is certainly part of it. Reasoning models came out last year, and that was the last big trick. So there's a little bit of a sense of stagnation in the broader industry of these big LLM companies, which are feeling a little more like commodities in some ways. But in the enterprise, we see interesting progress, and that's what I want to talk about with you today.

Andreas Welsch:

Yeah, many of the points you've shared resonate quite deeply. Look, when it's super cheap, when it's a fraction of a cent to create some kind of content, some kind of work product, you really need to wonder what the economics behind it are, and what that scale needs to look like in order for AI labs to recoup some of the money they're spending on data center buildouts and buying future growth. I don't remember the latest weekly active user numbers that OpenAI has. If I'm not wrong, I think it's somewhere around 400 million active users per week, which is still huge. But probably the biggest chunk of those are consumers, are students, maybe not necessarily the enterprise workloads.

Jon Reed:

So a lot of them are therapy and companion bots and stuff like that. It's not business-caliber workloads that are driving the revenue for them. So it is what it is.

Andreas Welsch:

That's right. To me, the part that's interesting too is the likes of OpenAI and Perplexity and others getting into agentic browsers, or browser agents being able to act on your behalf. And so I'm wondering, on one hand, as a consumer: yes, I can see the picture, I can see where this is going. Lots of convenience. I don't need to log into all of these different sites to book travel: airline, hotel, rental car. It's like a dream come true, at least from a productivity point of view. But I'm wondering then also, from a business point of view, what does it mean? What does it mean if your products are now at the mercy of OpenAI, Perplexity, or others being ranked, being ranked as trustworthy? Things we've seen shifting from SEO, search engine optimization, to answer engine optimization or generative engine optimization. How do you do that? How do you build authority so that your products are ranked there? On the other hand, if you're in the procurement department, how can you in the future automate even more of these tasks? Maybe you spend two hours sourcing the material from a new supplier. Maybe in the future you have a browser agent and you say, hey, get these 500 screws or nuts and bolts from the best suppliers. Yeah, but

Jon Reed:

That browser agent example is radically different than the consumer agent proposition. And the problem on the consumer side is that having an agent do your personal assistant things for you and book all these things, that's just not the strength of agents right now. Agents work a lot better in much more specialized contexts, because context is the key and context gets lost, and the permissions that are required to effect these agentic workflows on a personal assistant level are quite difficult. And it's not clear what the business model would be for OpenAI even if that technology were ready. So I don't see that coming in and saving the day at this moment in time. But like I said, I do think the enterprise is going to help a little bit to shield us from some of the worst of the fallout; we'll just have to see what falls out and when. You're always amazed by how long this can go on, and maybe we'll be talking about it again next year. But I think the interesting thing about the enterprise use cases is that it's not metaverse or blockchain. There's more there, but getting enterprise projects right is a really big deal right now. And I would say that before we get into the project discussions and the things I wanna run by you there, I really wanted to make a point that companies need to make a decision about AI and communicate it. If they're using AI to free up their humans to do their work better and automate the mundane, to set their people up for high-value work, they need to be clear about that. If there are gonna be some layoffs for whatever reason, that needs to be clear. There needs to be some level of trust around the overall strategy. And what I'm finding is that the customers I talked to this fall, the successful ones, were all setting a tone internally. And it wasn't always the same tone. Sometimes it was a little more aggressive, more AI-first. I didn't always like the messaging that much; we've talked about why I don't like AI-first. But the point is, they were clear on what they were trying to do internally, and the employees understood that too. And the level of communication around that is so important, because if you don't have that, then I can't see how you can have much success with your discrete AI projects. And so I really wanted to hammer that point from the get-go. Look, there probably will be some operational efficiencies with these technologies, and that's good. But how this impacts the lives of workers is so huge. And I really love this Tim O'Reilly quote that I cited in my hits and misses roundup just this week, because he was going off on the use of digital workers, which is a catchphrase I happen to abhor. He was riffing on Jensen Huang, who had used this term, and also Ben Thompson of Stratechery had used it, I believe. But he said: you can treat AI as either a worker or a tool, but your choice has consequences. As an entrepreneur or executive, if you think of AI as a worker, you are more likely to use it to automate the things you or other companies already do. If you think of it as a tool, you will push your employees to use it to solve new and harder problems.
Now, I'm not here to settle a semantic debate. I don't want to have an hour-long discussion with you about digital workers today. But I think it's really interesting, because O'Reilly is getting at the philosophical question of how AI can really transform operations in our company, versus are we just gonna go in and try to get rid of a bunch of people and, quote-unquote, cut heads? And if you don't get that right, I don't like your chances of getting the projects right either.

Andreas Welsch:

So what you mentioned about being clear and being upfront as a leader is super important. I just attended Generative AI Week a couple weeks ago in Austin, and I heard Hannah Calhoun speak; she's the head of AI at Indeed. And she opened by saying: hey, look, our leadership openly acknowledged towards our employees that there's going to be some change. We don't know exactly what it is, but we can promise you that we will provide you the resources to upskill, to learn about the tools, to learn about how this change unfolds as we're all going through it together. To me, that's an incredible example and a testament to the leadership. And again, sharing: hey, look, there's something happening, we don't know exactly where this is going, but we'll be with you every step of the way. And to your point, right? Too often we've seen over the course of this year: we wanna be an AI-first company, do this, or else, bye-bye. I don't think that's the right way to approach this, and whether you are the one that needs to say bye-bye, or you are the one left behind who's now fearing that you might be next in line, I'm always perplexed at how this works, or how this can work.

Jon Reed:

So you and I talked in past episodes about the juxtaposition between a culture of experimentation and a culture of fear. And I think we're still there, right? Like, what environment are you creating? And since we last talked, Meta's metaverse lead challenged his employees to 5x their productivity. And I'm just like, man, I feel bad for those people, because I'm sure they're already busting their butts trying to rationalize these metaverse investments and make a really difficult virtual business environment real and compelling. And now someone comes along and says you've gotta 5x. Wow.

Andreas Welsch:

That's tough. And depending on the role or the type of business you're in, if you're more on the vendor engineering side, the software vendor side, many of my friends and contacts have shared: hey, there's an incredible opportunity, like a once-in-a-lifetime, once-in-a-career opportunity to be doing this kind of work at this moment in time. But it's also incredibly excruciating: I feel burned out, I don't think I can go on much longer. The same struggle is there as well. So yes, big opportunity, but also I think we need to be a little more realistic. And I think that translates to the enterprise too. There's one additional comment I wanted to make in reference to something you said at the very beginning, right? Enterprise vendors are very quick to deliver things this time around, and they're far ahead of their customers. To me, that shows that in any business, especially established businesses, we still have roles, we still have guidelines, we have processes, we have security. We have questions about cost, legal risk, and the whole change management of bringing people along on the journey too. So the risk that I see is enterprise vendors being light-years ahead of their customers, who are not able to catch up and adopt. And on the other hand, you have startups that don't even exist yet that will be born with the idea of the frontier firm, with the idea of using more AI and doing more with a smaller team. And I think that can get lopsided very quickly.

Jon Reed:

Indeed. Yeah. And I think that whole notion of how vendors close that gap between where they're at and where their customers are at, even if they're heading in the right direction, let's give them that credit just for this conversation, it's still a gap. And the best vendors are figuring out how to do that. In the next section of our conversation, I'm gonna give you some examples and hear what you have to say, and you probably have some as well. I wanted to say one more thing just to frame this conversation: I had an interesting interaction around a post I put up this week about finding human purpose in AI. This was a panel that I did, and we'll get into it more later. One of the reader comments said: treat AI like your best intern. And I said, I loved your comments overall, but I actually recommend treating AI like your worst intern, not your best. I said, my best interns could complete complex projects autonomously and resolve interpersonal issues. Your best interns would never delete your production database, or agree with your worst ideas, or put you in legally compromised situations by citing inaccurate data, just to pick a few. And this is another important point that I wrote there: on the flip side, even my best interns couldn't do some of the things AI can do currently, a couple of which are cited in this article. It's a different set of pros and cons, and to me that's such an important point. We have to get off this whole thing of comparing everything to a junior employee or to a set of tasks or something. We need to really be clear on what the strengths are and what the possibilities are as well. And so I wasn't just hammering it. I also said that in some ways it's limiting to compare it to your best human, because some of the things AI is good at, it's way better than any human. So I think we have to reckon with the technology in a totally different way than that.

Andreas Welsch:

I couldn't agree more. At some point, I think we all start somewhere and we need a crutch, or we need some easy comparison. To me, it's the part where we compare humans and AI, or humans and agents, when we think about what we have in place for people and what we need to have in place for these systems now. Grounding, a code of conduct to your point, system access, data access, security, physical security, the kinds of things that we've developed in a business for people, with people, over decades now. They need to apply in some shape or form to agents as well, right? I think of org charts or address books. Who's even in this business? Which agents do we even have in our environment? Who are they assigned to? Who created them? What is their role? What is their function? Think of job descriptions or some kind of standardization. To me, there's still a big gap around the more lifecycle-management-type aspects of agents at a larger scale. So that's where I feel comparing humans and AI makes sense, but not to the point where we say this is what the agent can do and this is what a human can do, and the 'junior employee' comparison, that's where I think it's dangerous.

Jon Reed:

Indeed. Okay, so here's what I've got for you and the audience. I've got a collection of things that are not working on enterprise projects right now and things that are working. And the good news is that the what's-working part is actually a lot longer than the what's-not-working part. But let me get to what's not working, 'cause a couple of the what's-not-working ones are controversial and I wanna put them out there first. One of the things that is not working is agentic for the sake of agentic. Agentic AI has a very particular set of strengths and weaknesses and should not be used just because you want to use agentic AI. In many cases, there are other technologies that fit the bill better. So that's one thing: stop doing it just because you want to say you're doing it. But a more important one is: almost anything multi isn't working right now. Okay? Not at scale. It's not working. So stop it. I know that standards are important. A2A is cool, and vendors talked a lot about A2A, the agent-to-agent protocol, this fall. But look, vendor-to-vendor agents, putting agents in the same room and hoping they will understand and talk with each other, is not working. Okay? So stop it. Just stop it. Yes, standards are gonna be important, and MCP is a little bit better, because MCP is not just agent to agent. There's a lot of tool calling involved with MCP and other forms of standardization of LLM operations. Even so, MCP is an immature protocol, and we've written about that a fair amount on diginomica, so we've gotta be a little careful with MCP also. But when you hear multi-step, multi-agent, all of this stuff, be really careful, because mostly that's not working. There is an exception, which is: within a specialized workflow, you could have an orchestration agent and then a handful of specialized agents that are very task-specific. And that type of orchestration does generally seem to be working pretty well. But again, this is a very focused set of parent and child agents, if you will. And why is it working? Because they share the same context on the same data platform, and that's why it works. So quit pretending that the multi stuff works right now, because it doesn't, and it doesn't help your customers, because the customers need wins right now, not science experiments. So that's one thing that's not working. Now, will it be working next year when we're talking? Probably a lot better, but now is now.
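
To make the narrow exception Jon describes concrete, here is a minimal, illustrative sketch of that pattern: one parent orchestration agent routing steps to a few task-specific child agents that all read and write the same shared context on one data platform. All names here (`call_llm`, `SharedContext`, the agent roles) are hypothetical stand-ins, not any vendor's product or API.

```python
# Sketch: a focused parent/child agent setup sharing one context, as discussed above.
# `call_llm` is a hypothetical stand-in for your model client; everything else is stdlib.
from dataclasses import dataclass, field

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (hosted API, local model, etc.)."""
    return f"[model output for: {prompt[:60]}...]"

@dataclass
class SharedContext:
    # All agents read from the same data platform snapshot -- the reason this pattern holds up.
    records: dict = field(default_factory=dict)
    notes: list = field(default_factory=list)

class TaskAgent:
    def __init__(self, name: str, instructions: str):
        self.name, self.instructions = name, instructions

    def run(self, task: str, ctx: SharedContext) -> str:
        prompt = f"{self.instructions}\nShared data: {ctx.records}\nTask: {task}"
        result = call_llm(prompt)
        ctx.notes.append((self.name, result))   # children write back to the same context
        return result

class Orchestrator:
    """Parent agent: routes steps to specialized children instead of one general agent."""
    def __init__(self, children: dict):
        self.children = children

    def run(self, steps: list, ctx: SharedContext) -> list:
        return [self.children[agent].run(task, ctx) for agent, task in steps]

ctx = SharedContext(records={"supplier": "ACME", "open_pos": 12})
orchestrator = Orchestrator({
    "triage": TaskAgent("triage", "Classify the request and flag exceptions."),
    "lookup": TaskAgent("lookup", "Answer strictly from the shared records."),
})
print(orchestrator.run([("triage", "New invoice received"), ("lookup", "Open POs for ACME?")], ctx))
```

The point of the sketch is the shared context object: the child agents succeed or fail together on the same data, which is why this focused setup tends to hold up where open-ended, cross-vendor agent-to-agent handoffs currently do not.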

Andreas Welsch:

I don't even know how to respond to that. In the conversations I've been in, the conferences I attended, to your point, on the vendor side there's a lot more talk about MCP and A2A. Again, my favorite example of practitioners talking about these sorts of things: Generative AI Week in Austin a couple weeks ago. I went in thinking, is anybody even doing anything with agents? I'm not seeing anything. I'm not reading a whole lot online other than McKinsey, Deloitte, and other vendors saying, here's how you should do it. Is anybody even building agents? And to my big and positive surprise, yes, there are lots of organizations doing this. And it's especially the larger ones, as I've found, that have been on this AI journey for a much longer time. So they have built the muscle. They have the governance, the center of excellence, the champions, the multipliers and whatnot in place already, from either machine learning or, at a minimum, generative AI. And now it's about: how do we build agents? How do we scale agents? How do we do the guidelines and the guardrails? Which kinds of roles do we need to have on the team? All of these things. And so I would say, just because we're not hearing a whole lot about it, and not a whole lot of organizations talk about it, does not mean that nobody's doing it. Everybody's probably figuring this out on their own terms, at their own pace, and they're afraid to share it. That was part of my other question: when I said, apparently you're doing it, why aren't we sharing it here? Why am I not hearing about this in other news outlets or from other sources? And they said, we're taking a more careful approach, because we're concerned that maybe our employees get concerned and all of a sudden think that we might put them out of work.

Jon Reed:

The good news, though, is I was able to get a fair amount of agentic use cases out of companies. And you're right, there's some secretiveness around it. But there's some real agent building going on. In general, though, the agent-to-agent stuff, where you put agents from different vendors and data platforms in a room and hope they'll learn how to talk to each other, isn't working right now. But there are some ways around that. For example, in a decisioning context, it's a little easier, because what you can do is take different data from different vendors and harmonize it onto one data platform, and then have your agents feeding off of that data. So for example, if you want to ask about supplier inventory issues, you could have a data platform pulling from multiple vendors. What's more difficult right now is executing transactions end to end across different vendors and things like that. I've talked with partners who are doing this now, and it's much more like classic integration projects at the moment, where you're really integrating between two vendors if you want to do it. That will change over time, and the standards are important, but I would say vendors overemphasized standards a little bit this fall versus how to get started.

Andreas Welsch:

Let me put another question there. To what extent do vendors actually do the work that their customers are doing? Yes, there are standards and integration. Yes, there are partnerships, there are discussions about how we can make this happen. But how many vendors actually go through the process of building the agents, building the integration, that their customers actually need to build? So I think those are two different pairs of shoes, which is why buyers are seeing this play out in a vastly different context than vendors are seeing it, from my point of view.

Jon Reed:

Yeah, so let me finish out my what's-not-working list, 'cause there's not too much on there. Another thing that's not working is trying to layer agents onto bad processes and siloed data. It should be obvious by now, but you can't do that and expect a good result. AI is like icing on your architectural cake, in a way. If your cake is half-baked, your AI's gonna stink. Today's generation of AI really requires quality data and effective processes. You can shoehorn some of that in a little bit, but you're really missing out. So that's obvious, but it still exists. And then, I'm not gonna say this is not working, but the architectures that are working today, if you go on YouTube and look at some of these agentic architectures, they're pretty complex. They have a lot of moving parts, and they include a lot of things like swapping models and smaller models and all of this. A lot of organizations aren't sophisticated enough to get a good result from today's AI. So I wouldn't say it's not working, because, to your point, there are companies that have built centers of excellence and have done all that internally. But I would just say: be careful what you're getting into, because it's more than just a handful of data scientists now. If you want to get results with enterprise AI, it's a significant investment, with the architectural complexities involved. And many customers, for that reason, are choosing to move forward with agents provided by vendors that are embedding them into existing software processes. And for many companies, that's a good place to start.

Andreas Welsch:

I would agree wholeheartedly, right? Especially the mid-size, upper mid-size businesses, where you have someone or maybe a few people working on this topic, but not necessarily at the scale or the depth of large organizations like a JPMorgan Chase, a Walmart, a PepsiCo, or some others. And to me, that's the exciting part, right? When vendors get to the point that they simplify enough of that stack that it gets easier to use and to apply, either out of the box or with some customization, but where you don't need to stitch everything together and keep all the dependencies in view and maintained and everything. I think that's a big opportunity, and that's when things will propel even further. So my question would be, again: if you're in this mid-market, upper mid-market segment with your company, definitely look at what is out there, how does this work. I think that understanding is key. But when it comes to building, is it okay to give it another quarter or another two quarters, keeping your vendor's roadmap in view, so you don't need to build the entire stack from scratch? The other thing that comes to mind for me is: now, obviously, every vendor says, come to my platform, I can optimize your IT service management and I have agents.

Jon Reed:

Yeah,

Andreas Welsch:

I can optimize your HR workflows and I have agents. I optimize your procurement, I optimize your CX, I optimize your finance. Everybody has agents, and they're very likely really good in that platform, with the data, to your point. But we also know that work doesn't just happen in one platform, doesn't just happen as one task. There are tasks that cross multiple data platforms, applications, processes. And I think that's where I'm curious and excited to see what 2026 has in store, what vendors are coming out with then.

Jon Reed:

Yeah, absolutely. And yeah, there is an overall data platform strategy that companies need to have. I'm skipping a little bit of the transformation theory in our discussion today, because there's only so much time. But in general, leadership at a company needs to start not with technology, but with: how are we gonna compete in our industry going forward? How is the role of our customers changing? How is the role of our suppliers changing, all of that? How are we gonna excel in these areas? And then the technology discussion comes into play, and then the data platform and all of that. Because, like you said, you could get caught up in activating agents in different areas without really understanding how the pieces are eventually gonna tie together. That's not really the topic of today's discussion, but it's really important. I'm really glad you raised it, because we're skipping it, but it's really important that those high-level conversations happen, and I would hope that organizations would contact someone like yourself to start there, rather than starting at a more discrete level within a line of business, 'cause I think that isn't very strategic. So anyway, let's talk about what is working, 'cause there are a lot of things that are working. Oh, one more thing that isn't working, which we can get into a little more later: we're moving past some of the generic office productivity use cases, which is great, into more embedded AI, and I'll get into a couple examples. But we are dealing with work slop, and there's some debate as to how much work slop we're dealing with. But work slop is the phrase. You brought it up before the show, and I think you were right to do that, because it's the sense of: are we accelerating imperfect, machine-generated work inside the organization? So there's a lot of work that needs to be done there to figure out how we're not piling noise onto our colleagues. And of course, a classic example my colleague Brian Sommer hammers on is actually not from internal employees, and it's not work slop either, but it's being bombarded with applications from ambitious job seekers who are applying to hundreds or thousands of places at the same time. And they're justifying it by saying, you're doing the same to us, you're using the same tools to screen us out. But anyway, the point is, when you create that level of volume, either internally or externally, it does create new challenges. So that's something we need to keep in mind.

Andreas Welsch:

I think so, absolutely. I've been running a number of leadership workshops for larger organizations, manufacturing, automotive, and so on, over the last couple of months. And what some of the leaders shared was: hey, now I end up with a full inbox, which was already full before AI, but my team members keep sending me information left and right. Hey, I participated in this meeting, here are the action items. Hey, I wrote this report, can you take a look? And now the leader sits there and needs to decide: do I look at all of these things myself, or do I create some AI agent or some kind of tool that filters what really matters to me? So to me, that comes down to this: we've solved two problems already. We've solved the problem of access; the internet democratized access to information. We've solved the problem of generating information, thanks to generative AI. It's gotten incredibly cheap; anybody can do it. We're not necessarily thinking about whether I should send this just because I can. But now we have a filtering problem. What really matters? What is the most important information that I need to be aware of today? It's probably not the ten emails with three action items each, but what really matters. That's one big thing for me, where leaders need to think about how to accomplish that. And the second part, around work slop: it's a cultural thing in my mind, not a technology thing. Yes, technology could get better, it could write more like you or I do, but still, it's a cultural thing, and it's a cultural thing to me for the following reason. We've been asking people to do more with less for a very long time, but we're not setting the right expectations, I think. Yes, there are the unwritten rules that anything you create should be of high quality. Certainly, if you want to get money for it, your customers expect high quality for the money they pay you; they expect value. But when you use AI as a shortcut to create something really quickly that's good enough, and the other person has a different expectation, that it's not just good enough but actually great, then you have that mismatch. Then you have that generic, AI-generated work that somebody else now needs to read through and get from good to great, or just send back and say: no, that's not something I can accept. Do it again, but use your own expertise.

Jon Reed:

Yeah. And I don't think we're gonna resolve all of those things today, but it's something we should flag as something that is being raised by senior leadership, and they're not sure how to combat all of it yet. It's a topic we're gonna be seeing well into next year, for sure. So as far as what is working: one thing that's working is AI readiness, which I've now advanced in my mind, since we last discussed it, into two components. The first one I'll mention now: AI readiness is about getting results on the way to cleaning up processes and data. The results part is so key, and we're seeing so many great examples of this from different vendors right now. And I don't wanna turn this into just a vendor show, because some of them are our clients and some of them are not, but I do wanna mention a couple so that the audience can research this further. There were a couple that were really compelling on diginomica. One was a piece by Mark Samuels on three Boomi end users in the nonprofit space. It was all about how they were able to clean up data and position themselves for more sophisticated AI down the road. In the meantime, they're connecting data, doing predictive analytics, intervening before it gets too late in ways that make a difference for people, getting a single view of their customers' transactions. This is a big deal; these are not simple things for enterprises to achieve. And if you document and accomplish them with an AI readiness goal in mind, it can really build a lot of organizational momentum. We saw another one from Derek du Preez, who just came back from a Celonis show. He was talking about a piece where there were six different customers who were basically honing their processes with AI in the future in mind, but Pfizer was a potent example: they've gone from 58% order automation two years ago to 85% today, with a target of 95% across 200 markets globally. But the foundation required years of process work before AI entered the picture. Now, I'm not saying it always takes years to get ready for AI; that's not the point. But the point is, how cool to be able to do that process optimization work while charting benefits as you go? That is the perfect way to set yourself up for AI projects. And so we're starting to see companies really understand that and act on it from a data and process perspective.

Andreas Welsch:

One of the great examples I heard the other week was from PepsiCo. Their SVP of engineering was sharing an example of how they engage differently now with their B2B customers in different parts of the world, all with the idea of: how can you order more easily from us, PepsiCo, as a vendor, but also how can we present additional products to you, do some more upselling and cross-selling, help you get more value out of bundles that you can sell to your customers? Because of the use of AI and the way they've changed their customer experience and engagement with B2B customers, they've seen a 9% improvement, which in that case meant $250 million in additional revenue. When I called this out at the conference as the conference chair, I said, to me that's a fantastic example, because a lot of times there's this false expectation that we need to get to 95, 98, 99% of something, right? It has to be a hundred percent, it has to be double, it has to be huge and meaningful. 9%, is it really that big? Look at the absolute numbers as well. Depending on the size of your organization, even a single-digit percentage improvement can translate into significant absolute monetary savings. So keep that in mind as well as you're thinking about: where do I get value? How do I measure it? And to your point, real companies getting real value from it.

Jon Reed:

Indeed. What else is working here? Let me give you a couple more examples. LLMs operating within constraints. Basically, what's really effective right now are compound systems that include agentic components but, and that's why I call them compound, non-agent components as well, which might include quote-unquote tool calls for these systems. One of the things that's working really well is context engineering, where you figure out exactly what context the agent needs, and I'll get back to that also. But these compound systems are very cool, because instead of saying, oh, agents replace everything we do, it's more that you strategically combine a variety of technologies, including old-school machine learning and predictive algorithms, perhaps also with deterministic automation, where you need that deterministic reliability, and also with proper human supervision when it's needed. And it's really pretty compelling. I was a judge on a competition with UiPath earlier this year, and I was really struck by some of the examples there. I'll just break this down, and I'll try not to use their product names: it was a services company that redesigned a process using agentic automation that combined document intelligence (we'll get back to that), intelligent agents, AI-driven decision making, but it also included deterministic RPA. The system can process invoices across 160-plus clients without constant rework. It's not operating at enterprise scale yet, but it was a substantial pilot, not a small pilot. And you see the breakdown here, and it was really helpful to see: the document understanding and the agentic intelligence do real-time triage and apply business rules, escalating when necessary, orchestrating the workflows. The robots handle the more deterministic steps, like data consolidation and reconciling with the ERP system, which has to be compliant. Humans step in for flagged exceptions and quality control. The results included things like a 30% increase in data accuracy, a 50% reduction in automation development time, and an 80% decrease in yearly maintenance effort. So I wanted to get into a little detail there just to illustrate how designing these use cases using different tools and components, and then figuring out exactly where the agentic steps make the most sense, is really working for companies. And I really wanna highlight that, because even though I can be very critical of AI, this is something that a lot of AI critics don't get, because they don't understand what's happening in the enterprise at that deep level that you and I do, because we're talking with so many customers.
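
As a rough illustration of the compound-system idea Jon walks through (not the actual entry he judged), here is a sketch of an invoice flow that mixes a probabilistic document-intelligence step, an agentic triage decision, a deterministic reconciliation step, and a human escalation path. The function names, thresholds, and stubbed results are invented for the example.

```python
# Sketch of the compound-system idea: probabilistic steps where judgment is needed,
# deterministic steps where reliability is required, humans for flagged exceptions.
# extract_fields / agent_triage / reconcile_with_erp are hypothetical stand-ins.

def extract_fields(document_text: str) -> dict:
    """Document intelligence step (probabilistic): pull structured fields from an invoice."""
    return {"vendor": "ACME", "amount": 10499.00, "po_number": "PO-7781"}  # stubbed result

def agent_triage(fields: dict) -> str:
    """Agentic step: decide the route; in practice an LLM applying business rules in context."""
    return "auto_post" if fields["amount"] < 25000 and fields["po_number"] else "escalate"

def reconcile_with_erp(fields: dict) -> bool:
    """Deterministic RPA-style step: exact match against the ERP, no model involved."""
    erp_open_pos = {"PO-7781": 10499.00}
    return erp_open_pos.get(fields["po_number"]) == fields["amount"]

def process_invoice(document_text: str) -> str:
    fields = extract_fields(document_text)
    route = agent_triage(fields)
    if route == "escalate" or not reconcile_with_erp(fields):
        return "queued for human review"        # humans handle flagged exceptions
    return "posted automatically"

print(process_invoice("...scanned invoice text..."))
```

The design choice the sketch illustrates is that the agentic piece only decides the route; posting to the ERP stays deterministic, which is where the reliability and compliance requirements sit.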

Andreas Welsch:

And I think that also elevates the role of enterprise architects even more in these discussions.

Jon Reed:

A hundred percent.

Andreas Welsch:

Being able to understand what is happening here, what we need to make happen, and what kinds of technologies can really help us. We've seen this over the past 10 years, roughly, when we had RPA as its own domain, when we had machine learning as its own domain, when we had predictive analytics as its own domain. Little by little, by advancing each of these pieces individually, it's also made a lot more sense to combine them, to your point, into these composite systems: where we have rules for things that are pretty static, where we need deterministic behavior; where we have more pattern recognition, document intelligence, to your point, to extract entities, to extract information from documents and put it back into the workflow; and then have more orchestration on top with an agent to say, hey, where does this actually go? What are the right tools, to your point earlier, that I need to call to get the data from here to there, or to make a decision on how to process it next? To me, that's the big potential, right? If you're in a large organization, again, you've likely developed some of that skillset already, and you've composed these applications. If you're in a smaller organization, it's probably more about looking at different building blocks or looking at workflow automation, from what I'm seeing, and putting these individual pieces in where it makes sense. But wherever you are in your journey, I think just the fact that this is possible today is already mind-blowing. And to me, the next step is when we look at how we can bring audio, video, images, multimodal experiences into that. Take a picture of your machine, of your broken part, and say: what is this? How do I fix this? And then the system is able to look up the information. Some of the promises that were a lot harder to achieve 10 years ago with machine learning and the early versions of deep learning are, I feel, getting a lot easier now. Just taking a picture of an object with ChatGPT and having it analyzed, what is this, give me a description, what can I do with this, how do I repair it, is a lot easier than with any previous generation. So I'm excited and curious to see more of that also come into the enterprise.

Jon Reed:

Yeah, and I think the big distinction that I wanted to drive home here is that it's really working when you're designing these use cases, like you said, with an architectural perspective, without any judgment of, oh, agentic is the latest, so it should replace the older tools. No: treat all tools as created equal in your mind, find the best one, and find the best combination. And it's the combination of tools that seems to be getting the best result. Now I'm gonna rifle through a number of other things that are working and get your commentary on them. One of them is something that Andrej Karpathy talks about; I recommend looking at his YouTube videos on the future of software. He talks about the autonomy toggle. I talk about autonomy at a very granular level. Customers want control of the amount of agentic autonomy at a very granular level: within processes, within divisions, et cetera. Vendors that can provide that are having success in earning trust. FYI: stop talking about the fully autonomous enterprise and fantasies about that, give people control over how much autonomy they want, and you're gonna get a result.
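
A minimal sketch of what a granular autonomy toggle could look like in configuration, assuming a simple three-level policy per division and process. The levels, names, and default are illustrative assumptions, not Karpathy's definition or any vendor's feature.

```python
# Sketch of a granular "autonomy toggle": per division and process, declare how far
# the agent may go. Levels and the policy table are made up for illustration.
from enum import Enum

class Autonomy(Enum):
    SUGGEST_ONLY = 1      # agent drafts, a human executes
    APPROVE_REQUIRED = 2  # agent executes only after explicit approval
    FULL_AUTO = 3         # agent executes and logs for after-the-fact audit

AUTONOMY_POLICY = {
    ("finance", "invoice_posting"): Autonomy.APPROVE_REQUIRED,
    ("support", "ticket_triage"): Autonomy.FULL_AUTO,
    ("legal", "contract_redlining"): Autonomy.SUGGEST_ONLY,
}

def allowed_to_execute(division: str, process: str, human_approved: bool) -> bool:
    # Unknown processes fall back to the most cautious setting.
    level = AUTONOMY_POLICY.get((division, process), Autonomy.SUGGEST_ONLY)
    if level is Autonomy.FULL_AUTO:
        return True
    if level is Autonomy.APPROVE_REQUIRED:
        return human_approved
    return False  # SUGGEST_ONLY: the agent never executes on its own

print(allowed_to_execute("finance", "invoice_posting", human_approved=False))  # False
```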

Andreas Welsch:

I love that. We talked about autonomous finance, or lights-out finance, years ago in the industry, and it hasn't happened. I saw a vendor presentation a couple weeks ago, a big enterprise software vendor, who said: look, we've moved from on-premise to the cloud, now we're adding AI here, two or three steps in between, but eventually we envision a fully composable, agentic, autonomous system. I'm sitting there thinking: so what, 2060 is when we'll have it? Because it's a huge process, right? For any software vendor that's in business today, that has a sizable customer base, it's already a huge undertaking to move from on-premise to cloud: not just your development and your software, but then your customers as well. It's a decade-long effort, and even longer in some cases. So yes, visions are great. The reality is that very few vendors established today will completely turn their stack upside down or develop something new, because of all their legacy.

Jon Reed:

And I would add to that, that the customers are just in a very different place with how much they trust this technology and what they want to do. They have very different compliance issues by region, by industry. So give them the control and the vendors that do are gonna be successful. So that's my thought on that.

Andreas Welsch:

I would even add, and maybe that's my provocative bit of today's episode, alright, here we go: your largest enterprise customers are already building agents. There was a piece by McKinsey that came out recently that said 6% of the companies they surveyed are really reconfiguring their business, really looking at business processes and moving them to agentic. I think those are the 6% that, for better or worse, will develop their own agentic operating system stack, that will look at ways to automate their processes, to potentially, at some point, replace something like your good old business systems that you might have in your basement or that you're hosting somewhere in the cloud. So I think there is that risk on one hand. And on the other hand, it's again those companies that don't exist yet, but that will be able to build from scratch something that works autonomously or semi-autonomously, and then compete with something that might look a lot more legacy, even if it is in the cloud, even if it has AI sprinkled on top.

Jon Reed:

Cool. The next thing that's working is evaluation and observability of agents. It's incredibly important to be able to evaluate agents and, ideally, make real-time adjustments to their behavior, but definitely to audit them afterwards: look at audit trails, see what went wrong. There are all kinds of things that can go wrong. These are probabilistic agents; they don't always work in your workflows the way you intended them to. Sadly, Andreas, vendors are not doing a great job on this topic. They are largely dismissing it, but there are some wonderful exceptions. Oracle came out with some very aggressive announcements around this topic. There's a whole bunch of open-source tools and vendors in this space doing a great job that customers should be looking at. But the bottom line: I had one vendor tell me it's not an issue because our AI is explainable. Those are not the same thing. I'll get to the explainability thing in a second, but it's not the same. These agents are being trusted with a lot of important stuff inside your enterprise domain. You need to be able to see what they're doing and why. Evaluation frameworks are a must if you want the trust of customers.
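
As a sketch of the observability point, here is a bare-bones audit trail that records every agent step (tool calls, decisions, the model used) so it can be reviewed or fed into an evaluation job later. A production setup would more likely use distributed tracing or a dedicated eval/observability platform; the structure and field names below are illustrative only.

```python
# Sketch: record each agent step so behavior can be audited after the fact.
import json
import time
import uuid

class AgentAuditTrail:
    def __init__(self, agent_name: str):
        self.run_id = str(uuid.uuid4())
        self.agent_name = agent_name
        self.events = []

    def record(self, step: str, inputs, output, **extra):
        # One append per tool call, model call, or decision the agent makes.
        self.events.append({
            "run_id": self.run_id, "agent": self.agent_name, "ts": time.time(),
            "step": step, "inputs": inputs, "output": output, **extra,
        })

    def export(self) -> str:
        return json.dumps(self.events, indent=2, default=str)

trail = AgentAuditTrail("ap_invoice_agent")
trail.record("tool_call", {"tool": "erp_lookup", "po": "PO-7781"}, {"status": "open"})
trail.record("decision", {"rule": "amount < 25000"}, "auto_post", model="(hypothetical model id)")
print(trail.export())  # what a reviewer or evaluation job would inspect afterwards
```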

Andreas Welsch:

Absolutely. Yeah. And somewhat similar, in that direction: a great example that I heard at one of the conferences I attended was around governance, and it relates to this point on evaluation, but mainly: how do we even know what agents we have in a business, and how do we know they're not going rogue? That company's solution was to say every agent has a unique human owner, right? So if we see an agent, we know exactly who's responsible for it, who built it. In a similar way, the part about evaluation obviously goes a step further, to say: does this agent actually do what it's supposed to? Does it do it within the guardrails? Or is it going off the rails somewhere, doing something we don't want it to do? And I think that, again, borrows from how we lead, how we manage human employees. It's pretty similar, right? So yes, we have a certain level of trust and expectation that somebody does what they're trained to do, or what they are supposed to do. But that's also why we have some checks and balances in place. And we'll see that here for agents too.

Jon Reed:

Yeah. And really good call on the governance part, because you could throw governance frameworks into the same discussion as well. For sure. The next thing that's working is explainability via tools: RAG and knowledge graphs. The point being that in a lot of enterprise contexts, AI explainability gets a little bit better, because when you see the demos, you see the source documents that are being pulled for the various workflows and questions you might have of your agent or your assistant, or whatever you wanna call it. So the point is, explainability is possible in a RAG context, because you're providing the agent with information that is being sourced from various types of databases or vector engines or whatever you want to call them. The point is, it's there and you can drill into it, and you should often drill into it to make sure of what you're looking at; that's very important. It doesn't solve the LLM black box problem completely, by any means, but it's a major upgrade. In fact, I would go so far as to say to customers: if you don't have that level of source documentation, you need to ask why, and put the brakes on it until the vendor provides it, or until you provide it for your own employees, 'cause you need it. But that's working right now, because it's taking a little bit of the edge off the mysterious 'where did you get this information from?' issue.
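
A toy sketch of the RAG explainability pattern Jon describes: retrieve passages, hand them to the model as context, and return the source IDs alongside the answer so users can drill into where a claim came from. The keyword retrieval, the sample documents, and the `call_llm` stub are placeholders for a real vector index and model call, not any vendor's implementation.

```python
# Toy sketch of "explainability via RAG": answer from retrieved sources and surface them.

DOCUMENTS = {
    "policy-041": "Purchase orders above 25,000 EUR require two approvals.",
    "policy-112": "Suppliers must be re-qualified every 24 months.",
}

def retrieve(question: str, k: int = 2) -> list:
    # Crude keyword overlap; a real system would query a vector or knowledge-graph index.
    words = question.lower().split()
    scored = sorted(DOCUMENTS.items(),
                    key=lambda kv: -sum(w in kv[1].lower() for w in words))
    return scored[:k]

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call that answers strictly from the provided sources."""
    return "Two approvals are required for purchase orders above 25,000 EUR."

def answer_with_sources(question: str) -> dict:
    passages = retrieve(question)
    prompt = ("Answer only from these sources and cite them:\n" +
              "\n".join(f"[{doc_id}] {text}" for doc_id, text in passages) +
              f"\n\nQuestion: {question}")
    # The source IDs travel with the answer so a user (or auditor) can drill into them.
    return {"answer": call_llm(prompt), "sources": [doc_id for doc_id, _ in passages]}

print(answer_with_sources("How many approvals does a large purchase order need?"))
```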

Andreas Welsch:

I think that's absolutely critical. And from what I've seen over the past couple of weeks, whether it's pharma or financial services, certainly industries with heavy regulation and the need for clear documentation, the same is true there as well. What did the agent do? Why did it do it? What data did it use in the process of making or proposing a decision? Where did the data come from? Things like that. Traceability is really important in that same context as well: to be able to show your regulators, your compliance and risk departments, and say, hey, yes, this is what the agent did, and here's the whole trace of why it happened and what exactly has happened.

Jon Reed:

A hundred percent. I'm adding context engineering to the list as well. We've been talking about some of those concepts. It's a little bit of one of those buzzwords that takes on a life of its own, and I'm aware of that, but the reason I'm mentioning it is that, with that buzzword in mind, you can head over to YouTube and do a search on it, and you'll find a ton of really relevant content that will help you understand how enterprises are making agents quote-unquote smarter by putting in data that is specifically relevant to them and their enterprise. And the other thing we didn't mention before is that it's a major breakthrough for customer data privacy as well to approach it this way, because now you're not co-mingling your customer data with the LLM and the training data from the LLM. You're simply providing the data as context, which the LLM does not absorb into its training if you architect this correctly. So it's a way of really honing what LLMs do into the context that is you and your business and your preoccupations, and in some cases industry requirements as well. And there's an art to doing it. I used to think of it as a band-aid. To some extent it is, because it's compensating for some weaknesses in LLMs, but it's more sophisticated than a band-aid now, so it's not very nice of me to say that anymore. The point is not to get infatuated with the buzzword, but to understand that the discipline behind it, of getting the agent the right information for your company and your role and your job, is getting a lot better. Oh, and by the way, in a secure and role-specific way as well, which is yet another reason why it's very hard for customers to build this stuff right now and why vendors with vast resources have an advantage: because all of that role and security specific stuff also matters in the context.
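
To illustrate the context-engineering and role-specific security point, here is a minimal sketch in which customer data is never used for training; it is filtered by role and injected into the prompt at request time. The roles, field names, and filtering rules are invented for the example.

```python
# Sketch: business data reaches the LLM only as role-filtered context at request time.

CUSTOMER_RECORD = {
    "name": "Acme GmbH", "open_invoices": 3, "credit_limit": 50000,
    "payment_terms": "net 30", "margin_pct": 22.5,   # sensitive field
}

ROLE_VISIBLE_FIELDS = {
    "ar_clerk": {"name", "open_invoices", "payment_terms"},
    "sales_director": {"name", "open_invoices", "payment_terms", "credit_limit", "margin_pct"},
}

def build_context(role: str, record: dict) -> str:
    allowed = ROLE_VISIBLE_FIELDS.get(role, set())
    visible = {k: v for k, v in record.items() if k in allowed}   # security filter first
    return "Use only this data, do not rely on prior knowledge:\n" + "\n".join(
        f"- {k}: {v}" for k, v in visible.items())

prompt = build_context("ar_clerk", CUSTOMER_RECORD) + "\n\nQuestion: What are Acme's payment terms?"
print(prompt)   # this assembled context, not fine-tuning, carries the business data to the model
```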

Andreas Welsch:

Oh, yeah. Yeah, for sure. Especially if you don't need to manage roles and permissions in five different places, but it's transparent in the platform that you connect the agents to, or that they live in, or that they have access to. That's a big selling argument. And it reduces the risk as well of any data slipping through, or anybody getting access to something they're really not supposed to.

Jon Reed:

And that's one reason why a lot of companies and a lot of vendors aren't really trying to train large language models. There are some exceptions, but for the most part, they're using the frontier models or the best large models they can find, but supplementing them with the proper context. However, I did wanna note on my list of what's working that smaller models can be very effective at times, along with specialized agents that draw on those models. So again, we're seeing some enterprise discipline around these breakthroughs. And like you said, some of these may be percentage-point savings, but they do start to add up, and we see this in the document intelligence use case. For example, I revived my demo floor series for AI demos, and I did one with Evisort. One of the cool things they do is they'll spin up a smaller model with your documents in it for a very specific region or area. And it's just those documents, so it's just working on those. And then if you want to add a term or a search to that, it can spin it up and retrain it, 'cause it's small. These things are important, and they're going under the radar while people are wringing their hands about the AI bubble, because they don't add up to enough right now to influence the market economy. But don't be fooled: these are important developments for enterprises trying to solve problems. In the case of document intelligence, which I know you wanted to comment on as well, we could call it whatever you want; document intelligence is one way of framing it. But the point is that enterprises are deluged with documents, and it's incredibly difficult to manage them all and deal with them all. In the case of legal documentation, for example, you can get thousands of pages of stuff. It's an obvious use case to have AI go in and redline some of that stuff and flag problems and phrases. And you say, okay, some of that's been in existence previously, sure. But I saw demos this fall that helped me to see, for example, how an intelligent system can even point to a clause that has handwritten stuff, where you're like, oh, that's actually an adjustment on the termination clause, that's important, and it spots that. Or it's force majeure, but the words force majeure aren't used, and it still recognizes the semantic context of that paragraph. That stuff is really important and valuable and a time saver. And just to give you a sense, these are not small developments, in my opinion, even though the results may appear modest at first glance.
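
As a toy version of the semantic clause detection Jon saw demoed (flagging a force majeure paragraph even when the phrase never appears), here is a sketch that scores contract paragraphs against exemplar clause wordings. A real document-intelligence product would use learned embeddings and tuned thresholds; the bag-of-words similarity, exemplars, and threshold below are assumptions just to show the mechanic.

```python
# Sketch: flag clauses by semantic similarity to exemplars rather than exact phrase match.
import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

EXEMPLARS = {
    "force_majeure": ("neither party is liable for failure to perform due to events beyond "
                      "its reasonable control such as natural disasters war or strikes"),
    "termination": "either party may terminate this agreement with written notice",
}

def flag_clauses(paragraph: str, threshold: float = 0.2) -> list:
    vec = vectorize(paragraph)
    return [name for name, text in EXEMPLARS.items() if cosine(vec, vectorize(text)) >= threshold]

para = ("The supplier shall not be responsible for delays caused by events beyond its "
        "reasonable control, including strikes, floods, or acts of government.")
print(flag_clauses(para))   # flags 'force_majeure' even though the phrase never appears
```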

Andreas Welsch:

So one would hope, in your business, if you're listening or watching here, that you think about this early in your journey, and not when you've already built agents that access data that's outdated, that's incomplete, that's got issues. I think that's when organizations start thinking about it most of the time. But don't make that same mistake; start at the beginning. Two thoughts come to mind. One: I attended a vendor partner event together with Microsoft a couple weeks ago in the document management, document intelligence space. And there it was all about how we can help organizations get better decisions, better insights through Copilot agents, with data stored on SharePoint, on OneDrive, because we have so much more metadata about the actual files that are stored there. So we know which one is the real latest version, right? Is it the v1, 2, 3, 5, F-for-final, or something else? Who created it? What is this really about? So enriching the data with more metadata, so your agents have better information to go by, was one thing. The other was an example one of the startups in this agent document space shared: hey, our legal counsel built their own agent. They said, usually when I get an NDA, here are the five things that I check for. If they're in there, good; if they're mutual, great. If not, here's what I usually write back. So, agent: look for these aspects; if they're not there, make a proposal, send it to me, I'll review it and send it off. And through an A/B test over a month or three months, I don't remember exactly, they've been refining the agent to the point where they say: if it's an NDA, the agent now handles all of the requests I get, and I, as a legal professional, have time to work on contracts that we write, to work on more complex things. And to me, that was a great example, because it sounded as if the counsel wasn't necessarily technical by nature, but they were able to build something that helped them in their role. And the part that I really liked was the A/B testing: let it run in parallel for a while. What would the agent decide? What would I have decided? Make it better, fix the gaps that the agent didn't cover yet, to develop confidence over time. Say, okay, yes, it now reviews every NDA that I get and sends it out, and I don't need to do that anymore.
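
A rough sketch of how that NDA example could be wired up: the counsel's checks encoded as a checklist the agent applies, plus a shadow-mode comparison against the human's decision during the A/B period before the agent answers on its own. The checks, keyword matching, and field names are invented for illustration; the startup's actual agent presumably reads clauses with an LLM rather than keywords.

```python
# Sketch: checklist-driven NDA review plus a shadow-mode (A/B) comparison with the human.

NDA_CHECKLIST = {
    "mutual": "mutual",                       # obligations go both ways
    "term_defined": "term of",                # confidentiality period is stated
    "governing_law": "governing law",
    "no_residuals": "residual",               # flag residual-knowledge clauses
    "return_of_info": "return or destroy",
}

def agent_review(nda_text: str) -> dict:
    text = nda_text.lower()
    findings = {name: (phrase in text) for name, phrase in NDA_CHECKLIST.items()}
    must_have = ("mutual", "term_defined", "governing_law")
    verdict = "accept" if all(findings[k] for k in must_have) else "propose_edits"
    return {"verdict": verdict, "findings": findings}

def shadow_compare(nda_text: str, human_verdict: str, log: list) -> None:
    """Run the agent in parallel with the human and record agreement, as in the A/B period."""
    agent = agent_review(nda_text)
    log.append({"agent": agent["verdict"], "human": human_verdict,
                "agree": agent["verdict"] == human_verdict})

log = []
shadow_compare("This mutual NDA... term of two years... governing law: Delaware...", "accept", log)
shadow_compare("One-way NDA with residual knowledge clause...", "propose_edits", log)
print(sum(e["agree"] for e in log) / len(log))   # agreement rate you would track before going live
```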

Jon Reed:

Excellent. Yep. Those are great examples that really reinforce that this is a very promising area to bear down on. It's like that next level: now we're moving beyond the more generic productivity stuff. I think one of the best places to start is with your document workflows. I know we're on the home stretch, so what I wanted to do now is run through, we talked about what's working in general, now I wanna walk through what's working from a use case perspective. But before I do that, just real quick, because you nailed the second component in AI readiness that I've been seeing this fall, which is, you could call it real time: we need a real-time data layer for AI, and we're getting a better understanding of what that looks like. And I use real time in quotes because it's really more about right time. I still hold firm to the right-time distinction, 'cause sometimes real time is still a little bit elaborate and expensive for certain kinds of data that isn't updated constantly. But the point is you need a right-time data layer and you need a lot of metadata and annotation. And this is what's become very clear: we're starting to understand what agents need in order to understand what the data is. So think about metadata and annotation as a way to tell the agents what the data is about, so that they can better and more accurately access it and consume it. So we're starting to learn some things about what AI readiness looks like, and I think that's a really important development in the last year, so I wanted to emphasize that. Let me just give you a list of some of the other use cases that I think are working particularly well right now. This is not a comprehensive list, and you can maybe say which ones you respond to the most, et cetera. One is hyper-personalization. Meta and Google are reporting a lot of revenue off of this. You have to be careful with this in the B2B space because you can get into the creep factor very quickly. Meta and Google don't care about being creepy, but you should. The collateral damage of people thinking they're creepy isn't a big deal for them, but it should be for you and your customers. But in general, hyper-personalization is one of the best ways to generate content with these machines, and it does work, especially if you design the use case properly and show people the stuff they want to see. First-level support: we know a lot about that. Again, it's often best not framed around headcount reduction but around moving people into higher-impact roles. There's a lot of interesting stuff around sales automation. There's a bunch of stuff around focused assistants in various domains, including things like supplier management, inventory roles, or what have you. The point being: focused assistants. I could have added general assistants to the what's-not-working list. A general assistant? No, not yet. But a focused assistant in a particular domain? Yes. And I would add coding to the list, though that is controversial because of the downstream impacts of code. So you have to be careful: coding, if you want productivity out of it, is a real discipline. It's not as easy as the vibe coders would make you think. But that's a much longer discussion. The other thing I would put on the list is 80% templates. So for example, project plans, marketing plans, campaign plans. 80% is just a ballpark, but an 80% template is still a heck of a lot better than zero.
But 80% shouldn't obscure the last mile that you often need to apply to it. And then there are a lot of content generation use cases, but you have to be careful because of the workslop factor. And note that I did not say content creation. I said content generation. There's a big difference between those two. And I'll fight Mark Zuckerberg in the mud on that one until he gives in: creating ads is content generation, not creation. Sorry.
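
Picking up the metadata-and-annotation point from a moment ago, here is a minimal sketch of a right-time filter, assuming illustrative field names: documents carry annotations about domain, version status, and freshness, and the retrieval step filters on them before any model ever sees the text.

```python
# Minimal sketch of a "right-time" data layer for agents: metadata tells the agent
# what a document is, how fresh it is, and whether it is the approved version, so
# retrieval can filter first. Field names and the filter policy are illustrative.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AnnotatedDoc:
    doc_id: str
    domain: str          # e.g. "procurement", "legal"
    is_latest: bool      # resolved from version history, not guessed from the file name
    last_updated: date
    text: str

def right_time_candidates(docs, domain: str, max_age_days: int = 90):
    """Keep only latest, recently updated documents for the requested domain."""
    cutoff = date.today() - timedelta(days=max_age_days)
    return [d for d in docs
            if d.domain == domain and d.is_latest and d.last_updated >= cutoff]

corpus = [
    AnnotatedDoc("po-guide-v5", "procurement", True, date.today(), "Current PO guide..."),
    AnnotatedDoc("po-guide-v3", "procurement", False, date(2023, 1, 10), "Old PO guide..."),
]
print([d.doc_id for d in right_time_candidates(corpus, "procurement")])
# -> ['po-guide-v5']: the agent only consumes annotated, current material
```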

Andreas Welsch:

I love that. That deeply resonates with me, because I identify as a content creator, not as a content generator. Exactly. There's something about the process of generating that is a lot lower level in that sense, churning out things rather than creating from thought, experience, and everything else.

Jon Reed:

Did I miss any use cases that you really like? There's a bunch of AP and AR automation stuff as well.

Andreas Welsch:

So I recently moderated a panel with procurement experts, and the topic was around autonomous procurement. And granted, autonomous procurement, like the autonomous enterprise, is a big vision. But some of the procurement leaders shared what they actually run. One lady in the semiconductor industry said, hey, we've rolled out agents to help us with RFPs. On one hand, sending out RFPs. On the other hand, when we get the responses back, evaluating how suppliers responded: are they addressing the key points that we have in there? Where are the similarities? Where are the gaps? Doing this kind of analysis and consolidation so that our procurement professionals, again, can work on higher-level tasks, to your point, freeing them up for more valuable work. And we can do it in a fraction of the time. To me, that was a great example, again, because it's easy to listen to vendors on the other side and say, yes, we have agents, we'll help you optimize your long-tail spend, and we will do the RFP process for you. Talk is cheap, right? Still is. But seeing those real examples of people saying, yes, we're doing this, we're doing this at scale across many different geographies, thousands or hundreds of thousands of suppliers, thousands of RFPs per year, all of that really hits home. It really speaks to the value that is in these solutions and what they can already do today off the shelf. Love it. Jon, before we wrap up, I'm really curious: 2026, what do you think? Where will we start? And, if you dare, where will we end?
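
Before Jon's answer, a minimal sketch of the RFP-evaluation idea just described, under stated assumptions: the key points are hypothetical, and the crude substring match stands in for what a real system would do with an LLM or embeddings. The point is the shape of the workflow: check each supplier response against what the RFP asked for and surface the gaps for a human reviewer.

```python
# Minimal sketch: flag which RFP key points each supplier response never addresses.
# Key points are hypothetical; the substring check is a deliberately crude stand-in
# for an LLM- or embedding-based comparison.
KEY_POINTS = ["delivery lead time", "unit pricing", "quality certification", "sustainability"]

responses = {
    "Supplier A": "We commit to a delivery lead time of 6 weeks with tiered unit pricing...",
    "Supplier B": "Unit pricing attached. ISO 9001 quality certification held since 2019...",
}

def gap_report(responses: dict, key_points: list) -> dict:
    """Return, per supplier, the key points their response never addresses."""
    return {
        supplier: [p for p in key_points if p not in text.lower()]
        for supplier, text in responses.items()
    }

for supplier, gaps in gap_report(responses, KEY_POINTS).items():
    print(f"{supplier}: missing {gaps or 'nothing'}")
```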

Jon Reed:

That's a really good question. I'll keep it on the enterprise focus and forget about the larger market patterns that are really hard to predict. On the enterprise side, I think what we're gonna see is a growing customer maturity, where they're gonna start to take more ownership of the concepts here. And while they might work with vendors, there's gonna be more customer input and design, and they're gonna get a better handle on the technology and how it can work for them. And I'm looking forward to that because, like you said, at the moment it can be hard to unearth some of these stories. A lot of it is maybe a little more private adoption, if you will. I think we're gonna see more of it in public, and we're gonna see more stuff happening at scale. I'll be really interested to see how it all lands with the role of humans and the role of AI. I happen to think that a lot of the AI job-loss hype is a little overblown, but I'm not gonna sit here and say that there aren't impacts, especially with junior roles and certain things like that. But I think that's gonna be really interesting, and I'm hoping to continue to unearth companies that really do this in a way that energizes them and that creates fulfilling roles for their employees. 'Cause I really think we're at a fork in the road: do you want a fulfilling workplace or do you want a surveillance workplace? I think you have to really make a choice and push one way or the other. And I'm gonna be really curious, because I know which one I'm rooting for, but I'm not stupid enough to say that it's a foregone conclusion that the stuff I'm rooting for is gonna win. So I think later this year we should start to get some really potent examples on both sides of that, and let's see who wins out.

Andreas Welsch:

Thank you so much for sharing your perspective. Similarly, I think, on one hand, yes, there are opportunities to take out some of the repetitive, mundane work and put in some kind of system, AI, agentic AI, what have you. But a lot of times, to me, it feels that this is very shortsighted, meaning it's focused on the here and now, and it misses the opportunity: what if we can give our employees tools and systems that will help them become more efficient and more effective? They already know our process, our product, our customers, our industry. They have deep knowledge. So what would we be able to achieve if we gave them better tools to do the work? To me, that's one of the big challenger questions I ask participants in my leadership workshops: what else could you be doing if you had the same number of employees, but they were more capable and even more productive?

Jon Reed:

Before we wrap, can I just ask you one quick thing, which is, you do these workshops. I've done a bunch of workshops with college students, and I wrote a piece on finding human purpose in AI that was based on some of that, and also on a panel I did recently. But I think it's really interesting to see that individuals are also highly motivated right now to better understand this. And I don't know what you're seeing in your workshops, because part of this is enterprise level, let's have successful projects, but part of it is deeply personal, and it affects you and me as well: what are we shooting for in our careers? I really love those conversations, and I think they're really important. I already ranted about creativity, and we know about the critical-skills atrophy and the critical-thinking needs, but I think even more important than critical thinking, from my adventures this fall, is: don't give up on domain expertise either. Because a bunch of young people were telling me, I'm not sure if I want to become blank anymore because AI can do that. And I kept trying to tell them, no, don't give up on your passions. You may have to tweak them, but don't give up on them. And in the enterprise context, we're gonna need domain experts, because sometimes those are the only people who can tell where AI went wrong in a particular area, because it is pretty good at a lot of stuff at certain levels. So I dunno if you're finding that in your workshops, but to me, I really welcome the conversation, not just on an enterprise level, but on a personal level of how do I do that. And I view it as this dual thing of become more human, but also, I call it, become a cyborg. I was joking, but the point was: learn how to work with machines, but also become more human. Do both.

Andreas Welsch:

Yeah. I feel some of those phrases get overused a little, and we need to fill them with meaning: what does it mean to become more human? Let me answer from two perspectives. One is where I see a lot of need and traction and interest right now in large and medium-sized enterprises: how do we use these tools? It's mainly around productivity. Yep. How do I get my people off the public version of ChatGPT and onto Copilot, or onto a more governed model-garden type thing, so we at least know the data is not likely to leak or leave our environment? That's one. The second part from there is, how do I guide my teams to use this, empowering them? On one hand, yes, I want you to use the tools. On the other hand, saying: but you're not off the hook for doing great work. Here's how you can do great work together with AI. Review the output. Be specific, first of all, when you enter your prompt or your goals. Check for bias; what does bias even look like? And how do you get it from good to great? That's one domain. The other domain, to me, is similar to what you shared about working with students. I teach management information systems, I teach production and operations management, and I introduced generative AI in the classroom about a year ago. We did some prompting techniques, from the simple things, zero-shot, few-shot, to chain-of-thought and tree-of-thought type of things. I tried to run the same thing in the spring, and my students said, can't you show us something else, because we're using ChatGPT on a daily basis and we think we're pretty good at this? So we looked at other tools, HeyGen, ElevenLabs, Gemini for image generation, and built a scenario around that. And I'm going to run it again in a few days. I'm curious how far adoption has advanced. But it's also not just about adoption, because, similar to the questions you've mentioned, students keep asking me: do I still have a job when I graduate? Am I going down a path in STEM or some other field where AI will be at least as good as I am when I get out of college? And to your point, I think we still need that level of expertise, but it's a lot harder to define right now what expertise looks like and how you can build it. Yes, you can perform at that expert level a lot faster, but really knowing whether something is right or wrong, good or bad, brings it back to Malcolm Gladwell and the 10,000 hours you need to become an expert, which I believe is still true; it just looks different. So in a business, it might very likely be pairing your most senior, most expert resources with your most junior ones in job shadowing, pair work, what have you. There are many concepts for knowledge transfer, in both directions by the way, that I think can be very fruitful and very promising for building up your next generation of experts. But don't dismiss that and assume the expertise is now all in the cloud, all in large language models.

Jon Reed:

Love it. To be continued. I would just say quickly that we can't redefine it in one conversation. But I think building the community around it is what you alluded to, and that's what I'm seizing upon: don't just be a siloed expert, be a cross-disciplinary thinker, and build a community around what you care about, 'cause that's something that machines don't do, but you can.

Andreas Welsch:

Beautifully said. Alright, folks in the audience, thank you so much for your viewership and your listenership over the course of this year. This concludes 2025, and I can't believe how many different topics we've touched on. Jon, we've touched on at least four in our episodes, so do go back and listen to those as well to hear how things have progressed since the beginning of the summer. Stay tuned for more, and hopefully for more episodes of Jon and me on What's the Buzz. Have an awesome holiday season and a great new year. Jon, thank you so much.

Jon Reed:

Thank you.