
What’s the BUZZ? — AI in Business
“What’s the BUZZ?” is a live format where leaders in the field of artificial intelligence, generative AI, agentic AI, and automation share their insights and experiences on how they have successfully turned technology hype into business outcomes.
Each episode features a different guest who shares their journey in implementing AI and automation in business. From overcoming challenges to seeing real results, our guests provide valuable insights and practical advice for those looking to leverage the power of AI, generative AI, agentic AI, and process automation.
Since 2021, AI leaders have shared their perspectives on AI strategy, leadership, culture, product mindset, collaboration, ethics, sustainability, technology, privacy, and security.
Whether you're just starting out or looking to take your efforts to the next level, “What’s the BUZZ?” is the perfect resource for staying up-to-date on the latest trends and best practices in the world of AI and automation in business.
**********
“What’s the BUZZ?” is hosted and produced by Andreas Welsch, top 10 AI advisor, thought leader, speaker, and author of the “AI Leadership Handbook”. He is the Founder & Chief AI Strategist at Intelligence Briefing, a boutique AI advisory firm.
Enterprise AI: Common Myths Inhibiting Success (Jon Reed)
Chasing hype is easy. Delivering results with AI in the enterprise? That’s where leadership is tested.
In this week’s episode of "What’s the BUZZ?," I sat down with Jon Reed, industry analyst and co-founder of diginomica, to unpack some of the biggest myths that hold organizations back from real Agentic AI success.
Here are four myths that stood out:
1) Myth: It has to be Generative (or now, Agentic) AI
Predictive models, machine learning, and other “less flashy” approaches often deliver the most immediate ROI. Success starts with the problem you’re solving, not the trendiest tool.
2) Myth: Perfect data guarantees perfect results
Even with high-quality data, AI is probabilistic and not deterministic. Outliers and unusual errors happen. That’s why audit trails, risk management, and cultural readiness matter just as much as data quality.
3) Myth: AI replaces expertise and creativity
AI amplifies expertise but cannot substitute for it. Domain experts are critical for spotting flaws and guiding outcomes. And while AI can generate content, true creativity and ingenuity still rest with people.
4) Myth: Leaders don’t need to understand the tech
Courage and vision are vital, but without data and AI literacy, leaders risk reimagining the future on the wrong foundation. Both human leadership skills and technical fluency are essential.
If you’re serious about moving past AI buzzwords and building sustainable success in your organization, this conversation is for you.
***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.
Level up your AI Leadership game with the AI Leadership Handbook:
https://www.aileadershiphandbook.com
More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter
Today we'll be busting some myths around enterprise AI and the do's and don'ts that typically inhibit success. And with me, I have Jon Reed, esteemed industry analyst and co-founder of diginomica. Jon, it's so great to have you on again.
Jon Reed: Yeah, it was cool. I was really hoping we could continue our discussion and hijack your show in some ways, in a fun way, because I just said, let's have a discussion, 'cause I want to hear what you think of what I think, and I think that makes it super interesting for viewers.
Andreas Welsch: I hope so too, right? Two weeks ago, we talked about what we've seen from different vendors' conferences in the first half of the year. So I'm really excited to talk about some of the myths that we each see out in the market. What are people believing? What are they doing? Where do they typically fail? And then how can you actually do it better? Without further ado, I would say let's jump right in.
Jon Reed: Yeah, for sure. And one thing I do want to say, and I think this is true for both of us, is that when we talk about myths and puncturing hype balloons and stuff, this isn't to boost our LinkedIn profiles or to get viral commentary. This is really about how you get underneath that, to a place where you can really have success on projects. Because one thing that has convinced me in the last year of events, especially this last phase, is that there are plenty of success stories out there. The thing that might be a little disappointing to some of the AI true believers is that some of these success stories are a little bit modest in scope, but I find them very encouraging because they show thoughtful design and ways of building momentum. I think sometimes people get frustrated 'cause they want to believe that this is instantly revolutionary or transformative when, in fact, it's really about building wins on top of wins in a more modest way. And if that's not sexy enough for you, then I don't know what to tell you. I think it's pretty sexy.
Andreas Welsch: If you're watching this on LinkedIn and YouTube, you could probably see my face light up as Jon was talking, because it reminds me of what we've seen with machine learning already, and what we've seen to some extent with generative AI as well. I remember some of the early machine learning projects where business leaders said, we need to have a 98% automation rate, otherwise this is not going to fly. What does your process look like today? Oh, we can automate 30% of things. Even incremental improvements, or incremental improvements in accuracy, can have very significant impact in absolute terms. So to me, as we were talking, that was one of the big myths that we saw early on: that it has to be fully automated, has to be perfect from the beginning. I think a lot of times it's also iterative in nature. We evolve, the data evolves, our goal and scope evolve in some respects. And to me, that's an important thing that has stayed true from the days of predictive machine learning to generative AI, and that I see now with agentic AI as well. So that's why I'm especially excited that we are also seeing the first customer examples and stories out there of why companies are using this and how they're using it.
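The point about "modest" automation rates still paying off can be made with back-of-the-envelope math. The figures below are purely illustrative assumptions, not numbers from the episode:

```python
# A sketch of why a 30% automation rate can matter in absolute terms:
# at high volumes, even partial automation frees up substantial capacity.
# All inputs here (case volume, handling time) are hypothetical.

def absolute_impact(annual_cases: int, automation_rate: float,
                    minutes_per_case: float) -> float:
    """Hours of human handling time freed per year by partial automation."""
    automated_cases = annual_cases * automation_rate
    return automated_cases * minutes_per_case / 60

# Example: 500,000 inquiries/year, only 30% automated, 6 minutes each.
hours_saved = absolute_impact(500_000, 0.30, 6)
print(f"{hours_saved:,.0f} hours/year")  # → 15,000 hours/year
```

Far from a "98% or nothing" bar, a partial rate like this already represents thousands of hours of capacity that can be redirected.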
Jon Reed: Yeah, for sure. And look, I don't wanna rehash all of our last show, because our last show, which I really recommend for folks to get a sense of the context of how we got to this point in the market, got into a lot of that, and also some limitations in how people perceive AI first. But we also talked a little bit about how some of the documented struggles around generative AI have to do with, I think, misconceptions about how the tech is best applied. And I think generic productivity copilot things, like this bot wrote an email for me a little bit faster or whatever, are not at the heart of really compelling use cases. So when we take a step back, we talk about really rethinking industries, and I think so much of this is an industry conversation: how are you gonna compete in an industry, how do you want to change your company's business model and services? And then all the tools at your disposal are there, including maturing agentic AI technologies. And as you point out, in some cases you might find that a simple machine learning algorithm applied to some inventory and supply chain patterns actually frees up a significant amount of savings, even though it seems almost surprisingly basic from the vantage point of today's technology. If that works, that's beautiful, because that really shows you the outsize gains that you can sometimes have just by smartly applying the technology to an industry setting instead of a generic productivity conversation.
Andreas Welsch: One of the things that really surprised and almost shocked me at the beginning of the gen AI hype was leaders in large organizations saying, we need to look for generative AI use cases; that's AI. Predictive machine learning, that's not AI, we don't care about this. It has to be generative. And we quickly, obviously, realized no, it doesn't have to be just generative. Everything has a place, and you need to find out what that problem is that you're trying to solve in the first place. Is it more predictive? Is it more around natural language summary, content generation, what have you? And where does each of those play out its strengths? Now add agentic AI to the mix as well. That's an additional dimension, but just blindly saying it has to be this, now it has to be agentic, to me that's a big myth, because you're missing a lot of the opportunity and you might not even be using the right tool or the right method for the problem that you're trying to solve.
Jon Reed: Totally. And we could definitely call that the first myth of the show, right? That agentic becomes this all-consuming technology that blinds you to other use cases and approaches. In fact, agentic technology has a very specific set of pros and cons at this point, and we can get into that further. Right now, I recommend agentic technology specifically for more focused workflows, as opposed to stringing a bunch of agents together, where you start to see a lot more breakdowns in the current technology. It is still a tool. It's just not the cheapest tool, though, and that's really important too. But it has some uniquely beneficial characteristics, and I will get into some use cases around that as we go to illustrate that point. One myth that I'm gonna start with here, and this one is a little explosive, I think, is this notion that the better your data, the better your result. Now, there's a lot of truth to that, and I don't wanna completely discourage people from thinking that way, because so often when you look at where these projects didn't go right in the past, it did have to do with not enough customer-specific data. A lot of these generalized large language models really haven't been trained on enterprise-specific and industry-specific data. So there is a whole thing around data readiness and AI readiness that you and I discussed before, which is a really juicy topic. So I don't want to discourage people from thinking about data quality, but I wanna point out a couple things. There's still a whole lot to be done around culture, business model, design, all of that. I just wrote a piece today about the limits of autonomous agents and how important it is for customers to dictate their own pace when it comes to autonomy. That's not just a data problem; that's a culture thing, that's a compliance thing, based on your industry and what your comfort level is.
With all of that, I also do wanna point out that even if your data is perfect, these systems are not. Even the vendor that I focused on in my last piece, Auditoria, who are doing a really good job with a finance-specific model they developed, and who also have a very specific architecture: accuracy rates are in the 90s, sometimes high 90s, but we're not talking a hundred percent. This is not deterministic. So just because you're piping in all the quality data, it doesn't mean everything's gonna go perfectly. And even when you look at things like RAG context, which feeds up-to-date information into the system, there are issues around RAG in terms of which data gets pulled and whether the LLM properly uses that data. It's really important to think about that, because if you step back and you realize it's not just about the data, how does that help you? It helps you because you do things like what Auditoria did, where they set up audit trails around the processes so you can look at what went wrong, when it did go wrong. You look at this from the comfort level of your users and figure out, is this good enough for them? And you're able to step back and say, look, this technology needs to be designed very carefully, and it's not just about the data.
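The audit-trail idea Jon describes can be sketched as a simple logging layer around each probabilistic decision: record the inputs, the retrieved context, the output, and the confidence, so reviewers can reconstruct what happened when an outcome looks wrong. The field names, structure, and the 0.90 review threshold below are illustrative assumptions, not Auditoria's actual design:

```python
import time
import uuid

# Minimal sketch: every agent decision is appended to an audit trail,
# and low-confidence decisions are flagged for human review.

def log_decision(trail: list, query: str, retrieved_docs: list,
                 answer: str, confidence: float) -> dict:
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "query": query,
        "retrieved_docs": retrieved_docs,   # which RAG context was actually pulled
        "answer": answer,
        "confidence": confidence,
        "needs_review": confidence < 0.90,  # route low-confidence cases to a human
    }
    trail.append(record)
    return record

trail = []
rec = log_decision(trail, "Approve invoice #1042?", ["vendor_contract.pdf"],
                   "approve", 0.97)
```

Capturing which documents were retrieved matters precisely because of the RAG failure modes mentioned above: when the answer is wrong, the trail shows whether the wrong data was pulled or the right data was misused.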
Andreas Welsch: You're absolutely right. Especially the part around the deterministic versus probabilistic nature. To me, it's an important one that we cannot emphasize enough, because I feel that a lot of times through the history of information technology and software, we've typically replaced a process that was manual or semi-manual with automation, and the automation worked like this: if this happens, then do that. It could be a workflow, it could be an application that you've developed. Now, all of a sudden, we say, look at the data or look at certain input factors, and then make a decision based on language and some statistics and probabilities of what the right outcome or the most plausible outcome would be. So yes, naturally you are introducing additional variance, additional risk into that process. And I think being aware that it's not infallible, that it's not a hundred percent correct a hundred percent of the time, is a really key aspect. We've seen this before with generative AI. We're now seeing it even at a faster pace and a greater order of magnitude with agentic AI. And to me, that's something that's really important to highlight: it's not infallible, it's not a hundred percent correct a hundred percent of the time. So we need to understand when is it likely going to fail, and what is the impact if it does fail. Basic risk management and mitigation strategies apply here again as well.
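The contrast Andreas draws, and the risk-management pattern that follows from it, can be sketched in a few lines: a deterministic workflow always takes the same branch, while a probabilistic model's output should be gated by both its confidence and the cost of being wrong. The invoice scenario, thresholds, and dollar figures below are hypothetical examples, not from the episode:

```python
# Deterministic automation: "if this happens, then do that."
# Same input always produces the same route.
def deterministic_route(invoice_amount: float) -> str:
    return "auto_approve" if invoice_amount < 1_000 else "manager_approval"

# Probabilistic automation: accept the model's call only when confidence
# is high AND the blast radius of a mistake is small; otherwise escalate.
def probabilistic_route(model_confidence: float, invoice_amount: float) -> str:
    if model_confidence >= 0.95 and invoice_amount < 10_000:
        return "auto_approve"
    return "human_review"
```

The point of the second function is exactly the "when is it likely to fail, and what is the impact" question: confidence captures likelihood of failure, and the amount cap bounds the impact when it does fail.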
Jon Reed: And some folks might be wondering, if what you say is true, then why would I even want to mess with a probabilistic technology? I think we should get into that a little bit further in some of the use case discussions around what the strengths of some of these systems are. Obviously, the no-brainer strength, the really obvious strength, is that when you attach some kind of a language chat interface onto these systems, their ability to engage users in whole new ways is clearly one of the top strengths. And it's something that has frustrated enterprise software vendors for a long time. One of the oldest tropes in our industry, Andreas, is the difficulty of navigating these systems, and the super user protecting their domain and protecting their information so no one else has access to it. This type of interface really demolishes all of that in a lot of different ways and makes it so much more accessible to interact with these systems. That's just one example. I don't wanna list them all right now, 'cause I want to get into some other things, but that's one example of why you would want to use a probabilistic system: because it can do things that rule-based systems simply can't.
Andreas Welsch: Yeah. And maybe adding two points to that. One is, as humans, we're also not a hundred percent correct a hundred percent of the time. So we need to realize and accept that as well. So the question becomes, how much better or how much worse would an agentic system perform in this case? And is that acceptable? And I think the other part is where interoperability between different language-driven modalities becomes important, right? If I am in my Microsoft suite all day, being able to trigger agents or tasks that span different systems that I have in my landscape gets a lot more important. Or again, if I'm in my ERP and I use a product there, that I can do this across different systems, so that the agent experience in that sense transcends different systems. It's an assistant, right? It doesn't really matter if it's in this silo or in that silo, but it's able to reach into different silos to get me the answers that I need.
Jon Reed: And you make a really good point, because with discipline, when you start getting into the 90% range on accuracy, which by the way is not always easy, but when you can get there on a focused use case, it does get to the point where you can start looking at: for my use case in my industry, is this good enough for me? For a lot of things, and especially, I think one of the things I'm fascinated by, and I'll get into it a little bit, is the customer-facing service use cases. One of the reasons that fascinates me is because I like to look at what machines can do that humans can't. Here's one thing: 24/7 service. Right? In the past, if the service center with the humans was shut down, the bot would simply point you to these pages of documentation, and good luck sorting through all of that, right? Now you have the potential, if you can get your service bot good enough, to at least resolve a lot of relatively important queries without having to make the customer wait until the next day. Now, granted, there will be some complex issues that still can't be resolved that way, but I think it's really interesting, because you say, this is something we couldn't do before, and now we can do it. The one thing, though, that I do like to point out to people is that you can't just look at accuracy rates. You also have to keep in mind that some of the outliers may be unorthodox and different from what a human mistake would be. In the finance domain, for example, you could have a seven-figure mistake. Now, you can account for that with certain rule-based audits of those systems. But just to give you a couple of examples, one from the news headlines: a Tesla vehicle ran into a plane because it had never encountered planes in its training material before. Okay, that's an outlier, but that's also an outlier that a human would not produce. And so when you plan for this, you have to think about what those outliers might look like.
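The rule-based audit Jon mentions for catching a seven-figure mistake can be sketched as a deterministic bounds check layered on top of the model: anything far outside historical norms is blocked for human audit, regardless of how confident the model is. The payment data and the four-sigma threshold are illustrative assumptions:

```python
from statistics import mean, stdev

# Minimal sketch of a rule-based guard against "unorthodox" outliers:
# a deterministic check that does not trust the model's own confidence.

def outlier_guard(proposed_amount: float, historical_amounts: list[float],
                  max_sigmas: float = 4.0) -> bool:
    """Return True if the proposed amount should be blocked for human audit."""
    mu = mean(historical_amounts)
    sigma = stdev(historical_amounts)
    return abs(proposed_amount - mu) > max_sigmas * sigma

history = [1_200, 950, 1_100, 1_300, 1_050]  # typical past payments
blocked = outlier_guard(2_500_000, history)   # a seven-figure anomaly → blocked
```

This is the key asymmetry in the Tesla-and-the-plane example: the guard is not smarter than the model, it simply encodes a norm a human would never violate, which is exactly where machine mistakes differ from human ones.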
Another example, just from my own life, is more recent. I had a transcript, and I use machine-generated transcripts in my work, and it inserted the word Holocaust in the transcript. Now, a human transcriptionist would not have done that. They would've understood the meaning of that word and said, there's no way that the word Holocaust occurred in Jon's interview about an AI project. It just didn't occur, right? But the machine doesn't understand that term. That's not a career-ending mistake, but I didn't want that in my copy. So you have to keep in mind that sometimes the outliers can be a little unusual, but with thoughtful use case design you can accommodate that. And it's just important to remember, 'cause some people are like, humans and machines all make mistakes. Yeah, but some of the machines make different kinds of mistakes.
Andreas Welsch: So to give you an example: a couple of weeks ago, I was looking to book a rental car abroad in Europe, and I wanted to find out whether I could legally drive the rental car outside of the country where I was going to rent it. And I couldn't find reliable information on the website. The only rental car company in the town where we're staying doesn't have a chatbot on their website. They point you to a PDF of terms and conditions, and you can read it. I was none the wiser afterwards, to be honest. So I could have called during business hours, during European business hours. I decided to drive to the next town over once we were there and rent with a different company, because, again, the customer experience really wasn't good. So in this day and age, even a basic chatbot, a chatbot based on gen AI that can understand what I'm saying, even if it's not exactly trained on it, would already work wonders, I'm sure. Let alone something that's agentic and that can help you with getting more information and so on.
Jon Reed: Yeah. I've done a couple of these use cases now with customer-facing service bots. And granted, you have to be comfortable with this in your industry, based on whatever regulations and oversight you might have. But one of my favorites that I did was at Salesforce Connections, with a company called Engine. And the reason was, it's a travel company, and I'm pretty wary of that, because I actually don't trust travel as an agentic use case, except for the most boilerplate travel arrangements, 'cause I'm very fussy, and I think most people are, about the nuances of travel. But what was really interesting is that Engine had a really nice use case, 'cause they looked at this and they said, let's start with reservation cancellation, because they handle over half a million inquiries a year, and a good chunk of those are cancellation requests. And they put the agent on those cancellation requests. But the whole idea, and this is a really important point, I think, for leadership to think about in your AI leadership theme, the idea wasn't, how can we reduce headcount in customer service? The guiding question was, how can we make our customer experience more seamless and better, and increase self-service? I think that's a really important distinction. And they also really liked the idea that this agentic AI bot would be much more human-centric and conversational, because so many of the service bots we've encountered online are just not that; they're much more robotic and very stiff. Can we have a more engaging form of that? And so they went ahead and did that. And their customer satisfaction scores, which they were monitoring, went up as a result. They're actually hiring more salespeople, even though it's helping them with some of the prospecting now. And on the call center side, did they release those call center folks when they were freed up? No.
And what they said is they're never gonna, at least in the foreseeable future, use agents for overbookings, because that's a really stressful, high-touch experience for a human. But they're gonna apply it to other stuff, and they said it allowed us to remove a lot of the less complex stuff through self-service interaction, leaving more room for the humans to resolve things. And what they're looking at doing with that human time is not getting rid of people, though I think they're happy about making them more efficient. They want to apply those humans to more high-touch and VIP clients to provide superior levels of service. I really love that, because I think it's all about using the technology in the right way, in a framework where the guiding question isn't how can we reduce headcount or be more efficient, but how can we serve our customers better. As grouchy as I am about travel bots and stuff, I just couldn't poke holes in that one. I thought it was great.
Andreas Welsch: I think so too. I read your writeup about it when it came out a couple weeks ago, and to me, that's such an encouraging story, and I hope that we're going to see many more of those, because I think leaders embracing that mindset realize that we actually have a lot of great expertise and human capital in our company already. So how can we use that experience and that expertise of how we work, how our company works, how our customers work, how we work in the industry, what makes us unique, and really apply that to human interactions and scenarios? And yes, fine, give the menial, repetitive things to a bot that maybe can still emulate some empathy and, again, increase customer satisfaction. But then, like in this example here, take these resources and team members and shift them over to activities that are high-touch, that are high-value, at the most significant clients and VIPs, where we generate even more revenue and so on. So I hope that we're seeing a lot more of that thinking, rather than, let's just automate and cut headcount, because at some point, you also run out of headcount to cut, and you've lost so much human capital and experience in your company that it'll be hard to keep that service level up.
Jon Reed: Yeah, and I think what you are gonna find is, if you do this right, you're gonna improve what you might call the cognitive load of certain employees. So you will have to hire fewer people, so there will be some operational benefits to come out of that. But I'm like you, I'm a little wary of this, because I think a lot of companies are gonna use this in more of a brute-force way, ironically, in concert with a lot of mantras around AI first. But if you combine AI first with a headcount reduction mentality, I think you're gonna create fear-based employees. And what you and I talked about last time was how the alternative is to energize employees, not just with a use case like we described, but to combine that with a sandbox environment that creates a culture of innovation, where employees can propose new uses of the technology that are validated and secure within your enterprise structure. And if you can do that, I really think you're gonna make inroads in whatever industry you're in, versus, I think, the more slash-and-burn approaches. And look, I understand the economic pressure companies are under, but I really steadfastly believe that if you use this technology right, you can grow the top line and be more efficient. I'm not gonna let go of that.
Andreas Welsch: Same here. And we've seen a lot of that in the tech industry. Coming out of the pandemic, you thought headcount reductions on the order of 10-15,000 at some of the large tech players were an exception. Well, they've occurred and recurred on an annual basis, and, maybe not quite at the same scale, on a quarterly basis. A couple weeks ago, we saw news from Microsoft, cuts after cuts, because of investments in AI to fuel those new areas, and to also free up capital to invest in building data centers and building infrastructure and so on. I think a lot of times you also need to see what it does to your company culture, to staff morale. You'll see it in your employee experience or employee surveys as one leading indicator. And I think, overall, tech might just be a leading industry in that sense, where it's, on one hand, a lot closer to the technology, to see how we can apply it. Let's assume we apply it with good intent to reduce cost, fine. There could be some ripple effects where one, two, three years down the line, we see this become more prevalent in more traditional industries as well, to see how we can replace labor with AI. Again, I would caution, and say, first of all, think about how you can augment human labor with AI to do more, or to do what you're doing better, and improve your customer satisfaction, as one example. But I'm just curious to see, there again, what are the ripple effects? And then also, with so much talent now on the job market, what are the new companies that we haven't seen yet that are going to emerge from this? Because there is so much excellent talent out in the market that knows how to build products and how to solve tangible needs in different industries and verticals.
Jon Reed: Absolutely. And I do wanna acknowledge, I think there's a fairly profound problem around how more junior-level folks are gonna progress in these AI environments. And I think there may be an opportunity to have AI mentors bring junior folks up to speed, to an extent. I'm a little bit wary about exactly how all of that plays out, but I wanna get to a couple more leadership myths, since you've been focusing on that. These have been bothering me a lot lately. One of the myths is that we don't need experts. Yes, some of these systems do have expert repositories. For example, ChatGPT probably knows more about certain kinds of medical practices than I do, 'cause I'm not a doctor. But this notion that we don't need domain experts, I think, is absolutely ludicrous, and I'm gonna tell you about a use case that illustrates that. Also, AI does not replace human creativity. This is very bothersome for me, because I've seen it again and again on these lists of skills that you don't really need anymore. And I heard it on this thing today on AI leadership on YouTube, with about a thousand attendees, where they were like, AI does the creativity, and humans are editors and curators. That's wrong. Every company needs creative ingenuity to succeed. And the most compelling stuff that gets created, whether product design or content for marketing, is gonna come from humans. Now, AI will do a fair amount of content generation of more routine content, like FAQs and product descriptions and all of that. And I would acknowledge that humans perform the editing function on that content. But I don't consider that true creation. I think that's a very generous definition of the word creation. That, to me, is content generation. And there's a difference between content generation and true human ingenuity. And again, studies have shown that these systems aren't creating truly ingenious ideas. They really help with brainstorming, which I'm gonna get into in a minute.
But they help with brainstorming by going through all that training data and throwing things at you that you might not have thought of before. They're not coming up with new and novel approaches to business yet. Maybe someday they will, but they're not right now.
Andreas Welsch: Look, to me, the analogy that comes to mind is a hammer and a chisel. First of all, the hammer and chisel don't build the statue; it's the artist. Even if you can hold a hammer and a chisel, and even if you can do the work, it doesn't mean that you are the creative who can mold and shape it the way only the artist can, in that sense. Plus, I think the other question is, do we really just want to become reviewers and editors? A couple days ago, I saw one of the leading AI influencers post about how AI can generate a thousand versions of a website. Now I need an AI to sift out the 999 that I don't like. It's not just a problem of generating; I think it becomes a much harder problem of making decisions. And again, we're back to human feedback, human preferences. And if you've ever been in a room with a few creatives and looked at different marketing copy, different images, colors, logos, what have you, a lot of times people have different opinions, right? So figuring out what it is that we actually want to do still takes a considerable amount of time. It doesn't matter if you generate five versions of it or a thousand; it probably just gives you more crap to sift through and say, not that; okay, here are the three that are okay. But did you really need a thousand to get there? So the creation, I think, in that example is not so much the problem as filtering out what is the noise and what's the actual signal.
Jon Reed: Yeah, for sure. And I think you make a really good point there that we get back to: what is your vision for your company? What kind of AI do you want to have? What do you want to cultivate amongst your employees as well? And I think when you step back, what you're gonna find is that if you take an honest look at this technology, you're gonna see that it's not in a position to do everything that you wanted it to do in terms of creative stuff. But like I said, there is a role for content generation and all of that, and I think that's a good role for generative AI to play.
Andreas Welsch: I have a question for you, if you don't mind me asking. You've been an analyst for a longer period of time. You've been looking at the market, its different trends, different vendors, and so on. When has technology ever fully delivered on the promise that organizations, that vendors, believed it could unlock, right? So to me, there is a good amount of the hype cycle that we see in any technology, maturity, and adoption curve. So yes, approaching it with a sense of realism, that yes, all of this is possible and will be possible at some point, is good to see the big picture. But the question a lot of times also is, what is the very concrete, precise thing that we can do right now to explore this, to put this into production, to learn, and to do that with limited risk but a lot of learning, so we can accelerate and scale as we go along?
Jon Reed: And look, I'm a creative person by trade. Way before I was an enterprise person, I started writing creative stuff when I was just a kid. I wrote for heavy metal fanzines in high school. If I thought AI could do that stuff, I would be honest with you and tell you so; I would say I'm scared shitless because it can do what I can do. But the thing is, you just have to take a step back. Now, I want to tell you about a really interesting use case. This one is called "When AI Gets a Board Seat." It came out, let me just double check on this, on hbr.org, Harvard Business Review, and Esteban Kolsky first brought it to my attention because he's doing a lot of really interesting research on strategic intelligence for the boardroom. This use case is fascinating, and it highlights a couple of my critiques of these myths but also shows a really productive use case. In this case, what they did is they spent a year involving AI, so this would be a generative AI scenario, participating in boardroom-level decisions. One of the reasons I like this scenario is that I think it plays to a lot of LLMs' greatest strengths. It's less of a deterministic use case. It's more about things like creative distillation of ideas, brainstorming support, summarization, and interaction via the chat interface, which we discussed. The LLM becomes an advisor in this context. And there were a couple of really interesting things about it. They found that if the board just worked with the tool and asked questions, it didn't really work. They needed someone with critical thinking skills to distill their questions, put them to the LLM, and get feedback back. So they needed a bit of an intermediary to sort some of the feedback, but when they did that, they had some pretty provocative, useful results. And there was a really interesting aspect to it, which is that LLMs don't care about our feelings.
And so the LLM would propose things in challenging ways that didn't pay attention to politics. If you're on a board, you're thinking, oh, maybe I shouldn't say this because of the politics. No, it just put it out there. And they were making some pretty critically important decisions during that time, things like: where should we relocate a plant? Should we reconfigure our supply chain? They said the biggest advantage of ChatGPT, which was what was used here, was disrupting the natural flow of the meetings. They thought it might be clumsy and awkward, but the executives appreciated how it made them stop and think. And the team was aware that they had worked together for decades and needed that type of challenge. So this is an important cultural point, right? You have to be open to that. They had this great deliberation over whether they should close manufacturing facilities and what their external stakeholders would think about all of that, and AI helped them have a more complete and fact-based discussion, which I thought was really interesting. Now, I did have a couple of concerns about the use case. While ChatGPT was able to provide a lot of working estimations that were accurate enough to move forward, I think they might have gone a little far in terms of which plant should be closed. I think some of that has to be due diligence and so on. But in general, I thought it was a very strong use case. The only thing I had a bit of an issue with is that they asserted you need a critical thinker, but not necessarily an industry expert, to operate the tool. Now, I do agree that the critical thinker was important, because the critical thinker could flag things that were obviously inappropriate or not on the mark.
But the thing about these tools, and the reason why I say domain experts are still important, is that their output is getting sophisticated enough that only a domain expert is going to spot certain kinds of discrepancies. I might be trained in critical thinking, but if you show me medical charts generated by these tools, I'm not going to be able to tell you, oh, this one is obviously wrong because it missed this. Someone who is trained in that area, a domain expert, is really the only one who's going to spot that. So I think it's really important to understand that domain experts aren't being replaced just because these tools get headlines for passing the bar exam, or this exam, or that exam. Sure, they passed some static benchmarks, but that's very different from a seasoned domain expert with real-world expertise in your industry.
Andreas Welsch: So definitely a good myth to bust, right? Don't replace human expertise with AI. A couple months ago, I worked with a client in manufacturing. They said: we're exploring a new business area that we might want to go into. We've done a lot of research, we've accumulated a lot of data, and we have a board meeting coming up in a couple of weeks. Can you help us build an agent that can, first of all, digest all the information we have already gathered? And then, in the workshop, as the board members are having their conversation and deliberation, we can put that information into the agent, combine it with what it already knows, and get recommendations for why this would be more advantageous than something else, or what some considerations could be. I thought that was a really great idea and a really great use case: use AI on the data to find correlations, to find causalities that humans might not see immediately or might miss completely. But again, their point was: we do need a person to operate this, ideally somebody in our company or somebody who knows our industry, so they can ask the right questions, and so they know whether whatever ChatGPT or whatever agent you use spits out is actually accurate and makes sense. Yes, this is doable, right? Yes, in our industry we can source rubber and other materials from these different vendors, or it will take that long, it will cost that much. You can apply some critical thinking, to your point, but unless you have the domain expertise, it's really tough. We also see this in trainings, right? I do a lot of corporate trainings on how you bring AI into your business and how you use it properly. And if you do that with professionals, they have a frame of reference: what does good look like? What does accurate look like?
But what does good look like when I do the same with my undergrads at university? I said: use it as a thought partner to prepare for an interview. Ask it to ask you some questions and give you feedback on your responses. What do you think? How did it work? The answer from the last two semesters has been: good. I said, okay, why do you think it's good? Oh, it did what I wanted it to do. Okay, was the answer helpful? Was it complete? Was it, again, good? Yeah, it was helpful. So unless you give them a frame of reference, or, in the academic domain, a rubric, what are the different dimensions, what are the different intervals on a scale from bad to great, unless you do that or you have the domain expertise, it's really tough to judge: is that useful? Is that something we should actually be doing? Yes, critical thinking comes up every time. And I'm grateful that people are talking about this as critical thinking, because I think it had been on the decline anyway, even before AI. So it's more important than ever, but then you also need the domain expertise to say: does this actually make sense? Is this realistic?
Jon Reed: And I do want to re-emphasize one point from that really interesting boardroom use case. Even though I think it's incredibly important not to diminish human expertise, and the reason I'm hammering this is because there are some AI evangelists who are really diminishing this point, I do think, on the contrary, that you want your domain experts to be open to being challenged by these systems. I want my experts to be open to all kinds of new ideas, not just from machines but from their team. But they should also be very open to the machine surfacing a new idea or a new point of view that they had not considered, because these machines have been trained on vast amounts of information beyond the scope of what you might know. So I want to see that openness as well. And if you get that, I think in a way you have the best of both worlds. So I have one more to throw at you on the leadership theme.

Andreas Welsch: Yeah, do it.

Jon Reed: This one I think was really interesting. It dates back, I think I might have mentioned it briefly last time, to a debate I had with an HR leader who was leading some AI initiatives and talked about what skills they need. At first, he said, I don't need technical skills because I don't need to know how my iPhone works. And I challenged him and said, this is totally different from an iPhone. Especially in HR: maybe you can start with posting job descriptions, but once you get into things like performance reviews, assessing people's careers, succession planning, and so on, you'd better understand how it came to those decisions on who it screened in and who it screened out. You need to understand the technical architecture that got you to that point. He conceded the point and agreed with me. And this led me to a video online that I watched, which is a good video.
It was from MIT Sloan Management, called "The 10 Essential Leadership Traits from the AI Era." It's not a long video, but they had something really interesting in the comments, which led me to my "paradox of AI leadership" view. The comment said: what struck us most while editing the video was how unexpected the answers were. We thought we'd hear about data literacy and tech skills. Instead, we got playfulness, courage, and present futurists, and it made us realize the human side of AI transformation might be more complex than the technical side. I thought those were really good points. But I responded to the video, and I said: playfulness and courage are important, but if you don't have the technical and data literacy chops, you're going to make huge mistakes. There was a quote there about how, even more important than the tech, is imagining how this change will work. Without deep knowledge of the tech, you can't accurately rethink how it's going to impact work. You'll reimagine incorrectly, beyond what the tech can do. That's why AI leadership is a nearly paradoxical combination of bottom-line results, experimentation, excellent human leadership skills, and deep technical skills. Downplay the latter at your peril. So my exhortation to business executives is: by all means, become a more well-rounded, bottom-line, soft-skills business person. But do your homework on the tech side too, because you need a certain level of literacy there.
Andreas Welsch: In a way, what comes to mind is how you build models with Lego blocks, right? As a child, one of the most important things you learn is physics: one object and a second cannot be in the same place. You learn about gravity; you learn about things that can tip and fall. And I think in the same vein, you have to learn about AI as well, to your point.
Jon Reed: Yeah.
Andreas Welsch: That would be data literacy. If I don't know what some of the opportunities and challenges with AI technology are, I might build something that looks really great, but once I set it up, it falls flat, right? Not so great. By the way, I did an interview a couple weeks ago, and one of the last questions I was asked was: imagine it's 2040, and a new generation of teenagers, of new leaders, has come in. AI is everywhere. What do you think they will look at and say: that's what you thought about AI in 2025? Are you crazy? And what do you think it will enable? My thought, looking out 15 to 20 years, is that people are probably thinking: what do you mean, you used it to write your emails or to summarize meeting transcriptions? That's what you used it for? Are you serious? I think we'll see a lot bigger and hopefully more impactful things, to your earlier point, than summarizing text. Maybe 15 or 20 years from now, people will say: what do you mean, you had meetings between people? You didn't send your avatar?
Jon Reed: It's a fascinating future, and I think 15 years from now is enough time. I think there could be a mixture of very wonderful and scary things, and we could talk about some of that, maybe, if we do this again. But you're totally right that blowing the lid off today's limitations is sometimes totally good, and that's a really good exercise. It goes along with this: understanding the present is understanding where you're headed. You have to think about both. I think about this in the context of my town right now, which is going through a downtown redesign that I disagree with, because I don't think they've reckoned with the future of transportation enough in their design. And that's where it does get a little tricky. I would acknowledge that sometimes, in addition to today's limitations, you do need to think about what this will look like five or ten years from now, so you don't build a totally immature structure.
Andreas Welsch: Let me throw out another analogy. When I thought about 2040, this answer came to mind: I would also hope that by 2040 we've internalized enough of what AI can do, what the do's and don'ts are, much like we know now what to do and what not to do with electricity, right? It can do many things. It powers the computers we're having this conversation on, and many more things people didn't imagine when electricity was first conceived and deployed. But we also know, and we teach our children from a very early age, what you don't do, right? You don't stick a screwdriver in the power outlet. You do that once, but probably not twice. So don't even do it the first time. In a similar capacity, I think we need to educate and enable a lot more. Some of the recent US government funding activities around bringing AI to K-12 education might go in that direction and help build and increase that literacy. But I think it has to start at a much earlier age than when you're at working age or already in a company. Anyway, think about it like electricity: what are the do's and don'ts that, ideally, by 2040 we've figured out and taught our children?
Jon Reed: So I know we've got to wrap shortly, but I want to throw one thing at you before we do. I threw a bunch of different things at you, use cases and so on. Digest that for the listeners: what are your general takeaways from our discussion so far today?
Andreas Welsch: Sure. There is a good amount of hype in the market, and hype is great. We need hype to get excited, to think big, to dream big, and to envision what could be. But just dreaming and envisioning and fully buying into that vision isn't everything. We need to start somewhere and figure out: what is the one practical thing we can do today to learn, to grow, to figure out how this works, how it behaves, what all of this means, and what the ripple effects are if we bring this piece of technology into our business? In that process, there are a lot of myths that you will be able to bust. And that's the exciting piece that lies ahead of us, right? Nobody has figured it all out yet; maybe individual bits and pieces. So it's a matter of bringing these bits and pieces together as we see them, as we hear them, as we learn about them, and then deciding: what are the things we want to adopt in our company? What do I want to do with my team? How do I want to guide them? I think that's the challenge ahead of us as leaders, as people, whether you work independently or in a business, and that's the exciting part to me.
Jon Reed: I love it, and if we do this again, I think we may want to talk a little more about the AI readiness themes you brought up, in terms of how you get to that point. The final practical thing I'll add: I think your advice is perfect, but when you do that, unless you're a team with really deep data science and LLM sophistication and an understanding of things like RAG architectures and agent tool calling, bring in trusted vendors to help you, because that's one of the biggest places you're going to get into trouble. If you go on YouTube and watch videos on things like RAG and agentic system design, you're going to be blown away by how complex some of these architectures are, and what moving targets they are as well. One day it'll be one thing, and then today I just saw a new video from one of my favorite follows on YouTube on hierarchical agentic reasoning, so not just reasoning, but hierarchical reasoning. These things are moving so fast. I would just say: unless you are on top of the world with these capabilities internally, please look at involving vendors, both new and old, that you trust, and pull them into these conversations.
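[Editor's note: For readers unfamiliar with the "agent tool calling" pattern Jon mentions, here is a minimal sketch of the core loop. All names (`get_inventory`, `fake_model`, `run_agent`) are hypothetical illustrations, and the model is a stub standing in for a real LLM API; production agentic systems layer retrieval, error handling, and guardrails on top of this basic shape.]

```python
def get_inventory(plant: str) -> str:
    """A hypothetical business 'tool' the agent is allowed to call."""
    return f"{plant}: 1,200 units on hand"

# Registry of tools the agent may invoke by name.
TOOLS = {"get_inventory": get_inventory}

def fake_model(prompt: str) -> dict:
    """Stand-in for an LLM: decides whether to call a tool or answer."""
    if "inventory" in prompt and "RESULT:" not in prompt:
        # Model "requests" a tool call instead of answering directly.
        return {"type": "tool_call", "name": "get_inventory",
                "args": {"plant": "Plant A"}}
    return {"type": "answer",
            "text": "Based on the tool result, stock levels look healthy."}

def run_agent(question: str) -> str:
    prompt = question
    for _ in range(5):  # cap iterations so a confused agent can't loop forever
        step = fake_model(prompt)
        if step["type"] == "tool_call":
            result = TOOLS[step["name"]](**step["args"])
            prompt += f"\nRESULT: {result}"  # feed tool output back to the model
        else:
            return step["text"]
    return "Gave up after too many steps."

print(run_agent("What is our inventory situation?"))
```

Even this toy version hints at why Jon flags the complexity: real systems must decide which tools to expose, validate the model's arguments, and handle the feedback loop safely.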
Andreas Welsch: That's a big opportunity, right? You can scale very quickly, you can bring in expertise where you need it and augment what you have, and you can upskill your team along the way, too.
Jon Reed: And don't rule out independent advisors, either. I've always been an advocate of independent advisors on projects, and I would include that for AI as well, because it is really helpful to have someone who has less of a direct, ongoing financial incentive in your project than a major vendor does, and who can come in for a much more modest amount of money and provide a different view of things. And yes, there's more politics to manage around that, which we could discuss at some point, but it's highly beneficial to have that as well. If you have those pieces in place, I like your chances of following the advice you just gave.
Andreas Welsch: Wonderful. Jon, it's been great having you on again. I really appreciate the free-flowing discussion; we've covered so much ground over the last 50 minutes or so, and I can't wait to do this again. I really enjoy these conversations and hearing what you're seeing, how you think about this, and where things are moving. For those of you in the audience, I hope you appreciate it in the same way. Jon, thank you so much.
Jon Reed: Many thanks. Looking forward to the next one.