
What’s the BUZZ? — AI in Business
“What’s the BUZZ?” is a live format where leaders in the field of artificial intelligence, generative AI, agentic AI, and automation share their insights and experiences on how they have successfully turned technology hype into business outcomes.
Each episode features a different guest who shares their journey in implementing AI and automation in business. From overcoming challenges to seeing real results, our guests provide valuable insights and practical advice for those looking to leverage the power of AI, generative AI, agentic AI, and process automation.
Since 2021, AI leaders have shared their perspectives on AI strategy, leadership, culture, product mindset, collaboration, ethics, sustainability, technology, privacy, and security.
Whether you're just starting out or looking to take your efforts to the next level, “What’s the BUZZ?” is the perfect resource for staying up-to-date on the latest trends and best practices in the world of AI and automation in business.
**********
“What’s the BUZZ?” is hosted and produced by Andreas Welsch, top 10 AI advisor, thought leader, speaker, and author of the “AI Leadership Handbook”. He is the Founder & Chief AI Strategist at Intelligence Briefing, a boutique AI advisory firm.
Decoding the Hype with Real Insights on Agentic AI (Peter Gostev)
What if you could cut through the noise of AI hype and truly understand agentic AI's potential?
In this episode, host Andreas Welsch engages with Peter Gostev, Head of AI at MoonPig, to explore the practical implications of AI agents in business. Together, they delve into the key distinctions between genuine agentic capabilities and simple workflows, as well as meaningful applications like research and coding agents.
In this episode, we discuss key issues shaping AI adoption:
• Every company seems to be launching an Agentic AI product. How can leaders avoid getting fooled by marketing buzzwords?
• What defines a true AI agent, and how does it differ from a simple LLM-powered workflow?
• AI job titles are shifting—but do you need to update your title with every latest development?
• Who’s experimenting with AI agents, who’s rolling them out at scale, and which companies have already been using them for a while?
Whether you're navigating the complexities of AI in your organization or simply curious about its future impact, this episode is packed with insights that can help you make informed decisions.
Ready to separate fact from fiction in AI claims? Don’t miss this enlightening discussion—tune in now!
***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.
Level up your AI Leadership game with the AI Leadership Handbook:
https://www.aileadershiphandbook.com
More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter
Andreas Welsch: Today we'll talk about how to avoid getting fooled by AI and agentic AI claims, and who better to talk about it than someone who passionately posts a lot of information about that and gives actionable advice to the community: Peter Gostev. Hey Peter, thank you so much for joining.
Peter Gostev: No, thanks a lot for having me.
Andreas Welsch: Hey, I've been following you on LinkedIn for quite some time, and I'm always amazed and inspired by the posts that you share, because I see that you take a really pragmatic view on things that are happening, and you also give some really good advice. But for those of you in the audience who don't know Peter yet: Peter, maybe you can tell us a little bit about yourself, who you are and what you do.
Peter Gostev: Yeah, sure. So, apart from writing here on LinkedIn, I've got a day job as well. I lead AI at MoonPig. MoonPig is an e-commerce company; we offer greeting cards and gifts, which is actually a really interesting space to apply AI, because apart from all the usual use cases, we've got images and maybe video, audio, music. Not that we've done a lot of that yet, but it gives us a lot of extra dimensions to apply it to. And before that, I was working for a bank, where the types of use cases we were looking at in the AI space were much more the kind of corporate ones you'd expect.
Andreas Welsch: Hey, it's great to have you on the show. So thank you so much. Peter, what do you say? Should we play a little game to kick things off?
Peter Gostev: Yeah, sure. Let's do it.
Andreas Welsch: Alright, so this one is called In Your Own Words. When I hit the buzzer, the wheels will start spinning, and when they stop, you'll see a sentence. I would love for you to answer with the first thing that comes to mind, and why, in your own words. To make it a little more interesting, you only have 60 seconds for your answer. Are you ready?
Peter Gostev: Sure. Let's do it.
Andreas Welsch: Okay, then let's do it. If AI were a book, what would it be? 60 seconds on the clock. Go!
Peter Gostev: Yeah, I feel like it would be a combination of the craziest fact-based book that you would ever read. The reason why I say that is that a lot of the things that are actually happening now, we're used to them now, but if we had heard about them five years ago, you'd feel like they were completely insane. So I feel like half of it is a factual story, but real, and the other half is a lot of hype and not very well-founded information as well. So yeah, I feel like you'd get really lost trying to make sense of this book.
Andreas Welsch: I really like that idea. It's indeed a good mix of everything that captivates you, but also keeps you wondering: how far have we come, and how far do we yet have to go? I really like that. Thank you so much. Great answer. But obviously that's not all we want to talk about today. If you look at what's happening in the market, there's certainly a lot of hype and a lot of buzz around this topic of AI agents or agentic AI. All of a sudden, we're not just pointing and clicking anymore. We're not just entering information in form fields. The idea is that we have a more natural conversation with this software, with this piece of technology, and we can give it a goal. It goes and figures out what needs to be done, researches information maybe. Some agents are connected to the internet, some are more connected to your own database. But overall, it seems that over the last quarter, one headline has chased the next. So I'm wondering: we're in April, you know, April Fools' Day. But who's the fool here, with all this information? What do you make of that?
Peter Gostev: Yeah, it's definitely very hard to make sense of what's going on, and of what is good information and what is bad information. So I do always try to sense-check: am I being a fool? Am I seeing something that isn't there, or maybe making judgments that are wrong? I definitely don't feel like I'm on firm ground where I know exactly what's going to happen and so on. So I think we've all got the potential to be fools, but there are two extremes. One is that you get carried away by everything that's being claimed, the hype around agents, that we're all about to get AGI, and all of that. That's one direction. But there is equally another direction, where you just assume that everything is nonsense and you don't actually take any of it into account. The only antidote to that that I can think of is that you just have to try things and get an intuition for what's actually real or not, and that's a hard thing to do. Whenever I write, I do try to stick to the rule of not writing about something if I haven't tried it. It is a bit hard to stick to, just because of the volume of things that are going on, but there have definitely been some very hyped tools that I haven't written about just because I haven't tried them. There's no point in me hyping them or criticizing them if I have no idea what they're actually like.
Andreas Welsch: That sounds like a very sensible approach. In many conversations, it seems to come down to that as well. First of all, see if you can filter the information: what is really relevant to you right now, in your industry, in your business, and what's all the other noise that vendors are talking about, that management consultancies are talking about, that just happens to flood the news. I also really like the tangible advice to see if you can try it out for yourself. And I think sometimes that's not even as easy as it sounds for many of us. When you're in a leadership role, or in your daily business, and you're rushing from meeting to meeting to meeting, at the end of the day you're wondering: what did I do today other than be in meetings or talk about things? So being able to sit down and have that time to experiment is, to your point, critically important. We all need to make that time, so we can assess for ourselves: is this real? Is this something useful? Or is somebody just making big claims that maybe a quarter or two down the line won't hold up?
Peter Gostev: And, you know, I think we're really lucky that the field of AI is actually very accessible. If you compare it to some other previous trends that we had in tech, such as cloud, you can't really sit down and try cloud; that's not really a meaningful thing that exists. So before, if you were to just read about cloud, that was quite reasonable: okay, I've read some consultant reports or something, that's good enough. But with something like using language models or image models to see what's possible, there's really no good reason why you shouldn't just try it instead of just reading about it. And another dimension to it is that there is really nothing in our previous experience that can give us intuition for what it can do and what it cannot do. It's completely new; it's like a new alien being. So how could you guess? It's completely unreasonable to expect anyone to have any intuition for how it works. So combine those two things: it's completely new, and it's actually easy to try. You really have to try it. There's no other way.
Andreas Welsch: What you said deeply resonates with me. I remember a little more than 10 years ago, when cloud was the big trend that came up. Yes, there was a lot of reading and learning about it: what is infrastructure, platform, software as a service? But it also seemed pretty standard, right? Pretty cookie-cutter. Either you rent the bare hardware, or you get a little more on top if you're a developer, or you get the application. But it was pretty much the same concept for any vendor. So, great point that you have to experiment with it, because it's moving very, very quickly, and, to your point, you cannot really talk about it or predict where it's going unless you've seen it yourself. But I think that brings up another point. Having been in the tech industry myself, and having seen firsthand how large technology companies go to market, how they do their marketing, how others in the industry act, I feel like all of a sudden every vendor has an agentic AI product. So how can leaders avoid getting fooled, or even tell what's a real agent versus just an LLM or a workflow with some AI? What are you seeing, or what's your recommendation there?
Peter Gostev: Yeah, the biggest problem there is the lack of clarity of definition around what is meant by agents. Whenever we hear the word agents, I would tend to assume that it's closer to exaggerating capabilities than understating them. And the reason for that is that with any hype-based term, people always try to jump on the trend and use it to describe what they have. There are definitely several different flavors of what is meant by agentic, and everyone seems to claim it. So the big distinction I would draw is whether the use case is defined within one specific workflow, with maybe small deviations, or maybe three ways it can go; that could be one category. The second category is: does it have the freedom to go and do things in a more open-ended space? There are probably many more variations to this, but that's the core distinction. In my mind, my personal definition, where I would draw the line, is that I would not call systems with a strict workflow agentic, even if there are many different steps they might go through. And the reason is that, if we forgot this term existed, you could have built these kinds of systems two, three years ago, and people did, right? People stitched together different steps and got a good answer. So there's not really any reason to invent new terminology to describe workflows which have AI steps in them as agentic. I think the only reason to call something agentic is when it truly has some capacity to go beyond a predefined workflow: make certain decisions about what it should do, use multiple steps and different tools, go back on itself. That sort of thing is agentic. So coming back to your specific question of how you distinguish: I think it's establishing that distinction. Is it a workflow that someone has defined and built, or is it truly going around and doing things on its own? And by the way, that doesn't mean you always want it to be an agent. I actually think a lot of the time you don't want it to be an agent. But that's at least a somewhat clear line you can draw: which one is it?
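(A minimal sketch of the distinction Peter draws, in hypothetical Python; call_llm, the tool names, and the prompts are illustrative placeholders rather than any vendor's API. The first pattern is a fixed workflow with LLM steps, where a developer decides the control flow; the second is an agentic loop, where the model picks its next action.)

def call_llm(prompt: str) -> str:
    # Stub standing in for any hosted chat-completion API; returns a
    # canned reply so the sketch runs end to end.
    return "ANSWER: stub reply -- wire up a real provider here"

# Pattern 1: a fixed workflow with LLM steps. The sequence is written
# by a developer; the model never chooses what happens next.
def summarize_and_classify(ticket: str) -> dict:
    summary = call_llm(f"Summarize this support ticket:\n{ticket}")
    label = call_llm(f"Classify as 'billing', 'bug', or 'other':\n{summary}")
    return {"summary": summary, "label": label}

# Pattern 2: an agentic loop. At every step the model decides which
# tool to call next, or to stop, so the path through the task is
# open-ended rather than predefined.
TOOLS = {
    "search_web": lambda q: f"<results for {q!r}>",  # stub tool
    "read_file": lambda p: f"<contents of {p!r}>",   # stub tool
}

def run_agent(goal: str, max_steps: int = 10) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        decision = call_llm(
            "Reply 'search_web: <query>', 'read_file: <path>', or "
            "'ANSWER: <answer>'.\n" + "\n".join(history)
        )
        if decision.startswith("ANSWER:"):
            return decision.removeprefix("ANSWER:").strip()
        tool, _, arg = decision.partition(":")
        result = TOOLS.get(tool.strip(), lambda a: "unknown tool")(arg.strip())
        history.append(f"{decision} -> {result}")
    return "step budget exhausted"

(The point is the control flow, not the details: in the first pattern you could have stitched the steps together years ago, as Peter notes; in the second, the trajectory depends on the model's own decisions at run time.)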
Andreas Welsch: I really like the part you said about determining: is it a straightforward workflow, something you could have done already a couple of years ago, or is there more of this variation, this research, this reasoning to some extent in there, which then really makes it agentic, or more capable, more intelligent in that sense? But I also agree with you: it's very easy to conflate different terms, or for vendors to subsume existing capabilities under this new term agentic and say, hey, we've had this all along. So I really like your recommendation: think about what this thing actually does. Is it a straightforward workflow? Is it more reasoning, is it more complex? And look behind the curtain, basically. Now, I saw on LinkedIn that your title is Head of AI, not Head of Gen AI, not Head of Agentic AI. What does that mean? Does it mean you're not working on the latest stuff, or should we not get fooled by titles? What's your recommendation to other AI leaders who might also have a title of Head of AI, but not Gen AI or Agentic AI?
Peter Gostev: Yeah, so the way we split it within MoonPig is that we have a data science team, which does machine learning and also data architecture and all of that really important work. And we also have the AI team, which is my team; we could have called it Gen AI. The focus there is much more on taking, mostly, APIs (not entirely, but to simplify it: taking APIs) and applying them somewhere in the business. I think a mistake a lot of businesses are making is that they take their data science team, who are machine learning PhDs, and say: by the way, can you just stop doing your PhD stuff, can you call this API and, you know, get the model to respond with JSON? It's not necessarily their skill set anyway, and you're also wasting the thing that they're actually good at. Now, that doesn't mean they can't do it; obviously they're smart people, and there have been a lot of successful transitions. But I don't think we should assume, just because it's vaguely in the same space, that we should stop one to do the other. So that's why we have a split. We collaborate on things where it makes sense, but essentially we wouldn't say: oh, by the way, you used to do, I don't know, the pricing algorithm this way; can you stop that, and let's do some LLM stuff? It doesn't really make sense. With Gen AI, what we are focused on much more is building software with the latest technology. And to your point about agentic: we do experiment with things, and we certainly experiment a lot whenever new models come out; we always test and see what new capabilities we can bring. With agentic, I would say we have really struggled to find anything where we would want to put anything highly agentic inside MoonPig. And the reason for that is that whenever we have some workflow or some specific thing that we want to achieve, it's not very clear why you would want to introduce any ambiguity into that process. At this stage of our development, we still have quite a lot of opportunities where we would say: okay, we've got, let's say, some data here or some interaction here; our current teams go through these steps to process the data somehow, put it in this system, and then put it in that system. There are quite a lot of use cases along these lines which I guess you could design to be agentic, but I don't really see the point. Why wouldn't I just say: take it from this system, output it in this way, put it into that system? There are enough problems as it is in terms of making sure that works properly. It feels like we would need a really, really strong benefit to say: oh, by the way, it is now agentic, so it can decide whether it should go here or there. And maybe that's because we're not such a crazy complicated business; maybe we just don't have that need. But even in a bank, if I go back to my banking days, I don't see a lot of reason why you would design it in that kind of more free-flowing way. That's why I think it's definitely not clear that, at least for internal use cases, for your own business processes, at this stage of development, agentic is necessary at all. I could be wrong. I could be surprised, but at this point I don't see a lot of reason to really go down that route.
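(As an aside: the kind of task Peter describes handing to a Gen AI team, calling a hosted model and getting it to respond with JSON, often reduces to a prompt plus a validate-and-retry loop. A minimal, hypothetical Python sketch follows; call_llm and the field names are illustrative placeholders, not any specific provider's API.)

import json

def call_llm(prompt: str) -> str:
    # Stub standing in for a hosted chat-completion call; returns a
    # canned reply so the sketch runs end to end.
    return '{"customer": "A. Smith", "product": "card", "issue": "late"}'

def extract_order_fields(email: str, max_retries: int = 3) -> dict:
    # Ask for strict JSON and re-prompt if parsing or validation fails.
    prompt = (
        "Extract 'customer', 'product', and 'issue' from the email below. "
        "Respond with a single JSON object and nothing else.\n\n" + email
    )
    for _ in range(max_retries):
        raw = call_llm(prompt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            # The model returned prose or malformed JSON; ask again.
            prompt = "Your last reply was not valid JSON. " + prompt
            continue
        if isinstance(data, dict) and {"customer", "product", "issue"} <= data.keys():
            return data
    raise ValueError("model never produced valid JSON")

(Many providers now offer a native JSON or structured-output mode that makes the retry mostly unnecessary, but cheap validation like this is still worth keeping.)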
Andreas Welsch: Well, there are so many good points in those last few minutes. I think what I heard was, on one hand: don't focus so much on the title. Rather, look at the outcomes and the output you can deliver, and go and experiment with new tools. It doesn't matter if you're Head of AI, Head of Gen AI, or Head of Agentic AI; what matters is your team's mission and having great, skilled people that can help you deliver it. The second part, around agents, is honestly something I sometimes wonder about as well. If I heard you right, you said: why introduce something that is not necessarily reliable into a process, maybe a process where you need a higher level of reliability, or where a straightforward automation would give you repeatable results? Same input, same output, every time, not most of the time or just sometimes. On one hand, I see companies looking for exactly that. That's why we have processes; that's why things are coded and supported in workflow tools or other kinds of enterprise software. Yet we're introducing that level of ambiguity just because we can do more complex things with agents. And if we don't really know how they work, are they repeatable every time? Do they produce the same output, the same quality of output, every time? I can certainly see the concerns you mentioned. Which actually brings me to my next question, and that's: who's fooling around with agents, to stay with the term? Who's rolling them out at scale? Who's had them for a while? I'm sure you attend different conferences and talk to other professionals in the area. Who's actually doing something beyond the marketing buzz of "this is here, this is here to stay, this is awesome, try it out"? Who's actually doing it?
Peter Gostev: Yeah. So among the best ones that we see publicly, I would say a really good application of that ambiguity is when you don't actually know where to look for information. Research is a genuinely good application. The reason is that, as I said earlier, if you've got a clear process with inputs and outputs, you don't need it; when you actually don't have that, it's great. So deep research is an obvious example, where it's constrained in the sense that it just kind of goes around and browses the web, but it's also unconstrained in the sense that it can go anywhere. That, I think, is an excellent application, and research is uniquely well positioned in terms of exploring different areas. Then we've seen coding agents being the other popular category. This one is interesting in terms of how successful they're going to be. The key point there, and I've seen some examples of them doing it, is whether they close the loop: being able to test, to write unit tests, or actually click the buttons on the website, for example, if that's what you're building. Then I think the agentic approach could be great. I've also seen some bad examples of it. Personally, I don't find the Claude 3.7 Sonnet kind of approach to agency very good; it just seems to go around in circles, it does way too much, and it goes off the rails. So that's an example of how hard it is to get the balance right: you want the agents to be proactive, but if they go too far that way, they go off the rails as well. And we've obviously seen a lot of the MCP hype going around as well. It's hard to say how much of it is actually getting real traction versus people just trying things out, which is great, they should. But it's also hard to say: has this really properly taken off? Would we still talk about it six months later? It's hard to say for sure.
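(The "closing the loop" idea Peter mentions for coding agents can be sketched in a few lines: generate a candidate, run the tests, and feed failures back until they pass. Hypothetical Python again; call_llm is a placeholder, and the pytest invocation is just an illustrative choice of test runner.)

import subprocess

def call_llm(prompt: str) -> str:
    # Stub standing in for a hosted chat-completion call.
    return "def add(a, b):\n    return a + b\n"

def fix_until_green(task: str, max_rounds: int = 5) -> str:
    # Regenerate code until the test suite passes: the closed loop.
    feedback = ""
    for _ in range(max_rounds):
        code = call_llm(f"Task: {task}\n{feedback}\nReturn only Python code.")
        with open("candidate.py", "w") as f:
            f.write(code)
        result = subprocess.run(
            ["python", "-m", "pytest", "-q"],  # run the project's tests
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            return code  # tests pass; the loop is closed
        feedback = "Tests failed:\n" + result.stdout[-2000:]
    raise RuntimeError("agent could not make the tests pass")

(Without the test run, the loop has no ground truth and the agent can only guess whether its code works, which is why Peter singles out closing the loop as what makes coding agents viable.)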
Andreas Welsch: Yeah, so maybe also, to some extent, give it a little more time to settle, have more people explore it and poke holes into it, and then you see how real and how durable it really is, in a sense, right?
Peter Gostev: Yeah. And one thing about agents as well: the sense that I get is that people kind of imagine that because we've built, for example with MCP, a connector, there's now the capacity to go and interact with different tools, which is fundamental. If you don't have the pipes, if there's nothing you can connect to, then it doesn't matter how clever the model is; you're still constrained. Then we've got coding agents that can go around and build their own software autonomously, all of that. I think what people are forgetting is that giving access doesn't mean the models are good enough to use it. I do get the sense that we're getting a little bit carried away about how good the models are. I think the models are just not that good. Take even silly examples, like Claude playing Pokemon. Okay, it can mechanically play, and it's obviously a toy, not that it's useful, but is it actually good at it? And the answer is no, it's pretty terrible. It is very interesting, but that's the state we're in: we have the technical capability to let Claude play Pokemon, but it's awful. So I think we need to calibrate the two sides of the hype. It's very exciting in the sense that you can kind of feel the AGI, you know, feel the potential, but it's also not there yet; it's still missing the actual capability. So it's exciting, but when you get excited about this stuff, you should wear the hat of "I'm letting myself imagine the future" rather than "I'm going to actually build this tomorrow," because we are definitely not in that space.
Andreas Welsch: I think that's a very pragmatic lens through which you're seeing this, and a good one to recommend to others as well. It's certainly exciting, all the things that AI and technology are now able to do that we haven't been able to do before, but there's always a little further we can go. Now, Peter, thank you so much for sharing all your insights so far. I was wondering if you can summarize the three key takeaways for our audience today on how not to get fooled by agentic AI claims.
Peter Gostev: Yeah, sure. So the first thing about agents is that you need to be clear whether they're truly agentic, meaning they have the capacity to go make decisions and operate out there in a more free way, or whether they're really a specific, predefined workflow. That is the clear distinction I draw in my head. The second point is that, as far as I'm concerned, agentic AI is a little bit too soon. While we do have really good applications, such as research and coding agents, as a generic capability we're a little bit early, and we're just building the pipes with things like MCP and so on. And the third point is that the only way for you to know where we are is to test it and try it yourself, and definitely not to rely on what anyone is writing, including me. The only way for you to really know, and to have your own intuition, and that's really the important part, your intuition for how well it works, is to actually try these things for yourself.
Andreas Welsch: I love that. That's very practical, very tangible advice. Peter, thank you so much for joining us today and for sharing your experience with us. It was a pleasure having you on.
Peter Gostev: Oh, brilliant. Thank you. This was fun.