What’s the BUZZ? — AI in Business

Making AI Agents Reusable Across the Enterprise (Samantha McConnell)

Andreas Welsch Season 5 Episode 2

When everyone is building agents, stop rebuilding the same capabilities over and over. Standardize and reuse common features across your business.

In this episode of “What’s the BUZZ?”, Andreas Welsch sits down with Samantha McConnell to discuss how large enterprises can build reusable AI agents that create real business value. The conversation moves beyond vendor claims to examine how organizations operationalize agentic AI, manage rapid innovation cycles, and balance empowerment with governance.

Samantha shares how Cox approaches AI through centralized hubs, agent registries, and differentiated governance models for individual productivity agents versus enterprise-scale solutions. The discussion also highlights why adoption is critical, and why many AI agents will have much shorter lifecycles than traditional software products.

Catch the BUZZ:

  • Preventing reinvention through AI hubs and agent registries
  • Governing enterprise AI agents without slowing innovation
  • Managing the lifecycle of rapidly evolving AI agents
  • Measuring adoption and business impact, not just usage
  • Connecting agent initiatives to clear business success metrics
  • Using a land-and-expand approach to scale agentic AI responsibly

Key Takeaways:

  • Balance innovation and control by tailoring governance to agent scale and risk
  • Design for faster time-to-value and shorter solution lifespans
  • Define outcome-based success metrics before deploying AI agents

A practical episode for leaders focused on turning agentic AI from experimentation into repeatable, enterprise-ready impact.



***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.


Level up your AI Leadership game with the AI Leadership Handbook (https://www.aileadershiphandbook.com) and shape the next generation of AI-ready teams with The HUMAN Agentic AI Edge (https://www.humanagenticaiedge.com).

More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter

Andreas Welsch:

Welcome back for another episode of What's the BUZZ?, where leaders share how they have turned AI hype into outcomes. Today, we'll talk about how to build reusable AI agents that you can leverage in your enterprise. And who better to talk about it than someone who's actually working on that: Samantha McConnell. Hey Samantha, thanks so much for joining.

Samantha McConnell:

Of course. Thank you for having me.

Andreas Welsch:

Hey, we met in Austin at Generative AI Week in November, were on a panel together, and had a chance to talk a little bit about what you're doing. So I'm super excited to have you on the show today. But maybe for our guests in the audience who might not be as familiar with you yet, can you share a little bit about yourself, who you are, and what you do?

Samantha McConnell:

Absolutely. Yep. I'm Samantha McConnell. I am a director of AI strategy and product management. I am part of our centralized AI organization, the AI and Innovation Squad, here at Cox Communications, which is a telco.

Andreas Welsch:

Now, I know you've chaired Generative AI Week last year. You've been in the industry for a long time. You've seen many things happening when it comes to data, when it comes to AI. So I'm really looking forward to our conversation today. Absolutely, folks, for those of you in the audience, don't forget I have a new book coming up called The HUMAN Agentic AI Edge. That'll be out at the end of February. Get ready to check it out when it launches on Amazon. It's all about how you can build and shape your next generation of AI-ready teams. Everybody says we need to do AI, but there are few guidelines on how to do it well: how do you get your teams to actually use it without creating AI slop, the low-quality stuff that ends up in our inboxes and clogs them? And how do you encourage your team members to do that? Alright. So without talking about that too much, Samantha, should we play a little game to kick things off? Let's do it. Okay. So, in good old What's the BUZZ? fashion: when I press the buzzer, the wheels will start spinning and you'll see a sentence. I'd love for you to complete the sentence with the first thing that comes to mind, and tell me why, in your own words. Are you ready for What's the BUZZ?

Samantha McConnell:

I think so.

Andreas Welsch:

Okay. Here we go. So if AI were a plant, what would it be? 60 seconds on the clock. Go.

Samantha McConnell:

All right. I'm gonna go with bamboo.

Andreas Welsch:

Because

Samantha McConnell:

It is highly useful, has many different uses. It's also really fast-growing, right? Yeah. And so for both of those reasons, I think with the rate of AI growth, the rate of AI innovation, and also just the many different ways AI can be used and provide value, I feel like if AI were a plant, it would be bamboo.

Andreas Welsch:

Okay. I love that. Great answer. Lots of pandas chewing on it, too. So fantastic. Great segue into our topic too, right? Like I said, I see so many vendors talk about: we have AI agents, you should use what we make available, you can build on our platform, whatever, to your heart's content. But I'm wondering, as more and more organizations are building agents, as more and more people in the organization are building agents, how do you even ensure that your developers, your citizen developers, don't reinvent the wheel for things that are common tasks that you can encapsulate, like you've been doing with modular services or with APIs?

Samantha McConnell:

Yeah. So to me, it's a combination of making folks aware of what we have, and then also effective governance to ensure that we're not reinventing the wheel where it's really important that we don't. And I tend to think about this in terms of two buckets. The first is what we call individual productivity agents, so these are agents that are benefiting one person or maybe a couple of people, right? I build something for myself and maybe I share it with a friend. We have an AI hub and an AI registry, which are centralized resources for AI agents, for logging those and sharing information about them. And if I need an individual productivity tool or agent, I can go there and look. But it is a little bit more of an incentive-based approach in terms of not reinventing the wheel, right? I can go see what my peers have created. When it comes to enterprise tools, or AI agents that are gonna be serving more of a production, enterprise-scale need, serving whole departments or larger user groups, in those situations there is that same making folks aware, right? We're still sharing with folks what we have available, but we also have a governance process in place. So anytime a stakeholder here at Cox Communications needs a new enterprise-scale AI solution, they bring that to our AI council, which is a cross-functional governance body. My team, the AI Squad, is a significant portion of it, but not all of it.

Andreas Welsch:

And

Samantha McConnell:

Folks like security and data governance and supply chain and all those good folks. And we get together to review those. And so in that situation, there's the incentive of: we use what we already have. And then we also have a governance process in place to ensure that we're not having a proliferation of duplicative solutions.

Andreas Welsch:

That sounds interesting, especially the part about the registry and having a central piece of information, something available to know what has already been built. I haven't seen a lot of vendors cover this specific part. To the extent that you're able to share, is it something that you've built in-house? Are you using a vendor solution for that? Because if we compare agents and humans, sometimes that's good, sometimes that's bad. But for the sake of saying, hey, in a business, I know who's there. I have an address book. I can call up Jane, I can call up John, and I know where in the org chart they are and what their roles are. I haven't seen a lot of that with AI agents. So I'm curious, did you build that? Did you buy that?

Samantha McConnell:

Yeah. Ours is primarily homegrown, to serve our particular need. And it's still in its early stages, but it's proving very helpful.

Andreas Welsch:

And how do you ensure that the list is as complete as possible as part of the governance? Do you say you have to go through this process, and then we approve it, and then your agent gets access to the network? Is it that strict or stringent, or is it more like anybody can build an agent, but please register them there?

Samantha McConnell:

So again, it depends on the kind of agent, right? And so for the individual productivity agents, there are looser restrictions. We aspire to automatically register those on the backend, but for now, those are a little bit less tracked. What is very tracked is anything that is of a bigger scale. So for any of those more enterprise- or production-level tools that are getting used by larger groups of people, we do have a strict compliance process, and all of those are getting logged to our registry. But I'll also add, for the individual productivity agents, we have a list of approved AI tools. So if you're gonna go innovate, build something for yourself, in those situations you have to use our approved AI tools to ensure that we're being compliant, secure, and all those good things.

Andreas Welsch:

No, that makes a lot of sense. In many of the smaller and mid-sized businesses that I work with, IT leaders say: we've opened the floodgates. We said, okay, no more wait-and-see, let's now do AI. And all of a sudden we're getting all these requests: I need a tool to do this, I need a tool to do that. And how do we make sure we're not spending money in five different places, or on five different tools that do the same thing? Something like that.

Samantha McConnell:

And it's a middle ground. The centralized AI org that I sit in is about a year old. It was created officially at the beginning of this year; we were in the planning stages in Q4 last year. And we considered several different operating models, from highly centralized to more of a federated model, all the way down to a very decentralized model for AI. And we said our aspirational model is: we would love to work towards decentralized. But as we learn through this and as we get all of our best practices and appropriate guardrails in place, it's something a little bit more in the middle, where for the larger-scale implementations, we are taking more of a highly governed, more centralized approach. Not entirely, but more so. But then also, to your point, everybody's got a good use case for AI, right? And we can't honor those thousands of perhaps smaller-scale, but still worthy, use cases. In those situations, we've decided to say: here's our approved list, also available on our AI hub. Here's an agent builder, here's where you go for this, here's where you go for that. And as long as you're using those tools within the appropriate guardrails, for those things that are geared towards a person or a couple of people, we've said: we're empowering you.

Andreas Welsch:

Awesome. That sounds great. Sounds like really striking the balance between innovation on one hand, but also the governance and the rigor and risk management on the other side, right?

Samantha McConnell:

Yes. That's the goal.

Andreas Welsch:

When we were in Austin, and also when I attend other events, I see so many vendors talk about: hey, it's super easy to build an agent. We've built a ton of them for you. We have a platform. You can build your own, or you can extend what we shipped. But I don't really see a lot of those companies actually talk about what it means, what it takes, to take the agent and put it in a business. What is the lifecycle around them? How do we get people to use it? What were maybe, in your examples, some of the surprises that you ran into as you were going through that process?

Samantha McConnell:

Yeah. So I think there's a number of things I can talk about here. One that always comes to mind first when I think about AI agents, that can be surprising, is just how quickly the innovation cycle is moving, even relative to other AI technologies. So as you mentioned, Andreas, I've been in the AI space for about a decade. But it's very different than it used to be, right? Even very different than it was a couple of years ago. The rate of innovation, the speed to market, the time to value, it's getting shorter, right? Eight years ago, I launched our enterprise virtual assistant here at Cox, and it's still in place. It probably took a couple of years to get it launched, but it's still here in service eight years later. I launched a couple of generative AI solutions a couple of years ago with the gen AI boom, including AI-backed knowledge search, content creation, and things like that. Those had a shorter development time, maybe six months or so, and they're still in service a couple of years later, but we're starting to look at them and say, hey, I think we could build this better. And then with the agentic work that we're doing now, you could have a POC in place in days to weeks, and something getting to a v1 even quicker than we've ever seen before. But also, our assumption is that most AI agents will not see their first birthday. And so while you're getting to market much faster, and you're seeing return on that investment much quicker, you also have to be comfortable with overhauling that, being prepared that it's not gonna sit in service for eight years. It's gonna need to be revisited.

Andreas Welsch:

Now, does it mean that you totally scrap the agent and that there is some better tool or

Samantha McConnell:

Depends...

Andreas Welsch:

Off of the shelf tool? Yeah.

Samantha McConnell:

Yeah. That is something we have run into: we envision, we prioritize, we build a solution, knowing that there will be an off-the-shelf thing coming, but it's not here quite yet. And we have to take a strategic guess on when the off-the-shelf thing will be coming, and you're not gonna get it right a hundred percent of the time. And sometimes it does happen that you build a thing, and then, it seems, five minutes later there is an off-the-shelf solution. And so that's the mental math we're constantly doing.

Andreas Welsch:

No, I'm sure that's not easy. That's not easy for you as a leader; it's probably challenging for the team. And it's probably not easy to convey that and sell that to your leadership: hey, we're investing in something, it takes a couple of weeks or a couple of months to build this, but by the time, or shortly after, we've shipped it, maybe there's something else coming. But still, in that period of time, it was good for what it was.

Samantha McConnell:

And it took maybe days or weeks to build it, not years. And that is the trade-off. And so, yes, we are learning to innovate at that pace, and it's requiring additional organizational and cultural shifts, right? To become comfortable with that kind of approach.

Andreas Welsch:

I can imagine, especially in larger organizations where there's more process, more red tape, more also enablement and education. Not just in the AI function or the IT function, but governance, risk, legal, the other business functions that you work with. I'm sure there's a lot more that goes into that.

Samantha McConnell:

There's tons. And I don't wanna lose sight, though, Andreas, you also mentioned adoption. I think that's another important thing to talk about here, and that has been really key. So I have worked on two different AI products. One was AI-assisted coding. The other was that generative knowledge search. And in both of those situations, we launched the product, and then we did a look-back at it and we said: this product is performing as expected, technically. However, we're only at about 50% adoption. And so that business case that we had planned before we created this AI product, we're only on track to get about half of that. Fortunately, we noticed that, we took the appropriate steps, and now we're fully on track, and more. But adoption is just as important as ever, if not more, given these accelerated timelines. And that's something to really keep in sight from the beginning.

Andreas Welsch:

How do you measure adoption, if you don't mind me asking? I know there are many different ways from looking at users to looking at tokens and consumption and transactions. What's working for you or what did you decide on?

Samantha McConnell:

It depends on the product. In the examples I gave you, we started by looking at users initially, and users with a significant threshold of usage, so not folks who tried it once or twice, but the folks who were using it routinely. So that was the first one. And then, how often were they using these tools versus legacy tools? We also looked at that balance to say: are they fully adopting them, or are they going back to legacy tools, which they may be comfortable with, but we know aren't quite as effective?

Andreas Welsch:

I see. Huh. Now, many organizations, many leaders are jumping on this agentic bandwagon. It seems so promising. I was talking to a CIO the other day, and he said: look, all the consultancies, all the vendors are whispering in my ear, whispering in my C-suite's ear: see, it's super easy, we should build agents, and here are all the productivity gains that you'll achieve. We've seen in the news, last year specifically, lots of talk about a slowdown in entry-level hiring. What are you seeing? What do organizations, what do leaders miss as they jump on the agentic AI bandwagon?

Samantha McConnell:

Yeah, so I think one thing that can often be missed is establishing appropriate success metrics for your AI initiative before you dig in. And it's not just about success metrics, either. I think AI can be something really exciting and appealing that folks wanna use. But if the problem you're trying to solve is a policy problem, AI's probably not the right solution. Or if it's really an org-structure problem, AI is probably not the solution. It's ensuring that the we-wanna-use-AI imperative doesn't lead you to a mismatch between the technology and the result you're expecting to get. I think that's a really key thing to keep in mind as you're starting your agentic AI endeavor.

Andreas Welsch:

Connect your AI strategy, connect your AI portfolio, to your business strategy. What is it that we're actually trying to drive? How do we measure if we achieve that, right?

Samantha McConnell:

Yeah. And I think at a macro level, and then also at a, I don't know, micro, a more granular level too, right? As you're tackling particular processes or particular use cases, are your AI products connected to the right metrics?

Andreas Welsch:

Yeah. So a question for you. You've obviously, like I said, been working on AI products for over a decade. You've been actively working on agentic AI scenarios and use cases, and bringing those into the enterprise. What's your favorite agentic AI use case, and why? And how do you measure the impact of that agent?

Samantha McConnell:

So I might be biased by the thing I'm currently working on, which I do love very much.

Andreas Welsch:

That's good. Yeah.

Samantha McConnell:

So we are working on reimagining our B2B marketing and sales organization with agentic AI. And we have some wonderful partners in that space who came to us and said: we're ready. We're ready to embrace AI. We're ready for change. What does this look like? And, referring to what we talked about before in terms of the rate of innovation, we said: that's great, we love this vision, and also we need to ensure that we're tackling that scope in an appropriate way. And so we're using a land-and-expand approach. We started with personalized content creation as part of the overall marketing and sales process, working towards our POC, beta, v1, and then we're moving outward and in depth from that initial work. Because if we try to tackle everything in one big-bang approach, we're gonna run into that issue: we get 20% done, and then the world's changed, and we're gonna say, I wanna build it differently now. And so we're taking manageable bites out of the overall scope while keeping the overall goal in mind. I mentioned content creation, so that one is about increasing our efficiency in producing personalized marketing content and also making it much more personalized, whereas previously it was much more one-size-fits-all with our B2B audience. Now we're able to use AI to make it scalable, customizing it by segment, by industry, by all kinds of different criteria, and making it much more relevant to the audience. And then from that tool, we've moved on towards things like brief creation and also towards selling. We're looking at seller preparedness, right? What information do we give them ahead of attempting a sale? And there we're looking at seller effectiveness, seller efficiency, some of those different things. But really, we have not had challenges in terms of measuring impact or identifying impact. And I think that's generally been true for us where the AI benefit is more specific and tied to a particular department; the business case is not particularly difficult. I think where it can be a little bit more challenging sometimes is when you have the very general enterprise productivity tools. And with those, I think while they're valuable, that's where I've sometimes had a little bit more of a struggle with quantifying the impact, because it's more diffuse. Yes. Yeah. That's something we're still thinking through.

Andreas Welsch:

That's something I'm seeing too, right? Yes, it's fine to write emails faster, get a little coaching here and there, summarize your meeting minutes. But how do you quantify this into something that you can take to your business stakeholders and say: hey, here's what we're spending, here's what we're saving? And how much are people really saving at the end of the day? Does it really move the needle? Or, to your point, is it the departmental use cases? Is it about operational efficiency at a larger scale? Is it about engaging with your customers differently? I remember talking to the CTO of a media company in 2024. They operate a few theme parks, and he said: hey, with the help of generative AI, we're looking at personalizing the guest experience. You're going to the Harry Potter theme park, all your email communication will be in that language; you're going to the Minions park, everything will be in Minion. And obviously, what's true for consumers certainly applies to B2B as well. So it's great to see how you're thinking about that. How do you bring team members on board there? When agents can do more work, and more sophisticated work, now we're able to scale, like you said, personalize by industry, by many other parameters, things that we as humans haven't been able to do in the same amount of time or maybe at the same cost. How do you navigate some of those conversations? And maybe it's not you personally; it's more across the organization.

Samantha McConnell:

I think where we've used it, AI can be really fulfilling, because our marketing team absolutely wants to create the most personalized, customized communication possible. It just wasn't doable before. But they aspired to that. And so, using that as an example, it allows them to achieve more of their vision than they were previously able to. I think that's fulfilling. And I think for many folks, even with some of the more generally used tools, it gives them the chance to take some of the administrative work off their plates. And so now they're able to focus more on whatever they were passionate about in their job, right? And certainly, folks can have some understandable apprehensions, but generally, once folks get hands-on with the tool, if we're launching the right AI products, they say: thank goodness we have this now. We wish we'd had it 10 years ago.

Andreas Welsch:

That's awesome. That's the kind of feedback and the kind of support that you want to drive adoption, for sure. So maybe, before we get closer to the end of the show, can I ask you: what are your three key takeaways for our audience today?

Samantha McConnell:

Okay, let me think through this. I think the first, probably from what we talked about around not recreating the wheel: finding the right balance between control and governance. Ensure you're being responsible, but also empower folks, because overly controlling the use of AI means you won't be meeting all of the need. So I think finding that balance is maybe my first takeaway.

Andreas Welsch:

Yeah.

Samantha McConnell:

All right. I think maybe the second is this idea of the increased rate of innovation. We all have these sort of muscle memories of how we move through these kinds of technology products and projects, right? But in the world of agentic AI, the wheel of innovation moves more quickly than that. And so we can take advantage of faster time to value, but we also need to be prepared to revisit those solutions, and many won't last as long as we're used to. Yeah. And then I think maybe the third thing I'll call out is ensuring that you connect your AI products to appropriate success metrics. Not just any success metric, right, but ensuring that it's something that is appropriate to the technology. So I think between those three things, that covers a lot of ground.

Andreas Welsch:

Cool. I love it. Very tangible. Samantha, I really appreciate the depth with which you've shared how you're approaching that at Cox Communications, how you've evolved the program, and what the key initiatives are. Really, the part about governance stands out to me: fostering innovation where it's about personal productivity on one hand, while also making sure that it is compliant and standardized on the other. That's great. And the learnings about building reusable components and the registry are pretty exciting too. Yeah. Samantha, thank you so much for taking the time to be with us today and share your expertise and your experience with us. Really appreciate it. Of course.

Samantha McConnell:

Thank you so much for having me.