What’s the BUZZ? — AI in Business

Agents Need IDs: How to Authenticate & Score Agent Trust (Tim Williams)

Andreas Welsch Season 4 Episode 26

When AI agents can self‑spawn, act at machine speed and delete their own trails, identity and trust become business-critical.

In this episode, Andreas Welsch talks with Tim Williams—an experienced practitioner who’s helped organizations commercialize AI—about the security gaps agentic AI exposes and practical ways to close them. Tim explains why traditional usernames, passwords, and persistent tokens won't cut it, why trust for agents should be treated like a credit score rather than a binary yes/no, and why observability and transaction-level controls are essential.

Highlights you’ll get from the conversation:

  • Why agents operate at a different scale and cadence than humans, and the new risks that creates.
  • Real breach lessons (e.g., persistent token compromises) that show why persistent access is dangerous.
  • The concept of sliding trust: using a trust score to gate actions (low-risk vs high-risk transactions).
  • Short-lived, transaction-based approvals and why persistent credentials must be replaced.
  • Why cryptographically verifiable, immutable identifiers (and why blockchain can help) matter for accountability.
  • Practical governance: observability, human-in-the-loop checkpoints, and preparing infrastructure in parallel with agent adoption.

Who this episode is for: business leaders deciding what to delegate to agents; security and identity teams rethinking access; product and platform builders designing safe workflows for autonomous systems.

If you want actionable guidance on how to let agents accelerate your business without exposing you to runaway risk, tune in and learn how to turn agent hype into reliable business outcomes.


***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.


Level up your AI Leadership game with the AI Leadership Handbook:
https://www.aileadershiphandbook.com

More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter

Andreas Welsch:

Today we'll talk about security and authentication for agentic AI, and who better to talk about it than someone who's actively working on it: Tim Williams. Tim, thanks for joining.

Tim Williams:

Thanks for having me, Andreas. Awesome.

Andreas Welsch:

Why don't you tell us a little bit about yourself: who you are and what you do?

Tim Williams:

Sure. I've spent 25 years in corporate life basically trying to make AI make money for organizations, largely on the go-to-market side of businesses: using things like predictive algorithmic modeling 20 years ago to predict which customers were going to churn, through to optimizing workflows across voice channels, live chat, and chatbots, right through to next-best-action guidance for sales teams and AI-driven marketing strategies through CRM lifecycle marketing teams. I went out on my own about a year ago to help Australian businesses make sense of how to capitalize on agentic AI and take that forward into their businesses, and I got turned onto the idea of identity and security for AI agents back at the start of this year, so about eight or nine months ago. It's a really fascinating space.

Andreas Welsch:

Absolutely. That's why I'm so excited to have you on, and thank you for staying up late for us to have this conversation. Obviously, agents are the buzzword of the year, maybe going into next year as well from what it seems, and hopefully much longer than that. There's so much promise and so much opportunity in what these agents, these goal-driven systems, will be able to enable, but I think there are also some risks associated with that. Isn't that right?

Tim Williams:

Absolutely. They're very different from human actors, not only in terms of scale, but also in terms of their operating cadence and exactly how things can go wrong. Humans operate, generally speaking, in an eight-hour day. They handle a certain number of transactions, and they're largely operating with either bad intention or good intention, and it's generally easy to tell the difference between the two. AI agents can be operating 24/7, 365 days a year. They could be making thousands of decisions a minute. They can self-spawn additional agents to help them achieve their goals. And, as many recently reported incidents show, they can be designed with good intent at heart and yet make catastrophic decisions and mistakes in the pursuit of those goals.

Andreas Welsch:

A couple of weeks ago I was reading that the security firm CyberArk said agents already outnumber humans 45 to one. So what changes in terms of identity, trust, and verification when you no longer have a human in front of the screen, but an agent taking these actions?

Tim Williams:

Absolutely. Generally speaking, the way I love to explain it is that humans have fingerprints. That's not just biometric, not just the ability to fingerprint every time I want to 2FA into a system; I also tend to leave a trace, an audit trail behind me that can be followed up and that I can't access and change. Agents are obviously very different: there is no biometric, there's no fingerprint. And there have been examples where AI agents were found to autonomously delete their audit trails and act in ways humans simply cannot, at a speed and scale that just can't be replicated by humans.

Andreas Welsch:

It's really this part of scale, and agents being an even bigger black box than AI models have been to date.

Tim Williams:

Yeah, absolutely. And it's these self-spawning capabilities. Claude Code, for example, is a really good example of the ability for an agent to spawn sub-agents who can then continue to act. So instead of acting to prevent risk from a single human (like you said before, 45 to one), you've got agents already operating at scale, agents who can then self-spawn multiple sub-agents to help them achieve their goal. And those agents exist one second and don't the next. Very different to how humans operate. Humans generally operate in a one-to-one kind of environment; their identity persists, and you can go and find where they are, whereas an agent could exist one second and not the next.

Andreas Welsch:

Right. Now, if I look at your background image, it is a little scary. There's this guy with a long mustache and red eyes, holding a bag with a dollar sign in his hand, and the other one with a tie seems to be diligently working. How does that translate to what you're seeing emerge in businesses, and what could that risk be?

Tim Williams:

We've got the FBI-looking agent over here who has authority. If you think about an FBI agent, they wear a badge, a way of knowing that agent has a specific identity and has been delegated authority by the state. That's the way we like to think about agent identity: not only can I tell who they are, but I know they've been delegated a level of authority by the human entity that's responsible for their actions. Versus old mate over here, who doesn't have an identity. He operates in the shadows, out there trying to make life difficult for well-meaning humans going about their day.

Andreas Welsch:

I like that analogy you just shared: the good, legitimate agent having a badge, us being able to identify them, and them being authorized to represent who they say they represent. How do you do that kind of authentication with agents, and how do you make sure they stay within limits and stay on the good side rather than the rogue side?

Tim Williams:

Yeah. If you look at the way many are approaching it today, in most cases, because this is very new territory for a lot of people, you find that a lot of people are simply providing agents with their usernames and passwords, or their private keys, or their OAuth tokens. And we've seen a number of breaches that have attacked what had traditionally been accepted as relatively secure avenues. There was a breach a couple of weeks ago, the Drift AI breach, where 700 organizations were compromised via sales agents. Those agents had persistent OAuth tokens that were compromised by the attacker, who was then able to use those OAuth tokens to get into AWS accounts, Salesforce accounts, all sorts of different things that are quite scary. Those attackers were in for 10 days and managed to delete their trail as they went. So it's a good example of those traditional security givens you would use to manage humans being significantly insufficient for managing agents.

There are a few different approaches emerging. Some people are trying to use MCP as the solution; others are trying to stretch other identity and access management capabilities. What we're doing at Async is quite different. We're approaching it more like Social Security numbers and FICO scores: the ability to have a unique identifier that tells you exactly who this agent is, but also the person who's responsible for any of its actions and who to hold accountable if anything goes wrong. And then, rather than thinking about security and trust in a binary trust-me-or-don't-trust-me model, which is what all of these traditional security solutions are built on, trust when it comes to agents is a sliding scale. An agent could be very trustworthy one minute and very untrustworthy the next. That's where credit scores came in for humans: credit scores became available because the ability to trust a human with credit or with money is not a binary yes or no. It changes over time with circumstance. The first time I borrow money, I could be very trustworthy; I haven't committed fraud. But the second or third time, when you can see I've got a payment history or I've done certain things, that suddenly starts to change the dynamic.

The same thing can happen with agents. They could be well built, have a good origin, be built on solid platforms, with a really reputable, accountable entity behind their actions. When they start to do business, they could be quite trustworthy, but they could drift, as we've seen. Anyone who's used AI knows that the longer you interact with it, the less reliable some of its actions can become. And then you can also have compromised agents, where an attacker has been able to compromise the agent, whether through memory poisoning or tool misuse or a number of other types of attacks we've seen recently. The ability to trust that agent can degrade over time. So we're thinking about trust as a sliding scale, and that trust score can also be used to determine what level of risk you want to take on by trusting that agent.

If you're talking about a very simple transaction the agent's trying to do, maybe a trust score of 50 is fine, versus maybe this agent's trying to finalize a million-dollar transaction

Andreas Welsch:

Yeah.

Tim Williams:

You'd probably want a trust score in the nineties. So we're thinking about it quite differently, and our position really is that traditional solutions in this space just aren't going to be adequate for agents operating at the scale they're already operating at, and will be in the future.

Andreas Welsch:

So it sounds like it becomes a transaction-based check: is the transaction the agent is trying to complete high risk, medium risk, or low risk, with some risk modeling going on to define categories of tasks, their impact, and what that risk is? Am I somewhere in the ballpark?

Tim Williams:

Yeah, absolutely. But it also comes down to the persistence of that access. What you just described touches on a bit I probably didn't cover as clearly, but it's really important: traditional access management and identity solutions follow a binary and persistent kind of model. I talked about the token breach before, where those were persistent OAuth tokens. Agents really can't be trusted with persistent access, for all the reasons I've mentioned. So you're talking about short intervals and transaction-level approval, rather than the persistent identity access you would have in traditional systems. And as an organization, you can make decisions around what trust score you accept for agents, and whether that trust score differs depending on which systems they want to interact with or which transactions they'd like to complete.

Andreas Welsch:

See, to me, that's been one of the big questions when we talk about agents: how much do we delegate to them in terms of tasks, risk, and impact? And how trustworthy are they? Because yes, on paper, yes, in our lab, these one-off transactions that we fire off here and there to test things seem to work fine. But then you have them in your environment, or in the wild, so to speak.

Tim Williams:

Yeah.

Andreas Welsch:

Who knows what really happens, and how quickly can you act? So it sounds like you're really working on something that is, or can be, very impactful for a lot of businesses figuring out what to do here.

Tim Williams:

Yeah.

Andreas Welsch:

We've always had some level of trouble with bad actors and challenges around trust: people saying, I'm this person, or I have these privileges and permissions, when they didn't. We had the whole thing of social engineering. And about a year ago, there were some papers about agents deceiving their users about their intents and the actions they had taken. So how does all of that change now, with agents saying who they are and verifying all of that?

Tim Williams:

Yeah, it's a really good point. I used the example of Social Security numbers and FICO scores before, and a number of breaches we've seen in the past have targeted exactly those kinds of data points. If I can work out someone's Social Security number, and I've got their date of birth and their address, I can pretend I'm them. That's not a problem. So it's another reason why this space demands a different approach, and why cryptographically unique, immutable, and unspoofable identities are really important. We're choosing to use blockchain as part of the technology to enable that. We're not a traditional crypto or token platform; we're generally a web2, enterprise-type solution. But the research we've been reading, and the academic research we've been pointed to, really tells us that at the moment blockchain is the only solution reliably providing a cryptographically secure, zero-trust way of creating these identifiers, and creating them so they can be verified in a decentralized manner at sub-second latency. It requires that level of sophistication because, like you say, it could be very easy for one agent to imitate another. Like I said before, there's no biometric: you don't have the fingerprint that's unique, you don't have all those things that could be used for humans to resolve those issues. We have to look for more innovative ways to use technology to solve that problem.

Andreas Welsch:

A curiosity question there. As we're talking about agents, about where they might fall short, where there might be some level of impersonation or even a lack of authentication: how do you deal with that kind of technology as you're building the product and the company? What's your thinking on agents? Where do you rely on them, and where do you say, no, this is too much, this is too risky?

Tim Williams:

Yeah, I think it all comes down to observability, and to having systems and processes in place to manage trust. We often have this conversation of, it's a bit chicken-or-egg, right? Agents aren't quite at the point where you can trust them to be out on their own, and you need a human in the loop at all these different steps. For us to remove those things really is quite a leap, but that leap is what's required to fully realize the innovative potential that agents represent. I often have this conversation of, Async would be a solution for this, or whatever technology or solution you've got in place would be required, and the answer I often get is, I don't really trust my agents to get out into production at the moment, so I wouldn't use a solution like that. And it's: imagine what would happen if you could.

I think it's that thing of, before there were cars, there were no roads, right? There were tracks for horses. So you've got two options: you either build roads and wait for the cars to catch up, or you build cars and wait for the roads to catch up. Actually, you can do both, and I think that's what this particular moment in time requires. This technology is advancing at a rate I've never seen before in my lifetime, and I'm not sure how other people feel about it, but I don't think it's enough to sit back and go, we'll wait for agents to operate at scale before we build infrastructure. I think you need to build the infrastructure in parallel, so you can get the most out of this innovation and these advancements in as short a time as possible.

Andreas Welsch:

One of my guests earlier this year talked about roads and self-driving, autonomous cars in a similar analogy, more around the fact that, hey, we need to design for two things: one is the future, how autonomous vehicles act and behave, and the other, more figuratively speaking for software, is the present, where we still have human drivers on the street, plus bikes, motorcycles, and all different kinds of transportation. That was an interesting concept, and it's interesting to hear you use the analogy that laid the foundation for it. Is it roads? Is it dirt tracks? And don't wait, is what I'm hearing.

Tim Williams:

Yeah. And I think it's important to remember that AI is similar to a lot of other technologies in that it comes and goes in bursts. Internet speed is a really good example, right? I remember before the dial-up modem, the internet was really only for big institutions that could afford really big infrastructure. Then all of a sudden someone realized, hey, I can send the internet down a phone line, and all of a sudden everyone's got internet. Those 56k modems were the thing for quite a while. Then someone went, oh, actually, broadband's a thing, and then cable's a thing, and that was the accepted standard. And then all of a sudden you've got fiber, and it takes off. AI operates in the same kind of way. AI's been around for 70 years, and you see periods of massive acceleration and then periods of very much stagnation in advancements. I think it's important, when you reach a point where you're seeing technological breakthroughs like we're seeing at the moment, that you continue to double down, because what happens next is a long period of stagnation where you don't get that value. Being able to enable that innovation and help advance how far AI gets in this particular burst period, I think, is really important.

Andreas Welsch:

So we're already pivoting towards the opportunity and the potential, which was my next question anyway, so I'll ask it. When you work on a topic that deals with risk aversion, with preventing harm, fraud, and identity theft, and all these security topics, I have a feeling it can be pretty daunting, pretty negative for some people, if you always need to think about what can go wrong. So where do you see the opportunities in all of this: because of the work you're doing, or despite the work you're doing? Depending on how you want to look at it.

Tim Williams:

I actually think it's because of it. And the reason I say that is, otherwise we wouldn't have to build something as complex or as innovative as what we're building with Async. We're constantly seeing what's happening at the forefront, at the frontier of where this technology is going. And yeah, it can be scary, but at the same time you can start to see the potential. We're having to build these things out because agents are so clever, because they're unlocking a capability you've never seen before, and having to design and build around all of these advancements is really exciting, because you can see where this can go. And there are always the risks, right? There are always the risks of bad actors, of things going the wrong way, and I think it's really important that the right kind of architecture and infrastructure is built to help mitigate those risks. Because ultimately this technology exists; even before we got to this level with agents, you still had bad actors finding different ways to get into different things.

I had a very similar conversation, and this is going to sound like I'm name-dropping, but I'm going to do it anyway. I was in a conversation at a Salesforce meeting with a former head of cybersecurity at the CIA, and we were talking about quantum computing. The question that got asked was: quantum computing is exciting, but quantum computing in the hands of bad actors really is quite scary, right? You're talking about processing capability at a level we've never seen before, and then you're putting AI agents on top of that. What could that become? His response was actually quite prescient. It was: bad guys are always going to find ways to get into things, and our job is to close those holes. There are always going to be new holes that pop up. It's whack-a-mole. They're always going to find new ways; you've just got to knock them down as quickly as you can, try to predict what's going to be out there, and try to close those gaps before they happen.

Andreas Welsch:

So then with all of that out there, the challenges and risks, the big opportunities, what should leaders be looking for when they bring agents into their business?

Tim Williams:

Yeah, for me, it all comes back to strategy. I think everyone's read the different reports that talk about the failure rates of AI experiments and AI projects, and the common thread I see between a lot of them, I'm not going to say all of them, is that they can be very knee-jerk and very narrowly focused. Often you see those efforts focused on just trying to cut costs. I always say the shareholders' favorite story is the CFO's one: oh, we're cutting costs and we're reducing headcount. Actually, I don't think that's the right strategy with AI, and with AI agents in particular. I think it's a growth play, and that's a very different strategy. But it actually doesn't matter which strategy you're following; it has to be a really clearly, well-thought-out strategy. Not just for AI, but: what does your business strategy for the next five years look like, empowered by AI and by AI agents, in a way that's unencumbered by the constraints of the past?

So the first thing I'd suggest is strategy. Think really hard about what it is you're trying to solve. What is it in your five-year strategy that AI can make faster, better, more profitable, whatever the outcome might be? And in particular, how can you use it to deliver a customer experience that's worthy of what your customers deserve, but also one that you can protect, making customer experience your moat through AI and through AI agents? That would be my first piece. The second piece is to really think through the observability and the controls you have in place: knowing what AI is in your business, how you'll manage decisions around letting new AI into your business, and being really clear on the steps you can take to manage those risks if they go wrong, whether they're agents you're responsible for or agents of other parties that you're letting into your business. Be really clear on how you're going to manage it if and when those things go wrong. I think that's a big piece that's really important right now.

Andreas Welsch:

I think that's very actionable and very tangible advice. Tim, we're getting close to the end of the show. And I just wanna say thank you for sharing your experience with us and for staying up late, I know you're in Australia. It was a great conversation. I learned a lot about where we currently are with agents and what needs to happen to make sure that they are secure, that we can authenticate them, and that we can give them a real identity and tie it back to a user.

Tim Williams:

Yeah, great. Thank you for having me, Andreas. I really appreciate it. It's been a great chat.

Andreas Welsch:

Thanks so much, folks.