What’s the BUZZ? — AI in Business

Leading Your AI CoE To Success (Guest: Brian Pearce)

October 20, 2022 Andreas Welsch Season 1 Episode 14

In this episode, Brian Pearce (Senior AI CoE Leader) and Andreas Welsch discuss leading an AI CoE to success. Brian shares his journey of building and leading a CoE, and provides valuable advice for listeners looking to do the same.

Key topics:
- Understand the mandate of an AI CoE
- Learn when an AI CoE’s mission is done
- Hear why a product mindset is key for AI

Listen to the full episode to hear how you can:
- Focus on delivering business value
- Look beyond just building models
- Engage with stakeholders as users of your product

Watch this episode on YouTube: https://youtu.be/_V-MWGawqks


***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.


More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter

Andreas Welsch:

Today we'll talk about leading your AI CoE to success. And who better to talk about it than someone who's done just that: Brian Pearce. Hey Brian, how are you?

Brian Pearce:

Hello, Andreas. How are you?

Andreas Welsch:

I'm all right. Thank you so much for joining. I'm so excited that we have the opportunity to have you on. We've talked a couple of weeks ago, and actually also a couple of months ago, and I was so inspired by the career story you shared: how you have built and led a CoE at one of the largest U.S. financial firms, what you can share with the audience, what has worked for you, and what they can take away as well. Before I talk too much, why don't you tell us a little bit about yourself?

Brian Pearce:

Sure. And I think, just for context today: I was a member of a leadership team that built an AI CoE at a large US financial institution. We were responsible for standing up the CoE and ultimately delivered $200 million worth of models in a two-year timeframe, once we actually got going. Specifically, I was responsible for the product, go-to-market, and customer success functions in that CoE. It's a team sport, and there are lots of other players, but that was my role within that leadership team.

Andreas Welsch:

Thanks for sharing. I'm confident that the learnings you have gone through, and that you'll share, are probably the same ones that others in the audience are either experiencing right now or might run into as they get onto that path of leading a CoE. So pay close attention. For those of you in the audience who are just joining the stream, drop a comment in the chat: what do you think the role of a leader in the AI space in business should be? I know we've talked about this already a little bit, and many people in my network that lead AI projects come from very different backgrounds professionally. So I'm always interested to learn about the path that people have taken in their career. I'm curious: what's been your journey to AI? Can you share a bit about that with us?

Brian Pearce:

Yeah, for sure. I think my journey's a little bit different than a lot of folks' in this space. I really came at AI as a full stack product manager, and I fully embrace that description. My career has really been focused on taking emerging technologies and applying them to business problems. It started out, frankly, with client-server, and then internet, and then mobile, and then I came to AI. AI is just another in a series of technologies that I've used to solve business problems in my career. So I'm really coming at it from almost more of a software perspective, and that's a little different. I think if you look at most AI job postings, people want candidates with a lot of AI experience or particular degrees, and that's really not where I come at it from. I think that does help me in some ways, because I bring a very different perspective: the perspective of the software industry, that software engineering background, where the types of things we're doing on that side of the house can really be applied to AI. I think that's helped me be successful, because it is a little bit of a different perspective.

Andreas Welsch:

That's awesome. I think that's also very encouraging, right? Because to your point, a lot of times there's this expectation that you need to have a PhD in statistics or have been a researcher before you can move into this kind of role. But from my experience as well, I feel that having a different background also gives you a different perspective on the topic of AI and on making it relevant in business, and on how we talk to our stakeholders in business about these kinds of things.

Brian Pearce:

Yeah, absolutely. I think I was joking with you: I feel like 99% of AI job postings today, I would be automatically rejected from, right? If you look at the way the postings work. And it's no, I've been there, I've had success, I have a lot of perspective I can bring. But because there's such a narrow focus, that ends up being a barrier for a lot of folks, and I don't think it needs to be.

Andreas Welsch:

I think it's also on all of us, as we move into leadership roles, to carry this forward, right? And to look for diverse skill sets that in the end help us become better and build better teams. So maybe let's switch gears a little bit. We wanna talk about leading a CoE for AI. Maybe as the first step, we should probably talk about what a CoE even does. It's the abbreviation for a center of excellence, but what actually stands behind it? What do we mean by it? And what needs to be in place to build one? When is it a good idea to have one in the first place?

Brian Pearce:

Yeah, it's funny that it's such a basic question, but it's actually really important to understand. Talking with other CoE leaders, and looking at my own journey and our journey when we built our CoE, I think it can mean different things for different companies, really based on culture and business needs. But at its very core, an AI CoE should be accelerating the realization of business value when you're using AI. It's really about: how do you go faster, do better, get more results when you're using AI? And how do you move it from being about experiments, or the lab, or R&D, into really delivering value for your stakeholders, real value in the business, as opposed to just learnings or experiments?

Andreas Welsch:

Fantastic. So let's maybe turn over to folks in the audience. If you have a question for Brian, put it in the chat. I think the other part that I feel is so important is not only having the right skillset and understanding why you need this CoE. But I think we all go through a certain journey, right? Whether we are aspiring to move into a role like that, are in a role like that, have been in a role like that. There are lots of learnings and lots of stories to tell as well. So I'm curious, what was one of the things that you've learned on your journey when you've started the, CoE?

Brian Pearce:

Yeah, there's a couple of things. I think, going in, getting really clear on the mandate of the CoE is important. Why does the CoE exist? What's it gonna do? How's it gonna help your company? And what's its role? And certainly, I think I shared with you the story of trying to be the AI police. When we first started, we were really excited about AI, and we were gonna catalog and control and decide on every vendor in the company that was using AI. And it was just a complete failure, because everybody has got AI embedded in their products. The idea that our team would somehow have control over all of this was just not realistic. At this point, hardware routers have AI models in them. Are you really gonna have your team be responsible for the models in a hardware router? It's not possible, right? So really getting smart about that.

Getting your data in place and data governance is also really critical. Certainly, as a financial institution, data governance is absolutely critical for us, but I think it is for any organization, and governance can actually really help you. When you begin to really understand your data and you have good, tight controls over it, everything becomes much cleaner: much better tracking of data, a much cleaner understanding of the chain of custody of data. It's really critical, and you can build from there.

And then, frankly, understanding who the stakeholders are is also super critical. From a stakeholder perspective, you have to really understand: who is it that's gonna really benefit? Who is it that's really gonna be interested in AI? For us, we had model governance, we had data governance, we had our business lines; those were all stakeholders. Even legal and compliance were important stakeholders, where we did education and then continued to keep them in the loop as we moved forward.

So what was really critical for us was understanding the full ecosystem of stakeholders and then having a plan to actually engage those stakeholders. And frankly, we did a lot of education. The legal team was not happy when I showed up for the first time and told them, guess what? AI doesn't follow rules, right? You don't tell AI things; AI learns. There were some not-so-happy people, right? There were some who said, wow, this is gonna be a problem, Brian. But we worked through it, and we began to understand each other. It's really that getting to a common understanding that was critical for us.

Andreas Welsch:

Yeah, I remember having similar conversations with some of the customers that I've worked with, specifically around risks and controls. As soon as you get into that area, and like you said, legal as well, it starts getting a little finicky. So let me pick up on that AI police topic real quick; I see we are getting a few questions in the chat already. What did that lead to, as you and the team started trying to catalog as much as possible? How was the resonance, the feedback from your peers?

Brian Pearce:

Yeah. So when we started out that process, we had every good intention, which was that we wanted to compile a list of everywhere in the organization we were using AI. And we began to realize that every vendor was claiming to have AI in their products. Then we got to the point where we tried to set up these gatekeeping controls, where we were gonna make approvals, and people frankly just worked around us. And we lost some credibility. It was an overreach. We thought we were doing the right thing, but we really just got out ahead of ourselves. When you begin to operate in a large organization like that, people very quickly realize you don't really have the scope to pull this off; you really can't do this. And you lose some credibility, you lose some trust. So we really had to go back and reset on that point, to realize that we weren't gonna be successful doing that. We wanted to get focused back on that delivering-value mandate, get back into the good graces of our stakeholders, and let this other stuff go by the wayside.

Andreas Welsch:

Thanks. Perfect. I think that's a great way to reset and go back to the point of providing value to the stakeholders. Thanks for being open to sharing that. So I see there's a question from Pedro in the chat. He's asking: where should an AI CoE, in your view, best be located? And what criteria would you use to determine if it should be centralized or federated? So now we're talking about organizational models as well.

Brian Pearce:

I think this is gonna depend on your company culture, right? And how your company works. For us, what we ended up doing is we had a virtual team. We had a business team, which I sat on, that was in our innovation group, and that's where the head of the CoE sat. We had a data science team in our data group, and we had a technology team in our technology group. And we worked together as a leadership team, 'cause you really need a blend of all those skills. That was the way that worked for us. What you risk, if you have the CoE sit just in your IT organization, is that it becomes a solution looking for problems to solve, right? You have to make sure that the business is engaged and you're really thinking about the right business problems. You don't want people to just see it as yet another tool in the toolbox: oh, you've got Java, and you've got AI, and you've got NoSQL, and hey, they're all together in an IT toolbox. So if the CoE is gonna be sitting in IT, you need to make sure it stays connected to the business. And when it sits in a data organization, I think you have to really make sure that the data organization is connected to the business: do you have the business knowledge in that team?

The centralized versus federated question, I think, is actually really interesting. A couple of things come into play, and again, some of it is organizational culture and, frankly, size. If you have an organization where you've got two or three data scientists spread in teams throughout the whole organization, that becomes really challenging, right? Because as a data scientist, you're not working in a team, and you don't have a career path. It becomes very hard to leverage learnings and best practices, and that's, to me, a case for centralization: to get some scale. But if your businesses are large enough that they can really have scale, then I think federated can make sense.

For us, we started out with a centralized model and moved to what I would call a semi-federated model. We continued to have some centralized resources developing models, but then the very large businesses, which could hire their own teams of data scientists, did that as well. And the central team became more about best practices, platforms, and tools, and really tried to pull a community of practice together. So it's gonna vary by company. I know that's not a magic answer, but I think you have to really factor in these different aspects of that decision.

Andreas Welsch:

Awesome, thanks. Yeah, I think the cultural aspect is definitely an important one. I see there's another question from Octavo in the chat. He's asking: do you see the role of IT automation infrastructure and platform within the scope of the CoE?

Brian Pearce:

Yeah, we didn't have that. We worked with the RPA team and the RDA team as part of it. I have to say, I think the most interesting model that I saw was a CoE leader I ran into who had a team that combined AI, RPA, and process engineering, which to me is perfect, right? Because, you know what? You can show up, there's a business problem, you bring your process engineering team in there, you figure out what's going on: what's the best way to solve these problems, what's the best process? And then you can apply both RPA and AI together. You know, I often caution people: as you begin to think about applying AI, AI can make a bad process really fast, right? You can speed up your bad process; you can make bad decisions over and over again. So you need to be really thoughtful about the process that you're looking at and where you're gonna drop AI or RPA into that process. And that team that had all three together was pretty compelling. For us, the scale was just too large, so we split those up and had different teams. But I do think there's some synergy between those teams working together, for sure.

Andreas Welsch:

Awesome. Thanks. You mentioned a little bit about being the AI police, going back to the drawing board, and repositioning yourself. How did things work after you had done that and you started getting into your first AI projects? What were some of the learnings early on that have carried forward over the journey?

Brian Pearce:

It took us a couple of attempts to get it right. I think at first we got really excited about building models, and we didn't wanna disappoint our stakeholders, so we took on every project. And then we began to lose projects deep in the pipeline: after working on them for weeks, projects would wander away; we'd run into some problem and it would stop. We had to really begin to do a much better job upfront of filtering which projects we were gonna take on, and of making sure that what we were doing was in fact a good AI problem to solve. It turned out there were a couple of things we needed to get really focused on. One is really about data: do you have the data to solve this problem? If you don't have the data, or the data isn't easy to get to, that's a problem. In some cases the data was really challenging to get to; we had it, but it was locked away, and it was gonna take months to unlock it and get it into shape to use. That's not a good project to take on, right? You gotta go solve that problem first. And then there's an actionability question. Building a model doesn't actually solve any business problems; somebody, or some system, has to consume the output of that model and make a business decision. We had some models that we were all excited to build. We had the data, we trained them up, and we thought, this is so great, and then there was nobody to use the output. Or the system that needed to use the output had a two-year backlog before it was gonna consume it. So there was this huge mismatch: the model was ready, but the system that needed to consume the model output wasn't. So we got a lot better: we had to say no more upfront to stakeholders and just do a better job filtering what things we took on. But once we got that going, we found that we got a much better success rate on the things we began to work on, and we began to get more traction in terms of actually delivering things.

Andreas Welsch:

I remember from my own role: prioritizing also comes with a set of criteria that you want to prioritize these cases by. You mentioned that the models the CoE has built have led to $200 million in savings. How did you quantify that? Or what were some of the KPIs that you put in place to measure it?

Brian Pearce:

Yeah. Part of what we began to do upfront, as we learned, was that as part of the evaluation of "is this a good idea or not, should we take this on," we began to build a business case for each model. And again, business cases are gonna vary by company; I think every company culturally has a different perspective on business cases. We got to the point where we built a business case for everything we wanted to do, and that became a factor in our prioritization. In fact, we got to the point where we actually had the leadership of the business team sign off on the business cases. And again, early learnings, right? Everyone's all excited, we're gonna build stuff, we get the project team together, and they put together just a crazy business case: huge numbers, it's gonna be a billion dollars, this is so great. We go through the process, and then later we go talk to the leadership team: hey, we built a billion-dollar model for you. And they're like, you know what? No, that doesn't pass. So we got much smarter upfront: put together a business case, make sure it's relatively realistic, and get sign-off from that senior leadership team on that model. That then became an input into our prioritization, along with the questions around: is the data available? Is it actually actionable, meaning are the systems or people in place to actually consume the output of that model? And is it really a good use of AI? In some cases, even though you maybe had some value and you had the data, is it really something where you need AI, or could a regular analytics report simply do the same job for you, probably cheaper and faster, so we can have our AI resources focused on something else?

Andreas Welsch:

Awesome, thanks. So I'm taking a look at the chat. One of the questions was from Michael. How do you keep an AI CoE relevant to enterprise groups? Do you hold regular calls, install members within other teams, produce internal reports? What's the one tactic that you do not recommend following?

Brian Pearce:

Yeah, a couple of things that we did. When we first started the CoE up, we put together what we called a roadshow, and we just barnstormed, right? We had a great presentation that talked about different aspects of AI, how to think about AI, what were good AI projects and what were not good AI projects, and we went all over the company. Whether it was a one-on-one with a senior leader, an all-hands call, or a team meeting, we presented to hundreds of people at offsites. We really just tried to generate interest in and awareness of the AI program, and tried to use that as an opportunity to drum up business. And then what we did as well is we had a team of folks, some businesses would call them a go-to-market team or a consulting team, that were basically aligned with the businesses. We had one person on our team whose job was to work with the deposits business: go to the deposits business, sit down, get to understand that business and its leaders, and evaluate with them what their top 10 priorities for the year were. And of those 10, which ones might be AI opportunities? And then of the six that are AI opportunities, which three are actually actionable? By building those relationships and really getting into that business, we stayed top of mind. Because these businesses have got lots of things going on; AI is like 1% of their mindshare. If you're not there with them, they're not thinking of you. They don't realize this is a new tool in the toolbox that they can be using, that there are some opportunities here. And then, when it comes to budget time, they don't put aside any budget for AI projects, and then you're stuck, right? You have to wait a whole budget cycle to get back in. So we were really trying to stay top of mind by using that kind of relationship model.

I think the thing that was most challenging, just given the culture, was any sort of mandate. The culture of the company was such that people didn't really like a centralized function; people were suspicious of it. And any type of mandate that came out of that centralized function, "you have to do it this way, you have to work with us," that was challenging, right? So we really couldn't use that; you can't play bad cop all the time. You really had to go to people and say: I'm here to help you. Help me help you be successful. I have tools, I have resources, I have new ways to solve the problems you have. I'm here to listen, and I'm here to be your partner. And that was a much, much more successful approach.

Andreas Welsch:

Fantastic. I see there was one question from Jesse asking how you see the role of a business analyst in this CoE for AI. There was another question from Janet: what's the business case for change, and how will you drive change beyond the process? So what's the role of a business analyst, and what's the change you're driving? Is there maybe a connection between the two?

Brian Pearce:

Yeah. One of the things that became really critical for us to help people understand is that AI by itself is not gonna solve any problem. You have to have some context around it. And when we talk about a business analyst, it's really about helping to put AI in that context, whether it's other systems or a business process. Because when you're actually in production and you're actually scoring your model, you've got data in and you've got data out. So where is that data going? What is the business process it sits in the context of, and who's doing something with it? It's easy to get focused on: oh yeah, I'm building this model, and I've got my training data, and my model's so great, and look at my area under the curve. But okay, that's great for training; to actually score, you need data in and you need data out. And you gotta think about who's consuming that data out and who's sending you the data in. That's really where the business analyst comes into play, in understanding the context of that project. And the value question really comes into: okay, great, what is the value of that overall effort? The model by itself has no value. If you're not doing something with the output of that model, then you're just doing experiments, you're just doing R&D, right? Which can be useful in some contexts, but you're not delivering value to the business. So that's why it became important to really understand: what's that full pipeline? Once we're actually in production, what difference are we gonna make? What's really gonna change in that business process, and how are we gonna impact the bottom line?

Andreas Welsch:

Perfect. Seems like there's never enough time to talk about these things. But maybe can you summarize it in one point? What's the one main takeaway for our audience today before we wrap up?

Brian Pearce:

I think for me the real key is: AI is not magic. What's really critical is that you have to get focused on delivering business value. AI is a fantastic tool that you can use to do that, but you need to focus on what the actual problem is that you're solving and how it's helping. You can't just be focused on: no, it's great, we're building models, this is really cool, I wanna do deep learning, let's do a neural net. That just can't be the focus. The focus has to be on solving business problems, using this powerful new tool called AI that's in our toolbox.

Andreas Welsch:

Awesome. That was very concise and straight to the point. And again, resonates deeply with me in what I've seen and I'm sure with what many of you in the audience are seeing as well. So with that, we're coming to the end of the show. Thanks Brian for joining me today.

Brian Pearce:

Thanks, Andreas. Really enjoyed it. It was a fun conversation.

Andreas Welsch:

Awesome. Thanks.