What’s the BUZZ? — AI in Business

Grow Your Chief AI Officer Role (Guest: Matt Lewis)

October 15, 2023 · Andreas Welsch · Season 2, Episode 18

In this episode, Matt Lewis (Chief AI Officer) and Andreas Welsch discuss how you can grow your Chief AI Officer (CAIO) role. Matt shares his journey transitioning from CDAO to CAIO and provides valuable advice for listeners aspiring to grow into an AI leadership role.

Key topics:
- Why do you need a Chief AI Officer?
- What are the top focus areas of this role?
- How can you continuously learn about generative AI?
- What are CAIOs most concerned about?

Listen to the full episode to:
- Hear what a Chief AI Officer does and why they should report to the CEO
- Understand why cross-domain skills prepare you for a CAIO role and why industry knowledge is key
- Get tips on how to learn about generative AI while managing your time
- Frame AI projects in the context of augmented intelligence

Watch this episode on YouTube:
https://youtu.be/TecRJLoiZxI


***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.


More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter

Andreas Welsch:

Today, we'll talk about how you can grow your chief AI officer role. And who better to talk about it than someone who's actually in that role, Matt Lewis. Hey Matt, thank you so much for joining.

Matt Lewis:

Hey Andreas, thanks very much for having me.

Andreas Welsch:

Awesome. Hey, why don't you tell our audience a little bit about yourself, who you are, and what you do.

Matt Lewis:

Sure. I'm currently Global Chief Artificial and Augmented Intelligence Officer at Inizio Medical. I've been in that role, which is Chief AI Officer, essentially, it's a lot easier to say that than the whole long title, for about six months now. Before that, I was Global Chief Data Analytics Officer for the firm, six years in that role, and I've been with Inizio in total for eight years. I've been in life sciences my whole career, about 25 years. Inizio is the largest end-to-end communications and consulting firm for life sciences. We work to help groups commercialize new interventions for a variety of different diseases and health conditions across the planet. We're now using artificial intelligence to speed time to market and help ensure that people are able to manage their health.

Andreas Welsch:

That's awesome. Thank you so much for the summary of who you are and your bio. I think healthcare especially is such an important industry and topic, one where there's a lot of discussion about AI. But given that it's a regulated industry, I know it's not the easiest of industries to do AI in. And given all the hype and the buzz around generative AI these days, I'm really glad to have you on and to hear more from you on what really matters for someone in a Chief AI Officer role, and maybe even how you can become one. So for those of you in the audience, if you're just joining the stream, drop a comment in the chat where you're joining us from today. I'm really curious to see how global our audience is. I know people have joined from all different parts of the world previously, so I'm always eager to see where you are joining us from. Matt, should we play a little game to kick things off?

Matt Lewis:

Sure.

Andreas Welsch:

This game is called In Your Own Words. And when I hit the buzzer, the wheels will start spinning. When they stop, you will see a sentence that I'd like you to complete in your own words. To make it a little more interesting, you only have 60 seconds to do that. Are you ready for What's the Buzz?

Matt Lewis:

Sure.

Andreas Welsch:

Awesome. Then let's get started. If AI were a movie, what would it be? 60 seconds on the clock, starting now.

Matt Lewis:

That's rough. If AI were a movie. Okay, first I'll say that AI would probably not be any of the actual AI movies that were released before, say, April of 2023. Because all the movies that are out there that people have actually seen, and I've seen them all, are these kind of Pollyanna-ish or doomsday scenarios of what AI is, that don't actually reflect what generative AI is capable of doing today, or what augmented intelligence can actually do with humans in the real world. I might say that a movie could be maybe a mix of the movie Evolution, starring David Duchovny, from 20-some-odd years ago, where this alien life form comes to earth, people didn't expect it, and in a very short amount of time it takes over everything, and eventually humans figure out how to both work with it and manage it. But it is able to grow in proportions that no one expects, and quite quickly. That kind of captures a lot of what's going on right now in generative AI.

Andreas Welsch:

That definitely sounds like there are a lot of similarities with AI evolving so quickly that sometimes if you skip a day or two of news, you miss some really important things, but at the same time, it's hard to stay on top of everything that's going on as well.

Matt Lewis:

Yeah, as I said, Andreas, back earlier in the year, maybe May or so, I was speaking at a conference, and I said that a week in generative AI is like a quarter in the real world. And now my team makes fun of it, because it's true. You miss a couple of days and everything has turned on a dime. It's almost impossible to keep up because things are changing so quickly. And if that's true, if a week is like a quarter in the real world, then if you miss a month of what's happening in our environment, that's like a year of real marketplace activity. A lot can really change in a very short amount of time, and it requires constant vigilance.

Andreas Welsch:

Very true. So I'm seeing more and more of those gray hairs as well. Yeah, all over. So I'm taking a look at the chat and it's awesome to see. From India to Charlotte, North Carolina, Finland, Canada, all over the place. Indonesia, Dubai. Thank you so much for joining us. Really awesome to have you with us. And if you have any questions for Matt or myself, please feel free to put them in the chat as well. I'll take a look in a few more minutes to see what's on your mind and how we can help answer those. Now, we obviously want to talk a bit about the CAIO role. And I thought maybe it's good to level set, because maybe not everybody has come across a Chief AI Officer in their business or in their role before. So I was wondering, what does a Chief AI Officer do? How is it different from, say, the head of a center of excellence or other types of AI and data roles that you see? And to whom do you actually report in your company?

Matt Lewis:

So I'll start there first and then we'll work back. I report to our divisional president. I think in larger companies it's going to be fairly typical that the Chief AI Officer will report up to a divisional president, where you're managing a very large business and the divisional president has singular responsibility for a very large, 100 million, multiple hundred million, billion dollar P&L. Whereas in smaller companies, it might be the CEO that's the direct report. Before I was in this role, I was Chief Data Analytics Officer, and I think a lot of folks with a CDAO responsibility might report to a CTO or a chief digital officer. I do feel very strongly that the Chief AI Officer role needs to report up to the chief executive within the business, whether that's a divisional president or the CEO, for a number of reasons. The Chief AI Officer role is a strategic role. It's not really a functional expectation, and it has expectations with regard to staffing, support, and financial resources for standing up lines of business, but also for progressing the transformation of the organization at large. Under another part of the organization, you'll always be downstream from that consideration, and it will be difficult to advance the expectations of the business forward as a member of the leadership team. So it really does need a direct line. In terms of what the work looks like, I think it differs depending upon, to your point, whether it's part of a regulated industry, whether it's another vertical, whether the organization is wholly focused on, say, internal responsibilities or whether they have customer or client responsibilities, essentially what kind of line of business they're in. But I can tell you, now that I've been doing it for about six months, there are probably at least four main work streams that I'm responsible for.
The first of which is definitely upskilling, staying up to date and remaining on top of what's actually emerging in the space, which almost feels like a full-time job, because there's so much coming out that it's hard sometimes just to stay on top of what is emerging. Not just in terms of literature, which is the lay press and what's coming out from newsletters and blogs, but also what's coming out in the peer-reviewed press, journals, and academic settings, and also from other experts that I speak with in the generative AI space and the AI space broadly, to pressure-test and ensure that our approaches are rigorous and validated before we actually go to production. So that's a consideration by itself. Then the next part of my work is what I might call enablement, which is really a bit of evangelism: helping the space at large recognize what good looks like, what the standards, best practices, and expectations should be within this emergent space as things progress forward, as we think about where augmented intelligence, generative AI, and other considerations should be for stakeholders within the discipline, and how they should enact policies, protocols, and practices to ensure that they're doing what good looks like as things move forward. And then also a lot of education for those teams, for groups that are part of our organization, and for outside organizations. Sometimes delivered directly through professional societies; sometimes through a partnership we're developing with one of the big tech firms at present to deliver training to a wide group of people. And really thinking, from a competency perspective, about what skills are going to be necessary in the near future versus the far future to help people remain competent and future-proofed against what's coming. So that's really just the first piece of my work, the enablement piece.
The second part is related to what might be called governance, which is really thinking strategically: if the organization that we reside within now is exceptional at delivering this set of solutions and services and software, whatever we're doing as a group today, what will we need to be doing two years from now, five years from now, to win in the marketplace of ideas? What will that pivot look like as transformed by artificial intelligence? And as a result, what resources will we need? What staff capabilities will we need? What structures and systems and processes will we need with regard to things like compliance and ethics, which you and I were talking about before, and governance and provenance and all the rest, that are going to be necessary to ensure that our customers can trust us as we progress forward? A lot of that existed in the CDAO environment, but it was a bit nascent, perhaps. Now it's being nurtured and cultivated into a more robust consideration in many groups. And then the last piece, which is probably what I might call imagination, is really working with customers and clients to think about how they stand up generative AI and other artificial intelligence implementations within their environments, so that they can supercharge and really 10x what's possible in today's environment and start getting outcomes that are incrementally better than what's possible today. And the types of things that they want to do are varied. Everyone doesn't have the same goals, but they're all levered around the same types of considerations, where AI is juxtaposed on top of legacy or existing processes.

Andreas Welsch:

That's awesome. I really like how you describe your role and how it's multifaceted: on the one hand grounded in the technology and the data, but on the other hand with that multiplication and evangelism component to it as well, to help others understand what the opportunity is and what we can do with it. And I think we talked about that backstage a little bit. It's actually not there to replace you. It's there to help you get things done more quickly, to get insights that you haven't been able to get before, and so on. So I think that's a really important component, combining the two. Because to your point, and also from what I hear about where your role sits in the organization, if it's at eye level with your peers in different business functions, in different functional roles, I would imagine it's a much different conversation than being in a technology or an IT or data role.

Matt Lewis:

Yeah, that's exactly right. I couldn't have said it better myself. The way that I've expressed it, when people ask me internally, sometimes they'll say, you were doing a data analytics role, what's different about this new role? And I say, before, there might have been people that had a consideration of data engineering or the cloud or data lakes or storage or the rest. But those were somewhat tactical considerations. They could still do their work, and then they would work with us to do data analytics where it was appropriate. And that's great. But the work that we're doing now under the AI mandate is not tactical, it's transformational. Everything that exists within our company, and everything that exists globally, in society at large, will be transformed by AI in the days to come. And as a result, the CAIO role is to really catalyze that transformation, and having the role live under another group, or downstream somewhere, couldn't really serve that end. It really needs to be central.

Andreas Welsch:

Great point. And I think a great example of how this can work in a business. I'm taking a look at the chat here. Let me see. One of the questions is around evolving your career and transitioning, maybe even if you have a different background and you're not a data scientist or an AI engineer by trade. How can you move into more AI-type roles? Do you always need to learn Python and R and these kinds of languages to move into these roles? Or are there other opportunities to learn about the business or to bring in what you know about a certain industry?

Matt Lewis:

Yeah, I think that's a really great question, and we get asked it a lot, both internally and externally. It isn't necessary to have a deep tech background to transition either into a leadership role like this or into an AI specialist role, which I think a lot of people are interested in these days, where they're either dabbling in AI, talking a lot about AI out in the community, attending conferences, or interested in really doubling down in the space. I think one of the things that really differentiates folks that stay in a legacy role versus transitioning into more of a dedicated role is a bit of what might be called intellectual curiosity, and a bit of learning agility, to recognize the value that can be extracted from a novel role that is different from, but perhaps related to, the current position. That really hasn't changed in the 25 years I've been working. The only difference is that now, if you're in a role, whether you're an analyst, say in data analytics, or a strategist in the CSO suite, or in a digital role, you're able to start working with off-the-shelf generative applications that are available for license or that you can use on your phone. You can see what's possible in or around the edges of your work, both in terms of speeding time to decision, being more creative, or validating concepts that would have taken much longer to produce. And you can start imagining what your work would actually look like two, three months down the road. You can actually start doing some of that, standing it up as mini experiments and demonstrating to the business that the work is possible, validated, and can actually be transacted on. And I've seen a number of people demonstrate this.
Within our firm, as well as in other groups, people demonstrate that the next most likely work is actually the job they should be doing. And then they just transition from one to the next by showing that this new role, which is essentially emergent work that didn't exist pre-generative AI, is actually better for the business and better for them. And then the alignment happens. So it's less "hey, you're posting this role out there that I want" and more "hey, I think I could actually create more value for the business by doing these things that I'm already doing." And then the transition just happens. I've seen a lot of that recently, where people were an analyst or a designer or a strategist or something, and now they're in a role where they're still trying to figure out what the title is, but it's a higher-paid position with more value for the business and a better alignment with what they're looking to do. I think for the Chief AI Officer role specifically, I've seen people come that are really deep on the data science side, really deep on the dev side, or deep on either strategy or digital, and transition over. But it is challenging, I think, to do a role like this without some subject matter expertise in whatever business you're in. It doesn't really matter what the business is, but if you're at Coca-Cola, you have to understand that organization and business to some degree, because the corporate part of the role, working alongside the C-suite, has expectations that are somewhat divorced from the core technological considerations: the P&L, asking for headcount, making investment cases, and working alongside consultancies. You need to have an understanding of the business to be able to speak alongside your peers in that regard.
Having a deep understanding of the tech is helpful and important, for sure, but there are aspects of the role at this level that are about the business itself, within and across the organization.

Andreas Welsch:

Perfect. So it definitely sounds like there are good opportunities there, if you want to grow into that kind of a role, or if you want to do more with AI in a role that you currently have, or are looking to evolve your career and take it to the next level. Obviously, it's a lot easier now to get exposed to different kinds of AI and dabble in it to some extent without having to be a deep expert in different programming languages or other more detailed topics. Now, I'm wondering, and it goes along the same lines: how do you stay current on new topics in your role? What matters to you, especially because the role is on the one hand so strategic, but also so broad and so deep at the same time? How do you stay current, and what keeps you up at night?

Matt Lewis:

It's such a challenging question. That's probably the hardest question of all the things you'll ask me. It is so difficult to do. There's a quote that I often go back to. Kevin Kelly, who used to be at Wired magazine, came out with a short book of quotes about things he wished he had known earlier in life. One of his quotes is: "You can't control how much work you have. If you work on anything that's worth doing, the amount of work you have will never end. Because it's a worthy cause and you could work on it forever." The only thing you can control, he says, is your time. Your time is the only thing that is amenable to intervention from you as a professional in role. There's literally no end to the amount of AI news that will come out. You can literally sit at your desk 24/7, 365, and news will forever come. It took me a while to realize what that meant within this space, because I was working 70, 75-hour weeks from December through May, June, just to stay up to date and still deliver against my expectations. And it just wasn't tenable. I couldn't do that level of commitment and still deliver against my actual expectations. So in the summer, I switched the way that I was learning, and I kept a time clock, a little timer on my desk, where every day I jump into the AI pool and jump back out: 45 minutes every day. I don't do more than 45 minutes, but I time it. I start the clock and I stop the clock exactly when I'm doing anything that's direct learning. There are about eight AI newsletters on Substack and Beehiiv and a couple on LinkedIn that I look at directly. There are some podcasts, like yours, and a couple of others that I look at. There are some peer-reviewed publications I look at, some conferences, but I don't do more than 45 minutes. It doesn't matter where I am in a section; when my clock goes off, I'm done.
And whatever I learn, I save and annotate, and I map everything in the concept map that I use. So when I need to come back to what I've learned, I've already stored it for later use, which for me is mostly presentations that I give. I don't have to do that same work twice. I only dip in once, and when I use it later, it's already analyzed. That works for me, because before, I was spending three, four, five hours a day just trying to learn. And it wasn't efficient, because by hour two or three I was so exhausted from the prior 12 hours of working that by hour 14 or 15 of being awake, I just wasn't getting much out of it.

Andreas Welsch:

That's, I think, a really good recommendation for how you can structure it and still get to the essence of what you need to learn, and stay on top of it without burning out. I think Louise had a question earlier where she asked: how do you balance being an AI enthusiast and an expert, and trying to stay on top of these things, with your personal life, if it's all interwoven and there's such a flood of information? I really like how you've described it: timeboxing it and writing it down or saving it, so if you need to come back, you at least know you've read it already and you know where to find it. That's awesome.

Matt Lewis:

Yeah. I'll just clarify that comment and say, I'm not really an AI enthusiast at all, actually. I'd say that I'm an augmented intelligence enthusiast, in that I think there are a lot of annoying things in actual human life, both professional and personal, that augmented intelligence can make better for all of us as people. And we're right at the cusp, right at the precipice, of being able to make our work lives significantly less painful and a lot of our personal lives much more enjoyable through the use of AI. I call that augmented intelligence. We're just now becoming able to do that. I can do a lot of it because of my position, but the rest of the world is about to see what's possible because of that augmented approach. I favor the augmented intelligence side of my title much more, because I really think that if we do this right, it can take away the mundane, rote things that a lot of people are doing all the time, which are quite annoying and boring, and make the work more pleasurable. And for people that want to do things in their personal time, nights and weekends, whether it's music or art or sports or whatever it is, there are lots of ways to enjoy our free time much more richly using AI that people are just starting to wake up to.

Andreas Welsch:

I really love how you phrase and frame that, especially the part about augmenting our intelligence. Now, we're getting close to the end of the show, and I was wondering if you can summarize the three key takeaways for our audience today, and maybe talk a bit about how you see the Chief AI Officer influencing the business and these different stakeholders as part of that transformation. Top three key takeaways.

Matt Lewis:

The Chief AI Officer role is an executive-level position that is critical in ensuring the successful transformation of existing businesses into the kind of future, modern, digital businesses that will exist in the next three to five years. And it has to report to the senior executive within the business, whether that's a division president or a CEO, to be successful. Everyone's going to have a different set of work streams. I shared mine. Depending upon vertical and function, private or public, and what they do, it's going to be a different mix of things, but probably some educating, some standing up models, some implementing, and a lot of learning. Also, it's not a job for people that don't like talking to and interacting with people, because you're with a ton of actual humans every day, all the time, because you're trying to help people learn, and learn differently and work differently than they've worked in their entire professional career. And that takes a lot of diplomacy, a lot of discussion, and a lot of understanding of how to work and how to work better. The last thing I'd say is that augmented intelligence is not something that we invented. Gartner actually came up with this concept originally, about eight, nine years ago. It sees the human as the alpha and the omega of the AI picture: humans begin. They develop the software, they develop the model, they implement the plan with AI on board. They interpret the results that come back, they determine the recommendations that go forward to the user, and then they work with the system to figure out how to implement them, so that the results are incrementally better than either the AI or the human could achieve alone, for the mutual benefit of all involved. That's augmented intelligence.
It doesn't matter whether it's in commercializing novel molecules for pharmaceutical companies, like we do, or in helping to find a sample playlist of remixes from the 1970s in your personal life. It's still finding more value in the world.

Andreas Welsch:

Thank you so much for that summary and for hitting on the key points that we should be aware of. Matt, thank you so much for joining us and for sharing your expertise with us. And to those in the audience, thank you for being with us today. Again, a super global audience; I enjoy seeing how far of a reach we're able to create.

Matt Lewis:

All right. Thanks again for the time. Much appreciated. Thanks everyone in the audience.