What’s the BUZZ? — AI in Business

Creating A Human Future With AI (Guest: Brian Evergreen)

Andreas Welsch Season 2 Episode 8

In this episode, Brian Evergreen (Founder & CEO, The Profitable Good Company, and author of “Autonomous Transformation”) and Andreas Welsch discuss creating a human future with AI. Brian shares examples of human-machine collaboration and provides valuable tips for listeners looking to drive AI initiatives while keeping the impact on people in mind.

Key topics:
- Understand how AI drives autonomous transformation
- Assess your options from reformation to transformation
- Prepare for collaboration between humans and machines

Listen to the full episode to hear how you can:
- Focus on strategy & culture before technology
- Classify your project from digital reformation to autonomous transformation
- Manage organizations as social systems

Watch this episode on YouTube:
https://youtu.be/3ArPJEGyyfg


Support the show

***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.


Level up your AI Leadership game with the AI Leadership Handbook:
https://www.aileadershiphandbook.com

More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter

Andreas Welsch:

Today, we'll talk about creating a human future with AI and who better to talk to about it than someone who's focusing on doing just that. Brian Evergreen. Hey, Brian. Thanks for joining.

Brian Evergreen:

Hi Andreas. Thank you for having me. I'm happy to be here.

Andreas Welsch:

Awesome. Hey, why don't you tell our audience a little bit about yourself, who you are and what you do.

Brian Evergreen:

Thank you. Yes. So my name's Brian Evergreen. A little bit about me: I've had three careers so far. I was an internationally competitive chess player, and the highlight of that was playing Josh Waitzkin. He did beat me. I retired from that at 12, and then my next career for a while was in music. I studied music theory and composition, sang in Carnegie Hall, and made music with people all around the world. After that, I transitioned into working in AI in corporate America and globally. For the last few years I've been working with Microsoft, and before that Accenture. AI strategy is probably the easiest way to summarize that.

Andreas Welsch:

Awesome. And I saw that you also have a book coming out, right?

Brian Evergreen:

I do have a book coming out. About a year ago, I signed a book deal with Wiley around autonomous transformation, but starting with the question of how we can create a more human future with these technologies. The book comes out in just a couple of months, and I'm really looking forward to sharing it with the world.

Andreas Welsch:

That's awesome. I'm sure you've seen a lot, including in the research you've done for the book, so I'm really excited to have you on today to share more about how we can actually create that more human future with AI. Alright, Brian, before we get started, should we play a little game to kick things off? What do you say?

Brian Evergreen:

Let's do it.

Andreas Welsch:

Fantastic. Alright, so this game is called In Your Own Words. When I hit the buzzer, the wheels will start spinning. When they stop, you'll see a sentence, and I'd like you to answer with the first thing that comes to mind and why, in your own words. Okay? And to make it a little more interesting, you'll only have 60 seconds for your answer.

Brian Evergreen:

Okay.

Andreas Welsch:

Yep. So for those of you watching us live, also drop your answer and your why in the chat, and let's see what we can come up with here. Brian, are you ready for What's the BUZZ? I'm ready. Let's go. Okay, excellent. Then let's do this. If AI were a bird, what would it be? 60 seconds on the clock. Go.

Brian Evergreen:

If AI were a bird, I would say it would be like a baby eagle, maybe, in the sense that it still needs nurturing before it's really going to be able to fly. Also, the altitudes it can reach are higher than most other birds, and it's capable of many things. So I'd say eagle is the first thing that comes to mind for me on that. So I'm gonna cut your time in half.

Andreas Welsch:

Fantastic. Awesome. I live in the Philadelphia area, and the football team is the Philadelphia Eagles, so that resonates on so many levels. Great analogy. I'm always impressed by what my guests come up with for these questions. Alright, let's jump into our first question. We've been talking in the industry a lot about digital transformation for the last 10 years now, and some are doing it better than others. I'm wondering: what comes next after digital transformation?

Brian Evergreen:

Great question. So the first thing I'd say is that digital transformation is absolutely still a thing. When I talk about autonomous transformation, it's not necessarily a linear path where you move through digital transformation and then autonomous transformation. Autonomous transformation introduces a new phrase into the business lexicon. It's a new era of transformation for organizations and for society more broadly. When I started researching the book, with the goal of defining this era, I realized that even digital transformation itself actually needs a little bit more context. The true definition of transformation is to change the nature and the structure of something, improving it as you change it. But there are a lot of things we do in this digital transformation era that aren't transforming the nature of the process or the structure by which we deliver value to clients or the world. So I searched for the right word, and what I came up with was reformation, because the definition of reformation is to improve something without changing its structure. If we're going from an analog to a digital paradigm and vastly improving it, but we're not really changing the structure or the process by which it delivers value to clients or consumers, that's a digital reformation initiative. Digital transformation is when we're not only moving from analog to digital, but actually transforming. An example is what we saw with streaming and Netflix; we're still going through a transformation of how we interact with entertainment. And then there's autonomous reformation, which is what we're seeing right now with robots being leveraged in warehouses:
machines are coming in, and things are moving from a digital or analog paradigm to an autonomous paradigm to vastly improve the nature of the work being done and its efficiency. Autonomous transformation would mean we're actually transforming the way value is delivered to the world, to clients and consumers, with autonomous capabilities, moving from digital or analog to autonomous. And I don't think we've yet seen an actual full-blown autonomous transformation initiative hit the market.

Andreas Welsch:

Awesome. Thanks for sharing that and for setting that up. I'm keeping an eye on the chat, and I see Michael is asking: is AI a reformation or a transformation?

Brian Evergreen:

AI is neither reformation nor transformation; it can be leveraged for either. You can leverage AI to improve the nature of a process you're running and make it much more efficient without changing the actual process. That would be a reformational project. If you used AI to completely change the structure or the process by which you're delivering value to the world, that would be a transformational application of AI. So AI in this case is a tool, as opposed to being inherently tied to either reformation or transformation.

Andreas Welsch:

Awesome. Thanks for sharing that.

Brian Evergreen:

I realize the other thing I should mention is that both reformation and transformation assume that you're starting with something: you're reforming something, you're transforming something. Another important piece of this puzzle is acts of creation. Sometimes you're creating something that doesn't yet exist because you want to create that value. You're not starting with something and then transforming or reforming it; you're creating something net new. In that case, you could also leverage AI, and many are doing that to create new ways of delivering new types of products and applications that we haven't seen before.

Andreas Welsch:

That's awesome. And you're giving me a great segue to our next question, because with tools like generative AI at our disposal, we're able for the first time to also use AI to create information, whether it's new information or information composed in a new way. But I'm also wondering: in this overall autonomous transformation, with people at the center of it, what role does AI play in that kind of transformation, from your perspective?

Brian Evergreen:

Great question. So in autonomous transformation, human autonomy and machine autonomy are actually two sides of the same coin. That's the first thing I think is important to clarify, because when people hear the phrase autonomous transformation, they think of a process by which you're going to replace all human jobs with machine workers, which is absolutely not the case. It's more about the work hierarchy from repetitive to creative work: machines take on more of the repetitive work, which bumps humans up from operations toward something more like stewardship. In terms of your question about generative AI, I'd say it plays an important role in a couple of ways. One is in the actual teaching of machines. There's machine learning, which many or most are familiar with: the process by which machines learn from patterns in data. And then there's machine teaching, a newer paradigm: instead of focusing on how we teach machines to learn from patterns, it focuses on how we teach machines based on human expertise. In a lot of cases, that fundamentally involves reinforcement learning, with humans creating the boundaries and the guidelines. The way I often talk about that is chess. I used to teach kids chess after I'd retired from playing. Some other teachers taught children: if they do this, you do this; if they do that, you do that. Their students would memorize these different patterns of openings. My focus instead was the principles: you want to control the center, you want to develop your pieces from the back rank, you want to think about your pawn structure and the way it affects your overall position on the board.
The way that played out in tournaments is that the ones who'd memorized those patterns, if they ever ran into anything they weren't expecting, wouldn't know what to do, because they were outside the pattern they'd memorized. Whereas a student who had learned the principles and fundamentals of chess could walk up to any position they'd never played and think it through: this would be the next best move based on controlling the center of the board, or whatever the principle might be. In the same way, machine teaching shifts toward that paradigm. Instead of trying to teach machines through patterns in data (there are a lot of things for which we just don't have data), you instill principles, create a simulated environment, and leverage reinforcement learning so the machine can practice and learn the same way humans do. That enables a whole new swath of use cases that just weren't possible before. So, back to your question about generative AI and what role it plays in machine teaching: up until now, it's been fairly explicit work for machine teachers to create those boundaries. I think we're going to see generative AI imbued into the machine teaching process, allowing humans to distill their expertise in a more human-like way, with even less machine teaching expertise required once the systems have been designed. And then on the other end, I think generative AI will be very valuable for human-machine interaction. If you have machines working alongside humans in a factory, or machines navigating a public space, humans can walk up and talk to them, leveraging generative AI to answer questions and interact.
Or large language models in general, to answer those questions and understand the intent of what those humans are trying to communicate. Those are the two main ways I see it within the context of autonomous transformation.

Andreas Welsch:

That sounds really intriguing, especially the part about teaching machines, when for the last six or seven years we've been talking all about having machines learn from patterns in data. What does it look like on a more tactical or task-oriented basis? Is that what you would see as making sure we have a more human future, where we work with AI, alongside AI, where we teach AI? And what might that symbiosis look like?

Brian Evergreen:

Yeah, the short answer is yes. Here's how it works from a tactical perspective, with the example of a machine in a factory that's being leveraged to make something. The machine learning paradigm would ask: is there a pattern in the data from which an algorithm could learn to control that machine, say, an extruder in a manufacturing context? That's the paradigm most of us are familiar with, and a lot of times there's no data for it, or you have to instrument the machine and then collect months or years of data before you can even start. Machine teaching would say: we're going to create a simulation of this machine, either data-driven or physics-based. Then we're going to have a human who operates that machine say, "When I see this output, I know it's too viscous, or the color is wrong," or whatever the parameters and goals are, "and when I see that, this is the correction I make." That's not something you can necessarily capture in historical data the same way. When you combine those boundaries with a simulation, a machine can effectively run hundreds of thousands or millions of simulations to see the correlation between the principles the human has designed and the pure physics of the process. And then that can be leveraged. An example: Pepsi did this with their Cheetos extruder. They leveraged machine teaching so the extruder could run autonomously. The last I heard, they're still working on getting that into production, but the plan is that the workers can then move into more of a stewardship role. It's not going to replace the work they do; it will free them up from the many things they're trying to manage and monitor at all times. And I think we're going to see a lot of that.
Speaking of humans and machines, it's an important conversation right now, and I think there are three ways people are looking at things today. There's job protectionism, which says the current class of jobs, as they exist today, must be protected at all costs. That introduces an economic and long-term risk: if the nature of the work is transforming elsewhere, the whole organization may go down, and the market could take away all the jobs at that company, as opposed to just the classes of jobs that were under threat. On the other end, there's job fatalism, which says the machines are coming, get ready to bow to our machine overlords. I don't know that I even need to speak to why that's not the right way to think about things, or at least not the way I would recommend. Then in the middle there's job pragmatism, which says: the market is shifting and evolving, jobs are going to change, and some tasks, and collections of tasks that currently make up a job, will be made autonomous or automated by machines. From the time a leader signs off on an AI initiative, there's a length of time, months or years, before it's developed to the point where it could make redundant the human who was running those tasks. If in that time a leader hasn't created new opportunities for those workers, to retain their tribal knowledge and maintain the culture of the organization, that's a leadership failure. It actually has nothing to do with the technology; it has to do with the leader running that organization.
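[Editor's note] The machine-teaching loop described here can be sketched in a few lines of code. This is a toy illustration only: the `ExtruderSim` physics, the viscosity target, and the correction rule are all invented for this example, and a production system would use reinforcement learning inside the human-defined boundaries rather than applying the rule directly.

```python
class ExtruderSim:
    """Toy physics-based simulation: viscosity falls as temperature rises.
    The linear relationship is an invented stand-in for a real simulator."""

    def __init__(self, temp=150.0):
        self.temp = temp

    def viscosity(self):
        # Hypothetical relationship: higher temperature -> lower viscosity.
        return 500.0 - 2.0 * self.temp


def human_taught_correction(viscosity, target, tolerance=5.0):
    """The operator's principle, encoded: 'if it's too viscous, raise the
    heat; if it's too thin, lower it; otherwise hold steady.'"""
    if viscosity > target + tolerance:
        return +1.0   # raise temperature one notch
    if viscosity < target - tolerance:
        return -1.0   # lower temperature one notch
    return 0.0        # within tolerance: no correction


def practice(sim, target, steps=200):
    """Let the agent practice the human-taught principle against the
    simulation for many steps, the way an RL agent would run episodes."""
    for _ in range(steps):
        sim.temp += human_taught_correction(sim.viscosity(), target)
    return sim.viscosity()


final = practice(ExtruderSim(), target=120.0)
print(round(final, 1))
```

Running this drives the simulated viscosity into the operator's tolerance band around the target, which is the essence of the approach: the human contributes the principle, and the simulation provides unlimited safe practice.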

Andreas Welsch:

That's an interesting perspective you're sharing, and I think it puts the opportunities into perspective while keeping a more realistic and objective view: there are those three categories, and I like how you've described them. If I look at the chat real quick, I see a question from Ahmad that I would like to ask you: are we in the in-between times for AI, moving from conception to globalization? Or have we moved into globalization already?

Brian Evergreen:

I think it varies significantly. One thing going on right now, obviously, is generative AI and the fact that anyone, anywhere can access the power of these systems. Many of us who work in the field have been interacting with systems just as powerful, but ones that can't be made accessible to anybody in their living room or on their cell phone, and that have nothing to do with generative AI. In terms of AI as a broader topic, as Bill Gates and others have said, we're moving into an age of AI, so to speak. It's not just AI; there are a lot of adjacent technologies and enablers, but the blanket term AI has become, I think, the reigning name for what this era of transformation is based on. So in terms of conception versus globalization, I'd say it varies significantly by the specific application and set of use cases, but I don't think we're at the point where everyone is there. Only 13% of AI initiatives actually make it into production. As a society, we're still figuring this out; that's actually why I started the book, to try to answer that question. I have a strong notion of a few of the reasons AI initiatives don't make it into production, and I think it's less about geographic differences at this point and more about society, the cultures of our organizations, the strategies we've set, and the divide between technology leaders, business leaders, and industry leaders, who are often divided by their expertise. I think those are the main reasons we're struggling to see global adoption and to harness the economic potential these technologies provide.

Andreas Welsch:

Now you've mentioned earlier that it's also a leadership task to prepare your teams for that future, for that autonomous transformation, for more of that AI driven transformation. What would be your advice to AI leaders? How can they prepare for that transformation? How can they bring their teams along and bring their transformation to life?

Brian Evergreen:

So the first thing I'd say: I've started a company focused on connecting with leaders and sharing what I've learned to help them drive this kind of transformation across their organizations. Trying to condense that down to a couple of quick bullets: one is that the way we've been solving problems, the way we've been taught to solve problems as leaders and as a society, is based on industrial-revolution-era thinking, fundamentally based in a lot of cases on Taylorism, which I won't get into right now. There are a lot of problems with that tool set when we come to 21st-century complexities, and we're starting to see the cracks open between organizations that are evolving beyond that way of thinking and those that are still mired in it. If you start with bottom-up planning, you ask: what are all the problems we need to solve? Each team comes up with a list of problems, those problems are shared up with their leadership, and ultimately it bubbles up to the C-level, or whatever the highest level is, which then approves them: these are the problems we're going to solve. But solving problems, getting rid of what you don't want, doesn't necessarily move you toward what you do want. So if you're asking which problems you should solve with AI, there's actually an earlier problem. You know the famous quotation: if we had been solving problems in the era of horse carriages, we wouldn't have come up with the car. Instead of solving problems, instead of looking at trends in the market and reacting to them, be the thing in the market, the leader in the market, that others have to react to. Decide what future you want to build, and then work backwards instead of solving problems.
Solve for that future. Say: based on the future that I, and we as an organization, want to move into, what would have to be true for us to get there? You can create an entire map of initiatives and hypotheses that you could prove or disprove, advancing you toward that future, moving boldly forward instead of backing into the future while solving problems as they come up: the famous rearranging of chairs on the Titanic, right? So that's the first thing I'd say. The second is that a fundamental issue we're seeing is cultural. There are business leaders, technology leaders, and industry leaders. A hundred years ago, industry leaders had all the purchasing power and made the decisions. Fifty years ago, in the wake of the world wars and the rise of shareholder primacy, business leaders rose to the fore as key decision-makers with purchasing power. And in the last 30 years, we've seen IT or technology leaders go from the back room to the boardroom. So now you have three extremely intelligent groups of people with significant expertise and training, all believing they know the answer, how the money needs to be spent, and what the plan needs to be. Most organizations I've worked with are divided by that expertise instead of multiplied by it. I think that's the reason seven of the 10 most valuable companies in the world, public companies, I should say, were technology companies: in those companies, the industry leaders and the technology leaders are the same people, so they're not divided, because they speak the same language. So for leaders preparing for this AI future and this era of transformation, I would say it's actually a strategy question and a culture question before it ever gets to which technologies you should use to advance into that future.
It may or may not be generative AI. But you shouldn't start with the technology to figure out which direction you want to go with it. You don't plan a trip based on what you're going to pack. You decide where you're going, and then you pack what you need in order to enjoy your time there.

Andreas Welsch:

That's a great analogy, and it resonates deeply. Also what you shared about technology: it's a tool, a means to an end, but not the end itself. That's a theme we see throughout, and that others have shared as well, so it's great to hear you reinforce it. Now, because we're getting close to the end of the show, I wanted to ask if you can summarize the three takeaways for our audience today.

Brian Evergreen:

Absolutely. I'd say the first is that when it comes to autonomous transformation and AI and generative AI and all these technologies, the first thing leaders need to focus on is actually strategy and culture, not the technology. It's tempting to start with the technology because it's so interesting, so exciting, and so capable. But we need to start with strategy and culture, because if you've digitally transformed or autonomously transformed to the umpteenth degree, so that you can say you're the most transformed company in the world, but no one's buying your products, you're going to be out of business shortly, and the fact that you've transformed doesn't matter. So the first takeaway is to start with strategy and culture. The second is the introduction of this vocabulary: digital reformation, digital transformation, autonomous reformation, autonomous transformation, and acts of creation. At any given point, once you've charted the future you want to advance into, there might be a combination of different types of initiatives, aligned to those different categories, that will help you move there. Depending on the life cycle of your organization, you might just need a digital reformation initiative to solve a problem, because you have a profitability issue posing a risk to your organization's survival. It's not that one type is better than another; it depends on where your organization is. And the third major point is that the things I've shared so far are a few of the maybe 15 to 20 ways I've found, in the process of writing the book, that we need to break away from managing our organizations as machines, which is a relic of our industrial-era thinking. A lot of people have said to me over the years, while I was writing this book:
that AI stuff, that human stuff, is fluff; it's nice, and I would love to be able to do it, but I really need to get the value first, and then we can think about that later. What I'm positing is that, since strategy and culture are some of the main reasons organizations aren't seeing value from implementing these technologies or getting them into production, creating a more human future and a better human experience is not a nice-to-have outcome. It's actually a fundamental ingredient. Right now, talent can leave. Data scientists and the other types of technologists you need have options. If they're working on an initiative that's boring, or exploitative, or even just neutral, and they have the option to get paid as much or more to work on something they're passionate about, they'll take it. That's the shift we're seeing in organizations. And consumers respond to companies doing good in the world and creating a better human experience with the new products they introduce. I should call out that this third point is not something I first came up with; credit goes to Dr. Russell. We need to move from managing our organizations as mechanistic systems to managing them as social systems, from machines to networks of people. All the things entering the business lexicon and coming out in HBR articles about empathy and feelings don't fit in a machine paradigm. A lot of the most successful companies have already started to make that shift. I'm codifying it and proposing practical applications for how to actually make that shift if you're starting from this point.

Andreas Welsch:

Awesome. Thanks for the summary, and thank you so much for joining us today and sharing your expertise with us. It's really great to have you on and to learn from you about how we can create a more human future with AI.

Brian Evergreen:

Thank you. Thank you for having me. It's been a pleasure.
