What’s the BUZZ? — AI in Business

Create A Sustainable Future With Generative AI (Guest: Mark Minevich)

January 28, 2024 · Andreas Welsch · Season 3, Episode 3

In this episode, Mark Minevich (Chief Digital AI Strategist & Author) and Andreas Welsch discuss creating a sustainable future with AI. Mark shares his perspective on using artificial intelligence that serves our planet and provides valuable advice for listeners looking to align their AI initiatives with sustainable, ethical AI objectives.

Key topics:
- Address AI's longevity and opportunities in a world powered by AI
- Analyze sustainability in high-resource Generative AI environments
- Evaluate ethical and sustainable AI use and its measurable impacts
- Implement ethical AI in businesses beyond panels and policies

Listen to the full episode to hear how you can:
- Focus on sustainability beyond pure productivity gains on an individual business level
- Understand how your business’ AI projects contribute to achieving sustainable goals
- Ensure AI projects align with ethics and privacy guidelines
- Balance Generative AI driven innovation with power consumption


Watch this episode on YouTube:
https://youtu.be/R8rxQN60eGM


***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.


More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter

Andreas Welsch:

Today we'll talk about how you can build a sustainable future with AI, and who better to talk about it than someone who's on a global mission to help people do just that: Mark Minevich. Hey Mark, thank you so much for joining.

Mark Minevich:

Thank you so much, Andreas. It's a pleasure to be here. I'm just back from Davos, and it's all been about AI. I just hope it's going to be AI and sustainability very soon. But I'm excited about 2024.

Andreas Welsch:

Why don't you tell our audience a little bit about yourself, who you are and what you do? You already mentioned Davos, but I know you're involved in many different projects and organizations, so that's why I'm particularly excited to have you on the show.

Mark Minevich:

Andreas, thank you. I'm on a mission to save our planet using artificial intelligence. What else, if not AI? I have several roles. First, I'm the president of a private advisory boutique that helps governments improve their national AI strategies, and also enterprises in different countries across the world, including some very large Fortune 100 companies here in the United States, where we focus on improving their adoption and their governance structures. I'm also an investor in a number of companies, affiliated with a number of funds, and independently looking at opportunities where the next stars will arise. I sit on a number of foundations and communities. I'm the co-chair and a founder of the AI for the Planet Alliance, focusing on what the United Nations, UNESCO, and UNDP could do for companies across the world, including leveraging some of the most sophisticated technologies, which we'll talk about. I'm also the chairman of the executive committee of the AI for Good Foundation, and I sit on WorkingNation's future of work executive committee. I'm involved in a number of different groups across the United Nations, digital government and such, and various other governments. And I take thought leadership very seriously: I'm a Forbes columnist on AI, and I recently wrote this interesting book that you can see here in the back, Our Planet Powered by AI, and this new concept of planetary-centric AI. So many things going on in my life. I'm excited to be here.

Andreas Welsch:

Look, a lot of my guests typically talk about how you can implement AI in a business and what you should consider. But I can already see that this commitment, this mission, is something that runs deep and is deeply personal. So I think we'll have an exciting conversation.

Mark Minevich:

We will, yeah. It's absolutely personal, and it should be personal for every one of us. We have this technology; let's use it properly.

Andreas Welsch:

I like to play a little game and make it a little spontaneous, and I'll ask you a question. This game is called In Your Own Words. When I hit the buzzer (that's why it's called What's the BUZZ?), the wheels will start spinning, and when they stop, you'll see a sentence. I would love for you to answer with the first thing that comes to mind and why, in your own words. To make it a little more interesting, I'll give you 60 seconds for your answer. For those of you who are watching this live, also drop your answer and why in the chat. So, are you ready for What's the BUZZ?

Mark Minevich:

Sure.

Andreas Welsch:

Let's do this. So if AI were a color, what would it be and why? 60 seconds on the clock. Go!

Mark Minevich:

For me, the answer would be green.

Andreas Welsch:

Wonderful. Why green?

Mark Minevich:

Because I think AI should be sustainable and I equate our planet as green.

Andreas Welsch:

Great answer. Thanks for being spontaneous and taking that on the fly. Now look, I think there was so much doom and gloom about AI last year, to the point where it's splitting long-time academics on the topic, and it's definitely dividing people to some extent as well. But on the other hand, we also know that AI is here and it's here to stay. So I'm curious: what are the real opportunities if our planet is being powered by AI? What are the opportunities that you are seeing?

Mark Minevich:

Yes. Again, thank you for the opportunity to be here. Before answering this question, I want to give the audience a premise: why is it important to discuss this right now? Today, in January 2024, after this massive experience we had at the World Economic Forum in Davos, which I was fortunate to attend, I think what's happening is that corporate America and big tech are ferociously moving AI toward optimization. Why? Because they care about efficiency. They care about productivity, and really very little, if anything, else. Corporate leaders want to make the world efficient, and this is what I hear. But we need the world to be much more sustainable. Our planet is suffering. It's burning. And we need a world where humans get more time to do what they need to do. They have to feel safe, they have to feel loved, and they have to feel that they're bringing value.

So I feel AI could be a tool to solve some of the greatest problems and challenges the world faces, especially through innovation. AI could be used beneficially in the areas we just discussed: climate, healthcare, infrastructure, energy. Some corporations do get involved, I want to give them that, but not as much as we want. I want corporations to lead this and make it a priority. 95% of AI is very application driven right now. What you're seeing on your show is very solutions driven, and it is focused on efficiency. I want it to be focused on inclusiveness and sustainability, and that's why I've written this book.

When ChatGPT was released in November 2022, Andreas, we all remember that moment. It sparked so much interest in artificial intelligence. And the world is facing threats now, floods and fires, and everything that is discussed at the United Nations is just unbelievable. We're at a very pivotal moment, what I call an inflection point, and it's crucial for the public to understand our capabilities and limitations. People are fantasizing right now about whether AI will replace human beings. What's important, what's not important? What's the combination of mathematics and software? How do the computers actually reason? How do they understand? All of this is important, but there has to be a growing shift towards eco-friendly and sustainable technologies, and a shift towards more customized and tailored AI solutions. This is the emergence of what many people at the United Nations call planetary AI, which I fully support.

I think that AI definitely drives efficiency and sustainability, but it also helps with humanity's most significant challenges. It cannot be stopped, it cannot be paused. It cannot be controlled by any community, by any movement, by any government. It is going to be an essential partner for the foreseeable future. It does cast some dark shadows and raise existential concerns that the world must deal with. Having said all of this, I think it's important now to discuss the specific things that I am promoting and how they are going to bring value to the planet and to the world. AI's role in a sustainable future lies in its ability to look at so much data, trillions and trillions of data points. It's optimizing processes. It's able to make predictive insights across all sectors. And this is where energy management, healthcare, agriculture, conservation, and disaster management all come in to make our planet healthier, not just human beings. Remember what I said: it's not just human beings, it's the planet and our longevity.
So let's talk about optimizing energy use and emissions control. AI algorithms are already optimizing energy consumption in transportation and in buildings, significantly reducing carbon emissions. Predictive AI models are already forecasting carbon footprints, allowing for better and more efficient carbon management. Then we go into another area which is also very important: energy management. Google, for example, again a large company, has set out on a mission using AI technology, specifically DeepMind, to optimize energy use in data center cooling systems, with substantial energy savings. We're going to talk about data centers.

We have to improve our public health and safety. This is one of the reasons why I did this. I myself suffered tremendously under COVID; I almost didn't come back to life when I was struck with COVID in Miami in December 2020. So I am saying to myself, we've got to do something better for public health and safety: disease spread and pollution monitoring. AI has capabilities today to analyze pollution levels and model disease spread patterns, which is particularly valuable in densely populated regions. This data, which comes sometimes from governments and sometimes from technology companies, can inform public health initiatives and environmental policies to mitigate the health risks of pollution and pandemics. So we've got to focus on predictive healthcare. I see a lot of it in places like Sloan Kettering in New York; some of my friends are instituting that. Those are AI models that analyze massive amounts of patient data, offering insights for preemptive healthcare and shifting the focus from treating diseases to preventing them.

Then there is an area which is so important: agriculture. We need agriculture around the world. We need farmers and sustainable agriculture. AI technology is already used in crop monitoring, yield prediction, and pest control, optimizing agricultural practices to maximize yields and minimize environmental impact. Very important: natural disaster prediction and management. We've had natural disasters in Hawaii, California, New York. It's going to go on and on, and we need a better way to manage them and make predictions. AI has the ability to monitor seismic activity and predict natural disasters like tsunamis and earthquakes, improving prediction capabilities and early warning systems, potentially saving thousands of lives, our assets, our houses, our way of life.

We spoke about climate change. This is the biggest issue of them all, heavily discussed at the WEF. We need to spend more time on mitigation and adaptation, specifically on carbon capture and renewable energy. AI is assisting in discovering new materials for efficient carbon capture and better enabling the integration of renewable energy sources into what we call the smart power grid. Weather forecasts are not so accurate right now; AI could improve their accuracy, which is crucial in preparing for extreme weather events. And of course, enhancing education. Our education is also part of our health; it's part of our planetary experience, and we can't remove that. Education, especially in remote areas across the world, is very limited, and AI could bridge that gap, helping us better understand and tackle environmental issues, and also diversify education.
So the bottom line is that we are able to use AI and climate action to improve our diversity, our environmental sustainability, our health, and our societal resilience. I've given many illustrations of what is possible, and I think that by focusing on greenhouse gas emissions, disease spread and pollution monitoring, areas like food security, which I haven't mentioned but which is very important, optimizing crop yields, agriculture, energy optimization, biodiversity, carbon capture, energy management, flood monitoring, and food monitoring, all of those critically important issues, AI has a major role to play. Not only in analyzing and predicting, but in 2024 we are moving from insights and information to action. We want to take all of this information from analysis to actual action worldwide and make things happen.

Andreas Welsch:

Thank you for sharing all that. There's so much information there, even I want to go back and listen to it again and hear all the different examples you mentioned. And I think what you mentioned even before that: this aspect of sustainability, of using AI to help our planet and keep it sustainable, is the overarching vision and mission, and then it's up to individual companies to play their part in it, in removing and reducing greenhouse gas emissions, in improving crop yields, in improving energy management, and so on. That's something that resonates deeply with me, and also seeing, like I said, what that means for individual companies and how they then contribute to it. Now, obviously a big part of your work is to make sure AI is used ethically and sustainably, but I'm wondering: for whom, and how, can leaders actually measure the impact as they go down that path of using AI to increase sustainability?

Mark Minevich:

That's a very good question. Let's first define who the leaders are. It's the United Nations, it's the CEOs of companies, it's foundations. Leaders are all of us, the people using AI, and we need to measure AI in our homes as well. Don't think of leaders as just folks in government or the White House; we are all leaders of our own lives, and we need to understand what impact AI is bringing. For me personally, Andreas, it has to go through key areas which I defined and talked about in my book: privacy, bias, energy consumption, and clear alignment with the United Nations SDGs. Those are the key factors that, for me, create the measure of success.

Let's go back to privacy. AI needs to respect privacy by default and by design, and it doesn't do that right now, especially not on the generative AI side. No matter which LLM you're using, OpenAI or whatever, the AI is not respecting privacy by default and by design. We've got to change this. We have to make sure that we minimize data collection and usage. We have to implement responsible data handling practices to maintain public trust, and we've got to establish clear guidelines, not only guardrails, not only regulations. Regulations alone do not solve the problem, and this will help privacy protection. And that's how we're going to measure whether we are creating success with privacy and whether we are impacting consumers and companies: do they feel safe?

Bias is another issue I mentioned. Again, the majority of AI scientists around the world are white males. Not that they're doing it purposely, but there is bias that comes out in the way they design their data and models. This came across vividly when I was in Davos, for example, in a session. It's very difficult to find women, and I've been discussing this with women leaders, saying we've got to change this equation. And we also need more minorities in data science. But coming back to it: the bias comes from the training data, and it affects trustworthiness, fairness, and reliability, all the things I've written about. I did a whole report with my colleagues at the World Economic Forum on this topic. So what do we do about it? How do we measure, and how do we know we have impact? We have to detect and measure the extent of bias at every level of maturity: when the data science is being built, when it is put into production, when it is being used. In every single area of the models, we have to detect and measure how extensive the bias is and where it is coming from. We have to utilize different datasets; I think there are benchmarks now, like BBQ and BOLD, for better bias identification. So we have to deploy different types of technologies to identify those biases, and we have to implement corrective and mitigative measures. It's not enough just to say, okay, we identified it. How do we mitigate it, how do we remove it, and what's the plan for doing so?

I also want to talk about measurement in everyday life, not only in enterprises and companies, and that is the dual role of AI in energy consumption. AI training is extremely electricity intensive. We have to figure out mitigation techniques to optimize energy usage in all sectors, not only in data centers for technology companies, but in software, technology, and communications, improving efficiency and efficacy by 40% for sustainability. I think it's possible to hit those numbers, and even if we improve by less,
we could still do the job. And we should use some of the tools out there; I want to mention for your audience, for example, the software carbon accounting tools available from every major cloud provider. Finally, to your question: climate and the SDGs are a big area for me. Not only is the United Nations doing a fantastic job on measurement, but I think AI has a major role to play in sustainability. We have to monitor environmental resource usage and emissions, and track how AI helps lower emissions as part of climate change commitments. AI can allow us to do that. And we should evaluate AI's impact comprehensively, not just at level one: we have to look at emissions, and we have to look at production costs and vendor costs. So we have to look at a very comprehensive package here. I think it's really privacy, bias, energy, and alignment with the SDGs, plus robust management, robust mitigation, and a commitment to transparency, accountability, explainability, and fairness.
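For listeners who want to try the kind of carbon accounting tooling Mark alludes to, here is a minimal sketch in Python using the open-source CodeCarbon library. The choice of library and the placeholder training function are assumptions for illustration; the episode does not name a specific tool.

from codecarbon import EmissionsTracker  # pip install codecarbon

def train_model():
    # Placeholder for an actual training loop (hypothetical workload).
    return sum(i * i for i in range(10_000_000))

tracker = EmissionsTracker(project_name="sustainability-demo")  # hypothetical project name
tracker.start()
try:
    train_model()
finally:
    emissions_kg = tracker.stop()  # estimated kilograms of CO2-equivalent
    print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")

Cloud providers also publish their own carbon dashboards; a tracker like this is simply one way to attach a number to an individual training or inference job.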

Andreas Welsch:

Thank you for sharing that. Again, there are multiple opportunities for businesses looking to incorporate AI, and also for those building AI-based products, to make sure they are sustainable, or even more sustainable than they have been. Now, maybe switching gears a little bit for our last question. You might run into people who say, hey, sustainability is this lofty goal, and sure, whatever we do has to be sustainable, or more sustainable. But I wonder: how can you make it concrete for an individual business? What can a business do? How can business leaders use AI ethically? Beyond ethics panels and policies, what are you seeing that makes it really concrete and gives people something to take away?

Mark Minevich:

Thank you very much. First of all, I agree with you: we can't put everything on regulations alone. I think we need to establish very strong governing bodies across governments that have expertise in AI and can analyze AI from different angles, not just what traditional agencies like the SEC and FDA are comfortable with. We have to do better. Companies must continue to promote responsible and ethical AI and focus from their angle. Regulation is already very active in Europe, in the European Union, but I'm not sure those regulations are producing the results that consumers and everybody else are looking for; they're very restrictive. I think what President Biden has done is look at balancing innovation with specific standards for the federal government.

Now, having said that, I want to answer your question: what can be done? I think it's very important to focus on setting up responsible AI and ethics committees, identifying and mitigating the risks of AI products developed in-house or by third parties. It's important to have a Chief AI Officer or Chief AI Ethics Officer involved, and committees as well, to make sure there is balance. That's number one. Then, education and training of the workforce. That's a big area that we need to really focus on and spend time on right now: providing the level of education that consumers, companies, and the public in general need, and educating them in responsible and ethical AI. I think that's very important.

We spoke about policies, and sometimes you do need policies that are actionable, made at the local, national, and international level. I just did a whole Forbes article on the incredible work done by the UN agency UNICRI, headed by Irakli Beridze. They have shown that they are protecting the safety of children using AI, not only with policies but with real, actionable things they have done with the government of the UAE. So I think that's important: we can't just come out with policies, we have to adopt them and make them work from a governance perspective. That includes a clear understanding of AI use cases and how they are being used, ensuring AI reflects moral values, and enabling human accountability and understanding.

More important than anything else, we have to engage the stakeholders. We're not doing enough to engage CEOs beyond the technology companies. It's great to engage Sam Altman and Satya Nadella, but they already know what they're doing and what they're not doing. I think it's important to engage the rest of the industry and its stakeholders, making sure they understand what we're talking about, making sure the language is clear and the high-level ethical considerations are on the table. And as we discussed in the last question, we have to continuously monitor the impact of this engagement. Is it really producing impact or not? And if we are doing something right, let's incentivize that good behavior. Let's give some brownie points to the leaders and to the public that are engaged in establishing those ethical behaviors within a company. Those are the strategies I pointed out in my book, and I think they show very clearly how to address potential misuse, bias, privacy, transparency, and so on.

Andreas Welsch:

I think you've given us a very broad but also very concrete overview: first of all, why we need to think about sustainability more, and also that, whether it's our development processes or even the ideas we start with upfront, the responsibility doesn't just sit at the governmental level, but also at the industry level and at the individual business level, all the way down to those of us working on AI and with AI.

Mark Minevich:

Andreas, I just want to point out one more thing for two minutes, because I think it's on all of our minds today. With everything going on with generative AI, and you and I discussed it before, it is consuming a lot of energy and a lot of power. It is concerning, and it should be concerning to all leaders. To give you a statistic: models like ChatGPT consume much more power than Google search, on the order of the power consumption of a country like Ireland. Can you imagine? Put this in perspective: ChatGPT, OpenAI's model, is using as much energy as the country of Ireland. And by 2027, the servers and data centers processing this, the Nvidia machines and the like, are projected to use up to 134 terawatt-hours annually. That sometimes parallels the energy usage of entire nations. This is really challenging for us to observe. Right now, today, we're already seeing data center racks drawing 300 kilowatts. That's a tremendous amount of power. And by the estimates I've seen in the reports, and this was discussed at the World Economic Forum, 80% of data center power will be consumed by AI over the next 15 years. 80%. Access to power is a key differentiator, in my opinion.

We have to figure out how to lower that consumption, and by what methods. Of course, look at every company developing data centers, just look at Nvidia; they're skyrocketing in their earnings calls. That's fantastic, that's great news for them. And big money right now is going into liquid cooling technology. Investors like KKR and Bosch and everybody else are investing very heavily in high-powered, high-heat data centers. They're driving up demand, because you need cooling technologies for all of this, and that's where sustainability comes in.

But more important is the long-term strategy, in my opinion, and this has been written up by many analysts, from investment bankers on down: in order to cope with this demand, we have to focus much more on renewable energy. There has to be an even deeper understanding and a deeper call to action. Renewable energy has to increase at all levels. The growth of generative AI will only accelerate demand, so we have to focus on accelerating renewable energy. Smart grids, very smart, AI-driven grids, will give us the opportunity to build smart renewable energy and smart power to make sure we can power those power-hungry generative AI systems. And we should also think about nuclear energy; I've seen some reports, and we should not dismiss it. The future of sustainable AI operations, I think, will heavily depend on nuclear energy. I know we sometimes want to be dismissive, but there is already research going on at fusion-tech companies that are building new nuclear approaches to manage this kind of energy consumption. And who do you think invested in many of them? Sam Altman, because he realizes where this is going.

So I wanted to bring this to the attention of your audience: we have to be very mindful, and we have to support environmental, social, and governance (ESG) initiatives right now. As much potential as generative AI brings, it consumes a lot of power, and we have to really think about mitigation and how we use it so we can lower that consumption, hopefully by 20-25% in the next few years.
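As a quick back-of-the-envelope check on the figures Mark quotes (the numbers are his; only the unit conversion below is added), 134 terawatt-hours per year corresponds to roughly 15 gigawatts of continuous draw, or on the order of 51,000 racks running flat out at 300 kilowatts:

# Unit conversion only; the input figures are the ones quoted in the episode.
TWH_PER_YEAR = 134            # projected AI server consumption by 2027
HOURS_PER_YEAR = 365 * 24     # 8,760 hours

avg_power_gw = TWH_PER_YEAR * 1_000 / HOURS_PER_YEAR     # 1 TWh = 1,000 GWh
print(f"Average continuous draw: {avg_power_gw:.1f} GW")  # about 15.3 GW

RACK_KW = 300                 # per-rack draw Mark mentions
racks = avg_power_gw * 1_000_000 / RACK_KW                # GW -> kW
print(f"Equivalent 300 kW racks at full load: {racks:,.0f}")  # about 51,000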

Andreas Welsch:

Yeah, thanks for adding that. Following some of the news on these topics, my understanding is that as models are getting smaller and more tailored, that might set us on that path. But Mark, we're getting close to the end of the show today, and I really want to thank you so much for joining us and sharing your expertise. It was a pleasure having you on the show, and thanks to those of you in the audience for learning with us. Mark, thanks again.

Mark Minevich:

Thank you so much. Really a pleasure to be here.