What’s the BUZZ? — AI in Business

Scaling Enterprise AI Products For Business Impact (Guest: Srujana Kaddevarmuth)

Andreas Welsch Season 3 Episode 15

In this episode, Srujana Kaddevarmuth (Senior AI CoE Leader) and Andreas Welsch discuss scaling enterprise AI products across the business. Srujana shares her insights on applying a product mindset to building AI capabilities and provides valuable advice for listeners looking to scale their AI applications across the enterprise.

Key topics:
- Productize AI capabilities through a holistic approach from idea to improvement
- Raise awareness of the risks and nuances of deploying AI in the enterprise
- Decide when to use Machine Learning and Generative AI — individually and together
- Increase diversity on your AI product team for more inclusive results

Listen to the full episode to hear how you can:
- Apply a product mindset to data democratization and AI productization in your organization
- Raise awareness for the risks associated with productizing AI, ranging from biases to permitted vs. prohibited use cases
- Celebrate and intentionally build diversity in your organization to create products that are useful and acceptable to different sections of society

Watch this episode on YouTube:
https://youtu.be/goysDB00D9s

Questions or suggestions? Send me a Text Message.

Support the show

***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.


Level up your AI Leadership game with the AI Leadership Handbook:
https://www.aileadershiphandbook.com

More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter

Andreas Welsch:

Today we'll talk about scaling enterprise AI products across the business. And who better to talk about it than someone who's doing just that. Srujana. Hey, Srujana, thank you so much for joining.

Srujana Kaddevarmuth:

Hi, Andreas, excited to be here.

Andreas Welsch:

Wonderful. Why don't you tell our audience a little bit about yourself, who you are and what you do?

Srujana Kaddevarmuth:

Sure. Happy to share. Hello everyone. My name is Srujana Kaddevarmuth. I lead the AI portfolio for a Fortune One company, working out of the Bay Area here in Silicon Valley. I enjoy being associated with causes that use AI for good and being an active advocate. I serve on the board of the United Nations Association and focus on using AI to attain the United Nations Sustainable Development Goals. Looking forward to this conversation.

Andreas Welsch:

Wonderful. It's always great having leaders on the show who not only use AI in business, but also who have a deep passion for using it for good. So thank you for that. Excited about our conversation today. Should we play a little game to kick things off?

Srujana Kaddevarmuth:

Sure.

Andreas Welsch:

Wonderful. All right. Let's see. This game is called In Your Own Words, and when I hit the buzzer, the wheels will start spinning. When they stop, you'll see a sentence, and I'd like you to answer with the first thing that comes to mind and why. In your own words. And to make it a little more interesting, you'll only have 60 seconds for your answer. And for those of you watching this live, also drop your answer in the chat and why. Always curious to see what you come up with. Srujana, are you ready for What's the BUZZ? Okay, then here we go. If AI were a song, what would it be? 60 seconds on the clock.

Srujana Kaddevarmuth:

I think it's going to be rock and roll.

Andreas Welsch:

Okay why that?

Srujana Kaddevarmuth:

Because you know how the industry is evolving, right? I don't think there could be a better way to describe that. It's very interesting. There's a lot of hype and there are practical challenges in terms of deployment, and the industry is evolving at a lightning pace that's very difficult to catch up with. So I think that's the best way to describe it.

Andreas Welsch:

So rocking and rolling and lots of moving and shaking. Perfect. Wonderful. So why don't we get to the actual topic we want to talk about today, and that's productizing AI, right? For probably the last two years or so, there's been a change in mindset and perception that, hey, AI isn't just a project. It's not just a one-time thing; we should actually think about it as products. And that's why I'm really excited to have this conversation with you, because I know you're approaching it the same way. So I'm curious if you can talk a bit about that for our audience. What does productizing AI actually mean, and how do you do it?

Srujana Kaddevarmuth:

Yeah, absolutely, Andreas. We are in an age of data deluge, as we all agree, wherein 1.1 trillion megabytes of data is getting consumed and generated every single day. Around 70 percent of the world's GDP is undergoing digitization as we speak. When such a humongous amount of heterogeneous, non-intuitive, messy data is generated, it becomes the responsibility of everyone involved to use it in a safe and effective manner. And organizations have an added incentive to use this data to drive value for large enterprises. So they are focusing on two important concepts here. The first is data democratization, and the second is AI productization. Productizing AI is the journey of translating the insights generated from exploratory analysis into scalable models that can power products. It means developing and deploying these algorithms in the production environment, and doing that in an efficient and effective manner, because there are multiple benefits organizations can reap by productizing AI. The first of the top three benefits is human capital efficiency, right? It helps the organization utilize scarce AI resources for niche AI work, freeing them from mundane, repetitive tasks, thereby reducing attrition and keeping them happy. The second aspect is ensuring that we not only use AI effectively but also foster innovation within the organization. Just imagine: a lot of bandwidth of these AI experts gets freed up because many processes get automated. Now they can use that bandwidth to run mass experiments and drive innovative thinking, thereby fostering innovation. And the third aspect is standardization of technology capabilities. Just imagine a large organization operating across different geographies and different channels.
Building AI as a product is going to help us standardize these technology capabilities across those channels, which helps us build and monitor these capabilities in a responsible and effective manner. So I think there are a lot of benefits and perks for organizations to productize AI, and that's why they are doing it.

Andreas Welsch:

Yeah. What's interesting to me is that the first thing you mentioned is data, right? It all comes back to data at the end of the day. And I think it's an interesting realization in the industry over even the last 18 to 24 months that, yes, there are things like generative AI, but if you want to get better results, concrete results, relevant results, you still need data. You need your data. That's what makes it unique.

Srujana Kaddevarmuth:

Absolutely.

Andreas Welsch:

Now, when you talk about productizing it, what does it mean from your point of view in terms of process, mindset, and resources when you work with a business that's, quote-unquote, just looking to address a business problem, to, I don't know, be more efficient, be more effective, save a couple of percentage points or fractions of a percent here or there? How do you do that? How do you run that process?

Srujana Kaddevarmuth:

Yeah. So just imagine: when you're talking about large enterprises, there are going to be multiple use cases where we can use AI, right? However, we cannot build an algorithm or a model separately for each use case. It's not efficient at all, and it's not scalable. So productizing AI helps overcome those challenges and helps us expedite value adoption across the organization. But there are also certain nuances the business is considerate of and concerned about when deploying these algorithms in the production environment and productizing AI, right? For example, one of the challenges productizing AI as a concept faces is a lack of resource investment and leadership championing it. Why? Because in many scenarios the projects succeed, but there are certain real-world scenarios where the projects or initiatives fail. When that happens, the best-case scenario is that we have only built a proof of concept. The worst-case scenario is that we have built the entire product end to end and the results are not relevant, not good enough. That erodes the credibility of the data science or AI teams with management and leadership, thereby blocking resource investments. Similarly, the second challenge could be production systems malfunctioning, right? Machine learning algorithms get smarter over time if they are connected to a constant feed of new data. If they are not, they degrade in quality, and that happens pretty quickly. So if we don't deploy them efficiently in production systems, and if we don't understand that these algorithms are very different from software applications, it's going to be a challenge.
One of the mistakes many people in the industry make is thinking of ML algorithms as similar to software applications. When that happens, they miss the nuances of how these algorithms have to be deployed, and that poses a risk. And the third risk the industry faces when productizing AI is specifically around legal implications, right? AI algorithms are a statistical representation of the world we live in. They come up with outcomes based on the data they have been trained on. Sometimes that data can carry certain kinds of biases, and by productizing it, we are institutionalizing those biases across the organization, which is a challenging and bad scenario. It can lead to legal and reputational risks and damages for the organization, and it should be mitigated in an effective manner. So yes, productizing AI helps, there are a lot of benefits, and we can do it efficiently, which is why companies are doing it, but there are also certain challenges and risks we need to be aware of.

Andreas Welsch:

That makes sense. So, folks, for those of you in the audience, if you have a question for Srujana, please put it in the chat, and we'll take a look in a minute or two and pick some of those up. Now, you've already started talking about some of those risks and nuances, but what are some of the additional things you're seeing when it comes to deploying AI in the enterprise at scale, and how can you mitigate them? I think you already talked about resources as one constraint, but what are some of the other ones you're seeing?

Srujana Kaddevarmuth:

So in terms of mitigating the risks: we spoke about some of the risks associated with productizing AI, but if we identify those risks early in the life cycle, it will be easier for us to mitigate them. And we can mitigate them by focusing primarily on three principles, right? The first is model flexibility. The perfect model or algorithm does not exist in this universe. However, we can build algorithms that are flexible enough to consider certain real-life scenarios and accommodate them without having to undergo a major architectural redesign, because the technology is evolving so rapidly. Whatever was relevant when I started my career is not relevant today, and whatever is relevant today will not be relevant tomorrow. So the algorithms need to be flexible enough to adapt to these technological changes so that we don't have to do a tech stack revision, right? That's one. The second aspect is computational efficiency. We can mitigate the risks of productizing AI by focusing on computational efficiency, functional usage, and runtime, because one of the mistakes people make is focusing too much on the accuracy of the models. Accuracy is important in the proof-of-concept stage, but in production deployment it's equally important to focus on functional usage, computational efficiency, and runtime, the lack of which leads to risk. So setting the right precedence, especially when deploying algorithms in the production environment, is going to be super important. And the third aspect is building model wrappers. As I mentioned, machine learning algorithms operate very differently from software applications. They need to be connected to a constant feed of new data, and that has to be done by building strong and effective data wrappers or model wrappers.
And if that does not happen, the projects and initiatives are at risk of failing, primarily because any deviation caused by a broken data feed is much harder to detect than an outright application failure. So focusing on these aspects and mitigating risks along the way is going to help us productize AI in an efficient and effective manner.
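The failure mode described here, silent degradation from a broken or drifting data feed, can be caught with a simple distribution check on model inputs or scores. A minimal sketch in Python using the population stability index, one common drift metric; the specific technique and thresholds are illustrative, not something named in the episode:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare two score distributions. A PSI above ~0.2 is a common
    rule-of-thumb threshold for significant drift."""
    # Bin both samples on the range of the expected (deployment-time) scores
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the proportions to avoid log(0) on empty bins
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(42)
baseline = rng.normal(0.0, 1.0, 10_000)  # scores when the model went live
shifted = rng.normal(0.5, 1.0, 10_000)   # scores after the feed changed

same = population_stability_index(baseline, baseline[:5000])  # low: no drift
drift = population_stability_index(baseline, shifted)         # high: drift
```

A check like this can run on every scoring batch, so a broken feed raises an alert instead of quietly degrading predictions the way an outright application crash never would.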

Andreas Welsch:

Thanks for sharing that. I'm curious: when you say you need a wrapper, I certainly understand the part about the data needing to be fresh, that there has to be a stream of new data. But what do you mean by a wrapper around the model? What does it mean? What does it look like?

Srujana Kaddevarmuth:

So it's more about connecting the data feeds in and out, right? On the input side, we look at eclectic data sources, which could be first-party or third-party data sources, and connect those to the model. On the output side, we take the outcomes of the model and create a feedback loop so that the algorithms can constantly learn as well. That happens when we create the data wrappers on both the input and the output. There would also be an inferencing layer, and some of the outcomes would go to a visualization layer. So creating those effective wrappers across the overall value chain, or across the ecosystem, is going to be important, so that we have robust algorithms and robust models and the data flow stays current and efficient across the ecosystem.
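One way to picture such a wrapper is as a thin layer that validates the incoming feed, runs inference, and captures outcomes for the feedback loop. A minimal Python sketch; the class and field names are illustrative, not taken from any specific framework or from the episode:

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class ModelWrapper:
    """Thin layer around a model: validates inputs, runs inference,
    and captures outcomes for a feedback/retraining loop."""
    model: Callable[[dict], Any]  # any callable that scores one record
    required_fields: tuple = ("customer_id", "features")
    feedback_log: list = field(default_factory=list)

    def predict(self, record: dict) -> Any:
        # Input side: fail fast on a broken feed instead of scoring silently
        missing = [f for f in self.required_fields if f not in record]
        if missing:
            raise ValueError(f"broken data feed, missing fields: {missing}")
        return self.model(record)

    def record_outcome(self, record: dict, prediction: Any, actual: Any) -> None:
        # Output side: store (input, prediction, actual) triples so the
        # model can keep learning from fresh data later
        self.feedback_log.append(
            {"record": record, "pred": prediction, "actual": actual}
        )

# Toy model: the score is just the sum of the features
wrapper = ModelWrapper(model=lambda r: sum(r["features"]))
rec = {"customer_id": 1, "features": [0.2, 0.3]}
pred = wrapper.predict(rec)
wrapper.record_outcome(rec, pred, actual=0.6)
```

The same shape extends naturally with an inferencing layer and a hand-off to visualization, which are the other pieces mentioned above.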

Andreas Welsch:

Okay. Thank you for explaining that. Now, we've talked about machine learning. We've talked about data quite a bit, obviously. I know the industry is still super excited about generative AI. From your perspective, when should you use machine learning? When should you use generative AI? When should you use them individually or together? What is it that you're seeing, or that you would recommend?

Srujana Kaddevarmuth:

Yeah, so I feel, Andreas, that there is a lot of hype around generative AI. It's not that generative AI algorithms don't have a lot of potential; they do. However, when I say hype, I mean that a lot of investment has gone into generative AI across industries over the last year, but the return on investment has not been proportionate, right? I see a lot of craziness across the industry around using generative AI algorithms for every sort of use case, which is not the right approach. The right approach is: yes, there are going to be certain use cases that require deep language understanding, where generative AI is the best solution. Focus on using generative AI for those use cases. But there are several other use cases, classical regression and classification problems, where conventional AI can solve the problem and even help with optimization. We should be using conventional AI in those contexts. This is important primarily because generative AI as a field requires a lot of compute infrastructure, right? So there are significant costs associated with generative AI deployments, not just cost to the companies, but also cost to the environment and cost to society. One needs to be really mindful of when and how to use it. For example, there are three important costs associated with generative AI algorithms: compute cost, infrastructure cost, and energy cost; it requires a significant amount of energy to run. So being mindful of when to use generative AI algorithms is going to be super important. Rather than using generative AI for the sake of it, look at the true use cases that require deep language, intent, and context understanding, and use generative AI to solve those.
And when we do that, we can still use various methodologies like node balancing, because not all resources will be consumed at every phase of the project life cycle. Scaling resources up and down dynamically is going to be helpful, as is using both GPU and edge computing and looking at distributed ways of computing. So finding different avenues to use these generative AI algorithms efficiently is going to be important, but we should start with: does this use case actually qualify for and merit generative AI deployment, right? That's how I would approach this.
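The "does this use case qualify" gate can be made explicit in a triage step that routes work to the cheaper conventional model by default and reserves generative AI for language-heavy tasks. A toy sketch; the field names and the routing rules are illustrative assumptions, not a method from the episode:

```python
def choose_model_family(use_case: dict) -> str:
    """Route a use case to conventional ML or generative AI based on
    whether it genuinely needs deep language/context understanding."""
    # Language, intent, and context understanding: generative AI's sweet spot
    if use_case.get("needs_language_understanding"):
        return "generative_ai"
    # Classical regression/classification/optimization: conventional AI
    # is cheaper in compute, infrastructure, and energy
    if use_case.get("task") in {"regression", "classification",
                                "optimization", "forecasting"}:
        return "conventional_ai"
    # Anything unclear goes to a human for review rather than defaulting
    # to the most expensive option
    return "needs_review"

demand = choose_model_family({"task": "forecasting"})
reviews = choose_model_family({"needs_language_understanding": True})
unknown = choose_model_family({"task": "something_else"})
```

Even a crude gate like this encodes the episode's core advice: the expensive model family is an opt-in for qualifying use cases, not the default.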

Andreas Welsch:

Awesome. And as I'm listening to you, I'm getting really excited about this whole topic of edge AI: doing your inference, and also your language processing, on device, and then only reaching out to the cloud if needed. By the way, I was watching some reviews again of the Rabbit R1 personal assistant and of the Humane AI Pin over the weekend, just seeing what the latency is if you need to make a call to the cloud every time. Certainly, at the moment it isn't as user-friendly as you would like it to be when you're used to having things on device. But anyway, I'm digressing.

Srujana Kaddevarmuth:

No, I believe, Andreas, that's important. The reason I say that is that even the World Economic Forum has made predictions about the gallons of fuel, or the energy, that will be used to cool down these processors and servers. And we are looking at quantum computing to help us mitigate the situation in the coming decade. But overall, I think it's really important for us not to reach a situation where we have to desperately look for remedial measures, but to act in a conscious manner right from the beginning.

Andreas Welsch:

That's a great point. Absolutely. Yeah. I think that also goes back to your point about machine learning, statistical algorithms, regression, and so on, right? If you can use those, and we know they use less power and less energy, and machine learning even gives better results in many cases, then you don't need a hammer for every problem. Now, what are maybe some examples you're seeing of machine learning and traditional, more classical approaches in business?

Srujana Kaddevarmuth:

Yeah, so for the traditional algorithms, there are a lot of supply-chain-related use cases, right? There is route optimization, there is inventory management, et cetera, where we can use the traditional AI algorithms. Also, in the customer and marketing domains, multi-touch attribution models, single-touch attribution models, et cetera, can be built using conventional AI algorithms. Some recommender systems built around explore-exploit algorithms use conventional AI models as well. In terms of generative AI, there is a huge focus on using these algorithms for content creation, which could be creating product reviews, or creating certain content on behalf of customers and taking their consent to post it, so that people can get a realistic opinion about what a product is and its key capabilities, and the overall discovery and search process becomes more efficient and useful. There are certain opportunities around using generative AI to enhance computer vision applications, around drone deliveries and autonomous vehicles. And there are multiple other avenues, especially around content creation, let's say smart creatives, where we are talking about creating certain images and assets for the organization that would otherwise take a lot of manual intervention and processes, which can be expedited as well. So there are multiple avenues and applications that different organizations are considering, using conventional AI as well as generative AI.
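As one concrete example of the conventional-AI use cases listed above, multi-touch attribution can be done without any machine learning at all. A minimal sketch of linear attribution, where every touchpoint in a converting journey gets equal credit; a simplification for illustration, not the model any particular team uses:

```python
from collections import defaultdict

def linear_attribution(journeys):
    """Linear multi-touch attribution: each touchpoint in a converting
    customer journey receives an equal share of the conversion credit.
    `journeys` is a list of (touchpoints, converted) pairs."""
    credit = defaultdict(float)
    for touchpoints, converted in journeys:
        if converted and touchpoints:
            share = 1.0 / len(touchpoints)
            for channel in touchpoints:
                credit[channel] += share
    return dict(credit)

journeys = [
    (["search", "email", "social"], True),
    (["social"], True),
    (["email", "search"], False),  # no conversion, so no credit
]
credit = linear_attribution(journeys)
```

Single-touch variants (first-touch, last-touch) just replace the equal split with full credit to one touchpoint, and more sophisticated models weight positions or fit the shares from data.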

Andreas Welsch:

Thanks for sharing those. I think it's really important to keep those examples in the back of our minds, and where the opportunities are across the business, in addition to generating, summarizing, and translating language. Thank you. Now, I also know that when we look at building AI products, like you already said, we rely on data that's derived from real-world events. Maybe some of it is created in a lab as synthetic data, but the idea is that we have data that has been created or observed in the real world. And you already mentioned biases as one example of data points that represent the real world but that we might not want in there, because we want to make things more inclusive; we want more diversity, more fairness, those kinds of things. So I'm wondering, from a product leadership point of view, how does diversity play into building AI products? And by the way, is it just the technology we need to focus on, or is there more to it than data and technology?

Srujana Kaddevarmuth:

There is definitely more to it, Andreas, right? Today we are seeing government organizations and corporate organizations delegating decision-making power to these algorithms, and those decisions are the ones impacting people's lives and livelihoods. There are life-altering decisions we are delegating to algorithms, like legal sentencing, job applications, loan approvals, and college admissions, and even the less vital decisions around what to eat, what to wear, whom to befriend on social media, where to travel, and whom to date are all being delegated to algorithms. So when these algorithms are influencing different facets of our lives, transforming every industry we can think of, and being consumed by different sections of society, it only makes sense to have equal representation among the people who are developing these technologies, isn't it? But we don't see that happening. There is a large gap: an ethnic gap, a neurodiversity gap, as well as a gender gap. As per Women in Tech statistics, nearly 80 percent of technology-related jobs are held by men, with only 20 percent held by women, and only 26 percent of overall computing-related jobs are held by women. This situation was further exacerbated, and the gap widened, during the pandemic, primarily because of the caregiving responsibilities women had to take on. Many people left the workforce; around 750,000 women in the United States left the workforce during the pandemic, and the majority of them were in their prime working age, between 33 and 45 years. So it's really important for us to think about bringing more diversity into the domain.
And very recently, McKinsey published an article indicating that corporate America is at a crossroads, and that the decisions corporate leadership makes today will have implications for diversity for generations to come. This is going to be reflected in the technology space, because technology is defining how a lot of things in the world shape up today. While the opportunities are numerous, I believe the future of women in tech, or of diverse ecosystems in the technology space, primarily depends on the tech industry's ability to attract more girls to pursue STEM education and enter STEM careers, and to create a holistic environment for them to thrive, not only in the corporate sector but also in the development sector, to create a greater socio-economic impact.

Andreas Welsch:

Thank you for sharing that. I couldn't agree more, right? And it starts with providing these opportunities. It starts with creating and finding those opportunities, and also with starting at an early age, like you mentioned. Because only when you have role models you can look up to, people you see in these roles today, do you see someone you want to become. So thank you for sharing that. Now we're getting close to the end of the show, and I was wondering if you could summarize the three key takeaways for our audience today.

Srujana Kaddevarmuth:

Yeah. So because there's a humongous amount of data getting generated, the first key point I want the audience to take away is that data democratization and AI productization are here to stay, and most large organizations are going to focus on them. So it's really important to bring a product mindset to building and deploying AI algorithms. The second key takeaway is that there are a lot of risks associated with productizing AI, so doing it in a responsible and conscious manner is going to be super important. The same holds true for generative AI deployments: not using generative AI algorithms for the sake of it, but being mindful of the use case and using the right algorithms for the right use cases. That will save time and money for the enterprise, as well as make individuals successful in the corporate sector. The third aspect is that we need to celebrate and intentionally build diversity within the organization, because diversity helps us build products that can be useful and acceptable to different sections of society, and it helps foster innovation within the organization. Those are the three takeaways I want the audience to take from this conversation.

Andreas Welsch:

Wonderful. Thank you so much for sharing. Also, Srujana, thank you so much for joining us today and for sharing your experience with us. It's been a pleasure having you on the show.

Srujana Kaddevarmuth:

Same here. I totally enjoyed this conversation, Andreas. Thank you for having me here.
