What’s the BUZZ? — AI in Business

Increase Engagement For Responsible AI Programs (Guest: Elizabeth Adams)

May 05, 2024 · Season 3, Episode 11 · Host: Andreas Welsch

In this episode, Elizabeth M. Adams (Leader of Responsible AI) and Andreas Welsch discuss increasing engagement in Responsible AI programs. Elizabeth shares findings from her doctoral research on responsible AI engagement in Fortune 10 companies and startups. She provides valuable insights for listeners looking to create effective responsible AI programs within their businesses.

Key topics:
- Evolution of responsible AI in times of Generative AI
- Common barriers to creating an effective responsible AI program
- Leadership approach to increasing employee engagement and participation in responsible AI programs
- Navigating conversations with employees when their jobs are affected by AI

Listen to the full episode to hear how you can:
- Develop a responsible AI vision for the organization
- Make the vision concrete through AI ethics statements
- Ensure employees are aware of AI ethics policies and how they relate to their work
- Provide a path to raise stakeholder claims and build trust that claims are addressed

Watch this episode on YouTube:
https://youtu.be/o7ygYzN6GBE

***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.


More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter

Transcript

Andreas Welsch:

Today we'll talk about how you can increase employee engagement with responsible AI. And who better to talk about it than someone who's doing just that: Elizabeth Adams. Hey Elizabeth, thank you so much for joining.

Elizabeth Adams:

Thank you so much for having me. I am delighted to be here today.

Andreas Welsch:

Wonderful. Why don't you tell our audience a little bit about yourself, who you are, and what you do?

Elizabeth Adams:

Sure. So I'm Elizabeth M. Adams. I currently bring you greetings from the beautiful city of Savannah, Georgia. I consider myself a responsible AI leader and influencer. I've been in the space, oh, about eight years now. My background is in technology. I've been a technologist for a couple of decades and then pivoted, as I mentioned, to the AI ethics space about eight years ago. And since then I've had a number of successes delivering AI ethics projects. One particular one was in the city of Minneapolis, through a Stanford fellowship I had, where we worked on a civic tech project with citizens and policymakers to understand how to use AI to govern society. I run my own business, I'm pursuing a doctorate, and I do a lot of keynote speeches. So I keep myself really busy, but all things responsible AI, mostly.

Andreas Welsch:

I'm so excited to have you on. We were both part of LinkedIn's Creator Accelerator program, and I'm so excited that we finally have the opportunity to do this episode together. I'm looking forward to learning from you as well. So folks, for those of you in the audience, if you're just joining the stream, drop a comment in the chat where you're joining us from. I'm always curious to see how global our audience is. Now, with that out of the way, should we play a little game to kick things off?

Elizabeth Adams:

Yes, I'm a little nervous, but yes, let's go for it.

Andreas Welsch:

Alright, perfect. This one is called In Your Own Words. And when I hit the buzzer, the wheels will start spinning. When they stop, you'll…

Elizabeth Adams:

Oh, Jesus.

Andreas Welsch:

I'd love for you to answer with the first thing that comes to mind, and why, In Your Own Words. To make it a little more interesting, you only have 60 seconds for your answer. And again, for those of you watching us live, put your comments in the chat as well. What do you think it is and why? So are you ready for What's the BUZZ?

Elizabeth Adams:

Ready.

Andreas Welsch:

Perfect. Then let's do this. If AI were a fruit, what would it be?

Elizabeth Adams:

I would say it would be an apple. Because the apple was something that Eve took from the tree, and it created all kinds of confusion around a whole bunch of stuff. So I would say that it would be an apple, because way back when, they were thinking that the apple was this great thing, and now there are challenges with it. So for me, it would be an apple.

Andreas Welsch:

Awesome. Wonderful, and well within time. Makes me wonder who would be the snake then? But we can save that for a separate episode. Alright, so let's jump right into our topic. I was pursuing my PhD at one point as well, a couple of years ago, and came across many similar topics around ethics and responsible AI. Back in 2016, there was the case that ProPublica uncovered with COMPAS, the recidivism software that was based on biased data and incorrectly sentenced individuals based more on their ethnic background than actual substantive data. Or Amazon, I think a couple of years later, 2018 or so, scrapping their resume matching tool because there was bias in it and women were disadvantaged, as one example, right? I think in light of those examples, and probably several others, I've seen leaders be quick to put AI ethics and responsible AI programs in place. Committees, policies, the whole nine yards, obviously. But I'm curious, from your research, talking to employees and interviewing them, how are they interacting with responsible AI, with these policies? What are some of the gaps that you're seeing? And how is responsible AI evolving now in times of Generative AI, as a new aspect or dimension to this?

Elizabeth Adams:

Okay, so that's a really good question and a good follow-on. Let me just back up to say that a few years ago, there were several cases of bias and AI harm that people outside of the technology space and inside the technology space were raising with organizations and policymakers. So you had mentioned a couple of cases. There are several others where people are actually advocating for safer, transparent, more responsible, accountable technology. So this isn't new. While we are talking about responsible AI in 2024, back in 2016 and 2018, as you mentioned, it was more ethical AI, principled AI, and it's had its evolution. And so there have been many organizations, for different reasons, that have taken on this journey of responsible AI. And from my research, in those where employees feel that responsible AI is an integral part of the culture, they can tell you exactly what their AI policies are. They can tell you who is responsible for what in the organization. They can point to where their center of excellence is, where they're collaborating with their colleagues. And so there's a real difference when responsible AI isn't integrated. They can also tell you that there is a vision for responsible AI, and they're able to evolve with Generative AI. So they have policies, they have procedures, they have frameworks, or they're adopting a framework or standard from some external organization. And then you have the opposite end, where responsible AI is not integral. For my research, I've researched people who are working in the responsible AI space. So there are some that are doing it well, and then there are some cases where responsible AI is stalling. In those cases where I've talked to employees and leaders where responsible AI is stalling, it's because there isn't a vision. They're not sure what their role is. They're not sure what their leader's role is. It's maybe something that is in a smaller segment of the organization, maybe just in a technical division. And so it's not something that is across the board. And because of that, employees are using their own expertise and insights by playing with Generative AI outside of work and understanding some of the harms. And then they're bringing those insights back and informing themselves on how to do it differently, how to be more responsible in the absence of some of those policies and procedures that apply across the board. And then you have that group that's in the middle, where their organization is working on it and they can tell who's doing what, but maybe not as fast as we would like, based on some of the issues that we're seeing in society.

Andreas Welsch:

I see. So you point out one interesting thing, right? Several of them, actually. But the part about employees knowing that there is an AI ethics policy or something: how do you see that differ between smaller and larger organizations, in terms of, say, the capacity, the teams, the structures, the processes they have? Is AI ethics only something for you if you're a Fortune 1000 or Fortune 500 company? Or what does it look like if you're a smaller company or just starting out?

Elizabeth Adams:

Yeah, great question. So in my research, I interviewed employees at some of the top Fortune 50 companies as well as startups. I really wanted to have that view of what's behind the ones and zeros, of what their workplace realities are. And from my research, what I saw is that the smaller and mid-sized companies can be a little bit more agile in how they're evolving, how they're modernizing their processes. But there are some larger organizations where, across the board, people, employees, stakeholders do understand that they are driving a responsible innovation organization. I do see that some of the smaller and mid-sized ones are a little bit more agile as something new, a new innovation, a new topic, comes about. They're able to incorporate that differently and more quickly in their organization. But there are some organizations that have built-in responsiveness. They have built-in monitoring and evaluating as a part of their responsible AI culture. And so when they hear something or when a policy comes, they're able to quickly add that into their systems or into their processes. And I also discovered that many of the larger organizations that have gone through digital transformation in the past are able to bring their workforce along a lot quicker in terms of how to evolve and enrich their responsible AI program, as we are all learning about newer things that are coming down the pike.

Andreas Welsch:

That's interesting, hearing how you frame that: being more agile but still being able to incorporate some of the new topics around responsible AI, or maybe guidelines that are coming. By the way, I'm taking a look at the chat. From North Carolina to Pennsylvania to Florida, we have a few folks joining here. Rory says that the fruit for him would be the durian, the smelliest of all fruits, yes. Spiky on the outside, soft on the inside, but definitely with a distinct taste and smell. Now, you mentioned organizations of different sizes, people being more aware or a little less aware of what policies exist and who to talk to. But I'm wondering, what are some of the other barriers that you're typically seeing in creating effective responsible AI programs?

Elizabeth Adams:

I had a list of questions for all the participants, and of course this also echoes what I saw when I was advising organizations. And I talk about this in my LinkedIn course, Leading Responsible AI in Organizations. Creating a vision around responsible AI is absolutely critical, so that employees have an idea of how to line up against or with that vision. How do they make sure that the activities that they're engaged in, the policies that they're even developing inside the organization, the product documents, the technical requirement documents, the contracts, how would they even know that they're lining up with responsible AI if there isn't a vision? That, to me, is absolutely critical. And then making sure that cascades out across the organization, as well as externally. Some organizations have it as a front-facing statement on their website. Some send it out via email to their stakeholders through a newsletter or something like that. But employees need to have an idea of why they're there working on this responsible AI thing. They also need, from my experience, an opportunity to have healthy discourse within the organization. That is absolutely critical. And so employee safety is one of the findings that has come out of my doctoral research: being able to say something is not right and not having any fear that they'll lose their job or that they'll be retaliated against. Because if you think about it, employees are the ones that are closest to seeing when there might be a hiccup or when there might be some feature that needs to be changed. And that goes across the board. It's not just the technical teams. I talk to folks who are in customer service as well, hearing what customers are saying and having a loop to take that back into the organization in a very safe manner. So, having healthy discourse and a vision. Employees need to get behind that, but leaders as well. That's what helps them work in partnership with their leaders, because they're all driving down the same path, because there's a vision. So I would say those two things can help fill the gap around what I'm seeing: a vision, and an opportunity to be able to express what an employee is seeing and feeling as a part of their workplace reality, as part of the responsible AI culture, and then having a place for that.

Andreas Welsch:

Now, I'm curious, the part around vision that you mentioned, I feel it's a word that's tossed around so easily and people interpret it in many different ways. What does it look like for AI ethics? I've seen some companies in the market say, Hey, we have guiding principles. We design for people. We minimize bias. We ensure human ownership or agency. What does that look like then in terms of vision? Is that it, or is there something else?

Elizabeth Adams:

It's clearly a statement, followed up by an ongoing commitment to ethical practices. So you have a larger statement of why you are even using AI, why you're developing AI, why you're procuring AI, whatever your business model is for needing to use or sell AI. It needs to be clear what this vision is. We develop AI for X, Y, and Z. We are using AI responsibly for X, Y, and Z. Our guiding principles are this because of that. All organizations that I've worked with have different ways, so there isn't a cookie-cutter, one-way approach for an organization to do that. And then there's the aligning of practices and activities within the organization to support that. So in HR, it might be beefing up your employee handbook so that as employees onboard, or as they're being refreshed about new policies, it's clear that you are an organization that is driving responsible innovation, and then listing out what that means. In the job descriptions, there could be a section, and I have an infographic on, I think, maybe 30 different job descriptions, where you can add a paragraph so that the employee, leaders, the C-suite, whomever, know exactly what their responsibility is around responsible innovation. So the vision is a clear statement, and then there are activities within the organization that support that particular vision. And I just happened to mention HR; it could be legal, it could be marketing, it could be data science, engineering, wherever. We have a lot of data in our organizations, and that is a part of my doctoral research: these artifacts that guide it. Tangible and intangible policies, procedures, frameworks, standard operating procedures, a charter, whatever it is, should be very clear about what that vision is, how your organization is evaluating and monitoring, and what the ongoing commitment is to that.

Andreas Welsch:

I see Rory is asking: how many organizations have you worked with and helped implement responsible AI programs?

Elizabeth Adams:

Yeah, so great question. Before I started my doctorate program, I worked with several large, small, and medium-sized organizations, but my sample size for my qualitative study is 60 participants. I was hoping for three participants per organization. Some organizations have more, but their roles are not just technical roles. They're all over: large organizations, small organizations, academia, industry, government, non-profit. But the criterion was that they were involved in a responsible AI effort.

Andreas Welsch:

Now, if you're in a leadership role and you want to drive more employee engagement and participation in responsible AI programs, clearly it's important, right? The examples that we talked about at the beginning of the session show what the impact can be if there's bias in the data, if individuals are negatively impacted, if there's discrimination because of the bias in the data used to make these predictions. We need more engagement. What have you seen work really well for leaders to get that engagement from their employees? Are there things like gamification? Are there quizzes? Are there mandatory trainings? Are there maybe some completely different things that drive employee engagement when it comes to responsible AI?

Elizabeth Adams:

One organization actually has an option for how they address stakeholder claims. Stakeholder claims are things like, I think that we should do something different here, or maybe we should increase our threshold when we're thinking about our algorithms for certain groups, certain zip codes, or whatever it is. And there is a place for that stakeholder claim to be prioritized by what, in theory, is called stakeholder identification and salience theory. So either it's a powerful claim, it's an urgent claim, or it's a legitimate claim. It's one of those, but there's an opportunity for employees to bring their claims forward. And then there's either a governance board or some other group that is managing these different claims, just like a technology bug. So someone has a place to raise it, and it's separate from, well, I know of one organization that has this whistleblower option, so if you're not listened to, then here's an option for you. But that is not what I'm talking about. I'm talking about a place for employees to raise their concerns. The other thing is having trust that their insights go somewhere, that they're not just ending up in some box somewhere where employees don't feel like they're listened to, and having an opportunity to have dialogue about their claims. This can be hard to scale, especially if you have a large organization. And so some of what I have advised organizations to do in the past is to pilot a small program and see how that works. Pilot it with some group that is closest to some of the biased data. So that's where I've seen an opportunity: in organizations where there is a priority placed on, or an opportunity to prioritize, stakeholder claims. That's one way to get them engaged. For me, again, it has to be something across the organization, so that the entire organization understands that this is a part of our culture. Because I've seen it where people have their OKRs and their KPIs, and the leaders are asking their employees to drive towards that without really having an opportunity to understand how they can partner in responsible AI. So that's why I say it needs to be something at the organizational level, because it really depends on what the leaders' pressing issues are that will guide how employees then have to line up their efforts, if that makes sense.

Andreas Welsch:

That makes sense. Yeah, thank you for sharing. I'm taking a look at the chat, and I see Doug has a question as well. It's more around bias: why is there bias? What do you feel is the source of bias in AI? Is it because we ourselves are biased and therefore it manifests itself in data? What are you seeing, or maybe what are you coming up with?

Elizabeth Adams:

Yeah, that's a good question. Really good question. Obviously there's bias in data: historical data that has excluded entire groups of people, data that has been used in certain zip codes that, again, excludes certain people, and people are making decisions based off of certain groups of people. So there's historical data, and that historical, excuse me, historically biased data, a lot of that can come from systemic inequalities, can come from discrimination perpetuating negative stereotypes. And I also think that there is bias, obviously, human bias. All of us show up to work with our narratives about certain groups of people. I face this a lot in Generative AI as I'm playing around with it. It just boggles my mind how certain results can come back for certain things that I haven't even asked for, perpetuating negative stereotypes. And my question is always: how was this model trained? What data was it trained on to deliver this result? I get it, but it still boggles my mind that there wasn't a human annotating the data, associating the data, who could think, there's something wrong here, this isn't a result that we should be making public. I think that there are many ways to combat it, but obviously historical data excluding experiences, excluding someone's lived experience in the data, and then now training on that data. Or where there's someone thousands of miles away that's annotating data. I use an example in several of my presentations of two individuals taking a temperature with an infrared thermometer. One is lighter-skinned, one is darker-skinned, and the computer vision model predicts, with the lighter-skinned person, that the image is a piece of technology. But with the darker-skinned person, it predicts that the infrared thermometer is a gun. So someone annotated that data somewhere to help train that model. Does that make sense? There are lots of different ways that bias can creep in. And responsible AI is really an ongoing commitment to the ethical practices to prevent that in a number of different ways.

Andreas Welsch:

That's an example that I have not come across myself yet, but I think it visualizes very clearly what the impact of bias, biased data, and biased output is and means, right? I think about what it means to then use something like that, a model trained for computer vision, to detect individuals with an infrared thermometer.

Elizabeth Adams:

Yeah, for a healthier outcome. In this particular instance, in this image, the thing that I think about, because I did so much work with the city of Minneapolis around AI surveillance technologies and what should be used to govern society and what shouldn't, it was very enlightening to me to see people advocate for what should be basic human rights. They are employees, they're teachers and engineers and doctors and baristas, who were coming down to City Hall in Minneapolis or communicating to city council members why a particular technology wasn't good for the residents at the time. But what was concerning to me, and what still is concerning to me, is when you think about how some of these images show up. In this particular case, this guy in the image where the computer vision model predicts he has a gun, he could be a frontline worker somewhere. Unbeknownst to him, he could be an engineer somewhere. And now his image was scraped off the internet, and he could potentially be considered a menace to society or harmful to society. So there are real consequences to humans. It's that AI harm, real consequences to humans, but he's also an employee. So that's where part of my research is: how do we engage broader stakeholders in the design and development, in the policy creation, in the frameworks? How do we broaden that representation so that we can catch some of those things before they actually make their way out into the market?

Andreas Welsch:

Maybe, if I may, let me pick up the question from Rory and frame it a little differently. I feel a couple of years ago we were looking at statistical models, machine learning making predictions, these kinds of things. Now we're looking at Generative AI actually generating information and also, to your point, where there are apparently biases in the data and in the models that manifest themselves in the output. But what's the need for responsible AI that you see? Is it the same as three, four, five years ago? Is it greater? In which direction is it going?

Elizabeth Adams:

The attributes of responsible AI are growing. A few years ago, there might have been nine main ones: transparency, explainability, trustworthiness, auditability, fairness, and so forth. Now we're seeing attribution as a new tenet, or attribute, of responsible AI. Consent. User safety is huge. Someone mentioned to me how they feel like their parents had gotten scammed by some AI or Generative AI system, and so they were concerned about user safety. And then, obviously, privacy, and around using my data to create something, and how do we do that responsibly? So those are some of the newer attributes, or principles, around AI that we need to consider. Policy is always going to be something that we need to evolve as we see new things. And unfortunately, policy is reactive, right? So we see something that happens, it harms people, and then we do all that we can to advocate for it to not harm people, and policies are passed. But attribution, absolutely. There were several people that I talked to who, as I mentioned, play with Generative AI because maybe they don't have a good policy for that in their organization and they want to learn on their own, they want to upskill or reskill. And they're learning, as a photographer, I want my data protected. I want to make sure that there's attribution, or that there's consent for my use. So lots of newer things, and this isn't going to change. We're going to constantly have to be in a state of evolution as new things become available. That is why monitoring and evaluating systems are so important, and being responsive, and having a culture that is responsive to what we're finding.

Andreas Welsch:

Thank you for sharing. We're getting close to the end of the show, and I was wondering if you can summarize the three key takeaways for our audience today: how they can increase engagement around responsible AI practices.

Elizabeth Adams:

I would say, if you are at an organization, this is what I did many years ago: I took it upon myself because I was curious. So one, get involved, get engaged, increase your AI literacy so that you can become a champion at your organization. Whether there's a vision or not, you can do that yourself. This also helps you upskill and reskill. Secondly, start being a little louder about what you're finding. We need more leaders in responsible AI, and you don't have to be super technical to be a champion and to be out front. Improve your AI literacy, start being a champion, and then I would say raise your hand for some opportunities at the organization. Years ago, I think I mentioned, or maybe I didn't mention, I conducted a learning event. It had the Chief Privacy Officer and General Counsel there, and then I was asked to be a part of the working group to create the organization's first AI ethics principles. I had never even thought that might be a career path for me. And as we think about the future of AI, there are so many opportunities and careers for us to be responsible and ethical, and to continue on that path to find your specific niche.

Andreas Welsch:

Elizabeth, thank you so much for joining us today and for sharing your expertise with us. And to those of you in the audience, thank you for learning with us.

Elizabeth Adams:

Thank you.