What’s the BUZZ? — AI in Business

Putting AI Ethics Into Practice (Guest: Reid Blackman)

July 25, 2023 Andreas Welsch Season 1 Episode 18

In this episode, Reid Blackman (Author of “Ethical Machines”) and Andreas Welsch discuss how leaders can put their AI ethics guidelines into practice. Reid shares his perspective on going beyond just AI policies and provides valuable advice for listeners looking to incorporate AI ethics from start to finish.

Key topics:
- Lead with AI Ethics
- Assess bias & explainability
- Define & operationalize your AI Ethics policy

Listen to the full episode to hear how you can:
- Articulate the action guardrails
- Implement your AI ethics policy slowly
- Specify success metrics

Watch this episode on YouTube: https://youtu.be/XpMnVPyA0JQ


***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.


More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter

Andreas Welsch:

Today we'll talk about putting AI ethics into practice. And who better to talk about it than someone who helps organizations and leaders do just that: Reid Blackman. Hey, Reid. How are you?

Reid Blackman:

Good, man. How's it going?

Andreas Welsch:

Doing all right. Thank you so much for joining. Hey, why don't you tell us a little bit about yourself and what you do?

Reid Blackman:

Yeah, sure. I'm the author of a book called Ethical Machines, published by Harvard Business Review, on how to put AI ethics into practice. I'm the founder and CEO of Virtue, an AI ethical risk consultancy. I'm the Chief Ethics Officer of the Government Blockchain Association. I've been on EY's AI Advisory Board, served as a senior advisor to the Deloitte AI Institute, and sit on the advisory boards of several startups, that sort of thing.

Andreas Welsch:

Awesome. So it sounds like you're definitely the perfect person to talk to. I'm so excited to have you on and looking forward to our conversation today.

Reid Blackman:

Yeah, my pleasure. Perhaps one relevant thing is that I used to be a philosophy professor, so ethics is my thing. It's been my thing for 20-plus years.

Andreas Welsch:

Sounds great. I'm excited to hear more about your perspective. If you're in the audience and just joining the stream, drop a comment in the chat: what do you think AI ethics entails? What does AI ethics mean to you? Let's jump straight to the question. When I look at how the debate has evolved, early on, when we were climbing the hype cycle of 2016-17, there was a lot of talk about whether AI would put us out of our jobs and similar doom-and-gloom themes. That's since been succeeded by this focus on AI ethics. In my view, not being immersed in the topic on a daily basis, it seems like the next wave. So I'm wondering: why is AI ethics now getting so much buzz? What are you seeing?

Reid Blackman:

There's a couple of issues. One is that there are just lots of instances, lots of cases, in which companies are getting in trouble for realizing AI ethical risks. So, for instance, there have been various multinationals involved. Scandal's a bit strong, but regulatory investigations: Goldman Sachs being investigated by regulators to determine whether or not the credit limit that it set using AI for the Apple Card was discriminatory against women. Optum Healthcare under investigation for an AI that allegedly recommended to healthcare practitioners to pay more attention to white patients than to sicker black patients. Obviously you've got risks related to self-driving cars, like a self-driving Uber killing a person. You've got issues with Facebook and everything that it does with its algorithms, its AI algorithms for serving ads, for instance, showing homes to purchase to white people versus homes to rent to black people. So there are all sorts of reputational risks that have gotten increased attention over the past few years, and I haven't even mentioned all the privacy violations. You combine that with the coming regulations, the EU AI Act to name just one, and you naturally get lots of attention around AI ethical risk.

Andreas Welsch:

I see. Do you feel that's a natural, almost logical progression as we see the technology adopted more and more, that we run into these cases? I'm not sure "edge case" is the proper term, but basically where somebody is using something, maybe pushing the boundaries a little too far, or not doing it as consciously as they should. Where do you see that from your perspective?

Reid Blackman:

Yeah. AI's meant to have big impacts, right? It's meant to operate at scale. That's the whole point of it. If you used AI and it could only have a tiny little impact on a few people, no one would be interested in it. You wouldn't have a show and I wouldn't have a company around AI ethics, let alone a book. AI operates at scale. It has massive impact. That's the whole point of it. And there are various kinds of impacts that are ethically, reputationally, and legally problematic. So it makes sense that, number one, given the scope of the impacts, people are going to pay special attention to whether these impacts are positive or negative. The second thing to say in this regard is that there are certain kinds of ethical risks that are likely to be realized by virtue of how the technology works. It's by virtue of the fact that machine learning is a pattern recognizer that you get things like recognizing biased or discriminatory patterns. You get black box problems when the pattern is too complex for mere mortals to understand. You get privacy violations by virtue of either the training data for the AI or the inferences that are made, where the inferred data points can themselves constitute a violation of privacy. So given the scale of AI, and given the kinds of risks that are likely given the nature of the beast that is machine learning, you're going to get lots of increased discussion about these things, especially in the wake of various scandals.

Andreas Welsch:

Great summary. I think you touched on a few things already, one of them being data, and obviously there's no secret here: if you want to do AI or machine learning, you need data, like you just said. And broadly speaking, to your point, the data will contain different types of biases because it's based on decisions and events that somebody has taken in the real world. But I'm curious, how should leaders start thinking about bias as it relates to AI? What's your recommendation?

Reid Blackman:

Okay, so the first thing is that there's arguably, justifiably, lots of attention on bias in AI, and that's because it's really bad, it happens at scale, and it's quite likely. One thing I want to highlight, though, is that an AI ethical risk program doesn't end with bias. It may emphasize that issue, but it certainly shouldn't end there, because there are lots more ethical risks to address. That's the first thing to say. The second thing to say is that companies need a systematic approach to bias identification and mitigation, and this needs to be woven throughout the AI lifecycle. So generally, when you're doing AI ethical risk mitigation, when you're doing your ethical risk due diligence, you want to think about what those ethical risks are and how they might arise, starting at the concept phase, when you're thinking about whether AI is a good solution and, if so, what that AI solution might look like. You're going to want to think about the probability of success of developing a model that can solve the problem you want it to solve. But you'll also want to think about the risks involved, not just of, say, technical failure, but regulatory, ethical, and reputational risks, cybersecurity risks, and so on and so forth. And that needs to get woven into the lifecycle. So you do it at the concept phase, or, if you want to use a word that I really hate, the ideation stage. And then you need to do it during design, development, deployment, monitoring, and maintenance, because there's always room for something to happen in the system that introduces discriminatory outputs, right? You might have done your bias due diligence, you've sufficiently de-biased it, you put it into production, it starts gathering new live data that's being created, and it turns out that data is a source of potential discriminatory impact. So you always have to check and check and check.
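To make the "check and check and check" point concrete, here is a minimal sketch of what a recurring post-deployment bias check on live data could look like. Everything in it, the column names, the function name, and the 0.8 cut-off (a common "four-fifths rule" heuristic), is an illustrative assumption rather than anything prescribed in the episode.

```python
# Hypothetical sketch: a recurring post-deployment check for discriminatory drift.
# Column names, the function name, and the 0.8 cut-off are illustrative assumptions.
import pandas as pd

def disparate_impact_check(live_scored: pd.DataFrame,
                           outcome_col: str = "approved",
                           group_col: str = "group",
                           min_ratio: float = 0.8) -> dict:
    """Compare favorable-outcome rates across groups on newly scored live data."""
    rates = live_scored.groupby(group_col)[outcome_col].mean()
    ratio = rates.min() / rates.max()  # worst-off group vs. best-off group
    return {
        "rates_by_group": rates.to_dict(),
        "impact_ratio": float(ratio),
        "flag_for_review": bool(ratio < min_ratio),  # trigger a human ethical-risk review
    }

# Example: run this weekly over the latest batch of production decisions.
batch = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   0],
})
print(disparate_impact_check(batch))
# flag_for_review is True here, since group B's favorable rate is only a third of group A's.
```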

Andreas Welsch:

There's a lot of focus on business results and business outcomes: what are the KPIs, the hard metrics we want to drive and improve? And I feel AI ethics in many cases might still be this little appendix that, oh yeah, we almost forgot about. Is that how you perceive it as well?

Reid Blackman:

There's certainly more talk than action. I don't think anyone would deny that. There are some companies who take it quite seriously, which is not to say that they've got it down perfectly, of course; I don't think they would claim that they do. Then there are some companies that are doing nothing. And I think most companies are not doing much at all. I think they're waiting to see what happens with the EU AI Act, although they're not going to have to wait much longer, because it looks like it's going to get passed probably in less than a month. And then the other thing is that the companies that do something often start with something like an AI ethics statement. Frankly, they do a really bad job of writing these things, and then they don't know how to implement them. So they say, oh, what do we do with these high-level principles? And then they just slam on the brakes. So I think most companies are stuck as to what to do.

Andreas Welsch:

Got it. Maybe let's take a quick look at the chat. One question here is: given you were a philosophy professor, which of the classics would you recommend to best understand the relationship between technology and society?

Reid Blackman:

I think there's a tremendous amount of value to be had in reading the philosophical classics. So I could tell you to go read Aristotle or Mill or Rawls or Marx or whatever. I don't think that's going to be the quickest or best route to serious, robust ethical risk identification and mitigation in the enterprise. If I were to suggest how to get up and running relatively quickly in one's capacity as an ethicist in particular, then I might suggest turning to the literature on medical ethics. It's what philosophers call applied ethics, because it's about particular kinds of cases. So you'll see issues around the ethical permissibility of abortion or euthanasia, or the ethics pertaining to healthcare systems generally. Those are really important issues. They're controversial, and in studying those kinds of things you'll learn lots of concepts that can be carried over. For instance, in healthcare we have a lot around informed consent, specifically having to do with one's ability to have control over one's body. But in the world of AI ethics, you might think about informed consent as it relates to data about you, for instance what biometric data you're comfortable with people collecting and using. So rather than diving into the classics of ethics or moral philosophy, which are great and which I generally recommend, I don't think that's the most efficient or most effective way to get up to speed on how to think about the ethical risks that pertain to AI. And they certainly won't help you at all with creating the correct governance structures that are going to allow an organization to mitigate risks. There are going to be people, as it were, on the first line of defense, maybe the second line of defense, who need to think about ethical risk for particular use cases, for particular models. But there also needs to be senior leadership that thinks about governance, policies, procedures, and what the appropriate tools are. And I'm sorry, but Aristotle's not going to help you with that.

Andreas Welsch:

Got it. Yeah, thanks for answering that. Let's take a look at this second one: sometimes the biases are built into data in ways that aren't explicitly known unless we find a pattern in the predictions. So when MLOps and continuous learning are adopted, how do we handle bias when the training of the model is automated? Maybe that goes back a little bit to the checks and balances that you've just mentioned, but what's your perspective on that?

Reid Blackman:

A number of things to say. Number one, there's not one source of bias. It's not, as people like to say, just the training data. It might be the training data. It might be something else. It might be the objective function that you set. For instance, one of my examples is: let's say you're distributing kidneys for transplant, and you've got this ethical goal of maximizing the quantity of life-years saved, right? You want to get the most use out of this donated kidney that you can get. So, all else equal, better to give it to the 18-year-old than to the 98-year-old, because you'll get more use, more years saved, out of that kidney. That seems like an ethically sound goal. And what you'll find, if that's your objective function and you make predictions about who's going to get the most use out of these kidneys, is that white people will get more use than black people, because white people, at least in the United States, have better mortality rates than black people. And so it'll start giving those kidneys to white people instead of black people. The training data is, if you like, perfectly accurate, right? Black people do have worse mortality rates than white people. Now, there are various kinds of explanations having to do with historical discrimination that explain how our society got to be that way, but I don't think the training data is biased; I think it may reflect certain kinds of historical biases, among other things. But it might be that you need to change your objective function, not the data, or the constraints around the objective function: maximize years saved, given the constraint that we want some kind of equal or equitable, whatever word you want to use, distribution across various protected subpopulations. Again, it might not be the objective function. It might be where you set your threshold. It might be how you weighted the input variables. So there are lots of sources of discriminatory outputs, and lots of strategies and tactics for bias mitigation.
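As a rough illustration of the kidney example, the toy sketch below shows how the same predictions can yield different allocations depending on whether the objective is constrained. All names, groups, and numbers are invented, and the "one per group" rule is just a crude stand-in for an equity constraint, not a proposal for how allocation should actually work.

```python
# Toy sketch of the kidney example: same predictions, different objective/constraint,
# different allocation. Candidates, groups, and numbers are made up for illustration.
candidates = [
    # (candidate_id, group, predicted_life_years_gained)
    ("p1", "white", 40), ("p2", "white", 35), ("p3", "black", 33), ("p4", "black", 30),
]
kidneys_available = 2

# Objective 1: maximize total predicted life-years, no constraint.
by_years = sorted(candidates, key=lambda c: c[2], reverse=True)
unconstrained = [c[0] for c in by_years[:kidneys_available]]

# Objective 2: same maximization, but with a crude distribution constraint
# (at most one kidney per group, standing in for an equity requirement).
constrained, seen_groups = [], set()
for cid, group, _ in by_years:
    if group not in seen_groups and len(constrained) < kidneys_available:
        constrained.append(cid)
        seen_groups.add(group)

print(unconstrained)  # ['p1', 'p2'] -> both kidneys go to the same group
print(constrained)    # ['p1', 'p3'] -> the constraint changes the outcome, data unchanged
```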

Andreas Welsch:

Thanks, Reid. I think that's very important to hear, because I feel the going assumption or notion is that it's got to be the data. But seeing and understanding that it's not just the data, that the data is maybe one source but there are so many other factors that play into it, is very important. It's worth keeping at the forefront of our minds how complex these systems can actually be.

Reid Blackman:

Yeah. And look, even if you grant for the sake of argument that the source of the bias really always is the data, and I think that's false, but let's suppose it is for the sake of argument, it doesn't follow that the only plausible or effective bias mitigation strategy involves tampering with the training data. It may involve changing your threshold, changing the weightings, et cetera. So there's a lot that can be done in the service of bias mitigation that doesn't involve tinkering with the data.
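One hedged sketch of the "change the threshold, not the data" idea: choose per-group decision thresholds on a validation set so that selection rates roughly match, leaving the training data and the model untouched. The function name and numbers are hypothetical, and whether group-specific thresholds are appropriate or even legally permissible in a given setting is itself an ethical and legal question.

```python
# Minimal sketch, assuming you already have model scores and group labels for a
# validation set: pick per-group decision thresholds so selection rates roughly
# match a target, without modifying the training data or retraining the model.
import numpy as np

def per_group_thresholds(scores: np.ndarray, groups: np.ndarray,
                         target_selection_rate: float = 0.3) -> dict:
    """Return a score threshold per group that yields roughly the same selection rate."""
    thresholds = {}
    for g in np.unique(groups):
        g_scores = scores[groups == g]
        # The threshold is the (1 - rate) quantile of that group's score distribution.
        thresholds[g] = float(np.quantile(g_scores, 1.0 - target_selection_rate))
    return thresholds

# Usage: apply the group-specific threshold at decision time.
scores = np.array([0.9, 0.7, 0.6, 0.4, 0.8, 0.5, 0.3, 0.2])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
thr = per_group_thresholds(scores, groups, target_selection_rate=0.5)
decisions = scores >= np.vectorize(thr.get)(groups)
print(thr, decisions)  # both groups end up with a ~50% selection rate
```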

Andreas Welsch:

Awesome. We started our conversation with how, early on, there was a lot of talk about AI putting us out of work. And if I take a look at what's happening in business and in many other areas where there are high-stakes decisions, whether they're high stakes because of the impact on a person or because there's a financial impact, people want to know how a prediction has come about. Why does that AI system think it knows better than I do? Hence all of this talk about explainable AI not too long ago, AI being a black box, people not really understanding what's happening or why the prediction has been made. What are you seeing there, and how does explainability play into that bigger topic of AI ethics?

Reid Blackman:

Okay, so explainability is a problem because machine learning is a pattern recognizer operating on vast troves of data. It's identifying, at least ideally, patterns that consist of the mathematical relations among thousands of data points, and this is very difficult for mere mortals to comprehend. So we get the black box problem. In low-risk situations, like if it's just making a prediction about whether or not this really is a picture of your dog that you uploaded, we might not really care about explainability. In certain kinds of high-risk situations, like this person is likely to develop diabetes within the next two years, or to develop cancer, or should receive this course of treatment, or should be denied a mortgage or a loan, or an interview for a job, or admission to college, et cetera, in those high-stakes situations you may very well want explainability. Now, why would you want explainability? There's a variety of reasons. One, and I think this is perhaps the most underappreciated one, is that from an ethical risk perspective, explainability has to do, among other things, with what transforms inputs to outputs. What are the rules of the game, the rules of the model that transform inputs to outputs? And once you can say, okay, these are the rules of the game, that opens up the question: are the rules of the game fair? Are they good? Are they reasonable? Are they equitable? If you can't explain what the rules of the game are, you can't engage in that kind of ethical assessment of whether or not these are fair, reasonable rules to play by, or whether instead you're asking your customers or clients to play by unfair rules. So explainability, in that case, is a necessary condition for engaging in the ethical risk assessment. That's one issue of explainability. Another is that you want to be able to explain to someone why they were harmed, or what they need to change, that you need to change X or Y or Z in such a way that you'll get approved for the mortgage next time. So sometimes explanations are needed so that you can help your customer or client get a better result the next time. Okay, there's more to say there, but I want to answer the second part of your question, which is what companies are doing now around it. The short answer is that if they're doing anything at all, they're using technical tools like LIME and SHAP, which give something like an explanation for why the model is giving the predictions that it's giving. There's a deep, interesting question about the extent to which those technical explanations are simplifications to the point of being distortions of what's going on in the model, or whether they are accurate and sufficiently useful explanations. And then there's a second issue, which in some ways is just as important, about who needs the explanation, because those technical explanations are interpretable or understandable by data scientists, but not by your average consumer, citizen, regulator, et cetera. So a crucial part of the explainability discussion has to be: who do we need to give explanations to? Why do they need the explanation? What would constitute a useful or good explanation for them? And in what language do we need to communicate it; that is, how do we ensure that it's intelligible?

Andreas Welsch:

And I think, especially on that point of explainability and how to communicate it, I remember some of the earlier projects that I was a part of. We started without explanations. We said, hey, we're 87% or 97% confident that this is the right information we're showing you. And the business user would come back to us and say, 97%, is that good? Is it bad? How do I know? So the next step was to say, here are the top influencing factors or features that have gone into that prediction. Okay, but how important were they? And so I've seen it evolve to a point where it's a scale, maybe a scale of five stars: is the confidence very low, is the prediction robust, or is it something you should maybe check twice as a person, without giving absolutes that people don't have a reference point for?
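A hypothetical sketch of what Andreas describes here, translating a raw confidence score into a relative five-star scale plus a "check this yourself" flag; the mapping and the 0.6 review cut-off are made up for illustration, not taken from any project mentioned in the episode.

```python
# Hypothetical sketch: turn a raw 0-1 confidence score into a 1-5 star rating
# and a human-review hint, instead of showing users an absolute percentage
# they have no reference point for. The cut-offs are illustrative only.
def confidence_to_stars(confidence: float) -> dict:
    """Map a 0-1 confidence score to a 1-5 star rating and a review hint."""
    stars = min(5, max(1, int(confidence * 5) + 1))  # 0.0-0.2 -> 1 star, ..., 0.8-1.0 -> 5 stars
    return {
        "stars": stars,
        "needs_human_review": confidence < 0.6,  # illustrative cut-off, not a standard
    }

print(confidence_to_stars(0.97))  # {'stars': 5, 'needs_human_review': False}
print(confidence_to_stars(0.42))  # {'stars': 3, 'needs_human_review': True}
```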

Reid Blackman:

Yeah. That's one of the things these tools do: they say, here are the features that played a particularly important role. But I've seen versions of these tools multiple times where they'll articulate the major contributors to the output, but won't say the way in which each contributed. So, for instance, say there are features X and Y, the person got approved for a loan, and features X and Y played big roles in that decision. But was X really for it and Y against it? Was Y for it and X against it? Were they both for it? It's crucial, if you're going to say that X played a role in the prediction, to say what role it played. Otherwise we just don't know what it means.
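To illustrate Reid's point about direction, here is a minimal sketch using a linear model, where a signed per-feature contribution can be computed directly as the coefficient times the feature's deviation from a baseline; SHAP-style tools generalize this idea to more complex models. The feature names and data are invented, and this is not the tooling any team in the episode used.

```python
# Minimal sketch of signed, per-feature contributions for a single loan decision.
# For a linear model, coefficient * (value - baseline) gives a signed attribution;
# SHAP-style tools generalize this to non-linear models. Data is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))  # columns: [income, debt_ratio]
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)
feature_names = ["income", "debt_ratio"]
baseline = X.mean(axis=0)

applicant = np.array([1.2, 0.8])  # one hypothetical applicant
contributions = model.coef_[0] * (applicant - baseline)

for name, c in zip(feature_names, contributions):
    direction = "pushed toward approval" if c > 0 else "pushed toward denial"
    print(f"{name}: {c:+.2f} ({direction})")  # says not just which features mattered, but which way
```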

Andreas Welsch:

Yes, fair point. Maybe taking another quick look at the chat, there's an interesting question here around the EU AI Act: what about the auditing of AI on a wider scale, either by auditing companies, international certification authorities, or governments? What's your view on that when it comes to that aspect of the EU AI Act?

Reid Blackman:

There's a lot left open. I like the general risk-based approach; that's also how I approach AI ethics, from a risk perspective. How do we make sure that we don't realize a bunch of ethical pitfalls and wrong people at scale? I think that's really important. That high-risk approach, I think, is a good one. Frankly, I have to think a little bit more about what the act would consider an acceptable risk, or, I forget what they call it, but basically medium and low risk; they have a particular name for it that escapes my mind at the moment. I want to know what those are exactly, because medium risk is still a decent amount of risk. And then the other thing is that we don't really know what the standards are, right? What really constitutes bias or discrimination? What constitutes a good explanation? So the EU AI Act spells out at a relatively high level the kinds of things that you need to be compliant with, but the actual standards, compliance with which constitutes compliance with the regulation, are TBD. And so until we see what those look like, we don't really have enough meat on the bones to know how effective compliance is going to be when there is compliance.

Andreas Welsch:

Got it. Perfect, thanks for answering the question. You already mentioned earlier that companies have ethics statements and ethics principles, and some do it better than others. How can companies actually put those ethics statements and guidelines into practice?

Reid Blackman:

So first of all, the statements need to be better, in the sense that they need to actually articulate what the guardrails are. Anyone can say they're for fairness; that's no big deal. I like to say even the KKK will tell you they're for fairness, but their conception of what fairness actually consists in is very different from ours. So you've got to do more than just say we're for fairness. You have to say, we're for fairness, and that means we will always X or we will never Y. Or, we value people's privacy, and so we will always X and we will never Y. To give you a simple example: we value privacy, and so we will never sell customer data to a third party. Okay, now I understand what one of the guardrails is around your commitment to privacy. So I think that's number one: articulating what the action guardrails are in connection to each value is absolutely necessary, and it's immediately, in some sense, action-guiding, because it says what's off the table. There are still gray-area cases to deal with, which we can talk about if you want. The other thing that I think is absolutely crucial, and that I see companies stumble on again and again, is that they write their statement and then they want to rush to implement: okay, now we've got our principles, how do we implement? And you've got to slow down. You have to slow down; if you want to implement quickly, don't rush to implementation. You need to do a gap and feasibility analysis. You need to see, okay, where does our organization stand relative to the standards that we've just set in our statement? What do our data science teams look like? What does HR look like? What does marketing look like? What does the existing governance structure look like? What does the existing product lifecycle look like? And where is there a gap between how our organization actually operates now and where our statement says we want to be? Then, related to that, you want to do a feasibility analysis: what do we need to do, and how big of a lift is it going to be to move our people, processes, and technologies up to the point where we're compliant with our ethical standards? People skip that step and run straight to implementation, and then they run into tons of roadblocks, because this department didn't know that thing was going on, or this thing you want to do isn't compatible with the existing governance structure, or with enterprise risk management as it already exists, or with this or that policy, and it's disarray.

Andreas Welsch:

I think that makes it quite actionable for those of you listening in the audience as well. The call to action not to act too quickly is a very important one.

Reid Blackman:

Yeah. Awesome. And it's counterintuitive to people.

Andreas Welsch:

I feel in business we're so driven to show results that, to your point, they can come at the expense of implementing something the right way.

Reid Blackman:

So that we can say we've done something, people will rush to find a tool. They'll go rush to find the tool: let's find a tool, we need a tool for bias identification, we need a tool for explainability. But you haven't specified any success metrics for tool selection, you haven't specified how that tool will get embedded into existing processes, and so the tool doesn't get used. And so on.

Andreas Welsch:

Fantastic. I was wondering if you could summarize the top three takeaways for our audience today.

Reid Blackman:

Yeah, sure. Let's see, top three takeaways. One, slow down a bit, and write an ethics statement that actually has guardrails. Two, do a gap analysis and a feasibility analysis. Third, I think it makes sense to invest in, say, a person who can really understand the ethical risks, not by reading Aristotle, but for instance by studying the literature on medical ethics. Having an ethicist around, at least in the creation of the program, to spot the kinds of ethical risks that you want to avoid could be really powerful.

Andreas Welsch:

Awesome. Thanks for sharing your expertise with us, and to those in the audience, thanks for learning with us. Reid, it was a pleasure having you. Thank you so much for joining.

Reid Blackman:

Yeah, thanks for having me.