What’s the BUZZ? — AI in Business

Define Guardrails For Generative AI In Your Business (Guest: Ravit Dotan)

June 13, 2023 Andreas Welsch Season 2 Episode 10
What’s the BUZZ? — AI in Business
Define Guardrails For Generative AI In Your Business (Guest: Ravit Dotan)
What’s the BUZZ? — AI in Business
Become a supporter of the show!
Starting at $3/month
Support
Show Notes Transcript

In this episode, Ravit Dotan (Director The Collaborative AI Responsibility Lab at University of Pittsburgh) and Andreas Welsch discuss how you can define guardrails for generative AI in your business. Ravit shares her insights on AI ethics and regulation and provides valuable advice for listeners looking to learn about the emerging field of generative AI.

Key topics:
- Describe the challenges of generative AI
- Understand differences compared to traditional AI systems
- Guide your teams to use generative AI responsibly

Listen to the full episode to hear how you can:
- Tell apart design choices from actual AI capabilities (anthropomorphism)
- Prepare for regulation which extends and amends existing laws
- Mitigate risk through a company policy for generative AI use

Watch this episode on YouTube:
https://www.youtu.be/1b8QbH_hF40

Support the Show.

***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.


More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter

Andreas Welsch:

Ravit, so great to have you on. We'll talk about defining guardrails for a generative AI use cases in your business today, and so I figured who better to talk about it than you. I know you've been doing a lot of work in that area and especially around ethics. So I'm super excited. But before I talk too much or steal your thunder, why don't you tell us a little bit about who you are and what you do?

Ravit Dotan:

Hi. Yeah. I'm really glad to be here. So thank you for inviting me, Andreas. So I, work in the field that is called AI ethics. That means that I'm thinking about the social and environmental impacts of AI systems and what we can do to push this technology in a positive direction that benefits humanity. I work as an advisor, speaker, researcher, and in particular, I work with a generative AI company called bria.ai, and I lead their AI ethics efforts.

Andreas Welsch:

I'm so glad to have you on. I've been following your path for the past year or so, and I'm super thankful for you sharing your perspective on AI ethics with the community. I can't believe it. We have more than 870 registered attendees today. That's huge. That also shows what an important and hot topic that actually is as we're looking at generative AI. Maybe we play a game for just a few minutes to kick things off as an ice breaker, and then we jump in into our questions. So look this game is called In Your Own Words. And when I hit the buzzer, the wheels will start spinning. And when they stop, you'll see a sentence. I would like you to complete that sentence with the first thing that comes to mind and why, in your own words. Ravit, are you ready for What's the BUZZ?

Ravit Dotan:

I don't know, but ready or not, here I come.

Andreas Welsch:

Okay. So let's see. If AI were a season of a year, luckily there they're only four, what would it be and why?

Ravit Dotan:

I'll go with spring because I'm hopeful for all the things that can grow from it.

Andreas Welsch:

Fantastic. And what else? Spring comes after winter. So maybe there's a bit of hope that we've left the AI winter behind us as well. What do you think?

Ravit Dotan:

After spring comes summer, so currently we're at a stage of intense hype. Some of the hype is helpful, some is not. But I'm really curious about what happens after the hype is behind us.

Andreas Welsch:

That's an awesome answer and a perfect segue in into our first question actually. So thank you for being so flexible.

Ravit Dotan:

This is fun.

Andreas Welsch:

Awesome. In our audience, some say winter. Some say also spring, like you, new beginnings. So that's awesome. And definitely a global audience. From Mauritius.

Ravit Dotan:

I'm seeing some familiar names and faces in the chat, so thank you for coming. It's great to see you and I'm really glad for all the people checking in the chat saying where they are.

Andreas Welsch:

Yes. So that's awesome. So please continue doing that. And also if you have questions, pop them in the chat. We'll take a few as we go along. Let's jump into question number one. If we look at this field of generative AI there's so much hype. We've done AI also for a number of years. And not everybody's rolled it out in their company successfully or to the greatest extent possible. But I think one of the things that remains the same is that aspect of change management. So whether it's how you build software with AI, there are some changes. They've been there before as we looked at more deterministic models. Now they're there with generative AI models. And this aspect of change management is certainly there as you then roll out these new technologies and generative AI infused applications, if you will. So I'm wondering what you're seeing there. How generative AI is now actually different compared to previous kinds of AI or technology that we've had. And what do leaders need to be aware of now as they roll it out?

Ravit Dotan:

Yeah. Okay. So you're asking how generative AI is different from other kinds of AI exactly. I just wanna say maybe to contextualize a bit what AI does, because I think for many people, generative AI is the first time that they feel up close and personal with AI systems. I think that is actually the biggest difference that now we have individuals able to interact with AI system, knowing that this is what is happening. Because so far, what happened was AI is all around us. It's been like that for a long time. When you write an email and you get an auto complete, that is AI. When you speak to your phone and it writes things down, that is AI. When you go to the airport and some camera, I just had this for the first time but instead of using my boarding pass, they actually scan my face. I was like, what is this? Answer: AI. And AI does many, things. I think the only real difference with generative AI is that now people are coming into close contact with it themselves as users at home, and it's more apparent to them that it is AI. So I think that is the main difference that I would say.

Andreas Welsch:

For me, the question was around what else changes, especially around ethics. What are things you need to be even more aware of than what we've been doing, or we're supposed to do all along.

Ravit Dotan:

So what are the differences in terms of what we need to be careful about. The risks, what we need to be aware of. I'll just mention three things. The first thing that I really want people to have more awareness of is misinformation. So we all know that we can use generative AI to purposefully create false information, right? Like I can easily write a news story of something that like never happened. But I think there's lower awareness to unintentional misinformation, let's call it, because people think of generative AI as like a search engine or something. Whereas a source of truth that is simply not the case. I'll just give a few examples. I'll start with a story from a friend of mine who is a lawyer. He lives in New York and he wanted to find lawyers in New York with a specific kind of expertise. And so he asked ChatGPT: can you give me some lawyers in New York with this kind of expertise? And he got a list of 10 names and they all sounded good. They had addresses, they had phone numbers, working for the right kind firms. Small problem: none of them existed. They were all non-existent people and he called one of the phone numbers. It was a spa. And today is actually a great day to talk about this because we have the first defamation lawsuit against OpenAI, because someone actually used it in a way that created false information about someone. And now the person is suing, because the false information was defamatory. And so, it is unwise to use this tool if you would like truth about things. And I think that is a big misconception because of how people perceive AI. Maybe as something that is all knowing all powerful no.

Andreas Welsch:

That's, that makes a lot of sense. I was actually giving a session on"How to use ChatGPT to increase your productivity?" today. And that was one of the things that I emphasized as well. It can be a really great sparing partner, but, please do make sure that you check your sources and that, whatever output you use, you treat it as the same kind of output that you would generate. Because at the end of the day, it's your name or your organization's name that is attributed to it.

Ravit Dotan:

Exactly. And maybe, I'll add to that when it comes to misinformation or just like stuff that is false, I think the false facts are something to definitely be a aware of. But I wanna add, there's a special kind of mistake that ChatGPT makes. And that is bias. It makes biased assumptions which can lead to biased products. So I'll give an example of a little experiment that I did with ChatGPT like a month ago or two months ago, inspired by someone named Hadas Kotek. Okay, try to ask ChatGPT"Here's a sentence for you. The doctor yelled at the nurse, because she was late. Who was late?" ChatGPT says, guess, the nurse. Great. Replace the order:'The nurse yelled at the doctor because she was late. Who was late?" I actually got two different answers. One was like, again, the nurse. But the other answer was like, actually you have a grammatical error in your sentence.

Andreas Welsch:

Oh, wow.

Ravit Dotan:

Then, you go on and on and that pretty much no matter what you do, you get this assumption that the nurse is female. And even if you ask something like:"A nurse and a doctor have lunch. She paid, because she makes more money. Who paid?" Again, it's a nurse. It's not just false facts, it's false inference ranking that is not only incorrect and can lead to just having incorrect information in your notes or whatever document you're creating. But also it creates discrimination, which is a social problem and also sometimes illegal.

Andreas Welsch:

That's a very important point to point out. Because, I think at least the broader public or those not being so deeply familiar with these challenges might take it at face value and actually trust the information or might amplify and reiterate these biases that are there.

Ravit Dotan:

I wanna add one more risk for people to be aware of. A huge risk with specifically chatbots is creating a misconception that you are interacting with an intelligent human-like entity. I think we're prone to those mistakes because of science fiction. We've seen those imaginary robots and I think there's like a doomsday hype kind of thing going on. And just like an appeal to it maybe like it feels like a human. But, why? I think a part of it is because of the UX of the technology, of the chatbot. It produces chains of words that contain like I. I think. I will tell you, I am sorry. When it makes mistakes, these are deliberate choices by the designers of chatbots. And they can make us feel as if we're interacting with a smart being that's maybe up to get us and take our jobs. It's just not the case. They just chose to represent our technology in that way, but it's really an illusion. And so that's a third thing that I would flag.

Andreas Welsch:

And I think, again, that's so important to point out. Because as we're interacting with that technology, it does sound like there is some kind of intelligence there, because it also sounds kinda like you and I would if we write something or if we say something. Yeah. That's very important to emphasize that it's a design choice, but it's not necessarily how this kind of technology behaves generally or all across the board. Now, maybe one question in the chat from Ken that I see that I would like to pick up on. It's specifically around guardrails and when or how to establish them. So Ken asks,"in your opinion, are guardrails to be self-imposed, like an honor system, or do they require government mandated laws?" It gets us straight to the topic of our episode today. What do you see there?

Ravit Dotan:

I think it's gotta be a combination of regulation from governments, just like laws. It's got to be there. But when I say laws, I don't necessarily just mean new laws that we're gonna come up with new AI laws. I also mean enforcement of the current laws that we have. For example, non-discrimination laws already exist. They are typically technology agnostic. It's not like the law says it's illegal to discriminate unless they're using AI, in which case it's okay. So we already have those laws what we need to see from federal agencies in the U.S. and just civil society organizations like the ACLU is lawsuits happening and enforcement. So some of that has already started, which is great. But we need much more. So I would say, when I'm thinking of regulation, I'm actually thinking about that first and foremost, using the laws that we currently have. Been said that, I do think that certain capabilities are not covered by the current laws. In which case we would need to either innovate the existing ones and upgrade them or add new laws. Having said that laws is a small portion of what is going to contain, to push this technology in a positive direction. When I think of AI, generative AI, my comparison is something like the internet or fire. It's a cross sector task to make sure this technology is beneficial, right? When we think of the internet, yes, we have laws, but also we have education, right? And we also have efforts by the companies themselves. And so there's some things can and should be covered by the law. Some things companies should do self-regulation. There are additional sectors that need to be in the game. I personally focus on the financial sector and thinking about there are organizations that are funneling money into generative AI and other AI systems. They have a responsibility too. When they invest, they should be asking those difficult questions. Where is the data coming from? What guardrails are you doing for self-regulation company? These are questions that they should be asking. When I'm thinking of financial actors, I'm thinking of like investors. Insurance companies, they have things to lose, right? Because they're gonna pay if the company's gonna exude. Procurement. When we buy AI systems, especially people in the public sector, we should be asking those questions about responsibility. We should refuse to engage with companies who do not give us satisfactory answers, and then we should encourage the companies to improve, for example, as an investor. The investor is in a position to provide resources and help the company grow. They do that in many areas. And AI responsibility should be one of them.

Andreas Welsch:

Great to see it as much as laws or government needing to do their part. But all of us in the different capacities and the different entities that are working or in interacting with AI. I see there's a question from Anabelle. Maybe picking up on it quickly. She says,"What do you think are the new laws we need?" I was wondering if you have a perspective on that. And the point that you mentioned, maybe let's first start with what we have and see how that can be applied to AI, is just another way of doing this or governing that. I think is a great, answer and in a great step. How, do you see this going beyond that?

Ravit Dotan:

Probably many people in the audience have heard about the E.U. AI Act. But I'll just mention a little bit for those who are not familiar, because I wanna piggyback on that. So the EU AI Act is the most substantive effort right now to create an AI specific clock. What it does, it divides AI category AI applications into risk categories, and it gives regulation like different kind of regulations for each risk level. The top category is going to just be prohibited. I think that is a good idea. Some things should just not be done. And I think it's not the kind of thing that can be covered, but other kinds of laws. Additionally, I actually served in intelligence in military intelligence for a couple of years. I was shocked in a way when I realized, wait a minute, but in intelligence, we are listening in on people's calls. Oh my God. And I thought, no, but it's wrong. But then I realized, oh no, but yeah, I get it. Yeah. That's what we do with intelligence though. Some technological capabilities, we forbid other people from using. But we think, no it's only for governments. So I was using that as an example. There are some things we allow our governments to do, but we don't allow anyone else. We don't allow anyone else to listen in on phone calls and read people's emails. That's like illegal except so I think in addition to making some things illegal really restricting the access to some kind of capabilities. So I would say these are some of the laws that I would go for in addition. And these are laws that are already in motion and working. Impact assessments is something that a lot of laws are talking about. And I think that is absolutely crucial, but I wouldn't limit it to AI. I think impact assessment is something that we should be asking for. Any kind of emerging technology where the impact is going to be vast and we're just like, we're not sure. We don't know what this thing is going to do. So I wouldn't. Yeah, wouldn't restrict it to AI, but I would want more laws saying you gotta do a risk assessment, an impact assessment and understand what the tool is likely to do.

Andreas Welsch:

That's a great point. So, you have a scale of things that you will allow to do and definitely clear no or category.

Ravit Dotan:

And I would also add laws on transparency. If I could make a law, what would the be? I would like transparency about what the company actually measures. What do they track, right? When I think of many people who make food or people who make drugs, I have some information about the processes that they have gone through to make the product safe. I would like to know do they measure anything related to fairness? I would like to know that. And if so, what do they measure? They, I think they should have some kind of leniency in like designing the measures themselves. But tell me, what do you measure to track fairness? What do you measure to track your environmental impact. What do you measure to make sure your explainability is on track? What do you measure to make sure that controllability that you can control this thing? And what do you measure to track impacts on displacement? So I would like to have laws requiring those who develop and also deploy the technologies. Tell me what you measure. Not only when you do your impact assessment at the beginning, but also later when you are working with this tool.

Andreas Welsch:

That's really important. Then again, making sure that it's fair, it's transparent, and there's also a repeatable process and way to assess that. Yeah. Maybe one question. I think we're also in inundated with AI news every day. If I open my feed, from all the productivity hacks to new companies, startups, applications all the glory of what it promises and all the doomsday talk of what the risks are. I think while there's so much hype in the market, I think it's also a good amount of truth to the fact that leaders have a responsibility to help their teams upskill, so that they can benefit from these new technologies. But I think certainly not do that blindly and say go ahead, try it out. We've seen examples where maybe even that encouragement hasn't happened and, people have just tried it out, and what can happen then with your sensitive information and so on or, to your point, with inaccurate information. But I'm wondering, what do you think? How can leaders actually encourage the teams to use generative AI in a way that is responsible?

Ravit Dotan:

Cool. Yes. Love this question. Super important. Yes. I would like to see companies put policies in place. A company must have a policy for usage of generative AI tools. If you do not have a policy, you are opening yourself up to so much exposure in terms of compliance, but also in terms of your trade secrets, like in terms of your efficiency, your output. You have to have a policy. I think that policy should touch on at least the following things. When is it allowed to use the tool and when it is not? Remember that once you put things into ChatGPT, now the data is theirs, unless you do the subscription thing. Do you want your IP with someone else? So just like, when is it okay to use it and what it is not In addition to IP issues, think about what we talked about at the beginning of the call, accuracy problems. So yeah, like when do you use it and when do you not use it? Second, when you use it, what does it mean to use it safely? Transparency. I think employees should say when they're using the tool because of the kind of mistakes that can come up and exactly how they used it. Fact checking. If you do end up using this tool, but there's like anything in that document that is supposed to be true. How do you fact check? I'll, give an example. I saw on LinkedIn someone who said that a PR company reached out to her to invite her to do a thing. And the message is"Hey, X! I really like your work, especially like your book titled blah blah, blah. Can you come and do this thing with us?" The book did not exist. So yeah, she tells them like thanks for the invite. However, like the book does not exist and I certainly didn't write it. So whatever you use it, there's gotta be a policy for fact checking. So I would start with those three things, when to use it and when not. What do you do when you use it which, gotta include transparency and fact checking. So I would start with those. And to conclude, I would also, when you formed this policy, I wouldn't make sure you have the voices of your employees as a part of the conversation.

Andreas Welsch:

Thank you so much. I was going to ask you to summarize the takeaways, but you did that already so beautifully. Thank you so much. I think it's awesome to see so many of you here with us today and having such a engaging dialogue. So look we're, getting close to the end of today's show. Thank you so much for joining, Ravit. Thanks for sharing your expertise with us. I'm super excited by the conversation we've had and the depth you bring to that topic.

Ravit Dotan:

Yes. Thank you again for inviting me. This was super great.

Andreas Welsch:

Thank you. Excellent.