What’s the BUZZ? — AI in Business
“What’s the 𝘽𝙐𝙕𝙕?” is a bi-weekly live format where leaders and hands-on practitioners in the field of artificial intelligence, generative AI, and automation share their insights and experiences on how they have successfully turned hype into outcomes.
Each episode features a different guest who shares their journey in implementing AI and automation in business. From overcoming challenges to seeing real results, our guests provide valuable insights and practical advice for those looking to leverage the power of AI, generative AI, and process automation.
Whether you're just starting out or looking to take your efforts to the next level, “What’s the 𝘽𝙐𝙕𝙕?” is the perfect resource for staying up-to-date on the latest trends and best practices in the world of AI and automation.
Shape Your Business' Accountability For Generative AI (Guest: Aurelie Pols)
In this episode, Aurelie Pols (AI Data Privacy Expert & Advisor) and Andreas Welsch discuss how leaders can shape their business’ accountability for generative AI. Aurelie Pols shares her perspective on building trust and accountability and provides valuable advice for listeners looking to navigate the challenging landscape of data privacy and regulation.
Key topics:
- Understand how businesses need to earn their customers’ trust
- Learn what AI leaders need to consider
- Learn who in the AI ecosystem is accountable
Listen to the full episode to hear how you can:
- Assess AI systems to apply risk frameworks
- Protect vulnerable groups of the population from AI harm
- Loop in your AI ecosystem to increase accountability
Watch this episode on YouTube:
https://youtu.be/vzNg8xCzTWc
***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.
Level up your AI Leadership game with the AI Leadership Handbook:
https://www.aileadershiphandbook.com
More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter
Andreas Welsch: Today we'll talk about shaping your business' accountability for generative AI, and who better to talk to about it than someone who's been working quite a bit with European institutions on topics like that: Aurelie Pols. Hi, Aurelie. Thanks for joining.
Aurelie Pols: Hey, Andreas. Good to see you.
Andreas Welsch: Awesome. Hey, why don't you tell our audience a little bit about yourself, who you are and what you do?
Aurelie Pols: Sure. I typically describe myself as a recovering data-holic. My background is in statistics and econometrics, and I flocked to the internet in the early two thousands. After selling my startup in Belgium, I moved to Spain to exchange the startup for a family. And I got worried about this thing called "privacy". So I started listening to these people talking about this thing called GDPR. And while I was still playing with statistical tools, I started listening to the lawyers as well. Since then, I have created my own company. I support different enterprises on a global level with their obligations with respect to GDPR, and then the other privacy laws that come up in the United States, in certain states globally, and things like that. So I basically help startups, certainly in B2B, align with compliance obligations with respect to privacy legislation.
Andreas Welsch: That's awesome. I know it's anything but a trivial space, and if you're venturing into AI or building AI-enabled products, it has to be top of mind, especially if you want to play in global markets. So I'm really excited to have you on the show and learn from you as well today.
Aurelie Pols: Thank you for having me, Andreas.
Andreas Welsch: Perfect. Now Aurelie, should we play a little game to kick things off? What do you say?
Aurelie Pols: Sure, yeah. Hit me.
Andreas Welsch: Perfect. So this game is called In Your Own Words, and when I hit the buzzer, the wheels will start spinning. You'll see a sentence that I want you to answer in your own words. And to make it a little more interesting, you only have 60 seconds to come up with an answer. In your own words. Now, what do you say? Are you ready?
Aurelie Pols: Okay, let's go.
Andreas Welsch: Perfect. Then let's do this. If AI were a month, what would it be? 60 seconds on the clock. Go.
Aurelie Pols: I would say September.
Andreas Welsch: And why? Why is it September?
Aurelie Pols: Because we're not in an AI winter. I don't think so. It's also not the winter of humanity. But I don't think it's AI spring either, because, generally speaking, I don't think AI is new. But it's certainly changing. So it's a new season, and there have been some fruits, certainly with data, coming after the summer, and new challenges. And we'll see what we're going to do with the AI. And as I like to cook, maybe see how the jam is made, or how the sausage is made, to make sure that this goes in the right direction for our societies.
Andreas Welsch: That's perfect. Great way to come up with that on the spot. Let's jump into the questions for our session. And please feel free to put yours in the chat as well if you're in the audience. We really want to make sure that this is an interactive session for you, too. We talked about this as we were preparing for today's episode: it's not something new, right? Businesses have had to earn their customers' trust for a very long time, and they've had to be accountable, and they are accountable. But now, if we're looking at AI, with all this hype and all this promise on one hand and all these dystopian thoughts and concerns on the other, what do you see really changing with generative AI? How do companies and businesses need to maintain and win their customers' trust with accountability?
Aurelie Pols: Yeah. I've been trying to make sure I understand the right definition of generative AI and understand what this is. There's been a new hype, certainly following the release of ChatGPT at the end of last year. And so there's this idea of interaction with the consumer, and the fact that this basically creates outputs one way or another. And I think that's the big change, at least as I see it. There are so many angles here; you can answer this question in so many different ways. But at this moment in time, I would say it's this interaction with the end consumer and the fact that it generates something. It spits something out. Whereas before, I think, statistical models were used to predict certain things. Probability is nothing new, and neither is uncertainty. Processing power has increased through the decades. But this has always created decisions that were imposed on individuals. Whereas here, maybe there's a way to start thinking about some form of interaction, more than just the fact that, okay, I'm going to send you this type of message because I have profiled you in a certain way.

And so this also brings along the reflection around the chain of responsibility. When we talk about accountability, as you say, companies have tried to become more accountable through time. Tried. I don't think there's perfection; it's something dynamic. But it also means that if you as a user decide to use certain things or ask certain questions, it's obvious that you have a role to play in how that information is transformed and brought back to you. And I think the major challenge that we're going to face is to understand: what are the different roles played within the different flows of information, data, images, video, whatever it is that the output is? And who is responsible for what? We aren't too bad at defining that when it's data. But systems are getting more and more complicated as well. And I think this is what the GDPR also started to say: okay, you are responsible in front of the customer for that type of data. So you can't just share it all the way through, at least in Europe. So I think the change that's being brought about is: let's have more conversations about responsibility, but also including the user of these systems. Does that make sense?
Andreas Welsch: That makes perfect sense. And I think especially comparing and contrasting B2B and B2C, I see a lot of consumer-facing applications today, especially around generative AI, that are maybe not necessarily at the level where enterprise IT or enterprise information security expects controls and so on to be. But we'll get to the point of who all the different players are in a little bit on today's show. So I'm curious to learn more from you on that topic as well. Now, maybe looking specifically at leaders and emerging leaders around AI in a company, those who are building AI-embedded applications or maybe leading a CoE, a Center of Excellence: what do you feel they need to ask themselves when they incorporate generative AI in their applications now? And why is that so important? And why is it so different, again, from deterministic AI and things we've done in the past?
Aurelie Pols: Yeah, I think the piece of the puzzle that we're actually missing, but that is increasingly being defined within society, though not yet within companies, is: what are the consequences of what we're doing? What is the potential harm with respect to the uses of the technology? And this is where we see it increase within companies: starting to think about fairness. Are we thinking about fairness? Are we thinking about discrimination? There have been discussions about this around facial recognition, where people of certain races have problems using these systems because, basically, they're not represented in the initial foundational models. As we all positively jump onto this bandwagon, and we also hear doomsday scenarios, I think we just need to stop and say, okay, where are the potential harms? And how am I going to mitigate them? Is it possible? Maybe up to a certain point, yes. Maybe up to a certain point, also no.

And so it boils down, certainly for companies, to saying, okay, what do I want to do with this technology? Why am I going to use this? And this is also the stage where I see a lot of my customers, certainly engineering teams, very positively jumping on this, saying, okay, we need to become an AI-driven company, but we need to figure out use cases. We need to figure out what this means for us. Compared to 2016, when there was discussion around data protection, privacy, and things like that, everybody was like, God, there are no harms. Why do you care about this if there are no consequences? What surprises me today is that there are so many doomsday scenarios. I have five frameworks of harms, and it's amazing. It's proliferating. So it means that at least we know what should be avoided and how we need to think about potential negative consequences, and we can then have conversations about what to do about this.

So this is something I think teams really need to start thinking about. And it was already embedded before inside the GDPR; it's a risk-based framework. Privacy and data protection laws in Europe don't say you're not allowed to do something. They say you are accountable. You are responsible for what you do. And this is pushing that further. Hopefully it's possible to mitigate these harms. I don't think you can catch everything. You have been in software development as well; there's always a use case you've never thought about. And then there's a discussion about whether you throw resources at solving for it. Yes? No? It's always about that. But it's this idea of being risk-based: okay, what is my system supposed to do? What do I want to do with this technology? And where is the potential harm for the user, or the risk for my collaborators or the companies I work for? So this is how I see it today.
Andreas Welsch: Great point. And I think especially that part about the risk-based assessment, right? That you are accountable and responsible, and that there's a clear definition and delineation. I'm wondering: is it really just businesses that this applies to? And maybe, which type of businesses does it apply to? And who else in the broader ecosystem that works with AI and builds AI solutions do these aspects apply to?
Aurelie Pols: I think it applies to any business in the chain of uses of this technology, but it's complicated to really pinpoint who does what. And it really boils down to understanding what you're bringing to the table with your technology, which decisions are being made by your technology, and potentially what the consequences are. So, as I mentioned before, my background is in GDPR, and there are a lot of discussions about legislation and obligations and things like that. To be honest, it's not really new. We've seen it inside legislation before; there's an article within the GDPR about what's called automated decision-making. So basically predictive analytics, if you want to call it that. Let's imagine I predict that you prefer strawberry yogurt or banana milkshake. The fact that I predict that and send you a message about it has no consequences whatsoever on your life if I get it wrong. If I now predict that you are suicidal, that's slightly different. And so, who is responsible for deciding whether this is acceptable or not within the chain of command? You can say, I just have an engine that does analysis and spits out information. But how this is then presented to the consumer, to the data subject, to the citizen depends on the last line that decides how to use that technology. And those are also the ones responsible for it. The question is: is what you're putting in front of a user safe? Is it in line with their expectation of something that would not harm them, or is there something that could be potentially negative for them? And it's not an easy question to ask, because, as I said, use cases sometimes fall out of the sky and you're like, God, I never thought of that.

So I think, also in that sense, we need to think about it in a dynamic way. There have been a lot of discussions about how, when a new product comes out on the market, it's tested, it goes through different authorities, and things like that, and about this idea that the same should happen with technology. I totally agree with that, I think. But I'm not sure I can imagine the guardrails, and I have a lot of imagination. I think there's something that needs to work in a more dynamic way than before, because new issues will arise, new use cases. The way the technology is used will change, and we need to think about that. And there's still a lot of uncertainty in terms of legal obligations as well, to be honest.
Andreas Welsch: So what I think I'm hearing you say, especially in that last part, is: we need to get more and more agile. We need to make decisions more and more quickly and adapt as the technology landscape changes, along with what we're able to do with it.
Aurelie Pols: Yeah, I think that, but it's certainly not only the companies that work with the technology or decide to embed it. It's also the enforcement authorities that need to support individuals in their uses.
Andreas Welsch: So you mentioned the word safe earlier, and Jesse made a great comment here in the chat. He said the end users expect a safe product, and the operators and owners of the product need to focus on meeting that goal. And for me, that really encapsulates that level of trust, that level of accountability, that we need to foster between those that provide and develop software and those that use it. Just looking at additional messages here: Aamir says educating people about the technology and its implementations is a good method. I think so as well, Aamir, absolutely right. We need to educate: what is the technology capable of? What should you use it for? What should you not use it for? Whether that is as an individual user coming up with some guardrails or guidelines, but also for those that provide that technology. And looking at Fabrizio's comment just to round this out, he says, hey, I agree, but educating them about their rights works even better and should come first. So we're already seeing different perspectives on that topic here, which is great. Thank you for the discussion and for the dialogue. I think it's great to discuss these things and have different viewpoints: hey, this is what companies should do, what leaders should do. And the next question I typically get from folks in the audience is: but can you make it tangible? How can you actually implement that? Maybe that goes back to the frameworks that you mentioned earlier. But how can leaders actually implement these things and become more accountable as they look at generative AI?
Aurelie Pols: Yeah, it's not an easy task, and I've certainly seen that in privacy as well. There's no magical blue pill for it. I think, also related to what was said before in the different comments (and thank you for those), there's the possibility for consumers to flag harm back to the product or the service that they're using. I suppose we've all had these moments where you pick up the phone and you dial, and then you have to press one for this and two for that. So there are people who have been outliers, who know the issues with these systems, and that actually happens. So certainly there are frameworks out there, and there's been a lot of work around, for example, fairness. What does fairness mean? How do you make sure that the population upon which your foundational models are built is actually proportional and distributed to reflect the population that you're serving? Things like that. But I think it's also in this reactive collaboration: the fact that there is a channel towards the company serving that system that allows people to flag issues as they come up.

So I think a lot of companies need to build on the obligations that they already have under legislation like consumer protection and data protection. There are bases out there on which you can build. And I always tend to compare European legislation to the Sagrada Familia in Barcelona: it's like a cathedral we keep on building up. It would be nice if the United States did a bit of the same, because you've got biometric data laws here, and then you've got breach laws there, and then you've got the states doing that, and now they're talking about AI and generative AI in some form of a working group or law. It would be nice if we could just keep on building in a coherent way. So my recommendation would be: start with data protection. Start with consumer protection. Make sure you indeed embed this idea of agility. You need this information to continue to flow, and you need to be able to iterate on certain issues. And obviously it depends what kind of data you're using, or what kind of harms we're talking about.

At some point, and this is also a conversation we're having in Europe, there's this idea that you have to pull the plug on certain harms because they're not acceptable. Even the United States has this when you start talking about children: COPPA, for example, or HIPAA, the sectoral laws that are there. You can feel that in the United States, too, there are things we should be more careful about, or potentially say we don't want. And it's pretty recent. I think in the last, what, six to nine months, there have been discussions of actual bans on uses of data, for example targeted advertising to children. It's always the populations that are considered vulnerable, and there's more and more analysis around this as well. I would say, if you are in these fields, whether it's health or pharmaceuticals or things like that, it's clear that your accountability has to be way higher. And you have to have a program that is coherent, that is dynamic, and that continues to be built up. So it really depends whether you're talking about the banana milkshake or the strawberry yogurt, or about something slightly more dense and complicated. But there are frameworks out there. I typically start working always with the people in security. We talk about privacy. We embed ethics.
We're starting to talk about social and corporate responsibility as well, because everybody cares about the carbon footprint. And then, yeah, start talking about, okay, generating information, generative AI: what does this mean? What are the consequences? How can we best figure out how to prove that we have done the right thing as a company? I think this is the best way to go. And on the other side, for the public, it's also a matter of education. I found my son using ChatGPT a couple of months ago to do some of his homework. I'm sure he's not the only one. We had a chat about it. It was for physical education, so, to be honest, I don't really care. So it was like, okay, you can do it for that, but not for that, and things like that. But it was interesting; we had conversations about what the technology means. And so it's really this collaborative aspect: yes, technology, great technology, there are super things we can do with it, and there are issues as well. And we are a society that needs to live together, that needs to collaborate and go in the right direction. So let's try to put up guardrails, let's try to educate, let's try to prove we're doing the right thing, and continue collaborating.

And my hope, to be honest, when it comes to AI is global collaboration, because we haven't really seen that for privacy. It was, okay, the GDPR, and let's just ignore the GDPR for three years, certainly in the United States, until other countries came out and said, we think it's a good idea, so we'll just copy bits and pieces: a ripple effect. And now what we're seeing is that Europe comes out with the AI Act, which is partially based on the OECD reports. So come on, guys. We need to work together to make sure that we come up with something coherent that works for everybody.
Andreas Welsch: Thank you. I think that's an awesome summary and an awesome call to action as well, on what we need to do to really advance this, create more accountability by businesses and, overall, the ecosystem, and educate users about AI. I really liked the different examples you gave, whether it was the strawberry yogurt and banana milkshake, targeted advertising as a contrast, or the personal conversation you mentioned about using ChatGPT and for what purpose. We're getting close to the end of today's show. Aurelie, thank you so much for joining us and sharing your experience with us, and thank you to those in the audience for learning with us.
Aurelie Pols: Super. Thanks a lot, Andreas. Take care. And thank you for listening.
Andreas Welsch: Excellent. Thank you so much.