What’s the BUZZ? — AI in Business
“What’s the 𝘽𝙐𝙕𝙕?” is a bi-weekly live format where leaders and hands-on practitioners in the field of artificial intelligence, generative AI, and automation share their insights and experiences on how they have successfully turned hype into outcomes.
Each episode features a different guest who shares their journey in implementing AI and automation in business. From overcoming challenges to seeing real results, our guests provide valuable insights and practical advice for those looking to leverage the power of AI, generative AI, and process automation.
Whether you're just starting out or looking to take your efforts to the next level, “What’s the 𝘽𝙐𝙕𝙕?” is the perfect resource for staying up-to-date on the latest trends and best practices in the world of AI and automation.
Get Your Business Ready For The E.U. AI Act (Guest: Andrea Isoni)
In this episode, Andrea Isoni (Chief AI Officer) and Andreas Welsch discuss how you can prepare for the E.U. AI Act. Andrea shares his perspective on the upcoming regulation as well as AI-related ISO certifications, and offers practical advice for listeners looking to better understand the impact of regulation on their AI products.
Key topics:
- Understand the E.U. AI Act's impact and protective measures
- Prioritize AI leaders' top three essential actions
- Discover recent developments in the evolving AI Act
- Identify critical regulations, compliance efforts, and standards
Listen to the episode to hear how you can:
- Determine the risk classification of your AI model
- Assess your Generative AI cyber security exposure, potential leaks, and attacks
- Decide which kind of governance you need for your AI models based on the new ISO 42001 standard
Watch this episode on YouTube:
https://youtu.be/uRPjhONR0xU
***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.
Level up your AI Leadership game with the AI Leadership Handbook:
https://www.aileadershiphandbook.com
More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter
Andreas Welsch: Today we'll talk about how you can get your team ready for the EU AI Act, and who better to talk about it than someone who's helping others do just that, Andrea Isoni. Hey Andrea, thank you so much for joining.

Andrea Isoni: Thank you for having me.

Andreas Welsch: Wonderful. Hey, why don't you tell our audience a little bit about yourself, who you are, and what you do?
Andrea Isoni: My name, as they say, is Andrea Isoni. I'm the Chief AI Officer at a company called AI Technologies. All we do is AI and machine learning development for other companies, both private and public, which means I touch a variety of projects across many different industries and countries, with all kinds of data, multimodal as we say nowadays. Apart from that, for full disclosure, since I guess we are going to touch on the ISO certification for AI: I'm on the committee board for the new ISO standard, the board member who helped draft and approve the policy.
Andreas Welsch: Wonderful. I'm excited to have you on the show and looking forward to learning what's top of mind there, not only for the EU AI Act, but also what's happening around other certifications, standards, and so on. For those of you in the audience, if you're just joining the stream, drop a comment in the chat with where you're joining us from today, because I'm always curious to see how global our audience is. Now, Andrea, what do you say, should we play a little game to kick things off?
Andrea Isoni: Sure.
Andreas Welsch: Alright. Good. Let's see. This game is called In Your Own Words. When I hit the buzzer, the wheels will start spinning, and when they stop, you'll see a sentence. I'd like you to answer with the first thing that comes to mind, and why. In your own words. To make it a little more interesting, you only have 60 seconds for your answer. Andrea, are you ready for What's the Buzz?

Andrea Isoni: Yes.

Andreas Welsch: Perfect. And those of you in the audience, if you'd like to participate as well, put your answer in the chat. So here we go. If AI were a month of the year, what would it be? 60 seconds on the clock.
Andrea Isoni: If AI were a month, it would try to predict the next month until it got to the end, and then it would make a mistake. It would hallucinate: after December, instead of a month, it would give you "banana". Something like that.
Andreas Welsch: Awesome. That definitely captures the state of AI in many areas today. Great answer. Let's come to the questions we've prepared. Obviously, we know the EU AI Act is coming, right? It was announced at the end of last year. Just before that, a couple of things extended the process; I know Germany and France were in the mix there and gave additional input and feedback. And from what I understand, it has evolved quite a bit from its original draft. But I'm wondering: who's affected by the EU AI Act? Whom does it actually aim to protect?
Andrea Isoni: First of all, it's a question of principle. The first thing to remember, at least as a principle, beyond the actual wording of the Act and its policies, is safety. Safety of beings, and above all, human beings. That's the first thing to remember. It's not about business per se. Obviously it affects businesses, especially businesses that touch consumers, but the goal is to make AI safe, safe in usage, whatever that usage is.

So there are different tiers of risk. And when I say risk, I don't mean business risk or anything like that. I mean risk to human life, or hindering life in some way, shape, or form. The tiers range from unacceptable to acceptable. At the acceptable tier, it will be just like GDPR as we know it today. Unacceptable means you can't use it, full stop. And in between, the other tiers are actually regulated: you need to put governance in place and make sure you're compliant with certain requirements.

It's probably interesting to understand, from a regulatory point of view, what is not allowed anymore in a practical sense. Biometric identification in public spaces, including cameras, will not be allowed, unless it's the police or an institution operating under a restricted procedure. I don't think we have time to go through everything in 25 minutes, but another very important one is social scoring. Whenever you have social scoring of an individual, where you're targeting a specific person rather than a group, based on their emotions or their racial background, that will not be allowed.

I want to point out that this regulation is not completely misaligned with other regulations; some of this was the same before, and I think that's important. Think about airlines. Everyone knows the pattern, right? You go on the web to find the best price, you come back five seconds later, and what happens? The price has risen. You think: ah, they targeted me because I looked at it before. No. I can tell you that was already illegal. Even before, algorithms for airlines and the like could not target the individual, only a group of people. If they raise the price for you, they have to raise it for everyone else. So the new AI regulation just aligns with what was there before. That's important to understand: it's not something they invented out of the blue.
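To make the tiered structure Andrea describes concrete, here is a minimal Python sketch of how a team might triage its own AI use cases against the Act's risk tiers. The tier descriptions and example use cases are illustrative assumptions, not the Act's legal criteria; real classification comes from the Act's annexes and legal review.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright (e.g., individual social scoring)"
    HIGH = "allowed but regulated: governance and conformity checks required"
    LIMITED = "transparency obligations (e.g., disclose that it's a chatbot)"
    MINIMAL = "no new obligations beyond existing law such as GDPR"

# Illustrative triage rules only -- real classification comes from the
# Act's annexes and legal review, not from code like this.
def classify(use_case: str) -> RiskTier:
    prohibited = {"social scoring of individuals", "biometric id in public spaces"}
    high_risk = {"credit scoring", "hiring screening", "medical triage"}
    if use_case in prohibited:
        return RiskTier.UNACCEPTABLE
    if use_case in high_risk:
        return RiskTier.HIGH
    if "chatbot" in use_case:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

for uc in ("social scoring of individuals", "hiring screening", "support chatbot"):
    tier = classify(uc)
    print(f"{uc!r} -> {tier.name}: {tier.value}")
```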
Andreas Welsch: Thank you for sharing. I think that's an important part as well, right? How it aligns with other regulations. Out of the EU, certainly the other landmark regulation is the GDPR, which was introduced a few years ago. So on one hand, the Act protects individuals. On the other hand, there are concerns that it adds additional burden or might stifle innovation, that it over-regulates. But the risk-based approach you mentioned, whether something is high risk, medium risk, or acceptable risk, is definitely a classification that you in your own business can make with a good degree of certainty, right? That's a good north star for what's allowed and what's not. From your perspective, if you're in an AI role, if you're an AI leader in a business today, what are the three things that you need to know and do now?
Andrea Isoni: If you are a business that knows it's using AI somehow, and when I say AI, that can also be an API from another provider, the first thing you need to check is the worst potential outcome that can happen to your clients, employees, or third parties that use your service. Usually that's clients.

Once you identify the worst potential harm, the second aspect is to check your cybersecurity exposure, because if anything, this regulation will increase cybersecurity budgets. The reason is, think about a chatbot. In theory, the worst that can happen is that it insults a client, or for whatever reason starts misleading clients into buying something it shouldn't sell. Obviously you think this won't happen, because you're controlling it. However, imagine a hacker who gets into your system and uses your chatbot to mislead your clients. The chatbot is exploited by an external party, and now you're committing something against the rules. So the second step is to raise your cybersecurity barriers so no attacker can make that worst case happen.

The third thing is governance. You need the right governance in place. If you remember the GDPR rollout, you had data protection officers, data stewards, data champions, all those roles. As a rule of thumb, just translate each of those roles into an AI role: the data protection officer becomes an AI officer, data stewards become AI stewards, data champions become AI champions. If you do that, and again it's only a rule of thumb, you can start your AI governance the same way you did for data under GDPR.

So essentially three things: identify the worst potential harm; check your cybersecurity exposure against that harm, because a hacker could exploit it; and set up governance resembling what you did for data under GDPR.
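To make the third step tangible, here is a small Python sketch of Andrea's rule of thumb for translating GDPR governance roles into AI governance roles; the AI-side titles are illustrative assumptions rather than roles defined by the Act.

```python
# A sketch of Andrea's rule of thumb: mirror your GDPR governance
# roles as AI governance roles. The AI-side titles are illustrative
# assumptions, not roles defined by the EU AI Act.
GDPR_TO_AI_ROLE = {
    "Data Protection Officer": "AI Officer",
    "Data Steward": "AI Steward",
    "Data Champion": "AI Champion",
}

READINESS_STEPS = [
    "1. Identify the worst potential harm to clients, employees, or third parties.",
    "2. Check your cybersecurity exposure: could an attacker cause that harm?",
    "3. Set up AI governance by mirroring your existing GDPR roles:",
]

for step in READINESS_STEPS:
    print(step)
for gdpr_role, ai_role in GDPR_TO_AI_ROLE.items():
    print(f"     {gdpr_role} -> {ai_role}")
```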
Andreas Welsch: I'm sure there's a lot more detail one can go into on each of them. I like how that aligns with some of the previous topics we've covered on the show, like the OWASP Top 10 for Large Language Models, which already outlines some of the things you mentioned. And in a couple of weeks, I'm looking to have someone on the show to talk more about the security exposure that comes with generative AI: on one hand, hackers exploiting AI to attack you and your systems; on the other, what you can do within the company. So a great connection there as well.

Obviously, the AI Act has been hotly debated for the last couple of years: it stifles innovation, it completely misses the point, it's over-regulating. At the same time, I think governments do have a responsibility to protect their citizens, and I empathize a lot with the intent the European Union has put forward in putting this regulation together. But the Act has also been hotly anticipated in the industry for that whole period, so I'm wondering: what's really new, what has changed since the initial discussions? Obviously the high-risk, medium-risk, low-risk classifications are still there. But with the introduction of generative AI, ChatGPT, and so on, what would you say has changed that people need to be aware of?
Andrea Isoni: In business as we experienced it until yesterday, we were trying to push as much sophistication into the model as possible. As you mentioned, large language models are coming into place now: billions of parameters, actually trillions. What that means is that the more sophisticated the model, the less intelligible it is, the less explainable it is. All of this is going to change when this regulation comes into place, because a model that is not explainable makes you even more affected by the regulation: you need to pay even more for governance and things like that. So the actual end result, and I don't like to put it in these terms because it's imprecise, but it's the simplest way to say it, is that the number of parameters of models will go down. Why? Because the higher it is, the more you need to spend on governance, and obviously businesses try to minimize cost.

Now, where did the EU AI Act start? It started with safety, which is still here. The other aspect, which is only marginally here compared to where it started, is copyright law, essentially protecting people's intellectual property. There was much more of that in the original regulation, but it was even more difficult to get all the parties to agree on an actual intellectual property law. So there's just a hint left of what the regulation was originally for.

It's also important to say, since you mentioned France and Germany: yes, they are still discussing, but this regulation is almost approved. I'm not a legal expert, but as I understand it, the final step is when the EU Parliament and EU Council actually ratify the Act, and that hasn't happened yet. From that moment, there is a transition period, and the transition will not be the same for all tiers. The highest-risk tier will have six months, and then it goes gradually up to 24 months; 24 months, I think, is for the lowest acceptable-risk tier. So slowly but surely from there. But the clock starts when the European Parliament ratifies the Act, which, as far as I know, hasn't happened as of today. Frankly, I don't know when that day will be.
Andreas Welsch: Maybe one of the obvious questions, if you've looked into the AI Act or read it, but let's make it very explicit: even though it's called the EU AI Act, it doesn't only apply to companies in the EU. Can you talk a bit about that and what it means?
Andrea Isoni: First of all, what's the overall status in the world? Obviously, the EU is the most advanced here, but even China, I wouldn't say it's similar, I don't know enough to say, but they have regulation as well. Funnily enough, the EU and China are leading the pack on this, while the US has a very light touch, probably because the US is more afraid than the EU of its own businesses losing competitiveness. That's the main reason. For the rest of the world, I think there will be adoption of these rules anyway, an alignment. I'll give you an example of what I mean by alignment. There is an international organization, the OECD, and they wrote a definition of what an AI system is, and that definition is actually used in today's EU AI Act.

All I'm trying to say is that hopefully, and that's my hope, all these regulations, although in different continents or parts of the world, will somehow be transferable or compatible. That's very important to me. I really believe we need worldwide regulation with no distinctions, because otherwise you get too many imbalances. A hacker in a non-regulated country can easily attack you in a regulated country. Then you have the worst of both worlds: you are regulated, so your company is limited, but at the same time you're still exposed to the ones outside. The only way to avoid that is for everyone to play the same game. If not everyone plays the same game, certain countries may exploit that against you, even if you think you are safer. Which is funny, but that's the way it works.
Andreas Welsch: That's a very important point as well. Now I'm wondering: you mentioned that because of the regulation, you're anticipating models will actually get smaller, they'll have fewer parameters, they'll be easier to explain. Does that mean there will be more fine-tuned models, more domain-specific models, smaller models overall, that the large vendors will also provide because they're easier to explain? Or where do you see this having the most impact? Is it in your own business, if you're building your own model?
Andrea Isoni: There are two different approaches here, I would say. One is what the business needs. And most of the time, businesses need a very simple automated solution. For a simple automated solution, frankly, even a neural-network architecture is probably too much most of the time. So what the regulation is forcing businesses to decide is: if you can do it with logistic regression, please do it. Why? Because logistic regression is very explainable, you need to do little governance on it, and it costs less. So it's better for the business. If you start to use increasingly complex models, then you have to reckon with more regulation: more tests, more safety checks. Because frankly, nobody today can fully explain a large language model. Even assuming you know all the data, and obviously if you're using an API from OpenAI or Anthropic, you don't even know the data it was trained on, which is not a problem per se, you definitely don't know the path of the neurons that are activated. So this is very difficult to explain today, and you will need even more tests to check the security and safety of these models, and more governance. That's one part. And second, we don't really understand how they work anyway. Until these two things are mature and under control, it makes no sense to scale to even more parameters.
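As a concrete illustration of why a simple model is cheaper to govern, here is a minimal scikit-learn sketch, assuming scikit-learn is installed and using synthetic data with made-up feature names: a logistic regression's coefficients can be read directly as feature weights, which is exactly the explainability Andrea refers to.

```python
# Why a simple model is cheap to govern: a logistic regression's
# coefficients are directly readable as feature weights. Synthetic
# data and hypothetical feature names, for illustration only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["age", "income", "tenure", "usage"]  # hypothetical

model = LogisticRegression().fit(X, y)

# Each coefficient shows how a feature pushes the prediction up or
# down -- transparency a billion-parameter model cannot offer.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>8}: {coef:+.3f}")
```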
Andreas Welsch: I remember when GPT-4 was announced, right? It was all about how many parameters it has, or, with OpenAI working on GPT-5, how many more parameters that will have and how it grows. But if you're anticipating actually smaller models, fewer parameters, also for the purpose of explainability and interpretability in the context of regulation, that makes a lot of sense. Now, we touched in the beginning on the EU AI Act being the landmark here. But there are also other regulations, compliance efforts, and standards when it comes to AI. What are some of the ones that are really important, that people need to be aware of but might not be as familiar with?
Andrea Isoni: So yes, there is ISO, and it's called the International Organization for Standardization for a reason. If you're familiar with it, they have policies for quality assurance and information security assurance: information security is 27001, quality is 9001. Many companies take these policies to certify that they follow certain quality or information security standards, and that helps them be less exposed and have the right governance in place. In exactly the same way, a few days ago, if not one or two weeks, ISO published a new standard, 42001, which is about AI. Within this policy, you create your own governance around your AI model. Not the data, the model: essentially, how to get the accuracy right, how to do the data pipeline right, how to check for hallucinations, using the right metrics, in a governance-safe way. And that is obviously in tune with the EU AI Act, because whenever this regulation is effectively approved, the question becomes: how do I know I have the right governance? Do you follow a standard? There's no guarantee of anything, that would be a fallacy. But at least you can show you're following certain worldwide-accepted standards in your governance, like you do for quality or information security.
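ISO/IEC 42001 is a management-system standard, so it prescribes processes rather than code. Still, here is an illustrative Python sketch, with assumed metric names and thresholds, of the kind of automated evidence such a governance process might collect: checking a model against agreed limits and keeping an auditable record.

```python
# An illustrative check an AI management system might automate:
# evaluate a model against thresholds agreed in governance and keep
# an auditable record. Metric names and limits are assumptions.
import json
from datetime import datetime, timezone

THRESHOLDS = {"accuracy": 0.90, "hallucination_rate": 0.05}

def governance_check(metrics: dict) -> dict:
    failures = []
    if metrics["accuracy"] < THRESHOLDS["accuracy"]:
        failures.append("accuracy")            # must stay above the floor
    if metrics["hallucination_rate"] > THRESHOLDS["hallucination_rate"]:
        failures.append("hallucination_rate")  # must stay below the cap
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "metrics": metrics,
        "passed": not failures,
        "failed_checks": failures,
    }
    print(json.dumps(record, indent=2))  # in practice, write to an audit log
    return record

governance_check({"accuracy": 0.93, "hallucination_rate": 0.08})
```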
Andreas Welsch: I think it's great to see that some standardization is being introduced that you and your business can, and potentially should, adhere to. So I recommend taking a look at that; I'm sure you're quite active in that area as well. Now, we're getting close to the end of the show. Could you summarize the three key takeaways for our audience today?
Andrea Isoni: First, as I said, understand the potential harmful impact of the AI model you are using. Once you understand that, check whether there is any cybersecurity exposure or leak that could be exploited. And third, understand which kind of governance you need to have around your models. That's for business people.
Andreas Welsch: Awesome. Short and sweet. Andrea, thank you so much for joining us and for sharing your expertise with us. It's great having you on. And for those of you in the audience, thank you for learning with us today.