What’s the BUZZ? — AI in Business

Managing Automation Risks (Guest: Ralph Diaz)

June 16, 2022 | Andreas Welsch | Season 1, Episode 5

In this episode, Ralph Aboujaoude Diaz (Practice Leader, HFS Research) and Andreas Welsch discuss how to manage risks from process automation. Ralph shares his insights on assessing these risks adequately and offers practical advice for listeners looking to balance process efficiency and business risk.

Key topics: 
- Understand key risks in AI & automation projects
- Use a framework to categorize automation risks
- Evaluate new automation ideas based on risk

Listen to the full episode to hear how you can:
- Determine risks based upon tasks (automation, data extraction, etc.)
- Create a recovery plan in case of automation failure
- Consider where to keep humans in the loop

Watch this episode on YouTube: https://youtu.be/f_KNjHGXQis



***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.


More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter

Andreas Welsch:

Today we'll talk about managing your automation risk. And who better to talk to about it than someone who's actually built a framework around it: Ralph Diaz. Ralph, thank you so much for joining.

Ralph Diaz:

Hey Andreas, very glad to be here and very excited about having this conversation on risk.

Andreas Welsch:

Awesome. Hey, I know many people in the audience know you from your memes on LinkedIn. But why don't you tell us a little bit about yourself: who you are and what you do?

Ralph Diaz:

No, absolutely. Apart from being funny, I'm actually quite serious. Or I was serious; not anymore. Pretty much my entire career has been in the risk and compliance space. I started my career in the Big Four, managing large projects around IT risk and security, initially very much focused on large-scale ERP deployments, where I was managing the security aspect of those deployments. Then I started moving into more automation-driven roles, where I was essentially looking at how to modernize risk and compliance in general, and also helping business functions get the most out of automation in every sense of the word, across the whole spectrum: from hardcore automation all the way up to UI automation. But I really grew into that role at my previous employer, a large pharma company. I was tasked with running this whole program, and the difficult part was aligning all stakeholders and putting the operating model in place. As part of that work, I had to deal with the risk and compliance element: how to ensure that everything we deploy at speed is always governed, managed, and monitored appropriately, especially when you operate in a highly regulated industry. I probably spent 50% of my time dealing with compliance; the technical part was easy. It was really about making sure that all the risk and compliance functions, from the first line and second line up to even our external auditors, were actually happy with the way we were going to manage the risk of this whole initiative. So that's really where I became, maybe, an expert in risk around automation.

Andreas Welsch:

That's awesome. It really sounds like you've seen quite a lot, and from a lot of different angles. And to your point, if you've done it in pharma, in a regulated industry, then you for sure know the ins and outs. So maybe a quick note to the audience: if you're just joining the stream, please drop a comment in the chat. What's one risk you typically see in automation projects? Just so we get the dialogue going a little, and we'll pick up on those comments. Now, your background is in risk and compliance, and one of my favorite frameworks is a decision tree that I've seen you create, and it's all about risk. I know you posted it a while ago. For anyone that hasn't seen it yet, can you describe some of the core points of it and some of the key risks that one should clarify before starting an automation project?

Ralph Diaz:

Absolutely. It's really a framework that allows you to contain your ambition sometimes. I know that when there is automation, some people are very ambitious and they wanna automate everything. But everything that didn't work before, automation is not gonna magically solve. So it's just a framework to understand, when you have a use case, what do you wanna automate and to what extent? That's probably the first question you ask. And for me, every use case is about a type of automation. You have use cases that are very heavily driven around automation to extract and transform information, which is already quite big. When you start mixing structured, semi-structured, and unstructured data, that by itself is a project. But some automation can be very much focused elsewhere: you already have all those connectors from off-the-shelf systems, so you don't really need to build that kind of extraction automation, but you might need to inject way more automation around the analysis. And that's, again, another type of automation: you bring in certain tools and really try to have the analysis as the key use case for automation. And obviously it evolves. Once you extract and analyze, there is this additional step which is way more complex by nature, because that's where you really start using some cognitive intelligence: you need to take a decision on the back of that information and analysis. And that decision can sometimes be very easy, black or white, binary A or B, or it can be super complex, because you need to take a lot of contextual elements into your decision-making process. And lastly, when you take that decision, you need to implement it. For me, that's the automation of being able to go and implement or trigger an action that goes into a backend system and does some activities; it's really the final step of the automation.

Long story short, for me, any use case has at least one of those four types of automation. If you have all four of them, perfect, you're aiming for end-to-end automation. If you have one or two, you're maybe trying to partially automate something and leave the rest for a human to do. So once you have in mind exactly what type of automation you're trying to do, that's where you ask: to what extent am I actually gonna automate? Am I gonna have a lower level of automation, which is still highly dependent on a human, or am I gonna go to the other side of the spectrum and say, you know what, it's gonna be pretty much full-blown automation, where the automation does the information acquisition, analysis, decision making, and action implementation in a fully autonomous way? What exactly do you want here? I would say the first answer is always: I wanna go full automation. But it's not that simple, because it depends on a lot of factors, and here again you start hitting some risks that you need to take into account: the complexity of the data you're trying to get into the automation workflow, or the criticality of the process, where you can't really afford to be so highly automated because of the risks. So, talking about the risks, this funnel of risk allows you to take whatever use case you have, whatever automation types you wanna enable, whatever automation level you wanna reach, and then start asking some key questions. And the first one will really be: what's the risk of a negative outcome of the automation?
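For readers who prefer code, the four automation types and the "to what extent do I automate?" question can be sketched in a few lines of Python. This is purely illustrative: the class names, the 1-to-5 scales, and the thresholds in target_automation_level are assumptions made for this example, not part of Ralph's actual framework.

```python
from dataclasses import dataclass
from enum import Enum, auto


class AutomationType(Enum):
    """The four automation types described above."""
    DATA_EXTRACTION = auto()        # extract and transform information
    ANALYSIS = auto()               # analyze the extracted information
    DECISION_MAKING = auto()        # decide on the back of the analysis
    ACTION_IMPLEMENTATION = auto()  # trigger the action in a backend system


@dataclass
class UseCase:
    name: str
    automation_types: set           # which of the four types the use case needs
    process_criticality: int        # 1 (low) to 5 (mission critical) -- assumed scale
    data_complexity: int            # 1 (structured only) to 5 (mostly unstructured) -- assumed scale


def target_automation_level(use_case: UseCase) -> str:
    """Illustrative heuristic: the more of the four types you cover, the closer
    you get to full automation, unless criticality or data complexity suggests
    keeping a human in the loop."""
    coverage = len(use_case.automation_types) / len(AutomationType)
    if use_case.process_criticality >= 4 or use_case.data_complexity >= 4:
        return "partial automation, human in the loop"
    if coverage == 1.0:
        return "full end-to-end automation"
    return "partial automation"


# Example: a use case covering all four types, but mission critical by nature.
closing = UseCase(
    name="financial closing",
    automation_types=set(AutomationType),
    process_criticality=5,
    data_complexity=2,
)
print(target_automation_level(closing))  # -> partial automation, human in the loop
```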
Obviously, the happy path of automation is that it always works and always delivers something amazingly fast without any human intervention. But what happens when something bad happens? To be honest, large automation projects tend to fail on a periodic basis, with recovery operations running afterwards. Automation fails just like humans fail, and you need to understand the impact of those failures and whether it's aligned with your risk appetite. When you have an automation, let's say, managing a financial closing process and it suddenly fails during the monthly close, what's gonna be the risk of that if you cannot actually recover quickly? Maybe very high, because you cannot really report your financials. So that's maybe one question where you say, you know what, I'm not gonna aspire to reach such a high level of automation; I'm gonna recalibrate my initial ambition a bit and inject a bit of human in the middle here to ensure that this operation is conducted almost without any problem. That's the negative-outcome risk, and it's a big one for me.

Then you have the technology deployment risk. Yes, you wanna do amazing stuff around unstructured data, but is the technology ready? Do you have the necessary capabilities, from project mode to business-as-usual activity, to actually maintain such a complex technology? That's also something to take into account. I think some companies want to do a lot, but then they realize the effort you need to put in to maintain that technology is not easy anymore: you need experts, and you need to pay for them.

Then there's the risk of user adoption; obviously, that's the biggest one. You have an amazing automation, but there is a kind of resistance, a change management process that's not happening well, and you have pretty much a rejection of the automation. This can take many forms; I'm not gonna drill down into this one here, happy to take a question, but I've also explored a lot around what we mean by rejection and in which context.

One important one, I think, specifically when you try to automate a mission-critical activity, is the risk of losing situational awareness. Even if you monitor and you outsource this monitoring function to many humans, which needs to be 24/7, follow-the-sun monitoring, you still have a bit more control, to be honest, because these humans are still very much aware of the procedures and how they run them. When there is an environmental change, they can quickly recalibrate. That's not as easy with an automation, where you need to go through the entire change management and design process and push that back into production. It's way easier for a human to be aware of any situational change. And when you have automation, there is a big issue around pushing everything into automation and not having any human really fully in the loop anymore; then, when something bad happens, those humans are unable to recover. Even if you have a handbook, even if during your project you keep your SOPs, if humans are not trained and don't actually manage those processes on a daily basis, they're gonna lose the situational awareness. Even if there is just a failure, they're gonna take a lot of time to recover. And then there's the risk of availability and reliability. For me that's also a big risk when you have automation that moves into the complexity of cognitive automation.
When you rely on a lot of data, structured and unstructured, and take data from multiple data sources, internal and external, you start having a reliability risk. What actually happens if this automation fails, or how can I trust the automation? Being able to audit a certain algorithm, or to ensure that post-project this algorithm remains consistent, complete, and accurate, is a lot of work. It's not like you unleash something and then it's gonna work forever. So being able to continuously rely on the automation is a big exercise in terms of the people and experts who need to always be there to ensure that reliability level.

So these are really a bunch of risks. I have more; the framework is around 12 high-level risks that I've called out, and I've mentioned four or five here. But yeah, automation is complex. There are gonna be those kinds of risks, but risks can be mitigated. I'm not saying, oh, these risks are here, let's stop everything and go back to everything being human and manual. Absolutely not. First of all, your initial aspiration can be recalibrated so that you have a bit more human involvement. But if you still wanna aspire to a high level of automation, you need to put in place mitigating controls, remediation actions, and a lot of monitoring activities to ensure ultimately that those risks are always under control. And the more your monitoring activities are automated by nature, the more you're gonna be able to place reliance on the model. So I'm not saying remediate automation by putting in more automation, but monitoring needs to be in place so that you ensure all those risks always stay at an acceptable level and are really aligned with your risk appetite as an organization.
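The "risks can be mitigated and monitored down to your appetite" idea can be sketched in the same spirit. The risk names below come from the conversation; the severity scale, the one-notch-per-mitigation rule, and the within_risk_appetite helper are simplifying assumptions for illustration only.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class AutomationRisk:
    """One high-level risk in the funnel (negative outcome, technology deployment,
    user adoption, situational awareness, reliability, ...)."""
    name: str
    severity: int                             # 1 (minor) to 5 (critical) -- assumed scale
    mitigations: List[str] = field(default_factory=list)
    monitored: bool = False


def residual_severity(risk: AutomationRisk) -> int:
    """Toy model: each mitigating control, plus active monitoring, lowers the
    severity by one notch, never below 1."""
    reduction = len(risk.mitigations) + (1 if risk.monitored else 0)
    return max(1, risk.severity - reduction)


def within_risk_appetite(risks: List[AutomationRisk], appetite: int) -> bool:
    """True only if every residual risk stays at or below the organization's appetite."""
    return all(residual_severity(r) <= appetite for r in risks)


risks = [
    AutomationRisk("negative outcome on failure", severity=5,
                   mitigations=["recovery runbook", "human review step"], monitored=True),
    AutomationRisk("loss of situational awareness", severity=4,
                   mitigations=["periodic manual runs to keep operators trained"]),
    AutomationRisk("user adoption / rejection", severity=3,
                   mitigations=["change management plan"], monitored=True),
]

# False here: one residual risk is still above appetite, so recalibrate the
# ambition or add more mitigations and monitoring.
print(within_risk_appetite(risks, appetite=2))
```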

Andreas Welsch:

So Ralph, I think that's a great point you're making, right? Risks will always be there, but it's a matter of how you plan for them, how you plan for their mitigation, being aware in the first place that they exist, and making a conscious decision about the extent to which you're able and willing to accept those risks. I'm curious: where have you applied the framework, or what would be one use case you can share where you've applied it, and how has it helped?

Ralph Diaz:

Part of my job initially was to deploy RPA. And obviously the big use cases were processes that required a lot of human intervention, actually managed by service providers we had outsourced to. From a compliance perspective, those were also processes with a lot of audit findings, which had cost us quite a lot initially. So these were usually the big cases for us: let's go and hit those processes. Let's take a process that has a bunch of SOX controls in there that have failed previously because of human issues, people not being able to run the control properly; that happens a lot. Let's have automation actually run the process and, at the same time, run those controls, which was a big win-win from an operational perspective and a compliance perspective.

So again, we started hitting the finance processes, and I was talking about the financial closing process. It's a super important process that takes information from multiple systems, SAP included, consolidates all of it, puts it into another system, and then has a lot of manual steps around that. And yes, we quickly realized that initially it was all structured data, so the case looked super simple: you don't have anything to worry about, you're not gonna bring intelligence and additional tech to deliver the outcome. But then we realized that the steps are there and you can't really rationalize or optimize them; you can't eliminate any step. These are very procedural, regulated steps that you need to follow. So ultimately we went through the funnel and started to understand: what if this happens? What if suddenly the bot doesn't run on the required frequency? What if we put in a trigger and the trigger doesn't connect back to the RPA control room because of a connectivity issue, whatever? There were so many things that could go wrong in the process. So yeah, initially you say, you know what, it's a very simple process, and when you bring it into this RPA selection matrix that we have, you put in a lot of numbers, et cetera, and it comes out as amazing for automation. And yes, it was amazing for automation. But this tool didn't take into account the risk that this could have in a highly regulated industry: not being able to work when it has to work, and when it fails, not being able to recover quickly. Because to recover, to go back into an SAP table that is critical by nature, you absolutely need to have credentials first in the privileged access management system. So you need to be a trained user; then, once you have that, you need to go in, elevate your access, and log into the instance. It was huge.

So initially we had the ambition to do pretty much full automation, but in the end we said, you know what, we're just gonna have partial automation: extracting the data, sending it to a human for a review, a sense check, a validity check, and then always having a human input this back into the consolidation system manually. Obviously, RPA did the swivel-chair work of taking the data and putting it back, et cetera, but ultimately a human loaded the final template into the consolidation and reporting system. Because of the risk and our appetite, we didn't want to go further, even though the automation was achievable and feasible without having to do anything fancy; the risk was simply too high.
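To summarize the decision described in this use case, here is a tiny, hypothetical decision rule. The three inputs and the returned labels are a simplification of the story above, not the actual RPA selection matrix Ralph's team used.

```python
def recommend_scope(fully_automatable: bool,
                    mission_critical: bool,
                    quick_recovery_possible: bool) -> str:
    """Toy decision rule in the spirit of the funnel: technical feasibility alone
    does not decide the scope; criticality and recoverability can pull it back."""
    if not fully_automatable:
        return "partial automation"
    if mission_critical and not quick_recovery_possible:
        # Feasible, but the downside of a failure is not acceptable: keep the
        # human review and the final manual load in the loop.
        return "partial automation, human in the loop"
    return "full automation"


# The financial closing example: technically easy to automate end to end, but
# recovery requires privileged access and trained users, so it is not quick.
print(recommend_scope(fully_automatable=True,
                      mission_critical=True,
                      quick_recovery_possible=False))
# -> partial automation, human in the loop
```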

Andreas Welsch:

I think that's a very important point, and I'm sure the audience will appreciate that as well. I remember, a couple of years ago, we were hearing a lot about lights-out finance and autonomous procurement and these kinds of buzzwords, aspirations, and visions. So I feel it's very grounding to hear from you: we aspired to accomplish this, but then when we looked at the risks and weighed the pros and cons and the impact in our industry and our business, we actually made a conscious decision not to go that far and to stop a little sooner, but still have that human in the loop. I think that's the other component that I feel is very important in those automation programs.

Ralph Diaz:

No, absolutely. And I think it's always very easy to say that with automation we have a very well-written contingency plan, with steps for a speedy recovery, et cetera. I don't buy that, specifically when you have a complex process. Yes, you have an amazing contingency plan with recovery steps, but this needs to be run by people. And ultimately this is run by a support function outsourced somewhere, so you again rely on people working a 24/7 shift. It's very simple to say, we have a recovery, don't worry. But for some processes you can't even rely on the contingency book and the recovery steps, because there is a big risk that the human who is actually gonna run those operations is not gonna be trained enough, or is not gonna be there in a timely manner. So it's great to always think about what happens if the automation fails. Yes, we have processes to recover. But if you bake in the risk again and the risk is critical, you don't even want to go that way. You stop there.

Andreas Welsch:

Perfect. That's awesome. Thank you so much. I see we're already a bit over time, so maybe let's summarize. What I'm taking away from our conversation is: take a look at the risk framework. I really love what you've put together and how you've described it just now. There are so many different risks, and I know we've only touched on four or five of them, to get this 360-degree view of automation: on one hand, what is possible when we all get excited about it, but let's also be realistic about where some of the limitations are and to what extent we, our business, and our industry are willing to take on that risk. Folks, we're getting close to the end of the show. Thank you so much for joining us, Ralph. Thank you so much for sharing your expertise with us. Really appreciate it.

Ralph Diaz:

Perfect. Thanks everyone who joined.