What’s the BUZZ? — AI in Business

How Businesses Can Trust Generative AI (Guest: Ramsay Brown)

May 02, 2023 Andreas Welsch Season 2 Episode 7

In this episode, Ramsay Brown (Founder & CEO, Mission Control) and Andreas Welsch discuss how leaders can trust generative AI in their business. Ramsay shares his perspective on responsible AI practices and provides valuable insights for listeners looking to implement AI.

Key topics:
- Hear the key questions for responsible AI
- Address common AI trust challenges
- Learn about the future of work with generative AI

Listen to the full episode to hear how you can:
- Align AI with ESG goals
- Understand responsible AI greenwashing
- Create better policies for AI use
- Prepare for the future of work with AI
- Take action to build trust in generative AI

Watch this episode on YouTube:
https://youtu.be/AOyITl4Khvg


***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.


More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter

Andreas Welsch:

Today we'll talk about how leaders can actually trust generative AI, and who better to talk about it than someone who's focusing on just that: trust in AI. Ramsay Brown. Hey Ramsay. Thanks for joining.

Ramsay Brown:

Thank you so much for having me. It's an honor to be here.

Andreas Welsch:

Awesome. Hey, why don't you tell us a little bit about yourself, who you are and what you do?

Ramsay Brown:

Absolutely. Thanks. I'm Ramsay Brown. I'm the CEO of Mission Control, an AI safety SaaS company building the AI trust ecosystem. We view our mission as accelerating quality, velocity, and trust in the data science lifecycle and generative AI, and that's reflected in our products, our projects, and our communities. We focus on how to help teams win with AI trust at scale, and that means being able to deploy the kinds of things that help them move faster while breaking fewer things. The landscape of AI trust is just about all we think about, and we're really grateful to get to discuss this with you today. So thanks so much for having me.

Andreas Welsch:

Awesome. Thanks for hopping on. Yeah. Really, the point of moving faster and breaking fewer things is something that deeply resonates with me, especially as I've been looking more into generative AI and trying to make sense of what it all means, what it leads to, and what the opportunities are. Ramsay, should we play a little game to kick things off? What do you say?

Ramsay Brown:

Let's do it. Let's do it.

Andreas Welsch:

Fantastic. So this game is called In Your Own Words. When I hit the buzzer, the wheels will start spinning. When they stop, you'll see a sentence, and I'd like you to complete that sentence with the first thing that comes to mind and why, in your own words. To make it a little more interesting, you'll only have 60 seconds to do that. Are you ready?

Ramsay Brown:

Hit me.

Andreas Welsch:

Okay. Perfect. Then let's get started here. If AI were a movie genre, what would it be? 60 seconds. Go.

Ramsay Brown:

Bildungsroman. Bildungsroman is a genre of novel that translates really well into film: the coming-of-age story. And it's so easy to talk about AI as apocalypse, or AI as procedural drama, or AI as utopia. But none of that cuts it, because right now you are living through the puberty of artificial intelligence. You're living through the awkward growing phase in which we are moving from the laboratory to the living room. And as we find that capabilities and market penetration are predictably downstream of capital markets' ability to continuously funnel private equity into the synthesis of machine intelligence, everything that's happening is what happens when we go from systems that are very hard to use, very expensive to build, and very low in capability to systems that are pervasive, capable, affordable, and frictionless, and now we can start truly building on top of them. And that process is a coming-of-age story. You are currently living through the awkward transition period for artificial intelligence. And to me, there's no better genre than Bildungsroman, which is that story of moving out from your hometown, girl meets boy, boy meets girl, finding oneself, figuring out one's place in society. That's the genre.

Andreas Welsch:

Awesome. Thank you so much. That's a very unconventional answer. Like you said, definitely not what I had expected, but great to hear how you phrased that. Again, to your point, I think that's really what we are seeing right now: not just the decades leading up to this point, but now also our reckoning of what we are doing with this and where this is going.

Ramsay Brown:

The Dartmouth conferences on artificial intelligence were 67 years ago, and here we are now with ChatGPT reaching a hundred million users faster than any product before it. And anytime anyone said, oh, this is a flash in the pan, we're forgetting that what you're staring at is not a novel parlor trick. You're staring at a general-purpose technology at the scale of electrification, for the automation and synthesis of symbolic reasoning tasks. There's no way that's not gonna be one of the biggest things that's ever happened, period. And from this perspective, look at where we are now, as opposed to where we were even 10 or 15 years ago when I was in my PhD work, and to where we're going. It is wildly different. So I really do couch it as: yeah, we are in the transition phase, and something is actually happening now. That's why it feels the way it does. That's why you and I are talking about this on a Tuesday morning.

Andreas Welsch:

Hey, why don't we jump into the questions? And again, for those of you in the audience, if you have any questions, please feel free to post them in the chat. We'll pick them up as we go. And we have a lot of knowledgeable members joining in the audience as well, so feel free to get a dialogue going there too.

Ramsay Brown:

Yeah, and I see Michael Novak. I see your comments: it's not electricity, this is fire, man. There is a Prometheus metaphor here, for sure. Because the thing that we are doing is a fundamental transformation, not of matter per se, but of information, for which so much of matter, capital, energy, and identity is downstream of our ability to use information effectively. So yeah, I see that. If not electricity, fire. That's not a bad metaphor either.

Andreas Welsch:

Yeah, exactly. Maybe a question for you: you held the first annual Leaders in Responsible AI Summit in partnership with Jesus College in Cambridge last month. You were able to bring together a vast number of experts on the topics of AI ethics, trust, and responsible AI. So I'm curious, as we get into this topic of how leaders can trust generative AI: what were some of the key topics and recommendations, discussed in Cambridge with these experts, that leaders building AI products now need to know?

Ramsay Brown:

Yeah, so first, two points of immense gratitude. First, to my counterparts, Julian and his team at the Intellectual Forum at Jesus College Cambridge, for extending the opportunity to come and host such a special and high-impact day with them. And second, to the attendees of the summit, without whose brilliance we would have nothing. It was a very unique opportunity to get a relatively intersectional cross-section of some of the world's leading thinkers and practitioners in this space together under the Chatham House Rule to discuss four topics that are at the forefront of our ability to meaningfully trust AI and generative AI, and what's coming next after it. The first topic was how our use of AI aligns or fails to align with ESG goals, what we're thinking about as the world is changing, and what sort of role we are to play in reversing biosphere collapse. The second was what sort of policies we would recommend to policymakers to help them make better policies faster. The third was what is and isn't working when it comes to responsible AI movements and how we trust AI. And then finally, as the future of work becomes the today of work, especially with generative AI and what's coming down the pipe for agentic AI and synthetic labor, what are we doing to prepare ourselves robustly for a world in which the cost of knowledge work drops to zero in the next 18 months? These are extremely large questions. There's no way to not confront the gravitas of the question at hand on each of these. The takeaways are actually being summarized right now by my research team. It was an interactive workshopping day of about 70 people. Everyone was handed a pad of sticky notes and a Sharpie and then sat down in groups to collectively answer these questions. And we're going to have an anonymously authored summary paper available shortly, in about the next six weeks, for anyone who's interested in what the findings and takeaways from the summit really were. But I can touch on at least a few of them now, to give you a little bit of a snapshot of where we stand.

The first is that, based on our current trajectory, it seems like a lot of what's going on in responsible AI runs the risk of becoming something akin to the greenwashing that's happened in ESG. For those who aren't familiar, greenwashing is what happens when a large corporation creates the image of taking the steps it would need to take to reduce its ecological impact, but in fact it is just an image, and none of it is meaningfully operationalized. It becomes more of a PR thing than anything. When we look at what's going on in responsible AI, one of the big outstanding questions is: are we doing something similar, but with AI? Large organizations are saying, look, here are our principles, here are our practices, we have this framework. And then you go talk to data science practitioners and you hear the same answer: my incentive structures haven't changed, I'm not given tools, our ops team doesn't talk to our data science team, no one talks to compliance, and compliance doesn't talk to us. We all operate in fiefdoms. We found that this is an unsustainable way to do this, because nothing's actually getting done. We do run that risk. That is still a very real risk right now. And it's important to note that when we look at the meaningful jobs to be done around responsible AI, it does appear that while there still may be last-mile, edge-case, or long-tail concerns around some of the fundamentals of AI ethics, more and more of the conversation is moving towards governance and the practical steps that must be taken within data science teams to transform ethical recommendations into specific, concrete, actionable steps that are actually played out in data science notebooks: not PowerPoint slides, not risk analysis frameworks, but the actual live, impacted flow of data science. That's something we spend a lot of time focusing on internally within our organization to produce solutions for, and we are hearing from more and more teams that that's where the problems really lie.

The second topic was meta-policy. One of the big takeaways was that policymakers are predominantly trying to write policy for a world that previously existed, as opposed to a world that is going to exist. And this is a major problem with policymaking writ large, because of the incentive structures of policymakers. You can't go back to your constituents and say, we did some speculative forecasting for a world that doesn't exist, and I got some science fiction authors and some data scientists and some people who are good at economic forecasting, and we think this is the world that's going to exist, so I'm gonna write policy for that. If you do that, you'll get voted out of office. It turns out that's exactly what you need to do, though. The rate of change in our ability to meaningfully harness intelligence and the price point of doing so are going in opposite directions extremely fast. We have never been able to manipulate thought, mechanically or biologically, as effectively as we can today. That capacity is increasing dramatically at an accelerating pace, and the cost of doing so is decreasing. When that happens, almost no existing social or political systems we have in place today are capable of meaningfully organizing that world, because it has never existed before. This is the reason that politics has such a hard time with black swan events: there's no precedent you can set for this. Since it didn't happen before, we can't write a law for it. And this is what we're seeing even right now with the EU AI Act, which is absolutely suffering from Brussels syndrome and is being rewritten and having its goalposts moved, because they're coming to understand that there's no way you are going to write a law right now that meaningfully gets out ahead of capabilities fast enough for these technologies or their use cases. This is, as Marshall McLuhan puts it, the problem of trying to drive a car by only looking in the rearview mirror instead of at the road. And politics is, by definition, a rearview-mirror kind of process. So that's been highlighted as one of the concerns we need to address.

In terms of what is and isn't working, what we're coming to find is that there's a gap between the compliance side of the house, which is responsible for how we abide by laws, what risks and controls we need to have, and what factors of trust we need to abide by as an organization, and the data science side of the house. Data scientists don't know anything about compliance, and compliance teams don't know anything about data science. I'm making a really gross generalization here on purpose, and I realize that this is not a completely fair representation of reality. There are lots of data scientists who understand compliance and lots of compliance officers who understand data science. But writ large, the incentive structures that have emerged in large organizations are such that these two groups of people own distinctly different, separately accountable lines of this practice. Yet they need to be intersectionally merged such that they can collectively behave and own each other's outcomes better. And this is not happening. This friction between these departments is where a lot of the problem lies. Breaking that is not just a technical problem, it's a cultural problem.

And then finally, in terms of the future of work, this is probably the closest to where generative AI is cutting into the bone right now. The consensus isn't clear on what this impact is going to be, so I'm not going to speak for the whole of the attendees, but I'll speak for our opinions within our organization. The impact of generative AI is going to be a discontinuity from previous ways that we've used AI within the enterprise. Previously, AI has been used predominantly for tasks of prediction, classification, or control along business unit lines, where we needed to make a small decision or automate a small process, and that could have been at any point in the integrated value chain. The difference now is that the types of things generative AI is capable of doing appear to be more robustly general than before. This is not "I need a classifier model for determining whether or not we should restock this product and then make the necessary adjustments to our supply chain." It turns out that if I have an adequately powerful model capable of language, some of the latent capabilities in that model may look so far afield from what we actually thought the job to be done for this model was that suddenly it's capable of doing lots of different types of jobs within the organization relatively effectively. And this is on the heels of things like the Goldman Sachs report, or our upcoming Labor Transformation Playbook, in which my organization has scored the 1,016 jobs the Department of Labor recognizes, across the 40,000-odd tasks those jobs are made of, for how amenable every single one of them is to being disrupted and replaced by generative AI and agentic AI. This is the structure of the problem: when we look at the incentive structures that Chief Financial Officers are under, if you tell a CFO that there exists a new batch of technologies capable of reducing their operating costs substantially by reducing the demand for labor, there's not a CFO alive who understands the game they've signed up to play who is not going to make the decision to do a standing reduction in force across the majority of their business units when generative AI and agentic AI are capable of doing those jobs. Not getting rid of everybody, but standing reductions in force, because you're able to accomplish the same productive output with less human labor involved in the process. The consequences of that decision, and of where we're going as the falling price point of the jobs to be done drives automation, are, I think, going to be one of the largest conversations in the social discourse from here on out. Because these tools are leaving the phase of "isn't it funny to see the Pope in Balenciaga" and entering "my team is now using these and we're now on a permanent hiring freeze like every other team."

And that was the nature of the conversation at the summit: what do we do about these types of problems?
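To make the notebook point above concrete, here is a minimal, hypothetical sketch of what an ethical recommendation looks like once it is operationalized as an executable step rather than a slide: a fairness gate that runs where the model lives and fails promotion if a policy threshold is exceeded. The function names, toy data, and the 0.2 threshold are illustrative assumptions, not recommendations from the summit.

```python
# A hypothetical fairness gate: names, data, and the 0.2 threshold are
# illustrative assumptions, not summit recommendations.
def selection_rate(preds, group_mask):
    """Fraction of positive predictions within one demographic group."""
    group = [p for p, in_group in zip(preds, group_mask) if in_group]
    return sum(group) / len(group)

def demographic_parity_gap(preds, group_a, group_b):
    """Absolute difference in selection rates between two groups."""
    return abs(selection_rate(preds, group_a) - selection_rate(preds, group_b))

# Toy model outputs and group memberships for illustration.
preds   = [1, 0, 1, 0, 1, 1, 0, 0]
group_a = [True, True, True, True, False, False, False, False]
group_b = [not g for g in group_a]

gap = demographic_parity_gap(preds, group_a, group_b)
# The gate: promotion of the model fails here, in the notebook,
# if the policy threshold is exceeded -- not in a slide deck.
assert gap <= 0.2, f"Parity gap {gap:.2f} exceeds policy threshold"
print(f"Parity gap {gap:.2f} within policy threshold")
```

The design choice that matters is that the check is recorded and enforced inside the data science workflow itself, which is exactly the gap between principles and practice described above.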

Andreas Welsch:

Perfect. So how can our audience learn more about it when the paper becomes available?

Ramsay Brown:

So we have our magazine newsletter, Accelerate, which you can sign up for at takecontrol.ai/accelerate, and I'll be happy to drop a link into the feed here a little later. That's where we'll be announcing the release of the paper, and we look forward to any conversations and discourse around this. Honestly, we're so grateful to the attendees who came and joined the summit. And we really look forward to being able to broadcast more thoroughly what the current snapshot of the discourse is, because we think we were able to build a very safe space where people could be a little more comfortable talking about what's really going on. And I think the story is quite compelling, and we want to get it out there.

Andreas Welsch:

Awesome. Thank you for sharing it. You didn't promise too much when you said those are some big questions. Certainly judging by the answers, right? We need to have even more discourse and more dialogue about them, about what the impact is and what all of that actually means. But maybe we can bring it to business and make it more tangible, from this 30,000-foot view of what all this means down to what I can do with it, and what I can do today. Especially when we think of trusting this technology with all its benefits and limitations that we already know and have seen over the last couple of months. What would you recommend? What can leaders already do today to trust generative AI in the forms that we are seeing it right now?

Ramsay Brown:

Yeah. So we're gonna go through three very specific things that leaders can do to start trusting generative AI better. And it is an active process, not a passive thing. The first is your people, the second is your processes, and the third is the technology itself.

For your people, there are two things that you need to be doing to improve generative AI success and trust. And this shouldn't come as any surprise: these are culture and training. The first is that the teams that are going to succeed with these tools are the teams that are going to succeed in the market, because these tools are going to confer strategic competitive advantage for your organization, distinguishing you on capabilities, price point, quality of service, or margin, because you're able to operate more efficiently using them. That means your organization needs to be constantly reviewing, as an internal team, how are we winning using these tools? And that comes from the top. In my organization, every week we have an internal review of: how did you get your job done this week using generative AI? This is not an "I better not catch you using it." It is an "I better not catch you not using it." This is like: why did we buy nice MacBooks? Why do we use good tools? Go use the best tools you can to get your job done. So that's a culture thing that comes from the top. As a leader, you need to be setting that culture: this is the way we do this now. And the second is you need to be instilling the culture of how to think about these tools with that critical theoretical lens of: yes, I know the output said this, but does that even make sense for what I'm trying to accomplish? Because if we treat these like they're always going to be clairvoyantly perfect tools the first time around, and they'll always create the right outputs, we're gonna miss the point, because they're not. They're tools for accelerating our work. They're not yet at the stage where they can be trusted verbatim to provide business-accurate and business-valuable responses. Now, that doesn't mean we get rid of them entirely. Quite the opposite. It means, like everything else, you need to apply your critical thinking to what's going on. So that's the first: that's your people.

The second is your processes. You need to have a process review of where, within your business value creation flows, from understanding your market through to customer success and corporate strategy, these tools fit in specifically. This is one of the things that we think about a lot and we help teams figure out: when we look at how we do things, where does generative AI even fit? You need to be analyzing those processes to determine where these tools amplify or augment how your people are already operating. And then the trust component of this is that you need to have governance policies in place, which are specific, actionable, measurable, recordable, accountable, documentable steps that teams are taking, either as they build these tools or as they use third-party tools. You need to be able to demonstrate in systems of record: here are the steps we are taking to know that we are accountable for mission success using these tools, and we've taken this seriously. That's the process for trust.

And then for the technologies themselves, you need to be either developing in-house capabilities around generative AI or using third-party tools.
The stuff that we have available off the shelf, the OpenAIs, the Midjourneys, the Stable Diffusions, and everything built on top of them, has a fundamental barrier to trust around data security and data privacy. That's been our big innovation with our gen ops platform: blocking secrets from getting leaked into generative AI systems, which has become a huge problem. You see it in the news now: Amazon blocks ChatGPT because they found their trade secrets leaking in. Samsung had a massive breach the other day, where people were copying and pasting secrets into ChatGPT from one of its chip fabrication facilities. And across the private and public sectors, we keep hearing the same thing: our options are either to ban this technology, or we keep leaking secrets. We've developed a solution that helps people and organizations use ChatGPT without leaking their data into it. And we advocate for technologies that improve, practically speaking, not just measurements or policies around trust, but the actual act of using these tools. We live in a time where we don't have to keep using checklists to solve everything. We can actively intervene in automatic processes to improve their safety and their scale. Those technology layers, and the whole emerging generative ops, FM ops, and prompt ops fields, are going to be every business leader's best friend in this process, because they provide the business intelligence tooling and the trust layers that are otherwise missing from how these tools operate. So those are the three things: our people, our processes, our technology. Make sure your team knows what it's doing and is incentivized from the top to do it. Retune your business processes to understand where these tools fit in and capture value. And then invest in tooling that creates the trust layers between you and the third-party services that you want to depend on. Those are the specifics, I'd say.
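To give a feel for what such a trust layer does mechanically, here is a minimal sketch of an outbound prompt filter that redacts likely secrets before a prompt ever reaches a third-party model. It is purely illustrative: the regex patterns and the redact helper are assumptions for this example, not Mission Control's actual gen ops product or API.

```python
import re

# A minimal sketch of an outbound prompt filter. Patterns and names here
# are illustrative assumptions, not Mission Control's actual product or API.
SECRET_PATTERNS = {
    "openai_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.\w{2,}\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace likely secrets with placeholders before the prompt leaves
    the network; return the labels of what was found for the audit trail."""
    findings = []
    for label, pattern in SECRET_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt, findings

if __name__ == "__main__":
    raw = "Summarize this incident: key sk-" + "a" * 24 + " was exposed."
    clean, found = redact(raw)
    print(clean)  # ... key [REDACTED:openai_key] was exposed.
    print(found)  # ['openai_key'] -- record this in the system of record
```

A production layer would sit between users and the model endpoint, combine pattern rules with entropy and context checks, and write every finding into the same system of record the governance policies call for.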

Andreas Welsch:

Awesome. Thanks for sharing, and thanks for sharing in that level of detail and structure how you recommend leaders go about this. Looking at the chat, I also see that this really resonates with our audience. I saw Maya's comment from earlier, where she compares it to the industrial revolution and this coming dawn of the technology, and us not even knowing exactly where this will end up and what this will power. I think that comparison, Maya, was about steam engines, where they were used and how they were used to power the economy in a new way.

Ramsay Brown:

Yeah. And to her point, this is why the World Economic Forum has its Centre for the Fourth Industrial Revolution. We are stepping into a completely, qualitatively different time period from where we've been previously. When we look at what 2023 to 2025 looks like, we have to remember we have active warfare going on around the world, both literal and hybrid, and we have a major US election coming up. We were tasked recently by a partner to create some meaningful predictions about that timescale. And we see the rise of agentic AI. We see major crises around deepfakes in the next US election. We see AI being used in hybrid warfare, and embodied humanoid robotics coming down the pipe. Already we've seen calls for national AI bans, so we've already gotten to check that one off our prediction list with what's going on in the EU. We see attempts at regulating GPU control. We think that the "AI deserves rights" discourse is going to accelerate much faster than the AI Bill of Rights discourse. And if we look even at the long tail of this, we look at the rise of synthetic labor and what happens when an LLM and a Python harness strapped to an Ethereum wallet is no longer a tool but a self-owning agentic system, and it asks to work at your company. How do you regulate that? How, as a businessperson, do you trust an agentic, autonomous, self-owning intelligent system? That's not a sci-fi question. That's a 2025 question. So to Tom's point, we are absolutely in something to the extent of the industrial revolution, but of the mind as opposed to the muscle.

Andreas Welsch:

Awesome. And great to hear the imminence of it, too: that it's not decades into the future, but that what we're seeing today is already the first step toward more widespread use and even more capable systems and agents, to your point.

Ramsay Brown:

And there's only one thing I wanna tack onto that, which is the point of the author William Gibson: the future is already here, it's just not very evenly distributed. Two things are true. Very large enterprise organizations move very slowly to make decisions about the deployment of new technologies. That is true. So if you're someone listening to this stream and you say, yeah, but my company can't move that fast, you are correct. And we are at a breakneck pace to build artificial general intelligence, my team included. Both of these things are simultaneously true. And as much as people like to think of the term disruption as meaning guys in hoodies in San Francisco, practically speaking, the economist Joseph Schumpeter coined the term creative destruction to point out that capital markets are very good at dismantling slow organizations to funnel their capital more effectively towards faster organizations. And if we remember that the future is here, it's just not very evenly distributed, this can be a helpful lens for business decision makers in understanding the imperative to start accelerating the adoption of some of these technologies.

Andreas Welsch:

I think that's an awesome note to end today's show on. So I would like to thank you for joining and sharing with us what you've seen and what you've discussed at the event you held in Cambridge last month, and how you see this impacting not only the future of work and what we're living in right now, but also our society more broadly, and the role that we all play in this as leaders and as experts in this space. So Ramsay, thank you so much for joining us and for sharing your expertise, and to those in the audience, thank you for learning with us today.

Ramsay Brown:

Hey, thank you so much for having me. It was really an honor.