Getting Agentic w/ Alistair Lowe-Norris
Jacob Haimes:
During the podcast, you may hear this sound, which denotes that I have included at least one link related to the content that preceded the sound. The purpose of this is to allow for citations and providing resources without breaking the flow of conversation. As per usual, items mentioned during asides will have links in the show notes as well.
Hey everyone. It's your favorite guy that needs not one, but two podcast outlets to discuss AI. Today I'm joined by Alistair Lowe-Norris, and let me tell you, this guy's credentials are stacked. He's currently the co-founder and chief responsible AI officer at Iridius, a company focused on helping others build and deploy agentic AI that is compliant with external and internal requirements.
On top of that, he was the Chief Change Officer for Microsoft. He earned the President's Lifetime Achievement Award from President Joe Biden in 2023, and most importantly, he also has a podcast called the Agentic Insider, which I listened to quite a bit in preparation for this interview. I found it to be a great resource, not just for this interview, but also for getting a better understanding of how a very different group of people think about AI.
In this episode, Alistair and I focus on the idea of responsible, or trustworthy, or safe, or pick your favorite adjective, AI; how compliance with specific standards is already being mandated in some cases; the consolidation of power that we're seeing play out among a few large tech companies; and the importance of narrowly scoping systems for ensuring safety.
Although some of our takes do differ to varying extents, we had a great time discussing the future of AI, and I feel reassured that at least some companies working on AI solutions have people like Alistair working towards more responsible deployment of this fantastic tool. Similarly to last time, I put an extended version of this episode on the Kairos FM Patreon.
If you're interested in getting even more content, you can listen for only two USD per month. With that, uh, let's get into it with our first ever guest that's also a host of their own podcast, Alistair Lowe-Norris.
Alistair Lowe-Norris:
Hi, my name is Alistair Lowe-Norris.
Jacob Haimes:
Awesome. So in one sentence, what is trustworthy AI
and why do you think it's important?
Alistair Lowe-Norris:
I think trustworthy AI is about having AI that is three things: ethical, safe, and good for people and good for the planet.
And I think it's super important because we need to make sure that, as we continue to evolve on this great world of ours, we have a set of AI that is safe and trustworthy, okay, and acts for the benefit of humanity rather than against it.
Jacob Haimes:
Gotcha. Alright. So, yeah, it's great to have you on.
So, uh, I was reached out to by you, because you also have a podcast called the Agentic Insider. And I watched a good bit of these, you know, prepping to have this conversation. And one question you ask every time to all of your guests is: what's the future you are building for?
Before you answer that question, I first wanna know like, why do you ask this? What, what is it that you're trying to get out of this question?
Alistair Lowe-Norris:
I think we ask, you know, "what future are you solving for?" because we're really passionate about finding out what people are passionate about.
What is it that's driving them forward? I mean, everybody has a day job, but that may not be the reason that they're pushing forward. So what future are they looking to create? What are they solving for? What are the problems and the challenges that they see in their life
that they're trying to help move forward? It could be personal, it could be professional, it could be something philanthropic. It entirely depends on where their passions lie. But I think this is a great question to pick things out. And it's my co-host, Philip Wan, that came up with this.
And really, I think it works extremely well as a question to sort of draw people out and find out what's relevant to them.
I think it's really the same answer as to what we are doing with trustworthy AI. I mean, the future that we're solving for is an AI that is ethical, safe, and good for people, good for the planet. It's a focus on driving, uh, AI forward in a humane way that ensures that it is, okay, good for people and good for the environment.
Jacob Haimes:
So let's take a step back maybe just to understand more about who you are and how you got to where you are now. Obviously you have a lot of work that you've done that's quite impressive. And so maybe what are some of the pivotal moments for you during that process of getting to, you know, your current position and working on AI in this way?
Alistair Lowe-Norris:
Sure. I mean, I think, so I'm currently the chief responsible AI officer for Iridius. Okay. I stepped into this because I'd led work at Microsoft around responsible AI before I had left. So I had a career at Microsoft that spanned from the nineties all the way through to about three years ago.
Um, 23 years at the company, most recently as the Chief Change Officer. And so I focused on large-scale change and transformation at the company. My career is focused on that transformation and change, but it's the human and people side of change.
So, yeah, I mean, you know, I go back into the nineties, and I was responsible for, and helped drive and ship, products for things like SharePoint that people may have heard of, and things like that. But at the end of it,
you know, change and transformation only comes from the human side. And so, you know, when you wanna drive a change, whether that's building software, shipping it, having customers adopt it, or ultimately, you know, making large changes in the world at large—
it's easy if it's just you and I, because we can go out, we can have a chat, and I have an ability to persuade you to change the behavior and the way you do things. And for me, I think what was pivotal was, going back into the nineties, I was working, um, with the Ministry of Defence in the UK.
I was working with the Ministry of Defence, and they had a need to be able to roll out a completely new approach to technology across the organization.
When they were doing that, they wanted to create a sort of modern office, a mockup office of what the office of the future could look like, and people would be able to come and go through certain user scenarios and so on. And they called it the Improved Ways of Working, or the IWoW team. Okay.
And they built out this model office, and they had all of these fun things to do. And it was at that point that we realized that the way we were trying to drive this from a project management perspective was very different than what was needed; we were needing to change user behavior, because people were used to the way they were working and they didn't really wanna work in the new way.
And so, you know, it's the same thing: if I tell you to cross your arms, you will cross your arms a certain way. If I tell you to cross your arms the other way, that's awkward and you don't wanna do it for very long. And, you know, so most people don't rush towards change. They're happy with where they are.
And I think it was during that IWoW project, and working with Microsoft and the Ministry of Defence, that it was very clear that we needed to affect the human side of change far more than we'd ever really done on projects in the past.
And they had these things called enterprise agreements. And the enterprise agreements were laid out so that every three years you would work with your enterprise customers and they would re-sign the agreement, hopefully with more users and more software than they had before.
And actually the number of users was going down, so we were making less money.
So the question was, okay, what are we not doing that we should have been doing? Why are they not using the software? I mean, we love it. Why aren't they loving it? And so it suddenly became something where we realized that, in order to drive adoption and use of the software, there had to be some sort of engine to be able to push this forward.
And that engine was change management. And I don't mean the technical change management, I mean the human side of it.
And so I think some of the big pivotal moments in my career have actually been around understanding the psychological behavior around change, and the neurological capacity of people to change, and how to drive large-scale change and transformation for people, and how to nudge the behavior of individuals forward to be able to move things forward in that way.
And I think it's the same with AI. Okay. It's how do you drive it in that way?
Jacob Haimes:
I guess there are a couple things I wanted to pick apart from that. Uh, I guess the first one is you mentioned you were working with Microsoft as the chief change officer, and we'll get into the change officer bit in a second.
But during the past, you know, two decades essentially, which saw, more or less, the adoption of the internet by and large. Like, it was already there, but now it is everywhere. It saw the rise of big data, it saw moving to the cloud.
How does AI differ from some of these previous significant changes, like big data and moving to the cloud, and how is it the same? And what are lessons, maybe, that we can take and bring into this AI space as it's developing?
Alistair Lowe-Norris:
Okay, so I mean, three things. One,
it's a change like any other; it's a tool. Okay? I know everyone around the world is super excited about it, but it is another tool, okay? And so it's a case of, okay, now we have a spanner or a screwdriver, and we didn't have those before. So now we can do things that we've never necessarily been able to do.
But I think that the fundamental difference with AI is the speed, okay? Mm-hmm. The changes are happening faster, and they are not going to slow down. What is currently under development, but has not yet been released, is versions ahead of what people are seeing right now.
So as for the tools that are released out there, that are in the public domain—every single leap forward in terms of foundation models, and some of these pieces, are significantly far ahead. They're more than evolutionary. They're revolutionary.
And these fundamental transformations are coming much faster than people think they will. And to that extent, if we take a standard model that's out there, maybe it's version three, version four, ChatGPT-5, for example, there are model builders out there who are four versions ahead, each of which is fundamentally different from the previous versions; it's an arms race.
And so when you,
Jacob Haimes:
I guess I would push back against that a little bit, uh, based on just how I'm understanding the system. Because, you know, people have been saying that since essentially, you know, GPT-3 came out, right? That, oh, well, the thing that's being worked on internally is much better,
uh, and that it's so much further ahead. People were saying that six months ago, a year ago. And what we got was GPT-5, which was kind of a dud compared to the leap forward that we saw from three to four.
Jacob Haimes:
When it comes to trashing or fawning over new models, I like to be a little bit more thorough than the standard "people are saying it's bad, or good, or game-changing," et cetera. So I did some digging. One thing basically everyone agrees on, who isn't just mindlessly creating tech enthusiast content, is that the release of GPT-5 was totally botched.
You can find the takes of various prominent writers, both positive and negative about the model, in the show notes. To briefly cover the popular-discourse reasons this was a dud: it was massively overhyped, surprise, surprise; the general public likes more sycophantic AI; and the abrupt removal of GPT-4 caused consumer unrest.
I don't love that, but let's move on for now. Some defenders of the model have decided that this seemingly backwards progress is actually just a cost-saving measure thanks to the router, which is necessarily part of the non-API query infrastructure. I guess I see it more as the marketing story failing as a result of the narrative they've been leaning on for a year beginning to creak.
But you know, that is just me. Well, it's not just me, but you get it.
I also see that there is, um, maybe a disconnect between what's being stated, and like where these systems are gonna go, and then what we actually see manifest.
Alistair Lowe-Norris:
I think it depends on what you use it for. Okay. So
I think that some of the directions that the models are going in mean that the diversity of use cases and the breadth of capability that AI is going to provide is going to be significantly greater, and people will see that in the home as well as, you know, in their business, or even in a manufacturing factory or something.
Jacob Haimes:
I mean, I cannot overstate this. I would love to have a robot do my laundry for me. I hate doing laundry. Um, but there aren't systems that are scoped for this, right? They're much more general.
Alistair Lowe-Norris:
You are now really, you know, from an AI safety perspective, you are absolutely nailing this. So take, for example, you know, you now have a domestic helper robot, to create a hypothetical scenario, and it's capable of cooking meals, and it's able to put the laundry into a washing machine and a dryer, and so on.
And it's capable of looking after a baby. At some point, the question is, did it put the baby in the washing machine? Okay. Did it put the laundry in the frying pan? Okay. You know, the more of these domains that you add to it, the more complex that you make it, the more operating areas that it needs to work in.
And the domains, okay, become so much greater. And then the question is, okay, where are the safety guardrails on all of this as well? Okay. And so I agree with you, the breadth of use cases becomes significantly problematic as you expand them. I think that there are companies out there that are trying to take a step in that direction.
Where they're saying, what if... It's funny when you think about it and take a step back: there are so many companies right now that are clearly shedding staff in call centers worldwide because they can replace them with AI-based chatbots. And so the idea is, how many people can we fire today?
Okay, because we've now got all this wonderful technology that will help. And I think, you know, if we park that to one side, suddenly there are conversations that are happening where you're saying, what if we put a robot in the home, but it wasn't using an AI to actually run the intelligence side of it?
What if it was actually connected to a person who was certified in childcare or certified as a chef, or certified as a domestic help or eldercare, these sorts of things. And then what you're doing is you are almost using it as a, as a movable system that is connected to a human at all times that is capable of providing that care.
But yes, AI, augmented, but then the domain question comes, okay, it's only scoped to use for a particular domain. So potentially the domain changes depending on the actual human that is connected to it. And all they're doing is sort of remotely channeling their personality and their requirements into this device.
And they are, you know, remotely working, working it like a drone that has more capabilities. So now it's not completely autonomous. Okay. And we're not into Cyberdyne systems, but we are into a stepping stone where the AI is allowing it to be able to pick up things, move things, and so on, at the command of the drone operator.
Um, but it's all completely interactive with the, with the person at the other end. So it is human to human, but through an automated device, that sort of conversation is happening as an intermediary to try and get past the domain conversation.
Jacob Haimes:
Okay. So if that's happening. So I wasn't aware of this, but this essentially sounds like, um, I mean, the sci-fi equivalent is like, you know, you jack into some sort of
system, and then you are in an avatar, right? Like this is essentially Avatar
Alistair Lowe-Norris:
Sure.
Jacob Haimes:
But a little bit more
Alistair Lowe-Norris:
Exactly right. No,
you're quite right. And I mean, we do have parallels, because we've seen them in sci-fi for years in that respect. And Avatar is exactly that. You are jacking into another body, okay?
And now you are controlling that body and doing what that body is allowed to do. So then the question that you and I have now is, okay, AI safety, what's it allowed to do? How far can it go? What can it do? Okay. And how, how, what are the guardrails associated with that?
Jacob Haimes:
Well, and then there's also the, uh, the last thing, which is, what's the value add, mm-hmm,
of doing that? Because if it's just a one-for-one, as in one person is controlling one robot, um, there in some cases is some marginal value add. Like you can reduce, I guess, transportation costs for the person. So let's say you're, you know, trying to create a system that would help, um, the elderly. If they all have, you know, robots in their home, and then a person can go into like different systems, um, you know, you can potentially help them. But that also maybe creates a sense of false security, uh, for those people, because that person can't always be there.
And I guess one thing that is a concerning aspect to me, um, is it seems like the value add for a company in this scenario is that you can devalue
the human labor of, uh, the people in the location that, uh, I guess you care about in terms of your customers, by using people from somewhere else instead that have, uh, a lower cost to you. So you're saying, okay, well, uh, we don't wanna pay the people that live in, let's say, New York, um, or in the US even, but instead we want to pay a lot less money.
Uh, and so we'll just, you know, do that. And that then removes, you know, a lot of value from the people who are already doing that job, um, because they can no longer get paid as much, because the threat is, oh, well, we can just use this other system that costs a lot less.
Alistair Lowe-Norris:
And that's where we're into the call center situation of, you know, what we will do is we will outsource the call centers in the United States to places that can provide that service
much more cheaply. And then it's the same situation with, now we'll outsource that service entirely to AI. And this is the same thing: how can we provide support? The question really, and I think this is really why, you know, AI needs to be good for people and good for the planet, is, is there a way of being able to augment the services that already exist?
So what we're really doing is talking about a fundamental replacement of service, or providing a service where it doesn't exist at the point of care being needed. What if we were able to augment it? What if we were able to provide additional care, okay, through services and units like this, okay, where that care was not possible all of the time?
Jacob Haimes:
I guess when we take the, um, example of call centers, what I see happening is we have call centers, and previously... I mean, fundamentally, call centers are something that is expected of companies, but they do not want to do it, right? This is something that they are essentially required to do by convention, but if they can reduce the cost of customer service as much as possible within that convention and still sort of be seen as, you know, doing what is okay as a provider, they will.
And so what happens is the AI comes in, uh, and is, quote unquote, replacing, you know, people who are doing this, but the system gets worse. And so instead of it being like some sort of augmented thing, we now have... you know, it's a pain in the ass to talk with a human at a pharmacy in like 50% of pharmacies, because they require you to go through this like five-step process of verifying that you don't want the things that they think you want, when you could just talk to a person originally, or something like that.
As I mentioned, I've experienced this a number of times, and apparently I'm not the only one. A study from April 2025, titled "Deploying Chatbots in Customer Service: Adoption Hurdles and Simple Remedies," found that a key reason consumers choose not to use chatbots is an aversion to gatekeeper processes, where there is an initial imperfect service stage potentially followed by a transfer to a second.
In addition, many consumer surveys have shown that, at least in the United States, people would prefer humans, such as one from Gartner, which found that 64% of consumers would prefer that their customer service providers were not AI-based systems. Despite this, customer service providers continue to aggressively pursue poorly thought out shifts away from providing customer service with humans.
I'm not saying that I believe there's no value in integrating AI systems into customer service, but they need to prioritize customer agency and the effectiveness of the created tools, not the naive minimization of cost per interaction.
And that's a specific example from my life. But the important part is, what I'm seeing is not that it's being made better.
Alistair Lowe-Norris:
Sure. I mean, I think I'd push back a little bit on the earlier comment that organizations or companies do not wanna provide customer support.
I think what companies wanna do is they want to have, you know, customers who are as far towards the raving-fans end of the spectrum as they possibly can be. They want customers to be as happy as possible, because then they will use the service, become promoters, okay, and so on. And as part of that, they know that some people find it difficult to use, uh, you know, whatever the technology is or whatever the service is.
And in order to do that, they have to provide support. So in some cases, I agree with you, it's a necessary evil. But I think that a lot of what companies have done up to this point is they have outsourced to large call centers that exist around the world, okay, especially trained for these purposes.
Okay. And they are designed to deal with tier one, tier two, tier three, and so on. And in fact, it tends to be a much wider funnel at the top for tier one and a much narrower funnel at tier three, um, because you have more people calling up with very simple questions, where they might be able to find it online or they might be able to Google it, but they don't want to; they just wanna call somebody and just have them tell 'em how to do it.
And I think that there are opportunities, um, to outsource that to more automated solutions. What we saw in the early days of this was AI chatbots that really can't answer the questions that people need. Okay. Now we're getting to a point where there are AI systems that are able to answer in a language of your choice, um, based off of a knowledge base that it's trained on.
So if the questions are mostly contained in an FAQ or similar, then that's something where you can triage out those questions, and triage out those sorts of incoming support calls, in a way that is much easier now than it was two or three years ago, um, and provide an interactive experience that gives customers the full fidelity that they need.
I think that doesn't negate the need for a call center. It means that what you're doing is you want your call center and the humans to focus on the hardest problems. The ones which are really challenging the customers. Not "I need a password reset," but "I literally cannot make this thing work in the way that I need,"
and "I'm starting to not be able to use the software to get the impact that I want out of it." That's a fundamental problem, and that's where I think the call centers and the customer support services will focus. That becomes an AI-augmented solution with humans, okay,
to be able to deliver the service. So the vast majority of the tier one calls, in some cases, the ones that are simple and easy to deal with off of a knowledge base, uh, and so on, are easier to replace, but not at the broader end of things.
Jacob Haimes:
Right. So I guess then this gets into sort of a fundamental issue I have with the AI space right now.
Um, in particular, to me what seems to be happening is, uh, the select companies that have these, uh, models at the cutting edge are saying, oh, we'll just use our chatbot, uh, like our general chatbot. Um, and that's very costly, and it has a high failure rate relative to, uh, other potential solutions.
So we could actually take the FAQ that you mentioned, use that as, you know, the corpus that's being drawn from for retrieval-augmented generation, and then have, uh, an intelligent routing system based on the questions, um, that are provided to the chatbot. And then it just, you know, pulls up the FAQ, and there's a little bit of a wrapper for, um, natural language on it.
Like, that is a much more specific, targeted solution than just, oh, well, we'll, you know, give it as context to the LLM and have it respond, but it's much more robust. And so what's your take on why that's happening, and how this is going to play out as well? Is something like what I just said going to be where people end up moving towards?
Alistair Lowe-Norris:
Yeah, I think you laid out the solution that's out there and available right now. I think that people have thrown, you know, AI at anything. Okay. And it's the same standard LLM; it's not specially trained, and, you know, it's not able to do what's required, and it gives people a poor experience. Your approach with RAG, or, you know, as context windows get so much larger, just pointing it at an existing corpus of knowledge and using its context window to understand it and create the relevant vectors and graphs out of it, as opposed to needing to RAG something, is much easier than it ever was in the past.
But I think you are right. If you can pull that corpus of knowledge in, using RAG or using context windows, then you provide a solution that has the capability to explain it. Not just in a language that the person speaks, but at the complexity of the language. So some people, um, speak the same language, but their use of that language is less sophisticated, um, for want of education, for want of, you know, where they're living.
And so it needs to be able to meet them at the point where they need it. So it needs to not only be able to explain it in a language, but it needs to explain technical things in non-technical ways to people who are not technical, to take an example. Um, or it needs to talk about it at a fifth-grade English level, or a fifth-grade whatever level, compared to speaking at a college level.
And I think there's that sort of stuff that you can do, but I think your solution is exactly the way to push this forward and be able to do it, not just take a one-size-fits-all approach.
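To make the routing idea discussed above concrete, here is a minimal sketch of an FAQ-grounded triage bot of the kind Jacob describes: retrieve the closest FAQ entry for an incoming question and answer from it, otherwise escalate to a human agent. The FAQ entries, the similarity threshold, and the use of a simple scikit-learn TF-IDF retriever are illustrative assumptions only, not anything the guests endorse; a production system would likely use proper embeddings and an LLM wrapper for the natural-language layer.

```python
# Hypothetical sketch: FAQ retrieval + routing, not any specific vendor's implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy FAQ corpus standing in for the company's real knowledge base.
FAQ = [
    ("How do I reset my password?", "Go to Settings > Security and choose 'Reset password'."),
    ("How do I cancel my subscription?", "Open Billing and select 'Cancel plan'."),
    ("What are your support hours?", "Human support is available 9am-5pm local time."),
]

ESCALATION_THRESHOLD = 0.35  # below this similarity, hand off to a human agent

vectorizer = TfidfVectorizer()
faq_matrix = vectorizer.fit_transform(q for q, _ in FAQ)

def route(question: str) -> str:
    """Return an FAQ-grounded answer, or escalate when the match is weak."""
    scores = cosine_similarity(vectorizer.transform([question]), faq_matrix)[0]
    best = scores.argmax()
    if scores[best] < ESCALATION_THRESHOLD:
        return "Routing you to a human agent."  # the 'hard problem' tier stays human
    # In a real system, an LLM would rephrase FAQ[best][1] conversationally here.
    return FAQ[best][1]

if __name__ == "__main__":
    print(route("I forgot my password"))
    print(route("My integration silently corrupts data"))  # weak match -> human
```

The point of the sketch is the shape of the pipeline: the model only wraps answers that already exist in the knowledge base, and anything it cannot match confidently is routed to a person rather than improvised.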
Jacob Haimes:
I, I, this has been a great, very interesting conversation for me. 'cause I feel like each time you respond, I pick out something within the, the previous thing that I wanna build on more. Uh, so we'll get back to like, some of the core things. 'cause I, I do want to hear more about sort of, um, your experience with tech regulation.
Um, but in, in that comment that you just made, um, you mentioned that
we need to... or, sorry, you mentioned that the system should be responding at an appropriate level, and in theory that would work. But in practice, what we see is that performance changes drastically under these other circumstances. So, um, using African American dialects has been demonstrated to tank performance on benchmarks much more significantly than just using broken English.
Um, so how do we make sure that we're actually giving the same information, the same quality, the same accuracy across all of these different things? Similarly, you know, LLMs are demonstrated to have a much harder time performing well on benchmarks in low-resource languages.
Alistair Lowe-Norris:
I think of course that's true. Of course that's true. And I think that, you know, if we want to really live up to diversity, equity, and inclusion, we have to be able to meet people where they're coming from. And I think that current versions of this, to the point about things going very fast, are not able to meet everyone equally and where they need to be met.
I think that will come over time, but the answer to this is more investment in that, and ensuring that that parity occurs, and that we don't just pick, you know, the major languages in the world and ensure that people can use those, while you become disenfranchised because you are unable to communicate in the language that it has been trained in. To your point as well, if you train it on Western ideals, which is where some of the models were trained, okay, then it is unable to recognize, um, certain things from other diasporas.
And I think that's a really fundamental issue, and equity must be brought, okay, to these foundation models. It's a serious, important problem.
Jacob Haimes:
So how do we do that? Like, do you have any thoughts on how we can secure that? Because currently, funding that, I guess, um, is very difficult, because instead your, uh, support just sort of goes to the general capabilities improvement, and that's not, uh, actually helping, uh, this sort of problem that you just described.
Alistair Lowe-Norris:
Sure. But I mean, this is a very Western, English, uh, horrific mentality where it's like, as long as we focus on that, that's where the money is; and that's not true. There are far more people who are not Western and English-speaking that can be served,
okay, and that service can be delivered to. And I think it's just a very, uh, you know, biased way of viewing things. The answer to the question you're asking is investment. And the best way to be able to bring that investment about is with, um, having a series of benchmarks, having, to some extent, uh, philanthropic organizations or advocacy organizations that are clearly saying:
these models are falling short in their ability to, uh, guide the impacts of these systems on society, specifically with groups of individuals. And it needs to be continuously played out. Okay? And you need to be able, to some extent, to play the models off each other and say, you know, it's great that Gemini has suddenly been able to meet these benchmarks far better than others, or it's great that, you know, uh, OpenAI has suddenly been able to do something better over here.
And then what you're doing is you are using healthy, you know, capitalist competition in some ways to be able to drive this push in the right direction. But it has to be through advocacy. It has to be through benchmarks, and it has to be people holding the model builders accountable, because foundation models are going to be where it's coming from: the hyperscalers, the investment that they have, buying nuclear power stations and so on.
These are large organizations with the money; they're either gonna succeed or fail on what they're investing in, and it's going to be these model builders that have to do it. So holding them accountable as much as possible is gonna be crucial.
Jacob Haimes:
Well, okay, so I guess I just don't quite see why the investment... Like, you're saying the way to do it is investment, but wouldn't it make more sense to not invest in these companies if they're either gonna do it or not gonna do it?
Uh, and all evidence seems to be pointing to them not doing it.
Alistair Lowe-Norris:
Sure.
Jacob Haimes:
Uh, lemme—
Alistair Lowe-Norris:
Lemme clarify, lemme clarify. I'm not talking about outside investment. I'm saying it's up to the big foundation model builders, OpenAI, Google and others, Microsoft and so on, to invest in it. If they choose not to do that, then they should not receive,
okay, the user support and the use of their service, because they are failing to meet people where they're at. And I think that's the fundamental thing. So if advocacy and benchmarks and so on, and all of the sort of, uh, political push, drive in a certain direction, that's absolutely a way to be able to guide model builders to go in a particular direction.
If they don't do it, then other model builders will step forward. Okay. Because these techniques and approaches are being either stolen slash democratized, other people now understand how to build models. There are ways where people will create solutions, okay,
where it's possible to deliver the service that is necessary. Okay.
Jacob Haimes:
And then, yeah, I guess another thing is about democratizing AI. So this is something that, uh, I care a lot about, uh, and there are a lot of different ways to think about this. So, um, I think, based on my understanding, uh, looking at some of your work, the main democratization of AI that you are referring to is, uh, accessible use of these systems, essentially.
Um, but that sort of presumes that there is buy-in on the, uh, initial level, uh, for, like, using or creating language models. To me, you know, "we need to build benchmarks, we need to assess these systems"—while that definitely has positives, it also presumes that, um,
I guess, we want to do that in the first place. And I feel like that's not a conversation that has been had.
Alistair Lowe-Norris:
I think that's fair. And I think that what you are ending up looking at is saying that there are multiple ways to deal with this. Okay. One is, uh, you know, through policy and regulation,
okay, and government laying down that, in order for the good of our society and our community, it is crucially important for model builders and AI to meet these ethical requirements. You know, and you can lay that down as part of it. Um, and then you are in a constant battle with, um, lobbyists, okay,
on behalf of these organizations, okay, to stop hampering competition and to make things, you know, less regulated. But ultimately, regulation absolutely has a place in this. Okay. Because that's part of what government is supposed to provide. The second thing is that, yeah, you know, we're staying away from the political conversation; that's generally where it's supposed to be.
So regulation is one avenue. Another one is, sorry, uh, standards, international standards. So, you know, the International Organization for Standardization, ISO, IEEE, other bodies like that, are able to say this is best practice for how to measure bias, systemic bias, in an organization, how to ensure that every AI has an AI impact assessment. You know, there's a particular standard that everyone talks about called ISO 42001, and they talk about it because it's an auditable AI standard, but actually, when you look at it, it inherits from 70 other AI standards.
Again, one of those is 42005, which talks about how, for every single AI system in a company or an organization, or in a country, you need to do an AI impact assessment on that AI system. And that is to understand how it is affecting particular groups and society as a whole, and how are you ensuring that those people are involved throughout the entire AI life cycle?
From the initial concept all the way through design and then into implementation.
Jacob Haimes:
I definitely had all the context for these right at that moment and didn't have to look them up after the fact. But for those of you who can't spout off ISO standards by their five-digit identifiers off the top of your head, here's what he's talking about. ISO's formal name is the International Organization for Standardization, and no, that isn't incorrect. Actually, the reason behind it is kind of interesting, but completely unrelated.
ISO is a global organization that creates and maintains standards, essentially codified best practices, for a majority of global industries. ISO 42001 is special because it is a standard specifically made for AI management systems, which, as far as I can tell, just means using something labeled as AI in your workflow and managing that. Alistair noted that it was the first auditable standard for AI management systems, which really just means that it has requirements which either are or are not met.
Based on what I can gather from the free online resources, because actually getting access to the standard requires you to pay like 250 USD, 42001 draws heavily from ISO 27001, which concerns information security management systems, the most important difference being that 42001 was written specifically for AI systems, so it pays more attention to issues like bias and reliability.
Alistair Lowe-Norris:
So I think that there are ways of being able to put standards and best practices out there. Companies can completely ignore that. They absolutely can, which is where regulation and so on comes in. You are either going to make people pay in some way to force it,
okay, or you appeal to their better nature, okay, which, you know, these are companies; they're not really in it for their better nature. They're in it for money for themselves and shareholders. Or you end up trying to push this down with regulation, and the comment is, you either do it this way or we penalize you.
To that point, you know, if you take the EU AI Act, then we're talking about potentially, you know, 7.5, uh, percent of global revenue as an example.
Jacob Haimes:
Just a slight correction here. It's actually 35 million euros or 7% of total worldwide annual turnover, whichever is higher. That's pretty close for off the top of your head though, so we won't fault him for that.
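For clarity on how the "whichever is higher" mechanic from that correction works, here is a trivial, illustrative calculation; the turnover figure is made up.

```python
# Illustrative only: EU AI Act penalty cap for the most serious violations,
# per the correction above: 35 million EUR or 7% of worldwide annual turnover,
# whichever is higher.
def max_penalty_eur(annual_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * annual_turnover_eur)

# A hypothetical company with 2 billion EUR turnover: 7% (140M EUR) exceeds the 35M EUR floor.
print(max_penalty_eur(2_000_000_000))  # 140000000.0
```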
Alistair Lowe-Norris:
You know, these are significant amounts of money that people can get charged or fined, um, for making mistakes, or for willfully making mistakes.
So I think there is a balance here. Okay. Guidance, standardization, recommendations; the "your peers are doing it, why aren't you doing it?"; advocacy organizations; and then regulation. All of these are sort of an intermixed set of things that can push organizations in that way.
But ultimately, let's pick a company, okay, let's say OpenAI isn't doing what is needed, but people are still paying it the money; then it's not really in its interest to move in that direction. So you are 100% right, but I think it's a large, different set of groups that all interlink to try and move this forward, to be good for people and good for the planet.
Jacob Haimes:
Okay. So what you're saying, like the ISO 42001, um, it sounds great, and the 42005 as well. Like, these sound like reasonable, um, you know, ways to go about it. But as you said, you know, companies aren't really doing this, uh, at least from what I'm seeing. Um, so what can we do
to help push that? Is it just sort of vote with your wallet in this case, like, don't use their systems? Um, or is there more that can be done here?
Alistair Lowe-Norris:
So. If you were to ask companies a couple of years ago how much they cared about responsible AI, the answer would've been mostly zero,
um, you know, at that point. I think that there has been a significant move in that direction over the past six to 12 months, um, where it's becoming far more prevalent. And there are some companies that are intentionally moving in that direction, and others are paying lip service to it.
So some people are saying, you know, I'll do whatever the minimum bar is that I can. Someone in an organization might say, you know, I wanna do the minimum amount of compliance and no more. Because, you know, it's all about compliance; you want to comply with the standards.
You're not necessarily gonna go wild with it. But it has to go beyond compliance. It has to be something where you live and breathe, in the way that you develop whatever service or product it is, to ensure that if you're using AI, you are responsible throughout it. So it goes beyond that lip service.
And to your point, companies have to embed it. I'll take an example here, and I'll keep using Microsoft because I was responsible for and involved in the creation of it, but Microsoft has a huge gamut of responsible AI tools and policies that it publishes externally on how it internally develops its software and how it holds itself accountable: what its principles are, how it does AI risk assessments, how it does AI system impact assessments.
Then, trying to think, around this time last year, around September last year, Microsoft cascaded those responsible AI standards down to its suppliers and said that if you are a supplier to Microsoft and you are using AI, and I don't care if you happen to buy one third-party AI tool and decided to use it in your delivery of a product or a service to Microsoft, as a company where Microsoft is buying a support service, whether they're buying paper towels and napkins for use in the restrooms or the kitchens, or they're actually buying software that they use to respond to particular needs,
okay, Microsoft is now requiring those suppliers to comply with an approach around responsible AI. Very heavily, procurement says you need to meet responsible AI standards, and these standards are extremely, extremely strict. There needs to be AI red teaming. People have to have health and system monitoring.
They need to do AI system impact assessments, okay, on their disenfranchised and potentially disenfranchised groups, and publish it. They need to provide transparency notices. So procurement, in a large organization like Microsoft, okay, mandated this a year ago, so all the suppliers have to meet this, or they can't be a supplier anymore.
Jacob Haimes:
This one took a minute, but we got there eventually. So what Alistair is referring to is that Microsoft has suppliers, and to be one of those suppliers, you need to meet requirements outlined in the Microsoft Data Protection Requirements, or Microsoft DPR. Now, I'm not totally sure what these suppliers are supplying, but it seems to be covering a very wide space, so I don't think it's worth getting into.
In September of 2024, Microsoft added multiple requirements to the DPR regarding AI management systems, which mapped to items within ISO 42001. It's worth noting that, at least for larger companies like Microsoft or Amazon, they get compliance certifications for specific products or services, not the entire company.
For example, Microsoft currently has ISO 42001 certification for 365 Copilot, and Amazon has one for AWS. That being said, I couldn't find many companies, any companies really, other than these big tech companies, that were sharing information about their certifications. I have no clue how long the accreditation is supposed to last. If it's anything like it is in healthcare,
they'll let things go due to process drift over the course of the next year or two, which is the duration of the certification, and then they'll rein it in again when it comes time for the next audit. While no governments have singled out specific requirements for AI, there are a few domains which are beginning to expect ISO 42001 specifically, or requirements in line with the standard. According to explainers on this, the areas already seeing this are e-commerce, healthcare, and automated robotics, including autonomous vehicles, although
do take that with a grain of salt, because the only people writing explainers on this are the companies that can be the auditors for it.
Alistair Lowe-Norris:
It's not the only company that is doing this. It was able to take that step because it had all of the standards that it used internally. You could argue, well, hang on a minute, what Microsoft just did was cause complete pain for all its suppliers, because it said, okay, uh, we've now decided to hold you as accountable as we're holding ourselves.
And if you're only three women and a dog running a company, and you are still using AI in the delivery of your service, suddenly you have to do all of these things that you didn't know you even needed to do. But take it from the other side. What Microsoft is saying is, we guarantee that not only do we have a commitment to responsible AI, but all of our suppliers
and vendors do as well, in everything that they do. And Microsoft says that if you as a supplier... so let's say, um, you know, let's say Microsoft decides to hire ABC Corporation. ABC Corporation uses AI, and ABC uses DEF. ABC has to hold DEF to the same standard. Okay. And it must have contracts with them.
Now, this has always been true. There have always been data protection agreements that said that we will obey GDPR, and we will make sure that our subcontractors or sub-processors obey GDPR, and things like that. Now it's gone more into security, privacy, data protection, and responsible AI. And Microsoft is an example of a company that's done that, but other companies are coming out with it as well.
And so now we're gonna see more and more procurement organizations who are going to expect their suppliers to meet these standards. And therefore, in order for their suppliers to meet the standards, the service that those suppliers buy, even if it's from OpenAI, have to meet those standards. So suddenly, okay, all of these companies worldwide are gonna be knocking on the doors of foundation model builders and saying, prove to me that you meet these requirements because if you don't meet these requirements, I can't use your services in the delivery of what I'm doing, and therefore I have to go to somebody else.
So now suddenly I can't buy from you anymore. Now we're talking about enterprise, medium business, small business. True. We're not talking about consumer here, but ultimately this does make a difference. And I think that means that responsible AI needs to exist inside an organization well enough that they then turn it over and say, we're now gonna hold our third parties accountable.
And that's starting to happen more than it has in the past. And I think that's a big change we're gonna see. AI ethics is gonna be huge over the coming, you know, coming years. Huge.
Jacob Haimes:
It feels to me that what's happening is, you know, the buck is essentially being passed. Um, so first, Microsoft having these responsible AI mandates is good. Um, but there's so much... like, how can we validate that those are actually happening, I guess, in this case?
Alistair Lowe-Norris:
That's exactly right. I mean, take this as an example. So ignore AI for a moment. Okay? You wanna be able to prove that you know what you're doing with security as an organization. You're a company, and you wanna say, I have good security credentials.
If you are in America, you could go and get an audit against something called SOC 2. It's done by the Association of CPAs. Everyone in America knows it, 'cause it's a very American certification that was created by the Association of CPAs, so CPAs could do these audits. If you go broader across the world, you tend to move into ISO 27001.
And SOC 2 is about, you know, 50% of what 27001 is. But if you either got a SOC 2, you know, Type 2 certificate, or you've got a 27001 audit, or you had both of those, you were proving that you are meeting, you know, industry standards for security, cybersecurity, information security.
If you wanted to prove that you were as good for privacy management, GDPR, and, you know, California CCPA, CPRA, things like that, what you end up doing is getting something like ISO 27701, so you can get auditors to come in and say that you are meeting the best standards worldwide for doing this, and you're meeting regulation.
It's the same sort of thing for responsible AI. Okay? 42001 and equivalents. You are coming along and saying, you are meeting the EU AI Act, you are meeting these standards. So auditors are able to look at this and ensure that companies are doing all of the things that are required out there.
There are enough ways that companies can self-audit if they wish to do so. There's a particular paper that was published a couple of years ago, but it was republished by, um, a lady called Sonny, who's a researcher out in, uh, Australia, and a group of others.
Um, and it put together a question bank for AI risk assessments. And it started out with something like 293 questions. I think when they published it again in January, there were around 245 questions. But it basically consolidated Australia, the EU AI Act, some things from Microsoft, um, and a host of others around the world, uh, I think it was Singapore and Canada and some others.
And they said, okay, here is a full question bank, and if you answered these 245 questions well enough, you would have proven that you have taken every possible measure that you could in order to do the right things. Now, you still need the assets and the evidence to back it up, but at least if you could answer all these questions, you've thought about all the things that are necessary, and then you just actually have to do the things you said you were doing.
And I think that there is enough of that guidance out there now that allows you to do responsible AI as an organization, um, and do it in the right way. I think that organizations are still not where we want them to be. I believe that if you are an organization that is doing responsible AI, you should be publishing this information very publicly on your website, and very few of them are doing that.
There's a very big difference to me between transparency and openness. Openness requires disclosure. I think you should be proactively disclosing all of these things in a way that allows other people to, to build up their trust with you, rather than just saying, yep, we'll definitely be transparent if you ask us.
We'll tell you. Which is where a lot of them are right now.
Jacob Haimes:
Yeah. Yeah. And I think that's pretty important, uh, now, especially with, uh, the other concerns around AI, of like, uh, the collapse of truth and, uh, the difficulty of building trust in the current state of, you know, ubiquitous communication. Um,
being able to trust a company... I mean, I don't know if I would ever go that far, honestly, uh, because my model of a company is just a profit maximizer. That's what they're made to do. Like, you can't fault them for that. That's the whole point. But getting as close to trust as you can, with what they're sharing being not just, oh, well, if you asked, we'd tell you, but here is the information you need to know in order to evaluate this, without you having to ask for it, I think is important. And I try to do that with, like, the podcast, for example, and like the show notes.
So, you know, I have all of my sources and stuff, but I think it's a lot of work too, is the thing. And so it's hard to get people to do it.
Alistair Lowe-Norris:
This stuff isn't easy. Okay. And to some extent as well, your point about the profit maximizer is absolutely true, but this is a necessary cost of doing business as we move forward.
And I think as AI moves into the forefront and is used in every possible way that it is now being used, and will be used in more, I think this has got to be something that becomes far more important. I think AI ethics and so on as a discipline, uh, you know, is going to become important.
And maintaining and keeping, okay, those systems responsible as they grow and evolve is going to be crucial. 'Cause just because it was responsible today doesn't mean it's gonna be responsible tomorrow. If these systems learn, then they are changing. And with the foundation models that are out there, it's great to be able to say that they need to be explainable, which is what the EU AI Act and others actually do.
But in many ways, they're not explainable. You cannot understand, okay, exactly how a model came to a decision, okay, because of the way that it has been architected. So now what you have to get to is, it's not explainable, but it might be interpretable, which means that the decisions have to be understandable, and it needs to be designed so that it includes comprehensible explanations to help humans understand how it came to that decision, even if the exact chain of thought is not possible.
Because if you ask OpenAI's model how on earth it came to an answer, it can give you a chain of thought, but it's completely made up. Okay? It's not absolutely how it came to it, and you can't know it.
Jacob Haimes:
I mean, we know what is happening within the model, right?
Like, we just fundamentally do know there are, uh, processes that are happening there. So if you go down to, like, the lowest level, we can understand it. Yep. But then also, and this goes more towards my perspective, we could have just documented this and done good engineering in the first place.
Like, we could have just designed it purposefully from the beginning, made the scope narrow, and then understood the process as we trained it. Um, and we wouldn't be in this scenario. And so that is, like, yeah, I guess one thing to push back on, um, because while interpretability is nice, uh, explainability would be far better, and interpretability shouldn't be seen as
a one-for-one substitute there.
Alistair Lowe-Norris:
Right. I completely agree. I completely agree. I mean, one of the principles that we have at Iridius is around interpretability and explainability. And the reason, you know, the reason that we have that under the pillar of AI being ethical is because there are some things, currently,
where, because of the way the models have been designed, the information is impossible to get, and you can't get explainability.
Jacob Haimes:
And so I think this is a good segue into how what we've discussed, especially what we just discussed, influences and informs Iridius and what you do there. What is it that you're doing, and how have you taken this ethos and perspective and brought it into that work?
Alistair Lowe-Norris:
Sure. Iridius has four patents pending focused on building safe and responsible AI solutions for customers. Its approach is to put a multi-agent system fabric together that allows customers to create AI solutions in days rather than months, but to build them in a way that is compliant with external regulations, external standards, and internal policies and standards by design.
The fundamental thing that's different about what Iridius is doing is this: there's a huge issue out there where people are trying to build kitchen-sink agents that have everything inside them.
To your point earlier, you take one model and you make it do 700 different things. And when you have things like this, you have industry luminaries and leaders standing on stage saying we're going to have potentially dozens of agents all working together.
And then there's this orchestrator that brings them all together and has to make sure they're all talking correctly. We did it very differently. We took an approach that said: why don't you take the work that needs to be done and break it down into the smallest possible units, the same way NASA did things with the Apollo program.
It's not deterministic; it's definitely still AI. But what we're saying is: why don't you have agents that are small?
So we have a distributed series of small agents. That means you can build a system with millions of agents, billions of agents, all running and orchestrating together to build solutions. But the attack surface and the concern around any individual agent is much smaller, because its functionality and its requirements are very narrow.
Each agent has a very defined role, but it also has a set of ethical behaviors and guidelines, and it has a set of entitlements, and those entitlements govern what it is allowed to do. So it's about making sure these agents are super focused, super tight, very small, and because of that the attack surface is smaller.
It allows them to self-organize, and at the same time you're able to monitor the safety of these things much more easily, because they're not so monolithic and so complicated that it becomes a difficult task.
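A minimal sketch of what one such narrowly scoped agent definition might look like, assuming a declared role, behavioral guidelines, and an explicit entitlement list. The names and fields are hypothetical illustrations, not Iridius's actual implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentSpec:
    """Declares a single, narrowly scoped agent: one role, explicit entitlements."""
    name: str
    role: str                # the one job this agent performs
    guidelines: tuple        # behavioral constraints it must observe
    entitlements: frozenset  # the only actions it is allowed to take

    def can(self, action: str) -> bool:
        """An agent may only act within its declared entitlements."""
        return action in self.entitlements

savings_interest_agent = AgentSpec(
    name="savings-interest-poster",
    role="Post monthly interest to a single savings account",
    guidelines=("never move funds between accounts", "log every calculation"),
    entitlements=frozenset({"read_balance", "post_interest"}),
)

assert savings_interest_agent.can("post_interest")
assert not savings_interest_agent.can("transfer_funds")  # outside its narrow scope
```

Keeping the entitlement set this small is what shrinks the attack surface: an agent that was never granted "transfer_funds" cannot be talked into transferring funds.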
Jacob Haimes:
Doesn't that cost so much money?
Alistair Lowe-Norris:
Not really, because these agents are quiescent until they need to be activated.
So take a bank, for example, that has 15 million customers, each of which has a savings account, a checking account, and something else. So let's say there are three different accounts each: 45 million accounts.
You could literally have an agent for every single one of those accounts; I'm not simplifying this. So now we've got 45 million agents, but not all of them are running. They only run when something arrives. A transaction comes in, so the agent wakes up: it's hydrated.
It does whatever it needs to do, and then it's dehydrated again and it shuts down. So that's how you end up running these sorts of things.
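A minimal sketch of that hydrate-on-event, dehydrate-after lifecycle, assuming a simple in-process event queue. This illustrates the pattern being described rather than Iridius's actual fabric.

```python
import queue

class AccountAgent:
    """One agent per account; it only exists while it is handling an event."""
    def __init__(self, account_id: str):
        self.account_id = account_id

    def handle(self, event: dict) -> str:
        # "Hydrated": loaded and running only for the duration of this event.
        return f"{self.account_id}: processed {event['type']} of {event['amount']}"

def run_fabric(events: "queue.Queue[dict]") -> None:
    """Agents stay dormant; only the agent addressed by each event is woken."""
    while not events.empty():
        event = events.get()
        agent = AccountAgent(event["account_id"])  # hydrate on demand
        print(agent.handle(event))
        del agent                                  # dehydrate: nothing left running

incoming = queue.Queue()
incoming.put({"account_id": "chk-0042", "type": "deposit", "amount": 125.00})
incoming.put({"account_id": "sav-0042", "type": "interest", "amount": 3.17})
run_fabric(incoming)
```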
Jacob Haimes:
So you do have, whatever, hundreds or thousands of agents, but it's not that you're running them all in parallel. It's that you're running them when they need to be used, and then you're putting them away.
So every agent has its own very defined box that it sits in.
Alistair Lowe-Norris:
Yes, absolutely. But your point about explainability is crucially important, because I know what that agent is. It has a transparency card that clearly says what the model is going to do, how it's going to work, exactly what its inputs are, what its outputs are: this is exactly how it works.
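A minimal sketch of what a transparency card might contain, published as a plain, reviewable record alongside the agent. The fields and values are hypothetical, not a defined Iridius schema.

```python
import json

# A transparency card as a plain, publishable record: what the agent is for,
# what model it uses, and exactly what goes in and comes out.
transparency_card = {
    "agent": "savings-interest-poster",
    "purpose": "Post monthly interest to a single savings account",
    "model": "deterministic rules plus a small LLM for exception summaries, v1.4.2",
    "inputs": ["account_id", "current_balance", "interest_rate"],
    "outputs": ["interest_amount", "posting_record", "plain_language_summary"],
    "entitlements": ["read_balance", "post_interest"],
    "owner": "retail-banking-platform-team",
    "last_reviewed": "2025-01-15",
}

print(json.dumps(transparency_card, indent=2))  # published with the agent
```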
Jacob Haimes:
And this also ties into possibly the question that, beforehand, I was most interested in discussing with you in depth, which came up on your podcast at one point.
You mentioned that once you have a system that is just compliant, or rather that just is compliant, then you don't need to worry about safety needs anymore.
Alistair Lowe-Norris:
I think that's not the best way I could have phrased things. What I mean is: if you have a system that is meeting the current standards and best practices worldwide for AI safety, and you are ensuring that it is compliant as it currently stands, then you know that you are as safe as you can be right now.
What you need to do then is ensure that you are constantly staying ahead of the safety game. Safety changes, the needs change. We might be a hundred percent cybersecurity compliant right now, but tomorrow we might be at 94% because something else has come out. So you have to recognize that safety, and in fact explainability, fairness, all of these things, are constantly moving targets.
You need to be able to ensure that you are compliant with these constantly changing standards, regulations, and targets. And you do that by introducing the new standards and approaches into the existing system and requiring it to step up to meet that compliance. What that may mean is that you are retiring existing versions of agents and replacing them in a stepwise way.
Very quickly, but even so in a stepwise way, with better versions of those agents that are now compliant with regulation 1.73 where they were previously compliant with regulation 1.72, or whatever it is. So what I should have said, and would have said better, is that once you have a system that is compliant, you still need to worry about safety: you need to ensure that the safety needs are continually being addressed, minute by minute, hour by hour, day by day, as new information comes in, and the system has to be able to adapt to that while still maintaining functionality.
And that's something that self-orchestration and all the rest of it will help with.
But ultimately it can give you a little more confidence that you are staying compliant with whatever the best practices are worldwide. And some standards take two to three years to launch; it doesn't mean you can't be compliant with something that isn't yet current. It's a great way of making sure you're staying ahead of the game.
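A minimal sketch of the stepwise-retirement idea described above: each deployed agent records which version of a standard it was built against, and anything below the current bar is flagged for replacement. The version numbers, agent names, and registry shape are hypothetical.

```python
# Hypothetical registry: agent name mapped to the standard version it was built against.
deployed_agents = {
    "savings-interest-poster": "1.73",
    "checking-overdraft-checker": "1.72",
    "statement-summarizer": "1.71",
}

REQUIRED_VERSION = "1.73"

def version_tuple(version: str) -> tuple:
    """Turn a string like '1.73' into (1, 73) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

def needs_replacement(agents: dict, required: str) -> list:
    """Return agents whose compliance version is below the current requirement."""
    return [name for name, v in agents.items()
            if version_tuple(v) < version_tuple(required)]

for name in needs_replacement(deployed_agents, REQUIRED_VERSION):
    print(f"retire and replace: {name} (below standard {REQUIRED_VERSION})")
```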
Jacob Haimes:
Okay. And then I guess to close off the thought on standards: you've been working with standards and regulation for a long time. What actually happens when standards aren't met?
Alistair Lowe-Norris:
Sure. I think there are two things, if we take regulations and standards as two separate things. If you're not compliant with regulations, you can be fined up the wazoo, assuming that the regulator has defined fines. The EU AI Act has substantial fines. The Colorado and Utah ones have minor fines that a very large company would look at and treat as a minor mosquito bite.
Something it really wouldn't care about. So if you are hoping that companies will be ethical, that's a problem; you have to be harsher with the penalty in order to ensure that compliance becomes important inside the organization.
So let's put regulation to one side for the moment. If you take standards, nobody is pushing... well, that's not quite true, but most organizations are not pushing to comply with standards. Yet nobody goes along and says, "Normally I want to be 23% compliant with security standards," because that's probably a bad idea.
You almost certainly want to be as compliant as you possibly can be. So then the question is whether you want to be audited or not to demonstrate compliance, but you can still be compliant even if you're not audited. It's the procurement organizations that actually drive this. So if your company wants to be compliant with a standard, it's either going to be audited or not.
If it's not going to be audited, then it's entirely up to your own organization to do what you think is right and implement those standards, but you don't have any outside organization checking it. And to get an outside audit, there needs to have been an internal audit done by somebody who understands it well enough to ensure compliance before you put it in front of an outside auditor.
So take it from tier one, where we're saying we're going to do the best we can: we're going to implement these standards, but we're not going to check ourselves against them. That's tier one. Tier two is: now we have an internal audit team that's going to ensure we stay compliant with the standards, and with the new ones as they change.
Tier three says: now we're going to have an external audit that proves it, and we get the little badge that says yes, we passed the test and we're lovely for the next three years. And audits, by the way, have to be redone every year anyway. Then the last one is: if you're doing all that, take the procurement organization and require all of your suppliers to do it too.
Now you are asking all of your suppliers to have the audit badge, or to go and find an auditor who can confirm they are as good as that and provide that validation back. That then allows you to mature as an organization. If you're at level four, you are compliant as an organization at level three, because you've been audited externally.
At level four, you're now holding all of your suppliers to the same standard. Now you're doing, in theory, the best you can. But then the next question is, just like we said with AI: these regulations change, so people have to stay ahead of them. Being compliant with a standard from 2012 doesn't really help if it was updated in 2021.
So those sorts of things really do matter. There's a level of taking those steps up the tiers to demonstrate that you're not only compliant; I think the best organizations go beyond compliance to embedding it in their fabric. So we would say at Iridius that we're born out of safe and responsible AI.
It's in everything we do, every single step along the way, and that fundamentally changes the way we do business, because we will not do things if they don't meet those standards. We publish our three pillars and responsible AI principles on the website, and ultimately everything we do is built out of that.
I think that's a fundamentally different way to do business, but it requires the CXO level to actually drive this, with accountability and resources, in order for it to happen.
Jacob Haimes:
Yeah, I definitely agree, because I feel like you hear about standards, but it's not clear what the actual mechanism is that's doing the encouraging. So I then want to do a couple of questions just as closing.
What is your hottest take regarding AI?
Alistair Lowe-Norris:
Yeah, I think if the robots thing wasn't enough of a hot take, I don't know what would be, but I think we haven't seen anything yet with AI.
It's about to be pervasive in ways that are scary, and in parts of people's lives where they haven't seen it before, in a way that is going to be ubiquitous, and that in itself is going to make for quite an interesting set of times for us to live in.
Jacob Haimes:
Gotcha. Okay. And then what's something that really irritates you, either about this space or just something you have to do where you think, "I hate doing this thing that's part of my job"? What's something that sort of grinds your gears?
Alistair Lowe-Norris:
I'm a chief responsible AI officer, so for me, it's that I see a lot of organizations trying to pay lip service to responsible AI after the fact. People see it as something they can do as a bolt-on.
"We built all these AI systems, and now we'll put a policy in place that says we'll definitely do things right." But they needed to have done things right from the start. They need to do a full assessment of all of this, and people are paying lip service to it and just trying to show that they're good, but they're not.
So when you dig beneath the surface, you realize that not a lot is happening. And that's the vast majority of companies nowadays, and I think that has to change. So that's what grinds my gears.
Jacob Haimes:
Gotcha. And then the last question: what do you enjoy most about your work?
Alistair Lowe-Norris:
I love learning from people. I've learned a lot even from you and me talking today, and it's helped me think in different ways. For me, with a growth mindset, it's all about trying to learn and understand more about how people are using AI, how the world is working, and I just love that.
So my favorite part about what I do is that I get to talk to so many different people and have fantastic conversations that I normally wouldn't have. That, to me, is what makes my life so much richer.
Jacob Haimes:
Awesome. Thank you so much for joining me; it's been a pleasure talking to you.
I definitely learned a lot and have, I think, a much better understanding of some of the things we talked about.
Alistair Lowe-Norris:
Thank you very much for having me, Jacob.
Jacob Haimes:
Whew. I know it got a little compliance-heavy there at the end, but I also think it's really useful to see that the kind of work being put into AI safety and ethics is actually beginning to have real impacts by informing the best practices in these standards. If you have enjoyed this episode, think about leaving a review wherever you listen.
Like, seriously. From a machine learning perspective, they weight the importance of reviews so much higher than anything else, so it's actually really helpful to get some on there. Just saying. Anyways, on release, Halloween is just around the corner, so here's a spooky farewell.