Getting Into PauseAI w/ Will Petillo
Jacob Haimes -- INTRO
Welcome to the Into AI Safety Podcast, where we discuss the challenges facing the field of AI safety and those who work within it. This show aims to provide the foundation and resources needed to get up to speed on safe and ethical AI. As always, I am your host, Jacob Haimes
During the podcast you may hear this sound 🔔 which denotes that I have included at least one link related to the content that preceded the sound. The purpose of this is to allow for citations and providing resources without breaking the flow of conversations. As per usual, items mentioned during asides will have links in the show notes as well
This interview was recorded on May 22nd, 2025, and asides were recorded on June 11th, 2025
This episode is an interview with Will Petillo, the onboarding team lead with PauseAI, a movement with the aim of hitting pause on the development of so-called frontier models. I'd like to give a quick shout out to Remmelt Ellen, who was actually the first guest this podcast ever had, because he's the one who recommended and put me in contact with Will in the first place. So thanks, Remmelt. This one's for you. During my conversation with Will, we'll get into just what PauseAI's ask is, prioritization of cause areas in an uncertain world, warfare, corruption, and so much more. Before we start, I'd like to note that Will joined the podcast in a personal capacity, and his views do not reflect those of his employers. Now, without further ado, let's get into it
-- END INTRO
Jacob Haimes
Welcome to the show, Will. I really appreciate you coming on. Could you introduce yourself and give us a little bit of information about who you are and what you do?
Will Petillo
Sure, so my name's Will Petillo, he/him pronouns, and I've done a lot of different stuff before getting involved in PauseAI. Coming out of college, I was a teacher for a little while, then a paralegal in an intellectual property law office, then a semiconductor engineer for Tokyo Electron, then a video game programmer, and that's leaving out all the little miscellaneous summer and odd jobs and volunteer things and extracurriculars, which would take up too much time if I listed all of it. So currently I am volunteering close to full time as the onboarding team lead at PauseAI 🔔
So this is talking to new members as they come in and getting them connected with various volunteer opportunities in the organization, letting them know what PauseAI is about, what the culture is, what our values are, answering any questions or concerns that they have, and basically connecting people
Jacob Haimes
Okay, and so just to make sure that we're all on the same page here, could you give a brief description of what PauseAI is as well? And we'll of course be digging much more into this later, but just so we're starting off on the right foot
Will Petillo
Sure. So PauseAI is an international grassroots volunteer organization. Our banner ask, the thing everything is centered around, is trying to slow down the frontier of artificial intelligence development
Jacob Haimes
Okay. And so by frontier, you mean the biggest models that are being developed by the big name companies, right?
Will Petillo
Yes, that's exactly what it means. Earlier when we started out, we were kind of continuing an existing effort. There was the Future of Life Institute letter asking for a six month moratorium, and it had a specified amount of compute not to go beyond 🔔, and it was for a specified six month period
I didn't join the organization right when it formed; I joined maybe six months or so after it started. So I don't know exactly the history of it, but I could see a pretty strong parallel between that idea and what our founder, Joep Meindertsma, felt: hey, there isn't really a sustained thing to actually make this kind of thing happen, and since no one else is doing this, I'm gonna start it up. And when I first joined, after I'd been around for a little bit, it was just a Discord server, Joep doing some stuff, and everyone else kinda maybe helping a little bit here and there, but mostly just kind of chatting and giving moral support
Also, another one of our core founders in the US branch is Holly Elmore, and her take on it was that we need to be having protests. AI safety has kind of been around for a while, but it's largely been conversations on Less Wrong, technical alignment research by the Machine Intelligence Research Institute. There hadn't at that point been much effort to really engage with the public. The rationale being that this topic's too complicated and regular people won't get it. Or if people think that they get it, they'll pull the thing off course and do stuff that makes things worse somehow
Jacob Haimes -- ASIDE
As is often the case during these interviews, we find ourselves mentioning Less Wrong, Rationalism, Effective Altruism, Eliezer Yudkowsky, or some other related thing. Before AI was a more mainstream topic of conversation, there were people that were talking about the risks of advanced AI systems, and the place these conversations were had was primarily a community blog called Less Wrong
Less Wrong was originally created by Eliezer Yudkowsky as a place for so-called rationalists to discuss and share ideas on their topics of interest, which include cognitive bias, philosophy, economics, and AI. This community has strong ties to effective altruism and AI safety. There are many criticisms of these spaces and these groups, and I discussed them in much more detail during my interview with Dr. Igor Krawczuk, so if you are curious about the broader context, I would highly encourage you to check those out. In general, the cultures in these spaces have strong exclusionary tendencies, which I believe are the result of elitism and in-group favoritism
The reason I think both of these not-so-good behaviors are so common is that core aspects of both the effective altruism and rationalism ideologies can be seen as justification for them. For example, if you believe that certain readings or concepts are so valuable that having not read or thought about them would negatively impact one's ability to contribute to a conversation, it's easy to dismiss anyone who hasn't
Because the communities are based off of prohibitively long writings, which often include new and unintuitive terms for already existing ideas, this kind of gatekeeping is supercharged. For new people in the space, most conversations you have are littered with these conceptual landmines, and setting one off increases the likelihood that whoever you're talking to decides they actually don't need to listen or care about your opinion
A direct consequence of this pattern is that almost everyone who wants to get involved in AI safety is gaslit into believing that they need to change in order to be valuable to the field. I want to be clear: this couldn't be further from the truth. The way that we reach the best solutions is by building off of diverse perspectives. Having a bunch of people who have all done the same readings and share the same ideas is a recipe for failure.
Anyways… Because these tendencies are embedded in the cultures, most didn't leave these communities to discuss their concerns, and some actively discouraged it
-- END ASIDE
Will Petillo
and Holly felt this was a mistake. No, the core ideas people can get, and also AI affects everyone, so people should have a say, and so what was really needed was to shift the Overton window by making all of this stuff more visible 🔔. So those ideas together essentially formed the foundation of what PauseAI was about
All this to say, the initial sense of there being a certain amount of compute that shouldn't be used in a training run until we know what's going on at the earlier levels, that could get out of date. It could be that what the frontier looks like changes, and it's more about what algorithms are used or how much is trained, you know, beyond post-training
So there's some flexibility in terms of what exactly we want to slow down. And that's a kind of necessary vagueness because the industry is changing very quickly
Jacob Haimes
Okay, so yeah, I think that's a really good way to describe sort of PauseAI, just so that we know what we're talking about and working towards. But I also am really interested, you know, how did you get to where you are right now? You said it started, you know, as a Discord server
Where did you learn about this? How did you get involved? And what was your path here to being a full-time volunteer with PauseAI?
Will Petillo
Sure, so I'd first learned about AI and AI safety from Less Wrong and reading a lot of Yudkowsky's stuff 10 or 15 years ago now. And at the time it just seemed like an interesting thing that maybe people would have to deal with later. I actually wasn't even totally sold on the idea that the concepts they were saying were true
Yeah, things shifted for me when ChatGPT came out and suddenly it was a lot more visible. My sense of when the stuff would become strongly relevant got a lot sooner. I think that happened to a lot of people. And this was happening at about the same time that I was feeling a bit burnt out on video game development, largely just for personal reasons.
So I saw some link on Twitter about some protest happening in California. I wasn't able to go to it, but I was intrigued by the idea. Actually, my initial thing was I was concerned that it would backfire. And so I was kind of joining to say, I want to make sure these people know what they're doing and it's not taken off track. I was actually kind of going in to argue. But then when I actually started engaging with it, it's like, okay, there's a little bit more thought behind this than I would have assumed right off the bat
Jacob Haimes
Gotcha. OK. And then you say you're on the Discord, and you're sort of trying to understand the arguments. Maybe at least somewhat because, you know, I mean, it's good to sort of red team and be able to understand what someone else is thinking. So how did that transfer towards you getting more involved?
Will Petillo
Yeah, so I guess basically the thing was, in engaging with this, I had no background in activism. My mother did a lot of environmental activism, but I'd kind of distanced myself from that as a little bit of teenage rebellion
in a way, or in a subtle sense. So, you know, I kind of had that in the back of my mind, but it had some negative associations. But as I was engaging more, I started to learn more about what activism was about. And I was also wanting to find ways to be involved, partly just to fill the time and have something, you know, interesting to be engaged with.
Actually, at first, joining was just kind of a passive thing, and more of my effort on AI safety as an issue, which, yeah, at this point was interesting to me, was actually trying to get involved in technical alignment, specifically leveraging my game development background to build a grid world level editor in the Unity game engine, so that anyone could build test environments and be able to run different algorithms through them to see which ones passed various safety benchmarks better. So I put some work into that, and I also went to this virtual AI safety unconference, or VAISU 🔔. I think that happens every year.
So I submitted some of the things I was working on with Unity, to show people some of the tools that were available, and also connected with some other people. I actually worked for a while with Japs Tietsig on his project for an algorithm that would build satisficers: it would prevent the user from inputting any goal that was open-ended by translating it into a form where there would be some limited amount that it would pursue for any given thing
And there's more one could say about that. I also had the idea from VAISU to start making a series of videos. It's on my YouTube channel under Will Petillo. I have a playlist called Guardians of Alignment where I interviewed a variety of people. Basically, I contacted people from VAISU saying, hey, you have some good ideas here, I'd like to help them get distributed a bit more, who would be up for having a video interview? And then I scheduled several of those. So that's the origin of that whole series
And one of my takeaways from all this is that there were some pretty good ideas here. I don't think any of them were things that someone like Yudkowsky would say, yes, this solves it. But they seemed to be trying to understand the problem and really solve it at a deep level; whether they're successful or not is another thing. And I did not have that impression at all when I was looking at what's happening at OpenAI and other companies. Their approach seems to be: we'll develop AI to the point where it can function as an alignment researcher, and then we'll just offload the problem there. I wish that was a caricature, but I think that's actually kind of what it is
Jacob Haimes
I think that's a very straightforward, reasonable way to explain what quite a lot of people think the plan is 🔔
Will Petillo
Anyway, all that to say, I was more impressed with the work of volunteers than what I was seeing from the big companies. And this kind of led to the realization that, okay, I'd been trying to think of it from a technical perspective, but that really seemed to be downstream of policy and just what's being incentivized by economics
You know, it doesn't matter if someone has the best idea in the world if they're not able to actually build it and something else, which is a less good idea, gets all the money. This idea that tech is downstream of policy. Seeing that kind of caused me to want to pivot a lot more towards working in some form of advocacy, and by this point I'd already established a bit of a relationship with the people at PauseAI
Jacob Haimes
Gotcha. So I do have one question. One reason that I started this show, this podcast, is that a lot of people, I think, get into similar positions at varying, I guess, places along this trajectory of wanting to get into maybe some of the more technical or policy oriented research. And then for some reason or another, it's just very competitive, there's lots of people trying to do that as well, and so it can be really difficult to find your own path
And I wanted to talk specifically about that: you saw this sort of discrepancy, I guess, between what is possible in promoting more diverse ideas to find solutions. And although you had your own ideas and were working towards some, you decided, actually, I think it's going to be more valuable in some manner of ways, maybe that's personal fulfillment, maybe that's impact, I'm not sure, but it's more valuable to be doing more advocacy and sharing based work
Is that sort of correct? And regardless, could you talk me through that a little bit more, to help me understand how you went about navigating this sort of decision space?
Will Petillo
So you mentioned actually two things there, impact and also personal fulfillment. And the choice was a little bit easy for me because those were both pointing in the same direction. I was just speaking in terms of tech being downstream of policy; in terms of my views on what's more impactful, I saw the policy side as bottlenecking the other things I was working on previously
But in addition to that, I was just in a space of my life where I could not maintain focus staring at code, and working with people was just a lot more satisfying where I was emotionally
Jacob Haimes
Gotcha
Will Petillo
So since those were both pointing in the same direction, this was easier and also seemed better in a rational sense. That shift sort of just happened, and there wasn't really a big internal conflict there
Jacob Haimes
Awesome. And so you then went, I mean, you were already on the Discord for PauseAI. But how did that start? Like, did you just go up to, you know, Holly and say, hey, I want to volunteer? Or does that look a little different?
Will Petillo
So at the time it was just, there was a link to join the Discord and I clicked on it and then I was in it. And then they had regular events of like a weekly action meeting where people would just say what they'd worked on in the last week and ask for help if anything was coming up. And that was just on a public meetings channel that you just click on and you're in it. Then there's all the channels where people are having conversations. Anyone could just participate in those
And you know, if you're trolling or something, you could get banned, but basically it was a very open public sort of thing. Lately, there's been more like private channels kind of forming, you know, for leadership. But at the time when I joined, like everything was totally open
Jacob Haimes
So we talked a teeny bit about this before, but could you give a little bit more background and understanding of what the goal is? You mentioned a flagship ask, or I think you may have used slightly different words, but what is that and why is that?
Will Petillo
So there's two basic reasons why we're interested in the frontier training of AI models. And the main one is that that's kind of our view of where a lot of the biggest risks come from. But something kind of related to that is that it seems to be pretty clearly where the bottleneck is
So if a technology already exists and it's been distributed out into the world, trying to stop any bad usage of it, that's a really complex governance problem that's having to diffuse out in a lot of different places. Whereas the big training runs are happening in a very small number of companies, in very few locations, and cost a huge amount of money. Every step in the process is owned by various monopolies, like on the hardware side. So the number of actors needed to coordinate is much smaller. And it's also a bottleneck in the sense of downstream effects. So when there's a new training run, there's all sorts of capabilities that might come out of that. I think there was a story once of some people at OpenAI taking bets as to what their new model would be capable of after a particular training run 🔔. And then with new capabilities, there's all sorts of different technologies that people might build off of that. And then with every technology, there's some unknown amount of societal effects that it has. And so the societal effects are ultimately the thing that matters. But at each step, it branches out exponentially. So if you deal with the place farthest upstream, then that deals with everything else
Now there's obviously downsides to that, because now you're also slowing down good societal effects. But our view on this is that it's moving so quickly, and there's lots of issues that have already come up from technology, from AI specifically, that society hasn't totally come to terms with, built an immune response to, figured out how to deal with, that we really just need to slow down to be able to deal with what's out there already before constantly adding new things into the mix
Jacob Haimes
Okay, and one thing that you had mentioned previously as well to me was this idea that it's not necessarily about the pause itself. It's, I mean, a pause would be great, but what's more valuable is what would have to be true about the state of the world if we were able to get to a pause. Can you explain that a little bit more?
Will Petillo
Yeah, so there's one objection I've often heard given to that Future of Life Institute letter that is also sometimes brought up with PauseAI. Although our ask for a pause is more open-ended, we never say six months, we say as long as it needs to be, so this kind of addresses the issue directly. But also there's a question of, well, this pause can't possibly be forever
You know, there's some kind of time limit until you can't hold it up. And so what do you expect to achieve in that time? Do we really expect that safety research is going to catch up to capabilities in six months or however long it is? Or that decision making is going to get massively better in that time if we just slow things down? That doesn't seem to be the direction the world's going lately
So the way I kind of take a look at it is to be less focused on just the outcome of a pause and more on the process that it implies, like you said. So if there was some pause, let's say it was even five minutes, like I've heard this idea of an AI fire alarm.
Well, that's obviously useless in terms of what can be done in five minutes. But this doesn't just drop out of the sky. If something like this happens, there has to have been some mechanism that brought it about.
So if like a five minute pause were actually enforced and monitored and people knew that it was happening, this would imply that there was coordination between the major governments of the world, which implies that they've talked to each other and set up communication channels. It implies that experts have figured out ways to monitor and enforce it
So it's not just relying on trust between states that don't necessarily trust each other. And it also implies that all the individual actors have bought into it. This kind of thing would not hold up if there was one group really pushing it and everyone was just dragging along because they had to. You need buy-in for an agreement to hold up. And that buy-in implies that individual actors, like the US government, for example, saw the need for slowing things down, which realistically is only gonna happen if their constituents are pushing for this, and that the governments are actually listening to their constituents
Jacob Haimes
Yes, that is also relevant. I mean, we did talk about this in the last episode that I released at this point, and that is that constituent communication does seem to be pretty impactful if done well. At least it has been in the past. So I think there's hope for that
Will Petillo
I want to speak to that a little bit, and this might be jumping ahead, but bear with me on the value of constituent communication. So one statement from one of our DC lobbyists, Felix: you know, he's been talking to a bunch of policymakers, both Republicans and Democrats, and has generally gotten a lot of buy-in in terms of people being interested in the arguments that he's making and generally agreeing…
But then sometimes being slow to act, or almost always being slow to act on them. And so he would follow up by asking, what would it take for you to be a leader on this issue, to do this and this and such a thing? And the answer is generally something to the effect of: I need to know that my constituents have my back. I need to know that people actually care about this and that it's going to help me, not hurt me. So, what would that take?
The response, or one of the responses, was: if I could get 10 letters on AI asking to slow things down from constituents in my district over the next week, that would shift what I would do on this. It wasn't tied to a very specific promise, but I actually got a very similar number when I was personally talking to a state level representative, and they said something very similar. Like, if I could get just a few letters on this, that would move my interest on this
And one of the reasons for that is that very few people write letters. And so for every one a representative gets, they can assume that there's a lot of other people who probably care about the issue and who aren't writing. And whether you take a cynical perspective and say they just want to get re-elected, or a less cynical view that they want to represent people, it comes to the same thing.
So, yeah… that ultimately gets to be our responsibility as citizens. If we want things, we need to ask for them
Jacob Haimes
Yes. And to ask for them in meaningful ways, I think is also important there. We can't just click on the button that says this will send a form letter, because at this point, that's such common practice, as far as I can tell, that it essentially means nothing to the policymakers
Will Petillo
This is a good point in terms of what makes effective communication, and there's a very simple answer to this. The thing that matters is evidence of effort, or really evidence of caring. Which means, if you're writing a letter to a representative, it doesn't have to be this really well researched, persuasive thing
All you have to do is make it clear enough what you're asking for so they don't get the wrong idea and go the opposite way. But beyond that, you just have to show that you care about it. Telling a personal story is fantastic. But one paragraph, maybe two, is plenty for an email. And that's also why it's important to actually write a personalized one yourself rather than just clicking a link on change.org or something of that nature; specifically because it's so low effort, it doesn't really send much of a signal. Now, that's still better than nothing. And it also matters how it's being used, how the organization circulating the petition uses it
So for example, if I go in as a lobbyist and I have with me a list of names of constituents with zip codes who have signed a thing, and I go into that saying, look at all these people who agree with me, and these people have stated that they might even shift how they vote in some cases based on where you come down on this issue, that's gonna get their attention a lot more than just me going in on my own. So, petitions can be meaningful, but if they're just kind of circulated around saying, hey, look how many signatures we got, and they don't actually do anything with it, then that is much less useful
Jacob Haimes
Gotcha. I think that's a good segue into the next sort of question area that I'm interested in, which is: how does PauseAI go about achieving this goal of, you know, reaching a place where we can have a pause on large training runs? How do you do that? What are the actual things that are happening?
Will Petillo
I split our actions up into three categories, in terms of how you're involved in the organization: inside game, outside game, and organization
So, the inside game is like lobbying, essentially. Talking to people in power, trying to convince them of things. Outside game is trying to talk to regular people. So educational messaging, outreach, protests, anything that gathers attention is outside game kind of stuff. And then organization is working with people in the organization trying to get all the parts to work together. It's very behind the scenes
Each of these has a very different mindset with it. And this can lead to a lot of confusion around activism when you try to judge one process through the values of another. So, for example, taking compromise positions, asking for something that's a step in the direction of what you want, makes a lot of sense if you're in the inside game trying to make incremental improvements, but it's a really bad idea if you're coming from an outside game focus and pretending that what you actually want is the small incremental thing
But the other thing that's important about this analysis is the fact that they work together. So like I mentioned with that petition, imagining myself as a lobbyist, insiders have a lot more leverage if there is a strong outside game movement that they're representing. And likewise, a mass movement is going to have a bit of a pyramid scheme effect to it if it's not tied to some actual lobbying and trying to work within the system to some degree
Jacob Haimes
Gotcha. Okay. My first reaction was like, that doesn't sound good, but you weren't trying to make it sound good. Right. I got completely confused there
Will Petillo
And this can be a feeling, I've heard activists describe feeling this before: hey, what is my role as an activist? It's to convince other people to be activists. And what am I trying to convince them of? For them to be activists too? Like, wait, where does this bottom out in terms of actually changing things? Does everyone in the world have to subscribe to this thing?
Like, no, because the point of that is to give fuel to the insiders, and then they manage to make incremental progress, which then sustains and gives a reason for the mass movement to say, hey, we're actually accomplishing things, and that makes the whole thing sustainable
Jacob Haimes
And then the third group, the organization, where do they fit into that dichotomy?
Will Petillo
That's the behind-the-scenes work of getting everyone else working together. So my role in PauseAI, for example, is very much the organization. I do a little bit of writing to senators and I want to start lobbying a bit more. I've done a little bit of outreach, but mostly it's working with the people and resolving disagreements and handling some logistics and that kind of stuff
Jacob Haimes
Gotcha. So it's really just that interfacing aspect to make sure that the inside game sort of people can actually be leveraging the outside game efforts and vice versa
Will Petillo
Yes, you could say it's just the two components, but I think the third one, the glue, is an important thing to keep in mind as well, because that's a different mindset altogether. For the outside game, the core virtue you want to have is courage and boldness. Whereas the inside game is a little bit more about strategy and thinking things through in a lot of detail. And the organization is more about humility and letting other people take the stage
Jacob Haimes
Gotcha. So that is helpful context, but it doesn't quite answer the question I think I was originally going for, which is: what are the actual things that are happening? On the inside game level, like lobbying, it sounds like writing letters is an example, or Felix in DC, I assume, meeting with Congresspeople or their staffers. Is there more than that in the inside game? And on the outside game side, is there anything being orchestrated or happening on a more routine basis?
Will Petillo
So for the inside part, I think that basically covers what we're actually doing right now. There are other things that we could be doing in that category, like working with other organizations a little bit more deeply. For the outside perspective, yeah, one of our main groups that I'll direct people towards is the outreach team, which contacts social media influencers, traditional media, and tries to get our message out to established audiences
So like if someone's like a podcaster and they're talking about AI, then we say like, hey, you should talk about PauseAI. So that's one aspect of it
There are also all of our protests. We're going a little lighter on those, because when we first started protesting, it was a really new thing, and you could get five people together and it would be, you know, a news item. That's less the case now. So more of our focus in mass mobilization is in the form of creating local meetups
So I have one in Portland where I handed out a bunch of flyers. We had a meetup at the library and then we just brought in people. I would give a little starting lecture about some of the things that are happening with AI and what PauseAI's stance is, but then let everyone else share their own perspectives. And then I brought up some questions for them to discuss. There's also being involved locally: I was recently tabling at an internship fair at a local college, and so I got a bunch of students who signed up who were interested
And so we're gonna have this whole process where we teach them a lot of the basics of activism, and then hopefully it culminates with them trying to meet with their state representatives, put together a presentation and practice that, or some other form of activism that works for them. So basically, yeah, that's a lot of the outside stuff. And then also some of our members have taken on individual projects of putting together videos or other things that are distributed digitally
Jacob Haimes
Okay, cool. And just because it's particularly relevant to that: how would I know if there was a PauseAI meetup going on? Where would I look, or how would I find out about that or stay up to date on that?
Will Petillo
So the PauseAI website has a national groups page on it, which lists the countries that have groups, so that's international. Probably a better one for finding local meetups in the US is PauseAI US, that's the US branch
Basically, yeah, the process right now often involves joining the Discord, and then there are a lot of local communities pages and we send someone to those. We're working on creating a more public-facing, centralized thing that anyone can access to find out: is there a thing in my city? I know in Portland, I've set up a PauseAI Portland Facebook page and invite people to that. But that's kind of using social networking rather than just: I'm in Portland, how do I find it?
Jacob Haimes
Okay. I guess maybe related to that, but also going towards another question area that I have: especially right now, at this point in time, there are a lot of things going on. I can think of a couple immediately, like increasing impacts of climate change, backsliding towards totalitarianism and fascism, and not just in one place but in many, and, you know, war in various different places. So given that there are so many of these issues, how do you balance, or how would you suggest thinking about, where to put your time? If, let's say, I only have a little bit of time to do some advocacy, why PauseAI as opposed to maybe something else?
Will Petillo
So there's a saying that's been around for a while that I like a lot, which is: think globally, act locally. I think it's a very useful mindset to have some sort of big picture view of what's happening in the world and what the upstream causes of everything else are, like what's driving the system as a whole
But then, working on that directly might not really be tractable, or might be too big, or it might not be accessible to any given individual, and so it's very valuable to just pick some specific thing that is of interest to you for whatever reason and focus your time on that, but also be aware of it not just being an isolated thing but part of something bigger
So, that can look like a lot of things. That is to say, if your main issue is combating fascism or environmentalism, I'm not going to try to argue you against that. There are some things I could say, like: that's great work, keep it up. You know, for anyone who's involved in something along those lines
Jacob Haimes
Anyone who's already doing activism work, don't stop
Will Petillo
And I'm also not gonna say your time's not being optimally used, even. I can make a case for the value of working on AI in itself. And if you hear that and it's like, this is really compelling, I feel more motivated towards this than towards something else: great. But I don't really see it as a competition, as far as time goes
I see ourselves as more competing with things like, you know, video games or sports, or actually really competing with people making angry comments on social media and trying to be activist in terms of yelling at each other and polarization. Less of that, I think, would be good, and more things that actually work, whatever the cause area is.
Now, in terms of PauseAI, I think the best answer to that is just my own reasons for picking this out. Then listeners can decide whether it resonates with them.
Basically, I think there's two causes. One is I personally believe in the whole existential risk kind of thing on a fairly short timeline. And, you know, there's a lot of arguments being made about that, and I think that could be a whole podcast in itself, so I'll avoid some of those arguments here. I think it follows pretty clearly that if that's something you believe in and can feel, making it a priority seems kind of obvious. The underlying assumptions are contentious, but the conclusion I think follows pretty easily
The other rationale, though, is that it ties into everything else and is kind of an accelerant on all the others. I've recently come across this idea called the meta crisis, which in very short terms: humanity's been around for over 100,000 years, actually considerably more than that, and we've been facing various existential crises for less than 100: you know, nuclear war, major pandemics, global warming, all kinds of other such things. That's all been very recent. That's not a coincidence. There's something driving all of this. And I would see AI as being an accelerant on all of these things
So if your big concern is about the rising sense of fascism in the world, well, the impact of social media on people's sense making and polarization seems pretty relevant to that. Or if you're concerned about centralization of power, you can see how AI could accelerate some of that, centralizing power for the people who control it, given the economics
Jacob Haimes
Well, I do get what you're saying. I also want to push back a little bit to see where it goes. Why a pause on frontier models, then? To me, you know, I agree that social media, for example, has had a very significant impact on people, and social media and content suggestion and surfacing algorithms and the way the internet has been created is definitely AI in my eyes, but that isn't addressed by this sort of stop-the-frontier
And so I guess that's my question: just seeing that difference and asking, you know, what about that?
Will Petillo
So my interest in the frontier, I kind of mentioned this earlier in terms of it being the upstream source of a lot of other things. But another way of phrasing that idea is it's the biggest source of unknown unknowns
So right now we have a bunch of issues: with the AI of the past, we already had issues of polarization, of bias, you know, and some other things
And with more recent kinds of stuff, we also suddenly get the issues of automated aspects of warfare, and the environmental impacts are gradually growing. If you advance the frontier further, it's kind of hard to predict exactly what new things are gonna come out of that
Jacob Haimes
Okay, I do want to also note... AI has been used in warfare for a very long time. So that's not a new thing
Jacob Haimes -- ASIDE
Will and I talk around each other a bit here, and I think it's because we just have a slightly differing opinion regarding the impact of ceasing training of the most advanced language models on AI and warfare. But it isn't super clear based on our conversation, so I wanted to hop in and provide a more streamlined version of our perspectives. I bring up the distinction that AI has been used in military contexts for a while because, in many ways, warfare has already been automated
While AI systems used by the Israel Defense Forces in the Israel-Hamas War, such as Habsora, Lavender, and Where's Daddy, are not based on LLMs, that doesn't make their extensive use, or the acceptance of their outputs as accurate without verification, any less horrifying
Similarly, the use of metadata to determine which individuals on a kill list should be terminated by the United States military starting in the 2000s is something we should not be okay with. Tools like these would not be impacted by a pause on developing the most advanced language models, so my thought is that a pause wouldn't have that significant of an impact regarding AI and warfare
That being said, militaries are beginning to experiment with the use of LLMs for operational tasks, and I think it's highly likely that we will see similar patterns of over-reliance on the outputs of these systems, just as we have many times in the past. I believe Will is seeing this and identifying that a pause would slow down the adoption rate of language models in the military, which is probably a good thing
While we are emphasizing different aspects, both of us agree that the increasing automation of warfare and the lack of oversight that that often brings is extremely not okay. One last thing I wanted to mention is that I believe it is a disservice to speak of automation of warfare as something that is only in the future. Doing so normalizes the breaches of human rights that are already happening
-- END ASIDE
Jacob Haimes
It's being increasingly used, but I would say that that's not… It was initially introduced as AI and quickly branded as not AI when it was first introduced in, you know, the seventies or whatever, because they wanted to use it in their targeting systems. So, yeah, I just think that's an important distinction to make: we're already doing a lot of that bad stuff
Will Petillo
Sure, I'll be a little more specific. So drones that can identify and kill human targets, you know, without being directly controlled, and then being upgraded into swarms. That is not something that existed until recently
Jacob Haimes
Yes, I just see this as a gradual change from the single drone, which…
Will Petillo
Which is also new, you know, in terms of like being able to act autonomously
Jacob Haimes
Well, one could argue that with the targeting aspect of the system, technically someone had to pull the trigger, but they got to choose whether or not it was adjusted, or the system chose whether or not it was adjusted. So I'm not saying that I... obviously I think that using AI in warfare is not good and I don't want it to happen. I just think it's very important to note that it's not brand new
Will Petillo
I think the underlying point here that I'm hearing is that these changes are continuous. If I had to put any line in the sand saying this became possible because of this particular training run, or this particular moment where something is AI versus it wasn't, there's gonna be a certain level of that being arbitrary. Any change that looks like a new thing was actually preceded by something that was very similar and maybe didn't get as much attention
Sure, but that doesn't mean that change doesn't happen
Jacob Haimes
Right. I guess to me... I agree that it would be good to slow down, especially because of what it would mean about the world, right? I just don't think that would have an impact on the use of so-called AI in warfare in a meaningful way
Will Petillo
I don't quite… So you're saying that whether or not the frontier of AI development advances, this is not going to have any impact on the military?
Jacob Haimes
Specifically when we're talking about the big companies that are creating, you know, the frontier models, as they're being called by some. The frontier models are language models, and the military is not largely using those for combat scenarios. They're already using other AI systems, and those other AI systems will continue to get better regardless of this pause
And so… You know… I'm not saying don't pause, I'm saying maybe we should be thinking about that as well. And I'm just sort of putting that out there as something that I think is important to note
Will Petillo
Sure. I guess one thing that I think is important: when you start thinking about managing all the applications of AI that already exist, and how it diffuses and how it gets used, I'm certainly not against that. The challenge with focusing on that is basically that it's complicated: you have to get into a lot of nuanced distinctions as to which applications are good and which ones are not, and I think people can have a lot of opinions in terms of where exactly to draw that line and…
Jacob Haimes
I mean, yeah, that's definitely true
Will Petillo
So when you're trying to rally people around a single underlying cause that everyone in the movement agrees on, dealing with all those really nuanced distinctions is hard; ultimately it's about the process by which you come up with them, in terms of what clearly matters
Jacob Haimes
Right. Okay. And so you mentioned earlier that, personally for you, some of the existential risk or X-risk type stuff is salient and part of what you think about, but that's not necessarily everyone. And so I guess I'd like to hear a little bit about, you know, the different perspectives of those within PauseAI, and how you go about getting to the ones that are being championed, like the call for the pause to frontier training runs
Will Petillo
Sure. So, one of our core principles is we try to be focused on that particular outcome and, within that, go for a big tent philosophy. And part of that is we have a mix of members who are, say, techno-optimists versus techno-pessimists
A techno-optimist is generally of the view that technology is generally good, and yeah, there are harms that come up, but ultimately society learns to adapt and you get most of the positives and leave aside most of the negatives, and in the long run it all works out. But AI development is different because its potential to be destructive is just so much greater
Then on the other hand, there are people who go more of a techno-pessimist sort of route, which is basically the view that all these things are destroying our collective decision making and just making the world worse, allowing a lot more value capture rather than value add, and we just need to stop all of this stuff.
And I'd say PauseAI is not firmly in either of those camps. We kind of include both of those people, and part of what…
Jacob Haimes
Techno-neutralist
Will Petillo
Yeah, it's agnostic. Because it's less a matter of it all balancing out and being equal, and more of: there's a lot we don't know here. But yeah, I guess you could sort of say that
And another place where we're balancing interests is between liberal and conservative. So there are, you know, some of our members who are more concerned about things like national security, or just the effect that this has on culture, and how tech companies are able to shape the information flow in the world, versus people who are more liberal and concerned about environmental causes, bias, and various things of that nature. So that's a mix
And then also, kind of related to that, there are some people who come from the Less Wrong community, like myself, and are interested in existential risk, and others who don't really buy into that but see a lot of the things that are happening now and are just concerned about the direction things are going for easily observable reasons
Jacob Haimes
Yeah. And I think the important part there to me is that, you know, all these different perspectives still converge on this similar idea of like, yeah, it would be good if we could have a pause. That would mean some good things about the state of the world and about where we're at. And so it's worth championing that despite the differences that we have in why that is
Will Petillo
Yeah, there's what it implies about the world. And related to that, with all these kinds of differences that we have within our organization, that exist in society, and that people hold in good faith, we need time and a place to be able to hash that out and find some kind of compromise or win-win between all these views. Like, I have my own opinions on each of them, but, you know, maybe I'm wrong. And so, you know, get other perspectives
Jacob Haimes
Yeah, that's one reason that I love just getting more people to be able to meaningfully engage in the conversation is because, you know, everyone's got their opinions and we're all probably wrong. So we might as well work together to get to something a little bit better
Will Petillo
You know, and that process is messy, but I think it's a lot better than the current process where we're essentially, I'd say we're following what I'd call the golden rule, which is whoever has the gold makes the rules
Jacob Haimes
Yes. Okay, so another thing that you mentioned, just as something you'd want to bring up and discuss, is misconceptions or frequent arguments against this idea of a pause. You know, what are they? And how would someone who is fully engaged and bought into the PauseAI perspective address them?
So yeah, I'm curious like what some of the most frequent misconceptions and objections to this are that you hear
Will Petillo
Sure, so there are a couple of things that I think are really important for anyone listening to this podcast to hear about PauseAI, and yeah, these are things I've run into.
First one, and this isn't super common, but I think it's definitely worth speaking to, is the idea that you can't have an opinion if you're not a machine learning expert or deep into this to some degree
And so this is something that I think was kind of under the surface in a lot of the AI safety discourse before it became publicly known. And I'll also see it sometimes in new members joining, where they're maybe a school teacher or something else, and they're a little bit timid about speaking out, and, you know, they cite this as a reason. So, a couple arguments against that. One is that if it affects you and you're in a democracy, you should have an opinion
Jacob Haimes
Or even if you aren't, we'd like you to have an opinion. And have that matter
Will Petillo
I'd say your opinion matters. I'm not gonna, you know, push you to be more opinionated but like… What you feel about it matters if you're affected by it.
Yeah, the other thing, and this kind of relates: there's a whole tension here, because on the one hand there is a real problem of expertise being dismissed, and on the other hand of it being held in too high regard, you know
And so the way I kind of synthesize those worldviews is imagining an image that you're looking at at a fairly low resolution. It's kind of blurry or pixelated. And it's like, you have two people that are shaking hands, and you can look at that and get, okay, I kind of have a view of what this is about. Now, if you increase the resolution on that image, most likely you're still going to see the same thing of two people shaking hands. But it's possible that on increasing the resolution, you'll suddenly notice that one of the people is holding a knife behind their back. And then that totally changes the meaning of what the image is about
So in that kind of case, the increased resolution matters towards interpreting it, but then it's on the person who has the higher resolution view to state what the detail is that changes when you zoom in and why it's important. If you just simply say, hey, this cloud you saw in the background, it's actually not fluffy, it's actually a little more wispy, and so therefore your view of the picture is totally invalid? That's just gatekeeping
Jacob Haimes
Right. And I want to also just hit on this for a minute, because I think this is not just a worry of people that is unfounded. I think that a lot of people in this space will treat others differently if they are perceived to not have an appropriate sort of knowledge base or background in this space
And I think that's like a really big problem because what it results in is you get people who care about these problems, who are quite experienced in various different domains, not AI specifically, but others, which could bring valuable diverse opinions and perspectives to this space
And they feel like they can't contribute unless they change to fit the mold that somehow a vast majority of the people in this space have fit into, largely because of selection pressure. And there are also, at this point, so many people that they can be so selective, and that's really harmful to those people. And so just reminding everyone that…
If you feel like you can meaningfully contribute to the conversation, then you can. Even if you get something wrong sometimes, saying, this is how I think about it, this is why I think about it, is valuable
Will Petillo
Yes, and I'd highlight one particular challenge that can come with that: different groups have different norms for how they communicate, which aren't even really necessarily relevant to the message, but just kind of make communication within groups easier than between groups. And that sense of friction happens often under the surface and can be off-putting if you're not part of it. Like, communicating is harder, it's less fun. And this is actually an issue that I think all social movements deal with. I've heard Roger Hallam from Extinction Rebellion refer to this as the paradox of political identity 🔔. When you're forming a group, it grows fastest if there's homogeneity within the group, because people can talk to each other easily, there are fewer barriers to communication, and it grows really fast, and that's great. But then you quickly run into an issue where everyone in that social group has already joined or has decided that they're not going to, and you can't really expand anymore. At that point you need to get to other social groups, but by then they're a clear minority rather than part of the starting thing, and they get pushed out, and it becomes a self-reinforcing issue
The way around that is, well, partly deliberately being more inclusive and being intentional about it, but also allowing for lots of subgroups to form that have their own norms and then communicate in a meta way between each other. So basically giving other people spaces where they can communicate more usefully.
Lastly on this point: where I see this come up the most, honestly, is on tech Twitter, in people making gatekeep-y kinds of comments. I'll just say to anyone who's more of a self-identified expert in this sort of thing: if you have important details that are being missed by a lot of people who are new, absolutely share those. But if you're just gonna say, you don't know what you're talking about, shut up, then you're part of the problem. Stop it
So anyway, the next thing I'd say is a major misconception that I run into a lot, and this is probably the most significant one, especially in the political climate right now: this idea that "we," in air quotes, must "win," air quotes again, the race to AGI. So the idea that the United States can't slow down, because then China will speed ahead and win the race to AGI and we'll be left behind. And that would be bad because the CCP would be running the world and they're evil and so on.
And like, man, there's a lot wrong with that
Jacob Haimes
There's a lot to unpack here
Will Petillo
Yeah, but the thing that I want to focus on, that I see as a misconception and that I really want to beat on this drum about, is that that's not how contracts work. No one is seriously advocating for a unilateral pause, that the United States should just slow down regardless of what the rest of the world does, or that one company should just slow down regardless of what happens in the rest of the business community
For meaningful things to happen in this space, like in the direction of a pause, what we need is an international treaty, something that affects everyone involved. And the thing about agreements that affect everyone is that they affect everyone. It's not just one person slowing down for the sake of the others
Now there are real challenges with that, in that if you're gonna have that kind of agreement, you need to have a process by which it can be meaningfully monitored and enforced. And that takes a lot of red teaming, because you have to think about all the ways that someone might defect or break the agreement or be secretive
But that's a process. It's a thing that can be figured out. And the first step towards figuring out those genuine hard problems is to try. Like I wouldn't expect the US and China to have a treaty right away, but there's nothing stopping them from just picking up the phone and saying what their positions are, trying to find the meaningful overlap, and starting the process of trying to figure out all the hard problems of getting an agreement that sticks
Jacob Haimes
I mean, there is something that's stopping that
Will Petillo
Well, I mean, nothing good
Jacob Haimes
No, no, it's nothing that's reasonable. But yeah, I guess that's another thing that I think is worth bringing up whenever this sort of argument comes up. So I'm going to choose a specific piece just because it really irritates me. There was one... actually, I don't even need to name it, because I don't want to signal boost it. But...
The person who wrote it got very popular by sharing it, and essentially it lays out this argument that if we don't put as much effort into this as possible and scale as fast as possible, then we're going to be left in the dust and China is going to overtake the US and the West and…
What it doesn't do is acknowledge the difficulty and the issues with having this sort of stance, regardless of whether or not it's valid, which I don't believe it is, given how this particular thing was written
It's also incredibly important to understand what it means and what might happen as a result of presenting and signal boosting this rhetoric, which vilifies Asian people
And so I just want to make sure that, since we did discuss it, I also address it
There isn't an easy solution here, because there are legitimate concerns about the increasing power of the Chinese Communist Party and the impacts that will have on other nations. But at the same time, that does not mean you should have negative reactions toward, hate, or vilify Asian people who live near you. And I think that's really important to say, because it does happen. It just does. There is a causal relationship between rhetoric like that going viral and hate crimes against Asian people. So don't do that. Rant done
Will Petillo
Absolutely, I agree with that. One other thing about that particular document that I think gets missed a lot is that the core argument was about there being a lack of security around the process even as it is. So even if you accept all of the assumptions about race dynamics, even then we're not actually following through on them
Any edges that companies have are effectively being given away through lax security, so that's nonsensical by almost any value system. But yes, I agree with what you're saying about not feeding into narratives about the CCP just being evil
For one thing, even on that level, it's a bit of a caricature, and the United States government isn't totally blameless. But also, just stating all of that reflexively, without including any nuance, absolutely can lead towards people being mean to each other, which is just bad. And anything to stop that is important
So yeah, if there's anything else on this, I'm happy to continue
Jacob Haimes
No, no more on that subject, but do you have other objections or misconceptions?
Will Petillo
Yeah, so there's one other one I think is important. This has come up a few times, and I see parallels between this and a lot of things that happen in environmentalism. I would essentially phrase it as: hey, I like using ChatGPT. Are you telling me that I'm a bad person when you say that AI should be paused?
And basically, the core idea that I want to get out here is that the way you think about systems and the way you make choices as an individual are very different from each other. The PauseAI activism is a systems-level, acting-as-a-citizen kind of thing
It is entirely consistent to be using a technology and also believe that it shouldn't exist because of the effects it has on society as a whole. And also, the decisions one makes as an individual don't really change all that much at the societal level. If you just stop using ChatGPT, that's not going to make AI go away or really slow it down meaningfully
And if you try to shame a bunch of other people into doing the same, that could actually backfire by turning it into a culture war kind of thing. What really matters is shaping the incentives and the landscape in which everything exists
So just hold these as being different things. If you have a way of using AI that you think is generally positive, cool, I don't care about that as a PauseAI activist. What I care about is what's happening in the companies, as guided by policy
There are a few others in here, but I think we can skip most of them. Sometimes people will say that pausing is just impossible; well, first we need to try. And another important one, which I've kind of brought up earlier, is that it's not just about the outcomes, it's about the process
Jacob Haimes
Yeah. Okay. So given all of this and the context, I also wanted to at least somewhat address your stance on other efforts that are ongoing, and what PauseAI thinks, or at least what you think, about these other sorts of governance and safety efforts. Yeah, actually, before we even jump into one, I'm curious if you're like, I know exactly what he's going to ask about
Will Petillo
I think so. So we have this large ask of trying to get an international treaty stopping frontier development. But are there any other more immediately tractable political asks that we're going for, that are likely to happen in the near term? You know, for a treaty, we'd have to build up a lot of influence. Or that idea would have to build up a lot of influence
So I think it is reasonable to consider stepping-stone kinds of things that are good enough in themselves and also push in the right direction. And the ones I personally find most interesting, in terms of being available now and currently being voted on, are bills essentially related to transparency
California's SB 1047 was an example of one that almost passed but got vetoed at the last moment by Governor Newsom. But the fight's not over, because New York has the RAISE Act
Jacob Haimes -- ASIDE
I actually do a deep dive on SB 1047 in my other podcast, Muckrakers. So if you're really interested in the bill and the drama surrounding it, I'd recommend listening to it
The TLDR is that SB 1047 was a bill being considered in the California legislature during 2024. The bill would theoretically require developers who made the largest models to do some safety documentation, and they could be sued if the model caused deaths or millions of dollars in damages. In addition, there were also some whistleblower protections included in the bill. In my opinion, the bill was extremely poorly worded. And I also don't think that it's that relevant even today
The models that are being created, as far as I know, have not surpassed the threshold for being covered models according to the bill. The RAISE Act, which stands for Responsible AI Safety and Education Act, is a different bill that is currently in committee in New York
The bill is quite similar to California's variant, but it removes the section about establishing a new board to oversee the implementation, and changes some numbers around regarding what AI systems will count. In addition, it cleans up some of the convoluted language, which was definitely needed. I do think that this bill is an improvement on the one that was previously out there, and I'm interested to see where the New York legislature takes it
One thing I want to note is that these bills are not the only ones that are out there. I strongly encourage you to look into the AI regulation that is brewing in your neck of the woods
-- END ASIDE
Will Petillo
I think this is basically the idea of: make sure that companies have a safety plan, have a third party review that plan, don't fire employees who flag risks, and disclose major security incidents. So very similar to SB 1047, just kind of building off of it
Jacob Haimes
So that's interesting. I did a deep read of SB 1047 on my other podcast, and I was not impressed. I think in principle it's great, because it essentially just says, yeah, we should have more transparency and processes around this stuff. Also there are whistleblower protections, etc
But when I looked at it, and maybe this is because I don't quite understand the legalese or how it's written, it seemed like it didn't actually have any power. It was just sort of saying, you shouldn't do this
There were no enforcement mechanisms or consequences for companies not following those rules
Will Petillo
Yeah, well, I guess the first kind of outside-view thing I would bring to question that analysis is: if it really had no teeth at all, then why did the companies push so hard against it?
Jacob Haimes
My opinion on that is that any amount of regulation is a step towards more regulation. So I'm not saying it's bad to have. I'm saying it goes nowhere near what would actually be meaningful. It is a good-faith, “we'll try our best” kind of statement which doesn't actually hold any water
Will Petillo
So I would agree that it's weak, and that the kinds of things I'm concerned about, this does not fix
My reasons for being in support of it are basically identical to the companies' reasons for being against it. It's a step in the direction of having more meaningful regulation later: being able to see what's going on a little more clearly, and having companies publish what their plans are, which gives us a way to say, hey, their plan is terrible
In a way that's clear rather than hidden behind the scenes, and maybe it turns out the plan is a good one. And being able to protect whistleblowers means you're more likely to get evidence and information that's helpful for getting other things done
Jacob Haimes
I mean, there are some, and this isn't to say that there's nothing good about it. I think the whistleblower protections were definitely something that should have happened, and any more that we can get towards that are really important. But I guess the other perspective is: did it have a negative impact on the rhetoric that this bill didn't get passed? Obviously we can't get into the Governor's brain about why he chose to veto it. But...
It seemed to me like it wasn't doing much, and that was essentially what was being vetoed: why would we do this, it just sort of adds a bunch of extra steps but doesn't actually do the things we care about. So yeah, just curious what your thoughts are on that
Will Petillo
So I would say that Governor Newsom's stated rationale, that we shouldn't regulate the big companies because the small companies can also be dangerous, doesn't make sense from any perspective
Jacob Haimes
No, no it does not
Will Petillo
And so I think the pretty obvious reason it got vetoed was political nepotism. Yeah, I'm just gonna say it: Governor Newsom, you're corrupt. I'm saying that on the air
Jacob Haimes
Yeah, no, well, I mean, I think the amount of money and last-minute lobbying that went into that makes it pretty clear
Jacob Haimes -- ASIDE
First of all, it's important to note that the final version of the bill did pass both the Assembly and Senate with three quarters of the non-abstention votes cast. After taking his sweet time with the bill, the Governor decided to veto it, sharing a three-page justification of his decision
In the letter, his main argument against the bill was that smaller models may be more dangerous than larger models depending on what they are used for or what kind of data they are trained on, and that this wasn't worth the, quote, potential expense of curtailing the very innovation that fuels advancement, end quote. I find this phrasing interesting because it parrots the exact argument that is often made by big tech: any regulation stifles innovation
This is, of course, the primary refrain of the bill's main opponents. Also worth noting is that he specifically stated that the bill would apply, quote, stringent standards to even the most basic functions, end quote. Although I could be wrong, as I'm not a legal expert, I believe this to be blatantly incorrect after having read the bill multiple times in detail. Again, I'm fairly certain that the bill would not have had an impact on any models to date. So, his justification is bogus
Now let's follow the money. Andreessen Horowitz alone spent $140,000 on lobbying in Quarters 2 and 3 of 2024, naming only SB 1047 and one other bill about consumer protections as the subject of their efforts. During the same time, Meta and Google each spent over half a million dollars on California lobbying, which covered many bills, including SB 1047
In addition, OpenAI and Y Combinator both decided to get their anti-democracy California lobbying feet wet, to the tune of around $100,000 for OpenAI and $75,000 for Y Combinator. It's not just lobbying money, either. It seems the decisions regarding who to employ were very intentional. Axiom Advisors, employed by both Meta Platforms and Andreessen Horowitz, is run by Jason Kinney, who has advised Newsom for nearly two decades and is well established from prior scandals to be within Newsom's inner circle. In addition, Newsom appointed Darius Anderson, a high-profile lobbyist and founder of the Platinum Advisors lobbying firm, the one hired by Y Combinator, to a state commission in January 2024, which should have everyone's spidey-sense tingling for the revolving door phenomenon and corporate capture
Given all of this, I'm pretty confident in our conclusion
-- END ASIDE
Will Petillo
So, okay, on one hand, if you're basically saying that you like the direction it's going, but it's not nearly enough to be meaningful, I would agree with that. If you're stating that it's unnecessary red tape and extra steps, that's a more complicated question that I don't think I'd be able to dive into a whole lot right now, but it would be a coherent argument against it
Jacob Haimes
That was along the lines of what I was going for
Will Petillo
Yeah, if it's just extra steps with no purpose at all, I think I would disagree with that. The idea of having some reporting requirements and transparency is useful as a stepping stone to future things
Jacob Haimes
Yeah, and I think I agree with that as well. I'm just, you know, trying to pull out the arguments
Will Petillo
So one other thing that I think is worth bringing up is what you mentioned about the effect it had on the rhetoric and the conversation as a whole: the fact that it got vetoed, and everything that came out around it
And I actually see this as being disappointing, but ultimately valuable in itself, because part of challenging a system is forcing the people in power with adversarial views to out themselves and make it clear what they're pushing for and why
So, actually making Andreessen Horowitz spend a lot of lobbying dollars to influence people. Actually making Governor Newsom veto it. This is an idea drawn more from disruptive protest, but I think it applies here, at least analogously: if you make demands that you expect the public to be generally supportive of, and the public was largely supportive of this bill, then you put the opposition in a bind. On the one hand, they can agree and give you a little incremental progress, which you can then leverage to get more incremental progress and keep growing. Or they can clamp down on you, and they have to do so in a way that's unfair and absurd, and then you can use the sympathy you get from the public to build more momentum in a more outrage-based kind of way
Jacob Haimes
Gotcha. Yeah. So there are a lot of bills in the pipeline across the United States, and I believe one that recently got passed was the TAKE IT DOWN Act đź””, which is not really related to AI. It tangentially affects non-consensual AI imagery, but it's more about online safety and the right of citizens to be able to tell companies and websites, why do you have that there, take it down, which is just a reasonable thing. But how do you see this impacting, or do you see this impacting, the broader AI governance space? Do you think this is a positive or a negative, or does it not really have any bearing?
Will Petillo
So yeah, things like the TAKE IT DOWN Act sound good to me. Having people be able to, if there are non-consensual images of them on the internet, get those taken down, yeah, that sounds great. Also, I'm interested in it from a PauseAI perspective, in the sense of people being involved, things that people care about being heard, and there being some kind of process in place
In terms of priority, in terms of which thing I would hand out flyers in support of versus which one I would just sign a petition for, it's a little bit lower. This is kind of a worldview thing: my main concern is about where things are going, whereas where things currently are is complicated and nuanced, with a lot of details to it
The sort of bills that I'm really interested in, and why something like SB 1047, despite being fairly toothless, was very interesting, are things that are proactive rather than reactive, I guess I would say. So there are things that are already issues, and those are worth dealing with, I don't want to downplay that at all. But those are things we just kind of need time for society to sort out
What I'm a little more interested in is the things that relate to where this is going, because that affects how many more of these issues we later have to be reactive towards. Just playing whack-a-mole with all of these issues as they come up doesn't feel sustainable. We have to do that to some extent
But if the rate at which new things appear is faster than the whole legislative process of building coalitions to deal with them, then lots of stuff just piles up and slips through
Jacob Haimes
Right. And I would say it has been faster than the legislative process can handle for quite some time
I guess the thing is, I think it's not only a good thing to have on the books, but I also see it as a small win. It's not gigantic, but to the point of what you were saying earlier, having anything on the books makes it more likely to have something else. I think we can see it as a small win showing that this is tractable, that we can get more buy-in. People are starting to care, because despite the bill not being AI-centric in its wording, it was very much motivated by deepfakes and AI
And so I think it's somewhat of a positive to be able to point to some of the bills that maybe aren't as aligned with what you would ideally want, but are, okay, a good step in the right direction. I think that's pretty interesting and important
Will Petillo
Yeah, I agree with all of that
Jacob Haimes
Okay, so the last thing I wanted to ask about, before just a couple of ending questions, was how big tech handles the safety aspect, because a lot of the big companies say that they're pro-safety. So I'd be interested in hearing your take on what that means, and how people can interpret that, maybe, is a good way to put it
Will Petillo
Yes, I would say the AI companies have generally been pretty effective at redefining safety in terms of what gives their public relations departments a headache. At first it was: can we control our AI, our large language models, to not say words on the list of things we don't want them to say
And that's kind of interesting, in the sense that if you can do that, it implies a greater level of understanding of the technology, which may transfer to other things, or it may mean you've just applied some filters
Yeah, it's not nothing, because anything that requires understanding things better or building up control techniques is progress in something. But to conflate that, we've made progress on this much smaller issue, with, therefore you don't have to worry about the bigger one, using equivocation to make them sound like the same thing, that's safety washing. The environmental movement has had to deal with greenwashing; it's no surprise that something similar is happening in AI
Jacob Haimes
Maybe to build on that, you said your mom was in environmental advocacy, and you brought that parallel in with greenwashing and safety washing. Are there other parallels that you see that are particularly interesting, or maybe an indication that the safety movement could learn from past happenings in the environmental movement?
Will Petillo
Yeah, there are quite a few. One of them is what happens with polarization. I think this is not widely known, but a long time ago, when environmentalism was first forming, it was politically neutral or even a right-leaning issue
Actually, very early on, it was something more Republicans were in favor of, and the Democratic angle on it was: these are just rich people who want to be able to hang out in nature, creating all these preserves, without taking into account the interests of working people
Then for a long time it was sort of on both sides. But then it started leaning a little more towards the Democrats being more receptive to it, and that led into a feedback loop: first, some environmental activists found higher returns on investment in talking to Democratic legislators, so they put more effort into that, and allowed the issue to get tied to other things, because that got people more interested
But meanwhile the oil lobby started pushing really hard towards conservatives, and then it became this polarized sort of thing. Once it's seen that way, it's very hard to get it out of that framing, and that puts a hard cap on how many people can support it and how much can get done. That may have had some benefits in the short term, but it was probably bad for the movement on the whole
I think Katja Grace of AI Impacts has an essay series that I'm drawing a lot of this from đź””
Another issue relates to that inside-outside game dynamic. There are groups in the environmentalist movement who are very focused on making little incremental progress with insiders. And you can really get the runaround if there's not a lot of strength behind it: people promise things and it doesn't really go anywhere. You try to build a bunch of connections and it's just little bits.
Whereas what's needed is much more fundamental and sweeping than you can really get with tiny little wins over time. So yeah, those are a couple of the parallels. Basically, things related to activism generally are where a lot of the parallels are
Also, one other thing I meant to mention earlier in terms of AI, and this relates to the Big Tent philosophy and the combination of concerns about x-risk and job loss and deepfakes and all these sorts of things, is…
One thing I like to focus on: there's the process of having everyone be involved, but there's also the idea that these are all tied together. There are very similar dynamics driving all of these problems, in the obvious sense that AI is bringing them up, but also in that the underlying issue is the gradual loss of power of individuals to shape how their lives go and how decisions are made
And whether you frame that as regular people losing power to large corporations, or in the x-risk sense of humanity losing power to machines, it's a similar idea and dynamic, in what the fear is and what we're trying to push against
Jacob Haimes
I think that's a great sort of ending point. Before you go, I do have two questions. The first is: what is your least favorite part of the job, or the misconception or type of interaction you have with people that just really annoys you?
What is the part that grinds your gears, so to speak?
Will Petillo
I picked onboarding because it had the fewest things that bothered me
I very much enjoy meeting people where they're at. Like, I had one call where the first line the person said was, are you a cult? Another one, someone said they were building an AGI with some of their friends, this was at like 1 a.m., and they'd never heard of the alignment problem.
So I kind of enjoy some of the weirder conversations, and the mix of people who disagree versus people who kind of agree and need to be activated, that sort of thing. But I'd say the part that's the least fun about onboarding is the initial messages, and just how few people respond
So yeah, you'll have someone who joins the server, they click the wants-to-help tick box, and you message them. It's like: hi, I'm from PauseAI, I'm here to answer any questions about how to get involved, what brought you here? Something friendly and easy to respond to, to that effect. And five percent of people reply. And I get it, you know, people join because they just want to check it out and they're…
You know, it's volunteering. You can't shame someone for not giving away their time for free. But on the other side of it, those little mini rejections add up. And the constant contact can get a little repetitive
Jacob Haimes
Yeah. And then the flip side of that: what is your favorite part about the work, like the type of interaction you have, or something that you get to do, an opportunity that you wouldn't otherwise have? What do you enjoy the most about this work?
Will Petillo
My favorite part of onboarding people to PauseAI is when I get someone who is really freaked out about where AI is going, whether it's existential risk or even just dead internet or anything else. They're visibly distraught, angry, sad, despairing. And then I can talk to them for a while about the amount of agency they actually have, and the levers of influence that regular people can have on the world, and see their mood visibly lift towards something more hopeful. That is incredibly satisfying. Yeah, I'd happily suffer through a few hundred unresponded messages to get one of those
Jacob Haimes
Awesome. Well, Will, thank you so much for joining me. I think this has been a really good conversation and I'm happy to be able to share more about, like you mentioned, different groups being able to share what they're thinking about and how they're thinking about it. I think it's really important and I appreciate you joining me
Will Petillo
I guess there's one more thing I'll say which I didn't mention before. If you're interested in AI safety, or AI generally in some form, and you see all the different groups relating to it out there, you might be wondering: is PauseAI for me? Where does this fit into the larger ecosystem of AI-related stuff?
My short answer to that is that it's a low-barrier-to-entry, for-everyone kind of movement. If you wanted to be involved in MIRI or CAIS or something like that, not just anyone could get into those. It takes a lot of specialization; they have a lot of assumptions about what they're looking for. PauseAI is really for anyone, whether you have a lot of background or none, whether you're interested in governance or not
Just come in, find your community. Maybe it turns out that there's something else out there that's a little more specialized and better for your interests, but it's a good place to start out and see other people who are like-minded. We even have a whole channel for people who just disagree. Even if someone just wants to be involved in the conversation and thinks all of our ideas are nonsense, there's actually a place for you there as well, because sometimes little pearls of wisdom come out of those disagreements when people engage in good faith. So yeah, that's something I wanted to leave people with as to what we're about
Jacob Haimes -- OUTRO
Gosh, what a cool guy
If you thought this conversation with Will was interesting and you'd like to see more of his content, I've included links to his itch.io, a platform for sharing games and other software demos, as well as his YouTube channel and a couple of his writings. If you enjoyed the episode, one of the best ways to help get it to more people is to leave a review for the podcast on whatever listening platform you use. Lastly, you may have noticed that Into AI Safety is now part of Kairos.fm, a podcast and media network that I created to host this podcast, my other podcast, Muckrakers, co-hosted by Dr. Igor Kravchuk, and more. You can check out all of our content on our website, kairos.fm. That's K-A-I-R-O-S dot F-M. And with that, I guess I'll catch you next time
-- END OUTRO