What can Artificial Intelligence Cautionists learn from the Climate Movement?
The two movements aren't as different as they might seem
Recently a tweet from Nigella Lawson popped up on my Twitter feed with the caption “Sorry to do this to you on a Monday morning, but the end of the world is nigh”, linking to a Financial Times article on the inherent risks of uncontrollable Artificial Intelligence. Nigella, a British food writer and television chef, isn’t known for her expertise or opinions on technology matters, which can only mean that the relatively fringe “AI Cautionist” view is starting to gain traction with the wider public, featuring on the BBC, Vox and the New York Times. Eliezer Yudkowsky, an AI researcher and arguably the leading voice pushing back against AI advancement, recently published an article in Time magazine containing a controversial proposal: “Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined)…. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.”
Now, for those in the climate movement, this sequence of events is nothing that hasn’t been seen before. It usually starts with researchers within a field highlighting the dangers of a technology without safeguards; this slowly catches on with a determined group of followers who create movements and organisations to raise the alarm to a broader base, all in the hope of ultimately reaching the people with power: world governments and intergovernmental organisations such as the UN. The problem is that this sequence of events is not linear, and usually, when the movement goes up directly against powerful interest groups who profit from the bounty that the technology brings, the call to accountability gets stuck in a state of purgatory, with researchers and campaigners constantly sounding the alarm only to receive a shrug of the shoulders. This then escalates to campaigners taking more radical action such as demonstrations and protests. When things still don’t gain enough traction, we then see what is known in the political activist world as “direct action”: actively targeting the centres, groups or property that are causing the harm or destruction, usually taking the form of strikes, sit-ins, hacktivism or blockades (see Germany’s Ende Gelände movement, which targets the development of coal mines by occupying the sites), not too dissimilar from what Yudkowsky suggested.
Now, isn’t this getting a bit ahead of itself? How can we compare a hypothetical scenario involving an as-yet uncreated intelligent system to something like the climate crisis, which is, at this very moment, causing record-breaking temperature events along with all the weather-related disasters that come with them? According to a 2021 survey of 44 researchers working on reducing existential risks from Artificial Intelligence, the median estimated risk was 32.5%, with answers ranging from 2% all the way to 98%. This is a large variation, which goes to show just how unpredictable this field is, but it does broadly align with other estimates, which range from 10% upwards. To put this in perspective: if there were a 10% chance of a meteor careering towards Earth and destroying all life, you could be pretty sure that world governments would knock heads together to generate a robust plan to address it. Unfortunately, it would appear that there are only roughly 400 people around the world working directly on reducing the chances of an AI-related existential catastrophe, which sounds to me like something we should be concerned about.
So, what does that mean for AI cautionists? Well, for a start, signing petitions and open letters to address the issue, such as the recent call for a pause in development signed by Elon Musk among others, might not have enough of an impact in generating broader awareness and action. If an existential threat does appear more and more likely, then further tactics and methods such as civil disobedience and direct action will need to be considered. What does that mean in practice, though? Well, we can’t be sure, because, as we’ve painstakingly learnt in the climate movement, a cookie-cutter, copy-and-paste method from one movement to another just doesn’t work; cultural, organisational and historical contexts all need to be considered before launching any effective movement. What we do know is that some of the biggest responses from governments to climate change have come within the last few years, in large part due to the monumental, near-tireless civil disobedience actions of organisations such as Extinction Rebellion in the UK, Fridays for Future in Europe and the Sunrise Movement in the US. Shifting the Overton window (the range of policies and ideas that the mainstream population finds acceptable) by disrupting business as usual has been shown to work. As a recent Twitter thread put it, in regard to the Just Stop Oil action at the Snooker World Championship: “if you have people willing to do very outlandish things in public space…well maybe there is a crisis”.
The people within the rationalist and effective altruist communities are by nature utilitarian and take notice when certain tactics work and when they don’t. The problem is the lack of urgent action so far on what Effective Altruist organisations deem “one of the world’s most pressing problems”, beyond highly technical LessWrong posts debating its finer points. The tension lies in the difficulty of measuring whether something as complex as a social movement does more harm than good to a specific cause. There is also a general lack of research and interest from the effective altruist community into social movement tactics and the effects they have on policy change and wider public interest. Notably, there is some good recent work by the Social Change Lab, which finds evidence that non-violent direct action does have important effects on public opinion, behaviour and policy. Even setting the wider research aside, we can see successful demonstrations within the tech industry itself, notably when employees from Amazon, Google, Microsoft, Facebook and Twitter organised a climate walk-out across 25 cities, resulting in Amazon committing to zero carbon by 2040 and Google announcing a $2 billion investment in renewable energy infrastructure.
Existentially threatening AI might never come to be, but given the stakes involved, as well as the rapid increase in capability, this might be the best moment for AI cautionists to go from behind their computers to the streets, demanding greater accountability and more research into safer AI. Because until then, the urgency just isn’t showing.