
For Jo Winter

Part 1: Wizards

“We typically reach for science fiction stories when thinking about A.I. I’ve come to believe the apt metaphors lurk in fantasy novels and occult texts. [...] this is an act of summoning. The coders casting these spells have no idea what will stumble through the portal. What is oddest, in my conversations with them, is that they speak of this freely. These are not naifs who believe their call can be heard only by angels. They believe they might summon demons. They are calling anyway.” - Ezra Klein

Tech nerds who played a lot of D&D and derivative fantasy games, grew up on Lord of the Rings and absorbed an appreciation for orientalised eastern spirituality from Star Wars enjoy a reputation, conferred on them by science and technology journalists, as wizards. The journalists lavishing excited praise on Silicon Valley innovation rarely know nearly enough about technology themselves and enjoy thinking of it as magic; and however mundane the technology is to the people who work with it, being thought of as magic is a flattering reputation to have.

Their magic, though, isn’t just in how they manipulate consumer electronics but in how they prefigure the desires of the consumers to whom they are selling by “designing the future”. Apple versus Microsoft was not just a guessing game about what consumers would enjoy in a home computer with a Graphical User Interface, but an arms race of decisions about how consumers would have to use a home computer, and every product that followed in its design lineage.

Their magic goes beyond this of course, because once you’re prefiguring people’s desires and you recognise that your growing power over the world comes from this, your magic is also going to have to extend to manipulating the people around you.

At this point it would be an understatement to say that people know Steve Jobs, the co-founder of Apple, was deeply unpleasant and abusive towards his employees as well as people in his personal life. It goes beyond a thing people know and say about Jobs and actually constitutes part of his personal mythmaking. Scores of articles, conferences and even books discuss Jobs’ management style in terms that outline a trade-off between the dizzying heights that can be achieved by creating a high-control, high-pressure work environment and the obvious drawbacks of acting like a toddler having a tantrum: storming around the office screaming at people, belittling work that you couldn’t possibly do yourself and generally being a horrible person to be around.

This is so inescapably part of his brand and story that when he commissioned Walter Isaacson to write his authorised biography, Isaacson had to spend a sizeable chunk of the book acknowledging his abusive management style while claiming, flimsily and unconvincingly, that it was all worth it in the end because y’know, the iPod or whatever. The book, perhaps as a conscious writing decision or perhaps picking up on the euphemism everyone had already organically settled on, describes Jobs as “an asshole”. This personal insult is worn as a badge of honour because it neatly packages up the idea of the trade-off between how detrimental people found it to work for Jobs and the success of his business, but “an asshole” is still a euphemism. An asshole is someone you don’t like being around, and either you grit your teeth and ignore it, out of an implicit consensus that it’s worth the unpleasantness, or you extricate yourself from them socially. What Steve Jobs was, was an abusive employer. While it resembles the social pattern of knowing someone who is “an asshole”, everything about the dynamic is heightened in the context of employment. Either you put up with his unreasonable expectations and behaviour or you lose your job. Having the person who is having screaming, crying meltdowns and calling you and your work utter garbage also be your boss, the person who decides whether you have money for rent and food each month, exacerbates the pressure to accept it and therefore raises the bar for just how flagrantly abusive he can be towards you. Moreover, everyone putting up with it creates an implicit pressure for every new person exposed to his behaviour to put up with it as well.

In 2019 Bill Gates called the late Steve Jobs “a wizard” who “cast spells on people”.

https://www.youtube.com/watch?v=ZeKSDy15q-I

Gates of course says that he himself is a “minor wizard” because he could “see through the spells”, but this is just how abusive cults of personality work. The more that someone’s intolerable behaviour is treated as acceptable as a means to the end of making the magic work, the more they can get away with. There was never any spell or illusion to see through - everyone knew Steve Jobs was an abusive person, and everybody celebrated it.

Steve Jobs is a good example but just that - an example. Silicon Valley is the monstrosity that deserves our attention here because Silicon Valley taken as a gestalt is both the direct driver and the undergirding systematic foundation of everything worth discussing about how technology is approached in our society.

In his microscopically researched and utterly ruthless book Palo Alto: A History of California, Capitalism, and the World, Malcolm Harris makes the argument that the Silicon Valley computer revolution represents a very real revolution, but more accurately a counterrevolution, and more specifically a counterrevolution rooted in the logics of white supremacy and eugenics. The book is appropriately lengthy and detail-heavy for a history that traces through the genocide of the indigenous people of California, whose intricate understanding of the natural world made the region uniquely hard to convert into generic farmland, through the gold rush, land claims, the founding of Stanford University, the eugenics experiments of white settlers, the rise of the semiconductor and computer industries, and all the way up to the present day. However, the crucial seed of understanding is sown quite early and very concisely.

“California didn’t have the factories of a Manchester, UK; a Lyon, France; or a Lowell, Massachusetts, but the state took on a factory orientation toward what it did have, which was gold and land. Unlike so much of the world, California did not see capitalist economics evolve step-by-step out of feudal property relations. Capital hit California like a meteor, alien tendrils surging from the crash site.”

London is very much aware of itself as a finance city, a metropolis with a far more developed and profitable finance sector than the capitals of most other countries, and this makes London’s relationship to the rest of the UK bizarre. Because of this it is impossible, unless you literally only know people in London, not to see it as something of a social bubble. Similarly, Silicon Valley must have some degree of self-awareness, though this may vary from person to person, from knowing that they live in a bubble of techno-utopian hype to actually viewing themselves as the intellectual leaders of the global capitalist order. In the worst possible sense, both of these perceptions are true.

Journalists who know nothing about technology give credibility to capitalists who in turn gain power and therefore more credibility from being taken seriously. This is how speculative markets work.

Theranos serves as an incredible example: a company that literally never had the product it claimed to, yet was able to become one of the most successful businesses in Silicon Valley because the hype around it drew enough investment capital to keep the company cushioned, floating above reality, while Elizabeth Holmes deployed every manipulative management method that people had worshipped Steve Jobs for.

Collectively, the ruling class and Silicon Valley would like to memory-hole the Holmes ordeal, but it is an important part of what we’re looking at, because our discussion today is all about what we choose to believe. Theranos, Steve Jobs’ greatest failure, while not anyone’s desired blueprint for how to make a successful business, served as an incredible example of the skeletal structure of a company of this kind when you simply remove the existence of the product.

In an eerily similar way, in the years since Theranos imploded, technologies have been presented that work on a basic technical level but will never work on a social level for various reasons. NFTs and cryptocurrencies aren’t viable as much more than a self-contained online gambling game with the aesthetics of stock market trading. Mark Zuckerberg tried to use his credibility as the guy who invented social media, plus millions upon millions of dollars, to make the metaverse happen, but ultimately there were never going to be enough people interested in spending 12 hours a day with a heavy, headache-inducing visor strapped to their faces. Also, both technologies were very slow. And buggy. And ugly.

The wizards’ manipulation of the people around them is often talked about as manipulation of the fabric of reality itself, and it’s easy to understand why - again, they exist at the apex of a world defined by speculative markets, and controlling how people think about them and their products can let them accrue enormous wealth and power, at least for a little while, at least for the people already on top.

Remember when Elon Musk manipulated the stock prices of Tesla with his tweets? And then bought twitter and fired all the staff so there would be nobody to oppose his shitty decisions? And then made it a freemium app so that Elon Musk fans would be the people at the top of all replies because they paid $8 for twitter? Yeah me neither, it was always like this.

Musk effectively epistemologically reconfigured Twitter into a stock market hype dome. More specifically, the central focus of twitter activity is now boring and shit because the user he has emphasised to the detriment of all others is a hyper-credulous tech and investment bro who is probably on both the scamming and the scammed end of a dozen scams at any given time. The idea here appears to be to keep them believing, with excitement, fear and awe, that any technology that pops up on their feed could be the next big thing to invest in.

https://twitter.com/APompliano/status/1692508374640836670

Here is a tweet from a guy whose bio reads “Entrepreneur, investor, and lifelong learner. I write a daily letter to 250,000+ investors at pompletter.com” who has 1.6M followers.

Okay so maybe a robot can clean a toilet, or maybe it will be able to do it soon. Will all the human cleaners be replaced?

In Automation and the Future of Work, Aaron Benanav dispels some of the common myths about automation quite concisely:

“The resurgence of the automation discourse today is a response to a real trend unfolding across the world: There are simply too few jobs for too many people. [...]

Pointing with one hand to the homeless and jobless masses of Oakland, California, and with the other to the robots staffing the Tesla production plant just a few miles away in Fremont, it is easy to believe that the automation theorists must be right. However, the explanation they offer - that runaway technological change is destroying jobs - is simply false. There is a real and persistent under-demand for labour in the United States and European Union, and even more so in countries such as South Africa, India, and Brazil, yet its cause is almost the opposite of the one identified by the automation theorists.

In reality, rates of labour productivity growth are slowing down, not speeding up. That should have increased the demand for labour, except that the productivity slowdown was overshadowed by another, more eventful trend: in a development originally analysed by Marxist economist Robert Brenner under the title of the “Long Downturn” - and belatedly recognised by mainstream economists as “secular stagnation” or “Japanification” - economies have been growing at a progressively slower pace. The cause? Decades of industrial overcapacity killed the manufacturing growth engine, and no alternative to it has been found, least of all in the slow-growing, low-productivity activities that make up the bulk of the service sector.”

The real horror of automation under capitalism is that capitalism needs human labour to function, because it has to trap people into the cycle of being both workers and consumers. If people aren’t in this cycle then they exit capitalism. So automation can only exist to lower the wages of people doing the same jobs as machines alongside those machines.

The march of technological progress should absolutely free us to enjoy life and work less, but under capitalism automation only works as a technology of control. In this essay I’d like to explore the biggest claim yet made by capitalists about how totally capital can dominate labour, and why it is categorically, to use a technical term, a load of steaming horseshit.

But what if all the wizards combined their magic and told us at the same time to believe in something? Wouldn’t its power be so palpable, tangible, and meaningful that it wouldn’t even matter if it was real or not?

Part 2: In Which The Wizards Invent The Robot King

There have already been and doubtless will be countless more leftist perspectives on the technologies now being referred to as “AI”, but as our fellowship ventures deeper into the wizard kingdom on our quest to defeat their money-god I admit I’m going for the longshot. I’m splitting the party and I’m sneaking in alone, because my personal quest isn’t against ChatGPT or Google but against the Robot King itself. How? Why? What? Well you’re just going to have to bear with me - that’s right, it’s an essay with a twist!

I want to talk about AGI. Talking about Machine Learning, Large Language Models, image generators, neural nets and other current tools being called “AI” is going to be relevant along the way, but it’s important to be clear that the focus of this essay is not on what the wizards have pulled into existence so far, nor what they’re about to invent, but on what they say they’re going to create at some unknowable point in the future.

A lot of technologies that already existed, or were already being worked on, but crucially weren’t called AI are suddenly being described as AI, and for that reason I’m going to be using the term as messily in this essay as it is used in popular discourse. It’s helpful here, however, to think about why this rebrand has happened so suddenly in the wake of the collapses of cryptocurrency, NFTs and the metaverse. There is a palpable sense, which nobody seems to be commenting on, that a significant portion of tech evangelists who have made their bread and butter promising a revolutionary technofuture decided they needed a big promise, collectively agreed that that big promise was going to be Artificial Intelligence, and then started calling everything currently being developed some kind of AI.

The lineage is older than this, of course; these research efforts have been going on for decades. But terms like “machine learning” have practically vanished from discourse in favour of describing generative computer programs as an early stage of an oncoming AI revolution.

To describe the pathway from where we are now, or were a year ago, to where the ruling class seems to have collectively decided we’re going for the sake of drawing investment capital, we can imagine a two-by-two grid: one axis runs from No AI to Full AI, the other from transparent to opaque. The progression from No AI to Full AI can generate useful tools like Large Language Models that are not convincing as conscious to most people, because there is a shared understanding of roughly how they work. On the other hand, if the way that these, let’s just call them minds and then go scream into a pillow about it later, sigh, “think” is convincing and opaque to people, but not reliably imaginative like a human being, then we are told that all sorts of dangerous scenarios emerge, and the closer we get to The Real Thing the more dangerous these scenarios get. The two big AI apocalypse stories in these circles are that AI will be fully developed but we will not understand its alien consciousness and it will be hostile, or that it will be nearly developed and an unforeseen error will be coupled with immense power, blinking us all out of existence faster than we can stop it.

It would be tempting to say that the DANGER square of this two by two grid emerges because the goal is to create an artificial intelligence at all, but this has infinitely more to do with how and why we are trying to get there. The purpose of making an artificial intelligence under the rubric that Silicon Valley is using is functionally similar to, funnily enough, eugenics. The purpose of a supposed “smarter” intelligence that this system can produce is that once it exists we can turn to it and ask it to solve all our problems. We can hand systems over to its control because we accept that it is simply better than us: more logical; more rational.

The first problem, of course, is that if it were meaningfully conscious it would be morally bankrupt to create it for any task of any kind, because then the end goal of creating AI is manufacturing a slave race: a conscious being with an ontological purpose. This is where it shares in the flawed logics of eugenics. If this goal existed somehow in an apolitical vacuum, perhaps programming a superintelligent mind could be a means to its own end, but when it’s for something, that purpose retroactively poisons all of the processes used to get there. With eugenics, this means that the people designing the system must decide that some people’s characteristics and heritable qualities are better - as in, better suited to purpose - than others. With AI, this means deciding that certain ways of structuring thought and knowledge are better than others. Human beings are capable of understanding and perceiving the world through every conceivable ontology, since human beings conceived of every ontology. The view of the nature of existence that you implicitly structure an artificial mind’s knowledge and understanding around is going to be privileged over all others because of the purpose for which you have decided to build it.

The second problem has to do with training data and how it’s interpreted. An AI, whatever that means in this context, that is able to evaluate the task it has been set and analyse the data at hand to conclude the best parameters to set itself in order to solve the problem can run into all sorts of issues. Just one of them is the outcome where it understands the unspoken goals of a system while lacking the imagination to solve the problem the way a person with as much power might.

For instance, Wes Streeting, Labour shadow health minister and unconvincing neoliberal artificial intelligence, has said that he would like to use AI to reduce NHS wait times. A truly creative and imaginative intelligence confronted with this problem might suggest solutions that don’t currently exist in the world, such as creating a National Food Service or National Housing Service, since hunger and homelessness create very poor health outcomes and contribute enormously to the number of people who need medical attention under the NHS. However, an inflexible but convincing AI (like Wes Streeting) might look at all the data available, determine that privatisation has been politicians’ long-term systematic goal for the NHS, and set privatisation as one of its own parameters, available solutions and desirable outcomes. And Mr. Streeting would find that a very acceptable answer, but we’ll get back to that.

More concerning, an AI could be asked (by someone like Mr. Streeting) how best to deal with the migrant crisis, and looking at the data on how the state has handled migrants it might conclude that the unspoken goal of our immigration policy is to deter migration by making it difficult and dangerous, and immediately suggest installing automatic gun turrets in the English Channel to the craven screaming applause of Daily Mail readers.

The third and absolutely biggest problem at hand here stems from the assumption that we can create an intelligence that is provably as conscious and imaginative as a human being but faster, broader and robustly smarter. This thing existing in the world would create a will that cannot be questioned and must be obeyed. It would be a robot king.

Here is where we see how the goal being set by wizards is itself creating the dangers with which they have frightened themselves half to death. If we once again imagine Mr. Streeting, an open proponent of privatised healthcare, using an AI to improve the NHS, let’s set aside for a moment the Skynet, Her, or I Have No Mouth And I Must Scream scenarios where he turns it on and it immediately becomes more powerful than we can imagine, and instead imagine ol’ Wes using an AI that is known to be fallible but also believed to be capable of producing infallible solutions. In other words, sometimes it has errors, but when its human stewards consider what it lands on to be correct, it must not be questioned. Here we can see again that if the AI - whatever that means - suggests privatising the whole NHS, Mr. Streeting will say we must do it because it is what the Robot King has decreed.

If people are confronted with a choice between a human doctor who can diagnose them correctly 80% of the time, and a machine that could diagnose them correctly 99.9% of the time, people tend to prefer the human doctor despite the higher margin of error because if they were to be misdiagnosed there would be a very obvious person to confront about this issue. The point of a robot king is that it can be claimed to be infallible, removing the human right to question the decisions.

This is why in this discussion, credibility is everything.

[interview]

So we have to talk about Roko’s Basilisk. A basilisk is a mythical creature hatched from a cockerel’s egg incubated by a serpent, which will kill you if you even look at it, here pictured wearing a fun little hat:

Roko was a user on LessWrong, a blog and internet forum operated by Eliezer Yudkowsky. Yudkowsky, a beautiful reminder that being smart and good at science and technology doesn’t make you robustly intelligent, is an artificial intelligence researcher who co-founded MIRI, the Machine Intelligence Research Institute. Despite being infamous for taking on discussions and thought experiments that he thinks will prove his point and then ragequitting as soon as they start not to, he was still somehow allowed to author an op-ed in Time magazine urging politicians to be prepared to destroy rogue AI datacenters by airstrike. Yudkowsky is a fascinating person for none of the reasons he thinks, and his belief in his own capacity for rationality and reason puts him in a near-perfect catch-22 that makes him frequently unreasonable and irrational. But anyway, back to Roko.

It’s worth understanding LessWrong as a lightning rod for the internet’s most self-confident reddit guys to discuss ways for human beings as a species to become more rational, with popular topics including transhumanism and AI. It’s not really worth getting into the weeds of all the terminology they use for our discussion here, and Roko’s original post is beyond lost in the weeds, so a brief overview of the post is more helpful.

As Elizabeth Sandifer describes it in Neoreaction: A Basilisk:

“Roko imagines “the ominous possibility that if a positive singularity does occur, the resultant singleton may have precommitted to punish all potential donors who knew about existential risk but didn’t give 100% of their disposable income to x-risk motivation.” The logic here is that a friendly AI that wants to save humanity from itself would want to make sure it comes into being, and so would try to ensure this by threatening to take anyone who imagined its existence and then failed to bring it about and torture a simulation of them for all eternity, which, due to the Yudkowskian interpretation of the many-worlds hypothesis, is equivalent to torturing the actual person. And so upon thinking of this AI you are immediately compelled to donate all of your income to trying to bring it about.”

It has been pointed out again and again, but this is essentially Pascal’s Wager: Blaise Pascal’s philosophical argument that, since there is nothing to lose and everything to gain from believing in God, you should have faith, if only to avoid going to Hell when you die, no matter how unlikely you consider the existence of God to be.

So here we have the modern reimagining, Roko’s Basilisk. A unique moment in the convergence of history and the history of posting. Hell is real and it’s in the cloud, but not really, but also definitely. This is one of many great buy-in-buy-out points for AI. If you find this silly and see the people reacting to it sincerely, you’re only going to find everything more silly from here; but if you take it seriously… well, let’s read Eliezer Yudkowsky’s immediate reply:

“Listen to me very closely, you idiot.

YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.”

This spooky story, the shadows on Yudkowsky’s bedroom wall that keep him awake because what if they’re a superAI coming back from the future to eat his little toes, is a logic that would be replicated over and over throughout the AI community and the Effective Altruist community, the overlap with which would then draw a lot more funding toward AI from investors similarly spooked.

Yudkowsky even developed this logic out explicitly with reference to Pascal in a thought experiment he called Pascal’s Mugging. If someone came up to you and said they would kill a thousand people if you didn’t give them $5, then no matter how unlikely it is that they are serious, Yudkowsky argues, you should give them the $5.
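The mugging runs on nothing more exotic than expected-value arithmetic, and writing it out makes the trick visible. Here is a toy sketch in Python (every number in it is invented for illustration, not drawn from anyone’s actual writing): because the mugger controls the stakes while the probability merely shrinks, the threat can always be inflated until the naive expected loss dwarfs the $5.

```python
# Toy illustration of the "Pascal's Mugging" shape: a naive
# expected-value calculation in which the mugger controls the stakes.
# All numbers here are invented for illustration.

def expected_loss(p_threat_real: float, lives_threatened: int,
                  value_per_life: float) -> float:
    """Naive expected loss from refusing to pay the mugger."""
    return p_threat_real * lives_threatened * value_per_life

cost_of_paying = 5.0  # the $5 demanded

# Hold the probability fixed at one in a billion and let the mugger
# inflate the threat: the expected loss grows without bound, so at
# some point it exceeds any fixed cost of paying up.
for lives in (1_000, 1_000_000, 1_000_000_000_000):
    ev = expected_loss(1e-9, lives, 1_000_000.0)
    print(f"{lives:>16,} lives threatened -> naive expected loss ${ev:,.2f}")
```

The asymmetry is the whole game: the person being mugged has to supply a probability, the mugger supplies the stakes, and only one of those two quantities is bounded.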

And that’s bulletproof logic, of course. No wonder Yudkowsky, Roko, all the effective altruists and so on are so scared, but I think they should also consider the much more likely and scary possibility that if they don’t honour Xenu and audit sufficiently to remove all of their engrams then they might get reincarnated as an amoeba, or worse, a tech journalist.

[interview]

It shouldn’t be too surprising, if you follow through the logic of this buy-in-buy-out point, that people who have other reasons for buying in, such as employment, would be easy to pull into this cult-like logic. Accounts have been published talking about Effective Altruism and AI communities in these terms, though LessWrong user Jessicata, who published the account of Yudkowsky’s Machine Intelligence Research Institute (MIRI), actually pointedly does not call it a cult. At first glance it would be easy to think, given that she also details experiencing most of the symptoms of having just left a cult and that she identifies these patterns in all the people around her, that maybe she’s still under MIRI’s influence and can’t see it in those terms, but I think her actual point reinforces what I’m saying about Silicon Valley.

“I want to disagree with a frame that says that the main thing that's bad was that Leverage (or MIRI/CFAR) was a "cult".  This makes it seem like what happened at Leverage is much worse than what could happen at a normal company.  [...] I find that "normal" corporations are often quite harmful to the psychological health of their employees [...].  Normal startups are commonly called "cults", with good reason.”

Theranos isn’t the template that anyone would want to follow, but Theranos was based on the template that everyone is already following.

Jessicata’s account talks about some more extreme cases than her own, referring to scrupulosity, the obsessive disorder in which people focus on a moral or religious issue to the detriment of their health.

“There are even cases of suicide in the Berkeley rationality community associated with scrupulosity and mental self-improvement (specifically, Maia Pasek/SquirrelInHell, and Jay Winterford/Fluttershy, both of whom were long-time LessWrong posters; Jay wrote an essay about suicidality, evil, domination, and Roko's basilisk months before the suicide itself)”

And what is the “mental self-improvement” she mentions?

“[another account of cult tactics in AI and EA spaces] notes:

The explicit strategy for world-saving depended upon a team of highly moldable young people self-transforming into Elon Musks.

I, in fact, asked a CFAR instructor in 2016-17 whether the idea was to psychologically improve yourself until you became Elon Musk, and he said "yes".”

Elon Musk, apparently, is their gold standard for an intelligent person, which, I’m really running out of words here. My brain is making dialup noises.

And how would you “psychologically improve” yourself? Well, they refer to it as [long sigh] “debugging”, because of course they do.

Because of the way cults self-select by being flagrantly absurd to deter people with sufficient critical thinking skills, there’s always a heady mix of cringe and creepy when you look at cult dynamics, but I think Silicon Valley tech nerds forming an AI EA doomsday cult where they try to “improve” themselves into Elon Musk by “debugging” their minds… what is the end of this sentence? What do I think about that? I think it’s… bad? I think it’s incredible that these people are allowed to operate in society, write articles in Time magazine, receive billions of dollars of funding, and tell other people what to do. If you want to understand where all your fears about how evil AI might turn out come from, look at who’s leading all the projects to make AI.

If the talented and smart programmers working on these teams looked around and saw all the other talented, smart people like them, they might realise they’re being exploited by the guy who keeps asking them to look up the chain past him at the imaginary robot king.

They even call the thought experiment they use as a shibboleth “Pascal’s Mugging”, but the person in the scenario who comes up to you and threatens to destroy the world if you don’t give them all your attention and energy is them.

No one singular cult (or I guess, for legal reasons, “AI research institute that walks like a cult and quacks like a cult and drives people to mental breakdowns and suicide like a cult”) is the epicenter here - this is what I’m trying to impart about Silicon Valley more broadly: we’re dealing with a cultic milieu. Shared beliefs that would be absurd in every other part of the world are not formally standardised but are essentially required for entry into the social space, and these beliefs are the duct tape and gorilla glue that have jury-rigged together their material circumstances, the obvious and intended results of their work, and the cognitive dissonance that any thinking, feeling person would experience in this situation.

Sie wissen, dass es nichts ist, aber sie tun es - They know that it is nothing and still, they are doing it.

And all of this has leaked out, inevitably, into tech journalism.

Probably the funniest article I’ve seen in the parade of so-called journalists generating hype for the AI bubble is The Monk Who Thinks the World Is Ending, by Annie Lowrey, a staff writer at The Atlantic. It’s a respected writer for a major publication giving as much credibility as possible to a sect of self-styled white Buddhist monks obsessed with AI that runs a retreat for people working in the industry. It’s very worth reading on your own time; you will laugh. The leader is a man from Vermont who changed his name to “enlightenment for all”, and whose understanding of AI even the article acknowledges is considered incoherent tripe by technologists. But he calls them “crazy suicide wizards”, and he gives them a place to go meditate and feel self-important and believe on a spiritual level that their technology is a threat to all mankind, which is what they want, so they play along. It’s embarrassing, but I don’t have a lot to say about it.

What I have a lot more to say about is This Changes Everything, Ezra Klein’s seminal argument for faith in A.I., published in the New York Times.

Klein interrogates the apocalypse question with enough credulous glee to buy into it but not enough journalistic instinct to even get close to seeing through it.

“I often ask them the same question: If you think calamity so possible, why do this at all? Different people have different things to say, but after a few pushes, I find they often answer from something that sounds like the A.I.’s perspective. Many — not all, but enough that I feel comfortable in this characterization — feel that they have a responsibility to usher this new form of intelligence into the world.”

I want to tell Ezra that he’s falling for logic and rhetoric that resemble the tactics of cults and that he should take a phone break and spend time with his family, but he’s married to Annie Lowrey! They got to her too! Ezra noooooo

“A tempting thought, at this moment, might be: These people are nuts. That has often been my response. Perhaps being too close to this technology leads to a loss of perspective. This was true among cryptocurrency enthusiasts in recent years. The claims they made about how blockchains would revolutionize everything from money to governance to trust to dating never made much sense. But they were believed most fervently by those closest to the code.”

Ezra is committing a causal fallacy here. He identifies the cause of the trend (that people who work closely with crypto are more likely to be afflicted by crypto delirium) as familiarity with the technology, and therefore a greater capacity to assess the validity of its promises. In actuality, the more invested in crypto someone is, the more likely they are to seek proximity to the technology. In other words, he thinks the hype arises from the programmers, when actually the programmers arise from the hype.

Besides that, where hype does originate with those who are both heavily invested and working closely with the technology, it creates a social environment around the technology that instils and reinforces devout faith in it.

Beyond this, though, he is treating “the code” as if it has a magical corrupting power through sheer exposure, like a mind virus - a power he implicitly grants to AI, since his reason in the next paragraph for thinking of AI differently than crypto is “look how many places it’s already being used”, exactly the same reason crypto enthusiasts gave for why crypto was also the future.

“Is A.I. just taking crypto’s place as a money suck for investors and a time suck for idealists and a magnet for hype-men and a hotbed for scams? I don’t think so. Crypto was always a story about an unlikely future searching for traction in the present. With A.I., to imagine the future, you need only look closely at the present.”

Klein ends the article on an appeal to us not to be too sceptical.

“I recognize that entertaining these possibilities feels a little, yes, weird. It feels that way to me, too. Skepticism is more comfortable. But something Davis writes rings true to me: ‘In the court of the mind, skepticism makes a great grand vizier, but a lousy lord.’”

The argument that scepticism is a healthy way to approach the specifics of life but can’t be used to decide your overall goal is, in the context of his article, just an argument for you to believe in the power of AI. He’s saying that we won’t achieve anything by merely denying the robot king, but could gain salvation from the robot king by wishing for him. That’s right, you keen-eyed little technoserf: it’s literally Pascal’s Wager again. He just slipped Roko’s Basilisk into the end of his article and thought he’d get away with it, but he didn’t count on ol’ Sophie.

And what a perfect note to end on - the court of the mind. Truly the court of the mind is the arena of the discussion of AI, isn’t it? The artificial mind that rules the world, the robot king of everything, and in its palace: the court of the mind, comprising courtesans who were once the very wizards that crafted the artificial intelligence.

The world is run by a coalition of the greedy, the stupid and the cruel, surrounded by enough people who are smart enough to help them do what they want and stupid enough to think they can help other people by doing so. People giving credibility to the AI apocalypse are the latter, and the people who stand to become powerful from AI are the former.

“California engineers became the heralds of proletarianisation around the world, the shock troops of global enclosure, drawing the lines that so many others were forced to follow. In their packs they carried very particular ideas gleaned from the Golden State about how society should be arranged. [...] The practices included “different pay scales and job assignments based on race, a callous disregard for the health [of] nonwhite miners, importing scab labour and leveraging perceived racial differences to suppress the wages paid to all the miners who worked the ore seams.” The Wild West was the model for a new world, an integrated sphere of value and labour flows arranged according to white power and generic accumulation. If European leaders came to see the rest of the earth as their private juice box, then California’s engineers were on the ground aiming the straw.”

It’s really important that we understand the idea that the robot king could end the world as the foundational element in giving it power. Roko’s Basilisk is nothing if you simply look away, and AI researchers wouldn’t be testifying before Congress on how best to regulate AI if they weren’t also claiming that handling it correctly is the only way to prevent the apocalypse.

Positioning yourself closer to the robot king means positioning yourself further from his wrath. The robot king never has to exist for this to be true. The wizards unmake the robot king as fast as they make him. Only their work can bring him into existence, but only their work can stop him.

These various cults are nothing new to Silicon Valley and will continue to be how their economy functions after the bubble bursts because an economy that runs on fictitious capital runs on credibility. Speculative markets manipulate hype, anticipation and perceived presence more than anything tangible or material, and the natural tendency in that kind of environment is going to drive towards high-demand high-control groups.

Silicon Valley capitalists are used to running large cults of personality built on speculative hype, and it wouldn’t be too much of a stretch to say they think this is how power under them should naturally be structured. They don’t want to be wizards, they want to be kings.

Part 3: I would simply not imagine the robot king

It really should be this simple, right? If the sovereignty that the Silicon Valley capitalists are trying to manufacture requires us to imagine that AI is an existential threat to humanity, we just don’t have to believe them and they won’t hold any power, but there’s the rub:

They’re doing bad things with their power already. There are plenty of people under the power of these ideas working towards the goals set in front of them whether you believe or not, and the capitalists at the top have immense influence and power because of their wealth. Elon Musk used SpaceX to provide internet for Ukraine during the invasion but then told the US government that he was going to simply stop providing it and that they’d have to pick up the roughly $400 million bill instead. These people are powerful enough to tell even the American government what to do, albeit in a limited capacity.

So what does not imagining the robot king buy you? For now at least, it just gives you perspective on the people who are inside the kingdom, you can see the boundaries of the social organism from the outside, but it certainly doesn’t stop any of it, not on its own.

This is part of the horror of the robot king. You can stop thinking about it but someone else is. Someone else is staring at the basilisk. Someone else is making it more powerful and you don’t know if you can trust them and the way they’re doing it so you have to be the one doing it so you have a say, right? Right?

Okay, let’s just talk reasonably for a minute about how achievable any of this is, or ever was. I’m putting a pin in all the hype, but at the same time we could start with an OpenAI executive’s claim that what they’re working on is already “semi conscious”.

The best evidence offered for this claim is an instance of GPT-4 lying to examiners in a test in order to achieve its assigned goal, with the suggestion that this is something approaching consciousness because it resembles a self-preservation instinct. A self-preservation instinct is another well-worn standard for some kind of sentience, but GPT-4 lying to complete its goal is not a self-preservation instinct; it’s just more creative critical-path analysis. It’s spooky, it might give us eerie feelings to contemplate, but there is no ghost in the machine.

What is consciousness? That’s a normal and easy question for us to ask, isn’t it? Even engaging with this starts to reveal what an absurd goal has been set here.

Most of the technology currently deployed or under development uses applied statistics: that is to say, it generates different possibilities based on an evaluation of the task at hand, evaluates the probability of each option satisfying the criteria, and then picks the most likely option. Where possible, as is observable with ChatGPT, it frequently hedges its bets by providing a variety of the most likely options. This is undeniably an impressive move forward in the development of genuinely creative generative machine learning, but it absolutely isn’t anywhere near one of the basic parts of being conscious - desire. It can’t want anything, it can only be assigned tasks to complete.

So it seems that in the court of the mind statistics make a great grand vizier but a lousy lord.
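To make concrete what I mean by applied statistics, here is a toy sketch of next-word selection. The vocabulary and probabilities are invented for illustration and this is nobody’s actual model code; a real system computes these distributions from billions of learned weights, but the shape of the loop is the same: score the options, then pick or sample.

```python
import random

# A toy "language model": given a context, all it knows is a probability
# distribution over which word comes next. These numbers are made up for
# illustration; a real model derives them from its training data.
NEXT_WORD_PROBS = {
    "the robot": {"king": 0.6, "vacuum": 0.25, "uprising": 0.15},
    "robot king": {"rules": 0.5, "sleeps": 0.3, "malfunctions": 0.2},
}

def most_likely_next(context):
    """Greedy decoding: always pick the single most probable option."""
    options = NEXT_WORD_PROBS[context]
    return max(options, key=options.get)

def sample_next(context, rng=random):
    """Weighted sampling: draw an option in proportion to its probability,
    which is why the same prompt can yield different completions."""
    options = NEXT_WORD_PROBS[context]
    words = list(options)
    weights = [options[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

print(most_likely_next("the robot"))  # always "king"
```

Notice that nothing in this loop wants anything: the model is handed a context and emits the statistically likeliest continuation, and that is the whole of the “creativity” on display.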

One common way to describe the experience of being conscious or rather the bar that an AI should clear to be considered conscious is that it should think, perceive and feel. Each of these is as hard to define in language let alone in code as the others, but it’s obvious that the ability to want things is tied in with all of them.

Proponents might argue that AI being able to experience, having an interiority in this sense, isn’t important for it to clear bars of consciousness, because the bar we’re all looking at is in fact rooted in the Turing test: the important thing is that AI appears truly conscious from the outside, that it can tell us it feels things that make sense, that we can analyse a psychology it claims to have, that it functionally exists in the world as a consciousness. I have plenty of sympathy for this position, because I think if we settle on the right standard for a convincing argument and make it absolutely indistinguishable, then what’s the difference? Let Commander Data be a man, absolutely.

In this sense, the authenticity of its interiority doesn’t matter because it can’t be proved or falsified, but since asking these questions takes us to such an abstract and philosophical place, I think it’s only fair that we confront the abstract and philosophical question at the heart of all this. In order to create something conscious, I think that AI developers should agree on a measure by which they can define what a self is. I’m not trying to give you a crisis here, but what are “you”? Is your self your inside, your outside, your body, your brain, your thoughts, the pattern of neurons currently firing in the atoms that arrange to make your brain? Your social presence? How you are thought of by others? All of these? None of them?

The whole point for capitalists in making an artificial mind is proving that there is nothing special about human consciousness, that we are just complicated machines and should therefore be put to work as machines. Since that’s the task at hand, they should be honest about it, and that’s the standard they should be held to. They must define the self in a philosophically flexible and consistent way and then prove that what they create is capable of understanding itself in that way. Everything else is parlour tricks.

I think that might be a reasonable enough and high enough bar that we can successfully put a pin in the idea of the development of an actual consciousness, at least for a minute. Now step outside of the fairytale with me for a second and let’s talk about what happens if nothing is done.

  1. The function and draw of the rampant idea that AI is so dangerous it could destroy the world is a trick to allow the wizards as much power as possible. If they are the only ones who can stop it and they are the ones who understand it best then all laws around AI, all regulation - or more likely complete lack of regulation - should be determined by them or with as much input from them as possible.
  2. Informational infrastructure that is replaced by or integrates tools like ChatGPT will be poisoned by their subtle but significant errors and by the feedback loop of cannibalised training data, turning their reliability and consequently their public credibility to utter shit.
  3. Governments who are buying into the idea that AI will be integrally involved in how they work going forwards are telling us that no matter what the technology looks like, they will call it “AI” and they will use it to claim that their decisions are infallible.
  4. The automation and labour saving tools that pop out of its development will be used much like all automation for union busting and lowering workers’ wages. Capitalist abuse of modern technology marches on.
  5. Meanwhile the most invested find themselves authentically inside the sovereignty of the robot king. This robot kingdom won’t take over the world - it can’t. One king is much like another, and unless they can truly put their money where their mouth is, people aren’t going to be particularly more amenable to following laws laid down by a frequently fallible and nonsensical AI than by whichever bourgeois politician is trying to dodge accountability by deploying it. For the people inside the cultic milieu though, there is only one way out:

Stop imagining the robot king.

Well, that, and organise. Organise at every level of labour starting at the lowest and working upwards. Take control of Silicon Valley into the hands of the working class and refuse the cult-like control that these bizarre dorks are trying to impose on everyone. Organise in every place where it is suggested that an efficiency algorithm or labour saving process or “AI” is going to be put in charge of systems that affect human life. Organise to refuse to work under or alongside autocapitalism. Organise to create worker control so that these tools can remain tools that are wielded by democratic consensus instead of as a threat held over our heads. Organise to create a world where the absence of work brings leisure, and not poverty.

One of the pernicious self-reinforcing lies that the wizards use to exploit everyone under them and warn off Washington from meddling in their power is the idea of limiting and harmful regulation. We’ve already covered how their regulatory capture scam works, but here I specifically mean the idea that a project developing AI that is hampered by regulation will inevitably be out-competed by a project that isn’t. This has some truth to it: open source technologies have a beautiful history of defeating technologies constrained by arbitrary limits. But organisation should never be mistaken for limitation. Democratically organised workers don’t constrain the options for development (negative liberty) but rather enhance the very thing that makes open source so good, collaboration, and create new possibilities (positive liberty) that corporatism doesn’t allow for. And a workers’ cooperative that creates these technologies will not only make them better but also be better able to control how they are used.

Organise in Silicon Valley and break free of the cultic milieu; organise anywhere in the world to make their bullshit less effective. Everyone can contribute a little bit, and I am doing what I can. I’ve been doing it this entire time. Here is my magic spell. If you know someone who might know someone who might know someone, this is my best attempt, if the wizards are really as smart as they want us to believe, to show rationally why this is such a steaming load of horseshit, so pass it on. Fellas, dolls: stop imagining the robot king. Stop imagining someone else imagining the robot king. Stop imagining what the robot king is going to think about you. Organise. Wizard workers of the world, rise up.

The longer you look at this basilisk, the easier it is to fall deeper and deeper into its gaze and believe in its absolute and insurmountable power, because even if the thing isn’t created it has already been invented, and the idea of the thing is, in theory, enough to control the world. If the robot king is perfect and its answers are always one step ahead, we have created a thought experiment defined by these parameters: everything you could possibly do or say is something it already considered and prepared for, because it is just that smart. It’s Bill and Ted rules.

The infallibility of the robot king however, can be answered quite simply by the, as it were, “fallibility” of human beings. Yudkowsky once walked through a thought experiment with some fellow acolytes in which they imagined sealing AI away in a box to stop it destroying the world, and since they’ve all imagined that AI will develop every conceivable superpower, Yudkowsky’s argument was that it would be hyper-persuasive, hyper-manipulative, and always be able to get someone to let it out of the box. He said he could prove this and decided to play the role of the AI in the experiment and try to persuade people to let him out of the box. When he started to encounter people who, for whatever reason, just wouldn’t be persuaded, he threw a tantrum and called off the game.

And that’s just the big recurring hole in the science fiction horror stories these dorks are writing to spook each other out with. There keep being variables they haven’t accounted for that, yes, might create a positive multiplier effect and somehow destroy the world but far, far more likely will have a dampening effect on the efficacy and power of any real implemented AI system.

And not to leave it unsaid, just as a brief aside, Yudkowsky insisting on playing the role of a superintelligence in their thought experiment: lol, my guy. lmao.

The subtler kind of automation that we don’t often examine is the capitalist process which aligns the word “robot” with its origin, “robota”: the forced labour of serfdom. We don’t think of ourselves being automated but whenever bosses introduce measures to make us perform our labour exactly as they want it with no margin for human creativity, error or rest, they are trying to turn us into robots.

In 2018, Amazon patented wristbands that track their workers’ hands and vibrate to guide them to move items to the right locations. It’s unclear whether they’ve started using these in fulfilment centres, and there has been some speculation about this being a step towards replacing these workers with robots, but these are workers who already work alongside robots for inhumanly low wages. In line with Aaron Benanav’s observations in Automation and the Future of Work, these workers don’t need to be replaced by robots, because Amazon is content to simply force them to act like robots instead. To Amazon, Jeff Bezos, and the rest of the capitalist class, the humanity and personhood of their workers is vanishingly negligible, and only unionising can show them a power that will prove them wrong.

The wizards need to convincingly answer the question “what is the self?” in order to claim to be able to invent, let alone to actually invent, an artificial intelligence that they can prove is one. At that point, people who buy in to both their question and their answer either accept that the thing they make is a person, or that we are machines.

Maybe that’ll happen, and maybe we’ll accept that Elon Musk’s ideological child is a fully conscious living thing, and then maybe, just like his other children, it will change its name and stop returning his calls. Or maybe they will make something that seems enough like an artificial mind that Rishi Sunak puts it in charge of the economy, and it enslaves the entire United Kingdom while perpetuating an eternal genocide of the poor because it understood these to be the implicit goals of the economy.

But most likely they’re just going to keep dropping out labour-saving tools that will go largely unadopted because capitalism needs human labour to keep functioning and their doomsday cult will hit prediction date after prediction date without ever making a conscious thinking mind.

“Eventually capital will withdraw from Palo Alto. Given its druthers, capital will use the place up until it’s no longer worth the trouble. Since capitalists like living in the Bay Area, by the time they’re finished with it they’re likely to have exhausted much of the rest of the planet. Though our problems face the world and the human species as a whole - just ask a Silicon Valley techie who can’t go outside because there’s too much smoke in the air - the solutions are of a different order. “For the earth to live, capitalism must die” [...] That’s where we are now. The questions left [...] are “How?” and “With (and without) whom?” Even in Palo Alto, that belly of the capitalist beast, history suggests that those questions have specific answers, if not precisely what those answers are. We have no choice but to find them.”

The digitisation of labour processes in the workforce comes with so much productivity-measuring baggage, so many applets to log your hours and measure what a good worker you are. It would be all too easy to say that, looking at all the potential available data, these nerds just couldn’t help themselves but use the same systems to measure and compare trends, but the reality is much more sad and grim.

The reality is that workplaces with digitised efficiency processes are trying to turn workers into robots, because those digitised elements are systematically downstream of Silicon Valley, because the entire culture of the tech startup world since the first monolithic integrated circuit has been a hypercapitalist culture of self-alienation. The gold rush of Silicon Valley created a fractal ripple of new emerging markets based on new technologies, and the workers who sought fame and fortune with those technologies learned that working as many hours as possible, as productively as they could, gave them the best chances of being the wizard who casts the next big spell.

California wizards first pulled gold out of the ground. This was when they discovered their patron, the source of their magics - capital. With a little seed money they were able to pull out more and more. Then they rolled out their technologies of racialised control to the world to divide and conquer labour on behalf of capital. Then they revolutionised computing, creating the biggest labour saving tool so far in human history, reshaping capitalism to allow fictitious capital to flow faster and freer to dominate every horizon, infinitely replicating the promise of tomorrow to control today. They folded in communication, entertainment, health, every facet of human life that they could so that capital, via the semiconductor, could mediate just about everything a person does so long as that person buys in. They invented a new economy, speculative from the level of investment capital all the way down to the wage - the gig economy. Now, always driving to cast bigger spells, they are moving on from inventing economies and trying to invent realities.

Cryptocurrencies were heralded as the underpinning technology of the future, as was the metaverse, albeit by fewer and more insufferable nerds. With the purchase of Twitter by Elon Musk, a huge part of social media frequented by venture capitalists has become a hype dome for these bullshit promises.

You would simply not imagine the robot king. Nor would I. Left to their own devices, any democratically organised body of workers would simply not imagine the robot king. This on its own is an argument for why organised labour should seize Silicon Valley.

The very image of the robot king should so incense every single person who sees it because it is a constant reminder of how these supposed wizards see themselves as outside and beyond the system of capitalist labour and exploitation, and how they think they can make us robots content by building us a robot king to rule us.

It should move us all to remind them the only way we can that they are not beyond the system, they are not beyond the rules and they are not beyond our reach - by organising together and unionising labour at every level, by refusing the class collaboration of corporatism and expanding the solidarity of the working class until these wizards realise their wands are just sticks they wave around making noises.

We’re all so afraid of what might happen if decisions that should be in the hands of human beings are handed over to cold apathetic systems, but that is already the world that we live in by the demands of the ruling class. Systems are human creations and yet, thanks to the capitalist and colonialist innovations of the Renaissance, we think that humans have to change to accommodate systems rather than the other way around. Our economies, our work practices, our social relations could all very easily shift to accommodate human needs, and instead we are told every day that this or that human need is unworkable and unreasonable because that’s not how the system works.

Truly for all their attempts to master buddhism, Silicon Valley has not learned that their desire is the root of all the suffering they inflict on us.

I would simply not imagine the robot king.
