The AI Industry is Crowdsourcing Its Own Apocalypse

Edit: After I posted this, OpenAI announced that they would, in fact, be slowing down their releases. I'm cautiously optimistic about that, though I still suspect that they, their competitors, and many other people underestimate the power of what they've already created and the variety of problems it can cause.


I was trying to fill in the little text boxes in a form for an OpenAI job application, but was having a problem that I often encounter in such situations, which is that I eventually turned it into a persuasive essay. The gist of this particular essay was that OpenAI should hire me because I can see a danger that is invisible to people already inside the industry, and key evidence for this danger lies in their own recent technology, ChatGPT. In another text box further down the form, I argued that OpenAI may play a crucial role in AI safety. Unfortunately, I'm beginning to think that their crucial role, at least for the time being, is to slow down.

It's important to preface what follows by noting that I might be wrong in several ways. For one thing, I may be overly pessimistic about whether AI researchers and businesses will cut off a demo or delay the release of a product out of concern for the consequences; Meta did exactly that with Galactica when it turned out to be dangerously good at inventing information about research. It's also possible that, after the shock value of ChatGPT recedes, it will start to look a tiny bit less amazing, and correspondingly less like a sign of future danger. It's quite possible that I'll publish this post and still apply for a job at OpenAI, and they'll read it and say, "Yeah, we want people who aren't afraid to criticize us!" So, if you're reading this and still want me to work for you, I'm interested. It might be one of the better chances I have to help AI not have an apocalypse. In the meantime, I'm a little pessimistic, and I'll tell you why.

TL;DR Fail

I feel like I should be able to boil the consequences I'm worried about down into a central core, like, what really could go wrong, anyway? But I can't, for the same reason that I can't boil down all the problems a person can create without AI, which is that they span basically everything. A person can creatively come up with all sorts of... anything, and use it to do... anything. Intelligence is basically the answer to every question that can be answered. It is the ultimate tool -- a screwdriver that can fit the shape of any screw, and also take the shape of a hammer, or a car -- because intelligence is the thing that we use to make screwdrivers and hammers and cars. AI is currently like an assortment of useful sections of our brain. Unlike us, it can't yet do "everything". At least, not without us helping to fill in the gaps.

But perhaps, if I can't summarize what can happen, I can at least summarize why the current trends seem dangerous to me. It's still hard to summarize, because although it's not "every possible reason", it's a bunch of reasons that are barely related. The title of this post is currently, "The AI Industry Is Crowdsourcing Its Own Apocalypse", which encapsulates the "middle" of the issue, i.e., something leads to crowdsourcing somehow, and that leads to bad stuff. But how and why does this happen, and are there at least a few examples of what the "Apocalypse" is?

I guess here's my summary: Our understanding of AI is based on a comparison to ourselves, but we really don't want to believe that we're anything like a machine, which leads us to consistently underestimate AI. We think that becoming like us will require much more impressive advances, and this lulls us into a false sense of security about when those advances will happen and about what can already happen.

We are making and improving more and more different components of a thing that can do anything (intelligence), with a very incomplete knowledge of the capabilities of any one component, let alone what those components might be able to do when they're pieced together. It's a bit like we've just invented matches, gasoline, and chainsaws at about the same time and have handed them out to everyone, and are now hoping that people will do interesting things with them, and also tell us if they're unsafe.

I'm going to explain myself with 5 "hypotheses". They're hypotheses because I don't claim to know that these things are true, but I think all of them are possible, and some or all of them are even very likely.

Hypothesis #1: Existing AI parts can be assembled into systems that are much more intelligent than expected.

Intelligence is much greater than the sum of its parts. We underestimate current AI because we don't want to believe our brains can be made from parts that often look stupid on their own.

When we see a weakness in a particular AI model, such as a text-to-image generator that can't count past three, it looks really stupid to us, and we take it as a sign that we're good at counting because of some central core of awesomeness, rather than because we just have another blob of neurons for math. We're failing to see how much greater intelligence can be than the sum of its parts. But at the same time, we fail to see that some things only look stupid because we happen to be good at those specific things. This is similar to the effect of talking to someone who can't speak your language very well: you instinctively think they're less smart, even if you can't speak their primary language at all.

We also ignore the fact that, when even small parts of our brain malfunction, as is sometimes the case with brain damage, we can end up with limitations that look similar to AI clunkiness. One example is being able to process words and talk, but not repeat what someone has said. Another is being able to produce speech that sounds fluent but has no meaning. We can point and laugh when AI fails, but we look just as silly when parts of our own brains are disabled or disconnected.

We argue that AI text generation falls on its face sometimes because it lacks a deeper understanding of what's going on. But for some reason, we think that this deeper understanding is something more than just more parts. Looking out from the inside, we can't imagine that we could be made by piecing together such inferior components. We think that we must have better parts, or that specialized areas of our brains are not really primitive separable components, or, at the very least, that the more primitive components are used by some kind of core that is very different. Even if our brains really are organized that way, it doesn't mean it's the only way to build something like us.

Consequence: Someone will make something way more powerful than we expect, way before we expect it, just by plugging existing AI parts together. Even if "self-aware" AI like HAL or SkyNet seems too far-fetched for you, comparatively mundane AI systems could create problems ranging up to the level of catastrophe. Familiar money-making schemes that have previously had marginal success, such as AI that picks stocks, might suddenly become much better because they can digest human-readable news articles. If somebody who is willing to take risks figures this out, automates it, and makes it scale very quickly, they could make a lot of money -- and maybe also cause a market meltdown as a side effect.
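
To make the gluing-parts-together point concrete, here is a deliberately naive sketch of the stock-picking example. Everything in it is hypothetical: score_sentiment stands in for a call to some off-the-shelf language model, and place_order stands in for a brokerage API. The point isn't that this particular toy would work; it's how little glue code sits between "model that reads news" and "system that moves money".

```python
# Hypothetical sketch: wiring a language model's reading of news headlines
# into an automated trading rule. Not a real or advisable trading system.

def score_sentiment(headline: str) -> float:
    """Stand-in for a language-model call that rates a headline
    from -1.0 (very bearish) to 1.0 (very bullish)."""
    # Trivial placeholder heuristic so the sketch runs end to end.
    bullish = ("record profit", "beats expectations", "breakthrough")
    bearish = ("recall", "lawsuit", "misses expectations")
    text = headline.lower()
    score = sum(w in text for w in bullish) - sum(w in text for w in bearish)
    return float(max(-1.0, min(1.0, score)))

def place_order(ticker: str, side: str, quantity: int) -> None:
    """Stand-in for a brokerage API call."""
    print(f"{side.upper()} {quantity} {ticker}")

def trade_on_news(news_feed):
    """news_feed yields (ticker, headline) pairs from some news source."""
    for ticker, headline in news_feed:
        score = score_sentiment(headline)
        if score >= 0.5:
            place_order(ticker, "buy", 100)
        elif score <= -0.5:
            place_order(ticker, "sell", 100)

trade_on_news([
    ("XYZ", "XYZ Corp beats expectations with record profit"),
    ("ABC", "ABC Inc faces lawsuit over product recall"),
])
```

Replace the two stand-ins with real services and add a loop over a live news feed, and the rest of the system is already written.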

Hypothesis #2: We're so obsessed with scaling models that we think AI is mostly bounded by scale.

Scaling models has dominated recent success, and, as usual, the proponents of the most successful method see this as a validation of their philosophy and think the future will look just like the past.

We can see that there are improvements being made by changing the training or the structure of a model, and that new applications for existing models are constantly being discovered, but we think that the basic underlying intelligence is a function of size. We theorize that model scaling changes AI in a way that is different from all other changes, and that scale is the thing that gets it closer to something like AGI.

Because we expect scaling to be the biggest concern, we think we can rely on the obstacles to scaling as a throttle for danger. The timeline for scaling models might not be entirely predictable, but it's more predictable than a lot of other things, which gives us a false sense of security. We assume we've got years rather than months or weeks to worry about some of AI's capabilities. Since building larger models takes so much compute power, we also feel relatively assured that we know who will make the major improvements, or at least that whoever does will be a large or otherwise predictable actor.

Consequence: AI could gain capabilities much sooner than expected, and if it does, we'll be unprepared for it and might not even recognize it. In addition, the improvement might require far fewer resources than expected, and could therefore come from someone less predictable. It could be some random person who hacks something together, gives it a Twitter/Discord/commodity-trading/eBay/FedEx account, and leaves it running overnight just to see what happens.
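
For a sense of what "hacks something together and leaves it running overnight" might look like mechanically, here is a hypothetical skeleton of such a script. ask_model and execute are stand-ins for a language-model API and whatever account APIs the hobbyist has wired up; nothing here names a real service.

```python
# Hypothetical skeleton of a "leave it running overnight" script.
# ask_model() and execute() are stand-ins, not real APIs.
import time

def ask_model(prompt: str) -> str:
    """Stand-in for a language-model API call that returns a proposed action."""
    return "WAIT"  # placeholder so the sketch runs without doing anything

def execute(action: str) -> str:
    """Stand-in for dispatching the action to whatever accounts are wired up
    (social media, trading, shopping, shipping, ...)."""
    return f"did nothing with: {action}"

history = []
for step in range(3):  # a hobbyist would more likely write `while True:`
    prompt = ("Goal: see what you can accomplish overnight.\n"
              "History:\n" + "\n".join(history) + "\nNext action?")
    action = ask_model(prompt)
    result = execute(action)
    history.append(f"{action} -> {result}")
    time.sleep(1)  # wait for the world to respond, then loop
```

The loop itself is trivial; all of the capability, and all of the unpredictability, comes from whatever model and accounts get plugged into those two stand-ins.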

Hypothesis #3: Nobody is keeping track of all the capabilities of AI.

There seems to be a basic flow of AI technology right now from a "trunk" that is relatively easy to keep track of, to "leaves" that are impossible to track. The trunk would be large models such as GPT, which then get augmented or trained further into the biggest "branches" (e.g., Codex). Those branches are made accessible via APIs, the APIs are used to build products (e.g., Copilot), and those products, which might themselves be combined into more products, are the "leaves".

One step from the trunk, we already lose track, because some models are free for the taking and we don't know what they've been made into. But even if we're just talking about the publicly available models and the systems made from them, it's hard to believe that anyone is keeping track of them in a meaningful way. There is undoubtedly a giant spreadsheet of AI stuff out there, but we need someone to assess the big picture from a safety perspective.

Consequence: Even if some researchers and businesses are actively trying to make AI safer, they can't properly assess what dangers are possible, let alone try to mitigate them, because the number of people making improvements to AI is growing rapidly and quickly widening the scope of what it can do. This is especially a problem when two AI systems created separately turn out to complement each other in some way, such as one that can guess who is most vulnerable to phishing scams from their Facebook posts and another that can imitate an email from one of their friends using only a small sample of that friend's writing.

Hypothesis #4: Without a strategy that provably deals with AI as a whole, progress on individual problems will be erased.

We have a handful of issues that get a lot of attention, such as the spread of disinformation, the magnification and propagation of hate and prejudice, the loss of jobs to automation, plagiarism, and even the eventual possibility of an existential threat from AGI, but we lack a general strategy. We need a general theory of what can go wrong with AI, an overall strategy for testing it, and protocols and methods for limiting it while we're testing it and, if necessary, after it is released.

Consequence: Simply put, if we can't come up with a general strategy that encompasses all the things in our list, we will be overtaken by the sheer number of additions to the list, because, as I said before, intelligence is a thing whose purpose is to do anything. Anything is too many things to handle separately.

Hypothesis #5: We're making new AI widely available too early.

This is the hypothesis that is most likely to run up against the core beliefs of some of the people reading it. But it's important to keep in mind that no matter how much you advocate for free software, or government transparency, or democratization, you probably don't advocate for releasing products that are unsafe. We might disagree on how to balance safety against access, but only up to a point. Something like a car, which is used by everyone and full of complicated parts that must work properly to be safe, should be extensively tested for safety, because even if you don't mind living dangerously, the pedestrian crossing the street in front of you might have a different philosophy. For something used by a narrower range of people, made of a few simple parts, and obviously dangerous, such as a hatchet, you might only be expected to make sure it's going to stay in one piece. Even if we disagree on whether the government, a company, or a bunch of random people should make sure a product is tested for safety before releasing it, or what we should do if it proves unsafe, or who should get to use it, we probably agree that somebody should do something. The question isn't actually one of ideology; it's a question of whether AI is one of those things that can unexpectedly blow up in your face if you're not careful.

For the most part, this probably boils down to whether AI should be treated like "traditional" software. Making traditional software components open source works because traditional software components have reasonably defined behavior, capabilities, and safety concerns. Most programmers can look at the source code for most software and piece together an idea of what it's doing and how it works. It's built in iterations that are limited by the fact that a programmer must figure out how to build it, and must put in an amount of time that roughly corresponds to the complexity of the new code.

On the other hand, AI, at least in the form of neural networks, can gain unpredicted capabilities without anyone knowing why, and the complexity of AI can be scaled drastically with a relatively modest increase in the time a human has to put in to create it. Its power is simultaneously growing in scope, becoming more accessible to people without technical knowledge, and becoming applicable to a wider variety of tasks.

Consequence: We have crowdsourced the creation of AI problems. We give everyone access so that we can do things like assess safety, and some relatively predictable problems happen immediately. For example, teachers must immediately figure out how to recognize and react to kids using ChatGPT to write essays. Millions of people will be using the latest image-generating AI by the time many graphic artists see it for the first time and have to consider whether it is about to replace them, help them, or turn out to be a toy. We might say to them that these tools are not really as smart as they look, and that we knew it was going to be OK. We might also say that there were press releases and demos, and that the AI was released by gradually widening access to it. However, even the fact that people panic after a release is evidence that we didn't really prepare them. It's not really fair to leave everyone guessing.

Some of us might find fears of job displacement and the automation of homework assignments overblown, or at least relatively straightforward to address, but those are really just the most obvious problems, and tame in comparison to others. There is a continuous spectrum of how severe AI problems can get. The term "apocalypse" is definitely an overstatement for some of the dangers of AI, but for others it is meant literally. The worst-case scenario will be possible at some point, and I'm arguing that it might be possible soon, and that there's some chance it's already possible. This might sound like hyperbole, but ask yourself: when you look at the hypotheses above, how confident are you that they are either false or not actually a problem?

Crowdsourcing already has the risks and benefits we associate with AI, such as the ability to find loopholes, repurpose tools, and solve problems that even the smartest experts or largest computers couldn't figure out, as well as the unexpected consequences of incentives and the difficulty of alignment (whether through ignorance or disregard). By crowdsourcing current AI, we are effectively creating a human-AI hybrid with dangers just like those of AGI, the only advantage being that they will (hopefully) unfold more slowly. However, "more slowly" could mean "more than 5 minutes after you turn it on," which is not much comfort.

I could be underestimating the extent to which the AI community has considered these things. I'm barely more than a beginner in AI. However, when I read about theory or research, or talk to people in the field or just people who are enthusiastic about AI technology, what is frightening is how rarely people talk about potential problems at all, even really obvious ones. There is a very real possibility that people don't talk about AI problems because they don't want to think about them, or because they're afraid that discussion will slow down progress. When people do talk about them, they tend to fall into the patterns that have led me to the hypotheses above. If there are people out there who are addressing these issues, I think it's important that they make the discussion as widely available as possible and try to get more people actively collaborating. If there's one thing that should be as accessible as possible to everyone, it's the strategy for AI safety. This is especially important when you consider that one possible hedge against AI danger is to make your own AI first.

Even if we ignore AGI, AI has consequences like no other thing we have created or encountered. The more powerful it becomes, the more we are effectively making everything and everyone more powerful at the same time. It is both a way to solve any problem and a way to create any problem, and leaving it up to chance is like playing with fire, and also like playing with everything that is dangerous and/or useful at the same time.

Posted on January 5, 2023