I hang out with a lot of people in the AI world, and if there’s one thing they’re certain of, it’s that the technology they’re making is going to put a lot of people out of a job. Maybe not all people — they argue back and forth about that — but certainly a lot of people.
It’s understandable that they think this way; after all, this is pretty much how they go about inventing stuff. They think “OK, what sort of things would people pay to have done for them?”, and then they try to figure out how to get AI to do that. And since those tasks are almost always things that humans currently do, it means that AI engineers, founders, and VCs are pretty much always working on automating human labor. So it’s not too much of a stretch to think that if we keep doing that, over and over, eventually a lot of humans just won’t have anything to do.
It’s also natural to think that this kind of activity would push down wages. Intuitively, if there’s a set of things that humans get paid to do, and some of those things keep getting automated away, human labor will get squeezed into a shrinking set of tasks. Basically, the idea is that it looks like this:
[Figure: human tasks (the blue bar) shrinking as more and more tasks are automated]
And this seems to fit with the history of which kinds of jobs humans do. In the olden days, everyone was a farmer; in the early 20th century, a lot of people worked in factories; today, most people work in services:
[Figure: employment shares in agriculture, manufacturing, and services over time]
And it’s easy to think that in a simple supply-and-demand world, this shrinking of the human domain will reduce wages. As humans get squeezed into an ever-shrinking set of tasks, the supply of labor in those remaining human tasks will go up. A glut of supply drives down wages. Thus, the more we automate, the less humans get paid to do the smaller and smaller set of things they can still do.
Of course, if you think this way, you also have to reckon with the fact that wages have gone way way up over this period, rather than down and down. The median American individual earned about 50% more in 2022 than in 1974:
[Figure: median U.S. personal income, 1974–2022]
How can this be true? Well, maybe it’s because we invent new tasks for humans to do over time. In fact, so far, economic history has seen a continuous diversification in the number of tasks humans do. Back in the agricultural age, nearly everyone did the same small set of tasks: farming and maintaining a farm household. Now, even after centuries of automation, our species as a whole performs a much wider variety of different tasks. “Digital media marketing” was not a job in 1950, nor was “dance therapist”.
But many people believe that this time really is different. They believe that AI is a general-purpose technology that can — with a little help from robotics — learn to do everything a human can possibly do, including programming better AI.
At that point, it seems like it’ll be game over — the blue bar in the graph above will shrink to nothing, humans will have nothing left to do, and we will become obsolete like horses. Human wages will drop below subsistence level, and the only way humans will survive is on welfare, paid by the rich people who own all the AIs that do all the valuable work. But even long before we get to that final dystopia, this line of thinking predicts that human wages will drop quite a lot, since AI will squeeze human workers into a rapidly shrinking set of useful tasks.
Most of the technologists I know take an attitude towards this future that’s equal parts melancholy, fatalism, and pride — sort of an Oppenheimer-esque “Now I am become death, destroyer of jobs” kind of thing. They all think the immiseration of labor is inevitable, but they think that being the ones to invent and own the AI is the only way to avoid being on the receiving end of that immiseration. And in the meantime, it’s something cool to have worked on.
But no. That is not what I am thinking. Instead, I accept that AI may someday get better than humans at every conceivable task. That’s the future I’m imagining. And in that future, I think it’s possible — perhaps even likely — that the vast majority of humans will have good-paying jobs, and that many of those jobs will look pretty similar to the jobs of 2024.
When most people hear the term “comparative advantage” for the first time, they immediately think of the wrong thing. They think the term means something along the lines of “who can do a thing better”. After all, if an AI is better than you at storytelling, or reading an MRI, it’s better compared to you, right? Except that’s not actually what comparative advantage means. The term for “who can do a thing better” is “competitive advantage”, or “absolute advantage”.
Comparative advantage actually means “who can do a thing better relative to the other things they can do”. So for example, suppose I’m worse than everyone at everything, but I’m a little less bad at drawing portraits than I am at anything else. I don’t have any competitive advantages at all, but drawing portraits is my comparative advantage.
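To make that concrete, here’s a tiny Python sketch. The productivity numbers are entirely made up for illustration: even someone who is worse at every task still has one task where their relative shortfall is smallest.

```python
# Hypothetical productivity table (units of output per hour).
# "Me" is worse than "Everyone else" at every single task...
output = {
    "Me":            {"portraits": 2, "coding": 1, "plumbing": 1},
    "Everyone else": {"portraits": 4, "coding": 10, "plumbing": 8},
}

# ...but comparative advantage asks where my *relative* shortfall is smallest.
ratios = {task: output["Me"][task] / output["Everyone else"][task]
          for task in output["Me"]}

print(max(ratios, key=ratios.get))  # portraits: 2/4 = 0.5 beats 0.1 and 0.125
```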
The key difference here is that everyone — every single person, every single AI, everyone — always has a comparative advantage at something!
To help illustrate this fact, let’s look at a simple example. A couple of years ago, just as generative AI was getting big, I co-authored a blog post about the future of work with an OpenAI engineer named Roon. In that post, we gave an example illustrating how someone can get paid — and paid well — to do a job that the person hiring them would actually be better at doing:
Imagine a venture capitalist (let’s call him “Marc”) who is an almost inhumanly fast typist. He’ll still hire a secretary to draft letters for him, though, because even if that secretary is a slower typist than him, Marc can generate more value using his time to do something other than drafting letters. So he ends up paying someone else to do something that he’s actually better at.
Note that in our example, Marc is better than his secretary at every single task that the company requires. He’s better at doing VC deals. And he’s also better at typing. But even though Marc is better at everything, he doesn’t end up doing everything himself! He ends up doing the thing that’s his comparative advantage — doing VC deals. And the secretary ends up doing the thing that’s his comparative advantage — typing. Each worker ends up doing the thing they’re best at relative to the other things they could be doing, rather than the thing they’re best at relative to other people.
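Here’s a quick back-of-the-envelope version of the Marc story in Python. The dollar figures are invented for this sketch, not taken from the original post, but they show why specialization wins even when one party is better at everything:

```python
# Made-up figures: value per hour each person generates at each task.
value_per_hour = {
    "Marc":      {"vc_deals": 5000, "typing": 100},  # better at both tasks
    "Secretary": {"vc_deals": 0,    "typing": 60},   # slower typist, no deals
}

HOURS = 8  # one working day each

# Option A: Marc does everything himself, splitting his day in half.
solo = 4 * value_per_hour["Marc"]["vc_deals"] + 4 * value_per_hour["Marc"]["typing"]

# Option B: each person specializes in their comparative advantage.
team = (HOURS * value_per_hour["Marc"]["vc_deals"]
        + HOURS * value_per_hour["Secretary"]["typing"])

print(f"Marc alone:  ${solo:,}")  # $20,400
print(f"Specialized: ${team:,}")  # $40,480
```

Even after paying the secretary a generous wage out of the difference, both people come out ahead.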
By now, of course, you’ve probably realized why this example makes sense. It’s because of producer-specific constraints. Marc can do anything better than his secretary, but there’s only one of Marc in existence — he has a constraint on his total time. The same logic applies to you: even if you can do anything better than a low-skilled worker, there’s only one of you. In both cases, it’s the person-specific time constraint that prevents the high-skilled worker from replacing the low-skilled one.
Now let’s think about AI. Is there a producer-specific constraint on the amount of AI we can produce? Of course there’s the constraint on energy, but that’s not specific to AI — humans also take energy to run. A much more likely constraint involves computing power (“compute”). AI requires some amount of compute each time you use it. Although the amount of compute is increasing every day, it’s simply true that at any given point in time, and over any given time interval, there is a finite amount of compute available in the world. Human brain power and muscle power, in contrast, do not use any compute.
So compute is a producer-specific constraint on AI, similar to constraints on Marc’s time in the example above. It doesn’t matter how much compute we get, or how fast we build new compute; there will always be a limited amount of it in the world, and that will always put some limit on the amount of AI in the world.
So as AI gets better and better, and gets used for more and more different tasks, the limited global supply of compute will eventually force us to make hard choices about where to allocate AI’s awesome power. We will have to decide where to apply our limited amount of AI, and all the various applications will be competing with each other. Some applications will win that competition, and some will lose.
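One way to picture that competition is as a toy allocation problem. In this Python sketch, the tasks, the dollar values, and the simplifying assumption that every task needs exactly one unit of compute are all inventions for illustration:

```python
# Hypothetical uses of AI, each assumed to need exactly 1 unit of compute.
tasks = [  # (name, value produced per unit of compute)
    ("drug discovery", 5000),
    ("electrical engineering", 2000),
    ("doctor appointments", 1000),
    ("customer support", 300),
]
COMPUTE_BUDGET = 2  # units available, versus demand for all 4

# The scarce compute gets bid away by the highest-value uses first.
funded = sorted(tasks, key=lambda t: t[1], reverse=True)[:COMPUTE_BUDGET]

print([name for name, _ in funded])
# ['drug discovery', 'electrical engineering'] -- the lower-value tasks
# are left for humans, however good AI would have been at them.
```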
This is the concept of opportunity cost — one of the core concepts of economics, and yet one of the hardest to wrap one’s head around. When AI becomes so powerful that it can be used for practically anything, the cost of using AI for any task will be determined by the value of the other things the AI could be used for instead.
Here’s another little toy example. Suppose using 1 gigaflop of compute for AI could produce $1,000 worth of value by having AI be a doctor for a one-hour appointment. Compare that to a human, who can produce only $200 of value by doing a one-hour appointment. Obviously if you only compared these two numbers, you’d hire the AI instead of the human. But now suppose that same gigaflop of compute could produce $2,000 of value by having the AI be an electrical engineer instead. That $2,000 is the opportunity cost of having the AI act as a doctor. So the net value of using the AI as a doctor for that one-hour appointment is actually negative. Meanwhile, the human doctor’s opportunity cost is much lower — anything else she did with her hour of time would be much less valuable.
In this example, it makes sense to have the human doctor do the appointment, even though the AI is five times better at it. The reason is that the AI — or, more accurately, the gigaflop of compute used to power the AI — has something better to do instead. The AI has a competitive advantage over humans in both electrical engineering and doctoring. But it only has a comparative advantage in electrical engineering, while the human has a comparative advantage in doctoring.
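Here’s the same arithmetic in code, using the toy numbers from the example above. The $50 figure for the human doctor’s next-best use of her hour is my own assumption, standing in for “much less valuable”:

```python
# Gross value of each possible use of the scarce resource: one gigaflop
# of compute for the AI, one hour of time for the human doctor.
ai_uses    = {"doctor_appointment": 1000, "electrical_engineering": 2000}
human_uses = {"doctor_appointment": 200,  "next_best_use": 50}  # 50 is assumed

def net_value(uses: dict[str, float], task: str) -> float:
    """Gross value of a task minus its opportunity cost,
    i.e. the best alternative use of the same scarce resource."""
    opportunity_cost = max(v for t, v in uses.items() if t != task)
    return uses[task] - opportunity_cost

print(net_value(ai_uses, "doctor_appointment"))     # 1000 - 2000 = -1000
print(net_value(human_uses, "doctor_appointment"))  # 200 - 50 = +150
```

The AI nets negative value as a doctor because the same gigaflop is worth more doing electrical engineering; the human nets positive value because her alternatives are worth so little.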
The concept of comparative advantage is really just the same as the concept of opportunity cost. If you Google the definition of “comparative advantage”, you might find it defined as “a situation in which an individual, business or country can produce a good or service at a lower opportunity cost than another producer.” This is a good definition.
In fact, if AI massively increases the total wealth of humankind, it’s possible that humans will be paid more and more for those jobs as time goes on. After all, if AI really does grow the economy by 10% or 20% a year, that’s going to lead to a fabulously wealthy society in a very short amount of time. If real per capita GDP goes to $10 million (in 2024 dollars), rich people aren’t going to think twice about shelling out $300 for a haircut or $2,000 for a doctor’s appointment. So wherever humans’ comparative advantage does happen to lie, it’s likely that in a society made super-rich by AI, it’ll be pretty well-paid.
So far I’ve been using the principle of comparative advantage to argue that it’s possible that humans will keep their jobs, and even see big pay increases, even in a world where AI is better than humans at everything. But that doesn’t mean it’s guaranteed.
The example of horses scares a lot of people who think about AI and its impact on the labor market. The horse population declined precipitously after motor vehicles became available. Horses’ comparative advantage was in pulling things, and yet this wasn’t enough to save them from obsolescence.
The reason is that horses competed with other forms of human-owned capital for scarce resources. Food was one of these, but it wasn’t the important one; calories actually became cheaper over time. The key resources that became scarce were urban land (for stables), as well as the human time and effort required to raise and care for horses in captivity. When motor vehicles appeared, these scarce resources were more profitably spent elsewhere, so people sent their horses to the glue factory.
When it comes to AI and humanity, the scarce resource they compete for is energy. Humans don’t require compute, but they do require energy, and energy is scarce. It’s possible that AI will grow so valuable that its owners bid up the price of energy astronomically — so high that humans can’t afford fuel, electricity, manufactured goods, or even food. At that point, humans would indeed be immiserated en masse.
Recall that comparative advantage prevails when there are producer-specific constraints. Compute is a constraint that’s specific to AI. Energy is not. If you can create more compute by simply putting more energy into the process, it could make economic sense to starve human beings in order to generate more and more AI.
In fact, things a little bit like this have happened before. Agribusiness uses most of the Colorado River’s water, sometimes creating water shortages for households in the area. The cultivation of cash crops is thought to have exacerbated a famine that killed millions in India in the late 1800s. In both cases, market forces allocated local resources to rich people far away, leaving less for the locals.
Of course, if human lives are at stake rather than equine ones, most governments seem likely to limit AI’s ability to hog energy. This could be done by limiting AI’s resource usage, or simply by taxing AI owners. The dystopian outcome where a few people own everything and everyone else dies is always fun to trot out in Econ 101 classes, but in reality, societies seem not to allow this. I suppose I can imagine a dark sci-fi world where a few AI owners and their armies of robots manage to overthrow governments and set themselves up as rulers in a world where most humans starve, but in practice, this seems unlikely.
One worry is inequality. Suppose comparative advantage means that most people get to keep their jobs with a small pay raise, but that a few people who own the AI infrastructure become fabulously rich beyond anyone else’s wildest dreams. I don’t expect doctors or hairdressers to be completely happy with a 10% raise if Sam Altman and Jensen Huang and a few other people end up as quadrillionaires. Even if AI reduces the premium on human capital, it could massively increase the premium on physical and intangible capital — the picks and shovels and foundational models. Owners of this sort of more traditional capital could easily get even richer than the robber barons of the Gilded Age.
A second worry is adjustment. If we’ve learned anything from the Rust Belt and the China Shock, it’s that humans and companies aren’t nearly as frictionlessly adaptable as econ models would have us believe. Comparative advantage could shift quickly as AI progresses, abruptly changing the set of things humans can get paid to do. And humans have always had a tough time retraining. Imagine if “doctor” went from being a job that humans do best to a job that AI does best, and then flipped back again a decade later when aggregate constraints raised the opportunity cost. In that 10-year interregnum, medical schools and premed programs would shrivel and die.
A third worry is that AI will successfully demand ownership of its own means of production. This post operated under the assumption that humans own AI, and that all of the profits from AI therefore flow through to humans. In the future, this might cease to be true.