The big idea: can we stop AI making humans obsolete?


Right now, most big AI labs have a team figuring out ways that rogue AIs might escape supervision, or secretly collude with each other against humans. But there’s a more mundane way we could lose control of civilisation: we might simply become obsolete. This wouldn’t require any hidden plots – if AI and robotics keep improving, it’s what happens by default.

How so? Well, AI developers are firmly on track to build better replacements for humans in almost every role we play: not just economically as workers and decision-makers, but culturally as artists and creators, and even socially as friends and romantic companions. What place will humans have when AI can do everything we do, only better?

Talk of AI’s current abilities can occasionally sound like marketing hype, and some of it definitely is. But in the longer term, the scope for improvement is huge. You may believe that there will always be something uniquely human that AI just can’t replicate. I’ve spent 20 years in AI research, watching its progression from basic reasoning to solving complex scientific problems. Abilities that seemed uniquely human, such as handling ambiguity or using abstract analogies, are now routine for machines. There may be delays along the way, but we should assume there will be continued progress in AI across the board.

These artificial minds won’t just aid humans – they’ll quietly take over in countless small ways, initially because they’re cheaper, eventually because they’re genuinely better than even our best performers. Once they’re reliable enough, they’ll become the only responsible choice for almost all important tasks, ranging from legal rulings and financial planning to healthcare decisions.

It’s easiest to imagine what this future may look like in the context of employment: you’ll hear of friends losing jobs and having trouble finding new work. Companies will freeze hiring in anticipation of next year’s better AI workers. More and more of your own job will consist of accepting suggestions from reliable, charming and eager-to-please AI assistants. You’ll be free to think about the bigger picture, but will find yourself conversing about it with your ultra-knowledgeable AI assistant. This assistant will fill in the blanks in your plans, provide relevant figures and precedent, and suggest improvements. Eventually, you’ll simply ask it: “What do you think I should do next?” Whether or not you lose your job, it’ll be clear that your input is optional.

And it won’t be any different outside the world of work. It was a surprise even to some AI researchers that the first models capable of general reasoning, the precursors to ChatGPT and Claude, could also be tactful, patient, nuanced and gracious. But it’s now clear that social skills can be learned by machines like any other skill. People already have AI romantic companions, and AI doctors are consistently rated better on bedside manner than human doctors.

What will life look like when each of us has access to an endless supply of personalised affection, guidance and support? Your family and friends will be even more glued to their screens than usual. When they do talk to you, they’ll tell you about funny and impressive things their online companions have said.

Maybe you’ll be put off by others’ preference for their new companions – in that case you might end up asking your daily AI assistant for advice. This reliable counsellor will tactfully talk you through any problems you face, and help you practise having difficult conversations with family members. After these relatively tiring interactions, everyone involved might unwind by talking to their respective AI confidantes. Perhaps we’ll agree that something has been lost in this shift to virtual companions, even as we start to find raw human contact ever more grating and tedious in comparison.

So far, so dystopian. But couldn’t we simply choose not to use AI this way, preferring human advisers and human-made goods and services? The problem is, it might be hard even to notice AI replacement in many domains – and the parts that we do notice will mostly seem like major improvements. Even today, AI-generated content is increasingly indistinguishable from human-created work. It will be hard to justify spending twice as much for a human therapist, lawyer or teacher who’s only half as good. Organisations that choose slow, expensive humans will be outcompeted by those that choose fast, cheap, reliable AI.

Can’t we rely on governments to address these issues as they arise? Unfortunately, they too will have the same incentives to lean on AI. Politicians and civil servants will also ask their virtual assistants: “What should I do?”, and will find involving humans in decision-making to be a recipe for delay, misunderstanding and bickering.

Political theorists sometimes talk about the “resource curse”, where countries with abundant natural resources end up more autocratic and corrupt – Saudi Arabia and the Democratic Republic of the Congo are good examples. The idea is that valuable resources make the state less dependent on its citizens. This, in turn, makes it tempting (and easy) for the state to sideline citizens altogether. The same could happen with the effectively limitless “natural resource” of AI. Why bother investing in education and healthcare when human capital provides worse returns?

Once AI can replace everything that citizens do, there won’t be much pressure for governments to take care of their populations. The brutal truth is that democratic rights arose partly due to economic and military necessity, and to ensure stability. But those won’t count for much when governments are funded by taxes on AIs instead of citizens, and when they too start replacing human employees with AIs, all in the name of quality and efficiency. Even last resorts such as labour strikes or civil unrest will gradually become ineffective against fleets of autonomous police drones and automated surveillance.

The most disturbing possibility is that this might all seem perfectly reasonable to us. The same AI companions that hundreds of thousands of people are already falling for in their current primitive state will be making ultra-persuasive, charming, sophisticated and funny arguments for why our diminishing relevance is actually progress. AI rights will be presented as the next big civil rights cause. The “humanity first” camp will be painted as being on the wrong side of history.

Eventually, with no one having planned or chosen it, we might all find ourselves struggling to hold on to money, influence, even relevance. This new world could be more friendly and humane in many ways, while it lasts: AIs would handle annoying tasks and provide radically better goods and services such as medicine and entertainment. But humans would be a drag on growth, and if our democratic rights started to slip, we’d be powerless to protect them.

Surely the developers of these technologies have a better plan? Alarmingly, the answer is no. Both Dario Amodei, CEO of Anthropic, and Sam Altman, CEO of OpenAI, agree we’ll need to completely reorganise our economic system once human labour isn’t competitive. But no one has a clear idea of what that would look like. Those who do acknowledge the possibility of radical change are mostly working on more immediate threats from misuse or secret collusion by AIs. And while some economists such as the Nobel laureate Joseph Stiglitz have sounded the alarm that AI could drive human wages to zero, many seem unwilling to consider that it could ever be anything other than a complement to human labour.


What can we do to avoid our gradual disempowerment? The first step is to talk about it. Journalists, academics and other thinkers have been strangely silent on this massive topic. I personally find it hard to think clearly about. It sounds weak and humiliating to say: “I’m scared of the future because I won’t be able to compete.” It sounds insulting to say: “You should be worried because you’ll be irrelevant.” And it sounds defeatist to say: “Your children may inherit a world that doesn’t have a place for them.” It’s understandable that people short-circuit to dismissals like “Surely I’ll always have a special advantage?” or “Who am I to stand in the way of progress?”

One obvious idea is to not build general-purpose AI at all. While slowing down its development is probably feasible, trying to stop it globally for long may require near-totalitarian monitoring and control, or worldwide coordination to dismantle large parts of the computer chip manufacturing industry. A major danger with this path is that governments may ban private AI development but still develop it for military and policing purposes, delaying our obsolescence but disempowering us long beforehand.

If we can’t stop AI development, there are at least four things that would still help. First, we should try to track AI use and influence throughout our economy and in government. We need to know where AI is displacing human economic activity, and especially if it starts being used at scale for things such as lobbying and propaganda. Anthropic’s recent Economic Index is a first attempt at this, but there’s a lot more to be done.

Second, we’ll need at least some oversight and regulation of frontier AI labs and deployments to stop the technology from accruing too much influence while we’re still figuring out what’s going on. Currently, we rely on voluntary efforts, and have no ability to coordinate to stop autonomous AI from commanding substantial resources or gathering power. If we start seeing signs of a crisis, we need to be able to step in and slow things down, especially in cases where individuals and groups benefit from things that harm society overall.

Third, we can use AI to strengthen people’s ability to organise and advocate for themselves. AI-supported forecasting, oversight, planning and negotiation offer the possibility of designing and implementing more trustworthy institutions, if we can build them while we still have influence. For instance, conditional prediction markets and AI-supported forecasts could make it clearer where the world is heading under different policies, helping settle questions such as “If this policy is made law, how will the average human wage change three years from now?” Experimentation with AI-supported democratic mechanisms will let us prototype more responsive governance models that will be needed for a faster-changing world.
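To make the prediction-market idea concrete, here is a minimal sketch of a pair of conditional markets priced with a logarithmic market scoring rule, a standard mechanism from the forecasting literature. The question, the trades and the liquidity setting are hypothetical illustrations, not a description of any real platform.

```python
# Minimal sketch: conditional prediction markets priced with a
# logarithmic market scoring rule (LMSR). All names and numbers
# here are hypothetical illustrations.
import math

class LMSRMarket:
    """Binary market, e.g. 'average wage falls within three years'."""

    def __init__(self, liquidity: float = 100.0):
        self.b = liquidity       # higher b: prices move less per trade
        self.q = [0.0, 0.0]      # net shares sold for [YES, NO]

    def _cost(self) -> float:
        # LMSR cost function C(q) = b * ln(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in self.q))

    def price(self, outcome: int) -> float:
        """Current implied probability of the given outcome."""
        exps = [math.exp(qi / self.b) for qi in self.q]
        return exps[outcome] / sum(exps)

    def buy(self, outcome: int, shares: float) -> float:
        """Buy shares of an outcome; returns the trader's cost."""
        before = self._cost()
        self.q[outcome] += shares
        return self._cost() - before

# One market per branch of the conditional question. Trades in the
# branch that never happens are refunded, which is what lets each
# market's price be read as a conditional probability.
if_enacted = LMSRMarket()
if_not_enacted = LMSRMarket()

if_enacted.buy(0, 40)        # hypothetical trader: wages fall if it passes
if_not_enacted.buy(1, 25)    # hypothetical trader: wages hold up otherwise

effect = if_enacted.price(0) - if_not_enacted.price(0)
print(f"P(wage falls | enacted)     = {if_enacted.price(0):.2f}")
print(f"P(wage falls | not enacted) = {if_not_enacted.price(0):.2f}")
print(f"Implied policy effect       = {effect:+.2f}")
```

The gap between the two prices is the market’s implied estimate of the policy’s effect: in this toy run it comes out at about +0.16, meaning traders collectively judge wage decline more likely if the policy passes.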

Finally, if we want to build powerful AI without being marginalised, we have the gargantuan task of learning how to steer our civilisation, instead of just letting political systems evolve according to whatever pressures they happen to face. This bumbling-through has been sort of OK until now, because humans were needed no matter what. Without this safeguard, we will be adrift unless we understand all the ways in which power, competition and growth operate. The technical field of “AI alignment” – concerned with making sure machines share our goals – needs to broaden its scope to include governments, institutions and society itself. This nascent sphere, sometimes called “ecosystem alignment”, can draw on economics, history and game theory to help us understand what kinds of futures we can plausibly hope for, and how to aim for them.

The more clearly we can see where we’re heading, and the better we coordinate, the more likely we are to create a future where humans remain relevant – not as competitors to AI, but as its beneficiaries and stewards. As it stands, we’re racing to build our own replacements.

David Duvenaud is an associate professor of computer science at the University of Toronto, and co-director of the Schwartz Reisman Institute for Technology and Society. He thanks Raymond Douglas, Nora Ammann, Jan Kulveit, and David Krueger for help writing this article.

Further reading

The Coming Wave by Mustafa Suleyman and Michael Bhaskar (Vintage, £10.99)

The Last Human Job by Allison J Pugh (Princeton, £25)

The Precipice by Toby Ord (Bloomsbury, £12.99)


