- Nick Bostrom is a professor of philosophy at the University of Oxford and an expert on the risks posed by artificial intelligence.
- Bostrom, whose work has been endorsed by Elon Musk and Bill Gates, told Business Insider that AI is a greater threat to human existence than climate change.
- He says it’s “a lot to expect” big tech companies, such as Google and Facebook, to devise their own ethical frameworks for AI.
- Bostrom also thinks there should be more people with basic knowledge of AI in governments.
- Visit Business Insider’s homepage for more stories.
One of the world’s leading thinkers on artificial intelligence says the technology is a bigger menace to human civilization than climate change.
Nick Bostrom, an Oxford philosophy professor, told Business Insider: “AI is a bigger threat to human existence than climate change. Climate change is not going to be the biggest change we see this century.”
He adds: “Climate change is unlikely to bring about a good outcome, but if AI’s development turns out badly, it’ll be far worse than climate change. AI could turn out really well for humanity, but it could also turn out really badly.”
Bostrom is a preeminent thinker in his field having published books including “Superintelligence: Paths, Dangers, Strategies.” He is also unusual in appealing to both sides of the debate: His work has been endorsed by both Elon Musk, who holds apocalyptic views on AI, and Bill Gates, a cautiously upbeat advocate for the technology.
It’s why he is careful to qualify his comparison to climate change, a force that could damage planet Earth irrevocably unless humans make radical changes in the next decade.
“The reason that AI is often depicted as evil robots in the media is because it makes for a good story. Robots are more visually compelling than a chip inside a black box; you can see and feel them in a way you can’t with a chip,” he says. “But malevolence isn’t the problem. It’s the possibility that AIs might be indifferent to human goals.”
AI might be indifferent to human goals — and that’s dangerous
All intelligent entities, whether human or artificial, have goals — even if they are pre-programmed. Very simple AIs, such as thermometers, have the goal of successfully measuring temperature, for example.
Bostrom’s fear is that if AIs become competent enough in their pursuit of their goals, they may inadvertently harm humans, even if their goals sound harmless. In a 2003 paper, Bostrom gave the example of an AI whose only goal is to maximise paperclip production.
If this AI was capable of reprogramming itself to improve its own intelligence – something which some Google-developed AIs are already capable of – it may end up becoming so smart that it innovates new ways to maximize the number of paperclips it produces.
At some point, Bostrom writes, it might transform “first all of Earth and then increasing portions of space into paperclip manufacturing facilities.”
If turning the world into a paperclip machine sounds idiotic, that’s simply because it doesn’t align with human goals. The paperclip printer is simply following its own aims to a logical conclusion.
In becoming so astoundingly good at making paperclips, it could end up inadvertently harming humans. It would not have set out to be malevolent, but simply indifferent to any goals beyond its own. The example might sound far-fetched, but Bostrom says AI’s indifference to human endeavour could already be a real threat.
“The biggest ways AI is likely to have a negative impact is in information systems roles, such as selecting news stories that confirm people’s prejudices or acting as surveillance systems,” he says.
Problems are already emerging with the latter, prompting questions about whether firms like Amazon and Microsoft should be selling facial recognition technology to public agencies.
The American Civil Liberties Union (ACLU) revealed in May last year that Amazon had sold Rekognition to government and police agencies for the purpose of public surveillance and to identify “people of interest.” The ACLU also found last year that Rekognition incorrectly identified 28 members of Congress as people who had previously been arrested.
For Bostrom, the big challenge is getting AI under control and programming it to align with human goals. “The first set of challenges will be technical, such as finding a way of developing AI in a controlled way,” he says. “Assuming we solve that, our next goals are societal challenges about creating a world order that serves the common good.”
Big tech is struggling to figure out how to make AI safe
So does Bostrom think the big tech companies are trying hard enough to develop AI in a controlled way? Google, Amazon, Facebook, and Apple are at the cutting-edge of AI development, and yet some academics think they are not developing AI compatible with human goals.
“People I’ve spoken to [at the big tech companies] do care about making AI safe and compatible with human goals,” he says. “I also get some sense that they’re not able to figure out how to go about doing this. It’s a lot to expect each tech company to come up with their own ethical framework for controlling AI.”