At last, some good news, then.
Jeff Nesbit, former director of legislative and public affairs at the National Science Foundation and author of more than two dozen books, has examined the latest thinking on AI capabilities.
He concludes that the human race could cease to exist by 2050 – or that we could become immortal.
Nesbit explains the concept known as ASI, or ‘artificial super-intelligence’, which posits that AI will evolve into a supercomputer that learns so quickly it surpasses human intelligence and solves all problems.
Competing theories
On the one hand, you have hopefuls like Ray Kurzweil imploring us not to fear artificial intelligence, pointing instead to older and more pressing threats such as bioterrorism and nuclear war.
In fact, Kurzweil argues that AI enhances human mental capabilities, and he points out that global rates of violence, war and murder have declined dramatically.
He also argues that AI has helped to find cures for diseases, develop renewable energy sources and care for the disabled, among other benefits to society.
Kurzweil puts the date of ‘human-level AI’ at 2029, which gives us just enough time to “devise ethical standards”.
Then there’s Rollo Carpenter, creator of the Cleverbot software, which has gained high scores in Turing tests – that is to say, many people have mistaken it for a human when communicating with it. He says:
I believe we will remain in charge of the technology for a decently long time, and the potential of it to solve many of the world’s problems will be realised.
He adds that the ability to develop the algorithms necessary for full artificial intelligence is still a few decades away, and explains:
We cannot quite know what will happen if a machine exceeds our own intelligence, so we can’t know if we’ll be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it.
Billionaire entrepreneur Elon Musk, a pioneer of digital money and electric cars, has told students that we are “summoning the demon” with AI.
Speaking at the AeroAstro Centennial Symposium at the Massachusetts Institute of Technology (MIT), Musk made the following remarks:
If I had to guess at what our biggest existential threat is, it’s probably that [artificial intelligence]. So we need to be very careful.
With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like – yeah, he’s sure he can control the demon. Doesn’t work out.
In a 2015 open letter, Musk and Professor Stephen Hawking warned that AI could enable the development of autonomous weapons, which would revolutionise warfare – and not for the better.
Autonomous weapons are ideal for tasks such as assassinations, destabilising nations, subduing populations and selectively killing a particular ethnic group.
Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.
Hawking, who is able to communicate via a technology that uses a basic form of AI, also had this cheery proclamation for the BBC:
The development of full artificial intelligence could spell the end of the human race.
He, too, considers the potential dangers of ASI, explaining that AI could take off on its own and redesign itself at an ever-increasing rate.
Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.
But all of them agree on one thing – sometime in the next 30 years or so, a supercomputer will replicate the human brain and evolve into super-intelligence, or ASI.
Tim Urban, author of the ‘Wait But Why’ blog, outlines the future:
While most scientists I’ve come across acknowledge that ASI would have the ability to send humans to extinction, many also believe that, used beneficially, ASI’s abilities could be used to bring individual humans, and the species as a whole, to…species immortality.