
Is Stephen Hawking right: Could the rise of artificial intelligence mark humanity’s final chapter?

The Conversation

Star physicist Stephen Hawking has reiterated his concerns that the rise of powerful artificial intelligence (AI) systems could spell the end for humanity.

Speaking at the launch of the University of Cambridge’s Centre for the Future of Intelligence on 19 October, he did, however, acknowledge that AI equally has the potential to be one of the best things that could happen to us.

So are we on the cusp of creating super-intelligent machines that could put humanity at existential risk?

There are those who believe that AI will be a boon for humanity, improving health services and productivity as well as freeing us from mundane tasks.

However, the most vocal leaders in academia and industry are convinced that the danger of our own creations turning on us is real.

For example, Elon Musk, founder of Tesla Motors and SpaceX, has set up a billion-dollar non-profit company with contributions from tech titans, such as Amazon, to prevent an evil AI from bringing about the end of humanity.

Universities such as Berkeley, Oxford and Cambridge have established institutes to address the issue.

Luminaries like Bill Joy, Bill Gates and Ray Kurzweil have all raised the alarm.

Listening to this, it seems the end may indeed be nigh unless we act before it’s too late.

The role of the tech industry

Or could it be that science fiction and industry-fuelled hype have simply overcome better judgement?

The cynic might say that the AI doomsday vision has taken on religious proportions.

Of course, doomsday visions usually come with a path to salvation. Accordingly, Kurzweil claims we will be virtually immortal soon through nanobots that will digitise our memories.

And Musk recently proclaimed that it’s a near certainty that we are simulations within a computer akin to The Matrix, offering the possibility of a richer encompassing reality where our “programs” can be preserved and reconfigured for centuries.

Tech giants have cast themselves as modern gods with the power to either extinguish humanity or make us immortal through their brilliance.

This binary vision is buoyed in the tech world because it feeds egos – what conceit could be greater than believing one’s work could usher in such rapid innovation that history as we know it ends?

No longer are tech figures cast as mere business leaders, but instead as the chosen few who will determine the future of humanity and beyond.

For Judgement Day researchers, proclaiming an “existential threat” is not just a call to action; it also attracts generous funding and offers an opportunity to rub shoulders with the tech elite.

So, are smart machines more likely to kill us, save us, or simply drive us to work?

To answer this question, it helps to step back and look at what is actually happening in AI.

Underneath the hype

The basic technologies, such as those recently employed by Google’s DeepMind to defeat a human expert at the game Go, are simply refinements of technologies developed in the 1980s.

There have been no qualitative breakthroughs in approach.

Instead, performance gains are attributable to larger training sets (also known as big data) and increased processing power.

What is unchanged is that most machine systems work by maximising some kind of objective.

In a game, the objective is simply to win, which is formally defined (for example, checkmate the opponent’s king in chess).

This is one reason why games (checkers, chess, Go) are AI mainstays – it’s easy to specify the objective function.
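The point about maximising an explicit objective can be sketched in a few lines of Python. This is an illustration only: the actions and their scores below are invented, and a real game-playing system would score board positions rather than use a fixed lookup.

```python
# A toy sketch of objective maximisation: the "agent" simply picks
# whichever action scores highest under an explicit objective function.
# Actions and scores are invented for illustration.

def objective(action):
    # In a real game this would evaluate a board position;
    # here a fixed lookup stands in for that evaluation.
    scores = {"advance": 3, "defend": 1, "retreat": -2}
    return scores[action]

def choose_action(actions):
    # The whole "intelligence" is one line: maximise the objective.
    return max(actions, key=objective)

print(choose_action(["advance", "defend", "retreat"]))  # -> advance
```

The hard part, as the next paragraph notes, is not the maximisation step but writing down an objective function that actually captures what we want.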

In other cases, it may be harder to define the objective and this is where AI could go wrong.

However, AI is more likely to go wrong for reasons of incompetence rather than malice.

For example, imagine that the US nuclear arsenal during the Cold War was under the control of an AI designed to thwart a sneak attack by the Soviet Union.

Through no action of the Soviet Union, a nuclear reactor meltdown occurs in the arsenal and the power grid temporarily collapses.

The AI’s sensors detect the disruption and fallout, leading the system to infer an attack is underway.

The president instructs the system in a shaky voice to stand down, but the AI takes the troubled voice as evidence the president is being coerced. Missiles released.

End of humanity.

The AI was simply following its programming, which led to a catastrophic error.

This is exactly the kind of deadly mistake that humans almost made during the Cold War.

Our destruction would be attributable to our own incompetence rather than an evil AI turning on us – no different than an auto-pilot malfunctioning on a jumbo jet and sending its unfortunate passengers to their doom.

In contrast, human pilots have purposefully killed their passengers, so perhaps we should welcome self-driving cars.

Of course, humans could design AIs to kill, but again this is people killing each other, not some self-aware machine.

Western governments have already released computer viruses, such as Stuxnet, to target critical industrial infrastructure.

Future viruses could be more clever and deadly.

However, this essentially follows the arc of history where humans use available technologies to kill one another.

There are real dangers from AI but they tend to be economic and social in nature.

Clever AI will create tremendous wealth for society, but will leave many people without jobs.

Unlike in the industrial revolution, there may be no jobs at all for some segments of society, as machines may be better at every possible job.

There will not be a flood of replacement “AI repair person” jobs to take up the slack.

So the real challenge will be how to properly assist those (most of us?) who are displaced by AI.

Another issue will be whether people continue to look after one another as machines permanently displace entire classes of labour, such as healthcare workers.

Fortunately, governments may prove more level-headed than tech celebrities if they choose to listen to nuanced advice.

A recent report by the UK’s House of Commons Science and Technology Committee on the risks of AI, for example, focuses on economic, social and ethical concerns.

The take-home message was that AI will make industry more efficient, but may also destabilise society.

If we are going to worry about the future of humanity, we should focus on the real challenges, such as climate change and weapons of mass destruction, rather than fanciful killer AI robots.

This article was originally published on The Conversation
