First, some housekeeping. This thread is part of a new initiative by the Toribash Debate Club [TDC]. The TDC aims to educate the masses about the big current events. The restrictions on topics and the high barrier to entry for posting mean that it isn’t the most active organisation. To boost activity and spread awareness of the TDC, once a week we’ll be posting a topic in the Discussion board, open to everybody for discussion.
As this is a TDC initiative, you are not permitted to insult, ridicule or demean anyone in this thread. Treat other posters with respect. There’s a different, less vicious spirit to TDC threads compared to regular Discussion threads. That said, let’s move on to the topic at hand.
-------------------------------------------------------------------------------------
The AI Challenge by Ele
We are the smartest thing on the planet. What happens when that’s not the case anymore?
Originally Posted by Nick Bostrom
The human species, Homo sapiens, is unaging but not immortal. Hominids have survived this long only because, for the last million years, there were no arsenals of hydrogen bombs, no spaceships to steer asteroids towards Earth, no biological weapons labs to produce superviruses, no recurring annual prospect of nuclear war or nanotechnological war or rogue AI. To survive any appreciable time, we need to drive down each risk to nearly zero. ‘Fairly good’ is not good enough to last another million years.
AI is an existential threat that, over the last decade, we’ve been hurrying to usher in. Out-of-control computers exponentially smarter than us, ‘rogue AI’, have the greatest scientific minds of today deeply worried.
In a piece from The Independent in May last year, Stephen Hawking tried to make this future threat of AI abundantly clear to everyone.
Originally Posted by Stephen Hawking
It's tempting to dismiss the notion of highly intelligent machines as mere science fiction. But this would be a mistake, and potentially our worst mistake in history. Artificial-intelligence (AI) research is now progressing rapidly... fuelled by unprecedented investments and building on an increasingly mature theoretical foundation.
He then lists the potential benefits before exploring the risks:
Everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone's list. Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.
One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.
How do we control something that is almost incalculably smarter than us? Often, to explain the differential in intelligence between us and AI, people will use the species metaphor: AI would be so much smarter than us that it would be like comparing the intelligence of humans to that of a different species, like an insect. Bostrom explains it in his book, 'Global Catastrophic Risks':
Originally Posted by Nick Bostrom
The AI has magic - not in the sense of incantations and potions, but in the sense that a wolf cannot understand how a gun works, or what sort of effort goes into making a gun, or the nature of that human power that lets us invent guns. The main advice the metaphor gives us is that we had better get Friendly AI right.
But how do you write constraining source code so flawless that something exponentially smarter than you can’t find loopholes in it?
Elon Musk - AI is our biggest existential threat.
Originally Posted by Elon Musk
I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.
Elon Musk is acutely aware of the dangers. He likens trying to write source code that an AI can’t break away from and go rogue to “summoning a demon” you would hope to control. Bostrom, in his book, talks about how an AI is capable of altering its own source code to make itself smarter. Because it’s smarter than a human, it can see improvements in its source code, and it makes them. Now, because of those improvements, it’s even smarter and can see more improvements. Rinse and repeat. Bostrom says that by the time you even realise the computer is doing this and type ‘What are you doing?’, it’ll already have happened.
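To make that loop concrete, here’s a toy sketch of the feedback dynamic. It is purely illustrative: the function, the numbers, and the assumption that gains scale with current capability are my own, not anything from Bostrom’s book.

```python
# A toy, purely illustrative model of the self-improvement loop described
# above. The function and numbers are made up; no real AI system is known
# to work this way.

def find_improvement(capability):
    # Assumption: a more capable system spots proportionally bigger
    # improvements in its own code.
    return 0.1 * capability

capability = 1.0  # arbitrary starting level
for generation in range(10):
    capability += find_improvement(capability)  # the system rewrites itself
    print(f"generation {generation + 1}: capability = {capability:.2f}")

# Each rewrite makes the next rewrite bigger, so capability compounds
# instead of growing at a fixed rate - that is the "rinse and repeat".
```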
Last month Musk funded an initiative, to the tune of $10m, aimed at researching ways to ensure Friendly AI.
On Thursday, the SpaceX and Tesla Motors head put his money where his mouth is with a $10 million donation to the Future of Life Institute for the creation of a grant program that will look into how to keep AI friendly...
“Here are all these leading AI researchers saying that AI safety is important,” Musk said in a statement, referring to an open letter signed by a number of leaders in the field. “I agree with them, so I’m today committing $10M to support research aimed at keeping AI beneficial for humanity.”
On the other hand, Google is developing a 'Neural Turing Machine', and it’s self-programmable and self-learning. This isn’t the ‘demon’ Musk talked about - at least not yet, anyway. It was programmed to learn by itself how to perform simple tasks.
These are extremely simple tasks for a computer to accomplish when being told to do so, but computers’ abilities to learn them on their own could mean a lot for the future of AI.
Elon Musk is not going to be happy about this.
He won’t be, indeed.
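For a rough idea of what lets a Neural Turing Machine learn tasks like this, here is a minimal sketch of content-based memory reading, the sort of differentiable lookup that architecture is built around. The function name, sizes and numbers are my own illustrative choices, not code from DeepMind’s paper.

```python
# A minimal, hypothetical sketch of content-based memory reading, the kind
# of differentiable lookup a Neural Turing Machine builds on. Names and
# sizes are illustrative only.
import numpy as np

def read_memory(memory, key, sharpness=5.0):
    """Blend memory rows together, weighted by their similarity to `key`."""
    # Cosine similarity between the key and every memory row.
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    # Softmax turns similarities into attention weights that sum to 1;
    # because everything is smooth, the lookup can be trained by gradient
    # descent rather than hand-programmed.
    weights = np.exp(sharpness * sims)
    weights /= weights.sum()
    return weights @ memory

memory = np.random.randn(8, 4)              # 8 memory slots, 4 numbers each
key = memory[3] + 0.1 * np.random.randn(4)  # a noisy query for slot 3
print(read_memory(memory, key))             # comes back close to slot 3
```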
Now, having reflected on all this information, what’s your take on the topic?