[TDC] The AI Challenge
First, some housekeeping. This thread is part of a new initiative by the Toribash Debate Club [TDC]. The TDC aims to educate the masses about the big current events. The restricted range of topics and the high barrier to entry for posting mean that it isn’t the most active organisation. To boost activity and spread awareness of the TDC, once a week we’ll be posting a topic in the Discussion board, open to everybody for discussion.

As this is a TDC initiative, you are not permitted to insult, ridicule or demean anyone in this thread. Treat other posters with respect. There’s a different, less vicious spirit to TDC threads compared to regular Discussion threads. That said, let’s move on to the topic at hand.

-------------------------------------------------------------------------------------

The AI Challenge by Ele

We are the smartest thing on the planet. What happens when that’s not the case anymore?

Originally Posted by Nick Bostrom
The human species, Homo sapiens, is unaging but not immortal. Hominids have survived this long only because, for the last million years, there were no arsenals of hydrogen bombs, no spaceships to steer asteroids towards Earth, no biological weapons labs to produce superviruses, no recurring annual prospect of nuclear war or nanotechnological war or rogue AI. To survive any appreciable time, we need to drive down each risk to nearly zero. “Fairly good” is not good enough to last another million years.

AI is an existential threat that, over the last decade, we’ve been hurrying to usher in. Out-of-control computers exponentially smarter than us, ‘rogue AI’, have the greatest scientific minds of today deeply worried.

In a piece from the Independent in May last year, Stephen Hawking tried to make this future threat of AI abundantly clear to everyone.
Originally Posted by Stephen Hawking
It's tempting to dismiss the notion of highly intelligent machines as mere science fiction. But this would be a mistake, and potentially our worst mistake in history. Artificial-intelligence (AI) research is now progressing rapidly... fuelled by unprecedented investments and building on an increasingly mature theoretical foundation.

He then lists the potential benefits before exploring the risks:
Everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone's list. Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.
One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

How do we control something that is almost incalculably smarter than us? To explain the differential in intelligence between us and AI, people often use the species metaphor: AI would be so much smarter than us that it’d be like comparing the intelligence of humans to that of a different species, like an insect. Bostrom explains it in his book, 'Global Catastrophic Risks':
Originally Posted by Nick Bostrom
The AI has magic - not in the sense of incantations and potions, but in the sense that a wolf cannot understand how a gun works, or what sort of effort goes into making a gun, or the nature of that human power that lets us invent guns. The main advice the metaphor gives us is that we had better get Friendly AI right.

But how do you write a constraining source code so flawless that something exponentially smarter than you can’t find loopholes in it?

Elon Musk - AI is our biggest existential threat.
Originally Posted by Elon Musk
I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.

Elon Musk is acutely aware of the dangers. He compares writing a source code that AI can’t break away from to “summoning a demon” you hope to control. Bostrom, in his book, describes how an AI could alter its own source code to make itself smarter. Because it’s smarter than a human, it can see improvements in its source code, and it makes them. Because of those improvements, it’s now even smarter, so it can see further improvements. Rinse and repeat. Bostrom says that by the time you even realise the computer is doing this and type ‘What are you doing?’, it’ll already have happened.
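To make the shape of that feedback loop concrete, here’s a purely illustrative toy sketch (the numbers and the find_improvement rule are made up; no real AI works this simply). The only point is that each gain compounds on the last:

```python
# Toy sketch of recursive self-improvement (illustrative only).
# "Intelligence" is just a number; the premise, purely for the sake
# of the metaphor, is that a smarter system spots proportionally
# larger improvements to itself on each pass.

def find_improvement(intelligence):
    # Hypothetical rule: gains scale with current capability.
    return intelligence * 0.1

intelligence = 1.0
for generation in range(10):
    intelligence += find_improvement(intelligence)
    print(f"generation {generation}: intelligence = {intelligence:.2f}")
```

Because each improvement feeds into the next, the growth is exponential rather than linear, which is why Bostrom argues the window for intervening is so short.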

Last month, Musk funded a $10m initiative aimed at researching ways to ensure Friendly AI.
On Thursday, the SpaceX and Tesla Motors head put his money where his mouth is with a $10 million donation to the Future of Life Institute for the creation of a grant program that will look into how to keep AI friendly...

“Here are all these leading AI researchers saying that AI safety is important,” Musk said in a statement, referring to an open letter signed by a number of leaders in the field. “I agree with them, so I’m today committing $10M to support research aimed at keeping AI beneficial for humanity.”

On the other hand, Google is developing a 'Neural Turing Machine' that is self-programmable and self-learning. This isn’t the ‘demon’ Musk talked about - at least not yet. It was programmed to learn by itself how to perform simple tasks.

These are extremely simple tasks for a computer to accomplish when being told to do so, but computers’ abilities to learn them on their own could mean a lot for the future of AI.
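For a feel of the mechanism, here is a minimal numpy sketch of the content-based memory read described in the NTM paper (Graves et al., 2014). The memory contents, key, and sharpness value are invented for illustration; a real NTM learns all of these:

```python
import numpy as np

# NTM-style content addressing: the controller emits a key, and the
# read head softly attends to the memory rows that resemble it.

def cosine_similarity(key, memory):
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key)
    return memory @ key / (norms + 1e-8)

def content_read(memory, key, sharpness=10.0):
    scores = sharpness * cosine_similarity(key, memory)
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax attention
    return weights @ memory  # blended read vector

memory = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
print(content_read(memory, key=np.array([0.9, 0.1, 0.0])))
```

Because reads and writes are soft (weighted) rather than hard lookups, the whole machine stays differentiable, which is what lets it learn tasks like copying and sorting from examples alone.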

Elon Musk is not going to be happy about this.

Indeed, he won’t be.

Now, having reflected on all this information, what’s your take on the topic?
I think that artificial intelligence is amazing. The possibilities for its uses are almost endless. The fact that they can learn and adapt makes them able to function almost perfectly. I feel like the conspiracy theories about A.I. going rampant and taking over mankind are just stupid. If we don't want that to happen, then we won't grant them access to freaking nuclear warheads. I'm in favor of the research and development of A.I. I feel like it could solve so many problems and help so many people. I'm not denying that A.I.s could 'rebel' and harm humans, but it's not like they are created with all of the knowledge in the universe.

Artificial intelligence can be used for many things. The use that interests me the most is space travel.
Long-distance space travel isn't something that we have achieved yet. It takes too long for our life spans to accommodate. With the use of artificial intelligence to carry out long-distance space travel missions, we wouldn't have to waste time and money on keeping human passengers alive and all of their life support working. The perfect example of this is "Alien Planet": a science-fiction documentary on what it would be like for an A.I.-operated space ship to carry out a long-distance research mission on an alien planet. I enjoyed watching it very much.
Last edited by Galaxy; Feb 11, 2015 at 04:41 AM.
Originally Posted by Galaxy View Post
I'm not denying that A.I.s could 'rebel' and harm humans, but it's not like they are created with all of the knowledge in the universe.

The point of AI becoming smarter is so that it's capable of doing those wonderful things that you talked about. If it's not smart enough to do them, then it defeats its own purpose. However, that very point where it exceeds human intelligence is when AI becomes that huge potential problem. We rely on engineering its source code to ensure it's friendly, but when it's smarter than the guy that wrote the code, it could find loopholes and go rogue.
Last edited by Ele; Feb 11, 2015 at 06:38 AM.
"fiction documentary" wut?

And a computer adapting and using information it gains is completely un-groundbreaking. It is easy to make a machine work out an effective solution through trial and error using modern programming, but that is the problem: programming. If a computer is just going through its program, gaining information to be processed with that program, while not making its own programs, I feel like it isn't a real breakthrough towards AI.
-----
Originally Posted by Ele View Post
The point of AI becoming smarter is so that it's capable of doing those wonderful things that you talked about. If it's not smart enough to do them, then it defeats its own purpose. However, that very point where it exceeds human intelligence is when AI becomes that huge potential problem. We rely on engineering its source code to ensure it's friendly, but when it's smarter than the guy that wrote the code, it could find loopholes and go rogue.

This is a very interesting point; we need to be sure that the idea of rebellion is completely inconceivable to it. I feel like we can make safe, superintelligent AI as long as we know what we are doing beforehand and do it properly, rather than learning the hard way.
Last edited by Zelda; Feb 11, 2015 at 05:02 AM. Reason: <24 hour edit/bump
Originally Posted by protonitron View Post
And a computer adapting and using information it gains is completely un-groundbreaking. It is easy to make a machine work out an effective solution through trial and error using modern programming, but that is the problem: programming. If a computer is just going through its program, gaining information to be processed with that program, while not making its own programs, I feel like it isn't a real breakthrough towards AI.

You're misunderstanding what the NTM does, and its impact. It creates a neural network that mimics the way the human brain utilises short-term memory. "The result is a computer that learns as it stores memories and can later retrieve them to perform logical tasks beyond those it has been trained to do." It's self-learning and self-programmable.

This article from the Technology Review explains it in a bit more depth. It also mentions the impact of the NTM on the development of AI.
The brain’s ability to recode in this way was one of the keys to artificial intelligence. He believed that until a computer could reproduce this ability, it could never match the performance of the human brain.

Last edited by Ele; Feb 11, 2015 at 05:31 AM.
Originally Posted by Ele View Post
The point of AI becoming smarter is so that it's capable of doing those wonderful things that you talked about. If it's not smart enough to do them, then it defeats its own purpose. However, that very point where it exceeds human intelligence is when AI becomes that huge potential problem. We rely on engineering its source code to ensure it's friendly, but when it's smarter than the guy that wrote the code, it could find loopholes and go rogue.

When you say "find loopholes", it makes it sound like the AI is trying to go rogue (if this isn't what you mean, then sorry). But the AI cannot try to go rogue, as it must always follow its source code. If the source code is to open a door as efficiently as possible, it will look for the best way to open a door; if its source code is to find the best way to make a rifle, it will find the best way to do so. As this is its only function, it will not try to break free and go rogue by changing its own source code.


Originally Posted by Stephen Hawking
It's tempting to dismiss the notion of highly intelligent machines as mere science fiction. But this would be a mistake, and potentially our worst mistake in history. Artificial-intelligence (AI) research is now progressing rapidly... fuelled by unprecedented investments and building on an increasingly mature theoretical foundation.

Stephen Hawking's meaning in this quote is not abundantly clear; he could have meant that it was a mistake not to look into highly intelligent machines, as they are a great opportunity rather than a great threat. At no point does the quote explicitly say that artificial intelligence is a threat. Although it is clear, if the entire article is read, that the suggestion is that there is a risk, it could still be taken in a multitude of ways: either AI is very important and beneficial but we should take some caution while handling it, or it is too dangerous and should be left alone until we understand it better. Personally, I think the former is more sensible, as, despite how much AI has already been made, none has gone rogue as of now, and if it is well coded it never will. It is difficult to see how a computer following its code could break its own code and go rogue.

Unless it is badly coded, there is no reason for an AI to break free and become a threat. For that reason, I believe we should continue to look into artificial intelligence, as there is still a lot to be gained and learned from this field.

Originally Posted by d3noth
And if you don't think that robots can't harm humans due to human stupidity, I'll link an article that I referenced in TDC: http://www.9news.com.au/world/2015/0...up-womans-hair

If a person is clever enough to make an AI cleverer than them, are they really stupid enough not to secure it so that, while following its code, the AI cannot break its code?
Last edited by SmallBowl; Feb 11, 2015 at 09:20 AM.
Originally Posted by SmallBowl View Post
the AI cannot try to go rogue as it must always follow its source code, if the source code is etc.

If a person is clever enough to make an AI cleverer than them, are they really stupid enough not to secure it so that, while following its code, the AI cannot break its code?

As we can see from the NTM, AI possesses the ability to complete "tasks beyond those it has been trained to do." It's self-programmable, so it's able to do things that we haven't told it to do. Writing a constraining source code is, as Musk said, like summoning a demon. Because AI can do things that we don't intend, we have to be damn sure that our pentagram is airtight. He pumped $10m into researching Friendly AI because he's aware of the very real possibility that shit can go south.

On the clever point, remember the species metaphor. A superintelligent AI would be aeons ahead of human intelligence. We may think the source code is airtight, but our thinking is based on our level of intelligence. Something so much smarter than us could see it differently.

Originally Posted by SmallBowl View Post
Stephen Hawking's meaning in this quote is not abundantly clear; he could have meant that it was a mistake not to look into highly intelligent machines, as they are a great opportunity rather than a great threat. At no point does the quote explicitly say that artificial intelligence is a threat. Although it is clear, if the entire article is read, that the suggestion is that there is a risk, it could still be taken in a multitude of ways.

Yes. The point he's trying to make is that "success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks."

Originally Posted by SmallBowl View Post
despite how much AI has already been made, none has gone rogue as of now, and if it is well coded it never will. It is difficult to see how a computer following its code could break its own code and go rogue

We haven't created any AI on the level that it can pose the sort of problems Hawking talked about. We can't base our complacency on the current level of AI.

Originally Posted by SmallBowl View Post
Unless it is badly coded, there is no reason for an AI to break free and become a threat. For that reason, I believe we should continue to look into artificial intelligence, as there is still a lot to be gained and learned from this field.

Yeah. As Bostrom said in his book, we're doubtlessly going to pursue AI, so we've got to make certain that we get Friendly AI right.
The danger of self-programming strong AI is overhyped. The solutions exist already and are used for sensitive equipment - air gap it and control access. It doesn't matter if you let your self-programming strong AI run for a billion generations if all it can access is what you need it to access. There we go, now "our biggest existential threat" has been conquered.
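A minimal sketch of what "control access" could look like in practice (all names and actions here are hypothetical, and a real deployment would enforce this at the OS or hardware level, not inside the same process):

```python
# Hypothetical allowlist gate: no matter what the AI computes
# internally, it can only act on the world through approved actions.

ALLOWED_ACTIONS = {"read_sensor", "write_report"}

HANDLERS = {
    "read_sensor": lambda: 42.0,
    "write_report": lambda text: print("report:", text),
}

def act(action, *args):
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"action {action!r} is not allowlisted")
    return HANDLERS[action](*args)

act("write_report", "all systems nominal")  # permitted
# act("open_network_socket")                # raises PermissionError
```

The air gap is the hardware version of the same idea: the set of reachable actions simply doesn't include anything dangerous.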

That said, the threat of AI going rogue is also overhyped. For example, in the case of Google's neural network AI, even if it became a strong AI, it cannot achieve infinite intelligence, because it is bounded by its neuron count and hardware, and it probably isn't allowed access to run rampant on the system anyway.

The only situation where there's a real threat of an AI going rogue is when it's given access to the internet and its own host machine, allowed the privilege to modify its own source, and allowed to function without any monitoring.


The NTM is only interesting because of the hybrid approach. Honestly, it has been done before and is, again, kind of overhyped. The point of an NN is for the whole thing to be memory and processing within a similar architecture (as in a brain), so separating memory out to conventional storage is somewhat of a hack.
Very interesting. I think there is another thing that is very important. If we talk about AI, we assume that the machine/computer in question has a consciousness of its own. And therefore we have to know what consciousness is. We do not know if a machine can have something like that, and we do not even know why we have a consciousness or what it is. The fact that we have learned a lot about our own brain by studying AI means that we still don't have a clue what's possible, or how intelligent a computer could be, or whether there is any intelligence in machines at all.
(hope you won't kick me from the club because of my bad English.. I give my best :P and I don't use translators)
Last edited by BBKing; Feb 11, 2015 at 05:57 PM.
Originally Posted by TDCadmin View Post
First, some housekeeping. This thread is part of a new initiative by the Toribash Debate Club [TDC]. The TDC aims to educate the masses about the big current events. The restricted range of topics and the high barrier to entry for posting mean that it isn’t the most active organisation. To boost activity and spread awareness of the TDC, once a week we’ll be posting a topic in the Discussion board, open to everybody for discussion.

As this is a TDC initiative, you are not permitted to insult, ridicule or demean anyone in this thread. Treat other posters with respect. There’s a different, less vicious spirit to TDC threads compared to regular Discussion threads. That said, let’s move on to the topic at hand.

I have no problem with you creating discussion threads, but I have a problem when you use this board to promote your organization. Use your signature for that. Also, do not create rules unless you are a moderator. You can give guidelines regarding content if you wish.
I changed my mind. Your initiative or whatever is not harmful or anything, so just go ahead.

To contribute to this thread:
AI is easily one of the most complex sciences out there. The simplifications some users have made in this thread so far are most likely either incorrect or at least not completely true.
Please cite sources for all extraordinary claims in this thread, or I will close it; the topic is highly technical and most people here, including myself, have no idea how it actually works.

For instance, I encourage ImmortalPig to back up his claim that AI can easily be controlled. The opposition provided a source by Stephen Hawking, who believes that AI has the potential to be very dangerous. I doubt that your simple analysis of the issue is correct when someone like Hawking believes otherwise.
Once you back it up, perhaps a healthy discussion can flourish and we can all learn something.
Just making statements will not work in a thread like this.
Last edited by Redundant; Feb 11, 2015 at 10:46 PM.