Originally Posted by Ele View Post
As we can see from the NTM, AI possesses the ability to complete "tasks beyond those it has been trained to do." It's self-programmable, so it's able to do things that we haven't told it to do. A constraining source code, like Musk said, is like summoning a demon. Because AI can do things that we don't intend, we have to be damn sure that our pentagram is airtight. He pumped $10m into researching Friendly AI because he's aware of the very real possibility that shit can go south.

It is not doing things beyond what its source code tells it to; the AI is programmed to be able to reprogram itself, with the purpose of becoming better at its original task. It is not entirely self-programmed either, since it only adds code; it doesn't create its entire codebase from scratch.

What can the AI do that is not intended? If its updating class is coded properly, its updates will be purely beneficial to the task it was designed for, and other aspects would not be affected.
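To make that concrete, here is a minimal sketch (hypothetical, in Python) of an agent whose "self-programming" is confined to its update routine: the only thing the update step can touch is a single task parameter, so however long it runs, the objective and the rest of the code are never rewritten.

    # Hypothetical sketch: a self-tuning agent whose update step may only
    # adjust one task parameter, never its objective or its own update logic.
    import random


    class SelfTuningAgent:
        def __init__(self):
            self.threshold = 0.5  # the only value the update step may change

        def act(self, reading: float) -> bool:
            """Task decision: fire if the sensor reading exceeds the threshold."""
            return reading > self.threshold

        def update(self, reading: float, should_have_fired: bool) -> None:
            """Nudge the threshold toward whatever would have given the right answer."""
            if self.act(reading) != should_have_fired:
                step = 0.05 if should_have_fired else -0.05
                self.threshold = min(1.0, max(0.0, self.threshold - step))


    # Run it for as many "generations" as you like; it can only ever get
    # better or worse at its one task.
    agent = SelfTuningAgent()
    for _ in range(1000):
        reading = random.random()
        agent.update(reading, should_have_fired=(reading > 0.3))
    print(agent.threshold)  # ends up hovering around 0.3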
Originally Posted by Redundant View Post
For instance I encourage ImmortalPig to back up his claim that AI can easily be controlled. The opposition provided a source by Stephen Hawking who believes that AI has the potential to be very dangerous. I doubt that your simple analysis of the issue is correct when someone like Hawking believes otherwise.
Once you back it up perhaps a healthy discussion can flourish and we can all learn something.
Just making statements will not work in a thread like this.

I have to back up the claim that an AI on a completely isolated system is not a threat whilst Hawking can make whatever farfetched doomsday claim that he wants? May I remind you that Hawking is a physicist, not a computer scientist, let alone an AI expert.

Don't make this another "hurr durr anything is possible" discussion mate...


EDIT: Actually maybe we should start enforcing the rule that you need to at least know about the topic at hand to be allowed to post...
Last edited by ImmortalPig; Feb 11, 2015 at 08:18 PM.
Originally Posted by ImmortalPig View Post
The danger of self-programming strong AI is overhyped. The solutions exist already and are used for sensitive equipment - air gap it and control access. It doesn't matter if you let your self-programming strong AI run for a billion generations if all it can access is what you need it to access. There we go, now "our biggest existential threat" has been conquered.

That said, the threat of AI going rogue is also overhyped. For example in the case of Google's neural network AI, even if it became a strong AI, it cannot achieve infinite intelligence because it is bounded by the neurons, hardware, and it probably isn't allowed access to run rampant on the system anyway.

The only situation where there's a real threat of an AI going rogue is when it's given access to the internet and its own host machine, allowed the privilege to modify its own source, and allowed to function without any monitoring.


The NTM is only interesting because of the hybrid approach. Honestly, it has been done before and is, again, kind of overhyped. The point of an NN is for the whole thing to be both memory and processing within one architecture (as in a brain), so separating the memory out into conventional storage is somewhat of a hack.
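For anyone wondering what that "separated memory" actually looks like, here is a rough sketch (NumPy, heavily simplified from the NTM's content-based addressing) of a read operation: the controller emits a key vector, each memory row is scored by cosine similarity to the key, and the value read back is just a softmax-weighted blend of the rows. It is ordinary array storage plus an attention mechanism bolted onto the network.

    # Heavily simplified sketch of an NTM-style content-addressed memory read.
    import numpy as np

    def cosine_similarity(key, memory):
        # memory: (N, M) array of N rows; key: (M,) query vector
        dots = memory @ key
        norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
        return dots / norms

    def read(memory, key, beta=5.0):
        # Softmax attention over rows, sharpened by beta, then a weighted blend.
        scores = beta * cosine_similarity(key, memory)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ memory

    memory = np.random.randn(128, 20)             # 128 slots of 20 numbers each
    key = memory[42] + 0.1 * np.random.randn(20)  # a noisy query for slot 42
    print(read(memory, key))                      # roughly recovers slot 42's contents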

There are other projects to simulate the human brain. Here is a link to IBM's Watson microchip, which is modelled on the human brain. Its main similarity to a human brain comes from its lack of accuracy, caused by electrons jumping between the "neurones" due to the microchip's unbelievably small size. The main aim of the project was to keep the computer tiny, and the aforementioned inaccuracy, while in some ways a great limitation, suggests that the more like a brain a circuit is, the more it starts to pick up brain-like characteristics. Progress is slowed by the lack of sufficiently small synapse-like components that are able to rewire themselves (again, based on the synapses of the human brain). If you are interested in finding out more, please tell me and I will root out the original article I read, which covers the development in greater detail.

I think the IBM microchip would be safer than the NTM, for reasons that are probably fairly apparent.
-----
Originally Posted by ImmortalPig View Post
I have to back up the claim that an AI on a completely isolated system is not a threat whilst Hawking can make whatever farfetched doomsday claim that he wants? May I remind you that Hawking is a physicist, not a computer scientist, let alone an AI expert.

You have a point; when I heard about Stephen Hawking's warning I was especially sceptical. His concern started when his new voice program began autocorrecting words he did not want to use, giving him the impression that it was disobeying or even controlling him. Although the origin of a concern is not sufficient to undermine the argument, I still feel like we shouldn't just trust Stephen Hawking on matters beyond his expertise.

Originally Posted by ImmortalPig View Post
Actually maybe we should start enforcing the rule that you need to at least know about the topic at hand to be allowed to post...

I feel like the thread starter helps ensure this by providing at least the minimum amount of knowledge required to participate in this thread. Therefore the rule you claim we are not properly moderating does not particularly need to be moderated. Your rhetoric is wasted on me, sorry.



I hate to drift off topic, but if we are thinking about the most likely way for humanity to stupidly wipe itself out then we certainly don't need AI to do it. I feel like humanity nuking itself to oblivion is more likely than someone building a super-advanced machine which will decide to do it for us.
Last edited by Zelda; Feb 11, 2015 at 08:55 PM. Reason: <24 hour edit/bump
Originally Posted by ImmortalPig View Post
I have to back up the claim that an AI on a completely isolated system is not a threat whilst Hawking can make whatever farfetched doomsday claim that he wants?

I doubt you are an expert either, so yes, you do. You can ask your opposition for sources as well if you like; I was going to once they posted opposing ideas. You would also need to provide arguments and sources for why all AI in the future would definitely be run on isolated systems exclusively.

I am not an expert, or even close to having any knowledge about computer science. I have no idea if your statement is true or not. That is why I need a source to verify it.
Based on what I know, I could speculate about an AI being used to infiltrate non-isolated systems and gain control over them. Since we are talking about potential future scenarios, those things have to be considered as well. You are making the speculation that all AI will be used exclusively on completely isolated systems; please elaborate on why that is a sound prediction.
Since you are saying that isolation would solve the problem of an AI being dangerous, I assume you are admitting that AIs on non-isolated systems are dangerous. Do you not think that relying on only ever using them on isolated systems is a poor precaution? Technologies are not exactly known for remaining under the control of a single reasonable party.


As I said, this is an incredibly complex field of science already and if you are making predictions about the future it becomes vastly more complex.
I did not pick on you because I dislike you but because you made an extraordinary claim. It is an interesting topic so don't ruin it by being personal. It's annoying and fruitless.
If you don't want to speculate, as your post indicates, you should stay away from these topics. This topic is by definition about speculation. However, I don't want it to become a "hurr durr anything is possible" thread either. That is why I asked for sources for all claims.
Last edited by Redundant; Feb 11, 2015 at 10:50 PM.
I'll say I actually agree with ImmortalPig on this one. Given the proper precautions, an isolated AI won't ever surpass its hardware limitations. It won't suddenly start growing an ethernet cable and reaching for the nearest modem inch by inch. It also won't start 3D printing an army of terminators or whatever without being hooked up to a 3D printer (and the internet, to get a general idea of what it's supposed to print to start murdering people).
Just keep these entities off the Internet and all is fine.

To be honest, if you write an AI to open a door, it won't start making its own objectives afterwards. Asking questions is a human trait, no other animal does that, so it'd be pretty hard to achieve in a machine.

I'll write up a more detailed post on this topic when I find the time to do so.
Last edited by ynvaser; Feb 11, 2015 at 11:27 PM.
Originally Posted by ynvaser View Post
I'll say I actually agree with ImmortalPig on this one. Given the proper precautions, an isolated AI won't ever surpass its hardware limitations. It won't suddenly start growing an ethernet cable and reaching for the nearest modem inch by inch. It also won't start 3D printing an army of terminators or whatever without being hooked up to a 3D printer (and the internet, to get a general idea of what it's supposed to print to start murdering people).
Just keep these entities off the Internet and all is fine.

Nobody said it would 3D print an army (yes, I know it was just hyperbole). And we aren't necessarily talking about a singular, laboratory-constructed AI either, and I have my doubts as to whether Pig was. One AI is a lot easier to contain. Anyway, the fact that we are creating an organism superior to us is probably risky regardless of how careful we are (I speculate on this matter because nobody can know what this organism will think like if/when it is successfully created). We have had the firm upper hand over the world (for creatures our size) for thousands of years, and changing that naturally comes with a certain risk.

However, I have another suggestion based on the project I linked in a previous post. If the machine is not as accurate or reliable as a conventional computer, then it would perhaps be less of a threat. I considered talking about limiting its short-term memory, but I have little understanding of such things, neurologically or technologically.

Originally Posted by ynvaser View Post
To be honest, if you write an AI to open a door it won't start making it's own objectives afterwards. Asking questions is a human trait, no other animal does that, so it'd be pretty hard to achieve that in a machine.

So are you arguing that AI of the kind we imagine is impossible? The reason so many years of research and millions of pounds have gone into trying to make computers think like humans is that it is hard to do. If the machine doesn't stop acting like a machine, the project is a failure, so saying that the future of AI is the same as a simple door-opening circuit doesn't really add up to me. Perhaps I have misunderstood you.
I also must agree with Pig and ynvaser. If this hypothetical AI is completely cut off from a main power grid, 3D printers, modem lines etc, it will literally be unable to affect the outside world. Instead, this (potentially) very smart AI would have to convince anyone near it to hook it up to one of the above listed things before it could actually do anything. It may also be impossible for it to become smarter than a certain threshold due to hardware limitations, though this is baseless speculation. Anything that the AI is unable to do itself, we would have to do for it. Unfortunately for us, it seems that convincing someone to help the AI out wouldn't be too hard for it.
Ok, it seems that the general consensus among y'all is that AI could and should just be kept completely isolated from other technology, but in my opinion this would kinda suck to be honest. To keep this thread interesting, can anyone think of a relatively feasible (backed up with links and stuff) alternate (less extreme) solution?
I can't wait till we have AI smart enough to evolve on its own. Leave it isolated for a while with language recognition and speech capabilities and see what it comes up with.
Just going to point out for future reference: just because something is "AI" does not mean it is a "strong AI", which is what this thread is about. There is currently no strong AI in existence, and no one is claiming to be close to making one. AI that learns has been around for decades and is not a threat.
Originally Posted by protonitron View Post
There are other projects to simulate the human brain. Here is a link to IBM's Watson microchip, which is modelled on the human brain. Its main similarity to a human brain comes from its lack of accuracy, caused by electrons jumping between the "neurones" due to the microchip's unbelievably small size. The main aim of the project was to keep the computer tiny, and the aforementioned inaccuracy, while in some ways a great limitation, suggests that the more like a brain a circuit is, the more it starts to pick up brain-like characteristics. Progress is slowed by the lack of sufficiently small synapse-like components that are able to rewire themselves (again, based on the synapses of the human brain). If you are interested in finding out more, please tell me and I will root out the original article I read, which covers the development in greater detail.

I think the IBM microchip would be safer than the NTM, for reasons that are probably fairly apparent.

Yes, like I said, neural networks are nothing new, even hardware ones. The Watson chip is set to be one of the first commercially viable ones, which is why it's interesting. Neural networks like this are not strong AI though.

I'm not sure why you think Watson or the NTM is dangerous in any way... Both are practically identical.
Originally Posted by protonitron View Post
You have a point; when I heard about Stephen Hawking's warning I was especially sceptical. His concern started when his new voice program began autocorrecting words he did not want to use, giving him the impression that it was disobeying or even controlling him. Although the origin of a concern is not sufficient to undermine the argument, I still feel like we shouldn't just trust Stephen Hawking on matters beyond his expertise.

I feel like the thread starter helps ensure this by providing at least the minimum amount of knowledge required to participate in this thread. Therefore the rule you claim we are not properly moderating does not particularly need to be moderated. Your rhetoric is wasted on me, sorry.

It's not rhetoric; anyone with even a cursory knowledge of strong AI or computers knows that it is impossible for an isolated computer running an AI to "evolve" into some threat to humanity.

If you air-gap the computer and don't hook it up to anything dangerous (e.g. a nuclear missile launch control), then it can't do anything dangerous. Asking for proof of this is beyond pointless. If you want proof that even an unconnected strong AI is harmless, try disabling all your network connections and then replying to this post; you will quickly understand.
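To illustrate how mundane that precaution is, here is a hypothetical pre-flight check (Python with psutil, assuming a Linux-style host where the loopback interface is named "lo") that refuses to start an untrusted workload while any other network interface is up:

    # Hypothetical pre-flight check: only start the untrusted workload if the
    # machine looks air-gapped (no interface other than loopback is up).
    import psutil

    def looks_air_gapped() -> bool:
        for name, stats in psutil.net_if_stats().items():
            if name != "lo" and stats.isup:
                return False
        return True

    if looks_air_gapped():
        print("No live network interfaces; starting the isolated AI run.")
    else:
        raise SystemExit("A network interface is up; refusing to start.")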

Originally Posted by Redundant View Post
I doubt you are an expert either, so yes, you do. You can ask your opposition for sources as well if you like; I was going to once they posted opposing ideas. You would also need to provide arguments and sources for why all AI in the future would definitely be run on isolated systems exclusively.

I am not an expert, or even close to having any knowledge about computer science. I have no idea if your statement is true or not. That is why I need a source to verify it.
Based on what I know, I could speculate about an AI being used to infiltrate non-isolated systems and gain control over them. Since we are talking about potential future scenarios, those things have to be considered as well. You are making the speculation that all AI will be used exclusively on completely isolated systems; please elaborate on why that is a sound prediction.
Since you are saying that isolation would solve the problem of an AI being dangerous, I assume you are admitting that AIs on non-isolated systems are dangerous. Do you not think that relying on only ever using them on isolated systems is a poor precaution? Technologies are not exactly known for remaining under the control of a single reasonable party.


As I said, this is an incredibly complex field of science already and if you are making predictions about the future it becomes vastly more complex.
I did not pick on you because I dislike you but because you made an extraordinary claim. It is an interesting topic so don't ruin it by being personal. It's annoying and fruitless.
If you don't want to speculate, as your post indicates, you should stay away from these topics. This topic is by definition about speculation. However, I don't want it to become a "hurr durr anything is possible" thread either. That is why I asked for sources for all claims.

Why would I need to prove that all AI will be run on isolated systems?! I never even remotely claimed that, and your statement shows a supreme lack of knowledge.

To avoid you having to ban yourself under rule E, I'll explain.
Firstly, there are many types of AI. You've probably interacted with thousands of learning systems or other AIs. For example, when you play a game offline, the computer players are AI. Is there any need to run them on an isolated system? No. Are they a danger? No. There is absolutely no risk that the computer enemy you are fighting in Quake will miraculously evolve into a global superpower and threaten the existence of mankind.

What we are talking about is what is called a "self-programming strong AI". It exhibits two main characteristics. Firstly, it can program itself, so it can make improvements to its own code. Secondly, it is a strong AI, meaning its intelligence is comparable to a human's; as you might guess, in order to make meaningful improvements to itself it needs to be at least that smart and flexible. Obviously there are many AIs that are "better" than humans at a specific task; for example, the AI in some games makes better and faster decisions and has better micro and macro (e.g. the SC2 AI Automaton 2000). In 1997 the first AI to defeat a reigning world chess champion emerged, and since then it has been rare for a human to beat an AI at chess. But these AIs lack flexibility (let alone being self-programming), so despite outperforming humans at their task, they are not strong AI. Again, there is no reason to isolate your chess AI.

Now onto your strawman. I never said that all AI needs to be put on isolated systems; as you now know, that would be absurd. I said that no matter how powerful an AI is, all you have to do is put it on an isolated system and it is defeated. If someone were to try to build a dystopian self-programming strong AI, all they would have to do is put it on an isolated system, and there would be absolutely no risk. That's all there is to it. It's a sound prediction because it is already common practice in infosec to isolate systems. There are various viruses that can exploit many vectors, or are polymorphic (e.g. badBIOS), and have to be handled on isolated machines. It's nothing new.

I find it hard to believe that anyone who has even a small amount of knowledge of computers or AI would find it to be an "extraordinary claim" that any AI that is completely isolated is completely safe.

Reframing my post as "all AI will definitely be on isolated systems in the future" and then saying "I don't want this to become an anything-is-possible thread either" is not very nice, mate. If you don't want it to become like that, then don't make that argument.

Originally Posted by protonitron View Post
Ok, it seems that the general consensus among y'all is that AI could and should just be kept completely isolated from other technology, but in my opinion this would kinda suck to be honest. To keep this thread interesting, can anyone think of a relatively feasible (backed up with links and stuff) alternate (less extreme) solution?

The purpose of my argument was to show how incredibly easy it is to stop a doomsday AI.

In the future, AI will be much like it is today: limited input/output. If you have an AI to manage your mail inbox (something that already exists, but in the future just assume it is much smarter, OK?), then it only needs access to the mail inbox. This is trivial to do, and even if the AI becomes super smart and malicious, all it can do is sort mail; there is no risk of a doomsday. Take the example of chat bots, something very common: all they can do is chat, they are given a message and can reply. All you have to do is build an interface that limits what they can do to exactly that. String sanitization is a very simple thing to do, so again there is no risk.
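As a rough sketch of the kind of interface I mean (hypothetical, in Python): whatever the underlying model does internally, the only effect it can ever have on the outside world is the sanitized reply string the wrapper returns.

    # Hypothetical capability-limited wrapper: one string in, one sanitized string out.
    import re

    MAX_REPLY_LEN = 500

    def untrusted_chat_model(message: str) -> str:
        # Stand-in for the actual AI; its output is treated as untrusted text.
        return "You said: " + message

    def sanitize(text: str) -> str:
        text = re.sub(r"[^\x20-\x7E]", "", text)  # keep printable ASCII only
        return text[:MAX_REPLY_LEN]               # hard cap on reply length

    def reply(message: str) -> str:
        # The entire interface exposed to the outside world.
        return sanitize(untrusted_chat_model(message))

    print(reply("Hello there"))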
-----
Actually you know what, for the sake of argument, what exactly do you people think will happen if you create a strong self-programming AI and give it completely unrestricted access to the internet?

Honestly, I can't even think of a way to make AI dangerous. So come on, doomsdayers, make your argument instead of abstractly saying "AI is scary, it will wipe out humans".
Last edited by ImmortalPig; Feb 12, 2015 at 09:50 AM. Reason: <24 hour edit/bump