Just going to point out for future reference: just because something is "AI" does not mean it is a "strong AI", which is what this thread is about. There is currently no strong AI in existence, and no one is claiming to be close to making one. AI that learns has been around for decades, and it is not a threat.
Originally Posted by
protonitron
There are other projects to simulate the human brain. Here is a link to IBM's Watson Microchip, based on the human brain. Its main similarity to a human brain comes from its lack of accuracy, caused by electrons jumping between the "neurones" due to the microchip's unbelievably small size. The main aim of the project was to keep the computer tiny, and the aforementioned inaccuracy phenomenon, while in some ways a great limitation, proves that the more like a brain a circuit is, the more it starts to pick up certain characteristics. Progress is slowed by the lack of small enough synapse-like components that are able to rewire themselves (again based on the synapses of the human brain). If you are interested in finding out more, please tell me and I will root out the original article I read, which covers the development in greater detail.
I think the IBM microchip would be safer than the NTM, for reasons that are probably obvious.
Yes, like I said, neural networks are nothing new, even hardware ones. The Watson chip is set to be one of the first commercially viable ones, which is why it's interesting. Neural networks like this are not strong AI though.
I'm not sure why you think Watson or the NTM is dangerous in any way... Both are practically identical.
Originally Posted by
protonitron
You have a point; when I heard about Stephen Hawking's warning I was especially sceptical. His concern started when his new voice program began autocorrecting words he did not want to use, giving him the impression that it was disobeying or even controlling him. Although that origin is not sufficient to undermine the argument, I still feel we shouldn't just trust Stephen Hawking on matters beyond his expertise.
I feel like the thread starter helps ensure this by providing at least the minimum amount of knowledge required to participate in this thread. Therefore the rule you claim we are not properly moderating does not particularly need to be moderated. Your rhetoric is wasted on me, sorry.
It's not rhetoric; anyone with even a cursory knowledge of strong AI or computers knows that it is impossible for an isolated computer running an AI to "evolve" into some threat to humanity.
If you airgap the computer and don't hook it up to anything dangerous (e.g. nuclear missile launch control), then it can't do anything dangerous. Asking for proof of this is beyond pointless. If you want proof that even a strong AI that is unconnected is harmless, try disabling all your network connections and then replying to this post; you will quickly understand.
Originally Posted by
Redundant
I doubt you are an expert either, so yes, you do. You can ask your opposition for sources as well if you like; I was going to once they posted opposing ideas. You would also need arguments and sources for why all AI in the future would definitely be run exclusively on isolated systems.
I am not an expert, or even close to having any knowledge about computer science. I have no idea if your statement is true or not. That is why I need a source to verify it.
Based on what I know, I could speculate about an AI being used to infiltrate certain systems, where it would gain control over systems that are not isolated. Since we are talking about potential future scenarios, those things have to be considered as well. You are making the speculation that all AI will be used exclusively on completely isolated systems; please elaborate on why that is a sound prediction.
Since you are saying that it would solve the problem of an AI being dangerous, you are admitting that AIs on non-isolated systems are dangerous, I assume. Do you not think that using them only on isolated systems is a poor precaution? Technologies are not exactly known to remain under the control of a single reasonable party.
As I said, this is an incredibly complex field of science already, and if you are making predictions about the future it becomes vastly more complex.
I did not pick on you because I dislike you, but because you made an extraordinary claim. It is an interesting topic, so don't ruin it by being personal. It's annoying and fruitless.
If you don't want to speculate, as your post indicates, you should stay away from these topics. This topic is by definition about speculation. However, I don't want it to become a "hurr durr anything is possible" thread either. That is why I asked for sources for all claims.
Why would I need to prove that all AI will be run on isolated systems?! I never even remotely claimed that, and your statement shows a supreme lack of knowledge.
To avoid you having to ban yourself under rule E, I'll explain.
Firstly, there are many types of AI. You've probably interacted with thousands of learning systems or other AIs. For example, when you play a game offline, the computer players are AI. Is there any need to run them on an isolated system? No. Are they a danger? No. There is absolutely no risk that the computer enemy you are fighting in Quake will miraculously evolve into a global superpower and threaten the existence of mankind.
What we are talking about is what is called a "self-programming strong AI". It exhibits two main characteristics. Firstly, it can program itself, so it can make improvements to itself. Of course, as you might guess, in order to make improvements it needs to be at least as smart and flexible as a human, which is why being a strong AI is necessary; a strong AI is simply one with intelligence comparable to a human's. Obviously there are many AIs that are "better" than humans in narrow domains: for example, the AI in some games makes better and faster decisions and has better micro and macro (e.g. the SC2 AI Automaton 2000). In 1997 the first AI to defeat a reigning world chess champion emerged (IBM's Deep Blue), and since then it has been rare for a human to beat an AI. But these AIs lack flexibility (let alone being self-programming), so despite being much smarter than humans at their one task, they are not strong AI. Again, there is no reason to isolate your chess AI.
Now onto your strawman. I never said that all AI need to be put on isolated systems; as you now know, that would be absurd. I said that no matter how powerful an AI is, all you have to do is put it on an isolated system and it is defeated. If someone were to try to build a dystopian self-programming strong AI, all they would have to do is put it on an isolated system, and there would be absolutely no risk. That's all there is to it. It's a sound prediction because isolating systems is already common practice in infosec. There are various viruses that can use many vectors, or that are polymorphic (e.g. badBIOS), which have to be isolated. It's nothing new.
I find it hard to believe that anyone with even a small amount of knowledge of computers or AI would consider it an "extraordinary claim" that an AI which is completely isolated is completely safe.
Reframing my post as "all AI will definitely be on isolated systems in the future" and then saying "I don't want this to become an anything-is-possible thread either" is not very nice, mate. If you don't want it to become like that, then don't make that argument.
Originally Posted by
protonitron
Ok, it seems that the general consensus among y'all is that AI could and should just be kept completely isolated from other technology, but in my opinion this would kinda suck, to be honest. To keep this thread interesting, can anyone think of a relatively feasible (backed up with links and stuff) alternative (less extreme) solution?
The purpose of my argument was to show how incredibly easy it is to stop a doomsday AI.
In the future, AI will be much like it is today: limited input/output. If you have an AI to manage your mail inbox (something that already exists, but in the future just assume it is much smarter, ok?) then it only needs access to the mail inbox. This is trivial to do, and even if the AI becomes super smart and malicious, all it can do is sort mail; there is no risk of a doomsday. Take chat bots, something very common: all they can do is chat; they are given a message and can reply. All you have to do is build an interface that limits what they can do to exactly that, as in the rough sketch below. String sanitization is a very simple thing to do, so again there is no risk.
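To make that concrete, here is a minimal sketch in Python of the kind of narrow string-in/string-out interface I mean. The bot.reply() method is hypothetical and stands in for whatever the AI actually is; the point is that the bot never sees anything but a sanitized string and nothing but a sanitized string ever leaves it.

import re

MAX_REPLY_LEN = 500  # arbitrary cap, just for illustration

def sanitize(text):
    # keep only plain printable ASCII and cap the length
    text = re.sub(r"[^\x20-\x7E]", "", str(text))
    return text[:MAX_REPLY_LEN]

def chat_interface(bot, user_message):
    # the bot's only channel to the outside world: one string in, one sanitized string out
    reply = bot.reply(sanitize(user_message))  # bot.reply() is a hypothetical stand-in for the AI
    return sanitize(reply)

However smart the thing behind reply() is, the only effect it can have on the world is the text of its next reply.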
-----
Actually you know what, for the sake of argument, what exactly do you people think will happen if you create a strong self-programming AI and give it completely unrestricted access to the internet?
Honestly, I can't even think of a way to make AI dangerous. So come on, doomsdayers, make your argument instead of abstractly saying "AI is scary, it will wipe out humans".
Last edited by ImmortalPig; Feb 12, 2015 at 09:50 AM.
Reason: <24 hour edit/bump