Originally Posted by
Redundant
Thank you immportapig, that was an excellent post. Now leave out the personal attacks and you'd be a very good user in this forum.
As I admitted in my post, I have no idea about AI; that's why I asked for clarification on your stance. You did a very good job.
Hah, I ninja'd TDCadmin. He is right though, you did not provide any sources. Seeing as your post is of decent quality, that's alright, I guess. I still encourage you to find sources for your claims. You posted some very technical claims, so please find some articles that support them.
To support what?
All I did was explain some very basic concepts that everyone should know BEFORE posting in this thread. If you want to know more about something, then say so instead of vaguely referring to "your claims".
Originally Posted by
TDCadmin
Watch it, buddy.
Also, yet to see any sources about how the threat of AI is overrated. In the OP you've got an open letter signed by the leading thinkers in the field, all pushing to develop ways to ensure Friendly AI because they realise the massive risks AI poses. How will you trivialise that?
That's because there's yet to be any explanation of HOW AI could possibly be a threat.
How about you read the letter before you make claims about it. They are just saying that beneficial AI (as opposed to weaponized AI, for example) should be pursued. They say NOTHING about the "massive risks AI poses". And they certainly don't explain any of them.
I'm not sure why I am being asked to explain how AI isn't a risk, when no one is explaining how AI is a risk. Being asked to prove a negative is illogical. If you can't show that AI is a risk then there's no need for me to defend my position. I literally asked for some explanation or proof that AI could possibly be a threat, but all I get instead is you two saying "lol post some sources for you explaining what AI is".
Are you guys for real right now? In the old discussion threads you both would have been banned for shitposting like that. Let's have a discussion, post some content already. Name your specific concerns and I will respond. Ask for a citation on a specific statement and I will respond.
Originally Posted by
TDCadmin
Well, I'm addressing your claim that AI is overhyped. Source pls. I've provided many. I quoted Hawking talking about specific risks and Musk talking about the holistic risk. There's tonnes of articles and books written about the subject that detail the risks. In the OP I quoted a passage from Nick Bostrom's book Global Catastrophic Risks, and in his book he devotes a chapter to the risks of AI. You have yet to provide any sources that back up your claim that the threat of AI is overrated. You just keep saying 'It's obvious, anyone can see it's not a threat'. Hawking, Musk and Bostrom and many others don't see it the way you see it.
They are saying, as Musk said, "that AI safety is important". Why would AI safety be important if there were no risks?
"You are not permitted to insult, ridicule or demean anyone in this thread. Treat other posters with respect."
Last chance, respect the rules of this thread or get out.
~Ele
You have a lot of people saying "AI is scary" but no one saying why; this is what hype is. This whole argument from authority is kind of weak, but whatever, I'll humor you.
I'm guessing you are misrepresenting the rest of your sources as you did with the letter (which again, does not explain or even mention any risks of AI) so let's have a look:
So what does Musk say about AI? "With artificial intelligence we are summoning the demon." Wow, what an abstract claim, and again, no evidence. "Tesla Motors CEO Elon Musk worries about a "Terminator" scenario arising from research into artificial intelligence."? What does that even mean? He's scared of weaponized strong AI turning rogue en masse?
The linked Hawking article doesn't actually go into what Hawking said; it just talks about scary AI in a general sense. "In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets." Yes, that's dangerous, sure. But again, it would have to be a self-programming strong AI; a normal AI that can discriminate targets is not a problem. And as usual, this isn't a problem of AI, it's a problem of all electronic weapons.
Can't comment on GCR, but I can comment on Superintelligence: Paths, Dangers, Strategies. Bostrom's book discusses the possibility of a single superintelligence attempting to take over the world. To call this "a risk of AI" is very misrepresentative. So what does he say? Well, firstly he agrees that my simple precaution is sufficient: "If the AI has (perhaps for safety reasons) been confined to an isolated computer, it may use its social manipulation superpower to persuade the gatekeepers to let it gain access to an Internet port. Alternatively, the AI might use its hacking superpower to escape its confinement." Now, the second is obviously not possible, since anyone building a superintelligence is going to secure it with strong enough encryption that it can't simply hack its way out. Time-bounded protection (even 1024-bit is sufficient) is plenty; see the rough numbers below. As for the first scenario? That's the same risk as "what if the guards at the nuclear missile facility just let someone in?!" Social engineering is a very well-studied topic, so let's not even bother going into it; suffice to say it's simple enough to prevent by not giving anyone the power to do it. It'd make a hell of a movie, but it's just not realistic.
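For scale, here's a rough back-of-the-envelope sketch (a few lines of Python) of what that "time-bounded protection" means. The attacker guess rate and the flat 1024-bit keyspace are my own illustrative assumptions, not figures from Bostrom:

# Rough sketch: expected time to brute-force a flat 1024-bit keyspace,
# granting the attacker an absurdly generous 10^18 guesses per second.
KEY_BITS = 1024
GUESSES_PER_SECOND = 10 ** 18         # assumed attacker speed (illustrative)
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

keyspace = 2 ** KEY_BITS
expected_tries = keyspace // 2        # on average half the keyspace is searched
years = expected_tries // (GUESSES_PER_SECOND * SECONDS_PER_YEAR)

print(f"Expected search time: about 10^{len(str(years)) - 1} years")

Even with those generous numbers the expected search time comes out around 10^282 years, which is the sense in which a time bound on the protection is not a practical weakness.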
Am I seriously to think that Musk/Hawking/Bostrom are actually afraid of Skynet/WarGames scenarios? Again this discussion has turned into "anything is possible, you don't know the future, look at these works of fiction, they are all possible!" I am genuinely asking, because I want to be sure that what you are asserting is that Skynet/WarGames are the future and that I am expected to disprove them using citations.
If you are unwilling to make an argument yourself or contribute to the discussion, and are only going to say "no but look at what person X says" without making any kind of effort to discuss what is being posted, then please don't post. It may not be against the rules of the thread, but it is against the rules of the subforum, and you are required to abide by them.
Removed complaint about moderator because that belongs in the complaint board for complaints.
Last edited by Zelda; Feb 12, 2015 at 04:25 PM.
Reason: Dang it pig.