This post made me consider bringing back the reputation system.
The situation described in the OP would only be possible if said AI is either untested (and shouldn't have been allowed out of the testing lab) or isn't actually an AI at all, just a bunch of if-else cases for making coffee.
Real self-learning AI is scary (primarily because it would make humanity face something it has never seen before), but its creation is most likely inevitable - even if that won't happen in the near future.
Obviously it can be a huge threat, especially if tech companies continue working with the military. A smart coffee maker is one thing; a smart machine that shoots rockets is another.
Then again, I'd question how close the creation of an AI would be to the creation of an AI capable of self-consciousness, the latter obviously being the main threat. For example, pretty much any living species is capable of learning in some form, but only a few can (somewhat) qualify as self-conscious.
Getting back to the example from the previous paragraph, a rocket-shooting AI is obviously bad, but an AI that decides on its own that it wants to shoot rockets, while also building more rockets and replicating itself, is much worse.
If you're actually a programmer, I'm curious how you approach your own work, because your logic here is very flawed.
If your supposed AI can't avoid an obstacle, it's clearly not going to be in a position where it has any chance of harming anyone. Just as you don't take a new car straight out of the factory without any brake fluid and expect it to stop, you don't take a bunch of AI code and drop it straight into a working environment. There are insane amounts of planning and testing involved in developing any type of AI, and that will be even more rigorous for one that has any potential to harm anyone.
If you had presented a problem that is at least slightly debatable, that would be an interesting topic, but preventing a robot from stepping on a child really is just the basics of AI programming, and I'm surprised you can't wrap your head around it considering your professional claims.
my logic for that situation is based on this video: https://www.youtube.com/watch?v=3TYT1QfdfsM
as for the second part, that was only an example; i created this thread to discuss the general threat, not that single situation.
Ok I managed to watch that video for 10 minutes and was about to kill myself.
"Robot goes to fetch you a coffee but then there's a kid in its way so you rush to hit the killswitch but then the robot fights you because that'd interrupt him from fetching you coffee" - sorry, what? We already have working car autopilots that don't (usually) run over people in 2018 but future AI-having robots won't have simplest infrared cameras to see obstacles in front of them and would require a killswitch not to destroy everything on their path? Also it'd fight you, the owner? That's ridiculous.
It's generally painful to watch; everything he covered in that 10-minute bit was resolved years ago. It would be somewhat okay if the basics of AI development were under discussion here - but obviously not its future and/or possible threats.