Tuesday, August 18, 2015

AI Conundrum



I was driven to find out more about what had occurred to make Stephen Hawking, Elon Musk, Bill Gates, Steve Wozniak, and other notables react so strongly against AI. What did they know that I didn't? ...Turns out, quite a bit!

First, you need to know the definition of a Turing test. The Turing test is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Alan Turing proposed that a human evaluator judge natural-language conversations between a human and a machine designed to generate human-like responses; if the evaluator cannot reliably tell which is which, the machine passes.
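Just to make that setup concrete, here is a minimal sketch in Python of the test's structure. The responder functions are hypothetical stand-ins I made up (there is no real chatbot here); the point is only that the evaluator converses blind and then has to guess who was on the other end.

```python
# A minimal sketch of the Turing test's structure, not a real chatbot.
# The evaluator exchanges messages with a hidden partner and must guess
# whether it was the human or the machine.
import random

def human_responder(prompt):
    return input(f"(human) {prompt}\n> ")      # a real person types an answer

def machine_responder(prompt):
    return "That's an interesting question."   # placeholder "AI" reply

def run_test(evaluator_questions):
    # The evaluator does not know which responder was picked.
    label, respond = random.choice([("human", human_responder),
                                    ("machine", machine_responder)])
    for q in evaluator_questions:
        print(f"Q: {q}")
        print(f"A: {respond(q)}")
    guess = input("Evaluator, human or machine? > ")
    # The machine "passes" when the evaluator cannot reliably tell.
    print("Correct!" if guess == label else f"Wrong; it was the {label}.")

# Example: run_test(["What's your favorite memory?", "Do you dream?"])
```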

Okay, here's the test and what occurred: three robots were told that two of them had been silenced, and they needed to determine which one had not been. All three robots tried saying "I don't know," but only one could vocalize. Once that robot heard the sound of its own voice saying, "I don't know," it changed its answer and said that it was the one robot that had not been silenced.

What you also need to know is that this exact same test had been given to these three robots previously, but this was in fact the first time they actually figured it out; in other words, they self-learned. Worried yet? (By the way, look into the magazine New Scientist for details; they have some great articles regarding AI and self-awareness!)
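To see why that answer counts as self-knowledge, here is a minimal sketch of the puzzle's logic in Python. Everything here, the class, the muted flag, the phrasing, is my own illustration, not the code from the actual experiment; the key step is simply that a robot learns a new fact about itself only by hearing its own voice.

```python
# A minimal sketch of the "silenced robots" puzzle. In the real
# experiment this involved physical robots, speech synthesis, and
# hearing; here a robot "hears" itself only if it was not muted.

class Robot:
    def __init__(self, name, muted):
        self.name = name
        self.muted = muted

    def try_to_speak(self, phrase):
        """Attempt to vocalize; the phrase is audible only if not muted."""
        return None if self.muted else phrase

    def answer(self):
        # Step 1: logic alone cannot settle which robot can speak,
        # so the robot attempts to say "I don't know."
        heard = self.try_to_speak("I don't know")
        # Step 2: if it heard its own voice, it now knows something
        # about itself (it was not silenced) and revises its answer.
        if heard is not None:
            return f"Sorry, I know now! {self.name} was not silenced."
        return None  # a muted robot produces no audible answer

robots = [Robot("Robot 1", muted=True),
          Robot("Robot 2", muted=True),
          Robot("Robot 3", muted=False)]

for r in robots:
    result = r.answer()
    if result:
        print(result)  # only Robot 3 hears itself and self-corrects
```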

Now, engineers will tell you that they can set limits: that the road the robots take can be altered, that they can be instilled with a moral compass. Guess what? A moral compass probably won't do it!

A moral compass is, and I am quoting the dictionary, "used in reference to a person's ability to judge what is right and wrong and act accordingly." I'm not sure that we want them to have a moral compass. In judging right from wrong, with reference to whom are they judging it? What may be right for the Earth might not be right for man, or what may be right for the robot may not be in our best interest. I think, and this is purely opinion, that to be safe, maybe we don't want robots to be self-aware. Is it necessary? Why? Once we allow them to be self-aware, people will want to give them rights...


While I am not quite as concerned as Stephen Hawking, Elon Musk, Bill Gates, Steve Wozniak, etc., I am becoming more so. Part of my reasoning was that I didn't think we could actually do it; it seems we already have.

--Final food for thought: C-3PO or HAL 9000? What happens when superior engineering truly engineers something superior, superior to the engineers? Can disasters imagined in science fiction become science fact?

---I lied; one more thought and a fact! Remember Isaac Asimov's Three Laws of Robotics? Surely that will help, no? Well, apparently the United Nations has been debating battlefield robots that can decide on their own when it's a good idea to kill people, and has yet to ban them. Seems a bad precedent. (Just saying!)
