Of Asimov, Robots, Artificial Intelligence and What is a Human Anyway


You might say I have too much time on my hands. I would answer that I never have enough time, but my back injury continues and I have had time to think in a few directions.

Remember HAL 9000? “I’m sorry, Dave. I’m afraid I can’t do that” as the answer to “Open the pod bay doors, HAL” from 2001: A Space Odyssey.

If you haven’t tried it, ask Alexa, Siri, Cortana or whatever your speech interface to the internet is, those famous words: “Open the pod bay doors, HAL.” If only Arthur C. Clarke were around to experience that.

Damn, I just remembered that there was a 4K restoration of the movie at IMAX last month for its 50th anniversary. I was hoping to find someone to go with and then totally forgot about it. That would have been amazing.

I collect books, and in recent years I have given away many that I was never going to read again, but I decided to extend my collections of specific writers. Starting at the beginning of the alphabet, I looked at what was missing from my Isaac Asimov collection and, amongst others, bought a copy of The Bicentennial Man.

Asimov is of course famous for the Three Laws of Robotics. Ironically, a lot of people debunked his laws, said they were flawed, and used that to criticise him as unrealistic or perhaps idealistic, a trait of many SciFi authors of the ’70s. However, he knew that himself: in many of his stories, robots disobeyed the laws.

There is a great story in this book called That Thou Art Mindful of Him, whose title is a play on Psalm 8:4-6. In some of the stories he also implies, through the characters, that he was Jewish, and he had a keen sense of humor.

In this story (and I’m sorry for the spoiler) a series of robots are produced and given the capability to become self-aware, in effect sentient. They redefine what it is to be human and declare themselves as such.

I played with the thought of the Singularity and imagined whether autonomous cars could pass the Turing Test.

I also looked at what might happen if they didn’t and what hackers might be able to do.

What I keep coming back to, and what writers like Philip K. Dick, Asimov, Clarke, Heinlein and many others foresaw 50 or more years ago, and where the TV series Humans seems to be heading, is that humans are dangerous to the planet.

Now I like being human. I hope that my descendants will have a safe and healthy planet for thousands of years to come, and many of my little stories are in jest.

BUT, if climate change, plastic pollution, air pollution, brinkmanship politics, drought, famine and war are the result of how great and committed we humans fancy ourselves to be, would it not be realistic, if an Artificial Intelligence were developed to the point of Singularity and able to continue to learn with or without programmed biases, for its logic to determine that the human race should either be limited or be allowed to exterminate itself?

Kurzweil looked at it a different way and said that Singularity would occur around 2045 and potentially be a synthesis between human and machine, in effect human 2.0. He would be about 98 at that point in time, so it will be interesting to see if he is still around and if he is right.

Maybe Elon Musk, founder of Tesla and many futuristic projects, should have the last word. He’s pretty successful and walks the talk. DARPA, Rex Bionics and hundreds of companies, universities and other innovators are developing systems that will be able to think for themselves. Yes, for specific purposes, but they are being created.

It’s interesting that in this clip they say that science fiction is usually about 50 years ahead of its time. So back to Asimov: reading him today, especially a book like The Bicentennial Man, where, like Stephen King and others, he talks about his own stories, was he in fact prophetic?

Yes, maybe I’ve had too much time to think, but do you think we should be thinking about this? Just imagine if a machine, say a RoboCop, decided, using facial recognition or perhaps racial recognition, that you were, could be, or could become a criminal, and then think about the biases that go into programming, often of necessity.
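To make that worry concrete, here is a minimal toy sketch in Python of what bias baked into a starting point can do. Everything in it is invented for illustration (the group names, the rates, the crude frequency-count “model”); it is not anyone’s real system. It simply shows that if one group is watched twice as closely, a system left to learn from the resulting records will conclude that group is twice as risky, even though the underlying behaviour is identical.

```python
# A purely hypothetical sketch of how bias in training data becomes a
# "conclusion" once a system is left to learn from that starting point.
# Group names, rates and the frequency-count "model" are all invented.
import random

random.seed(42)

TRUE_OFFENCE_RATE = 0.05               # both groups behave identically
RECORDING_RATE = {"A": 0.5, "B": 1.0}  # but Group B is watched twice as closely

def make_training_data(n_per_group=100_000):
    """Simulate 'historical' records in which offences by Group B are
    recorded twice as often as identical offences by Group A."""
    records = []
    for group in ("A", "B"):
        for _ in range(n_per_group):
            offended = random.random() < TRUE_OFFENCE_RATE
            recorded = offended and random.random() < RECORDING_RATE[group]
            records.append((group, recorded))
    return records

def train(records):
    """'Learn' a per-group risk score: simply the recorded offence rate."""
    totals, flagged = {}, {}
    for group, recorded in records:
        totals[group] = totals.get(group, 0) + 1
        flagged[group] = flagged.get(group, 0) + int(recorded)
    return {group: flagged[group] / totals[group] for group in totals}

risk_scores = train(make_training_data())
print(risk_scores)
# Typical output: {'A': ~0.025, 'B': ~0.05}. The system "learns" that Group B
# is roughly twice as risky, even though real behaviour is identical.
```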

What conclusions could an AI start drawing when given some information and some bias and then left to learn on the basis of that starting point? Oh, and I didn’t even mention George Orwell. He wrote Animal Farm in 1945. Remember “All animals are equal, but some animals are more equal than others”? Shutting up now…

 


On Ray Kurzweil and Thinking


I was reading an article by Ray Kurzweil in this month’s edition of The Futurist and it got me thinking a little. Here are a few random synapse connections from me.

He talked about how the digital neocortex will be much faster than my wetware, and that, beyond the roughly 300 million pattern recognisers in our biological neocortex, we will be able to think in the cloud, using billions or trillions of pattern recognisers. The IQ part of my brain thinks this could be amazing, although I would worry about dendrite overload or glutamic acid over-stimulation, which is associated with conditions such as Alzheimer’s. It’s one thing to connect my brain or nervous system to additional memory, but extending the processing in and out is something that I think may require a lot of very careful study.

Earlier this week I wrote a blog about a potential future condition, Google Glasses Separation Syndrome. I recently introduced my daughter to the brilliant book Flowers for Algernon, which follows a similar thread: what happens when you expand a person’s capability to the point that it changes their existence, and then potentially remove it again?

I noted that Ray perhaps doesn’t like driving very much, because he talked about self-driving cars alleviating the requirement for humans to perform the ‘chore of driving’. Sorry Ray, I love driving, and so do a large percentage of the people I know. I appreciate that you now work for Google and that they are pioneering driverless cars, but I don’t want to live in a city where eventually the law requires that the ‘network’ takes over my car. Yes, there are benefits in road safety etc., but with systems such as Fleet Management, Mobileye and the incentives of PAYD Insurance, the roads will become safer without requiring us to take our hands off the wheel.

So IBM’s Watson won Jeopardy!, cool. It is an amazing AI and I love that it is now being used to look for cures for cancer, amongst other things. But if you start thinking about Watson, a digital neocortex and the Singularity, what about EQ? It’s one thing to be able to identify things, to locate information, to combine apparently disparate bits of data, but how about feelings, intuition, id and ego? These are the things that make us human.

I like where this is going, but I also want to keep that which is me. Watson might be able to write a hit song by understanding the formulas, and this has been tried before. But the song I wrote about a boy whose father lost his job at the plant and asks Santa to find his dad a job, while his mother sits and cries in the bedroom, or the one I wrote about a guy who returns from a tour of duty in Iraq to find his best friend is now sleeping with his girlfriend, the one that brought tears to Desert Storm vets, isn’t going to come from an AI. An AI may understand the chemical reactions of the brain and, intellectually, that these experiences can cause people to be sad.

The ultimate AI could use impeccable logic to say that humans are bad for the planet, that they are frequently illogical, that their emotions cause them to make bad decisions, and that basically they shouldn’t be here. Perhaps when Watson really ‘thinks’ about cancer, it might determine that humans are in fact a cancer on this planet and should be booted down. Then we will be left with the Singularity, which will contain all information, ask why, and then boot itself down, because having access to all the information in the world does not impart any meaning.