Of Asimov, Robots, Artificial Intelligence and What is a Human Anyway


You might say I have too much time on my hands. I would answer that I never have enough time, but my back injury continues and I have had time to think in a few directions.

Take HAL 9000 in 2001: A Space Odyssey. Remember “I’m sorry, Dave. I’m afraid I can’t do that” as the answer to “Open the pod bay doors, HAL”?

If you haven’t tried it, say those famous words, “Open the pod bay doors, HAL”, to Alexa, Siri, Cortana or whatever your speech interface to the internet is. If only Arthur C. Clarke were around to experience that.

Damn, I just remembered that there was a 4K restoration of the movie at IMAX last month for its 50th anniversary. I was hoping to find someone to go with and then totally forgot about it. That would have been amazing.

I collect books, and in recent years I have given away many that I was never going to read again, but decided to extend my collections of specific writers. Starting at the beginning of the alphabet, I looked at what was missing from my Isaac Asimov collection and, among others, bought a copy of The Bicentennial Man.

Asimov is of course famous for the Three Laws of Robotics. Ironically, a lot of people debunked his laws, said they were flawed, and used that to criticise him as unrealistic or perhaps idealistic, a trait of many SciFi authors of the ’70s. However, he knew that himself: in many of his stories, robots disobeyed the laws.

There is a great story in this book called That Thou Art Mindful of Him, the title of which is a play on Psalm 8:4–6. Through some of his characters he also implies in these stories that he was Jewish, and he had a keen sense of humor.

In this story (and I’m sorry for the spoiler) a series of robots are produced and given the capability to become self-aware, in effect sentient. They redefine what it is to be human and declare themselves as such.

I played with the thought of the Singularity and imagined what would happen if autonomous cars could pass the Turing Test.

I also looked at what might happen if they didn’t and what hackers might be able to do.

What I keep coming back to, which writers like Philip K. Dick, Asimov, Clarke, Heinlein and many others foresaw 50 or more years ago, and which is where the TV series Humans seems to be heading, is that humans are dangerous to the planet.

Now, I like being human, and I hope that my descendants will have a safe and healthy planet thousands of years from now; many of my little stories are in jest.

BUT, if climate change, plastic pollution, air pollution, brinkmanship politics, drought, famine and war are the result of how great and committed we humans fancy ourselves to be, and an Artificial Intelligence were developed to the point of Singularity and able to continue to learn, with or without programmed biases, would its logic not determine that the human race should either be limited or be allowed to exterminate itself?

Kurzweil looked at it a different way and said that Singularity would occur around 2045 and potentially be a synthesis between human and machine, in effect human 2.0. He would be about 98 at that point in time, so it will be interesting to see if he is still around and if he is right.

Maybe Elon Musk, founder of Tesla and many futuristic projects should have the last word. He’s pretty successful and walks the talk. DARPA, Rex Bionics and hundreds of companies, universities and other innovators are developing systems that will be able to think for themselves. Yes, for specific purposes, but they are being created.

It’s interesting that in this clip they say that Science Fiction is usually about 50 years ahead of its time. So back to Asimov: reading him today, especially a book like The Bicentennial Man, where, like Stephen King and others, he talks about his own stories, was he in fact prophetic?

Yes, maybe I’ve had too much time to think, but do you think we should be thinking about this? Just imagine if a machine, say a RoboCop, decided, using facial recognition or perhaps racial recognition, that you were, could be, or could become a criminal; then think about the biases that go into programming, often of necessity.

What conclusions could an AI start drawing when given some information and some bias and then left to learn on the basis of that starting point? Oh, and I didn’t even mention George Orwell. He wrote Animal Farm in 1945. Remember “All animals are equal, but some animals are more equal than others”? Shutting up now…
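To make that worry concrete, here is a tiny, purely hypothetical sketch, not any real system, of how a small bias in a starting point can snowball once a machine keeps learning from its own decisions. The scenario and all the numbers are made up for illustration.

```python
# A made-up numerical toy: two areas with identical actual crime, but the
# record starts with one extra arrest in "north". Each round the single
# patrol is sent wherever the record shows the most arrests, and patrolling
# an area produces an arrest there. The initial bias then decides everything.

def patrol_feedback(arrests, rounds=10):
    arrests = dict(arrests)                      # don't mutate the caller's data
    for _ in range(rounds):
        hotspot = max(arrests, key=arrests.get)  # trust the biased record
        arrests[hotspot] += 1                    # watching an area yields arrests
    return arrests

print(patrol_feedback({"north": 3, "south": 2}))
# → {'north': 13, 'south': 2}
```

After ten rounds, every patrol has gone north and the record "confirms" the bias it started with, which is exactly the kind of conclusion-taking the question above is about.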

 


Why Would You Want an Amazon Echo, and Who Else Is Listening?


How do you feel about a device that is listening to everything said in your home or office and is connected to the Internet? Especially in this age of cyber terrorism, when we know that criminals already use tools like Facebook to target people with easy-to-steal and easy-to-sell assets.

How do you feel when Gmail bombards you with advertisements for products you have just bought or vacation location deals when you have just returned home from that place? Besides being a waste of time because you already have those things, it shows how Google (in return for giving you free email software and storage) has access to everything you write using their freeware.

So when Amazon had their massive sale on Prime Day, I thought about the Amazon Echo and wondered what I would do with it if I had one and why.

I very occasionally use Siri to save a reminder or a note, perhaps in the car, to reduce the risk of having pen and paper in my hand when the lights change, or of illegally using my phone, given the inherent risks even at a traffic light. No joke: my wife has had two cars she was driving totalled, both while she was legally stationary at a red light. I’m not saying she could have avoided them, she couldn’t, but the people who hit our cars were certainly not paying attention.

So when I look at Siri, and I have to admit it has improved in its ability to understand my Kiwi accent, the number of times I have said one thing and Siri has randomly rung a phone number for someone I had no desire to speak with, or has responded to a sound it heard when I hadn’t even touched my phone, is interesting.

I would love to have an automated home. When I got home last night from dinner in the freezing cold city it was around 6 degrees inside our house and I said to my wife how nice it would be if I could have turned on the heat pump via my phone. It has a timer, but we didn’t know when we would be home and I’m sure we could get an electrician to wire a remote switch into that circuit without having to buy a new heat pump, but I digress.

When everything I write makes a permanent footprint on the Internet and the supplier of that service has access to it all in order to ‘assist me’, I struggle with why I would want another device listening to everything I say.

Imagine living in a country like North Korea where you can be put in jail or worse for just saying the wrong thing to the wrong person, and then having a device that YOU PURCHASED providing access to every word that you said in the ‘privacy of your own home’.

Now, I know that Amazon is not a spy agency; they can’t afford for this product to risk your privacy, because if it did, they could go broke. But they do want to know what you are interested in, and devices like these are supposed to be your personal digital assistant. I used to have an executive assistant in one of my jobs, and she knew what I wanted or needed often before I did. It was awesome because I knew and trusted her. This device has to listen even for the Alexa command to work, so technically it is listening to every word, even if in theory it identifies most of them as not relevant.
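To illustrate the point, here is a toy sketch, in no way Amazon’s actual implementation, of how an always-on wake-word loop behaves. I’ve simulated the audio as text chunks for simplicity; real devices run an on-device keyword spotter over actual audio. The takeaway is the same: every chunk has to be inspected locally just to spot the wake word, even though only what follows it is sent upstream.

```python
# A toy simulation of an always-on wake-word loop. The "audio" is plain
# text chunks here. Every chunk is inspected locally, but only chunks
# AFTER the wake word would be streamed to the cloud.

WAKE_WORD = "alexa"

def wake_word_loop(audio_chunks):
    """Return the chunks that would be sent upstream after the wake word."""
    streaming = False
    sent = []
    for chunk in audio_chunks:
        if streaming:
            sent.append(chunk)           # post-wake audio goes to the cloud
        elif WAKE_WORD in chunk.lower():
            streaming = True             # wake word spotted: start streaming
        # either way, the device had to examine this chunk to get here
    return sent

print(wake_word_loop(["private chat", "Alexa", "what's the weather"]))
# → ["what's the weather"]
```

Everything before “Alexa” stays on the device in this sketch, but note that the loop still had to look at it.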

The biggest issue for me with devices like this, which is also why I digressed a little with the heat pump, is that hacking or cyber crime is easy if you know how, or if you are prepared to pay someone to do it for you. Hacking someone’s wireless garage door or front door, or checking Facebook to see who has a boat in their back yard AND is on holiday overseas, is really simple, because most people don’t know how to protect their privacy, or don’t want to, because they don’t realise the risk.

I read a CBS story this morning about how some cheap Chinese mobiles are apparently surreptitiously sending information from the mobiles back to China using firmware that was in the phones from the factory.

You could say that Alexa can only be woken by people using the word “Alexa”, and only then does it start listening for instructions. But what is the first thing that happens when you buy one of these devices? You tell your friends about it and show it off, so you use the word frequently, wake it up frequently, and it starts listening.

For now, I think I’d at least like to be able to use the fingerprint recognition on my phone and tap a button to turn on an appliance if I can’t be bothered standing up to do something like turn on a light. I am happy to have a wireless remote system with security algorithms do that. I’m not sure I want Amazon, or a hacker, to be able to listen to everything said in my home and then use data-mining tools to look for keywords or information that could be used by criminals, or by advertisers to send me more info I don’t want.

How about you?

 

Robots to learn human emotions


At the University of Hertfordshire they have been working on a model of children’s early attachment behavior for robots. Their goal is to apply nature and nurture with artificial intelligence so that robots can become caregivers for children in hospital.

“What the Hal?” I thought when I read about this in The Futurist. If you follow my blog, you will have read previous posts such as the one I wrote about the Singularity. AI is obviously coming, but the concept of nurture applied to a robot is something I struggle with, especially with children, and even more so with sick children who are in pain or stressed.

In principle, a robot that can play games with children and has unlimited patience and intelligence makes total sense and is a great idea. But when it comes to EQ, I’m not sure how it would interpret immature and potentially irrational behavior.

There have been a number of studies suggesting that children and even teenagers are often unable to understand the consequences of their actions. Many people argue that risk taking is a natural part of the development from child to adult. This makes me wonder what would happen if robots learned from children and interpreted their behavior as normal. Imagine, for example, a robot that goes from learning rock, paper, scissors, as in this video, to learning to pillow fight or throw objects from the children.
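A purely illustrative sketch of that worry, nothing like the Hertfordshire model itself: a robot that simply treats whatever it observes often enough as “normal” behaviour. The class, the threshold, and the observations are all made up for this example.

```python
# A toy "learn what is normal from observation" model. Whatever the robot
# sees often enough gets classified as normal - so a robot that only
# watches children can end up deciding that throwing toys is normal.
from collections import Counter

class ImitationModel:
    def __init__(self):
        self.seen = Counter()

    def observe(self, behaviour):
        self.seen[behaviour] += 1

    def is_normal(self, behaviour, threshold=0.2):
        """Call a behaviour 'normal' if it is at least 20% of observations."""
        total = sum(self.seen.values())
        return total > 0 and self.seen[behaviour] / total >= threshold

robot = ImitationModel()
for b in ["rock-paper-scissors", "throw toy", "throw toy", "pillow fight"]:
    robot.observe(b)

print(robot.is_normal("throw toy"))   # half of everything it has seen
# → True
```

With no notion of which behaviours are appropriate, frequency is all the model has, and the children supply the frequencies.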

I’m not being a Luddite, I love new technology, but I do have some concerns about singularity and whilst I would love a robot to vacuum, mow the lawns, cook and do other chores for me, I would prefer them without the emotional senses.

I’ll leave the last word to HAL 9000.

Would you like HAL looking after your sick child?