On Ray Kurzweil and Thinking


I was reading an article by Ray Kurzweil in this month’s edition of The Futurist, and it got me thinking a little. Here are a few random synapse connections from me.

He talked about how the digital neocortex will be much faster than my wetware, and how the roughly 300 million pattern recognisers in our biological neocortex will allow us to think in the cloud, using billions or trillions of pattern recognisers. The IQ part of my brain thinks this could be amazing, although I would worry about dendrite overload or glutamic acid overstimulation, which is associated with conditions such as Alzheimer’s. It’s one thing to connect my brain or nervous system to additional memory, but extending the processing in and out is something that I think will require a lot of very careful study.

Earlier this week I wrote a blog post about a potential future condition, Google Glasses Separation Syndrome. I recently introduced my daughter to the brilliant book Flowers for Algernon, which follows a similar thread: what happens when you expand a person’s capability to the point that it changes their existence, and then potentially take it away again?

I noted that Ray perhaps doesn’t like driving very much, because he talked about self-driving cars relieving humans of the ‘chore of driving’. Sorry Ray, I love driving, and so do a large percentage of the people I know. I appreciate that you now work for Google and that they are pioneering driverless cars, but I don’t want to live in a city where the law eventually requires that the ‘network’ takes over my car. Yes, there are benefits in road safety etc., but with systems such as Fleet Management, MobilEye, and the incentives of PAYD Insurance, the roads will become safer without requiring us to take our hands off the wheel.

So IBM’s Watson won Jeopardy, cool. It is an amazing AI, and I love that it is now being used to look for cures for cancer, amongst other things. But if you start thinking about Watson, a digital neocortex and singularity, what about EQ? It’s one thing to be able to identify things, to locate information, to combine apparently disparate bits of data, but what about feelings, intuition, id and ego? These are the things that make us human.

I like where this is going, but I also want to keep that which is me. Watson might be able to write a hit song by understanding the formulas; that has been tried before. But the song I wrote about a boy whose father lost his job at the plant, who asks Santa to find his dad a job while his mother sits and cries in the bedroom, isn’t going to come from an AI. Neither is the one I wrote about a guy who returns from a tour of duty in Iraq to find his best friend is now sleeping with his girlfriend, a song that brought tears to Desert Storm vets. An AI may understand the chemical reactions of the brain and know, intellectually, that these experiences can make people sad.

The ultimate AI could use impeccable logic to conclude that humans are bad for the planet: they are frequently illogical, their emotions cause them to make bad decisions, and basically they shouldn’t be here. Perhaps when Watson really ‘thinks’ about cancer, it might determine that humans are in fact a cancer on this planet and should be booted down. Then we will be left with the singularity, which will contain all information, ask why, and then boot itself down, because having access to all the information in the world does not impart any meaning.

 

Robots to learn human emotions


Researchers at the University of Hertfordshire have been working on a model of children’s early attachment behavior for robots. Their goal is to apply nature and nurture to artificial intelligence so that robots can become caregivers for children in hospital.

“What the Hal?” I thought when I read about this in The Futurist. If you follow my blog, you will have read previous posts such as the one I wrote about Singularity. AI is obviously going to come, but the concept of nurture applied to a robot is something I struggle with, especially with children and even more so sick children who are in pain or stressed.

In principle, the idea of a robot that can play games with children, with unlimited patience and intelligence, makes total sense and is a great idea. But when it comes to EQ, I’m not sure how it would interpret immature and potentially irrational behavior.

There have been a number of studies suggesting that children and even teenagers are often unable to understand the consequences of their actions. Many people argue that risk taking is a natural part of the development from children to adults. This makes me wonder what would happen if robots learn from children and interpret their behavior as normal. Imagine, for example, a robot that goes from learning paper, rock, scissors, as in this video, to learning to pillow fight or throw objects from the children.

I’m not being a Luddite; I love new technology. But I do have some concerns about singularity, and whilst I would love a robot to vacuum, mow the lawns, cook and do other chores for me, I would prefer one without the emotional senses.

I’ll leave the last word to HAL 9000.

Would you like HAL looking after your sick child?