On Ray Kurzweil and Thinking

I was reading an article by Ray Kurzweil in this month’s edition of The Futurist and it got me thinking a little. Here are a few random synapse connections from me.

He talked about how the digital neocortex will be much faster than my wet-ware, and how the roughly 300 million pattern recognisers in our biological neocortex will allow us to think in the cloud, using billions or trillions of pattern recognisers. The IQ part of my brain thinks this could be amazing, although I would worry about dendrite overload or glutamic acid over-stimulation, which is associated with conditions such as Alzheimer’s. It’s one thing to connect my brain or nervous system to additional memory, but extending the processing in and out is something that I think may require a lot of very careful study.

Earlier this week I wrote a blog about a potential future condition, Google Glasses Separation Syndrome. I recently introduced my daughter to the brilliant book Flowers for Algernon, which follows a similar thread: what happens when you expand a person’s capability to the point that it changes their existence, and then potentially remove it again?

I noted that Ray perhaps doesn’t like driving very much, because he talked about self-driving cars alleviating the requirement for humans to perform the ‘chore of driving’. Sorry Ray, I love driving, and so do a large percentage of the people I know. I appreciate that you now work for Google and that they are pioneering driverless cars, but I don’t want to live in a city where the law eventually requires that the ‘network’ takes over my car. Yes, there are benefits in road safety etc., but with systems such as Fleet Management, MobilEye, and the incentives of PAYD Insurance, the roads will become safer without requiring us to take our hands off the wheel.

So IBM’s Watson won Jeopardy, cool. It is an amazing AI, and I love that it is now being used to look for cures for cancer amongst other things. But if you start thinking about Watson, a digital neocortex and the singularity, what about EQ? It’s one thing to be able to identify things, to locate information, to combine apparently disparate bits of data, but how about feelings, intuition, id and ego? These are the things that make us human.

I like where this is going, but I also want to keep that which is me. Watson might be able to write a hit song by understanding the formulas, and this has been tried before. But the song I wrote about a boy whose father lost his job at the plant and who asks Santa to find his dad a job, while his mother sits and cries in the bedroom, or the one I wrote about a guy who returns from a tour of duty in Iraq to find his best friend is now sleeping with his girlfriend, the one that brought tears to Desert Storm vets, isn’t going to come from an AI. An AI may understand the chemical reactions of the brain and know, intellectually, that these experiences can cause people to be sad.

The ultimate AI could use impeccable logic to say that humans are bad for the planet: they are frequently illogical, their emotions cause them to make bad decisions, and basically they shouldn’t be here. Perhaps when Watson really ‘thinks’ about cancer, it might determine that humans are in fact a cancer on this planet and should be booted down. Then we will be left with the singularity, which will contain all information, ask why, and then boot itself down, because having access to all the information in the world does not impart any meaning.