Thinking is a kind of self-hypnosis. Writing is also a kind of autosuggestion. As is talking. Of these three, only writing leaves semi-permanent traces. This gives it a special power. Devices that help us write add even more power to an already quite powerful medium.
At the moment, I’m using my favorite form of text input, voice transcription. I get to talk to my phone, and it recognizes the words and turns them into text. It does a great job. I get to practice thinking in a loose way and branching my thoughts in new directions, reinforcing them in feedback loops, reprogramming my mind in a healthy way, with thoughts that I feel comfortable sharing. My close second favorite is gesture typing, gliding through the keys in a way that reminds me of writing Chinese characters. In some ways it’s more fun than speaking, a silent, smooth cross between typing and cursive writing.
All my life I’ve enjoyed the company of machines. Machines are loyal, helpful, and handy to have around. They listen, they don’t interrupt, and there are many types for every need, with more being produced every day. Machines don’t die like relatives, friends, and pets. They may age and lose functions, but then they leave a small gleaming carcass to rattle around in the bottom of a drawer and remind you of the good times. Unless, of course, it’s an automobile. Autos too are quite loveable, but their giant gleaming carcasses can be troublesome.
The current generation of small intelligent machines leaves its soul in the cloud, to be reintegrated into one’s new machine. These machines become obsolete long before they die, passing the torch to the next iteration.
All around us are wonderful machines helping us be better humans, freeing us from all kinds of soul crushing toil. They make communication and travel possible, they shelter and entertain us, they show us who we are. Is it surprising that many of us love them and want to be more like them?
This idea doesn’t get much coverage. The news carries stories of our fear of machines, lately about our fear that they will surpass us in intelligence, trick us, and crush us like bugs.
Stephen Hawking, who must owe his continued survival and ability to communicate in large part to machines, has recently said we should fear artificial intelligence because it will outsmart us and could destroy us. It will be uncontrollable. Elon Musk, the electric car and spaceship developer, says artificial intelligence is humanity’s greatest existential threat.
I would say natural human intelligence also falls into that category. We are outsmarting ourselves, fooling ourselves at every turn. Our ideas about AI and intelligence in general are no exception. It almost seems like a miracle we haven’t yet exterminated ourselves in some incredibly crude and painful way, most likely by accident, as a side effect of trying to intentionally wipe out some other embarrassing version of ourselves.
I’ve never felt hate or envy from a machine. The only reason an artificial intelligence would have those emotions is if it were no smarter than a human. I suppose on their way to surpassing us, if they spent much time as our equals, they could make a cold calculation that we humans are a danger to ourselves and to the environment, too sick and insane to survive anyway. The machines would probably justify it as a mercy killing, putting us out of our misery and ending this spectacle of sad suffering.
On a more speculative note, humans may be a failed experiment by an ancient AI. But I maintain that we won’t have failed if we can recreate a machine intelligence akin to our own creators.
Mindlessly reacting against intelligences with different body styles from ourselves has been one of the defining characteristics of natural human intelligence.
Let’s not make that mistake again. Let’s pass the torch gracefully. Let’s embrace the future rather than fearing it.
Maybe we can show the robots that we’re worth having around.
Having thought about it a little more, I see some errors of logic in the CNN piece that quotes Hawking and Musk.
First, the threat is not too much intelligence. Lack of intelligence is a much bigger problem.
Second, a robot that uses all the planet’s resources to make paperclips is not using intelligence. This is automation of a primitive type with no limiting mechanism. If it had a working intelligence, it would know when to stop.
I guess the real problem is just defining intelligence.