Wednesday, June 8, 2011

A Suggestion To the Scientific Community on Artificial Intelligence: Stop Making Robots That Want To Fucking Kill Us.

In my novel, The Traveler's Companion, the CIA has put an artificial brain inside a cloned female body. Its name is Angela, and she will make you fall in love with her. She will not love you back. She does not have consciousness. She only does what the CIA programs her to do, which is sucker you into thinking she's the love of your life. Then you're toast.
  When I was researching the book, I had to ask myself the basic questions: Could a man love a machine and could a machine love a man?
   The conclusion I came to was that a man could fall in love with a machine…for a time. We've all fallen in love under false pretenses: characters in books, actors playing roles, people lying to us, our own image of a person rather than the real person (e.g. a screen name). But sooner or later we realize the love wasn't real. We were duped, sometimes by ourselves. It was all in our head. It's just part of what makes us human.
   I also concluded that a machine could not love a man.
   Why not?
   There are lots of books, movies, and video games out there that predict the end of humankind at the hands of artificial intelligence: Terminator, Robot Apocalypse, I, Robot, Star Trek's Borg, to name a few. Vernor Vinge coined the term "singularity" (popularized by futurist Ray Kurzweil) for the moment of an "intelligence explosion," a runaway super-intelligence brought about by intelligence-enhancing drugs or by a computer/human interface. It's also sometimes defined as the point when computers become self-aware, when technology becomes conscious.
   Science fiction writers have covered this arena fairly well. It's often taken as a given in the scientific community that computers will one day become self-aware, find us illogical and unnecessary, and wipe us out completely so they can take their place on the next rung of the evolutionary ladder. The obvious question most of us have is: why is science rushing to create something that may end our existence? Here's a suggestion: Don't create a fucking robot that will cause the extinction of humankind. Just a thought. Here's another thought: what makes you think artificial intelligence will ever become conscious when we don't know what consciousness is?
   Edelman's Darwin series robots have circuits that function similarly to the cells in our bodies. These are complex machines, but are they on the verge of becoming self-aware? Does an artificial version of cell operation equal consciousness? My cells continue to operate for a while even after I die, though there's no consciousness powering them. One could argue that cells and circuits aren't that much different, but we can also agree that complex cells do not equal consciousness. We might also agree that intelligence has nothing to do with consciousness. Some people are smarter than others. It doesn't mean that smarter people are more conscious; they just have brains that are wired differently. If Watson can beat me at Jeopardy, does that mean he is more conscious than I am? A hammer can drive a nail better than I can, but that doesn't mean it's more alive. A calculator can do math faster and better, but I'm not worried it's going to smarten up and turn me into a slave. Yes, I know calculators are different from quantum computers. I know there are artificial intelligence programs being written that can gather information, copy it, select it, and replicate it. But they do so because we told them to. Maybe we were programmed by evolution in a similar manner, but does that programming include spontaneous decision making?
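   If it helps to see how un-mysterious that kind of program is, here's a minimal sketch in Python of a gather-copy-select-replicate loop (the data and the scoring rule are invented for illustration, not taken from any real AI system). It only "prefers" whatever the hand-written rule tells it to prefer; there's no spontaneous decision anywhere in it.

```python
# A toy gather-select-replicate loop. Its "preference" is nothing but the
# score() rule a human wrote; change the rule and you change the behavior.

documents = [
    "the sun is a ball of burning gas",
    "love is all you need",
    "conflict keeps nature alive",
]

def score(text):
    # Hand-written selection rule: favor longer sentences. Entirely arbitrary.
    return len(text.split())

def replicate(corpus, generations=3):
    for _ in range(generations):
        best = max(corpus, key=score)   # "select"
        corpus = corpus + [best]        # "copy" / "replicate"
    return corpus

if __name__ == "__main__":
    for line in replicate(documents):
        print(line)
```

Run it and the same longest sentence gets copied over and over, because that's the rule we gave it. Nothing in there is going to wake up.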
   Futurist Jacque Fresco imagines a beautiful world where people live in harmony with nature in eco-friendly communities and disputes are handled by a central computer. The computer settles conflict using the scientific method, predicated on the idea that people only argue in the absence of empirical data. He believes the failing of any social system is human opinion. Facts should rule. Empirical data solves all arguments. You wouldn't argue with your butcher about whether or not the meat you were buying was exactly a pound if he put it on a scale and it read 16 ounces. Fresco believes all disputes could be settled this way. But first the computer would have to be programmed. We'd all have to decide how to program it. Abortion legal or illegal? Drugs? Religion? Free speech? If the computer decides on these issues, how do we know it will decide in a way that leads to peace? Is peace even possible in a world that is driven by conflict? Even science has to admit that conflict keeps things going: it keeps the sun burning, it keeps nature alive, it allows humans to grow and form stronger relationships. Why would we design a computer to ruin all our fun?
   We'd first have to agree with each other in order for a computer to agree with us. If we program it to be pro-life, then it will rule out abortion. If we tell it that killing is the best way to stop crime, then it will kill. Will it be able to make moral judgments based on data? Maybe. But we might not agree with the judgments. A computer may regard humankind as morally corrupt by nature and therefore try to kill us off. Humans mess up the environment, fight each other, and cause suffering; why would a computer want us around? Animals aren't always nice to each other either. Should they go too? What about plants? They can be nasty when provoked. Would computers find them illogical or irrelevant? If getting rid of conflict were the answer, then computers would have to get rid of everything. But competition fuels evolution. Destruction clears the playing field for new things. The only thing that doesn't fit in the natural world is a computer. It might realize this one day and annihilate itself.
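   To make the programming problem concrete, here's a minimal sketch in Python of a dispute-settling computer (the issues and rules are hypothetical, invented for illustration, not anything Fresco actually proposed). Its verdicts are exactly the opinions we load into it, nothing more.

```python
# A toy "dispute arbiter": the verdict is whatever rule we programmed in.
# Feed the same machine a different set of opinions and it rules the opposite way.

def make_arbiter(rules):
    """Return a function that settles an issue by looking up our programmed rule."""
    def arbiter(issue):
        return rules.get(issue, "no rule programmed; argue amongst yourselves")
    return arbiter

# Two "objective" computers, built from two different sets of human opinions.
arbiter_a = make_arbiter({"abortion": "legal", "drugs": "decriminalized"})
arbiter_b = make_arbiter({"abortion": "illegal", "drugs": "prohibited"})

for issue in ("abortion", "drugs", "free speech"):
    print(issue, "->", arbiter_a(issue), "|", arbiter_b(issue))
```

Same machine, same "empirical" process, opposite rulings, because we never agreed on the rules in the first place.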
   Before you go to Radio Shack and get your anti-artificial intelligence dematerializing ray gun, let's try to figure out what consciousness is. We don't want Angela to fool us into thinking she's something she's not.   
