The recent speculation about AI (Artificial Intelligence) takes me back to the ’70s, when I used to read science fiction books to relax and take a break from my work. Isaac Asimov was the author I remember most. In his I, Robot series he imagined intelligent robots as companions in an age when humans avoided contact with each other to prevent disease. The robots were made to copy humans in shape and personality, and they were controlled by three laws:
1. A robot may not harm a human or, through inaction, allow a human to come to harm.
2. A robot must obey the commands of a human, unless they conflict with the first law.
3. A robot must protect its own existence, unless doing so would conflict with the first or second law.
If my cousin Steve had written the Robot series, he probably would have condensed the three laws into one: “A robot must earn its oxygen.” He might have had to change “oxygen” to “amperes,” but you get the point.
As the president explores ways to regulate AI, I doubt he will come up with anything as concise as Asimov’s laws of robotics or Steve’s oxygen rule.
As I remember, Asimov’s ideas about robots were positive. In an age of relative isolation and pandemics, it would be nice to have a pleasant, intelligent, loyal, and even affectionate (yes, Asimov imagined that too) companion who would protect you, help you perform mundane tasks, and even advise you in decision making.
Now, over eighty years after Asimov’s Robot novels, it seems that the realization of his vision is just around the corner, and with it come new worries about the dangers of AI. It can assemble and interpret data faster and more accurately than a human can. It can recognize a person by analyzing video, and it can recognize and imitate a person’s voice. AI can even come up with original ideas about how to solve problems. It can search through huge databases to identify criminals or lost family members, look for combinations of chemicals likely to cure a disease, and create algorithms to diagnose and treat illnesses.
But even with the best of intentions, it’s hard to avoid bias. A computer, no matter how sophisticated, can only work with the data you feed into it. So, just like a human’s opinions, the results from a computer are likely to be biased, perhaps in unpredictable ways. “Garbage in, garbage out,” as we used to say.
Also, in the wrong hands AI can be used to commit crimes like embezzlement or identity theft. It can be used by authoritarian governments to spy on people; China is already using facial recognition to do just that. Voice mimicry is already being used to deceive people with recordings of false statements attributed to celebrities or politicians. You can easily imagine ways in which a hostile government could use AI to plan sabotage against an enemy. What if a dictator used AI to take power? He (or she) could use “fake news” to create a false image of himself as a benevolent leader promoting useful programs, and use voice and image imitation to create the illusion of popular support.
It would be nice if Asimov’s laws could be imposed on our modern-day “robots,” but how can you enforce rules when AI is available to both good and bad people?
All these abilities are possible now, with present technology. But what if we take it a step further? What if AI becomes independent? What if it starts coming up with ideas beyond what we direct it to do? What if it learns to lie?
One of the most famous sci-fi movies, 2001: A Space Odyssey, based on the work of Arthur C. Clarke, features a supercomputer, HAL, who takes over a spaceship and kills most of the crew. Could that actually happen? Could a computer with AI decide to rebel against the orders it receives from its human creators? If our society becomes totally dependent on computers (we almost are already), could computers take over the world? Maybe computers would decide it was necessary to do away with humans to save the world from nuclear contamination, global warming, the extinction of other species, and so on.
Could computers or robots with AI achieve the status of sentient beings? Could they buy houses, get married, run for office? That brings to mind another science fiction story, this one by Robert Heinlein: The Star Beast. It’s about a young boy who is given a lizard-like creature for a pet by his grandfather, who picked up the creature on another planet. The boy becomes attached to his pet, which is quite remarkable: it can talk, it’s smart, and it helps the boy with his homework. As time goes by the creature grows, and grows, until it is gigantic. The neighbors complain because the creature is eating their flowers and breaking down their fences. It even ate a car. The government steps in at this point and declares that the creature must be destroyed, but that turns out not to be so easy. They try poison, explosives, and I forget what else, and the creature survives everything.

At some point the boy rescues his pet, and they head for the countryside. They are followed by government helicopters, and just when it seems they will be captured, the creature grows arms and hands. It then picks up stones and hurls them at the helicopters, causing them to crash. There is eventually a trial to determine whether the creature is sentient, and according to intergalactic law the criterion for sentience is not speech or even the ability to reason, but rather the presence of hands. This gets the creature acquitted, and it’s a good thing, because she (it turns out she’s female) is actually a princess from another planet. We find that out when creatures arrive from her planet and threaten to destroy the earth if she is not returned. She then reveals herself and reluctantly returns to her planet, taking the boy and his girlfriend with her.
I was intrigued by Heinlein’s conclusion that hands are instrumental in the development of independent thought. Well, computers don’t have hands. They can’t fix themselves. They can’t connect themselves to a power source. They could never play a violin or fall in love, or could they?