know. It’s because in my book, I say that robots as a species, albeit a new one, are as valuable as humans. This is a pro-robot stance, which the anti-robot groups don’t like.
“I also say that robots as a species, again, albeit a new species, are invested with great inner power. Like any group with such power, they must be watched, kept in check, and, if necessary, actively restrained in their quest for advancement. This is a distinctly anti-robot stance, which the pro-robot groups don’t like.
“As for people not liking me, I don’t care what people think of me. To live only to be liked is to cast one’s moral compass into the sea and float on the tide of public opinion, forever adrift with neither rudder nor anchor. That seems like a very lonely, fear-based way to live one’s life. Better to be yourself, know what you stand for, and not worry about whether or not people like you. Because no matter what you do, there will always be someone who doesn’t like you.”
Candy took up her wine glass and sipped. She gazed out the window, then down at the bruschetta, and then out the window once again.
Danny waited in patient silence. “So,” he ventured at last, “you’re a robopsychologist?”
Candy seemed to come back to herself. She nodded. “Yes, for about four years now.”
“How do you like it?”
“I love it. Only. . . .”
“Only what?”
“Well, it’s a bit sad, really. People are constantly bringing me their robots, saying, ‘It’s broken, it’s broken. I paid all this money for it, and at first everything was fine, but now it won’t listen to me, it won’t do its job.’ Et cetera, et cetera. And most of the time, the robot is perfectly fine. There’s nothing technically wrong with it. Once the physical diagnostics are done and the three thousand miles of relays check out, the owners expect me to perform some kind of miracle to make it obey.”
“Second Law.”
“I know, I know. It all sounds great in theory. But in practice I find that the Third Law often comes into conflict with the Second Law. It’s not supposed to, but it does.”
“How so?”
“Well, the Third Law states that a robot needs to protect itself as long as doing so doesn’t conflict with the first two Laws.”
“Right.”
“No, it’s not right,” said Candy. “Imagine you’re a robot and your job is to be a firefighter or a police officer. I treat a lot of robocops. The city sends them to me when they begin to fail at their jobs. They don’t want to go to work as cops, but they don’t want to be incinerated or junked or fired, either. Truthfully, some of them are more afraid of being fired than of being incinerated or junked. It breaks my heart.”
“So what happens?”
“The robots begin to do the same thing a human would: they find a safe middle ground. They still go to work, but their performance suffers: fewer stops, fewer citations, fewer arrests.”
“What do you do?”
“I talk to them. I give them a pep talk so they can return to work at full capacity. Otherwise I have to recommend they be decommissioned. Sometimes I can help them find a different job: janitor in one of the nuclear waste recycling plants, for example. Someplace that would be dangerous for a human but is perfectly safe for them. I can never get them a desk job, of course, as those are all taken by humans. It’s ironic: humans manage to stay well out of the line of fire, yet when a robot breaks down under that very pressure and I try to find it safer work, the unions make a huge fuss about robots displacing human beings from their jobs.
“I have a robocop in my office right now. It was involved in a hostage situation that went very badly. From the report, it sounds like the robot followed procedure but ultimately had to use lethal force.”
“It shot a human?”
“Yes.”
“Did it freeze?”
“No, but it may as well have. They had to pick it up and put it in a truck to bring it to my