If liberals don’t care about unborn babies, why are they so worried about robot rights?
The rapid advancement of computing technology and artificial intelligence is making humanists sweat. Squirming over the ethics of giving rights to robots, they strain out gnats…
Humans are empathetic. Darwinists and Christians agree on that, though they disagree about the source of our empathy. And that disagreement has consequences: Darwinists are increasingly having trouble recognizing just what is, and what isn’t, a person.
MOUNTAINS OUT OF MOLEHILLS
I think most people do not suffer from this problem. It is a problem created by spending too much time imbibing the absurdities of evolutionary theory, which causes you to lose your ability to think straight. Like this Canadian philosopher at a Toronto university, quoted in an NBC News article:
But artificial intelligence is progressing swiftly. In the not-too-distant future we may begin to feel that our machines have something akin to thoughts and feelings, even though they’re made of metal and plastic rather than flesh and blood. When that happens, how we treat our machines will matter; AI experts, philosophers, and scholars are already imagining a time when robots and intelligent machines may deserve — and be accorded — some sort of rights.
These wouldn’t necessarily be human rights, since these new beings won’t exactly be human. But “if you’ve got a computer or a robot that’s autonomous and self-aware, I think it would be very hard to say it’s not a person,” says Kristin Andrews, a philosopher at York University in Toronto, Canada.
As the NBC article correctly explains, the root of this problem is ethical:
Which raises a host of difficult ethical questions. How should we treat a robot that has some degree of consciousness? What if we’re convinced that an AI program has the capacity to suffer emotionally, or to feel pain? Would shutting it off be tantamount to murder?
To get to the heart of the question, though, you have to understand just what, exactly, ethics are. Who defines them? Where do they come from?
Another question that you have to sort out: what’s the nature of man? As we can see, Darwinists struggle with this basic question. For them, the lines are blurring between people, animals, and robots:
An obvious comparison is to the animal rights movement. Animal rights advocates have been pushing for a reassessment of the legal status of certain animals, especially the great apes. Organizations like the Coral Springs, Florida-based Nonhuman Rights Project believe that chimpanzees, gorillas, and orangutans deserve to be treated as autonomous persons, rather than mere property.
Steven Wise, who leads the organization’s legal team, says that the same logic applies to any autonomous entity, living or not. If we one day have sentient robots, he says, “we should have the same sort of moral and legal responsibilities toward them that we’re in the process of developing with respect to nonhuman animals.”
WHICH ANIMALS, WHICH RIGHTS?
So here’s a question: How do you present the illusion of doing something good, without actually doing a good thing?
One way is to personify animals. Taking care of needy people is harder than taking care of needy animals. So a person burdened by guilt over not helping people might project humanity onto animals, care for the animals, and then pretend that they’ve done something as good as helping actual people.
Along these lines, the animal rights people want humans to establish legal rights for animals.
But this reveals the dilemma associated with the first question: where do ethics come from, and who defines them?
Should chicken welfare laws require 100 square inches of space per chicken? 150? Who should decide? (In California, it’s 116.)
The same problems face the rights of robots:
Of course, deciding which machines deserve moral consideration will be tricky, because we often project human thoughts and feelings onto inanimate entities — and so end up sympathizing with entities that have no thoughts or feelings at all.
You see, someone is going to have to decide just what class of robot is, and is not, deserving of legal rights. Who will appoint that person or group of people, and who will they be? What source of ethics will they draw from?
And how will they draw the line between robots that are humanlike enough and those that are not?
Let’s hope, for our sake, that it’s not politicians. Should the authority to draw the line between humanity and non-humanity be placed in the hands of fornicating politicians?
Or in the hands of Ph.D.-holding bureaucrats who already have trouble distinguishing between people and animals? Or, oh, I dunno, between male and female?
Christians rest on God and Scripture as our source of ethics. The Bible makes clear that there is a clear distinction between people and animals (and men and women). And, along with that distinction, comes a difference in responsibility, rights, and privileges.
EMPATHIZING WITH NON-HUMANS
The Darwinists believe in what’s essentially a recycled form of Egyptian religion: the chain of being. At the top, consisting of more divine substance than everyone else, is the Pharaoh. At the bottom are the slaves and the animals.
With evolution, creatures are really the same kind of thing, but differentiated based on their level of evolved intelligence or emotional capability (or whatever other criteria the humans at the top of the food chain decide on).
We’re all descended from primordial slime, after all (and definitely not Adam and Eve). We’re all related, but clearly there’s a difference between people and fish.
If primordial slime is at the bottom, then certain animals, like dogs, cats, and monkeys, are somewhere in the middle of the chain. Presumably, robots will be built in the image of humans and placed near the top of the chain, above cats and monkeys, but slightly below us and our big brains — lest we wish for them to conquer us like in Terminator or The Matrix.
Amoebas are at the bottom of that chain. The Nazis also placed Jews at the bottom of that chain. Our modern culture dumps the remains of aborted babies there, as well.
You see, we can personify anything. We’ll project our humanity onto it, along with all the rights and responsibilities that go with it. The article at NBC describes an experiment with a robotic toy:
Kate Darling, a researcher at the MIT Media Lab in Cambridge, Massachusetts, observed something similar when she studied how people interact with Pleo, a toy dinosaur robot. Pleo doesn’t look particularly lifelike — it’s obviously a toy. But it’s programmed to act and speak in ways that suggest not only a form of intelligence but also the ability to experience suffering. If you hold Pleo upside-down, for example, it will whimper and tell you to stop.
In an effort to see just how far we might go in extending compassion to simple robots, Darling encouraged participants at a recent workshop to play with Pleo — and then asked them to destroy it. Almost all refused. “People are primed, subconsciously, to treat robots like living things, even though on a conscious level, on a rational level, we totally understand that they’re not real,” Darling says.
But the reverse is also true: when we dehumanize people, we treat them accordingly.
The Nazis dehumanized the Jews, and the result was monstrous. In his 2011 book, Less Than Human: Why We Demean, Enslave, and Exterminate Others, David Livingstone Smith writes about this process. In an excerpt published at NPR, we read the consequences of the Nazi policy against the Jews:
Let’s begin at the end. The 1946 Nuremberg doctors’ trial was the first of twelve military tribunals held in Germany after the defeat of Germany and Japan. Twenty doctors and three administrators — twenty-two men and a single woman — stood accused of war crimes and crimes against humanity. They had participated in Hitler’s euthanasia program, in which around 200,000 mentally and physically handicapped people deemed unfit to live were gassed to death, and they performed fiendish medical experiments on thousands of Jewish, Russian, Roma and Polish prisoners.
The experiments are ghastly. You can read a summary at the link. Smith writes about the cause of this behavior:
The descriptions in Taylor’s narrative are so horrifying that it’s easy to overlook what might seem like an insignificant rhetorical flourish: his comment that “these wretched people were … treated worse than animals”. But this comment raises a question of deep and fundamental importance. What is it that enables one group of human beings to treat another group as though they were subhuman creatures?
A rough answer isn’t hard to come by. Thinking sets the agenda for action, and thinking of humans as less than human paves the way for atrocity. The Nazis were explicit about the status of their victims. They were Untermenschen — subhumans — and as such were excluded from the system of moral rights and obligations that bind humankind together. It’s wrong to kill a person, but permissible to exterminate a rat. To the Nazis, all the Jews, Gypsies and others were rats: dangerous, disease-carrying rats.
Let me repeat what he just said: it’s wrong to kill a person, but permissible to exterminate a rat. That’s why liberals refer to unborn babies, not as people, not as babies, but as something that sounds subhuman — like “fetus,” or simply an organ of the woman’s body, or “whatever.”
Liberals believe in abortion. They want to escape the reality that aborting a fetus is murdering a baby. That’s why they refer to unborn babies as anything but. They dehumanize babies, which makes it acceptable to exterminate them.
So here we are. Darwinists are debating whether or not to give legal rights to robots in the future. They are having trouble imagining what the criteria should be to separate humanlike robots from robots that aren’t humanlike enough.
In general, they also have trouble distinguishing animals from people. And boys from girls.
But they have absolutely no trouble at all recognizing unborn babies as entities that plainly aren’t human at all. To quote the Canadian philosopher featured in the article: “If we realize that something is actually a ‘someone,’ then we have to take their interests into account.”
In the words of Jesus, these people are straining out gnats, but swallowing camels.
From this evolutionary standpoint, whether we treat robots as people or not merely depends on whether we decide to think of them as people or not.
Do you want them to be human? Then think of them that way.
Do you want them to be non-humans or sub-humans? Then just think of them that way.
To the Darwinist, the decision is arbitrary.
Clearly, unborn babies do not receive the same rights as people. And so, the liberals wish to take away what rightfully belongs to unborn babies, and give it, instead, to a bundle of digital circuitry that we can choose to pretend is human.
The idea seems to be that if we treat the robots with respect and dignity, that will absolve us of the consequences of our sexual indiscretions.
Humans are empathetic. We can relate to others because we share a common humanity. Christianity explains why: people are designed by a loving Creator, in his own image. The image of God is the common link among people, and as their Creator and owner, He has established what the Declaration of Independence calls “certain unalienable rights.” So when the Bible issues a prohibition against murder, we apply it to unborn babies as well.
But here the liberals would suggest that we are simply “sympathizing with entities that have no thoughts or feelings at all.”
The Bible guarantees human rights, regardless of how certain groups or individuals feel. A just legal system will protect these rights, regardless of how much the liberals belittle them. As stewards of creation, the Bible also requires humans to respect certain rights of animals and the planet.
But these must not be elevated to the point that they are confused with human rights. To do so is to commit a serious theological error. The result is as we’ve seen here: confusion. The true victims of this confusion will inevitably lose their voice in the legal system.
What we have here is a modern, perverted twist on the theological doctrine of double imputation: the humanity of the unborn babies is imputed to the robots, and the inhumanity of the robots is imputed to the unborn baby.
Except I do not think the robots will save us.