Sam is obviously very good at remaining calm and speaking in a soothing, monotone manner. However, if you read transcripts of what he’s saying, you realize that he repeatedly relies on appeals to emotion.
Regarding his “objective” morality:
So, I’m going to argue that this is an illusion — that the separation between science and human values is an illusion — and actually quite a dangerous one at this point in human history.
Whether it is “dangerous” or not is totally irrelevant to whether he has actually presented us with objective morality. It’s just a calmly stated scare tactic to try to get people to accept his nonsense.
For instance, there are 21 states in our country where corporal punishment in the classroom is legal, where it is legal for a teacher to beat a child with a wooden board, hard, and raising large bruises and blisters and even breaking the skin.
This is exactly the same kind of appeal to emotion that countless other “objective” moralists present to people. A favourite is torturing babies for fun. Feelings will stir within the vast majority of people when they think of babies and children being abused. The very fact that they’re relying on an emotional reaction, however, clearly indicates these examples are directed at people’s subjectivity. Sam, and the others, then try to pass off this near-unanimous subjective agreement, intersubjectivity, as objectivity. But that’s just not how objectivity works.
Objectivity relies on the empirical senses. If everyone in a room agrees they see, smell, feel, maybe even taste an object, and concludes it is a flower, that means there appears to be a flower there independent of any individual subject’s senses, and therefore independent of all the subjects’ senses. On the other hand, that everyone in the room also agrees it is a pretty flower depends on each subject’s subjective opinion about the flower. That it is pretty is simply subjective agreement, just like everyone in a movie theatre thinking the movie was great, everyone at an art gallery thinking the art was fantastic, or everyone in a pizzeria thinking they make the best pizza.
The fact that Sam has to rely on these types of appeals to emotion is actually more evidence that his “objective” morality is anything but. A truly objective argument should be able to avoid such things.
Now, it is the position, generally speaking, of our intellectual community that while we may not like this, we might think of this as “wrong” in Boston or Palo Alto, who are we to say that the proud denizens of an ancient culture are wrong to force their wives and daughters to live in cloth bags? And who are we to say, even, that they’re wrong to beat them with lengths of steel cable, or throw battery acid in their faces if they decline the privilege of being smothered in this way?
Again with the extreme examples, to appeal to the audience’s emotions in an effort to push them toward agreeing with what is being said. This shouldn’t be included in an objective argument. I don’t argue objective morality, so think about a loved one. Would you do just about anything to protect them from harm? Now think about that loved one committing what you consider to be the most immoral thing they could possibly do. Would you do just about anything to stop them? Almost everyone would answer “Yes” to both of those questions. You might not agree with their methods, and might not agree with them about what they need to be protected against, but you would still likely agree with their general thought process. I can subjectively disagree with their methods and reasons all I want. But if you’re arguing objectivity, there should be an objective way to tell whether their reasoning and/or methods are objectively right or wrong, without the appeals to emotion.
But what does voluntary mean in a community where, when a girl gets raped, her father’s first impulse, rather often, is to murder her out of shame?
More appeals to emotion to back an “objective” argument. We are presented with moral thought experiments all the time in the fiction we read or watch. There are a number of stories where a parent puts their own child out of their misery, or kills their own child before a worse fate can befall them, and it is presented as the moral thing to do, or at least a reasonable one. We may disagree about what is or isn’t a worse fate, or what level of misery is or isn’t worth ending, but we have been presented with similar thought experiments from the parent’s perspective, and have been led to feel that it was an okay thing to do.
Just a note on the “rather often” as well. If you consider how horrifyingly often girls and women get raped, especially in war-torn areas, and the number of times you actually hear about women being killed by their families or governments after being raped, “rather often” would actually seem to mean rarely, unless there’s evidence otherwise. You also hear reports like that of a girl who was raped in Afghanistan: the local imam took care of her until her father returned home, and the father then traveled for days with her to the nearest hospital.
Now the irony, from my perspective, is that the only people who seem to generally agree with me and who think that there are right and wrong answers to moral questions are religious demagogues of one form or another.
But the demagogues are right about one thing: We need a universal conception of human values.
Another appeal to emotion. Whether I feel the “need” to know if the earth is round is irrelevant to whether it objectively is. Whether everyone universally accepts that the earth is round is just as irrelevant to whether it objectively is.
There are more examples from his objective morality argument, but here are some examples from other arguments:
Imagine that a known terrorist has planted a bomb in the heart of a nearby city. He now sits in your custody. Rather than conceal his guilt, he gloats about the forthcoming explosion and the magnitude of human suffering it will cause. Given this state of affairs—in particular, given that there is still time to prevent an imminent atrocity—it seems that subjecting this unpleasant fellow to torture may be justifiable.
Here, our “objectively” moral Sam Harris is justifying torture. It’s an extreme trolley-car thought experiment, designed to elicit an emotional reaction. There is no objectively right or wrong answer to a trolley-car scenario. There are just the results of what most people tend to choose and what most people don’t. The results don’t indicate objectivity any more than observing which pizza toppings people tend to choose does.
“In 2007, the psychologists Fiery Cushman and Liane Young and the biologist Marc Hauser administered the test to thousands of web users and found that while 89 percent would flip the track switch, only about 11 percent would push the fat man.”
“That contradiction—that people find giving the man a fatal prod just too disturbing, even though the end result would be the same—is supposed to show how emotions can sometimes color our ethical judgments.”
Sam might be in the 11 percent that does it either way, but most people don’t actually seem to want to get hands-on. They’re okay with the seemingly non-violent act of flipping a switch, but not okay with hands-on killing. As with his morality, Sam “ironically” (not so ironic when it becomes a habit) lands on the side of religious demagogues: when surveyed, more of the Republican Christian Right support torture than other groups. Sam’s own moral argument should make torturing someone objectively morally wrong. So now he’s effectively arguing that doing what is objectively morally wrong is what we ought to do, which would seem to contradict the objective “ought not do” included in considering something morally wrong. His argument becomes partly gibberish … torture is something we ought not do that we ought to do … and partly relative … under certain circumstances.
It should be of particular concern to us that the beliefs of Muslims pose a special problem for nuclear deterrence. There is little possibility of our having a cold war with an Islamist regime armed with long-range nuclear weapons. A cold war requires that the parties be mutually deterred by the threat of death. Notions of martyrdom and jihad run roughshod over the logic that allowed the United States and the Soviet Union to pass half a century perched, more or less stably, on the brink of Armageddon. What will we do if an Islamist regime, which grows dewy-eyed at the mere mention of paradise, ever acquires long-range nuclear weaponry? If history is any guide, we will not be sure about where the offending warheads are or what their state of readiness is, and so we will be unable to rely on targeted, conventional weapons to destroy them. In such a situation, the only thing likely to ensure our survival may be a nuclear first strike of our own.
OMG! What if Muslims get nuclear weapons?! Like Pakistan? Whose government he has described as Islamist? Push the button! Another extreme example to elicit emotional reactions, which he then relies on to sound reasonable. He’s not. He’s insane. Sam Harris should never be allowed to run a country, or be any kind of adviser to someone running a country.
Not only is this an extreme appeal to emotion, it shows one of Sam’s less nuanced discussions about Muslims. He flows swiftly from “Muslim” to suicidal “Jihadist” in the blink of an eye. The beliefs of “Muslims” aren’t really a problem, because most Muslims are opposed to terrorism and to using violence against innocent people. But, sure, some Muslims might be dangerous. Fortunately, most Muslims don’t like those dangerous Muslims either.
One of the things that makes Sam dangerously insane is that he has no clue how “nuclear deterrence” works. It is based on mutually assured destruction: before your nuclear weapons can hit them, they can send theirs to hit you. Starting a nuclear war is suicidal. It only worked against Japan because Japan didn’t have any to fire back. Sam just imagines that he has successfully nuked them without them firing back, and wonders what the rest of the world will think. They’ll think you’re dumber than a stump, is what they’ll think. Not to defend the actions of terrorists, but what also makes Sam insane is that his whole thought process is based on totally delusional lies that he has convinced himself of. The upper levels of terrorist organizations tend to have goals they want to live to see achieved. ISIS wants to bring about another Caliphate, an Islamic Empire. That’s kind of hard to do if you’re all dead. They aren’t suicidal as a whole group; it’s only the low-level dimwits that they convince to kill themselves for the cause. Sam, the “expert” on Islam and Muslim extremists, doesn’t even grasp some very basic facts.
I’m going to talk about a failure of intuition that many of us suffer from. It’s really a failure to detect a certain kind of danger. I’m going to describe a scenario that I think is both terrifying and likely to occur, and that’s not a good combination, as it turns out. And yet rather than be scared, most of you will feel that what I’m talking about is kind of cool.
I’m going to describe how the gains we make in artificial intelligence could ultimately destroy us. And in fact, I think it’s very difficult to see how they won’t destroy us or inspire us to destroy ourselves.
After all these sci-fi stories about robots, electronics, and computers taking over the world, are we still “likely” to build them without an off switch? He doesn’t even think we have free will, but somehow we’re going to make supercomputers that can totally decide to do whatever they want, with complete subjective freedom. The danger is actually in the programmer. Thankfully, most programmers don’t consider themselves to be Dr. Evil.
The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.
Just think about how we relate to ants. We don’t hate them. We don’t go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let’s say when constructing a building like this one, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they’re conscious or not, could treat us with similar disregard.
More appeals to emotion. I’m not sure why Sam is so concerned. In his “objective” morality argument, he described an “objective” hierarchy of beings, with the most intelligent and aware at the top. If the AI is to us as we are to ants, then it should “objectively” have more value, if Sam is consistent. He’s not; he is still concerned primarily with humans. If the AI decided that wiping out humans would increase the “well-being” of all AIs, or the “well-being” of most conscious creatures, or decided humans were like a virus or plague on the planet, then wouldn’t the AI be “objectively” morally in the right to do so? That would put us, trying to stay alive, “objectively” morally in the wrong. It is quite obvious that Sam’s hierarchy was completely subjective and that he has an emotional attachment to humans over all others. He’s a very emotional guy, albeit a calm, monotone one.