Tag Archives: Rationality

Maybe Everyone Is Actually Super Rational!

Back when I was reading through the Sequences, I noticed that several times, after Eliezer Yudkowsky had explained some example of people being irrational, a commenter (most commonly Robin Hanson) would say that it is possible the behaviour is actually not irrational at all.

My favorite example is a post (which I can’t find right now) claiming that when people are confronted with arguments against their position they end up more certain of the beliefs they already hold, and that this is irrational because they are not properly updating on new evidence. In the comments someone said that the subjects could be observing that the arguments against their position are weak, and reasoning that in a world where their beliefs were wrong, they would expect there to be better arguments for the true position; so the weakness of the arguments is evidence that their position is correct.

This seems possible but unlikely to me. I think this is valid reasoning, and if I were observing for the first time arguments against a belief I held, and those arguments were very weak (especially if they came from people who I have seen make strong arguments in the past and who I know have put a lot of thought into the issue), I would definitely update towards my belief being more likely. But despite that, I still think it is psychologically unrealistic that this is what is happening in the majority of people’s brains when they are presented with evidence against a belief and end up even more confident*.
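The commenter’s reasoning can be made concrete with a toy Bayesian calculation. All the numbers below are invented for illustration; the point is only that if strong counterarguments are much more likely to exist when your belief is false than when it is true, then observing only weak counterarguments should raise your probability that the belief is true.

```python
# Toy Bayesian update: does observing only WEAK counterarguments
# raise the probability that my belief is true?
# All probabilities below are made up for illustration.

prior_true = 0.70           # P(my belief is true), before hearing the arguments

# Likelihood of encountering only weak counterarguments:
p_weak_given_true = 0.80    # if I'm right, good counterarguments should be rare
p_weak_given_false = 0.30   # if I'm wrong, someone probably has a strong one

# Bayes' rule: P(true | weak) = P(weak | true) * P(true) / P(weak)
p_weak = p_weak_given_true * prior_true + p_weak_given_false * (1 - prior_true)
posterior_true = p_weak_given_true * prior_true / p_weak

print(f"prior:     {prior_true:.3f}")
print(f"posterior: {posterior_true:.3f}")  # higher than the prior
```

With these made-up numbers the posterior comes out above the prior, so becoming *more* confident after hearing weak arguments is at least Bayesian-coherent; whether it is what actually happens in people’s heads is the separate, psychological question.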

There are other psych experiments where we can apply the same reasoning. For example, in Asch’s conformity experiment, we could reason that when subjects conform they are actually updating on the evidence of other people in the room apparently having different views about the length of the line.

Two possible models that can be used to explain observations of human irrationality are, firstly, “Yes, that’s because humans are irrational, which is exactly what we would expect from what we know about evolution” and, secondly, “What appears to be human irrationality is actually people behaving rationally but, for example, trying to achieve different goals than they appear to be.”

My prior from the inside is that the first is much more likely, and that it is exactly what I would predict if I had not observed human behaviour but was told about evolution. The only reason I can think of to have a prior that favors the second model is a belief in the Neoclassical model of perfectly rational, self-interested human agents**. The first model also seems simpler, so it gets Occam’s Razor/Solomonoff Induction points. The worst part is that these models are usually used to explain the same observations, so it is hard to think of evidence that would be more likely to exist if one were true and not the other.

The second model seems to be connected (conceptually in my head, if not in reality) with both the idea of revealed preferences and the signalling model of human behaviour. I am skeptical of both of these concepts and will hopefully be writing future posts about why.

* Taking the outside view, it’s possible that this is just elitism of the “Well, I am smart enough to reason like that, but most other people aren’t” kind. Because of this thought, I am going to update slightly away from the “People Are Irrational” model.

** Either the extremely unrealistic Econ 101 version or the more nuanced version held by more knowledgeable Neoclassical economists.

Reductio ad Absurdum is Absurd

Reductio ad Absurdum is a form of argument that looks like “But if we accept X, then we also have to accept Y, which is clearly wrong/crazy/absurd, so therefore X can’t be true.” There are times when this argument is perfectly valid, both in deductive reasoning and in inductive reasoning. In deductive reasoning it is called modus tollens and looks like:

If A then B
Not B
Therefore, not A

The inductive form is also valid (although of course induction can only be used to give probabilistic arguments):

“If A then B, but observations X Y and Z all make B seem very unlikely, therefore probably not A”
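As a sketch (again with invented numbers), the inductive form is just Bayes’ rule run through the implication: if A makes B very likely, then evidence that makes B unlikely also drags down the probability of A.

```python
# Toy probabilistic modus tollens, with made-up numbers.
# Model: A strongly predicts B, so evidence E (observations X, Y, Z)
# that counts against B should also count against A.

p_a = 0.50                  # prior P(A)
p_b_given_a = 0.95          # "if A then (almost certainly) B"
p_b_given_not_a = 0.20

# The evidence E is much likelier when B is false:
p_e_given_b = 0.05
p_e_given_not_b = 0.90

# P(E | A) and P(E | not A), marginalising over B:
p_e_given_a = p_b_given_a * p_e_given_b + (1 - p_b_given_a) * p_e_given_not_b
p_e_given_not_a = (p_b_given_not_a * p_e_given_b
                   + (1 - p_b_given_not_a) * p_e_given_not_b)

# Bayes' rule for P(A | E):
p_e = p_e_given_a * p_a + p_e_given_not_a * (1 - p_a)
posterior_a = p_e_given_a * p_a / p_e

print(f"P(A) before evidence: {p_a:.3f}")
print(f"P(A) after evidence:  {posterior_a:.3f}")  # much lower
```

Note that the update is probabilistic, not a refutation: P(A) drops sharply but never hits zero, which is exactly the sense in which induction only gives probabilistic arguments.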

But there is a third type of Reductio ad Absurdum where the conclusion is not ruled out by logical rules or by observational evidence, but by seeming absurd: “If A then B, but B is crazy/absurd, so obviously not A.” This clearly seems like invalid reasoning to me. Absurdity is in the Map, not the Territory. Nowhere in reality will you find absurdity particles floating around; it is only part of our model of reality where we label some things absurd and others ordinary.

It seems like absurdity is just what something disagreeing with your fundamental beliefs feels like from the inside. To someone who believes in god, the idea that when people die they just stop existing, and that this has happened to every person who has ever died, would probably seem really absurd. To someone who is not a consequentialist/utilitarian, the idea that every single action a person takes is morally required to be the best possible action they could take at the time probably seems absurd (one of the main objections to utilitarianism is that it “requires too much”, unlike deontology, which says “don’t do a bunch of obviously bad things like murder and rape”, and virtue ethics, which says “try to be a nice person”).

To illustrate this further, imagine how many scientific discoveries must have seemed absurd at the time they were being debated. “If your theory is true, then the sun would have to be a million times larger than the earth; that is absurd, so you must be wrong.”

Again, remember I’m only talking about the feeling of absurdity, not an apparent logical contradiction or incompatibility with other observations. But since a false belief that contradicts your true belief and a true belief that contradicts your false belief will both produce the feeling of absurdity, that sense of absurdity cannot be used as evidence for the truth of either belief.

Another, separate form of argument is Appeal to Hypocrisy, which takes the form:

“Group A says X, but based on their reasoning for X they should also believe in Y, but they don’t believe in Y, so they are hypocritical/are not following their assumptions to their logical conclusions. Therefore they are wrong about X.”

Put in this form it seems like an obvious mistake, so to make sure no readers think they would never do this, I’m going to use an example from when I used this line of reasoning a few years ago. It still seems very wrong to present!Nick, but hopefully it will be a tad more subtle.

“Vegetarians/Vegans are against animals suffering in factory farms. But if that is a bad thing, then all animal suffering must be a bad thing; any moral framework that says it’s ok when other animals hurt each other but not when humans hurt animals would be even more removed from my own than a vegan moral framework. So vegans should also care about the suffering of animals in the wild. Predation stands out as a horrible animal death; I would have to do some math, but it’s probably worse than factory farming. But vegans don’t focus on that at all (none that I’ve met, anyway), and many I’ve talked to actively justify it. This is absurd. Vegans don’t even have a consistent morality. Therefore eating meat is morally okay.”

Now that I am a Veg*n (I still consume some dairy products, but am in the process of cutting back) and someone who cares about wild animal suffering, this argument seems obviously wrong to me. Caring about animal welfare does imply caring about wild animal suffering; so what? There is no logical contradiction or observational evidence that contradicts caring about animal suffering. So the fact that, at the time, it seemed absurd that there could be this giant problem that even the people who should care about it don’t, wasn’t evidence that it wasn’t a problem. And now that I have accepted both the disvalue of animal suffering in factory farms and of animal suffering in the wild, the fact that the majority of other Veg*ns don’t care about the latter doesn’t matter; they just happen to be wrong about this area. Unfortunately, getting one question right doesn’t mean you are able to get them all right. This argument from hypocrisy is also a subclass of trying to Reverse Stupidity to get Intelligence.

So, based on my experience of using Reductio ad Absurdum and the argument from hypocrisy before and being wrong, I am more skeptical of these types of arguments in the future. So I tried to think of arguments that I currently reject for reasons that could be the same as the above:

  • Simulation argument
  • Doomsday Argument
  • Theories of consciousness that imply that all things including steam engines and rocks are in some way conscious
  • Modal Realism

All of the above, I think, still have problems (except the simulation argument, which I will talk about more in a future post), but I think part of the reason I don’t believe them is an incorrect use of Reductio ad Absurdum, so while writing this I attempted to increase my subjective probability of these being true (but, like I said, I still don’t believe them).