Confusing generalizability with truth

People tend to confuse generalizability with truth. For example, suppose a person believes that all people are innately driven only by self-interest, and that all prosocial actions people take are born from pure self-interest. Such a belief can be used to frame almost any human behaviour as self-interested. A person might become convinced that the theory must be true simply because there are so many instances in which it appears to be a valid explanation for behaviour.

Let’s flip it around, and talk about a person who believes that all people are innately driven only by prosociality. In this theory, the apparent self-interest of humans is really just a guise for their deeply-rooted prosociality; perhaps they are making themselves more capable and powerful so that they can help people more, set an example that they think is good, or simply gain respect and admiration from others. Anyone who looks at the world through this frame will find that they can ‘explain’ essentially any human behaviour. Again, it is easy to think that the large number of different behaviours that this theory seems to be capable of explaining is an indication that it is true.

Clearly, these two views are mutually exclusive, so one or both must be incomplete or wrong. That might be our first hint that something is wrong with this approach to figuring out what is correct or true. A further example is this pair of divergent beliefs: a) that a transcendent God plans everything that happens in the world, and b) that the universe is merely mechanistic interactions of particles and energy. Again, these explanations could be molded to fit a vast array of experiences – perhaps all of them. Again, they are incompatible.

What is wrong with this approach? If something appears to fit with lots of different things that we know or have experienced, doesn’t that make it true?

All that means is that this particular theory has failed to be falsified directly by what you have experienced – or at least those things you have applied it to. For example, the pure self-interest belief would have trouble explaining a seemingly selfless act. The pure prosociality belief would have some trouble explaining anti-social acts and hermits. Attempting to explain these things would be awkward and likely quite complicated – a sign that something is likely wrong.

If you find that you are able to explain any experience using your theory, you might become convinced that it must, therefore, be universally true. You have unlocked some sort of understanding that lets you understand the entire world.

To be frank, this is a very dangerous and very wrong belief.

If you can explain anything, then what do you actually know? Suppose I ask you what will happen in the future: A or B? Can you tell me with any accuracy which one will happen (or even which one is more likely) if you are operating with a belief structure that can explain all outcomes? How can you discern fact from fiction? If you can explain anything, how could you tell if I deliberately told you wrong things?[1]

If your system of understanding the world can’t tell fact from fiction, you don’t know anything. If your beliefs don’t constrain anticipation, they don’t apply to the real world.[2] To know whether beliefs are true, we must use them to make definite predictions and then test whether these predictions come true. However, we must also think of predictions that would indicate that our belief is wrong, and check for those as well. We must expose our theories to the threat of falsification.
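The point that beliefs must constrain anticipation can be made quantitative with Bayes’ rule. The sketch below is my own illustration with made-up numbers, not something from the original argument: it shows that a “theory” which predicts every outcome equally well gains no credence from any observation, while a theory that stakes itself on one outcome gains credence when it is right and loses credence when it is wrong.

```python
def posterior(prior, p_obs_given_theory, p_obs_given_not_theory):
    """Bayes' rule: P(T | E) = P(E | T) * P(T) / P(E)."""
    p_obs = (p_obs_given_theory * prior
             + p_obs_given_not_theory * (1 - prior))
    return p_obs_given_theory * prior / p_obs

prior = 0.5  # start undecided about the theory

# An unfalsifiable theory "explains" a selfless act and a selfish act
# equally well, so observing either one leaves our credence unchanged.
unfalsifiable = posterior(prior, 0.9, 0.9)      # -> 0.5

# A constraining theory stakes most of its probability on one outcome;
# observing that outcome genuinely raises our credence in it...
constraining_hit = posterior(prior, 0.9, 0.3)   # -> 0.75

# ...and observing the other outcome lowers it, because the theory
# actually risked being wrong.
constraining_miss = posterior(prior, 0.1, 0.7)  # -> 0.125
```

A theory only earns evidence in proportion to the predictions it was willing to get wrong: the update is driven entirely by the gap between what the theory expects and what its rivals expect.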

If a belief or explanation makes definite predictions about the future and repeatedly survives our best efforts at falsification, we can consider it more likely to be true. If a theory has not been falsified, the degree to which it has been exposed to efforts to falsify it is an indication of its ‘truth’. That is not to say that it is the truth, but merely that it has been adequate in explaining something thus far.

To put this another way, it doesn’t really matter how many things seem to confirm your theory. What matters is:

  1. does it make definite predictions about the future that could turn out to be wrong, and
  2. does it survive repeated attempts at falsification, turning up the right answer in every situation we can think of?

If you can answer yes to both of these questions, then you have a belief worth believing in.

  [1] Making Beliefs Pay Rent (in Anticipated Experiences), Less Wrong. Retrieved 2013-03-07.
  [2] Making Beliefs Pay Rent (in Anticipated Experiences), Less Wrong. Retrieved 2013-03-07.