
Updating System 1 Beliefs

Part 1 of us1b.


Most people don’t maximize their utility functions. There is a gap between the things we know are good for us and the things that we actually do. But why is this so? If utility is really so… utile, shouldn’t it be easy for us to maximize? If we get utility from things that make us happy, or give meaning or pleasure to our lives, shouldn’t we just naturally tend to drift in those directions?

There are lots of reasons why we fail to do so, but most of them seem to boil down to the fact that we aren’t perfectly rational agents; we have all sorts of evolutionary baggage and cognitive biases that skew our decision making. This has given rise to a lot of literature focused on updating our mental models, the idea being that our imperfect mental models cause us to make imperfect decisions, and so the more accurately we can get our models to reflect reality, the better our decisions will be.

The purpose of this series of posts is to argue that this strategy is incomplete, and to show that the most effective way to become a better agent1 is not just to learn better mental models, but also to integrate them deeply enough into your consciousness that you can actually feel their rightness in everyday decision making, and acting on them becomes effortless and natural.

Thinking Fast & Slow

Our mental models of the world can be split into two categories: System 1 and System 2.2 System 1 contains our fast/instinctual/emotional models, whereas System 2 has our slower/deliberate/“rational” ones.

System 1 is responsible for our apprehensiveness towards touching hot stoves. The thought of touching a hot stove is viscerally associated with a feeling of burning, which happily makes it very easy for us to choose to not touch hot stoves.

System 2 is responsible for complex decision making. When we decide to take action in a situation where many bystanders are present, we are invoking our System 2 model of the bystander effect, which says “the more bystanders there are in a situation, the less likely any individual is to act, therefore you should override your natural disinclination to act and do something”.

When System 1 and System 2 are in agreement and both accurately reflect reality, everything is great. However, when they disagree, or when one does not reflect reality accurately, we begin to make poorer decisions.

Almost all of the mental model optimization literature seems focused on giving us better System 2 models. Knowledge of cognitive biases like the planning fallacy and hyperbolic discounting takes the form of System 2 mental models that you invoke when trying to make more rational decisions. In contrast, there is comparatively little information on how to update your System 1 mental models.3

Updating System 1

There are two main things I mean when talking about updating System 1:

  1. Transferring accurate System 2 models/beliefs to System 1, so that they become more natural and easier to act on.
  2. Directly updating our current System 1 models so that they better maximize utility.


It seems obvious that we are capable of transferring System 2 beliefs to System 1, thus making them natural. Your model about not touching hot stoves probably started out as System 2, and then at some point you accidentally touched a hot stove, or saw someone else do it, resulting in a hard System 1 update. Almost all skills (e.g. chess, tennis, driving a car, interacting with other people) start out as pure System 2 and then transition to some mixture of System 1 & 2. In fact, skill transitions towards System 1 are (for the most part4) a mark of progress. So it seems not only possible, but useful to perform System 2->1 transitions.5

For example, we all know we shouldn’t eat too many sugary foods (System 2), but at the same time donuts are delicious (System 1). The conflict between these two systems not only wastes precious mental resources, but all too often results in a System 1 victory with its attendant negative health consequences. In an ideal world our System 1 would not be attracted to donuts to the extent that they are bad for us.6 When making decisions about what to eat we would not only know (System 2), we would also feel (System 1) the wrongness of eating (many) donuts.

The following posts are about a few possible ways to update our System 1 beliefs so that they more accurately model reality and thus potentially allow us to live more fulfilling, utility-maximizing lives.

Notes

  1. Better at doing the things you want to do or know you should do. 

  2. Popularized by Daniel Kahneman in Thinking, Fast and Slow. This isn’t exactly the way the terms are usually used, but I think it makes sense. 

  3. Put another way, this is about habit formation via optimizing how your System 1 feels about the action. This seems to be different from most of the habit literature which is usually about overcoming or outsmarting System 1 in some way. I definitely haven’t read every book on habit formation though, and I am not any sort of psychologist, so I reserve the right to be wrong about this statement. 

  4. The OK Plateau is a phase that most people go through or end up in at some point. In this model, when you first begin learning some skill it’s very hard and you have to exert lots of deliberate practice and System 2-iness to improve. Then, at some point it becomes System 1 and habitual, and you can just coast without putting any effort into it to get by. I’ve been writing for quite some time now, but my handwriting has unfortunately not measurably improved since probably second grade. 

  5. Some System 2 mental models are incompatible with System 1. While summing 2+2 is probably System 1 for most people, factoring a thousand digit number will probably never be. 

  6. Ok fine, in an ideal world donuts would actually just be good for you.