One of the mistakes we make is assuming that people will make logical, rational decisions that optimize the overall outcome. People do make decisions that seem logical to them; however, they make those decisions within their own context. They do what’s right for them, not what’s optimal for the overall situation.
Last week, I was hiking with a group. We were descending through a mountain pass, and the people at the front of the line were setting a fairly quick pace. The group quickly split into two or more smaller groups moving at different speeds, and the lead group had to stop frequently to let everyone else catch up.
If you’ve read The Goal, then you know the story of Herbie and the obvious solution to this problem. The slowest hiker should move to the front of the line so that they set the pace, and this way, everyone stays together. We may be moving slower than some people would like, but we’re all moving at a steady state. We’ve optimized the overall system.
What we ignore in The Goal is that Herbie is a child: when the scout leaders tell him to hike at the front, there is no argument. When we’re dealing with adults, making changes isn’t always that simple, because we’re each thinking of our own needs over the group’s.
In our case, the slowest person didn’t want to hike at the front. They wanted to be at the back so that nobody was behind them.
This clearly isn’t optimal from a systems flow perspective, but it did make sense for this person, and none of us wanted to argue. This wasn’t work after all — we were here to have fun.
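To make the flow argument concrete, here’s a minimal sketch (mine, not from The Goal) of why the ordering matters. In single file, each hiker is capped by the pace of whoever is ahead of them, so the effective pace at each position is just a running minimum over the line. The names and speeds below are invented for illustration.

```python
def pace_of_group(speeds_in_order):
    """Each hiker can move no faster than the hiker in front,
    so the effective pace at each position is the running minimum."""
    paces = []
    cap = float("inf")
    for speed in speeds_in_order:
        cap = min(cap, speed)
        paces.append(cap)
    return paces

# Hypothetical hikers and speeds in km/h.
speeds = {"Ana": 5.0, "Ben": 4.5, "Cal": 3.0, "Dee": 4.8}

# Slowest at the back: the front three hike at their own pace,
# gaps open up, and they have to stop and wait.
back = [speeds[n] for n in ("Ana", "Dee", "Ben", "Cal")]
print(pace_of_group(back))   # [5.0, 4.8, 4.5, 3.0] -> group splits

# Slowest at the front: everyone is capped at 3.0 and stays together.
front = [speeds[n] for n in ("Cal", "Ana", "Dee", "Ben")]
print(pace_of_group(front))  # [3.0, 3.0, 3.0, 3.0] -> one steady group
```

Either way, the arrival time is set by the slowest hiker; the difference is whether the group spends the hike together at a steady pace or in stop-and-go fragments.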
Although not as obvious as my hiking example, we see this play out all the time at work.
- I work alone even though we would get better overall results if I paired with someone else.
- I manually test my own work because that’s better for me, even though it would be better for the overall system if all tests were automated.
- I work in a feature branch, keeping all my work isolated from others so that I personally go faster, even though integrating frequently would be a better answer for the team as a whole.
We’re so focused on making sure that we, individually, are working well that we miss the fact that our decisions often make the larger system worse. We need to consider more than just our own effectiveness.
So why are we so short-sighted when it comes to overall effectiveness?
There are two main answers:
- We’re not rewarded for that. We’re rewarded, in most cases, for individual accomplishment, and we have been our whole lives. All the way through school, we were told that looking at other people’s answers was cheating. We’ve been measured on what we individually are able to accomplish. So when we enter business and are told that we’re part of a team, we don’t actually know what that means, and we continue to work individually.
- Looking beyond ourselves and understanding the overall system is cognitively difficult, and our brains are highly optimized to pick the answer that requires the least energy. This is the essence of all cognitive biases. Specifically, when we’re faced with a cognitively difficult problem, we’re very susceptible to a psychological effect called attribute substitution, where we replace the hard problem with an easier one and solve that instead. The easier problem is optimizing only for my own effectiveness, so that’s what I do.
If we want to change this and start improving the bigger picture, how do we overcome these tendencies?
First, we need to stop rewarding individual contribution and start rewarding teamwork. Motivation is a much larger topic, and rewards are only one piece of it, but as long as people are incentivized to do something different, those incentives will work against us.
“It is difficult to get a man to understand something, when his salary depends on his not understanding it.” – Upton Sinclair
Second, we need to educate people on systems thinking. Being able to look at, and optimize, the larger picture is not an obvious skill, and it’s not one that most people have had the opportunity to develop.
Theory of Constraints is a pragmatic step into that world, and the books The Goal (mentioned above) and The Phoenix Project are great introductions to it. Note that they’re effectively the same book, set in different contexts; if you’ve read one, you’ve already got the lessons from the other.
The key is to remember that when we assume optimizing our own behaviour automatically improves the overall system, we’re usually wrong. Improving one will often make the other worse, and we need to carefully consider which is more important. Hint: system performance is almost always more important.