A somewhat unified view of decision making: part 3
I last left you with two thoughts about decision making:
- You don't have to pick between the recognition-primed (intuitive) decision model and the more rational models. Both are valuable.
- Simulation can be involved in both decision-making models. That particular insight might help us integrate thinking about the two.
I'll stop talking about simulation for a bit and talk about Insight Quality.
We all make mistakes. If Insight Quality is measured on a (hypothetical) scale from 0 to 100, it's likely never at 0 and never at 100.
We learn by making mistakes and reacting appropriately to them. If you're a system dynamicist, you recognize that feedback systems work because of an error signal: that difference between what you want and what you've got. So don't worry about making a mistake (especially small ones), but do pay attention to what you do with mistakes.
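That error-signal idea can be sketched in a few lines of Python. This is a toy illustration, not anything from system dynamics proper: the names (`goal`, `state`, `gain`) and the proportional-correction rule are my assumptions, chosen only to show how the gap between what you want and what you've got drives the system.

```python
def run_feedback_loop(goal, state, gain=0.5, steps=10):
    """Repeatedly nudge `state` toward `goal`, driven by the error signal."""
    history = [state]
    for _ in range(steps):
        error = goal - state      # what you want minus what you've got
        state += gain * error     # react in proportion to the error
        history.append(state)
    return history

trajectory = run_feedback_loop(goal=100.0, state=0.0)
# With gain=0.5, each step halves the remaining gap;
# after 10 steps the state is within 0.1 of the goal.
```

The point of the sketch is the shape of the loop, not the numbers: no error signal, no correction, no learning.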
Insight Quality, like Actionable Insight, declines if left alone. Pick a skill, even riding the proverbial bicycle: if you don't use it for a while, you'll get rusty at doing it, and you'll likely make a few mistakes you wouldn't have made earlier.
Before I go further, I'll admit that the distinction between Actionable Insight and Insight Quality, and between the ways one increases either, is a bit artificial. I'll keep them distinct to make my point, but please understand that I understand there's overlap.
How do we improve our organization's or our personal Insight Quality? We need a feedback loop, a means to observe when things aren't going right, so that we can learn from them. In this model, perhaps the easiest spot to observe those situations is when Latent Problems re-emerge (or, almost equivalently, when we see Problems we recognize as old friends).
There is a methodology called action research (or action learning; the distinction between the two is a bit fuzzy, but action research often carries the connotation of being written down and shared). Both names describe the entire loop that encompasses both action and learning.
In a way, you can look on simulation and action learning as two different things. One, you might think, comes before the decision, to inform you; the other comes after the decision, to improve you.
You might also look on them as very similar. With simulation, you're looking at the analog (typically, a computer model) of the real-world situation to gain insights. With action learning, you're also looking at an analog of the situation you're facing, but this analog is the history of your and others' past attempts.
In either case, I submit that we need to blend both in our work. We need rational processes (and simulation is often a valuable approach) to build Actionable Insight, and we need action research or learning to correct lessons we mis-learned or forgot from our rational processes.
Based on these foundations, we need the ability and willingness to make good decisions in a timely fashion, often using a more intuitive approach (yet still a simulation-based one, if we accept the recognition-primed decision model as a description of reality). We also need the wisdom to know when our intuition won't suffice and we must fall back on a rational model.
So what's left? I've got one more installment planned to tie a few loose ends together and to live up to my promise to fit Tom Peters into the equation.