Thursday, May 12, 2005

More on intuition


I've finished Gary Klein's Sources of Power: How People Make Decisions, and I'm glad I read both it and The Power of Intuition. I prefer the graphical model of recognition-primed decision making in The Power of Intuition, but I like the breadth of Sources of Power.

Klein has spent years studying how people really make decisions, he's created what appear to be useful tools, and his ideas have apparently been picked up by the U.S. military and others. I applaud anyone doing that much to help us understand how we make decisions and how we can make them better.

Yet something is troubling me, too. Perhaps it's just that he's suggesting that something I was taught and used, along with many others at one of my former employers, may not be the best way to work, but I think it's more than that.

Here are a couple of reasons my intuition (irony intended) is raising red flags. In chapter 16, he claims that scientists look for confirmation of their work when they should try to disconfirm their theories. There is likely some truth to that: perhaps all of us wish our favorite theories were true, and we may indeed occasionally put our thumbs on the scale in favor of our pet theories. But my introduction to the scientific method emphasized the need for a "null hypothesis": a statement that contradicts our conjecture. Instead of proving the conjecture, the scientist's task is to disprove the null hypothesis, and the scales are heavily biased toward retaining that null hypothesis unless the evidence is overwhelming. That seems like a serious attempt at disconfirmation.
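
To make that concrete, here's a minimal sketch of the null-hypothesis idea in Python. The coin example, the counts, and the 0.05 threshold are my own illustration (not Klein's, and not from any statistics package); the point is simply that the test is stacked against our conjecture.

```python
# Conjecture: the coin is biased toward heads.
# Null hypothesis (what we actually try to disprove): the coin is fair.
from math import comb

def binomial_p_value(heads: int, flips: int, p_null: float = 0.5) -> float:
    """One-sided p-value: probability of seeing at least `heads` heads
    in `flips` flips if the null hypothesis (a fair coin) were true."""
    return sum(comb(flips, k) * p_null**k * (1 - p_null)**(flips - k)
               for k in range(heads, flips + 1))

# Suppose we observed 62 heads in 100 flips.
p = binomial_p_value(62, 100)
print(f"p-value = {p:.4f}")  # roughly 0.01

# The scales tip toward the null: we reject it only when the evidence
# is overwhelming (conventionally, p below 0.05 or 0.01).
if p < 0.05:
    print("Reject the null hypothesis: the coin looks biased.")
else:
    print("Retain the null hypothesis: not enough evidence of bias.")
```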

When I became an engineer, I discovered that a key part of an engineer's job is trying to break what you've made. At one company I know, after circuits are designed, engineers test them over environmental extremes of temperature, humidity, and electromagnetic interference to see whether they break under reasonable use. Then the circuits are subjected to "abuse testing," in which people try potentially ridiculous things (shorting connections together, plugging the circuit into the wrong power source, etc.) to see if they can withstand unreasonable use. That seems like another serious attempt at disconfirmation.
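
The software analogue of abuse testing is the same habit: deliberately feed a piece of code input it was never meant to see and confirm it fails safely. Here's a small sketch using pytest; parse_voltage is a hypothetical function I invented for the example, not anything from a real library.

```python
import pytest

def parse_voltage(text: str) -> float:
    """Parse a reading like '3.3V'; raise ValueError on anything else."""
    if not isinstance(text, str) or not text.endswith("V"):
        raise ValueError(f"not a voltage reading: {text!r}")
    return float(text[:-1])  # float() raises ValueError on garbage digits

# "Reasonable use": confirmation that the happy path works.
def test_parses_normal_reading():
    assert parse_voltage("3.3V") == 3.3

# "Abuse testing": deliberate attempts at disconfirmation.
@pytest.mark.parametrize("bad", ["", "V", "3.3", "abcV", None, "12VV"])
def test_rejects_garbage(bad):
    with pytest.raises(ValueError):
        parse_voltage(bad)
```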

He also claims that, in practice, few people use the rational decision-making approach of comparing alternatives. To a large degree he's correct: I'm sure I make many a decision in the course of a day using something close to his model. Yet I recall a decision I had to make for a client recently, and my process didn't seem all that unusual. They asked me to do a task, and they had always used approach A. I looked at the task, realized it would likely be challenging that way, and thought of approaches B and C. I first thought B would be best, but I followed essentially the comparative process above. With a bit more data and a bit more investigation, C turned out to be the likely best choice. I made my proposal to my client, they accepted it (I think at least partly because I had looked at alternatives and so didn't appear biased), and we put it in place successfully.
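
For what it's worth, the comparison step can be as light as a small weighted table. This sketch invents the criteria, weights, and scores purely to show the mechanics; none of the numbers come from the actual engagement.

```python
# Weight each criterion (weights sum to 1), score each approach 1-5
# (higher is better), and compare the weighted totals.
criteria_weights = {"effort": 0.3, "risk": 0.4, "maintainability": 0.3}

scores = {
    "A (the usual way)":     {"effort": 2, "risk": 3, "maintainability": 3},
    "B (first instinct)":    {"effort": 4, "risk": 2, "maintainability": 4},
    "C (the eventual pick)": {"effort": 3, "risk": 5, "maintainability": 4},
}

for name, s in scores.items():
    total = sum(criteria_weights[c] * s[c] for c in criteria_weights)
    print(f"{name}: weighted score {total:.1f}")
# A scores 2.7, B scores 3.2, and C comes out ahead at 4.1.
```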

What now? Perhaps it's time for a bit of action research on how I make decisions, with action research's characteristic emphasis on attempts at disconfirmation. Perhaps we can each contribute to an improved understanding of this important process, and perhaps we can get better at what we do.

I've found I learn well when I combine the action-and-reflection cycle of action research with the exploration of new ideas, such as those I've gained from Klein's work. I'll be using many of the ideas from his books.

What are your thoughts?
