Tuesday, November 21, 2006

System dynamics, systems thinking, and evaluation

The recent Evaluation 2006 conference had twelve sessions sponsored by the Systems in Evaluation Topical Interest Group. There was a lot of great material; I'd like to highlight one session to give a flavor of how systems thinking (system dynamics, in this case) and evaluation can fit together.

Robin Miller of Michigan State University spoke about a system dynamics investigation that she and an MSU team (Ralph Levine, Kevin Khamarko, Maria T. Valenti, and Miles Allen McNall) carried out. It seems that the Centers for Disease Control and Prevention developed a set of programs to address the needs of groups at risk of HIV/AIDS. They tested the programs and found that they worked.

That's the good news. The not-so-good news came when states began to mandate the use of those programs for community-based organizations wanting funding to work on HIV/AIDS. Robin and her team weren't confident that community-based organizations could bring to the implementation of those programs all the money, time, focus, and expertise that the CDC had.

She and her team developed and tested a system dynamics model that, in essence, couples the program theory behind one of the CDC programs with the operating characteristics of typical community-based organizations. In a way, system dynamics allowed the MSU team to conduct a summative evaluation at a point when data was only available for a formative evaluation.

The MSU model suggests that these programs will likely be far less successful when carried out with the normal resources of community-based organizations. While the team is continuing to explore the issue, that's already important information; it may lead some states to think harder about how much flexibility they give such organizations in addressing an important need. She presented a more detailed report of this work at this past summer's System Dynamics Society conference.
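
To make the idea concrete, here is a minimal, hypothetical sketch of that kind of model: a program-theory chain of outreach, enrollment, and completion coupled to an assumed staff-capacity constraint, simulated with simple Euler steps in Python. The structure and every number in it are illustrative inventions for this post, not the MSU team's model.

```python
# Hypothetical stock-and-flow sketch (not the MSU model): a program-theory
# chain (outreach -> enrollment -> completion) coupled to an organization's
# limited staff capacity, simulated with simple Euler integration.

DT = 0.25       # time step, in months
MONTHS = 36

# Assumed parameters for a typical community-based organization
STAFF_HOURS_PER_MONTH = 320.0   # total delivery capacity
HOURS_PER_CLIENT = 8.0          # program-theory requirement per enrolled client
CONTACT_RATE = 30.0             # prospects reached per month by outreach
ENROLL_RATE = 0.4               # fraction of prospects enrolling per month
DROPOUT_RATE = 0.1              # fraction of enrolled clients dropping out per month

prospects, enrolled, completed = 0.0, 0.0, 0.0
for step in range(int(MONTHS / DT)):
    # Capacity constraint: enrollment slows as staff hours saturate
    capacity_used = enrolled * HOURS_PER_CLIENT
    slack = max(0.0, 1.0 - capacity_used / STAFF_HOURS_PER_MONTH)

    reach = CONTACT_RATE                      # outreach flow (prospects/month)
    enroll = prospects * ENROLL_RATE * slack  # enrollment flow, capacity-limited
    finish = enrolled / 6.0                   # completion flow (6-month program)
    drop = enrolled * DROPOUT_RATE            # dropout flow

    prospects += (reach - enroll) * DT
    enrolled += (enroll - finish - drop) * DT
    completed += finish * DT

print(f"After {MONTHS} months: roughly {completed:.0f} clients completed the program")
```

Running the same structure with a well-resourced agency's parameters versus a typical community-based organization's parameters is, in spirit, what lets a model like this speak to likely outcomes before outcome data exist.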

What are the lessons for program evaluation?


  1. Systems approaches can be valuable tools for evaluation, but they don't replace evaluation.
  2. Different systems approaches may fit different evaluation needs.

    • For example, soft systems methodology or critical systems heuristics may help in the valuing and worth stages—determining what indeed will be valued in a program.
    • System dynamics, complex adaptive systems, cultural-historical activity theory, and other approaches may help in the design, testing, and communication of the program theory. As Bob Williams pointed out in sessions at Evaluation 2006, multiple methodologies, not just multiple methods, may give us better insight into the programs we're evaluating, for they allow broader triangulation.
    • Comparing the results of those approaches with what happens in the real world may support formative evaluation efforts.
    • A summative evaluation may draw on those approaches to understand and explain what took place.

  3. System dynamics can help explore feedback systems (a small illustrative sketch follows this list).

    • Such a model can help explore why feedback systems (for example, word-of-mouth recruitment in this model) work as they do.
    • Such a model can help explore claims of causality by allowing the evaluator to activate one proposed causal mechanism at a time in the model to see if it can account for observed behavior.
    • Such a model can let the evaluator measure and observe otherwise unobservable variables in the system, whether that's a look into the future or at a variable that can be measured only with great difficulty and at great cost.

  4. There's no fixed approach that works in all cases and for all purposes. There may never be.
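
As an illustration of point 3, the toy model below (again an invention for this post, not the MSU model) lets you switch a proposed causal mechanism, a word-of-mouth recruitment loop, on or off and compare the two trajectories. Comparing each run against observed enrollment data is what would indicate whether that mechanism is needed to account for the behavior actually seen.

```python
# Hypothetical sketch: activate one proposed causal mechanism (a word-of-mouth
# recruitment loop) at a time and compare the resulting behavior over time.
# All parameters are illustrative assumptions.

def simulate(word_of_mouth_on, months=24, dt=0.25):
    """Return the enrollment trajectory with the word-of-mouth loop on or off."""
    BASE_RECRUITMENT = 5.0   # clients/month from planned outreach
    REFERRAL_RATE = 0.2      # referrals per enrolled client per month (assumed)
    EXIT_RATE = 0.15         # fraction of enrolled clients leaving per month

    enrolled = 10.0
    trajectory = []
    for step in range(int(months / dt)):
        referrals = REFERRAL_RATE * enrolled if word_of_mouth_on else 0.0
        inflow = BASE_RECRUITMENT + referrals   # reinforcing loop when switched on
        outflow = EXIT_RATE * enrolled
        enrolled += (inflow - outflow) * dt
        trajectory.append(enrolled)
    return trajectory

with_loop = simulate(word_of_mouth_on=True)
without_loop = simulate(word_of_mouth_on=False)
print(f"Month 24 enrollment with the loop on:  {with_loop[-1]:.1f}")
print(f"Month 24 enrollment with the loop off: {without_loop[-1]:.1f}")
```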



Note: system dynamics is not the tool to predict the future, if indeed there is such a tool. It is a tool to help one understand likely future patterns of behavior, given the policies in place in a program.


If you want more information about any of the systems approaches I mention, Bob Williams' Web site has a wealth of information under Systems Stuff.
