Friday, November 14, 2008

Introducing literate modeling

Organizational modeling has much in common with programming. If you create or ask for organizational models in one of the many text-based systems, you likely realize that already. If you use one of the many GUI-based systems, you may recognize what you're doing as visual programming.

In organizational modeling, the challenges are many:

  • How to create a competent model
  • How to make the process transparent so others can collaborate and still others can benefit
  • How to organize the model in ways that are understandable both to people and the computer


It's both easier and harder than it sounds: it's easier to produce useful, insightful models than many might think, and it's harder to create solid models than some would have you believe.

Despite the claims of some, I don't think there's one right way: I think we need a diversity of approaches. To that end, I've been exploring text-based modeling and simulation for the last few years. Recently I've begun to merge the ideas of literate programming and organizational modeling into what you might call literate modeling. Strictly speaking, that's only doable with text-based modeling languages. It brings with it a few features that seem important:


  • Thinking about the model and creating the model go hand-in-hand. I've already found that text-based modeling has helped me think in new and deeper ways about the problems I've addressed, and literate modeling strengthens that thinking by linking them even more tightly.

    Why is that so? Part of the reason has little to do with literate modeling and textual programming per se: I found that splitting up the work of modeling and simulation into different types of tasks (conceptualizing and modeling a problem, designing experiments to test theories of action, analyzing and thinking about experimental results, and communicating those results to others) helped me do a better job at each stage.

    Literate modeling adds a writing component to the modeling process, which may help us think more carefully about our modeling decisions. Forcing the explanation of the model to go hand-in-hand with its creation seems to keep me from making assumptions I can't support. In fact, I tend to let the writing drive the modeling, which I think shapes models more around hypothesized causality than around what merely works technically. You can see what others have said about literate programming; I think those ideas apply to literate modeling, too.

    Of course, you can document normal models extensively, too. As others have said, literate approaches link the thinking required to explain a model (or a program) to other people tightly to the model (or program) itself; the model flows from the thinking and writing instead of the writing becoming partially a reverse engineering of the model code.


  • Model consumers, collaborators, and even developers think most naturally about models in a sequence that doesn't necessarily match the sequence the simulator needs. Literate modeling allows you to decouple those two sequences completely; the small sketch after this list shows the idea.


  • To explain a model, you sometimes need graphs, diagrams, pictures, tables, or other non-textual components. A good literate modeling system can integrate all those components easily without being tied down to the features offered by a particular simulator. In a way, this is the old Unix philosophy: use a collection of smaller tools, each well-suited to the task at hand, and put them together flexibly as you need them.


  • Literate modeling is one approach to applying some of the best of the lessons of software engineering to a related field in order to do better work.
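
To make the reordering concrete, here's a minimal, noweb-flavored sketch of a literate model. (I use noweb syntax for illustration; CWEB and other literate tools offer equivalents.) The inventory example, the chunk names, and the MCSim-style model text are all invented for this sketch:

    @ A reader meets the problem through its feedback loop, so the loop
    comes first in the explanation, even though the simulator wants
    parameter declarations before the dynamics.

    <<dynamics>>=
    Dynamics {
      dt (Inventory) = (target - Inventory) / adj_time;
    }

    @ Only now do we pin down the parameters the loop relies on.

    <<parameters>>=
    target   = 100;   # desired inventory level
    adj_time = 4;     # adjustment time

    @ The root chunk assembles the pieces in the order the simulator
    needs.

    <<inventory.model>>=
    States = {Inventory};
    <<parameters>>
    Initialize { Inventory = 20; }
    <<dynamics>>
    End.

Running notangle (with -R to name the root chunk) extracts the model file in simulator order; noweave typesets the essay in reader order. The two sequences no longer need to agree.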


Why do you care? Why should you care?

If you work with or use organizational simulation models to make better business or organizational decisions, perhaps literate modeling offers you a way to have more transparent, more collaborative, more understandable, and better-thought-out models. Better models, applied appropriately to generate better insights, might just help you make better decisions. Better decisions, carried through with good implementation, might just lead to better results over the long haul.

Is literate modeling or, for that matter, text-based modeling the only way to go? By no means! By the rule of diversity, if nothing else, that would be wrong. Each of the graphical modeling tools I use has brought its unique strengths to the problem-solving process, and I continue to use them. "Horses for courses," as they say. Yet don't count out text-based tools for effectively engaging those with whom you're working, don't think that text-based modeling and literate modeling have to be ponderous, and don't count out the power of text and writing to help you think more effectively about the challenges you face.

What do you think?


4 Comments:

Blogger Deans said...

Hi Bill-

Very interesting post! I completely agree with your statements about the tremendous value of models and analysis/simulation. I just wish that I had come up with such powerful prose to articulate the value of good models.

I realize that you're an MCSim proponent, but I gather from your "Other Tools" page that you also appreciate the value of discrete event simulation. Our team -- http://www.foresightsystems-mands.com -- provides a very flexible, resource-aware modeling and simulation environment that might be applicable to some of your cases. With both graphical and textual modeling languages, we generally find that our models communicate effectively to both our collaborators and our simulator.

As you are someone who's obviously done a great deal of serious thinking about system modeling, we'd love to have you participate in the discussion on our System Modeling Perspectives blog -- http://www.foresightsystems-mands.com/blog/


Keep up the excellent work,

Dean Stevens

14 November, 2008 17:41  
Blogger Bill Harris said...

Dean, thanks for stopping by, and thanks for the kind words. I do like MCSim, but I also use iThink and Vensim, among other tools, depending upon the situation. And indeed, I have also benefited from good discrete event models; "Emphasis on Business, Technology, and People Cuts Turnaround Time at Hewlett-Packard's Lake Stevens Division" documents a project in which a discrete event simulation played an important role.

I have subscribed to your Atom feed; thanks for the invitation.

14 November, 2008 19:15  
Blogger Wayne said...

Yes, very interesting. I followed links to the MCSim GNU page, but there was nothing there for me to read in order to get a sense of what it might do better than, say, Vensim PLE. I wasn't ready to download it, so I stopped at that point. To be motivated to learn a new simulation tool, it would need to fill an important niche that is currently going begging. My toolkit already contains six simulation packages (2 SD, 3 DSS, 1 ABS) that seem to cover the space fairly well. Still, it would be great to see a table comparing the strengths and weaknesses of MCSim to Vensim and iThink, for example. I certainly agree that models need to be transparent and understandable to potential users!

16 November, 2008 08:20  
Blogger Bill Harris said...

Good request, Wayne, and perhaps a topic for another day. I'm still puzzling through the question myself of which tool to use for which purposes; as you know, I use other simulators in teaching and consulting, as well. Here are some of my observations on MCSim:

First, I'm not inclined to worry about some of the fine detail (simulator X has x functions, while simulator Y has y), as you can create system dynamics models in a multitude of tools; I wrote my first model in C-64 Logo—not a platform with lots of system dynamics tools. The key features I find enticing in MCSim include (not in priority order):

- LSODES integration, avoiding the need to set DT or TIME STEP: while that seems trivial, it means I no longer need to go back at the end of a modeling exploration to verify that my choice of DT hasn't been invalidated by subsequent model changes. MCSim still provides Euler integration for those times when it's appropriate.

- Interpolation through the GSL: Most simulators offer a limited set of alternative interpolation algorithms for dealing with tabular nonlinearities, and they don't always fully document their implementations. With MCSim, you can easily use any of the GNU Scientific Library's interpolation algorithms, or you can write your own. That said, calling the GSL from a bit of inline C code in an MCSim model is a bit arcane.

- Text-based models: Many, but not all, simulators store models in a binary format. While I don't have a problem with proprietary simulators when they are called for (I'm currently evaluating a simulator with a multi-thousand-dollar price tag), I like the model to be stored in plain text. For one, I can then see the model even if I no longer have access to the simulator. For another, I can use other programs (including CWEB, the program that kicked off this thread) to create or manipulate models. For still another, I can readily use my existing revision control system to manage different versions of a model and to see how it has evolved. (The sketch after this list gives the flavor of the format.)

- A "small tools" approach: Often, when you buy a simulator, you simultaneously buy a model editor, a forms editor, and a graphing program. I have my own editor I like, I have multiple graphing programs I like, and I have other programs to manipulate and analyze data. MCSim forces you to select each of those pieces yourself to fit your style and current needs. (That's another way of saying it doesn't have any of those features, much as traditional Unix programs were designed to do only one thing and to do it well.)

- Parallel simulation: While MCSim doesn't really offer parallel simulation, it can seem like it. You can define a set of simulation experiments (2, 20, 200, whatever) and then run them all with one command; the factorial sketch further down shows the idea. Then you have all the data in one file, and you can spend your time analyzing individual runs and the differences between runs.

- Multi-level models: Much system dynamics work is deterministic, and MCSim works well there. Where MCSim really shines is in Metropolis sampling of multi-level models. I've done some experimentation with combining its Bayesian analysis with system dynamics, and I think there's a real contribution to be made. Indeed, there was a recent (and independent) article in the System Dynamics Review about integrating Bayesian techniques with system dynamics.
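
To give a feel for the plain-text format, here is roughly what a minimal MCSim model definition file looks like. The one-stock inventory example and all of its names are mine, not from any real project, and the details vary across MCSim versions:

    # inventory.model -- a one-stock sketch

    States  = {Inventory};   # the stock to be integrated
    Outputs = {Gap};         # computed for reporting, not integrated

    target   = 100;          # parameters, with default values that
    adj_time = 4;            # a simulation file can override

    Initialize {
      Inventory = 20;        # initial stock level
    }

    Dynamics {
      dt (Inventory) = (target - Inventory) / adj_time;
    }

    CalcOutputs {
      Gap = target - Inventory;
    }

    End.

Notice what's absent: there's no DT anywhere. A companion simulation file says Integrate (Lsodes, 1e-6, 1e-8, 1); once, and LSODES chooses its own step sizes (Euler remains available when fixed steps are appropriate). Because it's all plain text, diff and a revision control system handle the model as happily as they handle source code, and the multi-level Bayesian work mentioned above uses the same language with Level sections, Distrib statements, and an MCMC directive.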

What really attracted me to MCSim was the process it fosters. With integrated systems, I find I tend to attend to all aspects of the process at once: placing the first stock on the diagram is simultaneously a modeling decision (what stocks are important?), a communications decision (where do I place things so others can make sense of my model?), and a parameterization and experimental design decision (what's the initial value for this stock?).

When I did my first MCSim model, I designed the model without worrying much about those other aspects. When the model was complete, I began to think about the experiments I wanted to run and found myself, somewhat to my surprise, designing a factorial experiment—something I don't think I had done before, at least not right from the start. That led to a set of (I think it was) five experiments to run at once, and then I moved to analysis—looking at the data in different ways. As you can see if you look at some of my earlier postings on MCSim, I came up with a few graphs I've never seen in any of the commercially available simulators I've used.
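
A batch of that sort is just a list of Simulation sections in a single specification file. Here's a sketch, reusing the invented inventory model from above, of what a small 2x2 factorial in target and adj_time might look like; one command runs all four experiments and writes the results to one output file:

    # inventory.sim -- a 2x2 factorial, run in one invocation

    Integrate (Lsodes, 1e-6, 1e-8, 1);   # no DT to tune

    Simulation { target = 100; adj_time = 2; PrintStep (Inventory, 0, 40, 1); }
    Simulation { target = 100; adj_time = 8; PrintStep (Inventory, 0, 40, 1); }
    Simulation { target = 150; adj_time = 2; PrintStep (Inventory, 0, 40, 1); }
    Simulation { target = 150; adj_time = 8; PrintStep (Inventory, 0, 40, 1); }

    End.

Each PrintStep asks for Inventory from time 0 to 40 at unit intervals; the analysis programs then pick the combined output file apart.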

In essence, I found that MCSim led me to think more deeply about each phase of the modeling process (modeling the problem, designing experiments, analyzing data, and communicating results) than the alternatives had. With the addition of literate modeling, I'm finding it is encouraging me to be even more thoughtful about tradeoffs I'm making in designing those models.

So do I think that MCSim is the only way to go? No. For system dynamics modeling alone, I'm finding I have at least three alternatives with particular reasons to select each:

- If I want to create a model quickly, I like Vensim. There's something about its UI that, while perhaps a bit quirky compared to other applications, is very productive. It also has the best facility for dealing with units of any simulator I own (MCSim doesn't even deal with units), and I think units checking, whether manual or automated, is an important part of modeling. I also like Vensim's causal tracing graphs.

- If I want to create a model I can share with others interactively, I quite like iThink. It has great features for leading people through a story and allowing them to modify and test new scenarios. It's also much easier for people to learn than MCSim.

- If I want to think deeply about a problem, I like MCSim.

That brings me to my current puzzle. The productivity of Vensim might fit better with a fast prototyping and iterative approach to modeling, which is more like today's agile approaches to software development. That's very attractive, but I sometimes have the feeling that I may complete my modeling work (not just the model creation) faster and more effectively in MCSim because it encourages me to focus more intensely on each phase, one at a time (less multi-tasking).

I'm not sure, though. I just started a literate model Friday, and I feel the desire to start simulating even as I'm still in the writing and thinking mode. I keep reminding myself that my prior experience suggests the thinking is important now and the real speed will come because I've thought through the problem better (fewer iterations later) and because the later stages will go more quickly.

Does that help answer your question? If you're happy with your simulators, I see no reason to change. As I've been told, it's the poor carpenter who blames the tools; we can create good, useful models with a wide variety of simulators. I've been quite happy with all three of the simulators I currently use, and I find I can do most anything with any of them if I want to. I'm just pleasantly surprised at the benefits offered by this simulator that's largely unknown in the system dynamics community. Perhaps the answer is to develop fluency in a range of tools and approaches. I recall research by Bill Curtis and his group at the old MCC suggesting that fluency in a number of programming languages made for better software developers, perhaps because they could visualize a wider variety of solution alternatives. Might the same apply to simulation modeling?

16 November, 2008 14:42  
