Monday, July 03, 2006

Flu Scenarios --- Are You Scared Yet?

Conventional wisdom among bloggers is that one should not send readers away at the beginning of an article. This rule makes good sense, but I am going to break it. There is a beautifully prepared four-page cover article from Risk and Insurance that is worth your attention. In particular, at the end of the article there is a table that you may want to pin to a wall. It provides a clear half-page summary of five scenarios that deserve a serious slice of the collective mindshare.

After you have read "Model Apocalypse" by Matthew Borodsky, please return for a discussion of the underlying methodology.

Scenario Development --- What is It and How is It Used?

In a nutshell, scenario development is a matter of writing down (or just considering) the way things might work out. This is an ancient process that is quite familiar to anyone who has ever played chess or read a biography of Napoleon.

It is natural to hope that computers could improve this process, and in some limited domains they can. When I was an assistant professor, I could beat any chess-playing program in the world; now, some thirty years later, I play chess about as well as before, but there are shareware programs that can beat me 100 games in a row. This progress has been achieved by efficient computation of a vast number of feasible scenarios.
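To see the brute-force idea in miniature, here is a toy sketch (not how any real chess engine is built) that exhaustively scores every line of play in a tiny game tree:

```python
# A toy illustration of brute-force scenario search: exhaustively
# evaluate every continuation in a small game tree.
from typing import Union

# A position is either a numeric score (a leaf) or a list of successors.
Tree = Union[float, list]

def minimax(position: Tree, maximizing: bool = True) -> float:
    """Score a position by searching all continuations to the end."""
    if isinstance(position, (int, float)):   # leaf: use its static score
        return float(position)
    scores = [minimax(child, not maximizing) for child in position]
    return max(scores) if maximizing else min(scores)

# Three candidate moves, each met by two possible replies.
toy_game = [[3, 5], [2, 9], [0, 1]]
print(minimax(toy_game))  # 3: the best outcome guaranteed against best replies
```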

In war gaming there has also been substantial --- though less definitive --- progress. However one feels about the wisdom of the wars in Iraq, it is plain that military planning took place with vastly more depth and detail than could have been imagined by commanders of an earlier generation.

Computers and Scenario Generation

How does one design a scenario generator? If we leave aside for the moment a few bells and whistles, we see that the design process is both simple and highly limiting:
  • You consider a collection of discrete and continuous variables that your core scientific knowledge tells you are relevant to your scenarios. In this case, the truly key variables are contagion rates and mortality rates. The other event variables that appear in the table on the last page of the Risk and Insurance article may seem to add realism, but a little thought shows them to be largely cosmetic.
  • You then generate scenarios using either cross tables (if you have few variables) or event trees (if you have many variables); a minimal sketch of the cross-table approach follows this list.
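Here is that cross-table sketch. The rates below are invented purely for illustration; only the structure matters:

```python
# A minimal cross-table scenario generator. The rate values are
# hypothetical placeholders, not estimates of anything.
from itertools import product

# Attack rate: fraction of the population infected (hypothetical values).
contagion_rates = [0.15, 0.25, 0.35]
# Case fatality rate among the infected (hypothetical values).
mortality_rates = [0.005, 0.02, 0.05]

# The cross table: every combination of the key variables is a scenario.
for i, (attack, cfr) in enumerate(product(contagion_rates, mortality_rates), 1):
    deaths_per_100k = 100_000 * attack * cfr
    print(f"Scenario {i}: attack={attack:.0%}, CFR={cfr:.1%}, "
          f"deaths per 100,000 ~ {deaths_per_100k:,.0f}")
```

Three contagion rates crossed with three mortality rates yield nine scenarios, and anything else you bolt on inherits its credibility entirely from those two inputs.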
When you strip the process down to its basic elements, you come to grips with the lamentably GIGO (garbage in, garbage out) nature of scenario generation. We have some basic possibilities for infection rates and mortality rates that are not garbage, but when we start to go beyond these --- well, that's when the garbage starts to show up.

Scenario Development --- What Went Wrong Here?

In chess or even in military conflict, computers help us deal with massive detail. In chess the breakthrough came when investigators at IBM decided to approach the problem via direct, brute-force computation. Earlier attempts to use clever human-like heuristics had all ended in failure.

In the pandemic flu situation we (1) have no reliable detail and (2) face no more complexity than one can handle with a few index cards. In such situations, computers cannot perform more effectively than creative, well-informed individuals.

Finally --- Waterman's Paradox

In a Wharton seminar not long ago, Richard Waterman isolated an important behavioral phenomenon that I have come to call Waterman's Paradox. Richard talked from the heart about his own consulting experience; I'll put his story in a few lines:
  • As a consultant you build a model, run it, and --- by luck of the draw --- you happen to get results that you know your clients won't like and won't believe.
  • You say to yourself "This can't be right." You then change the model.
  • After another loop (or two) through this process, you finally get a model that is consistent with the original intuition of your clients. Incidentally, this tends to happen about the time your clients start asking for their report.
  • With some relief that you now have a model that confirms what everybody believed at the beginning, you say to yourself "This looks right" and you start writing the report.
It takes exceptional personal integrity for a consultant to tell the story that Richard told, but as he was telling it you could see the heads in the room nodding up and down. Richard was telling the truth. All model builders have a huge bias toward confirming their original intuition.
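The stopping rule Richard described is easy to simulate. In the sketch below (every number is invented), an honest but noisy model gets reworked until its answer lands in the client's comfort zone; because runs that "look right" are kept and runs that don't are redone, the average reported value is pulled toward what the client believed all along:

```python
# A minimal simulation of Waterman's Paradox (all numbers hypothetical):
# reworking a noisy model until the result falls in the client's comfort
# zone biases the reported answer toward the client's prior.
import random

random.seed(0)
TRUE_VALUE = 10.0            # what an honest single run estimates, on average
CLIENT_RANGE = (4.0, 6.0)    # what the client expects and will believe

def run_model() -> float:
    """One honest but noisy model run."""
    return random.gauss(TRUE_VALUE, 3.0)

def report(max_reworks: int = 20) -> float:
    """Rework the model until the result 'looks right' to the client."""
    estimate = run_model()
    for _ in range(max_reworks):
        if CLIENT_RANGE[0] <= estimate <= CLIENT_RANGE[1]:
            break                     # "This looks right": write the report
        estimate = run_model()        # "This can't be right": change the model
    return estimate

reports = [report() for _ in range(10_000)]
print(f"Truth: {TRUE_VALUE:.1f}, average reported value: "
      f"{sum(reports) / len(reports):.1f}")
```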

Computers can add little or nothing to the honest development of top-level scenarios for pandemic flu. So far --- and in the foreseeable future --- they just repeat what we have already sketched on the backs of envelopes. For genuinely novel insight, I'd rather count on a late-night Charlie Rose roundtable session with guests like Tom Clancy and Laurie Garrett.

On the other hand, if you want to get down to more detailed scenarios, say of the kind that might tell you the order in which Houston hospitals would start closing due to flu case overload, then computer models can be of genuine help. Such projects are worth doing, even if --- like all projects --- they must live with Waterman's paradox.
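For concreteness, here is a minimal discrete-time SIR epidemic sketch in which every parameter is invented; a real planning model would replace each number with local data, which is exactly where the extra detail pays off:

```python
# A minimal discrete-time SIR model (all parameters hypothetical) that
# flags the first day on which flu patients would exceed bed capacity.
N = 2_000_000              # metro population (hypothetical)
beta, gamma = 0.4, 0.25    # daily transmission / recovery rates (hypothetical)
hosp_frac = 0.02           # share of active cases needing a bed (hypothetical)
capacity = 2_000           # staffed flu beds in the region (hypothetical)

S, I, R = N - 100.0, 100.0, 0.0   # start with 100 seed infections
for day in range(1, 366):
    new_infections = beta * S * I / N
    recoveries = gamma * I
    S, I, R = S - new_infections, I + new_infections - recoveries, R + recoveries
    if hosp_frac * I > capacity:
        print(f"Beds exhausted on day {day} with {I:,.0f} active cases")
        break
else:
    print("Capacity holds for the full year under these assumptions")
```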


1 Comment:

Blogger Shane said...

Regarding Waterman's paradox, I recently attended a talk where the presenter was selling their procedure as an "honest" approach to the problem. By that, they meant that their procedure would not be as biased by the user's expected/desired outcome as competing procedures are. The cynic in me felt like asking, "Doesn't that just mean that they won't use your procedure?"

3:56 PM  
