

Exploratory Modeling
Steve Bankes
RAND, Santa Monica, California
Introduction: Exploratory modeling is a research methodology that uses computational experiments to analyze complex and uncertain systems (Bankes, 1993). Exploratory modeling can be understood as search or sampling over an ensemble of models that are plausible given a priori knowledge or are otherwise of interest. This ensemble may often be large or infinite in size. Consequently, the central challenge of exploratory modeling is the design of search or sampling strategies that support valid conclusions or reliable insights based on a limited number of computational experiments.
Exploratory modeling can be contrasted with the use of models to predict system behavior, where a model is built by consolidating known facts into a single package. When experimentally validated, this single model can be used for analysis as a surrogate for the actual system. Examples of this approach include the engineering models used in computer-aided design systems. Where successful, this "consolidative" methodology is a powerful technique for understanding the behavior of complex systems. Unfortunately, for many systems of interest, the construction of models that may validly be used as surrogates is simply not a possibility. This may be due to a variety of factors, including the infeasibility of critical experiments, immaturity of theory, or the nonlinearity of system behavior, but it is fundamentally a matter of not knowing enough to make predictions. For such systems, a methodology based on consolidating all known information into a single model and using it to make best-estimate predictions can be highly misleading.
Exploratory modeling can be useful when relevant information exists that can only be exploited by building models, but where this information is insufficient to specify a single model that accurately forecasts system behavior. In this circumstance, models can be constructed that are consistent with the available information, but such models are not unique. Rather than specifying a single model and falsely treating it as a reliable image of the target system, the available information is consistent with a set of models, whose implications for potential decisions may be quite diverse. A single model run drawn from a potentially infinite set of plausible models is not a "prediction"; rather, it provides a computational experiment that reveals how the world would behave if the guesses a particular model makes about the unresolvable uncertainties were correct. Exploratory modeling is the explicit representation of the set of plausible models, and the process of exploiting the information contained in such a set through a constellation of computational experiments.
A set, universe, or ensemble of models, which are plausible or interesting in the context of the research or analysis being conducted, is generated by the uncertainties associated with the problem of interest, and is constrained by available data and knowledge. Exploratory modeling can be viewed as a means for inference from the constraint information that specifies this set or ensemble. Selecting a particular model out of an ensemble of plausible ones requires making suppositions about factors that are uncertain or unknown. A single such computational experiment is typically not very informative (beyond suggesting the plausibility of its outcomes). Instead, exploratory modeling methodology must support reasoning about general conclusions through the examination of the results of numerous such experiments. Thus, exploratory modeling can be understood as search or sampling over the ensemble of models that are plausible given a priori knowledge.
Central Problem: Inferring global properties of a large or infinite set from a finite sample requires inductive inference, which is generally a more difficult problem than any specific question of deductive inference. Thus, the problem of how to cleverly select the finite sample of models and cases to examine from the large or infinite set of possibilities is the central problem of any exploratory modeling methodology. A wide range of research strategies is possible, including structured case generation by Monte Carlo or factorial experimental design methods, search for extremal points of cost functions, sampling methods that search for regions of "model space" with qualitatively different behavior, or combinations of human insight and reasoning with formal sampling mechanisms. Computational experiments can be used to examine ranges of possible outcomes, to suggest hypotheses to explain puzzling data, to discover significant phases, classes, or thresholds among the ensemble of plausible models, or to support reasoning based upon an analysis of risks, opportunities, or scenarios. Exploration can range over both real-valued parameters and nonparametric uncertainty, such as that between different graph structures, functions, or problem formulations.
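The structured case-generation strategies mentioned above can be sketched in a few lines of Python. The toy model `run_model`, its parameters, and their ranges are invented here purely for illustration; in practice each case would invoke a real (and typically expensive) simulation:

```python
import itertools
import random

# Toy stand-in for an expensive simulation: the outcome depends on two
# uncertain parameters.  The model form and ranges are invented solely
# to illustrate the sampling strategies.
def run_model(growth_rate, damage_threshold):
    benefit = growth_rate * 10.0
    overshoot = max(0.0, benefit - damage_threshold)
    return benefit - overshoot ** 2

# Factorial design: evaluate every combination of a few parameter levels.
growth_levels = [0.5, 1.0, 1.5]
threshold_levels = [5.0, 10.0]
factorial_cases = [(g, t, run_model(g, t))
                   for g, t in itertools.product(growth_levels, threshold_levels)]

# Monte Carlo design: sample parameters uniformly from plausible ranges.
rng = random.Random(0)
monte_carlo_cases = [(g, t, run_model(g, t))
                     for g, t in ((rng.uniform(0.5, 1.5), rng.uniform(5.0, 10.0))
                                  for _ in range(1000))]

# The ensemble of outcomes, not any single run, is the object of study.
outcomes = [o for _, _, o in monte_carlo_cases]
print(len(factorial_cases), len(monte_carlo_cases))
```

Either design produces an ensemble of (parameters, outcome) cases that can then be examined for ranges of outcomes, thresholds, or qualitatively distinct regions of model space.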
In making policy decisions about complex and uncertain problems, exploratory modeling can provide new knowledge even where validated models cannot be constructed. One example is the use of models as existence proofs or hypothesis generators. Demonstrating a single plausible model or case with counterintuitive properties can beneficially change the nature of a policy discussion. Another simple example of potentially credible inductive inference from model exploration is provided by situations where risk aversion is prudent. Here, an exploration that develops an assortment of plausible worst-case failure modes can be very useful for designing hedging strategies. This is true even if models are not validated and sensitivities are unknown. Other examples of useful research strategies include the search for special cases where small investments could (plausibly) produce large dividends, or for extremal cases (either best or worst) where the uncertainties are all one-sided and a fortiori arguments can be used. All these examples depend on the fact that partial information can inform policy even when prediction and optimization are not possible. The space of models and associated computational experiments can be searched for examples with characteristics that are useful in choosing between alternative policies. The search for information of use in answering policy questions can often be served by the discovery of thresholds, boundaries, or envelopes in a space of models that decompose the space into subensembles with different properties. For example, exploratory modeling could seek to discover which models or initial states have stable or chaotic dynamics, or search could have the goal of discovering which regions in model space favor either of two alternative policies.
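The worst-case search behind such hedging arguments can be illustrated with a deliberately artificial example. The `payoff` function, the cost of holding a reserve, and the range of plausible shocks below are all invented assumptions, not a validated model:

```python
import random

# Invented toy setting: a policy holds a costly reserve that caps losses
# when an uncertain shock exceeds it.  All numbers are for illustration.
def payoff(reserve, shock):
    uncovered = max(0.0, shock - reserve)
    return 10.0 - 0.5 * reserve - uncovered ** 2

# Challenge set: plausible shocks sampled from an assumed range.
rng = random.Random(1)
shocks = [rng.uniform(0.0, 4.0) for _ in range(500)]

def worst_case(reserve):
    # Search the ensemble for the policy's worst plausible outcome.
    return min(payoff(reserve, s) for s in shocks)

no_hedge = worst_case(0.0)
hedged = worst_case(3.0)
print(no_hedge, hedged)
```

Even without validated sensitivities, this kind of search can show that a hedging policy's worst plausible outcome dominates that of an unhedged one, which is the a fortiori structure the text describes.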
Exploratory modeling has proven to be a very powerful approach to the discovery of robust decisions or policies, especially through the use of adaptive strategies. Here the set of plausible models or plausible futures is used as a challenge set for designing adaptive policies that are robust against the full range of foreseeable future situations. In such applications, exploratory modeling provides an important alternative to choosing policies through optimization.
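One common formalization of such robustness, sketched here rather than taken from any particular study, compares policies by their maximum regret across a challenge set of futures. The policies, futures, and payoffs below are made up for illustration:

```python
# Regret of a policy in a future = shortfall versus the best policy
# for that future.  A robust choice minimizes maximum regret across
# the challenge set, rather than optimizing under a single best model.
futures = ["boom", "bust", "stagnation"]
policies = ["aggressive", "cautious", "adaptive"]
payoff = {
    ("aggressive", "boom"): 10, ("aggressive", "bust"): -8, ("aggressive", "stagnation"): 1,
    ("cautious",   "boom"):  3, ("cautious",   "bust"):  2, ("cautious",   "stagnation"): 2,
    ("adaptive",   "boom"):  8, ("adaptive",   "bust"):  1, ("adaptive",   "stagnation"): 2,
}

def max_regret(policy):
    return max(max(payoff[(p, f)] for p in policies) - payoff[(policy, f)]
               for f in futures)

robust = min(policies, key=max_regret)
print(robust, max_regret(robust))  # the adaptive policy wins here
```

In this toy table the aggressive policy is optimal only in the boom future, while the adaptive policy is never far from the best in any future, which is exactly the pattern that makes adaptive strategies attractive as an alternative to optimization.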
Application Types: Exploratory modeling can be driven by data, by a question or decision, or by the needs of model development. Data-driven exploration can be used to support model specification, exploring over alternative model structures that might explain a dataset. Or, it can provide an alternative to maximum-likelihood or maximum-entropy approaches to model estimation by supporting, for example, the visualization of level sets in likelihood surfaces. Question-driven exploration begins with a question we wish to answer (e.g., what policy should the government pursue regarding global warming?) and addresses it by searching over an ensemble of models and cases believed to be plausible in order to inform the answer. Question-driven exploration provides an alternative to supporting decision making through forecasting or prediction. Exploratory modeling also provides a strong alternative approach to model development, allowing guesses about uncertain modeling details to be avoided during programming and delayed until model use, where these guesses can be motivated by the actual strategy of model-based problem solving.
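The level-set idea can be illustrated with a toy one-dimensional example: rather than reporting only a maximum-likelihood estimate, map out all parameter values whose likelihood is close to the maximum. The data values, the Gaussian likelihood, and the 2-unit log-likelihood cutoff are arbitrary illustrative choices:

```python
# Toy data and a Gaussian log-likelihood (known unit variance,
# constants dropped); all values are invented for illustration.
data = [1.1, 0.9, 1.3, 0.8, 1.0]

def log_likelihood(mu):
    return -0.5 * sum((x - mu) ** 2 for x in data)

# Candidate means on a grid over [0, 2].
grid = [i / 100.0 for i in range(0, 201)]
best = max(grid, key=log_likelihood)

# Level set: all grid points within 2 log-likelihood units of the best.
# The whole set, not just the point estimate, describes the ensemble
# of parameter values consistent with the data.
level_set = [mu for mu in grid
             if log_likelihood(mu) >= log_likelihood(best) - 2.0]
print(best, min(level_set), max(level_set))
```

The interval spanned by the level set conveys how many quite different parameter choices remain consistent with the data, which a single maximum-likelihood estimate hides.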
See Practice of OR/MS; Public policy analysis; Soft systems methodology; Validation.
References
[1] Bankes, S. (1993), "Exploratory Modeling for Policy Analysis," Operations Research, 41, 435–449.
[2] Bankes, S. (1994), "Exploring the Foundations of Artificial Societies: Experiments in Evolving Solutions to N-player Prisoner's Dilemma," in Artificial Life IV, Rodney Brooks and Pattie Maes (eds.), MIT Press.
[3] Bankes, S., and D. Margoliash (1993), "Parametric Modeling of the Temporal Dynamics of Neuronal Responses Using Connectionist Architectures," Journal of Neurophysiology, 69(3), 980–991.
[4] Brooks, A., B. Bennett, and S. Bankes (1999), "An Application of Exploratory Analysis: The Weapon Mix Problem," Military Operations Research, 4(1), 67–80.
[5] Campbell, D., J. Crutchfield, D. Farmer, and E. Jen (1985), "Experimental Mathematics: The Role of Computation in Nonlinear Science," Communications of the ACM, 28, 374–384.
[6] Hodges, J. (1991), "Six (or so) Things You Can Do With a Bad Model," Operations Research, 39, 355–365.
[7] Lempert, R., M. Schlesinger, and S. Bankes (1996), "When We Don't Know the Costs or the Benefits: Adaptive Strategies for Abating Climate Change," Climatic Change.
[8] Park, G., and R. Lempert (1998), "The Class of 2014: Preserving Access to California Higher Education," RAND, MR-971.
[9] www.evolvinglogic.com




