Week 12 Discussion Questions

Post your week 12 discussion questions here as comments.


6 Responses to Week 12 Discussion Questions

  1. mmatella says:

    Discussion Questions (4/9/13): Modeling and uncertainty

    Grimm et al. (2005)
    • How does the concept of degrees of freedom inform our ideas about whether a model is sufficiently parameterized?

    • Do good or useful models always exhibit structural realism?

    • Grimm et al. say that by analyzing additional patterns one can continually refine models. Is it possible that more pattern analysis could lead to contradictory findings or that it could confound understanding rather than improve it?

    • Using existing pattern analysis, how does a model predict novel ecosystem states?

    Beven (1996)
    • Grimm et al. do not make any reference to the equifinality problem (Beven 1996). They seem to believe that pattern-oriented modeling will not produce different models that are equally good, though they do acknowledge that conceptual models might reflect the bias of individual observers. Is this a flaw in the POM approach as described by Grimm et al.?

    • How does one determine whether one has poor hypotheses or whether a problem is currently undecidable?

    Hornberger & Spear (1981)
    • When doing sensitivity ranking of parameters, H&S point out that correlated parameters might not separate when the behavioural and non-behavioural distribution functions are compared. But it is possible that such a parameter is correlated with another parameter that is important. Wouldn't an analysis of all the parameters identify that strong, important one? Why does the correlation matter?

    • H&S describe an application of their technique using an algal bloom model that includes 19 parameters. They report that all of the significant information for the parameter groups is univariate. Does this mean that, of the 19 parameters examined, none are redundant? Is 19 a large number of parameters for a model that is meant to simplify a complex system?

    • Is it good practice to do principal components analysis to check for covariance effects? Why not just do a PCA and use the weights on individual parameters in the eigenvectors to indicate parameter importance, instead of the H&S sensitivity ranking based on Kolmogorov-Smirnov two-sample tests? Do the two types of testing tell you markedly different information? (A minimal sketch of the KS-based ranking follows these questions.)

    • Are there types of behavior classifications that are (or are not) dominated by mean shifts? Does PCA indicate whether the use of induced mean shifts is appropriate for behavioral classification?
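    To make the comparison above concrete, here is a minimal sketch of the Hornberger & Spear style sensitivity ranking. The two-parameter toy model, the parameter names, and the behaviour threshold are all invented for illustration; only the Kolmogorov-Smirnov two-sample comparison of behavioural versus non-behavioural parameter samples reflects their method.

```python
# Minimal sketch of Hornberger & Spear's regionalized sensitivity ranking.
# The toy model, parameter names, and behaviour threshold are hypothetical.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
n_runs = 5000

# Sample each parameter from its a priori distribution.
k = rng.uniform(0.1, 1.0, n_runs)   # hypothetical growth-rate parameter
q = rng.uniform(0.0, 5.0, n_runs)   # hypothetical loss-rate parameter

# Toy model output and a binary behaviour classification (B vs not-B).
peak = k * np.exp(-q / 4.0)         # stand-in for, e.g., peak algal biomass
behavioural = peak > 0.3            # the "behaviour-defining algorithm"

# Rank parameters by how strongly the behaviour classification separates their
# distributions, using the Kolmogorov-Smirnov two-sample statistic d_mn.
for name, values in [("k", k), ("q", q)]:
    res = ks_2samp(values[behavioural], values[~behavioural])
    print(f"{name}: d_mn = {res.statistic:.3f}, p = {res.pvalue:.3g}")
# A large d_mn marks the parameter as important to producing the behaviour;
# a small d_mn means the behaviour is insensitive to it.
```

    For the PCA question, one could also run, e.g., sklearn.decomposition.PCA on the behavioural parameter sets and compare the eigenvector weights against the d_mn ranking; whether the two orderings agree is exactly the covariance check the question raises.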

  2. amunozsaez says:

    Hornberger & Spear (1981)
    Are these kinds of probabilistic models trying to predict a result that you already know, or are they exploring new theories? How could a dichotomous classification between behaviour B and not-behaviour B be useful for explaining a stochastic event?

    Grimm et al. (2005)
    “In POM, we explicitly follow the basic research program of science: the explanation of observed patterns.” How can observers deal with their own perception of the phenomena when determining the patterns? What is the chance of omitting a pattern that is relevant but inconspicuous to the observer's eye? For example, a couple of decades ago CFC gases were not yet linked to their effect on atmospheric ozone, but now the impact of CFCs is well understood.

    Beven (2006)
    “For most models, there may be many combinations of parameter values that will provide almost equally good fits to the observed data” (p. 293). What is the value of including an outlier parameter in your model? What happens when the outliers reflect important changes in the behaviour of the phenomenon? (A tiny illustration of the quoted equifinality problem is sketched below.)
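    A tiny illustration of the quoted point, with an invented model and invented "observations": because only the product a*b is identifiable, very different parameter pairs fit the data equally well.

```python
# Tiny illustration of equifinality: different parameter pairs of a toy model
# fit the same synthetic "observations" equally well. Everything here is invented.
import numpy as np

t = np.linspace(0.0, 5.0, 50)
obs = 6.0 * np.exp(-t) + np.random.default_rng(1).normal(0.0, 0.1, t.size)

def model(a, b):
    # toy model in which only the product a*b is identifiable
    return a * b * np.exp(-t)

for a, b in [(2.0, 3.0), (3.0, 2.0), (1.5, 4.0)]:
    sse = np.sum((model(a, b) - obs) ** 2)
    print(f"a={a}, b={b}: sum of squared errors = {sse:.3f}")
# All three pairs give identical fits; the data alone cannot distinguish them,
# which is the "many combinations ... almost equally good fits" problem.
```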

  3. mvg says:

    Grimm et al.
    What if there are multiple ways in which the patterns can be produced? Does it matter? What are other complicating factors?
    Do you agree with the claims made here about how POM reduces parameter uncertainty?
    How is this approach influenced by challenges of collecting/quantifying data and patterns to validate models? How do you identify patterns? How do you know your pattern selection sufficiently tests your models?
    What of the fish schooling example where many models fit? Is this a sign that you can reduce the complexity of the model or that you need more patterns for validation? How do you tell the difference?
    Regarding the presented figure of Anasazi settlements: is this a sufficient representation? How does one determine sufficient representation (especially when modeling other processes might provide a better fit)? In this case, the model has more noise than the data!
    If these models are developed and calibrated with real data, the point would seem to be understanding system mechanisms and using models for prediction. What is the transferability of these models? Can you reasonably apply a model derived and calibrated on system A to similar but different system B? These models seem best suited for analyzing and predicting well-measured systems. What are the limitations and possibilities for application?

    Beven
    Given equifinality in models (both in the sense of unknowability and the sense of uncertainty), what is the efficacy of modeling? How are models best used?
    What are the effects of influential factors outside of the practice itself (e.g. career timescale, model application markets, social conditioning)?
    Is it possible/useful to separate uncertainty from unknowability? What are the implications for modeling practice?

    Hornberger and Spear
    Determining parameter distributions requires an a priori understanding of the system. How does this limit/influence the approach presented here? What about interaction between parameters that would influence probability distributions?
    How is this approach similar/different/complementary to first-level pattern matching from Grimm et al.?
    How would this approach be applied to a scenario of non-binary behavior?
    What is your response to the authors’ concluding claims? “The methodology developed in this paper avoids the problems inherent in the use of simulation models as deterministic predictors, by concentrating on the probability of obtaining a result that is consistent with qualitative aspects of the behaviour under a full range of parameter uncertainty. Thus, it provides the basis for making practical use of simulation models in the field of environmental management.”

  4. jnatali says:

    Grimm et al. 2005.

    1. What are the potential benefits and pitfalls of the inverse modeling technique, described as "fitting all calibration parameters by finding values that reproduce multiple patterns simultaneously"? In the brown bear dispersal example, parameter filtering reduced the model's sensitivity, but required quantified criteria for agreement between observed and simulated patterns. Is inverse modeling basically systematic, automated trial and error? Is the Monte Carlo technique (Hornberger and Spear 1981) a probabilistic (and better, more specific?) example of this? What about Beven's GLUE approach? (A minimal sketch of this Monte Carlo filtering idea follows the Grimm questions below.)

    2. Can inverse modeling techniques be useful in other (non-agent-based) modeling approaches? Is the greatest challenge then in defining and implementing quantified criteria (a "behavior-defining algorithm" in Hornberger and Spear)? This assumes that if you recreate the pattern, as verified by the criteria, you have a well-behaved model. Is the technique just a process of elimination (well-behaved versus not) that results in a set of well-behaving models? At that point, how do you validate which interaction of processes is creating the pattern in reality? Can probability and statistics uncover process by narrowing down the possibilities?

    3. Which is more worthwhile to compare: contrasting parameter sets (“inverse modeling” or “indirect parameterization”), decision models within an ACS (“strong inference”), or modeling approaches?

    4. What distinguishes an algorithmic versus analytical approach to theory (last sentence)?

    5. Do POM/ACS approaches allow for non-linear interactions, thresholds and feedbacks? Should this be emphasized more?
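    As referenced in question 1, here is a minimal sketch of the Monte Carlo filtering idea shared by inverse modeling and Beven's GLUE: sample many parameter sets, score each against an observed pattern using a quantified acceptance criterion, and keep only the "behavioural" sets. The surrogate model, the target pattern, and the RMSE threshold are all hypothetical.

```python
# Minimal sketch of Monte Carlo parameter filtering (the idea behind inverse
# modelling and GLUE). Model, target pattern, and threshold are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
t = np.arange(20)
observed_pattern = 10.0 * np.exp(-0.2 * t)   # stand-in for a field pattern

def simulate(growth, decay):
    return growth * np.exp(-decay * t)        # toy surrogate model

n_sets = 10_000
growth = rng.uniform(5.0, 15.0, n_sets)
decay = rng.uniform(0.05, 0.5, n_sets)

# Quantified criterion for pattern agreement: root-mean-square error.
rmse = np.array([np.sqrt(np.mean((simulate(g, d) - observed_pattern) ** 2))
                 for g, d in zip(growth, decay)])
behavioural = rmse < 1.0                      # acceptance threshold is arbitrary

print(f"{behavioural.sum()} of {n_sets} parameter sets accepted")
print("growth retained in:", growth[behavioural].min(), "-", growth[behavioural].max())
print("decay retained in:", decay[behavioural].min(), "-", decay[behavioural].max())
# The retained ranges show how far pattern matching alone narrows the parameters.
# Many distinct sets usually survive: the filtering is systematic trial and error
# that ends in a set of acceptable models rather than a single best one.
```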

    Hornberger and Spear 1981.

    5. Determining critical areas of uncertainty sounds like another process-of-elimination approach. Is this an example of inductive reasoning to form hypotheses? I’m getting the impression that this applies to many modeling approaches. Is that fundamental to modeling in general, i.e. that it is good for hypothesis formulation but not falsification (Beven p. 195)? Do we need to pair experiments with models in order to close the loop on theory development? Is the loop then observe, model, experiment, repeat? Or is this too one-size-fits-all? Is there a “complex systems approach” to experimentation, especially in light of Beven’s observation that it is difficult to obtain data over sufficient time periods to decide between multiple hypotheses?

    Beven 1996.

    6. What is the meaning of "overparameterized in a systems identification sense"? He references Kirby 1975; I could go look…

    7. How are mediating models deductive or do they work from the middle outward? Is it important to distinguish top-down versus bottom-up modeling frameworks? Do they serve different purposes?

    8. If relativism is the common practice, how does the work of science move beyond publishing a paper on a single (or even comparative) modeling approach with qualitative measures of model performance and an uncertain representation of reality? Beven argues for more rigorous methods for analyzing modeling results to refine the relativism, but are there other ways of looking at this problem? He suggests that ‘critical and novel analyses’ can improve model structure. Are there examples of this since the publication of his paper in 1996?

    9. Beven discusses the need for (and challenges of) pairing models with experiments: “collecting measurements that will allow for different hypotheses and assumptions to be tested in a way that eliminates some of the set of possible behavioral models.” Again, what are ideas for a “complex systems approach” to experimentation?

  5. waterunderground says:

    Beven writes, “It has been suggested that all hydrological models can easily be invalidated as descriptions of reality and that even the most ‘physically based’ models must be considered as merely conceptual descriptions as used in practice (see Beven 1989), and not very good descriptions at that.” He goes on to say that calibration can yield “acceptable” solutions; still, there seems to be a large disconnect between hydrologists’ understanding of the uncertainty in models and policymakers’ demands on them. Is there any way to bridge this gap, and if not, what are the implications?
    “It may be possible to design testable hypotheses and associated experiments that would allow model structures or parameter sets to be designated as non-behavioural, i.e. a certain class of models or parameter sets will be deemed falsified. … Such an approach has much in common with the Bayesian methodology espoused by Howson and Urbach (1989). Rejection of all the models tried on the basis of some reasonable criteria will suggest a serious lack of predictive capability.” (296). I’m not clear on which types of models Beven is referencing here—physically based? simplified? Could you elaborate on the Bayesian connection?
    “…equifinality of hypotheses and models today, when properly recognised, can lead to the formulation of experimental and analytical methodologies that may allow rejection of some of the competing explanations in the future.” What are some examples of these methodologies?
    Hornberger and Spear 1981
    In discussing the sensitivity ranking procedure, the authors state that large values of d_mn indicate that a parameter is important, but how large is large? What about the case in which a key parameter is not considered at all? (A conventional significance check is sketched below.)
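    One conventional answer to "how large is large", assuming independent samples, is the asymptotic critical value of the two-sample Kolmogorov-Smirnov statistic; the sample sizes below are hypothetical, and scipy.stats.ks_2samp would give the corresponding p-value directly.

```python
# Asymptotic critical value of the two-sample KS statistic: the separation is
# "significant" at level alpha when d_mn > c(alpha) * sqrt((m + n) / (m * n)).
import numpy as np

c = {0.10: 1.224, 0.05: 1.358, 0.01: 1.628}   # standard c(alpha) coefficients
m, n = 120, 380   # hypothetical behavioural / non-behavioural sample sizes
for alpha, coef in c.items():
    print(f"alpha = {alpha}: d_mn must exceed {coef * np.sqrt((m + n) / (m * n)):.3f}")
```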
    Grimm et al. 2005
    In the beech forest example, the authors describe patterns operating at different scales that were used to identify key parameters. But how does one go about identifying and interpreting these patterns? Is it enough to rely on other studies? Does it proceed iteratively, with periods of field work alternating with modeling? How is this similar to or different from Hornberger and Spear, who advocate using simple numerical models early in a research program?

  6. tmj143 says:

    1. Can you elaborate on the parameter values, mentioned in Beven, that are known to be false? What are the most common assumptions in these models?

    2. Do these models accurately predict catastrophic shifts?

    3. Is equifinality only present in systems which have already exceeded their relaxation times?

    4. Is most modeling limited by computational power, or is it limited by other constraints? I.e. is supercomputing big in this field?

    5. Are there specific subfields where pattern-oriented modeling is more useful than a top-down approach? It makes sense when there are discrete units that cannot be broken down further, like fish.

    6. How has behavioral modeling evolved since H&S?
