Week 15 Discussion Questions

Post your week 15 discussion questions here as comments.


2 Responses to Week 15 Discussion Questions

  1. mvg says:

    Why do Earth system models of intermediate complexity (EMICs) not diverge from the envelope of possibilities? How does one explore uncertainty with EMICs?

    What does past/present accuracy say about future accuracy?
    The authors describe using past correlations to determine future accuracy. What are other methods of determining future accuracy? What are the characteristics of a system that validate/invalidate a relationship between past/present and future accuracy?

    In anticipation of significant changes in the future, it is important to examine model behavior under significantly different regimes. If states are different enough, however, the processes may be different. How do you distinguish/determine/address when a state shift becomes a process shift?

    How does one deal with multiple equally well fitting parameterizations?
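    A minimal sketch (my toy construction, not from the paper) of how equally well-fitting parameterizations arise: if two tunable parameters only enter the model through their product, the observations constrain the product but not the individual values, and a whole family of settings fits equally well.

    ```python
    # Toy illustration of equifinality: the model only sees a * b, so any
    # pair with the same product reproduces the "observations" exactly.
    import numpy as np

    x = np.linspace(0.0, 1.0, 20)
    obs = 2.0 * x                     # synthetic data generated with a * b = 2

    def rmse(a, b):
        sim = (a * b) * x             # parameters are not separately identifiable
        return np.sqrt(np.mean((sim - obs) ** 2))

    grid = np.arange(0.5, 4.1, 0.5)
    ties = [(a, b) for a in grid for b in grid if rmse(a, b) < 1e-9]
    print(ties)                       # (0.5, 4.0), (1.0, 2.0), (2.0, 1.0), (4.0, 0.5)
    ```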

    One of the conditions for justifiable tuning is the following: “The number of degrees of freedom in the tuneable parameters is less than the number of degrees of freedom in the observational constraints used in model evaluation.” What does this mean? In other words, should we tune to simple quantities rather than to complex patterns, so that pattern matching is achieved by the dynamics rather than by the parameterization? What would Grimm have to say about this?
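    One way to read the condition, sketched below with invented numbers: if there are more tunable parameters than independent observational constraints, the fit is underdetermined and many parameter sets reproduce the observations equally well.

    ```python
    # Sketch of the degrees-of-freedom condition (my invented numbers):
    # with 5 tunable parameters but only 3 independent constraints, the
    # linearized tuning problem J p = y has a 2-dimensional family of
    # exact solutions -- tuning cannot single out one parameter set.
    import numpy as np

    rng = np.random.default_rng(0)
    J = rng.normal(size=(3, 5))       # sensitivity of constraints to parameters
    y = rng.normal(size=3)            # observational targets

    p, *_ = np.linalg.lstsq(J, y, rcond=None)   # one of infinitely many fits
    free_dims = 5 - np.linalg.matrix_rank(J)
    print(f"residual: {np.abs(J @ p - y).max():.2e}, free directions: {free_dims}")
    ```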

    What does it mean to parameterize a physical process that cannot be resolved by the model? Does it mean using statistical descriptions instead of process descriptions?
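    For concreteness, a sketch of one classic example: turbulent surface heat exchange happens at scales far below the grid, so models represent it with a bulk aerodynamic formula driven by resolved grid-scale variables. The transfer coefficient here is exactly the kind of quantity that gets tuned (the numbers are typical values, not from the paper).

    ```python
    # A parameterization in miniature: the subgrid turbulent heat flux is
    # replaced by an empirical function of resolved (grid-scale) variables.
    RHO = 1.2       # air density, kg m^-3
    CP = 1004.0     # specific heat of air, J kg^-1 K^-1
    C_H = 1.2e-3    # bulk transfer coefficient -- the tunable knob

    def sensible_heat_flux(wind_speed, t_surface, t_air):
        """Resolved inputs in, statistical stand-in for turbulence out (W m^-2)."""
        return RHO * CP * C_H * wind_speed * (t_surface - t_air)

    print(sensible_heat_flux(wind_speed=5.0, t_surface=290.0, t_air=288.0))
    ```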

    This paper indicates that there is no consensus on how to divide computer resources among the following:
    – finer numerical grids (better simulations)
    – more ensemble members (better uncertainty estimates)
    – more processes (more complete system)
    What are some examples of research needs or justifications for each of these different emphases? What sorts of projects would require each of these different foci (generally speaking, not just limited to climate models)? Are there examples of research that we have read about this semester that would benefit from one or more of these?
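    To make the trade-off concrete, a back-of-envelope cost model (my assumptions: refining the horizontal grid by a factor r multiplies the points in x and y by r each and, via the CFL limit on the time step, the number of steps by r, so cost scales roughly as r cubed):

    ```python
    # Back-of-envelope compute budget: the same ~16x budget buys very
    # different things depending on which emphasis is chosen.
    def relative_cost(refinement=1.0, members=1, process_factor=1.0):
        """Cost relative to one baseline run (refinement**3 from CFL scaling)."""
        return refinement ** 3 * members * process_factor

    print(relative_cost(refinement=2.5))               # ~16x: one finer-grid run
    print(relative_cost(members=16))                   # 16x: a 16-member ensemble
    print(relative_cost(members=8, process_factor=2))  # 16x: richer model + ensemble
    ```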

    What is a prognostic variable and how is it used?
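    A minimal sketch, using a zero-dimensional energy balance model of my own construction: a prognostic variable (temperature here) is carried forward in time by integrating its tendency equation, whereas a diagnostic variable (outgoing longwave radiation) is recomputed from the current state at each step.

    ```python
    # Prognostic vs. diagnostic in a toy energy balance model: T is
    # stepped forward from its tendency; OLR is computed from T each step.
    SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
    S = 240.0          # absorbed solar radiation, W m^-2
    C = 4.0e8          # effective heat capacity, J m^-2 K^-1
    DT = 86400.0       # time step: one day, in seconds

    def olr(t):                         # diagnostic: no state of its own
        return SIGMA * t ** 4

    t = 288.0                           # prognostic: the model's memory
    for _ in range(365 * 50):           # integrate forward 50 years
        t += DT * (S - olr(t)) / C      # explicit Euler step on dT/dt
    print(t)                            # -> (S / SIGMA) ** 0.25, about 255 K
    ```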

    The authors cite a study of the impact of the level of complexity used in parameterizations of surface processes, which finds that varying the complexity does not lead to significant differences in the simulated variables, and thus that those variables are not limited by uncertainties in how surface processes are parameterized.
    Is this sort of analysis possible when the alternative parameterizations have not already been developed? How would you approach the evaluation in that case?

    It would seem that the initialization method should have a significant impact on model results. Why is uniformity in initialization method better? It makes intercomparison more valid, I imagine, but does it introduce bias? Or is the Stouffer method simply superior?

    A multi-model mean matches observations better than any individual model does.
    What are the implications of this better match, and of relying on multi-model means?

    What are your thoughts on why a multi-model mean would be closer to observed data than any individual model?
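    A toy Monte Carlo answer (my construction): if each model's error is a shared bias plus an independent random component, averaging cancels the independent parts while the shared bias survives, so the multi-model mean typically beats any single model.

    ```python
    # Why the multi-model mean wins in this toy setup: independent error
    # components average toward zero; only the common bias remains.
    import numpy as np

    rng = np.random.default_rng(42)
    truth = np.zeros(1000)                         # "observations"
    models = [truth + 0.3 + rng.normal(0.0, 1.0, truth.size)
              for _ in range(10)]                  # shared bias + own noise

    def rmse(field):
        return np.sqrt(np.mean((field - truth) ** 2))

    print(np.mean([rmse(m) for m in models]))      # ~1.04 for a single model
    print(rmse(np.mean(models, axis=0)))           # ~0.44 for the mean
    ```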

    “EMICs can explore the parameter space with some completeness and are thus appropriate for assessing uncertainty.” What does this mean?
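    One reading, with a toy stand-in for an EMIC: because each run is cheap, the model can be run at every plausible value of an uncertain parameter rather than at the handful of settings a full GCM could afford, turning the sweep itself into an uncertainty envelope.

    ```python
    # Parameter-space exploration with a cheap model (toy example):
    # equilibrium warming for doubled CO2 is forcing / feedback, and we
    # can afford one "run" per feedback value across the whole range.
    import numpy as np

    F_2XCO2 = 3.7                            # radiative forcing, W m^-2
    lambdas = np.linspace(0.8, 2.0, 100)     # feedback parameter, W m^-2 K^-1

    warming = F_2XCO2 / lambdas              # 100 runs, essentially free
    print(f"uncertainty envelope: {warming.min():.1f} to {warming.max():.1f} K")
    ```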

  2. waterunderground says:

    “There is currently no consensus on the optimal way to divide computer resources among: finer numerical grids, which allow for better simulations; greater numbers of ensemble members, which allow for better statistical estimates of uncertainty; and inclusion of a more complete set of processes (e.g., carbon feedbacks, atmospheric chemistry interactions).” How do reduced complexity models (incorporating several of the above factors) compare in their outputs?
    How do EMICs vary in which parameters are included? Do these choices help identify which parameters are most important in driving climate dynamics in particular regions?

    “Ensemble methods (Murphy et al., 2004; Annan et al., 2005a; Stainforth et al., 2005) do not always produce a unique ‘best’ parameter setting for a given error measure.” How are issues of equifinality addressed in cases where tuning identifies several behavioral models?

    Since the TAR, what are the most significant advances in incorporating terrestrial feedbacks into models?
