Best Practice for model application
In the following you will find a collection of guidelines for best practice during model application. The topics covered are: uncertainty, stochasticity, and parameter estimation; verification, validation, and calibration; sensitivity analyses; simulation duration and time horizon; choice of viability measures; and PVA to support decision making.

UNCERTAINTY, STOCHASTICITY, AND PARAMETER ESTIMATION


What sources of uncertainty are identified?
  • poor quality or low quantity of data (e.g. spatiotemporal sources of variance, like the existence and form of density dependence or the degree of spatial autocorrelation among local population dynamics, are difficult to discern without detailed, often long-term data)
  • difficulties in parameter estimation
    • Does parameter estimation address and capture the limitations of the field data?
    • Does parameter estimation translate uncertainty into parameter ranges, variances, or statistical distributions? (see the sketch after this list)
    • Is it possible to distinguish and quantify different sources of variability and potential biases?
      • sampling variability and errors result in poor estimates of population parameters,
      • process variability (e.g. variance in vital rates) may reflect true attributes of population dynamics
  • weak ability to validate model
  • effects of alternate model structures
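
As a minimal sketch of translating sampling uncertainty into a parameter distribution (in Python; the mark-recapture numbers are invented for illustration), an annual survival estimate based on a small sample can be carried into the simulation as a beta distribution rather than as a point value:

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical field data: 18 of 25 marked individuals survived the year.
survivors, marked = 18, 25

# A point estimate ignores sampling variability ...
s_point = survivors / marked

# ... whereas a Beta(successes + 1, failures + 1) distribution captures it and
# can be drawn from anew in every simulation run.
s_draws = rng.beta(survivors + 1, marked - survivors + 1, size=10_000)

print(f"point estimate: {s_point:.2f}")
print(f"95% interval:   {np.percentile(s_draws, [2.5, 97.5]).round(2)}")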
Is there a proper distinction between uncertainty and stochasticity?

stochasticity - natural variation

One should distinguish among environmental, demographic, and genetic sources of stochasticity and address all sources of stochasticity, including catastrophes, in sensitivity analyses (White 2000). Catastrophes and disturbances (e.g., fires or floods) may differ from regular environmental stochasticity (e.g., fluctuations in temperature or rainfall) in amplitude, in the processes involved, and in the nature of their effects on different life stages, individuals, populations, and the recovery of the environment itself (White 2000; Morris & Doak 2002).

uncertainty - lack of knowledge (epistemic uncertainty)

Uncertainty emerges from various sources, stochasticity being only one of them. Not all sources of uncertainty can be handled by having more or better data.
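
The distinction between sources of stochasticity can be made explicit in the simulation itself. Below is a minimal sketch (in Python) of a single-population projection in which demographic stochasticity (binomial survival, Poisson recruitment), environmental stochasticity (year-to-year variation in mean survival), and rare catastrophes are drawn from separate random processes; all parameter values are purely illustrative:

import numpy as np

rng = np.random.default_rng(42)

def project_once(n0=50, years=100, s_mean=0.5, s_sd=0.1, fecundity=1.0,
                 carrying_capacity=200, p_cat=0.02, cat_survival=0.3):
    """One trajectory; every parameter value here is illustrative only."""
    n = n0
    for _ in range(years):
        s_year = np.clip(rng.normal(s_mean, s_sd), 0.0, 1.0)  # environmental stochasticity
        survivors = rng.binomial(n, s_year)                    # demographic stochasticity
        recruits = rng.poisson(survivors * fecundity)          # demographic stochasticity
        n = min(survivors + recruits, carrying_capacity)       # simple ceiling density dependence
        if rng.random() < p_cat:                               # catastrophe year
            n = rng.binomial(n, cat_survival)
    return n

final_sizes = [project_once() for _ in range(1000)]
print(f"extinct within 100 years: {np.mean(np.array(final_sizes) == 0):.1%}")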

DECISIONS CONCERNING VERIFICATION, VALIDATION AND CALIBRATION

Model verification is absolutely necessary. It involves testing whether the model works according to its specifications. The model should be inspected for any omissions and errors, either in model design or in parameter estimation, which could affect model results. Model verification can be done in a variety of ways, such as planned simulation experiments, implementation in another software environment, and an independent check by another modeller (for more details see Schmolke et al. 2010).
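As a minimal sketch of one such planned simulation experiment (in Python, with a purely hypothetical two-stage projection matrix), a deterministic degenerate case with a known analytical answer can be used to check the implementation: the long-run growth rate of the projection must equal the dominant eigenvalue of the matrix.

import numpy as np

# Hypothetical two-stage projection matrix (fecundities in the first row,
# survival/transition rates below); values are illustrative only.
A = np.array([[0.0, 1.5],
              [0.4, 0.8]])

lambda_expected = max(abs(np.linalg.eigvals(A)))   # analytical growth rate

n = np.array([10.0, 10.0])
for _ in range(500):                               # deterministic projection
    total_before = n.sum()
    n = A @ n
lambda_observed = n.sum() / total_before

assert np.isclose(lambda_observed, lambda_expected, rtol=1e-6), \
    "projection code does not reproduce the analytical growth rate"
print(f"expected lambda = {lambda_expected:.4f}, observed = {lambda_observed:.4f}")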

Model validation is highly desired, but often impossible due to the absence of independent data. Important questions for validation are:

  • Is it possible to validate PVA predictions in the field?
  • Is it possible to validate PVA predictions through the use of newly accumulated data?
  • Is it possible to validate PVA predictions by monitoring of management outcomes?

Model calibration is sometimes required, especially when few or no data are available for some parameters. Important decisions to be made when calibrating a model are:

  • choice of the parameters to be calibrated, and the ranges used for calibration;
  • choice of the optimization method to be used.
Some useful references when calibrating a model are Schmolke et al. 2010, Hartig et al. 2011, Marjoram et al. 2003, Piou et al. 2009, Grimm et al. 2005.
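
As a minimal sketch of one possible calibration approach, rejection-style approximate Bayesian computation in the spirit of the references above (the stochastic simulator, the observed summary statistic, and the prior range for the calibrated fecundity parameter are all hypothetical):

import numpy as np

rng = np.random.default_rng(0)

def mean_abundance(fecundity, n0=50, years=20, survival=0.5):
    """Hypothetical stochastic simulator; returns a summary statistic
    (mean abundance over the run) for a candidate fecundity value."""
    n, sizes = n0, []
    for _ in range(years):
        survivors = rng.binomial(n, survival)
        n = survivors + rng.poisson(survivors * fecundity)
        sizes.append(n)
    return np.mean(sizes)

observed = 60.0                      # hypothetical field estimate of mean abundance
prior_low, prior_high = 0.5, 1.5     # calibration range for fecundity
tolerance = 5.0                      # acceptance threshold on the summary statistic

accepted = []
for _ in range(5000):
    candidate = rng.uniform(prior_low, prior_high)       # draw from the prior
    if abs(mean_abundance(candidate) - observed) < tolerance:
        accepted.append(candidate)                       # keep values that reproduce the data

print(f"accepted {len(accepted)} of 5000 candidate values")
print(f"calibrated fecundity: {np.mean(accepted):.2f} "
      f"(95% interval {np.percentile(accepted, [2.5, 97.5]).round(2)})")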

It is advisable, at the onset of model design and implementation, to split the available data (whenever possible) into three parts: one for calibration, one for parameterization, and one for verification. In this way the model can still be validated even if no new data are expected to be collected in the near future.

SENSITIVITY ANALYSES

We suggest that a sensitivity analysis should be performed for any model. Local sensitivity analysis, the simplest type, varies the value of each parameter independently of the other parameters and is relatively easy to perform. To explore interactions among parameters, however, we recommend global sensitivity analysis.
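
The contrast between the two approaches can be sketched as follows (in Python, around a deliberately simple, hypothetical extinction-risk function; parameter names, baseline values, and ranges are illustrative). The local analysis perturbs one parameter at a time around the baseline, while the global analysis samples all parameters jointly across their ranges (a Latin hypercube or another space-filling design could replace the plain random sampling used here):

import numpy as np

rng = np.random.default_rng(7)

def extinction_risk(survival, fecundity, n0=30, years=50, runs=200):
    """Hypothetical PVA kernel: probability of extinction within `years`."""
    extinct = 0
    for _ in range(runs):
        n = n0
        for _ in range(years):
            survivors = rng.binomial(n, survival)
            n = survivors + rng.poisson(survivors * fecundity)
            if n == 0:
                extinct += 1
                break
    return extinct / runs

baseline = {"survival": 0.5, "fecundity": 1.0}
ranges = {"survival": (0.4, 0.6), "fecundity": (0.8, 1.2)}

# Local (one-at-a-time) sensitivity: vary each parameter alone by +/-10 %.
for name, value in baseline.items():
    for factor in (0.9, 1.1):
        risk = extinction_risk(**{**baseline, name: value * factor})
        print(f"local: {name} x {factor}: risk = {risk:.2f}")

# Global sensitivity: sample all parameters jointly over their ranges and
# inspect how the risk co-varies with each parameter (and their interplay).
draws = [{k: rng.uniform(*b) for k, b in ranges.items()} for _ in range(50)]
risks = np.array([extinction_risk(**d) for d in draws])
for name in ranges:
    vals = np.array([d[name] for d in draws])
    print(f"global: correlation of {name} with risk = {np.corrcoef(vals, risks)[0, 1]:.2f}")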

Important issues to consider when performing a sensitivity analysis are:

  • Choice of the model parameters to be addressed by a sensitivity analysis (see overview below). It has to be made in light of the model's purpose, available field data and the effect of parameters on viability.
  • Range of the parameter values to test;
  • Viability measure(s) to use as a response in sensitivity analysis.

Important questions to be answered when performing a sensitivity analysis are:

  • Are the parameters that have a strong effect on viability assessment identified?
  • Do dependencies between model parameters emerge from artefacts in model structure, or do they reflect important ecological processes?
  • Are the parameter sensitivities analyzed individually? If yes, are there any synergistic effects that may prevent sensible interpretation of model sensitivity?
  • Is the importance of interactions among model parameters assessed?
  • Is the importance of nonlinearities in model response to parameter variation assessed?
For more details concerning sensitivity analysis see Saltelli & Annoni 2010, Hamby 1994, Saltelli et al. 2006, and Cariboni et al. 2007.

Potentially important parameters for sensitivity analysis

We suggest a careful choice of parameters to be addressed by a sensitivity analysis. In terms of the strength of effect on model outcomes, Pe'er et al. (2013) found the following ranking of importance (from high to low):

  • Parameters concerning mortality
  • Parameters reflecting environmental stochasticity
  • Parameters concerning catastrophes
  • Parameters describing landscape management
  • Number of patches used in the model
  • Connectivity
  • Type of density dependence used and parameters describing it
  • Area of the study
  • Population growth rate
  • Parameters describing reproduction
  • Initial population size
  • Dispersal distance and other parameters describing dispersal
  • Sex ratio
  • Age distribution

Among these, particularly important parameters to consider may be: parameters concerning catastrophes, the number of patches, and the type of density dependence and its parameters. For these parameters, we found a mismatch between their strength of effect and how often they are included in sensitivity analyses.

DECISIONS CONCERNING SIMULATION DURATION AND TIME HORIZON

Several viability measures for PVA are defined with respect to a given time horizon. In order to identify an appropriate time horizon (or several alternative ones), and the related simulation duration we suggest addressing the following questions:

Is simulation duration long enough to:

  • identify long-term population trends?
  • avoid transient dynamics due to initial conditions?
  • distinguish among outcomes of alternative management scenarios?

Is simulation duration short enough to:

  • limit the propagation of uncertainty over time?
  • be relevant to specific, often pressing, conservation decisions?

Irrespective of the chosen simulation duration, we suggest including a range of time horizons when reporting PVA results (in particular when ranking different management actions). An alternative is to select viability measures that do not depend on the time horizon (see also: Choice of viability measures).

CHOICE OF VIABILITY MEASURES

Many viability measures are currently available, and researchers have to identify those that are most suitable for their study. However, we encourage reporting several viability measures, as they can complement each other and reveal additional information. To facilitate the choice of viability measures, we offer a short description of most of the viability measures currently in use, their calculation, suggestions regarding their use, and relevant references. We note that the selection of viability measures is mostly relevant for reporting, but it may also affect decisions taken during model application.

For each viability measure, we list its meaning, how it is calculated, and recommendations for its use in PVA.

P0(t) - probability of extinction by time horizon t
  Calculation: count extinction events over multiple simulations versus the time at which they occur and plot their cumulative distribution over time.
  Recommendation: report P0(t) for several time horizons; for consistency with international listing thresholds and to facilitate comparison across studies, report P0(100) as one of these time horizons.

PN - quasi-extinction risk
  Calculation: plot the minimum population size N observed during each simulation iteration against its cumulative distribution.
  Recommendation: can be used when global extinction is not possible (Burgman et al. 1993); to improve comparability, report outputs for multiple values of N, including N = 0 if possible, for comparison with P0(t).

Tm - intrinsic mean time to extinction
  Calculation: plot -ln(1 - P0(t)) versus time t; the plot yields a straight line with slope 1/Tm (Grimm & Wissel 2004).
  Recommendation: Tm enables approximating P0(t) for any time horizon via P0(t) ≈ t/Tm (for t well below Tm); it is insensitive to initial simulation conditions and may reveal generic information about extinction risk and viability.

EMP - expected minimum population size
  Calculation: record the smallest population size obtained in each simulation iteration and average over iterations.
  Recommendation: rarely reported in PVA studies; a simple and effective measure that should be used more frequently, especially for sensitivity analyses and when the risk of extinction is small (McCarthy & Thompson 2001).

Ne - expected population size
  Calculation: plot Ne over time to provide a simple and intuitive visualization of population behaviour and of differences between scenarios.
  Recommendation: an important "currency" for decision makers, but the tails of the distribution must be depicted to account for the range of potential outcomes; should be considered in conjunction with other measures of risk.

r - mean intrinsic growth rate of the population
  Calculation: provides a simple measure of the potential for population growth.
  Recommendation: useful for differentiating alternative population trends (Caswell 2002); should be reported in conjunction with viability measures that provide a measure of risk.

MVP - minimum viable population
  Calculation: run simulations with a range of initial population sizes to identify the lowest threshold that maintains a viable population (i.e., a predefined probability of survival over a given time horizon).
  Recommendation: strictly speaking, not a viability measure but a measure of what would be required to achieve viability. Often relevant for policy decisions and provides intuitive information for communication; however, oversimplification may lead to misinterpretation, so interpret and communicate carefully.

MAR - minimum area requirement
  Calculation: run simulations with a range of initial areas (or other spatial attributes) to identify the area necessary to support a viable population.
  Recommendation: see MVP.
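
Several of the measures above can be derived from the same set of replicate trajectories. Below is a minimal sketch (in Python) computing P0(t), PN, EMP, and Tm; the simulation function is a stand-in with purely illustrative parameter values and would be replaced by the actual PVA model in practice:

import numpy as np

rng = np.random.default_rng(3)

def simulate(n0=40, years=100, survival=0.5, fecundity=1.0, k=150):
    """Stand-in PVA model returning one trajectory of length years + 1."""
    n, traj = n0, [n0]
    for _ in range(years):
        s_year = np.clip(rng.normal(survival, 0.1), 0.0, 1.0)
        survivors = rng.binomial(n, s_year)
        n = min(survivors + rng.poisson(survivors * fecundity), k)
        traj.append(n)
    return traj

trajs = np.array([simulate() for _ in range(2000)])        # replicate runs
t = np.arange(trajs.shape[1])

# P0(t): cumulative probability of extinction by each time horizon t.
p0 = ((trajs == 0).cumsum(axis=1) > 0).mean(axis=0)
print(f"P0(50) = {p0[50]:.2f}, P0(100) = {p0[100]:.2f}")

# PN: quasi-extinction risk, here the probability that the run minimum falls below N = 10.
minima = trajs.min(axis=1)
print(f"P(min < 10) = {(minima < 10).mean():.2f}")

# EMP: expected minimum population size across replicate runs.
print(f"EMP = {minima.mean():.1f}")

# Tm: regress ln(1 - P0(t)) on t (slope = -1/Tm; Grimm & Wissel 2004),
# skipping t = 0 and any horizons where every run has gone extinct.
keep = (t > 0) & (p0 < 1)
slope = np.polyfit(t[keep], np.log(1 - p0[keep]), 1)[0]
print(f"Tm ~ {-1 / slope:.0f} years")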


PVA TO SUPPORT DECISION MAKING

For PVAs to be accepted by conservation managers, they should consider and rank multiple management options. Moreover, application and decision support would profit from considering not only the ecological benefits but also, where possible, the costs of management actions. Questions that should be addressed to make a PVA relevant and supportive for decision makers (see also the ranking sketch after the references below):

  • Are various management options considered and ranked?
  • Are the considered management options realistic and relevant?
  • Are trade-offs between multiple management options considered and quantified?
  • Are the costs and benefits of multiple management options identified?
  • Are the most salient uncertainties characterized?
  • Are criteria for evaluating differences among management outcomes identified?
Further advice on performance criteria for ranking multiple management options is given in: Lindenmayer & Possingham 1996; Beissinger & Westphal 1998; McCarthy et al. 2003; Bakker & Doak 2009. Cases where alternative options have been quantified: Curtis & Vincent 2008; Johst et al. 2011; Wintle et al. 2011.
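
As a minimal sketch of such a ranking (in Python; the management options, the extinction risks under each option, and the costs are all invented for illustration and would in practice come from PVA runs and management planning), options can be compared by risk reduction per unit cost:

# Hypothetical comparison of management options by cost-effectiveness.
baseline_risk = 0.40                      # e.g. P0(100) with no action
options = {
    "habitat restoration": {"risk": 0.15, "cost": 120_000},
    "predator control":    {"risk": 0.25, "cost": 40_000},
    "translocation":       {"risk": 0.10, "cost": 300_000},
}

ranked = sorted(
    options.items(),
    key=lambda kv: (baseline_risk - kv[1]["risk"]) / kv[1]["cost"],
    reverse=True,
)

for name, v in ranked:
    gain = baseline_risk - v["risk"]
    print(f"{name}: risk reduction {gain:.2f}, cost {v['cost']}, "
          f"reduction per 1000 spent {1000 * gain / v['cost']:.4f}")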

 

 