Piloting: Selecting and evaluating models
- The text on this page is taken from an equivalent page of the IEHIAS project.
The large majority of integrated assessments rely on models and statistical analysis methods. These typically include:
- Exposure models - to estimate changes in exposures and their antecedents (e.g. source activities, releases, environmental concentrations) under the different scenarios;
- Epidemiological or toxicological models - to derive best estimates of exposure-response functions for the study population;
- Population projection models - to estimate the population for the scenario periods;
- Behavioural models - to estimate changes in population behaviour (e.g. time activity patterns, consumption) under the scenarios;
- Impact models - to estimate aggregated health impacts (e.g. in the form of disability-adjusted life years or monetary values).
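To make the last step concrete, the sketch below shows how an impact model might aggregate a health impact from an exposure-response function. The log-linear form, the function name, and all the numbers are assumptions chosen for illustration, not values from the IEHIAS toolkit.

```python
import math

def attributable_cases(baseline_cases, rr_per_unit, exposure_change):
    """Estimate attributable cases for a change in exposure, assuming
    a log-linear exposure-response function: RR = exp(beta * delta_x)."""
    beta = math.log(rr_per_unit)           # slope per unit of exposure
    rr = math.exp(beta * exposure_change)  # relative risk at the new exposure
    af = (rr - 1.0) / rr                   # population attributable fraction
    return baseline_cases * af

# Hypothetical example: 1000 baseline cases, RR of 1.06 per unit exposure,
# and a one-unit increase in exposure under the scenario
cases = attributable_cases(1000, 1.06, 1.0)
dalys = cases * 0.5  # assumed average DALYs lost per case
```

Aggregated measures such as DALYs or monetary values are then obtained by multiplying the attributable cases by a per-case weight, as in the last line.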
Selecting models
During the feasibility testing phase, the availability and utility of models that can meet these requirements need to be assessed. The choice of model for each element in the assessment will depend on many factors, including:
- the accuracy and reliability of the model outputs;
- its data requirements;
- its ease of use (including processing requirements and need for operator expertise);
- its compatibility with other models used in the assessment;
- its cost.
Information on many of these aspects can be obtained from the metadata provided with many models; factsheets summarising this sort of information are provided for a range of models in the Modelling compartment of the Toolkit section of this Toolbox. This information cannot always tell the whole story, however, because in many cases the models will need to be used outside their previous range: for example, with different types of data, at different scales, with greater volumes of data, or in association with other models (see Model Linkage).
Model validation
For this reason, it is often important to test and validate the models before they are used. Ideally this is done with a subset of the data that will actually be used in the assessment, but this raises a 'chicken-and-egg' problem: it may not be possible, or worthwhile, to collect the data before the assessment is well advanced. Similar data sets then need to be sought for model validation. One option in these situations is to use simulated data. As a basis for model validation, simulated data have many advantages. They are not constrained by what other people happen to have collected for other purposes, but can be formulated to represent the actual conditions that will exist in the assessment. They can be adapted and controlled to provide specific tests of specific elements of the model. And the 'truth' is known, in that the data are not subject to the sampling or measurement errors that might affect real-world data.
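The advantage of a known 'truth' can be sketched in a few lines: because the true parameters of the simulated data are chosen by us, we can check directly how well a fitting procedure recovers them. The linear exposure-response model, the noise level, and the sample size below are assumptions for illustration only.

```python
import random

random.seed(42)
TRUE_SLOPE, TRUE_INTERCEPT = 0.8, 2.0

# Simulate exposures and responses from known parameters plus noise:
# the 'truth' is known exactly, unlike with real-world data
xs = [random.uniform(0, 10) for _ in range(500)]
ys = [TRUE_INTERCEPT + TRUE_SLOPE * x + random.gauss(0, 0.5) for x in xs]

# Fit a simple least-squares line (the 'model under test')
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

# Validation check: with 500 points and modest noise, the fitted
# slope should lie close to the known true value
assert abs(slope - TRUE_SLOPE) < 0.05
```

The same idea scales up: by varying the simulated conditions (noise, scale, data volume), specific elements of a model can be stress-tested one at a time.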
The link to SIENA (see below) provides a simulation of an urban environment that can be used for validation of a wide range of modelling approaches. New simulated data may also be added to this environment, where needed, and guidance on how to do so is provided in the attached manual.