Piloting: Selecting and evaluating models

The text on this page is taken from an equivalent page of the IEHIAS project.

Selecting and evaluating models

The large majority of integrated assessments rely on the use of models and statistical analysis methods. These typically include:

  • Exposure models - to estimate changes in exposures and their antecedents (e.g. source activities, releases, environmental concentrations) under the different scenarios;
  • Epidemiological or toxicological models - to derive best estimates of exposure-response functions for the study population;
  • Population projection models - to estimate the population for the scenario periods;
  • Behavioural models - to estimate changes in population behaviour (e.g. time activity patterns, consumption) under the scenarios;
  • Impact models - to estimate aggregated health impacts (e.g. in the form of disability-adjusted life years or monetary values); a worked sketch of such a calculation follows the list.
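As a rough illustration of the last item, the sketch below (with invented numbers) aggregates a health impact into disability-adjusted life years using the standard decomposition DALY = YLL + YLD (years of life lost plus years lived with disability).

# Minimal sketch of an aggregated impact calculation; all numbers are
# invented for illustration, not taken from any real assessment.
# DALY = YLL + YLD, where YLL = deaths * life-years lost per death and
# YLD = incident cases * disability weight * average duration (years).

deaths = 120              # attributable deaths under a scenario (invented)
life_years_lost = 12.0    # average years of life lost per death (invented)
cases = 4500              # attributable non-fatal cases (invented)
disability_weight = 0.05  # severity on a 0-1 scale (invented)
duration_years = 3.0      # average duration of the condition (invented)

yll = deaths * life_years_lost
yld = cases * disability_weight * duration_years
dalys = yll + yld
print(f"YLL={yll:.0f}, YLD={yld:.0f}, DALYs={dalys:.0f}")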


Selecting models

During the feasibility testing phase, the availability and utility of models that can meet these requirements need to be assessed. The choice of model used for each element in the assessment will depend on many factors, including:

  • the accuracy and reliability of the model outputs;
  • its data requirements;
  • its ease of use (including processing requirements and need for operator expertise);
  • its compatibility with other models used in the assessment;
  • its cost.

Information on many of these aspects can be obtained from the metadata that accompany many models; factsheets summarising this sort of information are provided for a range of models in the Modelling compartment of the Toolkit section of this Toolbox. This information cannot always tell the whole story, however, because in many cases the models will need to be used outside their previous range: for example, with different types of data, at different scales, with greater volumes of data, or in association with other models (see Model Linkage).
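One way to make the trade-off between these factors explicit is a simple weighted scoring of candidate models. The sketch below is purely illustrative: the criteria weights and the scores for the two hypothetical models are assumptions, not recommendations.

# Illustrative multi-criteria comparison of candidate models.
# Weights and scores (0-5) are invented for the sake of the example.

weights = {"accuracy": 0.35, "data_needs": 0.20, "ease_of_use": 0.15,
           "compatibility": 0.20, "cost": 0.10}

candidates = {
    "model_A": {"accuracy": 4, "data_needs": 2, "ease_of_use": 3,
                "compatibility": 5, "cost": 3},
    "model_B": {"accuracy": 5, "data_needs": 1, "ease_of_use": 2,
                "compatibility": 3, "cost": 2},
}

for name, scores in candidates.items():
    total = sum(weights[c] * scores[c] for c in weights)
    print(f"{name}: weighted score = {total:.2f}")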


Model validation

For this reason, it is often important to test and validate the models before they are used. Ideally this is done with a subset of the data that will actually be used in the assessment, but this raises a 'chicken-and-egg' problem: it may not be possible, or worthwhile, to collect the data before the assessment is well advanced. Similar data sets then need to be sought for model validation. One option in these situations is to use simulated data. As a basis for model validation, these have many advantages. They are not constrained by what other people happen to have collected for other purposes, but can be formulated to represent the actual conditions that will exist in the assessment. They can be adapted and controlled to provide specific tests of specific elements of the model. And the 'truth' is known, in that the data are not subject to the sampling or measurement errors that might affect real-world data.
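As a minimal sketch of this approach (assuming, for illustration only, a linear exposure-response relationship and arbitrary parameter values): synthetic observations are generated from a known 'truth', a candidate model is fitted to them, and the recovered parameters are compared with the known ones.

import numpy as np

rng = np.random.default_rng(42)

# Known 'truth': a linear exposure-response relation, response = a + b * exposure.
a_true, b_true = 2.0, 0.8

# Simulate exposures and responses; the noise mimics measurement error,
# but unlike real-world data the underlying truth is known exactly.
exposure = rng.uniform(0, 50, size=500)
response = a_true + b_true * exposure + rng.normal(0, 2.0, size=500)

# Fit the candidate model (here ordinary least squares via polyfit).
b_hat, a_hat = np.polyfit(exposure, response, deg=1)

print(f"true   a={a_true:.2f}, b={b_true:.2f}")
print(f"fitted a={a_hat:.2f}, b={b_hat:.2f}")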

The link to SIENA (see below) provides a simulation of an urban environment that can be used for validation of a wide range of modelling approaches. New simulated data may also be added to this environment, where needed, and guidance on how to do so is provided in the attached manual.


SIENA: spatial simulation tool

Model linkage

Integrated assessments typically require a number of different models: to represent different environmental compartments or media, to simulate different environmental processes, and to provide a means of analysing the different links in the causal chain (from source to impact). Assessment thus involves the use of a range of linked models, with outputs from one model becoming inputs to the next.

Because integrated assessments are rarely conducted in a routine and repetitive way, ready-made systems that provide an integrated suite of models are rarely available. As a consequence, the various models have to be linked and made to operate together specifically for the purpose of the assessment. This can generate a number of problems and uncertainties, not all of which may be immediately apparent.

The most obvious problems are those due to differences in data format, which may make it difficult to pass data between the various models. This is a problem, however, that can usually be readily fixed. For a one-off application, it can be solved by passing the data through a purpose-designed intermediary program, which restructures the data; a more permanent solution can be achieved by setting up clear data protocols at the start, and by implementing some form of data warehouse, which manages storage, integration and retrieval of data.
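The sketch below suggests what such an intermediary program might look like; all field names, units and defaults are hypothetical.

# Hypothetical adapter between two models with incompatible record formats.
# Model A emits concentrations in ug/m3 keyed by 'conc'; model B expects
# mg/m3 under 'concentration', plus an explicit averaging time.

def adapt_record(record_a: dict) -> dict:
    """Restructure one output record of model A into model B's input format."""
    return {
        "station_id": record_a["site"],                 # rename field
        "concentration": record_a["conc"] / 1000.0,     # ug/m3 -> mg/m3
        "averaging_time_h": record_a.get("avg_h", 24),  # default if absent
    }

output_a = [{"site": "S1", "conc": 35.0}, {"site": "S2", "conc": 8.2, "avg_h": 1}]
input_b = [adapt_record(r) for r in output_a]
print(input_b)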

More difficult are the subtle discrepancies that may occur within the data used and generated by the different models, and in the assumptions on which they are based. Amongst others, these may involve differences in:

  • definition (e.g. of the variables), which may mean that different elements of the analysis are modelling different (and inconsistent) versions of reality;
  • temporal or spatial scale, such that the data become more generalised at certain stages in the analysis system, thereby removing important variability from the assessment;
  • statistical characteristics of, or requirements for, the data (e.g. regarding spatial or temporal autocorrelation, normality of the distribution, heteroscedasticity), which may invalidate some elements of the analysis;
  • data handling and reporting (e.g. treatment of outliers, rounding, averaging), which may mean that uncertainties are generated at the interface between different models.

Detecting, and keeping track of, the resulting uncertainties is not easy, and cannot be done in a post hoc way (e.g. by trying to analyse the errors after the results have been generated). Instead, methods for dealing with these issues need to be developed during the Design stage. These might include, for example:

  • reprogramming some of the models to remove the inconsistencies;
  • running the analysis at a different spatial or temporal resolution to remove scale discrepancies;
  • introducing reporting steps (e.g. indicators) at crucial intermediary steps in the analysis, so that more detailed information is not wholly lost as a result of subsequent generalisation.

In order to identify these problems, and to develop solutions, it is also often essential to test the effects of model linkage before the real analysis begins. This can be done with a subset of the data to be used in the analysis, or with an independent trial data set from a comparable setting. The difficulties with these approaches, however, are to know what the 'truth' really is (amidst inevitable uncertainties in the data), and to identify the additional uncertainties introduced by modelling. Often, therefore, a more powerful alternative is to use simulated data to trial model linkage, for these provide total control of the testing. The SIENA urban simulator (see the link above) provides a ready-made environment for this purpose.
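By way of illustration, the sketch below runs such a trial on simulated data: a known hourly exposure series is passed across a model interface that aggregates it to daily means, and an indicator computed after the linkage step is compared with the same indicator computed on the full series. The lognormal exposure series and the threshold are invented for the example.

import numpy as np

rng = np.random.default_rng(7)

# Simulated 'truth': 30 days of hourly exposure with a long upper tail.
hourly = rng.lognormal(mean=2.0, sigma=0.6, size=30 * 24)

# Linkage step under test: the interface passes only daily mean exposures.
daily = hourly.reshape(30, 24).mean(axis=1)

# Toy downstream indicator: fraction of time above a hypothetical threshold.
threshold = 20.0
print("fraction above threshold, hourly truth :", (hourly > threshold).mean())
print("fraction above threshold, after linkage:", (daily > threshold).mean())
# A marked drop shows the aggregation at the model interface removing
# short-term peaks that the downstream impact estimate depends on.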

See also

Integrated Environmental Health Impact Assessment System
IEHIAS is a website developed by two large EU-funded projects, Intarese and Heimtsa. The content from the original website was moved to Opasnet.