IEHIAS Indicator selection

Latest revision as of 20:30, 25 September 2014

The text on this page is taken from an equivalent page of the IEHIAS-project.

Indicator design and selection is not a simple, mechanistic process. The indicators have to face two ways: they have to represent the interests of the users, yet they also have to provide a balanced and accurate description of the system being assessed, and the health impacts implied by the scenarios being considered. Tensions inevitably occur in the process, and care is needed in balancing the demands for relevance (on behalf of the users) and interpretability in terms of the science.

As a consequence, indicator design and selection is to a large extent circumstantial, and depends on the nature of the assessment and the context in which it is being done. No universally applicable set of rules can therefore be specified. The guidelines below, however, present a general framework for choosing and constructing indicators, and describe some of the main criteria that need to be taken into account.

General principles

The overall principle should be: only include indicators that are genuinely interpretable, in terms of the message they convey, and relevant in terms of the issue being addressed.

To ensure this:

  1. Base the indicators on the conceptual model of the issue being assessed, in order to make sure that they are relevant. Otherwise, users will be faced with irrelevant information which may cause confusion or distort their responses.
  2. Select indicators from the entire length (or at least different parts) of the causal chain, in order to ensure that they give information about causes and intermediate processes as well as outcomes. Otherwise, users may not be able to see why the problem arises or where they need to act.
  3. Ensure that all key factors that make up the issue (i.e. major sources, exposures and health outcomes), and all major population groups that might be affected, are represented by the indicators. Otherwise the results may be biased.
  4. Avoid unnecessary overlap between indicators, in order to minimise the number of indicators that are produced. Having too many indicators adds to the cost of the analysis and may generate confusion in the minds of users.
  5. Review all indicators to ensure that they are interpretable and relevant; revise or reject any that are not. Otherwise, unnecessary effort will be used in constructing indicators that cannot be properly understood (and may cause avoidable conflicts) and cannot be used.
  6. If at all possible, trial or validate the indicators before starting the assessment – e.g. by using them to analyse past data, or by using them with simulated data. If they do not seem to vary sufficiently under different conditions, or do not make sense, re-examine them and change or reject them if necessary.
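Principle 6 above can be sketched as a simple pre-assessment check: apply the candidate indicator to simulated scenario data and flag it if it barely varies between scenarios. The function name, the relative-spread measure and the threshold below are illustrative assumptions, not part of the IEHIAS guidance.

```python
# Sketch of trialling an indicator on simulated scenario data before the
# assessment. Names and the 5% threshold are illustrative assumptions.
import statistics

def varies_sufficiently(values_by_scenario, min_relative_spread=0.05):
    """Flag an indicator whose values barely differ between scenarios.

    values_by_scenario: mapping of scenario name -> indicator value.
    min_relative_spread: assumed threshold; tune to the assessment.
    """
    values = list(values_by_scenario.values())
    mean = statistics.mean(values)
    if mean == 0:
        return max(values) != min(values)
    spread = (max(values) - min(values)) / abs(mean)
    return spread >= min_relative_spread

# Simulated annual mean PM10 concentrations (ug/m3) under three scenarios:
pm10 = {"baseline": 28.0, "low_emission": 22.5, "high_growth": 31.0}
flat = {"baseline": 10.0, "low_emission": 10.0, "high_growth": 10.0}

print(varies_sufficiently(pm10))  # True - worth keeping
print(varies_sufficiently(flat))  # False - re-examine, change or reject
```

An indicator that fails such a check is not necessarily wrong; as the guidance notes, it may instead point to a flaw in the prior expectation of how the system behaves.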

Steps in indicator selection

Note: steps 1 to 4 below should be undertaken during issue-framing. Steps 5-9 may be deferred until (or repeated during) the design stage.

  1. Generate a clear conceptual model of the issue that is being assessed, showing the full causal chain from source to health effect, and identifying all the key compartments, agents, pathways and effects that need to be considered.
  2. Examine this model, and identify the main elements for which indicators would be useful. This should include any aspect that users might need to know about, in order for them to make effective decisions.
  3. Translate each of these elements into indicators, by specifying the factors and their properties that would be assessed (e.g. atmospheric particulate concentration, mortality rate).
  4. Review this preliminary list of indicators to ensure that they give a balanced representation of the whole causal chain and all the outcomes of concern. Fill in any gaps with additional indicators, and remove any that are clearly redundant; mark other indicators that might duplicate each other for later reconsideration.
  5. Refine the indicators by specifying in more detail how they will be measured and expressed (e.g. mean annual PM10 concentration, excess mortality due to cardio-vascular disease attributable to air pollution).
  6. Check the indicators against the quality criteria (see below). Reconsider, and if appropriate revise or reject, any that do not meet the criteria. Use this opportunity to select between duplicates where appropriate.
  7. Review the indicators once more against the conceptual model to ensure that there are no gaps nor unnecessary overlaps. If necessary, remove redundant indicators and devise additional ones (then check against the criteria, as above).
  8. If possible, test and validate the final set of indicators either by applying them to existing data (e.g. for past time periods or other study areas) or by simulation (see, for example, the SIENA simulator in the Toolkit section of this Toolbox). Carefully re-examine any indicators that do not vary as expected, or do not seem to make sense. If the indicators appear to be flawed, revise or reject them (and generate substitutes if appropriate); if the prior expectation of the way the system works seems wrong, re-examine the conceptual model and see if it needs amending.
  9. Create a catalogue of the indicators to be used, specifying the exact definition of each indicator, how it will be computed and the format (e.g. measurement units, level of spatial and temporal aggregation) that will be used.
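Step 9 can be sketched as a small data structure holding, for each indicator, its exact definition, how it is computed and the format used. The field names and the example entry are illustrative assumptions; the IEHIAS guidance prescribes the content of the catalogue, not a schema.

```python
# Sketch of an indicator catalogue entry (step 9). Field names and the
# example values are illustrative assumptions, not an IEHIAS schema.
from dataclasses import dataclass

@dataclass
class IndicatorEntry:
    name: str            # short label for the indicator
    definition: str      # exact definition of what is measured
    computation: str     # how the value is derived from the data
    units: str           # measurement units
    spatial_level: str   # level of spatial aggregation
    temporal_level: str  # level of temporal aggregation

catalogue = [
    IndicatorEntry(
        name="PM10_annual_mean",
        definition="Mean annual PM10 concentration in ambient air",
        computation="Arithmetic mean of daily monitored concentrations",
        units="ug/m3",
        spatial_level="municipality",
        temporal_level="calendar year",
    ),
]

for entry in catalogue:
    print(f"{entry.name}: {entry.units}, "
          f"{entry.spatial_level}/{entry.temporal_level}")
```

Recording computation and aggregation level explicitly makes the later quality checks (consistency, scalability) easier to audit.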

Quality criteria

Indicators must be both interpretable and useful.

To be interpretable, each indicator must satisfy the following criteria:

  1. Plausibility
    • It must be based on a well-evidenced (or at least theoretically sound) causal relationship between environment and health effect.
  2. Specificity
    • It must relate to a clear and defined element (or group of elements) within the assessment - i.e. in the source-impact chain.
  3. Sensitivity
    • It must show detectable changes in response to changes in the conditions of interest (e.g. between the assessment scenarios).
  4. Consistency
    • It must be consistent and comparable over space and time – i.e. should be based on data and methods that are available for the whole study area and period.
  5. Robustness
    • It must be unaffected by minor changes in methodology, scale or data.
  6. Representativeness
    • It must be representative of the conditions and area of concern – i.e. should not be biased towards particular situations.
  7. Accuracy
    • It should not be subject to significant uncertainties that make the meaning of the indicator unclear.
  8. Scalability
    • It should be valid, and give consistent results, at different scales and levels of aggregation.

To be useful, each indicator must be:

  1. Relevant
    • It must relate to an explicit and important element of the issue of concern.
  2. Acceptable
    • It must be accepted as meaningful, fair and appropriate by the stakeholders involved in the assessment.
  3. Actionable
    • It must relate to one or more factors that are amenable to influence or control (directly or indirectly) by the users of the assessment.
  4. Additional
    • It should provide additional or supplementary information not given by other indicators in the set.

In addition, the complete set of indicators used in an assessment must give a full and fair picture of the issue that is being assessed.

Collectively, the set of indicators used should therefore be:

  1. Complete
    • They must cover all the main elements of the assessment, in sufficient detail to provide a basis for decision-making.
  2. Balanced
    • They must give a fair and unbiased picture of the results of the assessment which does not unduly emphasise or ignore particular aspects, interests or stakeholder groups.
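The screening in step 6 above can be sketched as a checklist: each criterion is reduced to a yes/no judgement recorded by the assessor, and the function lists the criteria an indicator fails. The criterion names follow the lists above; everything else is an illustrative assumption.

```python
# Sketch of checking one indicator against the quality criteria (step 6).
# Judgements are recorded by the assessor; the structure is an assumption.
INTERPRETABILITY = ["plausibility", "specificity", "sensitivity",
                    "consistency", "robustness", "representativeness",
                    "accuracy", "scalability"]
USEFULNESS = ["relevant", "acceptable", "actionable", "additional"]

def screen(judgements):
    """Return the list of criteria an indicator fails to meet."""
    return [c for c in INTERPRETABILITY + USEFULNESS
            if not judgements.get(c, False)]

# Example: an indicator judged to meet everything except sensitivity.
judgements = {c: True for c in INTERPRETABILITY + USEFULNESS}
judgements["sensitivity"] = False

print(screen(judgements))  # ['sensitivity'] - reconsider, revise or reject
```

The set-level criteria (completeness, balance) cannot be judged one indicator at a time and would need a separate review of the whole catalogue.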

See also

Integrated Environmental Health Impact Assessment System
IEHIAS is a website developed by two large EU-funded projects, Intarese and Heimtsa. The content from the original website was moved to Opasnet.