Guidance and methods for indicator selection and specification
This is a guidance document for selecting and specifying indicators as a part of applying the Intarese method. It explains what is meant by the term '''indicator''' in the context of Intarese, what indicators are needed for, and how indicators can be used in integrated risk assessments. Some graphs can be found in [[:media:Indicator guidance.ANA|an Analytica file]].


KTL/MNP (E. Kunseler, M. Pohjola, J. Tuomisto, L. van Bree)


==Definitions==


A risk assessment consists of '''variables''', i.e. objects describing particular properties of the world. Variables include physical properties (what is?) and value judgements (what should be?). An '''indicator''' is a variable of special interest, which will be reported in the assessment report.
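To make the distinction concrete, the sketch below shows one possible way to represent variables and indicators in code (a minimal Python illustration; the class and field names are our own and not part of the Intarese method):

<pre>
from dataclasses import dataclass, field

@dataclass
class Variable:
    """An object describing a particular property of the world."""
    name: str
    scope: str                                   # the part of reality the variable covers
    parents: list = field(default_factory=list)  # upstream variables (causal inputs)
    is_indicator: bool = False                   # indicators are variables of special interest

# An indicator is simply a variable flagged for reporting:
exposure = Variable("Mean population exposure to PM2.5",
                    "Urban population, annual mean")
dalys = Variable("Disability-adjusted life years", "Total health impact",
                 parents=[exposure], is_indicator=True)
</pre>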


The term indicator is a common concept that can be interpreted and used in at least two different meanings during the environment and health risk assessment process. In Intarese, the word indicator only means outcome indicators. The word ''proxy'' is used for something that serves as a surrogate for the real quantity when the real quantity is not available. Figure 1 below clarifies the two applications: "indicators" as proxies and indicators as outcome metrics.


[[Image:Indicators and proxies.PNG]]

Figure 1: "Indicators" as proxies and indicators as outcome metrics.


During the risk assessment, ''proxy'' measures are inputs to the assessment process, such as central-site measurements used as surrogates of personal exposure. Outcome indicators are used to present and report the steps in the assessment, such as DALYs or mean population exposures. Stakeholders, i.e. policy-makers and lay people, are particularly interested in the outcome indicators, which can be applied for different aims:
*Policy development or priority-setting
*Health impact assessment and monitoring
*Policy implementation or economic consequence assessment
*Public information, awareness raising or risk perception (WHO, 2002)


Indicators are thus key units of information serving monitoring and policy evaluation, with the overall purpose of effective communication with a wide range of users.


===Need for guidance===


At this project stage (18 months - May 2007), the assessment methodology is ready for application in policy assessment cases. Case studies have been selected and protocols for case study implementation are in progress. The issue frameworks have been formulated, and the consequent full-chain frameworks of the policy assessment case studies are being developed. This guidance document gives detailed information about the development of the variables and indicators in the full-chain framework. The guidance emphasizes causality in the full-chain approach and the applicability of the indicators in relation to policy needs.

The purpose of this guidance document is to provide a practical and focused methodology for the selection and specification of indicators. The main emphasis is on the formulation of criteria for indicator selection and specification, and on the design of computational methods and user-friendly displays. In the subsequent sections, we first clarify the term indicator and the different approaches to the process of indicator development.


==From issue framing to indicator selection==


The Intarese approach to risk assessment emphasizes the creation of causal linkages between the determinants and consequences in the integrated assessment process. The full-chain approach consists of interconnected variables, which are its basic building blocks.
The full-chain variables cover the source-impact chain, which is based on different frameworks developed from the pressure-state-response (PSR) concept originally proposed by the US-EPA (e.g. DPSIR, DPSEEA) and the source-receptor models widely used to represent the fate of pollutants in the environment. [[Scoping for policy assessments (Intarese method)|(Briggs D, 16.05.06)]]


<center>
[[image:Causal links defined with variables.PNG]]

Figure 2: Indicators and their causal relations are specified simultaneously. Circles represent variables; squares represent indicators.
</center>




We start our indicator guidance from issue framing: defining a set of variables (circles) and their causal relations (connecting arrows), representing the outline of the assessment framework. The result is a causal chain description of the phenomena to be assessed, on a relatively high level of abstraction, representing the determinants across the chain as variables. At this stage the variable description contains a name and a scope. The causal relations between variables are roughly defined as well.
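As a purely illustrative sketch of this bookkeeping (the network representation and variable names below are hypothetical, not an Intarese tool), a causal network can be stored as a mapping from each variable to its downstream variables, and every added "help" variable can be checked for a causal path to at least one chosen endpoint:

<pre>
# Hypothetical sketch: the causal network as a mapping variable -> downstream variables.
network = {
    "Traffic volume": ["PM2.5 emissions"],
    "PM2.5 emissions": ["PM2.5 concentration"],
    "PM2.5 concentration": ["Population exposure"],
    "Population exposure": ["Health effects"],
    "Health effects": [],
}
endpoints = {"Health effects"}   # the chosen endpoints of the assessment

def reaches_endpoint(var, network, endpoints, seen=None):
    """True if there is a causal path from var to at least one chosen endpoint."""
    if var in endpoints:
        return True
    seen = seen or set()
    seen.add(var)
    return any(reaches_endpoint(child, network, endpoints, seen)
               for child in network.get(var, []) if child not in seen)

# Every variable in the description must be causally linked to an endpoint:
orphans = [v for v in network if not reaches_endpoint(v, network, endpoints)]
assert not orphans, "Variables without a causal path to an endpoint: %s" % orphans
</pre>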


As a result of issue framing, the main nodes and links in the source-impact chain stand out. Key variables can be selected as indicators and further specified. The selected indicators should be internally coherent, i.e. they should have clear and definable relationships within the context of this chain. The idea behind indicator selection, specification and use is to highlight the most important and/or significant parts of the source-impact chain that has been or is to be assessed. Indicator selection provides the bridge between the issue framework and the assessment process.


===Identification of causality in the full chain===


Indicator development is a continuous and iterative process. The variable specifications, in particular their outcome values and causal relations to connecting variables or indicators, are iteratively improved throughout the course of the assessment process as knowledge and understanding increase. The causal descriptions of variables influence the estimation of the output value with the aid of data, measurements and models, and vice versa. If necessary, new indicator variables can be added.


Throughout the full chain, the description of causality can be improved as seen necessary by combining overly detailed variables into more general ones, dividing overly general variables into more detailed ones, adding needed variables to the chain, removing variables that turn out to be irrelevant, changing the causal links, etc. For example, the air pollutant variable can be divided into specific pollutant variables, e.g. for NOx, PM2.5/PM10, BS etc. Each of these pollutants has a different relation to the consequent health effect / impact variables.
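The dividing operation can be sketched as a simple network transformation (a hypothetical helper, not a prescribed Intarese function): the too-general node is removed, its upstream parents are re-wired to the new pollutant-specific nodes, and each new node inherits the old downstream links so that it can later be given its own exposure-response relation:

<pre>
def split_variable(network, general, specific):
    """Divide a too-general variable into more detailed ones, re-wiring
    the causal links of every upstream parent to the new variables."""
    downstream_links = network.pop(general)
    for links in network.values():
        if general in links:
            links.remove(general)
            links.extend(specific)           # parents now point to each detailed variable
    for s in specific:
        network[s] = list(downstream_links)  # detailed variables keep the old links
    return network

network = {"Emissions": ["Air pollutants"],
           "Air pollutants": ["Health effects"],
           "Health effects": []}
split_variable(network, "Air pollutants", ["NOx", "PM2.5", "Black smoke"])
# "Emissions" now points to NOx, PM2.5 and Black smoke, each of which links
# to "Health effects" and can be given its own exposure-response relation.
</pre>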


There are two critical tests: the ''clairvoyant test'' for variables, and the ''causality test'' for variable relations. All parts of the full-chain description should pass both.


Variables should describe real-world entities, preferably things that pass a clairvoyant test. The '''clairvoyant test''' determines the clarity of a variable. When a question is stated in such a precise way that a putative clairvoyant could give an exact and unambiguous answer, the question is said to pass the test.

Variables should be related to each other with causal links. The '''causality test''' determines the nature of the relation between two variables. If you alter the value of a variable (all else being equal), a variable downstream (i.e., one that an arrow points to from the first variable) should change. If such a change is not reasonable, the causal link does not exist. It may also be different than originally stated, for example pointing in the other direction.
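The causality test lends itself to a simple computational check. The sketch below is a hypothetical illustration assuming the downstream variable has a numeric formula over its upstream values: perturb one upstream variable, all else being equal, and see whether the downstream result changes:

<pre>
def causality_test(upstream_values, downstream_formula, var, delta=1.0):
    """Perturb one upstream variable, all else being equal, and report
    whether the downstream result changes."""
    base = downstream_formula(upstream_values)
    perturbed = dict(upstream_values)
    perturbed[var] += delta
    return downstream_formula(perturbed) != base

# Example: exposure depends on concentration but not on an unrelated variable.
exposure = lambda v: v["concentration"] * v["time_fraction"]
values = {"concentration": 10.0, "time_fraction": 0.6, "unrelated": 42.0}
assert causality_test(values, exposure, "concentration")   # causal link exists
assert not causality_test(values, exposure, "unrelated")   # no causal link
</pre>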
==Indicator selection==
When presented with a list of indicators, it is often not clear why specific indicators were chosen. Individual interests and organizational priorities influence indicator selection. Familiar measures are more likely to be identified, and there is a natural tendency towards indicators that are consistent with expectations.

A comprehensive selection process is important to document why individual indicators are selected. The process should be objective, and the choice of indicators appropriate and useful. Selection criteria help define the relevant dimensions of an indicator and assess how well the indicator actually measures the phenomenon of interest. The set of selection criteria should be relevant to the project.
 
 
===Different kinds of indicators===
 
There are many ways to classify indicators. Here, we present a summary classification that draws on several lines of thinking: WHO, the European Environment Agency (EEA), RIVM, and others have developed their own ways to look at indicators.
 
'''Topic-based classification'''
 
The topic describes the scientific discipline to which the indicator content mostly belongs. The RIVM classification mostly follows this thinking. Note, however, that the ''policy deficit'' indicators of RIVM do not belong here but to the reference-based classification below.
* Health indicators.
* Economic indicators.
* Perception indicators. (This includes equity and other ethical issues.)
* Ecological indicators. (We do not cover these in Intarese, as they are out of the scope of the project.)
 
 
'''Causality-based classification'''
 
Many existing indicators are independent pieces of information without the context of a causal chain (or full-chain approach); WHO indicators are well-known examples of this. In Intarese, the full-chain approach is an integral part of the method, and therefore all Intarese indicators should reflect causal connections to relevant variables. This issue is addressed together with the next classification.
 
 
'''Reference-based classification'''
 
The reference is something that the indicator is compared with, and this comparison is the actual essence of the indicator. This classification is independent of the topic-based classification.
 
{| {{prettytable}}
!Type of indicator
!Reference point
!Examples of use
!Addresses causality?
|-----
|Descriptive indicators
|Not explicitly compared to anything.
|EEA type A indicators. Burden of disease. WHO indicators are also often this type.
|No
|-----
|Performance indicators
|Some predefined policy target
|EEA type B indicators. Policy deficit indicators of RIVM.
|Sometimes
|-----
|Efficiency indicators
|Compared with the activity or service that causes the impact.
|EEA type C indicators. Cost-effectiveness analysis.
|Yes
|-----
|Total outcome indicators
|?
|EEA type D indicators. Green gross domestic product. Index of Sustainable Economic Welfare (ISEW).
|?
|-----
|Scenario indicators
|Some predefined policy action, usually a policy scenario compared with business as usual.
|Cost-benefit analysis.
|Yes
|}
 
 
====WHO indicators====
WHO is developing an Environment and Health Information System (EHIS). EHIS is regarded as a valuable tool for monitoring and evaluating the implementation and modification of policies by providing systematically collected and analysed evidence. The objective is to develop a harmonized and evidence-based information system that will serve policy-makers at European, national and local levels and be accessible by the general public as well. Crucial to developing a pan-European EHIS is a set of policy-relevant indicators to measure the situation and changes over time. For this purpose, indicators must monitor the linkages between environmental changes and human health effects and be based on scientific evidence. The DPSEEA (driving forces - pressures - state - exposure - effects - action) model was adopted to specify the policy-relevant indicators along the source - impact chain. (WHO EH indicators for Europe - A pilot indicator-based report, 2004)
 
[[Image:DPSEEA approach.jpg]]
 
 
In terms of policy relevance, exposure-side indicators and health-side indicators are of highest interest. These types cover the forward-looking indicators of exposure (i.e. those that presage, and need to be linked to, a potential health effect) and the backward-looking indicators of outcome or effect (i.e. those that imply, and need to be attributed to, an exposure or source). Exposure-side indicators are clearly relevant for policy, since they often provide the first indications of the potential for health risk, and the first evidence of the effects of intervention (since many policies are focused on the upper links in the source-impact chain). To be meaningful in the context of health risks, however, they must relate to factors with definable (or at least strongly plausible) links to health outcomes. [[Scoping for policy assessments (Intarese method)|(Briggs D, 16.5.06)]] Dose-response indicators are necessary for clarifying the exposure-to-health linkage. Moreover, exposure-side indicators should be linked back to their emissions and sources. Exposures can only be reduced when their sources or emission activities are known; therefore source or emission indicators should be introduced as a third type of policy-relevant indicators.

Health-side (or impact) indicators represent the consequences of exposures in terms of health effects (e.g. mortality, morbidity, DALYs) or their further societal impacts (e.g. economic costs, quality of life). Again, to be meaningful in the context of the full-chain approach, they need to have an explicit link back to causal environmental exposures and risk factors. [[Scoping for policy assessments (Intarese method)|(Briggs D, 16.5.06)]]
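As an aside on the health metrics mentioned above: the DALY combines mortality and morbidity as DALY = YLL + YLD. Below is a minimal sketch of this standard computation, without the age weighting and discounting used in full burden-of-disease studies, and with made-up numbers:

<pre>
def dalys(yll, cases, disability_weight, duration_years):
    """DALY = YLL (years of life lost) + YLD (years lived with disability),
    where YLD = cases * disability weight * average duration."""
    yld = cases * disability_weight * duration_years
    return yll + yld

# Hypothetical figures for one exposure-attributable health outcome:
print(dalys(yll=1200.0, cases=5000, disability_weight=0.02, duration_years=1.0))
# -> 1300.0 DALYs
</pre>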
 
A fifth type of indicator is the action or policy indicator. WHO developed this outcome indicator to assess the policy situation with regard to policy existence, implementation and enforcement. Qualitative information is classified in quantitative numbers in order to make country comparisons possible. The importance of these outcome indicators lies in their ability to express priorities for policy action. (WHO, ENHIS project)
 
====EEA indicators====
 
[The text in this section has been taken from EEA Technical Report No 25, ''Environmental indicators: Typology and overview'', EEA, Copenhagen, 1999.]
 
A wide variety of environmental indicators is presently in use. These indicators reflect trends in the state of the environment and monitor the progress made in realising environmental policy targets. As such, environmental indicators have become indispensable to policy-makers. However, given the number and diversity of indicators presently in use, it is becoming more and more difficult for policy-makers to grasp their relevance and meaning, and new sets of environmental indicators are still to be expected. Therefore, some means of structuring and analysing indicators and the related environment/society inter-connections is needed.
 
For the purpose of this Intarese indicator paper, the European Environment Agency (EEA) indicator typology and the DPSIR framework (Driving forces, Pressure, State, Impact, Response) are used.
 
In relation to policy-making, environmental indicators are used for three major purposes:
 
1. to supply information on environmental problems, in order to enable policy-makers to evaluate their seriousness;
 
2. to support policy development and priority setting, by identifying key factors that cause pressure on the environment;
 
3. to monitor the effects of policy responses.
 
In addition, environmental indicators may be used as a powerful tool to raise public awareness of environmental issues. Providing information on driving forces, impacts and policy responses is a common strategy to strengthen public support for policy measures.
 
'''''EEA Typology of Indicators'''''
 
Indicators can be classified into four simple groups which address the following questions:
*‘What is happening to the environment and to humans?’ (Type A or Descriptive indicators)
*‘Does it matter?’ (Type B or Performance indicators)
*‘Are we improving?’ (Type C or Efficiency indicators)
*‘Are we on the whole better off?’ (Type D or Total welfare indicators)
 
'''''Descriptive indicators (Type A – What is happening to the environment and to humans?)'''''
 
Most sets of indicators presently used by nations and international bodies are based on the DPSIR framework or a subset of it. These sets describe the actual situation with regard to the main environmental issues, such as climate change, acidification, toxic contamination and wastes, in relation to the geographical levels at which these issues manifest themselves. With respect to environmental health, these indicators may also be specified as (personal, source-specific) exposure indicators and health effect indicators (number of people affected, YLL, DALY, or QALY).
 
'''''Performance indicators (Type B – Does it matter?)'''''
 
The indicators mentioned above all reflect the situation as it is, without reference to how the situation should be. In contrast, performance indicators compare (f)actual conditions with a specific set of reference conditions. They measure the ‘distance(s)’ between the current environmental situation and the desired situation (target): ‘distance to target’ assessment. Performance indicators are relevant if specific groups or institutions may be held accountable for changes in environmental pressures or states.
 
Most countries and international bodies currently develop performance indicators for monitoring their progress towards environmental targets. These performance indicators may refer to different kinds of reference conditions/values, such as:
 
*national policy targets;
*international policy targets, accepted by governments;
*tentative approximations of sustainability levels.
 
The first and second types of reference conditions, the national policy targets and the internationally agreed targets, rarely reflect sustainability considerations, as they are often compromises reached through (international) negotiation and subject to periodic review and modification. Up to now, only very limited experience exists with so-called sustainability indicators that relate to target levels of environmental quality set from the perspective of sustainable development (Sustainable Reference Values, or SRVs).
 
Performance indicators monitor the effect of policy measures. They indicate whether or not targets will be met, and communicate the need for additional measures.
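A ‘distance to target’ assessment reduces to a simple comparison. The sketch below (with invented figures) expresses the current state as the fraction of the baseline-to-target distance still to be covered:

<pre>
def distance_to_target(current, baseline, target):
    """Fraction of the baseline-to-target distance still to be covered
    (1.0 = no progress since the baseline year, 0.0 = target reached)."""
    return (current - target) / (baseline - target)

# Invented example: emissions were 150 kt in the baseline year, are 120 kt now,
# and the policy target is 100 kt.
print(distance_to_target(current=120.0, baseline=150.0, target=100.0))  # -> 0.4
</pre>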
 
'''''Efficiency indicators (Type C – Are we improving?)'''''
 
It is important to note that some indicators express the relation between separate elements of the causal chain. Most relevant for policy-making are the indicators that relate environmental pressures to human activities. These indicators provide insight into the efficiency of products and processes: efficiency in terms of the resources used and the emissions and waste generated per unit of desired output.
 
The environmental efficiency of a nation may be described in terms of the level of emissions and waste generated per unit of GDP. The energy efficiency of cars may be described as the volume of fuel used per person per mile travelled. Apart from efficiency indicators dealing with one variable only, aggregated efficiency indicators have also been constructed. The best-known aggregated efficiency indicator is the MIPS indicator (not covered in this report). It expresses the Material Intensity Per Service unit and is very useful for comparing the efficiency of the various ways of performing a similar function.
 
Efficiency indicators present information that is important both from the environmental and the economic point of view. ‘Do more with less’ is not only a slogan of environmentalists. It is also a challenge to governments, industries and researchers to develop technologies that radically reduce the level of environmental and economic resources needed for performing societal functions. Since the world population is expected to grow substantially during the next decades, raising environmental efficiency may be the only option for preventing depletion of natural resources and controlling the level of pollution.
 
The relevance of these and other efficiency indicators is that they reflect whether or not society is improving the quality of its products and processes in terms of resources, emissions and waste per unit output.
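An efficiency indicator of this kind is essentially a pressure-to-output ratio. A minimal sketch with invented national figures:

<pre>
def environmental_efficiency(pressure, output):
    """Environmental pressure (e.g. emissions) per unit of desired output (e.g. GDP)."""
    return pressure / output

# Invented figures for two successive years: efficiency improves if the
# ratio falls, even while total output grows.
print(environmental_efficiency(pressure=500.0, output=400.0))  # 1.25 per unit of GDP
print(environmental_efficiency(pressure=480.0, output=440.0))  # ~1.09 per unit of GDP
</pre>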
 
'''''Total welfare indicators (Type D – Are we on the whole better off?)'''''
 
Some measure of total sustainability is needed in order to answer this question, for example a kind of ‘Green GDP’, such as the Index of Sustainable Economic Welfare (ISEW). As these indicators are currently outside the EEA’s work programme, they are not further covered here.
 
'''''MNP example on policy deficit indicators'''''
 
To illustrate policy deficit indicators used in various environmental themes, an example has been taken from the (annual) Environmental Balance report (2006) of the Netherlands Environmental Assessment Agency (MNP). It uses a simple table format with different colours to show time trends and target achievement.
 
 
[[Image:Example.jpg]]

Figure 4: MNP example of policy deficit indicators
 
 
 
 
==Selection criteria==
 
Different sets of selection criteria are in use. In this section, we discuss the WHO and EEA selection process for a core set of indicators, as well as a more objective and systematic model that uses a scoring and weighting framework: indicators are scored on how well they meet the criteria, and the criteria are weighted to reflect their relative importance in meeting project objectives.
 
 
====WHO and EEA selection criteria====
When selecting indicators in the source-impact chain in WHO and EEA projects, the policy context or commonly recognised issues are the main driver for indicator selection. WHO has, for example, developed children's environmental health indicators which measure the implementation of CEHAPE priority goals. (WHO, ENHIS project) Subsidiarity is important as well; information needs to be collected at the most relevant level or for specific policy/management purposes. Detailed indicators for the local level or specific purposes might feed into broader (core) indicators that can be used at a higher policy level or for general public information. Moreover, the indicators need to be associated with a suite of methods to derive them, and with methods and approaches to link the indicators across the causal chain. The incorporation of available information from monitoring and surveillance systems on environmental stressors and health also provides selection criteria for indicator development. (WHO, 2002 and Lebret E & Knol A, 2007)
 
Besides these principal criteria for indicator selection, which can be summarized as (i) relevance to users and acceptability, (ii) consistency, and (iii) measurability, there are several other issues to be taken into consideration. Indicators must be based on known and validated processes or principles, i.e. they must be scientifically credible. Sensitivity and robustness are preconditions for indicators: an indicator must respond to real change while coping with slight variations. Moreover, the indicator must be understandable and user-friendly. (Briggs D, 2006)
 
WHO has selected a set of environmental health indicators based on these criteria and expert judgements, see http://www.euro.who.int/EHindicators. A protocol for pilot testing the indicators was formulated to come to a further selection of core and extended sets of indicators. Proposed indicators were screened by a group of experts in terms of their credibility, basic information on the definition, calculation method, interpretation and potential data sources. During the screening process, a template for a methodology sheet was designed entailing the attributes that are summarized in the Appendix. During the development of the methodology sheets, further consultation was conducted with national and international experts, international agencies, national ministries and agencies, and holders of environmental and health data. It became apparent that for some indicators insufficient data was available to continue development. Indicators were included in the core set once their relevance for policy and the availability of data were confirmed. Indicators which were deemed policy-relevant but for which data is currently not available were included in the extended set of indicators for future development and use.

Following the second selection round, the methodology sheets were further refined. Three major tasks were: (1) development of a specific technical definition; (2) elaboration of a computation method for each indicator; (3) a check of data availability in international sources. The process of development and adjustment of the methodology sheets served as a pre-screening process to determine the need for testing the indicators. Only core indicators were further considered, while adopted indicators and indicators with readily available data from international databases did not require feasibility testing. The other core indicators underwent a screening process to test their feasibility and applicability. (WHO ENHIS final technical report, 2005)
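The screening outcome described above can be summarized in a short sketch (our own simplification; the candidate indicators are invented): policy-relevant indicators with confirmed data form the core set, and policy-relevant indicators without available data form the extended set:

<pre>
# Invented candidate indicators; the two flags mirror the screening questions.
candidates = [
    {"name": "Childhood asthma prevalence", "policy_relevant": True,  "data_available": True},
    {"name": "Indoor radon exposure",       "policy_relevant": True,  "data_available": False},
    {"name": "Obscure trace compound",      "policy_relevant": False, "data_available": True},
]

core = [c["name"] for c in candidates
        if c["policy_relevant"] and c["data_available"]]          # ready for use
extended = [c["name"] for c in candidates
            if c["policy_relevant"] and not c["data_available"]]  # future development
print(core, extended)
</pre>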
 
 
====Alternative selection criteria====
 
[insert figure]
 
The first steps in indicator selection are similar to the WHO and EEA indicator selection process. The purpose and scope of the indicators are defined. Within the group of experts and potential users, the kinds of measures that provide information fitting the defined scope should be discussed. The list of potential indicators should then be scrutinized by means of defined selection criteria. Agreement on criteria definitions that fit the specific needs of the project is essential in order to determine the relative importance of the criteria for meeting the intended purpose.
 
[insert figure]
 
High weights can be assigned to important criteria and lower weights to those that are less critical. For example, the weight of the criterion 'understandable' should be high for a public report, medium for a management report, and may be low for reports used by analysts.
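Such a scoring and weighting framework can be sketched as follows (the criteria, weights and scores are invented for illustration): each indicator's criterion scores are multiplied by the criterion weights and summed, and the candidates are ranked by the weighted total:

<pre>
# Criterion weights reflect their relative importance for this project (invented).
weights = {"understandable": 3, "scientifically_credible": 5, "data_available": 4}

# Each candidate indicator is scored 1-5 on each criterion (invented scores).
scores = {
    "Mean population PM2.5 exposure":
        {"understandable": 3, "scientifically_credible": 5, "data_available": 4},
    "DALYs attributable to air pollution":
        {"understandable": 4, "scientifically_credible": 4, "data_available": 2},
}

def weighted_score(indicator_scores, weights):
    """Sum of criterion scores, each multiplied by the criterion's weight."""
    return sum(weights[criterion] * score
               for criterion, score in indicator_scores.items())

for name in sorted(scores, key=lambda i: weighted_score(scores[i], weights), reverse=True):
    print(name, weighted_score(scores[name], weights))
</pre>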
 
==Indicator specification==
 
In each type, the indicators may be expressed in different ways, depending on:
*Whether they are static (state, condition) or dynamic (process, flux) indicators;
*Whether they are expressed in quantitative (‘objective’) or qualitative (perception) measures;
*Whether or not they relate to a formal (and internal) reference level or target (performance indicators).[[Scoping for policy assessments (Intarese method)|(Briggs D, 16.5.06)]]
 
 
In principle, any variable could be chosen as an indicator and the set of indicators could be composed of any types, but it should cover the steps in the full-chain description. In practice, the generally relevant types of indicators, such as performance indicators, can be somewhat predefined, and even some detailed indicators can be defined in relation to commonly existing purposes and user needs. This kind of generality is also helpful in bringing coherence between assessments.

We suggest that all variables, and thus also all indicators, are specified using a fixed set of attributes. The reasoning behind this is to secure coherence between variable/indicator specifications and to enhance the efficiency of assessment work and the re-usability of its outputs. Moreover, it helps in ensuring that all the terms used in the assessment are consistent and explicit. Descriptions should cover the scope of each variable, the methods/models used to compute or derive the variable, and the data (and associated data sources) on which these are based. Variables in this context may take different forms and serve different roles (often simultaneously): they may represent inputs to models, interim steps in the calculation process (derived variables), and single or combined output values (indicators). We suggest a method for indicator development that includes characteristics from both approaches to indicator development explained in the previous section.
 
Below is a suggested list of variable/indicator attributes. The list has been developed based on several principles including, but not limited to, the following:
 
*Variables are the basic building blocks of risk assessments
*Everything in risk assessments is to be described as variables
*Risk assessments are causal-chain descriptions of (a chosen part of) reality
*All variables in a causal-chain description must be causally linked
**Also the causal links are described within the variable specifications (definition:causality)
***In a diagram representation, an arrow only states the existence of a causal relation; it does not specify the causality
*The risk assessment process proceeds iteratively through specifications and re-specifications of variables (and their causal relations)
 
===Suggested Intarese variable/indicator attributes===
 
#Name
#Scope
#Description
#*Scale
#*Averaging period
#*References
#Unit
#Definition
#*Causality
#*Data
#*Formula
#**Variations and alternatives
#Result
#Discussion
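For illustration, the fixed attribute set could be carried as a plain data structure; the sketch below follows the list above, with hypothetical content for a single indicator:

<pre>
# Hypothetical content for a single indicator, following the attribute list above.
indicator = {
    "name": "Mean population exposure to PM2.5",
    "scope": "Urban population of the study area",
    "description": {"scale": "city", "averaging_period": "1 year", "references": []},
    "unit": "ug/m3",
    "definition": {
        "causality": ["PM2.5 concentration", "Time-activity patterns"],  # upstream variables
        "data": "monitoring network and time-activity survey",
        "formula": "sum over microenvironments of concentration * time fraction",
        "variations_and_alternatives": [],
    },
    "result": None,        # filled in and refined iteratively during the assessment
    "discussion": "",
}
</pre>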
 
 
==Continuous evaluation of indicator selection and specification==
 
During the indicator selection and specification process, it may turn out that the identified indicator cannot properly cover the step in the assessment process on which it should report, as defined in the indicator's purpose and scope. In that case, a different indicator should be chosen from among the assessment variables.
 
 
==Appendix: comparison of different approaches to specifying variables==
 
Table: A comparison of attributes used in Intarese (suggestions), ENHIS indicators, the pyrkilo method, and David's earlier version.
 
{|{{prettytable}}
! Suggested Intarese attributes
! WHO indicator attributes
! Pyrkilo variable attributes
! [[Policy assessment protocols (Intarese method)|David's variable]] attributes
|-----
| Name
| Name
| Name
| Name
|-----
| Scope
| Issue
| Scope
| Detailed definition
|-----
| Description
| Definition and description
| Description (part of)
| -
|-----
| Description (part of)
| Interpretation
| Description (part of)
| -
|-----
| Description / Scale
| Scale
| Scope or Description
| Geographical scale
|-----
| Description / Averaging period
| -
| Scope or description
| Averaging period
|-----
| Description / Variations and alternatives
| -
| Description
| Variations and alternatives
|-----
| Description (part of)
| Linkage to other indicators
| Description (part of)
| - {{reslink|Do non-causal links between variables exist?}}
|-----
| Unit
| Units
| Unit
| Units of measurement
|-----
| Definition / Causality
| Not relevant
| Definition / Causality
| Links to other variables
|-----
| Definition / Data {{reslink|Data sources belong to Data}}
| Data sources or Related data
| Definition / Data
| Data sources, availability and quality
|-----
| Definition / Formula
| Computation
| Definition / Formula
| Computation algorithm/model
|-----
| Result (a very first draft of it) {{reslink|Worked example is the same thing as Result}}
| Not a specific attribute
| Result (a very first draft of it)
| Worked example
|-----
|Discussion
| -
| -
| -
|-----
| Done by using categories
| -
| Done by using categories
| Type
|-----
| Done by links to glossary
| -
| Done by links to glossary
| Terms and concepts
|-----
| Done by argumentation on the Discussion area
| Specification of data needed
| Done by argumentation on the Discussion page
| Data needs
|-----
| The position in a causal diagram justifies the existence
| Justification
| The position in a causal diagram justifies the existence
| -
|-----
| Not relevant
| Policy context
| Not relevant
| Not relevant
|-----
| Not relevant
| Reporting obligations
| Not relevant
| Not relevant
|}
 
[[Category:Needs editing]] [[Category:Intarese general method]]
 
 
==Appendix==
 
===Features of Intarese risk assessment===
 
 
Integrated risk assessment, as applied in the Intarese project, can be defined as the assessment of risks to human health from environmental stressors based on a ‘whole system’ approach. It thus endeavours to take account of all the main factors, links, effects and impacts relating to a defined issue or problem, and is deliberately more inclusive (less reductionist) than most traditional risk assessment procedures. [[Scoping for policy assessments (Intarese method)|(Briggs D, 16.05.06)]]
 
Key characteristics of integrated assessment are:
#It is designed to assess complex policy-related issues and problems, in a more comprehensive and inclusive manner than that usually adopted by traditional risk assessment methods.
#It takes a ‘full-chain’ approach – i.e. it explicitly attempts to define and assess all the important links between source and impact, in order to allow the determinants and consequences of risk to be tracked in either direction through the system (from source to impact, or from impact back to source). 
#It takes account of the additive, interactive and synergistic effects within this chain and uses assessment methods that allow these to be represented in a consistent and coherent way (i.e. without double-counting or exclusion of significant effects).
#It presents results of the assessment as a linked set of policy-relevant ‘outcome indicators’.
#It makes the best possible use of the available data and knowledge, whilst recognising the gaps and uncertainties that exist; it presents information on these uncertainties at all points in the chain. [[Scoping for policy assessments (Intarese method)|(Briggs D, 16.05.06)]]
