Issue framing in the IEHIAS
- The text on this page is taken from an equivalent page of the IEHIAS project.
Issue framing represents the first stage in doing an integrated environmental health impact assessment. It is at this stage that we specify clearly what question we are trying to address, and who should be involved in the assessment.
By the end of the issue-framing stage, therefore, we should have defined the scope of the assessment, and the principles on which it will be done. In the process, we should also have resolved any ambiguities in the terms and concepts we might be using, so that everyone involved has a common understanding of what the results of the assessment will mean.
Issue-framing can rarely be done as a single, one-off process. Considerable reiteration is often required to deal with new insights as they emerge. The order in which issue-framing is done also needs to be adapted according to circumstance. Five main steps can, however, be recognised:
- Specifying the question that needs to be addressed;
- Identifying and engaging the key stakeholders who need to be involved;
- Agreeing an overall approach to the assessment and the scenarios that will be used;
- Selecting and constructing the scenarios on which the assessment will be based;
- Defining the indicators that will be used to describe the impacts.
For the sorts of complex (systemic) problems that merit integrated environmental health impact assessment, issue framing can be extremely challenging (see Challenges in issue framing, below). Care and rigour in issue framing are therefore crucial if the assessment is to be valid and useful: failure to give the necessary attention at this stage will almost certainly undermine the value of everything that follows.
Challenges in issue framing
Describing complex issues in a way that captures the interests of all the stakeholders concerned, yet can also form a sound and practicable basis for assessment, is inevitably difficult.
Difficulties arise both from the complexity and ambiguity of the issues that need to be assessed, and the multitude of stakeholders (often with different and conflicting interests) who are concerned. As a consequence, issue framing has to deal with several challenges:
- how to identify all the stakeholders who might have interests in the issue and engage them in the process;
- how to define the conditions (in the form of realistic yet relevant scenarios) under which the issue will be assessed;
- how to set practicable limits to the issue without unfairly excluding some stakeholders’ interests and thereby biasing the assessment;
- how to define and agree on a series of indicators that will adequately and fairly capture and describe the results of the assessment.
All four require that issue framing is done as a reiterative process, with each version of the issue being reviewed and debated to ensure that key elements or stakeholders have not been neglected. It also needs to be an open process, with additional stakeholders being invited to take part when new (and unrepresented) interests emerge.
This reiterative process of issue framing often involves a clear cycle, comprising:
- a phase of ‘complexification’, as new factors and relationships are discovered, and new interests taken into account;
- a phase of simplification, as the issue is pared down by eliminating redundant or irrelevant elements, in order to focus on what matters most.
Defining the question
All assessments are done in response to a ‘question’ or ‘concern’. This initial question is often not phrased specifically for the purpose of assessment, but instead to raise awareness and get attention. Even when an official body commissions an assessment, the question may not be clearly and fully described. In most cases, therefore, the issue of interest will need to be carefully considered and redefined.
The aim of doing so is to make sure:
- that it is unambiguous and clearly understood;
- that it really does reflect the issue about which people are concerned;
- that it can form the basis for a sensible and realistic assessment;
- that the rationale for doing an assessment is clearly recognised.
To achieve this the question needs to be phrased in a clear and structured way. Typically, this involves defining (at least in general terms):
- the causes (e.g. human activities, environmental stressors, agents) and/or types of health impacts of concern;
- the area or population of interest;
- the timescale of the concern.
Who defines the question?
Questions about potential health impacts from the environment may arise in different ways, and from different sources. For example:
- Policy-makers may ask ‘What will be the effects of this policy?’ or ‘How well are our current policies working?’
- Scientific studies may suggest that specific substances or practices pose a risk to health, which merit investigation.
- Long-term monitoring of the environment or of health may show patterns or trends that give cause for concern.
- Practitioners (e.g. doctors), the media or members of the public may believe that they have observed anomalous patterns or trends (e.g. disease clusters or growing rates of illness) that imply some form of environmental threat to health.
Each of these may merit some form of impact assessment, and if the issues are complex or have wide-ranging implications then an integrated assessment may be appropriate. However, the opportunity actually to undertake an assessment may vary greatly. For example, while policy-makers usually have the authority to commission an assessment, and scientific evidence (if validated by repeated studies) is often powerful enough to motivate action, the costs and complexities involved may limit the ability of members of the public to have an issue assessed in any formal way. How the question originates may greatly condition the type of assessment that is done (and the sorts of issues that are addressed).
What types of questions can be assessed?
Questions about possible impacts of environment on health can take many forms – and they can relate to the positive as well as the adverse effects. Nevertheless, not all these questions merit an integrated assessment. Some may be relatively simple and be better addressed by other means (see IEHIA in relation to other forms of assessment). Others may be too general or vague to be capable of assessment.
Integrated environmental health impact assessment is most appropriate, therefore, for relatively complex issues that have the capability to affect large areas and large numbers of people. These are sometimes described as systemic – in that they typically involve a range of different causes, deriving from different environmental, social, economic, political or technological sectors, and have many different health (and other) impacts. Systemic issues are perhaps increasingly common in the modern world, not least because of the increased scale of technology and the ever larger imprint of society on the environment. Obvious examples include climate change, food security and many aspects of environmental pollution.
As this implies, relevant questions are often related to government (or inter-governmental) policies. These are not restricted either to environmental policies, or to policies directly concerned with health. Many other forms of policy (e.g. on energy, transport, agriculture, urban development) also have the capacity to affect health, albeit unintentionally, and thus merit integrated assessment. Moreover, policies are not the only drivers for integrated assessments; other forces for change (such as technological developments, natural environmental changes or hazards, or demographic change) are equally relevant. Any one of these can thus act as the motivation for assessment.
Consulting with stakeholders
A wide range of stakeholders may have interests in integrated assessments. These include not only people (or organisations) with statutory responsibilities for the issue under consideration, but also all those who might be affected either by the issue itself, or by actions taken to address it.
A key step in developing any assessment is to consider who these stakeholders might be, and how they might be involved. In doing so, it is useful to recognise the different roles that the stakeholders might have (see Types of stakeholder, below), both because this may help to identify stakeholders who would otherwise be ignored, and because it can help to work out what their specific concerns might be, and how best to involve them.
It also needs to be recognised that any individual may fulfil more than one of these roles (e.g. as victim and manager) both at the same time, and over time as events play out. ‘Stakeholdership’ is thus not a fixed condition, but emerges out of any issue or event. For this reason defining stakeholders is not always easy, and the full range of stakeholders may only become evident as the issue is fleshed out. Wherever possible, it is therefore important to keep an open mind about who the stakeholders are, and to be prepared to involve additional people or organisations as new aspects of the problem emerge.
Types of stakeholder
Stakeholders may be classified in many different ways. Often, the main distinction tends to be between policy-makers and the public. This, however, ignores the subtle, and often overlapping, roles that stakeholders may play in environmental health issues. The consequence may also be that the variations in perceptions and interest (e.g. within the general public) are not recognised. The table below gives a more detailed breakdown of the different types of stakeholder that might need to be considered in an integrated impact assessment. Examples of the range of stakeholders who can be involved in specific issues and assessments are given via the links below.
|Perpetrators||Individuals or groups who are responsible for generating the events or motivating the changes that ultimately cause the health impacts.|
|Purveyors||Individuals or groups who may deliberately or accidentally act to transmit the effects through the wider population (e.g. carriers of a disease; distributors of contaminated foodstuffs).|
|Victims||People and organisations who will be involuntarily affected by the issue (e.g. subject to the risks); usually members of the public.|
|Beneficiaries||People and organisations who stand to benefit from the issue, or from its management (e.g. commercial organisations who can sell their services as a result).|
|Informants||People and organisations who provide information on the issue and its associated consequences (e.g. scientists, monitoring agencies, risk assessors, media).|
|Managers||People and organisations responsible for managing the issue and/or its consequences (e.g. policy-makers, regulators, planners, emergency services, health services).|
Engaging stakeholders in integrated impact assessments is not easy, especially if consultation is to be active and effective, and particularly where there are a large number of potential stakeholders, from different areas of the world.
A wide range of methods for stakeholder consultation exist (see Methods for stakeholder engagement, below). Which of these is most appropriate will depend on the character of the issue under consideration, the scope and purpose of the assessment, the resources available, and the social and geographic context.
In general, however, more effective consultation is likely to occur when effort is devoted to gaining the trust of the stakeholders, and in engaging those concerned in a sustained dialogue, as many previous studies have shown (see references below). This can rarely be achieved quickly, for trust has to be earned. The most successful approaches to stakeholder consultation are therefore usually those that provide the basis for sustained involvement, and which give stakeholders the opportunity to influence what is done, how it is done and how the outcomes are used.
Methods for stakeholder engagement
A wide range of methods that can be used for stakeholder consultation and engagement are described on the FOR-LEARN website of the Joint Research Centre of the EU, which also gives useful guidelines for selecting and implementing different methods.
Some of the most widely used approaches are summarised below.
|Method||Explanation||Pros and cons|
|Questionnaire||Formalised set of questions sent out (or made available) to respondents - e.g. via the post, telephone or internet.||Limits active involvement of stakeholders to responding to questions and suggestions. Little opportunity for dialogue. Enables access to large samples of individuals.|
|Delphi surveys||Reiterative questionnaire with feedback loop. Participants give initial responses to questions individually. The survey is then reiterated, with participants receiving a summary of responses from the previous round; on the second and subsequent iterations participants can change views and justify their own responses.||The reiterative nature of the survey, and the opportunity to give and receive feedback, provides a basis for indirect dialogue between participants, and thus encourages changes of mind in response to argumentation. Can be time-consuming, and participants may falsely gravitate to a consensus they don't truly agree with.|
|Focus groups||Small, invited groups of individuals, usually selected to represent specific stakeholder groups, who meet once (or a few times) at the behest of the organiser. Discussion is partially structured, with participants also able to debate and respond to open-ended questions.||Enables active and wide-ranging discussion, which allows participants to have a formative role in the assessment. Can be time-consuming, and difficulties arise in ensuring fair representation of stakeholder groups, and balanced debate amongst participants.|
|Citizens' panels||Relatively large, demographically representative panels of citizens, who are surveyed regularly (and may meet) to elicit advice on issues of public concern.||Provides sustained dialogue with a large group of individuals, but difficulties arise in ensuring representativeness and maintaining membership. May also be costly to sustain.|
|Stakeholder partnerships||Long-standing groups of individuals, representing major stakeholder groups, who meet on a relatively frequent basis to discuss and advise on public policy issues.||The permanency of the groups helps build deeper insight and trust, and gives continuity of stakeholder input. Members may become detached from the stakeholders they represent, and inequalities within the group may become permanently established, biasing the process. Costly to sustain.|
Open assessment

Open assessment is a method that attempts to answer the following research question, and to apply the answer in practical assessments: how can scientific information and value judgements be organised for improving societal decision-making in a situation where open participation is allowed?
In practice, the assessment processes are performed using Internet tools (notably Opasnet) along with more traditional tools. Stakeholders and other interested people are able to participate, comment, and edit the content as it develops, from an early phase of the process. Open assessments explicitly include value judgements, thereby extending their application beyond the traditional realm of risk assessment into the risk management arena. They are based, however, on a clear information structure and scientific methodology in order to provide clear rules for dealing with disputes. Value judgements thus go through the same open criticism as scientific claims; the main difference is that scientific claims are based on observations, while value judgements are based on the opinions of individuals. Like other terms in the field of assessment, 'open assessment' is subject to some confusion. It is therefore useful to distinguish clearly between:
- the open assessment methodology;
- the open assessment process - i.e. the actual mechanism of carrying out an open assessment;
- the open assessment product or report - i.e. the end product of the process.

To ensure clarity, open assessment also attempts to apply terms in a very strict way. In the summary below, therefore, links are given to further information on, and definitions of, many of the terms and concepts used.
Open assessment as a methodology
Open assessment is built on several different methods and principles that together make a coherent system for collecting, organising, synthesising, and using information. These methods and principles are briefly summarised here. A more detailed rationale about why exactly these methods are used and needed can be found in the Open assessment method. In addition, each method or principle has a page of its own in Opasnet.
The basic idea of open assessment is to collect information that is needed in a decision-making process. The information is organised as an assessment that predicts the impacts of different decision options on some outcomes of interest. Information is organised to the level of detail that is necessary to achieve the objective of informing decision-makers. An assessment is typically a quantitative model about relevant issues causally affected by the decision and affecting the outcomes. Decisions, outcomes, and other issues are modelled as separate parts of an assessment, called variables. In practice, assessments and variables are web pages in Opasnet, a web-workspace dedicated for making these assessments. Such a web page contains all information (text, numerical values, and software code) needed to describe and actually run that part of an assessment model.
These web pages are also called information objects, because they are the standard way of handling information as chunk-sized pieces in open assessments. Each object (or page) contains information about a particular issue. Each page also has the same, universal structure: a research question (what is the issue?), rationale (what do we know about the issue?), and result (what is our current best answer to the research question?). The descriptions of these issues are built on a web page, and anyone can participate in reading or writing just as in Wikipedia. Notably, the outcome is owned by everyone and therefore the original authors or assessors do not possess any copyrights or rights to prevent further editing.
The structure of information objects can be likened to a fractal: an object with a research question may contain sub-questions that could be treated as separate objects themselves, and a discussion about a topic could be divided into several smaller discussions about sub-topics. For example, there may be a variable called Population of Europe with the result indexed by country. Instead, this information could have been divided into several smaller population variables, one for each country - for example in the form of a variable called Population of Finland. How information is divided or aggregated into variables is a matter of taste and practicability and there are no objective rules. Instead, the rules only state that if there are two overlapping variables, the information in them must be coherent. In theory, there is no limit to how detailed the scope of an information object can be.
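The page structure described above can be sketched as a simple data structure. The following is a minimal illustration in Python; the class, field and variable names are invented for the example (not Opasnet's actual schema), and the population figures are purely illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class InfoObject:
    """One information object: question, rationale, and current best answer."""
    question: str   # what is the issue?
    rationale: str  # what do we know about the issue?
    result: dict    # current best answer, here indexed by country
    children: list = field(default_factory=list)  # fractal sub-questions

# A broad variable indexed by country (hypothetical numbers)...
population_europe = InfoObject(
    question="What is the population of Europe, by country?",
    rationale="National census figures (illustrative values only).",
    result={"Finland": 5_500_000, "Sweden": 10_400_000},
)

# ...could instead be split into narrower variables, provided they stay coherent:
population_finland = InfoObject(
    question="What is the population of Finland?",
    rationale="Same hypothetical census source as the aggregate variable.",
    result={"Finland": 5_500_000},
)

def coherent(a: InfoObject, b: InfoObject) -> bool:
    """Overlapping variables must agree wherever their indices overlap."""
    shared = a.result.keys() & b.result.keys()
    return all(a.result[k] == b.result[k] for k in shared)

print(coherent(population_europe, population_finland))  # True
```

The coherence check is the only hard rule the text imposes: how finely information is divided into variables remains a matter of taste and practicability.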
Trialogue is the term used to describe such Wikipedia-like contributions. The trialogue concept emphasises that, in addition to having a dialogue or discussion, a major part of the communication and learning between the individuals in a group happens via information objects, in this case Opasnet pages. In other words, people not only talk or read about a topic but actually contribute to an information object that represents the shared understanding of the group. Wikipedia is a famous example of the trialogical approach, although Wikipedians do not use this word.
Groups are crucial in open assessment because all research questions are (implicitly) transformed into questions with the format: "What can we as a group know about issue X?" The group considering a particular issue may be explicitly described, but it may also be implicit. In the latter case, it typically means anyone who wants to participate, or alternatively, the whole of humanity.
The use purpose of information is crucial because it is the fuel of assessments. Nothing is done just for fun (although that is a valid motivation as well) but because the information is needed for some practical, explicit use. Of course, other assessments are also done to inform decisions, but open assessments are continuously being evaluated against the use purpose; this is done to guide the assessment work, and the assessment is finished as soon as the use purpose is fulfilled.
Open assessment attempts to be a coherent methodology. Everything in the open assessment methodology, as well as in every open assessment process, is accepted or rejected based on observations and reasoning. However, there are several underlying principles that cannot be verified using observations, called the axioms of open assessment. The six axioms, which are essentially Cartesian in origin, are:
- The reality exists;
- The reality is a continuum without, for example, sudden appearances or disappearances of things without reason;
- I can reason;
- I can observe and use my observations and reasoning to learn about the reality;
- Individuals (like me) can communicate and share information about the reality;
- Not everyone is a systematic liar.
Inference rules are used to decide what to believe. The rules are summarised as follows:
- Anyone can promote a statement about anything (promote = claim that the statement is true).
- A promoted statement is considered valid unless it is invalidated (i.e., convincingly shown not to be true).
- The validity of a statement is always conditional to a particular group (which is or is not convinced).
- A statement always has a field in which it can be applied. By default, a scientific statement applies in the whole universe and a moral statement applies within a group that considers it valid.
- Two conflicting moral statements held by a single group may both be valid only if their fields of application do not overlap.
- There may be uncertainty about whether a statement is true (or whether it should be true, in case of moral statements). This can be quantitatively measured with subjective probabilities.
- There can be other rules than these inference rules for deciding what a group should believe. Rules are also statements and they are validated or invalidated just like any statements.
- If two people within a group promote conflicting statements, the a priori belief is that each statement is equally likely to be true.
- A priori beliefs are updated into a posteriori beliefs based on observations (in case of scientific statements) or opinions (in case of moral statements) and open criticism that is based on shared rules. In practice, this means the use of scientific method. Opinions of each person are given equal weight.
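The equal-prior rule and its updating can be illustrated with a minimal Bayesian calculation in Python. Two conflicting statements start as equally likely, and a shared observation shifts the belief; the likelihood values here are hypothetical:

```python
# Two people promote conflicting statements H and not-H: equal a priori belief.
prior = {"H": 0.5, "not-H": 0.5}

# Hypothetical probability of a shared observation under each statement.
likelihood = {"H": 0.8, "not-H": 0.2}

# Bayes' rule: posterior is prior times likelihood, normalised over statements.
evidence = sum(prior[s] * likelihood[s] for s in prior)
posterior = {s: prior[s] * likelihood[s] / evidence for s in prior}

print(posterior)  # {'H': 0.8, 'not-H': 0.2}
```

Because the priors are equal, the posterior simply follows the likelihoods; with unequal priors the observation would be weighed against the initial belief.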
Tiers of open assessment process describe typical phases of work when an open assessment is performed. Three tiers are recognised as follows:
- Tier I: Definition of the use purpose and scope of an assessment.
- Tier II: Definition of the decision criteria.
- Tier III: Information production.
It is noteworthy that the three tiers closely resemble the first three phases of integrated environmental health impact assessment, but the fourth phase (appraisal) is not a separate tier in open assessment. Instead, appraisal and information use happen at all tiers as a continuous and iterative process. In addition, the tiers also have some similarities to the approach developed by the BRAFO project.
It is clear that within a self-organised group, not all people agree on all scientific or moral statements. The good news is that this is neither expected nor hoped for. There are strong but simple rules to resolve disputes, namely the rules of structured discussion. In straightforward cases, discussions can be informal, but in more complicated or heated situations, the discussion rules are followed:
- Each discussion has one or more statements as a starting point. The validity of the statements is the topic of the discussion.
- A statement is valid unless it is attacked with a valid argument.
- Statements can be defended or attacked with arguments, which are themselves treated as statements of smaller discussions. Thus, a hierarchical structure of defending and attacking arguments is created.
- When the discussion is resolved, the content of all valid statements is incorporated into the information object. All resolutions are temporary, and anyone can reopen a discussion. Actually, a resolution means nothing more than a situation where the currently valid statements are included in the content of the relevant information object.
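The validity rules above lend themselves to a small recursive computation. The following Python sketch is illustrative only (the example statements are invented), but it implements the rule that a statement stands unless at least one attacking argument itself stands:

```python
from dataclasses import dataclass, field

@dataclass
class Statement:
    text: str
    attacks: list = field(default_factory=list)  # arguments attacking this statement

def is_valid(s: Statement) -> bool:
    """A statement is valid unless some attacking argument is itself valid."""
    return not any(is_valid(a) for a in s.attacks)

# A claim, an attack on it, and a rebuttal that attacks the attack:
rebuttal = Statement("The contradicting study was retracted.")
attack = Statement("Exposure data contradict the claim.", attacks=[rebuttal])
claim = Statement("Substance X raises asthma risk.", attacks=[attack])

print(is_valid(attack))  # False: the rebuttal invalidates it
print(is_valid(claim))   # True: its only attack is invalid
```

Because arguments are themselves statements of smaller discussions, the same function evaluates the whole hierarchy of defending and attacking arguments.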
Technical functionalities supporting open assessment
Opasnet is the web-workspace for making open assessments. The user interface is a wiki and it is in many respects similar to Wikipedia, although it also has enhanced functionalities for making assessments. One of the key ideas is that all work needed in an assessment can be performed using this single interface. Everything required to undertake and participate in an assessment is therefore provided, whether it be information collection, numerical modelling, discussions, statistical analyses on original data, publishing original research results, peer review, organising and distributing tasks within a group, or dissemination of results to decision-makers.
In practice, Opasnet is an overall name for many other functionalities than the wiki, but because the wiki is the interface for users, Opasnet is often used as a synonym for the Opasnet wiki. Other major functionalities exist as well, as outlined below. The main article about this topic is Opasnet structure.
Most variables have numerical values as their results. Often these are uncertain and they are expressed as probability distributions. A web page is an impractical place to store and handle this kind of information. For this purpose, a database called Opasnet Base is used. This provides a very flexible storage platform, and almost any results that can be expressed as two-dimensional tables can be stored in Opasnet Base. Results of a variable can be retrieved from the respective Opasnet page. Opasnet can be used to upload new results into the database. Further, if one variable (B) is causally dependent on variable (A), the result of A can be automatically retrieved from Opasnet Base and used in a formula for calculating B.
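As an illustration of this dependency mechanism (the variable names, numbers and the dictionary 'database' below are hypothetical stand-ins, not the actual Opasnet Base interface), results stored as Monte Carlo samples can be passed from an upstream variable into a downstream formula:

```python
import random

random.seed(1)
store = {}  # hypothetical stand-in for the Opasnet Base database

# Variable A: uncertain annual emissions, stored as 1000 samples (invented units).
store["A"] = [random.gauss(100.0, 10.0) for _ in range(1000)]

# Variable B is causally dependent on A: its formula retrieves A's result.
def formula_B(db):
    a = db["A"]                    # automatic retrieval of the upstream result
    return [0.002 * x for x in a]  # hypothetical exposure per unit emission

store["B"] = formula_B(store)
print(sum(store["B"]) / len(store["B"]))  # mean exposure, close to 0.2
```

Keeping results as two-dimensional sample tables is what lets any downstream variable reuse them without knowing how they were produced.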
Because Opasnet Base contains samples of the distributions of variables, it is actually a very large Bayesian belief network, which can be used for assessment-level analyses, and for conditioning and optimising different decision options. In addition to finding optimal decision options, Opasnet Base can be used to assess the value of further information for a particular decision. This statistical method is called Value of information.
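A minimal sketch of the underlying idea, with invented options and payoffs: the expected value of perfect information (EVPI), a standard value-of-information measure, compares the best option chosen on average beforehand with the average payoff of choosing the best option in each sampled scenario:

```python
# Hypothetical net benefits of two decision options in three equally likely scenarios.
samples = {
    "ban":    [4.0, 1.0, 3.0],
    "no_ban": [2.0, 5.0, 1.0],
}
n = 3  # number of scenarios

# Best option chosen once, on expected value (decision under uncertainty).
best_on_average = max(sum(v) / n for v in samples.values())

# Average payoff if the true scenario were known before deciding.
average_of_best = sum(max(samples[o][i] for o in samples) for i in range(n)) / n

evpi = average_of_best - best_on_average
print(round(evpi, 2))  # 1.33: the most one should pay for further information
```

A positive EVPI signals that further information could change the preferred decision; a zero EVPI means one option dominates and more data would not alter the choice.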
Opasnet contains modelling functionalities for numerical models. It is an object-oriented functionality based on the R statistical software and the results in Opasnet Base. Each information object (typically a variable) contains a formula which has detailed instructions about how its result should be computed, often based on results of upstream variables in a model.
Meta level functionalities
In addition to work and discussions about the actual topics related to real-world decision-making, there is also a meta level in Opasnet. Meta level means that there are discussions and work about the contents of Opasnet. The most obvious expression of this are the rating bars in the top right corner of many Opasnet pages. Peer rating means that users are requested to evaluate the scientific quality and usefulness of that page on a scale from 0 to 100. This information can then be used by the assessors to evaluate which parts of an assessment require more work, or by readers who want to know whether the presented estimates are reliable for their own purpose.
The users are also allowed to make peer reviews of pages. These are similar to peer reviews in scientific journals, with written evaluations of the scientific quality of content. Another form of written evaluation are acknowledgements, which are a description about who has contributed what to the page, and what fraction of the merit should be given to which contributor.
Estimates of scientific quality, peer reviews and acknowledgements can be used systematically to calculate how much each contributor has done in Opasnet, though these practices are not yet well developed: contribution scores are so far the only systematic method for even roughly estimating contributions quantitatively.
Respect theory is a method for estimating the value of freely usable information objects to a group. This method is under development, and hopefully it will provide practical guidance for distributing merit among contributors in Opasnet.
Why does open assessment work?
Many people are (initially at least) sceptical about the effectiveness of open assessment. In part, this is because the approach is new and has not yet been widely applied and validated. Most examples of its use are for demonstration purposes. A number of reasons can nevertheless be advanced supporting its use:
- In all assessments, there is a lack of resources, and this limits the quality of the outcome. With important (and controversial) topics, opening up an assessment to anyone will bring new resources to the assessment in the form of interested volunteers.
- The rules of open assessment make it feasible to organise the increased amount of new data (which may at some points be of low quality) into high-quality syntheses within the limits of new resources.
- Participants are relaxed with the idea of freely sharing important information - a prerequisite of an effective open assessment - because open assessments are motivated by the shared hope for societal improvements and not by monetary profit. This is unlike in many other areas where information monopolies and copyrights are promoted as means to gain competitive advantage in a market, but as a side effect result in information barriers.
- Problems due to too narrow an initial scoping of the issue are reduced by having more eyes look at the topic throughout the assessment process.
- It becomes easy to apply systematically the basic principles of the scientific method, namely rationale, observations and, especially, open criticism.
- Any information organised for any previous assessment is readily available for a new assessment on an analogous topic. The work time for data collection and the calendar time from data collection to utilisation are also reduced, thus increasing efficiency.
- All information is organised in a standard format which makes it possible to develop powerful standardised methods for data mining and manipulation and consistency checks.
- It is technically easy to prevent malevolent attacks against the content of an assessment (on a topic page in Opasnet wiki) without restricting the discussion about, or improvement of, the content (on a related discussion page); the resolutions from the discussions are simply updated to the actual content on the topic page by a trusted moderator.
These points support the contention that open assessment (or approaches adopting similar principles) will take over a major part of information production motivated by societal needs and the improvement of societal decision-making. The strength of this argument is already being shown by social interaction initiatives, such as Wikipedia and Facebook. However, an economic rationale also exists: open assessment is cheaper to perform and easier to utilise, and can produce higher-quality outputs than current alternative methods of producing societally important information.
References

Briggs, D.J. and Stern, R. 2007. Risk response to environmental hazards to health – towards an ecological approach. Journal of Risk Research 10, 593-622.