Accuracy and uncertainty
- The text on this page is taken from an equivalent page of the IEHIAS project.
The results of assessments clearly need to be accurate if they are to provide reliable and robust information to policy-makers and other stakeholders concerned about environmental health issues. Accuracy, however, is not an absolute, but a matter of degree. All assessments are to some degree uncertain (and thus contain inaccuracies), if only because the phenomena being studied are themselves variable and somewhat unpredictable. What matters is whether the results are accurate enough to serve their purpose. To know this, we need to understand, and be able to evaluate, the uncertainties involved in the assessment and their implications for the consequences of any decisions that might be made.
Identifying how uncertainties might arise, where they come from, and how they affect the results is therefore an important element of any assessment. This is not something that can be left to the end and 'bolted on' to the analysis once the assessment is complete. To do so would not only risk missing important uncertainties that are not evident in the results, but would also mean that the accuracy of the results might be unnecessarily compromised: uncertainties that could have been avoided or reduced by judicious changes in the assessment procedures would have been allowed to develop and persist. Instead, careful examination of potential sources of uncertainty needs to be carried out during the Design stage, in order to:
- identify likely areas of uncertainty;
- map out how these uncertainties might persist and propagate through the assessment process;
- assess the likely scale of the uncertainties involved;
- evaluate the potential overall effect of the various uncertainties, in combination, on the assessment results (a simple Monte Carlo sketch of this kind of combined evaluation is given after this list);
- define and evaluate possible ways of reducing or eliminating the uncertainties.
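One common way of carrying out the fourth step is to propagate the input uncertainties through the assessment model by Monte Carlo simulation. The short Python sketch below illustrates the idea for a deliberately simplified impact calculation; the distributions, parameter values and linear impact function are purely illustrative assumptions, not part of any prescribed IEHIAS method.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_draws = 10_000  # number of Monte Carlo samples

# Assumed (illustrative) uncertain inputs:
# annual mean exposure concentration (ug/m3), lognormal to keep it positive
concentration = rng.lognormal(mean=np.log(12.0), sigma=0.3, size=n_draws)
# excess relative risk per ug/m3, reflecting uncertainty in the exposure-response slope
slope = rng.normal(loc=0.006, scale=0.002, size=n_draws)

baseline_cases = 500.0  # hypothetical baseline annual case count in the study population

# Linear, low-risk approximation of attributable cases (illustrative impact function)
attributable_cases = baseline_cases * slope * concentration

# Summarise the propagated uncertainty in the output
low, median, high = np.percentile(attributable_cases, [2.5, 50, 97.5])
print(f"attributable cases per year: median {median:.1f}, "
      f"95% interval {low:.1f} to {high:.1f}")
```

Summarising the draws as a median and a 95% interval, as in the last lines, gives a direct (if simplified) picture of how the combined input uncertainties translate into uncertainty in the result.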
As with most elements of Design, this is not a one-off procedure. The first four steps need to be reiterated, for example, in order to explore the possible consequences of any changes in the assessment methodology suggested in step 5. And the whole process needs to be repeated at intervals throughout the assessment, to ensure that the initial evaluation is still valid and that unforeseen uncertainties have not arisen as the assessment has progressed.
None of this is easy, for uncertainties take many forms and, by their very nature, arise largely from a lack of prior knowledge - for example, about the state of the system under consideration, how it operates, or how these processes can be simulated and represented. Uncertainty analysis is therefore an attempt to explore and describe the unknown. Users also often have only a shaky appreciation of both the conceptual and statistical characteristics of uncertainty, and so need help in understanding what the results of uncertainty analysis actually mean. For these reasons, special attention needs to be given to ensuring that:
- a clear conceptual framework is used for characterising uncertainties (a simple illustrative sketch follows this list);
- methods of specifying and evaluating the uncertainties are consistent, robust and reliable;
- the results of the uncertainty analysis are reported in an understandable and unambiguous form.
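As a purely illustrative sketch of the first point - a structured way of characterising each source of uncertainty - the following Python fragment records a small uncertainty 'inventory' with a few hypothetical fields (location in the assessment chain, nature, qualitative magnitude, reducibility). The field names and categories are assumptions loosely inspired by the typologies discussed in the references below, not a prescribed IEHIAS format.

```python
from dataclasses import dataclass, asdict
from enum import Enum


class Nature(Enum):
    """Broad nature of the uncertainty (illustrative categories)."""
    VARIABILITY = "inherent variability"
    KNOWLEDGE = "lack of knowledge"


@dataclass
class UncertaintySource:
    """One entry in an assessment's uncertainty inventory (illustrative fields)."""
    location: str     # where in the assessment chain it arises
    description: str  # what is uncertain and why
    nature: Nature    # variability vs. lack of knowledge
    magnitude: str    # qualitative rating, e.g. "low" / "medium" / "high"
    reducible: bool   # could it be reduced by better data or methods?


inventory = [
    UncertaintySource(
        location="exposure model",
        description="spatial interpolation of monitored concentrations",
        nature=Nature.KNOWLEDGE,
        magnitude="medium",
        reducible=True,
    ),
    UncertaintySource(
        location="exposure-response function",
        description="between-study variation in the estimated slope",
        nature=Nature.KNOWLEDGE,
        magnitude="high",
        reducible=False,
    ),
]

for entry in inventory:
    print(asdict(entry))
```

Even a simple inventory of this kind makes it easier to report, in plain language, which uncertainties matter most and which could realistically be reduced.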
Further information on concepts and methods for characterising uncertainty is provided by the links below.
References
- Briggs, D.J., Sabel, C.E. and Lee, K. 2008 Uncertainty in epidemiology and health risk and impact assessment. Environmental Geochemistry and Health 31, 189-203.
- Knol, A.B., Petersen, A.C., van der Sluijs, J.P. and Lebret, E. 2009 Dealing with uncertainty in environmental burden of disease assessment. Environmental Health 8:21. doi:10.1186/1476-069X-8-21
- Rotmans, J. and van Asselt, M.B.A. 2001 Uncertainty in integrated assessment modelling: a labyrinthic path. Integrated Assessment 2, 43-55.
See also
- File:Characterising uncertainty iehias.pdf
- Defining and describing uncertainties
- Framework for uncertainty classification