==Uncertainty==
{{discussion

|Dispute= Should diagnosing and communicating ''uncertainty'' be criteria for good integrated risk assessment?

|Outcome= Diagnosing and communicating uncertainty are included in the properties of good risk assessment.

|Argumentation =

{{defend|#1: |'''Including the diagnosis and communication of uncertainty as criteria for good integrated risk assessment would be in line with the message of W.P.1.5 deliverable 7, the Uncertainty report:''' "Experts must be able to systematically diagnose and communicate the uncertainties characterising their assessments" (p. 26) "in order to provide an impartial source of facts upon which policy decisions can be based." (p. 9)|--[[User:Sjuurd|Sjuurd]] 17:40, 27 June 2007 (EEST)}}

{{defend|#2: |Diagnosing and communicating the uncertainties of an assessment and its parts are important. They are covered in the properties of good risk assessment under quality of content (informativeness and calibration) and applicability (usability and availability). In essence, informativeness and calibration describe how well defined, i.e. how uncertain, the contents of the assessment are. Usability and availability describe how well the information, e.g. about uncertainty, is conveyed to its target.|--[[User:Mikko Pohjola|Mikko]] 18:16, 27 June 2007 (EEST)}}

}}

==Moderator qualifications==

{{discussion

|Dispute= Moderator qualifications need to be high.

|Outcome= Not accepted. Although this may be true in practice, the method does not set quality restrictions for moderators.

|Argumentation =

{{defend_invalid|#1: |If I have understood correctly, only a good general-level understanding is required of a person to be a moderator in a risk assessment. The quality of the assessment is thought to be achieved, basically, through several quality aspects related to variables, variable linkages, and argumentation. I do not see these aspects as adequate SOLELY in themselves to guarantee good risk assessments (RAs) with good outcomes. The moderator has to understand complex causes and effects. How could he or she build up a relevant full-chain RA or a relevant RA network, figure out what variables are needed and what would be useful additions, and identify all the important uncertainties involved and how they could be overcome or dealt with, if crucial knowledge and comprehensive understanding is lacking? Unfortunately, I do not see how this could be overcome with expert judgement or guidelines, because a comprehensive understanding of the whole network has to be built up by the moderator. Someone has to "run the show". I see expert judgement, argumentation, and the variable and assessment "rules" as very helpful means of making it all a lot easier for a moderator who is sufficiently skilled and experienced (a specialist, or nearly so) in the whole issue under assessment. Additionally, a discussion begun early between the risk assessor and the risk manager, so that they really understand each other, is crucial for the final usability of the assessment. I am sure we agree at least on the latter.|--[[User:Anna Karjalainen|Anna Karjalainen]] 14:23, 11 June 2007 (EEST)}}

:{{attack|#2: |A moderator of an assessment, or a core group of assessors taking care of moderating an assessment, may be necessary to carry out an assessment in practice. The role of the moderator(s) should nevertheless be that of a facilitator rather than a manager. The contents of an assessment, from defining the purpose to issue framing and further to carrying out and finalizing the assessment, should be a synthesized result of the contributions of the participants (assuming broad collaboration here), not a production of the moderator(s). Good real-life examples to prove that this works are still lacking, but there are signs indicating that responsibility can be distributed across a diverse group of people even in complex issues, and that this can produce good-quality outputs.|--[[User:Mikko Pohjola|Mikko]] 15:02, 27 June 2007 (EEST)}}

{{defend_invalid|#3: |I see that in different assessments the moderator or a core group of moderators (both should be conceivable) can be either a facilitator or sometimes a "manager". The ideal situation, probably quite attainable, would of course be that the moderator most often acts as a facilitator, but I am also trying to be realistic and see that in certain cases the role of the moderator would be quite strong. So I think we should leave space for both possibilities. I agree that the assessment should be a synthesized result, that is for sure. I DO NOT suggest that a risk assessment should be exclusively a moderator's product. What I do state is that IN ORDER TO GUIDE THE ASSESSMENT PROCESS SUCCESSFULLY THROUGH, i.e. to be either a facilitator or a "manager", the moderator needs to have (or build up) a fairly good understanding of the assessment in question. So the moderator or the core group of moderators needs to be sufficiently specialist or expert; a general-level understanding does not sound like enough to me.|--[[User:Anna Karjalainen|Anna Karjalainen]] 09:57, 28 June 2007 (EEST)}}

:{{attack|#4: |We must be specific here about the role of this argument. The method itself does not say anything about the qualifications of a moderator, and that is intentional. There is no "driver's license" that a moderator needs in order to be allowed to moderate. Anyone can moderate a risk assessment. Of course, as argument #3 says, a poorly skilled moderator is unlikely to produce good assessments. However, he or she should be allowed to try. One practical reason for this is that it would be very difficult to establish widely accepted quality criteria for moderators, and to control for them.|--[[User:Jouni|Jouni]] 15:22, 29 August 2007 (EEST)}}

}}

==Selection of indicators==

{{discussion

|Dispute= Are the properties of good risk assessments intended as selection criteria for indicators or assessment endpoints?

|Outcome= The properties of good risk assessments are intended to be used in evaluating the goodness of the outputs and the process of an assessment.

|Argumentation =

{{attack|#1: |The properties of a good risk assessment are described by trying to categorize the important properties distinctive of good RAs. Why was this approach chosen for this purpose? Is it meant to clarify the goodness-related properties of an RA? I ask this because some of the presented concepts do not open up very easily to me. For instance, informativeness: what is stated about it is very much true and understandable as such. However, in my opinion, it would not always be possible to describe variables as distributions, or even as estimates of distributions, in a way that would actually add to the informativeness. I also have difficulty figuring out what is really meant by the calibration concept. What is correctness, and what is the real value? Nature, or for instance an individual body, is continuously shifting from one state to another, so something that could be thought of as the real value does not actually exist. The term calibration automatically refers, in my mind, to something much narrower and more measurable than can almost ever be reached in the context in which it is now used. The selection criteria for indicators in human health or ecological risk assessments are traditionally viewed as: 1) social relevance, 2) biological and human health relevance, 3) an unambiguous operational definition, 4) accessibility to prediction and measurement, 5) susceptibility to hazardous agents, and 6) unresponsiveness to any impacts irrelevant to the assessment scope. The last two points are related, although they do not automatically exclude each other. Fulfilling these requirements is not at all easy, quite far from it, so why make it sound any more complicated than it already is? The above indicator selection criteria may be viewed as conventional, but they are still very much fit for use. Could or should these issues be brought up from the stated aspects in the document, in your opinion?|--[[User:Anna Karjalainen|Anna Karjalainen]] 15:15, 11 June 2007 (EEST)}}

:{{attack|#2: |The properties of good risk assessments are intended for evaluating the performance of assessments: their outputs and the process. The properties can be applied to the assessment as a whole, but the properties related to effectiveness also apply to parts of the assessment, e.g. individual variables or indicators. The purpose of an assessment always needs to be defined. The purpose then defines what the output, i.e. the assessment product structure, should be like. This in turn defines how the assessment process needs to (or can) be structured. The properties of good risk assessment can then be used in evaluating the performance of these issues, i.e. (1) does the output meet its purpose, and (2) was the process efficient. Within the assessment process, the selection of indicators and endpoints should reflect the purpose, including the intended users and uses of the output. The abovementioned criteria for indicator selection can be used in that.|--[[User:Mikko Pohjola|Mikko]] 15:02, 27 June 2007 (EEST)}}

{{defend|#3: |There is no point in your reply that I could not actually agree with. I merely pointed out certain issues that I find tricky as concepts. One of those, calibration, was also found tricky by David Briggs just recently. I also pointed out certain issues that, in my opinion, should be included when figuring out the properties of good RAs. I know they are actually included in the stated terms, but what harm would there be in clarifying the indicator selection criteria a bit, as stated? I guess it is just up to you and Jouni whether you would like to take these comments into account.|--[[User:Anna Karjalainen|Anna Karjalainen]] 10:24, 28 June 2007 (EEST)}}

:{{comment|#4: |I think the question about indicator selection criteria goes beyond the point here, because they are not discussed in the properties of good risk assessment. The chosen indicators, whichever they may be, can be evaluated using these properties, but the properties do not explicitly address the selection of indicators. I have no objection to considering the abovementioned criteria in relation to the questions about indicator selection, which is considered in more detail on the [[Guidance and methods for indicator selection and specification]] page.|--[[User:Mikko Pohjola|Mikko]] 13:03, 28 June 2007 (EEST)}}

}}

==Relation of relevance to content quality and applicability==

{{discussion

|Dispute= Internal relevance and external relevance both belong to the category: Quality of Content.

|Outcome= Under discussion: the discussion is split.

|Argumentation =

{{attack_invalid|#1: |Internal relevance is defined as the internal coherence of the description, i.e. the correctness and completeness of the variable relation definitions. External relevance is the correctness and completeness of what is included in the assessment in relation to its purpose. The former is objective in nature, because its point of reference is reality, but the latter is subjective and case-specific, and it refers to the contextually and situationally ever-changing needs set by the intended use of the assessment product. Should external relevance be moved to the category applicability? Or should the definitions of the properties perhaps be reconsidered and re-specified more accurately?|--[[User:Mikko Pohjola|Mikko]] 13:31, 24 August 2007 (EEST)}}

:{{defend_invalid|#2: |I agree: external relevance should be moved to applicability.|--[[User:Jouni|Jouni]] 15:08, 29 August 2007 (EEST)}}

:{{attack|#3: |I have come to disagree with myself. After serious discussions on the relations between properties and uncertainty, and on targeting properties to the information structure, it has become apparent that relevance is after all a property related only to quality of content. Quality of content is about the issue itself, i.e. about the relation between the content of the description and the essence of the target of description (a particular piece of reality). In other words, the point of reference is reality. Applicability is about how well the description suits its use purpose, i.e. how it functions within a certain context. The point of reference here is the use purpose (or use, user, ...?). The relevance of some information within a description of reality is defined in relation to reality, not to the use purpose of that description. Therefore, relevance can only be a sub-property of quality of content. To put it briefly: quality of content is about the issue itself, applicability is about how a description of the issue functions. Relevance refers to the former.|--[[User:Mikko Pohjola|Mikko]] 16:31, 15 October 2007 (EEST)}}

{{comment|#4: |This discussion is about two questions: (1) whether ''internal'' relevance belongs to the Quality of Content, and (2) whether ''external'' relevance belongs to the Quality of Content. This makes a clear discussion difficult, because attacking arguments can relate to both properties or to just a single one. I have therefore opened two separate discussions below.|--[[User:Sjuurd II|Sjuurd II]] 22:17, 21 February 2008 (EET)}}

}}

{{discussion

|Dispute= Internal relevance belongs to the category Quality of Content.

|Outcome= Under discussion (to be changed when a conclusion is found)

|Argumentation =

Add argumentation using the attack, defend and comment buttons.

{{defend|#1: |Something is internally relevant if it relates logically to the rest of the content and is not redundant. (Redundant paragraphs, variables, scenarios, etc. decrease the quality of the content, because they distract the reader.)|--[[User:Sjuurd II|Sjuurd II]] 22:48, 21 February 2008 (EET)}}

}}

{{discussion

|Dispute= External relevance belongs to the category Applicability of the Output.

|Outcome= Under discussion (to be changed when a conclusion is found)

|Argumentation =

Add argumentation using the attack, defend and comment buttons.

{{defend|#1: |The assessment is externally relevant if it addresses the right problems in the assessment context. For example, does the assessment address several alternatives to nuclear energy? If assessments address the right problems, decision makers can apply them.|--[[User:Sjuurd II|Sjuurd II]] 22:48, 21 February 2008 (EET)}}

}}

==Usability depends on external relevance and understandability==

{{discussion

|Dispute= Usability depends on external relevance and understandability.

|Outcome= Under discussion (to be changed when a conclusion is found)

|Argumentation =

{{defend|#1.1: |If the assessment output is relevant in the (decision) context, intended users can use it.|--[[User:Sjuurd II|Sjuurd II]] 11:33, 22 February 2008 (EET)}}

{{defend|#1.2: |AND if the assessment output is understandable to the intended users, they can use it.|--[[User:Sjuurd II|Sjuurd II]] 11:33, 22 February 2008 (EET)}}

}}