Talk:Assessment


Rationale for scope?


Fact discussion: .
Opening statement: A rationale is needed also for the Scope.

Closing statement: Under discussion (to be changed when a conclusion is found)

(A closing statement, when resolved, should be updated to the main page.)

Argumentation:

←--1: . The current Rationale is mostly about reasoning why the answer is good with regard to the question. Only the subattribute Stakeholders considers issues related to the goodness of the question. --Mikko Pohjola 14:47, 23 January 2013 (EET) (type: truth; paradigms: science: defence)

----4: . The subattribute Stakeholders is confusing, e.g. because participants and users are listed elsewhere. How about Needs, Impacts, Values, Interests, or (secondary) Aims/Goals? --Mikko Pohjola 14:47, 23 January 2013 (EET) (type: truth; paradigms: science: comment)
----#: . I agree. The table Stakeholders does not really talk about stakeholders but about their values, or the impacts they value. Secondary in front of aims is not good, because the decision maker's aims (the primary aims of the assessment) are also described in the same table. Values is a problematic word because it can be confused with numbers. Needs is not good because the table contains both needs and wants. In conclusion, I'm tempted to suggest Interests, which are then associated with impacts (outcomes of interest). --Jouni 16:02, 23 January 2013 (EET) (type: truth; paradigms: science: comment)

←--2: . There is a need for a place to explicate, e.g., the aims of decision makers (perhaps also the means of evaluating success in reaching them) and the relevant value judgments of different stakeholders, i.e. things that (should) result from developing shared understanding among the relevant participants in the question formulation phase of the assessment. --Mikko Pohjola 14:47, 23 January 2013 (EET) (type: truth; paradigms: science: defence)

←--3: . Rationale could be split in two - Rationale for question and Rationale for answer - or the subattributes of Rationale could simply be complemented/adjusted to support explicit reasoning about the question. --Mikko Pohjola 14:47, 23 January 2013 (EET) (type: truth; paradigms: science: defence)
←--#: . Under Rationale, there could be a subattribute Rationale for scope. I wouldn't split Rationale into two, because having question - answer - rationale everywhere is a nice and clean system. But the scoping (not only the question) really needs a rationale, and a good place for that is under Rationale. --Jouni 16:02, 23 January 2013 (EET) (type: truth; paradigms: science: defence)
----5: . How about the outcomes of developing shared understanding in the interpretation of assessment results? Should they also have a place within the assessment structure or should they belong to some other part within a management system of knowledge-based decision making (e.g. Rationale for decision?) --Mikko Pohjola 14:47, 23 January 2013 (EET) (type: truth; paradigms: science: comment)
----#: . I have tried to develop the modelled assessment structure in such a way that it captures at least most parts of shared understanding: different valuations are explicated in the table Interests (previously Stakeholders); different hypotheses are described in the relevant variables; and different worldviews can be described as scenarios irrespective of their scientific support. In theory, this system should be able to capture and propagate most aspects of shared understanding. Of course, interpretation of results is needed, but that happens under Results and Conclusions, and there is no need for a separate subheading for shared understanding. Actually, it would be worrying if there were a need for one. --Jouni 16:02, 23 January 2013 (EET) (type: truth; paradigms: science: comment)

Clarification of terms


Fact discussion: .
Opening statement: Clarify the meanings of the words "assessment product" and "endpoints"

Closing statement: The need for clarification is accepted.

(A closing statement, when resolved, should be updated to the main page.)

Argumentation:

←--1: . Using the terms assessment product and endpoint is a bit confusing. What do you mean by them? The results of the indicator variables? The results of the assessment (whatever that is)? Health endpoints? Is the assessment product the assessment as a whole (i.e. the net of variables at a certain stage in time) or the results of certain indicators? --Alexandra Kuhn 12:17, 14 May 2008 (EEST) (type: truth; paradigms: science: defence)

----2: . Assessment product is now defined. The word endpoint is no longer used. Result is the attribute for variables and assessments. The result of an assessment is a compilation of the results of all indicators and analyses in the assessment. --Jouni 23:16, 14 May 2008 (EEST) (type: truth; paradigms: science: comment)
----3: . Much better! So is the assessment product the same as the result of the assessment as you define it in your comment? If yes, we should add that definition in brackets to the first item of the article page. --Alexandra Kuhn 07:45, 16 May 2008 (EEST) (type: truth; paradigms: science: comment)


Fact discussion: .
Opening statement: Add causal diagram to the attributes of an assessment

Closing statement: Resolution not yet found.

(A closing statement, when resolved, should be updated to the main page.)

Argumentation:

←--1: . Although the causal diagram can be derived from the variables themselves and as such does not add any new content, it should nevertheless be listed here, because it depicts the assessment, and many people understand a graphic better than a set of abstract descriptions. Also, one can see whether the variables one is creating fit together. I would even say that the normal way to scope an assessment is to start with the causal diagram (after the purpose and boundaries). --Alexandra Kuhn 11:18, 29 March 2008 (EET) (type: truth; paradigms: science: defence)

⇤--2: . Given its variables, the causal diagram itself does not contain additional information. Therefore, the diagram should not be an attribute or subattribute. However, it can be used as a subtitle, so that the Definition divides into the Causal diagram (which contains decision variables, indicators, and other variables) and the other parts of the definition, Analyses and Indices. In addition, it is recommended that the definition contain the causal diagram used in the assessment. It is still not a subattribute, but rather a narrative description. --Jouni 22:42, 31 March 2008 (EEST) (type: truth; paradigms: science: attack)

----3: . Indeed, the causal diagram is only an alternative way of representing the contents of an assessment. --Mikko Pohjola 14:42, 15 May 2008 (EEST) (type: truth; paradigms: science: comment)

----4: . I did not say that it has new information. I SAID it does not contain additional information. BUT still I think it is useful to have it there. --Alexandra Kuhn 17:27, 9 June 2008 (EEST) (type: truth; paradigms: science: comment)


Fact discussion: .
Opening statement: Rename risk assessment

Closing statement: Accepted.

(A closing statement, when resolved, should be updated to the main page.)

Argumentation:
←--1: . The assessment structure is more general than risk assessment alone. Therefore, rename it assessment (or maybe open assessment). --Alexandra Kuhn 11:18, 29 March 2008 (EET), --Jouni 22:42, 31 March 2008 (EEST) (type: truth; paradigms: science: defence)


Appraisal


Fact discussion: .
Opening statement:

Closing statement: Resolution not yet found.

(A closing statement, when resolved, should be updated to the main page.)

Argumentation:
←--1: . If we consider appraisal as the incorporation of value judgments within the assessment, the means by which this is done should be explicated in the definition. --Mikko Pohjola 13:41, 9 February 2009 (EET) (type: truth; paradigms: science: defence)

Scenarios


Fact discussion: .
Opening statement: Scenarios should belong under definition/analyses

Closing statement: Resolution not yet found.

(A closing statement, when resolved, should be updated to the main page.)

Argumentation:

←--1: . Scenarios, meaning intentional deviations from the best estimate for a variable or a set of variables, are a means of analyzing the information within an assessment. Therefore, scenarios should belong under definition, most naturally under analyses, instead of under scope. A description of the base case, i.e. the best estimate, should belong to scope instead. --Mikko Pohjola 09:14, 10 February 2009 (EET) (type: truth; paradigms: science: defence)

----2: . The question still remains: is there some conceptual difference between conditioning and scenarios? --Mikko Pohjola 09:14, 10 February 2009 (EET) (type: truth; paradigms: science: comment)

Participants


Fact discussion: .
Opening statement: Participants should belong under definition

Closing statement: Resolution not yet found.

(A closing statement, when resolved, should be updated to the main page.)

Argumentation:
←--1: . Arranging, organizing, and inviting participation in the assessment is actually a means of attempting to adequately answer the assessment question(s) defined in the scope. Purpose, boundaries, and users (which together could be called, e.g., the assessment problem) reflect the assessment-external needs that are addressed. Participants (and scenarios as well) are rather means of carrying out the assessment so that it adequately reflects the problem. If the scope is intended to be the problem, the definition the hypothetical suggested solution to the problem, and the result the outcome of the solution attempt, I would say that participants (and scenarios as well) belong to the definition. --Mikko Pohjola 08:46, 12 February 2009 (EET) (type: truth; paradigms: science: defence)

Role of models?


Fact discussion: .
Opening statement: Models do not have a natural location within the assessment structure

Closing statement: Resolution not yet found.

(A closing statement, when resolved, should be updated to the main page.)

Argumentation:
←--1: . There is a place for a causal diagram, basically for the purpose of illustrating the definition of the assessment, but this does not cover functional models that one could use to generate inferences of many kinds about various aspects of the assessment and its parts. Playing around with a model could be very enlightening, e.g. in relation to analyzing the assessment, generating scenarios, identifying decision options, exploring the comprehensiveness of the list of included variables, developing the corpus of a new assessment from an old assessment model, etc. These knowledge-creating practices take place all over the assessment. --Mikko Pohjola 03:51, 25 November 2009 (UTC) (type: truth; paradigms: science: defence)