Open assessment
Revision as of 14:49, 30 December 2010
This page is an encyclopedia article.
The page identifier is Op_en2875
Moderator: Jouni
For a brief description of open assessment and the related workspace, see Opasnet.
<section begin=glossary />
- Open assessment (previously also known as pyrkilo) is a method that attempts to answer the following research question and to apply the answer in practical assessments:
- How can scientific information and value judgements be organised for improving societal decision-making in a situation where open participation is allowed?
- Open assessment can also refer to the actual making of such an assessment (precisely: open assessment process), or to the end product of the process (precisely: open assessment product or report). Usually the intended meaning of the term open assessment is clear, but if there is a danger of confusion, the precise term (open assessment method, process, or product) should be used. In practice, assessment processes are performed using Internet tools (notably Opasnet) alongside traditional tools. Stakeholders and other interested people are able to participate, comment, and edit the contents from an early phase of the process onwards. Open assessment is based on a clear information structure and on the scientific method as the ultimate rule for dealing with disputes.
<section end=glossary />
Open assessment as a methodology
Open assessment is built on several different methods and principles that together make a coherent system for collecting, organising, synthesising, and using information. These methods and principles are briefly summarised here. A more detailed rationale about why exactly these methods are used and needed can be found in Open assessment method. In addition, each method or principle has a page of its own in Opasnet.
The key concepts in open assessment that are not typical of other assessment methods are the explicit roles of groups and of the use purpose of information. Groups are crucial because everything is transformed into questions of the format: "What can we as a group know about issue X?" The group considering a particular issue may be explicitly described, but it may also be implicit. In the latter case, it typically means anyone who wants to participate or, alternatively, humankind as a whole. The use purpose of information is crucial because it is the fuel of assessments. Nothing is done just for fun (although that is a valid motivation as well) but because the information is needed for some practical, explicit use. The performance of an assessment is evaluated against how well it serves its use purpose.
Trialogue and information objects are used to operate with information. Information objects are web pages in Opasnet, a web workspace. Each object (or page) contains information about a particular issue. Each page also has the same universal structure: a research question (what is the issue?), a rationale (what do we know about the issue?), and a result (what is our current best answer to the research question?). Trialogue is the concept that, in addition to dialogue or discussion, a major part of the communication between the individuals in a group happens via information objects. In other words, people do not only talk about a topic; they actually write a description of it together. The description is built on a web page, and anyone can participate in reading or writing. Notably, the outcome is owned by everyone, and therefore the original authors do not possess any copyright or right to prevent further editing. Wikipedia is a famous example of the trialogical approach (although Wikipedians do not use this word).
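The universal page structure described above can be sketched as a simple data structure. The class and field names below are hypothetical illustrations, not part of the actual Opasnet software:

```python
from dataclasses import dataclass

# Hypothetical sketch of the universal structure of an information object
# (question / rationale / result); the names are illustrative only.
@dataclass
class InformationObject:
    question: str   # what is the issue?
    rationale: str  # what do we know about the issue?
    result: str     # our current best answer to the research question

page = InformationObject(
    question="What can we as a group know about issue X?",
    rationale="Observations and arguments collected so far.",
    result="Our current best answer.",
)
```

Every page carries all three parts, so a reader can always check what is asked, what is known, and what the current answer is.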
Open assessment attempts to be a coherent methodology system where everything is either directly or indirectly based on axioms of open assessment, or can be motivated by observations or practical experience that a particular method works.
Axioms of open assessment set the foundations about things that cannot be empirically proven. The six axioms are the following: 1) The reality exists. 2) The reality is a continuum without e.g. sudden appearances or disappearances of things without reason. 3) I can reason. 4) I can observe and use my observations and reasoning to learn about the reality. 5) Individuals (like me) can communicate and share information. 6) Not everyone is a systematic liar.
Tiers of open assessment process describe the typical phases of work when an open assessment is performed. The tiers are the following: Tier I: Definition of the use purpose and scope of an assessment. Tier II: Definition of the decision criteria. Tier III: Information production. It is noteworthy that the three tiers closely resemble the first three phases of IEHIA, but the fourth phase (appraisal) is not a separate tier in open assessment. Instead, appraisal and information use happen at all tiers as a continuous and iterative process. In addition, the tiers have some similarities to the BRAFO approach.
Inference rules are used to decide what to believe. The rules are summarised here. 1) Anyone can promote a statement about anything (promote = claim that the statement is true). 2) A promoted statement is considered valid unless it is invalidated (i.e., convincingly shown not to be true). 3) Uncertainty about whether a statement is true is measured with subjective probabilities. 4) The validity of a statement is always conditional on a particular group of people. 5) A group can develop rules other than these inference rules (such as mathematics or the laws of physics) for deciding what to believe. 6) If two people within a group promote conflicting statements, the a priori belief is that each statement is equally likely to be true. 7) A priori beliefs are updated into a posteriori beliefs based on observations and open criticism that is based on shared rules. In practice, this means the use of the scientific method.
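Rules 6 and 7 amount to ordinary Bayesian updating from a uniform prior. The sketch below is purely illustrative: the observation likelihoods are invented numbers, not part of the method itself.

```python
# Two conflicting statements start with equal prior probability (rule 6).
prior_a = 0.5  # P(statement A is true)
prior_b = 0.5  # P(statement B is true)

# An observation that is three times as likely if A is true than if B is
# true (illustrative numbers only).
likelihood_a = 0.6  # P(observation | A true)
likelihood_b = 0.2  # P(observation | B true)

# Bayes' rule: posterior is proportional to prior * likelihood (rule 7).
evidence = prior_a * likelihood_a + prior_b * likelihood_b
posterior_a = prior_a * likelihood_a / evidence
posterior_b = prior_b * likelihood_b / evidence

print(round(posterior_a, 2), round(posterior_b, 2))  # prints: 0.75 0.25
```

After the observation, the group's belief shifts from 50/50 to 75/25 in favour of statement A; further observations and open criticism would shift it again.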
Statements about moral norms are developed using the morality game.
Other methods and tools used in open assessment include the following (each has a page of its own in Opasnet):
- Roles, tasks, and functionalities in Opasnet
- Discussion structure, Discussion method, Discussion
- Perspective levels of decision making
- Opasnet Base
- Respect theory
- Peer rating, Rating bar
- Open participation
- Falsification
- Bayesian inference, Value of information
- Darm
Open assessment is not very difficult as such. Usually the problem is that people don't believe how simple it actually is.
Open assessment typically starts with a need for an informed decision. Someone must decide about something, and the decision should be based on the best information about the topic and the objectives that are pursued. The decision can be anything, and the decision-maker can be anyone. Open assessment does not perform well if the decision is very small and does not warrant much effort (where do we go for lunch today?), or if a large part of the related information is private or cannot be released to open scrutiny for other reasons (which girl should I marry?). However, both of these decisions can also be assessed from a general point of view, and then they are good topics for open assessment (What are good lunch places that serve good food, are not expensive, and do not take too much time to reach from our workplace? Which properties of a relationship or a potential spouse predict a successful and happy marriage, and which predict an unsuccessful and unhappy one?).
As the examples showed, the decision was formulated as a question. This is the first, critical step in open assessment. All information gathering starts with an identification of a question that needs to be answered. If you don't know your question, you actually don't know what you are trying to decide. Therefore it is essential that a question is formulated.
I should mention here that this is the reason why open assessment works for basic science as well. Science asks research questions and attempts to answer them based on scientific information. When a question has been defined, it is treated in exactly the same way, be it a policy question or a science question. Indeed, the difficult problem of the science-policy interface practically disappears, because politicians and scientists are then engaged in the same activity: formulating questions and answering them.
The question must be clear enough that it can be answered unambiguously. It must also be possible to see which answers are better than others. In the case of a decision question, some performance criteria must be defined. This means that we must define what a good decision is; in other words, which outcomes are to be pursued and which are to be avoided. Although "successful and happy marriage" is still rather fuzzy, we can assume that if a large group of people ranked a large number of marriages by successfulness and happiness, the results would show clear patterns. In addition, the research question rules out many objectives, such as the amount of money that the potential father-in-law is going to give to the newlywed couple. Or, to be precise, this money comes into the assessment only through its potential to facilitate a happy marriage; it does not have any intrinsic value in the comparison of options in the assessment.
When the question has been defined, the rest of the work is to answer the question as precisely as possible with the information available. The scientific method is applied here: anyone can suggest potential answers, and other people try to falsify them, i.e. use data and argumentation to show that an answer is wrong or irrelevant given the question.
An answer to the main question may depend on a lot of other issues. These, again, are framed as questions, and their answers are used to answer the original question. Thus, the whole assessment consists of pieces of information, which themselves consist of pairs of questions and answers.
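This hierarchical structure can be sketched as a tree of question-answer pairs, where working through the main question recursively visits its sub-questions. The class and function names below are hypothetical illustrations, reusing the lunch example from above:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: an assessment as a tree of question-answer pairs.
@dataclass
class QA:
    question: str
    answer: str = ""
    subquestions: list = field(default_factory=list)

def collect(node, depth=0):
    """List every question in the assessment, main question first,
    with indentation showing the hierarchy."""
    lines = ["  " * depth + node.question]
    for sub in node.subquestions:
        lines.extend(collect(sub, depth + 1))
    return lines

assessment = QA(
    "Which lunch places near the workplace are good?",
    subquestions=[
        QA("Which places serve good food?"),
        QA("Which places are inexpensive?"),
        QA("Which places can be reached within the lunch break?"),
    ],
)
print("\n".join(collect(assessment)))
```

Answering the main question then means filling in the `answer` fields from the leaves upwards, since each answer may depend on the answers below it.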
Why is open assessment a revolutionary method?
There are several things that are done differently, and arguably better, in open assessment compared with traditional ways of collecting information. These are briefly listed here and then described in more detail.
- Open assessment can be applied to most decision-making situations.
- Open assessment helps to focus on relevant issues.
- Important issues are explicated.
- The expression of values is encouraged.
- It becomes more difficult to promote non-explicated values, i.e. hidden agendas.
- Open assessment focuses on primary issues and thus gives little emphasis on secondary issues.
- Open assessment separates the policy-making (developing and evaluating potential decision options) and the actual decision-making (making of the decision by the authoritative body).
- Open assessment breaks the information monopoly of the authoritative body and motivates participation.
- Open assessment makes the information collection quicker and easier.
- Open assessment is based on the scientific method.
- Open assessment does not prevent the use of any previous methods.
Open assessment can be applied to most decision-making situations.
Open assessment helps to focus on relevant issues.
The assessment work boils down to answering the assessment question. Whatever helps in answering it is useful, and whatever does not is useless. You can always ask a practical question of a person who suggests additional tasks: "How would this task help us in answering the question?" It is a simple question to ask, but a difficult one for a bureaucrat to answer.
Important issues are explicated.
The key part of an assessment product is an answer to the assessment question. The answer can easily be falsified unless it is explicitly defended by relevant arguments found in the assessment. This simple rule forces the assessment participants to explicate all issues that might be relevant for the end users when they evaluate the acceptability of the answer.
The expression of values is encouraged.
Valuations are used to optimise decisions. Implicit values are not used to draw conclusions. Therefore, if you don't like Chinese food, you must express this value in the lunch place assessment; otherwise the value is ignored. Any values can be included, and they will be taken into account.
It becomes more difficult to promote non-explicated values, i.e. hidden agendas.
Open assessment focuses on primary issues and thus gives little emphasis on secondary issues.
Secondary issues include preparation committees, meeting minutes, and so on. After all, these secondary issues are only needed to get answers to the questions. Instead of trying to get into the committee, participating in the discussions, and writing the minutes, you just go to your assessment page and write down your suggestions, and you are done. Of course, you can still participate in any meetings to stimulate your thinking, but many people have experience of meetings that take a lot of working time without giving any stimulation.
Open assessment separates the policy-making (developing and evaluating potential decision options) and the actual decision-making (making of the decision by the authoritative body).
Open assessment opens the policy-making so that anyone can participate and bring in their information.
Open assessment breaks the information monopoly of the authoritative body and motivates participation.
The authoritative body can still make the decision just like before. But this body has no say over issues that will be included in an open assessment. The inclusion or exclusion of issues depends only on relevance, which ultimately depends only on the assessment question.
Open assessment makes the information collection quicker and easier.
Open assessment is based on the scientific method.
Open assessment does not prevent the use of any previous methods.
Open assessment does not preclude any methods. It only determines the end product (a hierarchical thread of questions and answers, starting from the main question of the assessment) and how individual pieces of information are evaluated (based on relevance, logic, and coherence with observations). Of course, some traditional methods will perform poorly in these conditions, but those who insist on using them are free to do so.
See also
- Open Assessors' Network
- Open assessment method
- Opasnet Base
Keywords
References
Related files
<mfanonymousfilelist></mfanonymousfilelist>